| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pydantic/pydantic | pydantic | 10,826 | coerce_numbers_to_str not working when using SkipJsonSchema | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
When using the `str | SkipJsonSchema[None]` type and setting `coerce_numbers_to_str` to `True`, a validation error is raised.
### Example Code
```Python
from pydantic import BaseModel, Field
from pydantic.json_schema import SkipJsonSchema
# Example without using SkipJsonSchema works as expected
class Foo(BaseModel):
    bar: str | None = Field(coerce_numbers_to_str=True)

print(Foo(bar=1))

# Example with SkipJsonSchema raises a validation error
class Foo(BaseModel):
    bar: str | SkipJsonSchema[None] = Field(coerce_numbers_to_str=True)

print(Foo(bar=1))
```
### Python, Pydantic & OS Version
```Text
python -c "import pydantic.version; print(pydantic.version.version_info())"
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /Users/jshaw/.pyenv/versions/3.12.6/envs/env/lib/python3.12/site-packages/pydantic
python version: 3.12.6 (main, Sep 10 2024, 15:45:08) [Clang 15.0.0 (clang-1500.3.9.4)]
platform: macOS-15.0.1-arm64-arm-64bit
related packages: pydantic-extra-types-2.9.0 pydantic-settings-2.5.2 fastapi-0.114.1 typing_extensions-4.12.2
commit: unknown
```
| closed | 2024-11-13T04:11:51Z | 2024-11-13T10:52:59Z | https://github.com/pydantic/pydantic/issues/10826 | [
"bug V2",
"pending"
] | jordantshaw | 1 |
Buuntu/fastapi-react | fastapi | 39 | Get login component working | Should store token in local storage like react-admin | closed | 2020-05-25T15:08:35Z | 2020-05-25T20:17:43Z | https://github.com/Buuntu/fastapi-react/issues/39 | [
"enhancement"
] | Buuntu | 1 |
graphql-python/graphene-django | graphql | 1,515 | Documentation Mismatch for Testing API Calls with Django | **What is the current behavior?**
The documentation for testing API calls with Django on the official website of [Graphene-Django](https://docs.graphene-python.org/projects/django/en/latest/testing/) shows incorrect usage of a parameter named `op_name` in the code examples for unit tests and pytest integration. However, upon inspecting the corresponding documentation in the [GitHub repository](https://github.com/graphql-python/graphene-django/blob/main/docs/testing.rst), the correct parameter, `operation_name`, is used.
**Steps to Reproduce**
1. Visit the [Graphene-Django](https://docs.graphene-python.org/projects/django/en/latest/testing/) documentation website's section on testing API calls with Django.
2. Observe the use of `op_name` in the example code blocks.
3. Compare with the content in the [testing.rst](https://github.com/graphql-python/graphene-django/blob/main/docs/testing.rst) file in the docs folder of the Graphene-Django GitHub repository, where `operation_name` is correctly used.
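Since the fix is just a keyword rename, the mismatch can be illustrated with a stub (the `query` function below merely stands in for `GraphQLTestCase.query`; it is not the real graphene-django API):

```python
def query(gql, operation_name=None):
    # Stand-in for GraphQLTestCase.query: the current signature accepts
    # `operation_name`, not the old `op_name` shown on the website docs.
    return {"operation": operation_name}

# Correct, per testing.rst in the repository:
result = query("query myModel { myModel { id } }", operation_name="myModel")
print(result)  # {'operation': 'myModel'}

# The stale website example fails with a TypeError:
try:
    query("query myModel { myModel { id } }", op_name="myModel")
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```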
**Expected Behavior**
The online documentation should reflect the same parameter name, `operation_name`, as found in the GitHub repository documentation, ensuring consistency and correctness for developers relying on these docs for implementing tests.
**Motivation / Use Case for Changing the Behavior**
Ensuring the documentation is accurate and consistent across all platforms is crucial for developer experience and adoption. Incorrect documentation can lead to confusion and errors in implementation, especially for new users of Graphene-Django. | open | 2024-04-06T05:32:32Z | 2024-07-03T18:44:00Z | https://github.com/graphql-python/graphene-django/issues/1515 | [
"🐛bug"
] | hamza-m-farooqi | 1 |
iterative/dvc | data-science | 9,890 | Add gc -c support for cloud versioned remotes | Whenever I try to remove garbage in a cloud versioned remote (Google Cloud Storage in my case) through `dvc gc -ac`, this message is shown:
```
ERROR: configuration error - config file error: 'gc -c' is unsupported for cloud versioned remotes
```
Is it possible to have this feature implemented? I believe it is essential to keep consistency on the remote side. Thanks in advance! | closed | 2023-08-30T16:58:45Z | 2023-08-30T17:11:44Z | https://github.com/iterative/dvc/issues/9890 | [
"feature request",
"A: gc"
] | manucalop | 1 |
aeon-toolkit/aeon | scikit-learn | 1,886 | [ENH] n_jobs/_n_jobs parameter in classifiers | The use of `n_jobs`/`_n_jobs` in CollectionEstimators is confusing and could be tidied up.
See https://github.com/aeon-toolkit/aeon/blob/main/aeon/base/_base_collection.py#L92 where `_n_jobs` is defined. This is not really enforced in any way currently. | open | 2024-08-01T19:23:41Z | 2025-01-25T09:22:00Z | https://github.com/aeon-toolkit/aeon/issues/1886 | [
"enhancement",
"classification",
"clustering",
"regression",
"distances",
"multithreading"
] | TonyBagnall | 4 |
explosion/spaCy | deep-learning | 12,154 | Incorrect tokenization of dash punctuation in Spanish when not preceded or followed by a space | This is related to this (now closed) issue: https://github.com/explosion/spaCy/issues/3277.
## How to reproduce the behaviour
Per the fixes related to the above issue (https://github.com/explosion/spaCy/pull/3281/files), the en/em dash now tokenizes into a separate token whenever it is preceded or followed by a space. However, when the dash is joined without spaces to a word or punctuation mark on both sides, it seems to be treated as a hyphen and is not split off. In fact, it will even cause the dash and its preceding punctuation mark to be combined into a single token together with the word before that punctuation mark and the word after the dash, as in the example below.
["—", "Pues", "bien,—dijo", "el", "extranjero,—el", "año", "que", "viene", "debe", "Vd.", "hacer", "el", "tiempo", "para", "sus", "viñas", "."]
There are many instances in which this Spanish dash is connected on both sides in dialogue. Almost every Spanish book at Gutenberg.org that has dialogue has examples of this: https://www.gutenberg.org/ebooks/search/?query=l.spanish
The issue is not unique to 3.5; it's an issue with previous versions of spaCy as well.
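Until the tokenizer handles this case, one possible workaround is to pad such dashes with spaces before running the pipeline. A plain-regex sketch, independent of spaCy (the function name is made up):

```python
import re

def pad_dashes(text):
    # Ensure em/en dashes are surrounded by spaces so a downstream
    # tokenizer splits them off instead of treating them as hyphens.
    return re.sub(r"\s*([—–])\s*", r" \1 ", text).strip()

print(pad_dashes("—Pues bien,—dijo el extranjero,—el año que viene"))
# → — Pues bien, — dijo el extranjero, — el año que viene
```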
## Your Environment
- **spaCy version:** 3.5.0
- **Platform:** Windows-10-10.0.22623-SP0
- **Python version:** 3.11.0
- **Pipelines:** en_core_web_sm (3.5.0), es_core_news_sm (3.5.0), zh_core_web_sm (3.5.0)
| closed | 2023-01-23T23:58:47Z | 2023-10-10T09:21:02Z | https://github.com/explosion/spaCy/issues/12154 | [
"lang / es",
"feat / tokenizer"
] | creolio | 2 |
Asabeneh/30-Days-Of-Python | numpy | 88 | Something wrong with the sentence. | 
Double "start"?
Josh-XT/AGiXT | automation | 849 | Error Loading Agent Configuration: 'str' object has no attribute 'get' | ### Description
When I attempt to load a local model, I encounter the error message 'str' object has no attribute 'get'. This error occurs during the process of loading the agent configuration, indicating a problem with accessing a specific attribute within the configuration.
### Steps to Reproduce the Bug
1. Agent Settings
2. Agent Name
3. wizardLM-13B.ggmlv3.q4_0
4. Error: `Error loading agent configuration: 'str' object has no attribute 'get'`
agixt-main-agixt-1 | INFO: Started server process [15]
agixt-main-agixt-1 | INFO: Waiting for application startup.
agixt-main-agixt-1 | INFO: Application startup complete.
agixt-main-agixt-1 | INFO: Started server process [14]
agixt-main-agixt-1 | INFO: Waiting for application startup.
agixt-main-agixt-1 | INFO: Application startup complete.
agixt-main-agixt-1 | INFO: Started server process [12]
agixt-main-agixt-1 | INFO: Waiting for application startup.
agixt-main-agixt-1 | INFO: Application startup complete.
agixt-main-agixt-1 | INFO: Started server process [13]
agixt-main-agixt-1 | INFO: Waiting for application startup.
agixt-main-agixt-1 | INFO: Application startup complete.
agixt-main-agixt-1 | INFO: 172.19.0.3:58908 - "GET /api/agent HTTP/1.1" 200 OK
agixt-main-agixt-1 | INFO: 172.19.0.3:58912 - "GET /api/agent/wizardLM-13B-Uncensored.ggmlv3.q4_0 HTTP/1.1" 500 Internal Server Error
agixt-main-agixt-1 | ERROR: Exception in ASGI application
agixt-main-agixt-1 | Traceback (most recent call last):
agixt-main-agixt-1 | File "/agixt/Providers.py", line 36, in __init__
agixt-main-agixt-1 | self.instance = provider_class(**kwargs)
agixt-main-agixt-1 | File "/agixt/providers/openai.py", line 32, in __init__
agixt-main-agixt-1 | self.stream = True if stream.lower() == "true" else False
agixt-main-agixt-1 | AttributeError: 'bool' object has no attribute 'lower'
agixt-main-agixt-1 |
agixt-main-agixt-1 | The above exception was the direct cause of the following exception:
agixt-main-agixt-1 |
agixt-main-agixt-1 | Traceback (most recent call last):
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 435, in run_asgi
agixt-main-agixt-1 | result = await app( # type: ignore[func-returns-value]
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
agixt-main-agixt-1 | return await self.app(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
agixt-main-agixt-1 | await super().__call__(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
agixt-main-agixt-1 | await self.middleware_stack(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
agixt-main-agixt-1 | raise exc
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
agixt-main-agixt-1 | await self.app(scope, receive, _send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/cors.py", line 83, in __call__
agixt-main-agixt-1 | await self.app(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
agixt-main-agixt-1 | raise exc
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
agixt-main-agixt-1 | await self.app(scope, receive, sender)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
agixt-main-agixt-1 | raise e
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
agixt-main-agixt-1 | await self.app(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
agixt-main-agixt-1 | await route.handle(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
agixt-main-agixt-1 | await self.app(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
agixt-main-agixt-1 | response = await func(request)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 237, in app
agixt-main-agixt-1 | raw_response = await run_endpoint_function(
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
agixt-main-agixt-1 | return await dependant.call(**values)
agixt-main-agixt-1 | File "/agixt/app.py", line 395, in get_agentconfig
agixt-main-agixt-1 | agent_config = Agent(agent_name=agent_name).get_agent_config()
agixt-main-agixt-1 | File "/agixt/fb/Agent.py", line 125, in __init__
agixt-main-agixt-1 | self.PROVIDER = Providers(self.AI_PROVIDER, **self.PROVIDER_SETTINGS)
agixt-main-agixt-1 | File "/agixt/Providers.py", line 42, in __init__
agixt-main-agixt-1 | raise AttributeError(f"module {__name__} has no attribute {name}") from e
agixt-main-agixt-1 | AttributeError: module Providers has no attribute openai
agixt-main-agixt-1 | INFO: 172.19.0.3:43454 - "GET /api/agent HTTP/1.1" 200 OK
agixt-main-agixt-1 | INFO: 172.19.0.3:43470 - "GET /api/agent HTTP/1.1" 200 OK
agixt-main-agixt-1 | INFO: 172.19.0.3:43484 - "GET /api/agent/Llamacpp HTTP/1.1" 500 Internal Server Error
agixt-main-agixt-1 | ERROR: Exception in ASGI application
agixt-main-agixt-1 | Traceback (most recent call last):
agixt-main-agixt-1 | File "/agixt/Providers.py", line 36, in __init__
agixt-main-agixt-1 | self.instance = provider_class(**kwargs)
agixt-main-agixt-1 | File "/agixt/providers/openai.py", line 32, in __init__
agixt-main-agixt-1 | self.stream = True if stream.lower() == "true" else False
agixt-main-agixt-1 | AttributeError: 'bool' object has no attribute 'lower'
agixt-main-agixt-1 |
agixt-main-agixt-1 | The above exception was the direct cause of the following exception:
agixt-main-agixt-1 |
agixt-main-agixt-1 | Traceback (most recent call last):
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 435, in run_asgi
agixt-main-agixt-1 | result = await app( # type: ignore[func-returns-value]
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
agixt-main-agixt-1 | return await self.app(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
agixt-main-agixt-1 | await super().__call__(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
agixt-main-agixt-1 | await self.middleware_stack(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
agixt-main-agixt-1 | raise exc
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
agixt-main-agixt-1 | await self.app(scope, receive, _send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/cors.py", line 83, in __call__
agixt-main-agixt-1 | await self.app(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
agixt-main-agixt-1 | raise exc
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
agixt-main-agixt-1 | await self.app(scope, receive, sender)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
agixt-main-agixt-1 | raise e
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
agixt-main-agixt-1 | await self.app(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
agixt-main-agixt-1 | await route.handle(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
agixt-main-agixt-1 | await self.app(scope, receive, send)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
agixt-main-agixt-1 | response = await func(request)
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 237, in app
agixt-main-agixt-1 | raw_response = await run_endpoint_function(
agixt-main-agixt-1 | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
agixt-main-agixt-1 | return await dependant.call(**values)
agixt-main-agixt-1 | File "/agixt/app.py", line 395, in get_agentconfig
agixt-main-agixt-1 | agent_config = Agent(agent_name=agent_name).get_agent_config()
agixt-main-agixt-1 | File "/agixt/fb/Agent.py", line 125, in __init__
agixt-main-agixt-1 | self.PROVIDER = Providers(self.AI_PROVIDER, **self.PROVIDER_SETTINGS)
agixt-main-agixt-1 | File "/agixt/Providers.py", line 42, in __init__
agixt-main-agixt-1 | raise AttributeError(f"module {__name__} has no attribute {name}") from e
agixt-main-agixt-1 | AttributeError: module Providers has no attribute openai
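The tracebacks above point at the immediate failure: `providers/openai.py` calls `stream.lower()` on a value that is already a `bool`. A defensive coercion along these lines would accept both representations (this is a sketch of the idea, with a made-up helper name, not the project's actual fix):

```python
def to_bool(value):
    """Coerce a setting that may arrive as a bool or as a "true"/"false" string."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() == "true"

print(to_bool(True), to_bool("True"), to_bool("false"))  # True True False
```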
### Expected Behavior
Agent loaded
### Operating System
- [ ] Linux
- [X] Microsoft Windows
- [ ] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [ ] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [X] Using docker compose
- [ ] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | closed | 2023-07-15T21:16:15Z | 2023-07-16T22:17:52Z | https://github.com/Josh-XT/AGiXT/issues/849 | [
"type | report | bug",
"needs triage"
] | sambickeita | 2 |
d2l-ai/d2l-en | machine-learning | 2,020 | [PyTorch] Porting Chapter Recommender Systems | Currently, the only missing chapter in the PyTorch port of the book is [Recommender Systems](https://d2l.ai/chapter_recommender-systems/index.html). Given the number of requests that we have been receiving regarding this section, and rising interest among readers, this is a good time to add PyTorch support.
This Issue serves as a tracker for various sections that need to be added. If you're interested in working on this issue, please comment below with "I'm working on \<section X\>, \<section Y\>, etc..." so that others don't pick the same sections as you do. Please do not pick more than two sections at once. You are free to send a PR and then pick other sections as well.
We need to complete the sections in order. Before picking a section, make sure the previous sections are already implemented (since the latter sections use functions saved in the earlier sections of the chapter), if not you should start with picking that previous section and implement that first. Thanks!!
* [x] [16.1. Overview of Recommender Systems](https://d2l.ai/chapter_recommender-systems/recsys-intro.html#) ([recsys-intro.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/recsys-intro.md))
* [ ] [16.2. The MovieLens Dataset](https://d2l.ai/chapter_recommender-systems/movielens.html) ([movielens.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/movielens.md)) -> #2030
* [ ] [16.3. Matrix Factorization](https://d2l.ai/chapter_recommender-systems/mf.html) ([mf.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/mf.md)) -> #2031
* [ ] [16.4. AutoRec: Rating Prediction with Autoencoders](https://d2l.ai/chapter_recommender-systems/autorec.html) ([autorec.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/autorec.md))
* [ ] [16.5. Personalized Ranking for Recommender Systems](https://d2l.ai/chapter_recommender-systems/ranking.html) ([ranking.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/ranking.md))
* [ ] [16.6. Neural Collaborative Filtering for Personalized Ranking](https://d2l.ai/chapter_recommender-systems/neumf.html) ([neumf.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/neumf.md))
* [ ] [16.7. Sequence-Aware Recommender Systems](https://d2l.ai/chapter_recommender-systems/seqrec.html) ([seqrec.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/seqrec.md))
* [ ] [16.8. Feature-Rich Recommender Systems](https://d2l.ai/chapter_recommender-systems/ctr.html) ([ctr.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/ctr.md))
* [ ] [16.9. Factorization Machines](https://d2l.ai/chapter_recommender-systems/fm.html) ([fm.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/fm.md))
* [ ] [16.10. Deep Factorization Machines](https://d2l.ai/chapter_recommender-systems/deepfm.html) ([deepfm.md](https://github.com/d2l-ai/d2l-en/blob/master/chapter_recommender-systems/deepfm.md))
Please go through [CONTRIBUTING.md](https://github.com/d2l-ai/d2l-en/blob/master/CONTRIBUTING.md) to understand the d2l development process and how to setup your dev environment. Feel free to raise your doubts/questions here or tag someone from the team.
cc @astonzhang
@divo12 @kuanhaohuang I believe you're interested in this too :) | open | 2022-01-24T02:17:14Z | 2023-11-02T15:59:13Z | https://github.com/d2l-ai/d2l-en/issues/2020 | [
"enhancement",
"pytorch-adapt-track"
] | AnirudhDagar | 8 |
neuml/txtai | nlp | 554 | Integrate with LiteLLM | I believe it's a great idea to integrate txtai with LiteLLM, as it already supports integrations with 100+ LLMs.
https://github.com/BerriAI/litellm
LLM as drop-in replacement for gpt-3.5-turbo. Use Azure, OpenAI, Cohere, Anthropic, Ollama, VLLM, Sagemaker, HuggingFace, Replicate (100+ LLMs)
cc @ishaan-jaff | closed | 2023-09-13T08:10:20Z | 2023-12-09T16:59:44Z | https://github.com/neuml/txtai/issues/554 | [] | ranjancse26 | 1 |
google/trax | numpy | 1,466 | History attribute append not found | ### Description
Anyone having issues importing `history`? It cannot find the `append` attribute.
### Environment information
OS: Ubuntu 20.04
$ pip freeze | grep trax
trax==1.3.7
$ pip freeze | grep tensor
mesh-tensorflow==0.1.18
tensorboard==2.4.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.4.1
tensorflow-datasets==4.0.1
tensorflow-estimator==2.4.0
tensorflow-gcs-config==2.4.0
tensorflow-hub==0.11.0
tensorflow-metadata==0.27.0
tensorflow-probability==0.12.1
tensorflow-text==2.4.3
$ pip freeze | grep jax
jax==0.2.9
jaxlib==0.1.60+cuda101
$ python -V
Python 3.6.9
### For bugs: reproduction and error logs
# Steps to reproduce:
```python
from trax.supervised import history
history.append('train', 'metrics/accuracy', 1, 0.04)
```
# Error logs:
`AttributeError: module 'trax.supervised.history' has no attribute 'append'` | closed | 2021-02-16T10:52:45Z | 2021-02-24T21:03:44Z | https://github.com/google/trax/issues/1466 | [] | PizBernina | 2 |
amdegroot/ssd.pytorch | computer-vision | 558 | Convergence before 5000 iterations | Has anyone evaluated the model performance training on a COCO or VOC data set with fewer than 5000 iterations? | closed | 2021-09-24T02:25:01Z | 2021-09-24T04:02:22Z | https://github.com/amdegroot/ssd.pytorch/issues/558 | [] | bspivey | 0 |
shibing624/text2vec | nlp | 101 | text2vec-base-chinese-sentence CPU inference is slow, is this normal? | ### Describe the Question
CPU inference with text2vec-base-chinese-sentence is rather slow. With 16 cores, I can see CPU utilization is maxed out. Is it normal for a batch of 512 to take tens of seconds?
The only change I made was reducing the output to 128 dimensions with PCA.
Alternatively, is there a recommended batch size? | closed | 2023-07-15T10:57:46Z | 2023-08-17T13:14:44Z | https://github.com/shibing624/text2vec/issues/101 | [
"question"
] | Dengyingjie | 2 |
wger-project/wger | django | 1,174 | Create celery tasks for longer running download commands | Once #1001 is closed, we could extract some of the logic used in the management commands to their own contained functions that we could then simply add to celery beat so they are called periodically in the background. This would speed up the startup time and make sure the data is kept in sync. Still need to think what period is sensible to add here and whether it should be configurable via environmental variables or such.
- sync-exercises
- download-exercise-images
- download-exercise-videos
- download-ingredient-images | closed | 2022-11-06T16:00:21Z | 2023-04-08T18:10:27Z | https://github.com/wger-project/wger/issues/1174 | [] | rolandgeider | 0 |
long2ice/fastapi-cache | fastapi | 51 | [Question] Different expire value depending on HTTP response | Is there a possibility to set different expire value for cached responses depending on status code from external API.
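The requested behavior boils down to a status-code-to-TTL lookup. A minimal sketch of that policy (the helper name and concrete TTLs are illustrative; this is not existing fastapi-cache API):

```python
TTL_BY_STATUS = {
    200: 60 * 60 * 24 * 60,  # roughly two months
    400: 60 * 60 * 24,       # one day
}

def ttl_for(status_code, default=0):
    # A default of 0 could mean "don't cache" in the caller.
    return TTL_BY_STATUS.get(status_code, default)

print(ttl_for(200), ttl_for(400), ttl_for(500))  # 5184000 86400 0
```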
E.g. if the status code is `200`, it should be cached for a longer period of time (2 months), and if it's `400`, for 1 day. | closed | 2022-01-17T09:22:39Z | 2022-10-11T08:53:26Z | https://github.com/long2ice/fastapi-cache/issues/51 | [] | iprecali1 | 0 |
dot-agent/nextpy | pydantic | 129 | Set text and image in same order | ```
xt.box(
    xt.center(
        xt.text(
            "YAML-2-PYTHON",
            # margin_top="20px",
            font_size="39px",
            text_align="center",
            color="violet",
        ),
    ),
    xt.link(
        xt.image(
            src="/github.svg",
            height="57px",
            width="57px",
            text_align="center",
            align_items="center",
            justify_content="center",
            flex_direction="column",
            margin_left="98em",
        ),
        is_external=True,
        href="https://github.com/anirudh-hegde",
    ),
    bg="black",
    height="100%",
    align_items="center",
    justify_content="center",
),
```
It renders like this:

How can I set the text and image in the appropriate order?
| closed | 2024-01-17T16:02:28Z | 2024-01-18T12:31:07Z | https://github.com/dot-agent/nextpy/issues/129 | [] | anirudh-hegde | 3 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 468 | scripts/run_clm_sft_with_peft.py 中是在哪里冻结模型参数的呢?代码中没有找到,大佬可以指点一下么? | *提示:将[ ]中填入x,表示打对钩。提问时删除这行。只保留符合的选项。*
### 详细描述问题
scripts/run_clm_sft_with_peft.py 中是在哪里冻结模型参数的呢?代码中没有找到,大佬可以指点一下么?
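For what it's worth, in PEFT-style SFT the freezing usually doesn't live in the training script at all: wrapping the base model (e.g. with `get_peft_model`) marks every non-adapter parameter with `requires_grad=False`. A framework-free illustration of that idea (parameter names are made up):

```python
class Param:
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

params = [Param("base.layers.0.weight"), Param("lora_A.weight"), Param("lora_B.weight")]

# PEFT-style freezing: everything that is not an adapter ("lora") parameter
# is frozen when the model is wrapped, not inside the SFT script itself.
for p in params:
    if "lora" not in p.name:
        p.requires_grad = False

print([p.name for p in params if p.requires_grad])  # ['lora_A.weight', 'lora_B.weight']
```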
### Screenshots or logs
*Please provide a text log or a screenshot of the run so we can better understand the problem.*
### Required checklist (for the first three items, keep only the one you are asking about)
- [ ] **Base model**: LLaMA / Alpaca / LLaMA-Plus / Alpaca-Plus
- [ ] **Operating system**: Windows / MacOS / Linux
- [ ] **Issue category**: Download / Model conversion and merging / Model training and fine-tuning / Model inference (🤗 transformers) / Model quantization and deployment (llama.cpp, text-generation-webui, LlamaChat) / Model quality / Other
- [ ] (Required) Since the related dependencies are updated frequently, make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [ ] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues, and found no similar problem or solution
- [ ] (Required) Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
| closed | 2023-05-31T06:30:30Z | 2023-06-20T22:02:26Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/468 | [
"stale"
] | huruizhi | 6 |
vitalik/django-ninja | django | 449 | [BUG] ModelSchema doesn't run model validators (e.g. email validators) | Validators defined on the model are not run by ModelSchema. | open | 2022-05-18T15:18:11Z | 2022-05-18T15:19:11Z | https://github.com/vitalik/django-ninja/issues/449 | [] | ok-pennywise | 0 |
LAION-AI/Open-Assistant | python | 3,532 | Must restart page | Whenever I submit a prompt, the response doesn't load right away; I have to reload the page to get it to appear. | closed | 2023-06-28T13:41:33Z | 2023-06-29T06:39:50Z | https://github.com/LAION-AI/Open-Assistant/issues/3532 | [] | thebestone007 | 2 |
saulpw/visidata | pandas | 1,671 | [freqtbl] toggling rows selects/unselects an inconsistent number of rows | **Small description**
In a FreqTableSheet where its source sheet has many rows, when rows are toggled, the number of rows that is selected/unselected on the source sheet is inconsistent.
**Expected result**
Every time the toggle is performed, the number of rows selected/unselected on the source sheet should be the same.
**Actual result with screenshot**
The number of rows selected on the source sheet is different every time. Sometimes, the number of unselected rows is negative.
https://asciinema.org/a/ksxBTGy0E1tGh5unX2Gsu0eeR
**Steps to reproduce with sample data and a .vd**
`python -c "n = 200_000; s = 'line\n'; print(f'{s*n}',end='')" |vd`
hit `F` and then repeatedly hit `t` (or `gt` or `gzt` or, in a sheet with more than one kind of cell, `zt`)
**Additional context**
VisiData v2.10.2
Workaround:
Use `s` and `u` to select/unselect rows instead of toggling.
There is a simple fix for the inconsistent behavior: adding vd.sync() to selectRow() and unselectRow().
https://github.com/midichef/visidata/commit/52d8596ed01fed17bca8b5215130a77801f1e3b3
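The vd.sync() workaround maps onto a basic threading pattern: reading a selection count while a background worker is still mutating it yields unstable numbers, whereas waiting for the worker first makes the count deterministic. A framework-free sketch of that idea (an illustration only, not visidata code):

```python
import threading

selected = set()

def select_rows(rows):
    for r in rows:
        selected.add(r)

worker = threading.Thread(target=select_rows, args=(range(100_000),))
worker.start()
worker.join()  # the moral equivalent of vd.sync() before reading the count
print(len(selected))  # 100000
```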
However, I'm unsure if this is the correct solution for 2 reasons.
First, I don't have a good understanding of how threading works in the visidata codebase.
Second, it's not clear to me how FreqTableSheet toggling should behave. Suppose that some rows on the source sheet are already selected, but not all of them. The way it works now is that after a toggle, all matching rows in the source sheet will end up in the same state: either all are selected, or all are unselected. But arguably the effect should be to toggle the selected status of all the matching rows in the source sheet, leaving some selected and some unselected. | closed | 2023-01-11T01:52:08Z | 2024-03-12T04:21:09Z | https://github.com/saulpw/visidata/issues/1671 | [
"bug",
"fixed"
] | midichef | 6 |
microsoft/qlib | deep-learning | 1,691 | t-SNE plotting | help me!
I'd like to know how exactly figure 2 in the DDG-DA paper was drawn, and what data was used?
| open | 2023-11-09T12:56:06Z | 2023-11-09T12:56:55Z | https://github.com/microsoft/qlib/issues/1691 | [
"question"
] | lianlin666 | 0 |
sunscrapers/djoser | rest-api | 5 | AssertionError at /auth/logout | ```
AssertionError at /auth/logout
'LogoutView' should either include a 'serializer_class' attribute, or use the 'model' attribute as a shortcut for automatically generating a serializer class.
```
I think this is due to a change in DRF. Here are my versions:
```
Django==1.6.8
djoser==0.1.0
djangorestframework==2.4.4
```
Pull request with a fix incoming...
| closed | 2014-11-21T18:49:46Z | 2015-05-04T22:50:03Z | https://github.com/sunscrapers/djoser/issues/5 | [
"bug"
] | smcoll | 2 |
pytorch/vision | computer-vision | 8,514 | Add 3-augment from DeiT III | ### 🚀 The feature
As the title suggest, add the data augmentation from https://arxiv.org/abs/2204.07118
### Motivation, pitch
This seems to be a simple recipe with good results and the Deit family is widely recognized.
### Alternatives
_No response_
### Additional context
_No response_ | open | 2024-07-06T20:52:34Z | 2024-07-27T14:24:48Z | https://github.com/pytorch/vision/issues/8514 | [] | trawler0 | 5 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,230 | UC detected by Datadome | Hello everyone,
I currently use UC to access this site: https://www.intermarche.com. The site is protected by DataDome, and despite all my efforts I can't access it with UC without triggering the DataDome captcha. I don't use any particular options, just the basic code, without headless mode. I tried several options, but without success. Here is my current code:

```python
options = uc.ChromeOptions()
driver = uc.Chrome(options=options, version_main=112)
url = "https://www.intermarche.com/"
driver.get(url)
```

How can I bypass the DataDome captcha, please? Maybe @ultrafunkamsterdam can help? UC is supposed to bypass DataDome, so I don't understand. Thanks for the help! | open | 2023-05-02T23:31:35Z | 2023-08-04T09:50:38Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1230 | [] | JohnDoe15152 | 2 |
PaddlePaddle/ERNIE | nlp | 875 | How to fine-tune the ERNIE-ViL 2.0 model | How can I fine-tune the ERNIE-ViL 2.0 model? The old repro branch contains ViL training code, but it throws a bug related to the Paddle version, while the new ernie-kit-open-v1.0 branch doesn't seem to include ViL fine-tuning code. If I want to fine-tune ViL, do I need to develop the training code myself? | closed | 2022-11-08T12:53:14Z | 2023-03-18T21:45:54Z | https://github.com/PaddlePaddle/ERNIE/issues/875 | [
"wontfix"
] | ZhuangLii | 1 |
oegedijk/explainerdashboard | plotly | 102 | XGBExplainer - can't load dashboard | Hello,
I encountered the following error message when running **ExplainerDashboard with XGBExplainer**:
> \lib\site-packages\explainerdashboard\dashboard_components\composites.py in __init__(self, explainer, title, name, hide_predindexselector, hide_predictionsummary, hide_contributiongraph, hide_pdp, hide_contributiontable, hide_title, hide_selector, **kwargs)
> 279 hide_selector=hide_selector, **kwargs)
> 280
> --> 281 self.index_connector = IndexConnector(self.index,
> 282 [self.summary, self.contributions, self.pdp, self.contributions_list])
> 283
>
> AttributeError: 'IndividualPredictionsComposite' object has no attribute 'index'
Here is the code that produces the error:
```
import xgboost as xgb
from explainerdashboard import ExplainerDashboard
from explainerdashboard.explainers import XGBExplainer
from explainerdashboard.datasets import titanic_fare
X_train, y_train, X_test, y_test = titanic_fare()
model = xgb.XGBRegressor(objective='reg:squarederror', n_estimators=5, max_depth=2)
model.fit(X_train, y_train)
explainer = XGBExplainer(model, X_test, y_test)
db = ExplainerDashboard(explainer, shap_interaction=False)
```
Package versions
- explainerdashboard: 0.2.15 (as well as 0.3.3 which is the latest version)
- xgb: 1.3.3
Thank you for your help.
**edit: issue is also encountered on the latest release version 0.3.3**
| closed | 2021-03-18T16:04:57Z | 2021-03-18T19:23:38Z | https://github.com/oegedijk/explainerdashboard/issues/102 | [] | mbh86 | 1 |
Gozargah/Marzban | api | 1,049 | Connection failures at random times | Sometimes, mostly in the mornings, none of the configs can connect, and the problem gets fixed by restarting Marzban, but it is very annoying.
To fix it, some people suggested switching the database to MySQL, which I did, but the problem was not resolved.
I suspected WARP might be the cause, so I removed WARP for a limited time, but it happened again.
The logs don't contain much useful information either that would let me fix the problem, so I copied the last part of the log from the most recent occurrence, which you can see below. I would really appreciate any help resolving this.
marzban-1 | INFO: 172.68.194.142:30872 - "GET /sub/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJrZXl2YW4iLCJhY2Nlc3MiOiJzdWJzY3JpcHRpb24iLCJpYXQiOjE3MTYzODM1ODh9.29zYSJcPk0w7N0Uzx83SSK4FHLRWtKmcRwcuimccT4I HTTP/1.1" 200 OK
marzban-1 | INFO: 206.168.34.116:60416 - "GET / HTTP/1.1" 200 OK
marzban-1 | INFO: 206.168.34.116:59368 - "GET / HTTP/1.1" 200 OK
mysql-1 | 2024-06-15 15:05:50+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.4.0-1.el9 started.
marzban-1 | WARNING: Invalid HTTP request received.
marzban-1 | INFO: 206.168.34.116:48760 - "GET /favicon.ico HTTP/1.1" 404 Not Found
marzban-1 | INFO: 172.68.194.143:47870 - "GET /sub/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJrZXl2YW4iLCJhY2Nlc3MiOiJzdWJzY3JpcHRpb24iLCJpYXQiOjE3MTYzODM1ODh9.29zYSJcPk0w7N0Uzx83SSK4FHLRWtKmcRwcuimccT4I HTTP/1.1" 200 OK
marzban-1 | INFO: 87.236.176.126:35355 - "GET / HTTP/1.1" 200 OK
mysql-1 | 2024-06-15 15:05:50+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
phpmyadmin-1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
mysql-1 | 2024-06-15 15:05:50+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.4.0-1.el9 started.
mysql-1 | '/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
mysql-1 | 2024-06-15T15:05:51.524518Z 0 [System] [MY-015015] [Server] MySQL Server - start.
mysql-1 | 2024-06-15T15:05:52.079295Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.4.0) starting as process 1
mysql-1 | 2024-06-15T15:05:52.095950Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
mysql-1 | 2024-06-15T15:05:52.609489Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
mysql-1 | 2024-06-15T15:05:53.043266Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
mysql-1 | 2024-06-15T15:05:53.043332Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
mysql-1 | 2024-06-15T15:05:53.054340Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
mysql-1 | 2024-06-15T15:05:53.106723Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.4.0' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
mysql-1 | 2024-06-15T15:05:53.367243Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '127.0.0.1' port: 33060, socket: /var/run/mysqld/mysqlx.sock
phpmyadmin-1 | Syntax OK
phpmyadmin-1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
phpmyadmin-1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
phpmyadmin-1 | [Sat Jun 15 15:05:50.864825 2024] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.57 (Debian) PHP/8.2.8 configured -- resuming normal operations
phpmyadmin-1 | [Sat Jun 15 15:05:50.865260 2024] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
phpmyadmin-1 | 147.185.132.65 - - [15/Jun/2024:15:15:06 +0000] "GET / HTTP/1.1" 200 19975 "-" "Expanse, a Palo Alto Networks company, searches across the global IPv4 space multiple times per day to identify customers' presences on the Internet. If you would like to be excluded from our scans, please send IP addresses/domains to: scaninfo@paloaltonetworks.com"
**Machine details (please complete the following information):**
- OS: Ubuntu 22.04 LTS
- Python version: 3.12.3
| closed | 2024-06-15T15:55:20Z | 2024-07-19T00:53:59Z | https://github.com/Gozargah/Marzban/issues/1049 | [
"Bug"
] | ThePishro | 3 |
pytest-dev/pytest-qt | pytest | 187 | Properly clear exceptions after test | If an exception is thrown and is captured by pytestqt, it's keep alive (found it while trying to discover why PySide was crashing on the process exit when a test failed -- the reason was that pytestqt was keeping variables alive longer than it should and variables weren't cleared in the proper order because of that).
Will provide a fix shortly. | closed | 2017-09-24T14:48:28Z | 2018-07-13T11:37:16Z | https://github.com/pytest-dev/pytest-qt/issues/187 | [] | fabioz | 4 |
babysor/MockingBird | pytorch | 257 | How to run on Google Colab? (without using the Toolbox or the Web UI) | Has anyone tried running it in a notebook (Google Colab or Jupyter)?
My idea is to provide just one or more audio files plus a sentence of text as input, then run it and generate an output audio file.
_Attached is my Colab; the results are not ideal._
[Colab Notebook](https://colab.research.google.com/drive/16rsNV0kVLG8SNUppEq2vmOauyeJJGrv4#scrollTo=_J2rxn1gvad8)
Feedback and sharing are welcome! Many thanks! | open | 2021-12-09T02:27:07Z | 2023-07-27T21:24:04Z | https://github.com/babysor/MockingBird/issues/257 | [
"documentation"
] | joeynmq | 4 |
ipython/ipython | data-science | 14,016 | Get rid of pickle. | ```python
self.db = PickleShareDB(os.path.join(self.profile_dir.location, 'db'))
```
is like
```python
self.db = exec((Path(self.profile_dir.location) / 'backdoor.py').read_text())
```
, but worse: at least we can audit contents of `backdoor.py` with a text editor.
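One direction this implies, sketched with a hypothetical class (not IPython's API): back the profile db with JSON instead of pickle, so loading it cannot execute code and the contents stay auditable in a text editor. Flat string keys are assumed here; PickleShareDB's hierarchical keys would need extra path handling.

```python
import json
from pathlib import Path

class JsonShareDB:
    """A PickleShareDB-like store backed by one JSON file per key."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def __setitem__(self, key, value):
        # json.dumps fails loudly on non-JSON-serializable values,
        # which is safer than silently pickling arbitrary objects.
        (self.root / f"{key}.json").write_text(json.dumps(value))

    def __getitem__(self, key):
        path = self.root / f"{key}.json"
        if not path.exists():
            raise KeyError(key)
        return json.loads(path.read_text())
```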
| open | 2023-04-13T21:24:31Z | 2023-04-17T08:14:14Z | https://github.com/ipython/ipython/issues/14016 | [] | KOLANICH | 3 |
521xueweihan/HelloGitHub | python | 2,271 | Project recommendation | wangEditor, an open-source web rich-text editor | ## Recommended project
<!-- This is the entry point for recommending projects to the HelloGitHub monthly. Self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content right away -->
<!-- Only open-source projects hosted on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/wangeditor-team/wangEditor
<!-- Please choose from: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: JS, TypeScript
<!-- Please describe what it does in about 20 characters, like an article title that makes it clear at a glance -->
- Project title: An open-source web rich-text editor
<!-- What is this project, what can it be used for, what features does it have or what pain points does it solve, what scenarios is it suitable for, and what can beginners learn from it? Length 32-256 characters -->
- Project description: An open-source web rich-text editor that works out of the box with simple configuration. Supports JS, Vue, and React.
<!-- What makes it stand out? What features does it have compared with similar projects? -->
- Highlights: works out of the box, simple configuration, supports JS, Vue, and React, and has complete documentation.
- Example code:
```js
import '@wangeditor/editor/dist/css/style.css'
import { createEditor, createToolbar } from '@wangeditor/editor'
// Create the editor
const editor = createEditor({
selector: '#editor-container'
})
// Create the toolbar
const toolbar = createToolbar({
editor,
selector: '#toolbar-container'
})
```
- Screenshot:

- Future update plan: community-driven | closed | 2022-07-05T18:13:40Z | 2022-07-28T01:28:01Z | https://github.com/521xueweihan/HelloGitHub/issues/2271 | [
"已发布",
"JavaScript 项目"
] | SekiBetu | 0 |
deepspeedai/DeepSpeed | deep-learning | 6,467 | [REQUEST] MiCS vs Zero++ hpZ for Hybrid FSDP | **Is your feature request related to a problem? Please describe.**
I'm interested in hybrid FSDP where the model is replicated across nodes and sharded within node.
My understanding is that this can be achieved through MiCS and / or ZeRO++ hpZ.
**Describe the solution you'd like**
Better documentation, examples, or tutorials on how these solutions differ and how to best compose these features with Zero3 for a given network topology.
| open | 2024-08-31T11:20:29Z | 2024-09-18T21:44:05Z | https://github.com/deepspeedai/DeepSpeed/issues/6467 | [
"enhancement"
] | jeromeku | 5 |
sqlalchemy/alembic | sqlalchemy | 1,587 | Initialize revision for tables with foreign keys | **Describe the bug**
When tables with foreign keys are defined and the first `revision --autogenerate` command is executed, the generated script includes only `create_table()` calls with `ForeignKey` arguments passed within them.
**Expected behavior**
The generated script should include `create_table()` calls at the beginning and `create_foreign_key()` calls at the end, to ensure proper separation of table creation and foreign-key constraints.
**Versions.**
- OS: Windows 10
- Python: 3.10
- Alembic: 1.14.0
- SQLAlchemy: 2.0.36
- Database: PostgreSQL
- DBAPI: psycopg2
**Have a nice day!**
| closed | 2024-12-31T10:49:01Z | 2024-12-31T13:22:32Z | https://github.com/sqlalchemy/alembic/issues/1587 | [] | DGDarkKing | 0 |
graphql-python/graphene-django | django | 893 | execute_graphql_request caused warnings | `graphene_django.views.GraphQLView.execute_graphql_request()` will generate these warnings:
```
DeprecationWarning: The 'root' alias has been deprecated. Please use 'root_value' instead.
DeprecationWarning: The 'context' alias has been deprecated. Please use 'context_value' instead.
DeprecationWarning: The 'variables' alias has been deprecated. Please use 'variable_values' instead.
```
The warnings are generated here: `.../graphql/execution/executor.py:92: in execute`
It's graphene-django v2.8.1 and graphene v2.1.8 | closed | 2020-03-05T12:15:55Z | 2020-07-07T15:55:20Z | https://github.com/graphql-python/graphene-django/issues/893 | [
"wontfix"
] | jedie | 3 |
zappa/Zappa | flask | 1,267 | Provide configuration for reserved and provisioned concurrency | Lambda now provides a reserved and provisioned concurrency setting/configuration.
https://docs.aws.amazon.com/lambda/latest/dg/lambda-concurrency.html#reserved-and-provisioned
https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
zappa currently provides a `keep_warm` function to periodically keep a single lambda instance "warm".
However, the current `keep_warm` method is not expected to reduce cold starts for more than a single instance.
This issue requests that reserved and provisioned concurrency are properly integrated with zappa.
Potentially consider deprecating `keep_warm` in favor of these new concurrency settings.
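For reference, each setting maps to a single AWS API call (`put_function_concurrency` and `put_provisioned_concurrency_config` on the Lambda client). A sketch of how an integration might wrap them; the function below and its parameters are illustrative, not an existing Zappa setting, and the client is injected so the sketch stays testable:

```python
def set_concurrency(lambda_client, function_name, reserved=None,
                    provisioned=None, qualifier=None):
    """Apply reserved and/or provisioned concurrency via a boto3 Lambda client."""
    if reserved is not None:
        lambda_client.put_function_concurrency(
            FunctionName=function_name,
            ReservedConcurrentExecutions=reserved,
        )
    if provisioned is not None:
        # Provisioned concurrency must target a published version or alias,
        # not $LATEST, hence the explicit qualifier.
        lambda_client.put_provisioned_concurrency_config(
            FunctionName=function_name,
            Qualifier=qualifier,
            ProvisionedConcurrentExecutions=provisioned,
        )
```

In real use, `lambda_client = boto3.client("lambda")` and the values would come from `zappa_settings.json`.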
| closed | 2023-08-17T05:05:03Z | 2024-04-21T03:37:49Z | https://github.com/zappa/Zappa/issues/1267 | [
"no-activity",
"auto-closed"
] | monkut | 3 |
seleniumbase/SeleniumBase | web-scraping | 3,548 | Dynamic button loaded during the page scrolling not found | I am trying to scrape the URL: https://earth.esa.int/eogateway/search?category=All+Categories.
If you visit the page, you can see that the button shown in the following screenshot appears suddenly while scrolling the page.

Unfortunately, it looks like the button is not detected by SeleniumBase in CDP mode, so the dynamic loading of the page content never completes and the session is closed. I don't understand why.
Here is the script I prepared:
```
from seleniumbase import SB
from seleniumbase.common.exceptions import TimeoutException, NoSuchElementException, ElementNotVisibleException
url = "https://earth.esa.int/eogateway/search?category=All+Categories"
cookie_selector = "div#__next > div.fixed.bottom-0.left-0.right-0.bg-brand-bg-darker.text-white.font-esa > div.mx-auto.p-8.flex.flex-col > div.pt-4.flex.flex-col.gap-8.items-center.justify-center > button.p-1.uppercase.border.transition-colors"
with SB(uc=True) as sb:
sb.activate_cdp_mode(url)
sb.cdp.maximize()
sb.sleep(1)
try:
sb.cdp.click(cookie_selector, timeout=20)
except (TimeoutException, NoSuchElementException, ElementNotVisibleException) as e:
pass
# Scroll through the page to load all articles
last_height = sb.execute_script(f"""
function getScrollableElement() {{
const elements = document.querySelectorAll('*');
for (const element of elements) {{
if (element.scrollHeight > element.clientHeight) {{
return element;
}}
}}
return null;
}}
const scrollable = getScrollableElement();
return scrollable ? scrollable.scrollHeight : document.body.scrollHeight;
""")
while True:
# Scroll down to the bottom
sb.cdp.scroll_to_bottom()
sb.sleep(2)
try:
button = sb.cdp.find_element_by_text("READ MORE", tag_name="button", timeout=1)
sb.execute_script("arguments[0].click();", button)
except Exception:
pass
# Calculate new scroll height and compare with the last height
new_height = sb.execute_script("return scrollable ? scrollable.scrollHeight : document.body.scrollHeight;")
if new_height == last_height:
break
last_height = new_height
```
Once the button appears, scrolling the page no longer triggers the loading of new content; that is instead delegated to the button. Therefore, `last_height` equals `new_height` once the button appears, and the script exits without further loads.
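For what it's worth, one likely contributor is that each `execute_script` call runs in a fresh JavaScript scope, so the `scrollable` constant defined in the first script is undefined in the second call. Re-measuring the height on every iteration avoids relying on cross-call state; a browser-agnostic sketch with the browser actions injected as callables (names are illustrative):

```python
def scroll_until_stable(get_height, scroll, click_read_more, max_rounds=50):
    """Scroll repeatedly, clicking 'READ MORE' when present, until the
    re-measured height stops growing."""
    last = get_height()
    for _ in range(max_rounds):
        scroll()
        click_read_more()
        new = get_height()  # re-measure every round; no cross-call JS state
        if new == last:
            return new
        last = new
    return last
```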
This is unexpected and I guess it is a bug related to how the SB driver evaluates the dynamic content of the page. | closed | 2025-02-20T17:13:30Z | 2025-02-20T20:13:10Z | https://github.com/seleniumbase/SeleniumBase/issues/3548 | [
"invalid usage",
"UC Mode / CDP Mode"
] | matteocacciola | 2 |
mljar/mljar-supervised | scikit-learn | 161 | Temporary Files cause PermissionError exception in unit tests | In many of the test_algorithms, we test saving with the following code:
```
with tempfile.NamedTemporaryFile() as tmp:
model.save(tmp.name)
```
These two lines independently try and talk to the same file at the same time. On Windows the file is locked by the temp file creation, and then errors when the save operation is attempted.
Proposed fix:
- Try-catch the locking error.
- model.save(temp_directory + unique_random_name)
- Manually delete the file after we're done with os.remove(temp_directory + unique_random_name)
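The proposed fix could look roughly like this sketch (helper name and suffix are illustrative):

```python
import os
import tempfile

def save_model_to_temp(model, suffix=".bin"):
    """Save a model to a uniquely named temp file and delete it manually,
    avoiding the exclusive lock Windows keeps on an open NamedTemporaryFile."""
    fd, path = tempfile.mkstemp(suffix=suffix)
    os.close(fd)  # release the handle so model.save() can reopen the file
    try:
        model.save(path)
    finally:
        os.remove(path)
```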
Further info; https://stackoverflow.com/questions/23212435/permission-denied-to-write-to-my-temporary-file (edited) | closed | 2020-09-01T08:00:57Z | 2020-09-01T14:20:12Z | https://github.com/mljar/mljar-supervised/issues/161 | [
"tests"
] | abtheo | 0 |
mitmproxy/pdoc | api | 58 | decorated functions | Is there an easy way to allow pdoc to access the **doc** of decorated functions?
For example, how can I make pdoc document:
@memoize
def func(a, b, c):
"""
Doc string
"""
return a, b, c
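A common workaround (assuming pdoc picks up the function's `__doc__` at runtime) is to have the decorator copy metadata with `functools.wraps`, so the wrapper keeps the original docstring and name:

```python
import functools

def memoize(func):
    cache = {}

    @functools.wraps(func)  # copies __doc__, __name__, etc. onto the wrapper
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    return wrapper

@memoize
def func(a, b, c):
    """Doc string"""
    return a, b, c
```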
| closed | 2015-07-15T15:07:11Z | 2021-01-19T16:43:51Z | https://github.com/mitmproxy/pdoc/issues/58 | [
"bug"
] | hyukim17 | 10 |
jschneier/django-storages | django | 635 | GCP and NamedTemporaryFile: file-like object only had 0 bytes remaining. | ```python3
import tempfile
import requests
from django.core import files
def process_image(url):
"""Process a single image."""
r = requests.get(url, stream=True)
file_name = url.split('/')[-1]
# read response stream into lf
lf = tempfile.NamedTemporaryFile()
for block in r.iter_content(1024 * 8):
if not block:
break
lf.write(block)
lf.flush()
# save to google cloud
media_image = BrandMediaImage()
media_image.image.save(file_name, files.File(lf), save=False) # error here
media_image.save()
```
Error:
```
File "/app/apps/jobs/management/commands/mycommand.py", line 89, in process_image
media_image.image.save(file_name, files.File(lf), save=False)
File "/usr/local/lib/python3.6/site-packages/django/db/models/fields/files.py", line 87, in save
self.name = self.storage.save(name, content, max_length=self.field.max_length)
File "/usr/local/lib/python3.6/site-packages/django/core/files/storage.py", line 49, in save
return self._save(name, content)
File "/usr/local/lib/python3.6/site-packages/storages/backends/gcloud.py", line 167, in _save
content_type=file.mime_type)
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 975, in upload_from_file
size, num_retries, predefined_acl)
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 888, in _do_upload
size, num_retries, predefined_acl)
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 683, in _do_multipart_upload
raise ValueError(msg)
ValueError: Size 37319 was specified but the file-like object only had 0 bytes remaining.
``` | closed | 2018-12-14T13:06:22Z | 2018-12-14T14:03:57Z | https://github.com/jschneier/django-storages/issues/635 | [] | elnygren | 2 |
miguelgrinberg/Flask-SocketIO | flask | 1,461 | 404 Errors During Websocket Handshake | I've been experiencing issues when trying to initialize SocketIO in the supported way, and I was hoping you might be able to help me pinpoint where the issue is. I've created a minimal example [here](https://github.com/robb17/socketio-do-nothing), which also includes the NGINX configuration.
Of interest is the startup_production script, which spawns multiple gunicorn processes. The only ones that matter for the present example, though, are those listening on ports 5008/5009 (depending on your IP) and 5000. You can see a live version of the toy app [here](example.donkhouse.com). When accessing the index, the process to which you've been IP-hashed should forward websocket requests to the process listening on 5000. Unfortunately, however, these requests are 404ed.
The log file for the process listening on port 5000 consists of many entries of the following:
```
[2021-01-13 22:20:30 +0000] [16466] [DEBUG] GET /socket.io//
[2021-01-13 22:20:30 +0000] [16466] [DEBUG] Closing connection.
[2021-01-13 22:20:38 +0000] [16466] [DEBUG] GET /socket.io//
[2021-01-13 22:20:38 +0000] [16466] [DEBUG] Closing connection.
[2021-01-13 22:20:46 +0000] [16466] [DEBUG] GET /socket.io//
[2021-01-13 22:20:46 +0000] [16466] [DEBUG] Closing connection.
[2021-01-13 22:20:54 +0000] [16466] [DEBUG] GET /socket.io//
[2021-01-13 22:20:54 +0000] [16466] [DEBUG] Closing connection.
```
Thanks, and let me know if I can provide any more information!
(Also, yes, init_socket.py and app/static/js/socketinit.js are EXTREMELY janky—do you have any recommendations on how I might better specify connection protocols dynamically?) | closed | 2021-01-13T22:35:35Z | 2021-01-13T23:06:55Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1461 | [
"question"
] | robb17 | 2 |
kymatio/kymatio | numpy | 33 | DOC example that visualizes 2D filter | @edouardoyallon could you make a very short example that plots a 2D filter bank using matplotlib?
If you feel productive, the same for 1D would be also great to have | closed | 2018-10-22T21:26:23Z | 2018-10-28T04:48:56Z | https://github.com/kymatio/kymatio/issues/33 | [] | eickenberg | 3 |
jupyter/nbviewer | jupyter | 836 | nbviewer error displaying output from IPython.display.Javascript when using lib= | When executing a trivial Notebook which contains:
`IPython.display.Javascript("""element.append("This is a test");""",lib=["https://cdnjs.cloudflare.com/ajax/libs/d3/5.9.2/d3.min.js"])`
repeated in several consecutive cells, Jupyter Notebook correctly displays "This is a test" in each output cell as expected. Viewing the notebook in nbviewer does not behave as expected, and instead displays all the output in the last cell. (See screenshots below.)
**To Reproduce**
Go to: https://nbviewer.jupyter.org/url/www.slac.stanford.edu/~tonyj/JavascriptTest.ipynb
**Screenshots**
Notebook rendered in Jupyter:

Notebook rendered in nbviewer:

**Desktop (please complete the following information):**
- Linux Ubuntu 19.04 (16 bit)
- Browser chrome
- Version 74.0.3729.169 (Official Build) (64-bit)
**Additional context**
I believe the source of the problem is that jupyter generates the following for each cell:
`"application/javascript": [
"$.getScript(\"https://cdnjs.cloudflare.com/ajax/libs/d3/5.9.2/d3.min.js\", function () {\n",
"element.append(\"This is a test\");});\n"
]`
Note the asynchronous execution of the element.append, after the script load.
nbviewer converts this to:
`<script type="text/javascript">
var element = $('#f4bde6d7-91f5-41f5-b818-6e9ebbc3d826');
$.getScript("https://cdnjs.cloudflare.com/ajax/libs/d3/5.9.2/d3.min.js", function () {
element.append("This is a test");});
</script>`
Note the global scope definition of var element.
When several of these cells are present in the python notebook, the asynchronous execution of the element.append picks up the most recent (last) definition of element, instead of the intended definition of element for that cell.
I think a fix would be to create a function(){} for each cell, and define var element inside that function so that it is locally scoped relative to the code in the remainder of the function. | closed | 2019-06-15T07:01:02Z | 2019-11-21T22:16:24Z | https://github.com/jupyter/nbviewer/issues/836 | [
"type:Bug",
"tag:Upstream",
"tag:Other Jupyter Project"
] | tony-johnson | 2 |
ultralytics/ultralytics | machine-learning | 19,719 | Yolov11 model.model output | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I am trying to better understand the output of `model.model` while implementing knowledge distillation. However, I am struggling to fully grasp the dimensions for different tasks.
### **Detection**
In training mode, the output dimensions are:
- `torch.Size([4, 144, 80, 80])`
- `torch.Size([4, 144, 40, 40])`
- `torch.Size([4, 144, 20, 20])`
Here:
- `4` is the batch size.
- `(80,80)` represents the height and width of the feature map.
- `144 = 80 + reg_max * 4 = 80 + 16 * 4`
However, I am unclear on how classes and bounding boxes are represented. My understanding is that there are 16 bounding boxes per `(h, w)`, but I don't see the corresponding number of classes.
Additionally, I noticed that these dimensions remain the same across different model variants (S, N, M, X, etc.), leading me to wonder:
- Would a simple loss function like MSE be sufficient for distillation between models?
- Or is there some scaling difference that needs to be accounted for?
### **Segmentation**
Similarly, for segmentation, I obtained the following output dimensions:
- `torch.Size([4, 32, 8400])`
- `torch.Size([4, 32, 160, 160])`
I assume `(160,160)` represents the height and width. Would a simple MSE loss be sufficient for distillation between the teacher and student models in this case?
### **Pose Estimation**
For pose estimation, the output dimensions are:
- `torch.Size([4, 51, 8400])`, where `51` corresponds to `17 x 3` keypoints.
Again, would a simple MSE loss be sufficient for distillation, or should additional considerations be made?
### Additional
_No response_ | open | 2025-03-15T23:20:12Z | 2025-03-16T19:55:11Z | https://github.com/ultralytics/ultralytics/issues/19719 | [
"question",
"segment",
"detect",
"pose"
] | Shiro-LK | 4 |
tqdm/tqdm | jupyter | 1,471 | Log exceptions in `pandas.progress_apply` | - [x] I have marked all applicable categories:
+ [x] new feature request
- [x] I have visited the [source website], and in particular read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and environment, where applicable (N/A, not a bug, or something that needs reproduction)
Sometimes I wrap processing functions passed to `pd.___.progress_apply(...)` such that exceptions are accounted for, but won't crash the entire calculation, e.g.:
```python
import random
import time
import pandas as pd
import numpy as np
from tqdm.auto import tqdm
tqdm.pandas()
data = pd.Series(np.arange(100))
erroraccounting = tqdm(total=len(data), desc='No errors so far')
def f(x):
time.sleep(0.1)
if random.random() < 0.05:
raise Exception("All your base are belong to us")
return 2*x
def try_function(f):
def f_(x):
global erroraccounting
try:
return f(x)
except Exception as e:
erroraccounting.update(1)
return f_
data.progress_apply(try_function(f))
```
Particularly when errors are logged, this allows prioritizing the most relevant rather than the first exception, and also some exceptions might be acceptable or not worth the trouble fixing as long as they are rare.
On the other hand, just catching and ignoring errors can waste time, for example when there's a small bug at the end of an iteration that, if not logged, will only be noticed once all iterations have finished. Hence I'd propose a `tqdm.pandas(log_exceptions=True)` to log such errors, e.g. like this:

Furthermore, adding a list of the exceptions and tracebacks can further help developers solve bugs while running calculations.

I'm particularly thinking from an e-science angle, focussed on scientific results, rather than a software engineering angle, focussed on stable code. | open | 2023-04-21T14:06:19Z | 2023-04-21T14:14:47Z | https://github.com/tqdm/tqdm/issues/1471 | [] | prhbrt | 0 |
davidsandberg/facenet | computer-vision | 1,109 | Unsuccessful | open | 2019-11-14T09:11:17Z | 2019-11-18T12:08:57Z | https://github.com/davidsandberg/facenet/issues/1109 | [] | garin-wang | 1 | |
MorvanZhou/tutorials | tensorflow | 3 | Zhou, could you record an episode on setting up distributed TensorFlow? | https://github.com/tensorflow/serving
The default docs are not as clear as a video, and there is no example of how to make it learn.
| closed | 2016-10-10T06:07:36Z | 2016-11-06T10:44:25Z | https://github.com/MorvanZhou/tutorials/issues/3 | [] | bournes | 1 |
miguelgrinberg/microblog | flask | 63 | Translation bug | I think I found a tiny bug, or perhaps an enhancement.
If you press Translate on a post that's in english already, you get an error:
```ArgumentOutOfRangeException: 'from' must be a valid language Parameter name: from : ID=1116.V2_Json.Translate.4449FEBB```
Screendump attached:

| closed | 2018-01-04T19:36:30Z | 2019-03-01T00:04:07Z | https://github.com/miguelgrinberg/microblog/issues/63 | [
"question",
"auto-closed"
] | Callero | 4 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 428 | After project startup, replacing the cookie and parsing Douyin videos | INFO: 127.0.0.1:52799 - "GET / HTTP/1.1" 200 OK
INFO: ('127.0.0.1', 52801) - "WebSocket /?app=index&session=NEW" [accepted]
INFO: connection open
WARNING 第 1 次响应内容为空, 状态码: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1920&screen_height=1080&browser_language=zh-C
N&browser_platform=Win32&browser_name=Firefox&browser_version=124.0&browser_online=true&engine_name=Gecko&engine_version=122.0.0.0&os_name=Windows&os_version=10&cpu_core_num=12&device_memory=8&platform=PC&msToken=&aweme_id=7355872402066181427&a
_bogus=dymM%2F5hXmD6N6fSv54QLfY3q64r3YmsT0trEMD2finfOl639HMY39exoE6hvzREjLG%2FlIeujy4hbT3ohrQ2y0Hwf9W0L%2F25ksDSkKl5Q5xSSs1X9eghgJ04qmkt5SMx2RvB-rOXmqhZHKRbp09oHmhK4bIOwu3GMXE%3D%3D
WARNING 第 2 次响应内容为空, 状态码: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1920&screen_height=1080&browser_language=zh-C
N&browser_platform=Win32&browser_name=Firefox&browser_version=124.0&browser_online=true&engine_name=Gecko&engine_version=122.0.0.0&os_name=Windows&os_version=10&cpu_core_num=12&device_memory=8&platform=PC&msToken=&aweme_id=7355872402066181427&a
_bogus=dymM%2F5hXmD6N6fSv54QLfY3q64r3YmsT0trEMD2finfOl639HMY39exoE6hvzREjLG%2FlIeujy4hbT3ohrQ2y0Hwf9W0L%2F25ksDSkKl5Q5xSSs1X9eghgJ04qmkt5SMx2RvB-rOXmqhZHKRbp09oHmhK4bIOwu3GMXE%3D%3D
WARNING 第 3 次响应内容为空, 状态码: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1920&screen_height=1080&browser_language=zh-C
N&browser_platform=Win32&browser_name=Firefox&browser_version=124.0&browser_online=true&engine_name=Gecko&engine_version=122.0.0.0&os_name=Windows&os_version=10&cpu_core_num=12&device_memory=8&platform=PC&msToken=&aweme_id=7355872402066181427&a
_bogus=dymM%2F5hXmD6N6fSv54QLfY3q64r3YmsT0trEMD2finfOl639HMY39exoE6hvzREjLG%2FlIeujy4hbT3ohrQ2y0Hwf9W0L%2F25ksDSkKl5Q5xSSs1X9eghgJ04qmkt5SMx2RvB-rOXmqhZHKRbp09oHmhK4bIOwu3GMXE%3D%3D
程序出现异常,请检查错误信息。
ERROR 无效响应类型。响应类型: <class 'NoneType'>
程序出现异常,请检查错误信息。 | closed | 2024-06-14T09:37:30Z | 2024-06-14T22:25:08Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/428 | [
"BUG",
"enhancement"
] | feng3729 | 2 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,413 | Add a permission to enable recipients to reset "Read-Only" status of a report | ### Proposal
In relation to ticket https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4412, some organizations would like to selectively configure which recipients are allowed to reset the "Read-Only" status of a report.
"C: Client",
"C: Backend",
"T: Feature"
] | evilaliv3 | 0 |
joeyespo/grip | flask | 12 | Convert files to html | Hi,
I was wondering if it is possible to use this as a converter? Add some option to output the generated html files to a specified folder.
Is this supported?
Cheers!
| closed | 2013-05-08T20:26:21Z | 2013-09-27T04:50:25Z | https://github.com/joeyespo/grip/issues/12 | [
"enhancement"
] | bftanase | 2 |
iterative/dvc | data-science | 10,440 | Latest release of DVC breaks object versioned configs | # Bug Report
<!--
## Issue name
Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.
Example: `repro: doesn't detect input changes`
-->
## Description
The latest release of DVC v3.51.0 breaks workflows where data is stored with `version_aware = true` on a remote bucket.
It seems to be due to this PR which is included in the latest release:
https://github.com/iterative/dvc/pull/10433
<!--
A clear and concise description of what the bug is.
-->
Here is a CI run from a week ago with version v3.50.3 of dvc, which succeeds:
https://github.com/SkafteNicki/example_mlops/actions/runs/9112178019/job/25050892637
Here is a CI run from today with version v3.51.0 of dvc, which fails with error:
```
ERROR: failed to pull data from the cloud - config file error: 'fetch --run-cache' is unsupported for cloud versioned remotes: config file error: 'fetch --run-cache' is unsupported for cloud versioned remotes
```
https://github.com/SkafteNicki/example_mlops/actions/runs/9222033174/job/25372344887
Nothing has changed regarding the data, config of dvc etc. only the version being used by the CI.
By changing `dvc pull` to `dvc pull --no-run-cache` fixes the issue:
https://github.com/SkafteNicki/example_mlops/actions/runs/9222304205/job/25373194648
### Reproduce
<!--
Step list of how to reproduce the bug
-->
<!--
Example:
1. dvc init
2. Copy dataset.zip to the directory
3. dvc add dataset.zip
4. dvc run -d dataset.zip -o model ./train.sh
5. modify dataset.zip
6. dvc repro
-->
### Expected
<!--
A clear and concise description of what you expect to happen.
-->
I have already found the solution for this problem; however, I would not have expected such a breaking change to happen in a minor release of DVC. I would recommend the maintainers add a note to the documentation that the `--no-run-cache` argument needs to be used when `version_aware=true` is set in the DVC config (alternatively, this could maybe be auto-detected from the config and set automatically?)
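A minimal sketch of what that auto-detection could look like (the config dictionary shape below is an illustrative assumption, not DVC's actual internal representation):

```python
# Illustrative only: default the run-cache off when any configured remote
# is cloud-versioned. The config layout here is assumed for the sketch.
def should_fetch_run_cache(config):
    remotes = config.get("remote", {})
    return not any(r.get("version_aware", False) for r in remotes.values())

versioned = {"remote": {"storage": {"url": "s3://bucket", "version_aware": True}}}
plain = {"remote": {"storage": {"url": "s3://bucket"}}}

print(should_fetch_run_cache(versioned))  # False: behave like --no-run-cache
print(should_fetch_run_cache(plain))      # True
```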
On a side note: it seems all references in the documentation to setting a remote storage to version-aware are gone?
The relevant page for this information: https://dvc.org/doc/user-guide/data-management/cloud-versioning#cloud-versioning does not really contain how to do it:
```bash
dvc remote modify remote_storage version_aware true
```
### Environment information
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
```console
$ dvc doctor
```
**Additional Information (if any):**
<!--
Please check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.
If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.
If the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.
-->
| closed | 2024-05-24T10:23:42Z | 2024-05-27T08:52:45Z | https://github.com/iterative/dvc/issues/10440 | [
"bug",
"p1-important",
"regression"
] | SkafteNicki | 1 |
mkhorasani/Streamlit-Authenticator | streamlit | 49 | Configuration file support for unicode | Congratulations on your great project.
I noticed an issue when the config.yaml contains unicode characters.
For example when using the reset password feature the yaml load and dump works perfectly but the process escapes all unicode chars. Unfortunately this cancels the yaml file human readability.
A simple solution would be to just use the `allow_unicode` parameter when dumping the config back to the filesystem:

```python
with open('../config.yaml', 'w') as file:
    yaml.dump(config, file, default_flow_style=False, allow_unicode=True, sort_keys=False)
```
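To illustrate the difference, here is a small hedged sketch calling PyYAML directly (the config content is made up; requires the `pyyaml` package):

```python
import yaml

config = {"credentials": {"usernames": {"jose": {"name": "José"}}}}

escaped = yaml.dump(config, default_flow_style=False, sort_keys=False)
readable = yaml.dump(config, default_flow_style=False, allow_unicode=True, sort_keys=False)

# Without allow_unicode, non-ASCII characters come out as escape sequences;
# with allow_unicode=True, they are written verbatim and stay readable.
print(escaped)
print(readable)
```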
How does it sound?
:)
Keep up the good work! | open | 2023-02-17T06:43:58Z | 2024-01-22T11:48:58Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/49 | [
"enhancement"
] | egelados | 1 |
Avaiga/taipy | automation | 2,395 | [OTHER] Bind to dictionary key within class | ### What would you like to share or ask?
I am failing to write a value that is bound to an entry in a dictionary nested in a class. I saw an example of using a dictionary in #1785, but it fails when the dictionary lives in a class rather than at global scope.
Minimal example:
```python
from taipy.gui import Gui
import taipy.gui.builder as tgb
class Simple_Class:
def __init__(self):
        self.x = 1
class Complex_Class:
def __init__(self):
self.dictionary = {}
self.simple_class = Simple_Class()
complex_class = Complex_Class()
simple_dictionary = {}
def printer(state, var_name, value):
print(state.complex_class.simple_class.x)
print(state.complex_class.dictionary.items())
print(state.simple_dictionary.items())
with tgb.Page() as page:
tgb.input(
value="{complex_class.dictionary.x}",
on_change=printer,
)
tgb.input(
value="{complex_class.simple_class.x}",
on_change=printer,
)
tgb.input(
value="{simple_dictionary.x}",
on_change=printer,
)
Gui(page).run(port=4999)
```
The following output is produced when entering 1, 2 and 3 into the resulting fields.
>.../.venv/lib/python3.12/site-packages/taipy/gui/gui.py:700: TaipyGuiWarning: A problem occurred while resolving variable 'complex_class.dictionary.x' in module '__main__'.
> _warn(
> .../.venv/lib/python3.12/site-packages/taipy/gui/utils/_evaluator.py:371: TaipyGuiWarning: Exception raised evaluating complex_class.dictionary.x:
> 'dict' object has no attribute 'x'
> _warn(f"Exception raised evaluating {expr_string}", e)
> 2
> dict_items([])
> dict_items([])
> 2
> dict_items([])
> dict_items([('x', '3')])
As demonstrated, even deeply nested properties can be used, so it seems possible in principle, but it is evaluated differently in practice.
Do you know of a workaround and could this be covered in future releases?
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | open | 2025-01-13T11:39:00Z | 2025-01-13T15:57:32Z | https://github.com/Avaiga/taipy/issues/2395 | [
"📈 Improvement",
"🖰 GUI",
"🟨 Priority: Medium"
] | JosuaCarl | 1 |
dmlc/gluon-nlp | numpy | 1,201 | gluonnlp on Cuda 10 cannot import name 'replace_file' | I am trying to run gluonnlp on cuda 10. I have installed the following package:
mxnet-cu100 1.6.0b20191102
https://pypi.org/project/mxnet-cu100mkl/#history
It seems to be the only one that has mxnet 1.6.0 and supports CUDA 10. I have no possibility to upgrade to CUDA 10.1, which has the official release of mxnet 1.6.0.
The error I get looks like I am still on mxnet 1.5.0 but:
```
import mxnet as mx
print(mx.__version__)
1.6.0
```
It looks like the pre-release is not compatible? Do you know of any way to use gluonnlp on CUDA 10.0? I can neither downgrade nor upgrade the CUDA version.
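Since that nightly build reports `1.6.0` yet lacks the symbol, feature-detecting the attribute is more robust than comparing version strings. A stdlib-only sketch (the two stand-in modules are fabricated for illustration):

```python
import types

def supports_replace_file(utils_module):
    # True only if the installed build actually exposes the symbol.
    return hasattr(utils_module, "replace_file")

# Stand-ins mimicking an older and a newer mxnet.gluon.utils module.
old_utils = types.SimpleNamespace(download=object(), check_sha1=object())
new_utils = types.SimpleNamespace(download=object(), check_sha1=object(),
                                  replace_file=object())

print(supports_replace_file(old_utils))  # False
print(supports_replace_file(new_utils))  # True
```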
The original error is:
```
ImportError Traceback (most recent call last)
<ipython-input-2-12f1dea70852> in <module>
13 from mxnet import gluon, autograd
14 from mxnet.gluon.utils import download
---> 15 import gluonnlp as nlp
16 nlp.utils.check_version('0.7.0')
~/.local/lib/python3.6/site-packages/gluonnlp/__init__.py in <module>
23
24 from . import loss
---> 25 from . import data
26 from . import embedding
27 from . import model
~/.local/lib/python3.6/site-packages/gluonnlp/data/__init__.py in <module>
21 import os
22
---> 23 from . import (batchify, candidate_sampler, conll, corpora, dataloader,
24 dataset, question_answering, registry, sampler, sentiment,
25 stream, super_glue, transforms, translation, utils,
~/.local/lib/python3.6/site-packages/gluonnlp/data/question_answering.py in <module>
29
30 from mxnet.gluon.data import ArrayDataset
---> 31 from mxnet.gluon.utils import download, check_sha1, _get_repo_file_url, replace_file
32 from .registry import register
33 from ..base import get_home_dir
ImportError: cannot import name 'replace_file'
``` | closed | 2020-04-09T18:26:19Z | 2020-06-21T20:19:20Z | https://github.com/dmlc/gluon-nlp/issues/1201 | [] | ktoetotam | 4 |
plotly/dash | data-visualization | 2,433 | Slider mark does not move accordingly | **Context**
Python dependencies:
```
dash 2.8.1
dash-bootstrap-components 1.3.1
```
Chrome version: 110.0.5481.105 (Official Build) (64-bit)
**The bug**
When pressing a button that changes the slider max to one bigger than the previously selected value, the slider handle moves to the corresponding location on the slider, but the tooltip that shows the selected value remains at the end.
**Expected behavior**
The tooltip moves together with the circle.
**Screenshots**
The initial maximum (30) has been selected.

Then, the maximum is set to 60, and the tooltip does not move to the middle as the point does.

**Reproducible code**
```python
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output
from dash import html, dcc, Dash, callback
app = Dash('Slider marks bug reproducible example')
app.layout = html.Div([
dbc.Row([
dcc.Slider(1, 10, 1,
value=1,
id='slider',
marks=None,
tooltip={"placement": "top", "always_visible": True}
),
]),
dbc.Row([
dbc.RadioItems(id='radioitem',
options=[{'label':'30', 'value': 30}, {'label':'60', 'value': 60}],
value='30',
inline=True,
style={'marginTop': '5px', 'display': 'flex', 'justifyContent': 'space-around', 'flex-wrap': 'wrap'},
class_name="btn-group",
input_class_name="btn-check",
label_class_name="btn btn-outline-primary",
label_checked_class_name="active",
input_style={'display':'none'},
input_checked_style={'display':'none'},
label_checked_style={'borderColor': '#333138', 'backgroundColor': '#333138', 'color':'#fffffa'},
label_style={'borderColor': '#333138', 'backgroundColor': '#fffffa', 'color': '#333138'}
),
])
])
@callback(
Output('slider', 'max'),
Input('radioitem', 'value')
)
def update_slider_max(selected_max):
return selected_max
if __name__ == "__main__":
app.run(debug=False, dev_tools_silence_routes_logging = False)
``` | open | 2023-02-24T08:45:33Z | 2024-08-13T19:26:50Z | https://github.com/plotly/dash/issues/2433 | [
"bug",
"P3"
] | asicoderOfficial | 1 |
onnx/onnx | deep-learning | 6,433 | [Dynamic] Linear Quantization Linear test is not friendly to optimization (reciprocal) | # Ask a Question
### Question
The dynamic quantization is computed as listed below in the ONNX backend tests
```
x_min = np.minimum(0, np.min(X))
x_max = np.maximum(0, np.max(X))
Y_Scale = np.float32((x_max - x_min) / (255 - 0)) # uint8 -> [0, 255]
Y_ZeroPoint = np.clip(round((0 - x_min) / Y_Scale), 0, 255).astype(np.uint8)
Y = np.clip(np.round(X / Y_Scale) + Y_ZeroPoint, 0, 255).astype(np.uint8)
```
In this code (last line), we divide every value of `X` by the scalar `Y_Scale`. Because division is typically much more expensive than multiplication, a common optimization is to compute the reciprocal ahead of time, say `Y_ScaleInv = 1.0 / Y_Scale`, and then use that inverse in the computation (last line).
Since this is done in floating point (full precision) for a quantization algorithm, the potential slight loss of accuracy due to this optimization might be acceptable for most.
Unfortunately, the examples in the backend tests sometimes trip when this optimization is applied.
For example, in the `DynamicQuantizeLinear` test, if instead of using this input
```
X = np.array([0, 2, -3, -2.5, 1.34, 0.5]).astype(np.float32)
```
we were to add a `-0.01` offset to the `-2.5` entry, the test passes both when using the original scale and when using its reciprocal.
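The arithmetic can be reproduced without ONNX or numpy; a plain-Python sketch of the comparison (`clip255` is a helper introduced here, and the list mirrors the test input above):

```python
X = [0.0, 2.0, -3.0, -2.5, 1.34, 0.5]

x_min = min(0.0, min(X))
x_max = max(0.0, max(X))
scale = (x_max - x_min) / 255.0
zero_point = round((0.0 - x_min) / scale)  # 153 for this data

def clip255(v):
    return min(255, max(0, v))

q_divide = [clip255(round(x / scale) + zero_point) for x in X]

scale_inv = 1.0 / scale  # the optimization under discussion
q_multiply = [clip255(round(x * scale_inv) + zero_point) for x in X]

# The variants agree except possibly where x / scale lands exactly half-way
# between two integers (here the -2.5 entry), where a one-ulp difference in
# the intermediate can flip the rounding.
print(q_divide)
print(q_multiply)
```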
My question is: given that using the reciprocal is fine for most cases:
* can we modify the values so that our onnx backend tests work for both cases (by slightly modifying the inputs), or
* if we want to preserve the current behavior, could we add additional tests that would work in both cases (so that algorithms that use the reciprocal can still be tested on some of the inputs).
Thanks for your feedback. Happy to implement either solution.
### Further information
- Relevant Area: tests
- Is this issue related to a specific model?
not a model, but specific backend tests.
### Notes
All was provided above. | open | 2024-10-09T00:55:53Z | 2024-10-12T00:25:05Z | https://github.com/onnx/onnx/issues/6433 | [
"question",
"topic: test"
] | AlexandreEichenberger | 13 |
TheKevJames/coveralls-python | pytest | 272 | Add option to specify a 'base path' for source files to support monorepos | I have a monorepo with several projects in a format similar to:
```
/
/project-one
/project-two
...etc
```
and I'd like to have that layout reflected in my coveralls reports.
The [Coveralls Github Action](https://github.com/marketplace/actions/coveralls-github-action) allows this through a flag called `base-path`.
Could I add this to coveralls-python through a PR?
I can see that the Coveralls Github Action does this by prepending the base path to the source file paths listed in the coverage report before uploading.
Having a quick look at the code, I think this can be done relatively easily, but wonder if it would be best to do this in `api.Coveralls.create_data()` before merging any `extra` or after that? Is that the right place to be modifying the source file paths?
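A sketch of the transformation being proposed (the `source_files`/`name` payload shape follows the Coveralls API convention, but treat the exact field names and the helper as assumptions, not coveralls-python internals):

```python
import posixpath

def prepend_base_path(data, base_path):
    # Rewrite each reported source path so it lives under the sub-project folder.
    for source in data.get("source_files", []):
        source["name"] = posixpath.join(base_path, source["name"])
    return data

payload = {"source_files": [{"name": "module.py", "coverage": [1, None, 2]}]}
rewritten = prepend_base_path(payload, "project-one")
print(rewritten["source_files"][0]["name"])  # project-one/module.py
```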
Let me know what you think?
Thanks! | closed | 2021-03-13T15:03:33Z | 2021-11-04T00:23:18Z | https://github.com/TheKevJames/coveralls-python/issues/272 | [] | RaddishIoW | 2 |
blb-ventures/strawberry-django-plus | graphql | 96 | How to hook into built-in mutations | I have been struggling to build an efficient mutation pattern around the provided create/update/delete mutations without repeating tremendous amounts of code. After hours of debugging and trying to understand the magic of dataclasses, typing, etc., I think it might not be possible to build upon the provided mutations.
Consider this (classes are shortened):
```python
@gql.django.type(Asset)
class AssetNode(ModelNode):
equity: EquityNode | None
buy_price: gql.auto
@gql.django.partial(Asset)
class UpdateAssetInput(gql.NodeInput):
equity: gql.auto
buy_price: gql.auto
@gql.type
class AssetMutation:
@gql.mutation
@login_required
def update_asset(self, info: Info, input: UpdateAssetInput) -> AssetNode:
_asset: Asset = Asset.objects.get(pk=input.id.node_id)
if _asset.owner == info.context.request.user:
field: AssetNode = gql.django.update_mutation(UpdateAssetInput)
result = field.get_result(None, info, None, kwargs={'input': input})
return result
raise PermissionError("You can only modify objects you own.")
```
I need server-side validation that the provided `ModelID` is owned by the user of the current context. The validation runs fine and the `update_mutation` field is created properly, but calling `field.get_result` fails because somehow the model property does not get populated. There seems to be some issue with the typing annotations, but I cannot get my head around it.
Is this use case perhaps not possible with the library at the moment? Is there any recommendation on how to implement this?
I tried going down the path with directives but they run "too early" and don't have access to the actual model instance for validation.
Side notes:
There are some issues with the code above in respect to performance.
* The model instance is fetched twice from the database. Once during owner checking and once during update.
* Typing seems to be off. Field is actually not an `AssetNode` but a `DjangoUpdateMutationField`
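Stripped of the strawberry machinery, the pattern being aimed for is fetch once, check ownership, then mutate the same instance. A framework-free sketch with a dict standing in for the model instance:

```python
def update_owned(instance, user, changes):
    # The caller loads `instance` once; the same object is checked and updated,
    # avoiding a second database fetch.
    if instance["owner"] != user:
        raise PermissionError("You can only modify objects you own.")
    instance.update(changes)
    return instance

asset = {"id": 1, "owner": "alice", "buy_price": 10}
updated = update_owned(asset, "alice", {"buy_price": 12})
print(updated["buy_price"])  # 12
```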
Thanks for your support. 😃 | closed | 2022-08-07T12:22:27Z | 2022-08-07T18:38:38Z | https://github.com/blb-ventures/strawberry-django-plus/issues/96 | [
"question"
] | oleo65 | 4 |
ned2/slapdash | plotly | 29 | Preventing Flask termination when exceptions are raised? | Hi,
This is fantastic and perfect for our needs. One challenge I am having (with Dash in general), and I am hoping there might be a way to address it via a config or setup parameter: Flask's default behaviour is to keep the server running when a Python exception is raised. You then get to see the exception in the browser. For some reason, Dash's Flask blueprint does not do this and the server process terminates. Any ideas on how to "fix" this? As a Python newbie, I tend to make more mistakes in development than most. Having to restart the server every time I hit a syntax or other error soon becomes very onerous. | open | 2020-02-14T02:02:25Z | 2020-02-14T02:02:48Z | https://github.com/ned2/slapdash/issues/29 | [] | stevenringo | 0 |
dmlc/gluon-cv | computer-vision | 852 | AttributeError: 'AxesSubplot' object has no attribute 'copy' | Hello,
I was trying '09. Run an object detection model on your webcam'
which is in the tutorials, and I got the error below:
`AttributeError: 'AxesSubplot' object has no attribute 'copy'`
I am using
`gluoncv 0.4.0, mxnet 1.4.1`
Also, in the example code, `gcv.utils.viz.cv_plot_bbox` and
`gcv.utils.viz.cv_plot_image` are used, but it seems the API was updated and the 'cv_' prefix should be removed, e.g. `gcv.utils.viz.plot_image` | closed | 2019-07-05T05:15:58Z | 2019-07-09T05:39:11Z | https://github.com/dmlc/gluon-cv/issues/852 | [] | dojinkimm | 2 |
graphql-python/graphene | graphql | 835 | Union can contain list and non-list field in one union? | I want to implement this type of union.
```
union SearchResult = [Human] | Message
```
So, I wrote this code:
```py
class SearchResult(graphene.Union):
class Meta:
type = (graphene.List(Human), Message)
```
However, this doesn't work.
Does anyone know how to make this union work? | closed | 2018-09-13T06:51:27Z | 2018-09-18T09:30:43Z | https://github.com/graphql-python/graphene/issues/835 | [] | rscarrera27 | 2 |
ageitgey/face_recognition | machine-learning | 1,097 | Unexpected type img for face_locations() if model='cnn' and batch_size less than 128 | * face_recognition version: 1.3.0
* Python version: python 3.6.8
* Operating System: Ubuntu 18.04.4
### Description
When I run `face_locations()`, it appears the 'cnn' and 'hog' detectors expect different input types for the image.
If the batch size is 64, then 'cnn' expects `img` to be a list of numpy arrays, while `'hog'` wants just a numpy array.
```
type(face_image) # --> ndarray
raw_face_loc = face_api.cnn_face_detector(face_image, 1, 64)
Output
__call__(): incompatible function arguments. The following argument types are supported:
1. (self: dlib.cnn_face_detection_model_v1, imgs: list, upsample_num_times: int=0, batch_size: int=128) -> std::vector<std::vector<dlib::mmod_rect, std::allocator<dlib::mmod_rect> >, std::allocator<std::vector<dlib::mmod_rect, std::allocator<dlib::mmod_rect> > > >
2. (self: dlib.cnn_face_detection_model_v1, img: array, upsample_num_times: int=0) -> std::vector<dlib::mmod_rect, std::allocator<dlib::mmod_rect> >
Invoked with: <dlib.cnn_face_detection_model_v1 object at 0x7f04bff46458>, array([[[ 87, 116, 134],
[ 80, 109, 127],
...,
[ 2, 0, 8]]], dtype=uint8), 1, 64
Did you forget to `#include <pybind11/stl.h>`? Or <pybind11/complex.h>,
<pybind11/functional.h>, <pybind11/chrono.h>, etc. Some automatic
conversions are optional and require extra headers to be included
when compiling your pybind11 module.
```
If we run `cnn` with a list `[ndarray]`
```
type(face_image) # --> ndarray
raw_face_loc = face_api.cnn_face_detector([face_image], 1, 64)
type(raw_face_loc) # --> <class 'dlib.mmod_rectangless'>
type(raw_face_loc[0]) # --> <class 'dlib.mmod_rectangles'> NOTE rectangleS, and above rectangleSS
```
Because of this `mmod_rectangless` return type, `face_locations()` has an incorrect `if`.
```
if model == "cnn":
return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn") ]
```
since `face` has no `rect` attribute, but `face[0]` does. | open | 2020-03-29T12:13:19Z | 2020-05-10T21:37:50Z | https://github.com/ageitgey/face_recognition/issues/1097 | [] | VicGrygorchyk | 2 |
amidaware/tacticalrmm | django | 2,060 | [Feature request] Add script testing output to history logs | **Is your feature request related to a problem? Please describe.**
Currently there is no trace of what has been tested or what the output was, except this in the audit:

**Describe the solution you'd like**
Like every other script run, the output should be logged in history.
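An illustrative shape for such a history record (all field names here are made up for the sketch, not Tactical RMM's actual schema):

```python
import datetime

def record_script_test(history, agent_id, script_name, output, exit_code):
    history.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "type": "script_test",
        "agent": agent_id,
        "script": script_name,
        "output": output,
        "exit_code": exit_code,
    })
    return history

log = []
record_script_test(log, "agent-01", "cleanup.ps1", "Done.", 0)
print(log[0]["script"], log[0]["exit_code"])  # cleanup.ps1 0
```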
**Describe alternatives you've considered**
nothing possible | open | 2024-11-04T07:50:55Z | 2024-11-04T07:50:55Z | https://github.com/amidaware/tacticalrmm/issues/2060 | [] | P6g9YHK6 | 0 |
paulpierre/RasaGPT | fastapi | 56 | Successful installation: Problem | Hi, I am curious whether any community members have been able to successfully install and run RasaGPT?
Appreciate the help, coffee from my end :).
[+] Running 5/5
✔ Network rasagpt_chat-network Created 0.1s
✔ Network rasagpt_default Created 0.1s
✔ Container chat_rasa_credentials Started 0.0s
✔ Container chat_rasa_actions Started 0.0s
✔ Container chat_rasa_core Started 0.0s
make[2]: Leaving directory '/workspace/helm/RasaGPT'
Error response from daemon: Container c08306a3dd93b7da06b150708b51c907bc2d3ef4dba9538a7b7ced4aaef64423 is not running
make[1]: *** [Makefile:291: rasa-train] Error 1
make[1]: Leaving directory '/workspace/helm/RasaGPT'
make: *** [Makefile:57: install] Error 2 | open | 2023-11-06T20:34:15Z | 2023-11-06T20:34:15Z | https://github.com/paulpierre/RasaGPT/issues/56 | [] | nitishymtpl | 0 |
predict-idlab/plotly-resampler | plotly | 113 | Loosen up plotly dependencies | see: https://github.com/pycaret/pycaret/pull/2866 | closed | 2022-08-22T12:12:01Z | 2022-08-25T23:06:46Z | https://github.com/predict-idlab/plotly-resampler/issues/113 | [] | jonasvdd | 2 |
littlecodersh/ItChat | api | 478 | chatroom member fetch failed | Person-to-person chat messages are received fine, but for group chats I only receive the following error:
chatroom member fetch failed with @0a836af4242dd9acaed1592956cbc826
itchat version: 1.3.9
How can this be resolved? | closed | 2017-08-12T02:37:42Z | 2019-06-13T09:35:57Z | https://github.com/littlecodersh/ItChat/issues/478 | [
"question"
] | qiusugang | 4 |
albumentations-team/albumentations | machine-learning | 1,978 | IndexError inside core/bbox_utils when using A.GridDistortion | ## Describe the bug
A.GridDistortion worked for boxes with an older bugfix version, but not with `albucore=0.0.17` and `albumentations=1.4.17`
### To Reproduce
```python
from skimage.data import astronaut
import albumentations as A
t = A.Compose(
[A.GridDistortion(p=1.0)],
**{"bbox_params": A.BboxParams(format="pascal_voc", label_fields=["class_labels"])},
)
aimg = astronaut()
aboxes = [[100, 100, 200, 200], [200, 200, 300, 300]]
alabels = [1, 2]
# The error always happens
result = t(image=aimg, bboxes=aboxes, class_labels=alabels)
```
#### Error
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
[...]
/usr/local/lib/python3.10/dist-packages/albumentations/core/bbox_utils.py:237) return (bboxes_denorm[:, 2] - bboxes_denorm[:, 0]) * (bboxes_denorm[:, 3] - bboxes_denorm[:, 1])
IndexError: index 3 is out of bounds for axis 1 with size 3
``` | closed | 2024-10-08T09:10:24Z | 2024-10-08T20:10:49Z | https://github.com/albumentations-team/albumentations/issues/1978 | [
"bug"
] | Telcrome | 2 |
vitalik/django-ninja | pydantic | 1,088 | [Docs issue] Inconsistency in indentation | It is nitpicking, but could you please fix this inconsistency in indentation:
<img width="981" alt="Screenshot 2024-02-19 at 23 52 00" src="https://github.com/vitalik/django-ninja/assets/36531464/dd0396fc-a8c7-4f7c-99b1-15c0125751a4">
It is in: [_Path parameters > Django Path Converters > Path params with slashes_](https://django-ninja.dev/guides/input/path-params/#path-params-with-slashes)
I guess it can be a good-first-issue for someone.
| closed | 2024-02-19T22:58:02Z | 2024-04-24T11:28:37Z | https://github.com/vitalik/django-ninja/issues/1088 | [] | dabarov | 1 |
PablocFonseca/streamlit-aggrid | streamlit | 66 | Multi-column editable configuration doesn't work in 0.2.3 | When I upgrade streamlit-aggrid from 0.2.2.post4 to 0.2.3.post2, the multi-column editable configuration doesn't work any more. That means only one column is editable even if I set editable=True on two columns. The code snippet would be:
```
gb.configure_column("Col_A", editable=True)
gb.configure_column("Col_B", editable=True)
``` | closed | 2022-03-01T02:38:03Z | 2024-04-04T17:53:17Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/66 | [
"bug"
] | tonylegend | 0 |
oxylabs/amazon-scraper | web-scraping | 3 | Variation issue | Hi,
While working with the Amazon product scraper, it doesn't return the correct variation images/tooltips; it returns the same image for all variations.
The problem is with the `parse` option: when I don't send it, the HTML content contains the correct variation images, but I need it with `parse`.
Thanks
**Request**
```
curl --location 'https://realtime.oxylabs.io/v1/queries' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic X' \
--data '{
"source": "amazon_product",
"domain": "com",
"query": "B0CKX64CJ3",
"parse": true
}’
```
**Response**
```
"variation": [
{
"asin": "B0CKX74BPN",
"selected": false,
"dimensions": {
"Color": "Blue"
},
"tooltip_image": "https://m.media-amazon.com/images/I/311w1kHj3RL._SS36_.jpg"
},
{
"asin": "B0CKX7C48Y",
"selected": false,
"dimensions": {
"Color": "Grey"
},
"tooltip_image": "https://m.media-amazon.com/images/I/311w1kHj3RL._SS36_.jpg"
}]
``` | open | 2024-02-14T01:50:29Z | 2024-02-14T01:50:29Z | https://github.com/oxylabs/amazon-scraper/issues/3 | [] | ofignacio | 0 |
cvat-ai/cvat | pytorch | 9,225 | Return to the first page after visiting invalid page | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
When visiting a non-existent page for some resource in the UI, you're shown an error notification

It would be nice to redirect the user automatically to the first page, or to provide a button to do so. There are other buttons on the page (Tasks, Jobs, etc.), but if a filter was enabled, visiting these pages clears the filter.
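The fallback rule being requested is simple to state; a sketch in Python (CVAT's UI is TypeScript, so this only illustrates the logic):

```python
def resolve_page(requested, total_pages):
    # Fall back to the first page when the requested page does not exist,
    # instead of stranding the user on an error notification.
    if total_pages < 1:
        return 1
    if requested < 1 or requested > total_pages:
        return 1
    return requested

print(resolve_page(7, 3))  # 1: out of range -> first page
print(resolve_page(2, 3))  # 2: valid page is kept
```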
### Describe the solution you'd like
_No response_
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | open | 2025-03-18T17:13:28Z | 2025-03-18T17:13:45Z | https://github.com/cvat-ai/cvat/issues/9225 | [
"enhancement",
"ui/ux",
"good first issue"
] | zhiltsov-max | 0 |
keras-team/keras | pytorch | 20,608 | `keras.ops.image.map_coordinates` fails on `uint8` input with TensorFlow backend | Consider the following simple example
```python
import keras
image = keras.ops.ones((1, 1, 3), dtype='uint8')
coordinates = keras.ops.convert_to_tensor([-1., 0., 0.])[..., None, None]
interp = keras.ops.image.map_coordinates(image, coordinates, order=1, fill_mode='constant')
```
that is expected to yield `[[0]]`. However, with `KERAS_BACKEND=tensorflow` this code snippet results in
```console
2024-12-08 16:04:24.790791: W tensorflow/core/framework/op_kernel.cc:1841] OP_REQUIRES failed at gather_nd_op.cc:65 : INVALID_ARGUMENT: indices[0,0] = [-1, 0, 0] does not index into param shape [1,1,3], node name: GatherNd
2024-12-08 16:04:24.790814: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: INVALID_ARGUMENT: indices[0,0] = [-1, 0, 0] does not index into param shape [1,1,3], node name: GatherNd
Traceback (most recent call last):
File "<home>/tfmapc.py", line 11, in <module>
interp = keras.ops.image.map_coordinates(image, coordinates, order=1, fill_mode='constant')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<home>/.env/lib/python3.12/site-packages/keras/src/ops/image.py", line 787, in map_coordinates
return backend.image.map_coordinates(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<home>/.env/lib/python3.12/site-packages/keras/src/backend/tensorflow/image.py", line 485, in map_coordinates
contribution = tf.cond(tf.reduce_all(validities), fast_path, slow_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<home>/.env/lib/python3.12/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "<home>/.env/lib/python3.12/site-packages/keras/src/backend/tensorflow/image.py", line 481, in slow_path
tf.transpose(tf.gather_nd(input_arr, indices)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__GatherNd_device_/job:localhost/replica:0/task:0/device:CPU:0}} indices[0,0] = [-1, 0, 0] does not index into param shape [1,1,3], node name: GatherNd [Op:GatherNd] name:
```
The problem does not occur if I change the `dtype` of `image` from `uint8` to `float32` or switch either to the `jax` or `torch` backends. Also changing the `fill_mode` from `constant` to `nearest` avoids the issue.
Keras version: 3.7.0 | closed | 2024-12-08T15:18:28Z | 2025-01-17T18:12:00Z | https://github.com/keras-team/keras/issues/20608 | [
"stat:awaiting response from contributor",
"type:Bug"
] | sergiud | 4 |
roboflow/supervision | computer-vision | 1,293 | Help in gathering commonly used supervision functions. | We will be releasing a supervision cheatsheet soon - something of a similar style as https://github.com/a-anjos/python-opencv/blob/master/cv2cheatsheet.pdf
First, we need to gather a list of code snippets in a Colab. We'll then go over it and copy out a subset of the functions into the PDF.
Partial submissions are fine - every little bit helps!
---
The sections in the Colab I anticipate are as follows:
1. `# Supervision Basics` - pip install supervision and inference, load an image, run the model (`"yolov8s-640"`) and `from_inference`, visualize the result. This section also has `Detections.empty`, `is_empty` and `Detections.merge`.
More importantly this should have frequently searched behaviour such as selecting detections with one class. I'll add this myself later.
Let's include `with_nmm` and `with_nms` too.
> https://supervision.roboflow.com/develop/#hello
> https://supervision.roboflow.com/develop/how_to/detect_and_annotate/
> https://supervision.roboflow.com/develop/detection/core/
2. `# supervision assets` - an example of loading from https://supervision.roboflow.com/assets/
3. `# Loading model results` - examples of installing dependencies and running every model mentioned [here](https://supervision.roboflow.com/latest/detection/core/), showing every `from_X` method at work.
> https://supervision.roboflow.com/develop/detection/core/
4. `Annotators` - Create and run every annotator on a detections object, show the result.
> https://supervision.roboflow.com/develop/detection/annotators/
5. `# KeyPoints` - similar sections to the Basics: from_X, annotators
> https://supervision.roboflow.com/latest/keypoint/core/
> https://supervision.roboflow.com/latest/keypoint/annotators/
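For the "selecting detections with one class" snippet mentioned under section 1, the usual pattern is boolean-mask indexing. A dependency-free stand-in (this is not the real `supervision` API, just the shape of the idiom):

```python
class FakeDetections:
    """Minimal stand-in mimicking mask-based filtering on detections."""
    def __init__(self, class_ids):
        self.class_id = class_ids

    def __getitem__(self, mask):
        kept = [c for c, keep in zip(self.class_id, mask) if keep]
        return FakeDetections(kept)

detections = FakeDetections([0, 2, 0, 1])
mask = [c == 0 for c in detections.class_id]
only_class_zero = detections[mask]
print(only_class_zero.class_id)  # [0, 0]
```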
---
If you start creating a Colab, please post a link on this issue.
After you're done, please set it to "anyone can edit" - it will be useful in the next few days. | closed | 2024-06-19T08:02:12Z | 2024-06-26T10:39:04Z | https://github.com/roboflow/supervision/issues/1293 | [
"help wanted"
] | LinasKo | 12 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,177 | [Feature Request]: Support Kolors model | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Kolors is a very new model. Please see the relevant Huggingface page here: https://huggingface.co/Kwai-Kolors/Kolors
### Proposed workflow
I expect that the model should be used the same way other SD models are used.
### Additional information
_No response_ | open | 2024-07-09T07:07:19Z | 2024-07-23T09:07:03Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16177 | [
"enhancement"
] | ddpasa | 3 |
python-visualization/folium | data-visualization | 1,451 | Folium choropleth (or geojson) legend: unrounded tick labels | Hello!
I'm trying to do something that seems like it should be very, very simple, but somehow it doesn't work. In this basic example from folium, I have changed the values for the bins to decimals. I would like the legend ticks NOT to be rounded to the nearest integer, but rather include decimals. I see there are a few issues where people figured out how to do it with custom javascript, but I can't seem to get those to work.
Thanks in advance for your help!
```python
import pandas as pd
import folium
url = (
"https://raw.githubusercontent.com/python-visualization/folium/master/examples/data"
)
state_geo = f"{url}/us-states.json"
state_unemployment = f"{url}/US_Unemployment_Oct2012.csv"
state_data = pd.read_csv(state_unemployment)
m = folium.Map(location=[48, -102], zoom_start=3)
folium.Choropleth(
geo_data=state_geo,
name="choropleth",
data=state_data,
columns=["State", "Unemployment"],
key_on="feature.id",
fill_color="YlGn",
fill_opacity=0.7,
line_opacity=0.2,
legend_name="Unemployment Rate (%)",
bins = [3.21,4.05,9.8,11.3],
).add_to(m)
folium.LayerControl().add_to(m)
m
```

| closed | 2021-02-10T13:14:38Z | 2022-11-18T14:34:18Z | https://github.com/python-visualization/folium/issues/1451 | [] | stcoats | 1 |
littlecodersh/ItChat | api | 653 | username changes, how to identify users | Before submitting, please make sure you have checked the following!
- [ ] You can log in to your WeChat account in a browser, but cannot log in with `itchat`
- [ ] I have read and followed the instructions in the [documentation][document]
- [ ] Your problem has not been reported in [issues][issues]; otherwise, please report it under the existing issue
- [ ] This issue is really about `itchat`, not another project.
- [ ] If your question is about stability, consider trying the [itchatmp][itchatmp] project, which has extremely low network stability requirements
Please run with `itchat.run(debug=True)` and paste the output below:
```
[paste the full log here]
```
Your itchat version is: `[fill in the version number here]`. (It can be obtained via `python -c "import itchat;print(itchat.__version__)"`)
Any other content or a more detailed description of the problem can be added below:
> [your content]
[document]: http://itchat.readthedocs.io/zh/latest/
[issues]: https://github.com/littlecodersh/itchat/issues
[itchatmp]: https://github.com/littlecodersh/itchatmp
| closed | 2018-05-04T08:55:39Z | 2018-06-06T02:04:22Z | https://github.com/littlecodersh/ItChat/issues/653 | [] | zhangzicong6 | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,155 | Make it possible for secondary tenants to not be exposed via HTTPS | **Describe the bug**
In the platform options, disabling the flag "Let the platform be reachable without Tor" is not working; the platform is still reachable over HTTPS.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to backend "Tor" options
2. Disable flag "Let the platform be reachable without Tor"
3. In a standard browser, try to get https://your_gl_instance
4. Platform is reachable
**Expected behavior**
The platform should not be reachable with a standard browser
**Screenshots**
-
**Desktop (please complete the following information):**
- OS: ubuntu
- Browser firefox 96, chrome 97
- Version [e.g. 22]
**Smartphone (please complete the following information):**
-
**Additional context**
GL version 4.7.6
| closed | 2022-01-28T15:01:56Z | 2022-01-31T12:53:35Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3155 | [
"T: Enhancement",
"C: Backend"
] | larrykind | 6 |
deezer/spleeter | tensorflow | 817 | [Bug] The accompaniment.wav and vocals.wav files have not changed | - [x ] I didn't find a similar issue already open.
- [x ] I read the documentation (README AND Wiki)
- [ x] I have installed FFMpeg
- [ x] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
I followed the installation steps in the readme and tried to separate the audio_example.mp3 file, but the output files always sound the same as audio_example.mp3. I don't know whether my model was downloaded correctly; I tried downloading it many times without any error output.
## Step to reproduce
```
conda install -c conda-forge ffmpeg libsndfile
pip install spleeter
wget https://github.com/deezer/spleeter/raw/master/audio_example.mp3
```
```
$ spleeter separate -o output/ audio_example.mp3
INFO:spleeter:Downloading model archive https://github.com/deezer/spleeter/releases/download/v1.4.0/2stems.tar.gz
INFO:spleeter:Validating archive checksum
INFO:spleeter:Extracting downloaded 2stems archive
INFO:spleeter:2stems model file(s) extracted
INFO:spleeter:File output/audio_example/vocals.wav written succesfully
INFO:spleeter:File output/audio_example/accompaniment.wav written succesfully
```
```
$ ls -lh ~/pretrained_models/2stems
total 76M
-rw-rw-r-- 1 ubuntu ubuntu 67 Oct 24 2019 checkpoint
-rw-rw-r-- 1 ubuntu ubuntu 75M Oct 24 2019 model.data-00000-of-00001
-rw-rw-r-- 1 ubuntu ubuntu 5.2K Oct 24 2019 model.index
-rw-rw-r-- 1 ubuntu ubuntu 787K Oct 24 2019 model.meta
```
```
$ ls -lh ~/output/audio_example
total 3.7M
-rw-rw-r-- 1 ubuntu ubuntu 1.9M Jan 8 17:50 accompaniment.wav
-rw-rw-r-- 1 ubuntu ubuntu 1.9M Jan 8 17:50 vocals.wav
```
```
$ nvidia-smi
Sun Jan 8 17:54:32 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.102.04 Driver Version: 450.102.04 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:08.0 Off | 0 |
| N/A 38C P0 47W / 300W | 6800MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 3391742 C python 3405MiB |
| 0 N/A N/A 3392629 C python 3393MiB |
+-----------------------------------------------------------------------------+
```
```
$ ffmpeg -v
ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
configuration: --prefix=/tmp/build/80754af9/ffmpeg_1587154242452/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho --cc=/tmp/build/80754af9/ffmpeg_1587154242452/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
``` | open | 2023-01-08T10:04:28Z | 2024-03-27T11:21:58Z | https://github.com/deezer/spleeter/issues/817 | [
"bug",
"invalid"
] | Ice-Hazymoon | 6 |
mitmproxy/mitmproxy | python | 7,410 | I'm looking for 3 flows, and their response and i want to capture them. Currently it seems like python scripts run forever and seem to be parsed on every flow? | I'm looking for 3 flows and their responses, and I want to capture them. Currently it seems like Python scripts run forever and are parsed on every flow?
I want to capture flow.request.path = "/test/folder/1" and write flow.response.content to disk. Then I want to capture flow.request.path = "/test/folder/2" and write flow.response.content to disk.
After those 2 flows are captured, I want to exit or shut down the proxy. Is there a method to use, like `while done is False`? The flows are random; I just want mitmproxy to shut down once it has seen those 2 flows.
```
from mitmproxy import http, ctx
import json
import os
import re
def response(flow: http.HTTPFlow) -> None:
if flow.response and flow.response.content:
if '/ext/dragonsong/event/about_v2' in flow.request.path:
print('about_v2', flow.request.path)
about_v2_raw = flow.response.content.decode("utf-8")
about_v2 = json.loads(about_v2_raw)
with open(f"/root/.mitmproxy/wardragons/about_v2.txt", "w") as file:
json.dump(about_v2, file)
os.chmod("/root/.mitmproxy/wardragons/about_v2.txt", 0o744)
about_completed = True
if '/dragons/event/current' in flow.request.path:
print('params', flow.request.path)
params = flow.response.content.decode("utf-8")
with open(f"/root/.mitmproxy/wardragons/params.txt" , "w") as file:
file.write(params)
os.chmod("/root/.mitmproxy/wardragons/params.txt", 0o744)
params_completed = True
if '/ext/dragonsong/world/get_params' in flow.request.path:
print('world_params', flow.request.path)
world_params_raw = flow.response.content.decode("utf-8")
world_params = json.loads(world_params_raw)
with open(f"/root/.mitmproxy/wardragons/world_params.txt", "w") as file:
json.dump(world_params, file)
os.chmod("/root/.mitmproxy/wardragons/world_params.txt", 0o744)
world_completed = True
```
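A sketch of one way to get the "stop after these flows" behavior. The tracker below is plain Python; in an actual addon you would call `tracker.record(flow.request.path)` from the `response()` hook after writing each file, and once `tracker.done` is `True`, stop the proxy with `ctx.master.shutdown()` (the call usually suggested for stopping mitmproxy from an addon; verify it against your mitmproxy version):

```python
# Sketch: track which target path fragments still need to be captured.
# In the addon's response() hook, call tracker.record(flow.request.path)
# after writing the file; once tracker.done is True, call
# ctx.master.shutdown() to stop the proxy.

class CaptureTracker:
    def __init__(self, fragments):
        # Path fragments we are still waiting to see.
        self.pending = set(fragments)

    def record(self, path):
        """Discard every pending fragment that appears in `path`."""
        for fragment in list(self.pending):
            if fragment in path:
                self.pending.discard(fragment)

    @property
    def done(self):
        return not self.pending


tracker = CaptureTracker(["/test/folder/1", "/test/folder/2"])
tracker.record("/test/folder/1?session=abc")
tracker.record("/test/folder/2")
print(tracker.done)  # True
```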
_Originally posted by @isriam in https://github.com/mitmproxy/mitmproxy/discussions/7405_ | closed | 2024-12-23T01:15:15Z | 2024-12-24T13:21:04Z | https://github.com/mitmproxy/mitmproxy/issues/7410 | [] | BabyYoda01 | 1 |
jupyter-book/jupyter-book | jupyter | 1,831 | `nbconvert` (v7.0.0) and `jupyterbook` incompatible | ### Describe the bug
**context**
Try to install `nbconvert` and `jupyterbook` with `pip3`
**expectation**
I expected the installation to work.
**bug**
But instead `pip` gets stuck backtracking through old versions and eventually fails
Pinning `nbconvert` to 6.5.3 resolves the issue
### Reproduce the bug
Using a minimal Dockerfile (shown below)
```console
Step 1/3 : FROM ubuntu:22.04
---> 2dc39ba059dc
Step 2/3 : RUN export DEBIAN_FRONTEND=noninteractive && apt-get -qq update && apt-get -yq --with-new-pkgs -o Dpkg::Options::="--force-confold" upgrade && apt-get -y install git python3-pip
---> Using cache
---> ee9fac10498c
Step 3/3 : RUN pip3 install --upgrade jupyter-book nbconvert
---> Running in 761b45c9f0ae
Collecting jupyter-book
Downloading jupyter_book-0.13.1-py3-none-any.whl (43 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 43.5/43.5 KB 1.6 MB/s eta 0:00:00
Collecting nbconvert
Downloading nbconvert-7.0.0-py3-none-any.whl (271 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 271.3/271.3 KB 3.9 MB/s eta 0:00:00
Collecting sphinx-comments
Downloading sphinx_comments-0.0.3-py3-none-any.whl (4.6 kB)
Collecting click<9,>=7.1
Downloading click-8.1.3-py3-none-any.whl (96 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.6/96.6 KB 3.3 MB/s eta 0:00:00
Collecting sphinx<5,>=4
Downloading Sphinx-4.5.0-py3-none-any.whl (3.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.1/3.1 MB 7.1 MB/s eta 0:00:00
Collecting sphinx-design~=0.1.0
Downloading sphinx_design-0.1.0-py3-none-any.whl (1.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.9/1.9 MB 13.7 MB/s eta 0:00:00
Collecting sphinx_togglebutton
Downloading sphinx_togglebutton-0.3.2-py3-none-any.whl (8.2 kB)
Collecting sphinx-jupyterbook-latex~=0.4.6
Downloading sphinx_jupyterbook_latex-0.4.6-py3-none-any.whl (13 kB)
Collecting sphinx-multitoc-numbering~=0.1.3
Downloading sphinx_multitoc_numbering-0.1.3-py3-none-any.whl (4.6 kB)
Collecting myst-nb~=0.13.1
Downloading myst_nb-0.13.2-py3-none-any.whl (41 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.0/41.0 KB 5.1 MB/s eta 0:00:00
Collecting sphinx-external-toc~=0.2.3
Downloading sphinx_external_toc-0.2.4-py3-none-any.whl (25 kB)
Collecting pyyaml
Downloading PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (682 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 682.2/682.2 KB 14.1 MB/s eta 0:00:00
Collecting jsonschema<4
Downloading jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.3/56.3 KB 8.0 MB/s eta 0:00:00
Collecting linkify-it-py~=1.0.1
Downloading linkify_it_py-1.0.3-py3-none-any.whl (19 kB)
Collecting sphinx-thebe~=0.1.1
Downloading sphinx_thebe-0.1.2-py3-none-any.whl (8.3 kB)
Collecting docutils<0.18,>=0.15
Downloading docutils-0.17.1-py2.py3-none-any.whl (575 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 575.5/575.5 KB 16.1 MB/s eta 0:00:00
Collecting sphinx_book_theme~=0.3.2
Downloading sphinx_book_theme-0.3.3-py3-none-any.whl (345 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 345.7/345.7 KB 18.7 MB/s eta 0:00:00
Collecting sphinxcontrib-bibtex<=2.5.0,>=2.2.0
Downloading sphinxcontrib_bibtex-2.5.0-py3-none-any.whl (39 kB)
Collecting sphinx-copybutton
Downloading sphinx_copybutton-0.5.0-py3-none-any.whl (12 kB)
Collecting Jinja2
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.1/133.1 KB 15.1 MB/s eta 0:00:00
Collecting beautifulsoup4
Downloading beautifulsoup4-4.11.1-py3-none-any.whl (128 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 128.2/128.2 KB 13.1 MB/s eta 0:00:00
Collecting jupyter-core>=4.7
Downloading jupyter_core-4.11.1-py3-none-any.whl (88 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 88.4/88.4 KB 8.0 MB/s eta 0:00:00
Collecting bleach
Downloading bleach-5.0.1-py3-none-any.whl (160 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 160.9/160.9 KB 11.7 MB/s eta 0:00:00
Collecting lxml
Downloading lxml-4.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (6.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.9/6.9 MB 18.6 MB/s eta 0:00:00
Collecting pandocfilters>=1.4.1
Downloading pandocfilters-1.5.0-py2.py3-none-any.whl (8.7 kB)
Collecting nbclient>=0.5.0
Downloading nbclient-0.6.7-py3-none-any.whl (71 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.8/71.8 KB 9.7 MB/s eta 0:00:00
Collecting jupyterlab-pygments
Downloading jupyterlab_pygments-0.2.2-py2.py3-none-any.whl (21 kB)
Collecting packaging
Downloading packaging-21.3-py3-none-any.whl (40 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.8/40.8 KB 5.6 MB/s eta 0:00:00
Collecting mistune<3,>=2.0.3
Downloading mistune-2.0.4-py2.py3-none-any.whl (24 kB)
Collecting defusedxml
Downloading defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)
Collecting tinycss2
Downloading tinycss2-1.1.1-py3-none-any.whl (21 kB)
Collecting nbformat>=5.1
Downloading nbformat-5.4.0-py3-none-any.whl (73 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 73.3/73.3 KB 8.5 MB/s eta 0:00:00
Collecting traitlets>=5.0
Downloading traitlets-5.3.0-py3-none-any.whl (106 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 106.8/106.8 KB 11.3 MB/s eta 0:00:00
Collecting pygments>=2.4.1
Downloading Pygments-2.13.0-py3-none-any.whl (1.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 25.3 MB/s eta 0:00:00
Collecting markupsafe>=2.0
Downloading MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Collecting six>=1.11.0
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting attrs>=17.4.0
Downloading attrs-22.1.0-py2.py3-none-any.whl (58 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.8/58.8 KB 8.8 MB/s eta 0:00:00
Collecting pyrsistent>=0.14.0
Downloading pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (115 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 115.8/115.8 KB 11.4 MB/s eta 0:00:00
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from jsonschema<4->jupyter-book) (59.6.0)
Collecting uc-micro-py
Downloading uc_micro_py-1.0.1-py3-none-any.whl (6.2 kB)
Collecting myst-parser~=0.15.2
Downloading myst_parser-0.15.2-py3-none-any.whl (46 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.3/46.3 KB 6.3 MB/s eta 0:00:00
Collecting jupyter-cache~=0.4.1
Downloading jupyter_cache-0.4.3-py3-none-any.whl (31 kB)
Collecting ipywidgets<8,>=7.0.0
Downloading ipywidgets-7.7.2-py2.py3-none-any.whl (123 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 123.4/123.4 KB 9.1 MB/s eta 0:00:00
Collecting nbconvert
Downloading nbconvert-6.5.3-py3-none-any.whl (563 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 563.8/563.8 KB 18.1 MB/s eta 0:00:00
Collecting importlib-metadata
Downloading importlib_metadata-4.12.0-py3-none-any.whl (21 kB)
Collecting ipython
Downloading ipython-8.5.0-py3-none-any.whl (752 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 752.0/752.0 KB 24.4 MB/s eta 0:00:00
Collecting jupyter-sphinx~=0.3.2
Downloading jupyter_sphinx-0.3.2-py3-none-any.whl (20 kB)
Collecting entrypoints>=0.2.2
Downloading entrypoints-0.4-py3-none-any.whl (5.3 kB)
Collecting nbconvert
Downloading nbconvert-6.5.2-py3-none-any.whl (563 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 563.8/563.8 KB 20.3 MB/s eta 0:00:00
Downloading nbconvert-6.5.1-py3-none-any.whl (563 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 563.8/563.8 KB 22.9 MB/s eta 0:00:00
Downloading nbconvert-6.5.0-py3-none-any.whl (561 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 561.6/561.6 KB 21.5 MB/s eta 0:00:00
Downloading nbconvert-6.4.5-py3-none-any.whl (561 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 561.4/561.4 KB 19.1 MB/s eta 0:00:00
Collecting nbclient>=0.5.0
Downloading nbclient-0.5.13-py3-none-any.whl (70 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.6/70.6 KB 8.3 MB/s eta 0:00:00
Collecting testpath
Downloading testpath-0.6.0-py3-none-any.whl (83 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.9/83.9 KB 12.1 MB/s eta 0:00:00
Collecting nbconvert
Downloading nbconvert-6.4.4-py3-none-any.whl (561 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 561.4/561.4 KB 19.8 MB/s eta 0:00:00
Downloading nbconvert-6.4.3-py3-none-any.whl (560 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 560.4/560.4 KB 23.0 MB/s eta 0:00:00
Downloading nbconvert-6.4.2-py3-none-any.whl (558 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 558.8/558.8 KB 19.0 MB/s eta 0:00:00
Downloading nbconvert-6.4.1-py3-none-any.whl (557 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 557.4/557.4 KB 21.1 MB/s eta 0:00:00
Downloading nbconvert-6.4.0-py3-none-any.whl (557 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 557.4/557.4 KB 20.0 MB/s eta 0:00:00
Downloading nbconvert-6.3.0-py3-none-any.whl (556 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 556.6/556.6 KB 22.7 MB/s eta 0:00:00
Downloading nbconvert-6.2.0-py3-none-any.whl (553 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 553.5/553.5 KB 15.9 MB/s eta 0:00:00
Downloading nbconvert-6.1.0-py3-none-any.whl (551 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 551.1/551.1 KB 15.5 MB/s eta 0:00:00
Downloading nbconvert-6.0.7-py3-none-any.whl (552 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 552.1/552.1 KB 22.5 MB/s eta 0:00:00
Downloading nbconvert-6.0.6-py3-none-any.whl (551 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 551.9/551.9 KB 20.8 MB/s eta 0:00:00
Downloading nbconvert-6.0.5-py3-none-any.whl (505 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 505.5/505.5 KB 15.3 MB/s eta 0:00:00
Downloading nbconvert-6.0.4-py3-none-any.whl (505 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 505.5/505.5 KB 21.4 MB/s eta 0:00:00
Downloading nbconvert-6.0.3-py3-none-any.whl (505 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 505.2/505.2 KB 20.7 MB/s eta 0:00:00
Downloading nbconvert-6.0.2-py3-none-any.whl (504 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 504.5/504.5 KB 20.6 MB/s eta 0:00:00
Downloading nbconvert-6.0.1-py3-none-any.whl (502 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 502.7/502.7 KB 24.5 MB/s eta 0:00:00
Downloading nbconvert-6.0.0-py3-none-any.whl (502 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 502.7/502.7 KB 18.6 MB/s eta 0:00:00
Downloading nbconvert-5.6.1-py2.py3-none-any.whl (455 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 455.1/455.1 KB 20.0 MB/s eta 0:00:00
Downloading nbconvert-5.6.0-py2.py3-none-any.whl (453 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 454.0/454.0 KB 23.3 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of myst-nb to determine which version is compatible with other requirements. This could take a while.
Collecting myst-nb~=0.13.1
Downloading myst_nb-0.13.1-py3-none-any.whl (37 kB)
Collecting sphinx_togglebutton
Downloading sphinx_togglebutton-0.2.3-py3-none-any.whl (6.1 kB)
INFO: pip is looking at multiple versions of mistune to determine which version is compatible with other requirements. This could take a while.
Collecting mistune<3,>=2.0.3
Downloading mistune-2.0.3-py2.py3-none-any.whl (24 kB)
INFO: pip is looking at multiple versions of markupsafe to determine which version is compatible with other requirements. This could take a while.
Collecting markupsafe>=2.0
Downloading MarkupSafe-2.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Downloading MarkupSafe-2.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
INFO: pip is looking at multiple versions of myst-nb to determine which version is compatible with other requirements. This could take a while.
Downloading MarkupSafe-2.0.0.tar.gz (18 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: pip is looking at multiple versions of linkify-it-py to determine which version is compatible with other requirements. This could take a while.
Collecting linkify-it-py~=1.0.1
Downloading linkify_it_py-1.0.2-py3-none-any.whl (19 kB)
INFO: pip is looking at multiple versions of markupsafe to determine which version is compatible with other requirements. This could take a while.
Downloading linkify_it_py-1.0.1-py3-none-any.whl (19 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: pip is looking at multiple versions of jupyter-core to determine which version is compatible with other requirements. This could take a while.
Collecting jupyter-core>=4.7
Downloading jupyter_core-4.10.0-py3-none-any.whl (87 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.3/87.3 KB 10.3 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of linkify-it-py to determine which version is compatible with other requirements. This could take a while.
Downloading jupyter_core-4.9.2-py3-none-any.whl (86 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.9/86.9 KB 9.4 MB/s eta 0:00:00
Downloading jupyter_core-4.9.1-py3-none-any.whl (86 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.7/86.7 KB 10.9 MB/s eta 0:00:00
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Downloading jupyter_core-4.9.0-py3-none-any.whl (86 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.6/86.6 KB 10.0 MB/s eta 0:00:00
Downloading jupyter_core-4.8.2-py3-none-any.whl (86 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.1/86.1 KB 6.4 MB/s eta 0:00:00
Downloading jupyter_core-4.8.1-py3-none-any.whl (86 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.1/86.1 KB 7.9 MB/s eta 0:00:00
Downloading jupyter_core-4.8.0-py3-none-any.whl (86 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.1/86.1 KB 7.3 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of jupyter-core to determine which version is compatible with other requirements. This could take a while.
Downloading jupyter_core-4.7.1-py3-none-any.whl (82 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 82.8/82.8 KB 10.3 MB/s eta 0:00:00
Downloading jupyter_core-4.7.0-py3-none-any.whl (82 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 82.8/82.8 KB 9.3 MB/s eta 0:00:00
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: pip is looking at multiple versions of mistune to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of jsonschema to determine which version is compatible with other requirements. This could take a while.
Collecting jsonschema<4
Downloading jsonschema-3.1.1-py2.py3-none-any.whl (56 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.1/56.1 KB 7.3 MB/s eta 0:00:00
Downloading jsonschema-3.1.0-py2.py3-none-any.whl (56 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.1/56.1 KB 6.4 MB/s eta 0:00:00
Collecting js-regex>=1.0.0
Downloading js_regex-1.0.1-py3-none-any.whl (12 kB)
Collecting jsonschema<4
Downloading jsonschema-3.0.2-py2.py3-none-any.whl (54 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.7/54.7 KB 5.3 MB/s eta 0:00:00
Downloading jsonschema-3.0.1-py2.py3-none-any.whl (54 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.4/54.4 KB 1.5 MB/s eta 0:00:00
Downloading jsonschema-3.0.0-py2.py3-none-any.whl (54 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.0/54.0 KB 1.5 MB/s eta 0:00:00
Downloading jsonschema-2.6.0-py2.py3-none-any.whl (39 kB)
Downloading jsonschema-2.5.1-py2.py3-none-any.whl (38 kB)
INFO: pip is looking at multiple versions of jsonschema to determine which version is compatible with other requirements. This could take a while.
Downloading jsonschema-2.5.0.zip (81 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.1/81.1 KB 6.8 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-2.4.0-py2.py3-none-any.whl (37 kB)
Downloading jsonschema-2.3.0-py2.py3-none-any.whl (32 kB)
Downloading jsonschema-2.2.0.zip (65 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.9/65.9 KB 1.7 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-2.1.0.zip (65 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.7/65.7 KB 1.7 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Downloading jsonschema-2.0.0.zip (65 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.1/65.1 KB 1.7 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-1.3.0.zip (57 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.8/57.8 KB 7.2 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-1.2.0.zip (56 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.6/56.6 KB 5.0 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-1.1.0.zip (55 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 55.4/55.4 KB 1.4 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-1.0.0.zip (54 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.3/54.3 KB 1.4 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-0.8.0.zip (26 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-0.7.zip (18 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-0.6.zip (16 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-0.5.zip (15 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-0.4.zip (17 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-0.3.zip (13 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Downloading jsonschema-0.2.zip (9.6 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [10 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-ikt95p1i/jsonschema_0010d986d18a495f850a1d6c5380d903/setup.py", line 4, in <module>
from jsonschema import __version__
File "/tmp/pip-install-ikt95p1i/jsonschema_0010d986d18a495f850a1d6c5380d903/jsonschema.py", line 63, in <module>
class Validator(object):
File "/tmp/pip-install-ikt95p1i/jsonschema_0010d986d18a495f850a1d6c5380d903/jsonschema.py", line 82, in Validator
unknown_property="error", string_types=basestring,
NameError: name 'basestring' is not defined
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
The command '/bin/sh -c pip3 install --upgrade jupyter-book nbconvert' returned a non-zero code: 1
```
### List your environment
```docker
FROM ubuntu:22.04
RUN export DEBIAN_FRONTEND=noninteractive && \
apt-get -qq update && \
apt-get -yq --with-new-pkgs -o Dpkg::Options::="--force-confold" upgrade && \
apt-get -y install \
git \
python3-pip
RUN pip3 install --upgrade jupyter-book nbconvert
``` | closed | 2022-09-09T11:05:06Z | 2023-03-01T12:15:03Z | https://github.com/jupyter-book/jupyter-book/issues/1831 | [
"bug"
] | jorgensd | 4 |
python-gitlab/python-gitlab | api | 2,895 | Namespace Error / Namespace Ignored | ## Description of the problem, including code/CLI snippet
I am getting started with this library and most things work quite well.
I cannot say the same about namespaces/groups, though.
Code according to the documentation:
```
# Get Group / $Namespace ID based on Kernel
group_id = gl.groups.list(search=minion_kernel)[0].id
# If no Projects are found
if not search_project:
# Echo
print(f"\t Create new Project {minion_name} in Namespace {minion_kernel} (Group ID {group_id})")
# Create a new Project
new_project = current_user.projects.create({
"name": minion_name ,
"path": minion_name ,
"namespace_id": int(group_id)
})
# Update Project
current_project = gl.projects.list(name=minion_name , namespace=minion_kernel , get_all=True)[0]
print(current_project)
```
Result:
```
Create new Project backup01 in Namespace linux (Group ID 5)
Traceback (most recent call last):
File "/opt/gitlab-python/lib/python3.11/site-packages/gitlab/exceptions.py", line 340, in wrapped_f
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/gitlab-python/lib/python3.11/site-packages/gitlab/mixins.py", line 301, in create
server_data = self.gitlab.http_post(path, post_data=data, files=files, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/gitlab-python/lib/python3.11/site-packages/gitlab/client.py", line 1000, in http_post
result = self.http_request(
^^^^^^^^^^^^^^^^^^
File "/opt/gitlab-python/lib/python3.11/site-packages/gitlab/client.py", line 793, in http_request
raise gitlab.exceptions.GitlabHttpError(
gitlab.exceptions.GitlabHttpError: 400: {'namespace': ['is not valid']}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/srv/scripts-nas/gitlab/./setup_gitconfigbackup_api.py", line 212, in <module>
setup_minion(id=minion , kernel=kernel)
File "/srv/scripts-nas/gitlab/./setup_gitconfigbackup_api.py", line 162, in setup_minion
new_project = current_user.projects.create({
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/gitlab-python/lib/python3.11/site-packages/gitlab/exceptions.py", line 342, in wrapped_f
raise error(e.error_message, e.response_code, e.response_body) from e
gitlab.exceptions.GitlabCreateError: 400: {'namespace': ['is not valid']}
```
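For context, the `GitlabCreateError` above is consistent with creating through the user-scoped manager (`current_user.projects`), which targets the user's own namespace; a commonly reported workaround is to go through the top-level manager instead, i.e. `gl.projects.create(...)` with `namespace_id`. A minimal, hedged sketch of just the payload, reusing names from the snippet above:

```python
# Sketch: payload for creating a project inside a group namespace.
# In python-gitlab this would typically be passed to the top-level
# manager, e.g. `gl.projects.create(payload)`; the user-scoped
# `current_user.projects.create()` targets the user namespace instead.

def group_project_payload(name, group_id):
    """Build the create-project payload for group `group_id`."""
    return {"name": name, "path": name, "namespace_id": int(group_id)}

payload = group_project_payload("backup01", 5)
print(payload)  # {'name': 'backup01', 'path': 'backup01', 'namespace_id': 5}
```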
If I use `namespace` instead of `namespace_id`:
```
# Get Group / $Namespace ID based on Kernel
group_id = gl.groups.list(search=minion_kernel)[0].id
# If no Projects are found
if not search_project:
# Echo
print(f"\t Create new Project {minion_name} in Namespace {minion_kernel} (Group ID {group_id})")
# Create a new Project
new_project = current_user.projects.create({
"name": minion_name ,
"path": minion_name ,
"namespace": minion_kernel
})
# Update Project
current_project = gl.projects.list(name=minion_name , namespace=minion_kernel , get_all=True)[0]
print(current_project)
```
Then I get:
```
Create new Project backup01 in Namespace linux (Group ID 5)
<class 'gitlab.v4.objects.projects.Project'> => {'id': 81, 'description': None, 'name': 'backup01', 'name_with_namespace': 'backup01 / backup01', 'path': 'backup01', 'path_with_namespace': 'backup01/backup01', 'created_at': '2024-06-16T18:37:10.670Z', 'default_branch': 'main', 'tag_list': [], 'topics': [], 'ssh_url_to_repo': 'gitconfigbackup-server@gitconfigbackup.MYDOMAIN.TLD:backup01/backup01.git', 'http_url_to_repo': 'https://gitconfigbackup.MYDOMAIN.TLD/backup01/backup01.git', 'web_url': 'https://gitconfigbackup.MYDOMAIN.TLD/backup01/backup01', 'readme_url': None, 'forks_count': 0, 'avatar_url': None, 'star_count': 0, 'last_activity_at': '2024-06-16T18:37:10.629Z', 'namespace': {'id': 112, 'name': 'backup01', 'path': 'backup01', 'kind': 'user', 'full_path': 'backup01', 'parent_id': None, 'avatar_url': 'https://secure.gravatar.com/avatar/59fbdfd1a3557aae106cdac5ada63c6277ad381f8e3cfe740286098da9675438?s=80&d=identicon', 'web_url': 'https://gitconfigbackup.MYDOMAIN.TLD/backup01'}, 'repository_storage': 'default', '_links': {'self': 'https://gitconfigbackup.MYDOMAIN.TLD/api/v4/projects/81', 'issues': 'https://gitconfigbackup.MYDOMAIN.TLD/api/v4/projects/81/issues', 'merge_requests': 'https://gitconfigbackup.MYDOMAIN.TLD/api/v4/projects/81/merge_requests', 'repo_branches': 'https://gitconfigbackup.MYDOMAIN.TLD/api/v4/projects/81/repository/branches', 'labels': 'https://gitconfigbackup.MYDOMAIN.TLD/api/v4/projects/81/labels', 'events': 'https://gitconfigbackup.MYDOMAIN.TLD/api/v4/projects/81/events', 'members': 'https://gitconfigbackup.MYDOMAIN.TLD/api/v4/projects/81/members', 'cluster_agents': 'https://gitconfigbackup.MYDOMAIN.TLD/api/v4/projects/81/cluster_agents'}, 'packages_enabled': True, 'empty_repo': True, 'archived': False, 'visibility': 'private', 'owner': {'id': 56, 'username': 'backup01', 'name': 'backup01', 'state': 'active', 'locked': False, 'avatar_url': 
'https://secure.gravatar.com/avatar/59fbdfd1a3557aae106cdac5ada63c6277ad381f8e3cfe740286098da9675438?s=80&d=identicon', 'web_url': 'https://gitconfigbackup.MYDOMAIN.TLD/backup01'}, 'resolve_outdated_diff_discussions': False, 'container_expiration_policy': {'cadence': '1d', 'enabled': False, 'keep_n': 10, 'older_than': '90d', 'name_regex': '.*', 'name_regex_keep': None, 'next_run_at': '2024-06-17T18:37:10.680Z'}, 'repository_object_format': 'sha1', 'issues_enabled': True, 'merge_requests_enabled': True, 'wiki_enabled': True, 'jobs_enabled': True, 'snippets_enabled': True, 'container_registry_enabled': True, 'service_desk_enabled': False, 'service_desk_address': None, 'can_create_merge_request_in': True, 'issues_access_level': 'enabled', 'repository_access_level': 'enabled', 'merge_requests_access_level': 'enabled', 'forking_access_level': 'enabled', 'wiki_access_level': 'enabled', 'builds_access_level': 'enabled', 'snippets_access_level': 'enabled', 'pages_access_level': 'private', 'analytics_access_level': 'enabled', 'container_registry_access_level': 'enabled', 'security_and_compliance_access_level': 'private', 'releases_access_level': 'enabled', 'environments_access_level': 'enabled', 'feature_flags_access_level': 'enabled', 'infrastructure_access_level': 'enabled', 'monitor_access_level': 'enabled', 'model_experiments_access_level': 'enabled', 'model_registry_access_level': 'enabled', 'emails_disabled': False, 'emails_enabled': True, 'shared_runners_enabled': True, 'lfs_enabled': True, 'creator_id': 56, 'import_url': None, 'import_type': None, 'import_status': 'none', 'open_issues_count': 0, 'description_html': '', 'updated_at': '2024-06-16T18:37:10.670Z', 'ci_default_git_depth': 20, 'ci_forward_deployment_enabled': True, 'ci_forward_deployment_rollback_allowed': True, 'ci_job_token_scope_enabled': False, 'ci_separated_caches': True, 'ci_allow_fork_pipelines_to_run_in_parent_project': True, 'build_git_strategy': 'fetch', 'keep_latest_artifact': True, 
'restrict_user_defined_variables': False, 'runners_token': None, 'runner_token_expiration_interval': None, 'group_runners_enabled': True, 'auto_cancel_pending_pipelines': 'enabled', 'build_timeout': 3600, 'auto_devops_enabled': True, 'auto_devops_deploy_strategy': 'continuous', 'ci_config_path': None, 'public_jobs': True, 'shared_with_groups': [], 'only_allow_merge_if_pipeline_succeeds': False, 'allow_merge_on_skipped_pipeline': None, 'request_access_enabled': True, 'only_allow_merge_if_all_discussions_are_resolved': False, 'remove_source_branch_after_merge': True, 'printing_merge_request_link_enabled': True, 'merge_method': 'merge', 'squash_option': 'default_off', 'enforce_auth_checks_on_uploads': True, 'suggestion_commit_message': None, 'merge_commit_template': None, 'squash_commit_template': None, 'issue_branch_template': None, 'warn_about_potentially_unwanted_characters': True, 'autoclose_referenced_issues': True, 'permissions': {'project_access': None, 'group_access': None}}
```
## Expected Behavior
Able to create a Project in the Requested Namespace.
## Actual Behavior
Either an Error Occurs (if Code is according to the Documentation), or the `namespace` is completely ignored and the `name` of the Project is used instead.
I attempted to fix this later by trying to Update the Project:
```
# Update Project
current_project = gl.projects.list(name=minion_name , namespace=minion_kernel , get_all=True)[0]
print(current_project)
# Attempt to Update Project
current_project.name = minion_name
current_project.path = minion_name
current_project.path_with_namespace = f"{minion_kernel}/{minion_name}"
current_project.namespace = minion_kernel
current_project.save()
```
This however would not do anything :disappointed:.
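For what it's worth, the GitLab REST API (`POST /projects`) selects the target group via a numeric `namespace_id` field, and python-gitlab forwards the create dict largely as-is, so a namespace *path* string would be silently ignored, which matches the behaviour above. A sketch of the payload shape (the group ID 5 is taken from the log line above; everything else is illustrative):

```python
minion_name = "backup01"
group_id = 5  # "Group ID 5" from the log output above

# Payload shape for gl.projects.create(...): the numeric namespace_id is what
# selects the group; a namespace *path* string is not a recognized field here.
payload = {
    "name": minion_name,
    "path": minion_name,
    "namespace_id": group_id,
}
print(sorted(payload))  # ['name', 'namespace_id', 'path']
```

Resolving the group first (e.g. `gl.groups.list(search=...)`) and passing its `.id` as `namespace_id` is the documented route; passing the group's path under `namespace` is not.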
## Specifications
- python-gitlab version: `4.6.0`
- API version you are using (v3/v4): Not sure (default ?)
- Gitlab server version (or gitlab.com): self-hosted `gitlab-ce 17.0.2-ce.0`
| closed | 2024-06-16T18:40:02Z | 2024-07-06T11:32:21Z | https://github.com/python-gitlab/python-gitlab/issues/2895 | [
"support"
] | luckylinux | 9 |
dgtlmoon/changedetection.io | web-scraping | 2,597 | [bug?] on one site - Exception: 'utf-8' codec can't encode character '\ud83d' - surrogates not allowed | **Describe the bug**
When checking for changes, changedetection reports the following error:
`Exception: 'utf-8' codec can't encode character '\ud83d' in position 419201: surrogates not allowed`. From what I gather, "\ud83d" is a lone UTF-16 high surrogate (the first half of a surrogate pair that usually encodes an emoji), yet I can't find an emoji on the site I'm monitoring. I also checked the site's source code and couldn't find anything.
This is what the log says:
```
2024-08-29 08:09:30.784 | INFO | changedetectionio.update_worker:run:255 - Processing watch UUID 57a8a843-ba19-4f8b-9588-8ed7df2601dc Priority 1 URL https://www.pourmoi.co.uk/products/india-lace-plunge-body/
2024-08-29 08:09:30.785 | WARNING | changedetectionio.processors:call_browser:73 - Using playwright fetcher override for possible puppeteer request in browsersteps, because puppetteer:browser steps is incomplete.
2024-08-29 08:09:40.907 | ERROR | changedetectionio.update_worker:run:477 - Exception reached processing watch UUID: 57a8a843-ba19-4f8b-9588-8ed7df2601dc
2024-08-29 08:09:40.907 | ERROR | changedetectionio.update_worker:run:478 - 'utf-8' codec can't encode character '\ud83d' in position 419201: surrogates not allowed
```
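For background: a lone surrogate like `'\ud83d'` is legal inside a Python `str` but cannot be encoded to UTF-8, which is exactly the exception above. A minimal, site-independent reproduction (the sanitising step at the end is just one generic option, not necessarily what changedetection should do):

```python
text = "ok \ud83d broken"  # a lone high surrogate, like the one in the error

try:
    text.encode("utf-8")
except UnicodeEncodeError as e:
    print(e.reason)  # surrogates not allowed

# One way to sanitise scraped text before encoding:
clean = text.encode("utf-8", errors="replace").decode("utf-8")
print(clean)  # ok ? broken
```

This would explain why nothing visible appears on the page: the offending character is half of a character, likely produced by whatever extracted or truncated the page text.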
**Version**
v0.46.03
**To Reproduce**
Steps to reproduce the behavior:
1. Click on recheck.
https://changedetection.io/share/i8HXmCg5dIga
**Expected behavior**
The check should complete without errors.
**Desktop (please complete the following information):**
- OS: Unraid
- Browser Chrome "browserless"
- Version unknown
**Additional context**
Interestingly enough, when going through the browser steps myself, no error is reported and a screenshot of the site is saved. Removing the browser steps does not prevent the error from appearing though.
| open | 2024-08-29T06:18:33Z | 2024-09-02T09:37:09Z | https://github.com/dgtlmoon/changedetection.io/issues/2597 | [
"triage"
] | Pillendreher | 5 |
deeppavlov/DeepPavlov | nlp | 1,562 | how to solve this error RuntimeError: CUDA out of memory. Tried to allocate 352.00 MiB (GPU 0; 3.94 GiB total capacity; 2.64 GiB already allocated; 229.25 MiB free; 2.79 GiB reserved in total by PyTorch) |
from deeppavlov import build_model, configs
ner_model = build_model(configs.ner.ner_ontonotes_bert_mult_torch, download=True)  # this line raises the error below
RuntimeError: CUDA out of memory. Tried to allocate 352.00 MiB (GPU 0; 3.94 GiB total capacity; 2.64 GiB already allocated; 229.25 MiB free; 2.79 GiB reserved in total by PyTorch)
ner_model1 = build_model(configs.ner.ner_few_shot_ru_simulate, download=True)
This model instead fails with a `No module named 'tensorflow_hub'` error. I installed the library and the versions check out, but the error is the same.
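A generic workaround when a BERT-sized model does not fit into ~4 GiB of GPU memory is to force CPU execution: slower, but it avoids the allocator failure. Hiding the GPUs must happen before `torch` (and therefore DeepPavlov) is imported; this is a general CUDA mechanism, not a DeepPavlov-specific API:

```python
import os

# Hide all CUDA devices so the torch backend falls back to CPU.
# This must run before `from deeppavlov import build_model, configs`.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

print("CUDA devices visible:", os.environ["CUDA_VISIBLE_DEVICES"] or "none")
```

With the GPU hidden, `build_model(...)` should load the weights into system RAM instead of the 4 GiB card.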
| closed | 2022-05-09T10:01:34Z | 2022-05-23T10:16:12Z | https://github.com/deeppavlov/DeepPavlov/issues/1562 | [
"bug"
] | Balu2311 | 1 |
JaidedAI/EasyOCR | deep-learning | 1,025 | Baselining inference times of various Word detectors. | I need to make a trade-off between the inference time and accuracy of the various detection models.
I tried docTR on a sample set and it gave the following inference times:
real 0m4.963s
user 0m13.920s
sys 0m2.695s
And with CRAFT, I got
real 0m7.633s
user 0m6.958s
sys 0m2.356s
How should I read these numbers? Also, can someone suggest some good architectures that I can try that are also fast?
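On reading `time` output: `real` is wall-clock latency (what a user actually waits), while `user` + `sys` is total CPU time summed across all threads. So docTR's `user` being roughly 3x its `real` suggests it parallelises across cores, whereas CRAFT here looks roughly single-threaded. For an apples-to-apples comparison that excludes interpreter startup and model-load time, it is usually better to time the detection call in-process, e.g.:

```python
import time

def benchmark(fn, repeats=3):
    """Best wall-clock time over `repeats` runs (the in-process analogue of `real`)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()  # e.g. lambda: reader.detect(image), with the model already loaded
        best = min(best, time.perf_counter() - start)
    return best

elapsed = benchmark(lambda: sum(range(100_000)))
print(f"best of 3: {elapsed:.6f}s")
```

Timing only the `detect`/`readtext` call per image, after a warm-up run, gives numbers that transfer much better between detectors than whole-process `time`.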
PS: all the SOTA ones are GPU heavy
PPS: I know this is not the most apt place to have this discussion, but then again, I do not know of any other place either. | open | 2023-05-21T10:23:13Z | 2023-05-21T10:23:13Z | https://github.com/JaidedAI/EasyOCR/issues/1025 | [] | ceyxasm | 0 |
rio-labs/rio | data-visualization | 165 | [BUG]: Visible Traceback on Linux when Exiting | ### Describe the bug
When closing an app started with `rio run` via `Ctrl+C`, sometimes, but not always, an ugly traceback shows up in the terminal. Everything still works fine, but it is rather confusing if you don't know that it can be safely ignored. This has been seen on Linux, but never on Windows; the status on macOS is unknown.
```
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jakob/Local/rio/.venv/lib/python3.10/site-packages/starlette/routing.py", line 732, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "/home/jakob/.rye/py/cpython@3.10.13/lib/python3.10/contextlib.py", line 206, in __aexit__
await anext(self.gen)
File "/home/jakob/Local/rio/rio/app_server/fastapi_server.py", line 359, in _lifespan
await self._on_close()
File "/home/jakob/Local/rio/rio/app_server/abstract_app_server.py", line 101, in _on_close
results = await asyncio.gather(
asyncio.exceptions.CancelledError
```
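For context, the mechanism behind the noise: `Ctrl+C` cancels the event loop while `asyncio.gather` in `_on_close` is still awaiting shutdown tasks, and the `CancelledError` propagates out of the lifespan handler. One generic way libraries silence this (a sketch of the asyncio behaviour, not a claim about rio's actual fix) is gathering with `return_exceptions=True`, which captures a child's cancellation as a result instead of re-raising it:

```python
import asyncio

async def on_close_task():
    await asyncio.sleep(10)  # stand-in for per-session cleanup work

async def main():
    task = asyncio.create_task(on_close_task())
    await asyncio.sleep(0)  # let the task start
    task.cancel()           # what Ctrl+C effectively does mid-shutdown
    results = await asyncio.gather(task, return_exceptions=True)
    print(type(results[0]).__name__)  # CancelledError, captured rather than raised

asyncio.run(main())
```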
### Steps to Reproduce
- Run an app using `rio run`
- Ctrl-C
Retry a couple times if it doesn't happen. Seems to be more reliable after the app has reloaded.
### Screenshots/Videos
_No response_
### Operating System
Linux
### What browsers are you seeing the problem on?
_No response_
### Browser version
_No response_
### What device are you using?
Desktop
### Additional context
_No response_ | open | 2024-11-13T20:25:20Z | 2024-12-05T20:32:28Z | https://github.com/rio-labs/rio/issues/165 | [
"bug"
] | mad-moo | 1 |
computationalmodelling/nbval | pytest | 34 | Funding acknowledgement in Readme? | Should we include the EU funding? | closed | 2017-01-23T10:24:23Z | 2017-01-28T22:50:04Z | https://github.com/computationalmodelling/nbval/issues/34 | [] | mikecroucher | 1 |
kubeflow/katib | scikit-learn | 2,492 | SDK Error: the namespace lacks label \"katib.kubeflow.org/metrics-collector-injection: enabled\" | ### What happened?
When I try to use the `tune` API in the Katib Python SDK to run the [getting started example](https://www.kubeflow.org/docs/components/katib/getting-started/), it returns the following error message:
```
kubernetes.client.exceptions.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '7775619d-f51b-40c5-a978-9f605e902582', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '0982521a-d333-4c33-a7f1-0e7d2819c6e9', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'ddf2fccb-7ad4-44ef-9a0c-b725ea4099a9', 'Date': 'Fri, 17 Jan 2025 03:46:16 GMT', 'Content-Length': '336'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"admission webhook \"validator.experiment.katib.kubeflow.org\" denied the request: Cannot create the Experiment \"tune-experiment\" in namespace \"kubeflow\": the namespace lacks label \"katib.kubeflow.org/metrics-collector-injection: enabled\"","code":400}
```
### What did you expect to happen?
Successfully create and finish the Experiment.
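The webhook message names the missing label explicitly, so the likely fix is to add that label to the namespace the Experiment is created in (label key and value taken verbatim from the error; the assumption below is that `kubeflow` is that namespace):

```yaml
# Namespace manifest carrying the label the validating webhook checks for
apiVersion: v1
kind: Namespace
metadata:
  name: kubeflow
  labels:
    katib.kubeflow.org/metrics-collector-injection: enabled
```

Equivalently, `kubectl label namespace kubeflow katib.kubeflow.org/metrics-collector-injection=enabled` on the existing namespace should satisfy the check.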
### Environment
Kubernetes version:
```bash
$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
```
Katib controller version:
```bash
$ kubectl get pods -n kubeflow -l katib.kubeflow.org/component=controller -o jsonpath="{.items[*].spec.containers[*].image}"
docker.io/kubeflowkatib/katib-controller:latest%
```
Katib Python SDK version:
```bash
$ pip show kubeflow-katib
Name: kubeflow-katib
Version: 0.17.0
Summary: Katib Python SDK for APIVersion v1beta1
Home-page: https://github.com/kubeflow/katib/tree/master/sdk/python/v1beta1
Author: Kubeflow Authors
Author-email: premnath.vel@gmail.com
License: Apache License Version 2.0
Location: /opt/homebrew/anaconda3/envs/katib-llm-test/lib/python3.12/site-packages
Requires: certifi, grpcio, kubernetes, protobuf, setuptools, six, urllib3
Required-by:
```
### Impacted by this bug?
Give it a 👍 We prioritize the issues with most 👍 | closed | 2025-01-17T04:19:17Z | 2025-02-06T15:38:11Z | https://github.com/kubeflow/katib/issues/2492 | [
"help wanted",
"good first issue",
"area/api",
"kind/bug"
] | helenxie-bit | 5 |
huggingface/datasets | computer-vision | 7,233 | Problem with the number of dataset records | ### Describe the bug
I am fine-tuning a large model. When the dataset has 718 records, fine-tuning works normally, but as soon as I add one more record, either a copy of a record already among the first 718 or a brand-new one, an error is raised.
### Steps to reproduce the bug
1.
The last two records of the dataset that still fine-tunes successfully are:
{
"messages": [
{
"role": "user",
"content": "完成校正装置设计后需要进行哪些工作?"
},
{
"role": "assistant",
"content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。"
}
]
}
{
"messages": [
{
"role": "user",
"content": "为什么闭环带宽要求会影响串联超前校正的效果?"
},
{
"role": "assistant",
"content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。"
}
]
}
2.
When I append a copy of the second-to-last record to the dataset, the last three records become:
{
"messages": [
{
"role": "user",
"content": "完成校正装置设计后需要进行哪些工作?"
},
{
"role": "assistant",
"content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。"
}
]
}
{
"messages": [
{
"role": "user",
"content": "为什么闭环带宽要求会影响串联超前校正的效果?"
},
{
"role": "assistant",
"content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。"
}
]
}
{
"messages": [
{
"role": "user",
"content": "完成校正装置设计后需要进行哪些工作?"
},
{
"role": "assistant",
"content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。"
}
]
}
At this point the system reports the following error:
root@autodl-container-027f4cad3d-6baf4e64:~/autodl-tmp# python GLM-4/finetune_demo/finetune.py datasets/ ZhipuAI/glm-4-9b-chat GLM-4/finetune_demo/configs/lora.yaml
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.04it/s]
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
trainable params: 2,785,280 || all params: 9,402,736,640 || trainable%: 0.0296
Generating train split: 0 examples [00:00, ? examples/s]Failed to load JSON from file '/root/autodl-tmp/datasets/train.jsonl' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 718
Generating train split: 0 examples [00:00, ? examples/s]
╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:153 in _generate_tables │
│ │
│ 150 │ │ │ │ │ │ │ │ with open( │
│ 151 │ │ │ │ │ │ │ │ │ file, encoding=self.config.encoding, errors=self.con │
│ 152 │ │ │ │ │ │ │ │ ) as f: │
│ ❱ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │
│ 154 │ │ │ │ │ │ │ except ValueError: │
│ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │
│ 156 │ │ │ │ │ │ │ │ raise e │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:815 in read_json │
│ │
│ 812 │ if chunksize: │
│ 813 │ │ return json_reader │
│ 814 │ else: │
│ ❱ 815 │ │ return json_reader.read() │
│ 816 │
│ 817 │
│ 818 class JsonReader(abc.Iterator, Generic[FrameSeriesStrT]): │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1025 in read │
│ │
│ 1022 │ │ │ │ │ │ data_lines = data.split("\n") │
│ 1023 │ │ │ │ │ │ obj = self._get_object_parser(self._combine_lines(data_lines)) │
│ 1024 │ │ │ │ else: │
│ ❱ 1025 │ │ │ │ │ obj = self._get_object_parser(self.data) │
│ 1026 │ │ │ │ if self.dtype_backend is not lib.no_default: │
│ 1027 │ │ │ │ │ return obj.convert_dtypes( │
│ 1028 │ │ │ │ │ │ infer_objects=False, dtype_backend=self.dtype_backend │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1051 in _get_object_parser │
│ │
│ 1048 │ │ } │
│ 1049 │ │ obj = None │
│ 1050 │ │ if typ == "frame": │
│ ❱ 1051 │ │ │ obj = FrameParser(json, **kwargs).parse() │
│ 1052 │ │ │
│ 1053 │ │ if typ == "series" or obj is None: │
│ 1054 │ │ │ if not isinstance(dtype, bool): │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1187 in parse │
│ │
│ 1184 │ │
│ 1185 │ @final │
│ 1186 │ def parse(self): │
│ ❱ 1187 │ │ self._parse() │
│ 1188 │ │ │
│ 1189 │ │ if self.obj is None: │
│ 1190 │ │ │ return None │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1403 in _parse │
│ │
│ 1400 │ │ │
│ 1401 │ │ if orient == "columns": │
│ 1402 │ │ │ self.obj = DataFrame( │
│ ❱ 1403 │ │ │ │ ujson_loads(json, precise_float=self.precise_float), dtype=None │
│ 1404 │ │ │ ) │
│ 1405 │ │ elif orient == "split": │
│ 1406 │ │ │ decoded = { │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Trailing data
During handling of the above exception, another exception occurred:
╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1997 in _prepare_split_single │
│ │
│ 1994 │ │ │ ) │
│ 1995 │ │ │ try: │
│ 1996 │ │ │ │ _time = time.time() │
│ ❱ 1997 │ │ │ │ for _, table in generator: │
│ 1998 │ │ │ │ │ if max_shard_size is not None and writer._num_bytes > max_shard_size │
│ 1999 │ │ │ │ │ │ num_examples, num_bytes = writer.finalize() │
│ 2000 │ │ │ │ │ │ writer.close() │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:156 in _generate_tables │
│ │
│ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │
│ 154 │ │ │ │ │ │ │ except ValueError: │
│ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │
│ ❱ 156 │ │ │ │ │ │ │ │ raise e │
│ 157 │ │ │ │ │ │ │ if df.columns.tolist() == [0]: │
│ 158 │ │ │ │ │ │ │ │ df.columns = list(self.config.features) if self.config.f │
│ 159 │ │ │ │ │ │ │ try: │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:130 in _generate_tables │
│ │
│ 127 │ │ │ │ │ │ try: │
│ 128 │ │ │ │ │ │ │ while True: │
│ 129 │ │ │ │ │ │ │ │ try: │
│ ❱ 130 │ │ │ │ │ │ │ │ │ pa_table = paj.read_json( │
│ 131 │ │ │ │ │ │ │ │ │ │ io.BytesIO(batch), read_options=paj.ReadOptions( │
│ 132 │ │ │ │ │ │ │ │ │ ) │
│ 133 │ │ │ │ │ │ │ │ │ break │
│ │
│ in pyarrow._json.read_json:308 │
│ │
│ in pyarrow.lib.pyarrow_internal_check_status:154 │
│ │
│ in pyarrow.lib.check_status:91 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ArrowInvalid: JSON parse error: Missing a name for object member. in row 718
The above exception was the direct cause of the following exception:
╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:406 in main │
│ │
│ 403 ): │
│ 404 │ ft_config = FinetuningConfig.from_file(config_file) │
│ 405 │ tokenizer, model = load_tokenizer_and_model(model_dir, peft_config=ft_config.peft_co │
│ ❱ 406 │ data_manager = DataManager(data_dir, ft_config.data_config) │
│ 407 │ │
│ 408 │ train_dataset = data_manager.get_dataset( │
│ 409 │ │ Split.TRAIN, │
│ │
│ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:204 in __init__ │
│ │
│ 201 │ def __init__(self, data_dir: str, data_config: DataConfig): │
│ 202 │ │ self._num_proc = data_config.num_proc │
│ 203 │ │ │
│ ❱ 204 │ │ self._dataset_dct = _load_datasets( │
│ 205 │ │ │ data_dir, │
│ 206 │ │ │ data_config.data_format, │
│ 207 │ │ │ data_config.data_files, │
│ │
│ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:189 in _load_datasets │
│ │
│ 186 │ │ num_proc: Optional[int], │
│ 187 ) -> DatasetDict: │
│ 188 │ if data_format == '.jsonl': │
│ ❱ 189 │ │ dataset_dct = load_dataset( │
│ 190 │ │ │ data_dir, │
│ 191 │ │ │ data_files=data_files, │
│ 192 │ │ │ split=None, │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/load.py:2616 in load_dataset │
│ │
│ 2613 │ │ return builder_instance.as_streaming_dataset(split=split) │
│ 2614 │ │
│ 2615 │ # Download and prepare data │
│ ❱ 2616 │ builder_instance.download_and_prepare( │
│ 2617 │ │ download_config=download_config, │
│ 2618 │ │ download_mode=download_mode, │
│ 2619 │ │ verification_mode=verification_mode, │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1029 in download_and_prepare │
│ │
│ 1026 │ │ │ │ │ │ │ prepare_split_kwargs["max_shard_size"] = max_shard_size │
│ 1027 │ │ │ │ │ │ if num_proc is not None: │
│ 1028 │ │ │ │ │ │ │ prepare_split_kwargs["num_proc"] = num_proc │
│ ❱ 1029 │ │ │ │ │ │ self._download_and_prepare( │
│ 1030 │ │ │ │ │ │ │ dl_manager=dl_manager, │
│ 1031 │ │ │ │ │ │ │ verification_mode=verification_mode, │
│ 1032 │ │ │ │ │ │ │ **prepare_split_kwargs, │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1124 in _download_and_prepare │
│ │
│ 1121 │ │ │ │
│ 1122 │ │ │ try: │
│ 1123 │ │ │ │ # Prepare split will record examples associated to the split │
│ ❱ 1124 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │
│ 1125 │ │ │ except OSError as e: │
│ 1126 │ │ │ │ raise OSError( │
│ 1127 │ │ │ │ │ "Cannot find data file. " │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1884 in _prepare_split │
│ │
│ 1881 │ │ │ gen_kwargs = split_generator.gen_kwargs │
│ 1882 │ │ │ job_id = 0 │
│ 1883 │ │ │ with pbar: │
│ ❱ 1884 │ │ │ │ for job_id, done, content in self._prepare_split_single( │
│ 1885 │ │ │ │ │ gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args │
│ 1886 │ │ │ │ ): │
│ 1887 │ │ │ │ │ if done: │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:2040 in _prepare_split_single │
│ │
│ 2037 │ │ │ │ e = e.__context__ │
│ 2038 │ │ │ if isinstance(e, DatasetGenerationError): │
│ 2039 │ │ │ │ raise │
│ ❱ 2040 │ │ │ raise DatasetGenerationError("An error occurred while generating the dataset │
│ 2041 │ │ │
│ 2042 │ │ yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_ │
│ 2043 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
DatasetGenerationError: An error occurred while generating the dataset
3. Could you please help me resolve this?
### Expected behavior
I hope the problem can be resolved.
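A note on the repro above: pyarrow's `JSON parse error ... in row 718` usually means the appended record broke the one-object-per-line JSONL convention, for example a record pretty-printed across multiple lines, a trailing comma, or a stray brace. A quick dependency-free way to find the first malformed line in a file like `train.jsonl` (the demo below writes a throwaway file instead of touching your data):

```python
import json
import os
import tempfile

def find_bad_lines(path):
    """Yield (line_number, error) for every non-empty line that is not valid JSON."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue
            try:
                json.loads(line)
            except json.JSONDecodeError as exc:
                yield lineno, str(exc)

# Demo on a throwaway file: one good JSONL record, one broken one.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False, encoding="utf-8") as f:
    f.write('{"messages": []}\n')
    f.write('{"messages": [}\n')  # malformed, like a bad appended record
    path = f.name

print([lineno for lineno, _ in find_bad_lines(path)])  # [2]
os.unlink(path)
```

Running `find_bad_lines("train.jsonl")` on the real file should point at the exact record that `load_dataset` chokes on.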
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.8
- `huggingface_hub` version: 0.24.6
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.12.2 | open | 2024-10-17T07:41:44Z | 2024-10-17T07:41:44Z | https://github.com/huggingface/datasets/issues/7233 | [] | want-well | 0 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 137 | "Nullable" type not working with undefined values. | **Describe the bug**
When having a Nullable column and not setting the related property in an entity instance, trying to add the object to the session and then committing raises the following error:
```
File "clickhouse_driver/bufferedwriter.pyx", line 54, in clickhouse_driver.bufferedwriter.BufferedWriter.write_strings
AttributeError: 'NoneType' object has no attribute 'encode'
```
which seems to be the driver attempting to save the undefined property as a string (when it's not - it's `None` since it wasn't defined).
**To Reproduce**
```python
from sqlalchemy import Column
from clickhouse_sqlalchemy import types

class SomeModel(Base):
    __tablename__ = 'some_model'
    id = Column(types.Int32, primary_key=True)  # the ORM mapping needs a PK
    something = Column(types.Nullable(types.String))
instance = SomeModel()
session.add(instance)
session.commit()  # here's where the exception happens
```
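The driver-level failure reduces to calling `.encode()` on `None`: when the attribute is never set, the ORM hands `None` down to `clickhouse-driver`, whose string writer assumes it got a `str`. The core of the traceback, reproduced without ClickHouse:

```python
value = None  # what an unset Nullable(String) attribute ends up as

try:
    value.encode("utf-8")  # effectively what the driver's write_strings does
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'encode'
```

So the fix presumably belongs in the serialisation path (treating `None` as the ClickHouse NULL for a `Nullable` column) rather than in user code.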
**Expected behavior**
The object should be added to the table with the non-empty columns filled in, instead of raising an error.
**Versions**
- Version of package with the problem: 0.1.6
- Python version: 3.9.6
| closed | 2021-08-23T00:57:23Z | 2022-10-11T14:11:24Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/137 | [
"bug"
] | diogobaeder | 6 |
strawberry-graphql/strawberry | asyncio | 3,142 | Aborting Queries | In our project we use [Apollo-Client for React](https://www.apollographql.com/docs/react/) and Strawberry.
We want to use abort signals to abort the execution of queries in order to save resources.
While the simple approach from Apollo seems to work on the client side, it does nothing on the server side.
We assume we would need to use Apollo-Server for their approach to work.
Since we could not find anything about aborting in the strawberry docs, we wanted to ask here how it would be possible to implement aborting the execution of a query with strawberry.
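For what it's worth, the client's `AbortSignal` only closes the HTTP connection; the server saves resources only if its transport notices the disconnect and cancels the task that is executing the query. The underlying mechanism in asyncio terms (a generic sketch of task cancellation, not Strawberry API):

```python
import asyncio

async def execute_query():
    await asyncio.sleep(60)  # stand-in for an expensive resolver
    return "result"

async def main():
    task = asyncio.create_task(execute_query())
    await asyncio.sleep(0.01)
    task.cancel()  # what a transport would do on client disconnect
    try:
        await task
    except asyncio.CancelledError:
        print("query execution aborted")

asyncio.run(main())
```

Whether that cancellation actually happens depends on the ASGI server and integration in use, which is presumably the part worth asking the maintainers about.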
| closed | 2023-10-09T12:31:36Z | 2025-03-20T15:56:25Z | https://github.com/strawberry-graphql/strawberry/issues/3142 | [
"info-needed"
] | Stainless2k | 4 |
jupyter-incubator/sparkmagic | jupyter | 266 | Unable to configure handler u'magicsHandler': u'home_path' | I am trying to test sparkmagic with a miniconda2 env.
However, the PySpark kernel doesn't start, because the `magicsHandler` logging handler cannot be configured (it fails on `home_path`):
```
[I 12:11:31.571 NotebookApp] Kernel started: 6488e134-bfc8-4fa6-b117-e47fc029609f
Traceback (most recent call last):
  File "/home/tgirault/bin/miniconda2/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/home/tgirault/bin/miniconda2/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/sparkmagic/kernels/pysparkkernel/pysparkkernel.py", line 28, in <module>
    IPKernelApp.launch_instance(kernel_class=PySparkKernel)
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/traitlets/config/application.py", line 595, in launch_instance
    app.initialize(argv)
  File "<decorator-gen-136>", line 2, in initialize
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/traitlets/config/application.py", line 74, in catch_config_error
    return method(app, *args, **kwargs)
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/ipykernel/kernelapp.py", line 421, in initialize
    self.init_kernel()
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/ipykernel/kernelapp.py", line 360, in init_kernel
    user_ns=self.user_ns,
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/traitlets/config/configurable.py", line 405, in instance
    inst = cls(*args, **kwargs)
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/sparkmagic/kernels/pysparkkernel/pysparkkernel.py", line 23, in __init__
    language_info, session_language, **kwargs)
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/sparkmagic/kernels/wrapperkernel/sparkkernelbase.py", line 29, in __init__
    self.logger = SparkLog(u"{}_jupyter_kernel".format(self.session_language))
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/sparkmagic/utils/sparklogger.py", line 10, in __init__
    super(SparkLog, self).__init__(MAGICS_LOGGER_NAME, conf.logging_config(), class_name)
  File "/home/tgirault/bin/miniconda2/lib/python2.7/site-packages/hdijupyterutils/log.py", line 13, in __init__
    logging.config.dictConfig(logging_config)
  File "/home/tgirault/bin/miniconda2/lib/python2.7/logging/config.py", line 794, in dictConfig
    dictConfigClass(config).configure()
  File "/home/tgirault/bin/miniconda2/lib/python2.7/logging/config.py", line 576, in configure
    '%r: %s' % (name, e))
ValueError: Unable to configure handler u'magicsHandler': u'home_path'
```
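The `ValueError` at the bottom is `logging.config.dictConfig` wrapping whatever actually failed while building the handler, in this case the `home_path` key that sparkmagic's logging config expected to find. The wrapping behaviour is easy to see standalone (the handler definition below is deliberately unresolvable, as a stand-in for the failing one):

```python
import logging.config

config = {
    "version": 1,
    "handlers": {
        # deliberately unresolvable class, standing in for the broken definition
        "magicsHandler": {"class": "no.such.Handler"},
    },
}

try:
    logging.config.dictConfig(config)
except ValueError as exc:
    print(exc)  # mentions: Unable to configure handler 'magicsHandler'
```

So the underlying error here is almost certainly in sparkmagic's generated logging config (the `home_path` lookup), not in `logging` itself.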
| closed | 2016-06-28T10:16:20Z | 2016-06-28T22:52:34Z | https://github.com/jupyter-incubator/sparkmagic/issues/266 | [] | thomasgirault | 2 |
ijl/orjson | numpy | 127 | Question: Comparison with other Python-JSON packages | I'm currently comparing various Python packages for handling JSON ([link](https://github.com/MartinThoma/algorithms/tree/master/Python/json-benchmark): cpython 3.8.6 json, simplejson, ujson, simdjson, orjson, rapidjson). I'm trying to understand their differences. I would be super happy if you could help me with that by answering some questions:
1. Are you in contact with the other Python JSON package developers? Do you maybe share the way you benchmark or test cases?
2. Are you in contact with JSON package developers from other languages?
3. Are there other packages / articles for comparison I should have a look at?
4. The benchmarks show that orjson is extremely fast. Where does that come from?
5. You write "[orjson] is more correct than the standard json library or other third-party libraries". What do you mean by that?
6. This Github repository is in your private namespace and there is only you as a maintainer on PyPI. Have you considered creating a Github Organization for orjson or adding other maintainers on PyPI to ensure that development does not stop in case you stop development on orjson?
By the way: The benchmarking results you get (and I have confirmed them with my benchmarks) are amazing! | closed | 2020-09-26T10:06:41Z | 2020-10-06T21:50:15Z | https://github.com/ijl/orjson/issues/127 | [] | MartinThoma | 1 |
marshmallow-code/flask-smorest | rest-api | 575 | How can i create pagination for method POST | Because my api too many queries so i want to change method from GET to POST and use it only in request body.
But it seems like @blp.paginate() not support
Any help is appreciate. Thanks! | closed | 2023-10-31T12:25:10Z | 2024-07-09T20:43:40Z | https://github.com/marshmallow-code/flask-smorest/issues/575 | [
"question"
] | garu097 | 2 |
deepfakes/faceswap | deep-learning | 1,111 | dockerfile.gpu python is 3.6 | **Describe the bug**
using dockerfile.gpu results in errors
**To Reproduce**
Steps to reproduce the behavior:
1. build using Dockerfile.gpu
2. run docker image
3. run: python faceswap.py -h
4. See error
```
root@73fe496dfbc7:/app# python faceswap.py -h
Setting Faceswap backend to NVIDIA
Traceback (most recent call last):
  File "faceswap.py", line 11, in <module>
    raise Exception("This program requires at least python3.7")
Exception: This program requires at least python3.7
```
**Expected behavior**
it should run
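The exception is just an interpreter-version guard firing. The same check standalone shows why any Python 3.6 interpreter, as shipped by the Ubuntu 16.04-based image, trips it:

```python
import sys

# Mirrors the guard at the top of faceswap.py
if sys.version_info < (3, 7):
    raise Exception("This program requires at least python3.7")

print("ok, running on", ".".join(map(str, sys.version_info[:3])))
```

So the fix presumably lies in Dockerfile.gpu's base image (or its Python install step) providing 3.7+, rather than in faceswap itself.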
**Desktop (please complete the following information):**
- OS: ubuntu 16.04
| closed | 2021-01-04T13:51:08Z | 2021-01-09T18:53:10Z | https://github.com/deepfakes/faceswap/issues/1111 | [] | amitsh1 | 0 |
httpie/cli | python | 1,461 | Failing tests with responses ≥ 0.22.0 | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. `git clone https://github.com/httpie/httpie; cd httpie`
2. `pip install 'responses>=0.22.0' .[test]`
3. `pytest`
## Current result
A multitude of failures in `tests/test_encoding.py`, `tests/test_json.py`, etc. in the vein of https://hydra.nixos.org/build/202035507: `KeyError: 0` on httpie/models.py line 82.
## Expected result
A passing test suite.
---
## Additional information, screenshots, or code examples
I wrote some of this up in https://github.com/NixOS/nixpkgs/pull/205270#issuecomment-1361147904, but the problem is not NixOS-specific. The short version is that before https://github.com/getsentry/responses/pull/585, the reference to `httpie.models.HTTPResponse()._orig.raw._original_response.version` in [the implementation](https://github.com/httpie/httpie/blob/621042a0486ceb3afaf47a013c4f2eee4edc1a1d/httpie/models.py#L72) of `httpie.models.HTTPResponse.headers` found the then-extant `responses.OriginalResponseShim` object, which does not have a `version` attribute, and therefore successfully defaulted to 11, whereas now that that class has been removed it finds a `urllib3.HTTPResponse` object instead, which [defaults](https://urllib3.readthedocs.io/en/stable/reference/urllib3.response.html#urllib3.response.HTTPResponse) to `version`=0, and it’s not prepared to handle that.
Given the amount of groveling into internal data structures that goes on here (I don’t think `requests` even documents `Request.raw` as being a `urllib3.HTTPResponse` object), I’m not sure if this is a bug in the `httpie` test suite or a regression in `responses`, so I’m filing it here for you to decide.
For reference, the following change makes the tests pass for me:
```diff
diff --git a/httpie/models.py b/httpie/models.py
index d97b55e..a3ec6e7 100644
--- a/httpie/models.py
+++ b/httpie/models.py
@@ -77,6 +77,8 @@ class HTTPResponse(HTTPMessage):
else:
raw_version = raw.version
except AttributeError:
+ raw_version = 0
+ if not raw_version:
# Assume HTTP/1.1
raw_version = 11
version = {
``` | closed | 2022-12-25T18:36:06Z | 2023-01-15T16:58:59Z | https://github.com/httpie/cli/issues/1461 | [
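The diff works because urllib3's `HTTPResponse.version` now reaches the lookup as `0` instead of raising `AttributeError`, and `0` must be normalized to HTTP/1.1. A stdlib sketch of the normalization the patched code performs; the `9/10/11/20` keys are my assumption about httpie's truncated `version = {` mapping:

```python
def http_version_string(raw_version):
    """Map urllib3-style numeric HTTP versions to display strings.

    Falsy values (None, or urllib3's new default of 0) fall back to 1.1.
    """
    if not raw_version:
        raw_version = 11
    return {9: "0.9", 10: "1.0", 11: "1.1", 20: "2"}[raw_version]
```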
"bug",
"new"
] | alexshpilkin | 1 |
wandb/wandb | data-science | 8,981 | [Q]: Do we need to purchase a commercial license if we build server in our internal AWS env? | ### Ask your question
We want to build a wandb server in our company's AWS environment. Do we need to purchase a commercial license?
Reference doc: https://docs.wandb.ai/guides/hosting/self-managed/aws-tf/
| closed | 2024-12-02T07:18:34Z | 2024-12-05T22:59:36Z | https://github.com/wandb/wandb/issues/8981 | [
"ty:question",
"a:app"
] | AaronZhangL | 3 |
dynaconf/dynaconf | fastapi | 1,136 | [RFC] Implement Immutable annotation | When annotated with Immutable, the value will be loaded once, from default, from init param or loader process.
But once the settings are fully loaded it will be impossible to change the value.
```python
field: Annotated[int, Immutable] = 10
...
settings.field = 11 # raise TypeError...
settings.set("field", 11) # raise TypeError...
settings["field"] = 11 # raise TypeError...
settings.reload() # reload every key except those that are immutable
``` | open | 2024-07-06T20:22:42Z | 2024-07-08T18:38:19Z | https://github.com/dynaconf/dynaconf/issues/1136 | [
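Independent of dynaconf internals, the proposed behavior can be prototyped with a `__setattr__` guard that consults `Annotated` metadata after an initial load. The names below (`Immutable`, `FrozenSettings`) are illustrative only, not dynaconf API:

```python
from typing import Annotated, get_args, get_origin, get_type_hints

class Immutable:
    """Marker placed inside Annotated[...] to freeze a field after load."""

class FrozenSettings:
    field: Annotated[int, Immutable] = 10
    other: int = 1

    def __init__(self, **values):
        object.__setattr__(self, "_loaded", False)
        for name, value in values.items():
            setattr(self, name, value)  # initial load is always allowed
        object.__setattr__(self, "_loaded", True)

    def _is_immutable(self, name):
        hint = get_type_hints(type(self), include_extras=True).get(name)
        return (hint is not None and get_origin(hint) is Annotated
                and Immutable in get_args(hint))

    def __setattr__(self, name, value):
        if getattr(self, "_loaded", False) and self._is_immutable(name):
            raise TypeError(f"{name} is immutable once settings are loaded")
        object.__setattr__(self, name, value)

settings = FrozenSettings(field=10)
```

A real implementation would also have to hook `set()`, `__setitem__`, and `reload()` as described in the RFC, but the annotation check itself stays the same.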
"Not a Bug",
"RFC",
"typed_dynaconf"
] | rochacbruno | 0 |
amidaware/tacticalrmm | django | 1146 | Install TRMM without meshcentral | Is it possible to install Tactical RMM without MeshCentral? I don't need to allow remote access to anyone.
I only require scripting and patch updates. | closed | 2022-05-20T20:13:04Z | 2022-05-20T20:19:38Z | https://github.com/amidaware/tacticalrmm/issues/1146 | [] | pggsadmin | 1 |
pytorch/vision | machine-learning | 8,258 | `torchvision.transforms.v2.functional.convert_bounding_box_format` is wrong | ### 🐛 Describe the bug
Hi, unless I'm inputting the wrong data format, I found that the output of `torchvision.transforms.v2.functional.convert_bounding_box_format` is not consistent with `torchvision.ops.box_convert`. Please see the example below for reproduction:
```python
import torch
from torchvision.transforms.v2.functional import convert_bounding_box_format
from torchvision.ops import box_convert
input = torch.tensor([[328.0770, 231.1015, 279.2261, 457.5734]])
out1 = convert_bounding_box_format(input, "CXCYWH", "XYXY")
out2 = box_convert(input, "cxcywh", "xyxy")
print(torch.allclose(out1, out2))
print((out1 - out2).norm())
```
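For reference, the conversion both calls should agree on is `x1 = cx - w/2`, `y1 = cy - h/2`, `x2 = cx + w/2`, `y2 = cy + h/2`, which matches `box_convert`'s documented behavior. A torch-free sketch of that math for the box in the example above:

```python
def cxcywh_to_xyxy(cx, cy, w, h):
    """Convert a center-format box (cx, cy, w, h) to corner format (x1, y1, x2, y2)."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

x1, y1, x2, y2 = cxcywh_to_xyxy(328.0770, 231.1015, 279.2261, 457.5734)
# x1 ~ 188.4640, y1 ~ 2.3148, x2 ~ 467.6900, y2 ~ 459.8882
```

Whichever of the two torchvision functions deviates from these corner values is the one with the bug.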
### Versions
PyTorch version: 2.2.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.10.0-27-cloud-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 535.86.10
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.180
BogoMIPS: 4400.36
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB
L1i cache: 64 KiB
L2 cache: 2 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] pytorch-lightning==2.1.4
[pip3] torch==2.2.0
[pip3] torchaudio==2.2.0
[pip3] torchmetrics==1.3.0.post0
[pip3] torchvision==0.17.0
[pip3] triton==2.2.0
[conda] blas 1.0 mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] numpy 1.26.3 py310hb13e2d6_0 conda-forge
[conda] pytorch 2.2.0 py3.10_cuda12.1_cudnn8.9.2_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-lightning 2.1.4 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.2.0 py310_cu121 pytorch
[conda] torchmetrics 1.3.0.post0 pypi_0 pypi
[conda] torchtriton 2.2.0 py310 pytorch
[conda] torchvision 0.17.0 py310_cu121 pytorch
| closed | 2024-02-06T14:56:05Z | 2024-02-23T10:19:54Z | https://github.com/pytorch/vision/issues/8258 | [
"bug"
] | eugeneteoh | 5 |
MolSSI/cookiecutter-cms | pytest | 139 | Switch to mamba + miniforge = mambaforge to speed up CI? | A great deal of developer time is spent waiting for CI to pass.
Would it make sense to switch to [mambaforge](https://github.com/conda-forge/miniforge#mambaforge), a combination of [mamba](https://github.com/mamba-org/mamba) (a very fast version of `conda`) and [miniforge](https://github.com/conda-forge/miniforge) (a minimal conda-forge version of [miniconda](https://docs.conda.io/en/latest/miniconda.html))? | closed | 2021-08-11T14:05:04Z | 2022-08-22T20:21:41Z | https://github.com/MolSSI/cookiecutter-cms/issues/139 | [] | jchodera | 14 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.