| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
sktime/sktime | scikit-learn | 7,557 | [BUG] 1st generation reducer - inconsistent use of parameters in global vs local case | The sliding window transform in the 1st generation reducer interfaced via `make_reduce` seems to be problematic, as some parameters are only used in the global or the local case, whereas one would expect them to be used in both.
Specifically, as the refactor https://github.com/sktime/sktime/pull/7556 shows:
* `fh`, `window_length` are used only in the local branch
* `transformers` are used only in the global branch
(in `_sliding_window_transform`)
This seems like a logic error. | open | 2024-12-21T17:55:45Z | 2024-12-21T17:56:13Z | https://github.com/sktime/sktime/issues/7557 | [
"bug",
"module:forecasting"
] | fkiraly | 0 |
pydantic/pydantic-settings | pydantic | 171 | Field fails to be initialized/validated when explicitly passing `env=` | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
When explicitly passing the value of the environment variable to use to initialize a field, such field is not initialized with the given env var. This results in an error like this `Field required [type=missing, input_value={}, input_type=dict]`
### Example Code
```Python
import os
from pydantic_settings import BaseSettings
from pydantic import Field
os.environ["APP_TEXT"] = "Hello World"
class Settings(BaseSettings):
text: str = Field(env="APP_TEXT")
print(Settings().text)
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.4.1
pydantic-core version: 2.10.1
pydantic-core build: profile=release pgo=false
install path: /Users/gvso/Lev/sendgrid_test/venv/lib/python3.10/site-packages/pydantic
python version: 3.10.8 (main, Feb 13 2023, 14:35:14) [Clang 14.0.0 (clang-1400.0.29.202)]
platform: macOS-13.5.2-arm64-arm-64bit
related packages: typing_extensions-4.7.1 pydantic-settings-2.0.3
```
| closed | 2023-09-26T20:55:19Z | 2023-10-02T16:24:05Z | https://github.com/pydantic/pydantic-settings/issues/171 | [
"unconfirmed"
] | gvso | 2 |
streamlit/streamlit | data-visualization | 10,880 | `st.dataframe` displays wrong indices for pivoted dataframe | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Under some conditions, Streamlit displays the wrong indices in pivoted / multi-indexed dataframes.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10880)
```Python
import streamlit as st
import pandas as pd
df = pd.DataFrame(
{"Index": ["X", "Y", "Z"], "A": [1, 2, 3], "B": [6, 5, 4], "C": [9, 7, 8]}
)
df = df.set_index("Index")
st.dataframe(df)
st.dataframe(df.T.corr())
st.dataframe(df.T.corr().unstack())
print(df.T.corr().unstack())
```
### Steps To Reproduce
1. `streamlit run` the provided code.
2. Look at the result of the last `st.dataframe()` call.
### Expected Behavior
Inner index should be correct.
### Current Behavior
The provided code renders the following tables:

The first two tables are correct, while the last one displays a duplicate of the first index instead of the second one.
In comparison, this is the correct output from the `print()` statement:
```
Index Index
X X 1.000000
Y 0.999597
Z 0.888459
Y X 0.999597
Y 1.000000
Z 0.901127
Z X 0.888459
Y 0.901127
Z 1.000000
dtype: float64
```
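For completeness, the expected inner index can be checked directly on the unstacked result (a sketch reusing the same data); it is this inner level that the last `st.dataframe` call renders incorrectly:

```python
import pandas as pd

df = pd.DataFrame(
    {"Index": ["X", "Y", "Z"], "A": [1, 2, 3], "B": [6, 5, 4], "C": [9, 7, 8]}
).set_index("Index")

unstacked = df.T.corr().unstack()
# Two index levels; the inner one cycles X, Y, Z for each outer label.
print(unstacked.index.nlevels)
print(unstacked.index.get_level_values(1).tolist())
```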
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.2
- Python version: 3.12.9
- Operating System: Linux
- Browser: Google Chrome / Firefox
### Additional Information
The problem does not occur, when the default index is used.
```python
import streamlit as st
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": [6, 5, 4], "C": [9, 7, 8]})
st.dataframe(df.T.corr().unstack())
```
This renders the correct dataframe:

---
This issue is possibly related to https://github.com/streamlit/streamlit/issues/3696 (parsing column names and handling their types) | open | 2025-03-23T15:50:44Z | 2025-03-24T13:49:35Z | https://github.com/streamlit/streamlit/issues/10880 | [
"type:bug",
"feature:st.dataframe",
"status:confirmed",
"priority:P3",
"feature:st.data_editor"
] | punsii2 | 2 |
aminalaee/sqladmin | sqlalchemy | 700 | Add messages support | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
I would like to display feedback to users after a form submission, not just error messages. This would allow for warnings and success messages.
### Describe the solution you would like.
Django Admin uses this
https://docs.djangoproject.com/en/dev/ref/contrib/messages/#django.contrib.messages.add_message
### Describe alternatives you considered
_No response_
### Additional context
I may be willing to work on this, if there is interest. | open | 2024-01-19T23:25:17Z | 2024-05-15T20:07:55Z | https://github.com/aminalaee/sqladmin/issues/700 | [] | jonocodes | 11 |
dunossauro/fastapi-do-zero | pydantic | 55 | Create a contributors list! | Add a thank-you list to the mkdocs README crediting everyone who reviewed and edited material on the pages! | closed | 2023-11-30T04:57:23Z | 2023-12-01T00:21:13Z | https://github.com/dunossauro/fastapi-do-zero/issues/55 | [
"Site"
] | dunossauro | 0 |
FlareSolverr/FlareSolverr | api | 1,403 | Browser doesn't load testing page | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Have you ACTUALLY checked all these?
YES
### Environment
```markdown
- FlareSolverr version: 3.3.21
- Last working FlareSolverr version: -
- Operating system: Linux archlinux 6.11.5-zen1-1-zen
- Are you using Docker: no
- FlareSolverr User-Agent (see log traces or / endpoint): doesn't get to this point
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- If using captcha solver, which one: -
- URL to test this issue: https://google.com
```
### Description
This issue is similar to https://github.com/FlareSolverr/FlareSolverr/issues/1384, but it is another bug.
When using docker in a headless mode it works, but using it on the desktop **without** headless through `python src/flaresolverr.py` starts the browser and then simply doesn't load anything.
Can be helpful: ~~I'm using linux, hyprland~~ it doesn't work on either Linux or Windows, x64 processor; I have tested it on an nvidia gpu and on intel's integrated gpu.
I found that removing `options.add_argument('--disable-software-rasterizer')` in `src/utils.py` works for me, but I don't know whether this is only my problem or one that occurs generally on x86-64 CPUs.
### Logged Error Messages
```text
2024-10-28 11:19:14 INFO FlareSolverr 3.3.21
2024-10-28 11:19:14 INFO Testing web browser installation...
2024-10-28 11:19:14 INFO Platform: Linux-6.11.5-zen1-1-zen-x86_64-with-glibc2.40
2024-10-28 11:19:14 INFO Chrome / Chromium path: /usr/bin/chromium
2024-10-28 11:19:14 INFO Chrome / Chromium major version: 130
2024-10-28 11:19:14 INFO Launching web browser...
And then an infinite load.
```
### Screenshots

| closed | 2024-10-28T10:21:51Z | 2024-11-24T18:30:36Z | https://github.com/FlareSolverr/FlareSolverr/issues/1403 | [] | MAKMED1337 | 13 |
collerek/ormar | pydantic | 541 | 'str' object has no attribute 'toordinal' | **Describe the bug**
Hi!
After the latest update (0.10.24), it looks like querying for dates, using strings, is no longer working.
- My field is of type `ormar.Date(nullable=True)`.
- Calling `await MyModel.objects.get(field=value)` fails when the value is `'2022-01-20'`
- Calling `await MyModel.objects.get(field=parse_date(value))` works when the value is ^
Querying with the plain string value worked before (on 10.23).
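As a stopgap until this is resolved, parsing the string first (as `parse_date` above does) keeps queries working; a minimal stdlib-only sketch, where `MyModel` is the hypothetical model from this report:

```python
import datetime

value = "2022-01-20"
parsed = datetime.date.fromisoformat(value)
print(parsed)

# The query then becomes:
# await MyModel.objects.get(field=parsed)
```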
**Stack trace**
```python
../../.virtualenvs/project/lib/python3.10/site-packages/ormar/queryset/queryset.py:948: in get
return await self.filter(*args, **kwargs).get()
../../.virtualenvs/project/lib/python3.10/site-packages/ormar/queryset/queryset.py:968: in get
rows = await self.database.fetch_all(expr)
../../.virtualenvs/project/lib/python3.10/site-packages/databases/core.py:149: in fetch_all
return await connection.fetch_all(query, values)
../../.virtualenvs/project/lib/python3.10/site-packages/databases/core.py:271: in fetch_all
return await self._connection.fetch_all(built_query)
../../.virtualenvs/project/lib/python3.10/site-packages/databases/backends/postgres.py:174: in fetch_all
rows = await self._connection.fetch(query_str, *args)
../../.virtualenvs/project/lib/python3.10/site-packages/asyncpg/connection.py:601: in fetch
return await self._execute(
../../.virtualenvs/project/lib/python3.10/site-packages/asyncpg/connection.py:1639: in _execute
result, _ = await self.__execute(
../../.virtualenvs/project/lib/python3.10/site-packages/asyncpg/connection.py:1664: in __execute
return await self._do_execute(
../../.virtualenvs/project/lib/python3.10/site-packages/asyncpg/connection.py:1711: in _do_execute
result = await executor(stmt, None)
asyncpg/protocol/protocol.pyx:183: in bind_execute
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E asyncpg.exceptions.DataError: invalid input for query argument $2: '2022-01-20' ('str' object has no attribute 'toordinal')
```
----------------------
Let me know if you want me to try and create a reproducible example. I thought I would open the issue first, in case you immediately knew what might have changed.
Thanks for maintaining the package! ๐ | open | 2022-01-20T13:38:50Z | 2022-01-24T09:44:58Z | https://github.com/collerek/ormar/issues/541 | [
"bug"
] | sondrelg | 5 |
ShishirPatil/gorilla | api | 934 | [bug] Hosted Gorilla: <Issue> | Exception: Error communicating with OpenAI: HTTPConnectionPool(host='zanino.millennium.berkeley.edu', port=8000): Max retries exceeded with url: /v1/chat/completions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7c39c8555a10>: Failed to establish a new connection: [Errno 111] Connection refused'))
Failed model: gorilla-7b-hf-v1, for prompt: I would like to translate 'I feel very good today.' from English to Chinese | open | 2025-03-12T05:01:02Z | 2025-03-12T05:01:02Z | https://github.com/ShishirPatil/gorilla/issues/934 | [
"hosted-gorilla"
] | tolgakurtuluss | 0 |
biolab/orange3 | pandas | 6,461 | Data Sets doesn't remember non-English selection | According to @BlazZupan, if one chooses a Slovenian dataset (in English version of Orange?) and saves the workflow, this data set is not selected after reloading the workflow.
I suspect the problem occurs because the language combo is not a setting and is always reset to English for English Orange (and to Slovenian for Slovenian), and thus the data set is not chosen because it is not shown.
The easiest solution would be to save the language as a schema-only setting. | closed | 2023-06-02T12:50:51Z | 2023-06-16T08:02:49Z | https://github.com/biolab/orange3/issues/6461 | [
"bug"
] | janezd | 0 |
microsoft/nni | tensorflow | 5,253 | AssertionError: Could not found shapes for layer body.conv1 for QAT_Quantizer() | **Describe the issue**:
I am using QAT_Quantizer() and my quantize config is:
```python
config_list = [{
    'quant_types': ['input'],
    'quant_bits': {'input': 8},
    'op_types': ['Conv2d']
}]
```
I call the quantizer with `quantizer = QAT_Quantizer(net, config_list, optimizer)` without passing `dummy_input`, but I encounter this error:
```
Traceback (most recent call last):
  File "quantize.py", line 184, in <module>
    quantizer = QAT_Quantizer(net, config_list, optimizer)
  File "/mnt/ssd/anaconda3/envs/yolov6/lib/python3.7/site-packages/nni/algorithms/compression/pytorch/quantization/quantizers.py", line 405, in __init__
    assert name in self.all_shapes, "Could not found shapes for layer {}".format(name)
AssertionError: Could not found shapes for layer body.conv1
```

The error can be solved by passing a `dummy_input` into the QAT_Quantizer, but I do not want to pass one (I do not need BN folding here). How can I solve the error without passing a `dummy_input`?
**Environment**:
- NNI version: 2.5, 2.9
- Python version: 3.7
| closed | 2022-11-30T23:24:28Z | 2023-05-08T07:47:29Z | https://github.com/microsoft/nni/issues/5253 | [] | ToniButland1998 | 3 |
Avaiga/taipy | data-visualization | 1,979 | [DOCS] Fix Markdown in README.md | ### Issue Description
In README.md, the following 'Getting Started' link is not rendered correctly as there is a typo in the markdown.
### Screenshots or Examples (if applicable)

### Proposed Solution (optional)
We should fix the markdown so that the link is rendered correctly.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional) | closed | 2024-10-09T06:53:20Z | 2024-10-09T14:07:22Z | https://github.com/Avaiga/taipy/issues/1979 | [
"๐ Improvement",
"๐ Documentation",
"๐จ Priority: Medium"
] | Sriparno08 | 9 |
AirtestProject/Airtest | automation | 896 | The Windows window cannot be switched to a child window | **The Windows window cannot be switched to a child window**
```
After connecting to the main window through airtest, the main window opens a child window via an operation, but I found no airtest API to switch to that child window, so I had to implement my own method. Below is my implementation, which switches to the child window and takes a screenshot to determine the match position; it is effectively an `exists` check that operates on the child window.
```
```
from airtest.core.win.win import Windows
dev = device()
child_win = dev._app.window(title="xx", class_name="xx")
handle = child_win.wrapper_object().handle
_dev = Windows(handle=handle)
screen = _dev.snapshot(filename=None, quality=ST.SNAPSHOT_QUALITY)
match_pos = Template(r"tpl1618821110989.png", record_pos=(-0.004, -0.12), resolution=(1667, 887)).match_in(screen)
print(match_pos)
```
**Python version:** `python3.6.8`
**airtest version:** `1.2.8`
**Device:**
- System: Windows Server 2012 R2
| open | 2021-04-22T07:08:27Z | 2021-04-27T08:37:57Z | https://github.com/AirtestProject/Airtest/issues/896 | [
"enhancement"
] | hfdzlsw | 0 |
MaartenGr/BERTopic | nlp | 1,277 | OSError: libcudart.so: cannot open shared object file: No such file or directory | Having already done:
```
!pip install cugraph-cu11 cudf-cu11 cuml-cu11 --extra-index-url=https://pypi.nvidia.com
!pip uninstall cupy-cuda115 -y
!pip uninstall cupy-cuda11x -y
!pip install cupy-cuda11x -f https://pip.cupy.dev/aarch64
```
When trying to:
`from cuml.cluster import HDBSCAN`
I get:
`OSError: libcudart.so: cannot open shared object file: No such file or directory` | closed | 2023-05-19T10:02:18Z | 2023-05-19T13:38:43Z | https://github.com/MaartenGr/BERTopic/issues/1277 | [] | noahberhe | 2 |
opengeos/leafmap | streamlit | 119 | style_callback param for add_geojson() not working? | ### Environment Information
- leafmap version: 0.5.0
- Python version: 3.9
- Operating System: Linux/macOS
### Description
I want to use the `style_callback` parameter for `map.add_geojson()`, but the chosen style, which sets only the color, seems not to be respected. I think the style dicts are the same for ipyleaflet and leafmap; at least I could not find any contradictory information. See below.
```python
import requests
data = requests.get((
"https://raw.githubusercontent.com/telegeography/www.submarinecablemap.com"
"/master/web/public/api/v3/cable/cable-geo.json"
)).json()
callback = lambda feat: {"color": feat["properties"]["color"]}
```
```python
import leafmap
m = leafmap.Map(center=[0, 0], zoom=2)
m.add_geojson(data, style_callback=callback)
m.layout.height = "100px"
m
```
<img width="704" alt="Screen Shot 2021-10-03 at 11 12 53" src="https://user-images.githubusercontent.com/1001778/135747358-09d121b3-bcbc-44ff-992b-ee9036255963.png">
```python
import ipyleaflet
m = ipyleaflet.Map(center=[0, 0], zoom=2)
m += ipyleaflet.GeoJSON(data=data, style_callback=callback)
m.layout.height = "100px"
m
```
<img width="705" alt="Screen Shot 2021-10-03 at 11 14 14" src="https://user-images.githubusercontent.com/1001778/135747399-52943e61-da18-4365-96e7-76cc1e55dac6.png">
| closed | 2021-10-03T09:19:18Z | 2024-09-22T07:42:05Z | https://github.com/opengeos/leafmap/issues/119 | [
"bug"
] | deeplook | 11 |
deezer/spleeter | deep-learning | 593 | [Errno 11001] getaddrinfo failed | I have a problem when I execute this command:
`python -m spleeter separate -o output/ audio_example.mp3`
The error I get is the following:

I would like to point out that:
1 - I'm on Windows; that's why I added the `python -m`.
2 - I installed spleeter with pip after installing ffmpeg-python, without any error message or warning. | closed | 2021-03-05T15:31:55Z | 2021-05-18T17:03:56Z | https://github.com/deezer/spleeter/issues/593 | [
"bug",
"invalid"
] | hadji-yousra | 1 |
ageitgey/face_recognition | python | 1,288 | Append new entries to pickle file (KNNClassifier object) | * face_recognition version: v1.22
* Python version: 3.6
* Operating System: Mac
### Description
I am trying to add new encodings and names to saved pickle file (KNNClassifier object) - but unable to append.
### What I Did
```
# Save the trained KNN classifier
if os.path.getsize(model_save_path) > 0:
if model_save_path is not None:
with open(model_save_path, 'rb') as f:
unpickler = pickle.Unpickler(f)
clf = unpickler.load()
newEncodings = X, y
clf.append(newEncodings)
with open(model_save_path,'wb') as f:
pickle.dump(clf, f)
else:
if model_save_path is not None:
with open(model_save_path, 'wb') as f:
pickle.dump(knn_clf, f)
```
Getting error : `KNeighborsClassifier' object has no attribute 'append' ` Is there any way to achieve this? Please advice.
Other questions, if I train all images for every new training requests, does it going to impact the verification process as the pickle file is in use or OS can handle that?
I am working on moving to MySQL, if anyone did this please share your thoughts. Thank you! | closed | 2021-02-24T06:39:40Z | 2021-03-07T15:05:37Z | https://github.com/ageitgey/face_recognition/issues/1288 | [] | rathishkumar | 3 |
pydantic/pydantic-ai | pydantic | 559 | You can't use sync call and async streaming in the same application | If you use the async streaming text feature together with a sync call in the same application this leads to an exception regarding the event loop.
Python 3.12.8
pydantic-ai-slim[vertexai, openai]==0.0.15
Steps to reproduce:
```
import asyncio
from pydantic_ai import Agent
from pydantic_ai.models.vertexai import VertexAIModel
model = VertexAIModel(
model_name="gemini-1.5-flash",
service_account_file=xxx,
project_id=xxx,
region=xxx,
)
agent = Agent(model=model)
response = agent.run_sync(user_prompt="Hi")
async def run_agent(user_prompt):
async with agent.run_stream(user_prompt=user_prompt) as result:
async for message in result.stream_text(delta=True):
print(message)
response = asyncio.run(run_agent(user_prompt="Hi"))
```
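The traceback bottoms out in asyncio's generic "bound to a different event loop" check — presumably the sync call leaves shared state (such as the HTTP connection) attached to its own, already-closed loop. A stdlib-only sketch of the same mechanism, with no pydantic-ai involved:

```python
import asyncio

async def make_future():
    # Bind a Future to the currently running loop (loop #1).
    return asyncio.get_running_loop().create_future()

fut = asyncio.run(make_future())  # loop #1 is closed once this returns

async def await_it():
    # Loop #2 now tries to use an object bound to loop #1.
    return await fut

caught = None
try:
    asyncio.run(await_it())
except RuntimeError as exc:
    caught = exc

print(type(caught).__name__)
```

A workaround along those lines is to keep every agent call on one loop, e.g. replace `run_sync` with `await agent.run(...)` inside the same `asyncio.run(main())`.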
Exception:
```
Traceback (most recent call last):
File "/Users/xxx/xxx/xxx/src/tet2.py", line 27, in <module>
response = asyncio.run(run_agent(user_prompt="Hi"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/share/uv/python/cpython-3.12.8-macos-aarch64-none/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/share/uv/python/cpython-3.12.8-macos-aarch64-none/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/share/uv/python/cpython-3.12.8-macos-aarch64-none/lib/python3.12/asyncio/base_events.py", line 686, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/src/tet2.py", line 23, in run_agent
async with agent.run_stream(user_prompt=user_prompt) as result:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/share/uv/python/cpython-3.12.8-macos-aarch64-none/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/pydantic_ai/agent.py", line 408, in run_stream
async with agent_model.request_stream(messages, model_settings) as model_response:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/share/uv/python/cpython-3.12.8-macos-aarch64-none/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/pydantic_ai/models/gemini.py", line 183, in request_stream
async with self._make_request(messages, True, model_settings) as http_response:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/share/uv/python/cpython-3.12.8-macos-aarch64-none/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/pydantic_ai/models/gemini.py", line 221, in _make_request
async with self.http_client.stream(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/share/uv/python/cpython-3.12.8-macos-aarch64-none/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1583, in stream
response = await self.send(
^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1629, in send
response = await self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1657, in _send_handling_auth
response = await self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1694, in _send_handling_redirects
response = await self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1730, in _send_single_request
response = await transport.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 394, in handle_async_request
resp = await self._pool.handle_async_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 256, in handle_async_request
raise exc from None
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 236, in handle_async_request
response = await connection.handle_async_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 103, in handle_async_request
return await self._connection.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpcore/_async/http11.py", line 136, in handle_async_request
raise exc
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpcore/_async/http11.py", line 106, in handle_async_request
) = await self._receive_response_headers(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpcore/_async/http11.py", line 177, in _receive_response_headers
event = await self._receive_event(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpcore/_async/http11.py", line 217, in _receive_event
data = await self._network_stream.read(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/httpcore/_backends/anyio.py", line 35, in read
return await self._stream.receive(max_bytes=max_bytes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/anyio/streams/tls.py", line 204, in receive
data = await self._call_sslobject_method(self._ssl_object.read, max_bytes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/anyio/streams/tls.py", line 147, in _call_sslobject_method
data = await self.transport_stream.receive()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/xxx/xxx/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 1289, in receive
await self._protocol.read_event.wait()
File "/Users/xxx/.local/share/uv/python/cpython-3.12.8-macos-aarch64-none/lib/python3.12/asyncio/locks.py", line 209, in wait
fut = self._get_loop().create_future()
^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/share/uv/python/cpython-3.12.8-macos-aarch64-none/lib/python3.12/asyncio/mixins.py", line 20, in _get_loop
raise RuntimeError(f'{self!r} is bound to a different event loop')
RuntimeError: <asyncio.locks.Event object at 0x107e104a0 [unset]> is bound to a different event loop
``` | closed | 2024-12-28T12:05:41Z | 2025-01-11T14:02:40Z | https://github.com/pydantic/pydantic-ai/issues/559 | [
"question",
"Stale"
] | alenowak | 7 |
deepinsight/insightface | pytorch | 1,985 | Could you tell me how to use this framework for face detection and alignment of my own dataset? | open | 2022-04-25T03:28:13Z | 2022-04-25T03:28:13Z | https://github.com/deepinsight/insightface/issues/1985 | [] | leonardzzy | 0 | |
zappa/Zappa | django | 973 | SQLite 3.8.3 or later is required (found 3.7.17). | I keep getting the error when I call zappa deploy dev. The zappa tail provides this output:
```
SQLite 3.8.3 or later is required (found 3.7.17).
SQLite 3.8.3 or later is required (found 3.7.17).
Traceback (most recent call last):
  File "/var/task/handler.py", line 609, in lambda_handler
    return LambdaHandler.lambda_handler(event, context)
  File "/var/task/handler.py", line 240, in lambda_handler
    handler = cls()
  File "/var/task/handler.py", line 146, in __init__
    wsgi_app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
  File "/var/task/zappa/ext/django_zappa.py", line 20, in get_django_wsgi
    return get_wsgi_application()
  File "/var/task/django/core/wsgi.py", line 12, in get_wsgi_application
    django.setup(set_prefix=False)
  File "/var/task/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/var/task/django/apps/registry.py", line 114, in populate
    app_config.import_models()
  File "/var/task/django/apps/config.py", line 211, in import_models
    self.models_module = import_module(models_module_name)
  File "/var/lang/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
```
My Django version is 3.15. What should I change? | closed | 2021-04-29T20:12:03Z | 2022-07-16T04:56:14Z | https://github.com/zappa/Zappa/issues/973 | [] | viktor-idenfy | 4 |
Textualize/rich | python | 2,859 | [BUG] COLORTERM in combination with FORCE_COLOR does not work anymore | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
Commit 1ebf82300fdf4960fd9a04afc60fdecee7ab50da broke the combination of "FORCE_COLOR" and "COLORTERM" taken from the environment variables.
I've created a simple test:
```python
import io
from rich.console import Console
def test_force_color():
console = Console(file=io.StringIO(), _environ={
"FORCE_COLOR": "1",
"COLORTERM": "truecolor",
})
assert console.is_terminal
assert console.color_system == "truecolor"
```
If `master` or 1ebf82300fdf4960fd9a04afc60fdecee7ab50da is checked out it fails, because the `color_system` is `None`. If the commit before (b89d0362e8ebcb18902f0f0a206879f1829b5c0b) is checked out the test succeeds.
I guess that the order in which `FORCE_COLOR` and `COLORTERM` are interpreted was changed.
**Platform**
<details>
<summary>Click to expand</summary>
* What platform (Win/Linux/Mac) are you running on? Linux (Manjaro)
* What terminal software are you using? kitty
</details>
| closed | 2023-03-06T10:12:41Z | 2023-04-14T06:37:40Z | https://github.com/Textualize/rich/issues/2859 | [
"Needs triage"
] | ThunderKey | 3 |
aleju/imgaug | machine-learning | 391 | bgr image preprocess problem | I use cv2 to read the image, but after preprocessing it with `imgaug`, the image seems to be converted to the `rgb` channel order automatically. How can I use `imgaug` to preprocess a `bgr` image? | closed | 2019-08-22T07:41:48Z | 2019-08-23T05:55:41Z | https://github.com/aleju/imgaug/issues/391 | [] | as754770178 | 4 |
tox-dev/tox | automation | 2,430 | Tox can't handle path that contains dash (" - ") in it | ### Tox.ini
```
[tox]
minversion = 3.8.0
envlist = python3.8, python3.9, flake8, mypy
isolated_build = true
[testenv]
setenv =
PYTHONPATH = {toxinidir}
deps =
-r {toxinidir}{/}requirements_dev.txt
commands =
pytest pytest --basetemp={envtmpdir} --cov-report term-missing
```
### Steps
Run `tox`
### Expectation
Should run without error
### Error encountered
```
ERROR: usage: pytest.EXE [options] [file_or_dir] [file_or_dir] [...]
pytest.EXE: error: unrecognized arguments: - ReliSource Inc\15. UNARI\unari\.tox\python3.8\tmp
inifile: D:\OneDrive - ReliSource Inc\15. UNARI\unari\pyproject.toml
rootdir: D:\OneDrive - ReliSource Inc\15. UNARI\unari
ERROR: InvocationError for command 'D:\OneDrive - ReliSource Inc\15. UNARI\unari\.tox\python3.8\Scripts\pytest.EXE' pytest '--basetemp=D:\OneDrive' - ReliSource 'Inc\15.' 'UNARI\unari\.tox\python3.8\tmp'
--cov-report term-missing (exited with code 4)
```
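Until the word-splitting is fixed, a possible workaround (untested on this exact setup) is quoting the substitution so the expanded path is passed as a single argument:

```ini
[testenv]
commands =
    pytest "--basetemp={envtmpdir}" --cov-report term-missing
```

tox's command parser respects quotes, so the space-containing path should no longer be split.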
### Probable cause
Tox can't handle a path that contains a dash (" - ") in it | closed | 2022-06-01T13:38:14Z | 2023-01-19T11:50:02Z | https://github.com/tox-dev/tox/issues/2430 | [
"bug:normal",
"needs:more-info"
] | MdFahimulIslam | 3 |
polarsource/polar | fastapi | 5,088 | License Key Read resource is missing Created At field | Looks like the License Key Read resource is missing a Created At field.
We should expose it as we usually do with our resources. | open | 2025-02-24T13:32:38Z | 2025-02-28T10:09:46Z | https://github.com/polarsource/polar/issues/5088 | [
"bug",
"contributor friendly",
"python"
] | emilwidlund | 1 |
huggingface/transformers | machine-learning | 36,058 | TypeError: BartModel.forward() got an unexpected keyword argument 'labels' | ### System Info
```
TypeError Traceback (most recent call last)
[<ipython-input-37-3435b262f1ae>](https://localhost:8080/#) in <cell line: 0>()
----> 1 trainer.train()
5 frames
[/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1748
1749 result = None
TypeError: BartModel.forward() got an unexpected keyword argument 'labels'
```
### Who can help?
@ArthurZucker, @muellerzr, @SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
import matplotlib.pyplot as plt
from transformers import pipeline
from datasets import Dataset, DatasetDict
from transformers import BartTokenizer, BartModel, TrainingArguments, Trainer, DataCollatorForSeq2Seq
import pandas as pd
# from datasets import Dataset
df_train = pd.read_csv('../test-set/train.csv')
df_test = pd.read_csv('../test-set/test.csv')
df_val = pd.read_csv('../test-set/valid.csv')
df_train = df_train.dropna()
df_test = df_test.dropna()
df_val = df_val.dropna()
# Convert DataFrames to Hugging Face Datasets
dataset_train = Dataset.from_pandas(df_train)
dataset_test = Dataset.from_pandas(df_test)
dataset_val = Dataset.from_pandas(df_val)
# Create DatasetDict
dataset_dict = DatasetDict({
'train': dataset_train,
'test': dataset_test,
'validation': dataset_val
})
dataset_samsum = dataset_dict
split_train_test_val = [len(dataset_samsum[split]) for split in dataset_samsum]
from transformers import BartTokenizer, BartModel
model_ckpt = "facebook/bart-large"
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')
d_len = [len(tokenizer.encode(s)) for s in dataset_samsum["train"]["judgement"]]
s_len = [len(tokenizer.encode(s)) for s in dataset_samsum["train"]["summary"]]
def convert_examples_to_features(example_batch):
input_encodings = tokenizer(example_batch["judgement"], max_length=1024, truncation=True)
#Using target_tokenizer for summaries
with tokenizer.as_target_tokenizer():
target_encodings = tokenizer(example_batch["summary"], max_length=128, truncation=True)
return {
"input_ids": input_encodings["input_ids"],
"attention_mask": input_encodings["attention_mask"],
"labels": target_encodings["input_ids"]
}
dataset_samsum_pt = dataset_samsum.map(convert_examples_to_features, batched=True)
columns = ["input_ids", "labels", "attention_mask"]
dataset_samsum_pt.set_format(type="torch", columns=columns)
# Collator for Handling length imbalances and attention masks
seq2seq_data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
training_args = TrainingArguments( output_dir="bart-large-bodosum",
num_train_epochs=1,
warmup_steps=500,
per_device_train_batch_size=1,
per_gpu_eval_batch_size=1,
weight_decay=0.01,
logging_steps=10,
evaluation_strategy='steps',
eval_steps=500,
save_steps=1e6,
gradient_accumulation_steps=16,
)
trainer = Trainer(model=model, args=training_args, tokenizer=tokenizer, data_collator=seq2seq_data_collator,
train_dataset=dataset_samsum_pt["train"],
eval_dataset=dataset_samsum_pt["validation"])
trainer.train()
```
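For reference, this error pattern is plain Python: the called `forward()` simply has no `labels` parameter, so keyword binding fails. A stdlib-only toy that mirrors the situation (the class names below are stand-ins; the assumption, worth checking against your transformers version, is that `BartModel` is the bare encoder-decoder while `BartForConditionalGeneration` adds the LM head and accepts `labels`):

```python
import inspect

# Toy stand-ins that mirror the two classes' forward() signatures.
class ToyBartModel:
    def forward(self, input_ids, attention_mask=None):  # no 'labels' parameter
        return {"hidden": input_ids}

class ToyBartForConditionalGeneration(ToyBartModel):
    def forward(self, input_ids, attention_mask=None, labels=None):
        loss = 0 if labels is None else len(labels)     # pretend loss
        return {"loss": loss}

def call_like_trainer(model, **batch):
    """Trainer passes every batch column as a keyword argument to forward()."""
    sig = inspect.signature(model.forward)
    try:
        sig.bind(**batch)  # the same binding check Python performs on a call
    except TypeError as e:
        return f"TypeError: {e}"
    return model.forward(**batch)

batch = {"input_ids": [1, 2, 3], "labels": [4, 5]}
print(call_like_trainer(ToyBartModel(), **batch))  # binding fails on 'labels'
print(call_like_trainer(ToyBartForConditionalGeneration(), **batch))
```

So the likely fix (an assumption to verify, not a confirmed answer from the maintainers) is to load `BartForConditionalGeneration.from_pretrained('facebook/bart-large')` instead of the bare `BartModel`, since `Trainer` forwards the `labels` column to `forward()`.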
### Expected behavior
I am trying to train the facebook/bart-large model for a summarization task, but when I run trainer.train() I encounter this error. Please help me solve this issue. | closed | 2025-02-06T05:41:36Z | 2025-02-07T09:05:38Z | https://github.com/huggingface/transformers/issues/36058 | [
"bug"
] | mwnthainarzary | 2 |
xuebinqin/U-2-Net | computer-vision | 362 | Running QAT quantization on the u2netp model | Using PyTorch's FX quantization tooling to apply QAT to u2netp: I first loaded the trained fp32 model, then followed the standard workflow to run QAT on u2netp. The loss grows larger as the epochs iterate, mIoU gets smaller and smaller, and the model after QAT is completely wrong. | open | 2023-08-10T06:13:08Z | 2023-08-10T06:13:08Z | https://github.com/xuebinqin/U-2-Net/issues/362 | [] | ZHIZIHUABU | 0
jupyter/nbgrader | jupyter | 1,304 | adding students requires both web gui and jupyter_config.py | <!--
Thanks for helping to improve nbgrader!
If you are submitting a bug report or looking for support, please use the below
template so we can efficiently solve the problem.
If you are requesting a new feature, feel free to remove irrelevant pieces of
the issue template.
-->
### Operating system
ubuntu server 18.04.3
### `nbgrader --version`
0.7.0.dev
### `jupyterhub --version` (if used with JupyterHub)
1.1.0
### `jupyter notebook --version`
6.0.2
### Expected behavior
Adding students in web gui (Manage Students) enables courses and assignments to be listed for students
### Actual behavior
Having to add student in both jupyter_config.py and 'manage students'
### Steps to reproduce the behavior
Starting from demos (demos_multiple_classes) : add student1 to courses101 through web management.

When listing assignments for student1, nothing appears.
After adding student to group through `jupyterhub_config.py`, assignments and courses are correctly listed.
```
# instructor1 and instructor2 have access to different shared servers:
c.JupyterHub.load_groups = {
'formgrade-course101': [
'instructor1',
'grader-course101',
],
'formgrade-course123': [
'instructor2',
'grader-course123'
],
# Have to add all students here manually for courses to be listed for them
'nbgrader-course101': ['student1'],
'nbgrader-course123': ['student1']
}
```
Is this normal behavior? It is a lot of setup to add several users, and I'm wondering if there is a better way. | open | 2020-01-21T07:45:45Z | 2020-01-21T07:45:45Z | https://github.com/jupyter/nbgrader/issues/1304 | [] | Lapin-Blanc | 0
scikit-image/scikit-image | computer-vision | 6,871 | New canny implementation silently fails with integer images. | ### Description:
The new `skimage.feature.canny` implementation silently fails if given an integer image. This worked on `scikit-image<=0.19`, and no longer works with `scikit-image=0.20`. The documentation says that any dtype should work:
```
image : 2D array
Grayscale input image to detect edges on; can be of any dtype.
```
### Way to reproduce:
```
from skimage.feature import canny
import numpy as np
im = np.zeros((100, 100))
im[0: 50, 0: 50] = 1.0
print("Edge pixels with float input: ", canny(im, low_threshold=0, high_threshold=1).sum())
print("Edge pixels with int input: ", canny(im.astype(np.int64), low_threshold=0, high_threshold=1).sum())
```
This prints on new skimage (0.20):
```
Edge pixels with float input: 182
Edge pixels with int input: 0
```
And on old skimage (0.19):
```
Edge pixels with float input: 144
Edge pixels with int input: 144
```
As I write this test case I also need to ask ... why did the number of pixels change?
### Version information:
```Shell
3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0]
Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17
scikit-image version: 0.20.0
numpy version: 1.23.5
```
| closed | 2023-04-05T19:13:19Z | 2023-09-17T11:41:52Z | https://github.com/scikit-image/scikit-image/issues/6871 | [
":bug: Bug"
] | erykoff | 27 |
sqlalchemy/alembic | sqlalchemy | 661 | MySQL dialect types generating spurious revisions in 1.4 | Auto-generating a revision in 1.3.3 does not pick up any changes; however, after upgrading to 1.4, every MySQL dialect type column generates a revision such as this one:
```
op.alter_column('comp_details', 'market_type_id',
existing_type=mysql.INTEGER(display_width=10, unsigned=True),
type_=mysql.INTEGER(unsigned=True),
existing_nullable=True)
```
Here are the column definitions for that one:
```
market_type_id = Column(INTEGER(unsigned=True), nullable=True)
```
I only use the MySQL-specific integer types, so I am unsure whether this manifests with other MySQL dialect types, but I would be happy to dig deeper if it would be helpful. | closed | 2020-02-21T13:34:10Z | 2020-02-27T20:50:51Z | https://github.com/sqlalchemy/alembic/issues/661 | [
"bug",
"autogenerate - detection",
"mysql"
] | peterschutt | 10 |
dunossauro/fastapi-do-zero | sqlalchemy | 291 | Error running alembic upgrade head in the lesson 9 exercise | While doing the lesson 9 exercise of adding the created_at and updated_at columns to the todos table, when I run the alembic upgrade head command I get the following error:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) Cannot add a column with non-constant default
[SQL: ALTER TABLE todos ADD COLUMN created_at DATETIME DEFAULT (CURRENT_TIMESTAMP) NOT NULL]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
| closed | 2025-02-03T13:17:58Z | 2025-02-03T18:06:40Z | https://github.com/dunossauro/fastapi-do-zero/issues/291 | [] | leoneville | 2 |
strawberry-graphql/strawberry | asyncio | 2,791 | DataLoader load_many should set or provide an option to return exceptions | <!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [x] New behavior
## Description
The current implementation of `load_many` on the dataloader uses `asyncio.gather` to run the batch of keys through the existing `load` implementation (it's a single line):
```python
def load_many(self, keys):
return gather(*map(self.load, keys))
```
This means that if any of the individual `load` tasks raises an exception, the entire `load_many` call will fail. So for example, if key 1 returns a value but key 2 fails by raising, then this code:
```python
results = await my_loader.load_many([1, 2])
```
raises an exception and the value for key 1 can't be used.
In some cases it would be useful to do:
```python
results = await my_loader.load_many([1,2])
for result in results:
if isinstance(result, Exception):
# handle error
else:
# handle successful result
```
This would match the implementation of `loadMany` in the JS dataloader project: https://github.com/graphql/dataloader#loadmanykeys
## Implementation notes
The behaviour can be achieved with the `return_exceptions` argument to `gather`. For example:
```python
def load_many(self, keys):
return gather(*map(self.load, keys), return_exceptions=True)
```
https://docs.python.org/3/library/asyncio-task.html#asyncio.gather
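The behaviour difference can be seen with plain `asyncio`, independent of Strawberry (a standalone toy loader, not the real `DataLoader`):

```python
import asyncio

async def load(key):
    """Toy per-key loader: key 2 always fails."""
    if key == 2:
        raise ValueError(f"no value for key {key}")
    return key * 10

async def main():
    # Current behaviour: one failure poisons the whole gather call.
    try:
        await asyncio.gather(*(load(k) for k in (1, 2)))
    except ValueError as exc:
        print("gather raised:", exc)

    # With return_exceptions=True, per-key errors come back as values.
    results = await asyncio.gather(*(load(k) for k in (1, 2)),
                                   return_exceptions=True)
    for key, result in zip((1, 2), results):
        if isinstance(result, Exception):
            print(key, "-> error:", result)
        else:
            print(key, "->", result)

asyncio.run(main())
```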
## Open questions
Adding `return_exceptions` in-place would change the behaviour of existing code. Another option would be to add a `return_exceptions` optional argument to the `load_many` method and allow clients to specify the behaviour (leaving the existing behaviour unchanged). I don't have a strong instinct either way. | open | 2023-05-30T08:01:56Z | 2025-03-20T15:56:11Z | https://github.com/strawberry-graphql/strawberry/issues/2791 | [] | jthorniley | 0 |
flasgger/flasgger | rest-api | 545 | Flasgger does not load when hostname has a path | I have a Flask application and I've integrated [Flasgger](https://github.com/flasgger/flasgger) for documentation. When I run my app locally, I can access swagger at http://127.0.0.1:8000/swagger/index.html. But when it's deployed to our dev environment, the hostname is https://services.company.com/my-flask-app. And when I add /swagger/index.html at the end of that URL, swagger does not load.
This is how I've configured swagger:
```
swagger_config = {
"termsOfService": None,
"specs": [
{
"endpoint": "swagger",
"route": "/swagger.json",
}
],
"static_url_path": "/swagger",
"swagger_ui_standalone_preset_js": "./swagger-ui-standalone-preset.js",
"swagger_ui_css": "./swagger-ui.css",
"swagger_ui_bundle_js": "./swagger-ui-bundle.js",
"jquery_js": "./lib/jquery.min.js",
"specs_route": "/swagger/index.html",
}
```
I still get the wrong path to `/swagger.json`.
Also, when I try to make a request, the base URL is `http://127.0.0.1:8000` rather than the full hostname with the required path prefix, i.e. `http://127.0.0.1:8000/my-flask-app/<endpoint>`.
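One generic approach (not flasgger-specific; a sketch of the usual WSGI answer for apps served under a path prefix, with the prefix name here purely illustrative) is to shift the prefix into `SCRIPT_NAME` so URL-building helpers emit `/my-flask-app/...` paths:

```python
class PrefixMiddleware:
    """Move a fixed URL prefix from PATH_INFO into SCRIPT_NAME."""

    def __init__(self, app, prefix="/my-flask-app"):
        self.app = app
        self.prefix = prefix.rstrip("/")

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path.startswith(self.prefix):
            environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + self.prefix
            environ["PATH_INFO"] = path[len(self.prefix):] or "/"
        return self.app(environ, start_response)

# Tiny WSGI app that echoes where it thinks it is mounted.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    body = f"script={environ.get('SCRIPT_NAME', '')} path={environ['PATH_INFO']}"
    return [body.encode()]

wrapped = PrefixMiddleware(app)
```

With Flask you would wrap `app.wsgi_app` the same way; behind a proxy that sends `X-Forwarded-Prefix`, werkzeug's `ProxyFix` with `x_prefix=1` is the usual alternative. Both are hedged suggestions, not a confirmed flasgger fix.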
Any ideas on how I can resolve this? | open | 2022-08-05T07:40:30Z | 2022-08-05T07:40:30Z | https://github.com/flasgger/flasgger/issues/545 | [] | catalinapopa-uipath | 0 |
yt-dlp/yt-dlp | python | 11,907 | No video formats found with youtube:player_client=all and live-from-start in livestreams | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Indonesia
### Provide a description that is worded well enough to be understood
Hi, since the latest stable [2024.12.23](https://github.com/yt-dlp/yt-dlp/releases/tag/2024.12.23), and even on the master build [2024.12.23.232653](https://github.com/yt-dlp/yt-dlp-master-builds/releases/tag/2024.12.23.232653), yt-dlp has been failing to extract the maximum resolution of an ongoing live stream for me. I usually use this to run an automatic VOD grabber that downloads the processed VODs after the live is over.
Here is the command that I usually use to extract the resolution:
`yt-dlp --extractor-args 'youtube:player_client=all' --live-from-start --print width 'https://www.youtube.com/@Valkyrae/live'`
I use the "all" player client so that I never need to touch it again and it can always get the largest available resolution, since the default limits it to 1080p; I use live-from-start so that resolutions higher than 1080p show up for the livestream.
But now it just results in NA. After further investigation, it seems it can't find any video formats even though I have set the YouTube player client to all. The interesting thing is that it works when I use youtube:player_client=android_vr alone instead of all.
Shouldn't the "all" option also include "android_vr"? It seems to be included, but the formats available from the android_vr client don't seem to be used.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--extractor-args', 'youtube:player_client=all', '--live-from-start', '--print', 'width', 'https://www.youtube.com/@Valkyrae/live']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version master@2024.12.23.232653 from yt-dlp/yt-dlp-master-builds [65cf46cdd] (darwin_exe)
[debug] Python 3.13.1 (CPython arm64 64bit) - macOS-14.7.2-arm64-arm-64bit-Mach-O (OpenSSL 3.0.15 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2 (setts), ffprobe 7.0.2, phantomjs 2.1.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest
Latest version: master@2024.12.23.232653 from yt-dlp/yt-dlp-master-builds
yt-dlp is up to date (master@2024.12.23.232653 from yt-dlp/yt-dlp-master-builds)
[youtube:tab] Extracting URL: https://www.youtube.com/@Valkyrae/live
[youtube:tab] @Valkyrae/live: Downloading webpage
[youtube] Extracting URL: https://www.youtube.com/watch?v=gko77bw1CT4
[youtube] gko77bw1CT4: Downloading webpage
[youtube] gko77bw1CT4: Downloading ios player API JSON
[youtube] gko77bw1CT4: Downloading ios music player API JSON
[youtube] gko77bw1CT4: Downloading ios creator player API JSON
[youtube] gko77bw1CT4: Downloading web embedded client config
[youtube] gko77bw1CT4: Downloading player 03dbdfab
[youtube] gko77bw1CT4: Downloading web embedded player API JSON
[youtube] gko77bw1CT4: Downloading web safari player API JSON
[youtube] gko77bw1CT4: Downloading web music client config
[youtube] gko77bw1CT4: Downloading web music player API JSON
[youtube] gko77bw1CT4: Downloading web creator player API JSON
[youtube] gko77bw1CT4: Downloading tv player API JSON
[youtube] gko77bw1CT4: Downloading tv embedded player API JSON
[youtube] gko77bw1CT4: Downloading mweb player API JSON
[youtube] gko77bw1CT4: Downloading android player API JSON
[youtube] gko77bw1CT4: Downloading android music player API JSON
[youtube] gko77bw1CT4: Downloading android creator player API JSON
[youtube] gko77bw1CT4: Downloading android vr player API JSON
[youtube] gko77bw1CT4: Downloading MPD manifest
WARNING: [youtube] gko77bw1CT4: web client dash formats require a PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a PO Token for this client with --extractor-args "youtube:po_token=web+XXX. For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/Extractors#po-token-guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[youtube] gko77bw1CT4: Downloading MPD manifest
[youtube] gko77bw1CT4: Downloading MPD manifest
[youtube] gko77bw1CT4: Downloading MPD manifest
[youtube] gko77bw1CT4: Downloading MPD manifest
[youtube] gko77bw1CT4: Downloading MPD manifest
[youtube] gko77bw1CT4: Downloading MPD manifest
ERROR: [youtube] gko77bw1CT4: This video is not available
File "yt_dlp/extractor/common.py", line 742, in extract
File "yt_dlp/extractor/youtube.py", line 4541, in _real_extract
File "yt_dlp/extractor/common.py", line 1276, in raise_no_formats
```
| closed | 2024-12-25T22:03:02Z | 2024-12-26T01:19:18Z | https://github.com/yt-dlp/yt-dlp/issues/11907 | [
"site-bug",
"site:youtube"
] | ThePhoenix576 | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,056 | Create Generated Column in MariaDB cannot specify null | ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10055
<div type='discussions-op-text'>
<sup>Originally posted by **iamrinshibuya** July 3, 2023</sup>
Hello, when using [Computed Columns](https://docs.sqlalchemy.org/en/20/core/defaults.html#computed-columns-generated-always-as), the following code works without any changes on PostgreSQL, but it fails in MariaDB.
```python
import asyncio
from sqlalchemy import Computed, text
from sqlalchemy.orm import (
Mapped,
DeclarativeBase,
mapped_column,
)
from sqlalchemy.ext.asyncio import create_async_engine
# works with this
# engine = create_async_engine('postgresql+psycopg://...')
# does not work with this
engine = create_async_engine('mariadb+asyncmy://...')
class Base(DeclarativeBase):
pass
class Sqaure(Base):
__tablename__ = 'square'
id: Mapped[int] = mapped_column(primary_key=True)
side: Mapped[int]
area: Mapped[int] = mapped_column(Computed(text('4 * side')), index=True)
async def main():
async with engine.begin() as conn:
await conn.run_sync(Base.metadata.drop_all)
await conn.run_sync(Base.metadata.create_all)
asyncio.run(main())
```
```
sqlalchemy.exc.ProgrammingError: (asyncmy.errors.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'NOT NULL, \n\tPRIMARY KEY (id)\n)' at line 4")
[SQL:
CREATE TABLE square (
id INTEGER NOT NULL AUTO_INCREMENT,
side INTEGER NOT NULL,
area INTEGER GENERATED ALWAYS AS (4 * side) NOT NULL,
PRIMARY KEY (id)
)
]
(Background on this error at: https://sqlalche.me/e/20/f405)
```
What should I do so that the table gets created in MariaDB?
Making the column nullable (`area: Mapped[int | None]`) seems to fix this, but I have the following concerns
- I find it counterproductive in my case, the generated column always has a value.
- In PostgreSQL this is a "non-nullable" / required column and I'd like to not change the behavior just to accommodate MariaDB (the code has to support both dialects)</div>
The MariaDB docs seem to indicate that the NULL clause is not allowed with GENERATED ALWAYS: https://mariadb.com/kb/en/generated-columns/
| closed | 2023-07-03T19:29:03Z | 2023-10-20T04:51:21Z | https://github.com/sqlalchemy/sqlalchemy/issues/10056 | [
"bug",
"sql",
"PRs (with tests!) welcome",
"mariadb"
] | CaselIT | 16 |
microsoft/nni | machine-learning | 5,412 | Support for unified lightning package | **Describe the issue**:
It seems that nni does not support Lightning under the new unified package name `lightning` instead of `pytorch_lightning`. When using nni with the new unified package, it breaks. Unfortunately I don't know the Pythonic way to fix this, as there are still two separate package versions out there.
```python
import lightning as pl
import torch
[...]
import nni
from nni.compression.pytorch import LightningEvaluator
[...]
trainer = nni.trace(pl.Trainer)(
[...]
evaluator = LightningEvaluator(trainer, data)
```
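Until nni recognizes the unified package, one common pattern on the caller side is a first-importable shim (a generic sketch; the package names in the comment are illustrative, and nni's internal type check may still reject the resulting class):

```python
import importlib

def first_importable(*names):
    """Return the first module in `names` that imports successfully."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {names} could be imported")

# e.g. prefer the unified package, fall back to the legacy name:
# pl = first_importable("lightning.pytorch", "pytorch_lightning")
```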
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: Win10
- Python version: 3.10
- PyTorch/TensorFlow version: 1.13
- Lightning version: 1.8.6
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Error message**:
```
Only support traced pytorch_lightning.Trainer, please use nni.trace(pytorch_lightning.Trainer) to initialize the trainer.
```
| open | 2023-02-28T12:31:02Z | 2023-03-01T02:39:20Z | https://github.com/microsoft/nni/issues/5412 | [] | funnym0nk3y | 1 |
tflearn/tflearn | tensorflow | 1,181 | tflearn | WARNING:tensorflow:From /home/ubuntu/anaconda3/envs/zzy_data/lib/python3.7/site-packages/tensorflow/python/compat/v2_compat.py:101: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Scipy not supported!
Building Encoder
WARNING:tensorflow:From /home/ubuntu/anaconda3/envs/zzy_data/lib/python3.7/site-packages/tflearn-0.5.0-py3.7.egg/tflearn/initializations.py:110: calling UniformUnitScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From /home/ubuntu/anaconda3/envs/zzy_data/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py:549: UniformUnitScaling.__init__ (from tensorflow.python.ops.init_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.initializers.variance_scaling instead with distribution=uniform to get equivalent behavior.
Tensor("ge/Relu:0", shape=(?, ?, 64), dtype=float32)
Tensor("ge/Relu_1:0", shape=(?, ?, 128), dtype=float32)
Tensor("ge/Relu_2:0", shape=(?, ?, 128), dtype=float32)
Tensor("ge/Relu_3:0", shape=(?, ?, 256), dtype=float32)
Tensor("ge/Relu_4:0", shape=(?, ?, 128), dtype=float32)
Tensor("ge/Max:0", shape=(?, 128), dtype=float32)
Building Decoder
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/zzy_data/lib/python3.7/site-packages/tflearn-0.5.0-py3.7.egg/tflearn/initializations.py", line 198, in xavier
ModuleNotFoundError: No module named 'tensorflow.contrib'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "pc_sampling_rec.py", line 196, in <module>
train(args)
File "pc_sampling_rec.py", line 75, in train
outpj=decoder(word,layer_sizes=dec_args['layer_sizes'],b_norm=dec_args['b_norm'],b_norm_decay=1.0,b_norm_finish=dec_args['b_norm_finish'],b_norm_decay_finish=1.0,verbose=dec_args['verbose'])
File "/media/ubuntu/0f083fd5-b631-4342-9812-7e262eaff979/ZZY/2024ๅฏนๆๆปๅป+ๆฐๆฎ่ธ้ฆ/PCDNet/encoders_decoders.py", line 187, in decoder_with_fc_only
layer = fully_connected(layer, layer_sizes[i], activation='linear', weights_init='xavier', name=name, regularizer=regularizer, weight_decay=weight_decay, reuse=reuse, scope=scope_i)
File "/home/ubuntu/anaconda3/envs/zzy_data/lib/python3.7/site-packages/tflearn-0.5.0-py3.7.egg/tflearn/layers/core.py", line 152, in fully_connected
File "/home/ubuntu/anaconda3/envs/zzy_data/lib/python3.7/site-packages/tflearn-0.5.0-py3.7.egg/tflearn/initializations.py", line 201, in xavier
NotImplementedError: 'xavier_initializer' not supported, please update TensorFlow. | open | 2024-03-25T01:02:29Z | 2024-03-25T01:04:05Z | https://github.com/tflearn/tflearn/issues/1181 | [] | WillingDil | 1 |
flasgger/flasgger | rest-api | 134 | HTTPS is not supported in the current flasgger version | It seems that there is an issue regarding HTTPS for Swagger UI that was solved in a more recent version:
https://github.com/swagger-api/swagger-ui/issues/3166
so currently flasgger also does not support HTTPS API requests.
| closed | 2017-07-18T06:33:01Z | 2018-07-31T07:53:07Z | https://github.com/flasgger/flasgger/issues/134 | [
"bug"
] | ghost | 12 |
CorentinJ/Real-Time-Voice-Cloning | python | 436 | Error in preprocessing data for synthesizer | While running `synthesizer_preprocess_audio.py`, I'm getting the following error:
```
Arguments:
datasets_root: /home/amin/voice_cloning/Datasets
out_dir: /home/amin/voice_cloning/Datasets/SV2TTS/synthesizer
n_processes: None
skip_existing: True
hparams:
Using data from:
/home/amin/voice_cloning/Datasets/LibriSpeech/train-other-500
LibriSpeech: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1166/1166 [00:00<00:00, 2563.25speakers/s]
The dataset consists of 0 utterances, 0 mel frames, 0 audio timesteps (0.00 hours).
Traceback (most recent call last):
File "synthesizer_preprocess_audio.py", line 52, in <module>
preprocess_librispeech(**vars(args))
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py", line 49, in preprocess_librispeech
print("Max input length (text chars): %d" % max(len(m[5]) for m in metadata))
ValueError: max() arg is an empty sequence
```
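Separately from the root cause (the metadata list is empty because preprocessing produced zero utterances), the crash site itself is just `max()` over an empty generator; Python's `default=` argument is the usual guard (a generic illustration, not the repository's actual fix):

```python
metadata = []  # what the preprocessor ended up with here

# max() on an empty sequence raises ValueError...
try:
    max(len(m[5]) for m in metadata)
except ValueError as exc:
    print(exc)  # max() arg is an empty sequence

# ...while default= yields a sentinel instead, letting the script
# report a zero length and fail with a clearer message.
longest = max((len(m[5]) for m in metadata), default=0)
print("Max input length (text chars): %d" % longest)
```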
I'm preprocessing LibriSpeech500 but apparently the synthesizer preprocessor is failing to create the proper metadata file.
Has anyone seen the same issue?
| closed | 2020-07-22T06:52:38Z | 2020-07-22T07:35:15Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/436 | [] | amintavakol | 0 |
ets-labs/python-dependency-injector | flask | 665 | Allow Closing to detect dependent resources passed as kwargs | Hi,
I have the same issue of this user:
https://github.com/ets-labs/python-dependency-injector/issues/633#issuecomment-1361813043
Can you fix it? | open | 2023-02-06T08:37:43Z | 2023-02-06T09:03:47Z | https://github.com/ets-labs/python-dependency-injector/issues/665 | [] | mauros191 | 0 |
ipython/ipython | data-science | 14,368 | gtk/GTKAgg matplotlib backend is not available | Using the latest IPython (8.22.2) and Matplotlib (3.8.3) the list of available IPython backends is
```python
In [1]: %matplotlib --list
Available matplotlib backends: ['tk', 'gtk', 'gtk3', 'gtk4', 'wx', 'qt4', 'qt5', 'qt6', 'qt', 'osx', 'nbagg', 'webagg', 'notebook', 'agg', 'svg', 'pdf', 'ps', 'inline', 'ipympl', 'widget']
```
which includes 'gtk'. But there is no such backend available in Matplotlib:
```python
In [1]: %matplotlib gtk
<snip>
ValueError: Key backend: 'gtkagg' is not a valid value for backend; supported values are ['GTK3Agg', 'GTK3Cairo', 'GTK4Agg', 'GTK4Cairo', 'MacOSX', 'nbAgg', 'QtAgg', 'QtCairo', 'Qt5Agg', 'Qt5Cairo', 'TkAgg', 'TkCairo', 'WebAgg', 'WX', 'WXAgg', 'WXCairo', 'agg', 'cairo', 'pdf', 'pgf', 'ps', 'svg', 'template']
```
I think it was removed in 2018 (matplotlib/matplotlib#10426) so I assume there has been no real-world use of it for a while.
I think it should be removed from the list of allowed backends in IPython. However, I don't think any action is necessary now as I will deal with this as part of the wider change to move the matplotlib backend resolution from IPython to Matplotlib (#14311). | closed | 2024-03-11T14:14:38Z | 2024-05-14T09:24:17Z | https://github.com/ipython/ipython/issues/14368 | [] | ianthomas23 | 1 |
vitalik/django-ninja | pydantic | 1,158 | Exceptions log level | **Is your feature request related to a problem? Please describe.**
_Feature to change log level of exceptions and/or remove the logging._
By default in operation.py, any exception raised during endpoint handling inside the context manager activates this part of the code, regardless of what kind of exception it is.
[screenshot of the exception-logging block in `operation.py`]
_By default **Django** treats 404-style errors as **WARNING**._
So even a 404 in django extra -> ERROR log.
Because of that we can't, for example, create a custom Django log handler that sends real ERRORs to email/messenger, etc.
_Ideally the logic should be:_
_exception handled by an exception_handler? -> WARNING_
_exception not handled by any exception_handler? -> ERROR_
_There is `def on_exception()` in NinjaAPI for that._
So I am not sure why there is some logging logic in operation.py before we find a handler for the exception.
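The proposed rule is easy to prototype in isolation (a toy dispatcher using the stdlib `logging` module, not django-ninja's actual code):

```python
import logging

logger = logging.getLogger("toy.ninja")

def dispatch(exc, exception_handlers):
    """Log WARNING if a registered handler covers exc, else ERROR and re-raise."""
    for exc_type, handler in exception_handlers.items():
        if isinstance(exc, exc_type):
            logger.warning("handled %s: %s", type(exc).__name__, exc)
            return handler(exc)
    logger.error("unhandled exception", exc_info=exc)
    raise exc

handlers = {FileNotFoundError: lambda e: ("404", str(e))}

print(dispatch(FileNotFoundError("missing"), handlers))  # logged at WARNING
try:
    dispatch(RuntimeError("boom"), handlers)             # logged at ERROR
except RuntimeError:
    pass
```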
**Describe the solution you'd like**
**Add ability to change and/or remove exception logging in operation.py** | closed | 2024-05-09T11:52:05Z | 2024-05-09T11:56:05Z | https://github.com/vitalik/django-ninja/issues/1158 | [] | mrisedev | 1 |
sgl-project/sglang | pytorch | 4,404 | [Bug] When starting with dp, forward_batch.global_num_tokens_gpu is None. | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
[2025-03-14 08:52:33 DP1 TP0] Scheduler hit an exception: Traceback (most recent call last):
File "/data/LLM_server/sglang-main/python/sglang/srt/managers/scheduler.py", line 1714, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
File "/data/LLM_server/sglang-main/python/sglang/srt/managers/scheduler.py", line 218, in __init__
self.tp_worker = TpWorkerClass(
File "/data/LLM_server/sglang-main/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 63, in __init__
self.worker = TpModelWorker(server_args, gpu_id, tp_rank, dp_rank, nccl_port)
File "/data/LLM_server/sglang-main/python/sglang/srt/managers/tp_worker.py", line 74, in __init__
self.model_runner = ModelRunner(
File "/data/LLM_server/sglang-main/python/sglang/srt/model_executor/model_runner.py", line 166, in __init__
self.initialize(min_per_gpu_memory)
File "/data/LLM_server/sglang-main/python/sglang/srt/model_executor/model_runner.py", line 207, in initialize
self.init_cuda_graphs()
File "/data/LLM_server/sglang-main/python/sglang/srt/model_executor/model_runner.py", line 881, in init_cuda_graphs
self.cuda_graph_runner = CudaGraphRunner(self)
File "/data/LLM_server/sglang-main/python/sglang/srt/model_executor/cuda_graph_runner.py", line 251, in __init__
self.capture()
File "/data/LLM_server/sglang-main/python/sglang/srt/model_executor/cuda_graph_runner.py", line 323, in capture
) = self.capture_one_batch_size(bs, forward)
File "/data/LLM_server/sglang-main/python/sglang/srt/model_executor/cuda_graph_runner.py", line 402, in capture_one_batch_size
run_once()
File "/data/LLM_server/sglang-main/python/sglang/srt/model_executor/cuda_graph_runner.py", line 395, in run_once
logits_output = forward(input_ids, forward_batch.positions, forward_batch)
File "/data/anaconda3/envs/tenserrt_llm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/data/LLM_server/sglang-main/python/sglang/srt/models/qwen2.py", line 375, in forward
return self.logits_processor(
File "/data/anaconda3/envs/tenserrt_llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/anaconda3/envs/tenserrt_llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data/LLM_server/sglang-main/python/sglang/srt/layers/logits_processor.py", line 306, in forward
logits = self._get_logits(pruned_states, lm_head, logits_metadata)
File "/data/LLM_server/sglang-main/python/sglang/srt/layers/logits_processor.py", line 412, in _get_logits
dp_gather(hidden_states, local_hidden_states, logits_metadata, "embedding")
File "/data/LLM_server/sglang-main/python/sglang/srt/layers/dp_attention.py", line 154, in dp_gather
local_start_pos, local_num_tokens = get_dp_local_info(forward_batch)
File "/data/LLM_server/sglang-main/python/sglang/srt/layers/dp_attention.py", line 94, in get_dp_local_info
cumtokens = torch.cumsum(forward_batch.global_num_tokens_gpu, dim=0)
TypeError: cumsum() received an invalid combination of arguments - got (NoneType, dim=int), but expected one of:
* (Tensor input, int dim, *, torch.dtype dtype = None, Tensor out = None)
* (Tensor input, name dim, *, torch.dtype dtype = None, Tensor out = None)
### Reproduction
CUDA_VISIBLE_DEVICES=4,5,6,7 python -m sglang.launch_server --model-path /data/MODELS/QwQ-32B-GPTQ-int8 --host 0.0.0.0 --tp 2 --dp 2
### Environment
(tenserrt_llm) [server@6000gpu sglang-main]$ python3 -m sglang.check_env
INFO 03-14 09:03:13 __init__.py:190] Automatically detected platform cuda.
/data/anaconda3/envs/tenserrt_llm/lib/python3.10/site-packages/torch/utils/cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
Python: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0]
CUDA available: True
GPU 0,1: NVIDIA RTX A6000
GPU 0,1 Compute Capability: 8.6
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.135
PyTorch: 2.5.1+cu124
sglang: 0.4.3.post4
sgl_kernel: 0.0.5
flashinfer: 0.2.3+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.13
fastapi: 0.115.11
hf_transfer: 0.1.9
huggingface_hub: 0.29.3
interegular: 0.3.3
modelscope: 1.23.2
orjson: 3.10.15
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.3.0
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.66.3
tiktoken: 0.9.0
anthropic: 0.49.0
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV4 PXB PXB SYS SYS SYS SYS 0-35,72-107 0 N/A
GPU1 NV4 X PXB PXB SYS SYS SYS SYS 0-35,72-107 0 N/A
GPU2 PXB PXB X NV4 SYS SYS SYS SYS 0-35,72-107 0 N/A
GPU3 PXB PXB NV4 X SYS SYS SYS SYS 0-35,72-107 0 N/A
GPU4 SYS SYS SYS SYS X NV4 PXB PXB 36-71,108-143 1 N/A
GPU5 SYS SYS SYS SYS NV4 X PXB PXB 36-71,108-143 1 N/A
GPU6 SYS SYS SYS SYS PXB PXB X NV4 36-71,108-143 1 N/A
GPU7 SYS SYS SYS SYS PXB PXB NV4 X 36-71,108-143 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
ulimit soft: 65535 | open | 2025-03-14T01:04:33Z | 2025-03-20T02:35:51Z | https://github.com/sgl-project/sglang/issues/4404 | [] | zzk2021 | 3 |
jpadilla/django-rest-framework-jwt | django | 292 | auth0 and rest framework jwt | It seems rest-framework-jwt assumes that you have already registered the user.
In my case, I want to authenticate users via Auth0 and then create them in my Django app.
How could I do this?
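What I have in mind (just a standalone sketch of the idea, not actual rest-framework-jwt API; the claim names and the in-memory store are made up for illustration) is a get-or-create step keyed on the token's subject:

```python
# Hypothetical sketch: create a local user the first time an Auth0-verified
# token is seen. "sub" and "email" are assumed claim names, and user_store
# stands in for the real user database.

def get_or_create_user(claims, user_store):
    """Return the local user for the token's subject, creating it if needed."""
    user_id = claims["sub"]
    if user_id not in user_store:
        user_store[user_id] = {
            "username": user_id,
            "email": claims.get("email", ""),
        }
    return user_store[user_id]
```

In a real Django app this would presumably call `User.objects.get_or_create()` from whatever hook verifies the Auth0 token, but I don't know which handler is the right place to do it.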
Thanks | closed | 2016-12-22T08:26:26Z | 2017-03-04T16:38:50Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/292 | [] | saius | 1 |
dpgaspar/Flask-AppBuilder | rest-api | 2,195 | Support Personal Access Tokens in addition to AUTH_TYPE | Hi, is it possible to support access tokens in addition to OAuth/OIDC authentication,
so that other applications can communicate with the API of the FAB application?
I run into this issue when I want to use DataHub to ingest metadata from Superset, which is built with FAB; the only way for now is when Superset is configured with AUTH_DB or AUTH_LDAP, but not AUTH_OAUTH/AUTH_OID.
If FAB supported access tokens regardless of the authentication type, this would open new ways to communicate with FAB application APIs.
What do you think? | open | 2024-02-10T19:44:17Z | 2024-02-20T09:58:38Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2195 | [] | shohamyamin | 1 |
recommenders-team/recommenders | deep-learning | 1,980 | [BUG] Set scipy version back to use the latest one | ### Description
This issue is a backlog item to revert the temporary workaround from #1971. | closed | 2023-08-29T03:39:45Z | 2024-04-30T04:58:10Z | https://github.com/recommenders-team/recommenders/issues/1980 | [
"bug"
] | loomlike | 2 |
sdl60660/letterboxd_recommendations | web-scraping | 18 | Failed on "Getting User's Movie" stage: redis_get_user_data_job_status failed | Can't seem to scrape my profile's movies on any device or network. Seems to be logging "redis_get_user_data_job_status" as failed. | closed | 2023-11-30T05:11:41Z | 2024-02-02T22:37:54Z | https://github.com/sdl60660/letterboxd_recommendations/issues/18 | [] | TomasCarlson | 2 |
iMerica/dj-rest-auth | rest-api | 319 | Sending verification email to false email produces 500 Internal server error | Currently when a user registers im sending a verification email using a Custom Account Adapter:
```
class CustomAccountAdapter(DefaultAccountAdapter):
def get_email_confirmation_url(self, request, confirmation):
return f"{secret.FRONT_URL}/verify-email?key={confirmation.key}"
def send_mail(self, template_prefix, email, context):
# Send email
subject = 'Welcome to Website.com, please verify your email'
template_name = 'verification_email.html'
body = render_to_string(template_name, context).strip()
msg = EmailMessage(subject, body, self.get_from_email(), [email])
msg.content_subtype = 'html'
msg.send()
```
The issue is that when I try to sign up with a fake email, for example sdavc@idhoihfhssdadiohdfij.com (which many people will probably try to do), the Django server throws an internal error 500.
`**smtplib.SMTPRecipientsRefused: {'sdavc@idhoihfhssdadiohdfij.com': (450, b'4.1.2 <sdavc@idhoihfhssdadiohdfij.com>: Recipient address rejected: Domain not found')}**`
How can I return some sort of errors similar to when a user tries to register with an email that already exists so I can display this error in my frontend?
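One direction I've been exploring (an untested, simplified sketch; the error shape is my assumption of what DRF-style field errors look like) is catching the SMTP failure and translating it into a field error instead of letting it bubble up as a 500:

```python
import smtplib

def send_with_feedback(send_fn, email):
    """Call send_fn(); translate a refused recipient into a DRF-style error dict.

    Returns None on success, or a dict mapping the field name to error messages.
    """
    try:
        send_fn()
    except smtplib.SMTPRecipientsRefused as exc:
        # exc.recipients maps each rejected address to (code, message)
        reason = exc.recipients.get(email)
        return {"email": [
            "This email address was rejected by the mail server.",
            repr(reason),
        ]}
    return None
```

In the actual adapter I imagine this would wrap `msg.send()` inside `send_mail` and raise a `serializers.ValidationError` with that dict, but I haven't confirmed where allauth expects the exception to be raised from.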
| open | 2021-10-19T01:40:38Z | 2021-10-19T01:40:38Z | https://github.com/iMerica/dj-rest-auth/issues/319 | [] | adrenaline681 | 0 |
OWASP/Nettacker | automation | 250 | CSV result export feature | Currently Nettacker is only capable of producing results in JSON, TXT and HTML formats. A new feature to produce results in CSV format is needed.
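A rough sketch of what the CSV builder could look like, using only the standard library (the result field names here are hypothetical, since I don't know Nettacker's exact result schema):

```python
import csv
import io

def results_to_csv(results):
    """Render scan results (a list of dicts) as CSV text.

    Field names are hypothetical placeholders for Nettacker's real schema;
    extra keys in a result row are ignored rather than raising.
    """
    fieldnames = ["host", "port", "module", "date"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    for row in results:
        writer.writerow(row)
    return buf.getvalue()
```

The output could then simply be written to whatever path the user passes to `-o`.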
Command line option -o :
`-o results.csv` | closed | 2020-04-26T23:38:24Z | 2020-05-16T23:53:06Z | https://github.com/OWASP/Nettacker/issues/250 | [
"ask for feature"
] | securestep9 | 1 |
kizniche/Mycodo | automation | 1,058 | Generic Analog pH/EC Actions broken on AJAX-enabled interface | ### Describe the problem/bug
The AJAX-enabled interface doesn't allow Actions to be executed. Clicking on the "Calibrate Slot" buttons doesn't appear to do anything.
### Versions:
- Mycodo Version: 8.11.0 + master branch commit [8d46745](https://github.com/kizniche/Mycodo/commit/8d46745b8439abad903f49eb9da0e27756e8fdca)
- Raspberry Pi Version: 3B
- Raspbian OS Version: Linux raspberrypi 5.10.17-v7+ #1403 SMP Mon Feb 22 11:29:51 GMT 2021 armv7l GNU/Linux
### Reproducibility
Please list specific setup details that are involved and the steps to reproduce the behavior:
1. Upgrade to master branch commit [8d46745](https://github.com/kizniche/Mycodo/commit/8d46745b8439abad903f49eb9da0e27756e8fdca)
2. Browse to Generic Analog pH/EC input.
3. Insert pH or EC probe in calibration solution and click on Calibrate Slot. An incomplete message pops up on the right:
`Success: Custom Button: Traceback (most recent call last): File "/home/pi/Mycodo/mycodo/mycodo_client.py", line 291, in custom_button controller_type, unique_id, button_id, args_dict, thread) File "/home/pi/Mycodo/env/lib/python3.7/site-packages/Pyro5/client.py", line 476, in call return self.send(self.name, args, kwargs) File "/home/pi/Mycodo/env/lib/python3.7/site-packages/Pyro5/client.py", line 211, in _pyroInvoke data = serializer.dumpsCall(objectId, methodname, vargs, kwargs) File "/home/pi/Mycodo/env/lib/python3.7/site-packages/Pyro5/serializers.py", line 276, in dumpsCall return serpent.dumps((obj, method, vargs, kwargs), module_in_classname=True) File "/home/pi/Mycodo/env/lib/python3.7/site-packages/serpent.py", line 69, in dumps return Serializer(indent, module_in_classname, bytes_repr).serialize(obj) File "/home/pi/Mycodo/env/lib/python3.7/site-packages/serpent.py", line 229, in serialize self._serialize(obj, out, 0) File "/home/pi/Mycodo/env/lib/python3.7/site-packages/serpent.py", line 255, in _serialize return self.dispatch[t](self, obj, out, level) File "/home/pi/Mycodo/env/lib/python3.7/site-packages/serpent.py", line 319, in ser_builtins_tuple serialize(elt, out, level + 1) File "/home/pi/Mycodo/env/lib/python3.7/site-packages/serpent.py", line 255, in _serialize return self.dispatch[t](self`
4. Notice that the setting doesn't "take" in the targeted calibration slot, as the previous calibration setting still remains even after a browser refresh.
### Expected behavior
The Calibrate Slot button should work as on non-AJAX interface.
### Screenshots
N/A
### Additional context
N/A
| closed | 2021-07-18T21:36:01Z | 2021-08-30T02:43:41Z | https://github.com/kizniche/Mycodo/issues/1058 | [
"bug",
"Fixed and Committed"
] | dookaloosy | 1 |
huggingface/diffusers | deep-learning | 10,518 | Something wrong in "diffusers/examples/research_projects/sd3_lora_colab/train_dreambooth_lora_sd3_miniature.py" | ### Describe the bug
https://github.com/huggingface/diffusers/blob/89e4d6219805975bd7d253a267e1951badc9f1c0/examples/research_projects/sd3_lora_colab/train_dreambooth_lora_sd3_miniature.py#L768
<img width="791" alt="Screenshot 2025-01-10 15 09 29" src="https://github.com/user-attachments/assets/4d470d53-56c8-4a4a-be22-9308f0bd580b" />
should replace "unet" with "transformers"
### Reproduction
see the link
### Logs
_No response_
### System Info
0.31.0
### Who can help?
_No response_ | closed | 2025-01-10T07:10:11Z | 2025-01-13T13:47:29Z | https://github.com/huggingface/diffusers/issues/10518 | [
"bug"
] | CuddleSabe | 0 |
tfranzel/drf-spectacular | rest-api | 778 | How to annotate a serializer field with different request/response schemas? | I have this custom serializer field that is used in a number of serializers:
```python
class NestedPrimaryKeyRelatedField(serializers.PrimaryKeyRelatedField):
def __init__(self, serializer, **kwargs):
"""
On read display a complete nested representation of the object(s)
On write only require the PK (not an entire object) as value
"""
self.serializer = serializer
super().__init__(**kwargs)
def to_representation(self, obj):
return self.serializer(obj, context=self.context).to_representation(obj)
# Usage
class MySerializer:
related_obj = NestedPrimaryKeyRelatedField(RelatedSerializer, allow_null=True, required=False)
```
The idea is that when the client GETs `MySerializer` they receive a nice nested representation of `related_obj` using `RelatedSerializer`, but when they POST/PUT/PATCH they only need to provide the PK (not an entire object) to set the value of `related_obj`.
The actual functionality works as expected, but the schema generated by Spectacular assumes the field is just a primary key for both read and write operations, while in reality on read the schema should be a full object based on `RelatedSerializer`.
I tried to create a custom extension but I'm struggling with the fine details:
```python
class NestedPkExtension(OpenApiSerializerFieldExtension):
# Ensure annotations use different read/write serializers when using NestedPrimaryKeyRelatedField
target_class = NestedPrimaryKeyRelatedField
def map_serializer_field(self, auto_schema, direction: Direction):
# I know the direction plays a role here, but don't know exactly what
if direction == "response":
# Return an object schema
else:
# Return a primary key schema
```
Any help would be appreciated! | closed | 2022-07-27T00:03:38Z | 2022-07-27T16:08:22Z | https://github.com/tfranzel/drf-spectacular/issues/778 | [] | jerivas | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 868 | Does this model implement identity mapping loss? | I want to translate painting to photo. | closed | 2019-12-05T14:37:04Z | 2019-12-05T14:49:55Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/868 | [] | Yukkuri5 | 0 |
lukas-blecher/LaTeX-OCR | pytorch | 410 | ValidationError: 1 validation error for InitSchema | I have a problem with importing LatexOCR
`from pix2tex.cli import LatexOCR`
throws me next error:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[1], line 2
1 from PIL import Image
----> 2 from pix2tex.cli import LatexOCR
4 img = Image.open('test.jpg')
5 model = LatexOCR()
File \venv\Lib\site-packages\pix2tex\cli.py:1
----> 1 from pix2tex.dataset.transforms import test_transform
2 import pandas.io.clipboard as clipboard
3 from PIL import ImageGrab
File \venv\Lib\site-packages\pix2tex\dataset\transforms.py:13
1 import albumentations as alb
2 from albumentations.pytorch import ToTensorV2
4 train_transform = alb.Compose(
5 [
6 alb.Compose(
7 [alb.ShiftScaleRotate(shift_limit=0, scale_limit=(-.15, 0), rotate_limit=1, border_mode=0, interpolation=3,
8 value=[255, 255, 255], p=1),
9 alb.GridDistortion(distort_limit=0.1, border_mode=0, interpolation=3, value=[255, 255, 255], p=.5)], p=.15),
10 # alb.InvertImg(p=.15),
11 alb.RGBShift(r_shift_limit=15, g_shift_limit=15,
12 b_shift_limit=15, p=0.3),
---> 13 alb.GaussNoise(10, p=.2),
14 alb.RandomBrightnessContrast(.05, (-.2, 0), True, p=0.2),
15 alb.ImageCompression(95, p=.3),
16 alb.ToGray(always_apply=True),
17 alb.Normalize((0.7931, 0.7931, 0.7931), (0.1738, 0.1738, 0.1738)),
18 # alb.Sharpen()
19 ToTensorV2(),
20 ]
21 )
22 test_transform = alb.Compose(
23 [
24 alb.ToGray(always_apply=True),
(...)
28 ]
29 )
File \venv\Lib\site-packages\albumentations\core\validation.py:35, in ValidatedTransformMeta.__new__.<locals>.custom_init(self, *args, **kwargs)
32 full_kwargs[parameter_name] = parameter.default
34 # No try-except block needed as we want the exception to propagate naturally
---> 35 config = dct["InitSchema"](**full_kwargs)
37 validated_kwargs = config.model_dump()
38 for name_arg in kwargs:
File \venv\Lib\site-packages\pydantic\main.py:212, in BaseModel.__init__(self, **data)
210 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
211 __tracebackhide__ = True
--> 212 validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
213 if self is not validated_self:
214 warnings.warn(
215 'A custom validator is returning a value other than `self`.\n'
216 "Returning anything other than `self` from a top level model validator isn't supported when validating via `__init__`.\n"
217 'See the `model_validator` docs (https://docs.pydantic.dev/latest/concepts/validators/#model-validators) for more details.',
218 category=None,
219 )
ValidationError: 1 validation error for InitSchema
std_range
Input should be a valid tuple [type=tuple_type, input_value=10, input_type=int]
For further information visit https://errors.pydantic.dev/2.9/v/tuple_type
```
I am using latest versions of pydantic and pix2tex, I have tried to downgrade versions of both packages, but problem persists. | closed | 2025-01-10T11:01:50Z | 2025-01-13T09:24:28Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/410 | [] | Qwedon | 2 |
chaos-genius/chaos_genius | data-visualization | 671 | Search bug in the KPI screen | closed | 2022-02-09T05:36:58Z | 2022-02-16T18:11:32Z | https://github.com/chaos-genius/chaos_genius/issues/671 | [
"๐ฅ๏ธ frontend"
] | Santhoshkumar1023 | 1 | |
aleju/imgaug | deep-learning | 28 | Getting black image! | First of all thank you very much for this.
When I try to run your example code with these two images, I always get a black image!


This is the whole code given on the front page; I just replaced the random numpy array statement with these two images:
```
import imgaug as ia
from imgaug import augmenters as iaa
import numpy as np
import caffe  # needed for caffe.io.load_image below
import matplotlib.pyplot as plt  # needed for plt.imshow at the end
im = caffe.io.load_image('buckskin_s_000331.png')
im2 = caffe.io.load_image('buckskin_s_000005.png')
images = np.zeros([2,32,32,3])
images[0] = im
images[1] = im2
# Sometimes(0.5, ...) applies the given augmenter in 50% of all cases,
# e.g. Sometimes(0.5, GaussianBlur(0.3)) would blur roughly every second image.
st = lambda aug: iaa.Sometimes(0.3, aug)
# Define our sequence of augmentation steps that will be applied to every image
# All augmenters with per_channel=0.5 will sample one value _per image_
# in 50% of all cases. In all other cases they will sample new values
# _per channel_.
seq = iaa.Sequential([
iaa.Fliplr(0.5), # horizontally flip 50% of all images
iaa.Flipud(0.5), # vertically flip 50% of all images
st(iaa.Superpixels(p_replace=(0, 1.0), n_segments=(20, 200))), # convert images into their superpixel representation
st(iaa.Crop(percent=(0, 0.1))), # crop images by 0-10% of their height/width
st(iaa.GaussianBlur((0, 3.0))), # blur images with a sigma between 0 and 3.0
st(iaa.Sharpen(alpha=(0, 1.0), strength=(0.75, 1.5))), # sharpen images
st(iaa.Emboss(alpha=(0, 1.0), strength=(0, 2.0))), # emboss images
# search either for all edges or for directed edges
st(iaa.Sometimes(0.5,
iaa.EdgeDetect(alpha=(0, 0.7)),
iaa.DirectedEdgeDetect(alpha=(0, 0.7), direction=(0.0, 1.0)),
)),
st(iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.2), per_channel=0.5)), # add gaussian noise to images
st(iaa.Dropout((0.0, 0.1), per_channel=0.5)), # randomly remove up to 10% of the pixels
st(iaa.Invert(0.25, per_channel=True)), # invert color channels
st(iaa.Add((-10, 10), per_channel=0.5)), # change brightness of images (by -10 to 10 of original value)
st(iaa.Multiply((0.5, 1.5), per_channel=0.5)), # change brightness of images (50-150% of original value)
st(iaa.ContrastNormalization((0.5, 2.0), per_channel=0.5)), # improve or worsen the contrast
st(iaa.Affine(
scale={"x": (0.8, 1.2), "y": (0.8, 1.2)}, # scale images to 80-120% of their size, individually per axis
translate_px={"x": (-16, 16), "y": (-16, 16)}, # translate by -16 to +16 pixels (per axis)
rotate=(-45, 45), # rotate by -45 to +45 degrees
shear=(-16, 16), # shear by -16 to +16 degrees
order=ia.ALL, # use any of scikit-image's interpolation methods
cval=(0, 255), # if mode is constant, use a cval between 0 and 255
mode=ia.ALL # use any of scikit-image's warping modes (see 2nd image from the top for examples)
)),
st(iaa.ElasticTransformation(alpha=(0.5, 3.5), sigma=0.25)) # apply elastic transformations with random strengths
],
random_order=True # do all of the above in random order
)
images_aug = seq.augment_images(images)
plt.imshow(images_aug[0])
plt.show()
```

what is wrong here? | open | 2017-04-04T20:08:55Z | 2017-04-11T06:17:05Z | https://github.com/aleju/imgaug/issues/28 | [] | Coderx7 | 7 |
ray-project/ray | machine-learning | 50,698 | CI test linux://python/ray/train/v2:test_data_parallel_trainer is flaky | CI test **linux://python/ray/train/v2:test_data_parallel_trainer** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8396#01951ae3-8f44-4218-901d-4b144474feab
- https://buildkite.com/ray-project/postmerge/builds/8390#01951a34-43e6-428b-b98f-1832dd663b5e
- https://buildkite.com/ray-project/postmerge/builds/8377#019515bf-8c94-4454-afb8-60c47eb48990
DataCaseName-linux://python/ray/train/v2:test_data_parallel_trainer-END
Managed by OSS Test Policy | closed | 2025-02-18T21:33:05Z | 2025-02-21T17:44:54Z | https://github.com/ray-project/ray/issues/50698 | [
"bug",
"triage",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability",
"ml"
] | can-anyscale | 9 |
ansible/awx | django | 15,302 | Inventory Sync and Ad-Hoc Commands are not send in Logs | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
Enhancement to Existing Feature
Logging
### Feature Summary
Hi,
I'm using AAP and AWX for production and development environments, with Logstash (part of the ELK stack) as the logging tool for log control.
I have enabled all the loggers offered in the logging settings (as shown below), but when I want to get the logs from an Inventory Sync or an Ad-hoc command, not all the info that is shown in the UI or in the API is present in the logs.
```
[
"awx",
"activity_stream",
"job_events",
"system_tracking",
"broadcast_websocket"
]
```
Example: I sync a source of an Azure dynamic inventory. The log sent from AAP to Logstash contains the information of the default background playbook, project_update.yml, which Ansible uses internally. That is fine; the problem is that no information about the actual inventory debug output is shown. As an example, in the UI I receive this info where it gets the groups and hosts, or, in case of error, the output further below:
```
21.385 INFO Processing JSON output...
21.388 INFO Loaded 13 groups, 13 hosts
21.578 INFO Inventory import completed for 12345567 in 0.2s
```
ERROR
```
[WARNING]: * Failed to parse
/runner/project/azure/23456789.azure_rm.yml with
auto plugin: a batched request failed with status code 404, url /subscriptions/
23456789/providers/Microsoft.Compute/virtualMachine
```
**So it would be very useful to get that debug info as a stdout field sent within the logs.**
The main reason for this is to be able to build dashboards of accounts with a certain type of problem, to keep things more organized.
### Select the relevant components
- [ ] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [X] Other
### Steps to reproduce
1. Have an Inventory with a source prepared
2. Deploy ELK (as a Docker Compose example you can follow this documentation, which is really fast to deploy: https://community.hetzner.com/tutorials/deploy-elk-stack-with-docker )
3. Configure Logging in your AWX/AAP to send logs to the configured Logstash IP:port
4. Try to get the debug information of an inventory sync (whether it ends in error or not)
### Current results
No inventory debug output is sent within the logs.
### Suggested feature result
Add a new object in the JSON, or a new dict key like "stdout", where all the debug output is collected so it can be analyzed afterwards.
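For instance, something along these lines (a purely hypothetical payload; the field names are only a suggestion):

```json
{
  "logger_name": "awx.analytics.job_events",
  "event": "inventory_update",
  "inventory_source": "azure_rm",
  "stdout": "Processing JSON output...\nLoaded 13 groups, 13 hosts\nInventory import completed in 0.2s"
}
```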
### Additional information
_No response_ | open | 2024-06-26T09:27:46Z | 2024-07-24T17:34:05Z | https://github.com/ansible/awx/issues/15302 | [
"type:enhancement",
"help wanted",
"community"
] | valkiriaaquatica | 3 |
FactoryBoy/factory_boy | sqlalchemy | 961 | `FuzzyAttribute` actually should be named as `FuzzyFunction` | #### The problem
There is `LazyFunction` (takes a callable without args) and `LazyAttribute` (takes a callable with one argument, the object itself). But there is only one fuzzy class, `FuzzyAttribute`, which actually takes the same kind of callable as `LazyFunction`.
#### Proposed solution
I would like to see a `FuzzyAttribute` that takes a callable receiving the object itself (like `LazyAttribute` does), and a fix for this confusing naming of the fuzzy classes.
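To illustrate the proposal with a minimal standalone sketch (this is not factory_boy's real implementation; the class and names are made up), the new declaration would call its fuzzer with the object under construction:

```python
import random

class FuzzyAttributeWithSelf:
    """Sketch of a fuzzy declaration whose callable receives the object being built."""
    def __init__(self, fuzzer):
        self.fuzzer = fuzzer

    def evaluate(self, obj):
        # Unlike today's FuzzyAttribute, the fuzzer sees the object itself,
        # so it can depend on attributes that were already set.
        return self.fuzzer(obj)

class FakeUser:
    def __init__(self, age):
        self.age = age
        decl = FuzzyAttributeWithSelf(lambda o: random.randint(o.age, o.age + 10))
        self.lucky_number = decl.evaluate(self)
```

This would make the fuzzy family mirror the lazy one: a "function" variant without arguments, and an "attribute" variant that receives the object.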
| open | 2022-07-10T19:07:01Z | 2022-12-22T14:30:52Z | https://github.com/FactoryBoy/factory_boy/issues/961 | [
"Feature"
] | PerchunPak | 5 |
noirbizarre/flask-restplus | flask | 387 | Requested response fields | In my API, responses contain many fields. In order to download as small responses as possible, I use `X-Fields` header to filter out fields I don't need. How can I get a list of requested response fields in `Resource` method functions so that I can optimize my database queries too? | open | 2018-01-25T11:42:09Z | 2018-01-25T11:42:09Z | https://github.com/noirbizarre/flask-restplus/issues/387 | [] | lubo | 0 |
httpie/cli | python | 722 | --ssl โ TLS 1.3 & Python 3.7 compatibility | Now that TLS1.3 is out **[1]** it would be great to add that to the list of supported ssl parameters.
` [--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]`
**[1]** https://tools.ietf.org/html/rfc8446
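For reference, here is a small sketch (an assumption about how it might be wired, not HTTPie's actual code) of how the extended `--ssl` choices could map onto Python's `ssl` module, which exposes `ssl.TLSVersion` since Python 3.7:

```python
import ssl

# Hypothetical mapping from --ssl choices to minimum TLS versions.
SSL_VERSION_ARG_MAPPING = {
    "tls1": ssl.TLSVersion.TLSv1,
    "tls1.1": ssl.TLSVersion.TLSv1_1,
    "tls1.2": ssl.TLSVersion.TLSv1_2,
    "tls1.3": ssl.TLSVersion.TLSv1_3,
}

def make_context(version_name):
    """Build an SSL context pinned to at least the requested TLS version."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = SSL_VERSION_ARG_MAPPING[version_name]
    return ctx
```

Whether the actual OpenSSL build supports TLS 1.3 would still need to be checked at runtime, of course.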
| open | 2018-10-17T10:04:07Z | 2023-12-19T19:12:50Z | https://github.com/httpie/cli/issues/722 | [] | jaimejim | 4 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,613 | Training the model on custom dataset | HI team it's a wonderful work you've. It's appreciable.
However I am trying to fine tune the model on my custom dataset which has noise(check boxes) in them and need the output as data without checkboxes. I trained the model using the necessary requirements of the CycleGAN but still when I test the the images with checkboxes I am getting blank output.
I humbly request anyone form the team or from the community help me out with this, It would deeply appreciated. | open | 2023-11-09T10:16:26Z | 2023-11-09T10:16:26Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1613 | [] | AGRocky | 0 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 948 | Missing synthesizer pretrained.pt | I already tried these files
https://drive.google.com/drive/folders/1aPYBbabQGNFHp6DegcKhfwNyLTXcKg5f?usp=sharing from Francisco
https://drive.google.com/drive/folders/1lb-LlS8Sx9RqcGzuV6GxvKHk-PC9TqQx?usp=sharing from Alex
https://drive.google.com/file/d/1n1sPXvT34yXFLT47QZA6FIRGrwMeSsZc/view from RobbeW
I still get the `FileNotFoundError: [Errno 2] No such file or directory: 'synthesizer\\saved_models\\pretrained\\pretrained.pt'` any help? | closed | 2021-12-12T18:57:52Z | 2021-12-28T16:57:09Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/948 | [] | AsterTheWanderer | 2 |
coqui-ai/TTS | python | 3,187 | AttributeError: 'TTS' object has no attribute 'is_multi_speaker'[Bug] | ### Describe the bug
pip list | grep TTS
TTS 0.20.2
### To Reproduce
pip list | grep TTS
TTS 0.20.2
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
pip list | grep TTS
TTS 0.20.2
```
### Additional context
_No response_ | closed | 2023-11-10T04:47:43Z | 2023-12-21T10:01:51Z | https://github.com/coqui-ai/TTS/issues/3187 | [
"bug"
] | lucasjinreal | 9 |
huggingface/datasets | numpy | 7,448 | `datasets.disable_caching` doesn't work | When I use `Dataset.from_generator(my_gen)` to load my dataset, it simply skips my changes to the generator function.
I tried `datasets.disable_caching`, but it doesn't work! | open | 2025-03-13T06:40:12Z | 2025-03-22T04:37:07Z | https://github.com/huggingface/datasets/issues/7448 | [] | UCC-team | 2 |
flairNLP/flair | pytorch | 3,313 | [Question]: How the StackedEmbeddings function actually works? | ### Question
How does the StackedEmbeddings function actually work?
Is it concatenating the two word embeddings, i.e. using `torch.cat([emb1, emb2])`?
I want to concatenate BytePairEmbeddings with TransformerWordEmbeddings, so I'm doing it like this:
```python
bert_emb = TransformerWordEmbeddings(
    model='xlm-roberta-base',
    layers="-1",
    subtoken_pooling="mean",
    fine_tune=True,
    use_context=True,
)
bpe_emb = BytePairEmbeddings('en')
stacked_embeddings = StackedEmbeddings([bert_emb, bpe_emb])
```
So will the resultant word embeddings (stacked_embeddings) be a concatenation of the two embeddings, or an element-wise mean embedding, or something else?
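To make the question concrete, here is what I mean by the two options, using plain lists as stand-ins for tensors (this is only my illustration, not flair internals):

```python
def concat(emb1, emb2):
    # List concatenation: dimensionalities add up.
    return emb1 + emb2

def elementwise_mean(emb1, emb2):
    # Element-wise mean: requires equal dimensionality, which is preserved.
    assert len(emb1) == len(emb2)
    return [(a + b) / 2 for a, b in zip(emb1, emb2)]

bert_vec = [0.1, 0.2, 0.3]  # e.g. 768-dim in reality
bpe_vec = [0.4, 0.5]        # e.g. a different, smaller dimension
stacked = concat(bert_vec, bpe_vec)
```

If StackedEmbeddings does the former, the stacked embedding length should equal the sum of the two embedding lengths, which would be easy to check by printing the dimensionality.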
Thank you | open | 2023-09-08T02:18:21Z | 2023-09-18T08:13:31Z | https://github.com/flairNLP/flair/issues/3313 | [
"question"
] | ijazul-haq | 3 |
plotly/dash-table | dash | 957 | please I need a feature that dragging to selecting range on datatable | please I need a feature that dragging to selecting range on datatable
it is available in Streamlit...!!!! | open | 2024-05-30T16:08:18Z | 2024-05-30T16:08:18Z | https://github.com/plotly/dash-table/issues/957 | [] | nicobockko | 0 |
tensorly/tensorly | numpy | 580 | [doc] Adding proper documentation for proximal operators | #### Issue
We have several cool proximal operators inside the submodule `tenalg/proximal.py`, which are not documented.
#### Fix
It would be simple and useful, in my opinion, to have a small text in the documentation about them.
I should be able to tackle this in the near future.
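For instance, the page could open with the classic soft-thresholding operator (the proximal operator of the scaled l1 norm); a tiny pure-Python sketch of it, not tensorly's actual implementation, might look like:

```python
def soft_threshold(x, threshold):
    """Proximal operator of threshold * |x| for a scalar x.

    Shrinks x toward zero by `threshold`, clipping at zero:
    prox(x) = sign(x) * max(|x| - threshold, 0).
    """
    if x > threshold:
        return x - threshold
    if x < -threshold:
        return x + threshold
    return 0.0
```

Each operator in `tenalg/proximal.py` could get a short description like this, plus the constraint it enforces.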
| open | 2024-10-31T07:58:04Z | 2024-10-31T08:09:14Z | https://github.com/tensorly/tensorly/issues/580 | [
"documentation",
"easy issue"
] | cohenjer | 0 |
onnx/onnx | deep-learning | 6,426 | [Feature request] Deno webgpu support | ### System information
_No response_
### What is the problem that this feature solves?
Support to running onnx models in webgpu using deno
### Alternatives considered
_No response_
### Describe the feature
Deno has support to webgpu since v1.39
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
_No response_
### Are you willing to contribute it (Y/N)
None
### Notes
Actually not working because this error:
`error: Uncaught (in promise) Error: no available backend found. ERR: [webgpu] backend not found.`
Debug
`console.log(await navigator.gpu.requestAdapter())`
```
GPUAdapter {
features: GPUSupportedFeatures [
"depth-clip-control",
"timestamp-query",
"indirect-first-instance",
"shader-f16",
"depth32float-stencil8",
"texture-compression-bc",
"rg11b10ufloat-renderable",
"bgra8unorm-storage",
"float32-filterable",
"texture-format-16-bit-norm",
"texture-adapter-specific-format-features",
"pipeline-statistics-query",
"timestamp-query-inside-passes",
"mappable-primary-buffers",
"texture-binding-array",
"buffer-binding-array",
"storage-resource-binding-array",
"sampled-texture-and-storage-buffer-array-non-uniform-indexing",
"uniform-buffer-and-storage-texture-array-non-uniform-indexing",
"partially-bound-binding-array",
"multi-draw-indirect",
"multi-draw-indirect-count",
"push-constants",
"address-mode-clamp-to-zero",
"address-mode-clamp-to-border",
"polygon-mode-line",
"polygon-mode-point",
"conservative-rasterization",
"vertex-writable-storage",
"clear-texture",
"spirv-shader-passthrough",
"multiview",
"shader-f64",
"shader-i16",
"shader-primitive-index",
"shader-unused-vertex-output"
],
limits: GPUSupportedLimits {
maxTextureDimension1D: 16384,
maxTextureDimension2D: 16384,
maxTextureDimension3D: 2048,
maxTextureArrayLayers: 2048,
maxBindGroups: 8,
maxBindingsPerBindGroup: 1000,
maxBufferSize: 2147483647,
maxDynamicUniformBuffersPerPipelineLayout: 16,
maxDynamicStorageBuffersPerPipelineLayout: 8,
maxSampledTexturesPerShaderStage: 8388606,
maxSamplersPerShaderStage: 8388606,
maxStorageBuffersPerShaderStage: 8388606,
maxStorageTexturesPerShaderStage: 8388606,
maxUniformBuffersPerShaderStage: 8388606,
maxUniformBufferBindingSize: 2147483648,
maxStorageBufferBindingSize: 2147483648,
minUniformBufferOffsetAlignment: 32,
minStorageBufferOffsetAlignment: 32,
maxVertexBuffers: 16,
maxVertexAttributes: 32,
maxVertexBufferArrayStride: 2048,
maxInterStageShaderComponents: 128,
maxColorAttachments: 8,
maxColorAttachmentBytesPerSample: 32,
maxComputeWorkgroupStorageSize: 65536,
maxComputeInvocationsPerWorkgroup: 1024,
maxComputeWorkgroupSizeX: 1024,
maxComputeWorkgroupSizeY: 1024,
maxComputeWorkgroupSizeZ: 1024,
maxComputeWorkgroupsPerDimension: 65535
},
info: GPUAdapterInfo {
vendor: "4098",
architecture: "",
device: "29695",
description: "AMD Radeon RX 6600 (RADV NAVI23)"
},
isFallbackAdapter: false
}
```
| closed | 2024-10-04T03:14:35Z | 2024-10-04T03:17:59Z | https://github.com/onnx/onnx/issues/6426 | [
"topic: enhancement"
] | jlucaso1 | 0 |
seleniumbase/SeleniumBase | pytest | 3,053 | Need updated UC examples | The example code for using SB with UC given here seem to no longer work:
https://github.com/seleniumbase/SeleniumBase/blob/af3d9545473e55b2a25cdbab8be0b1ed5e1f6afa/examples/raw_uc_mode.py
Here's my code running the example on Python 3.12 on Ubuntu 24.04 LTS
```python
import os
import sys
import time
import json
from seleniumbase import SB
from loguru import logger
from pprint import pprint  # needed by add_cdp_listener below
def add_cdp_listener(driver):
# Add CDP listener to capture network events
driver.add_cdp_listener(
"Network.requestWillBeSentExtraInfo",
lambda data: pprint(data)
)
def click_turnstile_and_verify(driver):
driver.uc_gui_handle_captcha()
driver.assert_element("img#captcha-success", timeout=3)
driver.highlight("img#captcha-success", loops=8)
def main():
headed = False
logger.info("Starting WebDriver Setup.")
try:
with SB(
headed=headed,
devtools=False,
remote_debug=False,
) as driver:
logger.info("WebDriver created successfully.")
url = "https://gitlab.com/users/sign_in"
driver.uc_open_with_reconnect(url, 4)
driver.uc_gui_click_captcha()
driver.assert_text("Username", '[for="user_login"]', timeout=3)
driver.assert_element('label[for="user_login"]')
driver.highlight('button:contains("Sign in")')
driver.highlight('h1:contains("GitLab.com")')
driver.post_message("SeleniumBase wasn't detected", duration=4)
logger.info("WebDriver session ended.")
except Exception as e:
logger.error(f"Error initializing WebDriver: {e}")
dump_debug_info()
raise
def dump_debug_info():
"""Dump debug information when WebDriver creation fails."""
logger.debug("Dumping debug information...")
try:
logger.debug("System path: " + str(sys.path))
logger.debug("Environment variables: " + str(os.environ))
except Exception as e:
logger.error(f"Failed to dump debug information: {e}")
if __name__ == "__main__":
main()
```
Error:
```
2024-08-24 01:53:24.444 | ERROR | __main__:main:44 - Error initializing WebDriver: 'BaseCase' object has no attribute 'uc_open_with_reconnect'
```
I'm looking for a technique to use these methods associated with bypassing the checkbox challenge from Cloudflare in the iFrame, but without using SeleniumBase via the CLI or PyTest, as we're using SB as a replacement for Selenium which our code is already built around.
Are there any current examples available of anything similar?
| closed | 2024-08-24T07:02:52Z | 2024-08-24T21:03:32Z | https://github.com/seleniumbase/SeleniumBase/issues/3053 | [
"invalid usage",
"UC Mode / CDP Mode"
] | krypterro | 5 |
piskvorky/gensim | data-science | 2,951 | calculation of downsampling .sample_int after vocab-updates looks wrong | The updating of `.sample_int` after a `build_vocab(..., update=True)` looks wrong at:
https://github.com/RaRe-Technologies/gensim/blob/3.8.3/gensim/models/word2vec.py#L1534-L1544
In particular, by only consulting the `raw_vocab` (which in this case is only the new vocab-survey), in many cases it may be failing to recognize truly high-frequency words, and may even (for small unrepresentative updates) be downsampling overall-rare words that are just overrepresented in the new batch. Unsure if this is a problem in practice; the whole update-vocab functionality is a poorly-grounded & underanalyzed mess.
httpie/cli | api | 639 | Print request headers regardless of connection error | If `-v` is set in the command line arguments HTTPie prints the request along with the response.
The request is, however, not printed if a connection error happened, e.g. the connection was closed by the server before receiving a response.
I think it would be helpful to see the request printed in such case for debugging purposes.
```
> http -v GET http://127.0.0.1:1234/
http: error: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',)) while doing GET request to URL: http://127.0.0.1:1234/
```
In the above case the server closed the connection _after_ the request was completely sent.
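For illustration, the request can be rendered before any network I/O happens, so it survives connection failures — a minimal sketch with plain `urllib` rather than httpie's internals (names are illustrative):

```python
from urllib.request import Request

def describe_request(req: Request) -> str:
    """Render the request line and headers before any network I/O happens."""
    lines = [f"{req.get_method()} {req.full_url}"]
    lines.extend(f"{name}: {value}" for name, value in req.header_items())
    return "\n".join(lines)

req = Request("http://127.0.0.1:1234/", headers={"Accept": "*/*"})
print(describe_request(req))  # printable even if the server later drops the connection
```

httpie would presumably do the equivalent with its own prepared-request object before handing it to the transport layer.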
Running httpie 0.9.9; detailed debug output attached:
[httpie-debug.txt](https://github.com/jakubroztocil/httpie/files/1539173/httpie-debug.txt)
| closed | 2017-12-07T13:21:34Z | 2019-09-03T15:22:37Z | https://github.com/httpie/cli/issues/639 | [
"enhancement",
"planned"
] | maciej | 4 |
huggingface/datasets | pytorch | 7,441 | `drop_last_batch` does not drop the last batch using IterableDataset + interleave_datasets + multi_worker | ### Describe the bug
See the script below
`drop_last_batch=True` is defined using map() for each dataset.
The last batch for each dataset is expected to be dropped, id 21-25.
The code behaves as expected when num_workers=0 or 1.
When using num_workers>1, 'a-11', 'b-11', 'a-12', 'b-12' are gone and instead 21 and 22 are sampled.
### Steps to reproduce the bug
```
from datasets import Dataset
from datasets import interleave_datasets
from torch.utils.data import DataLoader
def convert_to_str(batch, dataset_name):
batch['a'] = [f"{dataset_name}-{e}" for e in batch['a']]
return batch
def gen1():
for ii in range(1, 25):
yield {"a": ii}
def gen2():
for ii in range(1, 25):
yield {"a": ii}
# https://github.com/huggingface/datasets/issues/6565
if __name__ == '__main__':
dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=2)
dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=2)
dataset1 = dataset1.map(lambda x: convert_to_str(x, dataset_name="a"), batched=True, batch_size=10, drop_last_batch=True)
dataset2 = dataset2.map(lambda x: convert_to_str(x, dataset_name="b"), batched=True, batch_size=10, drop_last_batch=True)
interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy="all_exhausted")
print(f"num_workers=0")
loader = DataLoader(interleaved, batch_size=5, num_workers=0)
i = 0
for b in loader:
print(i, b['a'])
i += 1
print('=-' * 20)
print(f"num_workers=1")
loader = DataLoader(interleaved, batch_size=5, num_workers=1)
i = 0
for b in loader:
print(i, b['a'])
i += 1
print('=-' * 20)
print(f"num_workers=2")
loader = DataLoader(interleaved, batch_size=5, num_workers=2)
i = 0
for b in loader:
print(i, b['a'])
i += 1
print('=-' * 20)
print(f"num_workers=3")
loader = DataLoader(interleaved, batch_size=5, num_workers=3)
i = 0
for b in loader:
print(i, b['a'])
i += 1
```
output is:
```
num_workers=0
0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3']
1 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5']
2 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8']
3 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10']
4 ['a-11', 'b-11', 'a-12', 'b-12', 'a-13']
5 ['b-13', 'a-14', 'b-14', 'a-15', 'b-15']
6 ['a-16', 'b-16', 'a-17', 'b-17', 'a-18']
7 ['b-18', 'a-19', 'b-19', 'a-20', 'b-20']
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
num_workers=1
0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3']
1 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5']
2 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8']
3 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10']
4 ['a-11', 'b-11', 'a-12', 'b-12', 'a-13']
5 ['b-13', 'a-14', 'b-14', 'a-15', 'b-15']
6 ['a-16', 'b-16', 'a-17', 'b-17', 'a-18']
7 ['b-18', 'a-19', 'b-19', 'a-20', 'b-20']
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
num_workers=2
0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3']
1 ['a-13', 'b-13', 'a-14', 'b-14', 'a-15']
2 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5']
3 ['b-15', 'a-16', 'b-16', 'a-17', 'b-17']
4 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8']
5 ['a-18', 'b-18', 'a-19', 'b-19', 'a-20']
6 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10']
7 ['b-20', 'a-21', 'b-21', 'a-22', 'b-22']
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
num_workers=3
Too many dataloader workers: 3 (max is dataset.num_shards=2). Stopping 1 dataloader workers.
0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3']
1 ['a-13', 'b-13', 'a-14', 'b-14', 'a-15']
2 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5']
3 ['b-15', 'a-16', 'b-16', 'a-17', 'b-17']
4 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8']
5 ['a-18', 'b-18', 'a-19', 'b-19', 'a-20']
6 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10']
7 ['b-20', 'a-21', 'b-21', 'a-22', 'b-22']
```
### Expected behavior
`'a-21', 'b-21', 'a-22', 'b-22'` should be dropped
### Environment info
- `datasets` version: 3.3.2
- Platform: Linux-5.15.0-1056-aws-x86_64-with-glibc2.31
- Python version: 3.10.16
- `huggingface_hub` version: 0.28.0
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
| open | 2025-03-08T10:28:44Z | 2025-03-09T21:27:33Z | https://github.com/huggingface/datasets/issues/7441 | [] | memray | 2 |
tiangolo/uvicorn-gunicorn-fastapi-docker | pydantic | 152 | How to embed a fairseq model in a uvicorn-gunicorn-fastapi-docker | Hi,
I'm trying to embed a fairseq (https://fairseq.readthedocs.io/en/latest/) model into a uvicorn-gunicorn-fastapi-docker based service. My problem is that fairseq is adding a lot of elements to argparse.ArgumentParser before gunicorn starts and that the latter tries to parse them:
```
semparsing_tool-semanticparsing_recycle_bert_backend-1 | usage: gunicorn
โฆ
semparsing_tool-semanticparsing_recycle_bert_backend-1 | [--tensorboard-logdir DIR]
semparsing_tool-semanticparsing_recycle_bert_backend-1 | [--seed N]
semparsing_tool-semanticparsing_recycle_bert_backend-1 | [--cpu]
semparsing_tool-semanticparsing_recycle_bert_backend-1 | [--tpu]
โฆ
semparsing_tool-semanticparsing_recycle_bert_backend-1 | [--criterion
โฆ
mparsing_tool-semanticparsing_recycle_bert_backend-1 | [--eval-bleu-print-samples]
semparsing_tool-semanticparsing_recycle_bert_backend-1 | data
semparsing_tool-semanticparsing_recycle_bert_backend-1 | gunicorn: error: unrecognized arguments: -k -c /gunicorn_conf.py main:app
```
All the options above are from fairseq.
I should be able to separate options for fairseq and those for gunicorn but the init process is opaque for me and I cannot find where to start.
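For what it's worth, a generic way to let two consumers share `sys.argv` is `argparse`'s `parse_known_args`, which consumes only the options a parser knows about and hands back the rest — a sketch with made-up option sets (the real fairseq/gunicorn parsers are much larger):

```python
import argparse

# Illustrative stand-in for the model-side parser; the options are examples only.
model_parser = argparse.ArgumentParser(add_help=False)
model_parser.add_argument("--seed", type=int)
model_parser.add_argument("--cpu", action="store_true")

argv = ["--seed", "1", "-k", "uvicorn.workers.UvicornWorker",
        "-c", "/gunicorn_conf.py", "main:app"]

# parse_known_args() does not error on unknown options; it returns them
# so a second parser (here: gunicorn's) could handle them afterwards.
model_args, remaining = model_parser.parse_known_args(argv)
print(model_args.seed)  # 1
print(remaining)        # ['-k', 'uvicorn.workers.UvicornWorker', '-c', '/gunicorn_conf.py', 'main:app']
```

Whether this can be wired into the image's startup sequence is exactly the open question here.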
I will ask the same question on fairseq because I don't know which, if any, is to blame here. | closed | 2022-02-08T11:54:17Z | 2022-02-09T18:23:05Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/152 | [] | kleag | 1 |
mckinsey/vizro | pydantic | 964 | Is it possible to restrict Dash AG Grid editing to admins only? | ### Question
I would like to know if it's possible to create a feature where only users with admin credentials can access and edit the Dash AG Grid. The goal is to display the grid with view-only permissions for non-admin users, while allowing admins to modify the grid's content.
Could you provide guidance on how to implement this, or if there are existing methods to handle role-based access control for the Dash AG Grid component?
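Not an official answer, but one common pattern is to derive the grid's column defaults from a role flag determined server-side; a minimal sketch assuming dash-ag-grid's `defaultColDef`/`editable` options (names from AG Grid — verify against your version):

```python
def grid_options(is_admin: bool) -> dict:
    """Build AG Grid options where editing is enabled only for admins."""
    return {
        "defaultColDef": {
            "editable": bool(is_admin),  # admins edit, everyone else view-only
            "sortable": True,
        }
    }

print(grid_options(is_admin=False))  # {'defaultColDef': {'editable': False, 'sortable': True}}
```

The role check itself (e.g. from your auth layer) would have to happen before the grid component is constructed.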
### Code/Examples
_No response_
### Which package?
vizro
### Code of Conduct
- [x] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2025-01-23T07:29:09Z | 2025-01-27T07:24:30Z | https://github.com/mckinsey/vizro/issues/964 | [
"Needs triage :mag:",
"General Question :question:"
] | BalaNagendraReddy | 2 |
davidsandberg/facenet | tensorflow | 614 | How to pretrain model when setting --pretrained_model | Error:
```
Data loss: not an sstable (bad magic number)
```
I tried both the pretrained model provided by the repository and a first-epoch model from training from scratch; both failed with the same error. | closed | 2018-01-16T12:14:09Z | 2019-06-18T11:59:44Z | https://github.com/davidsandberg/facenet/issues/614 | [] | xiaoxinyi | 2 |
proplot-dev/proplot | data-visualization | 169 | Pass rc.cmap and rc.cycle arguments through respective constructor functions | ### Description
Since `cycle` supports colormaps, passing a colormap name in the configuration should also be supported.
### Steps to reproduce
```python
import proplot as plot
import numpy as np
fig, axs = plot.subplots()
state = np.random.RandomState(51423)
data = (20 * state.rand(10, 21) - 10).cumsum(axis=0)
plot.rc.update({'cycle': 'plum', 'lines.linewidth': '5'})
lines = axs.plot(data[:, :5])
```
**Expected behavior**:
Like:
```
plot.rc.update({'lines.linewidth': '5'})
lines = axs.plot(data[:, :5], cycle='plum')
```

**Actual behavior**: [What actually happened]
```
Traceback (most recent call last):
File "/home/xin/Desktop/test.py", line 11, in <module>
plot.rc.update({'cycle': 'plum', 'lines.linewidth': '5'})
File "/home/xin/Documents/Github/proplot/proplot/config.py", line 906, in update
self.__setitem__(prefix + key, value)
File "/home/xin/Documents/Github/proplot/proplot/config.py", line 410, in __setitem__
kw_quick, kw_added, kw_params = self._get_param_dicts(key, value)
File "/home/xin/Documents/Github/proplot/proplot/config.py", line 490, in _get_param_dicts
colors = _get_cycle_colors(value)
File "/home/xin/Documents/Github/proplot/proplot/config.py", line 1020, in _get_cycle_colors
+ ', '.join(map(repr, cycles)) + '.'
ValueError: Invalid cycle name 'plum'. Options are: '538', 'accent', 'classic', 'colorblind', 'colorblind10', 'dark2', 'default', 'flatui', 'ggplot', 'paired', 'pastel1', 'pastel2', 'qual1', 'qual2', 'set1', 'set2', 'set3', 'tab10', 'tab20', 'tab20b', 'tab20c'.
```
### Proplot version
[master branch](https://github.com/lukelbd/proplot/commit/6903e33efe963192ce465a51341502714c812c58)
| closed | 2020-05-16T12:11:52Z | 2021-08-18T19:36:42Z | https://github.com/proplot-dev/proplot/issues/169 | [
"duplicate"
] | zxdawn | 2 |
joke2k/django-environ | django | 123 | Expose .env variables as a dictionary | Use case:
I am trying to use my project on an AWS Beanstalk instance. For this to run, I have to set environment variables. If django-environ exposed the .env variables in a structure that can be looped over, I could script this and it would save me some time (and allow automation).
Pointer to https://github.com/pedroburon/dotenv/blob/master/dotenv/__init__.py where this behaviour is implemented.
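For illustration, the kind of structure meant here — a plain dict parsed from the .env content that can then be looped to emit `eb setenv` calls — could look like this (simplified sketch, not django-environ's actual parser):

```python
def parse_env(text: str) -> dict:
    """Parse .env-style KEY=VALUE lines into a plain dict (simplified)."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip("'\"")
    return values

env = parse_env("DEBUG=true\n# a comment\nDATABASE_URL='postgres://db/app'\n")
for key, value in env.items():
    print(f"eb setenv {key}={value}")  # hypothetical Beanstalk CLI usage
```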
Thanks! | open | 2017-05-05T08:06:26Z | 2021-09-04T20:26:36Z | https://github.com/joke2k/django-environ/issues/123 | [
"enhancement"
] | philippeluickx | 0 |
nltk/nltk | nlp | 2,456 | grammar.CFG.is_chomsky_normal_form returns True even if start symbol is produced by some production | In a Chomsky normal form grammar, the start symbol must not occur on the right side of a production by definition. However, the current implementation does not check for this. As a result, NLTK indicates that the grammar `S -> S S` is in normal form, even though it does not comply with the definition.
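A sketch of the missing condition, with the grammar modeled as plain `(lhs, rhs)` pairs rather than NLTK objects (uppercase = nonterminal, lowercase = terminal; ε-rules ignored for brevity):

```python
def is_cnf(productions, start):
    """CNF check including the start-symbol rule, over (lhs, rhs-tuple) pairs."""
    for lhs, rhs in productions:
        if start in rhs:
            return False  # the start symbol must not occur on any right-hand side
        if len(rhs) == 1 and rhs[0].islower():
            continue      # A -> a
        if len(rhs) == 2 and all(sym.isupper() for sym in rhs):
            continue      # A -> B C
        return False
    return True

print(is_cnf([("S", ("S", "S"))], start="S"))                                # False
print(is_cnf([("S", ("A", "B")), ("A", ("a",)), ("B", ("b",))], start="S"))  # True
```

With the start-symbol rule in place, `S -> S S` is rejected even though it otherwise matches the `A -> B C` shape.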
```
>>> import nltk.grammar
>>> G = nltk.grammar.CFG.fromstring("S -> S S")
>>> G.is_chomsky_normal_form()
True
``` | closed | 2019-11-03T07:27:13Z | 2021-11-17T08:34:08Z | https://github.com/nltk/nltk/issues/2456 | [
"resolved"
] | jacobdweightman | 2 |
nvbn/thefuck | python | 1,453 | Python 3.11 and 3.12 complain about `imp`, which cannot be installed in those environments. | FYI: Python 3.11 and 3.12 on latest Ubuntu, `thefuck` screams that `imp` is missing... there seems to be no viable workaround except a custom `imp` build. Ehhh... | open | 2024-07-03T02:45:22Z | 2024-08-12T00:35:55Z | https://github.com/nvbn/thefuck/issues/1453 | [] | krstp | 5 |
keras-team/keras | python | 20,172 | Is there a keras 3 equivalent to serialization.DisableSharedObjectScope()? | I am trying to add support for keras 3 to TensorFlow Federated, and I need to check whether there are shared embeddings between layers when cloning a model and, if that is the case, raise an error. Here is the code in question:
https://github.com/google-parfait/tensorflow-federated/blob/523c129676236f7060fafb95b2a8fed683a5e519/tensorflow_federated/python/learning/models/functional.py#L502
Is there something similar to this legacy function in tf_keras in keras 3?
https://github.com/keras-team/tf-keras/blob/c5f97730b2e495f5f56fc2267d22504075e46337/tf_keras/models/cloning.py#L525
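Absent a direct equivalent, one fallback is an identity-based scan for objects referenced from more than one layer — a rough sketch with duck-typed stand-ins (not the Keras 3 API, just the idea):

```python
def find_shared(layers):
    """Return objects (e.g. weights) referenced by more than one layer, by identity."""
    owner = {}
    shared = []
    for layer in layers:
        for weight in layer.weights:
            if id(weight) in owner and owner[id(weight)] is not layer:
                shared.append(weight)
            else:
                owner[id(weight)] = layer
    return shared

class FakeWeight:  # stand-in for a variable/weight object
    pass

class FakeLayer:   # stand-in for a layer exposing .weights
    def __init__(self, weights):
        self.weights = weights

tied = FakeWeight()
emb_in = FakeLayer([tied, FakeWeight()])
emb_out = FakeLayer([tied, FakeWeight()])
print(len(find_shared([emb_in, emb_out])))  # 1 -> shared embedding detected
```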
| closed | 2024-08-27T09:14:41Z | 2024-10-21T11:43:41Z | https://github.com/keras-team/keras/issues/20172 | [
"type:support"
] | markomitos | 5 |
vitalik/django-ninja | rest-api | 487 | Personalize and secure the Redoc page | Hello!
I'm trying to customize the Swagger/REDOC documenter a bit.
Is there a way to change the favicon displayed in the REDOC/Swagger documenter, without doing a complete override of templates/ninja/swagger.html|redoc.html?
Swagger, in conjunction with the rest_framework package, can be configured to not allow unauthorized users to reach the documentation page; would it be possible to get that here too?
Old api in swagger, using rest_framework package, user not logged in:

New api in redoc, using django-ninja package, user not logged in:

| closed | 2022-06-28T10:29:41Z | 2022-06-30T08:59:56Z | https://github.com/vitalik/django-ninja/issues/487 | [] | JFeldaca | 3 |
fastapiutils/fastapi-utils | fastapi | 261 | [QUESTION] There's a way to add a custom decorator to a class-based view? | I'm trying to do something like the following:
```python
@cbv(users_router)
@ResponseHandler.class_decorator
class Users:
controller = UserController()
@users_router.post("/users", status_code=201)
async def create_user(self, user_data: Dict, response: Response) -> Dict:
return self.controller.create_user(**user_data)
@users_router.get("/users/{user_id}", status_code=302)
async def get_user(self, user_id: int, response: Response) -> Dict:
return self.controller.obtain_user(user_id)
@users_router.get("/users", status_code=302)
async def get_all_users(self, response: Response) -> Dict:
return self.controller.obtain_all_users()
```
The `class_decorator` decorator adds a custom response for each of the requests, but when I try to execute one of the services, this error appears:
```bash
{"detail":[{"loc":["query","self"],"msg":"field required","type":"value_error.missing"}]}
```
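One thing worth checking (speculative, since the `ResponseHandler` code isn't shown): if the class decorator wraps the methods without preserving their signatures, FastAPI's signature inspection can mistake `self` for a query parameter. A generic sketch of signature-preserving wrapping with `functools.wraps`:

```python
import functools
import inspect

def wrap_methods(cls):
    """Illustrative class decorator that wraps methods without hiding signatures."""
    for name, fn in list(vars(cls).items()):
        if callable(fn) and not name.startswith("__"):
            @functools.wraps(fn)  # keeps __wrapped__, so inspect.signature still works
            def wrapper(*args, __fn=fn, **kwargs):
                # ... custom response handling would go here ...
                return __fn(*args, **kwargs)
            setattr(cls, name, wrapper)
    return cls

@wrap_methods
class Demo:
    def greet(self, name: str) -> str:
        return f"hello {name}"

print(inspect.signature(Demo.greet))  # (self, name: str) -> str
print(Demo().greet("world"))          # hello world
```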
| open | 2022-10-18T04:42:07Z | 2022-10-18T04:43:53Z | https://github.com/fastapiutils/fastapi-utils/issues/261 | [
"question"
] | JesusFragoso | 1 |
chaoss/augur | data-visualization | 2,292 | Explore incorporating softcite/softcite_kb data into Augur for Academic Metrics | **Is your feature request related to a problem? If so, please describe the problem:**
Working in the context of the @chaoss project, we are developing metrics for Academic open source contexts. These include alt metrics related to software, as well as more conventional metrics related to academic publications.
https://github.com/softcite/softcite_kb is a project that could help support this effort.
| open | 2023-04-05T20:04:27Z | 2023-06-04T17:38:58Z | https://github.com/chaoss/augur/issues/2292 | [
"good first issue",
"first-timers-only"
] | sgoggins | 1 |
dynaconf/dynaconf | django | 348 | Add mount point option for vault | **Is your feature request related to a problem? Please describe.**
We are using this library for configuration and are currently moving our secrets to vault. However, there is no option to set the `mount_point`. This means it ends up using the default from the `hvac` client (`secret`).
**Describe the solution you'd like**
Add option to set mount point via option such as `VAULT_MOUNT_POINT_FOR_DYNACONF` This is already supported in the `hvac` client, ie:
```
client.kv.read_secret_version(path=vault_secrets_path, mount_point='global/kv')
```
| closed | 2020-05-29T15:04:49Z | 2020-05-29T17:11:37Z | https://github.com/dynaconf/dynaconf/issues/348 | [
"Not a Bug",
"RFC"
] | sfunkhouser | 0 |
piskvorky/gensim | data-science | 3,232 | Negative exponent with value -1 (minus one) raises error when loading Doc2Vec model | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I try to vary the value of the negative exponent parameter. When I use a value of -1, training works fine, saving the model too, but when I try to load the model afterwards with Doc2Vec.load() it raises the error "ValueError: Integers to negative integer powers are not allowed."
This is due to the following line: https://github.com/RaRe-Technologies/gensim/blob/266a01455ade51a93a08dba5950e87b4d98e0724/gensim/models/word2vec.py#L836
Here, numpy is asked to raise an integer to the power of another integer that is negative, which it does not allow.
I guess this could be solved by converting the exponent to a float in this case?
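To illustrate the behavior and the proposed fix (a minimal sketch outside gensim):

```python
import numpy as np

counts = np.array([5, 3, 2])  # integer dtype, like the stored count array
ns_exponent = -1              # the user-supplied negative exponent

# counts ** ns_exponent       # would raise: Integers to negative integer powers are not allowed
weights = counts ** float(ns_exponent)  # casting the exponent to float avoids the error
print(weights)                # floats: [0.2, 0.333..., 0.5]
```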
| closed | 2021-09-13T16:12:11Z | 2021-10-28T01:18:52Z | https://github.com/piskvorky/gensim/issues/3232 | [
"bug",
"difficulty easy",
"good first issue",
"impact HIGH",
"reach LOW"
] | edg-stg | 3 |
LibreTranslate/LibreTranslate | api | 202 | Frontend is bugged | I try to go and enter some text here:
https://libretranslate.com/#
As I type, it tries to translate on every keystroke, and then within a couple of seconds I reach a translation timeout limit.
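For illustration, the frontend could rate-limit requests with a simple cool-off check (a throttle; the usual frontend variant is a debounce timer reset on each keystroke) — sketch only, not LibreTranslate code:

```python
import time

class CoolOff:
    """Accept an event only if `wait` seconds passed since the last accepted one."""
    def __init__(self, wait):
        self.wait = wait
        self._last = None

    def should_fire(self, now=None):
        now = time.monotonic() if now is None else now
        if self._last is None or now - self._last >= self.wait:
            self._last = now
            return True
        return False

cool = CoolOff(wait=0.5)
print(cool.should_fire(now=0.0))  # True  (first keystroke triggers a translation)
print(cool.should_fire(now=0.2))  # False (still typing, suppressed)
print(cool.should_fire(now=0.9))  # True  (pause elapsed, translate again)
```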
There should be some kind of cool off period between typing where it waits a bit before it tries a translation. | closed | 2022-01-31T09:58:38Z | 2022-01-31T15:50:30Z | https://github.com/LibreTranslate/LibreTranslate/issues/202 | [] | mayeaux | 6 |
zappa/Zappa | django | 596 | [Migrated] Fix the naming of event rule target id | Originally from: https://github.com/Miserlou/Zappa/issues/1546 by [jaykay](https://github.com/jaykay)
<!--
Before you submit this PR, please make sure that you meet these criteria:
* Did you read the [contributing guide](https://github.com/Miserlou/Zappa/#contributing)?
* If this is a non-trivial commit, did you **open a ticket** for discussion?
* Did you **put the URL for that ticket in a comment** in the code?
* If you made a new function, did you **write a good docstring** for it?
* Did you avoid putting "_" in front of your new function for no reason?
* Did you write a test for your new code?
* Did the Travis build pass?
* Did you improve (or at least not significantly reduce) the amount of code test coverage?
* Did you **make sure this code actually works on Lambda**, as well as locally?
* Did you test this code with both **Python 2.7** and **Python 3.6**?
* Does this commit ONLY relate to the issue at hand and have your linter shit all over the code?
If so, awesome! If not, please try to fix those issues before submitting your Pull Request.
Thank you for your contribution!
-->
## Description
<!-- Please describe the changes included in this PR -->
This is a fix for #1545
## GitHub Issues
<!-- Proposed changes should be discussed in an issue before submitting a PR. -->
<!-- Link to relevant tickets here. -->
#1545
| closed | 2021-02-20T12:26:22Z | 2022-08-18T12:56:39Z | https://github.com/zappa/Zappa/issues/596 | [] | jneves | 1 |
docarray/docarray | fastapi | 1,099 | Investigate if we can depend on `jaxtyping` for tensor type hints | @johannes what is the conclusion here ? | open | 2023-02-08T08:24:07Z | 2023-02-08T10:33:09Z | https://github.com/docarray/docarray/issues/1099 | [] | JohannesMessner | 4 |
plotly/dash | data-visualization | 2,423 | Add loading attribute to html.Img component | **Is your feature request related to a problem? Please describe.**
I'm trying to lazy load images using the in built browser functionality, but I can't because that's not exposed in the html.Img component.
**Describe the solution you'd like**
I'd like the loading attribute to be added to the html.Img built in component, so I can use
```
html.Img(src=..., loading="lazy")
```
**Describe alternatives you've considered**
I tried using dangerously set HTML from the dcc markdown component and the dash-dangerously-set-html library. The former didn't work (I'm assuming something to do with the async nature of the markdown loading process). The latter works, but this component doesn't support serialisation like other dash components and broke some caching (standard Flask-Caching stuff) required for my particular use case.
**Additional context**
Discussed briefly on the plotly forum https://community.plotly.com/t/html-img-browser-based-lazy-loading/72637/3
| open | 2023-02-13T12:15:58Z | 2024-08-13T19:26:45Z | https://github.com/plotly/dash/issues/2423 | [
"feature",
"P3"
] | LiamLombard | 1 |
seleniumbase/SeleniumBase | web-scraping | 2,728 | Exporting recorded script to json | Hi, I am using **seleniumbase** to record actions on my browser. It works fine and generates a script having all the steps followed during the recording. I want to export all the actions and the subsequent element id/URL/texts into a json file. How can I do that?
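There is no built-in JSON export that I know of, but since the generated script is just `self.<action>(...)` calls, a post-processing sketch can extract them (hypothetical recording shown; real recorder output varies):

```python
import json
import re

recorded = '''
self.open("https://imdb.com")
self.click("#suggestion-search")
self.type("#suggestion-search", "dune")
'''

actions = []
for match in re.finditer(r'self\.(\w+)\((.*)\)', recorded):
    name, raw_args = match.groups()
    args = [arg.strip().strip('"\'') for arg in raw_args.split(",")] if raw_args else []
    actions.append({"action": name, "args": args})

print(json.dumps(actions, indent=2))
```

This naive comma split breaks on arguments containing commas; a real exporter would parse the script with `ast` instead.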
I am using **sbase mkrec new_test_1.py --url=imdb.com** to create test files to test the recordings | closed | 2024-04-29T12:27:36Z | 2024-05-03T14:45:38Z | https://github.com/seleniumbase/SeleniumBase/issues/2728 | [
"question"
] | Ashish3080 | 3 |
httpie/http-prompt | api | 160 | Support for custom methods (or WebDAV methods) | I think WebDAV is not the only http extension out there, maybe it makes sense to just consider the first word (if it is not reserved word) a method? | open | 2019-09-18T12:51:45Z | 2019-09-18T12:51:45Z | https://github.com/httpie/http-prompt/issues/160 | [] | trollfred | 0 |
encode/httpx | asyncio | 2,892 | Constrain which encodings are supported by `response.text`. | - [x] Initially raised as discussion #2881
---
Currently when accessing `response.text` any installed codec may be loaded, depending on the `Content-Type` header of the response. This is problematic partly because not all codecs are text codecs. It also feels too open, as custom codecs might be installed with arbitrary behaviours.
May suggestion would be that we support the same set of encodings as the chromium browser... https://chromium.googlesource.com/chromium/chromium/+/refs/heads/trunk/chrome/browser/character_encoding.cc#36
We can effect this change by having a hardcoded set of supported codecs, here...
https://github.com/encode/httpx/blob/e63b6594f2863b7c8274eb0991ebc6cad63661f7/httpx/_utils.py#L71-L79 | open | 2023-10-13T12:44:03Z | 2023-10-13T12:46:04Z | https://github.com/encode/httpx/issues/2892 | [
"enhancement"
] | tomchristie | 0 |
deepfakes/faceswap | deep-learning | 677 | Dockerfile.gpu uses the latest tensorflow version | The Dockerfile.gpu uses `FROM tensorflow/tensorflow:latest-py3`.
The latest version of tensorflow is 2.0 alpha (you may check [Docker Hub](https://hub.docker.com/r/tensorflow/tensorflow/tags?page=1)).
It will cause compatibility problems.
So, I think we should use a pinned tensorflow version.
We may use `FROM tensorflow/tensorflow:1.13.1-gpu-py3`.
Anyone have some idea about this? | closed | 2019-03-21T09:09:09Z | 2019-03-21T09:39:06Z | https://github.com/deepfakes/faceswap/issues/677 | [] | lynnfi | 1 |
OpenInterpreter/open-interpreter | python | 1,320 | one-line installer does not set up openinterpreter | ### Describe the bug
both: on a windows 11 system with python 3.12, and on a linux system with python3 3.10,
running the one-line installation script leaves me with:
`openinterpreter: command not found` or its windows equivalent
The same occurs with installing via pip
I tried adding /usr/local/bin to PATH as per the instructions here: https://github.com/OpenInterpreter/open-interpreter/issues/164#issuecomment-1711044334
But that did nothing.
### Reproduce
1) attempt to install openinterpreter on a new computer and new account, on linux or windows, that does not have python or rust yet.
### Expected behavior
That the docs about getting started apply to new users
### Screenshots
_No response_
### Open Interpreter version
0.3.3
### Python version
3.10.12
### Operating System name and version
Windows 11 and Linux Mint 21.3 cinnamon
### Additional context
_No response_ | open | 2024-06-23T17:05:56Z | 2024-07-10T13:55:17Z | https://github.com/OpenInterpreter/open-interpreter/issues/1320 | [] | MisterE123 | 3 |
tensorflow/tensor2tensor | machine-learning | 1,663 | cannot import name 'loas2' | ### Description
I am trying to run `rl/trainer_model_based.py`.
I ran the example command in the file and got the error at line 24 of `env/client_env.py`.
`from grpc import loas2`
I tried several versions of grpc in case the library had changed, but did not succeed.
### Environment information
```
OS: ubuntu 16.04
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
-e git+https://github.com/tensorflow/tensor2tensor@33783fd63bd0debe2138c5569698b31d9af350f6#egg=tensor2tensor
tensorboard==1.14.0
tensorflow-datasets==1.1.0
tensorflow-estimator==1.14.0
tensorflow-gpu==1.14.0
tensorflow-metadata==0.14.0
tensorflow-probability==0.7.0
$ python -V
Python 3.6.9 :: Anaconda, Inc.
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
python -m tensor2tensor.rl.trainer_model_based \
--output_dir=$HOME/t2t/rl_v1 \
--loop_hparams_set=rlmb_base \
--loop_hparams='num_real_env_frames=10000,epochs=3'
(same as the Example invocation in trainer_model_based.py)
```
```
# Error logs:
WARNING: Logging before flag parsing goes to stderr.
W0816 23:41:59.412693 139681197864704 deprecation_wrapper.py:119] From /home/lkh/Codes/tensor2tensor/tensor2tensor/utils/expert_utils.py:68: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
W0816 23:42:00.307283 139681197864704 lazy_loader.py:50]
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
W0816 23:42:03.780892 139681197864704 deprecation_wrapper.py:119] From /home/lkh/Codes/tensor2tensor/tensor2tensor/utils/adafactor.py:27: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
W0816 23:42:03.782154 139681197864704 deprecation_wrapper.py:119] From /home/lkh/Codes/tensor2tensor/tensor2tensor/utils/multistep_optimizer.py:32: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.
W0816 23:42:03.838312 139681197864704 deprecation_wrapper.py:119] From /home/lkh/anaconda3/envs/cule/lib/python3.6/site-packages/mesh_tensorflow/ops.py:4237: The name tf.train.CheckpointSaverListener is deprecated. Please use tf.estimator.CheckpointSaverListener instead.
W0816 23:42:03.838605 139681197864704 deprecation_wrapper.py:119] From /home/lkh/anaconda3/envs/cule/lib/python3.6/site-packages/mesh_tensorflow/ops.py:4260: The name tf.train.SessionRunHook is deprecated. Please use tf.estimator.SessionRunHook instead.
W0816 23:42:03.916944 139681197864704 deprecation_wrapper.py:119] From /home/lkh/Codes/tensor2tensor/tensor2tensor/models/research/neural_stack.py:38: The name tf.nn.rnn_cell.RNNCell is deprecated. Please use tf.compat.v1.nn.rnn_cell.RNNCell instead.
Traceback (most recent call last):
File "/home/lkh/Downloads/pycharm-community-2018.2.5/helpers/pydev/pydevd.py", line 1664, in <module>
main()
File "/home/lkh/Downloads/pycharm-community-2018.2.5/helpers/pydev/pydevd.py", line 1658, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/lkh/Downloads/pycharm-community-2018.2.5/helpers/pydev/pydevd.py", line 1068, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/lkh/Downloads/pycharm-community-2018.2.5/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/lkh/Codes/tensor2tensor/tensor2tensor/rl/trainer_model_based.py", line 38, in <module>
from tensor2tensor.bin import t2t_trainer # pylint: disable=unused-import
File "/home/lkh/Codes/tensor2tensor/tensor2tensor/bin/t2t_trainer.py", line 24, in <module>
from tensor2tensor import models # pylint: disable=unused-import
File "/home/lkh/Codes/tensor2tensor/tensor2tensor/models/__init__.py", line 61, in <module>
from tensor2tensor.models.research import rl
File "/home/lkh/Codes/tensor2tensor/tensor2tensor/models/research/rl.py", line 27, in <module>
from tensor2tensor.envs import tic_tac_toe_env
File "/home/lkh/Codes/tensor2tensor/tensor2tensor/envs/__init__.py", line 24, in <module>
from tensor2tensor.envs import client_env
File "/home/lkh/Codes/tensor2tensor/tensor2tensor/envs/client_env.py", line 24, in <module>
from grpc import loas2
ImportError: cannot import name 'loas2'
We've got an error while stopping in post-mortem: <class 'KeyboardInterrupt'>
```
| closed | 2019-08-16T15:17:30Z | 2019-08-26T18:39:01Z | https://github.com/tensorflow/tensor2tensor/issues/1663 | [] | KyunghyunLee | 4 |
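For context on the failure mode in the traceback above: `loas2` is not part of the public `grpcio` distribution, so the import fails outside of environments that ship it. A common pattern for keeping a package importable when an optional module is missing is to guard the import. This is a hedged sketch of that pattern, not the fix tensor2tensor actually applied; the names simply mirror the failing line.

```python
# Hedged workaround sketch: guard the optional import that the traceback
# above fails on, so the rest of the package can still be loaded.
try:
    from grpc import loas2  # not present in the public grpcio package
except ImportError:          # also catches ModuleNotFoundError
    loas2 = None             # callers must check for None before using it

print(loas2 is None)
```

Code depending on `loas2` would then need an explicit `if loas2 is None:` check before use.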
AntonOsika/gpt-engineer | python | 133 | ValueError: too many values to unpack (expected 1) | Anyone can help with this one? | closed | 2023-06-18T00:51:08Z | 2023-06-18T07:38:29Z | https://github.com/AntonOsika/gpt-engineer/issues/133 | [] | Suketug | 6 |
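Since the issue body gives no code, here is a generic illustration of what raises this error class — it is not gpt-engineer's actual code. `ValueError: too many values to unpack (expected 1)` occurs when the right-hand side of an unpacking assignment yields more items than there are targets on the left.

```python
# Generic illustration (not gpt-engineer's code): unpacking a two-item
# iterable into a single target raises this exact ValueError.
pair = ("key", "value")          # an iterable of two items
try:
    (only,) = pair               # one target, two values -> ValueError
except ValueError as exc:
    msg = str(exc)

print(msg)                       # too many values to unpack (expected 1)
```

In practice the fix is to match the number of targets to the number of values (e.g. `key, value = pair`) or to inspect the value being unpacked.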
huggingface/datasets | tensorflow | 6824 | Winogrande does not seem to be compatible with datasets version of 1.18.0 | ### Describe the bug
I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`.
I do not have such an issue in the 1.17.0 version.
```Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2265, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 371, in __init__
self.config, self.config_id = self._create_builder_config(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 620, in _create_builder_config
builder_config._resolve_data_files(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 211, in _resolve_data_files
self.data_files = self.data_files.resolve(base_path, download_config)
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 799, in resolve
out[key] = data_files_patterns_list.resolve(base_path, download_config)
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 752, in resolve
resolve_pattern(
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 393, in resolve_pattern
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find 'hf://datasets/winogrande@ebf71e3c7b5880d019ecf6099c0b09311b1084f5/winogrande_xl/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']```
### Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset('winogrande', 'winogrande_xl')
```
### Expected behavior
```Downloading data: 100%|████████████████████████████████████████| 2.06M/2.06M [00:00<00:00, 5.16MB/s]
Downloading data: 100%|████████████████████████████████████████| 118k/118k [00:00<00:00, 360kB/s]
Downloading data: 100%|████████████████████████████████████████| 85.9k/85.9k [00:00<00:00, 242kB/s]
Generating train split: 100%|████████████████████████████████████████| 40398/40398 [00:00<00:00, 845491.12 examples/s]
Generating test split: 100%|████████████████████████████████████████| 1767/1767 [00:00<00:00, 362501.11 examples/s]
Generating validation split: 100%|████████████████████████████████████████| 1267/1267 [00:00<00:00, 318768.11 examples/s]```
### Environment info
datasets version: 1.18.0
| closed | 2024-04-18T16:11:04Z | 2024-04-19T09:53:15Z | https://github.com/huggingface/datasets/issues/6824 | [] | spliew | 2 |
zappa/Zappa | django | 527 | [Migrated] An error occurred (IllegalLocationConstraintException) during zappa deploy | Originally from: https://github.com/Miserlou/Zappa/issues/1398 by [mcmonster](https://github.com/mcmonster)
See https://github.com/Miserlou/Zappa/issues/569
I also experienced this issue when attempting to deploy following the out-of-the-box README commands. Zappa's deploy procedure is obfuscating the cause of the issue, namely that my bucket name is not unique. It would be nice if Zappa would suggest this as a possible issue source when the deploy command is run. | closed | 2021-02-20T09:43:57Z | 2023-08-17T01:08:28Z | https://github.com/zappa/Zappa/issues/527 | ["bug", "aws"] | jneves | 1 |
falconry/falcon | api | 1743 | Why the examples need to modify, or it can run | closed | 2020-07-21T09:34:51Z | 2020-07-21T09:35:22Z | https://github.com/falconry/falcon/issues/1743 | [] | mansonami | 1 |
absent1706/sqlalchemy-mixins | sqlalchemy | 71 | Using existing database | I already have a database, and I am using SQLAlchemy to interact with it. I have currently mapped the database tables to SQLAlchemy objects as mentioned below.
```python
Base = automap_base()
Material = Base.classes.app1_material
Customer = Base.classes.app1_customer
```
Can you share how to extend `sqlalchemy-mixins` to these classes?
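One plausible approach (an editorial sketch, not an answer from the thread) is to pass a mixin as the declarative base class via `automap_base(cls=...)`, so every reflected class inherits its helpers. `SmartMixin` below is a hypothetical stand-in for a `sqlalchemy-mixins` class such as `AllFeaturesMixin`, and the `app1_material` table is recreated in-memory just to make the sketch self-contained.

```python
# Hedged sketch: fold helper methods into automapped classes by passing a
# mixin as the declarative base class. "SmartMixin" is a hypothetical
# stand-in for e.g. sqlalchemy_mixins.AllFeaturesMixin.
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

class SmartMixin:
    @classmethod
    def first(cls, session):
        return session.query(cls).first()

# Stand-in for the pre-existing database: one table with a primary key.
engine = create_engine("sqlite://")
metadata = MetaData()
Table(
    "app1_material", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
metadata.create_all(engine)

Base = automap_base(cls=SmartMixin)   # every mapped class inherits the mixin
Base.prepare(autoload_with=engine)    # reflect the existing tables
Material = Base.classes.app1_material

session = Session(engine)
session.add(Material(name="steel"))
session.commit()
print(Material.first(session).name)
```

Whether the session-bound conveniences of `sqlalchemy-mixins` work unchanged this way would need testing; the `cls=` hook only controls which base class the reflected models inherit from.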
| open | 2021-04-27T04:56:50Z | 2021-04-27T17:04:04Z | https://github.com/absent1706/sqlalchemy-mixins/issues/71 | [] | mswastik | 1 |
jonaswinkler/paperless-ng | django | 1717 | [Other] Each User own Documents |
Hello,
Can Paperless be configured so that each user has their own documents and can't see the other users' documents?
| closed | 2022-07-14T13:26:37Z | 2023-01-14T12:56:27Z | https://github.com/jonaswinkler/paperless-ng/issues/1717 | [] | Nollknolle | 1 |