| repo_name stringlengths 9 75 | topic stringclasses 30 values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2 values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 953 | Use CycleGAN for image enhancement | Hello author! I want to use your code for image enhancement based on CycleGAN, but it produces a lot of artifacts. Can you give me some tips? Thanks a lot. | open | 2020-03-13T14:38:41Z | 2020-03-13T18:22:34Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/953 | [] | crystaloscillator | 1 |
babysor/MockingBird | deep-learning | 563 | Error on the first click of "synthesize and vocode" after startup | After launching the program with python demo_toolbox.py -d .\samples, the first click of the "synthesize and vocode" button raises an error; after that it no longer errors. What could be the reason?
RuntimeError: Error(s) in loading state_dict for Tacotron:
size mismatch for encoder_proj.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([128, 1024]).
size mismatch for decoder.attn_rnn.weight_ih: copying a param with shape torch.Size([384, 768]) from checkpoint, the shape in current model is torch.Size([384, 1280]).
size mismatch for decoder.rnn_input.weight: copying a param with shape torch.Size([1024, 640]) from checkpoint, the shape in current model is torch.Size([1024, 1152]).
size mismatch for decoder.stop_proj.weight: copying a param with shape torch.Size([1, 1536]) from checkpoint, the shape in current model is torch.Size([1, 2048]). | open | 2022-05-20T13:34:53Z | 2022-05-22T08:39:29Z | https://github.com/babysor/MockingBird/issues/563 | [] | GitHubLDL | 1 |
supabase/supabase-py | fastapi | 1,073 | Installing via Conda forgets h2 | # Bug report
- [X] I confirm this is a bug with Supabase, not with my own application.
- [X] I confirm I have searched the [Docs](https://docs.supabase.com), GitHub [Discussions](https://github.com/supabase/supabase/discussions), and [Discord](https://discord.supabase.com).
## Describe the bug
When installing supabase-py via Conda, the `h2` dependency is somehow left out, so an error occurs when using anything that relies on `httpx`'s HTTP/2 support.
## To Reproduce
1. `conda create -n test && conda activate test`
2. `conda install supabase`
3. `pip list`
4. h2 is missing
To compare, if you install `supabase` using pip in the same environment, 4 missing packages are installed:
> Installing collected packages: hyperframe, hpack, h2, aiohttp
> Attempting uninstall: aiohttp
> Found existing installation: aiohttp 3.11.10
> Uninstalling aiohttp-3.11.10:
> Successfully uninstalled aiohttp-3.11.10
> Successfully installed aiohttp-3.11.13 h2-4.2.0 hpack-4.1.0 hyperframe-6.1.0
## Expected behavior
`h2` should be installed with supabase when using Conda.
## System information
- OS: Windows 10
- Version of supabase-py: 2.13.0
- Version of Python: 3.12
## Additional context
PS: Your issue template for bug report is asking for Node.js version, it should be Python.
| open | 2025-03-10T16:52:42Z | 2025-03-10T16:52:50Z | https://github.com/supabase/supabase-py/issues/1073 | [
"bug"
] | PierreMesure | 1 |
psf/requests | python | 6,860 | "No address associated with hostname" when querying IPv6 hosts | ## Expected Result
```
$ python
Python 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> requests.get("https://ipv6.icanhazip.com/").text
'2001:920:[REDACTED]'
```
## Actual Result
```shell
$ python
Python 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> requests.get("https://ipv6.icanhazip.com/").text
Traceback (most recent call last):
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/connection.py", line 199, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/util/connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/socket.py", line 963, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
socket.gaierror: [Errno -5] No address associated with hostname
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 789, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 490, in _make_request
raise new_e
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 466, in _make_request
self._validate_conn(conn)
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 1095, in _validate_conn
conn.connect()
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/connection.py", line 693, in connect
self.sock = sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/connection.py", line 206, in _new_conn
raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x7c2522dfe180>: Failed to resolve 'ipv6.icanhazip.com' ([Errno -5] No address associated with hostname)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/tool/venv/lib/python3.12/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 843, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/home/user/tool/venv/lib/python3.12/site-packages/urllib3/util/retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='ipv6.icanhazip.com', port=443): Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7c2522dfe180>: Failed to resolve 'ipv6.icanhazip.com' ([Errno -5] No address associated with hostname)"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/tool/venv/lib/python3.12/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/tool/venv/lib/python3.12/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/tool/venv/lib/python3.12/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/tool/venv/lib/python3.12/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/tool/venv/lib/python3.12/site-packages/requests/adapters.py", line 700, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='ipv6.icanhazip.com', port=443): Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7c2522dfe180>: Failed to resolve 'ipv6.icanhazip.com' ([Errno -5] No address associated with hostname)"))
```
## Reproduction Steps
```python
import requests
requests.get("https://ipv6.icanhazip.com/").text
```
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "3.3.2"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.7"
},
"implementation": {
"name": "CPython",
"version": "3.12.3"
},
"platform": {
"release": "6.8.0-50-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.32.3"
},
"system_ssl": {
"version": "300000d0"
},
"urllib3": {
"version": "2.2.3"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
installed versions (OS is Ubuntu 24)
```shell
$ pip list | grep urllib
urllib3 2.2.3
$ pip list | grep requests
requests 2.32.3
```
Network (I have a fully functional IPv6 setup)
```shell
$ ping -6 google.com
PING google.com (2a00:1450:4007:810::200e) 56 data bytes
64 bytes from par10s50-in-x0e.1e100.net (2a00:1450:4007:810::200e): icmp_seq=1 ttl=120 time=1.49 ms
64 bytes from par10s50-in-x0e.1e100.net (2a00:1450:4007:810::200e): icmp_seq=2 ttl=120 time=1.81 ms
64 bytes from par10s50-in-x0e.1e100.net (2a00:1450:4007:810::200e): icmp_seq=3 ttl=120 time=1.54 ms
64 bytes from par10s50-in-x0e.1e100.net (2a00:1450:4007:810::200e): icmp_seq=4 ttl=120 time=2.08 ms
64 bytes from par10s50-in-x0e.1e100.net (2a00:1450:4007:810::200e): icmp_seq=5 ttl=120 time=1.81 ms
^C
--- google.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 1.491/1.745/2.082/0.213 ms
$ curl -6 ipv6.icanhazip.com
2001:920:[REDACTED]
```
| closed | 2024-12-18T17:36:53Z | 2025-01-05T22:36:48Z | https://github.com/psf/requests/issues/6860 | [] | ThePirateWhoSmellsOfSunflowers | 4 |
huggingface/datasets | numpy | 6,740 | Support for loading geotiff files as a part of the ImageFolder | ### Feature request
Request to add rasterio support for loading GeoTIFF files in ImageFolder, instead of using PIL.
### Motivation
As of now, there are many datasets on the Hugging Face Hub that are predominantly focused on remote sensing or come from remote sensing. The current ImageFolder (if I have understood correctly) uses PIL. This is not ideal, because these datasets mostly have images with many channels and additional metadata, which are lost with PIL unless we provide a custom script. Hence, maybe an API could be added to handle this out of the box?
### Your contribution
If the issue is accepted, I can contribute the code, because I would like to have it automated and generalised. | closed | 2024-03-18T20:00:39Z | 2024-03-27T18:19:48Z | https://github.com/huggingface/datasets/issues/6740 | [
"enhancement"
] | sunny1401 | 0 |
awesto/django-shop | django | 772 | Cookie Cutter Version ignores PostalShippingModifier | I'm using the cookie cutter version from the [tutorial of the documentation](https://django-shop.readthedocs.io/en/latest/) as an example for my own implementation.
Shipping modifiers that should add a line to the cart are ignored in both variants as soon as their activation is dependent on the selection in the shipping form.
If only one shipping modifier is present, the line is added, though.
Using the debugger shows that self.is_active(cart) always returns False.
The image shows the resulting order. Note the selection of "postal shipping" and the lack of 5 € shipping costs.

Code of the modifiers.py that is generated by cookie cutter:
```
class PostalShippingModifier(ShippingModifier):
    identifier = 'postal-shipping'

    def get_choice(self):
        return (self.identifier, _("Postal shipping"))

    def add_extra_cart_row(self, cart, request):
        if not self.is_active(cart) and len(cart_modifiers_pool.get_shipping_modifiers()) > 1:
            return
        # add a shipping flat fee
        amount = Money('5')
        instance = {'label': _("Shipping costs"), 'amount': amount}
        cart.extra_rows[self.identifier] = ExtraCartRow(instance)
        cart.total += amount

    def ship_the_goods(self, delivery):
        if not delivery.shipping_id:
            raise ValidationError("Please provide a valid Shipping ID")
        super(PostalShippingModifier, self).ship_the_goods(delivery)
``` | closed | 2019-08-21T14:24:41Z | 2019-10-21T16:51:24Z | https://github.com/awesto/django-shop/issues/772 | [] | moellering | 2 |
QingdaoU/OnlineJudge | django | 480 | Submissions stuck at "Judging" after a docker-compose deployment of v1.6.1 | Deployed with the one-click docker-compose command from the official documentation:
```bash
git clone -b v1.6.1 https://github.com/QingdaoU/OnlineJudgeDeploy.git
```
After deployment, with no settings changed, submissions of non-CE code get stuck at Judging (CE submissions behave normally).
Below is the log from ~/data/backend/log/dramatiq.log:
```
[2024-10-18 04:06:50] - [DEBUG] - [dramatiq.worker.ConsumerThread(default):326] - Pushing message '2e93045a-f01f-468b-9056-2e16a154173b' onto work queue.
[2024-10-18 04:06:50] - [DEBUG] - [dramatiq.worker.WorkerThread:479] - Received message judge_task('f16d911230421eb40a133701b81a6750', 1) with id '2e93045a-f01f-468b-9056-2e16a154173b'.
[2024-10-18 04:06:50] - [ERROR] - [dramatiq.worker.WorkerThread:513] - Failed to process message judge_task('f16d911230421eb40a133701b81a6750', 1) with unhandled exception.
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/dramatiq/worker.py", line 485, in process_message
res = actor(*message.args, **message.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/dramatiq/actor.py", line 182, in __call__
return self.fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/judge/tasks.py", line 14, in judge_task
JudgeDispatcher(submission_id, problem_id).judge()
File "/app/judge/dispatcher.py", line 174, in judge
self._compute_statistic_info(resp["data"])
File "/app/judge/dispatcher.py", line 106, in _compute_statistic_info
self.submission.statistic_info["time_cost"] = max([x["cpu_time"] for x in resp_data])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: max() iterable argument is empty
[2024-10-18 04:06:50] - [WARNING] - [dramatiq.middleware.retries.Retries:103] - Retries exceeded for message '2e93045a-f01f-468b-9056-2e16a154173b'.
[2024-10-18 04:06:50] - [DEBUG] - [dramatiq.worker.ConsumerThread(default):344] - Rejecting message '2e93045a-f01f-468b-9056-2e16a154173b'.
[2024-10-18 04:06:50] - [ERROR] - [sentry.errors:686] - Sentry responded with an error: module 'ssl' has no attribute 'wrap_socket' (url: https://sentry.io/api/263057/store/)
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/raven/transport/threaded.py", line 165, in send_sync
super(ThreadedHTTPTransport, self).send(url, data, headers)
File "/usr/local/lib/python3.12/site-packages/raven/transport/http.py", line 38, in send
response = urlopen(
^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/raven/utils/http.py", line 66, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/urllib/request.py", line 515, in open
response = self._open(req, data)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/urllib/request.py", line 532, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/urllib/request.py", line 492, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/raven/utils/http.py", line 46, in https_open
return self.do_open(ValidHTTPSConnection, req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/urllib/request.py", line 1344, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/usr/local/lib/python3.12/http/client.py", line 1331, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.12/http/client.py", line 1377, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.12/http/client.py", line 1326, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.12/http/client.py", line 1085, in _send_output
self.send(msg)
File "/usr/local/lib/python3.12/http/client.py", line 1029, in send
self.connect()
File "/usr/local/lib/python3.12/site-packages/raven/utils/http.py", line 37, in connect
self.sock = ssl.wrap_socket(
^^^^^^^^^^^^^^^
AttributeError: module 'ssl' has no attribute 'wrap_socket'
b"Sentry responded with an error: module 'ssl' has no attribute 'wrap_socket' (url: https://sentry.io/api/263057/store/)"
[2024-10-18 04:06:50] - [ERROR] - [sentry.errors.uncaught:712] - ["Failed to process message judge_task('f16d911230421eb40a133701b81a6750', 1) with unhandled exception.", ' File "dramatiq/worker.py", line 485, in process_message', ' File "dramatiq/actor.py", line 182, in __call__', ' File "judge/tasks.py", line 14, in judge_task', ' File "judge/dispatcher.py", line 174, in judge', ' File "judge/dispatcher.py", line 106, in _compute_statistic_info']
b'["Failed to process message judge_task(\'f16d911230421eb40a133701b81a6750\', 1) with unhandled exception.", \' File "dramatiq/worker.py", line 485, in process_message\', \' File "dramatiq/actor.py", line 182, in __call__\', \' File "judge/tasks.py", line 14, in judge_task\', \' File "judge/dispatcher.py", line 174, in judge\', \' File "judge/dispatcher.py", line 106, in _compute_statistic_info\']'
```
Asking an AI suggested that the Python version in the Docker container may be too new and incompatible; I don't know how to solve it.

| open | 2024-10-18T04:21:22Z | 2024-11-05T10:30:50Z | https://github.com/QingdaoU/OnlineJudge/issues/480 | [] | wellwei | 4 |
litl/backoff | asyncio | 135 | [Question] Add Loguru as logging | Hello again!
After hours of searching and trying to find the answer, I did not manage to do it :'(
I am trying to combine [loguru](https://github.com/Delgan/loguru) with backoff:
```py
import sys
from discord_webhook import DiscordEmbed, DiscordWebhook
from loguru import logger
from config import configuration
# -------------------------------------------------------------------------
# Web-hook function as a sink and configure a custom filter to only allow error messages.
# -------------------------------------------------------------------------
def exception_only(record):
return record["level"].name == "ERROR" and record["exception"] is not None
def discord_sink(message):
embed = DiscordEmbed(
title=str(message.record["exception"].value),
description=f"```{message[-1500:]}```",
color=8149447
)
embed.add_embed_field(
name=str(message.record["exception"].type),
value=str(message.record["message"])
)
webhook = DiscordWebhook(
url=configuration.notifications.error_webhook,
username="Exception",
)
webhook.add_embed(embed)
webhook.execute()
# -------------------------------------------------------------------------
# Loguru add
# -------------------------------------------------------------------------
logger.remove()
logger.add(sys.stdout, filter=lambda record: record["level"].name == "INFO")
logger.add(sys.stderr, filter=lambda record: record["level"].name != "INFO")
logger.add(discord_sink, filter=exception_only, enqueue=True)
```
and the code I am currently using for backoff is:
```py
import requests
import backoff
def pred(e):
return (
e.response is not None
and e.response.status_code not in (404, 401)
and e.response.status_code in range(400, 600)
)
@backoff.on_exception(
backoff.expo,
requests.exceptions.RequestException,
max_tries=2,
giveup=pred,
)
def publish():
r = requests.get("https://stackoverflow.com/63463463456", timeout=10)
r.raise_for_status()
print("Yay successful requests")
publish()
```
What I am trying to do: whenever max_tries is reached and the exception is thrown:
` Therefore exception info is available to the handler functions via the python standard library, specifically sys.exc_info() or the traceback module.`
and, together with loguru, I want it to print to my Discord whenever max_tries has been reached. But to do that, I need to figure out how to hook backoff into loguru, because I want to catch the exception after x retries.
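For what it's worth, backoff also exposes event handlers (`on_backoff`, `on_giveup`) that receive a details dict, and since they run while the exception is still being handled, `sys.exc_info()` works inside them. The flow can be sketched with the standard library alone; `retry_with_giveup`, `flaky`, and `notify` below are illustrative names, not backoff's API:

```python
import sys

def retry_with_giveup(func, max_tries, on_giveup):
    """Toy version of the retry/give-up flow (not backoff's real code)."""
    for attempt in range(1, max_tries + 1):
        try:
            return func()
        except Exception:
            if attempt == max_tries:
                # The exception is still being handled here, so the handler
                # can use sys.exc_info() -- or loguru's logger.exception().
                on_giveup({"tries": attempt, "exc_info": sys.exc_info()})
                raise
            # (a real implementation would sleep with expo backoff here)

calls = []

def flaky():
    calls.append(1)
    raise ValueError("boom")

def notify(details):
    # In the real setup this would be logger.exception(...), which the
    # discord_sink above would then forward to the webhook.
    print(f"gave up after {details['tries']} tries: {details['exc_info'][1]}")

try:
    retry_with_giveup(flaky, max_tries=2, on_giveup=notify)
except ValueError:
    pass

print(len(calls))  # → 2
```

With real backoff, passing such a handler as `on_giveup=notify` to `backoff.on_exception` should give the same hook point.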
I think this might be a bit "off-topic" but I hope I can get some help. If you feel this question is too off-topic, feel free to close it as well <3
Looking forward! | closed | 2021-08-30T22:35:27Z | 2024-08-12T14:35:39Z | https://github.com/litl/backoff/issues/135 | [] | BarryThrill | 1 |
babysor/MockingBird | pytorch | 205 | Cannot clone any characters other than the built-in ones; the log enters a loop | 

| closed | 2021-11-09T11:18:17Z | 2021-11-09T13:08:54Z | https://github.com/babysor/MockingBird/issues/205 | [
"bug"
] | Zhangyide114514 | 0 |
marcomusy/vedo | numpy | 1,128 | How to remove the VTK log | I call vedo.Mesh.intersect_with_plane many times.
Each time I call vedo.Mesh.intersect_with_plane, a VTK log line ("2024-05-30 16:18:57.450 ( 3.004s) [D74557468B270C7F]vtkPolyDataPlaneCutter.:589 INFO| Executing vtkPolyData plane cutter") appears in my console and log files.
How can I remove the VTK log?
I tried changing logging.setLevel, logging.disabled, and stdout/stderr suppression (https://stackoverflow.com/questions/11130156/suppress-stdout-stderr-print-from-python-functions),
but I still see many VTK logs. | open | 2024-05-30T07:34:51Z | 2024-06-28T14:25:01Z | https://github.com/marcomusy/vedo/issues/1128 | [
"long-term"
] | rookie96 | 2 |
art049/odmantic | pydantic | 391 | Mypy issues for odmantic 1.0.0 | # Bug
Following the odmantic 1.0.0 examples and trying to upgrade odmantic/pydantic to V2, I run into several mypy issues.
The errors are explained as comments below.
### Current Behavior
```
from typing import Optional
from motor.core import AgnosticCollection
from odmantic import Model, Field
from odmantic.config import ODMConfigDict
class A(Model):
test: Optional[str] = Field(default=None)
# Missing named argument "id" for "A" Mypy (call-arg)
A(test="bla")
class B(Model):
# Extra key "indexes" for TypedDict "ConfigDict" Mypy (typeddict-unknown-key)
model_config = {"collection": "B"}
# Fixed: using type explicitly
class B_OK(Model):
model_config = ODMConfigDict(collection="B")
from odmantic import AIOEngine, Model
engine = AIOEngine()
collection = engine.get_collection(B_OK)
# AsyncIOMotorCollection? has no attribute "find" Mypy (attr-defined)
collection.find({})
# This fixes the issue
collection2: AgnosticCollection = engine.get_collection(B_OK)
```
### Environment
- ODMantic version: 1.0.0
- pydantic version: 2.5.2
pydantic-core version: 2.14.5
pydantic-core build: profile=release pgo=true
install path: /home/sander/Projects/application.status.backend/.venv/lib/python3.9/site-packages/pydantic
python version: 3.9.18 (main, Aug 25 2023, 13:20:14) [GCC 11.4.0]
platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
related packages: mypy-1.7.1 typing_extensions-4.9.0 fastapi-0.105.0
| open | 2023-12-19T09:18:31Z | 2024-11-10T19:40:28Z | https://github.com/art049/odmantic/issues/391 | [
"bug"
] | laveolus | 3 |
pytorch/vision | machine-learning | 8,915 | Setting the `p` argument of `RandomAutocontrast()` to `0` and `1` gives the same results | ### 🐛 Describe the bug
Setting the `p` argument of [RandomAutocontrast()](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.RandomAutocontrast.html) to `0` and to `1` gives the same results, as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import RandomAutocontrast
origin_data = OxfordIIITPet(
root="data",
transform=None
)
p0_data = OxfordIIITPet(
root="data",
transform=RandomAutocontrast(p=0)
)
p1_data = OxfordIIITPet(
root="data",
transform=RandomAutocontrast(p=1)
)
import matplotlib.pyplot as plt
def show_images1(data, main_title=None):
plt.figure(figsize=[10, 5])
plt.suptitle(t=main_title, y=0.8, fontsize=14)
for i, (im, _) in zip(range(1, 6), data):
plt.subplot(1, 5, i)
plt.imshow(X=im)
plt.xticks(ticks=[])
plt.yticks(ticks=[])
plt.tight_layout()
plt.show()
show_images1(data=origin_data, main_title="origin_data")
show_images1(data=p0_data, main_title="p0_data")
show_images1(data=p1_data, main_title="p1_data")
```



I expected behavior like that of [ColorJitter()](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.ColorJitter.html), where different parameters give visibly different results, as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import ColorJitter
origin_data = OxfordIIITPet(
root="data",
transform=None
)
contrast06_06_data = OxfordIIITPet(
root="data",
transform=ColorJitter(contrast=[0.6, 0.6])
)
contrast4_4_data = OxfordIIITPet(
root="data",
transform=ColorJitter(contrast=[4, 4])
)
import matplotlib.pyplot as plt
def show_images1(data, main_title=None):
plt.figure(figsize=[10, 5])
plt.suptitle(t=main_title, y=0.8, fontsize=14)
for i, (im, _) in zip(range(1, 6), data):
plt.subplot(1, 5, i)
plt.imshow(X=im)
plt.xticks(ticks=[])
plt.yticks(ticks=[])
plt.tight_layout()
plt.show()
show_images1(data=origin_data, main_title="origin_data")
show_images1(data=contrast06_06_data, main_title="contrast06_06_data")
show_images1(data=contrast4_4_data, main_title="contrast4_4_data")
```



### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1'
``` | open | 2025-02-18T11:57:41Z | 2025-02-19T11:39:42Z | https://github.com/pytorch/vision/issues/8915 | [] | hyperkai | 1 |
mars-project/mars | pandas | 2,610 | [BUG] race condition: duplicate decref of subtask input chunk |
**Describe the bug**
Suppose stage A has two subtasks, S1 and S2, which share the same input chunk C:
1. S1 hits an error, and `stage_processor.done` is set.
2. S2 calls set_subtask_result; it has already decremented the reference count of C but has not yet set `stage_processor.results[C.key]`.
3. `TaskProcessorActor` finds that `stage_processor` got an error and calls `self._cur_processor.incref_stage(stage_processor)` in `TaskProcessorActor.start`, which also decrements the reference count of C, an input of S2.
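As a toy illustration (not mars code; the class and names below are stand-ins) of why the duplicated decref in steps 2 and 3 is fatal:

```python
import threading

class ChunkRef:
    """Minimal reference counter standing in for a chunk's lifetime."""

    def __init__(self, count):
        self.count = count
        self._lock = threading.Lock()

    def decref(self):
        with self._lock:
            self.count -= 1
            if self.count < 0:
                raise RuntimeError("chunk reference count went negative")

c = ChunkRef(1)        # chunk C, referenced once as S2's input
c.decref()             # S2's set_subtask_result releases it
try:
    c.decref()         # the error-path cleanup releases it again
except RuntimeError as e:
    print(e)           # → chunk reference count went negative
```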
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version: python 3.7.9
2. The version of Mars you use: master
3. Minimized code to reproduce the error:
you can run `pytest mars/services/task/supervisor/tests/test_task_manager.py::test_error_task -s` in this branch: https://github.com/Catch-Bull/mars/tree/race_condition_case, which is based on the latest mars master and adds some sleeps to make the bug easier to reproduce
| closed | 2021-12-09T08:23:16Z | 2021-12-10T03:45:24Z | https://github.com/mars-project/mars/issues/2610 | [
"type: bug",
"mod: task service"
] | Catch-Bull | 0 |
mirumee/ariadne-codegen | graphql | 14 | Add README.md | The repo should contain a README.md, like Ariadne and Ariadne GraphQL Modules have.
This readme should describe the problem solved by the library and include a usage example.
It should also contain a license section and mention that the project was crafted with love by Mirumee. | closed | 2022-10-19T11:47:02Z | 2022-11-04T07:34:20Z | https://github.com/mirumee/ariadne-codegen/issues/14 | [
"roadmap"
] | rafalp | 0 |
aleju/imgaug | machine-learning | 807 | new imageio version breaks over numpy versions. | imageio released a [new version](https://github.com/imageio/imageio/releases/tag/v2.16.0) which requires numpy ≥ 1.20 for its use of `numpy.typing.ArrayLike`.
imgaug doesn't seem to pin a specific version of imageio and therefore pulls in the latest version:
https://github.com/aleju/imgaug/blob/master/requirements.txt#L12
The result:
installing imgaug breaks at import time with an incompatible numpy version:
```
import imgaug
File "/usr/local/lib/python3.6/site-packages/imgaug/__init__.py", line 7, in <module>
from imgaug.imgaug import * # pylint: disable=redefined-builtin
File "/usr/local/lib/python3.6/site-packages/imgaug/imgaug.py", line 19, in <module>
import imageio
File "/usr/local/lib/python3.6/site-packages/imageio/__init__.py", line 22, in <module>
from .core import FormatManager, RETURN_BYTES
File "/usr/local/lib/python3.6/site-packages/imageio/core/__init__.py", line 16, in <module>
from .format import Format, FormatManager
File "/usr/local/lib/python3.6/site-packages/imageio/core/format.py", line 40, in <module>
from ..config import known_plugins, known_extensions, PluginConfig, FileExtension
File "/usr/local/lib/python3.6/site-packages/imageio/config/__init__.py", line 7, in <module>
from .plugins import known_plugins, PluginConfig
File "/usr/local/lib/python3.6/site-packages/imageio/config/plugins.py", line 4, in <module>
from ..core.legacy_plugin_wrapper import LegacyPlugin
File "/usr/local/lib/python3.6/site-packages/imageio/core/legacy_plugin_wrapper.py", line 6, in <module>
from .v3_plugin_api import PluginV3, ImageProperties
File "/usr/local/lib/python3.6/site-packages/imageio/core/v3_plugin_api.py", line 2, in <module>
from numpy.typing import ArrayLike
ModuleNotFoundError: No module named 'numpy.typing'
``` | open | 2022-02-17T16:06:27Z | 2022-12-29T18:59:18Z | https://github.com/aleju/imgaug/issues/807 | [] | morcoGreen | 1 |
plotly/dash-table | dash | 287 | Cypress tests should fail if there are console errors | open | 2018-12-07T17:29:53Z | 2019-07-06T12:24:45Z | https://github.com/plotly/dash-table/issues/287 | [
"dash-meta-good_first_issue",
"dash-type-maintenance"
] | Marc-Andre-Rivet | 0 | |
plotly/plotly.py | plotly | 4,093 | Create a polygonal plane | Hello:
I want to plot a polygonal plane through some vertices. For example, given five known vertices, how can I plot the pentagon they define?
Thanks! | closed | 2023-03-07T08:30:41Z | 2023-03-08T15:07:40Z | https://github.com/plotly/plotly.py/issues/4093 | [] | Zcaic | 3 |
matplotlib/cheatsheets | matplotlib | 86 | The cheatsheets website should include a link to the GitHub page | Otherwise, the GibHub page is quite undiscoverable in case somebody wants to contribute. | closed | 2021-11-12T20:47:51Z | 2022-01-07T20:20:34Z | https://github.com/matplotlib/cheatsheets/issues/86 | [] | timhoffm | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 1,700 | Best model for early reverb? | Good day. Thanks to the developers for your work! I have a question. What is the best algorithm for removing early reverberation reflections at the moment? I still have hope of recording vocals at home. )
If not, is there any ongoing research or development in this direction?
P.S. MB Roformer - DeReverb-DeEcho 1 - maybe this is it, but I couldn't run it in UVR5, it gives an error. I installed MB Roformer - DeReverb-DeEcho 2, but it doesn't capture early reflections very well.
In general, if someone ever manages to create a model for removing early reverberation reflections, it will change the game. In this case, many vocalists will finally be able to record vocals at home and the need for a studio and a booth for recording vocals will disappear. | open | 2025-01-11T15:08:57Z | 2025-01-19T01:48:06Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1700 | [] | 10Elem | 4 |
mitmproxy/pdoc | api | 330 | Please remove the allegation about Nazi symbolism from the README | In the README, it is suggested that @kernc has associated this project with Nazi symbols by including a swastika on the fork of this project. As has already been addressed in this [comment](https://github.com/pdoc3/pdoc/issues/64#issuecomment-489370963), that is not true.
Despite most strongly being associated with the Nazi Party today, the swastika was and remains an important symbol in many religions and cultures. There are two types of swastikas, a right-facing and a left-facing one. The Nazis used the right-facing swastika as their emblem. In Buddhism, a left-facing swastika symbolises the footprints of the Buddha. You can learn more about this [here](https://en.wikipedia.org/wiki/Swastika#Historical_use).
I am not going to include the symbol itself or any screenshots or images here, but if you look closely, [`pdoc3`'s website](https://pdoc3.github.io/pdoc/) clearly uses the Buddhist swastika in its footer, and your suggestion that it has something to do with Nazism is disgusting and offensive to the people who follow Buddhism. Not to mention to other Asian cultures and religions like Hinduism which use and have used both left- and right-facing swastikas for thousands of years in ways that has nothing to do with what happened in Europe less than a century ago.
It is understandable that you are upset, and rightly so, on the illegal and wrong way in which your project was impersonated, relicensed and removed from Wiki, but your continued assertion that the Buddhist swastika is a Nazi symbol is causing hurt to millions of people around the globe. I would like to request that you remove that particular accusation from the README, and instead highlight any ethical misconduct perpetrated by the `pdoc3` team. | closed | 2022-01-09T06:38:53Z | 2022-02-10T13:49:16Z | https://github.com/mitmproxy/pdoc/issues/330 | [
"enhancement"
] | saifkhichi96 | 6 |
iperov/DeepFaceLab | deep-learning | 5,479 | feature request : 3d face overlay with xyz axes control to change angle during manual extraction | I hope this will come one day; some ultra-low or high angles are impossible to extract when you can only see the bottom of the nose and chin, or the top of the head and the nose
| open | 2022-02-17T22:21:27Z | 2022-02-17T22:21:27Z | https://github.com/iperov/DeepFaceLab/issues/5479 | [] | 2blackbar | 0 |
falconry/falcon | api | 2,178 | `DefaultEventLoopPolicy.get_event_loop()` is deprecated (in the case of no loop) | As of Python 3.12, if there is no event loop, `DefaultEventLoopPolicy.get_event_loop()` emits a deprecation warning, and threatens to raise an error in future Python versions.
No replacement is suggested by the official docs. When it does start raising an error, I suppose one can catch it, and create a new loop instead.
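A minimal sketch of that catch-and-create fallback (the helper name is ours, not something Falcon ships):

```python
import asyncio

def get_or_create_event_loop() -> asyncio.AbstractEventLoop:
    # Reuse the current loop when one is running; otherwise create and
    # register a fresh one, which is what sync code used to get implicitly.
    try:
        return asyncio.get_running_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop

loop = get_or_create_event_loop()
print(loop.is_running())  # False: we just created it from sync code
loop.close()
```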
We have already updated our code to use this method to combat another deprecation... it seems Python wants to make it harder and harder obtaining the current loop from sync code (which makes no sense to me). | closed | 2023-10-15T16:57:51Z | 2024-03-21T19:59:28Z | https://github.com/falconry/falcon/issues/2178 | [] | vytas7 | 3 |
JaidedAI/EasyOCR | pytorch | 1,352 | Getting "Could not initialize NNPACK! Reason: Unsupported hardware." warning even though NNPACK is enabled | Hi everyone,
I am trying to deploy EasyOCR locally on a VM and when executing the `output = reader.readtext(image_array)` command I get the following warning: "Could not initialize NNPACK! Reason: Unsupported hardware.". I am deploying in a CPU only environment, on CPUs with the AVX512 instructions enabled. When the warning is displayed the model takes a lot more time to process and triggers a Timeout. I executed the following command `print(torch.__config__.show())` to see if NNPACK is available at runtime and indeed it is. This is the output right before the inference is processed:
```
PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v3.4.2 (Git Hash 1137e04ec0b5251ca2b4400a4fd3c667ce843d67)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX512
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.4.0, USE_CUDA=0, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
```
I don't know why it fails to initialize NNPACK when PyTorch is built with this capability. Any help would be greatly appreciated.
My environment is:
```
easyocr==1.7.1
torch==2.4.0
torchvision==0.19.0
``` | open | 2024-12-19T09:29:30Z | 2024-12-19T09:29:30Z | https://github.com/JaidedAI/EasyOCR/issues/1352 | [] | kirillmeisser | 0 |
robotframework/robotframework | automation | 4,476 | BuiltIn: `Call Method` loses traceback if calling the method fails | I see that the call below does not log the traceback on failure and only throws the error message instead.
Call method ${obj} ${method} ${args}
It would be great to log the traceback in case of any failure; it would help identify the issue quickly.
We need the fix in version 4.1.
| closed | 2022-09-23T13:36:04Z | 2022-09-29T21:09:08Z | https://github.com/robotframework/robotframework/issues/4476 | [
"bug",
"priority: medium",
"rc 1"
] | kbogineni | 14 |
psf/requests | python | 5,926 | Restore logo | @kennethreitz mentioned he had a single condition for transferring stewardship for this repository:
> When I transferred the project to the PSF, per textual agreement with @ewdurbin, my only requirement was that the logo stayed in place.
I therefore assume that the removal in #5562 (along with other images) was accidental and this will be swiftly rectified. I'll do a PR. | closed | 2021-09-02T13:25:33Z | 2021-12-01T16:06:01Z | https://github.com/psf/requests/issues/5926 | [] | flying-sheep | 1 |
pyg-team/pytorch_geometric | deep-learning | 10,101 | CUDA error: device-side assert triggered on torchrun DDP | ### 🐛 Describe the bug
Hello,
I am getting a CUDA error: device-side assert triggered on the global_mean_pool to the extent that I cannot:
1. print the variable
2. detach and save it as a tensor to inspect
3. wrap it in a try/except and just ignore the batch; the whole program still crashes
4. this happens for >15k data and after running for a good 3-4 epochs
5. there does not seem to be a nan in the dataset
6. the error carries over to subsequent batches and I cannot get past it
The follow-up error to this is:
```
failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [45,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [46,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [47,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [48,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [49,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [50,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [51,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [52,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [53,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [54,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [55,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [51,0,0], thread: [56,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [14,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [15,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [16,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [17,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [18,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [19,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [20,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [21,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [22,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [23,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [24,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [25,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [50,0,0], thread: [26,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
Traceback (most recent call last):
  File "/package/molclass-0.1.1.dev618/molclass/cubes/scripts/script_helper.py", line 303, in forward
    apool = global_mean_pool(ah, abinput.x_s_batch)
  File "/home/floeuser/miniconda/envs/user_env/lib/python3.9/site-packages/torch_geometric/nn/pool/glob.py", line 63, in global_mean_pool
    return scatter(x, batch, dim=dim, dim_size=size, reduce='mean')
  File "/home/floeuser/miniconda/envs/user_env/lib/python3.9/site-packages/torch_geometric/utils/_scatter.py", line 53, in scatter
    dim_size = int(index.max()) + 1 if index.numel() > 0 else 0
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/package/molclass-0.1.1.dev618/molclass/cubes/scripts/script_helper.py", line 313, in forward
    tt = torch.max(abinput.x_s_batch).to(device='cpu')
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Traceback (most recent call last):
  File "/package/molclass-0.1.1.dev618/molclass/cubes/scripts/multi_gpu_train_regressor_module_script.py", line 109, in batch_train
    batch = batch.to(device)
  File "/home/floeuser/miniconda/envs/user_env/lib/python3.9/site-packages/torch_geometric/data/data.py", line 360, in to
    return self.apply(
  File "/home/floeuser/miniconda/envs/user_env/lib/python3.9/site-packages/torch_geometric/data/data.py", line 340, in apply
    store.apply(func, *args)
  File "/home/floeuser/miniconda/envs/user_env/lib/python3.9/site-packages/torch_geometric/data/storage.py", line 201, in apply
    self[key] = recursive_apply(value, func)
  File "/home/floeuser/miniconda/envs/user_env/lib/python3.9/site-packages/torch_geometric/data/storage.py", line 895, in recursive_apply
    return func(data)
  File "/home/floeuser/miniconda/envs/user_env/lib/python3.9/site-packages/torch_geometric/data/data.py", line 361, in <lambda>
    lambda x: x.to(device=device, non_blocking=non_blocking), *args)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing
```
I checked the bounds of the batch and they seem to be within range, so I am not sure why it reports the index out of bounds.
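One cheap sanity check is to assert the pooling index on the CPU right before moving a batch to the device — a stdlib-only sketch of the invariant the failing scatter enforces (plain lists stand in for the tensors; values illustrative):

```python
# Every entry of the batch vector must be a valid graph index, and there must
# be exactly one entry per node; any id >= num_graphs trips the device assert.
batch = [0, 0, 1, 1, 1, 2]  # graph id per node, stand-in for x_s_batch
num_nodes = 6

assert len(batch) == num_nodes
assert min(batch) >= 0
num_graphs = max(batch) + 1  # what scatter uses as dim_size
print(num_graphs)  # 3
```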
Also note, this DOES NOT happen if I do not pass a DistSampler or turn shuffle off for the data. Here is how I do that:
```
sampler_tr = DistributedSampler(train_pair_graph, num_replicas=world_size,
shuffle=ncclAttributes.shuffle,
drop_last=True)
sampler_vl = DistributedSampler(val_pair_graph, num_replicas=world_size,
shuffle=ncclAttributes.shuffle,
drop_last=True)
if verbose:
print('samplers loaded')
time.sleep(1)
ptr = GDL(train_pair_graph, batch_size=ncclAttributes.batch_size,
num_workers=ncclAttributes.num_workers, shuffle=not ncclAttributes.shuffle,
pin_memory=False, follow_batch=['x_s'], sampler=sampler_tr)
pvl = GDL(val_pair_graph, batch_size=ncclAttributes.batch_size,
num_workers=ncclAttributes.num_workers, shuffle=not ncclAttributes.shuffle,
pin_memory=False, follow_batch=['x_s'], sampler=sampler_vl)
pts = GDL(test_pair_graph, batch_size=ncclAttributes.batch_size,
num_workers=ncclAttributes.num_workers, shuffle=not ncclAttributes.shuffle,
pin_memory=False, follow_batch=['x_s'])
```
I can compile a working dataset, but that is difficult; I was wondering if you have noticed this bug, especially on larger (>10k) data.
### Versions
NA | open | 2025-03-05T23:52:59Z | 2025-03-10T16:32:32Z | https://github.com/pyg-team/pytorch_geometric/issues/10101 | [
"bug"
] | Sayan-m90 | 2 |
fohrloop/dash-uploader | dash | 106 | multi page issue | Hi,
I recently started using dash-uploader and was wondering if there is multi-page support, since I could not find an example. All the current examples require the app to be initialized using the command below for `du.configure_upload`, and it has to be located in the same file where dash-uploader is used.
Command to initialize du.configure_upload:
`app = Dash(__name__, suppress_callback_exceptions=True, external_stylesheets=[dbc.themes.BOOTSTRAP])`
Here's my code below:
```sh
# app = Dash(__name__, suppress_callback_exceptions=True, external_stylesheets=[dbc.themes.BOOTSTRAP])
dash.register_page(__name__, styles=dbc.themes.BOOTSTRAP)
UPLOAD_FOLDER = r"D:/AzureDevOps/saltr/csv_feature/temporary_folder"
du.configure_upload(app, UPLOAD_FOLDER)
```
I was wondering if there is another way to use du.configure_upload with dash.register_page instead of using dash.Dash since we have this registered somewhere else in the code? | closed | 2022-10-18T22:58:01Z | 2022-12-14T19:43:53Z | https://github.com/fohrloop/dash-uploader/issues/106 | [] | omarirfa | 2 |
PaddlePaddle/ERNIE | nlp | 99 | XNLI dataset reproduction: dev-acc 0.780, test-acc 0.770, does not reach the published results | Running `bash script/run_xnli.sh` directly, the results after training do not match the published ones; after removing the spaces from the XNLI training set and retraining, it still did not reach the expected results | closed | 2019-04-16T03:14:29Z | 2019-04-26T05:01:28Z | https://github.com/PaddlePaddle/ERNIE/issues/99 | [] | shuying136 | 2 |
FlareSolverr/FlareSolverr | api | 804 | Solving embedded turnstiles | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [x] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.2.1
- Last working FlareSolverr version: N/A
- Operating system: Windows-10-10.0.19045-SP0
- Are you using Docker: no
- FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- If using captcha solver, which one:
- URL to test this issue: https://reaperscans.com/comics/5150-sss-class-suicide-hunter/chapters/69155807-chapter-84
```
### Description
FlareSolverr is not capable of detecting and solving embedded turnstiles. For example, the one on the following webpage: https://reaperscans.com/comics/5150-sss-class-suicide-hunter/chapters/69155807-chapter-84
Would it be possible for FlareSolverr to handle these challenges?
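For context, the page was queried through the standard v1 `request.get` command — a sketch of such a payload (the timeout value here is illustrative):

```python
import json

# Body POSTed to the FlareSolverr /v1 endpoint.
payload = {
    "cmd": "request.get",
    "url": "https://reaperscans.com/comics/5150-sss-class-suicide-hunter/chapters/69155807-chapter-84",
    "maxTimeout": 60000,
}
body = json.dumps(payload)
print(body)
```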
### Logged Error Messages
```text
2023-06-21 16:45:25 INFO Challenge not detected!
2023-06-21 16:45:25 INFO Response in 1.446 s
```
### Screenshots

| open | 2023-06-21T22:56:38Z | 2023-10-02T04:14:02Z | https://github.com/FlareSolverr/FlareSolverr/issues/804 | [
"enhancement",
"help wanted"
] | HDoujinDownloader | 7 |
streamlit/streamlit | data-visualization | 10,190 | Click (not selection) events for dataframes, charts, and maps | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
We recently added selection events for dataframes, Plotly/Altair charts, and PyDeck maps. Sometimes, you just want to track a single click instead of a selection though. This should work like a button where you click, it triggers the event, and afterwards the elemets gets reset to its normal state (i.e. there is no datapoint or row/column selected).
### Why?
This is useful to trigger one-time actions after interacting with these elements. E.g. you could imagine clicking on a chart item and opening a dialog with more information about it. This is not trivial to do today, since you need to use the selection event and then erase the selection after the dialog is shown.
### How?
I guess we'd probably add a new parameter `on_click` for that, in addition to `on_select`. And return the same values but just for one rerun, and then return `None` the next time (similar to `st.button`). And it would probably make sense to disallow selection and click events at the same time.
An alternative could be to add a new `selection_mode` but I feel like this could be confusing.
Note that we'll need to do quite a bit of refactoring to support this because our elements currently support only one event type per element.
### Additional Context
_No response_ | open | 2025-01-14T22:26:55Z | 2025-01-14T22:27:19Z | https://github.com/streamlit/streamlit/issues/10190 | [
"type:enhancement",
"feature:st.dataframe",
"feature:st.altair_chart",
"feature:st.plotly_chart",
"feature:st.pydeck_chart",
"area:events"
] | jrieke | 1 |
marimo-team/marimo | data-science | 4,171 | Dropdown should support non-strings | ### Description
`mo.ui.dropdown(options=[1,2,3])` will crash with:
```
Bad Data
options.0: Expected string, received number
```
This can be worked around with dict comprehension.
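For reference, a stdlib sketch of that workaround — string labels go into the widget and the original values come back out:

```python
# Keys are the string labels the dropdown accepts; values keep their type.
options = {str(x): x for x in [1, 2, 3]}
selected_label = "2"  # what a string-only dropdown hands back
value = options[selected_label]
print(value, type(value).__name__)  # 2 int
```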
### Suggested solution
It would be preferable if these `ui` elements just worked with options as any list, not just a list of strings. Same with `multiselect`. This would avoid doing `{str(x): x for x in [1,2,3]}` workarounds.
### Alternative
_No response_
### Additional context
 | closed | 2025-03-20T15:10:24Z | 2025-03-20T17:58:11Z | https://github.com/marimo-team/marimo/issues/4171 | [
"enhancement"
] | astrowonk | 0 |
ionelmc/pytest-benchmark | pytest | 72 | Allow elasticsearch authentication besides encoding credentials into url | Currrently the only way of using credentials to authenticate against an elasticsearch instance is to encode the username+password in the url.
This is bad when run in CI, i.e. [gitlab unfortunatly still doesn't support protection against leaking secret variables](https://gitlab.com/gitlab-org/gitlab-ce/issues/13784). URls including credentials leak quite easy.
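As a sketch of the `.netrc` route (host name and credentials here are made up), the stdlib parser already does the heavy lifting:

```python
import netrc
import os
import tempfile

# Secrets live in a netrc-style file instead of the elasticsearch URL.
with tempfile.NamedTemporaryFile("w", suffix=".netrc", delete=False) as fh:
    fh.write("machine es.example.com login elastic password s3cret\n")
    path = fh.name
login, _account, password = netrc.netrc(path).authenticators("es.example.com")
os.unlink(path)
print(login, password)  # elastic s3cret
```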
A better way would be to use a config file or a `.netrc` file for credentials. | closed | 2017-03-27T14:58:17Z | 2017-04-10T03:21:34Z | https://github.com/ionelmc/pytest-benchmark/issues/72 | [] | varac | 5 |
pyg-team/pytorch_geometric | pytorch | 9,680 | consider `conda` -> `.github/conda` | ### 🛠 Proposed Refactor
We could consider moving the `conda` directory to within `.github/conda/`.
This has already been implemented in https://github.com/huggingface/huggingface_hub/tree/main/.github/conda
### Suggest a potential alternative/fix
I don't know the particulars of how conda releases are made, but AFAIK renaming `conda` to `.github/conda` should work, along with the needed changes in CI.
"refactor"
] | SauravMaheshkar | 0 |
apify/crawlee-python | web-scraping | 178 | Improve the deduplication of requests | ### Context
A while ago, Honza Javorek raised some good points regarding the deduplication process in the request queue ([#190](https://github.com/apify/apify-sdk-python/issues/190)).
The first one:
> Is it possible that Apify's request queue dedupes the requests only based on the URL? Because the POSTs all have the same URL, just different payload. Which should be very common - by definition of what POST is, or even in practical terms with all the GraphQL APIs around.
In response, we improved the unique key generation logic in the Python SDK ([PR #193](https://github.com/apify/apify-sdk-python/pull/193)) to align with the TS Crawlee. This logic was later copied to `crawlee-python` and can be found in [crawlee/_utils/requests.py](https://github.com/apify/crawlee-python/blob/v0.0.4/src/crawlee/_utils/requests.py).
The second one:
> Also wondering whether two identical requests with one different HTTP header should be considered same or different. Even with a simple GET request, I could make one with Accept-Language: cs, another with Accept-Language: en, and I can get two wildly different responses from the same server.
Currently, HTTP headers are not considered in the computation of unique keys. Additionally, we do not offer an option to explicitly bypass request deduplication, unlike the `dont_filter` option in Scrapy ([docs](https://docs.scrapy.org/en/latest/topics/request-response.html)).
### Questions
- Should we include HTTP headers in the `unique_key` and `extended_unique_key` computation?
- Yes.
- Should we implement a `dont_filter` feature?
- It will be just a syntax sugar appending some random string to a unique key.
- Also come up with a better name (e.g. `always_enqueue`)?
- Should `use_extended_unique_key` be set as the default behavior?
- Probably not now. | closed | 2024-06-10T09:08:57Z | 2024-09-27T17:43:05Z | https://github.com/apify/crawlee-python/issues/178 | [
"t-tooling",
"solutioning"
] | vdusek | 3 |
deepfakes/faceswap | deep-learning | 558 | Error Extracting! | Extracting Error:
Loading...
12/20/2018 01:41:59 INFO Log level set to: INFO
12/20/2018 01:42:01 INFO Output Directory: C:\Users\ZeroCool22\Miniconda3\envs\faceswap\output
12/20/2018 01:42:01 INFO Input Directory: C:\Users\ZeroCool22\Miniconda3\envs\faceswap\input
12/20/2018 01:42:01 INFO Loading Detect from Mtcnn plugin...
12/20/2018 01:42:01 INFO Loading Align from Fan plugin...
12/20/2018 01:42:01 INFO NB: Parallel processing disabled.You may get faster extraction speeds by enabling it with the -mp switch
12/20/2018 01:42:01 INFO Starting, this may take a while...
12/20/2018 01:42:01 INFO Initializing MTCNN Detector...
12/20/2018 01:42:02 ERROR Caught exception in child process: 7588
12/20/2018 01:43:01 INFO Waiting for Detector... Time out in 4 minutes
12/20/2018 01:44:01 INFO Waiting for Detector... Time out in 3 minutes
12/20/2018 01:45:01 INFO Waiting for Detector... Time out in 2 minutes
12/20/2018 01:46:01 INFO Waiting for Detector... Time out in 1 minutes
12/20/2018 01:47:03 ERROR Got Exception on main handler:
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Miniconda3\envs\faceswap\lib\cli.py", line 90, in execute_script
process.process()
File "C:\Users\ZeroCool22\Miniconda3\envs\faceswap\scripts\extract.py", line 51, in process
self.run_extraction(save_thread)
File "C:\Users\ZeroCool22\Miniconda3\envs\faceswap\scripts\extract.py", line 149, in run_extraction
self.run_detection(to_process)
File "C:\Users\ZeroCool22\Miniconda3\envs\faceswap\scripts\extract.py", line 202, in run_detection
self.plugins.launch_detector()
File "C:\Users\ZeroCool22\Miniconda3\envs\faceswap\scripts\extract.py", line 386, in launch_detector
raise ValueError("Error initializing Detector")
ValueError: Error initializing Detector
12/20/2018 01:47:03 CRITICAL An unexpected crash has occurred. Crash report written to C:\Users\ZeroCool22\Miniconda3\envs\faceswap\crash_report.2018.12.20.014701426414.log. Please verify you are running the latest version of faceswap before reporting
Process exited.
I'm using the GUI: https://i.postimg.cc/J4vVZrn2/GUI-error-2.png
**Crash Report:** https://pastebin.com/MMiWRxmU
| closed | 2018-12-20T04:44:23Z | 2019-01-11T08:55:38Z | https://github.com/deepfakes/faceswap/issues/558 | [] | ZeroCool22 | 7 |
suitenumerique/docs | django | 452 | Typing enter doesn't create a line break below but above | ## Bug Report
**Problematic behavior**
Today, Sophie experienced a weird bug (see video below) when typing the enter key at the end of a bullet list.
**Expected behavior/code**
The line break should be created below.
**Steps to Reproduce**
**Environment**
- Impress version: Prod
- Platform: Chrome on MacOS
**Additional context**
**Screenshots**
[Capture vidéo du 25-11-2024 19:33:39.webm](https://github.com/user-attachments/assets/1b28068b-47d8-4783-b0ac-9dab55641163) | open | 2024-11-25T18:34:07Z | 2024-11-26T16:45:34Z | https://github.com/suitenumerique/docs/issues/452 | [
"bug"
] | virgile-dev | 1 |
gunthercox/ChatterBot | machine-learning | 2,163 | ModuleNotFoundError: No module named 'adapters' | Hey! So, I want the bot to save its data in a json file, so I used this piece of code:
```py
chatbot = ChatBot("SmortBot",
storage_adapter="adapters.storage.JsonDatabaseAdapter",
database="C:/Users/.../database.json")
```
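For what it's worth, the `storage_adapter` string appears to be resolved as a dotted import path, so its first segment must be an importable package — and a bare `adapters` package isn't installed, which is exactly what the traceback says. A stdlib sketch of that resolution mechanism, using `json.JSONDecoder` as a stand-in dotted path:

```python
import importlib

# "pkg.module.ClassName" strings are split and imported like this; if the
# leading module isn't importable, you get ModuleNotFoundError.
dotted = "json.JSONDecoder"  # stand-in for a storage-adapter path
module_path, class_name = dotted.rsplit(".", 1)
cls = getattr(importlib.import_module(module_path), class_name)
print(cls.__name__)  # JSONDecoder
```

So the dotted path presumably needs to start with the installed package (something under `chatterbot.storage`) rather than `adapters` — worth checking the storage-adapter names documented for your ChatterBot version.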
But I am getting the error `ModuleNotFoundError: No module named 'adapters'`
Am I doing something wrong here? Any help would be appreciated!
Thanks in advance :) | closed | 2021-05-21T09:26:00Z | 2025-02-19T12:30:22Z | https://github.com/gunthercox/ChatterBot/issues/2163 | [] | NISH-Original | 1 |
d2l-ai/d2l-en | data-science | 2,123 | Typo in Ch.2 introduction. | In the second paragraph on the intro to 2. Preliminaries, change "basic" to "basics" (see the image below).

| closed | 2022-05-10T15:08:49Z | 2022-05-10T17:37:44Z | https://github.com/d2l-ai/d2l-en/issues/2123 | [] | jbritton6 | 1 |
psf/requests | python | 6,229 | When making a POST request, why doesn't `auth` or `session.auth` work when logging in, but `data=data` does? | Please refer to our [Stack Overflow tag](https://stackoverflow.com/questions/tagged/python-requests) for guidance.
I have a website that requires an Email and Password to log in. When I set these values in `session.auth` or `auth=(email, pw)` they don't get the right response data. But, when I pass in a `data` object, it works.
What is auth or session.auth doing in the background? Is it just assuming to use "username" or something?
login_url = https://prenotami.esteri.it/Home/Login
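A hedged note on what happens in the background: the `auth` tuple is shorthand for HTTP Basic authentication, so requests encodes the credentials into an `Authorization` header and never touches the form body — which is why only `data=` satisfies a form-based login like this one. A stdlib-only sketch of what actually gets transmitted (credentials made up):

```python
import base64

# auth=(user, pw) -> HTTPBasicAuth -> a single Authorization header.
# No "Email"/"Password" form fields are ever sent this way.
user, pw = "username@gmail.com", "hunter2"  # made-up credentials
header = "Basic " + base64.b64encode(f"{user}:{pw}".encode()).decode()
print(header)  # the only place the credentials appear in the request
```

Since the login form presumably reads only the posted `Email`/`Password` fields, the header goes unused and the login fails.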
**data=data** (returns correct response html)
```python3
with requests.Session() as session:
data = {"Email": "username@gmail.com", "Password": pw}
login_response = session.post(
login_url,
data=data,
)
print(login_response.text)
```
**session.auth** (doesn't return correct response html)
```python3
with requests.Session() as session:
session.auth = (email, pw)
login_response = session.post(
login_url,
)
print(login_response.text)
```
**auth=()** (doesn't return correct html data)
```python3
with requests.Session() as session:
login_response = session.post(
login_url,
auth=("username@gmail.com", pw),
)
print(login_response.text)
```
| closed | 2022-09-04T03:07:40Z | 2023-09-05T00:03:06Z | https://github.com/psf/requests/issues/6229 | [] | whompyjaw | 1 |
httpie/cli | rest-api | 1,559 | I got [reports](https://github.com/RageAgainstThePixel/OpenAI-DotNet/issues/236) that this started happening today: | I got [reports](https://github.com/RageAgainstThePixel/OpenAI-DotNet/issues/236) that this started happening today:
```json
{
"error": {
"message": "Unsupported content type: 'application/json; charset=utf-8'. This API method only accepts 'application/json' requests, but you specified the header 'Content-Type: application/json; charset=utf-8'. Please try again with a supported content type.",
"type": "invalid_request_error",
"param": null,
"code": "unsupported_content_type"
}
}
```
Adding Charset to the content-type should be acceptable. Didn't see any changes go into the schema so likely an issue internally in the API implementation.
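A stdlib illustration of why rejecting this header is surprising — the charset is only a parameter and does not change the media type itself:

```python
from email.message import Message

# Parse the rejected header: the media type stays application/json.
msg = Message()
msg["Content-Type"] = "application/json; charset=utf-8"
print(msg.get_content_type())    # application/json
print(msg.get_param("charset"))  # utf-8
```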
_Originally posted by @StephenHodgson in https://github.com/openai/openai-openapi/issues/194_
_Originally posted by @jpmaniqis in https://github.com/mdn/content/issues/32266_ | closed | 2024-02-14T07:05:23Z | 2024-10-30T10:53:34Z | https://github.com/httpie/cli/issues/1559 | [] | jpmaniqis | 1 |
docarray/docarray | pydantic | 1,236 | Mypy plugin | open | 2023-03-14T10:07:09Z | 2023-03-23T10:05:49Z | https://github.com/docarray/docarray/issues/1236 | [] | JoanFM | 1 | |
ivy-llc/ivy | pytorch | 28,545 | Fix Frontend Failing Test: tensorflow - math.paddle.stanh | To-do List: https://github.com/unifyai/ivy/issues/27499 | closed | 2024-03-11T11:17:39Z | 2024-05-02T08:41:36Z | https://github.com/ivy-llc/ivy/issues/28545 | [
"Sub Task"
] | ZJay07 | 0 |
bmoscon/cryptofeed | asyncio | 416 | Deribit Liquidations channel not working | **Describe the bug**
When subscribed to the `TRADES` channel, with a `LIQUIDATIONS` callback consisting of a class which inherits `KafkaCallback` and `BackendLiquidationsCallback`, no liquidation messages are passed to the Kafka topic.
**Expected behavior**
Liquidation messages passed to the applicable topic.
**Operating System:**
- Kubernetes 1.20.1 - Flatcar Container Linux by Kinvolk 2705.1.0 (Oklo).
- Containers deployed based on python:slim.
**Cryptofeed Version**
- Git commit `6275d95` checked out from repo. `pip install .` in git repository for installation in `virtualenv`.
| closed | 2021-02-12T09:54:32Z | 2021-02-17T08:37:07Z | https://github.com/bmoscon/cryptofeed/issues/416 | [
"bug"
] | mfw78 | 4 |
jmcnamara/XlsxWriter | pandas | 351 | Repetitive merge_range to same range causes warning in Excel 2016 | Hi,
When I, through a program error/bug, repeatedly call merge_range on the same range, Excel produces two warnings:
1. We found a problem with some content in 'Range.XLSX'. Do you want us to try to recover as much as we can? If you trust the source of this workbook, click Yes. Clicking Yes yields:
2. Excel was able to open the file by repairing or removing the unreadable content. Removed records: Merge cells from /xl/worksheets/sheet2.xml part
I am using Python version 3.4.3 and XlsxWriter 0.8.6 and Excel 2016.
Here is some code that demonstrates the problem:
``` python
import xlsxwriter
workbook = xlsxwriter.Workbook('range.xlsx')
worksheet = workbook.add_worksheet()
merge_format = workbook.add_format({
'bold': True,
'align': 'center'
})
headings = (
("Alpha", "L1:W1"),
("Bravo", "X1:AI1"),
("Charlie", "AJ1:AU1"),
("Delta", "AV1:BG1"))
for (product, coordinates) in headings:
worksheet.merge_range("L1:W1", product, merge_format)
```
Granted, it was my program error that caused the merge_range calls at the same coordinates. My guess is that all four merge_range calls were committed to the spreadsheet, and when Excel saw all four merges on the same range, it threw its hands up. In correcting/recovering the spreadsheet, Excel used the last value, Delta, for the range.
What I did not have time to test is whether a similar error occurs if I write to the same cell repetitively. I hope to follow up with a similar test in a couple of days.
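For what it's worth, a minimal guard against issuing the same merge twice (a sketch wrapped around the library call, not something XlsxWriter itself provides) could look like:

```python
def merge_range_once(worksheet, seen_ranges, coordinates, value, cell_format=None):
    # Track ranges we have already merged so a buggy loop cannot emit
    # duplicate merge records for the same coordinates.
    if coordinates in seen_ranges:
        return False
    seen_ranges.add(coordinates)
    worksheet.merge_range(coordinates, value, cell_format)
    return True
```

With the loop from the report, this would merge "L1:W1" only once (keeping "Alpha") and silently skip the accidental repeats.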


[range.xlsx](https://github.com/jmcnamara/XlsxWriter/files/247344/range.xlsx)
| closed | 2016-05-03T17:07:45Z | 2020-10-21T11:09:43Z | https://github.com/jmcnamara/XlsxWriter/issues/351 | [
"bug"
] | tmoorebetazi | 9 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,242 | KeyError Mapping key not found - error received by admins on v4.9.9 |
Version: 4.9.9
```
KeyError Mapping key not found.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 151, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3/dist-packages/globaleaks/rest/decorators.py", line 56, in wrapper
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/rest/decorators.py", line 42, in wrapper
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/rest/decorators.py", line 30, in wrapper
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/handlers/operation.py", line 20, in put
return func(self, request['args'], *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/handlers/rtip.py", line 600, in grant_tip_access
return tw(db_grant_tip_access, self.request.tid, self.session.user_id, self.session.cc, rtip_id, req_args['receiver'])
KeyError: 'receiver'
``` | closed | 2022-06-29T09:58:45Z | 2022-07-06T09:34:04Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3242 | [] | aetdr | 4 |
open-mmlab/mmdetection | pytorch | 11,465 | get_flops.py file requires different commands than those in the instructions document | The documentation describes this command:
```
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```
However, the parameters that the script actually accepts are shown in the image below:

What should I do if I want to change the image size like before? Please update the instructions document, thanks. | open | 2024-02-06T10:06:23Z | 2024-03-23T12:36:53Z | https://github.com/open-mmlab/mmdetection/issues/11465 | [] | Jano-rm-rf | 1 |
koxudaxi/datamodel-code-generator | pydantic | 1,653 | 'NO'/"NO" (string enum value) gets serialized to `False_` ? | I have a really bizarre issue where the string 'NO' (ISO country code for Norway - we need this literal string to be an enum value), as part of an enum, is getting serialized to `False`.
We have an enum of ISO codes that includes the literal string `'NO'`; I have tried this without quotes, and with both single and double quotes.
```
MyModel:
type: object
additionalProperties: false
required:
- country_code
properties:
country_code:
type: string
example: US
enum:
- 'AE'
- 'AR'
...
- 'NL'
- 'NO' <----------- have tried just NO, 'NO', and "NO"
- 'NZ'
...
```
using the generation flags:
```
--strict-types str bool
--set-default-enum-member
--enum-field-as-literal one
--use-default
--strict-nullable
--collapse-root-models
--output-model-type pydantic_v2.BaseModel
```
we get the following output!
```
class CountryCode(Enum):
AE = "AE"
AR = "AR"
...
NL = "NL"
False_ = False <------------- !!
NZ = "NZ"
...
```
`'NO'` becomes `False`! Can you please let me know how to not do that?
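For context on why this might happen: under YAML 1.1 resolution rules, an unquoted scalar `NO` is a boolean, not a string. Here is a tiny re-implementation of that resolution rule, just to illustrate the pitfall (not the code generator's actual logic):

```python
def yaml11_resolve(scalar: str):
    # YAML 1.1 treats these words (case-insensitively) as booleans,
    # which is how a bare NO can turn into False somewhere in a toolchain.
    truthy = {"y", "yes", "true", "on"}
    falsy = {"n", "no", "false", "off"}
    lowered = scalar.lower()
    if lowered in truthy:
        return True
    if lowered in falsy:
        return False
    return scalar

print(yaml11_resolve("NO"))  # False
print(yaml11_resolve("NL"))  # NL
```

If some step re-dumps and re-loads the schema without preserving the quotes, this coercion would explain the `False_` member.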
Thanks much
| open | 2023-11-03T11:36:30Z | 2023-11-04T17:16:57Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1653 | [
"documentation"
] | tommyjcarpenter | 10 |
zappa/Zappa | flask | 870 | [Migrated] Update fails while updating `endpoint_url` with `base_path` when `use_apigateway` is False | Originally from: https://github.com/Miserlou/Zappa/issues/2123 by [jwilges](https://github.com/jwilges)
Update fails while updating `endpoint_url` with `base_path` when `use_apigateway` is `False`.
I believe this issue is similar to issue #1563 but I am trying to scope this new issue to be a small and easily-auditable patch so we can get it in master sooner than later. If this patch also happens to resolve all of the concerns in #1563, great; but if not, that ticket can remain open for further review.
## Context
Build environment:
- Python 3.8.2 virtual environment
- Zappa 0.51.0
## Expected Behavior
Update should complete without raising a `NoneType` exception and should show the endpoint's URL based on `domain_name` and `base_path` settings.
## Actual Behavior
```
(pip 19.2.3 (/opt/python-build/lib/python3.8/site-packages), Requirement.parse('pip>=20.0'), {'pip-tools'})
Calling update for stage prd..
100%|██████████| 2.97M/2.97M [00:00<00:00, 8.37MB/s]
100%|██████████| 27.4M/27.4M [00:18<00:00, 1.50MB/s]Downloading and installing dependencies..
- psycopg2-binary==2.8.5: Downloading
Packaging project as zip.
Uploading api-prd-1592447425.zip (26.2MiB)..
Updating Lambda function code..
Updating Lambda function configuration..
Scheduling..
Unscheduled api-prd-zappa-keep-warm-handler.keep_warm_callback.
Scheduled api-prd-zappa-keep-warm-handler.keep_warm_callback with expression rate(4 minutes)!
Oh no! An error occurred! :(
==============
==============
Need help? Found a bug? Let us know! :D
File bug reports on GitHub here: https://github.com/Miserlou/Zappa
And join our Slack channel here: https://slack.zappa.io
Love!,
Traceback (most recent call last):
File "/opt/python-build/lib/python3.8/site-packages/zappa/cli.py", line 2778, in handle
sys.exit(cli.handle())
File "/opt/python-build/lib/python3.8/site-packages/zappa/cli.py", line 512, in handle
self.dispatch_command(self.command, stage)
File "/opt/python-build/lib/python3.8/site-packages/zappa/cli.py", line 559, in dispatch_command
self.update(self.vargs['zip'], self.vargs['no_upload'])
File "/opt/python-build/lib/python3.8/site-packages/zappa/cli.py", line 1043, in update
endpoint_url += '/' + self.base_path
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'str'
~ Team Zappa!
```
## Possible Fix
Please see [revision 1807e97](https://github.com/jwilges/Zappa/commit/1807e973bfc9e606d38a03b2bf95c572d5af97dc) in my fork of Zappa for a fix I have tested.
In a nutshell, Zappa should generally:
1. check to avoid concatenating on an `endpoint_url` that is `None`, and
2. build a viable `endpoint_url` based on `domain_name` and `base_path` settings whenever viable
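A rough sketch of that guard (simplified for illustration; the actual patch lives in the revisions linked above):

```python
def build_endpoint_url(endpoint_url, domain, base_path):
    # Fall back to the custom domain when API Gateway is disabled,
    # and only append the base path once we have a URL to append to.
    if not endpoint_url and domain:
        endpoint_url = "https://{}".format(domain)
    if endpoint_url and base_path:
        endpoint_url = "{}/{}".format(endpoint_url, base_path)
    return endpoint_url
```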
## Behavior with Fix
With the fix I proposed above, here is Zappa's new output from `update`:
```
+ zappa update prd
Important! A new version of Zappa is available!
Upgrade with: pip install zappa --upgrade
Visit the project page on GitHub to see the latest changes: https://github.com/Miserlou/Zappa
Calling update for stage prd..
100%|██████████| 768k/768k [00:00<00:00, 5.64MB/s]
100%|██████████| 2.97M/2.97M [00:00<00:00, 10.0MB/s]
100%|██████████| 58.6k/58.6k [00:00<00:00, 1.44MB/s]
100%|██████████| 43.0M/43.0M [00:24<00:00, 1.74MB/s]Downloading and installing dependencies..
- typed-ast==1.4.1: Downloading
- regex==2020.5.7: Using locally cached manylinux wheel
- psycopg2-binary==2.8.5: Downloading
- lazy-object-proxy==1.4.3: Downloading
- coverage==5.1: Using locally cached manylinux wheel
Packaging project as zip.
Uploading api-prd-1592520783.zip (41.0MiB)..
Updating Lambda function code..
Updating Lambda function configuration..
Scheduling..
Unscheduled api-prd-zappa-keep-warm-handler.keep_warm_callback.
Scheduled api-prd-zappa-keep-warm-handler.keep_warm_callback with expression rate(4 minutes)!
Your updated Zappa deployment is live!
```
*(Apologies for switching the package dependencies, its file size is giant in this demo, but that is not related to this patch.)*
The last low-hanging fruit I noticed is that Zappa was not augmenting `deployed_string` with the `endpoint_url` when `use_apigateway` was `False`. So, I added another small patch to address that issue, see: [revision 7a44ee7](https://github.com/jwilges/Zappa/commit/7a44ee75bebb27212e2a31cc88c96cb9efa45c5b).
## Steps to Reproduce
1. Ensure your Zappa settings enable this combination of settings:
```json
"domain": "api.your.domain",
"base_path": "some_path",
"apigateway_enabled": false,
```
2. Perform a Zappa update and observe the exception documented above
## Your Environment
* Zappa version used: 0.51.0
* Operating System and Python version: `python:3.8-slim` (Docker) with Python 3.8.2
* The output of `pip freeze`:
```argcomplete==1.11.1
boto3==1.13.6
botocore==1.16.6
certifi==2020.4.5.1
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
Django==2.1.15
django-cors-headers==3.2.1
django-filter==2.2.0
djangorestframework==3.11.0
docutils==0.15.2
durationpy==0.5
future==0.18.2
hjson==3.0.1
idna==2.9
jmespath==0.9.5
kappa==0.6.0
sample==0.1
pip-tools==5.1.2
placebo==0.9.0
psycopg2-binary==2.8.5
python-dateutil==2.6.1
python-slugify==4.0.0
pytz==2020.1
PyYAML==5.3.1
requests==2.23.0
s3transfer==0.3.3
six==1.14.0
text-unidecode==1.3
toml==0.10.0
tqdm==4.46.0
troposphere==2.6.1
urllib3==1.25.9
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.51.0
```
* Your `zappa_settings.json`:
```json
{
"prd": {
"aws_region": "us-west-2",
"exclude": [
".circleci",
".coveragerc",
".git",
".gitignore",
".idea",
".pip",
".pylintrc",
".pytest_cache",
".tox",
".vscode",
"build",
"coverage-html",
"dist",
"tests",
"test-reports",
"*.code-workspace",
"pytest.ini",
"tox.ini",
"zappa_settings.json",
"*.db",
"*.md",
"*.zip"
],
"project_name": "api",
"domain": "api.your.domain",
"base_path": "some_path",
"apigateway_enabled": false,
"manage_roles": false,
"role_arn": "arn:aws:iam::xxx:role/api-prd-lambda",
"s3_bucket": "api-prd-lambda-xxx",
"runtime": "python3.8",
"timeout_seconds": 180,
"memory_size": 256,
"django_settings": "sample.settings",
"log_level": "INFO"
}
}
``` | closed | 2021-02-20T13:03:06Z | 2024-04-13T19:10:45Z | https://github.com/zappa/Zappa/issues/870 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
plotly/dash-table | dash | 308 | Ability to save input when clicking outside of the table [Sponsored: Due Feb 1] | Currently, if you are editing a value in the table, clicking outside of the table will not persist the value. | closed | 2018-12-18T23:55:45Z | 2020-05-11T03:08:39Z | https://github.com/plotly/dash-table/issues/308 | [
"dash-type-bug",
"dash-meta-sponsored"
] | chriddyp | 3 |
sqlalchemy/alembic | sqlalchemy | 952 | Second migration alters my primary key | **Describe the bug**
I started a new database. I migrated it and then ran another migration, where autogenerate altered my primary key.
First migration:

Second migration without updating the model:

So I get this error:
[SQL: ALTER TABLE ********* ALTER COLUMN id DROP NOT NULL]
Using:
postgres
wls | closed | 2021-10-14T21:31:24Z | 2022-05-03T07:38:16Z | https://github.com/sqlalchemy/alembic/issues/952 | [] | movaldivia | 1 |
vi3k6i5/flashtext | nlp | 104 | Issue with longest string matching | When one keyword overlaps with another, FlashText does not take the longest phrase.
**For example:**
```python
from flashtext import KeywordProcessor

keyword_processor = KeywordProcessor()
keyword_processor.add_keyword('love python', 'Luv Py')
keyword_processor.add_keyword('python programming in ML', 'Luv Py2')
keyword_processor.add_keyword('Love Django', 'Django')

sentence = "I love python programming in ML"
keywords_found = keyword_processor.extract_keywords(sentence)
keywords_found  # ['Luv Py']
```
**The actual result should be 'Luv Py2', but we got 'Luv Py'.**
Any help to fix this?
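In the meantime, here is a plain-Python sketch of the desired behavior: collect all candidate matches, then resolve overlaps in favor of the longer span. This is only an illustration, not FlashText's internals:

```python
def extract_longest(sentence, keywords):
    # keywords maps phrase -> clean name; matching is case-insensitive.
    text = sentence.lower()
    spans = []
    for phrase, clean in keywords.items():
        start = text.find(phrase.lower())
        while start != -1:
            spans.append((start, start + len(phrase), clean))
            start = text.find(phrase.lower(), start + 1)
    # Consider longer spans first so overlaps resolve to the longest phrase.
    spans.sort(key=lambda s: (-(s[1] - s[0]), s[0]))
    taken = []
    for start, end, clean in spans:
        if all(end <= t_start or start >= t_end for t_start, t_end, _ in taken):
            taken.append((start, end, clean))
    return [clean for _, _, clean in sorted(taken)]

keywords = {
    'love python': 'Luv Py',
    'python programming in ML': 'Luv Py2',
    'Love Django': 'Django',
}
print(extract_longest("I love python programming in ML", keywords))  # ['Luv Py2']
```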
| closed | 2020-01-25T15:18:09Z | 2020-03-25T10:21:42Z | https://github.com/vi3k6i5/flashtext/issues/104 | [] | giriannamalai | 3 |
Asabeneh/30-Days-Of-Python | python | 565 | Duplicated exercises in Day 4 | There are some duplicated exercises in Day 4 ([30-Days-Of-Python](https://github.com/Asabeneh/30-Days-Of-Python/tree/master)/[04_Day_Strings](https://github.com/Asabeneh/30-Days-Of-Python/tree/master/04_Day_Strings)/04_strings.md).
1. I believe exercise 23 and exercise 26 are nearly the same.
> 23. Use index or find to **find the position of the first occurrence of the word 'because' in the following sentence: 'You cannot end a sentence with because because because is a conjunction'**
> 26. **Find the position of the first occurrence of the word 'because' in the following sentence: 'You cannot end a sentence with because because because is a conjunction'**
2. Exercises 25 and 27 are the same.
| open | 2024-07-23T08:25:04Z | 2024-07-24T07:00:01Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/565 | [] | chienchuanw | 1 |
NVlabs/neuralangelo | computer-vision | 208 | Neuralangelo on driving scenes | I'm currently experimenting with Neuralangelo on driving sequences, such as scenes from KITTI-360. However, I'm encountering very poor results and want to understand the underlying causes.
I am particularly curious if the requirement for the region of interest to be bounded is affecting the results? Could you please provide any insights or recommendations on how to improve the results in such scenarios? | open | 2024-07-30T07:25:17Z | 2024-07-30T07:25:17Z | https://github.com/NVlabs/neuralangelo/issues/208 | [] | hala-djeghim | 0 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,183 | problem with auto plotting data | ```
model = KElbowVisualizer(KMeans(init='k-means++', max_iter=10000, n_init=10), k=(4,12))
model.fit(X)
model.elbow_value_
```
Since I did not call `model.show()`, I expect the above code not to show a plot, but it does.
**Desktop :**
- OS: Windows 10
- Python Version 3.8.10 (Anaconda)
- Yellowbrick Version '1.3.post1' | closed | 2021-06-16T07:26:23Z | 2021-07-10T18:45:34Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1183 | [] | maghaali | 8 |
HumanSignal/labelImg | deep-learning | 582 | When I double-click labelImg.exe, a black window opens and closes | When I double-click labelImg.exe, a black command-prompt window opens and then closes, but the GUI window of the application never appears.
| open | 2020-04-24T11:37:36Z | 2022-02-07T01:13:21Z | https://github.com/HumanSignal/labelImg/issues/582 | [] | shazy12 | 2 |
chatopera/Synonyms | nlp | 81 | How can incorrect word segmentation be fixed? | While using the library, I found that "老实说" was segmented into "老实" and "说". How can this problem be solved? | closed | 2019-04-20T15:03:04Z | 2019-04-21T01:03:33Z | https://github.com/chatopera/Synonyms/issues/81 | [] | OriLiMu | 1 |
numba/numba | numpy | 9,134 | numba.cuda.cudadrv.error.CudaSupportError: Error at driver init: Call to cuInit results in CUDA_ERROR_OPERATING_SYSTEM (30 | When running training, I started using numba, but this error occurred, and I am not sure whether there is a problem with CUDA.
I used the CUDA simulator via `export NUMBA_ENABLE_CUDASIM=1` and did not get an error, but I want to use the GPU I have on the server, so I am asking about this error.
```
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "train.py", line 34, in <module>
    from tridet.evaluators import get_evaluator
  File "/home/work/dd3d/dd3d-supplement-develop2/tridet/evaluators/__init__.py", line 8, in <module>
    from tridet.evaluators.kitti_3d_evaluator import KITTI3DEvaluator
  File "/home/work/dd3d/dd3d-supplement-develop2/tridet/evaluators/kitti_3d_evaluator.py", line 23, in <module>
    from tridet.evaluators.rotate_iou import d3_box_overlap_kernel, rotate_iou_gpu_eval
  File "/home/work/dd3d/dd3d-supplement-develop2/tridet/evaluators/rotate_iou.py", line 17, in <module>
    cuda.select_device(local_rank)
  File "/home/work/.conda/envs/dd3d/lib/python3.8/site-packages/numba/cuda/api.py", line 437, in select_device
    context = devices.get_context(device_id)
  File "/home/work/.conda/envs/dd3d/lib/python3.8/site-packages/numba/cuda/cudadrv/devices.py", line 220, in get_context
    return _runtime.get_or_create_context(devnum)
  File "/home/work/.conda/envs/dd3d/lib/python3.8/site-packages/numba/cuda/cudadrv/devices.py", line 144, in get_or_create_context
    return self._activate_context_for(devnum)
  File "/home/work/.conda/envs/dd3d/lib/python3.8/site-packages/numba/cuda/cudadrv/devices.py", line 176, in _activate_context_for
    gpu = self.gpus[devnum]
  File "/home/work/.conda/envs/dd3d/lib/python3.8/site-packages/numba/cuda/cudadrv/devices.py", line 40, in __getitem__
    return self.lst[devnum]
  File "/home/work/.conda/envs/dd3d/lib/python3.8/site-packages/numba/cuda/cudadrv/devices.py", line 26, in __getattr__
    numdev = driver.get_device_count()
  File "/home/work/.conda/envs/dd3d/lib/python3.8/site-packages/numba/cuda/cudadrv/driver.py", line 428, in get_device_count
    self.cuDeviceGetCount(byref(count))
  File "/home/work/.conda/envs/dd3d/lib/python3.8/site-packages/numba/cuda/cudadrv/driver.py", line 296, in __getattr__
    self.ensure_initialized()
  File "/home/work/.conda/envs/dd3d/lib/python3.8/site-packages/numba/cuda/cudadrv/driver.py", line 262, in ensure_initialized
    raise CudaSupportError(f"Error at driver init: {description}")
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init: Call to cuInit results in CUDA_ERROR_OPERATING_SYSTEM (304)
```
Can you give me some advice on how to solve it? The NVIDIA driver and CUDA libraries are compatible.
Please help me. | closed | 2023-08-15T12:08:40Z | 2023-10-21T01:44:09Z | https://github.com/numba/numba/issues/9134 | [
"needtriage",
"CUDA",
"stale"
] | ssungchae | 5 |
onnx/onnx | pytorch | 5,915 | Protobuf version compatibility? | Is 1.16 compatible with protobuf 3.6?
| closed | 2024-02-07T13:48:09Z | 2024-02-08T01:56:06Z | https://github.com/onnx/onnx/issues/5915 | [
"question"
] | kumar-utkarsh0317 | 1 |
deepset-ai/haystack | pytorch | 8,777 | Add support for converting .msg files to Documents | **Is your feature request related to a problem? Please describe.**
Recently we have had more clients who want to be able to use `.msg` files in their RAG pipelines. The `.msg` format is a Microsoft email format and is not trivial to convert without the help of an external library.
**Describe the solution you'd like**
It would be great if we could add a `MSGToDocument` converter to Haystack.
**Additional context**
Some libraries I researched that could help with this are:
**python-oxmsg** (comes from the same dev we use for our PPTXToDocument converter)
- Github: https://github.com/scanny/python-oxmsg
- Docs: https://scanny.github.io/python-oxmsg/message/
- Example converter implementation using `python-oxmsg` by Unstructured: https://github.com/Unstructured-IO/unstructured/blob/main/unstructured/partition/msg.py
**msg-extractor** (actively maintained but has a GPL-3.0 license)
- Github: https://github.com/TeamMsgExtractor/msg-extractor
| closed | 2025-01-28T07:22:37Z | 2025-02-24T07:12:34Z | https://github.com/deepset-ai/haystack/issues/8777 | [
"type:feature",
"P2"
] | sjrl | 1 |
modoboa/modoboa | django | 3,052 | [Feature] Allow user and/or admins to update IMAP password for Imap Migration | If the secret is changed on the Django side, or if a user updates the external password of a migration account, it should be possible to update its external password. | open | 2023-08-29T14:52:29Z | 2024-07-15T16:56:46Z | https://github.com/modoboa/modoboa/issues/3052 | [
"enhancement"
] | Spitfireap | 0 |
timkpaine/lantern | plotly | 153 | support arrow in live queue | closed | 2018-02-28T18:02:39Z | 2018-09-13T18:48:36Z | https://github.com/timkpaine/lantern/issues/153 | [
"feature"
] | timkpaine | 1 | |
flairNLP/fundus | web-scraping | 208 | The Taz has a similar issue to Occupy Democrats | As described in #178, the sitemap of the taz also uses article hubs. A fix should be done once #201 has been done. | closed | 2023-05-13T12:52:12Z | 2023-07-12T16:31:25Z | https://github.com/flairNLP/fundus/issues/208 | [] | Weyaaron | 1 |
PaddlePaddle/PaddleHub | nlp | 1,677 | AttributeError: module 'paddlehub' has no attribute 'Module' | PaddleHub 2.1.1, Paddle 2.1.3, Windows 10. `hub` works fine from cmd, but when I tried a demo in PyCharm it reported the error below.
```python
import paddlehub as hub

lac = hub.Module(name="lac")
test_text = ["今天是个好天气。"]
```
The reported error:
```
lac = hub.Module(name="lac")
AttributeError: module 'paddlehub' has no attribute 'Module'
```
| open | 2021-10-30T17:37:58Z | 2024-02-26T05:04:00Z | https://github.com/PaddlePaddle/PaddleHub/issues/1677 | [
"installation"
] | GreenHandee | 3 |
ydataai/ydata-profiling | data-science | 920 | module 'missingno.missingno' has no attribute 'bar' | I had to run `pip install "missingno>=0.4.2"` in order for `import pandas_profiling` to run. When I try to create a report by running
`profile = pandas_profiling.ProfileReport(pandas_df, title='sample', html={'style': {'full_width': True}})`, it throws the above error. | open | 2022-02-07T10:59:57Z | 2022-05-01T22:33:48Z | https://github.com/ydataai/ydata-profiling/issues/920 | [
"information requested ❔"
] | Sathyanarayanan8129 | 0 |
lorien/grab | web-scraping | 18 | Add the response body to exceptions | Sometimes exceptions occur when searching for some data in the response body. It would be nice to be able to get the response body in which the search by regexp, xpath, or text took place; then it would be possible to fix errors quickly. This is especially relevant when we parse sites whose markup changes fairly often.
| closed | 2013-10-14T09:21:18Z | 2014-05-24T17:28:35Z | https://github.com/lorien/grab/issues/18 | [] | khomyakov42 | 3 |
d2l-ai/d2l-en | data-science | 1,792 | Inconsistent use of np.dot and torch.mv in Section 2.3.8 | The final paragraph of Section 2.3.8 *Matrix-Vector Products* mentions the use of `np.dot` but not `torch.mv`. The subsequent code examples uses `torch.mv` but not `np.dot`.
Either the text should describe `torch.mv` or the code should use `np.dot` | closed | 2021-06-13T16:39:31Z | 2021-06-16T20:10:14Z | https://github.com/d2l-ai/d2l-en/issues/1792 | [] | dowobeha | 1 |
newpanjing/simpleui | django | 160 | Freeze the header / first column(s) / last column(s) | **What feature would you like to add?**
1. Freeze the table header, the first column (or several leading columns), and the last column (or several trailing columns).
**Leave your contact information so we can get in touch with you**
QQ: xxxxx
Email: xxx@xxx.com
| closed | 2019-09-30T02:09:38Z | 2019-11-18T05:25:45Z | https://github.com/newpanjing/simpleui/issues/160 | [
"enhancement"
] | wahello | 2 |
biolab/orange3 | data-visualization | 6,118 | Hard to guess what is the role of Predictions and Probabilities check box in Confusion Matrix | In an older implementation, the Predictions and Probabilities check boxes were enclosed in an Output box, which gave the two a meaning. Without this box it is hard to guess what role these check boxes play, as changing them does not change anything in the user interface. I would suggest adding the surrounding box back.
<img width="791" alt="image" src="https://user-images.githubusercontent.com/726604/187956468-09b4298a-7a13-40da-bc49-624263a647cb.png">
| closed | 2022-09-01T15:46:44Z | 2022-09-09T08:09:20Z | https://github.com/biolab/orange3/issues/6118 | [] | BlazZupan | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 405 | Checkpoints produced by run_clm_sft_with_peft.py do not contain adapter_config.json or adapter_model.bin | ### Detailed problem description
I trained an Alpaca model from scratch based on run_clm_sft_with_peft.py and LLaMa-plus-7b, but the resulting checkpoint does not contain the corresponding adapter_config.json and adapter_model.bin.
Merging directly with [merge_llama_with_chinese_lora.py](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/scripts/merge_llama_with_chinese_lora.py) raises an error.
### Run logs / screenshots
```
python ./scripts/merge_llama_with_chinese_lora.py \
    --base_model '/xxx/llama-7b-hf' \
    --lora_model '/xxx/chinese-llama-plus-lora-7b', '/xxx/chinese-alpaca-lora-7b' \
    --output_type huggingface \
    --output_dir '/xxx/chinese-alpaca-7b'
```
Here, chinese-alpaca-lora-7b is the folder where I saved my own trained model.
It then raises the error:
ValueError: Can't find 'adapter_config.json' at '/xxx/chinese-alpaca-lora-7b'
### Required checks (for the first three items, keep only the ones you are asking about)
- [x] **Base model**: Alpaca-Plus
- [x] **Operating system**: Linux
- [x] **Issue category**: model conversion and merging / model training and fine-tuning
- [x] (Required) Since the related dependencies are updated frequently, please make sure you followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [x] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the issues; I did not find a similar problem or solution
- [ ] (Required) Third-party tool issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
| closed | 2023-05-22T07:13:15Z | 2023-05-22T09:28:06Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/405 | [] | liuyukid | 1 |
kensho-technologies/graphql-compiler | graphql | 214 | Add support to auto-gen GraphQL schema from reflected SQL database tables | This can use a constructed SQLAlchemy MetaData object to construct the GraphQL schema from the table objects in the metadata. These tables themselves can be automatically reflected from the database. See https://docs.sqlalchemy.org/en/latest/core/metadata.html for a little background. | closed | 2019-03-01T15:12:07Z | 2019-10-02T13:38:34Z | https://github.com/kensho-technologies/graphql-compiler/issues/214 | [
"enhancement",
"user friendliness"
] | jmeulemans | 0 |
sktime/pytorch-forecasting | pandas | 1,356 | `QuantileLoss` Passes Unsupported `quantiles` Argument to Parent Class `MultiHorizonMetric` | - PyTorch-Forecasting version: 1.0.0
- PyTorch version: 2.0.1+cpu
- torchmetrics: 0.11.4
- Python version: 3.11.3
- Operating System: Windows 11
### Expected behavior
While working with the `QuantileLoss` metric, I observed that it inherits from `MultiHorizonMetric` and passes the `quantiles` argument to it. However, `MultiHorizonMetric` doesn't accept `quantiles` as an argument. This discrepancy leads to possible unexpected behavior such as the issue described in #1355 which is resolved by removing `quantiles` from the initialization of the parent class.
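A minimal sketch of that change, with simplified stand-in classes rather than the real pytorch-forecasting code:

```python
class MultiHorizonMetric:
    # Stand-in parent: note that it has no `quantiles` parameter.
    def __init__(self, reduction: str = "mean"):
        self.reduction = reduction


class QuantileLoss(MultiHorizonMetric):
    # Keep `quantiles` on the subclass instead of forwarding it to the
    # parent, which does not accept it.
    def __init__(self, quantiles=(0.02, 0.1, 0.25, 0.5, 0.75, 0.9, 0.98), **kwargs):
        super().__init__(**kwargs)
        self.quantiles = list(quantiles)
```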
I think this could be resolved by assigning `quantiles` as an argument of `QuantileLoss` directly, rather than passing it to the parent for which there is no actual need. Alternatively, `QuantileLoss` could inherit instead from `DistributionLoss`, but I'm not sure if that's desired. | open | 2023-08-02T00:52:49Z | 2023-08-02T00:52:49Z | https://github.com/sktime/pytorch-forecasting/issues/1356 | [] | B-Deforce | 0 |
huggingface/transformers | nlp | 36,578 | TypeError: LlavaProcessor: got multiple values for keyword argument 'images' | ### System Info
- `transformers` version: 4.49.0
- Platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.31
- Python version: 3.11.10
- Huggingface_hub version: 0.28.0
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.5.1+cu124 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
### Who can help?
After the release of transformers-4.49.0, we get an error in a smolagents CI test:
- https://github.com/huggingface/smolagents/issues/692
We fixed the issue by pinning transformers<4.49.0:
- https://github.com/huggingface/smolagents/pull/693
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The failing test code is: https://github.com/huggingface/smolagents/blob/40d795ddb60808d5094efad8e909f39376896d17/tests/test_models.py#L95-L107
```python
from PIL import Image
img = Image.open(Path(get_tests_dir("fixtures")) / "000000039769.png")
model = TransformersModel(
model_id="llava-hf/llava-interleave-qwen-0.5b-hf",
max_new_tokens=5,
device_map="cpu",
do_sample=False,
)
messages = [{"role": "user", "content": [{"type": "text", "text": "Hello!"}, {"type": "image", "image": img}]}]
output = model(messages, stop_sequences=["great"]).content
```
The relevant code in smolagents, where we pass `images` kwarg to `processor.apply_chat_template`, is:
```python
images = [Image.open(image) for image in images] if images else None
prompt_tensor = self.processor.apply_chat_template(
messages,
tools=[get_tool_json_schema(tool) for tool in tools_to_call_from] if tools_to_call_from else None,
return_tensors="pt",
tokenize=True,
return_dict=True,
images=images,
add_generation_prompt=True if tools_to_call_from else False,
```
The stack trace: https://github.com/huggingface/smolagents/actions/runs/13391491485/job/37400017221
```python
> output = model(messages, stop_sequences=["great"]).content
tests/test_models.py:95:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/smolagents/models.py:740: in __call__
prompt_tensor = self.processor.apply_chat_template(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = LlavaProcessor:
- image_processor: SiglipImageProcessor {
"do_convert_rgb": null,
"do_normalize": true,
"do_resc...ge_tokens": 0,
"patch_size": 14,
"processor_class": "LlavaProcessor",
"vision_feature_select_strategy": "full"
}
conversation = [{'content': [{'text': 'Hello!', 'type': 'text'}, {'image': 'iVBORw0KGgoAAAANSUhEUgAAAoAAAAHgCAIAAAC6s0uzAAEAAElEQVR4n...q8dj+hsTgsx1DdXi+rV9LEk/l9NC3//ef/jNtKWLpyxrhMFRX/n+vEMxdFseMagAAAABJRU5ErkJggg==', 'type': 'image'}], 'role': 'user'}]
chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n'}}{# Render all images first #}{% for content i...{% endif %}{{'<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
kwargs = {'images': None, 'return_tensors': 'pt'}
tokenizer_template_kwargs = {'add_generation_prompt': False, 'continue_final_message': False, 'documents': None, 'return_assistant_tokens_mask': False, ...}
tokenizer_key = 'return_assistant_tokens_mask', tokenizer_value = False
value = None
chat_template_kwargs = {'add_generation_prompt': None, 'continue_final_message': None, 'documents': None, 'num_frames': None, ...}
key = 'sample_indices_fn', processor_value = None
def apply_chat_template(
self,
conversation: Union[List[Dict[str, str]], List[List[Dict[str, str]]]],
chat_template: Optional[str] = None,
**kwargs: Unpack[AllKwargsForChatTemplate],
) -> str:
"""
Similar to the `apply_chat_template` method on tokenizers, this method applies a Jinja template to input
conversations to turn them into a single tokenizable string.
The input is expected to be in the following format, where each message content is a list consisting of text and
optionally image or video inputs. One can also provide an image, video, URL or local path which will be used to form
`pixel_values` when `return_dict=True`. If not provided, one will get only the formatted text, optionally tokenized text.
conversation = [
{
"role": "user",
"content": [
{"type": "image", "image": "https://www.ilankelman.org/stopsigns/australia.jpg"},
{"type": "text", "text": "Please describe this image in detail."},
],
},
]
Args:
conversation (`Union[List[Dict, [str, str]], List[List[Dict[str, str]]]]`):
The conversation to format.
chat_template (`Optional[str]`, *optional*):
The Jinja template to use for formatting the conversation. If not provided, the tokenizer's
chat template is used.
"""
if chat_template is None:
if self.chat_template is not None:
chat_template = self.chat_template
else:
raise ValueError(
"No chat template is set for this processor. Please either set the `chat_template` attribute, "
"or provide a chat template as an argument. See "
"https://huggingface.co/docs/transformers/main/en/chat_templating for more information."
)
# Fill two sets of kwargs that should be used by tokenizer's `apply_chat_template`
# and for multimodal chat template
tokenizer_template_kwargs = {}
for tokenizer_key in TokenizerChatTemplateKwargs.__annotations__.keys():
tokenizer_value = getattr(TokenizerChatTemplateKwargs, tokenizer_key, None)
value = kwargs.pop(tokenizer_key, tokenizer_value)
tokenizer_template_kwargs[tokenizer_key] = value
chat_template_kwargs = {}
for key in ProcessorChatTemplateKwargs.__annotations__.keys():
processor_value = getattr(ProcessorChatTemplateKwargs, key, None)
value = kwargs.pop(key, processor_value)
chat_template_kwargs[key] = value
if isinstance(conversation, (list, tuple)) and (
isinstance(conversation[0], (list, tuple)) or hasattr(conversation[0], "content")
):
is_batched = True
conversations = conversation
else:
is_batched = False
conversations = [conversation]
num_frames = chat_template_kwargs.get("num_frames")
video_fps = chat_template_kwargs.get("video_fps")
video_load_backend = chat_template_kwargs.get("video_load_backend")
tokenize = chat_template_kwargs.get("tokenize")
return_dict = chat_template_kwargs.get("return_dict")
sample_indices_fn = chat_template_kwargs.get("sample_indices_fn")
if tokenize:
batch_images, batch_videos = [], []
batch_video_metadata = []
for conversation in conversations:
images, videos = [], []
video_metadata = []
for message in conversation:
visuals = [content for content in message["content"] if content["type"] in ["image", "video"]]
image_fnames = [
vision_info[key]
for vision_info in visuals
for key in ["image", "url", "path", "base64"]
if key in vision_info and vision_info["type"] == "image"
]
video_fnames = [
vision_info[key]
for vision_info in visuals
for key in ["video", "url", "path"]
if key in vision_info and vision_info["type"] == "video"
]
for fname in image_fnames:
images.append(load_image(fname))
for fname in video_fnames:
if isinstance(fname, (list, tuple)) and isinstance(fname[0], str):
video = [np.array(load_image(image_fname)).T for image_fname in fname]
# create a 4D video because `load_video` always returns a 4D array
video = np.stack(video)
metadata = None
logger.warning(
"When loading the video from list of images, we cannot infer metadata such as `fps` or `duration`. "
"If you model applies special processing based on metadata, please load the whole video and let the model sample frames."
)
else:
video, metadata = load_video(
fname,
num_frames=num_frames,
fps=video_fps,
backend=video_load_backend,
sample_indices_fn=sample_indices_fn,
)
videos.append(video)
video_metadata.append(metadata)
# Currently all processors can accept nested list of batches, but not flat list of visuals
# So we'll make a batched list of images and let the processor handle it
if images:
batch_images.append(images)
if videos:
batch_videos.append(videos)
batch_video_metadata.append(video_metadata)
# Process conversation with video/image information if needed. Then convert into a prompt using Jinja template
conversations = self._process_messages_for_chat_template(
conversations,
batch_images=batch_images,
batch_videos=batch_videos,
batch_video_metadata=batch_video_metadata,
**chat_template_kwargs,
)
prompt = self.tokenizer.apply_chat_template(
conversations,
chat_template=chat_template,
tokenize=False,
return_dict=False,
**tokenizer_template_kwargs,
)
if not is_batched:
prompt = prompt[0]
if tokenize:
# Tokenizer's `apply_chat_template` never adds special tokens when tokenizing
# But processor's `apply_chat_template` didn't have an option to tokenize, so users had to format the prompt
# and pass it to the processor. Users thus never worried about special tokens relying on processor hadnling
# everything internally. The below line is to keep BC for that and be able to work with model that have
# special tokens in the template (consistent with tokenizers). We dont want to raise warning, it will flood command line
# without actionable solution for users
single_prompt = prompt[0] if is_batched else prompt
if self.tokenizer.bos_token is not None and single_prompt.startswith(self.tokenizer.bos_token):
kwargs["add_special_tokens"] = False
> out = self(
text=prompt,
images=batch_images if batch_images else None,
videos=batch_videos if batch_videos else None,
**kwargs,
)
E TypeError: LlavaProcessor:
E - image_processor: SiglipImageProcessor {
E "do_convert_rgb": null,
E "do_normalize": true,
E "do_rescale": true,
E "do_resize": true,
E "image_mean": [
E 0.5,
E 0.5,
E 0.5
E ],
E "image_processor_type": "SiglipImageProcessor",
E "image_std": [
E 0.5,
E 0.5,
E 0.5
E ],
E "processor_class": "LlavaProcessor",
E "resample": 3,
E "rescale_factor": 0.00392156862745098,
E "size": {
E "height": 384,
E "width": 384
E }
E }
E
E - tokenizer: Qwen2TokenizerFast(name_or_path='llava-hf/llava-interleave-qwen-0.5b-hf', vocab_size=151643, model_max_length=32768, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>'], 'image_token': '<image>'}, clean_up_tokenization_spaces=False, added_tokens_decoder={
E 151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
E 151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
E 151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
E 151646: AddedToken("<image>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
E }
E )
E
E {
E "image_token": "<image>",
E "num_additional_image_tokens": 0,
E "patch_size": 14,
E "processor_class": "LlavaProcessor",
E "vision_feature_select_strategy": "full"
E }
E got multiple values for keyword argument 'images'
.venv/lib/python3.10/site-packages/transformers/processing_utils.py:1383: TypeError
```
### Expected behavior
No error. | closed | 2025-03-06T07:55:26Z | 2025-03-07T09:19:13Z | https://github.com/huggingface/transformers/issues/36578 | [
"bug"
] | albertvillanova | 5 |
tensorflow/tensor2tensor | deep-learning | 1,754 | AttributeError: module 'tensorflow' has no attribute 'flags' | ### Description
```
C:\Users\XXXXX\Anaconda3\Lib\site-packages\tensor2tensor\bin>python t2t_trainer.py
Traceback (most recent call last):
File "t2t_trainer.py", line 24, in <module>
from tensor2tensor import models # pylint: disable=unused-import
File "C:\Users\XXXXX\Anaconda3\lib\site-packages\tensor2tensor\models\__init__.py", line 26, in <module>
from tensor2tensor.models import basic
File "C:\Users\XXXXX\Anaconda3\lib\site-packages\tensor2tensor\models\basic.py", line 25, in <module>
from tensor2tensor.utils import t2t_model
File "C:\Users\XXXXX\Anaconda3\lib\site-packages\tensor2tensor\utils\t2t_model.py", line 37, in <module>
from tensor2tensor.utils import decoding
File "C:\Users\XXXXX\Anaconda3\lib\site-packages\tensor2tensor\utils\decoding.py", line 41, in <module>
FLAGS = tf.flags.FLAGS
AttributeError: module 'tensorflow' has no attribute 'flags'
```
### Environment information
```
OS: Windows 10 - 64bit
$ pip freeze | grep tensor
tensor2tensor==1.15.2
tensorboard==2.0.1
tensorflow==2.0.0
tensorflow-datasets==1.3.0
tensorflow-estimator==2.0.1
tensorflow-gan==2.0.0
tensorflow-gpu==2.0.0
tensorflow-hub==0.7.0
tensorflow-metadata==0.15.1
tensorflow-probability==0.7.0
tf-estimator-nightly==2.0.0.dev2019111709
tf-nightly-gpu==2.1.0.dev20191117
```
$ python -V
# Python 3.7.4
$ conda info
```
active environment : None
user config file : C:\Users\XXXX\.condarc
populated config files :
conda version : 4.7.12
conda-build version : 3.18.9
python version : 3.7.4.final.0
virtual packages : __cuda=10.2
base environment : C:\Users\XXXX\Anaconda3 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : C:\Users\XXXX\Anaconda3\pkgs
C:\Users\XXXX\.conda\pkgs
C:\Users\XXXX\AppData\Local\conda\conda\pkgs
envs directories : C:\Users\XXXX\Anaconda3\envs
C:\Users\XXXX\.conda\envs
C:\Users\XXXX\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.7.12 requests/2.22.0 CPython/3.7.4 Windows/10 Windows/10.0.17134
administrator : False
netrc file : None
offline mode : False
```
# Steps to reproduce:
With tensorflow-gpu-nightly installed on a win10 64 bit - simply navigate to
```\Anaconda3\Lib\site-packages\tensor2tensor\bin```
and type
```python t2t_{{any_file}}.py```
# Error logs:
```
File "C:\Users\XXXX\Anaconda3\lib\site-packages\tensor2tensor\utils\decoding.py", line 41, in <module>
FLAGS = tf.flags.FLAGS
AttributeError: module 'tensorflow' has no attribute 'flags'
```
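For anyone hitting this with TensorFlow 2.x installed: the top-level `tf.flags` was removed in TF 2, and the 1.x API lives under `tf.compat.v1`. A hedged sketch of the shim (it degrades gracefully when TensorFlow isn't installed; whether a given tensor2tensor version tolerates it is not guaranteed):

```python
# Hypothetical shim for the removed tf.flags in TensorFlow 2.x.
# The 1.x flags API is still available under tensorflow.compat.v1.
try:
    import tensorflow.compat.v1 as tf  # mirrors the old `import tensorflow as tf`
    FLAGS = tf.flags.FLAGS             # what decoding.py's `tf.flags.FLAGS` resolves to
except ImportError:
    tf, FLAGS = None, None             # TensorFlow not installed; shim is illustrative

print("shim active" if tf is not None else "tensorflow unavailable")
```

The other common route is pinning an older TensorFlow 1.x release, which is what tensor2tensor 1.15 was written against.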
| open | 2019-11-26T01:13:51Z | 2023-06-30T16:40:27Z | https://github.com/tensorflow/tensor2tensor/issues/1754 | [] | birdmw | 10 |
mwaskom/seaborn | data-science | 3,567 | Improvements to histplot (1D) for discrete data | Hi,
two suggestions for minor usability improvements concerning the handling of discrete data in histplot (with `discrete=True`).
### Detect the correct bin size automatically
Currently, the bin size is just set to 1 automatically. However, data might be discrete with a different discretization step size. Of course I can set that manually, but it would be very convenient if it "just worked". Wouldn't that be as simple as something like `binwidth = np.diff(np.sort(df.x.unique())).min()`? (Surely a very inefficient implementation, but you get the idea.)
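The proposed detection can be sketched without seaborn or numpy at all — a pure-Python version of the same idea (the function name is mine; this is the suggestion, not seaborn's current behavior):

```python
def detect_binwidth(values):
    """Smallest gap between distinct sorted values == discretization step."""
    distinct = sorted(set(values))
    if len(distinct) < 2:
        return 1.0  # fall back to the current default of 1
    return min(b - a for a, b in zip(distinct, distinct[1:]))

print(detect_binwidth([0, 1, 1, 2, 4, 7]))    # → 1  (integer data, step 1)
print(detect_binwidth([0.0, 0.5, 2.5, 3.0]))  # → 0.5 (data discretized at 0.5)
```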
### Adapt kde bandwidth method when `discrete=True`
For discrete data, we can get the below ugly KDE behavior. (This is simply `sns.histplot(df, x="x", discrete=True, kde=True)` with `df = pd.DataFrame({"x": np.random.poisson(lam=1, size=(10000,))})`).

I am aware that KDE bandwidth selection is a thorny topic, and that there are additional problems going on here because of the hard 0 boundary, but I believe an even slightly better default behavior should be possible. For instance, would simply setting `bandwidth = [some constant between 0.5 and 1] * binwidth` be such an awful default for the `discrete=True` case?
For the example above, this is what I get with `kde_kws={'bw_method': 0.6}`, which is quite a bit closer to what would seem like a reasonable default to me.

| closed | 2023-11-20T19:45:25Z | 2023-12-10T17:38:46Z | https://github.com/mwaskom/seaborn/issues/3567 | [
"wishlist",
"mod:distributions"
] | e-pet | 4 |
scikit-image/scikit-image | computer-vision | 7,629 | Compilation Error Due to Undeclared cpow Function in _marching_cubes_lewiner_cy.c | ### Description:
I encountered a compilation error while building scikit-image, specifically related to the _marching_cubes_lewiner_cy.c file. The error message indicates that the cpow function is undeclared, which causes an implicit function declaration error during the compilation process.
Error Details:
cc -Iskimage/measure/_marching_cubes_lewiner_cy.cpython-312.so.p -Iskimage/measure -I../skimage/measure -I/data/data/com.termux/files/usr/tmp/pip-build-env-ds0m7t73/overlay/lib/python3.12/site-packages/numpy/_core/include -Iskimage/_shared -I../skimage/_shared -I/data/data/com.termux/files/usr/include/python3.12 -fvisibility=hidden -fdiagnostics-color=always -DNDEBUG -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -std=c99 -O3 -Wno-unused-function -fPIC -Wno-cpp -MD -MQ skimage/measure/_marching_cubes_lewiner_cy.cpython-312.so.p/meson-generated__marching_cubes_lewiner_cy.c.o -MF skimage/measure/_marching_cubes_lewiner_cy.cpython-312.so.p/meson-generated__marching_cubes_lewiner_cy.c.o.d -o skimage/measure/_marching_cubes_lewiner_cy.cpython-312.so.p/meson-generated__marching_cubes_lewiner_cy.c.o -c skimage/measure/_marching_cubes_lewiner_cy.cpython-312.so.p/_marching_cubes_lewiner_cy.c
skimage/measure/_marching_cubes_lewiner_cy.cpython-312.so.p/_marching_cubes_lewiner_cy.c:23375:109: error: call to undeclared library function 'cpow' with type '_Complex double (_Complex double, _Complex double)'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
23375 | __pyx_t_13 = __Pyx_SoftComplexToDouble(__Pyx_c_quot_double(__pyx_t_double_complex_from_parts(1.0, 0), __Pyx_c_pow_double(__pyx_t_double_complex_from_parts(__pyx_v_length, 0), __pyx_t_double_complex_from_parts(0.5, 0))), 1); if (unlikely(__pyx_t_13 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 374, __pyx_L1_error)
| ^
skimage/measure/_marching_cubes_lewiner_cy.cpython-312.so.p/_marching_cubes_lewiner_cy.c:3134:44: note: expanded from macro '__Pyx_c_pow_double'
3134 | #define __Pyx_c_pow_double(a, b) (cpow(a, b))
| ^
skimage/measure/_marching_cubes_lewiner_cy.cpython-312.so.p/_marching_cubes_lewiner_cy.c:23375:109: note: include the header <complex.h> or explicitly provide a declaration for 'cpow'
skimage/measure/_marching_cubes_lewiner_cy.cpython-312.so.p/_marching_cubes_lewiner_cy.c:3134:44: note: expanded from macro '__Pyx_c_pow_double'
3134 | #define __Pyx_c_pow_double(a, b) (cpow(a, b))
| ^
1 error generated.
Suggested Fix:
The error suggests that including the <complex.h> header or explicitly declaring the cpow function should resolve the issue. Modifying the generated C file to include this header or declaration might be a temporary workaround, but a more permanent fix would be to ensure that the necessary headers are included during the Cython compilation process.
Temporary Workaround:
As a temporary workaround, manually modify the generated C file to include the <complex.h> header:

```c
#include <complex.h>
```

However, this file is regenerated during each build, so a more sustainable solution would be appreciated.
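That workaround could also be automated as a small pre-build step — a hedged sketch (the real target is the generated `_marching_cubes_lewiner_cy.c` path from the log above; the demo below uses a stand-in file in `/tmp` so it is runnable anywhere):

```python
# Prepend the missing header to a generated C file, idempotently.
from pathlib import Path

def prepend_include(path: Path, header: str = "#include <complex.h>") -> None:
    text = path.read_text()
    if not text.startswith(header):          # skip if already patched
        path.write_text(header + "\n" + text)

demo = Path("/tmp/marching_demo.c")          # stand-in for the generated file
demo.write_text("static int x;\n")
prepend_include(demo)
print(demo.read_text().splitlines()[0])      # → #include <complex.h>
```

In a real build this would run after Cython emits the file and before the `cc` step; fixing the header emission in Cython itself remains the sustainable route.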
Additional Information:
This issue seems to originate from Cython's handling of complex numbers and the cpow function. Ensuring that the correct headers are included during the generation of C files from Cython code would likely resolve this issue.
Thank you for your attention to this matter. Please let me know if any further information is required.
### Way to reproduce:
Clone the scikit-image repository.
Attempt to build the project using the provided build instructions.
Observe the compilation error related to the cpow function.
### Version information:
```Shell
scikit-image version: 0.21.0
Python version: 3.12.8 (main, Dec 4 2024, 22:36:35) [Clang 18.0.3 (https://android.googlesource.com/toolchain/llvm-project d8003a456
Build system: Linux-5.10.66-android12-9-g3d53a9a9af57-aarch64-with-libc
numpy version: 2.2.0
```
| closed | 2024-12-11T07:13:02Z | 2024-12-16T06:56:01Z | https://github.com/scikit-image/scikit-image/issues/7629 | [
":bug: Bug"
] | printf172 | 0 |
jina-ai/serve | machine-learning | 5,890 | License problem! | You use a GPL software aiostream as the dependency.
Watch out that the GPL is contagious.
Which will cause big problem to a [Apache-2.0 license](https://github.com/jina-ai/jina/blob/master/LICENSE) project that it will make your project GPL too.
The only way to use GPL software without being infected is to use the GPL software with process isolation.
| closed | 2023-05-23T07:55:28Z | 2023-05-23T10:04:48Z | https://github.com/jina-ai/serve/issues/5890 | [] | wqh17101 | 3 |
huggingface/peft | pytorch | 1,967 | MobileViT does not work with Inference with different LoRA adapters in the same batch | ### System Info
Python 3.11.9
transformers==4.40.2
peft==0.11.2
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
[MobileVit model](https://github.com/huggingface/transformers/blob/3fbaaaa64d1ef3d8327adb577994d3d11277c77a/src/transformers/models/mobilevit/modeling_mobilevit.py#L789) is not compatible with using multiple adapters in the same batch. Running inference on a batch that mixes adapters via `adapter_names` triggers the following exception:
https://github.com/huggingface/peft/blob/273acf059e0f1f8bff1a3889f901475e9eb3b7ee/src/peft/tuners/lora/layer.py#L308
The root cause is that during the [unfolding operation](https://github.com/huggingface/transformers/blob/3fbaaaa64d1ef3d8327adb577994d3d11277c77a/src/transformers/models/mobilevit/modeling_mobilevit.py#L435) in the [transformers library MobileVit](https://github.com/huggingface/transformers/blob/3fbaaaa64d1ef3d8327adb577994d3d11277c77a/src/transformers/models/mobilevit/modeling_mobilevit.py#L372) the first dimension of the input is changed from `batch_size, ...` is changed to `batch_size * patch_size**2, ...` which makes it inconsistent with the `adapter_names` dimensions which is of length of `batch_size` and each entry refers to each of the batch items' adapter.
### Expected behavior
I solved this by a hack that modifies the `adapter_names` input size before sending it to the model and reverting it back to the original size for the classifier. It makes the entries proportional to the size made during the unfolding operation.
Also, we already discussed that there is a bug https://github.com/huggingface/peft/issues/1960 other than this MobileViT specific problem. Below script is the modifications needed both for https://github.com/huggingface/peft/issues/1960 and the mentioned problem together.
However, this is just a hack and I think this should work out of the box. I'm happy to investigate further when I get a chance to first solve https://github.com/huggingface/peft/issues/1960 .
```python
# -------- changing the size of the adapter_names input ----------
if model.base_model.model.base_model_prefix == "mobilevit":
patch_size = model.config.patch_size
multiply = patch_size ** 2
resized_adapters_names = []
for item in batch["adapter_names"]:
multiplied = [item] * multiply
resized_adapters_names += multiplied
batch["adapter_names"] = resized_adapters_names
outputs = model(**batch)
# -------- rest of the code ----------
"""
added this to solve https://github.com/huggingface/peft/issues/1960
"""
from typing import Any, Optional, Union
import torch
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
from peft.peft_model import PeftModel
from transformers.modeling_outputs import ImageClassifierOutput, ImageClassifierOutputWithNoAttention
from transformers import ViTForImageClassification, MobileViTForImageClassification
from functools import partial
class PeftModelFixed(PeftModel):
def forward(self, *args: Any, **kwargs: Any):
"""
Forward pass of the model.
"""
with self._enable_peft_forward_hooks(*args, **kwargs):
# TODO removed this to avoid mixing
# kwargs = {k: v for k, v in kwargs.items() if k not in self.special_peft_forward_args}
return self.get_base_model()(*args, **kwargs)
class MobileViTForImageClassificationFixed(MobileViTForImageClassification):
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
output_hidden_states: Optional[bool] = None,
labels: Optional[torch.Tensor] = None,
return_dict: Optional[bool] = None,
**kwargs # TODO added kwargs
) -> Union[tuple, ImageClassifierOutputWithNoAttention]:
r"""
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss). If
`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
# TODO here
outputs = self.mobilevit(pixel_values, output_hidden_states=output_hidden_states, return_dict=return_dict)
pooled_output = outputs.pooler_output if return_dict else outputs[1]
# TODO here
adapter_names = kwargs["adapter_names"]
patch_size = self.config.patch_size
multiply = patch_size ** 2
adapter_names_original = []
for i in range(0, len(adapter_names), multiply):
adapter_names_original.append(adapter_names[i])
logits = self.classifier(self.dropout(pooled_output), adapter_names=adapter_names_original)
loss = None
if labels is not None:
if self.config.problem_type is None:
if self.num_labels == 1:
self.config.problem_type = "regression"
elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
self.config.problem_type = "single_label_classification"
else:
self.config.problem_type = "multi_label_classification"
if self.config.problem_type == "regression":
loss_fct = MSELoss()
if self.num_labels == 1:
loss = loss_fct(logits.squeeze(), labels.squeeze())
else:
loss = loss_fct(logits, labels)
elif self.config.problem_type == "single_label_classification":
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
elif self.config.problem_type == "multi_label_classification":
loss_fct = BCEWithLogitsLoss()
loss = loss_fct(logits, labels)
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return ImageClassifierOutputWithNoAttention(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
)
def peftforward(self, *args, **kwargs):
if self.disable_adapters or (self.active_adapter not in self.modules_to_save):
return self.original_module(*args, **kwargs)
# TODO changed to support LoRA
adapter_names = kwargs["adapter_names"]
kwargs = {}
batch = args[0]
unique_adapters = set(adapter_names)
sub_batch_indices_list = []
for adapter in unique_adapters:
sub_batch_indices_list.append([index for index, item in enumerate(adapter_names) if item == adapter])
results = [0 for i in range(len(batch))]
for i, active_adapter in enumerate(unique_adapters):
sub_batch = batch[sub_batch_indices_list[i]]
output = self.modules_to_save[active_adapter](*(sub_batch,), **kwargs)
for index, j in enumerate(sub_batch_indices_list[i]):
results[j] = output[index]
return torch.stack(results)
def change_forward_dynamically(model: PeftModel):
model.classifier.forward = partial(peftforward, model.classifier)
return model
``` | open | 2024-07-29T12:53:39Z | 2025-03-17T10:10:45Z | https://github.com/huggingface/peft/issues/1967 | [] | saeid93 | 14 |
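The expand/restore bookkeeping the hack relies on, isolated as plain Python (function names are mine, not peft's):

```python
def expand_adapter_names(names, patch_size):
    """Repeat each batch item's adapter name patch_size**2 times (the unfold size)."""
    return [name for name in names for _ in range(patch_size ** 2)]

def restore_adapter_names(expanded, patch_size):
    """Inverse: take every patch_size**2-th entry to recover batch-size names."""
    return expanded[:: patch_size ** 2]

names = ["adapter_a", "adapter_b"]
expanded = expand_adapter_names(names, 2)   # patch_size=2 → 4 copies per item
assert len(expanded) == len(names) * 4
assert restore_adapter_names(expanded, 2) == names
```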
paperless-ngx/paperless-ngx | machine-learning | 8,832 | [BUG] Selected count and page count in document view disappear when scrolling | ### Description
If some documents are selected in the document view, the selected count as well as the page count (lower part of the screenshot) disappear when scrolling down in a list longer than the screen.

### Steps to reproduce
1. Have a list of documents longer than the screen
2. Select 1 or several documents
3. Scroll down
4. See that the selected-documents count and page navigation disappear
### Webserver logs
```bash
N/A
```
### Browser logs
```bash
N/A
```
### Paperless-ngx version
2.14.3
### Host OS
Debian
### Installation method
Docker - official image
### System status
```json
Mozilla FF and Chrome
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-01-20T15:41:09Z | 2025-02-21T03:08:34Z | https://github.com/paperless-ngx/paperless-ngx/issues/8832 | [
"not a bug"
] | schnillerman | 3 |
yt-dlp/yt-dlp | python | 11,854 | How can I get the original link of a TikTok video? | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
https://v19-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037/o4At3S2kRNARBE6RxSPAZJO5liY96iQI6VPuz/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1288&bt=644&cs=2&ds=3&eid=12800&ft=-Csk_mDUPD12Nf4jzE-UxDGvbY6e3wv25IcAp&mime_type=video_mp4&qs=14&rc=NDlnPGVkOzs8NTQ6MzU1O0BpM2tveG05cmh5dzMzODgzNEAvYF8tNGFiXzExYjFhXzFjYSM2L2kzMmQ0Y2VgLS1kLzFzcw%3D%3D&btag=e00098000&expire=1734791767&l=20241219143304BC49A97DC46A2FE6501B&ply_type=2&policy=2&signature=ff9a057a08b724a2d87393bf495d42fc&tk=tt_chain_token
The link above is what I get when running the yt-dlp command; it is not the original TikTok video link and cannot be accessed from other sources or browsers. However, the example link below can be accessed from anywhere. How can I extract an original link like the one below?
https://v16m-default.akamaized.net/6553a1128d9d294263a698dbc05ceebc/6764807b/video/tos/alisg/tos-alisg-pve-0037c001/oUqOZQDQIDCEcwjRfIAAEHlFWkzEIfxfw6DjgL/?a=0&bti=OUBzOTg7QGo6OjZAL3AjLTAzYCMxNDNg&ch=0&cr=0&dr=0&er=0&lr=all&net=0&cd=0%7C0%7C0%7C0&cv=1&br=1516&bt=758&cs=0&ds=6&ft=XE5bCqT0m7jPD12b~vnJ3wUX73yKMeF~O5&mime_type=video_mp4&qs=0&rc=OWU1OGlkNGU0NTdmZGY8ZEBpanJmd2s5cmtsdjMzODczNEA1XmMyY2M2XjYxYzY1YTE2YSNzZDZkMmQ0Y21gLS1kMS1zcw%3D%3D&vvpl=1&l=20241219141710F40777399E55D5158305&btag=e000a0000
`yt-dlp --format "bestvideo+bestaudio/best" --merge-output-format "mp4" --cookies "cookies.txt" --user-agent "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1" "https://www.tiktok.com/@camerachiensi/video/7450071291446562055"`
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | closed | 2024-12-19T14:45:37Z | 2024-12-23T18:49:05Z | https://github.com/yt-dlp/yt-dlp/issues/11854 | [
"question"
] | nguyendragon | 1 |
remsky/Kokoro-FastAPI | fastapi | 57 | Using long(ish) text with client.audio.speech.with_streaming_response.create only generates the first sentence | Example:
```
text='''PART ONE.
BEFORE.
You ask how many ages had the Carryx been fighting the long war? That is a meaningless question.
The Carryx ruled the stars for epochs.
We conquered the Ejia and Kurkst and outdreamt the Eyeless Ones.
We burned the Logothetes until their worlds were windswept glass.'''
with client.audio.speech.with_streaming_response.create(
model="kokoro",
voice=voice,
input=text,
response_format='mp3'
) as response:
response.stream_to_file(speech_file_path)
```
This results in an audio file containing only "PART ONE"
When streaming the same API response in chunks to a pyaudio player, it works as expected.
If I write the chunks to a file directly instead, it fails again, resulting in an audio file containing only "PART ONE".
So this works and speaks the entire audio:
```
player = pyaudio.PyAudio().open(
format=pyaudio.paInt16,
channels=1,
rate=24000,
output=True
)
with client.audio.speech.with_streaming_response.create(
model="kokoro",
voice=voice,
response_format="pcm",
input=text
) as response:
for chunk in response.iter_bytes(chunk_size=1024):
player.write(chunk)
```
While this one saves only the first sentence, just like the `stream_to_file` variant above, regardless of the format used (wav, mp3, pcm).
```
with client.audio.speech.with_streaming_response.create(
model="kokoro",
voice=voice,
input=text,
response_format='mp3'
) as response:
with open(speech_file_path, "wb") as f:
for chunk in response.iter_bytes(chunk_size=1024):
f.write(chunk)
```
| closed | 2025-01-15T11:45:24Z | 2025-01-27T07:42:47Z | https://github.com/remsky/Kokoro-FastAPI/issues/57 | [
"bug"
] | mrrtfm | 14 |
axnsan12/drf-yasg | rest-api | 791 | Manual parameters not working (AttributeError: 'tuple' object has no attribute 'in_') | # Bug Report
## Description
I cannot add query parameters using `swagger_auto_schema`. It complains that a `Parameter` object doesn't have the attribute `in_`, which is true: when I debugged, `Parameter` has `in`, not `in_`. It seems this needs to be changed for consistency.
## Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
<!-- edit: --> Possibly.
## Minimal Reproduction
```code
hotel_param = openapi.Parameter('hotel_id',
openapi.IN_QUERY,
description='Hotel ID for tourist',
type=openapi.TYPE_INTEGER),
start_datetime_param = openapi.Parameter('start_datetime',
openapi.IN_QUERY,
description='Start datetime in ISO format',
type=openapi.TYPE_STRING),
end_datetime_param = openapi.Parameter('end_datetime',
openapi.IN_QUERY,
description='End datetime in ISO format',
type=openapi.TYPE_STRING),
@swagger_auto_schema(method='GET',
manual_parameters=[hotel_param, start_datetime_param, end_datetime_param])
```
## Stack trace / Error message
```code
if any(param.in_ == openapi.IN_BODY for param in manual_parameters): # pragma: no cover
AttributeError: 'tuple' object has no attribute 'in_'
```
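One observation about the reproduction above (an aside, not necessarily the whole story): each `openapi.Parameter(...)` assignment ends with a trailing comma, which wraps the value in a 1-tuple, and that alone would produce exactly this `AttributeError`. A plain-Python sketch (no drf-yasg needed; the string below is a hypothetical stand-in for the `Parameter` object) shows the effect:

```python
# A trailing comma after an assignment turns the value into a 1-tuple.
hotel_param = "Parameter('hotel_id')",   # note the trailing comma

print(type(hotel_param))  # -> <class 'tuple'>

# Any attribute lookup on the tuple then fails, just like in the stack trace:
# hotel_param.in_  ->  AttributeError: 'tuple' object has no attribute 'in_'
```

Dropping the trailing commas from the three assignments may be worth trying before treating this as a library bug.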
| open | 2022-06-29T21:20:11Z | 2025-03-07T12:10:45Z | https://github.com/axnsan12/drf-yasg/issues/791 | [
"triage"
] | f4ww4z | 1 |
youfou/wxpy | api | 65 | After saving a video locally, the file size is 0 | I save videos with the Message.get_file method, but the resulting file size is 0. Voice messages, images, and documents all save fine.
The code is: ```msg.get_file(save_path=os.getcwd()+msg.file_name)```
The file name is: wxpy170526-011802.mp4
The response after saving the video:
```{'BaseResponse': {'ErrMsg': '请求成功', 'Ret': 0, 'RawMsg': 'Successfully downloaded'}}``` | open | 2017-05-25T17:23:30Z | 2017-08-13T07:07:13Z | https://github.com/youfou/wxpy/issues/65 | [] | RogerLiNing | 3 |
lanpa/tensorboardX | numpy | 41 | why do you use clone in histogram examples? | From the readme,
```
for name, param in resnet18.named_parameters():
writer.add_histogram(name, param.clone().cpu().data.numpy(), n_iter)
```
Could we instead use the following simpler version?
```
for name, param in resnet18.named_parameters():
writer.add_histogram(name, param.data.cpu().numpy(), n_iter)
```
(Or we could put the .cpu() before the .data probably -- I don't think this matters.) | closed | 2017-10-15T11:14:47Z | 2017-10-17T06:24:08Z | https://github.com/lanpa/tensorboardX/issues/41 | [] | greaber | 1 |
ContextLab/hypertools | data-visualization | 206 | handling extra keyword arguments | If the user passes in non-hypertools keyword arguments, we should pass them to our plotting backend. This is somewhat non-trivial in that we need to handle the case where the user wants different values for different elements of the to-be-plotted data list. I think what should happen is:
- if data is *not* a list, just pass along additional keywords without modifying them
- if data *is* a list:
- make sure that every keyword argument's value is a list of the same length. (if not, truncate or copy values to make the lengths match; if the user passes in a too-short list, throw an error)
- for each to-be-plotted thing, pass (to matplotlib or other backend plotting machinery) a dictionary of keyword arguments that's constructed by taking the keys from the original keyword argument dictionary and setting the values of those keys to the corresponding elements of the keyword argument lists. | open | 2018-05-03T15:39:20Z | 2018-05-03T15:39:44Z | https://github.com/ContextLab/hypertools/issues/206 | [
"enhancement",
"easy(ish)"
] | jeremymanning | 0 |
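The rule described in that issue can be sketched in plain Python (function and variable names here are illustrative, not hypertools internals):

```python
def normalize_plot_kwargs(data, kwargs):
    """Coerce extra keyword arguments into one kwargs dict per plotted item.

    Scalars are copied to every element, too-long lists are truncated, and
    too-short lists raise an error, matching the proposed behavior.
    """
    if not isinstance(data, list):
        # Single dataset: pass the keywords through unchanged.
        return [dict(kwargs)]
    n = len(data)
    normalized = {}
    for key, value in kwargs.items():
        if not isinstance(value, list):
            value = [value] * n          # copy scalar to every element
        elif len(value) < n:
            raise ValueError(f"'{key}' has {len(value)} values for {n} datasets")
        normalized[key] = value[:n]      # truncate if too long
    # One dict per to-be-plotted element, ready to hand to the backend.
    return [{k: v[i] for k, v in normalized.items()} for i in range(n)]
```

Each resulting dict would then be handed to matplotlib (or another backend) alongside the corresponding dataset.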
holoviz/panel | jupyter | 7,001 | Built-in support to open/close template sidebar | From https://discourse.holoviz.org/t/programatically-dynamically-hiding-template-sidebar/1605
```python
import panel as pn
html = pn.pane.HTML("")
button_open = pn.widgets.Button(name="openNav")
button_close = pn.widgets.Button(name="closeNav")
def open(event):
html.object = f""" <script> openNav(); </script>"""
button_open.on_click(open)
def close(event):
html.object = f""" <script> closeNav(); </script>"""
button_close.on_click(close)
vanilla = pn.template.FastListTemplate(title='toogle_nav')
vanilla.sidebar.append(html)
vanilla.main.append(button_open)
vanilla.main.append(button_close)
pn.serve(vanilla)
``` | closed | 2024-07-18T19:30:02Z | 2024-07-19T09:28:24Z | https://github.com/holoviz/panel/issues/7001 | [
"duplicate"
] | ahuang11 | 1 |
microsoft/qlib | deep-learning | 1,805 | HTTPError: 403 Client Error: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. | This error is reported when I use the sample code from the documentation to download data (python get_data.py qlib_data --name qlib_data_simple --target_dir ~/.qlib/qlib_data/cn_data --region cn) | closed | 2024-06-06T07:04:58Z | 2025-03-13T09:43:27Z | https://github.com/microsoft/qlib/issues/1805 | [
"question"
] | Eden-Cheung | 2 |
voila-dashboards/voila | jupyter | 836 | How to use custom python package | Hi, I modified a [jupyter widgets package](https://github.com/ocoudray/jupyter-drawing-pad). When I try to use [my version](https://github.com/mareksubocz/jupyter-drawing-pad) of it, Voilà somehow uses the original one installed via pip. How can I force it to use my version? Also, I am doing this to publish it on Binder, so the simpler the solution, the better.
Is there a way to fix it? I'll appreciate any help :) | closed | 2021-02-20T17:54:33Z | 2021-02-20T19:29:56Z | https://github.com/voila-dashboards/voila/issues/836 | [] | mareksubocz | 8 |
nvbn/thefuck | python | 516 | version 2.5.6-1: AttributeError: 'NoneType' object has no attribute 'stdout' | I installed via apt-get version 2.5.6-1 in Ubuntu 15.10:
```
apt-get install thefuck
```
But I always get this error, for example:
```
rubo77:~$ pyton
... did you mean »python« ? ...
rubo77:~$ thefuck
Traceback (most recent call last):
File "/usr/bin/thefuck", line 9, in <module>
load_entry_point('thefuck==2.5.6', 'console_scripts', 'thefuck.real')()
File "/usr/share/thefuck/thefuck/main.py", line 160, in main
matched_rule = get_matched_rule(command, rules, settings)
File "/usr/share/thefuck/thefuck/main.py", line 105, in get_matched_rule
script_only = command.stdout is None and command.stderr is None
AttributeError: 'NoneType' object has no attribute 'stdout'
```
| closed | 2016-06-15T07:52:15Z | 2017-03-14T23:02:55Z | https://github.com/nvbn/thefuck/issues/516 | [
"obsolete"
] | rubo77 | 4 |
2noise/ChatTTS | python | 24 | The program terminates at around 8%-10% on an M1 chip, with no error message | After starting it on an M1 chip:
WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
INFO:ChatTTS.core:All initialized.
3%|████ | 13/384 [00:00<00:16, 22.89it/s]
8%|█████████▌ | 168/2048 [00:05<01:06, 28.45it/s]
It just terminates like this, without any error message. | closed | 2024-05-28T14:30:29Z | 2024-07-16T04:02:11Z | https://github.com/2noise/ChatTTS/issues/24 | [
"stale"
] | glovebx | 8 |
flairNLP/flair | nlp | 3,494 | [Question]: ColumnCorpus taking forever to load large dataset | ### Question
I am building a sequence tagger that tags each character in a sentence. I have training data of a few million sentences, resulting in ~1 billion training examples. Here, one training example is one character with its corresponding label.
I am instantiating ColumnCorpus like this:
```
corpus = ColumnCorpus(data_path, columns,
in_memory=False,
train_file='train',
test_file='test',
dev_file='dev')
```
Initially, I was getting an OOM error, so I set the in_memory flag to False. However, loading takes forever and the job gets killed. I am using 2 GPUs. I have the following questions:
1. Is ColumnCorpus the right data format for data this big? Or is there a data format better suited for this purpose?
2. Is there a different instantiation of ColumnCorpus which is better suited for large datasets? | closed | 2024-07-07T22:43:04Z | 2024-07-07T23:02:28Z | https://github.com/flairNLP/flair/issues/3494 | [
"question"
] | pxb5080 | 0 |
huggingface/transformers | deep-learning | 36,762 | When what needs to be loaded is in the cache directory, there is no need to make a request to the remote | ### Feature request
When what needs to be loaded is in the cache directory, there is no need to make a request to the remote.
### Motivation
I noticed that when `AutoTokenizer` loads a file using `from_pretrained`, it first tries to load it from a cached directory when `pretrained_model_name_or_path` is a model_id (such as gpt2).
However, `commit_hash` is `None` by default, e.g. `AutoTokenizer` will call `get_tokenizer_config` to load the configuration file, where the code to get `commit_hash` is: `commit_hash = kwargs.get("_commit_hash", None)`.
Since it is None, the `cached_file` method doesn't know where the corresponding file is actually stored, so it uses the `hf_hub_download` method to request the corresponding `commit_hash` first.
Although this request is very simple and infrequent, **in offline environments (e.g., a company or school intranet that does not allow access to the extranet), it will report an error.**
I know I can copy files from the cache to my project directory, but the host is usually used by multiple people, which means it may have to be copied many times, which defeats the purpose of using a cached directory in the first place.
### Your contribution
**I suggest changing `commit_hash = kwargs.get("_commit_hash", None)` to `commit_hash = kwargs.get("_commit_hash", "main")`**. | closed | 2025-03-17T11:20:24Z | 2025-03-19T15:49:04Z | https://github.com/huggingface/transformers/issues/36762 | [
"Feature request"
] | JinFish | 3 |
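The cache-first behavior that feature request asks for can be sketched like this (the directory layout and names below are simplified illustrations, not the real Hugging Face cache structure):

```python
import os

def resolve_file(cache_dir, repo_id, filename, revision="main"):
    """Cache-first lookup: return the cached path if it exists, making no
    network request at all; otherwise report that a download is needed.

    With revision defaulting to "main" (instead of None), an offline host
    that already holds the file never needs to resolve a commit hash online.
    """
    local = os.path.join(cache_dir, repo_id.replace("/", "--"), revision, filename)
    if os.path.isfile(local):
        return local, False   # found locally, no request made
    return None, True         # caller must hit the Hub (fails offline)
```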
chezou/tabula-py | pandas | 97 | Need to document for Windows non ascii code handling | We have to ensure not only the Python encoding but also the Java encoding option `-Dfile.encoding` is set, to avoid getting `?` characters.
Ideally, it would be nice to have an encoding mapping between the two languages, but that would be costly. | closed | 2018-05-30T00:53:02Z | 2018-05-30T04:54:50Z | https://github.com/chezou/tabula-py/issues/97 | [] | chezou | 1 |
open-mmlab/mmdetection | pytorch | 11,418 | Strong baselines | The model URL in the strong baselines page does not redirect to a download interface; are the weights not released? | closed | 2024-01-23T09:27:05Z | 2024-01-25T01:52:41Z | https://github.com/open-mmlab/mmdetection/issues/11418 | [] | JolyonWu | 1 |
cvat-ai/cvat | tensorflow | 8,978 | Annotation configuration | Hello,
I am trying to annotate the projectile (the puck) in sports videos, but for some reason every labeled bbox creates a new track with two frames, and the tracks overlap in the XML file (see the example below).
This is a single continuous video.
Thanks,
Ohad
```xml
<track id="0" label="puck" source="manual">
  <box frame="90" keyframe="1" outside="0" occluded="0" xtl="897.24" ytl="416.98" xbr="904.14" ybr="421.38" z_order="0">
  </box>
  <box frame="91" keyframe="1" outside="1" occluded="0" xtl="897.24" ytl="416.98" xbr="904.14" ybr="421.38" z_order="0">
  </box>
</track>
<track id="1" label="puck" source="manual">
  <box frame="91" keyframe="1" outside="0" occluded="0" xtl="893.44" ytl="418.88" xbr="899.84" ybr="422.88" z_order="0">
  </box>
  <box frame="92" keyframe="1" outside="1" occluded="0" xtl="893.44" ytl="418.88" xbr="899.84" ybr="422.88" z_order="0">
  </box>
</track>
<track id="2" label="puck" source="manual">
  <box frame="98" keyframe="1" outside="0" occluded="0" xtl="877.20" ytl="425.90" xbr="884.30" ybr="430.30" z_order="0">
  </box>
  <box frame="99" keyframe="1" outside="1" occluded="0" xtl="877.20" ytl="425.90" xbr="884.30" ybr="430.30" z_order="0">
  </box>
</track>
<track id="3" label="puck" source="manual">
  <box frame="99" keyframe="1" outside="0" occluded="0" xtl="877.92" ytl="425.54" xbr="884.02" ybr="429.54" z_order="0">
  </box>
  <box frame="100" keyframe="1" outside="1" occluded="0" xtl="877.92" ytl="425.54" xbr="884.02" ybr="429.54" z_order="0">
  </box>
</track>
```
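If exports like the one above are what you already have, one possible workaround (a sketch only, assuming tracks are direct children of the root `<annotations>` element as in CVAT for video 1.1) is to post-process the XML and merge the per-frame tracks into one continuous track:

```python
import xml.etree.ElementTree as ET

def merge_tracks(xml_text, label="puck"):
    """Merge all single-keyframe tracks of one label into a single track.

    Keeps only the real keyframe boxes (outside="0") and re-emits them,
    sorted by frame, under one <track> element.
    """
    root = ET.fromstring(xml_text)
    boxes = []
    for track in list(root.findall("track")):
        if track.get("label") != label:
            continue
        for box in track.findall("box"):
            if box.get("outside") == "0":   # keep real detections only
                boxes.append(box)
        root.remove(track)
    merged = ET.SubElement(root, "track", id="0", label=label, source="manual")
    for box in sorted(boxes, key=lambda b: int(b.get("frame"))):
        merged.append(box)
    return ET.tostring(root, encoding="unicode")
```

Whether merging is semantically right depends on how the boxes were drawn, so this is offered as a cleanup idea rather than a fix for the underlying annotation workflow.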
 | closed | 2025-01-22T08:16:48Z | 2025-01-24T08:09:23Z | https://github.com/cvat-ai/cvat/issues/8978 | [] | ohadvolvo | 1 |
miguelgrinberg/flasky | flask | 311 | the "password_reset_request" has a problem | The "password_reset_request" function does not check whether the email address exists, and the form does not perform a reset if you submit a wrong email address.
In Flasky, how do I reset the form? | closed | 2017-11-09T06:25:47Z | 2020-08-27T22:14:58Z | https://github.com/miguelgrinberg/flasky/issues/311 | [
"question"
] | auqf | 13 |
jmcarpenter2/swifter | pandas | 22 | Can you give a working example with multiple columns and extra arguments | I tried to run the following which should work according to the documentation:
```python
df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [5, 6, 7, 8]})

def my_sum(a, b):
    return a + b

df[['x'], ['y']].swifter.apply(my_sum)
```
This returns a pandas error `TypeError: unhashable type: 'list'`; it would be great if you could update the documentation with a working example. Thank you | closed | 2018-10-16T13:55:19Z | 2018-11-12T21:18:19Z | https://github.com/jmcarpenter2/swifter/issues/22 | [] | lstavrogiannis | 1 |
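For what it's worth, that `TypeError` comes from pandas rather than swifter: `df[['x'], ['y']]` is parsed as indexing with the tuple `(['x'], ['y'])`, which pandas then tries to treat as a hashable key. A stdlib-only check (no pandas needed) shows why that fails:

```python
# df[['x'], ['y']] is equivalent to df[(['x'], ['y'])] -- a tuple of lists.
indexer = (['x'], ['y'])

try:
    hash(indexer)  # pandas attempts something like this for a single key
except TypeError as err:
    print(err)     # -> unhashable type: 'list'

# The intended column selection is a single list of names, i.e. df[['x', 'y']].
columns = ['x', 'y']
```

With that fixed, something along the lines of `df[['x', 'y']].swifter.apply(lambda row: my_sum(row['x'], row['y']), axis=1)` should run, though that exact call is untested here since it needs swifter installed.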
fastapi-users/fastapi-users | fastapi | 701 | Support Pydantic SecretStr for authentication backend secrets | As raised in #700. | closed | 2021-08-26T09:22:21Z | 2024-02-01T14:36:45Z | https://github.com/fastapi-users/fastapi-users/issues/701 | [
"enhancement"
] | frankie567 | 6 |
indico/indico | flask | 6,075 | See if we can have an "add to outlook" link like we have for google in the social widget | For Google Calendar it's a generic URL that does not require API keys or similar. All the data is passed via query string:
```html
<div class="social-site">
<a href="https://www.google.com/calendar/event?{{ google_calendar_params|urlencode }}" target="_blank">
<img src="{{ url_for('assets.image', filename='google_calendar.gif') }}" alt="" border="0">
</a>
</div>
```
Check if https://outlook.office.com/ has something similar so people can add an event to their Outlook/Exchange Online calendar the same way. | closed | 2023-11-30T09:03:01Z | 2023-12-04T22:49:41Z | https://github.com/indico/indico/issues/6075 | [
"enhancement",
"trivial"
] | ThiefMaster | 3 |
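Outlook on the web does appear to expose a comparable compose deep link. The base path and parameter names below come from commonly observed URLs rather than official documentation, so they should be verified before shipping; the sketch just mirrors how the Google params are built:

```python
from urllib.parse import urlencode

# Hypothetical event data, mirroring what the Google Calendar widget passes.
params = {
    "path": "/calendar/action/compose",
    "rru": "addevent",
    "subject": "My Indico event",
    "startdt": "2023-12-01T09:00:00+00:00",
    "enddt": "2023-12-01T10:00:00+00:00",
    "body": "Added from the Indico social widget",
}
outlook_url = "https://outlook.office.com/calendar/0/deeplink/compose?" + urlencode(params)
print(outlook_url)
```

The template could then use it exactly like the Google link, e.g. `href="https://outlook.office.com/calendar/0/deeplink/compose?{{ outlook_calendar_params|urlencode }}"`.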
Lightning-AI/LitServe | rest-api | 67 | end-to-end tests | Add end-to-end tests for:
- [x] **dynamic Batching** - addressed by #68
- [x] **Dynamic batching with streaming** - addressed by #68
- [x] single prediction #70
- [x] single streaming #247 | closed | 2024-04-25T22:33:18Z | 2024-08-30T08:03:51Z | https://github.com/Lightning-AI/LitServe/issues/67 | [
"enhancement",
"good first issue",
"help wanted",
"ci / tests"
] | aniketmaurya | 3 |
MaartenGr/BERTopic | nlp | 1,148 | cuml does not install | Hi,
I have problems importing (installing) cuML; the import fails at:
```
class DeviceTypeError(Exception):
    '''An exception thrown to indicate bad device type selection'''
```
I have an Asus TUF Gaming GeForce RTX™ 3080 V2 OC Edition 10GB GPU card (https://www.asus.com/fi/motherboards-components/graphics-cards/tuf-gaming/tuf-rtx3080-o10g-v2-gaming/). Is it because I have a GeForce card? (I hope not)
Any suggestion on how to solve this?
thanks,
Andreas
Environment
```
anaconda (latest)
Python 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0]
Jupyter notebook 6.5.2
IPython 8.8.0 -- An enhanced Interactive Python. Type '?' for help.
```
here's the full dump
```
ModuleNotFoundError Traceback (most recent call last)
Cell In[2], line 5
3 from bertopic_mod import BERTopicMod
4 #if use_GPU:
----> 5 from cuml.cluster import HDBSCAN
6 from cuml.manifold import UMAP
File ~/.local/lib/python3.10/site-packages/cuml/__init__.py:17
1 #
2 # Copyright (c) 2022, NVIDIA CORPORATION.
3 #
(...)
14 # limitations under the License.
15 #
---> 17 from cuml.internals.base import Base, UniversalBase
19 # GPU only packages
21 import cuml.common.cuda as cuda
File ~/.local/lib/python3.10/site-packages/cuml/internals/__init__.py:17
1 #
2 # Copyright (c) 2019-2022, NVIDIA CORPORATION.
3 #
(...)
14 # limitations under the License.
15 #
---> 17 from cuml.internals.base_helpers import (
18 BaseMetaClass,
19 _tags_class_and_instance
20 )
21 from cuml.internals.api_decorators import (
22 _deprecate_pos_args,
23 api_base_fit_transform,
(...)
35 exit_internal_api,
36 )
37 from cuml.internals.api_context_managers import (
38 in_internal_api,
39 set_api_output_dtype,
40 set_api_output_type,
41 )
File ~/.local/lib/python3.10/site-packages/cuml/internals/base_helpers.py:20
17 from inspect import Parameter, signature
18 import typing
---> 20 from cuml.internals.api_decorators import (
21 api_base_return_generic,
22 api_base_return_array,
23 api_base_return_sparse_array,
24 api_base_return_any,
25 api_return_any,
26 _deprecate_pos_args
27 )
28 from cuml.internals.array import CumlArray
29 from cuml.internals.array_sparse import SparseCumlArray
File ~/.local/lib/python3.10/site-packages/cuml/internals/api_decorators.py:24
21 import warnings
23 # TODO: Try to resolve circular import that makes this necessary:
---> 24 from cuml.internals import input_utils as iu
25 from cuml.internals.api_context_managers import BaseReturnAnyCM
26 from cuml.internals.api_context_managers import BaseReturnArrayCM
File ~/.local/lib/python3.10/site-packages/cuml/internals/input_utils.py:19
1 #
2 # Copyright (c) 2019-2022, NVIDIA CORPORATION.
3 #
(...)
14 # limitations under the License.
15 #
17 from collections import namedtuple
---> 19 from cuml.internals.array import CumlArray
20 from cuml.internals.array_sparse import SparseCumlArray
21 from cuml.internals.global_settings import GlobalSettings
File ~/.local/lib/python3.10/site-packages/cuml/internals/array.py:22
19 import operator
20 import pickle
---> 22 from cuml.internals.global_settings import GlobalSettings
23 from cuml.internals.logger import debug
24 from cuml.internals.mem_type import MemoryType, MemoryTypeError
File ~/.local/lib/python3.10/site-packages/cuml/internals/global_settings.py:20
18 import threading
19 from cuml.internals.available_devices import is_cuda_available
---> 20 from cuml.internals.device_type import DeviceType
21 from cuml.internals.logger import warn
22 from cuml.internals.mem_type import MemoryType
File ~/.local/lib/python3.10/site-packages/cuml/internals/device_type.py:19
1 #
2 # Copyright (c) 2022, NVIDIA CORPORATION.
3 #
(...)
14 # limitations under the License.
15 #
18 from enum import Enum, auto
---> 19 from cuml.internals.mem_type import MemoryType
22 class DeviceTypeError(Exception):
23 '''An exception thrown to indicate bad device type selection'''
File ~/.local/lib/python3.10/site-packages/cuml/internals/mem_type.py:25
19 from cuml.internals.device_support import GPU_ENABLED
20 from cuml.internals.safe_imports import (
21 cpu_only_import,
22 gpu_only_import
23 )
---> 25 cudf = gpu_only_import('cudf')
26 cp = gpu_only_import('cupy')
27 cpx_sparse = gpu_only_import('cupyx.scipy.sparse')
File ~/.local/lib/python3.10/site-packages/cuml/internals/safe_imports.py:366, in gpu_only_import(module, alt)
340 '''A function used to import modules required only in GPU installs
341
342 This function will attempt to import a module with the given name, but it
(...)
363 UnavailableMeta.
364 '''
365 if GPU_ENABLED:
--> 366 return importlib.import_module(module)
367 else:
368 return safe_import(
369 module,
370 msg=f'{module} is not installed in non GPU-enabled installations',
371 alt=alt
372 )
File ~/anaconda3/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'cudf'
---
``` | closed | 2023-04-03T08:55:28Z | 2023-05-23T09:24:05Z | https://github.com/MaartenGr/BERTopic/issues/1148 | [] | aph61 | 4 |