| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
arogozhnikov/einops | numpy | 92 | Question: How to do repeat interleave? | Hi, I was wondering if `repeat` can also be used for repeating given elements a given number of times along a given dimension, similar to `torch.repeat_interleave`. Say I have a sequence `[0,1,2,3]` and repeats specified as `[1,3,2,4]`. The result should be `[0,1,1,1,2,2,3,3,3,3]`. Is this possible with einops? Regards. | closed | 2020-11-22T12:28:55Z | 2020-11-27T09:04:47Z | https://github.com/arogozhnikov/einops/issues/92 | [] | janvainer | 2 |
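The repeat-interleave behavior asked about in the issue above can be sketched in plain Python. This is illustrative only and does not use einops; `torch.repeat_interleave` and `np.repeat` provide the same semantics natively, and the function name here is my own.

```python
# Repeat each element of seq a per-element number of times,
# mirroring torch.repeat_interleave(seq, repeats).
def repeat_interleave(seq, repeats):
    return [x for x, r in zip(seq, repeats) for _ in range(r)]

print(repeat_interleave([0, 1, 2, 3], [1, 3, 2, 4]))
# [0, 1, 1, 1, 2, 2, 3, 3, 3, 3]
```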
ultrafunkamsterdam/undetected-chromedriver | automation | 1943 | [nodriver] How to select the frame or click 'Verify you are human' | url: https://masiro.me
I tried the following code:
```python
checkbox = await page.wait_for(text="cf-chl-widget-", timeout=10)
print(checkbox)
await checkbox.mouse_move()
await checkbox.mouse_click()
print(checkbox)
```
The element information is:
`<input type="hidden" name="cf-turnstile-response" id="cf-chl-widget-805o7_response"></input>`
However, checkbox.mouse_move() reported the following error:
`could not find position for <input type="hidden" name="cf-turnstile-response" id="cf-chl-widget-805o7_response"></input>`
also:
`iframes = await page.select_all('iframe')`
but error:
`time ran out while waiting for iframe` | open | 2024-07-10T10:42:33Z | 2025-02-02T13:24:20Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1943 | [] | TonyLiooo | 12 |
FactoryBoy/factory_boy | django | 865 | 3.2.0: build_sphinx setuptools target fails | Looks like copy.py is missing the path to the `factory_boy` module code.
```console
[tkloczko@barrel factory_boy-3.2.0]$ /usr/bin/python3 setup.py build_sphinx -b man
running build_sphinx
Running Sphinx v4.0.2
Configuration error:
There is a programmable error in your configuration file:
Traceback (most recent call last):
File "/home/tkloczko/rpmbuild/BUILD/factory_boy-3.2.0/factory/__init__.py", line 76, in <module>
__version__ = version("factory_boy")
File "/usr/lib64/python3.8/importlib/metadata.py", line 530, in version
return distribution(distribution_name).version
File "/usr/lib64/python3.8/importlib/metadata.py", line 503, in distribution
return Distribution.from_name(distribution_name)
File "/usr/lib64/python3.8/importlib/metadata.py", line 177, in from_name
raise PackageNotFoundError(name)
importlib.metadata.PackageNotFoundError: factory_boy
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/sphinx/config.py", line 323, in eval_config_file
exec(code, namespace)
File "/home/tkloczko/rpmbuild/BUILD/factory_boy-3.2.0/docs/conf.py", line 12, in <module>
import factory
File "/home/tkloczko/rpmbuild/BUILD/factory_boy-3.2.0/factory/__init__.py", line 80, in <module>
__version__ = pkg_resources.get_distribution("factory_boy").version
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 482, in get_distribution
dist = get_provider(dist)
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 358, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 901, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 787, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'factory_boy' distribution was not found and is required by the application
```
| closed | 2021-06-03T10:46:53Z | 2021-09-03T22:44:26Z | https://github.com/FactoryBoy/factory_boy/issues/865 | [
"Packaging",
"Improvement"
] | kloczek | 10 |
plotly/dash | flask | 2,609 | polars seems incompatible with long callbacks | **Describe your context**
dash app with python 3.11.4 deployed on AWS EC2 instance
```
dash[diskcache] 2.11.1
polars 0.18.11
```
**Describe the bug**
When running long callbacks in Dash with Polars, they run indefinitely and don't get collected. I imagine it's something to do with Polars parallelism and multithreading? I don't run into these issues with pandas.
**Expected behavior**
Callback to run
| open | 2023-08-01T16:28:17Z | 2024-08-13T19:36:12Z | https://github.com/plotly/dash/issues/2609 | [
"bug",
"P3"
] | liammcknight95 | 3 |
flavors/django-graphql-jwt | graphql | 212 | Obtain web token from within a mutation | I want to log in the user (`ObtainWebTokenMutation`) from within the registration mutation itself. So, the differentiating factor from the usual case is that the `username` and `password` won't be there as arguments to the mutation. Instead I'd need to take the functionality from that mutation and put it into a util which will get the token and make the required changes, e.g. changing the context object and sending the `token_issued` event. For now it seems that all the code I require is in the `@token_auth` decorator, which is to be used with a mutation where the `username` and `password` are kwargs supplied to it.
Is there like a util for this or any ideas on how I can do it without copy-pasting the token_auth code into my mutation? | open | 2020-07-26T08:22:51Z | 2021-04-12T04:36:24Z | https://github.com/flavors/django-graphql-jwt/issues/212 | [] | nikochiko | 2 |
aleju/imgaug | machine-learning | 641 | Scale and crop? | Should those be used together? I was wondering, since they may have the same effect. | open | 2020-03-15T20:24:21Z | 2020-03-16T19:21:13Z | https://github.com/aleju/imgaug/issues/641 | [] | jhaggle | 1 |
httpie/cli | api | 1,413 | Multiple response headers with same name combined to a single comma-separated list | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. Request from any endpoint which produces multiple response headers with the same name
## Current result
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 2
Content-Type: text/plain
Date: Mon, 13 Jun 2022 11:17:01 GMT
Server: nginx/1.22.0
edge-cache-tag: origin.stuartmacleod.net-2022-06-13T11-17-01.933Z
not-edge-cache-tag: origin.stuartmacleod.net-2022-06-13T11-17-01.933Z
testheader: 1, 2, 3
## Expected result
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 2
Content-Type: text/plain
Date: Mon, 13 Jun 2022 11:17:01 GMT
Server: nginx/1.22.0
edge-cache-tag: origin.stuartmacleod.net-2022-06-13T11-17-01.933Z
not-edge-cache-tag: origin.stuartmacleod.net-2022-06-13T11-17-01.933Z
testheader: 1
testheader: 2
testheader: 3
## Debug output
PS C:\code> http https://origin.stuartmacleod.net/httpie --debug
HTTPie 3.2.1
Requests 2.27.1
Pygments 2.11.2
Python 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)]
C:\code\Python\Python38\python.exe
Windows 10
<Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x000001EC361D6310>,
'args': Namespace(),
'as_silent': <function Environment.as_silent at 0x000001EC361D61F0>,
'colors': 256,
'config': {'default_options': ['--style=xcode']},
'config_dir': WindowsPath('C:/Users/Stuart/AppData/Roaming/httpie'),
'devnull': <property object at 0x000001EC3616B4A0>,
'is_windows': True,
'log_error': <function Environment.log_error at 0x000001EC361D6280>,
'program_name': 'http',
'quiet': 0,
'rich_console': <functools.cached_property object at 0x000001EC36168E20>,
'rich_error_console': <functools.cached_property object at 0x000001EC361764C0>,
'show_displays': True,
'stderr': <colorama.ansitowin32.StreamWrapper object at 0x000001EC36148E50>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <colorama.ansitowin32.StreamWrapper object at 0x000001EC3612FFA0>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': RequestJSONDataDict(),
'headers': <HTTPHeadersDict('User-Agent': b'HTTPie/3.2.1')>,
'method': 'get',
'params': <generator object MultiValueOrderedDict.items at 0x000001EC36543200>,
'url': 'https://origin.stuartmacleod.net/httpie'})
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 2
Content-Type: text/plain
Date: Mon, 13 Jun 2022 11:18:15 GMT
Server: nginx/1.22.0
edge-cache-tag: origin.stuartmacleod.net-2022-06-13T11-18-15.040Z
not-edge-cache-tag: origin.stuartmacleod.net-2022-06-13T11-18-15.040Z
testheader: 1, 2, 3
ok
## Additional information, screenshots, or code examples
This behaviour was changed after version 2.4.0. With 2.4.0 and earlier it outputs as expected, so I assume this was a deliberate move. If so, is there an option to use the previous output behaviour I could add to my config.json?
| open | 2022-06-13T11:27:04Z | 2022-06-13T11:27:04Z | https://github.com/httpie/cli/issues/1413 | [
"bug",
"new"
] | stuartio | 0 |
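The combining behavior described in the issue above matches what HTTP semantics permit: repeated field lines may be merged into one comma-separated value, which is how the single-value header mapping exposed by `requests` presents them. A toy illustration of that merge rule (this is not HTTPie's actual code):

```python
# Merge repeated header lines into a single comma-separated value,
# the way a case-insensitive single-value header mapping would.
def combine_headers(pairs):
    combined = {}
    for name, value in pairs:
        key = name.lower()
        combined[key] = f"{combined[key]}, {value}" if key in combined else value
    return combined

print(combine_headers([("testheader", "1"), ("testheader", "2"), ("testheader", "3")]))
# {'testheader': '1, 2, 3'}
```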
deezer/spleeter | tensorflow | 125 | [Bug] Minor bug in result path | <!-- PLEASE READ THIS CAREFULLY :
- Any issue which does not respect following template or lack of information will be considered as invalid and automatically closed
- First check FAQ from wiki to see if your problem is not already known
-->
## Description
There is a small issue in the Google Colab Notebook. The path given for listening to the results is not the correct one. The output path given is `audio_example/output`. In fact, the real output directory is `audio_example.mp3/output`. I also found this little mistake in the README.
## Step to reproduce
Just use the Google Colab Notebook
## Output

| closed | 2019-11-22T08:08:20Z | 2019-11-22T12:02:23Z | https://github.com/deezer/spleeter/issues/125 | [
"bug",
"invalid"
] | rguilloteau | 3 |
axnsan12/drf-yasg | django | 136 | Custom SerializerInspector is not called | I am using a `Serializer` for request body initially, but I also want to add couple of more parameter into it, hence I wrote custom serializer inspector:
```python
from drf_yasg.inspectors import SerializerInspector
class BaseFieldSerializerInspector(SerializerInspector):
def get_request_parameters(self, serializer, in_):
parameters = super().get_request_parameters(serializer, in_)
# add extra parameters here
return parameters
```
And used it like so:
```python
@swagger_auto_schema(
operation_description='Updates a field.',
manual_parameters=[FIELD_ID_PATH_PARAMETER],
request_body=BaseFieldSerializer,
responses={
status.HTTP_200_OK: openapi.Response(
'Field updated successfully.',
BaseFieldSerializer
)
},
field_inspectors=[BaseFieldSerializerInspector]
)
```
But `BaseFieldSerializerInspector` is never called. Kindly help on this please. Thanks! | closed | 2018-06-01T17:58:28Z | 2018-08-06T13:52:03Z | https://github.com/axnsan12/drf-yasg/issues/136 | [] | intellisense | 8 |
scanapi/scanapi | rest-api | 174 | Remove console report | Remove console report
Decided at https://github.com/scanapi/scanapi/issues/132 | closed | 2020-06-10T13:17:56Z | 2020-06-16T20:15:37Z | https://github.com/scanapi/scanapi/issues/174 | [] | camilamaia | 0 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 738 | Error when training hrnet on multiple GPUs | The following error appeared as soon as training finished the first epoch; what could be causing it?
IoU metric: keypoints
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.358
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.679
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.335
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.364
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.361
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.451
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.755
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.457
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.426
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.487
Traceback (most recent call last):
File "/media/data1/sangrg/HRNet/train_multi_GPU.py", line 272, in <module>
main(args)
File "/media/data1/sangrg/HRNet/train_multi_GPU.py", line 168, in main
if args.rank in [-1, 0]:
AttributeError: 'Namespace' object has no attribute 'rank' | open | 2023-05-22T08:44:30Z | 2024-10-27T00:10:06Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/738 | [] | srg1234567 | 1 |
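The `AttributeError` in the traceback above means `args` never received a `rank` attribute (in multi-GPU runs it is normally set by the distributed-init helper before training starts). A defensive sketch of the failing check; the `-1` fallback is an assumption mirroring the single-process convention already used in `if args.rank in [-1, 0]`:

```python
import argparse

args = argparse.Namespace()  # stands in for the parsed training args

# Guard against `rank` being absent when distributed init did not run.
rank = getattr(args, "rank", -1)
if rank in (-1, 0):
    print("main process: safe to log/checkpoint")
```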
python-gino/gino | asyncio | 707 | GINO use mysql | * GINO version:1
* Python version:3.8
code:
```python
db.set_bind('mysql+aiomysql://root:123456@192.168.101.43:3306/future')
```
raises exc - sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:mysql.aiomysql
Does GINO not support MySQL? If MySQL is supported, how should the driver parameters be filled in?
| closed | 2020-07-14T06:29:32Z | 2020-07-14T17:56:27Z | https://github.com/python-gino/gino/issues/707 | [
"duplicate"
] | mylot-python | 1 |
tqdm/tqdm | jupyter | 1363 | Triple (and more) nested progress bars only display the 2 innermost loops | - [x] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
Versions
---
Python version 3.8.5
tqdm version 4.64.0
Description
---
When nesting more than two progress bars with the default `leave=True` option, only the two innermost progress bars remain on the screen after a number of iterations.
Example
---
Here is a minimal working example that demonstrates the issue I am running into:
``` python
from tqdm import tqdm
import time
for a in tqdm(range(2), desc='a', leave=True):
for b in tqdm(range(3), desc=' b', leave=True):
for c in tqdm(range(4), desc=' c', leave=True):
time.sleep(0.5)
```
### Expected Output
During the iteration of `a=1`, `b=1`, `c=2`, I would expect the output to look something like:
```
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
b: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:06<00:00, 2.01s/it]
a: 50%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 1/2 [00:06<00:06, 6.02s/it]
b: 33%|โโโโโโโโโโโโโโโโโโโ | 1/3 [00:02<00:04, 2.01s/it]
c: 50%|โโโโโโโโโโโโโโโโโโโโโโโโโโโ | 2/4 [00:01<00:01, 2.00it/s]
```
### Actual Output
However, actually running the program yields the following:
```
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
b: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:06<00:00, 2.01s/it]
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
b: 0%| | 0/3 [00:00<?, ?it/s]
b: 33%|โโโโโโโโโโโโโโโโโโโ | 1/3 [00:02<00:04, 2.01s/it]
c: 50%|โโโโโโโโโโโโโโโโโโโโโโโโโโโ | 2/4 [00:01<00:01, 2.00it/s]
```
A screen recording of the actual output can be found [here](https://user-images.githubusercontent.com/11098209/188115009-83290837-2bde-4479-96cb-a41fbd7aa1a9.mp4).
In the video, you can see that after the first `a=0` loop finishes, the first iteration of `b=0`, `c=0` to `c=3` iterations displays correctly.
```
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
c: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4/4 [00:02<00:00, 2.00it/s]
b: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:06<00:00, 2.01s/it]
a: 50%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 1/2 [00:06<00:06, 6.02s/it]
b: 0%| | 0/3 [00:00<?, ?it/s]
c: 50%|โโโโโโโโโโโโโโโโโโโโโโโโโโโ | 2/4 [00:01<00:01, 2.00it/s]
```
However, as soon as `b` increments by 1, then the `a` progress bar seems like it is overwritten.
Behavior in >3 Nested bars
---
I also experimented with adding a fourth nested loop
```python
from tqdm import tqdm
import time
for a in tqdm(range(2), desc='a', leave=True):
for b in tqdm(range(3), desc=' b', leave=True):
for c in tqdm(range(4), desc=' c', leave=True):
for d in tqdm(range(5), desc=' d', leave=True):
time.sleep(0.5)
```
Result at `a=0`, `b=0`, `c=2`, and `d=3`:
```
d: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 5/5 [00:02<00:00, 2.00it/s]
d: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 5/5 [00:02<00:00, 2.00it/s]
c: 0%| | 0/4 [00:00<?, ?it/s]
c: 25%|โโโโโโโโโโโโโโ | 1/4 [00:02<00:07, 2.51s/it]
c: 50%|โโโโโโโโโโโโโโโโโโโโโโโโโโโ | 2/4 [00:05<00:05, 2.51s/it]
d: 60%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 3/5 [00:01<00:01, 2.00it/s]
```
A screen recording of the full output can be found [here](https://user-images.githubusercontent.com/11098209/188114980-7be1070b-e687-4a35-a256-d4c30f9c11b8.mp4).
It seems like only the 2 innermost loops are kept, with `a` and `b` disappearing until they are incremented in a future loop.
### Speculation
It is interesting that during the first iterations `(a=0, b=0, c=0, d=0-5)`, all four pbars are displayed as expected.
After the `d` loop completes once, then for `(a=0, b=0, c=1, d=0-5)`, the `a` pbar is missing.
Then once the `d` loop completes a second time, then `(a=0, b=0, c=2, d=0-5)` onwards, both `a` and `b` pbars are missing.
This may indicate that tqdm may be automatically overwriting the top position when `leave=True` without considering that there may be other loop information in those upper positions.
Including the `position` argument
---
I have also tried the above examples with manually specifying the position of the bars, but that yielded the same issue as we have discussed above.
``` python
from tqdm import tqdm
import time
for a in tqdm(range(2), desc='a', leave=True, position=0):
for b in tqdm(range(3), desc=' b', leave=True, position=1):
for c in tqdm(range(4), desc=' c', leave=True, position=2):
time.sleep(0.5)
```
Thank you for taking the time to read through my first issue and definitely let me know if there's anything I'm missing.
| open | 2022-09-02T09:59:38Z | 2022-09-02T10:00:19Z | https://github.com/tqdm/tqdm/issues/1363 | [] | scchow | 0 |
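One commonly suggested way to sidestep the repainting problem described in the issue above is to let only the outermost bar persist; whether this is acceptable depends on how much history you want kept on screen. A sketch:

```python
import time

from tqdm import tqdm

# Inner bars clear themselves on completion (leave=False), so only the
# outermost bar's final state stays on screen when everything finishes.
for a in tqdm(range(2), desc='a', leave=True):
    for b in tqdm(range(3), desc=' b', leave=False):
        for c in tqdm(range(4), desc='  c', leave=False):
            time.sleep(0.01)
```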
plotly/dash | data-science | 2,467 | [BUG] allow_duplicate not working with clientside_callback |
Thanks for allow_duplicate, it's a very nice addition.
Everything works well, except for **clientside_callback**. Was clientside_callback supposed to be supported? When adding allow_duplicate, an error occurs:

If you specify several outputs (with and without allow_duplicate), then even though there will be an error, the value will be updated for the output without allow_duplicate, but not for the output with allow_duplicate. Example:

Tell me, am I doing something wrong? Thank you very much.
Full code sample (with one callback, but I think this is enough to show the error):
```python
import dash
from dash import Dash, html, Input, Output
app = Dash(__name__)
app.layout = html.Div(
children=[
html.Div(
children=['Last pressed button: ', html.Span(id='span', children='empty')]
),
html.Button(
id='button-right',
children='right'
)
]
)
# NOT WORKING FOR "SPAN", BUT WORKING FOR BUTTON-RIGHT
dash.clientside_callback(
"""
function(n_clicks){
return ["right", `right ${n_clicks}`];
}
""",
[
Output('span', 'children', allow_duplicate=True),
Output('button-right', 'children')
],
Input('button-right', 'n_clicks'),
prevent_initial_call=True
)
# WORKING EXAMPLE TO UNDERSTAND HOW IT SHOULD BE (THE ONLY DIFFERENCE IS THAT NO ALLOW_DUPLICATE)
# dash.clientside_callback(
# """
# function(n_clicks){
# return ["right", `right ${n_clicks}`];
# }
# """,
# [
# Output('span', 'children'),
# Output('button-right', 'children')
# ],
# Input('button-right', 'n_clicks'),
# prevent_initial_call=True
# )
if __name__ == '__main__':
app.run_server(debug=True, port=2414)
```
_Originally posted by @FatHare in https://github.com/plotly/dash/issues/2414#issuecomment-1473088764_
| closed | 2023-03-17T19:00:31Z | 2023-04-07T14:30:34Z | https://github.com/plotly/dash/issues/2467 | [] | T4rk1n | 1 |
jupyter-book/jupyter-book | jupyter | 2,323 | Issue on page /Section4/section4.html | Test issue here. | closed | 2025-02-20T03:22:25Z | 2025-02-20T03:22:48Z | https://github.com/jupyter-book/jupyter-book/issues/2323 | [] | Blake-Head | 0 |
deepfakes/faceswap | machine-learning | 654 | Is it possible to swap a whole movie? | I watched Captain Marvel today, the moment I saw Stan Lee(RIP), I wonder if it's possible to swap the whole movie.
Maybe even let the audience choose characters for the movie. | closed | 2019-03-08T15:35:17Z | 2019-03-08T16:47:38Z | https://github.com/deepfakes/faceswap/issues/654 | [] | FrontMage | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,157 | Training 2-3 models, suggestions? | Hi,
Great work with Real Time Voice Cloning!
I already got _some_ experience training the models. I fine tuned the model with one of the speakers from dev-clean LibriSpeech and successfully got noticeable improvement in the output quality.
Now I'm going to train two or three models:
**1. Everything from scratch using LibriTTS dataset.**
I know blue-fish (Now @ghost) and @mbdash tried to train a model using LibriTTS in #449 but the output did not
improve. They were still trying and moved the discussion to a slack channel so I don't know what the end result was. If
anyone knows what happened after they trained the new encoder and everything and can share the result (and even better,
the model) will be much appreciated!
After this model is trained, I might also fine tune it for my voice. (same as point 2. in this post)
**2. Fine tune the pre trained model on 1 hour of my own voice.**
I know it has been noted by blue-fish in #437 that fine tuning on your voice with 0.2hr of data and training for a few
thousand steps improves the quality of output for your voice. But I wonder what will happen if instead of 0.2hr, I use a whole
hour (maybe more) and train it for more than just a few thousand, maybe in the order of 10 thousands.
**3. Maybe also a model using the pretrained model and train it for a few additional 100-200k steps using more data from
Mozilla Common Voice or something else.**
Do you think this will be useful?
In #126, @sberryman trained all three models from scratch but he was not happy with the synthesizer and vocoder that he
trained. I'm not sure what it means because I don't have any experience with AI but he says that the synthesizer did not
align well? Also, he said the encoder was pretty good though. He has uploaded his models and the link still works, maybe I
can try something with the models he trained? He has a better encoder.
I don't have a very fast memory (I'll be using external hard drives)
I have NVIDIA Quadro P2000 GPUs
But, to train each model, I'll use separate PCs with same specs so they all train in parallel.
Any suggestions on playing around with hparams, different ideas for training, or anything else?
All suggestions are welcomed and appreciated.
Also, if you have any ideas on training, lemme know. Like let me know what to train (one of the pretrained/train from scratch), what dataset to use, what hparams, and how much to train, and I'll do it. I have plenty of time.
Thanks! | open | 2023-01-18T14:11:49Z | 2023-03-15T07:18:00Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1157 | [] | prakharpbuf | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 820 | pix2pix single datasetmode error | Hi, thank you for this great project.
While trying to test the pix2pix model with a single dataset, I ran into a key error that I could not solve.
Please help me figure out how to solve this problem; I have no idea.

| open | 2019-10-29T07:22:04Z | 2020-05-24T12:17:20Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/820 | [] | Hyeonjeong2385 | 4 |
tableau/server-client-python | rest-api | 773 | Downloading a workbook from revision history by its revision number | According to [tableau help](https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref_workbooksviews.htm#download_workbook_revision) it is possible to download a specific version of a workbook from revision history by its revision number.
Can we add this method here so we can download a particular version of a workbook from history? | open | 2021-01-13T15:22:48Z | 2021-01-22T21:37:13Z | https://github.com/tableau/server-client-python/issues/773 | [
"enhancement"
] | mishamalkovich22 | 0 |
browser-use/browser-use | python | 1,008 | Object of type NotGiven is not JSON serializable | ### Bug Description

### Reproduction Steps
```python
import asyncio
from langchain_openai import ChatOpenAI
from browser_use import Agent
from pydantic import SecretStr

api_key = "my_deepseek_apikey"

async def main():
    agent = Agent(
        task="Compare the price of gpt-4o and DeepSeek-V3",
        llm=ChatOpenAI(base_url='https://api.deepseek.com/v1', model='deepseek-chat', api_key=SecretStr(api_key)),
    )
    await agent.run()

asyncio.run(main())
```
### Code Sample
```python
import asyncio
from langchain_openai import ChatOpenAI
from browser_use import Agent
from pydantic import SecretStr
api_key = "my_deepseek_apikey"
async def main():
agent = Agent(
task="Compare the price of gpt-4o and DeepSeek-V3",
llm=ChatOpenAI(base_url='https://api.deepseek.com/v1', model='deepseek-chat', api_key=SecretStr(api_key)),
)
await agent.run()
asyncio.run(main())
```
### Version
0.1.40
### LLM Model
DeepSeek Coder
### Operating System
windows11
### Relevant Log Output
```shell
``` | closed | 2025-03-12T14:13:23Z | 2025-03-12T15:04:24Z | https://github.com/browser-use/browser-use/issues/1008 | [
"bug"
] | tanght1994 | 0 |
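The error in the report above comes from `json.dumps` meeting a sentinel object (openai's `NotGiven`) that it cannot serialize. An illustrative stdlib-only sketch of the failure mode and of the `default=` escape hatch; the `_NotGiven` class here is a stand-in, not the real sentinel:

```python
import json

class _NotGiven:
    """Stand-in for an unserializable sentinel like openai's NotGiven."""

payload = {"model": "deepseek-chat", "max_tokens": _NotGiven()}

def drop_unserializable(obj):
    # Map unknown objects to null instead of raising TypeError.
    return None

print(json.dumps(payload, default=drop_unserializable))
# {"model": "deepseek-chat", "max_tokens": null}
```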
wkentaro/labelme | computer-vision | 541 | I find a bug when I edit polygons | In edit polygons mode, when I move a point to the edge (especially the right and bottom edges), the window closes. If I move the bounding box, it works well. | closed | 2020-01-08T06:51:12Z | 2020-01-27T02:05:00Z | https://github.com/wkentaro/labelme/issues/541 | [] | zemofreedom | 2 |
akfamily/akshare | data-science | 5090 | fund_etf_hist_em interface exception | The error message is as follows:
```
TimeoutError                              Traceback (most recent call last)
File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\connection.py:196, in HTTPConnection._new_conn(self)
    195 try:
--> 196     sock = connection.create_connection(
    197         (self._dns_host, self.port),
    198         self.timeout,
    199         source_address=self.source_address,
    200         socket_options=self.socket_options,
    201     )
    202 except socket.gaierror as e:

File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\util\connection.py:85, in create_connection(address, timeout, source_address, socket_options)
     84 try:
---> 85     raise err
     86 finally:
     87     # Break explicitly a reference cycle

File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\util\connection.py:73, in create_connection(address, timeout, source_address, socket_options)
     72     sock.bind(source_address)
---> 73 sock.connect(sa)
     74 # Break explicitly a reference cycle

TimeoutError: timed out

The above exception was the direct cause of the following exception:

ConnectTimeoutError                       Traceback (most recent call last)
File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\connectionpool.py:789, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
    788 # Make the request on the HTTPConnection object
--> 789 response = self._make_request(
    790     conn,
    791     method,
    792     url,
    793     timeout=timeout_obj,
    794     body=body,
    795     headers=headers,
    796     chunked=chunked,
    797     retries=retries,
    798     response_conn=response_conn,
    799     preload_content=preload_content,
    800     decode_content=decode_content,
    801     **response_kw,
    802 )
    804 # Everything went great!

File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\connectionpool.py:490, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
    489     new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
--> 490 raise new_e
    492 # conn.request() calls http.client.*.request, not the method in
    493 # urllib3.request. It also calls makefile (recv) on the socket.

File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\connectionpool.py:466, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
    465 try:
---> 466     self._validate_conn(conn)
    467 except (SocketTimeout, BaseSSLError) as e:

File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\connectionpool.py:1095, in HTTPSConnectionPool._validate_conn(self, conn)
   1094 if conn.is_closed:
-> 1095     conn.connect()
   1097 # TODO revise this, see https://github.com/urllib3/urllib3/issues/2791

File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\connection.py:615, in HTTPSConnection.connect(self)
    614 sock: socket.socket | ssl.SSLSocket
--> 615 self.sock = sock = self._new_conn()
    616 server_hostname: str = self.host

File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\connection.py:205, in HTTPConnection._new_conn(self)
```
[204](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connection.py:204) except SocketTimeout as e:
--> [205](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connection.py:205) raise ConnectTimeoutError(
[206](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connection.py:206) self,
[207](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connection.py:207) f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
[208](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connection.py:208) ) from e
[210](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connection.py:210) except OSError as e:
ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x000001A8CD1E27D0>, 'Connection to push2his.eastmoney.com timed out. (connect timeout=15)')
The above exception was the direct cause of the following exception:
MaxRetryError Traceback (most recent call last)
File c:\Users\WYL\miniconda3\Lib\site-packages\requests\adapters.py:589, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
[588](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:588) try:
--> [589](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:589) resp = conn.urlopen(
[590](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:590) method=request.method,
[591](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:591) url=url,
[592](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:592) body=request.body,
[593](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:593) headers=request.headers,
[594](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:594) redirect=False,
[595](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:595) assert_same_host=False,
[596](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:596) preload_content=False,
[597](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:597) decode_content=False,
[598](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:598) retries=self.max_retries,
[599](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:599) timeout=timeout,
[600](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:600) chunked=chunked,
[601](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:601) )
[603](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:603) except (ProtocolError, OSError) as err:
File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\connectionpool.py:843, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
[841](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connectionpool.py:841) new_e = ProtocolError("Connection aborted.", new_e)
--> [843](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connectionpool.py:843) retries = retries.increment(
[844](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connectionpool.py:844) method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[845](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connectionpool.py:845) )
[846](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/connectionpool.py:846) retries.sleep()
File c:\Users\WYL\miniconda3\Lib\site-packages\urllib3\util\retry.py:519, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
[518](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/util/retry.py:518) reason = error or ResponseError(cause)
--> [519](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/util/retry.py:519) raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
[521](file:///C:/Users/WYL/miniconda3/Lib/site-packages/urllib3/util/retry.py:521) log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPSConnectionPool(host='push2his.eastmoney.com', port=443): Max retries exceeded with url: /api/qt/stock/kline/get?fields1=f1%2Cf2%2Cf3%2Cf4%2Cf5%2Cf6&fields2=f51%2Cf52%2Cf53%2Cf54%2Cf55%2Cf56%2Cf57%2Cf58%2Cf59%2Cf60%2Cf61%2Cf116&ut=7eea3edcaed734bea9cbfc24409ed989&klt=101&fqt=2&beg=20240701&end=20240731&_=1623766962675&secid=1.560030 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001A8CD1E27D0>, 'Connection to push2his.eastmoney.com timed out. (connect timeout=15)'))
During handling of the above exception, another exception occurred:
ConnectTimeout Traceback (most recent call last)
Cell In[4], [line 14](vscode-notebook-cell:?execution_count=4&line=14)
[12](vscode-notebook-cell:?execution_count=4&line=12) fund_hist_data = pd.DataFrame()
[13](vscode-notebook-cell:?execution_count=4&line=13) for fund_code, fund_name in fund_pool.items():
---> [14](vscode-notebook-cell:?execution_count=4&line=14) fund_hist_df = ak.fund_etf_hist_em(symbol=fund_code,
[15](vscode-notebook-cell:?execution_count=4&line=15) period="daily",
[16](vscode-notebook-cell:?execution_count=4&line=16) start_date=start_dt,
[17](vscode-notebook-cell:?execution_count=4&line=17) end_date=current_dt,
[18](vscode-notebook-cell:?execution_count=4&line=18) adjust="hfq")
[19](vscode-notebook-cell:?execution_count=4&line=19) fund_hist_df['ไปฃ็ '] = fund_code
[20](vscode-notebook-cell:?execution_count=4&line=20) fund_hist_df['ๅ็งฐ'] = fund_name
File c:\Users\WYL\miniconda3\Lib\site-packages\akshare\fund\fund_etf_em.py:265, in fund_etf_hist_em(symbol, period, start_date, end_date, adjust)
[263](file:///C:/Users/WYL/miniconda3/Lib/site-packages/akshare/fund/fund_etf_em.py:263) market_id = code_id_dict[symbol]
[264](file:///C:/Users/WYL/miniconda3/Lib/site-packages/akshare/fund/fund_etf_em.py:264) params.update({"secid": f"{market_id}.{symbol}"})
--> [265](file:///C:/Users/WYL/miniconda3/Lib/site-packages/akshare/fund/fund_etf_em.py:265) r = requests.get(url, timeout=15, params=params)
[266](file:///C:/Users/WYL/miniconda3/Lib/site-packages/akshare/fund/fund_etf_em.py:266) data_json = r.json()
[267](file:///C:/Users/WYL/miniconda3/Lib/site-packages/akshare/fund/fund_etf_em.py:267) except KeyError:
File c:\Users\WYL\miniconda3\Lib\site-packages\requests\api.py:73, in get(url, params, **kwargs)
[62](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:62) def get(url, params=None, **kwargs):
[63](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:63) r"""Sends a GET request.
[64](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:64)
[65](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:65) :param url: URL for the new :class:`Request` object.
(...)
[70](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:70) :rtype: requests.Response
[71](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:71) """
---> [73](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:73) return request("get", url, params=params, **kwargs)
File c:\Users\WYL\miniconda3\Lib\site-packages\requests\api.py:59, in request(method, url, **kwargs)
[55](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:55) # By using the 'with' statement we are sure the session is closed, thus we
[56](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:56) # avoid leaving sockets open which can trigger a ResourceWarning in some
[57](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:57) # cases, and look like a memory leak in others.
[58](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:58) with sessions.Session() as session:
---> [59](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/api.py:59) return session.request(method=method, url=url, **kwargs)
File c:\Users\WYL\miniconda3\Lib\site-packages\requests\sessions.py:589, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
[584](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:584) send_kwargs = {
[585](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:585) "timeout": timeout,
[586](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:586) "allow_redirects": allow_redirects,
[587](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:587) }
[588](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:588) send_kwargs.update(settings)
--> [589](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:589) resp = self.send(prep, **send_kwargs)
[591](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:591) return resp
File c:\Users\WYL\miniconda3\Lib\site-packages\requests\sessions.py:703, in Session.send(self, request, **kwargs)
[700](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:700) start = preferred_clock()
[702](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:702) # Send the request
--> [703](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:703) r = adapter.send(request, **kwargs)
[705](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:705) # Total elapsed time of the request (approximately)
[706](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/sessions.py:706) elapsed = preferred_clock() - start
File c:\Users\WYL\miniconda3\Lib\site-packages\requests\adapters.py:610, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
[607](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:607) if isinstance(e.reason, ConnectTimeoutError):
[608](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:608) # TODO: Remove this in 3.0.0: see #2811
[609](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:609) if not isinstance(e.reason, NewConnectionError):
--> [610](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:610) raise ConnectTimeout(e, request=request)
[612](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:612) if isinstance(e.reason, ResponseError):
[613](file:///C:/Users/WYL/miniconda3/Lib/site-packages/requests/adapters.py:613) raise RetryError(e, request=request)
ConnectTimeout: HTTPSConnectionPool(host='push2his.eastmoney.com', port=443): Max retries exceeded with url: /api/qt/stock/kline/get?fields1=f1%2Cf2%2Cf3%2Cf4%2Cf5%2Cf6&fields2=f51%2Cf52%2Cf53%2Cf54%2Cf55%2Cf56%2Cf57%2Cf58%2Cf59%2Cf60%2Cf61%2Cf116&ut=7eea3edcaed734bea9cbfc24409ed989&klt=101&fqt=2&beg=20240701&end=20240731&_=1623766962675&secid=1.560030 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001A8CD1E27D0>, 'Connection to push2his.eastmoney.com timed out. (connect timeout=15)')) | closed | 2024-07-31T03:18:46Z | 2024-07-31T08:18:05Z | https://github.com/akfamily/akshare/issues/5090 | [
"bug"
] | wuyulong0931 | 1 |
saleor/saleor | graphql | 16,722 | Bug: Category-specific Promocode Not Working in Checkout | ### What are you trying to achieve?
We encountered an issue where a promocode created for a specific category is not working during checkout. When the promocode is applied, the system throws an error instead of applying the discount.
### Steps to reproduce the problem
1. Create a promocode and associate it with a specific product category.
2. Add products from the specified category to the cart.
3. Proceed to checkout.
4. Apply the promocode.
5. The system throws an error instead of applying the discount.
### What did you expect to happen?
The promocode should apply successfully to items from the specified category during checkout, reducing the total amount as per the discount rules.
### Logs
This is the response we get when we apply the promocode, even though we have selected products from the specified category:
```json
{
  "data": {
    "checkoutAddPromoCode": {
      "checkout": null,
      "errors": [
        {
          "code": "VOUCHER_NOT_APPLICABLE",
          "field": "promoCode",
          "message": "Voucher is not applicable to this checkout.",
          "variants": null,
          "lines": null,
          "__typename": "CheckoutError"
        }
      ],
      "__typename": "CheckoutAddPromoCode"
    }
  }
}
```
### Environment
Saleor version: 3.18.22
| closed | 2024-09-18T15:09:04Z | 2024-10-21T07:32:45Z | https://github.com/saleor/saleor/issues/16722 | [
"bug",
"triage"
] | supreetha789 | 2 |
flasgger/flasgger | api | 489 | Swagger config | I want to change only the URL path for the Swagger UI.
When I try to pass a minimal config, I get an error ([cf. stack trace](#stack-trace)). For a config to be accepted, `specs`, `static_url_path`, and `headers` are all mandatory. Could you add an enhancement that falls back to basic defaults when `specs` or `static_url_path` are missing?
# Description
## Code
```python
config = {
"specs_route": "/swagger/"
}
swagger = Swagger(app, config=config)
```
## Stack Trace
```python
Traceback (most recent call last):
File "/home/matthieu/PycharmProjects/lmfr-datatech-mobilitytech-clean-ws1/apps.py", line 18, in <module>
swagger = Swagger(app, config=config)
File "/home/matthieu/PycharmProjects/venv/lmfr-datatech-mobilitytech-clean-ws1/lib/python3.8/site-packages/flasgger/base.py", line 217, in __init__
self.init_app(app)
File "/home/matthieu/PycharmProjects/venv/lmfr-datatech-mobilitytech-clean-ws1/lib/python3.8/site-packages/flasgger/base.py", line 230, in init_app
self.register_views(app)
File "/home/matthieu/PycharmProjects/venv/lmfr-datatech-mobilitytech-clean-ws1/lib/python3.8/site-packages/flasgger/base.py", line 601, in register_views
for spec in self.config['specs']:
KeyError: 'specs'
```
# Possible enhancement
```python
basic_specs = [
{
"endpoint": 'apispec_1',
"route": '/apispec_1.json',
"rule_filter": lambda rule: True, # all in
"model_filter": lambda tag: True, # all in
}
]
for spec in self.config.get('specs', basic_specs):
self.endpoints.append(spec['endpoint'])
blueprint.add_url_rule(
spec['route'],
spec['endpoint'],
view_func=wrap_view(APISpecsView.as_view(
spec['endpoint'],
loader=partial(
self.get_apispecs, endpoint=spec['endpoint'])
))
)
app.register_blueprint(blueprint)
```
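In the meantime, a workaround is to overlay the minimal overrides on a full default config before passing it to `Swagger`. The defaults below are a hypothetical stand-in for whatever the library ships (recent flasgger versions expose something like `Swagger.DEFAULT_CONFIG`, if I read the source correctly):

```python
# Stand-in defaults (hypothetical subset; in real code you would copy
# flasgger's own defaults, e.g. Swagger.DEFAULT_CONFIG where available).
DEFAULT_CONFIG = {
    "headers": [],
    "specs": [
        {
            "endpoint": "apispec_1",
            "route": "/apispec_1.json",
            "rule_filter": lambda rule: True,  # all in
            "model_filter": lambda tag: True,  # all in
        }
    ],
    "static_url_path": "/flasgger_static",
    "specs_route": "/apidocs/",
}

def merged_config(overrides, defaults=DEFAULT_CONFIG):
    """Overlay the user's overrides on top of the full default config."""
    config = dict(defaults)  # shallow copy so the defaults stay untouched
    config.update(overrides)
    return config

config = merged_config({"specs_route": "/swagger/"})
# config now contains "specs", "static_url_path", etc., so
# Swagger(app, config=config) no longer raises KeyError: 'specs'.
```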
# Information
- OS VERSION: `Ubuntu 18.04.5 LTS`
- PYTHON VERSION: `3.8.11`
- FLASGGER VERSION: `0.9.5` | closed | 2021-08-17T15:18:55Z | 2022-06-10T12:32:31Z | https://github.com/flasgger/flasgger/issues/489 | [] | tyki6 | 1 |
iterative/dvc | data-science | 10,237 | symlinks are resolved when added |
## Description
Our projects use symlinks to track files output by one task that are inputs to other tasks. The symlinks are part of the project's documentation because they show the dependency structure among tasks.
`dvc` seems to resolve symlinks. Is that always true or am I missing an option somewhere?
### Reproduce
```
#!/bin/bash
# assume the existence of a remote at /var/repos/dvc/test w appropriate perms
DVCREPO=/var/repos/dvc/test
ls -la $DVCREPO
echo -e "\nempty remote, ready for data"
mkdir dir1
echo "1234567890" >dir1/data1.csv
echo "abcdefghijk" >dir1/data2.csv
ls -l dir1
echo "dir1 created with data files\n"
mkdir dir2
cd dir2
ln -sf ../dir1/data1.csv .
ln -sf ../dir1/data2.csv .
cd ..
ls -l dir2
echo -e "dir2 created with symlinks to data files\n"
echo "difference between files"
diff -qr --no-dereference dir1 dir2
dvc init --no-scm -q
dvc remote add -f myremote $DVCREPO
dvc remote default myremote
cat .dvc/config
dvc add dir1 dir2
dvc commit
dvc push
echo "data pushed to remote repo"
ls -lR $DVCREPO
cat $DVCREPO/files/md5/7e/47cdf27c6023e4f6bfb6cb28b26de7.dir
rm -r dir1 dir2
echo "after removing dir1 and dir2"
ls -la .
echo -e "\npulling from repo"
dvc pull
ls -l dir1
ls -l dir2
diff -qr --no-dereference dir1 dir2
set +x
# done
```
**output**
```
../dvc-test.sh
total 8
drwxrws---+ 2 pball test 4096 Jan 13 01:55 .
drwxrws---+ 4 pball svn 4096 Jan 13 01:55 ..
empty remote, ready for data
total 8
-rw-r--r-- 1 pball svn 11 Jan 13 01:55 data1.csv
-rw-r--r-- 1 pball svn 12 Jan 13 01:55 data2.csv
dir1 created with data files\n
total 0
lrwxrwxrwx 1 pball svn 17 Jan 13 01:55 data1.csv -> ../dir1/data1.csv
lrwxrwxrwx 1 pball svn 17 Jan 13 01:55 data2.csv -> ../dir1/data2.csv
dir2 created with symlinks to data files
difference between files
File dir1/data1.csv is a regular file while file dir2/data1.csv is a symbolic link
File dir1/data2.csv is a regular file while file dir2/data2.csv is a symbolic link
[core]
no_scm = True
remote = myremote
['remote "myremote"']
url = /var/repos/dvc/test
100% Adding...|████████████████████████████████████████|2/2 [00:00, 38.17file/s]
3 files pushed
data pushed to remote repo
/var/repos/dvc/test:
total 4
drwxr-xr-x+ 3 pball test 4096 Jan 13 01:55 files
/var/repos/dvc/test/files:
total 4
drwxr-xr-x+ 5 pball svn 4096 Jan 13 01:55 md5
/var/repos/dvc/test/files/md5:
total 12
drwxr-xr-x+ 2 pball svn 4096 Jan 13 01:55 7c
drwxr-xr-x+ 2 pball svn 4096 Jan 13 01:55 7e
drwxr-xr-x+ 2 pball svn 4096 Jan 13 01:55 e2
/var/repos/dvc/test/files/md5/7c:
total 4
-r--r--r-- 1 pball svn 11 Jan 13 01:55 12772809c1c0c3deda6103b10fdfa0
/var/repos/dvc/test/files/md5/7e:
total 4
-r--r--r-- 1 pball svn 138 Jan 13 01:55 47cdf27c6023e4f6bfb6cb28b26de7.dir
/var/repos/dvc/test/files/md5/e2:
total 4
-r--r--r-- 1 pball svn 12 Jan 13 01:55 71dc47fa80ddc9e6590042ad9ed2b7
[{"md5": "7c12772809c1c0c3deda6103b10fdfa0", "relpath": "data1.csv"}, {"md5": "e271dc47fa80ddc9e6590042ad9ed2b7", "relpath": "data2.csv"}]after removing dir1 and dir2
total 24
drwxr-xr-x 3 pball svn 4096 Jan 13 01:55 .
drwxr-xr-x 3 pball svn 4096 Jan 13 01:49 ..
-rw-r--r-- 1 pball svn 98 Jan 13 01:55 dir1.dvc
-rw-r--r-- 1 pball svn 98 Jan 13 01:55 dir2.dvc
drwxr-xr-x 4 pball svn 4096 Jan 13 01:55 .dvc
-rw-r--r-- 1 pball svn 139 Jan 13 01:55 .dvcignore
pulling from repo
Collecting |6.00 [00:00, 191entry/s]
Fetching
Building workspace index |0.00 [00:00, ?entry/s]
Comparing indexes |7.00 [00:00, 1.93kentry/s]
Applying changes |4.00 [00:00, 691file/s]
A dir2/
A dir1/
2 files added
total 8
-rw-r--r-- 1 pball svn 11 Jan 13 01:55 data1.csv
-rw-r--r-- 1 pball svn 12 Jan 13 01:55 data2.csv
total 8
-rw-r--r-- 1 pball svn 11 Jan 13 01:55 data1.csv
-rw-r--r-- 1 pball svn 12 Jan 13 01:55 data2.csv
```
### Expected
I need to preserve the symlinks.
### Environment information
```
$ dvc doctor
DVC version: 3.33.4 (conda)
---------------------------
Platform: Python 3.11.5 on Linux-5.15.0-91-generic-x86_64-with-glibc2.35
Subprojects:
dvc_data = 2.24.0
dvc_objects = 2.0.1
dvc_render = 1.0.0
dvc_task = 0.3.0
scmrepo = 1.6.0
Supports:
http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3)
Config:
Global: /home/pball/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/mapper/VGscott-home
Caches: local
Remotes: local
Workspace directory: ext4 on /dev/mapper/VGscott-home
Repo: dvc (no_scm)
Repo.site_cache_dir: /var/tmp/dvc/repo/625fc0929c52fe30feae4440628c3033
```
## note
I suspect that this is the intended behavior, but I have been confused by [this paragraph in the docs](https://dvc.org/doc/command-reference/add#add-symlink) which says "`dvc add` supports symlinked files as targets..." If `dvc` supports symlinks by resolving them, then I don't think `dvc` is _supporting_ symlinks, and perhaps the docs could be clarified.
However, if I'm wrong (which I hope I am!!), maybe someone could point me to the options necessary to preserve the symlinks in the dvc repository? thank you. | closed | 2024-01-13T01:57:33Z | 2024-01-14T18:06:36Z | https://github.com/iterative/dvc/issues/10237 | [] | vm-wylbur | 3 |
keras-team/keras | data-science | 20,419 | Tried to convert 'y' to a tensor and failed. Error: None values not supported. | I went back to working code from a couple of months ago, and it now shows:
```
ValueError: in user code:

    File "<ipython-input-31-1531690eb3bf>", line 12, in train_step *
        node_embeddings = model(inputs, training=True, mask=None)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler **
        raise e.with_traceback(filtered_tb) from None
    File "<ipython-input-27-cc2e87fdb2bc>", line 41, in call
        x = conv_layer([x, a])  # Pass only x and a, without mask
    File "/usr/local/lib/python3.10/dist-packages/spektral/layers/convolutional/conv.py", line 74, in _inner_check_dtypes
        return call(inputs, **kwargs)
    File "/usr/local/lib/python3.10/dist-packages/spektral/layers/convolutional/gtv_conv.py", line 127, in call
        output *= mask[0]

    ValueError: Exception encountered when calling GTVConv.call().

    Tried to convert 'y' to a tensor and failed. Error: None values not supported.

    Arguments received by GTVConv.call():
      • inputs=['tf.Tensor(shape=(None, 4), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=float32)']
      • mask=['None', 'None']
```
Here is the model definition. It's not the best model, but it used to work.
```python
import tensorflow as tf
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.models import Model
from keras import regularizers, constraints
from tensorflow.keras.layers import Dense, Dropout, GlobalMaxPooling1D
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from spektral.layers import GTVConv  # needed for the conv layers below


class GNNWithReadout(Model):
    def __init__(self, channels, delta_coeff=1.0, epsilon=0.001, n_layers=2, readout_units=16, activation='relu',
                 kernel_initializer='he_normal', kernel_regularizer=regularizers.L1(l1=1e-5),
                 bias_initializer='zeros', bias_regularizer=None, activity_regularizer=None,
                 kernel_constraint=None, bias_constraint=None):
        super().__init__()
        self.channels = channels
        self.n_layers = n_layers
        self.readout_units = readout_units

        # Graph convolutional layers
        self.conv_layers = [GTVConv(channels, delta_coeff=delta_coeff, epsilon=epsilon, activation=activation,
                                    kernel_regularizer=kernel_regularizer, kernel_initializer=kernel_initializer,
                                    bias_initializer=bias_initializer, bias_regularizer=bias_regularizer,
                                    activity_regularizer=activity_regularizer, kernel_constraint=kernel_constraint,
                                    bias_constraint=bias_constraint) for _ in range(n_layers)]

        # Readout layers
        self.readout_dense = Dense(readout_units, activation="relu", kernel_initializer=kernel_initializer,
                                   kernel_regularizer=kernel_regularizer, bias_initializer=bias_initializer,
                                   bias_regularizer=bias_regularizer, activity_regularizer=activity_regularizer,
                                   kernel_constraint=kernel_constraint, bias_constraint=bias_constraint)
        self.readout_pooling = GlobalMaxPooling1D()

    def call(self, inputs, mask=None):
        # Unpack the inputs, ignoring the third component (graph IDs)
        x, a, _ = inputs  # Node features (x) and adjacency matrix (a)

        # Graph convolutional layers
        for conv_layer in self.conv_layers:
            x = conv_layer([x, a])  # Pass only x and a, without mask

        # Readout
        graph_representation = self._graph_readout(x)
        return graph_representation

    def _graph_readout(self, x):
        x = self.readout_dense(x)
        x = tf.expand_dims(x, axis=0)  # Add an extra dimension for pooling
        graph_representation = self.readout_pooling(x)
        return graph_representation


# Example usage
delta_coeff = 1.0
epsilon = 0.001
layers = 2
channels = 32
readout_units = 16
batch_size = 32
epochs = 200

# Define hyperparameters
activation = 'relu'
use_bias = True
kernel_initializer = 'he_normal'
bias_initializer = 'zeros'
kernel_regularizer = regularizers.L1(l1=1e-5)
bias_regularizer = None
activity_regularizer = None
kernel_constraint = None
bias_constraint = None

# Build model
model = GNNWithReadout(channels, delta_coeff, epsilon, activation=activation, kernel_initializer=kernel_initializer,
                       kernel_regularizer=kernel_regularizer, bias_initializer=bias_initializer,
                       bias_regularizer=bias_regularizer, activity_regularizer=activity_regularizer,
                       kernel_constraint=kernel_constraint, bias_constraint=bias_constraint)

# Compile model
optimizer = Adam(learning_rate=1e-3)
model.compile(optimizer=optimizer, loss=MeanSquaredError(), metrics=['accuracy'])
```
| closed | 2024-10-27T22:05:30Z | 2024-10-27T23:33:46Z | https://github.com/keras-team/keras/issues/20419 | [] | atifsiddiqui10 | 0 |
laurentS/slowapi | fastapi | 18 | Limit Http Headers | I'm setting a rate limit for a route as below
```python
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address, headers_enabled=True)

@router.post("/send-verification-otp", response_model=schemas.Msg)
@limiter.limit("1/1 minute")
def send_verification_otp(
```
Check the below error:
AttributeError: 'dict' object has no attribute 'headers' | closed | 2020-11-02T21:40:36Z | 2020-12-23T21:47:05Z | https://github.com/laurentS/slowapi/issues/18 | [] | moayyadfaris | 5 |
neuml/txtai | nlp | 232 | Rename SQLException to SQLError | Rename SQLException to SQLError to match Python naming conventions. | closed | 2022-03-01T13:35:00Z | 2022-03-01T14:04:14Z | https://github.com/neuml/txtai/issues/232 | [] | davidmezzetti | 0 |
gee-community/geemap | streamlit | 2,026 | Error applying color palette to an image using the layer manager | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
<table style='border: 1.5px solid;'>
<tr>
<td style='text-align: center; font-weight: bold; font-size: 1.2em; border: 1px solid;' colspan='6'>Tue May 28 20:36:53 2024 -03</td>
</tr>
<tr>
<td style='text-align: right; border: 1px solid;'>OS</td>
<td style='text-align: left; border: 1px solid;'>Darwin (macOS 14.5)</td>
<td style='text-align: right; border: 1px solid;'>CPU(s)</td>
<td style='text-align: left; border: 1px solid;'>8</td>
<td style='text-align: right; border: 1px solid;'>Machine</td>
<td style='text-align: left; border: 1px solid;'>arm64</td>
</tr>
<tr>
<td style='text-align: right; border: 1px solid;'>Architecture</td>
<td style='text-align: left; border: 1px solid;'>64bit</td>
<td style='text-align: right; border: 1px solid;'>RAM</td>
<td style='text-align: left; border: 1px solid;'>8.0 GiB</td>
<td style='text-align: right; border: 1px solid;'>Environment</td>
<td style='text-align: left; border: 1px solid;'>Jupyter</td>
</tr>
<tr>
<td style='text-align: right; border: 1px solid;'>File system</td>
<td style='text-align: left; border: 1px solid;'>apfs</td>
</tr>
<tr>
<td style='text-align: center; border: 1px solid;' colspan='6'>Python 3.11.3 (main, Jan 28 2024, 15:59:02) [Clang 15.0.0 (clang-1500.1.0.2.5)]</td>
</tr>
<tr>
<td style='text-align: right; border: 1px solid;'>geemap</td>
<td style='text-align: left; border: 1px solid;'>0.32.1</td>
<td style='text-align: right; border: 1px solid;'>ee</td>
<td style='text-align: left; border: 1px solid;'>0.1.404</td>
<td style='text-align: right; border: 1px solid;'>ipyleaflet</td>
<td style='text-align: left; border: 1px solid;'>0.18.2</td>
</tr>
<tr>
<td style='text-align: right; border: 1px solid;'>folium</td>
<td style='text-align: left; border: 1px solid;'>0.16.0</td>
<td style='text-align: right; border: 1px solid;'>jupyterlab</td>
<td style='text-align: left; border: 1px solid;'>4.2.1</td>
<td style='text-align: right; border: 1px solid;'>notebook</td>
<td style='text-align: left; border: 1px solid;'>Module not found</td>
</tr>
<tr>
<td style='text-align: right; border: 1px solid;'>ipyevents</td>
<td style='text-align: left; border: 1px solid;'>2.0.2</td>
<td style='border: 1px solid;'></td>
<td style='border: 1px solid;'></td>
<td style='border: 1px solid;'></td>
<td style='border: 1px solid;'></td>
</tr>
</table>
### Description
I was trying to change the colormap of a single-band image using the layer manager widget. However, when I selected any colormap, I received an `AttributeError` message, and nothing changed.
### What I Did
``` python
m = geemap.Map(center=[-21.25, -45], zoom = 11, basemap='Esri.WorldImagery')
dem = ee.Image('USGS/SRTMGL1_003')
vis_params = {
'min': 770,
'max': 1300,
'palette': 'terrain'
}
m.add_layer(dem, vis_params, 'SRTM DEM')
m.add('layer_manager')
m
```
Here's the error message:
```python
AttributeError: module 'matplotlib.cm' has no attribute 'get_cmap'
```
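For anyone hitting this before a fix lands: as far as I can tell from the Matplotlib changelog, `matplotlib.cm.get_cmap` was deprecated in 3.7 and removed in 3.9, and newer code is expected to use `matplotlib.colormaps[name]` or `matplotlib.pyplot.get_cmap` instead. Until geemap switches over, pinning an older Matplotlib should work as a temporary workaround (the exact version bound is my assumption):

```shell
pip install "matplotlib<3.9"
```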
Here's what the layer manager widget looks like:

| closed | 2024-05-29T00:05:01Z | 2024-05-29T13:50:18Z | https://github.com/gee-community/geemap/issues/2026 | [
"bug"
] | iagomoliv | 1 |
NullArray/AutoSploit | automation | 903 | Divided by zero exception279 | Error: Attempted to divide by zero.279 | closed | 2019-04-19T16:03:10Z | 2019-04-19T16:37:01Z | https://github.com/NullArray/AutoSploit/issues/903 | [] | AutosploitReporter | 0 |
wger-project/wger | django | 1,379 | Automatic language recognition and language menu positioning | ## Use case
Currently, the website https://wger.de/en/software/features uses English by default. The user has to change the language manually in the menu in the footer of the page.
## Proposal
- Enable automatic language selection by recognizing the user's browser langauge, or OS language settings to show the website in the appropriate language.
- Place the language menu into the top navigation bar as a separate item instead of placing it at the bottom of the page.
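For the first point, Django (which wger is built on) already provides this via `django.middleware.locale.LocaleMiddleware` and the browser's `Accept-Language` header, if I remember correctly. The core negotiation step looks roughly like this in plain Python (simplified q-value parsing; the real RFC 9110 rules are stricter):

```python
def preferred_language(accept_language, supported, default="en"):
    """Pick the best UI language from an Accept-Language header value.

    A simplified sketch of the negotiation LocaleMiddleware performs:
    rank the offered languages by q-value and return the first one the
    site supports, falling back to a default.
    """
    ranked = []
    for i, part in enumerate(accept_language.split(",")):
        piece = part.strip()
        if not piece:
            continue
        lang, _, q = piece.partition(";q=")
        try:
            quality = float(q) if q else 1.0
        except ValueError:
            quality = 0.0
        # Keep only the primary subtag ("de-DE" -> "de"); negative quality
        # plus the original index gives a stable highest-q-first sort.
        ranked.append((-quality, i, lang.split("-")[0].lower()))
    for _, _, lang in sorted(ranked):
        if lang in supported:
            return lang
    return default

preferred_language("de-DE,de;q=0.9,en;q=0.8", {"en", "de", "hr"})  # -> "de"
```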
| open | 2023-07-05T16:33:06Z | 2024-05-05T17:16:52Z | https://github.com/wger-project/wger/issues/1379 | [] | milotype | 5 |
onnx/onnx | tensorflow | 6,533 | [reference] Improve ConcatFromSequence reference implementation | > I still have the same question. I agree there is an ambiguity in the ONNX documentation ... but the ambiguity applies to values other than -1 also. Eg., -2 as well.
>
> I think that the spec means that the specified axis is interpreted with respect to the output's shape (not the input's shape). So, the specified axis in the output will be the newly inserted axis.
>
> If I understand the spec of `np.expand_dims`, it is similar: https://numpy.org/doc/stable/reference/generated/numpy.expand_dims.html ... so, something seems off here.
_Originally posted by @gramalingam in https://github.com/onnx/onnx/pull/6369#discussion_r1774104460_
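To make the `np.expand_dims` analogy concrete: under the reading that the specified axis is interpreted against the *output* rank r+1, a negative axis normalizes as below, so -1 appends a trailing axis and -2 inserts before the last input axis (plain-Python sketch over shapes only):

```python
def normalize_insert_axis(axis, input_rank):
    """Map a (possibly negative) axis to a position in the output shape."""
    output_rank = input_rank + 1  # one new axis is inserted
    if not -output_rank <= axis < output_rank:
        raise ValueError(f"axis {axis} out of range for rank {input_rank} input")
    return axis % output_rank

def expand_dims_shape(shape, axis):
    """Shape of expand_dims(x, axis) for an input of the given shape."""
    pos = normalize_insert_axis(axis, len(shape))
    return shape[:pos] + (1,) + shape[pos:]

expand_dims_shape((2, 3), -1)  # -> (2, 3, 1)
expand_dims_shape((2, 3), -2)  # -> (2, 1, 3)
```

These match `np.expand_dims` on the same inputs, which is the behavior I believe the spec intends.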
| open | 2024-11-05T23:17:22Z | 2024-11-05T23:18:34Z | https://github.com/onnx/onnx/issues/6533 | [
"module: reference implementation",
"contributions welcome"
] | justinchuby | 0 |
yt-dlp/yt-dlp | python | 12,130 | How to get download progress | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
When yt-dlp is installed as a pip package, you can use a progress hook to get the download progress. If you use another language (such as Rust), can you still get the download progress?
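One hedged suggestion: from Python the built-in way is a `progress_hooks` entry in the `YoutubeDL` params, and from another language (Rust, Go, …) a common approach is to spawn the `yt-dlp` CLI with `--newline` so each progress update is its own line, then parse its stdout. The progress line and parsing helper below are an illustrative sketch, not an official API:

```python
import re

# Example progress line as yt-dlp prints it with --newline; a Rust/Go/etc.
# caller would read lines like this from the subprocess's stdout.
LINE = "[download]  42.5% of 10.00MiB at 1.20MiB/s ETA 00:05"

def parse_percent(line: str):
    # Extract the percentage from a "[download]  NN.N% ..." line, if present.
    m = re.search(r"\[download\]\s+(\d+(?:\.\d+)?)%", line)
    return float(m.group(1)) if m else None

print(parse_percent(LINE))  # 42.5
```

yt-dlp also offers `--progress-template` for a machine-friendly output format, which makes this kind of parsing simpler.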
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
``` | closed | 2025-01-19T03:58:22Z | 2025-01-21T13:36:40Z | https://github.com/yt-dlp/yt-dlp/issues/12130 | [
"question"
] | Kukaina | 1 |
allenai/allennlp | nlp | 5,089 | MLM in AllenNLP? | Hi! I saw that there's currently a [masked language modeling `Model`](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/lm/models/masked_language_model.py) over in the `allennlp-models` repo, but it looks like most of the components are marked as demo-only. Is full functionality for this on the roadmap? If not, is there an estimate of what it would take to get it working? | open | 2021-04-02T04:18:43Z | 2021-05-01T19:30:50Z | https://github.com/allenai/allennlp/issues/5089 | [
"Contributions welcome",
"question"
] | ethch18 | 4 |
keras-team/keras | python | 20,189 | Keras different versions have numerical deviations when using pretrain model | The following code will have output deviations between Keras 3.3.3 and Keras 3.5.0.
```python
#download model
from modelscope import snapshot_download
base_path = 'q935499957/Qwen2-0.5B-Keras'
import os
dir = 'models'
try:
    os.mkdir(dir)
except:
    pass
model_dir = snapshot_download(base_path, local_dir=dir)
#config
import os
os.environ["KERAS_BACKEND"] = "torch"
import keras
keras.config.set_dtype_policy("bfloat16")
from transformers import AutoTokenizer
import numpy as np
from bert4keras3.models import build_transformer_model, Llama
from bert4keras3.snippets import sequence_padding
base_path = dir + '/'
config_path = base_path + 'config.json'
weights_path = base_path + 'QWen.weights.h5'  # save path: expand_lm.weights.h5
dict_path = base_path + 'qwen_tokenizer'
tokenizer = AutoTokenizer.from_pretrained(dict_path)
#define a model to print intermediate tensors
class Llama_print(Llama):
    def apply_main_cache_layers(self, inputs, index, self_cache_update_index,
                                cross_cache_update_index=None,
                                attention_mask=None, position_bias=None,
                                ):
        print(inputs[0][:, :, :8])
        print(index)
        print(inputs[0].shape)
        print('-' * 50)
        return super().apply_main_cache_layers(inputs, index, self_cache_update_index,
                                               cross_cache_update_index,
                                               attention_mask, position_bias)

Novel = build_transformer_model(
    config_path,
    keras_weights_path=weights_path,
    model=Llama_print,
    with_lm=True,
    return_keras_model=False,
)
x = np.array([tokenizer.encode('hello,') + [0]])
print(Novel.cache_call([x], input_lengths=[3],
                       end_token=-1, search_mode='topp', k=1))
```
This is a llama-like pre-trained model. The code above prints the intermediate tensors during the prefill and decode processes.
With the exact same code, the inputs during the prefill process are completely different between the two versions. In the decode phase, even when the input is the same, significant differences appear in the outputs between the two versions as the iterations proceed.
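Not a diagnosis of this specific report, but a general note: at bfloat16 precision, a mere change of reduction or kernel ordering between framework versions can legitimately shift values. A small float32 sketch of order-dependent summation:

```python
import numpy as np

# Summing the same values in a different order gives different low-precision
# results: the 1.0s are absorbed when added one-by-one to the large term.
vals = np.array([1e8] + [1.0] * 8, dtype=np.float32)

fwd = np.float32(0.0)
for v in vals:
    fwd = np.float32(fwd + v)  # big term first: each +1.0 is rounded away

bwd = np.float32(0.0)
for v in vals[::-1]:
    bwd = np.float32(bwd + v)  # small terms first: their sum survives

print(f"{fwd:.1f}", f"{bwd:.1f}")  # 100000000.0 100000008.0
```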
keras 3.3.3 print
```
#prefill
tensor([[[ 0.0164, 0.0070, -0.0019, -0.0013, 0.0156, 0.0074, -0.0055,
-0.0139],
[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160],
[-0.0204, -0.0093, 0.0121, 0.0091, -0.0065, -0.0225, 0.0149,
0.0108]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-0.0459, -0.0967, -0.0270, 0.0452, 0.2500, -0.1387, 0.1094,
-0.1436],
[ 0.0031, -0.0479, 0.0107, -0.0291, -0.0869, 0.0549, 0.0579,
0.0618],
[-0.1099, 0.0183, 0.1309, -0.1406, 0.0204, -0.0154, 0.2656,
0.0669]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-0.3398, -0.2988, 0.1143, -0.2109, 0.5625, 0.0869, -0.3281,
-0.1465],
[ 0.1895, -0.1562, -0.0292, -0.1348, 0.0283, 0.0452, 0.2734,
0.0396],
[ 0.0127, -0.0498, 0.0388, -0.1484, 0.0791, 0.1118, 0.2578,
0.0879]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
2
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-9.2188e-01, 1.7109e+00, 3.3281e+00, -2.5000e+00, -2.0312e-01,
-4.7070e-01, -7.1250e+00, 3.7891e-01],
[ 6.5918e-02, -3.2031e-01, -2.0312e-01, 1.2207e-01, -1.2598e-01,
1.7090e-03, 9.2773e-02, -1.6699e-01],
[-1.6846e-02, -1.9531e-01, -2.1875e-01, 1.4648e-02, 7.3242e-04,
6.0303e-02, 4.2773e-01, 2.3438e-02]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
3
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-1.1719, 1.4062, 3.2031, -1.6328, -0.8047, -1.0938, -7.9062,
1.2266],
[ 0.3594, -0.1025, 0.0869, 0.3496, -0.0132, 0.0515, 0.2168,
0.1016],
[ 0.0449, -0.2910, -0.2305, 0.0383, 0.1592, -0.1016, 0.6328,
0.0190]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-2.1562e+00, 2.0156e+00, 4.3125e+00, 9.5312e-01, 2.7344e-01,
-1.8750e+00, -1.3875e+01, 2.4062e+00],
[ 4.5703e-01, -2.6172e-01, -2.4414e-02, 3.6133e-01, 1.6016e-01,
1.1768e-01, 4.1992e-01, -4.5898e-02],
[ 9.6680e-02, -4.1016e-01, -2.8906e-01, 7.9346e-03, -1.5430e-01,
-1.5430e-01, 4.7266e-01, -2.6562e-01]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
5
torch.Size([1, 3, 896])
#decode
--------------------------------------------------
tensor(1, device='cuda:0', dtype=torch.int32)tensor([[[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.0031, -0.0481, 0.0107, -0.0291, -0.0869, 0.0549, 0.0579,
0.0618]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.1895, -0.1572, -0.0299, -0.1367, 0.0283, 0.0452, 0.2754,
0.0405]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
2
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.0654, -0.3203, -0.2041, 0.1221, -0.1260, 0.0039, 0.0933,
-0.1660]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
3
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.3574, -0.0986, 0.0898, 0.3516, -0.0137, 0.0518, 0.2158,
0.1064]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 1, 896])
--------------------------------------------------
```
keras 3.5.0 print
```
#prefill
tensor([[[-0.0096, 0.0126, -0.0063, 0.0044, 0.0121, 0.0038, 0.0104,
-0.0009],
[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160],
[-0.0204, -0.0093, 0.0121, 0.0091, -0.0065, -0.0225, 0.0149,
0.0108]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-0.1807, 0.0674, -0.3926, -0.0278, 0.2520, -0.0840, -0.0669,
-0.3047],
[-0.0072, -0.0415, 0.0123, -0.0146, -0.1270, 0.0679, 0.0610,
-0.0205],
[-0.1279, 0.0349, 0.2539, -0.1611, -0.0225, 0.0275, 0.1338,
0.0386]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[ 5.6250e-01, -2.3633e-01, -1.0781e+00, -1.2988e-01, 3.4180e-01,
3.7109e-01, -3.1250e-01, -1.9531e-01],
[ 1.2598e-01, -1.2695e-02, -7.1289e-02, -1.3672e-01, 3.3203e-02,
1.4941e-01, 1.9922e-01, -2.1875e-01],
[-9.5215e-02, -5.9570e-02, 2.0117e-01, -3.2031e-01, 3.6621e-04,
5.8350e-02, 1.6504e-01, -8.9355e-02]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
2
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[ 0.4277, 0.6680, 2.3750, -2.8750, -0.5039, 0.0742, -6.5625,
0.4082],
[ 0.2256, -0.3047, -0.0349, -0.0859, 0.1191, 0.2334, 0.3262,
-0.0088],
[-0.1025, -0.0918, 0.3105, -0.2227, -0.0162, 0.2715, 0.4746,
0.0371]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
3
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[ 0.1719, 0.3652, 2.2812, -2.0156, -1.0938, -0.5547, -7.3438,
1.2500],
[ 0.5938, -0.3047, -0.0126, -0.0981, 0.2676, 0.0479, 0.0771,
0.1455],
[ 0.2051, -0.2188, 0.0391, -0.2949, 0.2539, 0.0566, 0.4355,
0.0227]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-8.0469e-01, 9.8438e-01, 3.4062e+00, 6.0938e-01, -7.8125e-03,
-1.3438e+00, -1.3375e+01, 2.4219e+00],
[ 6.3672e-01, -5.7422e-01, 2.8931e-02, -3.1250e-01, 3.2422e-01,
-6.7871e-02, 4.0430e-01, -4.0039e-02],
[ 3.5156e-01, -4.4531e-01, -1.8066e-02, -2.2070e-01, 1.1377e-01,
3.0884e-02, 4.5508e-01, 1.4160e-01]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
#decode
--------------------------------------------------
tensor(1, device='cuda:0', dtype=torch.int32)tensor([[[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[-0.0072, -0.0415, 0.0122, -0.0146, -0.1270, 0.0679, 0.0610,
-0.0205]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.1230, -0.0083, -0.0713, -0.1260, 0.0293, 0.1436, 0.2051,
-0.2090]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
2
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.2266, -0.3008, -0.0327, -0.0791, 0.1143, 0.2285, 0.3320,
-0.0068]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
3
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.5977, -0.2930, -0.0059, -0.0884, 0.2637, 0.0449, 0.0889,
0.1465]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.6367, -0.5586, 0.0376, -0.3047, 0.3242, -0.0654, 0.4277,
-0.0312]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
5
torch.Size([1, 1, 896])
--------------------------------------------------
```
| closed | 2024-08-30T11:21:08Z | 2024-08-31T07:38:23Z | https://github.com/keras-team/keras/issues/20189 | [] | pass-lin | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,900 | Can't start Forge's run.bat | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [x] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I used to start SD-Forge using Run.bat, but now for some reason it doesn't work anymore. I have to start it using webui-user.bat.
Run.bat runs environment.bat first before running webui-user.bat, so I guess the culprit is environment.bat? But I'm not sure... Help, please.
### Steps to reproduce the problem
Start SD-Forge using Run.bat
### What should have happened?
Start SD-Forge
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
I can't get into webui..
### Console logs
```Shell
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Faceswaplab : Use GPU requirements
Checking faceswaplab requirements
0.008728599997994024
CUDA 12.1
Launching Web UI with arguments: --ckpt-dir G:/stable-diffusion/sdweb/webui/models/Stable-diffusion --embeddings-dir G:/stable-diffusion/sdweb/webui/embeddings --lora-dir G:/stable-diffusion/sdweb/webui/models/LyCORIS
Total VRAM 24576 MB, total RAM 65315 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : native
Hint: your device supports --pin-shared-memory for potential speed improvements.
Hint: your device supports --cuda-malloc for potential speed improvements.
Hint: your device supports --cuda-stream for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated: False
2024-05-28 17:00:28.287449: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-05-28 17:00:28.905988: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Traceback (most recent call last):
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorboard\compat\__init__.py", line 42, in tf
from tensorboard.compat import notf # noqa: F401
ImportError: cannot import name 'notf' from 'tensorboard.compat' (G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorboard\compat\__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\launch.py", line 51, in <module>
main()
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\launch.py", line 47, in main
start()
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\modules\launch_utils.py", line 541, in start
import webui
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\webui.py", line 19, in <module>
initialize.imports()
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\modules\initialize.py", line 39, in imports
from modules import paths, timer, import_hook, errors # noqa: F401
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\modules\paths.py", line 60, in <module>
import sgm # noqa: F401
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\repositories\generative-models\sgm\__init__.py", line 1, in <module>
from .models import AutoencodingEngine, DiffusionEngine
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\repositories\generative-models\sgm\models\__init__.py", line 1, in <module>
from .autoencoder import AutoencodingEngine
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\repositories\generative-models\sgm\models\autoencoder.py", line 12, in <module>
from ..modules.diffusionmodules.model import Decoder, Encoder
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\repositories\generative-models\sgm\modules\__init__.py", line 1, in <module>
from .encoders.modules import GeneralConditioner
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 5, in <module>
import kornia
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\kornia\__init__.py", line 11, in <module>
from . import augmentation, color, contrib, core, enhance, feature, io, losses, metrics, morphology, tracking, utils, x
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\kornia\x\__init__.py", line 2, in <module>
from .trainer import Trainer
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\kornia\x\trainer.py", line 11, in <module>
from accelerate import Accelerator
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\accelerate\__init__.py", line 3, in <module>
from .accelerator import Accelerator
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\accelerate\accelerator.py", line 41, in <module>
from .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\accelerate\tracking.py", line 43, in <module>
from torch.utils import tensorboard
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\tensorboard\__init__.py", line 12, in <module>
from .writer import FileWriter, SummaryWriter # noqa: F401
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\tensorboard\writer.py", line 16, in <module>
from ._embedding import get_embedding_info, make_mat, make_sprite, make_tsv, write_pbtxt
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\tensorboard\_embedding.py", line 9, in <module>
_HAS_GFILE_JOIN = hasattr(tf.io.gfile, "join")
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorboard\lazy.py", line 65, in __getattr__
return getattr(load_once(self), attr_name)
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorboard\lazy.py", line 97, in wrapper
cache[arg] = f(arg)
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorboard\lazy.py", line 50, in load_once
module = load_fn()
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorboard\compat\__init__.py", line 45, in tf
import tensorflow
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\__init__.py", line 51, in <module>
from tensorflow._api.v2 import compat
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\_api\v2\compat\__init__.py", line 8, in <module>
from tensorflow._api.v2.compat import v1
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\_api\v2\compat\v1\__init__.py", line 30, in <module>
from tensorflow._api.v2.compat.v1 import compat
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\_api\v2\compat\v1\compat\__init__.py", line 8, in <module>
from tensorflow._api.v2.compat.v1.compat import v1
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\_api\v2\compat\v1\compat\v1\__init__.py", line 47, in <module>
from tensorflow._api.v2.compat.v1 import lite
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\_api\v2\compat\v1\lite\__init__.py", line 9, in <module>
from tensorflow._api.v2.compat.v1.lite import experimental
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\_api\v2\compat\v1\lite\experimental\__init__.py", line 8, in <module>
from tensorflow._api.v2.compat.v1.lite.experimental import authoring
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\_api\v2\compat\v1\lite\experimental\authoring\__init__.py", line 8, in <module>
from tensorflow.lite.python.authoring.authoring import compatible # line: 265
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\lite\python\authoring\authoring.py", line 43, in <module>
from tensorflow.lite.python import convert
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\lite\python\convert.py", line 30, in <module>
from tensorflow.lite.python import util
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\tensorflow\lite\python\util.py", line 51, in <module>
from jax import xla_computation as _xla_computation
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\jax\__init__.py", line 37, in <module>
import jax.core as _core
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\jax\core.py", line 18, in <module>
from jax._src.core import (
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\jax\_src\core.py", line 39, in <module>
from jax._src import dtypes
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\jax\_src\dtypes.py", line 33, in <module>
from jax._src import config
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\jax\_src\config.py", line 27, in <module>
from jax._src import lib
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\jax\_src\lib\__init__.py", line 75, in <module>
version = check_jaxlib_version(
File "G:\stable-diffusion\FORGE\webui_forge_cu121_torch21\system\python\lib\site-packages\jax\_src\lib\__init__.py", line 67, in check_jaxlib_version
raise RuntimeError(
RuntimeError: jaxlib version 0.4.28 is newer than and incompatible with jax version 0.4.26. Please update your jax and/or jaxlib packages.
Press any key to continue . . .
```
### Additional information
_No response_ | closed | 2024-05-28T09:07:31Z | 2024-05-28T21:23:03Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15900 | [
"bug-report"
] | frank052791 | 1 |
deepset-ai/haystack | machine-learning | 8,955 | remove DeprecationWarning about pandas Dataframes from EvaluationRunResult | We made pandas Dataframes optional in `EvaluationRunResult` in PR https://github.com/deepset-ai/haystack/pull/8838 and added a DeprecationWarning ~~before~~ after releasing 2.10. ~~Before~~ After the 2.11 release, we should remove the deprecated `to_pandas` and `comparative_individual_scores_report` and `score_report` | closed | 2025-03-04T10:19:16Z | 2025-03-17T13:50:51Z | https://github.com/deepset-ai/haystack/issues/8955 | [
"P2"
] | julian-risch | 1 |
ivy-llc/ivy | numpy | 28,275 | Fix Frontend Failing Test: tensorflow - convolution_functions.torch.nn.functional.conv2d | open | 2024-02-13T09:48:29Z | 2024-02-13T09:48:29Z | https://github.com/ivy-llc/ivy/issues/28275 | [
"Sub Task"
] | alt-shreya | 0 | |
koaning/scikit-lego | scikit-learn | 461 | [FEATURE] (Linear) Quantile Regression | Heya!
At some point, I implemented the LADRegression, which basically minimized sum |y_i - model(X_i)|. This also has the effect that the model over- and underestimates 50% of the time.
As in the ImbalancedRegression, we can also penalize over- and underestimations differently, with some parameter `quantile`. This would have the effect that the model overestimates a share of `quantile` samples and underestimates in `1-quantile` of the cases.
This one would be useful for having some kind of nice confidence intervals around predictions by training a model with `quantile=0.05` and another one with `quantile=0.95`, for example.
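The asymmetric penalty described above is the pinball (quantile) loss. A small NumPy sketch (illustrative, not the scikit-bonus implementation) showing that minimizing it over a constant recovers the empirical quantile:

```python
import numpy as np

def pinball_loss(y, pred, quantile):
    # Underestimates (pred < y) cost `quantile`; overestimates cost `1 - quantile`.
    diff = y - pred
    return np.mean(np.maximum(quantile * diff, (quantile - 1) * diff))

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)

# The best *constant* prediction under this loss is the empirical quantile.
grid = np.linspace(-3, 3, 601)
best = grid[np.argmin([pinball_loss(y, c, 0.9) for c in grid])]
print(best, np.quantile(y, 0.9))
```

With `quantile=0.5` the loss reduces to half the absolute error, so LADRegression is exactly the `quantile=0.5` special case, matching the proposal above.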
I implemented it [here](https://github.com/Garve/scikit-bonus/blob/46c985c6f2c0b371b031977592b23cf0e28c46e3/skbonus/linear_model/_scipy_regressors.py#L301). It's basically a more general LADRegression.
How about we put this into scikit-lego and make the LADRegression just a QuantileRegression(quantile=0.5)? Or remove it completely.
Best
Robert
| closed | 2021-04-27T14:57:43Z | 2021-05-13T17:48:58Z | https://github.com/koaning/scikit-lego/issues/461 | [
"enhancement"
] | Garve | 5 |
huggingface/diffusers | pytorch | 10,107 | gradient checkpointing runs during validations in the training examples | ### Describe the bug
The gradient checkpointing is enabled via self.training, but then the log_validations also unnecessarily encounter this codepath.
I found that I have to disable this when running even under `torch.no_grad()`, looked and saw that the official examples do not do this either.
This gives a substantial performance boost more similar to a normal inference script running outside of a training loop.
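A minimal sketch of the pattern (the module is illustrative, not diffusers code): checkpointing is gated on `self.training`, so switching the model to `eval()` before validation skips it even under `torch.no_grad()`:

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)
        self.gradient_checkpointing = True

    def forward(self, x):
        # Same gate the training examples use: checkpointing only runs when
        # the module is in train mode.
        if self.training and self.gradient_checkpointing:
            return checkpoint(self.linear, x, use_reentrant=False)
        return self.linear(x)

block = Block()
block.eval()  # call before log_validation so self.training is False
with torch.no_grad():
    out = block(torch.ones(1, 4))
print(block.training, out.shape)  # False torch.Size([1, 4])
```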
### Reproduction
Add print statements to the checkpointing function.
### Logs
_No response_
### System Info
-
### Who can help?
@linoytsaban @yiyixuxu | closed | 2024-12-03T22:10:25Z | 2025-01-03T16:01:36Z | https://github.com/huggingface/diffusers/issues/10107 | [
"bug",
"stale"
] | bghira | 4 |
litestar-org/litestar | pydantic | 4,014 | Bug: when i use app like plugin, OpenAPI dont generate | ### Description
I took the application-initialization-as-a-plugin part from the fullstack application example, but now the OpenAPI schema does not work for me.
### URL to code causing the issue
_No response_
### MCVE
core.py
```python
from litestar.plugins import CLIPluginProtocol, InitPluginProtocol
from litestar.di import Provide
from litestar.openapi.config import OpenAPIConfig
from litestar.openapi.plugins import ScalarRenderPlugin
from typing import TYPE_CHECKING
from litestar.config.app import AppConfig


class ApplicationCore(InitPluginProtocol, CLIPluginProtocol):
    def on_app_init(self, app_config: AppConfig) -> AppConfig:
        from app.config import app as config
        from litestar.openapi.spec import Tag
        from app.config import get_settings
        from app.domain.terraria.controllers import TerrariaServerController
        from app.server import plugins

        settings = get_settings()
        app_config.openapi_config = OpenAPIConfig(
            title="Terra",
            description="Terra API description",
            version="0.0.1",
            tags=[
                Tag(name="TerraServer", description="Interaction with the server"),
                Tag(name="Health", description="Liveness check"),
            ],
            render_plugins=[ScalarRenderPlugin(version="latest")],
        )
        app_config.plugins = [plugins.alchemy]
        app_config.route_handlers.extend([TerrariaServerController])
        return app_config
```
controllers.py
```py
from litestar.handlers import get, post
from litestar.controller import Controller
from litestar.di import Provide
from litestar.enums import RequestEncodingType
from litestar.params import Body
from app.models.requests import CreateServerRequest
from app.models.responses import CreateServerResponse
from app.domain.terraria.services import UserService, ServerSerivce
from app.lib.deps import create_service_provider
from app.services.terraria_service import TerrariaService
from typing import Annotated
import socket


class TerrariaServerController(Controller):
    path = "/terra"
    tags = ["TerraServer"]
    dependencies = {
        "user_service": Provide(create_service_provider(UserService)),
        "server_service": Provide(create_service_provider(ServerSerivce)),
    }

    @post(
        path="new",
        summary="Endpoint for creating a server",
        description="Creates a server and returns connection data",
        request_max_body_size=25 * 1024 * 1024,
        tags=["TerraServer"],
    )
    async def create_server(
        self,
        data: Annotated[
            CreateServerRequest, Body(media_type=RequestEncodingType.MULTI_PART)
        ],
    ) -> str:
        # TODO: check the file size, extension, and name
        return "OK"
        # result = await terraria_service.start_server(data)
        # return CreateServerResponse(
        #     status="created",
        #     server_password=result.password,
        #     host=socket.gethostbyname(socket.gethostname()),
        #     port=result.port,
        #     setup_code=result.setup_code,
        # )

    @get(path="health", summary="Health check", tags=["Health"])
    async def check_work(self) -> str:
        return "NOT OK"
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
<img width="736" alt="Image" src="https://github.com/user-attachments/assets/000a6582-b845-4793-b56d-a3f07515ce51" />
### Logs
```bash
```
### Litestar Version
2.14.0
### Platform
- [ ] Linux
- [x] Mac
- [x] Windows
- [ ] Other (Please specify in the description above) | closed | 2025-02-18T18:24:58Z | 2025-02-25T19:02:15Z | https://github.com/litestar-org/litestar/issues/4014 | [
"Bug :bug:",
"Needs MCVE"
] | Ada-lave | 3 |
deezer/spleeter | deep-learning | 60 | spleeter-gpu broken | I followed through with installing spleeter-gpu and when i try to run it this is the ouput I get.
(spleeter-gpu) swissbeats93@ApexHunter:~/Documents/Development$ spleeter separate -i spleeter/audio_example.mp3 -p spleeter:2stems -o output
INFO:spleeter:Loading audio b'spleeter/audio_example.mp3' from 0.0 to 600.0
INFO:spleeter:Audio data loaded successfully
Traceback (most recent call last):
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
return fn(*args)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d_7/Conv2D}}]]
[[strided_slice_13/_905]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d_7/Conv2D}}]]
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/spleeter/__main__.py", line 47, in entrypoint
main(sys.argv)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/spleeter/__main__.py", line 41, in main
entrypoint(arguments, params)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/spleeter/commands/separate.py", line 178, in entrypoint
output_naming=output_naming)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/spleeter/commands/separate.py", line 132, in process_audio
for sample in prediction:
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 637, in predict
preds_evaluated = mon_sess.run(predictions)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 754, in run
run_metadata=run_metadata)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 1252, in run
run_metadata=run_metadata)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 1353, in run
raise six.reraise(*original_exc_info)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/six.py", line 696, in reraise
raise value
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 1338, in run
return self._sess.run(*args, **kwargs)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 1411, in run
run_metadata=run_metadata)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 1169, in run
return self._sess.run(*args, **kwargs)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 950, in run
run_metadata_ptr)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
run_metadata)
File "/home/swissbeats93/anaconda3/envs/spleeter-gpu/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d_7/Conv2D (defined at /lib/python3.7/site-packages/spleeter/model/functions/unet.py:90) ]]
[[strided_slice_13/_905]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d_7/Conv2D (defined at /lib/python3.7/site-packages/spleeter/model/functions/unet.py:90) ]]
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node conv2d_7/Conv2D:
strided_slice_3 (defined at /lib/python3.7/site-packages/spleeter/model/__init__.py:195)
Input Source operations connected to node conv2d_7/Conv2D:
strided_slice_3 (defined at /lib/python3.7/site-packages/spleeter/model/__init__.py:195)
Original stack trace for 'conv2d_7/Conv2D':
File "/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/lib/python3.7/site-packages/spleeter/__main__.py", line 47, in entrypoint
main(sys.argv)
File "/lib/python3.7/site-packages/spleeter/__main__.py", line 41, in main
entrypoint(arguments, params)
File "/lib/python3.7/site-packages/spleeter/commands/separate.py", line 178, in entrypoint
output_naming=output_naming)
File "/lib/python3.7/site-packages/spleeter/commands/separate.py", line 132, in process_audio
for sample in prediction:
File "/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 619, in predict
features, None, ModeKeys.PREDICT, self.config)
File "/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1146, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/lib/python3.7/site-packages/spleeter/model/__init__.py", line 392, in model_fn
return builder.build_predict_model()
File "/lib/python3.7/site-packages/spleeter/model/__init__.py", line 334, in build_predict_model
output_dict = self._build_output_dict()
File "/lib/python3.7/site-packages/spleeter/model/__init__.py", line 130, in _build_output_dict
self._params['model']['params'])
File "/lib/python3.7/site-packages/spleeter/model/functions/unet.py", line 173, in unet
return apply(apply_unet, input_tensor, instruments, params)
File "/lib/python3.7/site-packages/spleeter/model/functions/__init__.py", line 26, in apply
params=params)
File "/lib/python3.7/site-packages/spleeter/model/functions/unet.py", line 90, in apply_unet
conv1 = conv2d_factory(conv_n_filters[0], (5, 5))(input_tensor)
File "/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 634, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py", line 196, in call
outputs = self._convolution_op(inputs, self.kernel)
File "/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1079, in __call__
return self.conv_op(inp, filter)
File "/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 635, in __call__
return self.call(inp, filter)
File "/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 234, in __call__
name=self.name)
File "/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1953, in conv2d
name=name)
File "/lib/python3.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1071, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
op_def=op_def)
File "/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
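For what it's worth, this particular failure ("Failed to get convolution algorithm ... cuDNN failed to initialize") is very often a GPU memory symptom rather than a broken cuDNN install; a commonly reported workaround is to let TensorFlow (1.14+) grow GPU memory on demand before re-running spleeter. The spleeter invocation below is hypothetical:

```shell
# Ask TensorFlow to allocate GPU memory incrementally instead of grabbing it
# all up front; this avoids many cuDNN initialization failures on small GPUs.
export TF_FORCE_GPU_ALLOW_GROWTH=true
# spleeter separate -i input.mp3 -o output/   # hypothetical re-run
printf '%s\n' "$TF_FORCE_GPU_ALLOW_GROWTH"
```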
| closed | 2019-11-08T23:54:48Z | 2023-08-15T03:11:38Z | https://github.com/deezer/spleeter/issues/60 | [
"bug",
"invalid"
] | swissbeats93 | 18 |
QuivrHQ/quivr | api | 3,478 | Add Retry and Exp Backoff to all api calls | closed | 2024-11-15T08:26:02Z | 2024-11-20T09:44:29Z | https://github.com/QuivrHQ/quivr/issues/3478 | [
"enhancement"
] | chloedia | 1 | |
graphql-python/graphene | graphql | 681 | Query with "or" expression | Hi everyone, maybe it's a stupid question, I'm new to graphene. Is it possible for me to query like this?
```graphql
query{
things(type: "type_a" or "type_b"){
edges{
node{
thing_id
}
}
}
}
```
I mean the `type` argument could possibly be a String or multiple Strings (or a list). I figured out that Django has something called `django_filter`, but I'm using Flask.
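GraphQL itself has no `or` keyword in arguments, but a common pattern is to declare the argument as a list and have the resolver treat the values as OR-ed alternatives. The query side would then look like this — note the plural `types` argument is an assumption, not something defined in the schema above:

```graphql
query {
  things(types: ["type_a", "type_b"]) {
    edges {
      node {
        thing_id
      }
    }
  }
}
```

On the schema side this corresponds to declaring the argument as `graphene.List(graphene.String)` and filtering in the resolver (e.g. with SQLAlchemy's `in_`).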
Thanks! | closed | 2018-03-12T09:28:43Z | 2018-03-29T14:13:05Z | https://github.com/graphql-python/graphene/issues/681 | [] | bwanglzu | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,028 | CSV export of Audit Log "reports" tab, does not contain sub-status but an incomprehensible code | **Describe the bug**
CSV export of the Audit Log "reports" tab does not contain sub-statuses, even though they appear in the tab itself.
Instead of the sub-status, the export contains a code whose origin I do not know and whose meaning I cannot figure out.
**To Reproduce**
Export the CSV file from the admin Audit Log "reports" tab, open it in Excel, and compare it with the info that exists in the tab.
**csv export**
b85de321-e337-492f-a4f1-f6b80d9d0ce6,2021-07-09T10:28:12.530573Z,2021-07-09T10:37:18.921253Z,3000-01-01T00:00:00Z,f444c46f-5f7f-4ccc-a840-e10776d1a485,closed,**ff83ab2c-f631-4eeb-8679-be3338146c94**,FALSE,1,0,1,2021-07-09T10:50:47.656573Z
example
**audit log**
b85de321-e337-492f-a4f1-f6b80d9d0ce6 | 09-07-2021 13:28 | 09-07-2021 13:37 | 01-01-3000 02:00 | CONTEXT | Closed(**Malicious**) | HTTPS | 1 | 0 | 1 | 09-07-2021 13:50
**Expected behavior**
I would expect the CSV to contain all the info in the reports tab, including the substatus, in a simple form
**Desktop (please complete the following information):**
- OS: win 10
- Browser chrome
**Additional context**
Additionally, it would be good to include the serial number of the report alongside the report id | open | 2021-07-11T09:53:56Z | 2021-07-12T07:54:07Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3028 | [
"T: Feature"
] | elbill | 2 |
aio-libs/aiomysql | sqlalchemy | 994 | cannot install aiomysql with rsa | ### Describe the bug
aiomysql may need `pymysql[rsa]`, which installs the `cryptography` package. It would be nice if this worked out of the box.
### To Reproduce
modern installation with uv
```sh
$ uv add aiomysql[rsa]
zsh: no matches found: aiomysql[rsa]
```
oldschool installation with pip
```sh
pip install aiomysql[rsa]
zsh: no matches found: aiomysql[rsa]
```
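As an aside, the `zsh: no matches found` message is zsh expanding the square brackets as a glob pattern before pip/uv ever runs; quoting the requirement works around that part (a sketch — this doesn't change whether the extra actually exists):

```shell
# Quote the extras spec so the shell passes the brackets through literally:
#   pip install 'aiomysql[rsa]'
#   uv add 'aiomysql[rsa]'
spec='aiomysql[rsa]'
printf '%s\n' "$spec"
```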
### Expected behavior
Installs `pymysql[rsa]`, which pulls in the `cryptography` package.
### Logs/tracebacks
```python-traceback
/
```
### Python Version
```console
$ python --version
Python 3.12.3
```
### aiomysql Version
```console
$ uv pip show aiomysql
Name: aiomysql
Version: 0.2.0
Location: /home/extreme4all/dev/bot_detector/remote/public-api/.venv/lib/python3.12/site-packages
Requires: pymysql
Required-by:
```
### PyMySQL Version
```console
$ uv pip show pymysql
Name: pymysql
Version: 1.1.1
Location: /home/extreme4all/dev/bot_detector/remote/public-api/.venv/lib/python3.12/site-packages
Requires:
Required-by: aiomysql
```
### SQLAlchemy Version
```console
$ uv pip show sqlalchemy
Name: sqlalchemy
Version: 2.0.36
Location: /home/extreme4all/dev/bot_detector/remote/public-api/.venv/lib/python3.12/site-packages
Requires: greenlet, typing-extensions
Required-by:
```
### OS
Ubuntu 24.04.1 LTS
### Database type and version
```console
mysql:8.0.32
```
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct | closed | 2024-11-22T11:00:14Z | 2024-11-22T11:12:29Z | https://github.com/aio-libs/aiomysql/issues/994 | [
"bug"
] | extreme4all | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 576 | MDX-NET models produce a very distinct buzzing sound | I have got better vocal results using MDX-NET compared to other methods. However, I'm getting a persistent buzzing sound in the output generated by this model, irrespective of the input provided and MDX-NET models used. Is there any possible solution to resolve this issue? | open | 2023-05-26T09:57:39Z | 2024-02-10T06:18:11Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/576 | [] | 9chry | 4 |
aiortc/aiortc | asyncio | 770 | Packet Unpack Issue with cli.py |
Hello!
I'm running into an unpack error when running one of the examples available in the repo, [cli.py](https://github.com/aiortc/aiortc/blob/main/examples/videostream-cli/cli.py), after upgrading some libraries. I upgraded the pyAV library from version 8.0.2 to version 9.2.0 and the aiortc library from version 1.2.1 to version 1.3.2. Now, when running the cli.py example, I run into this unpack error:
> WARNING:aiortc.rtcdtlstransport:RTCDtlsTransport(client) Traceback (most recent call last):
> File "/usr/local/lib/python3.7/dist-packages/aiortc/rtcdtlstransport.py", line 498, in __run
> await self._recv_next()
> File "/usr/local/lib/python3.7/dist-packages/aiortc/rtcdtlstransport.py", line 599, in _recv_next
> await self._handle_rtp_data(data, arrival_time_ms=arrival_time_ms)
> File "/usr/local/lib/python3.7/dist-packages/aiortc/rtcdtlstransport.py", line 542, in _handle_rtp_data
> packet = RtpPacket.parse(data, self._rtp_header_extensions_map)
> File "/usr/local/lib/python3.7/dist-packages/aiortc/rtp.py", line 687, in parse
> packet.extensions = extensions_map.get(extension_profile, extension_value)
> File "/usr/local/lib/python3.7/dist-packages/aiortc/rtp.py", line 84, in get
> values.abs_send_time = unpack("!L", b"\00" + x_value)[0]
> struct.error: unpack requires a buffer of 4 bytes
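The failing line can be reproduced in isolation with the stdlib — `abs_send_time` is a 24-bit field, so the parser prepends one zero byte and expects exactly three bytes of payload (the byte values below are made up):

```python
import struct

x_value = b"\x01\x02\x03"  # well-formed 3-byte abs-send-time payload
print(struct.unpack("!L", b"\x00" + x_value)[0])  # 66051 == 0x010203

try:
    # A shorter payload, as produced by the mismatched header parsing,
    # reproduces the reported error exactly:
    struct.unpack("!L", b"\x00" + b"\x01\x02")
except struct.error as exc:
    print(exc)  # unpack requires a buffer of 4 bytes
```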
I've done some more digging into this issue, and here is what I've found. I spent a lot of time comparing the behavior between the two versions of these libraries, since the error only occurs with the upgraded ones. The unpack error is thrown while parsing the header of a packet carrying a video frame from the client media player track, transmitting a video with audio. So I dug into why the same video generates packets with different headers across the two versions, and managed to narrow the issue down to this function in the rtp.py file: [set(self, values: HeaderExtensions)](https://github.com/aiortc/aiortc/blob/main/src/aiortc/rtp.py).
Using the same video file as a control, the new libraries end up adding different extension values to the header, because they cause this condition in the set function to be triggered:
`if values.audio_level is not None and self.__ids.audio_level:`
This condition causes the extension value to look like this with the new libraries (pyAV 9.2.0, aiortc 1.3.2). This output was taken by logging the extension value in the parse classmethod of the RtpPacket class:
`AIORTC extension value: b'\x100 \x7f'`
When the subsequent packet with this extension value is unpacked, the above error is generated by the answer role. Meanwhile, the same packet generates this different extension value with the old libraries (pyAV 8.0.2, aiortc 1.2.1):
`AIORTC extension value: b'\x101"8U=\x00\x00'`
Has anyone encountered this issue before? The error should be easy to reproduce by running the cli.py example with the library versions that throw the error, using any video with audio as a control. In my runs I also made sure to enable logging and used the `--play-from` and `--record-to` parameters in the offer/answer roles respectively.
"duplicate"
] | luz-camacho | 3 |
d2l-ai/d2l-en | tensorflow | 2,602 | AttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import) | Hello,
I am reading chapter 9 of d2l.
I tried to import the libraries as follows:
```
%matplotlib inline
import torch
from torch import nn
from d2l import torch as d2l
```
But I got error
`AttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)`
I am using pytorch 2.3 and d2l==1.0.3
| open | 2024-06-01T08:58:22Z | 2024-06-01T08:58:22Z | https://github.com/d2l-ai/d2l-en/issues/2602 | [] | Nevermetyou65 | 0 |
xonsh/xonsh | data-science | 5,755 | Arch Linux: test_history_dummy failure | ## Current Behavior
<!---
For general xonsh issues, please try to replicate the failure using `xonsh --no-rc --no-env`.
Short, reproducible code snippets are highly appreciated.
You can use `$XONSH_SHOW_TRACEBACK=1`, `$XONSH_TRACE_SUBPROC=2`, or `$XONSH_DEBUG=1`
to collect more information about the failure.
-->
The test `test_history_dummy` fails to pass.
System Information:
- OS: Arch Linux
- Python: tested with 3.12.7 and 3.13.1, same result
Traceback (if applicable): see below
<details>
```
============================= test session starts ==============================
platform linux -- Python 3.13.1, pytest-8.3.3, pluggy-1.5.0
cachedir: .cache/pytest
rootdir: /build/xonsh/src/xonsh-0.19.0
configfile: setup.cfg
testpaths: tests
collected 5154 items / 1 error
==================================== ERRORS ====================================
_____________ ERROR collecting tests/history/test_history_dummy.py _____________
tests/history/test_history_dummy.py:9: in <module>
@pytest.mark.parametrize("backend", ["dummy", DummyHistory, DummyHistory()])
xonsh/history/base.py:82: in __init__
self.ignore_regex # Tap the ignore regex to validate it # noqa
/usr/lib/python3.13/functools.py:1039: in __get__
val = self.func(instance)
xonsh/history/base.py:196: in ignore_regex
regex = XSH.env.get("XONSH_HISTORY_IGNORE_REGEX")
E AttributeError: 'NoneType' object has no attribute 'get'
=============================== warnings summary ===============================
tests/completers/test_python.py:107
/build/xonsh/src/xonsh-0.19.0/tests/completers/test_python.py:107: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=2, reruns_delay=2)
tests/procs/test_pipelines.py:89
/build/xonsh/src/xonsh-0.19.0/tests/procs/test_pipelines.py:89: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=2)
tests/procs/test_specs.py:93
/build/xonsh/src/xonsh-0.19.0/tests/procs/test_specs.py:93: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=2)
tests/procs/test_specs.py:162
/build/xonsh/src/xonsh-0.19.0/tests/procs/test_specs.py:162: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=2)
tests/procs/test_specs.py:183
/build/xonsh/src/xonsh-0.19.0/tests/procs/test_specs.py:183: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=2)
tests/procs/test_specs.py:194
/build/xonsh/src/xonsh-0.19.0/tests/procs/test_specs.py:194: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=1)
tests/procs/test_specs.py:248
/build/xonsh/src/xonsh-0.19.0/tests/procs/test_specs.py:248: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=1)
tests/procs/test_specs.py:272
/build/xonsh/src/xonsh-0.19.0/tests/procs/test_specs.py:272: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=1)
tests/test_integrations.py:750
/build/xonsh/src/xonsh-0.19.0/tests/test_integrations.py:750: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=4, reruns_delay=2)
tests/test_integrations.py:1340
/build/xonsh/src/xonsh-0.19.0/tests/test_integrations.py:1340: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=1)
tests/test_integrations.py:1350
/build/xonsh/src/xonsh-0.19.0/tests/test_integrations.py:1350: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=1)
tests/test_integrations.py:1373
/build/xonsh/src/xonsh-0.19.0/tests/test_integrations.py:1373: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=1)
tests/test_integrations.py:1391
/build/xonsh/src/xonsh-0.19.0/tests/test_integrations.py:1391: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=1)
tests/test_integrations.py:1411
/build/xonsh/src/xonsh-0.19.0/tests/test_integrations.py:1411: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
@pytest.mark.flaky(reruns=3, reruns_delay=1)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
ERROR tests/history/test_history_dummy.py - AttributeError: 'NoneType' object...
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
======================== 14 warnings, 1 error in 1.01s =========================
```
</details>
## Expected Behavior
The entire test suite to pass when running `pytest`.
## xonfig
Not applicable.
## For community
โฌ๏ธ **Please click the ๐ reaction instead of leaving a `+1` or ๐ comment**
| open | 2024-12-18T15:10:21Z | 2025-02-03T09:34:23Z | https://github.com/xonsh/xonsh/issues/5755 | [
"tests",
"linux",
"good first issue",
"history"
] | dbermond | 2 |
encode/httpx | asyncio | 3,424 | Drop-in replacement for `proxies` argument in (httpx.post) | Hi,
Since version `0.28`, the `proxies` argument is fully deprecated and removed. #2879 indicates that we should use `mounts` if we have multiple schemes to be proxied over. However, `httpx.post` doesn't have a `mounts` argument.

Is there a way to

1. use the new `proxy` argument, specifying both `http` & `https` schemes?
2. add an argument `mounts` to `httpx.post` and relevant methods?
Thanks. | closed | 2024-11-29T03:05:57Z | 2024-12-02T14:19:20Z | https://github.com/encode/httpx/issues/3424 | [] | qqaatw | 8 |
autokey/autokey | automation | 212 | Fatal error if user enters an invalid regex | ## Classification:
Crash / locks up
## Reproducibility:
Always
## Summary
We don't seem to do any validation on the user's input for regex matching. The app then doesn't gracefully handle a regex string which Python `re` doesn't parse.
## Steps to Reproduce (if applicable)
1. Enter a malformed regex for a phrase, like the single character, `*`.
2. Attempt to save this change.
3. Try to trigger the phrase.
## Expected Results
**Good:** The app shouldn't crash.
**Better:** It should trap and output `re` errors somewhere.
**Best:** the app should validate the user input by attempting an `re` compile when a user provides a regex. If it fails, then the regex won't be saved.
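The "best" option is only a few lines with the stdlib — compile the pattern at save time and refuse to persist it on failure. A minimal sketch (the helper name is made up):

```python
import re
from typing import Optional

def regex_error(pattern: str) -> Optional[str]:
    """Return an error message if `pattern` is invalid, else None."""
    try:
        re.compile(pattern)
    except re.error as exc:
        return f"Invalid regular expression: {exc}"
    return None

print(regex_error("*"))           # invalid: "Invalid regular expression: nothing to repeat ..."
print(regex_error("abbrev\\d+"))  # None
```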
## Actual Results
The app progressively locks up, and the UI needs to be killed. The server then crashes.
## Version
AutoKey version: **0.95.4**
Used GUI (Gtk, Qt, or both): **Gtk**
Distro: **Fedora**
## Notes
I think this would be a good first PR for me to work on. | closed | 2018-11-07T02:31:16Z | 2020-02-16T20:53:02Z | https://github.com/autokey/autokey/issues/212 | [
"bug"
] | dogweather | 11 |
gradio-app/gradio | data-visualization | 9,900 | Sketchpad components no longer resize auto-sized. | ### Describe the bug
This is the version 5.5.0.

This is the version 4.36.1.

I don't know if this is a bug or a new feature. Is there a solution to it?
### Have you searched existing issues? ๐
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
image = gr.Sketchpad(
sources=["upload", "clipboard", "webcam"],
type="pil",
label="Image",
)
demo.launch()
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
gradio==5.5.0
gradio==4.36.1
```
### Severity
I can work around it | closed | 2024-11-05T09:24:02Z | 2024-11-06T15:24:07Z | https://github.com/gradio-app/gradio/issues/9900 | [
"bug"
] | zhulinyv | 1 |
gevent/gevent | asyncio | 2,008 | AttributeError: module 'select' has no attribute 'epoll' | * gevent version: 23.9.1
* Python version: 3.11
* greenlet version: 3.0.2
* Operating System/ Destro:
> VERSION="18.04.6 LTS (Bionic Beaver)"
> ID=ubuntu
> ID_LIKE=debian
> PRETTY_NAME="Ubuntu 18.04.6 LTS"
> VERSION_ID="18.04"
> HOME_URL="https://www.ubuntu.com/"
> SUPPORT_URL="https://help.ubuntu.com/"
> BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
> PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
> VERSION_CODENAME=bionic
> UBUNTU_CODENAME=bionic
>
### Description:
I have a Django application running on gunicorn, with gevent specified as the worker class in the gunicorn configuration.
Since upgrading to Python 3.11, I get the `AttributeError: module 'select' has no attribute 'epoll'` error whenever something tries to access `epoll` from `select`. When I remove gevent from the worker class, the error goes away.
```python-traceback
[2023-10-10 12:06:56 +0000] [4254] [INFO] Worker exiting (pid: 4254)
[2023-10-10 12:07:00 +0000] [4232] [INFO] Shutting down: Master
[2023-10-10 12:07:00 +0000] [4232] [INFO] Reason: Worker failed to boot.
[2023-10-10 12:07:04 +0000] [4264] [DEBUG] Current configuration:
config: None
bind: ['unix:/run/anahit.sock']
backlog: 2048
workers: 1
worker_class: gevent
threads: 1
worker_connections: 1000
max_requests: 0
max_requests_jitter: 0
timeout: 120
graceful_timeout: 30
keepalive: 2
limit_request_line: 4094
limit_request_fields: 100
limit_request_field_size: 8190
reload: False
reload_engine: auto
reload_extra_files: []
spew: False
check_config: False
preload_app: False
sendfile: None
reuse_port: False
chdir: /home/anahit_web/anahit_website/app
daemon: False
raw_env: []
pidfile: None
worker_tmp_dir: None
user: 1000
group: 33
umask: 0
initgroups: False
tmp_upload_dir: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
forwarded_allow_ips: ['127.0.0.1']
accesslog: /home/anahit_web/logs/access.log
disable_redirect_access_to_syslog: False
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
errorlog: /home/anahit_web/logs/anahit_gunicorn.log
loglevel: debug
capture_output: True
logger_class: gunicorn.glogging.Logger
logconfig: None
logconfig_dict: {}
syslog_addr: udp://localhost:514
syslog: False
syslog_prefix: None
syslog_facility: user
enable_stdio_inheritance: True
statsd_host: None
dogstatsd_tags:
statsd_prefix:
proc_name: None
default_proc_name: anahit.wsgi:application
pythonpath: None
paste: None
on_starting: <function OnStarting.on_starting at 0x7f93d1e15440>
on_reload: <function OnReload.on_reload at 0x7f93d1e15580>
when_ready: <function WhenReady.when_ready at 0x7f93d1e156c0>
pre_fork: <function Prefork.pre_fork at 0x7f93d1e15800>
post_fork: <function Postfork.post_fork at 0x7f93d1e15940>
post_worker_init: <function PostWorkerInit.post_worker_init at 0x7f93d1e15a80>
worker_int: <function WorkerInt.worker_int at 0x7f93d1e15bc0>
worker_abort: <function WorkerAbort.worker_abort at 0x7f93d1e15d00>
pre_exec: <function PreExec.pre_exec at 0x7f93d1e15e40>
pre_request: <function PreRequest.pre_request at 0x7f93d1e15f80>
post_request: <function PostRequest.post_request at 0x7f93d1e16020>
child_exit: <function ChildExit.child_exit at 0x7f93d1e16160>
worker_exit: <function WorkerExit.worker_exit at 0x7f93d1e162a0>
nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7f93d1e163e0>
on_exit: <function OnExit.on_exit at 0x7f93d1e16520>
proxy_protocol: False
proxy_allow_ips: ['127.0.0.1']
keyfile: None
certfile: None
ssl_version: 2
cert_reqs: 0
ca_certs: None
suppress_ragged_eofs: True
do_handshake_on_connect: False
ciphers: None
raw_paste_global_conf: []
strip_header_spaces: False
[2023-10-10 12:07:04 +0000] [4264] [INFO] Starting gunicorn 20.0.2
[2023-10-10 12:07:04 +0000] [4264] [DEBUG] Arbiter booted
[2023-10-10 12:07:04 +0000] [4264] [INFO] Listening at: unix:/run/anahit.sock (4264)
[2023-10-10 12:07:04 +0000] [4264] [INFO] Using worker: gevent
[2023-10-10 12:07:04 +0000] [4285] [INFO] Booting worker with pid: 4285
[2023-10-10 12:07:04 +0000] [4264] [DEBUG] 1 workers
[2023-10-10 12:07:21 +0000] [4285] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/gunicorn/workers/ggevent.py", line 162, in init_process
super().init_process()
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/gunicorn/workers/base.py", line 119, in init_process
self.load_wsgi()
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
self.wsgi = self.app.wsgi()
^^^^^^^^^^^^^^^
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/gunicorn/app/base.py", line 66, in wsgi
self.callable = self.load()
^^^^^^^^^^^
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
return self.load_wsgiapp()
^^^^^^^^^^^^^^^^^^^
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
return util.import_app(self.app_uri)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/gunicorn/util.py", line 358, in import_app
mod = importlib.import_module(module)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/anahit_web/anahit_website/app/anahit/wsgi.py", line 16, in <module>
application = get_wsgi_application()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
django.setup(set_prefix=False)
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/django/apps/registry.py", line 124, in populate
app_config.ready()
File "/home/anahit_web/anahit_website/app/apps/users/apps.py", line 9, in ready
from . import signals
File "/home/anahit_web/anahit_website/app/apps/users/signals.py", line 11, in <module>
from common.utils import send_email_notification
File "/home/anahit_web/anahit_website/app/common/utils.py", line 28, in <module>
from discord_webhook import DiscordWebhook
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/discord_webhook/__init__.py", line 5, in <module>
from .async_webhook import AsyncDiscordWebhook
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/discord_webhook/async_webhook.py", line 13, in <module>
import httpx # noqa
^^^^^^^^^^^^
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/httpx/__init__.py", line 2, in <module>
from ._api import delete, get, head, options, patch, post, put, request, stream
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/httpx/_api.py", line 4, in <module>
from ._client import Client
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/httpx/_client.py", line 30, in <module>
from ._transports.default import AsyncHTTPTransport, HTTPTransport
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/httpx/_transports/default.py", line 30, in <module>
import httpcore
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/httpcore/__init__.py", line 1, in <module>
from ._api import request, stream
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/httpcore/_api.py", line 5, in <module>
from ._sync.connection_pool import ConnectionPool
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/httpcore/_sync/__init__.py", line 1, in <module>
from .connection import HTTPConnection
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 12, in <module>
from .._synchronization import Lock
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/httpcore/_synchronization.py", line 13, in <module>
import trio
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/trio/__init__.py", line 19, in <module>
from ._core import TASK_STATUS_IGNORED as TASK_STATUS_IGNORED # isort: skip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/trio/_core/__init__.py", line 21, in <module>
from ._local import RunVar
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/trio/_core/_local.py", line 5, in <module>
from . import _run
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/trio/_core/_run.py", line 2543, in <module>
from ._io_epoll import EpollIOManager as TheIOManager
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/trio/_core/_io_epoll.py", line 189, in <module>
class EpollIOManager:
File "/home/anahit_web/anahit_311/lib/python3.11/site-packages/trio/_core/_io_epoll.py", line 190, in EpollIOManager
_epoll = attr.ib(factory=select.epoll)
^^^^^^^^^^^^
AttributeError: module 'select' has no attribute 'epoll'
```
Please refer to [this](https://github.com/python-trio/trio/issues/2848) issue created in trio.
| closed | 2023-12-18T11:12:49Z | 2023-12-20T06:01:59Z | https://github.com/gevent/gevent/issues/2008 | [
"Type: Question",
"Status: cantfix"
] | kushalmraut | 2 |
napari/napari | numpy | 7,447 | Add ability to Select All Layers with GUI | ## 🚀 Feature
~~Add the ability to 'Delete All Layers' from within the GUI.~~
Add the ability to `Select All Layers` from within the GUI. Edited because `Select All` can also be used in other scenarios, including `Delete Selected Layers`.
## Motivation
Presently, a user is able to delete selected layers with the trashcan GUI icon (or `Del` key). However, if the GUI fills up with many layers, selecting all the layers can become cumbersome for users, especially for those with mobility challenges, trackpads, touchpads, or pens. I find that frequently the layer list can become quite long, especially if a multi-dimensional image has a dimension read in to populate individual layers, or if I'm trying to evaluate parameters in an analysis and continually add layers.
Edit: The most important part of this feature is the ability to `Select All`, which is useful for example:
1. `Merge to Stack`. Useful for file formats which split a dim into individual images, for most users I think they end up with each of these images as a layer, instead of reading in as a stack.
2. `Link Layers`
3. `Delete selected layers`
4. `Merge to RGB`
## Pitch
1. Add `Select All Layers` button above Layer List
2. Add `Ctrl+A` (or equivalent) keybinding to `Select All Layers` - @Czaki suggestion
## Alternatives
1. Moved to Alternative: Add a special right-click context action to the trashcan icon to 'Delete All Layers'. Additionally, this could also contain a 'Delete all Unselected Layers' special context as well. Downside is the lack of intuitive access to the right-click actions right now.
2. Add an additional icon next to the current Trashcan that serves as the 'Delete All Trashcan'. I imagine accidentally pressing this could be very frustrating. Adds bloat to UI. <- Least likely
3. Add this ability to the Layers menu -- both `Select All Layers` and `Delete All Layers`. This would not match the current flow and use of napari. Also, myself and @jni consider this `Layers` -> `Delete All Layers` to be risky.
4. Add `shift-del` default keybinding to 'Delete All Layers' action.
| open | 2024-12-13T18:00:43Z | 2024-12-15T04:24:45Z | https://github.com/napari/napari/issues/7447 | [
"feature"
] | TimMonko | 11 |
cobrateam/splinter | automation | 347 | Encoding problem | Hello guys! Here is my problem. When I'm doing this:

```python
field = browser.find_by_xpath('//*[@id="ip_form"]/div/div[2]/input')
field.fill('STUFF')
```

everything is OK, but if I want something like this:

```python
field = browser.find_by_xpath('//*[@id="ip_form"]/div/div[2]/input')
field.fill("ХУЙ")
```

it fails with:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\splinter\driver\webdriver\__init__.py", line 351, in fill
    self.value = value
  File "C:\Python27\lib\site-packages\splinter\driver\webdriver\__init__.py", line 338, in _set_value
    self._element.send_keys(value)
  File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webelement.py", line 303, in send_keys
    self._execute(Command.SEND_KEYS_TO_ELEMENT, {'value': typing})
  File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webelement.py", line 385, in _execute
    return self._parent.execute(command, params)
  File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 171, in execute
    response = self.command_executor.execute(driver_command, params)
  File "C:\Python27\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 346, in execute
    data = utils.dump_json(params)
  File "C:\Python27\lib\site-packages\selenium\webdriver\remote\utils.py", line 30, in dump_json
    return json.dumps(json_struct)
  File "C:\Python27\lib\json\__init__.py", line 243, in dumps
    return _default_encoder.encode(obj)
  File "C:\Python27\lib\json\encoder.py", line 207, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "C:\Python27\lib\json\encoder.py", line 270, in iterencode
    return _iterencode(o, 0)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x95 in position 0: invalid start byte
```
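If it helps anyone landing here: on Python 2, this error usually means `fill()` received a byte string containing non-ASCII bytes, which `json.dumps` then fails to encode. Decoding the bytes to text first avoids it. A hedged sketch of the workaround (the commented `field.fill` call reuses the element from the snippet above; the decode branch only triggers on Python 2, where plain literals are byte strings):

```python
# -*- coding: utf-8 -*-
text = "ХУЙ"

# On Python 2 a plain literal is a byte string; decode it to unicode text first.
# On Python 3 str is already text, so this branch is simply skipped.
if isinstance(text, bytes):
    text = text.decode("utf-8")

# field.fill(text)  # hypothetical call: 'field' comes from the snippet above
print(type(text).__name__)
```

After the decode, Selenium receives a text object and the JSON serialization no longer chokes.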
| closed | 2014-10-14T15:22:07Z | 2020-07-15T14:19:18Z | https://github.com/cobrateam/splinter/issues/347 | [
"NeedsInvestigation"
] | CSPZC | 1 |
recommenders-team/recommenders | data-science | 1,992 | [FEATURE] Upgrade fastai to v2 | ### Description
<!--- Describe your expected feature in detail -->
fastai has breaking changes in v2, which makes [the code in recommenders](https://github.com/recommenders-team/recommenders/actions/runs/6168006071/job/16739851647#step:3:2847) incompatible with fastai v2. For example, [`CollabDataBunch`](https://github.com/fastai/fastai1/blob/15bc02c61673400178fbdaf3417485755c299561/fastai/collab.py#L50) is removed in [fastai v2](https://github.com/fastai/fastai/blob/master/fastai/collab.py).
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
It should be compatible with fastai v2 because fastai v1 is not maintained.
### Other Comments
There are 3 repos related to fastai:
* [fastai1](https://github.com/fastai/fastai1): fastai v1
* [fastai2](https://github.com/fastai/fastai2): fastai v2 (archived)
* [fastai](https://github.com/fastai/fastai): fastai v2+
See also
* [Fastai V2 Upgrade Review Guide](https://forums.fast.ai/t/fastai-v2-upgrade-review-guide-whats-new-in-fastai-version-2/86626)
* https://github.com/recommenders-team/recommenders/pull/1988: https://github.com/recommenders-team/recommenders/pull/1988/commits/97deed179525b2f8bc6d4f68f973e7cfb9676f6f | closed | 2023-09-13T06:12:45Z | 2024-04-08T14:53:12Z | https://github.com/recommenders-team/recommenders/issues/1992 | [
"enhancement"
] | SimonYansenZhao | 2 |
allenai/allennlp | nlp | 5,217 | TypeError: ArrayField.empty_field: return type `None` is not a `<class 'allennlp.data.fields.field.Field'> | I used Colab for my project.
I installed `allennlp==0.9.0`.
When I import `from allennlp.predictors.predictor import Predictor`,
I get an error as below:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-20-f25591a9ae36> in <module>()
2 nlp = spacy.load("en_core_web_sm")
3
----> 4 from allennlp.predictors.predictor import Predictor
5 predictor = Predictor.from_path("https://s3-us-west-2.amazonaws.com/allennlp/models/elmo-constituency-parser-2018.03.14.tar.gz")
12 frames
/usr/local/lib/python3.7/dist-packages/allennlp/predictors/__init__.py in <module>()
7 a ``Predictor`` that wraps it.
8 """
----> 9 from allennlp.predictors.predictor import Predictor
10 from allennlp.predictors.atis_parser import AtisParserPredictor
11 from allennlp.predictors.biaffine_dependency_parser import BiaffineDependencyParserPredictor
/usr/local/lib/python3.7/dist-packages/allennlp/predictors/predictor.py in <module>()
10 from allennlp.common.checks import ConfigurationError
11 from allennlp.common.util import JsonDict, sanitize
---> 12 from allennlp.data import DatasetReader, Instance
13 from allennlp.data.dataset import Batch
14 from allennlp.models import Model
/usr/local/lib/python3.7/dist-packages/allennlp/data/__init__.py in <module>()
2 from allennlp.data.fields.field import DataArray, Field
3 from allennlp.data.instance import Instance
----> 4 from allennlp.data.iterators.data_iterator import DataIterator
5 from allennlp.data.token_indexers.token_indexer import TokenIndexer, TokenType
6 from allennlp.data.tokenizers.token import Token
/usr/local/lib/python3.7/dist-packages/allennlp/data/iterators/__init__.py in <module>()
4 """
5
----> 6 from allennlp.data.iterators.data_iterator import DataIterator
7 from allennlp.data.iterators.basic_iterator import BasicIterator
8 from allennlp.data.iterators.bucket_iterator import BucketIterator
/usr/local/lib/python3.7/dist-packages/allennlp/data/iterators/data_iterator.py in <module>()
11 from allennlp.common.util import is_lazy, lazy_groups_of, ensure_list
12 from allennlp.data.dataset import Batch
---> 13 from allennlp.data.fields import MetadataField
14 from allennlp.data.instance import Instance
15 from allennlp.data.vocabulary import Vocabulary
/usr/local/lib/python3.7/dist-packages/allennlp/data/fields/__init__.py in <module>()
5
6 from allennlp.data.fields.field import Field
----> 7 from allennlp.data.fields.array_field import ArrayField
8 from allennlp.data.fields.adjacency_field import AdjacencyField
9 from allennlp.data.fields.index_field import IndexField
/usr/local/lib/python3.7/dist-packages/allennlp/data/fields/array_field.py in <module>()
8
9
---> 10 class ArrayField(Field[numpy.ndarray]):
11 """
12 A class representing an array, which could have arbitrary dimensions.
/usr/local/lib/python3.7/dist-packages/allennlp/data/fields/array_field.py in ArrayField()
48 return tensor
49
---> 50 @overrides
51 def empty_field(self): # pylint: disable=no-self-use
52 # Pass the padding_value, so that any outer field, e.g., `ListField[ArrayField]` uses the
/usr/local/lib/python3.7/dist-packages/overrides/overrides.py in overrides(method, check_signature, check_at_runtime)
86 """
87 if method is not None:
---> 88 return _overrides(method, check_signature, check_at_runtime)
89 else:
90 return functools.partial(
/usr/local/lib/python3.7/dist-packages/overrides/overrides.py in _overrides(method, check_signature, check_at_runtime)
112 return wrapper # type: ignore
113 else:
--> 114 _validate_method(method, super_class, check_signature)
115 return method
116 raise TypeError(f"{method.__qualname__}: No super class method found")
/usr/local/lib/python3.7/dist-packages/overrides/overrides.py in _validate_method(method, super_class, check_signature)
133 and not isinstance(super_method, property)
134 ):
--> 135 ensure_signature_is_compatible(super_method, method, is_static)
136
137
/usr/local/lib/python3.7/dist-packages/overrides/signature.py in ensure_signature_is_compatible(super_callable, sub_callable, is_static)
91
92 if super_type_hints is not None and sub_type_hints is not None:
---> 93 ensure_return_type_compatibility(super_type_hints, sub_type_hints, method_name)
94 ensure_all_kwargs_defined_in_sub(
95 super_sig, sub_sig, super_type_hints, sub_type_hints, is_static, method_name
/usr/local/lib/python3.7/dist-packages/overrides/signature.py in ensure_return_type_compatibility(super_type_hints, sub_type_hints, method_name)
286 if not _issubtype(sub_return, super_return) and super_return is not None:
287 raise TypeError(
--> 288 f"{method_name}: return type `{sub_return}` is not a `{super_return}`."
289 )
TypeError: ArrayField.empty_field: return type `None` is not a `<class 'allennlp.data.fields.field.Field'>`.
Please help! | closed | 2021-05-21T03:43:20Z | 2022-02-13T17:10:44Z | https://github.com/allenai/allennlp/issues/5217 | [
"question"
] | kaloon2009 | 6 |
ccxt/ccxt | api | 25,530 | [BUG] CCXT js fetchFundingHistory method calls keep repeating requests and promises stay pending | ### Operating System
_No response_
### Programming Languages
js
### CCXT Version
4.4.65
### Description
`fetchFundingHistory` method calls keep repeating requests, and the promises in the code stay pending; it feels like an infinite loop has been hit somewhere. I tested bitget and gate and both behave the same, even though there are actually only two data entries.

### Code
```js
await exchange.loadMarkets();
const res = await exchange.fetchFundingHistory(symbol);
``` | closed | 2025-03-22T08:00:17Z | 2025-03-22T15:20:50Z | https://github.com/ccxt/ccxt/issues/25530 | [] | flytam | 5 |
QuivrHQ/quivr | api | 3,414 | Test on a small proprietary dataset with mistral small and gpt-4o | closed | 2024-10-22T13:47:16Z | 2025-01-20T10:03:53Z | https://github.com/QuivrHQ/quivr/issues/3414 | [] | chloedia | 1 | |
JaidedAI/EasyOCR | deep-learning | 654 | Support for Handwriting Text Recognition | May I know when the feature will be ready? | closed | 2022-01-28T03:33:03Z | 2022-08-25T10:52:28Z | https://github.com/JaidedAI/EasyOCR/issues/654 | [] | cteckwee | 0 |
sktime/sktime | data-science | 7,222 | [ENH] Inconsistent handling of _steps_attr in HeterogenousMetaEstimator inheritors | **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
The issue involves inconsistent handling of the `_steps_attr` attribute in several classes inheriting from `_HeterogenousMetaEstimator`. Specifically, the `_steps_attr` attribute is either missing or incorrectly formatted in these classes, causing issues with parameter management, particularly for visualizing (html repr). There are two main cases:
1. Pipelines with a single estimator and transformer(s) (e.g., `ParamFitterPipeline`, `PwTrafoPanelPipeline`). These classes inherit `_steps_attr`, but it points to `_steps`, which does not exist. They require a new `_steps` attribute that combines transformers and the estimator.
2. Classes where `_steps_attr` points to an incorrectly formatted list of estimators (e.g., `GroupbyCategoryForecaster`, `FeatureUnion`). These need `_steps_attr` to point to an iterable of `(name: str, estimator, ...)` tuples instead of the current format.
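A minimal sketch of what case 1 asks for; the attribute names here (`transformers_`, `estimator_`) are illustrative stand-ins, not sktime's actual attributes. The idea is a `_steps` property that zips the transformers and the final estimator into `(name, estimator)` tuples, so that `_steps_attr = "_steps"` resolves to something real:

```python
class PipelineSketch:
    """Illustrative stand-in for a transformers-plus-estimator pipeline."""

    _steps_attr = "_steps"

    def __init__(self, transformers, estimator):
        self.transformers_ = transformers
        self.estimator_ = estimator

    @property
    def _steps(self):
        # combine the transformers and the final estimator into (name, obj) tuples
        steps = [(f"transformer_{i}", t) for i, t in enumerate(self.transformers_)]
        steps.append(("estimator", self.estimator_))
        return steps

pipe = PipelineSketch(transformers=["scaler", "deseasonalizer"], estimator="forecaster")
print(getattr(pipe, pipe._steps_attr))
```

Case 2 would then be solved the same way: point `_steps_attr` at a property that emits correctly shaped `(name, estimator)` tuples instead of a bare list of estimators.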
**Additional context**
<!--
Add any other context about the problem here.
-->
The issue was identified while investigating: https://github.com/sktime/sktime/issues/7152
For a detailed list of impacted classes refer to the following comment: https://github.com/sktime/sktime/issues/7152#issuecomment-2392006316
Potential solution: https://github.com/sktime/sktime/issues/7152#issuecomment-2392289976 | open | 2024-10-04T11:26:39Z | 2024-11-11T19:50:40Z | https://github.com/sktime/sktime/issues/7222 | [
"enhancement",
"module:base-framework"
] | mateuszkasprowicz | 3 |
STVIR/pysot | computer-vision | 153 | Why has no one asked about there being no matching opencv-python for Python 3.7 on Ubuntu? All the workarounds in online posts are for Windows | Why has no one asked about there being no matching opencv-python for Python 3.7 on Ubuntu? All the workarounds in online forum posts are for Windows. | closed | 2019-08-10T07:47:35Z | 2019-08-10T15:53:56Z | https://github.com/STVIR/pysot/issues/153 | [] | JensenHJS | 1 |
xuebinqin/U-2-Net | computer-vision | 28 | Image preprocessing | Closed. | closed | 2020-05-21T17:36:52Z | 2020-05-22T13:54:00Z | https://github.com/xuebinqin/U-2-Net/issues/28 | [] | bluesky314 | 0 |
tensorpack/tensorpack | tensorflow | 951 | DoReFa Net gradient extraction/ trainer function gradient extraction | Hi YuXin Wu,
I am reading the DoReFa-Net paper and have already learned a lot from it and from the code you published on the website. You use a gradient quantization method to quantize the gradient during training, and the trainer class you use is the SimpleTrainer class. I looked into the source code of SimpleTrainer, and there is a function to compute the gradients of the training process.
My question is that I want to use the tf.Session.run() method to see the quantized gradient. How should I do this?
I can think of two methods. First, do this in the customized gradient function in dorefa.py; is it possible to do that there? Second, use session.run(grad) to print the gradient. Where can I find where the session for the graph is started, and if I use session.run(), is what I get the quantized gradient?
Thank you!
Best regards,
Tong Wu
| closed | 2018-10-27T22:14:06Z | 2018-11-04T14:23:40Z | https://github.com/tensorpack/tensorpack/issues/951 | [
"usage"
] | tongwu92623 | 6 |
developmentseed/lonboard | jupyter | 573 | Easier deck.gl debugging for Lonboard issues | In https://github.com/developmentseed/lonboard/pull/562#issuecomment-2229473126 we're hitting a deck.gl bug when trying to upgrade from deck.gl v8 to v9. The upstream issue was closed and believed to be fixed. So we need to provide a reproducible, minimal example, but we can't ask deck.gl contributors to learn Python and learn Lonboard.
We should have some documentation and/or testing utils to help construct a minimal JS app using data and layers exported from a Lonboard `Map` object. | open | 2024-07-15T22:18:00Z | 2024-07-15T22:18:00Z | https://github.com/developmentseed/lonboard/issues/573 | [] | kylebarron | 0 |
unit8co/darts | data-science | 2,166 | NBEATS :: RAM consumption goes on increasing while predicting | I am using the NBEATS model for forecasting. Since we are in a trial phase, we feed in past data, generate a forecast from it, and repeat these steps using a for-loop. We are predicting 1 year of data at a 15-minute interval, so the code picks up approx. 35,000 CSV files in the loop & predicts a forecast for each.
But while doing that, my workstation's RAM usage keeps increasing & reaches 100% after predicting approx. 10,000 files, at which point the code gets interrupted.
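Not a fix for the underlying leak, but a common mitigation pattern for long prediction loops is to avoid accumulating results in memory, drop references as you go, and force garbage collection periodically. A stdlib-only sketch of the loop shape (`predict_file` is a placeholder for the real load-and-predict step, not a darts API):

```python
import gc

def predict_file(path):
    # placeholder: load the CSV at `path`, run model.predict(), return the forecast
    return [0.0] * 96  # e.g. one day of 15-minute intervals

processed = 0
for i, path in enumerate(["a.csv", "b.csv", "c.csv"]):
    forecast = predict_file(path)
    # write `forecast` to disk here instead of appending it to an in-memory list
    processed += 1
    del forecast          # drop the reference so the memory can be reclaimed
    if i % 100 == 0:
        gc.collect()      # periodically force a collection inside very long loops
print(processed)
```

If memory still climbs with this shape, the references are being held inside the model/prediction call itself rather than by the loop.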
I am using Python version 3.11.5 & darts version 0.27.1.
System Configuration - 12th Gen Intel(R) Core(TM) i7-12700K 3.60 GHz, 64.0 GB (63.7 GB usable)
**Additional context**
I am facing this issue only with the NBEATS model & not with the TCN model.
| closed | 2024-01-15T10:02:11Z | 2024-01-29T15:47:00Z | https://github.com/unit8co/darts/issues/2166 | [
"bug",
"triage"
] | hberande | 3 |
pennersr/django-allauth | django | 3,868 | Headless mode: reset password not working | Thank you for enhancing the library by introducing headless mode.
However, I am unable to get the reset password feature to work.
I have tried using the classic mode, which involves a reset password HTML form endpoint. In classic mode, the reset password email is sent correctly. However, in headless mode, I receive an empty 400 response and no email is sent.
### Request 1:
`POST http://localhost:3000/api/v1/_allauth/browser/v1/auth/password/request`
post data: no form data sent
response: error 400
```
{
"status": 400,
"errors": [
{
"message": "This field is required.",
"code": "required",
"param": "email"
}
]
}
```
OK
### Request 2:
`POST http://localhost:3000/api/v1/_allauth/browser/v1/auth/password/request`
post data: email = invalid.email.invalid
error 400 empty response
**The response should not be empty.**
### Request 3:
`POST http://localhost:3000/api/v1/_allauth/browser/v1/auth/password/request`
post data: email = validemailoftheuser@example.com
**error 400 empty response again**
**The response should not be empty.** | closed | 2024-06-06T16:15:24Z | 2024-06-17T20:21:04Z | https://github.com/pennersr/django-allauth/issues/3868 | [
"Unconfirmed",
"Awaiting input"
] | gleniat | 2 |
pmaji/crypto-whale-watching-app | plotly | 63 | License | Is this open source and what is the license? | closed | 2018-02-23T02:35:14Z | 2018-02-23T03:42:47Z | https://github.com/pmaji/crypto-whale-watching-app/issues/63 | [] | anandanand84 | 2 |
mljar/mljar-supervised | scikit-learn | 35 | When trying to import AutoML it aborts | ```
>>> import pandas as pd
>>> from supervised.automl import AutoML
/root/python/MLwebsite/lib/python3.6/site-packages/sklearn/externals/joblib/__init__.py:15: DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
warnings.warn(msg, category=DeprecationWarning)
Aborted
```
This takes about 30 seconds to complete and ends with me being returned to bash rather than Python. I am just attempting to run the quick example in the README, and I have Python 3.6.3 in a brand-new venv with nothing but this installed via pip. | closed | 2019-11-18T21:20:41Z | 2019-11-18T23:56:29Z | https://github.com/mljar/mljar-supervised/issues/35 | [] | miqueet | 4 |
lux-org/lux | pandas | 501 | [BUG] Exporting to Streamlit | **Describe the bug**
According to the "Exporting to Streamlit" part of the docs, after running the example code it didn't display the HTML content, and I found one error in the browser console. The code is the same as in the docs:
```python
import streamlit as st
import streamlit.components.v1 as components
from pathlib import Path
import pandas as pd
import lux
def app():
st.title('Analysis of Happy Planet Index Dataset')
st.write('Check out these cool visualizations!')
df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/hpi.csv")
export_file = 'visualizations.html'
html_content = df.save_as_html(output=True)
components.html(html_content, width=800, height=350)
app()
```
**Screenshots**
Displayed:
<img width="1403" alt="image" src="https://github.com/lux-org/lux/assets/34061363/f6bada5c-42b5-409d-990c-8e23d932056f">
Error found:
<img width="1168" alt="image" src="https://github.com/lux-org/lux/assets/34061363/192a2ee6-e8b0-4dfc-8763-a299ade4c3e4">
**Expected behavior**
Displayed according to docs:

Could you tell me how to solve this problem? Ah... I'm not skilled at JavaScript-related things. Please help, thanks!
| open | 2023-10-09T02:41:20Z | 2023-10-09T02:41:20Z | https://github.com/lux-org/lux/issues/501 | [] | doufs | 0 |
huggingface/datasets | pytorch | 6,513 | Support huggingface-hub 0.20.0 | CI to test the support of `huggingface-hub` 0.20.0: https://github.com/huggingface/datasets/compare/main...ci-test-huggingface-hub-v0.20.0.rc1
We need to merge:
- #6510
- #6512
- #6516 | closed | 2023-12-19T15:15:46Z | 2023-12-20T08:44:45Z | https://github.com/huggingface/datasets/issues/6513 | [] | albertvillanova | 0 |
voila-dashboards/voila | jupyter | 1,511 | Disable the tree navigation? | Is there a mechanism to disable the tree navigation in voila? Essentially to disallow users to get to the tree navigation and browse through the folders and see all the other voila pages there are in directories.
I saw on some Google-cached pages there were mentions of `--no-tree`; however, I am not seeing that in the current [documentation](https://voila.readthedocs.io/en/stable/customize.html), and when I tried it, an unrecognized-flag error was thrown. | closed | 2024-11-26T19:09:20Z | 2025-01-21T20:18:57Z | https://github.com/voila-dashboards/voila/issues/1511 | [
"enhancement"
] | afonit | 0 |
BlinkDL/RWKV-LM | pytorch | 114 | What does the parameter my_pile_version mean? | What does the parameter `my_pile_version` mean, and how should it be set during training? | closed | 2023-05-16T12:42:01Z | 2023-05-18T06:49:23Z | https://github.com/BlinkDL/RWKV-LM/issues/114 | [] | ZTurboX | 1 |
plotly/dash-table | dash | 479 | Strategy for data / selected_rows / selected_row_ids | While investigating a community post about [expandable rows](https://community.plot.ly/t/in-need-of-a-good-way-to-create-collapsable-rows-row-grouping-with-dash-datatable/24967/2) I suggested a callback changing data based on selected rows - unfortunately this has problems: if you just use `selected_rows`, then selecting an earlier row will mean the index of a later selected row changes. We could, I suppose, work around this if we had a `selected_rows_timestamp` or something, and output both `data` and `selected_rows`.
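The index-shift problem is easy to see outside Dash: positional selections silently point at different rows after the data changes, while id-based selections survive. An illustrative stdlib sketch (plain dicts standing in for table rows):

```python
rows = [{"id": "a"}, {"id": "b"}, {"id": "c"}]

selected_rows = [1]        # positional selection: row "b"
selected_row_ids = ["b"]   # id-based selection: row "b"

# a callback then removes the first row from `data`
rows = rows[1:]

by_index = [rows[i]["id"] for i in selected_rows]               # -> ["c"]: wrong row!
by_id = [r["id"] for r in rows if r["id"] in selected_row_ids]  # -> ["b"]: still right

print(by_index, by_id)
```

This is the argument for treating `selected_row_ids` as the source of truth whenever the user supplies row IDs, and recomputing `selected_rows` from it after every `data` change.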
The more obvious way to work around it would be to use `selected_row_ids`. However, `selected_rows` and `selected_row_ids` don't update correctly when we change `data` - related to https://github.com/plotly/dash-table/issues/239 but it's tougher here as there isn't necessarily any link between prior and new rows. We could make such a link *if* the user has provided row IDs - in which case it would make sense to use `selected_row_ids` as the source of truth and update `selected_rows` to match.
Related: currently `selected_row_ids` is read-only; writing to it doesn't error but doesn't update the displayed table either. Would be nice if it could at least optionally be writable. Similarly for `active_cell` and related props, `row` and `column` must be used to write these props, `row_id` and `column_id` are ignored (and you get an ugly error "Cannot read property 'NaN' of undefined" if you don't provide `row` and `column`) | open | 2019-06-25T01:24:14Z | 2020-08-20T22:08:14Z | https://github.com/plotly/dash-table/issues/479 | [
"dash-type-enhancement"
] | alexcjohnson | 2 |
nvbn/thefuck | python | 717 | alias produces invalid code for fish as of 3.24 | As of 3.24 the output from `thefuck --alias` is incompatible with fish
**Output from 3.24**
```
$ thefuck --alias
function fuck () {
TF_PYTHONIOENCODING=$PYTHONIOENCODING;
export TF_ALIAS=fuck;
export TF_SHELL_ALIASES=$(alias);
export TF_HISTORY=$(fc -ln -10);
export PYTHONIOENCODING=utf-8;
TF_CMD=$(
thefuck THEFUCK_ARGUMENT_PLACEHOLDER $@
) && eval $TF_CMD;
unset TF_HISTORY;
export PYTHONIOENCODING=$TF_PYTHONIOENCODING;
history -s $TF_CMD;
}
$ thefuck --version
The Fuck 3.24 using Python 3.5.2
```
**Output from 3.23**
```
$ thefuck --alias
function fuck -d "Correct your previous console command"
set -l fucked_up_command $history[1]
env TF_ALIAS=fuck PYTHONIOENCODING=utf-8 thefuck $fucked_up_command | read -l unfucked_command
if [ "$unfucked_command" != "" ]
eval $unfucked_command
builtin history delete --exact --case-sensitive -- $fucked_up_command
builtin history merge ^ /dev/null
end
end
$ thefuck --version
The Fuck 3.23 using Python 3.5.2
```
**Fish version output**
```
$ fish --version
fish, version 2.6.0
``` | closed | 2017-10-25T11:49:55Z | 2019-01-01T10:30:17Z | https://github.com/nvbn/thefuck/issues/717 | [] | disjunto | 6 |
strawberry-graphql/strawberry | fastapi | 3,056 | Missing 'typer' and 'libcst' dependencies. | ## Describe the Bug
I noticed that when running the 'strawberry codegen' command, the required dependencies 'typer' and 'libcst' were missing.
This caused the command to fail with a 'ModuleNotFoundError.' As a workaround, I installed these dependencies manually to resolve the issue.
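A quick way to confirm which of the two modules is missing in a given environment (an illustrative stdlib check, not part of strawberry itself):

```python
import importlib.util

# check for the extra modules that `strawberry codegen` imports at runtime
status = {}
for mod in ("typer", "libcst"):
    status[mod] = importlib.util.find_spec(mod) is not None

missing = [mod for mod, ok in status.items() if not ok]
print("missing:", missing)  # anything listed here needs `pip install <name>`
```

Running this in the failing environment should list both modules until they are installed manually.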
## System Information
- Operating system: Ubuntu 23.04
- Python version: 3.11.4
- Strawberry version (if applicable): 0.205.0
## Steps to Reproduce
1. Install Strawberry.
``` bash
pipenv install strawberry-graphql==0.205.0
```
2. Attempt to run the 'strawberry codegen' command.
```bash
strawberry codegen --schema schema --output-dir . -p python query.graphql
```
4. Observe the 'ModuleNotFoundError' related to missing dependencies.
## Expected Behavior
The 'strawberry codegen' command should run successfully without missing dependencies. | closed | 2023-08-25T07:52:45Z | 2025-03-20T15:56:20Z | https://github.com/strawberry-graphql/strawberry/issues/3056 | [
"bug"
] | daudln | 3 |
nltk/nltk | nlp | 2,916 | In CI, refresh `nltk_data` cache if the hash of `index.xml` differs from the cached hash | Rather than simply caching `nltk_data` until the cache expires and it's forced to re-download the entire `nltk_data`, we should perform a check on the `index.xml` which refreshes the cache if it differs from some previous cache.
I would advise doing this in the same way that it's done for `requirements.txt`:
https://github.com/nltk/nltk/blob/59aa3fb88c04d6151f2409b31dcfe0f332b0c9ca/.github/workflows/ci.yaml#L103
i.e. with `${{ hashFiles('...') }}` using a downloaded `index.xml` from https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml. Alternatively, with some other hash function.
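The same idea outside of Actions: hash the index file's bytes and compare against the previously cached hash; only when they differ does the cache get refreshed. A stdlib sketch (the cache-key naming here is made up for illustration):

```python
import hashlib

def cache_key(index_bytes: bytes) -> str:
    # mirrors the spirit of ${{ hashFiles('index.xml') }}: a stable digest of the file
    return "nltk_data-" + hashlib.sha256(index_bytes).hexdigest()

old_index = b"<nltk_data>...</nltk_data>"
new_index = b"<nltk_data>...updated...</nltk_data>"

print(cache_key(old_index) == cache_key(old_index))  # unchanged index -> cache hit
print(cache_key(old_index) == cache_key(new_index))  # changed index  -> refresh
```

In the workflow, the download of `index.xml` would happen in a step before the cache action so its hash can feed into the cache key.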
### References:
* HashFiles documentation: https://docs.github.com/en/actions/learn-github-actions/expressions#hashfiles
---
- Tom Aarsen | closed | 2021-12-16T14:36:07Z | 2022-12-13T21:47:04Z | https://github.com/nltk/nltk/issues/2916 | [
"good first issue",
"CI"
] | tomaarsen | 5 |
absent1706/sqlalchemy-mixins | sqlalchemy | 70 | Migrate documentation to the mkdocs | Description coming soon. | open | 2021-04-22T13:21:46Z | 2021-04-22T14:14:24Z | https://github.com/absent1706/sqlalchemy-mixins/issues/70 | [
"enhancement"
] | myusko | 0 |
google-research/bert | tensorflow | 1,248 | Most general-purpose question answering system? | Is there any pre-trained model of Bert or a similar tool which has been trained on the widest body of knowledge possible to be an effective general purpose question answering system?
What about specifically for programming knowledge?
Thanks very much. | open | 2021-08-03T10:37:05Z | 2021-08-03T10:37:05Z | https://github.com/google-research/bert/issues/1248 | [] | jukhamil | 0 |
apache/airflow | python | 47,601 | Google Cloud + CNCF Kubernetes OnFinishAction equality test | ### Apache Airflow Provider(s)
cncf-kubernetes, google
### Versions of Apache Airflow Providers
I guess you can take the latest ones, should be reproducible there, but definitely those included in official Docker image for 2.10.5.
### Apache Airflow version
2.10.5
### Operating System
Linux
### Deployment
Other Docker-based deployment
### Deployment details
Using official Docker image in k8s (issue is related to usage of GKEStartPodOperator)
### What happened
Starting at this line https://github.com/apache/airflow/blob/2b1c2758f41e438280380a1e11b61d6b0085ee7c/providers/google/src/airflow/providers/google/cloud/operators/kubernetes_engine.py#L641 a new approach to pod deletion is utilized so we are on par with CNCF k8s package enum OnFinishAction. Unfortunately the values set there don't compare with the values withing CNCF k8s package (maybe it's because differences in imports) resulting in failing equality tests here https://github.com/apache/airflow/blob/2b1c2758f41e438280380a1e11b61d6b0085ee7c/providers/cncf/kubernetes/src/airflow/providers/cncf/kubernetes/operators/pod.py#L1047
### What you think should happen instead
I guess another way to compare the enums could be used here so the pods are deleted if operator is properly configured.
### How to reproduce
Use GKEStartPodOperator to run a job in k8s environment with `on_finish_action='DELETE_POD'` - the pod will be kept with message "Skipping deleting pod" instead of deletion.
### Anything else
I changed the equality test to `(str(self.on_finish_action) == str(OnFinishAction.DELETE_POD))` and it worked properly although it's not the best way to fix it. Looks like the property self.on_finish_action is of type str and cannot be directly compared to enum.
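The pitfall generalizes: a plain `Enum` member never compares equal to its string value, while an enum that also subclasses `str` does, and comparing against `.value` works in both cases. An illustrative stdlib sketch (not airflow's actual class definitions):

```python
from enum import Enum

class PlainAction(Enum):
    DELETE_POD = "delete_pod"

class StrAction(str, Enum):
    DELETE_POD = "delete_pod"

on_finish_action = "delete_pod"   # the operator stores a plain string

plain_eq = on_finish_action == PlainAction.DELETE_POD        # False -> pod kept
value_eq = on_finish_action == PlainAction.DELETE_POD.value  # True  -> robust fix
mixin_eq = on_finish_action == StrAction.DELETE_POD          # True  -> str mixin

print(plain_eq, value_eq, mixin_eq)
```

So comparing against `OnFinishAction.DELETE_POD.value` (or normalizing the attribute to the enum type on assignment) would be a cleaner fix than wrapping both sides in `str()`.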
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-11T08:57:01Z | 2025-03-11T12:53:36Z | https://github.com/apache/airflow/issues/47601 | [
"kind:bug",
"provider:google",
"area:providers",
"provider:cncf-kubernetes",
"needs-triage"
] | miloszszymczak | 1 |
unit8co/darts | data-science | 2,524 | [BUG] Quantiles/samples are wrong when using enable_optimization=True (default!) | **Describe the bug**
Hi there,
For a while now, I've been puzzled by the output of my XGBoost/LightGBM models, because the forecasts were surprisingly "flat" and not really showing any trends. (Also see my previous issue https://github.com/unit8co/darts/issues/2382)

However, the forecasts looked much better when I trained the model to produce point predictions only by minimizing the absolute error. This is surprising because it should be equivalent to the 50%-quantile of the probabilistic forecasts.

But now I just realized that if I set `enable_optimization=False` in `historical_forecasts`, the forecasts look much better and more like what I would expect (the median predictions look the same as the point predictions from before.)

So I think there might be something wrong with the optimization or sampling (which might be somewhat related to another previous issue https://github.com/unit8co/darts/issues/2461#issuecomment-2232651569).
**To Reproduce**
```python
import matplotlib.pyplot as plt
from darts import concatenate
from darts.datasets import AirPassengersDataset
from darts.models import XGBModel

series = AirPassengersDataset().load()
target_end = 60
validation_start = 60
QUANTILES = [0.025, 0.25, 0.5, 0.75, 0.975]

# Probabilistic forecast
xgb = XGBModel(
    lags=12,
    output_chunk_length=6,
    use_static_covariates=True,
    likelihood="quantile",
    quantiles=QUANTILES
)
xgb.fit(series)
hfc = xgb.historical_forecasts(
    series=series,
    start=validation_start,
    forecast_horizon=6,
    stride=6,
    last_points_only=False,
    retrain=False,
    verbose=True,
    num_samples=200
)
hfc = concatenate(hfc, axis=0)

# Probabilistic forecast with enable_optimization=False
hfc2 = xgb.historical_forecasts(
    series=series,
    start=validation_start,
    forecast_horizon=6,
    stride=6,
    last_points_only=False,
    retrain=False,
    verbose=True,
    num_samples=200,
    enable_optimization=False
)
hfc2 = concatenate(hfc2, axis=0)

# Point forecast
xgb = XGBModel(
    lags=12,
    output_chunk_length=6,
    use_static_covariates=True,
    objective='reg:absoluteerror'
)
xgb.fit(series)
hfc_ae = xgb.historical_forecasts(
    series=series,
    start=validation_start,
    forecast_horizon=6,
    stride=6,
    last_points_only=False,
    retrain=False,
    verbose=True
)
hfc_ae = concatenate(hfc_ae, axis=0)

# Plots
series.plot()
hfc.plot()
series.plot(new_plot=True)
hfc2.plot()
series.plot(new_plot=True)
hfc_ae.plot()
```
**Additional context**
I'm not sure if this only affects the Regression Models or also the Pytorch-based Models (but there I haven't noticed any obvious issues so far). | closed | 2024-09-06T11:46:52Z | 2024-09-16T08:05:25Z | https://github.com/unit8co/darts/issues/2524 | [
"bug"
] | dwolffram | 5 |
tqdm/tqdm | jupyter | 665 | estimated time skew over long periods of time | - [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
```
4.29.1 3.6.5 (default, Apr 1 2018, 05:46:30)
[GCC 7.3.0] linux
```
I have this situation where estimated time skews over long-running processes. All progressbars have `smoothing=0`.
```
Blargs : 36%|#################2 | 18.0k/50.1k [21:18:02<37:57:53, 4.26s/bla]
Froobles : 36%|##############6 | 1.44k/4.03k [21:07:12<37:58:09, 52.8s/froob]
Frablaghs : 36%|############### | 3.87k/10.8k [21:18:02<37:57:49, 19.8s/frab]
Flebreps : 36%|#################2 | 867k/2.41M [21:21:12<38:03:32, 11.3 fleb/s]
Booblips : 36%|############## | 56.3k/157k [21:20:26<38:02:06, 1.36s/boob]
```
In the beginning, they all stabilized at about 56.5 hours, then slowly slid down. Even though the percent bar stays correct, the estimated time keeps skewing down. I think this is caused by imperfect summing of floating-point ratios over time, a common problem in physics simulations. With `smoothing=0`, I think we can fix the estimated time to be equal to `elapsed_t * total / n`. That will also make tqdm faster.
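A minimal sketch of that drift-free estimate (a hypothetical helper, not tqdm's actual implementation):

```python
def remaining_seconds(elapsed_t: float, n: int, total: int) -> float:
    """ETA from cumulative elapsed time: exact for a constant rate and
    immune to accumulated floating-point error from per-iteration sums."""
    if n == 0:
        return float("inf")
    return elapsed_t * (total - n) / n


# The first bar above: 21h18m elapsed at 18.0k of 50.1k items gives
# roughly the 38 h the bars display, with no slow drift over time.
eta_hours = remaining_seconds(21 * 3600 + 18 * 60, 18_000, 50_100) / 3600
```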
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| closed | 2019-01-25T07:35:52Z | 2019-01-25T12:18:47Z | https://github.com/tqdm/tqdm/issues/665 | [
"question/docs โฝ",
"need-feedback ๐ข"
] | nurettin | 4 |
huggingface/diffusers | deep-learning | 11,050 | [examples/controlnet/train_controlnet_sd3.py] prompt_embeds and pooled_prompt_embeds not cast to weight_dtype in bf16/fp16 training | ### Describe the bug
When training with --mixed_precision bf16 or fp16, the prompt_embeds and pooled_prompt_embeds tensors in the compute_text_embeddings function are not cast to the appropriate weight_dtype (matching the rest of the model inputs and parameters), causing a mismatch error during training.
Specifically, the tensors are generated as float32 (default) and not moved to bf16/fp16, which leads to issues when performing operations inside the transformer/controlnet forward pass.
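A hedged sketch of the missing casts (the tensor shapes are illustrative, `weight_dtype` follows the training script's naming, and the two `.to(...)` lines are the gist of the fix inside `compute_text_embeddings`):

```python
import torch

weight_dtype = torch.bfloat16  # what --mixed_precision bf16 resolves to

# Stand-ins for the text-encoder outputs, which default to float32:
prompt_embeds = torch.randn(1, 77, 4096)
pooled_prompt_embeds = torch.randn(1, 2048)

# The missing casts from the report:
prompt_embeds = prompt_embeds.to(dtype=weight_dtype)
pooled_prompt_embeds = pooled_prompt_embeds.to(dtype=weight_dtype)

assert prompt_embeds.dtype == weight_dtype
assert pooled_prompt_embeds.dtype == weight_dtype
```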
### Reproduction
accelerate launch train_controlnet_sd3.py \
--pretrained_model_name_or_path=$MODEL_DIR \
--output_dir=$OUTPUT_DIR \
--train_data_dir=$DATASET \
--resolution=512 \
--caption_column=$CAPTION_COLLUMN \
--output_dir=$OUTPUT_DIR \
--learning_rate=1e-5 \
--max_train_steps=15000 \
--validation_steps=100 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--mixed_precision=bf16
### Logs
```shell
```
### System Info
- ๐ค Diffusers version: 0.33.0.dev0
- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.11.10
- PyTorch version (GPU?): 2.4.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.3
- Transformers version: 4.49.0
- Accelerate version: 1.5.1
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA A100 80GB PCIe, 81920 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sayakpaul | closed | 2025-03-13T11:58:41Z | 2025-03-14T12:03:17Z | https://github.com/huggingface/diffusers/issues/11050 | [
"bug"
] | andjoer | 2 |
strawberry-graphql/strawberry-django | graphql | 8 | "'ModelClass' object has no attribute 'all'" | Suppose this is also a feature request to support reverse relationships.
The code below causes: `'Profile' object has no attribute 'all'`
Query:
```
{
  users {
    id
    profile {
      id
    }
  }
}
```
Schema:
```
class ProfileResolver(ModelResolver):
    model = Profile
    fields = (
        "id",
    )


class UserResolver(ModelResolver):
    model = User
    fields = (
        'id',
        'profile'
    )


@strawberry.type
class Query(
    UserResolver.query(),
):
    pass


@strawberry.type
class Mutation(
    UserResolver.mutation(),
):
    pass


schema = strawberry.Schema(query=Query, mutation=Mutation)
```
Models:
```
class Profile(models.Model):
    user = models.OneToOneField(
        User,  # Default Django User model
        unique=True,
        verbose_name=_("user"),
        related_name="profile",
        on_delete=models.CASCADE,
    )
```
And for someone running into this in the meantime, the following does work:
```
class ProfileResolver(ModelResolver):
    model = Profile
    fields = (
        "id",
    )


class UserResolver(ModelResolver):
    model = User
    fields = (
        'id',
    )

    @strawberry.field
    def profile(info, root) -> ProfileResolver.output_type:
        return root.profile
```
Maybe the last snippet could be useful in the example.
If you would like me to create a PR for any of this, let me know. | closed | 2021-03-09T13:50:28Z | 2021-03-19T16:23:24Z | https://github.com/strawberry-graphql/strawberry-django/issues/8 | [
"bug"
] | joeydebreuk | 3 |
coleifer/sqlite-web | flask | 7 | project name | I know it is kind of late, but would you consider picking
some kind of name/nickname for your project?
there is already https://github.com/sqlitebrowser/sqlitebrowser/
and I would like to create a package for OpenBSD
so it's possible to easily install and use it locally...
sqlitebrowser (not sqlite-browser) is way too heavy
on dependencies.
peewee is also not called "orm" :)
| closed | 2015-07-18T11:14:47Z | 2015-07-22T00:25:13Z | https://github.com/coleifer/sqlite-web/issues/7 | [] | minusf | 5 |
deepinsight/insightface | pytorch | 2,547 | scrfd output mean | I just loaded the scrfd model into the Netron app and see that there are many outputs

Can someone explain the meaning of these outputs? I thought it should only return scores, bboxes, and kpss | open | 2024-03-26T08:30:57Z | 2024-03-26T08:30:57Z | https://github.com/deepinsight/insightface/issues/2547 | [] | NQHuy1905 | 0 |
python-restx/flask-restx | api | 182 | Issue on the swagger nested fields | Hi everyone,
I'm currently working on the documentation of a REST API. We recently migrated from restplus to restx, and the main reason was to be able to use `fields.Nested` and `fields.Wildcard`.
I've been trying to get this result as the "example value" in the API swagger:
```
{
    "free_ip_number": "string",
    "free_ip_percent": "string",
    "ip_list": {
        "ip1": {
            "dns_name": "hostname",
            "alias": "fw-massy",
            "type": "vm",
            "requester": "Someone",
            "description": "fw-massy",
            "environment": "test"
        },
        "ip2": {
            "dns_name": "hostname",
            "alias": "fw-massy",
            "type": "vm",
            "requester": "Someone",
            "description": "fw-massy",
            "environment": "test"
        },
        "ip3": {
            "dns_name": "hostname",
            "alias": "fw-massy",
            "type": "vm",
            "requester": "Someone",
            "description": "fw-massy",
            "environment": "test"
        }
    }
}
```
Unfortunately, what I get is this:
```
{
    "free_ip_number": "string",
    "free_ip_percent": "string",
    "ip_list": {
        "additionalProp1": {
            "dns_name": "hostname",
            "alias": "fw-massy",
            "type": "vm",
            "requester": "Someone",
            "description": "fw-massy",
            "environment": "test"
        },
        "additionalProp2": {
            "dns_name": "hostname",
            "alias": "fw-massy",
            "type": "vm",
            "requester": "Someone",
            "description": "fw-massy",
            "environment": "test"
        },
        "additionalProp3": {
            "dns_name": "hostname",
            "alias": "fw-massy",
            "type": "vm",
            "requester": "Someone",
            "description": "fw-massy",
            "environment": "test"
        }
    }
}
```
Here is my code:
```
MODEL_IP = api.model('IP', {
    'dns_name': fields.String(description='Domain name', example='hostname'),
    'alias': fields.String(description='alias', example='fw'),
    'type': fields.String(description='Type of equipment', example='vm'),
    'requester': fields.String(description='The firstname_lastname of the requester', example='someone'),
    'description': fields.String(description='The description of your IP (server name, project, etc)', example='fw'),
    'environment': fields.String(description='Environment type', example='test')
})

MODEL_NEST = fields.Nested(MODEL_IP)
MODEL_WILD = fields.Wildcard(MODEL_NEST)

MODEL_DOWNLOAD_RESPONSE = api.model('document download response', {
    'free_ip_number': fields.String(description='Number of IPv4 available in a subnet.'),
    'free_ip_percent': fields.String(description='Percentage of IP addresses available in a subnet.'),
    'ip_list': MODEL_WILD
})
```
Do you know how I can modify the additionalPropX keys?
Thanks in advance for any help or advice you can provide.
Bertin-v | open | 2020-07-22T13:59:23Z | 2020-07-31T17:25:18Z | https://github.com/python-restx/flask-restx/issues/182 | [
"question"
] | bertin-v | 3 |
facebookresearch/fairseq | pytorch | 5,002 | Is pretrained wav2vec2.0 sensitive to the sampling rate of the audio input? | Is pretrained wav2vec2.0 sensitive to the sampling rate of the audio input?
If I downsample the audio to a 3k sampling rate, will the representation of the downsampled audio inferred by wav2vec2.0 degrade seriously?
Thanks!
Best,
Pengbo
| open | 2023-03-02T09:09:23Z | 2023-03-02T09:09:23Z | https://github.com/facebookresearch/fairseq/issues/5002 | [
"question",
"needs triage"
] | EricKani | 0 |
huggingface/datasets | tensorflow | 6,746 | ExpectedMoreSplits error when loading C4 dataset | ### Describe the bug
I encountered a bug when running the example command line
```python
python main.py \
--model decapoda-research/llama-7b-hf \
--prune_method wanda \
--sparsity_ratio 0.5 \
--sparsity_type unstructured \
--save out/llama_7b/unstructured/wanda/
```
The bug occurred at these lines of code (when loading c4 dataset)
```python
traindata = load_dataset('allenai/c4', 'allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train')
valdata = load_dataset('allenai/c4', 'allenai--c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation')
```
The error message states:
```
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'validation'}
```
### Steps to reproduce the bug
1. I encounter bug when running the example command line
### Expected behavior
The error message states:
```
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'validation'}
```
### Environment info
I'm using CUDA 12.4, so I used ```pip install pytorch``` instead of the conda command provided in install.md
Also, I've tried another environment using the same commands in install.md, but the same bug occurred | closed | 2024-03-21T02:53:04Z | 2024-09-18T19:57:14Z | https://github.com/huggingface/datasets/issues/6746 | [] | billwang485 | 8 |
OpenInterpreter/open-interpreter | python | 1,182 | interpreter.llm.completion = custom_language_model seems not working | ### Describe the bug
I attempted to run the routine for the custom model, which is supposed to just echo back what the user said, but it was not successful.
```
def custom_language_model(openai_message):
    """
    OpenAI-compatible completions function (this one just echoes what the user said back).
    """
    users_content = openai_message[-1].get("content")  # Get last message's content

    # To make it OpenAI-compatible, we yield this first:
    yield {"delta": {"role": "assistant"}}
    for character in users_content:
        yield {"delta": {"content": character}}


# Tell Open Interpreter to power the language model with this function
interpreter.llm.completion = custom_language_model
```
I also attempted assigning to `interpreter.llm.completions = custom_language_model` instead, but the parameters don't match up.
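For what it's worth, the documented generator does echo correctly when consumed directly, as this stand-alone check (outside Open Interpreter) shows; that suggests the wiring of `interpreter.llm.completion(s)` is the problem rather than the function itself:

```python
def custom_language_model(openai_message):
    """OpenAI-compatible completions function (echoes the user's message)."""
    users_content = openai_message[-1].get("content")
    yield {"delta": {"role": "assistant"}}
    for character in users_content:
        yield {"delta": {"content": character}}


chunks = custom_language_model([{"role": "user", "content": "hello"}])
echoed = "".join(chunk["delta"].get("content", "") for chunk in chunks)
assert echoed == "hello"
```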
### Reproduce
Run
```
def custom_language_model(openai_message):
    """
    OpenAI-compatible completions function (this one just echoes what the user said back).
    """
    users_content = openai_message[-1].get("content")  # Get last message's content

    # To make it OpenAI-compatible, we yield this first:
    yield {"delta": {"role": "assistant"}}
    for character in users_content:
        yield {"delta": {"content": character}}


# Tell Open Interpreter to power the language model with this function
interpreter.llm.completion = custom_language_model
```
https://docs.openinterpreter.com/language-models/custom-models
### Expected behavior
(this one just echoes what the user said back)
### Screenshots
_No response_
### Open Interpreter version
0.2.4
### Python version
3.11
### Operating System name and version
win 11
### Additional context
_No response_ | open | 2024-04-07T09:24:02Z | 2024-08-27T20:42:19Z | https://github.com/OpenInterpreter/open-interpreter/issues/1182 | [] | sq2100 | 3 |
chezou/tabula-py | pandas | 137 | subprocess.CalledProcessError: Command '['java', '-Dfile.encoding=UTF8', '-jar' | # Summary of your issue
Code:
import tabula
dflist = tabula.read_pdf("NSE100011130-JAN-19PS04.pdf", encoding='utf-8', spreadsheet=True, multiple_tables=True)
with open('NSE100011130-JAN-19PS04.pdf.csv', 'a') as f:
    for i, df in enumerate(dflist):
        dflist[i].to_csv(f, header=False)
Error:
Traceback (most recent call last):
  File "C:\Users\mohan\AppData\Local\Programs\Python\Python36-32\pdftocsv.py", line 10, in <module>
    dflist = tabula.read_pdf(one_pdf, encoding='utf-8', spreadsheet=True,multiple_tables=True)
  File "C:\Users\mohan\AppData\Local\Programs\Python\Python36-32\lib\site-packages\tabula\wrapper.py", line 108, in read_pdf
    output = subprocess.check_output(args)
  File "C:\Users\mohan\AppData\Local\Programs\Python\Python36-32\lib\subprocess.py", line 336, in check_output
    **kwargs).stdout
  File "C:\Users\mohan\AppData\Local\Programs\Python\Python36-32\lib\subprocess.py", line 418, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['java', '-Dfile.encoding=UTF8', '-jar', 'C:\\Users\\mohan\\AppData\\Local\\Programs\\Python\\Python36-32\\lib\\site-packages\\tabula\\tabula-1.0.2-jar-with-dependencies.jar', '--pages', '1', '--guess', '--format', 'JSON', '--lattice', 'C:\\Users\\mohan\\AppData\\Local\\Programs\\Python\\Python36-32\\NSE100011108-FEB-19PS04.pdf']' returned non-zero exit status 1. | closed | 2019-02-11T13:33:11Z | 2019-02-11T13:40:45Z | https://github.com/chezou/tabula-py/issues/137 | [] | mohan2515175 | 1 |
seleniumbase/SeleniumBase | web-scraping | 2,284 | TypeError: Binary Location Must be a String | I have some simple code where I want to use UC Mode, but I am getting this error.
```
from seleniumbase import Driver
import time
driver = Driver(
    uc=True,
)
driver.get("https://nowsecure.nl/#relax")
time.sleep(6)
driver.quit()
```
```
root@Scrapers:/home/scrapers_dev# python seleniumbase_test.py
Traceback (most recent call last):
  File "/home/scrapers_dev/selenoid_test.py", line 4, in <module>
    driver = Driver(uc=True, incognito=True)
  File "/usr/local/lib/python3.10/dist-packages/seleniumbase/plugins/driver_manager.py", line 463, in Driver
    driver = browser_launcher.get_driver(
  File "/usr/local/lib/python3.10/dist-packages/seleniumbase/core/browser_launcher.py", line 1561, in get_driver
    return get_local_driver(
  File "/usr/local/lib/python3.10/dist-packages/seleniumbase/core/browser_launcher.py", line 3390, in get_local_driver
    driver = undetected.Chrome(
  File "/usr/local/lib/python3.10/dist-packages/seleniumbase/undetected/__init__.py", line 217, in __init__
    options.binary_location = (
  File "/usr/local/lib/python3.10/dist-packages/selenium/webdriver/chromium/options.py", line 52, in binary_location
    raise TypeError(self.BINARY_LOCATION_ERROR)
TypeError: Binary Location Must be a String
```
I installed seleniumbase just today using the `pip install seleniumbase` command. I also tried `pip install seleniumbase --upgrade`, but it didn't help.
**python -V**
`Python 3.10.7`
**pip -V**
`pip 23.3.1 from /usr/local/lib/python3.10/dist-packages/pip (python 3.10)`
**lsb_release -a**
```
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.5 LTS
Release: 20.04
Codename: focal
```
| closed | 2023-11-15T06:27:13Z | 2023-11-16T05:57:15Z | https://github.com/seleniumbase/SeleniumBase/issues/2284 | [
"invalid usage",
"UC Mode / CDP Mode"
] | iamumairayub | 1 |
matplotlib/matplotlib | matplotlib | 29,496 | [ENH]: visual plot builder | ### Problem
Code is great, (learning) to code a bit less so.
For anybody not using matplotlib very often I'd say a visual builder would slash development time.
### Proposed solution
Maybe start with https://matplotlib.org/stable/_images/anatomy.png and make it interactive?
Especially in Jupyter(Lite) it would allow so many more people to start making reproducible plots | closed | 2025-01-20T23:24:24Z | 2025-01-22T16:21:33Z | https://github.com/matplotlib/matplotlib/issues/29496 | [
"New feature"
] | steltenpower | 5 |
itamarst/eliot | numpy | 26 | LoggedMessage.ofType doesn't work with LoggedAction.children | (Originally by Tom Prince).
I naively tried to do
```
action = LoggedAction.ofType(....)
message = LoggedMessage.ofType(action.children, SOME_MESSAGE)[0]
```
which blows up. This seems like a reasonable thing to want to do, and there doesn't seem to be a way of doing this. (Although perhaps not with exactly the code I wrote).
| open | 2014-04-15T15:16:57Z | 2018-09-22T20:59:11Z | https://github.com/itamarst/eliot/issues/26 | [] | itamarst | 0 |
mwaskom/seaborn | pandas | 2,884 | Would it be possible to optionally use jax.gaussian_kde instead of scipy? | [Now that Jax has](https://github.com/google/jax/pull/11237) a GPU-accelerated version of `gaussian_kde`, would it be possible to use that instead of the scipy or the internal version of this function? | closed | 2022-06-29T05:12:10Z | 2022-06-29T21:30:10Z | https://github.com/mwaskom/seaborn/issues/2884 | [] | NeilGirdhar | 2 |
quantumlib/Cirq | api | 6,631 | failing comparisons for noise channels | **Description of the issue**
two related bugs related to equality checks for noise channels:
* approximate comparison between cirq noise channels and other gates raises an error
* exact comparison ignores `DepolarizingChannel.n_qubits`
**How to reproduce the issue**
```python
cirq.approx_eq(cirq.depolarize(0.1), cirq.X) # AttributeError: '_PauliX' object has no attribute 'p'
cirq.approx_eq(cirq.phase_damp(0.1), cirq.X) # AttributeError: '_PauliX' object has no attribute '_gamma'
cirq.approx_eq(cirq.phase_flip(0.1), cirq.X) # AttributeError: '_PauliX' object has no attribute '_p'
cirq.approx_eq(cirq.bit_flip(0.1), cirq.X) # AttributeError: '_PauliX' object has no attribute '_p'
cirq.approx_eq(cirq.asymmetric_depolarize(0.1), cirq.X) # AttributeError: '_PauliX' object has no attribute 'error_probabilities'
assert cirq.depolarize(0.1, 1) == cirq.depolarize(0.1, 2) # passes, but shouldn't
```
I believe the first issue would be fixed by just adding `approximate=True` to the `@value_equality` decorator for each class and removing their explicit implementations of `_approx_eq_`. The second issue just requires the inclusion of `n_qubits` in `DepolarizingChannel._value_equality_values_`.
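As a stand-alone illustration (hypothetical classes, not Cirq's), the robustness half of the fix amounts to having the approximate-equality hook bail out on foreign types instead of reaching for their attributes, and the exactness half to including `n_qubits` in the compared values:

```python
import math


class Depolarize:
    """Toy stand-in for a noise channel with (p, n_qubits) state."""

    def __init__(self, p, n_qubits=1):
        self.p = p
        self.n_qubits = n_qubits

    def _approx_eq_(self, other, *, atol):
        if not isinstance(other, Depolarize):
            return NotImplemented  # instead of raising AttributeError
        return self.n_qubits == other.n_qubits and math.isclose(
            self.p, other.p, abs_tol=atol
        )


class PauliX:  # stand-in for an unrelated gate type
    pass


assert Depolarize(0.1)._approx_eq_(PauliX(), atol=1e-8) is NotImplemented
assert Depolarize(0.1, 1)._approx_eq_(Depolarize(0.1, 2), atol=1e-8) is False
assert Depolarize(0.1, 2)._approx_eq_(Depolarize(0.1 + 1e-10, 2), atol=1e-8)
```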
**Cirq version**
```
1.4.0.dev20240529202703
```
| closed | 2024-06-03T17:41:28Z | 2025-01-18T13:51:46Z | https://github.com/quantumlib/Cirq/issues/6631 | [
"kind/bug-report",
"triage/accepted"
] | richrines1 | 0 |
predict-idlab/plotly-resampler | data-visualization | 210 | tz-aware datetime pd.Series. -> tz information gets lost | When passing a tz-aware non-`pd.DatetimeIndex` (and thus not a `pd.Series`) to `hf_x`, the `.values` conversion is performed internally when parsing the `hf_data` kwargs, which removes the time-zone information and will display the UTC time.
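The root cause is easy to reproduce stand-alone with pandas (the dates and the `Europe/Brussels` zone are just examples): calling `.values` on a tz-aware datetime index yields naive `datetime64[ns]` values in UTC:

```python
import pandas as pd

idx = pd.date_range("2023-05-11", periods=3, tz="Europe/Brussels")

print(idx.dtype)         # datetime64[ns, Europe/Brussels]
print(idx.values.dtype)  # datetime64[ns]  (tz dropped, wall times shifted to UTC)
```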


TODO:
- [x] write tests
- [x] test what happens when we alter `hf_data[<ID>]['x']` at runtime with a tz-aware `pd.DateTimeIndex` | closed | 2023-05-11T13:24:27Z | 2023-05-12T10:41:11Z | https://github.com/predict-idlab/plotly-resampler/issues/210 | [
"bug"
] | jonasvdd | 0 |
jmcnamara/XlsxWriter | pandas | 740 | Feature Request | ## Feature requests
Allow conditional_format to check on non-sequential cells.
Currently you can only specify a sequential range as shown below
work_sheet_teams.conditional_format('**B6:B9**', {'type': 'duplicate','format': format_dup})
I would like to do something such as this
work_sheet_teams.conditional_format('**B6:B9, B12:B15, B17:B25, B30:B40**', {'type': 'duplicate','format': format_dup})
Excel allows you to check those ranges for duplicates
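If it helps, XlsxWriter's `multi_range` option (documented under "Working with Conditional Formatting") appears to cover this already: the first range argument anchors the format and `multi_range` lists every area, space-separated. A small sketch (the filename and colour are arbitrary):

```python
import xlsxwriter

workbook = xlsxwriter.Workbook("teams.xlsx")
work_sheet_teams = workbook.add_worksheet()
format_dup = workbook.add_format({"bg_color": "#FFC7CE"})

work_sheet_teams.conditional_format(
    "B6:B9",  # anchor range
    {
        "type": "duplicate",
        "format": format_dup,
        "multi_range": "B6:B9 B12:B15 B17:B25 B30:B40",
    },
)
workbook.close()
```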
Using Python 3.8
Windows 10
Microsoft Excel 365
Thank you,
John | closed | 2020-08-08T16:33:15Z | 2020-08-09T20:22:23Z | https://github.com/jmcnamara/XlsxWriter/issues/740 | [
"question"
] | jhansen31 | 3 |
sqlalchemy/sqlalchemy | sqlalchemy | 12,323 | How to properly provide Label._make_proxy new parameters | ### Describe the use case
I previously created a view dynamically in version **2.0.37** using the following code:
```python
def view(name: Any, schema_event_target: Any, selectable: Any, schema: Any) -> Any:
    """
    The original documentation suggests using `before_create` along with `meta` to create views.
    However, since we manage database structures using Alembic, the `before_create` event is not usually triggered.
    To address this, we modified the approach to check during database connection instead.
    After reviewing the documentation, I found that the bound object doesn't necessarily have to be `metadata`;
    it can also be a `Table` or any `SchemaEventTarget`.
    So, we changed the implementation to bind it to the `after_create` event on a `Table` and looked for ways
    to trigger this event within Alembic.

    Args:
        name: The name of the view.
        schema_event_target: The schema event target to bind the events to.
        selectable: The selectable query defining the view.
        schema: The schema in which the view resides.

    Returns:
        The created table object representing the view.
    """
    t = table(name, schema=schema)
    t._columns._populate_separate_keys(col._make_proxy(t) for col in selectable.selected_columns)  # noqa
    sa.event.listen(
        schema_event_target,
        "after_create",
        CreateView(schema, name, selectable).execute_if(callable_=view_doesnt_exist),  # type: ignore
    )
    sa.event.listen(
        schema_event_target,
        "before_drop",
        DropView(schema, name).execute_if(callable_=view_exists),  # type: ignore
    )
    return t
```
However, after upgrading SQLAlchemy to **2.0.38** yesterday, I encountered the following error:
```bash
File "/Users/jqq/PycharmProjects/TFRobotServer/tfrobotserver/db/models/utils.py", line 120, in <genexpr>
t._columns._populate_separate_keys(col._make_proxy(t) for col in selectable.selected_columns) # noqa
^^^^^^^^^^^^^^^^^^
TypeError: Label._make_proxy() missing 2 required keyword-only arguments: 'primary_key' and 'foreign_keys'
make: *** [reset_postgresql] Error 1
```
I noticed that the `_make_proxy` method now requires two new keyword-only parameters: **`primary_key`** and **`foreign_keys`**.
However, I'm unsure how to properly provide these arguments in this context.
I originally referenced the **SQLAlchemy Views documentation** when writing this code:
[GitHub: SQLAlchemy Views](https://github.com/sqlalchemy/sqlalchemy/wiki/Views)
Unfortunately, it hasn't been updated to reflect these changes.
How should I modify my code to work with the latest SQLAlchemy version? Any help would be greatly appreciated!
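One hedged guess at the adaptation, based purely on the error message (since `_make_proxy` is a private API): forward each column's own attributes as the new keyword-only arguments, i.e. `col._make_proxy(t, primary_key=col.primary_key, foreign_keys=col.foreign_keys)`. The snippet below mimics the 2.0.38 signature with a stub so it runs stand-alone; in the real helper, `col` comes from `selectable.selected_columns` and `t` is the `table()` object:

```python
class StubColumn:
    """Stand-in with just enough surface to mimic the 2.0.38 signature."""

    def __init__(self, name, primary_key=False, foreign_keys=frozenset()):
        self.name = name
        self.primary_key = primary_key
        self.foreign_keys = foreign_keys

    def _make_proxy(self, selectable, *, primary_key, foreign_keys):
        # Both extra parameters are keyword-only, as in the traceback.
        return self.name, (selectable, primary_key, foreign_keys)


t = "stuff_view"
selected_columns = [StubColumn("id", primary_key=True), StubColumn("data")]

# The adapted call site, the same shape the view() helper would need:
proxies = dict(
    col._make_proxy(t, primary_key=col.primary_key, foreign_keys=col.foreign_keys)
    for col in selected_columns
)
assert proxies["id"][1] is True
```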
### Databases / Backends / Drivers targeted
Postgresql
### Example Use
```python
def view(name: Any, schema_event_target: Any, selectable: Any, schema: Any) -> Any:
    """
    The original documentation uses `before_create` + `meta` to create views, but because we manage the
    database structure with Alembic, the `before_create` event is normally never triggered, so this was
    changed to run the check when connecting to the database.
    Reading the documentation showed that the bound object does not have to be `metadata`; it can also be
    a `Table`, as long as it is a `SchemaEventTarget`.
    So this was changed to bind to the `after_create` event of the `Table`, and we then looked for a way
    to trigger that event from Alembic.

    Args:
        name:
        schema_event_target:
        selectable:
        schema:

    Returns:
    """
    t = table(name, schema=schema)
    t._columns._populate_separate_keys(col._make_proxy(t) for col in selectable.selected_columns)  # noqa
    sa.event.listen(
        schema_event_target,
        "after_create",
        CreateView(schema, name, selectable).execute_if(callable_=view_doesnt_exist),  # type: ignore
    )
    sa.event.listen(
        schema_event_target, "before_drop", DropView(schema, name).execute_if(callable_=view_exists)  # type: ignore
    )
    return t
```
### Additional context
_No response_ | closed | 2025-02-07T10:33:15Z | 2025-02-08T09:12:35Z | https://github.com/sqlalchemy/sqlalchemy/issues/12323 | [
"schema",
"use case"
] | JIAQIA | 3 |