Dataset schema (dataset-viewer column metadata): repo_name (string, 9–75 chars), topic (string, 30 classes), issue_number (int64, 1–203k), title (string, 1–976 chars), body (string, 0–254k chars), state (string, 2 classes), created_at (string, 20 chars), updated_at (string, 20 chars), url (string, 38–105 chars), labels (list, 0–9 items), user_login (string, 1–39 chars), comments_count (int64, 0–452)
errbotio/errbot
automation
901
Location of config-template.py
In order to let us help you better, please fill out the following fields as best you can: ### I am ... * [ ] Reporting a bug ### I am running... * Errbot version: 4.3.4 * OS version: Ubuntu 16.04 * Python version: 3 * Using a virtual environment: yes ### Issue description When you try to start errbot for the first time there is no config.py. The message says where you can find that file, but it gives the wrong location. ``` 14:25:58 ERROR errbot.cli I cannot find the config file ~/errbot/config.py (You can change this path with the -c parameter see --help) 14:25:58 INFO errbot.cli You can use the template ~/errbot/lib/python3.5/site-packages/errbot/cli.py/config-template.py as a base and copy it to ~/errbot/config.py. You can then customize it. ``` The correct location of the file is `~/errbot/lib/python3.5/site-packages/errbot/config-template.py` ### Steps to reproduce Just install a fresh errbot without a config file and start it.
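The broken message reads as if `cli.py`'s own path had been used where its containing directory was meant; the template actually sits next to `cli.py`. A minimal sketch of the corrected path construction (pure stdlib, paths copied from the log above):

```python
import posixpath

# Reported (wrong) path treats cli.py as if it were a directory:
#   ~/errbot/lib/python3.5/site-packages/errbot/cli.py/config-template.py
# The template lives beside cli.py, so dirname + join gives the right path:
pkg_dir = posixpath.dirname("~/errbot/lib/python3.5/site-packages/errbot/cli.py")
template = posixpath.join(pkg_dir, "config-template.py")
print(template)  # ~/errbot/lib/python3.5/site-packages/errbot/config-template.py
```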
closed
2016-11-18T13:47:53Z
2016-12-01T10:17:50Z
https://github.com/errbotio/errbot/issues/901
[ "newcomer-friendly", "#configuration", "#usability" ]
Darkguver
2
xzkostyan/clickhouse-sqlalchemy
sqlalchemy
204
Support for DROP PARTITION statements
**Describe the bug** Clickhouse is not designed to support deleting rows, although it supports it via expensive `ALTER TABLE ... DELETE WHERE` statements. A much more efficient way of deleting rows is dropping whole partitions, which is why reasonable table schema designers take care to partition their tables with that in mind. However, `ALTER TABLE ... DROP PARTITION` statements are not currently supported by the library. **Expected behavior** Being able to drop table partitions with statements along the lines of: ``` session.execute(table.drop_partition(partition_expression)) ``` **Versions** - clickhouse-sqlalchemy 0.2.2 - Python version: NA
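Until the library grows such an API, the statement can be emitted as raw SQL. A minimal sketch with a hypothetical helper (`drop_partition_sql` is not part of clickhouse-sqlalchemy; the table and partition names are illustrative):

```python
def drop_partition_sql(table: str, partition: str) -> str:
    """Hypothetical helper: render the raw ClickHouse statement that the
    requested drop_partition() API would generate."""
    return f"ALTER TABLE {table} DROP PARTITION {partition}"

sql = drop_partition_sql("events", "'2022-10-01'")
print(sql)  # ALTER TABLE events DROP PARTITION '2022-10-01'
```

The resulting string could then be executed via `session.execute(sqlalchemy.text(sql))` as a stopgap.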
open
2022-10-11T15:09:19Z
2022-10-11T15:09:19Z
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/204
[]
georgipeev
0
lexiforest/curl_cffi
web-scraping
168
[BUG] BytesWarning: str() on a bytes instance (with `-b`)
**Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Run curl_cffi or curl_cffi tests with `-bb` interpreter option e.g. `python3 -bb -Werror -m pytest tests/unittest/test_requests.py` `BytesWarning: str() on a bytes instance` is raised ``` tests/unittest/test_requests.py:52 (test_post_no_body) server = <conftest.TestServer object at 0x7fbb87459710> def test_post_no_body(server): > r = requests.post(str(server.url), headers={"Content-Type": "application/json"}) test_requests.py:54: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../env/lib/python3.11/site-packages/curl_cffi/requests/__init__.py:92: in request return s.request( ../../env/lib/python3.11/site-packages/curl_cffi/requests/session.py:618: in request req, buffer, header_buffer, q, header_recved, quit_now = self._set_curl_options( ../../env/lib/python3.11/site-packages/curl_cffi/requests/session.py:230: in _set_curl_options c.setopt(CurlOpt.URL, url.encode()) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <curl_cffi.curl.Curl object at 0x7fbb866f9810> option = <CurlOpt.URL: 10002>, value = b'http://127.0.0.1:8000/' def setopt(self, option: CurlOpt, value: Any): """Wrapper for curl_easy_setopt. Parameters: option: option to set, use the constants from CurlOpt enum value: value to set, strings will be handled automatically """ input_option = { # this should be int in curl, but cffi requires pointer for void* # it will be convert back in the glue c code. 
0: "int*", 10000: "char*", 20000: "void*", 30000: "int*", # offset type } # print("option", option, "value", value) # Convert value value_type = input_option.get(int(option / 10000) * 10000) if value_type == "int*": c_value = ffi.new("int*", value) elif option == CurlOpt.WRITEDATA: c_value = ffi.new_handle(value) self._write_handle = c_value lib._curl_easy_setopt( self._curl, CurlOpt.WRITEFUNCTION, lib.buffer_callback ) elif option == CurlOpt.HEADERDATA: c_value = ffi.new_handle(value) self._header_handle = c_value lib._curl_easy_setopt( self._curl, CurlOpt.HEADERFUNCTION, lib.buffer_callback ) elif option == CurlOpt.WRITEFUNCTION: c_value = ffi.new_handle(value) self._write_handle = c_value lib._curl_easy_setopt(self._curl, CurlOpt.WRITEFUNCTION, lib.write_callback) option = CurlOpt.WRITEDATA elif option == CurlOpt.HEADERFUNCTION: c_value = ffi.new_handle(value) self._header_handle = c_value lib._curl_easy_setopt(self._curl, CurlOpt.WRITEFUNCTION, lib.write_callback) option = CurlOpt.HEADERDATA elif value_type == "char*": if isinstance(value, str): c_value = value.encode() else: c_value = value # Must keep a reference, otherwise may be GCed. 
if option == CurlOpt.POSTFIELDS: self._body_handle = c_value else: raise NotImplementedError("Option unsupported: %s" % option) if option == CurlOpt.HTTPHEADER: for header in value: self._headers = lib.curl_slist_append(self._headers, header) ret = lib._curl_easy_setopt(self._curl, option, self._headers) elif option == CurlOpt.RESOLVE: for resolve in value: if isinstance(resolve, str): resolve = resolve.encode() self._resolve = lib.curl_slist_append(self._resolve, resolve) ret = lib._curl_easy_setopt(self._curl, option, self._resolve) else: ret = lib._curl_easy_setopt(self._curl, option, c_value) > self._check_error(ret, "setopt(%s, %s)" % (option, value)) E BytesWarning: str() on a bytes instance ../../env/lib/python3.11/site-packages/curl_cffi/curl.py:190: BytesWarning ``` **Expected behavior** A clear and concise description of what you expected to happen. Not raise BytesWarning. `-bb` is sometimes used in test suites to catch bugs. (in our case it's propagating to all our curl_cffi-related tests, causing them to all fail) **Versions** - OS: linux - curl_cffi version 0.5.10 - `pip freeze` dump ``` anyio==4.0.0 certifi==2023.7.22 cffi==1.16.0 click==8.1.7 cryptography==41.0.5 curl-cffi==0.5.10 h11==0.14.0 httpcore==1.0.1 httpx==0.25.1 idna==3.4 iniconfig==2.0.0 packaging==23.2 pluggy==1.3.0 pycparser==2.21 pytest==7.4.3 sniffio==1.3.0 trustme==1.1.0 uvicorn==0.23.2 ``` **Additional context** - Which session are you using? async or sync? - If using async session, which loop implementation are you using? sync (may apply to async?)
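The warning fires on the final error-reporting line of `setopt`, which interpolates a bytes `value` with `%s`; under `-bb` that `str()`-on-bytes is promoted to an error. A small demonstration of the distinction (the `%r` fix shown here is one possible remedy, not necessarily the one the maintainers chose):

```python
value = b"http://127.0.0.1:8000/"

# "%s" implicitly calls str() on the bytes object, which is exactly what the
# -bb interpreter option flags; "%r" (repr) is safe under BytesWarning.
safe = "setopt(%s, %r)" % ("CurlOpt.URL", value)
print(safe)  # setopt(CurlOpt.URL, b'http://127.0.0.1:8000/')
```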
closed
2023-12-03T00:07:17Z
2024-01-01T04:00:32Z
https://github.com/lexiforest/curl_cffi/issues/168
[ "bug" ]
coletdjnz
1
SciTools/cartopy
matplotlib
1,675
unusable due to lack of adequate documentation
### Description <!-- Please provide a general introduction to the issue/proposal. --> I am forced to transition code from the basemap package (for which I had many fixes) to cartopy, as basemap no longer compiles from PyPI on Python 3.9. Unfortunately, the documentation - and therefore the package - is not really usable. And whereas things may be a lot better in cartopy, how to use it in the first place remains obscure. For example, the first item in the documentation is https://scitools.org.uk/cartopy/docs/latest/crs/index.html, which comes with a lot of mumbo-jumbo words; it is claimed this is at the core of the package, but there is not even a single example. I feel you have taken the Monty Python origin of Python too literally. If you just read that first page as a starting point, it seems the package escaped directly from a Monty Python episode. Then, for some projections, there is no doc at all, such as `PlateCarree`. If you were able to provide a good introduction that gives any insight at all into what is going on, that would be much appreciated by all, I believe.
open
2020-10-31T13:38:07Z
2020-11-17T13:17:28Z
https://github.com/SciTools/cartopy/issues/1675
[]
2sn
3
ray-project/ray
deep-learning
50,823
[Train] Unable to gain long-term access to S3 storage for training state/checkpoints when running on AWS EKS
### What happened + What you expected to happen I am using Ray Train + KubeRay to train on AWS EKS. I have configured EKS to give Ray pods a specific IAM role through Pod Identity Association. EKS Pod Identity Association provides an HTTP endpoint through which an application can "assume said IAM role" - i.e. gain a set of temporary credentials to AWS services. The `boto3` library, for example, is able to automatically use this method to obtain credentials without any special user action. However, it seems `pyarrow.fs.S3FileSystem` is the only (built-in) vehicle for Ray Train to use S3 as storage. Unfortunately, `pyarrow.fs.S3FileSystem` is unable to automatically gain access through Pod Identity Association. I have filed a report about that in the Apache Arrow repository: https://github.com/apache/arrow/issues/45603 As mentioned in the Apache Arrow issue report, it is possible for the user to manually obtain a set of temporary credentials from Pod Identity Association and pass them to `S3FileSystem` as constructor arguments. However, Ray Train will keep using this same instance of `S3FileSystem` throughout training, during which time the temporary credentials may expire. It is possible, however, for the user to store the temporary credentials into the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN, and an `S3FileSystem` created with no arguments will pick up those credentials. Moreover, the user can also keep updating these environment variables with new credentials when an old set expires, and the same instance of `S3FileSystem` will use the new credentials in every method call. This method of manually updating temporary credentials in environment variables is the only way I have found to give Ray Train long-term access to S3 through Pod Identity Association. 
However, as mentioned in the Apache Arrow ticket, this has its own drawbacks: - The environment variables affect the entire Python process - it is not possible to give specific credentials to S3FileSytem. - The user needs to go through a significant effort to implement updating the environment variables in the context of a Ray Train session, and it is not clear whether it can be done reliably and safely at all. I have found the environment variables need to be updated in the process where TorchTrainer was created. I have found two ways to do this, neither ideal: - The user can do it on a separate thread, potentially risking race conditions with S3FileSytem use by Ray Train. - The user can implement a custom `ray.tune.Callback`, hoping to avoid thread safety issues that way. However: - The existing callback methods are all related to the different points in the progression of training and so inherently coupled to the timing of it, rather than the timing constraints of the credential expiry. - Moreover, it is still possible Ray Train uses S3FileSystem on a different thread under the hood, so it is not clear whether this is thread safe either. While this issue could be resolved in `pyarrow.fs.S3FileSystem`, as suggested in the Apache Arrow bug report, I see possible changes in Ray Train that would alleviate the issue. Specifically: - Ray Train could switch to accessing S3 through `boto3` instead of `pyarrow.fs.S3FileSystem`, since recent versions of `boto3` provide seamless integration with EKS Pod Identity Association. - Alternatively, Ray Train could provide a specific, safe way for the user to periodically update the instance of `S3FileSystem` used by Ray Train. This way 1. an update period could be guaranteed independently of the progression of training, and 2. the user could pass temporary credentials to `S3FileSystem` constructor arguments, not affecting the rest of the Python process. 
It is worth noting that the same issue occurs when using Ray Train in a context where AWS access is provided through SSO, and you will find reproduction instructions below for both cases. However, the main blocker for my production environment is using it on AWS EKS, so I am focusing this bug report on that. ### Versions / Dependencies Ray version: 2.42.1 ### Reproduction script #### Alternative 1: KubeRay + EKS Set up: - an EKS cluster - an IAM role providing access to an S3 bucket (let's say it's called `my-bucket`) - set up an EKS Pod Identity Association assigning the IAM role to Ray pods (pods in a specific Kubernetes service account) - Install KubeRay operator on the EKS cluster. - Install a RayCluster on the EKS cluster. Submit Ray job using entrypoint listed below using `ray job submit -- python3 entrypoint.py` #### Alternative 2: Local Ray Cluster + AWS SSO Set up access to AWS using an SSO profile - providing access to an S3 bucket (let's say it's called `my-bucket`) Run the entrypoint listed below locally using `python3 entrypoint.py`, letting it create a local Ray cluster on the fly. 
#### Entrypoint (entrypoint.py) ``` import ray.train.torch import ray.train import ray ray.init() trainer = ray.train.torch.TorchTrainer( lambda: print("Hello"), scaling_config=ray.train.ScalingConfig(num_workers=1), run_config=ray.train.RunConfig( storage_path="s3://my-bucket", name="s3fs-bug", ) ) result = trainer.fit() ``` The job errors with this message: ``` Status message: Job entrypoint command failed with exit code 1, last available logs (truncated to 20,000 chars): result = trainer.fit() File "/usr/local/lib/python3.10/site-packages/ray/train/base_trainer.py", line 589, in fit storage = StorageContext( File "/usr/local/lib/python3.10/site-packages/ray/train/_internal/storage.py", line 461, in __init__ self._create_validation_file() File "/usr/local/lib/python3.10/site-packages/ray/train/_internal/storage.py", line 489, in _create_validation_file self.storage_filesystem.create_dir(self.experiment_fs_path) File "pyarrow/_fs.pyx", line 603, in pyarrow._fs.FileSystem.create_dir File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status OSError: When testing for existence of bucket 'my-bucket': AWS Error ACCESS_DENIED during HeadBucket operation: No response body. ``` ### Issue Severity Medium: It poses a significant difficulty though I can work around it.
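The env-var workaround described above can be sketched as a small helper. This is a sketch, not Ray Train API; the credential key names follow the shape of the container-credentials response and are illustrative:

```python
import os

def apply_temporary_credentials(creds: dict) -> None:
    """Copy fresh temporary credentials into the env vars that a no-argument
    pyarrow S3FileSystem re-reads, so the same filesystem instance picks up
    rotated credentials. (`creds` key names are illustrative.)"""
    os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
    os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
    os.environ["AWS_SESSION_TOKEN"] = creds["Token"]

# In practice `creds` would come from the Pod Identity HTTP endpoint;
# fake values here for demonstration.
apply_temporary_credentials(
    {"AccessKeyId": "AKIAEXAMPLE", "SecretAccessKey": "secretexample", "Token": "tokenexample"}
)
```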
open
2025-02-22T05:18:57Z
2025-03-05T01:25:36Z
https://github.com/ray-project/ray/issues/50823
[ "bug", "P1", "triage", "train" ]
jleben
1
davidsandberg/facenet
computer-vision
1,104
When calculating ACC and VAL on the LFW dataset, the threshold values are different. How can I determine the threshold value in a practical application?
![1572599379(1)](https://user-images.githubusercontent.com/24631762/68014429-8535ce00-fcca-11e9-9745-273ea20f7055.png)
open
2019-11-01T09:10:42Z
2019-11-01T09:10:42Z
https://github.com/davidsandberg/facenet/issues/1104
[]
Deepcong2019
0
psf/requests
python
6,394
Failure to fetch data / requests.get returns 503 while curl is 200
requests.get fails to fetch the page, while curl on the same machine gets a 200. Additionally, the same code works on macOS 13.0.1 ## Expected Result ```py import requests # with version 2.28.2 res = requests.get("https://www.reichelt.com/raspberry-pi-zero-2-w-4x-1-ghz-512-mb-ram-wlan-bt-rasp-pi-zero2-w-p313902.html?CCOUNTRY=445&LANGUAGE=de") assert res.status_code == 200 ``` with `curl "https://www.reichelt.com/raspberry-pi-zero-2-w-4x-1-ghz-512-mb-ram-wlan-bt-rasp-pi-zero2-w-p313902.html?CCOUNTRY=445&LANGUAGE=de"` which returns the expected page. Both ran on the same machine with python 3.10. ## Actual Result `requests.get` returns http `503` ## Reproduction Steps see above. ## System Information $ python -m requests.help ```json { "chardet": { "version": "4.0.0" }, "charset_normalizer": { "version": "3.1.0" }, "cryptography": { "version": "3.4.8" }, "idna": { "version": "3.4" }, "implementation": { "name": "CPython", "version": "3.10.6" }, "platform": { "release": "5.15.0-1025-raspi", "system": "Linux" }, "pyOpenSSL": { "openssl_version": "30000020", "version": "21.0.0" }, "requests": { "version": "2.28.2" }, "system_ssl": { "version": "30000020" }, "urllib3": { "version": "1.26.15" }, "using_charset_normalizer": false, "using_pyopenssl": true } ``` <!-- This command is only available on Requests v2.16.4 and greater. Otherwise, please provide some basic information about your system (Python version, operating system, &c). --> Pi 4 with Ubuntu 22.04.2 LTS
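A common cause of "503 for requests, 200 for curl" is server-side filtering on the default `python-requests/...` User-Agent (TLS fingerprinting is another possibility that headers alone won't fix). A stdlib sketch of building a request with a curl-like UA as a first diagnostic; the curl version string is illustrative:

```python
import urllib.request

url = ("https://www.reichelt.com/raspberry-pi-zero-2-w-4x-1-ghz-512-mb-ram-"
       "wlan-bt-rasp-pi-zero2-w-p313902.html?CCOUNTRY=445&LANGUAGE=de")

# Build (but don't send) a request that advertises itself like curl; if this
# gets a 200 where the default UA gets a 503, the block is UA-based.
req = urllib.request.Request(url, headers={"User-Agent": "curl/7.88.1"})
```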
closed
2023-03-26T15:43:42Z
2024-03-28T00:03:40Z
https://github.com/psf/requests/issues/6394
[]
Hu1buerger
5
serengil/deepface
deep-learning
570
dlib called with BGR instead of RGB?
I've been reading through your code and most detector backend wrappers convert the colorspace from BGR to RGB before calling the external detector. However, the dlib wrapper doesn't seem to do that and AFAIK dlib does expect RGB. Is that a bug? (also please take a look at https://github.com/serengil/deepface/pull/563#issuecomment-1257541112)
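For reference, the conversion the other detector wrappers effectively apply is just a channel-axis reversal; a minimal numpy sketch (equivalent in effect to `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)`):

```python
import numpy as np

# A 2x2 "image" that is pure blue in BGR channel order.
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255            # channel 0 = blue in BGR

# Reversing the last axis swaps B and R, yielding RGB order for dlib.
rgb = bgr[..., ::-1]
```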
closed
2022-10-04T08:47:35Z
2022-10-04T12:19:48Z
https://github.com/serengil/deepface/issues/570
[ "question" ]
Jille
3
chiphuyen/stanford-tensorflow-tutorials
nlp
130
examples/04_eager_repl_demo.py is not found
The slide of lecture 4 says there is a examples/04_eager_repl_demo.py, which is not found in the repository.
open
2018-08-14T02:19:16Z
2018-12-10T14:55:55Z
https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/130
[]
albertwang21
1
httpie/cli
api
682
What should I do if i wanna post some fields which including keyword?
Like this one: http POST http://localhost:8080/apis appId=123 urlPattern=/test GET=get PUT=put POST=post DELETE=delete
closed
2018-06-01T09:33:49Z
2018-06-03T09:00:59Z
https://github.com/httpie/cli/issues/682
[]
RudolphBrown
1
deepinsight/insightface
pytorch
2,030
Do We Need to Normalize the Output?
I saw that you are normalizing the embedding output [here](https://github.com/deepinsight/insightface/blob/master/recognition/arcface_mxnet/verification.py#L321). Do we need to normalize the embedding? If I remove the normalization, the performance drops significantly. But how does this work in a real-time implementation if we need to normalize the embedding? Thank you
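For context, L2-normalization is cheap enough for real-time use: one norm and one division per embedding, after which cosine similarity reduces to a plain dot product. A minimal numpy sketch:

```python
import numpy as np

# Toy embedding; real ArcFace embeddings are 512-d but the math is the same.
emb = np.array([3.0, 4.0])

# L2-normalize: divide by the Euclidean norm so the vector has unit length.
normed = emb / np.linalg.norm(emb)
print(normed)  # [0.6 0.8]
```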
open
2022-06-15T08:07:08Z
2022-06-18T03:34:16Z
https://github.com/deepinsight/insightface/issues/2030
[]
Malikanhar
2
bloomberg/pytest-memray
pytest
2
Empty report when running with pytest-xdist
## Bug Report **Current Behavior** The plugin produces an empty output when running tests with [pytest-xdist](https://github.com/pytest-dev/pytest-xdist). **Input Code** ```python a = [] def something(): for i in range(10_000): a.append(i) def test_something(): something() ``` Running multiple processes: ![image](https://user-images.githubusercontent.com/9638362/164390453-58633538-c132-48e5-8d25-6983084808d4.png) Running without distribution: ![image](https://user-images.githubusercontent.com/9638362/164390513-de195022-a765-47f6-9c8a-f0b18210a33c.png) **Environment** - Python(s): python3.9, pytest-xdist, pytest-memray **Possible-solutions**: For the beginning, it would be sufficient to detect running tests with pytest-xdist and warn the user that `-n0` is required to make pytest-memray work. Of course, moving forward, it would be great to make both plugins work together. Running tests in a single process is too hardcore for some projects.
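The detection step suggested above can be sketched with the environment variable pytest-xdist sets in its worker processes (a sketch of the warn-the-user approach, not pytest-memray code):

```python
import os

def running_under_xdist() -> bool:
    """pytest-xdist sets PYTEST_XDIST_WORKER (e.g. "gw0") in each worker
    process; a plugin could check it and warn that -n0 is required."""
    return "PYTEST_XDIST_WORKER" in os.environ

os.environ.pop("PYTEST_XDIST_WORKER", None)
main_process = running_under_xdist()        # False: no worker env var set
os.environ["PYTEST_XDIST_WORKER"] = "gw0"   # simulate an xdist worker
worker_process = running_under_xdist()      # True
os.environ.pop("PYTEST_XDIST_WORKER")
```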
closed
2022-04-21T06:46:31Z
2022-11-16T17:29:22Z
https://github.com/bloomberg/pytest-memray/issues/2
[ "bug", "help wanted" ]
orsinium
6
pytorch/vision
machine-learning
8,654
Transforms v2.RandomRotation fail when expand=True with ValueError(f"Found multiple HxW dimensions in the sample: {sequence_to_str(sorted(sizes))}")
### 🐛 Describe the bug ## Example: ```python import torchvision from torchvision.transforms import v2 transforms = v2.Compose([ v2.ToImage(), v2.RandomRotation(degrees=(-45, 45), expand=True), ]) # Set up COCO dataset train_dataset = torchvision.datasets.CocoDetection(coco_img_root, coco_ann_root, transforms=transforms) train_dataset = torchvision.datasets.wrap_dataset_for_transforms_v2(train_dataset, target_keys=['boxes', 'labels']) img, labels = train_dataset[0] ``` ## Suggested Solution The off by one error is thrown in v2.RandomRotation when expand=True. A similar error is referenced for image rotation in issue #7714, however a similar solution was not implemented for bounding boxes. The issue can be solved by changing line 837 of torchvision/transforms/v2/functional/_geometry.py in `_affine_bounding_boxes_with_expand()`. from: ```python def _affine_bounding_boxes_with_expand() ... if expand: .... # Estimate meta-data for image with inverted=True affine_vector = _get_inverse_affine_matrix(center, angle, translate, scale, shear) ... ``` to: ```python def _affine_bounding_boxes_with_expand() ... if expand: .... # Estimate meta-data for image with inverted=True affine_vector = _get_inverse_affine_matrix([0, 0], angle, translate, scale, shear) ... ``` ### Versions Collecting environment information... 
PyTorch version: 2.4.0+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Linux Mint 21.3 (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.35 Python version: 3.11.10 (main, Sep 7 2024, 18:35:41) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Vendor ID: AuthenticAMD Model name: AMD Ryzen 5 3500U with Radeon Vega Mobile Gfx CPU family: 23 Model: 24 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 Stepping: 1 Frequency boost: enabled CPU max MHz: 2100.0000 CPU min MHz: 1400.0000 BogoMIPS: 4192.45 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev sev_es Virtualization: AMD-V L1d cache: 128 KiB (4 instances) L1i cache: 256 KiB (4 instances) L2 
cache: 2 MiB (4 instances) L3 cache: 4 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-7 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable Vulnerability Spec rstack overflow: Mitigation; Safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==2.1.0 [pip3] torch==2.4.0+cu124 [pip3] torchmetrics==1.4.1 [pip3] torchvision==0.19.0+cu124 [pip3] triton==3.0.0 [conda] Could not collect
open
2024-09-19T17:02:01Z
2024-10-11T14:26:54Z
https://github.com/pytorch/vision/issues/8654
[]
jeff-zimmerman
6
dropbox/PyHive
sqlalchemy
154
Replace hive.resultset.use.unique.column.names test config with sqlalchemy dialect code
See this SQLAlchemy commit: ``` commit 8e24584d8d242d40d605752116ac05be33f697d3 Author: Mike Bayer <mike_mp@zzzcomputing.com> Date: Sun Dec 5 00:46:11 2010 -0500 ... - move the "check for a dot in the colname" logic out to the sqlite dialect. ``` SQLAlchemy 0.6 had this snippet of code, which masked the issue. ``` if '.' in colname: # sqlite will in some circumstances prepend table name to # colnames, so strip origname = colname colname = colname.split('.')[-1] else: origname = None ```
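The quoted SQLAlchemy 0.6 snippet, restated as a standalone function for reference (this sketches the old masking behavior, not the dialect-level fix the issue asks for):

```python
def strip_table_prefix(colname: str):
    """sqlite (and Hive with hive.resultset.use.unique.column.names) may
    prepend the table name to column names; keep the original around and
    return the bare column name after the last dot."""
    if "." in colname:
        return colname, colname.split(".")[-1]
    return None, colname

origname, colname = strip_table_prefix("my_table.my_col")
print(colname)  # my_col
```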
closed
2017-08-31T23:17:51Z
2017-09-01T18:57:05Z
https://github.com/dropbox/PyHive/issues/154
[]
jingw
0
newpanjing/simpleui
django
5
Good thing! Wishing new update!
Bugs: 1. The label of an opened tab cannot be refreshed. Suggestions: 1. It would be better if 'layui-header my-header' could be user-defined. 2. It would be better if the style and colors could be customized, the same as the django admin style.
closed
2019-02-26T03:41:25Z
2019-03-01T10:30:45Z
https://github.com/newpanjing/simpleui/issues/5
[]
duxiaoyang06
5
yt-dlp/yt-dlp
python
11,739
How to stop the extract_info execution before returning the result?
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm asking a question and **not** reporting a bug or requesting a feature - [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme) - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) ### Please make sure the question is worded well enough to be understood Using the extract_info method on a url, is there a way to set the execution timeout? I tried, but after reaching the timeout, extract_info still seems to be running, how do I stop it? 
This is my current code: ```python def get_info_yt_dlp(url, timeout=config.sort_timeout): """ Get the url info by yt_dlp """ ydl_opts = { "socket_timeout": timeout, "skip_download": True, "quiet": True, "no_warnings": True, "format": "best" } with yt_dlp.YoutubeDL(ydl_opts) as ydl: return ydl.sanitize_info(ydl.extract_info(url, download=False)) async def get_speed_yt_dlp(url, timeout=config.sort_timeout): """ Get the speed of the url by yt_dlp """ try: start_time = time() info = await asyncio.wait_for( asyncio.to_thread(get_info_yt_dlp, url, timeout), timeout=timeout ) speed = int(round((time() - start_time) * 1000)) if len(info) else float("inf") return speed except: return float("inf") ``` ### Provide verbose output that clearly demonstrates the problem - [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output _No response_
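One way to get a hard stop, sketched below: `asyncio.to_thread` runs `extract_info` on a thread, and Python threads cannot be cancelled, but a worker process can be terminated. This is a generic sketch, not a yt-dlp feature (note that `socket_timeout` only bounds individual socket operations, not total extraction time):

```python
import multiprocessing as mp

def call_with_hard_timeout(fn, args, timeout):
    """Run fn(*args) in a worker process; on timeout, leaving the `with`
    block terminates the worker, actually stopping the work."""
    with mp.Pool(1) as pool:
        result = pool.apply_async(fn, args)
        try:
            return result.get(timeout=timeout)
        except mp.TimeoutError:
            return None  # worker is killed when the pool context exits

# fn and its args must be picklable; a builtin works for the demo.
fast = call_with_hard_timeout(sorted, ([3, 1, 2],), 5)
```

For yt-dlp, the equivalent would be running the `get_info_yt_dlp` helper above in the worker instead of `sorted`.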
closed
2024-12-05T02:28:03Z
2024-12-05T04:02:41Z
https://github.com/yt-dlp/yt-dlp/issues/11739
[ "question", "incomplete" ]
Guovin
2
pytest-dev/pytest-xdist
pytest
1,057
Add strict/complete typing
The current typing is very imcomplete. To make the plugin more maintainable and understandable, add typing to all of the currently untyped code and apply stricter mypy config. Prerequisite tasks: - execnet typing (done, needs an execnet release + bump execnet requirement) - bump min pytest requirement from 6.2 to 7.0 -- while 6.2 is typed it is missing a lot of stuff, 7.0 is the first version with mostly complete typing
closed
2024-04-04T16:50:51Z
2024-04-16T20:36:43Z
https://github.com/pytest-dev/pytest-xdist/issues/1057
[]
bluetech
0
aidlearning/AidLearning-FrameWork
jupyter
166
App crashes immediately when opened after installation
05-28 08:01:37.137 28820 28912 E JNIKey : JNI_OnLoad 05-28 08:01:37.137 28820 28912 E JNIKey : register native methods 05-28 08:01:37.138 28820 28912 E JNIKey : register native methods success 05-28 08:01:37.140 28820 28912 E JNIKey : APP PACKAGE NAME:com.aidlux 05-28 08:01:37.140 28820 28912 E JNIKey : DECRYPT_KEY len:98 05-28 08:01:37.140 28820 28912 E JNIKey : DECRYPT_KEY loop:0 [... identical "DECRYPT_KEY loop:N" lines continue through loop:97 ...] 05-28 08:01:37.465 28820 28912 E JNIKey : APP PACKAGE NAME:com.aidlux 05-28 08:01:37.465 28820 28912 E JNIKey : DECRYPT_KEY len:98 05-28 08:01:37.465 28820 28912 E JNIKey : DECRYPT_KEY loop:0 [... the same "DECRYPT_KEY loop:N" sequence repeats a second time; log truncated mid-line ...]
DECRYPT_KEY loop:23 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:24 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:25 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:26 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:27 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:28 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:29 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:30 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:31 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:32 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:33 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:34 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:35 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:36 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:37 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:38 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:39 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:40 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:41 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:42 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:43 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:44 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:45 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:46 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:47 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:48 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:49 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:50 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:51 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:52 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:53 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:54 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY 
loop:55 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:56 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:57 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:58 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:59 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:60 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:61 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:62 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:63 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:64 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:65 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:66 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:67 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:68 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:69 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:70 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:71 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:72 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:73 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:74 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:75 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:76 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:77 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:78 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:79 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:80 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:81 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:82 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:83 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:84 05-28 08:01:37.466 28820 28912 E JNIKey : DECRYPT_KEY loop:85 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:86 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:87 05-28 
08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:88 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:89 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:90 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:91 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:92 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:93 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:94 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:95 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:96 05-28 08:01:37.467 28820 28912 E JNIKey : DECRYPT_KEY loop:97 05-28 08:01:37.468 28820 28912 D NetworkSecurityConfig: No Network Security Config specified, using platform default 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY len:96 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:0 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:1 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:2 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:3 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:4 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:5 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:6 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:7 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:8 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:9 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:10 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:11 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:12 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:13 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:14 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:15 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:16 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:17 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:18 05-28 08:01:39.931 28820 28912 E JNIKey 
: DECRYPT_KEY loop:19 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:20 05-28 08:01:39.931 28820 28912 E JNIKey : DECRYPT_KEY loop:21 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:22 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:23 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:24 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:25 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:26 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:27 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:28 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:29 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:30 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:31 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:32 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:33 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:34 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:35 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:36 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:37 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:38 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:39 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:40 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:41 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:42 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:43 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:44 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:45 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:46 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:47 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:48 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:49 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:50 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY 
loop:51 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:52 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:53 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:54 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:55 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:56 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:57 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:58 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:59 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:60 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:61 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:62 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:63 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:64 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:65 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:66 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:67 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:68 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:69 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:70 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:71 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:72 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:73 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:74 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:75 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:76 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:77 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:78 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:79 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:80 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:81 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:82 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:83 05-28 
08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:84 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:85 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:86 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:87 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:88 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:89 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:90 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:91 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:92 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:93 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:94 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:95 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY len:96 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:0 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:1 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:2 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:3 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:4 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:5 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:6 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:7 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:8 05-28 08:01:39.932 28820 28912 E JNIKey : DECRYPT_KEY loop:9 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:10 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:11 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:12 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:13 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:14 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:15 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:16 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:17 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:18 05-28 08:01:39.933 28820 28912 E 
JNIKey : DECRYPT_KEY loop:19 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:20 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:21 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:22 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:23 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:24 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:25 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:26 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:27 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:28 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:29 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:30 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:31 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:32 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:33 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:34 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:35 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:36 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:37 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:38 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:39 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:40 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:41 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:42 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:43 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:44 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:45 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:46 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:47 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:48 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:49 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:50 05-28 08:01:39.933 28820 28912 E JNIKey : 
DECRYPT_KEY loop:51 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:52 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:53 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:54 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:55 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:56 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:57 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:58 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:59 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:60 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:61 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:62 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:63 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:64 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:65 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:66 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:67 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:68 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:69 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:70 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:71 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:72 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:73 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:74 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:75 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:76 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:77 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:78 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:79 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:80 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:81 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:82 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY 
loop:83 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:84 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:85 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:86 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:87 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:88 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:89 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:90 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:91 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:92 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:93 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:94 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:95 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY len:96 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:0 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:1 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:2 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:3 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:4 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:5 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:6 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:7 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:8 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:9 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:10 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:11 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:12 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:13 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:14 05-28 08:01:39.933 28820 28912 E JNIKey : DECRYPT_KEY loop:15 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:16 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:17 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:18 05-28 08:01:39.934 
28820 28912 E JNIKey : DECRYPT_KEY loop:19 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:20 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:21 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:22 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:23 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:24 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:25 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:26 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:27 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:28 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:29 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:30 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:31 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:32 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:33 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:34 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:35 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:36 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:37 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:38 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:39 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:40 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:41 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:42 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:43 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:44 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:45 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:46 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:47 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:48 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:49 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:50 05-28 08:01:39.934 28820 28912 E 
JNIKey : DECRYPT_KEY loop:51 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:52 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:53 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:54 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:55 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:56 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:57 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:58 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:59 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:60 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:61 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:62 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:63 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:64 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:65 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:66 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:67 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:68 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:69 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:70 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:71 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:72 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:73 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:74 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:75 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:76 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:77 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:78 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:79 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:80 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:81 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:82 05-28 08:01:39.934 28820 28912 E JNIKey : 
DECRYPT_KEY loop:83 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:84 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:85 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:86 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:87 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:88 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:89 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:90 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:91 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:92 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:93 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:94 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:95 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY len:96 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:0 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:1 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:2 05-28 08:01:39.934 28820 28912 E JNIKey : DECRYPT_KEY loop:3 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:4 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:5 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:6 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:7 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:8 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:9 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:10 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:11 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:12 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:13 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:14 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:15 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:16 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:17 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:18 05-28 
08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:19 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:20 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:21 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:22 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:23 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:24 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:25 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:26 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:27 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:28 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:29 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:30 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:31 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:32 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:33 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:34 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:35 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:36 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:37 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:38 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:39 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:40 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:41 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:42 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:43 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:44 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:45 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:46 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:47 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:48 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:49 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:50 05-28 08:01:39.935 
28820 28912 E JNIKey : DECRYPT_KEY loop:51 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:52 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:53 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:54 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:55 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:56 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:57 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:58 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:59 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:60 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:61 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:62 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:63 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:64 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:65 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:66 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:67 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:68 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:69 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:70 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:71 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:72 05-28 08:01:39.935 28820 28912 E JNIKey : DECRYPT_KEY loop:73 05-28 08:01:39.937 28820 28912 E com.aidlux: No implementation found for java.lang.String com.aidlux.protection.JNIKey.getKey5() (tried Java_com_aidlux_protection_JNIKey_getKey5 and Java_com_aidlux_protection_JNIKey_getKey5__) 05-28 08:01:39.938 28820 28912 E AndroidRuntime: FATAL EXCEPTION: Thread-5 05-28 08:01:39.938 28820 28912 E AndroidRuntime: Process: com.aidlux, PID: 28820 05-28 08:01:39.938 28820 28912 E AndroidRuntime: java.lang.UnsatisfiedLinkError: No implementation found for java.lang.String com.aidlux.protection.JNIKey.getKey5() (tried 
Java_com_aidlux_protection_JNIKey_getKey5 and Java_com_aidlux_protection_JNIKey_getKey5__) 05-28 08:01:39.938 28820 28912 E AndroidRuntime: at com.aidlux.protection.JNIKey.getKey5(Native Method) 05-28 08:01:39.938 28820 28912 E AndroidRuntime: at com.aidlux.app.i0$a.run(AidluxInstaller.java:49) 05-28 08:01:40.089 28820 28912 I Process : Sending signal. PID: 28820 SIG: 9
closed
2021-05-28T00:05:49Z
2021-05-28T13:34:20Z
https://github.com/aidlearning/AidLearning-FrameWork/issues/166
[]
xylake
0
noirbizarre/flask-restplus
flask
458
'Namespace' object has no attribute 'representation'
I am moving my project to namespaces. It looks like `representation` is a method on the `Api` class only. I receive this error:

```python
  File "C:\Users\xxxx\Envs\apidemo\apis\__init__.py", line 2, in <module>
    from HR import api as ns1
  File "C:\Users\xxx\Envs\apidemo\apis\HR.py", line 27, in <module>
    @api.representation('application/xml')
AttributeError: 'Namespace' object has no attribute 'representation'
```

HR.py code:

```python
from flask_restplus import Namespace, Resource, fields

api = Namespace('HR', description='HR APIs')

@api.representation('application/xml')
def xml(data, code, headers):
    resp = make_response(convert_data_to_xml(data), code)
    resp.headers.extend(headers)
    return resp
```
closed
2018-06-02T19:53:41Z
2019-08-30T16:48:12Z
https://github.com/noirbizarre/flask-restplus/issues/458
[]
manbeing
1
LibrePhotos/librephotos
django
678
Access to ML model in Faces
I would like to see a feature in LibrePhotos that guesses the name of the person in a photo (cropped to the face or not) without uploading it into the archive. Recently I tried to add a face recognition plugin to the video surveillance system ZoneMinder, and the result wasn't good enough: it has special requirements for training photos and other limits. At the same time I have an instance of LibrePhotos with a model well trained on 35k+ faces of 250+ persons. Is it possible?
open
2022-11-09T19:06:56Z
2022-11-09T19:06:56Z
https://github.com/LibrePhotos/librephotos/issues/678
[ "enhancement" ]
legioner0
0
flavors/django-graphql-jwt
graphql
205
Custom authentication backend with custom input fields
Hi, I want to build custom authentication without using a password as an input field. I want to send three input variables: username, msg, publickey and validate them in a custom Django authentication backend. How can I have three fields and pass them to the authenticate method of a Django authentication backend?
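For reference, a Django authentication backend's `authenticate()` receives whatever keyword arguments are passed to `django.contrib.auth.authenticate()`, so three custom fields are just three keyword parameters. A minimal stand-alone sketch (the `PublicKeyBackend` name, the signature-verification stub, and the user lookup are all hypothetical; a real backend would return a Django user object):

```python
class PublicKeyBackend:
    """Hypothetical backend: authenticates by verifying that `msg` was
    produced by the holder of `publickey` for the given `username`."""

    def authenticate(self, request, username=None, msg=None, publickey=None):
        if not (username and msg and publickey):
            return None  # missing credentials: let other backends try
        if self.verify(msg, publickey):
            return self.get_or_create_user(username)
        return None

    def verify(self, msg, publickey):
        # stand-in for real signature verification
        return msg == f"signed-by:{publickey}"

    def get_or_create_user(self, username):
        # in Django this would be get_user_model().objects.get_or_create(...)
        return {"username": username}
```

With django-graphql-jwt, the extra fields would then be declared as arguments on the mutation and forwarded to `authenticate(request, username=..., msg=..., publickey=...)`.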
closed
2020-06-02T06:10:44Z
2020-08-02T11:02:07Z
https://github.com/flavors/django-graphql-jwt/issues/205
[]
amiyatulu
2
miguelgrinberg/python-socketio
asyncio
545
python-socketio token authentication with tornado web server
I am using python-socketio with a tornado web server. I am trying to get the authentication token from the client and check it on the tornado server via python-socketio. There are some examples for the JavaScript client and the NodeJS server. I found that Flask-SocketIO also has a fairly smooth implementation of token authentication. Is there any way to check the authentication token for tornado with python-socketio? Thanks.
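Regardless of the web server, the `connect` handler receives a WSGI-style `environ` dict, so the token check itself reduces to plain header parsing. A hedged sketch of that part only (the `Authorization: Bearer` scheme and the set-of-valid-tokens check are assumptions):

```python
def extract_bearer_token(environ):
    """Pull a bearer token from an HTTP_AUTHORIZATION header, if present."""
    header = environ.get("HTTP_AUTHORIZATION", "")
    if header.startswith("Bearer "):
        return header[len("Bearer "):]
    return None

def accept_connection(environ, valid_tokens):
    """Decide whether to accept a Socket.IO client based on its token."""
    token = extract_bearer_token(environ)
    return token is not None and token in valid_tokens
```

In python-socketio this logic would run inside the `connect` event handler, returning `False` (or raising a connection-refused error) to reject the client.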
closed
2020-09-14T06:10:28Z
2021-04-06T13:24:28Z
https://github.com/miguelgrinberg/python-socketio/issues/545
[ "question" ]
kamranhossain
7
tensorflow/tensor2tensor
deep-learning
1,617
Residual connections in the decoder allow cheating?
The masked multi-head attention prevents cheating by masking out the tokens in the input sentence to the right of the current word, but the residual connections are unmasked, so do they allow cheating? Say we have our input sequence x for the decoder, and we apply the masked multi-head attention to get some representation Sublayer(x) that has no access to the tokens to the right due to the masking (-inf in the softmax). Now we apply the residual Add & Norm step `LayerNorm(x + Sublayer(x))`: only Sublayer(x) is masked, while in x we still have information about the words following our current word. There are residual connections in all decoder layers and sublayers, so some information from the original input sequence x can flow unmasked to the output and the network can cheat? What am I missing or misunderstanding? Why can't the network cheat, despite these residual connections?
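One way to check this numerically: the residual path at position i adds back only x_i, which never encoded tokens at positions greater than i, so changing a future token cannot change earlier outputs. A toy sketch (the stand-in "attention" just averages the visible prefix; the real sublayer differs, but the residual argument is the same):

```python
def masked_attention_with_residual(x, mask_future=True):
    """Toy sublayer: position i attends (averages) over x[0..i], then
    adds the residual x[i]. Only position i's own input re-enters."""
    out = []
    for i, xi in enumerate(x):
        visible = x[: i + 1] if mask_future else x
        attended = sum(visible) / len(visible)
        out.append(attended + xi)  # residual adds only x_i itself
    return out
```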
closed
2019-06-26T09:30:57Z
2019-07-18T19:53:58Z
https://github.com/tensorflow/tensor2tensor/issues/1617
[]
JannisBush
1
gevent/gevent
asyncio
1,861
Timeout up vs. down the call stack
What is the best way to do this? ```python try: ... # code that might for example get events or greenlets and therefore could raise Timeout except gevent.Timeout: if XXXX: raise # caused by a timeout context up the call stack # (I don't have a reference to the instance, # also could be more than one) else: ... # caused by a Timeout in the try-except block ``` In other words, if the timeout is caused by a context up the stack, it needs to bubble up, otherwise it needs to be absorbed (or whatever the logic is in the situation at hand). What should `XXXX` be?
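One common pattern is to keep a reference to the local timeout object and re-raise anything that is not it, since `gevent.Timeout` is raised as the instance itself. The snippet below mocks `Timeout` with a plain stdlib exception class to keep the sketch self-contained; real code would use `gevent.Timeout(seconds)` as a context manager:

```python
class Timeout(Exception):
    """Minimal stand-in for gevent.Timeout, to illustrate the identity check."""
    pass

def guarded_call(func, local_timeout):
    """Absorb only the timeout we own; re-raise timeouts owned by callers."""
    try:
        return func()
    except Timeout as t:
        if t is not local_timeout:
            raise  # raised by a timeout context further up the stack
        return "absorbed local timeout"
```

In other words, `XXXX` would be `t is not my_timeout`, which requires holding a reference to each local timeout instance; it does not cover the multiple-unknown-instances case the question raises.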
open
2022-01-26T21:05:13Z
2022-01-27T23:54:39Z
https://github.com/gevent/gevent/issues/1861
[]
woutdenolf
2
dunossauro/fastapi-do-zero
sqlalchemy
48
Health check e serviço de migration no Docker Compose
Hello to everyone on the course, especially @dunossauro!

I have a demo repository that I was basing on parts of the FastAPI do zero course, but since the Docker part was still an in-progress PR when I was preparing it, I looked for other sources. As a base example, I had followed [the recipe shared in this short article by Simon Willison for Django](https://til.simonwillison.net/docker/docker-compose-for-django-development).

Unlike the current version of the course, which handles this with an entrypoint, that recipe uses a separate service, which I replicated. However, when running it I ran into problems with the PostgreSQL service not being available yet.

Given that, I looked into how to require service availability before the FastAPI application comes up and made some changes:

1. A [docker-compose.yml](https://github.com/ayharano/cawa/blob/d044d5796ceaec411b32fe447ec620fcf911a77a/docker-compose.yml#L18-L44) that establishes those requirements before the application starts
2. A [Dockerfile.dev](https://github.com/ayharano/cawa/blob/d044d5796ceaec411b32fe447ec620fcf911a77a/Dockerfile.dev#L1-L45) that uses a [multi-stage build](https://docs.docker.com/build/building/multi-stage/) to produce a base stage and then configure one image for migrations and another for the application itself

Since the course is meant to be a hands-on overview, I don't know whether it would be a good idea to cover these topics in the Docker/Docker Compose lesson. But since I have run into problems in other applications with base services not being up before the application, and given that the course was the basis for the repository mentioned, I thought it would be polite to contribute back, initially as an issue and, if appropriate, later as a PR.
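The "wait for Postgres before migrating, migrate before serving" ordering described above can be expressed with Compose health checks plus `depends_on` conditions. A minimal sketch (service names, image tags, and commands are illustrative):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10

  migration:
    build: .
    command: alembic upgrade head
    depends_on:
      db:
        condition: service_healthy

  app:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0
    depends_on:
      migration:
        condition: service_completed_successfully
```

With this shape, Compose only starts `migration` once Postgres reports healthy, and only starts `app` once the migration service has exited successfully.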
closed
2023-11-27T17:20:08Z
2024-05-22T02:12:38Z
https://github.com/dunossauro/fastapi-do-zero/issues/48
[]
ayharano
1
jina-ai/clip-as-service
pytorch
296
Error with bert-serving-start "p.is_ready.wait"
I am unable to get the server up and running in Docker. I run into this error on start (full trace at the bottom of the issue).

```
  File "/usr/local/lib/python3.5/dist-packages/bert_serving/server/__init__.py", line 160, in _run
    p.is_ready.wait()
AttributeError: 'function' object has no attribute 'wait'
```

I noticed that the code associated with that error was added just 15 hours ago, so I thought that likely had something to do with the error. https://github.com/hanxiao/bert-as-service/commit/842f273755e8531681c6b041f07380a221d4568b#diff-a946a72133b5fd9202e68ef043143212

----

**Prerequisites**

* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?

**System information**

I'm following the docker install instructions, so I don't think this information is relevant. But I am running on CPU, so I'm using `tensorflow/tensorflow:1.12.0-py3` (I also tried `tensorflow/tensorflow:latest-py3`). That might have something to do with it?
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): MacOS Sierra: 10.12.6 - TensorFlow installed from (source or binary): (from docker) - TensorFlow version: (from docker) - Python version: - `bert-as-service` version: 1.8.7 - GPU model and memory: - - CPU model and memory: - --- ### Description This is my Dockerfile: ``` FROM tensorflow/tensorflow:latest-py3 RUN pip install -U bert-serving-server[http] COPY ./ /app COPY ./docker/entrypoint.sh /app RUN chmod +x /app/entrypoint.sh WORKDIR /app ENTRYPOINT ["/app/entrypoint.sh"] CMD [] ``` This is my `entrypoint.sh` file: ``` #!/bin/sh bert-serving-start -cpu -num_worker=$1 -model_dir /model -http_port 3011 ``` And my docker run command: ``` docker run -it -p 5555:5555 -p 5556:5556 -p 3011:3011 -v $PATH_MODEL:/model -t bert-as-service $NUM_WORKER ``` The process gets running, then shows this error and won't respond to http requests: ``` File "/usr/local/lib/python3.5/dist-packages/bert_serving/server/__init__.py", line 160, in _run p.is_ready.wait() AttributeError: 'function' object has no attribute 'wait' ``` Here's the full output: ``` usage: /usr/local/bin/bert-serving-start -cpu -num_worker=1 -model_dir /model -http_port 3011 ARG VALUE __________________________________________________ ckpt_name = bert_model.ckpt config_name = bert_config.json cors = * cpu = True device_map = [] fixed_embed_length = False fp16 = False gpu_memory_fraction = 0.5 graph_tmp_dir = None http_max_connect = 10 http_port = 3011 mask_cls_sep = False max_batch_size = 256 max_seq_len = 25 model_dir = /model num_worker = 1 pooling_layer = [-2] pooling_strategy = REDUCE_MEAN port = 5555 port_out = 5556 prefetch_size = 10 priority_batch_size = 16 show_tokens_to_client = False tuned_model_dir = None verbose = False xla = False I:VENTILATOR:[__i:__i: 66]:freeze, optimize and export graph, could take a while... 
I:GRAPHOPT:[gra:opt: 52]:model config: /model/bert_config.json I:GRAPHOPT:[gra:opt: 55]:checkpoint: /model/bert_model.ckpt I:GRAPHOPT:[gra:opt: 59]:build graph... WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md * https://github.com/tensorflow/addons If you depend on functionality not listed there, please file an issue. I:GRAPHOPT:[gra:opt:128]:load parameters from checkpoint... I:GRAPHOPT:[gra:opt:132]:optimize... I:GRAPHOPT:[gra:opt:140]:freeze... I:GRAPHOPT:[gra:opt:145]:write graph to a tmp file: /tmp/tmpmn3y4_p_ I:VENTILATOR:[__i:__i: 74]:optimized graph is stored at: /tmp/tmpmn3y4_p_ I:VENTILATOR:[__i:_ru:128]:bind all sockets I:VENTILATOR:[__i:_ru:132]:open 8 ventilator-worker sockets I:VENTILATOR:[__i:_ru:135]:start the sink I:SINK:[__i:_ru:303]:ready I:VENTILATOR:[__i:_ge:219]:get devices I:VENTILATOR:[__i:_ge:252]:device map: worker 0 -> cpu I:VENTILATOR:[__i:_ru:151]:start http proxy I:WORKER-0:[__i:_ru:514]:use device cpu, load graph from /tmp/tmpmn3y4_p_ I:WORKER-0:[__i:gen:542]:ready and listening! 
Exception in thread Thread-1: Traceback (most recent call last): File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner self.run() File "/usr/local/lib/python3.5/dist-packages/bert_serving/server/__init__.py", line 114, in run self._run() File "/usr/local/lib/python3.5/dist-packages/zmq/decorators.py", line 75, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.5/dist-packages/zmq/decorators.py", line 75, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.5/dist-packages/zmq/decorators.py", line 75, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.5/dist-packages/bert_serving/server/zmq_decor.py", line 27, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.5/dist-packages/bert_serving/server/__init__.py", line 160, in _run p.is_ready.wait() AttributeError: 'function' object has no attribute 'wait' ``` ...
closed
2019-03-28T22:51:57Z
2019-05-16T01:17:30Z
https://github.com/jina-ai/clip-as-service/issues/296
[]
gregsherrid
8
pydantic/logfire
fastapi
211
All records just disappeared in the UI
### Description

I have the following error in the console:

`120Error: <rect> attribute width: A negative value is not valid. ("-2.1666666666666665")`

<img width="800" alt="image" src="https://github.com/pydantic/logfire/assets/1642503/5a31bd25-6a55-4113-9de4-dc1dfa71d634">

### Python, Logfire & OS Versions, related packages (not required)

_No response_
closed
2024-05-24T22:20:55Z
2024-05-27T10:37:23Z
https://github.com/pydantic/logfire/issues/211
[ "bug" ]
bllchmbrs
2
pydantic/pydantic-settings
pydantic
43
Environment variables are not loaded for nested models with uppercase variable names
### Checks * [x] I added a descriptive title to this issue * [x] I have searched (google, github) for similar issues and couldn't find anything * [x] I have read and followed [the docs](https://pydantic-docs.helpmanual.io/) and still think this is a bug <!-- Sorry to sound so draconian, but every second saved replying to issues is time spend improving pydantic :-) --> Output of `python -c "import pydantic.utils; print(pydantic.utils.version_info())"`: ``` pydantic version: 1.9.0 pydantic compiled: True install path: /private/tmp/settings/.venv/lib/python3.10/site-packages/pydantic python version: 3.10.1 (main, Jan 6 2022, 17:54:04) [Clang 13.0.0 (clang-1300.0.29.30)] platform: macOS-11.6.4-arm64-arm-64bit optional deps. installed: ['dotenv', 'typing-extensions'] ``` # Bug When nesting a model in a `Settings` class and populating it with an environment file, pydantic does not match environment variables to variables of nested models if the latter is upper cased. Example (adapted from the tutorial). I'm thinking this is a bug because capitalization of non-nested models is handled correctly. However, if it's somehow expected behaviour, perhaps a mention in the documentation could help people out who tend to use upper case variable names for settings. 
Assume the following `.env` file:

```bash
export V0=0
export V1=0
export SUB_MODEL__V2=information
```

Then the following code runs fine:

```py
from pydantic import BaseModel, BaseSettings


class SubModel(BaseModel):
    v2: str  # <-- lower case


class Settings(BaseSettings):
    v0: str
    V1: str
    sub_model: SubModel

    class Config:
        env_nested_delimiter = "__"


s = Settings(_env_file='.env')
print(s.dict())
"""
{'v0': '0', 'V1': '0', 'sub_model': {'v2': 'information'}}
"""
```

but the following crashes (the only change is the capitalization of `SubModel.V2`):

```py
from pydantic import BaseModel, BaseSettings


class SubModel(BaseModel):
    V2: str  # <-- upper case


class Settings(BaseSettings):
    v0: str
    V1: str
    sub_model: SubModel

    class Config:
        env_nested_delimiter = "__"


s = Settings(_env_file='.env')
print(s.dict())
"""
Traceback (most recent call last):
  File "/private/tmp/settings/pyds.py", line 15, in <module>
    s = Settings(_env_file='.env')
  File "pydantic/env_settings.py", line 38, in pydantic.env_settings.BaseSettings.__init__
  File "pydantic/main.py", line 331, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Settings
sub_model -> V2
  field required (type=value_error.missing)
"""
```

I'm guessing that https://github.com/samuelcolvin/pydantic/issues/3115 is affected by the same bug (if it is a bug), but since capitalization isn't mentioned I thought it would be best to create a new issue.

PS: Thanks for a great library 🥇
closed
2022-03-25T10:28:08Z
2023-04-29T10:40:49Z
https://github.com/pydantic/pydantic-settings/issues/43
[]
0mar
5
ageitgey/face_recognition
python
1,070
PHP shell_exec and face_detection
* face_recognition version: latest
* Python version: 2.7.13
* Operating System: linux

### Description

`var_dump(shell_exec("/usr/local/bin/face_detection 1.jpg"));` always returns NULL, but the same command works in the terminal. How can I link face_detection and PHP? I always use the absolute path to face_detection and to the jpg, but with no success!

P/S. I got this error:

```
Traceback (most recent call last):
  File "/usr/local/bin/face_detection", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 696, in main
    _verify_python3_env()
  File "/usr/local/lib/python3.5/dist-packages/click/_unicodefun.py", line 124, in _verify_python3_env
    ' mitigation steps.' + extra
RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Consult https://click.palletsprojects.com/en/7.x/python3/ for mitigation steps. This system supports the C.UTF-8 locale which is recommended. You might be able to resolve your issue by exporting the following environment variables: export LC_ALL=C.UTF-8 export LANG=C.UTF-8
```

`LC_ALL=C.UTF-8` and `LANG=C.UTF-8` are already in .env
closed
2020-02-23T10:20:56Z
2020-02-23T15:11:59Z
https://github.com/ageitgey/face_recognition/issues/1070
[]
kko123
1
freqtrade/freqtrade
python
11,252
Help Needed: Freqtrade Running on Binance US - No Trades Being Made
<!-- Have you searched for similar issues before posting it? Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there Please do not use the question template to report bugs or to request new features. --> ## Describe your environment * Operating system: macOS 13 Ventura (running Freqtrade in a Docker container) * Python Version: 3.9.7 (python -V) * CCXT version: 4.4.42(`pip freeze | grep ccxt`) * Freqtrade Version:docker-2024.12-dev(`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker) ## Your question *Ask the question you have not been able to find an answer in the [Documentation](https://www.freqtrade.io/en/latest/)* I’ve set up Freqtrade on Binance US and followed the documentation and various tutorials. Despite trying multiple strategies from the [Freqtrade strategy repository](https://github.com/freqtrade/freqtrade-strategies/blob/main/README.md) and reusing their code, no trades are being executed in dry-run or live mode, even after running the bot for over 24 hours in some cases. Here's a detailed breakdown of the issue: Steps I’ve Taken: I’ve configured Freqtrade for Binance US with the following config.json snippet: ``` { "max_open_trades": 5, "stake_currency": "USDT", "stake_amount": "unlimited", "dry_run": true, "dry_run_wallet": 100000, "trading_mode": "spot", "exchange": { "name": "binanceus", "key": "#######", "secret": "#######", "ccxt_config": {"enableRateLimit": true}, "pair_whitelist": [ "BTC/USDT", "BCH/USDT", "ETH/USDT", "LINK/USDT", "LTC/USDT", "SOL/USDT", "XRP/USDT", "ADA/USDT", "DOT/USDT", "ETC/USDT", "ALGO/USDT" ], "pair_blacklist": [ "BNB/.*" ] }, "timeframe": "5m" } ``` I’ve tested at least five strategies, including ones from the Freqtrade strategy repository and simple custom strategies. 
Additionally, I’ve tried using a basic custom strategy, such as AlwaysBuySell, designed to always trigger trades (even if they result in losses). However, this strategy also fails to execute any trades. (Note: This is my own example, not from Freqtrade.) Backtests show promising results with these strategies, but no trades are made in dry-run or live mode. I’ve waited for over 24 hours during live runs and dry-run mode, yet no trades are triggered. Troubleshooting Steps: Verified that the bot connects to Binance US successfully and recognizes 178 active pairs. Adjusted stake_amount, max_open_trades, and whitelists to ensure these settings are not causing issues. Generated new API tokens and removed all restrictions from my Binance US tokens to rule out credential-related issues, but this did not resolve the problem. Observations: The only notable difference in my setup compared to most tutorials is that I am using Binance US, not Binance. API keys are working, and the bot lists all pairs/markets correctly. Despite strategies that are designed to force trades, no trades are triggered. Backtests consistently show promising results, but live and dry-run modes fail to trigger trades. Question: What could be causing this issue? Are there any known limitations or differences with Binance US that would prevent trades in dry-run or live mode? I’ve exhausted every possible solution I can think of. Any guidance would be greatly appreciated.
closed
2025-01-19T17:28:36Z
2025-01-23T05:54:46Z
https://github.com/freqtrade/freqtrade/issues/11252
[ "Question", "Strategy assistance" ]
EyesOnly1987
8
kennethreitz/responder
graphql
338
How could I set 'json ensure_ascii = false' when resp.media is Japanese or Chinese?
When `resp.media = {'error': '错误的数据类型'}`, I get `{"error": "\\u9519\\u8bef\\u7684\\u6570\\u636e\\u7c7b\\u578b"}`. I know I should set `ensure_ascii=False` in `json.dumps`, but I don't know how to do that in `responder`, for a route like this:

```python
@api.route('/{errorcode}')
async def error(req, resp, *, errorcode):
    resp.media = {
        'errorcode': errorcode,
        'message': '错误的数据类型'
    }
```

Sorry about my English, I'm really not good at it. Thanks a lot!
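One workaround is to skip `resp.media` and serialize yourself with `ensure_ascii=False`, assigning the body and content type directly. The serialization below is plain stdlib; the commented `resp.content`/`resp.mimetype` attribute names are an assumption about responder's response API:

```python
import json

def to_utf8_json(data):
    """Serialize to JSON without escaping non-ASCII characters."""
    return json.dumps(data, ensure_ascii=False).encode("utf-8")

# e.g. inside the route handler (attribute names are an assumption):
#   resp.content = to_utf8_json({'message': '错误的数据类型'})
#   resp.mimetype = "application/json; charset=utf-8"
```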
closed
2019-03-28T08:35:23Z
2019-04-03T10:58:51Z
https://github.com/kennethreitz/responder/issues/338
[]
songsenand
2
laughingman7743/PyAthena
sqlalchemy
187
Cache Expiration Based on Time
Hello, Athena cache is a great feature, but is it possible to set a cache expiration based on time? Thank you
closed
2020-12-13T21:30:13Z
2024-07-25T05:06:13Z
https://github.com/laughingman7743/PyAthena/issues/187
[]
rubenssoto
7
psf/requests
python
6,288
charset-normalizer upgraded to 3.x
charset-normalizer has been upgraded to version 3. requests requires version >= 2 and < 3. ## Expected Result support for charset-normalizer 3 ## Actual Result warning of incompatibility with charset-normalizer 3 ## Reproduction Steps ```shell % pip install --upgrade charset-normalizer Requirement already satisfied: charset-normalizer in /usr/local/lib/python3.10/site-packages (3.0.0) Collecting charset-normalizer Downloading charset_normalizer-3.0.1-cp310-cp310-macosx_10_9_x86_64.whl (124 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 124.2/124.2 kB 2.5 MB/s eta 0:00:00 Installing collected packages: charset-normalizer Attempting uninstall: charset-normalizer Found existing installation: charset-normalizer 3.0.0 Uninstalling charset-normalizer-3.0.0: Successfully uninstalled charset-normalizer-3.0.0 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. requests 2.28.1 requires charset-normalizer<3,>=2, but you have charset-normalizer 3.0.1 which is incompatible. Successfully installed charset-normalizer-3.0.1 ``` ## System Information $ python -m requests.help ```shell % python -m requests.help /usr/local/lib/python3.10/site-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (None)/charset_normalizer (3.0.1) doesn't match a supported version! 
warnings.warn( { "chardet": { "version": null }, "charset_normalizer": { "version": "3.0.1" }, "cryptography": { "version": "" }, "idna": { "version": "3.4" }, "implementation": { "name": "CPython", "version": "3.10.8" }, "platform": { "release": "21.6.0", "system": "Darwin" }, "pyOpenSSL": { "openssl_version": "", "version": null }, "requests": { "version": "2.28.1" }, "system_ssl": { "version": "1010113f" }, "urllib3": { "version": "1.26.12" }, "using_charset_normalizer": true, "using_pyopenssl": false } ``` This command is only available on Requests v2.16.4 and greater. Otherwise, please provide some basic information about your system (Python version, operating system, &c).
closed
2022-11-18T12:09:55Z
2023-11-19T00:03:40Z
https://github.com/psf/requests/issues/6288
[]
walterrowe
1
ray-project/ray
python
51,010
[distributed debugger] exception in regular remote worker function leading to access violation when debugger connects
### What happened + What you expected to happen

In my setup we are using a Ray Serve deployment with FastAPI and Starlette middleware. To test the use of the distributed debugger, I created a class that has remote static member functions decorated with `@ray.remote`, but it is not an actor. The deployment has a pair of endpoints. One of them calls a function that triggers a breakpoint via the `breakpoint()` function and also has breakpoints enabled within VS Code. The second endpoint calls a function that immediately throws an exception. The deployment calls these remote-decorated member functions, which means they will be tasks rather than functions running on an actor. The breakpoint properly pauses and allows the debugger to connect. A similar setup using an actor works for both the breakpoint and the exception. However, the function that throws an exception and is not inside an actor leads to an access violation as soon as the debugger connects. From the stack trace, it looks like pydevd is trying to create a list of frames from the traceback when this access violation occurs.

### Versions / Dependencies

Python 3.11
Ray[default,serve]==2.42.1
Windows 11

### Reproduction script

I am not allowed to share corporate code. If needed, I can try to do this on my personal computer after hours.

### Issue Severity

Medium: It is a significant difficulty but I can work around it.
open
2025-03-01T00:58:27Z
2025-03-06T20:31:48Z
https://github.com/ray-project/ray/issues/51010
[ "bug", "triage", "serve" ]
kotoroshinoto
1
pallets-eco/flask-wtf
flask
138
TypeError due to usage of safe_str_cmp
This edit causes an issue like this:

```
File "/Users/Jasim/.virtualenvs/project1/lib/python2.7/site-packages/flask_wtf/form.py", line 108, in validate_csrf_token
    if not validate_csrf(field.data, self.SECRET_KEY, self.TIME_LIMIT):
File "/Users/Jasim/.virtualenvs/project1/lib/python2.7/site-packages/flask_wtf/csrf.py", line 108, in validate_csrf
    return safe_str_cmp(hmac_compare, hmac_csrf)
File "/Users/Jasim/.virtualenvs/project1/lib/python2.7/site-packages/werkzeug/security.py", line 117, in safe_str_cmp
    return _builtin_safe_str_cmp(a, b)
TypeError: 'unicode' does not have the buffer interface
```

**Environment**

```
OSX: 10.9.4
Python: Python 2.7.8 (default, Jul 2 2014, 10:14:46)
```

```
BeautifulSoup==3.2.1
Flask==0.10.1
Flask-Assets==0.8
Flask-Babel==0.9
FormEncode==1.2.6
Jinja2==2.7.1
Mako==0.9.0
Markdown==2.3.1
MarkupSafe==0.18
MySQL-python==1.2.5
SQLAlchemy==0.8.2
Werkzeug==0.9.4
alembic==0.6.0
coaster==0.4.0
configobj==4.7.2
cssmin==0.2.0
elementtree==1.2.6-20050316
itsdangerous==0.23
jsmin==2.0.6
nose==1.3.0
passlib==1.6.1
py==1.4.17
pytest==2.4.2
pytz==2013.7
restkit==4.2.2
semantic-version==2.2.0
simplejson==3.3.1
speaklater==1.3
uWSGI==1.9.18.1
webassets==0.8
wsgiref==0.1.2
flask-bootstrap
flask_appconfig
flask_wtf
flask_login
Flask-Mail
```

When I go back to `Flask_WTF==0.9.4`, the issue is resolved.
closed
2014-07-23T03:28:19Z
2021-05-29T01:16:02Z
https://github.com/pallets-eco/flask-wtf/issues/138
[]
jasimmk
2
BeanieODM/beanie
pydantic
534
Soft Delete
### Discussed in https://github.com/roman-right/beanie/discussions/532 <div type='discussions-op-text'> <sup>Originally posted by **N0odlez** April 13, 2023</sup> Is it possible to have a Soft Delete functionality, or alternatively let me know how to add a Middleware on each database request to check if a field i.e "is_deleted" is equal to 1. This would be useful as sometimes you dont want to actually delete the whole Document but you need to keep for auditing purposes.</div>
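For reference, the usual shape of soft delete, independent of any ODM: an `is_deleted` flag plus a default query filter, with a separate unfiltered path for auditing. A framework-agnostic sketch (in Beanie this logic would live on a shared `Document` base class; all names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class SoftDeletable:
    """Illustrative document: carries payload plus a soft-delete flag."""
    data: dict = field(default_factory=dict)
    is_deleted: bool = False

    def soft_delete(self):
        self.is_deleted = True

def find_active(docs):
    """Default query path: hide soft-deleted documents."""
    return [d for d in docs if not d.is_deleted]

def find_all(docs):
    """Audit path: include soft-deleted documents."""
    return list(docs)
```

The "middleware on each database request" the question asks for corresponds to making `find_active`-style filtering the default behavior of every find/aggregate call.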
closed
2023-04-13T19:34:52Z
2024-05-01T17:28:58Z
https://github.com/BeanieODM/beanie/issues/534
[]
roman-right
1
mars-project/mars
scikit-learn
2,384
Tileable subtask structure visualization
**Is your feature request related to a problem? Please describe.** When visualizing the tileable graph, we can also include a dag visualization for the subtask structure of each tileable, which can help us better control the progress of tileable. **Describe the solution you'd like** We can create an API endpoint to return the subtask dag structure based on the tileable id. The json returned can be something similar to the json returned from the tileable graph endpoint. For example: ``` { dependencies: [ { from_subtask_id, to_subtask_id }, ... ], subtasks: [ { subtask_id, subtask_name, subtask_progress, subtask_status }, ... ], } ``` Then on the web, when user clicks on a tileable, the selected tileable's subtask graph can be displayed on the right side tileable detail panel, and the progress and color will be updated periodically just like the progress of the dag.
closed
2021-08-24T10:20:01Z
2021-09-17T07:46:32Z
https://github.com/mars-project/mars/issues/2384
[ "type: feature", "mod: web" ]
RandomY-2
2
kizniche/Mycodo
automation
790
New PID Controller
Currently, the PID controller utilizes a single set of P, I, and D gains to tune both the raise and lower outputs. However, a tuning that works for one output may not necessarily work well for another. As a result, you may find you can tune one direction very well, but the alternate direction is poorly tuned. To overcome this issue, the PID controller should utilize a set of gains for each direction. Essentially, there are two PID controllers operating, one for each direction. This allows for optimal tuning in both directions and a more efficient regulation can be achieved overall. I'll post information/updates here about the development of a new PID controller utilizing separate gains and other options for each direction.
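The "two PID controllers, one per direction" idea amounts to selecting a gain set based on the sign of the error. This is an illustrative stand-alone sketch, not Mycodo's implementation; gain tuples are (P, I, D):

```python
class DualDirectionPID:
    """Illustrative sketch: one PID gain set per output direction."""

    def __init__(self, raise_gains, lower_gains):
        self.gains = {"raise": raise_gains, "lower": lower_gains}
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement, dt=1.0):
        error = setpoint - measurement
        # positive error -> raise output, negative error -> lower output
        kp, ki, kd = self.gains["raise" if error > 0 else "lower"]
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative
```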
closed
2020-07-17T02:04:46Z
2020-07-22T18:54:57Z
https://github.com/kizniche/Mycodo/issues/790
[ "enhancement" ]
kizniche
1
huggingface/peft
pytorch
1,693
How to convert a loha safetensor trained from diffusers to webui format
Hello, when I finetune SDXL (actually InstantID) with PEFT methods, I use LoRA, LoHa and LoKr in [diffusers](https://github.com/huggingface/diffusers). I have a question: how do I convert a LoHa safetensor trained with diffusers to the webui format?

In the training process, I load it this way:

```python
peft_config = LoHaConfig(
    r=args.rank,
    alpha=args.rank // 2,
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
unet = get_peft_model(unet, peft_config)
```

When the training process finishes, I save it with `unet.save_pretrained(args.output_dir)` and get a safetensor as shown:

![image](https://github.com/KohakuBlueleaf/LyCORIS/assets/61881733/f71caa9a-4935-40f8-84fb-0a18d19991ac)

But [webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui/) can't recognize it, so I can't use it in webui. How can I fix this problem?
closed
2024-04-30T07:17:48Z
2024-06-08T15:03:44Z
https://github.com/huggingface/peft/issues/1693
[]
JIAOJIAYUASD
2
writer/writer-framework
data-visualization
557
visibility attribute on backend ui broke the application in 0.7.2
open
2024-09-10T14:47:34Z
2024-09-10T14:47:34Z
https://github.com/writer/writer-framework/issues/557
[ "bug" ]
FabienArcellier
0
ClimbsRocks/auto_ml
scikit-learn
23
add 'date' as an input column type
closed
2016-08-13T05:47:42Z
2016-08-18T02:58:42Z
https://github.com/ClimbsRocks/auto_ml/issues/23
[]
ClimbsRocks
2
marcomusy/vedo
numpy
275
Radius of points
Hi @marcomusy

`pts = Points(nxpts, r=12)` sets a uniform radius for all points in a graph. I would like to assign a different radius to each point. Since `r` accepts a float, I wasn't sure how to get this working if I supply a list. Could you please help me with this?
closed
2020-12-21T16:12:39Z
2020-12-26T18:13:05Z
https://github.com/marcomusy/vedo/issues/275
[]
DeepaMahm
9
freqtrade/freqtrade
python
11,284
Create Trade Object at Signal Detection
## Describe the enhancement

Currently, the `Trade` object is created only after the first entry order is filled. This architecture has drawbacks:

#### 1. Complexity

Unnecessary complexity, because important parameters must be explicitly passed to functions instead of being directly accessed via the `Trade` object, the default class used to pass data between functions and callbacks.

#### 2. Unavailable features

Some features that require the `Trade` object are unavailable before the first order is created. This can be problematic, for example, when attempting to store persistent information within the `Confirm trade entry` or `Custom entry price` callback as soon as an entry signal is triggered.

#### 3. Inconsistency

Users face inconsistencies regarding the availability of the `Trade` object.

I propose creating the `Trade` object as soon as a signal is detected. This would ensure that the `Trade` object is available for all downstream functions and callbacks. This change also improves visibility for aborted trades (i.e., trades with no filled orders and `is_open = False`), since they would now be stored in the database. Users would be able to analyze why an entry failed, as expired and failed trades, along with their corresponding entry orders, would be recorded in the database.

For users who prefer not to store unfilled trades, we could introduce a configuration option: `auto_delete_unfilled_trades`.

This refactor will also enable a cleaner implementation of #11030.
closed
2025-01-25T04:33:17Z
2025-01-26T22:33:20Z
https://github.com/freqtrade/freqtrade/issues/11284
[ "Question" ]
Axel-CH
11
mljar/mljar-supervised
scikit-learn
669
Bug: 'KNeighborsAlgorithm' object has no attribute 'classes_'
Current behavior:

```
'KNeighborsAlgorithm' object has no attribute 'classes_'
Problem during computing permutation importance. Skipping ...
```

Expected: KNeighbors to be trained.
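The message suggests the wrapper exposes a fitted inner estimator but not the estimator's fitted attributes. This is a deliberately simplified, hypothetical sketch of that failure mode (the class names mirror, but do not reproduce, mljar-supervised's code):

```python
# Hypothetical reduction of the bug, not mljar-supervised source.
class InnerKNN:
    def fit(self, X, y):
        self.classes_ = sorted(set(y))  # sklearn estimators set this on fit
        return self

class KNeighborsAlgorithm:              # wrapper, greatly simplified
    def __init__(self):
        self.model = InnerKNN()
    def fit(self, X, y):
        self.model.fit(X, y)
        return self

algo = KNeighborsAlgorithm().fit([[0], [1]], [0, 1])
assert algo.model.classes_ == [0, 1]       # the inner model is fitted ...
assert not hasattr(algo, "classes_")       # ... but the wrapper lacks classes_,
                                           # which permutation importance reads
```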
closed
2023-11-03T20:58:50Z
2024-09-30T13:39:45Z
https://github.com/mljar/mljar-supervised/issues/669
[ "bug", "help wanted", "good first issue" ]
strukevych
8
graphql-python/graphene-sqlalchemy
graphql
91
AttributeError in Registry class when asserting
I had a situation where I wanted to recreate the SQLAlchemyObjectType, creating a new type. This does not work out of the box with Registry (which is OK), and an assert is thrown. However, the assert itself has a problem:

```
..../api/graphql/types.py:74: in __init_subclass_with_meta__
    registry.register(cls)
../../../venv/lib/python3.6/site-packages/graphene_sqlalchemy/registry.py:12: in register
    'received "{}"'
E AttributeError: 'tuple' object has no attribute 'format'
```
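The registry source isn't shown here, so this is a guess at the cause, but the error type is the classic pitfall where a stray comma between adjacent string literals turns a multi-line assert message into a tuple:

```python
# Adjacent string literals concatenate into one string ...
ok = ("expected a class, "
      'received "{}"')
assert ok.format("Foo") == 'expected a class, received "Foo"'

# ... but a trailing comma between them creates a tuple instead.
broken = ("expected a class,",
          'received "{}"')
try:
    broken.format("Foo")        # tuples have no .format method
except AttributeError as exc:
    assert "no attribute 'format'" in str(exc)
```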
closed
2017-11-21T10:26:43Z
2023-02-25T06:58:58Z
https://github.com/graphql-python/graphene-sqlalchemy/issues/91
[]
geertjanvdk
1
encode/httpx
asyncio
3,215
Improve async performance.
There seem to be performance issues in `httpx` (0.27.0): it performs much worse than `aiohttp` (3.9.4) with concurrently running requests (on Python 3.12). The following benchmark shows that running 20 requests concurrently is over 10x slower with `httpx` than with `aiohttp`. The benchmark uses very basic `httpx` calls to issue multiple GET requests with limited concurrency. The script outputs a figure showing that the duration of individual GET requests has a huge variance with `httpx`.

![Figure_1](https://github.com/encode/httpx/assets/12939780/89346136-0caf-47ce-8d1e-b6444524d367)

```python
# requirements.txt:
# httpx == 0.27.0
# aiohttp == 3.9.4
# matplotlib == 3.9.0
#
# 1. start server: python bench.py server
# 2. run client test: python bench.py client

import asyncio
import sys
from typing import Any, Coroutine, Iterator

import aiohttp
import time
import httpx
from aiohttp import web
import matplotlib.pyplot as plt

PORT = 1234
URL = f"http://localhost:{PORT}/req"
RESP = "a" * 2000
REQUESTS = 100
CONCURRENCY = 20


def run_web_server():
    async def handle(_request):
        return web.Response(text=RESP)

    app = web.Application()
    app.add_routes([web.get('/req', handle)])
    web.run_app(app, host="localhost", port=PORT)


def duration(start: float) -> int:
    return int((time.monotonic() - start) * 1000)


async def run_requests(axis: plt.Axes):
    async def gather_limited_concurrency(coros: Iterator[Coroutine[Any, Any, Any]]):
        sem = asyncio.Semaphore(CONCURRENCY)

        async def coro_with_sem(coro):
            async with sem:
                return await coro

        return await asyncio.gather(*(coro_with_sem(c) for c in coros))

    async def httpx_get(session: httpx.AsyncClient, timings: list[int]):
        start = time.monotonic()
        res = await session.request("GET", URL)
        assert len(await res.aread()) == len(RESP)
        assert res.status_code == 200, f"status_code={res.status_code}"
        timings.append(duration(start))

    async def aiohttp_get(session: aiohttp.ClientSession, timings: list[int]):
        start = time.monotonic()
        async with session.request("GET", URL) as res:
            assert len(await res.read()) == len(RESP)
            assert res.status == 200, f"status={res.status}"
        timings.append(duration(start))

    async with httpx.AsyncClient() as session:
        # warmup
        await asyncio.gather(*(httpx_get(session, []) for _ in range(REQUESTS)))
        timings = []
        start = time.monotonic()
        await gather_limited_concurrency((httpx_get(session, timings) for _ in range(REQUESTS)))
        axis.plot([*range(REQUESTS)], timings, label=f"httpx (tot={duration(start)}ms)")

    async with aiohttp.ClientSession() as session:
        # warmup
        await asyncio.gather(*(aiohttp_get(session, []) for _ in range(REQUESTS)))
        timings = []
        start = time.monotonic()
        await gather_limited_concurrency((aiohttp_get(session, timings) for _ in range(REQUESTS)))
        axis.plot([*range(REQUESTS)], timings, label=f"aiohttp (tot={duration(start)}ms)")


def main(mode: str):
    assert mode in {"server", "client"}, f"invalid mode: {mode}"
    if mode == "server":
        run_web_server()
    else:
        fig, ax = plt.subplots()
        asyncio.run(run_requests(ax))
        plt.legend(loc="upper left")
        ax.set_xlabel("# request")
        ax.set_ylabel("[ms]")
        plt.show()
        print("DONE", flush=True)


if __name__ == "__main__":
    assert len(sys.argv) == 2, f"Usage: {sys.argv[0]} server|client"
    main(sys.argv[1])
```

I found the following issue, but it seems unrelated, as the workaround does not make a difference here: https://github.com/encode/httpx/issues/838#issuecomment-1291224189
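The concurrency cap in the benchmark above is the standard semaphore-plus-gather pattern. Isolated from any HTTP calls (names changed slightly from the benchmark), a minimal sketch looks like this:

```python
import asyncio

async def gather_limited(coros, limit):
    sem = asyncio.Semaphore(limit)

    async def run(coro):
        async with sem:          # at most `limit` coroutines run concurrently
            return await coro

    # gather() returns results in submission order, not completion order
    return await asyncio.gather(*(run(c) for c in coros))

async def demo():
    async def job(i):
        await asyncio.sleep(0.001)
        return i
    return await gather_limited((job(i) for i in range(10)), limit=3)

results = asyncio.run(demo())
assert results == list(range(10))
```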
open
2024-06-04T09:30:12Z
2025-03-14T05:57:31Z
https://github.com/encode/httpx/issues/3215
[ "perf" ]
MarkusSintonen
61
vllm-project/vllm
pytorch
15,036
[Bug]: tensor shape mismatch for `--lora-extra-vocab-size 0`
### Your current environment <details> <summary>The output of `python collect_env.py`</summary> ```text INFO 03-18 19:50:09 [__init__.py:256] Automatically detected platform cuda. Collecting environment information... PyTorch version: 2.6.0+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.35 Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-4.18.0-477.27.1.el8_8.x86_64-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.4.131 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA H100 NVL GPU 1: NVIDIA H100 NVL GPU 2: NVIDIA H100 NVL GPU 3: NVIDIA H100 NVL Nvidia driver version: 535.183.06 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 45 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 96 On-line CPU(s) list: 0-95 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Gold 6448Y CPU family: 6 Model: 143 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 96 Stepping: 8 BogoMIPS: 4199.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology 
tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear flush_l1d arch_capabilities Hypervisor vendor: VMware Virtualization type: full L1d cache: 4.5 MiB (96 instances) L1i cache: 3 MiB (96 instances) L2 cache: 192 MiB (96 instances) L3 cache: 5.6 GiB (96 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-47 NUMA node1 CPU(s): 48-95 Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Unknown: No mitigations Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] flashinfer-python==0.2.1.post2+cu124torch2.6 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] 
nvidia-cusparselt-cu12==0.6.2 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] pyzmq==26.3.0 [pip3] torch==2.6.0+cu124 [pip3] torchaudio==2.6.0+cu124 [pip3] torchvision==0.21.0+cu124 [pip3] transformers==4.48.3 [pip3] triton==3.2.0 [conda] Could not collect ROCM Version: Could not collect Neuron SDK Version: N/A vLLM Version: 0.8.0rc2.dev9+g6eaf1e5c vLLM Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled GPU Topology: GPU0 GPU1 GPU2 GPU3 CPU Affinity NUMA Affinity GPU NUMA ID GPU0 X PHB PHB NV12 0-95 0-1 N/A GPU1 PHB X NV12 PHB 0-95 0-1 N/A GPU2 PHB NV12 X PHB 0-95 0-1 N/A GPU3 NV12 PHB PHB X 0-95 0-1 N/A Legend: X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks NVIDIA_VISIBLE_DEVICES=/var/run/nvidia-container-devices NVIDIA_DRIVER_CAPABILITIES=compute,utility VLLM_ALLOW_RUNTIME_LORA_UPDATING=true LD_LIBRARY_PATH=/opt/intel/oneapi/tbb/latest/lib/intel64/gcc4.8:/opt/intel/oneapi/mkl/latest/lib/intel64:/opt/intel/oneapi/compiler/latest/linux/lib:/opt/intel/oneapi/compiler/latest/linux/lib/x64:/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/usr/local/cuda-12.4/lib64 VLLM_LOGGING_LEVEL=INFO VLLM_USE_V1=0 NCCL_CUMEM_ENABLE=0 TORCHINDUCTOR_COMPILE_THREADS=1 CUDA_MODULE_LOADING=LAZY ``` </details> ### 🐛 Describe the bug When the model server is launched with `--lora-extra-vocab-size 0`(to optimize for LoRA adapter which has not been trained with extra special tokens), the engine falls short on profile run stage: <details> 
<summary>traceback</summary> ```log INFO 03-17 21:31:59 [__init__.py:256] Automatically detected platform cuda. WARNING 03-17 21:32:01 [api_server.py:750] LoRA dynamic loading & unloading is enabled in the API server. This should ONLY be used for local development! INFO 03-17 21:32:01 [api_server.py:972] vLLM API server version 0.8.0rc2.dev9+g6eaf1e5c INFO 03-17 21:32:01 [api_server.py:973] args: Namespace(subparser='serve', model_tag='/app/model/Llama-3.3-70B-Instruct/', config='', host='0.0.0.0', port=8080, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=True, tool_call_parser='llama3_json', tool_parser_plugin='', model='/app/model/Llama-3.3-70B-Instruct/', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=131072, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=4, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.95, num_gpu_blocks_override=None, 
max_num_batched_tokens=8192, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=True, enable_lora_bias=False, max_loras=1, max_lora_rank=8, lora_extra_vocab_size=0, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=20, fully_sharded_loras=True, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=True, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['meta-llama/Llama-3.3-70B-Instruct'], qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', 
override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False, dispatch_function=<function ServeSubcommand.cmd at 0x7f12b6c36950>) INFO 03-17 21:32:07 [config.py:583] This model supports multiple tasks: {'classify', 'score', 'reward', 'embed', 'generate'}. Defaulting to 'generate'. INFO 03-17 21:32:07 [config.py:1499] Defaulting to use mp for distributed inference INFO 03-17 21:32:07 [config.py:1677] Chunked prefill is enabled with max_num_batched_tokens=8192. WARNING 03-17 21:32:07 [config.py:2350] LoRA with chunked prefill is still experimental and may be unstable. INFO 03-17 21:32:07 [api_server.py:236] Started engine process with PID 289 INFO 03-17 21:32:09 [__init__.py:256] Automatically detected platform cuda. WARNING 03-17 21:32:11 [api_server.py:750] LoRA dynamic loading & unloading is enabled in the API server. This should ONLY be used for local development! 
INFO 03-17 21:32:11 [llm_engine.py:241] Initializing a V0 LLM engine (v0.8.0rc2.dev9+g6eaf1e5c) with config: model='/app/model/Llama-3.3-70B-Instruct/', speculative_config=None, tokenizer='/app/model/Llama-3.3-70B-Instruct/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=meta-llama/Llama-3.3-70B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=None, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True, WARNING 03-17 21:32:11 [multiproc_worker_utils.py:310] Reducing Torch parallelism from 96 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed. INFO 03-17 21:32:11 [custom_cache_manager.py:19] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager INFO 03-17 21:32:12 [cuda.py:285] Using Flash Attention backend. INFO 03-17 21:32:14 [__init__.py:256] Automatically detected platform cuda. INFO 03-17 21:32:14 [__init__.py:256] Automatically detected platform cuda. 
INFO 03-17 21:32:14 [__init__.py:256] Automatically detected platform cuda. WARNING 03-17 21:32:15 [api_server.py:750] LoRA dynamic loading & unloading is enabled in the API server. This should ONLY be used for local development! WARNING 03-17 21:32:15 [api_server.py:750] LoRA dynamic loading & unloading is enabled in the API server. This should ONLY be used for local development! [1;36m(VllmWorkerProcess pid=363) [0;0m INFO 03-17 21:32:15 [multiproc_worker_utils.py:229] Worker ready; awaiting tasks [1;36m(VllmWorkerProcess pid=362) [0;0m INFO 03-17 21:32:15 [multiproc_worker_utils.py:229] Worker ready; awaiting tasks WARNING 03-17 21:32:15 [api_server.py:750] LoRA dynamic loading & unloading is enabled in the API server. This should ONLY be used for local development! [1;36m(VllmWorkerProcess pid=361) [0;0m INFO 03-17 21:32:15 [multiproc_worker_utils.py:229] Worker ready; awaiting tasks [1;36m(VllmWorkerProcess pid=361) [0;0m INFO 03-17 21:32:16 [cuda.py:285] Using Flash Attention backend. [1;36m(VllmWorkerProcess pid=362) [0;0m INFO 03-17 21:32:16 [cuda.py:285] Using Flash Attention backend. [1;36m(VllmWorkerProcess pid=363) [0;0m INFO 03-17 21:32:16 [cuda.py:285] Using Flash Attention backend. 
[1;36m(VllmWorkerProcess pid=362) [0;0m INFO 03-17 21:32:17 [utils.py:925] Found nccl from library libnccl.so.2 [1;36m(VllmWorkerProcess pid=361) [0;0m INFO 03-17 21:32:17 [utils.py:925] Found nccl from library libnccl.so.2 [1;36m(VllmWorkerProcess pid=363) [0;0m INFO 03-17 21:32:17 [utils.py:925] Found nccl from library libnccl.so.2 INFO 03-17 21:32:17 [utils.py:925] Found nccl from library libnccl.so.2 [1;36m(VllmWorkerProcess pid=362) [0;0m INFO 03-17 21:32:17 [pynccl.py:69] vLLM is using nccl==2.21.5 [1;36m(VllmWorkerProcess pid=361) [0;0m INFO 03-17 21:32:17 [pynccl.py:69] vLLM is using nccl==2.21.5 [1;36m(VllmWorkerProcess pid=363) [0;0m INFO 03-17 21:32:17 [pynccl.py:69] vLLM is using nccl==2.21.5 INFO 03-17 21:32:17 [pynccl.py:69] vLLM is using nccl==2.21.5 WARNING 03-17 21:32:18 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly. [1;36m(VllmWorkerProcess pid=363) [0;0m WARNING 03-17 21:32:18 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly. [1;36m(VllmWorkerProcess pid=362) [0;0m WARNING 03-17 21:32:18 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly. [1;36m(VllmWorkerProcess pid=361) [0;0m WARNING 03-17 21:32:18 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly. 
INFO 03-17 21:32:18 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_a8ef6225'), local_subscribe_addr='ipc:///tmp/2eb49f57-c5ad-460a-a484-5388d5ee459e', remote_subscribe_addr=None, remote_addr_ipv6=False) [1;36m(VllmWorkerProcess pid=361) [0;0m INFO 03-17 21:32:18 [parallel_state.py:948] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1 INFO 03-17 21:32:18 [parallel_state.py:948] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0 [1;36m(VllmWorkerProcess pid=363) [0;0m INFO 03-17 21:32:18 [parallel_state.py:948] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3 [1;36m(VllmWorkerProcess pid=362) [0;0m INFO 03-17 21:32:18 [parallel_state.py:948] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2 INFO 03-17 21:32:18 [model_runner.py:1110] Starting to load model /app/model/Llama-3.3-70B-Instruct/... [1;36m(VllmWorkerProcess pid=362) [0;0m INFO 03-17 21:32:18 [model_runner.py:1110] Starting to load model /app/model/Llama-3.3-70B-Instruct/... [1;36m(VllmWorkerProcess pid=361) [0;0m INFO 03-17 21:32:18 [model_runner.py:1110] Starting to load model /app/model/Llama-3.3-70B-Instruct/... [1;36m(VllmWorkerProcess pid=363) [0;0m INFO 03-17 21:32:18 [model_runner.py:1110] Starting to load model /app/model/Llama-3.3-70B-Instruct/... INFO 03-17 21:37:05 [loader.py:429] Loading weights took 286.19 seconds INFO 03-17 21:37:05 [punica_selector.py:18] Using PunicaWrapperGPU. [1;36m(VllmWorkerProcess pid=363) [0;0m INFO 03-17 21:37:05 [loader.py:429] Loading weights took 286.41 seconds [1;36m(VllmWorkerProcess pid=362) [0;0m INFO 03-17 21:37:05 [loader.py:429] Loading weights took 286.41 seconds [1;36m(VllmWorkerProcess pid=361) [0;0m INFO 03-17 21:37:05 [loader.py:429] Loading weights took 286.41 seconds [1;36m(VllmWorkerProcess pid=362) [0;0m INFO 03-17 21:37:05 [punica_selector.py:18] Using PunicaWrapperGPU. 
[1;36m(VllmWorkerProcess pid=363) [0;0m INFO 03-17 21:37:05 [punica_selector.py:18] Using PunicaWrapperGPU. [1;36m(VllmWorkerProcess pid=361) [0;0m INFO 03-17 21:37:05 [punica_selector.py:18] Using PunicaWrapperGPU. [1;36m(VllmWorkerProcess pid=362) [0;0m INFO 03-17 21:37:05 [model_runner.py:1146] Model loading took 32.9429 GB and 286.704138 seconds [1;36m(VllmWorkerProcess pid=363) [0;0m INFO 03-17 21:37:05 [model_runner.py:1146] Model loading took 32.9429 GB and 286.708337 seconds INFO 03-17 21:37:05 [model_runner.py:1146] Model loading took 32.9429 GB and 286.507457 seconds [1;36m(VllmWorkerProcess pid=361) [0;0m INFO 03-17 21:37:05 [model_runner.py:1146] Model loading took 32.9429 GB and 286.720625 seconds [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks. [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] Traceback (most recent call last): [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] output = run_method(worker, method, args, kwargs) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/utils.py", line 2216, in run_method [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] return func(*args, **kwargs) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] return 
func(*args, **kwargs) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/worker/worker.py", line 229, in determine_num_available_blocks [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] self.model_runner.profile_run() [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] return func(*args, **kwargs) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1243, in profile_run [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] self._dummy_run(max_num_batched_tokens, max_num_seqs) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1354, in _dummy_run [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] self.execute_model(model_input, kv_caches, intermediate_tensors) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] return func(*args, **kwargs) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1669, in execute_model [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] 
self.set_active_loras(model_input.lora_requests, [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1371, in set_active_loras [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] self.lora_manager.set_active_adapters(lora_requests, lora_mapping) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 167, in set_active_adapters [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] set_active_adapters_worker(requests, mapping, self._apply_adapters, [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/adapter_commons/utils.py", line 54, in set_active_adapters_worker [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] apply_adapters_func(requests) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 227, in _apply_adapters [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] self.add_adapter(lora) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 250, in add_adapter [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] self._adapter_manager.activate_adapter(lora_request.lora_int_id) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/lora/models.py", line 720, in activate_adapter [1;36m(VllmWorkerProcess pid=361) 
[0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] result = super().activate_adapter(lora_id) [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/lora/models.py", line 405, in activate_adapter [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] module.set_lora(index, module_lora.lora_a, module_lora.lora_b, [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] File "/app/.venv/lib/python3.10/site-packages/vllm/lora/layers.py", line 223, in set_lora [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] self.embeddings_tensors[ [1;36m(VllmWorkerProcess pid=361) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] RuntimeError: The size of tensor a (0) must match the size of tensor b (10) at non-singleton dimension 0 [1;36m(VllmWorkerProcess pid=362) [0;0m ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks. 
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] Traceback (most recent call last):
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/utils.py", line 2216, in run_method
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     return func(*args, **kwargs)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     return func(*args, **kwargs)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/worker.py", line 229, in determine_num_available_blocks
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.model_runner.profile_run()
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     return func(*args, **kwargs)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1243, in profile_run
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self._dummy_run(max_num_batched_tokens, max_num_seqs)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1354, in _dummy_run
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.execute_model(model_input, kv_caches, intermediate_tensors)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     return func(*args, **kwargs)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1669, in execute_model
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.set_active_loras(model_input.lora_requests,
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1371, in set_active_loras
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.lora_manager.set_active_adapters(lora_requests, lora_mapping)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 167, in set_active_adapters
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     set_active_adapters_worker(requests, mapping, self._apply_adapters,
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/adapter_commons/utils.py", line 54, in set_active_adapters_worker
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     apply_adapters_func(requests)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 227, in _apply_adapters
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.add_adapter(lora)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 250, in add_adapter
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self._adapter_manager.activate_adapter(lora_request.lora_int_id)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/models.py", line 720, in activate_adapter
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     result = super().activate_adapter(lora_id)
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/models.py", line 405, in activate_adapter
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     module.set_lora(index, module_lora.lora_a, module_lora.lora_b,
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/layers.py", line 223, in set_lora
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.embeddings_tensors[
(VllmWorkerProcess pid=362) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] RuntimeError: The size of tensor a (0) must match the size of tensor b (10) at non-singleton dimension 0
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks.
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] Traceback (most recent call last):
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/utils.py", line 2216, in run_method
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     return func(*args, **kwargs)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     return func(*args, **kwargs)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/worker.py", line 229, in determine_num_available_blocks
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.model_runner.profile_run()
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     return func(*args, **kwargs)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1243, in profile_run
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self._dummy_run(max_num_batched_tokens, max_num_seqs)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1354, in _dummy_run
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.execute_model(model_input, kv_caches, intermediate_tensors)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     return func(*args, **kwargs)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1669, in execute_model
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.set_active_loras(model_input.lora_requests,
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1371, in set_active_loras
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.lora_manager.set_active_adapters(lora_requests, lora_mapping)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 167, in set_active_adapters
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     set_active_adapters_worker(requests, mapping, self._apply_adapters,
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/adapter_commons/utils.py", line 54, in set_active_adapters_worker
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     apply_adapters_func(requests)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 227, in _apply_adapters
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.add_adapter(lora)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 250, in add_adapter
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self._adapter_manager.activate_adapter(lora_request.lora_int_id)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/models.py", line 720, in activate_adapter
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     result = super().activate_adapter(lora_id)
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/models.py", line 405, in activate_adapter
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     module.set_lora(index, module_lora.lora_a, module_lora.lora_b,
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/layers.py", line 223, in set_lora
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242]     self.embeddings_tensors[
(VllmWorkerProcess pid=363) ERROR 03-17 21:37:05 [multiproc_worker_utils.py:242] RuntimeError: The size of tensor a (0) must match the size of tensor b (10) at non-singleton dimension 0
ERROR 03-17 21:37:06 [engine.py:443] The size of tensor a (0) must match the size of tensor b (10) at non-singleton dimension 0
ERROR 03-17 21:37:06 [engine.py:443] Traceback (most recent call last):
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 431, in run_mp_engine
ERROR 03-17 21:37:06 [engine.py:443]     engine = MQLLMEngine.from_vllm_config(
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 126, in from_vllm_config
ERROR 03-17 21:37:06 [engine.py:443]     return cls(
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 80, in __init__
ERROR 03-17 21:37:06 [engine.py:443]     self.engine = LLMEngine(*args, **kwargs)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 283, in __init__
ERROR 03-17 21:37:06 [engine.py:443]     self._initialize_kv_caches()
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 432, in _initialize_kv_caches
ERROR 03-17 21:37:06 [engine.py:443]     self.model_executor.determine_num_available_blocks())
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 102, in determine_num_available_blocks
ERROR 03-17 21:37:06 [engine.py:443]     results = self.collective_rpc("determine_num_available_blocks")
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 316, in collective_rpc
ERROR 03-17 21:37:06 [engine.py:443]     return self._run_workers(method, *args, **(kwargs or {}))
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers
ERROR 03-17 21:37:06 [engine.py:443]     driver_worker_output = run_method(self.driver_worker, sent_method,
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/utils.py", line 2216, in run_method
ERROR 03-17 21:37:06 [engine.py:443]     return func(*args, **kwargs)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 03-17 21:37:06 [engine.py:443]     return func(*args, **kwargs)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/worker.py", line 229, in determine_num_available_blocks
ERROR 03-17 21:37:06 [engine.py:443]     self.model_runner.profile_run()
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 03-17 21:37:06 [engine.py:443]     return func(*args, **kwargs)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1243, in profile_run
ERROR 03-17 21:37:06 [engine.py:443]     self._dummy_run(max_num_batched_tokens, max_num_seqs)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1354, in _dummy_run
ERROR 03-17 21:37:06 [engine.py:443]     self.execute_model(model_input, kv_caches, intermediate_tensors)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 03-17 21:37:06 [engine.py:443]     return func(*args, **kwargs)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1669, in execute_model
ERROR 03-17 21:37:06 [engine.py:443]     self.set_active_loras(model_input.lora_requests,
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1371, in set_active_loras
ERROR 03-17 21:37:06 [engine.py:443]     self.lora_manager.set_active_adapters(lora_requests, lora_mapping)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 167, in set_active_adapters
ERROR 03-17 21:37:06 [engine.py:443]     set_active_adapters_worker(requests, mapping, self._apply_adapters,
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/adapter_commons/utils.py", line 54, in set_active_adapters_worker
ERROR 03-17 21:37:06 [engine.py:443]     apply_adapters_func(requests)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 227, in _apply_adapters
ERROR 03-17 21:37:06 [engine.py:443]     self.add_adapter(lora)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 250, in add_adapter
ERROR 03-17 21:37:06 [engine.py:443]     self._adapter_manager.activate_adapter(lora_request.lora_int_id)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/models.py", line 720, in activate_adapter
ERROR 03-17 21:37:06 [engine.py:443]     result = super().activate_adapter(lora_id)
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/models.py", line 405, in activate_adapter
ERROR 03-17 21:37:06 [engine.py:443]     module.set_lora(index, module_lora.lora_a, module_lora.lora_b,
ERROR 03-17 21:37:06 [engine.py:443]   File "/app/.venv/lib/python3.10/site-packages/vllm/lora/layers.py", line 223, in set_lora
ERROR 03-17 21:37:06 [engine.py:443]     self.embeddings_tensors[
ERROR 03-17 21:37:06 [engine.py:443] RuntimeError: The size of tensor a (0) must match the size of tensor b (10) at non-singleton dimension 0
```
</details>

If `--lora-extra-vocab-size` has not been set or is set to a nonzero value (and no other arguments are changed), the model server runs okay.

## Minimal working example

```bash
VLLM_USE_V1=0 vllm serve /app/model/Llama-3.3-70B-Instruct/ --served-model-name meta-llama/Llama-3.3-70B-Instruct --gpu-memory-utilization 0.96 --tensor-parallel-size 4 --max-model-len 131072 --enable-chunked-prefill --max-num-batched-tokens 8192 --enable-auto-tool-choice --tool-call-parser llama3_json --enable-lora --fully-sharded-lora --max-loras 1 --max-lora-rank 8 --lora-extra-vocab-size 0
```

(It doesn't even have to load a pretrained LoRA adapter; the script fails anyway.)

### Before submitting a new issue...

- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
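The shape mismatch itself can be reproduced outside vLLM: with `--lora-extra-vocab-size 0` the per-adapter extra-vocab embeddings buffer has a zero-sized leading dimension, and writing an adapter's embedding rows into it fails. A minimal NumPy sketch (the buffer shape and the 10-row adapter are illustrative assumptions, not vLLM's real internals):

```python
import numpy as np

# With --lora-extra-vocab-size 0, the per-adapter embeddings buffer has a
# zero-sized leading dimension (sizes below are illustrative only).
extra_vocab_size = 0
embeddings_buffer = np.zeros((extra_vocab_size, 16))

# An adapter shipping 10 extra-vocab embedding rows, as in the error message.
adapter_embeddings = np.ones((10, 16))

error = None
try:
    # Mirrors the failing slice-assignment in set_lora.
    embeddings_buffer[: adapter_embeddings.shape[0]] = adapter_embeddings
except ValueError as exc:  # NumPy's analogue of the torch size-mismatch error
    error = exc

print(error)
```

The assignment target has shape `(0, 16)` while the source has shape `(10, 16)`, which is exactly the 0-vs-10 mismatch the RuntimeError reports.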
closed
2025-03-18T13:19:12Z
2025-03-18T16:40:30Z
https://github.com/vllm-project/vllm/issues/15036
[ "bug" ]
cjackal
2
pyjanitor-devs/pyjanitor
pandas
739
Add the Welcome Bot to the project
This one! https://github.com/apps/the-welcome-bot
closed
2020-09-09T12:08:01Z
2020-09-18T22:53:59Z
https://github.com/pyjanitor-devs/pyjanitor/issues/739
[]
ericmjl
0
aleju/imgaug
deep-learning
630
flipud produces negative strides
`Flipud` flips images via `images[i] = image[::-1, ...]`. This produces negative strides in numpy, which results in errors when the arrays are used with PyTorch tensors. A solution would be to use `numpy.flipud` or call `.copy()` on the flipped array. Great library btw! :)
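The behaviour described above is easy to see in isolation: reversing an axis with `[::-1]` yields a view whose first stride is negative, while `.copy()` produces a fresh, positively-strided array that frameworks like PyTorch accept. A minimal sketch:

```python
import numpy as np

image = np.arange(12, dtype=np.uint8).reshape(3, 4)

flipped_view = image[::-1, ...]     # view with a negative stride on axis 0
flipped_copy = flipped_view.copy()  # contiguous copy with positive strides

print(flipped_view.strides)  # (-4, 1): negative first stride
print(flipped_copy.strides)  # (4, 1): all strides positive
```

Both arrays hold identical pixel values; only the memory layout differs, which is what trips up `torch.from_numpy` on the view.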
open
2020-02-28T09:26:00Z
2020-03-04T21:16:16Z
https://github.com/aleju/imgaug/issues/630
[ "TODO" ]
AljoSt
0
roboflow/supervision
pytorch
703
PixelateAnnotator throws errors if the area is too small, BlurAnnotator is not censoring if area is too large
### Search before asking

- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.

### Bug

When using PixelateAnnotator with no additional configuration, it throws an error if the area to pixelate is too small:

```
Traceback (most recent call last):
  File "/home/ultra/clean.py", line 83, in <module>
    annotated_frame = blur_annotator.annotate(
  File "/opt/conda/lib/python3.10/site-packages/supervision/annotators/core.py", line 1253, in annotate
    scaled_up_roi = cv2.resize(
cv2.error: OpenCV(4.8.1) /io/opencv/modules/imgproc/src/resize.cpp:4068: error: (-215:Assertion failed) !dsize.empty() in function 'resize'
```

I added this debug output to [this section](https://github.com/roboflow/supervision/blob/develop/supervision/annotators/core.py#L1252):

```py
print(f"Box: {(x1, y1, x2, y2)}")
roi = scene[y1:y2, x1:x2]
print(f"ROI shape: {roi.shape}")
```

The last output before the error is:

```
Box: (644, 444, 678, 453)
ROI shape: (9, 34, 3)
```

I left the pixel size at the default (10), so I'm guessing the 9 is too small for it.

As seen in the code below, I made this adjustment to automatically pick the largest possible pixel size and divide it by 2, and this works very well:

```py
self.pixel_size = min(y2 - y1, x2 - x1) / 2
roi = scene[y1:y2, x1:x2]
scaled_down_roi = cv2.resize(
    src=roi, dsize=None, fx=self.pixel_size, fy=self.pixel_size
)
```

This approach would also resolve an issue with the BlurAnnotator: if the kernel size is set to the default and the area is large, the resulting area is still very identifiable.

Maybe we can resolve this by having the option to provide a lambda instead of a fixed number, so the user can dynamically decide how large the used kernel/pixel size should be. If that's something worth implementing in the project, I would be happy to create a PR.
### Environment

- Supervision 0.17.0 & 0.17.1

hardware not relevant

### Minimal Reproducible Example

_No response_

### Additional

_No response_

### Are you willing to submit a PR?

- [X] Yes I'd like to help by submitting a PR!
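The adaptive sizing idea from the report can be written as a small helper: clamp the pixelation block size to the box's shorter side so the downscale never produces an empty image. This is an illustrative sketch (the function name and the floor of 1 are assumptions, not supervision's actual API):

```python
def adaptive_pixel_size(x1: int, y1: int, x2: int, y2: int,
                        requested: int = 10) -> int:
    """Pick a pixelation block size that is valid for the given box.

    Uses the requested size when the box is large enough; otherwise
    falls back to half of the box's shorter side (at least 1 pixel),
    so scaling down by 1/pixel_size never yields an empty ROI.
    """
    shorter_side = min(x2 - x1, y2 - y1)
    if shorter_side <= 0:
        return 1
    return max(1, min(requested, shorter_side // 2 or 1))

# The failing box from the report: (644, 444, 678, 453) is only 9 px tall.
print(adaptive_pixel_size(644, 444, 678, 453))  # → 4 instead of the unsafe 10
```

A lambda-based API as proposed in the issue could simply accept this kind of function in place of the fixed `pixel_size` integer.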
open
2023-12-30T15:30:07Z
2024-01-02T15:36:20Z
https://github.com/roboflow/supervision/issues/703
[ "bug" ]
Clemens-E
5
Kanaries/pygwalker
pandas
436
[DEV-656] color config proposal
1. Since there is already a `themeConfig`, the name of the color config sounds confusing. My suggestion is to change the param to "UITheme". 2. Support deep merge of variables. If there is no UI theme config design tool, it will be more dev-friendly to allow devs to override some CSS variables instead of replacing the whole setting. <sub>From [SyncLinear.com](https://synclinear.com) | [DEV-656](https://linear.app/kanaries/issue/DEV-656/color-config-proposal)</sub>
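The deep-merge behaviour requested in point 2 can be sketched as a small recursive helper: user-supplied overrides are merged into the default UI theme key by key instead of replacing it wholesale. Function and variable names here are illustrative, not pygwalker's actual API:

```python
def deep_merge(defaults: dict, overrides: dict) -> dict:
    """Recursively merge overrides into defaults, returning a new dict.

    Nested dicts are merged key by key; any non-dict value in overrides
    replaces the corresponding default outright.
    """
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

default_theme = {"light": {"primary": "#1f77b4", "background": "#ffffff"},
                 "dark": {"primary": "#aec7e8", "background": "#000000"}}
# Override a single CSS variable without restating the whole setting.
custom = deep_merge(default_theme, {"dark": {"primary": "#ff7f0e"}})
print(custom["dark"])  # background preserved, primary overridden
```

With this shape, a dev only states the variables they want changed, and everything else falls through to the library defaults.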
closed
2024-02-19T03:44:16Z
2024-04-10T12:59:35Z
https://github.com/Kanaries/pygwalker/issues/436
[ "Low priority" ]
ObservedObserver
0
dpgaspar/Flask-AppBuilder
rest-api
1,762
Data used without being defined previously for Azure RBAC
https://github.com/dpgaspar/Flask-AppBuilder/blob/afc8e2c9209a928fa6a791919ceae3ce3cdc48a7/flask_appbuilder/security/manager.py#L625 The `data` variable on the referenced line is not defined; it looks like it should be `data = me`.
closed
2021-12-21T19:34:15Z
2022-04-29T17:40:56Z
https://github.com/dpgaspar/Flask-AppBuilder/issues/1762
[ "stale" ]
fcomuniz
1
adbar/trafilatura
web-scraping
166
Issue with LXML on M1 / Apple arm64 platforms
On a clean install on the `master` branch, `metadata_tests.py` and `realworld_tests.py` fail. ```sh short test summary info ============================================================================================= FAILED tests/metadata_tests.py::test_pages - AssertionError: assert None == 'Want to See a More Diverse WordPress Contributor Community? So Do We.' FAILED tests/realworld_tests.py::test_extract[False] - AssertionError: assert ('Zweitens wird der Genderstern' in '! D O C T Y P E h t m l >') FAILED tests/realworld_tests.py::test_extract[True] - assert ('Zweitens wird der Genderstern' in '<doc sitename="scilogs.spektrum.de" source="https://scilogs.spektrum.de/engelbart-galaxis/die-ablehnung-der-ge... =================================================================================== 3 failed, 48 passed, 10 warnings in 32.17s ``` Please see the full clone/install/test flow and output below. <details><summary>Shell output</summary> <p> ```sh ~ % cd Desktop ~/Desktop % git clone git@github.com:adbar/trafilatura.git Cloning into 'trafilatura'... remote: Enumerating objects: 6635, done. remote: Counting objects: 100% (1544/1544), done. remote: Compressing objects: 100% (705/705), done. remote: Total 6635 (delta 1148), reused 1063 (delta 831), pack-reused 5091 Receiving objects: 100% (6635/6635), 16.89 MiB | 22.93 MiB/s, done. Resolving deltas: 100% (4575/4575), done. ~/Desktop % cd trafilatura ~/Desktop/trafilatura % python -m venv venv ~/Desktop/trafilatura % source venv/bin/activate (venv) ~/Desktop/trafilatura % (venv) ~/Desktop/trafilatura % git branch * master (venv) ~/Desktop/trafilatura % git pull Already up to date. 
(venv) ~/Desktop/trafilatura % python --version Python 3.9.9 (venv) ~/Desktop/trafilatura % python -m pip --version pip 21.3.1 from /Users/naftalibeder/Desktop/trafilatura/venv/lib/python3.9/site-packages/pip (python 3.9) (venv) ~/Desktop/trafilatura % python -m pip install -r docs/requirements.txt Collecting Sphinx==4.3.2 Using cached Sphinx-4.3.2-py3-none-any.whl (3.1 MB) Collecting pydata-sphinx-theme==0.7.2 Using cached pydata_sphinx_theme-0.7.2-py3-none-any.whl (1.4 MB) Collecting docutils==0.17.1 Using cached docutils-0.17.1-py2.py3-none-any.whl (575 kB) Collecting trafilatura Using cached trafilatura-1.0.0-py3-none-any.whl (180 kB) Requirement already satisfied: setuptools in ./venv/lib/python3.9/site-packages (from Sphinx==4.3.2->-r docs/requirements.txt (line 2)) (59.0.1) Collecting sphinxcontrib-htmlhelp>=2.0.0 Using cached sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl (100 kB) Collecting imagesize Using cached imagesize-1.3.0-py2.py3-none-any.whl (5.2 kB) Collecting Pygments>=2.0 Using cached Pygments-2.11.2-py3-none-any.whl (1.1 MB) Collecting sphinxcontrib-qthelp Using cached sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl (90 kB) Collecting Jinja2>=2.3 Using cached Jinja2-3.0.3-py3-none-any.whl (133 kB) Collecting sphinxcontrib-serializinghtml>=1.1.5 Using cached sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl (94 kB) Collecting sphinxcontrib-devhelp Using cached sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl (84 kB) Collecting packaging Using cached packaging-21.3-py3-none-any.whl (40 kB) Collecting snowballstemmer>=1.1 Using cached snowballstemmer-2.2.0-py2.py3-none-any.whl (93 kB) Collecting babel>=1.3 Using cached Babel-2.9.1-py2.py3-none-any.whl (8.8 MB) Collecting requests>=2.5.0 Using cached requests-2.27.1-py2.py3-none-any.whl (63 kB) Collecting sphinxcontrib-applehelp Using cached sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl (121 kB) Collecting alabaster<0.8,>=0.7 Using cached alabaster-0.7.12-py2.py3-none-any.whl (14 kB) 
Collecting sphinxcontrib-jsmath Using cached sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl (5.1 kB) Collecting beautifulsoup4 Using cached beautifulsoup4-4.10.0-py3-none-any.whl (97 kB) Collecting urllib3<2,>=1.26 Using cached urllib3-1.26.8-py2.py3-none-any.whl (138 kB) Collecting htmldate>=1.0.0 Using cached htmldate-1.0.0-py3-none-any.whl (30 kB) Collecting courlan>=0.6.0 Using cached courlan-0.6.0-py3-none-any.whl (26 kB) Collecting certifi Using cached certifi-2021.10.8-py2.py3-none-any.whl (149 kB) Collecting readability-lxml>=0.8.1 Using cached readability_lxml-0.8.1-py3-none-any.whl (20 kB) Collecting lxml>=4.6.4 Using cached lxml-4.7.1.tar.gz (3.2 MB) Preparing metadata (setup.py) ... done Collecting justext>=3.0.0 Using cached jusText-3.0.0-py2.py3-none-any.whl (837 kB) Collecting charset-normalizer>=2.0.8 Using cached charset_normalizer-2.0.10-py3-none-any.whl (39 kB) Collecting pytz>=2015.7 Using cached pytz-2021.3-py2.py3-none-any.whl (503 kB) Collecting langcodes>=3.2.1 Using cached langcodes-3.3.0-py3-none-any.whl (181 kB) Collecting tld>=0.12 Using cached tld-0.12.6-py39-none-any.whl (412 kB) Collecting dateparser>=1.1.0 Using cached dateparser-1.1.0-py2.py3-none-any.whl (288 kB) Collecting python-dateutil>=2.8.2 Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) Collecting MarkupSafe>=2.0 Using cached MarkupSafe-2.0.1-cp39-cp39-macosx_10_9_universal2.whl (18 kB) Collecting cssselect Using cached cssselect-1.1.0-py2.py3-none-any.whl (16 kB) Collecting chardet Using cached chardet-4.0.0-py2.py3-none-any.whl (178 kB) Collecting idna<4,>=2.5 Using cached idna-3.3-py3-none-any.whl (61 kB) Collecting soupsieve>1.2 Using cached soupsieve-2.3.1-py3-none-any.whl (37 kB) Collecting pyparsing!=3.0.5,>=2.0.2 Downloading pyparsing-3.0.7-py3-none-any.whl (98 kB) |████████████████████████████████| 98 kB 4.9 MB/s Collecting tzlocal Using cached tzlocal-4.1-py3-none-any.whl (19 kB) Collecting regex!=2019.02.19,!=2021.8.27 Downloading 
regex-2022.1.18-cp39-cp39-macosx_11_0_arm64.whl (281 kB) |████████████████████████████████| 281 kB 144.2 MB/s Collecting six>=1.5 Using cached six-1.16.0-py2.py3-none-any.whl (11 kB) Collecting pytz-deprecation-shim Using cached pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl (15 kB) Collecting tzdata Using cached tzdata-2021.5-py2.py3-none-any.whl (339 kB) Using legacy 'setup.py install' for lxml, since package 'wheel' is not installed. Installing collected packages: tzdata, six, pytz-deprecation-shim, urllib3, tzlocal, regex, pytz, python-dateutil, pyparsing, MarkupSafe, idna, charset-normalizer, certifi, tld, sphinxcontrib-serializinghtml, sphinxcontrib-qthelp, sphinxcontrib-jsmath, sphinxcontrib-htmlhelp, sphinxcontrib-devhelp, sphinxcontrib-applehelp, soupsieve, snowballstemmer, requests, Pygments, packaging, lxml, langcodes, Jinja2, imagesize, docutils, dateparser, cssselect, chardet, babel, alabaster, Sphinx, readability-lxml, justext, htmldate, courlan, beautifulsoup4, trafilatura, pydata-sphinx-theme Running setup.py install for lxml ... 
done Successfully installed Jinja2-3.0.3 MarkupSafe-2.0.1 Pygments-2.11.2 Sphinx-4.3.2 alabaster-0.7.12 babel-2.9.1 beautifulsoup4-4.10.0 certifi-2021.10.8 chardet-4.0.0 charset-normalizer-2.0.10 courlan-0.6.0 cssselect-1.1.0 dateparser-1.1.0 docutils-0.17.1 htmldate-1.0.0 idna-3.3 imagesize-1.3.0 justext-3.0.0 langcodes-3.3.0 lxml-4.7.1 packaging-21.3 pydata-sphinx-theme-0.7.2 pyparsing-3.0.7 python-dateutil-2.8.2 pytz-2021.3 pytz-deprecation-shim-0.1.0.post0 readability-lxml-0.8.1 regex-2022.1.18 requests-2.27.1 six-1.16.0 snowballstemmer-2.2.0 soupsieve-2.3.1 sphinxcontrib-applehelp-1.0.2 sphinxcontrib-devhelp-1.0.2 sphinxcontrib-htmlhelp-2.0.0 sphinxcontrib-jsmath-1.0.1 sphinxcontrib-qthelp-1.0.3 sphinxcontrib-serializinghtml-1.1.5 tld-0.12.6 trafilatura-1.0.0 tzdata-2021.5 tzlocal-4.1 urllib3-1.26.8 (venv) ~/Desktop/trafilatura % pytest zsh: command not found: pytest (venv) ~/Desktop/trafilatura % python -m pip install pytest Collecting pytest Using cached pytest-6.2.5-py3-none-any.whl (280 kB) Collecting pluggy<2.0,>=0.12 Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB) Collecting iniconfig Using cached iniconfig-1.1.1-py2.py3-none-any.whl (5.0 kB) Collecting toml Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB) Requirement already satisfied: packaging in ./venv/lib/python3.9/site-packages (from pytest) (21.3) Collecting py>=1.8.2 Using cached py-1.11.0-py2.py3-none-any.whl (98 kB) Collecting attrs>=19.2.0 Using cached attrs-21.4.0-py2.py3-none-any.whl (60 kB) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./venv/lib/python3.9/site-packages (from packaging->pytest) (3.0.7) Installing collected packages: toml, py, pluggy, iniconfig, attrs, pytest Successfully installed attrs-21.4.0 iniconfig-1.1.1 pluggy-1.0.0 py-1.11.0 pytest-6.2.5 toml-0.10.2 (venv) ~/Desktop/trafilatura % pytest =============================================================================================== test session starts 
=============================================================================================== platform darwin -- Python 3.9.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 rootdir: /Users/naftalibeder/Desktop/trafilatura, configfile: pytest.ini collected 51 items tests/cli_tests.py ....... [ 13%] tests/downloads_tests.py .... [ 21%] tests/feeds_tests.py ..... [ 31%] tests/json_metadata_tests.py . [ 33%] tests/metadata_tests.py .........F [ 52%] tests/realworld_tests.py FF [ 56%] tests/sitemaps_tests.py ... [ 62%] tests/spider_tests.py ..... [ 72%] tests/unit_tests.py .............. [100%] ==================================================================================================== FAILURES ===================================================================================================== ___________________________________________________________________________________________________ test_pages ____________________________________________________________________________________________________ def test_pages(): '''Test on real web pages''' metadata = extract_metadata(load_mock_page('http://blog.python.org/2016/12/python-360-is-now-available.html')) assert metadata['title'] == 'Python 3.6.0 is now available!' assert metadata['description'] == 'Python 3.6.0 is now available! Python 3.6.0 is the newest major release of the Python language, and it contains many new features and opti...' assert metadata['author'] == 'Ned Deily' assert metadata['url'] == 'http://blog.python.org/2016/12/python-360-is-now-available.html' assert metadata['sitename'] == 'blog.python.org' metadata = extract_metadata(load_mock_page('https://en.blog.wordpress.com/2019/06/19/want-to-see-a-more-diverse-wordpress-contributor-community-so-do-we/')) > assert metadata['title'] == 'Want to See a More Diverse WordPress Contributor Community? So Do We.' E AssertionError: assert None == 'Want to See a More Diverse WordPress Contributor Community? So Do We.' 
tests/metadata_tests.py:320: AssertionError ------------------------------------------------------------------------------------------------ Captured log call ------------------------------------------------------------------------------------------------ WARNING trafilatura.metadata:metadata.py:239 no main title found WARNING trafilatura.metadata:metadata.py:247 no h2 title found _______________________________________________________________________________________________ test_extract[False] _______________________________________________________________________________________________ xmloutput = False @pytest.mark.parametrize("xmloutput", [False, True]) def test_extract(xmloutput): # xmloutput=False '''test extraction from HTML''' result = load_mock_page('https://die-partei.net/luebeck/2012/05/31/das-ministerium-fur-club-kultur-informiert/', xmloutput) assert 'Impressum' not in result and 'Die GEMA dreht völlig am Zeiger!' in result result = load_mock_page('https://www.bmjv.de/DE/Verbraucherportal/KonsumImAlltag/TransparenzPreisanpassung/TransparenzPreisanpassung_node.html', xmloutput) assert 'Impressum' not in result and 'Anbieter von Fernwärme haben innerhalb ihres Leitungsnetzes ein Monopol' in result result = load_mock_page('https://denkanstoos.wordpress.com/2012/04/11/denkanstoos-april-2012/', xmloutput) assert 'Two or three 10-15 min' in result and 'What type? Etc. 
(30 mins)' in result and 'Dieser Eintrag wurde veröffentlicht' not in result and 'Mit anderen Teillen' not in result result = load_mock_page('https://www.ebrosia.de/beringer-zinfandel-rose-stone-cellars-lieblich-suess', xmloutput) assert 'Das Bukett präsentiert sich' in result and 'Kunden kauften auch' not in result and 'Gutschein sichern' not in result # and 'Besonders gut passt er zu asiatischen Gerichten' in result result = load_mock_page('https://www.landwirt.com/Precision-Farming-Moderne-Sensortechnik-im-Kuhstall,,4229,,Bericht.html', xmloutput) assert 'Überwachung der somatischen Zellen' in result and 'tragbaren Ultraschall-Geräten' in result and 'Kotkonsistenz' in result and 'Anzeigentarife' not in result # and 'Aktuelle Berichte aus dieser Kategorie' not in result result = load_mock_page('http://www.rs-ingenieure.de/de/hochbau/leistungen/tragwerksplanung', xmloutput) #print(result) if xmloutput is False: assert 'Wir bearbeiten alle Leistungsbilder' in result and 'Brückenbau' not in result result = load_mock_page('http://www.shingon-reiki.de/reiki-und-schamanismus/', xmloutput) assert 'Catch Evolution' not in result and 'und gekennzeichnet mit' not in result and 'Heut geht es' in result and 'Ich komme dann zu dir vor Ort.' in result result = load_mock_page('http://love-hina.ch/news/0409.html', xmloutput) assert 'Kapitel 121 ist' in result and 'Besucher online' not in result and 'Kommentare schreiben' not in result result = load_mock_page('http://www.cdu-fraktion-erfurt.de/inhalte/aktuelles/entwicklung-der-waldorfschule-ermoeglicht/index.html', xmloutput) assert 'der steigenden Nachfrage gerecht zu werden.' in result and 'Zurück zur Übersicht' not in result # and 'Erhöhung für Zoo-Eintritt' not in result result = load_mock_page('https://de.creativecommons.org/index.php/2014/03/20/endlich-wird-es-spannend-die-nc-einschraenkung-nach-deutschem-recht/', xmloutput) assert 'das letzte Wort sein kann.' 
in result and 'Ähnliche Beiträge' not in result # and 'Michael Blahm' not in result # comments result = load_mock_page('https://piratenpartei-mv.de/blog/2013/09/12/grundeinkommen-ist-ein-menschenrecht/', xmloutput) assert 'Unter diesem Motto findet am 14. September' in result and 'Volksinitiative Schweiz zum Grundeinkommen.' in result and 'getaggt mit:' not in result # and 'Was denkst du?' not in result result = load_mock_page('https://scilogs.spektrum.de/engelbart-galaxis/die-ablehnung-der-gendersprache/', xmloutput) > assert 'Zweitens wird der Genderstern' in result and 'alldem leider – nichts.' in result # and 'Beitragsbild' not in result E AssertionError: assert ('Zweitens wird der Genderstern' in '! D O C T Y P E h t m l >') tests/realworld_tests.py:190: AssertionError _______________________________________________________________________________________________ test_extract[True] ________________________________________________________________________________________________ xmloutput = True @pytest.mark.parametrize("xmloutput", [False, True]) def test_extract(xmloutput): # xmloutput=False '''test extraction from HTML''' result = load_mock_page('https://die-partei.net/luebeck/2012/05/31/das-ministerium-fur-club-kultur-informiert/', xmloutput) assert 'Impressum' not in result and 'Die GEMA dreht völlig am Zeiger!' in result result = load_mock_page('https://www.bmjv.de/DE/Verbraucherportal/KonsumImAlltag/TransparenzPreisanpassung/TransparenzPreisanpassung_node.html', xmloutput) assert 'Impressum' not in result and 'Anbieter von Fernwärme haben innerhalb ihres Leitungsnetzes ein Monopol' in result result = load_mock_page('https://denkanstoos.wordpress.com/2012/04/11/denkanstoos-april-2012/', xmloutput) assert 'Two or three 10-15 min' in result and 'What type? Etc. 
(30 mins)' in result and 'Dieser Eintrag wurde veröffentlicht' not in result and 'Mit anderen Teillen' not in result result = load_mock_page('https://www.ebrosia.de/beringer-zinfandel-rose-stone-cellars-lieblich-suess', xmloutput) assert 'Das Bukett präsentiert sich' in result and 'Kunden kauften auch' not in result and 'Gutschein sichern' not in result # and 'Besonders gut passt er zu asiatischen Gerichten' in result result = load_mock_page('https://www.landwirt.com/Precision-Farming-Moderne-Sensortechnik-im-Kuhstall,,4229,,Bericht.html', xmloutput) assert 'Überwachung der somatischen Zellen' in result and 'tragbaren Ultraschall-Geräten' in result and 'Kotkonsistenz' in result and 'Anzeigentarife' not in result # and 'Aktuelle Berichte aus dieser Kategorie' not in result result = load_mock_page('http://www.rs-ingenieure.de/de/hochbau/leistungen/tragwerksplanung', xmloutput) #print(result) if xmloutput is False: assert 'Wir bearbeiten alle Leistungsbilder' in result and 'Brückenbau' not in result result = load_mock_page('http://www.shingon-reiki.de/reiki-und-schamanismus/', xmloutput) assert 'Catch Evolution' not in result and 'und gekennzeichnet mit' not in result and 'Heut geht es' in result and 'Ich komme dann zu dir vor Ort.' in result result = load_mock_page('http://love-hina.ch/news/0409.html', xmloutput) assert 'Kapitel 121 ist' in result and 'Besucher online' not in result and 'Kommentare schreiben' not in result result = load_mock_page('http://www.cdu-fraktion-erfurt.de/inhalte/aktuelles/entwicklung-der-waldorfschule-ermoeglicht/index.html', xmloutput) assert 'der steigenden Nachfrage gerecht zu werden.' in result and 'Zurück zur Übersicht' not in result # and 'Erhöhung für Zoo-Eintritt' not in result result = load_mock_page('https://de.creativecommons.org/index.php/2014/03/20/endlich-wird-es-spannend-die-nc-einschraenkung-nach-deutschem-recht/', xmloutput) assert 'das letzte Wort sein kann.' 
in result and 'Ähnliche Beiträge' not in result # and 'Michael Blahm' not in result # comments result = load_mock_page('https://piratenpartei-mv.de/blog/2013/09/12/grundeinkommen-ist-ein-menschenrecht/', xmloutput) assert 'Unter diesem Motto findet am 14. September' in result and 'Volksinitiative Schweiz zum Grundeinkommen.' in result and 'getaggt mit:' not in result # and 'Was denkst du?' not in result result = load_mock_page('https://scilogs.spektrum.de/engelbart-galaxis/die-ablehnung-der-gendersprache/', xmloutput) > assert 'Zweitens wird der Genderstern' in result and 'alldem leider – nichts.' in result # and 'Beitragsbild' not in result E assert ('Zweitens wird der Genderstern' in '<doc sitename="scilogs.spektrum.de" source="https://scilogs.spektrum.de/engelbart-galaxis/die-ablehnung-der-genderspr...t="2jmj7l5rSw0yVb/vlWAYkK/YBwk=">\n <main>\n <p>! D O C T Y P E h t m l &gt;</p>\n </main>\n <comments/>\n</doc>') tests/realworld_tests.py:190: AssertionError ------------------------------------------------------------------------------------------------ Captured log call ------------------------------------------------------------------------------------------------ WARNING trafilatura.metadata:metadata.py:239 no main title found WARNING trafilatura.metadata:metadata.py:247 no h2 title found ================================================================================================ warnings summary ================================================================================================= tests/cli_tests.py::test_parser tests/cli_tests.py::test_parser /Users/naftalibeder/Desktop/trafilatura/trafilatura/cli.py:212: PendingDeprecationWarning: --notables will be deprecated in a future version, use --no-tables instead warnings.warn( tests/cli_tests.py::test_parser tests/cli_tests.py::test_parser tests/cli_tests.py::test_parser tests/cli_tests.py::test_parser /Users/naftalibeder/Desktop/trafilatura/trafilatura/cli.py:205: PendingDeprecationWarning: 
--nocomments will be deprecated in a future version, use --no-comments instead warnings.warn( tests/cli_tests.py::test_parser tests/cli_tests.py::test_parser tests/cli_tests.py::test_cli_pipeline /Users/naftalibeder/Desktop/trafilatura/trafilatura/cli.py:219: PendingDeprecationWarning: --with-metadata will be deprecated in a future version, use --only-with-metadata instead warnings.warn( tests/downloads_tests.py::test_fetch /Users/naftalibeder/Desktop/trafilatura/venv/lib/python3.9/site-packages/urllib3/connectionpool.py:1043: InsecureRequestWarning: Unverified HTTPS request is being made to host 'httpbin.org'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings warnings.warn( -- Docs: https://docs.pytest.org/en/stable/warnings.html ============================================================================================= short test summary info ============================================================================================= FAILED tests/metadata_tests.py::test_pages - AssertionError: assert None == 'Want to See a More Diverse WordPress Contributor Community? So Do We.' FAILED tests/realworld_tests.py::test_extract[False] - AssertionError: assert ('Zweitens wird der Genderstern' in '! D O C T Y P E h t m l >') FAILED tests/realworld_tests.py::test_extract[True] - assert ('Zweitens wird der Genderstern' in '<doc sitename="scilogs.spektrum.de" source="https://scilogs.spektrum.de/engelbart-galaxis/die-ablehnung-der-ge... =================================================================================== 3 failed, 48 passed, 10 warnings in 32.17s ==================================================================================== ``` </p> </details>
closed
2022-01-22T02:37:51Z
2023-11-08T15:59:24Z
https://github.com/adbar/trafilatura/issues/166
[ "bug", "wontfix", "documentation" ]
naftalibeder
9
davidsandberg/facenet
computer-vision
604
align_dataset_mtcnn.py using fail
Anyone who can help me? After getting the data LFW(put in facenet/data/lfw/raw), I want to handle these data using align_dataset_mtcnn.py, but something wrong happened. My command is "cd facenet export PYTHONPATH=$(pwd)/src python ./src/align/align_dataset_mtcnn.py ./data/lfw/raw ./data/lfw/lfw_160 --image_size 160 --margin 32 --random_order" (MAC without GPU) Error as below: Traceback (most recent call last): File "./src/align/align_dataset_mtcnn.py", line 35, in <module> import align.detect_face File "/Users/tongwenjie/Desktop/twj/TensorFlow项目及代码/Face recognition/davidsandberg_facenet_3k/facenet/src/align/detect_face.py", line 34, in <module> import cv2 File "/Users/tongwenjie/anaconda2/lib/python2.7/site-packages/cv2/__init__.py", line 4, in <module> from .cv2 import * ImportError: dlopen(/Users/tongwenjie/anaconda2/lib/python2.7/site-packages/cv2/cv2.so, 2): Symbol not found: _clock_gettime Referenced from: /Users/tongwenjie/anaconda2/lib/python2.7/site-packages/cv2/.dylibs/libavutil.55.78.100.dylib (which was built for Mac OS X 10.12) Expected in: /usr/lib/libSystem.B.dylib in /Users/tongwenjie/anaconda2/lib/python2.7/site-packages/cv2/.dylibs/libavutil.55.78.100.dylib 192:facenet tongwenjie$ python ./src/align/align_dataset_mtcnn.py ./data/lfw/raw ./data/lfw/lfw_160 --image_size 160 --margin 32 Traceback (most recent call last): File "./src/align/align_dataset_mtcnn.py", line 35, in <module> import align.detect_face File "/Users/tongwenjie/Desktop/twj/TensorFlow项目及代码/Face recognition/davidsandberg_facenet_3k/facenet/src/align/detect_face.py", line 34, in <module> import cv2 File "/Users/tongwenjie/anaconda2/lib/python2.7/site-packages/cv2/__init__.py", line 4, in <module> from .cv2 import * ImportError: dlopen(/Users/tongwenjie/anaconda2/lib/python2.7/site-packages/cv2/cv2.so, 2): Symbol not found: _clock_gettime Referenced from: /Users/tongwenjie/anaconda2/lib/python2.7/site-packages/cv2/.dylibs/libavutil.55.78.100.dylib (which was built for Mac OS X 10.12) 
Expected in: /usr/lib/libSystem.B.dylib in /Users/tongwenjie/anaconda2/lib/python2.7/site-packages/cv2/.dylibs/libavutil.55.78.100.dylib
closed
2018-01-07T09:05:31Z
2018-04-01T21:38:07Z
https://github.com/davidsandberg/facenet/issues/604
[]
situjie68
2
feder-cr/Jobs_Applier_AI_Agent_AIHawk
automation
742
[BUG]: ERROR: Cannot install -r requirements.txt (line 10), and langchain-core===0.2.36 because these package versions have conflicting dependencies.
### Describe the bug ERROR: Cannot install -r requirements.txt (line 10), -r requirements.txt (line 12), -r requirements.txt (line 13), -r requirements.txt (line 14), -r requirements.txt (line 15), -r requirements.txt (line 2), -r requirements.txt (line 7), -r requirements.txt (line 8) and langchain-core===0.2.36 because these package versions have conflicting dependencies. ### Steps to reproduce Just running "pip install -r requirements.txt" ### Expected behavior _No response_ ### Actual behavior _No response_ ### Branch None ### Branch name _No response_ ### Python version _No response_ ### LLM Used _No response_ ### Model used _No response_ ### Additional context The conflict is caused by: The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.2.4 depends on langchain-core<0.4.0 and >=0.3.15 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.2.3 depends on langchain-core<0.4.0 and >=0.3.9 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 
langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.2.1 depends on langchain-core<0.4.0 and >=0.3.1 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.2.0 depends on langchain-core<0.4.0 and >=0.3.0 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.11 depends on langchain-core<0.2.0 and >=0.1.43 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 
0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.10 depends on langchain-core<0.2.0 and >=0.1.43 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.9 depends on langchain-core<0.2.0 and >=0.1.43 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.8 depends on langchain-core<0.2.0 and >=0.1.42 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.7 depends on 
langchain-core<0.2.0 and >=0.1.41 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.6 depends on langchain-core<0.2.0 and >=0.1.33 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.5 depends on langchain-core<0.2.0 and >=0.1.33 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.4 depends on langchain-core<0.2 and >=0.1 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 
1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.3 depends on langchain-core<0.2 and >=0.1 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.2 depends on langchain-core<0.2 and >=0.1 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.1 depends on langchain-core<0.2 and >=0.1 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on 
langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.1.0 depends on langchain-core<0.2 and >=0.1 The user requested langchain-core===0.2.36 langchain 0.2.11 depends on langchain-core<0.3.0 and >=0.2.23 langchain-community 0.2.10 depends on langchain-core<0.3.0 and >=0.2.23 langchain-google-genai 1.0.10 depends on langchain-core<0.3 and >=0.2.33 langchain-ollama 0.1.3 depends on langchain-core<0.3.0 and >=0.2.36 langchain-openai 0.1.17 depends on langchain-core<0.3.0 and >=0.2.20 langchain-text-splitters 0.2.2 depends on langchain-core<0.3.0 and >=0.2.10 lib-resume-builder-aihawk 0.1 depends on langchain-core langchain-anthropic 0.0.2 depends on langchain-core<0.2 and >=0.1 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip to attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
closed
2024-11-04T13:33:58Z
2024-12-18T17:12:43Z
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/742
[ "bug" ]
agispas
4
python-gitlab/python-gitlab
api
2,827
Bug: List all projects of current user returns empty list
## Description of the problem, including code/CLI snippet I want to call `gitlab_client.projects.list(membership=True, all=True)` and have all projects returned that the current user is a member of. ## Expected Behavior Returns a list of projects the user is a member of. Can be filtered via the gitlab API using the `membership` parameter. https://docs.gitlab.com/ee/api/projects.html#list-all-projects ## Actual Behavior Returns an empty list ## Specifications - python-gitlab version: 4.4.0 - API version you are using (v3/v4): v4 - Gitlab server version (or gitlab.com): GitLab Enterprise Edition [v16.9.2-ee](https://gitlab.com/gitlab-org/gitlab/-/tags/v16.9.2-ee)
closed
2024-03-22T12:39:53Z
2024-06-06T20:36:21Z
https://github.com/python-gitlab/python-gitlab/issues/2827
[]
Malaber
1
postmanlabs/httpbin
api
441
Server is down?
![httpbin](https://user-images.githubusercontent.com/8018334/39406032-eb0423ca-4be2-11e8-9a9f-b77d1999139f.png)
closed
2018-04-29T11:24:26Z
2018-04-29T11:48:56Z
https://github.com/postmanlabs/httpbin/issues/441
[]
sicklife
1
matplotlib/mplfinance
matplotlib
335
Bug Report: Point-and-Figure Plotting Error
**Describe the bug**
The Point and Figure definition doesn't plot quite correctly. The only reason why a new column is created in a Point and Figure Chart is to avoid overwriting an existing box. So, a change in direction (X to O or vice-versa) doesn't result in a new column unless it would result in an overwriting of an existing data point. This helps to better show the trend and makes the chart more compact and readable.
**Expected behavior**
The O or X, insofar as they're not overwriting an existing plot point, should be in a single column.
**Screenshots**
Please see the screenshots from the documentation provided.
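For illustration, here is a minimal, hypothetical sketch of the column rule the reporter describes — a direction change stays in the current column unless plotting the new box would overwrite one already there. (This is the reporter's proposed behavior, not mplfinance's actual implementation; `add_box` and the set-per-column representation are made up for this sketch.)

```python
def add_box(columns, level):
    """Plot one box at the given price (box) level.

    Per the rule described in the report, a new column is started only
    when the level is already occupied in the current column, i.e. when
    plotting it would overwrite an existing box.
    """
    if not columns or level in columns[-1]:
        columns.append({level})
    else:
        columns[-1].add(level)
    return columns


# Rising boxes 1..3 fill a single column; reversing down to level 2
# would overwrite an existing box, so only then does a new column start.
cols = []
for level in [1, 2, 3, 2]:
    add_box(cols, level)
# cols is now [{1, 2, 3}, {2}]
```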
closed
2021-02-20T02:44:16Z
2021-02-28T03:56:52Z
https://github.com/matplotlib/mplfinance/issues/335
[ "bug" ]
GenusGeoff
2
jina-ai/serve
fastapi
5,522
chore: draft release note 3.13
# Release Note

This release contains 14 new features, 9 bug fixes and 7 documentation improvements.

This release introduces major features like Custom Gateways, Dynamic Batching for Executors, development support with auto-reloading, support for the new namespaced Executor scheme `jinaai`, improvements for our gRPC transport layer, and more.

## 🆕 Features

### Custom Gateways (#5153, #5189, #5342, #5457, #5465, #5472 and #5477)

Jina Gateways are now customizable in the sense that you can implement them in much the same way as an Executor. With this feature, Jina gives power to the user to implement any server, protocol or interface at the Gateway level. There's no more need to build an extra service that uses the Flow.

For instance, you can define a Jina Gateway that communicates with the rest of Flow Executors like so:

```python
from jina import Document, DocumentArray
from jina.serve.runtimes.gateway.http.fastapi import FastAPIBaseGateway


class MyGateway(FastAPIBaseGateway):
    @property
    def app(self):
        from fastapi import FastAPI

        app = FastAPI(title='Custom FastAPI Gateway')

        @app.get(path='/service')
        async def my_service(input: str):
            # convert input request to Documents
            docs = DocumentArray([Document(text=input)])

            # send Documents to Executors using GatewayStreamer
            result = None
            async for response_docs in self.streamer.stream_docs(
                docs=docs,
                exec_endpoint='/',
            ):
                # convert response docs to server response and return it
                result = response_docs[0].text

            return {'result': result}

        return app
```

Then you can use it in your Flow in the following way:

```python
from jina import Flow

flow = Flow().config_gateway(
    uses=MyGateway, port=12345, protocol='http'
)
```

A Custom Gateway can be used as a Python class, YAML configuration or Docker image.

Adding support for Custom Gateways required exposing the Gateway API and supporting multiple ports and protocols (mentioned in a [prior release](https://github.com/jina-ai/jina/releases/tag/v3.12.0)).
You can customize it by subclassing the [FastAPIBaseGateway](https://docs.jina.ai/concepts/gateway/customization/#subclass-from-fastapibasegateway) class (for simple implementations) or the base [Gateway](https://docs.jina.ai/concepts/gateway/customization/#subclass-from-gateway) for more complex use cases. Working on this feature also involved exposing and improving the [GatewayStreamer](https://docs.jina.ai/concepts/gateway/customization/#calling-executors-with-gatewaystreamer) API as a way to communicate with Executors within the Gateway.

Find more information in the [Custom Gateway page](docs.jina.ai/concepts/gateway/customization/).

### Dynamic batching (#5410)

This release adds Dynamic batching capabilities to Executors.

Dynamic batching allows requests to be accumulated and batched together before being sent to an Executor. The batch is created dynamically depending on the configuration for each endpoint. This feature is especially relevant for inference tasks where model inference is more optimized when batched to efficiently use GPU resources.

You can configure Dynamic batching using either a decorator or the `uses_dynamic_batching` parameter. The following example shows how to enable Dynamic batching on an Executor that performs model inference:

```python
import torch

from jina import requests, dynamic_batching, Executor, DocumentArray, Flow


class MyExecutor(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # initialize model
        self.model = torch.nn.Linear(in_features=128, out_features=128)

    @requests(on='/bar')
    @dynamic_batching(preferred_batch_size=10, timeout=200)
    def embed(self, docs: DocumentArray, **kwargs):
        docs.embeddings = self.model(torch.Tensor(docs.tensors))


flow = Flow().add(uses=MyExecutor)
```

With Dynamic Batching enabled, the Executor above will efficiently use GPU resources to perform inference by batching Documents together.
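Conceptually, a dynamic batcher accumulates incoming items until either `preferred_batch_size` items have arrived or the timeout has elapsed, and only then flushes them as one batch. The following is a minimal pure-Python sketch of that accumulation rule — the `Batcher` class and its method names are hypothetical, not Jina's actual implementation:

```python
import time


class Batcher:
    """Accumulate items; flush when the batch is full or too old."""

    def __init__(self, preferred_batch_size=10, timeout_ms=200):
        self.preferred_batch_size = preferred_batch_size
        self.timeout_ms = timeout_ms
        self._items = []
        self._first_arrival = None

    def add(self, item, now_ms=None):
        """Add one item; return the flushed batch if a flush was triggered."""
        now_ms = time.monotonic() * 1000 if now_ms is None else now_ms
        if not self._items:
            self._first_arrival = now_ms
        self._items.append(item)
        full = len(self._items) >= self.preferred_batch_size
        timed_out = now_ms - self._first_arrival >= self.timeout_ms
        if full or timed_out:
            batch, self._items = self._items, []
            return batch
        return None
```

Note this sketch only checks the timeout when a new item arrives; a real implementation also fires on a timer so a partially filled batch is flushed even when no further requests come in.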
Read more about the feature in the [Dynamic Batching documentation page](https://docs.jina.ai/concepts/executor/dynamic-batching/).

### Install requirements of local Executors (#5508)

Prior to this release, the `install_requirements` parameter of Executors only installed Executor requirements for Hub Executors. Now, local Executors with a `requirements.txt` file will also have their requirements installed before starting Flows.

### Support `jinaai` Executor scheme to enable namespaced Hub Executors (#5462, #5468 and #5515)

As [Jina AI Cloud introduced namespaces](https://jina.ai/news/how-to-manage-jina-resources-with-namespaces/) to Executor resources, we made changes to support the new `jinaai` Executor scheme. This means that namespaced Executors can now be used with the `jinaai` scheme in the following way:

```python
from jina import Flow

flow = Flow().add(uses='jinaai://jina-ai/DummyHubExecutor')
```

This scheme is also supported in Kubernetes and other APIs:

```python
from jina import Flow

flow = Flow().add(uses='jinaai+docker://jina-ai/DummyHubExecutor')
flow.to_kubernetes_yaml('output_path', k8s_namespace='my-namespace')
```

The support of the new scheme means the minimum supported version of [`jina-hubble-sdk`](https://github.com/jina-ai/jina-hubble-sdk) has been increased to `0.26.10`.

### Add auto-reloading to Flow and Executor on file changes (#5461, #5488 and #5514)

A new argument `reload` has been added to the Flow and Executor APIs, which automatically reloads running Flows and Executors when changes are made to Executor source code or YAML configurations of Flows and Executors.

Although this feature is only meant for development, it aims to help developers iterate fast and automatically update Flows with changes they make live during development.
Find out more about this feature in these two sections:

* [Executor hot reload](https://docs.jina.ai/concepts/executor/hot-reload/)
* [Flow hot reload](https://docs.jina.ai/concepts/flow/hot-reload/)

### Expand Executor serve parameters (#5494)

The method `Executor.serve` can receive more parameters, similar to what the Flow API expects. With new parameters to control serving and deployment configurations of the Executor, this method empowers the Executor to be convenient for single service tasks. This means you can not only build advanced microservices-based pipelines and applications, but also build individual services with all Jina features: shards/replicas, dynamic batching, auto-reload, etc.

[Read more about the method in the Python API documentation](https://docs.jina.ai/api/jina.serve.executors/#jina.serve.executors.BaseExecutor.serve).

### Add gRPC trailing metadata when logging gRPC error (#5512)

When logging gRPC errors, context trailing metadata is now shown. This helps identify underlying network issues rather than the error codes that mask multiple network errors into a single gRPC status code. For instance, the new log message looks like the following:

```text
DEBUG gateway@ 1 GRPC call to deployment executor0 failed with error <AioRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
...
trailing_metadata=Metadata((('content-length', '0'), ('l5d-proxy-error', 'HTTP Balancer service in fail-fast'), ('l5d-proxy-connection', 'close'), ('date', 'Tue, 13 Dec 2022 10:20:15 GMT'))), for retry attempt 2/3. Trying next replica, if available.
```

The `trailing_metadata` returned by load balancers will help to identify the root cause more accurately.

### Implement `unary_unary` stub for Gateway Runtime (#5507)

This release adds the gRPC `unary_unary` stub for Gateway Runtime as a new communication stub with Executors.
Since the [gRPC performance best practices for Python page](https://grpc.io/docs/guides/performance/#python) suggests that unary stream implementation might be faster for Python, we added this communication method. However, this is not enabled by default. The streaming RPC method will still be used unless you set the `stream` option to `False` in the [`Client.post()`](https://docs.jina.ai/api/jina.clients.mixin/#jina.clients.mixin.PostMixin.post) method. The feature is only effective when the gRPC protocol is used. Read more about the feature in the documentation: https://docs.jina.ai/concepts/client/send-receive-data/#use-unary-or-streaming-grpc ### Add Startup Probe and replace Readiness Probe with Liveness Probe (#5407) Before this release, when exporting Jina Flows to Kubernetes YAML configurations, Kubernetes Readiness Probes used to be added for the Gateway pod and each Executor pod. In this release we have added a Startup Probe and replaced Readiness Probe with Liveness Probe. Both probes use the [`jina ping` command](https://docs.jina.ai/cli/#ping) to check that pods are healthy. ### New Jina perf Docker images (#5498) We added a slightly larger Docker image with suffix `perf` which includes a set of tools useful for performance tuning and debugging. The new image is available in [Jina AI's Docker hub](https://hub.docker.com/r/jinaai/jina). ### New Jina Docker image for Python 3.10, and use Python 3.8 for default Jina image (#5490) Besides adding Docker images aimed for performance optimization, we added an image with a newer Python version: 3.10. This is available in [Jina AI's Docker hub](https://hub.docker.com/r/jinaai/jina), for example `jinaai/jina:master-py310-standard`. We also made Python 3.8 our minimum supported Python version by default, and it will be used for default Docker images. ### Minimize output of `jina ping` command (#5476) `jina ping` commands are now less verbose and will print less irrelevant output. 
However, important information like latency for each round, average latency, number of successful requests and ping result will still show up. ### Add Kubernetes preStop hook to the containers (#5445) A `preStop` hook has been added to Executors and the Gateway to allow a grace period. This allows more time to complete in-flight requests and finish the server's graceful shutdown. ### Generate random ports for multiple protocols (#5455) If you use multiple protocols for a Gateway, you no longer need to specify a port for each one. Whether it's Python or YAML, you just need to specify the protocols you want to support and Jina will generate random ports for you. Python API: ```python from jina import Flow flow = Flow().config_gateway(protocol=['grpc', 'http', 'websocket']) with flow: flow.block() ``` YAML: ```yaml jtype: Flow gateway: protocol: - 'grpc' - 'http' - 'websocket' ``` Result: ![flow](https://user-images.githubusercontent.com/15269265/207876448-8aa2943f-d66c-4374-ab73-8ab0cc1793ca.png) ## 🐞 Bug Fixes ### List-like args passed as string (#5464) We fixed the format expected for `port`, `host` and `port_monitoring` to feel more Pythonic. Basically, if you use replicas, you no longer have to provide comma-separated ports as a string value. Instead, you can simply pass a list of values, no need to put all in a string anymore! For instance, suppose we have two external replicas of an Executor that we want to join in our Flow (the first is hosted on `localhost:12345` and the second on `91.198.174.192:12346`). 
We can add them like this: ```python from jina import Flow replica_hosts, replica_ports = ['localhost','91.198.174.192'], ['12345','12346'] # instead of 'localhost,91.198.174.192', '12345,12346' Flow().add(host=replica_hosts, port=replica_ports, external=True) ``` Or: ```python Flow().add(host=['localhost:12345','91.198.174.192:12346'], external=True) ``` Note that this is **not** a breaking change, and the old syntax (comma-separated values: `Flow().add(host='localhost:12345,91.198.174.192:12346', external=True)`) is still supported for backwards compatibility. ### Restore port to overload type hint and JSON schema (#5501) When we made `port` and `protocol` arguments of the Gateway support multiple values, a bug was introduced where `port` did not appear in Jina's JSON schema as well as the Flow API overload for method signatures. Although the arguments are functional in both the Python API and YAML, this suppressed auto-completion and developer support for these parameters. This release restores the `port` parameter in both the Flow method overloads and JSON schema. ### Do not force `insecure` to `True` in open telemetry integration (#5483) In Jina's instrumentation, communication to open telemetry exporters used to be forced to `insecure` mode. Luckily, our community member @big-thousand picked this up and submitted a fix. The communication is no longer forced to the `insecure` mode. Kudos to @big-thousand for his contribution! ### Fix problem when using floating Executor in HTTP (#5493) We found a bug when using Floating Executors in HTTP, where the floating Executor is connected to the Gateway (in the Flow topology). In this case, the Executor would not receive input Documents properly. This release fixes the mentioned bug. ### Add egg info post install command for egg info setup mode (#5491) This release adds support for the `egg info` setup mode in Python. 
This means post-installation commands are now properly executed in environments that rely on Python's new setup mode. This bug resulted in several issues especially for environments that depend on these post-installation commands. For instance, some Environment Variables that are needed for Jina to work on macOS and for CLI auto-complete. ### Do not apply limits when `gpus='all'` in Kubernetes (#5485) If Executor parameter `gpus` is set to `"all"`, no limits will be applied on the pod in Kubernetes. ### Fix Windows signal handling (#5484) This release improves signal handling on Windows, specifically when cancelling a Flow with an OS signal. ### Cap `opentelemetry-instrumentation-aiohttp-client` (#5452) This release caps the version for `opentelemetry-instrumentation-aiohttp-client` which is incompatible with `opentelemetry-semantic-conventions`. ### Raise exceptions from path importer (#5447) Previously, errors were hidden when they came from a Python module imported to load an Executor. Actually the module was not considered to be a Python module, which produced other misleading errors. In this release, actual errors during imports will be raised and no longer hidden. 
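The path-importer fix above can be illustrated with a standard-library sketch (illustrative only, not Jina's actual loader): the key behavior is that an exception raised by the module's own top-level code now propagates to the caller, instead of being masked as "not a Python module".

```python
import importlib.util


def load_executor_module(path, name="executor_module"):
    """Load a Python module from a file path, letting real errors surface."""
    spec = importlib.util.spec_from_file_location(name, path)
    if spec is None or spec.loader is None:
        raise ImportError(f"{path} cannot be loaded as a Python module")
    module = importlib.util.module_from_spec(spec)
    # Any exception raised while executing the module's top-level code
    # propagates to the caller instead of being swallowed.
    spec.loader.exec_module(module)
    return module
```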
## 📗 Documentation Improvements - Add gRPC requirements for Apple Silicon (M1 Chip) to fix failing installation of Jina (#5511) - Add redirects from '/fundamentals' to '/concepts' (#5504) - Update JCloud documentation to the jcloud `v0.1.0` (#5385) - Restructure documentation under `/concepts` - Change Executor URI scheme to namespaced scheme `jinaai` (#5450) - Custom Gateway documentation (#5465) - Provide more accurate description for port and protocol parameters of the Gateway (#5456) ## 🤟 Contributors We would like to thank all contributors to this release: - Delgermurun (@delgermurun) - Jie Fu (@jemmyshin) - Alex Cureton-Griffiths (@alexcg1) - big-thousand (@big-thousand) - IyadhKhalfallah (@IyadhKhalfallah) - Deepankar Mahapatro (@deepankarm) - samsja (@samsja) - AlaeddineAbdessalem (@alaeddine-13) - Joan Fontanals (@JoanFM) - Anne Yang (@AnneYang720) - Han Xiao (@hanxiao) - Girish Chandrashekar (@girishc13) - Jackmin801 (@Jackmin801)
closed
2022-12-14T14:38:15Z
2022-12-15T16:09:24Z
https://github.com/jina-ai/serve/issues/5522
[]
alexcg1
0
paperless-ngx/paperless-ngx
django
9,478
[BUG] Workflow overwrites custom text fields
### Description When a workflow adds empty custom fields to a document of type "Rechnung" and the user later fills in these fields (e.g. "Rechnungsnummer"), the filled-in values are overwritten with null on the next PUT request. This happens even though the PUT request clearly contains the correct field values. **Expected Behavior:** If a custom field contains a non-null value in the PUT request, this value should be preserved and not overwritten with null. **Actual Behavior:** After the PUT request, the document reloads (GET /api/documents/3097/?full_perms=true) and shows all previously filled custom field values as null, except for one (field 18) which remains untouched. <img width="1006" alt="Image" src="https://github.com/user-attachments/assets/cbe0e181-a910-4c51-a9eb-8259b14f6035" /> After I fill the fields, it does the following put request: <img width="527" alt="Image" src="https://github.com/user-attachments/assets/6866d6ee-28e6-49e3-9bd5-76223a90198f" /> ```json { "id":3097, "correspondent":19, "document_type":2, "storage_path":1, "title":"*************************", "content":"*************************", "tags":[ 74, 9 ], "created":"2025-03-03T00:00:00+01:00", "created_date":"2025-03-03", "modified":"2025-03-24T12:17:02.139824+01:00", "added":"2025-03-19T12:42:06.864390+01:00", "deleted_at":null, "archive_serial_number":1635290359, "original_file_name":"*************************", "archived_file_name":"*************************", "owner":4, "permissions":{ "view":{ "users":[ ], "groups":[ ] }, "change":{ "users":[ ], "groups":[ ] } }, "notes":[ ], "custom_fields":[ { "value":"MB/25/03/051945", "field":1 }, { "value":"2025-03-03", "field":5 }, { "value":null, "field":12 }, { "value":null, "field":13 }, { "value":"CHF46.30", "field":14 }, { "value":"3HEK7CwYPvMNAaja", "field":18 } ], "page_count":2, "mime_type":"application/pdf" } ``` The browser loads immediately https://******/api/documents/3097/?full_perms=true, where some of the fields are NULL. 
```json { "id":3097, "correspondent":19, "document_type":2, "storage_path":1, "title":"************************", "content":"*****", "tags":[ 74, 9 ], "created":"2025-03-03T00:00:00+01:00", "created_date":"2025-03-03", "modified":"2025-03-24T12:17:02.832783+01:00", "added":"2025-03-19T12:42:06.864390+01:00", "deleted_at":null, "archive_serial_number":1635290359, "original_file_name":"************************", "archived_file_name":"************************", "owner":4, "permissions":{ "view":{ "users":[ ], "groups":[ ] }, "change":{ "users":[ ], "groups":[ ] } }, "notes":[ ], "custom_fields":[ { "value":null, "field":1 }, { "value":null, "field":5 }, { "value":null, "field":12 }, { "value":null, "field":13 }, { "value":null, "field":14 }, { "value":"3HEK7CwYPvMNAaja", "field":18 } ], "page_count":2, "mime_type":"application/pdf" } ``` I expected that a workflow does not overwrite custom fields, if they have a content. ### Steps to reproduce 1. Trigger a workflow that adds empty custom fields (e.g., "Rechnungsnummer", "Rechnungsdatum", etc.) to a document of type "Rechnung". 2. Manually fill in those fields in the frontend. 3. Save the document. 4. Inspect the PUT request: it correctly includes the filled custom field values. 5. Observe the immediate GET request to /api/documents/<id>/?full_perms=true. 6. Notice that the custom field values are now null, although they were just submitted with actual values. 
### Webserver logs ```bash [2025-03-24 12:17:02,543] [INFO] [paperless.matching] Document did not match Workflow: Krankenkasse ist Rechnung [2025-03-24 12:17:02,549] [DEBUG] [paperless.matching] ('Document doc type Rechnung does not match Prämienrechnung',) [2025-03-24 12:17:02,553] [INFO] [paperless.matching] Document did not match Workflow: Krankenkasse ist Rechnung [2025-03-24 12:17:02,555] [DEBUG] [paperless.matching] ('Document doc type Rechnung does not match Leistungsabrechnung',) [2025-03-24 12:17:02,567] [INFO] [paperless.matching] Document matched WorkflowTrigger 3 from Workflow: Rechnung [2025-03-24 12:17:02,569] [INFO] [paperless.handlers] Applying WorkflowAction 8 from Workflow: Rechnung [2025-03-24 12:17:02,848] [INFO] [paperless.matching] Document did not match Workflow: Betreibung hat Betreibungsnummer [2025-03-24 12:17:02,850] [DEBUG] [paperless.matching] ('Document doc type Rechnung does not match Betreibung',) [2025-03-24 12:17:02,865] [INFO] [paperless.matching] Document did not match Workflow: Kontoauszug Raiffeisen [2025-03-24 12:17:02,869] [DEBUG] [paperless.matching] ('Document doc type Rechnung does not match Kontoauszug',) [2025-03-24 12:19:53,279] [WARNING] [_granian.asgi.io] ASGI transport error: SendError { .. } [2025-03-24 12:19:53,289] [WARNING] [_granian.asgi.io] ASGI transport error: SendError { .. } [2025-03-24 12:19:53,302] [WARNING] [_granian.asgi.io] ASGI transport error: SendError { .. } [2025-03-24 12:19:53,307] [WARNING] [_granian.asgi.io] ASGI transport error: SendError { .. } [2025-03-24 12:19:53,311] [WARNING] [_granian.asgi.io] ASGI transport error: SendError { .. } [2025-03-24 12:19:55,203] [WARNING] [_granian.asgi.io] ASGI transport error: SendError { .. } [2025-03-24 12:19:55,208] [WARNING] [_granian.asgi.io] ASGI transport error: SendError { .. 
} ``` ### Browser logs ```bash ``` ### Paperless-ngx version 2.15.0 ### Host OS MacOS / x86_64 / Docker ### Installation method Docker - official image ### System status ```json { "pngx_version": "2.15.0", "server_os": "Linux-6.10.14-linuxkit-x86_64-with-glibc2.36", "install_type": "docker", "storage": { "total": 2121312247808, "available": 1132791042048 }, "database": { "type": "mysql", "url": "paperless", "status": "OK", "error": null, "migration_status": { "latest_migration": "paperless_mail.0029_mailrule_pdf_layout", "unapplied_migrations": [] } }, "tasks": { "redis_url": "redis://broker:6379", "redis_status": "OK", "redis_error": null, "celery_status": "OK", "celery_url": "celery@e06c7beb2db1", "celery_error": null, "index_status": "OK", "index_last_modified": "2025-03-24T12:17:02.469167+01:00", "index_error": null, "classifier_status": "OK", "classifier_last_trained": "2025-03-24T11:06:39.199015Z", "classifier_error": null, "sanity_check_status": "OK", "sanity_check_last_run": "2025-03-24T10:11:53.292054Z", "sanity_check_error": null } } ``` ### Browser LibreWolf / Firefox ### Configuration changes _No response_ ### Please confirm the following - [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation. - [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. - [x] I have already searched for relevant existing issues and discussions before opening this report. - [x] I have updated the title field above with a concise description.
closed
2025-03-24T11:26:37Z
2025-03-24T13:34:50Z
https://github.com/paperless-ngx/paperless-ngx/issues/9478
[ "duplicate" ]
petarmarj
1
reiinakano/scikit-plot
scikit-learn
30
installation error
Hi, I tried to pip install scikit-plot but got the following error message: Command "python setup.py egg_info" failed with error code 1. Any suggestions? Thanks very much. Elena
closed
2017-05-01T22:36:46Z
2017-05-07T08:19:36Z
https://github.com/reiinakano/scikit-plot/issues/30
[]
zhongheli
3
zama-ai/concrete-ml
scikit-learn
681
Performance Issues
## Summary I have tested multiple example codes like CIFAR-10 and CIFAR-100. As previously reported, inference takes a lot of time: almost 25 to 30 minutes. I have upgraded my hardware resources and did not notice a significant improvement in inference time. Can you please share some neural network code which is optimized for Concrete ML and takes less time? It's okay if it works on an image or text dataset.
closed
2024-05-16T03:46:46Z
2024-06-11T09:23:52Z
https://github.com/zama-ai/concrete-ml/issues/681
[ "bug" ]
thorntonmk
1
robusta-dev/robusta
automation
1,028
ability to send notification after some additional pause
**Is your feature request related to a problem?** I want to create a notification chain, e.g. on a critical alert notify one sink immediately, but notify another sink only if the alert continues firing after 30m (promoting the alert to the next support level). **Describe alternatives you've considered** Something similar can be achieved with Alertmanager by increasing the group_wait time for the next-level receiver.
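The Alertmanager alternative described above can be sketched roughly like this (receiver names and matchers are placeholders, and the exact escalation behavior depends on Alertmanager's grouping semantics):

```yaml
route:
  receiver: default
  routes:
    - matchers: ['severity="critical"']
      receiver: first-level    # notified immediately
      continue: true           # keep evaluating sibling routes
    - matchers: ['severity="critical"']
      receiver: second-level   # next support level
      group_wait: 30m          # delay the first notification by 30m
```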
open
2023-08-09T11:39:46Z
2023-10-31T15:46:16Z
https://github.com/robusta-dev/robusta/issues/1028
[ "enhancement" ]
lictw
2
thp/urlwatch
automation
723
Documentation example for Github tags stopped working.
In the [documentation](https://urlwatch.readthedocs.io/en/latest/filters.html#watching-github-releases-and-gitlab-tags) in the section "Watching Github releases and Gitlab tags" ``` This is the corresponding version for Github tags: url: https://github.com/thp/urlwatch/tags filter: - xpath: (//div[contains(@class,"commit js-details-container Details")]//h4//a)[1] - html2text - strip ``` Stopped working for me a couple days ago. I assume there was a change to the structure of the site: I fixed Github tags with this filter instead, this will give you the latest/top-most tag as result: ``` url: https://github.com/thp/urlwatch/tags filter: - xpath: path: //*[@class="Link--primary"] maxitems: 1 - html2text: ``` It also works for Github releases, which will include the latest/top-most release or pre-release: ``` url: https://github.com/Novik/ruTorrent/releases filter: - xpath: path: //*[@class="Link--primary"] maxitems: 1 - html2text: ``` If you only want to monitor the latest release and not include pre-releases: ``` url: https://github.com/Novik/ruTorrent/releases/latest filter: - xpath: //*[@class="ml-1"] - html2text: - strip ``` Thanks for a wonderful tool.
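What the `maxitems: 1` option keeps can be illustrated with the standard library's limited XPath support (urlwatch itself uses lxml; the HTML snippet below is a simplified stand-in for the GitHub tags page):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the markup on a GitHub tags page.
snippet = """<div>
  <a class="Link--primary">v2.28.0</a>
  <a class="Link--primary">v2.27.0</a>
  <a class="Link--muted">compare</a>
</div>"""

root = ET.fromstring(snippet)
# The xpath filter selects every element carrying the class...
matches = root.findall(".//*[@class='Link--primary']")
# ...and maxitems: 1 keeps only the first (latest/top-most) one.
latest = matches[:1]
```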
closed
2022-10-07T13:55:40Z
2023-08-01T17:49:13Z
https://github.com/thp/urlwatch/issues/723
[]
mercurytoxic
5
tableau/server-client-python
rest-api
703
Issue updating connections on workbooks to datasources
Hi, I'm trying to update connections on workbooks. The example provided in samples works for datasources, but when I'm updating workbooks I'm getting the following error: 400039: Bad Request There was a problem updating connection {CONNECTION_ID} for workbook {WORKBOOK_ID}. Maybe it's a different issue, but how would one update workbook connections to point to a different datasource on Tableau? From what I can see the connections get updated by simply overriding the attributes on the ConnectionItem class and then building an xml/text request. What I can't see is how to change datasource_name/datasource_id on the ConnectionItem class (they're private methods). UPDATE: I've also noticed I cannot publish new workbooks. Even if I just download a wb from the server, give it a different name (by setting WorkbookItem(project_id='123', name='new_name')), and try to upload it. The error I get is: 400011: Bad Request There was a problem publishing the file {FILE_PATH} UPDATE: I've managed to fix the uploading issue. I also checked the Tableau logs on updating connections on workbooks and it's throwing a Java error saying: dbclass: sqlproxy is not editable (errorCode=60004) So my question stays the same: how would one point a workbook to use a different datasource?
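One low-level approach is to edit the downloaded workbook XML directly before republishing (the Tableau Document API automates this). A standard-library sketch follows; the element and attribute names (`connection`, `class='sqlproxy'`, `dbname`) mirror the `sqlproxy` log message quoted above but should be verified against your actual .twb file:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for a workbook's published-datasource connection;
# a real .twb file has much more structure around it.
twb_xml = """<workbook>
  <datasources>
    <datasource caption="Sales">
      <connection class="sqlproxy" dbname="old-datasource-id"
                  server="tableau.example.com" port="443"/>
    </datasource>
  </datasources>
</workbook>"""

root = ET.fromstring(twb_xml)
for conn in root.iter("connection"):
    # sqlproxy connections point at a published data source on the server
    if conn.get("class") == "sqlproxy":
        conn.set("dbname", "new-datasource-id")

updated = ET.tostring(root, encoding="unicode")
```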
open
2020-09-30T10:25:51Z
2023-08-07T18:08:52Z
https://github.com/tableau/server-client-python/issues/703
[ "needs investigation", "document-api" ]
DaniilBalabanov33N
3
microsoft/qlib
machine-learning
1,062
qrun TRA model error
## 🐛 Bug Description <!-- A clear and concise description of what the bug is. --> ## To Reproduce Steps to reproduce the behavior: 1. qrun examples/benchmarks/TRA/workflow_config_tra_Alpha158.yaml ## Expected Behavior <!-- A clear and concise description of what you expected to happen. --> ERROR - qlib.workflow - [utils.py:41] - An exception has been raised[AttributeError: 'list' object has no attribute 'start']. ## Screenshot <!-- A screenshot of the error message or anything shouldn't appear--> ## Environment **Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information and paste them here directly. - Qlib version: - Python version: - OS (`Windows`, `Linux`, `MacOS`): - Commit number (optional, please provide it if you are using the dev version): ## Additional Notes <!-- Add any other information about the problem here. -->
closed
2022-04-18T14:09:53Z
2022-09-15T05:06:18Z
https://github.com/microsoft/qlib/issues/1062
[ "bug" ]
stockcoder
5
deepinsight/insightface
pytorch
1,799
torch2onnx assert error
When I run the torch2onnx.py file (recognition/arcface_torch/), I get an error in the code below. assert os.path.exists(input_file) The cause seems to be that "input=" is included in "input_file". So I modified a single line of code; can it run without this modification? If not, I'd like to send you a pull request. Thank you.
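A one-line guard like the following would tolerate the `input=`-prefixed value before the existence check (a hypothetical helper sketching the fix, not the repository's actual code):

```python
def normalize_input_file(arg: str) -> str:
    """Strip an accidental 'input=' prefix from a CLI path argument
    so that `assert os.path.exists(input_file)` sees a real path."""
    if arg.startswith("input="):
        arg = arg.split("=", 1)[1]
    return arg
```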
open
2021-10-26T06:48:55Z
2021-10-29T05:47:05Z
https://github.com/deepinsight/insightface/issues/1799
[]
Yuri-Kim
1
deepspeedai/DeepSpeed
machine-learning
6,876
How to turn on allgather overlapping in ZeRO-1/2 ?
[ TARGET ] ZeRO-1/2 requires an allgather after backward computations since the optimizer states are distributed and hence only parts of the model parameters are updated on each rank. allgather assembles the complete updated model for each rank before the forward operation in the next iteration/step (which relies on updated parameters). In theory, if this process is done bucket by bucket, allgather communications can be largely overlapped by the forward computation in next iteration. This is what I want to realize to further boost my training speed (together with reduce overlapping). [ ISSUE ] Even though I found related configurations on the official [DS_CONFIG docs](https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training), I still failed to run a ZeRO-1/2 training with allgather overlapping. I came to this conclusion by carefully profiling the training and observe the allgather communication and forward computations. (btw the reduce overlap works well) My ZeRO configurations are set as follows: #------ ZeRO-1 DS_CONFIG ------# { "train_batch_size": 65536, "gradient_accumulation_steps": 1, "optimizer": { "type": "Adam", "params": { "lr": 0.00015 } }, "fp16": { "enabled": true, "loss_scale": 0 }, "zero_optimization": { "stage": 1, "overlap_comm": true, "contiguous_gradients": true, "reduce_scatter": true, "reduce_bucket_size": 1e6 } } #------ ZeRO-2 DS_CONFIG ------# { "train_batch_size": 65536, "gradient_accumulation_steps": 1, "optimizer": { "type": "Adam", "params": { "lr": 0.00015 } }, "fp16": { "enabled": true, "loss_scale": 0 }, "zero_optimization": { "stage": 2, "overlap_comm": true, "contiguous_gradients": true, "reduce_scatter": true, "reduce_bucket_size": 1e6, "allgather_bucket_size": 1e6, "allgather_partitions": true } } [ BUG ] After the failure I deep dive into the ZeRO-1/2 source code. 
In [stage_1_and_2.py #L1921](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/zero/stage_1_and_2.py#L1921) **all_gather_dp_groups** is called, with **allgather_bucket_size** set as one of its arguments. It seems that **all_gather_dp_groups** should take the updated parameters and transmit them in buckets (which can make communication overlap possible). However, in [utils.py L#965](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/utils.py#L965) it calls another function, **all_gather_into_tensor_dp_groups**, in which **dist.all_gather_into_tensor** is directly called without any bucketing. I don't think this can result in any allgather communication overlapping, since the parameter allgather seems to transmit everything at once instead of bucket by bucket. I wonder how to turn on allgather overlapping, and I doubt this is supported in ZeRO-1/2. Thanks for your time and contributions :)
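The overlap question boils down to whether parameters are grouped into buckets at all. A toy, framework-free sketch of the bucketing step (illustrative only, not DeepSpeed's implementation):

```python
def partition_params_into_buckets(param_numels, bucket_numel):
    """Group consecutive parameter tensors (given by element count)
    into buckets of roughly `bucket_numel` elements each. Each bucket
    could then be all-gathered while the forward pass consumes the
    previous one, which is what would make the communication
    overlappable."""
    buckets, current, current_numel = [], [], 0
    for idx, numel in enumerate(param_numels):
        current.append(idx)
        current_numel += numel
        if current_numel >= bucket_numel:
            buckets.append(current)
            current, current_numel = [], 0
    if current:
        buckets.append(current)
    return buckets
```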
closed
2024-12-16T12:07:56Z
2024-12-18T18:01:41Z
https://github.com/deepspeedai/DeepSpeed/issues/6876
[]
2012zzhao
5
ipython/ipython
data-science
14,581
Enabling %matplotlib qt in IPython shell slows it down significantly
To reproduce: Fire up a terminal, activate conda environment and enter ipython. This is a normal ipython terminal (not jupyter console or jupyter qtconsole). At first, the shell is responsive even after many imports and trying out different data manipulation operations. As soon as `%matplotlib qt`is enabled, the shell slows down to a crawl, both input and time to show output. It feels as if there is a large roundtrip that is being taken by the input. I understand that the matplotlib event loop needs to hook into ipython's loop or something along those lines, but it's very slow and the input delay is significant. Other backends/frontends: It looks like the issue does not exist with jupyter-qtconsole. Update 11/17/24: Performance degradation does not happen with c.InteractiveShellApp.matplotlib = "tkagg" Looks like something is causing the qt backend to degrade performance. OS: Windows 11 24H2 Build 26100.2314 Terminal: Windows Terminal 1.21.2911.0 (native, no WSL) ``` Relevant packages: python | 3.12.7 | hce54a09_0_cpython | conda-forge ipython | 8.29.0 | pyh7428d3b_0 | conda-forge ipython_genutils | 0.2.0 | pyhd8ed1ab_1 | conda-forge prompt-toolkit | 3.0.48 | pyha770c72_0 | conda-forge noarch pygments | 2.18.0 | pyhd8ed1ab_0 | conda-forge noarch matplotlib | 3.9.2 | py312h2e8e312_2 | conda-forge matplotlib-base | 3.9.2 | py312h90004f6_2 | conda-forge matplotlib-inline | 0.1.7 | pyhd8ed1ab_0 | conda-forge jupyter | 1.1.1 | pyhd8ed1ab_0 | conda-forge jupyter-lsp | 2.2.5 | pyhd8ed1ab_0 | conda-forge jupyter_client | 7.4.9 | pyhd8ed1ab_0 | conda-forge jupyter_console | 6.6.3 | pyhd8ed1ab_0 | conda-forge jupyter_core | 5.7.2 | pyh5737063_1 | conda-forge jupyter_events | 0.10.0 | pyhd8ed1ab_0 | conda-forge jupyter_server | 2.14.2 | pyhd8ed1ab_0 | conda-forge jupyter_server_terminals 0.5.3 | pyhd8ed1ab_0 | conda-forge jupyterlab | 4.3.0 | pyhd8ed1ab_0 | conda-forge jupyterlab_pygments | 0.3.0 | pyhd8ed1ab_1 | conda-forge jupyterlab_server | 2.27.3 | pyhd8ed1ab_0 | 
conda-forge jupyterlab_widgets | 3.0.13 | pyhd8ed1ab_0 | conda-forge ``` I am happy to run any tests on my end to narrow down the issue. I have a fairly beefy workstation and this is really the only situation where I encounter lag with my inputs/outputs in the terminal.
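The "large roundtrip" feeling described above is consistent with how GUI input hooks work: while the prompt waits for keystrokes, control is periodically handed to the Qt event loop, so a keystroke is only noticed at the next poll boundary. A framework-free sketch of that polling structure (simulated time, purely illustrative, not IPython's actual hook):

```python
import itertools

def wait_for_input(ready_at, poll_interval):
    """Simulate a GUI input hook: between keystroke checks, control is
    handed to the GUI event loop for `poll_interval` seconds, so input
    is only seen at the next poll boundary."""
    for ticks in itertools.count(1):
        elapsed = ticks * poll_interval
        # (a real hook would run QApplication.processEvents() here)
        if elapsed >= ready_at:
            return elapsed  # when the prompt finally sees the keystroke
```

With a coarse poll interval, every keystroke pays up to one full interval of extra latency, which would feel like sluggish input.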
closed
2024-11-14T02:10:34Z
2025-02-02T13:58:42Z
https://github.com/ipython/ipython/issues/14581
[]
AmerM137
13
huggingface/peft
pytorch
1,593
Getting Dora Model Is Very Slow
### System Info Package Version ------------------------ --------------- accelerate 0.29.0.dev0 aiohttp 3.9.3 aiosignal 1.3.1 annotated-types 0.6.0 appdirs 1.4.4 async-timeout 4.0.3 attrs 23.2.0 bitsandbytes 0.43.0 certifi 2024.2.2 charset-normalizer 3.3.2 click 8.1.7 datasets 2.18.0 deepspeed 0.14.0+ce78a632 dill 0.3.8 docker-pycreds 0.4.0 docstring_parser 0.16 einops 0.7.0 exceptiongroup 1.2.0 filelock 3.13.3 flash-attn 2.5.6 frozenlist 1.4.1 fsspec 2024.2.0 gitdb 4.0.11 GitPython 3.1.42 hjson 3.1.0 huggingface-hub 0.22.1 idna 3.6 iniconfig 2.0.0 Jinja2 3.1.3 markdown-it-py 3.0.0 MarkupSafe 2.1.5 mdurl 0.1.2 mpmath 1.3.0 multidict 6.0.5 multiprocess 0.70.16 networkx 3.1 ninja 1.11.1.1 numpy 1.24.4 nvidia-cublas-cu12 12.1.3.1 nvidia-cuda-cupti-cu12 12.1.105 nvidia-cuda-nvrtc-cu12 12.1.105 nvidia-cuda-runtime-cu12 12.1.105 nvidia-cudnn-cu12 8.9.2.26 nvidia-cufft-cu12 11.0.2.54 nvidia-curand-cu12 10.3.2.106 nvidia-cusolver-cu12 11.4.5.107 nvidia-cusparse-cu12 12.1.0.106 nvidia-nccl-cu12 2.19.3 nvidia-nvjitlink-cu12 12.4.99 nvidia-nvtx-cu12 12.1.105 packaging 24.0 pandas 2.0.3 peft 0.10.1.dev0 pillow 10.2.0 pip 24.0 pluggy 1.4.0 protobuf 3.20.1 psutil 5.9.8 py-cpuinfo 9.0.0 pyarrow 15.0.2 pyarrow-hotfix 0.6 pydantic 2.6.4 pydantic_core 2.16.3 Pygments 2.17.2 pynvml 11.5.0 pytest 8.1.1 python-dateutil 2.9.0.post0 pytz 2024.1 PyYAML 6.0.1 regex 2023.12.25 requests 2.31.0 rich 13.7.1 safetensors 0.4.2 scipy 1.10.1 sentencepiece 0.2.0 sentry-sdk 1.43.0 setproctitle 1.3.3 setuptools 69.2.0 shtab 1.7.1 six 1.16.0 smmap 5.0.1 sympy 1.12 text-generation 0.7.0 tokenizers 0.15.2 tomli 2.0.1 torch 2.2.1 torchaudio 2.2.1 torchvision 0.17.1 tqdm 4.66.2 transformers 4.40.0.dev0 triton 2.2.0 trl 0.8.1 typing_extensions 4.10.0 tyro 0.7.3 tzdata 2024.1 urllib3 2.2.1 wandb 0.16.5 wheel 0.43.0 xxhash 3.4.1 yarl 1.9.4 python 3.11 I have tested this on both a dual A100 and dual 3090 system. Using the same docker image. ### Who can help? 
@pacman100 @younesbelkada @sayakpaul When calling the `get_peft_model` method with a config that has `use_dora=True`, the time to get a model is VERY long (several minutes). Meanwhile, if I just use a regular Lora model, I get the model almost immediately. I also do not have this issue when using a QDora model, oddly enough. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder - [X] My own task or dataset (give details below) ### Reproduction ```python model_name = "mistralai/Mistral-7B-v0.1" model = AutoModelForCausalLM.from_pretrained(model_name, token=access_token,use_flash_attention_2=True) peft_config = LoraConfig( task_type=TaskType.CAUSAL_LM, inference_mode=False, r=args.lora_rank, lora_alpha=args.lora_alpha, lora_dropout=args.lora_dropout,target_modules=target_modules,modules_to_save=modules_to_save,use_dora=args.dora ) model = get_peft_model(model, peft_config) ``` I removed some stuff to keep it simple. If you want to see a more complete example of how I am running this, please see the code [here](https://github.com/mallorbc/Finetune_LLMs) ### Expected behavior I would expect Dora to load as quickly as Lora, or at least not several orders of magnitude slower.
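To narrow down where the minutes go (e.g. model loading vs. the `get_peft_model` call itself), a stdlib timing helper can bracket each stage. This is a generic sketch, not part of PEFT:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(label, log=timings):
    """Record the wall-clock duration of the enclosed block under `label`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        log[label] = time.perf_counter() - start

# Usage sketch (model/config setup elided):
#   with timed("get_peft_model"):
#       model = get_peft_model(model, peft_config)
#   print(timings)
```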
closed
2024-03-27T06:05:56Z
2024-05-16T21:18:34Z
https://github.com/huggingface/peft/issues/1593
[]
mallorbc
10
sqlalchemy/sqlalchemy
sqlalchemy
10,832
JSON type compile literal_binds results in error
### Describe the bug SQLalchemy query compiler fails to compile for JSON column. Did not find much documentation on the matter and I am still not sure if I need to turn the python dictionary into a JSON string before the insert statement or not. ### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected _No response_ ### SQLAlchemy Version in Use 1.4.49 ### DBAPI (i.e. the database driver) Not sure ### Database Vendor and Major Version SQLite ### Python Version 3.11.6 ### Operating system Linux ### To Reproduce ```python # Import SQLAlchemy modules from sqlalchemy import Table, Column, Integer, JSON, MetaData, create_engine # Create an engine and a metadata object engine = create_engine("sqlite:///") metadata = MetaData() # Define a table with a JSON column json_table = Table( "json_table", metadata, Column("id", Integer, primary_key=True), Column("data", JSON, nullable=False), ) # Create the table in the database metadata.create_all(engine) # Create a JSON object to insert json_data = { "name": "Bob", "age": 30, "hobbies": ["gaming", "cooking", "traveling"], } # Create an insert statement stmt = json_table.insert().values(data=json_data) # Compile the insert statement to a string with the literal_binds argument. ERROR HERE!!!!! 
stmt_string = stmt.compile(compile_kwargs={"literal_binds": True}) print(stmt_string) ``` ### Error ``` Exception has occurred: CompileError No literal value renderer is available for literal value "{'name': 'Bob', 'age': 30, 'hobbies': ['gaming', 'cooking', 'traveling']}" with datatype JSON File "/home/jlynch/Workspace/Cloud/telem_dft_bin/debug/json_debug.py", line 30, in <module> stmt_string = stmt.compile(compile_kwargs={"literal_binds": True}) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ sqlalchemy.exc.CompileError: No literal value renderer is available for literal value "{'name': 'Bob', 'age': 30, 'hobbies': ['gaming', 'cooking', 'traveling']}" with datatype JSON ``` ### Additional context This was a quick example I typed up in relation to an issue I was facing in a larger program I am making.
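One workaround sketch for debugging: a `TypeDecorator` with `process_literal_param` can take over serialization so `literal_binds` has something to render. Note this stores the value as TEXT via `String` rather than the native `JSON` type, so it is a debugging aid under that assumption, not a drop-in fix:

```python
import json

from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.types import TypeDecorator


class LiteralJSON(TypeDecorator):
    """String-backed JSON type whose values can be rendered as SQL literals."""

    impl = String
    cache_ok = True

    def process_bind_param(self, value, dialect):
        return json.dumps(value)

    def process_literal_param(self, value, dialect):
        # Serialize the dict; the underlying String type adds the quoting.
        return json.dumps(value)


metadata = MetaData()
json_table = Table(
    "json_table",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("data", LiteralJSON, nullable=False),
)

stmt = json_table.insert().values(data={"name": "Bob", "age": 30})
compiled = str(stmt.compile(compile_kwargs={"literal_binds": True}))
print(compiled)
```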
closed
2024-01-05T18:52:32Z
2024-01-06T00:37:58Z
https://github.com/sqlalchemy/sqlalchemy/issues/10832
[ "sql", "expected behavior" ]
jlynchMicron
2
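A sketch of a workaround for the `CompileError` reported above: serialize the dict to a JSON string yourself and bind it with a `String` column type, which does have a literal renderer (the table and column names mirror the reproduction; using `String` instead of `JSON` here is the workaround, not SQLAlchemy's recommended JSON handling).

```python
# Workaround sketch: json.dumps the dict first and bind it as a plain
# string, so literal_binds has a renderer for the value.
import json

from sqlalchemy import Table, Column, Integer, String, MetaData

metadata = MetaData()
json_table = Table(
    "json_table",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("data", String, nullable=False),  # String instead of JSON
)

json_data = {"name": "Bob", "age": 30, "hobbies": ["gaming", "cooking"]}

# Serialize before binding; literal_binds can now render the value.
stmt = json_table.insert().values(data=json.dumps(json_data))
stmt_string = str(stmt.compile(compile_kwargs={"literal_binds": True}))
```

The compiled statement then contains the JSON text as an ordinary SQL string literal; at execution time SQLite stores it as text, which is also how SQLite represents JSON values internally.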
microsoft/hummingbird
scikit-learn
516
What is pytorch container and how do I see the actual matrices?
In the example code, how would I inspect the PyTorch model container variable `model` and find out what actual computations it performs? I.e., how do I convert the PyTorch container type to an actual nn.Module in PyTorch?
closed
2021-05-27T03:11:51Z
2021-06-02T21:04:20Z
https://github.com/microsoft/hummingbird/issues/516
[]
marsupialtail
7
ScrapeGraphAI/Scrapegraph-ai
machine-learning
811
The timeout is not being respected
**Describe the bug** The timeout set on the configuration is not being respected. **Expected behavior** Timeout to be respected **Code** ``` config = { "llm": { "api_key": "my_key", "model": "google_genai/gemini-1.5-flash", "timeout": 120, "temperature": 0 }, "timeout": 120, } ``` ``` response = SmartScraperGraph( prompt="MY_PROMPT", source="MY_HTML", config=config, ).run() ``` Errors: Chunk Processing ``` Timeout error: Response took longer than 30 seconds 2024-11-19 10:25:18.405 | ERROR | invalid literal for int() with base 10: 'Response timeout exceeded during chunk processing' ``` or Merge ``` Timeout error: Response took longer than 30 seconds 2024-11-19 10:28:17.721 | ERROR | invalid literal for int() with base 10: 'Response timeout exceeded during merge' ```
closed
2024-11-19T13:42:52Z
2025-01-06T04:17:29Z
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/811
[]
matheus-rossi
10
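A library-agnostic stopgap for the report above: while the configured timeout is not honored, the caller can enforce one itself. This is a hedged sketch — `run` below stands in for a callable such as `SmartScraperGraph(...).run`, and a thread that has already started cannot truly be interrupted, only abandoned.

```python
# Enforce a wall-clock timeout at the call site when the library's own
# "timeout" config value is not respected.
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout


def run_with_timeout(run, timeout_s):
    """Run `run()` and raise FutureTimeout if it exceeds timeout_s seconds."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(run)
        try:
            return future.result(timeout=timeout_s)
        except FutureTimeout:
            future.cancel()  # best effort; a running task keeps running
            raise
```

Usage would be `run_with_timeout(graph.run, 120)`; the exception surfaces at the caller instead of the library's internal 30-second limit.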
mljar/mercury
jupyter
197
notebook with margins
closed
2023-01-05T12:12:02Z
2023-02-17T13:05:17Z
https://github.com/mljar/mercury/issues/197
[]
pplonski
1
nonebot/nonebot2
fastapi
2,718
Plugin: nonebot-plugin-tsugu-bangdream-bot
### PyPI project name nonebot-plugin-tsugu-bangdream-bot ### Plugin import package name nonebot_plugin_tsugu_bangdream_bot ### Tags [{"label":"tsugu","color":"#ffee88"}] ### Plugin configuration _No response_
closed
2024-05-16T08:55:15Z
2024-05-18T14:46:03Z
https://github.com/nonebot/nonebot2/issues/2718
[ "Plugin" ]
WindowsSov8forUs
9
akfamily/akshare
data-science
5,112
CentOS 7.9 installation fails
Alibaba Cloud server, Python 3.11, CentOS 7.9: installation fails. The error is caused by pulling depot_tools from Google, which is unreachable. ![image](https://github.com/user-attachments/assets/052abe22-7870-4ed3-9072-84dcdbf98d86) Could this dependency be removed? Installation works fine on Windows 11.
closed
2024-08-07T06:37:03Z
2024-09-03T13:44:39Z
https://github.com/akfamily/akshare/issues/5112
[ "bug" ]
random-heap
6
aidlearning/AidLearning-FrameWork
jupyter
116
86b2p2 starts the first time, but fails to open on subsequent launches and crashes; startup is also very slow.
86b2p2 starts the first time, but fails to open on subsequent launches and crashes; startup is also very slow.
closed
2020-07-22T11:05:41Z
2020-07-23T05:33:49Z
https://github.com/aidlearning/AidLearning-FrameWork/issues/116
[]
ottoskyer
1
flasgger/flasgger
rest-api
214
Authorization Header does not exist in flask request object
I'm expecting Authorization and a request id in the request headers, but they are missing. print(request.headers) ``` Host: 0.0.0.0:9000 Origin: http://localhost:9000 Accept-Encoding: gzip, deflate Dnt: 1 Access-Control-Request-Headers: authorization,request_id Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Connection: keep-alive Access-Control-Request-Method: GET Accept-Language: en-GB,en;q=0.5 User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0 ``` I have a header parameter in my docstring ``` parameters: - in: header $ref: '#/definitions/Authorization' ```
closed
2018-07-20T07:52:54Z
2019-05-09T21:30:06Z
https://github.com/flasgger/flasgger/issues/214
[]
ryanermita
2
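One reading of the log above (an interpretation, not a confirmed diagnosis): the presence of `Access-Control-Request-Headers` and `Access-Control-Request-Method` marks this as a CORS preflight `OPTIONS` request, and browsers never attach the real `Authorization` header to the preflight itself. A minimal Flask sketch (route name illustrative) that answers the preflight so the actual request, carrying the header, can follow:

```python
from flask import Flask, request

app = Flask(__name__)


@app.route("/api/resource", methods=["GET", "OPTIONS"])
def resource():
    if request.method == "OPTIONS":
        # Preflight: the Authorization header is absent by design.
        resp = app.make_response(("", 204))
        resp.headers["Access-Control-Allow-Origin"] = "http://localhost:9000"
        resp.headers["Access-Control-Allow-Headers"] = "Authorization, request_id"
        resp.headers["Access-Control-Allow-Methods"] = "GET"
        return resp
    # The real request carries the header.
    return {"authorization": request.headers.get("Authorization", "missing")}
```

In practice the flask-cors extension does the same thing declaratively; the point is that the snippet logged in the issue shows the preflight, not the request that would carry `Authorization`.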
replicate/cog
tensorflow
1,386
How to gracefully return back with no img output
In below code snippet, how can I gracefully mark the prediction as completed with some custom output when "No human detected in the image!" ``` def predict( self, image: Path = Input(description="Grayscale input image"), scale: float = Input( description="Factor to scale image by", ge=0, le=10, default=1.5 ), ) -> List[Path]: """Run a single prediction on the model""" # processed_input = preprocess(image) # output = self.model(processed_image, scale) # Create the full input path and read the file img = Image.open(image) image_extension = image.suffix print("Image extension:", image_extension) image_np = np.array(img) if image_np is None: return height, width, _ = image_np.shape # Assign the image bounds to c1, c2, d1, d2 c1, c2, d1, d2 = 0, 0, width, height # Run the image through the model results = model(img) output = [] # Find the first human detected in the image human = next((x for x in results.xyxy[0] if int(x[5]) == 0), None) if human is None: print("No human detected in the image!") return ``` I am getting below error ``` panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x14afe4c] goroutine 1 [running]: github.com/replicate/cog/pkg/cli.handleMultipleFileOutput(0x1528ae0?, 0xc0002c7f20?) /home/runner/work/cog/cog/pkg/cli/predict.go:256 +0x2c github.com/replicate/cog/pkg/cli.predictIndividualInputs({{{0x0, 0x0, 0x0}, {0xc000098770, 0x1, 0x1}, {0x0, 0x0}, {0xc000299c10, 0xd}, ...}, ...}, ...) 
/home/runner/work/cog/cog/pkg/cli/predict.go:180 +0x1dc github.com/replicate/cog/pkg/cli.cmdPredict(0xc00029cf00?, {0xc00025b540?, 0x2?, 0x2?}) /home/runner/work/cog/cog/pkg/cli/predict.go:153 +0x9be github.com/spf13/cobra.(*Command).execute(0xc00029cf00, {0xc00025b520, 0x2, 0x2}) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x862 github.com/spf13/cobra.(*Command).ExecuteC(0xc00029c000) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3bd github.com/spf13/cobra.(*Command).Execute(0x199d420?) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992 +0x19 main.main() /home/runner/work/cog/cog/cmd/cog/cog.go:14 +0x6f ```
open
2023-11-17T19:19:35Z
2023-11-22T09:27:21Z
https://github.com/replicate/cog/issues/1386
[]
jerry275
2
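The Go panic above is consistent with the bare `return` (i.e. returning `None`) on the no-human path, since the signature promises `List[Path]`. A hedged sketch of the graceful exit — return an empty list that matches the declared type; `detections` is a simplified stand-in for the YOLO results, not cog's or the model's actual API:

```python
from pathlib import Path
from typing import List, Optional


def predict_no_human(detections: List[dict]) -> List[Path]:
    """Stand-in for the predict body: class id 0 means "person"."""
    human: Optional[dict] = next(
        (d for d in detections if d.get("cls") == 0), None
    )
    if human is None:
        print("No human detected in the image!")
        return []  # empty list instead of None, so cog still gets List[Path]
    return [Path("output.png")]  # placeholder for the real output paths
```

Raising a descriptive exception instead of returning `[]` is the other option, if "no human" should mark the prediction as failed rather than completed.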
graphql-python/graphene-django
django
1,428
[Critical] Breaking change in patch release
In version 3.1.2, the `JSONField` in `graphene_django.compat` was removed. While removing this is a valid change, it constitutes a breaking change, and therefore a bump of the project's major version. Due to semantic versioning, downstream applications will automatically update to `3.1.2` and break if they imported the field from there. Please make a critical bugfix release reverting the change and release it to PyPI as version 3.1.3, if possible, today ☺!
closed
2023-07-08T11:56:36Z
2023-07-18T12:11:32Z
https://github.com/graphql-python/graphene-django/issues/1428
[ "🐛bug" ]
Natureshadow
0
graphistry/pygraphistry
pandas
50
FAQ / guide on exporting
cc @thibaudh @padentomasello @briantrice
closed
2016-01-11T21:50:51Z
2020-06-10T06:46:52Z
https://github.com/graphistry/pygraphistry/issues/50
[ "enhancement" ]
lmeyerov
0
ageitgey/face_recognition
python
692
Faster face recognition in base with more than 1 million encodings.
* face_recognition version: latest * Python version: 3 * Operating System: Ubuntu Hello! I want to store pre-calculated encodings of more than 1 million faces in a database (MySQL, Postgres...?) with 128 columns. I then have one query face, and I want to find the top 5 most similar faces to it among them. Issue https://github.com/ageitgey/face_recognition/issues/238 already has tips on this subject; @khaledabbad already has a solution with Apache Solr. But I'm not familiar with Apache and I don't know how to build a search engine. What are the possible solutions for matching the encoding of one face against a base of more than 1 million encodings? How can it be done as fast as possible? Solr or something else? Examples? Thanks in advance!
open
2018-11-29T21:08:34Z
2018-11-30T07:26:46Z
https://github.com/ageitgey/face_recognition/issues/692
[]
arpsyapathy
2
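For the scale asked about above, a first step before reaching for Solr or an ANN engine is a vectorized brute-force search: on roughly one million 128-d encodings, a single NumPy distance pass is typically well under a second. A sketch (array names are illustrative; the encodings would be loaded from the database into one `(N, 128)` array):

```python
import numpy as np


def top_k_faces(encodings: np.ndarray, query: np.ndarray, k: int = 5):
    """Return indices of the k nearest encodings to `query` (Euclidean)."""
    dists = np.linalg.norm(encodings - query, axis=1)
    nearest = np.argpartition(dists, k)[:k]      # k smallest, unordered
    return nearest[np.argsort(dists[nearest])]   # sorted by distance
```

If that ever becomes too slow, approximate nearest-neighbor libraries (Faiss, Annoy) are the usual next step; they index the same 128-d vectors.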
jina-ai/clip-as-service
pytorch
278
TypeError: cannot unpack non-iterable NoneType object
**Prerequisites** > Please fill in by replacing `[ ]` with `[x]`. * [x] Are you running the latest `bert-as-service`? * [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`? * [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)? * [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)? **System information** > Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh). - OS Platform and Distribution (e.g., Linux Ubuntu 16.04):windows10 - TensorFlow installed from (source or binary): - TensorFlow version:1.13.0 - Python version:3.7 - `bert-as-service` version: latest - GPU model and memory: 2080 8G - CPU model and memory: AMD 40G --- ### Description > Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue. 
I'm using this command to start the server: ```bash bert-serving-start -model_dir D:\chinese_L-12_H-768_A-12\ -num_worker=4 ``` Then this issue shows up: ![image](https://user-images.githubusercontent.com/20437608/54470396-db104100-47e1-11e9-81f8-60284f3eaba1.png) Traceback (most recent call last): File "c:\users\kodgv\anaconda3\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "c:\users\kodgv\anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\KODGV\Anaconda3\Scripts\bert-serving-start.exe\__main__.py", line 9, in <module> File "c:\users\kodgv\anaconda3\lib\site-packages\bert_serving\server\cli\__init__.py", line 5, in main server = BertServer(args) File "c:\users\kodgv\anaconda3\lib\site-packages\bert_serving\server\__init__.py", line 70, in __init__ self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,)) TypeError: cannot unpack non-iterable NoneType object i'm using the full path of the files
closed
2019-03-16T03:59:21Z
2022-09-26T17:27:56Z
https://github.com/jina-ai/clip-as-service/issues/278
[]
KODGV
8
pennersr/django-allauth
django
3,994
Cannot add security key (webauthn)
Hi, I really appreciate your work on adding fido2 support! I tried to test it locally, but I could not add a new security key **Here is what I did:** 1. I installed the package using `pip install git+https://github.com/pennersr/django-allauth.git@main#egg=django-allauth[mfa]` 2. Add `MFA_SUPPORTED_TYPES = ["totp", "webauthn", "recovery_codes"]` and `MFA_PASSKEY_LOGIN_ENABLED = True` to settings 3. Add `MFA_WEBAUTHN_ALLOW_INSECURE_ORIGIN = True` to local settings 4. Sign in with a local account using email and password 5. Navigate to `http://localhost:8000/accounts/2fa/webauthn/add/` 6. Provide a name for my key and checking the checkmark (tried both) 7. Click on the `Add` button **What actually happend** No post request was send to the backend. Instead the js console is throwing the following error. ```javascript [Error] Error: Missing key: displayName — webauthn-json.js:88 dispatchError (webauthn.js:8) (anonymous function) (webauthn.js:33) ``` This is the `credentials` json that gets passed to `webauthnJSON.create` ```json { "publicKey": { "rp": { "name": "example.com", "id": "localhost" }, "user": { "name": "user@example.com", "id": "OQ" }, "challenge": "redacted", "pubKeyCredParams": [ { "type": "public-key", "alg": -7 }, { "type": "public-key", "alg": -8 }, { "type": "public-key", "alg": -35 }, { "type": "public-key", "alg": -36 }, { "type": "public-key", "alg": -37 }, { "type": "public-key", "alg": -257 }, { "type": "public-key", "alg": -47 } ], "excludeCredentials": [], "authenticatorSelection": { "residentKey": "discouraged", "userVerification": "discouraged", "requireResidentKey": false }, "extensions": { "credProps": true } } } ``` **What I expected to happen** That I get prompted to add a new security key by my device (macOS) Just a side note for the docs: I had to search through the docs to find I had to add step 3. Also I usually develop using `127.0.0.1` instead of `localhost`. 
But only on `localhost` I was prompted to provide my security key when I click on `Sign in with a passkey` (the flow seems to work, but it fails, since I don't have a passkey setup due to the issue above). Would it help if I add a PR to provide a dedicated page for working with passkeys?
closed
2024-07-26T18:46:36Z
2024-07-27T06:11:19Z
https://github.com/pennersr/django-allauth/issues/3994
[ "Help wanted" ]
inc-ali
6
albumentations-team/albumentations
deep-learning
1,563
[Feature Request] Return Mixing parameter from MixUp augmentation
People use the mixing parameter to weight the loss for the image. [Discussion](https://www.kaggle.com/competitions/hms-harmful-brain-activity-classification/discussion/481764#2682070)
closed
2024-03-05T17:18:42Z
2024-03-12T00:25:36Z
https://github.com/albumentations-team/albumentations/issues/1563
[ "feature request" ]
ternaus
1
falconry/falcon
api
1,975
Warn upon importing Falcon using deprecated Python?
We sometimes declare certain Python language, or specific implementation, versions as deprecated or, later, completely unsupported. Maybe we should emit a deprecation (or EOL) warning in case `falcon` is imported in such an interpreter?
closed
2021-10-16T11:11:57Z
2024-08-17T17:53:27Z
https://github.com/falconry/falcon/issues/1975
[ "needs-decision", "proposal", "maintenance" ]
vytas7
6
jupyter-incubator/sparkmagic
jupyter
352
Problems with yarn-cluster mode
## Setup - centos 7 - cdh 5.5.X - livy 3.0 ## Issue When running sparkmagic with livy configured to `yarn-client` or `yarn-cluster`: ``` # What spark master Livy sessions should use. livy.spark.master = yarn-client ``` Things generally work. However, as part of my interactive session I'd like to use archives and munge environment variables like the following: ``` {"conf": {"spark.yarn.appMasterEnv.PYSPARK_PYTHON": "./ANACONDA/env/bin/python", "spark.yarn.executorEnv.PYSPARK_PYTHON": "./ANACONDA/env/bin/python"}, "executorMemory": "2048M", "executorCores": 1, "archives": ["hdfs:///user/centos/environment_27.zip#ANACONDA"]} ``` When using something like the above sparkmagic always timesout. On the livy side I see things like the following: > 17/05/03 15:35:33 DEBUG AbstractConnection: fillInterested HttpConnection@496bb752[FILLING,SelectChannelEndPoint@32206cb9{/10.0.0.221:50869<->8998,Open,in,OSHUT,-,-,0/30000,HttpConnection}{io=0,kio=0,kro=1}][p=HttpParser{s=CLOSED,0 of 0}, g=HttpGenerator{s=START},c=HttpChannelOverHttp@5c8fd76{r=1,c=false,a=IDLE,uri=}] 17/05/03 15:35:33 DEBUG AbstractConnection: FILLING-->FILLING_FILL_INTERESTED HttpConnection@496bb752[FILLING_FILL_INTERESTED,SelectChannelEndPoint@32206cb9{/10.0.0.221:50869<->8998,Open,in,OSHUT,-,-,0/30000,HttpConnection}{io=0,kio=0,kro =1}][p=HttpParser{s=CLOSED,0 of 0},g=HttpGenerator{s=START},c=HttpChannelOverHttp@5c8fd76{r=1,c=false,a=IDLE,uri=}] 17/05/03 15:35:33 DEBUG AbstractConnection: FILLING_FILL_INTERESTED-->FILL_INTERESTED HttpConnection@496bb752[FILL_INTERESTED,SelectChannelEndPoint@32206cb9{/10.0.0.221:50869<->8998,Open,in,OSHUT,-,-,1/30000,HttpConnection}{io=0,kio=0,kro =1}][p=HttpParser{s=CLOSED,0 of 0},g=HttpGenerator{s=START},c=HttpChannelOverHttp@5c8fd76{r=1,c=false,a=IDLE,uri=}] 17/05/03 15:35:33 DEBUG SelectChannelEndPoint: Local interests updating 0 -> 1 for SelectChannelEndPoint@32206cb9{/10.0.0.221:50869<->8998,Open,in,OSHUT,R,-,0/30000,HttpConnection}{io=1,kio=0,kro=1} 17/05/03 15:35:33 
DEBUG SelectorManager: Queued change org.eclipse.jetty.io.SelectChannelEndPoint$1@789f4f62 17/05/03 15:35:33 DEBUG SelectorManager: Selector loop woken up from select, 0/1 selected 17/05/03 15:35:33 DEBUG SelectorManager: Running change org.eclipse.jetty.io.SelectChannelEndPoint$1@789f4f62 17/05/03 15:35:33 DEBUG SelectChannelEndPoint: Key interests updated 0 -> 1 on SelectChannelEndPoint@32206cb9{/10.0.0.221:50869<->8998,Open,in,OSHUT,R,-,0/30000,HttpConnection}{io=1,kio=1,kro=1} 17/05/03 15:35:33 DEBUG SelectorManager: Selector loop waiting on select And after a few seconds `50869` is increased to `50870` until the timeout. I've increased the `livy_session_startup_timeout_seconds` but this is probably not what I want. cc @martindurant
closed
2017-05-03T15:40:05Z
2017-05-03T18:10:52Z
https://github.com/jupyter-incubator/sparkmagic/issues/352
[]
quasiben
1
gradio-app/gradio
data-visualization
9,878
Unable to serve image with allowed_paths
### Describe the bug Hello, Since, gradio v5 release, i'm unable to display any images from the gradio client. I tryed the same code with gradio v4 and there no problem. ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction ```python import gradio as gr with gr.Blocks() as demo: gr.HTML("<img src='/file=image.png' alt='image One'>") demo.launch(allowed_paths=["image.png"]) ``` ### Screenshot ![image](https://github.com/user-attachments/assets/83380021-c7f7-4448-98d9-efafbc3f38d9) ### Logs _No response_ ### System Info ```shell pip show gradio Name: gradio Version: 5.4.0 Summary: Python library for easily interacting with trained machine learning models Home-page: Author: Author-email: Abubakar Abid <gradio-team@huggingface.co>, Ali Abid <gradio-team@huggingface.co>, Ali Abdalla <gradio-team@huggingface.co>, Dawood Khan <gradio-team@huggingface.co>, Ahsen Khaliq <gradio-team@huggingface.co>, Pete Allen <gradio-team@huggingface.co>, Ömer Faruk Özdemir <gradio-team@huggingface.co>, Freddy A Boulton <gradio-team@huggingface.co>, Hannah Blair <gradio-team@huggingface.co> License: Location: /home/gse1581/playground-rag/.venv/lib64/python3.11/site-packages Requires: aiofiles, anyio, fastapi, ffmpy, gradio-client, httpx, huggingface-hub, jinja2, markupsafe, numpy, orjson, packaging, pandas, pillow, pydantic, pydub, python-multipart, pyyaml, ruff, safehttpx, semantic-version, starlette, tomlkit, typer, typing-extensions, uvicorn Required-by: ``` ### Severity I can work around it
closed
2024-10-31T16:48:06Z
2024-10-31T16:50:43Z
https://github.com/gradio-app/gradio/issues/9878
[ "bug" ]
bastian110
1
predict-idlab/plotly-resampler
data-visualization
195
FigureResampler not working for Boxplot and Histogram
Hello, In the last example here https://github.com/predict-idlab/plotly-resampler/blob/main/examples/basic_example.ipynb, go.Histogram and go.Boxplot do not seem to be affected by the limit on displayed points in fig = FigureResampler(base_fig, default_n_shown_samples=1000, verbose=False). Why is that?
closed
2023-04-14T23:03:30Z
2023-04-17T16:48:08Z
https://github.com/predict-idlab/plotly-resampler/issues/195
[ "documentation" ]
bwassim
2
FlareSolverr/FlareSolverr
api
846
[yggtorrent] : Error solving the challenge
### Have you checked our README? - [X] I have checked the README ### Have you followed our Troubleshooting? - [X] I have followed your Troubleshooting ### Is there already an issue for your problem? - [X] I have checked older issues, open and closed ### Have you checked the discussions? - [X] I have read the Discussions ### Environment ```markdown - FlareSolverr version: 3.2.2 - Last working FlareSolverr version: 3.2.2 - Operating system: ubuntu - Are you using Docker: [yes] - FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 - Are you using a VPN: [no] - Are you using a Proxy: [no] - Are you using Captcha Solver: [no] - If using captcha solver, which one: - URL to test this issue: Prowlarr Test: https://www3.yggtorrent.wtf/engine/search?category=2145&do=search&order=desc&sort=publish_date ``` ### Description I use the Test button in Prowlarr ### Logged Error Messages ```text 2023-08-02 09:14:39 INFO Incoming request => POST /v1 body: {'maxTimeout': 60000, 'cmd': 'request.get', 'url': 'https://www3.yggtorrent.wtf/engine/search?category=2145&do=search&order=desc&sort=publish_date', 'proxy': {}} 2023-08-02 09:14:44 INFO Challenge detected. Title found: Just a moment... 2023-08-02 09:15:42 ERROR Error: Error solving the challenge. Timeout after 60.0 seconds. 2023-08-02 09:15:42 INFO Response in 63.719 s 2023-08-02 09:15:42 INFO 172.18.0.5 POST http://flaresolverr:8191/v1 500 Internal Server Error ``` ``` ### Screenshots _No response_
closed
2023-08-02T09:18:45Z
2023-08-02T14:58:44Z
https://github.com/FlareSolverr/FlareSolverr/issues/846
[ "duplicate" ]
almottier
6
tensorly/tensorly
numpy
422
Testing for complex tensors
See #298. Several of the tests for methods should be extended to the complex cases. Perhaps a first step to accomplishing this is to extend the random tensor methods to be able to construct random tensors with complex elements.
open
2022-07-12T02:25:48Z
2022-07-20T03:07:38Z
https://github.com/tensorly/tensorly/issues/422
[ "enhancement" ]
aarmey
1
feature-engine/feature_engine
scikit-learn
763
transformer to apply mean normalization
In mean normalization, we subtract the mean from each value and then divide by the value range. This centres the variables at 0 and scales their values between -1 and 1. It is an alternative to standardization. sklearn has no transformer to apply mean normalization, but we can combine the standard scaler and the robust scaler to do so. The thing is that both transformers need to be fit over the raw data, so we can't use them within a pipeline, because the pipeline applies the transformation before the next transformer learns the required parameters. My idea is to wrap both transformers within a class, so that fit is applied without transform, and then with those parameters, we can transform the data. See for example here: https://github.com/solegalli/Python-Feature-Engineering-Cookbook-Second-Edition/blob/main/ch07-scaling/Recipe-4-mean-normalization.ipynb
closed
2024-05-15T12:01:48Z
2024-11-02T18:41:56Z
https://github.com/feature-engine/feature_engine/issues/763
[]
solegalli
6
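The recipe described above — subtract the mean, divide by the value range — can be sketched as one small fit/transform class, so the statistics are learned from the raw data once and the object composes inside a pipeline. This is a sketch, not feature-engine's eventual API:

```python
import numpy as np


class MeanNormalizer:
    """(x - mean) / (max - min): centres at 0, scales roughly to [-1, 1]."""

    def fit(self, X, y=None):
        X = np.asarray(X, dtype=float)
        self.mean_ = X.mean(axis=0)
        self.range_ = X.max(axis=0) - X.min(axis=0)
        return self

    def transform(self, X):
        return (np.asarray(X, dtype=float) - self.mean_) / self.range_
```

Because `fit` only records `mean_` and `range_` and `transform` applies them, the pipeline ordering problem described above (each transformer refitting on already-transformed data) does not arise.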
graphdeco-inria/gaussian-splatting
computer-vision
879
Question about obtaining exponential falloff multiplied to alpha
``` // Resample using conic matrix (cf. "Surface // Splatting" by Zwicker et al., 2001) float2 xy = collected_xy[j]; float2 d = { xy.x - pixf.x, xy.y - pixf.y }; float4 con_o = collected_conic_opacity[j]; float power = -0.5f * (con_o.x * d.x * d.x + con_o.z * d.y * d.y) - con_o.y * d.x * d.y; if (power > 0.0f) continue; ``` ![image](https://github.com/graphdeco-inria/gaussian-splatting/assets/22341329/0c017878-1014-4a73-965c-007df15a5d68) The code to obtain exponential falloff is somewhat confusing compared to the paper. It seems that d.x and d.y corresponds to (x-μx) and (y-μy). Then what do con_o.x and con_o.z, and following con_o.y * d.x * d.y stand for?
closed
2024-07-09T03:38:18Z
2024-07-09T03:57:59Z
https://github.com/graphdeco-inria/gaussian-splatting/issues/879
[]
cmh1027
1
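One reading of the snippet above (my interpretation of the code, not an authoritative statement about the codebase): `con_o.x`, `con_o.y`, `con_o.z` hold the three distinct entries `a`, `b`, `c` of the symmetric 2×2 inverse covariance ("conic") matrix, so the code evaluates the standard Gaussian exponent:

```latex
\Sigma^{-1} = \begin{pmatrix} a & b \\ b & c \end{pmatrix},
\qquad d = \begin{pmatrix} x - \mu_x \\ y - \mu_y \end{pmatrix},
\qquad
\text{power} = -\tfrac{1}{2}\, d^{\top} \Sigma^{-1} d
             = -\tfrac{1}{2}\left(a\, d_x^{2} + c\, d_y^{2}\right) - b\, d_x d_y
```

Expanding the quadratic form gives `a dx² + 2b dx dy + c dy²`; multiplying by −1/2 yields exactly the expression in the kernel, with the cross term's factor of 2 absorbed so `con_o.y * d.x * d.y` appears without the 1/2.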
glumpy/glumpy
numpy
208
small typo on easyway.rst
very minor typo on line 93 in one of the [tutorials](https://github.com/glumpy/glumpy/blob/master/doc/tutorial/easyway.rst) [fixed fork](https://github.com/kemfic/glumpy/blob/master/doc/tutorial/easyway.rst) "Exercices" should be "Exercises"
closed
2019-03-09T18:34:57Z
2019-03-09T18:36:46Z
https://github.com/glumpy/glumpy/issues/208
[]
kemfic
1
CorentinJ/Real-Time-Voice-Cloning
deep-learning
939
Can not find the three Pretrained model
![image](https://user-images.githubusercontent.com/41334152/144902001-52e45a9e-4c78-4c1d-90c2-7a8037df96df.png) I intended to download these models, but find nothing. encoder\saved_models\pretrained.pt synthesizer\saved_models\pretrained\pretrained.pt vocoder\saved_models\pretrained\pretrained.pt Any guys know why? Or can somebody just share them with me? Thank you.
closed
2021-12-06T18:32:20Z
2021-12-28T12:34:19Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/939
[]
ZSTUMathSciLab
1
httpie/cli
api
1,589
v3.2.3 GitHub release missing binary and package
## Checklist - [x] I've searched for similar issues. --- The 3.2.3 release https://github.com/httpie/cli/releases/tag/3.2.3 is missing the binary and .deb package
open
2024-07-18T23:28:21Z
2024-07-18T23:28:21Z
https://github.com/httpie/cli/issues/1589
[ "bug", "new" ]
Crissante
0
scikit-learn/scikit-learn
python
30,278
Unifying references style in docstrings in _pca.py
### Describe the issue linked to the documentation A very minor suggested change to write references section of function docstrings in identical style in [_pca.py](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/decomposition/_pca.py) code file. ### Suggest a potential alternative/fix I aimed to write both references in a single identical style to improve documentation style. I followed the [issue creation link](https://github.com/scikit-learn/scikit-learn/issues/new/choose), where I chose [‘Documentation improvement’ option](https://github.com/scikit-learn/scikit-learn/issues/new?assignees=&labels=Documentation%2CNeeds+Triage&projects=&template=doc_improvement.yml), which provided a template to submit an appropriate issue. Several things that have changed are: • Followed the [Python PEP8 style guide](https://peps.python.org/pep-0008/#maximum-line-length), as the ‘[Coding guidelines](https://scikit-learn.org/dev/developers/develop.html#coding-guidelines)’ from the project specified. • Followed `author (year). “title”. journal name and page. <link>` format. • Changed link addresses from https to doi whenever possible. • Changed two different link to identical literature between the two references to identical link. ORIGINAL (line 51 to 54) This implements the method of `Thomas P. Minka: Automatic Choice of Dimensionality for PCA. NIPS 2000: 598-604 <https://proceedings.neurips.cc/paper/2000/file/7503cfacd12053d309b6bed5c89de212-Paper.pdf>`_ POST-EDIT This implements the method from: `Minka, T. P.. (2000). “Automatic choice of dimensionality for PCA”. NIPS 2000, 598-604. <https://proceedings.neurips.cc/paper/2000/file/7503cfacd12053d309b6bed5c89de212-Paper.pdf>`_ ORIGINAL (line 324 to 347) For n_components == 'mle', this class uses the method from: `Minka, T. P.. "Automatic choice of dimensionality for PCA". In NIPS, pp. 598-604 <https://tminka.github.io/papers/pca/minka-pca.pdf>`_ Implements the probabilistic PCA model from: `Tipping, M. 
E., and Bishop, C. M. (1999). "Probabilistic principal component analysis". Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3), 611-622. <http://www.miketipping.com/papers/met-mppca.pdf>`_ via the score and score_samples methods. For svd_solver == 'arpack', refer to `scipy.sparse.linalg.svds`. For svd_solver == 'randomized', see: :doi:`Halko, N., Martinsson, P. G., and Tropp, J. A. (2011). "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions". SIAM review, 53(2), 217-288. <10.1137/090771806>` and also :doi:`Martinsson, P. G., Rokhlin, V., and Tygert, M. (2011). "A randomized algorithm for the decomposition of matrices". Applied and Computational Harmonic Analysis, 30(1), 47-68. <10.1016/j.acha.2010.02.003>` POST-EDIT For n_components == 'mle', this class uses the method from: `Minka, T. P.. (2000). “Automatic choice of dimensionality for PCA”. NIPS 2000, 598-604. <https://proceedings.neurips.cc/paper/2000/file/7503cfacd12053d309b6bed5c89de212-Paper.pdf>`_ Implements the probabilistic PCA model from: :doi:`Tipping, M. E., and Bishop, C. M. (1999). "Probabilistic principal component analysis". Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3), 611-622. <10.1162/089976699300016728>` via the score and score_samples methods. For svd_solver == 'arpack', refer to `scipy.sparse.linalg.svds`. For svd_solver == 'randomized', see: :doi:`Halko, N., Martinsson, P. G., and Tropp, J. A. (2011). "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions". SIAM review, 53(2), 217-288. <10.1137/090771806>` and also :doi:`Martinsson, P. G., Rokhlin, V., and Tygert, M. (2011). "A randomized algorithm for the decomposition of matrices". Applied and Computational Harmonic Analysis, 30(1), 47-68. <10.1016/j.acha.2010.02.003>`
closed
2024-11-15T12:14:37Z
2024-11-18T08:52:31Z
https://github.com/scikit-learn/scikit-learn/issues/30278
[ "Documentation" ]
hanjunkim11
3
miguelgrinberg/microblog
flask
321
microblog_fork
closed
2022-07-12T04:51:14Z
2022-07-12T06:18:23Z
https://github.com/miguelgrinberg/microblog/issues/321
[]
ruum1
0
dfm/corner.py
data-visualization
262
labelpad pushes the labels out of the figure entirely
I'm making a cornerplot where some of my variables are very small or big (order of ~0.00001 or 10000), so the tick labels are quite long. As a result, the default settings have my tick labels overlapping my axis labels, so I'm trying to use the "labelpad" keyword of cornerplot to move the axis labels further away. However, the figure isn't resizing to accommodate these shifted labels, so effectively all I'm accomplishing is pushing my labels out of the plot entirely! Is there a way to fix this?
closed
2024-05-24T15:24:22Z
2024-05-28T09:40:59Z
https://github.com/dfm/corner.py/issues/262
[]
melissa-hobson
2
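Two hedged workarounds for the clipping described above: after corner builds the figure, either reserve more margin inside the figure, or let `savefig` grow the canvas around everything drawn. The sketch uses a plain matplotlib figure to show both knobs (corner itself is not imported here; the same calls apply to the `Figure` that `corner.corner` returns):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlabel("very long tick / axis label", labelpad=25)
ax.set_ylabel("another label", labelpad=25)

# Option 1: reserve more room inside the figure for the padded labels
fig.subplots_adjust(bottom=0.25, left=0.25)

# Option 2: crop/grow the canvas around all drawn artists on save
fig.savefig("corner_demo.png", bbox_inches="tight")
```

`bbox_inches="tight"` only helps at save time; for interactive viewing, `subplots_adjust` (or `constrained_layout`) is the one that actually resizes the axes to fit the pushed-out labels.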
microsoft/unilm
nlp
958
[TrOCR] TrOCR processor issue with small version
When I try to use the small version from trocr cannot convert to a fast tokenizer as the code below Processor : `processor = TrOCRProcessor.from_pretrained("microsoft/trocr-small-handwritten")` `model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-small-handwritten") ` the issue as : ``` processor = TrOCRProcessor.from_pretrained("microsoft/trocr-small-handwritten") File "/home/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 186, in from_pretrained args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs) File "/home/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 230, in _get_arguments_from_pretrained args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs)) File "/home/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 591, in from_pretrained return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1805, in from_pretrained return cls._from_pretrained( File "/home/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1950, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/.local/lib/python3.8/site-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py", line 155, in __init__ super().__init__( File "/home/.local/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 113, in __init__ fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) File "/home/.local/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 1111, in convert_slow_tokenizer return converter_class(transformer_tokenizer).converted() File "/home/.local/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 426, in __init__ from .utils import sentencepiece_model_pb2 as model_pb2 File 
"/home/.local/lib/python3.8/site-packages/transformers/utils/sentencepiece_model_pb2.py", line 34, in <module> create_key=_descriptor._internal_create_key, AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key ``` `sentencepiece ` was installed it is the same case as in` trocr-large-stage1 ` when I replace the processor with base one it works but gives such bad results on the I am dataset where CER= 57.3
open
2022-12-21T13:38:05Z
2023-01-05T05:29:15Z
https://github.com/microsoft/unilm/issues/958
[]
Mohammed20201991
1