| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
microsoft/JARVIS | deep-learning | 140 | Can an Nvidia 4070 Ti run this model? | Does only the VRAM matter? Can a 4070 Ti run JARVIS? Has anyone tried it? | open | 2023-04-13T04:50:08Z | 2023-04-18T06:41:50Z | https://github.com/microsoft/JARVIS/issues/140 | [] | Ryan2009 | 1 |
jmcnamara/XlsxWriter | pandas | 504 | worksheet.set_column() keyword argument names don't match documentation | Hi!
firstcol, lastcol in worksheet.py
first_col, last_col in worksheet.html | closed | 2018-04-09T10:42:26Z | 2018-04-11T22:35:15Z | https://github.com/jmcnamara/XlsxWriter/issues/504 | [
"bug",
"documentation",
"short term"
] | QJKX | 3 |
jina-ai/serve | machine-learning | 6,022 | Other threads are currently calling into gRPC, skipping fork() handlers | **Describe your proposal/problem**
After receiving a request in an Executor endpoint, I start a new process, run the model-inference task asynchronously in that process, and then wrap the inference result into a Doc that meets the business requirements and send it on to other nodes for further processing. In practice, however, calling the Client's post method in the child process blocks. Similar logic is shown in the following code:
```python
import logging
import multiprocessing as mp

from docarray import BaseDoc
from jina import Client, Executor, Flow, requests

# imports and logger added so the snippet is self-contained
logger = logging.getLogger(__name__)

class TDoc(BaseDoc):
taskId: str
dataPath: str
def func_call():
count = 0
while True:
bls_client = Client(host="103.234.22.70",
port=11768)
frame_meta = TDoc(
taskId="139",
dataPath="xxxxxxxxxx----" + str(count),
)
# bls_client.post(on='/submit',
# inputs=[frame_meta])
logger.info("start post to bls with count %s", count)
bls_client.post(on='/submit',
inputs=[frame_meta])
logger.info("end post to bls with count %s", count)
if count > 10:
break
count += 1
class SbuMpxec(Executor):
def __init__(self,
*args,
**kwargs):
super().__init__(*args, **kwargs)
@requests(on="/mock")
async def ping(self,
**kwargs):
# mp_ctx = mp.get_context('fork')
func_runner = mp.Process(target=func_call)
func_runner.start()
f = Flow().config_gateway(protocol="http", port=11787).add(uses=SbuMpxec)
with f:
f.block()
```
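One direction often suggested for the blocking described above, offered here as an assumption rather than a confirmed jina fix: gRPC is not fork-safe (that is what the "skipping fork() handlers" warning is about), so starting the worker with the 'spawn' start method gives the child a fresh interpreter that builds its own Client instead of inheriting the parent's gRPC state. A minimal sketch:

```python
import multiprocessing as mp

# 'spawn' starts the child from a fresh interpreter, so no gRPC
# threads/state are inherited from the parent process.
ctx = mp.get_context("spawn")

def func_call():
    # construct the jina Client *inside* the child, after the spawn
    ...

func_runner = ctx.Process(target=func_call)
# func_runner.start(); func_runner.join()  # left commented in this sketch
print(ctx.get_start_method())  # -> spawn
```

With 'spawn', everything the child needs must be picklable or rebuilt inside the target function, which is why the Client should be constructed there.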
---
**Environment**
jina==3.19.0
| closed | 2023-08-08T04:22:45Z | 2023-08-08T08:37:58Z | https://github.com/jina-ai/serve/issues/6022 | [] | Song-Gy | 14 |
labmlai/annotated_deep_learning_paper_implementations | machine-learning | 66 | Small error in distillation code | Just a small but relevant nitpick, I think you mean to use `output` rather than using `large_logits` again here:
https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/distillation/__init__.py#L143 | closed | 2021-07-05T17:42:33Z | 2021-07-12T09:12:30Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/66 | [] | jamespayor | 2 |
docarray/docarray | fastapi | 1,608 | Question: How to change `RuntimeConfig`. | It is not clear in Documentation how `RuntimeConfig` should be used and configured in Indexes.
It seems there is a `db.configure` method which allows it to change, but there is something weird. The runtime configuration is applied at `__init__` time which makes any `configure` call not applicable.
This happens at least for `HnswLib` | closed | 2023-06-01T08:00:06Z | 2023-06-14T18:25:39Z | https://github.com/docarray/docarray/issues/1608 | [] | JoanFM | 1 |
akfamily/akshare | data-science | 5,058 | AKShare interface issue report | 1. Please read the documentation for the corresponding interface first: https://akshare.akfamily.xyz
2. Operating system version (only 64-bit systems are supported):
Windows 11 Pro, 64-bit operating system, x64-based processor
3. Python version (only 3.8 and above are supported):
Python 3.9
4. AKShare version (please upgrade to the latest):
akshare-1.14.37
5. Interface name and the calling code:
Interface name: stock_zh_b_daily_qfq_df
Calling code:
import akshare as ak
stock_zh_b_daily_qfq_df = ak.stock_zh_b_daily(symbol="sh900901", start_date="20101103", end_date="20201116",
                                              adjust="qfq")
print(stock_zh_b_daily_qfq_df)
6. Screenshot or description of the interface error:

7. Expected correct result:
| closed | 2024-07-21T15:28:02Z | 2024-07-22T10:54:28Z | https://github.com/akfamily/akshare/issues/5058 | [
"bug"
] | Hellohistory | 1 |
mwaskom/seaborn | data-science | 3,610 | Parameter fix in violinplot documentation | Kindly fix the "orient" parameter in the documentation. It is set to "y" in the documentation, but the function only accepts "h" or "v".
`sns.violinplot(seaice, x="Extent", y="Decade", orient="y", fill=False)`
to
`sns.violinplot(seaice, x="Extent", y="Decade", orient="h", fill=False)`
Ref: https://seaborn.pydata.org/examples/simple_violinplots.html | closed | 2024-01-03T11:38:37Z | 2024-01-03T11:49:14Z | https://github.com/mwaskom/seaborn/issues/3610 | [] | HussamCheema | 1 |
influxdata/influxdb-client-python | jupyter | 345 | Invalid Date Error when attempting to use delete_api | I am assuming this is a fault on my end, but I cannot seem to figure out why the delete_api is throwing an invalid date error.
I am using the following versions:
Python - 3.9.7
influxdb-client - 1.21.0
Influxdb - 2.0.9
Sample Code:
```python
from influxdb_client import InfluxDBClient
from influxdb_client.client.util.date_utils import get_date_helper
from datetime import datetime
client = InfluxDBClient(url="url", token="token", timeout=100000, retries=0, enable_gzip=False, username='username', password='password', org='org')
delete_api = client.delete_api()
date_helper = get_date_helper()
file = "name"
start = date_helper.to_utc(datetime(1970, 1, 1, 0, 0, 0, 0))
stop = date_helper.to_utc(datetime(2200, 1, 1, 0, 0, 0, 0))
delete_api.delete(start=start, stop=stop, bucket="Firmware", predicate=f'File_Name={file}', org='org')
```
The error I am getting is:
```bash
influxdb_client.rest.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json; charset=utf-8', 'X-Influxdb-Build': 'OSS', 'X-Influxdb-Version': '2.0.9', 'X-Platform-Error-Code': 'invalid', 'Date': 'Thu, 14 Oct 2021 15:33:49 GMT', 'Content-Length': '114'})
HTTP response body: {"code":"invalid","message":"invalid request; error parsing request json: bad logical expression, at position 12"}
```
I cannot seem to figure out what is meant by ```invalid Date Thu, 14 Oct 2021 15:33:49 GMT```, as I am not passing today's date as a variable into the delete_api.
I have also used regular datetime objects as the start and stop variables, given the latest version of the influxdb_client has that enhancement. Unfortunately I am getting the same error.
Any help would be appreciated.
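For what it's worth, the failing part may not be the dates at all: the server complains about a bad logical expression, and InfluxDB delete predicates generally require string values to be quoted. A small sketch of the difference (my assumption about the root cause, not a verified fix):

```python
file = "name"

# File_Name=name parses as a bad logical expression;
# File_Name="name" is a valid predicate.
bad_predicate = f'File_Name={file}'
good_predicate = f'File_Name="{file}"'
print(good_predicate)  # -> File_Name="name"
```

If that is the cause, passing `predicate=f'File_Name="{file}"'` to `delete_api.delete()` should get past the 400.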
| closed | 2021-10-14T15:37:26Z | 2021-10-15T18:06:30Z | https://github.com/influxdata/influxdb-client-python/issues/345 | [
"question",
"wontfix"
] | rburchDev | 3 |
onnx/onnx | pytorch | 6,814 | CUDA inference on Azure's partial GPU | Hi, I am trying to run inference with ONNX Runtime on Azure using a Standard_NV6ads_A10_v5 VM, which provides 1/6th of a GPU, and I get CUDA error 801 (cudaErrorNotSupported) when creating the InferenceSession. Everything works as expected on a VM with a full GPU (NCasT4_v3).
Is CUDA inference supported on these partial-GPU VMs? Do I need to install anything specific to enable it?
### Further information
Error:
/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:129 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; SUCCTYPE = cudaError; std::conditional_t<THRW, void, common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; SUCCTYPE = cudaError; std::conditional_t<THRW, void, common::Status> = void] CUDA failure 801: operation not supported ; GPU=0 ; hostname=va-forensics-processor-6cbddfc8cf-275l8 ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_execution_provider.cc ; line=282 ; expr=cudaSetDevice(info_.device_id);
### Notes
Ubuntu 24.04/ Docker /C#
ONNX nuget = "Microsoft.ML.OnnxRuntime.Gpu.Linux" Version="1.21.0"
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Linux" Version="1.21.0" />
CUDA: 12.6
CUDNN: 9.6.0.74
Dockerfile:
```dockerfile
ARG UBUNTU_YEAR=24
ARG UBUNTU_MONTH=04
FROM ubuntu:$UBUNTU_YEAR.$UBUNTU_MONTH AS base
ENV CUDA_MAJOR_VERSION=12
ENV CUDA_MINOR_VERSION=6
ENV CUDNN_MAJOR_VERSION=9
ENV TENSORRT_MAJOR_VERSION=10
RUN apt-get update && \
apt-get install -y --no-install-recommends wget && \
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu$UBUNTU_YEAR$UBUNTU_MONTH/x86_64/cuda-keyring_1.1-1_all.deb && \
dpkg -i cuda-keyring_1.1-1_all.deb && \
apt-get update && \
apt-get install -y --no-install-recommends cuda-cudart-$CUDA_MAJOR_VERSION-$CUDA_MINOR_VERSION \
cuda-nvrtc-$CUDA_MAJOR_VERSION-$CUDA_MINOR_VERSION \
libcublas-$CUDA_MAJOR_VERSION-$CUDA_MINOR_VERSION \
libcufft-$CUDA_MAJOR_VERSION-$CUDA_MINOR_VERSION \
libcurand-$CUDA_MAJOR_VERSION-$CUDA_MINOR_VERSION \
libcudnn$CUDNN_MAJOR_VERSION-cuda-$CUDA_MAJOR_VERSION \
libnvinfer-plugin$TENSORRT_MAJOR_VERSION \
libnvonnxparsers$TENSORRT_MAJOR_VERSION
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute
ENV PATH=/usr/local/cuda/bin:$PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda/targets/x86_64-linux/lib:/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
ENV CUDA_HOME=/usr/local/cuda
```
| closed | 2025-03-14T08:37:37Z | 2025-03-14T11:49:21Z | https://github.com/onnx/onnx/issues/6814 | [
"question"
] | mkompanek | 1 |
albumentations-team/albumentations | machine-learning | 1,515 | [Bug] tests for test_serialization_v2 are failing | Tests for `test_serialization/test_serialization_v2` are failing.
The reason is unclear.
In the test, a transform is loaded from a file and applied to a randomly generated image;
the expected output is loaded from the file and compared to the transform's actual output.
What is strange is that the transform includes a Normalize transform, yet the output has values > 1.
| closed | 2024-02-15T20:56:57Z | 2024-02-16T01:53:04Z | https://github.com/albumentations-team/albumentations/issues/1515 | [
"bug"
] | ternaus | 1 |
plotly/dash-core-components | dash | 642 | support all input types? | We recently removed support for certain `type` attributes of `dcc.Input` as they don't have good cross browser support (e.g. `type="time"`).
Some users want them back: https://community.plot.ly/t/dash-timepicker/6541/6?u=chriddyp
Should we bring them back? Should we warn users? Or just document that they don't have good cross browser support? | open | 2019-09-12T15:04:29Z | 2022-05-11T13:49:19Z | https://github.com/plotly/dash-core-components/issues/642 | [] | chriddyp | 5 |
pyg-team/pytorch_geometric | pytorch | 9,660 | 27 tests fail | ### 🐛 Describe the bug
[log](https://freebsd.org/~yuri/py311-torch-geometric-2.6.0-test.log)
### Versions
torch-geometric-2.6.0
pytorch-2.4.0
Python-3.11
FreeBSD 14.1 | open | 2024-09-15T12:12:31Z | 2024-11-07T02:55:59Z | https://github.com/pyg-team/pytorch_geometric/issues/9660 | [
"bug",
"good first issue",
"test"
] | yurivict | 1 |
mwouts/itables | jupyter | 291 | Infinite loading of tables for a particular `polars` data frame | I have a data frame with 300 columns and 1000 rows. When trying to display the data frame, it says "Loading ITables v2.1.1 from the `init_notebook_mode`".
The first thing after imports I do is `itables.init_notebook_mode(all_interactive=True)` and can display any other DF normally. Not sure how to debug the problem. | closed | 2024-06-19T09:36:20Z | 2024-06-24T19:34:29Z | https://github.com/mwouts/itables/issues/291 | [] | jmakov | 10 |
collerek/ormar | sqlalchemy | 1,387 | Request ormar to support pydantic 2.6, 2.7, ... | Hi @collerek ,
Just want to check if it's possible for us to support higher pydantic versions or is there a reason ormar is fixed at 2.5?
BTW, thank you for this amazing tool. | closed | 2024-07-31T16:43:30Z | 2024-12-05T14:13:30Z | https://github.com/collerek/ormar/issues/1387 | [] | brunorpinho | 4 |
scanapi/scanapi | rest-api | 310 | Update poetry-publish version | poetry-publish v1.3 uses a pre-built Docker image instead of building it every time from Dockerfile which makes it execute much faster | closed | 2020-10-09T21:05:21Z | 2020-10-14T22:40:07Z | https://github.com/scanapi/scanapi/issues/310 | [
"Automation",
"Hacktoberfest"
] | JRubics | 0 |
Esri/arcgis-python-api | jupyter | 1,773 | clone_items throws "IndexError: list index out of range" when there are nested repeats in the featureservice | **Describe the bug**
When I use the code to clone items for a feature service that has nested repeats, I get the error below and the clone fails.
**To Reproduce**
I have a feature service with the following structure:
FeatureLayer (id=0)
Table 1(id=1) --> related to layer 0
Table 2(id=2) --> related to Table 1
Steps to reproduce the behavior:
```python
# imports added so the snippet is self-contained
from datetime import datetime

from arcgis.gis import GIS

copydata = True
item_id_test ='a8a819c2efab4c8ab3aaed5bfbe74b14'
print(datetime.now().strftime("%Y/%m/%d, %H:%M:%S"))
source = GIS(source_enterprise_url, source_username, source_password, use_gen_token=True)
target = GIS(target_arcgisonline_url, target_username, target_password, use_gen_token=True)
#print("s",source)
#print("t",target)
it = source.content.get(item_id_test)
target_user=target_username
foldername='0_Migration_Test_Folder'
cloned_items = target.content.clone_items(items=[it], folder=foldername, owner=target_user, copy_data=copydata,use_org_basemap=True)#,item_mapping=map_service_item_mapping)
```
error:
```python
{
"name": "_ItemCreateException",
"message": "('Failed to create Feature Service SERVICENAME: list index out of range', <Item title:\"TITLE\" type:Feature Layer Collection owner:TARGETUSER>)",
"stack": "---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\site-packages\\arcgis\\_impl\\common\\_clone.py in clone(self)
3924 ]
-> 3925 self._add_features(
3926 new_layers,
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\site-packages\\arcgis\\_impl\\common\\_clone.py in _add_features(self, layers, relationships, layer_field_mapping, spatial_reference)
2786 object_id_field = layers[layer_id].properties[\"objectIdField\"]
-> 2787 object_id_mapping[layer_id] = {
2788 layer_features[i][\"attributes\"][object_id_field]: add_results[i][
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\site-packages\\arcgis\\_impl\\common\\_clone.py in <dictcomp>(.0)
2787 object_id_mapping[layer_id] = {
-> 2788 layer_features[i][\"attributes\"][object_id_field]: add_results[i][
2789 \"objectId\"
IndexError: list index out of range
During handling of the above exception, another exception occurred:
_ItemCreateException Traceback (most recent call last)
~\\AppData\\Local\\Temp\\ipykernel_29612\\2784937216.py in <cell line: 16>()
14 target_user=\"targetuser\"
15 foldername='0_Migration_Test_Folder'
---> 16 cloned_items = target.content.clone_items(items=[it], folder=foldername, owner=target_user, copy_global_ids=True,copy_data=copydata,use_org_basemap=True)#,item_mapping=map_service_item_mapping)
17
18
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\site-packages\\arcgis\\gis\\__init__.py in clone_items(self, items, folder, item_extent, use_org_basemap, copy_data, copy_global_ids, search_existing_items, item_mapping, group_mapping, owner, preserve_item_id, **kwargs)
8560 wab_code_attach=kwargs.pop(\"copy_code_attachment\", True),
8561 )
-> 8562 return deep_cloner.clone()
8563
8564 def bulk_update(
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\site-packages\\arcgis\\_impl\\common\\_clone.py in clone(self)
1320 else:
1321 with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
-> 1322 results = executor.submit(self._clone, executor).result()
1323 return results
1324
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\concurrent\\futures\\_base.py in result(self, timeout)
444 raise CancelledError()
445 elif self._state == FINISHED:
--> 446 return self.__get_result()
447 else:
448 raise TimeoutError()
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\concurrent\\futures\\_base.py in __get_result(self)
389 if self._exception:
390 try:
--> 391 raise self._exception
392 finally:
393 # Break a reference cycle with the exception in self._exception
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\concurrent\\futures\\thread.py in run(self)
56
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\site-packages\\arcgis\\_impl\\common\\_clone.py in _clone(self, excecutor)
1294 if item:
1295 item.delete()
-> 1296 raise ex
1297
1298 level += 1
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\concurrent\\futures\\thread.py in run(self)
56
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
c:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\lib\\site-packages\\arcgis\\_impl\\common\\_clone.py in clone(self)
3994 return new_item
3995 except Exception as ex:
-> 3996 raise _ItemCreateException(
3997 \"Failed to create {0} {1}: {2}\".format(
3998 original_item[\"type\"], original_item[\"title\"], str(ex)
_ItemCreateException: ('Failed to create Feature Service SERVICENAME: list index out of range', <Item title:\"TITLE\" type:Feature Layer Collection owner:TARGETUSER>)"
}
```
**Platform (please complete the following information):**
- OS: [Windows 10, ArcGIS Pro 3.2]
- Browser [Visual Studio Code]
- Python API Version '2.2.0.1'
**Additional context**
Cloning contents from ArcGIS Enterprise to ArcGIS Online
Also, I would like to mention that if the process takes longer than 60 minutes, it fails with an invalid-token error. I tried several ways to make the token expire later but was unable to. Using the REST API I can obtain a longer-expiring token, but not with the Python API using "arcgis.gis".
| open | 2024-03-13T18:43:33Z | 2024-09-26T14:59:22Z | https://github.com/Esri/arcgis-python-api/issues/1773 | [
"bug"
] | nojha-g | 3 |
chiphuyen/stanford-tensorflow-tutorials | tensorflow | 10 | chatbot code can't run with tensorflow 1.0? | I have trouble running the chatbot code after updating TensorFlow:
there is no module `seq2seq.model_with_buckets`.
| open | 2017-03-22T09:47:31Z | 2017-03-29T17:04:13Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/10 | [] | zentechthaingo | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 858 | D_loss drops fast and G_loss drops slow | When I train on my dataset, I found that the discriminator loss drops very fast: after 3 epochs, loss_D is about 0.001, but the L1 loss is about 0.18 and the generator's results are bad. | closed | 2019-11-27T08:20:43Z | 2024-09-27T10:56:24Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/858 | [] | KirtoXX | 5 |
RobertCraigie/prisma-client-py | asyncio | 968 | Audit Logs ? What is the best way to implement them | ## Problem
Is there a recommended way to implement audit logs with Prisma Py? I want to audit when certain actions are taken on a specific table.
- For example, a user edited a value in the `budgets` table; we need to store the value before, the value after, the time, and updated_by.
## Suggested solution
I'd like to do something like this
Ideally I don't need to search for all the instances in my code where the DB is being written to
```python
prisma_client = PrismaClient(
database_url=database_url
)
prisma_client.extend(tables_to_audit=['budgets', 'models'])
```
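Since `extend(tables_to_audit=...)` above is a proposed, hypothetical API rather than something Prisma Client Python ships, here is a minimal application-level sketch of the same idea; all names (`audited_update`, `AUDITED_TABLES`, `audit_log`) are invented for illustration:

```python
from datetime import datetime, timezone

AUDITED_TABLES = {"budgets", "models"}
audit_log = []

def audited_update(table, row, changes, updated_by):
    # record before/after values for audited tables, then apply the update
    if table in AUDITED_TABLES:
        audit_log.append({
            "table": table,
            "before": {k: row[k] for k in changes},
            "after": dict(changes),
            "updated_by": updated_by,
            "time": datetime.now(timezone.utc).isoformat(),
        })
    row.update(changes)
    return row

row = {"id": 1, "amount": 100}
audited_update("budgets", row, {"amount": 250}, updated_by="alice")
print(audit_log[0]["before"], audit_log[0]["after"])
# -> {'amount': 100} {'amount': 250}
```

Routing all writes through one helper like this is what would let the audit live in a single place instead of at every DB call site.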
## Alternatives
## Additional context
| open | 2024-05-31T20:23:24Z | 2024-05-31T20:25:18Z | https://github.com/RobertCraigie/prisma-client-py/issues/968 | [] | ishaan-jaff | 0 |
DistrictDataLabs/yellowbrick | matplotlib | 651 | Fix second image in KElbowVisualizer documentation | The second image that was added to the `KElbowVisualizer` documentation in PR #635 is not rendering correctly because the `elbow_is_behind.png` file is not generated by the `elbow.py` file, but was added separately.
- [x] Expand `KElbowVisualizer` documentation in `elbow.rst`
- [x] Add example showing how to hide timing and use `calinski_harabaz` scoring metric
- [x] Update `elbow.py` to generate new image for the documentation.
| closed | 2018-10-31T22:36:25Z | 2018-11-14T19:32:42Z | https://github.com/DistrictDataLabs/yellowbrick/issues/651 | [
"type: documentation"
] | Kautumn06 | 0 |
dsdanielpark/Bard-API | nlp | 59 | Feature to remember previous chats or responses | Hello, I am using your project and found one small issue: it does not save or remember previous responses for better results. Can you provide anything related to this? Thanks | closed | 2023-06-10T08:24:42Z | 2024-01-18T15:55:07Z | https://github.com/dsdanielpark/Bard-API/issues/59 | [] | BabaYaga1221 | 4 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 115 | Training details | In the training-details section, could the data preprocessing code be open-sourced? I want to do continued pre-training with my own data and would like to reference your approach. | closed | 2023-04-10T12:39:07Z | 2023-05-14T22:02:31Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/115 | [
"stale"
] | baketbek | 2 |
keras-team/keras | tensorflow | 20,167 | Keras model deserialization issue | I trained a model using Keras and saved it; while loading it back, it ran into a deserialization error.
```
from keras.models import Model
from keras.layers import Input, Conv1D, Dense, Dropout, Lambda, concatenate
from keras.optimizers import Adam
```
```
model.save('./0813debugmodel.keras')
```
model save runs okay
```
from keras.models import load_model
model = load_model('./0813debugmodel.keras')
```
> "<class 'keras.src.models.functional.Functional'> could not be deserialized properly. Please ensure that components that are Python object instances (layers, models, etc.) returned by `get_config()` are explicitly deserialized in the model's `from_config()` method.
I'm using python 3.11, keras 3.4.0.
How can I resolve this? | closed | 2024-08-26T18:42:32Z | 2024-09-09T23:05:19Z | https://github.com/keras-team/keras/issues/20167 | [
"type:support",
"stat:awaiting response from contributor"
] | chenshenmsft | 5 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 445 | Missing config file after merging models | *Note: put an x inside [ ] to tick a box. Delete this line when asking and keep only the applicable options.*
Converting LLaMA to HF format worked without problems. Merging with the LoRA models via
python scripts/merge_llama_with_chinese_lora.py \
--base_model LLaMA_HF_13B \
--lora_model chinese_llama_plus_lora_13b,chinese_alpaca_plus_lora_13b \
--output_type pth \
--output_dir merge_weights
also ran without problems.
Then running the example code raises an error saying the merged output directory merge_weights contains no config file.
*Please describe the problem you encountered as specifically as possible, including the commands you ran when necessary. This will help us locate the problem faster.*
![Uploading image.png…]()
*Please provide a text log or screenshot of the run so that we can better understand the details.*
### Required checklist (for the first three items, keep only the applicable options)
- [Alpaca-Plus] **Base model**: LLaMA / Alpaca / LLaMA-Plus / Alpaca-Plus
- [Linux] **Operating system**: Windows / MacOS / Linux
- [Model inference] **Problem category**: download / model conversion and merging / model training and fine-tuning / model inference (🤗 transformers) / model quantization and deployment (llama.cpp, text-generation-webui, LlamaChat) / output quality / other
- [x] (Required) Since the related dependencies update frequently, please make sure you followed the steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [x] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues without finding a similar problem or solution
- [x] (Required) Third-party tool issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat); please also look for solutions in the corresponding projects
| closed | 2023-05-29T03:41:07Z | 2023-06-08T23:56:42Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/445 | [
"stale"
] | sssssshf | 5 |
plotly/dash | jupyter | 3,075 | Delete button disappears upon use of page_action setting | page_action being set disables the column deletable functionality (no icon/button).
Minimal modified example from the docs:
```
from dash import Dash, dash_table, dcc, html, Input, Output, State, callback
app = Dash(__name__)
app.layout = html.Div([
html.Div([
dash_table.DataTable(
id='editing-columns',
columns=[{
'name': 'Column {}'.format(i),
'id': 'column-{}'.format(i),
'deletable': True,
'renamable': True
} for i in range(1, 5)],
data=[
{'column-{}'.format(i): (j + (i-1)*5) for i in range(1, 5)}
for j in range(5)
],
page_action='custom',
editable=True
)])
])
if __name__ == '__main__':
app.run(debug=True)
```
- result of `pip list | grep dash`:
```
dash 2.18.2
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- Windows 11 Enterprise (23H2, 22631.4460)
- Chrome (130.0.6723.117)
**Describe the bug**
When I copy the example for a column deletable datatable, the moment I add the "page_action" (set to 'custom') argument to the data-table ctor, the delete icon/button disappears. It is the same issue described here: https://github.com/plotly/dash-table/issues/511
**Expected behavior**
Delete icon to be visible/actionable with a page_action (to allow backend paging) also set.
I don't know if this is intended but undocumented, if the user is expected to implement any data altering effects in the backend first.
| open | 2024-11-14T17:27:37Z | 2025-01-19T19:05:12Z | https://github.com/plotly/dash/issues/3075 | [
"bug",
"P2"
] | CoranH | 1 |
nok/sklearn-porter | scikit-learn | 48 | MLPClassifier does not reset network values, producing wrong predictions during continuous prediction | Hi! Great work with Porter, really helpful!
What follows is a small issue, but one that took me a good while to debug, so I wanted to post both the problem and a possible solution that seems to work for me.
I have been porting an MLPClassifier to Android. Everything seemed fine in desktop Java tests, but on Android the classifier would usually produce values that were not completely wrong, just slightly off. I kept running tests and found that the current Java implementation of MLPClassifier stores the network's input values in the object every time a prediction is made. This means that once .predict has been run, any subsequent call reuses values that were changed inside the network; by this I do not mean the weights, but the actual input values and subsequent estimations. The results are not very different, only slightly off, which makes this very hard to debug; initially I thought it was just a number-rounding issue. Also, the suggested terminal test in desktop runs inputs a single value, so the problem is impossible to catch that way, as it only appears when .predict is called multiple times sequentially.
A way to fix this issue is by adding a method that resets the network values to zero.
```
public void reset(){
//Cleans up the network values
for (int i=0;i<this.network.length;i++){
for (int i2=0;i2<this.network[i].length;i2++){
this.network[i][i2]=0;
}
}
}
```
The solution above has the caveat that it will assign a value of zero to the input values used in .predict since predict does not copy the values but instead uses a pointer.
Deleting the MLPClassifier, or creating a new this.network, are other options, but they may be much slower.
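To make the aliasing concrete, here is a pure-Python analogy (illustrative names, not the generated Java): predict stores the caller's input by reference, so zeroing every layer also zeroes the caller's array, while skipping the input layer in reset avoids that side effect:

```python
# Pure-Python illustration of the aliasing described above.
network = [[0.0] * 3 for _ in range(3)]

def predict(x):
    network[0] = x          # stored by reference, like the Java code
    # ... forward pass would fill network[1:], omitted in this sketch ...

def reset():
    for layer in network[1:]:       # skip the input layer (index 0)
        for i in range(len(layer)):
            layer[i] = 0.0

x = [1.0, 2.0, 3.0]
predict(x)
reset()
print(x)  # -> [1.0, 2.0, 3.0]  (caller's input untouched)
```

The same index-from-1 loop could be applied to the Java reset() above if preserving the caller's input values matters.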
Hope this helps other people and if you have a better solution please let me know. | open | 2019-02-14T12:24:59Z | 2019-08-10T13:24:27Z | https://github.com/nok/sklearn-porter/issues/48 | [
"bug",
"enhancement",
"1.0.0"
] | julian-ramos | 1 |
apachecn/ailearning | scikit-learn | 534 | A question about the random forest | AiLearning/src/py2.x/ml/7.RandomForest/randomForest.py, line 111:
gini += float(size)/D * (proportion * (1.0 - proportion)) # my understanding: compute the cost; the more accurate the classification, the smaller the gini
Is a factor of 2 missing? See below:
gini += float(size)/D * (2*proportion * (1.0 - proportion)) # my understanding: compute the cost; the more accurate the classification, the smaller the gini | closed | 2019-07-08T13:30:04Z | 2021-09-07T17:45:41Z | https://github.com/apachecn/ailearning/issues/534 | [] | guohaoyuan | 1 |
kizniche/Mycodo | automation | 1,276 | arm64 Influxdb 2.x selection on install | System: Rpi 3B+
OS: raspios bullseye arm64 lite
During install, if you select "Install Influxdb 2.x", the install script responds with "You have chosen not to install Influxdb.", etc. The issue looks to be that the [value assigned to that selection in the setup.sh file](https://github.com/kizniche/Mycodo/blob/master/install/setup.sh#L118) is being set to 0, which gives a [confusing response](https://github.com/kizniche/Mycodo/blob/master/install/setup.sh#L136) (should this message trigger off a selection value of 2 instead?). It looks like the correct version of [influx is still installed](https://github.com/kizniche/Mycodo/blob/master/install/setup.sh#L189).
Thanks for putting together such a stellar project! It's a huge upgrade over the basic arduino controller I have been using! | closed | 2023-02-10T23:27:06Z | 2023-04-06T16:20:30Z | https://github.com/kizniche/Mycodo/issues/1276 | [
"Fixed and Committed"
] | K1rdro | 1 |
sktime/sktime | scikit-learn | 7,897 | [DOC] Document or Fix Local ReadTheDocs Build Process | #### Describe the issue linked to the documentation
The process for building documentation locally is unclear. @fkiraly mentioned that there used to be a local build process, but whether it still works is unclear. I think it would be useful to have one, since ReadTheDocs builds sometimes fail due to timeouts.
Also, it would be good to be able to render individual docstrings locally.
#### Suggest a potential alternative/fix
- The local documentation build process could be clearly documented.
- If it’s broken, fix any issues preventing local builds.
| open | 2025-02-25T17:23:18Z | 2025-02-25T17:23:18Z | https://github.com/sktime/sktime/issues/7897 | [
"documentation"
] | Ankit-1204 | 0 |
tensorflow/tensor2tensor | machine-learning | 1,499 | What should correct t2t-datagen files look like? | I'm quite confused about whether the files generated by t2t-datagen are correct.
Here is part of the file opened by sublime:

This is read by tf.data:

and record_iterator:

Is there any way to figure out whether the file is correct?
| open | 2019-03-18T14:31:30Z | 2019-03-18T14:31:30Z | https://github.com/tensorflow/tensor2tensor/issues/1499 | [] | yourSylvia | 0 |
Netflix/metaflow | data-science | 1,447 | Callback on step failure | Is it possible to implement a try/catch/finally kind of flow for each step?
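A sketch of the try/catch/finally shape being asked about, with hypothetical helper names rather than any Metaflow API: a decorator that fires an alert callback once all retries are exhausted and then re-raises so the flow still fails:

```python
# Hypothetical sketch, not a Metaflow feature: retry a step, and fire a
# failure callback only when every attempt has failed.
def with_failure_callback(retries, on_failure):
    def decorate(step_fn):
        def wrapper(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return step_fn(*args, **kwargs)
                except Exception as exc:
                    if attempt == retries:
                        on_failure(step_fn.__name__, exc)  # e.g. send an alert
                        raise
        return wrapper
    return decorate

alerts = []

@with_failure_callback(retries=2, on_failure=lambda name, exc: alerts.append(name))
def flaky_step():
    raise RuntimeError("boom")

try:
    flaky_step()
except RuntimeError:
    pass
print(alerts)  # -> ['flaky_step']
```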
We have retries, so we have catch. But if none of the retries on a step are successful, there are many scenarios one would want to have a callback, say to alert for step failure, before exiting the flow. | open | 2023-06-12T13:17:18Z | 2023-06-19T16:49:03Z | https://github.com/Netflix/metaflow/issues/1447 | [] | parulgaba | 4 |
pallets-eco/flask-sqlalchemy | flask | 571 | Selecting Object from existing DB results in Invalid Object | Running the following code in Flask and visiting the route /getAllusers yields an error: "Invalid object name". Here is a snippet of the code:
```
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

# app/db setup added so the snippet is self-contained
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = '...'  # connection string elided
db = SQLAlchemy(app)

class User(db.Model):
#table name is RCA.MyLogin
__tablename__ = 'RCA.MyLogin'
id = db.Column('ID', db.Integer, primary_key=True)
UName = db.Column('UName', db.String(255))
Pass = db.Column('Pass', db.String(255))
@app.route('/getAllusers', methods=['GET'])
def get_all_users():
users = User.query.all()
output = []
for user in users:
user_data = dict()
user_data['UName'] = user.Uname
user_data['Pass'] = user.Pass
output.append(user_data)
return jsonify({'users': output})
```
What might be the possible cause? I tried creating a simple connection through SQLAlchemy with 'Select * from RCA.MyLogin' and it works, but when I use it with Flask-SQLAlchemy it yields an error.
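The usual cause of `Invalid object name` here is that `'RCA.MyLogin'` is quoted as a single identifier; SQLAlchemy expects the schema to be passed separately via `__table_args__`. A sketch in plain SQLAlchemy (the same `__table_args__` key works on a Flask-SQLAlchemy `db.Model` subclass):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    # The table name holds only 'MyLogin'; the 'RCA' schema goes in
    # __table_args__ so SQL Server receives [RCA].[MyLogin] instead of
    # one quoted identifier named "RCA.MyLogin".
    __tablename__ = "MyLogin"
    __table_args__ = {"schema": "RCA"}
    id = Column("ID", Integer, primary_key=True)
    UName = Column("UName", String(255))
    Pass = Column("Pass", String(255))
```

With Flask-SQLAlchemy, the equivalent change is keeping `__tablename__ = 'MyLogin'` and adding the same `__table_args__` line to the model above.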
| closed | 2017-12-12T16:48:24Z | 2020-12-05T20:46:35Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/571 | [] | ledikari | 3 |
serengil/deepface | deep-learning | 956 | cv2.dnn issue with using SSD in deepface.analyze() | I have the most recent opencv-python version 4.9.0.80 and it seems to cause an issue when I try to run DeepFace.analyze():
```
DeepFace.analyze('Img.jpg', ['race'], detector_backend='ssd', enforce_detection=False, silent=False)
```
```
File ~/anaconda3/lib/python3.11/site-packages/deepface/detectors/SsdWrapper.py:46, in build_model()
42 output = home + "/.deepface/weights/res10_300x300_ssd_iter_140000.caffemodel"
44 gdown.download(url, output, quiet=False)
---> 46 face_detector = cv2.dnn.readNetFromCaffe(
47 home + "/.deepface/weights/deploy.prototxt",
48 home + "/.deepface/weights/res10_300x300_ssd_iter_140000.caffemodel",
49 )
51 eye_detector = OpenCvWrapper.build_cascade("haarcascade_eye")
53 detector = {}
AttributeError: module 'cv2.dnn' has no attribute 'readNetFromCaffe'
```
Is there a way around this issue? | closed | 2024-01-15T19:36:38Z | 2024-01-18T21:49:55Z | https://github.com/serengil/deepface/issues/956 | [
"bug"
] | DianaDaInLee | 5 |
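Regarding the `cv2.dnn` error in the deepface issue above: a frequently reported cause of missing `cv2` submodule attributes is having more than one OpenCV wheel installed at once (for example `opencv-python` next to `opencv-python-headless`), so that one install partially overwrites the other. A small, self-contained checker over a list of installed distribution names (the four variant names below are the OpenCV wheels published on PyPI):

```python
OPENCV_VARIANTS = {
    "opencv-python",
    "opencv-python-headless",
    "opencv-contrib-python",
    "opencv-contrib-python-headless",
}

def conflicting_opencv(installed_dists):
    """Return the OpenCV wheels present if more than one is installed,
    otherwise an empty list (no conflict)."""
    found = sorted(set(installed_dists) & OPENCV_VARIANTS)
    return found if len(found) > 1 else []
```

It can be fed something like `[d.metadata["Name"] for d in importlib.metadata.distributions()]`; if it reports a conflict, uninstalling all variants and reinstalling exactly one is the usual remedy.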
strawberry-graphql/strawberry | django | 2,988 | Support for query complexity | Hey folks, a while ago I wrote down some ideas on how we can add support for query complexity. For this we'll need two things:
1. An extension that calculates the cost based on an operation
2. Support for customising the costs based on fields/arguments
For customising costs on fields and arguments I was thinking about something like this:
```python
from typing import Annotated

import strawberry


@strawberry.type
class Query:
    @strawberry.field(cost=2)
    def expensive_list(
        self, limit: Annotated[int, strawberry.argument(cost_multiplier=1)]
    ) -> list[str]:
        return ["Item"] * limit
```
Here we are saying that each individual item in the `expensive_list` field costs 2, and the total is the product of the field cost, the limit, and the argument's multiplier, so the result would be:
```
field_cost × limit × 1
```
We can also have defaults like this:
```python
from typing import Dict, Union


class StrawberryConfig:
    # Existing attributes and methods ...
    default_argument_cost_multiplier: Dict[str, Union[int, float]] = {
        "limit": 1,
        "first": 1,  # probably not needed, but worth showing as an example
        "last": 1,
    }
```
`default_argument_cost_multiplier` is a map between argument names and their multipliers, though I don't like the name that much.
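For part 1 (the extension that calculates cost), one possible semantics consistent with the example above can be sketched independently of GraphQL parsing: represent a selection as a small dict and fold costs bottom-up. The choice that the multiplier also scales child selections (since each returned item repeats them), and the dict shape itself, are illustrative assumptions, not a proposed API:

```python
def field_cost(selection, costs, multipliers):
    """selection: {"name": str, "args": {...}, "children": [...]}.

    Cost = base * factor + factor * sum(child costs), where factor is
    the product of (arg value * multiplier) over multiplier-bearing args.
    """
    base = costs.get(selection["name"], 1)
    factor = 1
    for arg, value in selection.get("args", {}).items():
        if arg in multipliers:
            factor *= value * multipliers[arg]
    children = sum(
        field_cost(child, costs, multipliers)
        for child in selection.get("children", [])
    )
    return base * factor + factor * children
```

With `costs={"expensiveList": 2}` and `multipliers={"limit": 1}`, a `limit: 10` selection costs `2 × 10 × 1 = 20`, matching the formula above.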
What do you think? | open | 2023-07-28T09:21:01Z | 2025-03-20T15:56:19Z | https://github.com/strawberry-graphql/strawberry/issues/2988 | [
"feature-request"
] | patrick91 | 19 |
ibis-project/ibis | pandas | 10,441 | feat: method missing in ibis/expr/types/arrays.py | ### Is your feature request related to a problem?
The method mode() does not exist in the file ibis/expr/types/arrays.py, so it's not possible to apply it to a column of arrays.
The method does exist in DuckDB, though: https://duckdb.org/docs/sql/functions/list#list_-rewrite-functions
### What is the motivation behind your request?
_No response_
### Describe the solution you'd like
If I had this method, I would be able to do table.column.mode().
Example:
t = ibis.memtable(
    {
        "id": range(3),
        "arr": [
            [1, 2, 3, 3],
            [1, 1, 2, 2, 3, 4],
            [1, 1, 2, 3],
        ],
    }
)
The mode can be multiple elements: in the second list of the column the most frequent value is 1 or 2, so we can pass an argument to define whether we want the min or the max.
t.arr.mode('max') would give an output column arr of [3, 2, 1]
t.arr.mode('min') would give an output column arr of [3, 1, 1]
t.arr.mode() can default to either 'min' or 'max', as you prefer
Another, entirely different option would be for the output to be a column of arrays, and then I would follow it up with the mins() or maxs() method myself.
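Until a native `mode()` lands, the computation can be pushed to the DuckDB side: `list_aggregate` applies a named aggregate function to each list element-wise, and DuckDB has a `mode` aggregate. Which tied value `mode` returns is up to DuckDB, so the min/max choice described above would still need post-processing; this SQL is a sketch to run through the backend's raw-SQL escape hatch, not verified against a live connection:

```sql
-- DuckDB SQL sketch; 'mode' is applied per-list via list_aggregate
SELECT id, list_aggregate(arr, 'mode') AS arr_mode
FROM t;
```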
### What version of ibis are you running?
9.5.0
### What backend(s) are you using, if any?
DuckDB
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | open | 2024-11-05T16:12:21Z | 2024-11-06T09:33:57Z | https://github.com/ibis-project/ibis/issues/10441 | [
"feature"
] | mercelino | 0 |
RomelTorres/alpha_vantage | pandas | 130 | Canadian Equity Dividend Data not found | Not sure whether this is an Alpha Vantage issue or a wrapper issue. But I haven't got a response from Alpha Vantage so I will try here. All four of the following tickers have a long history of quarterly dividends. The output is a pandas df column sum of the output from the monthly_adjusted time series. I get the same result from the daily and weekly adjusted time series.
TSX:BCE
1. open 5.076170e+03
2. high 5.212550e+03
3. low 4.937380e+03
4. close 5.095740e+03
5. adjusted close 4.254568e+03
6. volume 3.056707e+09
7. dividend amount 0.000000e+00
dtype: float64

TSX:BMO
1. open 7.827790e+03
2. high 8.064790e+03
3. low 7.591240e+03
4. close 7.867050e+03
5. adjusted close 6.742426e+03
6. volume 3.019698e+09
7. dividend amount 0.000000e+00
dtype: float64

TSX:RY
1. open 1.813900e+03
2. high 1.860310e+03
3. low 1.762530e+03
4. close 1.813360e+03
5. adjusted close 1.764814e+03
6. volume 9.226584e+08
7. dividend amount 0.000000e+00
dtype: float64

TSX:TD
1. open 5.496000e+03
2. high 5.643340e+03
3. low 5.313575e+03
4. close 5.528500e+03
5. adjusted close 4.803230e+03
6. volume 6.874716e+09
7. dividend amount 0.000000e+00
| closed | 2019-05-27T16:04:15Z | 2019-09-12T01:42:58Z | https://github.com/RomelTorres/alpha_vantage/issues/130 | [] | liamland | 1 |
google-research/bert | nlp | 392 | For training, each question should have exactly 1 answer | I can see Squad 2.0 related files are there.
When I run training for Squad 2.0 I get following error:
```
Traceback (most recent call last):
File "run_squad.py", line 1283, in <module>
tf.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "run_squad.py", line 1159, in main
input_file=FLAGS.train_file, is_training=True)
File "run_squad.py", line 268, in read_squad_examples
"For training, each question should have exactly 1 answer.")
ValueError: For training, each question should have exactly 1 answer.
``` | closed | 2019-01-23T10:57:04Z | 2019-06-18T11:28:35Z | https://github.com/google-research/bert/issues/392 | [] | ghost | 2 |
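For the BERT SQuAD issue above: `read_squad_examples` enforces one answer per question only for SQuAD 1.1-style data, and for SQuAD 2.0 files the BERT repository documents a `--version_2_with_negative=True` flag for `run_squad.py` that switches on handling of unanswerable questions. A sketch of the invocation (all paths are placeholders):

```shell
python run_squad.py \
  --vocab_file=$BERT_DIR/vocab.txt \
  --bert_config_file=$BERT_DIR/bert_config.json \
  --init_checkpoint=$BERT_DIR/bert_model.ckpt \
  --do_train=True \
  --train_file=train-v2.0.json \
  --version_2_with_negative=True \
  --output_dir=/tmp/squad2_out
```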
comfyanonymous/ComfyUI | pytorch | 7,135 | "Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same" [Since v0.3.15] | ### Expected Behavior
Running simple Stable Cascade workflow. This worked fine in v0.3.14
GPU: NVIDIA GeForce GTX 1660 SUPER
### Actual Behavior
Stage A crashes with "Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same"
### Steps to Reproduce
Running attached Workflow with Stage C --> Stage B -> Decode (Stage A)
```
{"last_node_id":59,"last_link_id":172,"nodes":[{"id":57,"type":"CheckpointLoaderSimple","pos":[16.851778030395508,112.69084930419922],"size":[315,98],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[157],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[155,164],"slot_index":1},{"name":"VAE","type":"VAE","links":[],"slot_index":2}],"properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["stable_cascade_stage_c.safetensors"]},{"id":37,"type":"CLIPTextEncode","pos":[379.24298095703125,278.7549133300781],"size":[359.66668701171875,97.66668701171875],"flags":{},"order":4,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":164}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[169,170],"slot_index":0}],"title":"Negative Prompt","properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for S&R":"CLIPTextEncode"},"widgets_values":[""],"color":"#322","bgcolor":"#533"},{"id":58,"type":"CheckpointLoaderSimple","pos":[13.470444679260254,714.0518188476562],"size":[315,98],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[159],"slot_index":0},{"name":"CLIP","type":"CLIP","links":null},{"name":"VAE","type":"VAE","links":[160],"slot_index":2}],"properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for 
S&R":"CheckpointLoaderSimple"},"widgets_values":["stable_cascade_stage_b.safetensors"]},{"id":9,"type":"SaveImage","pos":[1194.809326171875,140.9904327392578],"size":[740.6666870117188,664.0000610351562],"flags":{},"order":9,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":130}],"outputs":[],"properties":{"cnr_id":"comfy-core","ver":"0.3.14"},"widgets_values":["%date:yyyy-MM-dd%/%date:hh-mm%"]},{"id":50,"type":"KSampler","pos":[789.3228149414062,137.60293579101562],"size":[333.512939453125,262],"flags":{},"order":5,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":157},{"name":"positive","type":"CONDITIONING","link":171},{"name":"negative","type":"CONDITIONING","link":169},{"name":"latent_image","type":"LATENT","link":140}],"outputs":[{"name":"LATENT","type":"LATENT","links":[141],"slot_index":0}],"title":"KSampler (Stage C)","properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for S&R":"KSampler"},"widgets_values":[917797045986798,"randomize",20,3.5,"euler_ancestral","simple",1]},{"id":52,"type":"StableCascade_StageB_Conditioning","pos":[790.4818115234375,454.65228271484375],"size":[329.22186279296875,52.766578674316406],"flags":{},"order":6,"mode":0,"inputs":[{"name":"conditioning","type":"CONDITIONING","link":172},{"name":"stage_c","type":"LATENT","link":141}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[147],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for S&R":"StableCascade_StageB_Conditioning"},"widgets_values":[]},{"id":43,"type":"KSampler","pos":[789.71240234375,555.0465087890625],"size":[329.76336669921875,262],"flags":{},"order":7,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":159},{"name":"positive","type":"CONDITIONING","link":147},{"name":"negative","type":"CONDITIONING","link":170},{"name":"latent_image","type":"LATENT","link":143}],"outputs":[{"name":"LATENT","type":"LATENT","links":[128],"slot_index":0}],"title":"KSampler (Stage 
B)","properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for S&R":"KSampler"},"widgets_values":[958410328737782,"randomize",10,1,"euler_ancestral","simple",1]},{"id":44,"type":"VAEDecode","pos":[786.1033935546875,873.9520874023438],"size":[333.3995666503906,51.53626251220703],"flags":{},"order":8,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":128},{"name":"vae","type":"VAE","link":160}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[130],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":51,"type":"StableCascade_EmptyLatentImage","pos":[381.3305969238281,432.4705505371094],"size":[360.16094970703125,150.6151580810547],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"stage_c","type":"LATENT","links":[140],"slot_index":0},{"name":"stage_b","type":"LATENT","links":[143],"slot_index":1}],"properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for S&R":"StableCascade_EmptyLatentImage"},"widgets_values":[1280,1280,42,1]},{"id":36,"type":"CLIPTextEncode","pos":[372.6279296875,125.25308990478516],"size":[372.3333740234375,101.66668701171875],"flags":{},"order":3,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":155}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[171,172],"slot_index":0}],"title":"Positive Prompt","properties":{"cnr_id":"comfy-core","ver":"0.3.14","Node name for S&R":"CLIPTextEncode"},"widgets_values":["Old man standing in front of an open 
fridge"],"color":"#232","bgcolor":"#353"}],"links":[[128,43,0,44,0,"LATENT"],[130,44,0,9,0,"IMAGE"],[140,51,0,50,3,"LATENT"],[141,50,0,52,1,"LATENT"],[143,51,1,43,3,"LATENT"],[147,52,0,43,1,"CONDITIONING"],[155,57,1,36,0,"CLIP"],[157,57,0,50,0,"MODEL"],[159,58,0,43,0,"MODEL"],[160,58,2,44,1,"VAE"],[164,57,1,37,0,"CLIP"],[169,37,0,50,2,"CONDITIONING"],[170,37,0,43,2,"CONDITIONING"],[171,36,0,50,1,"CONDITIONING"],[172,36,0,52,0,"CONDITIONING"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.083763788820896,"offset":[47.13346911942559,108.67851307352966]},"groupNodes":{}},"version":0.4}
```
### Debug Logs
```powershell
# ComfyUI Error Report
## Error Details
- **Node ID:** 44
- **Node Type:** VAEDecode
- **Exception Type:** RuntimeError
- **Exception Message:** Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
## Stack Trace
File "C:\Manual Programs\ComfyUI\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Manual Programs\ComfyUI\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\nodes.py", line 287, in decode
images = vae.decode(samples["samples"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\comfy\sd.py", line 488, in decode
out = self.process_output(self.first_stage_model.decode(samples).to(self.output_device).float())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\comfy\ldm\cascade\stage_a.py", line 220, in decode
x = self.up_blocks(x)
^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 549, in _conv_forward
return F.conv2d(
^^^^^^^^^
## System Information
- **ComfyUI Version:** 0.3.14 <------ actually I checked out the first broken commit, which still reports 0.3.14 but was released as 0.3.15
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.6.0+cu126
## Devices
- **Name:** cuda:0 NVIDIA GeForce GTX 1660 SUPER : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 6442123264
- **VRAM Free:** 5276303752
- **Torch VRAM Total:** 134217728
- **Torch VRAM Free:** 58589576
## Logs
2025-03-08T16:21:56.420678 - [START] Security scan2025-03-08T16:21:56.420678 -
2025-03-08T16:21:57.135485 - [DONE] Security scan2025-03-08T16:21:57.135485 -
2025-03-08T16:21:57.236756 - ## ComfyUI-Manager: installing dependencies done.2025-03-08T16:21:57.236756 -
2025-03-08T16:21:57.236756 - ** ComfyUI startup time:2025-03-08T16:21:57.236756 - 2025-03-08T16:21:57.236756 - 2025-03-08 16:21:57.2362025-03-08T16:21:57.236756 -
2025-03-08T16:21:57.236756 - ** Platform:2025-03-08T16:21:57.236756 - 2025-03-08T16:21:57.236756 - Windows2025-03-08T16:21:57.236756 -
2025-03-08T16:21:57.236756 - ** Python version:2025-03-08T16:21:57.236756 - 2025-03-08T16:21:57.236756 - 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)]2025-03-08T16:21:57.236756 -
2025-03-08T16:21:57.236756 - ** Python executable:2025-03-08T16:21:57.236756 - 2025-03-08T16:21:57.236756 - C:\Manual Programs\ComfyUI\python_embeded\python.exe2025-03-08T16:21:57.236756 -
2025-03-08T16:21:57.236756 - ** ComfyUI Path:2025-03-08T16:21:57.236756 - 2025-03-08T16:21:57.236756 - C:\Manual Programs\ComfyUI\ComfyUI2025-03-08T16:21:57.236756 -
2025-03-08T16:21:57.236756 - ** ComfyUI Base Folder Path:2025-03-08T16:21:57.236756 - 2025-03-08T16:21:57.236756 - C:\Manual Programs\ComfyUI\ComfyUI2025-03-08T16:21:57.245613 -
2025-03-08T16:21:57.245613 - ** User directory:2025-03-08T16:21:57.245613 - 2025-03-08T16:21:57.245613 - C:\Manual Programs\ComfyUI\ComfyUI\user2025-03-08T16:21:57.245613 -
2025-03-08T16:21:57.245613 - ** ComfyUI-Manager config path:2025-03-08T16:21:57.245613 - 2025-03-08T16:21:57.245613 - C:\Manual Programs\ComfyUI\ComfyUI\user\default\ComfyUI-Manager\config.ini2025-03-08T16:21:57.245613 -
2025-03-08T16:21:57.245613 - ** Log path:2025-03-08T16:21:57.245613 - 2025-03-08T16:21:57.245613 - C:\Manual Programs\ComfyUI\ComfyUI\user\comfyui.log2025-03-08T16:21:57.245613 -
2025-03-08T16:21:58.026710 -
Prestartup times for custom nodes:
2025-03-08T16:21:58.026710 - 2.3 seconds: C:\Manual Programs\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager
2025-03-08T16:21:58.026710 -
2025-03-08T16:21:59.517217 - Checkpoint files will always be loaded safely.
2025-03-08T16:21:59.665824 - Total VRAM 6144 MB, total RAM 65461 MB
2025-03-08T16:21:59.665824 - pytorch version: 2.6.0+cu126
2025-03-08T16:21:59.665824 - Set vram state to: NORMAL_VRAM
2025-03-08T16:21:59.665824 - Device: cuda:0 NVIDIA GeForce GTX 1660 SUPER : cudaMallocAsync
2025-03-08T16:22:00.810968 - Using pytorch attention
2025-03-08T16:22:02.006245 - ComfyUI version: 0.3.14
2025-03-08T16:22:02.028236 - [Prompt Server] web root: C:\Manual Programs\ComfyUI\ComfyUI\web
2025-03-08T16:22:02.435888 - ### Loading: ComfyUI-Manager (V3.30.3)
2025-03-08T16:22:02.435888 - [ComfyUI-Manager] network_mode: public
2025-03-08T16:22:02.535753 - ### ComfyUI Revision: 3154 [41c30e92] *DETACHED | Released on '2025-02-21'
2025-03-08T16:22:02.750574 -
Import times for custom nodes:
2025-03-08T16:22:02.750574 - 0.0 seconds: C:\Manual Programs\ComfyUI\ComfyUI\custom_nodes\websocket_image_save.py
2025-03-08T16:22:02.750574 - 0.0 seconds: C:\Manual Programs\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Styles_CSV_Loader
2025-03-08T16:22:02.750574 - 0.0 seconds: C:\Manual Programs\ComfyUI\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
2025-03-08T16:22:02.750574 - 0.3 seconds: C:\Manual Programs\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager
2025-03-08T16:22:02.750574 -
2025-03-08T16:22:02.755614 - Starting server
2025-03-08T16:22:02.755614 - To see the GUI go to: http://127.0.0.1:8188
2025-03-08T16:22:02.790791 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-03-08T16:22:02.790791 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-03-08T16:22:02.817652 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-03-08T16:22:02.845719 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-03-08T16:22:02.865455 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-03-08T16:22:06.110870 - FETCH ComfyRegistry Data: 5/362025-03-08T16:22:06.110870 -
2025-03-08T16:22:09.550512 - FETCH ComfyRegistry Data: 10/362025-03-08T16:22:09.550512 -
2025-03-08T16:22:13.058429 - FETCH ComfyRegistry Data: 15/362025-03-08T16:22:13.058429 -
2025-03-08T16:22:16.708197 - FETCH ComfyRegistry Data: 20/362025-03-08T16:22:16.708197 -
2025-03-08T16:22:20.146225 - FETCH ComfyRegistry Data: 25/362025-03-08T16:22:20.146225 -
2025-03-08T16:22:23.653839 - FETCH ComfyRegistry Data: 30/362025-03-08T16:22:23.653839 -
2025-03-08T16:22:27.155511 - FETCH ComfyRegistry Data: 35/362025-03-08T16:22:27.155511 -
2025-03-08T16:22:28.365943 - FETCH ComfyRegistry Data [DONE]2025-03-08T16:22:28.365943 -
2025-03-08T16:22:28.404913 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-03-08T16:22:28.447668 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
2025-03-08T16:22:28.447668 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-03-08T16:22:28.447668 - 2025-03-08T16:22:28.522905 - [DONE]2025-03-08T16:22:28.522905 -
2025-03-08T16:22:28.560301 - [ComfyUI-Manager] All startup tasks have been completed.
2025-03-08T16:22:42.622005 - got prompt
2025-03-08T16:22:42.727335 - model weight dtype torch.float16, manual cast: torch.float32
2025-03-08T16:22:43.115713 - model_type STABLE_CASCADE
2025-03-08T16:22:43.920987 - VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
2025-03-08T16:22:43.983989 - Requested to load StableCascadeClipModel
2025-03-08T16:22:43.996067 - loaded completely 9.5367431640625e+25 1324.95849609375 True
2025-03-08T16:22:43.998514 - CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
2025-03-08T16:22:44.918549 - model weight dtype torch.bfloat16, manual cast: torch.float32
2025-03-08T16:22:45.595354 - model_type STABLE_CASCADE
2025-03-08T16:22:47.020934 - Missing VAE keys ['encoder.mean', 'encoder.std']
2025-03-08T16:22:47.028488 - VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
2025-03-08T16:22:47.091481 - Requested to load StableCascadeClipModel
2025-03-08T16:22:47.103508 - loaded completely 9.5367431640625e+25 1324.95849609375 True
2025-03-08T16:22:47.105511 - CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
2025-03-08T16:22:49.067756 - Requested to load StableCascade_C
2025-03-08T16:22:49.949540 - loaded partially 3702.7999198913576 3702.796905517578 0
2025-03-08T16:24:50.352184 -
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:00<00:00, 5.94s/it]2025-03-08T16:24:50.352184 -
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:00<00:00, 6.02s/it]2025-03-08T16:24:50.352184 -
2025-03-08T16:24:50.355690 - Requested to load StableCascade_B
2025-03-08T16:24:51.542402 - 0 models unloaded.
2025-03-08T16:24:51.575644 - loaded partially 64.0 63.9990234375 0
2025-03-08T16:25:47.951809 -
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:56<00:00, 5.74s/it]2025-03-08T16:25:47.951809 -
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:56<00:00, 5.64s/it]2025-03-08T16:25:47.951809 -
2025-03-08T16:25:47.952813 - Requested to load StageA
2025-03-08T16:25:48.015812 - 0 models unloaded.
2025-03-08T16:25:48.033473 - loaded completely 64.0 63.99962615966797 False
2025-03-08T16:25:48.036635 - !!! Exception during processing !!! Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
2025-03-08T16:25:48.039497 - Traceback (most recent call last):
File "C:\Manual Programs\ComfyUI\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Manual Programs\ComfyUI\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\nodes.py", line 287, in decode
images = vae.decode(samples["samples"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\comfy\sd.py", line 488, in decode
out = self.process_output(self.first_stage_model.decode(samples).to(self.output_device).float())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\ComfyUI\comfy\ldm\cascade\stage_a.py", line 220, in decode
x = self.up_blocks(x)
^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Manual Programs\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 549, in _conv_forward
return F.conv2d(
^^^^^^^^^
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
2025-03-08T16:25:48.040499 - Prompt executed in 185.42 seconds
```
### Other
This issue arises since v0.3.15.
More accurate: Since commit #41c30e92e7c468dde630714a27431299de438490 | closed | 2025-03-08T15:34:30Z | 2025-03-09T15:44:50Z | https://github.com/comfyanonymous/ComfyUI/issues/7135 | [
"Potential Bug"
] | HostedDinner | 4 |
PaddlePaddle/PaddleHub | nlp | 2,315 | How to fine-tune? | We welcome your suggestions for PaddleHub, and thank you very much for contributing to PaddleHub!
When leaving a suggestion, please also provide the following information:
- What new feature would you like to add?
- In what scenarios is this feature needed?
- Without this feature, can PaddleHub currently meet the need indirectly?
- Which parts of PaddleHub may need to change to add this feature.
- If possible, briefly describe your solution.
| open | 2023-12-12T06:05:55Z | 2025-01-02T11:00:49Z | https://github.com/PaddlePaddle/PaddleHub/issues/2315 | [] | lk2003atnet | 2 |
sherlock-project/sherlock | python | 2,384 | Add verify SSL cert option in requests to bypass WAF and Fix 8tracks false positive/negative | ### Description
This feature request proposes adding a new configuration option to the `data.json` file to support bypassing a WAF by disabling SSL certificate verification for specific targets. By setting `verifyCert` to `False`, the `requests` library will send requests with `verify=False`, ignoring SSL certificate verification. This can enable testing against real IPs or AWS endpoints, effectively bypassing services like Cloudflare WAF. I would like to make a **Pull Request** to implement this feature if the approach is acceptable.
### Implementation Details
1. **Configuration**:
- A new setting `verifyCert` has been added to the `data.json` file.
- When set to `False`, requests will include `verify=False` to ignore SSL certificate verification.
2. **Functionality**:
- This feature allows requests to target real IPs or AWS endpoints without SSL verification.
- It is especially useful for testing scenarios where SSL verification is unnecessary or problematic.
### Example
Using this method, I successfully resolved false positive/negative issues #2374 when working with the **8tracks.com**.
When testing with the latest release, `8tracks` always returns positive results. However, running with the latest code from the GitHub repository returns negative results due to WAF detection in the response. This appears to be caused by updated WAF fingerprints in the latest code.
<details>
<summary>screenshots</summary>
<img width="665" alt="003" src="https://github.com/user-attachments/assets/94452200-55e4-48f2-96eb-68d8b9827843" />
<img width="692" alt="002" src="https://github.com/user-attachments/assets/a5180524-a987-4ecb-8d19-83f8227292df" />
<img width="595" alt="001" src="https://github.com/user-attachments/assets/a98603d0-d4cf-4985-b3d7-a09b924eab24" />
</details>
Through research, I discovered that 8tracks has an AWS endpoint (https://ec2-107-20-194-173.compute-1.amazonaws.com) that can be queried directly to bypass the WAF restrictions and obtain correct results.
<details>
<summary>screenshots: successfully bypassed WAF</summary>
<img width="943" alt="004" src="https://github.com/user-attachments/assets/33d9419b-9133-4e91-8c9b-d5cc29fca616" />
<img width="943" alt="005" src="https://github.com/user-attachments/assets/119ba39b-579e-445b-9fb9-74d24e474981" />
</details>
I believe this feature and approach could also help resolve WAF issues with other sites facing similar problems.
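In code, the change is small: when building the keyword arguments for `requests`, read the proposed `verifyCert` key and only add `verify=False` when a site opts out, so every existing entry keeps full verification. The helper name and surrounding shape below are illustrative, not Sherlock's actual internals:

```python
def request_kwargs(site_entry: dict) -> dict:
    """Build requests kwargs from one data.json entry.

    'verifyCert' is the proposed key; omitting it (the default)
    preserves today's behavior of verifying SSL certificates.
    """
    kwargs = {}
    if site_entry.get("verifyCert", True) is False:
        kwargs["verify"] = False  # requests skips certificate checks
    return kwargs
```

The request call would then become something like `requests.get(url, **request_kwargs(entry))`; when `verify=False` is set, it is also common to silence `urllib3`'s `InsecureRequestWarning` to keep output clean.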
### Request
I have implemented this feature in my fork of the repository. If you find this feature valuable, please consider allowing me to submit a **Pull Request** for review and integration into the main project.
Thank you for your consideration.
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | open | 2024-12-27T08:37:29Z | 2025-02-17T05:51:33Z | https://github.com/sherlock-project/sherlock/issues/2384 | [
"enhancement"
] | JackJuly | 5 |
huggingface/transformers | machine-learning | 35,977 | adalomo and deepspeed zero3 offload error | ### System Info
python==3.11.11
transformers==4.48.1
torch==2.5.1
deepspeed==0.16.3
### Who can help?
@muellerzr @SunMarc
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Using AdaLomo with DeepSpeed ZeRO-3 parameter and optimizer offload produces the following error:
```
[rank7]: Traceback (most recent call last):
[rank7]: File "/data/shyu/LLM-train/train_sft.py", line 443, in <module>
[rank7]: main(script_args, training_args, model_args)
[rank7]: File "/data/shyu/LLM-train/train_sft.py", line 423, in main
[rank7]: trainer.train()
[rank7]: File "/data/shyu/.train/lib/python3.11/site-packages/transformers/trainer.py", line 2171, in train
[rank7]: return inner_training_loop(
[rank7]: ^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/data/shyu/.train/lib/python3.11/site-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank7]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/data/shyu/.train/lib/python3.11/site-packages/transformers/trainer.py", line 3712, in training_step
[rank7]: self.accelerator.backward(loss, **kwargs)
[rank7]: File "/data/shyu/.train/lib/python3.11/site-packages/accelerate/accelerator.py", line 2238, in backward
[rank7]: self.deepspeed_engine_wrapped.backward(loss, **kwargs)
[rank7]: File "/data/shyu/.train/lib/python3.11/site-packages/accelerate/utils/deepspeed.py", line 261, in backward
[rank7]: self.engine.backward(loss, **kwargs)
[rank7]: File "/data/shyu/.train/lib/python3.11/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn
[rank7]: ret_val = func(*args, **kwargs)
[rank7]: ^^^^^^^^^^^^^^^^^^^^^
[rank7]: TypeError: DeepSpeedEngine.backward() got an unexpected keyword argument 'learning_rate'
```
### Expected behavior
The training step should run without error.
"bug"
] | YooSungHyun | 5 |
huggingface/datasets | pytorch | 6,651 | Slice splits support for datasets.load_from_disk | ### Feature request
Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`.
### Motivation
Slice splits are convenient in a number of cases; adding support to `datasets.load_from_disk` would make working with local datasets easier and homogenize the APIs of `load_from_disk` and `load_dataset`.
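For reference, `load_dataset` already accepts slice specs such as `split="train[:100]"` or `split="train[25%:75%]"`; the request is to accept the same syntax in `load_from_disk`. A toy parser sketching the semantics (an illustration only, not the library's implementation):

```python
import re

def parse_split(spec: str, n_rows: int):
    """Parse 'name[start:stop]' where start/stop are absolute or percent."""
    m = re.fullmatch(r"(\w+)\[(-?\d+%?)?:(-?\d+%?)?\]", spec)
    name, start, stop = m.group(1), m.group(2), m.group(3)

    def to_index(tok, default):
        if tok is None:
            return default
        if tok.endswith("%"):
            return int(n_rows * int(tok[:-1]) / 100)
        return int(tok)

    return name, slice(to_index(start, 0), to_index(stop, n_rows))

print(parse_split("train[:100]", 1000))     # -> ('train', slice(0, 100, None))
print(parse_split("train[25%:75%]", 1000))  # -> ('train', slice(250, 750, None))
```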
### Your contribution
Sure, if the devs think the feature request is sensible. | open | 2024-02-09T08:00:21Z | 2024-06-14T14:42:46Z | https://github.com/huggingface/datasets/issues/6651 | [
"enhancement"
] | mhorlacher | 0 |
sunscrapers/djoser | rest-api | 815 | Djoser password email Typo | I am creating app login authentication system using Django and react, I was able to succesfully implement djoser login auth but when I implement the password/reset/confim, it send a wronf reset link in my email.
Here is my djoser configuration:
```python
DJOSER = {
    'LOGIN_FIELD': 'username',
    'USER_CREATE_PASSWORD_RETYPE': True,
    'USERNAME_CHANGED_EMAIL_CONFIRMATION': True,
    'PASSWORD_USERNAME_CHANGED_EMAIL_CONFIRMATION': True,
    'SEND_CONFIRMATION_EMAIL': True,
    'SET_USERNAME_RETYPE': True,
    'PASSWORD_RESET_CONFIRM_URL': 'password/reset/confirm/{uid}/{token}',
    'USERNAME_RESET_CONFIRM_URL': 'username/reset/confirm/{uid}/{token}',
    'ACTIVATION_URL': 'activate/{uid}/{token}',
    'SEND_ACTIVATION_EMAIL': True,
    'SERIALIZERS': {
        'user_create': 'memeapp.serializers.UserCreateSerializer',
        'user_delete': 'djoser.serializers.UserDeleteSerializer',
        'user': 'memeapp.serializers.UserCreateSerializer',
    },
}
```
URLS.py:
```python
    path('auth/', include('djoser.urls')),
    path('auth/', include('djoser.urls.jwt')),
    # path('search/', include('haystack.urls'), name='haystack_search'),
]
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
urlpatterns += [re_path(r'^.*', TemplateView.as_view(template_name="index.html"))]
```
And the typo in the reset link:
`http://http//127.0.0.1:8000//password/reset/confirm/MTg/c5g8bl-44f658078cb0513f30aece38e8e41577`
I have no clue why it is prefixed with two `http`s and extra slashes.
scrapy/scrapy | web-scraping | 5,818 | Allow LinkExtractor extract all tag or attrs | <!--
Thanks for taking an interest in Scrapy!
If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/.
The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.
Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md
The following is a suggested template to structure your pull request, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#writing-patches and https://doc.scrapy.org/en/latest/contributing.html#submitting-patches
-->
## Summary
I need to extract links to APKs, but some websites use custom attribute names that can't be fully enumerated.
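For context, `LinkExtractor` currently restricts extraction via its `tags` and `attrs` parameters; extracting from *any* attribute could look like this stdlib sketch (an illustration of the requested behavior, not Scrapy's implementation):

```python
from html.parser import HTMLParser

class AnyAttrApkExtractor(HTMLParser):
    """Collect attribute values that look like .apk links from any tag/attribute."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for _, value in attrs:
            if value and value.endswith(".apk"):
                self.links.append(value)

parser = AnyAttrApkExtractor()
parser.feed('<div data-apk-url="/files/app.apk"><a href="/page.html">x</a></div>')
print(parser.links)  # -> ['/files/app.apk']
```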
| closed | 2023-01-31T03:44:21Z | 2023-01-31T06:37:13Z | https://github.com/scrapy/scrapy/issues/5818 | [] | NiuBlibing | 1 |
mwaskom/seaborn | matplotlib | 3,026 | rectangular cells of heatmap | Hello.
I wonder whether there is a way to make the heatmap cells rectangular, i.e. wide enough to display the annotation numbers (after specifying annot=True).
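One workaround I'm aware of (an assumption about the goal, not a dedicated seaborn feature): control the cell aspect through the figure size, since `sns.heatmap(data, annot=True, ax=ax)` draws into whatever axes geometry it is given. A matplotlib-only sketch of the idea:

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen
import matplotlib.pyplot as plt
import numpy as np

data = np.arange(32).reshape(4, 8) / 31
# A wide, short figure stretches each cell into a rectangle, leaving
# horizontal room for the annotation text (what annot=True would draw).
fig, ax = plt.subplots(figsize=(10, 3))
ax.imshow(data, aspect="auto")
for (i, j), v in np.ndenumerate(data):
    ax.text(j, i, f"{v:.2f}", ha="center", va="center")
print(tuple(fig.get_size_inches()))  # -> (10.0, 3.0)
```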
Thanks in advance. | closed | 2022-09-15T03:06:05Z | 2022-09-15T10:31:21Z | https://github.com/mwaskom/seaborn/issues/3026 | [] | zjq1011 | 1 |
Asabeneh/30-Days-Of-Python | flask | 519 | Python | open | 2024-04-30T19:23:33Z | 2024-05-14T08:39:46Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/519 | [] | ColorMode | 3 | |
Gozargah/Marzban | api | 1,608 | Limit users to use specific clients | Add a feature to restrict users to specific clients such as Windscribe, v2rayN, Streisand, etc.
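One conceivable mechanism for this (purely a sketch of the idea, not Marzban's implementation): match the client name announced in the subscription request's `User-Agent` header against a per-user allow-list:

```python
ALLOWED_CLIENTS = ("v2rayN", "Streisand")  # hypothetical per-user allow-list

def client_allowed(user_agent: str) -> bool:
    # Accept the request only if a permitted client name appears in the UA.
    ua = user_agent.lower()
    return any(name.lower() in ua for name in ALLOWED_CLIENTS)

print(client_allowed("v2rayN/6.42"))     # -> True
print(client_allowed("Windscribe/2.0"))  # -> False
```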
| closed | 2025-01-16T07:02:05Z | 2025-01-16T07:44:18Z | https://github.com/Gozargah/Marzban/issues/1608 | [] | Neeqaque | 0 |
akfamily/akshare | data-science | 5,640 | AKShare API issue report | stock_zh_a_spot_em returns duplicates and omissions after update | After this morning's 2-17 update (version 1.15.95), stock_zh_a_spot_em went from returning 200 rows to 5000+.
However, each scrape contains nearly a hundred duplicated rows, and an equal number of rows are missing.
The duplicates are different on every run. Could this be a timing gap between paginated requests? | closed | 2025-02-17T02:18:08Z | 2025-02-17T07:26:17Z | https://github.com/akfamily/akshare/issues/5640 | [
"bug"
] | chopinic | 4 |
floodsung/Deep-Learning-Papers-Reading-Roadmap | deep-learning | 88 | Faster RCNN pdf broken link | open | 2018-03-02T16:15:29Z | 2018-03-02T16:15:29Z | https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap/issues/88 | [] | adrianstaniec | 0 | |
taverntesting/tavern | pytest | 387 | Would it be possible to set colors in console output? Let's say red for ERROR, blue for PASS | When I have a lot of test cases, it becomes difficult to find which test case failed.
Would the output be better with colored text?
Can we achieve this in the current version of tavern? | closed | 2019-07-15T08:37:42Z | 2019-08-09T21:43:04Z | https://github.com/taverntesting/tavern/issues/387 | [] | ankanch | 2 |
LAION-AI/Open-Assistant | python | 3,290 | Training with DPO : Direct Preference Optimization | A new way of doing RLHF-style training directly on preference data, without a separate reward model: https://arxiv.org/pdf/2305.18290.pdf
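For reference, a sketch of the DPO objective as stated in the linked paper (notation: $\pi_\theta$ is the policy, $\pi_{\mathrm{ref}}$ the reference model, $(x, y_w, y_l)$ a prompt with chosen and rejected completions, $\beta$ a temperature):

```math
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
```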
I wonder if it is possible to use it instead of the usual RM + PPO setting. This is an ML proposition. | closed | 2023-06-03T16:40:26Z | 2023-06-09T11:41:16Z | https://github.com/LAION-AI/Open-Assistant/issues/3290 | [
"ml",
"question"
] | Forbu | 1 |
ultralytics/ultralytics | python | 19,143 | Errors during Training | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Train
### Bug
Error During training
YOLO command for training:
```
!yolo task=detect mode=train model=yolo11x.pt data=/kaggle/working/3riders-2/data.yaml epochs=160 imgsz=640 plots=True device=0,1
```
Error:
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
1/160 9.21G 1.725 2.757 2.032 27 640: 1
Class Images Instances Box(P R mAP50 m
all 514 952 0.345 0.36 0.356 0.164
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/.config/Ultralytics/DDP/_temp_wk1f1hyc133774982102704.py", line 13, in <module>
[rank0]: results = trainer.train()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/trainer.py", line 207, in train
[rank0]: self._do_train(world_size)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/trainer.py", line 453, in _do_train
[rank0]: self.run_callbacks("on_fit_epoch_end")
[rank0]: File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/trainer.py", line 168, in run_callbacks
[rank0]: callback(self)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/ultralytics/utils/callbacks/raytune.py", line 17, in on_fit_epoch_end
[rank0]: if ray.train._internal.session._get_session(): # replacement for deprecated ray.tune.is_session_enabled()
[rank0]: AttributeError: module 'ray.train._internal.session' has no attribute '_get_session'. Did you mean: 'get_session'?
W0209 04:51:33.772000 229 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 232 closing signal SIGTERM
E0209 04:51:34.087000 229 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 231) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 923, in <module>
main()
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 919, in main
run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/root/.config/Ultralytics/DDP/_temp_wk1f1hyc133774982102704.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-02-09_04:51:33
host : 0e286ac1e7dd
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 231)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Traceback (most recent call last):
File "/usr/local/bin/yolo", line 8, in <module>
sys.exit(entrypoint())
File "/usr/local/lib/python3.10/dist-packages/ultralytics/cfg/__init__.py", line 986, in entrypoint
getattr(model, mode)(**overrides) # default args from model
File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/model.py", line 808, in train
self.trainer.train()
File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/trainer.py", line 202, in train
raise e
File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/trainer.py", line 200, in train
subprocess.run(cmd, check=True)
File "/usr/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'torch.distributed.run', '--nproc_per_node', '2', '--master_port', '36493', '/root/.config/Ultralytics/DDP/_temp_wk1f1hyc133774982102704.py']' returned non-zero exit status 1.
### Environment
Ultralytics 8.3.73 🚀 Python-3.10.12 torch-2.5.1+cu121 CUDA:0 (Tesla T4, 15095MiB)
Setup complete ✅ (4 CPUs, 31.4 GB RAM, 6135.3/8062.4 GB disk)
OS Linux-6.6.56+-x86_64-with-glibc2.35
Environment Colab
Python 3.10.12
Install pip
RAM 31.35 GB
Disk 6135.3/8062.4 GB
CPU Intel Xeon 2.00GHz
CPU count 4
GPU Tesla T4, 15095MiB
GPU count 2
CUDA 12.1
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.7.5>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.13.1>=1.4.1
torch ✅ 2.5.1+cu121>=1.8.0
torch ✅ 2.5.1+cu121!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1+cu121>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 5.9.5
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.12.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
```python
!pip install ultralytics roboflow tensorflow==2.17.0

from roboflow import Roboflow

rf = Roboflow(api_key="enter_api")
project = rf.workspace("kashish").project("3riders")
version = project.version(2)
dataset = version.download("yolov11")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-09T05:00:22Z | 2025-02-09T23:18:22Z | https://github.com/ultralytics/ultralytics/issues/19143 | [
"bug",
"fixed",
"detect"
] | ppraneth | 5 |
marcomusy/vedo | numpy | 566 | reconstruct a mesh by points or cells with vertices | Hi Marco,
I have some points with coordinates in a numpy array, such as
```python
points = np.array([[x1, y1, z1],
                   [x2, y2, z2],
                   [x3, y3, z3],
                   [x4, y4, z4],
                   ...])
```
and cells with their vertices, such as
```python
cells = np.array([[[x11, y11, z11], [x12, y12, z12], [x13, y13, z13]],
                  [[x21, y21, z21], [x22, y22, z22], [x23, y23, z23]],
                  [[x31, y31, z31], [x32, y32, z32], [x33, y33, z33]],
                  ...])
```
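Such coordinate-based cells can be converted into the indexed layout most mesh constructors expect: a shared vertex table plus faces of vertex indices. A numpy sketch; the final `Mesh([verts, faces])` call is the usual vedo pattern, shown here only as a comment:

```python
import numpy as np

# Cells given as per-face vertex coordinates (3 corners per triangle).
cells_xyz = np.array([
    [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
    [[1, 0, 0], [1, 1, 0], [0, 1, 0]],
], dtype=float)

# Deduplicate corner coordinates into a vertex table, then index faces into it.
flat = cells_xyz.reshape(-1, 3)
verts, inverse = np.unique(flat, axis=0, return_inverse=True)
faces = inverse.reshape(-1, 3)

print(len(verts))        # -> 4 unique vertices
print(faces.tolist())    # -> [[0, 2, 1], [2, 3, 1]]
# A vedo mesh could then be built as:  Mesh([verts, faces])
```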
Do you have any hints, or are there any examples, for reconstructing a mesh from these points or cells with vertices? I'd really appreciate your reply. | closed | 2021-12-23T01:47:10Z | 2021-12-25T16:18:52Z | https://github.com/marcomusy/vedo/issues/566 | [] | MianMianMeow | 2 |
huggingface/datasets | pytorch | 6,787 | TimeoutError in map | ### Describe the bug
```python
from datasets import Dataset
def worker(example):
while True:
continue
example['a'] = 100
return example
data = Dataset.from_list([{"a": 1}, {"a": 2}])
data = data.map(worker)
print(data[0])
```
I'm implementing a worker function whose runtime will depend on specific examples (e.g., while most examples take 0.01s in worker, several examples may take 50s).
Therefore, I would like to know how the current implementation will handle those subprocesses that require a long (e.g., >= 5min) or even infinite time.
I notice that the current implementation sets a timeout of 0.05 seconds
https://github.com/huggingface/datasets/blob/c3ddb1ef00334a6f973679a51e783905fbc9ef0b/src/datasets/utils/py_utils.py#L674
However, this example code still gets stuck.
### Steps to reproduce the bug
run the example above
### Expected behavior
I want to set a default handler for these timeout cases, instead of getting stuck
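Outside of `datasets`, the desired fallback can be sketched with a per-example process timeout (a toy helper, not the library's API; assumes fork-based multiprocessing as on Linux):

```python
import multiprocessing as mp

def worker(example):
    if example["a"] == 2:  # pretend some examples loop forever
        while True:
            continue
    example["a"] += 100
    return example

def map_with_timeout(fn, example, timeout, default):
    """Run fn(example) in a child process; fall back to `default` on timeout."""
    with mp.Pool(1) as pool:  # pool is terminated on exit, killing hung workers
        result = pool.apply_async(fn, (example,))
        try:
            return result.get(timeout)
        except mp.TimeoutError:
            return default

print(map_with_timeout(worker, {"a": 1}, 10.0, None))      # -> {'a': 101}
print(map_with_timeout(worker, {"a": 2}, 0.5, {"a": -1}))  # -> {'a': -1}
```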
### Environment info
main branch version | open | 2024-04-06T06:25:39Z | 2024-08-14T02:09:57Z | https://github.com/huggingface/datasets/issues/6787 | [] | Jiaxin-Wen | 7 |
BeanieODM/beanie | asyncio | 283 | Syntax for finding document not passing flake8 | Hi,
I am trying to search for a document using a boolean field.
Code:
```python
plan = await Plan.find_one(Plan.active == True)
```
But flake8 is showing `comparison to True should be 'if cond is True:' or 'if cond:'flake8(E712)`
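The warning is a false positive here because ODM fields overload `==` to build a query expression, which `is True` would bypass entirely; the standard escape hatch is a `# noqa: E712` comment. A self-contained illustration (toy `Field` class, not Beanie's actual API):

```python
class Field:
    """Mimics an ODM column: `==` builds a query expression, not a bool."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return {self.name: other}  # a query document instead of True/False

active = Field("active")

# `is` performs identity comparison and never calls __eq__, so it cannot
# build the query -- the E712 advice does not apply to ODM expressions.
query = active == True  # noqa: E712
print(query)            # -> {'active': True}
print(active is True)   # -> False (no query is built)
```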
If I change `==` to `is`, then I won't be able to retrieve the document that matches the criteria. | closed | 2022-06-09T23:00:23Z | 2023-01-21T02:29:35Z | https://github.com/BeanieODM/beanie/issues/283 | [
"Stale"
] | ponty33 | 3 |
saulpw/visidata | pandas | 2,102 | [playback] Playback failing says existing column is missing | **Small description**
Playback failing says existing column is missing
**Expected result**
No errors due to columns that exist
**Actual result with screenshot**
https://asciinema.org/a/4Kop1pUil70qoHUWMpoCfvvBW
Using sample_data/benchmark.csv
No error other than in the status log:
- `no "Date" Column on benchmark`
- `visidata/cmdlog.py:263:moveToReplayContext()`
**Steps to reproduce with sample data and a .vd**
```
vd -p /tmp/benchmark_cmdlog.vdj sample_data/benchmark.csv
```
cat /tmp/benchmark_cmdlog.vdj
```
#!vd -p
{"sheet": "benchmark", "col": "Date", "row": "", "longname": "type-date", "input": "", "keystrokes": "@", "comment": "set type of current column to date"}
{"sheet": "benchmark", "col": "Quantity", "row": "", "longname": "type-int", "input": "", "keystrokes": "#", "comment": "set type of current column to int"}
{"sheet": "benchmark", "col": "Unit", "row": "", "longname": "type-currency", "input": "", "keystrokes": "$", "comment": "set type of current column to currency"}
{"sheet": "benchmark", "col": "Paid", "row": "", "longname": "type-currency", "input": "", "keystrokes": "$", "comment": "set type of current column to currency"}
```
**Additional context**
Version of VisiData and Python: latest develop, Python 3.9.2.
I will note that I see the behavior on different hosts in slightly different ways. Sometimes I need to pipe the input on STDIN to see it, but in the example here it was not required. If you have difficulty reproducing, it could be related to some sort of timing.
| closed | 2023-11-03T20:10:04Z | 2024-06-29T07:06:33Z | https://github.com/saulpw/visidata/issues/2102 | [
"bug",
"fixed"
] | frosencrantz | 4 |
seleniumbase/SeleniumBase | pytest | 3,231 | UC Mode `driver.connect()` and `driver.reconnect()` sometimes connect to an invisible Chrome extension tab. | ### UC Mode `driver.connect()` and `driver.reconnect()` may connect to an invisible Chrome extension tab.
----
This is causing the issues seen since Chrome 130. (There was a workaround in place for Mac/Linux whereby the issue was avoided by creating a `user-data-dir` in advance, but that didn't help Windows users.) The invisible tab can be identified by calling `driver.window_handles`. | closed | 2024-10-27T23:10:11Z | 2024-10-29T14:23:14Z | https://github.com/seleniumbase/SeleniumBase/issues/3231 | [
"bug",
"UC Mode / CDP Mode",
"Fun"
] | mdmintz | 3 |
sunscrapers/djoser | rest-api | 15 | Is authorization token expiration implemented? | The readme says, "In other words, users have been granted access to a specific resource for a fixed time period." Where is token expiration implemented?
| closed | 2015-02-17T00:14:47Z | 2021-03-22T23:32:34Z | https://github.com/sunscrapers/djoser/issues/15 | [] | stugots | 2 |
pyg-team/pytorch_geometric | deep-learning | 8,844 | TGN example gives CUDA error | ### 🐛 Describe the bug
When trying to run [this example](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/tgn.py) in a fresh google colab environment I get the following error:

Here's the link to the colab: https://colab.research.google.com/drive/1UUNn2goZNApkebjKidmWBFlgrFD47NDh?usp=sharing
I also tried running the same experiment on a gpu cluster I have access and it gave the same error.
### Versions
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 22068 100 22068 0 0 65829 0 --:--:-- --:--:-- --:--:-- 65874
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.9
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.58+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 45 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.1.0+cu121
[pip3] torch-cluster==1.6.3+pt21cu121
[pip3] torch_geometric==2.4.0
[pip3] torch-scatter==2.1.2+pt21cu121
[pip3] torch-sparse==0.6.18+pt21cu121
[pip3] torch-spline-conv==1.2.2+pt21cu121
[pip3] torchaudio==2.1.0+cu121
[pip3] torchdata==0.7.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.16.0
[pip3] torchvision==0.16.0+cu121
[pip3] triton==2.1.0
[conda] Could not collect
``` | closed | 2024-01-31T13:54:29Z | 2024-01-31T19:00:12Z | https://github.com/pyg-team/pytorch_geometric/issues/8844 | [
"bug",
"example"
] | mb-v1 | 1 |
psf/black | python | 3,811 | support for auto-format on small devices |
good day,
i want to raise here the requirement for support formating with tab indentation on small devices with low memory / limited resources.
i am aware that there are already some issues regarding this (all closed for comments). e.g. https://github.com/psf/black/issues/47
i kindly ask you to have a closer look at the following scenario.
the following script has indented with tabs the size of 103 bytes, and formatted with black default 133 bytes (1 tab == 4 spaces).
```
a = 10
if a > 0:
for i in range(0, a):
if i % 2 == 0:
print("<", i)
else:
print(">", i)
```
the size difference for this really small example is already **23%** !!!
i have to admit that i havent done time measurements here to find out the impact on the required time for compilation into bytecode.
but i assume that reading less (byte-by-byte) would also increase speed here.
since i have found some issues dealing also somehow with tab-indentation in micropython - i m adding the involved peers here too and ask for their opinion / experience in this area. sorry in case this doesnt concern to you.
@pdg137, @davehylands, @peterhinch, @joewez, @dpgeorge, @mattytrentini, @stinos, @pfalcon, @jimmo, @tannewt, @aivarannamaa, @robert-hh, @tve, @dlech, @dhalbert
all the best , karl
| closed | 2023-07-24T16:53:34Z | 2023-07-26T07:28:25Z | https://github.com/psf/black/issues/3811 | [
"T: enhancement"
] | kr-g | 13 |
AutoViML/AutoViz | scikit-learn | 45 | Normed Histogram plot with negative y value? | Hi, the plots I have all have negative y values. How should I interpret this?
<img width="519" alt="Screen Shot 2021-09-08 at 10 04 39 PM" src="https://user-images.githubusercontent.com/14266357/132610129-cde3e4db-6b4d-4421-abbc-6dcb821fb6b0.png">
I think the following code generates the plots.
```python
sns.distplot(dft.loc[dft[dep]==target_var][each_conti], bins=binsize, ax=ax1,
             label=target_var, hist=False, kde=True,
             color=color2)
legend_flag += 1
```
Sanster/IOPaint | pytorch | 202 | Need to keep exif data | After lama-cleaner finishes processing the picture, the EXIF data is cleared as well. | closed | 2023-02-06T03:18:10Z | 2023-04-28T06:41:34Z | https://github.com/Sanster/IOPaint/issues/202 | [
"enhancement"
] | linguowei | 5 |
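A sketch of how the EXIF request above could be addressed with Pillow (assumption: the inpainting result is available as a PIL image; this is not lama-cleaner's current code):

```python
from PIL import Image

def save_with_exif(src_path, processed_img, dst_path):
    # Pillow exposes the raw EXIF payload of the opened file (if any) via
    # Image.info["exif"]; passing it back to save() keeps it on the output.
    exif_bytes = Image.open(src_path).info.get("exif")
    if exif_bytes:
        processed_img.save(dst_path, exif=exif_bytes)
    else:
        processed_img.save(dst_path)
```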
tiangolo/uvicorn-gunicorn-fastapi-docker | fastapi | 30 | FastAPI on Alpine - smaller stack size? | closed | 2020-02-17T07:17:26Z | 2020-02-17T07:17:34Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/30 | [] | matanos5 | 0 | |
sinaptik-ai/pandas-ai | data-science | 1,343 | Questions about the train function | Thanks for the great work.
I have several questions about the `train` function:
1. May I know what the vector DB does during training? Does it act as RAG?
2. After training, is there any way to save the trained artifacts, or is it required to call the train function for the prompt every time?
3. For the cache, it seems to generate a new one when I restart the kernel. Does it store the previous prompts and responses?
Thank you very much. | closed | 2024-08-30T02:21:56Z | 2025-02-11T16:00:11Z | https://github.com/sinaptik-ai/pandas-ai/issues/1343 | [] | mrgreen3325 | 3 |
jupyter-incubator/sparkmagic | jupyter | 281 | Added new Endpoint, Executed SQL code, Not using existing created Session | Hi Experts,
I had added an endpoint and created a session. It's created an spark application and returned "application_1475230083909_12246" details. If I execute spark code using "%%spark" magic, it's executing the code using the above created session.
However, if I use "%%sql" magic and try to execute some sql code, it's creating a new session with (default endpoint provided on config.json) and then executing the sql code. Please find the behavior snapshot below and let me know your comments how to fix it.

Requirement: I need to add a different endpoint, create a session and execute some spark sql.
Please guide.
Thanks!
| closed | 2016-10-04T03:05:35Z | 2016-10-05T17:27:44Z | https://github.com/jupyter-incubator/sparkmagic/issues/281 | [] | pkasinathan | 7 |
dgtlmoon/changedetection.io | web-scraping | 2,328 | [Bug] Docker container "restarting (132)" with image 0.45.18 or higher | **Describe the bug**
Image "ghcr.io/dgtlmoon/changedetection.io:0.45.18" or higher causes container status "Restarting (132)".
**Version**
Works fine until version 0.45.17
**To Reproduce**
Update from a previous version or a new clean install causes same output.
The hardware is an old Intel Celeron P4600 (2) @ 2GHz with 8GB of RAM.
The Host OS is Debian GNU/Linux 11 (bullseye) x86_64.
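For what it's worth, exit codes above 128 encode a fatal signal as `code - 128`; decoding 132 points at an illegal-instruction crash (an assumption about the cause, worth checking against the old CPU's instruction set):

```python
import signal

code = 132
sig = code - 128  # Docker reports 128 + signal number for signal deaths
print(sig, signal.Signals(sig).name)  # -> 4 SIGILL
```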
The logs are empty. | closed | 2024-04-22T16:52:20Z | 2024-05-15T08:54:44Z | https://github.com/dgtlmoon/changedetection.io/issues/2328 | [
"help wanted",
"triage",
"upstream-bug"
] | Nekrotza | 16 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,167 | Real-time? | Wondering if this can be used in real-time, like speech-to-text-to-speech. If not, is there any other solution for this? | open | 2023-02-24T09:37:53Z | 2023-02-24T09:37:53Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1167 | [] | Maple38 | 0 |
jeffknupp/sandman2 | rest-api | 53 | does update_version.sh update setup.py? | [setup.py has a plain text version string](https://github.com/jeffknupp/sandman2/blob/47ac844be564f1390fc5177557b1907b65e7f453/setup.py#L39) which I don't think will be updated by [update_version.sh](https://github.com/jeffknupp/sandman2/blob/master/update_version.sh). It is very different in sandman2 vs sandman(1).
In the update script, I wonder if the `-r` in this line somehow does it:
`python setup.py sdist bdist_wheel upload -r pypi`
but [the PyPI page](https://pypi.python.org/pypi/sandman2) lists the version from setup.py, so I'm not sure.
This update script looks very important and convenient! A line or two of documentation on what it does would help a lot. | closed | 2016-11-27T16:25:40Z | 2016-12-08T11:06:48Z | https://github.com/jeffknupp/sandman2/issues/53 | [] | swharden | 1 |
exaloop/codon | numpy | 627 | This ticket was submitted by mistake prematurely. Please ignore it. | This ticket was submitted by mistake prematurely.
Please ignore it. | closed | 2025-02-10T15:25:41Z | 2025-02-10T15:31:13Z | https://github.com/exaloop/codon/issues/627 | [] | Yaakov-Belch | 0 |
fastapi/fastapi | asyncio | 13,440 | Validations in `Annotated` like `AfterValidator` do not work in FastAPI 0.115.10 |
### Discussed in https://github.com/fastapi/fastapi/discussions/13431
<div type='discussions-op-text'>
<sup>Originally posted by **amacfie-tc** February 28, 2025</sup>
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Annotated
from pydantic import AfterValidator
from fastapi import FastAPI
app = FastAPI()
def validator(v):
raise ValueError()
Ints = Annotated[list[int], AfterValidator(validator)]
@app.post("/")
def post(ints: Ints) -> None:
return None
```
### Description
If we run the code and send a request to the endpoint, e.g.
```
echo -n '[2,3,4]' | http POST http://localhost:8000
```
on version 0.115.9, we get a 422 but on 0.115.10 we get 200. Is this a bug?
### Operating System
Linux
### Operating System Details
_No response_
### FastAPI Version
0.115.10
### Pydantic Version
2.9.2, 2.10.6
### Python Version
3.12
### Additional Context
_No response_</div>
---
@tiangolo writes:
This was introduced here: https://github.com/fastapi/fastapi/pull/13314
I'm currently investigating and a fix will be released shortly.
The problem is only when using `Annotated` directly in FastAPI parameters, when used inside of Pydantic models the validators work (raise) as expected:
```Python
from typing import Annotated
from fastapi import FastAPI
from pydantic import AfterValidator, BaseModel
app = FastAPI()
def validator(v):
raise ValueError()
Ints = Annotated[list[int], AfterValidator(validator)]
class Model(BaseModel):
ints: Ints
@app.post("/")
def post(ints: Model) -> None:
return None
``` | closed | 2025-03-01T17:19:44Z | 2025-03-01T22:40:52Z | https://github.com/fastapi/fastapi/issues/13440 | [
"bug"
] | tiangolo | 2 |
PaddlePaddle/PaddleNLP | nlp | 9,868 | [Question]: Error loading layoutlmv2-base-uncased: Missing model_state.pdparams file | ### Please ask your question
I'm trying to load the layoutlmv2-base-uncased model using PaddleNLP with the following code:
```python
from paddlenlp.transformers import AutoModel

model = AutoModel.from_pretrained("layoutlmv2-base-uncased")
```
However, I get the following error:
```
OSError: Can't load the model for 'layoutlmv2-base-uncased'. If you were trying to load it from 'https://paddlenlp.bj.bcebos.com'
```
I checked the URL specified in the source code:
https://bj.bcebos.com/paddlenlp/models/transformers/layoutlmv2/layoutlmv2-base-uncased/model_state.pdparams
But when I access it directly, I receive this response:
```json
{"code":"NoSuchKey","message":"The specified key does not exist.","requestId":"09e81b26-52b8-4863-acc1-925b5bea24c8"}
```
Could someone please advise on:
Whether this is a known issue,
How to correctly download the model,
Or if there's an alternative URL/resource for the layoutlmv2-base-uncased model?
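For anyone triaging this, a quick stdlib probe (no PaddleNLP needed; the helper name is mine) confirms whether a given weights URL is actually being served, before suspecting the local setup:

```python
from urllib.error import URLError
from urllib.request import Request, urlopen


def url_exists(url: str, timeout: float = 10.0) -> bool:
    """Return True if the server answers a HEAD request with a 2xx status."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except URLError:  # HTTPError (4xx/5xx) is a subclass of URLError
        return False


# url_exists("https://bj.bcebos.com/paddlenlp/models/transformers/"
#            "layoutlmv2/layoutlmv2-base-uncased/model_state.pdparams")
# returns False at the time of the report, matching the NoSuchKey response above.
```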
| open | 2025-02-13T15:14:39Z | 2025-03-17T08:28:39Z | https://github.com/PaddlePaddle/PaddleNLP/issues/9868 | [
"question"
] | swaranM | 2 |
collerek/ormar | sqlalchemy | 817 | `DateTime` field ignores timezone parameter | The datetime field ignores the `timezone` parameter in the `__new__` method.
Steps to reproduce the behavior:
Let's say we have this model:
```python
from datetime import datetime

import ormar
import sqlalchemy
from databases import Database
from settings import settings

database = Database("postgresql+asyncpg://user:pass@localhost:5433/database")
metadata = sqlalchemy.MetaData()

class Issue(ormar.Model):
    due_date: datetime = ormar.DateTime()

    class Meta:
        database = database
        metadata = metadata
```
After applying the generated migration with Alembic, let's add a timezone:
```python
class Issue(ormar.Model):
    due_date: datetime = ormar.DateTime(timezone=True)
```
The resulting autogenerated migration file doesn't contain any field alteration:

Versions:
- Postgresql 14
- Python 3.10
- `ormar` 0.11.2
- `pydantic` 1.9.1
- `alembic` 1.8.1
| open | 2022-09-07T10:17:13Z | 2023-03-24T13:03:39Z | https://github.com/collerek/ormar/issues/817 | [
"bug"
] | nrd-bam | 1 |
CTFd/CTFd | flask | 2,673 | Import Issue | <!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**: Ubuntu 24.04
- CTFd Version/Commit: 3.6.4
- Operating System: Ubuntu
- Web Browser and Version: Firefox 125.0.2
**What happened?**
I ran a CTF event and later created a backup by exporting the CTF. Now, while importing it back, it states: 'Import Error: Import Failure: Importing not currently supported for SQLite databases. See Github issue #1988.'
**What did you expect to happen?**
I expected the import to work as it did previously. The last backup zip file is dated 30th January 2024.
**How to reproduce your issue**
1. Download CTFd latest and run docker
2. Create temporary account for registering
3. Go to admin > config > Backup > Select .zip file > Import
4. Error
**Any associated stack traces or error logs**
I don't know how to resolve this, so any help would be greatly appreciated.
Attaching a screenshot:

| closed | 2024-12-01T13:42:14Z | 2024-12-31T15:43:25Z | https://github.com/CTFd/CTFd/issues/2673 | [] | chinu8005 | 1 |
serengil/deepface | machine-learning | 1,425 | [BUG]: Two completely different pictures are predicted as True | ### Before You Report a Bug, Please Confirm You Have Done The Following...
- [X] I have updated to the latest version of the packages.
- [X] I have searched for both [existing issues](https://github.com/serengil/deepface/issues) and [closed issues](https://github.com/serengil/deepface/issues?q=is%3Aissue+is%3Aclosed) and found none that matched my issue.
### DeepFace's version
0.0.91
### Python version
3.8.20
### Operating System
Ubuntu 22.04
### Dependencies
deepface== 0.0.91
mtcnn== 0.1.1
flask== 3.0.3
Flask-CORS==4.0.1
gevent==24.2.1
### Reproducible example
```Python
import cv2
from deepface import DeepFace

img1_path = cv2.imread("face_1.jpg")
img2_path = cv2.imread("face_112.jpg")
result = DeepFace.verify(
img1_path,
img2_path,
model_name="Facenet512",
threshold=0.39,
detector_backend="mtcnn",
normalization="Facenet2018",
expand_percentage=0,
)
print(result)
```
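For context when debugging thresholds: with the `'cosine'` metric, the result is `verified = (distance <= threshold)`. The distance itself can be reproduced from two embedding vectors with stdlib math alone (a sketch with toy vectors, not real face embeddings):

```python
import math


def cosine_distance(a, b):
    """1 - cosine similarity, the quantity compared against the threshold."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)


identical = cosine_distance([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])  # ~0.0
orthogonal = cosine_distance([1.0, 0.0], [0.0, 1.0])           # 1.0
print(identical, orthogonal)
# In this report: distance 0.292 <= threshold 0.39, hence verified=True.
```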
### Relevant Log Output


### Expected Result
{'verified': True, 'distance': 0.2924738286123906, 'threshold': 0.39, 'model': 'Facenet512', 'detector_backend': 'mtcnn', 'similarity_metric': 'cosine', 'facial_areas': {'img1': {'x': 14, 'y': 23, 'w': 72, 'h': 94, 'left_eye': (69, 59), 'right_eye': (36, 59)}, 'img2': {'x': 21, 'y': 27, 'w': 94, 'h': 123, 'left_eye': (90, 77), 'right_eye': (49, 77)}}, 'time': 1.19}
### What happened instead?
Two completely different pictures are predicted as True
### Additional Info
Thanks | closed | 2025-01-14T07:21:26Z | 2025-01-14T07:27:18Z | https://github.com/serengil/deepface/issues/1425 | [
"bug",
"invalid"
] | jhluaa | 1 |
jupyter/nbgrader | jupyter | 1,034 | Document how to access the formgrader when it is a JupyterHub service | ### Operating system
FreeBSD 11.2 64 bit
### `nbgrader --version`
5.4
### `jupyterhub --version` (if used with JupyterHub)
0.9.4
### `jupyter notebook --version`
5.7
### database
postgresql 10.5
Hi,
I have created a course account and two other user accounts.
JupyterHub has been set up as a managed shared notebook to the course account
with the two users as part of the group.
An assignment was set up in the course account. It has been released within the course account.
When I log into one of the two user accounts and open the formgrader tab:
1) The created assignment shows up (manage assignments), but when I click on the assignment, I get a "File not found 404" error.
2) When I click on manual grading, the assignment shows up, and clicking on the assignment shows the notebook file.
3) When I click on manage students, I can see all the students and can add and edit them.
If I try to generate or release the assignment, I get a permission error, but I thought the
idea of the shared notebook was to eliminate permission errors.
Everything works okay in the course account, just not in the other accounts set up for sharing.
Any help would be great,
cheers,
john
Here are some logfiles:
Trying to generate the assignment:
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/nbgrader/utils.py", line 371, in capture_log
    app.start()
  File "/usr/local/lib/python3.6/site-packages/nbgrader/converters/assign.py", line 146, in start
    super(Assign, self).start()
  File "/usr/local/lib/python3.6/site-packages/nbgrader/converters/base.py", line 72, in start
    self.convert_notebooks()
  File "/usr/local/lib/python3.6/site-packages/nbgrader/converters/base.py", line 332, in convert_notebooks
    _handle_failure(gd)
  File "/usr/local/lib/python3.6/site-packages/nbgrader/converters/base.py", line 260, in _handle_failure
    rmtree(dest)
  File "/usr/local/lib/python3.6/site-packages/nbgrader/utils.py", line 245, in rmtree
    shutil.rmtree(path)
  File "/usr/local/lib/python3.6/shutil.py", line 480, in rmtree
    _rmtree_safe_fd(fd, path, onerror)
  File "/usr/local/lib/python3.6/shutil.py", line 438, in _rmtree_safe_fd
    onerror(os.unlink, fullname, sys.exc_info())
  File "/usr/local/lib/python3.6/shutil.py", line 436, in _rmtree_safe_fd
    os.unlink(name, dir_fd=topfd)
PermissionError: [Errno 13] Permission denied: 'a1_prob.ipynb'
```
Permissions:
```
$ ls -laR /srv/nbgrader/exchange
total 24
drwxrwxrwx 3 root wheel 512 Oct 26 15:08 .
drwxr-xr-x 3 root wheel 512 Oct 25 16:05 ..
drwxr-xr-x 4 course wheel 512 Oct 26 15:08 test_course

/srv/nbgrader/exchange/test_course:
total 32
drwxr-xr-x 4 course wheel 512 Oct 26 15:08 .
drwxrwxrwx 3 root wheel 512 Oct 26 15:08 ..
drwxr-xr-x 2 course wheel 512 Oct 26 15:08 inbound
drwxr-xr-x 3 course wheel 512 Oct 26 15:11 outbound

/srv/nbgrader/exchange/test_course/inbound:
total 16
drwxr-xr-x 2 course wheel 512 Oct 26 15:08 .
drwxr-xr-x 4 course wheel 512 Oct 26 15:08 ..

/srv/nbgrader/exchange/test_course/outbound:
total 24
drwxr-xr-x 3 course wheel 512 Oct 26 15:11 .
drwxr-xr-x 4 course wheel 512 Oct 26 15:08 ..
drwxr-xr-x 2 course wheel 512 Oct 26 15:08 assg1

/srv/nbgrader/exchange/test_course/outbound/assg1:
total 24
drwxr-xr-x 2 course wheel 512 Oct 26 15:08 .
drwxr-xr-x 3 course wheel 512 Oct 26 15:11 ..
-rw-r--r-- 1 course wheel 2458 Oct 26 15:08 a1_prob.ipynb
```
Permissions in the course account:
```
/home/course/test_course:
total 48
drwxr-xr-x 4 course course 512 Oct 26 14:39 .
drwxr-xr-x 6 course course 512 Oct 26 13:41 ..
-rw-r--r-- 1 course course 5614 Oct 26 15:27 .nbgrader.log
drwxr-xr-x 3 course course 512 Oct 26 15:08 release
drwxr-xr-x 3 course course 512 Oct 26 13:39 source

/home/course/test_course/release:
total 24
drwxr-xr-x 3 course course 512 Oct 26 15:08 .
drwxr-xr-x 4 course course 512 Oct 26 14:39 ..
drwxr-xr-x 2 course course 512 Oct 26 15:08 assg1

/home/course/test_course/release/assg1:
total 24
drwxr-xr-x 2 course course 512 Oct 26 15:08 .
drwxr-xr-x 3 course course 512 Oct 26 15:08 ..
-rw-r--r-- 1 course course 2458 Oct 26 15:08 a1_prob.ipynb

/home/course/test_course/source:
total 24
drwxr-xr-x 3 course course 512 Oct 26 13:39 .
drwxr-xr-x 4 course course 512 Oct 26 14:39 ..
drwxr-xr-x 3 course course 512 Oct 26 14:38 assg1

/home/course/test_course/source/assg1:
total 32
drwxr-xr-x 3 course course 512 Oct 26 14:38 .
drwxr-xr-x 3 course course 512 Oct 26 13:39 ..
drwxr-xr-x 2 course course 512 Oct 26 14:38 .ipynb_checkpoints
-rw-r--r-- 1 course course 2190 Oct 26 14:37 a1_prob.ipynb

/home/course/test_course/source/assg1/.ipynb_checkpoints:
total 24
drwxr-xr-x 2 course course 512 Oct 26 14:38 .
drwxr-xr-x 3 course course 512 Oct 26 14:38 ..
-rw-r--r-- 1 course course 2190 Oct 26 14:37 a1_prob-checkpoint.ipynb
```
`jupyterhub_config.py`:
```python
service_name = 'shared_notebook'
service_port = 9999

c.JupyterHub.services = [
    {
        'name': 'cull_idle',
        'admin': True,
        'command': 'python3 ./cull_idle_servers.py --timeout=3600'.split()
    },
    {
        'name': service_name,
        'url': 'http://127.0.0.1:{}'.format(service_port),
        'command': ['jupyterhub-singleuser',
                    '--group={}'.format(grp_name),
                    '--debug'
                    ],
        'user': 'course',
        'cwd': '/usr/home/course/test_course'
    }
]
```
nbgrader config file:
```python
c.CourseDirectory.root = '/usr/home/course/test_course'
c.Exchange.course_id = 'test_course'
```
| closed | 2018-10-26T20:13:35Z | 2019-06-11T02:00:46Z | https://github.com/jupyter/nbgrader/issues/1034 | [
"documentation"
] | jnak12 | 5 |
sunscrapers/djoser | rest-api | 579 | CREATE_SESSION_ON_LOGIN exists but is not documented | I am currently looking into ways to utilize normal cookie based authentication for usage in our SPA, and while looking at the source code, i realized that djoser has a "CREATE_SESSION_ON_LOGIN" setting that actually essentially does this, however this is not properly documented as far as i could find.
Maybe you want to document it somewhere :)
It's used here, among other places:
https://github.com/sunscrapers/djoser/blob/b648b07dc2da7dab4c00c1c39af3d6ec53f58eb6/djoser/utils.py#L18 | open | 2021-01-18T09:50:51Z | 2022-01-19T00:36:57Z | https://github.com/sunscrapers/djoser/issues/579 | [] | C0DK | 3 |
serengil/deepface | deep-learning | 954 | About yolov8n-face.pt | I want to use it on an Ascend 310B1, so I converted it to yolov8n-face.om. For a single input picture, the model output shape is (1, 20, 8400). I think 8400 is the number of candidate objects and 20 is the per-candidate parameters, like x1, y1, x2, y2, score, and so on. However, I don't know how to decode the output without knowing the training config. I hope you can give some tips.

| closed | 2024-01-15T10:41:52Z | 2024-01-15T10:45:24Z | https://github.com/serengil/deepface/issues/954 | [
"invalid"
] | zdjk104tan | 2 |
coqui-ai/TTS | python | 2,862 | [Bug] KeyError: 'avg_loss_1' crash when training model | ### Describe the bug
Occurs when running the script below.
### To Reproduce
Run the following script. I've tried it w/ the `LJSpeech-1.1` dataset and the same error occurs, so you can sub in the desired dataset.
```python
import os
from TTS.tts.configs.shared_configs import BaseAudioConfig, BaseDatasetConfig
from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.utils.audio import AudioProcessor
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from trainer import Trainer, TrainerArgs
def main():
output_path = os.path.join(os.getcwd(), 'runtime')
if not os.path.exists(output_path):
os.makedirs(output_path)
dataset = 'FerdinandTTS'
dataset_config = BaseDatasetConfig(
formatter='ljspeech',
meta_file_train='metadata.csv',
path=os.path.join(os.getcwd(), 'datasets', dataset),
dataset_name=dataset,
language='en-us',
phonemizer='espeak',
)
audio_config = BaseAudioConfig(
sample_rate=48000
)
config = GlowTTSConfig(
batch_size=64, #32, # https://github.com/coqui-ai/TTS/issues/1447#issuecomment-1083100386
eval_batch_size=16,
num_loader_workers=4,
num_eval_loader_workers=4,
run_eval=False, #True,
test_delay_epochs=10, #-1,
epochs=100,
text_cleaner="phoneme_cleaners",
use_phonemes=True,
phoneme_language="en-us",
phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
print_step=25,
print_eval=False,
mixed_precision=False, # https://github.com/coqui-ai/TTS/issues/918
output_path=output_path,
datasets=[dataset_config],
save_step=25000, #1000,
# CUSTOM
audio=audio_config,
phonemizer='espeak',
#eval_split_size=10,
target_loss='G_avg_loss',
#max_audio_len=1024*3,
#max_text_len=1000,
eval_split_max_size=0.01,
eval_split_size=0.006666666666666667
)
ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)
train_samples, eval_samples = load_tts_samples(
dataset_config,
eval_split=True,
eval_split_max_size=config.eval_split_max_size,
eval_split_size=config.eval_split_size
)
model = GlowTTS(config, ap, tokenizer, speaker_manager=None)
trainer_args = TrainerArgs(
gpu=0, # Prevents having to set CUDA_VISIBLE_DEVICES=0
use_accelerate=True
)
trainer = Trainer(
trainer_args, config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples
)
trainer.fit()
if __name__ == '__main__':
main()
```
### Expected behavior
To produce a functional model.
### Logs
```shell
(cudatest) PS {project path}> python .\test_train.py
> Setting up Audio Processor...
| > sample_rate:48000
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:True
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:None
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:20.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:45
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:10
| > hop_length:256
| > win_length:1024
| > Found 150 files in {project path}\datasets\FerdinandTTS
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
> Training Environment:
| > Backend: Accelerate
| > Mixed precision: False
| > Precision: float32
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 12
| > Num. of Torch Threads: 6
| > Torch seed: 54321
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: False
> Start Tensorboard: tensorboard --logdir={project path}\runtime\run-August-11-2023_08+00PM-0000000
> Model has 28610257 parameters
> EPOCH: 0/100
--> {project path}\runtime\run-August-11-2023_08+00PM-0000000
> DataLoader initialization
| > Tokenizer:
| > add_blank: False
| > use_eos_bos: False
| > use_phonemes: True
| > phonemizer:
| > phoneme language: en-us
| > phoneme backend: espeak
| > Number of instances : 149
| > Preprocessing samples
| > Max text length: 140
| > Min text length: 4
| > Avg text length: 71.6510067114094
|
| > Max audio length: 422066.0
| > Min audio length: 18686.0
| > Avg audio length: 210688.61744966442
| > Num. instances discarded samples: 0
| > Batch group size: 0.
> TRAINING (2023-08-11 20:00:38)
--> TIME: 2023-08-11 20:01:07 -- STEP: 0/3 -- GLOBAL_STEP: 0
| > current_lr: 2.5e-07
| > step_time: 2.9498 (2.9498186111450195)
| > loader_time: 25.6026 (25.602627754211426)
[!] `train_step()` retuned `None` outputs. Skipping training step.
[!] `train_step()` retuned `None` outputs. Skipping training step.
[!] `train_step()` retuned `None` outputs. Skipping training step.
--> EVAL PERFORMANCE
| > avg_loader_time: 29.987560629844666 (+0)
| > avg_step_time: 1.201514482498169 (+0)
Traceback (most recent call last):
File "{python env}\cudatest\Lib\site-packages\trainer\trainer.py", line 1806, in fit
self._fit()
File "{python env}\cudatest\Lib\site-packages\trainer\trainer.py", line 1769, in _fit
self.save_best_model()
File "{python env}\cudatest\Lib\site-packages\trainer\utils\distributed.py", line 35, in wrapped_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "{python env}\cudatest\Lib\site-packages\trainer\trainer.py", line 1886, in save_best_model
target_loss_dict = self._pick_target_avg_loss(self.keep_avg_eval if self.keep_avg_eval else self.keep_avg_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "{python env}\cudatest\Lib\site-packages\trainer\trainer.py", line 2103, in _pick_target_avg_loss
target_loss = keep_avg_target["avg_loss_1"]
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "{python env}\cudatest\Lib\site-packages\trainer\generic_utils.py", line 119, in __getitem__
return self.avg_values[key]
~~~~~~~~~~~~~~~^^^^^
KeyError: 'avg_loss_1'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "{project path}\test_train.py", line 80, in <module>
main()
File "{project path}\test_train.py", line 76, in main
trainer.fit()
File "{python env}\cudatest\Lib\site-packages\trainer\trainer.py", line 1833, in fit
remove_experiment_folder(self.output_path)
File "{python env}\cudatest\Lib\site-packages\trainer\generic_utils.py", line 77, in remove_experiment_folder
fs.rm(experiment_path, recursive=True)
File "{python env}\cudatest\Lib\site-packages\fsspec\implementations\local.py", line 172, in rm
shutil.rmtree(p)
File "{python env}\cudatest\Lib\shutil.py", line 759, in rmtree
return _rmtree_unsafe(path, onerror)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "{python env}\cudatest\Lib\shutil.py", line 622, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "{python env}\cudatest\Lib\shutil.py", line 620, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: '{project path}/runtime/run-August-11-2023_08+00PM-0000000\\trainer_0_log.txt'
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce GTX 1050 Ti"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1",
"TTS": "0.16.2",
"numpy": "1.24.3"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "Intel64 Family 6 Model 158 Stepping 10, GenuineIntel",
"python": "3.11.4",
"version": "10.0.17763"
}
}
```
### Additional context
N/A | closed | 2023-08-12T00:16:10Z | 2023-08-15T16:42:38Z | https://github.com/coqui-ai/TTS/issues/2862 | [
"bug"
] | T145 | 2 |
elliotgao2/gain | asyncio | 49 | demo error | I copied your basic demo code and ran it, and got this error:
```
Traceback (most recent call last):
File "b.py", line 23, in <module>
MySpider.run()
File "/home/qyy/anaconda3/envs/sanic/lib/python3.6/site-packages/gain/spider.py", line 52, in run
loop.run_until_complete(cls.init_parse(semaphore))
File "uvloop/loop.pyx", line 1451, in uvloop.loop.Loop.run_until_complete
File "/home/qyy/anaconda3/envs/sanic/lib/python3.6/site-packages/gain/spider.py", line 71, in init_parse
with aiohttp.ClientSession() as session:
File "/home/qyy/anaconda3/envs/sanic/lib/python3.6/site-packages/aiohttp/client.py", line 956, in __enter__
raise TypeError("Use async with instead")
TypeError: Use async with instead
[2019:04:08 15:05:18] Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7fc4d2eb8e48>
sys:1: RuntimeWarning: coroutine 'Parser.task' was never awaited
```
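The first traceback comes from gain's `Spider.init_parse` doing `with aiohttp.ClientSession() as session:`; aiohttp 3+ only allows the async form. The guard is easy to reproduce with a stdlib stand-in (no aiohttp required), which also shows the shape of the fix, `async with` inside a coroutine:

```python
import asyncio


class Session:
    """Stand-in mimicking aiohttp.ClientSession's sync-with guard."""

    def __enter__(self):
        raise TypeError("Use async with instead")

    def __exit__(self, *exc):
        return False

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False


async def init_parse():
    # What gain/spider.py needs instead of the bare `with` statement:
    async with Session() as session:
        return type(session).__name__


print(asyncio.run(init_parse()))  # Session
```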
and ..
```
from gain import Css, Item, Parser, XPathParser, Spider
ImportError: cannot import name 'XPathParser'
```
Thanks. | open | 2019-04-08T07:09:28Z | 2019-04-08T07:13:01Z | https://github.com/elliotgao2/gain/issues/49 | [] | AaronConlon | 0 |
arnaudmiribel/streamlit-extras | streamlit | 171 | Error running app | Received error: "Error running app. If this keeps happening, please [contact support](https://github.com/arnaudmiribel/streamlit-extras/issues/new). "
This happened when trying to visit [Streamlit Extras](https://extras.streamlit.app/App%20logo).
zappa/Zappa | flask | 1,345 | Importing task decorator from zappa.asynchronous takes ~2.5s | As above, simply importing the task decorator adds >2 seconds(!!!) to application start time.
## Context
This is not a bug per se, but given the (ideally) snappy nature of lambda, a simple import from Zappa should not be adding this much overhead.
## Expected Behavior
Quick
## Actual Behavior
Slow
## Possible Fix
Not a clue, however the reproduction below seems to indicate sockets are to blame.
## Steps to Reproduce
1. Install zappa in fresh venv
2. Create `import_zappa.py`, containing `from zappa.asynchronous import task`
3. `python -m cProfile -s tottime import_zappa.py > output.log`
Truncated output:
```
284926 function calls (277992 primitive calls) in 2.489 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
2 2.018 1.009 2.018 1.009 {method 'connect' of '_socket.socket' objects}
306 0.109 0.000 0.109 0.000 {built-in method io.open_code}
2059 0.066 0.000 0.066 0.000 {built-in method nt.stat}
306 0.025 0.000 0.025 0.000 {built-in method marshal.loads}
16/14 0.018 0.001 0.019 0.001 {built-in method _imp.create_dynamic}
```
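A lighter-weight way to chase this (pure stdlib; the helper name is mine) is to time imports module-by-module, e.g. `zappa.asynchronous` versus its own dependencies, to see where the ~2.5 s lands:

```python
import importlib
import time


def time_import(module_name: str) -> float:
    """Return the wall-clock seconds spent importing a module."""
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start


# e.g. time_import("zappa.asynchronous") on the affected box reports the ~2.5 s,
# while cheap stdlib modules come back near-instantly:
print(f"json: {time_import('json') * 1000:.3f} ms")
```

Note that modules are cached in `sys.modules`, so only the first call for a given module measures the real import cost.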
## Your Environment
* Zappa version used: 0.56.1
* Operating System and Python version: Windows 11, Python 3.9.13
* The output of `pip freeze`:
```
argcomplete==3.5.0
boto3==1.34.159
botocore==1.34.159
certifi==2024.7.4
cfn-flip==1.3.0
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
durationpy==0.7
hjson==3.1.0
idna==3.7
jmespath==1.0.1
kappa==0.6.0
MarkupSafe==2.1.5
placebo==0.9.0
python-dateutil==2.9.0.post0
python-slugify==8.0.4
PyYAML==6.0.2
requests==2.32.3
s3transfer==0.10.2
six==1.16.0
text-unidecode==1.3
toml==0.10.2
tqdm==4.66.5
troposphere==4.8.1
urllib3==1.26.19
Werkzeug==3.0.3
zappa==0.56.1
```
| closed | 2024-08-13T06:30:53Z | 2024-11-23T04:21:38Z | https://github.com/zappa/Zappa/issues/1345 | [
"no-activity",
"auto-closed"
] | texonidas | 3 |
widgetti/solara | flask | 468 | disable scrolling in main content | How can I disable scrolling of the main content area and force the content to fill 100% of the view window? I tried adding `style='height: 100vh, overflow: hidden'` to the `solara.Column`, but this didn't work.
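One thing worth double-checking (an observation, not a verified fix): that `style` string separates the two CSS declarations with a comma, but CSS requires semicolons between declarations, so browsers silently drop the invalid block. The valid equivalents would be:

```css
/* value for the style= argument: semicolons, not commas */
height: 100vh; overflow: hidden;

/* or, for the page as a whole, a rule passed through solara.Style */
main.v-content.solara-content-main {
    height: 100vh !important;
    overflow: hidden !important;
}
```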
```
import solara
import pathlib
def get_config():
""" create custom jdaviz configuration """
from jdaviz.core.config import get_configuration
# get the default specviz config
config = get_configuration('specviz')
# set custom settings for embedding
config['settings']['viewer_spec'] = config['settings'].get('configuration', 'default')
config['settings']['server_is_remote'] = True
config['toolbar'].remove('g-data-tools') if config['toolbar'].count('g-data-tools') else None
return config
def get_nb():
return 'this is a test'
all_files = ['a', 'b', 'c']
files = solara.reactive([all_files[0]])
@solara.component
def Page():
with solara.Column(style='height: 100vh, overflow: hidden'):
solara.Title("Spectral Display")
css = """
main.v-content.solara-content-main {
padding: 0px !important;
}
.jdaviz {
height: 50vh !important
}
"""
solara.Style(css)
import jdaviz
from jdaviz import Specviz, Application
import ipysplitpanes
import ipygoldenlayout
import ipyvue
import os
from jdaviz.app import custom_components
ipysplitpanes.SplitPanes()
ipygoldenlayout.GoldenLayout()
for name, path in custom_components.items():
ipyvue.register_component_from_file(None, name, os.path.join(os.path.dirname(jdaviz.__file__), path))
ipyvue.register_component_from_file('g-viewer-tab', "container.vue", jdaviz.__file__)
def load_data():
filename = 'path/to/a/test/file.fits'
for f in files.value:
label = f'{pathlib.Path(f).stem} 0'
label = 'spec-017057-59631-27021598108289694 0'
if label not in spec.app.data_collection.labels:
spec.load_data(filename, format='SDSS-V spec multi')
with solara.Columns([1, 0, 1], style='margin: 0 5px'):
with solara.Tooltip("Select the spectral files to load"):
solara.SelectMultiple("Data Files", files, all_files, dense=True)
solara.Button("Load data", color='primary', on_click=load_data)
with solara.FileDownload(get_nb, "solara-lazy-download.txt"):
with solara.Tooltip("Download a Jdaviz Jupyter notebook for these data"):
solara.Button(label='Download Jupyter notebook', color='primary')
with solara.Column(style='height: 100vh, overflow: hidden'):
filename = '/path/to/a/test/file.fits'
config = get_config()
app = Application(configuration=config)
spec = Specviz(app)
spec.load_data(filename, format='SDSS-V spec multi')
display(spec.app)
``` | closed | 2024-01-18T14:06:01Z | 2024-02-28T10:36:17Z | https://github.com/widgetti/solara/issues/468 | [] | havok2063 | 8 |
apify/crawlee-python | web-scraping | 526 | Implement/document a way how to pass extra configuration to json.dump() | There are useful configuration options for `json.dump()` that I'd like to pass through `await crawler.export_data("export.json")`, but I see no way to do that:
- `ensure_ascii` - as someone living in a country using extended latin, setting this to `False` prevents Python from encoding half of the characters as a weird mess
- `indent` - allows me to read the output as a mere human
- `sort_keys` - may be useful for [git scraping](https://simonwillison.net/2020/Oct/9/git-scraping/), not sure
The only workaround I can think of right now is something convoluted like:
```py
import json
from pathlib import Path
path = Path("export.json")
await crawler.export_data(path)
path.write_text(json.dumps(json.loads(path.read_text()), ensure_ascii=False, indent=2))
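
# The same round-trip, wrapped as a reusable stdlib helper (the function name
# is mine, not crawlee's), so the json.dump() options live in one place:
def rewrite_json(path: Path, **dump_kwargs) -> None:
    """Re-serialize a JSON file in place with custom json.dump() options."""
    data = json.loads(path.read_text(encoding="utf-8"))
    path.write_text(json.dumps(data, **dump_kwargs), encoding="utf-8")

# rewrite_json(Path("export.json"), ensure_ascii=False, indent=2, sort_keys=True)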
``` | closed | 2024-09-15T17:47:00Z | 2024-10-31T16:34:14Z | https://github.com/apify/crawlee-python/issues/526 | [
"enhancement",
"t-tooling",
"hacktoberfest"
] | honzajavorek | 7 |
oegedijk/explainerdashboard | dash | 88 | Make "hide_popout" global parameter? | Would it be possible to add `hide_popout` to [the list of globally toggle-able things](https://explainerdashboard.readthedocs.io/en/latest/custom.html?highlight=hide_poweredby#hiding-toggles-and-dropdowns-inside-components)?
sigmavirus24/github3.py | rest-api | 702 | fetch a key by id return None | an example is worth 1000 words:
```python
>>> gh = github3.login(…)
>>> u = gh.user(gh.user().login)
>>> key_id = list(u.iter_keys())[0].id
>>> type(gh.key(key_id))
<class 'NoneType'>
```
as a solution I implemented that function using the `user.iter_keys` method:
```python
def get_key(self, key_id, user=None):
    try:
        id_num = int(key_id)
    except ValueError:
        raise ResourceError('Key id shall be an integer')
    if not user:
        raise ResourceNotFoundError('Could not find user {}'.format(user))
    user = self.gh.user(user)
    if not user:
        raise ResourceNotFoundError('Could not find user {}'.format(user))
    for key in user.iter_keys():
        if key.id == id_num:
            return key
    raise ResourceNotFoundError('Key {} not found.'.format(key_id))
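
# The same lookup can also be written with the stdlib next() idiom
# (a generic sketch, not tied to github3's API):
def find_by_id(items, item_id):
    """Return the first item whose .id matches item_id, or None."""
    return next((item for item in items if item.id == item_id), None)

# e.g. find_by_id(user.iter_keys(), id_num)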
``` | closed | 2017-05-06T18:48:25Z | 2017-05-06T19:48:23Z | https://github.com/sigmavirus24/github3.py/issues/702 | [] | guyzmo | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 822 | Switch from GPU to CPU why, perceptual loss CycleGAN training | **Problem: the model always switches from GPU to CPU**
Modifications done for adding the perceptual loss:
So if you want to use perceptual loss while using this amazing CycleGAN do the following:
In models.cycle_gan_model.py:
add at the beginning:
```python
from torchvision import models
import torch.nn as nn

class VGGNet(nn.Module):
    def __init__(self):
        """Select conv1_1 ~ conv5_1 activation maps."""
        super(VGGNet, self).__init__()
        self.select = ['9', '36']
        self.vgg = models.vgg19(pretrained=True).features

    def forward(self, x):
        """Extract multiple convolutional feature maps."""
        features = []
        for name, layer in self.vgg._modules.items():
            x = layer(x)
            if name in self.select:
                features.append(x)
        return features[0], features[1]
```
and in `backward_G(self)`:
```python
perceptual = True  # False if you don't want the perceptual loss in this training
if perceptual:
    vgg = VGGNet().cuda().eval()
    with torch.no_grad():
        A = self.real_A
        B = self.real_B
        c = nn.MSELoss()
        rx = self.netG_B(self.netG_A(A))
        ry = self.netG_A(self.netG_B(B))
        fx1, fx2 = vgg(A)
        fy1, fy2 = vgg(B)
        frx1, frx2 = vgg(rx)
        fry1, fry2 = vgg(ry)
        m1 = c(fx1, frx1)
        m2 = c(fx2, frx2)
        m3 = c(fy1, fry1)
        m4 = c(fy2, fry2)
        self.perceptual_loss = (m1 + m2 + m3 + m4) * 0.00001 * 0.5
else:
    self.perceptual_loss = 0
```
and finally:
```python
self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_cycle_A + self.loss_cycle_B + self.loss_idt_A + self.loss_idt_B + self.perceptual_loss
```
You now have a perceptual CycleGAN, congratulations!
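A note on performance (an assumption, not a confirmed diagnosis): `VGGNet().cuda().eval()` is constructed inside `backward_G`, so `models.vgg19(pretrained=True)` is rebuilt on the CPU at every optimization step before being moved to the GPU. The usual remedy is to build the feature extractor once (e.g. in the model's `__init__`) and reuse it. The pattern, sketched with a stdlib stand-in so it runs without torch:

```python
import time


class ExpensiveExtractor:
    """Stand-in for VGGNet(): construction is the costly part (weight loading on CPU)."""

    def __init__(self):
        time.sleep(0.01)  # simulates loading pretrained VGG19 weights

    def __call__(self, x):
        return x * 2


class Model:
    def __init__(self):
        # Built once, like `self.vgg = VGGNet().cuda().eval()` would be in __init__
        self.vgg = ExpensiveExtractor()

    def backward_g(self, x):
        # Reuses the cached extractor; no per-step construction or CPU round-trip
        return self.vgg(x)


model = Model()
print([model.backward_g(i) for i in range(3)])  # [0, 2, 4]
```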
**But with those modifications the model always switch from GPU to CPU why?** | open | 2019-10-30T17:38:01Z | 2021-06-30T10:20:35Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/822 | [] | Scienceseb | 7 |
capitalone/DataProfiler | pandas | 820 | `_assimilate_histogram` and `_regenerate_histogram` refactor into standalone functions | **Is your feature request related to a problem? Please describe.**
`_assimilate_histogram` and `_regenerate_histogram` functions are not using self for anything of substance and as a result can be separated into their own standalone static functions.
**Describe the outcome you'd like:**
Move these two functions outside of the class to histogram_utils.py
**Additional context:**
| open | 2023-05-15T17:50:38Z | 2023-08-10T15:15:42Z | https://github.com/capitalone/DataProfiler/issues/820 | [
"New Feature"
] | ksneab7 | 11 |
robusta-dev/robusta | automation | 1,158 | [helm] implement Liveness and readiness for k8s pods | **Is your feature request related to a problem?**
On Robusta pods, liveness and readiness probes are not defined, so pods continue to run even when the service has problems.
For example, if `automountServiceAccountToken` is set to false, the runner starts and runs, but the service is not actually usable because it can't contact the Kubernetes API.
**Describe the solution you'd like**
I would like to know whether health-check endpoints exist on the runner and forwarder, so that liveness and readiness probes can be implemented in the chart.
With these probes, a pod would be restarted or marked unready when the service is unhealthy, which makes the actual state of the service much easier to see.
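For illustration, the chart could expose probe configuration like the following sketch; the `/healthz` and `/readyz` paths and the port name are assumptions, since such endpoints would first have to exist in the runner and forwarder:

```yaml
# hypothetical values.yaml fragment
runner:
  livenessProbe:
    httpGet:
      path: /healthz   # assumed endpoint
      port: http       # assumed port name
    initialDelaySeconds: 10
  readinessProbe:
    httpGet:
      path: /readyz    # assumed endpoint
      port: http
```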
| open | 2023-11-10T08:04:43Z | 2023-11-14T06:07:22Z | https://github.com/robusta-dev/robusta/issues/1158 | [
"needs-triage"
] | michMartineau | 0 |
graphql-python/gql | graphql | 334 | 3.6.8 Support Error. | **Describe the bug**
GQL does not support Python 3.6.8 because it depends on multidict, and multidict supports Python >= 3.7 only.
**To Reproduce**
Steps to reproduce the behavior:
Run `pip install gql` in a Python 3.6.8 environment.
**Expected behavior**
Proper installation
**System info (please complete the following information):**
- OS: CentOS
- Python version: 3.6.8
| closed | 2022-06-13T11:40:05Z | 2022-07-03T18:48:59Z | https://github.com/graphql-python/gql/issues/334 | [
"type: invalid"
] | cjlaserna | 1 |
nvbn/thefuck | python | 1,477 | Installation as instructed for Mint fails | The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
Irrelevant - the problem is with installing the program in the first place.
Your system (Debian 7, ArchLinux, Windows, etc.):
Mint 22 Cinnamon. (Mint 22 is based upon Ubuntu 24.04, a.k.a. 'Ubuntu Noble'.)
How to reproduce the bug:
Do, as per installation instructions (within the README) this:
```
$ sudo apt update
$ sudo apt install python3-dev python3-pip python3-setuptools
$ pip3 install thefuck --user
```
And consequently see the following after the `pip3 install`.
```
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
```
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
Irrelevant.
If the bug only appears with a specific application, the output of that application and its version:
Irrelevant.
Anything else you think is relevant:
Nothing.
EDITED (because I submitted the form too early by accident). | open | 2024-10-22T01:49:16Z | 2025-01-07T07:02:31Z | https://github.com/nvbn/thefuck/issues/1477 | [] | LinuxOnTheDesktop | 12 |
vanna-ai/vanna | data-visualization | 569 | Vanna is not able to get data from local Postgres DB | I have modified streamlit app to use ollama and chromadb
**vanna_calls.py**
```
import streamlit as st

from vanna.ollama import Ollama
from vanna.chromadb import ChromaDB_VectorStore
class MyVanna(ChromaDB_VectorStore, Ollama):
def __init__(self, config=None):
ChromaDB_VectorStore.__init__(self, config=config)
Ollama.__init__(self, config=config)
@st.cache_resource(ttl=3600)
def setup_vanna():
vn = MyVanna(config={'model': 'llama3:8b-instruct-q8_0', 'ollama_host': 'http://192.x.x.x:11434'})
vn.connect_to_postgres(host='x.x.x.x', dbname='energy_db', user='postgres', password='xxx', port='5432')
# The information schema query may need some tweaking depending on your database. This is a good starting point.
df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")
plan = vn.get_training_plan_generic(df_information_schema)
vn.train(plan=plan)
return vn
```
It runs fine without errors, but it doesn't pick up data from the database.
When I ask a question, I can see this in the logs:
`SQL Prompt: [{'role': 'system', 'content': "You are a PostgreSQL expert. Please help to generate a SQL query to answer the question. Your response should ONLY be based on the given context and follow the response guidelines and format instructions. \n===Additional Context \n\nThe following columns are in the pg_tables table in the energy_db database`
but the results always come from the model without using the trained data
<img width="656" alt="image" src="https://github.com/user-attachments/assets/19dc3c83-aaf5-4290-8cab-39386ebd79e6">
**Desktop (please complete the following information):**
- OS: [e.g. Ubuntu]
- Version: [e.g. 20.04]
- Python: [3.12]
- Vanna: [0.6.4]
| closed | 2024-07-29T00:04:57Z | 2024-07-31T16:01:01Z | https://github.com/vanna-ai/vanna/issues/569 | [
"bug"
] | imranrazakhan | 1 |
schemathesis/schemathesis | pytest | 1,789 | Add convenience methods on `ParameterSet` | It could be useful in hooks, e.g. to check whether `APIOperation` contains some header (we leave case-insensitive for a while).
1. Add a new `contains` method to the [ParameterSet](https://github.com/schemathesis/schemathesis/blob/master/src/schemathesis/parameters.py#L55) class; the method should accept a `name` of type `str`
2. The implementation should reuse the `get` method and check whether its result is `None`
3. Inside `test/specs/openapi/parameters/test_simple_payloads.py` add a new function `test_parameter_set_get`
4. This test function could have a setup like this:
```python
import schemathesis
# ... other tests omitted for brevity
def test_parameter_set_get(make_openapi_3_schema):
header = {"in": "header", "name": "id", "required": True, "schema": {}}
raw_schema = make_openapi_3_schema(parameters=[header])
schema = schemathesis.from_dict(raw_schema)
```
5. Use the new method from step 1 on `schema["/users"]["POST"].headers` inside this test and verify that it contains the header with the name `id` and does not contain the header with the name `unknown` | closed | 2023-10-05T12:15:07Z | 2023-10-13T20:32:16Z | https://github.com/schemathesis/schemathesis/issues/1789 | [
"Priority: Low",
"Hacktoberfest",
"Difficulty: Beginner",
"Component: Hooks"
] | Stranger6667 | 1 |
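Steps 1 and 2 amount to a one-line method. A minimal sketch against a stubbed `ParameterSet` (the real class in `src/schemathesis/parameters.py` has more machinery; only `get` and the new `contains` matter here):

```python
class Parameter:
    def __init__(self, name):
        self.name = name


class ParameterSet:
    """Stub of schemathesis.parameters.ParameterSet with the proposed method."""

    def __init__(self, items=None):
        self.items = items or []

    def get(self, name):
        # Existing behavior: return the matching parameter, or None.
        for item in self.items:
            if item.name == name:
                return item
        return None

    def contains(self, name: str) -> bool:
        # Step 2: reuse `get` and check whether its result is None.
        return self.get(name) is not None


headers = ParameterSet([Parameter("id")])
print(headers.contains("id"), headers.contains("unknown"))  # -> True False
```

The test in step 5 then reduces to two assertions: `contains("id")` is true and `contains("unknown")` is false.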
plotly/dash-table | dash | 305 | How do I style my numbers to commas style, or 3 significant figures in a plotly table (without converting them to string) | Thanks for your interest in Plotly's Dash DataTable component!!
Note that GitHub issues in this repo are reserved for bug reports and feature
requests. Implementation questions should be discussed in our
[Dash Community Forum](https://community.plot.ly/c/dash).
Before opening a new issue, please search through existing issues (including
closed issues) and the [Dash Community Forum](https://community.plot.ly/c/dash).
If your problem or idea has not been addressed yet, feel free to
[open an issue](https://github.com/plotly/plotly.py/issues/new).
When reporting a bug, please include a reproducible example! We recommend using
the [latest version](https://github.com/plotly/dash-table/blob/master/CHANGELOG.md)
as this project is frequently updated. Issues can be browser-specific so
it's usually helpful to mention the browser and version that you are using.
Thanks for taking the time to help us improve this component!
| closed | 2018-12-18T05:50:09Z | 2018-12-18T14:05:33Z | https://github.com/plotly/dash-table/issues/305 | [
"dash-type-question"
] | kennethtiong | 1 |
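Column-level number formatting (rather than stringifying the data) is the usual answer to the question above: DataTable columns accept a d3-format specifier. A hedged sketch: the column dicts below follow the `type`/`format` shape from the DataTable docs, but double-check the specifier strings against the d3-format documentation.

```python
# Sketch: numbers are formatted at render time, the underlying data stays numeric.
commas = {"specifier": ",.0f"}   # 1234567 -> "1,234,567"
sig3 = {"specifier": ".3s"}      # 1234567 -> "1.23M" (3 significant figures, SI)

columns = [
    {"name": "amount", "id": "amount", "type": "numeric", "format": commas},
    {"name": "ratio", "id": "ratio", "type": "numeric", "format": sig3},
]
```

Because the data itself remains numeric, sorting and filtering still behave numerically.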
deedy5/primp | web-scraping | 97 | `client.cookies` setter does not set cookies | closed | 2025-02-20T19:59:21Z | 2025-02-22T17:01:28Z | https://github.com/deedy5/primp/issues/97 | [
"bug"
] | deedy5 | 2 | |
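For context, the expected contract is the standard property-setter one: assigning to `client.cookies` should replace the stored cookies so that a subsequent read returns them. A pure-Python illustration of that contract (primp is a Rust extension, so this is not its implementation, just the behavior the issue title says is missing):

```python
class Client:
    """Toy client showing the setter behavior the issue expects."""

    def __init__(self):
        self._cookies = {}

    @property
    def cookies(self):
        # Return a copy so callers cannot mutate internal state directly.
        return dict(self._cookies)

    @cookies.setter
    def cookies(self, new_cookies):
        # Assignment must actually replace the stored cookies.
        self._cookies = dict(new_cookies)


c = Client()
c.cookies = {"session": "abc123"}
print(c.cookies)  # -> {'session': 'abc123'}
```

The reported bug is that the analogous assignment on a primp `Client` did not take effect.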
aminalaee/sqladmin | asyncio | 767 | Use HTMX | # Use HTMX
At #678, we discussed using HTMX to do inline edits.
## Benefits:
- HTMX could ease the development of `create_modal`, `edit_modal`, and `details_modal` from Flask Admin.
- HTMX could ease the development of Redis Console from Flask Admin.
- HTMX is a UI-toolkit-agnostic (Bootstrap, Tabler, Tailwind) library. So if we ever come up with a different template mode, it is still going to be useful.
- HTMX allows us to build SPA applications without traditional tools like ReactJS or Angular.
| open | 2024-05-13T15:30:20Z | 2024-05-25T02:51:04Z | https://github.com/aminalaee/sqladmin/issues/767 | [] | hasansezertasan | 3 |
allure-framework/allure-python | pytest | 742 | Allure report can not get case result when run case with param `--forked` | Hi Team,
I am facing an issue where case results cannot be retrieved when generating an Allure report with allure-pytest 2.13.1.
Cli: python -m pytest --forked tests/function -q --alluredir=./__out__/results
#### I'm submitting a ...
bug report
#### What is the current behavior?
<img width="707" alt="image" src="https://user-images.githubusercontent.com/19388302/232711878-b3e93084-b0b0-4a12-8fd1-1f28de8b677a.png">
<img width="815" alt="image" src="https://user-images.githubusercontent.com/19388302/232712120-9cc0bd9b-89f8-460a-855b-789085a8fff4.png">
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
Demo code:
```python
class TestDemo:
    def test_01(self):
        print("case pass")
        assert True

    def test_02(self):
        print("case fail")
        assert False

    def test_03(self):
        print("case pass")
        assert True

    def test_04(self):
        print("case pass")
        assert True

    def test_05(self):
        print("case fail")
        assert False
```
CLI:

```shell
python -m pytest --forked tests1 -q --alluredir=./__out__/results
allure generate -o __out__/html/ __out__/results/
```
#### What is the expected behavior?
#### What is the motivation / use case for changing the behavior?
#### Please tell us about your environment:
- Allure version: 2.20.1
- Test framework: pytest7.3.1
- Allure adaptor: allure-pytest@2.13.1
- pytest-forked: 1.6.0
| open | 2023-04-18T08:05:55Z | 2023-07-08T22:29:10Z | https://github.com/allure-framework/allure-python/issues/742 | [
"bug",
"theme:pytest"
] | pengdada00100 | 2 |
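A plausible contributing factor: pytest-forked runs each test in a child process, so anything a reporting plugin accumulates only in that child's memory is lost unless it is explicitly serialized back to the parent. A stdlib sketch of that failure mode (not Allure-specific; the child here is a plain subprocess rather than a fork):

```python
import subprocess
import sys

collected = []  # parent-side report store


def run_test_in_child():
    # The child has its own interpreter and its own `collected` list;
    # nothing it appends ever reaches this process.
    subprocess.run(
        [sys.executable, "-c", "collected = []; collected.append('passed')"],
        check=True,
    )


run_test_in_child()
print(collected)  # -> [] : the parent's list is untouched
```

Any per-test result that should appear in the report must therefore be sent back to the parent process explicitly, not just stored in plugin state.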
wkentaro/labelme | deep-learning | 509 | Could you add Shift+scroll wheel for horizontal scrolling? | The current Alt+scroll wheel is really inconvenient. | closed | 2019-11-09T13:36:42Z | 2020-01-27T01:47:12Z | https://github.com/wkentaro/labelme/issues/509 | [] | kyrosz7u | 0 |
piskvorky/gensim | machine-learning | 3,191 | Similarity Interface of Gensim giving low similarity score for exact same documents with TfIdf + LdaModel | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I am trying to implement a document-similarity API using Gensim's LDA model. To experiment with performance, I trained the LDA model on TF-IDF vectors instead of the plain BoW corpus described in the documentation. The problem I am facing is that when I use Gensim's Similarity API to build the index and compute similarity scores, matching a document against itself sometimes yields values far below ~1, as low as ~0.06. This does not happen all the time, only for some documents. I tested again with 229 documents, matching each one against itself, and found that 45 of them score below 0.98, with values like 0.65 and 0.41. I would like some help with this: am I doing something wrong, or is there a potential bug in the interface?
#### Steps/code/corpus to reproduce
##### Minimal Code used for testing:
NOTE: The corpus I am using is confidential and limited to the organization, so I cannot share the training corpus here. For reference, I will share the LDA model output for 3 of the 45 documents used for testing (the ones getting the lowest scores when matched with themselves). I hope that is sufficient for debugging.
```python
docs = [ 'Document 1 as a string', 'Document 2 as a string', 'Document 3 as a string', 'and so on.....' ]
cleaned_docs = list(map(clean_function, docs)) # Here, clean_function return tokens for each string. So, cleaned_docs is essentially a list of list of strings List[List[str]]
bow_corpus = [dictionary.doc2bow(i) for i in cleaned_docs]
tfidf_corpus = tfidf_model[bow_corpus]
lda_corpus = lda_model[tfidf_corpus]
index = Similarity(lda_corpus)
sims = index[lda_corpus] # Getting similarity for all combinations. Got a (229, 229) array for my case
final_sims = np.diag(sims) # Getting similarity with itself
print(final_sims) # Getting very low score with some docs
```
##### Output Vectors of LDAModel for 3 documents:
```python
[[(0, 0.17789464), (2, 0.03806097), (12, 0.2273234), (14, 0.08613937), (21, 0.13261063), (22, 0.17807047), (36, 0.058883864)],
[(1, 0.43381935), (2, 0.14317065), (3, 0.07986226), (36, 0.062136874)],
[(0, 0.32848448), (2, 0.16667062), (14, 0.0485237), (15, 0.11480027), (18, 0.086506054), (35, 0.059970867)]]
```
```python
print(lda_model.lifecycle_events)
[{'msg': 'trained LdaModel(num_terms=100000, num_topics=40, decay=0.5, chunksize=2000) in 2080.85s', 'datetime': '2021-06-30T09:32:52.611017', 'gensim': '4.0.1', 'python': '3.6.12 (default, Jun 28 2021, 13:17:01) \n[GCC 5.4.0 20160609]', 'platform': 'Linux-4.4.0-1128-aws-x86_64-with-debian-stretch-sid', 'event': 'created'}, {'fname_or_handle': 'models/lda.model', 'separately': "['expElogbeta', 'sstats']", 'sep_limit': 10485760, 'ignore': ['state', 'dispatcher', 'id2word'], 'datetime': '2021-06-30T09:32:52.703852', 'gensim': '4.0.1', 'python': '3.6.12 (default, Jun 28 2021, 13:17:01) \n[GCC 5.4.0 20160609]', 'platform': 'Linux-4.4.0-1128-aws-x86_64-with-debian-stretch-sid', 'event': 'saving'}, {'fname_or_handle': 'models/lda.model', 'separately': "['expElogbeta', 'sstats']", 'sep_limit': 10485760, 'ignore': ['state', 'dispatcher', 'id2word'], 'datetime': '2021-06-30T09:32:58.758961', 'gensim': '4.0.1', 'python': '3.6.12 (default, Jun 28 2021, 13:17:01) \n[GCC 5.4.0 20160609]', 'platform': 'Linux-4.4.0-1128-aws-x86_64-with-debian-stretch-sid', 'event': 'saving'}, {'fname': 'models/lda.model', 'datetime': '2021-07-13T20:26:59.671845', 'gensim': '4.0.1', 'python': '3.8.10 (default, Jun 2 2021, 10:49:15) \n[GCC 9.4.0]', 'platform': 'Linux-5.8.0-59-generic-x86_64-with-glibc2.29', 'event': 'loaded'}]
```
```python
print(tfidf_model.lifecycle_events)
[{'msg': 'calculated IDF weights for 1174674 documents and 100000 features (100645197 matrix non-zeros)', 'datetime': '2021-06-30T08:58:10.744582', 'gensim': '4.0.1', 'python': '3.6.12 (default, Jun 28 2021, 13:17:01) \n[GCC 5.4.0 20160609]', 'platform': 'Linux-4.4.0-1128-aws-x86_64-with-debian-stretch-sid', 'event': 'initialize'}, {'fname_or_handle': 'models/tfidf.model', 'separately': 'None', 'sep_limit': 10485760, 'ignore': frozenset(), 'datetime': '2021-06-30T08:58:10.744733', 'gensim': '4.0.1', 'python': '3.6.12 (default, Jun 28 2021, 13:17:01) \n[GCC 5.4.0 20160609]', 'platform': 'Linux-4.4.0-1128-aws-x86_64-with-debian-stretch-sid', 'event': 'saving'}, {'fname': 'models/tfidf.model', 'datetime': '2021-07-13T20:27:03.663529', 'gensim': '4.0.1', 'python': '3.8.10 (default, Jun 2 2021, 10:49:15) \n[GCC 9.4.0]', 'platform': 'Linux-5.8.0-59-generic-x86_64-with-glibc2.29', 'event': 'loaded'}]
```
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
Linux-5.8.0-59-generic-x86_64-with-glibc2.29
import sys; print("Python", sys.version)
Python 3.8.10 (default, Jun 2 2021, 10:49:15)
[GCC 9.4.0]
import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
import numpy; print("NumPy", numpy.__version__)
NumPy 1.21.0
import scipy; print("SciPy", scipy.__version__)
SciPy 1.7.0
import gensim; print("gensim", gensim.__version__)
gensim 4.0.1
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
``` | open | 2021-07-13T15:41:06Z | 2021-07-13T15:41:06Z | https://github.com/piskvorky/gensim/issues/3191 | [] | Rutvik-Trivedi | 0 |
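As a sanity check, cosine similarity of any non-zero vector with itself is exactly 1, including for the sparse LDA topic vectors printed above. So self-similarity scores of ~0.06 point at how the index is built rather than at the vectors themselves (for instance, `gensim.similarities.Similarity` is normally constructed as `Similarity(output_prefix, corpus, num_features)`, and with LDA output `num_features` should equal `num_topics`, here 40; the snippet above abbreviates that call). A stdlib check using one of the printed vectors:

```python
from math import sqrt


def cosine(u, v):
    """Cosine similarity of gensim-style sparse vectors [(topic_id, weight), ...]."""
    du, dv = dict(u), dict(v)
    dot = sum(w * dv.get(t, 0.0) for t, w in du.items())
    nu = sqrt(sum(w * w for w in du.values()))
    nv = sqrt(sum(w * w for w in dv.values()))
    return dot / (nu * nv)


doc = [(0, 0.17789464), (2, 0.03806097), (12, 0.2273234), (14, 0.08613937),
       (21, 0.13261063), (22, 0.17807047), (36, 0.058883864)]
print(round(cosine(doc, doc), 6))  # -> 1.0
```

Since self-similarity is 1 by construction, a low score from the index suggests a mismatch in how the index normalizes or dimensions the vectors, not a property of the LDA output.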
dask/dask | pandas | 11,200 | dask.delayed: Dataframe partition with map_overlap gets randomly converted to Pandas Dataframe or Pandas Series. | **Describe the issue**:
In recent versions of Dask, passing a DataFrame partition as an argument to a `dask.delayed` function gets randomly converted into either a pandas DataFrame (expected) or a pandas Series (not expected). The behavior is not deterministic: it occurs at a different stage of the loop over partitions on every run.
This seems to happen only under specific circumstances, one case being after applying two iterations of `map_overlap` (see below). I have, however, also seen it in situations without `map_overlap`. Remarkably, it also depends on the chunking of the iteration (it happens for `n_cores=3`, apparently not for `n_cores=2`).
**Minimal Complete Verifiable Example**:
```python
import numpy as np
import pandas as pd
import dask.dataframe as ddf
import dask
sample = np.array([np.arange(100000), 2 * np.arange(100000), 3 * np.arange(100000)]).T
columns = ["x", "y", "z"]
sample_pdf = pd.DataFrame(sample, columns=columns)
df = ddf.from_pandas(sample_pdf, npartitions=100)
n_cores=3
def test(part, columns):
col_id = [part.columns.get_loc(axis) for axis in columns]
return col_id
def forward_fill_partition(df):
df = df.ffill()
return df
for _ in range(2):
df = df.map_overlap(
forward_fill_partition,
before=2,
after=0,
)
for i in range(0, df.npartitions, n_cores):
print(i)
core_tasks = [] # Core-level jobs
for j in range(0, n_cores):
partition_index = i + j
if partition_index >= df.npartitions:
break
df_partition = df.get_partition(partition_index)
core_tasks.append(
dask.delayed(test)(
df_partition,
columns,
),
)
if len(core_tasks) > 0:
core_results = dask.compute(*core_tasks)
```
**Example output**:
```
0
3
6
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_19014/1120124549.py in ?()
44 col_id = [part.columns.get_loc(axis) for axis in columns]
45 return col_id
46
47 def forward_fill_partition(df):
---> 48 df = df.ffill()
49 return df
50
51 for _ in range(2):
/mnt/pcshare/users/Laurenz/AreaB/sed/poetry_envs/virtualenvs/sed-processor-3qnpZCFI-py3.9/lib/python3.9/site-packages/dask/base.py in ?(traverse, optimize_graph, scheduler, get, *args, **kwargs)
658 keys.append(x.__dask_keys__())
659 postcomputes.append(x.__dask_postcompute__())
660
661 with shorten_traceback():
--> 662 results = schedule(dsk, keys, **kwargs)
663
664 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
/tmp/ipykernel_19014/1120124549.py in ?(part, columns)
14 def test(part, columns):
---> 15 col_id = [part.columns.get_loc(axis) for axis in columns]
16 return col_id
/tmp/ipykernel_19014/1120124549.py in ?(.0)
---> 15 def test(part, columns):
16 col_id = [part.columns.get_loc(axis) for axis in columns]
17 return col_id
/mnt/pcshare/users/Laurenz/AreaB/sed/poetry_envs/virtualenvs/sed-processor-3qnpZCFI-py3.9/lib/python3.9/site-packages/pandas/core/generic.py in ?(self, name)
6295 and name not in self._accessors
6296 and self._info_axis._can_hold_identifiers_and_holds_name(name)
6297 ):
6298 return self[name]
-> 6299 return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'columns'
```
**Anything else we need to know?**:
This behavior is only seen in recent versions of dask >= 2024.3, in previous versions of dask this worked flawlessly. I suspect it to be related to the recently introduced query planning.
**Environment**:
- Dask version: 2024.6.2
- Python version: 3.9.19
- Operating System: Ubuntu Linux
- Install method (conda, pip, source): Python installed via anaconda, environment with virtualenv and pip
| open | 2024-06-24T20:21:20Z | 2025-01-22T17:53:56Z | https://github.com/dask/dask/issues/11200 | [
"bug",
"dask-expr"
] | rettigl | 4 |
fedspendingtransparency/usaspending-api | pytest | 3,645 | Ability to filter using office names/codes | New feature suggestion: Allow filtering on awarding_office_name/code and funding_office_name/code. I currently have to pull down a huge dataset and then filter afterwards for the specific office that I want. Adding an office filter parameter in the API filter objects (specifically for Award Download and Bulk Award Download endpoints) would increase efficiency. | open | 2022-11-18T14:34:00Z | 2022-11-18T14:34:00Z | https://github.com/fedspendingtransparency/usaspending-api/issues/3645 | [] | jnolandrive | 0 |