| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,846,354
| 1,645,339
|
LLAMAINDEX - /pytorch/aten/src/ATen/native: indexSelectSmallIndex: block: [18,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed
|
<p>I have a problem with LlamaIndex:</p>
<blockquote>
<p>/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Indexing.cu:1237:
indexSelectSmallIndex: block: [18,0,0], thread: [31,0,0] Assertion
<code>srcIndex < srcSelectDimSize</code> failed.</p>
</blockquote>
<p>Loading the data:</p>
<pre><code>from llama_index import SimpleDirectoryReader

reader = SimpleDirectoryReader(
    input_files=["./data/paul_graham/paul_graham_essay1.txt"]
)
documents = reader.load_data()  # documents are used for indexing below
</code></pre>
<p>Setting up prompts (specific to StableLM):</p>
<pre><code>from llama_index.prompts import PromptTemplate

# This will wrap the default prompts that are internal to llama-index
# taken from https://huggingface.co/Writer/camel-5b-hf
query_wrapper_prompt = PromptTemplate(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{query_str}\n\n### Response:"
)
</code></pre>
<p>Loading the model:</p>
<pre><code>import torch
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.llms import HuggingFaceLLM

llm = HuggingFaceLLM(
    context_window=2048,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.25, "do_sample": True},
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="Writer/camel-5b-hf",
    model_name="Writer/camel-5b-hf",
    device_map="auto",
    tokenizer_kwargs={"max_length": 1024},  # 2048
    # uncomment this if using CUDA to reduce memory usage
    model_kwargs={"torch_dtype": torch.float16}
)
service_context = ServiceContext.from_defaults(chunk_size=512, llm=llm, embed_model="local")
</code></pre>
<p>Starting indexing:</p>
<pre><code>from llama_index import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    load_index_from_storage,
    StorageContext,
    ServiceContext
)

#service_context = ServiceContext.from_defaults(embed_model=embed_model, llm_predictor=llm_predictor)
service_context = ServiceContext.from_defaults(chunk_size=1024, llm=llm, embed_model="local")
index = VectorStoreIndex(documents, service_context=service_context)
</code></pre>
<p>Getting a response from the query, which is where the error occurs:</p>
<pre><code>query_engine = index.as_query_engine()
response = query_engine.query(
    "What did the author do after his time at Y Combinator?"
)
print(response)
</code></pre>
<p>More of the stack trace of the error:</p>
<blockquote>
<p>Token indices sequence length is longer than the specified maximum
sequence length for this model (1815 > 512). Running this sequence
through the model will result in indexing errors env:
CUDA_LAUNCH_BLOCKING=1 Setting <code>pad_token_id</code> to <code>eos_token_id</code>:50256
for open-end generation. <em>(the previous line is repeated many times)</em>
This is a friendly reminder - the current text generation call will
exceed the model's predefined maximum length (2048). Depending on the
model, you may observe exceptions, performance degradation, or nothing at all.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Indexing.cu:1237:
indexSelectSmallIndex: block: [22,0,0], thread: [0,0,0] Assertion
<code>srcIndex < srcSelectDimSize</code> failed. <em>(the same assertion is
repeated for threads [1,0,0] through [6,0,0])</em></p>
</blockquote>
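<p>For context, this assertion usually means an index fed to an embedding lookup exceeds the table size, for example position ids beyond the model's maximum sequence length. A minimal CPU analogue of the failure mode and the usual guard (the helper names here are hypothetical, not LlamaIndex API):</p>

```python
def lookup(table_size, indices):
    # mimics torch's index_select: every index must satisfy
    # srcIndex < srcSelectDimSize, otherwise the CUDA kernel asserts
    for i in indices:
        if not (0 <= i < table_size):
            raise IndexError(f"index {i} out of range for table of size {table_size}")
    return list(indices)

def truncate_to_window(token_ids, context_window):
    # guard: never feed more positions than the model supports
    return token_ids[:context_window]

position_ids = list(range(1815))        # longer than the 1024 tokenizer limit above
safe = truncate_to_window(position_ids, 1024)
print(len(safe))                        # 1024
lookup(1024, safe)                      # fine: all indices < 1024
```

<p>Note that in the configuration above, <code>tokenizer_kwargs={"max_length": 1024}</code> disagrees with <code>context_window=2048</code>, and the warning reports a 1815-token sequence; aligning these limits (or enabling truncation) is one plausible avenue to investigate. The sketch only illustrates the indexing failure, not the fix inside LlamaIndex.</p>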
|
<python><pytorch><llama><llama-index>
|
2024-01-19 13:18:12
| 1
| 7,777
|
venergiac
|
77,846,264
| 5,616,309
|
jpype dbapi2: ClassNotFoundException error in Python
|
<p>I want to run SQL on an IBM i database from <strong>Python</strong>, with jpype and JDBC. The problem must come from jpype; everything looks correct (the information that I print from my program), but I get a <code>ClassNotFoundException</code>.
I downloaded IBM's jar (jt400-20.0.6.jar) and installed it in my folder, then I ran this simple test:</p>
<pre><code>import os
import jpype
import jpype.dbapi2

driver_file_path = os.path.join(os.path.dirname(__file__), "jt400-20.0.6.jar")
jpype.startJVM()
print('startJVM effectué')
jpype.addClassPath(driver_file_path)
class_path = jpype.getClassPath(True)
print("Class Path " + class_path)
jpype.java.lang.System.out.println("hello world")

connection_string = 'jdbc:as400://MACHINE'
try:
    cnx = jpype.dbapi2.connect(connection_string, driver="com.ibm.as400.access.AS400JDBCDriver",
        driver_args={
            "user": 'UTIL',
            "password": 'MOTPASSE',
            "extended metadata": "true",
        }
    )
except jpype.dbapi2.Error as err:
    print("Erreur", err)  # note: "Erreur" + err would itself raise a TypeError
</code></pre>
<p>The driver exists at the <code>driver_file_path</code> that is printed, and there is an <code>AS400JDBCDriver.class</code> in <code>com\ibm\as400\access</code> inside the jar (I opened the file with 7-Zip to check). When running, I get this output:</p>
<pre><code>startJVM effectué
Class Path C:\Users...\jt400-20.0.6.jar
hello world
</code></pre>
<p>So the path should have been added to the JVM, and Java is working... But I also get:</p>
<pre><code>Traceback (most recent call last):
File "org.jpype.JPypeContext.java", line -1, in org.jpype.JPypeContext.callMethod
Exception: Java Exception
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\...\Test_jpype_question.py", line 37, in <module>
cnx = jpype.dbapi2.connect(connection_string, driver= "com.ibm.as400.access.AS400JDBCDriver",
File "C:\...\jpype\dbapi2.py", line 404, in connect
_jpype.JClass('java.lang.Class').forName(driver).newInstance()
java.lang.java.lang.ClassNotFoundException: java.lang.ClassNotFoundException: com.ibm.as400.access.AS400JDBCDriver
</code></pre>
<p>I have tried lots of things, but I can't understand what's wrong. What can I do to make it work?</p>
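<p>One likely lead (an assumption based on the jpype documentation, not a confirmed diagnosis): classpath entries added after <code>startJVM()</code> are not visible to the default class loader that <code>Class.forName()</code> uses, so the jar must be registered before the JVM starts. A sketch of the reordering, with the jpype calls left as comments so the snippet stays self-contained:</p>

```python
import os

# hypothetical layout: the driver jar sits in the current directory
driver_file_path = os.path.join(os.getcwd(), "jt400-20.0.6.jar")

# Reordered startup (sketch):
#
# import jpype, jpype.dbapi2
# jpype.addClassPath(driver_file_path)   # BEFORE startJVM()
# jpype.startJVM()                       # or: jpype.startJVM(classpath=[driver_file_path])
# cnx = jpype.dbapi2.connect(...)
print(os.path.basename(driver_file_path))
```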
|
<python><ibm-midrange><jpype>
|
2024-01-19 13:01:37
| 0
| 1,129
|
FredericP
|
77,846,176
| 10,576,777
|
Pytest not working in VS Code after installing the package in a venv
|
<p>I'm trying to run pytest, which worked in PyCharm, but I can't manage to run it in VS Code.</p>
<p>First I'm doing:</p>
<pre><code>python -m venv env
</code></pre>
<pre><code>env\Scripts\activate
</code></pre>
<pre><code>python -m pip install --upgrade pip
</code></pre>
<pre><code>pip install selenium webdriver-manager pytest pytest-html
</code></pre>
<pre><code>Package Version
------------------ -----------
pip 23.3.2
pytest 7.4.4
pytest-html 4.1.1
pytest-metadata 3.0.0
pytest-xdist 3.5.0
selenium 4.16.0
setuptools 69.0.3
webdriver-manager 4.0.1
</code></pre>
<p>The packages appear in the venv, yet I get the following error:</p>
<pre><code>(env) PS C:\Users\User\Desktop\automation-projects\Python\paloAlto> pytest
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "C:\Users\User\Desktop\automation-projects\Python\paloAlto\env\Lib\site-packages\_pytest\main.py", line 267, in wrap_session
INTERNALERROR> config._do_configure()
INTERNALERROR> File "C:\Users\User\Desktop\automation-projects\Python\paloAlto\env\Lib\site-packages\_pytest\config\__init__.py", line 1053, in _do_configure
INTERNALERROR> self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
INTERNALERROR> File "C:\Users\User\Desktop\automation-projects\Python\paloAlto\env\Lib\site-packages\pluggy\_hooks.py", line 514, in call_historic
INTERNALERROR> res = self._hookexec(self.name, self._hookimpls, kwargs, False)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\User\Desktop\automation-projects\Python\paloAlto\env\Lib\site-packages\pluggy\_manager.py", line 115, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\User\Desktop\automation-projects\Python\paloAlto\env\Lib\site-packages\pluggy\_callers.py", line 113, in _multicall
INTERNALERROR> raise exception.with_traceback(exception.__traceback__)
INTERNALERROR> File "C:\Users\User\Desktop\automation-projects\Python\paloAlto\env\Lib\site-packages\pluggy\_callers.py", line 77, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\User\Desktop\automation-projects\Python\paloAlto\testCases\conftest.py", line 49, in pytest_configure
INTERNALERROR> config._metadata['Project Name'] = 'Palo Alto'
INTERNALERROR> ^^^^^^^^^^^^^^^^
INTERNALERROR> AttributeError: 'Config' object has no attribute '_metadata'
</code></pre>
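<p>The last frame of the traceback points at the project's own <code>conftest.py</code>: with pytest-html 4.x / pytest-metadata 3.x, <code>Config._metadata</code> was replaced by an entry in the pytest stash. A version-tolerant sketch of that hook (the <code>OldStyleConfig</code> class below is only a stand-in to make the example runnable):</p>

```python
def set_project_metadata(config, name):
    # pytest-html < 4 kept report metadata on a private attribute;
    # pytest-html >= 4 / pytest-metadata >= 3 moved it into config.stash
    if hasattr(config, "_metadata"):
        config._metadata["Project Name"] = name
    else:
        from pytest_metadata.plugin import metadata_key
        config.stash[metadata_key]["Project Name"] = name

# stand-in for the old-style pytest Config object, for illustration only
class OldStyleConfig:
    _metadata: dict = {}

cfg = OldStyleConfig()
set_project_metadata(cfg, "Palo Alto")
print(cfg._metadata["Project Name"])  # Palo Alto
```

<p>In a real <code>conftest.py</code> this logic would live inside <code>def pytest_configure(config):</code>.</p>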
|
<python><visual-studio-code><pytest>
|
2024-01-19 12:46:56
| 2
| 447
|
A-Makeyev
|
77,846,144
| 353,337
|
np.argsort with duplicate entries: make deterministic?
|
<p>When <code>argsort</code>ing a numpy array with duplicate entries, the output isn't deterministic, e.g., from</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
print(np.argsort([4, 2, 0, 6, 4, 3, 7, 6, 5, 7, 1, 2]))
</code></pre>
<p>I sometimes get</p>
<pre class="lang-py prettyprint-override"><code>[ 2 10 1 11 5 0 4 8 3 7 6 9]
</code></pre>
<p>and other times</p>
<pre class="lang-py prettyprint-override"><code>[ 2 10 11 1 5 0 4 8 3 7 6 9]
# ! !
</code></pre>
<p>(I noticed that when testing a software with Python 3.12, not sure if this has anything to do with it.)</p>
<p>Both results are correct, so I can't call this a bug. Is there a way to make the result deterministic, e.g., by enforcing that smaller array indices go before larger ones?</p>
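<p>One option, going by NumPy's documented <code>kind</code> parameter: a stable sort keeps equal elements in their input order, so ties always resolve to the smaller index first:</p>

```python
import numpy as np

a = [4, 2, 0, 6, 4, 3, 7, 6, 5, 7, 1, 2]
# kind="stable" guarantees equal values keep their input order,
# i.e. smaller indices come first among duplicates
order = np.argsort(a, kind="stable")
print(order.tolist())  # [2, 10, 1, 11, 5, 0, 4, 8, 3, 7, 6, 9]

# sanity check against Python's sorted(), which is always stable
assert order.tolist() == sorted(range(len(a)), key=a.__getitem__)
```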
|
<python><numpy><sorting>
|
2024-01-19 12:39:58
| 0
| 59,565
|
Nico Schlömer
|
77,846,105
| 8,916,031
|
One-liner to delete any line that starts with a number not followed by a delimiter
|
<p>This short Python code works. Can it be made more concise? Thanks.</p>
<pre><code>temp_test_001 = '''
Test 1,2,3,
41
Test 5,6,7,
8800
8800 8800
8800.
8800.0
8,800
Test 9,10
Test 11,12
'''.split('\n')
lines = ''
for line in enumerate(temp_test_001): lines += [line[1] + '\n',''][line[1].isnumeric()]
print(lines)
Test 1,2,3,
Test 5,6,7,
8800 8800
8800.
8800.0
8,800
Test 9,10
Test 11,12
</code></pre>
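<p>A more direct one-liner (a sketch that, like the original, drops only lines made up purely of digits, matching the sample output):</p>

```python
temp_test_001 = '''
Test 1,2,3,
41
Test 5,6,7,
8800
8800 8800
8800.
8800.0
8,800
Test 9,10
Test 11,12
'''.split('\n')

# keep every line that is not made up solely of digits
lines = '\n'.join(l for l in temp_test_001 if not l.isnumeric())
print(lines)
```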
|
<python><regex><windows><text><sed>
|
2024-01-19 12:32:57
| 2
| 1,503
|
Krantz
|
77,845,942
| 1,222,823
|
Avoid adding file and folder names to a module's namespace
|
<p>Is there a way to avoid adding file and folder names to a module's namespace? With the following structure:</p>
<pre><code>├── pkg
│ ├── __init__.py
│ ├── folder
│ │ └── func2.py
│ └── func1.py
</code></pre>
<p><code>__init__.py</code>:</p>
<pre><code>from .func1 import a
from .folder.func2 import b
</code></pre>
<p>Then importing</p>
<pre><code>>>> import pkg
>>> dir(pkg)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__',
'__package__', '__path__', '__spec__', 'a', 'b', 'folder', 'func1']
</code></pre>
<p>Is there a way to set up the project so <code>folder</code> and <code>func1</code> don't appear in the module's namespace?</p>
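<p>One common workaround (a sketch, not the only approach) is to delete the submodule attributes at the end of <code>__init__.py</code>, after the imports have run:</p>

```python
# pkg/__init__.py (sketch):
#
#   from .func1 import a
#   from .folder.func2 import b
#   del func1, folder        # drop the submodule names from the package
#   __all__ = ["a", "b"]

# the effect, demonstrated on a synthetic module object:
import types

pkg = types.ModuleType("pkg")
pkg.a, pkg.b = 1, 2
pkg.func1 = types.ModuleType("pkg.func1")    # what `from .func1 import a` leaves behind
pkg.folder = types.ModuleType("pkg.folder")  # what `from .folder...` leaves behind
del pkg.func1, pkg.folder
print(sorted(n for n in dir(pkg) if not n.startswith("_")))  # ['a', 'b']
```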
|
<python><import><module><subdirectory>
|
2024-01-19 12:05:28
| 1
| 345
|
mrtasktat
|
77,845,848
| 13,147,413
|
GroupBy, column selection and mean in Polars
|
<p>I'm trying to translate this pandas statement:</p>
<pre><code> df.groupby(['X', 'Y'])['Z'].mean()
</code></pre>
<p>to Polars, but I'm stuck at this point:</p>
<pre><code> df.group_by(['X', 'Y'], maintain_order=True).select(pl.col('Z').mean())
</code></pre>
<p>where i get this error:</p>
<pre><code> AttributeError: 'GroupBy' object has no attribute 'select'
</code></pre>
<p>Any help would be much appreciated!</p>
|
<python><dataframe><python-polars>
|
2024-01-19 11:50:10
| 1
| 881
|
Alessandro Togni
|
77,845,797
| 11,149,943
|
Count how many unique values there are in each row of a matrix (2d array) in Python
|
<p>I have a 2d array and I would like to find how many unique values there are in each row. For example:</p>
<pre><code>arr = np.array([[3, 4, 4, 4, 3, 4],
                [4, 4, 4, 4, 4, 4],
                [3, 3, 3, 2, 3, 2],
                [2, 3, 3, 1, 2, 2]])
</code></pre>
</code></pre>
<p>Then the output that I desire to obtain would be:</p>
<pre><code>res = np.array([2,1,2,3])
</code></pre>
<p>Because there are two unique values in the first row, one unique value in the second row, two in the third row and three in the fourth row.</p>
<p>How can I achieve this? I was not able to manage it using <code>np.unique</code> and <code>np.bincount</code>.</p>
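<p>One vectorized approach (a sketch, not necessarily the only way): sort each row so duplicates become adjacent, then count the value changes along the row:</p>

```python
import numpy as np

arr = np.array([[3, 4, 4, 4, 3, 4],
                [4, 4, 4, 4, 4, 4],
                [3, 3, 3, 2, 3, 2],
                [2, 3, 3, 1, 2, 2]])

s = np.sort(arr, axis=1)                         # duplicates become adjacent
res = (np.diff(s, axis=1) != 0).sum(axis=1) + 1  # value changes + 1 = distinct count
print(res.tolist())  # [2, 1, 2, 3]
```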
|
<python><numpy><matrix><multidimensional-array><unique>
|
2024-01-19 11:42:42
| 3
| 331
|
Diego Ruiz
|
77,845,774
| 8,176,763
|
bulk insert into postgres db using fastapi and returning ids
|
<p>My goal is to insert values into a db table using a FastAPI <code>post</code> method and return the auto-generated primary key <code>id</code>s from the db as a response model. I have successfully executed the post method, but I'm not returning anything. How can I return this data?</p>
<p>This is my <code>models.py</code>:</p>
<pre><code>from sqlalchemy.orm import Mapped, mapped_column, DeclarativeBase
from sqlalchemy import Identity, Enum
from pydantic import BaseModel, ConfigDict, HttpUrl, EmailStr
import enum

class Base(DeclarativeBase):
    pass

class Color(str, enum.Enum):
    RED = 'RED'
    BLUE = 'BLUE'

class Example(Base):
    __tablename__ = 'example'
    id: Mapped[int] = mapped_column(Identity(always=True), primary_key=True)
    colors: Mapped[Color] = mapped_column(Enum(Color))

class ExampleBase(BaseModel):
    model_config = ConfigDict(from_attributes=True)
    colors: Color

class ExampleCreate(ExampleBase):
    id: int
</code></pre>
<p>And this is <code>main.py</code>:</p>
<pre><code>from db import engine, get_db
from models import *
from fastapi import FastAPI, Depends, Query, Path, Body, Cookie, Header, Response, Form, File, UploadFile, HTTPException, Request
from fastapi.responses import JSONResponse, RedirectResponse
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select, update, insert
from contextlib import asynccontextmanager
from typing import Annotated, Union, List, Any
from fastapi.responses import HTMLResponse
import uvicorn
from datetime import datetime, time, timedelta
from uuid import UUID

@asynccontextmanager
async def lifespan(app: FastAPI):
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)
    yield

app = FastAPI(lifespan=lifespan)  # type: ignore

@app.post("/create_color/")  # ,response_model=list[ExampleCreate]
async def create_colors(colors: list[ExampleBase], db: AsyncSession = Depends(get_db)):
    stmt = (insert(Example), [_.model_dump() for _ in colors])
    await db.execute(*stmt)
    await db.commit()
    # How to return a list of the inserted objects with their ID ????????????
</code></pre>
|
<python><fastapi><pydantic>
|
2024-01-19 11:38:31
| 1
| 2,459
|
moth
|
77,845,666
| 7,054,518
|
trying to place two pyspark dataframes side by side
|
<p>I'm trying to join two dataframes side by side; they don't have any column in common.</p>
<p>I have tried the code below, but it doesn't seem to produce the proper output:</p>
<pre><code>from pyspark.sql.functions import monotonically_increasing_id

df_assembled = df_assembled.withColumn("id", monotonically_increasing_id())
outlier_df = outlier_df.withColumn("id", monotonically_increasing_id())
result_df = df_assembled.join(outlier_df, df_assembled.id == outlier_df.id, how='inner')
</code></pre>
<p>I tried to print the <code>count()</code> of each dataframe, but the combined dataframe has a different count:</p>
<pre><code>print(df_assembled.count())
print(outlier_df.count())
print(result_df.count())
</code></pre>
<p>output :</p>
<pre><code>170856
170856
6144
</code></pre>
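<p>A plausible explanation (sketch): <code>monotonically_increasing_id()</code> encodes the partition number in the upper bits of the id, so two dataframes with different partition layouts assign different ids to rows at the same position, and the inner join only matches where the layouts happen to coincide:</p>

```python
def monotonic_ids(partition_sizes):
    # rough model of Spark's monotonically_increasing_id():
    # id = (partition_number << 33) + row_number_within_partition
    ids = []
    for part, size in enumerate(partition_sizes):
        ids += [(part << 33) + i for i in range(size)]
    return ids

a = monotonic_ids([3, 3])  # one frame split into partitions of 3 + 3 rows
b = monotonic_ids([2, 4])  # the other split into 2 + 4 rows
print(len(a), len(b), len(set(a) & set(b)))  # 6 6 5 -> the join keeps only 5 rows
```

<p>A <code>row_number()</code> over a window (or an RDD <code>zipWithIndex</code>) produces consecutive, matching ids instead, though the window approach pulls everything into a single partition, which deserves care at 170k+ rows.</p>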
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2024-01-19 11:18:34
| 1
| 463
|
yashaswi k
|
77,845,509
| 14,954,262
|
openpyxl - get real values of cells when creating a list
|
<p>I have this code to read a .xlsx file:</p>
<pre><code>import openpyxl

wb1 = openpyxl.load_workbook('test.xlsx')
ws1 = wb1.worksheets[0]
content = list(ws1.iter_rows(values_only=True))
</code></pre>
<p>But the <code>content</code> list contains some <code>None</code> values and some <code>datetime.datetime(2023, 12, 21, 0, 0)</code> objects.</p>
<p>How can I make sure the <code>content</code> list contains only plain values,
so that <code>None</code> becomes <code>''</code> and <code>datetime.datetime(2023, 12, 21, 0, 0)</code> becomes a real date?</p>
<p>I have tried</p>
<pre><code>content = list(str(ws1.iter_rows(values_only=True)))
</code></pre>
</code></pre>
<p>but it's not working</p>
<p>Thanks</p>
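<p>One way (a sketch; the day/month/year date format is an assumption) is to normalize each cell value before building the list:</p>

```python
import datetime

def cell_to_str(value):
    # None -> '' and datetime -> a readable date string; everything else as text
    if value is None:
        return ""
    if isinstance(value, datetime.datetime):
        return value.strftime("%d/%m/%Y")
    return str(value)

# with openpyxl this would be:
# content = [[cell_to_str(v) for v in row]
#            for row in ws1.iter_rows(values_only=True)]

row = (None, datetime.datetime(2023, 12, 21, 0, 0), 42)
print([cell_to_str(v) for v in row])  # ['', '21/12/2023', '42']
```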
|
<python><list><openpyxl>
|
2024-01-19 10:51:00
| 0
| 399
|
Nico44044
|
77,845,472
| 16,077,901
|
How to get tuya bluetooth devices with python?
|
<p>I used this module</p>
<p><a href="https://pypi.org/project/tinytuya/" rel="nofollow noreferrer">https://pypi.org/project/tinytuya/</a></p>
<p>to work with my wifi tuya devices.</p>
<p>Recently, I added a Bluetooth device (a temperature & humidity sensor) that works well with the Smart Life Android app, but I did not find a module for it.</p>
<p>I tried the following script from <a href="https://stackoverflow.com/questions/48898476/method-defaultadapter-with-signature-on-interface-org-bluez-manager-doesn">Method "DefaultAdapter" with signature "" on interface "org.bluez.Manager" doesn't exist in raspberry pi 3</a></p>
<p>But I did not get any device.</p>
<p>Any idea why this script does not work? How can I deal with Tuya Bluetooth devices?</p>
<p>Thanks in advance.</p>
<pre><code>#!/usr/bin/env python3
import dbus

bus = dbus.SystemBus()

SERVICE_NAME = "org.bluez"
OBJECT_IFACE = "org.freedesktop.DBus.ObjectManager"
ADAPTER_IFACE = SERVICE_NAME + ".Adapter1"
DEVICE_IFACE = SERVICE_NAME + ".Device1"
PROPERTIES_IFACE = "org.freedesktop.DBus.Properties"

manager = dbus.Interface(bus.get_object("org.bluez", "/"), "org.freedesktop.DBus.ObjectManager")
objects = manager.GetManagedObjects()
for path, ifaces in objects.items():
    adapter = ifaces.get(ADAPTER_IFACE)
    if adapter is None:
        continue
    obj = bus.get_object(SERVICE_NAME, path)
    adapter = dbus.Interface(obj, ADAPTER_IFACE)
</code></pre>
<p><code>bluetoothctl show</code>:</p>
<pre><code>Controller 00:08:CA:F1:65:CD (public)
Name: Artens
Alias: Artens
Class: 0x005c010c
Powered: yes
PowerState: on
Discoverable: no
DiscoverableTimeout: 0x000000b4
Pairable: yes
UUID: SIM Access (0000112d-0000-1000-8000-00805f9b34fb)
UUID: PnP Information (00001200-0000-1000-8000-00805f9b34fb)
UUID: Audio Source (0000110a-0000-1000-8000-00805f9b34fb)
UUID: Audio Sink (0000110b-0000-1000-8000-00805f9b34fb)
UUID: Message Notification Se.. (00001133-0000-1000-8000-00805f9b34fb)
UUID: A/V Remote Control Target (0000110c-0000-1000-8000-00805f9b34fb)
UUID: A/V Remote Control (0000110e-0000-1000-8000-00805f9b34fb)
UUID: Phonebook Access Server (0000112f-0000-1000-8000-00805f9b34fb)
UUID: Message Access Server (00001132-0000-1000-8000-00805f9b34fb)
UUID: OBEX File Transfer (00001106-0000-1000-8000-00805f9b34fb)
UUID: OBEX Object Push (00001105-0000-1000-8000-00805f9b34fb)
UUID: IrMC Sync (00001104-0000-1000-8000-00805f9b34fb)
UUID: Vendor specific (00005005-0000-1000-8000-0002ee000001)
Modalias: usb:v1D6Bp0246d0542
Discovering: no
</code></pre>
<p>Script 2:</p>
<pre><code>#!/usr/bin/env python3
# https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/test/bluezutils.py
# SPDX-License-Identifier: LGPL-2.1-or-later
import dbus

SERVICE_NAME = "org.bluez"
ADAPTER_INTERFACE = SERVICE_NAME + ".Adapter1"
DEVICE_INTERFACE = SERVICE_NAME + ".Device1"

def get_managed_objects():
    bus = dbus.SystemBus()
    manager = dbus.Interface(bus.get_object("org.bluez", "/"),
                             "org.freedesktop.DBus.ObjectManager")
    return manager.GetManagedObjects()

def find_adapter(pattern=None):
    return find_adapter_in_objects(get_managed_objects(), pattern)

def find_adapter_in_objects(objects, pattern=None):
    bus = dbus.SystemBus()
    for path, ifaces in objects.items():
        adapter = ifaces.get(ADAPTER_INTERFACE)
        if adapter is None:
            continue
        if not pattern or pattern == adapter["Address"] or \
                path.endswith(pattern):
            obj = bus.get_object(SERVICE_NAME, path)
            return dbus.Interface(obj, ADAPTER_INTERFACE)
    raise Exception("Bluetooth adapter not found")

def find_device(device_address, adapter_pattern=None):
    return find_device_in_objects(get_managed_objects(), device_address,
                                  adapter_pattern)

def find_device_in_objects(objects, device_address, adapter_pattern=None):
    bus = dbus.SystemBus()
    path_prefix = ""
    if adapter_pattern:
        adapter = find_adapter_in_objects(objects, adapter_pattern)
        path_prefix = adapter.object_path
    for path, ifaces in objects.items():
        device = ifaces.get(DEVICE_INTERFACE)
        if device is None:
            continue
        if (device["Address"] == device_address and
                path.startswith(path_prefix)):
            obj = bus.get_object(SERVICE_NAME, path)
            return dbus.Interface(obj, DEVICE_INTERFACE)
    raise Exception("Bluetooth device not found")

print("Start")
get_managed_objects()
find_adapter()
find_device('hci0')  # note: find_device expects a device address; 'hci0' is an adapter name
print("End")
</code></pre>
</code></pre>
|
<python><python-3.x><linux><bluetooth><tuya>
|
2024-01-19 10:44:15
| 0
| 1,331
|
Dri372
|
77,845,370
| 536,262
|
How can I find out what is causing a namespace error and verify modules from pip
|
<p>I'm using the package pyJWT extensively to inspect and manipulate JWTs, but suddenly all newly built containers started failing, saying <code>jwt.decode()</code> does not exist.</p>
<p>I suspect the jwt namespace has been injected from something else, or the cache has been poisoned.</p>
<pre class="lang-py prettyprint-override"><code># pip show pyJWT
Name: PyJWT
Version: 2.8.0
Summary: JSON Web Token implementation in Python
Home-page: https://github.com/jpadilla/pyjwt
Author: Jose Padilla
Author-email: hello@jpadilla.com
License: MIT
Location: /root/.pyenv/versions/3.12.1/envs/base/lib/python3.12/site-packages
Requires:
Required-by: multitenant-fullstack-test
(base) root@950fc89a6ab0:/test/multitenant-fullstack-test# python
Python 3.12.1 (main, Jan 11 2024, 15:16:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import jwt
>>> dir(jwt)
['AbstractJWKBase', 'AbstractSigningAlgorithm', 'JWKSet', 'JWT', '__all__', '__builtins__',
'__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__',
'__spec__', 'exceptions', 'jwa', 'jwk', 'jwk_from_bytes', 'jwk_from_der', 'jwk_from_dict',
'jwk_from_pem', 'jwkset', 'jws', 'jwt', 'std_hash_by_alg', 'supported_key_types',
'supported_signing_algorithms', 'utils']
</code></pre>
<p>As you can see, the namespace is quite empty. The workaround for me was to reinstall it:</p>
<pre class="lang-py prettyprint-override"><code># pip install pyJWT
Requirement already satisfied: pyJWT in /root/.pyenv/versions/3.12.1/envs/base/lib/python3.12/site-packages (2.8.0)
# pip uninstall pyJWT -y
Found existing installation: PyJWT 2.8.0
Uninstalling PyJWT-2.8.0:
Successfully uninstalled PyJWT-2.8.0
# pip install pyJWT
Collecting pyJWT
Using cached PyJWT-2.8.0-py3-none-any.whl.metadata (4.2 kB)
Using cached PyJWT-2.8.0-py3-none-any.whl (22 kB)
Installing collected packages: pyJWT
Successfully installed pyJWT-2.8.0
# python
Python 3.12.1 (main, Jan 11 2024, 15:16:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import jwt
>>> dir(jwt)
['DecodeError', 'ExpiredSignatureError', 'ImmatureSignatureError', 'InvalidAlgorithmError',
'InvalidAudienceError', 'InvalidIssuedAtError', 'InvalidIssuerError', 'InvalidKeyError',
'InvalidSignatureError', 'InvalidTokenError', 'MissingRequiredClaimError', 'PyJWK',
'PyJWKClient', 'PyJWKClientConnectionError', 'PyJWKClientError', 'PyJWKError', 'PyJWKSet',
'PyJWKSetError', 'PyJWS', 'PyJWT', 'PyJWTError', '__all__', '__author__', '__builtins__',
'__cached__', '__copyright__', '__description__', '__doc__', '__email__', '__file__',
'__license__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__title__',
'__uri__', '__url__', '__version__', 'algorithms', 'api_jwk', 'api_jws', 'api_jwt',
'decode', 'encode', 'exceptions', 'get_algorithm_by_name', 'get_unverified_header',
'jwk_set_cache', 'jwks_client', 'register_algorithm', 'types', 'unregister_algorithm',
'utils', 'warnings']
</code></pre>
<p>Then it is fine again.</p>
<p>I've tried rebuilding the container several times, but this error still appears, so
my current workaround will be to make the first tests that use jwt check for this and reinstall if the required attributes are missing.</p>
<p>Coming from Perl, I'm used to running unit tests when installing packages from CPAN. Is there any way or module that can be used for this?</p>
<p>UPDATE:
<code>help(jwt)</code> showed me that the <code>https://pypi.org/project/jwt/</code> module was installed; it must have been pulled in as a dependency by one of the other modules. It does show up in <code>pip list</code> as <code>jwt 1.3.1</code>:</p>
<pre class="lang-py prettyprint-override"><code>pip list
Package Version Editable project location
-------------------------- ------------ --------------------------------
aenum 3.1.15
aiohttp 3.9.1
aiosignal 1.3.1
allure-pytest 2.13.2
allure-python-commons 2.13.2
ansi2html 1.9.1
attrs 23.2.0
Authlib 1.2.0
beautifulsoup4 4.12.3
certifi 2023.11.17
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
coverage 7.4.0
cryptography 41.0.7
curlify 2.2.1
deepdiff 6.7.1
dill 0.3.7
dparse 0.6.4b0
elastic-transport 8.11.0
elasticsearch 8.11.1
flatten-dict 0.4.2
frozenlist 1.4.1
h11 0.14.0
headless-selenium-test 0.1 /test/headless-selenium-test
idna 3.6
iniconfig 2.0.0
Jinja2 3.1.3
jsonschema 4.21.0
jsonschema-specifications 2023.12.1
jwt 1.3.1
ldap3 2.9.1
Levenshtein 0.23.0
markdown-it-py 3.0.0
MarkupSafe 2.1.3
marshmallow 3.20.2
mdurl 0.1.2
multidict 6.0.4
multitenant-fullstack-test 0.1 /test/multitenant-fullstack-test
ordered-set 4.1.0
outcome 1.3.0.post0
packaging 23.0
pillow 10.2.0
pip 23.3.2
pluggy 1.3.0
psutil 5.9.7
py 1.11.0
pyasn1 0.5.1
pycparser 2.21
pydantic 1.10.13
Pygments 2.17.2
PyJWT 2.8.0
PySocks 1.7.1
pytest 7.4.4
pytest-cov 4.1.0
pytest-html 1.0.mortenb-fork-418
pytest-metadata 3.0.0
pytest-reportportal 5.3.1
python-dateutil 2.8.2
python-Levenshtein 0.23.0
pytz 2023.3.post1
rapidfuzz 3.6.1
referencing 0.32.1
reportportal-client 5.5.4
requests 2.31.0
rich 13.7.0
rpds-py 0.17.1
ruamel.yaml 0.18.5
ruamel.yaml.clib 0.2.8
safety 3.0.0
safety-schemas 0.0.1
selenium 4.16.0
setuptools 69.0.3
six 1.16.0
snaptime 0.2.4
sniffio 1.3.0
sortedcontainers 2.4.0
soupsieve 2.5
thefuzz 0.20.0
trio 0.24.0
trio-websocket 0.11.1
typer 0.9.0
typing_extensions 4.9.0
tzlocal 5.2
urllib3 2.1.0
wsproto 1.2.0
yarl 1.9.4
</code></pre>
|
<python><pip>
|
2024-01-19 10:25:44
| 1
| 3,731
|
MortenB
|
77,845,362
| 201,657
|
What is the correct procedure for installing the Snowflake Python API
|
<p>I would like to use Python to interact with my Snowflake account. To get started I am following the <a href="https://quickstarts.snowflake.com/guide/getting-started-snowflake-python-api/index.html#1" rel="nofollow noreferrer">Getting started with the Snowflake Python API</a> quickstart which clearly states to run the following commands:</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m venv '.venv'
source '.venv/bin/activate'
pip install snowflake
</code></pre>
<p>Well I tried that then subsequently attempted to run the simplest of code, <code>from snowflake.core import Root</code> which I took from <a href="https://quickstarts.snowflake.com/guide/getting-started-snowflake-python-api/index.html#2" rel="nofollow noreferrer">page 3 of the quickstart</a>. It failed:</p>
<pre><code>➜ ~ cd /tmp
➜ /tmp mkdir test-snowflake
➜ /tmp cd test-snowflake
➜ test-snowflake python -m venv .venv && \
python -m pip install --upgrade pip && \
source .venv/bin/activate
Requirement already satisfied: pip in /Users/jamiethomson/.pyenv/versions/3.11.2/lib/python3.11/site-packages (23.3.2)
(.venv) ➜ test-snowflake pip install snowflake
Collecting snowflake
Using cached snowflake-0.5.1-py3-none-any.whl (1.4 kB)
Collecting snowflake-core==0.5.1
Using cached snowflake_core-0.5.1-py3-none-any.whl (323 kB)
Collecting snowflake
Using cached snowflake-0.5.0-py3-none-any.whl (1.3 kB)
Collecting snowflake-core==0.5.0
Using cached snowflake_core-0.5.0-py3-none-any.whl (323 kB)
Collecting snowflake
Using cached snowflake-0.4.0-py3-none-any.whl (1.3 kB)
Collecting snowflake-core==0.4.0
Using cached snowflake_core-0.4.0-py3-none-any.whl (323 kB)
Collecting snowflake
Using cached snowflake-0.0.4.tar.gz (2.4 kB)
Preparing metadata (setup.py) ... done
Installing collected packages: snowflake
DEPRECATION: snowflake is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for snowflake ... done
Successfully installed snowflake-0.0.4
[notice] A new release of pip available: 22.3.1 -> 23.3.2
[notice] To update, run: pip install --upgrade pip
(.venv) ➜ test-snowflake python -c "from snowflake.core import Root"
<string>:1: DeprecationWarning: This package has been renamed to snowflake_uuid and will be removed shortly. Please update immediately.
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'snowflake.core'; 'snowflake' is not a package
</code></pre>
<p>Screenshot to prove it:</p>
<p><a href="https://i.sstatic.net/UldZ3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UldZ3.png" alt="Screenshot of commands and output" /></a></p>
<p>I have followed the commands exactly but they did not behave as advertised. I am assuming that this quickstart is out-of-date however it is linked to from the API documentation: <a href="https://docs.snowflake.com/developer-guide/snowflake-python-api/snowflake-python-overview" rel="nofollow noreferrer">Snowflake documentation > Developer > Snowflake Python API: Managing Snowflake objects with Python</a></p>
<p>I don't know what is wrong here. Can someone tell me the correct steps for using Snowflake's Python API?</p>
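<p>One detail in the transcript worth noting (an observation, not necessarily the whole story): <code>python -m pip install --upgrade pip</code> ran before <code>source .venv/bin/activate</code>, so it upgraded the pyenv interpreter's pip rather than the venv's. That matches the notice "A new release of pip available: 22.3.1" printed inside the venv, where the old pip then backtracked through snowflake releases down to the 0.0.4 squatter package. A quick way to check which environment the current interpreter belongs to:</p>

```python
import sys

# inside an activated venv, sys.prefix points at the venv while
# sys.base_prefix points at the interpreter the venv was created from
in_venv = sys.prefix != sys.base_prefix
print(in_venv)  # True when this runs inside an activated venv
```

<p>Activating the venv first and then upgrading pip from inside it would make <code>pip install snowflake</code> use the current resolver.</p>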
|
<python><snowflake-cloud-data-platform>
|
2024-01-19 10:24:08
| 2
| 12,662
|
jamiet
|
77,845,045
| 726,730
|
python virtual devices - How to reuse camera in more than one application?
|
<p>Is there any available way to register a camera and a microphone as a virtual camera and virtual microphone using Python, so I can reuse them in more than one application?</p>
|
<python><virtual>
|
2024-01-19 09:35:05
| 1
| 2,427
|
Chris P
|
77,845,041
| 9,797,207
|
MyPy Type Check Error: No attribute found in Base class for subclass methods
|
<p>I'm facing an issue with MyPy type checking, and I would appreciate some guidance on resolving it. I have a base class Base and two subclasses Sub1 and Sub2. Each subclass has its own methods (b and c, respectively).</p>
<pre><code>class Base:
    def a(self) -> None:
        print("a")

class Sub1(Base):
    def b(self) -> None:
        print("b")

class Sub2(Base):
    def c(self) -> None:
        print("c")
</code></pre>
<p>I also have a function check that takes an object of type Base and performs different actions based on whether it's an instance of Sub1 or Sub2:</p>
<pre><code>def check(obj: Base) -> None:
    obj.a()
    if isinstance(obj, Sub1):
        obj.b()
    elif isinstance(obj, Sub2):
        obj.c()
</code></pre>
<p>However, when I run MyPy, I get the following error: <em><strong>"Base" has no attribute "b" [attr-defined]. Errors were detected by MyPy (exit code 1).</strong></em></p>
<p>I understand that MyPy is not aware of the subclass-specific methods, and I'm currently using <em><strong>cast</strong></em> to work around this issue. Is there a better way to handle this without explicit casting? I've considered using <em><strong>Union</strong></em> types, but I'm not sure that's the best approach, as additional subclasses will be added and this pattern will be used in many functions.</p>
<p>Considering the larger use case, I have just given an example of the approach, but I will be using it in many places. Any suggestions on how to structure the types or refactor the code to make MyPy happy would be greatly appreciated.</p>
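<p>Incidentally, <code>isinstance</code> checks are the standard way mypy narrows a <code>Base</code> to a subclass, so a minimal self-contained reproduction like the one below is worth running through mypy on its own; if it passes, the error likely comes from a detail not shown in the reduction:</p>

```python
class Base:
    def a(self) -> None:
        print("a")

class Sub1(Base):
    def b(self) -> None:
        print("b")

class Sub2(Base):
    def c(self) -> None:
        print("c")

def check(obj: Base) -> None:
    obj.a()
    if isinstance(obj, Sub1):
        obj.b()   # obj is narrowed to Sub1 in this branch
    elif isinstance(obj, Sub2):
        obj.c()   # obj is narrowed to Sub2 in this branch

check(Sub1())  # prints "a" then "b"
```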
|
<python><inheritance><casting><mypy><python-typing>
|
2024-01-19 09:34:42
| 0
| 467
|
saibhaskar
|
77,844,971
| 4,405,942
|
Django: Unknown command: 'collectstatic' when using call_command
|
<p>I have a Django project with <code>"django.contrib.staticfiles"</code> in <code>INSTALLED_APPS</code>. Therefore, I can run <code>./manage.py collectstatic</code>. But I need to use <a href="https://docs.djangoproject.com/en/5.0/ref/django-admin/#running-management-commands-from-your-code" rel="nofollow noreferrer">call_command</a>.</p>
<p>Running <code>python -c 'from django.core.management import call_command; call_command("collectstatic")'</code> results in <code>django.core.management.base.CommandError: Unknown command: 'collectstatic'</code></p>
<p>What am I doing wrong? Shouldn't the 2 commands be absolutely identical? Obviously, environments, paths, etc are correct in both cases. I am running both commands from the same shell in a docker container.</p>
|
<python><django><manage.py><collectstatic>
|
2024-01-19 09:23:35
| 0
| 2,180
|
Gerry
|
77,844,778
| 1,901,071
|
Converting Polars to Pandas raises error DLL load failed while importing lib
|
<p>I am practicing Python using Polars. I have a data-frame in which I have generated a bunch of calculations, and I then want to convert it to a Pandas data-frame so that I can apply an isolation forest to the dataset. I have installed polars, pyarrow, and pandas.</p>
<p>My code and the versions of the packages are below. My OS is Windows.</p>
<pre class="lang-py prettyprint-override"><code># Name Version Build Channel
pandas 1.5.3 py310h1c4a608_1 https://conda.anaconda.org/conda-forge
pandas-flavor 0.6.0 pyhd8ed1ab_1 https://conda.anaconda.org/conda-forge
polars 0.20.5 pypi_0 pypi
pyarrow 10.0.1 pypi_0 pypi
pyarrow-hotfix 0.6 pyhd8ed1ab_0 https://conda.anaconda.org/conda-forge
# ADD OUTLIER COLUMNS---------------------------------------------------------
# Using Isolation Forests to find outliers
from sklearn.ensemble import IsolationForest
import pandas as pd
import polars as pl
features = df_calc.drop('status')
features.to_pandas()
ImportError: DLL load failed while importing lib: The specified module could not be found.
</code></pre>
<p>Can anyone help?</p>
|
<python><pandas><anaconda><python-polars>
|
2024-01-19 08:53:25
| 2
| 2,946
|
John Smith
|
77,844,740
| 7,253,993
|
How to match specific Unix shell-style strings using Python's fnmatch?
|
<p>I would like to create a Unix shell-style pattern that will only match strings with specific suffixes. The pattern in question starts with <code>0.1</code> and can finish with <code>test</code> or <code>alpha</code> or <code>.</code> .</p>
<p>This means that <code>0.1test</code> and <code>0.1alpha</code> should match but not <code>0.1beta</code> or <code>0-1alpha</code>.
I've constructed the following patterns:</p>
<ol>
<li><code>0.1{.,test,alpha}</code></li>
<li><code>0.1{["."],["test"],["alpha"]}</code></li>
</ol>
<p>but both fail when tested as such:</p>
<pre><code>import fnmatch

result = fnmatch.fnmatch(v, glob_pattern)
if result:
    print('SUCCESS')
else:
    print('FAILURE')
</code></pre>
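<p>For what it's worth, a sketch of the workaround I'd expect to need: <code>fnmatch</code> understands only <code>*</code>, <code>?</code>, <code>[seq]</code>, and <code>[!seq]</code> wildcards and has no <code>{a,b}</code> brace alternation, so one pattern per allowed suffix is a simple substitute (assuming the three suffixes are meant literally):</p>

```python
import fnmatch

# fnmatch has no {a,b} brace alternation, so try one pattern per
# allowed suffix instead. '.' in a pattern is matched literally.
patterns = ["0.1test", "0.1alpha", "0.1."]

def matches(value):
    return any(fnmatch.fnmatch(value, p) for p in patterns)

print(matches("0.1test"))   # True
print(matches("0-1alpha"))  # False
```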
|
<python><regex><fnmatch>
|
2024-01-19 08:45:38
| 1
| 6,300
|
kingJulian
|
77,844,696
| 18,136,808
|
Compression of ordered integer list
|
<p>I'm reading the book <a href="https://nlp.stanford.edu/IR-book/information-retrieval-book.html" rel="nofollow noreferrer">"Introduction to information retrieval"</a> and I have a practical doubt. In the book there is a <a href="https://nlp.stanford.edu/IR-book/pdf/05comp.pdf" rel="nofollow noreferrer">chapter</a> dedicated to the compression of the index (dictionary and posting list). For the posting list, an encoding of the document list is explained.</p>
<p>Example:
The term "book" has a posting list for example "1 4 7 19 20 25" which means that it is contained in this documents ids.</p>
<p>In this case the gaps between the ids are computed as "1 3 3 12 1 5", so with a linear scan we can retrieve each document id by summing the previous gaps.</p>
<p>It is then explained that the gap between two DocIDs will be small in most cases, and that it's possible to use an encoding like variable byte codes or gamma encoding.</p>
<p>With gamma encoding we obtain:</p>
<ul>
<li>1 -> 0</li>
<li>3 -> 101</li>
<li>3 -> 101</li>
<li>12 -> (binary 1100) -> (offset 100, of length 3) -> (unary(3) = 1110) -> unary + offset = 1110100, which is the gamma code for 12</li>
<li>1 -> 0</li>
<li>5 -> 110101</li>
</ul>
<p>So the result of my compression with gamme encoding for "1 3 3 12 1 5" is "010110111101000110101"</p>
<p>My doubt is how to obtain this compression in practice. I don't understand, practically, how to store this data to reduce the space complexity, because right now I'm using pickle to store the dictionary that I load from a txt file, without the benefit of having a list of ordered integers.</p>
<pre><code>def pickle_data(path: str, posting_list):
    out = open(path, 'wb')
    pickle.dump(posting_list, out)
    out.close()
</code></pre>
<p>Can someone explain how to exploit this property of an ordered list of (mostly) small integers?</p>
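<p>A sketch of how the bit string can actually be stored compactly with plain Python (instead of pickling a list of ints): build the gamma codes as a string of '0'/'1' characters and pack every 8 of them into one byte. The helper names are mine, not from the book.</p>

```python
def gamma_encode(n):
    # Gamma code: unary code for the offset length, then the offset
    # (binary representation of n with its leading 1 removed).
    if n == 1:
        return "0"
    offset = bin(n)[3:]                 # drop '0b1'
    return "1" * len(offset) + "0" + offset

def compress_postings(doc_ids):
    # Gaps between consecutive doc ids (the first gap is the first id).
    gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]
    bits = "".join(gamma_encode(g) for g in gaps)
    # Pad to a whole number of bytes and pack the bit string into bytes;
    # the true bit count must be stored too so padding can be ignored.
    padded = bits + "0" * (-len(bits) % 8)
    packed = bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))
    return packed, len(bits)

packed, nbits = compress_postings([1, 4, 7, 19, 20, 25])
print(nbits, packed)  # 20 bits packed into 3 bytes
```

Pickling the resulting <code>bytes</code> object (plus the bit count) stores this posting list in 3 bytes, versus one machine word per document ID for a plain Python list.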
|
<python><encoding><binary><compression><information-retrieval>
|
2024-01-19 08:36:13
| 1
| 359
|
carlo97
|
77,844,483
| 4,521,319
|
LeetCode - Minimum Falling Path Sum - question on memoization
|
<p>I am trying to solve this leetcode problem: <a href="https://leetcode.com/problems/minimum-falling-path-sum/description" rel="nofollow noreferrer">https://leetcode.com/problems/minimum-falling-path-sum/description</a></p>
<blockquote>
<p>Given an n x n array of integers matrix, return the minimum sum of any falling path through matrix.</p>
<p>A falling path starts at any element in the first row and chooses the
element in the next row that is either directly below or diagonally
left/right. Specifically, the next element from position (row, col)
will be (row + 1, col - 1), (row + 1, col), or (row + 1, col + 1).</p>
</blockquote>
<p>It's a dynamic programming problem that I wanted to solve using recursion and memoization. The editorial section provided a java solution using <code>row</code> and <code>col</code> for memoization like below:</p>
<pre><code>class Solution {
    public int minFallingPathSum(int[][] matrix) {
        int minFallingSum = Integer.MAX_VALUE;
        Integer memo[][] = new Integer[matrix.length][matrix[0].length];

        // start a DFS (with memoization) from each cell in the top row
        for (int startCol = 0; startCol < matrix.length; startCol++) {
            minFallingSum = Math.min(minFallingSum,
                findMinFallingPathSum(matrix, 0, startCol, memo));
        }
        return minFallingSum;
    }

    public int findMinFallingPathSum(int[][] matrix, int row, int col, Integer[][] memo) {
        // base cases
        if (col < 0 || col == matrix.length) {
            return Integer.MAX_VALUE;
        }
        // check if we have reached the last row
        if (row == matrix.length - 1) {
            return matrix[row][col];
        }
        // check if the results are calculated before
        if (memo[row][col] != null) {
            return memo[row][col];
        }

        // calculate the minimum falling path sum starting from each possible next step
        int left = findMinFallingPathSum(matrix, row + 1, col, memo);
        int middle = findMinFallingPathSum(matrix, row + 1, col + 1, memo);
        int right = findMinFallingPathSum(matrix, row + 1, col - 1, memo);

        memo[row][col] = Math.min(left, Math.min(middle, right)) + matrix[row][col];
        return memo[row][col];
    }
}
</code></pre>
<p>my initial approach using python was like below:</p>
<pre><code>class Solution:
    def minFallingPathSum(self, matrix: List[List[int]]) -> int:
        d = {}
        min_sum = sys.maxsize
        for i in range(len(matrix)):
            min_sum = min(min_sum, self.recur(matrix, 1, i, matrix[0][i], d))
        return min_sum

    def recur(self, matrix: [], row: int, col: int, sum: int, d: {}):
        if row >= len(matrix):
            return sum
        if (row, col) not in d:
            l = []
            l.append(self.recur(matrix, row + 1, col, sum + matrix[row][col], d))
            if col - 1 >= 0:
                l.append(self.recur(matrix, row + 1, col - 1, sum + matrix[row][col-1], d))
            if col + 1 < len(matrix):
                l.append(self.recur(matrix, row + 1, col + 1, sum + matrix[row][col+1], d))
            d[row,col] = min(l)
        return d[row,col]
</code></pre>
<p>but it's failing with a wrong answer after 18/50 test cases. I changed it to below by using the <code>sum</code> along with <code>row</code> and <code>col</code> for memoization like below:</p>
<pre><code>class Solution:
    def minFallingPathSum(self, matrix: List[List[int]]) -> int:
        d = {}
        min_sum = sys.maxsize
        for i in range(len(matrix)):
            min_sum = min(min_sum, self.recur(matrix, 1, i, matrix[0][i], d))
        return min_sum

    def recur(self, matrix: [], row: int, col: int, sum: int, d: {}):
        if row >= len(matrix):
            return sum
        if (row, col, sum) not in d:
            l = []
            l.append(self.recur(matrix, row + 1, col, sum + matrix[row][col], d))
            if col - 1 >= 0:
                l.append(self.recur(matrix, row + 1, col - 1, sum + matrix[row][col-1], d))
            if col + 1 < len(matrix):
                l.append(self.recur(matrix, row + 1, col + 1, sum + matrix[row][col+1], d))
            d[row,col,sum] = min(l)
        return d[row,col,sum]
</code></pre>
<p>this is working but time limit exceeded after 43/50 test cases.</p>
<p>I am wondering why my python code with using <code>(row, col)</code> for memoization is not working where as it's working for the Java code in the editorial.</p>
<p>any help would be appreciated.</p>
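<p>For reference, the Java solution memoizes the best <em>suffix</em> sum starting at (row, col), which is independent of how the cell was reached. Threading the running sum through the recursion means the value cached at (row, col) silently bakes in whichever prefix reached it first (wrong answers), while keying on (row, col, sum) makes almost every cache entry unique (no speedup). A sketch of the suffix-only formulation in Python:</p>

```python
import sys
from functools import lru_cache

def min_falling_path_sum(matrix):
    n = len(matrix)

    @lru_cache(maxsize=None)
    def best(row, col):
        # Minimum falling path sum of the suffix starting at (row, col);
        # it does not depend on any accumulated prefix sum.
        if col < 0 or col >= n:
            return sys.maxsize
        if row == n - 1:
            return matrix[row][col]
        return matrix[row][col] + min(best(row + 1, col - 1),
                                      best(row + 1, col),
                                      best(row + 1, col + 1))

    return min(best(0, c) for c in range(n))

print(min_falling_path_sum([[2, 1, 3], [6, 5, 4], [7, 8, 9]]))  # 13
```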
|
<python><recursion><dynamic-programming><memoization>
|
2024-01-19 07:53:35
| 1
| 925
|
Hemanth Annavarapu
|
77,844,417
| 6,364,117
|
Cannot read hx-includes value on flask view using hx-put
|
<p>I cannot read the value of a select element via a htmx PUT <code>hx-put</code>, method to the flask view function. The select is in a modal body. I am almost certain I need to read the PUT value from the <code>request.get_json()</code> function. The put executes and I can print the <code>request</code> object in the view function, but no data.</p>
<pre><code><div class="modal-dialog modal-dialog-centered">
  <div class="modal-content">
    <div class="modal-header">
      <h5 id="muctor" class="modal-title">{{ title }}</h5>
      <h5 class="modal-title">{{ title }}</h5>
    </div>
    <div class="modal-body">
      <div class="dropdown">
        <select id="selected_storage" name="pavvy" class="form-select form-select-sm" aria-label=".form-select-sm example">
          {% for i in items %}
          <option value="{{ i.id }}">{{ i.name }}</option>
          {% endfor %}
        </select>
      </div>
    </div>
    <div><input type="text" id="username" placeholder="Enter your name"></div>
    <div class="modal-footer">
      <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button>
      <button
        type="button"
        hx-put="{{ url_for('lots_bp.update_storage', id=id) }}"
        hx-include="#selected_storage"
        hx-swap="none"
        hx-headers="{'Content-Type': 'application/json'}"
        hx-trigger="click"
        class="btn btn-sm btn-primary">Submit</button>
    </div>
  </div>
</div>
</code></pre>
<pre class="lang-py prettyprint-override"><code>@lots_bp.route("/update_storage/<int:id>", methods=["PUT"])
def update_storage(id: int):
    print(request.get_json())
    return ""
</code></pre>
<p>This is giving me the following error in the console.</p>
<pre class="lang-none prettyprint-override"><code>SyntaxError: Expected property name or '}' in JSON at position 1 (line 1 column 2)
at JSON.parse (<anonymous>)
at S (htmx.min.js:1:4795)
at dr (htmx.min.js:1:34354)
at sr (htmx.min.js:1:31612)
at ce (htmx.min.js:1:37724)
at htmx.min.js:1:21402
at HTMLButtonElement.i (htmx.min.js:1:17838)
y @ htmx.min.js:1
S @ htmx.min.js:1
dr @ htmx.min.js:1
sr @ htmx.min.js:1
ce @ htmx.min.js:1
(anonymous) @ htmx.min.js:1
i @ htmx.min.js:1
htmx.min.js:1
PUT http://localhost:5000/lot/update_storage/1 415 (UNSUPPORTED MEDIA TYPE)
ce @ htmx.min.js:1
(anonymous) @ htmx.min.js:1
i @ htmx.min.js:1
htmx.min.js:1 Response Status Error Code 415 from /lot/update_storage/1
</code></pre>
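<p>The stack trace suggests htmx could not parse the <code>hx-headers</code> attribute value as JSON: single-quoted strings are not valid JSON, which reproduces the same "Expected property name" failure. A quick illustration of the parse error in Python (using the exact attribute value from the template):</p>

```python
import json

bad = "{'Content-Type': 'application/json'}"   # as written in hx-headers
good = '{"Content-Type": "application/json"}'  # valid JSON

try:
    json.loads(bad)
    bad_is_valid = True
except json.JSONDecodeError as exc:
    bad_is_valid = False
    print(exc)  # "Expecting property name enclosed in double quotes ..."

print(json.loads(good))
```

Note also that htmx submits included inputs as form-encoded parameters by default (so <code>request.form</code> may be the right place to look on the Flask side unless the <code>json-enc</code> extension is used) — an observation about htmx defaults, not something tested against this exact setup.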
|
<python><flask><htmx>
|
2024-01-19 07:40:12
| 2
| 1,422
|
godhar
|
77,843,824
| 18,926,992
|
pyttsx3 delays print statement on the previous line
|
<p>I've noticed a weird bug when using <code>pyttsx3.speak()</code> after a print statement that has the <code>end=''</code> argument set. Take a look at my code.</p>
<p>I've done a bit of research and couldn't find anything related to this issue, or a reason why it happens. The first thing that came to my mind is that maybe the default <code>print()</code> behavior won't display the statement until there's a newline — as if standard output isn't flushed until a newline is written.</p>
<pre class="lang-py prettyprint-override"><code>import pyttsx3
print("This should be before speaking, ", end='')
pyttsx3.speak("Hi")
print("and this should be after speaking")
</code></pre>
<p>The normal logic would be that it would print the first statement, speak <code>hi</code>, then print the second statement. However, it doesn't print anything until <code>pyttsx3.speak()</code> finishes. This only happens if I specify <code>end=''</code> in the print statement. If I remove it, the logic is completely fine.</p>
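<p>A plausible explanation is stdout line buffering: when attached to a terminal, output is typically flushed on newline, so a <code>print(..., end='')</code> can sit in the buffer while the blocking <code>speak()</code> call runs. A sketch with an explicit flush (the speak call is stood in by a plain callable so the snippet runs without pyttsx3):</p>

```python
def announce(speak):
    # flush=True pushes the text out before the blocking call runs
    print("This should be before speaking, ", end='', flush=True)
    speak("Hi")   # stand-in for pyttsx3.speak("Hi")
    print("and this should be after speaking")

announce(lambda msg: None)
```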
|
<python><printing><output><newline><pyttsx3>
|
2024-01-19 05:06:14
| 1
| 470
|
eternal_white
|
77,843,784
| 13,496,839
|
real-time text input issues in PyQt5 with Tamil99 keyboard layout
|
<p>I am working on a PyQt5 application with three QLineEdit widgets and one QListWidget. The first QLineEdit is for English (US) input, the second uses the Tamil Phonetic input method, and the third uses the Tamil99 keyboard layout. The application runs on Windows 10.</p>
<p>The issue arises when using the Tamil99 keyboard layout in the third QLineEdit. The textChanged signal is not emitted for every key press, and as a result, the filtered list in the QListWidget is not updated in real time. How can I resolve this?</p>
<p><strong>Problem:</strong>
When using a QLineEdit with the Tamil99 keyboard layout, the textChanged signal is not emitted for every key press, and the filtered list is not updated in real-time.</p>
<p><strong>Requirements:</strong>
Real-time Filtering: The application should capture every keystroke in the QLineEdit widgets and update the filtered list in the QListWidget in real-time.</p>
<pre><code>import sys
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5.QtCore import *
import win32api
import py_win_keyboard_layout


class Diff_Language(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Input Different Languages in Different Textboxes")

        self.lbl1 = QLabel("Input Language - English (US)")
        self.tbox1 = QLineEdit()
        self.tbox1.setFont(QFont('Arial Unicode MS', 10, QFont.Bold))
        self.tbox1.installEventFilter(self)
        self.tbox1.setMaxLength(25)
        self.tbox1.textChanged.connect(self.filter_list)

        self.lbl2 = QLabel("Input Language - Tamil Phonetic")
        self.tbox2 = QLineEdit()
        self.tbox2.setFont(QFont('Arial Unicode MS', 30, QFont.Bold))
        self.tbox2.installEventFilter(self)
        self.tbox2.setMaxLength(25)
        self.tbox2.textChanged.connect(self.filter_list)

        self.lbl3 = QLabel("Input Language - Default (English US)")
        self.tbox3 = QLineEdit()
        self.tbox3.setFont(QFont('Arial Unicode MS', 12, QFont.Bold))
        self.tbox3.installEventFilter(self)
        self.tbox3.setMaxLength(25)
        self.tbox3.textChanged.connect(self.filter_list)

        self.listbox = QListWidget()
        add_items = ["பாலாஜி","பாலா","பால்","Red", "Dark red", "Light Red", "Redish Blue", "Redish Green", "Green","Blue","Dark Blue", "Dark Green"]
        self.original_items = add_items.copy()  # Store original items for filtering
        self.listbox.addItems(add_items)

        self.vbox = QVBoxLayout()
        self.vbox.addWidget(self.lbl1)
        self.vbox.addWidget(self.tbox1)
        self.vbox.addWidget(self.lbl2)
        self.vbox.addWidget(self.tbox2)
        self.vbox.addWidget(self.lbl3)
        self.vbox.addWidget(self.tbox3)
        self.vbox.addWidget(self.listbox)
        self.setLayout(self.vbox)

        self.current_layout = 0x409  # Default layout is English (US)

    def eventFilter(self, obj, event):
        if event.type() == QEvent.FocusIn:
            self.handle_focus_in(obj)
        return super().eventFilter(obj, event)

    def handle_focus_in(self, textbox):
        if textbox == self.tbox1:
            self.current_layout = win32api.LoadKeyboardLayout("00020409")
        elif textbox == self.tbox2:
            self.current_layout = 0x449  # Tamil Phonetic language identifier
        elif textbox == self.tbox3:
            self.current_layout = win32api.LoadKeyboardLayout("00020449")  # Tamil99 keyboard layout
        py_win_keyboard_layout.change_foreground_window_keyboard_layout(self.current_layout)
        self.listbox.clear()
        self.listbox.addItems(self.original_items)

    def key_press_event(self, event):
        super(QLineEdit, self.tbox1).keyPressEvent(event)
        text = self.tbox1.text()
        print("1111111", text)
        filtered_items = [item for item in self.original_items if text.lower() in item.lower()]
        self.listbox.clear()
        self.listbox.addItems(filtered_items)

    def filter_list(self):
        text = self.sender().text()  # Get the text from the sender
        print("1111111", text)
        filtered_items = [item for item in self.original_items if text in item.lower()]
        self.listbox.clear()
        self.listbox.addItems(filtered_items)

    def handle_focus_out(self, textbox):
        # When a textbox loses focus, switch back to the default layout (English US)
        self.current_layout = win32api.LoadKeyboardLayout("00020409")
        py_win_keyboard_layout.change_foreground_window_keyboard_layout(self.current_layout)

    def closeEvent(self, event):
        self.current_layout = win32api.LoadKeyboardLayout("00020409")
        py_win_keyboard_layout.change_foreground_window_keyboard_layout(self.current_layout)
        event.accept()


def main():
    app = QApplication(sys.argv)
    mainscreen = Diff_Language()
    app.setStyle("Fusion")
    mainscreen.show()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
</code></pre>
<p>For example if we type <strong>'j' and 'Q'</strong> simultaneously its equivalent to "பா" and type <strong>"n" and "q"</strong> simultaneously its equivalent to "லா"</p>
|
<python><pyqt5><qlineedit>
|
2024-01-19 04:52:31
| 0
| 686
|
Bala
|
77,843,738
| 8,726,488
|
Python group by values using itertools groupby function
|
<p>Python code is below.</p>
<pre><code>from itertools import groupby

data = [('a', 1), ('b', 2), ('b', 3), ('c', 4), ('c', 5)]
_sorted_data = sorted(data, key=lambda element: element[0])
_res = groupby(_sorted_data, key=lambda x: x[0])
for key, value in _res:
    print(key, list(value))
</code></pre>
<p>I get the result below.</p>
<pre><code>a [('a', 1)]
b [('b', 2), ('b', 3)]
c [('c', 4), ('c', 5)]
</code></pre>
<p>My expected result is:</p>
<pre><code>a [1]
b [2,3]
c [4,5]
</code></pre>
<p>How can I transform the values?</p>
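<p>One way to get that shape is to strip the key from each pair while materializing each group, for example:</p>

```python
from itertools import groupby

data = [('a', 1), ('b', 2), ('b', 3), ('c', 4), ('c', 5)]
grouped = {
    key: [value for _, value in group]   # keep only the second element
    for key, group in groupby(sorted(data, key=lambda kv: kv[0]),
                              key=lambda kv: kv[0])
}
print(grouped)  # {'a': [1], 'b': [2, 3], 'c': [4, 5]}
```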
|
<python><python-3.x><python-itertools>
|
2024-01-19 04:37:55
| 1
| 3,058
|
Learn Hadoop
|
77,843,685
| 23,190,147
|
Printing an image in python
|
<p>I'm trying to print an image, literally to the python shell. This is only for the "see if it is possible" purposes, I'm not actually using this for anything, just a fun project of mine. Of course, I already know how to display images and have done a few projects in the past involving images, but I want to know if it is possible to actually print an image to the python shell.</p>
<p>For example, you get a shape that looks like a pyramid using these print statements:</p>
<pre><code>print(" 0 ")
print(" 0 0 ")
print(" 0 0 0 ")
print(" 0 0 0 0 ")
print(" 0 0 0 0 0 ")
print(" 0 0 0 0 0 0 ")
print(" 0 0 0 0 0 0 0 ")
print(" 0 0 0 0 0 0 0 0 ")
print("0 0 0 0 0 0 0 0 0")
</code></pre>
<p>and while this is obviously inefficient and needs for loops so that you don't have to write, say, 100 print statements in case the image is pretty complicated, it could still be a method of showing an image. (Even though PIL and Pillow are WAY better in so many ways, this is obviously a really inefficient way to display an image.)</p>
<p>I thought about certain ways to go about doing this. Obviously, we would need to get an input, so I'll have an image saved in my files (preferably a small one that isn't very detailed), and I'll get that image ready for use, something like this:</p>
<pre><code>from PIL import Image

def get_image(file_path):
    img = Image.open(file_path)
    return img

path = input("Please enter path to image: ")
img = get_image(path)
</code></pre>
<p>now we have the image ready to use. The problem comes when we need to print the image out to the shell, I was thinking about imaging modules that could kind of find all the pixels in the image to help me convert it. Something like this:</p>
<pre><code>#import libraries that help with converting the image to "print" format
def convert(img):
pass
def print_out(p_info):
pass
</code></pre>
<p>and here, I'm assuming there's a library that can help with this. This is a far-fetched idea, and since it is only for fun and entertainment, there's probably not a Python library that can do this. I'm open to any suggestions, so please let me know if you have any ideas.</p>
<p>UPDATE:</p>
<p>Resolved! I got plenty of great suggestions about the best python image packages to use.</p>
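<p>For reference, the core conversion is just mapping each pixel's brightness to a character; here is a library-free sketch on a hand-made grayscale grid. With Pillow, such a grid could come from <code>list(img.convert("L").getdata())</code> plus <code>img.size</code> — an assumed pipeline, not code from the question.</p>

```python
CHARS = "@%#*+=-:. "  # roughly dark -> light

def to_ascii(grid):
    # grid: rows of grayscale values in 0..255 (0 = black)
    scale = len(CHARS)
    return "\n".join(
        "".join(CHARS[min(p * scale // 256, scale - 1)] for p in row)
        for row in grid
    )

demo = [
    [0, 128, 255],
    [255, 128, 0],
]
print(to_ascii(demo))
```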
|
<python><image><printing>
|
2024-01-19 04:14:34
| 1
| 450
|
5rod
|
77,843,672
| 8,792,060
|
Find k smallest values of a 2D array slice of a 3D array?
|
<p>I've read through other questions where other people are trying to find the <em>k</em> smallest values with <code>np.partition</code> or <a href="https://stackoverflow.com/questions/31642353/find-the-n-smallest-items-in-a-numpy-array-of-arrays"><code>np.argpartition</code></a>. However, these seem like simple 1D or 2D arrays to operate on.</p>
<p>What I have is a 3D array where I'm taking a 2D slice of it and trying to find the 4 smallest values as such:</p>
<pre><code>import numpy as np

np.random.seed(556)
big_array = np.random.randint(0, 200, size=(15, 9, 3))

for i in range(3):
    min_vals = np.sort(big_array[:,:,i].flatten())[:4]
    print("The smallest values for last axis #{} are: {:.1f},{:.1f},{:.1f},{:.1f}".format(i, *min_vals))
</code></pre>
<p>Unfortunately, it seems like <code>np.partition</code> doesn't like it when the kth argument is larger than the dimension of whatever axis you're trying to specify, probably because I don't fully comprehend the syntax. Likewise, it doesn't look like <code>np.sort</code> or <code>np.argsort</code> have options for multiple axes arguments.</p>
<p>Is there a way to accomplish this without using a for loop or list comprehension?</p>
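<p>One way to avoid the Python-level loop is to collapse the two leading axes so each column of the flattened view is one 2D slice, then partition once along the collapsed axis; a sketch reusing the array from the question:</p>

```python
import numpy as np

np.random.seed(556)
big_array = np.random.randint(0, 200, size=(15, 9, 3))

# Each column of `flat` is one (15, 9) slice flattened to 135 values.
flat = big_array.reshape(-1, big_array.shape[-1])      # shape (135, 3)
# Rows [:4] after partitioning hold the 4 smallest per column
# (unordered until the final sort).
smallest4 = np.sort(np.partition(flat, 3, axis=0)[:4], axis=0)

# Same result as the per-slice loop from the question:
for i in range(big_array.shape[-1]):
    expected = np.sort(big_array[:, :, i].ravel())[:4]
    assert np.array_equal(smallest4[:, i], expected)
print(smallest4)
```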
|
<python><arrays><numpy><sorting>
|
2024-01-19 04:10:38
| 1
| 553
|
superasiantomtom95
|
77,843,561
| 8,460,470
|
Parsing a string input into a lambda function with multiple parameters
|
<p>How can I parse a string input into the proper form which could be used as the first <strong>callable</strong> parameter of <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer">scipy.optimize.curve_fit</a>?</p>
<pre class="lang-python prettyprint-override"><code>import numpy as np
from scipy.optimize import curve_fit
x = np.arange(100)
y = 2*np.exp(-x)*3.14/2+1
</code></pre>
<p>The string is a math formula inputted by some users with no knowledge of Python. The <code>parse</code> function takes these formulas as input:</p>
<pre class="lang-python prettyprint-override"><code>f = parse('a*exp(-x)*b/c+d')
f1 = parse('k*x+b')
f2 = parse('k*sin(x*w)+b')
</code></pre>
<p>and parse / interpret them to python lambda functions:</p>
<pre class="lang-python prettyprint-override"><code>f = lambda x,a,b,c,d: a*numpy.exp(-x)*b/c+d
f1 = lambda x,k,b: k*x+b
f2 = lambda x,k,w,b: k*sin(x*w)+b
</code></pre>
<p>so that <code>f</code> can be called by <code>curve_fit</code></p>
<pre class="lang-python prettyprint-override"><code>p,pcov = curve_fit(f,x,y)
</code></pre>
<p>I've found a similar <a href="https://stackoverflow.com/questions/75434218/evaluate-a-mathematical-expression-with-a-call-to-functions-in-a-string-in-pytho">question</a> on Stack Overflow but there are no answers or comments.</p>
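<p>For what it's worth, a sketch of one way such a <code>parse</code> could work (the names and conventions are mine): collect the identifiers in the formula, treat everything that isn't <code>x</code> or a known function as a fit parameter in alphabetical order, and close over a compiled expression. For <code>curve_fit</code> you would map the functions to their NumPy equivalents instead of <code>math</code> so the callable accepts arrays, and <code>eval</code> should only ever be fed trusted input.</p>

```python
import math
import re

SAFE_FUNCS = {"exp": math.exp, "sin": math.sin, "cos": math.cos, "pi": math.pi}

def parse(formula):
    names = set(re.findall(r"[A-Za-z_]\w*", formula))
    # Fit parameters: every name that isn't x or a known function,
    # in alphabetical order so the caller knows the argument order.
    params = sorted(n for n in names if n != "x" and n not in SAFE_FUNCS)
    code = compile(formula, "<formula>", "eval")

    def f(x, *args):
        env = dict(SAFE_FUNCS, x=x, **dict(zip(params, args)))
        return eval(code, {"__builtins__": {}}, env)

    f.param_names = params  # e.g. ['b', 'k'] for 'k*x+b'
    return f

f1 = parse("k*x+b")
print(f1.param_names)     # ['b', 'k']
print(f1(2.0, 1.0, 3.0))  # b=1, k=3 -> 7.0
```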
|
<python>
|
2024-01-19 03:29:27
| 1
| 460
|
Shannon
|
77,843,483
| 3,250,110
|
How to check if object in nested array contains a value
|
<p>In Python, this works:</p>
<pre><code>for site in userSites:
    for role in site.roles:
        if role.role == Role.ADMIN:
            return True
return False
</code></pre>
<p>I am looking for a way to do this in a single line that can check a nested array to see if any of the user's sites contain the admin role.</p>
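<p>Assuming the objects behave like the attributes shown, the nested loop collapses to a single <code>any()</code> over a doubly-nested generator expression; a runnable sketch with stand-in classes (the class shapes are assumptions, only the attribute names come from the question):</p>

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    ADMIN = "admin"
    USER = "user"

@dataclass
class SiteRole:
    role: Role

@dataclass
class Site:
    roles: list

userSites = [Site(roles=[SiteRole(Role.USER)]),
             Site(roles=[SiteRole(Role.ADMIN)])]

# The nested loop as one expression:
is_admin = any(role.role == Role.ADMIN
               for site in userSites
               for role in site.roles)
print(is_admin)  # True
```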
|
<python>
|
2024-01-19 02:57:33
| 1
| 801
|
Zach
|
77,843,154
| 3,491,653
|
boto3 list only objects of a specific storage class in s3 bucket
|
<p>This is an efficiency problem. I am using <code>boto3</code> to selectively download files from an S3 bucket. The problem is that I have about 50k records in a folder and over 70% of them are in <strong>GLACIER</strong>/<strong>DEEP_ARCHIVE</strong>. I am using <code>list_objects_v2</code> to iterate the files.</p>
<p>In Java, I can pass a flag to filter out GLACIER/DEEP_ARCHIVE objects in the API call. It does not seem to be supported in Python's boto3 SDK, as far as I have looked. If you have come across a similar problem, how did you tackle it?</p>
<p>Here is my code:</p>
<pre><code>def download_log_files(
    logger: logging.Logger,
    log_file_codes: list[int],
    aws_s3_credentials: Dict[str, str],
    local_download_dir: str,
):
    """Download all log files from S3 bucket to disk.

    Args:
        logger (logging.Logger): For logging.
        log_file_codes (list[int]): The log file codes.
        aws_s3_credentials (Dict[str, str]): The AWS S3 credentials.
        local_download_dir (str): The local directory to download the log files to.
    """
    for log_file_code in log_file_codes:
        logger.debug(f"Downloading log files for log: {log_file_code}")
        s3_folder_name = str(log_file_code).zfill(4)
        s3 = boto3.client(
            "s3",
            aws_access_key_id=aws_s3_credentials["s3_user_id"],
            aws_secret_access_key=aws_s3_credentials["s3_user_secret"],
        )
        paginator = s3.get_paginator("list_objects_v2")
        pages = paginator.paginate(
            Bucket=aws_s3_credentials["s3_logs_bucket_name"],
            Prefix=s3_folder_name,
            # TODO: pass attribute to filter by StorageClass
        )
        if pages is None:
            logger.error(
                f"Could not retrieve bucket contents for {s3_folder_name}. Skipping log: {log_file_code}"
            )
            continue
        local_download_folder = Path(f"{local_download_dir}/{s3_folder_name}")
        if not os.path.exists(local_download_folder):
            os.makedirs(local_download_folder)
        # Download and extract all log files from S3 bucket
        for page in pages:
            for obj in page["Contents"]:
                # If it is in GLACIER or DEEP_ARCHIVE, then skip it.
                # This can be sped up if this filter is part of the API call
                if (
                    obj["StorageClass"] == "GLACIER"
                    or obj["StorageClass"] == "DEEP_ARCHIVE"
                ):
                    continue
                # Download the file ...
                pass
</code></pre>
|
<python><amazon-web-services><amazon-s3><boto3>
|
2024-01-19 00:45:52
| 0
| 614
|
i_use_the_internet
|
77,843,038
| 395,857
|
How can I use Haystack to identify the top k sentences that are the closest match to a user query, and then return the docs containing these sentences?
|
<p>I have a set of 1000 documents (plain texts) and one user query. I want to retrieve the top k documents that are the most relevant to a user query using the Python library <a href="https://github.com/deepset-ai/haystack" rel="nofollow noreferrer">Haystack</a> and <a href="https://github.com/facebookresearch/faiss" rel="nofollow noreferrer">Faiss</a>. Specifically, I want the system to identify the top k sentences that are the closest match to the user query, and then return the documents that contain these sentences. How can I do so?</p>
<p>The following code identifies the top k <em>documents</em> that are the closest match to the user query. How can I change it so that, instead, the code identifies the top k <em>sentences</em> that are the closest match to the user query and returns the documents that contain these sentences?</p>
<pre><code># Note: Most of the code is from https://haystack.deepset.ai/tutorials/07_rag_generator
import logging

logging.basicConfig(format="%(levelname)s - %(name)s - %(message)s", level=logging.WARNING)
logging.getLogger("haystack").setLevel(logging.INFO)

import pandas as pd
from haystack.utils import fetch_archive_from_http

# Download sample
doc_dir = "data/tutorial7/"
s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/small_generator_dataset.csv.zip"
fetch_archive_from_http(url=s3_url, output_dir=doc_dir)

# Create dataframe with columns "title" and "text"
#df = pd.read_csv(f"{doc_dir}/small_generator_dataset.csv", sep=",")
df = pd.read_csv(f"{doc_dir}/small_generator_dataset.csv", sep=",", nrows=10)
# Minimal cleaning
df.fillna(value="", inplace=True)
print(df.head())

from haystack import Document

# Use data to initialize Document objects
titles = list(df["title"].values)
texts = list(df["text"].values)
documents = []
for title, text in zip(titles, texts):
    documents.append(Document(content=text, meta={"name": title or ""}))

from haystack.document_stores import FAISSDocumentStore

document_store = FAISSDocumentStore(faiss_index_factory_str="Flat", return_embedding=True)

from haystack.nodes import RAGenerator, DensePassageRetriever

retriever = DensePassageRetriever(
    document_store=document_store,
    query_embedding_model="facebook/dpr-question_encoder-single-nq-base",
    passage_embedding_model="facebook/dpr-ctx_encoder-single-nq-base",
    use_gpu=True,
    embed_title=True,
)

# Delete existing documents in documents store
document_store.delete_documents()
# Write documents to document store
document_store.write_documents(documents)
# Add documents embeddings to index
document_store.update_embeddings(retriever=retriever)

from haystack.pipelines import GenerativeQAPipeline
from haystack import Pipeline

pipeline = Pipeline()
pipeline.add_node(component=retriever, name='Retriever', inputs=['Query'])

from haystack.utils import print_answers

QUESTIONS = [
    "who got the first nobel prize in physics",
    "when is the next deadpool movie being released",
]

for question in QUESTIONS:
    res = pipeline.run(query=question, params={"Retriever": {"top_k": 5}})
    print(res)
    #print_answers(res, details="all")
</code></pre>
<p>To run the code:</p>
<pre><code>conda create -y --name haystacktest python==3.9
conda activate haystacktest
pip install --upgrade pip
pip install farm-haystack
conda install pytorch -c pytorch
pip install sentence_transformers
pip install farm-haystack[colab,faiss]==1.17.2
</code></pre>
<p>E.g., I wonder if there is a way to amend the Faiss indexing strategy.</p>
|
<python><indexing><information-retrieval><faiss><haystack>
|
2024-01-18 23:57:05
| 1
| 84,585
|
Franck Dernoncourt
|
77,842,996
| 3,555,115
|
Add multiple columns and assign values to a dataframe from another dataframe columns based on condition
|
<p>I have a dataframe</p>
<pre><code>df1 =
Instance_name counter_name counter_value
A bytes_read 0
A bytes_written 90
B bytes_read 100
B bytes_Written 90
C bytes_read 100
C bytes_Written 90
</code></pre>
<p>I need to add columns to df1 based on the rows of df2 whose "stage" column is "done". There can be multiple stages, such as sort, comp, done, etc.; I am only interested in the "done" stage rows.</p>
<pre><code>df2 =
load instance_name blks_comp blks_sort blks_dirty stage
0 A 0 50 90 sort
0 B 10 50 90 sort
0 C 10 50 90 comp
1 A 10 50 100 comp
1 B 910 950 9100 comp
1 C 980 950 9100 done
2 A 910 980 9100 comp
2 B 710 980 8100 done
2 C 910 980 9100 ready
</code></pre>
<p>If an "instance_name" has a row with the "done" stage, add those values to the df1 dataframe. The output should be:</p>
<pre><code>df3 =
Instance_name counter_name counter_value blks_comp blks_sort blks_dirty
A bytes_read 0 0 0 0
A bytes_written 90 0 0 0
B bytes_read 100 710 980 8100
B bytes_Written 90 710 980 8100
C bytes_read 100 980 950 9100
C bytes_Written 90 980 950 9100
</code></pre>
<p>Instance A has all 0s for blks_comp, blks_sort and blks_dirty because it doesn't have a "done" value in the stage column.</p>
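<p>A minimal sketch of one way to do this: filter df2 down to the "done" rows, then left-merge them onto df1 and fill the misses with 0. The frames below are rebuilt from the samples above, and the sketch assumes at most one "done" row per instance:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "Instance_name": ["A", "A", "B", "B", "C", "C"],
    "counter_name": ["bytes_read", "bytes_written"] * 3,
    "counter_value": [0, 90, 100, 90, 100, 90],
})
df2 = pd.DataFrame({
    "load": [1, 2, 2],
    "instance_name": ["C", "B", "C"],
    "blks_comp": [980, 710, 910],
    "blks_sort": [950, 980, 980],
    "blks_dirty": [9100, 8100, 9100],
    "stage": ["done", "done", "ready"],
})

# Keep only the "done" rows, then left-merge them onto df1
done = df2.loc[df2["stage"].eq("done"),
               ["instance_name", "blks_comp", "blks_sort", "blks_dirty"]]
df3 = df1.merge(done, left_on="Instance_name",
                right_on="instance_name", how="left").drop(columns="instance_name")

# Instances without a "done" row come back as NaN; turn them into 0s
cols = ["blks_comp", "blks_sort", "blks_dirty"]
df3[cols] = df3[cols].fillna(0).astype(int)
print(df3)
```

<p>If an instance can have several "done" rows, add a <code>drop_duplicates("instance_name")</code> (or a groupby picking the wanted load) before the merge.</p>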
|
<python><pandas><dataframe>
|
2024-01-18 23:42:52
| 1
| 750
|
user3555115
|
77,842,863
| 2,873,090
|
Get unique features for each set in a collection of sets
|
<p>I'm struggling to search how to do this; maybe there's a word for this set operation I'm not aware of.</p>
<p>I have several sets with overlapping elements, and I'd like to find which elements are unique to each given set. For example:</p>
<pre class="lang-py prettyprint-override"><code>sets = {
'rat': {'a', 'b', 'c', 'd'},
'cat': {'a', 'b', 'd', 'f'},
'dog': {'a', 'b'},
}
</code></pre>
<p>I can see that <em><strong>f</strong></em> is unique to <em><strong>cats</strong></em> and <em><strong>c</strong></em> is unique to <em><strong>rats</strong></em>. So I'd like to do some set operations or something to yield that data:</p>
<pre class="lang-py prettyprint-override"><code>uniq = {
'rat': {'c'},
'cat': {'f'},
'dog': set(),
}
</code></pre>
<p><em>(Bonus question, mainly out of curiosity-- instead of sets, if I put these in a pandas / numpy 2d matrix, is there a neat way to do this same operation? e.g. where 'rat', 'cat', 'dog' was columns and a,b,c,... was rows or vice versa)</em></p>
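<p>A short sketch of one way to get the per-set unique elements: subtract the union of all the <em>other</em> sets from each set.</p>

```python
sets = {
    'rat': {'a', 'b', 'c', 'd'},
    'cat': {'a', 'b', 'd', 'f'},
    'dog': {'a', 'b'},
}

# For each key, union every *other* set and subtract it
uniq = {
    name: members - set().union(*(other for key, other in sets.items() if key != name))
    for name, members in sets.items()
}
print(uniq)  # {'rat': {'c'}, 'cat': {'f'}, 'dog': set()}
```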
|
<python><set>
|
2024-01-18 22:56:49
| 9
| 891
|
catleeball
|
77,842,724
| 1,028,270
|
I can't get logging.config to load my custom class
|
<p>I have a custom formatter defined in my log_conf.py.</p>
<pre><code># log_conf.py
class Custom(jsonlogger.JsonFormatter):
def add_fields(self, log_record, record, message_dict):
super().add_fields(log_record, record, message_dict)
log_record["new_field"] = log_record["levelname"]
logging.config.fileConfig(file_path)
def get_logger(name):
return logging.getLogger(name)
</code></pre>
<p>In my logger.ini file I reference it like this:</p>
<pre><code>....
[formatter_json]
class = log_conf.Custom
</code></pre>
<p>If I throw my log_conf.py and logger.ini in the same module that is using it this works:</p>
<pre><code>from my_project.my_module import log_conf
log = log_conf.get_logger(__name__)
</code></pre>
<p>But I want to put it in a different folder like this:</p>
<pre><code>my_project
main.py
my_module/
stuff.py
myconf/
logger.ini
log_conf.py
</code></pre>
<p>If I use this structure and do <code>from my_project.myconf import log_conf</code> I get <code>ModuleNotFoundError: No module named 'log_conf'</code>.</p>
<p>If I change the ini file to <code>class = myconf.log_conf.Custom</code> it still can't find it.</p>
<p>If I change it to <code>class = my_project.myconf.log_conf.Custom</code> I get <code>AttributeError: cannot access submodule 'log_config' of module 'my_project.myconf' (most likely due to a circular import)</code>.</p>
<p>Why does it work when they are in the module's dir and throw circular import when using the full namespace of the Custom class for <code>class=</code>?</p>
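<p>One way to sidestep the <code>class =</code> lookup in the .ini file entirely is to attach the formatter in code after configuration. The snippet below is a sketch: it uses a plain-<code>logging</code> stand-in for the jsonlogger-based <code>Custom</code> class, just to show the wiring.</p>

```python
import io
import logging

class Custom(logging.Formatter):
    def format(self, record):
        # Same idea as add_fields(): copy levelname into a new field
        record.new_field = record.levelname
        return super().format(record)

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(Custom("%(levelname)s %(new_field)s %(message)s"))

log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("hello")
print(stream.getvalue())  # WARNING WARNING hello
```

<p>Attaching the formatter programmatically means the .ini parser never has to resolve <code>my_project.myconf.log_conf.Custom</code> by string, which is the step that was failing.</p>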
|
<python><python-logging>
|
2024-01-18 22:22:08
| 1
| 32,280
|
red888
|
77,842,706
| 9,779,999
|
ValidationError: 1 validation error for RetrievalQA
|
<p>Here is my code:</p>
<pre><code>from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
openai_api_key= 'xxx'
loader = PyPDFLoader('xxx.pdf')
text = loader.load()
chunk_size = 200
chunk_overlap = 50
# Split the pdf
splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
separators=['.']
)
doc = splitter.split_documents(text)
embedding = OpenAIEmbeddings(openai_api_key=openai_api_key)
persist_directory = "embedding/chroma"
vectordb = Chroma(
persist_directory = persist_directory,
embedding_function = embedding)
vectordb.persist()
</code></pre>
<p>Here is the first part of my code; I then tested whether the data is in vectordb.get()['documents'], and yes, it is.</p>
<p>Then I retrieve the db:</p>
<pre><code>from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
openai_api_key= 'xxx'
vectordb2 = Chroma(persist_directory=persist_directory, embedding_function=embedding)
vectordb2.get()['documents']
</code></pre>
<p>Yes, the data is in vectordb2; everything seems alright.</p>
<p>Then I try to use RetrievalQA with an OpenAI model:</p>
<pre><code>retriever = vectordb2.as_retriever() # search_kwargs={"k": 4}
qa = RetrievalQA.from_chain_type(ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=openai_api_key),
chain_type="stuff",
retriever=retriever)
</code></pre>
<p>It throws me this error:</p>
<pre><code>File ~/miniforge3/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for RetrievalQA
retriever
instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever)
</code></pre>
<p>I googled, and it seems there are some other kinds of "1 validation error", but they are different. e.g. <a href="https://stackoverflow.com/questions/77429347/validationerror-1-validation-error-for-retrievalqa-retriever-value-is-not-a-val">ValidationError: 1 validation error for RetrievalQA retriever value is not a valid dict (type=type_error.dict)</a></p>
<p>Would anyone please help? Any help is appreciated.</p>
|
<python><langchain><data-retrieval><chromadb>
|
2024-01-18 22:16:29
| 0
| 1,669
|
yts61
|
77,842,692
| 9,599,098
|
Why will my local Python Script work perfectly but then fail as soon as I try to run in Lambda?
|
<p>I am essentially at the point of trying to run a Hello World application because I cannot figure this out.</p>
<p>My local env is Python 3.12.0. My Lambda function is Python 3.12.0.</p>
<p>I have followed both methods in the official AWS documentation for creating my zip file to load to Lambda.</p>
<p><a href="https://docs.aws.amazon.com/lambda/latest/dg/python-package.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/python-package.html</a></p>
<p>The script is 100% barebones and works without error on my local PC.</p>
<p>Script:</p>
<pre><code>import openpyxl
import snowflake.connector
import boto3
import pandas as pd
import sys
import json
def lambda_handler(event, context):
# TODO implement
return {
'statusCode': 200,
'body': json.dumps('Hello from Lambda!')
}
</code></pre>
<p>When I attempt to run this in Lambda as a simple test I get back the following log:</p>
<pre><code>[ERROR] AttributeError: module 'os' has no attribute 'add_dll_directory'
Traceback (most recent call last):
File "/var/lang/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
File "<frozen importlib._bootstrap>", line 1325, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 929, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 994, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/var/task/lambda_function.py", line 1, in <module>
import openpyxl
File "/var/task/openpyxl/__init__.py", line 5, in <module>
from openpyxl.compat.numbers import NUMPY
File "/var/task/openpyxl/compat/__init__.py", line 3, in <module>
from .numbers import NUMERIC_TYPES
File "/var/task/openpyxl/compat/numbers.py", line 9, in <module>
import numpy
File "/var/task/numpy/__init__.py", line 112, in <module>
_delvewheel_patch_1_5_2()
File "/var/task/numpy/__init__.py", line 109, in _delvewheel_patch_1_5_2
os.add_dll_directory(libs_dir)
</code></pre>
<p>Any help would be truly appreciated, as I am losing my mind over this. Note this works fine when I only installed boto3. My guess is the issue has to do with either snowflake or openpyxl. The exact versions installed locally on my PC and in my virtual env are what I am pushing to Lambda.</p>
|
<python><amazon-web-services><aws-lambda><openpyxl>
|
2024-01-18 22:13:20
| 0
| 784
|
user68288
|
77,842,670
| 477,969
|
How to consume bokeh with mypy
|
<p>I am using the bokeh library, which states that it uses mypy internally. Thus, I infer it has good typing information, which I have confirmed by looking at the source code.</p>
<p>However, I notice that running mypy on my own project, which imports bokeh, still produces errors.
How should I configure mypy on my side to gracefully and correctly use bokeh's typing when I consume the library in my project?</p>
<p>for example, I receive this kind of error:</p>
<pre><code>error: Incompatible types in assignment (expression has type "str", variable has type "Override[str]") [assignment]
</code></pre>
|
<python><python-3.x><bokeh><mypy>
|
2024-01-18 22:08:21
| 0
| 2,114
|
JairoV
|
77,842,667
| 11,618,586
|
labeling segments of data based on multiple conditions
|
<p>I have an indexed pandas dataframe with values like so:</p>
<pre><code>data = {'ID':[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
'column1': [15, 16, 17, 14, 13, 5, 3, 2, 1.9, 1.2, 1, 0.8, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 2, 3, 4, 5, 6],
'column2': [10, 11, 12, 13, 13.5, 14, 14.5, 15, 16, 17, 18, 19, 20, 20, 20, 20, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10]
}
ID column1 column2
0 1 15.0 10.0
1 1 16.0 11.0
2 1 17.0 12.0
3 1 14.0 13.0
4 1 13.0 13.5
5 1 5.0 14.0
6 1 3.0 14.5
7 1 2.0 15.0
8 1 1.9 16.0
9 1 1.2 17.0
10 1 1.0 18.0
11 1 0.8 19.0
12 1 0.5 20.0
13 1 0.5 20.0
14 1 0.5 20.0
15 1 0.5 20.0
16 1 0.5 20.0
17 1 0.5 19.0
18 1 0.5 18.0
19 1 0.5 17.0
20 1 0.5 16.0
21 1 1.0 15.0
22 1 2.0 14.0
23 1 3.0 13.0
24 1 4.0 12.0
25 1 5.0 11.0
26 1 6.0 10.0
</code></pre>
<p>I want to label the rows based on the following conditions:</p>
<ol>
<li>In <code>column1</code>, from the beginning until just before we first encounter a
value <code><=2</code>, <strong>AND</strong> when <code>column2</code> is <code>>13</code>, label as <code>Pre_Start</code></li>
<li>When <code>0.5 &lt; column1 &lt;= 2</code> <strong>AND</strong> <code>column2 &lt;= 19</code> label as <code>Start</code></li>
<li>When <code>column1 <= 0.5</code> <strong>AND</strong> <code>column2 >= 19</code> label as <code>Steady</code></li>
<li>When <code>column1 <= 0.5</code> <strong>AND</strong> <code>14 < column2 < 19</code> label as <code>Ramp</code></li>
<li>When <code>column1 > 0.5</code> <strong>AND</strong> <code>column2 < 19</code> label as <code>End</code></li>
</ol>
<p>Such that the resulting dataframe would be something like this:</p>
<pre><code> ID column1 column2 Label
0 1 15.0 10.0 Pre_Start
1 1 16.0 11.0 Pre_Start
2 1 17.0 12.0 Pre_Start
3 1 14.0 13.0 Pre_Start
4 1 13.0 13.5 Pre_Start
5 1 5.0 14.0 Pre_Start
6 1 3.0 14.5 Pre_Start
7 1 2.0 15.0 Start
8 1 1.9 16.0 Start
9 1 1.2 17.0 Start
10 1 1.0 18.0 Start
11 1 0.8 19.0 Start
12 1 0.5 20.0 Steady
13 1 0.5 20.0 Steady
14 1 0.5 20.0 Steady
15 1 0.5 20.0 Steady
16 1 0.5 20.0 Steady
17 1 0.5 19.0 Ramp
18 1 0.5 18.0 Ramp
19 1 0.5 17.0 Ramp
20 1 0.5 16.0 Ramp
21 1 1.0 15.0 End
22 1 2.0 14.0 End
23 1 3.0 13.0 End
24 1 4.0 12.0 End
25 1 5.0 11.0 End
26 1 6.0 10.0 End
</code></pre>
<p>I am able to successfully apply a grouped filter by defining a function start index using <code>numpy</code> like</p>
<pre><code>def filter_group(group):
start_index = np.argmax(group['column1'].values <= 2)
return group.iloc[start_index:]
</code></pre>
<p>and calling the function using the <code>apply</code> clause:</p>
<pre><code>filtered_df = df.groupby('ID', group_keys=False).apply(filter_group).reset_index(drop=True)
</code></pre>
<p>However, I can't get it to work with multiple conditions.
Is there a more elegant way to achieve this?</p>
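<p>A sketch of one vectorised approach using <code>numpy.select</code> plus two per-ID cumulative flags ("has the run started?", "has the steady plateau been reached?"). One caveat: the stated rules conflict with the expected output at row 17 (<code>column2 == 19</code> matches the Steady rule but is labelled Ramp), so this sketch treats the plateau as <code>column2 &gt;= 20</code>:</p>

```python
import numpy as np
import pandas as pd

data = {'ID': [1] * 27,
        'column1': [15, 16, 17, 14, 13, 5, 3, 2, 1.9, 1.2, 1, 0.8, 0.5, 0.5,
                    0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 2, 3, 4, 5, 6],
        'column2': [10, 11, 12, 13, 13.5, 14, 14.5, 15, 16, 17, 18, 19, 20, 20,
                    20, 20, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10]}
df = pd.DataFrame(data)
c1, c2 = df['column1'], df['column2']

# Per-ID cumulative flags: has column1 <= 2 been seen yet? Has the plateau?
started = df.groupby('ID')['column1'].transform(lambda s: s.le(2).cummax())
steady = c1.le(0.5) & c2.ge(20)
steady_seen = steady.astype(int).groupby(df['ID']).cummax().astype(bool)

conditions = [
    ~started,                 # 1. before the first column1 <= 2
    steady,                   # 2. on the steady plateau
    started & ~steady_seen,   # 3. between start and the plateau
    c1.le(0.5) & c2.lt(20),   # 4. ramping down after the plateau
]
labels = ['Pre_Start', 'Steady', 'Start', 'Ramp']
df['Label'] = np.select(conditions, labels, default='End')
print(df)
```

<p><code>np.select</code> evaluates the conditions in order, so the cumulative flags resolve the overlaps between the Start/End rules without per-row Python loops, and everything stays grouped per ID.</p>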
|
<python><python-3.x><pandas><indexing>
|
2024-01-18 22:07:44
| 1
| 1,264
|
thentangler
|
77,842,516
| 58,347
|
MyPy: How to type an argument that will forward to isinstance()
|
<p>I'm writing a predicate function. Among other things, it can take an argument that is compatible with <code>isinstance()</code>, such that if you called <code>my_pred(val, types)</code>, it would return <code>isinstance(val, types)</code>.</p>
<p>However, I'm not sure how to type that second argument correctly when defining my predicate. The second argument of <code>isinstance()</code> appears to be a MyPy-internal type named <code>_ClassInfo</code>. Is there a way to get access to that type? <em>Should</em> I try to get access to that? Or should I just give up and type it as <code>Any</code>?</p>
<p>(I can't write the type correctly manually, because it's recursive and MyPy doesn't yet support recursive types correctly; a <code>_ClassInfo</code> can be a <code>Type</code>, or a <code>tuple[_ClassInfo]</code>.)</p>
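<p>One pragmatic option, shown as a sketch: approximate <code>_ClassInfo</code> with a one-level alias (a class, or a tuple). It is not the exact typeshed definition, but it covers the common calls without giving up and using <code>Any</code> for everything:</p>

```python
from typing import Any, Union

# Not typeshed's recursive _ClassInfo, just a practical approximation:
# a class, or a tuple whose elements are typed as Any.
ClassInfo = Union[type, tuple[Any, ...]]

def my_pred(val: object, types: ClassInfo) -> bool:
    return isinstance(val, types)

print(my_pred(1, int), my_pred(1, (str, int)), my_pred("a", (int, float)))
```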
|
<python><python-3.x><mypy>
|
2024-01-18 21:32:16
| 1
| 18,453
|
Tab Atkins-Bittner
|
77,842,479
| 593,487
|
How to split transformation matrix into steps for animation?
|
<p>I have a 4D transformation matrix (rotation, translation and scale) that I can use to do <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.affine_transform.html" rel="nofollow noreferrer">affine transform</a> of a 3D numpy array, from where I take a slice and display it. This part works and allows me to view rotated / moved / scaled 3D data.</p>
<p>However, when matrix changes, I would like to animate the transition. To do this I need to split my matrix <code>M</code> into steps such that <code>M</code> equals <code>S0 @ S1 @ S2 @ ... @ Sn</code>, where each step performs a tiny bit of rotation, translation and scale.</p>
<p>Alternatively, I could animate the transition by making e.g. the rotation a series of small rotations and then consuming them one by one, but if another operation arrives while the previous one is still in progress, I would like to merge them and go in the direction of the end state directly. So getting a series of steps would be preferable.</p>
<p>Is it possible to split my transformation matrix into such steps, and if yes, how?</p>
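<p>A sketch of one decomposition-based approach (assuming the matrix has no shear): peel the scale off the 3x3 block via its column norms, slerp the rotation with SciPy, and linearly interpolate translation and scale. The function name and step count are illustrative:</p>

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_affine(M, n_steps):
    """Split a 4x4 rotate+translate+scale matrix into n_steps absolute
    in-between matrices blending identity -> M (no-shear assumption)."""
    A, t = M[:3, :3], M[:3, 3]
    scale = np.linalg.norm(A, axis=0)      # column norms = scale factors
    R = Rotation.from_matrix(A / scale)    # pure rotation part
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([Rotation.identity(), R]))

    frames = []
    for f in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        Mi = np.eye(4)
        Mi[:3, :3] = slerp(f).as_matrix() * (1 + f * (scale - 1))
        Mi[:3, 3] = f * t
        frames.append(Mi)
    return frames
```

<p>These are absolute in-between matrices <code>M_t</code>; the incremental steps from the question can be recovered as <code>S_i = M_i @ inv(M_(i-1))</code>, and merging a new target mid-animation is just re-running the split starting from the current <code>M_t</code>.</p>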
|
<python><numpy><animation>
|
2024-01-18 21:22:45
| 0
| 18,451
|
johndodo
|
77,842,461
| 3,606,412
|
TypeError: cannot unpack non-iterable TreeNode object
|
<p>I am learning to use a deque to check if two binary trees are the same. However, I can't figure out why n1 and n2 (as node1 and node2) can't be unpacked. I tried putting them in a list and a tuple, unpacking, etc., but I can't unpack them successfully. I'm thinking it's something else. Could anyone help? Here is the code:</p>
<pre><code> def iterative(self, p, q):
q = deque( [p, q] )
while len(q) > 0:
n1, n2 = q.popleft()
if not n1 and not n2: pass
elif not n1 or not n2: return False
else:
if n1.val != n2.val: return False
q.append( [n1.left, n2.left] )
q.append( [n1.right, n2.right] )
return True
</code></pre>
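<p>The unpack fails because <code>deque([p, q])</code> seeds the queue with <em>two separate elements</em>, so the first <code>popleft()</code> returns a bare <code>TreeNode</code>, which cannot be unpacked into two names. Seeding with a single <code>(p, q)</code> pair fixes it. A runnable sketch, with a minimal <code>TreeNode</code> added for illustration:</p>

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_same_tree(p, q):
    queue = deque([(p, q)])  # one element: the (p, q) pair, not two nodes
    while queue:
        n1, n2 = queue.popleft()
        if not n1 and not n2:
            continue
        if not n1 or not n2 or n1.val != n2.val:
            return False
        queue.append((n1.left, n2.left))
        queue.append((n1.right, n2.right))
    return True
```

<p>Note the original also shadowed the parameter <code>q</code> with the deque itself; renaming the queue avoids that confusion too.</p>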
|
<python><python-3.x><tree><queue><binary-tree>
|
2024-01-18 21:17:44
| 1
| 1,383
|
LED Fantom
|
77,842,458
| 2,706,344
|
Extracting maximum number from DataFrame of strings (and some NaN values)
|
<p>Look at the DataFrame:</p>
<pre><code>import pandas as pd
import numpy as np
data=pd.DataFrame(['random 15 numbers 128 and 12 letters','12-5','page 65'],columns=['text'])
</code></pre>
<p><a href="https://i.sstatic.net/GNX1j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GNX1j.png" alt="enter image description here" /></a></p>
<p>I want to extract all numbers from the strings and write the maximum number into a new column. I achieved that with this code:</p>
<pre><code>data['list']=data['text'].str.extractall('(\d+)').unstack().values.tolist()
data['max']=data['list'].apply(lambda row:max([int(x) for x in row if x is not np.nan]))
</code></pre>
<p>This results in this DataFrame:</p>
<p><a href="https://i.sstatic.net/f3DDX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f3DDX.png" alt="enter image description here" /></a></p>
<p>First question: Is there a more elegant way to do that?</p>
<p>My actual problem: My code works only if there is no <code>NaN</code> value in my original DataFrame. How would you adapt the code in that case? The result should be a <code>NaN</code> column for each <code>NaN</code> value with the correct index. Replace the <code>data</code> defining line by the following to make the problem appear:</p>
<pre><code>data=pd.DataFrame(['random 15 numbers 128 and 12 letters','12-5','page 65',np.nan],columns=['text'])
</code></pre>
<p>Additionally, I want the code to deal with entries which are not <code>NaN</code> but strings without a number. In that case the intermediate list should be empty and the last row should be <code>NaN</code> (this last thing is easy to achieve by manipulating the last line).</p>
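<p>A sketch that handles both cases in one pass: let <code>extractall</code> produce one row per match, take the per-row max over the match level, and reindex so rows with no match (NaN or digit-free text) come back as <code>NaN</code>:</p>

```python
import numpy as np
import pandas as pd

data = pd.DataFrame(
    ['random 15 numbers 128 and 12 letters', '12-5', 'page 65', np.nan, 'no digits here'],
    columns=['text'],
)

# One row per regex match, grouped back to the original row index;
# rows without any match simply drop out and reindex restores them as NaN
nums = data['text'].str.extractall(r'(\d+)')[0].astype(int)
data['max'] = nums.groupby(level=0).max().reindex(data.index)
print(data)
```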
|
<python><pandas><regex>
|
2024-01-18 21:17:01
| 1
| 4,346
|
principal-ideal-domain
|
77,842,438
| 856,804
|
Why does mypy complain when a subtype is returned when using TypeVar?
|
<p>I wonder why this example doesn't pass mypy check:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional, TypeVar
class A:
pass
class B(A):
def __init__(self) -> None:
print("B")
T = TypeVar("T", bound=A)
def foo(x: T) -> Optional[T]:
if type(x) is B:
return B()
return None
</code></pre>
<p>The mypy error:</p>
<pre class="lang-none prettyprint-override"><code>toy.py:18:16: error: Incompatible return value type (got "B", expected "Optional[T]") [return-value]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>Isn't <code>B</code> a valid type for <code>T</code>?</p>
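<p>A sketch of why mypy complains, and the usual fix: the caller may bind <code>T</code> to a <em>subclass</em> of <code>B</code>, in which case a freshly constructed <code>B()</code> is not a valid <code>T</code>. Returning the narrowed argument itself keeps the type parameter intact:</p>

```python
from typing import Optional, TypeVar

class A:
    pass

class B(A):
    pass

T = TypeVar("T", bound=A)

def foo(x: T) -> Optional[T]:
    # If someone calls foo(BSubclass()), T is BSubclass, so a plain B()
    # would violate the signature. The argument x, however, is always a T.
    if isinstance(x, B):
        return x
    return None
```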
|
<python><mypy><python-typing>
|
2024-01-18 21:12:32
| 1
| 9,110
|
zyxue
|
77,842,223
| 3,555,115
|
Pivoting based on multiple columns and rearrange data in Python Dataframe
|
<p>have a python dataframe.</p>
<pre><code>df1
Load Instance_name counter_name counter_value
0 A bytes_read 0
0 A bytes_written 90
0 B bytes_read 100
0 B bytes_Written 90
1 A bytes_read 10
1 A bytes_written 940
1 B bytes_read 1100
1 B bytes_written 910
</code></pre>
<p>To simplify the view, I need something like below, i.e. turn the counter_name column values into columns and rearrange the data.</p>
<pre><code>df2 =
Load Instance_name bytes_read bytes_written
0 A 0 90
0 B 100 90
1 A 10 940
1 B 1100 910
</code></pre>
<p>I am new to Python dataframe libraries and not sure of the right way to achieve this.</p>
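<p>A sketch with <code>DataFrame.pivot</code>. Note the sample mixes <code>bytes_written</code> and <code>bytes_Written</code>, so the counter names are lower-cased first; drop that line if your real data is consistent:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "Load": [0, 0, 0, 0, 1, 1, 1, 1],
    "Instance_name": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "counter_name": ["bytes_read", "bytes_written", "bytes_read", "bytes_Written",
                     "bytes_read", "bytes_written", "bytes_read", "bytes_written"],
    "counter_value": [0, 90, 100, 90, 10, 940, 1100, 910],
})

# Normalize the mixed-case counter names, then pivot long -> wide
df1["counter_name"] = df1["counter_name"].str.lower()
df2 = (
    df1.pivot(index=["Load", "Instance_name"],
              columns="counter_name", values="counter_value")
       .reset_index()
       .rename_axis(columns=None)
)
print(df2)
```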
|
<python><pandas><dataframe>
|
2024-01-18 20:26:22
| 1
| 750
|
user3555115
|
77,842,181
| 7,347,925
|
How to speed up creating Point GeoSeries with large data?
|
<p>I have two 1D arrays and want to combine them into one Point GeoSeries like this:</p>
<pre><code>import numpy as np
from geopandas import GeoSeries
from shapely.geometry import Point
x = np.random.rand(int(1e6))
y = np.random.rand(int(1e6))
GeoSeries(map(Point, zip(x, y)))
</code></pre>
<p>It costs about 5 seconds on my laptop. Is it possible to accelerate the generation of GeoSeries?</p>
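<p>One thing worth trying is <code>geopandas.points_from_xy</code>, which builds the geometries in vectorised native code instead of creating a million Python-level <code>Point</code> objects one by one:</p>

```python
import numpy as np
import geopandas as gpd

x = np.random.rand(int(1e6))
y = np.random.rand(int(1e6))

# Vectorised construction; no per-point Point() calls in Python
s = gpd.GeoSeries(gpd.points_from_xy(x, y))
print(len(s), s.iloc[0].geom_type)
```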
|
<python><pandas><geopandas><shapely>
|
2024-01-18 20:15:35
| 1
| 1,039
|
zxdawn
|
77,842,097
| 1,795,927
|
How to insert Blob in oracle db in python
|
<p>I am new to Python and I am trying to insert a BLOB into an Oracle DB table, but it's failing with <code>TypeError: oracledb.base_impl.DbType object is not callable</code>.</p>
<pre><code>import oracledb
from datetime import datetime
# Connection details (replace with your credentials)
username = "username"
password = "password"
dsn = "database"
# Table and data details
table_name = "TRX_DTL"
orderid = 12345
payload_text = "This is the BLOB text to be inserted."
# Create a connection
try:
with oracledb.connect(user=username, password=password, dsn=dsn) as connection:
try:
with connection.cursor() as cursor:
# Construct the INSERT statement with bind variables
sql = """
INSERT INTO {} (orderid, create_ts, payload)
VALUES (:orderid, :create_ts, :payload)
""".format(table_name)
# Bind variables and execute the statement
cursor.execute(sql, {
"orderid": orderid,
"create_ts": datetime.now(), # Use current timestamp
"payload": oracledb.BLOB(payload_text.encode()) # Encode text and create BLOB
})
connection.commit() # Commit the transaction
print("BLOB data inserted successfully!")
except oracledb.Error as e:
print("Error occurred:", e)
except oracledb.Error as e:
print("Error connecting to database:", e)
</code></pre>
|
<python><oracle-database><python-oracledb>
|
2024-01-18 19:55:31
| 1
| 856
|
Nevin Thomas
|
77,841,985
| 1,474,895
|
Python Requests Giving Me Missing Metadata When Trying To Upload Attachment
|
<p>I am trying to upload an attachment via an API. I can make it work in the software's Swagger environment with this:</p>
<pre><code>curl -X 'POST' \
'https://demo.citywidesolutions.com/v4_server/external/v1/maintenance/service_requests/9/attached_files' \
-H 'accept: */*' \
-H 'Content-Type: multipart/form-data' \
-H 'Authorization: Bearer 123456789' \
-F 'description=' \
-F 'data=@This is my file.pdf;type=application/pdf'
</code></pre>
<p>When I try to do the same with python requests I get a 400 error with a message of missing metadata. Here is what I am trying to pass with Python:</p>
<pre><code>import requests
Headers = {'accept': '*/*', 'Content-Type': 'multipart/form-data', 'Authorization': 'Bearer 123456789'}
Attachment = {'description': '', 'data': open('C:/This is my file.pdf', 'rb')}
Response = requests.post(url='https://demo.citywidesolutions.com/v4_server/external/v1/maintenance/service_requests/9/attached_files', headers=Headers, files=Attachment)
</code></pre>
<p>From that I get a 400 response and the JSON says Missing Metadata. What am I missing here?</p>
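<p>The likely culprit is the hand-set <code>Content-Type</code> header: <code>requests</code> normally generates <code>multipart/form-data; boundary=...</code> itself, and overriding it ships a header without the boundary, which servers cannot parse (here, presumably, reported as "Missing Metadata"). A sketch that drops that header, passes <code>description</code> via <code>data=</code>, and prepares the request locally (without sending) so you can inspect what would go on the wire:</p>

```python
import requests

url = "https://demo.citywidesolutions.com/v4_server/external/v1/maintenance/service_requests/9/attached_files"
headers = {"accept": "*/*", "Authorization": "Bearer 123456789"}
# (filename, content, content_type) mirrors curl's -F 'data=@file;type=...'
files = {"data": ("This is my file.pdf", b"%PDF-1.4 placeholder", "application/pdf")}
data = {"description": ""}

# Prepare without sending, to inspect the generated multipart header
req = requests.Request("POST", url, headers=headers, files=files, data=data).prepare()
print(req.headers["Content-Type"])  # multipart/form-data; boundary=...
```

<p>To actually send it, call <code>requests.post(url, headers=headers, files=files, data=data)</code> with the same arguments and a real file object.</p>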
|
<python><python-requests><attachment>
|
2024-01-18 19:31:44
| 2
| 1,483
|
Cody Brown
|
77,841,857
| 7,563,454
|
Pygame: Get an image from the average mixture of multiple images
|
<p>I have a number of images with the same resolution in a list... they're generated and painted from the code with <code>pygame.Surface</code>, though that shouldn't be relevant for this question; I will mention they're part of a multi-sampling system, with each image representing one render. I want to merge them into a single image, with the resulting image being the average of all images on all channels (RGBA). How can I most efficiently achieve this?</p>
<p>Obviously I can't just blit the two as <code>img1.blit(img2, (0, 0))</code> as the later image will fully cover the former. I do use an alpha channel for other purposes, which allows mixing future images at a lower transparency on top of the previous result, but this isn't accurate as I want a real average without later images covering the result by a variable amount based on order: I don't want the result to look like <code>img1.* / 2 + img2.* / 2 + img3.* / 2</code> but rather <code>(img1.* + img2.* + img3.*) / 3</code>.</p>
<p>I'm aware I could manually scan the pixels in each image, but that would be very performance intensive: I'd rather use the less accurate alpha blending than saddle the main loop with that. Please let me know the closest solution you're aware of.</p>
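<p>There is a middle ground between blitting and per-pixel Python loops: copy each surface's channels into NumPy arrays once per merge and average them there. That stays vectorised and yields the exact <code>(img1 + img2 + img3) / 3</code> result. A sketch using placeholder solid-colour surfaces:</p>

```python
import numpy as np
import pygame

# Three same-size surfaces with different red channels (stand-ins for renders)
surfaces = [pygame.Surface((64, 64), pygame.SRCALPHA, 32) for _ in range(3)]
for i, surf in enumerate(surfaces):
    surf.fill((i * 80, 100, 200, 255))

# Average the RGB and alpha planes with numpy instead of per-pixel loops
rgb = np.stack([pygame.surfarray.array3d(s).astype(np.float64) for s in surfaces])
alpha = np.stack([pygame.surfarray.array_alpha(s).astype(np.float64) for s in surfaces])

avg = pygame.Surface(surfaces[0].get_size(), pygame.SRCALPHA, 32)
pygame.surfarray.blit_array(avg, rgb.mean(axis=0).astype(np.uint8))
alpha_view = pygame.surfarray.pixels_alpha(avg)
alpha_view[:, :] = alpha.mean(axis=0).astype(np.uint8)
del alpha_view  # release the pixel-array lock on the surface
```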
|
<python><python-3.x><pygame>
|
2024-01-18 19:04:05
| 3
| 1,161
|
MirceaKitsune
|
77,841,821
| 6,836,950
|
Set state attribute for request object in fastapi/starlette test client
|
<p>I have a GET endpoint that proxies another endpoint and it expects to extract Bearer token from request's state attribute. This is my endpoint</p>
<pre><code>
from fastapi import Depends, Request
from starlette.requests import Request
from starlette.status import HTTP_200_OK
from starlette.testclient import TestClient
@router.get("/", response_model=DataOut)
def get_documents(request: Request, query_params: QueryParams = Depends()) -> DataOut:
return DocumentsOut.parse_obj(
data_api_client.get(
"/data/",
request.state.token,
params=query_params.dict(by_alias=True),
).json()
)
</code></pre>
<p>Now I want an integration test for it. I have a test client; however, I'm not able to set the request's state attribute in the test client. How can I solve this problem? Below is my test function:</p>
<pre><code>def test_get_data(my_test_client: TestClient, api_url: str, api_token: str) -> None:
params = {
"offset": 0,
"limit": 20,
}
response = my_test_client.get(url=api_url, headers={"Authorization": api_token}, params=params)
assert response.status_code == HTTP_200_OK
</code></pre>
<p>When I debug this test and arrive at the endpoint, the state attribute of the request object is empty.</p>
|
<python><python-3.x><fastapi><starlette>
|
2024-01-18 18:57:09
| 0
| 4,179
|
Okroshiashvili
|
77,841,796
| 15,279,420
|
Boto3 not really starting an instance
|
<p>I'm running Python code on an instance and I'm trying to start an EC2 instance using boto3 <code>start_instances</code>. The code works and gives the following response:</p>
<pre><code>Response: {'StartingInstances': [{'CurrentState': {'Code': 0, 'Name': 'pending'}, 'InstanceId': 'i-<id>', 'PreviousState': {'Code': 80, 'Name': 'stopped'}}], [..]
</code></pre>
<p>But in the UI it's not up, not even pending. I've run the code 7 times and it's the same thing. To run a test, I tried the opposite: I started the instance using the UI and used the <code>stop_instances</code> function, and it worked!</p>
<p>The permissions are these:</p>
<pre><code>"ec2:StartInstances",
"ec2:StopInstances"
</code></pre>
<p>What might be happening?</p>
|
<python><amazon-web-services><amazon-ec2><boto3>
|
2024-01-18 18:51:28
| 2
| 343
|
Luis Felipe
|
77,841,654
| 3,557,485
|
Ranking objects by absolute occurrence
|
<p>I have a list of dictionaries containing floats. My goal is to rank these objects, taking the occurrence of duplicates into account.</p>
<h2>Example</h2>
<p>If the highest value is <code>19.93</code> and it occurs three times in the raw list, we will have three <code>{"rank":1}</code>'s, and as we already have 3 ranked items, the item with the <code>19.07</code> value will have <code>{"rank":4}</code>, since there are three objects ranked above it. (This is how European soccer rankings are calculated; I want to achieve the same.)</p>
<pre><code>{'value': {'total': 19.93}, 'rank': 1}
{'value': {'total': 19.93}, 'rank': 1}
{'value': {'total': 19.93}, 'rank': 1}
{'value': {'total': 19.07}, 'rank': 4}
{'value': {'total': 18.24}, 'rank': 5}
{'value': {'total': 18.24}, 'rank': 5}
{'value': {'total': 18.24}, 'rank': 5}
{'value': {'total': 17.15}, 'rank': 8}
{'value': {'total': 16.8}, 'rank': 9}
{'value': {'total': 16.8}, 'rank': 9}
{'value': {'total': 16.8}, 'rank': 9}
</code></pre>
<p>But instead of the above, my code gives me:</p>
<pre><code>{'value': {'total': 19.93}, 'rank': 1}
{'value': {'total': 19.93}, 'rank': 1}
{'value': {'total': 19.93}, 'rank': 1}
{'value': {'total': 18.24}, 'rank': 3}
{'value': {'total': 18.24}, 'rank': 3}
{'value': {'total': 18.24}, 'rank': 3}
{'value': {'total': 19.07}, 'rank': 4}
{'value': {'total': 17.15}, 'rank': 5}
{'value': {'total': 16.8}, 'rank': 6}
{'value': {'total': 16.8}, 'rank': 6}
{'value': {'total': 16.8}, 'rank': 6}
</code></pre>
<p>What I basically do is create a unique list from the values, track the occurrence counts of the previous values and add them to the current <code>rank</code>. What am I doing wrong?</p>
<pre><code>#########################
# actual code
import random
def rank_lista(ordered_list):
vals = []
# add all values into a list
for list_item in ordered_list:
try:
vals.append(list_item['value']['total'])
except Exception as e:
print (e)
unique_values = sorted(list(set(vals)))
sorted_list = []
for ooo in reversed(list(unique_values)):
sorted_list.append(ooo)
unique_values = []
unique_values = sorted_list
print ('sorted unique values:', unique_values)
for ordered_list_item in ordered_list:
try:
val = ordered_list_item['value']['total']
if unique_values.index(val) == 0:
ordered_list_item['rank'] = unique_values.index(val)+1
elif unique_values.index(val) == 1:
previous_value_index = 0
previous_value_count = vals.count(unique_values[previous_value_index])
ordered_list_item['rank'] = 1 + previous_value_count
else:
previous_value_index = unique_values.index(val)-1
previous_value_count = vals.count(unique_values[previous_value_index])
ordered_list_item['rank'] = (previous_value_index + 1) + previous_value_count
except Exception as e:
print (e,2)
return ordered_list
def create_random_list():
unique_rounded_floats = list(set(round(random.uniform(1, 20), 2) for _ in range(9)))
floats_with_duplicates = [item for sublist in [[float_] * 3 for float_ in unique_rounded_floats] for item in sublist]
other_floats = [round(random.uniform(1, 20), 2) for _ in range(11)]
result_array = floats_with_duplicates + other_floats
arr = []
for r in result_array:
obj = {}
obj['value'] = {'total':r}
arr.append(obj)
return arr
random_list = create_random_list()
print (random_list)
result_list = sorted(rank_lista(random_list), key=lambda d: float(d['rank']), reverse = False)
for res in result_list:
print (res)
</code></pre>
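<p>For what it's worth, this kind of "1224" competition ranking can be done without the unique-list bookkeeping: sort descending, reuse the previous rank while the value repeats, and take each new value's rank from its 1-based sorted position, which already equals 1 plus the number of items ranked above it. A sketch against the sample values:</p>

```python
def rank_lista(ordered_list):
    # Standard competition ranking: equal values share a rank; the next
    # distinct value's rank is its 1-based position in the sorted list.
    items = sorted(ordered_list, key=lambda d: d['value']['total'], reverse=True)
    prev_total, prev_rank = None, 0
    for position, item in enumerate(items, start=1):
        total = item['value']['total']
        if total != prev_total:
            prev_total, prev_rank = total, position
        item['rank'] = prev_rank
    return items

data = [{'value': {'total': t}} for t in
        [19.93, 19.93, 19.93, 19.07, 18.24, 18.24, 18.24, 17.15, 16.8, 16.8, 16.8]]
ranked = rank_lista(data)
print([d['rank'] for d in ranked])  # [1, 1, 1, 4, 5, 5, 5, 8, 9, 9, 9]
```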
|
<python><python-3.x>
|
2024-01-18 18:27:00
| 2
| 3,360
|
rihekopo
|
77,841,645
| 11,030,722
|
Python Cloud Function - How to Query Firestore with documentID? (where)
|
<p>I am attempting to filter my data based on the <code>documentId</code>. My objective is to retrieve a random document. Unfortunately, I am unable to find a way to achieve this within a Python cloud function.</p>
<p>My current code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>result = firestore_client.collection("Quotes").where(filter=fs.FieldFilter('__name__', '>=', random_character)).limit(1).get()
</code></pre>
<p>In other languages it seems to work, see <a href="https://stackoverflow.com/a/52252264/11030722">https://stackoverflow.com/a/52252264/11030722</a>:</p>
<pre><code>db.collection('books').where(firebase.firestore.FieldPath.documentId(), '==', 'fK3ddutEpD2qQqRMXNW5').get()
</code></pre>
<p>P.S.: It also doesn't work for me without using <code>FieldFilter</code>. Thanks for any assistance.</p>
<p>I've tried <code>'__name__'</code> and <code>firebase.firestore.FieldPath.documentId()</code>, but neither worked.</p>
|
<python><firebase><google-cloud-firestore><google-cloud-functions>
|
2024-01-18 18:25:05
| 0
| 479
|
Luca Köster
|
77,841,392
| 2,410,605
|
Selenium Python How Do I Loop Through Menu Items and See If A Specific Menu is Open
|
<p>I'm new to Selenium Python and trying to loop through a list of menu items, and when I get to a specific menu item I want to see if it's expanded or not. Based on how the code is set up, the only way I can see to do this is see if the aria-expanded element is set to true.</p>
<p>So my problem is two-fold: 1) How do I best loop through the menu items until I get to "Workloads" and 2) how do I determine if Workloads is expanded or not.</p>
<p><a href="https://i.sstatic.net/7ZJk2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7ZJk2.png" alt="enter image description here" /></a></p>
<p>My first thought was a for loop over elements with the class "ng-star-inserted", but that class is used in other areas of the page, and I got an error trying to look up the class "category-name" inside the loop. Everything I've read suggests using XPath, but I'm not familiar enough with XPath syntax to know what I'm doing, so I keep getting errors.</p>
<p>Below is the relevant code, any help you can provide would be so, so appreciated!!</p>
<pre><code><vdl-sidenav-scroll class="sidenav-scrollable-container vdl-sidenav-scroll">
<div cdkscrollable="" class="vdl-scroll-container" tabindex="-1">
<div class="vdl-sidenav-list" role="list">
<!---->
<div class="ng-star-inserted">
<vdl-sidenav-item class="vdl-tooltip-trigger selected vdl-sidenav-item" role="listitem" tabindex="0" vdltooltipclasses="vdl-tooltip-popover" vdltooltiphidedelay="300" vdltooltipposition="right" vdltooltipshowdelay="300" aria-expanded="false" aria-selected="true" aria-describedby="cdk-describedby-message-2" cdk-describedby-host="">
<div class="category-icon">
<!---->
<div>
<!---->
<vdl-icon class="category-icon vdl-icon notranslate fa fa-dashboard ng-star-inserted" fontset="fontawesome" role="img" vdl-list-icon="" aria-hidden="true" data-vdl-icon-type="font" data-vdl-icon-name="fa-dashboard" data-vdl-icon-namespace="fontawesome">
<!---->
</vdl-icon>
<!---->
</div>
</div>
<!---->
<div class="category-name ng-star-inserted">
Dashboard
</div>
<!---->
<!---->
</vdl-sidenav-item>
<!---->
</div>
<div class="ng-star-inserted">
<vdl-sidenav-item class="vdl-tooltip-trigger vdl-sidenav-item" role="listitem" tabindex="0" vdltooltipclasses="vdl-tooltip-popover" vdltooltiphidedelay="300" vdltooltipposition="right" vdltooltipshowdelay="300" aria-expanded="false" aria-selected="false" aria-describedby="cdk-describedby-message-3" cdk-describedby-host="">
<div class="category-icon">
<!---->
<div>
<!---->
<!---->
<vdl-icon class="category-icon vdl-icon notranslate ng-star-inserted" role="img" vdl-list-icon="" aria-hidden="true" data-vdl-icon-type="svg" data-vdl-icon-name="successful-job">
<svg width="100%" height="100%" viewBox="0 0 14 19" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" fit="" preserveAspectRatio="xMidYMid meet" focusable="false">
<!-- Generator: Sketch 44.1 (41455) - http://www.bohemiancoding.com/sketch -->
<desc>Created with Sketch.</desc>
<defs></defs>
<g id="Iconography---" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<g id="Artboard" transform="translate(-126.000000, -414.000000)" fill="#3B73B4">
<g id="ActivityMon_Status" transform="translate(20.000000, 175.000000)">
<g id="Outlines" transform="translate(95.000000, 14.000000)">
<path d="M24.1160714,229.973214 L20.8571429,233.232143 L20.8571429,242.428571 C20.8571429,242.770835 20.7343762,243.064731 20.4888393,243.310268 C20.2433023,243.555805 19.9494065,243.678571 19.6071429,243.678571 C19.2648792,243.678571 18.9709834,243.555805 18.7254464,243.310268 C18.4799095,243.064731 18.3571429,242.770835 18.3571429,242.428571 L18.3571429,238.142857 L17.6428571,238.142857 L17.6428571,242.428571 C17.6428571,242.770835 17.5200905,243.064731 17.2745536,243.310268 C17.0290166,243.555805 16.7351208,243.678571 16.3928571,243.678571 C16.0505935,243.678571 15.7566977,243.555805 15.5111607,243.310268 C15.2656238,243.064731 15.1428571,242.770835 15.1428571,242.428571 L15.1428571,233.232143 L11.8839286,229.973214 C11.6755942,229.76488 11.5714286,229.511906 11.5714286,229.214286 C11.5714286,228.916665 11.6755942,228.663692 11.8839286,228.455357 C12.0997035,228.247023 12.3545372,228.142857 12.6484375,228.142857 C12.9423378,228.142857 13.1934513,228.247023 13.4017857,228.455357 L15.9464286,231 L20.0535714,231 L22.5982143,228.455357 C22.8065487,228.247023 23.0595223,228.142857 23.3571429,228.142857 C23.6547634,228.142857 23.9077371,228.247023 24.1160714,228.455357 C24.3244058,228.671132 24.4285714,228.925966 24.4285714,229.219866 C24.4285714,229.513766 24.3244058,229.76488 24.1160714,229.973214 Z M20.5,228.142857 C20.5,228.834825 20.2563268,229.424477 19.7689732,229.91183 C19.2816196,230.399184 18.6919677,230.642857 18,230.642857 C17.3080323,230.642857 16.7183804,230.399184 16.2310268,229.91183 C15.7436732,229.424477 15.5,228.834825 15.5,228.142857 C15.5,227.450889 15.7436732,226.861238 16.2310268,226.373884 C16.7183804,225.88653 17.3080323,225.642857 18,225.642857 C18.6919677,225.642857 19.2816196,225.88653 19.7689732,226.373884 C20.2563268,226.861238 20.5,227.450889 20.5,228.142857 Z" id="svg-icon"></path>
</g>
</g>
</g>
</g>
</svg>
</vdl-icon>
</div>
</div>
<!---->
<div class="category-name ng-star-inserted">
Activity monitor
</div>
<!---->
<!---->
</vdl-sidenav-item>
<!---->
</div>
<div class="ng-star-inserted">
<vdl-sidenav-item class="vdl-tooltip-trigger vdl-sidenav-item" role="listitem" tabindex="0" vdltooltipclasses="vdl-tooltip-popover" vdltooltiphidedelay="300" vdltooltipposition="right" vdltooltipshowdelay="300" aria-expanded="false" aria-selected="false" aria-describedby="cdk-describedby-message-4" cdk-describedby-host="">
<div class="category-icon">
<!---->
<div>
<!---->
<vdl-icon class="category-icon vdl-icon notranslate fa fa-shield ng-star-inserted" fontset="fontawesome" role="img" vdl-list-icon="" aria-hidden="true" data-vdl-icon-type="font" data-vdl-icon-name="fa-shield" data-vdl-icon-namespace="fontawesome">
<!---->
</vdl-icon>
<!---->
</div>
</div>
<!---->
<div class="category-name ng-star-inserted">
Protection
</div>
<!---->
<!---->
<div class="category-expander ng-star-inserted">
<vdl-icon class="sidenav-icon vdl-icon notranslate fa fa-angle-down" fonticon="fa-angle-down" fontset="fontawesome" role="img" aria-hidden="true" data-vdl-icon-type="font" data-vdl-icon-name="fa-angle-down" data-vdl-icon-namespace="fontawesome">
<!---->
</vdl-icon>
</div>
</vdl-sidenav-item>
<!---->
</div>
<div class="ng-star-inserted">
<vdl-sidenav-item class="vdl-tooltip-trigger vdl-sidenav-item" role="listitem" tabindex="0" vdltooltipclasses="vdl-tooltip-popover" vdltooltiphidedelay="300" vdltooltipposition="right" vdltooltipshowdelay="300" aria-expanded="true" aria-selected="false" aria-describedby="cdk-describedby-message-5" cdk-describedby-host="">
<div class="category-icon">
<!---->
<div>
<!---->
<vdl-icon class="category-icon vdl-icon notranslate fa fa-briefcase ng-star-inserted" fontset="fontawesome" role="img" vdl-list-icon="" aria-hidden="true" data-vdl-icon-type="font" data-vdl-icon-name="fa-briefcase" data-vdl-icon-namespace="fontawesome">
<!---->
</vdl-icon>
<!---->
</div>
</div>
<!---->
<div class="category-name ng-star-inserted">
Workloads
</div>
<!---->
<!---->
<div class="category-expander ng-star-inserted">
<vdl-icon class="sidenav-icon vdl-icon notranslate fa fa-angle-down expanded" fonticon="fa-angle-down" fontset="fontawesome" role="img" aria-hidden="true" data-vdl-icon-type="font" data-vdl-icon-name="fa-angle-down" data-vdl-icon-namespace="fontawesome">
<!---->
</vdl-icon>
</div>
</vdl-sidenav-item>
<!---->
<div class="ng-star-inserted">
<!---->
<div class="ng-star-inserted">
<vdl-sidenav-item class="sub-category-indent vdl-tooltip-trigger vdl-sidenav-item" role="listitem" tabindex="0" vdltooltipclasses="vdl-tooltip-popover" vdltooltiphidedelay="300" vdltooltipposition="right" vdltooltipshowdelay="300" aria-expanded="false" aria-selected="false" aria-describedby="cdk-describedby-message-10" cdk-describedby-host="">
<div class="category-icon">
<!---->
<div>
<!---->
<!---->
<vdl-icon class="category-icon vdl-icon notranslate ng-star-inserted" role="img" vdl-list-icon="" aria-hidden="true" data-vdl-icon-type="svg" data-vdl-icon-name="database-mssql">
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 20 20" fit="" height="100%" width="100%" preserveAspectRatio="xMidYMid meet" focusable="false">
<defs>
<style>
.cls-1 {
fill: none;
}
.cls-2 {
clip-path: url(#clip-path);
}
.cls-3 {
clip-path: url(#clip-path-2);
}
.cls-4 {
fill: #f9f9fa;
}
</style>
<clipPath id="clip-path">
<circle class="cls-1" cx="-657.86" cy="-220.61" r="5.5"></circle>
</clipPath>
<clipPath id="clip-path-2">
<polygon class="cls-1" points="-659.64 -218.11 -659.64 -220.08 -659.71 -222.09 -659.69 -222.1 -658.08 -218.11 -657.63 -218.11 -656.02 -222.11 -656 -222.1 -656.07 -220.08 -656.07 -218.11 -655.4 -218.11 -655.4 -223.09 -656.26 -223.09 -657.85 -219.05 -657.87 -219.05 -659.45 -223.09 -660.32 -223.09 -660.32 -218.11 -659.64 -218.11"></polygon>
</clipPath>
</defs><path class="cls-1" d="M8.32,13.06A7.49,7.49,0,0,1,7,12.58L6.61,13.7a5.35,5.35,0,0,0,.64.27l.29-.76h0l0,.86c.25.08.51.16.79.22Z"></path><path class="cls-1" d="M7.48,16.35v.78h.84v-.5C8,16.54,7.73,16.45,7.48,16.35Z"></path><path class="cls-1" d="M6,15.38,5.77,16h0L4.29,12.07H3.2v5.06H4V15.4L4,13.22H4l1.49,3.91H6l.48-1.27A2.14,2.14,0,0,1,6,15.38Z"></path><path class="cls-4" d="M12.47,11.3A22,22,0,0,1,9.15,11a5.64,5.64,0,0,1,1.35,2.69c.63,0,1.29.08,2,.08a19.46,19.46,0,0,0,3.78-.34A9.1,9.1,0,0,0,19,12.56c.67-.39,1-.81,1-1.26V9.63a8.53,8.53,0,0,1-3.19,1.25A21.41,21.41,0,0,1,12.47,11.3Z"></path><path class="cls-4" d="M19,1.26A8.81,8.81,0,0,0,16.25.34,19.46,19.46,0,0,0,12.47,0,19.46,19.46,0,0,0,8.69.34a8.81,8.81,0,0,0-2.75.92c-.68.38-1,.8-1,1.25V3.77c0,.45.33.87,1,1.25a9.08,9.08,0,0,0,2.75.92,19.46,19.46,0,0,0,3.78.34,19.46,19.46,0,0,0,3.78-.34A9.08,9.08,0,0,0,19,5c.67-.38,1-.8,1-1.25V2.51C20,2.06,19.67,1.64,19,1.26Z"></path><path class="cls-4" d="M12.47,7.53a21.41,21.41,0,0,1-4.35-.42A8.53,8.53,0,0,1,4.93,5.87V7.53c0,.46.33.87,1,1.26a9.08,9.08,0,0,0,2.75.92,20.28,20.28,0,0,0,3.78.34,20.28,20.28,0,0,0,3.78-.34A9.08,9.08,0,0,0,19,8.79c.67-.39,1-.8,1-1.26V5.87a8.53,8.53,0,0,1-3.19,1.24A21.41,21.41,0,0,1,12.47,7.53Z"></path><path class="cls-4" d="M12.47,15.07c-.65,0-1.27,0-1.88-.09a5.62,5.62,0,0,1-.66,2.44,22.77,22.77,0,0,0,2.54.16,19.46,19.46,0,0,0,3.78-.34A8.81,8.81,0,0,0,19,16.32c.67-.38,1-.8,1-1.25V13.4a8.53,8.53,0,0,1-3.19,1.25A21.41,21.41,0,0,1,12.47,15.07Z"></path><path class="cls-4" d="M4.91,9.9a4.9,4.9,0,1,0,4.91,4.89A4.9,4.9,0,0,0,4.91,9.9Zm2.7,3.32v4.29H6.72V15.69l.06-1.41,0-.9h0l-.31.8-.76,2-.51,1.33h-.6L3,13.39H3l.09,2.3v1.82H2.21V12.18H3.36l1.54,4.1h0l.23-.61.66-1.77.45-1.18.2-.54H7.61Z"></path></svg>
</vdl-icon>
</div>
</div>
<!---->
<div class="category-name ng-star-inserted">
Microsoft SQL Server
</div>
<!---->
<!---->
</vdl-sidenav-item>
</div>
<div class="ng-star-inserted">
<vdl-sidenav-item class="sub-category-indent vdl-tooltip-trigger vdl-sidenav-item" role="listitem" tabindex="0" vdltooltipclasses="vdl-tooltip-popover" vdltooltiphidedelay="300" vdltooltipposition="right" vdltooltipshowdelay="300" aria-expanded="false" aria-selected="false" aria-describedby="cdk-describedby-message-11" cdk-describedby-host="">
<div class="category-icon">
<!---->
<div>
<!---->
<!---->
<vdl-icon class="category-icon vdl-icon notranslate ng-star-inserted" role="img" vdl-list-icon="" aria-hidden="true" data-vdl-icon-type="svg" data-vdl-icon-name="database-oracle">
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 20 20" fit="" height="100%" width="100%" preserveAspectRatio="xMidYMid meet" focusable="false">
<defs>
<style>
.cls-1 {
fill: none;
}
.cls-2 {
clip-path: url(#clip-path);
}
.cls-3 {
clip-path: url(#clip-path-2);
}
.cls-4 {
fill: #f9f9fa;
}
</style>
<clipPath id="clip-path">
<circle class="cls-1" cx="-657.86" cy="-243.46" r="5.5"></circle>
</clipPath>
<clipPath id="clip-path-2">
<polygon class="cls-1" points="-659.64 -240.96 -659.64 -242.93 -659.71 -244.94 -659.69 -244.94 -658.08 -240.96 -657.63 -240.96 -656.02 -244.95 -656 -244.95 -656.07 -242.93 -656.07 -240.96 -655.4 -240.96 -655.4 -245.93 -656.26 -245.93 -657.85 -241.89 -657.87 -241.89 -659.45 -245.93 -660.32 -245.93 -660.32 -240.96 -659.64 -240.96"></polygon>
</clipPath>
</defs><path class="cls-4" d="M12.46,11.39a22,22,0,0,1-3.32-.27,5.71,5.71,0,0,1,1.35,2.7c.63,0,1.29.08,2,.08a19.46,19.46,0,0,0,3.78-.34A9.2,9.2,0,0,0,19,12.65c.67-.39,1-.81,1-1.26V9.72A8.53,8.53,0,0,1,16.81,11,21.41,21.41,0,0,1,12.46,11.39Z"></path><path class="cls-4" d="M19,1.34A9.2,9.2,0,0,0,16.24.43,19.46,19.46,0,0,0,12.46.09,19.46,19.46,0,0,0,8.68.43a9.1,9.1,0,0,0-2.75.91c-.68.39-1,.81-1,1.26V3.86c0,.45.33.86,1,1.25A9.08,9.08,0,0,0,8.68,6a20.28,20.28,0,0,0,3.78.34A20.28,20.28,0,0,0,16.24,6,9.18,9.18,0,0,0,19,5.11c.67-.39,1-.8,1-1.25V2.6C20,2.15,19.66,1.73,19,1.34Z"></path><path class="cls-4" d="M12.46,7.62A21.41,21.41,0,0,1,8.11,7.2,8.46,8.46,0,0,1,4.92,6V7.62c0,.45.33.87,1,1.26a9.36,9.36,0,0,0,2.75.92,20.31,20.31,0,0,0,3.78.33,20.31,20.31,0,0,0,3.78-.33A9.47,9.47,0,0,0,19,8.88c.67-.39,1-.81,1-1.26V6A8.53,8.53,0,0,1,16.81,7.2,21.41,21.41,0,0,1,12.46,7.62Z"></path><path class="cls-4" d="M12.46,15.16c-.65,0-1.27,0-1.88-.09a5.54,5.54,0,0,1-.66,2.43,20.43,20.43,0,0,0,2.54.17,20.28,20.28,0,0,0,3.78-.34A9.18,9.18,0,0,0,19,16.41c.67-.38,1-.8,1-1.25V13.49a8.53,8.53,0,0,1-3.19,1.24A20.7,20.7,0,0,1,12.46,15.16Z"></path><path class="cls-4" d="M4.88,12.88a1.06,1.06,0,0,0-.88.4,1.6,1.6,0,0,0-.33,1v.92a1.6,1.6,0,0,0,.33,1,1,1,0,0,0,.88.41,1.1,1.1,0,0,0,.91-.4,1.56,1.56,0,0,0,.34-1v-.92a1.55,1.55,0,0,0-.34-1A1.13,1.13,0,0,0,4.88,12.88Z"></path><path class="cls-4" d="M4.9,10a4.9,4.9,0,1,0,4.9,4.9A4.91,4.91,0,0,0,4.9,10ZM7,15.24a2.1,2.1,0,0,1-.58,1.53,2,2,0,0,1-1.51.61,1.92,1.92,0,0,1-1.48-.61,2.13,2.13,0,0,1-.57-1.53v-.91a2.17,2.17,0,0,1,.57-1.54,2,2,0,0,1,1.48-.61,2,2,0,0,1,1.51.61A2.18,2.18,0,0,1,7,14.33Z"></path></svg>
</vdl-icon>
</div>
</div>
<!---->
<div class="category-name ng-star-inserted">
Oracle
</div>
<!---->
<!---->
</vdl-sidenav-item>
</div>
</div>
</div>
<div class="ng-star-inserted">
<vdl-sidenav-item class="vdl-tooltip-trigger vdl-sidenav-item" role="listitem" tabindex="0" vdltooltipclasses="vdl-tooltip-popover" vdltooltiphidedelay="300" vdltooltipposition="right" vdltooltipshowdelay="300" aria-expanded="false" aria-selected="false" aria-describedby="cdk-describedby-message-6" cdk-describedby-host="">
<div class="category-icon">
<!---->
<div>
<!---->
<!---->
<vdl-icon class="category-icon vdl-icon notranslate ng-star-inserted" role="img" vdl-list-icon="" aria-hidden="true" data-vdl-icon-type="svg" data-vdl-icon-name="credential-management-logo">
<svg width="100%" height="100%" viewBox="0 0 19 20" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" fit="" preserveAspectRatio="xMidYMid meet" focusable="false">
<!-- Generator: Sketch 61.2 (89653) - https://sketch.com -->
<g id="Final-Side-Nav" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<path d="M15.3571429,0 L15.5379813,0.00819614955 C15.9522066,0.0464448475 16.312314,0.218563988 16.6183036,0.524553571 C16.968006,0.874255952 17.1428571,1.29464286 17.1428571,1.78571429 L17.1428571,1.78571429 L17.1428571,4.28571429 L18.2142857,4.28571429 L18.288923,4.29199219 C18.3600725,4.30454799 18.4207589,4.3359375 18.4709821,4.38616071 C18.5379464,4.453125 18.5714286,4.53869048 18.5714286,4.64285714 L18.5714286,4.64285714 L18.5714286,6.78571429 L18.5651507,6.86035156 C18.5525949,6.93150112 18.5212054,6.9921875 18.4709821,7.04241071 C18.4040179,7.109375 18.3184524,7.14285714 18.2142857,7.14285714 L18.2142857,7.14285714 L17.1428571,7.14285714 L17.1428571,8.57142857 L18.2142857,8.57142857 L18.288923,8.57770647 C18.3600725,8.59026228 18.4207589,8.62165179 18.4709821,8.671875 C18.5379464,8.73883929 18.5714286,8.82440476 18.5714286,8.92857143 L18.5714286,8.92857143 L18.5714286,11.0714286 L18.5651507,11.1460658 C18.5525949,11.2172154 18.5212054,11.2779018 18.4709821,11.328125 C18.4040179,11.3950893 18.3184524,11.4285714 18.2142857,11.4285714 L18.2142857,11.4285714 L17.1428571,11.4285714 L17.1428571,12.8571429 L18.2142857,12.8571429 L18.288923,12.8634208 C18.3600725,12.8759766 18.4207589,12.9073661 18.4709821,12.9575893 C18.5379464,13.0245536 18.5714286,13.110119 18.5714286,13.2142857 L18.5714286,13.2142857 L18.5714286,15.3571429 L18.5651507,15.4317801 C18.5525949,15.5029297 18.5212054,15.5636161 18.4709821,15.6138393 C18.4040179,15.6808036 18.3184524,15.7142857 18.2142857,15.7142857 L18.2142857,15.7142857 L17.1428571,15.7142857 L17.1428571,18.2142857 L17.134661,18.3951242 C17.0964123,18.8093494 16.9242932,19.1694568 16.6183036,19.4754464 C16.2686012,19.8251488 15.8482143,20 15.3571429,20 L15.3571429,20 L1.78571429,20 L1.60487584,19.9918039 C1.19065058,19.9535552 0.830543155,19.781436 0.524553571,19.4754464 C0.17485119,19.125744 5.5067062e-14,18.7053571 5.5067062e-14,18.2142857 L5.5067062e-14,18.2142857 L5.5067062e-14,1.78571429 L0.00819614955,1.60487584 
C0.0464448475,1.19065058 0.218563988,0.830543155 0.524553571,0.524553571 C0.874255952,0.17485119 1.29464286,0 1.78571429,0 L1.78571429,0 L15.3571429,0 Z M7.2875817,5 C6.59041394,5 5.90849673,5.19891122 5.24183007,5.59673367 C4.5751634,5.99455611 4.03485839,6.5138191 3.62091503,7.15452261 C3.20697168,7.79522613 3,8.45058626 3,9.12060302 C3,9.80318258 3.22331155,10.3590871 3.66993464,10.7883166 C4.11655773,11.2175461 4.69498911,11.4321608 5.40522876,11.4321608 C6.14640523,11.4321608 6.86464052,11.2099874 7.55993464,10.7656407 L7.79084967,10.6092965 L12.1764706,14.8241206 C12.2984749,14.9413735 12.4466231,15 12.620915,15 C12.8039216,15 12.9803922,14.9183417 13.1503268,14.7550251 C13.3202614,14.5917085 13.4052288,14.4221106 13.4052288,14.2462312 C13.4052288,14.1122278 13.3661874,13.9943049 13.2881046,13.8924623 L13.2222222,13.8190955 L11.7843137,12.4371859 L12.4117647,11.8341709 C12.4248366,11.8467337 12.4782135,11.9011725 12.5718954,11.9974874 C12.6655773,12.0938023 12.7494553,12.1775544 12.8235294,12.2487437 C12.8976035,12.319933 12.9771242,12.3890285 13.0620915,12.4560302 C13.1470588,12.5230318 13.2091503,12.5565327 13.248366,12.5565327 C13.3224401,12.5565327 13.4662309,12.4539363 13.6797386,12.2487437 C13.8932462,12.0435511 14,11.9053601 14,11.8341709 C14,11.80067 13.9379085,11.7148241 13.8137255,11.5766332 C13.6895425,11.4384422 13.5305011,11.2751256 13.3366013,11.0866834 C13.1427015,10.8982412 12.9542484,10.7181742 12.7712418,10.5464824 C12.5882353,10.3747906 12.4095861,10.2083333 12.2352941,10.0471106 C12.0610022,9.88588777 11.9607843,9.79271357 11.9346405,9.76758794 C11.8910675,9.72571189 11.8409586,9.70477387 11.7843137,9.70477387 C11.7102397,9.70477387 11.5664488,9.80737018 11.3529412,10.0125628 C11.1394336,10.2177554 11.0326797,10.3559464 11.0326797,10.4271357 C11.0326797,10.4648241 11.0675381,10.5244975 11.1372549,10.6061558 C11.2069717,10.6878141 11.2788671,10.7642379 11.3529412,10.8354271 C11.4270153,10.9066164 11.5141612,10.9872278 11.6143791,11.0772613 
C11.6945534,11.1492881 11.746841,11.1965243 11.7712418,11.2189698 L11.7843137,11.2311558 L11.1568627,11.8341709 L8.83660131,9.60427136 C9.40740741,8.86725293 9.69281046,8.10301508 9.69281046,7.31155779 C9.69281046,6.62897822 9.46949891,6.0730737 9.02287582,5.64384422 C8.57625272,5.21461474 7.99782135,5 7.2875817,5 Z M7.18300654,6.20603015 C7.53159041,6.20603015 7.82788671,6.32328308 8.07189542,6.55778894 C8.31590414,6.79229481 8.4379085,7.07705193 8.4379085,7.4120603 C8.4379085,7.74706868 8.31590414,8.0318258 8.07189542,8.26633166 C7.82788671,8.50083752 7.53159041,8.61809045 7.18300654,8.61809045 C7,8.61809045 6.81917211,8.57830821 6.64052288,8.49874372 C6.72331155,8.67043551 6.76470588,8.84422111 6.76470588,9.0201005 C6.76470588,9.35510888 6.64270153,9.639866 6.39869281,9.87437186 C6.1546841,10.1088777 5.8583878,10.2261307 5.50980392,10.2261307 C5.16122004,10.2261307 4.86492375,10.1088777 4.62091503,9.87437186 C4.37690632,9.639866 4.25490196,9.35510888 4.25490196,9.0201005 C4.25490196,8.68509213 4.37690632,8.40033501 4.62091503,8.16582915 C4.86492375,7.93132328 5.16122004,7.81407035 5.50980392,7.81407035 C5.69281046,7.81407035 5.87363834,7.8538526 6.05228758,7.93341709 C5.96949891,7.76172529 5.92810458,7.5879397 5.92810458,7.4120603 C5.92810458,7.07705193 6.05010893,6.79229481 6.29411765,6.55778894 C6.53812636,6.32328308 6.83442266,6.20603015 7.18300654,6.20603015 Z" id="svg-icon" fill="#F9F9F9" fill-rule="nonzero"></path>
</g>
</svg>
</vdl-icon>
</div>
</div>
<!---->
<div class="category-name ng-star-inserted">
Credential management
</div>
<!---->
<!---->
</vdl-sidenav-item>
<!---->
</div>
</div>
</div>
<div class="vdl-scroll-rail" style="display: none;">
<div cdkdrag="" cdkdragboundary=".vdl-scroll-rail" cdkdraglockaxis="y" class="vdl-scroll-bar cdk-drag"></div>
</div>
</code></pre>
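<p>For what it's worth, the check seems to reduce to: locate the <code>vdl-sidenav-item</code> whose <code>category-name</code> text is "Workloads" and read its <code>aria-expanded</code> attribute. A sketch of that logic — the Selenium call is shown as a comment and is an assumption about the loaded page; the runnable part demonstrates the same lookup on a minimal copy of the markup using only the standard library:</p>

```python
import xml.etree.ElementTree as ET

# Selenium equivalent (an assumption about the live page, not runnable here):
#   item = driver.find_element(By.XPATH,
#       "//vdl-sidenav-item[.//div[contains(@class,'category-name')]"
#       "[normalize-space()='Workloads']]")
#   is_open = item.get_attribute("aria-expanded") == "true"

# The same lookup demonstrated on a minimal copy of the markup:
SNIPPET = """
<root>
  <vdl-sidenav-item aria-expanded="false">
    <div class="category-name">Dashboard</div>
  </vdl-sidenav-item>
  <vdl-sidenav-item aria-expanded="true">
    <div class="category-name">Workloads</div>
  </vdl-sidenav-item>
</root>
"""

def is_expanded(tree, name):
    """Return True/False for the named item, or None if it is absent."""
    for item in tree.iter("vdl-sidenav-item"):
        label = item.find("div")
        if label is not None and (label.text or "").strip() == name:
            return item.get("aria-expanded") == "true"
    return None

tree = ET.fromstring(SNIPPET)
print(is_expanded(tree, "Workloads"))   # True
print(is_expanded(tree, "Dashboard"))   # False
```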
|
<python><selenium-webdriver>
|
2024-01-18 17:37:02
| 1
| 657
|
JimmyG
|
77,841,343
| 11,578,996
|
How to format/access inner boxplots of a violin catplot in seaborn?
|
<p><a href="https://i.sstatic.net/RUmAk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RUmAk.png" alt="seaborn catplot with kind=violin" /></a></p>
<p>I'm looking to change the formatting of the inner boxplots for the violin plots within this catplot. I have tried using the violinplot parameter <code>inner_kws=dict(box_width=15, whis_width=2, color=".8")</code> to no avail.</p>
<p>code used</p>
<pre><code>p = sns.catplot(data=sample4, x='modelID', y='error', hue='isDF', size=3, kind='violin', split=True, height=5, aspect=1.8, inner='box', fill=False, inner_kws=dict(box_width=15, whis_width=2, color=".8"))
p.set(ylabel='occupancy error', title='Error distribution by model type for busy variable legs')
p.set_xticklabels(rotation=45)
for i,ax in enumerate(p.axes.flatten()):
ax.axhline(0.2, ls='--', color='k', alpha=0.1, zorder=0)
ax.axhline(-0.2, ls='--', color='k', alpha=0.1, zorder=0)
ax.axhline(0, ls='--', color='k', alpha=0.3, zorder=0)
</code></pre>
<p>sample data</p>
<pre><code> modelID isDF error
622 MonthAverage No -0.182288
1096 S-AVG-1 No 0.122069
1277 S-AVG-4 No 0.105147
2189 NoChange Yes -0.189221
1250 S-AVG-4 No -0.037150
1530 GMM Yes 0.068475
1553 GMM Yes -0.543998
789 MonthAverage No -0.164985
1061 S-AVG-1 No 0.064435
1602 MonthAverage Yes -0.062130
448 GMM No 0.478873
1513 GMM Yes 0.166998
1349 S-AVG-4 No -0.069806
1860 GAM Yes 0.034769
830 NoChange No -0.150372
1151 S-AVG-1 No -0.142022
846 NoChange No -0.385930
1847 GAM Yes 0.102694
1049 S-AVG-1 No 0.095174
921 NoChange No -0.102451
</code></pre>
|
<python><seaborn><violin-plot><catplot>
|
2024-01-18 17:28:28
| 0
| 389
|
ciaran haines
|
77,841,256
| 1,711,271
|
group dataframe by column, process values and normalize other column based on processed values
|
<p>I have a dataframe <code>df</code> with two columns, <code>foo</code> and <code>bar</code>:</p>
<pre><code> foo bar
0 2410_030_GCD 0.012093
1 2410_030_GCD 0.012789
2 2410_030_GCD 0.014205
3 2410_010_GCD 0.016280
4 2410_010_GCD 0.019738
5 6122_020_LCM 0.025806
6 6122_020_LCM 0.030336
7 6122_020_LCM 0.034753
8 6122_020_LCM 0.042229
9 3456_030_SGD 0.025792
10 3456_030_SGD 0.030255
11 3456_030_SGD 0.034683
12 3456_030_SGD 0.042194
</code></pre>
<p>For each group of rows where <code>foo</code> has the same value, I want to extract the first substring of 4 consecutive digits from 'foo' (i.e., <code>'2410'</code> if <code>foo = '2410_030_GCD'</code>), convert to a <code>float</code>, divide by 1000 and then divide the values of <code>bar</code> by the resulting value, storing the result in a new column called <code>baz</code>. In other words, the output has to be</p>
<pre><code> foo bar baz
0 2410_030_GCD 0.012093 0.0050178
1 2410_030_GCD 0.012789 0.0053066
2 2410_030_GCD 0.014205 0.0058942
3 2410_010_GCD 0.016280 0.0067552
4 2410_010_GCD 0.019738 0.0081900
5 6122_020_LCM 0.025806 0.0042153
6 6122_020_LCM 0.030336 0.0049552
7 6122_020_LCM 0.034753 0.0056767
8 6122_020_LCM 0.042229 0.0068979
9 3456_030_SGD 0.025792 0.0074630
10 3456_030_SGD 0.030255 0.0087543
11 3456_030_SGD 0.034683 0.0100356
12 3456_030_SGD 0.042194 0.0122089
</code></pre>
<p>Since <code>df</code> is passed as an argument between various functions, I'd rather not add other (intermediate?) columns to <code>df</code>, except for <code>baz</code>. Also, I'm looking for an efficient solution, because the dataframe can be quite large (way bigger than the small example above).</p>
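<p>For reference, one way the described transformation could be sketched — vectorized over the whole frame, with no per-group loop and no intermediate columns added to the frame (column names as above; the divisor is the first run of 4 digits in <code>foo</code> divided by 1000):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "foo": ["2410_030_GCD", "2410_010_GCD", "6122_020_LCM"],
    "bar": [0.012093, 0.016280, 0.025806],
})

# Pull the first run of 4 digits out of foo, scale it, and divide bar by it;
# the intermediate Series never becomes a column of df.
scale = df["foo"].str.extract(r"(\d{4})", expand=False).astype(float) / 1000
df["baz"] = df["bar"] / scale
print(df["baz"].round(7).tolist())  # [0.0050178, 0.0067552, 0.0042153]
```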
|
<python><pandas><group-by><python-re>
|
2024-01-18 17:13:08
| 4
| 5,726
|
DeltaIV
|
77,841,234
| 13,142,245
|
Stan dimensions mismatch
|
<p>PyStan (imported as <code>stan</code>) is giving me an unexpected error.</p>
<pre><code>import stan
import numpy as np

# Load data (replace X and y with your actual data)
X = np.linspace(1,100,100).reshape([100,1])
y = np.random.randint(0,2,100).reshape([100,1])
N = y.shape[0]
K = X.shape[1]
data = {'N': N, 'K': K, 'X': X, 'y': y}
# Compile Stan model
posterior = stan.build(lr_model, data=data, random_seed=1)
# Fit the model
samples = posterior.sample(data=data, iter=1000, chains=4)
</code></pre>
<p><strong>Error</strong>:</p>
<pre><code>Exception: mismatch in number dimensions declared and found in context; processing stage=data initialization; variable name=y; dims declared=(100); dims found=(100,1) (in '/tmp/httpstan_ii3yhtja/model_776ng6bg.stan', line 7, column 2 to column 35)
</code></pre>
<p>I don't understand why Stan expects shape (100) rather than (100, 1), or how to fix it. I've tried using lists instead of arrays, but the same error was returned.</p>
<p>Here's the script</p>
<pre><code>lr_model = """
data {
int<lower=0> N; // number of observations
int<lower=0> K; // number of predictors
matrix[N, K] X; // predictor matrix
array[N] int<lower=0, upper=1> y; // binary response
}
parameters {
vector[K] beta; // regression coefficients
}
model {
beta ~ normal(0, 1);
y ~ bernoulli_logit(X * beta);
}
"""
</code></pre>
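<p>For reference, the declaration <code>array[N] int y</code> has a single dimension of length N, while the NumPy side passes shape (100, 1) — exactly the "dims declared" vs. "dims found" pair in the error. A sketch of the shape check on the Python side (whether flattening is the whole fix is an assumption, but it reconciles the two shapes the error message reports):</p>

```python
import numpy as np

y = np.random.randint(0, 2, 100).reshape(100, 1)
print(y.shape)       # (100, 1) -- the "dims found" in the error

# `array[N] int y` in the Stan program is one-dimensional, so pass 1-D data:
y_flat = y.ravel()
print(y_flat.shape)  # (100,) -- matches "dims declared=(100)"
```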
|
<python><stan><pystan>
|
2024-01-18 17:09:38
| 0
| 1,238
|
jbuddy_13
|
77,841,192
| 13,147,413
|
Replacing values from a given column with the modified values from a different column in python-polars
|
<p>I need to update the values of a column (X), wherever another column (Y) equals a certain value ('v'), with the values of a third column (Z) multiplied by some numbers, and then cast the resulting column to int.</p>
<p>In pandas the code is as follow:</p>
<pre><code>df.loc[df["Y"] == "v", "X"] = (
df.loc[df["Y"] == "v", "Z"] * 1.341 * 15).astype(int)
</code></pre>
<p>In polars i'm able to select values (e.g. nulls) from a column and replace them with values from another column:</p>
<pre><code>df=df.with_columns(
pl.when(df['col'].is_null())
.then(df['other_col'])
.otherwise(df['col'])
.alias('alias_col')
)
</code></pre>
<p>but I'm stuck on nesting the (modified) third column into that call.</p>
<p>Any help would be very much appreciated!</p>
|
<python><dataframe><python-polars>
|
2024-01-18 17:03:29
| 1
| 881
|
Alessandro Togni
|
77,841,164
| 1,668,622
|
Where does the limitation to 100 connections to docker.socket (using aiodocker) come from?
|
<p>I'm using the Python package <a href="https://github.com/aio-libs/aiodocker" rel="nofollow noreferrer">aiodocker</a> to monitor running Docker containers using <code>container.stats()</code> like this:</p>
<pre><code>async for stats in container.stats():
do_something(...)
</code></pre>
<p>And this works as expected.</p>
<p><code>lsof +E /run/docker.sock</code> (might also be <code>/var/run/docker.sock</code>) reveals that there will be a connection to <code>/run/docker.sock</code> established for every monitored container - which is not a surprise, <code>docker stats</code> would also use one connection per container.</p>
<p>However, using Python / <code>aiodocker</code> I can establish a maximum of 100 connections per process - after that new calls involving a new connection will block.</p>
<p>Running another instance of the same Python script seems to be not affected by the first one (but can also open a maximum of 100 connections).</p>
<p>Strangely <code>docker stats</code> does not seem to have this limitation - it will happily monitor 200 and more containers.</p>
<p>Searching through the <a href="https://github.com/aio-libs/aiodocker/tree/master/aiodocker" rel="nofollow noreferrer">aiodocker source code</a> does not show any hard coded limitation like this, and using DDG/Google/ChatGPT I can't find any reference to a limitation to 100 connections/process.</p>
<p>Where is this limitation defined? Is it configurable?</p>
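<p>One candidate worth checking: 100 is exactly aiohttp's default connection-pool size (<code>TCPConnector(limit=100)</code>), and aiodocker is built on aiohttp — so the cap plausibly comes from the session's connector rather than from aiodocker itself. A sketch of a connector with the cap lifted (whether and how aiodocker lets you supply a custom connector or session is an assumption to verify against its constructor):</p>

```python
import asyncio
import aiohttp

async def make_unbounded_connector():
    # aiohttp pools client connections; the default limit is 100,
    # and limit=0 removes the cap entirely.
    connector = aiohttp.TCPConnector(limit=0)
    limit = connector.limit
    await connector.close()
    return limit

print(asyncio.run(make_unbounded_connector()))  # 0
```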
|
<python><docker><sockets>
|
2024-01-18 16:59:56
| 1
| 9,958
|
frans
|
77,840,942
| 6,435,921
|
Computing Matrix-vector product between submatrices of multidimensional arrays
|
<p>I have an array <code>A</code> of shape <code>(1000, 54, 50)</code> and another array <code>x</code> of shape <code>(1000, 54)</code>. For every <code>i = 0, ..., 999</code> I would like to compute <code>A[i, :, :] @ (A[i, :, :].T @ x[i])</code>. What is the fastest way that leverages NumPy's vectorisation capabilities?</p>
<pre><code>import numpy as np
A = np.random.randn(1000, 54, 50)
x = np.random.randn(1000, 54)
def slow_method(A, x):
B = np.zeros((1000, 54))
for i in range(1000):
B[i] = A[i] @ (A[i].T @ x[i])
return B
</code></pre>
<p>Another method I came up with uses <code>einsum</code>, but it seems to do a lot of redundant calculations.</p>
<pre><code>def einsum_method(A, x):
return np.einsum('ijk,ik->ij', A, np.einsum('ijk,ik->ij', np.transpose(A, axes=(0, 2, 1)), x))
</code></pre>
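<p>For comparison, batched <code>matmul</code> maps directly onto the loop body, since <code>@</code> broadcasts over the leading axis when <code>x</code> is treated as a stack of column vectors (a sketch; which variant is actually fastest would need measuring):</p>

```python
import numpy as np

A = np.random.randn(1000, 54, 50)
x = np.random.randn(1000, 54)

def matmul_method(A, x):
    # x[:, :, None] has shape (1000, 54, 1): a stack of column vectors,
    # so both products become single batched matmuls.
    return (A @ (A.transpose(0, 2, 1) @ x[:, :, None]))[:, :, 0]

B = matmul_method(A, x)
print(B.shape)  # (1000, 54)
```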
|
<python><numpy><numpy-ndarray>
|
2024-01-18 16:26:14
| 1
| 3,601
|
Euler_Salter
|
77,840,804
| 11,703,579
|
Install Neurolab for Python 3.11
|
<p>I'm trying to install the <code>neurolab</code> package into my <code>Python 3.11</code> environment.
I use <code>Anaconda</code> with <code>Python 3.11</code>. The versions of <code>Neurolab</code> available on the <code>Anaconda</code> page are these:</p>
<p><a href="https://i.sstatic.net/VxWrl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VxWrl.png" alt="enter image description here" /></a></p>
<p>But none of these versions works on <code>Python 3.11</code>. I tried to install via <code>Anaconda Navigator</code> and <code>CMD</code> (I use Windows 11).</p>
<p>How to install this package (or a similar version that works) with my version of Python?</p>
|
<python><machine-learning><anaconda><neurolab>
|
2024-01-18 16:04:02
| 1
| 569
|
gbossa
|
77,840,657
| 893,254
|
VS Code Pylance - Import ".XXX" could not be resolved
|
<p>The below screenshot shows the contents of an <code>__init__.py</code> file which Pylance thinks has some missing imports.</p>
<p>Why are some of the imports resolvable and some are not?</p>
<p>The directory containing this <code>__init__.py</code> file contains the files listed in the import statements:</p>
<ul>
<li><code>parameter_base.py</code></li>
<li><code>parameter_boolean.py</code></li>
<li><code>parameter_search_type.py</code></li>
<li><code>parameter_tristate.py</code></li>
<li>etc...</li>
</ul>
<p>The directory structure looks like this:</p>
<pre><code>src/
liburl/
__init__.py
url_builder.py
url_parameter_types/
__init__.py
parameter_base.py
parameter_boolean.py
parameter_search_type.py
etc...
</code></pre>
<p>What I find confusing is that <code>ParameterBoolean</code> imports ok.</p>
<pre><code># parameter_boolean.py
from .parameter_base import ParameterBase
class ParameterBoolean():
def __init__(self) -> None:
self.state = None
def set_state(self, state) -> None:
if not isinstance(state, bool):
raise ValueError('state must be of type bool')
self.state = state
def get_state(self) -> bool:
return self.state
</code></pre>
<p>whereas <code>ParameterSearchType</code> does not.</p>
<pre><code># parameter_search_type.py
from enum import Enum
from .parameter_base import ParameterBase
class SearchTypeEnum(Enum):
FOR_SALE = 1
class ParameterSearchType(ParameterBase):
def __init__(self) -> None:
super().__init__()
self.state = SearchTypeEnum.FOR_SALE
def get_state(self) -> SearchTypeEnum:
return self.state
</code></pre>
<p>There doesn't seem to be much difference between these files and Pylance shows no obvious problems.</p>
<p><a href="https://i.sstatic.net/zvZHG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zvZHG.png" alt="pylance" /></a></p>
|
<python>
|
2024-01-18 15:42:27
| 1
| 18,579
|
user2138149
|
77,840,585
| 893,254
|
Building a Python Library - are fully qualified import statements required?
|
<h1>Additional context added later</h1>
<p>I wrote this question when I had less experience with Python and now I have some recommendations to make:</p>
<ol>
<li>Do not put library code in a <code>src</code> directory</li>
<li>Do not put "executable" Python code in a separate <code>bin</code> directory</li>
<li>Do not include <code>pyproject.toml</code></li>
</ol>
<p>If you follow the points above, and compare with the project structure from the original question below, you will end up with a structure like this instead:</p>
<pre><code>my_project/
main.py
liburl/
__init__.py
...
.venv
</code></pre>
<p>What you will find with this structure is that everything "just works".</p>
<ul>
<li>you run <code>main.py</code> from the directory <code>my_project</code></li>
<li>when the interpreter starts up, the directory containing <code>main.py</code> is prepended to the interpreter search paths (<code>sys.path</code>)</li>
<li>this means the interpreter can find the library <code>liburl</code>, without any additional machinery</li>
</ul>
<p>This is how Python is intended to be used. It isn't like other languages, which favour separating library source code from the source code of the executables which use those libraries.</p>
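<p>A quick way to see the search-path behaviour described above (illustrative; save it as a script and run it, rather than pasting it into a REPL):</p>

```python
# show_path.py -- run with `python show_path.py`
import sys

# When a script is executed, the directory containing the script is
# prepended to the interpreter's module search path, so sibling
# packages such as liburl become importable with no extra machinery.
print(sys.path[0])
```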
<p>Here's the original question:</p>
<h1>Python Project Structure</h1>
<p>This is what my Python project structure looks like.</p>
<ul>
<li>As far as I am aware, this choice of structure is a standard project structure.</li>
<li>It is possible that I am wrong.</li>
<li>This choice of structure might have something to do with my question, which is about having to use fully qualified names within the library itself.</li>
<li>If this design is bad in some way please let me know.</li>
</ul>
<pre><code>my_project/
bin/
main.py
src/
liburl/
__init__.py
url_builder.py
url_parameter_types/
__init__.py
parameter_base.py
parameter_boolean.py
pyproject.toml
.venv
</code></pre>
<hr />
<h4><code>pyproject.toml</code></h4>
<pre><code>[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[project]
name = "my_project"
version = "1.0.0"
description = "My Project"
[tools.setuptools.packages.find]
where = ["src"]
</code></pre>
<hr />
<ul>
<li>I am trying to build a python library which I will use from <code>main.py</code>. The library is named <code>liburl</code>.</li>
<li>Within the library code, I appear to require fully qualified import statements.</li>
<li>Is this expected? It seems odd that I can't use relative imports.</li>
</ul>
<p>Here is an example:</p>
<h4><code>url_parameter.py</code></h4>
<pre><code>"""
`url_parameter_types` is a subdirectory in the same directory
as this file
"""
from url_parameter_types import UrlParameterTristate # this does not work
### ... rest of code
</code></pre>
<p>Here is another example:</p>
<h4><code>parameter_boolean.py</code></h4>
<pre><code>"""
`parameter_base.py` is a file in the same directory as this
file
"""
from parameter_base import UrlParameterBase # does not work
### ... rest of code
</code></pre>
<hr />
<p>In order to make the example import statements shown above work, I have to use fully qualified imports. That means doing this:</p>
<pre><code>from liburl.url_parameter_types.parameter_boolean import UrlParameterBoolean
</code></pre>
<p>This strikes me as odd for two reasons:</p>
<ul>
<li>It is a lot of excess typing</li>
<li>The thing being imported exists in the same directory as the code doing the importing; I would have thought it would be "visible" from other source files in the same directory</li>
<li>If the directory structure changes (quite a common thing to do when building a library) all the absolute import statements will need to be changed. (For example if a subfolder is renamed or moved - you can probably imagine why this would be a problem without an explicit explanation.)</li>
</ul>
<hr />
<p>For a final, bonus example, consider the following:</p>
<h4><code>__init__.py</code></h4>
<pre><code>from url_builder import UrlBuilder
</code></pre>
<ul>
<li>I cannot import from the file <code>url_builder.py</code> from the root level of this library.</li>
<li>Instead I have to do this:</li>
</ul>
<pre><code>from liburl.url_builder import UrlBuilder
</code></pre>
<p>Is this just how Python works? It surprises me that I can't use relative imports.</p>
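<p>For what it's worth, relative imports do work once the modules are imported as part of a package; a small self-contained sketch (the file names mirror the question, but the package is created in a temporary directory for illustration):</p>

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Build a minimal liburl package on disk so the relative import in
# __init__.py can be demonstrated without any installation step.
root = Path(tempfile.mkdtemp())
pkg = root / "liburl"
pkg.mkdir()
(pkg / "url_builder.py").write_text(textwrap.dedent("""
    class UrlBuilder:
        pass
"""))
# A leading dot makes the import relative to the containing package.
(pkg / "__init__.py").write_text("from .url_builder import UrlBuilder\n")

sys.path.insert(0, str(root))  # make the package's parent directory importable
import liburl

print(liburl.UrlBuilder.__name__)  # → UrlBuilder
```

The key point is that the dotted (relative) form only resolves when the file is loaded as part of the package, not when it is run directly as a top-level script.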
|
<python>
|
2024-01-18 15:31:34
| 1
| 18,579
|
user2138149
|
77,840,560
| 16,813,096
|
Why opacity in tkinter widgets not supported?
|
<p>So I searched for opacity in individual tkinter widgets and I found that it is not possible with tkinter.</p>
<p>However, there is a transparent attribute on macOS and Windows which makes a color fully transparent (<a href="https://stackoverflow.com/questions/75594687/how-to-apply-transparent-background-in-tkinter-window-of-linux-not-alpha">example here</a>), meaning we can <strong>see through that area of the window completely</strong>.</p>
<p>The second root attribute is <strong>-alpha</strong>, which changes the opacity of the <strong>whole window</strong> (value from 0 to 1).</p>
<p>What I wish for is to have that alpha opacity in individual widgets only (with a solid background), at least for the tk canvas, so that we can make blurry/transparent effects.</p>
<p>(Tk canvas has the <a href="https://stackoverflow.com/a/59149180/16813096">stipple</a> method, but it is not true transparency)</p>
<p>PIL can also be used to make <a href="https://stackoverflow.com/a/54645103/16813096">images transparent</a>, but not widgets.</p>
<p>I also found a <a href="https://github.com/astqx/tkinterwidgets" rel="nofollow noreferrer">hack</a>, which is to use the alpha and transparent attribute with the overrideredirect window + binding it with a different base window, but this hack will not work in all cases and can be problematic.</p>
<p>So, what I am searching for is: can we use some other tricks, if any exist? At least for Windows, maybe using <strong>win32gui or ctypes</strong>?</p>
<p>To be honest, this is the only important, basic thing tkinter lacks.</p>
<p>The tkinter canvas is actually very powerful; we can even make our own custom widgets, one example being the customtkinter library. Adding opacity to those widgets would be great.</p>
|
<python><python-3.x><tkinter><tkinter-canvas><customtkinter>
|
2024-01-18 15:28:49
| 1
| 582
|
Akascape
|
77,840,397
| 2,729,831
|
Error "data is invalid" for Custom SSL certificate Google App Engine
|
<p>I am trying to install a custom SSL certificate for Google App Engine, but I get this error: <code>"The certificate data is invalid. Please ensure that the private key and public certificate match."</code>
So I checked with the commands:</p>
<pre><code>openssl x509 -noout -modulus -in concat.crt | openssl md5
openssl rsa -noout -modulus -in server.key.pem | openssl md5
</code></pre>
<p>and the output is the same.
How can I fix it?</p>
|
<python><ssl><google-cloud-platform><google-app-engine><ssl-certificate>
|
2024-01-18 15:05:22
| 0
| 473
|
blob
|
77,840,287
| 6,435,921
|
NumPy - Fill each subdiagonal of an array, with rows from another array leveraging vectorisation
|
<p>I have a numpy array <code>A</code> of shape <code>(N, M, M)</code> and another numpy array <code>B</code> of shape <code>(N, M)</code>. I want to fill each of the <code>N</code> diagonals of the submatrices <code>A[i, :, :]</code> with the rows of <code>B</code>. I want to do this as fast as possible using NumPy's vectorisation capabilities. Of course, one could use a for loop, but I want something more efficient.</p>
<pre><code>import numpy as np
A = np.random.randn(1000, 50, 50)
B = np.random.randn(1000, 50)
for i in range(1000):
np.fill_diagonal(A[i, :, :], B[i])
</code></pre>
<h1>Benchmarking</h1>
<p>Here I will benchmark the various solutions. From <a href="https://stackoverflow.com/a/66432278/6435921">this</a> question I have found <code>method3</code> which is displayed below.</p>
<pre><code>def method1(A, B):
for i in range(A.shape[0]):
np.fill_diagonal(A[i, :, :], B[i])
return A
def method2(A, B):
idx1, idx2 = np.indices(B.shape).reshape(2, -1)
A[idx1, idx2, idx2] = B.ravel()
return A
def method3(A, B):
np.einsum('ijj->ij', A)[...] = B
return A
</code></pre>
<p>Running <code>%timeit</code> on these three methods gives</p>
<ul>
<li>Method1: 951 µs ± 3.21 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)</li>
<li>Method2: 289 µs ± 1.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)</li>
<li>Method3: 47.6 µs ± 169 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)</li>
</ul>
<p>So method 3 is the fastest, although I was hoping for a better speedup.</p>
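<p>As a sanity check on small inputs (shapes chosen arbitrarily), the einsum view and the explicit loop write the same values:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3, 3))
B = rng.standard_normal((4, 3))

# Explicit loop: fill each submatrix diagonal row by row.
A_loop = A.copy()
for i in range(A_loop.shape[0]):
    np.fill_diagonal(A_loop[i], B[i])

# Vectorised: 'ijj->ij' yields a writable view of the N diagonals,
# so assigning B writes all diagonals in one step.
A_view = A.copy()
np.einsum('ijj->ij', A_view)[...] = B

print(np.allclose(A_loop, A_view))  # → True
```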
|
<python><numpy><numpy-ndarray>
|
2024-01-18 14:49:49
| 3
| 3,601
|
Euler_Salter
|
77,840,285
| 6,184,153
|
Email test does not work in Django if django-allauth sends the email
|
<p>I use the <strong>Django 5.0.1 with django-allauth</strong>. With real smtp email backend, email confirmation works. Django's email testing features are also working if django-allauth is not involved, like in the <a href="https://docs.djangoproject.com/en/5.0/topics/testing/tools/#email-services" rel="nofollow noreferrer">example of the Django testing docs' email service</a>.</p>
<p>As soon as I want to test the third-party django-allauth confirmation email sending function, <em>mail.outbox</em> does not contain any items.</p>
<p>My partial test code:</p>
<pre class="lang-py prettyprint-override"><code>class SignupProcessTest(TestCase):
def setUp(self):
# We setup base form data and we overwrite only test method specific ones
self.data = {"email": "roland.szucs@booknwalk.com",
"password1": "ajtoablak1",
"newsletter": 1}
def test_confirmEmailSent(self):
self.client.post(reverse("account_signup"),
self.data,
secure=True,
follow=True)
# check email
self.assertEqual(len(mail.outbox), 1, f"Email was not sent for confirmation:{len(mail.outbox)}")
</code></pre>
<p>Based on the docs, Django transparently replaces the email backend settings and collects all emails in the <em>mail.outbox</em> list. Even django-allauth should not be aware of which email backend serves its email-sending request.</p>
<p>When I try this on my local server, after signup I am redirected to another page and I get an email within seconds. Why does it not happen in the test environment?</p>
<p>Any idea?</p>
|
<python><django><email><testing><django-allauth>
|
2024-01-18 14:49:36
| 1
| 361
|
Roland
|
77,840,135
| 6,345,518
|
Python regex substitute string in code file
|
<p>I am trying to use Python v3.12.0 regex to reformat <strong>strings in a source code file</strong> to resemble the C++ raw string format.</p>
<h3>Sample input file</h3>
<p>For instance, the following code in the file <code>myclassA.txt</code> (the code is in a <strong>C++-like dialect; please mind the special way of declaring multi-line strings</strong>):</p>
<pre class="lang-cpp prettyprint-override"><code>class myClassA(myBaseClass) {
docstring = // End-of-line-comments possible
"
This is my class description docstring stored in a string variable inherited from
myBaseClass.
The content of this string, INCLUDING INDENTATION, MUST NOT be changed.
";
// This variable is declared as string has a mismatched indentation:
string some_detailed_notes = "Some string like this is also possible.
Also with
some really
strange indentation
and `inline code`, ```code blocks``` $\\text{LaTeX}$ $$\\frac{1}{2}$$
:::{hint}
some admonitions
:::
and all kind of special characters such as:
\"
(...)
{...}
[...]
;.,
etc.
which MUST be preserved.";
string some_other_string = "
do NOT touch this, only docstring and some_detailed_notes
";
}
</code></pre>
<h3>Expected output</h3>
<p>In this code file, I want to <em>replace all unescaped string delimiters <code>"</code></em>, but <strong>not</strong> <code>\"</code>, such that the string resembles the C++ raw string format:</p>
<pre class="lang-cpp prettyprint-override"><code>docstring = R""""(
my string
)"""";
</code></pre>
<p>resp. for the full input example:</p>
<pre class="lang-cpp prettyprint-override"><code>class myClassA(myBaseClass) {
docstring = // End-of-line-comments possible
R""""(
This is my class description docstring stored in a string variable inherited from
myBaseClass.
The content of this string, INCLUDING INDENTATION, MUST NOT be changed.
)"""";
// This variable is declared as string has a mismatched indentation:
string some_detailed_notes = R""""(Some string like this is also possible.
Also with
some really
strange indentation
and `inline code`, ```code blocks``` $\\text{LaTeX}$ $$\\frac{1}{2}$$
:::{hint}
some admonitions
:::
and all kind of special characters such as:
\"
(...)
{...}
[...]
;.,
etc.
which MUST be preserved.)"""";
string some_other_string = "
do NOT touch this, only docstring and some_detailed_notes
";
}
</code></pre>
<h3>What I tried so far</h3>
<p>I tried the following regex to enclose the string group in C++-raw-string delimiters, <strong>but only for the first occurrences of strings <code>docstring</code> and <code>some_detailed_notes</code></strong>.</p>
<pre class="lang-py prettyprint-override"><code>import pathlib
import re
fp = pathlib.Path('myclassA.txt').resolve(True)
txt = fp.read_text(encoding='utf-8')
cpp_rawstr_start = 'R""""('
cpp_rawstr_end = ')""""'
txt_sub = re.sub(
r'([ ]*(docstring|some_detailed_notes)\s*=[\s\t\n]*")(?:(?!")(?:\\.|[^\\]))*";',
fr'{cpp_rawstr_start}\n\g<0>\n{cpp_rawstr_end}',
txt,
count=2,
flags=re.DOTALL
)
</code></pre>
<p>But this results in:</p>
<pre class="lang-cpp prettyprint-override"><code>R""""(
docstring = // End-of-line-comments possible
"
This is my class description docstring stored in a string variable inherited from
myBaseClass.
The content of this string, INCLUDING INDENTATION, MUST NOT be changed.
";
)""""
</code></pre>
<p>which is of course not exactly what I wanted, furthermore <code>some_detailed_notes</code> <strong>is not touched at all</strong>.</p>
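<p>One possible direction (a sketch, not a full solution): capture the prefix up to the opening quote as group 1 and the string body as group 2, and place the raw-string markers around group 2 only. The lazy <code>[^"]*?</code> in the prefix cannot cross a quote, so end-of-line comments are allowed while unrelated strings such as <code>some_other_string</code> stay untouched:</p>

```python
import re

# Group 1: variable name up to (not including) the opening quote,
#          possibly spanning an end-of-line comment and newlines.
# Group 2: the string body, with \\. allowing escaped characters
#          such as \" to be skipped over safely.
pattern = re.compile(
    r'((?:docstring|some_detailed_notes)\b[^"]*?)"((?:\\.|[^"\\])*)";',
    re.DOTALL,
)

txt = 'docstring = // End-of-line-comments possible\n"\nmy string\n";'
out = pattern.sub(lambda m: m.group(1) + 'R""""(' + m.group(2) + ')"""";', txt)
print(out)
# → docstring = // End-of-line-comments possible
#   R""""(
#   my string
#   )"""";
```

The same substitution leaves escaped quotes inside the body intact, since they are consumed by the <code>\\.</code> branch rather than ending the match.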
<h4>Additional task</h4>
<p><strong>Only if feasible and not too complicated</strong>, I'd also like to include strings defined using enclosing parentheses:</p>
<pre class="lang-cpp prettyprint-override"><code>docstring = (
"
Enclosed in parentheses
"
);
</code></pre>
<p>to be converted to</p>
<pre class="lang-cpp prettyprint-override"><code>docstring =
R""""(
Enclosed in parentheses
)"""";
</code></pre>
<p>dropping the parentheses is optional.</p>
|
<python><regex>
|
2024-01-18 14:27:49
| 2
| 5,832
|
JE_Muc
|
77,840,099
| 512,480
|
VSCode Python: Error while enumerating installed packages
|
<p>My long-successful VSCode Python setup broke inexplicably.</p>
<p>Please, what does the error below mean?
First, the recommended Python 3.12 can't find <code>_tkinter</code>. When I try an older installed version, I get the error below. I can't use pip either; it says "externally managed environment". I'm lost.</p>
<pre class="lang-none prettyprint-override"><code>% cd /Users/ken/Teaching/Python2024 ; /usr/bin/env /opt/local/bin/python3.9 /Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher 64660 -- /Users/ken/Teaching/Python2024/turtle1.py
E+00000.193: Error while enumerating installed packages.
Traceback (most recent call last):
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/common/log.py", line 361, in get_environment_description
report(" {0}=={1}\n", pkg.name, pkg.version)
AttributeError: 'PathDistribution' object has no attribute 'name'
Stack where logged:
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/__main__.py", line 91, in <module>
main()
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/__main__.py", line 21, in main
log.describe_environment("debugpy.launcher startup environment:")
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/common/log.py", line 369, in describe_environment
info("{0}", get_environment_description(header))
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/common/log.py", line 363, in get_environment_description
swallow_exception("Error while enumerating installed packages.")
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/common/log.py", line 215, in swallow_exception
_exception(format_string, *args, **kwargs)
E+00000.030: Error while enumerating installed packages.
Traceback (most recent call last):
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/common/log.py", line 361, in get_environment_description
report(" {0}=={1}\n", pkg.name, pkg.version)
AttributeError: 'PathDistribution' object has no attribute 'name'
Stack where logged:
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 415, in main
api.ensure_logging()
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/api.py", line 61, in ensure_logging
log.describe_environment("Initial environment:")
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/common/log.py", line 369, in describe_environment
info("{0}", get_environment_description(header))
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/common/log.py", line 363, in get_environment_description
swallow_exception("Error while enumerating installed packages.")
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/common/log.py", line 215, in swallow_exception
_exception(format_string, *args, **kwargs)
E+00000.225: Error while enumerating installed packages.
Traceback (most recent call last):
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/common/log.py", line 361, in get_environment_description
report(" {0}=={1}\n", pkg.name, pkg.version)
AttributeError: 'PathDistribution' object has no attribute 'name'
Stack where logged:
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 281, in run_file
log.describe_environment("Pre-launch environment:")
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/common/log.py", line 369, in describe_environment
info("{0}", get_environment_description(header))
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/common/log.py", line 363, in get_environment_description
swallow_exception("Error while enumerating installed packages.")
File "/Users/ken/.vscode/extensions/ms-python.debugpy-2023.3.13341006-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/common/log.py", line 215, in swallow_exception
_exception(format_string, *args, **kwargs)
</code></pre>
|
<python><visual-studio-code><pip>
|
2024-01-18 14:21:53
| 4
| 1,624
|
Joymaker
|
77,839,982
| 8,324,480
|
How to include an online forms in a sphinx-build website
|
<p>I have a python package and website which is built with <code>sphinx</code>. I'm looking to include a page containing a form that can be filled in:</p>
<pre><code>e-mail (*):
question 1:
question 2:
...
Validate
</code></pre>
<p>Upon validation, the form should be converted into a nicely formatted message and sent by email.
Before writing my own tool and extension, do you know of any Sphinx extension I could use to do so?</p>
|
<python><forms><python-sphinx>
|
2024-01-18 14:03:26
| 1
| 5,826
|
Mathieu
|
77,839,886
| 9,846,650
|
How to run commands in postgres inside docker using python script?
|
<p>I have written a script to pull a PostgreSQL image in Docker and run certain commands inside the container.</p>
<p>I am able to successfully pull the image and enter the container after getting the container ID with the script.</p>
<pre class="lang-py prettyprint-override"><code>
import subprocess, os
import docker
import time
def run_command(command):
subprocess.run(command, shell=True)
def execute_command_in_container(container_id, command):
client = docker.from_env()
try:
# Start the container if not already running
container = client.containers.get(container_id)
container.start()
# Execute the command inside the container
exec_result = container.exec_run(command, stdout=True, stderr=True, tty=True)
# Print the command output
print("Command output:")
print(exec_result.output.decode())
# Print the command error, if any
if exec_result.exit_code != 0:
print("Command error:")
print(exec_result.stderr.decode())
except docker.errors.NotFound as e:
print(f"Container with ID {container_id} not found.")
except docker.errors.APIError as e:
print(f"An error occurred while communicating with Docker API: {e}")
except Exception as e:
print(f"An unexpected error occurred: {e}")
finally:
if container:
container.stop()
# Pull PostgreSQL image
pull_image_command = "docker pull postgres"
run_command(pull_image_command)
run_container_command = "docker run --name postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres"
run_command(run_container_command)
get_container_id_command = "docker ps -qf 'name=postgres'"
result = subprocess.run(get_container_id_command, shell=True, capture_output=True, text=True)
container_id = result.stdout.strip()
sql_dump_path = "/Users/dump.sql"
if not os.path.exists(sql_dump_path):
raise FileNotFoundError(f"The file {sql_dump_path} does not exist.")
copy_to_container_command = f"docker cp {sql_dump_path} {container_id}:/"
run_command(copy_to_container_command)
# code fails from here till the end
commands_within_container = [
"psql -U postgres",
]
enter_container_and_execute_commands = f"docker exec -it {container_id} bash"
run_command(enter_container_and_execute_commands)
for command in commands_within_container:
execute_command_in_container(container_id, command)
time.sleep(2)
commands_within_container = [
"CREATE DATABASE abc;",
.
.
.
.
]
for command in commands_within_container:
execute_command_in_container(container_id, command)
commands_within_container_2 = [
.
.
.
.
"\q"
]
for command in commands_within_container_2:
execute_command_in_container(container_id, command)
last_command = "psql -U postgres abc < dump.sql"
run_command(last_command)
run_command("exit")
</code></pre>
<p>The code stops here; this is the terminal output:</p>
<pre><code>Successfully copied 5.69GB to 69be84c9269:/
root@69be84c9269:/#
</code></pre>
<p>I need to create the tables inside PostgreSQL as per my dump. How do I fix my script accordingly?</p>
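<p>One possible direction (a sketch under the assumption that each SQL statement can be run non-interactively): instead of typing into an interactive <code>bash</code>/<code>psql</code> session, pass every statement to <code>psql</code> with <code>-c</code>, so <code>exec_run</code> never needs a TTY. The helper below only builds the argv list; passing it to <code>container.exec_run(cmd)</code> would work as in the original script:</p>

```python
def build_psql_exec(sql, user="postgres", db="postgres"):
    """Build the argv for one non-interactive psql call (-c runs one statement)."""
    return ["psql", "-U", user, "-d", db, "-c", sql]

# Each statement becomes its own exec_run call; no interactive shell needed.
# (Restoring the dump would similarly use: psql -U postgres -d abc -f /dump.sql)
statements = ["CREATE DATABASE abc;"]
for sql in statements:
    cmd = build_psql_exec(sql)
    # exec_result = container.exec_run(cmd, stdout=True, stderr=True)
    print(cmd)
```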
|
<python><postgresql><docker>
|
2024-01-18 13:47:21
| 1
| 712
|
mad_lad
|
77,839,844
| 13,518,907
|
Langchain RetrievalQA: Missing some input keys
|
<p>I have created a RetrievalQA chain, but I am facing an issue. When calling the chain, I get the following error: <code>ValueError: Missing some input keys: {'query', 'typescript_string'}</code></p>
<p>My code looks as follows:</p>
<pre><code>from langchain_community.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(model_name="intfloat/multilingual-e5-large",
model_kwargs={'device': 'mps'}, encode_kwargs={'device': 'mps', 'batch_size': 32})
vectorstore = FAISS.load_local("vectorstore/db_faiss_bkb", embeddings)
retriever = vectorstore.as_retriever(search_kwargs={'k': 1, 'score_treshold': 0.75}, search_type="similarity")
llm = build_llm("modelle/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf")
def build_retrieval_qa(llm, prompt, vectordb):
chain_type_kwargs={ "prompt": prompt,
"verbose": False
}
dbqa = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=vectordb,
return_source_documents=True,
chain_type_kwargs=chain_type_kwargs,
verbose=True
)
return dbqa
prompt = """
<s> [INST] You are getting a task and a User input. If there is relevant information in the context, please add this information to your answer.
### Here the Task: ###
{typescript_string}
### Here the context: ###
{context}
### Here the User Input: ###
{query}
Answer: [/INST]
"""
prompt_temp = PromptTemplate(template=prompt, input_variables=["typescript_string", "context", "query"])
dbqa1 = build_retrieval_qa(llm=llm,prompt=prompt_temp,vectordb=retriever)
question = "What is IGB?"
types = "Answer shortly!"
dbqa1({"query": question, "typescript_string": types})
</code></pre>
<p>With this code, the error from above occurs at the last line.</p>
<p>The weird thing is that it works with an LLMChain from Langchain without retrieval:</p>
<pre><code>from langchain.chains import LLMChain
llm_chain = LLMChain(
llm=llm,
prompt= prompt_temp,
verbose=True,
)
test = llm_chain({"type_string": types, "input": question})
test
</code></pre>
<p>This works and I am getting a correct response.
I am using</p>
<blockquote>
<p>Langchain == 0.1.0</p>
</blockquote>
<p>So is there something wrong with my PromptTemplate?</p>
|
<python><python-3.x><langchain><py-langchain><retrieval-augmented-generation>
|
2024-01-18 13:41:11
| 3
| 565
|
Maxl Gemeinderat
|
77,839,774
| 6,194,597
|
Randomly select from a list while ensuring the selected elements value must be x apart
|
<p>Say I have a list</p>
<pre><code>l = [0, 0, 0, 1, 1, 2, 3, 4, 5, 5, 5, 5, 6, 6, 7]
</code></pre>
<p>Given the list above, which has 15 elements (8 unique), I need to sample n indices of elements that are random but <strong>at least x apart in value</strong>. For example, if <code>n = 4</code> and <code>x = 2</code>, some possibilities are (note that the <code>indices</code> can be in any order; I give them in ascending order for easier understanding):</p>
<pre><code># the randomly selected elements are [0, 2, 4, 7], while their respective indices are:
indices = [0, 5, 7, 14]
# the randomly selected elements are [1, 3, 5, 7], while their respective indices are:
indices = [3, 6, 11, 14]
</code></pre>
<p>In the case where it's not possible to draw <code>n</code> indices that are <code>x</code> apart in value (for example, <code>n = 5</code> and <code>x = 2</code> with the list above), the remaining indices (here, the 1 remaining index) could come from any random element that hasn't been picked yet. If there are none left to pick from, the length of <code>indices</code> would be <code>n - remaining</code> (effectively meaning there is no need to pick any more; keep what has been picked).</p>
<p>I don't know whether one could call this truly random sampling, but what are some approaches to this problem? Efficiency is also paramount, as many calls will be made.</p>
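<p>One possible starting point (a greedy sketch; it is not uniform over all valid subsets, and the names are illustrative): shuffle the indices, accept an index only when its value is at least <code>x</code> from every value already picked, then pad from the rejects if needed:</p>

```python
import random

def sample_spread(values, n, x):
    """Pick up to n indices whose values are pairwise >= x apart,
    padding with arbitrary unpicked indices when that is impossible."""
    order = list(range(len(values)))
    random.shuffle(order)  # random visiting order
    picked, rejected = [], []
    for i in order:
        if all(abs(values[i] - values[j]) >= x for j in picked):
            picked.append(i)
        else:
            rejected.append(i)
        if len(picked) == n:
            return picked
    # Not enough spread-out values: top up with leftover indices.
    picked.extend(rejected[: n - len(picked)])
    return picked

l = [0, 0, 0, 1, 1, 2, 3, 4, 5, 5, 5, 5, 6, 6, 7]
idx = sample_spread(l, 4, 2)
print(sorted(l[i] for i in idx))
```

Because of the padding step, this only satisfies the spacing constraint on a best-effort basis, matching the fallback described in the question; a greedy pass may also occasionally pick fewer spread-out values than an exhaustive search would find.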
|
<python>
|
2024-01-18 13:30:47
| 4
| 1,139
|
Gregor Isack
|
77,839,743
| 5,696,601
|
Reproducing a treemap in Plotly from a repository of dash samples
|
<p>I want to reproduce <a href="https://dash-example-index.herokuapp.com/modal-treemap" rel="nofollow noreferrer">a treemap in Plotly from a repository of dash samples</a>.</p>
<p><a href="https://i.sstatic.net/fl2Fu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fl2Fu.png" alt="enter image description here" /></a></p>
<p>The sample code is as follows:</p>
<pre><code>import pandas as pd
from dash import Dash, Input, Output, callback, dcc, html, State
import plotly.express as px
import dash_bootstrap_components as dbc
df = pd.read_table(
"https://raw.githubusercontent.com/plotly/datasets/master/global_super_store_orders.tsv"
)
df["profit_derived"] = df["Profit"].str.replace(",", ".").astype("float")
df["ship_date"] = pd.to_datetime(df["Ship Date"])
# Hierarchical charts (sunburst, treemap, etc.) work only with positive aggregate values
# In this step, we ensure that aggregated values will be positive
df = df.query(expr="profit_derived >= 0")
df = df[["profit_derived", "Segment", "Region", "ship_date"]]
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
app.layout = dbc.Container(
[
html.H4(
"Distribution of profit as per business segment and region",
style={"textAlign": "center"},
className="mb-3",
),
# ------------------------------------------------- #
# Modal
html.Div(
[
dbc.Button("Open modal", id="open", n_clicks=0),
dbc.Modal(
[
dbc.ModalHeader(dbc.ModalTitle("Filters")),
dbc.ModalBody(
[
# Filter within dbc Modal
html.Label("Regions"),
dcc.Dropdown(
id="dynamic_callback_dropdown_region",
options=[
{"label": x, "value": x}
for x in sorted(df["Region"].unique())
],
multi=True,
),
html.Br(),
html.Label("Ship Date"),
dcc.DatePickerRange(
id="my-date-picker-range",
min_date_allowed=min(df["ship_date"]),
max_date_allowed=max(df["ship_date"]),
end_date=max(df["ship_date"]),
start_date=min(df["ship_date"]),
clearable=True,
),
]
),
],
id="modal",
is_open=False,
),
],
className="mb-5",
),
# ---------------------------------------- #
# Tabs
dcc.Tabs(
id="tab",
value="treemap",
children=[
dcc.Tab(label="Treemap", value="treemap"),
dcc.Tab(label="Sunburst", value="sunburst"),
],
),
html.Div(id="tabs-content"),
],
fluid=True,
)
@callback(
Output("tabs-content", "children"),
Input("dynamic_callback_dropdown_region", "value"),
Input("tab", "value"),
Input("my-date-picker-range", "start_date"),
Input("my-date-picker-range", "end_date"),
)
def main_callback_logic(region, tab, start_date, end_date):
dff = df.copy()
if region is not None and len(region) > 0:
dff = dff.query("Region == @region")
if start_date is not None:
dff = dff.query("ship_date > @start_date")
if end_date is not None:
dff = dff.query("ship_date < @end_date")
dff = dff.groupby(by=["Segment", "Region"]).sum().reset_index()
if tab == "treemap":
fig = px.treemap(
dff, path=[px.Constant("all"), "Segment", "Region"], values="profit_derived"
)
else:
fig = px.sunburst(
dff, path=[px.Constant("all"), "Segment", "Region"], values="profit_derived"
)
fig.update_traces(root_color="lightgrey")
fig.update_layout(margin=dict(t=50, l=25, r=25, b=25))
return dcc.Graph(figure=fig)
@callback(
Output("modal", "is_open"),
Input("open", "n_clicks"),
State("modal", "is_open"),
)
def toggle_modal(n1, is_open):
if n1:
return not is_open
return is_open
if __name__ == "__main__":
app.run_server()
</code></pre>
<p>However, when I run the code, it does not display the example correctly.</p>
<p><a href="https://i.sstatic.net/39Rz5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/39Rz5.png" alt="enter image description here" /></a></p>
<p>Before accessing the dashboard on my localhost, the following output is printed to the console:</p>
<pre><code>Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
</code></pre>
<p>When I access the dashboard on my localhost, <a href="https://pastebin.com/NjE1xg2K" rel="nofollow noreferrer">additional output</a> is printed to the console.</p>
<p>How do I reproduce the dash example correctly?</p>
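<p>A possible sketch of the fix (an assumption, since the full traceback is only linked): recent pandas versions raise inside <code>.sum()</code> when non-numeric columns such as <code>ship_date</code> reach the aggregation, which would break the callback before any figure is returned. Restricting the aggregation to the value column avoids that:</p>

```python
import pandas as pd

# Hypothetical stand-in for the dashboard's filtered frame `dff`
dff = pd.DataFrame(
    {
        "Segment": ["Consumer", "Consumer", "Corporate"],
        "Region": ["West", "West", "East"],
        "ship_date": pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-03"]),
        "profit_derived": [10.0, 5.0, 7.0],
    }
)

# Aggregate only the numeric value column so non-numeric columns
# (ship_date) never reach .sum()
agg = dff.groupby(["Segment", "Region"], as_index=False)["profit_derived"].sum()
print(agg)
```

<p>If this is the culprit, swapping the <code>groupby(...).sum().reset_index()</code> line in the callback for this column-restricted version should let both tabs render.</p>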
|
<python><plotly><plotly-dash>
|
2024-01-18 13:25:58
| 1
| 1,023
|
Stücke
|
77,839,589
| 1,612,369
|
How to make Python scripts available anywhere in PowerShell
|
<p>Currently working with Python in Windows 10.</p>
<p>I am trying to set up PowerShell so that Python would find scripts even if I am currently in a different folder. Simply speaking, I would like to run a script as follows</p>
<pre><code>python myscript.py
</code></pre>
<p>instead of</p>
<pre><code>python D:\aaa\bbb\myscript.py
</code></pre>
<p>I have only tried to append the extra folders to environment variable <code>Path</code> in PowerShell</p>
<pre><code>$Env:Path = $Env:Path + ';D:\aaa\bbb'
</code></pre>
<p>and the script itself can be found (I can use TAB to autocomplete the script name), however Python still does not recognise the file; that is, <code>python myscript.py</code> still returns <code>[Error] No such file as myscript.py</code>.</p>
<p>Is there any environment variable that controls the locations where scripts are searched for? <code>PYTHONPATH</code> seems to be for imported modules inside the Python interpreter.</p>
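<p>For context: Python never searches for the script argument itself; <code>python myscript.py</code> is resolved by the OS relative to the current directory only, <code>Path</code> applies to executables, and <code>PYTHONPATH</code> to imports, so neither helps here. One workaround is a tiny launcher that does the lookup itself; this is a sketch using an invented, non-standard search-list convention:</p>

```python
import os
import tempfile

def resolve_script(name, search_path):
    """Return the first existing path for `name` among the directories in
    the os.pathsep-separated `search_path`, or None if it is nowhere."""
    for d in search_path.split(os.pathsep):
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            return candidate
    return None

# Demo with a throwaway directory standing in for D:\aaa\bbb
with tempfile.TemporaryDirectory() as tmp:
    open(os.path.join(tmp, "myscript.py"), "w").close()
    found = resolve_script("myscript.py", tmp)          # hit
    missing = resolve_script("no_such_script.py", tmp)  # miss -> None
    print(found, missing)
```

<p>Alternatively, a small PowerShell function that prepends the folder before invoking <code>python</code> achieves the same without touching Python at all.</p>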
|
<python><powershell><path>
|
2024-01-18 12:57:30
| 1
| 2,691
|
Celdor
|
77,839,540
| 15,283,859
|
Pandas hierarchy/complex conditions to drop duplicates
|
<p>I have a large dataset of locations - there is a sample below.</p>
<p>I need to find and drop duplicates. There are some obvious duplicates, which have the same address, name and borough, but also some almost equal rows, which identify 2 different locations (such as the last 2 rows of the sample below).
To complicate things, most locations have a missing name, so I only have the address and the borough, and the address doesn't always have a number.</p>
<p>The borough is also inconsistent, but I found a way to simplify it (see below).</p>
<pre class="lang-py prettyprint-override"><code>addresses = ['regents street', '21 regent street',
'bishopgate 3', '3 bishopgate', 'bishop gate', 'regent',
'hill park', 'hill park road', '10 hill park road',
'12 hill park', 'south street', 'south street', 'east street', '2 east street', 'cup street', 'bond street',
'80 cobbler road', '88 cobbler road']
name = ['a','a','b','b','b','c','d','e','e','d', np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 'g', 'h']
boroughs = ['royal borough of greenwhich', 'borough of greenwhich',
'royal borough of chelsea', 'chelsea', 'borough of chelsea',
'greenwhich', 'haringey', 'haringey', 'borough of haringey', 'haringey',
'southwark', 'southwark', 'hammersmith', 'borough of hammersmith',
'hackney', 'hackney', 'lambeth', 'lambeth']
df = pd.DataFrame({'address':addresses, 'name':name, 'borough':boroughs})
# to simplify the borough columns i do this
df['borough'] = df['borough'].str.replace('royal|royal borough of|borough of', '', regex=True)
</code></pre>
<p>I thought I could clean up the address too with:
<code>df['address'] = df['address'].str.replace('\d+', '', regex=True)</code>
but then I lose information about different locations on the same street.</p>
<p>Any ideas on how I could do it? (I tried things such as groupby([...]).max() but without success).</p>
<p>P.S. If one address is present only once in different format (e.g. 'east street' and '2 east street') I'm going to assume that they refer to the same location and therefore only one record should be kept.</p>
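<p>One possible heuristic (a sketch only; it will not resolve every case, e.g. <code>'hill park'</code> vs <code>'hill park road'</code>): build a digit-stripped helper key for matching while leaving the original address column untouched, and prefer the numbered variant when a pair collapses to the same key:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {
        "address": ["regent street", "21 regent street", "bishopgate 3",
                    "3 bishopgate", "east street"],
        "name": ["a", "a", "b", "b", "c"],
        "borough": ["greenwhich", "greenwhich", "chelsea", "chelsea",
                    "hammersmith"],
    }
)

# Matching key only: digits stripped, whitespace normalised; the real
# address column stays untouched
key = (
    df["address"]
    .str.replace(r"\d+", "", regex=True)
    .str.strip()
    .str.replace(r"\s+", " ", regex=True)
)
# Longer original address first, so the numbered variant survives the drop
deduped = (
    df.assign(_key=key, _len=df["address"].str.len())
    .sort_values("_len", ascending=False, kind="stable")
    .drop_duplicates(subset=["_key", "name", "borough"])
    .drop(columns=["_key", "_len"])
    .sort_index()
)
print(deduped)
```

<p>The same idea extends to fuzzier cases by swapping the key for something stronger (e.g. sorted word sets, or an edit-distance comparison per borough).</p>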
|
<python><python-3.x><pandas><dataframe><numpy>
|
2024-01-18 12:50:27
| 1
| 895
|
Yolao_21
|
77,839,384
| 1,554,114
|
python win32com with Inventor seems to be missing methods
|
<p>I am trying to translate sample VBA code from Inventor over to Python.<br />
Most things, like modifying parameters, work really well.
But unfortunately, the export to STL is failing.</p>
<p>It seems, that methods of the "STL Exporter Plugin" that should be there are just missing.<br />
In my case: <code>HasSaveCopyAsOptions</code> and <code>SaveCopyAs</code>.
(This even seems to happen for other plugins..)</p>
<p>Error message:<br />
<code>AttributeError: '<win32com.gen_py.Autodesk Inventor Object Library.ApplicationAddIn instance at 0x1940552744240>' object has no attribute 'HasSaveCopyAsOptions'</code></p>
<ul>
<li>Is there something like a "refresh" method for win32com?</li>
<li>Is it due to the instance type being <code>Library.ApplicationAddIn</code> that I only see default methods?</li>
</ul>
<p>I already set a breakpoint before calling <code>SaveCopyAs</code> and tried to <code>dir(oSTLTranslator)</code>.
The resulting list looks quite empty for my expectations.</p>
<pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
import win32com.client
from win32com.client import gencache, Dispatch, constants, DispatchEx
import os
# !!MINIFIED VERSION OF CLASS!!
class InventorAPI:
def loadInventorApp(self):
self.oApp = win32com.client.Dispatch('Inventor.Application')
self.oApp.Visible = True
self.mod = gencache.EnsureModule('{D98A091D-3A0F-4C3E-B36E-61F62068D488}', 0, 1, 0)
self.oApp = self.mod.Application(self.oApp)
# self.oApp.SilentOperation = True
def loadInventorDoc(self):
oDoc = self.oApp.ActiveDocument
self.oDoc = self.mod.PartDocument(oDoc)
def listAllPlugins(self):
no = self.oApp.ApplicationAddIns.Count
print(f"Listing {no} plugins:")
i=0
while (i<no):
i+=1 #Increments first, as index is 1-based
name = self.oApp.ApplicationAddIns.Item(i).ShortDisplayName
id = self.oApp.ApplicationAddIns.Item(i).ClientId
print(f"{i:02}: {id} - {name}")
def saveStl(self, Name):
oSTLTranslator = self.oApp.ApplicationAddIns.ItemById("{533E9A98-FC3B-11D4-8E7E-0010B541CD80}")
if oSTLTranslator is None:
raise( "Could not access STL translator." )
oContext = self.oApp.TransientObjects.CreateTranslationContext()
oOptions = self.oApp.TransientObjects.CreateNameValueMap()
if oSTLTranslator.HasSaveCopyAsOptions(self.oApp.ActiveDocument, oContext, oOptions):
#Set accuracy.
# 0 = High
# 1 = Medium
# 2 = Low
oOptions.SetValue("Resolution", 0)
oContext.Type = kFileBrowseIOMechanism
oData = self.oApp.TransientObjects.CreateDataMedium()
oData.FileName = Name
oSTLTranslator.SaveCopyAs(self.oApp.ActiveDocument, oContext, oOptions, oData)
inv = InventorAPI()
inv.loadInventorApp()
#inv.listAllPlugins()
#exit()
inv.loadInventorDoc()
inv.saveStl("Mount_75mm.stl")
exit()
</code></pre>
<p><strong>UPDATE:</strong><br />
[No longer relevant - Check post history, if interested]</p>
<p><strong>UPDATE 2:</strong><br />
As pointed out by Michael, the issue is indeed with the types!<br />
I created a custom <code>ItemById</code> method to cast to TranslatorAddIn:</p>
<pre><code># Result is of type CLSID (default: ApplicationAddIn)
# The method ItemById is actually a property, but must be used as a method to correctly pass the arguments
import pythoncom
defaultNamedNotOptArg=pythoncom.Empty
LCID = 0x0
def ItemByIdEx(oApplicationAddIns, ClientId=defaultNamedNotOptArg, CLSID='{A0481EEB-2031-11D3-B78D-0060B0F159EF}'):
'Retrieves an ApplicationAddIn object based on the Client Id'
ret = oApplicationAddIns._oleobj_.InvokeTypes(50335746, LCID, 2, (9, 0), ((8, 1),),ClientId
)
if ret is not None:
ret = win32com.client.Dispatch(ret, 'ItemById', CLSID) #<== CHANGED THIS, original code has default ID of ApplicationAddIn
return ret
</code></pre>
<p>With this, my object oSTLTranslator has the missing methods available.
I can now call</p>
<pre><code>#oSTLTranslator = self.oApp.ApplicationAddIns.ItemById("{533E9A98-FC3B-11D4-8E7E-0010B541CD80}")
oSTLTranslator = ItemByIdEx(self.oApp.ApplicationAddIns, "{533E9A98-FC3B-11D4-8E7E-0010B541CD80}", "{6ECCBC87-A50D-11D4-8DE4-0010B541CAA8}")
</code></pre>
<p><strong>UPDATE 3:</strong><br />
Custom method is not even needed!<br />
The purpose of <code>self.mod</code> is to serve as type-caster.<br />
Just do: <code>oSTLTranslator = self.mod.TranslatorAddIn(oSTLTranslator)</code></p>
<p>Final function:</p>
<pre><code>def saveStl(self, Name):
oSTLTranslator = self.oApp.ApplicationAddIns.ItemById("{533E9A98-FC3B-11D4-8E7E-0010B541CD80}")
if oSTLTranslator is None:
raise ValueError("Could not access STL translator." )
oSTLTranslator = self.mod.TranslatorAddIn(oSTLTranslator)
oContext = self.oApp.TransientObjects.CreateTranslationContext()
oOptions = self.oApp.TransientObjects.CreateNameValueMap()
if oSTLTranslator.HasSaveCopyAsOptions(self.oApp.ActiveDocument, oContext, oOptions):
#Set accuracy.
# 2 = High
# 1 = Medium
# 0 = Low
#For completeness - usually no need to set all of these
oOptions.SetValue("Resolution", 0)
oOptions.SetValue("SurfaceDeviation", 5.0)
oOptions.SetValue("NormalDeviation", 1000.0)
oOptions.SetValue("MaxEdgeLength", 100000.0)
oOptions.SetValue("AspectRatio", 2150.0)
oOptions.SetValue("OutputFileType", 0)
oContext.Type = win32com.client.constants.kFileBrowseIOMechanism
oData = self.oApp.TransientObjects.CreateDataMedium()
oData.FileName = Name
oSTLTranslator.SaveCopyAs(self.oApp.ActiveDocument, oContext, oOptions, oData)
</code></pre>
|
<python><win32com><autodesk-inventor>
|
2024-01-18 12:27:10
| 1
| 4,731
|
Nippey
|
77,839,358
| 9,490,769
|
Full height main div in dash
|
<p>I have created a very simple dash application, but I cannot get the main div to stretch to the full page height.</p>
<p>Here is the python code</p>
<pre class="lang-py prettyprint-override"><code>import dash
from dash import html
app = dash.Dash(__name__)
page = html.Div(id='page',
children=[html.H1('hello')],
style={'background-color': 'orange', 'height': '100%', 'width': '100%'},
className='flex-grow-1')
app.layout = html.Div(
page,
className='full-screen-div'
)
app.run_server()
</code></pre>
<p>Here is the css code under <code>assets/custom.css</code></p>
<pre class="lang-css prettyprint-override"><code>body, html {
height: 100%;
width: 100%;
margin: 0;
}
.full-screen-div {
width: 100vw;
height: 100%;
min-height: 100%;
box-sizing: border-box;
overflow-x: hidden;
overflow-y: hidden;
}
</code></pre>
<p>The page looks as shown below - I would like the orange background to stretch to fill the browser window.
I have tried setting height with the viewport height parameter, but that doesn't translate well between screen sizes. I have also tried setting <code>position: fixed</code>, but that gives some overflow when introducing figures.
<a href="https://i.sstatic.net/oHlJd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oHlJd.png" alt="enter image description here" /></a></p>
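<p>A sketch of one common fix (an assumption about your setup): Dash injects its own wrapper divs between <code>body</code> and your layout, so <code>height: 100%</code> breaks unless every one of those ancestors also gets an explicit height. Sizing the page div with viewport units directly sidesteps that chain entirely:</p>

```python
# Inline style for the page Div, mirroring the CSS the layout needs.
# `minHeight: 100vh` stretches to at least the full window height while
# still letting the div grow past one screen without clipping figures,
# unlike a fixed `height: 100%`, which depends on every ancestor having
# an explicit height.
page_style = {
    "backgroundColor": "orange",
    "minHeight": "100vh",
    "width": "100vw",
    "margin": 0,
}
print(page_style["minHeight"])
```

<p>Passing this dict as the <code>style</code> of the page <code>html.Div</code> (note the camelCase keys Dash expects for inline styles) should make the orange background fill the browser window with no custom CSS at all.</p>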
|
<python><html><css><plotly-dash>
|
2024-01-18 12:24:26
| 1
| 3,345
|
oskros
|
77,839,338
| 5,790,653
|
python yaml openstack how to access element of one group
|
<p>I have a yaml file like this:</p>
<pre class="lang-yaml prettyprint-override"><code>clouds:
all:
gr1:
id: '202401'
gr2:
id: '202402'
gr3:
id: '202403'
gr4:
id: '202404'
group1:
gr1:
id: '202401'
group2:
gr2:
id: '202402'
group34:
gr3:
id: '202403'
gr4:
id: '202404'
</code></pre>
<p>This is my Python code:</p>
<pre><code>import openstack
import yaml
with open('test.yaml', 'r') as f:
reg = yaml.safe_load(f)
all_groups = []
for name in reg['clouds']['all']:
all_groups.append(name)
group1 = []
for name in reg['clouds']['group1']:
group1.append(name)
group34 = []
for name in reg['clouds']['group34']:
group34.append(name)
# Initialize and turn on debug logging
openstack.enable_logging(debug=False)
# Initialize connection
for region in all_groups:
conn = openstack.connect(cloud=region) # test.yaml
</code></pre>
<p>This is the error:</p>
<pre class="lang-none prettyprint-override"><code>openstack.exceptions.ConfigException: Cloud gr1 was not found.
</code></pre>
<p>The error is correct because I have no <code>clouds ==> gr1</code>, and it should be like <code>clouds ==> all ==> gr1</code>, but I'm not sure how.</p>
<p>I even tried this (I'm thinking of something like this but not sure the correct way):</p>
<pre><code># Initialize connection
for region in all_groups:
conn = openstack.connect(cloud=['all'][region]) # test.yaml
</code></pre>
<p>But got this error:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: list indices must be integers or slices, not str
</code></pre>
<p>How can I access <code>clouds ==&gt; all ==&gt; gr1</code> in this context with the module I'm using?</p>
<p>I download the yaml file from my website which is like this:</p>
<pre class="lang-yaml prettyprint-override"><code>clouds:
gr1:
id: '202401'
gr2:
id: '202402'
</code></pre>
<p>But I'm going to have file like in the beginning of the question.</p>
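<p>A sketch of the underlying issue: the parsed YAML is just nested dicts, so iterating <code>reg['clouds']['all']</code> yields only the key names, while <code>openstack.connect(cloud=name)</code> looks those names up under the top-level <code>clouds:</code> mapping of its config file. One option (shown with an inline dict standing in for the parsed file) is to flatten a group back to that top level:</p>

```python
# Inline stand-in for yaml.safe_load(f) on the grouped file
reg = {
    "clouds": {
        "all": {"gr1": {"id": "202401"}, "gr2": {"id": "202402"}},
        "group1": {"gr1": {"id": "202401"}},
    }
}

# Flatten one group back to the flat `clouds:` layout openstack expects
flat = {"clouds": dict(reg["clouds"]["all"])}

# Iterating .items() gives both the region name and its nested config
for name, cfg in reg["clouds"]["all"].items():
    print(name, cfg["id"])
```

<p>Writing <code>flat</code> back out as the <code>clouds.yaml</code> that openstack actually reads (or keeping the downloaded flat file and using the groups only to pick which names to connect to) would let <code>connect(cloud='gr1')</code> resolve.</p>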
|
<python><openstack>
|
2024-01-18 12:21:09
| 1
| 4,175
|
Saeed
|
77,838,950
| 988,279
|
Parse URL parameter and create a dictionary
|
<p>I want to have the URL parameters as a "real" dict. How can I achieve that? I don't want to use <code>params.split()</code> multiple times.</p>
<pre><code>from urllib.parse import urlparse
url = 'http://nohost.com?params=depth:1,width:500,size:small'
the_url = urlparse(url)
url_part, params = the_url.path, the_url.query
print(params) # params=depth:1,width:500,size:small
</code></pre>
<p>At the end of the day I want to have a real dictionary from the URL parameters</p>
<pre><code>params = {'depth': 1, 'width': 500, 'size': 'small'}
</code></pre>
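<p>A possible sketch: since the whole list lives in a single <code>params</code> query value, <code>parse_qs</code> can pull it out once and a dict comprehension can do the rest; the int conversion is an assumption based on the example dict:</p>

```python
from urllib.parse import urlparse, parse_qs

url = "http://nohost.com?params=depth:1,width:500,size:small"
# parse_qs returns {'params': ['depth:1,width:500,size:small']}
raw = parse_qs(urlparse(url).query)["params"][0]

def convert(value):
    # int where possible, otherwise the original string (an assumption
    # about the wanted types, based on the example dict)
    return int(value) if value.isdigit() else value

params = {k: convert(v) for k, v in (item.split(":", 1) for item in raw.split(","))}
print(params)  # {'depth': 1, 'width': 500, 'size': 'small'}
```

<p>Only two splits run per item, and neither is repeated: one pass over commas, one bounded split on the first colon.</p>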
|
<python>
|
2024-01-18 11:14:05
| 1
| 522
|
saromba
|
77,838,888
| 231,934
|
Overriding nested values in polyfactory with pydantic models
|
<p>Is it possible to provide values for complex types generated by polyfactory?</p>
<p>I use pydantic for models and polyfactory's ModelFactory. I noticed that the <code>build</code> method supports kwargs that can provide values for the constructed model, but I didn't figure out if it's possible to provide values for nested fields.</p>
<p>For example, if I have model A, which is also the type of field 'a' in model B, is it possible to construct B via polyfactory and provide some values for field 'a'?</p>
<p>I tried to call build with</p>
<pre><code>MyFactory.build(**{"a": {"nested_value": "b"}})
</code></pre>
<p>but it does not work.</p>
<p>Is it possible to override nested values?</p>
|
<python><pydantic><polyfactory>
|
2024-01-18 11:05:12
| 1
| 3,842
|
Martin Macak
|
77,838,496
| 2,913,106
|
How to manually set axis offset with custom tick labels?
|
<p>Consider the plot below. Both axes have custom labels, to label the rows and columns in an <code>imshow</code>-plot. The problem now is that the y-axis has very large values. Ideally I'd like to manually set an axis offset (i.e. on top of the axis something like <code>1e5</code>) such that the ticks only show a smaller number. How can I achieve that?</p>
<p>There was a solution provided <a href="https://stackoverflow.com/a/6654046/2913106">here</a> but it does <em>not</em> work in this case, as we do not have a <code>ScalarFormatter</code> due to the custom labels:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
s = 10000000
x, y = [1, 1, 2, 3, 5], [2*s, 3*s, 5*s, 7*s]
x, y = np.meshgrid(x, y)
z = np.random.rand(x.shape[0], x.shape[1])
plt.imshow(z)
plt.gca().set_yticks(range(len(y[..., 0])))
plt.gca().set_xticks(range(len(x[0, ...])))
plt.gca().set_yticklabels(y[..., 0])
plt.gca().set_xticklabels(x[0, ...])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/N6a5f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N6a5f.png" alt="enter image description here" /></a></p>
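<p>One workaround sketch: since custom tick labels bypass <code>ScalarFormatter</code> entirely, the offset can be applied by hand, scaling the label text and annotating the factor at the top of the axis (the annotation coordinates below are an assumption):</p>

```python
s = 10_000_000
y = [2 * s, 3 * s, 5 * s, 7 * s]

offset = 1e7  # the factor to display at the top of the axis
labels = [f"{v / offset:g}" for v in y]  # short labels instead of 8 digits
print(labels)

# Applied to the plot, this would be (matplotlib calls, not executed here):
# ax.set_yticklabels(labels)
# ax.annotate("1e7", xy=(0, 1.02), xycoords="axes fraction",
#             annotation_clip=False)
```

<p>The <code>:g</code> format drops trailing zeros, so clean multiples stay clean while non-integer ratios still render sensibly.</p>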
|
<python><matplotlib><axis><xticks>
|
2024-01-18 10:04:42
| 2
| 11,728
|
flawr
|
77,838,382
| 936,269
|
How to correctly get file path using QUrl on windows
|
<p>So I need to allow users to select a file path via a Qt application.
For this I use <a href="https://doc.qt.io/qtforpython-6/PySide6/QtWidgets/QFileDialog.html#PySide6.QtWidgets.PySide6.QtWidgets.QFileDialog.getOpenFileUrl" rel="nofollow noreferrer">getOpenFileUrl</a>.</p>
<p>Then I take the resulting <a href="https://doc.qt.io/qtforpython-6/PySide6/QtCore/QUrl.html#PySide6.QtCore.PySide6.QtCore.QUrl.path" rel="nofollow noreferrer">QUrl</a> and ask for it as <code>path</code>, which returns a string.
However, when I convert this to a Python-native <code>pathlib.Path</code>, I find that the path does not exist. The path it results in is: <code>\C:\Users\LNI\Desktop\test_qt_dialog_picker\main.py</code></p>
<p>See code below:</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QApplication, QWidget, QFileDialog
import sys
import pathlib
class MainWindow(QWidget):
def __init__(self, parent: QWidget) -> None:
super().__init__(parent)
selection_result = QFileDialog.getOpenFileUrl(self, 'Select a File')
path = selection_result[0].path()
print(f'Path as "path": {path}')
path = pathlib.Path(path)
print(f'Path as pathlib path: {path}')
print(f'Path exists: {path.exists()}')
if __name__ == '__main__':
app = QApplication(sys.argv)
main_window = MainWindow(None)
main_window.show()
sys.exit(app.exec())
</code></pre>
<p>Now if I instead use the QUrl <code>toString()</code> I get a different problem, because this is prepended with <code>file:///</code> (which makes sense) and is translated by pathlib to <code>file:\C:\Users\LNI\Desktop\test_qt_dialog_picker\main.py</code>; see the code below:</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QApplication, QWidget, QFileDialog
import sys
import pathlib
class MainWindow(QWidget):
def __init__(self, parent: QWidget) -> None:
super().__init__(parent)
selection_result = QFileDialog.getOpenFileUrl(self, 'Select a File')
path = selection_result[0].toString()
print(f'Path as str: {path}')
path = pathlib.Path(path)
print(f'Path as pathlib path: {path}')
print(f'Path exists: {path.exists()}')
if __name__ == '__main__':
app = QApplication(sys.argv)
main_window = MainWindow(None)
main_window.show()
sys.exit(app.exec())
</code></pre>
<p>Now I can fix this by adding an <code>if</code>, although this causes issues on Linux, which I can handle easily:</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QApplication, QWidget, QFileDialog
import sys
import pathlib
class MainWindow(QWidget):
def __init__(self, parent: QWidget) -> None:
super().__init__(parent)
selection_result = QFileDialog.getOpenFileUrl(self, 'Select a File')
path = selection_result[0].path()
print(f'Path as "path": {path}')
if path.startswith('/') and len(path) > 1:
path = path[1:]
path = pathlib.Path(path)
print(f'Path as pathlib path: {path}')
print(f'Path exists: {path.exists()}')
if __name__ == '__main__':
app = QApplication(sys.argv)
main_window = MainWindow(None)
main_window.show()
sys.exit(app.exec())
</code></pre>
<p>But is this really the best option on Windows? Is there a better alternative, or is this the only option? I am also wondering if the <code>/</code> at the beginning of the path on Windows is a bug in Qt.</p>
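<p>For reference, Qt's intended API for this is <code>QUrl.toLocalFile()</code>, which strips the <code>file://</code> scheme and the leading slash before a Windows drive letter in one call; <code>QFileDialog.getOpenFileName</code> also returns a plain string path if URLs aren't needed at all. As a pure-Python illustration of the same normalisation (a sketch that ignores UNC hosts and other edge cases):</p>

```python
from urllib.parse import unquote, urlparse

def url_to_path(url: str) -> str:
    """Roughly what QUrl.toLocalFile() does for file:// URLs (a sketch)."""
    path = unquote(urlparse(url).path)
    # A Windows drive path arrives as '/C:/...': drop the leading slash
    if len(path) >= 3 and path[0] == "/" and path[2] == ":":
        path = path[1:]
    return path

print(url_to_path("file:///C:/Users/LNI/Desktop/main.py"))
print(url_to_path("file:///home/lni/main.py"))
```

<p>So rather than slicing the string manually, calling <code>selection_result[0].toLocalFile()</code> should give a path that <code>pathlib.Path</code> accepts on both platforms.</p>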
|
<python><qt><pyqt6>
|
2024-01-18 09:44:45
| 0
| 2,208
|
Lars Kakavandi-Nielsen
|
77,838,272
| 9,957,594
|
Polars - get hour from Time column
|
<p>I have a Polars df with a column 'Checkout Time' in a string format, e.g '17:54'.</p>
<p>I'm converting to a time format with:
<code>df = df.with_columns(pl.col('Checkout Time').str.to_time('%H:%M'))</code></p>
<p>How would I then get the actual hour of the time as an integer?</p>
<p>I could do it with map_rows but is there a better way than this?</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
df.select('Checkout Time').map_rows(lambda x: x[0].hour)
).rename({'map': 'Hour'})
</code></pre>
<p>But this seems annoying; is there a way to do it directly in <code>with_columns</code>?</p>
<p>Some example data:</p>
<pre class="lang-py prettyprint-override"><code>data = np.array(['06:30', '17:45', '18:32'])
df = pl.from_numpy(data, schema=["Checkout Time"])
df = df.with_columns(pl.col('Checkout Time').str.to_time('%H:%M'))
</code></pre>
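<p>A sketch of the idea: Polars exposes temporal accessors under the <code>.dt</code> namespace, so <code>pl.col('Checkout Time').dt.hour()</code> inside <code>with_columns</code> should give the hour as an expression, without <code>map_rows</code> (worth verifying against your Polars version). The extraction itself, shown with the stdlib for clarity:</p>

```python
from datetime import datetime

times = ["06:30", "17:45", "18:32"]
# Parse each HH:MM string and take the hour component as an int
hours = [datetime.strptime(t, "%H:%M").hour for t in times]
print(hours)

# The equivalent Polars expression (not executed here) would be:
# df = df.with_columns(pl.col("Checkout Time").dt.hour().alias("Hour"))
```

<p>Staying inside the expression API also keeps the operation vectorised, unlike per-row lambdas.</p>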
|
<python><dataframe><python-polars>
|
2024-01-18 09:27:03
| 1
| 350
|
tcotts
|
77,838,245
| 8,444,568
|
Is it safe to add an inheritance relationship for a pickled object?
|
<p>I have some old pickle files that store object <code>d</code>:</p>
<pre><code>class D():
    def __init__(self, val):
        self._val = val
    def val(self):
        return self._val
d = D(42)
pickle.dump(d, file_store_obj_d)
</code></pre>
<p>Now, I want to do some refactor and add a base interface class for <code>D</code>:</p>
<pre><code>class B():
def val2(self): # new interface
return self.val()+1
def val(self):
pass
class D(B):
def __init__(self, val):
self._val = val
def val(self):
return self._val
d = pickle.load(file_store_obj_d) # is this safe and keeps backward compatibility?
</code></pre>
<p>I want to know: is the code above safe? (<code>B</code> is an interface class having no new attributes; only some new methods are added.)</p>
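<p>A quick self-contained check (a sketch of the scenario, not a general guarantee): pickle stores only the class's module-qualified name plus the instance <code>__dict__</code>, so a method-only base class added later is picked up transparently at load time:</p>

```python
import pickle

class D:                      # the old class layout
    def __init__(self, val):
        self._val = val

blob = pickle.dumps(D(3))     # stands in for the old pickle file

class B:                      # new method-only interface
    def val2(self):
        return self.val() + 1

class D(B):                   # same module-level name, now with a base
    def __init__(self, val):
        self._val = val
    def val(self):
        return self._val

d = pickle.loads(blob)        # resolved against the *current* D
print(d.val(), d.val2())      # 3 4
```

<p>The main caveats would be a base class that later gains <code>__slots__</code>, required attributes, or a custom <code>__setstate__</code>/<code>__reduce__</code>; a pure-method interface like <code>B</code> avoids all of those.</p>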
|
<python><pickle>
|
2024-01-18 09:23:45
| 0
| 893
|
konchy
|
77,838,169
| 11,702,196
|
Installing opencv in docker and run into compilation errors?
|
<p>I've been trying to build this repo and install it inside docker. <a href="https://github.com/Silvicek/baselines" rel="nofollow noreferrer">https://github.com/Silvicek/baselines</a></p>
<p>I am installing it in the Dockerfile with <code>pip3 install -e .</code>, and there is a compilation problem in opencv.</p>
<p>My dockerfile:</p>
<pre><code>FROM tensorflow/tensorflow:1.14.0-gpu-py3
ENV TZ="America/Los_Angeles"
ARG DEBIAN_FRONTEND=noninteractive
RUN rm /etc/apt/sources.list.d/cuda.list && \
rm /etc/apt/sources.list.d/nvidia-ml.list && \
apt-key del 7fa2af80 && \
apt-get update && apt-get install -y --no-install-recommends wget && \
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb && \
dpkg -i cuda-keyring_1.0-1_all.deb && \
apt-get update
RUN apt-get install -y git python3-opencv
RUN mkdir /build
COPY baselines/ /build/baselines
WORKDIR /build/baselines
RUN ls -lrt /build
RUN ls -lrt /build/baselines
RUN pip3 install --upgrade pip setuptools wheel
RUN pip3 install -e . -vvv
</code></pre>
<p>docker build command:</p>
<pre class="lang-bash prettyprint-override"><code>docker build -f dockerfile --progress=plain -t dist_dqn .
</code></pre>
<p>It fails at the last step. There are 2 chunks of error outputs.</p>
<p>Chunk 1:</p>
<pre><code>#14 1702.8 [100%] Generate files for Python bindings and documentation
#14 1704.0 /tmp/pip-install-jdnrgs2i/opencv-python_a75af04f83144d719ab5328e2c3bcd30/opencv/modules/
python/src2/typing_stubs_generator.py:54: UserWarning: Typing stubs generation has failed.
#14 1704.0 Traceback (most recent call last):
#14 1704.0 File "/tmp/pip-install-jdnrgs2i/opencv-python_a75af04f83144d719ab5328e2c3bcd30/opencv/
modules/python/src2/typing_stubs_generation/nodes/type_node.py", line 291, in resolve
#14 1704.0 self.value.resolve(root)
#14 1704.0 File "/tmp/pip-install-jdnrgs2i/opencv-python_a75af04f83144d719ab5328e2c3bcd30/opencv/
modules/python/src2/typing_stubs_generation/nodes/type_node.py", line 628, in resolve
#14 1704.0 self.full_typename, errors
#14 1704.0 typing_stubs_generation.nodes.type_node.TypeResolutionError: Failed to resolve one of "_
typing.Union[cv2.typing.Scalar, GMat, GOpaqueT, GArrayT]" items. Errors: ['Failed to resolve "GMat" e
xposed as "GMat"', 'Failed to resolve "GOpaqueT" exposed as "GOpaqueT"', 'Failed to resolve "GArrayT"
exposed as "GArrayT"']
#14 1704.0
#14 1704.0 The above exception was the direct cause of the following exception:
#14 1704.0
#14 1704.0 Traceback (most recent call last):
#14 1704.0 File "/tmp/pip-install-jdnrgs2i/opencv-python_a75af04f83144d719ab5328e2c3bcd30/opencv/
modules/python/src2/typing_stubs_generator.py", line 49, in wrapped_func
#14 1704.0 ret_type = func(*args, **kwargs)
#14 1704.0 File "/tmp/pip-install-jdnrgs2i/opencv-python_a75af04f83144d719ab5328e2c3bcd30/opencv/
modules/python/src2/typing_stubs_generator.py", line 148, in _generate
#14 1704.0 generate_typing_stubs(self.cv_root, output_path)
#14 1704.0 File "/tmp/pip-install-jdnrgs2i/opencv-python_a75af04f83144d719ab5328e2c3bcd30/opencv/
modules/python/src2/typing_stubs_generation/generation.py", line 91, in generate_typing_stubs
#14 1704.0 _generate_typing_module(root, output_path)
#14 1704.0 File "/tmp/pip-install-jdnrgs2i/opencv-python_a75af04f83144d719ab5328e2c3bcd30/opencv/
modules/python/src2/typing_stubs_generation/generation.py", line 794, in _generate_typing_module
#14 1704.0 node.resolve(root)
#14 1704.0 File "/tmp/pip-install-jdnrgs2i/opencv-python_a75af04f83144d719ab5328e2c3bcd30/opencv/
modules/python/src2/typing_stubs_generation/nodes/type_node.py", line 297, in resolve
#14 1704.0 ) from e
#14 1704.0 typing_stubs_generation.nodes.type_node.TypeResolutionError: Failed to resolve alias "GP
rotoArg" exposed as "GProtoArg"
#14 1704.0
#14 1704.0 traceback.format_exc()
#14 1704.0 Note: Class cv::Feature2D has more than 1 base class (not supported by Python C extensio
ns)
#14 1704.0 Bases: cv::Algorithm, cv::class, cv::Feature2D, cv::Algorithm
#14 1704.0 Only the first base class will be used
#14 1704.0 Note: Class cv::detail::GraphCutSeamFinder has more than 1 base class (not supported by
Python C extensions)
#14 1704.0 Bases: cv::detail::GraphCutSeamFinderBase, cv::detail::SeamFinder
#14 1704.0 Only the first base class will be used
#14 1704.0 [100%] Built target gen_opencv_python_source
#14 1704.1 [100%] Building CXX object modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2.cpp
.o
#14 1779.0 [100%] Building CXX object modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2_uti
l.cpp.o
#14 1780.1 [100%] Building CXX object modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2_num
py.cpp.o
#14 1781.1 [100%] Building CXX object modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2_con
vert.cpp.o
#14 1782.7 [100%] Building CXX object modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2_hig
hgui.cpp.o
#14 1783.9 [100%] Linking CXX shared module ../../lib/python3/cv2.cpython-36m-x86_64-linux-gnu.so
#14 1787.9 [100%] Built target opencv_python3
</code></pre>
<p>Chunk 2:</p>
<pre><code>#14 1788.3 Copying files from CMake output
#14 1788.3 Traceback (most recent call last):
#14 1788.3 File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/in_process/_in_process
.py", line 363, in <module>
#14 1788.3 main()
#14 1788.3 File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/in_process/_in_process
.py", line 345, in main
#14 1788.3 json_out['return_val'] = hook(**hook_input['kwargs'])
#14 1788.3 File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/in_process/_in_process
.py", line 262, in build_wheel
#14 1788.3 metadata_directory)
#14 1788.3 File "/usr/local/lib/python3.6/dist-packages/setuptools/build_meta.py", line 231, in b
uild_wheel
#14 1788.3 wheel_directory, config_settings)
#14 1788.3 File "/usr/local/lib/python3.6/dist-packages/setuptools/build_meta.py", line 215, in _
build_with_temp_dir
#14 1788.3 self.run_setup()
#14 1788.3 File "/usr/local/lib/python3.6/dist-packages/setuptools/build_meta.py", line 268, in r
un_setup
#14 1788.3 self).run_setup(setup_script=setup_script)
#14 1788.3 File "/usr/local/lib/python3.6/dist-packages/setuptools/build_meta.py", line 158, in r
un_setup
#14 1788.3 exec(compile(code, __file__, 'exec'), locals())
#14 1788.3 File "setup.py", line 537, in <module>
#14 1788.3 main()
#14 1788.3 File "setup.py", line 310, in main
#14 1788.3 cmake_source_dir=cmake_source_dir,
#14 1788.3 File "/tmp/pip-build-env-9wbzljs3/overlay/lib/python3.6/site-packages/skbuild/setuptoo
ls_wrap.py", line 683, in setup
#14 1788.3 cmake_install_dir,
#14 1788.3 File "setup.py", line 450, in _classify_installed_files_override
#14 1788.3 raise Exception("Not found: '%s'" % relpath_re)
#14 1788.3 Exception: Not found: 'python/cv2/python-3/cv2.*'
#14 1788.3 Building wheel for opencv-python (pyproject.toml): finished with status 'error'
#14 1788.3 ERROR: Failed building wheel for opencv-python
#14 1788.3 Created temporary directory: /tmp/pip-wheel-zsi8p8ua
#14 1788.3 Building wheel for future (setup.py): started
#14 1788.3 Destination directory: /tmp/pip-wheel-zsi8p8ua
</code></pre>
|
<python><c++><docker><opencv><ubuntu>
|
2024-01-18 09:11:26
| 1
| 391
|
Yv Zuo
|
77,837,914
| 9,743,391
|
Snap to the nearest or previous datetime point value every interval
|
<p>I am hoping to downsample the dataset, which has uneven sub-hour intervals, to 6-hourly data with fixed time points of 00H, 06H, 12H and 18H. If data at these time points exist, keep them. If not, use the observation from the nearest previous data point <strong>without data manipulation</strong>, keeping the original records.</p>
<pre><code>import pandas as pd
data = pd.DataFrame({'dt': ['2022-10-29 18:00:00', '2022-10-30 00:00:00',
'2022-10-30 06:00:00', '2022-10-30 12:00:00',
'2022-10-30 18:00:00', '2022-10-31 00:00:00',
'2022-10-31 06:00:00', '2022-10-31 12:00:00',
'2022-10-31 15:00:00', '2022-10-31 15:30:00',
'2022-10-31 16:00:00', '2022-10-31 16:30:00',
'2022-10-31 17:00:00', '2022-10-31 17:30:00',
'2022-10-31 18:00:00', '2022-10-31 18:30:00',
'2022-10-31 19:00:00', '2022-10-31 19:30:00',
'2022-10-31 20:00:00', '2022-10-31 20:30:00',
'2022-10-31 21:00:00', '2022-10-31 21:30:00',
'2022-10-31 22:00:00', '2022-10-31 22:30:00',
'2022-10-31 23:00:00', '2022-10-31 23:30:00',
'2022-11-01 00:00:00', '2022-11-01 00:30:00',
'2022-11-01 01:00:00', '2022-11-01 01:30:00',
'2022-11-01 02:00:00', '2022-11-01 02:30:00',
'2022-11-01 03:00:00', '2022-11-01 03:30:00',
'2022-11-01 04:00:00', '2022-11-01 04:30:00',
'2022-11-01 05:00:00', '2022-11-01 05:30:00',
'2022-11-01 06:00:00', '2022-11-01 06:30:00',
'2022-11-01 07:00:00', '2022-11-01 07:30:00',
'2022-11-01 08:00:00', '2022-11-01 08:30:00',
'2022-11-01 09:00:00', '2022-11-01 09:30:00',
'2022-11-01 10:00:00', '2022-11-01 10:30:00',
'2022-11-01 11:00:00', '2022-11-01 11:30:00'],
'Value': [ 8786.8, 8789.5
, 8787.7, 8789.2, 8789.4, 8790.3
, 8789.6, 8790.5, 8789.7, 8790.6
, 8790.5, 8790.5, 8790.4, 8790.4
, 8790.3, 8790.2, 8790.1, 8790.0
, 8789.9, 8789.8, 8789.7, 8789.5
, 8789.5, 8789.3, 8789.2, 8789.6
, 8789.8, 8789.8, 8789.7, 8789.6
, 8789.7, 8789.5, 8789.5, 8789.3
, 8789.2, 8789.6, 8789.8, 8789.8
, 8789.7, 8789.6, 8789.7, 8789.6
, 8789.5, 8789.3, 8789.2, 8789.6
, 8789.8, 8789.8, 8789.7, 8789.6]})
data.index = pd.DatetimeIndex(data['dt'])
</code></pre>
<p>But with <code>data.resample('6H').last()</code>, strangely I get one data point at 17:30 instead of 18:00 starting at 2022-10-31:</p>
<p><a href="https://i.sstatic.net/HHVPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HHVPN.png" alt="enter image description here" /></a></p>
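<p>A sketch of what is happening and one way around it: <code>resample('6H').last()</code> labels each bin by its left edge, so the 18:00 observation falls into the <em>next</em> bin [18:00, 24:00) and the last record inside [12:00, 18:00) is 17:30. To snap to fixed grid points while keeping only original records, <code>reindex</code> with forward-fill does exactly "exact value if present, else nearest previous":</p>

```python
import pandas as pd

# Two observations: one on the grid (12:00) and one just before 18:00
s = pd.Series(
    [1.0, 2.0],
    index=pd.to_datetime(["2022-10-31 12:00", "2022-10-31 17:30"]),
)

# Fixed 6-hourly grid; ffill picks the exact row when it exists,
# otherwise the nearest previous observation -- values are never altered
grid = pd.date_range("2022-10-31 12:00", "2022-10-31 18:00", freq="6H")
snapped = s.reindex(grid, method="ffill")
print(snapped)
```

<p>Applied to the question's frame, something like <code>data.reindex(pd.date_range(data.index[0].ceil('6H'), data.index[-1], freq='6H'), method='ffill')</code> yields one original row per 00/06/12/18 point.</p>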
|
<python><pandas><dataframe>
|
2024-01-18 08:28:06
| 2
| 626
|
Jane
|
77,837,887
| 1,220,538
|
How to run Py_Initialize in embedded python with virtualenv using C API?
|
<p>In my C++ application I am using an embedded Python interpreter via the stable C API to execute Python scripts. It works well, but I want to extend this functionality so I can configure a virtualenv path and use that instead of the system's Python packages.</p>
<p>What I gathered is that I have to set some variables before calling Py_Initialize to make it use the virtualenv. I tried a lot of combinations of calling Py_SetPythonHome, Py_SetPath and Py_SetProgramName with specific paths and setting the environment variables PATH, VIRTUAL_ENV and PYTHONPATH, but none of the combinations seem to work.</p>
<p>In my last attempt I appended the virtualenv's bin directory to the PATH environment variable, added the VIRTUAL_ENV environment variable with the virtualenv path, set the PYTHONPATH environment variable to the list of subdirectories of the virtualenv and called Py_SetPythonHome with the virtualenv path:</p>
<pre><code>std::string old_path = utils::Environment::getEnvironmentVariable("PATH").value();
utils::Environment::setEnvironmentVariable("PATH", ((virtualenv_path_ / "bin").string() + ":" + old_path).c_str());
utils::Environment::setEnvironmentVariable("VIRTUAL_ENV", virtualenv_path_.string().c_str());
auto lib_path = virtualenv_path_ / "lib" ;
std::string python_dir_name;
for (auto const& dir_entry : std::filesystem::directory_iterator{lib_path}) {
if (minifi::utils::string::startsWith(dir_entry.path().filename().string(), "python")) {
python_dir_name = dir_entry.path().filename().string();
break;
}
}
static const auto package_path = lib_path / python_dir_name / "site-packages";
static const auto python_dir_path = lib_path / python_dir_name;
static size_t virtualenv_path_size = virtualenv_path_.string().size();
Py_SetPythonHome(Py_DecodeLocale(virtualenv_path_.c_str(), &virtualenv_path_size));
utils::Environment::setEnvironmentVariable("PYTHONPATH", (package_path.string() + "/:" + lib_path.string() + "/").c_str());
</code></pre>
<p>After starting up the application it seems that I have the right directories set, but I still get an error (the virtualenv path is /home/user/pythonrunnerapp/pyenv):</p>
<pre><code>Python path configuration:
PYTHONHOME = '/home/user/pythonrunnerapp/pyenv'
PYTHONPATH = '/home/user/pythonrunnerapp/pyenv/lib/python3.10/site-packages/:/home/user/pythonrunnerapp/pyenv/lib/'
program name = 'python3'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = '/home/user/pythonrunnerapp/pyenv/bin/python3'
sys.base_prefix = '/home/user/pythonrunnerapp/pyenv'
sys.base_exec_prefix = '/home/user/pythonrunnerapp/pyenv'
sys.platlibdir = 'lib'
sys.executable = '/home/user/pythonrunnerapp/pyenv/bin/python3'
sys.prefix = '/home/user/pythonrunnerapp/pyenv'
sys.exec_prefix = '/home/user/pythonrunnerapp/pyenv'
sys.path = [
'/home/user/pythonrunnerapp/pyenv/lib/python3.10/site-packages/',
'/home/user/pythonrunnerapp/pyenv/lib/',
'/home/user/pythonrunnerapp/pyenv/lib/python310.zip',
'/home/user/pythonrunnerapp/pyenv/lib/python3.10',
'/home/user/pythonrunnerapp/pyenv/lib/python3.10/lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x00007f33a4b50780 (most recent call first):
<no Python frame>
</code></pre>
<p>If I don't do any of this, but instead activate the environment before running the application and then execute it from that shell, it manages to use the virtual environment. However, I would like to be able to activate it from the application when needed.
What is the proper way to use an embedded Python in a virtualenv here without the need to activate it beforehand?</p>
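<p>A likely cause of the <code>No module named 'encodings'</code> failure: a virtualenv contains no copy of the standard library, so pointing <code>PYTHONHOME</code> at the venv makes the interpreter look for the stdlib where it does not exist. Venv resolution instead works through <code>pyvenv.cfg</code>, whose <code>home</code> key points back at the base interpreter. A small Python sketch (illustrating the layout, not the C++ fix itself):</p>

```python
import os
import sys
import tempfile
import venv

# Create a throwaway venv and inspect its pyvenv.cfg.
d = tempfile.mkdtemp()
venv.create(d, with_pip=False)
cfg = open(os.path.join(d, "pyvenv.cfg")).read()
print(cfg)
# The 'home' key points back at the BASE interpreter's directory --
# that is where the stdlib (including the 'encodings' package) lives.
# A venv's lib/ holds only site-packages, which is why
# PYTHONHOME=<venv> cannot find 'encodings'.
```

<p>With the <code>PyConfig</code> initialization API (Python 3.8+), setting <code>config.executable</code> (or the program name) to <code>&lt;venv&gt;/bin/python</code> and leaving <code>home</code> alone should let the normal <code>pyvenv.cfg</code> discovery pick up the venv's site-packages; treat that as a direction to try rather than a confirmed recipe.</p>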
|
<python><c++><python-3.x><virtualenv>
|
2024-01-18 08:23:25
| 0
| 329
|
Lord Gamez
|
77,837,807
| 15,063,526
|
How to use a function from one Jupyter notebook in another Jupyter notebook?
|
<p>There are two python notebooks:</p>
<ul>
<li><code>trading.ipynb</code> (1st)</li>
<li><code>monitorTrades.ipynb</code> (2nd)</li>
</ul>
<p>While the execution of the main block is in the first notebook, a function has to be executed in the second notebook (at the same time). This function must have 2 arguments (which are calculated during the execution of the first notebook).</p>
<p>There are multiple functions in <code>monitorTrades.ipynb</code>, but I only want to run the function <code>checkFunc()</code>.
This function has 4 arguments, 2 of which come from <code>monitorTrades.ipynb</code> and the other 2 are calculated during the execution of <code>trading.ipynb</code>.</p>
<p>The function must execute in <code>monitorTrades.ipynb</code>.</p>
<p>How do I implement this?</p>
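<p>Separate notebooks run in separate kernels, so the usual pattern is to move shared functions into a plain <code>.py</code> module that both notebooks import (running something in the other notebook's kernel "at the same time" needs an external channel such as a file, a queue, or papermill). A sketch; the file name and placeholder body below are my assumptions:</p>

```python
import importlib.util
import os
import tempfile

# Simulate a shared module that both notebooks would import.
shared_src = """
def checkFunc(a, b, c, d):
    # placeholder body -- the real logic lives in monitorTrades
    return a + b + c + d
"""
path = os.path.join(tempfile.mkdtemp(), "shared.py")
with open(path, "w") as f:
    f.write(shared_src)

# In each notebook you would simply write `from shared import checkFunc`
# (with shared.py next to the notebooks). Loading explicitly here only
# because the file is temporary:
spec = importlib.util.spec_from_file_location("shared", path)
shared = importlib.util.module_from_spec(spec)
spec.loader.exec_module(shared)

# trading.ipynb computes two values and passes them in alongside the
# two values that belong to monitorTrades:
result = shared.checkFunc(1, 2, 3, 4)
print(result)  # → 10
```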
|
<python><jupyter-notebook><jupyter>
|
2024-01-18 08:09:29
| 1
| 341
|
Sanchay Kasturey
|
77,837,732
| 3,555,115
|
Compare Timestamps from columns and remove entries based on condition
|
<p>I have a dataframe</p>
<pre><code>df1 =
Instance sort_begin cop_begin ded_begin
a 04:19:07 04:31:56 04:32:32
a 04:19:07 02:34:22 02:34:22
f 05:19:07 02:31:26 04:32:32
f 04:19:17 04:35:56 04:37:35
</code></pre>
<p>The columns are stages in a workflow with timestamps in H:M:S format, where cop_begin should begin only after sort_begin, and ded_begin only after cop_begin.
However, there are other rows with wrong values, where cop_begin is before sort_begin or ded_begin is before cop_begin.</p>
<p>How can we remove such entries from the dataframe?</p>
<pre><code> Output df =
Instance sort_begin cop_begin ded_begin
a 04:19:07 04:31:56 04:32:32
f 04:19:17 04:35:56 04:37:35
</code></pre>
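<p>A sketch of one way (data copied from the question; whether equal timestamps count as "after" is a judgment call, so non-decreasing is used here): convert the H:M:S strings to timedeltas and keep only ordered rows.</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "Instance":   ["a", "a", "f", "f"],
    "sort_begin": ["04:19:07", "04:19:07", "05:19:07", "04:19:17"],
    "cop_begin":  ["04:31:56", "02:34:22", "02:31:26", "04:35:56"],
    "ded_begin":  ["04:32:32", "02:34:22", "04:32:32", "04:37:35"],
})

# Convert the H:M:S strings to timedeltas so they compare chronologically.
ts = df1[["sort_begin", "cop_begin", "ded_begin"]].apply(pd.to_timedelta)

# Keep only rows where each stage starts no earlier than the previous one.
mask = (ts["sort_begin"] <= ts["cop_begin"]) & (ts["cop_begin"] <= ts["ded_begin"])
out = df1[mask]
print(out)
```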
|
<python><pandas><dataframe>
|
2024-01-18 07:54:23
| 1
| 750
|
user3555115
|
77,837,695
| 3,618,999
|
Retry post request Mechanism in Backend in python flask
|
<p>Context: I have one API, inside which 3 different external POST APIs are called. The behaviour should be: only if all 3 APIs give a success response should the overall API succeed, and if even 1 fails, we roll back the POST calls. Now the issue is with one external API that has a very high response time; P98 is 5 sec.</p>
<p>Our service is in Flask, and we have a 10-second timeout on the external <code>requests.post</code> call.</p>
<p>Now we are building a solution where we have to do the retry in the backend API itself. I have 2 solutions; do both consume the same server resources?</p>
<ol>
<li>Retry 3 times on timeout. (3* 10 sec = 30sec)</li>
<li>Increase the timeout to 30 sec on the <code>requests.post</code> call</li>
</ol>
<p><strong>Do both of the above approaches consume the same server resources?</strong></p>
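<p>Roughly, a synchronous worker is blocked for up to ~30 s in either case, so both options tie up about the same server resources; the practical difference is that retries issue fresh requests (helpful for transient failures, risky for non-idempotent POSTs). A generic sketch, where <code>do_post</code> stands in for the real <code>requests.post</code> call:</p>

```python
import time

def post_with_retry(do_post, retries=3, timeout=10, backoff=0.0):
    """Call do_post(timeout=...) up to `retries` times, re-raising the last timeout."""
    last_err = None
    for attempt in range(retries):
        try:
            return do_post(timeout=timeout)
        except TimeoutError as err:  # requests.exceptions.Timeout in real code
            last_err = err
            if backoff:
                time.sleep(backoff * (attempt + 1))
    raise last_err

# Demo stand-in: fail twice, then succeed.
calls = {"n": 0}
def flaky(timeout):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("timed out")
    return "ok"

print(post_with_retry(flaky))  # → ok (after 3 attempts)
```

<p>Note that either way the rollback logic must still run when the final attempt fails.</p>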
|
<python><flask><design-patterns><architecture><system-design>
|
2024-01-18 07:47:57
| 1
| 579
|
ratnesh
|
77,837,585
| 3,314,925
|
Correct way to call helper function in a class
|
<p>Is there a 'correct' way to call a helper function when using an instance method in a class? Both functions <strong>calc_age</strong> and <strong>calc_age_v2</strong> work; is there a preferred method? This is a toy example; the real code has more complex functions.</p>
<pre><code>#%%
def calc_age(self):
    age = 2024 - self.dob
    return age

def calc_age_v2(self):
    self.age = 2024 - self.dob
    return self

class Person:
    def __init__(self, name, dob, age):
        self.name = name
        self.dob = dob
        self.age = age

    def myfunc(self):
        print("Hello my name is " + self.name)

    def calc_age(self):
        # calls the module-level helper of the same name
        self.age = calc_age(self)

    def calc_age_v2(self):
        self = calc_age_v2(self)

p1 = Person(name="John", dob=2002, age=None)
# p1.calc_age()
p1.calc_age_v2()
print(p1.age)
</code></pre>
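<p>A common convention is to keep the helper as a method (underscore-prefixed if it is internal) or a <code>property</code>, and to mutate the instance in place rather than returning and rebinding <code>self</code>. A sketch of both styles (names are mine):</p>

```python
class Person:
    def __init__(self, name, dob):
        self.name = name
        self.dob = dob
        self.age = None

    def _calc_age(self):
        # "private" helper by convention (leading underscore)
        return 2024 - self.dob

    def calc_age(self):
        # mutate in place; no need to return self
        self.age = self._calc_age()

    @property
    def age_now(self):
        # or compute on demand instead of storing
        return 2024 - self.dob

p1 = Person(name="John", dob=2002)
p1.calc_age()
print(p1.age)      # → 22
print(p1.age_now)  # → 22
```

<p>Returning <code>self</code> and reassigning it (<code>calc_age_v2</code>) is unidiomatic in Python: instance methods already see and mutate the object, so the reassignment adds nothing.</p>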
|
<python><class><arguments><self><instance-methods>
|
2024-01-18 07:29:07
| 1
| 1,610
|
Zeus
|
77,837,583
| 5,166,312
|
STM32L0xx MCU: J-Link EraseChip Issues via Jlink_X64.dll
|
<p>I need help with erasing an <strong>STM32L083</strong> using a J-Link. I have the Jlink_X64.dll library and I am calling its functions from Python. However, when I call <strong>JLINK_EraseChip()</strong>, it returns a <em>pylink.errors.JLinkEraseException: Failed to erase sector error</em> and the MCU does not get erased. Interestingly, erasing another MCU like STM32L152 works fine. Also, if I use J-Flash Lite to erase the chip, it is successful. After erasing the MCU with J-Flash Lite, if I try erasing it again using JLINK_EraseChip(), it works. Do you have any suggestions on what else I could try? BTW, the processor is not locked.
<a href="https://i.sstatic.net/ttr2t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ttr2t.png" alt="enter image description here" /></a></p>
|
<python><dll><stm32><segger-jlink>
|
2024-01-18 07:28:53
| 1
| 337
|
WITC
|
77,837,392
| 3,187,537
|
APIConnectionError: Connection error. in langchain vector search
|
<p>I am trying to implement a RAG solution using Azure OpenAI and Azure AI Search.</p>
<p>This is my <code>requirements.txt</code> file:</p>
<pre><code>azure-core==1.29.6
azure-common==1.1.28
azure-identity==1.15.0
azure-keyvault-keys==4.8.0
azure-keyvault-secrets==4.7.0
azure-search-documents==11.4.0
openai==1.8.0
langchain==0.1.1
fastapi==0.109.0
uvicorn==0.26.0
tiktoken==0.5.2
gunicorn==21.2.0
langchain-openai==0.0.2
</code></pre>
<p>I am trying to implement <code>RetrievalQA</code> using the following vector_store connection:</p>
<pre><code># Cognitive Search connection setting
index_name = os.environ["AZURE_SEARCH_INDEX_NAME"]
service_name = COGNOS_SERVICE
key = AZURE_SEARCH_ADMIN_KEY
vector_store_address = "https://{}.search.windows.net/".format(service_name)
vector_store_password = AZURE_SEARCH_ADMIN_KEY
# Define LLM
llm = AzureChatOpenAI(
model="gpt-35-turbo",
streaming=True,
azure_deployment="chatgpt-gpt35-turbo",
temperature=0.0,
)
embedding_model: str = "text-embedding-ada-002"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(
deployment=embedding_model, chunk_size=1
)
vector_store = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
# content_key="report_content"
)
</code></pre>
<p>When I run the following <code>vector_store</code> part I get the following error:</p>
<pre><code>File d:\xxxx\.venv\lib\site-packages\openai\_base_client.py:919, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
909 return self._retry_request(
910 options,
911 cast_to,
(...)
915 response_headers=None,
916 )
918 log.debug("Raising connection error")
--> 919 raise APIConnectionError(request=request) from err
921 log.debug(
922 'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
923 )
925 try:
APIConnectionError: Connection error.
</code></pre>
<p>I have made changes as per the new <code>langchain-community</code> imports.</p>
<p>The same code with older versions of <code>langchain</code> and the same connection settings seems to work.</p>
<p>What changes should I make so that the <code>vector_store</code> connection works?</p>
|
<python><azure-cognitive-search><langchain><azure-openai><azure-ai-search>
|
2024-01-18 06:49:19
| 0
| 1,732
|
Saurabh Jain
|
77,837,305
| 5,938,276
|
Running Jupyter Notebook remotely
|
<p>I have connected to my Rasp Pi using VSCode from Windows so I can run a Jupyter Notebook on the Rasp Pi.</p>
<p>However, the Rasp Pi has a venv installed, and when I try to run the notebook I get a message:</p>
<p><code>Running cells with '/usr/bin/python' requires the ipykernel package.</code></p>
<p><a href="https://i.sstatic.net/cpunz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cpunz.png" alt="enter image description here" /></a></p>
<p>I can accept this message but then get an error that I can't install ipykernel in an externally managed environment:</p>
<p><a href="https://i.sstatic.net/nFNLw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nFNLw.png" alt="enter image description here" /></a></p>
<p>If in the terminal I activate the venv and then type <code>which python</code> I get <code>/home/user/test_project/venv/bin/python</code>, and then I try to use this path to change the interpreter:</p>
<p>Select Another kernel - > Python Environment - > Create Python Environment - ></p>
<p>But then the only option is to create a new venv???</p>
<p>How can I run my Jupyter notebook?</p>
|
<python><visual-studio-code><python-venv><vscode-remote>
|
2024-01-18 06:29:14
| 1
| 2,456
|
Al Grant
|
77,837,301
| 1,230,724
|
Get closest parent dtype of two ndarray's
|
<p>I'd like to convert potentially mixed (i.e. <code>dtype==object</code>) numpy arrays to the closest parent dtype (as per the hierarchy shown here <a href="https://numpy.org/doc/stable/reference/arrays.scalars.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/arrays.scalars.html</a>).</p>
<p><a href="https://i.sstatic.net/UrtJg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UrtJg.png" alt="enter image description here" /></a></p>
<p>Is there a built-in function (or easy way) to get the (least generic) parent type that is common to two dtypes? Obviously, I can't trust the dtype of the array (because it may be object, which doesn't reveal too much about the elements' types).</p>
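<p>For two concrete dtypes, <code>np.promote_types</code> returns the smallest common dtype; for <code>object</code> arrays the element types can first be inferred with <code>np.result_type</code>. A sketch (the helper name is mine; note that NumPy's promotion lattice is not literally the scalar-class hierarchy in the linked diagram, but for numeric types it matches the "least generic common parent" intuition):</p>

```python
import numpy as np

def common_dtype(a, b):
    """Smallest dtype that can hold the elements of both arrays."""
    def inferred(arr):
        # For object arrays, infer from the actual elements;
        # otherwise trust the array's own dtype.
        return np.result_type(*arr.tolist()) if arr.dtype == object else arr.dtype
    return np.promote_types(inferred(a), inferred(b))

print(np.promote_types(np.int8, np.uint8))  # → int16
print(common_dtype(np.array([1, 2], dtype=object),
                   np.array([3.5], dtype=object)))  # → float64
```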
|
<python><numpy>
|
2024-01-18 06:28:36
| 0
| 8,252
|
orange
|
77,837,297
| 8,076,158
|
Create an empty TypedDict with Tuple values in Numba
|
<p>I want to initialise a dict with string keys and heterogeneous tuple values. This is what I have. Any ideas..?</p>
<pre><code>from numba import types, typed, njit, typeof, from_dtype
TypeMyTuple = types.Tuple([types.unicode_type, types.float16, types.int16])
my_dict = typed.Dict.empty(key_type=types.unicode_type, value_type=TypeMyTuple)
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "/home/me/.pycharm_helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
^^^^^^
File "<input>", line 8, in <module>
File "/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/typed/typeddict.py", line 105, in empty
return cls(dcttype=DictType(key_type, value_type), n_keys=n_keys)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/typed/typeddict.py", line 120, in __init__
self._dict_type, self._opaque = self._parse_arg(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/typed/typeddict.py", line 156, in _parse_arg
opaque = _make_dict(dcttype.key_type, dcttype.value_type,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/core/dispatcher.py", line 468, in _compile_for_args
error_rewrite(e, 'typing')
File "/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/core/dispatcher.py", line 409, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function new_dict at 0x7f78a6293920>) found for signature:
>>> new_dict(typeref[unicode_type], typeref[Tuple(unicode_type, float16, int16)], n_keys=int64)
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload in function 'impl_new_dict': File: numba/typed/dictobject.py: Line 653.
With argument(s): '(typeref[unicode_type], typeref[Tuple(unicode_type, float16, int16)], n_keys=int64)':
Rejected as the implementation raised a specific error:
LoweringError: Failed in nopython mode pipeline (step: native lowering)
float16
File "../../../../../../home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/typed/dictobject.py", line 671:
def imp(key, value, n_keys=0):
<source elided>
raise RuntimeError("expecting *n_keys* to be >= 0")
dp = _dict_new_sized(n_keys, keyty, valty)
^
During: lowering "dp = call $48load_global.0(n_keys, $62load_deref.3, $64load_deref.4, func=$48load_global.0, args=[Var(n_keys, dictobject.py:668), Var($62load_deref.3, dictobject.py:671), Var($64load_deref.4, dictobject.py:671)], kws=(), vararg=None, varkwarg=None, target=None)" at /home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/typed/dictobject.py (671)
raised from /home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/core/errors.py:846
During: resolving callee type: Function(<function new_dict at 0x7f78a6293920>)
During: typing of call at /home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/typed/typeddict.py (23)
File "../../../../../../home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/typed/typeddict.py", line 23:
def _make_dict(keyty, valty, n_keys=0):
return dictobject._as_meminfo(dictobject.new_dict(keyty, valty,
^
</code></pre>
<p>Many thanks</p>
|
<python><numba>
|
2024-01-18 06:27:34
| 1
| 1,063
|
GlaceCelery
|
77,837,062
| 4,564,080
|
Overriding method in subclass with a parameter type hint that is a subclass of the type hint used in the parent's method
|
<p>I have the following structure of a parent class and a child class which inherits from parent. The constructor of the parent takes a parameter of type <code>ParentType</code>, and the constructor of the child class takes the same parameter, but of type <code>ChildType</code>, which is a subclass of <code>ParentType</code>.</p>
<p>The code runs fine; however, mypy complains, as it infers the type of the parameter from the parent class and not the subclass.</p>
<pre class="lang-py prettyprint-override"><code>class ParentType:
def __init__(self, param_1: int):
self.param_1 = param_1
class ChildType(ParentType):
def __init__(self, param_1: int, param_2: int):
super().__init__(param_1=param_1)
self.param_2 = param_2
class Parent:
def __init__(self, a: ParentType):
self.a = a
class Child(Parent):
def __init__(self, a: ChildType):
super().__init__(a=a)
child_type = ChildType(param_1=1, param_2=2)
child = Child(a=child_type)
print(child.a.param_1) # works
print(child.a.param_2) # works, but mypy complains that "ParentType" has no attribute "param_2"; maybe "param_1"?
</code></pre>
<p>What would be the correct way to achieve this?</p>
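<p>One way to express this is to make <code>Parent</code> generic in the type of <code>a</code>, so that <code>Child</code> can pin it to <code>ChildType</code>; a sketch:</p>

```python
from typing import Generic, TypeVar

class ParentType:
    def __init__(self, param_1: int):
        self.param_1 = param_1

class ChildType(ParentType):
    def __init__(self, param_1: int, param_2: int):
        super().__init__(param_1=param_1)
        self.param_2 = param_2

# Parent is generic in the type of `a`, bounded by ParentType.
T = TypeVar("T", bound=ParentType)

class Parent(Generic[T]):
    def __init__(self, a: T):
        self.a = a

class Child(Parent[ChildType]):
    pass

child = Child(a=ChildType(param_1=1, param_2=2))
print(child.a.param_1)  # mypy now knows `a` is a ChildType
print(child.a.param_2)  # no more complaint
```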
|
<python><python-typing>
|
2024-01-18 05:22:41
| 1
| 4,635
|
KOB
|
77,837,023
| 134,044
|
Use str.format to format numbers with a minimum of 1 decimal place without rounding and without adding additional trailing zeroes
|
<h4>Context</h4>
<p>I am receiving numbers as strings in <strong>Python 3</strong> and want to transmit them as strings in order to preserve accuracy so that the upstream server can convert the numbers-as-strings to <code>decimal.Decimal</code> without loss of accuracy.</p>
<p>[<em>Admins: I have repeatedly searched Stack Overflow and other resources can cannot find anyone asking or answering this specific question.</em>]</p>
<h4>Question</h4>
<p><strong>How do I use <a href="https://docs.python.org/3.10/library/stdtypes.html#str.format" rel="nofollow noreferrer"><code>str.format</code></a> to provide the given floating point format (below)?</strong> Showing that it's impossible is an acceptable answer.</p>
<p><strong>Succinctly put, the format is just to provide <em>at least one decimal place at all times</em> even if that is just a trailing zero, but never to perform any rounding.</strong></p>
<p>I suppose there may be a <a href="https://docs.python.org/3.10/library/string.html#string.Formatter" rel="nofollow noreferrer">Formatter class</a> solution, which would be acceptable if available, but a <code>str.format</code> solution is preferred.</p>
<p>I would be interested in knowing if <a href="https://docs.python.org/3.10/reference/lexical_analysis.html#f-strings" rel="nofollow noreferrer">f-strings</a> can achieve this as well. Using f-strings is not preferred as an answer in this context, but it would be interesting to know if f-strings can provide this format (without just packing the demonstrated expression inside the f-string for execution: that's just moving the required code into a secondary evaluation environment).</p>
<h4>Required Format</h4>
<p>To distinguish the numbers from <code>int</code> <strong>the format must suffix <code>".0"</code> to whole numbers only if they are presented without any decimal places, but preserve exactly any decimal places that are presented, even if that includes trailing zeroes.</strong></p>
<p>There is an alternative solution under consideration which is to <strong>strip all trailing decimal places that are zeroes but leave one and only one trailing decimal zero for whole numbers (and of course preserve all non-zero decimal places without rounding)</strong>.</p>
<p>Incoming strings can be expected to be valid <code>int</code> or <code>Decimal</code>. The special case of receiving a whole number with just the decimal point and no decimal places is invalid and will not occur (no need to handle <code>"42."</code>). The empty string (<code>""</code>) will not occur.</p>
<p>It's not acceptable just to configure to a large number of decimal places (<code>"{:.28f}".format(1)</code>).</p>
<h4>Demonstration</h4>
<p>AFAIK this should be the required behaviour. Looking for this behaviour using <code>format</code>:</p>
<pre class="lang-py prettyprint-override"><code>for number in ("42", "4200", "42.0000", "42.34", "42.3400", "42.3456"):
string = "." in number and number or number + ".0"
print(number, "=>", string)
</code></pre>
<pre><code>42 => 42.0
4200 => 4200.0
42.0000 => 42.0000
42.34 => 42.34
42.3400 => 42.3400
42.3456 => 42.3456
</code></pre>
<h3>Alternative</h3>
<p>This alternative behaviour is also acceptable.</p>
<pre class="lang-py prettyprint-override"><code>for number in ("42", "4200", "42.0000", "42.34", "42.3400", "42.3456"):
string = (
number.rstrip("0")[-1] == "." and number.rstrip("0") + "0"
or "." not in number and number + ".0"
or number.rstrip("0")
)
print(number, "=>", string)
</code></pre>
<pre><code>42 => 42.0
4200 => 4200.0
42.0000 => 42.0
42.34 => 42.34
42.3400 => 42.34
42.3456 => 42.3456
</code></pre>
|
<python><python-3.x><decimal><string-formatting>
|
2024-01-18 05:09:22
| 3
| 4,109
|
NeilG
|
77,836,728
| 2,597,213
|
Polars, compute new column by cross referencing two dataframes
|
<p>First of all I have to say this is my first time using any DataFrame module.</p>
<p>I need to process FEM simulation results that I have loaded into a Polars DataFrame, which produces two DataFrames:</p>
<pre><code>nodes_df = pl.DataFrame(
{"node":[1,2,3,4,5,6,7,8,9,10],
"x":[4.5,-4.6, 3.8,-3.8, 2.1,-9.3, 1.5,-6.7, 0.0, 3.6],
"y":[9.8,-9.9, 8.2,-8.3, 4.6,-2.0, 1.4,-6.1, 0.0, 1.0],
"z":[0.0,0.0,8.8,8.8,1.2,1.2,5.5,5.5,5.5,0.0,]})
elements = [1, 2, 3]
elements_df = pl.DataFrame(
{
"element": elements,
"N_1": [1, 1, 1],
"N_2": [2, 2, 2],
"N_3": [3, 3, 3],
"N_4": [4, 4, 4],
"N_5": [5, 5, 5],
"N_6": [6, 6, 6],
"N_7": [7, 7, 7],
"N_8": [8, 8, 8],
"N_9": [9, 9, 9],
"N_10": [10, 10, 10],
}
)
</code></pre>
<p>This data represents the values of a 10-node C3D10 element, like this:</p>
<p><a href="https://i.sstatic.net/zUxP4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zUxP4.png" alt="C3D10 Abaqus element" /></a></p>
<p>the <code>elements_df</code>contains the definition for all elements, where the columns <code>N_{x}</code> are de ids of it's nodes (the node order is important), so each element always have 10 different nodes. The <code>nodes_df</code> can have more columns with other simulation results that I will process latter. The Idea is pass that results stored in the <code>nodes_df</code> to the element centroid.</p>
<p>The example have 10 (fake) nodes, and 3 (equals fake) elements, this are just sample data every row will be different. Where <code>nodes_df</code> has an <code>"node"</code> column to represent the node id, and <code>x,y,z</code> are the node coordinates. The <code>elements_df</code> has an "element" column for the id, and <code>N_<int[1:n]></code> columns for it's nodes ids. First I´dont know if this schema is better or worst than having a single column with a list of nodes in one column, thinking on performance and easy of use of the dataframe, I have no problen to change the schema.</p>
<p>But I need to cross-reference both dataframes to make several computations on my results. To start, I need to compute the element centroid.</p>
<p>As I'm new to dataframes, I did this to clarify my point, but I know this is the worst DataFrame code anyone can write:</p>
<pre><code>def coords(nodes: list):
# get nodes coordinates for a node list
return (
nodes_df.filter(pl.col("node").is_in(nodes))
.select((pl.col(("x", "y", "z"))))
.mean()
)
def centroid(el_id: int):
# get the first four nodes ids for el with id = el_id
nodes = elements_df.filter(pl.col("element") == el_id).to_numpy()[0][:4]
return coords(nodes)
# loop over all elements and create new dataframe
# or maybe use this schema:
# _tmp = {"element": [], "x": [],"y": [],"z": []}
_tmp = {"element": [], "centroid": []}
for el in elements:
c = centroid(el).to_numpy()[0]
_tmp["element"].append(el)
_tmp["centroid"].append(c)
# _tmp["x"].append(c[0])
# _tmp["y"].append(c[1])
# _tmp["z"].append(c[2])
# concat dataframes to add the new "centroid" column
elements_df = pl.concat((elements_df, pl.DataFrame(_tmp)), how="align")
</code></pre>
<p>the results will be something like:</p>
<pre><code>shape: (3, 12)
┌─────────┬─────┬─────┬─────┬───┬─────┬─────┬──────┬───────────────────────────┐
│ element ┆ N_1 ┆ N_2 ┆ N_3 ┆ … ┆ N_8 ┆ N_9 ┆ N_10 ┆ centroid │
│ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ i64 ┆ ┆ i64 ┆ i64 ┆ i64 ┆ list[f64] │
╞═════════╪═════╪═════╪═════╪═══╪═════╪═════╪══════╪═══════════════════════════╡
│ 1 ┆ 1 ┆ 2 ┆ 3 ┆ … ┆ 8 ┆ 9 ┆ 10 ┆ [1.233333, 2.7, 2.933333] │
│ 2 ┆ 1 ┆ 2 ┆ 3 ┆ … ┆ 8 ┆ 9 ┆ 10 ┆ [1.233333, 2.7, 2.933333] │
│ 3 ┆ 1 ┆ 2 ┆ 3 ┆ … ┆ 8 ┆ 9 ┆ 10 ┆ [1.233333, 2.7, 2.933333] │
└─────────┴─────┴─────┴─────┴───┴─────┴─────┴──────┴───────────────────────────┘
</code></pre>
<p>or the commented schema:</p>
<pre><code>┌─────────┬─────┬─────┬─────┬───┬──────┬──────────┬─────┬──────────┐
│ element ┆ N_1 ┆ N_2 ┆ N_3 ┆ … ┆ N_10 ┆ x ┆ y ┆ z │
│ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ i64 ┆ ┆ i64 ┆ f64 ┆ f64 ┆ f64 │
╞═════════╪═════╪═════╪═════╪═══╪══════╪══════════╪═════╪══════════╡
│ 1 ┆ 1 ┆ 2 ┆ 3 ┆ … ┆ 10 ┆ 1.233333 ┆ 2.7 ┆ 2.933333 │
│ 2 ┆ 1 ┆ 2 ┆ 3 ┆ … ┆ 10 ┆ 1.233333 ┆ 2.7 ┆ 2.933333 │
│ 3 ┆ 1 ┆ 2 ┆ 3 ┆ … ┆ 10 ┆ 1.233333 ┆ 2.7 ┆ 2.933333 │
└─────────┴─────┴─────┴─────┴───┴──────┴──────────┴─────┴──────────┘
</code></pre>
<p>I'm sure that what I did here can be done directly in Polars, but I don't know where to start. This is my simplest case, but I would have to compute other things by cross-referencing the two dataframes, so I want to learn how to do it. I'm also looking for advice on my schemas.</p>
|
<python><dataframe><python-polars>
|
2024-01-18 03:28:12
| 1
| 4,885
|
efirvida
|
77,836,616
| 23,190,147
|
Count number of ways to pair integers 1-14 under constraint
|
<p>Consider the following problem:</p>
<p>How many ways are there to split the integers 1 through 14 into 7 pairs such that in each pair, the greater number is at least 2 times the lesser number?</p>
<p>This is obviously a math problem, and there are several solutions to it. I wondered if there is a way to write a python program instead, to automatically search through all possible pairs, and give me the correct answer. Right now I'll take this moment to clarify: I do not want a "reasonable" solution to this problem, I want a python program to solve it (goes through all the possible pairs).</p>
<p>While going through the possibilities of how I might do this, I considered using a <code>set</code>, since the order of a set doesn't matter. If I had a for loop for instance, that counted all the pairs, the program would need to know when to stop counting the pairs or when it had already counted a certain pair. Since the for loop would have randomly ordered pairs, if I put those pairs into a set, it would likely make it easier to figure out if I had already counted that pair, since order does not matter in a set.</p>
<p>I'm kind of stuck right now...so I'm open to any suggestions on how I can build on this idea, or any other potential ideas (I don't have any code yet to show...hopefully coming soon).</p>
<p>UPDATE:</p>
<p>Resolved. It seems there are several ways to do this, and I built some code that works (using some helpful answers as well). Efficiency doesn't really matter here (since we're only pairing the numbers from 1-14), but as the number of pairs increases, it becomes more and more important to write efficient code, so that's a suggestion if you're doing something similar.</p>
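<p>A brute-force sketch that sidesteps the duplicate-counting concern raised above: always pair the smallest remaining number, so each complete matching is generated exactly once:</p>

```python
def count_pairings(nums):
    """Count perfect matchings where each pair's larger element is >= 2x the smaller."""
    nums = tuple(sorted(nums))

    def rec(remaining):
        if not remaining:
            return 1  # all numbers paired: one valid matching
        smallest, rest = remaining[0], remaining[1:]
        total = 0
        # The smallest number must be the lesser of its pair, so try
        # every remaining partner at least twice as large.
        for i, partner in enumerate(rest):
            if partner >= 2 * smallest:
                total += rec(rest[:i] + rest[i + 1:])
        return total

    return rec(nums)

print(count_pairings(range(1, 15)))  # → 144
```

<p>Fixing the smallest element at each step means no set bookkeeping is needed: two pairings that differ only in pair order or pair orientation are never counted twice.</p>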
|
<python><math><set><combinations>
|
2024-01-18 02:45:27
| 2
| 450
|
5rod
|