QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
78,419,357
| 13,709,317
|
Why wont the `grpc` package install on my system?
|
<p>Hi I'm trying to install the <code>grpc</code> python package via pip. It is actually intended to be installed by a build script for another setup that I am doing but the script kept on failing so I decided to do it myself.</p>
<p>Here is what I get:</p>
<pre><code>❯ pip3 install grpc
Defaulting to user installation because normal site-packages is not writeable
Collecting grpc
Using cached grpc-1.0.0.tar.gz (5.2 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-_au3535m/grpc_0b2b1ce7560943288874cf94aa72394a/setup.py", line 33, in <module>
raise RuntimeError(HINT)
RuntimeError: Please install the official package with: pip install grpcio
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>I have tried following the hint:</p>
<pre><code>pip install grpcio
</code></pre>
<p>and it ran without any troubles. I also visited a plethora of forums by searching the keywords: "setup.py egg_info did not run successfully" and all suggested doing:</p>
<pre><code>pip install --upgrade pip
pip install --upgrade setuptools
</code></pre>
<p>I tried both and here is what I get:</p>
<pre><code>Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pip in /home/utl/.local/lib/python3.10/site-packages (24.0)
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (69.5.1)
</code></pre>
<p>And yet when I run <code>pip install grpc</code> afterwards it does not work and I get the same error as before. I've been struggling with the build script for quite some time now so any help is very appreciated.</p>
|
<python><grpc>
|
2024-05-02 13:04:31
| 1
| 801
|
First User
|
78,418,960
| 3,024,945
|
How do I know if flag was passed by the user or has default value?
|
<p>Sample code:</p>
<pre><code>import click
@click.command
@click.option('-f/-F', 'var', default=True)
def main(var):
click.echo(var)
main()
</code></pre>
<p>Inside <code>main()</code> function, how can I check if <code>var</code> parameter got <code>True</code> value by default, or it was passed by the user?</p>
<p>What I want to achieve: I will have few flags. When the user does not pass any of the flags, I want them all to be <code>True</code>. When the user passes at least one of the flags, only the passed flags should be <code>True</code>, the other flags <code>False</code>.</p>
|
<python><python-click>
|
2024-05-02 11:52:24
| 1
| 1,458
|
Kossak
|
78,418,933
| 13,202,601
|
Single operator floor division
|
<p>What programming languages other than Python has the floor division as a single operator from the programmers' point of view?
Why does it even exist? Please don't give me the answer of because it can!</p>
<p>I did google for it first but most results are just about accomplishing the same thing with combination of an operator and a function call. Duh!</p>
|
<python><programming-languages>
|
2024-05-02 11:46:18
| 1
| 802
|
stackoverblown
|
78,418,808
| 5,378,816
|
How to write a test if the argument defines a type
|
<p>This is not a <code>pydantic</code> question, but to explain why am I asking: <code>pydantic.TypeAdapter()</code> accepts (among many others) all the following type definitions as its argument and can create a working validator for them:</p>
<pre><code>int
int|str
list
list[str|int]
typing.Union[int,str]
typing.Literal[10,20,30]
</code></pre>
<p>Example:</p>
<pre><code>>>> validator = pydantic.TypeAdapter(list[str|int]).validate_python
>>> validator([10,20,"stop"])
[10, 20, 'stop']
>>> validator([10,20,None])
(traceback deleted)
pydantic_core._pydantic_core.ValidationError: ...
</code></pre>
<p>I want to make a test if an argument is a such type defition. How do I write such test?</p>
<ul>
<li>I started with <code>isinstance(arg, type)</code> for simple types like <code>int</code> or <code>list</code></li>
<li>then I added <code>isinstance(arg, types.GenericAlias</code> for <code>list[str]</code> etc.</li>
<li>then I realized it does not recognize <code>int|str</code> (which itself behaves differently than <code>typing.Union[int,str]</code>). Also the <code>Literal[]</code> is not recognized ... I'm probably on a wrong track.</li>
</ul>
|
<python>
|
2024-05-02 11:22:31
| 1
| 17,998
|
VPfB
|
78,418,183
| 15,290,244
|
How to create fulltext index without specifying node label in Cypher
|
<p>I tried to create a fulltext index for all nodes based on <a href="https://neo4j.com/docs/cypher-manual/current/indexes/semantic-indexes/full-text-indexes/#create-full-text-indexes" rel="nofollow noreferrer">this documentation</a></p>
<p>This is the query I came up with:</p>
<pre><code>CREATE FULLTEXT INDEX full_text_docs
FOR (n)<-[:contains]-(d:DocumentFile) ON EACH [n.entity_id, d.entity_id]
IF NOT EXISTS
</code></pre>
<p>but I get the following error:</p>
<pre><code>neo4j.exceptions.CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input ')': expected ":" (line 3, column 39 (offset: 76))
" FOR (n)<-[:contains]-(d:DocumentFile) ON EACH [n.entity_id, d.entity_id]"
^}
</code></pre>
<p>The query was derived from the two examples:</p>
<pre><code>CREATE FULLTEXT INDEX namesAndTeams FOR (n:Employee|Manager) ON EACH [n.name, n.team]
</code></pre>
<pre><code>CREATE FULLTEXT INDEX communications FOR ()-[r:REVIEWED|EMAILED]-() ON EACH [r.message]
</code></pre>
<p>where the first example uses <code>n:label</code> (but I don't want to specify a label) and the second uses a relationship but without specifying <code>n</code> (I want to specify n so I can index it).</p>
<p>How do I change the query so that it creates a fulltext index for all nodes matching <code>(n)<-[:contains]-(d:DocumentFile)</code>?</p>
<p>EDIT:
This does not work either</p>
<pre><code> CREATE FULLTEXT INDEX full_text_docs
FOR (n:EntityNode)-[r:contains]-(d:DocumentFile) ON EACH [n.entity_id, d.entity_id]
IF NOT EXISTS
</code></pre>
<p>Error:</p>
<pre><code>neo4j.exceptions.CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input '-': expected "ON" (line 3, column 51 (offset: 88))
" FOR (n:EntityNode)-[r:contains]-(d:DocumentFile) ON EACH [n.entity_id, d.entity_id]"
^}
</code></pre>
|
<python><neo4j><cypher>
|
2024-05-02 09:34:08
| 2
| 743
|
Simao Gomes Viana
|
78,418,010
| 22,466,650
|
How to switch between parent classes during child instantiation?
|
<p>I'm trying to make a custom class <code>CsvFrame</code> that is a dataframe made either with pandas or polars.</p>
<p>For that I made the code below:</p>
<pre><code>class CsvFrame:
def __init__(self, engine, *args, **kwargs):
if engine == 'polars':
import polars as pl
pl.DataFrame.__init__(pl.read_csv(*args, **kwargs))
if engine == 'pandas':
import pandas as pd
pd.DataFrame.__init__(pd.read_csv(*args, **kwargs))
</code></pre>
<p>Now when I instantiante an object, there are two problems:</p>
<ul>
<li>there is no HTML represention of the dataframe in VS Code-Jupyter</li>
<li>none of the methods or attributes of a dataframe are available</li>
</ul>
<pre><code>import io
input_text = '''
col1,col2
A,1
B,2
'''
cfr = CsvFrame('polars', io.StringIO(input_text))
# problem 1
cfr # <__main__.CsvFrame at 0x1fd721f32c0>
# problem 2
cfr.melt()
AttributeError: 'CsvFrame' object has no attribute 'melt'
</code></pre>
<p>Can you help me fix that?</p>
|
<python><class>
|
2024-05-02 09:04:42
| 1
| 1,085
|
VERBOSE
|
78,417,713
| 2,186,848
|
How do I write tests for my django-extensions cron job?
|
<p>I have a cron job in my Django app that's defined as a <code>MinutelyJob</code> (from <code>django-extensions</code>).</p>
<p>How do I write tests for the job? <a href="https://django-extensions.readthedocs.io/en/latest/jobs_scheduling.html" rel="nofollow noreferrer">The module documentation</a> is quite sparse, and doesn't tell me how to call the job from code as opposed to the command line. I don't want to write test code that depends on undocumented interfaces.</p>
<p>Alternatively, should I reimplement the job using a different module? I only have the one job so Celery is a bit heavyweight for my use case.</p>
|
<python><django><cron><django-extensions>
|
2024-05-02 08:04:50
| 0
| 624
|
Flup
|
78,417,678
| 12,556,481
|
Downloading is not working using the Python requests library
|
<p>I'm trying to download a PDF using the following URL, but I can't see any content. When I try different URLs, it works fine. Can someone explain what the issue might be? Does it have something to do with this website?
This is the code:</p>
<pre><code>import requests
pdf_url="https://www.npci.org.in/PDF/nach/circular/2015-16/Circular_No_126.pdf"
pdf_title="test"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:124.0) Gecko/20100101 Firefox/124.0"}
response = requests.get(pdf_url, headers, stream = True)
if response.status_code==200:
content = next(response.iter_content(10))
with open(f"{pdf_title}.pdf", "wb") as fd:
fd.write(response.content)
</code></pre>
|
<python><python-requests><request>
|
2024-05-02 07:59:06
| 2
| 309
|
dfcsdf
|
78,417,407
| 9,827,719
|
Docker file with Ubuntu, Python and Weasyprint - Problem with venv
|
<p>I am trying to get a docker image with Ubuntu, Python and Weasyprint to work.
I think the error for me is that <code>requirements.txt</code> is installed in a Python venv. When I try to run main.py it gives me a error <code>ModuleNotFoundError: No module named 'sqlalchemy'</code>, so it did install requirements.txt, but only in the virtual environment, and this is not the environment that is beeing run.</p>
<p><strong>Dockerfile</strong></p>
<pre><code># Specify Python
FROM ubuntu:latest
# Upgrade Ubuntu
RUN echo Dockerfile :: Update Ubuntu
RUN apt-get update && apt-get install -y
RUN apt-get install build-essential -y
# Install Python
RUN echo Dockerfile :: Install Python
RUN apt install python3 -y
RUN python3 --version
# Install Weasyprint
RUN echo Dockerfile :: Install components
RUN apt-get install -y python3-lxml libcairo2 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
RUN apt-get install -y libpango-1.0-0
RUN apt install -y python3-dev libpq-dev
# Install PIP
RUN echo Dockerfile :: Install PIP
RUN apt-get install python3-pip -y
RUN pip --version
# Install venv
RUN echo Dockerfile :: Install Venv
RUN apt-get install python3-venv -y
# Set enviroment variables
RUN echo Dockerfile :: Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Open port
RUN echo Dockerfile :: Open port
EXPOSE 8080
# Add Python script
RUN echo Dockerfile :: Add Python script
RUN mkdir /app
WORKDIR /app
COPY . .
# Install dependencies
RUN echo Dockerfile :: Install dependencies
RUN python3 -m venv .venv
RUN . .venv/bin/activate
RUN .venv/bin/pip install -r requirements.txt
# Set Pythons path
RUN echo Dockerfile :: Set Pythons path
ENV PYTHONPATH /app
# Run script
RUN echo Dockerfile :: Run script
CMD [ "python3", "./main.py" ]
</code></pre>
<p><strong>Build output:</strong></p>
<pre><code>docker build .
2024/05/02 08:54:37 http2: server: error reading preface from client //./pipe/docker_engine: file has already been closed
[+] Building 0.0s (0/0) docker:default
[+] Building 51.2s (33/33) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.42kB 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 0.5s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [ 1/28] FROM docker.io/library/ubuntu:latest@sha256:3f85b7caad41a95462cf5b787d8a04604c8262cdcdf9a472b8c52ef83375fe15 0.0s
=> [internal] load build context 0.8s
=> => transferring context: 973.99kB 0.8s
=> CACHED [ 2/28] RUN echo Dockerfile :: Update Ubuntu 0.0s
=> CACHED [ 3/28] RUN apt-get update && apt-get install -y 0.0s
=> CACHED [ 4/28] RUN apt-get install build-essential -y 0.0s
=> CACHED [ 5/28] RUN echo Dockerfile :: Install Python 0.0s
=> CACHED [ 6/28] RUN apt install python3 -y 0.0s
=> CACHED [ 7/28] RUN python3 --version 0.0s
=> CACHED [ 8/28] RUN echo Dockerfile :: Install components 0.0s
=> CACHED [ 9/28] RUN apt-get install -y python3-lxml libcairo2 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info 0.0s
=> CACHED [10/28] RUN apt-get install -y libpango-1.0-0 0.0s
=> CACHED [11/28] RUN apt install -y python3-dev libpq-dev 0.0s
=> CACHED [12/28] RUN echo Dockerfile :: Install PIP 0.0s
=> CACHED [13/28] RUN apt-get install python3-pip -y 0.0s
=> CACHED [14/28] RUN pip --version 0.0s
=> CACHED [15/28] RUN echo Dockerfile :: Install Venv 0.0s
=> CACHED [16/28] RUN apt-get install python3-venv -y 0.0s
=> CACHED [17/28] RUN echo Dockerfile :: Set environment variables 0.0s
=> CACHED [18/28] RUN echo Dockerfile :: Open port 0.0s
=> CACHED [19/28] RUN echo Dockerfile :: Add Python script 0.0s
=> CACHED [20/28] RUN mkdir /app 0.0s
=> CACHED [21/28] WORKDIR /app 0.0s
=> [22/28] COPY . . 3.9s
=> [23/28] RUN echo Dockerfile :: Install dependencies 0.4s
=> [24/28] RUN python3 -m venv .venv 4.0s
=> [25/28] RUN . .venv/bin/activate 0.4s
=> [26/28] RUN .venv/bin/pip install -r requirements.txt 38.9s
=> [27/28] RUN echo Dockerfile :: Set Pythons path 0.4s
=> [28/28] RUN echo Dockerfile :: Run script 0.5s
=> exporting to image 1.2s
=> => exporting layers 1.2s
=> => writing image sha256:e936b6ebf44f23a8a04ec47c9ce69c33f7872249b6ee795a606d64a30e30b6a8 0.0s
What's Next?
View a summary of image vulnerabilities and recommendations → docker scout quickview
</code></pre>
<p><strong>Run output:</strong></p>
<pre><code>docker run e9
Traceback (most recent call last):
File "/app/./main.py", line 10, in <module>
from src.dao.db_adapter import DBAdapter
File "/app/src/dao/db_adapter.py", line 19, in <module>
import sqlalchemy
ModuleNotFoundError: No module named 'sqlalchemy'
</code></pre>
<p>How can I fix so that either requirements are installed system wide or python uses the venv?</p>
|
<python><docker><weasyprint>
|
2024-05-02 07:01:12
| 1
| 1,400
|
Europa
|
78,417,363
| 1,328,979
|
Validate a recursive data structure (e.g. tree) using Python Cerberus (v1.3.5)
|
<p>What is the right way to model a recursive data structure's schema in Cerberus?</p>
<h3>Attempt #1:</h3>
<pre class="lang-py prettyprint-override"><code>from cerberus import Validator, schema_registry
schema_registry.add("leaf", {"value": {"type": "integer", "required": True}})
schema_registry.add("tree", {"type": "dict", "anyof_schema": ["leaf", "tree"]})
v = Validator(schema = {"root": {"type": "dict", "schema": "tree"}})
</code></pre>
<p>Error:</p>
<pre class="lang-py prettyprint-override"><code>cerberus.schema.SchemaError: {'root': [{
'schema': [
'no definitions validate', {
'anyof definition 0': [{
'anyof_schema': ['must be of dict type'],
'type': ['null value not allowed'],
}],
'anyof definition 1': [
'Rules set definition tree not found.'
],
},
]},
]}
</code></pre>
<h3>Attempt #2:</h3>
<p>The above error indicating the need for a rules set definition for <code>tree</code>:</p>
<pre class="lang-py prettyprint-override"><code>from cerberus import Validator, schema_registry, rules_set_registry
schema_registry.add("leaf", {"value": {"type": "integer", "required": True}})
rules_set_registry.add("tree", {"type": "dict", "anyof_schema": ["leaf", "tree"]})
v = Validator(schema = {"root": {"type": "dict", "schema": "tree"}})
v.validate({"root": {"value": 1}})
v.errors
v.validate({"root": {"a": {"value": 1}}})
v.errors
v.validate({"root": {"a": {"b": {"c": {"value": 1}}}}})
v.errors
</code></pre>
<p>Output:</p>
<pre><code>False
{'root': ['must be of dict type']}
</code></pre>
<p>for all 3 examples.</p>
<h3>Expected behaviour</h3>
<p>Ideally, I would like all the below documents to pass validation:</p>
<pre class="lang-py prettyprint-override"><code>v = Validator(schema = {"root": {"type": "dict", "schema": "tree"}})
assert v.validate({"root": {"value": 1}}), v.errors
assert v.validate({"root": {"a": {"value": 1}}}), v.errors
assert v.validate({"root": {"a": {"b": {"c": {"value": 1}}}}}), v.errors
</code></pre>
<h3>Related questions</h3>
<ul>
<li><a href="https://stackoverflow.com/questions/75008628/is-it-possible-for-cerberus-to-check-nested-recursive-structure">Is it possible for cerberus to check nested recursive structure?</a></li>
<li><a href="https://stackoverflow.com/questions/50499045/python-cerberus-multipe-schemas-for-a-single-filed">Python Cerberus: multipe schemas for a single filed?</a></li>
<li><a href="https://github.com/pyeve/cerberus/issues/513" rel="nofollow noreferrer">https://github.com/pyeve/cerberus/issues/513</a></li>
</ul>
|
<python><recursion><tree><cerberus>
|
2024-05-02 06:52:29
| 1
| 1,464
|
Marc Carré
|
78,417,261
| 2,772,127
|
pyenv is killed during module installation?
|
<p>I installed pyenv (on Debian 12) as I need to run a program that uses python 10.<br />
Now I also need a module called local-attention.
So I tried to install it with <code>pip</code>:</p>
<pre><code>pip install --user local-attention
Collecting local-attention
Using cached local_attention-1.9.1-py3-none-any.whl (8.2 kB)
Collecting einops>=0.6.0
Downloading einops-0.8.0-py3-none-any.whl (43 kB)
| | 43 kB 370 kB/s
Collecting torch
Downloading torch-2.3.0-cp310-cp310-manylinux1_x86_64.whl (779.1 MB)
| | 779.1 MB 18.2 MB/s eta 0:00:01/home/lucas/.pyenv/pyenv.d/exec/pip-rehash/pip: line 20: 3694 Killed "$PYENV_COMMAND_PATH" "$@"
</code></pre>
<p>But it looks like <code>pip</code> is killed during the process.<br />
And if I check for the module installation:</p>
<pre><code>python -c "import local_attention"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'local_attention'
</code></pre>
<p>Can someone help me ?</p>
|
<python><pyenv>
|
2024-05-02 06:31:30
| 0
| 1,066
|
Duddy67
|
78,417,252
| 9,951,273
|
Overload function with SupportsInt and not SupportsInt?
|
<p>I want to overload the function below so if passed a value that supports <code>int()</code> Python type hints <code>int</code> otherwise Python type hints the value passed.</p>
<p>Python <code>typing</code> module provides a <code>SupportsInt</code> type we can use to check if our value supports int.</p>
<pre><code>from typing import Any, SupportsInt, overload
@overload
def to_int(value: SupportsInt) -> int: ...
@overload
def to_int[T: NotSupportsInt???](value: T) -> T: ...
def to_int(value: Any) -> Any:
try:
return int(value)
except TypeError:
return value
</code></pre>
<p>But in our second <code>overload</code> statement how can we specify all values that don't support <code>int</code>?</p>
|
<python><python-typing>
|
2024-05-02 06:28:27
| 3
| 1,777
|
Matt
|
78,417,251
| 4,488,349
|
How exactly does tensorflow perform mini-batch gradient descent?
|
<p>I am unable to achieve good results unless I choose a batch size of 1. By good, I mean error decreases significantly through the epochs. When I do a full batch of 30 the results are poor, error behaves erratically decreasing only slightly and then learning nothing, or increasing. However, tensorflow gets good results for any batch_size with these same settings.</p>
<p>My question is, what is wrong with my gradient descent method?</p>
<p>In addition, How is tensorflow different? How do the gradients remain so stable through the epochs without, apparently, scaling or clipping them with default SGD settings?</p>
<pre class="lang-python prettyprint-override"><code>#%%
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
p = 10
N = 30
w = np.random.uniform(-1,1, (p,1))
X = np.random.normal(0,1, (N,p))
y = np.matmul(X,w) + np.random.normal(0,1,(N,1))
#%%
class layer:
def __init__(self, out_dim, input=False, in_dim=None):
self.out_dim = out_dim
if not input:
self.weights = np.random.normal(0,1.0/out_dim,(out_dim, in_dim))
self.input_bit = 0
else:
self.input_bit = 1
def compute_self(self, z):
if self.input_bit==0:
self.z = np.matmul(self.weights,z)
else:
self.z = z
return np.reshape(self.z, (-1,1))
class network:
def __init__(self):
self.layers = []
def add_layer(self, layer):
self.layers.append(layer)
def compute_net_iter(self, x, L):
if L==(len(self.layers)-1):
return np.squeeze(self.layers[len(self.layers)-1].compute_self(self.layers[L-1].z))
if L==0:
self.layers[0].compute_self(x)
return self.compute_net_iter(self.layers[0].z, L+1)
else:
self.layers[L].compute_self(self.layers[L-1].z)
return self.compute_net_iter(self.layers[L].z, L+1)
def compute_output(self,X):
y = []
for i in range(X.shape[0]):
y.append(self.compute_net_iter(X[i,:], 0))
return np.reshape(np.array(y), (-1,1))
def mse(self, yhat, y):
return np.mean(np.power(yhat - y,2))
def grad_E(self, yhat, y):
return np.reshape(np.sum(yhat - y), (-1, 1))
def batch_data(self, X,y, size):
nrows = X.shape[0]
rand_rows = np.random.permutation(range(nrows))
batches = int(nrows/size)
rem = nrows%size
if rem:
batches += 1
b = 0
Xbatches = {}
ybatches = {}
c = 0
while b < nrows:
if b+size>nrows:
e = nrows-b
else:
e = size
r = rand_rows[b:(b+e)]
Xbatches[c] = X[r,:]
ybatches[c] = y[r,:]
b+=size
c+=1
return Xbatches, ybatches
def update(self, X,y, epochs=10, batch_size=32, lr=.01):
deltas = {}
for i in range(1,len(self.layers)):
deltas[i] = 0
for e in range(epochs):
Xbatches, ybatches = self.batch_data(X,y, batch_size)
batches = len(ybatches)
for b in range(batches):
yhat = self.compute_output(Xbatches[b])
grad_E = self.grad_E(yhat, ybatches[b])/len(yhat)
z = np.reshape(self.layers[-2].z, (-1,1))
grad_W = np.matmul(grad_E, z.T)
deltas[len(self.layers)-1] = grad_W
for L in reversed(range(1, len(self.layers)-1)):
grad_E = np.matmul(self.layers[L+1].weights.T, grad_E)
z = np.reshape(self.layers[L-1].z, (-1,1))
grad_W = np.matmul(grad_E, z.T)
deltas[L] = grad_W
for L in range(1,len(self.layers)):
self.layers[L].weights = self.layers[L].weights - (lr*deltas[L])
yhat = self.compute_output(X)
err = self.mse(yhat, y)
print(err)
layer0 = layer(X.shape[1], input=True)
layer1 = layer(10, in_dim=layer0.out_dim)
layer2 = layer(1, in_dim=layer1.out_dim)
net = network()
net.add_layer(layer0)
net.add_layer(layer1)
net.add_layer(layer2)
net.update(X,y, epochs=30, batch_size=30, lr=.01)
yhat = net.compute_output(X)
plt.plot(yhat)
plt.plot(y)
plt.show()
# %%
</code></pre>
|
<python><tensorflow><gradient-descent><mini-batch><stochastic-gradient>
|
2024-05-02 06:28:22
| 1
| 380
|
debo
|
78,417,247
| 4,286,568
|
How to reverse a NumPy array using stride_tricks.as_strided
|
<p>Is it possible to reverse an array using <code>as_strided</code> in NumPy? I tried the following and got garbage results</p>
<p>Note : I am aware of indexing tricks like ::-1, but what to know if this can be achieved thru ax_strided.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numpy.lib.stride_tricks import as_strided
elems = 16
inp_arr = np.arange(elems).astype(np.int8)
print(inp_arr)
print(inp_arr.shape)
print(inp_arr.strides)
expanded_input = as_strided(inp_arr[15],
shape = inp_arr.shape,
strides = (-1,))
print(expanded_input)
print(expanded_input.shape)
print(expanded_input.strides)
</code></pre>
<p><strong>Output</strong></p>
<p>[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]</p>
<p>(16,)</p>
<p>(1,)</p>
<p><strong>[ 15 -113 0 21 -4 -119 -53 3 79 0 0 0 0 0
0 2]</strong></p>
<p>(16,)</p>
<p>(-1,)</p>
|
<python><arrays><numpy>
|
2024-05-02 06:27:15
| 1
| 1,125
|
balu
|
78,417,241
| 16,390,058
|
Controlling raspberry pi bluetooth with a python script
|
<p>I have a Raspberry Pi 4 Model B running Raspberry Pi OS Lite 64-bit with</p>
<p>Operating System: Debian GNU/Linux 12 (Bookworm)<br />
Kernel: Linux 6.6.20+rpt-rpi-v8.</p>
<p>I have to control the Bluetooth of the Raspberry with a Python script.
The script has to be able to enable/disable Bluetooth and rename the Raspberry Pi.<br />
I need to change the Bluetooth name automatically on the fly, because the Bluetooth name has to correspond to connected devices that can be hot-swapped</p>
<p>Currently, I use <code>os.system(f"sudo hostnamectl set-hostname '{name}'")</code> to rename the device and <code>os.system(f"sudo systemctl restart bluetooth")</code> to restart bluetooth.</p>
<p>This only works some of the time, and often times I manually have to enter more commands in the console:</p>
<pre><code>pi@One:~ $ bluetoothctl
[bluetooth]# discoverable on
[bluetooth]# exit
</code></pre>
<p>Is there a more elegant solution to do this, that may also allow for more functionality?</p>
|
<python><raspberry-pi><bluetooth><raspberry-pi4>
|
2024-05-02 06:25:30
| 1
| 317
|
Plat00n
|
78,417,221
| 16,869,946
|
Implementing n-player Elo rating in pandas dataframe
|
<p>Sorry if this is a rather complicated question.
I have a pandas dataframe that records the result of races between different players:</p>
<p><code>Race_ID</code> records different races</p>
<p><code>Racer_ID</code> records different players</p>
<p><code>N</code> denotes the number of players in that game</p>
<p><code>Place</code> denotes the outcome of that game, with <code>1</code> being the winner, <code>2</code> being the first runner up, etc.</p>
<p>I want to add a new column called <code>Elo_rating</code> to represent the current elo rating of that player using the following algorithm for n-player elo:</p>
<ol>
<li>If this is the first game for that player, then give them an elo rating of 400:</li>
</ol>
<pre><code>Date Race_ID N Racer_ID Place Elo_rating
1/12/2021 10055116 4 1 3 400
1/12/2021 10055116 4 2 2 400
1/12/2021 10055116 4 3 1 400
1/12/2021 10055116 4 4 4 400
3/5/2022 10055117 3 2 1
3/5/2022 10055117 3 3 2
3/5/2022 10055117 3 4 3
2/12/2022 10055118 5 1 3
2/12/2022 10055118 5 3 5
2/12/2022 10055118 5 4 2
2/12/2022 10055118 5 5 4 400
2/12/2022 10055118 5 6 1 400
1/1/2023 10055119 4 1 1
1/1/2023 10055119 4 4 3
1/1/2023 10055119 4 5 4
1/1/2023 10055119 4 6 2
</code></pre>
<ol start="2">
<li>Compute the expected rating of player x by the following formula:</li>
</ol>
<p><a href="https://i.sstatic.net/bZA3DzpU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZA3DzpU.png" alt="enter image description here" /></a></p>
<p>where D=400, and if this is the first game for that player, then set E_i = 1/N</p>
<ol start="3">
<li>After the race result is out, the Score function for player i is given by</li>
</ol>
<p>S_i = (N - place) / (N(N-1)/2)</p>
<p>So for example, the score for player 1, 2, 3, 4 after the first race (Race_ID = 10055116) are respectively</p>
<p>S_1 = (4-3) / (4(3)/2) = 1/6,</p>
<p>S_2 = (4-2) / (4(3)/2) = 2/6,</p>
<p>S_3 = (4-1) / (4(3)/2) = 3/6,</p>
<p>S_4 = (4-4) / (4(3)/2) = 0/6.</p>
<ol start="4">
<li>Finally the elo rating update for the next race is given by</li>
</ol>
<p>E_i <- R_i + 30(S_i - E_i)</p>
<p>So for example the elo rating for the second race (Race_ID = 10055117) is given by</p>
<pre><code>Date Race_ID N Racer_ID Place Elo_rating
1/12/2021 10055116 4 1 3 400
1/12/2021 10055116 4 2 2 400
1/12/2021 10055116 4 3 1 400
1/12/2021 10055116 4 4 4 400
3/5/2022 10055117 3 2 1 402.5
3/5/2022 10055117 3 3 2 407.5
3/5/2022 10055117 3 4 3 392.5
2/12/2022 10055118 5 1 3 397.5
2/12/2022 10055118 5 3 5 407.212315853
2/12/2022 10055118 5 4 2 382.859605171
2/12/2022 10055118 5 5 4 400
2/12/2022 10055118 5 6 1 400
1/1/2023 10055119 4 1 1
1/1/2023 10055119 4 4 3
1/1/2023 10055119 4 5 4
1/1/2023 10055119 4 6 2
</code></pre>
<p>And the desired output is given by</p>
<pre><code>Date Race_ID N Racer_ID Place Elo_rating
1/12/2021 10055116 4 1 3 400
1/12/2021 10055116 4 2 2 400
1/12/2021 10055116 4 3 1 400
1/12/2021 10055116 4 4 4 400
3/5/2022 10055117 3 2 1 402.5
3/5/2022 10055117 3 3 2 407.5
3/5/2022 10055117 3 4 3 392.5
2/12/2022 10055118 5 1 3 397.5
2/12/2022 10055118 5 3 5 407.212315853
2/12/2022 10055118 5 4 2 382.859605171
2/12/2022 10055118 5 5 4 400
2/12/2022 10055118 5 6 1 400
1/1/2023 10055119 4 1 1 383.501122
1/1/2023 10055119 4 4 3 372.913004
1/1/2023 10055119 4 5 4 382.8213312
1/1/2023 10055119 4 6 2 391.8213312
</code></pre>
<p>I have no idea how I can approach this problem, but I guess we should start by <code>groupby</code> the Racer_ID and then applying <code>.map</code> but I have no idea how to implement it. Thank you so much.</p>
|
<python><python-3.x><pandas><dataframe><group-by>
|
2024-05-02 06:21:33
| 2
| 592
|
Ishigami
|
78,417,215
| 7,640,923
|
Extracting specific patterns of substrings from a material description column
|
<p>I have a column named MAT_DESC in a table that contains material descriptions in a free-text format. Here are some sample values from the MAT_DESC column:</p>
<pre><code>
QWERTYUI PN-DR, Coarse, TR, 1-1/2 in, 50/Carton, 200 ea/Case, Dispenser Pack
2841 PC GREY AS/AF (20/CASE)
CI-1A, up to 35 kV, Compact/Solid, Stranded, 10/Case
MT53H7A4410WS5 WS WEREDSS PMR45678 ERTYUI HEERTYUIND 10/case
TYPE.2 86421-K40-F000, 1 Set/Pack, 100 Packs/Case
Clear, 1 in x 36 yd, 4.8 mil, 24 rolls per case
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Material Desc</th>
<th>String to be Extracted</th>
</tr>
</thead>
<tbody>
<tr>
<td>QWERTYUI PN-DR, Coarse, TR, 1-1/2 in, 50/Carton, 200 ea/Case, Dispenser Pack</td>
<td>50/Carton, 200 ea/Case</td>
</tr>
<tr>
<td>2841 PC GREY AS/AF (20/CASE)</td>
<td>20/CASE</td>
</tr>
<tr>
<td>TYPE.2 86421-K40-F000, 1 Set/Pack, 100 Packs/Case</td>
<td>1 Set/Pack, 100 Packs/Case</td>
</tr>
<tr>
<td>RTYU 31655, 240+, 6 in, 50 Discs/Roll, 6 Rolls/Case</td>
<td>50 Discs/Roll, 6 Rolls/Case</td>
</tr>
<tr>
<td>Clear, 1 in x 36 yd, 4.8 mil, 24 rolls per case</td>
<td>24 rolls per case</td>
</tr>
<tr>
<td>3M™ Victory Series™ Bracket MBT™ 017-873, .022, UL3, 0T/8A, Hk, 5/Pack</td>
<td>5/Pack</td>
</tr>
<tr>
<td>3M™ BX™ Dual Reader Protective Eyewear 11458-00000-20, Clear Anti-Fog Lens, Silver/Black Frame, +2.0 Top/Bottom Diopter, 20 ea/Case</td>
<td>20 ea/Case</td>
</tr>
<tr>
<td>4220VDS-QCSHC/900-000/A CABINET EMPTY</td>
<td>No units</td>
</tr>
<tr>
<td>3M™ Bumpon™ Protective Product SJ5476 Fluorescent Yellow, 3.000/Case</td>
<td>3.000/Case</td>
</tr>
<tr>
<td>3M™ Bumpon™ Protective Products SJ61A2 Black, 10,000/Case</td>
<td>10,000/Case</td>
</tr>
</tbody>
</table></div>
<p>I'm trying to extract specific patterns of substrings from the MAT_DESC column, such as the quantity and unit information (e.g., "50 Discs/Roll", "200 ea/Case", "10/Case",50/Carton, 200 ea/Case etc.).
I'm currently using the following SQL query to attempt this:</p>
<pre><code>SELECT MAT_DESC,
CASE
WHEN PATINDEX('%[A-Za-z]/[A-Za-z]%', MAT_DESC) > 0
THEN CAST(PATINDEX('%[A-Za-z]/[A-Za-z]%', MAT_DESC) AS VARCHAR)
ELSE 'No X'
END AS Unit_Index
FROM TEMP_TABLE;
</code></pre>
<p>This query finds the pattern index of substrings like "Discs/Roll" or "ea/Case" using the PATINDEX function. Then, I planned to find the nearest comma indices before and after the pattern index and extract the substring using those indices.
However, this approach works for some scenarios but fails in others, especially when the material description contains additional information or is structured differently.</p>
<p><strong>For python I use below regular expression to detect the pattern and retrieve substrings</strong></p>
<pre><code>pattern = r"(\d+)\s*(\w+)/(\w+)"
results = []
for desc in material_descriptions:
matches = re.findall(pattern, desc)
unit_strings = []
if matches:
for match in matches:
quantity, unit1, unit2 = match
unit_string = f"{quantity} {unit1}/{unit2}"
unit_strings.append(unit_string)
if unit_strings:
unit_info = ", ".join(unit_strings)
results.append((desc, unit_info))
for material_desc, unit_info in results:
print(f"Material Description: {material_desc}")
print(f"Unit Information: {unit_info}")
print()
</code></pre>
<p><strong>Python script fails in the below listed scenarios</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Material Desc</th>
<th>String to be Extracted</th>
</tr>
</thead>
<tbody>
<tr>
<td>3M™ Victory Series™ Bracket MBT™ 017-873, .022, UL3, 0T/8A, Hk, 5/Pack</td>
<td>5/Pack</td>
</tr>
<tr>
<td>3M™ BX™ Dual Reader Protective Eyewear 11458-00000-20, Clear Anti-Fog Lens, Silver/Black Frame, +2.0 Top/Bottom Diopter, 20 ea/Case</td>
<td>20 ea/Case</td>
</tr>
<tr>
<td>4220VDS-QCSHC/900-000/A CABINET EMPTY</td>
<td>No units</td>
</tr>
<tr>
<td>3M™ Bumpon™ Protective Product SJ5476 Fluorescent Yellow, 3.000/Case</td>
<td>3.000/Case</td>
</tr>
<tr>
<td>3M™ Bumpon™ Protective Products SJ61A2 Black, 10,000/Case</td>
<td>10,000/Case</td>
</tr>
</tbody>
</table></div>
<p>Is there a more robust way to extract specific patterns of substrings (like quantity and unit information) from a free-text material description column? I'm open to solutions in SQL and Python</p>
|
<python><sql><sql-server>
|
2024-05-02 06:20:35
| 1
| 315
|
rohi
|
78,417,187
| 3,114,229
|
vscode pydevd warning: Computing repr of ... was slow
|
<p>I'm using VSCode for Python and I'm experiencing following warning which I'm unable to suppress:</p>
<pre><code>pydevd warning: Computing repr of slow_repr_instance (SlowRepr) was slow (took 2.00s)
Customize report timeout by setting the `PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT` environment variable to a higher timeout (default is: 0.5s)
</code></pre>
<p>I have found several suggestions of how to fix it but nothing seems able to suppress this warning:</p>
<p><a href="https://stackoverflow.com/questions/71695716/pydevd-warnings-in-visual-studio-code-debug-console">pydevd warnings in Visual Studio Code Debug Console</a></p>
<p><a href="https://stackoverflow.com/questions/73485582/fix-the-following-pydevd-warning-computing-repr-of-something-was-slow-took">Fix the following: "pydevd warning: Computing repr of <something> was slow (took 0.35s)" in VS CODE</a></p>
<p>Below is a my launch- and settings.json files as well as an example code for replicating the warning.</p>
<p><strong>Launch.json:</strong></p>
<pre><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [{
"name": "Python: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": false,
"env": {
"PYDEVD_DISABLE_FILE_VALIDATION": "1",
"PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT": "2.0"
}
}
]
}
</code></pre>
<p><strong>Settings.json:</strong></p>
<pre><code>{
"python.envFile": "${workspaceFolder}/.env",
"terminal.integrated.env.windows": {
"PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT": "2.0"
}
}
</code></pre>
<p>Please see <strong>sample code</strong> to replicate the problem:</p>
<pre><code>import time
class SlowRepr:
def __init__(self, value):
self.value = value
def __repr__(self):
# Simulate a slow computation by sleeping for a while
time.sleep(2) # Adjust the sleep duration to trigger the warning
return f"SlowRepr({self.value})"
# Create an instance of SlowRepr
slow_repr_instance = SlowRepr("This is a slow object")
# This line will trigger the warning if the repr computation takes longer than the timeout
print(slow_repr_instance)
</code></pre>
|
<python><json><visual-studio-code><warnings><pydev>
|
2024-05-02 06:10:35
| 0
| 419
|
Martin
|
78,416,773
| 1,613,983
|
How do I use python logging with uvicorn/FastAPI?
|
<p>Here is a small application that reproduces my problem:</p>
<pre><code>import fastapi
import logging
import loguru
instance = fastapi.FastAPI()
@instance.on_event("startup")
async def startup_event():
logger = logging.getLogger("mylogger")
logger.info("I expect this to log")
loguru.logger.info("but only this logs")
</code></pre>
<p>When I launch this application with <code>uvicorn app.main:instance --log-level=debug</code> I see this in my terminal:</p>
<pre><code>INFO: Waiting for application startup.
2024-05-02 13:14:45.118 | INFO | app.main:startup_event:28 - but only this logs
INFO: Application startup complete.
</code></pre>
<p>Why does only the <code>loguru</code> logline work, and how can I make standard python logging work as expected?</p>
|
<python><logging><fastapi><uvicorn>
|
2024-05-02 03:16:10
| 2
| 23,470
|
quant
|
78,416,745
| 10,935,201
|
How to properly setup tensorflow with GPU acceleration on NixOS
|
<p>In advance, I have just started using NixOS, so please forgive me if I make seemingly basic mistakes.</p>
<p>I have looked up what feels like the entirety of the Internet in the search of the solution to my problem.</p>
<p>Here is my shell.nix so far:</p>
<pre><code>with import <nixos> {};
mkShell {
name = "tensorflow-cuda-shell";
buildInputs = with python311Packages; [
cudatoolkit
cudaPackages.cudnn
tensorflowWithCuda
];
shellHook = ''
export LD_LIBRARY_PATH=${pkgs.stdenv.cc.cc.lib}/lib:${pkgs.cudaPackages.cudatoolkit}/lib:${pkgs.cudaPackages.cudnn}/lib:${pkgs.cudaPackages.cudatoolkit.
lib}/lib:$LD_LIBRARY_PATH
alias pip="PIP_PREFIX='$(pwd)/_build/pip_packages' TMPDIR='$HOME' \pip"
export PYTHONPATH="$(pwd)/_build/pip_packages/lib/python3.9/site-packages:$PYTHONPATH"
export PATH="$(pwd)/_build/pip_packages/bin:$PATH"
unset SOURCE_DATE_EPOCH
'';
}
</code></pre>
<p>This is an amalgamation of the wiki page on tensorflow (which seems outdated), responses on forums, and scraping GitHub for custom configs. With this shell script it currently takes ages to compile everything from source. It has been at least an hour, but it still compiles binaries. Is there any way to speed this up?</p>
<p>Any idea of how to run tensorflow with GPU acceleration is deeply appreciated. Seems like this issue was not brought up for a very long time and all of the resources are outdated.</p>
|
<python><tensorflow><nixos>
|
2024-05-02 03:02:54
| 1
| 389
|
The Developer
|
78,416,469
| 353,407
|
How to fix "Toppra failed to find the maximum path acceleration"
|
<p>I'm using Drake's <a href="https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_toppra.html" rel="nofollow noreferrer">Toppra</a> implementation to post-process a path produced by <a href="https://drake.mit.edu/doxygen_cxx/classdrake_1_1planning_1_1trajectory__optimization_1_1_gcs_trajectory_optimization.html" rel="nofollow noreferrer">GcsTrajectoryOptimization</a>. Things work fine until I put in acceleration limits.</p>
<pre><code>grid_points = Toppra.CalcGridPoints(gcs_traj, CalcGridPointsOptions())
toppra = Toppra(gcs_traj, plant, grid_points)
toppra.AddJointVelocityLimit(velocity_lb, velocity_ub)
# uncommenting the line below sometimes leads to a crash!
# toppra.AddJointAccelerationLimit(accel_lb, accel_ub)
toppra_times = toppra.SolvePathParameterization()
</code></pre>
<p>The error produced is <code>ERROR:drake:Toppra failed to find the maximum path acceleration at knot 9/172.</code></p>
<p>I'm not sure why this would cause a failure. Empirically I've found that as I tighten acceleration limits, more trajectories fail to solve. This goes against my physical intuition, since one can always get a feasible solution by going sufficiently slowly.</p>
<p>What can I do to understand what is going wrong and how to fix it?</p>
<h2>Updates</h2>
<p>I have updated my code to try using <a href="https://github.com/hungpham2511/toppra" rel="nofollow noreferrer">Hung Pham's implementation</a> and I've gotten some similar errors.</p>
<pre><code>WARNING:toppra.algorithm.reachabilitybased.reachability_algorithm:A numerical error occurs: The controllable set at step [165 / 187] can't be computed.
WARNING:toppra.algorithm.reachabilitybased.reachability_algorithm:An error occurred when computing controllable velocities. The path is not controllable, or is badly conditioned.
WARNING:toppra.algorithm.algorithm:Fail to parametrize path. Return code: <ParameterizationReturnCode.FailUncontrollable: 'Error: Instance is not controllable'>
</code></pre>
<p>I've made sure that my path is differentiable up to order 2.
Here is a plot of the path first derivatives, each joint in a different color.
<a href="https://i.sstatic.net/Wxi7jIfw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wxi7jIfw.png" alt="path first derivatives" /></a></p>
<p>Here is a plot of the path second derivatives.
<a href="https://i.sstatic.net/JCFbee2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JCFbee2C.png" alt="path second derivatives" /></a></p>
<p>The problem clearly looks feasible and any acceleration and velocity bounds can be met by a trivial time dilation.</p>
|
<python><drake>
|
2024-05-02 00:23:19
| 1
| 5,543
|
Mark
|
78,416,456
| 283,735
|
Multiple distributions from single Python repository
|
<p>I'm looking for a way to create multiple distribution package configurations from a single repository. For example, given my source layout:</p>
<pre><code>project/
|__src/
| |__a/
| | |__a.py
| | b/
| | |__b.py
| | c/
| | |__c.py
|__pyproject.toml
</code></pre>
<p>I want two create two different flavors of distribution packages:</p>
<pre><code>project_a+b+c-v0.0.1 # for internal use; everything included
project_c_only-v0.0.1 # for external use; package "c" only
</code></pre>
<p>Is there an existing pattern for this kind of thing using pyproject.toml?</p>
|
<python><setuptools><monorepo><pyproject.toml>
|
2024-05-02 00:15:52
| 0
| 31,287
|
jjkparker
|
78,416,331
| 1,172,685
|
Why are python timezone.utc and ZoneInfo("UTC") not equivalent tzinfo objects? How to check they are the same TZ?
|
<p>Why are these not equal?</p>
<pre class="lang-py prettyprint-override"><code>>>> from datetime import datetime
>>> from datetime import timedelta
>>> from datetime import timezone
>>> from zoneinfo import ZoneInfo
>>> from zoneinfo import ZoneInfoNotFoundError
>>> dt_tz = datetime.now(tz=timezone.utc)
>>> dt_zi = dt_tz.astimezone(tz=ZoneInfo("UTC"))
>>> dt_tz
datetime.datetime(2024, 5, 1, 23, 15, 24, 3560, tzinfo=datetime.timezone.utc)
>>> dt_zi
datetime.datetime(2024, 5, 1, 23, 15, 24, 3560, tzinfo=zoneinfo.ZoneInfo(key='UTC'))
>>> dt_tz == dt_zi
True
>>> dt_tz.tzinfo == dt_zi.tzinfo
False
</code></pre>
<p>How is it possible to check that two <code>datetime</code> tz-aware objects have an equivalent TZ, if they are created with either <code>timezone</code> or <code>ZoneInfo</code>? At first sight, these things intend to do the same stuff but they are not entirely inter-operable. For example, there is no <code>ZoneInfo.astimezone()</code> conversion.</p>
<p>In anticipation of answers that recommend using some third-party library, those answers are OK but the strong preference is to use python builtin packages (python 3 >= 3.11).</p>
<h2>One Approach</h2>
<pre><code>>>> dt_tz.tzname()
'UTC'
>>> dt_zi.tzname()
'UTC'
>>> dt_tz.tzname() == dt_zi.tzname()
True
>>> dt_tz.tzinfo
datetime.timezone.utc
>>> type(dt_tz.tzinfo)
<class 'datetime.timezone'>
>>> ZoneInfo(dt_tz.tzname())
zoneinfo.ZoneInfo(key='UTC')
</code></pre>
<p>BUT this doesn't generalize beyond UTC :(</p>
<pre><code>>>> dt_ny = datetime.now().astimezone(ZoneInfo("America/New_York"))
>>> dt_ny
datetime.datetime(2024, 5, 1, 19, 38, 53, 69087, tzinfo=zoneinfo.ZoneInfo(key='America/New_York'))
>>> ts = dt_ny.isoformat()
>>> ts
'2024-05-01T19:38:53.069087-04:00'
>>> ts_dt = datetime.fromisoformat(ts)
>>> ts_dt
datetime.datetime(2024, 5, 1, 19, 38, 53, 69087, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=72000)))
>>> ts_dt.tzname()
'UTC-04:00'
>>> ZoneInfo(ts_dt.tzname())
zoneinfo._common.ZoneInfoNotFoundError: 'No time zone found with key UTC-04:00'
</code></pre>
|
<python><timezone><zoneinfo>
|
2024-05-01 23:11:38
| 1
| 1,714
|
Darren Weber
|
78,416,282
| 3,476,463
|
AIC function for lightgbm, xgboost and randomforest regression
|
<p>I'm trying to evaluate regression models created using <code>LightGBM</code>, <code>XGBoost</code>, and randomforest, with aic. My approach is to order features in terms of feature importance, fit, predict, and calculate aic, then drop the least important feature, and repeat. The goal is to find the least number of features required to achieve the lowest aic, and avoid overfitting. I have a general function below for calculating aic. I'm wondering if given my end goal it makes sense to use this general function to calculate aic for these three different models, or do I need a specific function for each model? If a different aic function is required for each model can anyone please suggest the functions?</p>
<p>Code:</p>
<pre><code>from sklearn.metrics import mean_squared_error
import numpy as np
import pandas as pd
def aic(y_true, y_pred,X_train):
# calculate mean squared error
mse = mean_squared_error(y_true,y_pred)
# number of features
n_features = X_train.shape[1]
n=len(y_true)
aic= 2 * n_features - 2 * n * np.log(mse)
return aic
</code></pre>
|
<python><regression><random-forest><xgboost><lightgbm>
|
2024-05-01 22:51:44
| 0
| 4,615
|
user3476463
|
78,416,262
| 15,229,911
|
SQLite database locking even with threading.Lock
|
<p>I use sqlite3 database on my flask website, and on each database access I use a threading.Lock in order to avoid memory races:</p>
<pre class="lang-py prettyprint-override"><code># one sync, one connection, one cursor for the entirety of the website
sync = threading.Lock()
connection = sqlite3.connect('db', check_same_thread = false)
cursor = connection.cursor()
#...
with sync:
cursor.execute(...) # just .execute(), i do not use .commit()
</code></pre>
<p>It works fine, when I use it on a localhost as a bare flask server. Even when I spam it with lots of requests, where each request has to access a database (and not once), it doesn't break. However, when I submit this code to a website hosting service, which uses Phusion Passenger, it generally works fine, but when I do too much requests, it results in <code>sqlite3.OperationalError: database is locked</code> over and over again.</p>
<p>What do I do wrong? I used a mutex, why does the database get locked? Does it have something to do with the fact I use just one cursor?</p>
|
<python><sqlite><flask><passenger>
|
2024-05-01 22:42:07
| 1
| 324
|
postcoital-solitaire
|
78,416,214
| 2,475,195
|
How many trees do I actually have in my LightGBM model?
|
<p>I have code that looks like this</p>
<pre><code>clf = lgb.LGBMClassifier(max_depth=3, verbosity=-1, n_estimators=3)
clf.fit(train_data[features], train_data['y'], sample_weight=train_data['weight'])
print (f"I have {clf.n_estimators_} estimators")
fig, ax = plt.subplots(nrows=4, figsize=(50,36), sharex=True)
lgb.plot_tree(clf, tree_index=7, dpi=600, ax=ax[0]) # why does it have 7th tree?
lgb.plot_tree(clf, tree_index=8, dpi=600, ax=ax[1]) # why does it have 8th tree?
#lgb.plot_tree(clf, tree_index=9, dpi=600, ax=ax[2]) # crashes
#lgb.plot_tree(clf, tree_index=10, dpi=600, ax=ax[3]) # crashes
</code></pre>
<p>I am surprised that despite <code>n_estimators=3</code>, I seem to have 9 trees? How do I actually set the number of trees, and related to that, what does <code>n_estimators</code> do? I've read the docs, and I thought it would be the number of trees, but it seems to be something else.</p>
<p>Separately, how do I interpret the separate trees, with their ordering, 0, 1, 2, etc. I know random forest, and how there every tree is equally important. In boosting, the first tree is most important, the next one significantly less, the next significantly less. So in my head, when I look at the tree diagrams, how can I "simulate" the LightGBM inference process?</p>
|
<python><machine-learning><lightgbm><boosting>
|
2024-05-01 22:24:21
| 1
| 4,355
|
Baron Yugovich
|
78,416,197
| 4,133,464
|
How to send data between Docker containers on the same network
|
<p>I have two docker containers on the same network. I want to send some data with POST and GET methods over HTTP. I am doing:</p>
<pre><code>app = Flask(__name__)
is_processing = False # Flag to indicate whether processing is ongoing
receiver_url = "http://172.18.0.2:8889/receive_data" # Define the receiver URL
</code></pre>
<p>and</p>
<pre><code>app.run(port=8889)
</code></pre>
<p>on the sender container I am doing:</p>
<pre><code>def send_data_to_receiver(data):
receiver_url = "http://172.18.0.2:8889/receive_data"
data_serializable = [x.tolist() for x in data]
response = requests.post(receiver_url, json=data_serializable)
if response.status_code == 200:
print("Data sent successfully to receiver notebook")
else:
print("Failed to send data to receiver notebook")
</code></pre>
<p>The above works successfully between jupyter notebooks, however when I am trying to add them in docker containers I get the following error:</p>
<pre><code>File "/usr/local/lib/python3.11/dist-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.18.0.2', port=8889): Max retries exceeded with url: /receive_data (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc750529cd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
</code></pre>
<p>What is wrong with my setup?</p>
|
<python><docker><docker-network>
|
2024-05-01 22:20:04
| 0
| 819
|
deadpixels
|
78,416,147
| 251,589
|
Changing the exception type in a context manager
|
<p>I currently have code that looks like this scattered throughout my codebase:</p>
<pre class="lang-py prettyprint-override"><code>try:
something()
except Exception as exc:
raise SomethingError from exc
</code></pre>
<p>I would like to write a context manager that would remove some of this boiler-plate:</p>
<pre class="lang-py prettyprint-override"><code>with ExceptionWrapper(SomethingError):
something()
</code></pre>
<p>It looks like it is possible to suppress exceptions inside a context manager - see: <a href="https://docs.python.org/3/library/contextlib.html#contextlib.suppress" rel="nofollow noreferrer"><code>contextlib.suprress</code></a>. It doesn't look like it is possible to change what exception is being raised.</p>
<p>However, I haven't been able to find clear documentation on what the return value of the <code>__exit__</code> function of a context manager is.</p>
|
<python><exception><contextmanager>
|
2024-05-01 22:00:32
| 1
| 27,385
|
sixtyfootersdude
|
78,416,001
| 2,267,058
|
How to configure openapi-generator-cli to include None values for nullable fields in Pydantic models
|
<p>I'm using openapi-generator-cli to generate Python clients, and it's working great except for a specific scenario. I need to send requests to an external service where all nullable fields must be included even if their values are None. When I generate the client with openapi-generator-cli, it creates models with a to_dict method like this:</p>
<pre class="lang-py prettyprint-override"><code>def to_dict(self) -> Dict[str, Any]:
"""Return the dictionary representation of the model using alias.
This has the following differences from calling pydantic's
`self.model_dump(by_alias=True)`:
* `None` is only added to the output dict for nullable fields that
were set at model initialization. Other fields with value `None`
are ignored.
"""
excluded_fields: Set[str] = set([
])
_dict = self.model_dump(
by_alias=True,
exclude=excluded_fields,
exclude_none=True,
)
</code></pre>
<p>Is there a way to configure openapi-generator-cli to allow including None values for nullable fields in the to_dict method? This will be accomplished if <code>exclude_none=False</code> but I don't find any directive for openapi-generator for this.</p>
<p>Let's say I have a Python dictionary like this:</p>
<pre class="lang-json prettyprint-override"><code>request_data = {
"account": None,
"lastName": "Doe",
"firstName": "Jon",
"middleInitial": None,
"companyName": None,
"isOrganization": False,
"address": {
"type": "P",
"address1": "5171 California Avenue",
"address2": "Suite 200",
"address3": None,
"address4": None,
"locality": "Irvine",
"region": "CA",
"postalCode": "92617",
"countryCode": "US"
}
}
</code></pre>
<p>In this example, even though some fields like "account", "middleInitial", and "companyName" are None, I need them to be included in the request sent to the external service.</p>
<p>Additional Note:</p>
<p>Perhaps the issue lies not in the configuration of openapi-generator but in the Swagger file used to create these models. However, I couldn't find any configuration in the Swagger file that results in the desired models. Here's a snippet of my Swagger models:</p>
<pre class="lang-yaml prettyprint-override"><code>User:
type: object
required:
- address
properties:
account:
type: string
default: null
middleInitial:
type: string
default: null
lastName:
type: string
default: null
firstName:
type: string
default: null
companyName:
type: string
default: null
isOrganization:
type: boolean
default: null
address:
$ref: "#/components/schemas/Adress"
</code></pre>
|
<python><swagger><pydantic><openapi-generator><openapi-generator-cli>
|
2024-05-01 21:14:56
| 1
| 323
|
Chubutin
|
78,415,959
| 2,986,147
|
appropriate way to create group_rand_coef_data in GPBoost's GPModel
|
<p>Below is the code I copied from the GPBoost site. <a href="https://github.com/fabsig/GPBoost/blob/master/examples/python-guide/generalized_linear_Gaussian_process_mixed_effects_models.py" rel="nofollow noreferrer">https://github.com/fabsig/GPBoost/blob/master/examples/python-guide/generalized_linear_Gaussian_process_mixed_effects_models.py</a></p>
<p>I think the input variable <code>x</code> for <code>group_rand_coef_data</code> is unknown when running this mixed effects model in this line <code>gpb.GPModel(group_data=group_data, group_rand_coef_data=x,</code> in the real life, unlike this tutorial code (below) where <code>x</code> was known/man-made.</p>
<p>In the real life where <code>x</code> is unknown, what can I use for <code>group_rand_coef_data</code>?</p>
<pre><code>import gpboost as gpb
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('ggplot')
def simulate_response_variable(lp, rand_eff, likelihood):
"""Function that simulates response variable for various likelihoods"""
n = len(rand_eff)
if likelihood == "gaussian":
xi = 0.1**0.5 * np.random.normal(size=n) # error term, variance = 0.1
y = lp + rand_eff + xi
return y
likelihood = "gaussian"
"""
Grouped random effects
"""
# --------------------Simulate data----------------
# Single-level grouped random effects
n = 1000 # number of samples
m = 200 # number of categories / levels for grouping variable
group = np.arange(n) # grouping variable
for i in range(m):
group[int(i * n / m):int((i + 1) * n / m)] = i
np.random.seed(1)
b = 0.25**0.5 * np.random.normal(size=m) # simulate random effects, variance = 0.25
# Simulate linear regression fixed effects
X = np.column_stack((np.ones(n), np.random.uniform(size=n) - 0.5)) # design matrix / covariate data for fixed effect
beta = np.array([0, 2]) # regression coefficents
lp = X.dot(beta)
# Crossed grouped random effects and random slopes
group_crossed = group[np.random.permutation(n)-1] # grouping variable for crossed random effects
b_crossed = 0.25**0.5 * np.random.normal(size=m) # simulate crossed random effects
b_random_slope = 0.25**0.5 * np.random.normal(size=m)
x = np.random.uniform(size=n) # covariate data for random slope
rand_eff = b[group] + b_crossed[group_crossed] + x * b_random_slope[group]
rand_eff = rand_eff - np.mean(rand_eff)
y_crossed_random_slope = simulate_response_variable(lp=lp, rand_eff=rand_eff, likelihood=likelihood)
# --------------------Two crossed random effects and random slopes----------------
# Define and train model
group_data = np.column_stack((group, group_crossed))
gp_model = gpb.GPModel(group_data=group_data, group_rand_coef_data=x,
ind_effect_group_rand_coef=[1], likelihood=likelihood)
# 'ind_effect_group_rand_coef=[1]' indicates that the random slope is for the first random effect
gp_model.fit(y=y_crossed_random_slope, X=X, params={"std_dev": True})
gp_model.summary()
# Prediction
pred = gp_model.predict(group_data_pred=group_data, group_rand_coef_data_pred=x, X_pred=X)
# Obtain predicted (="estimated") random effects for the training data
all_training_data_random_effects = gp_model.predict_training_data_random_effects()
first_occurences_1 = [np.where(group==i)[0][0] for i in np.unique(group)]
pred_random_effects = all_training_data_random_effects.iloc[first_occurences_1,0]
pred_random_slopes = all_training_data_random_effects.iloc[first_occurences_1,2]
# Compare true and predicted random effects
plt.scatter(b, pred_random_effects, label="Random effects")
plt.scatter(b_random_slope, pred_random_slopes, label="Random slopes")
plt.legend()
plt.title("Comparison of true and predicted random effects")
plt.show(block=False)
</code></pre>
<p>Results</p>
<pre><code>=====================================================
Model summary:
Log-lik AIC BIC
-823.6 1659.2 1688.64
Nb. observations: 1000
Nb. groups: 200 (Group_1), 200 (Group_2)
-----------------------------------------------------
Covariance parameters (random effects):
Param. Std. dev.
Error_term 0.1044 0.0067
Group_1 0.1918 0.0263
Group_2 0.2709 0.0301
Group_1_rand_coef_nb_1 0.2288 0.0527
-----------------------------------------------------
Linear regression coefficients (fixed effects):
Param. Std. dev. z value P(>|z|)
Covariate_1 -0.0051 0.051 -0.0995 0.9208
Covariate_2 1.9975 0.048 41.6060 0.0000
=====================================================
</code></pre>
<p>plot
<a href="https://i.sstatic.net/BHa5qYrz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHa5qYrz.png" alt="output plot" /></a></p>
|
<python><mixed-models>
|
2024-05-01 21:00:40
| 1
| 375
|
stok
|
78,415,935
| 1,738,879
|
How to remove parent keys from a nested dictionary if child keys are empty without modifying the original dictionary?
|
<p>I have a nested dictionary for which I need to remove parent keys if some predefined child keys are empty. For example, consider the following dictionary d:</p>
<pre><code>d = {
'outer_key_1': {
'class': 'a_class',
'items': {
'A': { # <- this key should be removed
'item': 'A',
'tags': [],
'count': 3
},
'B': {
'item': 'B',
'tags': ['tag1', 'tag2'],
'count': 5
}
},
'type': 1,
'attic': 1
},
'outer_key_2': {...},
}
</code></pre>
<p>I want to remove the parent keys where the child key <code>tags</code> is empty. In this example, I want to remove the inner key <code>A</code> because the <code>tags</code> key under <code>'items'['A']</code> is empty. I have a solution that modifies the original dictionary in place:</p>
<pre><code>def remove_parent_key(d, check_key):
for key, value in list(d.items()):
if isinstance(value, dict):
if check_key in value:
if value[check_key] in ('', [], {}, None):
del d[key]
else:
remove_parent_key(value, check_key)
</code></pre>
<p>However, I need to keep the original dictionary unchanged and return a new dictionary with the desired modifications.</p>
<p>Any help would be greatly appreciated! Thank you!</p>
|
<python><python-3.x>
|
2024-05-01 20:55:36
| 0
| 1,925
|
PedroA
|
78,415,870
| 23,260,297
|
powershell script missing argument in parameter list
|
<p>I am trying to run a powershell script that executes a python exe file which takes a json file as an argument.</p>
<p>I checked my json file, and the contents are valid.</p>
<p>My powershell script looks like so:</p>
<pre><code>& "C:\RPA\0001-DHR\DHR_ConsolidateMarks\DHR_ConsolidateMarks.exe" test.json
</code></pre>
<p>The error message is:</p>
<pre><code>At C:\Users\srv_hq-rpa-bot1\AppData\Local\Temp\Robin\f0f3xbjmnlo.tmp.ps1:1 char:75 + ... DHR\DHR_ConsolidateMarks\DHR_ConsolidateMarks.exe" {"Items":[{"Name": ... + ~~ Unexpected token ':[' in expression or statement. At C:\Users\srv_hq-rpa-bot1\AppData\Local\Temp\Robin\f0f3xbjmnlo.tmp.ps1:1 char:84 + ... dateMarks\DHR_ConsolidateMarks.exe" {"Items":[{"Name":"JAron","File": ... + ~~~~~~~~ Unexpected token ':"JAron"' in expression or statement. At C:\Users\srv_hq-rpa-bot1\AppData\Local\Temp\Robin\f0f3xbjmnlo.tmp.ps1:1 char:92 + ... ateMarks\DHR_ConsolidateMarks.exe" {"Items":[{"Name":"JAron","File":" ... + ~ Missing argument in parameter list. + CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException + FullyQualifiedErrorId : UnexpectedToken
</code></pre>
<p>Not sure what exactly is going wrong here since my JSON is valid and I do not see anything wrong with my powershell script. Any suggestions would help</p>
<p>Note: the .exe file was generated using pyinstaller --onedir so the python interpreter is packaged with the .exe</p>
|
<python><powershell><pyinstaller>
|
2024-05-01 20:38:15
| 0
| 2,185
|
iBeMeltin
|
78,415,860
| 1,564,070
|
Visio VBA set dynamic connector to straight connector style
|
<p>I'm using Python to control Visio via win32com.Client on a Windows 10 Platform. I've had a lot of success with this project but am stuck trying to find an answer. I've been scouring the Microsoft docs (for VBA of course) but cannot find an answer.</p>
<p>I'm adding connections points to shapes and connecting them with dynamic connectors without issue. All connectors that I drop appear as Right-Angle Connectors and I would like to switch some of them to Straight or Curved Connector styles. I haven't found anything in the ShapeSheet that look like the right setting. For what it's worth the code I'm using to drop the connectors is below. Thanks in advance for any assistance provided!</p>
<pre><code># visSectionConnectionPts = 7
shape = self.v_app.draw_page.Drop(self.v_app.app.ConnectorToolDataObject, 5, 4)
shape.Cells("BeginX").GlueTo(s_shp.shape.CellsSRC(7, s_row, 0))
shape.Cells("EndX").GlueTo(d_shp.shape.CellsSRC(7, d_row, 0))
if arrow:
shape.Cells("EndArrow").FormulaForceU = "13"
if len(text) > 0:
shape.Text = text
</code></pre>
|
<python><vba><visio>
|
2024-05-01 20:34:57
| 1
| 401
|
WV_Mapper
|
78,415,856
| 5,728,013
|
Detecting GPU availability in llama-cpp-python
|
<h3>Question</h3>
<p>How can I programmatically check if <code>llama-cpp-python</code> is installed with support for a CUDA-capable GPU?</p>
<h3>Context</h3>
<p>In my program, I am trying to warn the developers when they fail to configure their system in a way that allows the <code>llama-cpp-python</code> LLMs to leverage GPU acceleration. For example, they may have installed the library using <code>pip install llama-cpp-python</code> without setting <a href="https://llama-cpp-python.readthedocs.io/en/stable/#installation-configuration" rel="nofollow noreferrer">appropriate environment variables</a> for CUDA acceleration, or the CUDA Toolkit may be missing from their operating system.</p>
<h3>What I Have Tried</h3>
<p>In earlier versions of the library, I could reliably detect whether a GPU was available, i.e., I got fast responses, high GPU utilization, and detected GPU availability if and only if the library was installed with appropriate environment variables.</p>
<p>Initially, I used to check GPU availability using:</p>
<pre class="lang-py prettyprint-override"><code>from llama_cpp.llama_cpp import GGML_USE_CUBLAS
def is_gpu_available_v1() -> bool:
return GGML_USE_CUBLAS
</code></pre>
<p>Later, the <code>GGML_USE_CUBLAS</code> was removed. For some time, I used the following alternative:</p>
<pre class="lang-py prettyprint-override"><code>from llama_cpp.llama_cpp import _load_shared_library
def is_gpu_available_v2() -> bool:
lib = _load_shared_library('llama')
return hasattr(lib, 'ggml_init_cublas')
</code></pre>
<p>For newer versions of the library, the latter approach consistently returns <code>False</code>, even if the inference for the LLM is being executed on a GPU.</p>
|
<python><gpu><llama-cpp-python><llamacpp>
|
2024-05-01 20:33:56
| 2
| 810
|
Programmer.zip
|
78,415,800
| 6,722,582
|
Run a specific set of pytest tests in a specific sequence only by specifying test names
|
<p>To reproduce a pytest failure, I would like to run one or more tests in a specific sequence via <code>pytest</code> without modifying the original source files (adding markers would not be allowed). The tests are scattered across multiple source files and I do not want to execute the other tests in those files.</p>
<p>For example, I'd like to run test_potato_salad and test_macaroni_salad in that order without specifying the source file name.</p>
|
<python><pytest>
|
2024-05-01 20:19:44
| 1
| 348
|
Brandon Tweed
|
78,415,718
| 513,393
|
In Azure Functions in Python. Setting SessionID when publishing message to Azure Service Bus
|
<p>I drafted an Azure Function in Python, model V2.</p>
<p>I have successfully published a message into a topic.</p>
<p>Now I would like to <strong>set a SessionID for a published message</strong>, but can not figure out how. How do I publish a <strong>message with a SessionId</strong>?</p>
<p>My attempt with JSON in "my_json_string" variable was not recognized by Azure.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import logging
import azure.functions as func
from datetime import datetime
import json
app = func.FunctionApp()
# vs output into queue for python
# https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus-output?tabs=python-v2%2Cisolated-process%2Cnodejs-v4%2Cextensionv5&pivots=programming-language-python
@app.route(route="http_trigger_topic", auth_level=func.AuthLevel.ANONYMOUS)
@app.service_bus_topic_output(arg_name="message",
connection="ServiceBusConnection",
topic_name="alfdevapi6topic")
def http_trigger_topic(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.')
myMessage = "Hi alf this is my message via the queue to you."
logging.info(myMessage)
input_msg = req.params.get('message')
now = datetime.now()
print("now =", now)
dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
my_json_string = f"""
{{
"body": "{myMessage}",
"customProperties": {{
"messagenumber": 0,
"timePublish": "{now}",
}},
"brokerProperties": {{
"SessionId": "1"
}}
}}
"""
message.set(my_json_string)
return func.HttpResponse(
"This function should process queue messages.",
status_code=200
)</code></pre>
</div>
</div>
</p>
|
<python><azure><azure-functions><azureservicebus><azure-servicebus-topics>
|
2024-05-01 19:58:58
| 2
| 6,591
|
Skip
|
78,415,679
| 12,339,047
|
When is wheel required alongside setuptools?
|
<p>In <a href="https://peps.python.org/pep-0518/#build-system-table" rel="nofollow noreferrer"><em>PEP 518 – Specifying Minimum Build System Requirements for Python Projects</em></a> it says:</p>
<blockquote>
<p>For the vast majority of Python projects that rely upon setuptools, the <code>pyproject.toml</code> file will be:</p>
<pre><code>[build-system]
# Minimum requirements for the build system to execute.
requires = ["setuptools", "wheel"] # PEP 508 specifications.
</code></pre>
</blockquote>
<p>However, using this <code>pyproject.toml</code> without a "wheel" dependency:</p>
<pre><code>[build-system]
requires = ["setuptools"]
[project]
name = "meowpkg"
version = "1.0"
description = "a package that meows"
</code></pre>
<p>It is still possible to build wheels, using either <a href="https://pypi.org/project/build/" rel="nofollow noreferrer">build</a> or <a href="https://pypi.org/project/pip/" rel="nofollow noreferrer">pip</a>:</p>
<pre><code>$ python3 -m venv .venv
$ .venv/bin/pip install -q build
$ .venv/bin/pip list
Package Version
--------------- -------
build 1.2.1
packaging 24.0
pip 24.0
pyproject_hooks 1.1.0
$ .venv/bin/python -m build --wheel
...
Successfully built meowpkg-1.0-py3-none-any.whl
$ .venv/bin/python -m pip wheel .
...
Successfully built meowpkg
</code></pre>
<p>Why does PEP 518 say to include <code>wheel</code> directly as a build-system requirement?</p>
|
<python><python-packaging><python-wheel><pyproject.toml><pep517>
|
2024-05-01 19:49:49
| 0
| 1,428
|
platypus
|
78,415,638
| 825,227
|
Cleaner way to parse a PDF in Python
|
<p>Looking to parse PDFs to glean relevant info.</p>
<p>Using <code>pypdf</code> and am able to extract text, but it's a bit of a slog formatting into something usable because it appears the PDFs are formatted and not straight text.</p>
<p>For instance, looking to extract 'Asset', 'Transaction Type' and 'Amount', from the table herein:</p>
<p><a href="https://disclosures-clerk.house.gov/public_disc/ptr-pdfs/2020/20017693.pdf" rel="nofollow noreferrer">https://disclosures-clerk.house.gov/public_disc/ptr-pdfs/2020/20017693.pdf</a></p>
<p>If I'm not able to extract the table (headers and all), I'd like to extract the ticker (eg, '(CSCO)'), asset type (eg, '[ST]' here), transaction type (eg, 'S') and amount, individually.</p>
<p>The below gets me all the text to parse, but what I'm returning so far is kind of janky and wonder if there's a better way.</p>
<pre><code>import pypdf
import io
import requests as re
import pandas as pd
from bs4 import BeautifulSoup
import pickle
import fnmatch
import re as rx
url = 'https://disclosures-clerk.house.gov/public_disc/ptr-pdfs/2020/20017693.pdf'
c = io.BytesIO(re.get(url = url).content)
pdf = pypdf.PdfReader(c)
text = ""
for page in pdf.pages:
text += page.extract_text() + "\n"
substring = text.split("\n")
holding = fnmatch.filter(substring, '*(*)*')
htype = fnmatch.filter(substring, '*[[]*[]]*')
</code></pre>
|
<python><parsing><pdf><pypdf>
|
2024-05-01 19:39:41
| 1
| 1,702
|
Chris
|
78,415,453
| 825,227
|
Is there a way to retrieve search results from a public domain in Python
|
<p>Looking at something like this:</p>
<p><a href="https://disclosures-clerk.house.gov/FinancialDisclosure" rel="nofollow noreferrer">https://disclosures-clerk.house.gov/FinancialDisclosure</a></p>
<p>Using the 'Search' function in the box on the left, I'd like to select a year in the 'Filing Year' dropdown and retrieve PDFs hyperlinked to in the results in Python.</p>
<p>For instance, for year 2024, I'd like to retrieve PDFs linked to for the 140 entries returned. Ideally, I'd also be able to filter out based on 'Filing'. Any way to do this?</p>
|
<python><html><python-requests>
|
2024-05-01 18:53:11
| 1
| 1,702
|
Chris
|
78,415,332
| 4,276,951
|
How do I fix this error: in PyCharm: The application was unable to start correctly (0xc0000005)
|
<p>After doing a fresh install of PyCharm 2019.2.5 with Windows 10, I get this message</p>
<p><a href="https://i.sstatic.net/glJRjIzc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/glJRjIzc.png" alt="The application was unable to start correctly (0xc0000005)" /></a></p>
<p>I tried to install it for a course I am taking but got stuck and it won't open the IDE. Even after uninstalling and removing all traces of the program... and reinstalling, I encountered the same issue</p>
|
<python><pycharm><jetbrains-ide><aslr>
|
2024-05-01 18:21:49
| 1
| 2,657
|
Avi Parshan
|
78,415,299
| 344,286
|
Is there any built-in way in Flask/Jinja to create a link with the URL?
|
<p>An example, in CommonMark, if I type:</p>
<pre><code>http://example.com
</code></pre>
<p>Then this will get converted to:</p>
<pre class="lang-html prettyprint-override"><code><a href="http://example.com">http://example.com</a>
</code></pre>
<p>This tag has 2x the <code>http://example.com</code> as strictly necessary - one for presentation as the body of the <code><a></code> tag, and one as the <code>href</code>.</p>
<p>In Jinja/Flask this can be accomplished with:</p>
<pre class="lang-html prettyprint-override"><code><a href="{{ url_for('something') }}">{{ url_for('something') }}</a>
</code></pre>
<p>which <em>works</em>, but I can't help but to feel like there should be a better way. I suppose I could do something like write my own function and inject that into the Jinja context so I could do something like:</p>
<pre class="lang-python prettyprint-override"><code>a(url_for('example.page'), _self=True)
</code></pre>
<p>Is there a better way?</p>
|
<python><flask><jinja2>
|
2024-05-01 18:12:55
| 1
| 52,263
|
Wayne Werner
|
78,415,089
| 472,673
|
Creating HDF5 virtual dataset for dynamic data using h5py
|
<p>I have a HDF5 file which contains three 1D arrays in different datasets. This file is created using h5py in Python and the 1D arrays are continually being appended to (ie growing). For simplicity, let’s call these 1D arrays as “A”, “B” and “C” and let’s say each array initially contains 100 values, but every second they will grow by one value (eg 101, 102 etc).</p>
<p>What I’m looking to do is create a single virtual dataset which is the concatenation of all three 1D arrays. This is relatively easy for the static case (3 x 100 values) but I want this virtual dataset to grow as more values are added (eg 303 values at 1 second, 306 at seconds etc.).</p>
<p>Is there a pythonic / efficient way to do this which isn’t just delete the virtual dataset and recreate it each second?</p>
|
<python><hdf5><h5py>
|
2024-05-01 17:19:41
| 1
| 1,349
|
Mark
|
78,414,996
| 3,415,543
|
wxPython Slider incorrectly displays with sized panels
|
<p>Creating a slider in a wxPython sized panel incorrectly displays the labels as shown here:</p>
<p><a href="https://i.sstatic.net/ED9FxOfZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ED9FxOfZ.png" alt="enter image description here" /></a></p>
<p>The following is the smallest program I could write to demonstrate the problem.</p>
<pre class="lang-py prettyprint-override"><code>from typing import cast
from wx import App
from wx import DEFAULT_FRAME_STYLE
from wx import FRAME_FLOAT_ON_PARENT
from wx import ID_ANY
from wx import SL_AUTOTICKS
from wx import SL_HORIZONTAL
from wx import SL_LABELS
from wx import Slider
from wx.lib.sized_controls import SizedFrame
from wx.lib.sized_controls import SizedPanel
WINDOW_WIDTH: int = 400
WINDOW_HEIGHT: int = 200
class SliderBugApp(App):
def __init__(self):
super().__init__()
self._frameTop: SizedFrame = cast(SizedFrame, None)
def OnInit(self) -> bool:
title: str = 'Demo Buggy Slider'
frameStyle: int = DEFAULT_FRAME_STYLE | FRAME_FLOAT_ON_PARENT
self._frameTop = SizedFrame(parent=None, id=ID_ANY, size=(WINDOW_WIDTH, WINDOW_HEIGHT), style=frameStyle, title=title)
sizedPanel: SizedPanel = self._frameTop.GetContentsPane()
slideStyle: int = SL_HORIZONTAL | SL_AUTOTICKS | SL_LABELS
Slider(sizedPanel, id=ID_ANY, value=100, minValue=25, maxValue=100, style=slideStyle)
self._frameTop.Show(True)
return True
testApp = SliderBugApp()
testApp.MainLoop()
</code></pre>
<p>Does anyone have any thoughts?</p>
|
<python><wxpython>
|
2024-05-01 17:01:07
| 1
| 555
|
Sequestered1776Vexer
|
78,414,992
| 6,703,592
|
dataframe tuple name column set a value
|
<pre><code>import pandas as pd
df = pd.DataFrame([[0,0], [0,0]], index=['a', 'b'])
df.columns = [(1,2), (3,4)]
df.at['a', (1,2)] = 100
print(df)
</code></pre>
<p>My column name is a tuple and I want to set value 100 to one of rhe cell. The expected result is</p>
<pre><code> (1, 2) (3, 4)
a 100 0
b 0 0
</code></pre>
<p>But I got</p>
<pre><code> (1, 2) (3, 4) 1 2
a 0 0 100.0 100.0
b 0 0 NaN NaN
</code></pre>
<p>Namely, pandas regards the tuple as two new columns. I also tried <code>.loc</code> and got the same result.</p>
<p>Actually it seems depend on the IDE, since in Jupternote, it is the first one. But in pycharm, it become the unexpected one.</p>
|
<python><pandas><dataframe>
|
2024-05-01 17:00:37
| 1
| 1,136
|
user6703592
|
78,414,881
| 16,759,116
|
Equivalent of a() < b() < c() without chaining?
|
<p>(Inspired by a <a href="https://langdev.stackexchange.com/q/3755/5199">question</a> about an AEC to WebAssembly compiler and its answers, where I can imagine this might matter.)</p>
<p>The simple <code>a() < b() and b() < c()</code> isn't equivalent because <code>b()</code> might get called twice.</p>
<p>Using variables is better:</p>
<pre><code>_a = a()
_b = b()
_a < _b and _b < c()
</code></pre>
<p>But even that isn't 100% equivalent. In particular, the value of <code>a()</code> is kept alive until after the second comparison, which could be problematic. I logged events with all three versions:</p>
<pre class="lang-none prettyprint-override"><code>chained:
a() b() (3 4) del c() (3 3) del del
simple_and:
a() b() (3 3) del del b() c() (3 3) del del
variables:
a() b() (4 4) c() (4 3) del del del
</code></pre>
<p>(<code>del</code> logs the "deletion" of objects returned by the functions, and the number pairs log the reference counts of the two compared objects during a comparison.)</p>
<p>Is there an equivalent way without chaining?</p>
<p>Testing script:</p>
<pre class="lang-py prettyprint-override"><code>def chained():
return a() < b() < c()
def simple_and():
return a() < b() and b() < c()
def variables():
_a = a()
_b = b()
return _a < _b and _b < c()
import sys
class X:
def __lt__(self, other):
print(end=f'({sys.getrefcount(self)} {sys.getrefcount(other)}) ')
return True
def __del__(self):
print(end='del ')
def a(): print(end='a() '); return X()
def b(): print(end='b() '); return X()
def c(): print(end='c() '); return X()
for f in chained, simple_and, variables:
print(f.__name__ + ':')
f()
print('\n')
</code></pre>
<p><a href="https://ato.pxeger.com/run?1=dZKxbsMgEIaljjzFbYDqZq7cZMkzdMhQCQE-aksOtgBXiqo8SZcs7UP1aQoBt1aSMtg67r_vhzs-vsZDaAd7On1OwTw8ft9tGzSgW9lZbBivCcTlMEzOgmQc1qDOX804IUnqu_3Yo5D2f3XMXVa9SddJ1aOfi4SETSrJgYqBKkHBRcE6JRIs_gqLRPfBBfAHT4jupfewy8DkIkQfhGAee1PBEFp0xS2t0XU2MLTNxlD2HgGrVwwOjR6muJ9q-BGu9jPlyIHyX1I54bObcGHdYF-8b5rSmE-QczvixetlLvWO8qeZvItXTTJ1IVO3ZfpCpq9lxAwODHR2HnW1GGT1N5588swyKyGs3KMQcA-0Lg0wZUxZQ18s5fkplRc1v6wf" rel="nofollow noreferrer">Attempt This Online!</a></p>
|
<python>
|
2024-05-01 16:33:56
| 4
| 10,901
|
no comment
|
78,414,849
| 3,155,240
|
How to tick "choose paper source by pdf page size" in adobe acrobat reader
|
<p>I have a pdf. I'd like to be able to set the "Choose paper source by PDF page size" checkbox in adobe acrobat reader. I could've sworn some time ago, I'd found a way to set it to be on all the time by setting an option in the viewer preferences, but after scouring through the preferences and internet, I have yet to find it again. I have seen some posts about a java and c# SDK called "iTextSharp", but unfortunately, python has no iTextSharp install in pip (from what I could tell). Is there any python library that supports such a feature? There seems to be something that you could include in the metadata, <a href="https://stackoverflow.com/a/19875055/3155240">per this answer</a>, but I haven't been able to find an implementation of that nor what to set in the <a href="https://www.adobe.com/devnet-docs/acrobatetk/tools/PrefRef/Windows/index.html" rel="nofollow noreferrer">page that I am assuming they are referencing</a>.</p>
<p>From a pure python standpoint, I am open to any library that is MIT, BSD, or of the like. No GPL. I have used pypdfium2, PyPDF2, and fitz to try and just get something to work with the help of a language model, but naturally, it's not very helpful.</p>
|
<python><python-3.x><pdf><adobe><acrobat>
|
2024-05-01 16:27:27
| 1
| 2,371
|
Shmack
|
78,414,724
| 2,796,170
|
SQLAlchemy Event Listeners - Best Way to get update values while using do_orm_execute
|
<p>When using <code>sqlalchemy</code> event listeners, I'm wondering the recommended/best way to get the actual updated values while using the <code>do_orm_execute</code> session event.</p>
<p>For example, when detecting ORM events using something like <code>before_flush</code> I can do something like:</p>
<pre class="lang-py prettyprint-override"><code>def _before_flush(session, flush_context, instances):
for entry in session.dirty:
# below would print the class instance of my updated table row
print(entry)
event.listen(session, "before_flush", _before_flush)
</code></pre>
<p>I'm hoping to find a (at least nearly) analogous way of detecting updates that were made through <code>execute()</code> statements using <code>do_orm_execute</code>.</p>
<pre class="lang-py prettyprint-override"><code>def _receive_orm_execute(orm_execute_state: ORMExecuteState):
if orm_execute_state.is_update:
# what is the best way to grab updates here?
# the following seems to have tuples with updated field values, but should I be using this:
# orm_execute_state.execution_options['_sa_orm_update_options']._resolved_keys_as_propnames
event.listen(session, "do_orm_execute", _receive_orm_execute)
</code></pre>
<p>Either of the above requires more setup than just dropping in these snippets. But I am assuming anyone with an answer to this question will not need it.</p>
<p>This question really boils down to - what is the best/recommended way to do this?</p>
<p>FWIW: using <code>sqlalchemy</code> v1.4.x (due to project constraints) but will take insights from any version.</p>
<p>Thanks in advance.</p>
|
<python><sqlalchemy>
|
2024-05-01 15:57:37
| 0
| 557
|
codeAndStuff
|
78,414,645
| 2,040,705
|
Unable to Display Images in Google Colab
|
<p>This google colab cell doesn't show any image, how to fix it?</p>
<p>First I connect to google drive and set working directory.</p>
<pre><code>from google.colab import drive
drive.mount('/content/drive/')
import os
os.chdir("/content/drive/My Drive/Colab Notebooks")
</code></pre>
<p>Then in Markdown cell I have this:</p>
<pre><code># Optional Lab: Model Representation
<figure>
<img src="C1_W1_L3_S1_Lecture_b.png" style="width:600px;height:200px;">
</figure>
</code></pre>
<p><a href="https://i.sstatic.net/2Ed2KVM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Ed2KVM6.png" alt="enter image description here" /></a></p>
<p>Neither of this worked as well, even when I use absolute path.</p>
<pre><code>

<img src="C1_W1_L3_S1_Lecture_b.png" alt="Image Title">
</code></pre>
<p><a href="https://i.sstatic.net/6HQUehzB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HQUehzB.png" alt="enter image description here" /></a></p>
<p>Checking for files in current directory:</p>
<pre><code>!ls
C1_W1_L3_S1_Lecture_b.png 'Test read drive contents.ipynb'
temp.txt
</code></pre>
<p>It can read other files in the folder though:</p>
<pre><code>readfile = open("temp.txt", 'r').read()
readfile
'test'
</code></pre>
<p>Additionally, this method worked but its not what I am looking for since all lecture notebooks are formated using <code><figure> </figure></code> I won't be able to reformat everything.</p>
<pre><code>from IPython.display import Image, display
display(Image(filename='/content/drive/My Drive/Colab Notebooks/C1_W1_L3_S1_Lecture_b.png'))
</code></pre>
|
<python><jupyter-notebook><markdown><google-colaboratory>
|
2024-05-01 15:44:27
| 0
| 1,396
|
user40
|
78,414,585
| 673,600
|
Using FAISS in Databricks
|
<p>Despite installing the correct package I cannot make the following code work to add an index to a Dataset.</p>
<p>Failing to work at the following line:</p>
<pre><code>embeddings_dataset.add_faiss_index(column="embeddings")
</code></pre>
<p>gives the following error message</p>
<pre><code>ImportError: You must install Faiss to use FaissIndex. To do so you can run `conda install -c pytorch faiss-cpu` or `conda install -c pytorch faiss-gpu`. A community supported package is also available on pypi: `pip install faiss-cpu` or `pip install faiss-gpu`. Note that pip may not have the latest version of FAISS, and thus, some of the latest features and bug fixes may not be available
</code></pre>
<p>I'm installing by trying a variety of the following:</p>
<pre><code>!pip install --upgrade faiss-gpu==1.7.2
!pip install faiss-gpu
!pip install --upgrade faiss-cpu
</code></pre>
<p>But none of these give any different results. The code looks like:</p>
<pre><code>import faiss
from datasets import Dataset
embeddings_dataset = Dataset.from_pandas(comments_df[0:10])
embeddings_dataset.add_faiss_index(column="embeddings")
</code></pre>
|
<python><databricks><faiss>
|
2024-05-01 15:31:55
| 0
| 6,026
|
disruptive
|
78,414,334
| 7,091,922
|
How to access environmental variables when executing a python-behave test?
|
<p>I am trying to set up a <code>behave</code> test for my app and I need to access many different environmental variables during the execution.</p>
<p>For example, I have code that checks if the ENVIRONMENT is local, dev or prod.</p>
<p>I saw that in the <code>environment.py</code> file I can access parameters passed to the command line like: <code>behave -t qa -D ENVIRONMENT=local</code>
And access it in the <code>before_</code> funcitons like this: <code>context.config.userdata['ENVIRONMENT']</code></p>
<p>But so far I was unable to turn that into an environment variable for my test's execution.</p>
<p>Am I missing something or that is more like a feature request?</p>
<p>Thank you!</p>
|
<python><environment-variables><python-behave>
|
2024-05-01 14:42:37
| 1
| 823
|
A Campos
|
78,414,110
| 8,271,180
|
Python assign inherited class
|
<p>I have a class <code>A</code> that has many attributes, lets call them a,b,c,...,z</p>
<p>Then I have a class B that inherites from A. Inside a function in B I want to set all the self attributes of A to another instance of A.</p>
<pre><code>class B(A):
def set_A(obj:A):
self.a = obj.a
self.b = obj.b
...
self.z = obj.z
</code></pre>
<p>Is there any way to do it without wrting each of <code>A</code>s properties and setting them one by one?</p>
|
<python><class><inheritance><syntax>
|
2024-05-01 14:00:28
| 0
| 1,356
|
Tomer Wolberg
|
78,414,094
| 3,010,486
|
Draw a outline mask based on image shape - Using Canny edge - updated
|
<p>I would like to <strong>draw an outline mask (shape) based on the image shape</strong>.
I am using <strong>opencv-python version 4.9.0.80</strong>.</p>
<p>So when we have an <strong>images like these</strong>.</p>
<p><a href="https://i.sstatic.net/Z4O6Qzbmm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4O6Qzbmm.png" alt="Just a test" /></a></p>
<p><a href="https://i.sstatic.net/g7jpRcIzm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g7jpRcIzm.png" alt="enter image description here" /></a></p>
<p>We want <strong>output like this without images inside. Avoid gray background(It is added so that, the outline mask based on the image shape can be visible correctly).</strong></p>
<p><a href="https://i.sstatic.net/MigeBapBm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MigeBapBm.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/MrZxzcpBm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MrZxzcpBm.png" alt="enter image description here" /></a></p>
<p>I have tried <code>cv.convexHull()</code> but the <strong>result is not coming good as expected</strong>.</p>
<p><a href="https://i.sstatic.net/tCX1SdXym.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCX1SdXym.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/65HXVIYBm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65HXVIYBm.png" alt="enter image description here" /></a></p>
<p><strong>I am trying this code.</strong></p>
<pre><code>import cv2 as cv
import numpy as np
path = '/opt/boundary-remover/SW_test2-01.png'
# path = '/opt/boundary-remover/scaled_2x.png'
# Load image
src = cv.imread(cv.samples.findFile(path))
cv.imwrite('1_original.png', src)
# Convert image to gray and blur it
src_gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
cv.imwrite('2_gray.png', src_gray)
# Detect edges using Canny
canny_output = cv.Canny(src_gray, 100, 200)
cv.imwrite('3_canny.png', canny_output)
# Find contours
contours, _ = cv.findContours(canny_output, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)
# Combine all contours into a single contour
all_contours = np.concatenate(contours)
# Find the convex hull object for the combined contour
hull = cv.convexHull(all_contours)
# Draw the combined contour and its convex hull
drawing = np.zeros((canny_output.shape[0], canny_output.shape[1], 3), dtype=np.uint8)
color = (0, 255, 0) # Green color
# cv.drawContours(drawing, [all_contours], 0, color)
# cv.imwrite('4_all_contours.png', drawing)
cv.drawContours(drawing, [hull], 0, color)
cv.imwrite('5_hull.png', drawing)
cv.drawContours(drawing, [hull], -1, color, thickness=cv.FILLED)
cv.imwrite('7_filled.png', drawing)
</code></pre>
|
<python><numpy><opencv><image-processing>
|
2024-05-01 13:56:43
| 3
| 3,246
|
NiRmaL
|
78,413,936
| 7,334,203
|
Python requests put to update image always give a 400. Im following the documentation
|
<p>im having this issue:
My code is this</p>
<pre><code>import requests
import json
import re
# Define your Shopify store's API key and password
API_KEY = 'xxxxxxxxxxxxxxxxxx'
PASSWORD = 'xxxxxxxxxxxxxx'
API_TOKEN = 'xxxx_xxxxxxxxxcd7'
SHOPIFY_DOMAIN = 'https://www.xxxx.xxx/'
headers = {
'X-Shopify-Access-Token': API_TOKEN,
'Content-Type': 'application/json',
}
# Define the endpoint URL
endpoint = 'https://www.xxxxx/admin/api/2024-04/products/9190309691717/images/53245809295685.json'
# Define the new alt text
new_alt_text = "#title-#seren.earth-#variantSKU"
# Prepare the request body
body = {
'image': {
"product_id": 9190309691717,
"id": 53245809295685, # Include the image ID,
"alt": 'variantSKU',
}
}
# Send the PUT request
response = requests.put(endpoint, headers=headers, json=body)
print(response.status_code)
print(response.content)
# Check the response status
if response.status_code == 200:
print(f"Alt text updated successfully for image 9190309691717")
else:
print(f"Failed to update alt text for image 9190309691717. Response: {response.text}")
</code></pre>
<p>But im always getting this error:
400
b'{"errors":{"image":"Required parameter missing or invalid"}}'
Failed to update alt text for image 9190309691717. Response: {"errors":{"image":"Required parameter missing or invalid"}}</p>
<p>What am i missing? Do you see any errors?</p>
|
<python><python-requests><shopify-app>
|
2024-05-01 13:19:24
| 0
| 7,486
|
RamAlx
|
78,413,842
| 6,714,667
|
How can i remove only single lines between text to chunk it up?
|
<p>I have a text file:</p>
<pre><code>title
header
topic one two three
hello harry
</code></pre>
<p>i want to remove only single lines between text to get:</p>
<pre><code>title
header
topic one two three
hello harry
</code></pre>
<p>how i can do this using python?</p>
<pre><code>data = open('data.txt').read().replace('\n', '')
</code></pre>
<p>the above removes all</p>
|
<python><text><strip>
|
2024-05-01 12:58:38
| 2
| 999
|
Maths12
|
78,413,787
| 4,915,288
|
ValueError: No chrome executable found on PATH, chromedriver_autoinstaller.install
|
<p>Trying to use chromedriver_autoinstaller.install() but getting the ValueError: No chrome executable found on PATH</p>
<pre><code>chromedriver_autoinstaller.install()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/e/reports_automation/venv/lib/python3.10/site-packages/chromedriver_autoinstaller/__init__.py", line 21, in install
chromedriver_filepath = utils.download_chromedriver(path, no_ssl)
File "/home/e/reports_automation/venv/lib/python3.10/site-packages/chromedriver_autoinstaller/utils.py", line 269, in download_chromedriver
chrome_version = get_chrome_version()
File "/home/e/reports_automation/venv/lib/python3.10/site-packages/chromedriver_autoinstaller/utils.py", line 140, in get_chrome_version
path = get_linux_executable_path()
File "/home/e/reports_automation/venv/lib/python3.10/site-packages/chromedriver_autoinstaller/utils.py", line 196, in get_linux_executable_path
raise ValueError("No chrome executable found on PATH")
ValueError: No chrome executable found on PATH
</code></pre>
|
<python><python-3.x><google-chrome><selenium-webdriver><selenium-chromedriver>
|
2024-05-01 12:45:37
| 2
| 954
|
Lohith
|
78,413,784
| 18,139,225
|
How to make the unit show up at the result after square root calculation using handcalcs and forallpeople Python libraries?
|
<p>I am using forallpeople Python library within handcalcs Python library. After performing square root calculation, the unit is not showing up at the calculation result:</p>
<pre><code>import handcalcs.render
from math import sqrt, pi
import forallpeople as si
si.environment('structural', top_level=True)
</code></pre>
<p>And here is the calculation:</p>
<pre><code>%%render 1 short
A = (78.3*MPa)
B = (56.6*MPa)
C = (80.4*MPa)
D = sqrt(A**2 + B**2 + C**2)
</code></pre>
<p>And here is the output:</p>
<p><a href="https://i.sstatic.net/kEzlzf3b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEzlzf3b.png" alt="The output of the calculation" /></a></p>
<p>I wonder whether there is a way of making the unit show up, i.e. <code>D = ... = 125.7 MPa</code></p>
|
<python><jupyter-lab>
|
2024-05-01 12:45:11
| 2
| 441
|
ezyman
|
78,413,739
| 10,035,190
|
make authenticated and login work for multiple table other then auth User table django?
|
<p>I have created hostTable model in hostlogin app and I want to use this table for login purpose for that i have created custom authenticate function because default authenticate() was working for auth user table only. Also login() is not attaching current session to the users from hostTable. Although it working fine if I set <code>AUTH_USER_MODEL ='hostlogin.HostTable'</code> but then admin page is not working also this is not acceptable solution because hostTable I will use for teaches login (127.0.0.1:8081/tlogin/) and studentlogin table I will use for student login (127.0.0.1:8081/slogin/) that time how will I use two different table for different login() and authenticate ().</p>
<p>models.py</p>
<pre><code>from django.db import models
from django.utils import timezone
from django.contrib.auth.models import AbstractUser,AbstractBaseUser,PermissionsMixin
from django.contrib.auth.hashers import make_password, check_password
class HostTable(AbstractBaseUser):
pid=models.AutoField(primary_key=True,auto_created=True)
username=models.CharField(max_length=50,unique=True)
password=models.CharField(max_length=128)
#...someotherfields...
createdtime =models.DateTimeField(default=timezone.now)
last_login=models.DateTimeField(default=timezone.now)
is_authenticated=models.BooleanField(default=True)
is_active = models.BooleanField(default=True)
is_staff = models.BooleanField(default=False)
USERNAME_FIELD = 'username'
REQUIRED_FIELDS=['hostname','ownername','password']
def save(self, *args, **kwargs):
self.password = make_password(self.password)
super(HostTable, self).save(*args, **kwargs)
def __str__(self):
return self.hostname
</code></pre>
<p>view.py</p>
<pre><code>
def hostlogin(request):
if request.method == 'POST':
form = UserLoginForm(request.POST)
if form.is_valid():
username = form.cleaned_data.get('username')
password = form.cleaned_data.get('password')
print('view',username,password)
user = hostTableAuth.authenticate(request,username,password)
print('view',user)
if user is not None:
login(request,user)
messages.success(request, f' welcome {username} !!')
# print(request.META['REMOTE_USER'])
print('r',request.user)
print('r',request.user.is_authenticated)
print('s',request.session)
request.session['username']=username
return redirect('home')
else:
messages.info(request, f'account done not exit plz sign in')
form = UserLoginForm()
return render(request, 'hostlogin/login.html', {'form':form, 'title':'log in'})
</code></pre>
<p>customeAuth.py</p>
<pre><code>from django.contrib.auth.backends import BaseBackend,ModelBackend
from django.contrib.auth.hashers import check_password
from .models import HostTable
class hostTableAuth(ModelBackend):
def authenticate(self, request,username, password):
user=HostTable.objects.filter(username=username).first()
print('user',username,password,user.password)
if user is not None:
login_valid = user.username == username
pwd_valid = check_password(password, user.password)
print(login_valid,pwd_valid)
if login_valid and pwd_valid:
try:
user = HostTable.objects.get(username=username)
except HostTable.DoesNotExist:
# Create a new user. There's no need to set a password
# because only the password from settings.py is checked.
user = HostTable(username=username)
user.is_staff = True
user.is_superuser = True
user.save()
print('auth1',user)
return user
else:
print('auth2',user)
return None
def get_user(self, user_id):
try:
return HostTable.objects.get(pk=user_id)
except HostTable.DoesNotExist:
return None
</code></pre>
<p><a href="https://i.sstatic.net/nSW1LazP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSW1LazP.png" alt="enter image description here" /></a></p>
|
<python><django><authentication><django-models><django-rest-framework>
|
2024-05-01 12:35:19
| 1
| 930
|
zircon
|
78,413,655
| 1,020,139
|
Pydantic ConfigDict(use_enum_values=True) has no effect when providing default value. How can I provide default values?
|
<p>The following is a simplified example.</p>
<p>I need to configure <code>ConfigDict(use_enum_values=True)</code> as the code performs <code>model_dump()</code>, and I want to get rid of raw enum values for the purpose of later serialization.</p>
<p>How can <code>A</code> provide a default value for <code>A.x</code>, so that <code>model_dump()</code> outputs the enum value and not the enum itself?</p>
<p>Bonus: Is there any way for A to initilize <code>A.x</code> itself and make it impossible for users to set it? After all, default values can be overriden</p>
<pre><code>from enum import Enum
from pydantic import BaseModel, ConfigDict
class X(Enum):
x = "X"
y = "Y"
z = "Z"
class A(BaseModel):
model_config = ConfigDict(use_enum_values=True, frozen=True)
x: X = X.x
>>> A()
A(x=<X.x: 'X'>)
>>> A(x=X.x)
A(x='X')
</code></pre>
|
<python><pydantic>
|
2024-05-01 12:09:33
| 1
| 14,560
|
Shuzheng
|
78,413,287
| 13,561,669
|
Unable to interact with google maps with selenium to get route details
|
<p>I am trying to interact with google maps to get route related data. Here is what I have written so far.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
import time
driver = webdriver.Chrome()
driver.get("https://www.google.com/maps")
time.sleep(5)
driver.find_element(By.ID, 'hArJGc').click()
time.sleep(5)
search_boxes = driver.find_elements(By.CLASS_NAME, 'tactile-searchbox-input')
search_boxes[0].clear()
search_boxes[0].send_keys('Hyderabad')
search_boxes[1].clear()
search_boxes[1].send_keys('Guntur')
search_boxes[1].send_keys(Keys.ENTER)
time.sleep(10)
driver.find_element(By.CLASS_NAME, 'goog-inline-block goog-menu-button-dropdown').click()
</code></pre>
<p>I am trying to access the drop down button to set departure time by passing its class name. but it is saying such element does not exists as shown below</p>
<pre><code>NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".goog-inline-block goog-menu-button-dropdown"}
(Session info: chrome=124.0.6367.91); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
Stacktrace:
GetHandleVerifier [0x00007FF7AEF11502+60802]
(No symbol) [0x00007FF7AEE8AC02]
(No symbol) [0x00007FF7AED47CE4]
(No symbol) [0x00007FF7AED96D4D]
(No symbol) [0x00007FF7AED96E1C]
(No symbol) [0x00007FF7AEDDCE37]
(No symbol) [0x00007FF7AEDBABBF]
(No symbol) [0x00007FF7AEDDA224]
(No symbol) [0x00007FF7AEDBA923]
(No symbol) [0x00007FF7AED88FEC]
(No symbol) [0x00007FF7AED89C21]
GetHandleVerifier [0x00007FF7AF21411D+3217821]
GetHandleVerifier [0x00007FF7AF2560B7+3488055]
GetHandleVerifier [0x00007FF7AF24F03F+3459263]
GetHandleVerifier [0x00007FF7AEFCB846+823494]
(No symbol) [0x00007FF7AEE95F9F]
(No symbol) [0x00007FF7AEE90EC4]
(No symbol) [0x00007FF7AEE91052]
(No symbol) [0x00007FF7AEE818A4]
BaseThreadInitThunk [0x00007FFAFC8E257D+29]
RtlUserThreadStart [0x00007FFAFE42AA48+40]
</code></pre>
<p>But when I inspect the webpage that class name exists. How come I am not able to access it like other elements and how to resolve this?</p>
<p>thanks in advance!
<a href="https://i.sstatic.net/e8uZ0fpv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8uZ0fpv.png" alt="enter image description here" /></a></p>
|
<python><google-maps><selenium-webdriver><automation>
|
2024-05-01 10:43:48
| 1
| 307
|
Satya Pamidi
|
78,412,978
| 659,389
|
Mypy + FlaskSQLAlchemy + model multiple inheritance => has no attribute
|
<p>It seems that <code>mypy</code> is having problems taking into account all the superclasses and reports missing attributes. Here is a simple example:</p>
<pre><code>import uuid
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import String
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import mapped_column, DeclarativeBase
db = SQLAlchemy()
class Base(DeclarativeBase):
pass
class IdMixin:
id = mapped_column(UUID(as_uuid=True), nullable=False, default=uuid.uuid4, primary_key=True)
class ImageMixin:
image_name = mapped_column(String, default=None)
image_uri = mapped_column(String, default=None)
class Model(IdMixin, Base):
pass
class Post(Model, ImageMixin):
content = mapped_column(String)
class Comment(Model, ImageMixin):
text = mapped_column(String)
def generate_image_uri(image_name: str) -> str:
return f"https://example.com/{image_name}"
models = [Comment, Post]
for model in models:
entities = db.session.query(model).all()
for entity in entities:
entity.image_uri = generate_image_uri(image_name=entity.image_name) # Line 45
</code></pre>
<p>Mypy reports:</p>
<pre><code>mypy-warning.py:45: error: "Model" has no attribute "image_uri" [attr-defined]
mypy-warning.py:45: error: "Model" has no attribute "image_name" [attr-defined]
</code></pre>
<p>It works just fine when I move the <code>ImageMixin</code> to the <code>Model</code> class and then only subclass <code>Model</code> in the <code>Post</code> and <code>Comment</code> classes, but this is not what I want as I don't necessarily want all models to mix in the image functionality.</p>
|
<python><sqlalchemy><flask-sqlalchemy><mypy><python-typing>
|
2024-05-01 09:25:48
| 1
| 1,333
|
koleS
|
78,412,917
| 7,334,203
|
GraphQL mutation for updating products is not working
|
<p>im trying to implement a python script that will update every product's alt text image to a specific text.
My code initially is this:
import requests
import json
import re</p>
<pre><code># Define your Shopify store's API key and password
API_KEY = '88xxxxxxxxxxxxxxxxxxxxxxxxxxxxf'
PASSWORD = '0xxxxxxxxxxxxxxxxxxxxxxxxxxxx4'
API_TOKEN = 'shpat_xxxxxxxxxxxxxxxxxxxxxxx'
SHOPIFY_DOMAIN = 'https://www.xxxx.yyyy/'
# Define your Shopify store's GraphQL endpoint
GRAPHQL_ENDPOINT = 'https://www.xxxx.yyyy/admin/api/2024-04/graphql'
headers = {
'X-Shopify-Access-Token': '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',
'Content-Type': 'application/json',
}
# Define the mutation template
mutation_template = """
mutation updateImageAltText($productId: ID!, $imageId: ID!, $newAltText: String!) {
productImageUpdate(productId: $productId, image: {id: $imageId, altText: $newAltText}) {
image {
id
altText
}
userErrors {
field
message
}
}
}
"""
# Function to send the mutation
def send_mutation(product_id, image_id, new_alt_text):
# Prepare the mutation variables
variables = {
"productId": product_id,
"imageId": image_id,
"newAltText": new_alt_text
}
# Prepare the GraphQL request payload
payload = {
"query": mutation_template,
"variables": variables
}
# Send the GraphQL mutation request
response = requests.post(GRAPHQL_ENDPOINT, json=payload, headers=headers)
# Check for successful response
if response.status_code == 200:
print(f"Alt text updated successfully for image {image_id}")
else:
print(f"Failed to update alt text for image {image_id}. Response: {response.text}")
</code></pre>
<p>But when im executing the code even if it is successfull the updated are not seen. I have tried running a fetch query with graphql but im getting this response which is quite weird</p>
<pre><code># GraphQL query
query = """
query MyQuery {
products(first: 50) {
edges {
node {
id
images(first: 1) {
edges {
node {
id
altText
}
}
}
}
}
}
}
"""
# JSON payload
payload = {
"query": query
}
# Make the POST request
response = requests.post(GRAPHQL_ENDPOINT, json=payload, headers=headers)
# # Check if the request was successful
if response.status_code == 200:
# Parse the JSON response
data = response.json()
# print(json.dumps(data, indent=4))
else:
print(f"Query failed to run with a status code {response.status_code}")
print(response.text)
</code></pre>
<p>-------------------------OUTPUT---------------------------</p>
<pre><code>b'<html>\n <body>\n <noscript>\n <a href="https://accounts.shopify.com/oauth/authorize?client_id=7ee65a63608843c577db8b23c4d7316ea0a01bd2f7594f8a9c06ea668c1b775c&amp;destination_uuid=4aa9e194-f056-4326-b911-141e3efda215&amp;nonce=9ecd58f45954fd0d7d9e7b886e8a1231&amp;prompt=merge&amp;redirect_uri=https%3A%2F%2Fc6859c-2.myshopify.com%2Fadmin%2Fauth%2Fidentity%2Fcallback&amp;response_type=code&amp;scope=email%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fdestinations.readonly%20openid%20profile%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fpartners.collaborator-relationships.readonly%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fbanking.manage%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fmerchant-setup-dashboard.graphql%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fshopify-chat.admin.graphql%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fflow.workflows.manage%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Forganization-identity.manage%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fmerchant-bank-account.manage%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fshopify-tax.manage&amp;state=d20d4aa1e3781ba66b325f770fe4a923&amp;ui_locales=en&amp;ux=shop">Continue</a>\n </noscript>\n\n <script type="text/javascript"
defer>\n
window.location="https:\\/\\/accounts.shopify.com\\/oauth\\/authorize?client_id=7ee65a63608843c577db8b23c4d7316ea0a01bd2f7594f8a9c06ea668c1b775c\\u0026destination_uuid=4aa9e194-f056-4326-b911-141e3efda215\\u0026nonce=9ecd58f45954fd0d7d9e7b886e8a1231\\u0026prompt=merge\\u0026redirect_uri=https%3A%2F%2Fc6859c-2.myshopify.com%2Fadmin%2Fauth%2Fidentity%2Fcallback\\u0026response_type=code\\u0026scope=email%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fdestinations.readonly%20openid%20profile%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fpartners.collaborator-relationships.readonly%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fbanking.manage%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fmerchant-setup-dashboard.graphql%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fshopify-chat.admin.graphql%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fflow.workflows.manage%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Forganization-identity.manage%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fmerchant-bank-account.manage%20https%3A%2F%2Fapi.shopify.com%2Fauth%2Fshopify-tax.manage\\u0026state=d20d4aa1e3781ba66b325f770fe4a923\\u0026ui_locales=en\\u0026ux=shop"
;\n
</script>\n </body>\n</html>\n'
</code></pre>
<p>What am i missing? Any ideas?</p>
|
<python><graphql><shopify>
|
2024-05-01 09:08:18
| 0
| 7,486
|
RamAlx
|
78,412,846
| 5,747,326
|
Issues processing XML files using Saxonica(saxonche) with Python
|
<p>I'm using <code>saxonche</code> python library for XPath3.1 ops in python. I have created a FastAPI that just accepts the XML filename, opens it, processes and returns the response.
It worked fine during development on an Intel MacBook, but in production on an Amazon m7g.2xlarge instance (Debian 12 ARM64), it fails with the following error when processing multiple files.</p>
<blockquote>
<p>Fatal error: StackOverflowError: Enabling the yellow zone of the stack
did not make any stack space available. Possible reasons for that: 1)
A call from native code to Java code provided the wrong JNI
environment or the wrong IsolateThread; 2) Frames of native code
filled the stack, and now there is not even enough stack space left to
throw a regular StackOverflowError; 3) An internal VM error occurred.</p>
</blockquote>
<p>XML File size: 5 to 8 MB
Production env: m7g.2xlarge(AWS) with Debian 12 ARM64</p>
<p><strong>Questions</strong>:<br>
Does saxonche have a limitation with processing multiple files simultaneously?<br>
Could upgrading Java on the server potentially resolve this issue?<br></p>
<p>Any suggestions for troubleshooting or resolving this error would be greatly appreciated.
Thank you for your help!</p>
|
<python><saxon><saxon-c>
|
2024-05-01 08:52:15
| 1
| 746
|
N Raghu
|
78,412,680
| 10,136,501
|
Error while initializing SparkSession in Windows
|
<p>I'm getting the below error message while initialing the SparkSession in my Windows system.
I have followed the below steps and added the respective path as Env Varaiable:</p>
<ol>
<li>Install Java 11 and set JAVA_HOME as env variable</li>
<li>Create a venv and Install Python 3.10 using Anaconda</li>
<li>Download Spark: <a href="https://dlcdn.apache.org/spark/spark-3.5.1/spark-3.5.1-bin-hadoop3.tgz" rel="nofollow noreferrer">https://dlcdn.apache.org/spark/spark-3.5.1/spark-3.5.1-bin-hadoop3.tgz</a></li>
<li>Set SPARK_HOME as environment variable</li>
<li>Downloaded Hadoop for windows: <a href="https://github.com/steveloughran/winutils" rel="nofollow noreferrer">https://github.com/steveloughran/winutils</a></li>
<li>Set the HADOOP_HOME environment variable</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
print("Packages Imported...")
def load_dataframe(path):
df = spark.read.format("csv") \
.option("header", True) \
.load(path)
return df
if __name__ == "__main__":
print("Running app1.py...")
spark = SparkSession \
.builder \
.appName("app1") \
.master("local[3]") \
.getOrCreate()
# Print spark version
print("Spark Version: ", spark.version)
# wines_df = load_dataframe("./data/wine.csv")
# print(type(wines_df))
# print(wines_df.printSchema())
</code></pre>
<p>Error Message:
<a href="https://i.sstatic.net/Tl425TJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tl425TJj.png" alt="error" /></a></p>
|
<python><apache-spark><pyspark>
|
2024-05-01 08:05:52
| 1
| 369
|
RevolverRakk
|
78,412,549
| 14,471,688
|
How to perform matthews_corrcoef in sklearn simultaneously between every column using a matrix X and and output y?
|
<p>I want to calculate the Matthews correlation coefficient (MCC) in sklearn between every column of a matrix X with an output y. Here is my code:</p>
<pre><code>from sklearn.metrics import matthews_corrcoef
import numpy as np
X = np.array([[1, 0, 0, 0, 0],
[1, 0, 0, 1, 0],
[1, 0, 0, 0, 1],
[1, 1, 0, 0, 0],
[1, 1, 0, 1, 0],
[1, 1, 0, 0, 1],
[1, 0, 1, 0, 0],
[1, 0, 1, 1, 0],
[1, 0, 1, 0, 1],
[1, 0, 0, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
n_sample, n_feature = X.shape
rcf_all = []
for i in range(n_feature):
coeff_c_f = abs(matthews_corrcoef(X[:, i], y))
rcf_all.append(coeff_c_f)
rcf = np.mean(rcf_all)
</code></pre>
<p>It worked pretty good here but as long as I have a very big matrix with many features, calculating them by looping through one feature at a time is pretty slow. What is the most effective way to perform this simultaneously without using the loop to speed up the calculation process?</p>
|
<python><numpy><scikit-learn>
|
2024-05-01 07:32:08
| 1
| 381
|
Erwin
|
78,412,322
| 4,391,249
|
How to make a Protocol that inherits methods and attributes from a concrete class
|
<h2>Scenario</h2>
<p>I have a <code>Foo</code> protocol:</p>
<pre class="lang-py prettyprint-override"><code>class Foo(Protocol):
fooname: str
def do_fooey_things(self):
pass
</code></pre>
<p>and a bunch of different types of concrete Foos:</p>
<pre class="lang-py prettyprint-override"><code>class PurpleFoo:
fooname: str = "purple"
def do_fooey_things(self):
print("purple foo!")
</code></pre>
<p>What I've achieved with this is a way of being able to type-hint, have my IDE help me, and run tests to make sure all my concrete Foos are doing what I expect: <code>isinstance(purple_foo, Foo)</code>. And I've met another design requirement which is I want to minimize abstraction (ie I don't want to use abstract base classes).</p>
<p>But now I need all my Foo's to use a mixin from a third party library like:</p>
<pre><code>class PurpleFoo(BarMixin):
...
</code></pre>
<p>But now my IDE can't help me with the <code>BarMixin</code> methods and attributes, and my <code>isinstance</code> checks will pass even if one of the Foos fails to use <code>BarMixin</code> (and I want them all too use it).</p>
<h2>Question and what I've tried</h2>
<p>I know I can't do</p>
<pre><code>class Foo(Protocol, BarMixin)
...
</code></pre>
<p>So what do I do?</p>
<p>The best I've come up with so far is to make <code>Foo</code> have class attributes pointing to the attributes and methods of <code>BarMixin</code> that I'm interested in adopting:</p>
<pre><code>class Foo(Protocol)
fooname: str
bar_attribute = BarMixin.bar_attribute
bar_method = BarMixin.bar_method
</code></pre>
<p>At least this way I benefit from the IDE help (at least in VSCode) and the <code>isinstance</code> check catches any Foo's that don't inherit from <code>BarMixin</code>.</p>
|
<python><mypy><python-typing><pyright>
|
2024-05-01 06:29:56
| 0
| 3,347
|
Alexander Soare
|
78,412,286
| 10,964,685
|
dmc card responsivity - plotly dash
|
<p>I've got multiple <code>dmc.card</code> items within a dash app layout. I've got them set to a specific height and width. I've also instilled <code>base, sm, md, lg</code> values to account for a responsive change in screen settings.</p>
<p>I'm trying to achieve the same response that the traditional <code>dbc.Card</code> performs. That is, when the screen size is reduced, the cards keep their same layout but reduce in width.</p>
<p>I don't want the cards to split into separate rows.</p>
<p>Is there a way for <code>dmc.card</code> to operate the same way?</p>
<pre><code>import dash
from dash import dcc
from dash import html
from dash.dependencies import Input, Output
import dash_bootstrap_components as dbc
import plotly.express as px
import pandas as pd
import dash_mantine_components as dmc
external_stylesheets = [dbc.themes.COSMO, dbc.icons.BOOTSTRAP]
app = dash.Dash(__name__, external_stylesheets = external_stylesheets)
carddbc = dbc.Card(
[
dbc.CardBody(
[
html.P('', className = 'card-text'),
]
),
],
style = {'height':'100%'}
)
card1 = dmc.Card(
html.Div(children = [
dcc.Checklist(['','','','',''],
),
],
),
w={"base": 100, "sm": 180,"md": 300, "lg": 300},
py={"base": "xs", "sm": "md", "lg": "xl"},
bg={"base": "blue.7", "sm": "red.7", "lg": "green.7"},
withBorder=True,
shadow='sm',
radius='md',
h=275,
)
card2 = dmc.Card(
html.Div(children = [
dcc.Checklist(['','','','',''],
),
],
),
w={"base": 100, "sm": 180,"md": 300, "lg": 300},
py={"base": "xs", "sm": "md", "lg": "xl"},
bg={"base": "blue.7", "sm": "red.7", "lg": "green.7"},
withBorder=True,
shadow='sm',
radius='md',
h=275,
)
card3 = dmc.Card(
html.Div(children = [
dcc.Checklist(['','','','',''],
),
],
),
w={"base": 100, "sm": 180,"md": 300, "lg": 300},
py={"base": "xs", "sm": "md", "lg": "xl"},
bg={"base": "blue.7", "sm": "red.7", "lg": "green.7"},
withBorder=True,
shadow='sm',
radius='md',
h=275,
)
app.layout = dbc.Container([
dbc.Row([
dbc.Col([
], xs = 2, sm = 2, md = 2, lg = 2
),
dbc.Col([
dbc.Row([
dbc.Col(carddbc, width = {'size':3}),
dbc.Col(carddbc, width = {'size':3}),
dbc.Col(carddbc, width = {'size':3}),
], justify = 'center'),
dbc.Row([
dbc.Col([
], xs = 4, sm = 4, md = 4, lg = 4),
dbc.Col([
dbc.Card([
dbc.CardBody([
dbc.Row([
dbc.Col([
dcc.Graph()
],
),
dbc.Row([
dbc.Col(html.Div(card1), className = ''),
dbc.Col(html.Div(card2), className = ''),
dbc.Col(html.Div(card3), className = ''),
]
)
]),
],
)
])
], xs = 8, sm = 8, md = 8, lg = 8),
])
], xs = 10, sm = 10, md = 10, lg = 10)
])
], fluid = True)
if __name__ == '__main__':
app.run_server(debug = True)
</code></pre>
|
<python><css><plotly-dash>
|
2024-05-01 06:18:10
| 1
| 392
|
jonboy
|
78,412,205
| 4,112,504
|
How to realize tsaregular function of Matlab in python?
|
<p>I know the <code>tsa</code> function and can make a similar function in Python.</p>
<p>But I'm having trouble understanding <code>tsaregular</code><a href="https://ww2.mathworks.cn/help/predmaint/ref/tsaregular.html" rel="nofollow noreferrer">1</a> :Regular signal of a time-synchronous averaged signal.</p>
<p>I want to program one in Python.</p>
<p>Could someone kindly explain it to me?</p>
|
<python><matlab><signal-processing>
|
2024-05-01 05:53:24
| 1
| 340
|
Jilong Yin
|
78,412,032
| 20,088,885
|
UncaughtPromiseError > TypeError Uncaught Promise > Failed to fetch Odoo
|
<p>I'm having a little bit of trouble after installing odoo17 successfully, when I create a new page in my website I get this error</p>
<pre><code>TypeError: Failed to fetch
at Object.get (https://website.com/web/assets/6db5172/web.assets_web.min.js:2861:92)
at https://website.com/web/assets/6db5172/web.assets_web.min.js:15488:49
at AddPageDialog.getCssLinkEls (https://website.com/web/assets/6db5172/web.assets_web.min.js:15489:91)
at Object.getCssLinkEls (https://website.com/web/assets/6db5172/web.assets_web.min.js:15484:727)
at AddPageTemplates.preparePages (https://website.com/web/assets/6db5172/web.assets_web.min.js:15479:37)
at AddPageTemplates.<anonymous> (https://website.com/web/assets/6db5172/web.assets_web.min.js:15478:653)
at https://website.com/web/assets/6db5172/web.assets_web.min.js:1005:80
at Array.map (<anonymous>)
at ComponentNode.initiateRender (https://ewebsite.com/web/assets/6db5172/web.assets_web.min.js:1005:69)
at https://website.com/web/assets/6db5172/web.assets_web.min.js:1540:80
</code></pre>
<p>This is the sample image</p>
<p><a href="https://i.sstatic.net/BOglSv0z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOglSv0z.png" alt="enter image description here" /></a></p>
|
<python><odoo>
|
2024-05-01 04:35:22
| 1
| 785
|
Stykgwar
|
78,412,008
| 8,844,500
|
Python multiprocessing apply_async with callback does not update the dictionary data structure
|
<p>I'm trying to count all the tokens in a catalog. Due to the large amount of document, I'd like to do this counting using multiprocessing (or any other parallel calculation tool that you're free to mention). My problem is that naive construction does not work.</p>
<p>Here the minimal example I constructed</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing
import random
def count_tokens(document):
counter = dict()
for token in document:
if token in counter:
counter[token] += 1
else:
counter[token] = 1
return counter
tokens = ['tok'+str(i) for i in range(int(9))]
catalog = [random.choices(tokens, k=8) for _ in range(100)]
token_counts = {token: 0 for token in tokens}
def callback(result):
global token_counts
for token, count in result.items():
token_counts[token] += count
return token_counts
with multiprocessing.Pool() as pool:
for document in catalog:
pool.apply_async(count_tokens, args=(document,), callback=callback)
</code></pre>
<p>and the problem is that the returned <code>token_counts</code> is not the same as the un-parallel calculation</p>
<pre class="lang-py prettyprint-override"><code>token_counts = {token: 0 for token in tokens}
for document in catalog:
callback(count_tokens(document))
</code></pre>
<p>To be sure I constructed the complete script</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing
import random
def count_tokens(document):
counter = dict()
for token in document:
if token in counter:
counter[token] += 1
else:
counter[token] = 1
return counter
tokens = ['tok'+str(i) for i in range(int(9))]
catalog = [random.choices(tokens, k=8) for _ in range(100)]
token_counts = {token: 0 for token in tokens}
def callback(result):
global token_counts
for token, count in result.items():
token_counts[token] += count
return token_counts
with multiprocessing.Pool() as pool:
for document in catalog:
pool.apply_async(count_tokens, args=(document,), callback=callback)
count_multiprocessing = dict(**token_counts)
print(count_multiprocessing)
token_counts = {token: 0 for token in tokens}
for document in catalog:
callback(count_tokens(document))
count_onecpu = dict(**token_counts)
print(count_onecpu)
for token, count in count_onecpu.items():
assert count == count_multiprocessing[token]
for token, count in count_multiprocessing.items():
assert count == count_onecpu[token]
</code></pre>
<p>that always end up with an assertion error at home (Python 3.10.9 if that matters).</p>
|
<python><python-multiprocessing>
|
2024-05-01 04:23:24
| 2
| 329
|
FraSchelle
|
78,411,879
| 342,553
|
Why mock.patch works on attribute of object that is already imported
|
<p>I am aware of the importance of path to mock as illustrated <a href="https://stackoverflow.com/questions/20242862/why-python-mock-patch-doesnt-work">here</a>, but consider this Django scenario</p>
<p>models.py</p>
<pre class="lang-py prettyprint-override"><code>class Proxmox(Model):
@property
def api(self, ...):
....
</code></pre>
<p>tasks.py</p>
<pre class="lang-py prettyprint-override"><code>def run_task(...):
....
</code></pre>
<p>views.py</p>
<pre><code>from models import Proxmox
from tasks import run_task
class APIView(...):
def get(request):
Proxmox.objects.get(...).api.do_something()
run_task()
</code></pre>
<p>tests.py</p>
<pre class="lang-py prettyprint-override"><code>class MyTestCase(...):
@mock.patch('tasks.run_task') <---- this is not patched as already imported in view
#@mock.patch('views.run_task'). <---- this patches fine
@mock.patch('models.Proxmox.api') <---- but why this works fine?
def test_one(self, mock_api, mock_run_task):
client.get(....)
...
</code></pre>
<p>Both <code>Proxmox</code> and <code>run_task</code> are imported into <code>views.py</code> so why patching <code>models.Proxmox.api</code> still got patched?</p>
|
<python><python-mock>
|
2024-05-01 03:22:36
| 1
| 26,828
|
James Lin
|
78,411,847
| 4,490,974
|
Python not buffering audio correctly
|
<p>I have a python script that fetches audio file from GCS and is meant to buffer only a max of 10 seconds of audio each time per chunk.</p>
<p>However it is not working, it starts off good but when a long file say 2 hours long, it ends up being out by a lot. Say the current play head is 4min it has buffered 5min total that's 60 seconds out.</p>
<p>It's meant to act like a live stream.</p>
<pre><code>def generate_audio(audio_file, bitrate_str, sample_rate_str, audio_size_str):
print(audio_file)
# Convert bitrate, sample rate, and audio size to integers
bitrate = int(bitrate_str)
print('bitrate is')
print(bitrate)
sample_rate = int(sample_rate_str)
audio_size = int(audio_size_str)
print(audio_size)
# Set up the start position for reading chunks from the file
start_position = 0
part1_runs = 0
# Set the initial desired duration
initial_desired_duration = 0.00000499 # seconds per chunk
remaining_duration = calculate_remaining_duration(audio_size, start_position, bitrate, sample_rate)
desired_duration = min(initial_desired_duration, remaining_duration)
delay = 0.01
count = 0
# Stream the audio data chunks to the client
while True:
# Calculate the chunk size based on the current desired duration
chunk_size = calculate_chunk_size(bitrate, sample_rate, initial_desired_duration)
print (chunk_size)
# Example values
duration = calculate_audio_duration(chunk_size, sample_rate, 44100)
print("Duration of audio in chunk:", duration, "seconds")
# Ensure that the end position for the chunk does not exceed the file size
end_position = min(start_position + chunk_size, audio_size)
# Read a chunk of data from the audio file
chunk = audio_file.download_as_string(start=start_position, end=end_position)
# If no more data is available, break the loop
if not chunk:
break
# Update the start position for the next chunk
start_position += len(chunk)
# Update the remaining duration
remaining_duration = calculate_remaining_duration(audio_size, start_position, bitrate, sample_rate)
# Round to the nearest second
print("Remaining duration (rounded to nearest second):", remaining_duration, "seconds")
if count >= 0.68:
sleepDur = remaining_duration/1000
sleepDur = str(sleepDur).split(".")[0]
# Convert the value to a string and slice it to include only the first five digits after the decimal point
if part1_runs >= 10:
time.sleep(0.68)
part1_runs = 3
elif part1_runs > 8:
print("sleeping for 3 seconds")
time.sleep(3)
elif part1_runs < 8:
print(float(sleepDur) + part1_runs)
time.sleep(1)
part1_runs += 1 # Increment the counter
elif count < 0.68:
count+=0.05
time.sleep(0.64)
# Yield the chunk to the client
yield chunk
</code></pre>
|
<python><python-3.x>
|
2024-05-01 03:03:42
| 0
| 895
|
Russell Harrower
|
78,411,784
| 46,503
|
SQLAlchemy: Confused with many-to-many relationship for the same table and additional data
|
<p>I want a user to be referral and referee, and store some additional data. Here is my code:</p>
<pre><code>class User(UserMixin, db.Model):
__tablename__ = 'user'
referees = db.relationship('RefData', back_populates='referral', lazy='dynamic')
referrals = db.relationship('RefData', back_populates='referee', lazy='dynamic')
</code></pre>
<p>and the RefData table:</p>
<pre><code>class RefData(UserMixin, db.Model):
__tablename__ = 'ref_data'
referral_id = db.Column(UUID(as_uuid=True), db.ForeignKey('user.id'), primary_key=True)
referral = db.relationship('User', back_populates='referrals')
referee_id = db.Column(UUID(as_uuid=True), db.ForeignKey('user.id'), primary_key=True)
referee = db.relationship('User', back_populates='referees')
</code></pre>
<p>But with this code, I have the following exception and I can't figure out how to solve it:</p>
<pre><code>sqlalchemy.exc.AmbiguousForeignKeysError: Could not determine join condition between
parent/child tables on relationship User.referees - there are multiple foreign key paths
linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns
which should be counted as containing a foreign key reference to the parent table.
</code></pre>
|
<python><sqlalchemy>
|
2024-05-01 02:37:25
| 2
| 5,287
|
mimic
|
78,411,725
| 2,813,606
|
How to parse nested dictionaries into Pandas DataFrame
|
<p>I have a pretty crazy dictionary that I'm trying to parse out into a pandas dataframe. Here is a smaller version of what the dictionary looks like:</p>
<pre><code>import datetime
from decimal import *
test_dict = [{'record_id': '43bbdfbf',
'date': datetime.date(2023, 3, 25),
'person': {
'id': '123abc',
'name': 'Person1'
},
'venue': {
'id': '5bd6c74c',
'name': 'Place1',
'city': {
'id': '3448439',
'name': 'São Paulo',
'state': 'São Paulo',
'state_code': 'SP',
'coords': {'lat': Decimal('-23.5475'), 'long': Decimal('-46.63611111')},
'country': {'code': 'BR', 'name': 'Brazil'}
},
},
'thing_lists': {'thing_list': [
{'song': [
{'name': 'Thing1','info': None,'dup': None},
{'name': 'Thing2', 'info': None, 'dup': None},
{'name': 'Thing3', 'info': None, 'dup': None},
{'name': 'Thing4', 'info': None, 'dup': None}],
'extra': None},
{'song': [
{'name': 'ExtraThing1','info': None,'dup': None},
{'name': 'ExtraThing2', 'info': None, 'dup': None}],
'extra': 1
}]}}]
</code></pre>
<p>Here's a function I started building to parse out pieces of information from the dictionary:</p>
<pre><code>def extract_values(dictionary):
record_id = dictionary[0]['record_id'],
date = dictionary[0]['date'],
country = dictionary[0]['venue']['city']['country']['name']
return record_id, date, venue, city, lat, long, country
</code></pre>
<p>Here's the piece where I attempt to pull out the pieces into a dataframe.</p>
<pre><code>import pandas as pd
df = pd.DataFrame(extract_values(test_dict)).transpose()
df.rename(
columns={
df.columns[0]: 'record_id',
df.columns[1]: 'date',
df.columns[3]: 'city',
df.columns[6]: 'country'
},
inplace=True
)
</code></pre>
<p>As you can see it mostly works except for string fields which get split out where each row gets a single character. I'm not sure how to resolve this issue. However, it seems if the last field I pull isn't a string - then it gets squished back into place. Is there a way to push the strings together manually so I don't have to rely on the data type of the final field?</p>
<p>Also, the final few fields appear to be tricky to pull. Ideally, I would like my final dataframe to look like the following:</p>
<pre><code>RecordID Date City Country ThingName Dup Extra
43bbdfbf 2023-03-25 São Paulo Brazil Thing1 None None
43bbdfbf 2023-03-25 São Paulo Brazil Thing2 None None
43bbdfbf 2023-03-25 São Paulo Brazil Thing3 None None
43bbdfbf 2023-03-25 São Paulo Brazil Thing4 None None
43bbdfbf 2023-03-25 São Paulo Brazil ExtraThing1 None 1
43bbdfbf 2023-03-25 São Paulo Brazil ExtraThing2 None 1
</code></pre>
|
<python><json><pandas><dictionary><datetime>
|
2024-05-01 02:06:28
| 1
| 921
|
user2813606
|
78,411,570
| 13,860,156
|
Implementing custom lasso regression with fixed features in sklearn pipeline for variable selection
|
<p>There are two posts related to this topic in R language <a href="https://stats.stackexchange.com/questions/519878/including-fixed-effects-in-a-lasso-elastic-net-regression-model-in-r">including fixed regressor in a Lasso regression model</a> and <a href="https://stackoverflow.com/questions/55971712/fixed-effects-logit-lasso-model">fixed effect Lasso logit model</a></p>
<p>I am writing a feature selection model using Lasso penalisation, my data has some seasonal dummy variables and they must not be dropped during the modelling phase. The easiest way to do so is the shrinkage applied to the coefficients on the linear model should avoid the coefficients of fixed features. Could you please help me write a <strong>custom Lasso</strong> function which does not shrink the coefficients of fixed features and is callable in <strong>sklearn Pipeline</strong>?**</p>
<pre><code> from sklearn.pipeline import Pipeline
pipeline = Pipeline([
('scaler', StandardScaler()), # Optional: Feature scaling
('lasso', LassoWithFixedFeatures(fixed_features_indices))
])
</code></pre>
<p>I have tried this,</p>
<pre><code>from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.linear_model import Lasso
import numpy as np
class LassoWithFixedFeatures(BaseEstimator, RegressorMixin):
def __init__(self, fixed_features_indices, alpha=1.0):
self.fixed_features_indices = fixed_features_indices
self.alpha = alpha
def fit(self, X, y):
# Fit Lasso model with regularized coefficients
self.lasso = Lasso(alpha=self.alpha)
self.lasso.fit(X, y)
# Calculate the penalty term for fixed features
penalty_fixed = self.alpha * np.abs(self.lasso.coef_[self.fixed_features_indices])
# Set coefficients of fixed features to their original values
fixed_features_coefs = np.linalg.lstsq(X[:, self.fixed_features_indices], y, rcond=None)[0]
self.coef_ = np.zeros(X.shape[1])
self.coef_[self.fixed_features_indices] = fixed_features_coefs
# Calculate the penalty term for non-fixed features
penalty_non_fixed = np.zeros(X.shape[1])
penalty_non_fixed[self.fixed_features_indices] = 0 # Exclude fixed features from penalty
penalty_non_fixed[~np.isin(np.arange(X.shape[1]), self.fixed_features_indices)] = self.alpha * np.abs(self.lasso.coef_)
# Update coefficients by considering penalties
self.coef_ += self.lasso.coef_ - penalty_non_fixed + penalty_fixed
return self
def predict(self, X):
return np.dot(X, self.coef_)
</code></pre>
<p>There were total 18 regressors and 7, I have kept at <code>fixed_features</code>. I was expecting lasso regression with custom shrinkage but got</p>
<pre><code>ValueError:
All the 500 fits failed.
It is very likely that your model is misconfigured.
You can try to debug the error by setting error_score='raise'
</code></pre>
<h2>Below are more details about the failures:</h2>
<pre><code>500 fits failed with the following error: Traceback (most recent call last): File "C:\Users\...py", line 732, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params) File "C:\..\base.py", line 1151, in wrapper
return fit_method(estimator, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\...py", line 420, in fit
self._final_estimator.fit(Xt, y, **fit_params_last_step) File "C:\Users\...5759.py", line 26, in fit
penalty_non_fixed[~np.isin(np.arange(X.shape[1]), self.fixed_features_indices)] = self.alpha * np.abs(self.lasso.coef_)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: NumPy boolean array indexing assignment cannot assign 18 input values to the 11 output values where the mask is true
</code></pre>
|
<python><scikit-learn><feature-selection><lasso-regression>
|
2024-05-01 00:43:33
| 1
| 399
|
ORSpecialist
|
78,411,491
| 5,450,919
|
Oracle SQL Cloud programmatically using Python 3.12
|
<p>I'm experiencing difficulty connecting to Oracle SQL Cloud programmatically using Python 3.12. I'm running an M2 MacBook Pro within a virtualenv, and I've successfully installed the 'cx_Oracle' library. Despite trying various approaches, I haven't been able to resolve the issue. Has anyone encountered this problem and found a solution?</p>
<blockquote>
<p>cx_Oracle.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle
Client library: "dlopen(libclntsh.dylib, 0x0001): tried:
'libclntsh.dylib' (no such file),
'/System/Volumes/Preboot/Cryptexes/OSlibclntsh.dylib' (no such file),
'/usr/lib/libclntsh.dylib' (no such file, not in dyld cache),
'libclntsh.dylib' (no such file)". See
<a href="https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html" rel="nofollow noreferrer">https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html</a>
for help</p>
</blockquote>
|
<python><oracle-database><cx-oracle>
|
2024-04-30 23:59:19
| 0
| 629
|
b8con
|
78,411,428
| 2,850,815
|
VSCode python debugger always activates base env on top of selected interpreter with anaconda
|
<p>After setting up Anaconda and VSCode on my new Mac and selecting my desired python interpreter (in this case called <code>lotteryfl</code>), it seems that a new terminal always starts with a base env activated on top of <code>lotteryfl</code>. This also applies when I try to start the debugger, so that my installed packages in <code>lotteryfl</code> cannot be found when being imported. How may I fix it so that only <code>lotteryfl</code> is activated?</p>
<p><a href="https://i.sstatic.net/3K61rpOl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3K61rpOl.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><anaconda><vscode-debugger>
|
2024-04-30 23:27:19
| 3
| 873
|
Hang Chen
|
78,411,404
| 6,606,057
|
Aggregate Dataframe by Set Number of Rows
|
<p>I have a dataframe that I would like to resample by averaging based on the number of rows.</p>
<p>For example i'd like to aggregate by every three rows:</p>
<pre><code> A B C
0 3 4 5
1 5 1 4
2 4 3 5
3 1 5 5
4 3 4 5
5 5 5 5
6 5 0 2
7 4 0 2
8 3 2 2
</code></pre>
<p>With the outcome being:</p>
<pre><code> A B C
0 4.0 2.7 4.7
1 3.0 4.7 5.0
2 4.0 0.7 2.0
</code></pre>
<p>I've tried permutations on:</p>
<pre><code>
mdf.groupby('index1')['attr'].mean()
</code></pre>
<p>But end up with KeyError: 'index1'</p>
|
<python><resample>
|
2024-04-30 23:18:44
| 3
| 485
|
Englishman Bob
|
78,411,233
| 412,137
|
Transitioning from setuptools to Poetry: Issues with Scripts and Package Data
|
<p>I'm migrating a Python project from setuptools to Poetry and encountering issues with my scripts not being recognized and including non-code files. Below is how I previously configured my project using setuptools in setup.py:</p>
<pre><code>from setuptools import setup, find_packages
version = "1.0.0"
setup(
name='my_package',
version=version,
packages=find_packages(),
package_data={
'my_package.merge': ['*.sh']
},
include_package_data=True,
url='repo_url',
entry_points={
'console_scripts': [
'my_amazin_func=my_package.fun:entrypoint'
]
}
)
</code></pre>
<p>Now, I'm trying to achieve the same functionality using poetry. Here's my current pyproject.toml configuration:</p>
<pre><code>[tool.poetry]
name = "my package"
version = "1.0.0"
description = "My Amazing package"
authors = ["AAA"]
readme = "README.md"
include = [
'my_package/merge/*.sh'
]
[tool.poetry.scripts]
my_amazin_func = "my_package.fun:entrypoint"
[tool.poetry.dependencies]
python = ">=3.10.1,<3.11"
[tool.flake8]
max-line-length = 100
select = ['E9', 'F63', 'F7', 'F82']
count = true
show-source = true
statistics = true
max-complexity = 10
[tool.poetry.group.dev.dependencies]
nose = "1.3.7"
pytest = "8.1.1"
pytest-mock = "3.14.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p><strong>Issue</strong>
The function <code>my_amazin_func</code> is intended to be used like a CLI tool, but it is not recognized after migration. How can I ensure that my_amazin_func is correctly recognized as a command line script, and how can I include specific non-code files as I did with package_data in setuptools?</p>
<p>Additional Notes
I have ensured that poetry install was run after updating pyproject.toml.
I have checked the virtual environment to confirm that the script is not present.
Any guidance on how to properly configure Poetry to replace the functionality previously handled by setuptools would be greatly appreciated.</p>
|
<python><setuptools><python-poetry>
|
2024-04-30 22:12:31
| 1
| 2,767
|
Nadav
|
78,411,076
| 793,190
|
Batch file that starts conda and opens Jupyter Notebook won't close on "exit" line
|
<p>I created a <code>.bat</code> file that starts conda in my <code>base</code> environment, then opens Jupyter Notebook. Here's the code:</p>
<pre><code>@echo off
call conda activate base
cd C:\Users\{myUserName}\OneDrive\Documents\Udemy Python Course\My Lesson Scripts
jupyter lab
</code></pre>
<p>That all works great. The cmd window opens, I see the "successfully linked/loaded", etc lines in the command window scroll, and a browser tab opens with Jupyter Notebook and everything is honkey dory from there.</p>
<p>But I'd also like to have the cmd window automatically close once the tab is open in the browser so I added</p>
<pre><code>exit
</code></pre>
<p>at the end of the bat file but the cmd window didn't close. So to see if the bat script was even getting there I added</p>
<pre><code>echo Closing...
timeout 10 >nul
</code></pre>
<p>before the <code>exit</code> line.</p>
<p>After making sure the Jupyter Notebook tab was open in my browser, I checked the cmd window and "Closing..." was not at the bottom of the window, so it's not getting to that point.</p>
<p>I then went to Jupyter Notebook and in the File menu clicked "Shut Down", quickly brought up the open cmd window and sure enough, "Closing..." showed and in 10 seconds the cmd window closed.</p>
<p>So, the batch file is actually waiting for the server/(service?) to stop before it can close on its own.</p>
<p>I know that with Jupyter Notebook open and I'm doing stuff in there - running cells, etc, closing the cmd windows has no effect on the operation of Jupyter, so I'm wondering if there is a way to "bypass" the cmd window/batch file waiting for the server to shut down and force-close the cmd window from the batch file myself.</p>
|
<python><batch-file><jupyter-notebook><conda>
|
2024-04-30 21:19:57
| 1
| 5,088
|
marky
|
78,411,055
| 993,812
|
Find elements in xml file with lxml find() method
|
<p>I have xml files that are 1 million+ lines long. I'm able to parse them without issue with <code>BeautifulSoup</code>, but it can take a minute or more to do the parsing with <code>bs4</code>. I'm trying to use lxml to do the parsing to hopefully speed things up dramatically, but I can't get the <code>find()</code> method to work at all.</p>
<p>Ultimately I'm looking to replace this bs4 line with lxml code:</p>
<pre><code>datamanagers = soup.find_all('Field', {'Name': 'DataManager'})
</code></pre>
<p>I simply cannot get the <code>find()</code> method to work. I figured I could start small and get the first element within root. An example xml file starts like this:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<Root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.spotfire.com/schemas/Document1.0.xsd" SchemaVersion="1.0">
<Object Id="1">
<Type>
...
</Type>
</Object>
</Root>
</code></pre>
<p>So I try:</p>
<pre><code>with open(path_work + '\\' + file.stem + '\\' + 'AnalysisDocument.xml') as f:
tree = etree.parse(f)
root = tree.getroot()
tree.find('Object')
root.find('Object')
tree.find('.//Object')
root.find('.//Object')
</code></pre>
<p>Everything I try returns <code>None</code>. What am I doing wrong here? I've looked at tons of answers and they all make the <code>find()</code> function seem incredibly simple to use.</p>
|
<python><lxml>
|
2024-04-30 21:14:49
| 1
| 555
|
John
|
78,410,960
| 480,118
|
psycopg3 mogrify: AttributeError: 'Connection' object has no attribute 'mogrify'
|
<p>Im using psycopg3 and would like to see what the actual sql with the parameters is so i can execute it using dbweaver...</p>
<pre><code> with psycopg.connect(self.connect_str, autocommit=True) as conn:
if self.log.level == logging.DEBUG:
cur = conn.cursor()
sql_mogr = cur.mogrify(sql, params)
self.log.debug(sql_mogr)
else:
self.log.info(f'sql: {sql}, params:{params}')
df = pd.read_sql(sql, con = conn, params = params)
</code></pre>
<p>The results of the mogrify line is:</p>
<pre><code>AttributeError: 'Cursor' object has no attribute 'mogrify'
</code></pre>
<p>Does psycopg3 not support this method? if not, what is an alternate solution?</p>
<p>version of psycopg:</p>
<pre><code>psycopg==3.1.18
psycopg-binary==3.1.18
psycopg-pool==3.2.1
</code></pre>
|
<python><postgresql><psycopg2><psycopg3>
|
2024-04-30 20:50:24
| 1
| 6,184
|
mike01010
|
78,410,848
| 7,211,014
|
How do I add a public PGP key from a internal apt mirror inside python3.10-slim image?
|
<p>I need to set up <code>apt</code> inside of our container images to use our internal apt mirror.
This will require changing the apt sources, copying over the public key, adjusting apt to use a proxy, and then running <code>apt update</code></p>
<p>I have read many solutions line. Problem is, all of them either have <code>apt-key</code> or <code>gpg</code>. The image I have is python3.10-slim, it does not. And I cant <code>apt install</code> anything until I get the mirrors set up to do any installs. So I am left to using whatever is on this image.</p>
<p>I have the public pgp key, I can copy it to the container. How do I force apt to trust it?</p>
|
<python><docker><containers><apt><pgp>
|
2024-04-30 20:18:44
| 1
| 1,338
|
Dave
|
78,410,672
| 9,191,338
|
How to get detailed documentation on the setup and cleanup procedure of CPython interpreter?
|
<p>I'm running a multiprocessing application with Python, and have a hard time cleaning up the resource after the program finishes. My program has many interaction with C library, and I need to make sure resources in the C library are released properly.</p>
<p>Therefore, I would like to know the setup and cleanup process of Python interpreter:</p>
<ol>
<li>From OS created the process, until Python interpreter starts to run bytecode, what happens in between?</li>
<li>When a program exits, or being interrupted by signals, what is the cleanup order of objects in Python?</li>
</ol>
<p>I searched the python documentation page, and find little resource on this.</p>
|
<python>
|
2024-04-30 19:24:12
| 0
| 2,492
|
youkaichao
|
78,410,548
| 6,213,343
|
Polars: Use column values to reference other column in when / then expression
|
<p>I have a Polars dataframe where I'd like to derive a new column using a when/then expression. The values of the new column should be taken from a different column in the same dataframe. However, the column from which to take the values differs from row to row.</p>
<p>Here's a simple example:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"frequency": [0.5, None, None, None],
"frequency_ref": ["a", "z", "a", "a"],
"a": [1, 2, 3, 4],
"z": [5, 6, 7, 8],
}
)
</code></pre>
<p>The resulting dataframe should look like this:</p>
<pre class="lang-py prettyprint-override"><code>res = pl.DataFrame(
{
"frequency": [0.5, None, None, None],
"frequency_ref": ["a", "z", "a", "a"],
"a": [1, 2, 3, 4],
"z": [5, 6, 7, 8],
"res": [0.5, 6, 3, 4]
}
)
</code></pre>
<p>I tried to create a dynamic reference using a nested pl.col:</p>
<pre class="lang-py prettyprint-override"><code># Case 1) Fixed value is given
fixed_freq_condition = pl.col("frequency").is_not_null() & pl.col("frequency").is_not_nan()
# Case 2) Reference to distribution data is given
ref_freq_condition = pl.col("frequency_ref").is_not_null()
# Apply the conditions to calculate res
df = df.with_columns(
pl.when(fixed_freq_condition)
.then(pl.col("frequency"))
.when(ref_freq_condition)
.then(
pl.col(pl.col("frequency_ref"))
)
.otherwise(0.0)
.alias("res"),
)
</code></pre>
<p>Which fails with <code>TypeError: invalid input for "col". Expected "str" or "DataType", got 'Expr'.</code></p>
<p>What works (but only as an intermediate solution) is by explicitly listing every possible column value in a very long when/then expression. This is far from optimal as the column names might change in the future and produces a lot of code repititon.</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(
pl.when(fixed_freq_condition)
.then(pl.col("frequency"))
.when(pl.col("frequency_ref") == "a")
.then(pl.col("a"))
# ... more entries
.when(pl.col("frequency_ref") == "z")
.then(pl.col("z"))
.otherwise(0.0)
.alias("res"),
)
</code></pre>
|
<python><dataframe><python-polars>
|
2024-04-30 18:54:40
| 3
| 626
|
Christoph Pahmeyer
|
78,410,497
| 5,335,180
|
A UTF-8 text file is failing to import via pandas with a UTF-8 encoding error
|
<p>I've got a text file exported from SQL as UTF-8 with about 5.5 million rows. I'm trying to then read this file with Pandas/Python, but getting</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 135596: invalid continuation byte
</code></pre>
<p>How can I troubleshoot this? I loaded the file into Notepad++ and tried "Convert to UTF-8", but I got the same results. I tried stepping through with a debugger, but pandas is parsing in quite large chunks and I'm having trouble identifying exactly which character is causing it to choke. I tried reading the file as binary and inspecting position 135596 but I didn't see anything out of the ordinary.</p>
<p>Any suggestions on how to identify the issue in our data? At this point I'm considering doing a binary split search (split the data in half, identify which half gives an error, and keep splitting that way until I find it), but it's quite a lot of text.</p>
|
<python><pandas><utf-8>
|
2024-04-30 18:42:22
| 3
| 489
|
reas0n
|
78,410,488
| 11,064,604
|
DataFrame Creation from DataFrame.apply
|
<p>I have a function that returns a <code>pd.DataFrame</code> given a row of another dataframe:</p>
<pre><code>def func(row):
if row['col1']== 1:
return pd.DataFrame({'a':[1], 'b':[11]})
else:
return pd.DataFrame({'a':[-1, -2], 'b':[-11,-22]})
</code></pre>
<p>I want to use apply <code>func</code> to another dataframe to create a new data frame, like below:</p>
<pre><code>df = pd.DataFrame({'col1':[1,2,3],'col2':[11,22,33]})
# do some cool pd.DataFrame.apply stuff
# resulting in the below dataframe
pd.DataFrame({
'a':[1,-1,-2,-1,-2],
'b':[11,-11,-22,-11,-22]
})
</code></pre>
<p>Currently, I use the code below for the desired result:</p>
<pre><code>pd.concat([mini[1] for mini in df.apply(func,axis=1).iteritems()])
</code></pre>
<p>While this works, it is fairly ugly. Is there a more elegant way to create a dataframe from <code>df</code>?</p>
|
<python><python-3.x><pandas><pandas-apply>
|
2024-04-30 18:40:09
| 2
| 353
|
Ottpocket
|
78,410,441
| 2,079,306
|
Python List to string manipulation. Print each command line argv "--option argument1 argument2" on their own separate lines
|
<p>I have a working solution, it's just so gross. I often write stuff like this and then just look at it disgusted, but this one is far to gross to leave as is... especially as it's just list to string manipulation. I'm embarrassed. I'm interested in more succinct solutions. Anyone have a cleaner way to put each option parameter followed by its arguments printed on separate lines?</p>
<p>Input: python3 -O myscript.py --dates "2024-04-30" "2024-04-29" "2024-04-28" "2024-04-27" --names "johnson" "woody" "richard" "willy"
-o "/home/stamos/pickle/pics/name_brainstorming"</p>
<pre><code>sys.argv[1:] = parameter_set
print("\nGenerating parameter set: ")
parameter_set_string = ""
first_iter = True
for each in parameter_set:
if each[0] == "-" and first_iter == False:
print(parameter_set_string)
parameter_set_string = each
else:
if first_iter:
parameter_set_string += each
first_iter = False
else:
parameter_set_string += ' "' + each + '" '
print(parameter_set_string)
</code></pre>
<p>Output:</p>
<pre><code>Generating parameter set:
--dates "2024-04-30" "2024-04-29" "2024-04-28" "2024-04-27"
--names "johnson" "woody" "richard" "willy"
-o "/home/stamos/pickle/pics/name_brainstorming"
Generating parameter set:
--dates "2024-04-26" "2024-04-25" "2024-04-24"
--names "anaconda" "black mamba" "python"
-o "/home/stamos/pickle/pics/name_brainstorming"
</code></pre>
|
<python><printing><parameters><argv>
|
2024-04-30 18:23:40
| 2
| 1,123
|
john stamos
|
78,410,407
| 18,139,225
|
Is this a bug in forallpeople library or am I doing the arithmetic in a wrong way when using handcalcs library?
|
<p>I am performing hand calculations using <code>handcalcs</code> python library. I have just done the following test calculation using unit-aware calculation with the help of the <code>forallpeople</code> python library:</p>
<pre><code>import handcalcs.render
import forallpeople as si
si.environment('structural', top_level=True)
</code></pre>
<p>And the calculation</p>
<pre><code>%%render
A=(70000000*Pa)
B=(5*Pa)
C=A+B
</code></pre>
<p>Here are the results:</p>
<pre><code>A = 70.000 MPa
B = 5.000 Pa
C = 70.000 MPa
</code></pre>
<p>Obviously I was expecting the correct answer in Pa, i.e. <code>C = 70000005 Pa</code>. My question is: Is there a way of performing the arithmetic, with the units, and get a correct answer. This is only a test: I am performing some lengthy calculations. Of course, one can perform the arithmetic using only <code>handcalcs</code> and get correct answer, but I want to include also units with <code>forallpeople</code> library.</p>
|
<python><jupyter-lab>
|
2024-04-30 18:14:38
| 1
| 441
|
ezyman
|
78,410,264
| 1,711,271
|
Most efficient way to keep track of the samples I already selected from an array
|
<p>I have a moderately large <code>np</code> array (which could however get larger in the future):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
x = np.arange(100_000).reshape((10_000,10))
</code></pre>
<p>I need to iteratively choose a random sample (row), making sure that I never choose the same sample twice. Currently I'm doing</p>
<pre><code>rng = np.random.default_rng(seed=42)
indices = list(range(len(x)))
for _ in range(1000):
i = rng.choice(indices)
## do something with x[i]
indices.remove(i)
</code></pre>
<p>However, I read that <code>remove</code> is pretty slow. Is there a better way to keep track of the indices I already used?</p>
|
<python><list><numpy>
|
2024-04-30 17:40:20
| 3
| 5,726
|
DeltaIV
|
78,410,040
| 179,014
|
Why is pandas rounding different from python rounding?
|
<p>It stumbled upon a case where Pandas and Python behave differently when rounding.</p>
<pre><code>>>> import pandas as pd
>>> numbers = [0.495,1.495,2.495,3.495,4.495,5.495,6.495, 7.495,8.495, 9.495, 10.495]
>>> [round(float(x),2 ) for x in numbers]
[0.49, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.49, 9.49, 10.49]
>>> pd.DataFrame(numbers).astype(float).round(2)
0
0 0.50
1 1.50
2 2.50
3 3.50
4 4.50
5 5.50
6 6.50
7 7.50
8 8.49
9 9.49
10 10.50
</code></pre>
<p>Why does Pandas round <code>0.495</code> and <code>10.495</code> different to Python? I thought since Python 3 both implemented banker's rounding? Is one implementation more correct than the other?</p>
|
<python><pandas><rounding>
|
2024-04-30 16:58:00
| 1
| 11,858
|
asmaier
|
78,410,011
| 6,899,925
|
How to filter a queryset by a many2many field
|
<p>I have a Notification model which has a field called <code>seen_users</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib.auth import get_user_model
User = get_user_model()
class Notification(models.Model):
title = models.CharField(max_length=255)
seen_users = models.ManyToManyField(User, blank=True)
</code></pre>
<p>Whenever a user sees a notification (.e.g <code>notification_obj</code>), that user will be added to <code>notification_obj.seen_users</code>.</p>
<p>Now how can I filter notifications that has not seen by a specific user like <code>user1</code> <b> In most efficient way</b>?</p>
<p>I've tried to query like below:</p>
<pre class="lang-py prettyprint-override"><code>class NotificationView(generics.ListAPIView):
authentication_classess = [TokenAuthentication]
permission_classes = []
def get_queryset(self):
unseen_only = self.request.GET.get("unseen_only", "0")
if unseen_only == "1":
# THIS IS WHERE I GOT TROUBLES
# Because other users may have seen this and its not empty
return Notification.objects.filter(seen_users__in=[])
return Notification.objects.all()
</code></pre>
|
<python><django><django-rest-framework><django-queryset>
|
2024-04-30 16:51:05
| 1
| 703
|
AbbasEbadian
|
78,409,475
| 1,802,826
|
Why does a command in my Bash-script return 0 even when it fails? Python, Whisper, ffmpeg
|
<p>I have a <code>bash</code> script that executes <code>whisper</code> on all sound files in a directory. Whisper uses <code>ffmpeg</code> to decode sound files to a format it can handle. One of the files in the directory was corrupt and caused ffmpeg to fail. The first time I executed my script it returned <code>1</code> when it hit that file but when I try to reproduce the error it always returns <code>0</code>.</p>
<p>Here is the log from the first run when 1 was returned:</p>
<pre><code>Traceback (most recent call last):
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/audio.py", line 48, in load_audio
.run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/db/Library/Python/3.11/lib/python/site-packages/ffmpeg/_run.py", line 325, in run
raise Error('ffmpeg', out, err)
ffmpeg._run.Error: ffmpeg error (see stderr output for detail)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/db/Library/Python/3.11/bin/whisper", line 8, in <module>
sys.exit(cli())
^^^^^
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/transcribe.py", line 437, in cli
result = transcribe(model, audio_path, temperature=temperature, **args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/transcribe.py", line 121, in transcribe
mel = log_mel_spectrogram(audio, padding=N_SAMPLES)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/audio.py", line 130, in log_mel_spectrogram
audio = load_audio(audio)
^^^^^^^^^^^^^^^^^
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/audio.py", line 51, in load_audio
raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") from e
RuntimeError: Failed to load audio: ffmpeg version 4.4.4 Copyright (c) 2000-2023 the FFmpeg developers
built with Apple clang version 15.0.0 (clang-1500.1.0.2.5)
configuration: --prefix=/opt/local --cc=/usr/bin/clang --mandir=/opt/local/share/man --enable-audiotoolbox --disable-indev=jack --disable-libjack --disable-libopencore-amrnb --disable-libopencore-amrwb --disable-libxcb --disable-libxcb-shm --disable-libxcb-xfixes --enable-opencl --disable-outdev=xv --enable-sdl2 --disable-securetransport --enable-videotoolbox --enable-avfilter --enable-avresample --enable-fontconfig --enable-gnutls --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libfribidi --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-librsvg --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libzimg --enable-libzvbi --enable-lzma --enable-pthreads --enable-shared --enable-swscale --enable-zlib --enable-libaom --enable-libsvtav1 --arch=x86_64 --enable-x86asm --enable-gpl --enable-libvidstab --enable-libx264 --enable-libx265 --enable-libxvid --enable-postproc
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
/soundfile1.opus: Invalid data found when processing input
exit: 1 //this is my own output
</code></pre>
<p>and here is an example of when it returns 0 despite the input file is corrupt:</p>
<pre><code>% whisper soundfile1.opus
Traceback (most recent call last):
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/audio.py", line 58, in load_audio
out = run(cmd, capture_output=True, check=True).stdout
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ffmpeg', '-nostdin', '-threads', '0', '-i', 'soundfile1.opus', '-f', 's16le', '-ac', '1', '-acodec', 'pcm_s16le', '-ar', '16000', '-']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/transcribe.py", line 597, in cli
result = transcribe(model, audio_path, temperature=temperature, **args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/transcribe.py", line 133, in transcribe
mel = log_mel_spectrogram(audio, model.dims.n_mels, padding=N_SAMPLES)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/audio.py", line 140, in log_mel_spectrogram
audio = load_audio(audio)
^^^^^^^^^^^^^^^^^
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/audio.py", line 60, in load_audio
raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") from e
RuntimeError: Failed to load audio: ffmpeg version 4.4.4 Copyright (c) 2000-2023 the FFmpeg developers
built with Apple clang version 15.0.0 (clang-1500.1.0.2.5)
configuration: --prefix=/opt/local --cc=/usr/bin/clang --mandir=/opt/local/share/man --enable-audiotoolbox --disable-indev=jack --disable-libjack --disable-libopencore-amrnb --disable-libopencore-amrwb --disable-libxcb --disable-libxcb-shm --disable-libxcb-xfixes --enable-opencl --disable-outdev=xv --enable-sdl2 --disable-securetransport --enable-videotoolbox --enable-avfilter --enable-avresample --enable-fontconfig --enable-gnutls --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libfribidi --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-librsvg --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libzimg --enable-libzvbi --enable-lzma --enable-pthreads --enable-shared --enable-swscale --enable-zlib --enable-libaom --enable-libsvtav1 --arch=x86_64 --enable-x86asm --enable-gpl --enable-libvidstab --enable-libx264 --enable-libx265 --enable-libxvid --enable-postproc
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
soundfile1.opus: Invalid data found when processing input
Skipping soundfile1.opus due to RuntimeError: Failed to load audio: ffmpeg version 4.4.4 Copyright (c) 2000-2023 the FFmpeg developers
built with Apple clang version 15.0.0 (clang-1500.1.0.2.5)
configuration: --prefix=/opt/local --cc=/usr/bin/clang --mandir=/opt/local/share/man --enable-audiotoolbox --disable-indev=jack --disable-libjack --disable-libopencore-amrnb --disable-libopencore-amrwb --disable-libxcb --disable-libxcb-shm --disable-libxcb-xfixes --enable-opencl --disable-outdev=xv --enable-sdl2 --disable-securetransport --enable-videotoolbox --enable-avfilter --enable-avresample --enable-fontconfig --enable-gnutls --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libfribidi --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-librsvg --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libzimg --enable-libzvbi --enable-lzma --enable-pthreads --enable-shared --enable-swscale --enable-zlib --enable-libaom --enable-libsvtav1 --arch=x86_64 --enable-x86asm --enable-gpl --enable-libvidstab --enable-libx264 --enable-libx265 --enable-libxvid --enable-postproc
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
soundfile1.opus: Invalid data found when processing input
% echo $?
0
</code></pre>
<p>When I try to reproduce the behaviour I have both tried to execute the script as well as just running the actual line of code from the command prompt. The actual script looks like this:</p>
<pre><code>if whisper "$file" >> "${directory}/../${transdir}/${name}.txt"; then
whispersuccess=$?
echo "exit code $whispersuccess"
else
whispersuccess=$?
echo "exit code: $whispersuccess"
</code></pre>
<h2>I want it to return ¬0 when something like this happens. How do I achieve this?</h2>
|
<python><bash><ffmpeg>
|
2024-04-30 15:06:27
| 1
| 983
|
d-b
|
78,409,220
| 3,007,075
|
code as notebook, how to reload custom module?
|
<p>See the example below:</p>
<pre><code>#%% cell 1
import my_custom_module
#%% cell2:
data = load_data_that_takes_a_long_time_to_load()
#%% cell 3:
result = my_custom_module.function_that_changes(data)
</code></pre>
<p>How can I write a python script that has those cell separations so it can run programmatically as a regular python script, yet if I'm running cells manually, I need to be able to reload a module that changed without restarting the kernel. I've tried using <code>importlib.reload</code> but to no avail.</p>
|
<python>
|
2024-04-30 14:27:50
| 0
| 1,166
|
Mefitico
|
78,409,218
| 520,556
|
Handling a large matrix with numpy efficiently
|
<p>I am trying to run some simple calculations on a quite large matrix; roughly, 200Kx200K or real numbers. I need to obtain l1-norm, i.e., the sum of absolute values. <a href="https://stackoverflow.com/questions/48451167/numpy-set-absolute-value-in-place">Here is a similar question with addresses the question of absolute values</a>. The problem is, thus, not how to obtain absolute values and to sum them up, but how to handle large matrices/arrays efficiently. Here is the code:</p>
<pre><code>results = np.abs(mtx).sum(dim='rows')
</code></pre>
<p>Unfortunately, the code crashes even on the server, allocating a lot of memory. What would be the efficient way to handle this? Would using float32 make a difference? The number should be rather small.</p>
|
<python><numpy>
|
2024-04-30 14:27:30
| 1
| 1,598
|
striatum
|
78,409,120
| 2,301,970
|
Wrong dependency installed with pip
|
<p>I wonder if anyone could please help me with this issue:</p>
<p>I have two python libraries <a href="https://github.com/Vital-Fernandez/lime" rel="nofollow noreferrer">lime</a> and <a href="https://github.com/Vital-Fernandez/specsy" rel="nofollow noreferrer">specsy</a>, where the first library is a dependency of the second.</p>
<p>At the time when I added lime to <a href="https://pypi.org/project/lime-stable/" rel="nofollow noreferrer">PyPi</a> there was another library with the same name so I declared it as lime-stable.</p>
<p>I have now uploaded specsy to its <a href="https://pypi.org/project/specsy/" rel="nofollow noreferrer">PyPi</a> but when I try to install it this happens:</p>
<p>a) It installs the third party "lime" library (not mine)</p>
<p>b) It installs an older version of specsy.</p>
<p>I have actually declared it as "lime-stable" in three different places:</p>
<p>a) The project.toml which I think is the expected place:</p>
<pre><code>[project]
name = "specsy"
version = "0.2.2"
readme = "README.rst"
description = "Model fitting package for the chemical analysis of astronomical spectra"
dependencies = ["arviz~=0.18",
"astropy~=6.0",
"corner~=2.2",
"h5netcdf~=1.3.0",
"jax~=0.4",
"jaxlib==0.4.26",
"lime-stable~=1.0",
"lmfit~=1.3",
"matplotlib~=3.8",
"numpy~=1.26",
"pandas~=2.2.2",
"pymc~=5.13",
"PyNeb~=1.1",
"pytensor~=2.20",
"scipy~=1.13",
"six~=1.16.0",
"toml~=0.10",
"tomli >= 2.0.0 ; python_version < '3.11'",
"xarray~=2024.3.0"]
</code></pre>
<p>b) The requirements.txt file</p>
<pre><code>arviz~=0.18
astropy~=6.0
corner~=2.2
h5netcdf~=1.3.0
jax~=0.4
jaxlib~=0.4
lime-stable~=1.0
lmfit~=1.3
matplotlib~=3.8
numpy~=1.26
pandas~=2.2.2
pymc~=5.13
PyNeb~=1.1
pytensor~=2.20
scipy~=1.13
six~=1.16.0
toml~=0.10
tomli >= 2.0.0 ; python_version < "3.11"
xarray~=2024.3.0
</code></pre>
<p>c) and the setup.py file:</p>
<pre><code> packages=find_packages('src'),
package_dir={'': 'src'},
package_data={'': ['config.toml', 'inference/*', 'innate/*', 'models/*', 'operations/*', 'resources/*', 'workflow/*']},
include_package_data=True,
install_requires=["arviz", "astropy", "h5netcdf", "jax", "jaxlib", "lime-stable", "lmfit", "matplotlib", "numpy",
"pandas", "pymc", "PyNeb", "pytensor", "scipy", "toml", "xarray"],
</code></pre>
<p>But it still installs "lime" instead of "lime-stable" from pip.</p>
<p>I wonder if anyone coculd please point me towards the source of the conflict.</p>
|
<python><pip><python-packaging><requirements.txt>
|
2024-04-30 14:12:09
| 1
| 693
|
Delosari
|
78,409,103
| 1,741,868
|
"No function matches the given name and argument types." when trying to insert a `DateTimeTzRange` with Sqlalchemy ORM
|
<p>I've got a database table in a Postgres 15 database, it's got two columns (amongst others), one is of type <code>daterange</code>, the other is <code>tstzrange</code>.</p>
<p>We've got some code that inserts to this table with raw SQL (via sqlalchemy core) by converting the <code>sqlalchemy.dialects.postgresql.Range</code> to a <code>psycopg2.extras.DateRange</code> or <code>psycopg2.extras.DateTimeTzRange</code>. This works fine.</p>
<p>I'm trying to hook up the Sqlalchemy ORM but I'm getting the following error when I call <code>commit()</code> on my session:</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction)
function daterange(timestamp with time zone, unknown, unknown) does not exist
LINE 1: ...y.co.uk', '81740d75-d75a-4187-8b13-e28ac2e4dbf8', daterange(...
^
HINT: No function matches the given name and argument types.
You might need to add explicit type casts.
[SQL: INSERT INTO "user" (user_id, username, full_name, provider_subject_id, effective, asserted)
SELECT p0::VARCHAR, p1::VARCHAR, p2::VARCHAR, p3::VARCHAR,
p4::DATERANGE, p5::TSTZRANGE FROM (VALUES (%(user_id__0)s,
%(username__0)s, %(full_name__0)s,
%(pro ... 233 characters truncated ... en(p0, p1, p2, p3, p4, p5, sen_counter)
ORDER BY sen_counter RETURNING "user".id, "user".id AS id__1]
[parameters: {'effective__0': DateRange(datetime.datetime(2024, 4, 30, 13, 43, 31, 985502, tzinfo=<UTC>), None, '[)'),
'provider_subject_id__0': '81740d75-d75a-4187-8b13-e28ac2e4dbf8',
'full_name__0': 'foo@example.com',
'user_id__0': '81740d75-d75a-4187-8b13-e28ac2e4dbf8',
'asserted__0': DateTimeTZRange(datetime.datetime(2024, 4, 30, 13, 43, 31, 985513, tzinfo=<UTC>), None, '[)'),
'username__0': 'foo@example.com',
'effective__1': DateRange(datetime.datetime(2024, 4, 30, 13, 43, 31, 985502, tzinfo=<UTC>), None, '[)'),
'provider_subject_id__1': '81740d75-d75a-4187-8b13-e28ac2e4dbf8',
'full_name__1': 'foo@example.com',
'user_id__1': '81740d75-d75a-4187-8b13-e28ac2e4dbf8',
'asserted__1': DateTimeTZRange(datetime.datetime(2024, 4, 30, 13, 43, 31, 985513, tzinfo=<UTC>), None, '[)'),
'username__1': 'foo@example.com'}]
(Background on this error at: https://sqlalche.me/e/20/f405)
</code></pre>
<p>My <code>UserModel</code> looks like:</p>
<pre><code>class ModelBase(DeclarativeBase):
pass
class UserModel(ModelBase):
__tablename__ = "user"
id: Mapped[int] = mapped_column(primary_key=True)
user_id: Mapped[str] = mapped_column(String)
username: Mapped[str] = mapped_column(String)
full_name: Mapped[str] = mapped_column(String)
provider_subject_id: Mapped[str] = mapped_column(String)
effective: Mapped[DATERANGE] = Column(DATERANGE)
asserted: Mapped[TSTZRANGE] = Column(TSTZRANGE)
</code></pre>
<p>The user table is defined as so:</p>
<pre><code>from sqlalchemy import MetaData
metadata_obj = MetaData()
Table("user", metadata_obj,
Column("id", PRIMARY_KEY_TYPE, primary_key=True),
Column("user_id", data_types.CODE_TYPE, nullable=False),
Column("role", data_types.CODE_TYPE, nullable=True),
Column("username", data_types.CODE_TYPE, nullable=False),
Column("full_name", data_types.SHORT_FIXED_STRING_TYPE, nullable=False),
Column("provider_subject_id", data_types.CODE_TYPE, nullable=True),
Column("effective", DATERANGE, nullable=False),
Column("asserted", TSTZRANGE, nullable=False)
)
</code></pre>
<p>I get the above error when I try to commit my session, like so:</p>
<pre><code>session = session_factory()
user = UserModel(
# ...
asserted = DateTimeTZRange(asserted_value.lower, asserted_value.upper, bounds=asserted_value.bounds)
effective = DateRange(effective_value.lower, effective_value.upper, bounds=effective_value.bounds)
)
session.add(user)
session.commit() #💥
</code></pre>
<p>So my gut says this <em>should</em> work, because it's working with raw SQL, but also I'm a python/sqlalchemy/postgres n00b, so what do I know.</p>
<p>My gut also thinks there's some mapping special sauce I'm missing, but again... what do I know?</p>
|
<python><python-3.x><postgresql><sqlalchemy><psycopg2>
|
2024-04-30 14:09:42
| 2
| 14,935
|
Greg B
|
78,408,949
| 3,238,957
|
dynamic number of fields in executemany call
|
<p>I have a little python app that reads a large amount of data, and then attempts to insert it into a table.
To avoid having it all in memory, I utilize the executemany method, using a generator.</p>
<p>This program is very generic, and as such I do not know the number of fields in the table that I'm inserting into. However the number of fields in the list returned by the generator will be equal to the number of fields in the table.</p>
<p>This is the code that works for my first test:</p>
<pre><code>writecur.executemany( f"""INSERT INTO {self.schema_name}.{self.table_name}
({','.join(fields)}) VALUES (%s,%s,%s,%s,%s,%s,%s,%s)""", [rec for rec in batch])
</code></pre>
<p>The <code>fields</code> variable has a list of the fields I am inserting into, and the <code>batch</code> is the result of the generator that uses islice to create a list of tuples (same number of values in the tuple, as there are fields in <code>fields</code>.
In the example above, there are 8 values in each <code>rec</code> .</p>
<p>Now the last [outstanding] step is to have the number of "%s"'s be dynamically added to the f string rather than the hardcoded 8 of them.
What is a slick way to do that? (I use sqlalchemy and should stick with that at least)</p>
<p>Or.. is there an even better way to do this, maybe without using f strings... I'm open to the best method..
Note that the table name is also dynamic.</p>
<p>Thanks!</p>
|
<python><mysql><sqlalchemy><executemany>
|
2024-04-30 13:45:59
| 0
| 566
|
da Bich
|
78,408,880
| 14,471,688
|
How to perform matthews_corrcoef in sklearn simultaneously for every column using a matrix?
|
<p>I want to perform Matthews correlation coefficient (MCC) in sklearn to find the correlation between different features (boolean vectors) in a 2D numpyarray. What I have done so far is to loop through each column and find correlation value between features one by one.</p>
<p>Here is my code:</p>
<pre><code>from sklearn.metrics import matthews_corrcoef
import numpy as np
X = np.array([[1, 0, 0, 0, 0],
[1, 0, 0, 1, 0],
[1, 0, 0, 0, 1],
[1, 1, 0, 0, 0],
[1, 1, 0, 1, 0],
[1, 1, 0, 0, 1],
[1, 0, 1, 0, 0],
[1, 0, 1, 1, 0],
[1, 0, 1, 0, 1],
[1, 0, 0, 0, 0]])
n_sample, n_feature = X.shape
rff_all = []
for i in range(n_feature):
for j in range(i + 1, n_feature):
coeff_f_f = abs(matthews_corrcoef(X[:, i], X[:, j]))
rff_all.append(coeff_f_f)
rff = np.mean(rff_all)
</code></pre>
<p>As I have a huge dimension of 2D numpyarray, it seems to be really slow and impractical. What is the most efficient way to perform this kind of operation simultaneously without using the loops?</p>
<p><strong>Edit:</strong> I then come up with this idea but it is still pretty slow.</p>
<pre><code>from more_itertools import distinct_combinations
all_c = []
for item in distinct_combinations(np.arange(X.shape[1]), r=2):
c = matthews_corrcoef(X[:, item][:, 0], X[:, item][:, 1])
all_c.append(abs(c))
</code></pre>
|
<python><numpy><scikit-learn>
|
2024-04-30 13:33:51
| 2
| 381
|
Erwin
|
78,408,806
| 1,232,660
|
Remove all nested XML tags
|
<p>I am trying to remove all "nested tags of same type". For every XML element, if you find another subelement in its subtree that has same name, remove its tag (keep its contents). In another words, transform <code><a>...<a>...</a>...</a></code> into <code><a>.........</a></code>.</p>
<p>I created a very nice and simple piece of code using functions <a href="https://lxml.de/apidoc/lxml.etree.html#lxml.etree._Element.iter" rel="nofollow noreferrer"><code>iter</code></a> and <a href="https://lxml.de/apidoc/lxml.etree.html#lxml.etree.strip_tags" rel="nofollow noreferrer"><code>strip_tags</code></a> from the <code>lxml</code> package:</p>
<pre class="lang-python prettyprint-override"><code>import lxml.etree
root = lxml.etree.parse('book.txt')
for element in root.iter():
lxml.etree.strip_tags(element, element.tag)
print(lxml.etree.tostring(root).decode())
</code></pre>
<p>I used this input file:</p>
<pre class="lang-xml prettyprint-override"><code><book>
<b><title>My <b>First</b> Book</title></b>
<i>Introduction <i><i>To</i></i> LXML</i>
<name><a>Author: <a>James</a></a></name>
</book>
</code></pre>
<p>and I got this output:</p>
<pre class="lang-xml prettyprint-override"><code><book>
<b><title>My First Book</title></b>
<i>Introduction To LXML</i>
<name><a>Author: <a>James</a></a></name>
</book>
</code></pre>
<p>As you can see, it removed almost all the nested tags except one: <code><a>Author: <a>James</a></a></code>. What is wrong with the code? How can I fix it?</p>
|
<python><xml><lxml>
|
2024-04-30 13:20:23
| 2
| 3,558
|
Jeyekomon
|
78,408,713
| 10,601,287
|
How to set proxies with Selenium options when connecting to command-line generated google-chrome
|
<p>I am successfully running the chrome browser from the terminal with the following command:</p>
<blockquote>
<p>google-chrome --remote-debugging-port=9222 --user-data-dir="C:\selenum\ChromeProfile"</p>
</blockquote>
<p>I am then able to connect to this browser instance with the following line of code when initialising the driver:</p>
<pre><code>options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
</code></pre>
<p>I would now like to set a proxy server address in the format of "http://username:password@proxyhost:port".</p>
<p>I am usually able to use proxies just fine with Selenium using a manifest_json block and adding to options i.e.:</p>
<pre><code>pluginfile = 'proxy_auth_plugin.zip'
with zipfile.ZipFile(pluginfile, 'w') as zp:
zp.writestr("manifest.json", manifest_json)
zp.writestr("background.js", background_js)
options.add_extension(pluginfile)
</code></pre>
<p>However, when the above experimental debugger option (above) is added to the code, the proxies no longer work and the system IP address is used.</p>
<p>I have also tried setting a proxy from the command-line with '--proxy-server="http://username:password@proxyhost:port', but as far as I understand, Chrome doesn't accent username and passwords through command-line.</p>
<p>So I'm wondering is there any way to achieve setting proxies with Selenium after connecting to a command-line generated Chrome instance?</p>
<p>Thanks in advance for your help.</p>
|
<python><selenium-webdriver><http-proxy><google-chrome-headless>
|
2024-04-30 13:03:29
| 1
| 319
|
Nancy Collins
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.