QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable)
|---|---|---|---|---|---|---|---|---|
78,457,199
| 3,981,290
|
How to sort Pandas dataframe by column using the key argument
|
<p>Assume a Pandas data frame (for the sake of simplicity, let's say with three columns). The columns are titled <code>A</code>, <code>B</code> and <code>d</code>.</p>
<pre><code>>>> import pandas as pd
>>> df = pd.DataFrame([[1, 2, "a"], [1, "b", 3], ["c", 4, 6]], columns=['A', 'B', 'd'])
>>> df
   A  B  d
0  1  2  a
1  1  b  3
2  c  4  6
</code></pre>
<p>Further assume that I wish to sort the data frame so that the columns have exactly the following order: <code>d</code>, <code>A</code>, <code>B</code>. The rows of the data frame shall not be rearranged in any way. The desired output is:</p>
<pre><code>>>> col_target_order = ['d', 'A', 'B']
>>> df_desired
   d  A  B
0  a  1  2
1  3  1  b
2  6  c  4
</code></pre>
<p>I know that this can be done via the <code>sort_index</code> function of pandas. However, the following won't work, as the input list (<code>col_target_order</code>) is not callable:</p>
<pre><code>>>> df.sort_index(axis=1, key=col_target_order)
</code></pre>
</code></pre>
<p>What key specification do I have to use?</p>
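<p>For reference, a sketch of the kind of <code>key</code> that works (assuming pandas >= 1.1, where <code>sort_index</code> gained the <code>key</code> argument): the callable receives the column <code>Index</code> and must return sort keys for each label, so mapping each label to its rank in the target list does the job:</p>

```python
import pandas as pd

df = pd.DataFrame([[1, 2, "a"], [1, "b", 3], ["c", 4, 6]], columns=['A', 'B', 'd'])
col_target_order = ['d', 'A', 'B']

# The key callable gets the whole column Index; map every label to its
# position in the target order, and sort_index sorts by those positions.
rank = {col: i for i, col in enumerate(col_target_order)}
result = df.sort_index(axis=1, key=lambda idx: idx.map(rank))
print(list(result.columns))  # → ['d', 'A', 'B']
```

<p>(Plain <code>df[col_target_order]</code> gives the same column reordering without <code>sort_index</code>.)</p>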
|
<python><pandas><sorting><key>
|
2024-05-09 22:17:44
| 2
| 1,793
|
Michael Gruenstaeudl
|
78,457,178
| 15,045,363
|
How to get clear sky irradiance with horizon in PVLib?
|
<p><strong>Is there a way to take the horizon into account in PVLib?</strong></p>
<p>I have PV systems in the mountains, with high neighbouring mountains affecting the horizon. I have simulated the clear-sky irradiance with PVLib and with PVGIS, both with and without the horizon (see the next figure); the difference is significant.</p>
<ul>
<li>Is it possible to consider the horizon of the <code>Location</code> when calling the <code>Location.get_clearsky()</code> function?</li>
<li>If not, is there another way to consider the impact of the horizon with PVLib?</li>
<li>If not, is it possible to run a ModelChain (that uses the pvwatts AC and DC models) with only the clear-sky GHI + horizon coming from PVGIS, even if no DNI and DHI are provided?</li>
</ul>
<p><a href="https://i.sstatic.net/7E7noweK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7E7noweK.png" alt="Clear Sky Irradiance comparison between PVGIS and PVLib (in a valley, in January)" /></a></p>
<hr />
<p><strong>PS:</strong> This is my code to get the clearsky irradiance and the PV system AC Power:</p>
<pre class="lang-py prettyprint-override"><code># Location
locationValley = Location(latitude=46.179, longitude=7.602, altitude=1431, tz='Europe/Zurich', name='Valley')

# Weather
datetime = pd.date_range(start='2022-01-01', end='2022-01-02', freq='5min', tz=locationValley.tz, inclusive='left')
csPowerValley = locationValley.get_clearsky(datetime)  # ineichen with climatology table by default

# ModelChain
array1 = Array(
    mount=FixedMount(surface_tilt=30, surface_azimuth=241, racking_model='open_rack'),
    module_parameters={'pdc0': 55, 'gamma_pdc': -0.004},
    module_type='glass_polymer',
    modules_per_string=445,  # 445 modules for 1 string or 89 modules for 5 strings is the same
    strings=1,
    temperature_model_parameters=pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_polymer'],
)
system = PVSystem(arrays=[array1], inverter_parameters={'pdc0': 24475, 'eta_inv_nom': 0.96})
modelChain = ModelChain(system, locationValley, clearsky_model='ineichen', aoi_model='no_loss', spectral_model="no_loss")
modelChain.run_model(csPowerValley)
</code></pre>
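<p>For context, a common post-processing sketch (this is not a built-in option of <code>get_clearsky()</code>): fetch a horizon profile, for example with <code>pvlib.iotools.get_pvgis_horizon</code> in recent pvlib versions, interpolate it at each solar azimuth, and zero the direct component whenever the sun is below the local horizon. A minimal numpy illustration with made-up horizon and solar-position numbers:</p>

```python
import numpy as np

# Hypothetical horizon profile: elevation angle (deg) versus azimuth (deg)
horizon_azimuth = np.array([0.0, 90.0, 180.0, 270.0, 360.0])
horizon_elevation = np.array([25.0, 30.0, 15.0, 35.0, 25.0])  # mountains

# Hypothetical solar positions and clear-sky components for three timestamps
sun_azimuth = np.array([120.0, 180.0, 240.0])
sun_elevation = np.array([10.0, 20.0, 40.0])
dni = np.array([800.0, 850.0, 900.0])
dhi = np.array([100.0, 110.0, 120.0])

# Horizon elevation seen in the direction of the sun
horizon_at_sun = np.interp(sun_azimuth, horizon_azimuth, horizon_elevation)

# Sun below the local horizon: the direct component is blocked
dni_adj = np.where(sun_elevation > horizon_at_sun, dni, 0.0)
ghi_adj = dhi + dni_adj * np.sin(np.radians(sun_elevation))
print(dni_adj)  # first timestamp is fully blocked by the 25° horizon
```

<p>The adjusted GHI/DNI/DHI columns could then be fed to the ModelChain instead of the raw clear-sky values.</p>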
|
<python><pvlib>
|
2024-05-09 22:11:20
| 1
| 865
|
Maxime Charrière
|
78,457,128
| 19,048,408
|
How do I configure Python "ruff" to sensibly format Polars?
|
<p>How do I configure Ruff to sensibly auto-format Python code for the Polars library?</p>
<p>With the default settings, it likes to left-align <code>.select</code>/<code>.with_columns</code> calls with many arguments. The following is how ruff/black both format it (which is difficult to read):</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

df = df.select(
    event_type=pl.col("log_entry")
    .str.extract(r"Event Type: ([\w ]+)")
    .cast(pl.Utf8),
    event_duration=pl.col("log_entry")
    .str.extract(r"Duration: (\d+)")
    .cast(pl.Int32),
)
</code></pre>
<p>This is roughly how I want it to be formatted when wrapping at 79 characters (note the extra indentation of the chained operations):</p>
<pre class="lang-py prettyprint-override"><code>df = df.select(
    event_type=pl.col("log_entry")
        .str.extract(r"Event Type: ([\w ]+)")
        .cast(pl.Utf8),
    event_duration=pl.col("log_entry")
        .str.extract(r"Duration: (\d+)")
        .cast(pl.Int32),
)
</code></pre>
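<p>For what it's worth, my current understanding (please correct me if wrong): Ruff's formatter, like Black, deliberately exposes no option for the continuation indent of chained method calls, so no <code>pyproject.toml</code> sketch along these lines can produce the desired style by itself:</p>

```toml
# pyproject.toml — sketch; these are real Ruff options, but none of them
# changes how chained-call continuation lines are indented
[tool.ruff]
line-length = 79

[tool.ruff.format]
indent-style = "space"
```

<p>The closest workaround I know of is to exempt specific blocks from formatting with <code># fmt: off</code> / <code># fmt: on</code> (or a trailing <code># fmt: skip</code>), which <code>ruff format</code> honors.</p>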
|
<python><python-polars><ruff>
|
2024-05-09 21:54:10
| 0
| 468
|
HumpbackWhale194
|
78,457,093
| 2,893,712
|
Pandas Map Multiple Columns With A Filter
|
<p>I have a dataframe like so (simplified for this example):</p>
<pre><code> Site LocationName Resource#
01 Test Name 5
01 Testing 6
02 California 10
02 Texas 11
...
</code></pre>
<p>Each site has its own mapping for <code>LocationName</code> and <code>Resource#</code>.</p>
<p>For example:</p>
<ul>
<li>Site 01 has a mapping of <code>{'Test Name': 'Another Test', 'Testing': 'RandomLocation'}</code> for <code>LocationName</code> and a mapping of <code>{5: 5000}</code> for <code>Resource#</code></li>
<li>Site 02 has a mapping of <code>{'California': 'CA-123'}</code> for <code>LocationName</code> and a mapping of <code>{10: '10A', 11: '11B'}</code> for <code>Resource#</code></li>
</ul>
<p>I am trying to map each site's columns using that site's own mappings. If a mapping does not exist, I want the field to be None/blank.</p>
<p>My ideal output is:</p>
<pre><code>Site# LocationName Resource#
01 Another Test 5000
01 RandomLocation
02 CA-123 10A
02 11B
</code></pre>
<p>My idea was to filter for each site and run map on the series</p>
<pre><code>df01 = df[df.Site == '01']
df01['LocationName'] = df01['LocationName'].map({'Test Name': 'Another Test', 'Testing': 'RandomLocation'})
</code></pre>
<p>But this raises a <code>SettingWithCopyWarning</code>, since I am performing these operations on a copy.</p>
<p>Is there a simple way to achieve this?</p>
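<p>One sketch that sidesteps the warning entirely: select with <code>.loc</code> plus a boolean mask and write each site's mapped values back in place (data and mappings copied from the question; <code>.map</code> turns unmatched keys into NaN, which gives the None/blank behaviour):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Site": ["01", "01", "02", "02"],
    "LocationName": ["Test Name", "Testing", "California", "Texas"],
    "Resource#": [5, 6, 10, 11],
})

loc_maps = {
    "01": {"Test Name": "Another Test", "Testing": "RandomLocation"},
    "02": {"California": "CA-123"},
}
res_maps = {"01": {5: 5000}, "02": {10: "10A", 11: "11B"}}

# object dtype so the column can hold ints, strings, and missing values
df["Resource#"] = df["Resource#"].astype(object)

for site in df["Site"].unique():
    mask = df["Site"] == site
    # writing through df.loc (not through a filtered copy) avoids the warning
    df.loc[mask, "LocationName"] = df.loc[mask, "LocationName"].map(loc_maps.get(site, {}))
    df.loc[mask, "Resource#"] = df.loc[mask, "Resource#"].map(res_maps.get(site, {}))
```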
|
<python><pandas><dictionary><pandas-settingwithcopy-warning>
|
2024-05-09 21:41:01
| 2
| 8,806
|
Bijan
|
78,456,984
| 552,247
|
socat to simulate noisy serial line
|
<p>For educational purposes I have some scenarios to deal with.<br>
The main one is to simulate a noisy serial line using socat (I currently have version 1.7.4).</p>
<p>I'll elaborate:<br>
I have software that implements protocols over serial lines. I need to test its reliability and performance against errors and communication problems; in short, against noisy channels.<br>
With socat I can create a "<em>clean</em>" virtual channel:</p>
<pre><code>socat -d -d -v -x pty,raw,echo=0,link=/tmp/ttyV1 pty,raw,echo=0,link=/tmp/ttyV2
</code></pre>
<p>I'm looking for a way for the data to be "<em>corrupted</em>" on the way between <code>ttyV1</code> and <code>ttyV2</code> (and vice versa).<br>
I was thinking of a python script that somehow receives the data sent to <code>ttyV1</code>, processes it, and then transmits it to <code>ttyV2</code>.<br>
I tried it by running 3 instances of socat:</p>
<ul>
<li><strong>socatX</strong>: is what creates pty with <code>socat -d -d -v -x pty,raw,echo=0,link=/tmp/ttyV1 pty,raw,echo=0,link=/tmp/ttyV2</code></li>
<li><strong>socat1</strong>: is <code>socat -d -d -v -x /tmp/ttyV1,raw,echo=0 EXEC:"python3 disturb_script.py",pty,raw,echo=0</code></li>
<li><strong>socat2</strong>: is <code>socat -d -d -v -x /tmp/ttyV2,raw,echo=0 EXEC:"python3 disturb_script.py",pty,raw,echo=0</code></li>
</ul>
<p>In this case, when sending data to <code>/tmp/ttyV1</code> (for example with <code>cat >/tmp/ttyV1</code>), I see the data flowing without modification. I see the data trace (thanks to the <code>-x -v</code> parameters of socat) only in the socatX session.<br>
I tried terminating socat1 and socat2, without touching socatX, and the data continues to flow. So I thought that sending the data to <code>/tmp/ttyV1</code> only involves socatX.</p>
<p>Then I changed the setup:</p>
<ul>
<li><strong>socat1</strong>: create the first pty with <code>socat -d -d -v -x pty,raw,echo=0,link=/tmp/ttyV1 EXEC:"python3 disturb_script.py",pty,raw,echo=0</code></li>
<li><strong>socat2</strong>: create the second pty with <code>socat -d -d -v -x pty,raw,echo=0,link=/tmp/ttyV2 EXEC:"python3 disturb_script.py",pty,raw,echo=0</code></li>
<li><strong>socatX</strong>: I would like it to link the two pty using <code>socat -d -d -v -x /tmp/ttyV1,raw,echo=0 /tmp/ttyV2,raw,echo=0</code></li>
</ul>
<p>In this case the data, again sent with <code>cat >/tmp/ttyV1</code>, does not arrive on <code>/tmp/ttyV2</code>, and I can see the data traces only on socat1.</p>
<p>The Python script is this:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import random

def disturb(data):
    """Disturb data by altering or deleting characters."""
    result = bytearray()
    for byte in data:
        if random.random() < 0.1:  # 10% chance to flip a bit in a byte
            byte ^= 1 << random.randint(0, 7)
        if random.random() > 0.05:  # 95% chance to keep the byte
            result.append(byte)
    return bytes(result)

def main():
    while True:
        data = sys.stdin.buffer.read(128)
        if not data:
            break
        # debug output goes to stderr: stdout is the relayed data stream
        print(f"Received data: {data}", file=sys.stderr)
        disturbed_data = disturb(data)
        print(f"Sending data: {disturbed_data}", file=sys.stderr)
        sys.stdout.buffer.write(disturbed_data)
        sys.stdout.buffer.flush()

if __name__ == '__main__':
    main()
</code></pre>
<p>How can I get to my goal?</p>
<p>PS:<br>
At the beginning I mentioned other scenarios.<br>
Another scenario is one where I have a <em>real</em> serial port; I would create a virtual serial port connected to it, with a script in between that processes (encrypts/decrypts) the data.</p>
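<p>In case socat can't be bent this way, the whole man-in-the-middle can also be built in Python alone: create both pty pairs with the standard <code>pty</code> module and relay between the master ends, corrupting in both directions. A sketch (link paths and probabilities are illustrative; real use would also want the slave ends put in raw mode):</p>

```python
import os
import pty
import random
import select

def disturb(data, p_flip=0.1, p_drop=0.05):
    """Flip a random bit in ~p_flip of the bytes and drop ~p_drop of them."""
    out = bytearray()
    for byte in data:
        if random.random() < p_flip:
            byte ^= 1 << random.randint(0, 7)
        if random.random() >= p_drop:
            out.append(byte)
    return bytes(out)

def relay_forever(link1="/tmp/ttyV1", link2="/tmp/ttyV2"):
    m1, s1 = pty.openpty()
    m2, s2 = pty.openpty()
    # Expose the slave ends under stable names, like socat's link= option
    for link, fd in ((link1, s1), (link2, s2)):
        if os.path.lexists(link):
            os.unlink(link)
        os.symlink(os.ttyname(fd), link)
    while True:  # shovel bytes both ways, corrupting as they pass
        ready, _, _ = select.select([m1, m2], [], [])
        for fd in ready:
            data = os.read(fd, 1024)
            os.write(m2 if fd == m1 else m1, disturb(data))
```

<p>(The encrypt/decrypt scenario has the same structure, with <code>disturb()</code> swapped out for the cipher.)</p>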
|
<python><pty><socat>
|
2024-05-09 21:06:23
| 1
| 1,598
|
mastupristi
|
78,456,944
| 10,470,517
|
Flask App running within Docker for File Upload
|
<p>I wrote the following Flask application, which collects a selected file and uploads its contents to an SQLite database. If I run the app locally, it works fine: the table is created in the database and all the data is uploaded. However, if I run the app using Docker, I get a 200 on file upload but do not see the table or the data in the database.</p>
<p>Here is the code:</p>
<pre><code>import os
import sqlite3
from flask import Flask, flash, jsonify, request, redirect, render_template
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.secret_key = "secret key"
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024

path = os.getcwd()
uploadFolder = 'app/uploads'
os.makedirs(uploadFolder, exist_ok=True)
app.config['UPLOAD_FOLDER'] = uploadFolder
allowedExtensions = set(['txt'])

# Initialize SQLite database
#database = os.path.join(path, 'file_uploads.db')
database = 'app/uploads/file_uploads.db'

# Health check to see if the service is active
@app.route('/healthCheck', methods=['GET'])
def checkStatus():
    response = {
        'healthCheck': 'Flask service is up and running!'
    }
    return jsonify(response), 200

def createTable():
    conn = sqlite3.connect(database)
    c = conn.cursor()
    c.execute('''CREATE TABLE IF NOT EXISTS words
                 (id INTEGER PRIMARY KEY AUTOINCREMENT,
                  word TEXT NOT NULL,
                  filename TEXT NOT NULL,
                  filepath TEXT NOT NULL)''')
    conn.commit()
    conn.close()

createTable()

def allowedFile(filename):
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in allowedExtensions

@app.route('/')
def uploadForm():
    return render_template('upload.html')

@app.route('/', methods=['POST'])
def uploadFile():
    if request.method == 'POST':
        if 'file' not in request.files:
            flash('No file part')
            return redirect(request.url)
        file = request.files['file']
        if file.filename == '':
            flash('No file selected for uploading')
            return redirect(request.url)
        if file and allowedFile(file.filename):
            filename = secure_filename(file.filename)
            filepath = os.path.join(app.config['UPLOAD_FOLDER'], filename)
            file.save(filepath)
            # Read content of the file and split into words
            with open(filepath, 'r') as f:
                content = f.read()
                words = content.split()
            # Insert each word into the SQLite database
            conn = sqlite3.connect(database)
            c = conn.cursor()
            for word in words:
                c.execute("INSERT INTO words (word, filename, filepath) VALUES (?, ?, ?)", (word, filename, filepath))
            conn.commit()
            conn.close()
            flash('File successfully uploaded and words saved to database')
            return render_template('upload.html')
        else:
            flash('Allowed file types are txt')
            return redirect(request.url)

# Route to get word by ID
@app.route('/word/<int:id>', methods=['GET'])
def getWordById(id):
    conn = sqlite3.connect(database)
    c = conn.cursor()
    c.execute("SELECT word FROM words WHERE id=?", (id,))
    word = c.fetchone()
    conn.close()
    if word:
        return jsonify({'id': id, 'word': word[0]}), 200
    else:
        return jsonify({'error': 'Word not found'}), 404

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
</code></pre>
<p>Can someone please help me figure out why the table is not visible in SQLite, even though a 200 appears in the log for the file upload?</p>
<p>Here is the expected output:</p>
<p><a href="https://i.sstatic.net/2SlqV3M6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2SlqV3M6.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/A2Sm2gm8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2Sm2gm8.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/yro8e2C0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yro8e2C0.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/GUuxWSQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GUuxWSQE.png" alt="enter image description here" /></a></p>
<p>I am unable to accomplish the expected output in SQLite when running the app using Docker. What am I doing wrong?</p>
<p>Here are the contents of the docker file:</p>
<pre><code># Use an official Python runtime as a parent image
FROM python:3.8-slim
# Set the working directory in the container
WORKDIR /app
# Copy the Flask application directory into the container at /app
COPY . /app
# Install any needed dependencies specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Make port 5000 available to the world outside this container
EXPOSE 5000
# Set the upload folder as a volume
VOLUME /app/uploads
# Define environment variable
ENV FLASK_APP=test.py
# Run the Flask application when the container launches
CMD ["flask", "run", "--host=0.0.0.0"]
</code></pre>
<p>Makefile Code:</p>
<pre><code># Variables
DOCKER_IMAGE_NAME = my-flask-app
DOCKER_CONTAINER_NAME = my-flask-container
TEST_FILE = upload/word.txt

# Targets
build:
	docker build -t $(DOCKER_IMAGE_NAME) .

run:
	docker run -d -p 5000:5000 --name $(DOCKER_CONTAINER_NAME) $(DOCKER_IMAGE_NAME)

test: build
	docker run $(DOCKER_IMAGE_NAME) python -u test.py

upload:
	curl -X POST -F "file=@$(TEST_FILE)" http://127.0.0.1:5000/

stop:
	docker stop $(DOCKER_CONTAINER_NAME)
	docker rm $(DOCKER_CONTAINER_NAME)

.PHONY: build run stop test upload
</code></pre>
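<p>For whoever hits the same thing, a sketch of the suspicion I would check first: the bare <code>VOLUME /app/uploads</code> makes Docker mount an <em>anonymous</em> volume there, so the SQLite file the container writes lives inside Docker's storage, not in the host's working directory. A bind mount in the <code>run</code> target would make both sides see the same file (assuming the host data lives under <code>./app/uploads</code>):</p>

```makefile
# Replacement "run" target — sketch, paths illustrative
run:
	docker run -d -p 5000:5000 \
		-v $(PWD)/app/uploads:/app/uploads \
		--name $(DOCKER_CONTAINER_NAME) $(DOCKER_IMAGE_NAME)
```

<p>Alternatively, <code>docker volume inspect</code> on the anonymous volume shows where the database actually ended up inside Docker's storage.</p>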
|
<python><docker><sqlite><flask><makefile>
|
2024-05-09 20:54:55
| 1
| 419
|
caliGeek
|
78,456,849
| 1,100,107
|
Recursive R function and its Python translation behave differently
|
<p>Here is a recursive R function involving a matrix <code>S</code> from the parent environment:</p>
<pre class="lang-r prettyprint-override"><code>f <- function(m, k, n) {
  if(n == 0) {
    return(100)
  }
  if(m == 1) {
    return(0)
  }
  if(!is.na(S[m, n])) {
    return(S[m, n])
  }
  s <- f(m-1, 1, n)
  i <- k
  while(i <= 5) {
    if(n > 2) {
      s <- s + f(m, i, n-1)
    } else {
      s <- s + f(m-1, 1, n-1)
    }
    i <- i + 1
  }
  if(k == 1) {
    S[m, n] <- s
  }
  return(s)
}
</code></pre>
<p>Here is the result of a call to this function:</p>
<pre class="lang-r prettyprint-override"><code>> n <- 4
> S <- matrix(NA_real_, nrow = n, ncol = n)
> f(n, 1, n)
[1] 127500
</code></pre>
<p>Now, here is the Python version:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def f(m, k, n):
    if n == 0:
        return 100
    if m == 1:
        return 0
    if S[m-1, n-1] is not None:
        return S[m-1, n-1]
    s = f(m-1, 1, n)
    i = k
    while i <= 5:
        if n > 2:
            s = s + f(m, i, n-1)
        else:
            s = s + f(m-1, 1, n-1)
        i = i + 1
    if k == 1:
        S[m-1, n-1] = s
    return s
</code></pre>
<p>The call:</p>
<pre class="lang-py prettyprint-override"><code>>>> n = 4
>>> S = np.full((n, n), None)
>>> f(n, 1, n)
312500
</code></pre>
<p>The Python result is different from the R result. Why?</p>
<p>One gets identical results if one replaces</p>
<pre class="lang-py prettyprint-override"><code>    if S[m-1, n-1] is not None:
        return S[m-1, n-1]
</code></pre>
<p>with</p>
<pre class="lang-py prettyprint-override"><code>    if k == 1 and S[m-1, n-1] is not None:
        return S[m-1, n-1]
</code></pre>
<p>I also have a C++ version of this function, and it has the same behavior as the R function.</p>
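<p>A plain-Python cross-check of my understanding (not part of the original code): in R, <code>S[m, n] &lt;- s</code> modifies a <em>local copy</em> of <code>S</code> inside that call frame (copy-on-modify), while the recursive calls keep reading the untouched global <code>S</code>, so the R version effectively runs without memoization. Dropping the memo from the Python version indeed reproduces the R result:</p>

```python
# Same recursion as the Python version, but with the cache removed,
# i.e. the net effect of R's copy-on-modify assignment to S
def f(m, k, n):
    if n == 0:
        return 100
    if m == 1:
        return 0
    s = f(m - 1, 1, n)
    for i in range(k, 6):  # i = k, ..., 5
        if n > 2:
            s += f(m, i, n - 1)
        else:
            s += f(m - 1, 1, n - 1)
    return s

print(f(4, 1, 4))  # → 127500, matching R
```

<p>(To make the R version actually share the cache one would need <code>S[m, n] &lt;&lt;- s</code>; to make the Python version match R, don't write the global cache, or only trust entries written under the same <code>k</code>, as the edit in the question shows.)</p>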
|
<python><r><recursion>
|
2024-05-09 20:26:16
| 5
| 85,219
|
Stéphane Laurent
|
78,456,828
| 13,392,257
|
Selenium emulate site you came from
|
<p>When I open the URL directly via the link <a href="https://kinoxor.pro/650-mir-druzhba-zhvachka-2024-05-06-19-54.html" rel="nofollow noreferrer">https://kinoxor.pro/650-mir-druzhba-zhvachka-2024-05-06-19-54.html</a>, I get an error: <code>Internal Server Error</code>.</p>
<p><a href="https://i.sstatic.net/5175p7IH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5175p7IH.png" alt="enter image description here" /></a></p>
<p>But when I paste the link to the search engine <a href="https://yandex.ru/search/?text=https%3A%2F%2Fkinoxor.pro%2F650-mir-druzhba-zhvachka-2024-05-06-19-54.html&search_source=dzen_desktop_safe&lr=213" rel="nofollow noreferrer">https://yandex.ru/search/?text=https%3A%2F%2Fkinoxor.pro%2F650-mir-druzhba-zhvachka-2024-05-06-19-54.html&search_source=dzen_desktop_safe&lr=213</a></p>
<p>I can open the site (the first site in the results)
<a href="https://i.sstatic.net/fdXh5C6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fdXh5C6t.png" alt="enter image description here" /></a></p>
<p>I am opening the site with Selenium:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Chrome()
url = "https://kinoxor.pro/650-mir-druzhba-zhvachka-2024-05-06-19-54.html"
driver.get(url)
</code></pre>
<p>I think I should emulate the site I came from (by adding some headers / meta-information); that is, I should make it look as if my previous site was yandex.ru (I don't want to actually visit Yandex).</p>
<p>Is it possible in Python?</p>
|
<python><parsing><selenium-webdriver><web-scraping>
|
2024-05-09 20:19:32
| 1
| 1,708
|
mascai
|
78,456,761
| 1,015,703
|
Example of meson-python Cython package with src layout
|
<p>I am having trouble finding a minimal example of a Cython project that uses meson-python for packaging and is set up using the "src layout" (as described <a href="https://packaging.python.org/en/latest/discussions/src-layout-vs-flat-layout/" rel="nofollow noreferrer">here</a>).</p>
<p>I have seen examples of meson-python projects, Cython projects, and src layout, but not one that includes all of these elements.</p>
<p>Can someone point me to one, or show how to set up the src layout directory structure, pyproject.toml file, and meson.build files to handle Cython?</p>
<p>I will also need a "tests" directory; where does this go, and how can it import the built package?</p>
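<p>In case it helps, here is the kind of skeleton I am picturing (all names are placeholders, and the meson snippets are an untested sketch based on meson's <code>python</code> module and its Cython language support):</p>

```meson
# Layout (src layout):
#   pyproject.toml
#   meson.build
#   src/mypkg/__init__.py
#   src/mypkg/_core.pyx
#   src/mypkg/meson.build
#   tests/test_core.py

# meson.build (project root)
project('mypkg', 'c', 'cython', version: '0.1.0')
py = import('python').find_installation(pure: false)
subdir('src/mypkg')

# src/mypkg/meson.build
py.install_sources('__init__.py', subdir: 'mypkg')
py.extension_module('_core', '_core.pyx', install: true, subdir: 'mypkg')
```

<p>With <code>build-backend = "mesonpy"</code> in <code>pyproject.toml</code>, <code>pip install .</code> builds the extension; <code>tests/</code> stays outside the package and imports the <em>installed</em> <code>mypkg</code>, which is exactly what the src layout is meant to enforce.</p>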
|
<python><cython><python-packaging><meson-build>
|
2024-05-09 20:02:20
| 1
| 1,625
|
David H
|
78,456,707
| 11,516,350
|
Flask SQL, Alembic and SQLite: how to avoid deleting an object when it is referenced (Foreign Key)
|
<p>Given 2 tables: transactions and categories.</p>
<p>Transactions contains a FK category_id with the ID of assigned category.</p>
<p>I want to disallow deleting categories that are referenced by transactions. I mean: if there is a transaction with category_id 3, for example, I want the DB to return an error if I delete the category with ID 3, like other DB engines (Oracle, MySQL...) do.</p>
<p>But in my case I'm able to delete it. The transaction still contains the category_id, but that ID no longer exists in the categories table.</p>
<p>As mentioned in the title, I am using Flask-SQLAlchemy to create my models and Alembic for database migrations, with an SQLite database.</p>
<p>My sqlalchemy models:</p>
<pre><code>class TransactionModel(db.Model):
    __tablename__ = 'transactions'

    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    date = db.Column(db.Date, nullable=False)
    amount = db.Column(db.Numeric(precision=10, scale=2), nullable=False)
    concept = db.Column(db.String(100), nullable=False)
    category_id = db.Column(db.Integer, db.ForeignKey('categories.id', ondelete='RESTRICT'), nullable=True)
    category = db.relationship("CategoryModel")
</code></pre>
<pre><code>class CategoryModel(db.Model):
    __tablename__ = 'categories'

    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    name = db.Column(db.String(24), nullable=False)
    description = db.Column(db.String(128), nullable=False)
</code></pre>
<p>With those models, these are the generated migration scripts:</p>
<pre><code>def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('categories',
        sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
        sa.Column('name', sa.String(length=24), nullable=False),
        sa.Column('description', sa.String(length=128), nullable=False),
        sa.PrimaryKeyConstraint('id')
    )
    op.create_table('transactions',
        sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
        sa.Column('date', sa.Date(), nullable=False),
        sa.Column('amount', sa.Numeric(precision=10, scale=2), nullable=False),
        sa.Column('concept', sa.String(length=100), nullable=False),
        sa.Column('category_id', sa.Integer(), nullable=True),
        sa.ForeignKeyConstraint(['category_id'], ['categories.id'], ondelete='RESTRICT'),
        sa.PrimaryKeyConstraint('id')
    )
    # ### end Alembic commands ###

def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('transactions')
    op.drop_table('categories')
    # ### end Alembic commands ###
</code></pre>
<p>When I execute the migration, these are the generated tables:</p>
<pre><code>CREATE TABLE categories (
    id INTEGER NOT NULL,
    name VARCHAR(24) NOT NULL,
    description VARCHAR(128) NOT NULL,
    PRIMARY KEY (id)
)

CREATE TABLE transactions (
    id INTEGER NOT NULL,
    date DATE NOT NULL,
    amount NUMERIC(10, 2) NOT NULL,
    concept VARCHAR(100) NOT NULL,
    category_id INTEGER,
    PRIMARY KEY (id),
    FOREIGN KEY(category_id) REFERENCES categories (id) ON DELETE RESTRICT
)
</code></pre>
<p>The behavior is quite strange:</p>
<p>Given 2 categories (IDs 1 and 2) and 2 transactions, transaction 1 with category_id=1 and transaction 2 with category_id=2: if I delete category 2, transaction 2 still has category_id=2. Then, if I create a new category, it does not take the new ID 3; it gets ID 2, because category 2 no longer exists. The result? Transaction 2 now appears linked to the newest category, because that category reuses the "old" ID 2.</p>
<p>But given the same example (2 categories, 2 transactions), if I delete category 1 instead, the new category does get ID 3, because category 2 still exists.</p>
<p>My goals:</p>
<ul>
<li>Prevent deleting categories 1 and 2 because they are referenced in the transactions table. A category should only be deletable if it is not referenced by any table.</li>
<li>Prevent the reuse of old IDs. If I delete category 2, ID 2 should never be used again, like sequences in Oracle, for example.</li>
</ul>
<p>Yes, I can do this in my application logic. But I want it to work properly in the database too, in order to reduce human errors when the database is queried directly.</p>
<p>EDIT:</p>
<p>I have created a manual migration with this:</p>
<pre><code>def upgrade():
    op.execute("PRAGMA foreign_keys = ON")
</code></pre>
<p>And it still does not work.</p>
<p>Checking the database, I noticed that foreign_keys was still not enabled (I don't know why, since the migration file was executed correctly...).</p>
<p>But then I executed <code>PRAGMA foreign_keys = ON</code> directly on the database; now foreign_keys shows as 1, but the behavior is still the same.</p>
<p>Based on: <a href="https://www.sqlite.org/foreignkeys.html" rel="nofollow noreferrer">https://www.sqlite.org/foreignkeys.html</a></p>
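<p>For reference, the detail that seems to matter here: in SQLite, <code>PRAGMA foreign_keys</code> is <em>per connection</em>, so setting it once in a migration does nothing for later connections. A sketch using a SQLAlchemy engine event (the recipe from the SQLAlchemy docs; the in-memory URL is just for illustration):</p>

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")  # in-memory, for illustration

# PRAGMA foreign_keys only affects the connection it runs on, so emit it
# every time a new DBAPI connection is opened
@event.listens_for(engine, "connect")
def _enable_sqlite_fks(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE categories (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"))
    conn.execute(text(
        "CREATE TABLE transactions (id INTEGER PRIMARY KEY, "
        "category_id INTEGER REFERENCES categories(id) ON DELETE RESTRICT)"
    ))
    conn.execute(text("INSERT INTO categories (name) VALUES ('food')"))
    conn.execute(text("INSERT INTO transactions VALUES (1, 1)"))
```

<p>Deleting the referenced category now fails with a foreign-key error. For the second goal (never reusing IDs), SQLite only guarantees that with <code>INTEGER PRIMARY KEY AUTOINCREMENT</code>, which SQLAlchemy does not emit by default; the <code>sqlite_autoincrement=True</code> table argument adds it.</p>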
|
<python><sqlite><sqlalchemy><relational-database><alembic>
|
2024-05-09 19:49:11
| 0
| 1,347
|
UrbanoJVR
|
78,456,670
| 5,563,616
|
How to convert hierarchical TCL string value containing dictionaries into an equivalent Python hierarchical value?
|
<p>I have hierarchical TCL values represented as strings like this: <code>a {x y v w} b {i j k m}</code>.</p>
<p>The above value contains 3 dictionary objects, which can be queried using TCL expressions.</p>
<p>I need to convert a value like this to a corresponding Python hierarchical value. For the above string the Python value would be: <code>{'a': {'x': 'y', 'v': 'w'}, 'b': {'i': 'j', 'k': 'm'}}</code>.</p>
<p>The standard Python module tkinter can convert TCL values to Python values.</p>
<p>But can it convert the hierarchical value, like I provided above, to its Python equivalent?</p>
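<p>For the record, tkinter can do the splitting step: <code>tkinter.Tcl().splitlist("a {x y v w} b {i j k m}")</code> yields the top-level words, but the recursion into nested dicts is still up to you, since a Tcl string does not say which words are themselves dictionaries. A dependency-free sketch of the same idea, assuming only braces and spaces (no quoting or backslashes) and treating any word that contains a space as a nested dict:</p>

```python
def tcl_split(s):
    """Split a Tcl list string into top-level words, honouring {...} grouping."""
    words, buf, depth = [], [], 0
    for ch in s:
        if ch == "{":
            if depth > 0:
                buf.append(ch)
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth > 0:
                buf.append(ch)
        elif ch == " " and depth == 0:
            if buf:
                words.append("".join(buf))
                buf = []
        else:
            buf.append(ch)
    if buf:
        words.append("".join(buf))
    return words

def tcl_to_py(value):
    """Turn 'a {x y v w} b {i j k m}' into nested Python dicts."""
    words = tcl_split(value)
    if len(words) < 2 or len(words) % 2:
        return value  # not key/value shaped: keep as a plain string
    return {k: (tcl_to_py(v) if " " in v else v)
            for k, v in zip(words[::2], words[1::2])}

print(tcl_to_py("a {x y v w} b {i j k m}"))
# → {'a': {'x': 'y', 'v': 'w'}, 'b': {'i': 'j', 'k': 'm'}}
```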
|
<python><tcl>
|
2024-05-09 19:42:24
| 2
| 1,682
|
Jennifer M.
|
78,456,651
| 4,418,481
|
plotly-resampler with plotly offline figure
|
<p>I created a simple PyQt app that reads CSV files and shows their data in a plot. I decided to display the figures using Plotly offline, so I display them in a PyQt widget (QWebEngineView) by providing the HTML of the figure:</p>
<pre><code>traces = self.prepare_traces()
layout = go.Layout(xaxis=dict(title=self.x_label, range=self.x_axis_range),
yaxis=dict(title=self.y_label))
self.fig = go.Figure(data=traces, layout=layout)
self.update_figure_layout(self.x_label, self.y_label)
plot_div = self.render_plot_html()
return self.wrap_plot_in_html(plot_div)
</code></pre>
<p>where wrap_plot_in_html is like this:</p>
<pre><code>return f"""
    <html>
        <head>
            <script src='https://cdn.plot.ly/plotly-latest.min.js'></script>
        </head>
        <body style='margin:0; padding:0; background-color:{self.data_handler.figure_bg_color};'>
            {plot_div}
        </body>
    </html>
"""
</code></pre>
<p>So far everything works well.</p>
<p>However, I came to a situation where the data that I want to display is relatively big and every user interaction with the figure makes it lag and react very slowly.</p>
<p>I saw that there is a library called Plotly-resampler that basically helps with data aggregation based on the zoom level of the figure and more.</p>
<p>From the tutorial, all I need to change is basically:</p>
<pre><code>self.fig = FigureWidgetResampler(go.Figure(data=traces, layout=layout))
</code></pre>
<p>So I tried it, and indeed the displayed figure has some of the resampler features (for example, I can see the [R] in the legend). However, it is not interactive: when I zoom the figure, it does not actually use the resampler as it should. That makes some sense to me, since I loaded the HTML of the initial state, but I wonder if there is anything I can do to make it work.</p>
<p>Thank you</p>
|
<python><plotly><plotly-resampler>
|
2024-05-09 19:36:44
| 0
| 1,859
|
Ben
|
78,456,371
| 10,818,030
|
Error: the command pytest could not be found within PATH
|
<p>I am trying to run a suite using pytest with this command:</p>
<p><code>pytest -n8 --dc=us best_practice/desktop_web/</code></p>
<p>However, when I try to execute the run, I get the error:</p>
<p><code>Error: the command pytest (from best-practice-desktop-us) could not be found within PATH.</code></p>
<p>I verified it's installed by running <code>pip list</code>, and I see it listed there as a package.</p>
<p>I attempted adding it to my path with this line but that didn't work either:</p>
<p><code>export PATH="/Users/mikejones/Library/Python/3.12/lib/python/site-packages/pytest:$PATH" </code></p>
<p><a href="https://github.com/saucelabs-training/demo-python" rel="nofollow noreferrer">This</a> is the repo I am attempting to run. It should work, so it's presumably an issue with my shell not finding the directory that contains the pytest executable.</p>
<p>Anyone know what to do to resolve this?</p>
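<p>For anyone else landing here, two generic sketches (the exact path varies per machine): PATH entries must be <em>bin</em> directories that contain executables, not <code>site-packages</code> package directories, and <code>python -m pytest</code> avoids the PATH lookup altogether.</p>

```shell
# Ask Python where console scripts like "pytest" get installed:
python3 -c "import sysconfig; print(sysconfig.get_path('scripts'))"

# Then add *that* directory (example path) to PATH instead of site-packages:
#   export PATH="$HOME/Library/Python/3.12/bin:$PATH"

# Or bypass PATH lookup entirely:
#   python3 -m pytest -n8 --dc=us best_practice/desktop_web/
```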
|
<python><path><pytest>
|
2024-05-09 18:37:03
| 0
| 657
|
Casey Cling
|
78,456,247
| 4,159,461
|
How to start python-telegram-bot in a Thread
|
<p>I need to run the Telegram bot in a separate thread to keep the main thread free, but I'm having a hard time getting this to work.</p>
<p>Here is the piece of code that I have used to test:</p>
<p>bot_thread.py</p>
<pre><code>import logging
import asyncio
from telegram import Update
from telegram.ext import Application, ContextTypes, CommandHandler
from threading import Thread
from multiprocessing import Process

logging.basicConfig(
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    level=logging.INFO
)

class Bot(Thread):
    async def start_msg(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
        await context.bot.send_message(chat_id=update.effective_chat.id, text="I'm a bot, please talk to me!")

    def run(self):
        async def hello(update: Update, context: ContextTypes.DEFAULT_TYPE):
            await context.bot.send_message(chat_id=update.effective_chat.id, text="Starting bot")

        key = open('../data/keys', 'r').read()
        application = Application.builder().token(key).build()
        job_queue = application.job_queue
        job_queue.run_once(hello, 90)
        start_handler = CommandHandler('start', self.start_msg)
        application.add_handler(start_handler)
        application.run_polling()
</code></pre>
<p>bot.py</p>
<pre><code>from bot_thread import Bot
Bot().start()
</code></pre>
<p>When it tries to start the bot, it gives an error
(<code>RuntimeError: There is no current event loop in thread 'Thread-1'.</code>):</p>
<pre><code>> python .\bot_wrapper.py
2024-05-09 15:04:13,916 - apscheduler.scheduler - INFO - Adding job tentatively -- it will be properly scheduled when the scheduler starts
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\threading.py", line 1045, in _bootstrap_inner
self.run()
File "bot_thread.py", line 32, in run
application.run_polling()
File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\telegram\ext\_application.py", line 838, in run_polling
return self.__run(
^^^^^^^^^^^
File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\telegram\ext\_application.py", line 1020, in __run
loop = asyncio.get_event_loop()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\asyncio\events.py", line 677, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'Thread-1'.
C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\threading.py:1047: RuntimeWarning: coroutine 'Updater.start_polling' was never awaited
self._invoke_excepthook(self)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
</code></pre>
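<p>For completeness, the direction I would try (a sketch; the rest of <code>run()</code> stays as in the question): <code>run_polling()</code> calls <code>asyncio.get_event_loop()</code>, which auto-creates a loop only in the main thread, so a worker thread must create and register its own loop first:</p>

```python
import asyncio
from threading import Thread

class Bot(Thread):
    def run(self):
        # A fresh event loop, registered as *this* thread's current loop,
        # so asyncio.get_event_loop() inside run_polling() can find it
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        # ... build the Application exactly as before, then:
        # application.run_polling()
```

<p>(Newer python-telegram-bot versions also let you manage the loop yourself, e.g. via <code>run_polling(close_loop=False)</code>; the key point is that the thread needs a current event loop before <code>run_polling()</code> runs.)</p>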
|
<python><multithreading><telegram-bot>
|
2024-05-09 18:11:10
| 0
| 571
|
DevBush
|
78,456,203
| 11,233,365
|
Pip install failing on pip >22.2.2 when installing packages from PyPi via proxy, but not from PyPi directly
|
<p><strong>UPDATE:</strong>
After further study, I found that I had grossly misunderstood the problem I'm encountering, and I have re-written the question to reflect the correct, up-to-date details.</p>
<p>================</p>
<p>I am trying to install a Python package on a local machine that does not have direct access to the internet for security reasons. In the past, we have done our installations via a proxy REST API with full internet access that copies the packages across from PyPI.</p>
<p>It has become quite apparent in recent months that the process is no longer compatible with recent versions of <code>pip</code>. Running <code>pip install</code> via the proxy returns the following error:</p>
<pre class="lang-none prettyprint-override"><code>$ pip install --trusted-host url-to-proxy.com --index-url http://url-to-proxy.com:8000/pypi --extra-index-url http://url-to-proxy.com:8000/pypi package-name[optional-dependencies]
Looking in indexes: http://url-to-proxy.com:8000/pypi, http://url-to-proxy.com:8000/pypi
Collecting package-name[optional-dependencies]
Obtaining dependency information for package-name[optional-dependencies] from http://url-to-proxy.com:8000/pypi/package-name/package-name-py3-none-any.whl.metadata
$ ERROR: HTTP error 404 while getting http://url-to-proxy.com:8000/pypi/package-name/package-name-0.0.0-py3-none-any.whl.metadata
$ ERROR: 404 Client Error: Not Found for http://url-to-proxy.com:8000/pypi/package-name/package-name-0.0.0-py3-none-any.whl.metadata
</code></pre>
<p>When I try and run the same installation of the package using the URL from PyPi, it works without issue:</p>
<pre class="lang-none prettyprint-override"><code>$ pip install --trusted-host https://pypi.org/simple/ --index-url https://pypi.org/simple/ --extra-index-url https://pypi.org/simple/ package-name[optional-dependencies]
</code></pre>
<p>The <code>package.tar.gz</code> and <code>package-py3-none-any.whl</code> files on the proxy appear to be identical to those on PyPI, and comparing the contents yields no noticeable differences. <code>pip v22.2.2</code> is the highest version for which this installation via proxy completes without issue, so the URLs should be correct.</p>
<p><strong>This leads me to believe that something must have changed from <code>pip v22.2.2</code> to <code>pip v22.3</code> that breaks its ability to communicate with the proxy properly to get the <code>.whl.metadata</code> needed.</strong></p>
<p>What might have changed, and how might I make the process compatible with newer versions of <code>pip</code>? Thanks!</p>
|
<python><pip>
|
2024-05-09 17:59:54
| 0
| 301
|
TheEponymousProgrammer
|
78,456,176
| 10,500,957
|
Sharing Class Variables in Python
|
<p>After looking through many posts of class variables and instance variables I am not understanding how to share variables among classes.</p>
<pre><code>#!/usr/bin/python3
from PyQt5.QtWidgets import (QApplication)
import sys
class One():
def __init__(self, myTextString, parent=None):
self.mytext = myTextString
self.printtxt()
Main.txt = 'another string'
self.txt2 = 'a third string'
def printtxt(self):
print('One',self.mytext)
class Two():
def __init__(self, parent=None):
self.printtxt()
def printtxt(self):
print('Two',One.txt) # Attribute Error
print('Two',One.txt2) # Attribute Error
class Main():
def __init__(self, parent = None):
self.txt = 'a string of text'
self.w = One(self.txt)
self.w1 = Two()
print(self.txt) # not changed by One
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = Main()
sys.exit(app.exec_())
</code></pre>
<p>In the above code the Main class initiates class One and class Two. While I can send the value of self.txt to the initiated class One, there are two problems I am having.</p>
<ol>
<li>When I try to print One's variables in class Two's printtxt I get attribute errors:</li>
</ol>
<p>AttributeError: type object 'One' has no attribute 'mytext'</p>
<p>AttributeError: type object 'One' has no attribute 'txt2'</p>
<ol start="2">
<li>When I try and change Main's txt variable in One, it is not changed</li>
</ol>
<p>I would like to share a value created in any particular class with all the other classes that are created (in other words 'global' variables up to a point).</p>
<p>The controller concept in Bryan Oakley's Tkinter example (see <a href="https://stackoverflow.com/questions/7546050/switch-between-two-frames-in-tkinter/7557028">Switch between two frames in tkinter?</a> and <a href="https://stackoverflow.com/questions/32212408/how-to-get-variable-data-from-a-class/32213127">How to get variable data from a class?</a>) is close I think, but if anyone can answer or point me to posts that explain this simply, I'd be grateful.
Thank you for any help.</p>
<p>Subsequently, I looked at Bryan Oakley's controller concept again. My project involves merging 4 of my working programs into one and having these different parts interact, without much re-coding, and the controller concept seems the simplest and best solution. So this code, based on Oakley's code, might help someone in the future:</p>
<pre><code>#!/usr/bin/python3
class SampleApp():
def __init__(self, *args, **kwargs):
self.app_data = 'One'
for F in (PageOne, PageTwo):
frame = F(controller=self)
class PageOne():
def __init__(self, controller):
self.controller = controller
value = self.controller.app_data
print('Page1', value)
class PageTwo():
def __init__(self, controller):
self.controller = controller
self.controller.app_data='Two'
value = self.controller.app_data
print('Page2',value)
SampleApp()
</code></pre>
|
<python><class><variables><instance>
|
2024-05-09 17:53:55
| 1
| 322
|
John
|
78,456,105
| 15,277,591
|
How to scrape HTML with Python?
|
<p>I'm working on a Python script to scrape data from this page: <a href="https://www.immobiliare.it/search-list/?criterio=rilevanza&__lang=it&idContratto=1&idCategoria=1&raggio=300&centro=45.185878%2C9.156321&pag=1#1466423673" rel="nofollow noreferrer">https://www.immobiliare.it/search-list/?criterio=rilevanza&__lang=it&idContratto=1&idCategoria=1&raggio=300&centro=45.185878%2C9.156321&pag=1#1466423673</a></p>
<p>I've written this script, but the HTML I get is "different" from what I see in the browser, making it impossible for me to parse the data on the site. How can I solve this?</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import lxml
url = 'https://www.immobiliare.it/search-list/?idContratto=1&idCategoria=1&criterio=prezzo&ordine=asc&__lang=it&raggio=300&centro=45.185878%2C9.156321&pag=1'
content = requests.get(url)
soup = BeautifulSoup(content.text, "lxml")
print(soup.prettify())
title_element = soup.find("div", class_="in-realEstateListHeader__title")
print(title_element)
</code></pre>
|
<python><web-scraping><beautifulsoup><python-requests>
|
2024-05-09 17:37:29
| 1
| 342
|
CastoldiG
|
78,456,079
| 234,146
|
why does numpy complain about matplotlib install errors
|
<p>I'm testing a python 3.7-32 installation using pip to install numpy. During the clean install of python in an empty directory using</p>
<pre><code>pip -install numpy-1.21.6-cp37-cp37m-win32.whl
</code></pre>
<p>I get the following errors:</p>
<pre><code>ERROR: matplotlib 3.5.3 requires packaging>=20.0, which is not installed.
ERROR: matplotlib 3.5.3 requires pillow>=6.2.0, which is not installed.
ERROR: matplotlib 3.5.3 requires python-dateutil>=2.7, which is not installed.
</code></pre>
<p>But numpy doesn't even require matplotlib according to its metadata file. What is causing these errors?</p>
|
<python><numpy><matplotlib>
|
2024-05-09 17:30:25
| 0
| 1,358
|
Max Yaffe
|
78,455,901
| 2,707,955
|
How to find the mathematical function from list of plots
|
<p>I would like to find the mathematical function that could approximately fit all of the following points :</p>
<p><a href="https://i.sstatic.net/40AWOyLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/40AWOyLj.png" alt="plots" /></a></p>
<pre><code>x = [560, 387, 280, 231, 196, 168, 148, 136, 124, 112, 104, 101, 93, 88, 84, 80, 76]
y = [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90]
coefficients = np.polyfit(x, y, len(x) - 1)
polynomial = np.poly1d(coefficients)
</code></pre>
<p>But if I test this polynomial, I get wrong values :</p>
<pre><code>x1 = []
y1 = []
for i in range(276, 570, 1):
x1.append(i)
y1.append(polynomial(i))
plt.figure(figsize=(12,6))
plt.plot(x1, y1, 'o')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/zxwnLn5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zxwnLn5n.png" alt="wrong_plots" /></a></p>
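A degree-16 polynomial through 17 points is essentially guaranteed to oscillate wildly between the samples (Runge's phenomenon), which is why `polynomial(i)` looks wrong between the data points. Since x·y is roughly constant for these points, one plausible low-parameter model — an assumption, not the only possible choice — is y ≈ a/x + b, which `np.polyfit` can still fit as a straight line in the variable 1/x:

```python
import numpy as np

x = np.array([560, 387, 280, 231, 196, 168, 148, 136, 124, 112,
              104, 101, 93, 88, 84, 80, 76], dtype=float)
y = np.array([10, 15, 20, 25, 30, 35, 40, 45, 50, 55,
              60, 65, 70, 75, 80, 85, 90], dtype=float)

# fit y = a*(1/x) + b: a degree-1 polynomial in the variable u = 1/x
a, b = np.polyfit(1.0 / x, y, 1)

pred = a / x + b
print(round(float(np.abs(pred - y).max()), 2))  # worst-case error in degrees
```

If the residuals are still too large, a two- or three-parameter model such as y = a/x + b + c/x² fitted the same way (degree 2 in 1/x) is the next step, rather than raising the polynomial degree in x.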
|
<python><numpy>
|
2024-05-09 16:50:51
| 2
| 365
|
Aurélien
|
78,455,739
| 12,288,028
|
Selecting extreme temperature values from pandas dataframe column where selection process includes several complicating conditions
|
<p>Have weather data set that includes daily high temperature in Celsius in a pandas dataframe that is as simple as the date and daily high temperature (rounded to the tenth value). Here is a sample data set:</p>
<pre><code>data_dict = {
'dates': ['2023-07-01', '2023-07-02', '2023-07-03', '2023-07-04', '2023-07-05', '2023-07-06', '2023-07-07', '2023-07-08', '2023-07-09', '2023-07-10', '2023-07-11', '2023-07-12', '2023-07-13', '2023-07-14', '2023-07-15', '2023-07-16', '2023-07-17', '2023-07-18', '2023-07-19', '2023-07-20', '2023-07-21', '2023-07-22', '2023-07-23', '2023-07-24', '2023-07-25', '2023-07-26', '2023-07-27', '2023-07-28', '2023-07-29', '2023-07-30', '2023-07-31', '2023-08-01', '2023-08-02'],
'daily_high_temp': [39.1, 39.8, 40, 40.3, 40.4, 40.2, 40.4, 40.6, 41, 41.1, 40.9, 41.2, 40.9, 39.9, 41.2, 42, 42.3, 41.9, 40.7, 39.8, 41.1, 41.3, 40.9, 40.7, 40, 39.8, 41.2, 40.9, 39.6, 40.9, 41.4, 41.2, 41.4]
}
df = pd.DataFrame(data=data_dict)
</code></pre>
<p>Want to create another dataframe column 'extreme_highs' that logs extreme high temps with several conditions. Those conditions:</p>
<ol>
<li>For dates/temps not logged set value = 0</li>
<li>Only temps greater than 40 degrees Celsius are eligible for logging.</li>
<li>Starting at earliest date, identify temp greater than 40 degrees.</li>
<li>Considering that date/temp and the proceeding 3 days immediately following that date, log the date/temp with the max temp.</li>
<li>If two or more date/temps in the four-day window share the max temp, only log the date/temp in which the max temp first occurred (earliest occurrence)</li>
<li>Once a date/temp has been logged, the next six days are ineligible for logging. Only one date/temp in a seven day period is to be logged.</li>
</ol>
<p>Based on the data above and conditions listed, the resulting data should be (hopefully) produced:</p>
<p><a href="https://i.sstatic.net/UWYz6uED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UWYz6uED.png" alt="enter image description here" /></a></p>
<p>I have developed a solution to this in excel, but it is convoluted and complicated. Hoping that a simple "pythonic" solution can be shared because I am stumped! Thank you for your help!!!</p>
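Under one reading of the six rules (strictly greater than 40 °C, a four-day window starting at the triggering day, first occurrence of the window maximum, then six ineligible days after a logged date), and assuming the rows are consecutive daily observations, a straightforward index-walking sketch could look like this:

```python
import pandas as pd

temps = [39.1, 39.8, 40, 40.3, 40.4, 40.2, 40.4, 40.6, 41, 41.1, 40.9,
         41.2, 40.9, 39.9, 41.2, 42, 42.3, 41.9, 40.7, 39.8, 41.1, 41.3,
         40.9, 40.7, 40, 39.8, 41.2, 40.9, 39.6, 40.9, 41.4, 41.2, 41.4]
df = pd.DataFrame({'dates': pd.date_range('2023-07-01', periods=len(temps)),
                   'daily_high_temp': temps})

df['extreme_highs'] = 0.0
i = 0
while i < len(df):
    if df.loc[i, 'daily_high_temp'] > 40:
        # this day plus the next three; idxmax keeps the earliest maximum
        j = df.loc[i:min(i + 3, len(df) - 1), 'daily_high_temp'].idxmax()
        df.loc[j, 'extreme_highs'] = df.loc[j, 'daily_high_temp']
        i = j + 7  # the six days after a logged date are ineligible
    else:
        i += 1

print(df.loc[df['extreme_highs'] > 0, 'dates'].dt.strftime('%Y-%m-%d').tolist())
```

With this interpretation the logged dates are 07-05, 07-12, 07-22 and 07-31; if your expected output differs, the ambiguity is most likely in whether the six-day blackout starts at the logged date or at the end of the four-day window.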
|
<python><pandas><dataframe><conditional-statements>
|
2024-05-09 16:15:59
| 1
| 486
|
MC Hammerabi
|
78,455,734
| 160,245
|
python lxml.html.parse not reading url - or how to get request.get into lxml.html.dom?
|
<p>The same code below works for many webpages, but for a few like the one below, it gives error:</p>
<blockquote>
<p>Error: Error reading file
'http://akademos-garden.com/homeschooling-tips-work-home-parents':
failed to load HTTP resource</p>
</blockquote>
<p>Python to reproduce:</p>
<pre><code>from lxml.html import parse
import requests
import pprint
page_url = 'http://akademos-garden.com/homeschooling-tips-work-home-parents/'
try:
parsed_page = parse(page_url)
dom = parsed_page.getroot()
except Exception as e:
# TODO - log this into some other error table to come back and research
errMsg = f"Error: {e} "
print(errMsg)
print("Try get without User-Agent")
result = requests.get(page_url).status_code
pprint.pprint(result)
print("Try get with User-Agent")
result = requests.get(page_url, headers={'User-Agent': None}).status_code
pprint.pprint(result)
</code></pre>
<p>The post below refers to adding a User-Agent, but I don't understand how to do that with lxml. Both <code>requests.get</code> calls above run without error and return HTTP status 200.</p>
<p><a href="https://stackoverflow.com/questions/28801216/python-lxml-html-parse-not-reading-url">python lxml.html.parse not reading url</a>.</p>
<p>If I have to use requests.get, I can do that, but then how do I get it in the dom object?</p>
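One common approach (untested against that particular site, and the `Mozilla/5.0` User-Agent below is just a placeholder) is to let `requests` do the fetch — where you control the headers — and hand the bytes to `lxml.html.fromstring`, which builds the same kind of tree `parse(...).getroot()` would:

```python
import requests
import lxml.html

def get_dom(url):
    # fetch with requests (which lets you set a User-Agent), then
    # parse the bytes with lxml instead of letting lxml fetch the URL
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    resp.raise_for_status()
    return lxml.html.fromstring(resp.content)

# fromstring works the same on any HTML bytes/str:
dom = lxml.html.fromstring(b"<html><body><h1>Hello</h1></body></html>")
print(dom.findtext(".//h1"))  # Hello
```

The object returned by `fromstring` is the root element, so existing code written against `parsed_page.getroot()` should work on it unchanged.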
|
<python><python-3.x><lxml.html>
|
2024-05-09 16:15:21
| 1
| 18,467
|
NealWalters
|
78,455,685
| 2,817,520
|
Uploaded files are lost after automatic redirect to trailing '/'
|
<p>From the <a href="https://flask.palletsprojects.com/en/3.0.x/quickstart/#unique-urls-redirection-behavior" rel="nofollow noreferrer">flask documentation</a> a link when used without trailing slash redirects to it with the slash.</p>
<p>In my case, with the url ending in <code>/uploads/</code>, the following code prints my files when I upload them from <code>localhost:5000/uploads/</code>, but <code>files</code> is empty when I upload them from <code>localhost:5000/uploads</code>. How can I keep the trailing slash in the following route but still get the uploaded files even after the redirection?</p>
<pre><code>@app.route('/uploads/')
def upload_file():
files = request.files.getlist('files')
print(files)
</code></pre>
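One likely explanation (an assumption, since the upload form isn't shown): file uploads arrive as POST requests, and the request body is not reliably re-sent across the automatic trailing-slash redirect. A sketch that sidesteps the redirect by registering both spellings of the URL, and accepting POST:

```python
import io
from flask import Flask, request

app = Flask(__name__)

# register both forms of the URL so no redirect ever happens,
# and accept POST, which is how file uploads are submitted
@app.route('/uploads/', methods=['POST'])
@app.route('/uploads', methods=['POST'])
def upload_file():
    files = request.files.getlist('files')
    return {'count': len(files)}

# quick check with the test client: no trailing slash, file still seen
client = app.test_client()
resp = client.post('/uploads', data={'files': (io.BytesIO(b'hi'), 'a.txt')})
print(resp.get_json())  # {'count': 1}
```

Stacking two `@app.route` decorators on one view is ordinary Flask usage; both rules point at the same endpoint.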
|
<python><flask><file-upload><response.redirect>
|
2024-05-09 16:06:33
| 1
| 860
|
Dante
|
78,455,648
| 395,857
|
How can I find all exact occurrences of a string, or close matches of it, in a longer string in Python?
|
<p>Goal:</p>
<ul>
<li>I'd like to find all exact occurrences of a string, or close matches of it, in a longer string in Python.</li>
<li>I'd also like to know the location of these occurrences in the longer string.</li>
<li>To define what a close match is, I'd like to set a threshold, e.g. number of edits if using the edit distance as the metric.</li>
<li>I'd also like the code to give a matching score (the one that is likely used to determine if a candidate substring is over the matching threshold I set).</li>
</ul>
<p>How can I do so in Python?</p>
<hr />
<p>Example:</p>
<pre><code>long_string = """1. Bob likes classical music very much.
2. This is classic music!
3. This is a classic musical. It has a lot of classical musics.
"""
query_string = "classical music"
</code></pre>
<p>I'd like the Python code to find "classical music" and possibly "classic music", "classic musical" and "classical musics" depending on the string matching threshold I set.</p>
<hr />
<p>Research: I found <a href="https://stackoverflow.com/q/17740833/395857">Checking fuzzy/approximate substring existing in a longer string, in Python?</a> but the question focuses on the best match only (i.e., not all occurrences) and answers either also focuses on the best match or don't work on multi-word query strings (since the question only had a single-word query strings, or return some incorrect score (doesn't get a perfect score even for an exact match).</p>
|
<python><string-matching><fuzzy-search>
|
2024-05-09 15:59:35
| 2
| 84,585
|
Franck Dernoncourt
|
78,455,598
| 13,313,873
|
numpy ignores "casting" argument
|
<p>Numpy version is 1.26.4</p>
<pre><code>>>> np.isnan('nan', casting='no')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
>>> np.isinf('+inf', casting='unsafe')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ufunc 'isinf' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>No matter which casting rule I specify it seems to be using 'safe', why?</p>
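My understanding (stated as an interpretation, not from the numpy source): the `casting` keyword governs how numpy may convert between *array dtypes*, not whether a Python `str` is an acceptable input at all — `isnan`/`isinf` have no implementation for string inputs, so every casting rule fails with the same message. Converting to float first sidesteps the problem:

```python
import numpy as np

# a str is not coercible to any dtype isnan supports, so casting= is moot;
# going through float works for both scalars and arrays
print(np.isnan(float('nan')))                        # True
print(np.isinf(np.array(['+inf'], dtype=float))[0])  # True
```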
|
<python><numpy>
|
2024-05-09 15:50:36
| 0
| 955
|
noob overflow
|
78,455,345
| 11,197,957
|
Segmentation faults and memory leaks while calling GMP C functions from Python
|
<p>Work was quiet today, and so the team was directed to do some "self-development". I decided to have some fun calling <strong>C</strong> functions from <strong>Python</strong>. I had already had a good time using Rust to speed up Python, but I kept hitting a brick wall whenever I wanted to work with integers greater than the <code>u128</code> type could hold. I thought that, by using C's famous <strong>GMP</strong> library, I could overcome this.</p>
<p>So far, I've managed to build a minimal C program which runs, which seems to do what I want, and which - to my eyes - doesn't have anything obviously wrong with it. This is my code:</p>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>
#include <gmp.h>
#define BASE 10
void _factorial(int n, mpz_t result) {
int factor;
mpz_t factor_mpz;
mpz_init(result);
mpz_init_set_ui(result, 1);
for (factor = 1; factor <= n; factor++) {
mpz_init(factor_mpz);
mpz_init_set_ui(factor_mpz, factor);
mpz_mul(result, result, factor_mpz);
mpz_clear(factor_mpz);
}
}
char *factorial(int n) {
char *result;
mpz_t result_mpz;
_factorial(n, result_mpz);
mpz_get_str(result, BASE, result_mpz);
mpz_clear(result_mpz);
return result;
}
int main(void) { // This runs without any apparent issues.
char *result = factorial(100);
printf("%s\n", result);
return 0;
}
</code></pre>
<p>I then try to call this from Python like so:</p>
<pre class="lang-py prettyprint-override"><code>from ctypes import CDLL, c_void_p, c_char_p, c_int32, cast
CLIB = CDLL("./shared.so")
cfunc = CLIB.factorial
cfunc.argtypes = [c_int32]
cfunc.restype = c_char_p
raw_pointer = cfunc(100)
result = raw_pointer.decode()
print(result)
</code></pre>
<p>I compiled the C code to an .so file using the following command:</p>
<pre><code>gcc main.c -lgmp -fpic -shared -o shared.so
</code></pre>
<p>I then ran the above Python script, but unfortunately ran into two issues:</p>
<ol>
<li>Although it reaches the <code>print()</code> statements and prints the correct result, it then hits a <strong>segmentation fault</strong>.</li>
<li>I'm worried that, in passing an arbitrary-length string from C to Python, there may be some memory leaks.</li>
</ol>
<p>Does anyone know how I can overcome the segmentation fault, and, if there is indeed a memory leak, how I can plug it?</p>
|
<python><c><gmp>
|
2024-05-09 15:04:02
| 1
| 734
|
Tom Hosker
|
78,455,314
| 3,385,948
|
How to make Dash Ag-Grid column wider?
|
<p>I'm making a table in Python Dash Ag-Grid and I can't seem to make a column wider. I've got lots of extra space to the right, and I thought <code>columnSize=auto-size</code> or <code>columnSize=sizeToFit</code> would make the table fill the entire space.</p>
<p></p>
<p>Here's the column for <code>columnDefs</code>:</p>
<pre class="lang-py prettyprint-override"><code> {
"field": "description",
"headerName": "Part Description",
"headerTooltip": "More information about the part",
"wrapText": True,
"autoHeight": True,
"cellDataType": "text",
"sortable": True,
"resizable": True,
"filter": True,
# note my attempts at setting width params below:
"minWidth": 100,
"width": 180,
"maxWidth": 400,
},
</code></pre>
<p>Here's the AgGrid itself:</p>
<pre class="lang-py prettyprint-override"><code>AgGrid(
id="inventory_ag_grid",
# Default theme is "ag-theme-quartz"
className="ag-theme-quartz",
dashGridOptions={
"skipHeaderOnAutoSize": True,
},
# 80vh is 80% of the viewport height
style={
"height": "50vh",
"width": "100%",
},
# https://dash.plotly.com/dash-ag-grid/column-sizing
columnSize="autoSize",
defaultColDef={"resizable": True},
getRowStyle={
"styleConditions": [
{
# Set every 2nd row to have a background color
"condition": "params.rowIndex % 2 === 0",
"style": {
"backgroundColor": "rgba(0, 0, 0, 0.05)",
},
},
]
},
)
</code></pre>
<p>Documentation:
<a href="https://dash.plotly.com/dash-ag-grid/column-sizing" rel="nofollow noreferrer">https://dash.plotly.com/dash-ag-grid/column-sizing</a></p>
<p>Here's a reproducible example:</p>
<pre><code>import pandas as pd
import dash
from dash import html, dcc, callback, Input, Output, State
from dash_ag_grid import AgGrid
import dash_bootstrap_components as dbc
import time
rows = [
{'warehouse_name': 'Stettler', 'part_name': 'part1', 'description': 'Piston Packing and Head O-Rings and lots more text so we can see what it\'s like when the column wraps and we have a really really long description that can\'t fit on one line', 'cost_cad': 5000.0, 'cost_usd': 3521.0, 'msrp_cad': 15001.0, 'msrp_usd': 12008.0, 'warehouse_quantity': 0.1, 'sold_quantity': 0.2, 'calc_quantity': 2.0, 'actual_quantity': None},
{'warehouse_name': 'Stettler', 'part_name': 'part2', 'description': 'Rod Packing, ONE END ONLY', 'cost_cad': 545.0, 'cost_usd': 384.0, 'msrp_cad': 1636.0, 'msrp_usd': 1309.0, 'warehouse_quantity': 0.0, 'sold_quantity': 0.4, 'calc_quantity': 2.0, 'actual_quantity': None},
{'warehouse_name': 'Stettler', 'part_name': 'part3', 'description': 'Hydraulic Piston Seals & O-Rings, ONE END ONLY', 'cost_cad': 272.0, 'cost_usd': 192.0, 'msrp_cad': 816.0, 'msrp_usd': 653.0, 'warehouse_quantity': 1.6, 'sold_quantity': 0.4, 'calc_quantity': 2.0, 'actual_quantity': None}
]
df = pd.DataFrame(rows)
def get_inventory_columns() -> list:
"""Get the columns for the inventory table"""
price_col = "cost_cad"
header_name = "Part Cost CAD"
columns = [
{
"field": "warehouse_name",
"headerName": "Warehouse Name",
"sortable": True,
"resizable": True,
"filter": True,
},
{
"field": "part_name",
"headerName": "Part Number",
"cellDataType": "text",
"wrapText": True,
"sortable": True,
"resizable": True,
"filter": True,
},
{
"field": "description",
"headerName": "Description NOT Auto-Sized",
# "headerTooltip": "More information about the part",
"cellDataType": "text",
"wrapText": True,
"autoHeight": True,
"sortable": True,
"resizable": True,
"filter": True,
"maxWidth": 600,
},
{
"field": price_col,
"headerName": header_name,
"sortable": True,
"resizable": True,
"filter": True,
},
{
"field": "warehouse_quantity",
"headerName": "Warehouse Quantity from BoM",
"sortable": True,
"resizable": True,
"filter": True,
},
{
"field": "sold_quantity",
"headerName": "Sold Quantity",
"sortable": True,
"resizable": True,
"filter": True,
},
{
"field": "calc_quantity",
"headerName": "Calculated Avg Quantity",
"sortable": True,
"resizable": True,
"filter": True,
},
{
"field": "actual_quantity",
"headerName": "Actual Quantity",
"sortable": True,
"resizable": True,
"filter": True,
"editable": True,
},
]
return columns
# Dash app
app = dash.Dash(__name__)
app.layout = html.Div(
[
html.H1("Ag-Grid Example"),
dbc.Spinner(
id="inventory_col",
color="success",
children=AgGrid(
id="inventory_ag_grid",
className="ag-theme-quartz",
dashGridOptions={
"skipHeaderOnAutoSize": False,
},
# 80vh is 80% of the viewport height
style={
"height": "80vh",
"width": "100%",
},
columnSize="autoSize",
defaultColDef={"resizable": True},
getRowStyle={
"styleConditions": [
{
# Set every 2nd row to have a background color
"condition": "params.rowIndex % 2 === 0",
"style": {
"backgroundColor": "rgba(0, 0, 0, 0.05)",
},
},
]
},
),
),
]
)
@app.callback(
Output("inventory_ag_grid", "columnDefs"),
Input("inventory_col", "children"),
)
def update_inventory_columns(children):
# Wait for auto-sizing to finish
time.sleep(2)
return get_inventory_columns()
@app.callback(
Output("inventory_ag_grid", "rowData"),
Input("inventory_col", "children"),
)
def update_inventory_rows(children):
# Wait for auto-sizing to finish
time.sleep(2)
return df.to_dict("records")
if __name__ == "__main__":
app.run_server(debug=True)
</code></pre>
|
<python><ag-grid><plotly-dash>
|
2024-05-09 14:58:15
| 1
| 5,708
|
Sean McCarthy
|
78,455,290
| 12,810,409
|
Summing numpy array with an empty array
|
<p>I need to sum a normal numpy array with an empty array</p>
<pre><code>x = np.ones([2,3])
x + np.array([]).reshape(2,-1)
</code></pre>
<p>Output:</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (2,3) (2,0)
</code></pre>
<p>Reshaping them to other dimensions does not work, e.g. <code>x.reshape(-1) + np.array([])</code>.
Adding an if-statement to check whether the right-hand term is empty seems unnecessary.</p>
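Broadcasting can only stretch axes of length 1, so a length-0 axis can never combine with a length-3 one — that is why every reshape fails. If the intent is for the empty operand to contribute nothing, concatenation rather than `+` is well-defined with zero-width arrays (a sketch of that reading, which may not match the actual intent):

```python
import numpy as np

x = np.ones([2, 3])
empty = np.array([]).reshape(2, 0)

# hstack happily accepts a (2, 0) block: it contributes zero columns
combined = np.hstack([x, empty])
print(combined.shape)  # (2, 3)
```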
|
<python><numpy><pytorch>
|
2024-05-09 14:54:08
| 1
| 378
|
Toon Tran
|
78,455,273
| 4,737,944
|
In Python unittest, how can I access the instance on which a mock method was called?
|
<p>I have a Python class structure similar to the following example:</p>
<pre class="lang-python prettyprint-override"><code>
class Foo:
def start(self):
# do something
class FooBar(Foo):
def __init__(self, param):
self.param = param
def run(self):
# do something else
class ProductionClass:
def use_foobar(self):
foo = FooBar("string")
foo.start()
</code></pre>
<p>Now I want to write a test in which the <code>start</code> method on <code>FooBar</code> (inherited from <code>Foo</code>) is mocked so that the <code>run</code> method defined directly in <code>FooBar</code> is called instead.</p>
<p>I tried it this way:</p>
<pre class="lang-python prettyprint-override"><code>
def test_something(self):
self.foobar_patcher = patch.object(FooBar, 'start', side_effect=self.mock_start)
self.foobar_patcher.start()
ProductionClass().use_foobar()
def mock_start(self):
the_production_class_instance.run()
</code></pre>
<p>In this example, <code>the_production_class_instance</code> is meant to be the instance of FooBar where the <code>start()</code> method was called on, but I can't seem to find a way where to get this instance. The instance is created inside <code>ProductionClass().use_foobar()</code> and discarded after use, so there is no real way to access it from the outside.</p>
<p>Is there a way to achieve this with unittest?</p>
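One way that does work in `unittest.mock`: patch with `autospec=True`, which makes the patched method behave like a real function attribute, so calls pass the instance as the first argument — and `side_effect` receives it. A self-contained sketch (the view of `ProductionClass` is simplified, and it returns the instance here only so the result can be asserted on):

```python
from unittest import mock

class Foo:
    def start(self):
        pass

class FooBar(Foo):
    def __init__(self, param):
        self.param = param
        self.ran = False
    def run(self):
        self.ran = True

class ProductionClass:
    def use_foobar(self):
        foo = FooBar("string")
        foo.start()
        return foo

def fake_start(self):
    # with autospec=True the patched method is called with the
    # FooBar instance as its first argument
    self.run()

with mock.patch.object(FooBar, 'start', autospec=True, side_effect=fake_start):
    foo = ProductionClass().use_foobar()

print(foo.ran)  # True
```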
|
<python><python-unittest>
|
2024-05-09 14:51:15
| 1
| 433
|
ronin667
|
78,455,268
| 6,693,247
|
Python and pip misconfiguration leads to package installation errors
|
<p>I'm facing an issue with Python and pip where packages are not being installed under the correct version of Python. I am using Python 3.9.6:</p>
<pre><code>python3 -V
Python 3.9.6
</code></pre>
<p>The pip version</p>
<pre><code>pip3 --version
pip 21.2.4 from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/site-packages/pip (python 3.9)
</code></pre>
<p>The paths for pip3 and python3 are:</p>
<pre><code>whereis pip3
pip3: /usr/bin/pip3
whereis python3
python3: /usr/bin/python3
</code></pre>
<p>Despite this, when I try to install google-cloud-storage using <code>pip3 install google-cloud-storage</code>, the package seems to install under Python 3.8. I receive the following warning:</p>
<pre><code>WARNING: Target directory /usr/local/lib/python3.8/site-packages/googleapis_common_protos-1.63.0.dist-info already exists. Specify --upgrade to force replacement.
</code></pre>
<p>I attempted to resolve this by creating a virtual environment with:</p>
<p><code>/Library/Developer/CommandLineTools/usr/bin/python3 -m venv venv</code>
And installing the package within the virtual environment:</p>
<pre><code>/Users/username/Desktop/project/venv/bin/python3 -m pip install --upgrade google-cloud-storage
</code></pre>
<p>However, this did not resolve the issue. When I run my script, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/username/Desktop/project/main.py", line 3, in <module>
from google.cloud import storage
ModuleNotFoundError: No module named 'google'
</code></pre>
<p>Additionally, google-cloud-storage does not appear in the output of pip list or pip3 list.</p>
<p>I installed Python 3.9 using Homebrew. I have deleted all Python versions from <code>/usr/local/lib/</code>. I don't have any other version of Python other than Python 3.9 currently installed.
Could there be an issue with how pip is linked to Python versions on my system? How can I ensure that pip3 installs packages under Python 3.9?</p>
<p>Any help would be appreciated!</p>
|
<python><python-3.x><pip>
|
2024-05-09 14:50:04
| 2
| 400
|
dand1
|
78,455,226
| 12,881,307
|
robocorp-windows find window from executable with spaces in path
|
<p>I want to build an RPA to automate some tasks in different windows computers. I've been looking for frameworks or libraries to do so in Python and <a href="https://robocorp.com/docs/python/robocorp/robocorp-windows/api" rel="nofollow noreferrer">robocorp-windows</a> seems more robust than other options (I've seen RPAs written in <a href="https://pyautogui.readthedocs.io/en/latest/" rel="nofollow noreferrer">PyAutoGUI</a>, but I do not want to rely on image matching to find elements).</p>
<p>The tasks I want to automate require opening other executables. Since I'm not familiar with this library, I am writing some tests to get familiar with it:</p>
<pre class="lang-py prettyprint-override"><code>from robocorp import windows
from robocorp.windows import WindowElement
import logging
from sys import stdout
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler(stdout)
formatter = logging.Formatter(
"\033[95m%(levelname)s\033[0m[%(lineno)s - %(funcName)s] %(message)s"
)
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)
def execute(executable: str) -> WindowElement:
"""
Execute a program and retrieve its window element
"""
desktop = windows.desktop()
logger.debug(f"opening {executable}...")
desktop.windows_run(executable)
logger.debug(f"searching for window with executable:{executable}")
return windows.find_window(f"executable:{executable.replace(' ', '%20')}")
program_x = "C:\\Users\\me\\AppData\\Local\\Programs\\Launcher\\Launcher.exe"
firefox = "C:\\Users\\me\\AppData\\Local\\Mozilla Firefox\\firefox.exe"
program_x_window = execute(program_x)
firefox_window = execute(firefox)
</code></pre>
<h3>The Problem</h3>
<p>I expect this code to open both programs and store their respective windows in variables for later manipulation. However, the program is unable to find the handle for firefox because the path contains a whitespace (if the whitespace is not replaced it searches for the locator <code>name:"Firefox\firefox.exe"</code> instead).</p>
|
<python><python-3.x>
|
2024-05-09 14:43:35
| 1
| 316
|
Pollastre
|
78,455,102
| 395,857
|
Why doesn't fuzzywuzzy's process.extractBests give a 100% score when the tested string 100% contains the query string?
|
<p>I'm testing <code>fuzzywuzzy</code>'s <code>process.extractBests()</code> as follows:</p>
<pre><code>from fuzzywuzzy import process
# Define the query string
query = "Apple"
# Define the list of choices
choices = ["Apple", "Apple Inc.", "Apple Computer", "Apple Records", "Apple TV"]
# Call the process.extractBests function
results = process.extractBests(query, choices)
# Print the results
for result in results:
print(result)
</code></pre>
<p>It outputs:</p>
<pre><code>('Apple', 100)
('Apple Inc.', 90)
('Apple Computer', 90)
('Apple Records', 90)
('Apple TV', 90)
</code></pre>
<p>Why didn't the scorer give 100 to all strings since they all 100% contain the query string ("Apple")?</p>
<p>I use fuzzywuzzy==0.18.0 with Python 3.11.7.</p>
|
<python><nlp><string-matching><fuzzywuzzy>
|
2024-05-09 14:24:36
| 1
| 84,585
|
Franck Dernoncourt
|
78,455,055
| 8,547,516
|
SQLAlchemy case-sensitive Unicode Column
|
<p>The title does more or less say everything. I want to create a data model with SQLAlchemy and it should contain fields with values that contain unicode and should be considered case-sensitive.</p>
<p>Until know my type is <code>sqlalchemy.Unicode(255)</code> but e.g. in mariadb this results in <code>utf8mb4_general_ci</code> which is case-insensitive as far as I know.</p>
<p>So my question is how to define a case-sensitive Unicode column in a database-independent way.</p>
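SQLAlchemy's `String`/`Unicode` types accept a `collation` parameter, which is the usual lever here; the catch is that collation *names* are backend-specific (`utf8mb4_bin`, a binary and therefore case-sensitive collation, is a MariaDB/MySQL assumption below), so full portability may require per-dialect variants. A minimal sketch:

```python
from sqlalchemy import Column, Unicode

# utf8mb4_bin is a binary -- i.e. case-sensitive -- MariaDB/MySQL
# collation; other backends need their own collation name here
name = Column("name", Unicode(255, collation="utf8mb4_bin"))
print(name.type.collation)  # utf8mb4_bin
```

For true backend independence, `TypeEngine.with_variant()` can attach a differently-collated type per dialect.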
|
<python><sqlalchemy>
|
2024-05-09 14:15:59
| 0
| 1,250
|
gerum
|
78,455,045
| 7,433,420
|
Django Choices model field with choices of classes
|
<p>The following code was working in python 3.10 but not in 3.11 due to a change in the <code>enum</code> module.</p>
<p>Now the app won't launch with the following error message :</p>
<pre><code> File "/home/runner/work/e/e/certification/models.py", line 3, in <module>
from .certifications.models import Certification, QuizzCertification # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/e/e/certification/certifications/models.py", line 17, in <module>
class CertificationType(models.TextChoices):
File "/opt/hostedtoolcache/Python/3.11.2/x64/lib/python3.11/site-packages/django/db/models/enums.py", line 49, in __new__
cls = super().__new__(metacls, classname, bases, classdict, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.2/x64/lib/python3.11/enum.py", line 560, in __new__
raise exc
File "/opt/hostedtoolcache/Python/3.11.2/x64/lib/python3.11/enum.py", line 259, in __set_name__
enum_member = enum_class._new_member_(enum_class, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.2/x64/lib/python3.11/enum.py", line 1278, in __new__
raise TypeError('%r is not a string' % (values[0], ))
TypeError: <class 'certification.certifications.models.QuizzCertification'> is not a string
</code></pre>
<p>How would you implement a model field with choices, where the first element is a Model class?</p>
<pre><code>class QuizzCertification(models.Model):
...
class OtherCertification(models.Model):
...
class CertificationType(models.TextChoices):
QUIZZ = QuizzCertification, "Quizz"
OTHER = OtherCertification, "Other"
class Certification(models.Model):
name = models.CharField(max_length=100, unique=True)
description = models.TextField(_("Description"), blank=True, null=True)
type_of = models.CharField(
_("Type of certification"),
max_length=100,
choices=CertificationType,
default=CertificationType.QUIZZ,
)
</code></pre>
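<p>One possible workaround, given that Python 3.11's enum machinery requires the first element of a <code>TextChoices</code> value to be a string: keep the stored value a plain string and resolve the model class through a separate mapping. A runnable sketch with stand-in classes (the real code would use <code>models.TextChoices</code> and the actual model classes):</p>

```python
from enum import Enum


class QuizzCertification:  # stand-in for the Django model
    pass


class OtherCertification:  # stand-in for the Django model
    pass


class CertificationType(str, Enum):  # models.TextChoices in Django
    QUIZZ = "quizz"
    OTHER = "other"


# Resolve the concrete class from the stored string value when needed
CERTIFICATION_MODELS = {
    CertificationType.QUIZZ: QuizzCertification,
    CertificationType.OTHER: OtherCertification,
}

print(CERTIFICATION_MODELS[CertificationType("quizz")].__name__)  # QuizzCertification
```

<p>The <code>CharField</code> then stores only the string value, which keeps migrations and database contents stable while the mapping stays in Python.</p>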
|
<python><django><django-models><enums>
|
2024-05-09 14:14:35
| 1
| 908
|
WitoldW
|
78,454,950
| 5,183,434
|
Find the number of layers in an *ome.tif file using tifffile package
|
<p>Here are the standards for a pyramidal ome-tif file: <a href="https://docs.openmicroscopy.org/ome-model/5.6.3/ome-tiff/" rel="nofollow noreferrer">https://docs.openmicroscopy.org/ome-model/5.6.3/ome-tiff/</a></p>
<p>These files are supported by the tifffile package in python: <a href="https://github.com/cgohlke/tifffile/tree/master" rel="nofollow noreferrer">https://github.com/cgohlke/tifffile/tree/master</a></p>
<p>Here is the syntax of opening a specific level of one of these pyramidal ome-tif files from their readme: <a href="https://github.com/cgohlke/tifffile/blob/ec6be6289db1b8b7327bb98816a764c95b9b7299/README.rst?plain=1#L678-L682" rel="nofollow noreferrer">https://github.com/cgohlke/tifffile/blob/ec6be6289db1b8b7327bb98816a764c95b9b7299/README.rst?plain=1#L678-L682</a></p>
<p>Here is code to make an ome-tif file to use for testing: <a href="https://github.com/cgohlke/tifffile/blob/ec6be6289db1b8b7327bb98816a764c95b9b7299/README.rst?plain=1#L631-L674" rel="nofollow noreferrer">https://github.com/cgohlke/tifffile/blob/ec6be6289db1b8b7327bb98816a764c95b9b7299/README.rst?plain=1#L631-L674</a></p>
<p>I can't seem to find an easy way to just <em>get</em> the number of layers the file has. A simple solution is here:</p>
<p>simple_all_levels.py</p>
<pre><code>from tifffile import imread, imwrite, TiffFile
f = 'temp.ome.tif'
# Make smaller versions of the big tiff
i = 0
while True:
image = imread(f, series=0, level=i)
imwrite(f"L{i}.{f}", image)
i = i + 1
print(i)
</code></pre>
<p>Just run this until it crashes; the last <code>i</code> that prints is the number of layers. But running until it crashes is kinda dumb. After some digging, I found this snippet:</p>
<p>get_layers_weird.py</p>
<pre><code>from tifffile import imread, imwrite, TiffFile
f = 'temp.ome.tif'
tif = TiffFile(f)
list_of_pages = tif.pages.pages[0].pages.pages
print(len(list_of_pages))
</code></pre>
<p>This gets me the answer I want, but getting the length of an array of an accession, of an accession of an array's first member, which is an accession to the accession of a supplementary import seems too obfuscated to be the intended place for this information to be held.</p>
<p>Where <strong>should</strong> I look in the ome-tif for its number of levels?</p>
<p>EDIT: I did "my own research" using the following script and found <code>len(tif.pages.keyframe.subifds)</code> to be a slightly more bearable reference.</p>
<pre><code>from tifffile import imread, imwrite, TiffFile
f = 'temp.ome.tif'
primitives = (bool, str, int, float, type(None))
prim_iters = (list, tuple, dict, set)
# Find out how to get "level" variable
tif = TiffFile(f)
# Stop criteria
list_of_pages = tif.pages.pages[0].pages.pages
subifds = tif.pages._keyframe.subifds
known_num_levels = 6
depth_limit = 6
tif_vars = [(f"tif.{k}",v) for k,v in vars(tif).items()]
depth = 0
depth_breadth = 0
while tif_vars:
if depth_breadth == 0:
depth_breadth = len(tif_vars)
depth = depth + 1
print(f"depth: {depth}")
print(f"breadth: {depth_breadth}")
if depth > depth_limit:
break
# print(tif_vars)
tif_key, tif_var = tif_vars.pop(0)
depth_breadth -= 1
if type(tif_var) in primitives:
print(f"primitive: {tif_key} : {tif_var}")
if tif_var == known_num_levels:
print(f"candidate: {tif_key} : {tif_var}")
if tif_var in subifds:
print(f"answer field: {tif_key} : {tif_var}")
elif tif_var == list_of_pages:
print(f"page key: {tif_key}")
elif tif_key.endswith("parent"):
print(f"parent: {tif_key}")
elif tif_key == 'tif._fh._fh': # Crashes when used
print(f"Skipping {tif_key}")
elif str(type(tif_key)) == "<class '_io.BufferedReader'>":
print(f"Skipping {tif_key} as a BufferedReader")
elif hasattr(tif_var, "__dict__"):
print(f"dict: {tif_key}")
tif_vars += [(f"{tif_key}.{k}",v) for k,v in vars(tif_var).items()]
elif hasattr(tif_var, "__dir__") and type(tif_var) not in prim_iters:
print(f"dir: {tif_key}")
tif_vars += [(f"{tif_key}.{k}", getattr(tif_var, k)) for k in dir(tif_var) if not callable(getattr(tif_var, k))]
elif hasattr(tif_var, "__iter__"):
print(f"iter: {tif_key}")
tif_vars += [(f"{tif_key}[{i}]",tif_var_child) for i, tif_var_child in enumerate(tif_var)]
if len(tif_var) == known_num_levels:
print(f"candidate list: {tif_key} : {tif_var}")
</code></pre>
|
<python><tiff>
|
2024-05-09 13:59:03
| 0
| 742
|
Jeff
|
78,454,844
| 5,072,692
|
Using Python read Refcursors returned from function in Postgres
|
<p>I have a function that returns a set of refcursors:</p>
<pre><code>CREATE function func_name(func_date date) returns SETOF refcursor
language plpgsql
as
$$
DECLARE
ref1 refcursor := 'data_1';
ref2 refcursor := 'data_2';
BEGIN
OPEN ref1 FOR
SELECT * FROM users;
RETURN NEXT ref1;
OPEN ref2 FOR
SELECT * FROM roles;
RETURN NEXT ref2;
END;
$$;
</code></pre>
<p>I have the following code in Python which calls the function, which returns a list of tuples with the refcursor names. I need help with how to read these refcursors and add the data to a DataFrame:</p>
<pre><code>import psycopg2 as db
import datetime
# Connect to DB
conn = db.connect(host='abc', dbname='abc', user='a', password='abc',gssencmode="disable")
conn.set_session(autocommit=True)
func_date = datetime.date.today()
cur = conn.cursor()
cur.callproc('func_name', [func_date])
data = cur.fetchall()
for row in data:
a = row[0]
print(a)
</code></pre>
|
<python><postgresql><psycopg2>
|
2024-05-09 13:39:49
| 0
| 955
|
Adarsh Ravi
|
78,454,803
| 13,757,692
|
Pyplot background with color gradient, filling the whole figure
|
<p>I want to add a background to my Pyplot figure, so that it looks approximately like shown below:</p>
<p><a href="https://i.sstatic.net/mGQKBmDsm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mGQKBmDsm.png" alt="enter image description here" /></a></p>
<p>As far as I can tell, adding a color gradient to the figure is a bit tricky, so the idea was to stretch the Axes to fill the entire figure and then add a background image, or something similar to the Axes.</p>
<p>However, I have neither been able to get the axes to fill the entire figure, as shown in the code example below, nor do I know how to set a color gradient background, or set an image behind the graph.</p>
<p>This is what I generate at the moment:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x, y, z = np.random.rand(30).reshape((3, 10))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z)
fig.set_facecolor('green')
ax.set_facecolor('gray')
ax.set_position([0, 0, 1, 1])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Evblq2ZPm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Evblq2ZPm.png" alt="enter image description here" /></a></p>
<p>Using this</p>
<pre><code>ax.imshow([[0,0],[1,1]], cmap=plt.cm.Greens, interpolation='bicubic', extent=(-1, 1, -1, 1))
</code></pre>
<p>I can plot an image over the entire axes, below the data, but not below the axis labels, grid, etc. Is there a smart way to do this?</p>
|
<python><matplotlib><gradient>
|
2024-05-09 13:32:50
| 1
| 466
|
Alex V.
|
78,454,631
| 9,583,035
|
How to create a TIFF image from a dataframe with a grid in Python
|
<p>I have a dataframe with a grid of 3 columns:</p>
<p>utm N, utm E, Value</p>
<p>How can I create a TIFF image using this grid?</p>
<p>Data:</p>
<pre><code> E N value
0 754104.853089 7.105749e+06 -0.001245
1 755104.853089 7.105749e+06 -0.001168
2 756104.853089 7.105749e+06 -0.001071
3 757104.853089 7.105749e+06 -0.000931
4 758104.853089 7.105749e+06 -0.000827
</code></pre>
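<p>A sketch of one way to do it: pivot the long table into a 2-D grid (rows = northing, columns = easting), then write the resulting array with a TIFF writer such as tifffile or, for a georeferenced GeoTIFF, rasterio (both assumed installed; only the pivot step is shown here):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "E": [754104.853089, 755104.853089, 756104.853089, 757104.853089, 758104.853089],
    "N": [7105749.0] * 5,
    "value": [-0.001245, -0.001168, -0.001071, -0.000931, -0.000827],
})

# One row per unique northing, one column per unique easting;
# sort N descending so north ends up at the top of the image
grid = df.pivot(index="N", columns="E", values="value").sort_index(ascending=False)
arr = grid.to_numpy(dtype="float32")
print(arr.shape)  # (1, 5) for this sample
# then e.g.: tifffile.imwrite("out.tif", arr)
```

<p>With a regularly spaced grid, the E/N spacing also gives the pixel size needed for a GeoTIFF transform.</p>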
|
<python><tiff>
|
2024-05-09 13:04:50
| 0
| 404
|
Vitor Bento
|
78,454,457
| 855,475
|
Pandas read_json Future Warning: The behavior of 'to_datetime' with 'unit' when parsing strings is deprecated
|
<p>I am updating pandas version from 1.3.5 to 2.2.2 on an old project. I am not very familiar with pandas, and I am stuck with a Future Warning:</p>
<p>FutureWarning:</p>
<p>The behavior of 'to_datetime' with 'unit' when parsing strings is deprecated. In a future version, strings will be parsed as datetime strings, matching the behavior without a 'unit'. To retain the old behavior, explicitly cast ints or floats to numeric type before calling to_datetime.</p>
<p>Here is the causes the error:</p>
<pre><code>>>> from io import StringIO
>>> import json
>>> import pandas
>>>
>>> str = '{"Keyword":{"0":"TELECOM","1":"OPERATOR","Total":"Total"},"Type":{"0":"job_field","1":"title_class","Total":null},"Seniority Score":{"0":0.0,"1":-1.0,"Total":-1.0}}'
>>> df = pandas.read_json(StringIO(str), typ='frame', orient='records')
</code></pre>
<p>From what I get, it has to do something with integers and floats, probably being represented as strings in the json, but I tried various combinations and can not get arround the warning. I am confused becase I am not calling the <code>to_datetime</code> function at all.</p>
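<p>The warning appears to come from <code>read_json</code>'s axis conversion: with <code>convert_axes=True</code> (the default) pandas tries to coerce the index labels ("0", "1", "Total") via <code>to_datetime</code>, which is the deprecated path. If that coercion isn't needed, passing <code>convert_axes=False</code> may silence it (a sketch, not a guaranteed fix):</p>

```python
from io import StringIO
import pandas as pd

raw = ('{"Keyword":{"0":"TELECOM","1":"OPERATOR","Total":"Total"},'
       '"Type":{"0":"job_field","1":"title_class","Total":null},'
       '"Seniority Score":{"0":0.0,"1":-1.0,"Total":-1.0}}')

# convert_axes=False keeps the index as plain strings instead of
# attempting numeric/datetime conversion on the axis labels
df = pd.read_json(StringIO(raw), orient="columns", convert_axes=False)
print(sorted(df.index))  # ['0', '1', 'Total']
```

<p>Note the JSON here is column-oriented, so <code>orient="columns"</code> (the frame default) matches its shape better than <code>orient="records"</code>.</p>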
|
<python><pandas>
|
2024-05-09 12:30:59
| 1
| 6,478
|
Martin Taleski
|
78,454,417
| 6,049,429
|
poetry show outdated from source
|
<p>I've two packages:</p>
<ol>
<li>package1</li>
<li>package2</li>
</ol>
<p>I'm installing it from my source:</p>
<pre><code>[[tool.poetry.source]]
name = "mysource"
url = "https://example.com/mysource/simple/"
priority = "explicit"
</code></pre>
<p>pyproject.toml</p>
<pre><code>[tool.poetry.dependencies]
package1 = { version = "==2.0.2", source = "mysource" }
package2 = {version = "==0.2.0", source = "mysource"}
[[tool.poetry.source]]
name = "mysource"
url = "https://example.com/mysource/simple/"
priority = "explicit"
</code></pre>
<p>Both of these packages are also available in <code>pypi.org</code></p>
<p>So when I run the following command to check for outdated packages, I get incorrect output:</p>
<pre><code>$ poetry show --outdated
package1 2.0.2 28.0.0 description of package
package2 0.2.0 5.8.0 description of package
</code></pre>
<p>It compares the installed versions with the versions of the pypi.org packages, which is incorrect.</p>
<p>I don't want to change the priority of my source to primary as that is taking too long to run.</p>
<p>How can I check for outdated packages which are installed from <code>mysource</code>?</p>
<p>The expectation is to match the versions from <code>mysource</code>, not from pypi.org:</p>
<pre><code>$ poetry show --outdated
package1 2.0.2 2.0.3 description of package
package2 0.2.0 0.2.1 description of package
</code></pre>
|
<python><python-3.x><python-poetry>
|
2024-05-09 12:23:00
| 0
| 984
|
Cool Breeze
|
78,454,411
| 1,135,541
|
On Windows, with Windows Subsystem for Linux (WSL), every time I install Python I get ModuleNotFoundError: No module named '_tkinter'
|
<p>On Ubuntu 24.04, here is the error I get:</p>
<pre><code> [I] /home/sporty~> pyenv install 3.12.2
Downloading Python-3.12.2.tar.xz...
-> https://www.python.org/ftp/python/3.12.2/Python-3.12.2.tar.xz
Installing Python-3.12.2...
python-build: use readline from homebrew
python-build: use ncurses from homebrew
python-build: use zlib from homebrew
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/sporty/.pyenv/versions/3.12.2/lib/python3.12/tkinter/__init__.py", line 38, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named '_tkinter'
WARNING: The Python tkinter extension was not compiled and GUI subsystem has been detected. Missing the Tk toolkit?
Installed Python-3.12.2 to /home/sporty/.pyenv/versions/3.12.2
</code></pre>
<p>Is this error happening because this is a Linux instance on WSL, or could there be a problem with the dependencies?</p>
|
<python><python-3.x><tkinter><windows-subsystem-for-linux><tk-toolkit>
|
2024-05-09 12:21:31
| 0
| 1,911
|
user1135541
|
78,454,336
| 6,455,667
|
Python member variables of different data types not getting updated in different thread with same priority
|
<p>Consider this sample code:</p>
<pre><code>from threading import Thread

class Test:
def __init__(self) -> None:
self.bool = False
self.string = ""
thread = Thread(target = self.target)
thread.start()
def target(self):
while True:
if self.bool:
print("Bool changed")
break
if self.string != "":
print("String changed")
break
def changeBool(self):
self.bool = True
def changeString(self):
self.string = "sample"
var = Test()
var.changeString()
var.changeBool()
</code></pre>
<p>Here, I have two member variables in a class: a boolean and a string. In the constructor, I start a new thread to run a function which basically tells me which of the two variables gets modified inside the thread first.</p>
<p>When I run this program, even though I am calling <code>changeString()</code> first, the output is always "Bool changed". This means that the boolean is getting updated inside the thread faster than the string. Why does this happen? And how can I ensure that <code>self.bool</code> gets updated inside the thread immediately after it is modified in the main thread?</p>
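<p>Note that the worker loop checks <code>self.bool</code> first on every pass, and starting a thread usually takes longer than the two calls in the main thread, so by the time the loop first runs both variables are already set and the bool branch wins. To observe updates in the order they happen, signal explicitly instead of polling shared attributes, e.g. with a <code>queue.Queue</code> (a minimal sketch):</p>

```python
import threading
import queue

events: "queue.Queue[str]" = queue.Queue()

def worker() -> None:
    # Blocks until something is enqueued, so changes are observed
    # in exactly the order the main thread sends them
    first = events.get()
    print(f"{first} changed")

t = threading.Thread(target=worker)
t.start()
events.put("String")  # sent first, observed first
events.put("Bool")
t.join()  # prints "String changed"
```

<p>A <code>threading.Event</code> per flag works similarly when only a boolean signal is needed.</p>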
|
<python><multithreading><class><asynchronous><member>
|
2024-05-09 12:07:49
| 1
| 452
|
Anchith Acharya
|
78,454,286
| 2,731,575
|
Is there a way to use fuzzy selection to narrow the choices in a wx.ComboBox (wxWidgets)
|
<p>I want to populate a wx.ComboBox with a large number of items but rather than having to scroll through them all, I'd like to type a string into the ComboBox and narrow the items in the ComboBox using fuzzy search (I'm using '<a href="https://github.com/seatgeek/thefuzz" rel="nofollow noreferrer">thefuzz</a>').</p>
<p>So I've <em>almost</em> got it working but there's something amiss in the plumbing. I suspect I need to capture a wider range of events so that I can implement backspace and perhaps escape, but the main problem is that I can't seem to manipulate the TextEntry in the ComboBox the way I'd like.</p>
<p>Has anyone already done this?</p>
<p>Here's my code so far:</p>
<pre><code>import wx
from thefuzz import process
class ThingSelector(wx.Frame):
def __init__(self, parent, title, thing_descriptions):
super(ThingSelector, self).__init__(parent, title=title, size=(400, 200))
self.thing_descriptions = thing_descriptions
self.init_ui()
def init_ui(self):
panel = wx.Panel(self)
self.thing_choice = wx.ComboBox(panel, choices=self.thing_descriptions)
self.thing_choice_block = False
self.thing_choice_filter = ""
self.thing_choice.Bind(wx.EVT_TEXT, self.on_text)
accept_button = wx.Button(panel, label="Accept")
accept_button.Bind(wx.EVT_BUTTON, self.accept_button_click)
cancel_button = wx.Button(panel, label="Cancel")
cancel_button.Bind(wx.EVT_BUTTON, self.on_cancel_button)
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(self.thing_choice, 0, wx.ALL | wx.EXPAND, 5)
buttons = wx.BoxSizer(wx.HORIZONTAL)
sizer.Add(buttons, 0, wx.ALL | wx.EXPAND, 5)
buttons.Add(accept_button, 0, wx.ALL | wx.EXPAND, 5)
buttons.Add(cancel_button, 0, wx.ALL | wx.EXPAND, 5)
panel.SetSizer(sizer)
self.Show()
def update_choices(self, filter):
matches = process.extractBests(filter, self.thing_descriptions, limit=10)
self.thing_choice.SetItems([match[0] for match in matches])
self.thing_choice.SetInsertionPointEnd()
self.thing_choice.Popup()
def on_text(self, event):
# avoid recursion when I call ChangeValue
if not self.thing_choice_block:
self.thing_choice_block = True
current_text = event.GetString()
if len(current_text) == 1:
self.thing_choice_filter += current_text
self.update_choices(self.thing_choice_filter)
self.thing_choice_block = False
else:
self.thing_choice_filter = ""
self.thing_choice.ChangeValue(current_text)
def accept_button_click(self, event):
selected_thing = ""
selection = self.thing_choice.GetSelection()
if selection != wx.NOT_FOUND:
selected_thing = self.thing_choice.GetString(selection)
print(selected_thing)
return
def on_cancel_button(self, event):
self.Close()
# Create list of 'thing's:
thing_descriptions = [ "aaa", "bbb", "ccc" ]
# GUI setup
app = wx.App(False)
frame = ThingSelector(None, title="Select Thing", thing_descriptions=thing_descriptions)
app.MainLoop()
</code></pre>
|
<python><combobox><wxpython><wxwidgets><fuzzywuzzy>
|
2024-05-09 11:57:15
| 0
| 371
|
wef
|
78,454,167
| 893,254
|
Pandas read_excel (or other) - does `skiprows` occur before `headers`?
|
<p>This is a question about the order in which two operations occur when the Pandas <code>read_excel</code> function is called. (Although this would also apply to other <code>read_X</code> type functions such as <code>read_csv</code>.)</p>
<p>The <code>read_excel</code> function takes two arguments of interest</p>
<ul>
<li><code>skiprows</code></li>
<li><code>headers</code></li>
</ul>
<p>In this case, I pass a list of integers to <code>headers</code> to create a multiindex.</p>
<p>My question is, does the <code>skiprows</code> operation occur before <code>headers</code> is used to set the headers? (The documentation doesn't explicitly state anything about this.)</p>
<p>For context, I have some code which reads an Excel file which contains blank lines before the header rows, and between the header rows and data. In this context it isn't obvious what the behavior should be.</p>
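<p>A quick experiment with <code>read_csv</code> (which exposes the same two parameters) suggests <code>skiprows</code> is applied first and <code>header</code> then counts rows among the remaining lines; the same ordering appears to hold for <code>read_excel</code>, though it is worth verifying against a file with blank lines:</p>

```python
from io import StringIO
import pandas as pd

raw = "junk line\nmore junk\nA,B\n1,2\n3,4\n"

# skiprows drops the first two physical lines; header=0 then refers
# to the first line that survives ("A,B")
df = pd.read_csv(StringIO(raw), skiprows=2, header=0)
print(df.columns.tolist(), df.shape)  # ['A', 'B'] (2, 2)
```

<p>The same logic extends to a list passed to <code>header</code> for a MultiIndex: the listed positions index into the lines left after skipping.</p>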
|
<python><pandas>
|
2024-05-09 11:35:21
| 2
| 18,579
|
user2138149
|
78,454,139
| 3,918,419
|
How to properly scale thumbnails in my custom list widget?
|
<p><strong>Problem:</strong> I am trying to achieve a thumbnail viewer where each item (i.e. thumbnail and its page number label) are of fixed size. I am trying to scale the images so that they maintain aspect ratio inside these items. However, most images appear somewhat cropped in both dimensions.</p>
<p><a href="https://i.sstatic.net/6H9HWe6B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6H9HWe6B.png" alt="MWE - Main Window" /></a></p>
<p><strong>Desired result:</strong> thumbnails are scaled, without any cropping, and center-aligned within their item objects (horizontally and vertically), preferably with page number labels being a fixed number of px from the bottom border of parent item.</p>
<p>MWE:</p>
<pre><code>import os
from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget, QVBoxLayout, QSplitter
from PyQt5.QtCore import QSize, Qt
from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import QListWidget, QVBoxLayout, QListWidgetItem, QListView, QAbstractItemView, QLabel
class CustomListWidget(QListWidget):
def dropEvent(self, event):
super().dropEvent(event)
self.updatePageNumbers()
def updatePageNumbers(self):
for index in range(self.count()):
item = self.item(index)
widget = self.itemWidget(item)
if widget is not None:
number_label = widget.findChild(QLabel, 'PageNumberLabel')
if number_label:
number_label.setText(str(index + 1))
else:
print(f"No widget found for item at index {index}")
class ThumbnailViewer(QWidget):
def __init__(self):
super().__init__()
# self.iconSize = QSize(200, 220)
# self.itemSize = QSize(220, 250)
self._initUI()
def _initUI(self):
vbox = QVBoxLayout(self)
self.listWidget = CustomListWidget()
self.listWidget.setDragDropMode(QAbstractItemView.InternalMove)
self.listWidget.setFlow(QListView.LeftToRight)
self.listWidget.setWrapping(True)
self.listWidget.setResizeMode(QListView.Adjust)
self.listWidget.setMovement(QListView.Snap)
# self.listWidget.setIconSize(self.iconSize)
self.listWidget.setStyleSheet("""
QListWidget::item {
border: 1px solid red;
}
""")
folder = os.path.join(os.getcwd(), 'thumbs')
files = [f for f in os.listdir(folder) if f.lower().endswith(('.png', '.jpg', '.jpeg')) and not f.startswith('.')]
for idx, file in enumerate(files, start=1):
item, widget = self.loadImageItem(file, idx, folder=folder)
self.listWidget.addItem(item)
self.listWidget.setItemWidget(item, widget)
print(f"Adding thumbnail to ThumbnailViewer: {file}")
vbox.addWidget(self.listWidget)
self.setLayout(vbox)
def loadImageItem(self, file, pageNum, folder=None):
widget = QWidget()
iconLabel = QLabel()
path = os.path.join(folder, file) if folder else file
pixmap = QPixmap(path)
max_width = 220
max_height = 190
aspect_ratio = pixmap.width() / pixmap.height()
if aspect_ratio > 1:
scale_factor = max_width / pixmap.width()
else:
scale_factor = max_height / pixmap.height()
new_width = int(pixmap.width() * scale_factor)
new_height = int(pixmap.height() * scale_factor)
scaled_pixmap = pixmap.scaled(new_width, new_height, Qt.KeepAspectRatio)
iconLabel.setPixmap(scaled_pixmap)
iconLabel.setAlignment(Qt.AlignCenter)
numberLabel = QLabel(str(pageNum))
numberLabel.setObjectName('PageNumberLabel')
numberLabel.setAlignment(Qt.AlignHCenter)
layout = QVBoxLayout()
layout.addWidget(iconLabel)
layout.addWidget(numberLabel)
widget.setLayout(layout)
item = QListWidgetItem()
item.setSizeHint(QSize(220, 250))
return item, widget
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("Thumbnail Viewer Test")
central_widget = QWidget()
layout = QVBoxLayout(central_widget)
# Thumbnail Viewer
self.thumbnail_viewer = ThumbnailViewer()
layout.addWidget(self.thumbnail_viewer)
# Add a blank space
blank_space = QLabel("Blank Space")
blank_space.setAlignment(Qt.AlignCenter)
layout.addWidget(blank_space)
self.setCentralWidget(central_widget)
if __name__ == "__main__":
import sys
app = QApplication(sys.argv)
window = MainWindow()
window.setGeometry(100, 100, 800, 600)
window.show()
sys.exit(app.exec_())
</code></pre>
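<p>One likely cause of the cropping is the aspect-ratio branch in <code>loadImageItem</code>: it scales by a single axis, so the other axis can still exceed its limit. Taking the minimum of the two ratios guarantees the scaled image fits both constraints (a pure-Python sketch of the scaling logic only):</p>

```python
def fit_scale(width: int, height: int, max_width: int, max_height: int) -> float:
    # Largest uniform scale at which (width x height) fits inside the box
    return min(max_width / width, max_height / height)

# A wide but tall-ish image: scaling by width alone (220/400 = 0.55)
# would give height 0.55 * 380 = 209 > 190 and get cropped
s = fit_scale(400, 380, 220, 190)
print(int(400 * s), int(380 * s))  # 200 190: within 220 x 190
```

<p>In Qt this is what <code>pixmap.scaled(max_width, max_height, Qt.KeepAspectRatio)</code> does internally, so the manual branch can likely be dropped entirely.</p>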
|
<python><pyqt><pyqt5><custom-widgets>
|
2024-05-09 11:29:41
| 2
| 654
|
MrVocabulary
|
78,454,039
| 11,198,558
|
What is the Django logic flow to show pdf inside HTML template after click the given link
|
<p>I'm using Django to create a website to publish my posts; all of my posts are PDFs.</p>
<p>As usual, I defined my view and its logic; it returns a context dictionary:</p>
<pre><code># views.py
class PageView(TemplateView):
    def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
highlightObject = Post.objects.filter(is_highlight = 1).order_by('-created_on')
context['highlight'] = highlightObject
return context
</code></pre>
<p>Then, on my html file</p>
<pre><code>{% extends "base.html" %}
{% for row in highlight %}
<h3 class="mb-auto">{{ row.Title }}</h3>
<div class="btn-group">
<a href="{{ row.pdfFile.url }}" class="icon-link">Reading more</a>
</div>
{% endfor %}
</code></pre>
<p>Right now, clicking "Reading more" opens the browser's PDF viewer, but I want to show the PDF viewer inside an HTML template. For example, it should go to another HTML file that extends the template from <code>base.html</code> and shows a PDF viewer using the path <code>row.pdfFile.url</code>.</p>
<p>So, how can I write a new HTML file for reading the PDF at <code>row.pdfFile.url</code>? Or is there another way to solve this problem?</p>
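<p>A minimal template sketch (an assumption: it presumes a view/URL that renders this template and passes the post in the context as <code>post</code>, and that <code>base.html</code> defines a <code>content</code> block): embedding the file in an <code>iframe</code> lets the browser's built-in PDF viewer render inside the page, below whatever <code>base.html</code> provides:</p>

```html
{% extends "base.html" %}
{% block content %}
  <h3>{{ post.Title }}</h3>
  <iframe src="{{ post.pdfFile.url }}" width="100%" height="800">
    This browser cannot display PDFs inline.
  </iframe>
{% endblock %}
```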
<p>Thanks</p>
|
<python><django>
|
2024-05-09 11:13:12
| 1
| 981
|
ShanN
|
78,453,755
| 7,435,104
|
Problem with python finding Qt platform plugin on NixOS (Sway)
|
<p>I have recently moved over to NixOS and I am having an issue with a python project I am starting.
I have decided to move forward using Conda to handle my python environment based on the <a href="https://wiki.nixos.org/wiki/Python#Using_conda" rel="nofollow noreferrer">NixOS wiki</a>. I am running NixOS with Sway (wayland) as my window manager. When I try to generate a figure with this code:</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure()
</code></pre>
<p>I get the following error:</p>
<pre><code>qt.qpa.plugin: Could not find the Qt platform plugin "wayland" in ""
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, minimal, minimalegl, offscreen, vnc, webgl, xcb.
Aborted (core dumped)
</code></pre>
<p>I have searched around on the web, and have found several similar questions, but I'm too naive to understand where I am going wrong. For example, the same problem is referenced <a href="https://forum.qt.io/topic/129153/qt-qpa-plugin-could-not-load-the-qt-platform-plugin-xcb-in-even-though-it-was-found" rel="nofollow noreferrer">HERE</a>, but I'm not savvy enough to follow the discussion. This issue is also discussed on the <a href="https://discourse.nixos.org/t/python-qt-qpa-plugin-could-not-find-xcb/8862" rel="nofollow noreferrer">NixOS Discourse page</a>.
I don't remember from which forum I found this advice, but adjusting the script like this:</p>
<pre><code>import matplotlib.pyplot as plt
import os
os.environ['QT_QPA_PLATFORM'] = "wayland"
fig = plt.figure()
</code></pre>
<p>Resolves at least the "xcb" error. So I am left with:</p>
<pre><code>qt.qpa.plugin: Could not find the Qt platform plugin "wayland" in ""
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, minimal, minimalegl, offscreen, vnc, webgl, xcb.
Aborted (core dumped)
</code></pre>
<p>I don't know if I'm missing a package/library, if it's an issue with my <code>conda</code> environment, if it's a NixOS issue? Thank you for helping me navigate my naivete. Cheers!</p>
|
<python><matplotlib><nixos><wayland>
|
2024-05-09 10:22:14
| 1
| 401
|
tlmoore
|
78,453,589
| 3,727,079
|
How can I check if the last row of a dataframe has timestamp between two times?
|
<p>Here's a one-row dataframe:</p>
<pre><code>import pandas as pd
import datetime
df = pd.DataFrame(columns = ['time', 'score'])
df.at[0, 'time'] = '2022-06-11 07:34:54.168327+00:00'
df.at[0, 'score'] = 2793.7
df['time'] = pd.to_datetime(df['time'])
</code></pre>
<p>I want to check if the last row of this dataframe is between two times, say 4pm and 9pm. How can I do it?</p>
<p>Googling, it seems the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer">between_time</a> method might be usable, but the command seems to select only rows between these two times, so it doesn't look directly applicable to this problem. The naive attempt <code>df.iloc[-1].between_time('16:00', '21:00')</code> returns a <code>Index must be DatetimeIndex</code> error, as well.</p>
<p>I imagine one can use <code>between_time</code> to select the relevant rows and then check if the last row is one of those rows, but that seems highly inefficient.</p>
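<p>For a single row there is no need for <code>between_time</code>: extract the <code>time()</code> component of the last timestamp and compare it against <code>datetime.time</code> bounds (a sketch):</p>

```python
import datetime
import pandas as pd

df = pd.DataFrame({
    "time": pd.to_datetime(["2022-06-11 07:34:54.168327+00:00"]),
    "score": [2793.7],
})

last = df["time"].iloc[-1].time()  # datetime.time of the last row
in_window = datetime.time(16, 0) <= last <= datetime.time(21, 0)
print(in_window)  # False: 07:34 is outside 16:00-21:00
```

<p>Note that <code>.time()</code> drops the timezone; convert with <code>tz_convert</code> first if the window should be evaluated in local time.</p>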
|
<python><pandas><dataframe><datetime>
|
2024-05-09 09:49:31
| 1
| 399
|
Allure
|
78,453,559
| 5,378,816
|
Can exception. __traceback__ be None for an exception caught in try-except?
|
<p>Submitting this code (BTW, it prints <code>None</code>):</p>
<pre><code>try:
1/0
except Exception as err:
print(err.__traceback__.tb_next)
</code></pre>
<p>to mypy produces an error:</p>
<pre><code># error: Item "None" of "TracebackType | None" has no attribute "tb_next" [union-attr]
</code></pre>
<p>Can this happen? A caught exception with a traceback equal to <code>None</code>?</p>
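<p>It can be <code>None</code> for an exception object that was never raised: <code>__traceback__</code> is only attached during propagation, which is why the attribute is typed as Optional. Inside an <code>except</code> block it is set in practice, but mypy cannot know that (a quick check):</p>

```python
# An exception that was never raised carries no traceback...
e = ValueError("not raised")
print(e.__traceback__)  # None

# ...while a caught one does (though tb_next may itself be None)
try:
    1 / 0
except ZeroDivisionError as err:
    caught_tb = err.__traceback__
print(caught_tb is not None)  # True
```

<p>An <code>assert err.__traceback__ is not None</code> (or a cast) before accessing <code>tb_next</code> is the usual way to satisfy mypy here.</p>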
|
<python>
|
2024-05-09 09:43:06
| 1
| 17,998
|
VPfB
|
78,453,496
| 11,233,365
|
Get Azure Pipelines to install test environment from pyproject.toml instead of requirements_dev.txt
|
<p>As mentioned in the title, I'm hoping to get Azure Pipelines to be able to install its test environment from <code>pyproject.toml</code> instead of a separate <code>requirements_dev.txt</code> file, as it would help with reducing the number of dependencies lists that I'd have to maintain.</p>
<p>From looking in the <code>.azure-pipelines</code> folder, I was able to find the following script section in the <code>ci.yml</code> file that seems to be responsible for the installation of the test environment:</p>
<pre><code> - script: |
set -eux
pip install --disable-pip-version-check -r "$(Pipeline.Workspace)/src/requirements_dev.txt"
pip install --no-deps --disable-pip-version-check -e "$(Pipeline.Workspace)/src"
displayName: Install package
</code></pre>
<p>Would replacing <code>/requirements_dev.txt</code> with <code>/pyproject.toml</code> be sufficient to get Azure Pipelines to read from <code>pyproject.toml</code>, or will I have to perform other tweaks in addition to that?</p>
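<p>Replacing the path alone would likely not work: pip's <code>-r</code> flag expects requirements-file syntax, not TOML. A common pattern (names here are assumptions) is to declare the dev dependencies as an optional extra under <code>[project.optional-dependencies]</code> in pyproject.toml, e.g. <code>dev = ["pytest", "mypy"]</code>, and then install the package with that extra:</p>

```yaml
- script: |
    set -eux
    pip install --disable-pip-version-check -e "$(Pipeline.Workspace)/src[dev]"
  displayName: Install package with dev extras
```

<p>This way both the runtime and dev dependency lists live in pyproject.toml, and the separate requirements_dev.txt can be retired.</p>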
<p>Thanks in advance!</p>
|
<python><azure-pipelines><github-actions>
|
2024-05-09 09:30:32
| 1
| 301
|
TheEponymousProgrammer
|
78,453,483
| 6,221,742
|
Text-2-Sql using Llama3 locally
|
<p>I'm attempting to utilize the template provided in the Langchain repository for text-to-SQL retrieval using <a href="https://huggingface.co/QuantFactory/Meta-Llama-3-8B-GGUF" rel="nofollow noreferrer">Llama3</a>. Here's the link to the template: <a href="https://github.com/langchain-ai/langchain/tree/master/templates/sql-llamacpp" rel="nofollow noreferrer">Langchain SQL LlamaCPP Template</a>.</p>
<p>Here is my code</p>
<pre><code># chain.py code
# Import necessary modules
import os
import re
from langchain.memory import ConversationBufferMemory
from langchain_community.llms import LlamaCpp
from langchain_community.utilities import SQLDatabase
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableLambda, RunnablePassthrough, RunnableParallel, RunnableMap
# Define model path and name
models_path = f'C:\\Users\\<my_username>\\Desktop\\SQL_LLAMACPP\\my-app\\packages\\sql-llamacpp\\model'
model_name_phi = 'Meta-Llama-3-8B.Q2_K.gguf'
# Instantiate LlamaCpp
llm_cpp = LlamaCpp(
model_path=f"{models_path}\\{model_name_phi}",
    n_threads=4,
n_ctx=12048,
max_tokens=500,
top_p=1,
f16_kv=True,
verbose=True,
)
# Define PostgreSQL connection string and initialize SQLDatabase
CONNECTION_STRING = f"postgresql+psycopg2://user:password@127.0.0.1:5437/LLMs"
db = SQLDatabase.from_uri(CONNECTION_STRING,
schema="public",
view_support=True
)
# Define functions to interact with the database
def get_schema(_):
return db.get_table_info()
def get_query(query):
sql_query = query.replace("SQLQuery: ", "")
return sql_query
def run_query(query):
return db.run(query)
# Define prompt templates
sql_template = """
You are a Postgres expert. Given an input question, first create a
syntactically correct Postgres query to run, then look at the results
of the query and return the answer to the input question.
Unless the user specifies in the question a specific number of
examples to obtain, query for at most 5 results using the LIMIT clause
as per Postgres. You can order the results to return the most
informative data in the database.
Never query for all columns from a table. You must query only the
columns that are needed to answer the question. Wrap each column name
in double quotes (") to denote them as delimited identifiers.
Pay attention to use only the column names you can see in the tables
below. You can never make a query using columns that do not exist. Also,
make sure to which column is in which table.
Pay attention to use date('now') function to get the current date, if the question
involves "today".
If you can't find an answer return a query with a polite message.
Ensure the query follows rules:
- No INSERT, UPDATE, DELETE instructions.
- No CREATE, ALTER, DROP instructions.
- Only SELECT queries for data retrieval.
Use the following exact format:
Question: <Question here>
SQLQuery: <SQL Query to run>
SQLResult: <Result of the SQLQuery>
Answer: <Final answer here>
Only use the following tables and columns:
{dbschema}
""" # noqa: E501
final_template = """
Based on the table schema below, question, sql query, and sql response,
write a natural language response:
Schema: {dbschema}
Question: {question}
SQLQuery: {query}
SQLResponse: {response}
""" # noqa: E501
# Define conversation memory
memory = ConversationBufferMemory(return_messages=True)
# Define prompt chains
sql_chain = (
    RunnablePassthrough.assign(
        dbschema=get_schema,
        history=RunnableLambda(lambda x: memory.load_memory_variables(x)["history"]),
    )
    | ChatPromptTemplate.from_messages(
        [
            ("system", sql_template),
            MessagesPlaceholder(variable_name="history"),
            ("human", "{question}"),
        ]
    )
    | llm_cpp.bind(stop=["\nSQLResult:"])
    | StrOutputParser()
)
prompt_response = ChatPromptTemplate.from_messages(
    [
        ("system", final_template),
        ("human", "{question}")
    ]
)
# `save` stores each exchange in the conversation memory (from the template)
def save(input_output):
    output = {"output": input_output.pop("output")}
    memory.save_context(input_output, output)
    return output["output"]

sql_response_memory = RunnablePassthrough.assign(output=sql_chain) | save
class InputType(BaseModel):
    question: str
chain = (
    RunnablePassthrough.assign(
        query=sql_response_memory
    ).with_types(
        input_type=InputType
    )
    | RunnablePassthrough.assign(
        dbschema=get_schema,
        response=RunnableLambda(lambda x: db.run(get_query(x["query"]))),
    )
    | prompt_response
    | llm_cpp
)
</code></pre>
<p>But, I am getting the following error:</p>
<blockquote>
<p>| sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "20" | LINE 1: 20,5NUT" | ^ | | [SQL: 20,5NUT"] | (Background on this error at: <a href="https://sqlalche.me/e/20/f405" rel="nofollow noreferrer">https://sqlalche.me/e/20/f405</a>)</p>
</blockquote>
<p>Do I need to modify the prompt template, or is the model not good for this task?</p>
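<p>The traceback shows raw, non-SQL model text (<code>20,5NUT"</code>) reaching <code>db.run()</code>, so hardening <code>get_query</code> helps regardless of model choice. A defensive sketch (the regex heuristic is an assumption, not part of the template) that pulls out only the first SELECT statement:</p>

```python
import re

def get_query(text: str) -> str:
    # Grab the first SELECT up to a semicolon, a blank line, or end of text;
    # raise instead of forwarding garbage to the database.
    match = re.search(r"(SELECT\b.*?)(?:;|\n\s*\n|$)", text,
                      re.IGNORECASE | re.DOTALL)
    if not match:
        raise ValueError(f"No SELECT statement found in model output: {text!r}")
    return match.group(1).strip()
```

<p>With this in place, malformed completions fail loudly in Python rather than as a Postgres syntax error.</p>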
|
<python><postgresql><langchain><large-language-model>
|
2024-05-09 09:27:57
| 0
| 339
|
AndCh
|
78,453,464
| 1,928,054
|
tox cannot find module
|
<p>I am trying to run tox on a module.</p>
<p>Consider the following python package:</p>
<pre><code>foo
├── tox.ini
├── setup.cfg
├── tests
│   └── test_bar.py
└── src
    ├── data
    │   └── data.csv
    ├── __init__.py
    └── bar.py
</code></pre>
<p>Where <code>bar.py</code> has:</p>
<pre><code>import importlib.resources

def get_data_file(file_name):
    return importlib.resources.files("data").joinpath(file_name).read_text()
</code></pre>
<p>and <code>test_bar.py</code> has:</p>
<pre><code>import bar

def test_bar():
    data = bar.get_data_file('data.csv')
    assert True
</code></pre>
<p>I can use this package as follows:</p>
<pre><code>import bar
data = bar.get_data_file('data.csv')
</code></pre>
<p>Moreover, <code>pytest</code> passes.</p>
<p>However, <code>tox</code> returns the following error message in <code>tests/test_bar.py</code></p>
<p><code>ModuleNotFoundError: No module named 'bar'</code></p>
<h3>Initial approach</h3>
<p>Initially, the package had the following structure:</p>
<pre><code>foo
├── tox.ini
├── setup.cfg
├── tests
│   └── test_bar.py
└── src
    └── foo
        ├── data
        │   └── data.csv
        ├── __init__.py
        └── bar.py
</code></pre>
<p>This structure is in accordance with <code>PyScaffold</code>, and works with <code>tox</code>. However, since this package only has one module, I was hoping to avoid having to <code>import foo.bar</code>, and instead <code>import bar</code>.</p>
<p>I tried the suggestion from <a href="https://github.com/pyscaffold/pyscaffold/discussions/399" rel="nofollow noreferrer">https://github.com/pyscaffold/pyscaffold/discussions/399</a> i.e. add <code>from .foo import bar</code> or <code>from . import bar</code> in <code>__init__.py</code>.</p>
<p>However, if I then try to <code>import bar</code>, I get <code>ModuleNotFoundError: No module named 'bar'</code>.</p>
<p>Moreover, my understanding is that anything in the <code>src</code> folder is considered to be part of the package. Therefore I figured I should be able to put <code>bar.py</code> directly in <code>src</code>.</p>
<p>In conclusion, I believe to have the following questions:</p>
<ul>
<li>Is it best practice to try to aim for <code>import bar</code> instead of <code>import foo.bar</code>, given that I only have 1 module</li>
<li>If so, is it best practice to put <code>bar.py</code> directly in the <code>src</code> folder</li>
<li>If so, should I configure <code>tox</code> differently?</li>
</ul>
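<p>For the layout with <code>bar.py</code> directly in <code>src</code>, one sketch (assuming a setuptools-based build, which PyScaffold uses; merge into your existing sections) is to declare <code>bar</code> as a top-level module so the package that tox installs actually exposes <code>import bar</code>:</p>

```ini
# setup.cfg additions (a sketch)
[options]
package_dir =
    = src
py_modules = bar
```

<p>pytest passing locally while tox fails usually means the local run imported from the working directory, whereas tox imports only what gets installed into its isolated venv.</p>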
|
<python><tox><pyscaffold>
|
2024-05-09 09:25:12
| 0
| 503
|
BdB
|
78,453,450
| 7,640,923
|
Identifying and retrieving particular sequences of characters from within text fields containing Basic Data desc
|
<p>I have a list named MAT_DESC that contains material descriptions in a free-text format. Here are some sample values from the MAT_DESC column:</p>
<pre><code>QWERTYUI PN-DR, Coarse, TR, 1-1/2 in, 50/Carton, 200 ea/Case, Dispenser Pack
2841 PC GREY AS/AF (20/CASE)
CI-1A, up to 35 kV, Compact/Solid, Stranded, 10/Case
MT53H7A4410WS5 WS WEREDSS PMR45678 ERTYUI HEERTYUIND 10/case
TYPE.2 86421-K40-F000, 1 Set/Pack, 100 Packs/Case
Clear, 1 in x 36 yd, 4.8 mil, 24 rolls per case
3M™ Victory Series™ Bracket MBT™ 017-873, .022, UL3, 0T/8A, Hk, 5/Pack
3M™ BX™ Dual Reader Protective Eyewear 11458-00000-20, Clear Anti-Fog Lens, Silver/Black Frame, +2.0 Top/Bottom Diopter, 20 ea/Case
4220VDS-QCSHC/900-000/A CABINET EMPTY
3M™ Bumpon™ Protective Product SJ5476 Fluorescent Yellow, 3.000/Case
3M™ Bumpon™ Protective Products SJ61A2 Black, 10,000/Case
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Material Desc</th>
<th>String to be Extracted</th>
</tr>
</thead>
<tbody>
<tr>
<td>QWERTYUI PN-DR, Coarse, TR, 1-1/2 in, 50/Carton, 200 ea/Case, Dispenser Pack</td>
<td>50/Carton, 200 ea/Case</td>
</tr>
<tr>
<td>2841 PC GREY AS/AF (20/CASE)</td>
<td>20/CASE</td>
</tr>
<tr>
<td>TYPE.2 86421-K40-F000, 1 Set/Pack, 100 Packs/Case</td>
<td>1 Set/Pack, 100 Packs/Case</td>
</tr>
<tr>
<td>RTYU 31655, 240+, 6 in, 50 Discs/Roll, 6 Rolls/Case</td>
<td>50 Discs/Roll, 6 Rolls/Case</td>
</tr>
<tr>
<td>Clear, 1 in x 36 yd, 4.8 mil, 24 rolls per case</td>
<td>24 rolls per case</td>
</tr>
<tr>
<td>3M™ Victory Series™ Bracket MBT™ 017-873, .022, UL3, 0T/8A, Hk, 5/Pack</td>
<td>5/Pack</td>
</tr>
<tr>
<td>3M™ BX™ Dual Reader Protective Eyewear 11458-00000-20, Clear Anti-Fog Lens, Silver/Black Frame, +2.0 Top/Bottom Diopter, 20 ea/Case</td>
<td>20 ea/Case</td>
</tr>
<tr>
<td>4220VDS-QCSHC/900-000/A CABINET EMPTY</td>
<td>No units</td>
</tr>
<tr>
<td>3M™ Bumpon™ Protective Product SJ5476 Fluorescent Yellow, 3.000/Case</td>
<td>3.000/Case</td>
</tr>
<tr>
<td>3M™ Bumpon™ Protective Products SJ61A2 Black, 10,000/Case</td>
<td>10,000/Case</td>
</tr>
</tbody>
</table></div>
<p>I'm trying to extract specific patterns of substrings from the MAT_DESC column, such as the quantity and unit information (e.g., "50 Discs/Roll", "200 ea/Case", "10/Case", "50/Carton, 200 ea/Case", etc.).
I'm currently using the following Python code to attempt this:</p>
<pre><code>import re

pattern = r"(\d+)\s*(\w+)/(\w+)"
results = []
for desc in material_descriptions:
    matches = re.findall(pattern, desc)
    unit_strings = []
    if matches:
        for match in matches:
            quantity, unit1, unit2 = match
            unit_string = f"{quantity} {unit1}/{unit2}"
            unit_strings.append(unit_string)
    if unit_strings:
        unit_info = ", ".join(unit_strings)
        results.append((desc, unit_info))

for material_desc, unit_info in results:
    print(f"Material Description: {material_desc}")
    print(f"Unit Information: {unit_info}")
    print()
</code></pre>
<p><strong>The Python script fails in the scenarios listed below:</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Material Desc</th>
<th>String to be Extracted</th>
</tr>
</thead>
<tbody>
<tr>
<td>3M™ Victory Series™ Bracket MBT™ 017-873, .022, UL3, 0T/8A, Hk, 5/Pack</td>
<td>5/Pack</td>
</tr>
<tr>
<td>3M™ BX™ Dual Reader Protective Eyewear 11458-00000-20, Clear Anti-Fog Lens, Silver/Black Frame, +2.0 Top/Bottom Diopter, 20 ea/Case</td>
<td>20 ea/Case</td>
</tr>
<tr>
<td>4220VDS-QCSHC/900-000/A CABINET EMPTY</td>
<td>No units</td>
</tr>
<tr>
<td>3M™ Bumpon™ Protective Product SJ5476 Fluorescent Yellow, 3.000/Case</td>
<td>3.000/Case</td>
</tr>
<tr>
<td>3M™ Bumpon™ Protective Products SJ61A2 Black, 10,000/Case</td>
<td>10,000/Case</td>
</tr>
</tbody>
</table></div>
<p>Is there a way to achieve this ?</p>
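<p>A broader sketch pattern (an assumption, not guaranteed complete for every description): quantities may carry separators ("10,000", "3.000"), the first unit word is optional ("5/Pack"), "per" can stand in for "/", and the target unit is drawn from a small packaging vocabulary so strings like "Top/Bottom" or "000/A" don't match. Extend <code>UNITS</code> for your real data.</p>

```python
import re

UNITS = r"(?:Cases?|Cartons?|Packs?|Rolls?|Boxes?)"
PATTERN = re.compile(
    rf"(\d[\d.,]*)\s*([A-Za-z]+)?\s*(?:/|\bper\b)\s*({UNITS})\b",
    re.IGNORECASE,
)

def extract_units(desc):
    # Collect "qty unit1/unit2" (or "qty/unit2" when unit1 is absent);
    # note "per" phrasings are normalized to the "/" form.
    parts = [f"{q} {u1}/{u2}" if u1 else f"{q}/{u2}"
             for q, u1, u2 in PATTERN.findall(desc)]
    return ", ".join(parts) if parts else "No units"
```

<p>The vocabulary-based third group is the key design choice: an open-ended <code>(\w+)</code> there is what caused false matches like "Top/Bottom" in the original pattern.</p>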
|
<python><python-3.x><regex>
|
2024-05-09 09:21:12
| 1
| 315
|
rohi
|
78,453,435
| 2,164,904
|
matching numpy conditions row and column without iteration
|
<p>Given a dataframe <code>condition</code> defined as:</p>
<pre><code>0 [3, 4]
1 [2]
</code></pre>
<p>I want another dataframe's row 0, columns 3 and 4, and row 1, column 2, to be set to 0.</p>
<p>For example given another dataframe <code>df2</code>:</p>
<pre><code> 1 2 3 4
0 0.275464 0.275404 0.2782 0.273485
1 0.275464 0.275404 0.2782 0.273485
</code></pre>
<p>After applying <code>condition</code>, I want <code>df2</code> to become:</p>
<pre><code> 1 2 3 4
0 0.275464 0.275404 0.0000 0.0000
1 0.275464 0.000000 0.2782 0.273485
</code></pre>
<p>Is there a good way to do this without iterating over the individual rows?</p>
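<p>One iteration-free sketch (assuming <code>condition</code> holds, per row, the column labels to zero): expand the ragged lists into flat (row, column) pairs, build a boolean mask, and apply it once.</p>

```python
import numpy as np
import pandas as pd

condition = pd.Series([[3, 4], [2]])
df2 = pd.DataFrame([[0.275464, 0.275404, 0.2782, 0.273485]] * 2,
                   columns=[1, 2, 3, 4])

# one mask entry per (row, column-label) pair, set in a single vectorised write
rows = np.repeat(np.arange(len(condition)), [len(c) for c in condition])
cols = df2.columns.get_indexer(np.concatenate(condition.to_list()))
mask = np.zeros(df2.shape, dtype=bool)
mask[rows, cols] = True
df2 = df2.mask(mask, 0.0)
```

<p><code>get_indexer</code> translates column labels to positions, so this works even when labels are not 0-based.</p>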
|
<python><pandas>
|
2024-05-09 09:18:15
| 4
| 1,385
|
John Tan
|
78,453,194
| 5,567,893
|
How to match the index of tensors and values in the list using pytorch?
|
<p>I'd like to match tensor indices against values from a list.
I'm trying to do link prediction using PyTorch.
In this process, I need to convert each index to a name by mapping it through a dictionary.
To do this, I set up the dictionary and applied a mask to the tensor, but it returned unexpected indices.</p>
<pre class="lang-py prettyprint-override"><code>inv_entity_dict = {v: k for k, v in entity_dict.items()}
inv_entity_dict
#{0: 'TMEM35A',
# 1: 'FHL5',
# 2: 'Sirolimus',
# 3: 'TMCO2',
# 4: 'RNF123',
# 5: 'SMURF2',
# 6: 'SSH3',
# 7: 'PSMA4',
# 8: 'SOD3',
# 9: 'SCOC',
# 10: 'Cysteamine',
# 11: 'TOX',
#...}
nonzero[0:10]
#array([ 0, 1, 3, 4, 5, 6, 7, 8, 9, 11])
</code></pre>
<p>After running the code, it returned unexpected results: Sirolimus (idx==2), which is not in the nonzero array, should not be matched to a name.</p>
<pre class="lang-py prettyprint-override"><code>for i in range(1):
    raw_probs = (z[i][nonzero[0:10]] @ z[i][nonzero[0:10]].t()).sigmoid()
    filtered_probs = pd.DataFrame((raw_probs > 0.9).nonzero(as_tuple=False).cpu().numpy(), columns=['Gene1', 'Gene2'])
    filtered_probs['prob'] = raw_probs[(raw_probs > 0.9)].cpu().detach().numpy()
    filtered_probs_name = map_id2gene(filtered_probs, inv_entity_dict)  # converting func.
#Expected result
# Gene1 Gene2 prob
#67 TOX TOX 1.0
#0 TMEM35A TMEM35A 1.0
#1 TMEM35A FHL5 1.0
#2 TMEM35A RNF123 1.0
#52 SCOC TMEM35A 1.0
#Wrong
# Gene1 Gene2 prob
#67 SCOC SCOC 1.0
#0 TMEM35A TMEM35A 1.0
#1 TMEM35A FHL5 1.0
#2 TMEM35A Sirolimus 1.0
#52 SOD3 TMEM35A 1.0
</code></pre>
<p>I guess the initialized <code>raw_probs</code> indices went into the converting process directly.</p>
<pre class="lang-py prettyprint-override"><code>raw_probs
#tensor([[1.0000e+00, ..., 1.0000e+00], #real index: 0
# [1.0000e+00, ..., 1.0000e+00], #real index: 1
# [1.0000e+00, ..., 1.0000e+00], #real index: 3, but considered to 2
# [1.0000e+00, ..., 1.0000e+00], #real index: 4, but considered to 3, ...
# [1.0000e+00, ..., 1.0000e+00], #real index: 5
# [1.0000e+00, ..., 1.0000e+00], #real index: 6
# [0.0000e+00, ..., 0.0000e+00], #real index: 7
# [0.0000e+00, ..., 4.4097e-36], #real index: 8
# [1.0000e+00, ..., 1.0000e+00], #real index: 9
# [1.0000e+00, ..., 1.0000e+00] #real index: 11, but considered to 9], device='cuda:0')
</code></pre>
<p>In this case, how can I match the correct ids and names based on the <code>inv_entity_dict</code> and <code>nonzero</code> list?</p>
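<p>After subsetting with <code>z[i][nonzero[0:10]]</code>, row k of <code>raw_probs</code> corresponds to original entity <code>nonzero[k]</code>, not k itself, so filtered indices must be mapped back through <code>nonzero</code> before the name lookup. A minimal sketch (dictionary truncated from the question):</p>

```python
import numpy as np

inv_entity_dict = {0: 'TMEM35A', 1: 'FHL5', 2: 'Sirolimus', 3: 'TMCO2',
                   4: 'RNF123', 5: 'SMURF2'}
nonzero = np.array([0, 1, 3, 4, 5, 6, 7, 8, 9, 11])

subset_idx = 2                           # a row index coming out of filtered_probs
original_idx = int(nonzero[subset_idx])  # maps back to 3 ('TMCO2'), not 2 ('Sirolimus')
name = inv_entity_dict[original_idx]
```

<p>Applied to the question's frame, that would mean remapping both columns, e.g. <code>filtered_probs['Gene1'] = nonzero[filtered_probs['Gene1']]</code> (and likewise for <code>Gene2</code>) before calling <code>map_id2gene</code>.</p>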
|
<python><pytorch>
|
2024-05-09 08:27:49
| 1
| 466
|
Ssong
|
78,453,172
| 713,200
|
How to search for an attribute substring in an XPath when the attribute is not known?
|
<p>I have an XPath like</p>
<pre><code>//*[name()='svg' and contains(@data-type,'CHASSIS')]
</code></pre>
<p>This will lead to below html</p>
<pre><code><svg xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://creativecommons.org/ns#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" version="1.1" id="asr9901-front_coreLayer_2" x="0" y="0" viewBox="0 0 1663 329" xml:space="preserve" width="1674px" height="325px" sodipodi:docname="asr9901-front_core.svg" inkscape:version="0.92.1 r15371" data-orientation="undefined" data-moduleId="271730511" data-type="CHASSIS">
</code></pre>
<p>Basically I want to search for the strings "NCS-4202" and "Front" without specifying any attribute.</p>
<p>How can I do the search ?</p>
<p>I tried with</p>
<pre><code>//*[name()='svg' and contains(@data-type,'CHASSIS') and contains(., 'Front')]
</code></pre>
<p>but had no luck; it failed to match any elements.</p>
|
<python><selenium-webdriver><xpath>
|
2024-05-09 08:23:21
| 0
| 950
|
mac
|
78,452,885
| 3,712,352
|
Multiply a pyspark column with array for each row
|
<p>I have a pyspark DataFrame with two columns. One is a float and the other is an array.
I know that the length of the array in each row is the same as the number of rows.
I want to create a new column in the DataFrame where, for each row, the result is the dot product of the array and the value column.</p>
<p>For example, for the following DataFrame:</p>
<pre><code>+------------------------------------------------------------+-----+
|weights |value|
+------------------------------------------------------------+-----+
|[0.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0]|34 |
|[5.0, 0.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0, 4.0]|50 |
|[4.0, 5.0, 0.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0]|56 |
|[3.0, 4.0, 5.0, 0.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0]|45 |
|[2.0, 3.0, 4.0, 5.0, 0.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 1.0]|34 |
|[1.0, 2.0, 3.0, 4.0, 5.0, 0.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0]|36 |
|[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 0.0, 5.0, 4.0, 3.0, 2.0, 1.0]|45 |
|[1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 0.0, 5.0, 4.0, 3.0, 2.0]|50 |
|[2.0, 1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 0.0, 5.0, 4.0, 3.0]|57 |
|[3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 0.0, 5.0, 4.0]|39 |
|[4.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 0.0, 5.0]|48 |
|[5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 0.0]|39 |
+------------------------------------------------------------+-----+
</code></pre>
<p>I want to add a new column 'result' and the value of each row will be:</p>
<pre><code>numpy.dot(row['weights'], [34, 50, 56, 45, 34, 36, 45, 50, 57, 39, 48, 39])
</code></pre>
<p>Thanks.</p>
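<p>A pure-numpy illustration of the per-row computation. In PySpark the same sum can be built as a column expression, e.g. (an assumption about your schema) <code>df.withColumn("result", sum(F.col("weights")[i] * float(v) for i, v in enumerate(values)))</code> after collecting the <code>value</code> column into a Python list.</p>

```python
import numpy as np

# the collected "value" column, shortened to 3 entries for the sketch
values = np.array([34.0, 50.0, 56.0])
rows = [{"weights": [0.0, 5.0, 4.0], "value": 34},
        {"weights": [5.0, 0.0, 5.0], "value": 50}]

for row in rows:
    # dot product of this row's weights with the whole value column
    row["result"] = float(np.dot(row["weights"], values))
```
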
|
<python><pyspark>
|
2024-05-09 07:15:47
| 2
| 1,838
|
AndreyF
|
78,452,871
| 984,621
|
Scrapy won't download images in the *.MPO format - PIL.UnidentifiedImageError: cannot identify image file
|
<p>When Scrapy spiders tries to download an image that is in the <strong>.mpo</strong> format, it results in this error:</p>
<pre><code>PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7ff1297bbec0>
</code></pre>
<p>How do I make Scrapy/Pillow process images with the <strong>.mpo</strong> extension?</p>
<p><strong>Detailed error</strong></p>
<pre><code>PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7ff1297801d0>
2024-05-09 10:26:26 [scrapy.core.engine] DEBUG: Crawled (200) <GET URL> (referer: None)
2024-05-09 10:26:26 [scrapy.pipelines.files] DEBUG: File (downloaded): Downloaded file from <GET URL> referred in <None>
2024-05-09 10:26:26 [scrapy.pipelines.files] ERROR: File (unknown-error): Error processing file from <GET URL> referred in <None>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1660, in _inlineCallbacks
result = current_context.run(gen.send, result)
StopIteration: <200 URL>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/scrapy/pipelines/files.py", line 493, in media_downloaded
checksum = self.file_downloaded(response, request, info, item=item)
File "/usr/local/lib/python3.10/dist-packages/scrapy/pipelines/media.py", line 153, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/scrapy/pipelines/images.py", line 119, in file_downloaded
return self.image_downloaded(response, request, info, item=item)
File "/usr/local/lib/python3.10/dist-packages/scrapy/pipelines/media.py", line 153, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/scrapy/pipelines/images.py", line 123, in image_downloaded
for path, image, buf in self.get_images(response, request, info, item=item):
File "/usr/local/lib/python3.10/dist-packages/scrapy/pipelines/images.py", line 139, in get_images
orig_image = self._Image.open(BytesIO(response.body))
File "/usr/local/lib/python3.10/dist-packages/PIL/Image.py", line 3339, in open
raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7ff1297bbec0>
</code></pre>
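<p>Pillow does ship an MPO plugin, so a first check (a sketch) is whether your installed Pillow registers the format; an outdated Pillow, or a response body that isn't really an MPO image (e.g. an HTML error page), are the usual culprits for <code>UnidentifiedImageError</code>.</p>

```python
from PIL import Image

# Modern Pillow maps the .mpo extension to its MPO plugin;
# if this prints None, upgrading Pillow is the first fix to try.
print(Image.registered_extensions().get(".mpo"))
```
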
|
<python><scrapy><python-imaging-library>
|
2024-05-09 07:12:57
| 0
| 48,763
|
user984621
|
78,452,863
| 13,942,929
|
Cython : How can we properly call add, sub, mul & divide operator?
|
<p>In .pxd file, I wrote</p>
<pre><code> _Point operator+(const _Point other) const
_Point operator -(const _Point other) const
bool operator==(const _Point other) const
</code></pre>
<p>In .pyx file, I wrote</p>
<pre><code>def __eq__(self, Point other):
cdef _Point* selfptr = self.c_point.get()
cdef _Point* otherptr = other.c_point.get()
return selfptr[0] == otherptr[0]
</code></pre>
<p>But when it comes to operators like <code>__add__</code>, I cannot do the same thing as with <code>==</code>.</p>
<pre><code>def __add__(self, Point other):
    cdef _Point * selfptr = self.c_point.get()
    cdef _Point * otherptr = other.c_point.get()
    return selfptr[0] + otherptr[0]
</code></pre>
<p>Error Message</p>
<pre><code> × python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [27 lines of output]
Error compiling Cython file:
------------------------------------------------------------
...
return not self.__eq__(other)
def __add__(self, Point other):
cdef _Point * selfptr = self.c_point.get()
cdef _Point * otherptr = other.c_point.get()
return selfptr[0] + otherptr[0]
^
------------------------------------------------------------
src/Geometry/Point/Point.pyx:170:26: Cannot convert '_Point' to Python object
</code></pre>
<p>How can I fix this? Thank you</p>
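<p>The error means Cython has no way to turn the returned C++ <code>_Point</code> value into a Python object automatically (<code>==</code> works because it yields a plain <code>bool</code>). One sketch is to store the C++ result and wrap it back into the Python-level <code>Point</code> class; the field names (<code>x</code>, <code>y</code>) and constructor signature here are assumptions about your wrapper:</p>

```cython
def __add__(self, Point other):
    cdef _Point* selfptr = self.c_point.get()
    cdef _Point* otherptr = other.c_point.get()
    cdef _Point result = selfptr[0] + otherptr[0]
    # construct a new Python-visible Point from the C++ value
    return Point(result.x, result.y)
```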
|
<python><c++><cython><operator-keyword><cythonize>
|
2024-05-09 07:12:03
| 1
| 3,779
|
Punreach Rany
|
78,452,816
| 4,806,592
|
Why do I get TypeError: iteration over a 0-d array even though I am passing a 1-d array?
|
<p>I have a dataframe (<code>new_df</code>) with below data:</p>
<pre><code>Unnamed: 0 Words embedding
0 0 Elephant [-0.017855134320424067, -0.008739002273680945,...
1 1 Lion [-0.001514446088710819, -0.010011047775235734,...
2 2 Tiger [-0.013417221828848814, -0.009594874215361255,...
3 3 Dog [-0.0009933243881749651, -0.015114395874422861...
4 4 Cricket [0.003905495127549335, -0.0072066829816015395,...
5 5 Footbal [-0.011442505362835323, -0.008127146122306163,...
</code></pre>
<p>I am picking value of the dataframe as</p>
<pre><code>text_embedding = new_df["embedding"][1]
</code></pre>
<p>Now when I run the below snippet for calculating some value:</p>
<pre><code>import numpy as np

def cosine_similarity(A, B):
    # Calculate dot product
    dot_product = sum(a*b for a, b in zip(A, B))
    # Calculate the magnitude of each vector
    magnitude_A = sum(a*a for a in A)**0.5
    magnitude_B = sum(b*b for b in B)**0.5
    cosine_similarity = dot_product / (magnitude_A * magnitude_B)
    print(f"Cosine Similarity using standard Python: {cosine_similarity}")

array1 = np.array(new_df['embedding'][0])
array2 = np.array(text_embedding)
cosine_similarity(array1, array2)
</code></pre>
<p>I get this Error:</p>
<pre><code> ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[13], line 29
25 array2 = np.array(text_embedding)
27 #print(array1)
28 #print(array2)
---> 29 cosine_similarity(array1,array2)
31 #cosine_similarity([1,2,3,1,1,2,2,2,2,2,2,3,2,2,2,2,2],[1,2,3,1,1,2,2,2,2,2,2,5,2,2,2,2,2])
32
33 #df
Cell In[13], line 10, in cosine_similarity(A, B)
7 def cosine_similarity(A, B):
8
9 # Calculate dot product
---> 10 dot_product = sum(a*b for a, b in zip(A, B))
12 # Calculate the magnitude of each vector
13 magnitude_A = sum(a*a for a in A)**0.5
TypeError: iteration over a 0-d array
</code></pre>
<p>When I try the below code, its working fine:</p>
<pre><code>cosine_similarity([1,2,3,1,1,2,2,2,2,2,2,3,2,2,2,2,2],[1,2,3,1,1,2,2,2,2,2,2,5,2,2,2,2,2])
</code></pre>
<p>I don't understand why I get this error when taking the lists from my dataframe. Both variables should be 1-D arrays, so I'm not sure why it's failing with array1 and array2 while passing when the values are provided manually.</p>
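<p>A likely cause (an assumption based on the "Unnamed: 0" column, which suggests the frame was round-tripped through <code>read_csv</code>): each "embedding" cell is a <em>string</em>, and <code>np.array</code> of a string is a 0-d array that cannot be iterated. A quick sketch reproducing and fixing this:</p>

```python
import numpy as np
from ast import literal_eval

cell = "[-0.017855134320424067, -0.008739002273680945]"
str_array = np.array(cell)             # 0-d: zip() over it raises TypeError
parsed = np.array(literal_eval(cell))  # parse the string back to a list -> 1-d
```

<p>If this matches your case, converting the whole column once after loading, e.g. <code>new_df['embedding'] = new_df['embedding'].apply(literal_eval)</code>, makes the original function work unchanged.</p>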
|
<python><python-3.x><pandas><dataframe><numpy>
|
2024-05-09 07:02:54
| 1
| 519
|
Karan
|
78,452,797
| 7,959,614
|
Create bimodal distribution from two uniform distribution using Numpy
|
<p>I am trying to reproduce the <code>bimodalSample</code> function of this <a href="https://blog.cyrusroshan.com/post/transforming-distributions" rel="nofollow noreferrer">blog</a> in Python.</p>
<p>My attempt:</p>
<pre><code>import numpy as np
def bimodal_pdf(distance: float, weight: float) -> float:
    r = np.random.rand()
    if r < weight:
        return np.random.normal(r) * (1 - distance)
    else:
        return np.random.normal(r) * (1 - distance) + distance

n_samples = 100000
distance = 0.7
weight = 0.5
pdf = np.array([
    bimodal_pdf(distance=distance, weight=weight) for s in range(n_samples)
])
</code></pre>
<p>However, I am getting values below zero and above 1. What's wrong with my implementation?
Thanks</p>
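<p>The out-of-range values come from <code>np.random.normal(r)</code>, which draws from an unbounded normal distribution (mean <code>r</code>, std 1). A sketch of the blog's idea, assuming each mode is a <em>uniform</em> draw squeezed into a sub-interval of [0, 1]:</p>

```python
import numpy as np

def bimodal_sample(distance, weight):
    # pick a mode with probability `weight`, then draw uniformly inside it
    if np.random.rand() < weight:
        return np.random.rand() * (1 - distance)             # lower mode [0, 1-d]
    return np.random.rand() * (1 - distance) + distance      # upper mode [d, 1]

samples = np.array([bimodal_sample(0.7, 0.5) for _ in range(10000)])
```

<p>With <code>distance=0.7</code> the two modes occupy [0, 0.3] and [0.7, 1], so every sample stays inside [0, 1] and the gap between the modes is empty.</p>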
|
<python><numpy>
|
2024-05-09 06:59:08
| 1
| 406
|
HJA24
|
78,452,640
| 3,179,698
|
In jupyter notebook, how to use venv to manage packages?
|
<p>I am starting to use venv to manage my packages.</p>
<p>I know I could use the terminal for this, but I want to do it in a Jupyter notebook so that I can install packages smoothly during my development work.</p>
<p>However, when I created and activated my virtual env, it seems Jupyter didn't actually put me in the virtual env.</p>
<p>First I ran these two commands to create and activate the venv:</p>
<pre><code>!python -m venv first_venv
!source first_venv/bin/activate
</code></pre>
<p>Then when I run <code>pip install faker</code> it is still installed in a system folder like /usr/local/lib/... rather than under the virtual env lib folder first_venv/lib/pythonx.x/... (please uninstall it with <code>pip uninstall faker</code>, else you may not see what I described below)</p>
<p>If I want to install it in the virtual env, I have to run <code>!first_venv/bin/pip install faker</code>, which is ugly and tedious; furthermore, I can't import the faker module I installed this way.</p>
<p>So actually I am not in the "virtual env mode", I am still in the normal env mode in my jupyter notebook.</p>
<p>So, is there a way to activate my virtual env and then do everything in it from the notebook? If I can't, and the terminal is the only way, please let me know that too.</p>
<p>In short, how can I smoothly use pip to install and uninstall packages in the virtual env, with imports resolving from the virtual env first? If a package is not in the virtual env, I <strong>don't want</strong> to import it from the system library; I want every package I use self-contained in the virtual env.</p>
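<p>Each <code>!</code> command runs in its own throwaway subshell, so <code>!source first_venv/bin/activate</code> cannot change the notebook's environment; that is why pip kept installing system-wide. A common route (a sketch; run these in a real terminal) is to register the venv as its own Jupyter kernel, then use the <code>%pip</code> magic, which always targets the running kernel's environment:</p>

```shell
python -m venv first_venv
source first_venv/bin/activate
pip install ipykernel
python -m ipykernel install --user --name first_venv
# restart Jupyter, pick the "first_venv" kernel, then inside the notebook:
#   %pip install faker
```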
|
<python><jupyter-notebook><python-venv>
|
2024-05-09 06:25:11
| 1
| 1,504
|
cloudscomputes
|
78,452,624
| 1,609,428
|
how to use loc with a pipe in Pandas?
|
<p>Consider the following example</p>
<pre><code>import numpy as np
import pandas as pd
data = {
    'Group': ['A', 'B', 'C', 'D']*3,  # Repeating groups to fill the DataFrame
    'Timestamp': pd.date_range(start='2023-01-01', periods=12, freq='M'),  # Monthly frequency
    'Numeric': np.random.rand(12) * 100,  # Random numeric values scaled up
    'String': ['apple', 'banana', 'cherry', 'date']*3  # Repeating string entries
}
df = pd.DataFrame(data)
df.head()
Out[28]:
Group Timestamp Numeric String
0 A 2023-01-31 69.320654 apple
1 B 2023-02-28 1.667633 banana
2 C 2023-03-31 14.211651 cherry
3 D 2023-04-30 40.061005 date
4 A 2023-05-31 23.433903 apple
</code></pre>
<p>I know I can use chaining using pipe, which works really well. However, the following line of code, which tries to filter the <code>groupby</code> dataframe using <code>.loc</code> fails. What is the issue here? Isn't pipe just passing a dataframe (so <code>.loc</code> should work)?</p>
<pre><code>(df.groupby('Group')
   .pipe(lambda x: x.loc[x.Numeric < x.Numeric.max()]))
Traceback (most recent call last):
AttributeError: 'DataFrameGroupBy' object has no attribute 'loc'
</code></pre>
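<p><code>.pipe</code> on a groupby hands the <code>DataFrameGroupBy</code> object itself to the lambda, and that object has no <code>.loc</code>, hence the AttributeError. A sketch of the same filtering (drop each group's maximum row) using a groupwise transform on the original frame instead:</p>

```python
import pandas as pd

df = pd.DataFrame({"Group": ["A", "A", "B", "B"],
                   "Numeric": [1.0, 2.0, 3.0, 4.0]})

# transform("max") broadcasts each group's max back to its rows,
# so the comparison stays at DataFrame level where .loc works
out = df.loc[df["Numeric"] < df.groupby("Group")["Numeric"].transform("max")]
```
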
|
<python><pandas>
|
2024-05-09 06:21:29
| 2
| 19,485
|
ℕʘʘḆḽḘ
|
78,452,601
| 1,744,357
|
Add SHA1 to signxml python
|
<p>I am using the library <a href="https://xml-security.github.io/signxml" rel="nofollow noreferrer">signxml</a> to sign XML signatures for SAML authentication. One of our implementer partners requires that we send the signature in SHA1. The base configuration of XMLSigner does not support SHA1 because it has been deprecated because SHA1 is not secure. Unfortunately I still have to send it as SHA1 because the other implementer won't change their code base. I have read the library documentation and unsure how to force SHA1 support. If you call this code below, it errors out at this point in the code: <a href="https://github.com/XML-Security/signxml/blob/9f06f4314f1a0480e22992bbb8209a71bc581e05/signxml/signer.py#L120" rel="nofollow noreferrer">https://github.com/XML-Security/signxml/blob/9f06f4314f1a0480e22992bbb8209a71bc581e05/signxml/signer.py#L120</a></p>
<pre><code>signed_saml_root = XMLSigner(
    method=signxml.methods.enveloped,
    signature_algorithm="rsa-sha1",
    digest_algorithm="sha1",
    c14n_algorithm="http://www.w3.org/2001/10/xml-exc-c14n#",
).sign(saml_root, key=self.key, cert=self.cert, always_add_key_value=True)

verified_data = XMLVerifier().verify(signed_saml_root, x509_cert=self.cert).signed_xml
</code></pre>
<p>The documentation mentions doing the following for SHA1 deprecation: SHA1 based algorithms are not secure for use in digital signatures. They are included for legacy compatibility only and disabled by default. To verify SHA1 based signatures, use:</p>
<pre><code>XMLVerifier().verify(
expect_config=SignatureConfiguration(
signature_methods=...,
digest_algorithms=...
)
)
</code></pre>
<p>But that appears to cover verification only; I'm unsure how to make it work for signing. Can someone provide advice on how to get SHA1 working with the signxml library?</p>
|
<python><xml><saml><sha1>
|
2024-05-09 06:16:11
| 1
| 571
|
rocket_boomerang_19
|
78,452,494
| 4,115,031
|
How do I debug a remote gunicorn Python web app using Jetbrains Gateway?
|
<h2>Background</h2>
<p>I'm working as a programmer for a company that has a complicated setup of repos, so what they've done is set up an EC2 instance with all of the necessary repos and config, and I ssh into it to work on their Python (Flask) backend code. I've been using Jetbrains Gateway as my IDE (it runs an IDE instance on the remote server and then you connect to it with a local client interface, so it looks just like PyCharm but the actual code and IDE features are being executed on the remote server)</p>
<h2>Problem</h2>
<p>I can't figure out how to debug my code. I've used PyCharm's debug feature before but I'm feeling lost in this setup.</p>
<h2>What I've tried so far</h2>
<p>I looked at the "Edit Configurations" view and saw that there's a feature called "Python Debug Server", which requires that I specify the IP address of my local machine and a port it is listening on, which the remote EC2 instance will then make a TCP connection to. I don't have a static IP address and I'm not sure that my ISP allows incoming connections like that, so I was looking around online for a service that might help me with that, and saw that ngrok was mentioned in a few places.</p>
|
<python><pycharm><jetbrains-ide><ngrok><tunneling>
|
2024-05-09 05:45:10
| 1
| 12,570
|
Nathan Wailes
|
78,452,469
| 1,876,345
|
How to override the base resolver in pyyaml
|
<p>I have found several comments and a similar <a href="https://stackoverflow.com/a/42284826/1876345">question</a> on how to override the resolver.
<a href="https://github.com/yaml/pyyaml/issues/376#issuecomment-576821252" rel="nofollow noreferrer">https://github.com/yaml/pyyaml/issues/376#issuecomment-576821252</a></p>
<blockquote>
<p>@StrayDragon Actually you can change this behavior if you want by overriding the bool tag regex in the base resolver (<a href="https://github.com/yaml/pyyaml/blob/master/lib/yaml/resolver.py#L170-L175" rel="nofollow noreferrer">https://github.com/yaml/pyyaml/blob/master/lib/yaml/resolver.py#L170-L175</a>), then wire up your resolver instead of the default on a new loader instance (which gets passed to yaml.load().</p>
</blockquote>
<p>I understand the theory, but it is not working when I try to implement it.</p>
<p>This is my python code:</p>
<pre><code>import yaml
import re

class CustomLoader(yaml.FullLoader):
    pass

# Override the boolean tag regex
yaml.resolver.BaseResolver.add_implicit_resolver(
    'tag:yaml.org,2002:bool',
    re.compile(
        r'''^(?:yes|Yes|YES|no|No|NO
        |true|True|TRUE|false|False|FALSE
        |off|Off|OFF)$''',
        re.X
    ),
    list('yYnNtTfFoO')
)

# Read the workflow YAML file using the custom loader
with open(file_path, 'r') as file:
    yaml_content = yaml.load(file, Loader=CustomLoader)
</code></pre>
<p>And I thought it would work, but my original YML file</p>
<pre><code>name: cloud
on:
push:
workflow_dispatch:
concurrency:
group: "${{ github.ref }}"
</code></pre>
<p>is still being loaded as</p>
<pre><code>name: cloud-cicd-ghc-test/diego-test-migrate-to-github-ci
True:
push: null
workflow_dispatch: null
concurrency:
group: "${{ github.ref }}"
</code></pre>
<p>So I'm not sure how to do this process.</p>
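<p>The stock bool resolver on <code>FullLoader</code> still claims "on"/"off"/"yes"/"no" before any resolver added afterwards, because resolvers are looked up per first character and the original entries remain in the table. A sketch that instead <em>removes</em> the bool resolver for those first characters on a custom loader (true/false keep resolving to bool):</p>

```python
import yaml

class NoBoolOnLoader(yaml.FullLoader):
    pass

# copy the class-level resolver table so yaml.FullLoader itself is untouched
NoBoolOnLoader.yaml_implicit_resolvers = {
    key: val[:] for key, val in NoBoolOnLoader.yaml_implicit_resolvers.items()
}
for ch in "OoYyNn":
    NoBoolOnLoader.yaml_implicit_resolvers[ch] = [
        (tag, regexp)
        for tag, regexp in NoBoolOnLoader.yaml_implicit_resolvers.get(ch, [])
        if tag != "tag:yaml.org,2002:bool"
    ]

data = yaml.load("on:\n  push:\n", Loader=NoBoolOnLoader)
```

<p>With this loader, the GitHub-Actions-style <code>on:</code> key survives as the string "on" instead of becoming <code>True</code>.</p>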
|
<python><python-3.x><yaml><pyyaml>
|
2024-05-09 05:35:14
| 2
| 974
|
Diego
|
78,452,384
| 3,099,733
|
Why doesn't Annotated[str, T] work while Annotated[T, str] works well?
|
<p>I have a project where users need to use <code>Annotated[str, MyType()]</code> a lot. In order to simplify it, I tried to create a generic type:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar('T')
CustomUrl = Annotated[str, T]
</code></pre>
<p>But I get this error when I try to use it this way:</p>
<pre class="lang-py prettyprint-override"><code>CustomUrl[MyData()]
# TypeError: typing.Annotated[str, ~T] is not a generic class
</code></pre>
<p>I have no idea why this happens, and it works well when I reverse the order to</p>
<pre class="lang-py prettyprint-override"><code>CustomUrl[T, str]
</code></pre>
<p>But that's not what I want. Is there any way to make it work?</p>
<p>My use case is to have user declare a dataclass and it will generate a UI schema automatically, for example</p>
<pre class="lang-py prettyprint-override"><code># pseudo code
@dataclass
class TaskArgs:
input_text: Annotated[str, {'multiline': True}]
input_file: Annotated[str, {'multifiles': True}]
</code></pre>
<h2>Update</h2>
<p>I have tried the following answer:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated, TypeVar
T = TypeVar('T')
class CustomeUrl(Annotated[str, T]):
...
c: CustomeUrl = 's'
</code></pre>
<p>The problem is that the last expression raises the error:</p>
<pre><code>Expression of type "Literal['s']" is incompatible with declared type "CustomeUrl"
"Literal['s']" is incompatible with "CustomeUrl"
</code></pre>
<p>The reason I chose <code>Annotated</code> is to keep type checking valid, so I don't think this solves my issue.</p>
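<p>For what it's worth, the underlying reason seems to be that <code>Annotated</code> is generic only in its first (type) argument; a <code>TypeVar</code> in the metadata slot does not make the alias subscriptable. One hedged workaround is a small factory function that builds the <code>Annotated</code> form at call time (the <code>Meta</code> class below is a hypothetical stand-in for <code>MyType</code>; note that static checkers may not accept a call expression in an annotation, even though it works at runtime):</p>

```python
from typing import Annotated, get_args, get_type_hints

class Meta:
    """Hypothetical stand-in for the question's MyType/MyData marker."""
    def __init__(self, multiline: bool = False):
        self.multiline = multiline

def CustomUrl(meta: Meta):
    # Annotated is generic only in its *type* slot, so the metadata
    # has to be supplied here, at call time.
    return Annotated[str, meta]

class TaskArgs:
    input_text: CustomUrl(Meta(multiline=True))

hints = get_type_hints(TaskArgs, include_extras=True)
marker = hints["input_text"].__metadata__[0]
```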
|
<python><python-typing>
|
2024-05-09 05:07:17
| 1
| 1,959
|
link89
|
78,452,284
| 240,443
|
KeyboardInterrupt in asyncio.TaskGroup
|
<p>The docs on <a href="https://docs.python.org/3/library/asyncio-task.html#task-groups" rel="noreferrer">Task Groups</a> say:</p>
<blockquote>
<p>Two base exceptions are treated specially: If any task fails with <code>KeyboardInterrupt</code> or <code>SystemExit</code>, the task group still cancels the remaining tasks and waits for them, but then the initial <code>KeyboardInterrupt</code> or <code>SystemExit</code> is re-raised instead of <code>ExceptionGroup</code> or <code>BaseExceptionGroup</code>.</p>
</blockquote>
<p>This makes me believe, given the following code:</p>
<pre><code>import asyncio
async def task():
await asyncio.sleep(10)
async def run() -> None:
try:
async with asyncio.TaskGroup() as tg:
t1 = tg.create_task(task())
t2 = tg.create_task(task())
print("Done")
except KeyboardInterrupt:
print("Stopped")
asyncio.run(run())
</code></pre>
<p>running and hitting <kbd>Ctrl-C</kbd> should result in printing <code>Stopped</code>; but in fact, the exception is not caught:</p>
<pre><code>^CTraceback (most recent call last):
File "<python>/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<python>/asyncio/base_events.py", line 685, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "<module>/__init__.py", line 8, in run
async with asyncio.TaskGroup() as tg:
File "<python>/asyncio/taskgroups.py", line 134, in __aexit__
raise propagate_cancellation_error
File "<python>/asyncio/taskgroups.py", line 110, in __aexit__
await self._on_completed_fut
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 148, in _get_module_details
File "<frozen runpy>", line 112, in _get_module_details
File "<module>/__init__.py", line 15, in <module>
asyncio.run(run())
File "<python>/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "<python>/asyncio/runners.py", line 123, in run
raise KeyboardInterrupt()
KeyboardInterrupt
</code></pre>
<p>What am I missing? What is the correct way of detecting <code>KeyboardInterrupt</code>?</p>
|
<python><python-3.x><exception><python-asyncio><keyboardinterrupt>
|
2024-05-09 04:31:54
| 2
| 199,494
|
Amadan
|
78,452,069
| 3,453,768
|
deepcopy fails with object that contains objects that contain sets that point to each other with __hash__
|
<p>This MWE builds a <code>Network</code> class and a <code>Node</code> class. A <code>Node</code> has attributes called <code>predecessors</code> and <code>successors</code>, each of which is a set that will contain other <code>Node</code>s. Functions in the two classes manage the structure of the network.</p>
<pre class="lang-py prettyprint-override"><code>from copy import deepcopy
class Network:
def __init__(self):
self.nodes = []
def add_node(self, node):
self.nodes.append(node)
def add_successor(self, node, successor_node):
node.add_successor(successor_node)
successor_node.add_predecessor(node)
self.add_node(successor_node)
class Node:
def __init__(self, index):
self.index = index
self.predecessors = set()
self.successors = set()
def __hash__(self):
return self.index
def add_successor(self, successor):
self.successors.add(successor)
def add_predecessor(self, predecessor):
self.predecessors.add(predecessor)
network = Network()
node1 = Node(1)
node2 = Node(2)
network.add_node(node1)
network.add_successor(node1, node2)
deepcopy(network)
</code></pre>
<p>This code fails on the last line (<code>deepcopy(network)</code>) with the error:</p>
<pre><code> ... etc ...
File "/path/to/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/path/to/lib/python3.8/copy.py", line 264, in _reconstruct
y = func(*args)
File "/path/to/deepcopy_mwe.py", line 22, in __hash__
return self.index
AttributeError: 'Node' object has no attribute 'index'
</code></pre>
<p>If I do any one of the following things, the error goes away:</p>
<ul>
<li>Replace the <code>predecessors</code> and <code>successors</code> attributes with lists instead of sets (initialize them to <code>[]</code> and use <code>append()</code> instead of <code>add()</code> in <code>add_successor()</code> and <code>add_predecessor()</code>).</li>
<li>Remove the line <code>successor_node.add_predecessor(node)</code> in <code>Network.add_successor()</code> (so that the nodes don't both point at each other).</li>
<li>Remove the <code>__hash__()</code> function.</li>
</ul>
<p>What's going on here, and how can I fix it?</p>
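<p>One fix that seems to work: the error occurs because <code>deepcopy</code> reconstructs a <code>Node</code> <em>before</em> restoring its attributes, and copying the cyclic sets forces <code>__hash__</code> to run on that half-built object (no <code>index</code> yet). Giving <code>Node</code> an explicit <code>__deepcopy__</code> that sets <code>index</code> first and registers the copy in <code>memo</code> before recursing avoids this; a sketch on the node class alone:</p>

```python
from copy import deepcopy

class Node:
    def __init__(self, index):
        self.index = index
        self.predecessors = set()
        self.successors = set()

    def __hash__(self):
        return self.index

    def __deepcopy__(self, memo):
        # Make the copy hashable (index set) *before* recursing, and
        # register it in memo so the cycle resolves to this copy.
        new = Node(self.index)
        memo[id(self)] = new
        new.predecessors = {deepcopy(p, memo) for p in self.predecessors}
        new.successors = {deepcopy(s, memo) for s in self.successors}
        return new

node1 = Node(1)
node2 = Node(2)
node1.successors.add(node2)
node2.predecessors.add(node1)

copy1 = deepcopy(node1)
copy2 = next(iter(copy1.successors))
```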
|
<python><hash><set><deep-copy>
|
2024-05-09 03:09:00
| 0
| 2,397
|
LarrySnyder610
|
78,451,928
| 19,048,408
|
How do I configure Python "black" to sensibly format Polars?
|
<p>How do I configure Black to sensibly auto-format Python code for the Polars library?</p>
<p>With the default settings, it likes to left-align <code>.select</code>/<code>.with_columns</code> calls with many arguments. I want the following code to not get mangled. Is there a standard configuration?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = df.select(
user_ids=pl.col("log_entry").str.extract(r"User ID: \[([\d ,]+)\]"),
session_times=pl.col("log_entry").str.extract(
r"Session Time: \[([\d :]+)\]"
),
event_type=pl.col("log_entry")
.str.extract(r"Event Type: ([\w ]+)")
.cast(pl.Utf8),
event_duration=pl.col("log_entry")
.str.extract(r"Duration: (\d+)")
.cast(pl.Int32),
)
</code></pre>
<p>This is roughly how I think it should be formatted, if wrapping at 79 chars:</p>
<pre class="lang-py prettyprint-override"><code>df = df.select(
user_ids=pl.col("log_entry").str.extract(
r"User ID: \[([\d ,]+)\]"
),
session_times=pl.col("log_entry").str.extract(
r"Session Time: \[([\d :]+)\]"
),
event_type=pl.col("log_entry")
.str.extract(r"Event Type: ([\w ]+)")
.cast(pl.Utf8),
event_duration=pl.col("log_entry")
.str.extract(r"Duration: (\d+)")
.cast(pl.Int32),
)
</code></pre>
|
<python><python-polars><python-black>
|
2024-05-09 02:20:08
| 0
| 468
|
HumpbackWhale194
|
78,451,861
| 2,457,160
|
Create new column based on multiple columns and some conditions
|
<p>I have a data frame with 2-level indexed columns (sample data below):</p>
<pre><code> metric grp Avg P95
mean ci_low ci_up mean ci_low ci_up
0 a CONTROL 8.202862 8.100596 8.306985 17.122063 16.832979 17.400000
1 a EXPERIMENT 8.243680 8.141428 8.351050 16.709375 16.383333 17.043500
2 a diff 0.040819 -0.102122 0.185837 -0.412688 -0.855854 0.050104
3 a rel_diff 0.005016 -0.012299 0.022944 -0.024022 -0.049377 0.002982
4 b CONTROL 15.513669 15.316762 15.720606 31.893576 31.357300 32.600350
5 b EXPERIMENT 15.486001 15.293760 15.672843 31.317389 30.808775 31.839000
6 b diff -0.027668 -0.300167 0.251497 -0.576186 -1.450138 0.223063
7 b rel_diff -0.001740 -0.019249 0.016368 -0.017967 -0.044785 0.007031
</code></pre>
<p>Result I'd like to produce:</p>
<ul>
<li>create a new column in the format <code>mean [ci_low, ci_up]</code> for each top-level column <code>Avg</code>, <code>P95</code></li>
<li>for <code>rel_diff</code> rows, reformat all columns as percentages, e.g.: <code>0.005016 --> 0.5%</code>.</li>
<li>for the remaining rows, round all columns to 3 decimal places</li>
</ul>
<p>What is the pythonic way to do this? Thanks in advance for tips!</p>
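<p>Not an authoritative recipe, but one possible shape, sketched on a cut-down frame with the same two-level columns (assuming <code>metric</code> and <code>grp</code> are level-0 columns with an empty level-1 label):</p>

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples([
    ("metric", ""), ("grp", ""),
    ("Avg", "mean"), ("Avg", "ci_low"), ("Avg", "ci_up"),
    ("P95", "mean"), ("P95", "ci_low"), ("P95", "ci_up"),
])
df = pd.DataFrame([
    ["a", "CONTROL",  8.202862,  8.100596, 8.306985,
     17.122063, 16.832979, 17.400000],
    ["a", "rel_diff", 0.005016, -0.012299, 0.022944,
     -0.024022, -0.049377, 0.002982],
], columns=cols)

def fmt(row):
    # rel_diff rows become percentages; everything else is rounded.
    as_pct = row[("grp", "")] == "rel_diff"
    out = {}
    for top in ("Avg", "P95"):
        m, lo, up = (row[(top, k)] for k in ("mean", "ci_low", "ci_up"))
        if as_pct:
            out[top] = f"{m:.1%} [{lo:.1%}, {up:.1%}]"
        else:
            out[top] = f"{m:.3f} [{lo:.3f}, {up:.3f}]"
    return pd.Series(out)

formatted = pd.concat(
    [df[[("metric", ""), ("grp", "")]].droplevel(1, axis=1),
     df.apply(fmt, axis=1)],
    axis=1,
)
```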
|
<python><pandas>
|
2024-05-09 01:54:27
| 1
| 3,378
|
SixSigma
|
78,451,714
| 10,789,207
|
Logging to Queue during multiprocessing fails
|
<p><strong>TL;DR: Why is there no handler listed for the logger at the (inside run) print statement in the console view?</strong></p>
<p>I'm looking for an explanation of why this logging scheme is not working properly.</p>
<p>I'm following the recipe (pretty closely) for logging multiple processes to the same log file found <a href="https://docs.python.org/3/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes" rel="nofollow noreferrer">here</a> in the python dox. The main difference in the code below is that I'm attempting to implement a Worker class instead of just a function. Perhaps I can switch back to function, but it would fit better in the larger scheme of the project as a class. Anyhow...</p>
<p>When following the basic guidance of the dox, I can get the logging listener function up and running fine, but things fall apart when the <code>worker</code> tries to log. I can see that the queue handler is added to the worker logger during the configuration function, called during the <code>__init__</code> and the logging statement in the init <em>is</em> captured in the log as expected.</p>
<p>However, inside the <code>run()</code> function, the wheels fall off and it is as if the logger that was created has forgotten about its handlers.</p>
<p>I'm guessing there is some nuance to how the <code>multiprocessing</code> processes are initiated that causes this. It took some time to troubleshoot, and I'd be interested in an explanation of why this doesn't work, or a better way to configure a logger for a worker <code>Process</code> that is a class.</p>
<h3>Code:</h3>
<pre><code>import logging
import random
import sys
import time
from logging import handlers
from multiprocessing import Queue, Process
from queue import Empty, Full
def worker_configurer(log_queue, idx):
logger = logging.getLogger(".".join(("A", "worker", str(idx))))
h = handlers.QueueHandler(log_queue)
logger.addHandler(h)
logger.setLevel(logging.INFO)
print(
f"configured worker {idx} with logger {logger.name} with handlers: {logger.handlers.copy()}"
)
return logger
class Worker(Process):
worker_idx = 0
def __init__(self, work_queue, log_queue, worker_configurer, **kwargs):
super(Worker, self).__init__()
self.idx = Worker.worker_idx
Worker.worker_idx += 1
self.logger = worker_configurer(log_queue, self.idx)
print(f"self.logger handlers during init: {self.logger.handlers.copy()}")
self.logger.info(f"worker {self.idx} initialized") # <-- does show up in log
self.work_queue = work_queue
def run(self):
print(
f"(inside run): self.logger name: {self.logger.name}, handlers:"
f" {self.logger.handlers.copy()}"
)
self.logger.info(f"worker {self.idx} started!") # <-- will NOT show up in log
while True:
job_duration = self.work_queue.get()
if job_duration is None:
print(f"Worker {self.idx} received stop signal")
break
time.sleep(job_duration)
# book the job...
print(f"worker {self.idx} finished job of length {job_duration}")
self.logger.info(f"worker {self.idx} finished job of length {job_duration}")
def listener_configurer():
logging.basicConfig(
filename="mp_log.log",
filemode="a",
format="%(asctime)s | %(name)s | %(levelname)s | %(message)s",
datefmt="%d-%b-%y %H:%M:%S",
level=logging.INFO,
)
def listener_process(queue, configurer):
configurer() # redundant (for now), but harmless
logger = logging.getLogger("A")
while True:
try:
record = queue.get(timeout=5)
print("bagged a message from the queue")
if (
record is None
): # We send this as a sentinel to tell the listener to quit.
break
logger = logging.getLogger(record.name)
logger.handle(record) # No level or filter logic applied - just do it!
except Empty:
pass
except Exception:
import sys, traceback
print("Whoops! Problem:", file=sys.stderr)
traceback.print_exc(file=sys.stderr)
if __name__ == "__main__":
listener_configurer()
logger = logging.getLogger("A")
logger.warning("Logger Active!")
work_queue = Queue(5)
log_queue = Queue(100)
# start the logging listener
listener = Process(target=listener_process, args=(log_queue, listener_configurer))
listener.start()
# make workers
num_workers = 2
workers = []
for i in range(num_workers):
w = Worker(
work_queue,
log_queue=log_queue,
worker_configurer=worker_configurer,
)
w.start()
workers.append(w)
logger.info(f"worker {i} created")
num_jobs = 10
jobs_assigned = 0
while jobs_assigned < num_jobs:
try:
work_queue.put(random.random() * 2, timeout=0.1)
jobs_assigned += 1
except Full:
pass
print("Call it a day and send stop sentinel to everybody")
for i in range(num_workers):
work_queue.put(None)
log_queue.put(None)
for w in workers:
w.join()
print("another worker retired!")
listener.join()
</code></pre>
<h3>Console:</h3>
<pre><code>configured worker 0 with logger A.worker.0 with handlers: [<QueueHandler (NOTSET)>]
self.logger handlers during init: [<QueueHandler (NOTSET)>]
configured worker 1 with logger A.worker.1 with handlers: [<QueueHandler (NOTSET)>]
self.logger handlers during init: [<QueueHandler (NOTSET)>]
bagged a message from the queue
bagged a message from the queue
(inside run): self.logger name: A.worker.1, handlers: []
(inside run): self.logger name: A.worker.0, handlers: []
worker 0 finished job of length 1.2150712953970373
worker 1 finished job of length 1.2574239731920005
worker 1 finished job of length 0.11736058130132943
Call it a day and send stop sentinel to everybody
worker 0 finished job of length 0.4843796181316009
worker 1 finished job of length 1.048915894468737
bagged a message from the queue
worker 0 finished job of length 1.2749454212499574
worker 0 finished job of length 0.7298640313585205
worker 1 finished job of length 1.6144333153092076
worker 1 finished job of length 1.219077068714904
Worker 1 received stop signal
worker 0 finished job of length 1.561689295025705
Worker 0 received stop signal
another worker retired!
another worker retired!
</code></pre>
<h3>Log File:</h3>
<pre><code>08-May-24 17:33:15 | A | WARNING | Logger Active!
08-May-24 17:33:15 | A.worker.0 | INFO | worker 0 initialized
08-May-24 17:33:15 | A | INFO | worker 0 created
08-May-24 17:33:15 | A.worker.1 | INFO | worker 1 initialized
08-May-24 17:33:15 | A | INFO | worker 1 created
08-May-24 17:33:15 | A.worker.0 | INFO | worker 0 initialized
08-May-24 17:33:15 | A.worker.1 | INFO | worker 1 initialized
</code></pre>
|
<python><logging><python-multiprocessing>
|
2024-05-09 00:43:23
| 1
| 11,992
|
AirSquid
|
78,451,428
| 1,516,389
|
Python Accelerate package thrown error when using Trainer from Transformers
|
<p>I'm trying out this Hugging Face <a href="https://huggingface.co/learn/nlp-course/en/chapter3/3?fw=pt" rel="nofollow noreferrer">tutorial</a></p>
<p>I'm trying to use a trainer to train my model. The code errors out at this point:</p>
<pre><code>from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, DataCollatorWithPadding, TrainingArguments, Trainer
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
raw_datasets = load_dataset("glue", "mrpc")
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
training_args = TrainingArguments("test-trainer")
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
#The above code works upto here
#The following line fails
trainer = Trainer(
model,
training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
tokenizer=tokenizer,
)
</code></pre>
<p>The error is displayed as:</p>
<pre><code>File "tutorial.py", line 21, in <module>
trainer = Trainer(
^^^^^^^^
File "/opt/miniconda3/envs/py3env/lib/python3.12/site-packages/transformers/trainer.py", line 388, in __init__
self.create_accelerator_and_postprocess()
File "/opt/miniconda3/envs/py3env/lib/python3.12/site-packages/transformers/trainer.py", line 4364, in create_accelerator_and_postprocess
self.accelerator = Accelerator(**args)
^^^^^^^^^^^^^^^^^^^
TypeError: Accelerator.__init__() got an unexpected keyword argument 'use_seedable_sampler'
</code></pre>
<p>Versions:</p>
<pre><code>Python: 3.12.3
Transformer: 4.40.2
Datasets: 2.19.1
Accelerate: 0.21.0
</code></pre>
|
<python><huggingface-transformers>
|
2024-05-08 22:31:42
| 1
| 636
|
raka
|
78,451,346
| 268,581
|
Testing against local package before pushing update to PyPI
|
<p>Let's say I have a package <code>abc</code> that I've published to PyPI.</p>
<p>I have other local projects that use <code>abc</code>. Of course, I'd like to be able to update <code>abc</code> locally and test against this version before pushing out to PyPI.</p>
<p>Here's one approach I'm using:</p>
<ul>
<li>Make sure package is not in <code>pip list</code> output. I.e. uninstall if necessary.</li>
<li>Set <code>PYTHONPATH</code> to include the path to the local package</li>
</ul>
<p>Then, if I want to test against the officially published <code>abc</code> package later, I can have another <code>venv</code> environment in which to install <code>abc</code> and test again there.</p>
<p>The benefit to the <code>PYTHONPATH</code> approach is that any change made to <code>abc</code> locally will immediately be available to any project importing it.</p>
<h1>Question</h1>
<p>Is this the recommended workflow? Is there another recommended way?</p>
|
<python><pypi><python-packaging>
|
2024-05-08 22:05:28
| 0
| 9,709
|
dharmatech
|
78,451,318
| 26,416
|
How to make Tk widgets flow from left to right and then top to bottom?
|
<p>I have to display dozens of radio buttons (grouped by three) in a frame and would like them to flow like text: from left to right and when the line is full, create a second line below.<br />
How would you do this?</p>
<p>EDIT: after some nice comments I got this:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter
root = tkinter.Tk()
root.title('Elementario')
root.attributes("-fullscreen", True)
text = tkinter.Text(root, wrap=tkinter.WORD)
text.pack(fill=tkinter.BOTH)
modules = [
{'name':'reset_display'},
{'name':'number_to_bin_str'},
{'name':'number_to_dec_str'},
{'name':'number_to_hex_str'},
{'name':'list_to_number'},
{'name':'list_to_bin_str'},
{'name':'list_to_dec_str'}]
mod_vars = {}
for mod in modules:
name = mod['name']
rb_frame = tkinter.Frame(text)
rb_frame.pack()
label = tkinter.Label(rb_frame, text=name)
label.grid(row=0, column=0)
mod_vars[name] = tkinter.StringVar(value="0")
rb_T = tkinter.Radiobutton(rb_frame, text="T",
variable=mod_vars[name], value="T",)
rb_T.grid(row=0, column=2)
rb_U = tkinter.Radiobutton(rb_frame, text="U",
variable=mod_vars[name], value="U")
rb_U.grid(row=0, column=3)
root.mainloop()
</code></pre>
<p>The radio buttons are centered in the text widget and I could not find a way to make them flow from left to right then top to bottom.</p>
|
<python><user-interface><tkinter>
|
2024-05-08 21:58:01
| 1
| 1,604
|
Gra
|
78,451,312
| 14,250,641
|
Reorder a stacked barplot in altair
|
<p>So I am making an interactive plot (which is why I'm using altair) and I want to reorder the stacks within each bar to match the legend. I've tried reordering the df and using the 'sort' parameter, but nothing works. Any suggestions would be extremely helpful!</p>
<pre><code>color_scale = alt.Scale(
domain=['Strongly disagree', 'Somewhat disagree', 'Neither agree nor disagree', 'Somewhat agree', 'Strongly agree'],
range=['#FF0000', '#FF6347', '#A9A9A9', '#87CEEB', '#4682B4']
)
# Define the brush selection
brush = alt.selection_interval(encodings=['x'])
# Define the plot_bar chart and filter the data according to the brush selection
plot_bar = alt.Chart(new_df).mark_bar().encode(
x='value',
y=alt.Y("question:N", axis=alt.Axis(title='Question')), # Use the specified y-axis title
color=alt.Color("type:N", title="Response", scale=color_scale, sort=desired_order), # Specify the desired order for the legend
).transform_filter(
brush
).properties(
width=400, height=250 # Adjust the width of the chart as needed
)
plot_bar
</code></pre>
<p><a href="https://i.sstatic.net/cDDBVPgY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cDDBVPgY.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe><plot><altair>
|
2024-05-08 21:56:13
| 1
| 514
|
youtube
|
78,451,289
| 20,898,396
|
Match type: Irrefutable pattern is allowed only for the last case statement
|
<p>I was trying to use the <code>match</code> statement with a generic, but it wasn't working, so I tried <code>match type()</code>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar
from pydantic import BaseModel
class LetterList(BaseModel):
letter: str
class NumberList(BaseModel):
numbers: list[int]
class BulletPointsList(BaseModel):
bullet: str
ListType = LetterList | NumberList | BulletPointsList | None
T = TypeVar("T", bound=ListType)
class ListPattern(Generic[T]):
def test(self, list_type: T, other_list_type: ListType):
match other_list_type:
case T(): # "Unbound" is not a class
pass
case BulletPointsList():
pass
case _:
pass
def test2(self, list_type: T, other_list_type: ListType):
match type(other_list_type):
case T: # Irrefutable pattern is allowed only for the last case statement
pass
case BulletPointsList: # Irrefutable pattern is allowed only for the last case statement
pass
case _:
pass
def test3(self, list_type: T, other_list_type: ListType):
# nicer way to write this?
if type(list_type) == type(other_list_type):
pass
elif type(other_list_type) == BulletPointsList:
pass
else:
pass
</code></pre>
<p>I am confused by why matching a type is an irrefutable pattern, when the following works:</p>
<pre class="lang-py prettyprint-override"><code>a = 3
match a:
case 1: # a == 1, similar to type(other_list_type) == BulletPointsList ?
print("hi")
case 2:
print("hello")
case _:
print("_")
</code></pre>
<hr />
<p>It seems to have to do with dotted names; see this <a href="https://stackoverflow.com/a/77829220/20898396">answer</a>:</p>
<pre class="lang-py prettyprint-override"><code>import builtins
from typing import Any
class ListPattern(Generic[T]):
def test(self, other_list_type: Any):
match type(other_list_type):
case builtins.str:
print("str")
case _:
print("other")
ListPattern().test(
other_list_type=""
)
</code></pre>
<p>Can this be done with our own types?</p>
<p>Edit:</p>
<pre><code>"BulletPointsList" is not accessedPylance
Irrefutable pattern is allowed only for the last case statementPylance
(variable) BulletPointsList: type[LetterList] | type[NumberList] | type[BulletPointsList] | type[None]
</code></pre>
<p>It seems to think that <code>BulletPointsList</code> is a variable.</p>
|
<python>
|
2024-05-08 21:47:31
| 0
| 927
|
BPDev
|
78,451,219
| 22,407,544
|
'The request signature we calculated does not match the signature you provided' in DigitalOcean Spaces
|
<p>My django app saves user-uploaded files to my s3 bucket in DigitalOcean Spaces (using django-storages[s3], which is based on amazon-s3) and the path to the file is saved in my database. However, when I click the url located in the database it leads me to a page with this error:
<code>The request signature we calculated does not match the signature you provided. Check your key and signing method.</code></p>
<p>The actual url, for example, looks something like this:
<code>https://my-spaces.nyc3.digitaloceanspaces.com/media/uploads/Recording_2.mp3?AWSAccessKeyId=DO009ABCDEFGH&Signature=Y9tn%2FTZa6sVlGGZSU77tA%3D&Expires=1604202599</code>. Ideally the url saved should be <code>https://my-spaces.nyc3.digitaloceanspaces.com/media/uploads/Recording_2.mp3</code></p>
<p>This actually also impacts other parts of my project because the url will be accessed later using <code>requests</code> but because of this error I get <code>status [402]</code>.</p>
<p>My settings.py is this:</p>
<pre><code>AWS_ACCESS_KEY_ID = 'DO009ABCDEFGH'
AWS_SECRET_ACCESS_KEY = 'Nsecret_key'
AWS_STORAGE_BUCKET_NAME = 'bucket_name'
AWS_DEFAULT_ACL = 'public-read'
AWS_S3_ENDPOINT_URL = 'https://transcribe-spaces.nyc3.digitaloceanspaces.com/'
AWS_S3_OBJECT_PARAMETERS = {
'CacheControl': 'max-age=86400'
}
AWS_MEDIA_LOCATION = 'media/'
PUBLIC_MEDIA_LOCATION = 'media/'
MEDIA_URL = '%s%s' % (AWS_S3_ENDPOINT_URL, AWS_MEDIA_LOCATION)
DEFAULT_FILE_STORAGE = 'mysite.storage_backends.MediaStorage'
</code></pre>
<p>The url that is saved contains the Access Key, the Signature that was used to write the file to the bucket and a timeout. I want all of that to not be there when the url to the file is saved in the database. I've tried to edit the <code>MEDIA_URL</code>, <code>AWS_STORAGE_BUCKET_NAME</code> and other settings but they either caused errors that broke the app or did nothing at all.</p>
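<p>If a plain, unsigned URL is the goal, the relevant django-storages setting appears to be <code>AWS_QUERYSTRING_AUTH</code>: it defaults to <code>True</code>, which is what appends the <code>AWSAccessKeyId</code>/<code>Signature</code>/<code>Expires</code> parameters. A hedged sketch, assuming the objects are publicly readable (as the <code>public-read</code> ACL above suggests):</p>

```python
# django-storages: generate plain URLs instead of presigned ones.
# Only safe when the objects themselves are readable without auth.
AWS_QUERYSTRING_AUTH = False
```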
|
<python><django><amazon-s3><digital-ocean>
|
2024-05-08 21:26:48
| 1
| 359
|
tthheemmaannii
|
78,451,100
| 16,717,009
|
What is the type hint for a descriptor?
|
<p>This is a new question related to <a href="https://stackoverflow.com/questions/78450557">How can I pass a namedtuple attribute to a method without using a string?</a>.
How would I type hint the <code>column</code> parameter in the method <code>attempt_access</code>? It can be either an <code>int</code> or an attribute of a namedtuple.</p>
<pre><code>
from typing import NamedTuple
class Record(NamedTuple):
id: int
name: str
age: int
class NamedTupleList:
def __init__(self, data):
self._data = data
def attempt_access(self, row: int, column) -> any:
r = self._data[row]
try:
val = r[column]
except TypeError:
val = column.__get__(r)
return val
data = [Record(1, 'Bob', 30),
Record(2, 'Carol', 25),
Record(3, 'Ted', 29),
Record(4, 'Alice', 28),
]
class_data = NamedTupleList(data)
print('attempt_access by index')
v = class_data.attempt_access(0, 2)
print(v)
print('attempt_access by name')
v = class_data.attempt_access(0, Record.age)
print(v)
</code></pre>
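<p>One hedged option: <code>Record.age</code> is a <code>_tuplegetter</code>, i.e. a descriptor, and there is no public concrete type to name, so a structural <code>typing.Protocol</code> with <code>__get__</code> (unioned with <code>int</code>) captures the contract. The <code>SupportsGet</code> name below is illustrative, not standard:</p>

```python
from typing import Any, NamedTuple, Protocol, runtime_checkable

@runtime_checkable
class SupportsGet(Protocol):
    """Structural type for descriptors: anything with __get__."""
    def __get__(self, obj: Any, objtype: Any = None, /) -> Any: ...

class Record(NamedTuple):
    id: int
    name: str
    age: int

def attempt_access(row: Record, column: "int | SupportsGet") -> Any:
    try:
        return row[column]          # int index
    except TypeError:
        return column.__get__(row)  # descriptor, e.g. Record.age

r = Record(1, "Bob", 30)
```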
|
<python><python-typing>
|
2024-05-08 20:52:07
| 1
| 343
|
MikeP
|
78,451,024
| 2,662,302
|
Python Polars issue with lazy evaluation
|
<p>I have two dictionaries with keys string and values polars expresions.</p>
<ul>
<li>factor_query_dict</li>
<li>currency_factor_query_dict</li>
</ul>
<p>And I'm doing this:</p>
<pre class="lang-py prettyprint-override"><code>factor_holdings = holdings.lazy().with_columns(
[
pl.coalesce(
pl.when(polars_query).then(pl.lit(factor, dtype=pl.String))
for factor, polars_query in factor_query_dict.items()
).alias("factor"),
pl.coalesce(
pl.when(polars_query).then(pl.lit(factor, dtype=pl.String))
for factor, polars_query in currency_factor_query_dict.items()
).alias("currency_factor"),
]
)
</code></pre>
<p>When I call <code>.collect()</code>, the first column is populated but the second is not.
When I avoid the lazy part, both columns are populated.</p>
<p>What could be the reason?</p>
<p>I tried:</p>
<ul>
<li>I tried generating a minimal example, but in that case it worked. The only difference in the execution plan was that the string literals in the expressions were <code>Utf8(bla)</code> there, while in my case they were <code>String(bla)</code>.</li>
<li>I tried casting the pl.lit with both dtype=pl.Utf8 and pl.String and the result is the same.</li>
<li>When printing the execution plan, it says for example <code>pl.col("A").is_in(Series)</code>. How can I see the values in the Series?</li>
</ul>
|
<python><python-polars>
|
2024-05-08 20:29:57
| 0
| 505
|
rlartiga
|
78,450,904
| 10,319,707
|
Do either Python or AWS Glue provide an alternative to .NET's SqlBulkCopy?
|
<p>I am porting an old SSIS package to AWS Glue. The package runs daily. In several steps in this package, I take data from one table on one Microsoft SQL Server and copy all of it to an empty table of identical schema on another Microsoft SQL Server. I wish to replicate these steps in AWS Glue. This would be rather easy if I could write code in C# or <a href="https://docs.dbatools.io/Copy-DbaDbTableData.html" rel="nofollow noreferrer">PowerShell's dbatools</a> because, like SSIS, they all use <code>SqlBulkCopy</code> either directly or indirectly. Unfortunately, I am under the following restrictions:</p>
<ol>
<li>If I am using a system where I can pick a language and one of the options is Python, then I <strong>must</strong> use either Python or SQL embedded in Python. I believe that IronPython is not an option.</li>
<li>My copying must get the same or better performance as SSIS. In other words, I need <code>SqlBulkCopy</code> or an equivalent.</li>
<li>AWS Glue must be involved. It is allowed to call other services, e.g. Lambda and Step Functions.</li>
</ol>
<p>With these restrictions in mind, what options do I have? Do either Python or AWS Glue provide an alternative to .NET's <code>SqlBulkCopy</code>?</p>
|
<python><sql-server><aws-glue><bulkinsert><sqlbulkcopy>
|
2024-05-08 19:57:47
| 0
| 1,746
|
J. Mini
|
78,450,756
| 6,569,899
|
How to optimize a bulk query to redis in django - hiredis
|
<p>I am porting a rest/graphql api from a Java project to Django in Python. We are using redis in both. We have one endpoint that is rather large (returns several MB). In this endpoint we construct a key and if that key exists in redis we return that data and skip past the other logic in the endpoint. I have deployed the api to a development environment. The Java version of this endpoint returns data between 4-5 seconds faster than the python version. I recognize some of this is that Java is a compiled language vs Python which is interpreted. I'm aware that there are a host of other differences, but I'm trying to see if there is any low hanging fruit that I can pick to speed up my Python api.</p>
<p>I ran across an optimized library named hiredis. The website says the library speeds up multi-bulk replies, which is my use case. I repeatedly hit an endpoint that returns a large amount of data. Hiredis is supposedly a parser written in C. I have redis-py (v4.5.4) and django-redis installed. I read that I need to <code>pip install hiredis</code>. One source says hiredis will automatically be detected and just work in my environment but another says I need to add a <code>PARSER_CLASS</code> property to my <code>CACHES</code> setting like so:</p>
<pre><code>CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': 'redis://127.0.0.1:6379/1',
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
'PARSER_CLASS': 'redis.connection.HiredisParser',
}
}
}
</code></pre>
<p>I have tried both ways and I do not see a speed up. I'm certain the data is being cached in redis and the key is found. The data is returning from redis and skipping over the rest of the logic. Here are my questions</p>
<ol>
<li>Am I able to use hiredis with Django?</li>
<li>What do I need to do to enable it?</li>
<li>Is there a way I can verify that the hiredis parser is being used?</li>
<li>Are there any other ways I can speed up my redis connection?</li>
</ol>
|
<python><django><redis><hiredis><django-redis>
|
2024-05-08 19:19:10
| 0
| 2,431
|
afriedman111
|
78,450,623
| 9,659,840
|
how to check if df column contains a map key and if contains, put the corresponding value in a new column in pyspark?
|
<p>"I have a DataFrame with records stored in a particular column. I want to compare each record in that column against a predefined map. If a record contains any of the keys in the map, I want to populate a new column with the corresponding value associated with that key in the map."</p>
|
<python><pyspark><databricks>
|
2024-05-08 18:45:38
| 0
| 469
|
UC57
|
78,450,597
| 10,634,126
|
Creating a new DataFrame column from application of function to multiple columns in groupby
|
<p>I have a DataFrame of population counts by combination of categorical demographic features and date, with some missing values (consistent across all combos) per date constituting gaps in the data.</p>
<p>I am attempting to:</p>
<ol>
<li>group by all demographic features</li>
<li>apply a function to each time series of population counts per demographic group (this requires manipulating both the date column and the population column)</li>
<li>create a new column in the original (ungrouped) DataFrame based on this function</li>
</ol>
<p>The function in (2) acts on the existing population column with missing values, redistributing post-gap counts backwards over the gap. I believe the function works as intended, but I am struggling to stitch it into the context of the group-by and turn that into a new column in the DataFrame.</p>
<p>Here is sample data:</p>
<pre><code> age race gender date population
0 15-24 AAPI Male 2020-01-01 1.0
1 15-24 AAPI Male 2020-01-02 2.0
2 15-24 AAPI Male 2020-01-03 2.0
...
7 15-24 Black Female 2020-01-01 0.0
8 15-24 Black Female 2020-01-02 NaN
9 15-24 Black Female 2020-01-03 3.0
</code></pre>
<p>For the above trivial example, the desired output would be:</p>
<pre><code> age race gender date population interpolated
0 15-24 AAPI Male 2020-01-01 1.0 1.0
1 15-24 AAPI Male 2020-01-02 2.0 2.0
2 15-24 AAPI Male 2020-01-03 2.0 2.0
...
7 15-24 Black Female 2020-01-01 0.0 0.0
8 15-24 Black Female 2020-01-02 NaN 1.5
9 15-24 Black Female 2020-01-03 3.0 1.5
</code></pre>
<p>I have created the following function, which takes an input list of date gaps:</p>
<pre><code>gaps = [
    {
        "gap": ["2020-01-02"],
        "day_after": "2020-01-03",
    }
]
def bfill_pop(gaps, group):
for el in gaps:
fill_val = group.loc[group["date"] == el["day_after"], "population"] / (
len(el["gap"]) + 1
)
group.loc[group["date"].isin(el["gap"]), "population"] = fill_val
group.loc[group["date"] == el["day_after"], "population"] = fill_val
return group.rename(columns={"population": "interpolated"})["interpolated"]
</code></pre>
<p>When I try to apply this to the DataFrame with <code>apply()</code> or <code>transform()</code> functions though, I get errors, e.g.:</p>
<pre><code>df["interpolated"] = df.groupby(["age", "race", "gender"]).apply(
lambda g: bfill_pop(gaps, g)
)
</code></pre>
<pre><code>> ValueError: cannot handle a non-unique multi-index!
</code></pre>
<p>Is there a way to do this with apply or transform functions?</p>
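<p>For reference, a minimal sketch of one way around the non-unique-index error: pass <code>group_keys=False</code> so each group keeps its original row index, letting the per-group Series concatenate back into a column-shaped result. The redistribution here is a toy stand-in for <code>bfill_pop</code>, not the full logic:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "grp": ["x", "x", "y", "y", "y"],
    "population": [1.0, 2.0, 0.0, np.nan, 3.0],
})

def bfill_half(g: pd.DataFrame) -> pd.Series:
    # toy stand-in for the gap redistribution: split the value after a
    # single NaN evenly between the NaN row and the row after it
    # (assumes each NaN is followed by a real value)
    s = g["population"].copy()
    for i in s.index[s.isna()]:
        nxt = s.index[s.index.get_loc(i) + 1]
        half = s.loc[nxt] / 2
        s.loc[i] = half
        s.loc[nxt] = half
    return s

# group_keys=False keeps each group's original row index, so the pieces
# concatenate into a Series aligned with df and assign cleanly
df["interpolated"] = df.groupby("grp", group_keys=False).apply(bfill_half)
print(df["interpolated"].tolist())  # [1.0, 2.0, 0.0, 1.5, 1.5]
```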
|
<python><pandas><dataframe><group-by>
|
2024-05-08 18:40:47
| 1
| 909
|
OJT
|
78,450,595
| 1,788,656
|
quiver wind arrows are too lengthy for scale=1 and does not match the key arrow
|
<p>All,</p>
<p>The argument scale=1 of the matplotlib quiver (wind plot) function produces lengthy arrows extending beyond the figure limits. On the other hand, using scale=None seems to yield a logical arrow length. Any insights on this?</p>
<p>Also, <code>print(ax_left.scale)</code>, which I expected to print None, instead yields 127.27 when run in the IPython console. Any ideas?</p>
<p>Lastly, since the plotted vectors all have the constant magnitude 5, I expected the key arrow to have the same length as the wind arrows, but surprisingly it does not match them exactly.</p>
<p>Thanks</p>
<p><a href="https://i.sstatic.net/JpaXIwq2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpaXIwq2.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0,200,9)
y = np.linspace(0,100,5)
X,Y = np.meshgrid(x,y)
U = 5*np.ones_like(X)
V = 5*np.ones_like(Y)
#%% understanding "scale"
plt.close('all')
fig, ax = plt.subplots(1,2)
ax_left=ax[0].quiver(x, y, U, V,scale=None)
ax[0].quiverkey(ax_left,X=0.8,Y=1.05,U=3.5,label='unit',labelpos='W')
ax[0].set_title('')
print('scale is '+str(ax_left.scale))
ax_right=ax[1].quiver(x, y, U, V,scale=1)
ax[1].quiverkey(ax_right,X=0.8,Y=1.05,U=3.5,label='unit',labelpos='W')
ax[1].set_title('')
print('scale is '+str(ax_right.scale))
plt.savefig('test.png')
</code></pre>
|
<python><matplotlib><ipython>
|
2024-05-08 18:38:37
| 1
| 725
|
Kernel
|
78,450,557
| 16,717,009
|
How can I pass a namedtuple attribute to a method without using a string?
|
<p>I'm trying to create a class to represent a list of named tuples and I'm having trouble with accessing elements by name. Here's an example:</p>
<pre><code>from typing import NamedTuple
class Record(NamedTuple):
id: int
name: str
age: int
class NamedTupleList:
def __init__(self, data):
self._data = data
def attempt_access(self, row, column):
print(f'{self._data[row].age=}')
r = self._data[row]
print(f'{type(column)=}')
print(f'{column=}')
print(f'{r[column]=}')
data = [Record(1, 'Bob', 30),
Record(2, 'Carol', 25),
Record(3, 'Ted', 29),
Record(4, 'Alice', 28),
]
class_data = NamedTupleList(data)
print('show data')
print(f'{data[0]=}')
print(f'{type(data[0])=}')
print(f'{data[0].age=}')
print('\nshow class_data')
print(f'{type(class_data)=}')
print('\nattempt_access by index')
class_data.attempt_access(0, 2)
print('\nattempt_access by name')
class_data.attempt_access(0, Record.age) # why can't I do this?
</code></pre>
<p>Produces:</p>
<pre><code>show data
data[0]=Record(id=1, name='Bob', age=30)
type(data[0])=<class '__main__.Record'>
data[0].age=30
show class_data
type(class_data)=<class '__main__.NamedTupleList'>
attempt_access by index
self._data[row].age=30
type(column)=<class 'int'>
column=2
r[column]=30
attempt_access by name
self._data[row].age=30
type(column)=<class '_collections._tuplegetter'>
column=_tuplegetter(2, 'Alias for field number 2')
print(f'{r[column]=}')
~^^^^^^^^
TypeError: tuple indices must be integers or slices, not _collections._tuplegetter
</code></pre>
<p>So I can successfully access 'rows' and 'columns' by index, but if I want to access a column (i.e. namedtuple attribute) through a method call I get an error. What's interesting is that the column value is <code>_tuplegetter(2, 'Alias for field number 2')</code> so the index is known in my method but I can't get to it. Does anyone know how I can access this value so that I can pass a name to the method? I'm trying to avoid passing the name as a string - I'd really like to take advantage of the namespace since that's one of the advantages of a namedtuple after all.</p>
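<p>One possible workaround, sketched under the assumption that the field accessor behaves as a descriptor (it does in CPython, where <code>Record.age</code> is a <code>_tuplegetter</code>): invoke its <code>__get__</code> on the row instead of indexing:</p>

```python
from typing import NamedTuple

class Record(NamedTuple):
    id: int
    name: str
    age: int

class NamedTupleList:
    def __init__(self, data):
        self._data = data

    def access(self, row, column):
        r = self._data[row]
        if hasattr(column, "__get__"):
            # Record.age is a descriptor (a _tuplegetter in CPython);
            # invoking its __get__ on the instance is the same lookup
            # that r.age performs
            return column.__get__(r)
        return r[column]  # a plain integer index still works

data = [Record(1, 'Bob', 30), Record(2, 'Carol', 25)]
ntl = NamedTupleList(data)
print(ntl.access(0, 2))           # 30
print(ntl.access(1, Record.age))  # 25
```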
|
<python>
|
2024-05-08 18:30:51
| 1
| 343
|
MikeP
|
78,450,541
| 10,485,253
|
How do I get table data from a label object in sqlalchemy?
|
<p>Lets say I have the following simplified example:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import func, select
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class TableA(Base):
    __tablename__ = "table_a"
    id: Mapped[int] = mapped_column(primary_key=True)
    col_name: Mapped[str | None]
    parent_id: Mapped[int]

class TableB(Base):
    __tablename__ = "table_b"
    id: Mapped[int] = mapped_column(primary_key=True)
    col_name: Mapped[str | None]
    parent_id: Mapped[int]

class TableC(Base):
    __tablename__ = "table_c"
    id: Mapped[int] = mapped_column(primary_key=True)
    col_name: Mapped[str | None]
    parent_id: Mapped[int]

MY_MAPPINGS = {
    "a__col_name": func.coalesce(TableA.col_name, "").label("some_col_name"),
    "b__col_name": func.coalesce(TableB.col_name, "").label("another__col_name"),
    "c__col_name": func.coalesce(TableC.col_name, "").label("third__col_name"),
}

def get_query(mapping: str, parent_id: int):
    col = MY_MAPPINGS[mapping]
    return select(col)
</code></pre>
<p>That should be fine. But now what if I want to add a filter on this query for parent_id, how would I do this? Ideally I would want to do something like</p>
<pre><code>.filter(col.table.parent_id == parent_id)
</code></pre>
<p>But col is type <code>sqlalchemy.sql.elements.Label</code> and col.table is None. What do I do here?</p>
|
<python><sqlalchemy>
|
2024-05-08 18:28:09
| 1
| 887
|
TreeWater
|
78,450,533
| 5,596,534
|
Multiple conditions for filters on partitioned columns with pandas read_parquet
|
<p>If I have a partitioned data and I was to filter using the <code>filters</code> argument in <code>pd.read_parquet</code> how can I accomplish that? For example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {
"ID": [1, 2, 3],
"Value": ["A", "B", "C"]
}
df = pd.DataFrame(data)
parquet_folder = "example_partitioned"
df.to_parquet(parquet_folder, index=False, partition_cols=["Value"])
</code></pre>
<p>So I have partitioned data structure on disk. If I construct a filter condition like this it works:</p>
<pre class="lang-py prettyprint-override"><code>filter_conditions = [
("Value", "==", "A")
]
pd.read_parquet(parquet_folder, filters=filter_conditions)
</code></pre>
<p>But if I want multiple conditions (i.e. A <em>OR</em> B) the following does not work:</p>
<pre class="lang-py prettyprint-override"><code>filter_conditions_two = [
("Value", "==", "A"),
("Value", "==", "B")
]
pd.read_parquet(parquet_folder, filters=filter_conditions_two)
</code></pre>
<p>That instead returns an empty data frame. Is this possible with <code>filters</code>?</p>
|
<python><pandas><parquet>
|
2024-05-08 18:26:21
| 1
| 4,426
|
boshek
|
78,450,478
| 2,475,195
|
Pandas rolling sum within a group
|
<p>I am trying to calculate a rolling sum or any other statistic (e.g. mean), within each group. Below I am giving an example where the window is 2 and the statistic is sum.</p>
<pre><code>df = pd.DataFrame.from_dict({'class': ['a', 'b', 'b', 'c', 'c', 'c', 'b', 'a', 'b'],
'val': [1, 2, 3, 4, 5, 6, 7, 8, 9]})
df['sum2_per_class'] = [1, 2, 5, 4, 9, 11, 10, 9, 16] # I want to compute this column
# df['sum2_per_class'] = df[['class', 'val']].groupby('class').rolling(2).sum() # what I tried
class val sum2_per_class
0 a 1 1
1 b 2 2
2 b 3 5
3 c 4 4
4 c 5 9
5 c 6 11
6 b 7 10
7 a 8 9
8 b 9 16
</code></pre>
<p>Here's what I tried and the corresponding error:</p>
<pre><code>df['sum2_per_class'] = df[['class', 'val']].groupby('class').rolling(2).sum()
TypeError: incompatible index of inserted column with frame index
</code></pre>
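<p>For reference, a sketch using <code>transform</code>, which keeps the original row index and so avoids the incompatible-index error:</p>

```python
import pandas as pd

df = pd.DataFrame({'class': ['a', 'b', 'b', 'c', 'c', 'c', 'b', 'a', 'b'],
                   'val': [1, 2, 3, 4, 5, 6, 7, 8, 9]})

# transform keeps the original row index, so the result drops straight
# into a new column; min_periods=1 lets each group's first row be its
# own (window-of-one) sum
df['sum2_per_class'] = (df.groupby('class')['val']
                          .transform(lambda s: s.rolling(2, min_periods=1).sum()))
print(df['sum2_per_class'].tolist())
# [1.0, 2.0, 5.0, 4.0, 9.0, 11.0, 10.0, 9.0, 16.0]
```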
|
<python><pandas><dataframe><group-by><rolling-computation>
|
2024-05-08 18:09:28
| 1
| 4,355
|
Baron Yugovich
|
78,450,469
| 4,862,402
|
Rendering jinja templated parameters before rendering sql query
|
<p>I have a DAG with multiple SQL tasks (and .sql files) referencing the same template variable called <code>refdate</code> (each file may reference this variable multiple times within the query code). Let's assume the sql file looks like this:</p>
<pre><code>select '{{params.refdate}}';
</code></pre>
<p>Now I want to use the <code>SQLExecuteQueryOperator</code> to set the parameter <code>refdate</code> and execute the query:</p>
<pre><code>sql_task = SQLExecuteQueryOperator(
task_id="sql-task",
conn_id="connection",
sql="query.sql",
params={
"refdate": "{{ macros.ds_add(ds, -2) }}"
}
)
</code></pre>
<p>The problem is that this operator does not render the parameter properly. It just replaces the <code>refdate</code> parameter with the plain text "{{ macros.ds_add(ds, -2) }}".</p>
<p><a href="https://stackoverflow.com/questions/72775001/passing-jinja-variables-in-airflow-params-dict">From this question</a>, I noticed that we can simply replace the parameter with the jinja template in the query file. Thus I can directly write my sql query file like this:</p>
<pre><code>select '{{ macros.ds_add(ds, -2) }}';
</code></pre>
<p>Now, there is no need to provide the <code>refdate</code> parameter anymore. We can simply:</p>
<pre><code>sql_task = SQLExecuteQueryOperator(
task_id="sql-task",
conn_id="connection",
sql="query.sql"
)
</code></pre>
<p>This change works fine!</p>
<p>My concern, however, is that in my real business case I have multiple SQL files and multiple references to <code>refdate</code>. I'm concerned because the definition of <code>refdate</code> may change in the future, for example to something like <code>refdate="{{ macros.ds_add(ds, -1) }}"</code>. In such a case I would need to change multiple SQL files at the same time. Of course this is not good practice in terms of writing clean and easily maintainable DAG code.</p>
<p>Any suggestions?</p>
|
<python><sql><airflow><jinja2>
|
2024-05-08 18:07:29
| 0
| 1,161
|
Victor Mayrink
|
78,450,274
| 7,307,824
|
Multiple requests in Python with `asyncio`
|
<p>I'm trying to make a number of requests at the same time.</p>
<p>I'm new to <code>async</code> and <code>await</code> in Python (I've used it in Js).</p>
<p>I found an example and used this:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import aiohttp
async def get_item( session: aiohttp.ClientSession, path: str ):
# fetch data
url = f'my/path/here{path}'
resp = await session.request('GET', url=url)
data = await resp.json()
# at this point I do stuff to the data and save as csv
    return data  # resp.json() was already awaited; awaiting plain data would raise a TypeError
async def get_all():
tasks = []
async with aiohttp.ClientSession() as session:
tasks.append(get_item(session=session, path='1'))
tasks.append(get_item(session=session, path='2'))
return await asyncio.gather(*tasks, return_exceptions=True)
loop = asyncio.get_event_loop()
loop.run_until_complete(get_all())
</code></pre>
<p>but I get the following error:</p>
<p><code>RuntimeError: This event loop is already running</code></p>
<p>Obviously there is an issue, please can someone explain what the issue is and how I can fix this?</p>
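<p>For reference, a sketch of the usual fix in a plain script, with a dummy coroutine standing in for the aiohttp request so the shape is visible without a network:</p>

```python
import asyncio

async def get_item(path: str) -> str:
    await asyncio.sleep(0)    # stand-in for the real session.request(...)
    return f"result-{path}"

async def get_all():
    # schedule both coroutines concurrently and wait for both results
    return await asyncio.gather(get_item("1"), get_item("2"))

# asyncio.run creates, runs, and closes the loop itself; the old
# get_event_loop()/run_until_complete pattern fails where a loop is
# already running (e.g. Jupyter, where `await get_all()` is the way)
results = asyncio.run(get_all())
print(results)  # ['result-1', 'result-2']
```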
|
<python><python-asyncio>
|
2024-05-08 17:25:43
| 2
| 568
|
Ewan
|
78,450,001
| 774,575
|
How to build a MultiIndex DataFrame from a dict of data and a dict of index levels
|
<p>I'm struggling with the creation of this <code>DataFrame</code></p>
<pre><code> A B
x y
a 1 2 1
2 6 3
c 2 7 2
</code></pre>
<p>from these two dictionaries which seem sufficient:</p>
<pre><code>data = {'A': [2,6,7],
'B': [1,3,2]}
index = {'x': ['a', 'a', 'c'],
'y': [1, 2, 2]}
</code></pre>
<p>I tried several methods without success, of which:</p>
<pre><code>dfm = pd.DataFrame.from_records(data=data, index=index)
</code></pre>
<p>The index size is incorrect, 2 instead of 3. I know an alternative with splitting the index dict into an array and a list:</p>
<pre><code>labels = [['a', 'a', 'c'], [1, 2, 2]]
mi = pd.MultiIndex.from_arrays(labels, names=['x','y'])
dfm = pd.DataFrame(data, index=mi)
</code></pre>
<p>but this isn't very elegant.</p>
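<p>For reference, a slightly tidier sketch via <code>MultiIndex.from_frame</code>, which reads both the level values and the level names straight from the <code>index</code> dict:</p>

```python
import pandas as pd

data = {'A': [2, 6, 7], 'B': [1, 3, 2]}
index = {'x': ['a', 'a', 'c'], 'y': [1, 2, 2]}

# from_frame reads both the level values and the level names ('x', 'y')
# straight from the DataFrame built from the index dict
mi = pd.MultiIndex.from_frame(pd.DataFrame(index))
dfm = pd.DataFrame(data, index=mi)
print(list(dfm.index.names))   # ['x', 'y']
print(dfm.loc[('a', 2), 'A'])  # 6
```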
|
<python><pandas><multi-index>
|
2024-05-08 16:33:26
| 1
| 7,768
|
mins
|
78,449,946
| 741,850
|
Prevent asyncpg from processing output from postgres? (return raw data)
|
<p>I am trying to return raw data, as in string / bytes, from a postgresql query executed with asyncpg, without asyncpg parsing the output data into a Record/dict. In my case I use a cursor, but I think the question is general.</p>
<p>Why?</p>
<p>I am returning pre-processed JSON data, ready for an endpoint. I do not want to parse this data into a dict and then, in Python, re-encode it back to JSON. That seems like a lot of wasted effort: I am returning a great deal of data, so I expect that skipping the round-trip will significantly improve performance.</p>
<p>My code is essentially this:</p>
<pre class="lang-py prettyprint-override"><code>query = "SELECT ARRAY_AGG(JSON_STRIP_NULLS('my', 'data', 'from', (select array_agg(somewhere) from databasetable)))"
# Never mind the actual query. It is big, but returns many lines of json.
# eg:
# {"my": "data", "from": [1,2,3]}
# {"my": "data", "from": [1,2,4]}
# {"my": "data", "from": [1,2,5]}
async def my_generator():
async with get_transaction_somehow() as conn: # type: asyncpg.Connection | asyncpg.pool.PoolConnectionProxy
async for record in conn.cursor(query, my_data):
yield record # Here I just want to spew out whatever asyncpg gets from postgres
</code></pre>
|
<python><postgresql><asyncpg>
|
2024-05-08 16:24:59
| 1
| 12,944
|
Automatico
|
78,449,726
| 10,200,497
|
How can I increase each group by N percent than the previous group?
|
<p>First of all I'm not sure if it is the correct title. Feel free to suggest a better one.</p>
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
        'a': [100, 100, 102, 102, 106, 106, 106, 110, 110, 110]
}
)
</code></pre>
<p>And this is the expected output. I want to create column <code>b</code>:</p>
<pre><code> a b
0 100 100
1 100 100
2 102 103
3 102 103
4 106 106.09
5 106 106.09
6 106 106.09
7 110 110
8 110 110
9 110 110
</code></pre>
<p>The process is as follows:</p>
<p><strong>a)</strong> Groups are defined by column <code>a</code>.</p>
<p><strong>b)</strong> Each group must be AT LEAST 3 percent higher than the previous group. That is the output of column <code>b</code>.</p>
<p>For example:</p>
<p>First group is 100. So the process starts for the next group.</p>
<p>100 * 1.03 = 103</p>
<p>The next group is 102. It is less than 103. So for <code>b</code>, 103 is selected. Now the next group is 106 so:</p>
<p>103 * 1.03 = 106.09</p>
<p>106 is less than this number so 106.09 is selected. For the next group since 110 > 106.09 * 1.03, 110 is selected.</p>
<p>This is one of my attemps. But it feels like it is not the correct approach:</p>
<pre><code>df['streak'] = df.a.ne(df.a.shift(1)).cumsum()
df['b'] = df.groupby('streak')['a'].apply(lambda x: x * 1.03)
</code></pre>
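<p>For reference, a sketch of one approach: collapse each run of equal values to a single group value, then walk the groups sequentially, since each floor depends on the previous adjusted value:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [100, 100, 102, 102, 106, 106, 106, 110, 110, 110]})

# label each consecutive run of equal values, then take one value per run
grp = df['a'].ne(df['a'].shift()).cumsum()
firsts = df.groupby(grp)['a'].first()

# each run's floor depends on the previous *adjusted* value, so this
# step is inherently sequential
adjusted = []
for v in firsts:
    if not adjusted:
        adjusted.append(float(v))
    else:
        adjusted.append(max(float(v), adjusted[-1] * 1.03))

df['b'] = grp.map(dict(zip(firsts.index, adjusted)))
print([round(v, 2) for v in df['b']])
# [100.0, 100.0, 103.0, 103.0, 106.09, 106.09, 106.09, 110.0, 110.0, 110.0]
```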
|
<python><pandas><dataframe>
|
2024-05-08 15:47:54
| 1
| 2,679
|
AmirX
|
78,449,599
| 15,370,142
|
pandas date column to unix timestamp accounting for timezone and varying datetime formats
|
<p>I have multiple data frames with a datetime column as a string. Datetime formats vary across columns in the dataframe or across dataframes. I want to get a unix timestamp that gets interpreted by an ArcGIS application into the local timezone.</p>
<p>For example, one such dataframe is the following:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
time_dict = {"datetime": ["2023-08-15T15:32:47.687+00:00", ""]}
test_df = pd.DataFrame(time_dict)
</code></pre>
<p>I tried a number of simple functions. When I submitted the unix timestamp to the ArcGIS application, it gave a date-time that was 4 hours earlier than the local time. When I tried to make some corrections for the timezone, I encountered a TypeError saying that the timezone was already present in the date.</p>
<p>I will provide a solution below, but maybe someone has a better one.</p>
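<p>For reference, a minimal sketch of one direction (assuming the string carries its UTC offset): parse to tz-aware UTC, then take integer seconds since the epoch, so the consuming application can localize the display:</p>

```python
import pandas as pd

s = pd.Series(["2023-08-15T15:32:47.687+00:00"])

# utc=True yields a tz-aware datetime64[ns, UTC] column; its int64 view
# is nanoseconds since the epoch, so floor-divide down to seconds
dt = pd.to_datetime(s, utc=True)
epoch_s = dt.astype("int64") // 10**9
print(epoch_s.iloc[0])  # 1692113567
```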
|
<python><pandas><datetime><timezone><unix-timestamp>
|
2024-05-08 15:24:28
| 2
| 412
|
Ted M.
|
78,449,590
| 3,123,109
|
"... && coverage report" not working after switching to pytest-django
|
<p>I was using <code>unittest</code> in Django to write tests and running the tests with this command:</p>
<pre><code>coverage run --omit='src/manage.py,src/config/*,*/.venv/*,*/*__init__.py,*/tests.py,*/admin.py' src/manage.py test src && coverage report
</code></pre>
<p>It'd run the tests and then display the report from the <code>.coverage</code> data generated by <code>coverage run ...</code>.</p>
<p>After installing <code>pytest-django</code> and setting up <code>pytest.ini</code>, I've updated my command to:</p>
<pre><code>coverage run --omit='src/manage.py,src/config/*,*/.venv/*,*/*__init__.py,*/tests.py,*/admin.py' -m pytest && coverage report
</code></pre>
<p>Note, I'm not using <code>pytest-cov</code> just yet.</p>
<p>For some reason I can't figure out, the <code>coverage report</code> is not being displayed after the tests run.</p>
<p>I can run each command separately:</p>
<ul>
<li>The <code>coverage run ...</code> runs the tests and generates the report.</li>
<li>The <code>coverage report</code> displays the report.</li>
</ul>
<p>I just can't get the report to display doing <code>... && coverage report</code> after switching to <code>pytest-django</code>.</p>
<p>Any reason for this?</p>
<p>Versions:</p>
<pre><code>coverage = "^6.2"
pytest-django = "^4.7.0"
</code></pre>
|
<python><django><pytest><coverage.py><pytest-django>
|
2024-05-08 15:24:09
| 1
| 9,304
|
cheslijones
|
78,449,573
| 1,451,649
|
In Bokeh, how can I update the color of lines when updating a multiline data source?
|
<p>I want to update a multi_line in a bokeh figure. As part of the updates, I need to adjust colors.</p>
<p>First, I make a simple figure:</p>
<pre><code>from bokeh.plotting import figure, show
p = figure()
# create a dummy multiline so that we can update it later with new data
p.multi_line(xs=[[0,1]], ys=[[0,1]], name='my_lines')
show(p)
</code></pre>
<p>This gives fine results:</p>
<p><a href="https://i.sstatic.net/nSFdFitPm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSFdFitPm.png" alt="enter image description here" /></a></p>
<p>Then I try to update the lines:</p>
<pre><code>lines_ds = p.document.get_model_by_name('my_lines').data_source
lines_ds.data = dict(
xs=[[1,2,3],[0,0]],
ys=[[0,1,0],[0,1]],
color=['black','red']
)
show(p)
</code></pre>
<p>This updates my x and y data, but not the colors:</p>
<p><a href="https://i.sstatic.net/j2XgjkFdm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j2XgjkFdm.png" alt="enter image description here" /></a></p>
<p>How can I update the colors of the lines? I already tried <code>colors</code> instead of <code>color</code>.
I'm not entirely opposed to simply creating an entirely new figure each time I need to update (multiple times per second), but I've been told that the <em>right</em> way to use bokeh is to update your objects rather than deleting and recreating things.</p>
|
<python><bokeh>
|
2024-05-08 15:20:32
| 1
| 3,741
|
jpobst
|
78,449,517
| 6,694,814
|
Python ValueError: I/O operation on closed file for loop range
|
<p>I have the following code:</p>
<pre><code>def openFile():
filepath = filedialog.askopenfilename(
title="Please select a file",
filetypes = (('Excel files', '*.xlsx'),
('Excel macro files', '*.xlsm'))
)
file = open(filepath, 'r')
wb = load_workbook(filepath, read_only=False, keep_vba=True)
ws=wb.active
fileonly = filepath.split('/')[-1]
filewithoutextension = fileonly.split('-')[0]
fileextension = fileonly.split('.')[-1]
y = pyautogui.prompt(text='Please specify the number of file duplicates', title='Number of files', default='2')
file_path = filedialog.askdirectory()
print(y)
for x in range(2, (int(y))+1):
newname = "%03d" % x
dv = DataValidation(type="list", formula1='"COLLECTION,INTER PROJECT COLLECTION,INTER PROJECT TRANSFER,TRANSFER"', allow_blank=False)
dv.error = 'Please select entry from the list!'
c1 = ws["D4"]
c1.value = "COLLECTION"
dv.add(c1)
ws.add_data_validation(dv)
save_path = Path(file_path) / f"{filewithoutextension}- {newname}.{fileextension}"
wb.save(save_path)
print (filewithoutextension+'- '+newname+'.'+fileextension)
wb.close()
rootwindow = tk.Tk()
rootwindow.geometry("500x100")
rootwindow.title("Excel file duplicator")
font.families()
windowfont=tk.font.nametofont("TkDefaultFont")
windowfont.config(
family="Segoe Script",
size=24,
weight=font.BOLD)
button = tk.Button(rootwindow, text='Open Excel document', command=openFile, fg='black', bg='yellow')
button.pack()
rootwindow.mainloop()
</code></pre>
<p>which throws the following error:</p>
<p><strong>ValueError: I/O operation on closed file.</strong></p>
<p>Tried to use the file.close() and other solutions included in the code, which are listed here, but they didn't work.</p>
<p><a href="https://stackoverflow.com/questions/43316046/python-error-python-valueerror-i-o-operation-on-closed-file">Python error : Python ValueError: I/O operation on closed file</a></p>
<p><a href="https://stackoverflow.com/questions/18952716/valueerror-i-o-operation-on-closed-file">ValueError : I/O operation on closed file</a></p>
<p>How could I apply them to the situation, where instead of file I have a loop between a multitude of files?</p>
<p><strong>UPDATE:</strong></p>
<p>Previously I had a problem with saving images with the files duplicated by openpyxl.
By using the following solution:
<a href="https://stackoverflow.com/questions/40606139/images-dissapear-in-excel-documents-when-copying-them-with-python">Images dissapear in excel documents when copying them with python</a>
I have installed the Pillow library. The error has appeared since the installation.</p>
|
<python><tkinter><openpyxl>
|
2024-05-08 15:10:26
| 1
| 1,556
|
Geographos
|
78,449,510
| 9,466,142
|
Docker containers not able to communicate with each other on the same machine
|
<p>I have a docker compose file running 2 different services, and I have launched a third container which acts as a database for the first two.
The third container is a vector DB, and when I try to communicate with the newly instantiated container I get the error below:</p>
<pre><code>File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 501, in send
whisperfusion-1 | raise ConnectionError(err, request=request)
whisperfusion-1 | requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
</code></pre>
<p>How can I manage communication between the docker containers?</p>
<p>My docker compose file:</p>
<pre><code>version: '3.8'
services:
service1:
build:
context: docker
dockerfile: Dockerfile
image: service1:latest
volumes:
- type: bind
source: ./docker/scratch-space
target: /root/scratch-space
- type: bind
environment:
VERBOSE: ${VERBOSE:-false}
DB_HOST: host.docker.internal
DB_PORT: 3306
DB_USER: '4545'
DB_PASSWORD: '4545'
DB_NAME: 'fszdz'
ports:
- "8888:8888"
- "6006:6006"
deploy:
resources:
reservations:
devices:
- capabilities: ["gpu"]
entrypoint: ["/root"]
nginx:
image: nginx:latest
volumes:
- ./docker/resources/docker/default:/etc/nginx/conf.d/default.conf:ro
- ./docker/scripts/start-nginx.sh:/start-nginx.sh:ro
ports:
- "8000:80"
depends_on:
- service1
entrypoint: ["/bin/bash", "/start-nginx.sh"]
</code></pre>
<p>I am starting the other service with the command below:</p>
<pre><code>docker run -p 8888:8888 epsilla/vectordb
</code></pre>
<p>How can the two containers communicate with each other?
Do I need to rebuild the Docker image, and what changes are required to establish the link?</p>
|
<python><docker><docker-compose><dockerfile><large-language-model>
|
2024-05-08 15:08:54
| 0
| 573
|
Rahul Anand
|
78,449,498
| 6,638,232
|
How to use the result of a previous if statement to a next if?
|
<p>I would like to use a while loop in Python for the following if statements. I don't know how to implement this correctly in Python.</p>
<pre><code> if t == 0:
za = ds.isel(time=t)
abc = get_grids_inside_rad(127.2, 15.7)
print(abc)
if t == 1:
za = ds.isel(time=t)
ghi = get_grids_inside_rad(abc.lon,abc.lat)
print(ghi)
if t == 2:
za = ds.isel(time=t)
jkl = get_grids_inside_rad(ghi.lon,ghi.lat)
print(jkl)
if t == 3:
za = ds.isel(time=t)
mno = get_grids_inside_rad(jkl.lon,jkl.lat)
print(mno)
if t == 4:
za = ds.isel(time=t)
pqr = get_grids_inside_rad(mno.lon,ghi.lat)
print(pqr)
</code></pre>
<p>I would like to:</p>
<ol>
<li>use the output of the previous if statement (the first time step) as an input to the next if statement or loop it; and</li>
<li>save the final output to a csv file.</li>
</ol>
<p>The <code>get_grids_inside_rad(127.2, 15.7)</code> is a function that reads lon and lat.</p>
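<p>For reference, the if-chain can collapse into a single loop that carries the previous result forward and collects every step for the CSV. The function below is a hypothetical stand-in for <code>get_grids_inside_rad</code>; the +0.1 drift is a placeholder, not real logic:</p>

```python
import os
import tempfile

import pandas as pd

def get_grids_stub(lon, lat):
    # hypothetical placeholder for get_grids_inside_rad; pretend the
    # minimum-pressure centre drifts by 0.1 degrees per time step
    return lon + 0.1, lat + 0.1

prev = (127.2, 15.7)          # seed coordinates for time step 0
rows = []
for t in range(4):            # len(ds.time) == 4
    prev = get_grids_stub(*prev)
    rows.append({"t": t, "lon": prev[0], "lat": prev[1]})

out = pd.DataFrame(rows)
path = os.path.join(tempfile.mkdtemp(), "track.csv")
out.to_csv(path, index=False)  # save the final output once, after the loop
print(len(out))  # 4
```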
<p>Currently, the output is as shown below, but what I want is four lat/lon/msl rows in the final output. The code does not even print the output of the 2nd if statement.</p>
<pre class="lang-none prettyprint-override"><code> lon lat time msl
17 127.5 15.0 2015-10-16 99847.414062
</code></pre>
<p>What is the best way to do this in Python? <code>len(ds.time)</code> is equal to 4.</p>
<p>Here's the full code for this:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import xarray as xr
import cartopy.crs as ccrs
import cartopy.mpl.ticker as cticker
import matplotlib.ticker as mticker
from cartopy.util import add_cyclic_point
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import metpy.calc as mpcalc
import csv
import sys
ds = xr.open_dataset('prmsl_koppu_oct16.nc')
def haversine(lon1, lat1, lon2, lat2):
# convert decimal degrees to radians
lon1 = np.deg2rad(lon1)
lon2 = np.deg2rad(lon2)
lat1 = np.deg2rad(lat1)
lat2 = np.deg2rad(lat2)
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2)**2
c = 2 * np.arcsin(np.sqrt(a))
r = 6371
return c * r
def get_grids_inside_rad(ctr_lon, ctr_lat):
lon = np.arange(0, 360, 1.25)
lat = np.arange(-90, 90, 1.25)
# get coordinates of all points on the grid
grid_lon, grid_lat = np.meshgrid(lon, lat)
dists_in_km = haversine(grid_lon, grid_lat, ctr_lon, ctr_lat)
dists_in_deg = dists_in_km / 111
# find nearby points
thr = 2.0
coordinates = []
for i in range(grid_lon.shape[0]):
for j in range(grid_lon.shape[1]):
this_lon = grid_lon[i, j]
this_lat = grid_lat[i, j]
dist = dists_in_deg[i, j]
if dist <= thr:
df = pd.DataFrame({"lon":[this_lon],"lat":[this_lat],"dist":[dist]})
coordinates.append(df)
xyz = pd.concat(coordinates)
xyz.reset_index(drop=True, inplace=True)
xyz.index = range(1, len(xyz)+1, 1)
lat = list(xyz.iloc[:, 1])
lon = list(xyz.iloc[:, 0])
filtered_rad=za.sel(lat=lat, lon=lon, method='nearest')
yyy=filtered_rad.sortby('lon')
df = yyy.to_dataframe().reset_index().drop_duplicates()
df = df[df["msl"]==df["msl"].min()]
return df
t=0
while t < len(ds.time):
if t == 0:
za = ds.isel(time=t)
abc = get_grids_inside_rad(127.2, 15.7)
print(abc)
if t == 1:
za = ds.isel(time=t)
ghi = get_grids_inside_rad(abc.lon,abc.lat)
print(ghi)
else:
break
t += 1
</code></pre>
<p>The ds has the following structure:
<a href="https://i.sstatic.net/HlgvpLDO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlgvpLDO.png" alt="enter image description here" /></a></p>
<p>Here's the link to the data:</p>
<p><a href="https://drive.google.com/file/d/1vunP3pD3jkuKDCCNrfdT4u0rgxQ9mtXx/view?usp=sharing" rel="nofollow noreferrer">Link to the data</a></p>
|
<python>
|
2024-05-08 15:07:01
| 2
| 423
|
Lyndz
|
78,449,471
| 8,205,554
|
How to most efficiently delete a tuple from a list of tuples based on the first element of the tuple in Python?
|
<p>I have a data structure as follows,</p>
<pre class="lang-py prettyprint-override"><code>data = {
'0_0': [('0_0', 0), ('0_1', 1), ('0_2', 2)],
'0_1': [('0_0', 1), ('0_1', 0), ('0_2', 1)],
'0_2': [('0_0', 2), ('0_1', 1), ('0_2', 0)],
}
</code></pre>
<p>Each key of the dictionary is unique. The values in the dictionary are lists of tuples, and each tuple contains a coordinate name and the distance to that coordinate.</p>
<p>How to delete tuple from lists by given key? For example I wrote a function as follows,</p>
<pre class="lang-py prettyprint-override"><code>def remove_all_acs(
dictionary,
delete_ac
):
for key in dictionary:
acs = dictionary[key]
for ac in acs:
if ac[0] == delete_ac:
acs.remove(ac)
break
return dictionary
</code></pre>
<p>It's my expected output and returns a result as follows,</p>
<pre class="lang-py prettyprint-override"><code>print(remove_all_acs(data, '0_0'))
{
'0_0': [('0_1', 1), ('0_2', 2)],
'0_1': [('0_1', 0), ('0_2', 1)],
'0_2': [('0_1', 1), ('0_2', 0)]
}
</code></pre>
<p>It works but is not efficient on large lists. Do you have any ideas? Thanks in advance.</p>
<p>In addition, you can create a large dataset using this code:</p>
<pre class="lang-py prettyprint-override"><code>import math
def generate(width, height):
coordinates = [(x, y) for x in range(width) for y in range(height)]
dataset = {}
for x1, y1 in coordinates:
key = f"{x1}_{y1}"
distances = []
for x2, y2 in coordinates:
if (x1, y1) != (x2, y2):
distance = math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
distances.append((f"{x2}_{y2}", distance))
dataset[key] = distances
return dataset
</code></pre>
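<p>For reference, a single-pass sketch: rebuilding each list with a comprehension touches every tuple once, instead of <code>list.remove</code>, which does a linear search per call while mutating the list being iterated:</p>

```python
data = {
    '0_0': [('0_0', 0), ('0_1', 1), ('0_2', 2)],
    '0_1': [('0_0', 1), ('0_1', 0), ('0_2', 1)],
}

def remove_all_acs(dictionary, delete_ac):
    # rebuild each list in one pass instead of list.remove, which does a
    # linear search per call and mutates the list while it is iterated
    return {key: [ac for ac in acs if ac[0] != delete_ac]
            for key, acs in dictionary.items()}

result = remove_all_acs(data, '0_0')
print(result)
# {'0_0': [('0_1', 1), ('0_2', 2)], '0_1': [('0_1', 0), ('0_2', 1)]}
```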
|
<python>
|
2024-05-08 15:00:45
| 7
| 2,633
|
E. Zeytinci
|