QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
77,982,938
| 1,499,402
|
Polars idiomatic way of aggregating n consecutive rows of a data frame
|
<p>I'm new to Polars, and I ended up writing this code to compute some aggregating expression over segments of <code>n</code> rows:</p>
<pre><code>import polars as pl
df = pl.DataFrame({"a": [1, 1, 3, 8, 62, 535, 4213]})
(
df.with_columns(index=pl.int_range(pl.len(), dtype=pl.Int32))
.group_by_dynamic(index_column="index", every="3i")
.agg(pl.col("a").mean())
)
</code></pre>
<p>For the example I set <code>n==3</code> for <code>7</code> rows, but think of a smallish <code>n</code> of about <code>100</code>, for a multicolumn data frame of about <code>10**6</code> rows.</p>
<p>I was wondering if this is the idiomatic way of doing this type of operation.
Somehow <code>group_by_dynamic</code> over an <code>Int32</code> range seems overkill to me: I was wondering if there is a more direct way of doing the same aggregation.</p>
|
<python><aggregate><python-polars><rolling-computation>
|
2024-02-12 16:37:11
| 2
| 4,888
|
Stefano M
|
77,982,766
| 5,589,640
|
Delete word that follows a specific word
|
<p>I have e-mails and transcripts in German from conversations with customers. They include personal identifiable information that I need to remove. So it is a data anonymisation task. The text would be <em>"Hello Mr. Smith"</em>, <em>"Dear Mr Smith"</em>, <em>"Hello Lisa"</em> etc. followed by the conversation. I need to keep the conversation for further analysis. Three solutions come to my mind:</p>
<p><strong>A) compiling a list of names</strong>:
At this stage, I do not know all the names that will be mentioned. I have no access to the CRM database. So compiling a list and adding it to the stopwords corpus will be time-consuming and/or error prone.</p>
<p><strong>B) Part-of-Speech Tagging (PoS) / Named Entity Recognition (NER)</strong>:
This would also delete product names and places. I need to keep this information. So NER is unfortunately not an option.</p>
<p><strong>C) Regular expression (regex)</strong>:
Use RegEx to match the salutation, e.g. "Dear", and delete the subsequent word. <a href="https://stackoverflow.com/questions/32241159/how-to-remove-text-after-and-before-specific-words-in-a-string-using-python-rege">This answer</a> gave me a good starting point, but it assumes that I know the word that follows the name I need to delete which I don't.</p>
<pre><code>import re
print re.sub(r'(?<=copy )(.*)(?=from)', '', "copy table values from 'a.dat';")
</code></pre>
<p>How could I modify the code to delete the word that follows the salutation?</p>
<p>I read up on <a href="https://www.regular-expressions.info/lookaround.html" rel="nofollow noreferrer">lookaround</a> and played around a bit on <a href="https://regex101.com/r/sS2dM8/11" rel="nofollow noreferrer">regex101</a> but couldn't figure it out.</p>
<p>Also, would I need to tokenize the string first?</p>
<p>A solution with pandas <code>str.replace</code> is welcome too.</p>
|
<python><regex>
|
2024-02-12 16:05:32
| 2
| 625
|
Simone
|
77,982,709
| 4,269,851
|
How to structure custom file handleing class in python?
|
<p>Writing my file handling class in python I came across this problem file getting closed before I can use it. Guess I can't use <code>with</code> inside class. What then is the safest way to open/close file to avoid it staying open in case my code crashes before I have chance to close the file?</p>
<p>Clarification: I can go without file handling class, but I wanted to have all file operations handled by my own class rather than be part of code.</p>
<p>Basically class has to</p>
<ol>
<li>open file and keep open</li>
<li>I run file operations later in code as needed</li>
<li>if my script crashes before I close the file then it somehow magically close/release it as well.</li>
</ol>
<pre><code>class File:
def __init__(self, ext_file_path):
self.file_path = ext_file_path
class FileLow:
def __init__(self, path):
self.filePath = path
def __enter__(self):
self.fileHandle = open(self.filePath, 'r+b', ) # specify encoding
return self.fileHandle
def __exit__(self, *args):
self.fileHandle.close()
print("closed file")
# class end
def load(self):
with self.FileLow(self.file_path) as fileHandle:
self.myfileHandle = fileHandle
def read(self):
self.myfileHandle.seek(0) # go to beginning of file
return self.myfileHandle.read() # read file contents
# class end
file = File("D:\\text.txt")
print("loaded file")
file.load()
print("reading file")
print(file.read())
</code></pre>
<p>returns</p>
<pre class="lang-none prettyprint-override"><code>ValueError: seek of closed file
loaded file
closed file
reading file
</code></pre>
<p>I could get rid of <code>class File</code> and keep <code>class FileLow</code> as only class, but i am trying to avoid writing all my further app code inside <code>with</code> block</p>
|
<python><file><class><with-statement>
|
2024-02-12 15:56:23
| 2
| 829
|
Roman Toasov
|
77,982,694
| 8,888,469
|
How to join two dataframes for a particular condition for the joining keys
|
<p>I have two datframes <code>df1</code> and <code>df2</code> and I want to join them and create new dataframe df3 .</p>
<p>I want join work even if <code>dest</code> column of <code>df1</code> has one match in <code>dest</code> in column of <code>df2</code>.</p>
<p>Join key is pair <code>org,dest</code></p>
<p><strong>df1</strong></p>
<pre><code>Name org dest
Ashok A B
Rahul A C
Anupa B A
Sam A B
</code></pre>
<p><strong>df2</strong></p>
<pre><code>org dest Amount
A A/B/C 10
B C 20
A W 30
</code></pre>
<p><strong>Expected Output</strong></p>
<pre><code>Name org dest Amount
Ashok A B 10
Rahul A C
Anupa B A
Sam A B 10
</code></pre>
<p>How can this be done in python</p>
|
<python><pandas><join>
|
2024-02-12 15:54:27
| 1
| 933
|
aeapen
|
77,982,614
| 1,961,574
|
django: Using dictionaries as values for models.Choices
|
<p>I'm trying to save a frequency to the database, which consists of an amount (e.g. 2) and a unit (e.g. months). The user must be given a choice, too. The idea is to pass this to a class that can use this info to calculate frequency (similar to, but not quite <code>timedelta</code>, however this is beyond the scope of this question).</p>
<p>I thought it would be neat if I could save this information as a dict into a JSONField, and when retrieving it programmatically, I could pass it straight onto my <code>CustomFrequencyClass</code> in charge of the calculations.</p>
<p>The error I invariably get is <code>TypeError: unhashable type: 'dict'</code>. Or, if I pass the values as tuples, it'll tell me <code>(fields.E005) 'choices' must be an iterable containing (actual value, human readable name) tuples.</code></p>
<pre class="lang-py prettyprint-override"><code>class CustomFrequencyClass:
def __init__(self, *args, **kwargs):
pass
class Report(models.Model):
class ReportFrequency(CustomFrequencyClass, models.Choices):
DAILY = {"days": 1}, "Daily"
WEEKLY = {"days": 7}, "Weekly"
frequency = models.JSONField(choices=ReportFrequency.choices, default=ReportFrequency.DAILY)
</code></pre>
|
<python><django>
|
2024-02-12 15:42:17
| 1
| 2,712
|
bluppfisk
|
77,982,303
| 13,086,128
|
Select all integer columns except a few in python-polars?
|
<p>Consider I have a dataframe:</p>
<pre><code>import polars as pl
import polars.selectors as cs
df = pl.DataFrame(
{
'p': [1, 2, 1, 3, 1, 2],
'x': list(range(6, 0, -1)),
'y': list(range(2, 8)),
'z': [3, 4, 5, 6, 7, None],
"q" : list('abcdef')
}
)
</code></pre>
<p>df</p>
<pre><code>shape: (6, 5)
p x y z q
i64 i64 i64 i64 str
1 6 2 3 "a"
2 5 3 4 "b"
1 4 4 5 "c"
3 3 5 6 "d"
1 2 6 7 "e"
2 1 7 null "f"
</code></pre>
<p>I need to select all the <strong>integer columns</strong> except <code>p</code> and <code>z</code>.</p>
<p>One way is to manually select each column but, it is not feasible if there are hundreds of columns.</p>
<p>What could a better and efficient way?</p>
|
<python><python-3.x><dataframe><python-polars>
|
2024-02-12 14:50:01
| 2
| 30,560
|
Talha Tayyab
|
77,982,289
| 2,985,155
|
Scikit-learn Gaussian Process Regressor returns 2d vector instead of 2d Covariance matrix
|
<p>I'm fitting data (2d input, <strong>2d output</strong>) to a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html" rel="nofollow noreferrer">Gaussian Process from Sklearn</a> but when trying to get the covariance matrix I'm getting a 2d vector, not a matrix.</p>
<p>For some examples, it works fine (returns a 2d matrix), but I do not understand what's wrong with my case.</p>
<p>Should I interpret that the two values returned are meant to be the diagonal values of a diagonal matrix?</p>
<p>This is a simple example to reproduce the issue:</p>
<pre><code>import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RationalQuadratic
# Simple dataset
X = np.array([[0,1], [1,0]])
y = np.array([np.sin(12 * X[:, 0]).ravel(),
np.cos(12 * X[:, 1]).ravel()])
kernel = RationalQuadratic()
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0)
gpr.fit(X, y)
new_x = np.array([[0.1, 0.5]])
y_pred, y_cov = gpr.predict(new_x, return_cov=True)
print("y_pred:", y_pred)
print("y_cov:", y_cov)
</code></pre>
<p>Which prints:</p>
<blockquote>
<p><code>y_pred: [[ 1.71640581e-05 -2.47700145e-03]]</code></p>
<p><code>y_cov: [[[0.99997834 0.99997834]]]</code></p>
</blockquote>
<p>And if instead of cov I return std (not sure what that means for a 2d target):</p>
<pre><code>y_pred, y_std = gpr.predict(np.array([[0.1, 0.5]]), return_std=True)
print("y_pred:", y_pred)
print("y_std:", y_std)
</code></pre>
<blockquote>
<p><code>y_pred: [[ 1.71640581e-05 -2.47700145e-03]]</code></p>
<p><code>y_std: [[0.99998917 0.99998917]]</code></p>
</blockquote>
<p>Note that y_cov has an extra dimension than y_std</p>
<pre><code>y_cov.shape
</code></pre>
<blockquote>
<p>(1, 1, 2)</p>
</blockquote>
<pre><code>y_std.shape
</code></pre>
<blockquote>
<p>(1, 2)</p>
</blockquote>
|
<python><scikit-learn><gaussian-process><covariance-matrix>
|
2024-02-12 14:48:19
| 0
| 473
|
JackS
|
77,982,046
| 1,711,271
|
how to select all columns from a list in a polars dataframe
|
<p>I have a dataframe</p>
<pre><code>import polars as pl
import numpy as np
df = pl.DataFrame(
{
"nrs": [1, 2, 3, None, 5],
"names": ["foo", "ham", "spam", "egg", None],
"random": np.random.rand(5),
"groups": ["A", "A", "B", "C", "B"],
}
)
</code></pre>
<p>I want to select only the columns in <code>list</code>:</p>
<pre><code>mylist = ['nrs', 'random']
</code></pre>
<p>This seems to work:</p>
<pre><code>import polars.selectors as cs
df.select(cs.by_name(mylist)))
</code></pre>
<p>Is this the idiomatic way to do it? Or are there better ways?</p>
|
<python><select><python-polars>
|
2024-02-12 14:11:06
| 3
| 5,726
|
DeltaIV
|
77,981,863
| 11,861,874
|
Pandas data frame to Table in Word doc using Python
|
<p>I am inserting the pandas data frame as a table in a Word document using Python. I am using docx library but somehow while running a loop that creates three data frames it updates the same values in all of them, can you please let me know what's missing here?</p>
<pre><code>import pandas as pd
from docx import Document
from docx.shared import Inches
data1 = {'Header':['L1','L2','L3'], 'Val1':[float(100),float(200),float(300)],
'Val2':[float(400),float(500),float(600)], 'Val3':
[float(700),float(800),float(900)]}
data1_summary = pd.DataFrame(data=data1)
# Inside loop it'll create two more such outputs but with different values but with the same labels.
data2 = {'Header':['L5','L6'], 'Val5':[float(1000),float(1100)],
'Val6':[float(1300),float(1400)]}
data2_summary = pd.DataFrame(data=data2)
data3 = {'Header':['L7','L8','L9','L10'], 'Val7':[float(1900),float(2000),float(2100),float(2200)],
'Val8':[float(2900),float(2300),float(2400),float(2800)], 'Val9':
[float(3500),float(3600),float(3700),float(3900)]}
data3_summary = pd.DataFrame(data=data3)
# Here are the two functions that help to add the table.
def make_rows_bold(row):
"""
This function sets all the cells in the 'row' to bold.
To be used to set the header row to bold.
"""
for cell in row.cells:
for paragraph in cell.paragraphs:
for run in paragraph.runs:
run.font.bold = True
def add_table_to_word_doc(input_df: pd.DataFrame, document):
"""
This function converts the the input_df into a table, adds it to the input document, and then returns the document.
"""
# this create a table of the size of the df
table = document.add_table(input_df.shape[0]+1, input_df.shape[1])
# convert the column names of the dataframe to column titles of the ms word table
for col in range(input_df.shape[-1]):
table.cell(0, col).text = input_df.columns[col]
# add the rest of the dataframe to the table
for row in range(input_df.shape[0]):
for col in range(input_df.shape[-1]):
table.cell(row+1, col).text = str(input_df.values[row, col])
# style of the table ('Table Grid' is just the normal table)
table.style = 'Table Grid'
# set the title row to bold
make_rows_bold(table.rows[0])
return document
</code></pre>
|
<python><dataframe><datatable><python-docx>
|
2024-02-12 13:40:40
| 0
| 645
|
Add
|
77,981,774
| 4,396,229
|
ECIES Encryption for python and iOS
|
<p>I'm attempting to implement ECIES encryption in iOS Swift which has to be compatible with server python, but I'm encountering compatibility issues. I followed the instructions from an article for both Python and iOS, but when decrypting, it throws an error. Is there a library available that supports ECIES encryption compatible with both Python and Swift?</p>
<p>Repo macOS: <a href="https://github.com/agens-no/EllipticCurveKeyPair" rel="nofollow noreferrer">https://github.com/agens-no/EllipticCurveKeyPair</a></p>
<p>--------For python-----------</p>
<p>and I have tried this repository:
<a href="https://gist.github.com/dschuetz/2ff54d738041fc888613f925a7708a06" rel="nofollow noreferrer">https://gist.github.com/dschuetz/2ff54d738041fc888613f925a7708a06</a></p>
<p><a href="https://gist.github.com/ateska/09e1c874494bba8e381ccd8d851b0df8" rel="nofollow noreferrer">https://gist.github.com/ateska/09e1c874494bba8e381ccd8d851b0df8</a></p>
<p>article: <a href="https://darthnull.org/secure-enclave-ecies/" rel="nofollow noreferrer">https://darthnull.org/secure-enclave-ecies/</a></p>
|
<python><ios><swift><encryption-asymmetric><ecies>
|
2024-02-12 13:24:33
| 0
| 780
|
Steven
|
77,981,751
| 8,811,098
|
Import other class does not work in Jupyter Notebook
|
<p>I have a file named <strong>rentalcar.ipynb</strong> with class inside named <strong>rentalcar</strong>
I am trying to import this class using other file <strong>customer.ipynb</strong>,</p>
<p>I am still getting this error</p>
<pre><code> ---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[1], line 2
1 from datetime import datetime
----> 2 from rentalcar import rentalcar
4 class Customer:
5 def __init__(self, name, rental_company):
ModuleNotFoundError: No module named 'rentalcar'
</code></pre>
<p>Any Idea how to fix this, or should I run this from Python shell as ".py" extension to run correctly?</p>
|
<python><jupyter-notebook><jupyter-lab>
|
2024-02-12 13:20:19
| 4
| 386
|
Nadin Martini
|
77,981,735
| 3,943,162
|
"agent_node() got multiple values for argument 'agent'" when extract langchain example code from notebook
|
<p>I'm running the example LangChain/LangGraph code for "Basic Multi-agent Collaboration." I got the example from <a href="https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/multi-agent-collaboration.ipynb/" rel="nofollow noreferrer">here</a> (Github). There is also a <a href="https://blog.langchain.dev/langgraph-multi-agent-workflows/" rel="nofollow noreferrer">blogpost/video</a>.</p>
<p>After configuring my local virtual environment and copying and pasting the notebook, everything is working fine.</p>
<p>In the same virtual environment, I'm adapting the code to avoid to use the Jupyter Notebook. To do so, I decoupled the code into some classes.</p>
<p>Original code (working):</p>
<pre class="lang-py prettyprint-override"><code>
# Helper function to create a node for a given agent
def agent_node(state, agent, name):
result = agent.invoke(state)
# We convert the agent output into a format that is suitable to append to the global state
if isinstance(result, FunctionMessage):
pass
else:
result = HumanMessage(**result.dict(exclude={"type", "name"}), name=name)
return {
"messages": [result],
# Since we have a strict workflow, we can
# track the sender so we know who to pass to next.
"sender": name,
}
llm = ChatOpenAI(model="gpt-3.5-turbo", openai_api_key=open_ai_key)
# Research agent and node
research_agent = create_agent(
llm,
[search],
system_message="You should provide accurate data for the chart generator to use.",
)
research_node = functools.partial(agent_node, agent=research_agent, name="Researcher")
# Chart Generator
chart_agent = create_agent(
llm,
[python_repl],
system_message="Any charts you display will be visible by the user.",
)
chart_node = functools.partial(agent_node, agent=chart_agent, name="Chart Generator")
#...some other cells
for s in graph.stream(
{
"messages": [
HumanMessage(
content="Fetch the UK's GDP over the past 5 years,"
" then draw a line graph of it."
" Once you code it up, finish."
)
],
},
# Maximum number of steps to take in the graph
{"recursion_limit": 100},
):
print(s)
print("----")
</code></pre>
<p>My adapted version (not working):</p>
<p>nodes.py</p>
<pre class="lang-py prettyprint-override"><code>class Nodes:
def __init__(self, llm, tools):
self.llm = llm
self.tools = tools
def agent_node(state, agent, name):
result = agent.invoke(state)
if isinstance(result, FunctionMessage):
pass
else:
result = HumanMessage(**result.dict(exclude={"type", "name"}), name=name)
return {
"messages": [result],
# track the sender so we know who to pass to next.
"sender": name,
}
def research_agent_node(self):
research_agent = create_agent(
self.llm,
[self.tools[0]],
system_message="You should provide accurate data for the chart generator to use.",
)
return functools.partial(self.agent_node, agent=research_agent, name="Researcher")
def chart_generator_node(self):
chart_agent = create_agent(
self.llm,
[self.tools[0]],
system_message="Any charts you display will be visible by the user.",
)
return functools.partial(self.agent_node, agent=chart_agent, name="Chart Generator")
</code></pre>
<p>main.py</p>
<pre class="lang-py prettyprint-override"><code># Get the agent and tool nodes
nodes = Nodes(llm, tools)
research_agent_node = nodes.research_agent_node()
chart_agent_node = nodes.chart_generator_node()
#...more code
for s in graph.stream(
{
"messages": [
HumanMessage(
content="Fetch the UK's GDP over the past 5 years,"
" then draw a line graph of it."
" Once you code it up, finish."
)
],
},
# Maximum number of steps to take in the graph
{"recursion_limit": 100},
):
print(s)
print("----")
</code></pre>
<p>The error message is <code>TypeError: Nodes.agent_node() got multiple values for argument 'agent'</code> and it's pretty straightforward to understand, but really hard to debug. Maybe I misunderstood how to return the <code>functools.partial</code>, but I don't know how to verify it.</p>
|
<python><jupyter-notebook><agent><py-langchain><langgraph>
|
2024-02-12 13:17:39
| 1
| 1,789
|
James
|
77,981,596
| 19,299,757
|
How to provide custom message in pytest-html-reporter
|
<p>Is there a way to provide custom message for each test when I use [pytest-html-reporter]<a href="https://pypi.org/project/pytest-html-reporter/" rel="nofollow noreferrer">https://pypi.org/project/pytest-html-reporter/</a>?</p>
<p>I tried both print and logging statement for my tests but they are not shown in the report.</p>
<p>I call the pytest test as follows:</p>
<p>pytest --cache-clear -v -s --capture=tee-sys --html-report=./Reports/report.html --title='Dashboard' --self-contained-html .\DASHBOARD\Tests\test_dashboard.py</p>
<p>The report only shows the Suite,Test case name,Status,Time,Error Message.</p>
<p>Any help much appreciated.</p>
|
<python><pytest><pytest-html><pytest-html-reporter>
|
2024-02-12 12:54:28
| 2
| 433
|
Ram
|
77,981,582
| 3,455,071
|
RP Zero with new system and caemra module 3 - camera libs incompatibility
|
<p>I am trying to refactor old camera code in my RP Zero. It looks like everything's changed in new version of the OS and old libs are no longer there or incompatible.</p>
<p>In the old code I could do this to create the stream:</p>
<pre><code>import cv2
from imutils.video.pivideostream import PiVideoStream
import imutils
import time
import numpy as np
class VideoCamera(object):
def __init__(self, flip = False):
self.vs = PiVideoStream().start()
self.flip = flip
time.sleep(2.0)
def __del__(self):
self.vs.stop()
def flip_if_needed(self, frame):
if self.flip:
return np.flip(frame, 0)
return frame
def get_frame(self):
frame = self.flip_if_needed(self.vs.read())
ret, jpeg = cv2.imencode('.jpg', frame)
return jpeg.tobytes()
</code></pre>
<p>In new reality it seems no longer valid.</p>
<p>First thing I tried: I noticed that "imutils" lib is no longer installed together with cv2 so I have installed it manually with pip. I guess this lib is old version and it's not compatible any more. It is calling "picamera" lib where this lib is now "picamera2" I guess. I tried to create symlink to change it's name but it failed either.</p>
<p>Second try was to install imutils2 instead but this lib is completely different and I don't really understand why it has the same name.</p>
<p>So at this stage I got stuck with the code, I don't know how to make it working again.
Any advice would be very appreciated.</p>
|
<python><raspberry-pi><camera><raspberry-pi-zero><imutils>
|
2024-02-12 12:52:06
| 0
| 529
|
smoczyna
|
77,981,495
| 5,120,843
|
Is this an incorrect Mann-Whitney U statistic using Apache Commons?
|
<p>Using the staology.com website, I did a Mann-Whitney test in both Java and in Python. Yet, the Java version appears to be incorrect or I did something wrong. The Python logic matches the website (<a href="https://www.statology.org/mann-whitney-u-test/" rel="nofollow noreferrer">https://www.statology.org/mann-whitney-u-test/</a>).</p>
<p>Here is the Java code:</p>
<pre><code>import org.apache.commons.math3.stat.inference.MannWhitneyUTest;
public static void main(String[] args)
{
// Creating two sample datasets
double[] data1 = {3, 5, 1, 4, 3, 5};
double[] data2 = {4, 8, 6, 2, 1, 9};
// Compute and print ranks
MannWhitneyUTest mannWhitneyUTest = new MannWhitneyUTest();
double uStatistic = mannWhitneyUTest.mannWhitneyU(data1, data2);
double pValue = mannWhitneyUTest.mannWhitneyUTest(data1, data2);
// Printing U statistic
System.out.println("U-statistic: " + uStatistic);
// Printing P value
System.out.println("P-value: " + pValue);
}
</code></pre>
<p>The output from this code does not match the website:</p>
<pre><code>U-statistic: 23.0
P-value: 0.4233396415824435
</code></pre>
<hr />
<p>The Python logic, on the other hand, does match the website:</p>
<pre><code>import numpy as np
from scipy.stats import mannwhitneyu
data1 = [3, 5, 1, 4, 3, 5]
data2 = [4, 8, 6, 2, 1, 9]
# Compute and print ranks
u_statistic, p_value = mannwhitneyu(data1, data2)
# Printing the ranks
print('U-statistic: ', u_statistic)
# Computing and printing p-value
print('P-value: ', p_value)
U-statistic: 13.0
P-value: 0.46804160590041655
</code></pre>
<p>Did I do something incorrect in the Java version?</p>
<p>If so, I don't see what it is.</p>
|
<python><java><statistics>
|
2024-02-12 12:38:02
| 1
| 627
|
Morkus
|
77,981,486
| 9,173,710
|
Float overflow in scipy quadrature
|
<p>I am trying to calculate the following function:</p>
<p><a href="https://i.sstatic.net/2wHja.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2wHja.png" alt="formula" /></a></p>
<p>For every point x I need to evaluate the integral numerically since it can't be solved analytically.
(At least sympy and WolframAlpha can't do it) I am using scipy.integrate.quad for this.<br />
However, the sinh functions throw math range or float overflow errors when s is larger than ~300. I tried numpy functions and Python built-in math libraries.</p>
<p>Is there a way to properly calculate this?</p>
<p>Here is the equation implementation:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.integrate import quad
h = 0.0005
G = 1
R= 0.002
upsilon = 0.06
gamma = 0.072
def style_s(r, gamma, upsilon, G):
x_ = (r-R)/h
B = upsilon/(2*G) # substitute B for constants in last operand in denominator
def integrand(s_):
return np.cos(s_*x_)/(((1+2*s_**2 + np.cosh(2*s_))/(np.sinh(2*s_) - 2*s_))*s_ + B/h*s_**2)
int_out = quad(integrand, 0, inf)
return gamma/(2*np.pi*G) * int_out[0]
</code></pre>
|
<python><floating-point><integral><calculus><quad>
|
2024-02-12 12:36:51
| 2
| 1,215
|
Raphael
|
77,981,393
| 159,072
|
ValueError: generator already executing
|
<p>The following generator is throwing an exception.</p>
<p>How can I resolve this issue?</p>
<pre><code>import os
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
import autokeras as ak
DATA_COLUMN_COUNT = 12
LABEL_COLUMN_INDEX = 2
FEATURE_START_INDEX = 3
NUM_OF_FEATURES = DATA_COLUMN_COUNT - FEATURE_START_INDEX
NUM_OF_CLASSES = 3
BATCH_SIZE = 64
EPOCHS = 111
MAX_TRIALS = 10
STEPS_PER_EPOCH = 200
VALIDATION_STEPS = 50
def load_data(file_path):
data = pd.read_csv(file_path, header=None, sep='\s+', usecols=range(0, DATA_COLUMN_COUNT))
print("Sample data with usecols and correct separator:")
print(data.head())
label_conversion = {'H': 0, 'E': 1, 'C': 2}
y = data.iloc[:, LABEL_COLUMN_INDEX].map(label_conversion)
y = tf.keras.utils.to_categorical(y, num_classes=len(label_conversion))
x = data.iloc[:, FEATURE_START_INDEX:].to_numpy()
return x, y
def data_generator(file_paths, batch_size, num_classes):
while True:
for file_path in file_paths:
x, y = load_data(file_path)
# Remove the redundant to_categorical call
# y = tf.keras.utils.to_categorical(y, num_classes)
for i in range(0, len(x), batch_size):
end = i + batch_size
batch_x = x[i:end]
batch_y = y[i:end]
if batch_x.shape[0] < batch_size:
padding = batch_size - batch_x.shape[0]
batch_x = np.pad(batch_x, ((0, padding), (0, 0)), mode='constant', constant_values=0)
# Ensure batch_y has the correct number of dimensions before padding
batch_y = np.pad(batch_y, ((0, padding), (0, 0)), mode='constant', constant_values=0)
yield batch_x, batch_y
def create_generators(folder_path, batch_size, num_classes, validation_split=0.2):
files = [os.path.join(folder_path, f) for f in os.listdir(folder_path) if f.endswith('.dat')]
train_files, validation_files = train_test_split(files, test_size=validation_split)
train_gen = data_generator(train_files, batch_size, num_classes)
validation_gen = data_generator(validation_files, batch_size, num_classes)
return train_gen, validation_gen
if __name__ == "__main__":
data_folder = r'/home/my_username/my_project_v2_original_data'
train_gen, val_gen = create_generators(data_folder, BATCH_SIZE, NUM_OF_CLASSES)
train_dataset = tf.data.Dataset.from_generator(
lambda: train_gen,
output_signature=(
tf.TensorSpec(shape=(None, NUM_OF_FEATURES), dtype=tf.float32),
tf.TensorSpec(shape=(None, NUM_OF_CLASSES), dtype=tf.float32)
)
)
validation_dataset = tf.data.Dataset.from_generator(
lambda: val_gen,
output_signature=(
tf.TensorSpec(shape=(None, NUM_OF_FEATURES), dtype=tf.float32),
tf.TensorSpec(shape=(None, NUM_OF_CLASSES), dtype=tf.float32)
)
)
clf = ak.StructuredDataClassifier(max_trials=MAX_TRIALS, overwrite=True, num_classes=NUM_OF_CLASSES)
# Fit the model
clf.fit(train_dataset, epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, validation_data=validation_dataset, validation_steps=VALIDATION_STEPS)
best_hps = clf.tuner.get_best_hyperparameters()[0]
print(best_hps)
model = clf.export_model()
# Save the model
model.save("cnn_autokeras_by_chunk_without_ohe")
# Evaluate the model
evaluation = clf.evaluate(validation_dataset)
print(evaluation)
</code></pre>
<pre><code>C:\Users\pc\AppData\Local\Programs\Python\Python311\python.exe C:/git/my_project_v2/my_project_v2/my_project_v2_autokeras/cnn_autokeras_by_original_by_chunk.py
Using TensorFlow backend
2024-02-12 23:14:56.551477: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Sample data with usecols and correct separator:
0 1 2 3 4 5 6 7 8 9 10 11
0 0 SER C 0.000 0.000 0.000 1 1 1 1 1 0
1 1 ASN C 5.651 8.805 0.000 1 1 1 1 1 0
2 2 ALA C 5.582 5.602 9.099 0 0 0 1 1 0
3 3 MET C 5.756 7.791 7.902 0 0 0 0 0 0
4 4 ILE E 6.853 10.406 11.253 0 0 0 0 0 0
Sample data with usecols and correct separator:
0 1 2 3 4 5 6 7 8 9 10 11
0 0 PRO C 0.000 0.000 0.000 1 1 1 1 1 0
1 1 ILE E 6.388 10.133 0.000 1 1 1 1 1 0
2 2 MET E 7.090 9.464 12.018 0 0 0 0 1 0
3 3 LEU E 6.281 9.841 12.931 0 0 0 0 0 0
4 4 ARG E 6.628 5.599 9.306 0 0 0 0 0 0
2024-02-12 23:14:58.258253: W tensorflow/core/framework/op_kernel.cc:1827] INVALID_ARGUMENT: ValueError: generator already executing
Traceback (most recent call last):
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\ops\script_ops.py", line 270, in __call__
ret = func(*args)
^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\autograph\impl\api.py", line 643, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\from_generator_op.py", line 198, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: generator already executing
Traceback (most recent call last):
File "C:\git\my_project_v2\my_project_v2\my_project_v2_autokeras\cnn_autokeras_by_original_by_chunk.py", line 97, in <module>
2024-02-12 23:14:58.260762: W tensorflow/core/framework/op_kernel.cc:1827] INVALID_ARGUMENT: ValueError: generator already executing
Tr aclf.fit(train_dataset, epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, validation_data=validation_dataset, validation_steps=VALIDATION_STEPS)ce
back (most recent call last):
File " File "C:\Users\pc\AppData\Local\Programs\Python\Python311\Lib\site-packages\autokeras\tasks\structured_data.py", line 326, in fit
C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\ops\script_ops.py", line 270, in __call__
ret = func(*args)
^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\autograph\impl\api.py", line 643, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\from_generator_op.py", line 198, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: generator already executing
2024-02-12 23:14:58.263112: W tensorflow/core/framework/op_kernel.cc:1827] INVALID_ARGUMENT: ValueError: generator already executing
Traceback (most recent call last):
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\ops\script_ops.py", line 270, in __call__
ret = func(*args)
^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\autograph\impl\api.py", line 643, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\from_generator_op.py", line 198, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: generator already executing
2024-02-12 23:14:58.265071: W tensorflow/core/framework/op_kernel.cc:1827] INVALID_ARGUMENT: ValueError: generator already executing
Traceback (most recent call last):
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\ops\script_ops.py", line 270, in __call__
ret = func(*args)
^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\autograph\impl\api.py", line 643, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\from_generator_op.py", line 198, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: generator already executing
2024-02-12 23:14:58.266913: W tensorflow/core/framework/op_kernel.cc:1827] INVALID_ARGUMENT: ValueError: generator already executing
Traceback (most recent call last):
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\ops\script_ops.py", line 270, in __call__
ret = func(*args)
^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\autograph\impl\api.py", line 643, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\from_generator_op.py", line 198, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: generator already executing
2024-02-12 23:14:58.271889: W tensorflow/core/framework/op_kernel.cc:1827] INVALID_ARGUMENT: ValueError: generator already executing
Traceback (most recent call last):
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\ops\script_ops.py", line 270, in __call__
ret = func(*args)
^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\autograph\impl\api.py", line 643, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\from_generator_op.py", line 198, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: generator already executing
2024-02-12 23:14:58.274866: W tensorflow/core/framework/op_kernel.cc:1827] INVALID_ARGUMENT: ValueError: generator already executing
Traceback (most recent call last):
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\ops\script_ops.py", line 270, in __call__
ret = func(*args)
^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\autograph\impl\api.py", line 643, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\from_generator_op.py", line 198, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: generator already executing
history = super().fit(
^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Programs\Python\Python311\Lib\site-packages\autokeras\tasks\structured_data.py", line 139, in fit
history = super().fit(
^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Programs\Python\Python311\Lib\site-packages\autokeras\auto_model.py", line 283, in fit
self._analyze_data(dataset)
File "C:\Users\pc\AppData\Local\Programs\Python\Python311\Lib\site-packages\autokeras\auto_model.py", line 369, in _analyze_data
for x, y in dataset:
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 809, in __next__
return self._next_internal()
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 772, in _next_internal
ret = gen_dataset_ops.iterator_get_next(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\ops\gen_dataset_ops.py", line 3055, in iterator_get_next
Sample data with usecols and correct separator:
0 1 2 3 4 5 6 7 8 9 10 11
0 0 MET C 0.000 0.000 0.000 1 1 1 1 1 0
1 1 THR C 7.281 10.424 0.000 1 1 1 1 1 0
2 2 PHE C 6.626 8.966 12.663 0 0 0 0 1 0
3 3 GLU C 6.220 9.739 12.658 0 0 0 0 0 0
4 4 LEU E 6.748 10.167 12.863 0 0 0 0 0 0
_ops.raise_from_not_ok_status(e, name)
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\framework\ops.py", line 5888, in raise_from_not_ok_status
raise core._status_to_exception(e) from None # pylint: disable=protected-access
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__IteratorGetNext_output_types_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} ValueError: generator already executing
Traceback (most recent call last):
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\ops\script_ops.py", line 270, in __call__
ret = func(*args)
^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\autograph\impl\api.py", line 643, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Roaming\Python\Python311\site-packages\tensorflow\python\data\ops\from_generator_op.py", line 198, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: generator already executing
[[{{node PyFunc}}]] [Op:IteratorGetNext] name:
2024-02-12 23:14:58.675970: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]
2024-02-12 23:14:58.676507: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]
Process finished with exit code 1
</code></pre>
|
<python><tensorflow><neural-network><generator><auto-keras>
|
2024-02-12 12:17:56
| 1
| 17,446
|
user366312
|
77,981,352
| 567,059
|
Python logging filter being applied to all handlers when only added to one
|
<p>I am adding logging to a Python project, and I wish to log to both terminal and file. To do this I am using a <code>streamHandler</code> and a <code>fileHandler</code>.</p>
<p>To make it easier to quickly differentiate between log levels, I wish to add colour to the <code>levelname</code> for the logs that are output to the terminal, and to achieve this I have added a filter.</p>
<p>However, despite the filter only being applied to the <code>streamHandler</code>, messages in the log file are seeing the colour code applied. How can this be, given the filter is not added to the <code>fileHandler</code>?</p>
<h3>Example Code</h3>
<p>Note that the filter in this example only includes a format for <code>logging.ERROR</code>, as this will log to both terminal and file. But in reality I have a format for all log levels.</p>
<pre class="lang-py prettyprint-override"><code>import logging
class ColourLevelNameFilter(logging.Filter):
FORMATS = {
logging.ERROR: '\x1b[031m' + '{0}' + '\x1b[0m',
}
def filter(self, record):
levelname_format = self.FORMATS.get(record.levelno)
record.levelname = levelname_format.format(record.levelname)
return True
def main():
time_format = '%Y-%m-%d %H:%M:%S'
text_format = '%(asctime)s %(levelname)s => %(message)s'
formatter = logging.Formatter(text_format, time_format)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(formatter)
stream_handler.setLevel(logging.INFO)
stream_handler.addFilter(ColourLevelNameFilter())
log_file = 'test-error.log'
file_handler = logging.FileHandler(log_file, 'w')
file_handler.setFormatter(formatter)
file_handler.setLevel(logging.WARNING)
logger = logging.getLogger('my-app')
logger.setLevel(logging.INFO)
logger.addHandler(stream_handler)
logger.addHandler(file_handler)
logger.error('Testing logging of error.')
if __name__ == '__main__':
main()
</code></pre>
|
<python><python-logging>
|
2024-02-12 12:11:54
| 1
| 12,277
|
David Gard
|
77,981,266
| 12,200,808
|
How to set dynamic version for tool.setuptools_scm in pyproject.toml
|
<p>I'm trying to test the <code>pyproject.toml</code> by building an example.</p>
<p>If I deleted the line "<code>[tool.setuptools_scm]</code>", the build will success, but adding that line, the build will fail with the following error.</p>
<p><strong>Here is the pyproject.toml: /test/example/pyproject.toml</strong></p>
<pre><code>[project]
name = "example"
dynamic = ["version"]
[tool.setuptools.dynamic]
version = {attr = "example.__version__"}
[tool.setuptools_scm]
</code></pre>
<p><strong>When building with this command:</strong></p>
<pre><code># cd /test/example
# python -m build --no-isolation
</code></pre>
<p><strong>Here is the error:</strong></p>
<pre><code>* Getting build dependencies for sdist...
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/usr/local/lib/python3.10/dist-packages/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/usr/local/lib/python3.10/dist-packages/pyproject_hooks/_in_process/_in_process.py", line 287, in get_requires_for_build_sdist
return hook(config_settings)
File "/usr/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 328, in get_requires_for_build_sdist
return self._get_build_requires(config_settings, requirements=[])
File "/usr/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 295, in _get_build_requires
self.run_setup()
File "/usr/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/setuptools/__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/core.py", line 147, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 303, in __init__
_Distribution.__init__(self, dist_attrs)
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 283, in __init__
self.finalize_options()
File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 654, in finalize_options
ep(self)
File "/usr/local/lib/python3.10/dist-packages/setuptools_scm/_integration/setuptools.py", line 121, in infer_version
_assign_version(dist, config)
File "/usr/local/lib/python3.10/dist-packages/setuptools_scm/_integration/setuptools.py", line 56, in _assign_version
_version_missing(config)
File "/usr/local/lib/python3.10/dist-packages/setuptools_scm/_get_version_impl.py", line 112, in _version_missing
raise LookupError(
LookupError: setuptools-scm was unable to detect version for /test/example.
Make sure you're either building from a fully intact git repository or PyPI tarballs. Most other sources (such as GitHub's tarballs, a git checkout without the .git folder) don't contain the necessary metadata and will not work.
For example, if you're using pip, instead of https://github.com/user/proj/archive/master.zip use git+https://github.com/user/proj.git#egg=proj
ERROR Backend subprocess exited when trying to invoke get_requires_for_build_sdist
</code></pre>
<p>What's the correct configuration for <code>[tool.setuptools_scm]</code> in <code>pyproject.toml</code>?</p>
|
<python><ubuntu><setuptools><pyproject.toml><setuptools-scm>
|
2024-02-12 11:53:59
| 1
| 1,900
|
stackbiz
|
77,981,208
| 2,575,970
|
Create & Insert data with schema of Dataframe
|
<p>I have a csv that has 3300 attributes/columns. I cannot afford to create the columns manually in AZ SQL DB. I am looking out ways to automatically do it using Python. Came across to_sql and SQL Alchemy but does not look like it is working.</p>
<blockquote>
<p>Error; ArgumentError: Expected string or URL object, got <pyodbc.Connection object at 0x0000022429404440></p>
</blockquote>
<pre><code>import pandas as pd
import pyodbc
import urllib
from sqlalchemy import create_engine
fpath = r"C:\Users\***\test.csv"
df = pd.read_csv(fpath)
#print(df.head(5))
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};Server=tcp:XXX.database.windows.net,1433;DATABASE=XXX;UID=XXX;PWD=XXX")#;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;")
engine = create_engine(conn)
df.to_sql(name='test',con=engine, if_exists='append',index=False)
</code></pre>
|
<python><sql-server><pandas><sqlalchemy><azure-sql-database>
|
2024-02-12 11:42:42
| 1
| 416
|
WhoamI
|
77,981,121
| 5,810,717
|
Using xpath and selenium to select a HTML element in Python
|
<p>Thanks for letting me ask my questions here. I am new to Selenium and XPath, just trying to scrape a not-so-simple website using Python.</p>
<p>My questions are:</p>
<ol>
<li>Do you have an answer to my specific question on how to select the HTML element in question?</li>
<li>Do you have any addition to the learning ressources I listed below (which seem very helpful, but I was seemingly not yet advanced enough to apply them to my situation)?</li>
</ol>
<p>The specific question: I have a HTML file that looks as follows and want to extract the 'data-testid="qs-select-make"' element (in the end, I want to use selenium to update the drop-down menu)</p>
<p>For the life of me, I do not get it to work....</p>
<pre><code><div class = "a">
<div class = "ab">
<div class = "abc">
<div class = "abcd">
<select class="tya6p HaBLt A4yQa q0MnL" placeholder="Any" data-testid="qs-select-
make"><option selected="" value="">Any</option>
</code></pre>
<p>Using the google chrome web developer I already found the "correct" (albeit not very nice path) seems to be
<code>[@id="root"]/div/div/article[1]/section/div/div[2]/div[1]/div[1]/div/select</code></p>
<p>Still, the following code, attempting to insert the make "Audi" into the drop-down menu, fails with an Invalid Selector Exception:</p>
<pre><code>make_string = "//select[//*
[@id='root"]/div/div/article[1]/section/div/div[2]/div[1]/div[1]/div/select]option
selected[text()='{}']".format("Audi")
driver.find_element("xpath", make_string).click() #use selenium to click the button
</code></pre>
<p>Does anyone know what I am doing wrong, and of a nicer way to do it?</p>
<p>Regarding question 2, the resource. So far I have used:</p>
<ul>
<li>Stackoverflow, helpful as always, in particular <a href="https://stackoverflow.com/questions/5808909/selecting-an-element-with-xpath-and-selenium">this question</a></li>
<li>A very useful testsigma blogpost, which pointed me towards the chrome browser web developer to at least the the path:<a href="https://testsigma.com/blog/xpath-in-selenium/" rel="nofollow noreferrer">here</a></li>
<li>The Selenium documentation - very nicely written, but since I am beginner I am not yet able to apply the general concepts to my specific issue... apologies: <a href="https://www.selenium.dev/documentation/webdriver/elements/locators/" rel="nofollow noreferrer">Here</a></li>
</ul>
|
<python><selenium-webdriver><xpath><selenium-chromedriver>
|
2024-02-12 11:28:28
| 1
| 363
|
fußballball
|
77,980,972
| 1,711,271
|
merge groups of columns in a polars dataframe to single columns
|
<p>I have a polars dataframe with columns <code>a_0, a_1, a_2, b_0, b_1, b_2</code>. I want to convert it to a longer and thinner dataframe (3 x rows, but just 2 columns <code>a</code> and <code>b</code>), so that <code>a</code> contains <code>a_0[0], a_1[0], a_2[0], a_0[1], a_1[1], a_2[1],...</code> and the same for <code>b</code>. How can I do that?</p>
|
<python><reshape><python-polars>
|
2024-02-12 11:00:30
| 3
| 5,726
|
DeltaIV
|
77,980,709
| 8,554,611
|
How do I tell PyCharm of the class variable created by a class decorator?
|
<p>Say, I have a decorator to make logging more straightforward:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
_T = TypeVar("_T", bound=type)
def with_logger(cls: _T) -> _T:
from logging import getLogger
cls.logger = getLogger(cls.__name__)
return cls
@with_logger
class Spam:
def __init__(self) -> None:
print(f"{Spam.logger.name = }")
Spam()
</code></pre>
<p>The code runs fine. However, both PyCharm and <code>mypy</code> complain that <code>Spam</code> has no attribute named <code>logger</code>. The line that causes the warning is <code>print(f"{Spam.logger.name = }")</code>.</p>
<p>Following <a href="https://stackoverflow.com/questions/74034847/how-to-tell-mypy-that-a-class-decorator-adds-a-method-to-the-decorated-class">this question</a>, I transformed the code into the following monstrous one:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
_T = TypeVar("_T", bound=type)
def with_logger(cls: _T) -> _T:
from logging import getLogger
from typing import TYPE_CHECKING
setattr(cls, 'logger', getLogger(cls.__name__))
if TYPE_CHECKING:
from typing import ClassVar, Protocol, cast
class LoggerMixin(Protocol):
from logging import Logger
logger: ClassVar[Logger] = getLogger(cls.__name__)
return cast(_T, type(cls.__name__, (LoggerMixin, cls), dict()))
else:
return cls
@with_logger
class Spam:
def __init__(self) -> None:
print(f"{Spam.logger.name = }")
Spam()
</code></pre>
<p>Still, no avail. PyCharm, <code>mypy</code>, and <code>basedmypy</code> claim that <code>logger</code> is unresolved.</p>
<p>Is there anything else to do? It's been more than a year since the question.</p>
<p>I can explicitly type <code>logger</code> within each decorated class, but the solution is next to discarding the decorator at all:</p>
<pre class="lang-py prettyprint-override"><code>from logging import Logger, getLogger
from typing import ClassVar, TypeVar
_T = TypeVar("_T", bound=type)
def with_logger(cls: _T) -> _T:
setattr(cls, 'logger', getLogger(cls.__name__))
return cls
@with_logger
class Spam:
logger: ClassVar[Logger]
def __init__(self) -> None:
print(f"{Spam.logger.name = }")
Spam()
</code></pre>
<hr />
<p>As @KamilCuk suggested, a working way out is to inherit a class that creates the <code>logger</code>:</p>
<pre class="lang-py prettyprint-override"><code>from logging import Logger, getLogger
from typing import ClassVar
class BaseLogger:
logger: ClassVar[Logger]
def __new__(cls):
cls.logger = getLogger(cls.__name__)
return super().__new__(cls)
class Spam(BaseLogger):
def __init__(self) -> None:
print(f"{Spam.logger.name = }")
Spam()
</code></pre>
<p>It works fine. It highlights correctly in PyCharm. I just wonder if it can be done via a decorator.</p>
|
<python><pycharm><mypy><python-decorators><python-typing>
|
2024-02-12 10:21:00
| 0
| 796
|
StSav012
|
77,980,617
| 1,762,051
|
Django field look up __date is not returning objects
|
<p>Why field lookup <code>__date</code> and <code>__month</code> are not returning any objects?</p>
<pre class="lang-py prettyprint-override"><code>class MembershipPlanSubscription(TimeStampBaseModel):
user = models.ForeignKey(to=User, on_delete=models.CASCADE)
start_date = models.DateTimeField()
end_date = models.DateTimeField()
</code></pre>
<pre><code>In [3]: objects = MembershipPlanSubscription.objects.all()
In [4]: for object in objects:
...: print(object.end_date)
...:
2024-01-22 13:59:17.110148+00:00
2024-01-22 14:15:11.589769+00:00
2024-12-22 14:18:38.624196+00:00
2024-01-22 14:26:10.841796+00:00
2024-01-22 16:36:57.632614+00:00
2024-01-22 19:07:11.450086+00:00
2024-12-24 05:14:35.206241+00:00
2024-03-29 10:33:52.058009+00:00
2024-03-29 10:34:03.927215+00:00
2024-04-04 02:44:18.295650+00:00
2024-02-06 21:03:26.650677+00:00
2024-02-07 14:32:00.987613+00:00
2024-02-08 04:54:32.838352+00:00
2024-04-17 20:49:07.252812+00:00
2024-04-21 14:17:38.210087+00:00
2024-04-26 05:10:26.413481+00:00
2025-01-28 11:01:15.567712+00:00
2024-03-03 20:51:27.211525+00:00
2024-03-04 17:54:18.230713+00:00
2024-03-05 05:42:49.367263+00:00
2024-03-06 16:58:31.119124+00:00
2025-02-07 15:13:42.036329+00:00
2024-03-08 15:00:09.944520+00:00
2024-03-09 21:22:32.184259+00:00
2024-05-09 23:49:42.019238+00:00
2024-03-11 15:49:48.272387+00:00
In [5]: MembershipPlanSubscription.objects.filter(end_date__date = '2024-01-22')
Out[5]: <QuerySet []>
In [6]: MembershipPlanSubscription.objects.filter(end_date__date = datetime.date(2024, 1, 22))
Out[6]: <QuerySet []>
In [8]: MembershipPlanSubscription.objects.filter(end_date__year='2024')
Out[8]: <QuerySet [<MembershipPlanSubscription: MembershipPlanSubscription object (5)>, <MembershipPlanSubscription: MembershipPlanSubscription object (6)>, <MembershipPlanSubscription: MembershipPlanSubscription object (7)>, <MembershipPlanSubscription: MembershipPlanSubscription object (8)>, <MembershipPlanSubscription: MembershipPlanSubscription object (10)>, <MembershipPlanSubscription: MembershipPlanSubscription object (11)>, <MembershipPlanSubscription: MembershipPlanSubscription object (12)>, <MembershipPlanSubscription: MembershipPlanSubscription object (13)>, <MembershipPlanSubscription: MembershipPlanSubscription object (14)>, <MembershipPlanSubscription: MembershipPlanSubscription object (15)>, <MembershipPlanSubscription: MembershipPlanSubscription object (16)>, <MembershipPlanSubscription: MembershipPlanSubscription object (17)>, <MembershipPlanSubscription: MembershipPlanSubscription object (18)>, <MembershipPlanSubscription: MembershipPlanSubscription object (19)>, <MembershipPlanSubscription: MembershipPlanSubscription object (20)>, <MembershipPlanSubscription: MembershipPlanSubscription object (21)>, <MembershipPlanSubscription: MembershipPlanSubscription object (23)>, <MembershipPlanSubscription: MembershipPlanSubscription object (24)>, <MembershipPlanSubscription: MembershipPlanSubscription object (25)>, <MembershipPlanSubscription: MembershipPlanSubscription object (26)>, '...(remaining elements truncated)...']>
In [9]: MembershipPlanSubscription.objects.filter(end_date__month='02')
Out[9]: <QuerySet []>
In [10]: MembershipPlanSubscription.objects.filter(end_date__month='2')
Out[10]: <QuerySet []>
In [11]: MembershipPlanSubscription.objects.filter(end_date__month=2)
Out[11]: <QuerySet []>
</code></pre>
|
<python><django><django-models>
|
2024-02-12 10:01:57
| 2
| 10,924
|
Alok
|
77,980,601
| 2,563,981
|
Python threading basics on console application - Correct way to terminate thread/s
|
<p>I'm looking for the best way to handle a python console threading application with the possibility to exit at (say) Ctrl+C</p>
<p>This is my code.</p>
<pre><code>from threading import Thread
import time
import sys
class MyPrint(Thread):
def __init__(self):
super(MyPrint, self).__init__()
self.WhatToPrint = "Iteration"
self.running = True
def run(self):
Counter = 0
a, b = 0, 1
print("\tFibonacci - START")
while self.running:
print("\t{} {} - {}".format(self.WhatToPrint, Counter, a))
Counter += 1
a, b = b, a + b
time.sleep(0.2)
def stop(self):
self.running = False
print("\tFibonacci - END")
def Main() -> int:
print("Main Start")
oMyPrint = MyPrint()
oMyPrint.start()
oMyPrint.join(1)
try:
while True:
time.sleep(0.1)
except KeyboardInterrupt:
oMyPrint.stop()
print("Main End")
return 0
if __name__ == '__main__':
sys.exit(Main())
</code></pre>
<p>It works, but I'm sure it can be improved a lot. I'd like your suggestions to start with the right foot.
Thanks</p>
|
<python><multithreading>
|
2024-02-12 09:59:50
| 0
| 429
|
Kite
|
77,980,560
| 10,117,858
|
Why is Merge Sort Performance LINEAR?
|
<p>I'm working on comparing sorting algorithms in Python. I've implemented the algorithms and a testing function that evaluates each algorithm on various input sizes, up to a very large maximum, in the hundreds of thousands. The execution times are saved for plotting later. As expected, Selection Sort exhibits a quadratic behavior, and its graph reflects that. However, I'm encountering an issue with Merge Sort – its graph appears linear instead of the expected nlogn behavior!!</p>
<p>This is my implementation of Merge Sort:</p>
<pre><code>def merge_sort(A):
merge_sort_aux(A, 0, len(A) - 1)
# Merge Sort Helper Function
# (Recursive)
def merge_sort_aux(A, p, r):
if p < r:
q = (p + r) // 2 # Integer division
merge_sort_aux(A, p, q)
merge_sort_aux(A, q + 1, r)
merge(A, p, q, r)
def merge(A, p, q, r):
# Calculate the sizes of the two sublists
n1 = q - p + 1
n2 = r - q
# Create two NumPy arrays initially initialized to 0
# They are of type float to support sentinels
L = np.zeros(n1 + 1, dtype=float)
R = np.zeros(n2 + 1, dtype=float)
# Copy data into NumPy arrays
L[:n1] = [A[p + i] for i in range(n1)]
R[:n2] = [A[q + 1 + j] for j in range(n2)]
# Sentinels
L[n1] = np.inf
R[n2] = np.inf
i = j = 0
for k in range(p, r + 1):
if L[i] <= R[j]:
A[k] = L[i]
i += 1 # Increment the index of the left sublist to preserve the loop invariant
else:
A[k] = R[j]
j += 1 # Increment the index of the right sublist
</code></pre>
<p>This is my function that I use to run the tests</p>
<pre><code>def test_algo(algo_function, maxRange=TEST_MAX_DIMENSION, minRange=2, step=TEST_STEP):
"""
Test the execution time of the given algorithm function for various input sizes.
Parameters:
- algo_function: The algorithm function to be tested.
- maxRange: The maximum input size to be tested.
- minRange: The minimum input size to start the testing.
- step: The step size between different input sizes.
Returns:
- results: A dictionary to store the execution times for each input size.
"""
results = {} # Dictionary to store execution times for each input size
for i in range(minRange, maxRange, step):
A = random_list(i) # Assuming you have a function 'random_list' generating a list of size i
start = timer()
algo_function(A)
end = timer()
results[i] = end - start
return results
</code></pre>
<p><a href="https://i.sstatic.net/n6hKk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n6hKk.png" alt="Merge Sort perfromance" /></a></p>
<p>Any suggestions or explanations would be greatly appreciated!</p>
<p><strong>EDIT</strong></p>
<p>I have tested up to a list 3 million values
<a href="https://i.sstatic.net/gG2MU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gG2MU.png" alt="enter image description here" /></a></p>
|
<python><algorithm><sorting><mergesort>
|
2024-02-12 09:52:34
| 2
| 892
|
Niccolò Caselli
|
77,980,379
| 2,169,557
|
Is there an equivalent function to the b'' byte literal?
|
<p>I am working with a text file containing lines of data that include both ASCII and escaped byte data and I am trying to find a function in Python to encode the lines of text into bytes data for sending to another PC over UDP.</p>
<p>The string I am trying to encode is captured OSC data and looks similar to this line:</p>
<pre><code>#bundle\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x008/tracking/trackers/head/position\x00\x00\x00\x00,fff\x00\x00\x00\x00\x00\x00\x00\x00\xbd\xb8\x93\xbd\x80\x00\x00\x00\x00\x00\x004/tracking/trackers/6/position\x00\x00\x00,fff\x00\x00\x00\x00\x00\x00\x00\x00\xbe\xdb\xe9\x9e\x80\x00\x00\x00\x00\x00\x004/tracking/trackers/6/rotation\x00\x00\x00,fff\x00\x00\x00\x00\x80\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x004/tracking/trackers/1/position\x00\x00\x00,fff\x00\x00\x00\x00\x00\x00\x00\x00\xbfAO\x80\x80\x00\x00\x00\x00\x00\x004/tracking/trackers/1/rotation\x00\x00\x00,fff\x00\x00\x00\x00\x80\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x004/tracking/trackers/4/position\x00\x00\x00,fff\x00\x00\x00\x00<]\x01P\xbfF[\x9d>\xa2\xaa\xf2\x00\x00\x004/tracking/trackers/4/rotation\x00\x00\x00,fff\x00\x00\x00\x00\xc2\xad\xe3\x91A\xbc\xd2\xec@\xff)\xcb\x00\x00\x004/tracking/trackers/5/position\x00\x00\x00,fff\x00\x00\x00\x00>:=\xcd\xbf\x90vV\x80\x00\x00\x00\x00\x00\x004/tracking/trackers/5/rotation\x00\x00\x00,fff\x00\x00\x00\x00\x80\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x004/tracking/trackers/2/position\x00\x00\x00,fff\x00\x00\x00\x00<]\x01L\xbf\xa2\x93Y>\xa2\xaa\xf2\x00\x00\x004/tracking/trackers/2/rotation\x00\x00\x00,fff\x00\x00\x00\x00\x00\x00\x00\x00A\xf9Op\x00\x00\x00\x00\x00\x00\x004/tracking/trackers/3/position\x00\x00\x00,fff\x00\x00\x00\x00>\xb28^\xbf\x92\xc6L>\xea\x81o\x00\x00\x004/tracking/trackers/3/rotation\x00\x00\x00,fff\x00\x00\x00\x00BJ\xd5.B\xfd\xb4(C#\xcf`'
</code></pre>
<p>But when I attempt to encode the string to bytes Python is including new double backslashes in the bytes object:</p>
<pre><code>b1 = b'tracking/trackers/head/position\x00\x00\x00\x00'
b2 = r"tracking/trackers/head/position\x00\x00\x00\x00".encode()
print(b1)
print(b2)
b'tracking/trackers/head/position\x00\x00\x00\x00'
b'tracking/trackers/head/position\\x00\\x00\\x00\\x00' #added double backslashes
</code></pre>
<p>How do I parse or encode a string to produce the equivalent as a byte literal in Python 3?</p>
|
<python><encoding><byte>
|
2024-02-12 09:16:47
| 1
| 1,877
|
Logic1
|
77,980,099
| 393,010
|
When can mypy infer the type of containers such as collections.Counter?
|
<p>Running this mypy gist (<a href="https://mypy-play.net/?mypy=latest&python=3.12&gist=58a8148f2c95c7a282a6f8a11ccd689a" rel="nofollow noreferrer">https://mypy-play.net/?mypy=latest&python=3.12&gist=58a8148f2c95c7a282a6f8a11ccd689a</a>)</p>
<pre class="lang-py prettyprint-override"><code>from collections import Counter
C = Counter()
</code></pre>
<p>gives this mypy error:</p>
<pre class="lang-none prettyprint-override"><code>main.py:3: error: Need type annotation for "C" [var-annotated]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>Is there any way that mypy could infer this type?</p>
|
<python><mypy><python-typing>
|
2024-02-12 08:16:35
| 1
| 5,626
|
Moberg
|
77,979,992
| 7,848,740
|
Export state_dict checkpoint from .pt model PyTorch
|
<p>I'm new to PyTorch and the whole model/AI programming.</p>
<p>I have a library that needs a checkpoint in the form of a <em>state_dict</em> from a model to run.</p>
<p>I've the <code>.pt</code> model (more preciously the <a href="https://github.com/NVIDIA/radtts" rel="nofollow noreferrer">radtts pre-trained model</a>) and I need to extract the dictionary for the checkpoint.</p>
<p>From what I understand from the <a href="https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-state-dict-recommended" rel="nofollow noreferrer">PyTorch documentation</a> I should be able to load the model and save the <em>state_dict</em> with <code>torch.save(model.state_dict(), PATH)</code></p>
<p>My problem is, first of all, is it correct? How do I load the model on PyTorch to extract the <em>state_dict</em>?</p>
|
<python><pytorch><state-dict>
|
2024-02-12 07:55:47
| 1
| 1,679
|
NicoCaldo
|
77,979,686
| 16,896,291
|
Replacement for prev_execution_date as it is deprecated in airflow 2
|
<p>I am migrating airflow 1 pipelines on airflow 2 and stumbled across the deprecated <code>{{ prev_execution_date }}</code>.</p>
<p>I am not sure what can be used instead.<br />
I found <code>prev_data_interval_start_success</code> and <code>prev_data_interval_end_success</code> but not sure which one to pick from these or are they even correct replacement for <code>prev_execution_date</code>?</p>
<p>There were other template variables like <code>execution_date</code> whose replacement was <code>logical_date</code>.</p>
<p>Doc link : <a href="https://airflow.apache.org/docs/apache-airflow/stable/templates-ref.html" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/templates-ref.html</a></p>
|
<python><airflow><jinja2><airflow-2.x>
|
2024-02-12 06:43:15
| 1
| 427
|
Hemant Sah
|
77,979,412
| 65,889
|
How to make the symlink behaviour of Python's os.walk deterministic across different file ssytem types
|
<p>I have a weird probelm with os.walk in Python 3. To illustrate let's have a directory <code>fs</code> with the following structure:</p>
<pre><code>fs
├── data1.txt
├── data2.txt
├── data3.txt
├── dir
│ ├── file1.txt
│ ├── file2.txt
│ └── file3.txt
└── sym -> dir
</code></pre>
<h5>Test 1: Directory on local volume</h5>
<p>Say, this directory sits under macOS on a local disk (APFS filesystem). Now I run the following Python code:</p>
<pre><code>for dirpath, subdirs, filenames in os.walk('/path/to/fs'):
print(f"{dirpath=}\n{subdirs=}\n{filenames=}\n")
</code></pre>
<p>This returns for me:</p>
<pre><code>dirpath='/path/to/fs'
subdirs=['sym', 'dir']
filenames=['data1.txt', 'data3.txt', 'data2.txt']
dirpath='/path/to/fs/dir'
subdirs=[]
filenames=['file2.txt', 'file3.txt', 'file1.txt']
</code></pre>
<p>Please note, <code>sym</code> is listed <em>as a directory</em> in the list <code>subdirs</code>.</p>
<h5>Test 2: Directory on a mounted Samba share</h5>
<p>Now I copy the directory <code>fs</code> to a Samba share (in my case a mounted volume on a Synogly NAS). I run the same Python code</p>
<pre><code>for dirpath, subdirs, filenames in os.walk('/path/on/SMB/share/to/fs'):
print(f"{dirpath=}\n{subdirs=}\n{filenames=}\n")
</code></pre>
<p>and now I get:</p>
<pre><code>dirpath='/path/on/SMB/share/to/fs'
subdirs=['dir']
filenames=['sym', 'data1.txt', 'data3.txt', 'data2.txt']
dirpath='/path/on/SMB/share/to/fs/dir'
subdirs=[]
filenames=['file2.txt', 'file3.txt', 'file1.txt']
</code></pre>
<p>Please note, <code>sym</code> is listed <em>as a file</em> in the list <code>filenames</code>.</p>
<p>How can I make <code>os.walk</code> behaving deterministically the same on both filesystem types? Do I need to go through all eleemnts in the <code>subdirs</code> and <code>filenames</code> lists, check whether an entry is a symlinks and then myself move it, so that the lists are the same on the two filesystem types?</p>
<p><em>Note:</em> I do not want to set <code>followlinks=True</code> in the <code>os.walk</code> call, because of the danger of infinite recursion.</p>
<h4>Additional Details</h4>
<p>When I list the directories via <code>ls</code> I get the following result:</p>
<pre><code>$ ls -l /path/to/fs
total 24
-rw-r--r--@ 1 user group 8 Feb 7 11:26 data1.txt
-rw-r--r--@ 1 user group 8 Feb 7 11:26 data2.txt
-rw-r--r--@ 1 user group 8 Feb 7 11:26 data3.txt
drwxr-xr-x@ 5 user group 160 Feb 7 11:28 dir
lrwxr-xr-x@ 1 user group 3 Feb 7 11:29 sym -> dir
</code></pre>
<p>and</p>
<pre><code>ls -l /path/on/SMB/share/to/fs
total 64
-rwx------@ 1 user group 8 Feb 7 11:26 data1.txt
-rwx------@ 1 user group 8 Feb 7 11:26 data2.txt
-rwx------@ 1 user group 8 Feb 7 11:26 data3.txt
drwx------@ 1 user group 16384 Feb 7 11:28 dir
lrwx------@ 1 user group 1067 Feb 7 11:31 sym -> dir
</code></pre>
<p>So both filesystems identify <code>sym</code> as a link. Why does Python treat them differently?</p>
|
<python><filesystems><samba><os.walk><apfs>
|
2024-02-12 05:00:37
| 0
| 10,804
|
halloleo
|
77,979,354
| 544,982
|
Trouble passing AWS API Gateway query parameters to a Python AWS Lambda
|
<p>I have an API which I created using the AWS API Gateway and it has 2 URL String parameters so my GET call is in the following format</p>
<pre><code>/endpoint?City=NYC&State=NY
</code></pre>
<p>I am trying to capture both the City and the State URL String parameters in my Python Lambda function and when I print them out and monitor them in CloudWatch, I see None for both of them. I am not sure what I am missing here</p>
<pre><code>import boto3
import json
def lambda_handler(event, context):
# Extract city and state from query parameters
query_parameters = event.get('queryStringParameters', {})
city = query_parameters.get('City')
state = query_parameters.get('State')
# Print city and state values
print('City:', city)
print('State:', state)
if not city or not state:
return {
'statusCode': 400,
'body': 'City and State parameters are required.'
}
</code></pre>
<p>When I execute the following code, the code is going into the error block and I am seeing this as the output</p>
<pre><code>City and State parameters are required.
</code></pre>
<p>Is there anything else that I may have missed or what am I doing wrong here?</p>
|
<python><amazon-web-services><aws-lambda><aws-api-gateway>
|
2024-02-12 04:40:18
| 2
| 1,691
|
p0tta
|
77,979,317
| 4,407,597
|
HTTPSConnectionPool(host='',port=443) Max retries exceeded in apache and upstream prematurely closed connection while reading response header in nginx
|
<p>I am trying to fetch a government URL <a href="https://ahara.kar.nic.in/status2/" rel="nofollow noreferrer">https://ahara.kar.nic.in/status2/</a>, but I am getting errors like <strong>'Max retries exceeded with timed out'</strong> in Apache server and <strong>upstream prematurely closed connection while reading the response header</strong> error in the nginx server I tried all the answers available in web for more than a week but no luck, so I am posting this question, I feel this may help me to find the solution</p>
<p>It may not be a timeout issue or upstream connection issue because the same URL is working fine in GoDaddy's shared server at <a href="https://mycard.easycardmaker.com/status-page" rel="nofollow noreferrer">https://mycard.easycardmaker.com/status-page</a> but the same code is not working in Hostinger's Ubuntu VPS</p>
<p>I tried first with flask/ nginx/ wsgi, got this issue, so I thought It may be nginx config issue then I reinstalled plain OS and tried with Apache server</p>
<p>The issue I am getting in Hostinger's Ubuntu VPS is only for Government websites as I observed not for any others ex: <a href="https://itunes.apple.com/in/genre/ios-business/id6000?mt=8" rel="nofollow noreferrer">https://itunes.apple.com/in/genre/ios-business/id6000?mt=8</a></p>
<p>Thanks, Your help will be most appreciated :)🙏</p>
<p>I followed these below pages for configuring flask app in Ubuntu VPS</p>
<ol>
<li><p>For NGINX: <a href="https://www.codewithharry.com/blogpost/flask-app-deploy-using-gunicorn-nginx/" rel="nofollow noreferrer">https://www.codewithharry.com/blogpost/flask-app-deploy-using-gunicorn-nginx/</a></p>
</li>
<li><p>For Apache: <a href="https://www.digitalocean.com/community/tutorials/how-to-deploy-a-flask-application-on-an-ubuntu-vps" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-deploy-a-flask-application-on-an-ubuntu-vps</a></p>
<pre><code>from flask import Flask
import requests
from time import sleep
app = Flask(__name__)
@app.route("/")
def hello():
try:
headers = {
"Host": "ahara.kar.nic.in",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36"
}
sleep(5)
res = requests.get("https://ahara.kar.nic.in/status2/", verify=False, headers=headers, timeout=30)
print(res.text)
return "Hello, Got response"
except requests.ConnectionError as e:
print(e)
return 'Failed to establish a connection'
if __name__ == "__main__":
app.run()
</code></pre>
</li>
</ol>
<p><strong>easyCardMakerApp.conf</strong></p>
<pre><code><VirtualHost *:80>
ServerName 195.35.7.181
ServerAdmin admin@easycardmaker.online
WSGIScriptAlias / /var/www/easyCardMaker/easyCardMaker.wsgi
<Directory /var/www/easyCardMaker>
Order allow,deny
Allow from all
Header set Access-Control-Allow-Origin "*"
</Directory>
Alias /static /var/www/easyCardMaker/static
<Directory /var/www/easyCardMaker/static>
Order allow,deny
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
</code></pre>
<p><strong>easyCardMaker.wsgi</strong></p>
<pre><code>import sys
import logging
from app import app as application
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0,"/var/www/easyCardMaker/")
application.secret_key = '28fc4673473543eda2e6a9f503f4bb71'
</code></pre>
|
<python><apache><nginx><flask><python-requests>
|
2024-02-12 04:20:48
| 0
| 2,729
|
Gopala Raja Naika
|
77,978,953
| 11,618,586
|
Apply find_peaks() function using groupby() in a pandas dataframe
|
<p>I have a timeseries data that is grouped by in <code>ID</code> column.</p>
<p>I tried using the following code:</p>
<pre><code>def find_peaks_in_group(group):
peaks, peak_properties = find_peaks(group['RFPower'], prominence=1, height=0.7)
group['peaks'] = False
group.loc[peaks, 'peaks'] = True
group['peak_heights'] = 0.0
group.loc[peaks, 'peak_heights'] = peak_properties['peak_heights']
return group
result_df=df.groupby('ID', group_keys=False).apply(find_peaks_in_group)
</code></pre>
<p>However Im getting a KeyError:</p>
<blockquote>
<p>None of [Int64Index([76116, 76134, 76146, 76150, 76155, 76161, 76165,
76171, 76177, 76180, 76182, 76185, 76187, 76192, 76196, 76201, 76220,
76224, 76230, 76235, 76240, 76242, 76247, 76252, 76257, 76276, 76282,
76286, 76292, 76295, 76297, 76302, 76307, 76337, 76348, 76350, 76362,
76364, 77324, 93723, 94851, 94855], dtype='int64')] are in the
[index]"</p>
</blockquote>
<p>I even tried removing the <code>group_keys=False</code> and resetting the index to no avail.
Any suggestions?</p>
|
<python><python-3.x><pandas><scipy>
|
2024-02-12 00:54:03
| 1
| 1,264
|
thentangler
|
77,978,885
| 9,983,652
|
how to install readline and ncurses using conda?
|
<p>I am using conda in win 10, trying to install below 2 packages and it never succeed. The error is</p>
<p>PackagesNotFoundError: The following packages are not available from current channels:</p>
<ul>
<li>conda-forge::ncurses</li>
</ul>
<p>I did try different channels and it never work either.</p>
<p>Thanks.</p>
<pre><code>conda install anaconda::readline
conda install conda-forge::ncurses
</code></pre>
<pre><code>Collecting package metadata (current_repodata.json): done
Solving environment: unsuccessful initial attempt using frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: unsuccessful initial attempt using frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- conda-forge::ncurses
Current channels:
- https://repo.anaconda.com/pkgs/main/win-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/win-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/msys2/win-64
- https://repo.anaconda.com/pkgs/msys2/noarch
</code></pre>
|
<python><anaconda>
|
2024-02-12 00:21:04
| 1
| 4,338
|
roudan
|
77,978,720
| 2,933,113
|
SIGSEGV! error when trying to interact with Google Cloud Storage bucket w python
|
<p>I am writing a Google cloud function. Part of it's responsibility is to write files to a Cloud storage bucket. I believe I have bucket permissions set up and my service account, etc, but any time I try to interact with the bucket (upload files, or even bucket.exists()) I get the following error:</p>
<blockquote>
<p>[ERROR] Worker (pid:12345) was sent SIGSEGV!</p>
</blockquote>
<p>This seems like a memory error, however the file contents is small (37k), and like I said it even happens when I just call bucket.exists(). The bucket does exists, and the filepath is correct, and the file is good, as I can write it to disk and access it just fine. My hunch is it's still some issue with GCP auth/roles, etc. but I can't find the issue and the error doesn't help much. Any thoughts?</p>
<pre><code>def upload_blob_from_memory(bucket_name, contents, destination_blob_name):
credentials = service_account.Credentials.from_service_account_file('/my-service-account-creds.json')
storage_client = storage.Client(credentials=credentials,project='my-bucket')
bucket = storage_client.bucket(bucket_name)
# fails here at bucket.exists(), but same issue any time I want to interact with a bucket
print(f"bucket.exists() {bucket.exists()}")
blob = bucket.blob(destination_blob_name)
blob.upload_from_string(contents)
print(
f"{destination_blob_name} with contents {contents} uploaded to {bucket_name}."
)
</code></pre>
<p>NOTE: this function is not deployed, I'm testing locally with:</p>
<pre><code>functions-framework-python --target my-function
</code></pre>
|
<python><google-cloud-platform><google-cloud-functions><google-cloud-storage>
|
2024-02-11 22:52:15
| 1
| 1,456
|
Claytronicon
|
77,978,718
| 14,924,809
|
`Camelot` gives error for not having the correct arm64 architecture of Ghostscript
|
<p>I'm trying to use the python package <code>camelot</code> but get the following error, which I understand relates to my M1 mac I'm using:</p>
<pre><code>>>> import camelot
>>> import pandas
>>> data = camelot.read_pdf('path/to/pdf.pdf')
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/camelot/ext/ghostscript/_gsprint.py", line 260, in <module>
libgs = cdll.LoadLibrary("libgs.so")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ctypes/__init__.py", line 460, in LoadLibrary
return self._dlltype(name)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ctypes/__init__.py", line 379, in __init__
self._handle = _dlopen(self._name, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: dlopen(libgs.so, 0x0006): tried: 'libgs.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OSlibgs.so' (no such file), '/usr/lib/libgs.so' (no such file, not in dyld cache), 'libgs.so' (no such file), '/usr/lib/libgs.so' (no such file, not in dyld cache)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/camelot/io.py", line 113, in read_pdf
tables = p.parse(
^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/camelot/handlers.py", line 173, in parse
t = parser.extract_tables(
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/camelot/parsers/lattice.py", line 402, in extract_tables
self._generate_image()
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/camelot/parsers/lattice.py", line 211, in _generate_image
from ..ext.ghostscript import Ghostscript
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/camelot/ext/ghostscript/__init__.py", line 24, in <module>
from . import _gsprint as gs
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/camelot/ext/ghostscript/_gsprint.py", line 268, in <module>
libgs = cdll.LoadLibrary(libgs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ctypes/__init__.py", line 460, in LoadLibrary
return self._dlltype(name)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ctypes/__init__.py", line 379, in __init__
self._handle = _dlopen(self._name, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: dlopen(/Users/adrumm/lib/libgs.dylib, 0x0006): tried: '/Users/adrumm/lib/libgs.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/adrumm/lib/libgs.dylib' (no such file), '/Users/adrumm/lib/libgs.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/usr/local/Cellar/ghostscript/10.02.1/lib/libgs.10.02.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/usr/local/Cellar/ghostscript/10.02.1/lib/libgs.10.02.dylib' (no such file), '/usr/local/Cellar/ghostscript/10.02.1/lib/libgs.10.02.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
</code></pre>
<p>I think this is same issue as <a href="https://stackoverflow.com/questions/65819425/python-camelot-ghostscript-wrong-architecture-error">this prior question</a>, but the sole response to that question does not help as I already have the arm64 Python installed.</p>
<p>Steps taken:</p>
<ul>
<li><code>brew install ghostscript</code></li>
<li>Checking that Ghostscript is installed as per the <a href="https://camelot-py.readthedocs.io/en/master/user/install-deps.html" rel="nofollow noreferrer">package documentation</a> confirms that it is installed</li>
<li>Same issue occurs whether in a virtual environment not</li>
</ul>
<p>Is this even possible to fix?</p>
|
<python><arm64><python-camelot>
|
2024-02-11 22:51:16
| 1
| 510
|
Abigail
|
77,978,682
| 1,806,998
|
AWS Lambda - Database writes not reflected by subsequent read
|
<p>I have two lambda functions, one that inserts data into a database (create_user), and another (get_users) that reads it from a MySQL database hosted on AWS RDS.</p>
<p>The database code for the create_user lambda looks like this (note the commit):</p>
<pre class="lang-py prettyprint-override"><code>user = ... # get user from request object
with self.conn.cursor() as cursor:
cursor.execute(sql.INSERT_USER, (
user.first_name,
user.last_name,
user.birth_date,
user.sex,
user.firebase_id
))
self.conn.commit()
</code></pre>
<p>And the database code for the get_users lambda:</p>
<pre class="lang-py prettyprint-override"><code>with self.conn.cursor() as cursor:
cursor.execute(sql.GET_ALL_USERS)
user_rows = cursor.fetchall()
</code></pre>
<p>In both lambdas I set up my database connection outside the handler function:</p>
<pre class="lang-py prettyprint-override"><code>conn_params = db_utils.db_connection_parameters()
conn = pymysql.connect(host=conn_params['host'],
user=conn_params['username'],
password=conn_params['password'],
database=conn_params['name'],
cursorclass=pymysql.cursors.DictCursor)
def lambda_handler(event, context):
...
</code></pre>
<p>The issue is that the get_users lambda response does not contain users that were recently inserted with the create_user lambda or via MySQLWorkbench (again, remembering to call <code>commit</code>). If I connect to the database with pymysql or MySQLWorkbench, I can see that the new row was inserted, but the get_users lambda does not reflect this change for another 10 minutes or so. I'd like the results to be reflected nearly immediately.</p>
|
<python><mysql><aws-lambda><amazon-rds><pymysql>
|
2024-02-11 22:36:48
| 1
| 2,243
|
jerney
|
77,978,501
| 11,586,490
|
Why isn't my instance_table.row_data object updating in Kivy MDDataTable?
|
<p>I'm creating a simple scorecard app but am having trouble accessing my cell values as they're updated.</p>
<p>When I create the table, using the <code>create_table</code> method, I print the value of the columns through <code>column_sums</code> and all looks correct (a list of tuples with '0' in them). However, when I open the popup and update the table with a new number it updates on the screen but when I then call the <code>update_cell</code> function and try to access the table data, through <code>instance_table.row_data</code> it still prints a list of tuples with '0' in them, the values I see on screen aren't coming through in that function.</p>
<p>I've tried re-summing the values through <code>column_sums</code> as I did in the <code>create_table</code> method but I still get a list of tuples with '0' in them. Am I doing something wrong by updating the <code>text</code> property of the <code>CellRow</code>? I was unable to find a property called <code>value</code> or something.</p>
<pre><code>class Example(MDApp):
dialog = None
text_field = None
def build(self):
kv = Builder.load_string(KV)
self.theme_cls.theme_style = "Light"
self.sm = WindowManager()
screens = [LandingWindow(name="landing"), NewGameWindow(name="new_game"), ScorecardWindow(name="scorecard"),]
for screen in screens:
self.sm.add_widget(screen)
self.sm.current = "landing" #set to "landing"
return self.sm
def button_pressed(self, *args):
for a in args:
try:
self.text_field.text += a.text
except:
pass
def update_cell(self, instance_row, instance_table, text_field, *args):
instance_row.text = text_field.text
print(instance_table.row_data)
def close(self):
Example.get_running_app().sm.current = "landing"
#TODO delete the data from the page
def open_popup(self, instance_table, instance_row):
box = BoxLayout(orientation="vertical")
self.text_field = TextInput(size_hint_y=0.25)
box.add_widget(self.text_field)
grid = GridLayout(cols=3)
numeric_keypad = ['7', '8', '9', '4', '5', '6', '1', '2', '3', '.', '0', 'OK']
for x in numeric_keypad:
if str(x) == 'OK':
ok_button = Button(text=str(x))
grid.add_widget(ok_button)
else:
number_button = Button(text=str(x))
number_button.bind(on_release=self.button_pressed)
grid.add_widget(number_button)
box.add_widget(grid)
popup = Popup(title="Enter Your Score", content=box, size_hint=(0.5,0.5))
ok_button.bind(on_release=lambda *args: self.update_cell(instance_row, instance_table, self.text_field, *args), on_press=popup.dismiss)
popup.open()
def create_table(self):
game_name = self.root.get_screen("new_game").ids['game_name'].text
rounds = self.root.get_screen("new_game").ids['rounds'].value
players = self.root.get_screen("new_game").ids['players'].value
players = int(players)
rounds = int(rounds)
round_list = [("0",) * players for _ in range(rounds)]
table = MDDataTable(
rows_num=len(round_list),
column_data=[(f"Player {x + 1}", dp(20)) for x in range(players)],
row_data=round_list,
on_row_press = self.open_popup
)
column_sums = [sum(int(row_data[column_index]) for row_data in table.row_data) for column_index in
range(players)]
table.bind(on_row_press=self.open_popup)
self.root.get_screen("scorecard").ids['add_table_to_scrollview'].add_widget(table)
self.sm.current = "scorecard"
print(column_sums)
Example().run()
</code></pre>
<p>UPDATE:</p>
<p>Through <strong>many</strong> rounds of ChatGPT I was able to get to a point where the table updates (on screen and in the list of tuples that's printed in <code>column_sums</code>). However, the code updates the cell index in order, starting at [0], the first cell. For example, if I select the 14th cell, it will update the first cell. Then if I update the 5th cell, it will update the second cell, and so on. Many more rounds of ChatGPT couldn't solve this issue!</p>
<p>Here's the updated code of the <code>update_cell</code> function:</p>
<pre><code> def update_cell(self, instance_row, instance_table, text_field, *args):
#instance_row.text = text_field.text # This updates the correct cell and appears on the screen but isn't actually updating the values in the list of tuples.
print("row clicked is: ", instance_row)
# Split the instance_row.text into individual elements
row_text_elements = instance_row.text.split(',')
# Find the row_index corresponding to instance_row.text
row_index = -1
for index, row_data in enumerate(instance_table.row_data):
if all(element in row_data for element in row_text_elements):
row_index = index
break
if row_index != -1:
# Find the cell_index corresponding to instance_row.text
cell_index = -1
row_data_list = list(instance_table.row_data[row_index])
for index, element in enumerate(row_data_list):
if element == instance_row.text:
cell_index = index
break
if cell_index != -1:
# Update the cell value in the row
row_data_list[cell_index] = text_field.text
instance_table.row_data[row_index] = tuple(row_data_list)
print("row index: ", instance_table.row_data[row_index])
</code></pre>
|
<python><kivy><kivymd>
|
2024-02-11 21:30:24
| 1
| 351
|
Callum
|
77,978,368
| 893,254
|
How does one alter a topic configuration using incremental_alter_configs?
|
<p>I am trying to create a Python script to automate topic creation and configuration.</p>
<p>I have managed to make the topic creation part work. I get a runtime error when trying to alter the configuration.</p>
<pre><code>ValueError: expected non-empty list of ConfigEntry to alter incrementally in incremental_configs field
</code></pre>
<p>Here's my code:</p>
<pre><code>config = {
'min.insync.replicas': '3'
}
resource = ConfigResource(ResourceType.TOPIC, name=topic_name, set_config=config, described_configs=None)
futures = admin_client.incremental_alter_configs(resources=[resource])
for config_resource, future in futures.items():
try:
future.result()
print(f'Updated topic config for topic {config_resource}')
except Exception as exception:
print(f'Failed to update topic config for topic {config_resource}, {exception}')
</code></pre>
<p>I found the <a href="https://docs.confluent.io/platform/current/clients/confluent-kafka-python/html/index.html#adminclient" rel="nofollow noreferrer">documentation</a> quite hard to follow.</p>
<p>I based this code on this <a href="https://github.com/confluentinc/confluent-kafka-python" rel="nofollow noreferrer">example</a>. It seemed from reading the docs altering the config could be done in a similar way to topic creation. I am not totally sure what is wrong.</p>
|
<python><apache-kafka><confluent-kafka-python>
|
2024-02-11 20:44:31
| 1
| 18,579
|
user2138149
|
77,978,226
| 2,386,113
|
What is a Contour plot (Matplotlib in Python) representing?
|
<p>I am confused about what my contour plot is representing. I have put the hardcoded value I obtained in my program for a <em>MWE</em> below. I have only 9 values (also plotted as scatter) but I don't understand how am I obtaining the diamond-shaped curves.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
X = np.array([[-1.62, -1.08, -0.54],
[-1.62, -1.08, -0.54],
[-1.62, -1.08, -0.54]])
Y = np.array([[2.16, 2.16, 2.16],
[2.7 , 2.7 , 2.7 ],
[3.24, 3.24, 3.24]])
Z= np.array([[1846.76519619, 1912.83024484, 1907.85033005, 1884.44377117,
1936.11219261, 1917.34261408, 1864.70501227, 1909.25344747,
1886.32844313]]).reshape(3,3)
plt.figure()
plt.title('Sample Contour Plot')
plt.gca().contour(X , Y , Z)
plt.scatter(X,Y,c=Z,s=20)
print('done')
</code></pre>
<p><a href="https://i.sstatic.net/rej6F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rej6F.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2024-02-11 20:00:09
| 1
| 5,777
|
skm
|
77,978,204
| 16,707,518
|
Speeding up a rolling sum calculation?
|
<p>I'm doing some work with a fairly large amount of (horse racing!) data for a project, calculating rolling sums of values for various different combinations of data - thus I need to streamline it as much as possible.</p>
<p>Essentially I am:</p>
<ul>
<li>calculating the rolling calculation of a points field over time</li>
<li>calculating this for various grouped combinations of data [in this case the combination of horse and trainer]</li>
<li>looking at the average of the value by group for the last 180 days of data through time</li>
</ul>
<p>The rolling window calculation below works fine - but takes 8.2s [this is about 1/8 of the total dataset - hence would take 1m 5s]. I am looking for ideas of how to streamline this calculation as I'm looking to do it for a number of different combinations of data, and thus speed is of the essence. Thanks.</p>
<pre><code>import pandas as pd
import time
url = 'https://raw.githubusercontent.com/richsdixon/testdata/main/testdata.csv'
df = pd.read_csv(url, parse_dates=True)
df['RaceDate'] = pd.to_datetime(df['RaceDate'], format='mixed')
df.sort_values(by='RaceDate', inplace=True)
df['HorseRaceCount90d'] = (df.groupby(['Horse','Trainer'], group_keys=False)
.apply(lambda x: x.rolling(window='180D', on='RaceDate', min_periods=1)['Points'].mean()))
</code></pre>
|
<python><pandas><rolling-computation>
|
2024-02-11 19:53:05
| 1
| 341
|
Richard Dixon
|
77,978,197
| 20,122,390
|
Should using cache between Python coroutines be blocking?
|
<p>I needed to use cache in a web application built in Python. Since I couldn’t directly use lru_cache in a coroutine, I built a simple decorator that would allow me to use it:</p>
<pre><code>from asyncio import Lock, sleep
async def acquire_lock_async(lock: Lock) -> None:
await lock.acquire()
return
class AsyncCacheable:
def __init__(self, coro_func: Awaitable) -> Any:
self.coro_func = coro_func
self.done = False
self.result = None
self.lock = Lock()
def __await__(self):
"""
A class that wraps a coroutine function and caches its result.
Safe to be used in an asynchronous context.
"""
while True:
if self.done:
return self.result
if not self.lock.locked():
try:
yield from acquire_lock_async(self.lock).__await__()
self.result = yield from self.coro_func().__await__()
self.done = True
finally:
self.lock.release()
return self.result
else:
yield from sleep(0.05)
def async_cacheable(coro_func: Awaitable) -> Awaitable:
def wrapper(*args, **kwargs):
return AsyncCacheable(lambda: coro_func(*args, **kwargs))
return wrapper
@lru_cache(maxsize=8)
@async_cacheable
async def get_company_id(self, simulation_id: int):
simulation_in_db = await self.get_by_id(_id=simulation_id)
if not simulation_in_db:
raise ValueError("Simulation not found")
company_id = simulation_in_db["company_id"]
return company_id
</code></pre>
<p>I tested it and it works fine. But now I have doubts about whether I have been on the right path. Does what I’ve done with the Lock make sense to make it safe between coroutines? It should be like that?</p>
<p>Thanks!</p>
<p>EDIT:</p>
<p>In these cases, it may be more convenient/simpler to use lru_cache and store Future instances. It would be something like this:</p>
<pre><code>async def get_company_id(self, simulation_id: int):
simulation_task = self.task_get_company_id(simulation_id)
simulation_in_db = await simulation_task
if not simulation_in_db:
raise ValueError("Simulation not found")
company_id = simulation_in_db["company_id"]
return company_id
@lru_cache(maxsize=8)
def task_get_company_id(self, simulation_id: int):
return create_task(self.get_by_id(_id=simulation_id))
</code></pre>
|
<python><async-await><python-asyncio>
|
2024-02-11 19:50:19
| 0
| 988
|
Diego L
|
77,978,139
| 7,981,566
|
Removing overlapping events from data table with intervals
|
<p>Given a large (> 100MB) data frame of events with location and timestamps, how can I remove events synchronously occurring in all locations (i.e. putative noise) in <em>R</em>, <em>MATLAB</em> or <em>Python</em> (with reasonable performance)?</p>
<p>A minimal specification of the problem in <em>R</em> would be:</p>
<pre><code>pixel <- c(1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3)
start <- c(1, 3, 6, 8, 1, 3, 5, 7, 8, 1, 4, 7)
end <- c(2, 4, 7, 9, 2, 4, 6, 8, 9, 3, 5, 9)
events <- data.frame(cbind(pixel, start, end))
# there was an event between 1 and 2s detected everywhere;
# this event would therefore be removed in the desired output:
#
# pixel start end
# 1 3 4
# 1 6 7
# 1 8 9
# 2 3 4
# 2 5 6
# 2 7 8
# 2 8 9
# 3 4 5
# 3 7 9
</code></pre>
<p>I had tried to solve the problem with loops, but the solution is slow. (Experts sometimes recommend to "vectorize" calculations, but I found no way to get rid of the loops.)</p>
<p>Additionally, I had found a related post for the problem in <em>Python</em> on <a href="https://stackoverflow.com/questions/72242090/pandas-data-frame-remove-overlapping-intervals">Pandas Data Frame - Remove Overlapping Intervals</a>.</p>
<p>It seems to me that this type of problem should be a common one and is probably already solved by a package, but I couldn't find it.</p>
|
<python><r><matlab><vectorization><intervals>
|
2024-02-11 19:31:21
| 1
| 1,059
|
NicolasBourbaki
|
77,978,057
| 4,505,998
|
Turn level of MultiIndex columns into a column with values (unstack columns)
|
<p>I have a DraFrame with a MultiIndex in the columns, and I want to turn the first level into its own column.</p>
<p>Original df</p>
<pre class="lang-py prettyprint-override"><code>pd.DataFrame(
{
('a', 'm1'): [0,1],
('a', 'm2'): [10, 11],
('b', 'm1'): [2,3],
('b', 'm2'): [12, 13],
('c', 'm1'): [3,4],
('c', 'm2'): [13,14],
},
)
# a b c
# m1 m2 m1 m2 m1 m2
# 0 0 10 2 12 3 13
# 1 1 11 3 13 4 14
</code></pre>
<p>Desired dataframe:</p>
<pre><code> m1 m2 level0
0 10 a
1 11 a
2 12 b
3 13 b
3 13 c
4 14 c
</code></pre>
<p>I tried using <a href="https://pandas.pydata.org/docs/reference/api/pandas.melt.html" rel="nofollow noreferrer">pd.melt</a>, but I couldn't get the desired effect.</p>
|
<python><pandas><dataframe>
|
2024-02-11 19:06:26
| 1
| 813
|
David Davó
|
77,977,838
| 5,905,678
|
Poetry project lock file and Docker container installation
|
<p>I have a Python project with Poetry which i would like to upload to CodeArtifact (like any other Python Repository) and then install it into a Docker container. My Python project has dependencies in the <code>pyproject.toml</code> like</p>
<pre><code>[tool.poetry.dependencies]
pandas = "^2.1.4"
pytest-mock = "^3.12.0"
</code></pre>
<p>the <code>^</code> means that even higher versions are accepted. But as soon as I have my project working I also have a <code>poetry.lock</code> file. This prevents higher versions from being installed with <code>poetry install</code>.
Now when everything is working, I run <code>poetry build</code> and upload the lib to CodeArtifact.</p>
<p>If I want to install this library into a Docker container, I could run <code>pip install</code> and specify the version of my lib.</p>
<p>My questions are now:</p>
<ul>
<li>Is this installation going to respect the <code>poetry.lock</code> file? Or is it going to install the latest dependencies for example <code>pandas 2.2.0</code></li>
<li>How can I prevent newer versions? Which is to say: Do I have to remove it from the pyproject.toml file? Or do have to copy the lock file to the container and run <code>poetry install</code>?</li>
</ul>
|
<python><docker><pypi><python-poetry><aws-codeartifact>
|
2024-02-11 18:06:46
| 1
| 1,518
|
Khan
|
77,977,794
| 832,560
|
Discord modal callback never called. The modal window gives a "Something went wrong" error, but no errors in console
|
<p>Spawning the model from a command:</p>
<pre class="lang-py prettyprint-override"><code> @discord.app_commands.command(name="host_game_test", description="Host a new game")
async def host_game_test(self, interaction: discord.Interaction):
modal = HostGameModal(title="Host New Game", era=GameAge.EA)
await interaction.response.send_modal(modal)
</code></pre>
<p>This works fine, creating the modal window. However, after filling it out and clicking submit, after a delay a red error message appears - "Something went wrong. Please try again." No errors are reported in the console at all.</p>
<pre class="lang-py prettyprint-override"><code>from discord import Interaction, TextStyle
from discord.ui import TextInput, Modal
from utils import printlog
class HostGameModal(Modal):
def __init__(self, era: str, *args, **kwargs):
super().__init__(*args, **kwargs)
self.era = era # Store the era
self.add_item(TextInput(label="Game Name", custom_id="game_name", placeholder="Enter your game's name"))
self.add_item(TextInput(label="Max Players", custom_id="max_players", placeholder="Enter max number of players",
style=TextStyle.short))
self.add_item(TextInput(label="Password", custom_id="password", placeholder="Enter a game password",
style=TextStyle.short))
async def callback(self, interaction: Interaction):
printlog("HostGameModal callback called.")
try:
await interaction.response.defer(ephemeral=True)
game_name = self.children[0].value
max_players = int(self.children[1].value)
password = self.children[2].value
# Use self.era here for the game's era
await interaction.followup.send(f"Hosting a '{self.era}' game named '{game_name}' with max {max_players} "
f"players and password '{password}'.")
except Exception as e:
printlog(f"Error in HostGameModal callback: {e}")
await interaction.followup.send("An error occurred. Please try again later.", ephemeral=True)
</code></pre>
<p>Here's the Modal class. The printlog at the start of the callback function is never called, so it seems for some reason the callback is just never being called.</p>
<p>Have been stumped on this for a while, and there doesn't seem to be a huge amount of documentation about the modal stuff that I can find. Can anyone see what the problem might be?</p>
|
<python><discord><discord.py>
|
2024-02-11 17:55:10
| 1
| 830
|
Cerzi
|
77,977,777
| 893,254
|
How to partition a dictionary in Python?
|
<p>Many languages provide a standard library which provides a dictionary or binary tree map type. Many of these data structure implementations provide a function to split the data structure into a pair of data structures based on some condition.</p>
<p>This is known as a partition operation. Rust calls it "split off". C++ <code>map</code> types have "lower bound" and "upper bound" operation for comparison to keys. The standard library also provides a "partition" operation which exists in <code>std::algorithm</code>.</p>
<p>I can't see an ideomatic way of doing this in Python. I have managed to come up with a manual way of doing it, but I don't really like this code, because it doesn't split a <code>dict</code> into two parts, but splits part of it and then splits another part of it, resulting in fragile (and duplicated) code.</p>
<pre><code>map = {
...
}
map_part_1 = {
key: value for key, value in map.items() if <condition>
}
map_part_2 = {
key: value for key, value in map.items() if not <condition>
}
</code></pre>
<p>Not great, right? Is there a better way?</p>
|
<python><dictionary>
|
2024-02-11 17:49:56
| 3
| 18,579
|
user2138149
|
77,977,601
| 595,305
|
How can I test what my `__main__` file does?
|
<p>I'm using pytest. NB W10.<br>
I will be running my app by going <code>> python src/core</code>. "core" is my central module, with a <code>__main__.py</code> file.</p>
<p>For that reason I'm putting my "tests" directory in [project root]/src/core.</p>
<p>I have a test file test_xxx.py. In it I try to import <code>__main__</code>:</p>
<pre><code>import __main__
...
def test_xxxxx():
print(f'main {__main__} type {type(__main__)}')
</code></pre>
<p>This prints the following:</p>
<pre><code>main <module '__main__' from 'D:\\apps\\Python\\virtual_envs\\doc_indexer v2.0.0\\Scripts\\pytest.exe\\__main__.py'> type <class 'module'>
</code></pre>
<p>... in other words, it is importing not my local <code>__main__.py</code> from my local "core" module, but pytest's own <code>__main__</code> from its module. So far so understandable. Then I look at sys.path:</p>
<pre><code>path: D:\My Documents\software projects\EclipseWorkspace\doc_indexer\src\core\tests
path: D:\My Documents\software projects\EclipseWorkspace\doc_indexer\src\core
path: D:\apps\Python\virtual_envs\doc_indexer v2.0.0\Scripts\pytest.exe
path: D:\apps\Python\PyQt5
...
</code></pre>
<p>Now I'm scratching my head, not for the first time with pytest. This ...src\core path is listed <em><strong>before</strong></em> ...Scripts\pytest.exe. Why would pytest identify its own <code>__main__.py</code> and import it, before importing <code>__main__.py</code> from a path which precedes it?</p>
<p>I've also tried various experiments with <code>importlib</code>. Nothing seems to work.</p>
<p>So generally, is there any way to actually import that file ...src/core/<code>__main__py</code>?</p>
<p>I realise that this file should do minimal things before handing on to another file, but for completeness I'd just like to test that <code>main()</code> does in fact do that. Is there any way to run <code>__main__.main()</code> in a test?</p>
|
<python><testing><module><pytest>
|
2024-02-11 16:58:21
| 1
| 16,076
|
mike rodent
|
77,977,421
| 1,311,449
|
Python: configparser.InterpolationSyntaxError: '$' must be followed by '$' or '{', found: '$!'
|
<p>I have <code>psw: Test123$!</code> string in my configs.ini and tryin to read it with the following code will result in <code>configparser.InterpolationSyntaxError: '$' must be followed by '$' or '{', found: '$!'</code> error:</p>
<pre><code>configs = configparser.RawConfigParser()
configs._interpolation = configparser.ExtendedInterpolation()
configs.read(configs_path)
print(configs.get('app', 'psw', fallback=''))
</code></pre>
<p>I've seen other similar posts but all of them was referring to <code>%</code> as special character, not <code>$</code>! I tried escaping it inserting a double symbol (<code>psw: Test123$$!</code>) as suggested for the <code>%</code> symbol, but it didn't work. I can't find any reference to dollar symbol in the docs unless if it is followed by an open graph symbol (<code>${</code>)...so how to obtain a string containing a dollar symbol from an <code>.ini</code> file with <code>configparser</code>?</p>
|
<python><special-characters><configparser>
|
2024-02-11 16:04:17
| 0
| 656
|
Mark
|
77,977,383
| 9,251,158
|
Recursive function fails depending on lexical scoping
|
<p>I want to generate variations on a name by edit distance equal or smaller than a number. I thought the simplest solution was a recursion. If I add the results of the current step after the recursion, the following code works fine; if I add them before, the recursion fails to terminate:</p>
<pre class="lang-py prettyprint-override"><code>def generate_distance1(name):
res = []
# Deletion.
for i in range(len(name)):
if 0 == i:
res.append(name[1:])
elif len(name) - 1 == i:
res.append(name[:-1])
else:
res.append(name[:i] + name[i+1:])
# Substitution.
for i in range(len(name)):
if 0 == i:
res.append("?" + name[1:])
elif len(name) - 1 == i:
res.append(name[:-1] + "?")
else:
res.append(name[:i] + "?" + name[i+1:])
# Addition
for i in range(len(name) + 1):
if 0 == i:
res.append("?" + name)
elif len(name) == i:
res.append(name + "?")
else:
res.append(name[:i] + "?" + name[i:])
res = list(set(res))
return res
def generate_distance(name, max_distance, before):
if 0 == max_distance:
return [name]
dist1 = generate_distance1(name)
if 1 == max_distance:
return dist1
if before:
# This is not OK.
res = dist1
else:
# This is OK.
res = []
for n in dist1:
res.extend(generate_distance(n, max_distance - 1, before))
if not before:
res.extend(dist1)
return list(set(res))
print(generate_distance("abracadabra", 2, before=False))
print(generate_distance("abracadabra", 2, before=True))
</code></pre>
<p>I'm using Python 3.11.6. I think this issue has to do with lexical scoping, and possibly closures, but I cannot understand why; neither can I really formulate a better title for the question. Why does this recursive function fail to terminate?</p>
|
<python><recursion><lexical-scope>
|
2024-02-11 15:51:50
| 1
| 4,642
|
ginjaemocoes
|
77,977,370
| 10,798,503
|
Python Handling H264 Frames for Live Stream from Eufy Server
|
<p>I am currently using the <a href="https://github.com/bropat/eufy-security-ws" rel="nofollow noreferrer">Eufy Security WebSocket Server</a>, a server wrapper constructed around the eufy-security-client library, enabling access via a WebSocket interface. I have developed a python version of the client, where I attempt to display the live stream using the <code>device.start_livestream</code> command, as <a href="https://bropat.github.io/eufy-security-ws/#/api_cmds" rel="nofollow noreferrer">outlined here</a>.</p>
<p>In short, the web server continuously returns a buffer of video frames in H264 format. while, I read this in my Python script and attempt to render it on a GUI. However, I encounter an issue where a significant number of frames are missed. This is potentially due to the compression of consecutive frames, a process known as inter-frame compression.</p>
<p>The solution probably lies in implementing a buffering or packet reassembly logic to ensure complete frames are processed. Despite many attempts and exploration of various ways, I couldn't figure it out.</p>
<p>Here's my python full code:</p>
<pre><code>import websocket
import json
import av
import cv2
buffer = bytearray()
def is_h264_complete(buffer):
# Convert the buffer to bytes
buffer_bytes = bytes(buffer)
# Look for the start code in the buffer
start_code = bytes([0, 0, 0, 1])
positions = [i for i in range(len(buffer_bytes)) if buffer_bytes.startswith(start_code, i)]
# Check for the presence of SPS and PPS
has_sps = any(buffer_bytes[i+4] & 0x1F == 7 for i in positions)
has_pps = any(buffer_bytes[i+4] & 0x1F == 8 for i in positions)
return has_sps and has_pps
def on_message(ws, message):
data = json.loads(message)
message_type = data["type"]
if message_type == "event" and data["event"]["event"] == "livestream video data":
image_buffer = data["event"]["buffer"]["data"]
if not is_h264_complete(image_buffer):
print(f"Error! incomplete h264: {len(image_buffer)}")
return
buffer_bytes = bytes(image_buffer)
packet = av.Packet(buffer_bytes)
codec = av.CodecContext.create('h264', 'r')
frames = codec.decode(packet)
# Display the image
for frame in frames:
image = frame.to_ndarray(format='bgr24')
# Put the length of the buffer on the image
cv2.putText(image, f"Buffer Length: {len(image_buffer)}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
cv2.imshow('Image', image)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
def on_error(ws, error):
print(f"Error: {error}")
def on_close(ws):
print("Connection closed")
def on_open(ws):
print("Connection opened")
# Send a message to the server
ws.send(json.dumps({"messageId" : "start_listening", "command": "start_listening"})) # replace with your command and parameters
ws.send(json.dumps({"command": "set_api_schema", "schemaVersion" : 20}))
ws.send(json.dumps({"messageId" : "start_livestream", "command": "device.start_livestream", "serialNumber": "T8410P4223334EBE"})) # replace with your command and parameters
if __name__ == "__main__":
websocket.enableTrace(False)
ws = websocket.WebSocketApp("ws://localhost:3000", # replace with your server URI
on_message=on_message,
on_error=on_error,
on_close=on_close)
ws.on_open = on_open
ws.run_forever()
</code></pre>
|
<python><opencv><video-streaming><video-processing><h.264>
|
2024-02-11 15:47:49
| 1
| 1,142
|
yarin Cohen
|
77,977,118
| 1,485,872
|
subprocess.Popen, communicate() and wget raises error when there is none?
|
<p>I have a piece of code that I use to run commend line calls from python (probably taken from SO, but seem to have missed the source on this):</p>
<pre><code>import subprocess
# Run cmd line
def run_cmd(cmd, verbose=True, *args, **kwargs):
if verbose:
print(cmd)
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, shell=True
)
std_out, std_err = process.communicate()
if std_err:
raise RuntimeError(std_err.strip())
if verbose:
print(std_out.strip(), std_err)
return std_out.strip()
</code></pre>
<p>I have been using it fine, but recently I was quite confused when I realized that when called with:</p>
<pre><code>run_cmd('wget https://zenodo.org/records/8014758/files/2DeteCT_slices1-1000.zip -P /one/of/my/folders/2detect/file.zip')
</code></pre>
<p>This would cause it to error. Took a bit to figure out because turns out that what is being printed, instead of an error, is the progress of dowloading the file and successful command, i.e. an "unrolled" version of what I get when I call the line from command line:</p>
<pre><code>--2024-02-11 13:54:55-- https://zenodo.org/records/8014758/files/2DeteCT_slices1-1000.zip
Resolving zenodo.org (zenodo.org)... 188.184.103.159, 188.185.79.172, 188.184.98.238, ...
Connecting to zenodo.org (zenodo.org)|188.184.103.159|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 33880212893 (32G) [application/octet-stream]
Saving to: ‘/one/of/my/folders/2detect/file.zip’
file.zip 100%[===================================================================================================================>] 31.55G 20.9MB/s in 22m 5s
2024-02-11 14:17:02 (24.4 MB/s) - ‘/one/of/my/folders/2detect/file.zip’ saved [33880212893/33880212893]
</code></pre>
<p>I can verify the file, there is no error on it and I can unzip it well.
So, why does this piece of code basically return <code>std_err</code>? the error itself does not seem to be an error but a progress bar.</p>
<p>Hopefully understanding what is going on can help me fix the <code>run_cmd</code> function.</p>
|
<python><subprocess>
|
2024-02-11 14:34:34
| 1
| 35,659
|
Ander Biguri
|
77,977,005
| 10,836,309
|
Converting a dataframe of timed tasks to timeline in Excel
|
<p>I have a dataframe of tasks at different resources at given times. Here is an example:</p>
<pre><code>import pandas as pd
data = {'product': ['Prod1', 'Prod1', 'Prod1', 'Prod2', 'Prod2'],
'resource': ['press', 'coat', 'pack', 'press', 'coat'],
'start': ['08/02/2024 10:10', '08/02/2024 13:00', '08/02/2024 20:10', '08/02/2024 10:50', '08/02/2024 14:20'],
'end': ['08/02/2024 10:49', '08/02/2024 14:20', '08/02/2024 22:10', '08/02/2024 11:40', '08/02/2024 16:45'],
}
df = pd.DataFrame(data)
df['start'] = pd.to_datetime(df['start'], format='%d/%m/%Y %H:%M')
df['end'] = pd.to_datetime(df['end'], format='%d/%m/%Y %H:%M')
df
product resource start end
0 Prod1 press 2024-02-08 10:10:00 2024-02-08 10:49:00
1 Prod1 coat 2024-02-08 13:00:00 2024-02-08 14:20:00
2 Prod1 pack 2024-02-08 20:10:00 2024-02-08 22:10:00
3 Prod2 press 2024-02-08 10:50:00 2024-02-08 11:40:00
4 Prod2 coat 2024-02-08 14:20:00 2024-02-08 16:45:00
</code></pre>
<p>I was able to create a timeline through <code>plotly</code> :</p>
<pre><code>import plotly.express as px
fig = px.timeline(df, x_start="start", x_end="end", y="resource", color="resource", text="product")
fig.show()
</code></pre>
<p>Which gives:</p>
<p><a href="https://i.sstatic.net/SokmH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SokmH.png" alt="enter image description here" /></a></p>
<p>I am looking for a way to produce similar timeline but not as an image but in Excel.</p>
|
<python><pandas><plotly>
|
2024-02-11 13:58:16
| 1
| 6,594
|
gtomer
|
77,976,924
| 9,877,065
|
Assignment of a sliceable object single element
|
<p>It recently came to my eyes that in:</p>
<pre><code>a = [1]
print(a, type(a))
[b] = a
print(b)
b = a[0]
a = [1,2]
b = a[0]
print(b)
try :
[b] = a
except Exception as e:
print(e)
</code></pre>
<p>The notation <code>[b] = a</code> could actually work.</p>
<p>Why does it work, is it relevant in more complex situations? Does it work against <em>"There should be one-- and preferably only one --obvious way to do it"</em>?</p>
|
<python>
|
2024-02-11 13:28:02
| 2
| 3,346
|
pippo1980
|
77,976,740
| 4,792,022
|
Efficiently comparing list of dictionaries in JSONL file with list of keys
|
<ul>
<li>I have a jsonl file containing around 1,000,000 dictionaries</li>
<li>I am interested in discionaries where the values of field_1 is a string
from list_of_strings which contains around 100,000 strings.</li>
</ul>
<p>I can hold both in memory at the same time, and i'd like to quickly and efficiently compare them.</p>
<p>my first attempt was</p>
<pre><code>matching_dicts = []
key = "field_1 "
# Open the JSONL file and iterate over its lines
with jsonlines.open(file_path) as reader:
for line_number, obj in enumerate(reader):
# Check if the object has the target field and its value is in the list_of_strings
if key in obj and obj[key] in list_of_strings :
# If so, append the line to the list
matching_articles.append((obj, line_number))
</code></pre>
<p>this is slow what would be faster?</p>
|
<python><performance><dictionary><jsonlines>
|
2024-02-11 12:23:04
| 1
| 544
|
Abijah
|
77,976,619
| 16,707,518
|
"window must be an integer 0 or greater" issue with '30D' style rolling calculations
|
<p>I've had a look and can't seem to find a solution to this issue. I'm wanting to calculate the rolling sum of the previous 30 days' worth of data at each date in the dataframe - by subgroup - for a set of data that isn't daily - it's spaced fairly irregularly. I've been attempting to use ChatGPT which is getting in a twist over it.</p>
<p>Initially the suggestion was that I'd not converted the Date column to datetime format to allow for the rolling calculation, but now from the code below:</p>
<pre><code>import pandas as pd
from datetime import datetime, timedelta
import numpy as np
# Create a dataset with irregularly spaced dates spanning two years
np.random.seed(42)
date_rng = pd.date_range(start='2022-01-01', end='2023-12-31', freq='10D') # Every 10 days
data = {'Date': np.random.choice(date_rng, size=30),
'Group': np.random.choice(['A', 'B'], size=30),
'Value': np.random.randint(1, 30, size=30)}
df = pd.DataFrame(data)
# Sort DataFrame by date
df.sort_values(by='Date', inplace=True)
df['Date'] = pd.to_datetime(df['Date'])
# Calculate cumulative sum by group within the previous 30 days from each day
df['RollingSum_Last30Days'] = df.groupby('Group')['Value'].transform(lambda x: x.rolling(window='30D', min_periods=1).sum())
</code></pre>
<p>I'm getting an error of:</p>
<pre><code>ValueError: window must be an integer 0 or greater
</code></pre>
<p>I've found conflicting comments online as to whether the format '30D' works in rolling windows but I'm none the wiser as to a solution to this. Any help appreciated.</p>
<p>Running in VSCode in Python 3.11.8.</p>
|
<python><pandas><rolling-computation>
|
2024-02-11 11:39:07
| 2
| 341
|
Richard Dixon
|
77,976,508
| 14,425,501
|
How to send parallel request to Google Gemini?
|
<p>I have 107 images and I want to extract text from them, and I am using Gemini API, and this is my code till now:</p>
<pre><code># Gemini Model
model = genai.GenerativeModel('gemini-pro-vision', safety_settings=safety_settings)
# Code
images_to_process = [os.path.join(image_dir, image_name) for image_name in os.listdir(image_dir)] # list of 107 images
prompt = """Carefully scan this images: if it has text, extract all the text and return the text from it. If the image does not have text return '<000>'."""
for image_path in tqdm(images_to_process):
img = Image.open(image_path)
output = model.generate_content([prompt, img])
text = output.text
print(text)
</code></pre>
<p>In this code, I am just taking one image at a time and extracting text from it using Gemini.</p>
<p><strong>Problem</strong> -
I have 107 images and this code is taking ~10 minutes to run. I know that Gemini API can handle 60 requests per minute. How to send 60 images at the same time? How to do it in batch?</p>
|
<python><google-gemini><google-generativeai>
|
2024-02-11 10:56:53
| 1
| 1,933
|
Adarsh Wase
|
77,976,453
| 1,616,785
|
whatever I do sqlalchemy wont commit to the database
|
<p><code>Python</code>, <code>Sqlalchemy</code>, <code>Mysql 8</code> envronment.</p>
<p>I am using only raw sql select and execute commands(No ORM). I have queries across databases like <code>insert into db1.table1 select * from db2.table1</code>.</p>
<pre><code>engine = create_engine(echo=True)
session = sessionmaker(autocommit=False, autoflush=False, bind=user_engine)
transacton = session.begin()
with session as session:
session.execute("INSERT into db1.table1 select * from db2.tabe1")
session.execute("INSERT into db1.table2 select * from db2.tabe2")
transacton.commit()
</code></pre>
<p>nothing gets commited to the database. in log i see engine.ROLLBACK after each statement</p>
<p>trial 2</p>
<pre><code>engine = create_engine(echo=True)
session = sessionmaker(autocommit=False, autoflush=False, bind=user_engine)
transacton = session.begin()
session.execute("INSERT into db1.table1 select * from db2.tabe1")
session.execute("INSERT into db1.table2 select * from db2.tabe2")
transacton.commit()
</code></pre>
<p>no change. nothing commited.</p>
<pre><code>engine = create_engine(echo=True)
transacton = engine.begin()
engine.execute("INSERT into db1.table1 select * from db2.tabe1")
engine.execute("INSERT into db1.table2 select * from db2.tabe2")
transacton.commit()
</code></pre>
<p>and</p>
<pre><code>engine = create_engine(echo=True)
connection = engine.connect()
transacton = connection.begin()
connection .execute("INSERT into db1.table1 select * from db2.tabe1")
connection.execute("INSERT into db1.table2 select * from db2.tabe2")
transacton.commit()
</code></pre>
<p>nothing commited. now rollback is not seen in logs.</p>
<p>What could i be doing wrong here. Trying for few days now. Not an expert in sqlalchemy here.</p>
<p>The solution i had with isolation_level in comment also didnt work either. so after <code>isolation_level=none</code> begin/commit had no effect. the data gets updated to db after each statement. Another issue is after one block is <code>commited</code>, and issuing another <code>begin()</code> i get error saying a transaction already exists. Have no clue on what is happening here. The reason i chose sqlalchemy was that i read it takes care of connection pooling. also it allows named arguments for raw sql. Otherwise <code>mysqlconnector</code> and <code>MySQLdb</code> was working just fine. I hope I am missing something basic here.</p>
<p>Here even though I begin as <code>connection.begin()</code> explicitly. I see <code>implicit</code> in logs
11-02-2024 23:19:56 | INFO | COMMIT
11-02-2024 23:20:36 | INFO | BEGIN (implicit)
11-02-2024 23:20:44 | ERROR | ERROR: This connection has already initialized a SQLAlchemy Transaction() object via begin() or autobegin; can't call begin() here unless rollback() or commit() is called first.
Traceback (most recent call last):</p>
|
<python><mysql><sqlalchemy>
|
2024-02-11 10:38:35
| 0
| 1,401
|
sjd
|
77,976,304
| 4,000,964
|
Abstract class from a concrete class in Python
|
<p>With the release of Python 3.12, <code>pathlib.Path</code> can now be subclassed. I want to create a subclass <code>CustomPath(Path)</code> for non-<code>os</code> environments (ftp, sftp, s3 storage, etc.), meaning I have to re-implemented (almost) all methods.</p>
<p>I want to make sure that <code>CustomPath</code>is only using methods which are defined in the subclass, to prevent accidentally using methods from the parent <code>Path</code> class. In order to do this, I want to use only the interface (abstract class) of <code>Path</code>. (Since <code>Path</code> may be updated to include new methods beyond my control.)</p>
<p>What is the most pythonic way of doing this? (It might be the case that it is not appropriate to subclass at all.)</p>
<hr />
<p>Here's an example of expected behavior:</p>
<pre class="lang-py prettyprint-override"><code>class S3Path(pathlib.Path):
@classmethod
def from_connection(cls, ...):
... # custom implementation
def read_text(self, encoding=None, errors=None):
... # custom implementation
s3path = S3Path.from_connection(...)
text = s3path.read_text()
s3path.write_text(text) # should raise NotImplementedError
</code></pre>
|
<python><abstract-class><pathlib>
|
2024-02-11 09:56:37
| 2
| 1,208
|
Frank Vel
|
77,976,298
| 12,387,950
|
Is there an efficient way to format Decimal?
|
<p>It is nice, easy and fast to use format-strings in python. So I never considered the performance penalty of this operation. Some time ago I switched my program from <code>float</code> data type to <code>Decimal</code> to eliminate rounding errors. Some performance degradation was expected and it is ok. But I'm surprised to see how large performance penalty I have just from printing a log of formatted <code>Decimal</code> numbers.</p>
<p>Below I illustrate the difference with <em>cProfile</em> results.
The question is - are there any efficient ways to have formatted <code>Decimal</code> numbers in python?</p>
<p>Here is a test for float-number formatting:</p>
<pre><code>from cProfile import Profile
a_float = 1234567890.12345
def format_float(value: float) -> str:
if value is None:
return ''
result = f"{value:+,.6f}"
return result
def test_float():
for i in range(1000000):
b = format_float(a_float)
p = Profile()
p.runcall(test_float)
p.print_stats()
</code></pre>
<p>it gives result: 1000002 function calls in <strong>0.839 seconds</strong></p>
<p>And here is the same test with <em>Decimal</em>:</p>
<pre><code>from decimal import Decimal
from cProfile import Profile
a_decimal = Decimal('1234567890.12345')
def format_decimal(value: Decimal) -> str:
if value is None:
return ''
result = f"{value:+,.6f}"
return result
def test_decimal():
for i in range(1000000):
b = format_decimal(a_decimal)
p = Profile()
p.runcall(test_decimal)
p.print_stats()
</code></pre>
<p>which results in: 55000002 function calls in <strong>27.739 seconds</strong></p>
<p>Is there any way to format <code>Decimal</code> nicely in shorter time?</p>
<p>The discussion in comments brought nice spot - that others don't have such a problem. Here I think the output of the profiler is of some interest.</p>
<p>For the case with <code>float</code> it is pretty short and ends with <code>{method 'disable' of '_lsprof.Profiler' objects}</code> that suggest some optimization I think:</p>
<pre><code>>>> p.print_stats()
1000002 function calls in 0.735 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1000000 0.564 0.000 0.564 0.000 <stdin>:1(format_float)
1 0.171 0.171 0.735 0.735 <stdin>:1(test_float)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
</code></pre>
<p>while with <code>Decimal</code> there are a lot of lines:</p>
<pre class="lang-none prettyprint-override"><code>>>> p.print_stats()
47000029 function calls in 23.731 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1000000 0.625 0.000 23.395 0.000 <stdin>:1(format_decimal)
1 0.335 0.335 23.731 23.731 <stdin>:1(test_decimal)
1000000 0.947 0.000 1.795 0.000 _pydecimal.py:2622(_rescale)
1000000 3.218 0.000 22.771 0.000 _pydecimal.py:3758(__format__)
1000000 0.533 0.000 0.665 0.000 _pydecimal.py:3844(_dec_from_triple)
1 0.000 0.000 0.000 0.000 _pydecimal.py:3902(__init__)
5 0.000 0.000 0.000 0.000 _pydecimal.py:3938(_set_integer_check)
2 0.000 0.000 0.000 0.000 _pydecimal.py:3952(_set_signal_dict)
9 0.000 0.000 0.000 0.000 _pydecimal.py:3963(__setattr__)
1000000 0.294 0.000 0.405 0.000 _pydecimal.py:448(getcontext)
1000000 1.974 0.000 3.767 0.000 _pydecimal.py:6188(_parse_format_specifier)
1000000 0.870 0.000 0.988 0.000 _pydecimal.py:6268(_format_align)
1000000 1.734 0.000 1.822 0.000 _pydecimal.py:6295(_group_lengths)
1000000 6.271 0.000 10.549 0.000 _pydecimal.py:6318(_insert_thousands_sep)
1000000 0.236 0.000 0.236 0.000 _pydecimal.py:6355(_format_sign)
1000000 1.390 0.000 13.162 0.000 _pydecimal.py:6365(_format_number)
3000000 0.466 0.000 0.466 0.000 _pydecimal.py:820(__bool__)
1000000 0.132 0.000 0.132 0.000 {built-in method __new__ of type object at 0x7e4f98d56d40}
7 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance}
16000000 0.979 0.000 0.979 0.000 {built-in method builtins.len}
4000000 0.732 0.000 0.732 0.000 {built-in method builtins.max}
4000000 0.524 0.000 0.524 0.000 {built-in method builtins.min}
1 0.000 0.000 0.000 0.000 {built-in method fromkeys}
4000000 0.277 0.000 0.277 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'copy' of 'dict' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1000000 0.112 0.000 0.112 0.000 {method 'get' of '_contextvars.ContextVar' objects}
1000000 0.613 0.000 0.613 0.000 {method 'groupdict' of 're.Match' objects}
1000000 0.289 0.000 0.289 0.000 {method 'join' of 'str' objects}
1000000 1.180 0.000 1.180 0.000 {method 'match' of 're.Pattern' objects}
1 0.000 0.000 0.000 0.000 {method 'set' of '_contextvars.ContextVar' objects}
</code></pre>
<p>In fast test ran by <em>Suramuthu R</em> here <a href="https://imgur.com/Y6PRJEC" rel="nofollow noreferrer">here</a> I see that his <code>Decimal</code> test is similar to my <code>float</code> and ends with the line:</p>
<pre class="lang-none prettyprint-override"><code>{method 'disable' of '_lsprof.Profiler' objects}
</code></pre>
<p>It looks like some optimization is switched off for <code>Decimal</code> on my PC.
Does anyone have idea how to check it?</p>
|
<python><performance><archlinux><f-string><python-decimal>
|
2024-02-11 09:53:05
| 2
| 605
|
StarterKit
|
77,976,264
| 1,234,434
|
Unable to sort a list of dictionaries by position
|
<p>I'm learning python and doing an online exercise.</p>
<p>My current goal is to sort a list of dictionaries:</p>
<pre><code>if __name__ == '__main__':
arr=[]
for _ in range(int(input())):
name = input()
score = float(input())
arr.append({name: score})
print(arr)
print(sorted(arr,key=lambda x: x[1]))
</code></pre>
<p>But I get an error:</p>
<pre><code>Traceback (most recent call last):
File "/tmp/submission/20240211/09/34/hackerrank-1e461b8e0b66f9b6cec57a2d76f20107/code/Solution.py", line 9, in <module>
print(sorted(arr,key=lambda x: x[1]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/submission/20240211/09/34/hackerrank-1e461b8e0b66f9b6cec57a2d76f20107/code/Solution.py", line 9, in <lambda>
print(sorted(arr,key=lambda x: x[1]))
~^^^
KeyError: 1
</code></pre>
<p>Why is it causing this error, is there some underlying fact about dictionaries that I am missing? How do I resolve it?</p>
|
<python>
|
2024-02-11 09:40:36
| 1
| 1,033
|
Dan
|
77,976,092
| 5,429,320
|
Azure Func App queue trigger throwing encoding errors when trying to read the message
|
<p>I have an Azure Function App that is written in python. One function adds values to my Azure Storage Account Queue. The message body is a simple string of characters for example: <code>SS3</code>.</p>
<p>This function looks like:</p>
<pre class="lang-py prettyprint-override"><code>def main(scheduleImportSet: func.TimerRequest) -> None:
...
with DatabaseManager() as db:
...
create_queue(queue_name)
queue_client = QueueClient.from_connection_string(connection_string, queue_name)
queue_client.message_encode_policy = BinaryBase64EncodePolicy()
queue_client.message_decode_policy = BinaryBase64DecodePolicy()
for set in set_data['data']:
...
message_bytes = code.encode('utf-8')
encoded_message = queue_client.message_encode_policy.encode(content=message_bytes)
queue_client.send_message(encoded_message)
...
</code></pre>
<p>I have created a second function that is triggered based on the messages in the queue. I am trying to use the value in the message body to be used in the rest of the function.</p>
<pre class="lang-py prettyprint-override"><code>def main(queueImportSetCards: func.QueueMessage) -> None:
start = time.perf_counter()
logger.info('Get Set - Cards: Started.')
message_content = queueImportSetCards.get_body().decode('utf-8')
print(message_content)
with DatabaseManager() as db:
set_data = fetch_set_data(message_content)
print(set_data)
</code></pre>
<pre class="lang-py prettyprint-override"><code>def fetch_set_data(code):
url = API_BASE_URL + code + '.json'
response = get_response(url)
if response:
try:
set_data = response.json()
return set_data ['data']
except ValueError:
logger.error(f"Failed to parse JSON data from {url}.")
else:
logger.error(f"Failed to get a valid response from {url}.")
return {}
</code></pre>
<p>This is throwing the following error:</p>
<pre><code>[2024-02-11T08:30:03.672Z] 2024-02-11 08:30:03,671 - ERROR - Error occurred: <class 'UnicodeEncodeError'> - 'charmap' codec can't encode character '\u2212' in position 3710: character maps to <undefined> for <traceback object at 0x000001D3864D6300>
[2024-02-11T08:30:03.674Z] Executed 'Functions.queue_import_sets_cards' (Failed, Id=950cc02b-2ef7-4290-b5ff-9a44b11d8141, Duration=209ms)
[2024-02-11T08:30:03.674Z] System.Private.CoreLib: Exception while executing function: Functions.queue_import_sets_cards. System.Private.CoreLib: Result: Failure
Exception: UnicodeEncodeError: 'charmap' codec can't encode character '\u2212' in position 3710: character maps to <undefined>
Stack: File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.10/WINDOWS/X64\azure_functions_worker\dispatcher.py", line 479, in _handle__invocation_request
call_result = await self._loop.run_in_executor(
File "C:\Users\Ross\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.10/WINDOWS/X64\azure_functions_worker\dispatcher.py", line 752, in _run_sync_func
return ExtensionManager.get_sync_invocation_wrapper(context,
File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.10/WINDOWS/X64\azure_functions_worker\extension.py", line 215, in _raw_invocation_wrapper
result = function(**args)
File "C:\Users\Ross\OneDrive\Documents\Repositories\Func\queue_import_sets_cards\__init__.py", line 359, in main
print(set_data)
File "c:\Users\Ross\.vscode\extensions\ms-python.python-2024.0.1\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_io.py", line 40, in write
r.write(s)
File "C:\Users\Ross\AppData\Local\Programs\Python\Python310\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
.
</code></pre>
<p>I have tried without the <code>decode()</code> but then there are issue as the message returns as <code>b'SS3'</code></p>
|
<python><azure><azure-functions>
|
2024-02-11 08:32:13
| 1
| 2,467
|
Ross
|
77,975,786
| 2,403,531
|
Python multiprocessing queues hanging without wait timers
|
<p>I am having trouble with queues failing to work as intended. The goal of this design is so that the <code>muncher</code> can take as little or as long as it needs, but it will always be able filled with work (input queue always has stuff, output queue always has room). I want to maximize <code>muncher</code>'s data crunching while the <code>yoinker</code> and <code>yeeter</code> take care of shuffling data to/away from it.</p>
<p>The gist of the test code is to <code>yoink</code> some data (out of thin air currently), <code>munch</code> on it, then <code>yeet</code> it into the ether. <code>Yoinking</code> fills an input queue, <code>munching</code> uses the input queue and dumps into an output queue, and <code>yeeting</code> pulls from the output queue.</p>
<p>Ideally it should all work, I was banking on the implicit waiting from queues during stall events (e.g., the <code>yoinker</code> fills the input queue, its threads stop while the queue is full; <code>yeeter</code>'s output queue is empty, its threads stop while the output queue is filled back up).</p>
<p>That doesn't seem to happen reliably, which is problematic for using code reliably.</p>
<p>Without wait timers, the single-threaded <code>yoinker</code>-caller fails to exit all <code>muncher</code>/<code>yeeter</code> threads. With wait timers on the <code>munching</code>/<code>yeeting</code> segments, the single-threaded <code>yoinker</code>-caller usually gets through it.</p>
<p>With wait timers on the <code>munching</code>/<code>yeeting</code> segments, the parallel <code>yoinker</code>-caller fails to even get to the exiting stage. With wait timers on the <code>yoinking</code> segment as well as the <code>munching</code>/<code>yeeting</code> segments, it succeeds.</p>
<p>I can't control the speed at which the <code>yoink</code>/<code>munch</code>/<code>yeet</code> segments will finish (sometimes it might be instantaneous, maybe multiple nearly instantaneous <code>yoinks</code> in a row or something), is there a way to make this all be reliable with a parallel <code>yoinker</code>-caller without any needed wait timers?</p>
<pre><code>import numpy as np
import multiprocessing as mp
from time import sleep
#SOMETHING -> INPUT QUEUE
def yoinker(incrementer):
#--- make some data ---
data_in = np.ones((2000,2000)); #make some big data
data_in = data_in/np.sum(data_in)*incrementer; #tag it
#--- ditch the data ---
qb_in.put(data_in); #jam the data in
# sleep(np.random.random()); #nap time! wait between 0 and 1 seconds
#END DEF
#INPUT QUEUE -> SOMETHING -> OUTPUT QUEUE
def muncher(qb_in, qb_out):
while True: #these are eternal functions that continually work on a queue that has no predefined end (to them)
#--- get the data out ---
data_in = qb_in.get(); #get the data out
if( data_in is None ):
break #this is how this gets to end
#END IF
#--- do something with the data ---
data_out = data_in.copy(); #so much
#--- ditch the data ---
qb_out.put(data_out); #jam the data in
# sleep(np.random.random()); #nap time! wait between 0 and 1 seconds
#END WHILE
#END DEF
#OUTPUT QUEUE -> SOMETHING
def yeeter(qb_out):
while True: #these are eternal functions that continually work on a queue that has no predefined end (to them)
#--- get the data out ---
data_out = qb_out.get(); #get the data out
if( data_out is None ):
break #this is how this gets to end
#END IF
#--- save the data ---
# print('got data_out, sum is: '+str(np.round(np.sum(np.sum(data_out))))); #do some reporting
data_out = np.round(np.sum(np.sum(data_out)));
# sleep(np.random.random()); #nap time! wait between 0 and 1 seconds
#END WHILE
#END DEF
def parallel_starmap_helper(fn, args): #for multiprocess starmap with kwargs, MUST be outside of the function that calls it or it just hangs
return fn(*args)
#END DEF
def initer(_qb_in): #basically each process will call this function when it starts (b/c it's defined as an "initializer")
global qb_in; #it lets each process know "qb_in" is a global variables (outside of the scope of the code, e.g., they'll appear w/o being defined)
qb_in = _qb_in; #link em up here
#END DEF
if __name__=='__main__':
#--- settings ---
threads_in = 2; #number of threads for the input process (reads files, fills input queue with resulting data)
threads_calc = 4; #number of threads for the calc process (reads input queue's resulting data, converts, fills output queue)
threads_out = 2; #number of threads for the output process (reads output queue's converted data, saves files)
queueSize_in = 5; # how many input files to hold (if emptied, stalls conversion)
queueSize_out = 5; #how many output files to hold (if filled, stalls conversion)
#--- define queues that hold input and output datas ---
qb_in = mp.Queue(maxsize=queueSize_in); # Queue to hold input things
qb_out = mp.Queue(maxsize=queueSize_out); # Queue to hold output things
#--- build data generator parallel lists (not using queues) ---
parallel_list = []; #Prep
for j in range(0, 25):
parallel_list.append([yoinker, [j]]); # pack up all needed function calls
#END FOR j
#--- build data cruncher lists (using queues) ---
munchers = [mp.Process(target=muncher,args=(qb_in, qb_out)) for i in range(threads_calc)] # this function gets the data from the input queue, processes it, and then puts in the output queue
yeeters = [mp.Process(target=yeeter,args=(qb_out, )) for i in range(threads_out)] # this function gets data processed and does the final steps
#--- start up all processes that are NOT blocking ---
for munch in munchers:
munch.daemon = True; #say it lives for others
munch.start(); #start each muncher up
#END FOR munch
for yeet in yeeters:
yeet.daemon = True; #say it lives for others
yeet.start(); #start each yeeter up
#END FOR yeet
for j in range(0, 25):
yoinker(j); # pack up all needed function
print('placed j'+str(j))
#END FOR j
# #--- call blocking data generator ---
# with mp.Pool(processes=threads_in, initializer=initer, initargs=(qb_in,)) as executor:
# executor.starmap(parallel_starmap_helper, parallel_list); #function you want is replaced with; parallel_starmap_kwarg_helper helps starmap distribute everything right
# #END WITH
for j in range(0, threads_calc):
qb_in.put(None); #tell all muncher threads to quit (they get out of the qb_in queue)
print('qb_in - Put a None')
#END FOR j
for j in range(0, threads_out):
qb_out.put(None); #tell all yeeter threads to quit (they get out of the qb_out queue)
print('qb_out - Put a None')
#END FOR j
#--- This portion lets you know if the code has hung ---
#it does this via checking the queues. The queues should end as exactly enough `None`s have been put in to end all of the queues started, but without timers they do not always end.
#This is here to give some feedback, since calling `.join()` on the processes will just sit there silently.
FLG_theyDone = False;
while( FLG_theyDone == False ):
FLG_theyDone = True;
print('\nStarting loop')
if( qb_in.full() == True ):
print('qb_in full');
elif( qb_in.empty() == True ):
print('qb_in empty');
#END IF
if( qb_out.full() == True ):
print('qb_out full');
elif( qb_out.empty() == True ):
print('qb_out empty');
#END IF
for munch in munchers:
print('munch - '+str(munch.exitcode))
if( munch.exitcode is None ):
FLG_theyDone = False;
# try:
# qb_in.put(sentinel, block=False); #tell all muncher threads to quit
# except:
# print('qb_in full, can\'t jam more Nones');
# #END TRYING
#END IF
#END FOR munch
# print('yeeters - '+str(yeeters))
for yeet in yeeters:
print('yeet - '+str(yeet.exitcode))
if( yeet.exitcode is None ):
FLG_theyDone = False;
# try:
# qb_out.put(sentinel, block=False); #tell all muncher threads to quit
# except:
# print('qb_outn full, can\'t jam more Nones');
# #END TRYING
#END IF
#END FOR yeet
sleep(np.random.random()+2); #nap time! wait between 0 and 1 seconds
#END IF
#--- set up a place for them to end ---
for munch in munchers:
munch.join(); #end each muncher
#END FOR munch
for yeet in yeeters:
yeet.join(); #end each yeeter
#END FOR yeet
#END IF
</code></pre>
|
<python><multiprocessing><queue>
|
2024-02-11 06:07:34
| 1
| 730
|
user2403531
|
77,975,706
| 2,121,442
|
How to convert strings to bytes in a for in python
|
<p>I have a List of strings, but when I try to count the elements in the List which match a string, I get a byte error. Here's the code:</p>
<pre><code>def countTransactions(data, lineDelim, marker):
lines = data.split(lineDelim)
return len([i for i in lines if marker in i])
</code></pre>
<p>And here's the error I see:</p>
<pre><code> File "//./main.py", line 328, in <module>
topTransactions = countTransactions(topString, bin_delimiter, TMARKER)
File "//./main.py", line 188, in countTransactions
return len([i for i in lines if marker in i])
File "//./main.py", line 188, in <listcomp>
return len([i for i in lines if marker in i])
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>If my List is indeed a List of bytes not strings, how do I fix this?</p>
|
<python>
|
2024-02-11 05:30:07
| 1
| 581
|
Jason Michael
|
77,975,669
| 1,613,983
|
net::SOCKS_CONNECTION_FAILED when routing Pyppeteer request through socks proxy
|
<p>Here's my attempt at a minimal webscraper using pyppeteer 1.0.2 routing through a NordVPN socks proxy:</p>
<pre><code>import asyncio
from pyppeteer import launch
import json
import requests
res = requests.get('https://nordvpn.com/api/server')
servers = json.loads(res.content)
socks_servers = [server['ip_address'] for server in servers if server['features']['socks']]
socks_server = socks_servers[0]
port = '1080'
async def with_proxy():
browser = await launch(
executablePath='/usr/bin/google-chrome',
headless=True,
args=[f'--proxy-server=socks5://{socks_server}:{port}']
)
page = await browser.newPage()
await page.authenticate({
'username': username,
'password': password
})
await page.goto('https://api.myip.com')
await browser.close()
await with_proxy()
</code></pre>
<p>However, this results in the following error:</p>
<pre><code>---------------------------------------------------------------------------
PageError Traceback (most recent call last)
Cell In[33], line 23
20 await page.goto('https://api.myip.com')
21 await browser.close()
---> 23 await with_proxy()
Cell In[33], line 20, in with_proxy()
15 # do not forget to put "await" before async functions
16 await page.authenticate({
17 'username': username,
18 'password': password
19 })
---> 20 await page.goto('https://api.myip.com')
21 await browser.close()
File /usr/local/lib/python3.11/site-packages/pyppeteer/page.py:831, in Page.goto(self, url, options, **kwargs)
829 result = await self._navigate(url, referrer)
830 if result is not None:
--> 831 raise PageError(result)
832 result = await watcher.navigationPromise()
833 watcher.cancel()
PageError: net::ERR_SOCKS_CONNECTION_FAILED at https://api.myip.com
</code></pre>
<p>As a test, I tried the same thing through the native <code>requests</code> library and it seemed to work fine:</p>
<pre><code>import requests
def native_requests():
with requests.Session() as s:
s.headers['Connection'] = 'close'
prox = {'https':f'socks5://{username}:{password}@{socks_server}:{port}'}
r = s.get('https://api.myip.com', proxies=prox)
print(r)
s.close()
native_requests()
</code></pre>
<p>This prints <code><Response [200]></code> as expected. What am I doing wrong here?</p>
|
<python><google-chrome><pyppeteer>
|
2024-02-11 05:13:52
| 2
| 23,470
|
quant
|
77,975,605
| 7,408,848
|
Saving a pandas dataframe in excel with utf-8 encoding
|
<p>Simply want to set the encoding of an excel workbook for saving data. All data is stored in a pandas dataframe.</p>
<p>Code below:</p>
<pre><code>file_name = "bobs_burgers"
with pd.ExcelWriter(r".\Exported_data\{file_name}.xlsx".format(file_name = file_name),
engine="xlsxwriter",
options={'encoding':'utf-8'}) as writer:
data_table.to_excel(writer, sheet_name = "characters", index = False)
</code></pre>
<p>I receive the error below</p>
<blockquote>
<p>TypeError: Workbook.<strong>init</strong>() got an unexpected keyword argument 'encoding'</p>
</blockquote>
<p>Using google to search for this, I have found that the majority of comments about encoding using ExcelWriter always has input of <code>options</code>.</p>
<p>How to set the encode to <code>utf-8</code> for exported pandas tables?</p>
|
<python><pandas><encode>
|
2024-02-11 04:38:47
| 1
| 1,111
|
Hojo.Timberwolf
|
77,975,441
| 1,013,799
|
_openai_scripts.py': [Errno 2] No such file or directory
|
<p>Working on a python script using the OpenAI API. When I try to run the script this is my Mac terminal output:</p>
<pre><code>/Library/Developer/CommandLineTools/usr/bin/python3: can't open file '/Users/me/Library/Python/3.9/lib/python/site-packages/openai/_openai_scripts.py': [Errno 2] No such file or directory
</code></pre>
<p>I can confirm there is no such file in that folder, but not sure what to do about it.</p>
<p><strong>Edits in Response to Comments:</strong></p>
<p>Here is the part of my script that references OpenAI:</p>
<pre><code># Step 2: Send HTML to ChatGPT API
api_key = "..."
prompt = f"HTML source code:\n\n{html_content}\n\nExtract the following information:\n1. Church Name\n2. Pastor Name\n3. Pastor Email"
response = openai.Completion.create(
model="gpt-3.5-turbo", # Use the appropriate model
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt},
],
api_key=api_key
)
chat_output = response.choices[0].message["content"].strip()
</code></pre>
<p>And the command I'm using to run the script is:</p>
<pre><code>python3 chatgpt-analyze.py
</code></pre>
<p>Here is the result of running the <code>pip3 show openai</code> command:</p>
<pre><code>Name: openai
Version: 1.12.0
Summary: The official Python library for the openai API
Home-page:
Author:
Author-email: OpenAI <support@openai.com>
License:
Location: /Users/me/Library/Python/3.9/lib/python/site-packages Requires: anyio, distro, httpx, pydantic, sniffio, tqdm, typing-extensions
Required-by:
</code></pre>
<p>But if I try to run e.g., <code>openai --help</code> I get the same error:</p>
<pre><code>/Library/Developer/CommandLineTools/usr/bin/python3: can't open file '/Users/me/Library/Python/3.9/lib/python/site-packages/openai/_openai_scripts.py': [Errno 2] No such file or directory
</code></pre>
|
<python><openai-api>
|
2024-02-11 03:03:27
| 0
| 3,222
|
Drewdavid
|
77,975,378
| 10,693,596
|
panel: how to obtain the name of the uploaded file?
|
<p>While using <code>param.Parameterized</code> way of creating a Panel application, I am not able to obtain the name of the uploaded file. The minimal code is below, I am trying to figure out the solution for the <code>update_name</code> method:</p>
<pre class="lang-py prettyprint-override"><code>import panel as pn
import param
class TestFile(param.Parameterized):
file_upload = param.FileSelector()
file_name = param.String()
@param.depends("file_upload", watch=True)
def update_name(self):
... # how?
test = TestFile()
app = pn.Row(pn.Param(test.param, widgets={"file_upload": {"type": pn.widgets.FileInput}}))
app.servable()
</code></pre>
<p>It seems that doing this would require creating the widget inside the class, since <code>pn.widgets.FileInput</code> does have a <code>filename</code> property. Is there a way to get the file name of the uploaded file without using <code>pn.widgets</code>?</p>
|
<python><holoviz-panel><python-param>
|
2024-02-11 02:22:20
| 0
| 16,692
|
SultanOrazbayev
|
77,975,262
| 10,940,989
|
Untangle Facebook reaction encoding?
|
<p>I've downloaded my Facebook data in JSON format and want to perform analysis on it. It contains segments like:</p>
<pre><code>{
"reaction": "\u00e2\u009d\u00a4",
"actor": "..."
},
</code></pre>
<p>The reaction here is a heart. However, if I print it in Python, obviously it comes out as simply those unicode characters (â¤), rather than a heart.</p>
<p>Does anyone know if there's somewhere that contains all of Facebook's reaction encodings?</p>
|
<python><facebook><unicode><emoji>
|
2024-02-11 00:56:48
| 2
| 380
|
Anthony Poole
|
77,975,111
| 19,299,757
|
Retrieve AWS secrets using boto3
|
<p>I want to retrieve AWS secrets using python boto3 and I came across this sample code:</p>
<ul>
<li><a href="https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/python/example_code/secretsmanager/get_secret_value.py" rel="nofollow noreferrer">aws-doc-sdk-examples/python/example_code/secretsmanager/get_secret_value.py at main · awsdocs/aws-doc-sdk-examples · GitHub</a></li>
<li><a href="https://docs.aws.amazon.com/code-library/latest/ug/python_3_secrets-manager_code_examples.html" rel="nofollow noreferrer">Secrets Manager examples using SDK for Python (Boto3) - AWS SDK Code Examples</a></li>
</ul>
<p>But it is confusing. I don't see boto3 library import in the python file. Not an expert of Python, so any help in understanding this much appreciated.</p>
<p>I was expecting to have the AWS secrets name and boto3 library as part of the python function.</p>
|
<python><amazon-web-services><boto3><aws-secrets-manager>
|
2024-02-10 23:40:54
| 1
| 433
|
Ram
|
77,975,009
| 4,348,400
|
How do `prior` and `connect` work in `matplotlib.sankey.Sankey.add`?
|
<p>I have a weighted directed graph (<a href="https://networkx.org/documentation/stable/reference/classes/digraph.html" rel="nofollow noreferrer"><code>nx.DiGraph</code></a>) that I want to plot as a Sankey diagram using Matplotlib.</p>
<p>I'm a bit confused about the roles of <code>prior</code> and <code>connect</code>. The <a href="https://matplotlib.org/stable/api/sankey_api.html#matplotlib.sankey.Sankey.add" rel="nofollow noreferrer">docs</a> say:</p>
<blockquote>
<p>prior : int</p>
<p>Index of the prior diagram to which this diagram should be connected.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>connect : (int, int)</p>
<p>A (prior, this) tuple indexing the flow of the prior diagram and the flow of this diagram which should be connected. If this is the first diagram or prior is None, connect will be ignored.</p>
</blockquote>
<p>It isn't clear to me what these parameters are that I would have to specify prior twice, once in <code>prior</code> and the other time as the first element passed to <code>connect</code>.</p>
<p>There are <a href="https://matplotlib.org/stable/gallery/specialty_plots/sankey_basics.html#sphx-glr-gallery-specialty-plots-sankey-basics-py" rel="nofollow noreferrer">these examples</a> in the Matplotlib gallery, but I did not find that they illuminated much about how these parameters are really supposed to work.</p>
<p>I found <a href="https://flothesof.github.io/sankey-tutorial-matplotlib" rel="nofollow noreferrer">this tutorial</a> which did a nice job of incrementally building up some of the components. It described <code>connect</code> this way:</p>
<blockquote>
<p>It turns out that we can now give a simpler explanation of the connect argument: it says which flows (indexed in the order they were defined) should be connected. So connect should really be described as</p>
<blockquote>
<p>connect = (index_of_prior_flow, index_of_current_diagram_flow) that need to be connected</p>
</blockquote>
</blockquote>
<p>But taking that understanding to <a href="https://matplotlib.org/stable/gallery/specialty_plots/sankey_rankine.html#sphx-glr-gallery-specialty-plots-sankey-rankine-py" rel="nofollow noreferrer">this example</a> proved confusing to me since I would have thought that <code>index_of_current_diagram_flow</code> would be constant across diagrams, but the example suggests otherwise.</p>
<p>I'm sure there is a consistent behaviour to this tool, but I don't understand it yet. How do I understand <code>prior</code> and <code>connect</code> in terms of a directed graph?</p>
|
<python><matplotlib><plot><sankey-diagram>
|
2024-02-10 22:56:52
| 0
| 1,394
|
Galen
|
77,974,910
| 80,002
|
How to add custom dimensions to App Insights request telemetry in an ASGI web application?
|
<p>I have a python web application using Quart. Here is how I add boilerplate App Insights request telemetry to it:</p>
<pre><code>...
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.instrumentation.asgi import OpenTelemetryMiddleware
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
bp = Blueprint("routes", __name__, static_folder='static')
def create_app():
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(AzureMonitorTraceExporter()))
app = Quart(__name__)
app.asgi_app = OpenTelemetryMiddleware(app.asgi_app, tracer_provider = tracer_provider)
app.register_blueprint(bp)
return app
...
</code></pre>
<p>I also set the <code>APPLICATIONINSIGHTS_CONNECTION_STRING</code> environment variable to the App Insights connection string.</p>
<p>And it works - I get boilerplate request telemetry in the App Insights.</p>
<p>Now I want to enrich the request telemetry with custom dimensions. I know the theory of it:</p>
<ol>
<li>Have request scoped singleton exposing a dictionary.</li>
<li>At any point in code we can add entries to that dictionary</li>
<li>Just before the request is over there is code to send App Insights telemetry. There should be a way to customize this code in one way or another to add the dictionary to the request telemetry as <code>customDimensions</code>.</li>
</ol>
<p>We have implemented it in our Asp.Net application. But I have no idea how to translate this into python web app using Quart.</p>
<p><strong>EDIT 1</strong></p>
<p>For Asp.Net Core the logic implementation is trivial:</p>
<ol>
<li>Configure the connection string to the App Insights. Could be done through the environment similarly to python, only the environment variable is different (<code>ApplicationInsights__ConnectionString</code>)</li>
<li>Add <code>Microsoft.ApplicationInsights.AspNetCore</code> nuget package.</li>
<li>Add <code>services.AddApplicationInsightsTelemetry(opt => opt.EnableActiveTelemetryConfigurationSetup = true);</code> to <code>Startup.ConfigureServices</code></li>
</ol>
<p>This gives us boilerplate request telemetry. And I have it in python too. To enrich it with custom telemetry in Asp.Net Core we do not even need to hook into the code that sends the telemetry. We just need to execute the following code:</p>
<pre><code>var telem = context.Features.Get<RequestTelemetry>();
telem.Properties[Key] = Value;
</code></pre>
<p>This adds the Key-Value pair to the list of the custom dimensions of the current request.</p>
<p>E.g. <a href="https://www.aspnetmonsters.com/2020/03/2020-03-07-enhancing-application-insights-request-telemetry/" rel="nofollow noreferrer">https://www.aspnetmonsters.com/2020/03/2020-03-07-enhancing-application-insights-request-telemetry/</a></p>
<p>In our C# code, we register the following singleton:</p>
<pre><code>public class EnrichRequest : IEnrichRequest
{
public void EnrichHttpRequest(HttpContext context, Dictionary<string, string> properties)
{
var telem = context.Features.Get<RequestTelemetry>();
foreach (var prop in properties)
{
telem.Properties.Add(prop.Key, prop.Value);
}
}
}
</code></pre>
<p>So the singleton is global (not even scoped to a request), but the passed <code>context</code> parameter represents the current request, which the caller can get from any point through the DI friendly <a href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.ihttpcontextaccessor?view=aspnetcore-8.0" rel="nofollow noreferrer">IHttpContextAccessor</a> interface, which provides the context of the current request.</p>
<p>As you can see, enrich request telemetry in Asp.Net Core is very simple. The question now - how can we do it in python Quart application?</p>
|
<python><azure-application-insights><open-telemetry>
|
2024-02-10 22:14:04
| 1
| 63,544
|
mark
|
77,974,845
| 3,787,646
|
How to dynamically create and set a validation model for a FastAPI endpoint based on the dependencies I'm passing?
|
<p>In the example below, I'd like to loose the coupling between <code>MediaController</code> and its dependencies, the <code>MediaPipelineX</code>s to enable it to work with whatever I'm passing to it. That's an important aspect of OOP and Dependency Injection.</p>
<p>How do I do that? The way fastapi currently works seems to make reuse of <code>MediaController</code> impossible. I feel if <a href="https://github.com/tiangolo/fastapi/blob/master/fastapi/routing.py#L843" rel="nofollow noreferrer"><code>add_api_route</code></a> would take another parameter <code>request_model</code> I could create the type in the class initializer and provide it as instance attribute. But without, I feel stuck.</p>
<p>I'd appreciate any advice.</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, APIRouter
from pydantic import BaseModel, Field
from typing import Literal, Union
class MediaPipeline1():
class Params1(BaseModel):
pipeline_name: Literal["One"]
foo: int = 11
class MediaPipeline2():
class Params2(BaseModel):
pipeline_name: Literal["Two"]
bar: int = 22
class MediaPipeline3():
class Params3(BaseModel):
pipeline_name: Literal["Three"]
baz: int = 33
##############################################
class MediaController():
# Hardcode the dependencies the initializer gets passed
# This is bad
class RequestModel(BaseModel): # Tagged union
params: Union[MediaPipeline1.Params1, MediaPipeline2.Params2] = Field(discriminator='pipeline_name')
def __init__(self, pipelines):
self.pipelines = pipelines
def handler(self, request: RequestModel):
print(request)
#############################################
my_pipeline_1 = MediaPipeline1()
my_pipeline_2 = MediaPipeline2()
my_pipeline_3 = MediaPipeline3()
router = APIRouter()
# Works
my_controller_12 = MediaController([my_pipeline_1, my_pipeline_2])
router.add_api_route("/1", my_controller_12.handler)
# Bug: Will not handle my_pipeline_3 because class is hardcoded for [my_pipeline_1, my_pipeline_2]
my_controller_23 = MediaController([my_pipeline_2, my_pipeline_3])
router.add_api_route("/2", my_controller_23.handler)
app = FastAPI()
app.include_router(router)
</code></pre>
|
<python><oop><dependency-injection><fastapi><pydantic>
|
2024-02-10 21:46:56
| 0
| 863
|
Multisync
|
77,974,696
| 1,245,281
|
Is it possible to "serialize" a span such that a span could be started, and then picked up after reboot?
|
<p>I don't actually have any code, it's more of a conceptual "how might someone do this" sort of question.</p>
<p>I have a python application that I am instrumenting with OpenTelemetry, the application basically does some configuration on the machine that only can be completed after a reboot.
I'd like to be able to create a span at the start of the operation, and when the application picks back up after a reboot it is able to continue with the span.</p>
<p>Any ideas on how to accomplish this sort of thing?</p>
|
<python><trace><open-telemetry>
|
2024-02-10 20:59:24
| 0
| 551
|
RedBullet
|
77,974,525
| 50,385
|
What is the right way to await cancelling an asyncio task?
|
<p><a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.Task.uncancel" rel="noreferrer">The docs for cancel</a> make it sound like you should usually propagate CancelledError exceptions:</p>
<blockquote>
<p>Therefore, unlike Future.cancel(), Task.cancel() does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged. Should the coroutine nevertheless decide to suppress the cancellation, it needs to call Task.uncancel() in addition to catching the exception.</p>
</blockquote>
<p>However, neither of the methods for detecting cancellation are awaitable: <code>cancelling</code> which tells you if cancelling is in progress, and <code>cancelled</code> tells you if cancellation is done. So the obvious way to wait for cancellation is this:</p>
<pre class="lang-py prettyprint-override"><code>foo_task.cancel()
try:
await foo_task
except asyncio.CancelledError:
pass
</code></pre>
<p>There are lots of examples of this online even on SO. <a href="https://docs.python.org/3/library/asyncio-task.html#task-cancellation" rel="noreferrer">But the docs warn you asyncio machinery will "misbehave"</a> if you do this:</p>
<blockquote>
<p>The asyncio components that enable structured concurrency, like asyncio.TaskGroup and asyncio.timeout(), are implemented using cancellation internally and might misbehave if a coroutine swallows asyncio.CancelledError</p>
</blockquote>
<p>Now you might be wondering why you would wait to block until a task is fully cancelled. The problem is the <a href="https://docs.python.org/3.10/library/asyncio-task.html#asyncio.create_task" rel="noreferrer">asyncio event loop only creates weak references to tasks</a>, so if as your class is shutting down (e.g. due to a <code>cleanup</code> method or <code>__aexit__</code>) and you don't await every task you spawn, you might tear down the only strong reference while the task is still running, and then python will yell at you:</p>
<pre><code>ERROR base_events.py:1771: Task was destroyed but it is pending!
</code></pre>
<p>So it seems to avoid the error I am specifically being forced into doing the thing I'm not supposed to do :P The only alternative seems to be weird unpythonic hackery like stuffing every task I make in a global set and awaiting them all at the end of the run.</p>
|
<python><async-await><task><python-asyncio><cancellation>
|
2024-02-10 19:54:22
| 2
| 22,294
|
Joseph Garvin
|
77,974,379
| 8,713,442
|
How to find max value in list datatype column in polars dataframe
|
<p>I am using a Polar data frame for the first time. I am trying to match input values with data in the Postgres table.
Sharing some sample code which is part of the actual code.
I have a column called "Score" of type list[i32]. As a next step, I am trying to find the maximum value in that list. I am getting errors.</p>
<pre><code>import polars as pl
import jaro
def test_polars():
fname='sarah'
lname = 'vatssss'
data = {"first_name": ['sarah', 'purnima'], "last_name": ['vats', 'malik']}
df = pl.DataFrame(data)
print(df)
df = (df.with_columns(
[
(pl.when(pl.col("first_name") == fname).then(1).otherwise(0)).alias("E_FN"),
(pl.when(pl.col("last_name") == lname).then(1).otherwise(0)).alias("E_LN"),
(pl.when(pl.col("first_name").str.slice(0, 3) == fname[0:3]).then(1).otherwise(0)).alias("F3_FN"),
(pl.when(pl.col("first_name").map_elements(
lambda first_name: jaro.jaro_winkler_metric(first_name, fname)) >= 0.8).then(1).otherwise(0)).alias(
"CMP80_FN"),
(pl.when(pl.col("last_name").map_elements(
lambda first_name: jaro.jaro_winkler_metric(first_name, lname)) >= 0.9).then(1).otherwise(0)).alias(
"CMP90_LN"),
]
)
.with_columns(
[ pl.concat_list(980 * pl.col("E_FN") ,
970 * pl.col("E_LN") ).alias("score")
]
)
.with_columns(
[pl.max(pl.col("score")).alias("max_score")
]
)
)
print(df)
if __name__ == '__main__':
test_polars()
C:\PythonProject\pythonProject\venv\Graph_POC\Scripts\python.exe "C:\PythonProject\pythonProject\polars data.py"
shape: (2, 2)
┌────────────┬───────────┐
│ first_name ┆ last_name │
│ --- ┆ --- │
│ str ┆ str │
╞════════════╪═══════════╡
│ sarah ┆ vats │
│ purnima ┆ malik │
└────────────┴───────────┘
Traceback (most recent call last):
File "C:\PythonProject\pythonProject\polars data.py", line 45, in <module>
test_polars()
File "C:\PythonProject\pythonProject\polars data.py", line 34, in test_polars
[pl.max(pl.col("score")).alias("max_score")
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\polars\functions\aggregation\vertical.py", line 175, in max
return F.col(*names).max()
^^^^^^^^^^^^^
File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\polars\functions\col.py", line 288, in __new__
return _create_col(name, *more_names)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\polars\functions\col.py", line 67, in _create_col
raise TypeError(msg)
TypeError: invalid input for `col`
Expected `str` or `DataType`, got 'Expr'.
Process finished with exit code 1
</code></pre>
|
<python><python-polars>
|
2024-02-10 18:57:44
| 1
| 464
|
pbh
|
77,974,285
| 515,311
|
Allowing a specific set of undefined values in enums
|
<p>Given an enum like this:</p>
<pre class="lang-py prettyprint-override"><code>class Result(IntEnum):
A = 0
B = 1
C = 3
</code></pre>
<p>I'd like to be able to create new enum members dynamically, but only if the value is within a given range. For example, given:</p>
<pre class="lang-py prettyprint-override"><code>class Result(IntEnum, valid_range=(4, 10)):
A = 0
B = 1
C = 3
</code></pre>
<p>I'd expect the following behaviour:</p>
<ul>
<li><code>Result(10)</code> would be ok, returning an instance of the enum where <code>.value</code> is set to 10.</li>
<li><code>Result(0)</code> would return <code>Result.A</code></li>
<li><code>Result(20)</code> or <code>Result("hello")</code> would raise a <code>ValueError</code></li>
</ul>
<hr />
<p><strong>Why?</strong></p>
<p>This is for the BACnet standard - a building management/HVAC protocol. Grossly simplified it defines a standard set of message types (for example: <code>READ</code>, <code>WRITE</code>, <code>LIST</code>) which map to integer values. I've modelled these as an enum:</p>
<pre class="lang-py prettyprint-override"><code>class MessageType(IntEnum):
READ = 0
WRITE = 1
LIST = 2
</code></pre>
<p>This is then used as follows:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Message:
message_type: MessageType
data: bytes
def decode(raw: bytes) -> Message:
message_type = MessageType(unpack("!H", bytes[:2]))
data = bytes[2:]
return Message(message_type=message_type, data=data)
</code></pre>
<p>The protocol also allows for vendors to define their own message types but limits them to a specific range of values (i.e. between 10 and 20).</p>
<p><em>Options considered:</em></p>
<ul>
<li>Define each value as an additional enum attribute (i.e. <code>MessageType.VENDOR_10</code>) explicitly or dynamically but the number of valid values can be in the millions, most of which will not be used.</li>
<li>Don't use an enum, just an <code>int</code>, an int <code>TypeAlias</code>, or just a custom class.</li>
<li>Change the type annotation on <code>Message</code> to <code>int | MessageType</code> and when decoding, attempt to create a <code>MessageType</code>, catching the <code>ValueError</code> and assigning the int.</li>
</ul>
|
<python><python-3.x><enums>
|
2024-02-10 18:32:59
| 1
| 1,052
|
Matthew Brown
|
77,974,164
| 4,465,928
|
AttributeError: module 'keras.api._v2.keras' has no attribute 'models' when using tf.keras.models.Sequential() for Neural Network
|
<p>I'm using Jupyter Notebook and am trying to follow along this example for Siamese NN with Triplet Loss: <a href="https://github.com/13muskanp/Siamese-Network-with-Triplet-Loss/blob/master/Siamese%20Network%20with%20Triplet%20Loss.ipynb" rel="nofollow noreferrer">https://github.com/13muskanp/Siamese-Network-with-Triplet-Loss/blob/master/Siamese%20Network%20with%20Triplet%20Loss.ipynb</a></p>
<p>But when I try to run the colab myself, I can't get the Sequential() part to work:
<a href="https://i.sstatic.net/y1ctG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y1ctG.png" alt="enter image description here" /></a></p>
<p>I've tried everything from using <code>tf.keras.Sequential()</code> to modifying the imports but I can't figure out how to make this compile. Any help would be appreciated!</p>
<p>Edit: After trying <code>from tensorflow import keras</code> I get a different error <code>AttributeError: module 'keras.api._v2.keras' has no attribute 'models'</code> <a href="https://i.sstatic.net/eQRMA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eQRMA.png" alt="enter image description here" /></a></p>
<p>but I can't figure out how to get past this</p>
<p><a href="https://i.sstatic.net/74Hwt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/74Hwt.png" alt="enter image description here" /></a></p>
<p>Edit 2: showing output of <code>pip list</code></p>
<p><a href="https://i.sstatic.net/3JAHe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3JAHe.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Zd2Yg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zd2Yg.png" alt="enter image description here" /></a></p>
|
<python><tensorflow><keras><neural-network>
|
2024-02-10 17:59:50
| 0
| 331
|
Anthony
|
77,974,129
| 13,373,677
|
Selenium WebDriver Fails to Locate Elements on Linux Server but works locally
|
<p>I'm experiencing an issue with my web scraper running on a Linux server where it fails to locate elements using Selenium WebDriver. The scraper works perfectly fine on my local machine, but when I run it on the Linux server, it throws a NoSuchElementException.</p>
<p>The error message I receive is:</p>
<blockquote>
<p>selenium.common.exceptions.NoSuchElementException: Message: no such
element: Unable to locate element: {"method":"css
selector","selector":".Crom_table__p1iZz"}</p>
</blockquote>
<p>I've tried using WebDriverWait to wait for the table to load, but it doesn't seem to be working. Additionally, I've logged the page source on the Linux server and compared it to my local machine, and I've noticed significant differences in the HTML structure.</p>
<p>I am using a custom user-agent to prevent differences, but that hasn't resolved the issue. I'm not sure what else to try to resolve this issue.</p>
<p>Here's a snippet of my code:</p>
<pre><code>def team_defenses(season):
url = 'https://www.nba.com/stats/teams/defense?Season=' + season
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=get_driver_options())
driver.get(url)
# Wait for the table to load, if it doesn't load display content
try:
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, 'Crom_table__p1iZz')))
except Exception as e:
print(f"Error: {e}")
with open('error_page_source.txt', 'w') as f:
f.write(driver.page_source)
driver.quit()
return
table = driver.find_element(By.CLASS_NAME, 'Crom_table__p1iZz')
# ... rest of my code ...
driver.quit()
def get_driver_options():
options = Options()
options.add_argument('--headless') # Runs Chrome in headless mode.
options.add_argument('--no-sandbox') # Bypass OS security model
options.add_argument('--disable-dev-shm-usage') # Overcome limited resource problems
return options
</code></pre>
|
<python><linux><selenium-webdriver><server>
|
2024-02-10 17:50:38
| 1
| 311
|
Lenart Golob
|
77,974,108
| 9,251,158
|
OpenTimelineIO error from exporting a Final Cut Pro file with the Kdenlive adapter
|
<p>Follow-up on <a href="https://stackoverflow.com/questions/77846715/monkey-patching-opentimelineio-adapter-to-import-final-cut-pro-xml">Monkey-patching OpenTimelineIO adapter to import Final Cut Pro XML</a></p>
<p>I have several video projects from Final Cut Pro that I want to use in KdenLive. I found the OpenTimelineIO project and it would solve all my problems. I installed it, fixed the XML file following the suggestions from the thread above and am able to read with the FCP adapter.</p>
<p>I install <code>opentimelineio</code> from the PyPI, run <code>otioconvert</code>, and get an error related to the Kdenlive export adapter:</p>
<pre><code> $ otioconvert -i /path/to/file/ep4_fixed.fcpxml -o /path/to/file/ep4_fixed.kdenlive
Traceback (most recent call last):
File "/usr/local/bin/otioconvert", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio/console/otioconvert.py", line 278, in main
otio.adapters.write_to_file(
File "/usr/local/lib/python3.11/site-packages/opentimelineio/adapters/__init__.py", line 192, in write_to_file
return adapter.write_to_file(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio/adapters/adapter.py", line 192, in write_to_file
result = self.write_to_string(input_otio, **adapter_argument_map)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio/adapters/adapter.py", line 283, in write_to_string
return self._execute_function(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio/plugins/python_plugin.py", line 153, in _execute_function
return (getattr(self.module(), func_name)(**kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio_contrib/adapters/kdenlive.py", line 389, in write_to_string
rate = input_otio.duration().rate
^^^^^^^^^^^^^^^^^^^
AttributeError: 'opentimelineio._otio.SerializableCollection' object has no attribute 'duration'
</code></pre>
<p>When I try to import it on Kdenlive, I get the same error:</p>
<pre><code> File "/usr/local/bin/otioconvert", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio/console/otioconvert.py", line 278, in main
otio.adapters.write_to_file(
File "/usr/local/lib/python3.11/site-packages/opentimelineio/adapters/__init__.py", line 192, in write_to_file
return adapter.write_to_file(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio/adapters/adapter.py", line 192, in write_to_file
result = self.write_to_string(input_otio, **adapter_argument_map)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio/adapters/adapter.py", line 283, in write_to_string
return self._execute_function(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio/plugins/python_plugin.py", line 153, in _execute_function
return (getattr(self.module(), func_name)(**kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/opentimelineio_contrib/adapters/kdenlive.py", line 390, in write_to_string
rate = input_otio.duration().rate
^^^^^^^^^^^^^^^^^^^
AttributeError: 'opentimelineio._otio.SerializableCollection' object has no attribute 'duration'
</code></pre>
<p>The error is distinct from that thread in that the offending collection is of this sort:</p>
<pre><code>SerializableCollection(ep1-6, [otio.schema.Timeline(name='ep4', tracks=otio.schema.Stack(name='', children=[otio.schema.Track(name='-1', children=[otio.schema.Gap(name='',...
</code></pre>
<p>How can I tweak the XML file or the source code to fix this error and export for reading in Kdenlive?</p>
<p>The FCP XML file is <a href="https://www.dropbox.com/scl/fi/92ujpyxo86w56m4qpuh8b/ep4_fixed.fcpxml?rlkey=8kpom8mmebzup9uo67ue2099m&dl=0" rel="nofollow noreferrer">here</a>.</p>
<h2>update</h2>
<p>I tried this conversion under two methods.</p>
<p>First, I uninstalled <code>opentimelineio</code> (<code>python3 -m pip uninstaled opentimelineio</code>) and installed it from PyPI with <code>python3 -m pip install opentimelineio</code>. The <code>kdenlive</code> adapter is listed:</p>
<pre class="lang-py prettyprint-override"><code>$ python3 -c "import opentimelineio as otio; print(otio.adapters.available_adapter_names())"
['otio_drp_adapter', 'fcp_xml', 'otio_json', 'otioz', 'otiod', 'cmx_3600', 'svg', 'fcpx_xml', 'hls_playlist', 'rv_session', 'maya_sequencer', 'ale', 'burnins', 'AAF', 'xges', 'kdenlive']
</code></pre>
<p>When I run <code>otioconvert</code>, as above, I get the same error as above (<code>SerializableCollection</code> has no attribute <code>duration</code>).</p>
<p>Second, I removed that installation and installed it from source, cloning the project from GitHub, from the latest commit of the <code>main</code> branch (version <code>OpenTimelineIO-0.16.0.dev1</code>, installed with <code>python3 -m pip install .</code>). Then, <code>kdenlive</code> is not listed as an adapter, and indeed <code>kdenlive.py</code> is missing from the directory <code>contrib/adapters</code>.</p>
|
<python><python-3.x><xml><finalcut>
|
2024-02-10 17:47:09
| 0
| 4,642
|
ginjaemocoes
|
77,974,016
| 10,464,730
|
How can I stream an OpenCV video?
|
<p>I'm trying to make the image/video streaming from <a href="https://github.com/pornpasok/opencv-stream-video-to-web" rel="nofollow noreferrer">here</a> work on socket so I could make it send that stream to a server in the cloud. For testing, I just use <code>Live Server</code> to host the webpage and use <code>websocket</code> as the mechanism for transferring the data. I was able to make something show on the webpage, but it seems like it only shows the first frame the camera got after I ran my code. The webpage does seem to be getting more frames but I did notice that they seem to look the same. Then I stumbled upon <code>cv2.imshow()</code> and I does seem to be that <code>VideoStream</code> isn't giving a new frame/image, but if I ran the code from source repo, everything works fine.</p>
<p>Here's what I have so far, I'll only add the parts that I changed/added:</p>
<pre class="lang-py prettyprint-override"><code>async def generate():
...
# yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' +
# bytearray(encodedImage) + b'\r\n')
yield encodedImage.tobytes()
</code></pre>
<p>I made a main.py with:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
import asyncio
import threading
import aioschedule as schedule
from webstreaming import detect_motion
from socket_helper import socket_serve
if __name__ == '__main__':
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# construct the argument parser and parse command line arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--ip", type=str, required=True,
help="ip address of the device")
ap.add_argument("-o", "--port", type=int, required=True,
help="ephemeral port number of the server (1024 to 65535)")
ap.add_argument("-f", "--frame-count", type=int, default=32,
help="# of frames used to construct the background model")
args = vars(ap.parse_args())
# start a thread that will perform motion detection
t = threading.Thread(target=detect_motion, args=(
args["frame_count"],))
t.daemon = True
t.start()
asyncio.get_event_loop().run_until_complete(socket_serve())
asyncio.get_event_loop().run_forever()
I made a socket helper as well:
import time
import websockets
from websockets.sync.client import connect
from webstreaming import generate
HOST_NAME = "localhost"
SOCKET_PORT = 15000
wsConn = any
async def stream_cam_handler(websocket):
print(f"stream_cam_handler")
async for frame in generate():
time.sleep(0.2)
await websocket.send(frame)
async def handler(websocket, path):
global wsConn
wsConn = websocket
data = await websocket.recv()
reply = f"Data received as: {data}!"
await websocket.send(reply)
await stream_cam_handler(websocket)
def socket_serve():
start_server = websockets.serve(handler, "localhost", 15000)
return start_server
</code></pre>
<p>Then here's what I got for the webpage:</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<h1>Pi Video Surveillance</h1>
<img id="img_vid_feed" src="/" alt="video_feed">
<br>
<script>
let image = document.getElementById("img_vid_feed");
const ws = new WebSocket('ws://localhost:15000');
ws.binaryType = "blob";
ws.addEventListener('open', function (event) {
ws.send('Connection Established');
});
ws.onmessage = function (event) {
image.src = URL.createObjectURL(event.data);
};
</script>
</body>
</html>
</code></pre>
|
<javascript><python><opencv><websocket><video-streaming>
|
2024-02-10 17:25:01
| 0
| 605
|
rminaj
|
77,973,904
| 5,072,010
|
Lambda invoke unhandled, timing out after 3 seconds?
|
<p>Have the following lambda function and dynamodb defined via terraform:</p>
<pre><code>provider "aws" {
region = "us-east-1" # Change to your desired region
access_key = "test" # Access key for LocalStack
secret_key = "test" # Secret key for LocalStack
skip_credentials_validation = true
skip_requesting_account_id = true
endpoints {
dynamodb = "http://localhost:4566" # LocalStack DynamoDB endpoint
lambda = "http://localhost:4566"
iam = "http://localhost:4566"
}
}
resource "aws_iam_role" "lambda_execution_role" {
name = "lambda_execution_role"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Action = "sts:AssumeRole",
Effect = "Allow",
Principal = {
Service = "lambda.amazonaws.com"
}
}
]
})
}
resource "aws_iam_role_policy_attachment" "lambda_dynamodb_access" {
policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
role = aws_iam_role.lambda_execution_role.name
}
resource "aws_iam_role_policy" "lambda_execution_policy" {
name = "lambda_execution_policy"
role = aws_iam_role.lambda_execution_role.id
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Action = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
Effect = "Allow",
Resource = "*"
},
{
Action = "lambda:InvokeFunction",
Effect = "Allow",
Resource = aws_lambda_function.add_user_post.arn
}
# Add more permissions as needed
]
})
}
resource "aws_lambda_function" "add_user_post" {
function_name = "AddUserPost"
handler = "addUserPostFunction.lambda_handler"
runtime = "python3.8"
filename = data.archive_file.lambda_function_add_user_post.output_path # ZIP file containing your Python code
role = aws_iam_role.lambda_execution_role.arn
source_code_hash = filebase64sha256(data.archive_file.lambda_function_add_user_post.output_path)
environment {
variables = {
TABLE_NAME = "UserProfileTable"
}
}
}
data "archive_file" "lambda_function_add_user_post" {
type = "zip"
source_dir = "${path.module}/Source"
output_path = "${path.module}/lambda_function_add_user_post.zip"
}
</code></pre>
<p>DynamoDB:</p>
<pre><code> provider "aws" {
region = "us-east-1" # Change to your desired region
access_key = "test" # Access key for LocalStack
secret_key = "test" # Secret key for LocalStack
skip_credentials_validation = true
skip_requesting_account_id = true
endpoints {
dynamodb = "http://localhost:4566" # LocalStack DynamoDB endpoint
lambda = "http://localhost:4566"
}
}
resource "aws_dynamodb_table" "users_table" {
name = "UserProfileTable"
billing_mode = "PROVISIONED" # Or use "PAY_PER_REQUEST" for on-demand capacity
read_capacity = 5
write_capacity = 5
hash_key = "UserID"
range_key = "PostID"
attribute {
name = "UserID"
type = "N"
}
attribute {
name = "PostID"
type = "N"
}
tags = {
Name = "dynamodb-table-1"
Environment = "production"
}
}
</code></pre>
<p>And then the .py function that the lambda executes:</p>
<pre><code>import boto3
import os
dynamodb = boto3.resource('dynamodb', region_name='us-east-1', endpoint_url='http://localhost:4566')
def lambda_handler(event, context):
try:
# Your DynamoDB table name
table_name = os.environ["TABLE_NAME"]
# Sample data to be added to DynamoDB
item = {
'userid': '1',
'postid': '1',
'content': 'Yay!',
}
# DynamoDB put operation
table = dynamodb.Table(table_name)
result = table.put_item(Item=item)
print('Item added to DynamoDB:', result)
return {
'statusCode': 200,
'body': 'Item added to DynamoDB successfully',
}
except Exception as e:
print('Error adding item to DynamoDB:', e)
return {
'statusCode': 500,
'body': 'Error adding item to DynamoDB',
}
</code></pre>
<p><code>Terraform apply</code>ing the two infrastructure components yields no errors, and I can check their existence via <code>awslocal dynamodb list-tables</code> or <code>awslocal lambda list-functions</code> and they return the expected values.</p>
<p>However, when running: <code>awslocal --endpoint-url=http://localhost:4566 lambda invoke --function-name AddUserPost --payload '{}' output.json</code></p>
<p>I get</p>
<pre><code>{
"StatusCode": 200,
"FunctionError": "Unhandled",
"ExecutedVersion": "$LATEST"
}
</code></pre>
<p>The <code>output.json</code> yields:
<code>{"errorMessage":"2024-02-10T16:42:23Z cd163c6c-75d4-4be2-9539-cec447c3624a Task timed out after 3.00 seconds"}</code></p>
<p>The <code>unhandled</code> functionerror made me think the problem was with the handler defined in the lambda function, but I have tripled checked the syntax/spelling of this and I am pretty sure it correct. Not sure how to proceed.</p>
<p>Edit: From Helder's below answer, modifying the type of data in the insert was necessary. This removed one error, but then led to the <code>invoke</code> causing a failure to connect to the endpoint. This <a href="https://stackoverflow.com/questions/61749489/getting-could-not-connect-to-the-endpoint-url-error-with-boto3-when-deploying">stack question</a> has an answer which states that the boto3 runtime should set the <code>endpoint_url</code> = 'http://host.docker.internal:4566`. Unclear to me <em>why</em> this is necessary, but it fixed the problem fully.</p>
|
<python><aws-lambda><terraform><amazon-dynamodb><localstack>
|
2024-02-10 16:51:09
| 2
| 1,459
|
Runeaway3
|
77,973,757
| 11,329,736
|
Access command line values during Snakemake pipeline run
|
<p>Is it possible to access the values of arguments that were parsed to <code>Snakemake</code> in the command line or set in the profile yaml file?</p>
<p>For example, I would like to know whether <code>--slurm</code> is <code>True</code> so that I can split certain jobs to allow for optimal parallelisation.</p>
|
<python><command-line><snakemake>
|
2024-02-10 16:07:18
| 1
| 1,095
|
justinian482
|
77,973,278
| 352,972
|
Determine which microphone source to use for Azure Cognitive Services with Python
|
<p>I've set up an instance of Azure Cognitive Services to listen on a microphone for a key phrase. This works fine, however I'm unable to tell it to listen to a specific microphone on my Apple Mac.</p>
<p>The code I have is:</p>
<pre class="lang-py prettyprint-override"><code>mic_name = self.preferences.get('mic_name', None)
self.audio_config = speechsdk.audio.AudioConfig(device_name=mic_name) if mic_name else None
...
self.keyword_recognizer = speechsdk.KeywordRecognizer(audio_config=self.audio_config)
</code></pre>
<p>Where I'm providing the microphone name as the device name to the <code>speechsdk.audio.AudioConfig</code> library. However from what I can read via <a href="https://aka.ms/csspeech/microphone-selection" rel="nofollow noreferrer">https://aka.ms/csspeech/microphone-selection</a> it seems I need to provide the device ID, not the name or index which is what pyaudio gives me.</p>
<p>I've been searching online in an attempt to find a solution to getting the device ID, and the only thing I've been able to determine is that the pyobjc package might be needed in order to interact with the hardware via objective-c. Yet my attempts at that have also failed.</p>
<p>Does anyone know of an existing library, or example I could reference, where a Python script is able to return the ID of a microphone device so I can provide it to the Speech Services SDK? (I also want this to work for windows, but that is a seperate matter)</p>
<p>===</p>
<p>Update - to be clear, I need the user to be able to select the microphone they want to use from a dropdown, and then pass said Microphone ID through to the <code>speechsdk.audio.AudioConfig</code> library.</p>
|
<python><objective-c><azure><azure-cognitive-services>
|
2024-02-10 13:38:57
| 1
| 452
|
Dan
|
77,973,265
| 14,348,996
|
Why does the function to close logging handlers not run when I expect?
|
<p>Can anyone explain why the <code>close_handlers</code> function does not run across every handler that is an instance of <code>MyHandler</code>? When I run the script below, the first <code>close_handlers</code> call in <code>thing_1</code>'s <code>finally</code> block only seems to close 1 of the 2 handlers added to the logger in <code>set_up_logging</code>.</p>
<p>When it gets to the second <code>close_handlers</code> call, it appears that this closes 3 handlers, and by this time, the handler that was 'missed' the first time round has appended all 4 log statements to its buffer. I'd expect all handlers to flush for <code>thing_1</code> and be removed, and this to be replicated in <code>thing_2</code> but this is clearly not happening.</p>
<p>The use case is gathering logs for particular function calls to send to a separate messaging service, and I'd like all the logs to be gathered for each function call and sent as a formatted group. I'm finding this behavior on the formatted messages I'm sending (i.e. duplicated, overlapping messages being sent because handlers aren't closing when they're meant to). I'm closing and re-adding handlers each time as I need info about the current function call to format the title of each message:</p>
<pre><code>import logging
from logging.handlers import BufferingHandler
_logger = logging.getLogger(__name__)
class MyHandler(BufferingHandler):
def __init__(self, capacity: int):
super().__init__(capacity)
def flush(self):
print(self.buffer)
super().flush()
def set_up_logging():
logging.basicConfig()
root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)
root_logger.addHandler(MyHandler(10))
root_logger.addHandler(MyHandler(5))
def close_handlers():
for handler in logging.getLogger().handlers:
if isinstance(handler, MyHandler):
logging.getLogger().removeHandler(handler)
handler.close()
def thing_1():
try:
set_up_logging()
_logger.info("hi")
_logger.error("something went wrong")
finally:
print(len(logging.getLogger().handlers))
close_handlers()
def thing_2():
try:
set_up_logging()
_logger.info("hi again")
_logger.error("something else went wrong")
finally:
print(len(logging.getLogger().handlers))
close_handlers()
if __name__ == "__main__":
thing_1()
thing_2()
</code></pre>
|
<python><logging>
|
2024-02-10 13:34:50
| 0
| 1,236
|
henryn
|
77,973,238
| 8,398,851
|
How to compare 2 json files and merge in one with python
|
<p>Im trying to compare if a value from jsonA(deployed_devices) exists in jsonB(assigned_components) and merge them into one new json Object. The goal is to iterate over every asset_it in the deployed_devices json and check if and where the asset_id matches witch the id in assigned_components[data][rows] and generate a new JSON. Heres my current code which generates a new json but leaves the 'Components' in the new JSON(matched_devices) completely empty:</p>
<pre><code>def match_device_components(assigned_components, deployed_devices):
matched_devices = []
for component_entry in assigned_components:
for component_name, component_data in component_entry.items():
component_rows = component_data.get('data', {}).get('rows', [])
assigned_component_ids = {component["id"] for component in component_rows}
for device in deployed_devices:
matched_components = []
for component in component_rows:
if component["id"] == device["asset_id"]:
matched_components.append(component["name"])
matched_device = {
"model_name": device["model_name"],
"asset_tag": device["asset_tag"],
"asset_id": device["asset_id"],
"components": matched_components
}
matched_devices.append(matched_device)
return matched_devices
</code></pre>
<p>Those are the JSON to work with (cropped the output a bit for better overview):</p>
<pre><code>deployed_devices = [
{
"model_name": "RACK/BOX",
"asset_tag": "X-0043",
"asset_id": "234"
},
.... more devices ...
]
assigned_components = [
{
"Camera":{
"id":70,
"name":"Camera",
"user_can_checkout":1,
"available_actions":{
"checkout":true,
"checkin":true,
"update":true,
"delete":true
}
},
"data":{
"total":52,
"rows":[
{
"assigned_pivot_id":710,
"id":133,
"name":"BOX 25x17x21",
"qty":2,
"note":"",
"type":"asset",
"created_at":{
"datetime":"2024-01-15 11:59:06",
"formatted":"15.01.2024 11:59"
},
"available_actions":{
"checkin":true
}
},
... many more rows ...
]
}
},
{
"LED":{
"id":69,
"name":"LED ",
"user_can_checkout":1,
"available_actions":{
"checkout":true,
"checkin":true,
"update":true,
"delete":true
}
},
"data":{
"total":10,
"rows":[
{
"assigned_pivot_id":823,
"id":57,
"name":"BOX 25x17x21",
"qty":1,
"note":"None",
"type":"asset",
"created_at":{
"datetime":"2024-01-22 10:50:34",
"formatted":"22.01.2024 10:50"
},
"available_actions":{
"checkin":true
}
},
... many more rows ...
]
}
}
]
</code></pre>
|
<python><json><python-3.x><comparison>
|
2024-02-10 13:23:36
| 2
| 323
|
Nico
|
77,973,204
| 10,746,224
|
Scraping using CSS Selectors with ::before isn't displaying text
|
<p>I'm trying to scrap <code>Monday, 9:30 AM</code> from <a href="https://www.ebay.com/itm/145599690533?" rel="nofollow noreferrer">this</a> ebay listing, using scrapy.</p>
<p>From a scrapy shell <code>scrapy shell https://www.ebay.com/itm/145599690533?</code>:</p>
<pre><code>>>> response.css('span.ux-timer__time-left::text')
[]
</code></pre>
<p>I've also tried copying the css path and xpath from Firefox Dev but they gave the same result.</p>
<p>I suspect the issue has something to do with the <code>::before</code> before the plaintext, but I know next to nothing about that.</p>
<p>What am I missing?</p>
|
<python><css><web-scraping><scrapy><css-selectors>
|
2024-02-10 13:15:48
| 1
| 16,425
|
Lord Elrond
|
77,973,198
| 9,172,401
|
Solution for Django model composite foreign keys
|
<p>I’m working with two models in Django: Order and OrderItems. I want to establish a one-to-one relationship between them using two columns instead of the usual single column.</p>
<p>I’m aware of solutions like SQL Alchemy’s ForeignKeyConstraint and the django-composite-foreignkey package, but I’d prefer not to use ForeignKeyConstraint and the latter isn’t compatible with Django 4.0. Is there an alternative solution available for creating a composite foreign key that references two columns in Django 4.0?</p>
<pre><code>class Orders(models.Model):
objects = models.Manager()
id = models.AutoField(primary_key=True)
order_id = models.CharField(max_length=255)
--------> here it should reference with two columns ( id and department_id )
orderItem = models.ForeignKey('OrderCustomers', on_delete=models.PROTECT, db_column='order_item_id', to_field='id')
order_item_id = models..CharField(max_length=255)
department_id = models.CharField(max_length=255)
created_at = DateTimeWithoutTZField(auto_now_add=True)
updated_at = DateTimeWithoutTZField(auto_now=True)
def __str__(self):
return self.order_id
</code></pre>
|
<python><django><sqlalchemy>
|
2024-02-10 13:11:20
| 1
| 598
|
Marcos DaSilva
|
77,973,137
| 2,021,355
|
Redefine input function in Python to yield a value from a predetermined list of values each time it's called
|
<p>Is there a clean "Pythonic" way to write an input-replacement function in Python that will yield a value from a predetermined list of input values each time it's called?</p>
<pre><code>raw_data = ['8', '2', '1']
c = 0
def input():
global c
c += 1
return raw_data[c - 1]
for _ in range(3):
print(input())
</code></pre>
<p>This does and is expected to output:</p>
<pre><code>8
2
1
</code></pre>
<p>Intuitively <code>yield</code> seems like it ought to be part of the solution but I cannot wrap my head around how to implement it as an <code>input</code> replacement.</p>
|
<python><yield>
|
2024-02-10 12:56:11
| 3
| 1,473
|
Eric D
|
77,973,107
| 1,145,666
|
How can I read more than 4096 bytes from stdin, copy-pasted to a terminal on Linux?
|
<p>I have this code:</p>
<pre><code>import sys
binfile = "data.hex"
print("Paste ascii encoded data.")
line = sys.stdin.readline()
b = bytes.fromhex(line)
with open(binfile, "wb") as fp:
fp.write(b)
</code></pre>
<p>Problem is that never more than 4096 bytes are read in the <code>sys.stdin.readline()</code> call. How can I make that buffer larger? I tried to supply a larger number to the call, but that had no effect.</p>
<p><strong>Update</strong> I changed my <code>stdin</code> reading code to this:</p>
<pre><code>line = ''
while True:
b = sys.stdin.read(1)
sys.stdin.flush()
line += b
if b == "\n":
break
print(f"Read {len(line)} bytes")
</code></pre>
<p>Still run into that limit.</p>
|
<python><stdin><readline>
|
2024-02-10 12:46:05
| 1
| 33,757
|
Bart Friederichs
|
77,972,795
| 22,466,650
|
How to set a common title for all the axes in a column of subfigures?
|
<p>I'm trying to set a common title for the axes of the same column.</p>
<p>In the example below, we should have the ids titles only once and they should be placed at the very top of each column.</p>
<p>I can't figure it out. Can you guys show me how to do that ?</p>
<p><a href="https://i.sstatic.net/cXm2O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cXm2O.png" alt="enter image description here" /></a></p>
<p>Here is my code :</p>
<pre><code>import matplotlib.pyplot as plt
categories = ["category1", "category2", "category3"]
ids = ["id1", "id2"]
data = [
[{"A": 1, "B": 2, "C": 3}, {"D": 4, "E": 5, "F": 6}],
[{"G": 7, "H": 8, "I": 9}, {"J": 10, "K": 11, "L": 12}],
[{"M": 13, "N": 14, "O": 15}, {"P": 16, "Q": 17, "R": 18}],
]
fig = plt.figure(figsize=(15, 6))
subfigs = fig.subfigures(len(categories), 1)
for category, subfig, data_row in zip(categories, subfigs.flat, data):
_ = subfig.supylabel(t=category, x=0.09)
axs = subfig.subplots(1, len(ids))
for id, ax, data_dict in zip(ids, axs.flat, data_row):
_ = ax.set(title=id)
_ = ax.bar(data_dict.keys(), data_dict.values())
plt.show()
</code></pre>
|
<python><matplotlib>
|
2024-02-10 11:17:34
| 3
| 1,085
|
VERBOSE
|
77,972,468
| 11,861,874
|
Formatting a Numbers in DataFrame
|
<p>Can someone please help me to format just numbers in a data frame to "int" and "comma"? I tried doing the same and got an error message mentioned below.</p>
<p>Error: int() argument must be string, a bytes-like object or a number, not 'DataFrame'</p>
<pre><code>import pandas as pd
data1 = {'Header':['L1','L2','L3'], 'Val1':[float(1000.2),float(2000.40),float(300.55)],
'Val2':[float(4000.3),float(500.00),float(60000.55)]}
</code></pre>
<p>The expected output is mentioned below.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Header</th>
<th>Val1</th>
<th>Val2</th>
</tr>
</thead>
<tbody>
<tr>
<td>L1</td>
<td>1,000</td>
<td>4,000</td>
</tr>
<tr>
<td>L2</td>
<td>2,000</td>
<td>500</td>
</tr>
<tr>
<td>L3</td>
<td>301</td>
<td>60,001</td>
</tr>
</tbody>
</table></div>
|
<python><pandas>
|
2024-02-10 09:19:26
| 1
| 645
|
Add
|
77,972,203
| 13,088,678
|
joining multiple dataframes : Reference column is ambiguous
|
<p>I have 3 dataframes. Joining column <code>id</code> is present exactly once in each dataframe.</p>
<ul>
<li><p>aggregated_df : ['id', 'country_agg', 'reference_agg', 'activities_agg', 'type_agg', 'is_batch_agg', 'is_stream_agg']</p>
<ul>
<li><bound method DataFrame.printSchema of DataFrame[id: string, country_agg: array<structkey:string,timestamp:timestamp>, reference_agg: array<structkey:string,timestamp:timestamp>, activities_agg: array<structkey:array<string,timestamp:timestamp>>, type_agg: array<structkey:array<string,timestamp:timestamp>>, is_batch_agg: array<structkey:string,timestamp:timestamp>, is_stream_agg: array<structkey:string,timestamp:timestamp>]></li>
</ul>
</li>
<li><p>activities_exploded_df : ['id', 'activities_details']</p>
<ul>
<li><bound method DataFrame.printSchema of DataFrame[id: string, activities_details: array<structkey:string,timestamp:timestamp>]></li>
</ul>
</li>
<li><p>type_exploded_df : ['id', 'type_details']</p>
<ul>
<li><bound method DataFrame.printSchema of DataFrame[id: string, type_details: array<structkey:string,timestamp:timestamp>]></li>
</ul>
</li>
</ul>
<p><strong>Below join works:</strong> joining just <code>2 dataframes</code></p>
<pre><code> joined_df = aggregated_df.join(
type_exploded_df, aggregated_df.id == type_exploded_df.id, "left"
)
</code></pre>
<p><strong>Below join fails with error:</strong> joining <code>3 dataframes</code>
Error : AnalysisException: [AMBIGUOUS_REFERENCE] Reference <code>id</code> is ambiguous, could be: [<code>id</code>, <code>id</code>].</p>
<pre><code> joined_df = aggregated_df.join(
type_exploded_df, aggregated_df.id == type_exploded_df.id, "left"
).join(
activities_exploded_df, type_exploded_df.id == activities_exploded_df.id, "left"
)
</code></pre>
<p><strong>Below join works</strong> : joining <code>3 dataframes</code>, change joining way to <code>"id"</code></p>
<pre><code>joined_df = aggregated_df.join(
type_exploded_df, "id", "left"
).join(
activities_exploded_df, "id", "left"
)
</code></pre>
<p>Also tested with dummy data, same way of joining above which failed in 2nd case above, is working here.</p>
<pre><code>from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.functions import col
# Create Spark session
spark = SparkSession.builder.appName('example').getOrCreate()
# Create the first DataFrame
data1 = [(1, 'A'), (2, 'B'), (3, 'C')]
df1 = spark.createDataFrame(data1, ['id', 'col1'])
# Create the second DataFrame
data2 = [(1, 'X'), (2, 'Y'), (4, 'Z')]
df2 = spark.createDataFrame(data2, ['id', 'col2'])
# Create the third DataFrame
data3 = [(1, 'M'), (3, 'N'), (4, 'O')]
df3 = spark.createDataFrame(data3, ['id', 'col3'])
joined_df = df1.join(df2, df1.id == df2.id, 'left').join(df3, df1.id == df3.id, 'left')
joined_df.show()
</code></pre>
<p><strong>Question</strong> : when I join 3 dataframes (2nd case), join fails with error Reference <code>id</code> is ambiguous. Howeever all dataframes just have only single occurence of column <code>id</code>. Also same way of join is working with dummy data at the end.</p>
|
<python><apache-spark><pyspark>
|
2024-02-10 07:21:26
| 1
| 407
|
Matthew
|
77,971,994
| 10,466,809
|
Shorten regex to capture digits grouped by variable delimiter
|
<p>I have a regex that is trying to capture integer numbers like</p>
<pre><code>12_456_789
</code></pre>
<p>But the delimiter can by either any of <code>[\ _.,]</code> or no delimiter. My regex looks like</p>
<pre><code>\d{1,3}((_\d{3})*|(\ \d{3})*|(\.\d{3})*|(,\d{3})*|(\d{3})*)
</code></pre>
<p>This regex works as epxected, but it's annoying having to repeat myself for each delimiter. I'm tempted to do something like</p>
<pre><code>\d{1,3}(([\ _.,]\d{3})*)
</code></pre>
<p>But this would match a string like</p>
<pre><code>123,456.789_987
</code></pre>
<p>which is not acceptable.</p>
<p>Is there some kind of abbreviation along the lines of my temptation that can be used? Or am I stuck with conjunction repetition?</p>
<p>I'm working in python verbose regex.</p>
|
<python><regex>
|
2024-02-10 05:46:11
| 4
| 1,125
|
Jagerber48
|
77,971,943
| 3,112,791
|
Blender bpy Module Ignores GPU Configuration for Rendering
|
<p>I'm attempting to use <code>bpy</code> (Blender) to render a scene using Python 3.10.0 on Windows. My laptop is equipped with an NVIDIA GeForce RTX 3050. Despite configuring the GPU, I consistently observe the CPU being used instead of the GPU. What am I missing?</p>
<pre><code>def foo():
bpy.ops.wm.open_mainfile(filepath=f"./sample4.blend")
duration = 10
frames = int(math.ceil(duration * 30))
# Set GPU rendering options
preferences = bpy.context.preferences
cycles_preferences = preferences.addons['cycles'].preferences
cycles_preferences.compute_device_type = 'CUDA'
bpy.context.scene.cycles.device = 'GPU'
cycles_preferences.get_devices()
# I verified that [0] is a NVIDIA GeForce RTX 3050 Laptop GPU
bpy.context.preferences.addons['cycles'].preferences.devices[0].use = True
bpy.context.scene.render.engine = 'CYCLES'
bpy.context.scene.cycles.device = 'GPU'
bpy.context.scene.cycles.feature_set = 'SUPPORTED'
bpy.context.scene.cycles.compute_device_type = 'CUDA'
bpy.context.scene.cycles.device = 'GPU'
bpy.context.scene.cycles.use_cpu = False # Disable CPU rendering
bpy.context.scene.render.resolution_x = 1280
bpy.context.scene.render.resolution_y = 720
bpy.context.scene.render.fps = 30
bpy.context.scene.frame_start = 1
bpy.context.scene.frame_end = frames
bpy.context.scene.render.image_settings.file_format = "FFMPEG"
bpy.context.scene.render.ffmpeg.format = "MPEG4"
bpy.context.scene.render.ffmpeg.codec = "H264"
bpy.context.scene.render.filepath = r"C:/.../final.mp4" # Modify this path
</code></pre>
|
<python><rendering><blender><bpy>
|
2024-02-10 05:09:44
| 1
| 427
|
Pavel Angel Mendoza Villafane
|
77,971,878
| 10,200,497
|
How can I use Pandas to format individual cells conditionally in Excel?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'a': ['long', 'short', np.nan], 'b': [np.nan, 3, 2]})
</code></pre>
<p>I want to style the output of <code>df.to_excel</code>. Like this:</p>
<p><a href="https://i.sstatic.net/hVc8u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hVc8u.png" alt="enter image description here" /></a></p>
<p>If <code>df.a == 'long'</code>, <code>a</code> is green and if <code>df.a == 'short'</code>, <code>a</code> is red.
I have read multiple posts on SO but couldn't figure out how to do it.</p>
<p>This is one of my attempts that got really close to the result:</p>
<pre><code>def bounded_highlights(df):
conds = [df.a == 'short', df.a == 'long']
labels = ['background-color: pink', 'background-color: lime']
array = np.select(conds, labels, default='')
return array
</code></pre>
<p>When I used <code>df.style.apply(bounded_highlights, axis=None).to_excel(...</code> it doesn't work.</p>
|
<python><pandas>
|
2024-02-10 04:17:03
| 1
| 2,679
|
AmirX
|
77,971,802
| 10,292,330
|
Is class dict(dict) undesirable if I only want to overwrite __str__
|
<p>Is this approach undesirable? I want all proceeding dicts to behave this way.</p>
<pre><code>import json
class dict(dict):
def __str__(self) -> str:
return json.dumps(self, indent=4)
</code></pre>
|
<python>
|
2024-02-10 03:27:10
| 0
| 5,561
|
OysterShucker
|
77,971,697
| 899,954
|
How can I get my Python GStreamer app to show the webcam feed properly instead of showing garbled green video?
|
<p>Trying to build a very simple Python program to open the webcam on my MacBook and display that on the screen. However, I cannot get the Python version of my pipeline to display the webcam video, and I get garbled / scrolling green lines.</p>
<p>Would like to understand where I went wrong and how I can fix the program.</p>
<p>So far, I have the following program:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstVideo', '1.0')
gi.require_version('GLib', '2.0')
gi.require_version('GObject', '2.0')
from gi.repository import GObject, Gst, GstVideo
GObject.threads_init()
Gst.init(None)
class GstCaps(object):
def __init__(self, caps_string):
self.caps_string = caps_string
def __new__(cls, caps_string):
cf = Gst.ElementFactory.make('capsfilter', None)
caps = Gst.Caps.from_string(caps_string)
cf.set_property('caps', caps)
return cf
class Webcam(object):
# macOS, open webcam and display on screen
# ./gst-launch-1.0 -v -e
# avfvideosrc device-index=0 !
# "video/x-raw, width=1280, height=720, format=(string)YUY2, texture-target=rectangle" !
# rawvideoparse width=1280 height=720 format=yuy2 !
# queue !
# autovideoconvert !
# autovideosink
def __init__(self, device_index: int = 0):
self.mainloop = GObject.MainLoop()
self.pipeline = Gst.ElementFactory.make('pipeline', 'pipeline')
self.source = Gst.ElementFactory.make('avfvideosrc', 'source')
self.source.set_property('device-index', device_index)
self.caps = GstCaps('video/x-raw, width=1280, height=720, format=(string)YUY2, texture-target=rectangle')
# self.source.set_property('caps', caps)
self.rawvideoparse = Gst.ElementFactory.make('rawvideoparse', 'rawvideoparse')
self.rawvideoparse.set_property('width', 1280)
self.rawvideoparse.set_property('height', 720)
self.rawvideoparse.set_property('format', 'yuy2')
self.queue = Gst.ElementFactory.make('queue', 'queue')
self.autovideoconvert = Gst.ElementFactory.make('autovideoconvert', 'autovideoconvert')
self.autovideosink = Gst.ElementFactory.make('autovideosink', 'autovideosink')
if (not self.pipeline or
not self.source or
not self.caps or
not self.rawvideoparse or
not self.queue or
not self.autovideoconvert or
not self.autovideosink
):
print('ERROR: Not all elements could be created.')
sys.exit(1)
self.pipeline.add(self.source)
self.pipeline.add(self.rawvideoparse)
self.pipeline.add(self.queue)
self.pipeline.add(self.autovideoconvert)
self.pipeline.add(self.autovideosink)
linked = self.source.link(self.rawvideoparse)
linked = linked and self.rawvideoparse.link(self.queue)
linked = linked and self.queue.link(self.autovideoconvert)
linked = linked and self.autovideoconvert.link(self.autovideosink)
if not linked:
print("ERROR: Elements could not be linked")
sys.exit(1)
self.bus = self.pipeline.get_bus()
self.bus.add_signal_watch()
self.bus.connect('message::eos', self.on_eos)
self.bus.connect('message::error', self.on_error)
def run(self):
self.pipeline.set_state(Gst.State.PLAYING)
self.mainloop.run()
def quit(self):
self.pipeline.set_state(Gst.State.NULL)
self.mainloop.quit()
def on_eos(self, bus, message):
self.quit()
def on_error(self, bus, message):
print(f'ERROR: {message.parse_error()}' )
self.quit()
webcam = Webcam()
webcam.run()
</code></pre>
<p>With this program, I get the following video:</p>
<p><a href="https://i.sstatic.net/Lum6h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lum6h.png" alt="bad_video" /></a></p>
<p>When I run the pipeline from the command line, I get the proper video output as seen below:</p>
<pre class="lang-bash prettyprint-override"><code>gst-launch-1.0 -v -e avfvideosrc device-index=1 ! \
"video/x-raw, width=1280, height=720, format=(string)YUY2, texture-target=rectangle" ! \
rawvideoparse width=1280 height=720 format=yuy2 ! \
queue ! \
autovideoconvert ! \
autovideosink
</code></pre>
<p><a href="https://i.sstatic.net/DIzkQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DIzkQ.png" alt="good_video" /></a></p>
<p>The output from the Python program looks very familiar to a pipeline error when the format is not specified correctly. I had run into this when I was learning how to setup the pipeline in the command line.</p>
|
<python><gstreamer><python-gstreamer>
|
2024-02-10 02:18:18
| 1
| 783
|
HanSooloo
|
77,971,628
| 10,318,539
|
CircuitError: 'inverse() not implemented for reset.'
|
<p>Initially, this code was functioning correctly, but I am currently encountering an error. I am attempting to create a circuit using a unitary vector and applying inversion.</p>
<pre><code>desire_vector = [0. 0.5 0. 0.5 0. 0.5 0. 0.5]
qc = QuantumCircuit(qr, cr)
qc.initialize(desired_vector,qr)
backend = BasicAer.get_backend("qasm_simulator")
qc = transpile(qc,backend).inverse()
</code></pre>
<p><strong>The Error:</strong></p>
<p>Can we inverse a quantum ciruit that is initialized using a statevector?</p>
|
<python><qiskit>
|
2024-02-10 01:38:37
| 1
| 485
|
Engr. Khuram Shahzad
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.