| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable) |
|---|---|---|---|---|---|---|---|---|
78,567,558
| 234,146
|
Create a numpy table in C++
|
<p>I have a C++ DLL that utilizes numpy tables created in Python. I can add and retrieve data in C++ and Python from the table easily.</p>
<p>The table currently is created in Python using:</p>
<pre><code>class MyTable(object):
    _table_type = np.dtype({'names': ['field1', 'field2'],
                            'formats': [np.float32, np.float32],
                            'offsets': [0, 4]})
    _table_len = 42

    def __init__(self):
        self._table = np.empty(MyTable._table_len, MyTable._table_type)
</code></pre>
<p>I would like to be able to create the table originally in C++ using a structure def, e.g.:</p>
<pre><code>struct Table
{
    float field1;
    float field2;
};

pybind11::array_t<Table> MyTable = magic_function(length, /* ...necessary Table defs here... */);
</code></pre>
<p>and then access this from python as a numpy table or in C++ as an array of Table.</p>
<p>Any guidance would be welcome.</p>
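<p>For reference, a quick Python-side sanity check (a minimal sketch, not the pybind11 answer itself) that the dtype above describes the same memory layout as the C++ struct — two packed <code>float32</code> fields at offsets 0 and 4, 8 bytes per row. On the C++ side, pybind11's <code>PYBIND11_NUMPY_DTYPE</code> macro is the usual way to register such a POD struct as a structured dtype for <code>array_t<Table></code>.</p>

```python
import numpy as np

# Python-side dtype that must match the C++ struct byte-for-byte:
# two packed float32 fields -> offsets 0 and 4, itemsize 8.
table_type = np.dtype({'names': ['field1', 'field2'],
                       'formats': [np.float32, np.float32],
                       'offsets': [0, 4]})

table = np.empty(42, dtype=table_type)
table['field1'] = 1.0
table['field2'] = 2.0

print(table_type.itemsize)  # 8, the same as sizeof(Table) for a packed struct
```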
|
<python><c++><numpy><pybind11>
|
2024-06-02 19:54:56
| 0
| 1,358
|
Max Yaffe
|
78,567,412
| 7,090,501
|
Can't close out of pop up box with selenium
|
<p>I'm trying to download a pdf at the following link:</p>
<p><a href="https://www.cms.gov/medicare-coverage-database/view/lcd.aspx?lcdid=39763&ver=8" rel="nofollow noreferrer">https://www.cms.gov/medicare-coverage-database/view/lcd.aspx?lcdid=39763&ver=8</a></p>
<p>Most documents at this website have a pop-up for accepting a license, which I can close using</p>
<pre><code>driver.find_element(By.ID, "btnAcceptLicense").click()
</code></pre>
<p>The link above has another pop up afterward that contains a "Close" button, which I can find with</p>
<pre><code>elem = driver.find_element(By.CLASS_NAME, "modal-footer").find_element(By.CLASS_NAME, "btn")
</code></pre>
<p>But when I run <code>elem.click()</code>, I get an <code>ElementNotInteractableException</code>.</p>
<p>Here is what I've written so far:</p>
<pre class="lang-py prettyprint-override"><code>import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options=chrome_options)
url = "https://www.cms.gov/medicare-coverage-database/view/lcd.aspx?lcdid=39859&ver=4"
driver.get(url)
time.sleep(0.5)
# Accept license
driver.find_element(By.ID, "btnAcceptLicense").click()
time.sleep(0.5)
# Close the second pop up (fails here)
driver.find_element(By.CLASS_NAME, "modal-footer").find_element(By.CLASS_NAME, "btn").click()
# Download document
driver.find_element(By.ID, "btnDownload").click()
</code></pre>
|
<python><selenium-webdriver>
|
2024-06-02 18:46:17
| 3
| 333
|
Marshall K
|
78,567,404
| 520,556
|
Particular reshape of pandas dataframe
|
<p>How can I create a new dataframe if the input looks like this:</p>
<pre><code>inputTable = pd.DataFrame({
'A' : [1, 2, 3, 4, 5],
'B' : [2, 3, 4, 5, 6],
'C' : [3, 4, 5, 6, 7]
})
</code></pre>
<p>and the resulting table should look like this:</p>
<pre><code>resultingTable = pd.DataFrame({
'Category' : ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C'],
'Value' : [1, 2, 3, 4, 5, 2, 3, 4, 5, 6, 3, 4, 5, 6, 7]
})
</code></pre>
<p>I am aware of <code>pd.wide_to_long()</code> but it seems to be overkill for the simple case I am dealing with. Is there something more appropriate and, perhaps, more efficient?</p>
<p>There is also a <code>pd.melt()</code> solution as explained in one of the replies, with <a href="https://stackoverflow.com/questions/68961796/how-do-i-melt-a-pandas-dataframe">more details here</a>. However, this is <strong>not</strong> the only solution, as the other reply suggests.</p>
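<p>For completeness, a minimal sketch of both approaches mentioned above — <code>melt</code>, and a constructor-based alternative built from <code>repeat</code> and a column-major <code>ravel</code>:</p>

```python
import pandas as pd

inputTable = pd.DataFrame({
    'A': [1, 2, 3, 4, 5],
    'B': [2, 3, 4, 5, 6],
    'C': [3, 4, 5, 6, 7]
})

# melt stacks the columns into (variable, value) pairs, column by column
resultingTable = inputTable.melt(var_name='Category', value_name='Value')

# Equivalent without melt: repeat the column names and flatten column-major
alt = pd.DataFrame({
    'Category': inputTable.columns.repeat(len(inputTable)),
    'Value': inputTable.to_numpy().ravel(order='F'),
})
```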
|
<python><pandas>
|
2024-06-02 18:43:40
| 2
| 1,598
|
striatum
|
78,567,022
| 7,758,213
|
Running PowerShell commands on a remote Windows 7 machine using Paramiko in Python
|
<p>I'm using Paramiko in Python to send commands to (and retrieve data from) another server.
When I send a <strong>PowerShell</strong> command to a Windows 10 server it all works OK.</p>
<p>But when sending the same command to Windows 7, it gets stuck indefinitely.</p>
<p>Sending cmd commands works OK on both OS versions.</p>
<p>my example:</p>
<pre><code>ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server_address, connect_port, server_username, server_password)
stdin, stdout, stderr = ssh_client.exec_command("powershell 'echo hello'")
# Trying to run any of the next lines is stuck on Windows7
exit_status: int = stdout.channel.recv_exit_status()
output = stdout.read().decode()
error = stderr.read().decode()
</code></pre>
<p>I also tried to use:</p>
<pre><code>channel = ssh_client.invoke_shell()
channel.send("powershell 'echo hello'")
time.sleep(1)
output = ""
while channel.recv_ready():
    output += channel.recv(1024).decode('utf-8')
</code></pre>
<p>In that case, it is not stuck, but the output (on Windows 7) is:</p>
<pre><code>"\x1b[2J\x1b[2J\x1b[2J\x1b[1;1H\x1b[0;39;24;27;37;40mMicrosoft Windows [Version 6.1.7601] \n
[2;1HCopyright (c) 2009 Microsoft Corporation. All rights reserved. \n\n
[4;1Hbioimg@LVBT-PC C:\\Users\\bioimg>shell 'echo hello'
[4;32Hshell 'echo hello'
[4;33H\x1b[4;33Hhell 'echo hello'
[4;34H\x1b[4;34Hell 'echo hello'
[4;35H\x1b[4;35Hll 'echo hello'
[4;36H\x1b[4;36Hl 'echo hello'
[4;37H\x1b[4;37H 'echo hello'
[4;38H\x1b[4;38H'echo hello'
[4;39H\x1b[4;39Hecho hello'
[4;40H\x1b[4;40Hcho hello'
[4;41H\x1b[4;41Hho hello'
[4;42H\x1b[4;42Ho hello'
[4;43H\x1b[4;43H hello'
[4;44H\x1b[4;44Hhello'
[4;45H\x1b[4;45Hello'
[4;46H\x1b[4;46Hllo'
[4;47H\x1b[4;47Hlo'
[4;48H\x1b[4;48Ho'
[4;49H\x1b[4;49H'
[4;50H\x1b[4;50H"
</code></pre>
<p>Is there a way to solve it?
Thanks</p>
|
<python><powershell><powershell-2.0><paramiko>
|
2024-06-02 16:15:22
| 0
| 968
|
Izik
|
78,566,922
| 252,228
|
Imported module not found in class in Pyodide
|
<p>I am trying to use Pyodide with lxml and urllib3. For reasons I don't understand, when I try to use <code>urllib3</code> in a class that is supposed to be a Resolver for lxml's etree, I get the error <code>NameError: name 'urllib3' is not defined</code>.</p>
<p>Example code is online at <a href="https://martin-honnen.github.io/pyodide-tests/simple-requests-test-case3.html" rel="nofollow noreferrer">https://martin-honnen.github.io/pyodide-tests/simple-requests-test-case3.html</a> (note that all output goes to the browser console so use F12 to see the output).</p>
<p>Code is doing <code><script src="https://cdn.jsdelivr.net/pyodide/v0.26.0/full/pyodide.js"></script></code> and then</p>
<pre class="lang-js prettyprint-override"><code>
const python = `
import js
import urllib3
import lxml
from lxml import etree as ET

url = base_url + 'foo-transform-module.xsl'
js.console.log(urllib3.request('GET', url).status)

class TestResolver(ET.Resolver):
    def resolve(self, url, id, context):
        print("Resolving URL '%s'" % url)
        if url.startswith('http'):
            return self.resolve_file(urllib3.request('GET', url), context)
        else:
            return False

parser = ET.XMLParser(no_network=False)
parser.resolvers.add(TestResolver())

tree = ET.parse(url, parser)
tree.getroot().tag
`;

async function main() {
    let pyodide = await loadPyodide();
    await pyodide.loadPackagesFromImports(python);
    const locals = new Map();
    locals.set('base_url', window.location.href.replace(/[^/]+?$/, ''));
    console.log(await pyodide.runPythonAsync(python, { locals : locals }));
};

main();
</code></pre>
<p>Full error in console is</p>
<pre><code>Uncaught (in promise) PythonError: Traceback (most recent call last):
File "/lib/python312.zip/_pyodide/_base.py", line 574, in eval_code_async
await CodeRunner(
File "/lib/python312.zip/_pyodide/_base.py", line 394, in run_async
coroutine = eval(self.code, globals, locals)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<exec>", line 28, in <module>
File "src/lxml/etree.pyx", line 3570, in lxml.etree.parse
File "src/lxml/parser.pxi", line 1952, in lxml.etree._parseDocument
File "src/lxml/parser.pxi", line 1978, in lxml.etree._parseDocumentFromURL
File "src/lxml/parser.pxi", line 1881, in lxml.etree._parseDocFromFile
File "src/lxml/parser.pxi", line 1200, in lxml.etree._BaseParser._parseDocFromFile
File "src/lxml/parser.pxi", line 633, in lxml.etree._ParserContext._handleParseResultDoc
File "src/lxml/parser.pxi", line 739, in lxml.etree._handleParseResult
File "src/lxml/etree.pyx", line 329, in lxml.etree._ExceptionContext._raise_if_stored
File "src/lxml/parser.pxi", line 462, in lxml.etree._local_resolver
File "src/lxml/docloader.pxi", line 150, in lxml.etree._ResolverRegistry.resolve
File "<exec>", line 20, in resolve
NameError: name 'urllib3' is not defined
at new_error (pyodide.asm.js:10:9965)
at pyodide.asm.wasm:0x16dbeb
at pyodide.asm.wasm:0x177339
at _PyEM_TrampolineCall_JS (pyodide.asm.js:10:125866)
at pyodide.asm.wasm:0x1c2db7
at pyodide.asm.wasm:0x2c7b17
at pyodide.asm.wasm:0x20a78c
at pyodide.asm.wasm:0x1c34a4
at pyodide.asm.wasm:0x1c37b3
at pyodide.asm.wasm:0x1c3831
at pyodide.asm.wasm:0x29e865
at pyodide.asm.wasm:0x2a4e5c
at pyodide.asm.wasm:0x1c3971
at pyodide.asm.wasm:0x1c35da
at pyodide.asm.wasm:0x17699d
at callPyObjectKwargs (pyodide.asm.js:10:64068)
at Module.callPyObjectMaybePromising (pyodide.asm.js:10:65316)
at wrapper (pyodide.asm.js:10:27006)
at onGlobalMessage (pyodide.asm.js:10:101760)
</code></pre>
<p>Is there some flaw in my code? How do I get the imported module to be known in the code of the class?</p>
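<p>Pyodide aside, the same <code>NameError</code> can be reproduced in plain CPython: when code runs with <em>separate</em> globals and locals dicts (which is effectively what passing <code>locals</code> to <code>runPythonAsync</code> does), top-level imports are bound in the locals dict, but functions and methods resolve free names through their <em>globals</em> dict — so <code>urllib3</code> is invisible inside <code>resolve</code>. A minimal sketch:</p>

```python
code = """
import math

def area(r):
    # 'math' was bound in the locals dict above, but this function
    # looks free names up in its globals dict, where 'math' is absent
    return math.pi * r * r
"""

g, l = {}, {}
exec(code, g, l)          # separate globals and locals, like Pyodide's `locals` option

print('math' in l, 'math' in g)   # True False

try:
    l['area'](1.0)
except NameError as e:
    print(e)              # name 'math' is not defined
```

<p>Common workarounds are to pass the same dict as both globals and locals, or to <code>import urllib3</code> inside the method body.</p>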
|
<python><lxml><urllib3><pyodide>
|
2024-06-02 15:41:10
| 1
| 168,793
|
Martin Honnen
|
78,566,724
| 10,855,529
|
Selecting a particular set of strings from a list in polars
|
<pre><code>df = pl.DataFrame({'list_column': [['a.xml', 'b.xml', 'c', 'd'], ['e.xml', 'f.xml', 'g', 'h']]})
def func(x):
    return [y for y in x if '.xml' in y]
df.with_columns(pl.col('list_column').map_elements(func, return_dtype=pl.List(pl.String)))
</code></pre>
<p>Is there a way to achieve the same without using <code>map_elements</code>?</p>
|
<python><python-polars>
|
2024-06-02 14:28:39
| 1
| 3,833
|
apostofes
|
78,566,503
| 2,749,397
|
Offsetting the x tick labels to aim at the corresponding x tick
|
<p><a href="https://i.sstatic.net/peo4Cdfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/peo4Cdfg.png" alt="enter image description here" /></a></p>
<p>As you can see, the arrows are not <em>exactly</em> aimed at the corresponding <em>x</em>-tick, is it possible to offset the labels so that, for any reasonable rotation angle, say 50 ≤ θ ≤ 90, the arrows <em>hit</em> their <em>x</em>-tick?</p>
<blockquote>
<h3>Regarding <a href="https://stackoverflow.com/questions/28615887/how-to-move-a-tick-label">How to move a tick label</a></h3>
<p>I'm afraid it doesn't completely answer my question.</p>
<ol>
<li>Using <code>ha='right'</code> is practically correct when <code>θ≈45</code>, otherwise the arrows miss their targets.</li>
<li><a href="https://stackoverflow.com/a/49449590/2749397">IOBE's answer</a> tells me how to move the labels by x points, but doesn't say HOW MUCH I must move them as a function of <code>θ</code>.</li>
</ol>
</blockquote>
<p>If you want to experiment, here is my code</p>
<pre><code>import matplotlib.pyplot as plt
ax = plt.subplot()
for label in ax.get_xticklabels():
    label.set(bbox=dict(boxstyle='rarrow', pad=0), rotation=70)
plt.show()
</code></pre>
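<p>One knob worth trying (a sketch, not a guarantee of pixel-perfect aim for every boxstyle): <code>rotation_mode='anchor'</code> makes a label rotate about its alignment point rather than its bounding-box centre, so combined with <code>ha='right'</code> the right end of each label stays pinned near its tick for any rotation angle, instead of drifting as <code>θ</code> changes:</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

ax = plt.subplot()
for label in ax.get_xticklabels():
    # rotation_mode='anchor' rotates the label about its alignment point
    # (the right end, via ha='right') instead of the bbox centre
    label.set(bbox=dict(boxstyle='rarrow', pad=0),
              rotation=70, ha='right', va='top', rotation_mode='anchor')
plt.savefig('rotated_ticks.png')
```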
|
<python><matplotlib>
|
2024-06-02 13:05:18
| 0
| 25,436
|
gboffi
|
78,566,374
| 2,662,728
|
'HtmlToDocx' object has no attribute 'run'
|
<p>The parser is giving "'HtmlToDocx' object has no attribute 'run'". When I search, I look at the Stack Overflow results first because the answers are rated, but I did not find an answer to this question on Stack Overflow.</p>
|
<python><html><html-to-docx>
|
2024-06-02 12:18:47
| 2
| 535
|
Anthony Petrillo
|
78,566,281
| 774,575
|
Select with DateTime index, ignoring day attribute
|
<p>Trying to select rows for which the index label is a date in January 2016:</p>
<pre><code>import numpy as np
import pandas as pd
index = pd.date_range('2016-01-28', freq='d', periods=6)
columns=list('A')
values = np.random.randint(1,10, size=(len(index), len(columns)))
df = pd.DataFrame(values, index=index, columns=columns)
print(df[df.index=='2016-01'])
</code></pre>
<p>Indeed this doesn't work:</p>
<pre><code>Empty DataFrame
Columns: [A]
Index: []
</code></pre>
<p>I could use <code>df[(df.index>='2016-01') & (df.index<'2016-02')]</code> or perform a double comparison on year and month using the <code>.dt</code> accessor, but this is way more verbose and not really scalable.</p>
<p>What is the syntax for a DateTime comparison ignoring the day attribute, e.g. <code>df.index=='2016-02-*'</code>?</p>
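<p>With a <code>DatetimeIndex</code>, pandas supports partial string indexing through <code>.loc</code>, which is the usual idiom for exactly this (a minimal sketch):</p>

```python
import numpy as np
import pandas as pd

index = pd.date_range('2016-01-28', freq='D', periods=6)
df = pd.DataFrame({'A': np.arange(len(index))}, index=index)

# .loc with a partial date string selects the whole month
jan = df.loc['2016-01']
print(len(jan))  # 4 rows: Jan 28-31; the two February rows are excluded
```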
|
<python><pandas><datetime>
|
2024-06-02 11:39:53
| 1
| 7,768
|
mins
|
78,566,093
| 5,798,365
|
Assignment works with no __setitem__ defined
|
<p>I'm writing my own matrix class that works with Python <code>Fraction</code> instead of numbers.</p>
<p>I now need to overload <code>[]</code>, so I could read and assign values to specific cells in the matrix without directly addressing the base list from the outer scope.</p>
<p>So here's the code:</p>
<pre><code>from fractions import Fraction
def is_finite_decimal_denominator(n):
    """
    :param n: denominator
    :return: True if the denominator's factors are only 2 and 5
    """
    number = n
    while number != 1:
        divisible_by_5 = (number % 5 == 0)
        divisible_by_2 = (number % 2 == 0)
        if not divisible_by_2 and not divisible_by_5:
            return False
        if divisible_by_2:
            number = number // 2
        else:
            number = number // 5
    return True


def fraction_to_string(fraction):
    """
    :param fraction: print the fraction as a finite decimal one if possible
    :return: void
    """
    if fraction.denominator == 1:
        return str(fraction.numerator)
    if is_finite_decimal_denominator(fraction.denominator):
        return str(fraction.numerator / fraction.denominator)
    else:
        return f"{fraction.numerator}/{fraction.denominator}"


class Matrix_LA:
    def __init__(self, *arg):
        self.M = []
        for line in arg:
            self.M.append(list(map(Fraction, line.split())))
        self.n_rows = len(arg)
        self.n_cols = len(self.M[0])
        self.space = 8

    def __str__(self):
        str_mtrx = '+' + '-' * (self.n_cols * self.space) + '+\n'
        for line in self.M:
            str_mtrx += "|"
            for num in line:
                str_mtrx += f"{fraction_to_string(num): ^{self.space}}"
            str_mtrx += "|\n"
        str_mtrx += '+' + '-' * (self.n_cols * self.space) + '+\n'
        return str_mtrx

    def __getitem__(self, n):
        return self.M[n]


x = Matrix_LA("1/7 2 ", "3 4")
print(x, x.M)
x[1][1] = Fraction(1, 7)
print(x, x.M)
</code></pre>
<p>What strikes me unusual is that the line <code>x[1][1] = Fraction(1, 7)</code> works fine even though there's no <code>__setitem__</code> method in my class yet. How does Python know that it should address my <code>self.M</code> list to perform the assignment even though I haven't written the <code>__setitem__</code> yet?</p>
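<p>That behaviour is expected: <code>x[1][1] = value</code> is two operations — <code>x.__getitem__(1)</code> returns the inner <em>list</em>, and it is the list's own <code>__setitem__</code> that performs the assignment. A minimal sketch that makes the chain visible:</p>

```python
class Probe:
    def __init__(self):
        self.rows = [[0, 0], [0, 0]]

    def __getitem__(self, n):
        # Only __getitem__ is defined; it hands back a plain list,
        # and that list handles any subsequent item assignment
        return self.rows[n]

p = Probe()
# Equivalent to: row = p.__getitem__(1); row.__setitem__(1, 7)
p[1][1] = 7
print(p.rows)  # [[0, 0], [0, 7]]
```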
|
<python><class><methods><operator-overloading>
|
2024-06-02 10:26:06
| 1
| 861
|
alekscooper
|
78,565,961
| 4,002,204
|
Error in sklearn’s cross_val_score with ‘f1’ scoring for categorical target in LightGBM
|
<p>My code receives a dataset and runs a classification on it with <code>lightgbm</code>. The problem is that when I try to do fine-tuning with <code>sklearn</code>'s <code>cross_val_score</code>, the target column contains categorical values, not numerical ones.
It works when I set <code>cross_val_score</code> to use accuracy, log loss, or roc_auc scoring. On the other hand, when I set it to use f1, precision, or recall scoring, I get an error. Below is an example where I try to classify the iris dataset:</p>
<p>The code:</p>
<pre><code>cv_results = cross_val_score(estimator, self.X_train, self.y_train, cv=kfold, scoring='f1')
</code></pre>
<p>The error:</p>
<pre><code>ValueError: pos_label=1 is not a valid label: It should be one of
['Iris-setosa' 'Iris-versicolor' 'Iris-virginica']
</code></pre>
<p>Thanks</p>
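<p>For context, plain <code>scoring='f1'</code> computes the <em>binary</em> F1 with <code>pos_label=1</code>, which cannot apply to three string classes; the averaged variants (<code>'f1_macro'</code>, <code>'f1_micro'</code>, <code>'f1_weighted'</code>, and likewise for precision/recall) accept multiclass string labels directly. A minimal sketch with a stand-in estimator (the same <code>scoring</code> string works with lightgbm's sklearn wrapper):</p>

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y_num = load_iris(return_X_y=True)
# String labels, mimicking the categorical target from the question
names = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
y = [names[i] for i in y_num]

clf = LogisticRegression(max_iter=1000)
# 'f1' alone would raise the pos_label ValueError here; the macro average does not
scores = cross_val_score(clf, X, y, cv=5, scoring='f1_macro')
print(scores.mean())
```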
|
<python><scikit-learn><cross-validation><multiclass-classification><lightgbm>
|
2024-06-02 09:16:47
| 0
| 1,106
|
Zag Gol
|
78,565,860
| 694,716
|
How to run two chains where the first's output is the second's input, using LangChain in Python?
|
<p>I have two chains</p>
<pre><code>llm = OpenAI()
code_prompt = PromptTemplate(
input_variables=["task", "language"],
template="Write a very short {language} function that will {task}."
)
test_prompt = PromptTemplate(
input_variables=["language", "code"],
template="Write a unit test for the following {language} code:\n{code}"
)
chain_code: Runnable = code_prompt | llm | {"language": StrOutputParser(), "code": StrOutputParser()}
chain_test: Runnable = test_prompt | llm | {"test": StrOutputParser(), "code": StrOutputParser()}
sequence: Runnable = chain_code | chain_test
result = sequence.invoke({ "task": "return a list of numbers", "language":"python" })
print(">>>>>> GENERATED CODE:")
print(result["code"])
print(">>>>>> GENERATED TEST:")
print(result["test"])
</code></pre>
<p>The output of chain_code should be the input for chain_test.</p>
<p>The code works, but it prints the same result for code and test. How can I fix this?</p>
|
<python><langchain>
|
2024-06-02 08:30:29
| 1
| 6,935
|
barteloma
|
78,565,773
| 7,261,317
|
Django 4.2: Apply sorting on a child table column via foreign key
|
<p>Please see the model definition below:</p>
<pre><code>class Product(models.Model):
    name = models.CharField(max_length=200, blank=True, null=True)
    prduct_type = models.CharField(max_length=30, blank=True, null=True)


class ProductRateInfo(models.Model):
    product = models.ForeignKey(Product, on_delete=models.CASCADE, related_name='rate_info')
    season_start = models.DateField(blank=True, null=True)
    season_end = models.DateField(blank=True, null=True)
    sbr_rate = models.FloatField(default=0)
</code></pre>
<p>While fetching the products, we need to apply sorting on the <code>ProductRateInfo.sbr_rate</code> column.</p>
<p>I tried the method below, but it's not working:</p>
<pre><code>product_qs = Product.objects.filter(prduct_type='demotest').prefetch_related('rate_info')
</code></pre>
|
<python><django><django-models>
|
2024-06-02 07:52:06
| 1
| 3,483
|
Robert
|
78,565,772
| 163,768
|
2d numpy indexing by another array
|
<p>So, I have a 2d array</p>
<pre><code>a.shape
(1050, 21)
</code></pre>
<p>And a 1d array</p>
<pre><code>b.shape
(1050,)
</code></pre>
<p>I'm looking for a way (other than iterating) to produce 1d array (let's call it "c") from "a" so that c[x] = a[x, b[x]]</p>
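<p>Integer array indexing does exactly this: pair every row index with its column index from <code>b</code> (a minimal sketch with random data of the stated shapes):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 100, size=(1050, 21))
b = rng.integers(0, 21, size=1050)

# One index array per axis: rows 0..1049 paired with columns b[0..1049],
# so c[x] == a[x, b[x]] with no Python-level loop
c = a[np.arange(len(b)), b]
```

<p>An equivalent spelling is <code>np.take_along_axis(a, b[:, None], axis=1)[:, 0]</code>.</p>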
|
<python><numpy><numpy-ndarray>
|
2024-06-02 07:51:48
| 1
| 1,669
|
Demiurg
|
78,565,768
| 16,869,946
|
Pandas groupby transform minimum greater than 0
|
<p>I have a Pandas dataframe that looks like</p>
<pre><code>Race_ID  Date       Student_ID  Rank
1        1/1/2023   1           3
1        1/1/2023   2           8
1        1/1/2023   3           0
1        1/1/2023   4           4
2        11/9/2022  1           2
2        11/9/2022  2           3
2        11/9/2022  3           9
3        17/4/2022  5           0
3        17/4/2022  2           1
3        17/4/2022  3           2
3        17/4/2022  4           5
4        1/3/2022   1           6
4        1/3/2022   2           2
5        1/1/2021   1           0
5        1/1/2021   2           3
5        1/1/2021   3           1
</code></pre>
<p>And I want to add a new column <code>min>0</code> which is, per <code>Race_ID</code> group, the minimum value of <code>Rank</code> that is greater than 0, so the desired outcome looks like</p>
<pre><code>Race_ID  Date       Student_ID  Rank  min>0
1        1/1/2023   1           3     3
1        1/1/2023   2           8     3
1        1/1/2023   3           0     3
1        1/1/2023   4           4     3
2        11/9/2022  1           2     2
2        11/9/2022  2           3     2
2        11/9/2022  3           9     2
3        17/4/2022  5           0     1
3        17/4/2022  2           1     1
3        17/4/2022  3           2     1
3        17/4/2022  4           5     1
4        1/3/2022   1           6     2
4        1/3/2022   2           2     2
5        1/1/2021   1           0     1
5        1/1/2021   2           3     1
5        1/1/2021   3           1     1
</code></pre>
<p>I know <code>groupby</code> and <code>transform('min')</code> but I don't know how to include the condition >0.</p>
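<p>One common pattern, sketched on a trimmed copy of the data above: mask the zeros to NaN with <code>where</code> so that <code>'min'</code> skips them, then let <code>transform</code> broadcast the per-group result back:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Race_ID': [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 5, 5, 5],
    'Rank':    [3, 8, 0, 4, 2, 3, 9, 0, 1, 2, 5, 6, 2, 0, 3, 1],
})

# where() turns Rank <= 0 into NaN, which the groupby 'min' ignores
df['min>0'] = (df['Rank'].where(df['Rank'] > 0)
                         .groupby(df['Race_ID'])
                         .transform('min'))
```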
|
<python><pandas><dataframe><group-by>
|
2024-06-02 07:51:06
| 1
| 592
|
Ishigami
|
78,565,758
| 2,756,466
|
Connect Chainlit to existing ChromaDb
|
<p>I am trying to create a RAG application using chainlit.</p>
<p>This is the code I got from an existing tutorial, and it works fine. The only problem is that the user has to choose a PDF file every time. I want Chainlit to be connected to a persistent Chroma vector DB, which should be created only once for all users.</p>
<pre><code>from typing import List
import PyPDF2
from io import BytesIO
from langchain_community.embeddings import OllamaEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain.chains import (
    ConversationalRetrievalChain,
)
#from langchain_community.llms import Ollama
from langchain.docstore.document import Document
from langchain_community.llms import Ollama
from langchain_community.chat_models import ChatOllama
from langchain.memory import ChatMessageHistory, ConversationBufferMemory
import chainlit as cl

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)


@cl.on_chat_start
async def on_chat_start():
    files = None

    # Wait for the user to upload a file
    while files is None:
        files = await cl.AskFileMessage(
            content="Please upload a pdf file to begin!",
            accept=["application/pdf"],
            max_size_mb=20,
            timeout=180,
            max_files=2,
        ).send()

    file = files[0]
    print(file)

    msg = cl.Message(content=f"Processing `{file.name}`...")
    await msg.send()

    # Read the PDF file
    #pdf_stream = BytesIO(content)
    pdf = PyPDF2.PdfReader(file.path)
    pdf_text = ""
    for page in pdf.pages:
        pdf_text += page.extract_text()

    # Split the text into chunks
    texts = text_splitter.split_text(pdf_text)

    # Create a metadata for each chunk
    metadatas = [{"source": f"{i}-pl"} for i in range(len(texts))]

    # Create a Chroma vector store
    embeddings = OllamaEmbeddings(model="nomic-embed-text")
    docsearch = await cl.make_async(Chroma.from_texts)(
        texts, embeddings, metadatas=metadatas
    )

    message_history = ChatMessageHistory()

    memory = ConversationBufferMemory(
        memory_key="chat_history",
        output_key="answer",
        chat_memory=message_history,
        return_messages=True,
    )

    # Create a chain that uses the Chroma vector store
    chain = ConversationalRetrievalChain.from_llm(
        ChatOllama(model="mistral"),
        chain_type="stuff",
        retriever=docsearch.as_retriever(),
        memory=memory,
        return_source_documents=True,
    )

    # Let the user know that the system is ready
    msg.content = f"Processing `{file.name}` done. You can now ask questions!"
    await msg.update()

    cl.user_session.set("chain", chain)


@cl.on_message
async def main(message: cl.Message):
    chain = cl.user_session.get("chain")  # type: ConversationalRetrievalChain
    cb = cl.AsyncLangchainCallbackHandler()

    res = await chain.ainvoke(message.content, callbacks=[cb])
    answer = res["answer"]
    source_documents = res["source_documents"]  # type: List[Document]

    text_elements = []  # type: List[cl.Text]

    if source_documents:
        for source_idx, source_doc in enumerate(source_documents):
            source_name = f"source_{source_idx}"
            # Create the text element referenced in the message
            text_elements.append(
                cl.Text(content=source_doc.page_content, name=source_name)
            )
        source_names = [text_el.name for text_el in text_elements]

        if source_names:
            answer += f"\nSources: {', '.join(source_names)}"
        else:
            answer += "\nNo sources found"

    await cl.Message(content=answer, elements=text_elements).send()
</code></pre>
|
<python><langchain><py-langchain><chromadb><retrieval-augmented-generation>
|
2024-06-02 07:43:00
| 1
| 7,004
|
raju
|
78,565,716
| 4,030,761
|
Mismatch between periodogram calculated by SciPy periodogram and AstroPy Lomb Scargle periodogram at low frequencies
|
<p>I am trying to compute the periodogram of my data using both SciPy's periodogram and AstroPy <a href="https://docs.astropy.org/en/stable/timeseries/lombscargle.html" rel="nofollow noreferrer">Lomb-Scargle periodogram</a>—the periodogram matches everywhere except at frequencies near the minimum frequency as shown in my plots. These are the results of numerical simulations.</p>
<p>Based on observational data, I expect a strong signal near 0. Hence, the SciPy periodogram results look more physically plausible than the Lomb-Scargle periodogram.</p>
<p>I haven't figured out why and how to make them similar. Any insight is deeply appreciated.</p>
<p>Below is the code to reproduce my plots.</p>
<p>From standard SciPy periodogram:
<a href="https://i.sstatic.net/Hl15coJO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hl15coJO.png" alt="enter image description here" /></a>
From Lomb-Scargle periodogram:
<a href="https://i.sstatic.net/rEzUdHxk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEzUdHxk.png" alt="enter image description here" /></a></p>
<pre><code>from astropy.timeseries import LombScargle
import numpy as np
import pandas as pd
from scipy import signal
import requests
import matplotlib.pyplot as plt
def plot_periodogram(x, y, N_freq, min_freq, max_freq, height_threshold, periodogram_type):
    fig, ax = plt.subplots(figsize=(12, 8))
    if periodogram_type == 'periodogram':
        dx = np.mean(np.diff(x))  # Assume x is uniformly sampled
        fs = 1 / dx
        freq, power_periodogram = signal.periodogram(y, fs, scaling="spectrum", nfft=N_freq,
                                                     return_onesided=True, detrend='constant')
        power_max = power_periodogram[~np.isnan(power_periodogram)].max()
        plt.plot(freq, power_periodogram/power_max, linestyle="solid", color="black", linewidth=2)
        filename = "PowerSpectrum"
    else:
        freq = np.linspace(min_freq, max_freq, N_freq)
        ls = LombScargle(x, y, normalization='psd', nterms=1)
        power_periodogram = ls.power(freq)
        power_max = power_periodogram[~np.isnan(power_periodogram)].max()
        false_alarm_probabilities = [0.01, 0.05]
        periodogram_peak_height = ls.false_alarm_level(false_alarm_probabilities, minimum_frequency=min_freq,
                                                       maximum_frequency=max_freq, method='bootstrap')
        filename = "PowerSpectrum_LombScargle"
        plt.plot(freq, power_periodogram/power_max, linestyle="solid", color="black", linewidth=2)
        plt.axhline(y=periodogram_peak_height[0]/power_max, color='black', linestyle='--')
        plt.axhline(y=periodogram_peak_height[1]/power_max, color='black', linestyle='-')

    peaks_index, properties = signal.find_peaks(power_periodogram/power_max, height=height_threshold)
    peak_values = properties['peak_heights']
    peak_power_freq = freq[peaks_index]
    for i in range(len(peak_power_freq)):
        plt.axvline(x=peak_power_freq[i], color='red', linestyle='--')
        ax.text(peak_power_freq[i]+0.05, 0.95, str(round(peak_power_freq[i], 2)), color='red', ha='left', va='top', rotation=0, transform=ax.get_xaxis_transform())

    fig.patch.set_alpha(1)
    plt.ylabel('Spectral Power', fontsize=20)
    plt.xlabel('Spatial Frequency', fontsize=20)
    plt.grid(True)
    plt.xlim(left=min_freq, right=max_freq)
    plt.xticks(fontsize=20)
    plt.yticks(fontsize=20)
    plt.savefig(filename, bbox_inches='tight')
    plt.show()
# URL of the CSV file on Pastebin
url = 'https://pastebin.com/raw/uFi8WPvJ'
# Fetch the raw data from the URL
response = requests.get(url)
# Check if the request was successful
if response.status_code == 200:
    # Decode the response content to text
    data = response.text
    # Save the data to a CSV file
    with open('data.csv', 'w') as f:
        f.write(data)
df =pd.read_csv('data.csv',sep=',',comment='%', names=['x', 'Bphi','r','theta'])
x = df['x'].values
y = df['Bphi'].values
# https://stackoverflow.com/questions/37540782/delete-nan-and-corresponding-elements-in-two-same-length-array
indices = np.logical_not(np.logical_or(np.isnan(x), np.isnan(y)))
x = x[indices]
y = y[indices]
y = y - np.mean(y)
N_freq = 10000
min_freq = 0.001
max_freq = 4.0
height_threshold = 0.7
plot_periodogram(x,y,N_freq,min_freq,max_freq,height_threshold,"periodogram")
plot_periodogram(x,y,N_freq,min_freq,max_freq,height_threshold,"ls")
</code></pre>
|
<python><signal-processing><fft><spectrum>
|
2024-06-02 07:18:59
| 1
| 355
|
Prav001
|
78,565,706
| 77,222
|
Jinja2: evaluate variables inside a variable
|
<p>Using Jinja2 v3.0. Say I have this context:</p>
<pre class="lang-py prettyprint-override"><code>h = template.render(content=c, md=metadata)
</code></pre>
<p>Suppose my c, which is a string, also has a variable in it, like <code>{{ md.title }}</code>, that I do want to be evaluated.</p>
<p>How can I do that? Is there a recursive kind of call to the Jinja render method?</p>
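<p>There is no built-in recursive render, but a two-pass render does it: expand the variables inside the content string first, then feed the result to the outer template. A minimal sketch (the <code>metadata</code> values and outer template here are hypothetical):</p>

```python
from jinja2 import Template

metadata = {'title': 'My Post'}           # hypothetical metadata
c = 'Contents of {{ md.title }} here.'    # content that itself holds a variable

# Pass 1: render the variables embedded in the content string
c_rendered = Template(c).render(md=metadata)

# Pass 2: render the outer template with the already-expanded content
page = Template('<h1>{{ md.title }}</h1>{{ content }}')
h = page.render(content=c_rendered, md=metadata)
print(h)  # <h1>My Post</h1>Contents of My Post here.
```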
|
<python><jinja2>
|
2024-06-02 07:15:11
| 1
| 11,645
|
Ayman
|
78,565,533
| 11,941,142
|
How to rename a file with Python without overwriting an existing file?
|
<p>I'd like to rename a file using Python on Linux, but only if it won't overwrite an existing file.</p>
<p>I'm looking for the same behaviour as:</p>
<pre><code>$ mv --no-clobber old_filename new_filename
</code></pre>
<p>As best as I can tell, I can't get this behaviour from the obvious candidates:</p>
<ul>
<li><a href="https://docs.python.org/3/library/os.html#os.rename" rel="nofollow noreferrer"><code>os.rename</code></a> will silently overwrite a file called <code>new_filename</code> when running on Linux (but not on Windows!).</li>
<li><a href="https://docs.python.org/3/library/shutil.html#shutil.move" rel="nofollow noreferrer"><code>shutil.move</code></a> just uses <code>os.rename</code>, so it has the same behaviour.</li>
</ul>
<p>Also, I need the operation to be atomic to avoid race conditions, so I don't want a two step approach which tests for the existence of <code>new_filename</code>, then does the rename if it isn't found.</p>
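<p>One approach that never overwrites: <code>os.link</code> fails atomically at the filesystem level with <code>FileExistsError</code> if the target already exists, so there is no check-then-act race. A sketch — note the caveats that the link-then-unlink pair is not a single atomic <em>rename</em> (briefly both names exist), and that hard links require both names on the same filesystem:</p>

```python
import os
import tempfile

def rename_noclobber(src, dst):
    # os.link refuses, atomically, to create dst if it already exists
    os.link(src, dst)
    os.unlink(src)

with tempfile.TemporaryDirectory() as d:
    src, dst = os.path.join(d, 'a'), os.path.join(d, 'b')
    with open(src, 'w') as f:
        f.write('data')
    rename_noclobber(src, dst)
    after_rename = sorted(os.listdir(d))   # ['b']

    with open(src, 'w') as f:
        f.write('other')
    clobber_refused = False
    try:
        rename_noclobber(src, dst)         # dst exists, so this must fail
    except FileExistsError:
        clobber_refused = True
```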
|
<python>
|
2024-06-02 05:29:21
| 1
| 572
|
countermeasure
|
78,565,463
| 513,554
|
Is ELISA analysis in Elixir Nx/Scholar possible?
|
<p>I have read the article <a href="https://medium.com/@tentotheminus9/elisa-analysis-in-python-deb8c6ed91db" rel="nofollow noreferrer">ELISA Analysis in Python</a> on Medium.</p>
<p>The above article uses SciPy's <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer">curve_fit</a> function to find an approximate curve based on the 4 parameter logistic regression (4PL) model as follows:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.optimize import curve_fit
x = [1.95, 3.91, 7.381, 15.63, 31.25, 62.5, 125,250, 500, 1000]
y = [0.274, 0.347, 0.392, 0.420, 0.586, 1.115, 1.637, 2.227, 2.335, 2.372]
def log4pl(x, A, B, C, D):
    return ((A - D) / (1.0 + ((x / C) ** B))) + D
params, _ = curve_fit(log4pl, x, y)
A, B, C, D = params[0], params[1], params[2], params[3]
</code></pre>
<p>I would like to do the same thing using the <a href="https://hexdocs.pm/scholar/readme.html" rel="nofollow noreferrer">Nx/Scholar</a> library in Elixir.</p>
<p>Is it possible? I would appreciate any hints you can give me.</p>
<hr />
<p>[UPDATE]</p>
<p>From a quick look at the Python <code>scipy.optimize</code> source code, it appears that <code>curve_fit</code> uses Fortran's MINPACK library internally.</p>
<p>As far as I know, there is no easy way to use MINPACK from Elixir.</p>
<p>Therefore, I conclude that it is difficult to do ELISA Analysis in Elixir at this time.</p>
<p>I welcome any additional information.</p>
|
<python><machine-learning><scipy><elixir><elixir-nx>
|
2024-06-02 04:29:18
| 2
| 5,188
|
Tsutomu
|
78,565,383
| 9,468,665
|
An alternative way to set the value of a variable in pocketpy without converting to a string
|
<p>I am using pocketpy as an embedded interpreter for a C++ program. One thing I need to do is convert a <code>std::vector<float></code> to a named pocketpy variable. I can do this by evaluating a string version and passing the string to the <code>vm->exec</code> method but that seems like an inefficient approach (<code>vm</code> is the <code>pkpy::VM</code>):</p>
<pre><code>stringstream s;
s << "v = [" << x[0] << "," << x[1] << "," << x[2] << "]";
vm->exec(s.str());
</code></pre>
<p>I can create native pocketpy lists following the instructions:</p>
<pre><code>List t;
t.push_back(py_var(vm, x[0]));
t.push_back(py_var(vm, x[1]));
t.push_back(py_var(vm, x[2]));
PyVar obj = py_var(vm, std::move(t));
</code></pre>
<p>That seems like it will be much more efficient but I then can't work out how to assign this to the variable <code>v</code> in the pocketpy virtual machine. It seems like such a basic thing to do that I must be missing something. Perhaps the string approach is the right way to go but the vector can be moderately big so it does not feel like a good option. Any help appreciated.</p>
|
<python><c++><interop>
|
2024-06-02 03:15:30
| 0
| 474
|
Bill Sellers
|
78,565,377
| 5,043,301
|
How to customize Form Validation error message for Password Confirmation Field
|
<p>I am trying to customize Form Validation error message for Password Confirmation Field at User Creation Form.</p>
<p>My <strong>forms.py</strong> is like below.</p>
<pre><code>from django.contrib.auth.forms import UserCreationForm
from django.core.exceptions import ValidationError
from .models import User
class CustomUserCreationForm(UserCreationForm):
class Meta:
model = User
fields = ('first_name','last_name','email','date_of_birth', 'gender', 'user_type', 'phone', 'address','photo', 'password1', 'password2' )
error_messages = {
field: {
'required': f"{field.replace('_', ' ').title()} is required."
}
for field in fields
}
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
for field_name in self.fields:
self.fields[field_name].error_messages.update(self.Meta.error_messages[field_name])
</code></pre>
<p>My <strong>views.py</strong> is like below.</p>
<pre><code>class UserRegisterView( FormView ):
template_name = 'register.html'
form_class = CustomUserCreationForm
redirect_authenticated_user = True
success_url = reverse_lazy('dashboard')
def form_valid( self, form ):
user = form.save()
if user is not None:
login( self.request, user )
return super( UserRegisterView, self ).form_valid( form )
def get( self, *args, **kwargs ):
if self.request.user.is_authenticated:
return redirect('dashboard')
return super( UserRegisterView, self ).get( *args, **kwargs )
def form_invalid(self, form):
return super(UserRegisterView, self).form_invalid(form)
</code></pre>
<p>My HTML code is like below.</p>
<pre><code><label class="form-label">PassWord Confirmation</label>
<input type="password" class="form-control" name="password2" />
{% if form.password2.errors %}
<div class="alert alert-danger">{{ form.password2.errors }}</div>
{% endif %}
</code></pre>
<p>Now I am getting validation message <code>Password2 is required.</code>. But I would like to get validation message <code>Password Confirmation is required.</code>.</p>
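<p>For context, the generated message comes from the dict comprehension above, which derives the label mechanically from the field name (<code>password2</code> becomes <code>Password2</code>). One possible approach, sketched below with a hypothetical <code>FIELD_LABELS</code> name, is to keep an explicit override map for fields whose display names differ from their field names:</p>

```python
# Hypothetical override map for fields whose human-readable labels
# cannot be derived by simply title-casing the field name.
FIELD_LABELS = {
    "password1": "Password",
    "password2": "Password Confirmation",
}

def required_message(field_name):
    # Fall back to the title-cased field name when no override exists.
    label = FIELD_LABELS.get(field_name, field_name.replace("_", " ").title())
    return f"{label} is required."
```

<p>The comprehension in <code>Meta.error_messages</code> could then call <code>required_message(field)</code> instead of building the string inline.</p>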
|
<python><django>
|
2024-06-02 03:12:30
| 2
| 7,102
|
abu abu
|
78,565,243
| 11,618,586
|
How to return NaN if all values are NaN using the agg() function specifying aggregation output columns
|
<p>I have a dataframe like so:</p>
<pre><code>data = {'Integers': [1, 2, np.nan, 4, 5],
'AllNaN': [np.nan, np.nan, np.nan, np.nan, np.nan]}
df = pd.DataFrame(data)
</code></pre>
<p>I want to return <code>NaN</code> when performing the sum aggregations on the dataframe. There are solutions on here that advise using <code>agg(pd.Series.sum, min_count=1)</code>. However the way I have my aggregations is using the alternate <code>agg</code> method like so:</p>
<pre><code>agg_df=df.agg(SummedInt=('Integers','sum'), sumofallNaN=('AllNaN','sum')).reset_index()
</code></pre>
<p>How do I use the <code>min_count=1</code> argument with this method?</p>
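<p>For reference, the reason a plain <code>'sum'</code> string cannot carry <code>min_count</code> is that the string is looked up and called with default arguments; <code>min_count=1</code> changes the all-NaN result from <code>0.0</code> to <code>NaN</code>. A small illustration (assuming pandas is installed):</p>

```python
import pandas as pd

nan_series = pd.Series([float("nan")] * 3)

default_sum = nan_series.sum()             # 0.0: NaNs are skipped, an empty sum is 0
guarded_sum = nan_series.sum(min_count=1)  # NaN: fewer than 1 non-NaN value present
```

<p>In the named-aggregation call, one possible route is to replace the <code>'sum'</code> string with a callable such as <code>lambda s: s.sum(min_count=1)</code>, so the keyword can be forwarded.</p>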
|
<python><python-3.x><pandas><aggregate-functions>
|
2024-06-02 01:04:49
| 1
| 1,264
|
thentangler
|
78,565,203
| 5,951,505
|
List of lists (double [[) in Python
|
<p>This is a really basic thing, but after reading the documentation I am still not able to do it. I just want to create a list of 9 sublists as follows</p>
<pre><code>[[0, 0.05, 0.1, ..., 25],[0, 0.05, 0.1, ..., 25],[0, 0.05, 0.1, ..., 25],[0, 0.05, 0.1, ..., 25],[0, 0.05, 0.1, ..., 25],[0, 0.05, 0.1, ..., 25],[0, 0.05, 0.1, ..., 25],[0, 0.05, 0.1, ..., 25],[0, 0.05, 0.1, ..., 25]]
</code></pre>
<p>I am doing</p>
<pre><code>[np.arange(0, 25.05, 0.05), np.arange(0, 25.05, 0.05), np.arange(0, 25.05, 0.05),
np.arange(0, 25.05, 0.05), np.arange(0, 25.05, 0.05), np.arange(0, 25.05, 0.05),
np.arange(0, 25.05, 0.05), np.arange(0, 25.05, 0.05), np.arange(0, 25.05, 0.05)]
</code></pre>
<p>but I get <code>24.9 , 24.95, 25. ])]</code>, while I would like to get <code>24.9 , 24.95, 25. ]])]</code>. How can I get this done?</p>
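<p>For what it's worth, the doubled bracket in the desired output just means one outer list wrapping nine inner sequences, and a comprehension avoids spelling the expression out nine times. A plain-Python sketch (using <code>round</code> in place of <code>np.arange</code> to sidestep floating-point drift):</p>

```python
# One inner sequence: 0, 0.05, 0.10, ..., 25.0 (501 values)
row = [round(0.05 * i, 2) for i in range(501)]

# Nine independent copies wrapped in one outer list
grid = [list(row) for _ in range(9)]
```

<p>With numpy, the equivalent shape could be written as <code>[np.arange(0, 25.05, 0.05) for _ in range(9)]</code>, or as <code>np.tile(np.arange(0, 25.05, 0.05), (9, 1))</code> for a true 2-D array.</p>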
|
<python><list>
|
2024-06-02 00:24:47
| 2
| 381
|
Hans
|
78,565,188
| 11,833,899
|
Getting data into a manager.dict and sharing it between processes
|
<p>I'm currently trying to teach myself how to handle live data by streaming Forza telemetry data from my xbox to my PC. I'm having issues storing that data in a manager.dict and sharing it with other processes. I've created a <code>manager.dict</code> called <code>packets</code>; here is app.py:</p>
<pre><code>from connectors.forza_connect import forza_connect, forza_record
from telemetry_utils import throttle
from multiprocessing import Manager, Process
if __name__ == '__main__':
with Manager() as manager:
packets = manager.dict({}) # Create shared dict for the worker and UI to use
# Start the background data collection process
data_args = ('motorsport', packets, )
data_process = Process(target=forza_connect, args=data_args)
data_process.start()
throttle_process = Process(target=throttle.show_throttle_text, args=(packets, ))
throttle_process.start()
data_process.join()
throttle_process.join()
</code></pre>
<p>This runs forza_connect, which collects the data and transforms it into a dictionary. This bit works fine on its own, but it doesn't seem to send that data back into <code>packets</code>. Here's forza_connect.py:</p>
<pre><code>import socket
from data_utils import forza_data
def forza_connect(game_version, packets, local_ip='0.0.0.0', port=5555, to_print=False, fd=forza_data):
## Connect to Forza
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((local_ip, port))
#sock.settimeout(5)
fd = fd.ForzaData(game_version)
while True:
packet, packet_source = sock.recvfrom(1024) # Receive data
fd.parse(packet) # Parse data
# Append packet to the shared list
for k, v in fd.to_dict().items():
packets[k] = v
sock.close() # Close the socket
</code></pre>
<p>I want the data collected above to be available to multiple processes, however no data seems to be stored and none is getting sent to throttle.py</p>
<pre><code># Show throttle percentage as text
def show_throttle_text(forza_data):
while True:
#print(f"\r throttle percentage: {int(forza_data['throttle'])}", end='', flush=True)
print(forza_data)
</code></pre>
<p>This is currently just showing an empty dict. What am I misunderstanding about how to use this? Thanks</p>
|
<python><multiprocessing><python-multiprocessing>
|
2024-06-02 00:18:52
| 1
| 598
|
scottapotamus
|
78,565,064
| 11,998,382
|
Avoid doubling every rule for NEWLINE vs DOUBLE_NEWLINE
|
<p>I want blank new lines to be syntactically significant (optimization for leaf leaning trees <code>[[[a, b], c, d], e, [g], f] == [a b | c d | e [g] f]</code> but with <code>\n\n</code> instead of <code>|</code>). But the only way I can disambiguate is by doubling every definition depending on whether it ended with a single or double new line.</p>
<p>Essentially the problem is: I don't want the first rule <code>tree</code> only ever being used, and the second rule <code>tree_d</code> never being used. But I still need the <code>Indenter</code> to work (not deindenting blank new lines).</p>
<pre><code>tree: tree_d? (NAME NL)+
tree_dnl: tree_dnl? (NAME NL)* NAME DNL
</code></pre>
<p>Is there a way to reduce this duplication?</p>
<pre class="lang-py prettyprint-override"><code>from lark import Lark
from lark.indenter import Indenter
grammar="""\
start: tree
tree: tree_d? node+
tree_d: tree_d? node* node_d
?node: leaf | sub_tree
?node_d: leaf_d | sub_tree_d
leaf: NAME NL
leaf_d: NAME DNL
?sub_tree: _INDENT tree _DEDENT
?sub_tree_d: _INDENT tree_d _DEDENT
NL: /\\n */
DNL: /\\n\\n */
%declare _INDENT _DEDENT
%import common.CNAME -> NAME
"""
# [[[a, b], c, d], e, [g], f]
sample_string ="""\
a
b
c
d
e
g
f
"""
class TreeIndenter(Indenter):
NL_type = 'NL'
OPEN_PAREN_types = []
CLOSE_PAREN_types = []
INDENT_type = '_INDENT'
DEDENT_type = '_DEDENT'
tab_len = 4
parser = Lark(grammar, postlex=TreeIndenter(), debug=True)
for i, t in enumerate(parser.lex(sample_string)):
print((t.line, t.column), repr(t))
parse_tree = parser.parse(sample_string)
print(parse_tree.pretty())
</code></pre>
|
<python><lark-parser>
|
2024-06-01 22:35:26
| 0
| 3,685
|
Tom Huntington
|
78,565,013
| 907,047
|
Using llama3 model to parse markdown data
|
<p>I am trying to use the llama3 model to parse markdown input and to create a markdown table for me.
Here is my Python script. The only external library is the ollama Python library. I would be grateful for any suggestions on how to do this properly and according to best practices, or at least a pointer to exactly which documentation I need to check in order to do it the right way. The llama3 model was able to handle only two "records" of markdown and I needed to wait around 50 minutes. However, when I tried to run it with the whole content I got this unpleasant error message:</p>
<pre><code>It looks like you're trying to parse a large Markdown file with Angular 2 migrations from 0.26 to 1.4.
Here's the output in JSON format:
{
"From": "0.26",
"To": "1.4"
},
[
{
"Migrations": [
["#1135", "TSLint has been deprecated for a while now"],
["Angular removed TSLint support"],
["Configuring ESLint"]
]
}
],
Please let me know what you'd like to do with this data.
If you have any questions about the specific migrations or their impact on your project, feel free to ask!
</code></pre>
<p>Here is my Python implementation:</p>
<pre><code>import ollama
import re
# Press Shift+F10 to execute it or replace it with your code.
# Press Double Shift to search everywhere for classes, files, tool windows, actions, and settings.
prompt = re.sub(r"\s+", " ", f"""
For each occurrence of the pattern ## From [from version number] to [to version number], get the content
including all new lines and special characters until the next ## and keep it as [resulted content].
Replace all occurrences of past simple or present perfect tense in [collected content], with future simple tense,
and keep the whole result as [resulted content].
In [resulted content] find mentioning of Angular version. If there is no
Angular version take the one from previous row. If there is no previous row
take version 11 and remember it as [angular version].
In [resulted content] find mentioning of NodeJS version. If there is no
NodeJS version take the one from previous row. If there is no previous row
take version 14.15.0 LTS and remember it as [nodejs version].
In [resulted content] find mentioning of NPM version. If there is no
NPM version take the one from previous row. If there is no previous row
take version 6.14.8 and remember it as [npm version].
Finally output a well formatted markdown table with columns From PWA version - [from version number],
To PWA version - [to version number],
Angular version - [angular version],
NodeJS version - [nodejs version],
NPM version - [npm version],
Things we need to do - [resulted content] - the whole text don't summarize it,
Estimate - 1 this is a hardcoded value,
Regression - two days - this is a hardcoded value.
Please sort the entire output table based on the values in the From PWA version column.
Response only with the result table, please.
Here is the input data:
""".strip())
with open('migrations.md') as f:
migrations_description = f.read()
prompt = prompt + migrations_description
def main(prompt_text):
print("Please wait ...")
# print(prompt_text)
response = ollama.chat(model='intershop-pwa-changelog-converter',
messages=[{'role': 'user', 'content': prompt_text}])
with open('response.md', 'w') as file:
file.write(response['message']['content'])
print("All done!")
# Press the green button in the gutter to run the script.
if __name__ == '__main__':
main(prompt)
# See PyCharm help at https://www.jetbrains.com/help/pycharm/
</code></pre>
<p>and here is my Modelfile. I built a custom model from llama3:</p>
<pre><code>FROM llama3
PARAMETER temperature 1
PARAMETER num_ctx 32768
</code></pre>
<p>The whole thing can be cloned from here in case someone wants to run it locally:</p>
<p><a href="https://github.com/gonaumov/intershop_pwa_changelog_converter" rel="nofollow noreferrer">https://github.com/gonaumov/intershop_pwa_changelog_converter</a></p>
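<p>One common mitigation, assuming (this is not confirmed in the question) that the full changelog exceeds what the model can handle reliably in one shot, is to split the markdown on its <code>## From ... to ...</code> headings and send one section per request, so each prompt stays small. A stdlib sketch:</p>

```python
import re

def split_migrations(markdown_text):
    """Split a changelog into chunks, each starting at a '## From' heading."""
    parts = re.split(r"(?m)^(?=## From )", markdown_text)
    # Drop any preamble before the first heading; keep only real sections.
    return [p for p in parts if p.startswith("## From ")]

sample = (
    "intro text\n"
    "## From 0.26 to 1.0\nbody A\n"
    "## From 1.0 to 1.4\nbody B\n"
)
sections = split_migrations(sample)
```

<p>Each chunk could then be sent in its own <code>ollama.chat</code> call and the resulting table rows concatenated afterwards.</p>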
|
<python><python-3.x><artificial-intelligence><llama><ollama>
|
2024-06-01 22:12:01
| 0
| 4,231
|
Georgi Naumov
|
78,564,958
| 7,053,357
|
How to find deprecated APIs
|
<p>My organization is a startup that has a microservice architecture (Python). There are several services that expose some endpoints. The problem is that, as the startup grew, many things were dynamic, code changed, and people came and went... and we know for a fact that there are more than a few endpoints that aren't used anymore; we want to find out which those are and delete them. We're just not sure what the approach to such a problem should be, because we don't want to break things... but on the other hand, the code is becoming bloated and filled with garbage...</p>
<p>Any known solution or tools for such a problem?</p>
|
<python><deprecated>
|
2024-06-01 21:27:42
| 1
| 364
|
felisimo
|
78,564,920
| 1,686,628
|
Function not being mocked unless full path is called
|
<p>main.py</p>
<pre><code>from path.to.mod import run
def foo(args: str):
run(args)
</code></pre>
<p>test.py</p>
<pre><code>@patch("path.to.mod.run")
def verify_args(mock):
foo("bar")
mock.assert_called_once_with("bar")
</code></pre>
<p>the above code does not mock <code>run</code> ie assertion fails.<br />
However if I change <code>main.py</code> to</p>
<pre><code>def foo(args: str):
path.to.mod.run(args)
</code></pre>
<p>then the test mocks successfully</p>
<p>How can I mock without needing specify full path from <code>main.py</code>?</p>
<p>if I change test.py to</p>
<pre><code>from path.to.mod import run
@patch("run")
def verify_args(mock):
foo("bar")
mock.assert_called_once_with("bar")
</code></pre>
<p>it complains that run is not a valid target</p>
<pre><code>TypeError: Need a valid target to patch. You supplied: 'run'
</code></pre>
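<p>For reference, the underlying rule with <code>unittest.mock</code> is "patch where the name is looked up, not where it is defined": <code>from path.to.mod import run</code> binds a new name <code>run</code> inside <code>main</code>, so the patch target would be that module's copy of the name. A self-contained demonstration using throwaway in-memory modules (the module names <code>mod</code> and <code>main</code> here are stand-ins):</p>

```python
import sys
import types
from unittest.mock import patch

# Build a stand-in for path.to.mod with a real run() function.
mod = types.ModuleType("mod")
mod.run = lambda args: "real"
sys.modules["mod"] = mod

# Build a stand-in for main.py that does `from mod import run`.
main = types.ModuleType("main")
exec("from mod import run\ndef foo(args):\n    return run(args)", main.__dict__)
sys.modules["main"] = main

# Patching the name where it is *used* intercepts the call...
with patch("main.run") as mock_run:
    main.foo("bar")
    mock_run.assert_called_once_with("bar")

# ...while the original function in its defining module is untouched.
unpatched_result = main.foo("bar")
```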
|
<python><python-3.x><pytest><python-unittest><python-unittest.mock>
|
2024-06-01 21:08:08
| 1
| 12,532
|
ealeon
|
78,564,771
| 13,968,392
|
Alternative to df.rename(columns=str.replace(" ", "_"))
|
<p>I noticed that it's possible to use <code>df.rename(columns=str.lower)</code>, but not <code>df.rename(columns=str.replace(" ", "_"))</code>.</p>
<ol>
<li><p>Is this because it is allowed to use the variable which stores the method (<code>str.lower</code>), but it's not allowed to actually call the method (<code>str.lower()</code>)?
There is a similar <a href="https://stackoverflow.com/questions/62070778">question</a>, why the error message of df.rename(columns=str.replace(" ", "_")) is rather confusing – without an answer on that.</p>
</li>
<li><p>Is it possible to use methods of the <a href="https://pandas.pydata.org/docs/reference/api/pandas.Index.str.html#pandas.Index.str" rel="nofollow noreferrer"><code>.str</code></a> accessor (of <code>pd.DataFrame().columns</code>) inside of <code>df.rename(columns=...)</code>?
The only solution I came up so far is</p>
<pre><code>df = df.rename(columns=dict(zip(df.columns, df.columns.str.replace(" ", "_"))))
</code></pre>
<p>but maybe there is something more consistent and similar to style of <code>df.rename(columns=str.lower)</code>? I know <code>df.rename(columns=lambda x: x.replace(" ", "_")</code> works, but it doesn't use the <code>.str</code> accessor of pandas columns, it uses the <a href="https://docs.python.org/3/library/stdtypes.html#str.replace" rel="nofollow noreferrer"><code>str.replace()</code></a> of the standard library.<br />
The purpose of the question is explore the possibilities to use <a href="https://pandas.pydata.org/docs/reference/series.html#string-handling" rel="nofollow noreferrer">pandas str methods</a> when renaming columns in method chaining, that's why <code>df.columns = df.columns.str.replace(' ', '_')</code> is not suitable to me.</p>
</li>
</ol>
<p>As an <code>df</code> example, assume:</p>
<pre><code>df = pd.DataFrame([[0,1,2]], columns=["a pie", "an egg", "a nut"])
</code></pre>
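<p>On point 1: yes, <code>str.lower</code> is a method object that <code>rename</code> later calls with each column name, whereas <code>str.replace(" ", "_")</code> calls <code>str.replace</code> immediately with the wrong arguments. To keep the call-free style without a lambda, one option (not pandas-specific, just a stdlib sketch) is <code>operator.methodcaller</code>, which builds such a deferred method call:</p>

```python
from operator import methodcaller

# Equivalent to lambda col: col.replace(" ", "_"), but in the same
# "pass a callable" style as str.lower.
snake = methodcaller("replace", " ", "_")

renamed = [snake(name) for name in ["a pie", "an egg", "a nut"]]
```

<p>This would be used as <code>df.rename(columns=methodcaller("replace", " ", "_"))</code>; note it still calls the built-in <code>str.replace</code> on each label, not the pandas <code>.str</code> accessor, which operates on whole Index objects rather than on single labels.</p>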
|
<python><pandas><replace><rename><method-chaining>
|
2024-06-01 19:46:55
| 1
| 2,117
|
mouwsy
|
78,564,655
| 4,146,344
|
Show an image at some location on a point cloud using Open3D
|
<p>So I have a point cloud which I can show on the screen using Open3D's examples. However, now I have an image and I need to show that image on a specific coordinate on the point cloud. I can't find any example about how to do this. Does anyone know how it can be done ? Thank you very much.</p>
|
<python><point-clouds><open3d>
|
2024-06-01 18:44:45
| 1
| 710
|
Dang Manh Truong
|
78,564,587
| 4,956,494
|
manim - Can't animate a 3D Coin Flip
|
<p>I am trying to animate a coin flip using Manim.
For that I create a VGroup with a central Cylinder and two Dots for the two sides.</p>
<p>Still, when I rotate the object the bottom part doesn't come to the front; it's as if the front side always stays facing the camera. Can someone help me?</p>
<pre class="lang-py prettyprint-override"><code>class Coin(VGroup):
def __init__(self, radius=1, height=None, **kwargs):
super().__init__(**kwargs)
if height is None:
height = radius / 4
self.top = Dot(radius=radius, color=GOLD_D, fill_opacity=1)
self.top.move_to(OUT*height/2)
self.bottom = self.top.copy()
self.bottom.set_color(RED)
self.bottom.move_to(IN*height/2)
self.edge = Cylinder(radius=radius, height=height)
self.edge.set_fill(GREY, opacity=1)
# rotate the cylinder so that the upper face is towards the camera
self.edge.rotate(90 * DEGREES, OUT)
self.add(self.edge, self.bottom, self.top)
class CoinFlip(Scene):
def construct(self):
c = Coin()
self.play(Rotate(c, PI*.8, axis=RIGHT, about_point=ORIGIN))
</code></pre>
|
<python><manim>
|
2024-06-01 18:15:30
| 1
| 446
|
rusiano
|
78,564,523
| 7,760,910
|
Unable to mock awsglue module via tox
|
<p>I have one method in one of the modules like below:</p>
<pre><code>from awsglue.utils import getResolvedOptions
from functools import *
class FetchArguments:
def __init__(self) -> None:
pass
def fetch_arguments(self, args):
</code></pre>
<p>and now for this class, I have written a unit test where I am trying to mock the <code>awsglue</code> library using <code>unittest</code> library. I have used a <code>patch</code> decorator.</p>
<pre><code>import unittest
from unittest.mock import patch
from utils.fetch_arguments import FetchArguments
class TestFetchArguments(unittest.TestCase):
@patch("utils.fetch_arguments.awsglue.utils.getResolvedOptions")
def test_fetch_arguments_with_fullload(self, mock_get_resolved_options):
</code></pre>
<p>now when I try to run it via <code>tox</code>, it throws the below error:</p>
<pre><code>from awsglue.dynamicframe import DynamicFrame
ModuleNotFoundError: No module named 'awsglue'
</code></pre>
<p>Code is working fine, as I have tested it using the below first approach.</p>
<p><strong>Alternative-1</strong>:</p>
<p>To resolve this error, I installed <code>git+https://github.com/awslabs/aws-glue-libs.git</code> (latest glue library) and copied the libraries in the <code>tox->python->lib->site-packages</code> folder and my tests ran fine. But this is not suitable for the CI/CD purposes.</p>
<p><strong>Alternative-2</strong>:<br />
I directly tried running the test via this command too <code>python -m unittest discover -v</code> but then it throws an error w.r.t python modules imported like the below one:</p>
<pre><code>from models.model import SparkJobFactoryDetails
ModuleNotFoundError: No module named 'models'
</code></pre>
<p><strong>Alternative-3</strong>:</p>
<p>I also tried to put this <code>git+https://github.com/awslabs/aws-glue-libs.git</code> in my <code>setup.py</code> and then tried to install this package at runtime but somehow it doesn't accept this format in my setup.py file.</p>
<p>So given these scenarios what exactly can be done to make it smoother?</p>
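<p>One workaround often used for libraries that only exist inside the Glue runtime (a sketch, not something from the question) is to register stub modules in <code>sys.modules</code> before the code under test is imported; with pytest this usually lives in a <code>conftest.py</code> so it runs before test collection:</p>

```python
import sys
import types
from unittest.mock import MagicMock

# Register fake awsglue modules so `from awsglue.utils import ...` succeeds
# even though the real library is only available inside the Glue runtime.
fake_awsglue = types.ModuleType("awsglue")
fake_utils = types.ModuleType("awsglue.utils")
fake_utils.getResolvedOptions = MagicMock(name="getResolvedOptions")
fake_awsglue.utils = fake_utils

sys.modules["awsglue"] = fake_awsglue
sys.modules["awsglue.utils"] = fake_utils

# Any later import now resolves to the stub instead of raising.
from awsglue.utils import getResolvedOptions
```

<p>With the stub in place, the decorator could target the module where the function is used, for example <code>@patch("utils.fetch_arguments.getResolvedOptions")</code>, since <code>fetch_arguments</code> imports the name with a from-import.</p>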
|
<python><amazon-web-services><unit-testing><aws-glue><python-unittest>
|
2024-06-01 17:48:16
| 0
| 2,177
|
whatsinthename
|
78,564,333
| 1,802,225
|
How to init Pool() inside child daemonic, but only once?
|
<p>On Python <code>3.11</code> and <code>Ubuntu</code> I have a task that starts asynchronous calls at a fixed time interval (not <code>asyncio</code>) and, inside each child, runs a multiprocessing task. I have 36 cores / 72 logical processors. The problem is that every time I create a new <code>Pool(72)</code> it takes 0.3 seconds, which is too much for my task, because performance matters. With this article <a href="https://stackoverflow.com/questions/6974695/python-process-pool-non-daemonic">Python Process Pool non-daemonic?</a> I found out how to create a new pool inside a pool (using <code>NoDaemonProcess</code>). But how can I init the child pool only once? <code>concurrent.futures</code> is not an option for me, because I ran a test and it's slower than <code>multiprocessing</code>.</p>
<p>Here is working example, I need to modify somehow to init pool inside child only once.</p>
<pre><code>parent pid=907058
2024-06-01 19:16:44.856839 start
2024-06-01 19:16:44.861229 sleep 4 sec
2024-06-01 19:16:44.861777 [907059] on_message(): 1
2024-06-01 19:16:44.866430 [907059] starting pool..
2024-06-01 19:16:44.867275 worker_function(), a=907059_1
2024-06-01 19:16:44.867373 worker_function(), a=907059_2
2024-06-01 19:16:44.867410 worker_function(), a=907059_3
2024-06-01 19:16:48.861738 start
2024-06-01 19:16:48.864965 sleep 4 sec
2024-06-01 19:16:48.865581 [907070] on_message(): 2
2024-06-01 19:16:48.870826 [907070] starting pool..
2024-06-01 19:16:48.871544 worker_function(), a=907070_1
2024-06-01 19:16:48.871638 worker_function(), a=907070_2
2024-06-01 19:16:48.871695 worker_function(), a=907070_3
2024-06-01 19:16:52.865456 long sleep..
2024-06-01 19:16:56.867489 end worker_function(), a=907059_1
2024-06-01 19:16:56.867657 end worker_function(), a=907059_3
2024-06-01 19:16:56.867666 end worker_function(), a=907059_2
2024-06-01 19:16:56.868269 [907059] pool ended
2024-06-01 19:16:56.870487 [907059] finished on_message(): 1
2024-06-01 19:17:00.871746 end worker_function(), a=907070_1
2024-06-01 19:17:00.871896 end worker_function(), a=907070_2
2024-06-01 19:17:00.871903 end worker_function(), a=907070_3
2024-06-01 19:17:00.872659 [907070] pool ended
2024-06-01 19:17:00.874545 [907070] finished on_message(): 2
2024-06-01 19:17:12.865676 finished
</code></pre>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import os
import time
import traceback
from datetime import datetime
from multiprocessing import Pool
import multiprocessing.pool
# https://stackoverflow.com/questions/6974695/python-process-pool-non-daemonic
class NoDaemonProcess(multiprocessing.Process):
@property
def daemon(self):
return False
@daemon.setter
def daemon(self, value):
pass
class NoDaemonContext(type(multiprocessing.get_context())):
Process = NoDaemonProcess
class NestablePool(multiprocessing.pool.Pool):
def __init__(self, *args, **kwargs):
kwargs['context'] = NoDaemonContext()
super(NestablePool, self).__init__(*args, **kwargs)
class Message():
def __init__(self):
# self.pool_3 = Pool(3)
pass
def worker_function(self, a):
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} worker_function(), a={a}")
time.sleep(12)
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} end worker_function(), a={a}")
return None
def on_message(self, message):
try:
pid = os.getpid()
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} [{pid}] on_message(): {message}")
# I need to make code that here I don't init new Pool()
# because my server has 72 logic processos and it takes 300ms to init
# for my task it's super long, so I want to init Pool() once, but not everytime when calling on_message()
# this could be possible solution
# but it does not work, in __init__() the Pool(3) is not initing
# res = self.pool_3.starmap_async(self.worker_function, [(f"{pid}_1",),(f"{pid}_2",),(f"{pid}_3",)]).get()
# if I init Pool with self. here, I will get error
# "Pool objects cannot be passed between processes or pickled"
with Pool(3) as pool:
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} [{pid}] starting pool..")
res = pool.starmap_async(self.worker_function, [(f"{pid}_1",),(f"{pid}_2",),(f"{pid}_3",)]).get()
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} [{pid}] pool ended")
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} [{pid}] finished on_message(): {message}")
# os.kill(pid, 9)
except Exception as e:
print(traceback.format_exc())
print(e)
if __name__ == "__main__":
print(f"parent pid={os.getpid()}")
# https://stackoverflow.com/a/44719580/1802225 process.terminate()
me = Message()
for i in range(1, 3):
print()
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} start")
# starting pool non-daemonic to start pool inside
pool = NestablePool(1)
pool.starmap_async(me.on_message, [(i,)])
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} sleep 4 sec")
time.sleep(4)
print()
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} long sleep..")
time.sleep(20)
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} finished")
</code></pre>
<hr />
<p>Another answer's code (modified), which does not work as expected:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing.pool import Pool, ThreadPool
from datetime import datetime
import traceback
import time
import os
def convert_char_to_integer(ch):
pid = os.getpid()
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} [{pid}] convert_char_to_integer(), ch={ch}")
time.sleep(12) # Simulate real processing
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} [{pid}] end convert_char_to_integer(), ch={ch}")
return ch
def on_message(multiprocessing_pool, message):
pid = os.getpid()
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} [{pid}] on_message(): {message}")
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} [{pid}] starting pool..")
result = multiprocessing_pool.starmap_async(convert_char_to_integer, [(f"{pid}_1",),(f"{pid}_2",),(f"{pid}_3",)]).get()
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} [{pid}] pool ended")
def await_next_message():
yield 'Message 1'
time.sleep(.1)
yield 'Message 2'
time.sleep(.1)
yield 'Message 3'
def main():
# Pool(3) must be 3 (let's say it's maxumim cpu's I have)
# if I set Pool(9) it will work correct, but this way is false, because it collects +3 +3.. pool that can be used
# that does not good for me, because in real task I use all CPUs
# and I need to execute pools independently of each other
with Pool(3) as multiprocessing_pool, ThreadPool(3) as multithreading_pool:
for message in await_next_message():
print()
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} start")
multithreading_pool.apply_async(on_message, args=(multiprocessing_pool, message))
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} sleep 4 sec")
time.sleep(4)
# Wait for all submitted tasks to complete
multithreading_pool.close()
multithreading_pool.join()
if __name__ == '__main__':
main()
</code></pre>
<p>Result:</p>
<pre><code>2024-06-03 20:41:26.731560 start
2024-06-03 20:41:26.731670 sleep 4 sec
2024-06-03 20:41:26.731726 [27211] on_message(): Message 1
2024-06-03 20:41:26.731838 [27211] starting pool..
2024-06-03 20:41:26.732411 [27212] convert_char_to_integer(), ch=27211_1
2024-06-03 20:41:26.732582 [27213] convert_char_to_integer(), ch=27211_2
2024-06-03 20:41:26.732634 [27214] convert_char_to_integer(), ch=27211_3
2024-06-03 20:41:30.832071 start
2024-06-03 20:41:30.832175 sleep 4 sec
2024-06-03 20:41:30.832264 [27211] on_message(): Message 2
2024-06-03 20:41:30.832343 [27211] starting pool..
# (!) HERE is not expected behavior. It must be:
# convert_char_to_integer(), ch=27211_1
# convert_char_to_integer(), ch=27211_2
# convert_char_to_integer(), ch=27211_3
# , but that is not happening, because it is waiting 3 processes to be ending ("end convert_char_to_integer()"), but I need to work them independently
2024-06-03 20:41:34.932409 start
2024-06-03 20:41:34.932532 sleep 4 sec
2024-06-03 20:41:34.932644 [27211] on_message(): Message 3
2024-06-03 20:41:34.932712 [27211] starting pool..
2024-06-03 20:41:38.732708 [27212] end convert_char_to_integer(), ch=27211_1
2024-06-03 20:41:38.732865 [27213] end convert_char_to_integer(), ch=27211_2
2024-06-03 20:41:38.732895 [27214] end convert_char_to_integer(), ch=27211_3
2024-06-03 20:41:38.733189 [27212] convert_char_to_integer(), ch=27211_1
2024-06-03 20:41:38.733256 [27214] convert_char_to_integer(), ch=27211_2
2024-06-03 20:41:38.733299 [27213] convert_char_to_integer(), ch=27211_3
2024-06-03 20:41:38.733613 [27211] pool ended
2024-06-03 20:41:50.733313 [27212] end convert_char_to_integer(), ch=27211_1
2024-06-03 20:41:50.733415 [27214] end convert_char_to_integer(), ch=27211_2
2024-06-03 20:41:50.733449 [27213] end convert_char_to_integer(), ch=27211_3
2024-06-03 20:41:50.733624 [27212] convert_char_to_integer(), ch=27211_1
2024-06-03 20:41:50.733642 [27214] convert_char_to_integer(), ch=27211_2
2024-06-03 20:41:50.733668 [27213] convert_char_to_integer(), ch=27211_3
2024-06-03 20:41:50.733868 [27211] pool ended
2024-06-03 20:42:02.733748 [27212] end convert_char_to_integer(), ch=27211_1
2024-06-03 20:42:02.733746 [27214] end convert_char_to_integer(), ch=27211_2
2024-06-03 20:42:02.733757 [27213] end convert_char_to_integer(), ch=27211_3
2024-06-03 20:42:02.734315 [27211] pool ended
</code></pre>
|
<python><python-3.x><multithreading><multiprocessing><python-multiprocessing>
|
2024-06-01 16:27:41
| 1
| 1,770
|
sirjay
|
78,564,217
| 1,540,785
|
starting container process caused: exec: "fastapi": executable file not found in $PATH: unknown
|
<p>I am trying to Dockerize my fastapi application, but it fails with the following error</p>
<blockquote>
<p>Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "fastapi": executable file not found in $PATH: unknown</p>
</blockquote>
<p>Can someone help me out?</p>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM python:3.12 as builder
# install and setup poetry config
RUN pip install poetry==1.8.2
ENV POETRY_NO_INTERACTION=1 \
POETRY_VIRTUALENVS_IN_PROJECT=1 \
POETRY_VIRTUALENVS_CREATE=1 \
POETRY_CACHE_DIR=/tmp/poetry_cache
WORKDIR /navigation
COPY pyproject.toml poetry.lock ./
# poetry complains if there is no README file
RUN touch README.md
# install without dev dependencies + remove poetry cache
RUN poetry install --without dev && rm -rf $POETRY_CACHE_DIR
FROM python:3.12-alpine as runtime
ENV VIRTUAL_ENV=/navigation/.venv \
PATH="/navigation/.venv/bin:$PATH"
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY navigation ./navigation
CMD ["fastapi", "run", "main.py", "--proxy-headers", "--port", "80"]
</code></pre>
<p><strong>docker-compose.yml</strong></p>
<pre><code>services:
navigation-api:
build:
context: .
dockerfile: Dockerfile
volumes:
- ./navigation:/navigation
</code></pre>
<p>I'm using poetry (as can be seen in the Dockerfile) to install my dependencies. Here are my dependencies in my pyproject.toml file.</p>
<p><strong>pyproject.toml</strong></p>
<pre><code>[tool.poetry.dependencies]
python = ">=3.12,<3.13"
fastapi = "^0.111.0"
prisma = "^0.13.1"
</code></pre>
<p>I also tried to use uvicorn instead of using the fastapi-cli, but the same error occurs.</p>
<p>I tried to not use the builder pattern to see if it was an issue there. But the same error.</p>
<p>I checked this ticket out, but no solutions offered worked: <a href="https://stackoverflow.com/questions/72235848/starting-container-process-caused-exec-uvicorn-executable-file-not-found-in">starting container process caused: exec: "uvicorn": executable file not found in $PATH: unknown</a></p>
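<p>Two things in this setup could each produce the error, and both are assumptions on my part rather than confirmed causes: first, the virtualenv is built on the Debian-based <code>python:3.12</code> image (glibc) but copied into <code>python:3.12-alpine</code> (musl), so its console scripts and any compiled wheels cannot execute there; second, the compose volume <code>./navigation:/navigation</code> mounts the host folder over <code>/navigation</code>, hiding the baked-in <code>.venv</code>. A sketch of the runtime stage that keeps both stages on the same libc family:</p>

```dockerfile
# Stay on a glibc (Debian-based) image so the builder's virtualenv keeps working
FROM python:3.12-slim as runtime

ENV VIRTUAL_ENV=/navigation/.venv \
    PATH="/navigation/.venv/bin:$PATH"

COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY navigation ./navigation

CMD ["fastapi", "run", "main.py", "--proxy-headers", "--port", "80"]
```

<p>If the bind mount turns out to be the culprit instead, mounting a subpath (for example <code>./navigation:/navigation/navigation</code>) or dropping the volume for production builds would leave the virtualenv visible inside the container.</p>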
|
<python><docker><fastapi><python-poetry><builder-pattern>
|
2024-06-01 15:38:00
| 2
| 371
|
kanadianDri3
|
78,563,834
| 11,834,577
|
Tesseract not recognising digits correctly
|
<p>I have some images, I am preprocessing them before extracting digits from them. The problem is "Tesseract" is not able to extract accurate digits from them. The images only contain digits.</p>
<p>Following is my code:</p>
<pre><code>from PIL import Image, ImageEnhance, ImageFilter
import pytesseract
CAPTCHA_PATH = 'captcha_images/captcha8.jpeg'
RED_REMOVED_PATH = 'processed/red_removed.jpeg'
PROCESSED_IMAGE_PATH = 'processed/processed_image.png'
def change_pixels_except_black(image_path, output_path, threshold=50):
"""
Change all pixels to white except for pixels close to black.
:param image_path: Path to the input image.
:param output_path: Path to save the output image.
:param threshold: Threshold to determine if a pixel is black (default is 50).
"""
# Open the image
image = Image.open(image_path)
image = image.convert("RGB") # Ensure the image is in RGB mode
# Load the image data
pixels = image.load()
# Get the dimensions of the image
width, height = image.size
# Iterate over each pixel
for y in range(height):
for x in range(width):
# Get the current pixel's color
r, g, b = pixels[x, y]
# Check if the pixel is close to black
if r < threshold and g < threshold and b < threshold:
# Keep the pixel as is (close to black)
continue
else:
# Change the pixel to white
pixels[x, y] = (255, 255, 255)
# Save the modified image
image.save(output_path)
def preprocess_image(image_path, output_path):
"""
Preprocess the image to enhance its quality for OCR.
:param image_path: Path to the input image.
:param output_path: Path to save the processed image.
"""
# Open the image
image = Image.open(image_path)
# Resize the image
new_width = image.width * 2
new_height = image.height * 2
image = image.resize((new_width, new_height), Image.LANCZOS)
# Convert to grayscale
image = image.convert('L')
# Increase contrast
enhancer = ImageEnhance.Contrast(image)
image = enhancer.enhance(2)
# Apply a filter to sharpen the image
image = image.filter(ImageFilter.SHARPEN)
# Save the processed image
image.save(output_path)
def extract_text(image_path):
"""
Extract text from an image using pytesseract.
:param image_path: Path to the input image.
:return: Extracted text.
"""
text = pytesseract.image_to_string(Image.open(
image_path), lang='eng', config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789')
return text
# Example usage
change_pixels_except_black(CAPTCHA_PATH, RED_REMOVED_PATH, threshold=70)
preprocess_image(RED_REMOVED_PATH, PROCESSED_IMAGE_PATH)
text = extract_text(PROCESSED_IMAGE_PATH)
print(f"Extracted Text of {CAPTCHA_PATH}:")
print(text)
</code></pre>
<p>These are some examples of images:</p>
<p><strong>Original Image</strong></p>
<p><a href="https://i.sstatic.net/eyVGZCvI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eyVGZCvI.jpg" alt="Captcha 3" /></a></p>
<p><strong>Removed red lines</strong></p>
<p><a href="https://i.sstatic.net/vKaCAlo7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vKaCAlo7.jpg" alt="Red lines removed" /></a></p>
<p><strong>After Preprocessing</strong></p>
<p><a href="https://i.sstatic.net/7zKVimeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7zKVimeK.png" alt="Preprocessed" /></a></p>
<blockquote>
<p>Output: 0858</p>
</blockquote>
<blockquote>
<p>Expected Output: 08588</p>
</blockquote>
<p><strong>Some other examples</strong></p>
<p><a href="https://i.sstatic.net/o63psA4i.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o63psA4i.jpg" alt="Captcha 5" /></a></p>
<blockquote>
<p>Output: 95415</p>
</blockquote>
<blockquote>
<p>Expected Output: 92412</p>
</blockquote>
<p><a href="https://i.sstatic.net/it45PY5j.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/it45PY5j.jpg" alt="Captcha 6" /></a></p>
<blockquote>
<p>Output: 2043</p>
</blockquote>
<blockquote>
<p>Expected Output: 20413</p>
</blockquote>
<p><a href="https://i.sstatic.net/pBRAz0jf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBRAz0jf.jpg" alt="Captcha 7" /></a></p>
<blockquote>
<p>Output: 61416</p>
</blockquote>
<blockquote>
<p>Expected Output: 61116</p>
</blockquote>
<p>I have also tried different "Tesseract" configurations with no success. How can I improve/modify my code so it can extract text/digits accurately?</p>
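<p>One configuration detail that may be worth checking (an assumption, not a tested fix): <code>--psm 10</code> tells Tesseract to treat the image as a <em>single character</em>, which can drop digits from a multi-digit string; <code>--psm 7</code> (a single text line) usually fits captchas like these better. The alternative config in miniature:</p>

```python
# --psm 7: segment the image as one line of text,
# rather than as one character (--psm 10)
config = "--psm 7 --oem 3 -c tessedit_char_whitelist=0123456789"

# text = pytesseract.image_to_string(image, config=config)  # needs the tesseract binary
assert "--psm 7" in config
```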
|
<python><tesseract><python-tesseract>
|
2024-06-01 12:53:17
| 0
| 388
|
Vishwa Mittar
|
78,563,796
| 1,045,364
|
How to add a scrollbar to python plotly long title
|
<p>I am using python plotly to create scatterplot using following function-</p>
<pre><code>def show_products(products: list[ProductInfo], title: str) -> None:
layout = go.Layout(
autosize=False,
width=1500,
height=1500,
xaxis=go.layout.XAxis(linecolor="black", linewidth=5, mirror=True),
yaxis=go.layout.YAxis(linecolor="black", linewidth=5, mirror=True),
margin=go.layout.Margin(l=50, r=50, b=100, t=100, pad=4),
)
fig = go.Figure(layout=layout)
for product in products:
add_scatter_plot(
fig,
product,
name_format="%website - %name - %id",
)
title_=dict(
text=title,
x=0.5,
y=.97,
xanchor='center',
yanchor='top',
pad = dict(
t = -10
),
font=dict(
#family='Courier New, monospace',
size=12,
color='#000000'
)
)
config_figure(fig, title_)
</code></pre>
<p>But the problem is that the title of my scatterplot is long (500+ characters), so I tried another trick: adding <code><br /></code> after portions of the title. But all went in vain. How can I add a horizontal scrollbar to accommodate the long title?</p>
<p>Without the <code><br /></code> tag, part of the title is out of view:
<a href="https://i.sstatic.net/82la0PPT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82la0PPT.png" alt="without adding <br >/ tag" /></a></p>
<p>After adding the <code><br /></code> tag, the title overlaps the chart area:
<a href="https://i.sstatic.net/MK6cnGpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MK6cnGpB.png" alt="with br tag" /></a></p>
|
<python><python-3.x><plotly><formatting>
|
2024-06-01 12:35:01
| 0
| 5,300
|
Learner
|
78,563,786
| 3,010,930
|
python compile with multiple functions
|
<p>(Edit: simpler example with functions.)
I'm having trouble compiling Python code with functions that call each other. A minimal example:</p>
<pre><code>def main():
str = """
def fn_2():
print("got so far")
def fn(input):
fn_2()
fn("abc")
"""
mod = compile(str, "testing", 'exec')
exec(mod)
if __name__ == "__main__":
main()
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/wvw/git/n3/fun3/python/test_compile.py", line 26, in <module>
main()
File "/Users/wvw/git/n3/fun3/python/test_compile.py", line 23, in main
exec(mod)
File "testing", line 8, in <module>
File "testing", line 6, in fn
NameError: name 'fn_2' is not defined
</code></pre>
<p>When I add the function to the original file, it works:</p>
<pre><code>def fn_2():
print("got here")
def main():
str = """
def fn_2():
print("got so far")
# def fn(input):
# fn_2()
fn("abc")
"""
mod = compile(str, "testing", 'exec')
exec(mod)
if __name__ == "__main__":
main()
</code></pre>
<p>But, that is not my goal :-) I'm working on a source-to-source compilation project that spews out a number of functions that call each other.</p>
<p>Clearly, there are some inner workings of compile that I am unaware of. I'm hoping someone with metaprogramming experience in python will be able to shed some light!</p>
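<p>For reference, the relevant inner working: when <code>exec</code> is called with no namespace arguments inside a function, top-level <code>def</code>s land in the caller's <em>locals</em>, while name lookups from inside the executed functions go to <em>globals</em>, hence the <code>NameError</code>. Passing one explicit dict as the globals namespace is a common fix; a minimal sketch (a hypothetical snippet, not the asker's exact code):</p>

```python
src = """
def fn_2():
    return "got so far"

def fn(input):
    return fn_2()

result = fn("abc")
"""

ns = {}  # one dict used as both globals and locals
exec(compile(src, "testing", "exec"), ns)
# fn_2 is now found, because the defs and the lookups share the same dict
assert ns["result"] == "got so far"
```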
|
<python><compilation><metaprogramming>
|
2024-06-01 12:31:41
| 1
| 1,434
|
William
|
78,563,412
| 3,160,186
|
Specify custom errorbar values in Seaborn
|
<p>I have the following dataframe:</p>
<pre class="lang-py prettyprint-override"><code>data = pd.DataFrame([
["A", "gelu", 0.896048951, 0.897377622, 0.893671329],
["A", "hard_tanh", 0.889965035, 0.891643357, 0.888566434],
["A", "leaky_relu", 0.89527972, 0.896818182, 0.89451049],
["A", "relu", 0.893811189, 0.896748252, 0.891853147],
["A", "tanh", 0.892272727, 0.894230769, 0.889685315],
["B", "gelu", 0.896048951, 0.897377622, 0.893671329],
["B", "hard_tanh", 0.889965035, 0.891643357, 0.888566434],
["B", "leaky_relu", 0.89527972, 0.896818182, 0.89451049],
["B", "relu", 0.893811189, 0.896748252, 0.891853147],
["B", "tanh", 0.892272727, 0.894230769, 0.889685315],
], columns=["mode", "act_fn", "accuracy", "y_err1", "y_err2"])
</code></pre>
<p>I'm trying to display it with a bar plot using seaborn, however no matter what I try I keep getting the error <code>'yerr' (shape: (2, 5)) must be a scalar or a 1D or (2, n) array-like whose shape matches 'y' (shape: (1,))</code> or similar, how should I proceed? This is the code I'm using:</p>
<pre><code>sns.barplot(
data=data,
x="act_fn",
y="accuracy",
hue="mode",
# yerr=tmp_df[["y_err1", "y_err2"]].to_numpy().T, # this doens't work even if I remove the hue and try to display only mode A
ax=ax,
)
</code></pre>
<p>Also I imagine I would have to compute y_err1 and y_err2 as values relative to accuracy and not absolute ones, but just getting the errorbars to plot would be great!</p>
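<p>For what it's worth, seaborn's <code>errorbar</code> parameter expects an aggregation method, not precomputed values; one hedged workaround is to compute the relative offsets yourself and draw them with matplotlib's <code>ax.errorbar</code> on top of the seaborn bars. A sketch of just the offset computation, assuming <code>y_err1</code> is the upper bound and <code>y_err2</code> the lower:</p>

```python
import numpy as np

accuracy = np.array([0.8960, 0.8900, 0.8953, 0.8938, 0.8923])
y_err1   = np.array([0.8974, 0.8916, 0.8968, 0.8967, 0.8942])  # upper bounds
y_err2   = np.array([0.8937, 0.8886, 0.8945, 0.8919, 0.8897])  # lower bounds

# matplotlib's yerr wants offsets relative to y, shaped (2, n): [lower; upper]
yerr = np.stack([accuracy - y_err2, y_err1 - accuracy])
assert yerr.shape == (2, 5) and (yerr >= 0).all()
# ax.errorbar(x_positions, accuracy, yerr=yerr, fmt="none", ecolor="black")
```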
|
<python><seaborn><bar-chart><errorbar>
|
2024-06-01 10:00:23
| 0
| 309
|
Liuka
|
78,562,982
| 6,638,232
|
How count adjacent points with the same values and label them according to the count in Python
|
<p>I have the following script and sample data:</p>
<pre><code>import numpy as np
import pandas as pd
# Generating dummy data for testing
ROWS=10
COLS=20
X = np.random.randint(2, size=(ROWS, COLS))
# Visualizing
df = pd.DataFrame(data=X)
bg='background-color: '
df.style.apply(lambda x: [bg+'red' if v>=1 else bg+'yellow' for v in x])
</code></pre>
<p><a href="https://i.sstatic.net/vT5H7zVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vT5H7zVo.png" alt="enter image description here" /></a></p>
<p><strong>My problem is:</strong></p>
<p>How do I count the adjacent points (i.e., points with the same values and no gaps), group them, and label them according to the counts using Python?</p>
<p>Here's the expected image for the input above:</p>
<p><a href="https://i.sstatic.net/z9EDif5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z9EDif5n.png" alt="enter image description here" /></a></p>
<p>I'll appreciate any help that you can provide.</p>
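<p>If SciPy is available, <code>scipy.ndimage.label</code> plus <code>np.bincount</code> is the standard tool for this. As a dependency-free illustration of the idea, here is a small 4-connected flood-fill labeller (a sketch, not optimized for large grids):</p>

```python
import numpy as np
from collections import deque

def label_components(X):
    """Label 4-connected runs of equal values; return (labels, sizes)."""
    labels = np.zeros_like(X, dtype=int)
    sizes = {}
    current = 0
    rows, cols = X.shape
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:
                continue                      # already labelled
            current += 1
            labels[r, c] = current
            queue, count = deque([(r, c)]), 0
            while queue:                      # flood fill from (r, c)
                y, x = queue.popleft()
                count += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not labels[ny, nx] and X[ny, nx] == X[r, c]):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
            sizes[current] = count            # label -> size of its group
    return labels, sizes

X = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])
labels, sizes = label_components(X)
assert sizes[labels[0, 0]] == 3  # the three connected 1s
assert sizes[labels[2, 2]] == 1  # the isolated 1
```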
|
<python><numpy>
|
2024-06-01 06:25:45
| 0
| 423
|
Lyndz
|
78,562,910
| 2,396,539
|
Getting access to a protected member warning shown by pycharm in descriptor get method
|
<p>I was trying to play around with Python descriptors and Pycharm IDE seems to be complaining</p>
<p><a href="https://i.sstatic.net/v8XHPYGo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8XHPYGo.png" alt="enter image description here" /></a></p>
<p>Am I using descriptors incorrectly? Or is this expected?</p>
<p>Complete code:</p>
<pre><code>class Age:
def __set__(self, instance, value):
if value > 200:
raise ValueError("Age cannot be more than 200")
instance._age = value
def __get__(self, instance, owner):
if instance is None:
return self
return instance._age
class Employee:
age = Age()
def __init__(self):
self._age = None
def __str__(self):
return f"Age: {self.age}"
e = Employee()
e.age = 204
print(e)
</code></pre>
|
<python><python-descriptors>
|
2024-06-01 05:42:17
| 1
| 69,441
|
Aniket Thakur
|
78,562,653
| 10,001,413
|
Internal Server Error when hosting flask web app
|
<p>I am trying to host a flask web app on a hosting platform. So far I have tried to host on vercel and pythonanywhere, and I seem to run into the same issue with both. The code runs fine locally but on the cloud it gives me an <code>Internal Server Error (500)</code></p>
<p>The code is long, but I'll try to shorten it here.
This is my Flask app:</p>
<pre><code>from flask import Flask, request, jsonify, Response, send_from_directory, render_template
from flask_cors import CORS
import pandas as pd
import io, os, time
app = Flask(__name__, static_folder='static', template_folder='templates')
CORS(app)
@app.route('/')
def index():
return render_template('index.html')
@app.route('/process', methods=['POST'])
def process_file():
try:
type_id = int(request.form.get('buttonID'))
uploaded_file = request.files['file']
input_filename = uploaded_file.filename
uploaded_file.save(input_filename)
output_filename = process_file(input_filename)
file_extension = os.path.splitext(output_filename)[1].lower()
if file_extension == '.csv':
mimetype = 'text/csv'
elif file_extension in ['.xls', '.xlsx']:
mimetype = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
else:
return jsonify({"error": "Unsupported output file format"}), 400
with open(output_filename, 'rb') as f:
file_contents = f.read()
return Response(
file_contents,
mimetype=mimetype,
headers={"Content-Disposition": f"attachment;filename={output_filename}"}
)
except Exception as e:
return jsonify({"error": str(e)}), 500
@app.route('/static/<path:path>')
def send_static(path):
return send_from_directory('static', path)
</code></pre>
<p>The <code>process_file</code> function is defined, and it just creates the processed spreadsheet file in the directory and returns its name</p>
<p>This is my script.js (shortened):</p>
<pre><code>async function handleSubmit() {
if (!selectedButton) {
document.getElementById('process-warning').style.display = 'block';
return;
}
if (!fileForProcessing) {
console.error("No file selected.");
return;
}
const formData = new FormData();
formData.append('file', fileForProcessing);
formData.append('buttonID', selectedButton);
try {
const response = await fetch(`${window.location.origin}/process`, {
method: 'POST',
body: formData
});
if (response.status === 200) {
newFileData = await response.arrayBuffer();
recvdFile = true;
}
} catch (error) {
console.error(`Error: ${error}`);
} finally {
document.getElementById('spinner').style.display = 'none';
document.getElementById('processing-incomplete').style.display = 'none';
}
}
</code></pre>
<p>The error occurs specifically on this line:
<code>const response = await fetch(`${window.location.origin}/process`, {</code></p>
<p>This happens when I press a submit button and the <code>handleSubmit()</code> function is called</p>
<p>At first I thought it was an issue with vercel, since the program works great locally, so I tried python anywhere, but I get the same issue. Ive been trying to figure out where this issue occurs, but the console just says Internal Server Error and gives me no leads.</p>
<p>Any way I could resolve this?
TIA</p>
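<p>Two things in the snippet worth ruling out (observations about the code as pasted, not a confirmed diagnosis): the route handler is itself named <code>process_file</code>, so the call <code>process_file(input_filename)</code> inside it invokes the route function again rather than the spreadsheet helper; and many hosts (Vercel, PythonAnywhere) restrict writes outside <code>/tmp</code>, so <code>uploaded_file.save(input_filename)</code> can fail. The naming collision in miniature, with hypothetical names:</p>

```python
def helper(path):
    # stands in for the real spreadsheet-processing function
    return path + ".out"

# BAD: a route handler named process_file that calls process_file(...) calls itself.
# GOOD: give the route and the helper distinct names:
def process_file_route(path):
    return helper(path)

assert process_file_route("in.xlsx") == "in.xlsx.out"
```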
|
<javascript><python><flask><cloud><hosting>
|
2024-06-01 02:32:59
| 1
| 350
|
Shell1500
|
78,562,537
| 1,762,950
|
How to save numpy.ndarray in static or thread local variable in python extension written in Rust with PyO3?
|
<p>I am building a simple Python extension to process numpy.ndarray objects using the rust-numpy crate. I want to save numpy.ndarray objects in static or thread-local variables for later processing:</p>
<pre class="lang-rust prettyprint-override"><code>use std::cell::RefCell;
use numpy::PyReadwriteArray1;
use pyo3::{Bound, pymodule, PyResult, types::PyModule};
thread_local! {
static ARRAYS: RefCell<Vec::<PyReadwriteArray1<i32>>> = RefCell::new(Vec::<PyReadwriteArray1<i32>>::new());
}
#[pymodule]
fn rust_ext<'py>(m: &Bound<'py, PyModule>) -> PyResult<()> {
#[pyfn(m)]
fn register_array(mut x: PyReadwriteArray1<i32>) {
x.as_array_mut()[0] = 100;
ARRAYS.with_borrow_mut(|v| v.push(x));
}
Ok(())
}
</code></pre>
<p>But I got lifetime compile error:</p>
<pre><code>static ARRAYS: RefCell<Vec::<PyReadwriteArray1<i32>>> = RefCell::new(Vec::<PyReadwriteArray1<i32>>::new());
| ^ expected named lifetime parameter
</code></pre>
<p>So is it possible to save Python objects in an extension written in Rust? If so, how do I fix the 'py lifetime issue?</p>
|
<python><rust><pyo3>
|
2024-06-01 00:50:44
| 1
| 415
|
kyleqian
|
78,562,481
| 1,802,225
|
How to close multiprocessing pool inside process?
|
<p>I am wondering how to do multiprocessing in Python (<code>3.11</code>) with asynchronous calls (not the <code>asyncio</code> lib) and automatically close processes when they are finished.</p>
<p>Below I wrote some simple code; the problem is that it does not close the pool processes (the lines after the <code># this prints is never executed</code> comment are never reached). Perhaps there are errors, but I don't see any in the terminal output (testing on Ubuntu).</p>
<p>Task: I need to start processes at every timer interval, without waiting for the previous ones to finish. I have not found any information on the Internet, so I decided to make <code>pool_list</code> a global variable to get access to the pool objects (to close them later when a process is done).</p>
<p>How to close pools correct in my problem? Maybe there is another solution? I chose <code>concurrent.futures</code> lib, because later I will need to start new pool process inside process (in func <code>on_message</code>), I have read that <code>concurrent.futures</code> can do that (source: <a href="https://stackoverflow.com/questions/17223301/python-multiprocessing-is-it-possible-to-have-a-pool-inside-of-a-pool">Python multiprocessing: is it possible to have a pool inside of a pool?</a>).</p>
<pre><code>2024-06-01 02:53:23.984849 start
2024-06-01 02:53:23.987660 sleep 5 sec
2024-06-01 02:53:23.988430 new message: message-1, pool_list len=1
2024-06-01 02:53:28.988004 start
2024-06-01 02:53:28.991552 sleep 5 sec
2024-06-01 02:53:28.992286 new message: message-2, pool_list len=2
2024-06-01 02:53:33.991890 start
2024-06-01 02:53:33.994907 sleep 5 sec
2024-06-01 02:53:33.995735 new message: message-3, pool_list len=3
2024-06-01 02:53:38.988640 finished message: message-1
2024-06-01 02:53:38.995415 start
2024-06-01 02:53:38.998443 sleep 5 sec
2024-06-01 02:53:38.999227 new message: message-4, pool_list len=4
2024-06-01 02:53:43.992491 finished message: message-2
2024-06-01 02:53:43.999209 start
2024-06-01 02:53:44.002906 sleep 5 sec
2024-06-01 02:53:44.003642 new message: message-5, pool_list len=5
2024-06-01 02:53:48.995955 finished message: message-3
2024-06-01 02:53:49.003247 start
2024-06-01 02:53:49.006613 sleep 5 sec
2024-06-01 02:53:49.007651 new message: message-6, pool_list len=6
2024-06-01 02:53:53.999423 finished message: message-4
2024-06-01 02:53:59.003858 finished message: message-5
2024-06-01 02:54:04.007971 finished message: message-6
</code></pre>
<pre class="lang-py prettyprint-override"><code>import time
import random
import string
from datetime import datetime
# from multiprocessing import Pool
from concurrent.futures import ProcessPoolExecutor as Pool
pool_list = {}
def on_message(pool_id, message):
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} new message: {message}, pool_list len={len(pool_list)}")
time.sleep(15)
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} finished message: {message}")
# this prints is never executed
print('done:', pool_list[pool_id].done())
print('running:', pool_list[pool_id].running())
# close pool as it finished
pool_list[pool_id].cancel()
del pool_list[pool_id]
print('closed!', f"pool_list len={len(pool_list)}")
if __name__ == "__main__":
for i in range(1, 7):
print()
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} start")
pool_id = ''.join(random.choices(string.ascii_letters, k=10))
# we can not use "with" context, otherwise it will delay for loop until on_message() finished, but we need async
pool = Pool(1)
# adding pool to list in order to close() it in on_message()
pool_list[pool_id] = pool
pool.submit(on_message, pool_id, f'message-{i}')
# or:
# future = pool.submit(on_message, pool_id, f'message-{i}')
# pool_list[pool_id] = future
print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')} sleep 5 sec")
time.sleep(5)
</code></pre>
<p><strong>UPDATE:</strong> I wrapped <code>on_message</code> to try catch and <code>pool_list</code> is empty. No access to variable. So, how can I close pools?</p>
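<p>One pattern that avoids tracking pools by hand: keep a single executor alive and attach <code>add_done_callback</code> to each future; the callback fires when the work finishes, so workers never need to reach back into <code>pool_list</code> (which they cannot do anyway, since each worker process gets its own copy of module globals, which is why the dict appears empty). A sketch of the callback pattern, shown with threads so it is self-contained:</p>

```python
from concurrent.futures import ThreadPoolExecutor
import threading

results, lock = [], threading.Lock()

def on_message(message):
    return f"finished {message}"

def on_done(future):
    # called automatically once the task completes
    with lock:
        results.append(future.result())

with ThreadPoolExecutor(max_workers=3) as pool:
    for i in range(1, 4):
        pool.submit(on_message, f"message-{i}").add_done_callback(on_done)

assert sorted(results) == ["finished message-1",
                           "finished message-2",
                           "finished message-3"]
```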
|
<python><python-3.x><multiprocessing><python-multiprocessing><concurrent.futures>
|
2024-06-01 00:04:16
| 1
| 1,770
|
sirjay
|
78,562,406
| 253,039
|
Multiplying chains of matrices in JAX
|
<p>Suppose I have a vector of parameters <code>p</code> which parameterizes a set of matrices <code>A_1(p), A_2(p),...,A_N(p)</code>. I have a computation in which for some list of indices <code>q</code> of length <code>M</code>, I have to compute <code>A_{q_M} * ... * A_{q_2} * A_{q_1} * v</code> for several different <code>q</code> s. Each <code>q</code> has a different length, but crucially doesn't change! What changes, and what I wish to take gradients against is <code>p</code>.</p>
<p>I'm trying to figure out how to convert this to performant JAX. One way to do it is to have some large matrix <code>Q</code> which contains all the different <code>q</code>s on each row, padded out with identity matrices such that each multiplication chain is the same length, and then <code>scan</code> over a function that <code>switch</code> es between <code>N</code> different functions doing matrix-vector multiplications by <code>A_n(p)</code>.</p>
<p>However -- I don't particularly like the idea of this padding. Also, since <code>Q</code> here is fixed, is there potentially a smarter way to do this? The distribution of lengths of <code>q</code> s has a very long tail, so <code>Q</code> will be dominated by padding.</p>
<p>EDIT: Here's a (edit 2: functional) minimal example</p>
<pre class="lang-py prettyprint-override"><code>sigma0 = jnp.eye(2)
sigmax = jnp.array([[0, 1], [1, 0]])
sigmay = jnp.array([[0, -1j], [1j, 0]])
sigmaz = jnp.array([[1, 0], [0, -1]])
sigma = jnp.array([sigmax, sigmay, sigmaz])
def gates_func(params):
theta = params["theta"]
epsilon = params["epsilon"]
n = jnp.array([jnp.cos(theta), 0, jnp.sin(theta)])
omega = jnp.pi / 2 * (1 + epsilon)
X90 = expm(-1j * omega * jnp.einsum("i,ijk->jk", n, sigma) / 2)
return {
"Z90": expm(-1j * jnp.pi / 2 * sigmaz / 2),
"X90": X90
}
def multiply_out(params):
gate_lists = [["X90", "X90"], ["X90","Z90"], ["Z90", "X90"], ["X90","Z90","X90"]]
gates = gates_func(params)
out = jnp.zeros(len(gate_lists))
for i, gate_list in enumerate(gate_lists):
init = jnp.array([1.0,0.0], dtype=jnp.complex128)
for g in gate_list:
init = gates[g] @ init
out = out.at[i].set(jnp.abs(init[0]))
return out
params = dict(theta=-0.0, epsilon=0.001)
multiply_out(params)
</code></pre>
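<p>Since each <code>q</code> is static, one padding-free option is simply to fold each chain with a plain Python loop (or <code>functools.reduce</code>); under <code>jax.jit</code> such a loop unrolls at trace time, and gradients with respect to <code>params</code> flow through as usual. A sketch of the fold, with NumPy standing in for <code>jax.numpy</code> and hypothetical gates:</p>

```python
import numpy as np
from functools import reduce

def gates_func(p):
    # hypothetical parameterized gates, standing in for A_1(p), ..., A_N(p)
    return {"X90": np.array([[np.cos(p), -np.sin(p)],
                             [np.sin(p),  np.cos(p)]]),
            "Z90": np.eye(2)}

def apply_chain(gate_list, p, v):
    gates = gates_func(p)
    # fold: A_qM @ ... @ A_q2 @ A_q1 @ v, no padding required
    return reduce(lambda acc, name: gates[name] @ acc, gate_list, v)

v = np.array([1.0, 0.0])
out = apply_chain(["X90", "Z90", "X90"], 0.0, v)
assert np.allclose(out, v)  # with p = 0 both gates are the identity
```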
|
<python><jax>
|
2024-05-31 23:13:06
| 1
| 402
|
Evan
|
78,562,400
| 14,523,964
|
Trying to get JSON data from an API but getting an error when I try to concat in a while loop
|
<p>I am trying to get data from a specific table (<a href="https://data.cms.gov/provider-data/dataset/yizn-abxn" rel="nofollow noreferrer">https://data.cms.gov/provider-data/dataset/yizn-abxn</a>) on the the Centers for Medicare & Medicaid Services website using their API.</p>
<p>Because the API only provides 500 rows per request, I am using a while loop.</p>
<p>However, my concat isn't working for some reason. I tried googling for a solution, but the results didn't really help.</p>
<p>Below is my code.</p>
<pre><code>import pandas as pd
import requests
url = "https://data.cms.gov/provider-data/api/1/datastore/query/yizn-abxn/0"
def loop_json():
i = 0
total_rows = 1500
want = []
json_norm = []
while i < total_rows:
size = 500
offset_url = f"{url}?size={size}&offset={i}"
print(offset_url)
offset = i
offset_response = requests.request("GET", offset_url)
print(f"Made request for {size} results at offset {i}")
# if you want to save the data you would do that here
json_resp = offset_response.json()
json_norm = pd.json_normalize(json_resp['results'])
# want = pd.concat(json_norm)
i += size
return want
df = loop_json()
</code></pre>
<p>The function currently works if <code>want = pd.concat(json_norm)</code> is commented out, but it only gives me the last 242 rows.</p>
<p>Obviously, I would like the function to give me all 1,242 rows.</p>
<p>Any help would be greatly appreciated, especially since I am new to Python, JSON, and APIs.</p>
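<p>A common pattern for this (a sketch with fake pages, not the live API): <code>pd.concat</code> takes a <em>list</em> of DataFrames, so accumulate each page's frame in a list and concatenate once after the loop:</p>

```python
import pandas as pd

pages = [  # stand-ins for the per-offset JSON "results" arrays
    [{"id": 1}, {"id": 2}],
    [{"id": 3}],
]

frames = []
for page in pages:
    frames.append(pd.json_normalize(page))  # one frame per 500-row response

df = pd.concat(frames, ignore_index=True)   # combine once, after the loop
assert len(df) == 3
```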
|
<python><json>
|
2024-05-31 23:10:22
| 1
| 655
|
tonybot
|
78,562,227
| 3,954,026
|
python + SQLAlchemy: deleting taking 250,000x longer than querying the same data
|
<p>I am accessing a PostgreSQL database using Python and SQLAlchemy. I can't figure out how to delete rows in a timely manner. <strong>The query is fast; the delete takes 250,000x longer.</strong></p>
<p>I have a table, 'RP', that has 92M rows. I am trying to delete some of them.</p>
<p>I have some code that finds the objects I want to delete that works and runs fast, it is basically:</p>
<pre><code>import sqlalchemy as sa
...
with Session(engine, autobegin=True) as session:
...
r_count = 0
for image in images:
sub_r = sa.select(RP).filter_by(d_id=image.id)
count = session.execute(sa.select(sa.func.count()).select_from(sub_r)).scalar_one()
r_count += count
#print timing and count info here
</code></pre>
<p>This loop executes ~500 times in ~0.2 seconds, because len(images)~500, each time counting ~10-200 individual rows, for a total of ~70,000 rows that I want to delete.</p>
<p>When I add a delete command below, it takes much longer. Each of the 500 passes through the loop, which originally took <0.01 second to execute, now is taking 250 seconds for each iteration, meaning that it will take 36+ hours to delete these 70,000 rows (which were found by a query in <0.5 seconds).</p>
<pre><code>import sqlalchemy as sa
...
with Session(engine, autobegin=True) as session:
...
r_count = 0
for image in images:
sub_r = sa.select(RP).filter_by(d_id=image.id)
count = session.execute(sa.select(sa.func.count()).select_from(sub_r)).scalar_one()
r_count += count
#print timing and count info here
#New Delete Code Below
for image in images:
session.query(RP).filter_by(d_id=image.id).delete()
#session.commit() #tried with this inserted and removed, doesn't seem to matter
# Deleting using the session does not go faster
# del_rp = sa.delete(RP).where(RP.d_id ==image.id)
# session.execute(del_rp)
# Deleting one at a time also doesn't seem to go faster
# rois = session.query(RP).filter_by(d_id=image.id).all()
# for r in rois:
# session.delete(r)
#print timing for each loop here, ~260+ seconds for each loop
</code></pre>
<p><strong>To summarize: I tried 3 main strategies. I tried doing session.query().filter_by().delete(), I tried doing session.execute(sa.delete().where()), and I tried doing a for loop on session.query().filter_by().all(), and then doing a session.delete(one). I also tried including session.commit() at points in the middle.</strong></p>
<p>I am expecting it to execute much faster. I see no reason why the query should go fast and the delete take many orders of magnitude longer.</p>
<p>I am the only user on the server. So there is no other possible bottleneck besides this code. The commented out methods also seem to take many hundreds of seconds for a few dozen deletes (it would take me longer to gather precise timing info)</p>
<p>I am using the pgAdmin dashboard, and in the 'Tuples Out' view, I see ~200-300 seconds of flatline, then 4,000 fetched, 12 billion returned. If I'm deleting ~140 objects in that pass through the loop, then that corresponds to each individual delete causing a fetch/return of the whole 92M table. Is there some way to tell the delete that I don't care for any return value (assuming that's what is happening?)</p>
<p>I suppose I could try to be more clever and accumulate all 70,000 rows and issue a single delete command, but based on the pgAdmin dashboard, it seems like this will not help, because it seems like the session.query().filter_by().delete() is getting broken up individually anyways.</p>
<p>What am I supposed to be doing differently? I have tried .delete(synchronize_session='fetch'), it doesn't seem to help, but I don't remember if I tried it in every permutation.</p>
<p>Maybe this should be a second question, but I also can't figure out a way to interrupt the code. It seems like if I send a ctrl+C, it waits until the ~250 second loop iteration is done. If I use the pgAdmin tool, I don't have permission to kill the session. I don't want a bunch of idle threads clogging up the server, so at this point I'm just being patient and waiting for the loop</p>
<p><strong>Edit: I checked the server as suggested. I see that the actual code being executed is</strong></p>
<pre><code>DELETE FROM rp WHERE rp.d_id = 5254591 RETURNING rp.id
</code></pre>
<p>That should just be returning a single ID, correct?</p>
<p>If I change my code to:</p>
<p><code>session.execute(sa.delete(RP).where(RP.d_id==image.id))</code></p>
<p>then the server log just says</p>
<pre><code>DELETE FROM rp WHERE rp.d_id = 5254591
</code></pre>
<p>But even with the modified, non-returning syntax, the pgAdmin dashboard is still saying that I'm executing <em>dozens</em> of queries and returning <strong>billions</strong> of tuples in <5 seconds, then 250+ seconds of the database doing nothing. My code is being executed on the server and connecting via loopback IP, should be no network latency.</p>
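<p>Two things the numbers above are consistent with (hedged guesses, not a verified diagnosis): a missing index on <code>rp.d_id</code> (without one, every per-image <code>DELETE</code> has to scan the whole 92M-row table), and the ~500 separate statements, which a single <code>IN</code>-clause delete avoids. The batching idea in miniature, with stdlib sqlite3 standing in for Postgres:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rp (id INTEGER PRIMARY KEY, d_id INTEGER)")
conn.executemany("INSERT INTO rp (d_id) VALUES (?)",
                 [(i % 5,) for i in range(100)])
# an index on the filter column turns each delete from a full scan into a lookup
conn.execute("CREATE INDEX ix_rp_d_id ON rp (d_id)")

image_ids = [1, 2]                      # stand-ins for the image.id values
placeholders = ",".join("?" * len(image_ids))
cur = conn.execute(f"DELETE FROM rp WHERE d_id IN ({placeholders})", image_ids)
assert cur.rowcount == 40               # 2 of the 5 d_id groups, 20 rows each
```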
|
<python><postgresql><sqlalchemy>
|
2024-05-31 21:48:59
| 1
| 552
|
Jeff Ellen
|
78,562,109
| 2,252,356
|
Keras model building get error Cannot convert '51' to a shape
|
<p>I am making a Keras model to classify human pose. I have taken the code from this <a href="https://www.tensorflow.org/lite/tutorials/pose_classification" rel="nofollow noreferrer">link</a>.</p>
<p>It works fine in Colab, but locally I get the error below:</p>
<p><code>Cannot convert '51' to a shape.</code></p>
<pre class="lang-py prettyprint-override"><code>def landmarks_to_embedding(landmarks_and_scores):
"""Converts the input landmarks into a pose embedding."""
# Reshape the flat input into a matrix with shape=(17, 3)
reshaped_inputs = keras.layers.Reshape((17, 3))(landmarks_and_scores)
# Normalize landmarks 2D
landmarks = normalize_pose_landmarks(reshaped_inputs[:, :, :2])
# Flatten the normalized landmark coordinates into a vector
embedding = keras.layers.Flatten()(landmarks)
return embedding
inputs = tf.keras.Input(shape=(51))
embedding = landmarks_to_embedding(inputs)
layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
layer = keras.layers.Dropout(0.5)(layer)
layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer)
layer = keras.layers.Dropout(0.5)(layer)
outputs = keras.layers.Dense(len(class_names), activation="softmax")(layer)
model = keras.Model(inputs, outputs)
model.summary()
</code></pre>
|
<python><tensorflow><machine-learning><keras><deep-learning>
|
2024-05-31 20:59:15
| 0
| 2,008
|
B L Praveen
|
78,561,921
| 6,539,635
|
How would I implement the Hausdorff distance using Gekko?
|
<p>If the following function took in arrays A and B that contain arrays of Gekko variables instead of floats, how would I find the Hausdorff distance, i.e., how would I define np.inf and modify the rest of the code?</p>
<pre><code>import numpy as np

def hausdorff_distance(A, B):
    """
    Compute the directed Hausdorff distance between two sets of points A and B.
    """
    max_dist = 0
    for a in A:
        # distance from a to the closest point of B
        min_dist = np.inf
        for b in B:
            dist = np.linalg.norm(a - b)
            if dist < min_dist:
                min_dist = dist
        # keep the largest of the closest-point distances
        if min_dist > max_dist:
            max_dist = min_dist
    return max_dist
</code></pre>
|
<python><geometry><distance><gekko>
|
2024-05-31 20:00:18
| 1
| 349
|
Aaron John Sabu
|
78,561,796
| 4,159,193
|
Can't find command fastapi for Python on MSYS2
|
<p>I want to make the Python fastapi work with MSYS2.</p>
<p>I work from the MSYS2 MinGW x64 shell.
I have the following installed using the commands:</p>
<pre><code>pacman -S mingw-w64-x86_64-python
pacman -S mingw-w64-x86_64-python-pip
pacman -S mingw-w64-x86_64-python-fastapi
</code></pre>
<p>and I also ran</p>
<pre><code>pip install fastapi
</code></pre>
<p>which gave the output</p>
<pre><code>imelf@FLORI-LENOVO-93 MINGW64 /c/Users/imelf/Documents/NachhilfeInfoUni/Kadala/Pybind11
$ pip install fastapi
Requirement already satisfied: fastapi in c:/software/msys64/mingw64/lib/python3.11/site-packages (0.109.0)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from fastapi) (2.5.3)
Requirement already satisfied: starlette<0.36.0,>=0.35.0 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from fastapi) (0.35.0)
Requirement already satisfied: typing-extensions>=4.8.0 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from fastapi) (4.9.0)
Requirement already satisfied: annotated-types>=0.4.0 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi) (0.6.0)
Requirement already satisfied: pydantic-core==2.14.6 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi) (2.14.6)
Requirement already satisfied: anyio<5,>=3.4.0 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from starlette<0.36.0,>=0.35.0->fastapi) (4.2.0)
Requirement already satisfied: idna>=2.8 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from anyio<5,>=3.4.0->starlette<0.36.0,>=0.35.0->fastapi) (3.6)
Requirement already satisfied: sniffio>=1.1 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from anyio<5,>=3.4.0->starlette<0.36.0,>=0.35.0->fastapi) (1.3.0)
</code></pre>
<p>Now I have this file test.py with the following content:</p>
<pre><code>from fastapi import FastAPI
meine_coole_rest_api = FastAPI()
@meine_coole_rest_api.get("/")
async def wurzel_pfad():
return {"coole_nachricht" : "Fast API works"}
</code></pre>
<p>Trying to launch it as described in <a href="https://fastapi.tiangolo.com/tutorial/" rel="nofollow noreferrer">this tutorial</a></p>
<pre><code>fastapi dev test.py
</code></pre>
<p>gives the error message</p>
<pre><code>imelf@FLORI-LENOVO-93 MINGW64 /c/Users/imelf/Documents/NachhilfeInfoUni/Kadala/Pybind11
$ fastapi dev test.py
bash: fastapi: command not found
</code></pre>
<p>Why is the command not recognized? How do I install FastAPI correctly?</p>
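<p>One quick check (a sketch, not specific to this setup) is to see where pip places console-entry scripts for this interpreter and whether a <code>fastapi</code> launcher exists there at all — if that directory is not on <code>PATH</code>, bash reports <code>command not found</code> even though the package imports fine:</p>

```python
import os
import sysconfig

# Directory where pip installs console-entry scripts for this interpreter.
scripts_dir = sysconfig.get_path("scripts")
print("scripts dir:", scripts_dir)

# A 'fastapi' CLI launcher would live here, if this FastAPI version ships one.
candidates = (
    [f for f in os.listdir(scripts_dir) if f.startswith("fastapi")]
    if os.path.isdir(scripts_dir)
    else []
)
print("fastapi launchers found:", candidates)
```

<p>Note also that, if I remember correctly, the <code>fastapi dev</code> command only appeared in later FastAPI releases (shipped via the separate <code>fastapi-cli</code> package), so the 0.109.0 shown in the pip output may not provide it at all; with older versions the app is typically launched via <code>uvicorn test:meine_coole_rest_api --reload</code> instead.</p>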
|
<python><pip><fastapi><msys2>
|
2024-05-31 19:17:44
| 2
| 546
|
flori10
|
78,561,713
| 10,319,707
|
How can I overcome AWS Glue's extreme memory limitations?
|
<p><a href="https://docs.aws.amazon.com/glue/latest/dg/add-job-python.html" rel="nofollow noreferrer">A Python Shell job cannot use more than one DPU</a>. This means that it has a limit of 16 GB of memory.</p>
<p>Earlier today, I wired what I considered to be a modest ETL task to AWS Glue with 1 DPU. It was written in Python Shell. It ran for just over 10 minutes. In theory, this task queries a Microsoft SQL Server hosted in EC2 and produces 3,000,000 rows with about 250 columns and writes it to CSV. The CSV should come to about 2.5 GB. In practice, I get</p>
<blockquote>
<p>Command Failed due to Out of Memory</p>
</blockquote>
<p>from Glue. As far as I can tell, this error does not come from SQL; It comes from 1 DPU not being enough. Batching the queries and writes to CSV fixed this problem, but I did not wish to need to do that.</p>
<p>This confuses me very deeply. I do not consider 2.5 GB of data to be an unreasonable amount to ETL. As a person with a SQL background, I eat 2.5 GB for breakfast and I was doing this very same task in SSIS years ago. The Python does not do any complex manipulation of the data. It just grabs it from SQL and writes it to new CSV files on S3.</p>
<p>This gives me my question. <strong>AWS Glue is advertised as a cloud-scale ETL tool, but my experience described above indicates that it cannot manage modest ETL tasks. What am I missing and how can these limitations be overcome?</strong></p>
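<p>The batching mentioned above is usually the right lever on a 1-DPU shell job: stream rows in fixed-size chunks so only one chunk is ever in memory. A minimal, stdlib-only sketch (SQLite stands in for the SQL Server connection, and the table/file names are made up for illustration):</p>

```python
import csv
import os
import sqlite3
import tempfile

# Stand-in source; in the real job this would be a pyodbc/pymssql connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, f"row{i}") for i in range(10)])

CHUNK = 4  # rows held in memory at any one time
path = os.path.join(tempfile.gettempdir(), "glue_chunk_demo.csv")

cur = conn.execute("SELECT a, b FROM t")
with open(path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])  # header row
    while True:
        rows = cur.fetchmany(CHUNK)  # never materializes the full result set
        if not rows:
            break
        writer.writerows(rows)
```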
|
<python><amazon-web-services><memory><etl><aws-glue>
|
2024-05-31 18:52:57
| 1
| 1,746
|
J. Mini
|
78,561,409
| 8,543,025
|
Plotly Heatmap Colorbar Displays Ticks in Incorrect Location
|
<p>I'm creating a 1-line heatmap using plotly, based on <a href="https://chart-studio.plotly.com/%7Eempet/15229/heatmap-with-a-discrete-colorscale/?_gl=1*wtj9of*_ga*MTcyMDM3MTgxLjE3MDkyODYzODQ.*_ga_6G7EE0JNSC*MTcxNzE3MDUwOC40MS4xLjE3MTcxNzA2MjQuNjAuMC4w#/" rel="nofollow noreferrer">this guide</a>. In the original code, the <code>z</code> array is 2 dimensional, and everything works fine. But if <code>z</code> is 1D the tickmarks move for some reason.<br />
Here's the original code:</p>
<pre><code>def discrete_colorscale(bvals, colors):
"""
bvals - list of values bounding intervals/ranges of interest
colors - list of rgb or hex colorcodes for values in [bvals[k], bvals[k+1]],0<=k < len(bvals)-1
returns the plotly discrete colorscale
"""
if len(bvals) != len(colors) + 1:
raise ValueError('len(boundary values) should be equal to len(colors)+1')
bvals = sorted(bvals)
nvals = [(v - bvals[0]) / (bvals[-1] - bvals[0]) for v in bvals] # normalized values
dcolorscale = [] # discrete colorscale
for k in range(len(colors)):
dcolorscale.extend([(nvals[k], colors[k]), (nvals[k + 1], colors[k])])
return dcolorscale
bvals = [2, 15, 40, 65, 90]
colors = ['#09ffff', '#19d3f3', '#e763fa' , '#ab63fa']
dcolorsc = discrete_colorscale(bvals, colors)
bvals = np.array(bvals)
tickvals = [np.mean(bvals[k:k + 2]) for k in range(len(bvals) - 1)] # position with respect to bvals where ticktext is displayed
ticktext = [f'<{bvals[1]}'] + [f'{bvals[k]}-{bvals[k+1]}' for k in range(1, len(bvals)-2)]+[f'>{bvals[-2]}']
z = np.random.randint(bvals[0], bvals[-1]+1, size=(20, 20))
heatmap = go.Heatmap(
z=z, colorscale=dcolorsc, colorbar=dict(thickness=25, tickvals=tickvals, ticktext=ticktext)
)
fig = go.Figure(data=[heatmap])
fig.update_layout(width=500, height=500)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/pzwSSQmf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzwSSQmf.png" alt="plotly 2D heatmap with discrete colormap" /></a></p>
<p>If we attempt to only show the 1st line of <code>z</code>, you'll see the ticks move up:</p>
<pre><code># ... same as above ...
heatmap = go.Heatmap(
z=[z[0]], colorscale=dcolorsc, colorbar=dict(thickness=25, tickvals=tickvals, ticktext=ticktext)
)
fig = go.Figure(data=[heatmap])
fig.update_layout(width=500, height=500)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/YFJ4VYOx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFJ4VYOx.png" alt="plotly 1D heatmap with discrete colormap" /></a></p>
|
<python><heatmap><plotly>
|
2024-05-31 17:22:18
| 0
| 593
|
Jon Nir
|
78,561,392
| 13,721,819
|
How to substitute each regex pattern with a corresponding item from a list
|
<p>I have a string that I want to do regex substitutions on:</p>
<pre><code>string = 'ptn, ptn; ptn + ptn'
</code></pre>
<p>And a list of strings:</p>
<pre><code>array = ['ptn_sub1', 'ptn_sub2', '2', '2']
</code></pre>
<p>I want to replace each appearance of the regex pattern <code>'ptn'</code> with a corresponding item from <code>array</code>.</p>
<p>Desired result:
<code>'ptn_sub1, ptn_sub2; 2 + 2'</code></p>
<p>I tried using <code>re.finditer</code> to iterate through the matches and substitute each time, but this caused the new string to get mangled since the length of the string changes with each iteration.</p>
<pre><code>import re
matches = re.finditer(r'ptn', string)
new_string = string
for i, match in enumerate(matches):
span = match.span()
new_string = new_string[:span[0]] + array[i] + new_string[span[1]:]
</code></pre>
<p>Mangled output: <code>'ptn_sptn_s2, ptn2tn + ptn'</code></p>
<p>How can I do these substitutions correctly?</p>
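<p>One common way to handle this is to let <code>re.sub</code> do the bookkeeping: a callable replacement is invoked once per match, left to right, so it can simply pull the next item from an iterator over the list (replacement results are not rescanned, so substitutions containing <code>ptn</code> are safe):</p>

```python
import re

string = 'ptn, ptn; ptn + ptn'
array = ['ptn_sub1', 'ptn_sub2', '2', '2']

replacements = iter(array)
new_string = re.sub(r'ptn', lambda match: next(replacements), string)
print(new_string)  # ptn_sub1, ptn_sub2; 2 + 2
```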
|
<python><python-re>
|
2024-05-31 17:18:15
| 3
| 612
|
Wilson
|
78,561,327
| 260,313
|
Make version.py visible from setup.py when using pyproject.toml
|
<p>I have a project with this file structure:</p>
<pre><code>project/
|- ...
|- include/
\- version.hpp
|- conanfile.py
|- pyproject.toml
|- setup.py
\- version.py
</code></pre>
<ul>
<li><code>version.hpp</code> is the only point where I keep the current version.</li>
<li><code>version.py</code> programmatically parses the version from <code>version.hpp</code>.</li>
<li>Both <code>conanfile.py</code> and <code>setup.py</code> do <code>from version import get_version</code>, and use <code>get_version()</code> to get the current version.
<br/><code>python -m pip install .</code> has been working without issues until now.</li>
<li>Now, I have added <code>pyproject.toml</code> (because I needed it to generate wheels via <code>cibuildwheel</code>), and when I do <code>pipx run cibuildwheel</code> or <code>python -m pip install .</code>, I get a <code>ModuleNotFoundError: No module named 'version'</code>.</li>
</ul>
<p>How can I make <code>version.py</code> visible from <code>setup.py</code> when using <code>pyproject.toml</code>?</p>
<p><strong>[Update]</strong></p>
<p>I was initially wondering if I had missed some basic configuration but, according to the comment and answer, this is not an obvious catch.</p>
<p>I would add this is not a proper Python project. It's a C++ project, with SWIG and some auto-generated Python code.</p>
<p>I could add a link to the GitHub project, in case you may find that useful for looking at the source code.</p>
<p>My current solution is ugly, but just works. I copy-pasted the <code>get_version()</code> code from <code>version.py</code> into <code>setup.py</code>.</p>
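<p>For completeness, the kind of parsing <code>version.py</code> does can be sketched like this (the macro name and default path are assumptions for illustration, not taken from the actual project):</p>

```python
import pathlib
import re

def get_version(header: str = "include/version.hpp") -> str:
    # Expects a line like: #define PROJECT_VERSION "1.2.3"
    text = pathlib.Path(header).read_text()
    match = re.search(r'#define\s+\w*VERSION\w*\s+"([^"]+)"', text)
    if match is None:
        raise RuntimeError(f"no version macro found in {header}")
    return match.group(1)
```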
|
<python><setup.py><pyproject.toml>
|
2024-05-31 17:01:55
| 1
| 8,209
|
rturrado
|
78,561,121
| 2,383,070
|
How to extend Polars API to work on both DataFrame and LazyFrame
|
<p>I would like to extend the Polars API as described <a href="https://docs.pola.rs/py-polars/html/reference/api.html" rel="nofollow noreferrer">in the docs</a>, with a single namespace which should work on both DataFrames and LazyFrames.</p>
<p>Pretend I have a simple DataFrame, which could just as easily be a LazyFrame...</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
a = pl.DataFrame({"col1": ["A", "B", "A", "A", "B"], "col2": [1, 2, 3, 4, 5]})
b = a.lazy()
</code></pre>
<p>I want to use a namespace to do "stuff" to both of these. For simplicity, let's just do a simple group_by sum. I could write namespaces for each separately:</p>
<pre class="lang-py prettyprint-override"><code>@pl.api.register_dataframe_namespace("stuff")
class StuffFrame:
def __init__(self, df: pl.DataFrame):
self._df = df
def do_stuff(self) -> pl.DataFrame:
df = self._df.group_by("col1").agg(pl.col("*").sum())
return df
@pl.api.register_lazyframe_namespace("stuff")
class StuffLazyFrame:
def __init__(self, ldf: pl.LazyFrame):
self._ldf = ldf
def do_stuff(self) -> pl.LazyFrame:
ldf = self._ldf.group_by("col1").agg(pl.col("*").sum())
return ldf
a1 = a.stuff.do_stuff()
b1 = b.stuff.do_stuff()
</code></pre>
<p>But then I have to maintain code in two functions. Instead, I could convert the DataFrame to a LazyFrame in the DataFrame namespace and then call the LazyFrame namespace:</p>
<pre class="lang-py prettyprint-override"><code>@pl.api.register_dataframe_namespace("stuff")
class StuffFrame:
def __init__(self, df: pl.DataFrame):
self._df = df
def do_stuff(self) -> pl.DataFrame:
df = self._df.lazy().stuff.do_stuff().collect() #<-- convert to Lazy, do stuff, then collect
return df
@pl.api.register_lazyframe_namespace("stuff")
class StuffLazyFrame:
def __init__(self, ldf: pl.LazyFrame):
self._ldf = ldf
def do_stuff(self) -> pl.LazyFrame:
ldf = self._ldf.group_by("col1").agg(pl.col("*").sum()) #<-- only maintain code in one namespace
return ldf
a2 = a.stuff.do_stuff()
b2 = b.stuff.do_stuff()
</code></pre>
<p>Is this an appropriate approach, or is there a better way to deal with this?</p>
|
<python><dataframe><python-polars>
|
2024-05-31 16:12:46
| 3
| 3,511
|
blaylockbk
|
78,561,116
| 2,382,272
|
Importing pandas before tensorflow makes the script freeze
|
<p>So I have switched from my Windows machine to a MacBook Pro with Apple M3 Pro (36 GB) running macOS Sonoma (version 14.5) due to a work requirement. I noticed something very strange, and with the small sample script below I managed to isolate the root cause of the issue.</p>
<p>When I import pandas before tensorflow / keras the script freezes. It works the other way around.</p>
<p>The script:</p>
<pre><code>import numpy as np
import os
import pandas as pd
from tensorflow.keras import layers, models
print("Creating simple model...")
try:
model = models.Sequential([
layers.Input(shape=(10,)),
layers.Dense(64, activation='relu'),
layers.Dense(1, activation='linear')
])
print("Model created successfully.")
except Exception as e:
print(f"Error creating model: {e}")
x_train = np.random.rand(100, 10)
y_train = np.random.rand(100, 1)
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
# Train the model
try:
model.fit(x_train, y_train, epochs=5, batch_size=32)
print("Model training completed successfully.")
except Exception as e:
print(f"Error during training: {e}")
</code></pre>
<p>This, when run, gives me the following output:</p>
<pre><code>Creating simple model...
2024-05-31 18:04:07.639131: I metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M3 Pro
2024-05-31 18:04:07.639149: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 36.00 GB
2024-05-31 18:04:07.639154: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 13.50 GB
2024-05-31 18:04:07.639170: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-05-31 18:04:07.639186: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
</code></pre>
<p>The script freezes at this point and has to be terminated. When I swap the order of import</p>
<pre><code>from tensorflow.keras import layers, models
import pandas as pd
</code></pre>
<p>I get the following:</p>
<pre><code>Creating simple model...
2024-05-31 18:07:18.879661: I metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M3 Pro
2024-05-31 18:07:18.879680: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 36.00 GB
2024-05-31 18:07:18.879685: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 13.50 GB
2024-05-31 18:07:18.879705: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-05-31 18:07:18.879717: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Model created successfully.
Epoch 1/5
2024-05-31 18:07:19.269585: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for device_type GPU is enabled.
4/4 ━━━━━━━━━━━━━━━━━━━━ 1s 16ms/step - loss: 0.1177
Epoch 2/5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.1078
Epoch 3/5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0932
Epoch 4/5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.1008
Epoch 5/5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0865
Model training completed successfully.
</code></pre>
<p>Note that I don't even use pandas in the script. For reference, I imported os and didn't use it anywhere in the script either, but it doesn't affect anything.</p>
<p>Here is my env package pip list:</p>
<pre><code>Package Version
---------------------------- -----------
absl-py 2.1.0
astunparse 1.6.3
Bottleneck 1.3.7
cachetools 5.3.3
certifi 2024.2.2
charset-normalizer 3.3.2
db-dtypes 1.2.0
flatbuffers 24.3.25
gast 0.5.4
google-api-core 2.19.0
google-auth 2.29.0
google-cloud-bigquery 3.23.1
google-cloud-core 2.4.1
google-crc32c 1.5.0
google-pasta 0.2.0
google-resumable-media 2.7.0
googleapis-common-protos 1.63.0
grpcio 1.64.0
grpcio-status 1.62.2
h5py 3.11.0
idna 3.7
importlib_metadata 7.1.0
joblib 1.4.2
keras 3.3.3
libclang 18.1.1
Markdown 3.6
markdown-it-py 3.0.0
MarkupSafe 2.1.5
mdurl 0.1.2
ml-dtypes 0.3.2
namex 0.0.8
numexpr 2.8.7
numpy 1.26.4
opt-einsum 3.3.0
optree 0.11.0
packaging 24.0
pandas 2.2.1
pip 24.0
proto-plus 1.23.0
protobuf 4.25.3
pyarrow 16.1.0
pyasn1 0.6.0
pyasn1_modules 0.4.0
Pygments 2.18.0
python-dateutil 2.9.0.post0
pytz 2024.1
requests 2.32.3
rich 13.7.1
rsa 4.9
scikit-learn 1.4.2
scipy 1.11.4
setuptools 69.5.1
six 1.16.0
tensorboard 2.16.2
tensorboard-data-server 0.7.2
tensorflow 2.16.1
tensorflow-io-gcs-filesystem 0.37.0
tensorflow-macos 2.16.1
tensorflow-metal 1.1.0
termcolor 2.4.0
threadpoolctl 3.5.0
tqdm 4.66.4
typing_extensions 4.12.0
tzdata 2024.1
urllib3 2.2.1
Werkzeug 3.0.3
wheel 0.43.0
wrapt 1.16.0
zipp 3.19.0
</code></pre>
<p>Suggestion from comments (@Ze'ev Ben-Tsvi)</p>
<pre><code>import numpy as np
import os
import pandas as pd
from tensorflow.keras import layers, models
print("Creating simple model...")
try:
print("Initializing Sequential model...")
model = models.Sequential()
print("Adding input layer...")
model.add(layers.Input(shape=(10,)))
print("Adding first Dense layer...")
model.add(layers.Dense(64, activation='relu'))
print("Adding output Dense layer...")
model.add(layers.Dense(1, activation='linear'))
print("Model created successfully.")
except Exception as e:
print(f"Error creating model: {e}")
x_train = np.random.rand(100, 10)
y_train = np.random.rand(100, 1)
# Compile the model
try:
print("Compiling model...")
model.compile(optimizer='adam', loss='mean_squared_error')
print("Model compiled successfully.")
except Exception as e:
print(f"Error during compilation: {e}")
# Train the model
try:
print("Training model...")
model.fit(x_train, y_train, epochs=5, batch_size=32)
print("Model training completed successfully.")
except Exception as e:
print(f"Error during training: {e}")
</code></pre>
<p>The output of this script is:</p>
<pre><code>Initializing Sequential model...
Adding input layer...
Adding first Dense layer...
Adding output Dense layer...
Model created successfully.
Compiling model...
Model compiled successfully.
Training model...
Epoch 1/5
</code></pre>
<p>It seems to get a little further in the execution when written like this. Now it doesn't get stuck at models.Sequential anymore, but at model.fit.</p>
<p>Swapping the order of import again (tensorflow then pandas) I get:</p>
<pre><code>Creating simple model...
Initializing Sequential model...
Adding input layer...
Adding first Dense layer...
Adding output Dense layer...
Model created successfully.
Compiling model...
Model compiled successfully.
Training model...
Epoch 1/5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.4620
Epoch 2/5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 636us/step - loss: 0.3263
Epoch 3/5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.2322
Epoch 4/5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 629us/step - loss: 0.1395
Epoch 5/5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 690us/step - loss: 0.1251
Model training completed successfully.
</code></pre>
<p>The main issue here is that at no point do I get an exception, not even when wrapping all imports individually in try/except blocks. Something seems to either swallow the errors, or none are thrown.</p>
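<p>Since no exception ever surfaces, one way to at least see where the process is stuck is CPython's <code>faulthandler</code> module, which can dump every thread's traceback after a timeout regardless of where the hang occurs (a sketch; the 30-second timeout is arbitrary):</p>

```python
import faulthandler

# Dump all thread stacks to stderr if the program is still running in 30 s,
# then exit; useful when a hang happens inside a C extension.
faulthandler.enable()
faulthandler.dump_traceback_later(30, exit=True)

# ... imports and model code would go here; if model.fit() hangs,
# the dump shows which call each thread is blocked in.

faulthandler.cancel_dump_traceback_later()  # cancel once past the hang
```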
|
<python><pandas><tensorflow>
|
2024-05-31 16:12:11
| 2
| 1,331
|
Ilhan
|
78,561,073
| 1,126,944
|
Is Root Logger At Logging Module?
|
<p>Although this question seems trivial, it really makes me wonder: is this a mistake in the Python documentation, or is my understanding off?</p>
<p>While reading Python's logging documentation, I came across this <a href="https://docs.python.org/3/howto/logging.html#logging-basic-tutorial" rel="nofollow noreferrer">statement</a>:</p>
<blockquote>
<p>By default, no destination is set for any logging messages. You can
specify a destination (such as console or file) by using basicConfig()
as in the tutorial examples. If you call the functions debug(),
info(), warning(), error() and critical(), they will check to see if
no destination is set; and if one is not set, they will set a
destination of the console (sys.stderr) and a default format for the
displayed message <strong>before delegating to the root logger</strong> to do the
actual message output.</p>
</blockquote>
<p>This confuses me: the debug(), info(), etc. referred to here are the module-level functions on the logging module (not methods on logging.Logger instances), and according to the <a href="https://docs.python.org/3/library/logging.html#logging.debug" rel="nofollow noreferrer">documentation</a> of logging.debug(), they are just syntactic sugar for calling Logger.debug() on the root logger. Since the quoted passage says "before delegating to the root logger", and debug() is already effectively a call on the root logger, how can it delegate to the root logger any further?</p>
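<p>One way to read the sentence is against what the module-level helpers actually do: each of them first ensures the root logger has a handler (effectively calling basicConfig()) and only then calls the corresponding method on the root logger — "delegating to the root logger" describes that second step, not a hand-off to some other logger. A small demonstration (the first print assumes a fresh interpreter with no prior logging configuration):</p>

```python
import logging

root = logging.getLogger()   # the root logger
print(root.handlers)         # [] in a fresh interpreter: no destination set yet

logging.warning("hello")     # module-level helper: configures root, then logs via it

print(root.handlers)         # now holds a StreamHandler writing to stderr
```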
|
<python><logging>
|
2024-05-31 16:01:18
| 1
| 1,330
|
IcyBrk
|
78,560,979
| 8,262,535
|
MLFlow - Is there a way to log all input parameters to a function?
|
<p>I have a function which is meant to be extendable, and would like to ensure that all input parameters keep being logged without the need to manually check/add them. Is there a way to do this automatically?</p>
<pre><code>def __init__(self, model_name: str, num_classes: int, device: str = 'cuda:0', learning_rate: float = 5e-5,
do_layer_freeze: bool = True, extra_class_layers: Optional[Union[int, list]] = None,
fine_tune_dropout_rate: float = 0):
</code></pre>
<p>For scikit learn models this happens naturally using the autologger but doesn't seem to with pytroch (Lightning). Currently, I am having to do it the manual way:</p>
<pre><code>mlflow.log_params({'model_name': model_name,
'num_classes': num_classes,
'learning_rate': learning_rate,
'do_layer_freeze': do_layer_freeze,
'extra_class_layers': extra_class_layers,
'fine_tune_dropout_rate': fine_tune_dropout_rate})
</code></pre>
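<p>One generic approach (a sketch using frame introspection, not an MLflow API) is to capture the caller's bound arguments with <code>inspect</code> at the top of <code>__init__</code> and pass that dict straight to <code>mlflow.log_params</code>, so newly added parameters are picked up automatically:</p>

```python
import inspect

def init_params(skip=("self",)):
    """Collect the calling function's named arguments as a dict."""
    frame = inspect.currentframe().f_back   # the caller's frame (an __init__)
    args, _, _, values = inspect.getargvalues(frame)
    return {name: values[name] for name in args if name not in skip}

class Model:
    def __init__(self, model_name: str, num_classes: int, learning_rate: float = 5e-5):
        # In the real code: mlflow.log_params(init_params())
        self.params = init_params()

m = Model("bert", num_classes=3)
print(m.params)  # {'model_name': 'bert', 'num_classes': 3, 'learning_rate': 5e-05}
```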
|
<python><logging><pytorch><mlflow><pytorch-lightning>
|
2024-05-31 15:36:45
| 1
| 385
|
illan
|
78,560,927
| 2,326,627
|
Import files from external submodule
|
<p>Is it possible to execute an external submodule of a project without changing the file content of the submodule itself?
Suppose I have this hierarchy.</p>
<pre><code>mydir
|
+-- mya.py
|
+-- submodule
|
+-- subpackage
|
+-- __init__.py
|
+-- suba.py
|
+-- subb.py
</code></pre>
<p><code>submodule</code> is an external git repository (cloned with <code>git submodule add <git-repo></code> for what it's worth).</p>
<p>Inside this module, <code>suba.py</code> has an import statement of the kind <code>from subpackage import subb</code>.</p>
<p>Now, inside <code>mya.py</code>, what import statement should I use to correctly import <code>submodule.subpackage.suba</code> without changing the submodule code? While I can write <code>from submodule.subpackage import suba</code>, at execution time it raises <code>ModuleNotFoundError: No module named 'subpackage'</code>.</p>
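<p>One approach that usually works without touching the submodule is to put the <em>submodule directory itself</em> on <code>sys.path</code> before importing, so that <code>subpackage</code> resolves exactly as it does inside the submodule. A self-contained sketch that builds a throwaway copy of the tree above to show the mechanism:</p>

```python
import os
import sys
import tempfile

# Build a throwaway copy of the tree from the question.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "submodule", "subpackage")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "subb.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(pkg, "suba.py"), "w") as f:
    f.write("from subpackage import subb\n")  # the submodule-style import

# The key line for mya.py: put the *submodule* directory on sys.path ...
sys.path.insert(0, os.path.join(root, "submodule"))

# ... so this import works, and suba's own 'from subpackage import subb'
# resolves as well.
from subpackage import suba
print(suba.subb.VALUE)  # 42
```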
|
<python><python-3.x><python-import>
|
2024-05-31 15:26:53
| 2
| 1,188
|
tigerjack
|
78,560,578
| 10,145,953
|
AWS Textract asynchronous operations within multiprocessing
|
<p>I am working in a Lambda function within AWS. I have two functions which asynchronously call on Textract to return the extracted text from an image. By switching to this asynchronous operation from a singular call one at a time (which must wait for the result to complete before submitting a new request), given the volume of images I need processed by Textract, I was able to reduce processing time for Textract from 8 minutes to about 3 minutes--a vast improvement.</p>
<p>But, I am looking into using <code>multiprocessing</code> to see if I can reduce the time down even further. However, it appears that <code>multiprocessing.map</code> and <code>multiprocessing.starmap</code> do not seem to work very well in AWS Lambda. I saw some recommendations for using <code>multiprocessing.Process</code> or <code>multiprocessing.Pipe</code>, but it isn't clear if that will actually make a big impact.</p>
<p>Based on my code below, will leveraging <code>multiprocessing.Process</code> or <code>multiprocessing.Pipe</code> make noticeable improvements in processing time or is it not worth the effort? If it is worth it, can anyone make any suggestions on how to actually implement this given my code? I am brand new to multiprocessing and there's a lot to wrap my head around, further complicated by trying to also implement in AWS.</p>
<pre><code>def extract_text_async(img, loc):
img_obj = Image.fromarray(img).convert('RGB')
out_img_obj = io.BytesIO()
img_obj.save(out_img_obj, format="png")
out_img_obj.seek(0)
file_name = key_id + "_" + loc + ".png"
s3.Bucket(bucket_name).put_object(Key=file_name, Body=out_img_obj, ContentType="image/png")
response = textract_client.start_document_text_detection(DocumentLocation={'S3Object':{'Bucket': bucket_name,'Name': file_name}},JobTag=key_id + loc, NotificationChannel={'SNSTopicArn': snsarn,'RoleArn': rolearn},OutputConfig={'S3Bucket': output_bucket,'S3Prefix': str(datetime.now()).replace(" ", "_") + key_id + "_" + loc + "_textract_output"})
return response['JobId']
def fetch_textract_async(jobid):
response = textract_client.get_document_text_detection(JobId=jobid,MaxResults=1000)
status = response['JobStatus']
text_len = {}
for y in range(len(response['Blocks'])):
if 'Text' in response['Blocks'][y]:
text_len[y] = len(response['Blocks'][y]['Text'])
else:
pass
if bool(text_len):
extracted_text = response['Blocks'][max(text_len, key=text_len.get)]['Text']
if extracted_text == '-':
extracted_text = ''
else:
pass
else:
extracted_text = ''
return extracted_text
# example function calls
s1_1 = extract_text_async(cropped_images['Section 1']['1'],"s1_1")
s1_2 = extract_text_async(cropped_images['Section 1']['2'],"s1_2")
s1_3 = extract_text_async(cropped_images['Section 1']['3'],"s1_3")
s1_1_result = fetch_textract_async(s1_1)
s1_2_result = fetch_textract_async(s1_2)
s1_3_result = fetch_textract_async(s1_3)
</code></pre>
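<p>Worth noting before reaching for multiprocessing: the Textract calls are network I/O-bound, so in Lambda a thread pool is usually the simpler way to overlap them — and multiprocessing.Pool/Queue are commonly reported not to work in Lambda at all because /dev/shm is unavailable there. A sketch with a stand-in for extract_text_async (names assumed):</p>

```python
from concurrent.futures import ThreadPoolExecutor

def submit_job(loc):
    # Stand-in for extract_text_async(img, loc): a network-bound boto3 call,
    # during which the GIL is released, so threads genuinely overlap.
    return f"jobid-{loc}"

locations = ["s1_1", "s1_2", "s1_3"]

with ThreadPoolExecutor(max_workers=8) as pool:
    job_ids = list(pool.map(submit_job, locations))

print(job_ids)  # ['jobid-s1_1', 'jobid-s1_2', 'jobid-s1_3']
```

<p>The fetch side could be overlapped the same way, mapping <code>fetch_textract_async</code> over the collected job IDs.</p>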
|
<python><aws-lambda><multiprocessing><amazon-textract>
|
2024-05-31 14:16:50
| 1
| 883
|
carousallie
|
78,560,565
| 2,867,168
|
Unable to select tags in BeautifulSoup via CSS Selector
|
<p>Folks! I am currently working with BeautifulSoup to try to scrape some data from a website, and I'm having some issues trying to select elements using <code>soup.select()</code>.</p>
<p>Here's a screenshot from my browser of the section of code I'm working with.</p>
<p><a href="https://i.sstatic.net/kEsvlndb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEsvlndb.png" alt="enter image description here" /></a></p>
<p>Here's the very simple code I'm using at the moment to scrape data, the idea is that I am trying to select all of the <code><a href></code> elements from within the <code><div></code> with <code>id=lst_hdr_bm</code>:</p>
<pre class="lang-py prettyprint-override"><code>import urllib.request
from bs4 import BeautifulSoup
# Grab website source, make soup?
html = urllib.request.urlopen('https://infinitediscs.com')
soup = BeautifulSoup(html, 'html.parser')
tags = soup.select('#lst_hdr_bm > ul > li > a')
print(tags)
</code></pre>
<p>When I run this query in my browser (testing the CSS selector via document.querySelectorAll), it returns 82 elements, which is to be expected. When I run this via BeautifulSoup in Python, nothing is returned.</p>
<p>What could be causing this problem? Is there perhaps some default depth limit in the default html parser? I am confused.</p>
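<p>As a sanity check that the selector itself is fine, it can be run against a static snippet shaped like the screenshot — if this works but the live page returns nothing, the list is almost certainly injected by JavaScript after page load, which urllib never executes:</p>

```python
from bs4 import BeautifulSoup

html = """
<div id="lst_hdr_bm">
  <ul>
    <li><a href="/a">Brand A</a></li>
    <li><a href="/b">Brand B</a></li>
  </ul>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
tags = soup.select("#lst_hdr_bm > ul > li > a")
print([t["href"] for t in tags])  # ['/a', '/b']
```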
|
<python><beautifulsoup>
|
2024-05-31 14:14:31
| 1
| 1,253
|
MisutoWolf
|
78,560,561
| 18,949,720
|
Streamlit-folium no data return on click
|
<p>Using streamlit_folium and geopandas explore() to display a choropleth map drawn with this code:</p>
<pre><code>m = my_shapefile.explore(
column = 'risk',
popup = ['ID_PARCEL', 'Risk of infection:'],
tooltip = 'Risk of infection:',
)
st_data = st_folium(m, width = 800, height = 500)
st.write(st_data)
</code></pre>
<p>It seems that st_data never gets updated when clicking on elements of the map:</p>
<p><a href="https://i.sstatic.net/9iN9akKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9iN9akKN.png" alt="enter image description here" /></a></p>
<p>I tried repeating st.write(st_data) every second to see if there was any update that was not being displayed, but I got the same result whether I clicked on polygons or not.</p>
|
<python><gis><streamlit><folium><choropleth>
|
2024-05-31 14:13:24
| 0
| 358
|
Droidux
|
78,560,536
| 14,073,111
|
Create a WEF simulator in python using kerberos authentication
|
<p>I have a real WEF/WEC setup with three machines: a Windows Server, a Windows 10 machine (WEF), and a Windows 10 machine (WEC). It works just fine like this... But I want to build a WEF simulator in Python using Kerberos authentication.</p>
<p>I installed Wireshark on my WEF, just to see what it is actually sending to the WEC, and it seems it always sends two requests.
The first one authenticates using a Kerberos token (note the HTTP request 1/2 marker):</p>
<pre><code>Hypertext Transfer Protocol
POST /wsman HTTP/1.1\r\n
Connection: Keep-Alive\r\n
Content-Type: application/soap+xml;charset=UTF-16\r\n
[truncated]Authorization: Kerberos <kerbeors token>
GSS-API Generic Security Service Application Program Interface
User-Agent: Microsoft WinRM Client\r\n
Content-Length: 0\r\n
[Content length: 0]
Host: wec.com:8080\r\n
\r\n
[Full request URI: http://wec.com:8080/wsman]
[HTTP request 1/2]
[Next request in frame: 24]
</code></pre>
<p>And the second one is actually the one contains the XML data (see HTTP request 2/2):</p>
<pre><code>Hypertext Transfer Protocol
POST /wsman HTTP/1.1\r\n
Connection: Keep-Alive\r\n
Content-Type: multipart/encrypted;protocol="application/HTTP-Kerberos-session-encrypted";boundary="Encrypted Boundary"\r\n
User-Agent: Microsoft WinRM Client\r\n
Content-Length: 3472\r\n
[Content length: 3472]
Host: wec.com:8080\r\n
\r\n
[Full request URI: http://wec.com:8080/wsman]
[HTTP request 2/2]
[Prev request in frame: 22]
File Data: 3472 bytes
</code></pre>
<p>Sample xml data:</p>
<pre><code><s:Envelope
xmlns:s="http://www.w3.org/2003/05/soap-envelope"
xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
xmlns:e="http://schemas.xmlsoap.org/ws/2004/08/eventing"
xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd"
xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wsman.xsd">
<s:Header>
<a:To>http://wec.com:8080/wsman</a:To>
<m:MachineID
xmlns:m="http://schemas.microsoft.com/wbem/wsman/1/machineid" s:mustUnderstand="false">wef.wec.com
</m:MachineID>
<a:ReplyTo>
<a:Address s:mustUnderstand="true">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
</a:ReplyTo>
<a:Action s:mustUnderstand="true">http://schemas.dmtf.org/wbem/wsman/1/wsman/Events</a:Action>
<w:MaxEnvelopeSize s:mustUnderstand="true">512000</w:MaxEnvelopeSize>
<a:MessageID>uuid:87AF6068-CB36-4B7C-8C4B-038D839CF904</a:MessageID>
<w:Locale xml:lang="en-US" s:mustUnderstand="false" />
<p:DataLocale xml:lang="en-US" s:mustUnderstand="false" />
<p:SessionId s:mustUnderstand="false">uuid:1707AC9F-3D93-4B88-825C-B75403C72C7C</p:SessionId>
<p:OperationID s:mustUnderstand="false">uuid:FA29F086-B795-4F4A-B825-8628BFB8F157</p:OperationID>
<p:SequenceId s:mustUnderstand="false">1</p:SequenceId>
<w:OperationTimeout>PT60.000S</w:OperationTimeout>
<e:Identifier
xmlns:e="http://schemas.xmlsoap.org/ws/2004/08/eventing">uuid:8e6f0ea4-1f4e-11ef-801a-616c6d617765
</e:Identifier>
<w:Bookmark>
<BookmarkList>
<Bookmark Channel="Application" RecordId="17256411" IsCurrent="true"/>
<Bookmark Channel="Security" RecordId="1"/>
<Bookmark Channel="System" RecordId="2470"/>
</BookmarkList>
</w:Bookmark>
<w:AckRequested/>
</s:Header>
<s:Body>
<w:Events>
<w:Event Action="http://schemas.dmtf.org/wbem/wsman/1/wsman/Event">
<Event
xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>
<System>
<Provider Name='Microsoft-Windows-Security-SPP' Guid='{E23B33B0-C8C9-472C-A5F9-F2BDFEA0F156}' EventSourceName='Software Protection Platform Service'/>
<EventID Qualifiers='16384'>16384</EventID>
<Version>0</Version>
<Level>4</Level>
<Task>0</Task>
<Opcode>0</Opcode>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime='2024-05-31T12:23:09.8605030Z'/>
<EventRecordID>17256409</EventRecordID>
<Correlation/>
<Execution ProcessID='0' ThreadID='0'/>
<Channel>Application</Channel>
<Computer>wef.wec.com</Computer>
<Security/>
</System>
<EventData>
<Data>2024-06-14T16:57:09Z</Data>
<Data>RulesEngine</Data>
</EventData>
<RenderingInfo Culture='en-US'>
<Message>Successfully scheduled Software Protection service for re-start at 2024-06-14T16:57:09Z. Reason: RulesEngine.</Message>
<Level>Information</Level>
<Task></Task>
<Opcode></Opcode>
<Channel></Channel>
<Provider>Microsoft-Windows-Security-SPP</Provider>
<Keywords>
<Keyword>Classic</Keyword>
</Keywords>
</RenderingInfo>
</Event>
</w:Event>
</w:Events>
</s:Body>
</s:Envelope>
</code></pre>
<p>How can I send this in Python? Sending these one by one didn't work. This is the last piece of code I tried, but WEC is not accepting it:</p>
<pre><code>import requests
import kerberos
from requests_toolbelt.multipart.encoder import MultipartEncoder
def get_kerberos_token(service):
__, krb_context = kerberos.authGSSClientInit(service)
kerberos.authGSSClientStep(krb_context, "")
negotiate_details = kerberos.authGSSClientResponse(krb_context)
return negotiate_details
multipart_data = MultipartEncoder(
fields={
'part1': ('', '', 'application/HTTP-Kerberos-session-encrypted'),
'part2': ('', xml_content, 'application/octet-stream')
},
boundary='Encrypted Boundary'
)
headers = {
'Connection': 'Keep-Alive',
'Content-Type': multipart_data.content_type,
'User-Agent': 'Microsoft WinRM Client',
'Host': '127.0.0.1:8080',
'Authorization': f'Kerberos {get_kerberos_token("HTTP@127.0.0.1")}'  # service principal assumed from the Host header
}
response = requests.post(
url='http://127.0.0.1:8080/wsman',
data=multipart_data,
headers=headers
)
</code></pre>
|
<python><http><python-requests><http-post><multipartform-data>
|
2024-05-31 14:08:30
| 1
| 631
|
user14073111
|
78,560,448
| 3,583,669
|
Chainlit Stream responses from Groq & Langchain
|
<p>my Chainlit AI chat application uses Groq, OpenAI embeddings, LangChain and Chromadb, and it allows the user to upload a PDF and interact with it. It works fine, but it spits out the whole response.</p>
<p>I'd like it to stream the responses instead. How can I achieve this?</p>
<p>Here's the full code below:</p>
<pre><code>import PyPDF2
from langchain_openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain_community.chat_message_histories import ChatMessageHistory
import chainlit as cl
from langchain_groq import ChatGroq
from dotenv import load_dotenv
import os
# Loading environment variables from .env file
load_dotenv()
# Function to initialize conversation chain with GROQ language model
groq_api_key = os.environ['GROQ_API_KEY']
# Initializing GROQ chat with provided API key, model name, and settings
llm_groq = ChatGroq(groq_api_key=groq_api_key, model_name="llama3-70b-8192", temperature=0.2)
@cl.on_chat_start
async def on_chat_start():
files = None #Initialize variable to store uploaded files
# Wait for the user to upload files
while files is None:
files = await cl.AskFileMessage(
content="Please upload one or more pdf files to begin!",
accept=["application/pdf"],
max_size_mb=100,# Optionally limit the file size,
max_files=10,
timeout=180, # Set a timeout for user response,
).send()
msg = cl.Message(content=f"Processing `{files[0].name}`...", disable_feedback=True)
await msg.send()
# Process each uploaded file
texts = []
metadatas = []
for file in files:
print(file) # Print the file object for debugging
# Read the PDF file
pdf = PyPDF2.PdfReader(file.path)
pdf_text = ""
for page in pdf.pages:
pdf_text += page.extract_text()
# Split the text into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1200, chunk_overlap=50)
file_texts = text_splitter.split_text(pdf_text)
texts.extend(file_texts)
# Create a metadata for each chunk
file_metadatas = [{"source": f"{i}-{file.name}"} for i in range(len(file_texts))]
metadatas.extend(file_metadatas)
# Create a Chroma vector store
embeddings = OpenAIEmbeddings(); #OllamaEmbeddings(model="nomic-embed-text")
docsearch = await cl.make_async(Chroma.from_texts)(
texts, embeddings, metadatas=metadatas
)
# Initialize message history for conversation
message_history = ChatMessageHistory()
# Memory for conversational context
memory = ConversationBufferMemory(
memory_key="chat_history",
output_key="answer",
chat_memory=message_history,
return_messages=True,
)
# Create a chain that uses the Chroma vector store
chain = ConversationalRetrievalChain.from_llm(
llm=llm_groq,
chain_type="stuff",
retriever=docsearch.as_retriever(),
memory=memory,
return_source_documents=True,
)
# Sending an image with the number of files
elements = [
cl.Image(name="image", display="inline", path="pic.jpg")
]
# Inform the user that processing has ended.You can now chat.
msg = cl.Message(content=f"Processing {len(files)} files done. You can now ask questions!")
await msg.send()
#store the chain in user session
cl.user_session.set("chain", chain)
@cl.on_message
async def main(message: cl.Message):
# Retrieve the chain from user session
chain = cl.user_session.get("chain")
#call backs happens asynchronously/parallel
cb = cl.AsyncLangchainCallbackHandler()
# call the chain with user's message content
res = await chain.ainvoke(message.content, callbacks=[cb])
answer = res["answer"]
source_documents = res["source_documents"]
text_elements = [] # Initialize list to store text elements
await cl.Message(content=answer, elements=text_elements).send()
</code></pre>
<p>I'd like to update it so that the response gets streamed, rather than blurting it all out at once.</p>
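<p>A hedged sketch of the token-streaming pattern: in the real app this would presumably use <code>cl.Message(content="")</code> with its <code>stream_token</code> coroutine fed from the chain's async stream; the classes below are stand-ins so the asynchronous flow can be shown self-contained:</p>

```python
import asyncio

async def fake_llm_stream(text):
    # stand-in for the chain's async token stream (e.g. chain.astream(...))
    for token in text.split():
        await asyncio.sleep(0)      # simulate network latency
        yield token + " "

class Message:
    # stand-in for cl.Message: accumulate tokens, then finalize once
    def __init__(self):
        self.content = ""
        self.sent = False

    async def stream_token(self, token):
        self.content += token       # in Chainlit this would update the UI live

    async def send(self):
        self.sent = True

async def main():
    msg = Message()
    async for token in fake_llm_stream("streamed answer from the chain"):
        await msg.stream_token(token)
    await msg.send()                # finalize after the stream is exhausted
    return msg

msg = asyncio.run(main())
print(msg.content.strip())
```

<p>The same shape applies in the <code>@cl.on_message</code> handler: replace <code>fake_llm_stream</code> with the chain's streaming call and <code>Message</code> with <code>cl.Message</code> (both of those target names are assumptions about the Chainlit/LangChain APIs, not taken from the question's code).</p>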
|
<python><chatbot><langchain><large-language-model><chromadb>
|
2024-05-31 13:50:03
| 1
| 313
|
Obi
|
78,560,356
| 5,547,553
|
How to update fields with previous fields value in polars?
|
<p>I have this dataframe:</p>
<pre><code>import polars as pl
df = pl.DataFrame({
'file':['a','a','a','a','b','b'],
'ru':['fe','fe','ev','ev','ba','br'],
'rt':[0,0,1,1,1,0],
})
</code></pre>
<pre><code>shape: (6, 3)
┌──────┬─────┬─────┐
│ file ┆ ru ┆ rt │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞══════╪═════╪═════╡
│ a ┆ fe ┆ 0 │
│ a ┆ fe ┆ 0 │
│ a ┆ ev ┆ 1 │
│ a ┆ ev ┆ 1 │
│ b ┆ ba ┆ 1 │
│ b ┆ br ┆ 0 │
└──────┴─────┴─────┘
</code></pre>
<p>I'd like to replace the values in <code>"ru"</code> and <code>"rt"</code> within the same group defined by <code>"file"</code> with the values of the first row in the group <em>if the first <code>"rt"</code> value is 0</em>.</p>
<p>The desired output would look as follows.</p>
<pre><code>shape: (6, 3)
┌──────┬─────┬─────┐
│ file ┆ ru ┆ rt │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞══════╪═════╪═════╡
│ a ┆ fe ┆ 0 │
│ a ┆ fe ┆ 0 │
│ a ┆ fe ┆ 0 │
│ a ┆ fe ┆ 0 │
│ b ┆ ba ┆ 1 │
│ b ┆ br ┆ 0 │
└──────┴─────┴─────┘
</code></pre>
<p>How can I achieve that?</p>
|
<python><dataframe><python-polars>
|
2024-05-31 13:33:41
| 1
| 1,174
|
lmocsi
|
78,559,814
| 5,043,301
|
Form Validation message for Password Confirmation Field
|
<p>I have below code in <strong>forms.py</strong> file.</p>
<pre><code>from django.contrib.auth.forms import UserCreationForm
from .models import User
class CustomUserCreationForm(UserCreationForm):
class Meta:
model = User
fields = ('first_name','last_name','email','date_of_birth', 'gender', 'user_type', 'phone', 'address','photo', 'password' )
error_messages = {
field: {
'required': f"{field.replace('_', ' ').title()} is required."
}
for field in fields
}
</code></pre>
<p>I would like to add <code>'password_confirmation'</code> field in <strong>fields</strong> tuple.</p>
<p>I would like to use Form validation message like below in my HTML template.</p>
<pre><code><label class="form-label">PassWord Confirmation</label>
<input type="password" class="form-control" name="password2" />
{% if form.confirm_password.errors %}
<div class="alert alert-danger">{{ form.confirm_password.errors }}</div>
{% endif %}
</code></pre>
<p>But I am getting errors <code>django.core.exceptions.FieldError: Unknown field(s) () specified for User</code>.</p>
|
<python><django>
|
2024-05-31 11:33:56
| 1
| 7,102
|
abu abu
|
78,559,671
| 3,104,974
|
No fields matching the criteria 'None' were found in the dataset
|
<p>I'm trying to load a spark dataframe via <a href="https://petastorm.readthedocs.io/en/latest/api.html" rel="nofollow noreferrer">petastorm 0.12</a> following the tutorial given in the <a href="https://learn.microsoft.com/en-us/azure/databricks/_extras/notebooks/source/deep-learning/petastorm-spark-converter-tensorflow.html" rel="nofollow noreferrer">petastorm-spark-converter-tensorflow</a> notebook. Essentially my code is the following. The error described in the title is raised in the <code>with</code> statement. (It doesn't happen when directly creating a TFDatasetContextManager via <code>train_context_manager = converter_train.make_tf_dataset(BATCH_SIZE)</code>, though.)</p>
<pre><code>from petastorm import TransformSpec
from petastorm.spark import SparkDatasetConverter, make_spark_converter
spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF, "file:///dbfs/tmp/petastorm/cache")
converter_train = make_spark_converter(DF_TRAIN)
with converter_train.make_tf_dataset(BATCH_SIZE) as X_train:
pass
</code></pre>
<p>The dataset definitely isn't empty. I also tried to apply a TransformSpec explicitly selecting my target columns</p>
<pre><code>with converter_train.make_tf_dataset(
BATCH_SIZE,
transform_spec=TransformSpec(selected_fields=[TRAIN_COL])
) as X_train:
</code></pre>
<p>Btw the same happens with <code>converter_train.make_torch_dataloader</code></p>
|
<python><pyspark><petastorm>
|
2024-05-31 10:56:59
| 1
| 6,315
|
ascripter
|
78,559,624
| 5,678,057
|
Classification for multi row observation: Long format to Wide format always efficient?
|
<p>I have a table of observations, or rather 'grouped' observations, where each group represents a deal, and each row representing a product. But the prediction is to be done at a Deal level. Below is the sample dataset.</p>
<p><strong>Sample Dataset :</strong></p>
<pre><code>df = pd.DataFrame({'deal': ['deal1', 'deal1', 'deal2', 'deal2', 'deal3', 'deal3'],
'product': ['prd_1', 'prd_2', 'prd_1', 'prd_2', 'prd_1', 'prd_2'],
'Quantity': [2, 1, 5, 3, 6, 7],
'Total Price': [10, 7, 25, 24, 30, 56],
'Result': ['Won', 'Won', 'Lost','Lost', 'Won', 'Won']})
</code></pre>
<p><strong>My Approach:</strong>
Flatten the data to get one observation per row using <code>pivot_table</code>, so that we get one row per Deal, and then proceed with the classification modelling, probably a logistic regression or gradient boosting.</p>
<p>But in the above case we had:</p>
<ul>
<li>1 column (<code>product</code>, with 2 unique values) to be pivoted</li>
<li>2 measures (Quantity and Total Price) as the series/values,</li>
</ul>
<p>resulting in 4 columns. The Wide format table is shown below:</p>
<p><a href="https://i.sstatic.net/Yd4y0Xx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yd4y0Xx7.png" alt="Table" /></a></p>
<p><strong>Question/Problem/Thought:</strong></p>
<p>Is this always the best way in cases like these? The problem I see (or maybe it isn't one?) is that when more than one column has to be pivoted, and those columns have many unique value combinations, the table can get very, very wide!</p>
<p>I would be grateful to hear alternative efficient ways to prepare the dataset to train, if any!</p>
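<p>For reference, the flatten-to-wide step itself is compact with <code>pivot_table</code>; a sketch on the sample frame above (the flattened column-name convention is one possible choice, not prescribed by the question):</p>

```python
import pandas as pd

df = pd.DataFrame({'deal': ['deal1', 'deal1', 'deal2', 'deal2', 'deal3', 'deal3'],
                   'product': ['prd_1', 'prd_2', 'prd_1', 'prd_2', 'prd_1', 'prd_2'],
                   'Quantity': [2, 1, 5, 3, 6, 7],
                   'Total Price': [10, 7, 25, 24, 30, 56],
                   'Result': ['Won', 'Won', 'Lost', 'Lost', 'Won', 'Won']})

# one row per deal; columns become a (measure, product) MultiIndex
wide = df.pivot_table(index='deal', columns='product',
                      values=['Quantity', 'Total Price'], aggfunc='sum')

# flatten the MultiIndex into single column names like "Quantity_prd_1"
wide.columns = [f'{measure}_{product}' for measure, product in wide.columns]

# the target label is constant within a deal, so 'first' recovers it
wide = wide.join(df.groupby('deal')['Result'].first()).reset_index()
print(wide)
```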
|
<python><classification><feature-selection><data-preprocessing>
|
2024-05-31 10:43:21
| 3
| 389
|
Salih
|
78,559,390
| 2,636,044
|
Pydantic v2 fail early
|
<p>With the following example, I'd expect <code>Root</code> to fail early (right after <code>sub_model_a</code> fails validation); instead, it still runs the validator for <code>sub_model_b</code>. I've tried looking at the documentation but can't seem to find a flag for this; I think in pydantic v1 this would fail early?</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, field_validator
from typing import Literal
class SubModelA(BaseModel):
string: Literal['a', 'b', 'c']
class SubModelB(BaseModel):
integer: int
class Root(BaseModel):
sub_model_a: SubModelA
sub_model_b: SubModelB
@field_validator('sub_model_b')
@classmethod
def validate_sub_model_b(cls, value, info):
validated_a = info.data['sub_model_a']
if __name__ == "__main__":
x = {
'sub_model_a': {
'string': 'invalid'
},
'sub_model_b': {
'integer': 5
}
}
y = Root(**x)
</code></pre>
<p>If this is debugged, you can see we reach the validator, whereas if you try to instantiate <code>SubModelA</code> with the same value <code>invalid</code>, it would throw an expected error</p>
|
<python><python-3.x><pydantic><pydantic-v2>
|
2024-05-31 09:53:25
| 1
| 1,339
|
Onilol
|
78,559,388
| 949,251
|
Warning: Gradients do not exist for variables
|
<p>I recently came across a warning in Tensorflow that caused some head-scratching and took a while to fix. Since I didn't find a solution online, I wanted to share.</p>
<p>I am building a transformer (encoder-decoder) architecture. But my training results are really bad. The transformer always gives the same answer no matter the input, although the training accuracy looks very good (above 0.95). On top of that, I get this warning:</p>
<p><code>WARNING:tensorflow:Gradients do not exist for variables ['embedding/embeddings:0'] when minimizing the loss. If you're using 'model.compile()', did you forget to provide a 'loss' argument?</code></p>
<p>Both the encoder and decoder have</p>
<ul>
<li>a token embedding realized through a <code>keras.Embedding</code> layer</li>
<li>a positional embedding, realized through a
<code>keras_nlp.PositionEmbedding</code> layer.</li>
</ul>
<p>Here is the encoder code:</p>
<pre class="lang-py prettyprint-override"><code>encoder_inputs = Input(shape=(encoder_inputs_size,), name="encoder_inputs")
token_embeddings = Embedding(input_dim=vocabulary_size, output_dim=embedding_dim) (encoder_inputs)
position_embeddings = PositionEmbedding(sequence_length=encoder_inputs_size)(token_embeddings)
encoder_outputs = TransformerEncoder(intermediate_dim=intermediate_dim, num_heads=num_heads)(inputs=position_embeddings)
encoder = Model(encoder_inputs, encoder_outputs, name="encoder")
</code></pre>
<p>There is <code>keras_nlp.TokenAndPositionEmbedding</code> that combines two embeddings into a single layer and using it makes the problem disappear. But since I want to use other forms of embedding, like patch embedding for image processing, I can't use this combined layer.</p>
|
<python><tensorflow><keras><transformer-model>
|
2024-05-31 09:53:15
| 1
| 831
|
Cerno
|
78,559,167
| 2,276,054
|
OR-Tools crashing Python 3.12 in Windows work environments only; perhaps MSVCP140.dll-related?
|
<p>I've created the following simple piece of Python code using OR-Tools:</p>
<pre><code>from ortools.sat.python import cp_model
print("1/4")
model = cp_model.CpModel()
print("2/4")
solver = cp_model.CpSolver()
print("3/4")
solver.solve(model) # <------------ crashes here!
print("4/4")
</code></pre>
<p>When I run it on my home computer, it obviously works fine... but when I run it on 2 different environments at work, it crashes on <code>solver.solve()</code> and never prints out <code>4/4</code>.</p>
<p>At home I have <strong>Windows 10 Home</strong> with admin rights. At work I have <strong>Windows Server 2016 Standard</strong> and <strong>Windows 10 Enterprise</strong>, no admin rights.</p>
<p>All 3 machines have exactly the same Python version:</p>
<pre><code>Python 3.12.1 (tags/v3.12.1:2305ca5, Dec 7 2023, 22:03:25) [MSC v.1937 64 bit (AMD64)] on win32
</code></pre>
<p>When I look into the Windows Event Log viewer, I see the following:</p>
<pre><code>Faulting application name: python.exe, version: 3.12.1150.1013, time stamp: 0x65724223
Faulting module name: MSVCP140.dll, version: 14.36.32532.0, time stamp: 0x04a30cf0
Exception code: 0xc0000005
Fault offset: 0x0000000000012f58
Faulting application path: C:\Users\.......\Python312\python.exe
Faulting module path: C:\Windows\SYSTEM32\MSVCP140.dll
</code></pre>
<p>Any idea what might be the reason for the crash? Maybe some corporate antivirus process fiddling with Python execution? Should I maybe try a different (latest?) Python version?</p>
<p>Some other external Python libraries I've tried (e.g. <code>pandas</code>, <code>oracledb</code>) seem to be working fine everywhere for me.</p>
|
<python><or-tools><msvcrt><cp-sat>
|
2024-05-31 09:17:21
| 1
| 681
|
Leszek Pachura
|
78,559,149
| 6,401,403
|
Exclude rows from MySQL table where timestamp is less than in other row
|
<p>I have a MySQL table having "datetime" columns <code>begintime</code> and <code>endtime</code>:</p>
<pre><code>+---------------------+---------------------+
| begintime | endtime |
+---------------------+---------------------+
| 2024-05-22 10:13:23 | 2024-05-31 13:37:34 |
| 2024-05-30 17:03:21 | 2024-05-31 16:01:25 |
| 2024-05-30 17:03:21 | 2024-05-31 16:01:25 |
| 2024-05-30 17:03:21 | 2024-05-31 16:01:25 |
| 2024-05-31 15:00:00 | 2024-05-31 15:00:03 |
| 2024-05-31 15:01:32 | 2024-05-31 16:01:26 |
+---------------------+---------------------+
</code></pre>
<p>The table also contains rows whose <code>begintime</code> matches another row's but whose <code>endtime</code> is earlier than that row's. For example:</p>
<pre><code>| 2024-05-22 10:13:23 | 2024-05-31 12:02:18 |
</code></pre>
<p>Here <code>begintime</code> is the same as in the first row and <code>endtime</code> is less than in that row.</p>
<p>How can I filter these rows out of the table using <code>MySQL</code> or maybe Python's <code>pandas</code>?</p>
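<p>In pandas the filter can be expressed as a groupby-transform: keep only rows whose <code>endtime</code> equals the maximum <code>endtime</code> for that <code>begintime</code>. A sketch on a small made-up sample (the analogous MySQL 8 query would compare against <code>MAX(endtime) OVER (PARTITION BY begintime)</code>, assuming "dominated" rows are the ones to drop):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'begintime': ['2024-05-22 10:13:23', '2024-05-22 10:13:23',
                  '2024-05-30 17:03:21', '2024-05-31 15:00:00'],
    'endtime':   ['2024-05-31 13:37:34', '2024-05-31 12:02:18',
                  '2024-05-31 16:01:25', '2024-05-31 15:00:03'],
})
df[['begintime', 'endtime']] = df[['begintime', 'endtime']].apply(pd.to_datetime)

# a row is "dominated" if another row shares its begintime with a later endtime
keep = df['endtime'] == df.groupby('begintime')['endtime'].transform('max')
filtered = df[keep].reset_index(drop=True)
print(filtered)
```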
|
<python><mysql><pandas>
|
2024-05-31 09:13:18
| 2
| 5,345
|
Michael
|
78,559,061
| 14,978,092
|
How to implement class weight sampling in multi label classification?
|
<p>I am working on a multi label classification problem and need some guidance on computing class weights using Scikit-Learn.</p>
<p><strong>Problem Context:</strong></p>
<p>I have a dataset with 9973 training samples.The labels are one-hot encoded, representing 13 different classes.The shape of my training labels is (9973, 13).</p>
<p>I want to use this code:</p>
<pre><code>import numpy as np
from sklearn.utils.class_weight import compute_class_weight
y_integers = np.argmax(y, axis=1)
class_weights = compute_class_weight('balanced', np.unique(y_integers), y_integers)
d_class_weights = dict(enumerate(class_weights))
</code></pre>
<p>This does not work; it says "too many positional arguments". My training samples look like this:</p>
<pre><code> [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
</code></pre>
<p>How can I implement this for a multi-label classification problem so that the imbalance in my dataset is handled?</p>
<p>Edit 1: It is working fine now. Do you think it works for multi-label? I read somewhere that you must use sample weights instead of class weights. How can I implement that?</p>
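<p>The "too many positional arguments" error comes from newer scikit-learn versions making <code>classes</code> and <code>y</code> keyword-only; a sketch of the keyword form on a tiny one-hot sample standing in for the (9973, 13) label matrix:</p>

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# tiny single-label one-hot sample (one 1 per row)
y = np.array([[0, 0, 1],
              [0, 1, 0],
              [0, 1, 0],
              [1, 0, 0]])
y_integers = np.argmax(y, axis=1)

# keyword arguments are required in recent scikit-learn releases
class_weights = compute_class_weight(class_weight='balanced',
                                     classes=np.unique(y_integers),
                                     y=y_integers)
d_class_weights = dict(enumerate(class_weights))
print(d_class_weights)
```

<p>Note that <code>argmax</code> is only meaningful when each row has exactly one 1; for rows with multiple active labels (like <code>[0, 0, 1, 1, 0, ...]</code> in the question's sample), the usual workaround is a per-sample weight, e.g. combining the class weights of that row's active labels.</p>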
|
<python><machine-learning><scikit-learn>
|
2024-05-31 08:57:17
| 1
| 590
|
Hamza
|
78,558,300
| 4,732,111
|
How to access Polars Dataframe containing Struct Type column fields using SQLContext?
|
<p>I'm trying to execute SQL query on Polars dataframes using SQLContext and below is my code:</p>
<pre><code> ctx = pl.SQLContext().register_many(
{"tbl1": df_source, "tbl2": df_reference})
src_df = ctx.execute(pl_sql_query, eager=True)
</code></pre>
<p>Here the schema of <strong>df_source</strong> contains a column named <em>json_message</em> of type <strong>Struct</strong> with Key Value pairs i.e.,</p>
<pre><code>('json_message', Struct({'id': Int64, 'name': String, 'age': Int64, 'dob': String}))
</code></pre>
<p>My sql query to access the struct field is:</p>
<pre><code>pl_sql_query =
"select json_message.id as id, json_message.name as name
from tbl1
where json_message.id in (select id from tbl2)"
</code></pre>
<p>When I execute this query, I'm getting the exception <em><strong>no table or alias named 'json_message' found</strong></em>.</p>
<p>Not sure how exactly we need to access the struct field value. Tried <em><strong>struct.with_fields</strong></em> but not able to access the value.</p>
<p>Can someone please help me on this?</p>
|
<python><dataframe><python-polars>
|
2024-05-31 05:49:58
| 2
| 363
|
Balaji Venkatachalam
|
78,558,275
| 7,360,872
|
How to add SQLTableSchema into chromadb llama-index?
|
<p>In the code below I need to add additional data into <code>chromadb</code>. How could I do that?</p>
<pre><code>from llama_index.core import SQLDatabase
from llama_index.core.objects import (
SQLTableNodeMapping,
ObjectIndex,
SQLTableSchema,
)
# chroma db
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.core import StorageContext, VectorStoreIndex
sql_database = SQLDatabase(engine)
table_node_mapping = SQLTableNodeMapping(sql_database)
table_schema_objs = [
SQLTableSchema(table_name=t.table_name, context_str=t.table_summary +"\n Given below are the column Details:\n" +
"\n".join([f"Column name: {col.column_name} with description: {col.column_description}" for col in t.columns]))
for t in table_infos
] # add a SQLTableSchema for each table
print("Creating chromadb", chroma_path)
# create and save chroma
db = chromadb.PersistentClient(path=chroma_path)
chroma_collection = db.get_or_create_collection("table_schema")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
object_index = ObjectIndex.from_objects(
table_schema_objs,
table_node_mapping,
storage_context=storage_context)
obj_retriever = object_index.as_retriever(similarity_top_k=3)
</code></pre>
<p>I've gone through this article and done what it says but it is not working.</p>
<p><a href="https://www.datacamp.com/tutorial/chromadb-tutorial-step-by-step-guide" rel="nofollow noreferrer">https://www.datacamp.com/tutorial/chromadb-tutorial-step-by-step-guide</a></p>
<pre><code>collection.add(
documents = [student_info, club_info, university_info],
metadatas = [{"source": "student info"},{"source": "club info"},{'source':'university info'}],
ids = ["id1", "id2", "id3"]
)
</code></pre>
|
<python><llama-index><chromadb>
|
2024-05-31 05:42:03
| 0
| 928
|
Abhijith M
|
78,557,911
| 3,453,776
|
Trying to use watchdog/watchmedo on a Python/gRPC service: changes detected, code doesn't refresh
|
<p>I'm building a gRPC Python app and tried to use <code>watchdog</code> with the <code>watchmedo</code> extension to listen for code changes and reload, like it is described <a href="https://stackoverflow.com/questions/64504406/how-to-hot-reload-grpc-server-in-python">in this question</a>.</p>
<p>When the server is loaded and I make a change on some file, I can see an exception trace in the docker logs (I added a <code>print('Starting server...')</code> line for guidance):</p>
<pre class="lang-py prettyprint-override"><code>Starting server...
Traceback (most recent call last):
File "/code/main.py", line 4, in <module>
serve()
File "/code/app/server.py", line 20, in serve
server.wait_for_termination()
File "/code/.venv/lib/python3.12/site-packages/grpc/_server.py", line 1485, in wait_for_termination
return _common.wait(
^^^^^^^^^^^^^
File "/code/.venv/lib/python3.12/site-packages/grpc/_common.py", line 156, in wait
_wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb)
File "/code/.venv/lib/python3.12/site-packages/grpc/_common.py", line 116, in _wait_once
wait_fn(timeout=timeout)
File "/usr/local/lib/python3.12/threading.py", line 655, in wait
signaled = self._cond.wait(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/threading.py", line 359, in wait
gotit = waiter.acquire(True, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt
Starting server...
</code></pre>
<p>It seems that it is recognizing the file changes, but in the end it is not refreshing the code.</p>
<p>I'm testing this by adding a unit test that asserts a failure, and then running the tests again. It doesn't fail.</p>
<p>I'm using <code>poetry</code>, and my <code>docker-compose.yml</code> file looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code> service:
build: .
depends_on:
postgres:
condition: service_healthy
command: poetry run watchmedo auto-restart -d "/app" -p '*.py' --recursive -- python main.py
volumes:
- .:/app
ports:
- '8010:8010'
</code></pre>
<p>the project structure is:</p>
<pre class="lang-bash prettyprint-override"><code>├── app
├── protos
└── main.py
</code></pre>
<p>relevant sections of the <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.12.3-slim-bookworm
RUN pip install poetry==1.8.3
WORKDIR /code
COPY ... ./
RUN poetry install ...
COPY app ./app
COPY protos ./protos
COPY main.py ./
</code></pre>
<p>relevant package versions:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
python = "^3.12"
grpcio = "^1.63.0"
grpcio-tools = "^1.63.0"
watchdog = {extras = ["watchmedo"], version = "^4.0.1"}
</code></pre>
<p>Can anyone with some gRPC experience help me figure out what is happening?</p>
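<p>One thing worth double-checking in the files above: the Dockerfile copies the code into <code>/code</code> and runs <code>/code/main.py</code>, while the compose service mounts the host directory at <code>/app</code> and tells <code>watchmedo</code> to watch <code>/app</code>, so edits never reach the directory the interpreter actually runs from. A sketch of a consistent setup, assuming the code should live at <code>/code</code>:</p>

```yaml
service:
  build: .
  command: poetry run watchmedo auto-restart -d /code -p '*.py' --recursive -- python main.py
  volumes:
    - .:/code        # mount over the same path the Dockerfile uses
  ports:
    - '8010:8010'
```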
|
<python><grpc><python-watchdog><grpcio>
|
2024-05-31 02:54:11
| 1
| 571
|
nnov
|
78,557,892
| 2,793,602
|
Windows task scheduler to execute python script - 0x1 error
|
<p>I have a python script on a server. It places data from MS SQL Server into a df, then creates a CSV file to a location on the server, and then uploads the CSV file to an FTP site. It checks if there is already a file in that location before creating the CSV, and deletes it if there is.</p>
<p>I want Windows Task Scheduler to do this. I have a service account that I want to set the task up with, but if I use that account, the task produces 0x1. If I use my own account, the task executes fine, suggesting there is a permission problem with the service account.</p>
<p>I have made sure the service account has full permissions in the location where the CSV would be deleted from, and created to. I have also made sure the account has permissions in the database that the script selects data from. What else can I check?</p>
|
<python><sql-server><scheduled-tasks>
|
2024-05-31 02:41:55
| 0
| 457
|
opperman.eric
|
78,557,711
| 17,246,545
|
Why getting gym errors in stable-baseline version 2.1.0?
|
<p>I am uploading an ecr image to an aws lambda and invoking it.</p>
<p>In the core code of this ecr image, I am importing stable-baseline3 (2.3.0), and I am using PPO within this library to use PPO.load().</p>
<p>It used to work fine, but recently I've been getting the error <code>no module named “gym”</code>.</p>
<p>So I added gym 0.21.0 to the Dockerfile and built it, and while doing so, I got a log that asked me to install shimmy, so I added shimmy 1.3.0 to the Dockerfile as well.</p>
<p>However, I ended up getting an error saying that gymnasium.wrapper.monitoring was not found.</p>
<p>So I tried to avoid conflicts between gym and gymnasium by pip uninstall gymnasium from the Dockerfile, but even after doing this, I still get this error about gymnasium.wrapper.monitoring.</p>
<p>What could be causing me to suddenly get the above error when everything was fine up until now, and how can I fix it?</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.10
RUN yum install gcc -y
# Upgrade pip and install wheel
RUN pip3 install wheel==0.38.4
RUN pip3 install stable-baselines3==2.1.0
RUN pip3 install orjson
RUN pip3 install pytz
RUN pip3 install configparser
RUN pip3 install boto3==1.26.157
RUN pip3 install influxdb
RUN pip3 install pandas
RUN pip3 install pymysql
RUN pip3 install sagemaker
</code></pre>
<p>Now I am trying to change (upgrade/downgrade) the stable-baselines3 version. Is this the right way to fix the error?</p>
|
<python><python-3.x><docker><aws-lambda><compilation>
|
2024-05-31 00:57:41
| 1
| 389
|
SecY
|
78,557,590
| 7,470,057
|
What are my options for configuration files that contain a function in python?
|
<p>I'm creating a chat bot manager in python, where myself (or other developers) can provide a configuration file to the manager and it will generate a bot depending on what it found in the config.</p>
<p>The chat bot manager is generally for monitoring different services, and then performing some sort of action depending on what it finds while monitoring. Ideally, I'd have something that looks like the following:</p>
<pre><code>[
{
triggerURL: "www.foobar.com/some/health/check",
triggerMethod: "GET"
healthyStatusCodes: [200, 202],
healthyResponseBody: "{\"status\":\"ok\"}",
healthyCallback: () => console.log("do something here"),
unhealthyCallback: () => console.log("do something else here")
}
]
</code></pre>
<p>You might notice my example is a little JS-esque.</p>
<p>Is there a way I can embed javascript into a python program to accomplish what I want?</p>
<p>Are there any other alternatives that do something similar? I'd like to keep the amount of clutter in the configuration file to a minimum, and would like to avoid future developers needing to write any extra code to get it working.</p>
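<p>Since both the manager and the configs are Python, one low-clutter option is to make each config a plain Python module whose entries hold real callables (loaded with <code>importlib</code>). The JS-style example above might translate to something like this, where all names are illustrative:</p>

```python
# health_checks.py -- a config that is itself a Python module, so entries
# can hold real functions instead of strings of embedded JavaScript
CHECKS = [
    {
        "trigger_url": "www.foobar.com/some/health/check",
        "trigger_method": "GET",
        "healthy_status_codes": [200, 202],
        "healthy_response_body": '{"status": "ok"}',
        "healthy_callback": lambda: print("do something here"),
        "unhealthy_callback": lambda: print("do something else here"),
    }
]

def run_check(check, status_code):
    # the manager picks the callback based on the observed status code
    if status_code in check["healthy_status_codes"]:
        check["healthy_callback"]()
    else:
        check["unhealthy_callback"]()

run_check(CHECKS[0], 200)
```

<p>Developers supplying a config then only write declarative entries plus small lambdas or named functions; no extra glue code or embedded-JavaScript runtime is needed.</p>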
|
<python><functional-programming><configuration-files>
|
2024-05-30 23:41:40
| 1
| 465
|
backward forward
|
78,557,567
| 12,240,037
|
Downloading REST data which contains date_time attributes
|
<p>I am attempting to download data from ArcGIS REST services in ArcGIS Pro using custom geometry. After some <a href="https://stackoverflow.com/questions/78405479/arcgis-rest-map-services-data-query-using-geometry-in-python?noredirect=1#comment138254993_78405479">help</a>, I've managed to get this to work and save the data as a shapefile. The data I'm downloading contains a "date_time" field, and only the "date" portion is being returned. After doing some reading, I've learned that shapefiles cannot store date_time. I tried saving to a gdb, but it was more complicated than I expected, and I'm a beginner.</p>
<p>Is there a simple way to save this to a gdb and retain "date_time"?</p>
<p>Or modify the query to calculate a "date" and "time" column separately?</p>
<p>Or return "date_time" as a string that I can then convert to "date" and "time" manually?</p>
<pre><code>from arcgis.gis import GIS
from arcgis.features import FeatureLayer
from arcgis.geometry.filters import intersects
gis = GIS("pro")
#The ESRI JSON used to subset the REST data
geometry = {
"spatialReference": {"wkid": 4326},
"rings": [
[
["-124.785944", "49.142568"],
["-125.439631", "50.001049"],
["-124.928766", "50.335311"],
["-123.735655", "49.654732"],
["-124.404157", "49.265739"],
["-124.785944", "49.142568"]
]
]
}
# The REST service URL
rest_service_url = "https://gisp.dfo-mpo.gc.ca/arcgis/rest/services/FGP/MSDI_Dynamic_Current_Layer/MapServer/0"
# Access the feature layer using the REST service URL
layer = FeatureLayer(rest_service_url)
# Query the feature layer
features = layer.query(where="1=1", geometry_filter=intersects(geometry, 4326))
print(f"Number of features found: {len(features.features)}")
# Save to local Dir
output_dir = r"C:\Users\name\Documents"  # note: a raw string cannot end with a backslash
# Save features to shp
features.save(output_dir, "currents.shp")
</code></pre>
<p>Thanks!</p>
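<p>For the third option (handling the raw value in Python): ArcGIS REST services typically return <code>esriFieldTypeDate</code> attributes as Unix epoch milliseconds, so if the raw attribute value is available it can be split into separate date and time strings (the timestamp below is a made-up example, not from this service):</p>

```python
from datetime import datetime, timezone

def split_epoch_ms(epoch_ms):
    # ArcGIS date fields are Unix epoch milliseconds, interpreted here as UTC
    dt = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d"), dt.strftime("%H:%M:%S")

date_str, time_str = split_epoch_ms(1717200000000)
print(date_str, time_str)
```

<p>The two strings could then be written to separate "date" and "time" text fields, sidestepping the shapefile limitation entirely.</p>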
|
<python><rest><datetime><arcgis>
|
2024-05-30 23:28:25
| 1
| 327
|
seak23
|
78,557,260
| 13,794,499
|
If Python builtins derive from ABCs, then why is their metaclass type instead of ABCMeta?
|
<p>I was reading PEP-3119, and I discovered that builtins derive from ABCs.</p>
<p>From PEP-3119:</p>
<blockquote>
<p>The built-in type set derives from MutableSet. The built-in type frozenset derives from Set and Hashable.</p>
</blockquote>
<p>In Python:</p>
<pre><code>from collections.abc import Mapping
>>> print('y' if issubclass(dict, Mapping) else 'n')
y
</code></pre>
<p>With the above, I would assume that dict's metaclass should be ABCMeta since it has the ABCMeta metaclass in its inheritance chain. (dict > MutableMapping > Mapping). This might be where my knowledge is failing. Here's the relevant portion from the Python Data Model:</p>
<blockquote>
<p>3.3.3.3. Determining the appropriate metaclass. The appropriate metaclass for a class definition is determined as follows:
if no bases and no explicit metaclass are given, then type() is used;
if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass;
if an instance of type() is given as the explicit metaclass, or bases are defined, then the most derived metaclass is used.
The most derived metaclass is selected from the explicitly specified metaclass (if any) and the metaclasses (i.e. type(cls)) of all specified base classes. The most derived metaclass is one which is a subtype of all of these candidate metaclasses. If none of the candidate metaclasses meets that criterion, then the class definition will fail with TypeError.</p>
</blockquote>
<p>However, Python shows dict's metaclass is type:</p>
<pre><code>>>> type(dict)
<class 'type'>
</code></pre>
<p>Can someone explain this to me? Thank you.</p>
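A quick stdlib check (a sketch, not an authoritative answer) points at the registration mechanism: `collections.abc` calls `Mapping.register(dict)`, making `dict` a *virtual* subclass. `ABCMeta.__subclasscheck__` consults that registry, but `ABCMeta` never participated in creating the `dict` class itself, so its metaclass stays `type`:

```python
from collections.abc import Mapping

# dict is only *registered* as a virtual subclass (Mapping.register(dict)),
# so ABCMeta never takes part in creating the dict class itself.
print(issubclass(dict, Mapping))  # True, via ABCMeta.__subclasscheck__
print(type(dict))                 # <class 'type'>
print(Mapping in dict.__mro__)    # False: no real inheritance
```

The metaclass rules in the Data Model only apply to real base classes listed in the class statement; virtual subclassing bypasses them entirely.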
|
<python><class><inheritance><metaclass><abc>
|
2024-05-30 21:08:32
| 1
| 306
|
Jordan
|
78,557,248
| 2,893,712
|
Pandas Map Multiple Columns Based on Specific Conditions
|
<p>My organization uses special codes for various employee attributes. We are migrating to a new system and I have to map these codes to a new code based on certain logic.</p>
<p>Here is my mappings df <code>Mappings</code>:</p>
<pre><code>State Old_Mgmt New_Mgmt Old_ID New_ID New_Site
01 A001 A100 0000 0101 123
01 A002 A100 0000 0102
01 A003 A105 0000 0103 123
02 A001 A100 0000 0101
</code></pre>
<p>And here is <code>EmployeeData</code>:</p>
<pre><code>State Management ID Site
01 A001 0000 456
01 A002 0000 987
02 A002 0000 987
....
</code></pre>
<p>The logic for the mapping is to go through each row of <code>EmployeeData</code> and if there is a match for <code>State</code>, <code>Management</code>, and <code>ID</code>, then it will update to the corresponding <code>New_</code> value. However for <code>Site</code>, it will update the Site ID only if <code>New_Site</code> is not blank/NaN. This mapping will modify the original dataframe.</p>
<p>Based on the above mapping the new <code>EmployeeData</code> would be:</p>
<pre><code>State Management ID Site
01 A100 0101 123 (modified this row)
01 A100 0102 987 (modified this row)
02 A002 0000 987
....
</code></pre>
<p>My initial thought process was to do something like this:</p>
<pre><code>for i,r in EmployeeData.iterrows(): # For each employee row
# Create masks for the filters we are looking for
mask_state = Mappings['State'] == r['State']
mask_mgmt = Mappings['Old_Mgmt'] == r['Management']
mask_id = Mappings['Old_ID'] == r['ID']
# Filter mappings for the above 3 conditions
MATCH = Mappings[mask_state & mask_mgmt & mask_id]
if MATCH.empty: # No matches found
print("No matches found in mapping. No need to update. Skipping.")
continue
MATCH = MATCH.iloc[0] # If a match is found, it will correspond to only 1 row
EmployeeData.at[i, 'Management'] = MATCH['New_Mgmt']
EmployeeData.at[i, 'ID'] = MATCH['New_ID']
if pd.notna(MATCH['New_Site']):
EmployeeData.at[i, 'Site'] = MATCH['New_Site']
</code></pre>
<p>However this seems fairly inefficient because I have to filter mappings for every row. If only 1 column was being mapped, I would do something like:</p>
<pre><code># Make a dict mapping Old_Mgmt -> New_Mgmt
MGMT_MAPPING = pd.Series(Mappings['New_Mgmt'].values,index=Mappings['Old_Mgmt']).to_dict()
mask_state = Mappings['State'] == r['State']
EmployeeData.loc[mask_state, 'Management'] = EmployeeData.loc[mask_state, 'Management'].replace(MGMT_MAPPING)
</code></pre>
<p>But that would not work for my situation since I need to map multiple values</p>
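One vectorized sketch (using hypothetical sample data mirroring the question): merge once on the three key columns, then assign matched rows back, applying the Site update only where `New_Site` is present:

```python
import numpy as np
import pandas as pd

# sample frames mirroring the question (an assumption, not the real data)
mappings = pd.DataFrame({
    'State': ['01', '01'], 'Old_Mgmt': ['A001', 'A002'],
    'New_Mgmt': ['A100', 'A100'], 'Old_ID': ['0000', '0000'],
    'New_ID': ['0101', '0102'], 'New_Site': ['123', np.nan]})
emp = pd.DataFrame({
    'State': ['01', '01', '02'], 'Management': ['A001', 'A002', 'A002'],
    'ID': ['0000', '0000', '0000'], 'Site': ['456', '987', '987']})

# one merge on all three keys replaces the per-row filtering
merged = emp.merge(mappings, how='left',
                   left_on=['State', 'Management', 'ID'],
                   right_on=['State', 'Old_Mgmt', 'Old_ID'])
hit = merged['New_Mgmt'].notna()
emp.loc[hit, 'Management'] = merged.loc[hit, 'New_Mgmt']
emp.loc[hit, 'ID'] = merged.loc[hit, 'New_ID']
site_hit = hit & merged['New_Site'].notna()
emp.loc[site_hit, 'Site'] = merged.loc[site_hit, 'New_Site']
```

This relies on the mapping keys being unique and `emp` having a default RangeIndex so that `merged` aligns row-for-row with `emp`.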
|
<python><pandas><mapping>
|
2024-05-30 21:05:50
| 3
| 8,806
|
Bijan
|
78,557,050
| 9,158,985
|
polars: Enabling global string cache creates smaller parquet files
|
<h2>Summary of the problem</h2>
<h3>Changes made to code</h3>
<p>I have a daily job (written in python using polars) that pulls some data from an API, transforms it, and saves it to a parquet file. A number of the columns are stored as <a href="https://docs.pola.rs/py-polars/html/reference/api/polars.datatypes.Categorical.html" rel="nofollow noreferrer">categorical</a> because they have a very small number of possible values. For unrelated reasons, I recently <a href="https://docs.pola.rs/py-polars/html/reference/api/polars.enable_string_cache.html" rel="nofollow noreferrer">enabled the polars global string cache</a> so I could join with another dataframe.</p>
<h3>Effect on file size</h3>
<p>After merging my changes, I saw that the new parquet files were about half the size they usually are. At first I thought there was simply less data being saved, but after investigation, it had the correct number of rows, and all the data seemed as expected.</p>
<h2>Question</h2>
<p>Is it possible that enabling the global string cache caused this change in file size? If so, why?</p>
|
<python><python-polars>
|
2024-05-30 20:00:19
| 0
| 880
|
natemcintosh
|
78,556,853
| 9,127,614
|
Artifact when creating and filling in a 2d list in Python
|
<p>Predefining a "matrix":</p>
<pre><code>m = [[0] * 2] * 3
</code></pre>
<p>Changing one element at position 0,1:</p>
<pre><code>m[0][1] = 1
</code></pre>
<p>Checking:</p>
<pre><code>print(m)
[[0, 1], [0, 1], [0, 1]]
</code></pre>
<p>All the elements m[0], m[1] and m[2] now have 1 at position 1. Why?
They seem to be references to the same object. Why? And how can I work around it?</p>
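The outer multiplication copies *references* to one inner list, so all three rows are the same object. The usual fix is a comprehension, which builds a distinct row on each iteration:

```python
# [[0] * 2] * 3 repeats a reference to one inner list three times.
# A comprehension evaluates [0] * 2 anew on every iteration instead:
m = [[0] * 2 for _ in range(3)]
m[0][1] = 1
print(m)  # [[0, 1], [0, 0], [0, 0]]
```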
|
<python><list><matrix>
|
2024-05-30 19:12:13
| 0
| 1,179
|
Mikhail Zakharov
|
78,556,399
| 2,153,235
|
Hide pandas column headings in a terminal window to save space and reduce cognitive noise
|
<p>I am looping through the groups of a pandas <code>groupby</code> object to print the (sub)dataframe for each group. The headings are printed for each group. Here are some of the (sub)dataframes, with column headings "MMSI" and "ShipName":</p>
<pre class="lang-none prettyprint-override"><code> MMSI ShipName
15468 109080345 OYANES 3 [19%]
46643 109080345 OYANES 3 [18%]
MMSI ShipName
19931 109080342 OYANES 2 [83%]
48853 109080342 OYANES 2 [82%]
MMSI ShipName
45236 109050943 SVARTHAV 2 [11%]
48431 109050943 SVARTHAV 2 [14%]
MMSI ShipName
21596 109050904 MR:N2FE [88%]
49665 109050904 MR:N2FE [87%]
MMSI ShipName
13523 941500907 MIKKELSEN B 5 [75%]
45711 941500907 MIKKELSEN B 5 [74%]
</code></pre>
<p>Web searching shows that <a href="https://pandas.pydata.org/pandas-docs/version/1.3/reference/api/pandas.io.formats.style.Styler.hide_columns.html" rel="nofollow noreferrer"><code>pandas.io.formats.style.Styler.hide_columns</code></a> can be used to suppress the headings. I am using Python 3.9, in which <code>hide_columns</code> is not recognized. However, <code>dir(pd.io.formats.style.Styler)</code> shows a <code>hide</code> method, for which the doc string gives this first example:</p>
<pre><code>>>> df = pd.DataFrame([[1,2], [3,4], [5,6]], index=["a", "b", "c"])
>>> df.style.hide(["a", "b"]) # doctest: +SKIP
0 1
c 5 6
</code></pre>
<p>When I try <code>hide()</code> and variations thereof, all I get is an address to the resulting <code>Styler</code> object:</p>
<pre><code>>>> df.style.hide(["a", "b"]) # doctest: +SKIP
<pandas.io.formats.style.Styler at 0x243baeb1760>
>>> df.style.hide(axis='columns') # https://stackoverflow.com/a/69111895
<pandas.io.formats.style.Styler at 0x243baeb17c0>
>>> df.style.hide() # Desparate random trial & error
<pandas.io.formats.style.Styler at 0x243baeb1520>
</code></pre>
<p>What could cause my result to differ from the doc string? How can I properly use the <code>Styler</code> object to get the dataframe printed without column headings?</p>
<p>I am using pandas 2.0.3 with Spyder 5.4.3.</p>
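Worth noting: a `Styler` renders HTML (hence the bare object repr in a plain console), so it may simply be the wrong tool for terminal output. For printing in a terminal, `to_string(header=False)` drops the column headings directly; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'MMSI': [109080345, 109080345],
                   'ShipName': ['OYANES 3 [19%]', 'OYANES 3 [18%]']},
                  index=[15468, 46643])

# Styler.hide() is for HTML rendering; for a terminal, skip the header here:
print(df.to_string(header=False))
```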
|
<python><pandas><dataframe>
|
2024-05-30 17:09:42
| 2
| 1,265
|
user2153235
|
78,556,386
| 4,814,342
|
How to create a custom role with specific permissions in Airflow using AirflowSecurityManager or other?
|
<p>I'm currently working on a project where I need to create custom roles with specific permissions in Apache Airflow. I have a dictionary that contains the role names and their corresponding permissions. Here is an example of the dictionary:</p>
<pre><code>roles_permissions = {
"Role1": ["can_dag_read", "can_dag_edit"],
"Role2": ["can_dag_read"]
}
</code></pre>
<p>I have found that AirflowSecurityManager might be the way to go for managing roles and permissions in Airflow, but I'm not entirely sure how to use it to create these custom roles programmatically.</p>
<p>Could someone provide an example or guide me on how to create these custom roles with the permissions defined in the dictionary?</p>
|
<python><security><airflow><roles>
|
2024-05-30 17:06:30
| 1
| 947
|
Náthali
|
78,556,221
| 504,717
|
Get bytes data instead of writing using wfdb.wsramp
|
<p>We are using <a href="https://wfdb.readthedocs.io/en/latest/wfdb.html#wfdb.wrsamp" rel="nofollow noreferrer">wfdb</a> to write medical info for clients. However, this code has a hardcoded requirement to write to a directory. Is there a way to get the byte content so that I can upload it to S3?</p>
<p>The only approach I could think of was to write to temporary files, then read their byte content and send that content to s3 (or send those files to s3). But is there any other way without involving disk IO?</p>
<p>Can I override the IO call so that when it tries to write to disk, it writes to some custom buffer instead?</p>
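I don't know of a wfdb API that returns bytes directly, but one generic workaround (a sketch; the lambda below is a stand-in for something like `wfdb.wrsamp(..., write_dir=tmp)`) is to point the library at a temporary directory and read the files back as bytes before the directory is discarded:

```python
import os
import tempfile

def capture_written_bytes(write_fn):
    """Run a disk-writing API in a temp dir, return {filename: bytes}."""
    with tempfile.TemporaryDirectory() as tmp:
        write_fn(tmp)  # e.g. wfdb.wrsamp(record_name, ..., write_dir=tmp)
        return {name: open(os.path.join(tmp, name), 'rb').read()
                for name in os.listdir(tmp)}

# stand-in writer for illustration only; replace with the wfdb call
blobs = capture_written_bytes(
    lambda d: open(os.path.join(d, 'rec.dat'), 'wb').write(b'\x01\x02'))
```

Disk IO still technically happens, but nothing persists, and the resulting bytes can be handed straight to an S3 upload.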
|
<python><wfdb>
|
2024-05-30 16:26:56
| 1
| 8,834
|
Em Ae
|
78,556,172
| 2,977,256
|
Historical forecast in time series libraries
|
<p>I have been using Darts, where there is a very nice historical forecast functionality, and also nixtla's neuralforecast library. The latter does have cross-validation, which can mimick a historical forecast, but is bog-slow (unlike the other parts of nixtla's library, which are very speedy, and allow for a lot more customization than Darts). Any ideas of what the "right way" to do this is?</p>
|
<python><time-series>
|
2024-05-30 16:16:16
| 1
| 4,872
|
Igor Rivin
|
78,556,068
| 5,547,553
|
Why does the results of str.extract and str.extract_all differ in polars?
|
<p>Why does the result for str.extract and str.extract_all differ?<br>
Should str.extract_all not return only the capture group, like str.extract does?</p>
<pre><code>import polars as pl
#polars==0.20.30
dff = pl.DataFrame({'a': 'Label: name, Value: John, Label: car, Value: Ford'})
(dff.with_columns(pl.col('a').str.extract( r'Label:?(.*?) Value:',1).alias('data1'),
pl.col('a').str.extract_all(r'Label:?(.*?) Value:').list.get(0).alias('data2')
)
)
</code></pre>
<p>Results:</p>
<pre><code>data1: " name,"
data2: "Label: name, Value:"
</code></pre>
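Not polars-specific, but the same distinction exists in plain `re`, which suggests this is by design: `extract(pattern, 1)` returns capture group 1, while `extract_all` returns the full match (group 0). A quick sketch:

```python
import re

s = 'Label: name, Value: John, Label: car, Value: Ford'
pat = r'Label:?(.*?) Value:'

m = re.search(pat, s)
print(m.group(0))  # 'Label: name, Value:'  <- like extract_all's elements
print(m.group(1))  # ' name,'               <- like extract(..., 1)
```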
|
<python><regex><python-polars>
|
2024-05-30 15:54:37
| 0
| 1,174
|
lmocsi
|
78,555,894
| 10,425,150
|
Find all columns with "dateformat" in dataframe
|
<p>For the following <code>df</code> I would like to extract columns with "dates":</p>
<pre><code>import pandas as pd
df = pd.DataFrame([["USD", 12.3, 1, 23.33, 33.1],["USD", 32.1, 2, 34.44, 23.1]],columns= ['currency', '1999-07-31', 'amount', '1999-10-31', '2000-01-31'])
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">currency</th>
<th style="text-align: right;">1999-07-31</th>
<th style="text-align: right;">amount</th>
<th style="text-align: right;">1999-10-31</th>
<th style="text-align: right;">2000-01-31</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">USD</td>
<td style="text-align: right;">12.3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">23.33</td>
<td style="text-align: right;">33.1</td>
</tr>
<tr>
<td style="text-align: left;">USD</td>
<td style="text-align: right;">32.1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">34.44</td>
<td style="text-align: right;">23.1</td>
</tr>
</tbody>
</table></div>
<p><strong>Current code:</strong></p>
<pre><code>datetime_types = ["datetime", "datetime64", "datetime64[ns]", "datetimetz"]
dates = df.columns.to_frame().select_dtypes(include=datetime_types)
</code></pre>
<p><strong>Current output:</strong></p>
<pre><code>dates.to_string()
'Empty DataFrame\nColumns: []\nIndex: [currency, 1999-07-31, amount, 1999-10-31, 2000-01-31]'
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code>[1999-07-31, 1999-10-31, 2000-01-31]
</code></pre>
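The column labels here are plain strings, not datetime values, so dtype-based selection cannot see them. One sketch that parses the labels instead (assuming the `YYYY-MM-DD` format shown):

```python
import pandas as pd

df = pd.DataFrame([["USD", 12.3, 1, 23.33, 33.1]],
                  columns=['currency', '1999-07-31', 'amount',
                           '1999-10-31', '2000-01-31'])

# labels that fail to parse become NaT and are filtered out
parsed = pd.to_datetime(df.columns, format='%Y-%m-%d', errors='coerce')
date_cols = df.columns[parsed.notna()].tolist()
print(date_cols)  # ['1999-07-31', '1999-10-31', '2000-01-31']
```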
|
<python><pandas><dataframe>
|
2024-05-30 15:21:31
| 1
| 1,051
|
Gооd_Mаn
|
78,555,801
| 4,245,882
|
Get Position of clicked pixel on an image in justpy
|
<p>In justpy, I would like to load an image and record which pixel was clicked on. So I need the pixel position within the image (i.e. accounting for the image's offset), not the mouse position on the screen.</p>
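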
<p>JustPy click and click_out event do not give any coordinates at all, even in debug mode.</p>
<pre><code>{'event_type': 'click__out', 'id': 2, 'class_name': 'Img', 'html_tag': 'img', 'vue_type': 'html_component', 'page_id': 0, 'websocket_id': 0, 'session_id': 'e0311ccc163e47df884e3cb109d046b9', 'msg_type': 'event', 'page': WebPage(page_id: 0, number of components: 2, reload interval: None), 'websocket': <starlette.websockets.WebSocket object at 0x75ae283282d0>, 'target': Img(id: 2, html_tag: img, vue_type: html_component, name: No name, number of components: 0)}
</code></pre>
<p>Any idea how to get the position of the pixel? Or the offset of the image on the users screen?</p>
<p>I read the examples on the webpage and tried different events, but failed.</p>
<pre><code>import justpy as jp
def my_click1(self, msg):
self.text = 'I was clicked'
#self.log.text=msg
print(msg.event_type)
print(msg['event_type'])
print(msg)
def img_click(self, msg):
print("Image Click")
print(msg.event_type)
print(msg['event_type'])
print(msg)
print(self)
def img_clickout(self, msg):
print("Clickout")
print(msg.event_type)
print(msg['event_type'])
print(msg)
print(self)
def event_demo1():
wp = jp.WebPage()
wp.debug = True
d = jp.Div(text='Not clicked yet', a=wp, classes='w-48 text-xl m-2 p-1 bg-blue-500 text-white')
img = jp.Img(src='https://www.python.org/static/community_logos/python-powered-h-140x182.png', a=wp)
d.on('click', my_click1)
img.on('click', img_click)
img.on('click__out', img_clickout)
return wp
jp.justpy(event_demo1)
</code></pre>
|
<python><justpy>
|
2024-05-30 15:01:11
| 1
| 698
|
stupidstudent
|
78,555,437
| 23,260,297
|
Exporting dataframe to excel with indexes
|
<p>I am exporting a dataframe to excel. I am using this piece of code:</p>
<pre><code>with pd.ExcelWriter(path, engine='openpyxl', mode='w') as writer:
df.to_excel(writer, sheet_name=name, index=False)
</code></pre>
<p>I know using <code>index=False</code> removes the indexes in excel. However, I have a row index called 'Total' that displays totals for specific columns. Is there a way I can only display that one row index and exclude the rest?</p>
<p>If I use <code>index=True</code> my excel looks like this:</p>
<pre><code> A B
1 foo 100
2 foo 200
3 foo 100
4 foo 200
Total 600
</code></pre>
<p>I want my excel to look like this:</p>
<pre><code> A B
foo 100
foo 200
foo 100
foo 200
Total 600
</code></pre>
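I'm not aware of a <code>to_excel</code> option for this, but one workaround sketch is to keep <code>index=True</code> and blank out every index label except 'Total' before writing:

```python
import pandas as pd

# hypothetical frame mirroring the question's layout
df = pd.DataFrame({'A': ['foo', 'foo', 'foo', 'foo', ''],
                   'B': [100, 200, 100, 200, 600]},
                  index=[1, 2, 3, 4, 'Total'])

# blank every index label except 'Total', then export with index=True
out = df.copy()
out.index = [i if i == 'Total' else '' for i in out.index]
# with pd.ExcelWriter(path, engine='openpyxl', mode='w') as writer:
#     out.to_excel(writer, sheet_name=name, index=True)
```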
|
<python><pandas><excel><openpyxl>
|
2024-05-30 13:54:18
| 1
| 2,185
|
iBeMeltin
|
78,555,411
| 8,510,149
|
Fillna with values from other rows with matching keys
|
<p>In the dataframe I define below, I want to use the features ID and ID2 to fill the cells of features val1 and val2 with values. I want all ID and ID2 combinations to have the same values for the features val1 and val2.</p>
<pre><code>
df = pd.DataFrame({'ID':[0,0,0,1,1,1],
'DATE':['2021', '2022', '2023', '2021', '2022', '2023'],
'ID2':[23, 34, 54, 321, 1244, 1244],
'val1':[np.nan, 200, 300, np.nan, 234, np.nan],
'val2':[55555, 66666, 77777, 88888, 99999, np.nan],
'val3':['A', 'F', 'W', 'T', 'I', 'O']})
#expected result
print(pd.DataFrame({'ID':[0,0,0,1,1,1],
'DATE':['2021', '2022', '2023', '2021', '2022', '2023'],
'ID2':[23, 34, 54, 321, 1244, 1244],
'val1':[np.nan, 200, 300, np.nan, 234, 234],
'val2':[55555, 66666, 77777, 88888, 99999, 99999],
'val3':['A', 'F', 'W', 'T', 'I', 'O']}))
</code></pre>
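One sketch that avoids row-wise loops: group on the two keys and forward/backward fill within each group. For the sample data this reproduces the expected result:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [0, 0, 0, 1, 1, 1],
                   'DATE': ['2021', '2022', '2023', '2021', '2022', '2023'],
                   'ID2': [23, 34, 54, 321, 1244, 1244],
                   'val1': [np.nan, 200, 300, np.nan, 234, np.nan],
                   'val2': [55555, 66666, 77777, 88888, 99999, np.nan],
                   'val3': ['A', 'F', 'W', 'T', 'I', 'O']})

# fill NaNs from other rows sharing the same (ID, ID2) pair
df[['val1', 'val2']] = (df.groupby(['ID', 'ID2'])[['val1', 'val2']]
                          .transform(lambda g: g.ffill().bfill()))
```

NaNs stay NaN when a (ID, ID2) group has no non-null value at all, matching the expected output.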
|
<python><pandas>
|
2024-05-30 13:49:43
| 2
| 1,255
|
Henri
|
78,555,334
| 72,791
|
Filter a pandas DataFrame based on multiple columns with a corresponding list of values
|
<p>I have a DataFrame that looks a bit like this:</p>
<pre><code> A B C D ... G H I J
0 First First First First ... 0.412470 0.758011 0.066926 0.877992
1 First First First Third ... 0.007162 0.957042 0.601337 0.636086
2 First First Third First ... 0.956398 0.640909 0.602861 0.679656
3 First First Third Third ... 0.905421 0.199685 0.471300 0.975808
4 First Third First First ... 0.378181 0.498606 0.865298 0.914407
5 First Third First Third ... 0.387706 0.247412 0.339593 0.431647
6 First Third Third First ... 0.582202 0.046199 0.496258 0.533133
7 First Third Third Third ... 0.877199 0.011512 0.338528 0.938252
8 Third First First First ... 0.446433 0.175686 0.115796 0.985400
9 Third First First Third ... 0.315839 0.252855 0.142463 0.929233
10 Third First Third First ... 0.192566 0.600732 0.434166 0.933182
11 Third First Third Third ... 0.380029 0.511411 0.672583 0.807731
12 Third Third First First ... 0.915590 0.507470 0.390135 0.303314
13 Third Third First Third ... 0.977414 0.062521 0.909845 0.314432
14 Third Third Third First ... 0.608958 0.384802 0.193425 0.689283
15 Third Third Third Third ... 0.496223 0.478222 0.076192 0.695453
[16 rows x 10 columns]
</code></pre>
<p>I also have a list (coming from elsewhere) with the values for A, B, C & D that I'm looking for, something like this:</p>
<pre class="lang-py prettyprint-override"><code>expected = ['First', 'Third', 'First', 'Third']
</code></pre>
<p>I'd like to filter to find a row matching a certain set of ABCD values, where the expected values are in a list. Something like this (which doesn't work):</p>
<pre class="lang-py prettyprint-override"><code># This looks neat, but doesn't work
rows = df[df[['A', 'B', 'C', 'D'] == expected]]
rows
</code></pre>
<pre><code># Not what I was hoping for!
Out[17]:
A B C D E F G H I J
0 First NaN First NaN NaN NaN NaN NaN NaN NaN
1 First NaN First Third NaN NaN NaN NaN NaN NaN
2 First NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 First NaN NaN Third NaN NaN NaN NaN NaN NaN
4 First Third First NaN NaN NaN NaN NaN NaN NaN
5 First Third First Third NaN NaN NaN NaN NaN NaN
6 First Third NaN NaN NaN NaN NaN NaN NaN NaN
7 First Third NaN Third NaN NaN NaN NaN NaN NaN
8 NaN NaN First NaN NaN NaN NaN NaN NaN NaN
9 NaN NaN First Third NaN NaN NaN NaN NaN NaN
10 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
11 NaN NaN NaN Third NaN NaN NaN NaN NaN NaN
12 NaN Third First NaN NaN NaN NaN NaN NaN NaN
13 NaN Third First Third NaN NaN NaN NaN NaN NaN
14 NaN Third NaN NaN NaN NaN NaN NaN NaN NaN
15 NaN Third NaN Third NaN NaN NaN NaN NaN NaN
</code></pre>
<p>I could use <code>dropna(subset=['A', 'B', 'C', 'D'])</code> to get the relevant rows, then extract the index and use it on the original table, but that's getting quite long-winded.</p>
<p>I know I can do this long-hand like this, but I'm wondering whether there's a neater way:</p>
<pre class="lang-py prettyprint-override"><code># This works, but is clunky:
rows = df[(df['A'] == expected[0]) & (df['B'] == expected[1]) & (df['C'] == expected[2]) & (df['D'] == expected[3])]
rows
</code></pre>
<pre><code># This is what I want:
Out[19]:
A B C D ... G H I J
5 First Third First Third ... 0.387706 0.247412 0.339593 0.431647
[1 rows x 10 columns]
</code></pre>
<p>Is there a simpler way of doing this? My searching for filtering by lists just seems to come up with lots of <code>isin</code> suggestions, which aren't relevant.</p>
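One neater form: compare the four-column sub-frame against the list (pandas broadcasts the list across the columns) and require every key column to match per row:

```python
import pandas as pd

# small frame mirroring the question's shape
df = pd.DataFrame({'A': ['First', 'First'], 'B': ['First', 'Third'],
                   'C': ['First', 'First'], 'D': ['First', 'Third'],
                   'G': [0.412470, 0.387706]})
expected = ['First', 'Third', 'First', 'Third']

# element-wise compare, then keep rows where all four key columns matched
mask = (df[['A', 'B', 'C', 'D']] == expected).all(axis=1)
rows = df[mask]
print(rows.index.tolist())  # [1]
```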
|
<python><pandas><dataframe>
|
2024-05-30 13:36:40
| 2
| 73,231
|
DrAl
|
78,555,002
| 561,243
|
Problem plotting pandas dataframe containing arrays
|
<p>I have a tricky question for you concerning data structure in pandas for plotting with seaborn.</p>
<p>Let's imagine, I have several experiments, each of them performed in different conditions. The result of each experiment is an array with a few thousand floats.</p>
<p>I was considering storing all the experiment results in a single pandas dataframe in the so-called long format, i.e. each row is one experiment and each column is a variable.
Almost all the variables define the experimental conditions, plus one variable containing the array of floats with the experiment results.</p>
<p>Something like this:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'id':[1,2], 'temp':[21,22], 'oven':[0,1], 'values':[[1,2,3,4,5], [10,11,12,12,15,16,17]]})
</code></pre>
<p>So far so good.</p>
<p>Now I would like to use seaborn to make some plots. Imagine I want to plot an histogram of the values using id as a category.</p>
<p>I would do:</p>
<pre><code>sns.histplot(df, x='values', hue='id')
</code></pre>
<p>But if I do so, I get an error message complaining that list is an unhashable type.</p>
<p>As a workaround, I changed the data structure, so that I have a row for each of the floats in the experiment results, but this is making the table unnecessarily huge.</p>
<p>Do you have any suggestion for me?</p>
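The per-float long format mentioned as a workaround is indeed the standard answer, and `explode` can build it on the fly just for the plot, so the stored table stays compact. A sketch (seaborn call commented out):

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2],
                   'temp': [21, 22],
                   'values': [[1, 2, 3, 4, 5], [10, 11, 12, 12, 15, 16, 17]]})

# one row per float, only when plotting; the original df stays list-valued
long = df.explode('values')
long['values'] = long['values'].astype(float)
# sns.histplot(long, x='values', hue='id')  # now hashable scalars, not lists
```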
|
<python><pandas><dataframe><seaborn>
|
2024-05-30 12:35:58
| 1
| 367
|
toto
|
78,554,968
| 982,402
|
Automate print pdf using robot framework
|
<p>I am using Windows. My application has a print icon. Clicking that print icon opens the "send to print" window. How do I automate this print window?</p>
<p>I already tried the steps below but nothing worked so far. The focus never goes to this print window. Please guide me.</p>
<pre><code>Click Element my-print-icon-id
Switch Window NEW
Click Element print-screen-print-button-id
</code></pre>
|
<python><automation><pytest><robotframework>
|
2024-05-30 12:30:52
| 1
| 1,719
|
Anna
|
78,554,699
| 1,390,887
|
XSL for each text starts-with select in variable
|
<p>I have this XML as input:</p>
<pre><code> <root> RH03051CDSIA280524CM1490301951171
610000001 93001 G0305101700000000004575EUR270524C000000000000,00IT44
620000001001270524270524D000000000649,3450TE ITDA00DPN145
630000001001HAYS SRL/AVIS BUDGET ITALIA S.P.A./AR885265/2355070853/B2B/RCUR/OE5OA5200P4907R3
640000001EUR270524C000000000000,00
EF03051CDSIA280524CM1490301951171 0000001 0000006
RH03051CDSIA280524CM1490301951349
610000001 93001 Z0305101699000078389249USD270524C000000001320,97IT72
640000001USD270524C000000001320,97
EF03051CDSIA280524CM1490301951349 0000001 0000004
</root>
</code></pre>
<p>I want to iterate over each string that starts with " RH" and parse the whole multiline string. For example, in my XML input I have 2 strings that start with RH:</p>
<ol>
<li>RH03051CDSIA280524CM1490301951171 ....</li>
<li>RH03051CDSIA280524CM1490301951349 ....</li>
</ol>
<p>my xslt is this:</p>
<pre><code> <xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:exsl="http://exslt.org/common"
xmlns:ns="urn:iso:std:iso:20022:tech:xsd:camt.053.001.03"
xmlns:BODY="urn:CBI:xsd:CBIBdyBkToCstmrStmtReq.00.01.02"
xmlns:LMSG="urn:CBI:xsd:CBIBkToCstmrStmtReqLogMsg.00.01.02"
xmlns:HE2E="urn:CBI:xsd:CBIHdrSrv.001.07"
xmlns:HTRT="urn:CBI:xsd:CBIHdrTrt.001.07"
xmlns:IDST="urn:CBI:xsd:CBIIdyStmtReqLogMsg.00.01.02"
xmlns:DLST="urn:CBI:xsd:CBIDlyStmtReqLogMsg.00.01.02"
xmlns:PRST="urn:CBI:xsd:CBIPrdcStmtReqLogMsg.00.01.02"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:str="http://exslt.org/strings"
xmlns:myns="py_ns"
extension-element-prefixes="exsl myns"
exclude-result-prefixes="exsl ns">
<xsl:output encoding="UTF-8" standalone="yes" indent="yes" method="xml" />
<xsl:template match="/">
<xsl:for-each select="//text[starts-with(., ' RH')]">
<xsl:value-of select="." />
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
</code></pre>
<p>but when i do:</p>
<pre><code><xsl:for-each select="//text[starts-with(., ' RH')]">
<xsl:value-of select="." />
</code></pre>
<p>I don't see any output. Is my code correct?
I have to use the lxml library, and I use
xmlns:exsl="http://exslt.org/common"
and
xmlns:str="http://exslt.org/strings"</p>
<p>i want:</p>
<p>RH03051CDSIA280524CM1490301951171 610000001 93001 G030510170 ......................... 0000001</p>
<p>and this:</p>
<p>RH03051CDSIA280524CM1490301951349 610000001 93001 ......................................... EF03051CDSIA280524CM1490301951349 0000004</p>
<p>Any tips?
Thanks and regards</p>
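One likely issue: `//text` selects *elements named* `text`, and the input has none; the records live in the text node of `<root>` (`//root/text()`), which XSLT 1.0's `for-each` cannot split by itself without something like `str:tokenize`. Since lxml is already in play, a Python-side sketch (with a shortened stand-in document) shows the splitting idea:

```python
import xml.etree.ElementTree as ET

# shortened stand-in for the real document (an assumption for illustration)
xml = '<root> RH111 first record\n610 detail\n RH222 second record\n620 detail</root>'
text = ET.fromstring(xml).text

# every record starts with " RH"; split on that marker and re-prefix it
records = ['RH' + part for part in text.split(' RH')[1:]]
for rec in records:
    print(rec)
```

The same split could also be done in XSLT via EXSLT's `str:tokenize`, but splitting on a multi-character marker is simpler in Python.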
|
<python><xml><xslt><transform>
|
2024-05-30 11:33:42
| 2
| 1,380
|
Catanzaro
|
78,554,551
| 2,846,140
|
Adapt context to log level in structlog
|
<p>A common need when logging events is to provide more or less information depending on the log level. Consider the example below where some noisy payload should be included in the logs only if the log level is <code>DEBUG</code> or below. I found 3 possible approach so far:</p>
<pre class="lang-py prettyprint-override"><code>import random
import logging
import structlog
log_level = logging.INFO
wrapper_class = structlog.make_filtering_bound_logger(log_level)
structlog.configure(wrapper_class=wrapper_class)
logger = structlog.get_logger()
noisy = random.randbytes(1024)
# Approach 1
logger.info("Approach 1")
logger.debug("Approach 1", noisy=noisy)
# Approach 2
if log_level <= logging.DEBUG:
logger.debug("Approach 2", noisy=noisy)
else:
logger.info("Approach 2")
# Approach 3
log = logger.bind(noisy=noisy) if log_level <= logging.DEBUG else logger.bind()
log.info("Approach 3")
</code></pre>
<p>All three approaches work correctly, especially with the <code>INFO</code> level:</p>
<pre><code>2024-05-30 12:41:51 [info ] Approach 1
2024-05-30 12:41:51 [info ] Approach 2
2024-05-30 12:41:51 [info ] Approach 3
</code></pre>
<p>With the <code>DEBUG</code> level however, approach 1 would show two logs for the same event, which is OK but not great (as it can easily be mistaken for two different events when browsing the logs). Approach 2 does not have this problem, although having an <code>INFO</code> log disappear when switching to <code>DEBUG</code> is a bit confusing. My favorite so far is approach 3 which simply adds the noisy payload to the existing <code>INFO</code> log when the log level is <code>DEBUG</code>.</p>
<p>How is this issue typically addressed in real-world code bases? Is there some good practice about that? Am I missing a better approach or a dedicated functionality in structlog?</p>
|
<python><structlog>
|
2024-05-30 11:01:37
| 1
| 13,475
|
Vincent
|
78,554,273
| 10,516,773
|
Gmail API history().list() returns only history id?
|
<p>I'm using a subscription listener script like this, but even though I send a new email to the inbox, every time it only delivers
<code>{'historyId': 123456 }</code>.</p>
<p>I tried every change I can think of (regenerating access tokens, purging Pub/Sub messages), but still no luck. What am I missing here?</p>
<pre><code>from google.cloud import pubsub_v1
import json
from google.oauth2 import service_account
from dotenv import dotenv_values
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from src.app.db import get_access_token
# from src.app.email_processor import get_message_from_historyid
config = dotenv_values(".env")
def callback(message):
try:
data_str = message.data.decode('utf-8')
data = json.loads(data_str)
history_id = data.get('historyId')
print(data)
get_message_from_historyid(email_id=data.get('emailAddress'), history_id=data.get('historyId'))
except json.JSONDecodeError as json_err:
print(f'Error decoding JSON: {json_err}')
except Exception as e:
print(f'Error processing message: {e}')
finally:
message.ack()
def create_gmail_service(email_id):
token_data = get_access_token(email=email_id)
credentials = Credentials(token=token_data['access_token'], refresh_token=token_data['refresh_token'], token_uri=token_data['token_uri'], client_id=token_data['client_id'], client_secret=token_data['client_secret'],)
service = build('gmail', 'v1', credentials=credentials)
profile = service.users().getProfile(userId='me').execute()
current_history_id = profile.get('historyId')
print(f"Current history ID: {current_history_id}")
return service
def get_message_from_historyid(email_id, history_id):
print("\n",email_id, history_id,"\n")
service = create_gmail_service(email_id)
response = service.users().messages().list(userId='me').execute()
changes = service.users().history().list(userId=email_id, startHistoryId=history_id).execute()
print("changes :: ", changes,"\n")
def listen_to_pubsub():
credentials = service_account.Credentials.from_service_account_file('./src/service_account_credentials.json')
project_id = config["PROJECT_ID"]
subsription_name = config["SUBSCRIPTION_NAME"]
subscriber = pubsub_v1.SubscriberClient(credentials=credentials)
subscription_path = subscriber.subscription_path(project=project_id, subscription=subsription_name)
streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
print(f'Listening for messages on {subscription_path}')
try:
streaming_pull_future.result()
except KeyboardInterrupt:
streaming_pull_future.cancel()
if __name__ == '__main__':
listen_to_pubsub()
</code></pre>
|
<python><google-cloud-platform><gmail-api><google-cloud-pubsub>
|
2024-05-30 10:03:23
| 0
| 1,120
|
pl-jay
|
78,553,974
| 12,430,846
|
qcut is not finding quantiles (many 0s and 1s duplicated in my df)
|
<p>My df has a <code>MAX_PERC</code> column ranging from 0 to 1.</p>
<ul>
<li>The count of 0s are 103168.</li>
<li>The count of 1s are 32364.</li>
<li>The count of obs less than 1 and more than 0.8 is 2594.</li>
<li>The count of obs more than 0 and less than 0.8 is 129.</li>
</ul>
<p>I'm trying to use <code>pd.qcut</code> to find quantiles, but it only works (i.e. finds more than one bin) for the higher quantiles (>0.8):</p>
<pre><code>pd.qcut(df['MAX_PERC'],80,retbins=True, duplicates='drop')
</code></pre>
<p>To recreate the dataframe:</p>
<pre><code> # Define the counts
count_zeros = 103168
count_ones = 32364
count_between_08_1 = 2594
count_between_0_08 = 129
# Create arrays for each range
zeros = np.zeros(count_zeros)
ones = np.ones(count_ones)
between_08_1 = np.random.uniform(0.8, 1.0, count_between_08_1)
between_0_08 = np.random.uniform(0, 0.8, count_between_0_08)
# Combine all arrays
all_values = np.concatenate([zeros, ones, between_08_1, between_0_08])
# Shuffle the array to mix the values
np.random.shuffle(all_values)
# Create the DataFrame
df = pd.DataFrame({'MAX_PERC': all_values})
</code></pre>
<p>Why? Shouldn't it be able to compute at least 2 quantile bins?</p>
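The behaviour follows from `duplicates='drop'`: with roughly 75% zeros and 23% ones, almost every requested quantile edge lands exactly on 0 or 1, and after dropping duplicate edges only a couple of distinct bin boundaries survive. A small sketch of the same effect:

```python
import pandas as pd

# 80% zeros, 20% ones: the .25/.5/.75 quantiles are all 0
s = pd.Series([0] * 80 + [1] * 20)

binned, edges = pd.qcut(s, 4, retbins=True, duplicates='drop')
print(edges)            # duplicate edges collapsed down to [0., 1.]
print(binned.nunique()) # a single bin survives
```

So the cut is working as documented; heavily duplicated values simply leave too few distinct edges for the requested number of bins.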
|
<python><pandas><dataframe><numpy><quantile>
|
2024-05-30 09:06:43
| 1
| 543
|
coelidonum
|