| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,145,554
| 3,846,286
|
Space complexity of grouping anagrams with hashmap
|
<p>I'm solving a leetcode question.</p>
<blockquote>
<p>Q: Given an array of strings <code>strs</code>, group all anagrams together into
sublists. You may return the output in any order.</p>
<p>Note: <code>strs[i]</code> is made up of lowercase English letters</p>
</blockquote>
<pre><code>class Solution:
    def groupAnagrams(self, strs: List[str]) -> List[List[str]]:
        # O(m*n) time complexity
        # O(m*n) space complexity
        d = {}  # O(m) entries each with O(n) size
        for s in strs:  # O(m)
            s_l = [0] * 26
            for c in s:  # O(n)
                s_l[ord(c) - ord("a")] += 1
            t_l = tuple(s_l)  # this is O(1) entry size with O(m) tuples
            if t_l in d:
                d[t_l].append(s)
            else:
                d[t_l] = [s]
        return d.values()
</code></pre>
<p>So, I thought the space complexity is <code>O(m*n)</code>. There are <code>O(m)</code> entries in the dictionary, and each entry will have <code>O(n)</code> space, where <code>m</code> is the number of strings, and <code>n</code> is the length of the largest string (worst case is all the strings are the same length).</p>
<p>However, the solution I found in <a href="https://neetcode.io/problems/anagram-groups" rel="nofollow noreferrer">https://neetcode.io/problems/anagram-groups</a> says the space complexity is <code>O(m)</code>. Why?</p>
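<p>For reference, the two quantities in the counting argument can be checked empirically. This is a sketch of the same approach, annotated with what each structure actually stores:</p>

```python
from collections import defaultdict

def group_anagrams(strs):
    """Group words by their 26-letter count signature (same idea as above)."""
    d = defaultdict(list)
    for s in strs:
        key = [0] * 26
        for c in s:
            key[ord(c) - ord("a")] += 1
        d[tuple(key)].append(s)  # key: fixed 26 ints; value: the string itself
    return list(d.values())

groups = group_anagrams(["eat", "tea", "tan", "ate", "nat", "bat"])
# Each key is a length-26 tuple no matter how long the strings are, so the
# keys contribute O(26 * m) = O(m); the value lists still hold every input
# string, i.e. O(total characters) -- the crux of the O(m) vs O(m*n) debate.
```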
<p>Note that I have found many questions with this exact same problem, like [0] through [9] but none pertains to the space complexity.</p>
<ol start="0">
<li><a href="https://stackoverflow.com/questions/17934627/grouping-anagrams">Grouping anagrams</a></li>
<li><a href="https://stackoverflow.com/questions/11108541/get-list-of-anagrams-from-a-dictionary?rq=1">get list of anagrams from a dictionary</a></li>
<li><a href="https://stackoverflow.com/questions/73286781/group-anagrams-leetcode">Group Anagrams - LeetCode</a></li>
<li><a href="https://stackoverflow.com/questions/70590778/leetcode-49-group-anagrams">LeetCode 49 Group Anagrams</a></li>
<li><a href="https://stackoverflow.com/questions/74505813/group-anagrams-in-leetcode-without-sorting">Group anagrams in leetcode without sorting</a></li>
<li><a href="https://stackoverflow.com/questions/25907481/how-to-group-anagrams-in-a-string-into-an-array">How to group anagrams in a string into an array</a></li>
<li><a href="https://stackoverflow.com/questions/3642812/group-together-all-the-anagrams">Group together all the anagrams</a></li>
<li><a href="https://stackoverflow.com/questions/70636150/group-anagrams-leetcode-question-python">Group anagrams Leetcode question (python)</a> this one also uses the solution for neetcode. No mention of space complexity unfortunately</li>
<li><a href="https://stackoverflow.com/questions/72077721/big-o-of-group-anagram-algorithm">big O of group anagram algorithm</a></li>
<li><a href="https://stackoverflow.com/questions/58150969/how-do-i-group-different-anagrams-together-from-a-given-string-array">How do I group different anagrams together from a given string array?</a></li>
</ol>
<p>This question: <a href="https://stackoverflow.com/questions/71683331/string-anagrams-grouping-improvements-and-analyzing-time-complexity">String anagrams grouping | Improvements and Analyzing time complexity</a> also agrees with my space complexity, but sadly there are no answers, so I don't know who is right: me and the questioner, or neetcode.</p>
|
<python><space-complexity>
|
2024-10-31 16:28:34
| 1
| 651
|
chilliefiber
|
79,145,368
| 7,914,054
|
Unable to change the download button sidebar based on the tab chosen using Shiny Python
|
<p>I would like to change the download buttons that appear based on the tab that is chosen. For example, if the tab is Main Visualization, I would like the following buttons to appear: download_heatmap and download_distribution. I would like download_piechart and download_barchart to only appear if "Additional Visualizations" tab is chosen. I'm running into problems when I try to apply this logic.</p>
<p>Here is the code:</p>
<pre><code>from shiny import App, render, ui, reactive, session
import seaborn as sns
from shinywidgets import output_widget, render_widget

# Load the penguins dataset
penguins = sns.load_dataset("penguins").dropna()

# UI setup
app_ui = ui.page_fluid(
    ui.layout_sidebar(
        ui.sidebar(
            ui.input_selectize("species", "Species:", choices=penguins["species"].unique().tolist(), multiple=True),
            ui.input_selectize("island", "Island:", choices=penguins["island"].unique().tolist(), multiple=True),
        ),
        # Place download buttons right below the sidebar
        ui.output_ui("download_buttons"),
        ui.navset_tab(
            ui.nav_panel("Main Visualizations", output_widget("heatmap_plot")),
            ui.nav_panel("Additional Visualizations", ui.h4("Pie Chart of Species Proportion"), output_widget("pie_chart")),
            id = 'my_tabset'
        )
    )
)

# Server
def server(input, output, session):
    filtered_data = reactive.Value(penguins)

    # Monitor active tab and dynamically render download buttons
    @output
    @render.ui
    def download_buttons():
        active_tab = reactive.Value(input.my_tabset)
        if active_tab == "Main Visualizations":
            # Render buttons for Main Visualizations
            return ui.div(
                ui.download_button("download_heatmap", "Download Heatmap", class_="download-btn download-btn-main"),
                ui.download_button("download_distribution", "Download Distribution Plot", class_="download-btn download-btn-main"),
                style="margin-top: 15px;"
            )
        else:
            # Render buttons for Additional Visualizations
            return ui.div(
                ui.download_button("download_piechart", "Download Pie Chart", class_="download-btn download-btn-additional"),
                ui.download_button("download_barchart", "Download Bar Chart", class_="download-btn download-btn-additional"),
                style="margin-top: 15px;"
            )

# Create the app
app = App(app_ui, server)
</code></pre>
|
<python><button><download><reactive><py-shiny>
|
2024-10-31 15:41:22
| 0
| 789
|
QMan5
|
79,145,336
| 4,049,396
|
Stripna: Pandas dropna but behaves like strip
|
<p>It is a pretty common occurrence to have leading and trailing <code>NaN</code> values in a table or DataFrame. This is particularly true after joins and in timeseries data.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    'a': [1, 2, 3, 4, 5, 6, 7],
    'b': [np.NaN, 2, np.NaN, 4, 5, np.NaN, np.NaN],
})

Out[0]:
   a    b
0  1  NaN
1  2  2.0
2  3  NaN
3  4  4.0
4  5  5.0
5  6  NaN
6  7  NaN
</code></pre>
<p>Let's remove these with <code>dropna</code>.</p>
<pre class="lang-py prettyprint-override"><code>df1.dropna()
Out[1]:
   a    b
1  2  2.0
3  4  4.0
4  5  5.0
</code></pre>
<p>Oh no!! We also lost the rows with missing values in the middle of column <code>b</code>. I want to keep those middle (inner) ones.</p>
<p><strong>How do I strip the rows with leading and trailing <code>NaN</code> values in a quick, clean and efficient way?</strong> The results should look as follows:</p>
<pre class="lang-py prettyprint-override"><code>df1.stripna()
# obviously I'm not asking you to create a new pandas method...
# I just thought it was a good name.
Out[3]:
   a    b
1  2  2.0
2  3  NaN
3  4  4.0
4  5  5.0
</code></pre>
<hr />
<p>Some of the answers so far are pretty nice but I think this is important enough functionality that I raised a feature request with Pandas <a href="https://github.com/pandas-dev/pandas/issues/60162" rel="nofollow noreferrer">here</a> if anyone is interested. Let's see how it goes!</p>
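<p>For a single reference column, the behaviour described above can be sketched with <code>first_valid_index</code>/<code>last_valid_index</code> (this is just one idiom, not an actual pandas method):</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    'a': [1, 2, 3, 4, 5, 6, 7],
    'b': [np.nan, 2, np.nan, 4, 5, np.nan, np.nan],
})

# Keep every row between the first and last non-NaN value of column 'b',
# preserving the inner NaNs (.loc slicing is inclusive on both ends).
s = df1['b']
stripped = df1.loc[s.first_valid_index():s.last_valid_index()]
```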
|
<python><pandas><dataframe><nan>
|
2024-10-31 15:32:52
| 4
| 4,814
|
Little Bobby Tables
|
79,145,328
| 3,042,850
|
Sudoku block checker in python
|
<p>This kind of question has been asked before, but I came up with a solution I don't see very often. It works on my localhost, but when I submit it to the university's grading program it fails the functionality check, even though it passes locally. I need to check a 3 by 3 block given a row and column entry. Locally I get false where false is expected.</p>
<pre><code>sudoku = [
    [9, 0, 0, 0, 8, 0, 3, 0, 0],
    [2, 0, 0, 2, 5, 0, 7, 0, 0],
    [0, 2, 0, 3, 0, 0, 0, 0, 4],
    [2, 9, 4, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 7, 3, 0, 5, 6, 0],
    [7, 0, 5, 0, 6, 0, 4, 0, 0],
    [0, 0, 7, 8, 0, 3, 9, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 0, 3],
    [3, 0, 0, 0, 0, 0, 0, 0, 2]
]

def block_correct(sudoku: list, row_no: int, column_no: int):
    correct = [1, 2, 3, 4, 5, 6, 7, 8, 9]  # this is the list I compare to
    n = 0  # counter for rows
    b = 0  # counter for columns
    check_sq = []  # list I use to compare to correct sudoku list
    col_noin = column_no  # required to remember 3x3 block
    while True:  # loop for columns
        if b == 3:
            break
        while True:  # loop for rows
            if n == 3:
                break
            check_sq.append(sudoku[row_no][column_no])  # this adds entries into checking list
            column_no += 1
            n += 1
        b += 1
        row_no += 1
        column_no = col_noin
        n = 0
    if bool(set(correct).difference(check_sq)) == False:  # this determines if loops are the same
        return True
    else:
        return False

print(block_correct(sudoku, 2, 2))
</code></pre>
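<p>One hypothesis worth checking (an assumption about the grader, not a fact from the question): graders often pass the coordinates of an arbitrary cell inside a block rather than its top-left corner, which would produce exactly a passes-locally/fails-remotely mismatch. A compact sketch that normalises the corner first:</p>

```python
def block_correct_compact(sudoku, row_no, col_no):
    # Snap (row_no, col_no) to the top-left corner of its 3x3 block, so the
    # function works whether it receives a corner or any cell in the block.
    r0, c0 = row_no - row_no % 3, col_no - col_no % 3
    block = [sudoku[r][c] for r in range(r0, r0 + 3) for c in range(c0, c0 + 3)]
    # True only when the block contains each digit 1-9 exactly once.
    return set(block) == set(range(1, 10))
```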
|
<python><sudoku>
|
2024-10-31 15:29:48
| 1
| 309
|
user3042850
|
79,145,287
| 5,002,668
|
Sorting array along two dimensions
|
<p>Suppose you have data in form of a numpy array of shape <code>(100,3)</code>. The number 100 represents a number of sensors, for each sensor you have:</p>
<ul>
<li>x-coordinate (<code>data[:, 0]</code>)</li>
<li>y-coordinate (<code>data[:, 1]</code>)</li>
<li>reading, e.g. temperature (<code>data[:, 2]</code>)</li>
</ul>
<p>The sensors are not placed on a regular grid. How can I sort the data array such that the x- and y- coordinates are both sequentially sorted? Here is an example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
data = np.random.uniform(-1, 1, size=(100, 3))
x_pos = data[:, 0]
y_pos = data[:, 1]
temperature = data[:, 2]
sorted_ind = np.lexsort((y_pos, x_pos))
sorted_data = data[sorted_ind]
plt.plot(sorted_data[:, 0]) #plot x positions
plt.plot(sorted_data[:, 1]) #plot y positions
</code></pre>
<p>From my understanding, both coordinates should now be in ascending order but this is only true for the x position.</p>
<p>EDIT: The goal is to display the data on a grid (which does not have to be regular but sorted correctly). So imagine the temperature sensors are almost evenly distributed somewhere over a square area. But if I read out the exact gps positions, it will not be a perfectly regular grid.</p>
<pre><code>img = data.reshape(10,10,3)
plt.imshow(img[:, :, 2]) #should display the temperature correcly positioned on a grid
</code></pre>
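<p>A detail worth noting: <code>np.lexsort</code> treats the <em>last</em> key as the primary key, so <code>np.lexsort((y_pos, x_pos))</code> sorts by x and only breaks ties by y; with continuous random coordinates there are essentially no ties, so y stays unsorted. For the almost-regular-grid case in the edit, one sketch (assuming a 10-by-10 layout) is to bin by y first, then sort each row by x:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, size=(100, 3))

# Sort all sensors by y, split into 10 "rows" of 10, then sort each row by x.
by_y = data[np.argsort(data[:, 1])]
rows = by_y.reshape(10, 10, 3)
order = np.argsort(rows[:, :, 0], axis=1)
grid = np.take_along_axis(rows, order[:, :, None], axis=1)
# grid[:, :, 2] is the temperature arranged on the approximate grid,
# usable with plt.imshow.
```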
|
<python><arrays><sorting>
|
2024-10-31 15:17:31
| 0
| 1,845
|
a.smiet
|
79,145,275
| 843,458
|
python filename fails due to escape character
|
<p>I would like to read a file. The string should be converted to a valid filename, but the conversion fails on its input.</p>
<pre><code>xlsfile = base64.urlsafe_b64encode('C:\Users\matth\OneDrive\Dokumente\20241023-1904097928-umsatz.xlsx')
df = pd.read_excel(xlsfile)
</code></pre>
<p>The problem is the sequence "\20".</p>
<p>How can I ensure that the file is read?</p>
<p>Error message is: <code>(unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape</code></p>
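<p>The error happens at parse time, before <code>base64</code> or pandas ever run: in a normal string literal, <code>\U</code> starts a unicode escape. A sketch of the usual fixes (the <code>pd.read_excel</code> call is left commented out since the file only exists on the asker's machine):</p>

```python
# A raw string stops Python from treating "\U...", "\2..." as escape codes;
# forward slashes or doubled backslashes work just as well.
path = r"C:\Users\matth\OneDrive\Dokumente\20241023-1904097928-umsatz.xlsx"
same = "C:/Users/matth/OneDrive/Dokumente/20241023-1904097928-umsatz.xlsx"

# df = pd.read_excel(path)  # pass the path directly; no base64 step needed
```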
|
<python><string><unicode>
|
2024-10-31 15:13:54
| 0
| 3,516
|
Matthias Pospiech
|
79,145,258
| 6,930,340
|
Polars pl.when().then().otherwise() in conjunction with first row of group_by object
|
<p>I have a <code>pl.DataFrame</code> with some columns: <code>level_0</code>, <code>symbol</code>, <code>signal</code>, and a <code>trade</code>. The <code>trade</code> column simply indicates whether to buy or sell the respective <code>symbol</code> ("A" and "B"). It's computed <code>over("level_0", "symbol")</code>.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

pl.Config(tbl_rows=16)

df = pl.DataFrame(
    {
        "level_0": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],
        "symbol": ["A", "A", "A", "A", "B", "B", "B", "B", "A", "A", "A", "A", "B", "B", "B", "B"],
        "signal": [1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0],
    }
).with_columns(
    pl.col("signal")
    .diff()
    .replace(old=0, new=None)
    .over("level_0", "symbol")
    .alias("trade")
)
</code></pre>
<pre><code>shape: (16, 4)
┌─────────┬────────┬────────┬───────┐
│ level_0 ┆ symbol ┆ signal ┆ trade │
│ ---     ┆ ---    ┆ ---    ┆ ---   │
│ i64     ┆ str    ┆ i64    ┆ i64   │
╞═════════╪════════╪════════╪═══════╡
│ 0       ┆ A      ┆ 1      ┆ null  │
│ 0       ┆ A      ┆ 0      ┆ -1    │
│ 0       ┆ A      ┆ 1      ┆ 1     │
│ 0       ┆ A      ┆ 1      ┆ null  │
│ 0       ┆ B      ┆ 0      ┆ null  │
│ 0       ┆ B      ┆ 1      ┆ 1     │
│ 0       ┆ B      ┆ 1      ┆ null  │
│ 0       ┆ B      ┆ 0      ┆ -1    │
│ 1       ┆ A      ┆ 0      ┆ null  │
│ 1       ┆ A      ┆ 0      ┆ null  │
│ 1       ┆ A      ┆ 0      ┆ null  │
│ 1       ┆ A      ┆ 1      ┆ 1     │
│ 1       ┆ B      ┆ 1      ┆ null  │
│ 1       ┆ B      ┆ 1      ┆ null  │
│ 1       ┆ B      ┆ 0      ┆ -1    │
│ 1       ┆ B      ┆ 0      ┆ null  │
└─────────┴────────┴────────┴───────┘
</code></pre>
<p>So far, so good. The only thing is that the first row of each group (<code>["level_0", "symbol"]</code>) isn't correct. I would like to change the <code>null</code> values in the <code>trade</code> column according to the following rule:</p>
<ul>
<li>If the <code>signal</code> column holds a value different from 0 in the respective first row of each group, this value should be copied to the respective first row of each group of the <code>trade</code> column.</li>
<li>If the <code>signal</code> column holds a value equal to 0 in the respective first row of each group, the value of the respective first row of each group of the <code>trade</code> column stays unchanged.</li>
</ul>
<p>To put it differently, I am looking to change the <code>null</code> values in the first row of each group in the trade column whenever there is a value in the <code>signal</code> column that is different from zero.</p>
<p>Here's what I'm looking for:</p>
<pre><code>shape: (16, 4)
┌─────────┬────────┬────────┬───────┐
│ level_0 ┆ symbol ┆ signal ┆ trade │
│ ---     ┆ ---    ┆ ---    ┆ ---   │
│ i64     ┆ str    ┆ i64    ┆ i64   │
╞═════════╪════════╪════════╪═══════╡
│ 0       ┆ A      ┆ 1      ┆ 1     │ <-- copied value from signal column
│ 0       ┆ A      ┆ 0      ┆ -1    │
│ 0       ┆ A      ┆ 1      ┆ 1     │
│ 0       ┆ A      ┆ 1      ┆ null  │
│ 0       ┆ B      ┆ 0      ┆ null  │ <-- value stays unchanged
│ 0       ┆ B      ┆ 1      ┆ 1     │
│ 0       ┆ B      ┆ 1      ┆ null  │
│ 0       ┆ B      ┆ 0      ┆ -1    │
│ 1       ┆ A      ┆ 0      ┆ null  │ <-- value stays unchanged
│ 1       ┆ A      ┆ 0      ┆ null  │
│ 1       ┆ A      ┆ 0      ┆ null  │
│ 1       ┆ A      ┆ 1      ┆ 1     │
│ 1       ┆ B      ┆ 1      ┆ 1     │ <-- copied value from signal column
│ 1       ┆ B      ┆ 1      ┆ null  │
│ 1       ┆ B      ┆ 0      ┆ -1    │
│ 1       ┆ B      ┆ 0      ┆ null  │
└─────────┴────────┴────────┴───────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2024-10-31 15:09:47
| 1
| 5,167
|
Andi
|
79,145,092
| 4,706,711
|
How can I match each returned embedding with the text I passed in, so I can save them into a DB?
|
<p>I made this script that reads the text from PDFs and, for each paragraph, calculates the embeddings using the Cohere embeddings API:</p>
<pre><code>import os
import cohere
import time
from pypdf import PdfReader
from dotenv import load_dotenv
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

load_dotenv()

docsFolder = './docs'

def getTextFromPDF(fileName):
    text = ""
    reader = PdfReader(fileName)
    for page in reader.pages:
        text += page.extract_text() + "\n"
    return text

def getPhrases(docsFolder):
    phrases = []
    with os.scandir(docsFolder) as it:
        for entry in it:
            if not entry.name.startswith('.') and entry.is_file():
                text = getTextFromPDF(docsFolder + "/" + entry.name)
                passages = [p.strip() for p in text.split("\n\n") if p.strip()]
                phrases.extend(passages)
    return phrases

start = time.perf_counter()
phrases = getPhrases(docsFolder)
end = time.perf_counter()
print("Passage Extraction time " + str(end - start) + " seconds")

co = cohere.ClientV2(api_key=os.getenv("COHERE_KEY"))

start = time.perf_counter()
res = co.embed(texts=phrases, model="embed-multilingual-v3.0", input_type="search_document", embedding_types=['float'])
end = time.perf_counter()
print("Embeddings generation time: " + str(end - start) + " seconds")

print(len(res.texts), len(res.embeddings.float), len(phrases))
# Save results here
</code></pre>
<p>What I want to do is map each input text to the embedding returned by the Cohere API so I can save them, the goal being to reuse them later rather than re-calculate them.</p>
<p>What I find hard is determining whether the items of <code>res.embeddings.float</code> are in the same order as the <code>phrases</code> given as input.</p>
|
<python>
|
2024-10-31 14:16:19
| 1
| 10,444
|
Dimitrios Desyllas
|
79,144,977
| 18,618,577
|
Compare dataframe with list
|
<p>I have a script that tries to reach some routers in the field (from a list in a CSV), then writes a result CSV file. The script has an early condition that checks whether an existing "result.csv" file is present, looks up which routers are "offline" in it, and builds a list from that.</p>
<p>Now I'm trying to compare the first column of a DataFrame containing all the routers (133 rows) with a list that contains the name (string) of every router that failed to respond in the first place.</p>
<p>Here an example of the router list :</p>
<pre><code>     sta  pwd
0  allev  aer
1   alpe   qs
2   arg1  gre
3   arg2  dfg
4   argg  yui
</code></pre>
<p>(The field "sta" is the router name and "pwd" its password.)</p>
<p>Then a example of the "toreach" list made from the result file created on the first pass :</p>
<pre><code>['arg1',
'argg']
</code></pre>
<p>Routers that are marked as "offline" in the resultat.csv file are reported as "toreach" and appended to the list above. From there I want to compare the router list and update it by deleting the names that are not in the "toreach" list.</p>
<p>Can I use a list to compare with a DataFrame? Should I make a "fake" DataFrame from the list?</p>
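<p>A plain list can in fact be compared against a DataFrame column directly with <code>isin</code>, no "fake" DataFrame needed. A sketch with the sample data above:</p>

```python
import pandas as pd

routers = pd.DataFrame({
    "sta": ["allev", "alpe", "arg1", "arg2", "argg"],
    "pwd": ["aer", "qs", "gre", "dfg", "yui"],
})
toreach = ["arg1", "argg"]

# Keep only the rows whose router name appears in the list.
still_offline = routers[routers["sta"].isin(toreach)]
```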
|
<python><pandas><list>
|
2024-10-31 13:44:34
| 0
| 305
|
BenjiBoy
|
79,144,768
| 11,155,419
|
How to decode a protobuf Firestore event payload
|
<p>I have a Cloud Run Function, that gets triggered on <code>google.cloud.firestore.document.v1.written</code> Firestore event, as outlined below:</p>
<pre class="lang-py prettyprint-override"><code>from cloudevents.http import CloudEvent
import functions_framework
from google.events.cloud import firestore
from google.protobuf.json_format import MessageToDict

@functions_framework.cloud_event
def hello_firestore(cloud_event: CloudEvent) -> None:
    """Triggered by a change to a Firestore document.

    Args:
        cloud_event: cloud event with information on the firestore event trigger
    """
    firestore_payload = firestore.DocumentEventData()
    firestore_payload._pb.ParseFromString(cloud_event.data)

    # Convert protobuf to dictionary
    old_value_dict = MessageToDict(firestore_payload.old_value._pb) if firestore_payload.old_value else {}
    new_value_dict = MessageToDict(firestore_payload.value._pb) if firestore_payload.value else {}

    print(new_value_dict)
    print(old_value_dict)
</code></pre>
<p>However, the output seems to be in this format:</p>
<pre><code>{
    "name": "projects/my-gcp-project/databases/mydb/documents/mycollection/e2767cc0-2e04-452d-9b7b-fc170e2128ec",
    "fields": {
        "created_at": {"stringValue": "2024-10-31T11:09:34.760557Z"},
        "session_id": {"stringValue": "e2767cc0-2e04-452d-9b7b-fc170e2128ec"},
        "user_profile": {
            "mapValue": {
                "fields": {
                    "age": {"integerValue": "36"},
                    "name": {"stringValue": "Andrew"},
                    "middle_name": {"nullValue": None}
                }
            }
        },
        ....
    "createTime": "2024-10-31T11:09:34.802575Z",
    "updateTime": "2024-10-31T12:13:36.154031Z",
}
</code></pre>
<p>How can I programmatically decode the message in order to convert everything to Python native types, in the following format:</p>
<pre><code>{
    "created_at": "2024-10-31T11:09:34.760557Z",
    "session_id": "e2767cc0-2e04-452d-9b7b-fc170e2128ec",
    "user_profile": {
        "age": 36,
        "name": "Andrew",
        "middle_name": None,
    }
}
</code></pre>
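<p>One way is a small recursive walk over the <code>fields</code> map, dispatching on the type tag of each value. This is a sketch written against the payload shape shown above, not an official helper; field kinds beyond those in the sample (arrays, booleans, doubles) are assumptions based on the same naming pattern:</p>

```python
def decode_value(v):
    """Unwrap one Firestore typed value, e.g. {"integerValue": "36"} -> 36."""
    (kind, raw), = v.items()  # each typed value is a single-key dict
    if kind == "integerValue":
        return int(raw)       # integers arrive as strings on the wire
    if kind == "nullValue":
        return None
    if kind == "mapValue":
        return {k: decode_value(f) for k, f in raw.get("fields", {}).items()}
    if kind == "arrayValue":
        return [decode_value(x) for x in raw.get("values", [])]
    return raw  # stringValue, booleanValue, doubleValue, timestampValue, ...

def decode_fields(document):
    """Flatten a {"name": ..., "fields": {...}} document into native types."""
    return {k: decode_value(v) for k, v in document.get("fields", {}).items()}
```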
|
<python><google-cloud-firestore><google-cloud-functions><protocol-buffers>
|
2024-10-31 12:35:27
| 0
| 843
|
Tokyo
|
79,144,738
| 7,975,962
|
Polars arithmetic operation using a boolean mask and different sized array
|
<p>I tried hard with ChatGPT but couldn't make this work, and it couldn't solve it either. How am I supposed to assign or increment a set of values using a boolean mask? This is what I was doing with pandas.</p>
<pre class="lang-py prettyprint-override"><code>validation_predictions = model.predict(validation_dataset)
df.loc[validation_mask, 'prediction'] += validation_predictions / len(seeds)
</code></pre>
<p><code>validation_mask</code> has <strong>N</strong> number of <code>True</code> and <code>validation_predictions</code> is a <code>numpy</code> array of size <strong>N</strong>, so an assignment like this works fine. However I couldn't achieve the same thing with polars.</p>
<p>I tried a when/then/otherwise chain, but it throws an error since the <code>validation_predictions</code> size doesn't match the entire dataframe's size.</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(
    pl.when(validation_mask)
    .then(pl.col('prediction') + (validation_predictions / len(seeds)))
    .otherwise(pl.col('prediction'))
    .alias('prediction')
)
# ShapeError: cannot evaluate two Series of different lengths (6 and 5)
</code></pre>
<p>For reproducibility purposes:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import numpy as np

seeds = [42, 1337, 0]

df = pl.DataFrame({
    "some_column": [10, 20, 30, 40, 50, 60],
    "prediction": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
})

validation_mask = df["some_column"] > 15
validation_predictions = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
</code></pre>
<p>Expected result:</p>
<pre><code>shape: (6, 2)
┌─────────────┬────────────┐
│ some_column ┆ prediction │
│ ---         ┆ ---        │
│ i64         ┆ f64        │
╞═════════════╪════════════╡
│ 10          ┆ 1.0        │
│ 20          ┆ 2.166667   │
│ 30          ┆ 3.5        │
│ 40          ┆ 4.833333   │
│ 50          ┆ 6.166667   │
│ 60          ┆ 7.5        │
└─────────────┴────────────┘
</code></pre>
|
<python><python-polars>
|
2024-10-31 12:26:22
| 3
| 974
|
gunesevitan
|
79,144,702
| 7,257,731
|
Plotly Choropleth - How to add outline to shape from geojson
|
<p>I want to draw an outline highlighting a shape in a Choropleth map drawn from a GeoJSON using Plotly (in my case over Dash).</p>
<p>I have a dataframe "df" like this one:</p>
<pre><code>state (string) value (float)
STATE_1 50.0
STATE_2 30.5
STATE_3 66.2
</code></pre>
<p>I have also a GeoJSON loaded as an object with this structure:</p>
<pre><code>{
    'type': 'FeatureCollection',
    'features': [
        {
            'type': 'Feature',
            'id': 1,
            'properties': {
                'id': 'STATE_1',
            },
            'geometry': {'type': 'Polygon', 'coordinates': [...]},
        },
        ...
    ]
}
</code></pre>
<p>I've drawn my base map like this:</p>
<pre><code>fig = px.choropleth(
    df,
    geojson=my_geojson,
    color='value',
    locations='state',
    featureidkey='properties.id',
    projection='Mercator',
)
fig.update_geos(
    fitbounds='locations',
    visible=False,
)
fig.update_layout(
    coloraxis=dict(
        colorbar=dict(
            title='',
            orientation='h',
            x=0.5,
            y=0.1,
            xanchor='center',
            yanchor='top',
        ),
        colorscale=[[0, '#04cc9a'], [1, '#5259ad']],
    ),
    margin={"r": 0, "t": 0, "l": 0, "b": 0},
    dragmode=False,
)
</code></pre>
<p>I've tried, unsuccessfully, to add another trace to show the outline of one of the states, both with Plotly Express and with Plotly graph objects.</p>
<p>How should I do it?</p>
|
<python><plotly><plotly-dash><choropleth>
|
2024-10-31 12:18:13
| 1
| 392
|
Samuel O.D.
|
79,144,692
| 11,633,074
|
remove bash colors when logging to file in flask
|
<p>I'm trying to modify the Flask request formatter so I get some logs on the console, but I want the logs in my file to be colorless, since the ANSI escape characters make the log harder to read.</p>
<p>Here is what I have done until now, I made a class to setup my logs:</p>
<pre class="lang-py prettyprint-override"><code>from logging.config import dictConfig
from logging import Formatter
import re

from flask import has_request_context, request


def configure_logging():
    """
    Configure the logging system
    """
    dictConfig({
        'version': 1,
        'formatters': {
            'common': {
                'format': '%(asctime)s - %(levelname)s - %(message)s',
                'datefmt': '%Y-%m-%d %H:%M:%S',
                '()': 'utils.log_utils.CustomFormatter'
            },
            'error': {
                'format': '%(asctime)s - %(levelname)s - %(module)s - %(lineno)d - %(message)s',
                'datefmt': '%Y-%m-%d %H:%M:%S',
                '()': 'utils.log_utils.CustomFormatter'
            }
        },
        'filters': {
            'common': {
                '()': 'utils.log_utils.CommonFilter'
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
            },
            'common': {
                'class': 'logging.handlers.TimedRotatingFileHandler',
                'formatter': 'common',
                'level': 'DEBUG',
                'filters': ['common'],
                'filename': 'logs/app.log',
                'encoding': 'utf-8',
                'when': 'W0',
                'backupCount': 3
            },
            'error': {
                'class': 'logging.handlers.TimedRotatingFileHandler',
                'formatter': 'error',
                'level': 'ERROR',
                'filename': 'logs/app.log',
                'encoding': 'utf-8',
                'when': 'W0',
                'backupCount': 3
            }
        },
        'root': {
            'handlers': ['console', 'common', 'error'],
            'level': 'DEBUG'
        }
    })


class CustomFormatter(Formatter):
    """
    Custom formatter to remove ANSI escape sequences from log messages
    """
    def format(self, record):
        # Remove ANSI escape sequences
        record.msg = re.sub(r'\x1b\[[0-?]*[ -/]*[@-~]', '', record.msg)
        if has_request_context():
            record.url = re.sub(r'\x1b\[[0-?]*[ -/]*[@-~]', '', record.url)
            record.remote_addr = re.sub(r'\x1b\[[0-?]*[ -/]*[@-~]', '', record.remote_addr)
            record.method = re.sub(r'\x1b\[[0-?]*[ -/]*[@-~]', '', record.method)
            record.user_agent = re.sub(r'\x1b\[[0-?]*[ -/]*[@-~]', '', record.user_agent)
        return super().format(record)


class CommonFilter:
    """
    Filter to exclude log errors warning and critical
    """
    def filter(self, record):
        return record.levelno < 30
</code></pre>
<p>When I check my log file, I see that most of the color characters disappeared but when I get a request log like this:</p>
<p><a href="https://i.sstatic.net/51TiLNwH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51TiLNwH.png" alt="enter image description here" /></a></p>
<p>I have a log input looking like:</p>
<p><code>2024-10-31 12:00:34 - INFO - 172.18.0.1 - - [31/Oct/2024 12:00:34] "[33mPOST /webhook/2 HTTP/1.1[0m" 404 -</code></p>
<p>It looks like an already-formatted log line is being written into my log, judging by the two timestamps, but I can't figure out how to fix that and get two distinct logging formats.</p>
|
<python><flask><logging><werkzeug>
|
2024-10-31 12:15:23
| 1
| 436
|
lolozen
|
79,144,498
| 18,554,284
|
Get Icon for any File/Exe on Windows using Python
|
<p>I tried to find a few answers but only found this exact question on <em>Stack Overflow</em> without any proper answer. I am trying to build an alternate shell to replace the default desktop, but it wouldn't look good without icons. I tried several approaches, such as locating the target files of the shortcuts on the desktop; for the ones that target an <code>.exe</code>, I can get the icons using the <code>icoextract</code> module for Python, but for other generic files, such as a Word document or simply a folder, I am unable to get their icons. The desktop also includes apps installed from the Microsoft Store, which don't directly link to an <code>.exe</code>. Some answers suggested using the registry but did not clearly specify how to do so from Python. This <a href="https://github.com/KillerRebooted/Alternate-Window-Shell" rel="nofollow noreferrer">GitHub repository</a> includes <code>Desktop Dependency</code> and <code>Desktop Dependency 2</code>, which contain my attempts so far. So, is there a module or some other way to get the icon for any file type by simply providing its absolute path? (P.S.: <code>customtkinter</code> is used for the GUI temporarily, for reference purposes only; I plan on using PyQt.)</p>
|
<python><windows><icons>
|
2024-10-31 11:18:14
| 1
| 789
|
KillerRebooted
|
79,144,456
| 14,743,705
|
Is there a more elegant way to change the structure of my dictionary data?
|
<p>I made this function because I needed to get data in a different order. It works, but I'm wondering if there is a better way.</p>
<pre><code>def transform_data(self, data):
    """
    before: data[subject][period][zodiak] = content
    after : data[zodiak][period][subject] = content
    """
    result = {}
    for subject, subject_data in data.items():
        for period, period_data in subject_data.items():
            for zodiak, content in period_data.items():
                if zodiak not in result:
                    result[zodiak] = {}
                if period not in result[zodiak]:
                    result[zodiak][period] = {}
                if subject not in result[zodiak][period]:
                    result[zodiak][period][subject] = {}
                result[zodiak][period][subject] = content
    return result
</code></pre>
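<p>One common condensation uses <code>defaultdict</code>, which removes the membership checks entirely (a standalone sketch, with the <code>self</code> argument dropped):</p>

```python
from collections import defaultdict

def transform_data(data):
    """Reorder data[subject][period][zodiak] into result[zodiak][period][subject]."""
    result = defaultdict(lambda: defaultdict(dict))
    for subject, periods in data.items():
        for period, zodiaks in periods.items():
            for zodiak, content in zodiaks.items():
                result[zodiak][period][subject] = content
    # Convert back to plain dicts so callers see ordinary nested dictionaries.
    return {z: dict(p) for z, p in result.items()}
```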
|
<python><dictionary><transformation>
|
2024-10-31 11:02:58
| 0
| 305
|
Harm
|
79,144,304
| 13,147,413
|
How to interpret "unresolved exception with suggestion" in Pyspark
|
<p>I cannot understand this <code>unresolved exception</code> error:</p>
<pre><code>AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `1_1473_C17_18979` cannot be resolved.
</code></pre>
<p>thrown by PySpark when I apply this thresholding mechanism:</p>
<pre><code>threshold = df.where('step = 4') \
    .groupBy('ID') \
    .agg(F.percentile_approx('event_time', 0.25)) \
    .collect()
threshold = [(r[0], r[1]) for r in threshold]

whereStmt = 'step=1 or step=2 or step=3'
for r in threshold:
    whereStmt = whereStmt + f' or (step=4 and ID={r[0]} and event_time<={r[1]})'

df.where(F.expr(whereStmt))
</code></pre>
<p><code>step</code> and <code>event_time</code> are other columns names.</p>
<p>The string <code>1_1473_C17_18979</code> belongs to the <code>ID</code> column, so I'm not entirely free to modify those values, unless the transformations are reversible.</p>
<p>At the moment my cleaning routine looks like this:</p>
<pre class="lang-py prettyprint-override"><code>tempList = []

# Replacing dashes, dots and spaces with underscores in column names
for col in df.columns:
    new_name = col.strip()
    new_name = "".join(new_name.split())
    new_name = new_name.replace('-', '_')
    new_name = new_name.replace('.', '_')
    new_name = new_name.replace(' ', '_')
    tempList.append(new_name)
df = df.toDF(*tempList)

# Cast values to string, just to be sure
df.withColumn('ID', df['ID'].cast('string'))

# Replacing dashes, dots and spaces in the column to which the value that
# appears to generate the error belongs
df = df.withColumn('ID', regexp_replace('ID', '-', '_'))
df = df.withColumn('ID', regexp_replace('ID', '.', '_'))
df = df.withColumn('ID', regexp_replace('ID', ' ', ''))
</code></pre>
<p>I tried:</p>
<ul>
<li>catching dashes, dots and spaces in column names, using code from <a href="https://stackoverflow.com/questions/55453101/pyspark-error-analysisexception-cannot-resolve-column-name">this</a> thread (not effective);</li>
<li>replacing dashes and spaces with underscores in the values of the 'ID' column.</li>
</ul>
<p>The above error appears after those transformations.</p>
<p>Without applying the cleaning routine I get this different error:</p>
<pre><code>AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `1473_C17` cannot be resolved.
</code></pre>
<p>And I'm pretty sure, looking at this part of the same error message:</p>
<p><code>Filter (((((((((((step#152176 = 1) OR (step#152176 = 2)) OR (step#152176 = 3)) OR (((((step#152176 = 4) AND (ID#152204 = ((1 - '1473_C17) - 17130)))</code></p>
<p>that the confusion is due to the dashes; that's why I replaced them, etc.
It would also be very useful to know the position of the value generating the error.</p>
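<p>Worth noting: the names Spark reports as unresolved (<code>1_1473_C17_18979</code>, <code>1473_C17</code>) are exactly the interpolated ID values, which suggests the f-string inserts them as bare identifiers or arithmetic expressions rather than string literals, so cleaning the data itself cannot help. A sketch of quoting the value in the generated SQL instead (stand-in data; the Spark call is commented out since it needs a live session):</p>

```python
# Hypothetical stand-in for the collected (ID, threshold) pairs.
threshold = [("1_1473_C17_18979", 42.0)]

where_stmt = "step=1 or step=2 or step=3"
for rid, t in threshold:
    rid_sql = rid.replace("'", "''")  # escape embedded single quotes
    where_stmt += f" or (step=4 and ID='{rid_sql}' and event_time<={t})"

# df.where(F.expr(where_stmt))
```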
|
<python><pyspark>
|
2024-10-31 10:14:13
| 0
| 881
|
Alessandro Togni
|
79,144,282
| 201,657
|
When using pytest-bdd, how do I define a dataset that should be operated on by a scenario?
|
<p>I am familiar with behave in which I can define data for a step like so:</p>
<hr />
<pre class="lang-none prettyprint-override"><code>Feature: My feature
Scenario: My scenario
Given some data
| order_nr | customer_name | delivery_date |
| 1 | John Doe | 2023-10-01 |
| 2 | Jane Smith | 2023-10-02 |
</code></pre>
<blockquote>
<p>The table is available to the Python step code as the ".table" attribute in the <code>Context</code> variable passed into each step function. The table for the example above could be accessed like so:</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>@given('a set of specific users')
def step_impl(context):
for row in context.table:
model.add_user(name=row['name'], department=row['department'])
</code></pre>
<p><em><a href="https://behave.readthedocs.io/en/latest/tutorial/#step-data" rel="nofollow noreferrer">behave tutorial > Step Data</a></em></p>
<hr />
<p>I like having the data visible to the person reading the feature file; however, I'm using pytest-bdd and would like to achieve the same there.</p>
<p>pytest-bdd, however, does not have an equivalent of the <code>Context</code> variable, so I don't know how to achieve this. Is there a way to pass data from the feature file to a step definition when using pytest-bdd?</p>
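<p>For comparison, behave's table handling can be approximated by hand: the sketch below (pure stdlib; the function name is illustrative, not a pytest-bdd API) parses a Gherkin-style table string into dict rows, which a step implementation could then consume:</p>
<pre class="lang-py prettyprint-override"><code>def parse_gherkin_table(text):
    """Parse a Gherkin-style table into a list of dicts (first row is the header)."""
    rows = [
        [cell.strip() for cell in line.strip().strip('|').split('|')]
        for line in text.strip().splitlines()
    ]
    header, data = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in data]

table = """
| order_nr | customer_name | delivery_date |
| 1        | John Doe      | 2023-10-01    |
| 2        | Jane Smith    | 2023-10-02    |
"""
assert parse_gherkin_table(table)[0]['customer_name'] == 'John Doe'
</code></pre>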
|
<python><pytest-bdd>
|
2024-10-31 10:08:29
| 1
| 12,662
|
jamiet
|
79,144,183
| 489,010
|
Is there a way to capture name of group for a `group_by` in polars?
|
<p>Is there a way to get the name of a current group for any given iteration of a <code>group_by</code> instance without having to rely to the <code>GroupBy.__iter__()</code>?</p>
<pre class="lang-py prettyprint-override"><code>pl.Config().set_fmt_str_lengths(100)
df = pl.DataFrame({
'fruit':['apple', 'pear', 'peach', 'pear'],
'origin':['mountain', 'valley', 'beach', 'beach'],
'color':['red', 'green', 'yellow', 'green']
})
def udf(df_group, something="arg def"):
something = something + " gets modified in the udf"
return df_group.with_columns(pl.lit(something).alias('from_udf'))
print(df.group_by('origin').map_groups(udf))
shape: (4, 4)
βββββββββ¬βββββββββββ¬βββββββββ¬βββββββββββββββββββββββββββββββββββ
β fruit β origin β color β from_udf β
β --- β --- β --- β --- β
β str β str β str β str β
βββββββββͺβββββββββββͺβββββββββͺβββββββββββββββββββββββββββββββββββ‘
β pear β valley β green β arg def gets modified in the udf β
β apple β mountain β red β arg def gets modified in the udf β
β peach β beach β yellow β arg def gets modified in the udf β
β pear β beach β green β arg def gets modified in the udf β
βββββββββ΄βββββββββββ΄βββββββββ΄βββββββββββββββββββββββββββββββββββ
# Desired functionality
print(df.group_by('origin').map_groups(udf, something=__name_of_group__))
shape: (4, 4)
βββββββββ¬βββββββββββ¬βββββββββ¬βββββββββββββββββββββββββββββββββββ
β fruit β origin β color β from_udf β
β --- β --- β --- β --- β
β str β str β str β str β
βββββββββͺβββββββββββͺβββββββββͺβββββββββββββββββββββββββββββββββββ‘
β pear β valley β green β valley gets modified in the udf β
β apple β mountain β red β mountain gets modified in the udfβ
β peach β beach β yellow β beach gets modified in the udf β
β pear β beach β green β beach gets modified in the udf β
βββββββββ΄βββββββββββ΄βββββββββ΄βββββββββββββββββββββββββββββββββββ
</code></pre>
<p>The functionality can be achieved by manually iterating through the <code>.group_by()</code> object, but my intuition tells me that's a no-no for performance, since the iteration is delegated to Python instead of polars. In an attempt to prove it, here is a (hopefully) valid benchmark.</p>
<p>Native calling using default argument.</p>
<pre class="lang-py prettyprint-override"><code>%%timeit
df.group_by('origin').map_groups(udf)
308 Β΅s Β± 19.5 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>Manual iteration with default argument.</p>
<pre class="lang-py prettyprint-override"><code>%%timeit
pl.concat([ udf(data) for name, data in df.group_by(['origin'])])
473 Β΅s Β± 43.5 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>Manual iteration with passed argument.</p>
<pre class="lang-py prettyprint-override"><code>%%timeit
pl.concat([ udf(data, something=name[0])for name, data in df.group_by(['origin'])])
486 Β΅s Β± 33.8 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>The origin of this question is a more complicated problem in which I've written a polars plugin that optionally generates some QC reports on disk and having the name of the group is needed for differentiating the reports of each group.</p>
<pre class="lang-py prettyprint-override"><code># real application example (out of context)
(
df
.group_by('umi').agg(
rogtk.assemble_contig( # my plugin function
expr=pl.col("r2_seq"),
k=20,
min_coverage=20,
export_graphs=True,
prefix=??? # here is where I would like to pass an argument based on the group's definition
))
)
</code></pre>
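<p>For what it's worth, the manual-iteration pattern above only sees the group name because the Python-level loop passes it in explicitly; stripped of polars, the pattern is just this (pure-Python sketch, for illustration only):</p>
<pre class="lang-py prettyprint-override"><code>from itertools import groupby

rows = [
    {'fruit': 'apple', 'origin': 'mountain'},
    {'fruit': 'pear', 'origin': 'valley'},
    {'fruit': 'peach', 'origin': 'beach'},
    {'fruit': 'pear', 'origin': 'beach'},
]

def udf(group, something):
    # the group name is available only because the caller passed it in
    return [{**row, 'from_udf': something} for row in group]

out = []
for origin, group in groupby(sorted(rows, key=lambda r: r['origin']),
                             key=lambda r: r['origin']):
    out.extend(udf(list(group), something=origin))

assert {r['from_udf'] for r in out} == {'mountain', 'valley', 'beach'}
</code></pre>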
|
<python><python-polars><polars-plugins>
|
2024-10-31 09:40:29
| 1
| 5,026
|
pedrosaurio
|
79,143,954
| 3,649,629
|
Test suite to cover repository where all tests are run at once on python
|
<p>My tests run correctly when I run them one by one, but fail and interfere with each other when I run them as a test suite.</p>
<pre><code>/postgresql/asyncpg.py", line 789, in _handle_exception
raise translated_error from error
sqlalchemy.exc.InterfaceError: (sqlalchemy.dialects.postgresql.asyncpg.InterfaceError) <class 'asyncpg.exceptions._base.InterfaceError'>: cannot perform operation: another operation is in progress
</code></pre>
<p>I tried to simplify the scenario and wrote a repository which encapsulates CRUD operations on the database.</p>
<p>I wrote a test suite which covers this repository.</p>
<p>However, the first launch showed an exception which I haven't figured out yet.</p>
<pre><code>async def create_inventory(self, inventory_data: dict) -> None:
> async with self.database.get_async_session() as session:
E AttributeError: 'async_generator' object has no attribute 'get_async_session'
</code></pre>
<pre><code>@pytest.fixture
async def test_database() -> Database:
"""Fixture for setting up a test database."""
# Initialize the Database
db = Database()
# Create the database tables if needed
async with db.get_async_session() as session:
async with session.begin():
# Here you might want to create your tables for tests
await session.run_sync(Base.metadata.create_all)
yield db # This provides the db instance to the tests
# Teardown can go here if necessary
async with db.get_async_session() as session:
async with session.begin():
await session.run_sync(Base.metadata.drop_all)
@pytest.mark.asyncio
async def test_create_inventory(test_database: Database):
"""Test creating a new Inventory record."""
repo = InventoryRepository(test_database)
inventory_data = {
"time": 1234567890,
"flight": "FL456",
"departure": 9876543211,
"flight_booking_class": "Economy",
"idle_seats_count": 15
}
await repo.create_inventory(inventory_data)
new_inventory = await repo.get_inventory(
flight=inventory_data["flight"],
flight_booking_class=inventory_data["flight_booking_class"]
)
assert new_inventory.flight == inventory_data["flight"]
assert new_inventory.idle_seats_count == inventory_data["idle_seats_count"]
</code></pre>
<p>Repository and database layers:</p>
<pre><code>class Database:
def __init__(self) -> None:
# Database credentials
self.db_username = os.getenv('POSTGRES_USER', "myuser")
self.db_password = os.getenv('POSTGRES_PASSWORD', "mypassword")
self.db_name = os.getenv('POSTGRES_DB', "mydatabase")
self.db_host = os.getenv('POSTGRES_HOST')
self.db_port = os.getenv('POSTGRES_PORT', 5432)
# Ensure all necessary environment variables are set
if not all([self.db_username, self.db_password, self.db_name, self.db_host]):
raise ValueError("Missing one or more environment variables for source or sink database.")
# Database URLs
self.sqlalchemy_database_url = (
f"postgresql+asyncpg://{self.db_username}:{self.db_password}@"
f"{self.db_host}:{self.db_port}/{self.db_name}"
)
# Create async engines
self.engine = create_async_engine(self.sqlalchemy_database_url, echo=True)
# Async session makers for each database
self.AsyncSourceSessionLocal = async_scoped_session(
sessionmaker(
bind=self.engine,
class_=AsyncSession,
autocommit=False,
autoflush=False,
), scopefunc=current_task
)
async def get_async_session(self) -> AsyncGenerator[AsyncSession, None]:
return await self.AsyncSourceSessionLocal()
class InventoryRepository:
def __init__(self, database: Database) -> None:
self.database = database
async def create_inventory(self, inventory_data: dict) -> None:
async with self.database.get_async_session() as session:
async with session.begin():
stmt = insert(Inventory).values(**inventory_data)
await session.execute(stmt)
async def get_inventory(self, flight: str, flight_booking_class: str) -> Optional[Inventory]:
async with self.database.get_async_session() as session:
stmt = select(Inventory).where(
Inventory.flight == flight,
Inventory.flight_booking_class == flight_booking_class
)
result = await session.execute(stmt)
return result.scalar_one_or_none()
</code></pre>
<p>Could you please help me figure out how to overcome this issue, or give me a correct example of how to cover this scenario with tests that can run together as a test suite?</p>
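<p>A side note on the <code>AttributeError</code>: a fixture written as an <code>async</code> generator must be collected by pytest-asyncio (for instance via <code>@pytest_asyncio.fixture</code>, depending on your asyncio mode); with a plain <code>@pytest.fixture</code> the test receives the raw async-generator object, not the yielded <code>Database</code>. A minimal stdlib illustration of that failure mode (class names are illustrative):</p>
<pre class="lang-py prettyprint-override"><code>class Database:
    def get_async_session(self):
        return 'session'

async def broken_fixture():
    # mimics the fixture: an async generator function yielding the db
    db = Database()
    yield db

gen = broken_fixture()  # calling it only builds an async generator...
assert type(gen).__name__ == 'async_generator'
# ...which has none of Database's attributes, hence the AttributeError
assert not hasattr(gen, 'get_async_session')
</code></pre>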
|
<python><database><testing><pytest>
|
2024-10-31 08:28:05
| 1
| 7,089
|
Gleichmut
|
79,143,949
| 972,706
|
pycharm vs command line pytest
|
<p>When running my pytest test suite from PyCharm, all my tests pass.
When running from the command line, I get module import errors.</p>
<p><a href="https://i.sstatic.net/wjTSJCUY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wjTSJCUY.png" alt="folder structure" /></a></p>
<p>pytest.ini:</p>
<pre class="lang-ini prettyprint-override"><code>[pytest]
minversion = 7.0
addopts = --cov --cov-report=html:converter_coverage
pythonpath = src/converter
testpaths =
tests
</code></pre>
<p>sample test</p>
<pre class="lang-py prettyprint-override"><code>from converter.question_parser import readSingleFile, getAllQuestionsFromFiles
def test_ifTheLanguageIsDutch_andValidTextInputIsGiven_ThenAQuestionIsMade():
foundQuestions = readSingleFile("tests/resources/SingleMinimalQuestion_NL.txt", "NL")
assert len(foundQuestions) == 1
question = foundQuestions[0]
assert question.longQuestion == "This is the long question?"
assert question.shortQuestion == "This is the short question?"
assert question.answer == "This is the answer"
assert question.category == "This is the category"
assert question.round == "This is the round"
</code></pre>
<p>The error:</p>
<pre><code>_________________________________________________________________ ERROR collecting tests/test_question_parser.py __________________________________________________________________
ImportError while importing test module 'C:\Users\peter\Projects\kwismaster\converter\tests\test_question_parser.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
..\..\..\..\AppData\Local\Programs\Python\Python312\Lib\importlib\__init__.py:90: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
test_question_parser.py:1: in <module>
from converter.question_parser import readSingleFile, getAllQuestionsFromFiles
E ModuleNotFoundError: No module named 'converter'
</code></pre>
<p><a href="https://i.sstatic.net/820l6kUT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/820l6kUT.png" alt="pytest test settings" /></a></p>
<p>I've tried changing the path in pytest.ini, which did nothing.
If I change the import statement to include <code>src.</code> and the pythonpath to <code>. src src/converter</code>, the tests work from both the command line and PyCharm, but PyCharm flags import errors, which disables its click-through navigation.</p>
<p>Any solution that makes pytest work from the command line (for Jenkins, GitHub Actions, ...) as well as from PyCharm (for day-to-day development), with click-through navigation intact, is appreciated!</p>
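<p>One commonly suggested layout fix (a sketch, assuming the package really lives at <code>src/converter</code>) is to put <code>src</code> itself on <code>pythonpath</code>, so that <code>from converter.question_parser import ...</code> resolves the same way it does when PyCharm marks <code>src</code> as a sources root:</p>
<pre class="lang-ini prettyprint-override"><code>[pytest]
minversion = 7.0
addopts = --cov --cov-report=html:converter_coverage
# "src" (not "src/converter") on the path makes "converter" importable as a package
pythonpath = src
testpaths =
    tests
</code></pre>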
|
<python><pycharm><pytest>
|
2024-10-31 08:25:58
| 2
| 3,294
|
ShadowFlame
|
79,143,788
| 1,672,565
|
Parse Rust into AST with / for use in Python
|
<p>Usually it's the other way round, parsing Python with Rust, see <a href="https://github.com/RustPython/RustPython" rel="nofollow noreferrer">here</a> or <a href="https://github.com/astral-sh/ruff" rel="nofollow noreferrer">here</a> - in my case though I am looking for a way to <strong>parse Rust code with Python</strong> ideally into something like an AST that can be analyzed (ideally before any further compilation steps kick in). Specifically, I want to extract <code>enum</code> and <code>struct</code> definitions from an application written in Rust for unit-tests written in Python. Is there a standard way of doing this, perhaps through the Rust compiler?</p>
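<p>There is no single standard route, but short of a real parser (e.g. the tree-sitter Rust grammar driven from Python, or a small Rust helper built on <code>syn</code> that dumps the AST), a rough first pass at extracting <code>enum</code> and <code>struct</code> names can even be done with the stdlib. A naive sketch that ignores nesting, comments and macros:</p>
<pre class="lang-py prettyprint-override"><code>import re

RUST_SRC = """
pub struct Point { x: f64, y: f64 }
enum Direction { North, South, East, West }
"""

# naive: matches `struct Name` / `enum Name` declarations only
decls = re.findall(r'\b(struct|enum)\s+(\w+)', RUST_SRC)
assert decls == [('struct', 'Point'), ('enum', 'Direction')]
</code></pre>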
|
<python><rust><abstract-syntax-tree>
|
2024-10-31 07:28:32
| 1
| 3,789
|
s-m-e
|
79,143,531
| 956,424
|
Latency to be addressed in django for M2M relationship in ModelAdmin dropdown
|
<p>I have a group of mailboxes which needs to be populated based on the customer login and the domains they own. Customer:User is a 1:1 relationship.</p>
<p>Tried:</p>
<p>views.py:</p>
<pre><code>class MailboxAutocomplete(autocomplete.Select2QuerySetView):
def get_queryset(self):
if not self.request.user.is_authenticated:
return Mailbox.objects.none()
qs = Mailbox.objects.all()
# Check if the user is in the 'customers' group
if self.request.user.groups.filter(name='customers').exists():
print('customer login in mailbox autocomplete view.......')
# Filter based on the customer's email
qs = qs.filter(domain__customer__email=self.request.user.email).only('email')
elif self.request.user.groups.filter(name__in=['resellers']).exists():
# Filter based on the reseller's email
qs = qs.filter(domain__customer__reseller__email=self.request.user.email).only('email')
if self.q:
# Further filter based on user input (e.g., email matching)
qs = qs.filter(email__icontains=self.q)
print(qs.values('email'))
return qs
</code></pre>
<p>in the app's urls.py:</p>
<pre><code>path('mailbox-autocomplete/', views.MailboxAutocomplete.as_view(), name='mailbox-autocomplete'),
]
</code></pre>
<p>in models.py:</p>
<pre><code>class GroupMailIdsForm(forms.ModelForm):
class Meta:
model = GroupMailIds
fields = "__all__"
mailboxes = forms.ModelMultipleChoiceField(
queryset=Mailbox.objects.none(),
widget=autocomplete.ModelSelect2Multiple(url='mailmanager:mailbox-autocomplete')
)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.instance.pk: # Check if the instance is being updated
if self.request.user.groups.filter(name='customers').exists():
self.fields['mailboxes'].queryset = Mailbox.objects.filter(domain__customer__email=self.request.user.email)
elif self.request.user.groups.filter(name='resellers').exists():
self.fields['mailboxes'].queryset = Mailbox.objects.filter(domain__customer__reseller__email=self.request.user.email)
</code></pre>
<p>in admin.py:</p>
<pre><code>class GroupMailIdsAdmin(ImportExportModelAdmin):
resource_class = GroupMailIdsResource
ordering = ('address',)
filter_horizontal = ('mailboxes',)
</code></pre>
<p>and in settings.py:</p>
<pre><code>INSTALLED_APPS = [
'mailmanager.apps.MailmanagerConfig',
'admin_confirm',
'dal',
'dal_select2',
'django.contrib.admin',
'jquery',
]
</code></pre>
<p>with the other required Django apps.</p>
<ol>
<li>The autocomplete is not working.</li>
<li>Django version 4.2 used</li>
<li>django-autocomplete-light==3.11.0</li>
</ol>
<p>Is there something I am missing? I am trying to solve the latency issue of loading about 200k mailboxes in the dropdown.
Can I get the precise code to resolve this problem? I need to display the selected mailboxes and also make selection easy. I am not good with JS. I also tried filter_horizontal, which gives a very good display but causes latency due to the huge number of mailboxes.</p>
|
<python><django>
|
2024-10-31 04:43:44
| 1
| 1,619
|
user956424
|
79,143,398
| 802,494
|
Why won't the Python debugger invoked as a module automaticaly enter into post-mortem debugging?
|
<p>According to Python's official documentation at <a href="https://docs.python.org/3.12/library/pdb.html" rel="nofollow noreferrer">pdb</a>:</p>
<p><em>You can also invoke pdb from the command line to debug other scripts. For example:</em></p>
<p><code>python -m pdb myscript.py</code></p>
<p><em>When invoked as a module, pdb will automatically enter post-mortem debugging if the program being debugged exits abnormally. After post-mortem debugging (or after normal exit of the program), pdb will restart the program. Automatic restarting preserves pdbβs state (such as breakpoints) and in most cases is more useful than quitting the debugger upon programβs exit.</em></p>
<p>Yet I need to run <code>python -m pdb -c run myscript.py</code> or <code>python -m pdb -c continue myscript.py</code>, otherwise pdb just enters the script and breaks at the first noncomment line.</p>
<p>I tested on RHEL 8, RHEL 9 and Fedora 40 with Python 3.12 and on SLES 15 SP3 wit Python 3.6.</p>
|
<python><debugging><redhat><pdb>
|
2024-10-31 03:22:32
| 0
| 978
|
Spartan
|
79,143,171
| 14,954,932
|
Reading an xlsx Excel file into a pandas dataframe with openpyxl generates value error
|
<p>I have written code using Selenium to download an xlsx Excel file from a website, rename it and then open it with openpyxl. However, trying to read the file into a pandas DataFrame with openpyxl leads to an error.</p>
<p>The error is the following:</p>
<pre><code>ValueError: Unable to read workbook: could not read stylesheet from ./ETF/test/test.xlsx.This is most probably because the workbook source files contain some invalid XML.Please see the exception for more details.
</code></pre>
<p>The file can be opened manually without any problems.</p>
<p>My code looks like this:</p>
<pre><code>import os.path
import pandas as pd
import numpy as np
import time
import requests
from glob import glob
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
op = webdriver.ChromeOptions()
prefs = {"download.default_directory":os.path.abspath("./ETF/test")}
# Define download path
op.add_experimental_option("prefs", prefs)
# Google Chrome
PATH = r"./webdriver/chromedriver.exe"
#PATH = r"D:\Coding\Python\project1\webdriver\chromedriver.exe"
# Use Service to set the ChromeDriver path
service = Service(PATH)
driver = webdriver.Chrome(service=service, options=op)
wait = WebDriverWait(driver, 30)
# Download and rename file if it doesn't exist
driver.get('https://www.amundietf.de/de/professionell') # Be aware in recent times, server is often down
driver.maximize_window()
# Wait for the "Professioneller Anleger" button to be present and clickable
profanleger_button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//button[normalize-space()='Professioneller Anleger']")))
profanleger_button.click()
# Wait for the "Akzeptieren und fortfahren" button to be clickable and click it
akzept_button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "confirmDisclaimer")))
akzept_button.click()
# Wait for the "Akzeptieren und fortfahren" button to be clickable and click it
cookies_button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "CookiesDisclaimerRibbonV1-AllOn")))
cookies_button.click()
time.sleep(2)
# Check whether file exists
if os.path.isfile('ETF/test/' + "Fondszusammensetzung_Amundi*"):
print("File exists.")
else:
driver.get('https://www.amundietf.de/de/professionell/products/equity/amundi-index-msci-emerging-markets-ucits-etf-dr-d/lu1737652583')
# Download file
Komp_ETF_herunterladen_xpath = "//span[@class='pr-3 bold-text']"
# Wait until the download button is clickable
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, Komp_ETF_herunterladen_xpath)))
dlbutton = driver.find_element(By.XPATH, Komp_ETF_herunterladen_xpath)
dlbutton.click()
time.sleep(20) # Waiting is somehow necessary, otherwise renaming is not successful
etf_path = './ETF/test/'
f = glob(os.path.join(etf_path, "Fondszusammensetzung_Amundi*"))[0]
os.rename(f, os.path.join(etf_path, 'test.xlsx'))
# Quit browser
driver.quit()
time.sleep(20)
df = pd.read_excel('./ETF/test/test.xlsx', engine='openpyxl', skiprows=18, skipfooter=4, header=1, usecols="B:H")
df.to_csv('./ETF/test/test.csv', sep=';', encoding='latin-1', decimal=',')
</code></pre>
<p>I can open and edit the file in Excel without any problems, and I don't see any errors in the file. How can I make sure that I can read the file into a DataFrame, edit it and then export it as a CSV file?</p>
|
<python><excel><pandas><dataframe><openpyxl>
|
2024-10-31 00:24:07
| 0
| 376
|
Economist Learning Python
|
79,142,749
| 3,250,928
|
Is it safe to use dataclasses with hidden/non-field inherited properties?
|
<p>I have a <code>dataclass</code> use strategy that is quite flexible, but I am unsure if it is relying on features that aren't part of the type contract. I am often telling people to not rely on the insertion order being the iterable order of a <code>dict</code> in python because only the <code>OrderedDict</code> has that property as a contract and a different version of python may behave differently.</p>
<p>Are there any concerns with patterning like this? It gives me a lot of flexibility to leverage the features of a <code>dataclass</code> that have a separation of properties like this.</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC
from dataclasses import dataclass, fields
class BQTransformer(ABC):
_project: str = 'default-project-name'
_dataset_wo_version: str
_dataset_version: str
_default_table: str
@classmethod
def get_dataset(cls):
return f'{cls._dataset_wo_version}_v{cls._dataset_version}'
@classmethod
def table(cls, table_name=None) -> str:
if table_name is None:
table_name = cls._default_table
return f'{cls._project}.{cls.get_dataset()}.{table_name}'
@dataclass(frozen=True, eq=True)
class BilliardsUser(BQTransformer):
_dataset_wo_version = 'billiards_info'
_dataset_version = '20240830'
_default_table = 'users'
username: str
id: int
A = BilliardsUser(username='Philip', id=3)
B = BilliardsUser(username='James', id=7)
>>> [f.name for f in fields(A)]
['username', 'id']
>>> A.table()
'default-project-name.billiards_info_v20240830.users'
</code></pre>
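<p>One observation: a more explicit spelling of that separation is <code>typing.ClassVar</code>, which the <code>dataclasses</code> documentation guarantees is excluded from <code>fields()</code> — making the intent part of the contract rather than an accident of which class carries the annotation. A minimal sketch:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, fields
from typing import ClassVar

@dataclass(frozen=True, eq=True)
class BilliardsUser:
    # class-level configuration, contractually excluded from fields()
    _default_table: ClassVar[str] = 'users'
    username: str
    id: int

u = BilliardsUser(username='Philip', id=3)
assert [f.name for f in fields(u)] == ['username', 'id']
assert BilliardsUser._default_table == 'users'
</code></pre>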
|
<python><inheritance><python-dataclasses>
|
2024-10-30 20:47:47
| 1
| 307
|
probinso
|
79,142,432
| 984,114
|
Apache Beam: Number of components does not match number of coders when using Timers
|
<p>My question is very similar to this <a href="https://stackoverflow.com/questions/77022836/apache-beam-wait-n-seconds-after-window-closes-to-execute-dofn">one</a></p>
<p>I'm trying to add some timers to process some data later, but I'm getting the following error:</p>
<pre><code>Error message from worker: generic::unknown: Traceback (most recent call last):
File "apache_beam/runners/common.py", line 1435, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 636, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam/runners/common.py", line 1621, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam/runners/common.py", line 1734, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam/runners/worker/operations.py", line 266, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam/runners/worker/operations.py", line 528, in apache_beam.runners.worker.operations.Operation.process
File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 158, in process
self.windowed_coder_impl.encode_to_stream(
File "apache_beam/coders/coder_impl.py", line 1448, in apache_beam.coders.coder_impl.WindowedValueCoderImpl.encode_to_stream
File "apache_beam/coders/coder_impl.py", line 1467, in apache_beam.coders.coder_impl.WindowedValueCoderImpl.encode_to_stream
File "apache_beam/coders/coder_impl.py", line 1023, in apache_beam.coders.coder_impl.AbstractComponentCoderImpl.encode_to_stream
ValueError: Number of components does not match number of coders.
</code></pre>
<p>my Dofn</p>
<pre><code>class WaitUntilDevicesExist(beam.DoFn):
BUFFER_STATE = beam.transforms.userstate.BagStateSpec('buffer', beam.coders.StrUtf8Coder())
TIMER = beam.transforms.userstate.TimerSpec('timer', beam.TimeDomain.REAL_TIME)
BUFFER_TIMER = 15 # seconds
...
def process(self, key_value, timer=beam.DoFn.TimerParam(TIMER), buffer=beam.DoFn.StateParam(BUFFER_STATE)):
shard_id, batch = key_value
for message in batch:
logging.info(f"Checking = {message}")
...
if (...):
timer.set(timestamp.Timestamp.now() + timestamp.Duration(seconds=self.BUFFER_TIMER))
buffer.add(DeviceCheckHelper(message).to_string())
else:
yield message
@beam.transforms.userstate.on_timer(TIMER)
def expiry_callback(self, timer=beam.DoFn.TimerParam(TIMER), buffer=beam.DoFn.StateParam(BUFFER_STATE)):
events = buffer.read()
logging.info("Timer")
new_buffer = []
for row in events:
message = DeviceCheckHelper.from_string(row)
logging.info(message)
....
if (...):
if retry == 3:
logging.info(f"Waited 3 times, yielding ")
yield message.message
else:
message.increase_retry()
new_buffer.append(message.to_string())
logging.info(f"retry = {message}")
buffer.clear()
timer.clear()
logging.info(f"New buffer = {new_buffer}")
if new_buffer:
for row in new_buffer:
logging.info(f"Adding {row}")
buffer.add(row)
timer.set(timestamp.Timestamp.now() + timestamp.Duration(seconds=self.BUFFER_TIMER))
</code></pre>
<p>and the pipeline looks like this</p>
<pre><code># 1 filter messages
filtered_messages = (
transformed_messages[TransformData.TAG_OK]
| f"Clean Devices {tenant}" >> beam.ParDo(FilterMessages()).with_outputs(FilterMessages.DEVICE_TAG, FilterMessages.OBSERVATION_TAG)
)
# 2 Write observations
observation_results = (
filtered_messages[FilterMessages.OBSERVATION_TAG]
| f"{tenant} Check Devices" >> beam.ParDo(WaitUntilDevicesExist(...))
| f"{tenant} Window Observations messages" >> GroupMessagesByShardedKey(max_messages=200, max_waiting_time=10, shard_key="obs", num_shards=10)
| f"{tenant} Write Observations" >> beam.ParDo(Write(...)).with_outputs(FAILED_TAG)
)
</code></pre>
<p>If I move the <code>WaitUntilDevicesExist</code> after the <code>GroupMessagesByShardedKey</code> it works fine. What am I missing?</p>
|
<python><google-cloud-dataflow><apache-beam>
|
2024-10-30 18:42:30
| 1
| 1,286
|
Alex Fragotsis
|
79,142,341
| 967,621
|
Unindent and convert multiline string to single line
|
<p>I have several multiline strings with indentation in my code, such as this (simplified example):</p>
<pre><code>def foo():
cmd = '''ls
/usr/bin
/usr/sbin
/usr/local/bin'''
print(f'{cmd=}')
foo()
</code></pre>
<p>I want to <strong>remove indentation and unwrap</strong> such strings so each prints as a single line. Currently, it also prints the newline characters and the whitespace that results from the indents:</p>
<pre><code>cmd='ls\n /usr/bin\n /usr/sbin\n /usr/local/bin'
</code></pre>
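<p>A minimal way to collapse such a string onto one line is <code>str.split()</code> with no arguments, which splits on any run of whitespace (newlines and indentation included), followed by a join:</p>
<pre class="lang-py prettyprint-override"><code>def unwrap(s):
    # split() with no args collapses every whitespace run; join restores single spaces
    return ' '.join(s.split())

cmd = '''ls
        /usr/bin
        /usr/sbin
        /usr/local/bin'''
assert unwrap(cmd) == 'ls /usr/bin /usr/sbin /usr/local/bin'
</code></pre>
<p>Note this also collapses intentional runs of spaces inside lines; if those must survive, <code>textwrap.dedent</code> plus an explicit newline replacement is the more surgical route.</p>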
|
<python><string><indentation><multiline><heredoc>
|
2024-10-30 18:15:40
| 1
| 12,712
|
Timur Shtatland
|
79,142,186
| 3,486,684
|
How do I flatten the elements of a column of type list of lists so that it is a column with elements of type list?
|
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pl.DataFrame(pl.Series("x", ["1, 0", "2,3", "5 4"])).with_columns(
pl.col("x").str.split(",").list.eval(pl.element().str.split(" "))
)
</code></pre>
<pre><code>shape: (3, 1)
ββββββββββββββββββββββ
β x β
β --- β
β list[list[str]] β
ββββββββββββββββββββββ‘
β [["1"], ["", "0"]] β
β [["2"], ["3"]] β
β [["5", "4"]] β
ββββββββββββββββββββββ
</code></pre>
<p>I want to flatten the elements of the column, so instead of being a nested list, the elements are just a list:</p>
<pre><code>shape: (3, 1)
ββββββββββββββββββ
β x β
β --- β
β list[str] β
ββββββββββββββββββ‘
β ["1", "", "0"] β
β ["2", "3"] β
β ["5", "4"] β
ββββββββββββββββββ
</code></pre>
<p>How do I do that?</p>
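<p>Whatever polars expression ends up doing it, the target operation is a one-level flatten, which in pure Python is:</p>
<pre class="lang-py prettyprint-override"><code># one-level flatten: list of lists -> single list, order preserved
nested = [['1'], ['', '0']]
flat = [item for sub in nested for item in sub]
assert flat == ['1', '', '0']
</code></pre>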
|
<python><dataframe><python-polars>
|
2024-10-30 17:25:59
| 1
| 4,654
|
bzm3r
|
79,142,169
| 5,790,653
|
How to check the occurrences of a dictionary key in another list and create new output
|
<p>I have a <code>list</code> and a <code>dictionary</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>mydict = {
'some-id-string1': {'name': 'Saeed1', 'phone': '+989307333730', 'id': 'abc'},
'some-id-string2': {'name': 'Saeed2', 'phone': '+989307333731', 'id': 'def'},
'some-id-string3': {'name': 'Saeed3', 'phone': '+989307333732', 'id': 'ghi'},
'some-id-string4': {'name': 'Saeed3', 'phone': '+989307333733', 'id': 'jkl'},
'some-id-string5': {'name': 'Saeed3', 'phone': '+989307333730', 'id': 'abc'},
'some-id-string6': {'name': 'Saeed3', 'phone': '+989307333730', 'id': 'abc'},
'some-id-string7': {'name': 'Saeed3', 'phone': '+989307333731', 'id': 'def'},
}
mylist = [
{'id': 'abc', 'name': 'some_name1'},
{'id': 'def', 'name': 'some_name2'},
{'id': 'ghi', 'name': 'some_name3'},
]
</code></pre>
<p>I want to check whether the <code>id</code> value in <code>mydict</code> matches an entry in <code>mylist</code> and, if so, add the <code>name</code> from <code>mylist</code> to <code>mydict</code>. If it doesn't exist there, I want two outputs (because I'm currently not sure which one to keep, so I'll produce both and decide later):</p>
<p>Expected output 1 if <code>name</code> exists and to add <code>NOT_FOUND</code> for the ones not having corresponding <code>name</code>:</p>
<pre class="lang-py prettyprint-override"><code>newmydict1 = {
'some-id-string1': {'name': 'Saeed1', 'phone': '+989307333730', 'id': 'abc', 'id_name': 'some_name1'},
'some-id-string2': {'name': 'Saeed2', 'phone': '+989307333731', 'id': 'def', 'id_name': 'some_name2'},
'some-id-string3': {'name': 'Saeed3', 'phone': '+989307333732', 'id': 'ghi', 'id_name': 'some_name3'},
'some-id-string4': {'name': 'Saeed3', 'phone': '+989307333733', 'id': 'jkl', 'id_name': 'NOT_FOUND'},
'some-id-string5': {'name': 'Saeed3', 'phone': '+989307333730', 'id': 'abc', 'id_name': 'some_name1'},
'some-id-string6': {'name': 'Saeed3', 'phone': '+989307333730', 'id': 'abc', 'id_name': 'some_name1'},
'some-id-string7': {'name': 'Saeed3', 'phone': '+989307333731', 'id': 'def', 'id_name': 'some_name2'},
}
</code></pre>
<p>Expected output 2 if <code>name</code> exists and to create a new list for the ones not having corresponding <code>name</code>:</p>
<pre class="lang-py prettyprint-override"><code>newmydict2 {
'some-id-string1': {'name': 'Saeed1', 'phone': '+989307333730', 'id': 'abc', 'id_name': 'some_name1'},
'some-id-string2': {'name': 'Saeed2', 'phone': '+989307333731', 'id': 'def', 'id_name': 'some_name2'},
'some-id-string3': {'name': 'Saeed3', 'phone': '+989307333732', 'id': 'ghi', 'id_name': 'some_name3'},
'some-id-string5': {'name': 'Saeed3', 'phone': '+989307333730', 'id': 'abc', 'id_name': 'some_name1'},
'some-id-string6': {'name': 'Saeed3', 'phone': '+989307333730', 'id': 'abc', 'id_name': 'some_name1'},
'some-id-string7': {'name': 'Saeed3', 'phone': '+989307333731', 'id': 'def', 'id_name': 'some_name2'},
}
newmydict3 = {
'some-id-string4': {'name': 'Saeed3', 'phone': '+989307333733', 'id': 'jkl'},
}
</code></pre>
<p>I'm not sure what to write, because the <code>key</code> of <code>mydict</code> (I mean <code>some-id-string</code>) does not exist in <code>mylist</code>, and some members of <code>mydict['id']</code> have more than one occurrence.</p>
<p>I tried</p>
<pre><code>new = [{
'string_id': d,
'id': l['id'],
'id_name': l['name'],
'phone': mydict[d]['phone'],
'name': mydict[d]['name']
} for l in mylist for d in mydict if mydict[d]['id'] == l['id']
]
</code></pre>
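<p>One way to sketch both outputs (shown here with abbreviated sample data) is a lookup dict keyed by <code>id</code>, which avoids the quadratic nested loop of the attempt above:</p>
<pre class="lang-py prettyprint-override"><code># abbreviated sample data
mydict = {
    's4': {'name': 'Saeed3', 'phone': '+989307333733', 'id': 'jkl'},
    's5': {'name': 'Saeed3', 'phone': '+989307333730', 'id': 'abc'},
}
mylist = [{'id': 'abc', 'name': 'some_name1'}]

# id -> name lookup, built once
lookup = {d['id']: d['name'] for d in mylist}

# output 1: every entry annotated, NOT_FOUND where no match exists
newmydict1 = {k: {**v, 'id_name': lookup.get(v['id'], 'NOT_FOUND')}
              for k, v in mydict.items()}
# output 2: matched entries only, plus the leftovers in their own dict
newmydict2 = {k: v for k, v in newmydict1.items() if v['id_name'] != 'NOT_FOUND'}
newmydict3 = {k: v for k, v in mydict.items() if v['id'] not in lookup}

assert newmydict1['s5']['id_name'] == 'some_name1'
assert newmydict1['s4']['id_name'] == 'NOT_FOUND'
assert list(newmydict3) == ['s4']
</code></pre>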
|
<python><list-comprehension>
|
2024-10-30 17:20:01
| 1
| 4,175
|
Saeed
|
79,141,950
| 1,802,225
|
ValueError: Incompatible indexer with Series in pandas DataFrame
|
<ul>
<li>python: 3.11</li>
<li>pandas: 2.2.2</li>
</ul>
<p>I need to assign a dict value to the 4th row in the df:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'agg': [None] * 5})
df['agg'] = df['agg'].astype(object)
df.loc[3, 'agg'] = {'mm': 4}
</code></pre>
<p>It gives an error:</p>
<pre><code>Traceback (most recent call last):
File "/home/sirjay/python/ethereum/lib/analyse_30m.py", line 1898, in <module>
df.loc[3, 'agg'] = {'mm': 4}
~~~~~~^^^^^^^^^^
File "/home/sirjay/miniconda3/envs/eth/lib/python3.11/site-packages/pandas/core/indexing.py", line 911, in __setitem__
iloc._setitem_with_indexer(indexer, value, self.name)
File "/home/sirjay/miniconda3/envs/eth/lib/python3.11/site-packages/pandas/core/indexing.py", line 1944, in _setitem_with_indexer
self._setitem_single_block(indexer, value, name)
File "/home/sirjay/miniconda3/envs/eth/lib/python3.11/site-packages/pandas/core/indexing.py", line 2189, in _setitem_single_block
value = self._align_series(indexer, Series(value))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sirjay/miniconda3/envs/eth/lib/python3.11/site-packages/pandas/core/indexing.py", line 2455, in _align_series
raise ValueError("Incompatible indexer with Series")
ValueError: Incompatible indexer with Series
</code></pre>
<p>How to fix? This option <code>df['agg'].loc[3] = {'mm': 4}</code> works, but with warning:</p>
<pre><code>Use `df.loc[row_indexer, "col"] = values` instead, to perform the assignment in a single step and ensure this keeps updating the original `df`.
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
</code></pre>
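<p>One common workaround (a sketch, not necessarily the only fix) is scalar assignment with <code>.at</code>, which writes a single cell directly instead of letting <code>.loc</code> try to align the dict as a Series:</p>

```python
import pandas as pd

df = pd.DataFrame({'agg': [None] * 5})
df['agg'] = df['agg'].astype(object)

# .at assigns a single cell as a scalar, so the dict is stored as-is
# rather than being wrapped in Series(value) and aligned as .loc does
df.at[3, 'agg'] = {'mm': 4}
```

<p><code>.at</code> is the scalar counterpart of <code>.loc</code> for single-cell access, so it states the intent — one cell, one value — most directly.</p>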
|
<python><python-3.x><pandas><dataframe>
|
2024-10-30 16:20:41
| 1
| 1,770
|
sirjay
|
79,141,905
| 10,615,030
|
How to extract data from a map based on pattern
|
<p>I have a map like this (jpg image):</p>
<p><a href="https://i.sstatic.net/M6BAGGcp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6BAGGcp.png" alt="input image map" /></a></p>
<p>and the corresponding patterns:</p>
<p><a href="https://i.sstatic.net/v8qwNgwo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8qwNgwo.png" alt="available patterns" /></a></p>
<p>I want to create polygons (in a .shp file) based on the patterns.</p>
<p>The goal is to overlay a polygon and see which patterns fall within that overlaid polygon.</p>
<p>I've tried some things using opencv but without good results.</p>
<p>Is there any way to do something like this even approximately? Either in qgis or using pure python or some other software?</p>
|
<python><machine-learning><computer-vision><gis><qgis>
|
2024-10-30 15:58:26
| 0
| 587
|
BananaMaster
|
79,141,783
| 1,843,329
|
How do you pass options to doctest when running 'make doctest' from Sphinx?
|
<p>We use <code>make html</code> to build HTML docs from <code>.rst</code> docs and our Python code using Sphinx. We have also enabled the <a href="https://www.sphinx-doc.org/en/master/usage/extensions/doctest.html" rel="nofollow noreferrer">sphinx.ext.doctest</a> extension in Sphinx so that we can test any code samples in our docs by running <code>make doctest</code>. If you do this:</p>
<pre><code>python -m doctest -h
</code></pre>
<p>you'll see doctest has many flags, e.g., <code>-f</code> to fail fast rather than run all the tests; and the ability to test only named files rather than all of them. But <strong>how can I pass these flags to doctest</strong> when running <code>make doctest</code>? Whilst debugging new code samples I'd really like doctest to only examine the one file I'm debugging, and to fail as soon as it encounters the first error. However, doing things like this, for example:</p>
<pre><code>make doctest SPHINXOPTS="-f"
</code></pre>
<p>doesn't work; doctest still reports multiple errors rather than the first one it finds.</p>
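<p>The <code>SPHINXOPTS</code> flags go to <code>sphinx-build</code> itself, not to the doctest module, so <code>-f</code> is not recognized there. While debugging a single file, one workaround (an assumption, not a documented Sphinx feature) is to bypass Sphinx and run the stdlib doctest runner, which does understand <code>-f</code>, directly on that file:</p>

```shell
# Create a throwaway file with one doctest example, then run the stdlib
# doctest runner on just that file with -f (fail fast):
printf '>>> 1 + 1\n2\n' > /tmp/doctest_demo.txt
python3 -m doctest -f /tmp/doctest_demo.txt && echo "doctests passed"
```

<p>Note that Sphinx-specific directives such as <code>.. doctest::</code> options are not interpreted by the stdlib runner, so this only helps for plain <code>&gt;&gt;&gt;</code> examples.</p>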
|
<python><python-sphinx><doctest>
|
2024-10-30 15:28:31
| 0
| 2,937
|
snark
|
79,141,781
| 4,182,596
|
Get maximum previous nonmissing value within group in pandas dataframe
|
<p>I have a pandas dataframe with a group structure where the value of interest, <code>val</code>, is guaranteed to be sorted within the group. However, there are missing values in <code>val</code> which I need to bound. The data I have looks like this:</p>
<pre><code>group_id id_within_group val
1 1 3.2
1 2 4.8
1 3 5.2
1 4 NaN
1 5 7.5
2 1 1.8
2 2 2.8
2 3 NaN
2 4 5.4
2 5 6.2
</code></pre>
<p>I now want to create a lower bound, <code>max_prev</code> which is the maximum value within the group for the rows before the current row, whereas <code>min_next</code> is the minimum value within the group for the rows after the current row. It is not possible to just look one row back and ahead, because there could be clusters of <code>NaN</code>. I don't need to take care of the edge cases of the first and last row within group. The desired output would hence be</p>
<pre><code>group_id id_within_group val max_prev min_next
1 1 3.2 NaN 4.8
1 2 4.8 3.2 5.2
1 3 5.2 4.8 7.5
1 4 NaN 5.2 7.5
1 5 7.5 5.2 NaN
2 1 1.8 NaN 2.8
2 2 2.8 1.8 5.4
2 3 NaN 2.8 5.4
2 4 5.4 2.8 6.2
2 5 6.2 5.4 NaN
</code></pre>
<p>How can I achieve this in a reasonably fast way?</p>
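<p>Because <code>val</code> is sorted within each group, a running max of the <em>shifted</em> values gives the lower bound, and the same idea on the reversed series gives the upper bound; <code>ffill</code> carries the bound across <code>NaN</code> runs. A sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'group_id':        [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    'id_within_group': [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
    'val': [3.2, 4.8, 5.2, np.nan, 7.5, 1.8, 2.8, np.nan, 5.4, 6.2],
})

g = df.groupby('group_id')['val']
# running max of the rows *before* the current one; ffill bridges NaN runs
df['max_prev'] = g.transform(lambda s: s.shift().cummax().ffill())
# same idea on the reversed series for the rows *after* the current one
df['min_next'] = g.transform(lambda s: s.shift(-1)[::-1].cummin().ffill()[::-1])
```

<p>Everything stays vectorized inside each group, so this should scale well even with long <code>NaN</code> clusters.</p>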
|
<python><pandas>
|
2024-10-30 15:27:15
| 1
| 521
|
matnor
|
79,141,695
| 1,725,836
|
pylint scanning is different between Windows and Linux
|
<p>I have a Python project that I want to scan with Pylint. Once I run <code>pylint **/*.py</code> I get different results between Windows and Linux machines, under the same Python and Pylint versions, same working directory, and same <code>.pylintrc</code> config.</p>
<pre><code>pylint 3.2.7
astroid 3.2.4
Python 3.12.4
</code></pre>
<p>For example, when I run <code>pylint --verbose **/*.py</code> on Windows I get <code>Checked 23 files, skipped 0 files</code> but on Linux I get <code>Checked 6 files, skipped 0 files</code> (Windows scan is right, I have 23 Python files in the repo).</p>
<p>I don't understand why the number of files it finds is different. Any idea why this is happening?</p>
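<p>One likely culprit (an assumption, since it depends on the shell) is glob expansion: in bash, <code>**/*.py</code> is expanded <em>before</em> pylint sees it, and unless the <code>globstar</code> option is enabled, <code>**</code> behaves like a single <code>*</code> and matches only one directory level. On Windows, the shell passes the pattern through unexpanded and pylint expands it recursively itself. A throwaway demo (paths hypothetical):</p>

```shell
# Build a tiny tree with .py files at three depths
mkdir -p /tmp/globdemo/pkg/sub
touch /tmp/globdemo/a.py /tmp/globdemo/pkg/b.py /tmp/globdemo/pkg/sub/c.py
cd /tmp/globdemo

shopt -u globstar               # the default in non-interactive bash
echo "plain:    " **/*.py       # only pkg/b.py -- ** acted like a single *

shopt -s globstar
echo "globstar: " **/*.py       # a.py pkg/b.py pkg/sub/c.py -- fully recursive
```

<p>Enabling <code>shopt -s globstar</code> before the call, or running <code>pylint --recursive=y .</code> so pylint does its own discovery, should make the two platforms agree.</p>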
|
<python><pylint>
|
2024-10-30 15:04:33
| 1
| 9,887
|
nrofis
|
79,141,639
| 10,595,871
|
Concatenate multiple API calls
|
<p>In my code, I have a list of CU (IDs) that I use in a URL for an API call:</p>
<pre><code>for chunk in cu_chunks:
cu_string = ','.join(chunk)
url = f'{base_url}?CU={cu_string}'
response = requests.get(url)
print(f'Status code for request with {len(chunk)} CUs: {response.status_code}')
</code></pre>
<p>That calls this:</p>
<pre><code>@app.route('/CU', methods=['GET'])
def CU():
cu_string = request.args.get('CU', type=str)
cu_list = cu_string.split(',')
jm.create_job_CU(cu_list)
ret = jsonify([job.result for job in jm.completed_jobs])
jm.completed_jobs = []
return ret
</code></pre>
<p>And then in the create job I have all the stuff.</p>
<p>If I have too many CUs, the URL is too long and it leads to an error, hence the decision to divide the CUs in chunks and make multiple API calls, as in the code above.</p>
<p>The API calls some functions that make predictions using machine learning algorithms and then pushes the results to the database.
The problem is that these calls make the code run quite slowly, so I need a solution that keeps the multiple calls but, instead of doing all the work each time, has the create job only store the CUs from each call and then, once they are all in, run the processing functions only once with all the CUs.</p>
<p>Is it possible to do something like this?</p>
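<p>The shape of that idea — have the endpoint only accumulate CUs across the chunked calls and run the expensive work once at the end — can be sketched framework-free (all names here are hypothetical, not your actual <code>JobManager</code> API):</p>

```python
class JobManager:
    """Hypothetical sketch: collect CU ids across calls, process once."""

    def __init__(self):
        self.pending = []

    def collect(self, cu_list):
        # one cheap call per URL-sized chunk: store only, no heavy work
        self.pending.extend(cu_list)

    def run_batch(self):
        # called once after the final chunk: a single pass over all CUs
        results = [self._predict(cu) for cu in self.pending]
        self.pending = []
        return results

    def _predict(self, cu):
        # stand-in for the real ML prediction + database push
        return f"processed {cu}"


jm = JobManager()
for chunk in (["cu1", "cu2"], ["cu3"]):  # the chunked API calls
    jm.collect(chunk)
results = jm.run_batch()                 # the expensive work, run once
```

<p>Separately, sending the CUs in a POST body as JSON instead of in the query string removes the URL-length limit that forced the chunking in the first place.</p>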
|
<python><flask>
|
2024-10-30 14:52:32
| 0
| 691
|
Federicofkt
|
79,141,599
| 54,873
|
WIth pandas, how do I use np.where with nullable datetime colums?
|
<p><code>np.where</code> is great for pandas since it's a vectorized way to change the column based on a condition.</p>
<p>But while it seems to work great with <code>np</code> native types, it doesn't play nice with dates.</p>
<p>This works great:</p>
<pre><code>>>> df1 = pd.DataFrame([["a", 1], ["b", np.nan]], columns=["name", "num"])
>>> df1
name num
0 a 1.0
1 b NaN
>>> np.where(df1["num"] < 2, df1["num"], np.nan)
array([ 1., nan])
</code></pre>
<p>But this doesn't:</p>
<pre><code>>>> df2 = pd.DataFrame([["a", datetime.datetime(2024,1,2)], ["b", np.nan]], columns=["name", "date"])
>>> df2
name date
0 a 2024-01-02
1 b NaT
>>> np.where(df2["date"] < datetime.datetime(2024,3,1), df2["date"], np.nan)
Traceback (most recent call last):
File "<python-input-10>", line 1, in <module>
np.where(df2["date"] < datetime.datetime(2024,3,1), df2["date"], np.nan)
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
numpy.exceptions.DTypePromotionError: The DType <class 'numpy.dtypes.DateTime64DType'> could not be promoted by <class 'numpy.dtypes._PyFloatDType'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is `object`. The full list of DTypes is: (<class 'numpy.dtypes.DateTime64DType'>, <class 'numpy.dtypes._PyFloatDType'>)
>>>
</code></pre>
<p>What is the proper vectorized way to do the latter operation?</p>
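<p>The promotion error comes from mixing <code>datetime64</code> values with a float <code>NaN</code>; the datetime-aware missing value is <code>NaT</code>. Two options, sketched:</p>

```python
import datetime

import numpy as np
import pandas as pd

df2 = pd.DataFrame(
    [["a", datetime.datetime(2024, 1, 2)], ["b", np.nan]],
    columns=["name", "date"],
)
cutoff = datetime.datetime(2024, 3, 1)

# Option 1: Series.where keeps the datetime64 dtype and fills NaT where False
out = df2["date"].where(df2["date"] < cutoff)

# Option 2: np.where works once the "else" value is a datetime64 NaT
arr = np.where(df2["date"] < cutoff, df2["date"], np.datetime64("NaT", "ns"))
```

<p><code>Series.where</code> is usually the more idiomatic choice here, since it returns a Series with the original index and dtype intact.</p>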
|
<python><pandas>
|
2024-10-30 14:42:16
| 1
| 10,076
|
YGA
|
79,141,547
| 7,123,797
|
If a class doesn't overload the == operator, then what is the value of an object of that class?
|
<p>From the Python reference:</p>
<blockquote>
<p>Every object has an identity, a type and a value.</p>
</blockquote>
<blockquote>
<p>Comparison operators implement a particular notion of what the value of an object is. One can think of them as defining the value of an object indirectly, by means of their comparison implementation.</p>
</blockquote>
<p>And Luciano Ramalho in his book "Fluent Python" says that (on p.206 of the second ed.):</p>
<blockquote>
<p>The operator <code>==</code> compares the values of objects (the data they hold), while <code>is</code> compares their identities.</p>
</blockquote>
<p>So, one of the possible definitions of the object's value (<em>I think this is the only definition of the object's value that is at least somehow, very carefully and without details, mentioned in the official reference</em>) is the following. The value of an object is the data that is used by the <code>__eq__()</code> method of that object. But there are classes that don't override this method. What is the value of an instance of such a class?<br />
(<em>Please note that I am not asking for a rigorous and strict definition here because it is unlikely to exist. I just need something a bit more practically useful than "object's value is the information that this object represents" β this is the phrase that people say when I ask them about the concept of an object's value, it is a true phrase but not very useful because it is too abstract.</em>)</p>
<hr />
<p>For example, let's say we have some function <code>f</code> - an instance of the built-in <code>function</code> class. This class doesn't override <code>__eq__()</code>, so in this case the <code>==</code> operator just compares the identities of its operands (if both operands are functions). But I don't think that we can say that the value of the object <code>f</code> is the same as the identity of <code>f</code>. Indeed, <a href="https://stackoverflow.com/a/78656280/7123797">people say</a> that user-defined functions are mutable objects, in other words, it is possible to change their value. But it is not possible to change the identity of <code>f</code> (or any other object). So, the value of <code>f</code> is not equal to <code>id(f)</code>. It should be defined differently and I hope it can be defined properly (if not in Python in general then at least in CPython).</p>
<p>Here on stackoverflow I have seen some definitions of a function object value, for example <a href="https://stackoverflow.com/a/35725544/7123797">here</a>:</p>
<blockquote>
<p>It's "value" is a set of other values, which are sufficient to define exact same function.</p>
</blockquote>
<p>But I am not sure that this is a "canonical" definition.</p>
|
<python><function>
|
2024-10-30 14:31:10
| 2
| 355
|
Rodvi
|
79,141,487
| 1,308,967
|
Interpolate heart beats from ECG based on r-peak timestamps
|
<p>I am trying to interpolate heart beats from an occasionally noisy ECG signal.</p>
<p><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC6679245/" rel="nofollow noreferrer">This paper</a> finds that it is more accurate to interpolate heart beats from a noisy ECG signal by interpolating on beat timestamp, rather than beat duration. (The latter drifts.)</p>
<p>But I don't know how many heart beats are supposed to be in the noisy regions, so if Y is the heart beat timestamp, I don't know what to pass as X.</p>
<p>Should I create an array with latest heart beat timestamp per original signal 256hz sample? The X as sample index and Y as most recent heart beat timestamp? My ignorant guess is that the shelf changes would throw the cubic spline around. Use weighting to only consider the known timestamps when constructing the spline? I <em>think</em> that'd plot <code>y=x</code>. Do I have no choice but to use interval?</p>
<hr />
<p>My rambling workings...</p>
<p>My understanding is that they are interpolating the continually increasing raw r-peak timestamp sequence, rather than the fluctuating r-peak interval. I assume X is the heart beat index 1, 2, 3, 4..., and Y is the index of the ECG signal it appears at, say 256, 500, 730... for a ~60bpm 256hz ECG signal.</p>
<p>I am very new to interpolation, and am experimenting with the SciPy interpolate functions <code>CubicSpline</code> or <code>interpolate.splrep</code> and <code>splev</code>.</p>
<p>Or is my approach wrong? The <a href="https://docs.scipy.org/doc/scipy/tutorial/interpolate/1D.html#missing-data" rel="nofollow noreferrer">scipy docs</a> recommend <a href="https://numpy.org/devdocs/reference/maskedarray.generic.html#module-numpy.ma" rel="nofollow noreferrer"><code>np.ma</code></a> for missing value interpolation, but I can't work out how I would.</p>
<p>A <a href="https://github.com/numpy/numpy/issues/7728" rel="nofollow noreferrer">numpy ticket</a> suggests <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html" rel="nofollow noreferrer"><code>UniVariateSpline</code></a> with zero knots and zero weights, but if I pass it an array with a <code>nan</code>, <code>spl</code> returns <code>nan</code> for all values. And anyway I still need to know how many heart beats are missing.</p>
<pre><code>y=[1, 4, np.nan, 16, 25, 36, 34, 12, 5, 87]
spl = UnivariateSpline(np.arange(10), y, s=0, w=np.zeros(10))
plt.plot(np.arange(10), spl(np.arange(10)))
>> array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan])
</code></pre>
<p>I had some luck with zeroing the nans and weights for unknowns:</p>
<pre><code>spl = UnivariateSpline(
np.arange(10),
[1, 4, 0, 16, 25, 36, 34, 12, 5, 87],
w=[1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
)
spl(np.arange(10))
>>> array([ 0.97665691, 4.18844333, 7.52957225, 14.64048455, 26.86293802,
36.34395811, 32.5930899 , 11.48427138, 6.42129891, 86.48885889])
</code></pre>
<p>But I don't know how many <code>nan</code>s I have.</p>
<p>Do I need to use intervals when heart beat count is unknown? Intervals at the sample rate, then X is the sample index and Y is the interval?</p>
<p>Could I do the same with heart beat timestamp? The most recent timestamp for each sample, and zero weight for the noisy regions? Is that statistically valid?</p>
<p>Huge thanks for any pointers.</p>
|
<python><numpy><scipy><interpolation>
|
2024-10-30 14:19:33
| 0
| 6,522
|
Chris
|
79,141,476
| 1,171,746
|
How to properly close pywebview window from a button?
|
<p>How to properly close pywebview window from a button?</p>
<p>I put together this <a href="https://github.com/Amourspirit/webview-py-editor/tree/main" rel="nofollow noreferrer">demo</a> of a Python Code Editor using <a href="https://pywebview.flowrl.com/" rel="nofollow noreferrer">pywebview</a> and <a href="https://codemirror.net/" rel="nofollow noreferrer">codemirror</a>.</p>
<p>It runs fine; however, when I use a button to close the window, it does not end all the processes on Linux (Ubuntu 24.04).
See Figure 1.</p>
<p>When I just click the <code>X</code> of the window to close it does work and terminate all the processes.
When I run the same code in Windows it works fine.</p>
<p>When the <code>Cancel</code> button is clicked it calls the <code>Api.destroy()</code> method.
See Source <a href="https://github.com/Amourspirit/webview-py-editor/blob/0d84225d09fd069c74682331c751e71823079cff/src/main.py#L26" rel="nofollow noreferrer">here</a></p>
<pre class="lang-py prettyprint-override"><code>import webview
class Api:
def __init__(self):
self._window = None
def set_window(self, window: webview.Window):
self._window = window
def destroy(self):
if self._window:
self._window.destroy()
self._window = None
# other code ...
</code></pre>
<p>In the source I resorted to calling <code>sys.exit()</code> but this is a poor solution as I plan on using this code in other projects and <code>sys.exit()</code> would also kill a running project.
Any suggestions?</p>
<p>Figure 1:
<a href="https://i.sstatic.net/MBu24bHp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBu24bHp.png" alt="Screenshot of System Monitor showing QT Web Enginge Still running after close" /></a></p>
|
<python><codemirror><pywebview>
|
2024-10-30 14:17:23
| 0
| 327
|
Amour Spirit
|
79,141,394
| 2,886,575
|
numpy tuple-style argsort ordered pairs of ordered pairs?
|
<p>I have an array of ordered pairs of ordered pairs:</p>
<pre><code>foo = np.array([
[[1, 3],
[2, 4]],
[[4, 1],
[2, 3]],
[[3, 2],
[3, 1]],
[[2, 4],
[3, 1]],
[[1, 2],
[3, 4]]])
</code></pre>
<p>I would like to <code>argsort</code> each pair of pairs, in the usual python tuple sorting sense:</p>
<pre><code>full_argsort(foo) == np.array(
[[0, 1],
[1, 0],
[1, 0],
[0, 1],
[0, 1]])
</code></pre>
<p>(or some index array that will allow me to <code>foo[indices]</code> to sort <code>foo</code>... I'm not entirely sure what the best format for this is)</p>
<p>i.e.</p>
<pre><code>pair_argsort([[1,2],[2,4]]) == [0,1]
pair_argsort([[2,1],[1,2]]) == [1,0]
pair_argsort([[1,2],[1,1]]) == [1,0] #NB
</code></pre>
<p>More generally (up to array/tuple/listification):</p>
<pre><code>pair_of_pairs[pair_argsort(pair_of_pairs)]) == sorted(map(tuple, pair_of_pairs))
</code></pre>
<p>I could then <code>np.apply_along_axis(pair_argsort,...)</code> to argsort the entirety of <code>foo</code>. It would be nice, though, if there were some <code>numpy</code>-native way to do this rather than <code>apply</code>ing my python function, which seems inefficient.</p>
<p>I can argsort by <em>just the first value in the tuple</em> with <code>np.argsort(foo.reshape(-1, 2, 2), axis=1)[:,:,0]</code>:</p>
<pre><code>array([[0, 1],
[1, 0],
[0, 1],
[0, 1],
[0, 1]])
</code></pre>
<p>Note that this is wrong on the third pair of pairs [[3,2],[3,1]].</p>
<p>(I'm still not super sure how to use this to index foo to get the sorted array... will update when I figure that out. :)</p>
<p>How can I "tuple argsort" in numpy?</p>
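<p><code>np.lexsort</code> does this tuple-style comparison natively: pass the keys least-significant first (second element, then first), and reorder the rows with <code>np.take_along_axis</code>:</p>

```python
import numpy as np

foo = np.array([
    [[1, 3], [2, 4]],
    [[4, 1], [2, 3]],
    [[3, 2], [3, 1]],
    [[2, 4], [3, 1]],
    [[1, 2], [3, 4]]])

# keys go least-significant first: tie-break on [:, :, 1], sort on [:, :, 0]
idx = np.lexsort((foo[:, :, 1], foo[:, :, 0]))       # shape (5, 2)

# apply the per-row ordering to recover the sorted pairs
sorted_foo = np.take_along_axis(foo, idx[:, :, None], axis=1)
```

<p>This sorts the tricky third case <code>[[3, 2], [3, 1]]</code> correctly to <code>[[3, 1], [3, 2]]</code>, with no Python-level <code>apply_along_axis</code> needed.</p>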
|
<python><numpy>
|
2024-10-30 13:52:24
| 1
| 5,605
|
Him
|
79,140,720
| 1,479,670
|
How to get bash-like editing and history with stdin.read() or input()?
|
<p>I have a command line tool which repeatedly gets input from the keyboard using <code>stdin.readline()</code>.</p>
<p>When I type some text at the prompt I only have limited possibilities for editing, such as moving the cursor (left and right) and deleting a character.</p>
<p>But, for example, if I move the cursor to the start of my text and type a character, it is appended at the end of my input.</p>
<p>Also, when I press the up key, the cursor actually moves upwards.</p>
<p>Is it possible to endow the Python standard input with editing and history (accessible with the up and down keys), like, for instance, bash?</p>
|
<python><command-line-interface><stdin>
|
2024-10-30 10:42:59
| 2
| 1,355
|
user1479670
|
79,140,682
| 20,920,790
|
Why function on_failure_callback for Airflow dag not running on task fail?
|
<p>I have an Airflow DAG with an <code>on_failure_callback</code> function,
but the <code>on_failure_callback</code> function does not run on failure and I don't see any logs.</p>
<p>Here is the code of my DAG.</p>
<p>All <code>@task</code>s in this code work; there's no need to show a specially failing function.
I work with the task instance by adding <code>**kwargs</code> to the functions (other options did not work in my case).</p>
<p>Why doesn't <code>send_message_on_dag_fail</code> run when the DAG has failed?</p>
<p>I added the same code to the task <code>get_start_end_dates</code> just to be sure that the <code>send_message_on_dag_fail</code> function itself works.</p>
<p>Airflow 2.8.2, Python 3.11.8.</p>
<pre><code>import pandas as pd
from datetime import datetime, timedelta
from dateutil.relativedelta import relativedelta
import pytz
import httpx
from airflow.decorators import dag, task
from airflow.exceptions import AirflowSkipException
from airflow.models import Variable
def get_current_period(date: datetime.date = None):
tz = pytz.timezone('Europe/Moscow')
if date:
now = date.date()
else:
now = datetime.now(tz).date()
if now.day <= 15:
start_date = (now - relativedelta(months=1)).replace(day=16)
end_date = now.replace(day=1) - relativedelta(days=1)
else:
start_date = now.replace(day=1)
end_date = now.replace(day=15)
return str(start_date), str(end_date)
def send_msg(bot_token: str, chat_id: str, message: str, type:str = 'message' or 'code'):
if type == 'message':
url = f'https://api.telegram.org/bot{bot_token}/sendMessage?chat_id={chat_id}&text={message}'
client = httpx.Client(base_url='https://')
return client.post(url)
elif type == 'code':
url = f'https://api.telegram.org/bot{bot_token}/sendMessage'
params = {
'chat_id': chat_id,
'text': message,
'parse_mode': 'Markdown'
}
client = httpx.Client(base_url='https://')
return client.post(url, params=params)
def get_xcom_from_context(context, task_id: str, dict_key:str = False):
if dict_key:
xcom = context['ti'].xcom_pull(task_ids=task_id)[dict_key]
else:
xcom = context['ti'].xcom_pull(task_ids=task_id)
return xcom
default_args = {
'owner': 'user',
'depends_on_past': False,
'retries': 1,
'retry_delay': timedelta(minutes=1),
'start_date': datetime(2024, 9, 19)
}
host = Variable.get('host')
database_name = Variable.get('database_name')
user_name = Variable.get('user_name')
password_for_db = Variable.get('password_for_db')
server_host_name = Variable.get('server_host_name')
bearer_key = Variable.get('bearer_key')
user_key = Variable.get('user_key')
sales_plans_url = Variable.get('sales_plans_url')
specialization_prices_url = Variable.get('specialization_prices_url')
bot_token = Variable.get('bot_token')
chat_id = Variable.get('chat_id')
def send_message_on_dag_fail(bot_token = bot_token, chat_id = chat_id, **kwargs):
context = kwargs
log = context['ti'].log
log.error('DAG FINISHED WITH ERROR __________________') # this error text easier to find
task_id = context['ti'].task_id
dag_id = context['dag'].dag_id
message = f"Task {task_id} from Dag {dag_id} failed."
log.error(message)
send_msg(bot_token, chat_id, message, 'message')
@dag(default_args=default_args, schedule_interval=None, catchup=False, concurrency=4, on_failure_callback=send_message_on_dag_fail)
def dag_get_bonus_and_penaltys_for_staff():
@task
def check_time():
tz = pytz.timezone('Europe/Moscow')
current_time = datetime.now(tz).time()
if current_time >= datetime.strptime("00:00", "%H:%M").time() and current_time <= datetime.strptime("01:00", "%H:%M").time():
return False
else:
return True
@task
def get_start_end_dates(bot_token = bot_token, chat_id = chat_id,**kwargs):
context = kwargs
log = context['ti'].log
check_time = get_xcom_from_context(context, 'check_time')
log.info('Xcom objects pulled from context')
if check_time:
start_date, end_date = get_current_period()
result = {
'start_date': start_date
, 'end_date': end_date
}
task_id = context['ti'].task_id
dag_id = context['dag'].dag_id
message = f"TASK {task_id} DAG {dag_id}."
log.error(message)
send_msg(bot_token, chat_id, message, 'message')
return result
else:
raise AirflowSkipException("Time for database cleaning, skip DAG execution.")
...
check_time_task = check_time()
get_start_end_dates_task = get_start_end_dates()
check_time_task >> get_start_end_dates_task >> ...
dag_get_bonus_and_penaltys_for_staff = dag_get_bonus_and_penaltys_for_staff()
</code></pre>
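<p>Airflow invokes failure callbacks with the context dictionary as a single positional argument, i.e. <code>callback(context)</code>. With the signature above, the context dict therefore lands in <code>bot_token</code> and <code>**kwargs</code> stays empty, so <code>context['ti']</code> raises inside the callback before anything useful is logged. A sketch of the conventional signature, tested here with an Airflow-free stub (the <code>functools.partial</code> binding for the extra arguments is an assumption about your setup):</p>

```python
from functools import partial


def send_message_on_dag_fail(context, bot_token=None, chat_id=None):
    # Airflow calls failure callbacks as callback(context): the context dict
    # must arrive as the first positional parameter, not via **kwargs
    task_id = context["ti"].task_id
    dag_id = context["dag"].dag_id
    message = f"Task {task_id} from Dag {dag_id} failed."
    # real code would also call send_msg(bot_token, chat_id, message, 'message')
    return message


# bind the extra arguments up front when registering the callback:
callback = partial(send_message_on_dag_fail, bot_token="TOKEN", chat_id="CHAT")
# then: @dag(..., on_failure_callback=callback)
```

<p>Also note that a DAG-level <code>on_failure_callback</code> fires once per failed DAG run, with the context of the failing task, so the log line lands in the task's log rather than a DAG-wide log.</p>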
|
<python><airflow>
|
2024-10-30 10:32:04
| 1
| 402
|
John Doe
|
79,140,661
| 7,973,301
|
How to sum values based on a second index array in a vectorized manner
|
<p>Let's say I have a <em>value array</em></p>
<pre class="lang-py prettyprint-override"><code>values = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
</code></pre>
<p>and an <em>index array</em></p>
<pre class="lang-py prettyprint-override"><code>indices = np.array([0,1,0,2,2])
</code></pre>
<p>Is there a vectorized way to sum the values for each unique index in <code>indices</code>?
I mean a vectorized version to get <code>sums</code> in this snippet:</p>
<pre class="lang-py prettyprint-override"><code>sums = np.zeros(np.max(indices)+1)
for index, value in zip(indices, values):
sums[index] += value
</code></pre>
<p>Bonus points if the solution allows <code>values</code> (and in consequence <code>sums</code>) to be multi-dimensional.</p>
<p><strong>EDIT:</strong> I benchmarked the posted solutions:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import time
import pandas as pd
values = np.arange(1_000_000, dtype=float)
rng = np.random.default_rng(0)
indices = rng.integers(0, 1000, size=1_000_000)
N = 100
now = time.time_ns()
for _ in range(N):
sums = np.bincount(indices, weights=values, minlength=1000)
print(f"np.bincount: {(time.time_ns() - now) * 1e-6 / N:.3f} ms")
now = time.time_ns()
for _ in range(N):
sums = np.zeros(1 + np.amax(indices), dtype=values.dtype)
np.add.at(sums, indices, values)
print(f"np.add.at: {(time.time_ns() - now) * 1e-6 / N:.3f} ms")
now = time.time_ns()
for _ in range(N):
pd.Series(values).groupby(indices).sum().values
print(f"pd.groupby: {(time.time_ns() - now) * 1e-6 / N:.3f} ms")
now = time.time_ns()
for _ in range(N):
sums = np.zeros(np.max(indices)+1)
for index, value in zip(indices, values):
sums[index] += value
print(f"Loop: {(time.time_ns() - now) * 1e-6 / N:.3f} ms")
</code></pre>
<p>Results:</p>
<pre><code>np.bincount: 1.129 ms
np.add.at: 0.763 ms
pd.groupby: 5.215 ms
Loop: 196.633 ms
</code></pre>
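<p>For the multi-dimensional bonus case, <code>np.add.at</code> generalizes directly: with <code>values</code> of shape <code>(m, d)</code>, a <code>sums</code> array of shape <code>(n_groups, d)</code> accumulates whole rows per index:</p>

```python
import numpy as np

values = np.array([[0.0, 10.0],
                   [1.0, 11.0],
                   [2.0, 12.0],
                   [3.0, 13.0],
                   [4.0, 14.0]])
indices = np.array([0, 1, 0, 2, 2])

sums = np.zeros((indices.max() + 1, values.shape[1]))
np.add.at(sums, indices, values)  # unbuffered add: repeated indices accumulate
```

<p><code>np.bincount</code>, by contrast, stays 1-D, so in the multi-dimensional case it would have to be called once per column.</p>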
|
<python><numpy>
|
2024-10-30 10:25:13
| 3
| 970
|
Padix Key
|
79,140,294
| 9,516,820
|
Pytesseract TesseractError: Unable to Load Language Files
|
<p>I am trying to use pytesseract in my system. But I am getting the following error message</p>
<pre><code>pytesseract.pytesseract.TesseractError: (1, 'Error opening data file /opt/homebrew/share/eng.traineddata Please make sure the TESSDATA_PREFIX environment variable is set to your "tessdata" directory. Failed loading language \'eng\' Tesseract couldn\'t load any languages! Could not initialize tesseract.')
</code></pre>
<p>I am using an M1 MacBook Pro and installed the tesseract engine using brew</p>
<p><code>brew install tesseract</code></p>
<p>I also installed the lang files using the following command</p>
<p><code>brew install tesseract-lang</code></p>
<p>The TESSDATA_PREFIX seems to point to the correct location</p>
<p><code>/opt/homebrew/share/</code></p>
<p>I also ensured that the language files are present in the correct directory
<code>ls /opt/homebrew/share/tessdata/</code></p>
<p>This is the result of the above command</p>
<pre><code>LICENSE ceb.traineddata eng.traineddata gle.traineddata jav.traineddata lit.traineddata osd.traineddata snum.traineddata tha.traineddata
README.md ces.traineddata enm.traineddata glg.traineddata jpn.traineddata ltz.traineddata pan.traineddata spa.traineddata tir.traineddata
afr.traineddata chi_sim.traineddata epo.traineddata grc.traineddata jpn_vert.traineddata mal.traineddata pdf.ttf spa_old.traineddata ton.traineddata
amh.traineddata chi_sim_vert.traineddata equ.traineddata guj.traineddata kan.traineddata mar.traineddata pol.traineddata sqi.traineddata tur.traineddata
ara.traineddata chi_tra.traineddata est.traineddata hat.traineddata kat.traineddata mkd.traineddata por.traineddata srp.traineddata uig.traineddata
asm.traineddata chi_tra_vert.traineddata eus.traineddata heb.traineddata kat_old.traineddata mlt.traineddata pus.traineddata srp_latn.traineddata ukr.traineddata
aze.traineddata chr.traineddata fao.traineddata hin.traineddata kaz.traineddata mon.traineddata que.traineddata sun.traineddata urd.traineddata
aze_cyrl.traineddata configs fas.traineddata hrv.traineddata khm.traineddata mri.traineddata ron.traineddata swa.traineddata uzb.traineddata
bel.traineddata cos.traineddata fil.traineddata hun.traineddata kir.traineddata msa.traineddata rus.traineddata swe.traineddata uzb_cyrl.traineddata
ben.traineddata cym.traineddata fin.traineddata hye.traineddata kmr.traineddata mya.traineddata san.traineddata syr.traineddata vie.traineddata
bod.traineddata dan.traineddata fra.traineddata iku.traineddata kor.traineddata nep.traineddata script tam.traineddata yid.traineddata
bos.traineddata deu.traineddata frk.traineddata ind.traineddata kor_vert.traineddata nld.traineddata sin.traineddata tat.traineddata yor.traineddata
bre.traineddata div.traineddata frm.traineddata isl.traineddata lao.traineddata nor.traineddata slk.traineddata tel.traineddata
bul.traineddata dzo.traineddata fry.traineddata ita.traineddata lat.traineddata oci.traineddata slv.traineddata tessconfigs
cat.traineddata ell.traineddata gla.traineddata ita_old.traineddata lav.traineddata ori.traineddata snd.traineddata tgk.traineddata
</code></pre>
<p>The eng.traineddata file which is what I want to primarily use is located in the directory</p>
<p>I also ensured that the prefix is exported in the zsh file
<code>export TESSDATA_PREFIX=/opt/homebrew/share/</code></p>
<p>I double checked by looking into the file. I also reloaded the configuration settings by running <code>source ~/.zshrc</code></p>
<p>Even after all of this I am getting the error message. I installed pytesseract inside a conda environment using pip and installed the tesseract engine using brew. I am not sure anymore what I am doing wrong.</p>
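<p>The path in the error message (<code>/opt/homebrew/share/eng.traineddata</code>) shows tesseract appending the file name directly to the prefix: since tesseract 4, <code>TESSDATA_PREFIX</code> must point at the <code>tessdata</code> directory itself, not at its parent. A config sketch:</p>

```shell
# tesseract >= 4 expects the prefix to *be* the tessdata directory
export TESSDATA_PREFIX=/opt/homebrew/share/tessdata/
```

<p>Alternatively, the location can be passed per call through pytesseract's <code>config</code> parameter, e.g. <code>pytesseract.image_to_string(img, config='--tessdata-dir /opt/homebrew/share/tessdata')</code>.</p>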
|
<python><tesseract><python-tesseract>
|
2024-10-30 08:47:30
| 1
| 972
|
Sashaank
|
79,139,716
| 2,552,290
|
Is constructing a list from a list guaranteed to make a copy?
|
<p>I have some python code that takes an iterable (list or tuple or generator expression or any other iterable), and needs to copy it into a new list.</p>
<p>The following works:</p>
<pre><code>newlist = [x for x in olditerable]
</code></pre>
<p>But I'm wondering whether I can just do this instead:</p>
<pre><code>newlist = list(olditerable)
</code></pre>
<p>Obviously it works as required (makes a copy) if olditerable is <em>not</em> a list.
Less obviously, it also seems to work as required (makes a copy) when olditerable <em>is</em> a list:</p>
<pre><code>oldlist = [1,2,3]
newlist = list(oldlist)
assert newlist is not oldlist # so it didn't just return oldlist
oldlist.append("appended to oldlist")
newlist.append("appended to newlist")
print(f"oldlist is {oldlist}")
print(f"newlist is {newlist}")
</code></pre>
<p>The output further confirms that newlist is not oldlist:</p>
<pre><code>oldlist is [1, 2, 3, 'appended to oldlist']
newlist is [1, 2, 3, 'appended to newlist']
</code></pre>
<p>Is this copying behavior guaranteed? I haven't been able to find any documentation about it one way or another.</p>
<p>In contrast, <code>tuple(oldtuple)</code> does <em>not</em> make a copy (of course, since it would be silly to, since tuples are immutable):</p>
<pre><code>oldtuple = (1,2,3)
newtuple = tuple(oldtuple)
assert newtuple is oldtuple
</code></pre>
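<p>The documentation does guarantee it: the <code>list</code> constructor builds a new list from the iterable's items, the same shallow copy you get from <code>oldlist[:]</code> or <code>oldlist.copy()</code>. A sketch of what "shallow" buys and costs:</p>

```python
oldlist = [1, [2, 3]]

# list(...), slicing and .copy() are all documented to return a *new* list
for make_copy in (list, lambda l: l[:], lambda l: l.copy()):
    newlist = make_copy(oldlist)
    assert newlist == oldlist        # equal contents
    assert newlist is not oldlist    # distinct list object
    assert newlist[1] is oldlist[1]  # but shallow: nested objects are shared
```

<p>Since lists are mutable, returning the argument unchanged (as <code>tuple</code> does for tuples) would be observably wrong, which is why the new-list behavior can be relied on.</p>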
|
<python>
|
2024-10-30 05:10:16
| 1
| 5,611
|
Don Hatch
|
79,139,651
| 6,197,439
|
Getting the size of an item (QStyledItemDelegate?) in a QTableView in PyQt5?
|
<p>In the example below (at end of the post), what I want to do, is to</p>
<ul>
<li>have an image applied to a QTableView item (cell) if it has empty contents (as a DecorationRole, since I might have other logic which might use BackgroundRole)
<ul>
<li>this image should scale/resize according to the cell size</li>
</ul>
</li>
<li>have a possibility to run a "resizeToContents" actions arbitrarily</li>
<li>start off the table with an "empty" state represented by a single empty cell</li>
</ul>
<p>Now, with the code below, things generally work:</p>
<ul>
<li>Image is applied to item (cell), if its data contents are empty - and it does scale with the cell (e.g. if we manually change column/row width/height)</li>
<li>"resizeToContents" generally works fine if the cell has data contents</li>
</ul>
<p>The problem is with the starting "empty" state, the DecorationRole drawing, and resizeToContents - let me illustrate:</p>
<ol>
<li>First, if the code below does <em>not</em> do the DecorationRole image for empty cells/items (<code>self.do_decoration = False</code>), then upon start we get this GUI state:<br />
<a href="https://i.sstatic.net/MBGyRMnp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBGyRMnp.png" alt="no DecorationRole, QTableView start" /></a><br />
... and after clicking "resizeToContents", we get this GUI state instead:<br />
<a href="https://i.sstatic.net/TzpDIeJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TzpDIeJj.png" alt="no DecorationRole, QTableView after "resizeToContents" click" /></a><br />
... that is, the cell "shrinks" (gets its width reduced). However, on subsequent clicks on "resizeToContents", this second GUI state remains unchanged (constant).</li>
<li>However, if we <em>do</em> enable the DecorationRole image for empty cells/items (<code>self.do_decoration = True</code>), then the first GUI state after program start looks like this:<br />
<a href="https://i.sstatic.net/kEHuX91b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEHuX91b.png" alt="DecorationRole, QTableView start" /></a><br />
... and on <em>every</em> subsequent click on "resizeToContents", the cell/item <em>grows</em> - so after 5 clicks on "resizeToContents" the GUI state is like this:<br />
<a href="https://i.sstatic.net/EY3TtZPa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EY3TtZPa.png" alt="DecorationRole, QTableView after 5 clicks on "resizeToContents"" /></a></li>
</ol>
<p>The behavior I otherwise wanted (and expected) is: if a QTableView starts off with showing a single cell with empty data, and there are no changes to the cell data, then no matter how many times I click on "resizeToContents", I should have no change in the GUI state.</p>
<hr />
<p>The fact that the GUI state changes in the "no DecorationRole" case at start is not a problem: I guess "resizeToContents" takes into account the contents of the labels "1" in the row and column headers as well, and resizes everything according to that - which can be solved by running "resizeToContents" once in init; then the user would see no change when "resizeToContents" is clicked after program start.</p>
<p>However, the constant growing of the cell size in the "yes DecorationRole" case is a problem. In my opinion, it is due to the fact that I cannot really get the size of the cell/item, since in a <code>QTableView</code> the cells/items are not even <code>QWidget</code>s - they are <a href="https://doc.qt.io/qt-5/qstyleditemdelegate-members.html" rel="nofollow noreferrer"><code>QStyledItemDelegate</code></a>s, which have no methods to get or set item width or height. I have therefore attempted a workaround in the code using <code>QTableView.rowHeight</code> and <code>QTableView.columnWidth</code> - but these apparently return sizes slightly larger than the actual cell/item dimensions, so the DecorationRole pixmap ends up slightly larger than the cell/item's actual size. The next "resizeToContents" tries to accommodate this, but then a new, even larger, decoration pixmap is generated - and we end up in a sort of recursive increase of cell size.</p>
<p>Which is why my question is ultimately - how do I get the actual size of a QStyledItemDelegate item (cell) in a QTableView? My guess is, if I could generate a pixmap with the actual size of the cell, then subsequent calls to "resizeToContents" would not increase the cell size anymore, and things would work as I had imagined them.</p>
<p>Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import (Qt, QSize, QPointF)
from PyQt5.QtGui import (QColor, QPalette, QPixmap, QBrush, QPen, QPainter)
from PyQt5.QtWidgets import (QWidget, QVBoxLayout, QPushButton, QStyleOptionViewItem, QStyledItemDelegate)
# starting point from https://www.pythonguis.com/tutorials/qtableview-modelviews-numpy-pandas/
class TableModel(QtCore.QAbstractTableModel):
def __init__(self, data, parview):
super(TableModel, self).__init__()
self._data = data
self._parview = parview # parent table view
self.do_decoration = True
#
def create_pixmap(self, pic_qsize): # SO:62799632
pixmap = QPixmap(pic_qsize)
pixmap.fill(QColor(0,0,0,0))
painter = QPainter()
painter.begin(pixmap)
#painter.setBrush(QtGui.QBrush(QtGui.QColor("blue")))
painter.setPen(QPen(QColor("#446600"), 4, Qt.SolidLine))
painter.drawLine(pixmap.rect().bottomLeft(), pixmap.rect().center()+QPointF(0,4))
painter.drawLine(pixmap.rect().bottomRight(), pixmap.rect().center()+QPointF(0,4))
painter.drawLine(pixmap.rect().topLeft(), pixmap.rect().center()+QPointF(0,-4))
painter.drawLine(pixmap.rect().topRight(), pixmap.rect().center()+QPointF(0,-4))
painter.end()
return pixmap
#
def data(self, index, role):
if role == Qt.DisplayRole:
return self._data[index.row()][index.column()]
if self.do_decoration and role == Qt.DecorationRole: # SO:74203503
value = self._data[index.row()][index.column()]
if not(value):
row_height = self._parview.rowHeight(index.row()) #-5
column_width = self._parview.columnWidth(index.column()) #-13
#item = self._parview.itemDelegate(index) # QStyledItemDelegate
print(f"{row_height=} {column_width=}")
pic_qsize = QSize(column_width, row_height)
return self.create_pixmap(pic_qsize)
#
def rowCount(self, index):
return len(self._data)
#
def columnCount(self, index):
return len(self._data[0])
class MainWindow(QtWidgets.QMainWindow):
def __init__(self):
super().__init__()
self.centralw = QWidget()
self.setCentralWidget(self.centralw)
self.vlayout = QVBoxLayout(self.centralw)
#
self.btn = QPushButton("resizeToContents")
self.btn.clicked.connect(self.resizeToContents)
self.vlayout.addWidget(self.btn)
#
self.table_view = QtWidgets.QTableView()
data = [ [ "" ] ]
self.model = TableModel(data, self.table_view)
self.table_view.setModel(self.model)
self.vlayout.addWidget(self.table_view)
#
#
def resizeToContents(self):
self.table_view.resizeColumnsToContents()
self.table_view.resizeRowsToContents()
app=QtWidgets.QApplication(sys.argv)
window=MainWindow()
window.show()
window.resize(280, 140)
app.exec_()
</code></pre>
|
<python><pyqt5><qt5>
|
2024-10-30 04:22:44
| 1
| 5,938
|
sdbbs
|
79,139,406
| 4,701,426
|
Executing an scheduled task if the script is run too late and after the scheduled time, using the `schedule` library
|
<p>Imagine I want to run this function:</p>
<pre><code>def main():
pass
</code></pre>
<p>at these scheduled times (not every random 3 hours):</p>
<pre><code>import schedule
schedule.every().day.at("00:00").do(main)
schedule.every().day.at("03:00").do(main)
schedule.every().day.at("06:00").do(main)
schedule.every().day.at("09:00").do(main)
schedule.every().day.at("12:00").do(main)
schedule.every().day.at("15:00").do(main)
schedule.every().day.at("18:00").do(main)
schedule.every().day.at("21:00").do(main)
while True:
schedule.run_pending()
time.sleep(1)
</code></pre>
<p>How can I make this run <code>main()</code> immediately when the script is started at some time between the scheduled start times, say at 19:42? The schedule above works fine, but if something happens, like the power going out while I'm not there to restart the script, and a scheduled run is missed, I'd like it to execute the function as soon as the script is run again, even if that's not at one of the scheduled times. This is a Windows machine, if it matters.</p>
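<p>The closest I've come is a stdlib-only check at startup, since as far as I can tell the <code>schedule</code> library keeps no state across restarts. The idea (assuming I persist the last run time myself, e.g. to a file) would be to compare it against the fixed slots and call <code>main()</code> once before entering the loop:</p>

```python
import datetime as dt

SCHEDULED_HOURS = range(0, 24, 3)  # 00:00, 03:00, ..., 21:00

def missed_run(now, last_run):
    """Return True if a scheduled slot passed between last_run and now."""
    slots = [now.replace(hour=h, minute=0, second=0, microsecond=0)
             for h in SCHEDULED_HOURS]
    slots += [s - dt.timedelta(days=1) for s in slots]  # yesterday's slots too
    return any(last_run < s <= now for s in slots)

# e.g. script restarted at 19:42 after a run at 14:00 -> the 15:00 and
# 18:00 slots were missed, so main() should run immediately:
now = dt.datetime(2024, 10, 29, 19, 42)
assert missed_run(now, dt.datetime(2024, 10, 29, 14, 0))
assert not missed_run(now, dt.datetime(2024, 10, 29, 19, 0))
```

<p>But this feels like reimplementing half the library, which is why I'm asking whether <code>schedule</code> itself supports it.</p>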
|
<python><scheduled-tasks>
|
2024-10-30 01:22:57
| 1
| 2,151
|
Saeed
|
79,139,391
| 6,401,858
|
Zip 2 lists with an offset, but keep left and right leftover
|
<p>I have two lists that I would like to zip with an offset similar to here: <a href="https://stackoverflow.com/questions/45185354/join-two-offset-lists-offset-zip">Join two offset lists ("offset zip"?)</a></p>
<p>However I would like to keep the left and right leftover too, so the final result = left + zipped middle + right. I have the indices of a pair of values that need to line up in the middle.</p>
<p>Example</p>
<pre><code>list1 = [1, 2, 3, 4, 5, 6]
list2 = ["a", "b", "c"]
index1 = 2 # list1[index1] is 3
index2 = 1 # list2[index2] is "b"
# So 3 and "b" must be adjacent in the zipped_middle
# Result
left = [1]
zipped_middle = [2, "a", 3, "b", 4, "c"]
right = [5, 6]
final_list = left + zipped_middle + right
</code></pre>
<p>How can I achieve this in Python? It needs to handle lists of any size.</p>
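<p>In case a concrete starting point helps, here is a sketch of the alignment logic I have in mind. It assumes the offset <code>index1 - index2</code> is non-negative and that <code>list1</code> covers the paired region (both true in the example); the symmetric cases would need the roles swapped, which is part of what I'm unsure how to handle cleanly:</p>

```python
def offset_zip(list1, list2, index1, index2):
    # Align list1[index1] next to list2[index2]; the interleaved
    # region starts `offset` elements into list1.
    offset = index1 - index2
    left = list1[:offset]
    zipped_middle = [x for pair in zip(list1[offset:], list2) for x in pair]
    right = list1[offset + len(list2):]
    return left, zipped_middle, right

list1 = [1, 2, 3, 4, 5, 6]
list2 = ["a", "b", "c"]
left, zipped_middle, right = offset_zip(list1, list2, 2, 1)
assert left == [1]
assert zipped_middle == [2, "a", 3, "b", 4, "c"]
assert right == [5, 6]
```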
|
<python><list>
|
2024-10-30 01:10:38
| 2
| 382
|
fariadantes
|
79,139,355
| 2,690,221
|
Why does hex give a different output than indexing into bytes in python?
|
<p>I'm trying to confirm this struct is being formatted correctly for network byte order, but printing out the bytes and printing out the hex of the bytes give me different output, with the hex being the expected output. But I don't know why they are different.</p>
<pre><code>import ctypes
class test_struct(ctypes.BigEndianStructure):
_pack_ = 1
_fields_ = [ ('f1', ctypes.c_ushort, 16) ]
foo = test_struct()
foo.f1 = 0x8043
bs = bytes(foo)
print(str(bs[0:1]) + " " + str(bs[1:2]))
print(bs.hex(' '))
</code></pre>
<p>The output is</p>
<pre><code>b'\x80' b'C'
80 43
</code></pre>
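<p>For reference, a minimal reproduction without <code>ctypes</code>, which seems to show it is just how the <code>bytes</code> repr displays printable ASCII bytes — the underlying value looks the same either way:</p>

```python
b = bytes([0x80, 0x43])

# Slicing bytes yields bytes; the repr shows printable ASCII bytes
# as characters, so 0x43 appears as 'C' even though the value matches.
assert b[1] == 0x43 == ord('C')
assert repr(b[1:2]) == "b'C'"
assert b.hex(' ') == '80 43'
```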
|
<python><hex><byte>
|
2024-10-30 00:41:00
| 1
| 421
|
Thundercleez
|
79,139,320
| 48,956
|
All of the async workers quit. When doesn't asyncio.run return?
|
<p>Anyone know why this asyncio.run fails to exit in this program when the child process quits?
I run the program below, and then <code>pkill ssh</code> to kill the ssh process.</p>
<p>AFAICT all of the async workers:</p>
<ul>
<li><code>run_command</code></li>
<li><code>read_from_process</code></li>
<li><code>write_to_process</code></li>
</ul>
<p>have all quit. The program prints:</p>
<pre><code>finally... worker 1 done
finally... worker 2 done
t1, t2 done
finally
finally .. DONE returncode=255
</code></pre>
<p>But "ALL DONE" is not printed until Return is pressed (and <code>ps</code> confirms that the python process did not quit).</p>
<pre><code>import asyncio
import os
import pty
import sys
import termios
import traceback
import tty
async def run_command(command):
# Create a pseudoterminal pair (master and slave)
master_fd, slave_fd = pty.openpty()
old_tty = termios.tcgetattr(sys.stdin)
try:
# Set the terminal in raw mode for capturing control characters
tty.setraw(sys.stdin.fileno())
# Start the process using the slave as stdin, stdout, stderr
process = await asyncio.create_subprocess_exec(
*command,
stdin=slave_fd,
stdout=slave_fd,
stderr=slave_fd
)
# Close the slave in the parent process
os.close(slave_fd)
# Start tasks to handle reading from and writing to the terminal
async def onCancel(coro, proc):
try:
await coro
except asyncio.CancelledError as e:
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
except OSError as e:
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
# Cancel other end.
if proc == 1:
t2.cancel()
else:
t1.cancel()
# process.kill()
# print("".join(traceback.format_exception(type(e), e, e.__traceback__)), file=sys.stderr)
# cancel_task(t1)
# cancel_task(t2)
# print("".join(traceback.format_exception(type(e), e, e.__traceback__)), file=sys.stderr)
except Exception as e:
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
print("".join(traceback.format_exception(type(e), e, e.__traceback__)), file=sys.stderr)
finally:
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
print(f"finally... worker {proc} done")
# cancel_task(t1)
# cancel_task(t2)
t1 = asyncio.create_task(onCancel(read_from_process(master_fd), 1))
t2 = asyncio.create_task(onCancel(write_to_process(master_fd), 2))
await asyncio.gather(t1, t2)
print("t1, t2 done")
await process.wait()
# asyncUtil.printWorkers() # Shows only run_command coro remains
finally:
print("finally")
# Restore the terminal to its previous state
os.close(master_fd)
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
print(f"finally .. DONE returncode={process.returncode}")
async def read_from_process(master_fd):
"""Asynchronously read from the master side of the PTY and print to stdout. Note that stdin/stdout are combined."""
loop = asyncio.get_event_loop()
while True:
data = await loop.run_in_executor(None, os.read, master_fd, 1024)
if not data: # EOF
break
sys.stdout.write(data.decode())
sys.stdout.flush()
async def write_to_process(master_fd):
"""Asynchronously read from stdin and send input to the process."""
try:
loop = asyncio.get_event_loop()
while True:
try:
user_input = await loop.run_in_executor(None, sys.stdin.read, 1) # Read one character
if user_input == "\x04": # CTRL+D to signal EOF
os.write(master_fd, user_input.encode())
# print("CTRL+D")
break
if user_input == "":
break # EOF, such as CTRL-D
os.write(master_fd, user_input.encode())
except EOFError:
break
except Exception as e:
print("".join(traceback.format_exception(type(e), e, e.__traceback__)), file=sys.stderr)
# Run the main event loop with the command
command = ["ssh", "localhost"] # replace with your SSH command
asyncio.run(run_command(command))
print("ALL DONE")
sys.stdout.flush()
</code></pre>
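<p>For reference, a minimal sketch that seems to reproduce the hang's mechanism, with <code>time.sleep</code> standing in for the <code>sys.stdin.read</code> that never returns. My understanding (an assumption, Python 3.9+) is that <code>asyncio.run()</code> joins the default executor's worker threads on shutdown, so a thread stuck in a blocking call keeps <code>run()</code> from returning:</p>

```python
import asyncio
import time

async def main():
    loop = asyncio.get_running_loop()
    # Hand a blocking call to the default ThreadPoolExecutor and
    # deliberately never await it (like the stuck sys.stdin.read).
    loop.run_in_executor(None, time.sleep, 0.5)
    await asyncio.sleep(0.1)  # let the worker thread actually start

start = time.monotonic()
asyncio.run(main())            # joins executor threads on shutdown (3.9+)
elapsed = time.monotonic() - start
assert elapsed >= 0.4          # run() waited for the blocked thread
```

<p>If that's right, it would explain why everything prints but the process never exits — but I'd like confirmation and a clean way out.</p>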
|
<python><python-asyncio>
|
2024-10-30 00:14:39
| 0
| 15,918
|
user48956
|
79,139,271
| 4,549,682
|
How to install LightGBM on Ubuntu 20.04 with CUDA support
|
<p>I followed the instructions in the <a href="https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html#build-cuda-version" rel="nofollow noreferrer">docs</a>, but also tried many other variations. All ended up in the same place, with this:</p>
<pre><code>LightGBM] [Fatal] Check failed: (split_indices_block_size_data_partition) > (0) at /home/azureuser/localfiles/LightGBM/lightgbm-python/src/treelearner/cuda/cuda_data_partition.cpp, line 280 .
</code></pre>
<p>I cannot find an answer to what is going on or how to fix this. I've tried Python 3.10 and 11 in conda envs and using the latest lightgbm and cuda. I suspect the cuda implementation is faster than the other gpu implementation, but not sure. The full traceback is:</p>
<pre><code> "name": "LightGBMError",
"message": "Check failed: (split_indices_block_size_data_partition) > (0) at /home/azureuser/localfiles/LightGBM/lightgbm-python/src/treelearner/cuda/cuda_data_partition.cpp, line 280 .
",
"stack": "---------------------------------------------------------------------------
LightGBMError Traceback (most recent call last)
Cell In[2], line 19
12 # Create and train the LightGBM classifier with GPU support
13 clf = lgb.LGBMClassifier(
14 objective='binary',
15 device='cuda',
16 verbose=1,
17 )
---> 19 clf.fit(X_train, y_train)
21 # Predict and evaluate
22 y_pred = clf.predict(X_test)
File /anaconda/envs/py311/lib/python3.11/site-packages/lightgbm/sklearn.py:1421, in LGBMClassifier.fit(self, X, y, sample_weight, init_score, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_metric, feature_name, categorical_feature, callbacks, init_model)
1418 else:
1419 valid_sets.append((valid_x, self._le.transform(valid_y)))
-> 1421 super().fit(
1422 X,
1423 _y,
1424 sample_weight=sample_weight,
1425 init_score=init_score,
1426 eval_set=valid_sets,
1427 eval_names=eval_names,
1428 eval_sample_weight=eval_sample_weight,
1429 eval_class_weight=eval_class_weight,
1430 eval_init_score=eval_init_score,
1431 eval_metric=eval_metric,
1432 feature_name=feature_name,
1433 categorical_feature=categorical_feature,
1434 callbacks=callbacks,
1435 init_model=init_model,
1436 )
1437 return self
File /anaconda/envs/py311/lib/python3.11/site-packages/lightgbm/sklearn.py:1015, in LGBMModel.fit(self, X, y, sample_weight, init_score, group, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_group, eval_metric, feature_name, categorical_feature, callbacks, init_model)
1012 evals_result: _EvalResultDict = {}
1013 callbacks.append(record_evaluation(evals_result))
-> 1015 self._Booster = train(
1016 params=params,
1017 train_set=train_set,
1018 num_boost_round=self.n_estimators,
1019 valid_sets=valid_sets,
1020 valid_names=eval_names,
1021 feval=eval_metrics_callable, # type: ignore[arg-type]
1022 init_model=init_model,
1023 callbacks=callbacks,
1024 )
1026 # This populates the property self.n_features_, the number of features in the fitted model,
1027 # and so should only be set after fitting.
1028 #
1029 # The related property self._n_features_in, which populates self.n_features_in_,
1030 # is set BEFORE fitting.
1031 self._n_features = self._Booster.num_feature()
File /anaconda/envs/py311/lib/python3.11/site-packages/lightgbm/engine.py:361, in train(params, train_set, num_boost_round, valid_sets, valid_names, feval, init_model, feature_name, categorical_feature, keep_training_booster, callbacks)
349 for cb in callbacks_before_iter:
350 cb(
351 callback.CallbackEnv(
352 model=booster,
(...)
358 )
359 )
--> 361 booster.update(fobj=fobj)
363 evaluation_result_list: List[_LGBM_BoosterEvalMethodResultType] = []
364 # check evaluation result.
File /anaconda/envs/py311/lib/python3.11/site-packages/lightgbm/basic.py:4143, in Booster.update(self, train_set, fobj)
4141 if self.__set_objective_to_none:
4142 raise LightGBMError(\"Cannot update due to null objective function.\")
-> 4143 _safe_call(
4144 _LIB.LGBM_BoosterUpdateOneIter(
4145 self._handle,
4146 ctypes.byref(is_finished),
4147 )
4148 )
4149 self.__is_predicted_cur_iter = [False for _ in range(self.__num_dataset)]
4150 return is_finished.value == 1
File /anaconda/envs/py311/lib/python3.11/site-packages/lightgbm/basic.py:295, in _safe_call(ret)
287 \"\"\"Check the return value from C API call.
288
289 Parameters
(...)
292 The return value from C API calls.
293 \"\"\"
294 if ret != 0:
--> 295 raise LightGBMError(_LIB.LGBM_GetLastError().decode(\"utf-8\"))
LightGBMError: Check failed: (split_indices_block_size_data_partition) > (0) at /home/azureuser/localfiles/LightGBM/lightgbm-python/src/treelearner/cuda/cuda_data_partition.cpp, line 280 .
"
}
</code></pre>
|
<python><cuda><lightgbm>
|
2024-10-29 23:27:30
| 1
| 16,136
|
wordsforthewise
|
79,139,264
| 15,412,256
|
Polars Time Series Path Dependent Event Outcome calculation
|
<p>In the demo DataFrame I have three events:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"timestamp": [1, 2, 3, 4, 5, 6, 7, 8],
"threshold": [8, None, None, None, 5, None, None, 8],
"value": [2, 3, 4, 5, 6, 7, 8, 9],
"event": [1, 0, 0, 0, 1, 0, 0, 1],
"start_ts": [1, None, None, None, 5, None, None, 8],
"end_ts": [6, None, None, None, 8, None, None, 8],
"event_id": [0, None, None, None, 1, None, None, 2],
}
).with_columns(pl.col("end_ts").sub(pl.col("start_ts")).alias("event_span"))
print(df)
</code></pre>
<pre><code>shape: (8, 8)
βββββββββββββ¬ββββββββββββ¬ββββββββ¬ββββββββ¬βββββββββββ¬βββββββββ¬βββββββββββ¬βββββββββββββ
β timestamp β threshold β value β event β start_ts β end_ts β event_id β event_span β
β --- β --- β --- β --- β --- β --- β --- β --- β
β i64 β i64 β i64 β i64 β i64 β i64 β i64 β i64 β
βββββββββββββͺββββββββββββͺββββββββͺββββββββͺβββββββββββͺβββββββββͺβββββββββββͺβββββββββββββ‘
β 1 β 8 β 2 β 1 β 1 β 6 β 0 β 5 β
β 2 β null β 3 β 0 β null β null β null β null β
β 3 β null β 4 β 0 β null β null β null β null β
β 4 β null β 5 β 0 β null β null β null β null β
β 5 β 5 β 6 β 1 β 5 β 8 β 1 β 3 β
β 6 β null β 7 β 0 β null β null β null β null β
β 7 β null β 8 β 0 β null β null β null β null β
β 8 β 8 β 9 β 1 β 8 β 8 β 2 β 0 β
βββββββββββββ΄ββββββββββββ΄ββββββββ΄ββββββββ΄βββββββββββ΄βββββββββ΄βββββββββββ΄βββββββββββββ
</code></pre>
<ul>
<li><code>timestamp</code> is the timestamp in the real world.</li>
<li><code>threshold</code> is the value that every events' value need to achieve, or exceed <strong>during</strong> the event span.</li>
<li><code>value</code> is the value at each timestamp, we can have duplicated values.</li>
<li><code>event</code> is a binary column indicating whether a certain timestamp generates event.</li>
<li><code>start_ts</code> is the starting <code>timestamp</code> of an event. For example, a <code>start_ts</code> of 1 means that the event will start at the end of <code>timestamp</code>1, at the beginning of <code>timestamp</code>2</li>
<li><code>end_ts</code> is the ending <code>timestamp</code> of an event.</li>
<li><code>event_id</code> is a unique identifier for each event.</li>
<li><code>event_span</code> is the number of <code>timestamp</code> that an event spans.</li>
</ul>
<p>Problem:
I want to identify:</p>
<ol>
<li><code>event outcome</code>: binary value indicating whether the <code>threshold</code> of each event is reached by <code>value</code> during each event span.</li>
<li><code>event outcome timestamp</code>: the <code>timestamp</code> where the <strong>first time</strong> of <code>value</code> reaching <code>threshold</code></li>
</ol>
<p>Additional Note:</p>
<ul>
<li>Event 0 spans [2, 3, 4, 5, 6], event 1 spans [6, 7, 8], and event 2 spans nothing.</li>
<li>The events would not span beyond the data that we have here (e.g., <code>end_ts</code> <= <code>timestamp</code>)</li>
</ul>
<p>Desired Output:</p>
<pre class="lang-py prettyprint-override"><code>shape: (8, 10)
βββββββββββββ¬ββββββββββββ¬ββββββββ¬ββββββββ¬ββββ¬βββββββββββ¬βββββββββββββ¬ββββββββββββββββ¬βββββββββββββββ
β timestamp β threshold β value β event β β¦ β event_id β event_span β event_outcome β event_outcom β
β --- β --- β --- β --- β β --- β --- β --- β e_timestamp β
β i64 β i64 β i64 β i64 β β i64 β i64 β i32 β --- β
β β β β β β β β β i64 β
βββββββββββββͺββββββββββββͺββββββββͺββββββββͺββββͺβββββββββββͺβββββββββββββͺββββββββββββββββͺβββββββββββββββ‘
β 1 β 8 β 2 β 1 β β¦ β 0 β 5 β 0 β null β
β 2 β null β 3 β 0 β β¦ β null β null β null β null β
β 3 β null β 4 β 0 β β¦ β null β null β null β null β
β 4 β null β 5 β 0 β β¦ β null β null β null β null β
β 5 β 5 β 6 β 1 β β¦ β 1 β 3 β 1 β 6 β
β 6 β null β 7 β 0 β β¦ β null β null β null β null β
β 7 β null β 8 β 0 β β¦ β null β null β null β null β
β 8 β 8 β 9 β 1 β β¦ β 2 β 0 β null β null β
βββββββββββββ΄ββββββββββββ΄ββββββββ΄ββββββββ΄ββββ΄βββββββββββ΄βββββββββββββ΄ββββββββββββββββ΄βββββββββββββββ
</code></pre>
<p>My current solution involves generating the full path values of each event, which is highly resource intensive, especially when the events overlap with each other:</p>
<pre class="lang-py prettyprint-override"><code>event_df = (
df
.filter(pl.col("event") == 1, pl.col("event_span") > 0)
.with_columns(
pl.int_ranges(pl.col("start_ts")+1, pl.col("end_ts")+1) # Map event full path
.alias("event_timestamps")
)
.explode("event_timestamps") # Generate event full path
.drop(pl.col("value"))
.join(
df
.select(pl.col("timestamp"), pl.col("value"))
.rename({"timestamp": "event_timestamps"}),
on="event_timestamps", how="left") # Get the value of the full path
.with_columns(# Map event outcome based on threshold
pl.when(pl.col("value") >= pl.col("threshold"))
.then(1)
.otherwise(None)
.alias("event_outcome")
)
.with_columns(# Get event outcome timestamp
pl.when(pl.col("event_outcome") == 1)
.then(pl.col("event_timestamps"))
.otherwise(None)
.alias("event_outcome_timestamp")
)
.with_columns(# Map event outcome to the event start timestamp
pl.col("event_outcome", "event_outcome_timestamp")
.fill_null(strategy="backward")
.over("event_id")
)
.unique("event_id")
.sort("event_id")
.with_columns(# Take care of the events that have not exceeded the threshold
pl.when(pl.col("event_outcome").is_null())
.then(0)
.otherwise(pl.col("event_outcome"))
.alias("event_outcome")
)
.select(pl.col("event_id", "event_outcome", "event_outcome_timestamp"))
)
event_df
</code></pre>
<pre><code>shape: (2, 3)
ββββββββββββ¬ββββββββββββββββ¬ββββββββββββββββββββββββββ
β event_id β event_outcome β event_outcome_timestamp β
β --- β --- β --- β
β i64 β i32 β i64 β
ββββββββββββͺββββββββββββββββͺββββββββββββββββββββββββββ‘
β 0 β 0 β null β
β 1 β 1 β 6 β
ββββββββββββ΄ββββββββββββββββ΄ββββββββββββββββββββββββββ
</code></pre>
<pre class="lang-py prettyprint-override"><code>df = df.join(event_df, on="event_id", how="left")
</code></pre>
|
<python><dataframe><time-series><python-polars>
|
2024-10-29 23:23:47
| 1
| 649
|
Kevin Li
|
79,139,225
| 82,474
|
Why does python successfully open a file when the filename is incorrect?
|
<p>I accidentally asked a Python program to open a file with the wrong name -- with an extra <code>.</code> on the end, and it worked.</p>
<p>So I started experimenting:</p>
<pre><code>>>> open('myfile.txt')
<_io.TextIOWrapper name='myfile.txt' mode='r' encoding='cp1252'>
>>> open('myfile.txt.')
<_io.TextIOWrapper name='myfile.txt.' mode='r' encoding='cp1252'>
>>> open('myfile.txt. ')
<_io.TextIOWrapper name='myfile.txt. ' mode='r' encoding='cp1252'>
>>> open('myfile.txt. x')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'myfile.txt. x'
</code></pre>
<p>Why is it accepting filenames with extra whitespace or periods at the end? How generous is Python's <code>open</code>?</p>
|
<python><windows><file>
|
2024-10-29 23:01:24
| 0
| 59,725
|
Eric Wilson
|
79,138,994
| 3,404,377
|
How can I check if the current environment satisfies a `requirements.txt`?
|
<p>I have a <code>requirements.txt</code> file. From a shell script, is there any way to determine if the current environment (whatever it might be) satisfies the <code>requirements.txt</code>?</p>
<p>In particular, I don't want to install anything, reach out to the network, or check if updates are available. I'm also not looking for a list of installed packages -- I just want a boolean "is it satisfied".</p>
<p>(My overall goal is to integrate some Python code that has dependencies into a CMake project, either using the current environment if all requirements are satisfied or setting up a venv if not.)</p>
<p>I know several things that don't work:</p>
<ul>
<li>Running <code>pip freeze</code> and parsing the output tells me the current installed versions, but not if they match the <code>requirements.txt</code>.</li>
<li>The <a href="https://docs.python.org/3.11/library/importlib.metadata.html#module-importlib.metadata" rel="nofollow noreferrer"><code>importlib.metadata</code></a> module gives me the current versions, but not if they match the <code>requirements.txt</code>.</li>
<li>The <a href="https://pip.pypa.io/en/stable/cli/pip_check/#" rel="nofollow noreferrer"><code>pip check</code></a> command tells me if the installed modules have their dependencies, but it doesn't accept a <code>requirements.txt</code> file.</li>
<li>The solutions in <a href="https://stackoverflow.com/questions/1051254/check-if-python-package-is-installed">this answer</a> don't provide a true/false answer and generally don't parse <code>requirements.txt</code>.</li>
</ul>
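<p>The closest I've gotten is this sketch using <code>importlib.metadata</code> plus the <code>packaging</code> library (assuming <code>packaging</code> is installed), but it ignores environment markers, extras, and nested <code>-r</code> includes, which is part of why I'm asking whether a proper tool exists:</p>

```python
from importlib.metadata import version, PackageNotFoundError
from packaging.requirements import Requirement

def requirements_satisfied(lines):
    """True if every requirement line is met by the current environment."""
    for line in lines:
        line = line.split('#')[0].strip()   # drop comments and blank lines
        if not line:
            continue
        req = Requirement(line)
        try:
            installed = version(req.name)
        except PackageNotFoundError:
            return False                     # not installed at all
        if req.specifier and installed not in req.specifier:
            return False                     # installed, wrong version
    return True

assert requirements_satisfied(["packaging"])
assert not requirements_satisfied(["surely-not-installed-pkg-xyz"])
```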
|
<python><pip><requirements.txt>
|
2024-10-29 21:03:07
| 0
| 1,131
|
ddulaney
|
79,138,978
| 4,701,426
|
merging lists with identical elements but in different order in pandas series into one unique lists
|
<p>Consider this simple dataframe:</p>
<pre><code>df = pd.DataFrame({'category' :[['Restaurants', 'Pizza'], ['Pizza', 'Restaurants'], ['Restaurants', 'Mexican']]})
</code></pre>
<p>df:</p>
<p><a href="https://i.sstatic.net/1Q0TRW3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Q0TRW3L.png" alt="enter image description here" /></a></p>
<p>The issue is that the <code>category</code> in the first two rows are essentially identical, just in different order. My goal is to collapse the two into one (does not matter which one). So, the resulting df should look like:</p>
<p><a href="https://i.sstatic.net/TZ9o8LJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TZ9o8LJj.png" alt="enter image description here" /></a></p>
<p>or:</p>
<p><a href="https://i.sstatic.net/XmuWq6cg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XmuWq6cg.png" alt="enter image description here" /></a></p>
<p>I thought about getting the indices of the rows with essentially the same categories (rows indexed 0 and 1 in this example) and then finding a way to replace all of them with one. But I am not sure my code is correct, and since the whole dataset is huge, this is also inefficient:</p>
<pre><code>identical_idx = []
df_length = len(df)
for i in range(df_length):
for j in range(df_length):
if i!=j:
if set(df.category.iloc[i]) == set(df.category.iloc[j]): identical_idx.append([i, j])
</code></pre>
<p>What is the most efficient way to achieve this?</p>
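<p>The best I've come up with so far is a frozenset-based dedup — a single linear pass instead of the pairwise loop, though it assumes duplicates <em>within</em> one list don't matter (a sorted tuple key would preserve those):</p>

```python
import pandas as pd

df = pd.DataFrame({'category': [['Restaurants', 'Pizza'],
                                ['Pizza', 'Restaurants'],
                                ['Restaurants', 'Mexican']]})

# Map every list to an order-insensitive hashable key, then keep the
# first row per key.
key = df['category'].map(frozenset)
out = df[~key.duplicated()].reset_index(drop=True)

assert len(out) == 2
assert out['category'].iloc[0] == ['Restaurants', 'Pizza']
```

<p>Is there a better way, or is this the idiomatic approach?</p>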
|
<python><pandas><dataframe>
|
2024-10-29 20:57:16
| 1
| 2,151
|
Saeed
|
79,138,858
| 16,611,809
|
How can I highlight or select all Pandas code in Visual Studio Code
|
<p>I want to transition from Pandas to Polars in a big Python project. Is there a way to highlight or find all Pandas commands I've written in Visual Studio Code (or another IDE if necessary) so I would see what I'd need to edit?</p>
|
<python><pandas><visual-studio-code><refactoring>
|
2024-10-29 20:12:45
| 1
| 627
|
gernophil
|
79,138,666
| 6,811,048
|
Convert pandas merge code to pyspark join
|
<p>I have a small problem with PySpark. I'm trying to rewrite code from pandas, where I have:</p>
<pre><code>df.merge(df2, how="outer", left_on=["b"], right_on=["b"])
</code></pre>
<p>That gives me the columns <code>b</code>, <code>a_x</code> and <code>a_y</code>. But when I do it in PySpark like:</p>
<pre><code>df.join(df2, how="outer", on=["b"])
</code></pre>
<p>I get <code>b</code>, <code>a</code> and <code>a</code>. When I then try <code>df.select("a")</code>, I get the error <code>Reference "a" is ambiguous</code>. What did I do wrong?</p>
|
<python><pandas><pyspark>
|
2024-10-29 19:22:09
| 3
| 649
|
Nju
|
79,138,641
| 4,392,566
|
Plottable Customize Bars
|
<p>Given this dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Grade':[2,3,4,5], 'start':[0,0,20,10], 'end':[100,90,80,100]})
Grade start end
0 2 0 100
1 3 0 90
2 4 20 80
3 5 10 100
</code></pre>
<p>I would like to generate a table from it via the plottable library with 2 columns: Grade and a column called "Progress" with bars (though a green arrow would be amazing) that start at 'start' and end at 'end', where the x-axis starts at 0 and ends at 100. Both 'start' and 'end' should be annotated.</p>
<p>Here's an example of what I'm looking for in that column:
<a href="https://i.sstatic.net/ZLhEB6Lm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLhEB6Lm.png" alt="enter image description here" /></a></p>
<p>If 'start' is beyond 0, then the bar (or arrow) should start at that relative position on the "x-axis". The annotations should always be on the left and right sides of the bars (arrows), as in the example above.</p>
<p>From the "docs", I did find this example for bar charts:</p>
<pre><code>ColumnDefinition(
"D",
width=1.25,
plot_fn=bar,
plot_kw={
"cmap": cmap,
"plot_bg_bar": True,
"annotate": True,
"height": 0.5,
"lw": 0.5,
"formatter": decimal_to_percent,
"xlim":[low, high],
},
),
</code></pre>
<p>...but I cannot find any reference for controlling the annotations in terms of labels and positions. I found that I can add "xlim" to the <code>plot_kw</code>, but then I need to get start and end from the passed dataframe to determine those values per row.</p>
|
<python><matplotlib><plottable>
|
2024-10-29 19:12:39
| 1
| 3,733
|
Dance Party
|
79,138,587
| 11,197,957
|
Hitting a stubborn ClientAuthenticationError while attempting to use the ContainerClient class
|
<p>I'm trying to run some integration tests in <strong>Azure</strong> via a pipeline designed by a previous team, and I'm not having much luck. Specifically, the pipeline keeps hitting a <code>ClientAuthenticationError</code>.</p>
<p>This is a <strong>minimal working example</strong> of the <strong>Python</strong> code which runs right before it crashes:</p>
<pre class="lang-py prettyprint-override"><code>SAS_TOKEN = os.environ["SAS_TOKEN"]
credential = AzureSasCredential(SAS_TOKEN)
account_name = "ACCOUNT_NAME"
account_url = f"https://{account_name}.blob.core.windows.net"
container_name = "CONTAINER_NAME"
blob_service_client = BlobServiceClient(account_url, credential=credential)
container = blob_service_client.get_container_client(container_name)
parquet_names = container.list_blob_names(name_starts_with="PATTERN")
list_of_parquet = list(parquet_names)
</code></pre>
<p>And this is the traceback that the above code produces:</p>
<pre><code>File "/home/vsts/work/1/s/./tests/run_tests.py", line 24, in access_datalake_locally
list_of_parquet = list(parquet_names)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/azure/core/paging.py", line 123, in __next__
return next(self._page_iterator)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/azure/core/paging.py", line 75, in __next__
self._response = self._get_next(self.continuation_token)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/azure/storage/blob/_list_blobs_helper.py", line 175, in _get_next_cb
process_storage_error(error)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/azure/storage/blob/_shared/response_handlers.py", line 186, in process_storage_error
exec("raise error from None") # pylint: disable=exec-used # nosec
File "<string>", line 1, in <module>
azure.core.exceptions.ClientAuthenticationError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
</code></pre>
<p>I can think of a few things which might be going wrong here:</p>
<ol>
<li>That <code>parquet_names</code> object really doesn't like being type-cast into a <code>list</code>.</li>
<li>I'm doing something bone-headed in the way in which I construct that <code>credential</code> object. (I have tried <code>blob_service_client = BlobServiceClient(account_url, credential=SAS_TOKEN</code>. Hits exactly the same error.)</li>
<li>The <code>SAS_TOKEN</code> environmental variable is wrong.</li>
</ol>
<p>I would be very grateful for any suggestions for how to fix this situation.</p>
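<p>For what it's worth, I also sanity-checked the shape of the token before passing it to the SDK, with pure stdlib parsing (the token value below is made up; the helper names are mine):</p>

```python
from urllib.parse import parse_qs

def normalize_sas(token: str) -> str:
    # the SDK credential expects the bare query string, without a leading '?'
    return token.lstrip("?")

def sas_params(token: str) -> dict:
    # split the token into its key/value fields so e.g. 'se' (expiry)
    # and 'sp' (permissions) can be inspected
    return {k: v[0] for k, v in parse_qs(normalize_sas(token)).items()}

params = sas_params("?sv=2021-08-06&sp=rl&se=2030-01-01T00%3A00%3A00Z&sig=FAKE")
```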
|
<python><azure><azure-blob-storage>
|
2024-10-29 18:58:11
| 1
| 734
|
Tom Hosker
|
79,138,547
| 9,848,275
|
ipfsapi issue with pyinstaller
|
<p>I'm running a simple script that downloads and uploads files from an IPFS node using ipfsapi. Everything works fine.</p>
<p>But when running with Pyinstaller (I use the <code>--onefile</code> option, but the same thing happens with default settings), I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "multiaddr/transforms.py", line 66, in string_iter
File "multiaddr/codecs/__init__.py", line 23, in codec_by_name
...
File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'multiaddr.codecs.idna'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "apt.py", line 9, in <module>
import ipfsapi
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
multiaddr.exceptions.StringParseError: Invalid MultiAddr '/dns/localhost/tcp/5001/http' protocol dns: Unknown Protocol
[PYI-29889:ERROR] Failed to execute script 'apt' due to unhandled exception!
</code></pre>
<ul>
<li>I've tried using a hook for force miltiaddr loading</li>
<li>I've tried with ipfshttpclient (newer version of ipfsapi)</li>
</ul>
<p>but I always get the exact same error. It fails only when running with Pyinstaller. If I run my script with Python, it just works.</p>
<p>Something odd I found is that a message I print at the very start of the script doesn't get printed under Pyinstaller; the failure seems to come even before my script is executed. It's as if the modules are loaded before the script runs, and as part of loading it tries to connect to <code>/dns/localhost/tcp/5001/http</code>, but that will always fail, since I don't have a node running on localhost.</p>
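<p>For reference, the hook I tried was shaped like this (a file named <code>hook-multiaddr.py</code> in a directory passed via <code>--additional-hooks-dir</code>; a sketch of PyInstaller's standard hook mechanism):</p>

```python
# hook-multiaddr.py -- force PyInstaller to bundle every multiaddr submodule,
# including dynamically imported codecs such as multiaddr.codecs.idna
from PyInstaller.utils.hooks import collect_submodules

hiddenimports = collect_submodules("multiaddr")
```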
<p>Any ideas?</p>
<p>Thanks</p>
|
<python><pyinstaller><ipfs-http-client>
|
2024-10-29 18:39:53
| 2
| 615
|
Matias Salimbene
|
79,138,384
| 16,611,809
|
Use np.where to create a list with same number of elements, but different content
|
<p>I have a pandas dataframe where a value is sometimes NA. I want to fill such cells with a list of strings of the same length as the list in another column:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({"a": ["one", "two"],
"b": ["three", "four"],
"c": [[1, 2], [3, 4]],
"d": [[5, 6], np.nan]})
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
</tr>
</thead>
<tbody>
<tr>
<td>one</td>
<td>three</td>
<td>[1, 2]</td>
<td>[5, 6]</td>
</tr>
<tr>
<td>two</td>
<td>four</td>
<td>[3, 4]</td>
<td>NaN</td>
</tr>
</tbody>
</table></div>
<p>and I want this to become</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
</tr>
</thead>
<tbody>
<tr>
<td>one</td>
<td>three</td>
<td>[1, 2]</td>
<td>[5, 6]</td>
</tr>
<tr>
<td>two</td>
<td>four</td>
<td>[3, 4]</td>
<td>[no_value, no_value]</td>
</tr>
</tbody>
</table></div>
<p>I tried</p>
<pre><code>df["d"] = np.where(df.d.isna(),
['no_value' for element in df.c],
df.d)
</code></pre>
<p>and</p>
<pre><code>df["d"] = np.where(df.d.isna(),
['no_value'] * len(df.c),
df.d)
</code></pre>
<p>but neither works. Does anyone have an idea?</p>
<p>SOLUTION:
I adjusted PaulS' answer a little to:</p>
<pre><code>df["d"] = np.where(df.d.isna(),
                   pd.Series([['no_value'] * len(lst) for lst in df.c]),
                   df.d)
</code></pre>
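<p>A quick self-contained check of that approach (column names as in the example above):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"c": [[1, 2], [3, 4]],
                   "d": [[5, 6], np.nan]})

# rows where d is NaN get a list of 'no_value' of the same length as c;
# wrapping the list comprehension in a Series keeps it a 1-D object array
df["d"] = np.where(df.d.isna(),
                   pd.Series([["no_value"] * len(lst) for lst in df.c]),
                   df.d)

print(df["d"].tolist())  # [[5, 6], ['no_value', 'no_value']]
```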
|
<python><pandas><numpy>
|
2024-10-29 17:45:42
| 2
| 627
|
gernophil
|
79,138,375
| 8,423,881
|
Python typing: accept kwargs only with some class' methods
|
<p>Is it possible to somehow extract a class's method types and use them to type <code>kwargs</code>?</p>
<p>Example of what I want:</p>
<pre class="lang-py prettyprint-override"><code>class A:
prop: int = 10
def foo(self):
pass
def bar(self):
pass
def fun[T](**kwargs: Idk[T]):
pass
fun[A](foo=lambda: 123) # this is correct
fun[A](prop=200) # this is not
fun[A](smth=lambda: 123) # this is not
</code></pre>
<p>Equivalent in TypeScript, which works:</p>
<pre class="lang-js prettyprint-override"><code>class A {
prop: number = 100;
foo() {}
bar() {}
}
type OnlyMethods<T> = {
[K in keyof T as T[K] extends Function ? K : never]: T[K];
};
function fun<T>(arg: Partial<OnlyMethods<T>>) {}
fun<A>({
bar() {}, // this works
});
fun<A>({
prop: 200, // error
});
fun<A>({
smth() {}, // error
});
</code></pre>
|
<python><python-typing>
|
2024-10-29 17:40:18
| 1
| 311
|
Malyutin Egor
|
79,138,298
| 17,702,266
|
Issue with Mentions and Hashtags via LinkedIn API
|
<p>We're investigating a problem with the LinkedIn API where mentions and hashtags are not recognized when posting via the API. We've thoroughly reviewed the LinkedIn documentation and even attempted a hardcoded test using LinkedIn's example organization.</p>
<p>Here are the details:</p>
<p>According to the LinkedIn documentation, a valid post should be structured like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"author": "urn:li:organization:123456789",
"commentary": "Hello @[Devtestco](urn:li:organization:2414183)",
"visibility": "PUBLIC",
"distribution": {
"feedDistribution": "MAIN_FEED",
"targetEntities": [],
"thirdPartyDistributionChannels": []
},
"lifecycleState": "PUBLISHED",
"isReshareDisabledByAuthor": false
}
</code></pre>
<p>Ref: <a href="https://learn.microsoft.com/en-us/linkedin/marketing/community-management/shares/posts-api?view=li-lms-2024-10&tabs=http#content" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/linkedin/marketing/community-management/shares/posts-api?view=li-lms-2024-10&tabs=http#content</a></p>
<p>Our generated code follows this format:</p>
<pre class="lang-py prettyprint-override"><code>#...
payload = {
"author": f"urn:li:person:{user_id}",
"commentary": text,
"visibility": "PUBLIC",
"distribution": {
"feedDistribution": "MAIN_FEED",
"targetEntities": [],
"thirdPartyDistributionChannels": [],
},
"content": {
"media": {
"altText": credential_name,
"id": thumbnail_urn,
}
},
"lifecycleState": "PUBLISHED",
"isReshareDisabledByAuthor": False
}
response = requests.post("https://api.linkedin.com/rest/posts", data=json.dumps(payload), headers=headers)
#...
</code></pre>
<p>And the variable text is generated so:</p>
<pre class="lang-py prettyprint-override"><code>f"{text}\n @[Devtestco](urn:li:organization:2414183) {hashtags}\n{credential_link}"
</code></pre>
<p>The API version we call is <code>202407</code>, but even with version <code>202410</code> the problem persists.<br />
We used "Devtestco" as a placeholder for testing, but the output looks like this:
<a href="https://i.sstatic.net/f2duyh6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f2duyh6t.png" alt="Censored LinkedIn post" /></a></p>
<p>The API does not seem to parse the commentary correctly, as mentions and hashtags only work if manually added after the post has been created.</p>
<p>How to resolve this, or apply some workaround?</p>
<p>What we already did: checked the code and documentation multiple times, and searched Google and Stack Overflow without finding anything.</p>
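<p>For completeness, the commentary we generate follows the documented <code>@[Name](urn)</code> mention shape; a minimal helper (the function name is ours) reproduces it exactly:</p>

```python
def mention(display: str, urn: str) -> str:
    # documented Posts API mention syntax: @[Display Name](urn)
    return f"@[{display}]({urn})"

# same string as the documentation example quoted above
commentary = f"Hello {mention('Devtestco', 'urn:li:organization:2414183')}"
```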
|
<python><linkedin-api>
|
2024-10-29 17:16:24
| 0
| 642
|
DominicSeel
|
79,137,995
| 7,648
|
Seaborn legend not matching with plot lines
|
<p>I have the following code:</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
v_exponents = range(2, 9)
vs = [2 ** exponent - 1 for exponent in v_exponents]
df = pd.read_csv('data/out/erdos_renyi_graph/path_lengths_dfs/stats.csv')
for v in vs:
df_filtered = df[df['V'] == v]
print(df_filtered)
sns.lineplot(data=df_filtered, x="E", y="fraction_connected")
plt.legend(title="V", labels=vs)
plt.savefig(f'src/main/python/erdos_renyi_graph/path_lengths_dfs/doc/img/ErdosRenyiGraph-PathLengthsDFS.png')
</code></pre>
<p>It gives this output:</p>
<p><a href="https://i.sstatic.net/2uyQSIM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2uyQSIM6.png" alt="Seaborn plot" /></a></p>
<p>Why is the legend screwed up? There are alternating thin and thick lines, two of each color. Moreover, it doesn't show all the colors, only blue through red.</p>
|
<python><seaborn><legend>
|
2024-10-29 15:53:21
| 1
| 7,944
|
Paul Reiners
|
79,137,986
| 1,714,490
|
QtVirtualKeyboard/PyQt6 - how to programmatically select OSK language?
|
<p>I have several questions concerning <code>QtVirtualKeyboard</code> usage under PyQt6 so, since StackOverflow policy is "just one question" I will post several questions.</p>
<p>The first part will be the same because they all pertain to the same setup.</p>
<h2>Common Setup</h2>
<p>I installed <code>QtVirtualKeyboard</code> loosely following <a href="https://stackoverflow.com/questions/62473386/pyqt5-show-virtual-keyboard">this answer</a> (MANY Thanks @eyllanesc!).</p>
<p>Exact commands given are:</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m venv venv
venv/bin/pip install -U pip wheel PyQt6 aqtinstall
QT_PREFIX_PATH=$(venv/bin/python -c "from PyQt6.QtCore import QLibraryInfo; print(QLibraryInfo.path(QLibraryInfo.LibraryPath.PrefixPath), end=None)")
QT_VERSION_STR=$(venv/bin/python -c "from PyQt6.QtCore import QT_VERSION_STR; print(QT_VERSION_STR, end=None)")
QT_VERSION_STR=6.7.3
venv/bin/python -m aqt install-qt linux desktop $QT_VERSION_STR -m qtvirtualkeyboard --outputdir qt
cp -p qt/$QT_VERSION_STR/gcc_64/lib/libQt6VirtualKeyboard.so.$QT_VERSION_STR $QT_PREFIX_PATH/lib/libQt6VirtualKeyboard.so.6
cp -p qt/$QT_VERSION_STR/gcc_64/lib/libQt6VirtualKeyboardSettings.so.$QT_VERSION_STR $QT_PREFIX_PATH/lib/libQt6VirtualKeyboardSettings.so.6
mkdir -p $QT_PREFIX_PATH/plugins/platforminputcontexts
cp -p qt/$QT_VERSION_STR/gcc_64/plugins/platforminputcontexts/libqtvirtualkeyboardplugin.so $QT_PREFIX_PATH/plugins/platforminputcontexts
#cp -pr qt/$QT_VERSION_STR/gcc_64/plugins/virtualkeyboard $QT_PREFIX_PATH/plugins
cp -pr qt/$QT_VERSION_STR/gcc_64/qml/QtQuick/VirtualKeyboard $QT_PREFIX_PATH/qml/QtQuick
mkdir -p $QT_PREFIX_PATH/qml/Qt/labs
cp -pr qt/$QT_VERSION_STR/gcc_64/qml/Qt/labs/folderlistmodel $QT_PREFIX_PATH/qml/Qt/labs
</code></pre>
<p><strong>Note:</strong> I had to force <code>QT_VERSION_STR=6.7.3</code> because using the value given by the Python script above (6.7.1) I got a runtime error (something about 6.7.3 not found); could that be a hint that something is wrong in my setup?</p>
<p>Test program is:</p>
<pre><code>from PyQt6.QtWidgets import QApplication, QWidget, QMainWindow, QLineEdit, QVBoxLayout
import sys
import os
os.environ["QT_IM_MODULE"] = "qtvirtualkeyboard"
class MaiWindow(QMainWindow):
def __init__(self):
super().__init__()
self.line_edit = QLineEdit()
self.line_edit2 = QLineEdit()
self.layout = QVBoxLayout()
self.main_widget = QWidget()
self.main_widget.setLayout(self.layout)
self.layout.addWidget(self.line_edit)
self.layout.addWidget(self.line_edit2)
self.setCentralWidget(self.main_widget)
app = QApplication(sys.argv)
window = MaiWindow()
window.show()
app.exec()
</code></pre>
<p>Program is started using: <code>venv.hide/bin/python test.py</code>.</p>
<p>This correctly shows a virtual keyboard when focusing into either <code>QLineEdit</code>.</p>
<h2>Question 3</h2>
<p>Is it possible to programmatically set <code>QtVirtualKeyboard</code> properties?</p>
<p>Specific <em>need</em> is to set OSK language (my system is <code>LANG=en_US.UTF-8</code>, but my system keyboard is <code>it-IT</code> and I need an OSK <code>el.GR</code>).</p>
<p>Other customization (like having the OSK at the bottom of window and not at bottom of screen) would be "nice to have".</p>
|
<python><pyqt6><qtvirtualkeyboard>
|
2024-10-29 15:49:47
| 0
| 3,106
|
ZioByte
|
79,137,955
| 1,714,490
|
QtVirtualKeyboard/PyQt6 - how to link OSK to specific input widget?
|
<p>I have several questions concerning <code>QtVirtualKeyboard</code> usage under PyQt6 so, since StackOverflow policy is "just one question" I will post several questions.</p>
<p>The first part will be the same because they all pertain to the same setup.</p>
<h2>Common Setup</h2>
<p>I installed <code>QtVirtualKeyboard</code> loosely following <a href="https://stackoverflow.com/questions/62473386/pyqt5-show-virtual-keyboard">this answer</a> (MANY Thanks @eyllanesc!).</p>
<p>Exact commands given are:</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m venv venv
venv/bin/pip install -U pip wheel PyQt6 aqtinstall
QT_PREFIX_PATH=$(venv/bin/python -c "from PyQt6.QtCore import QLibraryInfo; print(QLibraryInfo.path(QLibraryInfo.LibraryPath.PrefixPath), end=None)")
QT_VERSION_STR=$(venv/bin/python -c "from PyQt6.QtCore import QT_VERSION_STR; print(QT_VERSION_STR, end=None)")
QT_VERSION_STR=6.7.3
venv/bin/python -m aqt install-qt linux desktop $QT_VERSION_STR -m qtvirtualkeyboard --outputdir qt
cp -p qt/$QT_VERSION_STR/gcc_64/lib/libQt6VirtualKeyboard.so.$QT_VERSION_STR $QT_PREFIX_PATH/lib/libQt6VirtualKeyboard.so.6
cp -p qt/$QT_VERSION_STR/gcc_64/lib/libQt6VirtualKeyboardSettings.so.$QT_VERSION_STR $QT_PREFIX_PATH/lib/libQt6VirtualKeyboardSettings.so.6
mkdir -p $QT_PREFIX_PATH/plugins/platforminputcontexts
cp -p qt/$QT_VERSION_STR/gcc_64/plugins/platforminputcontexts/libqtvirtualkeyboardplugin.so $QT_PREFIX_PATH/plugins/platforminputcontexts
#cp -pr qt/$QT_VERSION_STR/gcc_64/plugins/virtualkeyboard $QT_PREFIX_PATH/plugins
cp -pr qt/$QT_VERSION_STR/gcc_64/qml/QtQuick/VirtualKeyboard $QT_PREFIX_PATH/qml/QtQuick
mkdir -p $QT_PREFIX_PATH/qml/Qt/labs
cp -pr qt/$QT_VERSION_STR/gcc_64/qml/Qt/labs/folderlistmodel $QT_PREFIX_PATH/qml/Qt/labs
</code></pre>
<p><strong>Note:</strong> I had to force <code>QT_VERSION_STR=6.7.3</code> because using the value given by the Python script above (6.7.1) I got a runtime error (something about 6.7.3 not found); could that be a hint that something is wrong in my setup?</p>
<p>Test program is:</p>
<pre><code>from PyQt6.QtWidgets import QApplication, QWidget, QMainWindow, QLineEdit, QVBoxLayout
import sys
import os
os.environ["QT_IM_MODULE"] = "qtvirtualkeyboard"
class MaiWindow(QMainWindow):
def __init__(self):
super().__init__()
self.line_edit = QLineEdit()
self.line_edit2 = QLineEdit()
self.layout = QVBoxLayout()
self.main_widget = QWidget()
self.main_widget.setLayout(self.layout)
self.layout.addWidget(self.line_edit)
self.layout.addWidget(self.line_edit2)
self.setCentralWidget(self.main_widget)
app = QApplication(sys.argv)
window = MaiWindow()
window.show()
app.exec()
</code></pre>
<p>Program is started using: <code>venv.hide/bin/python test.py</code>.</p>
<p>This correctly shows a virtual keyboard when focusing into either <code>QLineEdit</code>.</p>
<h2>Question 2</h2>
<p>How should I modify the test code (if it is possible at all) in order to have the first <code>QLineEdit</code> to use "normal" system keyboard and the second one the on-screen <code>QtVirtualKeyboard</code>?</p>
<p>Reason for this request is the Virtual Keyboard should be in another Locale (details in next question).</p>
|
<python><pyqt6><qtvirtualkeyboard>
|
2024-10-29 15:41:48
| 0
| 3,106
|
ZioByte
|
79,137,935
| 8,533,290
|
Numpy: large number modulo numpy array
|
<p>I have a large number (>64 Bit) and an array of small numbers.</p>
<pre><code>import numpy as np
n = 375562681772559479679199924760395898982847025172274709141095261928746039609
primes = np.array([2,3,5,7,11])
</code></pre>
<p>In an earlier numpy version I could write <code>n % primes</code>, which was significantly faster than list comprehension.
After my latest update I get <code>OverflowError: Python int too large to convert to C long</code>. Yes, n is too large, but the result isn't.</p>
<p>Is there any way, I can still achieve this in numpy (using its speed)?</p>
<p>To compare, I picked a smaller 64-bit n. My alternative <code>np.array([n % p for p in primes])</code> takes about 3 times as long as <code>n % primes</code>.</p>
<p>In my larger use cases, <code>primes</code> are the primes below 1000, i.e. 168 elements.</p>
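<p>One workaround I'm aware of is an object-dtype array, which keeps Python's arbitrary-precision ints so <code>n % primes</code> no longer overflows, though each element falls back to a Python-level <code>%</code> call, so I'm not sure it recovers the old C speed:</p>

```python
import numpy as np

n = 375562681772559479679199924760395898982847025172274709141095261928746039609

# object dtype stores Python ints directly: no conversion to C long, no overflow
primes = np.array([2, 3, 5, 7, 11], dtype=object)

remainders = n % primes  # elementwise Python int %, works for any size of n
```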
|
<python><numpy>
|
2024-10-29 15:38:22
| 0
| 720
|
Hennich
|
79,137,924
| 1,714,490
|
QtVirtualKeyboard/PyQt6 - how to install and deploy?
|
<p>I have several questions concerning <code>QtVirtualKeyboard</code> usage under PyQt6 so, since StackOverflow policy is "just one question" I will post several questions.</p>
<p>The first part will be the same because they all pertain to the same setup.</p>
<h2>Common Setup</h2>
<p>I installed <code>QtVirtualKeyboard</code> loosely following <a href="https://stackoverflow.com/questions/62473386/pyqt5-show-virtual-keyboard">this answer</a> (MANY Thanks @eyllanesc!).</p>
<p>Exact commands given are:</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m venv venv
venv/bin/pip install -U pip wheel PyQt6 aqtinstall
QT_PREFIX_PATH=$(venv/bin/python -c "from PyQt6.QtCore import QLibraryInfo; print(QLibraryInfo.path(QLibraryInfo.LibraryPath.PrefixPath), end=None)")
QT_VERSION_STR=$(venv/bin/python -c "from PyQt6.QtCore import QT_VERSION_STR; print(QT_VERSION_STR, end=None)")
QT_VERSION_STR=6.7.3
venv/bin/python -m aqt install-qt linux desktop $QT_VERSION_STR -m qtvirtualkeyboard --outputdir qt
cp -p qt/$QT_VERSION_STR/gcc_64/lib/libQt6VirtualKeyboard.so.$QT_VERSION_STR $QT_PREFIX_PATH/lib/libQt6VirtualKeyboard.so.6
cp -p qt/$QT_VERSION_STR/gcc_64/lib/libQt6VirtualKeyboardSettings.so.$QT_VERSION_STR $QT_PREFIX_PATH/lib/libQt6VirtualKeyboardSettings.so.6
mkdir -p $QT_PREFIX_PATH/plugins/platforminputcontexts
cp -p qt/$QT_VERSION_STR/gcc_64/plugins/platforminputcontexts/libqtvirtualkeyboardplugin.so $QT_PREFIX_PATH/plugins/platforminputcontexts
#cp -pr qt/$QT_VERSION_STR/gcc_64/plugins/virtualkeyboard $QT_PREFIX_PATH/plugins
cp -pr qt/$QT_VERSION_STR/gcc_64/qml/QtQuick/VirtualKeyboard $QT_PREFIX_PATH/qml/QtQuick
mkdir -p $QT_PREFIX_PATH/qml/Qt/labs
cp -pr qt/$QT_VERSION_STR/gcc_64/qml/Qt/labs/folderlistmodel $QT_PREFIX_PATH/qml/Qt/labs
</code></pre>
<p><strong>Note:</strong> I had to force <code>QT_VERSION_STR=6.7.3</code> because using the value given by the Python script above (6.7.1) I got a runtime error (something about 6.7.3 not found); could that be a hint that something is wrong in my setup?</p>
<p>Test program is:</p>
<pre><code>from PyQt6.QtWidgets import QApplication, QWidget, QMainWindow, QLineEdit, QVBoxLayout
import sys
import os
os.environ["QT_IM_MODULE"] = "qtvirtualkeyboard"
class MaiWindow(QMainWindow):
def __init__(self):
super().__init__()
self.line_edit = QLineEdit()
self.line_edit2 = QLineEdit()
self.layout = QVBoxLayout()
self.main_widget = QWidget()
self.main_widget.setLayout(self.layout)
self.layout.addWidget(self.line_edit)
self.layout.addWidget(self.line_edit2)
self.setCentralWidget(self.main_widget)
app = QApplication(sys.argv)
window = MaiWindow()
window.show()
app.exec()
</code></pre>
<p>Program is started using: <code>venv.hide/bin/python test.py</code>.</p>
<p>This correctly shows a virtual keyboard when focusing into either <code>QLineEdit</code>.</p>
<h2>Question 1</h2>
<p>First and foremost: is this the "Right Way" to install <code>QtVirtualKeyboard</code>?</p>
<p>I ask because it seems very convoluted, and I expect problems ("if" and "when") in deploying the completed application.</p>
|
<python><pyqt6><qtvirtualkeyboard>
|
2024-10-29 15:36:10
| 0
| 3,106
|
ZioByte
|
79,137,908
| 11,261,546
|
Error when runing mlflow with sklearn model
|
<p>I'm training a <code>RandomForestRegressor</code> and keeping track of it using <code>mlflow</code></p>
<p>Using the following code works perfectly <strong>only when</strong> <code>n_estimators</code> is lower than <code>90</code></p>
<p>The code:</p>
<pre><code>import mlflow
import mlflow.sklearn
from mlflow.models import infer_signature
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, root_mean_squared_error
dataset = mlflow.data.from_pandas(
final_result, source=file, name="SSM", targets="Shifted_10"
)
# Start the MLflow run
with mlflow.start_run(run_name='ssm_rain_predict_ssm'):
# Initialize the Random Forest Regressor
n_estimators = 100
rf_regressor = RandomForestRegressor(n_estimators=n_estimators, random_state=42, verbose=True)
mlflow.log_param("n_estimators", n_estimators)
# Prepare target values
y_train = y_train_df.values.ravel()
y_test = y_test_df.values.ravel()
# Fit the model
rf_regressor.fit(X_train, y_train)
# Make predictions on the test set
y_pred = rf_regressor.predict(X_test)
signature = infer_signature(X_test, y_pred)
# Calculate metrics
mse = mean_squared_error(y_test, y_pred)
rmse = root_mean_squared_error(y_test, y_pred)
print(f'Mean Squared Error: {mse}')
print(f'Root Mean Squared Error: {rmse}')
# Log metrics
mlflow.log_metric("mse", mse)
mlflow.log_metric("rmse", rmse)
# Prepare an example input from the training data, format if necessary
input_example_no_fmt = get_train_data_point(df_cleaned, 2600, N)
input_example = input_example_no_fmt.drop(input_example_no_fmt.columns[10], axis=1)
mlflow.log_input(dataset, context="training")
# Log the model with an input example
mlflow.sklearn.log_model(sk_model=rf_regressor,
artifact_path="sklearn-model",
signature=signature,
registered_model_name="ssm_from_ssm_rain_type"
)
</code></pre>
<p>When I run it with lower values, I get this log (I'm just hiding the URL):</p>
<pre><code>Registered model 'ssm_from_ssm_rain_type' already exists. Creating a new version of this model...
2024/10/29 14:55:23 INFO mlflow.store.model_registry.abstract_store: Waiting up to 300 seconds for model version to finish creation. Model name: ssm_from_ssm_rain_type, version 10
Created version '10' of model 'ssm_from_ssm_rain_type'.
2024/10/29 14:55:23 INFO mlflow.tracking._tracking_service.client: View run ssm_rain_predict_ssm at: http://xxxxxxxxxxx:5000/#/experiments/62/runs/56abb7acc7c14d7a81e967a568a8bb74.
2024/10/29 14:55:23 INFO mlflow.tracking._tracking_service.client: View experiment at: http://xxxxxxxxxx:5000/#/experiments/62.
</code></pre>
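<p>To see whether the failure tracks model size (the 500s happen while uploading <code>model.pkl</code>), I measured the pickled payload size before logging; this is a generic pickling sketch, not MLflow-specific, and the dummy object stands in for the fitted regressor:</p>

```python
import io
import pickle

def pickled_size_bytes(obj) -> int:
    # serialize into memory and report the payload size the artifact
    # upload would have to push to the tracking server
    buf = io.BytesIO()
    pickle.dump(obj, buf)
    return buf.getbuffer().nbytes

# dummy stand-in for the regressor; a real forest grows roughly
# linearly with n_estimators
size = pickled_size_bytes([list(range(100)) for _ in range(100)])
```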
<p>However, when I increase <code>n_estimators</code> I get:</p>
<pre><code>---------------------------------------------------------------------------
ResponseError Traceback (most recent call last)
ResponseError: too many 500 error responses
The above exception was the direct cause of the following exception:
MaxRetryError Traceback (most recent call last)
File /usr/local/lib/python3.10/dist-packages/requests/adapters.py:667, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
666 try:
--> 667 resp = conn.urlopen(
668 method=request.method,
669 url=url,
670 body=request.body,
671 headers=request.headers,
672 redirect=False,
673 assert_same_host=False,
674 preload_content=False,
675 decode_content=False,
676 retries=self.max_retries,
677 timeout=timeout,
678 chunked=chunked,
679 )
681 except (ProtocolError, OSError) as err:
File /usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py:944, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
943 log.debug("Retry: %s", url)
--> 944 return self.urlopen(
945 method,
946 url,
947 body,
948 headers,
949 retries=retries,
950 redirect=redirect,
951 assert_same_host=assert_same_host,
952 timeout=timeout,
953 pool_timeout=pool_timeout,
954 release_conn=release_conn,
955 chunked=chunked,
956 body_pos=body_pos,
957 preload_content=preload_content,
958 decode_content=decode_content,
959 **response_kw,
960 )
962 return response
File /usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py:944, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
943 log.debug("Retry: %s", url)
--> 944 return self.urlopen(
945 method,
946 url,
947 body,
948 headers,
949 retries=retries,
950 redirect=redirect,
951 assert_same_host=assert_same_host,
952 timeout=timeout,
953 pool_timeout=pool_timeout,
954 release_conn=release_conn,
955 chunked=chunked,
956 body_pos=body_pos,
957 preload_content=preload_content,
958 decode_content=decode_content,
959 **response_kw,
960 )
962 return response
[... skipping similar frames: HTTPConnectionPool.urlopen at line 944 (2 times)]
File /usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py:944, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
943 log.debug("Retry: %s", url)
--> 944 return self.urlopen(
945 method,
946 url,
947 body,
948 headers,
949 retries=retries,
950 redirect=redirect,
951 assert_same_host=assert_same_host,
952 timeout=timeout,
953 pool_timeout=pool_timeout,
954 release_conn=release_conn,
955 chunked=chunked,
956 body_pos=body_pos,
957 preload_content=preload_content,
958 decode_content=decode_content,
959 **response_kw,
960 )
962 return response
File /usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py:934, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
933 try:
--> 934 retries = retries.increment(method, url, response=response, _pool=self)
935 except MaxRetryError:
File /usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py:519, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
518 reason = error or ResponseError(cause)
--> 519 raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
521 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPConnectionPool(host='xxxx.xxxxx.xxxxx', port=5000): Max retries exceeded with url: /api/2.0/mlflow-artifacts/artifacts/62/8c7575e5c786420d9ba78af9cc7f70cd/artifacts/sklearn-model/model.pkl (Caused by ResponseError('too many 500 error responses'))
During handling of the above exception, another exception occurred:
RetryError Traceback (most recent call last)
File /usr/local/lib/python3.10/dist-packages/mlflow/utils/rest_utils.py:189, in http_request(host_creds, endpoint, method, max_retries, backoff_factor, backoff_jitter, extra_headers, retry_codes, timeout, raise_on_status, respect_retry_after_header, **kwargs)
188 try:
--> 189 return _get_http_response_with_retries(
190 method,
191 url,
192 max_retries,
193 backoff_factor,
194 backoff_jitter,
195 retry_codes,
196 raise_on_status,
197 headers=headers,
198 verify=host_creds.verify,
199 timeout=timeout,
200 respect_retry_after_header=respect_retry_after_header,
201 **kwargs,
202 )
203 except requests.exceptions.Timeout as to:
File /usr/local/lib/python3.10/dist-packages/mlflow/utils/request_utils.py:237, in _get_http_response_with_retries(method, url, max_retries, backoff_factor, backoff_jitter, retry_codes, raise_on_status, allow_redirects, respect_retry_after_header, **kwargs)
235 allow_redirects = env_value if allow_redirects is None else allow_redirects
--> 237 return session.request(method, url, allow_redirects=allow_redirects, **kwargs)
File /usr/local/lib/python3.10/dist-packages/requests/sessions.py:589, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
588 send_kwargs.update(settings)
--> 589 resp = self.send(prep, **send_kwargs)
591 return resp
File /usr/local/lib/python3.10/dist-packages/requests/sessions.py:703, in Session.send(self, request, **kwargs)
702 # Send the request
--> 703 r = adapter.send(request, **kwargs)
705 # Total elapsed time of the request (approximately)
File /usr/local/lib/python3.10/dist-packages/requests/adapters.py:691, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
690 if isinstance(e.reason, ResponseError):
--> 691 raise RetryError(e, request=request)
693 if isinstance(e.reason, _ProxyError):
RetryError: HTTPConnectionPool(host='xxxxx.xxxxxx.xxxxx.xxx', port=5000): Max retries exceeded with url: /api/2.0/mlflow-artifacts/artifacts/62/8c7575e5c786420d9ba78af9cc7f70cd/artifacts/sklearn-model/model.pkl (Caused by ResponseError('too many 500 error responses'))
During handling of the above exception, another exception occurred:
MlflowException Traceback (most recent call last)
Cell In[162], line 47
44 mlflow.log_input(dataset, context="training")
46 # Log the model with an input example
---> 47 mlflow.sklearn.log_model(sk_model=rf_regressor,
48 artifact_path="sklearn-model",
49 signature=signature,
50 registered_model_name="ssm_from_ssm_rain_type"
51 )
File /usr/local/lib/python3.10/dist-packages/mlflow/sklearn/__init__.py:413, in log_model(sk_model, artifact_path, conda_env, code_paths, serialization_format, registered_model_name, signature, input_example, await_registration_for, pip_requirements, extra_pip_requirements, pyfunc_predict_fn, metadata)
334 @format_docstring(LOG_MODEL_PARAM_DOCS.format(package_name="scikit-learn"))
335 def log_model(
336 sk_model,
(...)
348 metadata=None,
349 ):
350 """
351 Log a scikit-learn model as an MLflow artifact for the current run. Produces an MLflow Model
352 containing the following flavors:
(...)
411
412 """
--> 413 return Model.log(
414 artifact_path=artifact_path,
415 flavor=mlflow.sklearn,
416 sk_model=sk_model,
417 conda_env=conda_env,
418 code_paths=code_paths,
419 serialization_format=serialization_format,
420 registered_model_name=registered_model_name,
421 signature=signature,
422 input_example=input_example,
423 await_registration_for=await_registration_for,
424 pip_requirements=pip_requirements,
425 extra_pip_requirements=extra_pip_requirements,
426 pyfunc_predict_fn=pyfunc_predict_fn,
427 metadata=metadata,
428 )
File /usr/local/lib/python3.10/dist-packages/mlflow/models/model.py:743, in Model.log(cls, artifact_path, flavor, registered_model_name, await_registration_for, metadata, run_id, resources, **kwargs)
741 elif tracking_uri == "databricks" or get_uri_scheme(tracking_uri) == "databricks":
742 _logger.warning(_LOG_MODEL_MISSING_SIGNATURE_WARNING)
--> 743 mlflow.tracking.fluent.log_artifacts(local_path, mlflow_model.artifact_path, run_id)
745 # if the model_config kwarg is passed in, then log the model config as an params
746 if model_config := kwargs.get("model_config"):
File /usr/local/lib/python3.10/dist-packages/mlflow/tracking/fluent.py:1170, in log_artifacts(local_dir, artifact_path, run_id)
1136 """
1137 Log all the contents of a local directory as artifacts of the run. If no run is active,
1138 this method will create a new active run.
(...)
1167 mlflow.log_artifacts(tmp_dir, artifact_path="states")
1168 """
1169 run_id = run_id or _get_or_start_run().info.run_id
-> 1170 MlflowClient().log_artifacts(run_id, local_dir, artifact_path)
File /usr/local/lib/python3.10/dist-packages/mlflow/tracking/client.py:1977, in MlflowClient.log_artifacts(self, run_id, local_dir, artifact_path)
1930 def log_artifacts(
1931 self, run_id: str, local_dir: str, artifact_path: Optional[str] = None
1932 ) -> None:
1933 """Write a directory of files to the remote ``artifact_uri``.
1934
1935 Args:
(...)
1975
1976 """
-> 1977 self._tracking_client.log_artifacts(run_id, local_dir, artifact_path)
File /usr/local/lib/python3.10/dist-packages/mlflow/tracking/_tracking_service/client.py:874, in TrackingServiceClient.log_artifacts(self, run_id, local_dir, artifact_path)
865 def log_artifacts(self, run_id, local_dir, artifact_path=None):
866 """Write a directory of files to the remote ``artifact_uri``.
867
868 Args:
(...)
872
873 """
--> 874 self._get_artifact_repo(run_id).log_artifacts(local_dir, artifact_path)
File /usr/local/lib/python3.10/dist-packages/mlflow/store/artifact/http_artifact_repo.py:80, in HttpArtifactRepository.log_artifacts(self, local_dir, artifact_path)
76 artifact_dir = (
77 posixpath.join(artifact_path, rel_path) if artifact_path else rel_path
78 )
79 for f in filenames:
---> 80 self.log_artifact(os.path.join(root, f), artifact_dir)
File /usr/local/lib/python3.10/dist-packages/mlflow/store/artifact/http_artifact_repo.py:63, in HttpArtifactRepository.log_artifact(self, local_file, artifact_path)
61 extra_headers = {"Content-Type": mime_type}
62 with open(local_file, "rb") as f:
---> 63 resp = http_request(
64 self._host_creds, endpoint, "PUT", data=f, extra_headers=extra_headers
65 )
66 augmented_raise_for_status(resp)
File /usr/local/lib/python3.10/dist-packages/mlflow/utils/rest_utils.py:212, in http_request(host_creds, endpoint, method, max_retries, backoff_factor, backoff_jitter, extra_headers, retry_codes, timeout, raise_on_status, respect_retry_after_header, **kwargs)
210 raise InvalidUrlException(f"Invalid url: {url}") from iu
211 except Exception as e:
--> 212 raise MlflowException(f"API request to {url} failed with exception {e}")
MlflowException: API request to http://xxxxxxxxxxxxxxxxxxxx.net:5000/api/2.0/mlflow-artifacts/artifacts/62/8c7575e5c786420d9ba78af9cc7f70cd/artifacts/sklearn-model/model.pkl failed with exception HTTPConnectionPool(host='xxxx.xxxx.Xxxxx', port=5000): Max retries exceeded with url: /api/2.0/mlflow-artifacts/artifacts/62/8c7575e5c786420d9ba78af9cc7f70cd/artifacts/sklearn-model/model.pkl (Caused by ResponseError('too many 500 error responses'))
</code></pre>
<p>Why am I getting this error? How could I fix it?</p>
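A mitigation sketch for the error above, assuming the 500s from the tracking server are transient: MLflow's HTTP client reads its retry and timeout budget from environment variables, so the retry budget can be raised before the `log_model` call. The variable names are MLflow's documented settings; the values are purely illustrative and will not help if the server fails every request.

```python
import os

# Raise the MLflow REST client's retry budget before logging the model.
# These environment variables are read by MLflow's HTTP layer at request
# time; the values here are illustrative, not recommendations.
os.environ["MLFLOW_HTTP_REQUEST_MAX_RETRIES"] = "10"
os.environ["MLFLOW_HTTP_REQUEST_BACKOFF_FACTOR"] = "5"
os.environ["MLFLOW_HTTP_REQUEST_TIMEOUT"] = "300"
```

If the 500s persist, the tracking server's logs for the `mlflow-artifacts` endpoint are the place to look; the client-side traceback only reports that every retry was exhausted.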
<p>Thanks</p>
|
<python><scikit-learn><mlflow>
|
2024-10-29 15:30:45
| 0
| 1,551
|
Ivan
|
79,137,843
| 4,701,426
|
filtering a pandas dataframe string column using the `str.contains` method
|
<p>I have a dataframe looking like this, where <code>long_category</code> reflects the category of businesses in the rows:</p>
<pre><code>df = pd.DataFrame({
'long_category': {0: 'Doctors, Traditional Chinese Medicine, Naturopathic/Holistic, Acupuncture, Health & Medical, Nutritionists',
1: 'Shipping Centers, Local Services, Notaries, Mailbox Centers, Printing Services',
2: 'Department Stores, Shopping, Fashion, Home & Garden, Electronics, Furniture Stores',
3: 'Restaurants, Food, Bubble Tea, Coffee & Tea, Bakeries',
4: 'Brewpubs, Breweries, Food',
5: 'Burgers, Fast Food, Sandwiches, Food, Ice Cream & Frozen Yogurt, Restaurants',
6: 'Sporting Goods, Fashion, Shoe Stores, Shopping, Sports Wear, Accessories',
7: 'Synagogues, Religious Organizations',
8: 'Pubs, Restaurants, Italian, Bars, American (Traditional), Nightlife, Greek',
9: 'Ice Cream & Frozen Yogurt, Fast Food, Burgers, Restaurants, Food'}})
</code></pre>
<p>df:</p>
<p><a href="https://i.sstatic.net/JpC1sPP2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpC1sPP2.png" alt="enter image description here" /></a></p>
<p>My goal is to shorten those categories into the below <code>known_categories</code> based on whether or not a long category contains a keyword in the <code>known_categories</code>:</p>
<pre><code>known_categories = ['restaurant', 'beauty & spas', 'hotels', 'health & medical', 'shopping', 'coffee & tea','automotive', 'pets|veterinian', 'services', 'stores', 'grocery', 'ice cream']
</code></pre>
<p>So, I do:</p>
<pre><code>import numpy as np

df['short_category'] = np.nan
for cat in known_categories:
    excluded_cats = [x for x in known_categories if x != cat]
    df['short_category'][~(df.long_category.str.contains('|'.join(excluded_cats), regex=True, case=False, na=False)) & (df.long_category.str.contains(cat, regex=True, case=False, na=False))] = cat
</code></pre>
<p>It is important that the short categories are mutually exclusive. For example, row indexed 3 should be put in either "restaurant" or "coffee & tea" short category, hence the <code>~(df.long_category.str.contains('|'.join(excluded_cats), regex = True, case = False, na = False))</code> condition above.</p>
<p>But this is not working, as can be seen below. For example, both of the last two rows have "restaurant" in their long category, but only the first of them has that captured in its short category. I was expecting the last one to be either 'restaurant' or 'ice cream' because it contains those keywords from <code>known_categories</code>. So, where have I gone wrong? As a side note, I would like to be able to influence the frequency of the short categories by moving a category further to the front or back of <code>known_categories</code> if possible. For example, with the current <code>known_categories</code>, I'd like the last row to have 'restaurant' as its short category; but if I move 'ice cream' before 'restaurant' in <code>known_categories</code>, I would like the last row to show 'ice cream' and not 'restaurant'.</p>
<p><a href="https://i.sstatic.net/w8qGhbY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w8qGhbY8.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe>
|
2024-10-29 15:13:06
| 2
| 2,151
|
Saeed
|
79,137,715
| 7,346,706
|
Flyer color in seaborn boxplot with palette
|
<p>I have a seaborn boxplot with a categorical variable to use for hue and a dictionary with a color for each category given as the palette argument. MWE:</p>
<pre><code>import seaborn as sns
from matplotlib import pyplot as plt
cdict = {"First" : "gold",
"Second": "blue",
"Third" : "red"}
df = sns.load_dataset("titanic")[["sex","fare","class"]]
fig, ax = plt.subplots()
sns.boxplot(data=df, x="sex", y="fare", hue="class",palette=cdict, ax=ax)
plt.show()
</code></pre>
<p>Giving:
<a href="https://i.sstatic.net/JXBBnW2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JXBBnW2C.png" alt="boxplot MWE" /></a>
I would like to have the fliers in the same color as the boxes (face or edge color). My plot is relatively crowded, so without this it is difficult to quickly see which outlier corresponds to which category.</p>
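A hedged sketch of the underlying idea in plain matplotlib (synthetic data, illustrative names, not the seaborn API): when each group is drawn with its own `boxplot` call, `flierprops` can be set per group, so the outlier markers match the box colour. With seaborn's single call you would instead walk the axes' `Line2D` artists after plotting and recolour each flier by hand, which is version-dependent.

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

cdict = {"First": "gold", "Second": "blue", "Third": "red"}
rng = np.random.default_rng(0)

fig, ax = plt.subplots()
flier_colors = []
for i, (name, color) in enumerate(cdict.items()):
    data = rng.exponential(50.0, size=200)  # skewed data so fliers appear
    res = ax.boxplot(
        [data],
        positions=[i],
        patch_artist=True,                      # boxes become fillable patches
        boxprops=dict(facecolor=color),
        flierprops=dict(marker="o", markerfacecolor=color, markeredgecolor=color),
    )
    flier_colors.append(res["fliers"][0].get_markerfacecolor())
ax.set_xticks(range(len(cdict)))
ax.set_xticklabels(cdict.keys())
```

The per-call `flierprops` is the key: matplotlib applies it only to that call's fliers, so each category's outliers inherit its colour.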
|
<python><matplotlib><colors><seaborn>
|
2024-10-29 14:43:02
| 1
| 1,415
|
NicoH
|
79,137,578
| 695,134
|
VSCode fails with venv but works from terminal window manually
|
<p>I'm using Python 3.12.3 and the latest VSCode, and from a (Windows) terminal I can set up and run venv fine. If I am in VSCode, go to the terminal window and run all my commands (x\Scripts\activate.bat), it switches fine and uses the virtual environment.</p>
<p>However, it is failing within vscode in both ways I know how:</p>
<ol>
<li><p>If I start VSCode in a folder where there is a venv, it says 'I can see you have a venv, do you want to switch to it'. When I select yes, it generates no error, but it has not switched and everything still uses the global environment</p>
</li>
<li><p>If I run the VSCode command 'create virtual environment' (the way I want to use venv in VSCode) and select venv as the option, it falls over as shown in the log below. Note that pip works just fine: from the same terminal window in VSCode I can manually upgrade pip, and it was actually already on the latest version prior to this script running</p>
</li>
</ol>
<p>So it is only when VSCode runs the commands itself that things fail.</p>
<p>Any thoughts appreciated.</p>
<pre><code>2024-10-29 14:02:12.434 [info] Selected workspace c:\DATA_NODRIVE\Projects\testproject for creating virtual environment.
2024-10-29 14:02:12.439 [info] Starting Environment refresh
2024-10-29 14:02:12.440 [info] Searching for windows registry interpreters
2024-10-29 14:02:12.440 [info] Searching windows known paths locator
2024-10-29 14:02:12.441 [info] Searching for pyenv environments
2024-10-29 14:02:12.441 [info] Searching for conda environments
2024-10-29 14:02:12.441 [info] Searching for global virtual environments
2024-10-29 14:02:12.441 [info] Searching for custom virtual environments
2024-10-29 14:02:12.442 [info] Searching for windows store envs
2024-10-29 14:02:12.448 [info] > hatch env show --json
2024-10-29 14:02:12.448 [info] cwd: .
2024-10-29 14:02:12.453 [info] pyenv is not installed
2024-10-29 14:02:12.453 [info] Finished searching for pyenv environments: 13 milliseconds
2024-10-29 14:02:12.460 [info] Finished searching for custom virtual envs: 20 milliseconds
2024-10-29 14:02:12.460 [info] Finished searching for windows store envs: 20 milliseconds
2024-10-29 14:02:12.463 [info] Finished searching for global virtual envs: 23 milliseconds
2024-10-29 14:02:12.483 [info] > C:\DATA_NODRIVE\applications\python312\python.exe -I ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\get_output_via_markers.py ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\interpreterInfo.py
2024-10-29 14:02:12.505 [info] Finished searching for windows registry interpreters: 54 milliseconds
2024-10-29 14:02:12.505 [info] > C:\DATA_NODRIVE\applications\python310\python.exe -I ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\get_output_via_markers.py ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\interpreterInfo.py
2024-10-29 14:02:12.514 [info] Finished searching windows known paths locator: 76 milliseconds
2024-10-29 14:02:12.514 [info] Environments refresh paths discovered (event): 76 milliseconds
2024-10-29 14:02:12.514 [info] Environments refresh paths discovered: 76 milliseconds
2024-10-29 14:02:12.628 [info] > C:\DATA_NODRIVE\applications\python\python.exe -I ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\get_output_via_markers.py ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\interpreterInfo.py
2024-10-29 14:02:12.718 [info] Environments refresh finished (event): 280 milliseconds
2024-10-29 14:02:12.740 [info] Environment refresh took 304 milliseconds
2024-10-29 14:02:12.774 [warning] Identifier for virt-virtualenv failed to identify c:\DATA_NODRIVE\Projects\testproject\neil\Scripts\python.exe [Error: ENOENT: no such file or directory, scandir 'c:\DATA_NODRIVE\Projects\testproject\neil\Scripts'] {
errno: -4058,
code: 'ENOENT',
syscall: 'scandir',
path: 'c:\\DATA_NODRIVE\\Projects\\testproject\\neil\\Scripts'
}
2024-10-29 14:02:12.795 [info] > .\neil\Scripts\python.exe -I ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\get_output_via_markers.py ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\interpreterInfo.py
2024-10-29 14:02:12.837 [error] Error: Command failed: c:\DATA_NODRIVE\Projects\testproject\neil\Scripts\python.exe -I c:\Users\neil\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\get_output_via_markers.py c:\Users\neil\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\interpreterInfo.py
The system cannot find the path specified.
at genericNodeError (node:internal/errors:984:15)
at wrappedFn (node:internal/errors:538:14)
at ChildProcess.exithandler (node:child_process:423:12)
at ChildProcess.emit (node:events:531:35)
at maybeClose (node:internal/child_process:1105:16)
at Process.ChildProcess._handle.onexit (node:internal/child_process:305:5) {
code: 1,
killed: false,
signal: null,
cmd: 'c:\\DATA_NODRIVE\\Projects\\testproject\\neil\\Scripts\\python.exe -I c:\\Users\\neil\\.vscode\\extensions\\ms-python.python-2024.16.1-win32-x64\\python_files\\get_output_via_markers.py c:\\Users\\neil\\.vscode\\extensions\\ms-python.python-2024.16.1-win32-x64\\python_files\\interpreterInfo.py'
}
2024-10-29 14:02:13.927 [info] Selected interpreter C:\DATA_NODRIVE\applications\python312\python.exe for creating virtual environment.
2024-10-29 14:02:14.018 [info] Running Env creation script: [
'C:\\DATA_NODRIVE\\applications\\python312\\python.exe',
'c:\\Users\\neil\\.vscode\\extensions\\ms-python.python-2024.16.1-win32-x64\\python_files\\create_venv.py',
'--git-ignore'
]
2024-10-29 14:02:14.018 [info] > C:\DATA_NODRIVE\applications\python312\python.exe ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\create_venv.py --git-ignore
2024-10-29 14:02:14.018 [info] cwd: .
2024-10-29 14:02:14.179 [info] Running: C:\DATA_NODRIVE\applications\python312\python.exe -m venv .venv
2024-10-29 14:02:14.847 [info] > .\neil\Scripts\python.exe -I ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\get_output_via_markers.py ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\interpreterInfo.py
2024-10-29 14:02:14.897 [error] Error: Command failed: c:\DATA_NODRIVE\Projects\testproject\neil\Scripts\python.exe -I c:\Users\neil\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\get_output_via_markers.py c:\Users\neil\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\interpreterInfo.py
The system cannot find the path specified.
at genericNodeError (node:internal/errors:984:15)
at wrappedFn (node:internal/errors:538:14)
at ChildProcess.exithandler (node:child_process:423:12)
at ChildProcess.emit (node:events:531:35)
at maybeClose (node:internal/child_process:1105:16)
at Process.ChildProcess._handle.onexit (node:internal/child_process:305:5) {
code: 1,
killed: false,
signal: null,
cmd: 'c:\\DATA_NODRIVE\\Projects\\testproject\\neil\\Scripts\\python.exe -I c:\\Users\\neil\\.vscode\\extensions\\ms-python.python-2024.16.1-win32-x64\\python_files\\get_output_via_markers.py c:\\Users\\neil\\.vscode\\extensions\\ms-python.python-2024.16.1-win32-x64\\python_files\\interpreterInfo.py'
}
2024-10-29 14:02:14.937 [warning] Identifier for virt-virtualenv failed to identify c:\DATA_NODRIVE\Projects\testproject\neil\Scripts\python.exe [Error: ENOENT: no such file or directory, scandir 'c:\DATA_NODRIVE\Projects\testproject\neil\Scripts'] {
errno: -4058,
code: 'ENOENT',
syscall: 'scandir',
path: 'c:\\DATA_NODRIVE\\Projects\\testproject\\neil\\Scripts'
}
2024-10-29 14:02:15.389 [info] > .\.venv\Scripts\python.exe -I ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\get_output_via_markers.py ~\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\interpreterInfo.py
2024-10-29 14:02:15.390 [info] Environments refresh paths discovered: 23 milliseconds
2024-10-29 14:02:15.566 [info] Environments refresh finished (event): 199 milliseconds
2024-10-29 14:02:23.410 [info] CREATED_VENV:c:\DATA_NODRIVE\Projects\testproject\.venv\Scripts\python.exe
2024-10-29 14:02:23.411 [info] Creating: c:\DATA_NODRIVE\Projects\testproject\.venv\.gitignore
CREATE_VENV.UPGRADING_PIP
Running: c:\DATA_NODRIVE\Projects\testproject\.venv\Scripts\python.exe -m pip install --upgrade pip
2024-10-29 14:02:24.071 [info] Requirement already satisfied: pip in c:\data_nodrive\projects\testproject\.venv\lib\site-packages (24.0)
2024-10-29 14:02:24.561 [info] Collecting pip
2024-10-29 14:02:24.578 [info] Using cached pip-24.3.1-py3-none-any.whl.metadata (3.7 kB)
2024-10-29 14:02:24.610 [info] Using cached pip-24.3.1-py3-none-any.whl (1.8 MB)
2024-10-29 14:02:24.693 [info] Installing collected packages: pip
2024-10-29 14:02:24.694 [info] Attempting uninstall: pip
2024-10-29 14:02:24.699 [info] Found existing installation: pip 24.0
2024-10-29 14:02:24.912 [info] Uninstalling pip-24.0:
2024-10-29 14:02:25.843 [info] ERROR: Could not install packages due to an OSError: [WinError 32] The process cannot access the file because it is being used by another process: 'c:\\data_nodrive\\projects\\testproject\\.venv\\lib\\site-packages\\pip\\_internal\\resolution'
Check the permissions.
2024-10-29 14:02:25.849 [info] WARNING: Ignoring invalid distribution ~ip (c:\DATA_NODRIVE\Projects\testproject\.venv\Lib\site-packages)
2024-10-29 14:02:25.874 [info] WARNING: Ignoring invalid distribution ~ip (c:\DATA_NODRIVE\Projects\testproject\.venv\Lib\site-packages)
2024-10-29 14:02:25.996 [info] Traceback (most recent call last):
File "c:\Users\neil\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\create_venv.py", line 96, in run_process
2024-10-29 14:02:25.997 [info] subprocess.run(args, cwd=os.getcwd(), check=True) # noqa: PTH109
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\DATA_NODRIVE\applications\python312\Lib\subprocess.py", line 571, in run
2024-10-29 14:02:26.024 [info] raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['c:\\DATA_NODRIVE\\Projects\\testproject\\.venv\\Scripts\\python.exe', '-m', 'pip', 'install', '--upgrade', 'pip']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\neil\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\create_venv.py", line 262, in <module>
main(sys.argv[1:])
File "c:\Users\neil\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\create_venv.py", line 245, in main
upgrade_pip(venv_path)
File "c:\Users\neil\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\create_venv.py", line 134, in upgrade_pip
run_process(
File "c:\Users\neil\.vscode\extensions\ms-python.python-2024.16.1-win32-x64\python_files\create_venv.py", line 98, in run_process
raise VenvError(error_message) from exc
VenvError: CREATE_VENV.UPGRADE_PIP_FAILED
2024-10-29 14:02:26.039 [error] Error while running venv creation script: CREATE_VENV.UPGRADE_PIP_FAILED
2024-10-29 14:02:26.039 [error] CREATE_VENV.UPGRADE_PIP_FAILED
</code></pre>
|
<python><visual-studio-code><virtualenv>
|
2024-10-29 14:09:52
| 1
| 6,898
|
Neil Walker
|
79,137,342
| 3,728,901
|
Kaggle: How to export old environment with existing download files?
|
<p>In my old notebook I run</p>
<pre><code>!pip freeze > kaggle_environment.txt
</code></pre>
<p>result: <a href="https://gist.githubusercontent.com/donhuvy/eecbaed4da2c04f42ae6960eff897e38/raw/a920cdf67f8c3e139e226ee4cdc7f1c498734a2e/kaggle_environment.txt" rel="nofollow noreferrer">https://gist.githubusercontent.com/donhuvy/eecbaed4da2c04f42ae6960eff897e38/raw/a920cdf67f8c3e139e226ee4cdc7f1c498734a2e/kaggle_environment.txt</a></p>
<p>I create a new Kaggle notebook that has this code block:</p>
<pre><code>!pip install -r https://gist.githubusercontent.com/donhuvy/eecbaed4da2c04f42ae6960eff897e38/raw/a920cdf67f8c3e139e226ee4cdc7f1c498734a2e/kaggle_environment.txt
</code></pre>
<p>error</p>
<pre><code>Obtaining bq_helper from git+https://github.com/SohierDane/BigQuery_Helper@8615a7f6c1663e7f2d48aa2b32c2dbcb600a440f#egg=bq_helper (from -r https://gist.githubusercontent.com/donhuvy/eecbaed4da2c04f42ae6960eff897e38/raw/a920cdf67f8c3e139e226ee4cdc7f1c498734a2e/kaggle_environment.txt (line 46))
Cloning https://github.com/SohierDane/BigQuery_Helper (to revision 8615a7f6c1663e7f2d48aa2b32c2dbcb600a440f) to ./src/bq-helper
Running command git clone --filter=blob:none --quiet https://github.com/SohierDane/BigQuery_Helper /kaggle/working/src/bq-helper
Running command git rev-parse -q --verify 'sha^8615a7f6c1663e7f2d48aa2b32c2dbcb600a440f'
Running command git fetch -q https://github.com/SohierDane/BigQuery_Helper 8615a7f6c1663e7f2d48aa2b32c2dbcb600a440f
Resolved https://github.com/SohierDane/BigQuery_Helper to commit 8615a7f6c1663e7f2d48aa2b32c2dbcb600a440f
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [14 lines of output]
error: Multiple top-level modules discovered in a flat-layout: ['test_helper', 'version', 'bq_helper'].
To avoid accidental inclusion of unwanted files or directories,
setuptools will not proceed with this build.
If you are trying to create a single distribution with multiple modules
on purpose, you should not rely on automatic discovery.
Instead, consider the following options:
1. set up custom discovery (`find` directive with `include` or `exclude`)
2. use a `src-layout`
3. explicitly set `py_modules` or `packages` with a list of names
To find more information, look for "package discovery" on setuptools docs.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p><a href="https://i.sstatic.net/gwurirqI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwurirqI.png" alt="enter image description here" /></a></p>
<p>How can I export the old Kaggle environment with its existing downloaded files, and how can I import it into a new Kaggle notebook?</p>
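A hedged sketch of one workaround: a `pip freeze` from Kaggle contains editable/VCS pins (such as the `bq_helper` git line) that a plain `pip install -r` cannot rebuild, which is what the metadata error above shows. Filtering those lines out first lets the remaining pins install; the VCS packages then need to be handled one by one. File names and the sample contents below are illustrative.

```python
# Write a tiny sample freeze file (stands in for kaggle_environment.txt).
sample = """\
numpy==1.26.4
-e git+https://github.com/SohierDane/BigQuery_Helper@8615a7f6c1663e7f2d48aa2b32c2dbcb600a440f#egg=bq_helper
pandas==2.1.4
"""
with open("kaggle_environment.txt", "w") as f:
    f.write(sample)

def clean_requirements(path):
    """Drop editable installs and direct VCS references from a freeze file."""
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    return [line for line in lines if not line.startswith("-e") and "git+" not in line]

kept = clean_requirements("kaggle_environment.txt")
with open("clean_requirements.txt", "w") as f:
    f.write("\n".join(kept))
# then, in the new notebook: !pip install -r clean_requirements.txt
```

This does not recreate the old image exactly; it only restores the pip-installable subset, which is usually enough to rerun a notebook.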
|
<python><kaggle>
|
2024-10-29 13:03:24
| 1
| 53,313
|
Vy Do
|
79,137,181
| 7,709,727
|
How to get server's IP and port from Flask Python code
|
<p>I have a very simple Flask app (from <a href="https://flask.palletsprojects.com/en/stable/quickstart/" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/stable/quickstart/</a>)</p>
<pre class="lang-py prettyprint-override"><code># File name: app.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello_world():
return "<p>Hello, World!</p>"
print("your server is running at:", "<Insert code here>")
</code></pre>
<p>I am using commands like <code>flask run --host 127.0.0.1 --port 8200</code> to run my app server. I am trying to access the IP and port the development server is listening on from the last line of the Python code. Is it possible? How should I do it?</p>
<p>What I am looking for:</p>
<pre><code>$ flask run --host 127.0.0.1 --port 8200
your server is running at: http://127.0.0.1:8200
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:8200
Press CTRL+C to quit
</code></pre>
<p>The first line is printed by my code. The rest of the lines are printed by Werkzeug.</p>
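A hedged sketch of one workaround, not the `flask run` path: the CLI owns the host and port and parses them after your module is imported, but when the server is started from Python with `app.run`, they are ordinary variables in your code, so the banner can be printed before starting. The `app.run` call is commented out here so the snippet doesn't block; you would launch with `python app.py` instead of `flask run`.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "<p>Hello, World!</p>"

# When the app is launched with `python app.py` instead of `flask run`,
# host and port are plain variables, so the banner is trivial to print.
host, port = "127.0.0.1", 8200
print(f"your server is running at: http://{host}:{port}")
# app.run(host=host, port=port)  # left commented so this snippet doesn't block
```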
|
<python><flask><werkzeug>
|
2024-10-29 12:20:23
| 1
| 1,570
|
Eric Stdlib
|
79,137,121
| 1,035,897
|
What API should I use to manage Alerts and Alert Rules in Azure?
|
<p>Looking in Azure portal I see this:</p>
<p><a href="https://i.sstatic.net/pBPwUpWf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBPwUpWf.png" alt="Alerts and Alert Rules in Azure Portal" /></a></p>
<p>Further I have run with great success the following cli commands:</p>
<pre><code>az monitor metrics alert list --subscription "$sub"
az monitor activity-log alert list --subscription "$sub"
</code></pre>
<p>But for the life of me I cannot seem to find the correct API for managing alerts and alert rules using Python. Looking in the API reference, I see this:</p>
<p><a href="https://i.sstatic.net/JHTBAQ2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JHTBAQ2C.png" alt="Azure API categories relevant for alerts and alert rules" /></a></p>
<p>Please note <strong>Alert Processing Rule</strong> is mentioned but not Alert or Alert rule. Monitors are also mentioned. <em>Are Monitors Alerts</em>?</p>
<p>I have wrangled with online example code, API documentation and trippy LLMs to come up with this abomination:</p>
<pre><code>def alert_rules(self, tag: str = None, do_debug=False):
    """
    List metric alerts across all subscriptions
    """
    rules = list()
    for subscription_id in self.subscriptions:
        if do_debug:
            logger.info(f"Looking for alert rules in sub {subscription_id}")
        try:
            monitor_client = self._monitor_client_for_sub(subscription_id)
            if monitor_client:
                if do_debug:
                    metric_alerts = list(monitor_client.metric_alerts.list_by_subscription())
                    scheduled_query_alerts = list(monitor_client.scheduled_query_rules.list_by_subscription())
                    logger.info(f" Metric alerts retrieved: {devtools.pformat(metric_alerts)}")
                    logger.info(f" Scheduled query alerts retrieved: {devtools.pformat(scheduled_query_alerts)}")
                resource_client = self._resource_client_for_sub(subscription_id)
                if resource_client:
                    resource_groups = [rg.name for rg in resource_client.resource_groups.list()]
                    for rg in resource_groups:
                        if do_debug:
                            logger.info(f" Looking for alert rules in rg {rg}")
                        for rule in monitor_client.scheduled_query_rules.list_by_resource_group(rg):
                            logger.debug(f" {rule}")
                            rules.append(rule)
                else:
                    if do_debug:
                        logger.warning(f" No resource group client for {subscription_id}")
                # List all Metric Alert Rules
                for rule in monitor_client.metric_alerts.list_by_subscription():
                    logger.debug(f" Metric Alert Rule: {rule}")
                    rules.append(rule)
                # List all Scheduled Query (Log) Alert Rules
                for rule in monitor_client.scheduled_query_rules.list_by_subscription():
                    logger.debug(f" Scheduled Query Alert Rule: {rule}")
                    rules.append(rule)
                logs_query_client = self._logs_query_client_for_sub(subscription_id)
                if do_debug:
                    # logger.warning(f"Logs query client for {subscription_id}: {devtools.pformat(logs_query_client.__dict__)}")
                    pass
            else:
                if do_debug:
                    logger.warning(f" No monitor client for {subscription_id}")
        except Exception as e:
            logger.error(f"* Error listing alert rules for sub {subscription_id}: {e}")
            return None
    if tag:
        if do_debug:
            logger.info(f"We have a tag to match: {tag}")
        alert_rules_with_tag = []
        for rule in rules:
            if rule.tags and tag in rule.tags:
                alert_rules_with_tag.append(rule)
        rules = alert_rules_with_tag
    else:
        if do_debug:
            logger.info(f"No tag")
    processed = list()
    for rule in rules:
        if do_debug:
            logger.info(f"Processing rule: {rule}")
        processed.append(self.process_rule(rule))
    return processed
</code></pre>
<p>It is a member function of my client class that tries desperately through many different means to simply list alert rules. It produces an empty list even when the cli command produces a good output.</p>
<p>So my question is, does this API exist? How should I list alert rules for my subscriptions?</p>
|
<python><azure><azure-python-sdk><alert-rules>
|
2024-10-29 12:01:53
| 1
| 9,788
|
Mr. Developerdude
|
79,137,093
| 25,606,333
|
Can dependency injection be used with django?
|
<p>I am very fond of using the technique of dependency injection to decouple my code. This typically involves passing an object providing some functionality into the constructor of its dependent.</p>
<p>For the first time, I am using django to make a web api (with a database of objects attached). I intended to inject an elaborate dependency into the otherwise simple method. (in my case it was functionality to interpret messages coming from RabbitMQ exchanges, but my minimal example is just interpreting a generic message as a site-dependent dictionary).</p>
<p>However, in Django everything seems to be autogenerated from either static methods or class definitions, and I couldn't find where anything is actually instantiated or customisable enough to push the dependency in.</p>
<p>Are the technique and the Django framework just incompatible, or am I missing something?</p>
<h2>Code so far</h2>
<p>(minimal example recreation, not actual code)</p>
<p>in urls.py:</p>
<pre><code>urlpatterns = [
    path("run/", views.run),
]
</code></pre>
<p>in views.py</p>
<pre><code>def run(request):
    interpreter = AbstractDataInterpreter()  # This is the object I want to inject
    data = interpreter.interpret(request)
    return JsonResponse(data, safe=False)
</code></pre>
<p>I have a test class
<code>TestDataInterpreter</code> to use for testing.</p>
<p>I have a class
<code>CustomDataInterpreter</code>
custom for my domain/ecosystem.</p>
<p>I plan for for other interpreters on different deployments going forward.</p>
<p>But I can't find the mechanism to actually inject an interpreter into the run command on the different deployments.</p>
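A framework-agnostic sketch of one way to do the injection: bind the dependency at URL-registration time with `functools.partial`, so views stay plain functions and each deployment's urls.py picks its own interpreter. The class and function names mirror the question; none of them come from Django itself, and the Django-specific pieces are shown only as comments.

```python
from functools import partial

# Stand-in for the question's test interpreter; interpret() returns a dict.
class TestDataInterpreter:
    def interpret(self, request):
        return {"message": str(request)}

# The view takes its dependency as a parameter instead of constructing it.
def run(request, interpreter):
    data = interpreter.interpret(request)
    return data  # in Django this would be: JsonResponse(data, safe=False)

# In urls.py each deployment binds its own interpreter, e.g.:
#   path("run/", partial(run, interpreter=CustomDataInterpreter()))
run_view = partial(run, interpreter=TestDataInterpreter())
```

Because `partial(run, interpreter=...)` is itself a callable taking only `request`, it slots into `urlpatterns` wherever a view function would; tests can bind `TestDataInterpreter` the same way.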
|
<python><django><dependency-injection>
|
2024-10-29 11:53:17
| 2
| 1,064
|
Neil Butcher
|
79,136,816
| 16,891,669
|
Pyspark force schema while reading parquet
|
<p>I have a parquet source with the 'Year' column as long and I have specified it as int in my table. While reading parquet I specify the schema of the table to force it but it gives an error instead.</p>
<pre class="lang-py prettyprint-override"><code>df = (
spark
.read
.schema(spark.table(f'{CATALOG_NAME}.{BRONZE_SCHEMA}.{SLA_TARGET}').schema)
.parquet(RAW_EXTERNAL_LOCATION_PATH + f"/{SLA_TARGET}/{SLA_TARGET}.parquet")
)
</code></pre>
<p>It gives the error</p>
<pre><code>SchemaColumnConvertNotSupportedException: column: [Year], physicalType: INT64, logicalType: int
</code></pre>
<p>I don't want to read and then cast it. Why can't it forcefully cast the column? I am using Databricks - 14.3LTS, Spark - 3.5.0</p>
<p><em>I don't think we need to set <code>vectorizedReader</code> to false, because according to <a href="https://kb.databricks.com/scala/spark-job-fail-parquet-column-convert" rel="nofollow noreferrer">this</a> blog that is only required for decimal types. But even if I do <code>spark.conf.set("spark.sql.parquet.enableVectorizedReader","false")</code>, it is still unable to read the parquet file correctly and gives the error - <code>FileReadException: Error while reading file abfss:REDACTED_LOCAL_PART@satrinitydevgfsst01.dfs.core.windows.net/gfs_reporting_raw/sla_target/sla_target.parquet. Possible cause: Parquet column cannot be converted. Caused by: ParquetDecodingException: Can not read value at 1 in block 0 in file abfss:/...</code></em></p>
|
<python><apache-spark><pyspark><databricks>
|
2024-10-29 10:38:45
| 1
| 597
|
Dhruv
|
79,136,697
| 6,734,243
|
How to make a class extension detected by IntelliSense?
|
<h2>Context</h2>
<p>I'm building an extension package on top of a Google API called Google Earth Engine: <a href="https://geetools.readthedocs.io/en/latest/" rel="nofollow noreferrer"><code>geetools</code></a>. In short, I want to add to an existing package's objects extra functions for operations that are repetitive or complicated for new users.</p>
<p>The choice of using extensions is dictated by the client vs. server interaction of this API, and I cannot use a simple class definition if I want to remain server-side. The complete explanation and inspiration are documented here: <a href="https://geetools.readthedocs.io/en/latest/setup/pattern.html" rel="nofollow noreferrer">https://geetools.readthedocs.io/en/latest/setup/pattern.html</a></p>
<h2>Issue</h2>
<p>My extensions work perfectly, but they remain undetected by IntelliSense, so I see no documentation or prototype on hover in VSCode; I cannot "go to definition" or even see the parameter names, which makes them very difficult to use.</p>
<h2>reproduce example</h2>
<p>This is a completely unnecessary overload of an empty class, but it shows exactly what I perform and where it fails:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable

class Toto(object):
    def __init__(self):
        self.a = 1

    def print(self):
        print(self.a)

def register_class_accessor(klass: type, name: str) -> Callable:
    """Create an accessor through the provided namespace to a given class.

    Parameters:
        klass: The class to set the accessor to.
        name: The name of the accessor namespace

    Returns:
        The accessor function to the class.
    """
    def decorator(accessor: Callable) -> object:
        class ClassAccessor:
            def __init__(self, name: str, accessor: Callable):
                self.name, self.accessor = name, accessor

            def __get__(self, obj: object, *args) -> object:
                return self.accessor(obj)

        # check if the accessor already exists for this class
        if hasattr(klass, name):
            raise AttributeError(f"Accessor {name} already exists for {klass}")

        # register the accessor to the class
        setattr(klass, name, ClassAccessor(name, accessor))
        return accessor

    return decorator

@register_class_accessor(Toto, "tools")
class Accessor:
    """Toolbox for the ``Toto`` class."""

    def __init__(self, obj: Toto):
        """Initialize the accessor."""
        self._obj = obj

    def tool_print(self):
        """Print the wrapped object."""
        print(f"tool object: {self._obj.a}")
</code></pre>
<p>The calls work:</p>
<pre class="lang-py prettyprint-override"><code>Toto().tools.tool_print()
>>>> tool object: 1
</code></pre>
<p>but lookup is not working:</p>
<p><a href="https://i.sstatic.net/iP79B2j8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iP79B2j8.png" alt="enter image description here" /></a></p>
<p>What should I modify in my code to make the accessor content detected?</p>
|
<python><visual-studio-code>
|
2024-10-29 10:06:15
| 0
| 2,670
|
Pierrick Rambaud
|
79,136,480
| 3,104,974
|
Explode / unpivot DataFrame containing count data to one row per item
|
<p>Context: Data transformation for a logistic regression problem. I have the following data structure:</p>
<pre><code>df = pd.DataFrame({"group": ["A", "B"], "total": [3, 5], "occurrence": [2, 1]})
</code></pre>
<p>I want to do something like <a href="https://pandas.pydata.org/docs/dev/user_guide/reshaping.html#explode" rel="nofollow noreferrer"><code>pd.explode</code></a>, but creating one row per item of <code>total</code>, i.e. 3+5 = 8 rows, where <code>occurrence</code> of the rows in each group hold <code>1</code> and the rest <code>0</code> (either in the <code>occurrence</code> column or a new target column).</p>
<p>Currently I'm doing it iteratively which is quite slow on large data:</p>
<pre><code>expanded = []
for ix, row in df.iterrows():
    for i in range(row["total"]):
        row["y"] = 1 if i < row["occurrence"] else 0
        expanded.append(row.copy())
df_out = pd.DataFrame(expanded).reset_index(drop=True)
df_out.drop(["total", "occurrence"], axis=1, inplace=True)
df_out
group y
0 A 1
1 A 1
2 A 0
3 B 1
4 B 0
5 B 0
6 B 0
7 B 0
</code></pre>
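<p>For reference, a vectorized sketch of the same transformation (an assumption-free rewrite of the loop above, not the only possible one; it uses NumPy's <code>repeat</code> to expand rows without a Python-level loop):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"group": ["A", "B"], "total": [3, 5], "occurrence": [2, 1]})

# Repeat each row `total` times, keeping only the grouping column.
out = df.loc[df.index.repeat(df["total"]), ["group"]].reset_index(drop=True)
# Position of each repeated row within its group: 0 .. total-1
counts = np.concatenate([np.arange(t) for t in df["total"]])
# Each group's occurrence count, repeated alongside
occ = np.repeat(df["occurrence"].to_numpy(), df["total"])
# The first `occurrence` rows of each group get y=1, the rest y=0.
out["y"] = (counts < occ).astype(int)
```

<p>This produces the same 8-row result as the iterative version, typically much faster on large frames.</p>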
|
<python><pandas><dataframe><logistic-regression>
|
2024-10-29 09:09:41
| 4
| 6,315
|
ascripter
|
79,136,433
| 1,782,553
|
Add external dependency to my project with UV
|
<p>I would like to add an external project (cloning a public github repository) as a dependency to my python project managed by UV with minimum changes (especially to the external repo).</p>
<p>The external project doesn't follow any Python project standard, is not meant to be installed, and just has a <code>requirements.txt</code> file describing its dependencies.</p>
<p><strong>Question:</strong> What is a good approach for this?</p>
<p><em>What I tried so far:</em></p>
<h2>Adding the dependency as a workspace member</h2>
<p>(reference: <a href="https://docs.astral.sh/uv/concepts/workspaces/" rel="nofollow noreferrer">https://docs.astral.sh/uv/concepts/workspaces/</a>)</p>
<p>I created a <code>pyproject.toml</code> inside the external repository folder</p>
<pre><code>[build-system]
requires = ["hatchling", "hatch-vcs"]
build-backend = "hatchling.build"

[project]
name = "other-repo"
version = "0.0.0"
requires-python = ">=3.7"
dynamic = ["dependencies"]
readme = "README.md"

[tool.uv]
package = true
dependencies = {file = ["requirements.txt"]}

[tool.hatch.build.targets.wheel]
packages = ["folder-with-code"]
</code></pre>
<p>And added it as a workspace member of my project with</p>
<pre><code>[project]
dependencies = [ ... , "other-repo" ]
...
[tool.uv.workspace]
members = ["external/other-repo"]
[tool.uv.sources]
other-repo = { workspace = true }
</code></pre>
<p>But I get a <code>ModuleNotFoundError</code> when trying to import the external repo.</p>
<h2>Adding the dependency as a path dependency</h2>
<p>(reference: <a href="https://docs.astral.sh/uv/concepts/workspaces/#when-not-to-use-workspaces" rel="nofollow noreferrer">https://docs.astral.sh/uv/concepts/workspaces/#when-not-to-use-workspaces</a>)</p>
<p>In my project's <code>pyproject.toml</code>, I added</p>
<pre><code>[project]
dependencies = [ ... , "other-repo" ]
...
[tool.uv.sources]
other-repo = { path = "external/other-repo" }
</code></pre>
<p>But I also get a <code>ModuleNotFoundError</code> when trying to import the external repo.</p>
|
<python><pyproject.toml><uv>
|
2024-10-29 08:58:42
| 0
| 1,647
|
Jav
|
79,136,379
| 12,545,137
|
Cannot modify Pandas dataframe read from Excel file
|
<p>I have a function to apply to a DataFrame as follows:</p>
<pre><code>import pandas as pd
from math import comb
from itertools import combinations
def find_match_total(row, total_col, sum_cols):
    size = len(row)
    data = row.loc[row.index.isin(sum_cols)].to_dict()
    total = row[total_col]
    keys = data.keys()
    values = data.values()
    break_outer_loop = False
    for i in range(2, size + 1):
        if break_outer_loop:
            break
        key_combinations = combinations(keys, i)
        value_combinations = combinations(values, i)
        for cols, values in zip(key_combinations, value_combinations):
            if sum(values) == total:
                break_outer_loop = True
                print(cols, values)
                for key in row.index:
                    if key not in cols and key in sum_cols:
                        print(key, row[key])
                        row[key] = 0.0 if row[key].dtype == 'float64' else 0
                break
</code></pre>
<p>These are two test data:</p>
<pre><code>data1={'a':[1,2,3,4,5,6],'b':[11,12,13,14,15,16],'x':[11,12,13,14,15,16],'c':[12,12,16,14,15,16],'f':[12,12,13,14,15,16]}
data2={'columns': ['c', 'a', 'b', 'x', 'f'],
'data': [[786.6238018663737,
589.9678513997803,
196.65595046659342,
0.016,
0.0],
[786.6238018663737, 589.9678513997803, 196.65595046659342, 0.016, 0.0],
[786.6238018663737, 589.9678513997803, 196.65595046659342, 0.016, 0.0],
[786.6238018663737, 589.9678513997803, 196.65595046659342, 0.016, 0.0],
[786.6238018663737, 589.9678513997803, 196.65595046659342, 0.016, 0.0],
[786.6238018663737, 589.9678513997803, 196.65595046659342, 0.016, 0.0],
[786.6238018663737, 589.9678513997803, 196.65595046659342, 0.016, 0.0],
[786.6238018663737, 589.9678513997803, 196.65595046659342, 0.016, 0.0]]}
</code></pre>
<p>Link for excel file: <a href="https://docs.google.com/spreadsheets/d/1Li96MPqG2bURyZ0BvM2VzhkDd8FIIbJe/edit?usp=sharing&ouid=107181646148635794966&rtpof=true&sd=true" rel="nofollow noreferrer">Excel file Google sheet Link</a></p>
<p>I create dataframes from data1, data2, and from the Excel file:</p>
<pre><code>df_test1= pd.DataFrame(data1)
df_test2= pd.DataFrame(data2['data'], columns=data2['columns'])
df_test3= pd.read_excel('D:/Book2.xlsx', sheet_name='Sheet1')
</code></pre>
<p>I compare <code>df_test2</code> and <code>df_test3</code>:</p>
<pre><code>print(df_test2==df_test3)
</code></pre>
<p>The result shows identical structure and values:</p>
<pre><code> c a b x f
0 True True True True True
1 True True True True True
2 True True True True True
3 True True True True True
4 True True True True True
5 True True True True True
6 True True True True True
7 True True True True True
</code></pre>
<p>I apply the function to each DataFrame:</p>
<pre><code>df_test1.apply(find_match_total, args=('c',['a','b','x']), axis=1)
df_test2.apply(find_match_total, args=('c',['a','b','x']), axis=1)
df_test3.apply(find_match_total, args=('c',['a','b','x']), axis=1)
</code></pre>
<p>I got weird results: df_test1 and df_test2 are modified in place as desired (with 0), but df_test3 is unchanged, even though df_test2 and df_test3 have the same data types and structure, as tested above.</p>
<pre><code>df_test1 is modified (0,x) ; (2,x)
a b x c f
0 1 11 0 12 12
1 2 12 12 12 12
2 3 13 0 16 13
3 4 14 14 14 14
4 5 15 15 15 15
5 6 16 16 16 16
df_test2 is modified => column x is set to 0
c a b x f
0 786.623802 589.967851 196.65595 0.0 0.0
1 786.623802 589.967851 196.65595 0.0 0.0
2 786.623802 589.967851 196.65595 0.0 0.0
3 786.623802 589.967851 196.65595 0.0 0.0
4 786.623802 589.967851 196.65595 0.0 0.0
5 786.623802 589.967851 196.65595 0.0 0.0
6 786.623802 589.967851 196.65595 0.0 0.0
7 786.623802 589.967851 196.65595 0.0 0.0
df_test3 read from excel is not modified
c a b x f
0 786.623802 589.967851 196.65595 0.016 0
1 786.623802 589.967851 196.65595 0.016 0
2 786.623802 589.967851 196.65595 0.016 0
3 786.623802 589.967851 196.65595 0.016 0
4 786.623802 589.967851 196.65595 0.016 0
5 786.623802 589.967851 196.65595 0.016 0
6 786.623802 589.967851 196.65595 0.016 0
7 786.623802 589.967851 196.65595 0.016 0
</code></pre>
|
<python><pandas><dataframe>
|
2024-10-29 08:37:00
| 0
| 584
|
Dinh Quang Tuan
|
79,136,314
| 10,132,474
|
how to let pip install show progress
|
<p>In the past, I installed my personal package with <code>setup.py</code>:</p>
<pre><code>python setup.py install
</code></pre>
<p>Now this method is deprecated, and I can only use <code>pip</code>:</p>
<pre><code>python -m pip install .
</code></pre>
<p>However, the method with <code>setup.py</code> can show install messages, but the <code>pip</code> method cannot.
For example, when there is C++ code which requires compiling the source code, the <code>setup.py</code> method can print warnings to the screen, but the <code>pip</code> method only lets you wait until everything is done.</p>
<p>Is there any method that I can see more messages with <code>pip</code> like the <code>setup.py</code> in the past?</p>
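<p>Assuming a reasonably recent pip, its verbosity flag surfaces the build output that <code>setup.py install</code> used to print (a sketch; check <code>pip install --help</code> for your version):</p>

```shell
# -v / --verbose streams build and compile output to the terminal;
# repeat it (-vv, -vvv) for progressively more detail.
python -m pip install -v .
```
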
|
<python><pip><setup.py>
|
2024-10-29 08:13:45
| 1
| 1,147
|
coin cheung
|
79,136,294
| 3,100,115
|
How to run tests using each package pyproject.toml configuration instead of the top level pyproject.toml?
|
<p>I'm using <a href="https://docs.astral.sh/uv/concepts/workspaces/" rel="nofollow noreferrer"><code>uv</code></a> workspace with a structure that looks like the following example from the doc.</p>
<pre><code>albatross
├── packages
│   ├── bird-feeder
│   │   ├── pyproject.toml
│   │   └── src
│   │       └── bird_feeder
│   │           ├── __init__.py
│   │           └── foo.py
│   └── seeds
│       ├── pyproject.toml
│       └── src
│           └── seeds
│               ├── __init__.py
│               └── bar.py
├── pyproject.toml
├── README.md
├── uv.lock
└── src
    └── albatross
        └── main.py
</code></pre>
<p>How do I tell <code>pytest</code> to use the configuration in each package <code>pyproject.toml</code> while running <code>pytest</code> from the root package(albatross in this case) directory?</p>
|
<python><pytest><uv>
|
2024-10-29 08:08:16
| 1
| 61,461
|
Sede
|
79,136,280
| 8,589,908
|
find_peaks - trimming both peak array and prominences
|
<p>I have two arrays of data from a scipy <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html" rel="nofollow noreferrer">signal.find_peaks</a> function.</p>
<pre><code> p, prom_data = signal.find_peaks(spec, prominence=0.000010)
prom = prom_data['prominences']
</code></pre>
<p><code>p</code> contains a list of indices where the peaks in <code>spec</code> are over the prominence and <code>prom</code> is a list of the heights of the peaks in <code>p</code>.</p>
<p>example:</p>
<pre><code>p = np.arr[ 3, 7, 10, 14, 16, 19, ]
prom = np.arr[ 99, 77, 100, 140, 50, 82 ]
</code></pre>
<p>Then some unnecessary data is trimmed from <code>p</code> with:</p>
<pre><code> # REMOVE PEAKS OUT OF BAND
p = p[(p >= 10) & (p <= 30)] # should remove 3 and 7
</code></pre>
<p>But this operation does not remove the corresponding data from <code>prom</code> (i.e. 99 and 77)</p>
<p><strong>Q:</strong> How can I remove the corresponding heights from <code>prom</code> ?</p>
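<p>One way to keep the two arrays aligned (a sketch using the example values above) is to build the boolean mask once and index both arrays with it:</p>

```python
import numpy as np

p = np.array([3, 7, 10, 14, 16, 19])
prom = np.array([99, 77, 100, 140, 50, 82])

# Build the mask once, then apply it to both arrays so the
# peak indices and their prominences stay in lockstep.
mask = (p >= 10) & (p <= 30)
p, prom = p[mask], prom[mask]
```
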
|
<python><arrays><scipy>
|
2024-10-29 08:04:48
| 1
| 499
|
Radika Moonesinghe
|
79,136,087
| 1,285,061
|
Django - get local listening IP address
|
<p>In Django, how can I get Django's local IP <code>192.168.1.200</code> that it is presently listening on?</p>
<p>I need to use the IP in Django logging format.</p>
<p>settings.py</p>
<pre><code>ALLOWED_HOSTS = ['192.168.1.200', '127.0.0.1']
</code></pre>
<p>Run server</p>
<pre><code>python manage.py runserver 192.168.1.200:8000
</code></pre>
|
<python><python-3.x><django>
|
2024-10-29 06:49:57
| 2
| 3,201
|
Majoris
|
79,136,054
| 1,300,818
|
Getting different SHA-224 values in Python and Kotlin Android
|
<p>I need to validate the hash value of a JSON response received from the backend. The hash is calculated correctly in Python and matches the backend server's hash, whereas I always get a different hash value in Kotlin on Android. I'm sharing the code snippets below.</p>
<p>Python:</p>
<pre><code>import json
import hashlib
import base64
msg = {
    "name": "Alice",
    "age": 25,
    "address": {
        "city": "Wonderland",
        "street": "Rabbit Hole"
    }
}
# Sort keys and ensure ASCII encoding
json_str = json.dumps(msg, ensure_ascii=False, sort_keys=True)
print(f"Serialized JSON (Python): {json_str}")
json_bin = json_str.encode('utf-8')
# Create SHA-224 hash and take first 16 bytes
hash_calculator = hashlib.sha224(json_bin)
hash_digest = hash_calculator.digest()[:16]
# Print hex digest
print(f"Hex digest (Python): {hash_digest.hex()}")
# Base64 encode
hash_encoded64 = base64.urlsafe_b64encode(hash_digest).decode('utf8')
print(f"Base64 encoded hash (Python): {hash_encoded64}")
</code></pre>
<p>But when I convert the same Python code to Kotlin and run it, I get a different value. Could anyone check this and let me know what is going wrong?</p>
<p>Kotlin:</p>
<pre><code>fun calculateHas() {
    // Original JSON message as a string
    val msg = """
        {
            "name": "Alice",
            "age": 25,
            "address": {
                "city": "Wonderland",
                "street": "Rabbit Hole"
            }
        }
    """.trim().trimIndent().replace("\n", "")

    // Parse the JSON string into a JSONObject
    val jsonObj = JSONObject(msg)

    // Serialize the JSONObject into a string with sorted keys
    val sortedJsonObj = JSONObject()
    val keys = jsonObj.keys().asSequence().toList().sorted()
    for (key in keys) {
        sortedJsonObj.put(key, jsonObj.get(key))
    }
    val sortedJsonStr = sortedJsonObj.toString().replace("\n", "").toByteArray(StandardCharsets.UTF_8)

    // Create a SHA-224 hash of the JSON byte array
    val sha224Digest = MessageDigest.getInstance("SHA-224")
    val hashBytes = sha224Digest.digest(sortedJsonStr)

    // Get the first 16 bytes of the hash
    val truncatedHash = hashBytes.sliceArray(0 until 16)

    // Encode the truncated hash in Base64 (URL-safe, without padding)
    val hashBase64 = Base64.encodeToString(truncatedHash, Base64.URL_SAFE or Base64.NO_WRAP)

    // Print the Base64-encoded hash
    println(hashBase64)
    Log.e("Code:", hashBase64)
}
</code></pre>
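<p>One likely source of the mismatch (an assumption worth checking, not a confirmed diagnosis): the two serializers do not produce byte-identical JSON, so the hashes of those bytes differ. A quick Python check:</p>

```python
import json

msg = {"b": 1, "a": 2}

# json.dumps inserts a space after ':' and ',' by default, while
# org.json's JSONObject.toString() emits compact JSON. Different
# bytes in, different SHA-224 out.
default = json.dumps(msg, sort_keys=True)
compact = json.dumps(msg, sort_keys=True, separators=(",", ":"))
print(default)  # {"a": 2, "b": 1}
print(compact)  # {"a":2,"b":1}
```

<p>Note also that the Kotlin loop above sorts only the top-level keys, while <code>sort_keys=True</code> in Python sorts nested objects recursively, which is a second way the serialized bytes can diverge.</p>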
|
<python><android><kotlin><hash><sha>
|
2024-10-29 06:32:07
| 1
| 662
|
Ganesh
|
79,136,032
| 5,790,653
|
How to sort a dictionary by using len() function
|
<p>This is a dictionary I have:</p>
<pre class="lang-py prettyprint-override"><code>mydict = {
    4: {'K', 'H'},
    2: {'Y', 'X', 'Q', 'U'},
    8: {'A', 'T', 'S'},
}
</code></pre>
<p>It seems there are some ways to sort a dictionary that I tried but didn't work:</p>
<pre class="lang-py prettyprint-override"><code>sorted([(value,key) for (key,value) in mydict.items()])
</code></pre>
<p>But I want to sort using the <code>len()</code> function, based on how many values each entry has.</p>
<p>This is the current <code>len()</code> output:</p>
<pre class="lang-py prettyprint-override"><code>for x in mydict: print(x, mydict[x], len(mydict[x]))
</code></pre>
<p>How can I sort the dictionary as expected?</p>
<p>Expected output would be:</p>
<pre class="lang-py prettyprint-override"><code>mydict = {
    2: {'Y', 'X', 'Q', 'U'},
    8: {'A', 'T', 'S'},
    4: {'K', 'H'},
}
</code></pre>
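<p>For reference, a one-line sketch of that sort: <code>sorted</code> with a <code>key</code> on the value-set size, rebuilt into a dict (this relies on dicts preserving insertion order, Python 3.7+):</p>

```python
mydict = {
    4: {'K', 'H'},
    2: {'Y', 'X', 'Q', 'U'},
    8: {'A', 'T', 'S'},
}

# Sort items by the size of each value set, largest first, then
# rebuild a dict in that order.
ordered = dict(sorted(mydict.items(), key=lambda kv: len(kv[1]), reverse=True))
```
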
|
<python>
|
2024-10-29 06:25:26
| 1
| 4,175
|
Saeed
|
79,135,844
| 3,464,927
|
pip3 upgrading Python dependencies - SSL error (unsafe legacy renegotiation failed)
|
<p>I'm trying to update some Python dependencies (mainly the remotior-sensus) using OSGeo4W, installed with QGIS 3.34.9. This is using Python 3.12.4.</p>
<p>I'm following instructions on this site: <a href="https://semiautomaticclassificationmanual.readthedocs.io/en/latest/installation_win64.html#qgis-download-and-installation" rel="nofollow noreferrer">https://semiautomaticclassificationmanual.readthedocs.io/en/latest/installation_win64.html#qgis-download-and-installation</a> and have managed to download the dependencies, but cannot install them.</p>
<p>Using</p>
<pre><code>pip3 install --upgrade remotior-sensus scikit-learn torch
</code></pre>
<p>I get the following:</p>
<pre><code>Requirement already satisfied: remotior-sensus in c:\progra~1\qgis33~1.9\apps\python312\lib\site-packages (0.3.5)
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/remotior-sensus/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/remotior-sensus/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/remotior-sensus/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/remotior-sensus/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/remotior-sensus/
Could not fetch URL https://pypi.org/simple/remotior-sensus/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/remotior-sensus/ (Caused by SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))) - skipping
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/scikit-learn/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/scikit-learn/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/scikit-learn/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/scikit-learn/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))': /simple/scikit-learn/
Could not fetch URL https://pypi.org/simple/scikit-learn/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))) - skipping
ERROR: Could not find a version that satisfies the requirement scikit-learn (from versions: none)
ERROR: No matching distribution found for scikit-learn
Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1000)'))) - skipping
</code></pre>
<p>Does anyone have any ideas? Just really need this installed so I can use one of the tools.</p>
|
<python><pip>
|
2024-10-29 04:36:01
| 2
| 769
|
user25730
|
79,135,473
| 894,067
|
Why is VSCode not recognizing imports that aren't top level imports?
|
<p>I have the following code in a Python file in VSCode:</p>
<pre class="lang-py prettyprint-override"><code># These work just fine
import tensorflow as tf
import numpy as np
from collections import Counter
# These have an error
from tensorflow.keras.models import Sequential
from sklearn.utils import resample
</code></pre>
<p>The error I'm getting is:</p>
<pre><code>Import "tensorflow.keras.models" could not be resolved Pylance (reportMissingImports)
Import "sklearn.utils" could not be resolved Pylance (reportMissingImports)
</code></pre>
<p>I have verified that TensorFlow is installed correctly, and that I'm using the correct Python in VSCode. Also if I wasn't it seems like it would fail to resolve the first one also.</p>
<p>I think the issue is that when you have those nested (not top level) imports, VSCode isn't working properly.</p>
<p>I read <a href="https://code.visualstudio.com/docs/python/editing#_troubleshooting" rel="nofollow noreferrer">here</a> that there is an option called <code>python.analysis.packageIndexDepths</code> in settings, which I have verified looks correct to me:</p>
<pre><code>"python.analysis.packageIndexDepths": [
    {
        "name": "sklearn",
        "depth": 2
    },
    {
        "name": "matplotlib",
        "depth": 2
    },
    {
        "name": "scipy",
        "depth": 2
    },
    {
        "name": "django",
        "depth": 2
    },
    {
        "name": "flask",
        "depth": 2
    },
    {
        "name": "fastapi",
        "depth": 2
    },
    {
        "name": "tensorflow",
        "depth": 4,
        "includeAllSymbols": true
    }
],
</code></pre>
<p>I have tried reloading the window as well without any success.</p>
<p>Finally, I have tried to remove <code>includeAllSymbols</code>, but that didn't seem to make any difference as well.</p>
<p>One more note, not that I think it makes a difference, but I'm using VSCode through SSH Connect to Host mode. I reinstalled all my extensions after doing that, so the Python extension does look to be enabled and working.</p>
<p>Any ideas how I can fix this?</p>
<hr />
<p>Edit:</p>
<p>Based on the comments, after adding <code># pyright: strict</code> to the very top of my file and running <code>pip install types-tensorflow</code> that fixed a lot of problems, but created a lot more errors and warnings in my script that I'll have to go back and fix.</p>
<p>However, I'm still getting a few import errors.</p>
<p>I left this out of my original code since I was trying to create a minimal example, but <code>from tensorflow.keras.optimizers import Adam</code> was failing with the same error before, those things I did fixed it.</p>
<p>However for <code>from tensorflow.keras.models import Sequential</code> the error has changed to:</p>
<pre><code>Import "tensorflow.keras.models" could not be resolved from source Pylance (reportMissingModuleSource)
</code></pre>
<p>The error for <code>from sklearn.utils import resample</code> has changed to:</p>
<pre><code>Import "sklearn.utils" could not be resolved from source Pylance (reportMissingModuleSource)
Type of "resample" is partially unknown
Type of "resample" is "(*arrays: Any, replace: bool = True, n_samples: int | signedinteger[_8Bit] | signedinteger[_16Bit] | signedinteger[_32Bit] | signedinteger[_64Bit] | None = None, random_state: RandomState | int | signedinteger[_8Bit] | signedinteger[_16Bit] | signedinteger[_32Bit] | signedinteger[_64Bit] | None = None, stratify: ndarray | DataFrame | Any | Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None) -> (list[ndarray] | None)" Pylance (reportUnknownVariableType)
</code></pre>
<p>Edit 2:</p>
<p>It looks like running <code>pip install scikit-learn</code> fixed the first (could not be resolved from source) issue, but the partially unknown issue is still happening.</p>
|
<python><visual-studio-code><pyright>
|
2024-10-29 00:28:50
| 0
| 20,944
|
Charlie Fish
|
79,135,371
| 11,865,845
|
Find the highest triplet
|
<p>I'm trying to create a function that receives an input list and returns the sum of the highest triplet. My code is meant to take the three highest unique values (no duplicates), add them together, and return the highest possible sum.
Not sure why, but the following function doesn't work when I run the tests on HackerRank.</p>
<p>Constraints:</p>
<p>1 ≤ n ≤ 4000</p>
<p>1 ≤ profit[i] ≤ 10^9</p>
<pre><code># Input samples:
# profit = [2, 10, 33, 2, 5]         # here, the result should be 5+10+33 = 48
# profit = [2, 1, 33, -6, 5, 10, 10] # here, the result should be 5+10+33 = 48 (ignore duplicate 10)

def getMaximumProfit(profit):
    # Debug the below code to return the required output.
    maxi = 0
    for i in range(len(profit)-2):
        total = sum(profit[i:i+3])
        if maxi > total:
            maxi += total
            total = 0
    return total
</code></pre>
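<p>For reference, one way the target behavior can be expressed (a sketch, not the fix HackerRank expects for the starter code; the name <code>get_maximum_profit</code> is mine, and it assumes "unique" means deduplicating by value):</p>

```python
def get_maximum_profit(profit):
    # Keep one copy of each value, take the three largest, sum them.
    return sum(sorted(set(profit), reverse=True)[:3])
```
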
|
<python>
|
2024-10-28 23:14:59
| 4
| 564
|
Peter
|
79,135,297
| 2,192,824
|
Type hint with string constant
|
<p>I sometimes see people using a string constant as a type hint, like below:</p>
<pre><code>def my_function(arg1: "some_string", arg2: int) -> None:
    pass
</code></pre>
<p>What does <code>arg1: "some_string"</code> mean? Under what condition should we use it like that?</p>
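<p>One common reason for a string annotation (though annotations can technically hold any expression) is a forward reference: the string names a type that is not yet defined at the point of annotation, and type checkers resolve it later. A minimal sketch:</p>

```python
class Node:
    # "Node" is written as a string because the class object is not
    # yet bound to the name Node while its own body is being evaluated.
    def add_child(self, child: "Node") -> "Node":
        self.child = child
        return child
```
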
|
<python><python-typing>
|
2024-10-28 22:36:11
| 1
| 417
|
Ames ISU
|
79,135,214
| 272,023
|
How to cross-compile Rust-based Python wheel
|
<p>I want to create a Python wheel that will run on both Windows and Linux (x86_64 and possibly ARM). I want to write the data processing code in Rust and have Python call the Rust code, so I am going to need to have different wheels for each target.</p>
<p>I can use <a href="https://github.com/PyO3/maturin" rel="nofollow noreferrer">maturin</a> to create Python wheels that make calls to Rust. I can also use <a href="https://rust-lang.github.io/rustup/cross-compilation.html" rel="nofollow noreferrer">cargo</a> to perform cross compilation of Rust code.</p>
<p>I am building this inside an environment where I don't have access to GitHub Actions, so I can't easily run CI build actions on multiple OSes.</p>
<p>Can I create multiple Python wheels on a Linux build server that target different OSes and if so, how do I do this?</p>
|
<python><rust><python-wheel>
|
2024-10-28 22:00:21
| 0
| 12,131
|
John
|
79,134,991
| 14,179,793
|
How to keep Python pexpect shell open
|
<p>I am trying to use pexpect to have a main processing loop that sends input to a number of other processes in order.</p>
<p><strong>Code</strong><br />
pexpect_test.py</p>
<pre><code>if __name__ == '__main__':
    count = 3
    processes = []
    for x in range(count):
        processes.append(pexpect.spawn('/bin/bash', logfile=sys.stdout.buffer))
    print(f'Processes Created: {len(processes)}')
    inputs = ['a', 'b', 'c']
    for input in inputs:
        print(f'Processing input: {input}')
        for process in processes:
            line = f'python test.py {input}'
            print(f'Sending Line: {line}')
            process.sendline(line)
            print('reading')
            res = process.read()
            print(f'res: {res}')
</code></pre>
<p>test.py</p>
<pre><code>if __name__ == '__main__':
    print(f'Process Started: {argv[1]} ')
    sleep(2)
    print(f'Process Complete: {argv[1]}')
</code></pre>
<p><strong>Results</strong></p>
<pre><code>$ python pexpect_test.py
Processes Created: 3
Processing input: a
Sending Line: python x.py a
python x.py a
reading
$ python x.py a
Process Started: a
Process Complete: a
$ ^CTraceback (most recent call last):
</code></pre>
<p>It looks like the process ends up just waiting at the spawned shell so I either have to use <code>control</code>+<code>c</code> or let it timeout. How could I leave the shells running while returning the flow of logic back to the main script?</p>
<p><strong>Simplified Example</strong></p>
<pre><code>if __name__ == '__main__':
    ps = pexpect.spawn('/bin/bash', logfile=sys.stdout.buffer)
    ps.sendline('python x.py 1 a')

    # Expect EOF
    ps.expect(pexpect.EOF)
    # - The child bash shell stays active and control does not go back to python code
    # - Will eventually timeout as expected and raise an exception
    # - Does correctly call python script and waits for the defined amount of time

    # Expect terminal character
    ps.expect_exact('$ ')
    # - The wait is skipped but the child shell does not remain active

    # Expect literal string printed in child process
    ps.expect_exact('Completed')
    # - Correctly spawns child thread and waits specified duration
    # - Returns control back to script
    # - The requirement to add a print statement containing a unique string is not desirable.
    # - There could be hundreds of different processes that need to be run and my group does not manage their code.

    # Expect the terminal character twice
    ps.expect('\$.*\$ ')
    # - For some reason this works exactly as I need it to.
</code></pre>
<p>Notes:</p>
<ul>
<li>Use <code>subprocess.run(...)</code>: Not performant enough. The above sample code uses <code>bin/bash</code> as a placeholder, the actual command I need to run takes 2-3 seconds to initialize and I have a lot of inputs to process(10000+).</li>
</ul>
|
<python><pexpect>
|
2024-10-28 20:18:08
| 2
| 898
|
Cogito Ergo Sum
|
79,134,979
| 1,880,463
|
Relocated Subpackages in PySNMP
|
<p>I inherited some code that used PySNMP version 4.4.12, and while not very familiar with SNMP, it is my job to get some scripts working with the latest version of PySNMP. One issue I have is there was a function or class named MibVariable in pysnmp.entity.rfc3413.oneliner.cmdgen and it has been either relocated or deprecated in recent versions. I found cmdgen was moved to the hlapi subpackage, but I can't find a replacement for what was called MibVariable. What should I do? I am using Python 3.10.</p>
|
<python><pysnmp>
|
2024-10-28 20:13:14
| 0
| 327
|
stesteve
|
79,134,933
| 54,873
|
What is the new best way to mass-set values of an existing dataframe in pandas?
|
<p>Imagine I have a dataframe where I wish to change the existing values based on their location in the frame (e.g. <code>iloc</code>):</p>
<pre><code>(Pdb) df = pd.DataFrame([["a", 1], ["b", 2], ["c", 3]], columns=["let", "num"])
(Pdb) df
let num
0 a 1
1 b 2
2 c 3
</code></pre>
<p>What is the proper way with copy-on-write to do it in <code>pandas</code>?</p>
<pre><code>(Pdb) df["num"].iloc[1:] = 22
<stdin>:1: FutureWarning: ChainedAssignmentError: behaviour will change in pandas 3.0!
You are setting values through chained assignment. Currently this works in certain cases, but when using Copy-on-Write (which will become the default behaviour in pandas 3.0) this will never work to update the original DataFrame or Series, because the intermediate object on which we are setting values will behave as a copy.
A typical example is when you are setting values in a column of a DataFrame, like:
df["col"][row_indexer] = value
Use `df.loc[row_indexer, "col"] = values` instead, to perform the assignment in a single step and ensure this keeps updating the original `df`.
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
(Pdb) df
let num
0 a 1
1 b 22
2 c 22
</code></pre>
<p>I can do it today but with an ominous warning!</p>
<p>Thanks,
/YGA</p>
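For what it's worth, the single-step `.loc` assignment that the warning suggests looks like this on the same toy frame (a minimal sketch):

```python
import pandas as pd

df = pd.DataFrame([["a", 1], ["b", 2], ["c", 3]], columns=["let", "num"])
# One indexing step: row slice and column label together, so pandas
# writes into the original frame instead of a chained intermediate.
df.loc[1:, "num"] = 22
print(df["num"].tolist())  # [1, 22, 22]
```

Because the row indexer and the column label are passed in a single `.loc` call, there is no intermediate object, so the assignment keeps working under copy-on-write with no warning.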
|
<python><pandas>
|
2024-10-28 19:54:47
| 1
| 10,076
|
YGA
|
79,134,554
| 4,200,859
|
Working directory weirdness, no module named 'tools'?
|
<p>I'm really unclear about Python, imports, and working directory.</p>
<p>I'm running python from my root directory, which contains a folder <code>tools</code> and within that, two python files: <code>tool.py</code> and <code>env.py</code>.</p>
<p>My python run command is: <code>python tools/tool.py</code>, and within <code>tool.py</code> I have:</p>
<pre><code>import os
print("Current working directory:",os.getcwd())
from tools.env import setup_env
</code></pre>
<p>I'm getting an error <code>ModuleNotFoundError: No module named 'tools'</code>.</p>
<p>Am I misunderstanding something here? The print statement shows the working directory is the root directory, so shouldn't I be able to import from the <code>tools</code> directory? I've tried putting <code>__init__.py</code> in the tools directory, but that didn't fix it.</p>
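A note on what is happening, sketched in code: when you run <code>python tools/tool.py</code>, Python puts the script's own directory (<code>tools/</code>) at <code>sys.path[0]</code>, not the working directory, so the top-level <code>tools</code> package is not importable. Besides running <code>python -m tools.tool</code> from the root, one common workaround is to put the cwd on the path first:

```python
import os
import sys

# Running `python tools/tool.py` puts tools/ at sys.path[0], not the cwd,
# which is why `import tools.env` fails despite the cwd being the root.
# Workaround sketch: make the current working directory importable.
sys.path.insert(0, os.getcwd())
```

After this insert, `from tools.env import setup_env` resolves relative to the project root, regardless of which script was launched.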
|
<python>
|
2024-10-28 17:49:48
| 0
| 639
|
Max
|
79,133,856
| 4,343,563
|
How to order dataframe based on substrings in column?
|
<p>I have a dataframe with string rows. I want to sort the entire dataframe based on the strings in this column. However, some rows contain the full text of another row as a substring, which messes up the order. My dataframe looks like:</p>
<pre><code> col1 col2 col3 col4
Animal Tiger Cat Dog
Name Adam Grace Julia
Street Name1 Pine St Crown St Palm Ave
Street Name2 Grey St Tree St New St
Color Green Blue Yellow
Interest Yes No Yes
Low Interest No No Yes
High Interest Yes No Yes
City2 x z y
City1 m r t
</code></pre>
<p>I want to sort it so it looks like:</p>
<pre><code> col1 col2 col3 col4
Name Adam Grace Julia
Street Name1 Pine St Crown St Palm Ave
Street Name2 Grey St Tree St New St
City1 m r t
City2 x z y
Interest Yes No Yes
High Interest Yes No Yes
Low Interest No No Yes
Animal Tiger Cat Dog
Color Green Blue Yellow
</code></pre>
<p>I tried using:</p>
<pre><code>order = ['Name', 'Street Name', 'City', 'Interest','High Interest','Low Interest', 'Animal', 'Color']
df['order'] = df['col1'].apply(lambda x: next(i for i, o in enumerate(order) if o in x))
df = df.sort_values(by = 'order').drop(columns = 'order')
</code></pre>
<p>However, this created an issue where 'Street Name' rows were coming before 'Name' since 'Name' is in both. Like:</p>
<pre><code> col1 col2 col3 col4
Street Name1 Pine St Crown St Palm Ave
Street Name2 Grey St Tree St New St
Name Adam Grace Julia
</code></pre>
<p>How can I order this dataframe so that it ends up in the proper order even when one row's text appears as a substring of another row?</p>
<p>UPDATE:
I can't explicitly list 'Street Name1', 'Street Name2', and so on, since the number at the end can change. In different scenarios it could be anything from 'Street Name1' up to, say, 'Street Name7', so I can't hard-code the numbers.</p>
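One possible approach (a sketch, assuming pandas and that the variable part is only a trailing number): strip the trailing digits, then rank by exact dictionary lookup rather than substring matching, so 'Name' and 'Street Name' cannot collide:

```python
import pandas as pd

order = ['Name', 'Street Name', 'City', 'Interest',
         'High Interest', 'Low Interest', 'Animal', 'Color']
rank = {label: i for i, label in enumerate(order)}

df = pd.DataFrame({'col1': ['Street Name2', 'Name', 'City1', 'Street Name1']})
# Drop a trailing run of digits ('Street Name2' -> 'Street Name'),
# then map to an exact rank; dict lookup cannot match substrings.
key = df['col1'].str.replace(r'\d+$', '', regex=True).map(rank)
df = df.assign(_rank=key).sort_values(['_rank', 'col1']).drop(columns='_rank')
print(df['col1'].tolist())
```

The secondary sort on `col1` keeps 'Street Name1' before 'Street Name2' within the same rank, and any number of numbered variants is handled without listing them.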
|
<python><pandas><dataframe><sorting>
|
2024-10-28 14:35:02
| 2
| 700
|
mjoy
|
79,133,681
| 10,818,184
|
Why is Python's built-in str join() method not faster than a simple +=?
|
<pre><code>import numpy as np

binary_str = np.random.randint(2, size=1000000, dtype='int')
</code></pre>
<p>method 1:</p>
<pre><code>binary_str=''.join(binary_str.astype(str))
</code></pre>
<p>method 2:</p>
<pre><code>s=''
for c in binary_str:
s+=str(c)
</code></pre>
<p>I have read many articles saying that method 1 is supposed to be much faster, since it concatenates the whole string at once. By contrast, method 2 has to allocate memory and copy at each step, which leads to O(n^2) time complexity.
But when I measure the time consumed with either time.time or timeit, the two methods come out nearly the same, and method 1 even takes longer. Why? Python version 3.10.4.</p>
<p>Update: the overhead is caused by calling str() on each element while iterating through the numpy array. When I convert with astype(str) up front and use .tolist(), the performance difference is huge.</p>
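To isolate the effect, one can benchmark on a plain list of Python strings, removing the per-element str()/numpy-scalar overhead entirely. A small self-contained sketch:

```python
import timeit

chars = ['0', '1'] * 5000  # plain Python strings: no numpy scalars, no str()

def with_join():
    return ''.join(chars)

def with_concat():
    # CPython can resize a uniquely referenced str in place, so this
    # loop is usually close to linear too; that is an implementation
    # detail of CPython, not a language guarantee.
    s = ''
    for c in chars:
        s += c
    return s

t_join = timeit.timeit(with_join, number=200)
t_concat = timeit.timeit(with_concat, number=200)
print(f'join: {t_join:.4f}s  +=: {t_concat:.4f}s')
```

With the conversion cost stripped out, join still wins on most runs and, unlike +=, does not depend on the in-place resize optimization being applicable.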
|
<python><performance>
|
2024-10-28 13:48:59
| 1
| 488
|
li_jessen
|
79,133,572
| 3,332,023
|
Change matplotlib yaxis offset from 1e-5 to caps 1E-5
|
<p>I have a plot in matplotlib and I can't get the offset label to change from <code>1e-5</code> to <code>1E-5</code>. I've tried a lot; this seems to be the closest:
<code>ax.yaxis.get_offset_text().set_text(ax.yaxis.get_offset_text().get_text().replace('e', 'E'))</code>
If I run <code>print(ax.yaxis.get_offset_text().get_text())</code> it prints <code>1E-5</code>, but the plot still shows <code>1e-5</code>.</p>
<p>I tried running <code>plt.draw()</code> etc., but nothing seems to help. Any suggestions? (I only have the complete tick label at the top; at the side it just says 1, 2, 3, 4 etc., which I like.)</p>
<p>(I want to change this to 1E-5)</p>
<p><a href="https://i.sstatic.net/iVzlyqIj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVzlyqIj.png" alt="The yaxis offset" /></a></p>
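One route that has worked for similar problems (a sketch, not tested against your exact figure): the offset text is regenerated from the axis formatter on every draw, which is why overwriting it by hand does not stick. Subclassing ScalarFormatter and overriding get_offset survives redraws; the `UpperEFormatter` name and the Agg backend are assumptions of this sketch:

```python
import matplotlib
matplotlib.use('Agg')  # assumption: headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import ScalarFormatter

class UpperEFormatter(ScalarFormatter):
    def get_offset(self):
        # The axis rebuilds the offset label from this method on each
        # draw, so uppercasing here survives later redraws.
        return super().get_offset().replace('e', 'E')

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 2e-6])  # small values to trigger the 1e-6 offset
ax.yaxis.set_major_formatter(UpperEFormatter())
fig.canvas.draw()  # force the offset text to be computed
print(ax.yaxis.get_offset_text().get_text())
```

Because the change lives in the formatter rather than in the Text artist, it is not clobbered the next time the canvas is drawn.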
|
<python><matplotlib><plot>
|
2024-10-28 13:15:59
| 2
| 467
|
axel_ande
|
79,133,564
| 16,525,263
|
How to copy files and folders in HDFS using Pyspark
|
<p>I have a folder structure like this on HDFS:</p>
<p><code>/user/test/data/data_backlog</code></p>
<p>Inside <code>data_backlog</code>, I have date folders like below</p>
<pre><code> dt=2023.01.01
part1.avro
part2.avro
dt=2023.01.02
part1.avro
part2.avro
</code></pre>
<p>I need to copy the files to another path while maintaining the same folder structure:
<code>/user/test/data/data_backlog_backup</code></p>
<p>This is my present code:</p>
<pre><code># get directory list to be moved
directories = get_list_path(end_date, lake.listStatus(spark._jvm.org.apache.hadoop.fs.Path('/user/test/data/data_backlog')),False)
for _par in directories:
df_bkp = spark.read.format('avro').load(_par)
DataIO.write(df_bkp.coalesce(5), "data_backlog", "overwrite")
lake.Delete(fs.Path(_par), True)
</code></pre>
<p>This is doing the job, but it's moving all the files into the parent folder without creating the date sub-folders.
How can I maintain the same folder structure in the destination path?</p>
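A possible restructuring (a sketch; `partition_name`, `backup_partition`, and the write options are assumptions layered on the question's `spark`/`lake`/`fs` objects): derive each date folder's name from its source path and write to the matching sub-folder of the backup root instead of the flat parent:

```python
def partition_name(path):
    """Return the trailing folder name of an HDFS path, e.g. 'dt=2023.01.01'."""
    return path.rstrip('/').split('/')[-1]

def backup_partition(spark, src_dir, dst_root):
    # Hypothetical helper: read one date folder and write it under the
    # same sub-folder name beneath the destination root, preserving layout.
    df = spark.read.format('avro').load(src_dir)
    (df.coalesce(5)
       .write.format('avro')
       .mode('overwrite')
       .save(dst_root + '/' + partition_name(src_dir)))
```

The loop body would then become `backup_partition(spark, _par, '/user/test/data/data_backlog_backup')` followed by the existing delete, so each `dt=...` directory is recreated under the backup root.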
|
<python><pyspark>
|
2024-10-28 13:14:50
| 1
| 434
|
user175025
|