| QuestionId<br>(int64, 74.8M-79.8M) | UserId<br>(int64, 56-29.4M) | QuestionTitle<br>(string, 15-150 chars) | QuestionBody<br>(string, 40-40.3k chars) | Tags<br>(string, 8-101 chars) | CreationDate<br>(2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount<br>(int64, 0-44) | UserExpertiseLevel<br>(int64, 301-888k) | UserDisplayName<br>(string, 3-30 chars) |
|---|---|---|---|---|---|---|---|---|
75,678,741
| 4,450,090
|
polars aggregate list[str] column into set[str]
|
<p>I have polars dataframe:</p>
<pre><code>df = pl.DataFrame({
'col1': [["aaa", "aaa"], ["bbb", "ccc"], ["ccc", "ddd", "ddd"], ["ddd", "ddd", "ddd"]],
'col2': ["a", "a", "a", "a"],
'col3': ["x", "x", "y", "y"]
})
</code></pre>
<p>I want to group by col2, col3 and aggregate col1 into <code>Set[String]</code>:</p>
<pre><code>(df
.group_by("col2", "col3")
.agg(pl.col("col1").flatten().map_elements(set).alias("result"))
)
</code></pre>
<p>When I run it on 17 million records it performs very slowly;
after 10 minutes it still has not completed.</p>
<p>How can I speed it up?</p>
<p>EDIT:</p>
<p>This is how I solved it and it is blazing fast:</p>
<pre class="lang-py prettyprint-override"><code>df = (
df
.with_columns(
pl.col("col1").list.join(",")
)
.group_by("col2", "col3")
.agg(
pl.col("col1").alias("col1")
)
.with_columns(
pl.col("col1").list.join(",")
)
.with_columns(
pl.col("col1").str.split(",").list.unique().alias("col1")
)
)
</code></pre>
<pre><code>┌──────┬──────┬───────────────────────┐
│ col2 ┆ col3 ┆ col1                  │
│ ---  ┆ ---  ┆ ---                   │
│ str  ┆ str  ┆ list[str]             │
╞══════╪══════╪═══════════════════════╡
│ a    ┆ x    ┆ ["aaa", "bbb", "ccc"] │
│ a    ┆ y    ┆ ["ccc", "ddd"]        │
└──────┴──────┴───────────────────────┘
</code></pre>
|
<python><set><aggregate><flatten><python-polars>
|
2023-03-08 22:17:07
| 2
| 2,728
|
Dariusz Krynicki
|
75,678,693
| 2,981,639
|
Polars custom function returning multiple features
|
<p>I'm evaluating <code>polars</code> for timeseries feature extraction - replicating for example the functionality of libraries such as <code>tsfresh</code>.</p>
<p>I have the basics working really well, i.e. using <code>dynamic grouping</code> to create windows and most of the basic <code>tsfel</code> features can be directly reimplemented as <code>polars</code> custom functions (maybe I can optimise this but for now performance is fine), i.e.</p>
<pre><code>def var_larger_than_std(x: pl.Series) -> bool:
"""
Is variance higher than the standard deviation?
Boolean variable denoting if the variance of x is greater than its standard deviation. Is equal to variance of x
being larger than 1
:param x: the time series to calculate the feature of
:type x: pl.Series
:return: the value of this feature
:return type: bool
"""
y = x.var()
return y > np.sqrt(y)
q = (
dataset.lazy() \
.groupby_dynamic("ts_local", every="1d", by="id") \
.agg([
pl.col("value").count().alias("value__count"),
var_larger_than_std(pl.col("value")).alias("value__var_larger_than_std"),
])
)
q.collect()
</code></pre>
<p>For basic feature maps <code>f(series)->series</code> this works fine. It is quite common though for complex feature maps to produce multiple outputs. For example <a href="https://tsfresh.readthedocs.io/en/latest/_modules/tsfresh/feature_extraction/feature_calculators.html#fft_coefficient" rel="nofollow noreferrer">fft_coefficient</a> produces multiple outputs. In <code>tsfel</code> they're returned as a <code>dict</code> and each <code>key:value</code> is expanded into a separate <code>pandas.Series</code> in the final <code>pandas.Dataframe</code>.</p>
<p>How can I replicate this in <code>polars</code>? Say I have the function</p>
<pre><code>def complex_feature(x: pl.Series) -> Dict[str, float]:
return {"col1":0.0, "col2":1.0}
q = (
dataset.lazy() \
.groupby_dynamic("ts_local", every="1d", by="id") \
.agg([
complex_feature(pl.col("value")).???,
])
)
q.collect()
</code></pre>
<pre><code>┌───────────────────────────────────┬──────┬──────┐
│ ts_local                          ┆ col1 ┆ col2 │
│ ---                               ┆ ---  ┆ ---  │
│ timestamp[ns,tz=Australia/Sydney] ┆ f64  ┆ f64  │
╞═══════════════════════════════════╪══════╪══════╡
│ 2023-01-01                        ┆ 0.0  ┆ 1.0  │
└───────────────────────────────────┴──────┴──────┘
</code></pre>
<p>i.e. the dict is expanded into separate columns (it doesn't have to be a dict, I can return a tuple of pl.Series or whatever works)</p>
|
<python><python-polars>
|
2023-03-08 22:09:07
| 1
| 2,963
|
David Waterworth
|
75,678,678
| 817,659
|
Parse GET parameters
|
<p>I have a <code>REST API</code> where the handler looks like this:</p>
<pre><code>class RRervice(pyrestful.rest.RestHandler):
@get('/Indicator/{rparms}')
def RR(self, rparms):
print(rparms)
...
</code></pre>
<p>So when this code executes on this URL:</p>
<pre><code>http://ip:3000/Indicator/thresh=.8&symbol=AAPL
</code></pre>
<p>From the <code>print</code> I get what I am supposed to get:</p>
<pre><code>thresh=.8&symbol=AAPL
</code></pre>
<p>My question is, is there an API that ensures that certain parameters must exist, and if so, what is the best way to parse them? I have seen <code>urllib</code>, or maybe even <code>argparse</code>. I guess I am looking for the right way to do this. I know I can hack it by <code>split</code>, but that just solves getting what was typed in the URL, not the correctness of the URL parameters.</p>
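<p>(A minimal sketch using the standard library's <code>urllib.parse.parse_qs</code>, which both splits the query string and can reject malformed fields; the required-parameter check here is an illustrative wrapper, not part of any framework:)</p>

```python
from urllib.parse import parse_qs

def parse_params(rparms: str, required=("thresh", "symbol")):
    # parse_qs handles splitting on '&' and '='; strict_parsing raises
    # ValueError on malformed fields such as a key with no '=' at all.
    parsed = parse_qs(rparms, strict_parsing=True)
    missing = [k for k in required if k not in parsed]
    if missing:
        raise ValueError(f"missing required parameters: {missing}")
    # parse_qs returns lists of values; unwrap single values for convenience.
    return {k: v[0] for k, v in parsed.items()}

print(parse_params("thresh=.8&symbol=AAPL"))
```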
|
<python><rest><urllib><argparse><url-parameters>
|
2023-03-08 22:06:23
| 1
| 7,836
|
Ivan
|
75,678,657
| 1,968,829
|
How to merge two dataframes on list of columns and then deduplicate?
|
<p>I have two dataframes that I would like to merge on <code>cols = ['male', 'age_cat', 'inpatient']</code>, and once merged I would like to deduplicate these on the left and right <code>index</code> columns from each dataframe.</p>
<p>a:</p>
<pre><code> index male age_cat inpatient
6885 34273 0.0 3.0 1.0
10145 50273 0.0 1.0 0.0
6099 30258 0.0 1.0 0.0
10834 53697 0.0 1.0 0.0
15267 76649 1.0 0.0 0.0
10122 50155 1.0 1.0 0.0
9210 45680 0.0 0.0 0.0
446 2221 1.0 3.0 0.0
38 196 1.0 0.0 0.0
14757 73363 1.0 0.0 0.0
450 2237 0.0 0.0 0.0
9617 47684 0.0 1.0 0.0
5285 26349 0.0 3.0 0.0
5666 28006 0.0 0.0 0.0
487 2475 1.0 2.0 0.0
671 3406 0.0 1.0 0.0
13398 66352 1.0 1.0 0.0
15525 78429 1.0 0.0 0.0
2831 14047 1.0 0.0 0.0
13361 66208 0.0 2.0 0.0
</code></pre>
<p>b:</p>
<pre><code> index male age_cat inpatient
99629 99629 0 0 0
39305 39305 1 0 1
168799 168799 1 0 0
25276 25276 0 0 0
9964 9964 1 2 1
5137 5137 1 0 1
158488 158488 1 0 0
94960 94960 0 0 0
8955 8955 0 0 0
190425 190425 0 0 0
</code></pre>
<p>So, I do as follows:</p>
<pre><code>mp=a.merge(b, on = cols)
</code></pre>
<p>and I get dataframe</p>
<p>mp:</p>
<pre><code> index_x male age_cat inpatient index_y
0 76649 1.0 0.0 0.0 168799
1 76649 1.0 0.0 0.0 158488
2 196 1.0 0.0 0.0 168799
3 196 1.0 0.0 0.0 158488
4 73363 1.0 0.0 0.0 168799
5 73363 1.0 0.0 0.0 158488
6 78429 1.0 0.0 0.0 168799
7 78429 1.0 0.0 0.0 158488
8 14047 1.0 0.0 0.0 168799
9 14047 1.0 0.0 0.0 158488
10 45680 0.0 0.0 0.0 99629
11 45680 0.0 0.0 0.0 25276
12 45680 0.0 0.0 0.0 94960
13 45680 0.0 0.0 0.0 8955
14 45680 0.0 0.0 0.0 190425
15 2237 0.0 0.0 0.0 99629
16 2237 0.0 0.0 0.0 25276
17 2237 0.0 0.0 0.0 94960
18 2237 0.0 0.0 0.0 8955
19 2237 0.0 0.0 0.0 190425
20 28006 0.0 0.0 0.0 99629
21 28006 0.0 0.0 0.0 25276
22 28006 0.0 0.0 0.0 94960
23 28006 0.0 0.0 0.0 8955
24 28006 0.0 0.0 0.0 190425
</code></pre>
<p>which is desired.</p>
<p>To check expected number of pairings, I first check for specific conditions:</p>
<pre><code>In [124]: a[(a.male==1)&(a.age_cat==0)&(a.inpatient==0)]
Out[124]:
index male age_cat inpatient
15267 76649 1.0 0.0 0.0
38 196 1.0 0.0 0.0
14757 73363 1.0 0.0 0.0
15525 78429 1.0 0.0 0.0
2831 14047 1.0 0.0 0.0
</code></pre>
<p>and</p>
<pre><code>In [125]: b[(b.male==1)&(b.age_cat==0)&(b.inpatient==0)]
Out[125]:
index male age_cat inpatient
168799 168799 1 0 0
158488 158488 1 0 0
</code></pre>
<p>Thus, I expect <strong>AT MOST</strong> two matched pairs after deduplication, from whence my issue arises: To deduplicate the indices <code>index_x</code> and <code>index_y</code> in the dataframe <code>mp</code> I tried something along the lines of:</p>
<pre><code>In [126]: mp[(~mp['index_x'].duplicated(keep='first'))&(~mp['index_y'].duplicated(keep='first'))]
Out[126]:
index_x male age_cat inpatient index_y
0 76649 1.0 0.0 0.0 168799
10 45680 0.0 0.0 0.0 99629
</code></pre>
<p>But, as you see, I only get a single instance of <code>male==1, age_cat==0 and inpatient==0</code> instead of the expected two.</p>
<p>I've tried other methods with even less success.</p>
<p>I am merging on more than three elements and the dataframes are significantly larger, but the issue is pretty much the same.</p>
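<p>(One way to read "AT MOST two matched pairs" is as a one-to-one pairing within each key group. A sketch of that interpretation: rank the rows within each group on both sides with <code>groupby(...).cumcount()</code> and merge on the key columns plus the rank, so each left row pairs with at most one right row. The small frames below reuse a subset of the data above:)</p>

```python
import pandas as pd

cols = ["male", "age_cat", "inpatient"]

a = pd.DataFrame({"index":     [76649, 196, 73363, 45680, 2237],
                  "male":      [1, 1, 1, 0, 0],
                  "age_cat":   [0, 0, 0, 0, 0],
                  "inpatient": [0, 0, 0, 0, 0]})
b = pd.DataFrame({"index":     [168799, 158488, 99629],
                  "male":      [1, 1, 0],
                  "age_cat":   [0, 0, 0],
                  "inpatient": [0, 0, 0]})

# Number the rows within each key group on both sides, then merge on the
# key columns plus that rank: this yields min(n_left, n_right) pairs per
# group, with each index_x and index_y used at most once.
a2 = a.assign(_rank=a.groupby(cols).cumcount())
b2 = b.assign(_rank=b.groupby(cols).cumcount())
mp = a2.merge(b2, on=cols + ["_rank"]).drop(columns="_rank")
print(mp)
```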
|
<python><pandas><merge><duplicates>
|
2023-03-08 22:02:57
| 1
| 2,191
|
horcle_buzz
|
75,678,575
| 5,924,264
|
Passing a series into a dict
|
<p>I have a <code>dict</code>, <code>ht</code>, where the key is an <code>int</code>, and the value is a conversion factor.</p>
<p>I also have a dataframe <code>df</code>, with a column <code>id</code> that stores <code>int</code> variables, and another column, <code>vals</code>, that I need to multiply to relevant conversion factors according to <code>id</code>, which will be the key input into <code>ht</code>.</p>
<p>I would like to do something like</p>
<pre><code>conversions = ht[df["id"]]
df["vals"] *= conversions
</code></pre>
<p>But this gives the error</p>
<pre><code>TypeError: 'Series' objects are mutable, thus they cannot be hashed
</code></pre>
<p>What is the most efficient way to accomplish what I want?</p>
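<p>(A minimal sketch: <code>Series.map</code> performs exactly this element-wise dict lookup, which is the vectorised replacement for <code>ht[df["id"]]</code>. The dict values below are hypothetical conversion factors:)</p>

```python
import pandas as pd

ht = {1: 2.54, 2: 0.454}  # hypothetical conversion factors keyed by id
df = pd.DataFrame({"id": [1, 2, 1], "vals": [10.0, 100.0, 3.0]})

# Series.map looks up each element of df["id"] in the dict, producing a
# Series of conversion factors aligned with df, so *= works row-wise.
df["vals"] *= df["id"].map(ht)
print(df["vals"].tolist())
```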
|
<python><pandas><dictionary>
|
2023-03-08 21:52:17
| 2
| 2,502
|
roulette01
|
75,678,404
| 9,669,142
|
Python - change the layout of a decision tree
|
<p>I have a decision tree classifier from Scikit-Learn in Python and I want to visualize (plot) the tree. I'm looking at two options: using Graphviz (by exporting the data to a text file) or using <code>tree.plot_tree(clf)</code>. However, I find both options not very readable and I want to be able to change the visualization. R produces a much better tree in my opinion and I want to have something like that.</p>
<p>When looking at the Titanic data, then the R tree and Python tree are as follows:</p>
<p><strong>R</strong></p>
<p><a href="https://i.sstatic.net/XrDmn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XrDmn.png" alt="enter image description here" /></a></p>
<p><strong>Python</strong> (done in Graphviz)</p>
<p><a href="https://i.sstatic.net/ONmAB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ONmAB.png" alt="enter image description here" /></a></p>
<p>Is there a way to produce a tree like the one from R?
I want to change the True and False arrows to the features itself and remove the features from the nodes itself, where it states "female <= 0.5" for example.</p>
|
<python><scikit-learn>
|
2023-03-08 21:28:50
| 0
| 567
|
Fish1996
|
75,678,261
| 15,803,668
|
Pdfs in QWebEngineView: How to select letters inside linked phrase
|
<p>I use <code>pyqt5 QWebEngineView</code> to display pdfs. In the pdfs there are phrases that are linked. When I want to select individual words or letters, it doesn't work with the links. Is there a possibility in <code>QWebEngineView</code> to "turn off" the links without changing the pdf itself?</p>
<p>The picture below shows an example: I can't select single letters in the linked phrase "Nichtamtliches Inhaltsverzeichnis". I can only select the whole phrase.</p>
<p><a href="https://i.sstatic.net/W5uBA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W5uBA.png" alt="enter image description here" /></a></p>
<p>The code I use to display the pdf is:</p>
<pre><code>from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
from PyQt5.QtWebEngineWidgets import QWebEngineSettings, QWebEngineView
import sys
class App(QMainWindow):
def __init__(self):
super(App, self).__init__()
webView = QWebEngineView()
webView.settings().setAttribute(
QWebEngineSettings.PluginsEnabled, True)
webView.settings().setAttribute(
QWebEngineSettings.PdfViewerEnabled, True)
webView.load(QUrl.fromUserInput("path-to-pdf"))
self.setCentralWidget(webView)
if __name__ == '__main__':
app = QApplication(sys.argv)
window = App()
window.showMaximized()
sys.exit(app.exec_())
</code></pre>
|
<python><pyqt5><qwebengineview>
|
2023-03-08 21:11:17
| 0
| 453
|
Mazze
|
75,678,100
| 2,006,921
|
OpenGL gluPerspective and shader question
|
<p>I am trying to learn OpenGL, and I have copied and pasted a few things from various sources. Things are starting to come together, but not quite. Here's my code that is doing something:</p>
<pre><code>import sys  # needed for sys.argv / sys.exit below
from OpenGL.GLUT import *
from OpenGL.GL import *
from OpenGL.GL import shaders
import numpy as np
from PyQt6 import QtWidgets
from PyQt6.QtOpenGLWidgets import QOpenGLWidget
from OpenGL import GL as gl
from OpenGL import GLU
from OpenGL.arrays import vbo
from OpenGL.GL import shaders
class GLWidget(QOpenGLWidget):
def __init__(self, parent=None):
self.parent = parent
super().__init__(parent=parent)
QOpenGLWidget.__init__(self, parent)
def initializeGL(self):
gl.glClearColor(0.0, 0.0, 0.5, 1) # initialize the screen to blue
#gl.glEnable(gl.GL_DEPTH_TEST) # enable depth testing
VERTEX_SHADER = shaders.compileShader("""#version 120
void main() {
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}""", GL_VERTEX_SHADER)
FRAGMENT_SHADER = shaders.compileShader("""#version 120
void main() {
gl_FragColor = vec4( 1, 1, 0, 1 );
}""", GL_FRAGMENT_SHADER)
self.shader = shaders.compileProgram(VERTEX_SHADER,FRAGMENT_SHADER)
self.vbo = vbo.VBO(
np.array( [
[ 0, 1, 2 ],
[ -1,-1, 2 ],
[ 1,-1, 2 ],
[ 2,-1, 2 ],
[ 4,-1, 2 ],
[ 4, 1, 2 ],
[ 2,-1, 2 ],
[ 4, 1, 2 ],
[ 2, 1, 2 ],
],'f')
)
def resizeGL(self, width, height):
print(width, height)
gl.glViewport(0, 0, width, height)
gl.glMatrixMode(gl.GL_PROJECTION)
gl.glLoadIdentity()
aspect = width / float(height)
#GLU.gluPerspective(90.0, aspect, 0.1, 100.0)
gl.glMatrixMode(gl.GL_MODELVIEW)
def paintGL(self):
gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
shaders.glUseProgram(self.shader)
try:
self.vbo.bind()
try:
gl.glPushMatrix()
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointerf(self.vbo)
gl.glTranslate(-0.3, 0.0, 0.0)
gl.glScale(0.5, 0.5, 0.5)
gl.glRotate(45, 0.0, 0.0, 1.0)
glDrawArrays(GL_TRIANGLES, 0, 9)
finally:
self.vbo.unbind()
glDisableClientState(GL_VERTEX_ARRAY);
gl.glPopMatrix()
finally:
shaders.glUseProgram(0)
class MainWindow(QtWidgets.QWidget):
def __init__(self):
super().__init__()
self.resize(300, 300)
self.setWindowTitle('Hello OpenGL App')
self.glWidget = GLWidget()
self.layout = QtWidgets.QVBoxLayout(self)
self.layout.addWidget(self.glWidget)
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
win = MainWindow()
win.show()
sys.exit(app.exec())
</code></pre>
<p>When I resize the window, though, it distorts my scene. Pretty sure that I need the <code>GLU.gluPerspective(90.0, aspect, 0.1, 100.0)</code> line, but when I uncomment that, my objects go away. What am I doing wrong?</p>
<p>Also, I understand what <code>gluPerspective</code> does, but I am not sure about the <code>gl_ModelViewProjectionMatrix</code> in the shader script. Ultimately, I guess I will have to implement the gluPerspective functionality myself in the shader?</p>
|
<python><opengl><pyopengl><pyqt6>
|
2023-03-08 20:50:44
| 1
| 1,105
|
zeus300
|
75,678,004
| 3,885,794
|
How to deal with module name collisions in Python?
|
<p>Say in my code, I need to use a 3rd-party package from Pypi which has a module <code>somemodule</code>. On the other hand, I have a user-defined module <code>somemodule</code> from which I need to import some functions. Is it possible to import from both modules in a single file?</p>
<p>Below is the sample code that illustrates the question. Is this even achievable?</p>
<pre><code>import somemodule # a 3rd-party module
from userdefined.somemodule import somefunc  # user-defined module
somemodule.run()
somefunc()
</code></pre>
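<p>(For the case where the user-defined file really is named <code>somemodule.py</code> and would shadow the third-party package, a sketch using <code>importlib</code> to load it under a different module name; the temp-file setup below only stands in for the real user module:)</p>

```python
import importlib.util
import os
import tempfile

# Create a stand-in user module for demonstration purposes; in practice
# `path` would point at your own somemodule.py.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "somemodule.py")
with open(path, "w") as f:
    f.write("def somefunc():\n    return 'user'\n")

# Load the file under an alias so it never collides with the 3rd-party
# `somemodule` package in sys.modules.
spec = importlib.util.spec_from_file_location("user_somemodule", path)
user_somemodule = importlib.util.module_from_spec(spec)
spec.loader.exec_module(user_somemodule)

print(user_somemodule.somefunc())
```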
|
<python>
|
2023-03-08 20:39:28
| 0
| 1,313
|
fhcat
|
75,677,991
| 4,450,090
|
polars datetime 5 minutes floor
|
<p>I have a polars dataframe with a timestamp column of type datetime[ns] whose value is <code>2023-03-08 11:13:07.831</code>.
I want to use polars' efficiency to floor the timestamp to 5-minute boundaries.</p>
<p>Right now I do:</p>
<pre><code>import arrow
def timestamp_5minutes_floor(ts: int) -> int:
return int(arrow.get(ts).timestamp() // 300000 * 300000)
df.with_columns([
pl.col("timestamp").apply(lambda x: timestamp_5minutes_floor(x)).alias("ts_floor")
])
</code></pre>
<p>It is slow. How can I improve it?</p>
|
<python><timestamp><python-polars><floor>
|
2023-03-08 20:37:07
| 1
| 2,728
|
Dariusz Krynicki
|
75,677,936
| 970,121
|
How to transform a dataset to basic metrics dataset on various date based rollup in a big dataset using pyspark
|
<p>I have a dataset that looks like this.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Time</th>
<th>Stock-a</th>
<th>Stock-b</th>
<th>Stock-c</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-01-01</td>
<td>10:30</td>
<td>10</td>
<td>20</td>
<td>30</td>
</tr>
<tr>
<td>2023-01-01</td>
<td>11:30</td>
<td>11</td>
<td>21</td>
<td>31</td>
</tr>
<tr>
<td>2023-01-02</td>
<td>01:30</td>
<td>15</td>
<td>19</td>
<td>18</td>
</tr>
<tr>
<td>2023-01-02</td>
<td>12:30</td>
<td>6</td>
<td>25</td>
<td>8</td>
</tr>
</tbody>
</table>
</div>
<p>I want to convert that into a dataset that looks like this</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Stock Name</th>
<th>Mean</th>
<th>Stddev</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-01-01</td>
<td>Stock-a</td>
<td>mean value</td>
<td>standard deviation</td>
</tr>
<tr>
<td>2023-01-01</td>
<td>Stock-b</td>
<td>mean value</td>
<td>standard deviation</td>
</tr>
<tr>
<td>2023-01-02</td>
<td>Stock-a</td>
<td>mean value</td>
<td>standard deviation</td>
</tr>
</tbody>
</table>
</div>
<p>This is my code</p>
<pre><code>import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr
# Create spark session
spark = SparkSession.builder.getOrCreate()
data = [("2023-01-01","10:30", 10, 20, 30), ("2023-01-01","11:30", 11, 21, 31) , \
("2023-01-01","13:30", 1, 2, 3),("2023-01-01","14:30", 110, 210, 310),("2023-01-02","01:30", 21, 21, 21), \
("2023-01-02","08:30", 11, 21, 31),("2023-01-02","11:30", 110, 210, 131),("2023-01-03","11:30", 10, 20, 30), \
("2023-01-03","12:30", 11, 21, 31),("2023-01-03","14:30", 8, 12, 13),("2023-01-03","15:30", 11, 21, 31)]
columns= ["Date","Time","Stock-a", "Stock-b", "Stock-c"]
df = spark.createDataFrame(data = data, schema = columns)
df.show()
from pyspark.sql.functions import expr, mean, stddev
columns = ["Stock-a", "Stock-b", "Stock-c"]
metrics_aggs = df.groupBy('Date').agg(
*[mean(col).alias("mean_" + col) for col in columns],
*[stddev(col).alias('std_' + col) for col in columns]
)
metrics_aggs.show()
</code></pre>
<p>Somehow I have to find a way to pivot on the column name and then just show the mean and standard deviation value as columns.
Any pointers or ideas to solve this?</p>
|
<python><apache-spark><pyspark>
|
2023-03-08 20:31:08
| 1
| 1,329
|
pramodh
|
75,677,825
| 523,612
|
Why is copying a list of simple objects slower, when the list contains multiple repeats?
|
<p>This is my environment:</p>
<pre><code>$ python
Python 3.8.10 (default, Nov 14 2022, 12:59:47)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> quit()
$ lscpu | grep "Model name"
Model name: Intel(R) Core(TM) i5-4430 CPU @ 3.00GHz
$ sudo dmidecode --type memory | grep -E "^\\s+(Speed|Type):" -
Type: DDR3
Speed: 1600 MT/s
Type: DDR3
Speed: 1600 MT/s
</code></pre>
<p>I consistently get timing results like these on my machine:</p>
<pre><code>$ python -m timeit -s "x = list(range(250)) * 4" "[*x]"
200000 loops, best of 5: 1.54 usec per loop
$ python -m timeit -s "x = [0, 1, 2, 3] * 250" "[*x]"
200000 loops, best of 5: 1.48 usec per loop
$ python -m timeit -s "x = [0, 1] * 500" "[*x]"
100000 loops, best of 5: 2 usec per loop
$ python -m timeit -s "x = [0] * 1000" "[*x]"
100000 loops, best of 5: 3.84 usec per loop
</code></pre>
<p>(Here I have deliberately avoided using integers larger than 255, because I know about the <a href="https://stackoverflow.com/questions/306313">caching of small integers</a>.)</p>
<p>Similarly, with single-character strings:</p>
<pre><code>$ python -m timeit -s "x = [chr(i) for i in range(250)] * 4" "[*x]"
200000 loops, best of 5: 1.81 usec per loop
$ python -m timeit -s "x = [chr(0), chr(1), chr(2), chr(3)] * 250" "[*x]"
200000 loops, best of 5: 1.5 usec per loop
$ python -m timeit -s "x = [chr(0), chr(1)] * 500" "[*x]"
100000 loops, best of 5: 2.03 usec per loop
$ python -m timeit -s "x = [chr(0)] * 1000" "[*x]"
100000 loops, best of 5: 3.83 usec per loop
</code></pre>
<p>(Again, I deliberately limit the character code points so that Python's <a href="https://peps.python.org/pep-0393/" rel="noreferrer">flexible string representation</a> will choose a consistent, single-byte-per-character representation.)</p>
<p><em>All of these input lists are length 1000, and the same method is being used to copy them, so I wouldn't expect dramatic differences in the timing</em>. However, I find that the lists that consist of the same simple element (a single-character string or a small integer) over and over, take nearly twice as long to copy as anything else; that using a few different values is even faster than using two alternating values; and that using a wide variety of values is slightly slower again.</p>
<p>With slower copying methods, the difference is still present, but less dramatic:</p>
<pre><code>$ python -m timeit -s "x = [chr(i) for i in range(250)] * 4" "[i for i in x]"
20000 loops, best of 5: 18.2 usec per loop
$ python -m timeit -s "x = [chr(0), chr(1), chr(2), chr(3)] * 250" "[i for i in x]"
20000 loops, best of 5: 17.3 usec per loop
$ python -m timeit -s "x = [chr(0), chr(1)] * 500" "[i for i in x]"
20000 loops, best of 5: 17.9 usec per loop
$ python -m timeit -s "x = [chr(0)] * 1000" "[i for i in x]"
20000 loops, best of 5: 19.1 usec per loop
</code></pre>
<p><strong>Why does this occur?</strong></p>
|
<python><list><performance>
|
2023-03-08 20:14:51
| 1
| 61,352
|
Karl Knechtel
|
75,677,672
| 2,261,553
|
Select specific items from generator
|
<p>Given a generator object, I want to be able to iterate only over the elements with defined indices. However, I have experienced that when the generator yields "heavy" items (memory-wise), the loading of each item takes a considerable time compared to the case when iteration is performed on every element in the original order. Here is a minimal example, where in my real situation, I have no direct access to the generator function (i.e. <code>mygen</code> cannot be modified).</p>
<pre><code>import itertools
import numpy as np
def mygen(n):
for k in range(n):
yield k
mygenerator=mygen(10)
indices_ls=[0,5,8]
mask =np.zeros(10,dtype=int)
mask[indices_ls]=1
compressed_generator=itertools.compress(mygenerator, mask)
for i in compressed_generator:
print(i)
</code></pre>
<p>This minimal code does the trick, because only selected elements are printed. But is there a way to do this without using the <code>compressed_generator</code>, i.e. by iterating directly over the original generator but getting ONLY the specified indices? Recall that the generator function (<code>mygen</code>) cannot be modified.</p>
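<p>(A sketch of one alternative: for sorted indices, <code>itertools.islice</code> can skip ahead on the generator itself, so no mask over the full length is needed. The helper <code>pick</code> is illustrative; it assumes the generator never yields <code>None</code>, which is used here as the exhaustion sentinel:)</p>

```python
from itertools import islice

def pick(gen, indices):
    # indices must be sorted ascending; advance the generator just far
    # enough for each one instead of masking every element.
    pos = 0
    for i in indices:
        # Skip i - pos elements, then take the next one.
        item = next(islice(gen, i - pos, i - pos + 1), None)
        if item is None:  # generator exhausted before reaching index i
            return
        pos = i + 1
        yield item

def mygen(n):
    for k in range(n):
        yield k

print(list(pick(mygen(10), [0, 5, 8])))  # prints [0, 5, 8]
```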
|
<python><generator>
|
2023-03-08 19:58:50
| 2
| 411
|
Zarathustra
|
75,677,665
| 8,229,534
|
How to display year-month accurately on X-axis for plotly chart?
|
<p>I am trying to find a solution to accurately display the X-axis for my plotly chart.</p>
<p>Here is my code -</p>
<pre><code>import pandas as pd
print(pd.__version__) # 1.2.4
import plotly
print(plotly.__version__) # 5.5.0
</code></pre>
<p>I create a data frame with <code>year_month</code> as object type and <code>p95_gbs</code> as float type for my metrics (Y-axis).</p>
<pre><code>import plotly.express as px
df = pd.DataFrame({"year_month" : ["2023-01", "2023-02", "2023-03"], "p95_gbs" : [20.5,22.2,23.9]})
df.head()
</code></pre>
<p><a href="https://i.sstatic.net/Nyuxh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nyuxh.png" alt="enter image description here" /></a></p>
<p>Here is my code to do the monthly line chart</p>
<pre><code>hover_data = ['year_month','p95_gbs']
fig = px.line(data_frame=df, x="year_month", y="p95_gbs", hover_data = hover_data)
fig.update_yaxes(rangemode = "tozero")
fig.show()
</code></pre>
<p>As of now, the chart does not show only the 3 data points that are present at the <code>year-month</code> level. Instead, the line chart gives the false impression that there are more than 3 data points, which is not the case.</p>
<p>Ideally, my X-axis should only show 3 data points - <code>2023-01, 2023-02, 2023-03</code>.</p>
<p>In general, if I have 2 years' worth of data, then I would expect to have 24 points (one point per month). The format that I want in the output is <code>YYYY-MONTHNAME</code>.</p>
<p>How can I tweak my figure to display this year-month information accurately ?</p>
<p><a href="https://i.sstatic.net/uHZEa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uHZEa.png" alt="enter image description here" /></a></p>
<p>I have also tried to update the layout using <code>xaxis_tickformat</code> but still the chart does not display accurate information on X-axis</p>
<pre><code>import plotly.graph_objects as go
import pandas as pd
fig = go.Figure(go.Scatter(
x = df['year_month'],
y = df['p95_gbs'],
))
fig.update_layout(
title = 'Time Series with YearMonth on X-axis',
xaxis_tickformat = '%Y-%B'
)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/qiSGk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qiSGk.png" alt="enter image description here" /></a></p>
|
<python><pandas><plotly>
|
2023-03-08 19:57:58
| 1
| 1,973
|
Regressor
|
75,677,643
| 10,922,372
|
Django - Query on overridden save() method cached or something
|
<p>I am overriding the default save() method of my model in Django 4.
The problem is that I am making queries related to an instance of the model with the overridden save() method, but the queries seem not to be updated during the method execution.</p>
<p>Example:</p>
<pre><code>class Routine(models.Model):
exercises = models.ManyToManyField(Exercise)
def save(self, *args, **kwargs):
e = self.exercises.all() # 3 exercises returned
super().save(*args, **kwargs) # I call to super deleting 1 exercise and adding a new one
e = self.exercises.all() # same 3 exercises returned, not updated with the previous save() call
</code></pre>
<p>Once all the code has been executed, routine.exercises has been properly updated; the problem is that I am not able to get this query updated during the save() override.</p>
|
<python><django>
|
2023-03-08 19:55:57
| 0
| 1,010
|
Gonzalo Dambra
|
75,677,603
| 17,620,209
|
Checking if subprocess was terminated by user
|
<pre><code># Create a new terminal window and run the ffmpeg command in it
cmd = ' '.join(ffmpeg.compile(output_stream, overwrite_output=True))
compressing = subprocess.Popen('start cmd /k {}'.format(cmd), shell=True)
while compressing.poll() is None: #check if the process is still running
print('Processing video...')
break
else:
if compressing.returncode == 0: #check if the process was canceled by the user
print('Video processing canceled by user!')
else: #if the process was completed successfully
print('Video processing complete!')
</code></pre>
<p>I want to check whether the process (terminal) was closed by the user and, if so, print that sentence. However, it is always stuck at <code>"Processing video..."</code> even if the user kills the subprocess terminal.</p>
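<p>(One likely factor: with <code>start cmd /k ...</code> and <code>shell=True</code>, <code>Popen</code> tracks the intermediate shell, which exits immediately, not the ffmpeg window. A sketch of polling the worker process directly, with a short-lived stand-in command instead of ffmpeg; on Windows the ffmpeg arguments would go in the list:)</p>

```python
import subprocess
import sys
import time

# Launch the worker directly (no intermediate "start cmd /k" shell), so
# Popen tracks the real process and poll()/returncode reflect it.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])

while proc.poll() is None:          # still running
    print("Processing video...")
    time.sleep(0.1)

if proc.returncode == 0:
    status = "Video processing complete!"
else:
    # A non-zero code covers both failure and the user killing the process.
    status = "Video processing canceled or failed (code %d)" % proc.returncode
print(status)
```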
|
<python><shell><subprocess><subshell>
|
2023-03-08 19:51:49
| 0
| 714
|
Mohamed Darwesh
|
75,677,446
| 1,424,395
|
Qt for python not working: Spyder not launching and matplotlib unable to plot
|
<p>I am running Windows. I have Anaconda with several environments; I use Spyder and also Visual Studio Code.
After updating packages (many, I cannot really tell which ones), Spyder cannot be launched anymore. I get an error window with the following message.</p>
<blockquote>
<p>qt.qpa.plugin: Could not load the Qt platform plugin "windows" in ""
even though it was found. This application failed to start because no
Qt platform plugin could be initialized. Reinstalling the application
may fix this problem.</p>
<p>Available platform plugins are: windows, direct2d, minimal, offscreen,
webgl.</p>
</blockquote>
<p>I can open visual studio code, but trying to show plots from <code>matplotlib</code> using <code>matplotlib.pyplot.show()</code> (that has qt as default plotter) it shows almost the same error on the console,</p>
<blockquote>
<p>QObject::moveToThread: Current thread (0x2844e951fa0) is not the
object's thread (0x2844e952a40). Cannot move to target thread
(0x2844e951fa0)</p>
<p>qt.qpa.plugin: Could not load the Qt platform plugin "windows" in ""
even though it was found. This application failed to start because no
Qt platform plugin could be initialized. Reinstalling the application
may fix this problem.</p>
<p>Available platform plugins are: windows, direct2d, minimal, offscreen,
webgl.</p>
</blockquote>
<p>Qt and pyqt5 are installed in all the environments I have, and also in the system's Python.</p>
<p>I did reinstall all anaconda and didn't help. I also added the qt paths to the environment variables. But still does not work...</p>
<p>Any idea?</p>
|
<python><windows><qt><spyder>
|
2023-03-08 19:33:28
| 1
| 1,827
|
myradio
|
75,677,442
| 4,809,113
|
How to find rows in a numpy array that are in another array
|
<p>I have two numpy arrays:</p>
<pre><code>import numpy as np
arrlist = np.array([[1,0,0,1] , [0,1,1,0]])
rng = np.random.default_rng()
rand_arr = rng.choice([0, 1], size=(5, 4))
</code></pre>
<p>How do I find the indices <code>i</code> of every row in <code>rand_arr</code> where <code>rand_arr[i]</code> is in <code>arrlist</code>?</p>
<p>For example, <code>rand_arr</code> may be:</p>
<p><code>rand_arr = np.array([[1,0,0,1],[0,1,1,0],[1,1,1,0]])</code></p>
<p>So the expected return value should be -</p>
<p><code>np.array([[0,1]])</code></p>
<p>Note that the values of <code>rand_arr</code> and <code>arrlist</code> above are only known at runtime.</p>
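<p>(A minimal sketch using broadcasting: compare every row of <code>rand_arr</code> against every row of <code>arrlist</code> at once, then reduce; note this returns a 1-D index array:)</p>

```python
import numpy as np

arrlist = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])
rand_arr = np.array([[1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 1, 0]])

# Broadcast-compare: shape (n_rand, n_list, 4), then require all 4
# entries to match for some row of arrlist.
matches = (rand_arr[:, None, :] == arrlist[None, :, :]).all(-1).any(-1)
idx = np.where(matches)[0]
print(idx)
```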
|
<python><numpy>
|
2023-03-08 19:33:21
| 1
| 441
|
proton
|
75,677,435
| 12,076,197
|
Create a row in a multiindex dataframe
|
<p>original DF</p>
<pre><code> 2022 2023
In Out In Out
ABC DOG 7 3 1 10
DEF DOG 3 14 8 3
GHI CAT 12 5 2 3
JKL CAT 2 3 3 1
</code></pre>
<p>This is a multi index DF. Both the index and columns have two levels. What I need to do is add in a row for the Total. Example Below:</p>
<p>Desired DF:</p>
<pre><code>          2022      2023
         In Out    In Out
ABC DOG   7   3     1  10
DEF DOG   3  14     8   3
GHI CAT  12   5     2   3
JKL CAT   2   3     3   1
Total    24  25    14  17
</code></pre>
<p>I tried this:</p>
<pre><code> df.loc['Total',:] = df.sum(axis=0)
</code></pre>
<p>This received an error:</p>
<pre><code> ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>Any suggestions for creating the "Total" row in the desired DF example above? Thanks!</p>
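<p>(A sketch of the likely fix: with a 2-level row index, the new label in <code>.loc</code> must also be a full 2-tuple, e.g. <code>("Total", "")</code>, rather than the bare string <code>'Total'</code>:)</p>

```python
import pandas as pd

cols = pd.MultiIndex.from_product([["2022", "2023"], ["In", "Out"]])
idx = pd.MultiIndex.from_tuples([("ABC", "DOG"), ("DEF", "DOG"),
                                 ("GHI", "CAT"), ("JKL", "CAT")])
df = pd.DataFrame([[7, 3, 1, 10], [3, 14, 8, 3],
                   [12, 5, 2, 3], [2, 3, 3, 1]],
                  index=idx, columns=cols)

# The new row label must match the index depth: a 2-tuple, with the
# second level filled by an empty string.
df.loc[("Total", ""), :] = df.sum(axis=0)
print(df)
```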
|
<python><dataframe>
|
2023-03-08 19:32:08
| 1
| 641
|
dmd7
|
75,677,363
| 277,504
|
Does pip provide a TOML parser for python 3.9?
|
<p>I want to parse TOML files in python 3.9, and I am wondering if I can do so without installing another package.</p>
<p>Since <code>pip</code> knows how to work with <code>pyproject.toml</code> files, and I already have <code>pip</code> installed, does <code>pip</code> provide a parser that I can import and use in my own code?</p>
|
<python><pip><python-import><toml>
|
2023-03-08 19:23:32
| 3
| 1,872
|
Arthur Hebert-Ryan
|
75,677,038
| 4,801,066
|
Get a list of all Discord.Member(s) that are currently looking at a given Discord.TextChannel
|
<p>In Discord.py, TextChannel provides the property 'members', which "returns all members that can see this channel" (<a href="https://discordpy.readthedocs.io/en/stable/api.html?highlight=member#discord.TextChannel.members" rel="nofollow noreferrer">https://discordpy.readthedocs.io/en/stable/api.html?highlight=member#discord.TextChannel.members</a>).</p>
<p>I wonder if there is any way to get all members that <strong>are currently seeing this channel</strong>. In other words, to know the active tab/TextChannel of a given Member.</p>
<p>Thanks for your time.</p>
|
<python><discord><discord.py>
|
2023-03-08 18:45:35
| 0
| 693
|
Morgan
|
75,676,939
| 8,508
|
Celery error when a chord that starts a chain has an error in the group
|
<p>I have a Chord. After the chord, we chain another task. If one of the tasks in that group of that chord raises an exception, I get a mess of an exception that looks like a bug in celery or django_celery_results.</p>
<p>I am using amqp as my task queue and django_celery_results as my results backend.</p>
<p>My Tasks:</p>
<pre><code>@shared_task(bind=True)
def debug_task(self):
    print(f'Request: debug_task')

@shared_task(bind=True)
def debug_A(self, i):
    print(f'Doing A{i}')

@shared_task(bind=True)
def debug_broke(self):
    print(f'OH NO')
    raise Exception("Head Aspload")

@shared_task(bind=True)
def debug_finisher(self):
    print(f'OK, We did it all')

def launch():
    from celery import group
    g = [debug_A.si(i) for i in range(5)]
    g.append(debug_broke.si())
    r = group(g) | debug_finisher.si() | debug_A.si(-1)
    r.apply_async()
</code></pre>
<p>The results when I run this:</p>
<pre><code>[2023-03-08 18:19:52,428: INFO/MainProcess] Task topclans.worker.debug_A[c652e39a-8cf5-4ac8-924d-3bce32c190f8] received
[2023-03-08 18:19:52,429: WARNING/ForkPoolWorker-1] Doing A0
[2023-03-08 18:19:52,434: INFO/MainProcess] Task topclans.worker.debug_A[c1b8ed9c-a9e0-4960-8b9b-1ecd5b2254a3] received
[2023-03-08 18:19:52,436: INFO/MainProcess] Task topclans.worker.debug_A[a4478640-a50a-42a2-a8f0-8e6b84abab90] received
[2023-03-08 18:19:52,439: INFO/MainProcess] Task topclans.worker.debug_A[d4c9d249-98dd-4071-ab03-70a350c7d171] received
[2023-03-08 18:19:52,439: INFO/MainProcess] Task topclans.worker.debug_A[41c4bdb0-8993-40b0-90bd-2d6c642cb518] received
[2023-03-08 18:19:52,462: INFO/ForkPoolWorker-1] Task topclans.worker.debug_A[c652e39a-8cf5-4ac8-924d-3bce32c190f8] succeeded in 0.03368515009060502s: None
[2023-03-08 18:19:52,465: INFO/MainProcess] Task topclans.worker.debug_broke[de4d83d9-1f5b-4f02-ad9e-1a3edbedd2f8] received
[2023-03-08 18:19:52,466: WARNING/ForkPoolWorker-1] Doing A1
[2023-03-08 18:19:52,481: INFO/ForkPoolWorker-1] Task topclans.worker.debug_A[c1b8ed9c-a9e0-4960-8b9b-1ecd5b2254a3] succeeded in 0.015777542954310775s: None
[2023-03-08 18:19:52,483: WARNING/ForkPoolWorker-1] Doing A2
[2023-03-08 18:19:52,500: INFO/ForkPoolWorker-1] Task topclans.worker.debug_A[a4478640-a50a-42a2-a8f0-8e6b84abab90] succeeded in 0.017103158170357347s: None
[2023-03-08 18:19:52,501: WARNING/ForkPoolWorker-1] Doing A3
[2023-03-08 18:19:52,515: INFO/ForkPoolWorker-1] Task topclans.worker.debug_A[d4c9d249-98dd-4071-ab03-70a350c7d171] succeeded in 0.013990010134875774s: None
[2023-03-08 18:19:52,516: WARNING/ForkPoolWorker-1] Doing A4
[2023-03-08 18:19:52,535: INFO/ForkPoolWorker-1] Task topclans.worker.debug_A[41c4bdb0-8993-40b0-90bd-2d6c642cb518] succeeded in 0.019201028160750866s: None
[2023-03-08 18:19:52,536: WARNING/ForkPoolWorker-1] OH NO
[2023-03-08 18:19:52,563: ERROR/ForkPoolWorker-1] Chord '3834f510-0652-407f-a589-6d9514e78974' raised: Exception('Head Aspload')
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/mnt/code/wxp-git/wxp/src/topclans/worker/__init__.py", line 31, in debug_broke
raise Exception("Head Aspload")
Exception: Head Aspload
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/backends/database.py", line 276, in trigger_callback
ret = j(timeout=app.conf.result_chord_join_timeout, propagate=True)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 747, in join
value = result.get(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 220, in get
self.maybe_throw(callback=callback)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 336, in maybe_throw
self.throw(value, self._to_remote_traceback(tb))
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 329, in throw
self.on_ready.throw(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/vine/promises.py", line 234, in throw
reraise(type(exc), exc, tb)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/vine/utils.py", line 30, in reraise
raise value
Exception: Head Aspload
[2023-03-08 18:19:52,573: WARNING/ForkPoolWorker-1] /home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py:660: RuntimeWarning: Exception raised outside body: IntegrityError('null value in column "task_id" violates not-null constraint\nDETAIL: Failing row contains (77, null, FAILURE, application/x-python-serialize, binary, gASVRAAAAAAAAACMEWNlbGVyeS5leGNlcHRpb25zlIwKQ2hvcmRFcnJvcpSTlIwZ..., 2023-03-08 18:19:52.569411+00, null, gASVEQAAAAAAAAB9lIwIY2hpbGRyZW6UXZRzLg==, null, null, null, null, 2023-03-08 18:19:52.569405+00, null).\n'):
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/mnt/code/wxp-git/wxp/src/topclans/worker/__init__.py", line 31, in debug_broke
raise Exception("Head Aspload")
Exception: Head Aspload
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/backends/database.py", line 276, in trigger_callback
ret = j(timeout=app.conf.result_chord_join_timeout, propagate=True)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 747, in join
value = result.get(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 220, in get
self.maybe_throw(callback=callback)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 336, in maybe_throw
self.throw(value, self._to_remote_traceback(tb))
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 329, in throw
self.on_ready.throw(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/vine/promises.py", line 234, in throw
reraise(type(exc), exc, tb)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/vine/utils.py", line 30, in reraise
raise value
Exception: Head Aspload
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 581, in get_or_create
return self.get(**kwargs), False
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 435, in get
raise self.model.DoesNotExist(
django_celery_results.models.TaskResult.DoesNotExist: TaskResult matching query does not exist.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.NotNullViolation: null value in column "task_id" violates not-null constraint
DETAIL: Failing row contains (77, null, FAILURE, application/x-python-serialize, binary, gASVRAAAAAAAAACMEWNlbGVyeS5leGNlcHRpb25zlIwKQ2hvcmRFcnJvcpSTlIwZ..., 2023-03-08 18:19:52.569411+00, null, gASVEQAAAAAAAAB9lIwIY2hpbGRyZW6UXZRzLg==, null, null, null, null, 2023-03-08 18:19:52.569405+00, null).
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 468, in trace_task
I, R, state, retval = on_error(task_request, exc, uuid)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 379, in on_error
R = I.handle_error_state(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 178, in handle_error_state
return {
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 225, in handle_failure
task.backend.mark_as_failure(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 177, in mark_as_failure
self.on_chord_part_return(request, state, exc)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/backends/database.py", line 258, in on_chord_part_return
trigger_callback(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/backends/database.py", line 284, in trigger_callback
app.backend.chord_error_from_stack(callback, ChordError(reason))
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 309, in chord_error_from_stack
return backend.fail_from_current_stack(callback.id, exc=exc)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 316, in fail_from_current_stack
self.mark_as_failure(task_id, exc, exception_info.traceback)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 172, in mark_as_failure
self.store_result(task_id, exc, state,
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 528, in store_result
self._store_result(task_id, result, state, traceback,
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/backends/database.py", line 132, in _store_result
self.TaskModel._default_manager.store_result(**task_props)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/managers.py", line 43, in _inner
return fun(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/managers.py", line 168, in store_result
obj, created = self.using(using).get_or_create(task_id=task_id,
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 588, in get_or_create
return self.create(**params), True
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 453, in create
obj.save(force_insert=True, using=self.db)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/base.py", line 739, in save
self.save_base(using=using, force_insert=force_insert,
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/base.py", line 776, in save_base
updated = self._save_table(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/base.py", line 881, in _save_table
results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/base.py", line 919, in _do_insert
return manager._insert(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 1270, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1416, in execute_sql
cursor.execute(sql, params)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 98, in execute
return super().execute(sql, params)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: null value in column "task_id" violates not-null constraint
DETAIL: Failing row contains (77, null, FAILURE, application/x-python-serialize, binary, gASVRAAAAAAAAACMEWNlbGVyeS5leGNlcHRpb25zlIwKQ2hvcmRFcnJvcpSTlIwZ..., 2023-03-08 18:19:52.569411+00, null, gASVEQAAAAAAAAB9lIwIY2hpbGRyZW6UXZRzLg==, null, null, null, null, 2023-03-08 18:19:52.569405+00, null).
warn(RuntimeWarning(
[2023-03-08 18:19:52,595: WARNING/ForkPoolWorker-1] Can't find ChordCounter for Group 3834f510-0652-407f-a589-6d9514e78974
[2023-03-08 18:19:52,596: ERROR/ForkPoolWorker-1] Task topclans.worker.debug_broke[de4d83d9-1f5b-4f02-ad9e-1a3edbedd2f8] raised unexpected: IntegrityError('null value in column "task_id" violates not-null constraint\nDETAIL: Failing row contains (77, null, FAILURE, application/x-python-serialize, binary, gASVRAAAAAAAAACMEWNlbGVyeS5leGNlcHRpb25zlIwKQ2hvcmRFcnJvcpSTlIwZ..., 2023-03-08 18:19:52.569411+00, null, gASVEQAAAAAAAAB9lIwIY2hpbGRyZW6UXZRzLg==, null, null, null, null, 2023-03-08 18:19:52.569405+00, null).\n')
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/mnt/code/wxp-git/wxp/src/topclans/worker/__init__.py", line 31, in debug_broke
raise Exception("Head Aspload")
Exception: Head Aspload
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/backends/database.py", line 276, in trigger_callback
ret = j(timeout=app.conf.result_chord_join_timeout, propagate=True)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 747, in join
value = result.get(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 220, in get
self.maybe_throw(callback=callback)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 336, in maybe_throw
self.throw(value, self._to_remote_traceback(tb))
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/result.py", line 329, in throw
self.on_ready.throw(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/vine/promises.py", line 234, in throw
reraise(type(exc), exc, tb)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/vine/utils.py", line 30, in reraise
raise value
Exception: Head Aspload
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 581, in get_or_create
return self.get(**kwargs), False
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 435, in get
raise self.model.DoesNotExist(
django_celery_results.models.TaskResult.DoesNotExist: TaskResult matching query does not exist.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.NotNullViolation: null value in column "task_id" violates not-null constraint
DETAIL: Failing row contains (77, null, FAILURE, application/x-python-serialize, binary, gASVRAAAAAAAAACMEWNlbGVyeS5leGNlcHRpb25zlIwKQ2hvcmRFcnJvcpSTlIwZ..., 2023-03-08 18:19:52.569411+00, null, gASVEQAAAAAAAAB9lIwIY2hpbGRyZW6UXZRzLg==, null, null, null, null, 2023-03-08 18:19:52.569405+00, null).
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 468, in trace_task
I, R, state, retval = on_error(task_request, exc, uuid)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 379, in on_error
R = I.handle_error_state(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 178, in handle_error_state
return {
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/app/trace.py", line 225, in handle_failure
task.backend.mark_as_failure(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 177, in mark_as_failure
self.on_chord_part_return(request, state, exc)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/backends/database.py", line 258, in on_chord_part_return
trigger_callback(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/backends/database.py", line 284, in trigger_callback
app.backend.chord_error_from_stack(callback, ChordError(reason))
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 309, in chord_error_from_stack
return backend.fail_from_current_stack(callback.id, exc=exc)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 316, in fail_from_current_stack
self.mark_as_failure(task_id, exc, exception_info.traceback)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 172, in mark_as_failure
self.store_result(task_id, exc, state,
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/celery/backends/base.py", line 528, in store_result
self._store_result(task_id, result, state, traceback,
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/backends/database.py", line 132, in _store_result
self.TaskModel._default_manager.store_result(**task_props)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/managers.py", line 43, in _inner
return fun(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django_celery_results/managers.py", line 168, in store_result
obj, created = self.using(using).get_or_create(task_id=task_id,
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 588, in get_or_create
return self.create(**params), True
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 453, in create
obj.save(force_insert=True, using=self.db)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/base.py", line 739, in save
self.save_base(using=using, force_insert=force_insert,
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/base.py", line 776, in save_base
updated = self._save_table(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/base.py", line 881, in _save_table
results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/base.py", line 919, in _do_insert
return manager._insert(
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/query.py", line 1270, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1416, in execute_sql
cursor.execute(sql, params)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 98, in execute
return super().execute(sql, params)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/matthew/venvs/wxp/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: null value in column "task_id" violates not-null constraint
DETAIL: Failing row contains (77, null, FAILURE, application/x-python-serialize, binary, gASVRAAAAAAAAACMEWNlbGVyeS5leGNlcHRpb25zlIwKQ2hvcmRFcnJvcpSTlIwZ..., 2023-03-08 18:19:52.569411+00, null, gASVEQAAAAAAAAB9lIwIY2hpbGRyZW6UXZRzLg==, null, null, null, null, 2023-03-08 18:19:52.569405+00, null).
</code></pre>
<p>It's these cascades of database errors that bother me. Am I doing something wrong? Or is this a bug in something?</p>
<p>If I just have the chord and don't chain another task, this seems to work fine. If I don't have the exception raised, that also works, even with the extra chain.</p>
|
<python><django><celery><amqp><django-celery-results>
|
2023-03-08 18:35:11
| 0
| 15,639
|
Matthew Scouten
|
75,676,928
| 6,290,062
|
Fortran subroutine returning None in python c.f. working in R
|
<p>I'm trying to implement some probability work that utilises the bivariate normal cdf. In R I can use <a href="https://github.com/brentonk/pbivnorm" rel="nofollow noreferrer">the pbivnorm package</a> (line <a href="https://github.com/brentonk/pbivnorm/blob/master/R/pbivnorm.r#L90" rel="nofollow noreferrer">90</a> specifically), e.g. (with some made-up numbers):</p>
<pre><code>library(pbivnorm)
.Fortran("PBIVNORM", as.double(0), c(0,0), as.double(-0.1), as.double(-0.2), as.integer(c(0,0)), as.double(0), as.integer(1), PACKAGE="pbivnorm")
[[1]]
[1] 0.193613
[[2]]
[1] 0 0
[[3]]
[1] -0.1
[[4]]
[1] -0.2
[[5]]
[1] 0 0
[[6]]
[1] 0
[[7]]
[1] 1
</code></pre>
<p>I've taken the underlying Fortran code from <a href="https://github.com/brentonk/pbivnorm/blob/master/src/pbivnorm.f" rel="nofollow noreferrer">the package here</a> and compiled it using F2py (named the compiled .so file fortran) so I can e.g. do:</p>
<pre><code>import fortran
fortran.pbivnorm(float(0), [float(0), float(0)], float(-0.1), float(-0.2), [int(0), int(0)], float(0),int(1))
</code></pre>
<p>However, this returns None. I know that the subroutine technically changes the values supplied to it, so I also supplied various named constants, but they remain unchanged when queried.</p>
<p>I also feel like it could be to do with fussiness over the input types.</p>
<p>Any help much appreciated!</p>
|
<python><fortran><f2py>
|
2023-03-08 18:34:13
| 1
| 917
|
Robert Hickman
|
75,676,807
| 8,512,262
|
Preventing middle-clicking on selected text from copying and pasting in tkinter
|
<p>I've noticed a "feature" of the <code>Entry</code> and <code>Text</code> widgets wherein any selected text will be duplicated (essentially copied and pasted) when middle-clicked. I'm trying to prevent this behavior, since I don't want or need it for my use case.</p>
<pre><code>import tkinter as tk
from tkinter import ttk

root = tk.Tk()
root.geometry('240x60')

entry = ttk.Entry(root)
entry.insert(0, 'Select me, then middle click!')
entry.pack(expand=True, fill='x', padx=20, pady=20)

if __name__ == '__main__':
    root.mainloop()
</code></pre>
<p>I attempted to override the middle-click with an event binding, but that doesn't appear to prevent this behavior:</p>
<pre><code>entry.bind('<Button-2>', 'break')  # consume the middle-click event and bail
</code></pre>
|
<python><tkinter><text><event-handling><mouseevent>
|
2023-03-08 18:20:38
| 1
| 7,190
|
JRiggles
|
75,676,642
| 6,077,239
|
How to perform conditional join more efficiently in Polars?
|
<p>I have a reasonably large dataframe on hand. Joining it with itself takes some time. But I want to join the frames with some conditions, which could make the resulting dataframe much smaller. My question is: how can I take advantage of such conditions to make the conditional join faster than the plain full join?</p>
<p>Code below for illustration:</p>
<pre><code>import time
import numpy as np
import polars as pl
# example dataframe
rng = np.random.default_rng(1)
nrows = 3_000_000
df = pl.DataFrame(
dict(
day=rng.integers(1, 300, nrows),
id=rng.integers(1, 5_000, nrows),
id2=rng.integers(1, 5, nrows),
value=rng.normal(0, 1, nrows),
)
)
# joining df with itself takes around 10-15 seconds on a machine with 32 cores.
start = time.perf_counter()
df.join(df, on=["id", "id2"], how="left")
time.perf_counter() - start
# joining df with itself with extra conditions - the implementation below takes very similar time (10-15 seconds).
start = time.perf_counter()
df.join(df, on=["id", "id2"], how="left").filter(
(pl.col("day") < pl.col("day_right")) & (pl.col("day_right") - pl.col("day") <= 30)
)
time.perf_counter() - start
</code></pre>
<p>So, as mentioned above, my question is: how can I take advantage of the conditions during the join to make the 'conditional join' faster?</p>
<p><em>It should be faster since the resulting dataframe after conditions has 10x less rows than the full join without any conditions.</em></p>
|
<python><python-polars>
|
2023-03-08 18:01:53
| 1
| 1,153
|
lebesgue
|
75,676,575
| 10,375,073
|
Typing warning: pylance "str" is incompatible with "list[Literal]"
|
<p>Why can't Pylance recognize that I am assigning a literal in this case?</p>
<pre class="lang-py prettyprint-override"><code>Color = Literal["blue", "green", "white"]
@dataclass
class TestColor:
my_color: Color | list[Color]
color = TestColor(my_color=["blue"])
color.my_color = color.my_color[0]
</code></pre>
<p>The error is on <code>color.my_color[0]</code> that raise a pylance warning:</p>
<pre><code>"str" is incompatible with "list[Color]"
"str" cannot be assigned to type "Literal['blue']"
"str" cannot be assigned to type "Literal['green']"
"str" cannot be assigned to type "Literal['white']"
</code></pre>
<p>But the value of <code>color.my_color[0]</code> is obviously of type Color! Should I just ignore this, or should I raise an issue somewhere?</p>
|
<python><python-3.x><typing><python-dataclasses><pylance>
|
2023-03-08 17:54:20
| 1
| 405
|
RaphWork
|
75,676,509
| 5,378,132
|
Python Docker - bind source path does not exist
|
<p>I basically want to run an external Docker container from within another Docker container using the Python Docker client. The Python script, which will run inside the Docker container, will pull an image and then run the container. I tried running the standalone Python script locally first and didn't have any issues. Now, when I try running the script from inside a Docker container, it fails with the following error:</p>
<pre><code>docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.41/containers/create: Bad Request ("invalid mount config for type "bind": bind source path does not exist: /code/User_Input.json")
</code></pre>
<p>Here is my dockerfile:</p>
<pre><code>FROM python:3.7
WORKDIR /code
USER root
COPY ./requirements.txt /code/requirements.txt
RUN pip3 install -r /code/requirements.txt
COPY ./main.py /code/main.py
COPY ./config.json /code/config.json
COPY ./templates/ /code/
CMD ["python3", "main.py"]
</code></pre>
<p>And here is a snippet of my Python script:</p>
<pre><code>client.images.pull(docker_image)
cwd = os.getcwd()
with open(f"{cwd}/User_Input.json", "w") as f:
json.dump(user_input, f)
# Mount the current directory to /code/hostcwd
volumes = {cwd: {"bind": "/home/hostcwd", "mode": "rw"}}
# Mount the input json file to /User_Input.json inside the container
mounts = [
docker.types.Mount(
type="bind", source=f"{cwd}/User_Input.json", target="/User_Input.json", read_only=False
)
]
container = client.containers.run(
docker_image,
command=None,
volumes=volumes,
environment=env_vars,
mounts=mounts,
stdin_open=True,
stdout=True,
stderr=True,
remove=True,
tty=True,
detach=True,
)
for line in container.logs(stream=True):
print(line.decode("utf-8"), end="")
</code></pre>
<p>I am confused because it's complaining that the bind source path doesn't exist, but I'm manually creating the file at that designated path in my Python script, and when I ssh into the container, I see the <code>User_Input.json</code>.</p>
|
<python><docker><volume><mount>
|
2023-03-08 17:47:33
| 1
| 2,831
|
Riley Hun
|
75,676,268
| 562,697
|
Remove excessive top and left margin from matplotlib 3D subplot
|
<p>I have a bunch of plots, one is a 3D projection. All the plots look good, except the 3D plot has a ton of empty space above and to the left (see the example). The bottom and right margins look fine, just not the top and left margins. Is there any way to remove this extra white space?</p>
<p>I have tried manually changing the margins, but with no apparent effect (they were 0.5, 0.5, 0). The matplotlib rc file has <code>savefig.bbox = tight</code>. I tried messing with the position bbox, but that hasn't gone well with it being a subplot. The problem seems to be specific to a 3D plot used as a subplot; a standalone 3D plot does not show it.</p>
<p><a href="https://i.sstatic.net/R6sJq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R6sJq.png" alt="enter image description here" /></a></p>
<p><strong>EDIT</strong></p>
<p><a href="https://stackoverflow.com/questions/38604100/matplotlib-3d-plot-how-to-get-rid-of-the-excessive-white-space">Matplotlib 3d plot: how to get rid of the excessive white space?</a> does not answer this question unfortunately, because the 3D plot is not in a sub plot. Changing the left in <code>subplots_adjust</code> also damages the non 3D plot. <a href="https://stackoverflow.com/questions/41225293/remove-white-spaces-in-axes3d-matplotlib">Remove white spaces in Axes3d (matplotlib)</a> also largely requires the same underlying function <code>subplots_adjust</code> for the bulk of the adjustment.</p>
<p>Example:</p>
<pre><code>import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(2, 1, 1, projection='3d')
x_labels = [10,20,30]
x = [1,2,3,4]
y = [3,1,5,1]
for label in x_labels:
x_3d = label * np.ones_like(x)
ax.plot(x_3d, x, y, color='black')
#
ax.set_zlabel('test')
ax = fig.add_subplot(2, 1, 2)
time = np.arange(0, 10, 0.1)
ax.plot(time, np.sin(time))
fig.tight_layout()
fig.subplots_adjust(left=-0.11) # plot outside the normal area
canvas = FigureCanvas(fig)
canvas.print_figure(r'D:\test.png')
</code></pre>
<p>With <code>tight_layout()</code> and <code>subplots_adjust</code>:
<a href="https://i.sstatic.net/yB5V3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yB5V3.png" alt="enter image description here" /></a></p>
<p>Without:
<a href="https://i.sstatic.net/xSa0q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xSa0q.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2023-03-08 17:24:48
| 0
| 11,961
|
steveo225
|
75,676,246
| 6,676,101
|
How would we create a class constructor which does nothing if the input is already an instance of that class?
|
<p>How would we create a class constructor which does nothing if the input is already an instance of that class?</p>
<p>The following code does not work, but it does partially show you what I am trying to accomplish:</p>
<pre class="lang-python prettyprint-override"><code>class Apple:
def __init__(self, *args):
        if len(args) == 1 and isinstance(args[0], type(self)):
return self
else:
self._seeds = "".join(str(arg) for arg in args)
</code></pre>
<p>We probably have to override (overload?) <code>__new__</code>, or create a meta-class, but I am not sure how to do that.</p>
<p>The class named <code>Apple</code> was very contrived.</p>
<p>In the big picture, I am trying to write a function which can accept either parsed or un-parsed data as input.</p>
<p>If the data is already parsed, there is no reason to parse it a second time.</p>
<p>If the input is an instance of <code>ParsedData</code>, then we return the input.</p>
<p>If the input is not an instance of <code>ParsedData</code> then we pass the input into the constructor of the class named <code>ParsedData</code>.</p>
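<p>One way to get this behavior — a sketch, not the only possible idiom — is to override <code>__new__</code> so an existing instance is returned as-is, and to guard <code>__init__</code> against re-initialising it (since <code>__init__</code> still runs on whatever <code>__new__</code> returns):</p>

```python
class Apple:
    def __new__(cls, *args):
        # If the sole argument is already an Apple, hand it back unchanged.
        if len(args) == 1 and isinstance(args[0], cls):
            return args[0]
        return super().__new__(cls)

    def __init__(self, *args):
        # __init__ runs on whatever __new__ returned, so skip
        # re-initialising an instance that was just passed through.
        if len(args) == 1 and args[0] is self:
            return
        self._seeds = "".join(str(arg) for arg in args)
```

<p>With this, <code>Apple(some_apple) is some_apple</code> holds, while <code>Apple(1, 2, 3)</code> constructs normally.</p>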
|
<python><python-3.x><constructor>
|
2023-03-08 17:22:48
| 1
| 4,700
|
Toothpick Anemone
|
75,676,231
| 2,912,859
|
How to prevent filling up memory when hashing large files with xxhash?
|
<p>I'm trying to calculate xxhash of video files using the following code:</p>
<pre><code>def get_hash(file):
with open(file, 'rb') as input_file:
return xxhash.xxh3_64(input_file.read()).hexdigest()
</code></pre>
<p>Some of the files are larger than the amount of RAM on the machine. When hashing those files, the memory fills up, followed by swap filling up, at which point the process gets killed by the OS (I assume).
What is the correct way to handle these situations? Thank you!</p>
|
<python>
|
2023-03-08 17:19:56
| 2
| 344
|
equinoxe5
|
75,676,099
| 10,714,156
|
PyTorch: `forward()` works but `backward()` fails for the same batch data
|
<p>Even though I provide a (rather convoluted but) concrete example below, I have a conceptual question that I haven't been able to solve reading the documentation on the <code>torch.tensor.backward()</code> function:</p>
<p>My question is: <strong>How could it be possible that, for a given batch, I can correctly compute the <code>loss</code> function, but then it fails to compute the gradients using the <code>backward()</code> method?</strong>.</p>
<p>In particular, I am getting a conformability error of the form: <code>RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0</code> only when <code> loss.backward()</code>, but the loss function is computed without errors. The structure described is listed below.</p>
<pre><code># Defining my model
model = LL_MODEL(args)
# Training loop
for idx, data_batch in enumerate(DataLoader_obj):
# Compute the loss
loss = model(data_batch)
# Compute the gradient
loss.backward() #<- this fails even when the loss is computed fine.
</code></pre>
<p>What I find unexpected is that if the loss function can be computed correctly, the backward pass should not run into dimension problems. Some conceptual reasons why this can happen would be useful for my debugging process. Thank you.</p>
<hr />
<p>Additionally, before presenting the programs, here there are a couple of things I've checked and noticed trying to debug my program:</p>
<ol>
<li><p>All the parameters have <code>requires_grad = True</code>.</p>
</li>
<li><p>Dimensions match (because <code>loss</code> is correctly computed)</p>
</li>
<li><p>When using a <code>batch_size = 1</code> (only taken one individual, two records), the code runs without problems. Hence, that makes me think that the problem might be related to the panel structure of the data. In each batch, I am sampling at the level of groups (individuals = <code>id_ind</code> variable) and not at the level of observations (rows) (see <code>ChoiceDataset()</code> class below). On the data, each individual has a total of two records (two rows per individual). There are 5 individuals, each with 2 observations (10 data points in total).</p>
</li>
<li><p>When using a <code>batch_size= 5</code> (which is a total number of individuals, and hence is taking the entire data), the program also fails, and the error that is shown is <code>RuntimeError: The size of tensor a (10) must match the size of tensor b (5) at non-singleton dimension 0</code>. Where 10 is the total number of rows on the example data, and 5 is the number of individuals, which is also a flag pointing towards the panel structure of the data as the source of the bug. However, the loss function is computed correctly in all of the cases, regardless of the <code>batch_size</code>.</p>
</li>
</ol>
<hr />
<p>Finally, here is the code that replicates the described error.</p>
<h3>Sample data:</h3>
<pre><code>import pandas as pd
import argparse
# args to be passed to the model
parser = argparse.ArgumentParser(description='')
parser.add_argument('--R', type=int, default=7, help='Number of draws (default: 100)')
args = parser.parse_args("")
args.J = 3 # number of alternatives
args.batch_size = 2 # length of the batch
args.K = 3 # number of alternatives
args.K_r = 1 # rand par
args.K_f = 2 # fixed par
X = pd.DataFrame.from_dict({'x1_1': {0: -0.176, 1: 1.6458, 2: -0.13, 3: 1.96, 4: -1.70, 5: 1.45, 6: 0.06, 7: -1.21, 8: -0.30, 9: 0.07}, 'x1_2': {0: -2.420, 1: -1.0828, 2: 2.73, 3: 1.597, 4: 0.088, 5: 1.220, 6: -0.44, 7: -0.692, 8: 0.037, 9: 0.465}, 'x1_3': {0: -1.5483, 1: 0.8457, 2: -0.21250, 3: 0.52923, 4: -2.593, 5: -0.6188, 6: 1.69, 7: -1.027, 8: 0.63, 9: -0.771}, 'x2_1': {0: 0.379724, 1: -2.2364391598508835, 2: 0.6205947900678905, 3: 0.6623865847688559, 4: 1.562036259999875, 5: -0.13081282910947759, 6: 0.03914373833251773, 7: -0.995761652421108, 8: 1.0649494418154162, 9: 1.3744782478849122}, 'x2_2': {0: -0.5052556836786106, 1: 1.1464291788297152, 2: -0.5662380273138174, 3: 0.6875729143723538, 4: 0.04653136473130827, 5: -0.012885303852347407, 6: 1.5893672346098884, 7: 0.5464286050059511, 8: -0.10430829457707284, 9: -0.5441755265313813}, 'x2_3': {0: -0.9762973303149007, 1: -0.983731467806563, 2: 1.465827578266328, 3: 0.5325950414202745, 4: -1.4452121324204903, 5: 0.8148816373643869, 6: 0.470791989780882, 7: -0.17951636294180473, 8: 0.7351814781280054, 9: -0.28776723200679066}, 'x3_1': {0: 0.12751822396637064, 1: -0.21926633684030983, 2: 0.15758799357206943, 3: 0.5885412224632464, 4: 0.11916562911189271, 5: -1.6436210334529249, 6: -0.12444368631987467, 7: 1.4618564171802453, 8: 0.6847234328916137, 9: -0.23177118858569187}, 'x3_2': {0: -0.6452955690715819, 1: 1.052094761527654, 2: 0.20190339195326157, 3: 0.6839430295237913, 4: -0.2607691613858866, 5: 0.3315513026670213, 6: 0.015901139336566113, 7: 0.15243420084881903, 8: -0.7604225072161022, 9: -0.4387652927008854}, 'x3_3': {0: -1.067058994377549, 1: 0.8026914180717286, 2: -1.9868531745912268, 3: -0.5057770735303253, 4: -1.6589569342151713, 5: 0.358172252880764, 6: 1.9238983803281329, 7: 2.2518318810978246, 8: -1.2781475121874357, 9: -0.7103081175166167}})
Y = pd.DataFrame.from_dict({'CHOICE': {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 3.0, 5: 2.0, 6: 1.0, 7: 1.0, 8: 2.0, 9: 2.0}})
Z = pd.DataFrame.from_dict({'z1': {0: 2.41967, 1: 2.41, 2: 2.822, 3: 2.82, 4: 2.07, 5: 2.073, 6: 2.04, 7: 2.04, 8: 2.40, 9: 2.40}, 'z2': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 1.0, 5: 1.0, 6: 1.0, 7: 1.0, 8: 0.0, 9: 0.0}, 'z3': {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 2.0, 5: 2.0, 6: 2.0, 7: 2.0, 8: 3.0, 9: 3.0}})
id = pd.DataFrame.from_dict({'id_choice': {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0, 5: 6.0, 6: 7.0, 7: 8.0, 8: 9.0, 9: 10.0}, 'id_ind': {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 3.0, 5: 3.0, 6: 4.0, 7: 4.0, 8: 5.0, 9: 5.0}} )
# Create a dataframe with all the data
data = pd.concat([id, X, Z, Y], axis=1)
</code></pre>
<h3>Model + Dataset classes</h3>
<pre><code>
class ChoiceDataset(Dataset):
'''
Dataset for choice data
Args:
data (pandas dataframe): dataframe with all the data
Returns:
dictionary with the data for each individual
'''
def __init__(self, data, args , id_variable:str = "id_ind" ):
if id_variable not in data.columns:
raise ValueError(f"Variable {id_variable} not in dataframe")
self.data = data
self.args = args
# select cluster variable
self.cluster_ids = self.data[id_variable].unique()
self.Y = torch.LongTensor(self.data['CHOICE'].values -1).reshape(len(self.data['CHOICE'].index),1)
self.id = torch.LongTensor(self.data[id_variable].values).reshape(len(self.data[id_variable].index),1)
self.N_n = torch.unique(self.id).shape[0]
_ , self.t_n = self.id.unique(return_counts=True)
self.N_t = self.t_n.sum(axis=0).item()
self.X_wide = data.filter(regex='^x')
if data.filter(regex='^ASC').shape[1] > 0:
#Select variables that start with "ASC"
self.ASC_wide = data.filter(regex='^ASC')
# Concatenate X and ASC
self.X_wide_plus_ASC = pd.concat([self.X_wide, self.ASC_wide], axis=1)
self.K = int(self.X_wide_plus_ASC.shape[1] / args.J)
else:
self.X_wide_plus_ASC = self.X_wide
self.K = int(self.X_wide_plus_ASC.shape[1] / args.J)
self.X = torch.DoubleTensor(self.X_wide_plus_ASC.values)
self.Z = torch.DoubleTensor(self.data.filter(regex='^z').values)
def __len__(self):
return self.N_n # number of individuals
def __getitem__(self, idx):
# select the index of the individual
self.index = torch.where(self.id == idx+1)[0]
self.len_batch = self.index.shape[0]
Y_batch = self.Y[self.index]
Z_batch = self.Z[self.index]
id_batch = self.id[self.index]
# Number of individuals in the batch
t_n_batch = self.t_n[idx]
# Select regressors for the individual
X_batch = self.X[self.index]
# reshape X_batch to have the right dimensions
X_batch = X_batch.reshape(self.len_batch, self.K, self.args.J)
return {'X': X_batch, 'Z': Z_batch, 'Y': Y_batch, 'id': id_batch, 't_n': t_n_batch}
def cust_collate(batch_dict):
'''
This function is used to concatenate the data for each individual.
It relies on the function `default_collate()` to concatenate the
batches of each block of individuals. Later it resizes the tensors
to concatenate the data for each individual using axis 0.
Parameters
----------
batch_dict : dict
Dictionary containing the data for each block of individuals.
Keys are 'X', 'Y', 'Z' and 'id'
Returns
-------
collate_batche : dict
Dictionary containing the data for each individual.
Keys are 'X', 'Y', 'Z' and 'id' (again)
'''
def resize_batch_from_4D_to_3D_tensor(x:torch.Tensor):
"""
This function suppresses the extra dimension created by
`default_collate()` and concatenates the data for each
        individual along axis 0, inferring (`-1`) dimension zero.
Parameters
----------
x : torch.Tensor (4D)
Returns
-------
torch.Tensor (3D)
"""
return x.view(-1, x.size(2), x.size(3))
def resize_batch_from_3D_to_2D_tensor(y:torch.Tensor):
"""
This function suppresses the extra dimension created by
`default_collate()` and concatenates the data for each
        individual along axis 0, inferring (`-1`) dimension zero.
Parameters
----------
x : torch.Tensor (3D)
Returns
-------
torch.Tensor (2D)
"""
return y.view(-1, y.size(2))
collate_batche = default_collate(batch_dict)
# Resize the tensors to concatenate the data for each individual using axis 0.
collate_batche['X'] = resize_batch_from_4D_to_3D_tensor(collate_batche['X'])
collate_batche['Y'] = resize_batch_from_3D_to_2D_tensor(collate_batche['Y'])
collate_batche['Z'] = resize_batch_from_3D_to_2D_tensor(collate_batche['Z'])
collate_batche['id'] = resize_batch_from_3D_to_2D_tensor(collate_batche['id'])
# Number of individuals in the batch
collate_batche['N_n_batch'] = torch.unique(collate_batche['id']).shape[0]
# Total Number of choices sets in the batch
collate_batche['N_t_batch'] = collate_batche['Y'].shape[0]
return collate_batche
# model
class LL_MODEL(nn.Module):
def __init__(self,args):
super(LL_MODEL, self).__init__()
self.args = args
self.sum_param = self.args.K_r + self.args.K_f
assert self.args.K == self.sum_param , "Total number of parameters K is not equal to the sum of the number of random, fixed and taste parameters"
self.rand_param_list = nn.ParameterList([])
for i in range(2 * self.args.K_r):
beta_rand = nn.Parameter(torch.zeros(1,dtype=torch.double, requires_grad=True))
self.rand_param_list.append(beta_rand)
if self.args.K_f > 0:
self.fix_param_list = nn.ParameterList([])
for i in range(self.args.K_f):
beta_fix = nn.Parameter(torch.zeros(1,dtype=torch.double, requires_grad=True))
self.fix_param_list.append(beta_fix)
def forward(self, data):
'''
This function defines the forward pass of the model.
It receives as input the data and the draws from the QMC sequence.
It returns the log-likelihood of the model.
----------------
Parameters:
d: (dataset) dictionary with keys: X, Y, id, t_n, Z (if needed))
X: (tensor) dimension: (N_t x K x J) [attributes levels]
Y: (tensor) dimension: (N_t, J) [choosen alternative]
id: (tensor) dimension: (N_t, 1) [individual id]
t_n: (tensor) dimension: (N_n, 1) [number of choice sets per individual]
Z: (tensor) dimension: (N_t, K_t) [individual characteristics]
Draws: (tensor) dimension: (N_n, J, R)
N_n: number of individuals
J: number of alternatives
R: number of draws
----------------
Output:
simulated_log_likelihood: (tensor) dimension: (1,1)
'''
self.N_t = data['N_t_batch']
self.N_n = data['N_n_batch']
self.K = self.args.K
#self.Draws = data['Draws']
self.Draws = Create_Draws(args.K_r, self.N_n, args.R ,data['t_n'], args.J)
print('self.Draws.shape',self.Draws.shape)
self.X = data['X']
self.Y = data['Y']
self.t_n = data['t_n'].reshape(self.N_n,1)
self.Z = data['Z'] if data['Z'] is not None else None
self.id = torch.from_numpy(np.arange(self.N_n)).reshape(self.N_n,1)
rand_par = [self.rand_param_list[i] for i in range(2 * self.args.K_r)]
if self.args.K_f > 0:
fix_par = [self.fix_param_list[i] for i in range(self.args.K_f)]
self.params = torch.cat(rand_par + fix_par).reshape(self.args.K_f + 2 * self.args.K_r,1)
else:
self.params = torch.cat(rand_par).reshape(2 * self.args.K_r,1)
self.beta_means = self.params[0:2*self.args.K_r:2 ,0].reshape(self.args.K_r,1,1,1)
self.beta_stds = self.params[1:2*self.args.K_r:2 ,0].reshape(self.args.K_r,1,1,1)
self.beta_R = torch.empty(
self.args.K_r,
self.N_t,
self.args.J,
self.args.R)
for i in range(self.args.K_r):
self.beta_R[i,:,:,:] = self.beta_means[i,:,:,:] + self.beta_stds[i,:,:,:] * self.Draws[i,:,:,:]
print('self.beta_R[i,:,:,:].shape',self.beta_R[i,:,:,:].shape)
if self.args.K_f > 0:
self.beta_F = self.params[2*self.args.K_r:2*self.args.K_r + self.args.K_f,0].reshape(self.args.K_f,1)
self.beta_F = self.beta_F.repeat(1, self.N_n * self.args.J * self.args.R).reshape(
self.args.K_f,
self.N_n,
self.args.J,
self.args.R)
self.beta_F = self.beta_F.repeat_interleave(self.t_n.reshape(len(self.t_n)), dim = 1)
if self.args.K_f > 0:
self.all_beta = torch.cat((self.beta_R, self.beta_F), 0)
else:
self.all_beta = self.beta_R
self.all_X = self.X.transpose(0,1).reshape(
self.args.K,
self.N_t,
self.args.J)
self.all_X = self.all_X[:,:,:,None].repeat(1,1,1,self.args.R)
self.V_for_R_draws = torch.einsum(
'abcd,abcd->bcd',
self.all_X.double(),
self.all_beta.double()
)
self.V_for_R_draws_exp = torch.exp(self.V_for_R_draws)
self.sum_of_exp = self.V_for_R_draws_exp.sum(dim=1)
self.prob_per_draw = self.V_for_R_draws_exp/self.sum_of_exp[:,None,:]
self.Y_expand = self.Y[:,:,None].repeat(1,1,self.args.R)
self.prob_chosen_per_draw = self.prob_per_draw.gather(1,self.Y_expand)
self.id_expand = self.id[:,:,None].repeat(1,1,self.args.R).to(torch.int64)
self.prod_seq_choices_ind = torch.ones(
self.N_n,
1,
self.args.R,
dtype=self.prob_chosen_per_draw.dtype)
self.prod_seq_choices_ind = self.prod_seq_choices_ind.scatter_reduce(
0, # the dimension to scatter on (0 for rows, 1 for columns)
self.id_expand, # the index to scatter
self.prob_chosen_per_draw, # the value to scatter
reduce='prod' # the reduction operation
)
self.proba_simulated = self.prod_seq_choices_ind.mean(dim=2)
LL = torch.log(self.proba_simulated).sum()
return LL
def Create_Draws(dimension,N,R,t_n,J):
'''
Create Draws for the random parameters
input:
dimension: number of random parameters
N: number of individuals
R: number of draws
t_n: number of choices per individual
J: number of alternatives
output:
normal_draws: tensor of size (dimension, N.repeat(t_n), J, R)
'''
np.random.seed(123456789)
Halton_sampler = qmc.Halton(d=dimension, scramble=True)
normal_draws = norm.ppf(Halton_sampler.random(n=N * R * J))
normal_draws = torch.tensor(normal_draws).reshape(dimension, N, J, R)
normal_draws = normal_draws.repeat_interleave(t_n.reshape(len(t_n)),1)
print("normal_draws shape",normal_draws.shape)
return normal_draws
</code></pre>
<h3>Training Loop</h3>
<p>Here is where the error is produced when <code>loss.backward()</code>.</p>
<pre><code>#torch.autograd.set_detect_anomaly(True) # enable anomaly detection
# Defining my dataset
DataSet_Choice= ChoiceDataset(data ,args, id_variable = "id_ind" )
# Defining my dataloader
DataLoader_obj = DataLoader(DataSet_Choice, collate_fn=cust_collate, batch_size=args.batch_size, shuffle=False, num_workers=0, drop_last=False)
# Defining my model
model = LL_MODEL(args)
# Training loop
for idx, data_batch in enumerate(DataLoader_obj):
# Compute the loss
loss = model(data_batch)
# Compute the gradient
loss.backward() #<- this fails even when the loss is computed fine.
# RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0
</code></pre>
<hr />
<h3>Full error message</h3>
<pre><code>
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
c:\_scratch_code\_problem_isolation.py in line 299
297 loss = model(data_batch)
298 # Compute the gradient
--> 299 loss.backward() #<- this fails even when the loss is computed fine.
File c:\lib\site-packages\torch\_tensor.py:396, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
387 if has_torch_function_unary(self):
388 return handle_torch_function(
389 Tensor.backward,
390 (self,),
(...)
394 create_graph=create_graph,
395 inputs=inputs)
--> 396 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File c:\lib\site-packages\torch\autograd\__init__.py:173, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
168 retain_graph = create_graph
170 # The reason we repeat same the comment below is that
...
--> 173 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
174 tensors, grad_tensors_, retain_graph, create_graph, inputs,
175 allow_unreachable=True, accumulate_grad=True)
RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0
</code></pre>
<p>Thank you.</p>
|
<python><arrays><pandas><deep-learning><pytorch>
|
2023-03-08 17:07:06
| 0
| 1,966
|
Γlvaro A. GutiΓ©rrez-Vargas
|
75,676,023
| 7,949,129
|
Python ftplib fails with FTPS but WinSCP works
|
<p>I have the following code:</p>
<pre><code>from ftplib import FTP_TLS
from io import BytesIO
user = ...
password = ...
host = ...
port = 21
class FtpDirectory:
def __init__(self):
self.ftp = None
def __enter__(self):
self.ftp = FTP_TLS()
self.ftp.debugging = 1
self.ftp.connect(host, port)
self.ftp.login(
user,
password,
)
self.ftp.prot_p()
self.ftp.cwd(targetdir)
        return self.ftp
    def __exit__(self, exc_type, exc_value, traceback):
        # close the control connection when leaving the with-block
        self.ftp.quit()
files = [('test1.txt', b'test1content'), ('test2.txt', b'test2content')]
with FtpDirectory() as ftp:
for file in files:
ftp.storbinary("STOR " + file[0], BytesIO(file[1]))
</code></pre>
<p>But it fails to finish the transmission and throws an Exception (ftplib.error_temp: 426 Transfer failed) even though I can see the first file on the ftps server with the right content.</p>
<p>The ftp log is following</p>
<pre><code>*resp* '220 myexampleserver.xyz X2 WS_FTP Server 8.7.2(01051325)'
*cmd* 'AUTH TLS'
*resp* '234 SSL enabled and waiting for negotiation'
*cmd* 'USER myexampleuser'
*resp* '331 Enter password'
*cmd* 'PASS ************'
*resp* '230 User logged in'
*cmd* 'PBSZ 0'
*resp* '200 PBSZ=0'
*cmd* 'PROT P'
*resp* '200 PRIVATE data channel protection level set'
*cmd* 'CWD ftp_test_a'
*resp* '250 Command CWD succeed'
*cmd* 'TYPE I'
*resp* '200 Transfer mode set to BINARY'
*cmd* 'PASV'
*resp* '227 Entering Passive Mode (111,222,33,44,156,110).'
*cmd* 'STOR test.txt'
*resp* '150 Uploading in BINARY file test1.txt' <---- TAKES A WHILE, FILE APPEARS ON SERVER
*cmd* 'QUIT'
*resp* '426 Transfer failed'
</code></pre>
<p>I don't get why WinSCP connects and transfers files without problems while this Python library is not capable of doing the same.</p>
<p>Here are some settings of WinSCP:</p>
<p><a href="https://i.sstatic.net/vT4Gj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vT4Gj.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/mOz9b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mOz9b.png" alt="enter image description here" /></a></p>
<p><code>urlopen('https://www.howsmyssl.com/a/check', context=ssl._create_unverified_context()).read()</code> gives me "tls_version":"TLS 1.3"</p>
<p>I don't necessarily have to use ftplib, but I haven't found a working alternative in Python so far. In any case, I must get this done somehow, and any help would be really appreciated.</p>
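<p>A cause frequently reported for exactly this symptom (WinSCP works, ftplib gets <code>426</code>) is that the server requires the data connection to reuse the control connection's TLS session, which <code>ftplib</code> does not do by default. A hedged sketch of the usual workaround — I can't verify it against this particular server — subclasses <code>FTP_TLS</code>:</p>

```python
import ftplib
import ssl

class ReusedSslSocket(ssl.SSLSocket):
    # skip the TLS shutdown handshake, which some servers mishandle
    def unwrap(self):
        pass

class MyFTP_TLS(ftplib.FTP_TLS):
    """Explicit FTPS that reuses the control channel's TLS session
    on the data channel, like WinSCP and FileZilla do."""
    def ntransfercmd(self, cmd, rest=None):
        conn, size = ftplib.FTP.ntransfercmd(self, cmd, rest)
        if self._prot_p:
            conn = self.context.wrap_socket(
                conn,
                server_hostname=self.host,
                session=self.sock.session)  # reuse the TLS session
            conn.__class__ = ReusedSslSocket
        return conn, size
```

<p>If this is the issue, constructing <code>MyFTP_TLS()</code> instead of <code>FTP_TLS()</code> in <code>__enter__</code> should be the only change needed.</p>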
|
<python><ftp><winscp><ftplib><ftps>
|
2023-03-08 16:59:37
| 1
| 359
|
A. L
|
75,676,017
| 1,975,199
|
FastAPI websocket not receiving all the data
|
<p>This is my first experience with websockets, and I'm not completely sure I am doing everything correctly. However, when reloading the web browser, the client reconnects to the websocket. When this happens, I have a list of messages from previous sessions that I send back. At the moment there are only 3, but the browser does not receive all 3 of them.</p>
<p>I am using FastAPI, and my send/receive code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>@app.websocket_route("/ws")
async def websocket_endpoint(websocket: WebSocket):
ws_active = False
# when establishing the websocket, query the devices online
# but only if it's the first connection
device_report()
# accept the websocket connection
await websocket.accept()
ws_active = True
# Push message history to client
async def send(message):
nonlocal ws_active
await push_messages(websocket)
try:
while ws_active:
# Check if there are any messages in the queue
if not queue().empty():
message = await queue().get()
print('get: ' + message)
if ws_active:
await websocket.send_text(message)
# Wait for new messages to arrive in the queue
message = await queue().get()
if ws_active:
print('get2: ' + message)
await websocket.send_text(message)
except WebSocketDisconnect:
try:
await websocket.close()
ws_active = False
except:
pass
async def receive():
nonlocal ws_active
try:
while ws_active:
data_in = await websocket.receive_json()
print(data_in)
except WebSocketDisconnect:
try:
await websocket.close()
ws_active = False
except:
pass
try:
await asyncio.gather(send('{asd:asd}'), receive())
except:
pass
</code></pre>
<p>The function <code>push_messages</code> is simple: it just takes the array of messages and puts them in the queue.</p>
<pre><code>async def push_messages(self, websocket: WebSocket):
for message in self.can_messages:
print('push: ' + message)
await self.message_queue.put(message)
for message in self.all_messages:
await self.message_queue.put(message)
</code></pre>
<p>Reloading the browser I see these output on the python console:</p>
<pre><code>push: {"type": "can", "msg": "Supervisor reported online."}
push: {"type": "can", "msg": "ECU 1 reported online."}
push: {"type": "can", "msg": "ECU 2 reported online."}
get2: {"type": "can", "msg": "Supervisor reported online."}
get: {"type": "can", "msg": "ECU 1 reported online."}
get2: {"type": "can", "msg": "ECU 2 reported online."}
</code></pre>
<p>The push is going to the queue object and the get is receiving them.
In the browser's javascript console I see only two are received:</p>
<p><a href="https://i.sstatic.net/rfyNk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rfyNk.png" alt="enter image description here" /></a></p>
<p>From everything I've read, the connection is established and there isn't any kind of "check" to make sure it's ready for sending and receiving.</p>
<p>However, if I put an arbitrary:
<code>await asyncio.sleep(1)</code> at the beginning of the <code>send()</code> function this seems to result in all the messages being sent. But that's a terrible solution and waiting for 1 second on every page reload/navigation is dumb.</p>
<p>Is there some other way I can approach this to get all the messages on a reload/navigation change?</p>
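<p>One structural simplification that is often suggested: send the stored history directly after <code>accept()</code> instead of routing it through the queue, and do exactly one <code>get()</code> per loop pass (the original loop calls <code>queue().get()</code> twice per iteration, which makes ordering and timing hard to reason about). A minimal sketch with plain <code>asyncio</code> — <code>sender</code> and <code>fake_send</code> are made-up names standing in for the websocket handler and <code>websocket.send_text</code>:</p>

```python
import asyncio

async def sender(queue, send_text, history):
    """Flush stored history first, then forward queue items,
    doing exactly one get() per loop pass."""
    for message in history:
        await send_text(message)
    while True:
        message = await queue.get()
        if message is None:  # sentinel used only in this sketch
            break
        await send_text(message)

async def main():
    queue = asyncio.Queue()
    received = []

    async def fake_send(msg):  # stands in for websocket.send_text
        received.append(msg)

    for m in ["m1", "m2"]:
        await queue.put(m)
    await queue.put(None)
    await sender(queue, fake_send, ["h1", "h2", "h3"])
    return received

sent_messages = asyncio.run(main())
print(sent_messages)
```

<p>With this shape there is no window in which a message can be consumed by one <code>get()</code> while another is pending, and no fixed sleep is needed.</p>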
|
<python><websocket><fastapi>
|
2023-03-08 16:58:27
| 0
| 432
|
jgauthier
|
75,676,016
| 2,789,788
|
How to handle memory problems when using python multiprocessing
|
<p>I have 10 long arrays, <code>arr1, arr2, ..., arr10</code>, and each gets pass to a function <code>foo</code>. Each <code>arr</code> is list of around 100 million floats. I'd like to use multiprocessing. Something like</p>
<pre><code>import multiprocessing as mp
def foo(arr):
# ...call external module to process arr...
# return a float
myarrays = [arr1, ..., arr10]
with mp.Pool(8) as pool:
    result = pool.map(foo, myarrays)  # pass each array directly; 1-tuples are only needed with starmap
# I don't need myarrays anymore
for res in result:
# ...do more...
</code></pre>
<p>The problem that I think I'm having (I may be wrong) is that each of the 8 processes seems to be consuming a large amount of memory, and the program seems to hang once the calls to <code>foo</code> are complete. Is there a better way to handle this?</p>
|
<python><multiprocessing>
|
2023-03-08 16:58:24
| 2
| 1,810
|
theQman
|
75,675,892
| 1,804,896
|
Pandas styleframe styling applies differently to first sheet than all subsequent sheets
|
<p>I am writing multiple sheets to an excel workbook with some formatting. In general everything works, just the formatting is a little off. The formatting works exactly as I want on the first sheet, but I am looping through multiple data sets and writing to a sheet for each one and the subsequent sheets have the formatting a little differently.</p>
<p>Basically, what I tried to set up was: the header is centered and everything else is left aligned. On the first sheet it's as expected, but on subsequent sheets the header and everything else is left aligned, except for the rows styled with <code>apply_style_by_indexes</code>, which are now centered. So it's as if the <code>apply_headers_style</code> and <code>apply_style_by_indexes</code> styles swapped.</p>
<p>I'm guessing it's something to do with how I'm defining the different styles and applying them.</p>
<pre><code>data_dict = {args.source_env: {}, args.target_env: {}, "compare": {}}
for app in uniq_apps:
# set some vars
#setup lists for pandas
pd_app_inst = []
pd_platforms = []
pd_src_env = []
pd_src_rel_type = []
pd_src_version = []
pd_tgt_env = []
pd_tgt_rel_type = []
pd_tgt_version = []
pd_param = []
pd_src_value = []
pd_tgt_value = []
pd_columns = ['AppInst','Platform','SrcEnv','SrcRelType','SrcVersion','TgtEnv','TgtRelType','TgtVersion','Param','SrcValue','TgtValue']
write=False
logger.info(f'Comparing config for {app}')
if data_dict[args.source_env][app]["config"] and data_dict[args.target_env][app]["config"]:
src_cfg = data_dict[args.source_env][app]["config"]
tgt_cfg = data_dict[args.target_env][app]["config"]
write=True
# add some data to data_dict
else:
logger.info(f"The app instance {app} does not exist in one of the environments, nothing to compare so skipping")
continue
params = list(data_dict["compare"][app])
params.sort()
for param in params:
pd_app_inst.append(app)
pd_platforms.append(data_dict[args.source_env][app]["platform"])
pd_src_env.append(args.source_env)
pd_src_rel_type.append(data_dict[args.source_env][app]["rel_type"])
pd_src_version.append(data_dict[args.source_env][app]["version"])
pd_tgt_env.append(args.target_env)
pd_tgt_rel_type.append(data_dict[args.target_env][app]["rel_type"])
pd_tgt_version.append(data_dict[args.target_env][app]["version"])
pd_param.append(param)
pd_src_value.append(data_dict["compare"][app][param][0])
pd_tgt_value.append(data_dict["compare"][app][param][1])
df = pd.DataFrame(list(zip(pd_app_inst,pd_platforms,pd_src_env,pd_src_rel_type,pd_src_version,pd_tgt_env,pd_tgt_rel_type,pd_tgt_version,pd_param,pd_src_value,pd_tgt_value)),columns=pd_columns)
    default_style = Styler(font_size=10, horizontal_alignment='left', wrap_text=False)
    sf = StyleFrame(df, styler_obj=default_style)
header_style = Styler(bold=True, font_size=14, horizontal_alignment='center')
sf.apply_headers_style(styler_obj=header_style)
sf.apply_style_by_indexes(sf[sf['SrcValue'] != sf['TgtValue']], styler_obj=Styler(bg_color='yellow',font_size=10,horizontal_alignment='left',bold=True))
sf.set_column_width(columns=['AppInst','Platform','SrcEnv','TgtEnv'], width=15)
sf.set_column_width(columns=['SrcRelType','SrcVersion','TgtRelType','TgtVersion'], width=25)
sf.set_column_width(columns=['Param'], width=65)
sf.set_column_width(columns=['SrcValue','TgtValue'], width=80)
sf.set_row_height(rows=sf.row_indexes, height=15)
with pd.ExcelWriter(excel_file,mode='a') as writer:
sf.to_excel(writer, sheet_name=app, freeze_panes=(1,0), index=False)
logger.info(f'Comparison written to output file for {app}')
</code></pre>
|
<python><excel><pandas><dataframe><styleframe>
|
2023-03-08 16:47:07
| 0
| 854
|
ssbsts
|
75,675,839
| 16,319,191
|
Make new column based on values in other columns pandas
|
<p>I want to introduce a new column in <code>df</code> based on the other column values.
If the c1-c3 columns hold only one unique value in a row, that unique value goes into the c4 column.
If the c1-c3 columns hold two different values, "both" goes into the c4 column.
NaN should not be considered a valid value; only c2 and c3 contain a few NaNs.</p>
<p>Minimal example:</p>
<pre><code>df = pd.DataFrame({
"c1": ["left", "right", "right", "left", "left","right"],
"c2": ["left", "right", "right", "right", "NaN","right"],
"c3": ["NaN", "NaN", "left", "NaN", "left","right"]})
</code></pre>
<p>Required df:</p>
<pre><code>answerdf = pd.DataFrame({
"c1": ["left", "right", "right", "left", "left","right"],
"c2": ["left", "right", "right", "right", "NaN","right"],
"c3": ["NaN", "NaN", "left", "NaN", "left","right"],
"c4":["left", "right", "both", "both", "left","right"] })
</code></pre>
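<p>One possible approach: treat the literal string <code>"NaN"</code> as a real missing value, then use <code>nunique(axis=1)</code> (which ignores NaN) to decide between "both" and the row's single value — a sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "c1": ["left", "right", "right", "left", "left", "right"],
    "c2": ["left", "right", "right", "right", "NaN", "right"],
    "c3": ["NaN", "NaN", "left", "NaN", "left", "right"]})

# turn the literal string "NaN" into a real missing value first
vals = df[["c1", "c2", "c3"]].replace("NaN", np.nan)
# nunique(axis=1) ignores NaN, so it counts only valid values per row
n = vals.nunique(axis=1)
# first non-null value per row: back-fill along the row, take column 0
first_valid = vals.bfill(axis=1).iloc[:, 0]
df["c4"] = np.where(n > 1, "both", first_valid)
print(df)
```

<p>Rows with more than one distinct valid value get "both"; the rest get their single valid value.</p>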
|
<python><pandas><dataframe>
|
2023-03-08 16:42:06
| 3
| 392
|
AAA
|
75,675,752
| 12,320,370
|
DataFrame string exact match
|
<p>I am using the <code>str.contains</code> method to look for specific rows in a data frame. However, I want an exact string match, and even with <code>regex=False</code> it still picks up partial matches (<code>regex=False</code> only disables regex interpretation; <code>str.contains</code> still matches substrings).</p>
<p><code>str.find</code> doesn't work either, is there another function I should be using for this? Here is a snippet where I need it in:</p>
<p>Here is some code for replication</p>
<pre><code>data = {'A':['tree','bush','forest','tree/red']}
df_test=pd.DataFrame(data)
df_test['New'] = np.where(df_test['A'].str.contains('tree', regex = False) |
df_test['A'].str.contains('bush') |
df_test['A'].str.contains('forest')
, 'Good', '')
</code></pre>
<p>So I would like to code above to only find rows with 'tree','bush' or 'forest', however it also picks up rows which say 'tree/red'.</p>
<p><a href="https://i.sstatic.net/YoMiT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YoMiT.png" alt="enter image description here" /></a></p>
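<p>For whole-cell comparison, <code>isin</code> (or <code>str.fullmatch</code>) is usually the tool rather than <code>str.contains</code> — a sketch on the sample data:</p>

```python
import numpy as np
import pandas as pd

data = {'A': ['tree', 'bush', 'forest', 'tree/red']}
df_test = pd.DataFrame(data)

# isin() compares whole cell values, so 'tree/red' is not matched
df_test['New'] = np.where(df_test['A'].isin(['tree', 'bush', 'forest']),
                          'Good', '')
print(df_test)
```

<p>Only the exact values 'tree', 'bush', and 'forest' are flagged 'Good'; 'tree/red' gets the empty string.</p>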
|
<python><pandas><string><numpy><find>
|
2023-03-08 16:34:45
| 2
| 333
|
Nairda123
|
75,675,743
| 8,899,386
|
How can I have a row-wise rank in a pyspark dataframe
|
<p>I have a dataset for which I am going to find the rank per row. This is a toy example in <code>pandas</code>.</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({"ID":[1,2,3,4], "a":[2,7,9,10],
"b":[6,7,4,2], "c":[3,4,8,5]})
print(df)
# ID a b c
# 0 1 2 6 3
# 1 2 7 7 4
# 2 3 9 4 8
# 3 4 10 2 5
df[["a","b","c"]] = df[["a","b","c"]].rank(method="min",
ascending=False,
axis=1).astype("int")
print(df)
# ID a b c
# 0 1 3 1 2
# 1 2 1 1 3
# 2 3 1 3 2
# 3 4 1 3 2
</code></pre>
<p>However, as I didn't find an equivalent of <code>axis=1</code> in <code>pyspark</code>, I couldn't convert it. My dataset has 60 million rows and 40 columns, so the suggestion should be efficient (e.g., I cannot loop over them).</p>
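<p>One reformulation I'm considering (sketched here in pandas to check the arithmetic; each comparison would map to an <code>F.when(F.col(o) > F.col(c), 1).otherwise(0)</code> expression in pyspark): with <code>method="min"</code> descending, the rank of a column is 1 plus the number of columns in the same row with a strictly greater value, which needs no <code>axis=1</code> at all:</p>

```python
import pandas as pd

df = pd.DataFrame({"ID": [1, 2, 3, 4], "a": [2, 7, 9, 10],
                   "b": [6, 7, 4, 2], "c": [3, 4, 8, 5]})
cols = ["a", "b", "c"]

# rank(method="min", ascending=False) == 1 + count of strictly
# greater peers in the same row.
ranked = df.copy()
for c in cols:
    ranked[c] = 1 + sum((df[o] > df[c]).astype(int)
                        for o in cols if o != c)
```

<p>In pyspark this becomes one <code>select</code> with 40 column expressions, so it stays a single narrow transformation over the 60M rows.</p>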
|
<python><sql><pandas><dataframe><pyspark>
|
2023-03-08 16:34:13
| 1
| 4,890
|
Hadij
|
75,675,674
| 7,984,318
|
python pyodbc pass parameters issue
|
<p>Code:</p>
<pre><code>import pyodbc as pyodbc
cnxn = pyodbc.connect(DB_STR)
cursor = cnxn.cursor()
def insert_table(insert_query,date_para):
result = cursor.execute(insert_query,date_para)
cnxn.commit()
</code></pre>
<p>This code passes a parameter to the SQL statement.</p>
<p>My issue is: when there is no parameter to pass, I still want to reuse the <code>insert_table</code> function:</p>
<pre><code>my_query ='''
insert into mytable ...
'''
insert_table(my_query,'')
</code></pre>
<p>If I leave the second argument empty or put something else there, I get an error.</p>
<p>How should I modify the code so that it works in both scenarios, with parameters and without?</p>
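<p>A default-argument version I'm considering (sketched with <code>sqlite3</code> so it runs standalone; <code>pyodbc</code>'s <code>cursor.execute</code> accepts the same two call shapes):</p>

```python
import sqlite3  # stand-in for pyodbc in this sketch

cnxn = sqlite3.connect(":memory:")
cursor = cnxn.cursor()
cursor.execute("CREATE TABLE mytable (d TEXT)")

def insert_table(insert_query, params=None):
    # Only pass the parameters through when the caller supplied some;
    # execute() with an empty placeholder argument is what errors out.
    if params:
        cursor.execute(insert_query, params)
    else:
        cursor.execute(insert_query)
    cnxn.commit()

insert_table("INSERT INTO mytable VALUES (?)", ("2023-01-01",))
insert_table("INSERT INTO mytable VALUES ('2023-01-02')")
```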
|
<python><sql><postgresql><pyodbc>
|
2023-03-08 16:27:45
| 1
| 4,094
|
William
|
75,675,592
| 5,371,582
|
headless authentification of an app to google api
|
<p>I have a Python app which needs (among many other things) to read a Google Sheet.
The line</p>
<pre><code>flow = InstalledAppFlow.from_client_secrets_file(cred_file, SCOPES)
</code></pre>
<p>requires me to visit <code>https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=...</code>
I go there, give a manual authorization and my app works well.</p>
<p>Now, here is the twist.</p>
<p>I need to run the app on a headless machine to which I only have SSH access. It still requires me to visit <code>https://accounts.google.com/o/oauth2/a</code>, but I don't know how to open that URL from my SSH-only machine.</p>
<p>I could go there with a text-based browser[1], but I'd like a more "programmatic" way.</p>
<p>[1] w3m and elinks do not work because they lack JavaScript support. I did not get further in that direction.</p>
<p>EDIT : when I said "read googlesheet", I was oversimplifying my needs. In fact I have (among others) to list elements in a shared drive.</p>
|
<python><google-api><headless>
|
2023-03-08 16:20:46
| 1
| 705
|
Laurent Claessens
|
75,675,556
| 8,847,609
|
How to get Aruco pose estimation in OpenCV Python?
|
<p>I'm following <a href="https://betterprogramming.pub/getting-started-with-aruco-markers-b4823a43973c" rel="nofollow noreferrer">this</a> tutorial on getting started with aruco markers.</p>
<p>I want to get the pose estimation frame axes and display them on top of the aruco marker like so:</p>
<p><a href="https://i.sstatic.net/sTAAS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sTAAS.jpg" alt="enter image description here" /></a></p>
<p>Here is the relevant code snippet from the tutorial:</p>
<pre><code> gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
detector = cv2.aruco.ArucoDetector(arucoDict, arucoParams)
(corners, ids, rejected) = detector.detectMarkers(gray)
if len(corners) > 0:
for i in range(0, len(ids)):
rvec, tvec, markerPoints = cv2.aruco.estimatePoseSingleMarkers(corners[i], 0.02, mtx, dist)
cv2.drawFrameAxes(frame, mtx, dist, rvec, tvec, 0.02)
</code></pre>
<p>But I get the following attribute error:</p>
<pre><code>AttributeError: module 'cv2.aruco' has no attribute 'estimatePoseSingleMarkers'
</code></pre>
<p>My Python version: 3.11.2,
OpenCV version: 4.7.0</p>
<p>How do I get the pose and draw the axes on the aruco marker?</p>
|
<python><opencv><aruco>
|
2023-03-08 16:18:08
| 2
| 1,176
|
Brandon Pillay
|
75,675,368
| 5,758,423
|
How to include conditional installs in python packages -- namely for backports?
|
<p>My question is about conditional installs in general, but the usage I have in mind is for python backports specifically. Let's do this with a concrete example.</p>
<p>It often happens that I want to use <code>from importlib.resources import files</code>, but that only exists in 3.9+.</p>
<p>In my code, I end up doing:</p>
<pre class="lang-py prettyprint-override"><code>try:
from importlib.resources import files # ... and any other things you want to get
except (ImportError, ModuleNotFoundError):
from importlib_resources import files # pip install importlib_resources
</code></pre>
<p>But I feel shame when copy/pasting this code everywhere. Additionally, it still doesn't solve my problem "completely".
The question is: Do I include <code>importlib_resources</code>, a third-party backport of <code>importlib.resources</code>, in my <code>setup.cfg</code> <code>install_requires</code>? If I don't, my code will fail on python < 3.9. If I do include it, I'm overkilling my dependencies.</p>
<p>Ideally, I'd like a solution where I can just do</p>
<pre class="lang-py prettyprint-override"><code>from importlib_resources import files
</code></pre>
<p>in my code, and trust that the <code>setup.cfg</code> took care of doing two things:</p>
<ul>
<li>telling the install to only do a <code>pip install importlib-resources</code> if python version < 3.9</li>
<li>telling it that <code>importlib_resources</code> is an alias for <code>importlib.resources</code> if in 3.9+, and of <code>importlib_resources</code> (the pip install backport) if not.</li>
</ul>
<p>Trying to solve the first problem only, I did this.</p>
<pre><code>install_requires =
dol
importlib_resources; python_version < '3.9'
</code></pre>
<p>But it doesn't condition the install at all (it's installed on both 3.8 and 3.10!).</p>
<p>Not sure how to even start with the second "aliasing" desire.</p>
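<p>The closest I've come to the aliasing part is centralizing the shim in one module of my package, so the fallback lives in exactly one place (a sketch; <code>compat</code> and the version check are my own convention, not a packaging feature, and <code>mypkg</code> is a placeholder name):</p>

```python
# compat.py -- the only place the backport shim lives; everywhere else
# does "from mypkg.compat import files".
import sys

if sys.version_info >= (3, 9):
    from importlib.resources import files
else:
    from importlib_resources import files  # the PyPI backport
```

<p>This sidesteps the try/except copy-paste, though it still leaves the conditional-install question open.</p>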
|
<python><python-import><setuptools><python-packaging><python-importlib>
|
2023-03-08 16:01:36
| 1
| 2,432
|
thorwhalen
|
75,675,364
| 2,537,394
|
Why does tkinter block when ending this thread?
|
<p>I'm trying to create a worker thread that manages further threads/processes and communicates with my tkinter GUI thread.</p>
<p>For this I have a GUI class that inherits from <code>tkinter.Frame</code> with a simple start button, a progressbar and a variable label. When pressing the start button the worker thread gets started. This thread runs entirely separately from the GUI (therefore no <code>.join()</code> necessary after creating the worker thread) and sends back its progress via a tkinter Event (<code>tkinter.event_generate()</code>) and a Queue (<code>queue.Queue()</code>).</p>
<p>Now this works well, and even letting the thread complete works well. However, if you try to abort the thread by setting a <code>threading.Event()</code> and joining the thread to wait for a safe exit, the GUI hangs and only continues when join's timer runs out.</p>
<p>When setting the <code>threading.Event()</code> I would expect the worker thread to generate a <code><<MyEventAborted>></code> event almost immediately, then to return from the <code>run()/start()</code> methods and thereby stop existing. This should resolve the <code>join()</code> call immediately. Why does it not work like that? I'm using Python 3.9.12.</p>
<pre class="lang-py prettyprint-override"><code>import queue
import threading
import time
import tkinter as tk
from tkinter import ttk
class GUI(tk.Frame):
def __init__(self, parent, *args, **kwargs):
tk.Frame.__init__(self, parent, *args, **kwargs)
self.parent = parent
self.init_elements()
def init_elements(self):
self.button = ttk.Button(self, text="Start", command=self.start_worker)
self.button.grid(row=0, column=0, sticky="W", padx=5, pady=5)
self.label = tk.StringVar(value="nothing yet")
ttk.Label(self, textvariable=self.label).grid(row=0, column=1, sticky="E", padx=5, pady=5)
self.progress = tk.DoubleVar(0)
ttk.Progressbar(
self, orient=tk.HORIZONTAL, length=200, mode='determinate', maximum=100, variable=self.progress
).grid(row=1, column=0, columnspan=2, sticky="WE", padx=5, pady=5)
self.bind("<<MyEventUpdate>>", self.on_update)
self.bind("<<MyEventFinished>>", self.on_finished)
self.bind("<<MyEventAborted>>", self.on_aborted)
self.com_queue = queue.Queue()
def on_update(self, event):
progress = self.progress.get()
self.progress.set(progress+1)
self.label.set(f"Task {(self.com_queue.get()):.0f}/100")
def on_finished(self, event):
self.label.set("Finished!")
self.button.configure(text="Start", command=self.start_worker)
def on_aborted(self, event):
self.label.set("Aborted!")
self.button.configure(text="Start", command=self.start_worker)
def start_worker(self):
self.progress.set(0)
self.button.configure(text="Stop", command=self.stop_worker)
self.worker_thread = MyStoppableThread(queue=self.com_queue, tk_gui=self)
self.worker_thread.start()
def stop_worker(self):
self.worker_thread.stop()
self.worker_thread.join(timeout=3)
self.button.configure(text="Start", command=self.start_worker)
class MyStoppableThread(threading.Thread):
def __init__(self, queue, tk_gui, *args, **kwargs):
super().__init__(*args, **kwargs)
self.queue = queue
self.tk_gui = tk_gui
self._stop_event = threading.Event()
def stop(self):
self._stop_event.set()
def stopped(self):
return self._stop_event.is_set()
def run(self):
task_num = 0
while task_num < 100:
if self.stopped():
self.tk_gui.event_generate("<<MyEventAborted>>", when="now")
print("Stopped")
return
time.sleep(0.1)
task_num += 1
self.queue.put(task_num)
self.tk_gui.event_generate("<<MyEventUpdate>>", when="tail")
self.tk_gui.event_generate("<<MyEventFinished>>", when="tail")
if __name__ == "__main__":
# initialize GUI and tkinter
root = tk.Tk()
GUI(root).grid(column=0, row=0, sticky="NSWE")
# Start GUI
root.mainloop()
</code></pre>
|
<python><python-3.x><multithreading><tkinter><python-multithreading>
|
2023-03-08 16:01:23
| 1
| 731
|
YPOC
|
75,674,970
| 2,836,172
|
Matplotlib Subplots: Subplot gets scaled down so axes do not match
|
<p>I have two numpy arrays which I want to plot above each other.
It works, but the top image gets scaled down, so the axes no longer match. I can "fix" it by setting the figsize to (8, 8), but then there is a big whitespace below the plots.</p>
<p>So my code:</p>
<pre><code>fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
plt.subplots_adjust(hspace=0.2)
ax1.imshow(np_source, cmap='binary')
ax1.set_ylabel('Segmentiertes Bild')
ax1.xaxis.set_major_locator(plt.MultipleLocator(1))
ax1.yaxis.set_major_locator(plt.MultipleLocator(1))
ax2.imshow(achse, cmap='binary')
ax2.set_ylabel('Achse')
ax2.xaxis.set_major_locator(plt.MultipleLocator(1))
ax2.yaxis.set_major_locator(plt.MultipleLocator(1))
ax1.grid(color='dimgrey', which="both", linewidth=1)
ax2.grid(color='dimgrey', which="both", linewidth=1)
ax1.set_anchor('NW')
ax2.set_anchor('NW')
plt.show()
</code></pre>
<p>And here's the image it produces:
<a href="https://i.sstatic.net/0gOU2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0gOU2.png" alt="enter image description here" /></a></p>
<p>How do I align them and remove the white space on the bottom too?</p>
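<p>One combination I'm testing (a sketch with random placeholder arrays, since <code>imshow</code> forces an equal aspect that a fixed grid cell then squeezes):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend for this sketch
import matplotlib.pyplot as plt
import numpy as np
import os
import tempfile

np_source = np.random.rand(6, 12) > 0.5  # placeholders for my arrays
achse = np.random.rand(6, 12) > 0.5

# constrained_layout repacks the axes around the fixed-aspect images;
# layout="compressed" (matplotlib >= 3.6) packs them tighter still.
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, constrained_layout=True)
ax1.imshow(np_source, cmap='binary')
ax2.imshow(achse, cmap='binary')

out_path = os.path.join(tempfile.mkdtemp(), "panels.png")
fig.savefig(out_path, bbox_inches="tight")  # trims the outer whitespace
```

<p>For the saved figure, <code>bbox_inches="tight"</code> removes the bottom whitespace; whether the on-screen window also tightens may still depend on the figsize.</p>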
|
<python><matplotlib>
|
2023-03-08 15:24:38
| 1
| 1,522
|
Standard
|
75,674,794
| 12,845,199
|
Interacting through dicts, grabbing their values and transitioning to a panda df
|
<p>I have a list of dicts</p>
<pre><code>[{'a': 'jeffrey', 'b': 'pineapple', 'c': 'apple'}, {'a': 'epstein', 'c': 'banana'}, {'a': 'didnt kill'}, {'a': 'himself', 'b': 'jebus'}]
</code></pre>
<p>What I want to do is move those values into a pandas df. But as you can see, a few dicts lack some keys and therefore some values. So I took a look at the defaultdict object so I could transform the list into something pandas can actually interpret, and then turn it into a dataframe.</p>
<pre><code>dd = defaultdict(list)
for d in l:
for k in d.keys():
dd[k]
for d in l:
for k in dd.keys():
try:
dd[k].append(d[k])
except KeyError:
dd[k].append(0)
# auto-adaptable dict
</code></pre>
<p>The code works and follows the order of the given events, meaning that when a key is missing it appends a 0. But I was wondering if there is a better alternative, or code with better time complexity.</p>
<p>Wanted result:</p>
<pre><code>defaultdict(<class 'list'>, {'a': ['jeffrey', 'epstein', 'didnt kill', 'himself'], 'b': ['pineapple', 0, 0, 'jebus'], 'c': ['apple', 'banana', 0, 0]})
</code></pre>
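<p>For comparison, the DataFrame constructor can do the key alignment itself; missing keys come out as NaN, which <code>fillna</code> then turns into 0 (a sketch on the same list):</p>

```python
import pandas as pd

l = [{'a': 'jeffrey', 'b': 'pineapple', 'c': 'apple'},
     {'a': 'epstein', 'c': 'banana'},
     {'a': 'didnt kill'},
     {'a': 'himself', 'b': 'jebus'}]

# One pass over the list; the alignment on keys happens inside pandas.
df = pd.DataFrame(l).fillna(0)
```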
|
<python><pandas><collections>
|
2023-03-08 15:08:00
| 3
| 1,628
|
INGl0R1AM0R1
|
75,674,779
| 3,343,378
|
Dash plotly figures jumping around in predefined grid
|
<p>In Dash, the layout of a page can be designed using bootstrap components (see for instance <a href="https://stackoverflow.com/questions/63592900/plotly-dash-how-to-design-the-layout-using-dash-bootstrap-components">Plotly-Dash: How to design the layout using dash bootstrap components?</a>). We can make a 12x12 grid of squares & in my app I make a raster of 4 plotly plots, let's say p1,p2,p3,p4 in the following way:</p>
<pre><code>from dash import dcc
import dash_bootstrap_components as dbc
layout = dbc.Container([
dbc.Row([
dbc.Col([dcc.Graph(p1)], width=6),
dbc.Col([dcc.Graph(p2)], width=6),
]),
dbc.Row([
dbc.Col([dcc.Graph(p3)], width=6),
dbc.Col([dcc.Graph(p4)], width=6)
])
])
</code></pre>
<p>I notice that when I resize my browser slightly, the plots can start jumping around, meaning the 2x2 grid of plots sometimes becomes a 4x1 grid. Is there any way I can prevent Dash from doing that, by resizing the full grid depending on the browser size instead of altering the grid?</p>
<p>MANY THANKS!</p>
|
<python><layout><plotly-dash>
|
2023-03-08 15:06:58
| 1
| 444
|
blah_crusader
|
75,674,633
| 11,918,269
|
Explicit line ranges when parsing YAML in Python
|
<p>I have the following YAML file contents (a single block for example):</p>
<pre class="lang-yaml prettyprint-override"><code>.deploy_container:
tags:
- gcp
image: google/cloud-sdk
services:
- docker:find
variables:
PORT: "$APP_PORT"
script:
- cp $GCP_SERVICE_KEY gcloud-service-key.json # Google Cloud
</code></pre>
<p>I have a wrapper for the default <code>SafeLoader</code>:</p>
<pre class="lang-py prettyprint-override"><code>class SafeLineLoader(SafeLoader):
def construct_mapping(self, node: MappingNode, deep: bool = False) -> dict[Hashable, Any]:
mapping = super().construct_mapping(node, deep=deep)
mapping['__startline__'] = node.start_mark.line + 1
mapping['__endline__'] = node.end_mark.line + 1
return mapping
</code></pre>
<p>Given the file contents and loader, the parsed object I get is:</p>
<pre class="lang-json prettyprint-override"><code>{
".deploy_container": {
"tags": [
"gcp"
],
"image": "google/cloud-sdk",
"services": [
"docker:find"
],
"variables": {
"PORT": "$APP_PORT",
"__startline__": 137,
"__endline__": 138
},
"script": [
"cp $GCP_SERVICE_KEY gcloud-service-key.json"
],
"__startline__": 131,
"__endline__": 144
}
}
</code></pre>
<p>The thing I am struggling with is that string and list values carry no additional info about their start and end lines.
Is there a way to add that metadata to these objects too? Even if it means essentially converting them to dicts.</p>
<p>For example I'd like that instead of:</p>
<pre class="lang-json prettyprint-override"><code>"services": [
"docker:find"
]
</code></pre>
<p>I'll get:</p>
<pre class="lang-json prettyprint-override"><code>"services": {
1: [
"docker:find"
]
__start_line__ : ...
__end_line__: ...
}
</code></pre>
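<p>One direction I'm exploring is registering a custom constructor for sequence nodes, so every list is wrapped in a dict carrying its marks (a sketch; the <code>items</code> key name is my own invention):</p>

```python
import yaml
from yaml import SafeLoader

class SafeLineLoader(SafeLoader):
    def construct_mapping(self, node, deep=False):
        mapping = super().construct_mapping(node, deep=deep)
        mapping['__startline__'] = node.start_mark.line + 1
        mapping['__endline__'] = node.end_mark.line + 1
        return mapping

def construct_seq_with_lines(loader, node):
    # Wrap the sequence in a dict so line info can ride along.
    return {'items': loader.construct_sequence(node, deep=True),
            '__startline__': node.start_mark.line + 1,
            '__endline__': node.end_mark.line + 1}

SafeLineLoader.add_constructor('tag:yaml.org,2002:seq',
                               construct_seq_with_lines)

doc = yaml.load("services:\n  - docker:find\n", Loader=SafeLineLoader)
```

<p>This changes the shape of every list in the result, so downstream code has to read <code>['items']</code> instead of indexing the list directly.</p>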
|
<python><pyyaml>
|
2023-03-08 14:54:24
| 1
| 1,686
|
Eliran Turgeman
|
75,674,380
| 274,460
|
How to relate Python threads to POSIX threads in Python 3.6?
|
<p>Is there a way to relate the Python thread name and thread ID to the POSIX thread ID in Python 3.6? Any of the following would do:</p>
<ul>
<li>Get the POSIX thread ID from a <code>threading.Thread</code> object</li>
<li>Get the Python thread ID from a <code>psutil._common.pthread</code> object</li>
<li>Set the process command-line for the thread (as reported in <code>/proc/<id>/cmdline</code>)</li>
<li>Set some other thread property that shows up in <code>/proc</code> from Python</li>
<li>Something else</li>
</ul>
<p>I'm on Linux 4.14 and an embedded platform that's stuck with Python 3.6, unfortunately. I know that Python 3.8 has <code>threading.Thread.native_id</code>. I have control of the Python process source code and have SSH and root on the system - I can do this from either inside or outside the Python process.</p>
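<p>The most promising route I've found is calling <code>gettid</code> via ctypes from inside each thread (a Linux-only sketch; the syscall number 186 is specific to x86-64, e.g. aarch64 uses 178):</p>

```python
import ctypes
import threading

libc = ctypes.CDLL("libc.so.6", use_errno=True)
SYS_gettid = 186  # x86-64; architecture-specific

def current_native_id():
    # Kernel thread id of the *calling* thread; this is what shows up
    # under /proc/<pid>/task/ and in psutil's pthread listing.
    return libc.syscall(SYS_gettid)

tids = {}

def worker():
    # Record Python ident -> kernel tid from inside the thread itself.
    tids[threading.get_ident()] = current_native_id()

t = threading.Thread(target=worker, name="probe")
t.start()
t.join()
```

<p>Since <code>gettid</code> must run on the thread being identified, each worker would record its own id at startup, giving the Python-name-to-kernel-tid mapping.</p>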
|
<python><multithreading><python-3.6><psutil>
|
2023-03-08 14:31:05
| 0
| 8,161
|
Tom
|
75,674,260
| 4,408,818
|
Shapely removing disconnected geometries after a unary_union of two MultiPolygons
|
<p><strong>edit: The geometries were not disappearing; they were in the interiors property of the Polygon.</strong></p>
<p>I have two MultiPolygons, both of which are made up of disconnected polygons. I need to join them such that they "cut" each other out. The two shapes are these;
<a href="https://i.sstatic.net/8yUPH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8yUPH.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/CJdoV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CJdoV.png" alt="enter image description here" /></a></p>
<p>I perform a unary union to get the shape below;
<a href="https://i.sstatic.net/GiLrj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GiLrj.png" alt="enter image description here" /></a></p>
<p>This removes the "z" shaped wall in the bottom room, which I still need. Is there a way of "cutting" shape 1 with shape 2 such that the "z" shaped room remains? The closest I've got is with unary_union and playing around with the width of the small rectangles, but this is leaving a line between the doors which is not ideal.<a href="https://i.sstatic.net/cO49e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cO49e.png" alt="enter image description here" /></a></p>
<p>I've tried using symmetric_difference, but this also did not work.</p>
<pre><code>un_rm = unary_union(rooms)
un_dr = unary_union(doors)
# plot_multipoly(un_rm, plt)
# plot_multipoly(un_dr, plt)
# nonoverlap = (un_rm.symmetric_difference(un_dr))
un = unary_union([un_rm, un_dr])
</code></pre>
<p>Here's the geometries for the rooms</p>
<pre><code>POLYGON ((85887.4870641205488937 -25151.4419914101599716, 64629.2000917164332350 -25151.4419914101599716, 65757.5078179416013882 -28251.4419914100362803, 79287.4870641207526205 -28251.4419914100362803, 85887.4870641205488937 -28251.4419914100362803, 85887.4870641205488937 -25151.4419914101599716))
POLYGON ((64416.3645372212558868 -25151.4419914101599716, 60087.4870641206362052 -25151.4419914101599716, 60087.4870641206362052 -28251.4419914100362803, 65544.6722634464240400 -28251.4419914100362803, 64416.3645372212558868 -25151.4419914101599716))
POLYGON ((60087.4870641206362052 -28451.4419914100326423, 60087.4870641206362052 -32751.4419914099125890, 64487.4870641206289292 -32751.4419914099125890, 79187.4870641207526205 -32751.4419914099125890, 79187.4870641207526205 -28451.4419914100326423, 65687.4870641206362052 -28451.4419914100326423, 60087.4870641206362052 -28451.4419914100326423))
POLYGON ((85887.4870641205488937 -32751.4419914099125890, 85887.4870641205488937 -28451.4419914100326423, 79387.4870641207380686 -28451.4419914100326423, 79387.4870641207380686 -32751.4419914099125890, 85887.4870641205488937 -32751.4419914099125890))
POLYGON ((85887.4870641205488937 -36951.4419914102036273, 85887.4870641205488937 -32951.4419914099125890, 79287.4870641207526205 -32951.4419914099125890, 64587.4870641206362052 -32951.4419914099125890, 64587.4870641206362052 -34651.4419914099125890, 79387.4870641207380686 -34651.4419914099635207, 79387.4870641207380686 -36951.4419914102036273, 85887.4870641205488937 -36951.4419914102036273))
POLYGON ((60087.4870641206362052 -36951.4419914102036273, 79187.4870641207526205 -36951.4419914102036273, 79187.4870641207526205 -34851.4419914099635207, 64387.4870641206289292 -34851.4419914099125890, 64387.4870641206289292 -32951.4419914099125890, 60087.4870641206362052 -32951.4419914099125890, 60087.4870641206362052 -36951.4419914102036273))
</code></pre>
<p>Here's the door geometries;</p>
<pre><code>POLYGON ((79437.4870641207380686 -36406.4419914101163158, 79437.4870641207380686 -35396.4419914101163158, 79137.4870641207380686 -35396.4419914101163158, 79137.4870641207380686 -36406.4419914101163158, 79437.4870641207380686 -36406.4419914101163158))
POLYGON ((64637.4870641206362052 -34306.4419914099344169, 64637.4870641206362052 -33296.4419914099344169, 64337.4870641206362052 -33296.4419914099344169, 64337.4870641206362052 -34306.4419914099344169, 64637.4870641206362052 -34306.4419914099344169))
POLYGON ((61982.4870641205998254 -33001.4419914099125890, 62992.4870641205998254 -33001.4419914099125890, 62992.4870641205998254 -32701.4419914099125890, 61982.4870641205998254 -32701.4419914099125890, 61982.4870641205998254 -33001.4419914099125890))
POLYGON ((79137.4870641207526205 -31346.4419914098834852, 79137.4870641207526205 -32356.4419914098834852, 79437.4870641207526205 -32356.4419914098834852, 79437.4870641207526205 -31346.4419914098834852, 79137.4870641207526205 -31346.4419914098834852))
POLYGON ((64793.2278058871524991 -26333.0555322997970507, 65138.6681506460663513 -27282.1450792935502250, 65420.5759368818398798 -27179.5390362958569312, 65075.1355921229260275 -26230.4494893020892050, 64793.2278058871524991 -26333.0555322997970507))
POLYGON ((74382.4870641207235167 -28501.4419914100362803, 75392.4870641207235167 -28501.4419914100362803, 75392.4870641207235167 -28201.4419914100362803, 74382.4870641207235167 -28201.4419914100362803, 74382.4870641207235167 -28501.4419914100362803))
</code></pre>
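<p>Since the "missing" wall turned out to live in the result's interiors, a minimal sketch of pulling interiors back out as polygons (toy coordinates, not my room data):</p>

```python
from shapely.geometry import Polygon

# A 10x10 square with a 6x6 hole, standing in for the unioned rooms.
outer = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)],
                holes=[[(2, 2), (8, 2), (8, 8), (2, 8)]])

# Each ring in .interiors is a LinearRing; rebuild it as a Polygon.
inner_rings = [Polygon(ring) for ring in outer.interiors]
```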
|
<python><pandas><geopandas><shapely>
|
2023-03-08 14:20:23
| 1
| 959
|
Ozymandias
|
75,674,206
| 16,389,095
|
KivyMD AttributeError: Content/TwoLineIconListItem object has no attribute '_disabled_count'
|
<p>Starting from an <a href="https://github.com/kivymd/KivyMD/wiki/Components-Expansion-Panel" rel="nofollow noreferrer">example</a> of the KivyMD Expansion Panel, I would like to customize the content of the panel. The example shows the same content for each expansion panel. I would like to set the values of the fields 'text' and 'secondary text'. So I modified the code as follows:</p>
<pre><code>from kivy.lang import Builder
from kivymd.uix.boxlayout import MDBoxLayout
from kivymd.app import MDApp
from kivymd import images_path
from kivymd.uix.expansionpanel import MDExpansionPanel, MDExpansionPanelThreeLine
from kivymd.uix.list import TwoLineIconListItem
KV = '''
MDScreen:
MDBoxLayout:
orientation: "vertical"
MDTopAppBar:
title: "Expansion panel"
elevation: 10
ScrollView:
MDGridLayout:
cols: 1
adaptive_height: True
id: box
'''
class Content(MDBoxLayout, TwoLineIconListItem):
def __init__(self, primaryText, secondaryText):
self.size_hint_y = None
self.height = self.minimum_height
self.text = primaryText
self.secondary_text = secondaryText
class Test(MDApp):
def build(self):
return Builder.load_string(KV)
def on_start(self):
for i in range(10):
myContent = Content('PRIMARY ' + str(i), 'SECONDARY ' + str(i))
self.root.ids.box.add_widget(
MDExpansionPanel(
icon="language-python",
content=myContent,
panel_cls=MDExpansionPanelThreeLine(
text="Text " + str(i),
secondary_text="Secondary text " + str(i),
tertiary_text="Tertiary text " + str(i),
)
)
)
Test().run()
</code></pre>
<p>Unexpectedly, I get this error: <em><strong>AttributeError: 'Content' object has no attribute '_disabled_count'</strong></em>.</p>
<p>In this second version, I didn't get this error anymore, but no content is shown:</p>
<pre><code>from kivy.lang import Builder
from kivymd.uix.boxlayout import MDBoxLayout
from kivymd.app import MDApp
from kivymd import images_path
from kivymd.uix.expansionpanel import MDExpansionPanel, MDExpansionPanelThreeLine
from kivymd.uix.list import TwoLineIconListItem
KV = '''
<Content>
size_hint_y: None
height: self.minimum_height
TwoLineIconListItem:
MDScreen:
MDBoxLayout:
orientation: "vertical"
MDTopAppBar:
title: "Expansion panel"
elevation: 10
ScrollView:
MDGridLayout:
cols: 1
adaptive_height: True
id: box
'''
class Content(MDBoxLayout):
pass
class Test(MDApp):
def build(self):
return Builder.load_string(KV)
def on_start(self):
for i in range(10):
myContent = Content()
myContent.text = 'PRIMARY ' + str(i)
myContent.secondary_text = 'SECONDARY ' + str(i)
self.root.ids.box.add_widget(
MDExpansionPanel(
icon="language-python",
content=myContent,
panel_cls=MDExpansionPanelThreeLine(
text="Text " + str(i),
secondary_text="Secondary text " + str(i),
tertiary_text="Tertiary text " + str(i),
)
)
)
Test().run()
</code></pre>
|
<python><kivy><kivy-language><kivymd>
|
2023-03-08 14:16:14
| 2
| 421
|
eljamba
|
75,674,169
| 12,501,684
|
How to assign text class based on line height (quantize height to discrete H1, H2, H3, p)?
|
<p><strong>TLDR:</strong> I am converting a PDF to MarkDown and I need a heuristic that will allow me to assign styles (H1, H2, H3, regular, caption) to lines based on their heights. Essentially, I have a <code>list[tuple(str, float)]</code> of lines and their heights that I need to convert into a <code>list[tuple(str, int)]</code>, where the integer <code>[1-5]</code> is the style of the line.</p>
<p>I am using <a href="https://pymupdf.readthedocs.io/" rel="nofollow noreferrer"><code>PyMuPDF</code></a> to parse PDF documents and I convert them to a format consumable by an LLM. I decided to convert them to MarkDown because it is plain-text (directly understandable by an LLM), while still having the most crucial structural information about the document (like heading, chapters, etc.).</p>
<p>Firstly, I open the document,</p>
<pre class="lang-py prettyprint-override"><code>import fitz
doc = fitz.open("to_process.pdf")
</code></pre>
<p>I extract <code>dict</code>s for each of its pages,</p>
<pre class="lang-py prettyprint-override"><code>page_datas = []
for page in doc:
text_page = page.get_textpage(flags=fitz.TEXT_MEDIABOX_CLIP)
page_data = text_page.extractDICT(sort=True)
page_datas.append(page_data)
</code></pre>
<p>And I remove non-horizontal lines (as a means of cleaning the document up).</p>
<pre class="lang-py prettyprint-override"><code>for page_data in page_datas:
for block in page_data["blocks"]:
block["lines"] = [line for line in block["lines"] if line["dir"] == (1.0, 0.0)]
</code></pre>
<p>At this point, I can start actually converting the document into MarkDown.</p>
<p>Compared to a PDF, which can have arbitrary styling applied to text, MarkDown distinguishes only a few text classes, such as headings H1-H3. As such, I need to "quantize" the continuously-sized lines into these discrete classes. I decided to create a list of all line heights in the document and based on that, assign them categories. For example, if there are only two lines in the document that have a large size, they are most likely the title. If there are a few lines with a large (but not the largest size) they are probably headings. If most lines' heights fit into a specific range (say <code>[11.8-12.1]</code>) then these are probably lines from the main body of the document. Any smaller lines are probably captions, comments or some other additional information.</p>
<p>I can get a list of all line_heights in the document like this:</p>
<pre class="lang-py prettyprint-override"><code>fitz.TOOLS.set_small_glyph_heights(True)
line_heights = []
for page_data in page_datas:
for block in page_data["blocks"]:
for line in block["lines"]:
line_heights.append(line["bbox"][3] - line["bbox"][1])
</code></pre>
<p>I can round the heights to the nearest <code>0.1</code> and create a "histogram" of them like this:</p>
<pre class="lang-py prettyprint-override"><code>line_heights = [round(height, 1) for height in line_heights]
line_heights = sorted(list(Counter(line_heights).items()), reverse=True)
</code></pre>
<p>Still, this leaves me with a histogram of (in general) an arbitrary number of line heights. I can manually assign heights to categories by looking at the PDF, but different PDFs can have different font size ranges in general. For one PDF, I get:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Height</th>
<th>Count</th>
<th>Manual Category</th>
</tr>
</thead>
<tbody>
<tr>
<td>14.4</td>
<td>1</td>
<td>H1</td>
</tr>
<tr>
<td>14.3</td>
<td>1</td>
<td>H1</td>
</tr>
<tr>
<td>12.8</td>
<td>5</td>
<td>H2</td>
</tr>
<tr>
<td>12.1</td>
<td>1</td>
<td>H2</td>
</tr>
<tr>
<td>12.0</td>
<td>2</td>
<td>H2</td>
</tr>
<tr>
<td>11.7</td>
<td>1</td>
<td>H2</td>
</tr>
<tr>
<td>10.1</td>
<td>15</td>
<td>p</td>
</tr>
<tr>
<td>10.0</td>
<td>24</td>
<td>p</td>
</tr>
<tr>
<td>9.9</td>
<td>9</td>
<td>p</td>
</tr>
<tr>
<td>9.1</td>
<td>5</td>
<td>sup</td>
</tr>
<tr>
<td>9.0</td>
<td>18</td>
<td>sup</td>
</tr>
<tr>
<td>8.9</td>
<td>6</td>
<td>sup</td>
</tr>
</tbody>
</table>
</div>
<p>In the case of this file, there are no H3-H6.</p>
<p>How could I do this programmatically?</p>
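<p>The direction I'm leaning towards is a small clustering pass over the height histogram (a heuristic sketch; the tolerance and the p/sup/H labels are my own guesses to tune per corpus):</p>

```python
from collections import Counter

def assign_styles(line_heights, tol=0.35):
    """Map each rounded height to a style label.

    Heuristic: cluster heights whose neighbours differ by <= tol,
    treat the most frequent cluster as the body ('p'), anything
    smaller as 'sup', and clusters above the body as H1, H2, ...
    from the largest down.
    """
    if not line_heights:
        return {}
    counts = Counter(round(h, 1) for h in line_heights)
    heights = sorted(counts)
    clusters = [[heights[0]]]
    for h in heights[1:]:
        if h - clusters[-1][-1] <= tol:
            clusters[-1].append(h)
        else:
            clusters.append([h])
    body = max(clusters, key=lambda c: sum(counts[h] for h in c))
    above = [c for c in clusters if c[0] > body[-1]]
    styles = {}
    for rank, cluster in enumerate(sorted(above, key=lambda c: -c[-1]), 1):
        for h in cluster:
            styles[h] = f"H{min(rank, 6)}"
    for cluster in clusters:
        if cluster == body:
            label = "p"
        elif cluster[0] < body[0]:
            label = "sup"
        else:
            continue  # heading clusters were labelled above
        for h in cluster:
            styles[h] = label
    return styles

# Rebuild a flat height list from the table above.
counts = {14.4: 1, 14.3: 1, 12.8: 5, 12.1: 1, 12.0: 2, 11.7: 1,
          10.1: 15, 10.0: 24, 9.9: 9, 9.1: 5, 9.0: 18, 8.9: 6}
sample = [h for h, n in counts.items() for _ in range(n)]
styles = assign_styles(sample)
```

<p>With this tolerance the 11.7-12.8 range splits into two heading levels instead of the single H2 I assigned by hand, so the clustering would still need per-document tuning.</p>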
|
<python><pdf><markdown><classification><histogram>
|
2023-03-08 14:12:52
| 1
| 5,075
|
janekb04
|
75,674,114
| 8,176,763
|
airflow cannot find module from imported file
|
<p>I have a directory structure like this:</p>
<pre><code>dags/
βββ airflow_sched.py
βββ backend
β βββ README.md
β βββ requirements.txt
β βββ src
β βββ buffer.py
β βββ config
β β βββ config.py
|-- stage.py
</code></pre>
<p><code>airflow_sched.py</code> is where my dag definition file is:</p>
<pre><code>from datetime import timedelta
import pendulum
from airflow.decorators import dag
from backend.src.stage import stage_data
@dag(
dag_id = "data-sync",
schedule_interval = "1 * * * *",
start_date=pendulum.datetime(2023, 3, 8, tz="Asia/Hong_Kong"),
catchup=False,
dagrun_timeout=timedelta(minutes=20),
)
def Pipeline():
a = stage_data()
</code></pre>
<p>The main point of the code above is the <code>from backend.src.stage import stage_data</code> import. It works fine, but my <code>stage.py</code> file contains this config import:</p>
<pre><code>from config import config
</code></pre>
<p>When issuing <code>airflow dags list-import-errors</code> I get this:</p>
<pre><code>filepath | error
===================================+=====================================================================
/opt/airflow/dags/airflow_sched.py | Traceback (most recent call last):
| File "/opt/airflow/dags/airflow_sched.py", line 4, in <module>
| from backend.src.stage import stage_data
| File "/opt/airflow/dags/backend/src/stage.py", line 5, in <module>
| from config import config
| ModuleNotFoundError: No module named 'config'
</code></pre>
<p>What is wrong with my import here?</p>
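<p>From what I understand about Airflow putting the <code>dags/</code> folder on <code>sys.path</code>, the fix should be to qualify the import from that root inside <code>stage.py</code>, i.e. <code>from backend.src.config import config</code> (or the relative <code>from .config import config</code>). A self-contained sketch that reproduces the layout in a temp dir:</p>

```python
import os
import sys
import tempfile

# Recreate dags/backend/src/config/config.py in a scratch folder.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "backend", "src", "config")
os.makedirs(pkg_dir)
for d in (os.path.join(root, "backend"),
          os.path.join(root, "backend", "src"),
          pkg_dir):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "config.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)  # what Airflow does with the dags folder
from backend.src.config import config  # fully qualified: resolves
```

<p>A bare <code>from config import config</code> fails in the same layout because <code>backend/src</code> itself is never on <code>sys.path</code>.</p>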
|
<python><airflow>
|
2023-03-08 14:07:09
| 1
| 2,459
|
moth
|
75,674,082
| 8,869,570
|
How to find the name of the class that defined a method?
|
<p>I'm trying to get the name of the class that defined a method. The method is passed into a function. Within that function, I printed the type of the method, and it shows</p>
<pre><code><class 'method'>
</code></pre>
<p>but it doesn't tell me what class it belongs to. How can I get the name of the class?</p>
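<p>Two attributes that look promising (a sketch on a toy class):</p>

```python
class Foo:
    def bar(self):
        pass

def defining_class_name(method):
    # __qualname__ is "Foo.bar"; the part before the last dot is the
    # class (or nesting path) where the method was defined.
    return method.__qualname__.rsplit('.', 1)[0]

m = Foo().bar
defined_in = defining_class_name(m)        # class that defined bar
bound_to = m.__self__.__class__.__name__   # class of the bound instance
```

<p>For inherited methods the two differ: <code>__qualname__</code> names the defining class, while <code>__self__.__class__</code> names the instance's class.</p>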
|
<python><types>
|
2023-03-08 14:04:05
| 1
| 2,328
|
24n8
|
75,674,032
| 10,485,625
|
Concat two dataframe with different timedeltas
|
<p>Hello, I am trying to concat two dataframes based on this logic:
I have a first df <code>df_minutes</code> that contains data for each minute and a second <code>df_hourly</code> that contains hourly data.</p>
<p>I need to add the column RSI-1H of my 2nd dataframe to the first and get the same value 56 for all the rows from 00:00 to 00:59; 58 for all the rows from 01:00 to 01:59, etc ...</p>
<p><strong>df_minutes</strong></p>
<pre><code> index unix symbol open high low
date
2022-01-01 00:00:00 525599 1640995200000 AVAXUSDT 109.43 109.88 109.42
2022-01-01 00:01:00 525598 1640995260000 AVAXUSDT 109.75 110.09 109.58
2022-01-01 00:02:00 525597 1640995320000 AVAXUSDT 109.98 110.16 109.87
2022-01-01 00:03:00 525596 1640995380000 AVAXUSDT 110.06 110.11 109.74
2022-01-01 00:04:00 525595 1640995440000 AVAXUSDT 109.76 110.06 109.72
...
2022-01-01 01:08:00 525591 1640995680000 AVAXUSDT 110.24 110.30 110.17
2022-01-01 01:09:00 525590 1640995740000 AVAXUSDT 110.18 110.29 110.02
....
2022-01-01 02:18:00 525591 1640995680000 AVAXUSDT 110.24 110.30 110.17
2022-01-01 02:19:00 525590 1640995740000 AVAXUSDT 110.18 110.29 110.02
</code></pre>
<p><strong>df_hourly</strong></p>
<pre><code> RSI-1H
date
2022-01-01 00:00:00 56
2022-01-01 01:00:00 58
2022-01-01 02:00:00 59
2022-01-01 03:00:00 60
2022-01-01 04:00:00 59
</code></pre>
<p>I need to obtain this result :</p>
<p><strong>final_df</strong></p>
<pre><code> index unix symbol open high low RSI-1H
date
2022-01-01 00:00:00 525599 1640995200000 AVAXUSDT 109.43 109.88 109.42 56
2022-01-01 00:01:00 525598 1640995260000 AVAXUSDT 109.75 110.09 109.58 56
2022-01-01 00:02:00 525597 1640995320000 AVAXUSDT 109.98 110.16 109.87 56
2022-01-01 00:03:00 525596 1640995380000 AVAXUSDT 110.06 110.11 109.74 56
2022-01-01 00:04:00 525595 1640995440000 AVAXUSDT 109.76 110.06 109.72 56
...
2022-01-01 01:08:00 525591 1640995680000 AVAXUSDT 110.24 110.30 110.17 58
2022-01-01 01:09:00 525590 1640995740000 AVAXUSDT 110.18 110.29 110.02 58
....
2022-01-01 02:18:00 525591 1640995680000 AVAXUSDT 110.24 110.30 110.17 59
2022-01-01 02:19:00 525590 1640995740000 AVAXUSDT 110.18 110.29 110.02 59
</code></pre>
<p>Thank you</p>
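<p>A sketch of one way to get this alignment, using <code>pandas.merge_asof</code> on a reduced, hypothetical version of the two frames (only a couple of columns and rows are reproduced): each minute row is matched with the most recent hourly row at or before it.</p>

```python
import pandas as pd

minutes = pd.DataFrame(
    {"open": [109.43, 109.75, 110.24, 110.18]},
    index=pd.to_datetime(["2022-01-01 00:00", "2022-01-01 00:01",
                          "2022-01-01 01:08", "2022-01-01 02:18"]),
)
hourly = pd.DataFrame(
    {"RSI-1H": [56, 58, 59]},
    index=pd.to_datetime(["2022-01-01 00:00", "2022-01-01 01:00",
                          "2022-01-01 02:00"]),
)

# merge_asof (direction='backward' by default) matches each minute row
# to the most recent hourly row; both indexes must be sorted.
out = pd.merge_asof(minutes, hourly, left_index=True, right_index=True)
print(out["RSI-1H"].tolist())  # [56, 56, 58, 59]
```

<p>An alternative with the same effect is <code>hourly.reindex(minutes.index, method='ffill')</code>.</p>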
|
<python><dataframe><merge><concatenation>
|
2023-03-08 13:59:23
| 3
| 589
|
userHG
|
75,673,818
| 2,532,296
|
parsing words from a specific line
|
<p>Following is a Python implementation to extract specific fields from the input file below.
For the line starting with <code>MEMRANGE</code>, I would like to extract the values corresponding to keys <code>BASEVALUE</code>, <code>INSTANCE</code> and <code>SLAVEBUSINTERFACE</code> and print the result in the following format.
<code>value[INSTANCE]</code>.<code>value[SLAVEBUSINTERFACE]</code> <code>\tab</code> =<code>value[BASEVALUE]</code></p>
<p>Similarly, for the line starting with <code>PORT</code>, I would like to extract the values corresponding to keys <code>CLKFREQUENCY</code> and <code>SIGNAME</code> and print the result as <code>value[SIGNAME]</code> <code>\tab</code> =<code>value[CLKFREQUENCY]</code></p>
<p><em>input file</em></p>
<pre><code> <MEMRANGE ADDRESSBLOCK="HP0_DDR_LOW" BASENAME="C_BASEADDR" BASEVALUE="0x00000000" HIGHNAME="C_HIGHADDR" HIGHVALUE="0x3FFFFFFF" INSTANCE="trimap_0" IS_DATA="TRUE" IS_INSTRUCTION="TRUE" MASTERBUSINTERFACE="M_AXI" MEMTYPE="MEMORY" SLAVEBUSINTERFACE="S_AXI_HP0_hmo"/>
<PORT CLKFREQUENCY="90005550" DIR="I" NAME="ACLK" SIGIS="clk" SIGNAME="SYS_clk_wiz_0_clk_out1">
<MEMRANGE ADDRESSBLOCK="HP0_DDR_LOW" BASENAME="C_BASEADDR" BASEVALUE="0x00000000" HIGHNAME="C_HIGHADDR" HIGHVALUE="0x3FFFFFFF" INSTANCE="trimap_0" IS_DATA="TRUE" IS_INSTRUCTION="TRUE" MASTERBUSINTERFACE="M_AXI_DRAM0" MEMTYPE="MEMORY" SLAVEBUSINTERFACE="S_AXI_HP0_hmo"/>
<PORT CLKFREQUENCY="90005550" DIR="I" NAME="ACLK" SIGIS="clk" SIGNAME="SYS_clk_wiz_0_clk_out1">
</code></pre>
<p>I have tested the individual functions and they work fine. When I try to write the output of the functions to the output file, only the first function is executed and the second function doesn't execute.</p>
<pre><code>import re

def populate_ip_address(input_file, output_file):
    print(" > writing ip address")
    for line in input_file:
        match_address = re.match(r'.*MEMRANGE .*BASEVALUE="(\w+)\".* INSTANCE="(\w+)\".* SLAVEBUSINTERFACE="(\w+)\"', line)
        if match_address:
            newline1 = "\n%s.%s\t\t\t\t\t= %s" % (match_address.group(2), match_address.group(3), match_address.group(1))
            output_file.write(newline1)

def populate_clock_frequency(input_file, output_file):
    print(" > writing clock_frequency")
    for line in input_file:
        match_clock = re.match(r'.*PORT .*CLKFREQUENCY="(\w+)\".* SIGNAME="(\w+)\"', line)
        if match_clock:
            print(" >> writing clock_frequency")
            newline2 = "\n%s \t= %s" % (match_clock.group(2), match_clock.group(1))
            output_file.write(newline2)

input_file = open("test.txt", "r")
with open('new.txt', 'w') as output_file:
    populate_clock_frequency(input_file, output_file)
    populate_ip_address(input_file, output_file)
</code></pre>
<p>expected</p>
<pre><code>SYS_clk_wiz_0_clk_out1 = 90005550
SYS_clk_wiz_0_clk_out1 = 90005550
trimap_0.S_AXI_HP0_hmo = 0x00000000
trimap_0.S_AXI_HP0_hmo = 0x00000000
</code></pre>
<p>current output</p>
<pre><code>SYS_clk_wiz_0_clk_out1 = 90005550
SYS_clk_wiz_0_clk_out1 = 90005550
</code></pre>
<p>Am I doing something wrong in the file access?
I am also open to answers doing the above parsing in sed, awk or grep.</p>
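<p>The symptom described (only the first function writes output) is consistent with the file iterator being exhausted by the first pass, so the second pass sees no lines. The sketch below reproduces that behavior with an in-memory file and shows <code>seek(0)</code> as one fix (reading the lines into a list once is another):</p>

```python
import io

# Miniature of the bug: the first pass consumes the file iterator,
# so a second pass over the same handle yields nothing.
f = io.StringIO("line1\nline2\n")
first = [line for line in f]   # consumes the iterator
second = [line for line in f]  # empty: nothing left to read
f.seek(0)                      # rewind the handle
third = [line for line in f]   # full contents again
print(len(first), len(second), len(third))  # 2 0 2
```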
|
<python><parsing><awk><sed><grep>
|
2023-03-08 13:40:45
| 3
| 848
|
user2532296
|
75,673,794
| 13,039,962
|
How to transpose a table keeping their columns headers?
|
<p>I have a df like this:</p>
<pre><code>+----+----+----+----+
| December| January|
+----+----+----+----+
1 1
2 2
. .
31 31
</code></pre>
<p>You can build it with this code:</p>
<pre><code>numbers = []
for i in range(1, 32):
    numbers.append(i)

df = pd.DataFrame({
    'December': numbers,
    'January': numbers})
</code></pre>
<p>Size: (31,2)</p>
<p>I want to plot a table like this:</p>
<pre><code>+--------------+--------------+
| December | January |
+----+----+----+----+----+----+
| 1 2 3.. 31 | 1 2 3.. 31 |
+----+----+----+----+----+----+
</code></pre>
<p>The picture is only an example with 5 days of values.
<a href="https://i.sstatic.net/kSoKs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kSoKs.png" alt="enter image description here" /></a></p>
<p>I tried this code:</p>
<p><a href="https://stackoverflow.com/questions/69250999/how-to-change-a-pandas-dataframe-into-a-column-multi-index">How to change a Pandas DataFrame into a column Multi-Index?</a></p>
<p>But with no results</p>
<p>I would appreciate any solution.</p>
<p>Thanks in advance.</p>
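<p>For reference, a small sketch of one way to build the one-row table with month headers on top, using <code>unstack</code> to create the MultiIndex columns (shown with 5 days, as in the picture):</p>

```python
import pandas as pd

numbers = list(range(1, 6))          # 5 days for brevity
df = pd.DataFrame({'December': numbers, 'January': numbers})

# unstack turns the frame into a Series indexed by (column, row);
# transposing its one-column frame yields a single row whose columns
# are a MultiIndex: month on top, day position underneath.
row = df.unstack().to_frame().T
print(row[('January', 0)].iloc[0])   # 1
```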
|
<python><pandas>
|
2023-03-08 13:38:48
| 1
| 523
|
Javier
|
75,673,679
| 9,225,671
|
python signxml - InvalidDigest if there is a "xmlns" namespace declaration in the root element
|
<p>I need to sign a XML file.
The signature process works in both cases (with and without the namespace declared in the root element), but the signature verification fails when there is the specific <code>xmlns</code> namespace declared.</p>
<p>Also, when I have less nested levels, then there is no error in both cases, but I need the nesting (the final XML file will have even more nested levels than this example).</p>
<p>The XML files need to be submitted to a government agency (Country Paraguay, SIFEN / EKUATIA, Sistema Nacional de Facturacion ElectrΓ³nica) and the validation fails if I exclude the namespace from the file.</p>
<p>Here is my code to replicate the error; I am using <code>Ubuntu 20.04</code>, <code>Python 3.8.10</code>, <code>lxml==4.9.2</code> and <code>signxml==2.10.1</code>.</p>
<pre class="lang-py prettyprint-override"><code>def test(with_ns):
print('### parsing XML from string')
ns = ''
if with_ns:
ns = 'xmlns="http://ekuatia.set.gov.py/sifen/xsd"'
cdc = '0123'*11
root = etree.fromstring(
'<rDE {} xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://ekuatia.set.gov.py/sifen/xsd/siRecepDE_v150.xsd">'
'<dVerFor>1</dVerFor>'
'<DE Id="{}">'
'<dDVId>9</dDVId>'
'<gOpeDE>'
'<iTipEmi>1</iTipEmi>'
'</gOpeDE>'
'</DE>'
'</rDE>'.format(ns, cdc))
output_bytes = etree.tostring(root, pretty_print=True)
print(output_bytes.decode())
print()
print('### signing XML')
with open(os.path.join(DIR_PATH, 'tb_pub_key.pem')) as f:
pub_key = f.read()
with open(os.path.join(DIR_PATH, 'tb_priv_key.pem')) as f:
priv_key = f.read()
signer = signxml.XMLSigner(
c14n_algorithm='http://www.w3.org/TR/2001/REC-xml-c14n-20010315')
signer.namespaces = {
None: signxml.namespaces.ds,
}
root = signer.sign(
root,
key=priv_key,
cert=pub_key,
reference_uri=cdc)
output_bytes = etree.tostring(root, pretty_print=True)
print(output_bytes.decode())
print()
print('### verifying signature')
result = signxml.XMLVerifier().verify(
root,
ca_pem_file=os.path.join(DIR_PATH, 'bundle-doc-mic.pem'))
output_bytes = etree.tostring(result.signed_xml, pretty_print=True)
print(output_bytes.decode())
</code></pre>
<p>The code runs successfully without the namespace:</p>
<pre><code>>>> test(with_ns=False)
### parsing XML from string
<rDE xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://ekuatia.set.gov.py/sifen/xsd/siRecepDE_v150.xsd">
<dVerFor>1</dVerFor>
<DE Id="01230123012301230123012301230123012301230123">
<dDVId>9</dDVId>
<gOpeDE>
<iTipEmi>1</iTipEmi>
</gOpeDE>
</DE>
</rDE>
### signing XML
<rDE xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://ekuatia.set.gov.py/sifen/xsd/siRecepDE_v150.xsd">
<dVerFor>1</dVerFor>
<DE Id="01230123012301230123012301230123012301230123">
<dDVId>9</dDVId>
<gOpeDE>
<iTipEmi>1</iTipEmi>
</gOpeDE>
</DE>
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignedInfo>
<CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
<SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
<Reference URI="#01230123012301230123012301230123012301230123">
<Transforms>
<Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
</Transforms>
<DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<DigestValue>kKbZeA/kIplpYlTuynW8SsjEovYlkDW4JdiST9qH6s8=</DigestValue>
</Reference>
</SignedInfo>
<SignatureValue>...</SignatureValue>
<KeyInfo>
<X509Data>
<X509Certificate>...</X509Certificate>
</X509Data>
</KeyInfo>
</Signature>
</rDE>
### verifying signature
<DE xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Id="01230123012301230123012301230123012301230123">
<dDVId>9</dDVId>
<gOpeDE>
<iTipEmi>1</iTipEmi>
</gOpeDE>
</DE>
</code></pre>
<p>But when adding the namespace, I get this error:</p>
<pre><code>>>> test(with_ns=True)
### parsing XML from string
<rDE xmlns="http://ekuatia.set.gov.py/sifen/xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://ekuatia.set.gov.py/sifen/xsd/siRecepDE_v150.xsd">
<dVerFor>1</dVerFor>
<DE Id="01230123012301230123012301230123012301230123">
<dDVId>9</dDVId>
<gOpeDE>
<iTipEmi>1</iTipEmi>
</gOpeDE>
</DE>
</rDE>
### signing XML
<rDE xmlns="http://ekuatia.set.gov.py/sifen/xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://ekuatia.set.gov.py/sifen/xsd/siRecepDE_v150.xsd">
<dVerFor>1</dVerFor>
<DE Id="01230123012301230123012301230123012301230123">
<dDVId>9</dDVId>
<gOpeDE>
<iTipEmi>1</iTipEmi>
</gOpeDE>
</DE>
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignedInfo>
<CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
<SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
<Reference URI="#01230123012301230123012301230123012301230123">
<Transforms>
<Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
</Transforms>
<DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<DigestValue>1PsVs9lH4Yx0i/2ler3MaIUFP4R9yhTm6uksVkh72Sk=</DigestValue>
</Reference>
</SignedInfo>
<SignatureValue>...</SignatureValue>
<KeyInfo>
<X509Data>
<X509Certificate>...</X509Certificate>
</X509Data>
</KeyInfo>
</Signature>
</rDE>
### verifying signature
Traceback (most recent call last):
File "/snap/pycharm-professional/331/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 1, in <module>
File "/home/ralf/PycharmProjects/tb_system_01/base/sifen/base.py", line 111, in test_2
result = signxml.XMLVerifier().verify(
File "/home/ralf/PycharmProjects/tb_system_01/.venv/lib/python3.8/site-packages/signxml/__init__.py", line 981, in verify
raise InvalidDigest("Digest mismatch for reference {}".format(len(verify_results)))
signxml.exceptions.InvalidDigest: Digest mismatch for reference 0
</code></pre>
<p>Any ideas why this could be happening? Or how this could be solved?</p>
|
<python>
|
2023-03-08 13:29:39
| 1
| 16,605
|
Ralf
|
75,673,613
| 1,939,730
|
Sklearn's TargetEncoder not dealing properly with category with 1 observation
|
<p>Given this piece of code</p>
<pre><code>from category_encoders import TargetEncoder
X = ["a", "a", "a", "b", "b", "c"]
y = [5, 5, 5, 6, 7, 4]
encoder = TargetEncoder(smoothing = 0, min_samples_leaf = 0)
print(encoder.fit_transform(X, y))
</code></pre>
<p>I would expect the output to be</p>
<pre><code>[5, 5, 5, 6.5, 6.5, 4]
</code></pre>
<p>But for some reason it returns</p>
<pre><code>[5, 5, 5, 6.5, 6.5, 5.333]
</code></pre>
<p>That is, category 'c' is being encoded to 5.33 which is the average of all the values.</p>
<p>My question is: shouldn't the category 'c' be encoded to 4, since I set no smoothing and no min_samples_leaf?</p>
<p>Same result happens when min_samples_leaf = 1 and/or smoothing > 0. No matter which value of smoothing I set, category 'c' is always transformed to the global mean.</p>
<p>I'm using v2.5.0 of category_encoders.</p>
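<p>For comparison, the unsmoothed target mean can be computed directly with pandas; this produces the per-category means the question expected, with the single-observation category 'c' mapped to its own mean of 4 rather than the global mean. Whether <code>category_encoders</code> should behave that way for <code>min_samples_leaf=0</code> is the open question; the snippet only shows the unregularized baseline:</p>

```python
import pandas as pd

X = ["a", "a", "a", "b", "b", "c"]
y = [5, 5, 5, 6, 7, 4]

# Plain per-category target mean: no smoothing, no global-mean fallback.
encoded = pd.Series(y).groupby(pd.Series(X)).transform("mean")
print(encoded.tolist())  # [5.0, 5.0, 5.0, 6.5, 6.5, 4.0]
```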
|
<python><scikit-learn>
|
2023-03-08 13:23:21
| 1
| 567
|
Carlos Navarro AstiasarΓ‘n
|
75,673,609
| 1,946,052
|
Equivalent of xsum for pySCIPOpt
|
<p>What is the equivalent of <code>xsum</code> from Python-MIP in PySCIPOpt to sum arrays of variables in constraints?</p>
|
<python><optimization><scip>
|
2023-03-08 13:22:34
| 1
| 2,283
|
Michael Hecht
|
75,673,581
| 11,167,163
|
How to use ROLLING in Python ? Error : ValueError: window must be an integer 0 or greater
|
<p>I have the following code :</p>
<pre><code>DF['AA'].rolling('90D').apply(calc_performance)
</code></pre>
<p>Where <code>calc_performance</code></p>
<pre><code>def calc_performance(x):
    if len(x) < 3:
        return np.nan
    else:
        return (x[-1] / x[0]) - 1
</code></pre>
<p>But it does throw an error :</p>
<pre><code>ValueError: window must be an integer 0 or greater
</code></pre>
<p>My Dataframe is made as follow :</p>
<pre><code> AA
2009-12-31 107.50
2010-01-04 106.40
2010-01-05 102.50
2010-01-06 106.90
2010-01-07 104.40
... ...
2023-03-01 121.08
2023-03-02 124.12
2023-03-03 122.52
2023-03-06 128.19
2023-03-07 133.33
</code></pre>
<p>What am I doing wrong?</p>
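<p>One likely cause, sketched below on a tiny hypothetical frame: an offset string such as <code>'90D'</code> is only accepted when the axis being rolled over is a <code>DatetimeIndex</code> (or a datetime column passed via <code>on=</code>); with a plain string/object index pandas raises the "window must be an integer" error. Converting the index first makes the offset window work:</p>

```python
import pandas as pd

df = pd.DataFrame({'AA': [1.0, 2.0, 3.0]},
                  index=['2020-01-01', '2020-01-02', '2020-01-05'])

# df['AA'].rolling('2D') would raise "window must be an integer" here,
# because the index holds strings, not timestamps.
df.index = pd.to_datetime(df.index)

out = df['AA'].rolling('2D').sum()
print(out.tolist())  # [1.0, 3.0, 3.0]
```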
|
<python><pandas>
|
2023-03-08 13:19:44
| 1
| 4,464
|
TourEiffel
|
75,673,570
| 3,437,787
|
How to expand the number of dataframe rows based on column titles?
|
<p>I've got a dataframe of the form:</p>
<pre><code> name 0 1 2
0 A 4 2 1
1 B 2 3 4
2 C 1 3 2
</code></pre>
<p>This is the result of grouping and summarizing data earlier in my real world data process.
What I would like to do now is to expand (explode?) each row so that each element in that row fills a number of rows corresponding to the column title, except for <code>name</code>, so that the dataframe ends up like this:</p>
<pre><code>name 0 1 2
------------------
A 0 1 2
A 0 1 nan
A 0 nan nan
A 0 nan nan
B 0 1 2
B 0 1 2
B nan 1 2
B nan nan 2
C 0 1 2
C nan 1 2
C nan 1 nan
</code></pre>
<p>I've tried a bunch of stuff with <code>df.iterrows()</code>, assigning new columns to an empty dataframe whose column lengths match the <code>max</code> of each row and filling up with <code>nans</code>, but it ended up buggy and really messy. It would be great if any of you pandas experts could take a look. Thank you for any suggestions!</p>
<h3>Reproducible dataframe</h3>
<pre><code>import pandas as pd
df = pd.DataFrame({'name': ['A','B','C'], 0:[4,2,1], 1:[2,3,3], 2:[1,4,2]})
</code></pre>
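<p>A sketch of one way to build the expanded frame: for each row, repeat each column label by its count and pad with NaN up to the row's maximum count, then concatenate the per-row blocks (the helper name <code>expand</code> is mine):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'name': ['A', 'B', 'C'], 0: [4, 2, 1], 1: [2, 3, 3], 2: [1, 4, 2]})

def expand(row):
    counts = row.drop('name')
    n = int(counts.max())
    # each column label is repeated `count` times, padded with NaN
    data = {c: [c] * int(v) + [np.nan] * (n - int(v)) for c, v in counts.items()}
    block = pd.DataFrame(data)
    block.insert(0, 'name', row['name'])
    return block

result = pd.concat([expand(r) for _, r in df.iterrows()], ignore_index=True)
print(len(result))  # 11 rows: 4 for A, 4 for B, 3 for C
```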
|
<python><pandas>
|
2023-03-08 13:19:05
| 2
| 62,064
|
vestland
|
75,673,363
| 4,210,662
|
Typescript Uint8Array, Uint16Array Python equivalent
|
<pre><code>new Uint8Array(new Uint16Array([64]).buffer)
</code></pre>
<p>How can I achieve same result structure with pure Python? What is an equivalent of Uint8Array/Uint16Array?</p>
<p>I'm getting the buffer from a Uint16Array here and casting it to a Uint8Array; however, I'm not sure how I can achieve similar behavior in Python. I tried playing with bytearray, but the result looks different to me.</p>
<p>[EDIT]</p>
<pre><code>const uint16 = new Uint16Array([32 + 32]);
const uint16buffer = uint16.buffer;
console.log('uint16', uint16);
console.log('uint16buffer', uint16buffer);
const uint8 = new Uint8Array(uint16buffer);
console.log('uint8', uint8);
const data = new Uint8Array(new Uint16Array([32 + 32]).buffer);
console.log('data', data);
</code></pre>
<p>Returns output like below:</p>
<pre><code>uint16 Uint16Array(1) [ 64 ]
uint16buffer ArrayBuffer { [Uint8Contents]: <40 00>, byteLength: 2 }
uint8 Uint8Array(2) [ 64, 0 ]
data Uint8Array(2) [ 64, 0 ]
</code></pre>
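<p>For a pure-stdlib equivalent, <code>struct</code> can perform the same byte-level reinterpretation: the JS typed arrays share one buffer, and in Python you pack the 16-bit value and read the raw bytes, which iterate as ints:</p>

```python
import struct

# Equivalent of: new Uint8Array(new Uint16Array([64]).buffer)
# '<H' = little-endian unsigned 16-bit, matching typical JS platforms.
buf = struct.pack('<H', 64)   # b'\x40\x00'
uint8 = list(buf)             # bytes iterate as ints in Python 3
print(uint8)                  # [64, 0]
```

<p>If a third-party dependency is acceptable, numpy is the closest structural analogue: <code>np.array([64], dtype=np.uint16).view(np.uint8)</code>.</p>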
|
<python><typescript><uint32>
|
2023-03-08 13:00:20
| 1
| 794
|
Tatarinho
|
75,673,304
| 12,924,562
|
How to get Mastodon direct messages using Mastodon.py
|
<p>I am trying to get direct messages from my Mastodon account using Mastodon.py. I am able to get everything on the timeline, but can't seem to return direct messages. Since direct messages are separate from the timeline (on the web interface), I am assuming there is some function other than <code>timeline()</code> I should be using. I looked into the <code>status()</code> function but I would need to already have the ID of the message to use it.</p>
<p>I have been searching the API documentation, but there doesn't appear to be any function or method that is obvious to me.</p>
<p>My function currently looks like this:</p>
<pre><code>def get_direct_messages(mastodon):
    # Get all status messages from the user's timeline
    statuses = mastodon.timeline()
    print(statuses)
    # Filter the status messages to only include direct messages
    direct_messages = [status for status in statuses if status["visibility"] == "direct"]
    print(direct_messages)
    return direct_messages
</code></pre>
<p>This will print out all the messages on the timeline, so I know the connection and everything is valid, but I'm just not doing something right.
<code>print(statuses)</code> shows timeline messages only, so I am sure this isn't the right way to access direct messages.</p>
|
<python><dm><mastodon>
|
2023-03-08 12:53:50
| 1
| 386
|
Rick Dearman
|
75,673,294
| 5,295,778
|
How to run command with docker that saves data on local machine
|
<p>My intention is to dump data from a Django app (running inside a Docker container) directly to the local disk, not inside the container.
Any ideas how to do it?
I tried this:</p>
<pre><code>docker exec -it app bash -c "python manage.py dumpdata > data.json"
</code></pre>
<p>but the data is saved inside container.</p>
|
<python><django><linux><docker>
|
2023-03-08 12:52:41
| 4
| 526
|
cristian
|
75,673,278
| 1,652,954
|
OSError: [Errno 24] Too many open files when using With....as: expression
|
<p>Although I am using a <code>with ... as:</code> statement to guarantee that the pool, and consequently all of its processes, is closed, when I run the code I still receive the error posted below.
To solve this issue, I checked several Stack Overflow posts and a tutorial called "SuperFast Python"; they all agree that using <code>with ... as:</code> closes the pool automatically.</p>
<p>Please let me know why I am still getting that error despite using the <code>with ... as:</code> statement.</p>
<p><strong>Code</strong>:</p>
<pre><code>with Pool(processes=self.__processCount, initializer=self.initPool,
          initargs=(arg0, arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, arg9)) as GridCells10mX10mIteratorPool.__pool:
    self.__chunkSize = PoolUtils.getChunkSizeForLenOfIterables(
        lenOfIterablesList=self.__maxNumOfCellsVertically * self.__maxNumOfCellsHorizontally,
        cpuCount=self.__cpuCount)
    iterables = self.__yieldIterables()
    logger.info(f"GridCells10mX10mIteratorPool.self.__chunkSize(task per processor):{self.__chunkSize}")
    for res in GridCells10mX10mIteratorPool.__pool.map(func=self.run, iterable=iterables, chunksize=self.__chunkSize):
        ...
        ...
        ...
    GridCells10mX10mIteratorPool.__pool.close()
    GridCells10mX10mIteratorPool.__pool.terminate()
    GridCells10mX10mIteratorPool.__pool.join()

def __yieldIterables(self):
    cnt = -1
    for i in range(GridCells10mX10mIteratorModel.getStrtRowIdx(), GridCells10mX10mIteratorModel.getMaxSpanInHeight(), GridCells10mX10mIteratorModel.getVerticalStep()):
        for j in range(GridCells10mX10mIteratorModel.getStrtColIdx(), GridCells10mX10mIteratorModel.getMaxSpanInWidth(), GridCells10mX10mIteratorModel.getHorizontalStep()):
            cnt += 1
            yield (i, j, cnt)
</code></pre>
<p><em><strong>error</strong></em>:</p>
<pre><code> File "/usr/lib/python3.8/multiprocessing/context.py", line 119, in Pool
File "/usr/lib/python3.8/multiprocessing/pool.py", line 212, in __init__
File "/usr/lib/python3.8/multiprocessing/pool.py", line 303, in _repopulate_pool
File "/usr/lib/python3.8/multiprocessing/pool.py", line 326, in _repopulate_pool_static
File "/usr/lib/python3.8/multiprocessing/process.py", line 121, in start
File "/usr/lib/python3.8/multiprocessing/context.py", line 277, in _Popen
File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 69, in _launch
OSError: [Errno 24] Too many open files
</code></pre>
|
<python><multiprocessing><python-multiprocessing>
|
2023-03-08 12:51:14
| 0
| 11,564
|
Amrmsmb
|
75,673,074
| 3,099,733
|
A pythonic way to iterate combination of items in different categories?
|
<p>Suppose there are 2 categories of items: <code>cat_1 = [1, 2], cat_2 = ['a', 'b']</code>; then I should get <code>(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')</code>. And if there is one more category of items, <code>cat_3 = ['x', 'y']</code>, then the result should be <code>(1, 'a', 'x'), (1, 'a', 'y'), ... (2, 'b', 'y')</code>.</p>
<p>It's easy to get all combination if the number of categories is a constant value, one can just get all combination by</p>
<pre class="lang-py prettyprint-override"><code>[(i,j) for i in cat_1 for j in cat_2]
</code></pre>
<p>But I don't know the pythonic way to implement a function that generates all possible combinations across a variable number of categories.</p>
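<p>For the record, the standard-library answer to this is <code>itertools.product</code>, which accepts any number of iterables, so the category count can vary at runtime:</p>

```python
from itertools import product

categories = [[1, 2], ['a', 'b'], ['x', 'y']]

# Unpacking the list lets the number of categories vary at runtime.
combos = list(product(*categories))
print(combos[0], combos[-1])  # (1, 'a', 'x') (2, 'b', 'y')
```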
|
<python>
|
2023-03-08 12:32:16
| 1
| 1,959
|
link89
|
75,672,901
| 13,158,157
|
Algorithm for insight into string pattern matching
|
<p>My question is general but I struggle to find proper articles for that so any reading leads will be much appreciated!</p>
<p>I have two dataframes to join by Id that should be formed from multiple different columns. For example:</p>
<pre><code>df.Id = df.A + df.B + df.C
df2.Id= df2.A2 + df2.D2 + df2.E2 +df2.F2
</code></pre>
<p>There are multiple cases where such simple column concatenation won't produce a matching Id, and you cannot use multiple columns as the merge key because the two sides have different numbers of columns and the Id often has variable parts.</p>
<p>To complete the task I had to manually discover and establish exception rules.
For example (if column df2.A2 contains a dash you need to omit it; there are other, more convoluted rules):</p>
<pre><code>df2.loc[df.A2.str.contains('-'),'Id'] = df2.D2 + df2.E2 +df2.F2
</code></pre>
<p>This got me thinking:</p>
<ol>
<li>Are there ways I could speed up finding and/or visualizing such sub-populations (maybe something like clustering) ?</li>
<li>Are there algorithms that can aid into identifying those sub-population pattern divergencies (<code>if df2.A2 contains "-"</code>) and ways those divergencies can be fixed (<code>omit A2 to get a proper pattern</code>) ?</li>
</ol>
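<p>On point 2, the exception rules themselves can at least be expressed declaratively and vectorised, e.g. with <code>numpy.where</code> (or <code>numpy.select</code> for many rules). The frame below is hypothetical, following the question's column names:</p>

```python
import numpy as np
import pandas as pd

# Hypothetical frame; column names follow the question.
df2 = pd.DataFrame({
    'A2': ['foo', 'ba-r'],
    'D2': ['d1', 'd2'],
    'E2': ['e1', 'e2'],
    'F2': ['f1', 'f2'],
})

# Vectorised exception rule: drop A2 when it contains a dash,
# otherwise keep the full concatenation.
df2['Id'] = np.where(df2['A2'].str.contains('-'),
                     df2['D2'] + df2['E2'] + df2['F2'],
                     df2['A2'] + df2['D2'] + df2['E2'] + df2['F2'])
print(df2['Id'].tolist())  # ['food1e1f1', 'd2e2f2']
```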
|
<python><algorithm><cluster-analysis>
|
2023-03-08 12:14:07
| 0
| 525
|
euh
|
75,672,527
| 492,034
|
Apply excel formulas with xlsxwriter (or another engine)
|
<p>I create an Excel file with some formulas using pandas and xlsxwriter, and from that file I have to create another one. Thus I need the formulas to be applied and the resulting values set on load.
On Windows the best solution I've found is the following:</p>
<pre><code>def __applyExcelFormulasOrPromptToDoItManually(self, filename):
    if sys.platform == 'win32':
        # NOTE: In order to get the real values of the columns, we have to apply formulas.
        # The easiest way is to open the xlsx with excel, save it and close it
        excel = win32.gencache.EnsureDispatch('Excel.Application')
        workbook = excel.Workbooks.Open(filename)
        workbook.Save()
        workbook.Close()
        excel.Quit()
    else:
        input('Please open the generated file with Excel, save it, close it and press enter...')
</code></pre>
<p>Now I'm moving this to a cron job on AWS that runs on Linux and cannot have human interaction.
How can I apply the formulas?
I don't know if I can install LibreOffice on that image.</p>
|
<python><excel><pandas><xlsxwriter>
|
2023-03-08 11:37:21
| 0
| 1,776
|
marco
|
75,672,490
| 3,478,288
|
Display data by date, when data comes from multiple models in Django
|
<p>I'm currently working on an app with Django 4.1. I am trying to display a summary of some data coming from the database. Here is a simplified example of my models:</p>
<pre class="lang-py prettyprint-override"><code>class Signal(models.Model):
data1 = models.CharField()
data2 = models.CharField()
date = models.DateTimeField()
class Ban(models.Model):
data3 = models.CharField()
data4 = models.CharField()
date = models.DateTimeField()
</code></pre>
<p>What I'm trying to do is get filtered data from these 2 models and build a list ordered by date from all this data to display it, something like:</p>
<ul>
<li>Ban1 (08/03/2023)</li>
<li>Ban2 (07/03/2023)</li>
<li>Signal2 (06/03/2023)</li>
<li>Ban3 (05/03/2023)</li>
<li>Signal1 (04/03/2023)</li>
</ul>
<p>Thanks in advance for your ideas</p>
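<p>One common pattern, sketched here with plain objects standing in for the two querysets: evaluate both filtered querysets, concatenate them, and sort the combined list by the shared <code>date</code> field. (In Django itself the same idea is <code>sorted(chain(signals, bans), key=attrgetter('date'), reverse=True)</code>.)</p>

```python
from dataclasses import dataclass
from datetime import date
from operator import attrgetter

# Stand-ins for the two evaluated querysets; in Django you would use
# Signal.objects.filter(...) and Ban.objects.filter(...) the same way.
@dataclass
class Item:
    label: str
    date: date

signals = [Item('Signal1', date(2023, 3, 4)), Item('Signal2', date(2023, 3, 6))]
bans = [Item('Ban1', date(2023, 3, 8)), Item('Ban2', date(2023, 3, 7)),
        Item('Ban3', date(2023, 3, 5))]

# Sort the combined list by the shared date field, newest first.
merged = sorted(signals + bans, key=attrgetter('date'), reverse=True)
labels = [i.label for i in merged]
print(labels)  # ['Ban1', 'Ban2', 'Signal2', 'Ban3', 'Signal1']
```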
|
<python><django><django-models>
|
2023-03-08 11:34:11
| 2
| 632
|
Tartempion34
|
75,672,488
| 7,214,714
|
Efficient way for ordering values within group in Pandas
|
<p>I'm working with time series data, which is packaged in a time long dataframe, something like this:</p>
<pre><code>ACCOUNT | VAR1 | VAR2 | DAY
</code></pre>
<p>I'm interested in creating a new column <code>DAY_ORD</code>, which would give the ordinal rank of the <code>DAY</code> variable to each row within a group over unique <code>(ACCOUNT, VAR1, VAR2)</code> triplets.</p>
<p>Here is a small example of what I want to achieve:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">ACCOUNT</th>
<th style="text-align: center;">VAR1</th>
<th style="text-align: center;">VAR2</th>
<th style="text-align: right;">DAY</th>
<th style="text-align: right;">DAY-ORD</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">X</td>
<td style="text-align: center;">True</td>
<td style="text-align: right;">2022-02-03</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">X</td>
<td style="text-align: center;">True</td>
<td style="text-align: right;">2022-02-04</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: center;">B</td>
<td style="text-align: center;">X</td>
<td style="text-align: center;">True</td>
<td style="text-align: right;">2021-05-18</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">X</td>
<td style="text-align: center;">True</td>
<td style="text-align: right;">2022-02-05</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: center;">B</td>
<td style="text-align: center;">X</td>
<td style="text-align: center;">True</td>
<td style="text-align: right;">2022-05-20</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">Y</td>
<td style="text-align: center;">True</td>
<td style="text-align: right;">2022-02-05</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">X</td>
<td style="text-align: center;">True</td>
<td style="text-align: right;">2022-03-12</td>
<td style="text-align: right;">3</td>
</tr>
</tbody>
</table>
</div>
<p>Here is my current implementation:</p>
<pre><code># initialize an empty 'DAY_ORD' column
df['DAY_ORD'] = [None for i in range(len(df))]

# iterate over all triplets that appear in the data
for (_, row) in fb_data[['ACCOUNT', 'VAR1', 'VAR2']].copy().drop_duplicates().iterrows():
    acc, v1, v2 = row[0], row[1], row[2]
    # find the df slice that adheres to the considered triplet
    fdf = df.loc[(df.ACCOUNT == acc) & (fb_data.VAR1 == v1) & (fb_data.VAR2 == v2)].sort_values('DAY')
    # assign them an ordinal rank
    fdf['DAY_ORD'] = [i for i in range(len(fdf))]
    # set the DAY_ORD values in the original dataframe
    for i in fdf.index:
        df.loc[i, 'DAY_ORD'] = fdf['DAY_ORD'][i]

df['DAY_ORD']
</code></pre>
<p>It seems like it will do the job, but it runs very slowly, at around <code>8 it/s</code>. What is a clean way to make this faster?</p>
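<p>A vectorised alternative that avoids the per-triplet Python loop entirely: sort by <code>DAY</code> once, then use <code>groupby(...).cumcount()</code>, whose result aligns back to the original row order by index. Sketched on the question's example data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ACCOUNT': ['A', 'A', 'B', 'A', 'B', 'A', 'A'],
    'VAR1':    ['X', 'X', 'X', 'X', 'X', 'Y', 'X'],
    'VAR2':    [True] * 7,
    'DAY': pd.to_datetime(['2022-02-03', '2022-02-04', '2021-05-18',
                           '2022-02-05', '2022-05-20', '2022-02-05',
                           '2022-03-12']),
})

# Sort by DAY once, number rows within each triplet; assignment
# realigns to the original row order via the index.
df['DAY_ORD'] = df.sort_values('DAY').groupby(['ACCOUNT', 'VAR1', 'VAR2']).cumcount()
print(df['DAY_ORD'].tolist())  # [0, 1, 0, 2, 1, 0, 3]
```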
|
<python><pandas><dataframe>
|
2023-03-08 11:33:56
| 1
| 302
|
Vid Stropnik
|
75,672,420
| 4,658,633
|
How to send HTTP request with trailers?
|
<p>MDN Web Docs stated that the trailer could be included in the request phase: <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Trailer" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Trailer</a></p>
<p>I saw some trailers in the response examples. How could I send an HTTP request with trailers with Curl or Go/Python HTTP Client trailers rather than sending raw bytes to a socket to mock that behavior?</p>
|
<python><go><http><curl>
|
2023-03-08 11:27:09
| 0
| 556
|
Blaise Wang
|
75,672,263
| 3,348,264
|
SQLAlchemy & postgres - add days to date, get date diff and compare. Coerce error
|
<p>No matter what I try I keep getting an error <code>cannot coerce the integer to in interval.</code></p>
<pre><code>@get_due_date.expression
def get_due_date(cls):
    return func.add(cls.date_last_service, cast(cls.get_service_schedule, INTERVAL), func.now()) <= 3.5
</code></pre>
<p>The equivalent python code for what I want to do is.</p>
<pre><code>((self.date_last_service + timedelta(days=self.get_service_schedule)) - datetime.datetime.now()).days <= 3.5
</code></pre>
<p>Does anyone know why I'm getting this error? <code>cls.get_service_schedule</code> is an integer, and you should be able to cast a class attribute.</p>
|
<python><postgresql><sqlalchemy>
|
2023-03-08 11:11:59
| 1
| 2,204
|
Lewis Morris
|
75,672,080
| 2,626,865
|
Map (1:1) N input sentences to N given sentences by similarity
|
<p>I want to map <code>N_a</code> input sentences to <code>N_b</code> given sentences so that the mapping is one-to-one. That is,</p>
<ul>
<li>Every <code>N_a</code> is assigned</li>
<li>No <code>N_a</code> appears more than once</li>
</ul>
<p>Unfortunately the inputs vary slightly over time. Here is a representative mapping:</p>
<pre><code>{ "typeA": "step1 typeA (before typeB)"
, "typeD": "type D"
, "actionA": "actionA-suffix: typeB or Type D (type E available)"
, "typeE": "typeE - (not-actionA)"
, "actionB": "actionB some descriptive words"
, "typeA subtypeA": "subtypeA typeA or typeX - (not for typeB)"
, "actionA subtypeA": "actionA-suffix: subtypeA (type E available)"
, "typeB subtypeA": "subtypeA typeB"
, "typeC subtypeA": "subtypeA typeC"
, "typeB": "typeB (not subtypeA)"
, "typeF": "get typeF or typeF-subtypeA"
, "typeF actionB": "actionB typeF or typeF subtypeA"
}
</code></pre>
<p>Following [1], I've created this workflow using the <code>sentence_transformers</code> package[2]:</p>
<ul>
<li>BERT -> mean-pool -> cosine similarity</li>
</ul>
<p>Given the inputs it is clear that string-based alignment plus edit-distance (of the type featured in <code>rapidfuzz</code>[3]) won't work. But I'm not sure if BERT is the best approach, or if it is overkill for my needs. Perhaps I should use a word embedding model (word2vec, glove, etc) rather than a sentence embedding model. For this particular task I wonder if BERT could be tweaked to perform better.</p>
<p>The categories aren't well differentiated, so BERT sometimes maps an input to more than one given. An input <code>N_a</code> can be the best match for multiple givens <code>N_b0, ..., N_bi</code>. In such cases I keep the map with the best score and for the others fall back on the 2nd-best, 3rd-best, ... map. How can I improve BERT's performance to avoid these duplicates?</p>
<hr />
<p>Current implementation below:</p>
<pre><code>import pandas as pd
import sentence_transformers as st
model = st.SentenceTransformer('all-mpnet-base-v2')
sentences_input = [ ... ]
sentences_given = [ ... ]
# Create the encodings and apply mean pooling.
enc_input = model.encode(sentences_input)
enc_given = model.encode(sentences_given)
# Calculate a cosine similarity matrix
cos_similarity = st.util.cos_sim(enc_input, enc_given)
# As a pandas dataframe, label the matrix with the sentences.
df = pd.DataFrame\
(columns=sentences_input, index=sentences_given, dtype=float)
for i, sgiven in enumerate(sentences_given):
for j, sinput in enumerate(sentences_input):
df.loc[sgiven, sinput] = cos_similarity[j,i].item()
# For each given sentence, extract the best-scoring input sentence. Which
# unfortunately is not a one-to-one mapping.
mapping_bad_duplicates = df.idxmax(axis=1)
# Create a one-to-one mapping by iterating over the matches in order of best
# score. For each map, blocklist the row and column sentences, preventing
# duplicates.
mapping_good = {}
by_scores = sorted(df.unstack().items(), key=lambda k: k[1], reverse=True)
sentences = set(sentences_input) | set(sentences_given)
for (sinput,sgiven), score in by_scores:
if not sentences:
break
if sgiven not in sentences or sinput not in sentences:
continue
mapping_good[sgiven] = sinput
sentences.remove(sgiven)
sentences.remove(sinput)
# Convert the result to a dataframe
mapping_good_df = pd.Series(mapping_good)
</code></pre>
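<p>For reference, the greedy loop above could in principle be swapped for a globally optimal one-to-one assignment. A minimal, hypothetical sketch (assuming <code>scipy</code> is available; the similarity matrix below is made up):</p>

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical similarity matrix: rows = given sentences, cols = input sentences.
sim = np.array([
    [0.90, 0.80, 0.10],
    [0.85, 0.70, 0.20],
    [0.10, 0.20, 0.95],
])

# linear_sum_assignment minimizes total cost, so negate to maximize similarity.
row_ind, col_ind = linear_sum_assignment(-sim)
# Each given row_ind[i] is matched one-to-one with input col_ind[i].
print(list(col_ind))  # -> [1, 0, 2]
```

<p>In this toy matrix the greedy order (taking 0.95 first, then 0.90, then 0.70) totals 2.55, while the optimal assignment totals 2.60, so the two approaches can differ.</p>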
<ol>
<li><a href="https://towardsdatascience.com/bert-for-measuring-text-similarity-eec91c6bf9e1" rel="nofollow noreferrer">BERT For Measuring Text Similarity</a></li>
<li><a href="https://www.sbert.net/docs/usage/semantic_textual_similarity.html" rel="nofollow noreferrer">SBERT: Semantic Textual Similarity</a></li>
<li><a href="https://maxbachmann.github.io/RapidFuzz/Usage/fuzz.html#rapidfuzz.fuzz.partial_ratio" rel="nofollow noreferrer">rapidfuzz</a></li>
</ol>
|
<python><deep-learning><nlp><similarity>
|
2023-03-08 10:51:24
| 1
| 2,131
|
user19087
|
75,671,996
| 330,867
|
Python SSLError "SSLV3_ALERT_CERTIFICATE_EXPIRED". How to get the expired certificate name from the exception?
|
<p>I'm running <code>aiosmtpd</code> on my server, and from time to time, I have the following error and stack:</p>
<pre><code>SSL handshake failed
protocol: <asyncio.sslproto.SSLProtocol object at 0x7f90f77c59e8>
transport: <_SelectorSocketTransport fd=11 read=polling write=<idle, bufsize=0>>
Traceback (most recent call last):
File "/usr/lib/python3.7/asyncio/sslproto.py", line 625, in _on_handshake_complete
raise handshake_exc
File "/usr/lib/python3.7/asyncio/sslproto.py", line 189, in feed_ssldata
self._sslobj.do_handshake()
File "/usr/lib/python3.7/ssl.py", line 763, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: SSLV3_ALERT_CERTIFICATE_EXPIRED] sslv3 alert certificate expired (_ssl.c:1056)
</code></pre>
<p>Is there a way to know which certificate is expired when catching the error from</p>
<pre><code>sys.excepthook = handle_exception() # <= this function
</code></pre>
<p>(or any other way to catch it from aiosmtpd, but I'm not sure how)</p>
|
<python><ssl>
|
2023-03-08 10:43:30
| 0
| 40,087
|
Cyril N.
|
75,671,713
| 9,859,642
|
Using filename to initialize new DataFrame without exec
|
<p>I have a series of files to process and I need DataFrames that will have the same names as the files. The number of files is large, so I wanted to make the processing parallel with joblib. Unfortunately joblib does not accept <code>exec</code> inside a function it executes. Is there a better solution to this problem?</p>
<p>The script to process the files looks like that:</p>
<pre><code>files = Path(".").glob("*.out")
for output in files:
df_name = str(output).strip(".out")
exec(str(df_name) + " = pd.DataFrame(columns = ['col_1', 'col_2', 'col_3'])")
exec(str(df_name) + "_tmp = pd.DataFrame(columns = ['col_1', 'col_2', 'col_3'])")
. . .
</code></pre>
<p>I need a way to initialize DataFrames from filenames in such a way that it would be acceptable for joblib.</p>
|
<python><pandas><dataframe><exec><joblib>
|
2023-03-08 10:12:05
| 1
| 632
|
Anavae
|
75,671,573
| 746,777
|
Win32 in Python - LoadImage gets handle of icon but InsertMenuItem throws error
|
<p>I'm trying to add an icon to every menu option in a systray library in Python, <a href="https://pypi.org/project/pystray/" rel="nofollow noreferrer">pystray</a>. For that I'm using win32 functions. I can successfully load an icon with extension .ico using the <a href="https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-loadimagea" rel="nofollow noreferrer">LoadImage</a> function and get a handle (HBITMAP) to it, but when I use that handle in a <a href="https://learn.microsoft.com/en-us/windows/win32/api/winuser/ns-winuser-menuiteminfow" rel="nofollow noreferrer">MENUITEMINFO</a> structure, which is passed as the 4th parameter of the <a href="https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-insertmenuitemw" rel="nofollow noreferrer">InsertMenuItem</a> function, the call fails with error WinError 87.</p>
<p>Now about the details, for example:</p>
<pre><code> icon = win32.LoadImage(
None,
iconpath,
win32.IMAGE_ICON,
ico_x,
ico_y,
win32.LR_LOADFROMFILE)
</code></pre>
<p>returns an HBITMAP successfully, but after filling the MENUITEMINFO structure:</p>
<pre class="lang-py prettyprint-override"><code>
menu_item = win32.MENUITEMINFO(
cbSize=ctypes.sizeof(win32.MENUITEMINFO),
fMask=win32.MIIM_ID | win32.MIIM_STRING | win32.MIIM_STATE
| win32.MIIM_FTYPE | win32.MIIM_SUBMENU | win32.MIIM_BITMAP ,
wID=len(callbacks),
dwTypeData=descriptor.text,
fState=0
| (win32.MFS_DEFAULT if descriptor.default else 0)
| (win32.MFS_CHECKED if descriptor.checked else 0)
| (win32.MFS_DISABLED if not descriptor.enabled else 0),
fType=0
| (win32.MFT_RADIOCHECK if descriptor.radio else 0)
| (win32.MFT_STRING if descriptor.text else 0) ,
hbmpItem= icon if descriptor.icon else None,
hSubMenu=self._create_menu(descriptor.submenu, callbacks)
if descriptor.submenu
else None)
</code></pre>
<p><code>descriptor</code> here is a variable that holds the parameters for a menu entry (its text, if it has a check mark, etc.).</p>
<p>everything goes up to:</p>
<pre><code> win32.InsertMenuItem(hmenu, i, True, ctypes.byref(menu_item))
</code></pre>
<p>where <code>menu_item</code> is the variable of type MENUITEMINFO filled above.
When I debug I can see the hbmpItem correctly filled with the HBITMAP long value.
But this function fails with <code>OSError:WinError 87: The parameter is incorrect</code>.</p>
<p>Strangely, if I use a BMP file and in the LoadImage function I do:</p>
<pre class="lang-py prettyprint-override"><code> icon = win32.LoadImage(
None,
bmppath,
win32.IMAGE_BITMAP,
ico_x,
ico_y,
win32.LR_LOADFROMFILE)
</code></pre>
<p>Everything runs fine with the <code>InsertMenuItem</code> function.</p>
<p>Can somebody experienced with the <code>win32</code> API give me a hint about what I'm doing wrong?</p>
|
<python><winapi><pystray>
|
2023-03-08 09:56:35
| 0
| 317
|
digfish
|
75,671,499
| 9,439,097
|
DuckDB Binder Error: Referenced column not found in FROM clause
|
<p>I am working in DuckDB in a database that I read from json.</p>
<p>Here is the json:</p>
<pre class="lang-json prettyprint-override"><code>[{
"account": "abcde",
"data": [
{
"name": "hey",
"amount":1,
"flow":"INFLOW"
},
{
"name": "hello",
"amount":-2,
"flow": null
}
]
},
{
"account": "hijkl",
"data": [
{
"name": "bonjour",
"amount":1,
"flow":"INFLOW"
},
{
"name": "hallo",
"amount":-3,
"flow":"OUTFLOW"
}
]
}
]
</code></pre>
<p>I am opening it in Python as follows:</p>
<pre class="lang-py prettyprint-override"><code>import duckdb
duckdb.sql("""
CREATE OR REPLACE TABLE mytable AS SELECT * FROM "example2.json"
""")
</code></pre>
<p>This all works fine and I get a copy of my table, but then I try to update it:</p>
<pre class="lang-py prettyprint-override"><code>duckdb.sql("""
UPDATE mytable SET data = NULL WHERE account = "abcde"
""")
</code></pre>
<p>which crashes with</p>
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
BinderException Traceback (most recent call last)
Cell In[109], line 1
----> 1 duckdb.sql("""
2 UPDATE mytable SET data = NULL WHERE account = "abcde"
3 """)
6 # duckdb.sql("""
7 # DELETE FROM mytable WHERE account = "abcde"
8 # """)
10 duckdb.sql("""
11 SELECT * FROM mytable
12 """)
BinderException: Binder Error: Referenced column "abcde" not found in FROM clause!
Candidate bindings: "mytable.data"
LINE 2: ...mytable SET data = NULL WHERE account = "abcde"
^
</code></pre>
<p>I have searched the documentation and the error but I just can't find what I am doing wrong here.</p>
|
<python><sql><duckdb>
|
2023-03-08 09:49:07
| 2
| 3,893
|
charelf
|
75,671,456
| 10,989,584
|
Error installing pyqt5 under aarch64 architecture
|
<p>I'm trying to install <strong>pyqt5 V5.15.2</strong> on an emulate qemu <strong>aarch64 debian</strong> distro, but it fails with the following trace:</p>
<pre class="lang-bash prettyprint-override"><code>root@debian-arm64:~# pip install pyqt5==5.15.2 --config-settings --confirm-license= --verbose
Using pip 23.0.1 from /usr/local/lib/python3.9/dist-packages/pip (python 3.9)
Collecting pyqt5==5.15.2
Using cached PyQt5-5.15.2.tar.gz (3.3 MB)
Running command pip subprocess to install build dependencies
Collecting sip<7,>=5.3
Using cached sip-6.7.7-cp37-abi3-linux_aarch64.whl
Collecting PyQt-builder<2,>=1.6
Using cached PyQt_builder-1.14.1-py3-none-any.whl (3.7 MB)
Collecting toml
Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting packaging
Using cached packaging-23.0-py3-none-any.whl (42 kB)
Collecting ply
Using cached ply-3.11-py2.py3-none-any.whl (49 kB)
Collecting setuptools
Using cached setuptools-67.5.1-py3-none-any.whl (1.1 MB)
Installing collected packages: ply, toml, setuptools, packaging, sip, PyQt-builder
Successfully installed PyQt-builder-1.14.1 packaging-23.0 ply-3.11 setuptools-67.5.1 sip-6.7.7 toml-0.10.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Installing build dependencies ... done
Running command Getting requirements to build wheel
Getting requirements to build wheel ... done
Running command Preparing metadata (pyproject.toml)
Querying qmake about your Qt installation...
This is the GPL version of PyQt 5.15.2 (licensed under the GNU General Public License) for Python 3.9.2 on linux.
Found the license file 'pyqt-gpl.sip'.
Checking to see if the QtCore bindings can be built...
Checking to see if the QtNetwork bindings can be built...
Checking to see if the QtGui bindings can be built...
Checking to see if the QtWidgets bindings can be built...
Checking to see if the QtQml bindings can be built...
Checking to see if the QAxContainer bindings can be built...
Checking to see if the QtAndroidExtras bindings can be built...
Checking to see if the QtBluetooth bindings can be built...
Checking to see if the QtDBus bindings can be built...
Checking to see if the QtDesigner bindings can be built...
Checking to see if the Enginio bindings can be built...
Checking to see if the QtHelp bindings can be built...
Checking to see if the QtMacExtras bindings can be built...
Checking to see if the QtMultimedia bindings can be built...
Checking to see if the QtMultimediaWidgets bindings can be built...
Checking to see if the QtNetworkAuth bindings can be built...
Checking to see if the QtNfc bindings can be built...
Checking to see if the QtOpenGL bindings can be built...
Checking to see if the QtPositioning bindings can be built...
Checking to see if the QtLocation bindings can be built...
Checking to see if the QtPrintSupport bindings can be built...
Checking to see if the QtQuick bindings can be built...
Checking to see if the QtQuick3D bindings can be built...
Checking to see if the QtQuickWidgets bindings can be built...
Checking to see if the QtRemoteObjects bindings can be built...
Checking to see if the QtSensors bindings can be built...
Checking to see if the QtSerialPort bindings can be built...
Checking to see if the QtSql bindings can be built...
Checking to see if the QtSvg bindings can be built...
Checking to see if the QtTest bindings can be built...
Checking to see if the QtTextToSpeech bindings can be built...
Checking to see if the QtWebChannel bindings can be built...
Checking to see if the QtWebKit bindings can be built...
Checking to see if the QtWebKitWidgets bindings can be built...
Checking to see if the QtWebSockets bindings can be built...
Checking to see if the QtWinExtras bindings can be built...
Checking to see if the QtX11Extras bindings can be built...
Checking to see if the QtXml bindings can be built...
Checking to see if the QtXmlPatterns bindings can be built...
Checking to see if the _QOpenGLFunctions_2_0 bindings can be built...
Checking to see if the _QOpenGLFunctions_2_1 bindings can be built...
Checking to see if the _QOpenGLFunctions_4_1_Core bindings can be built...
Checking to see if the dbus-python support should be built...
The dbus-python package does not seem to be installed.
These bindings will be built: Qt, QtCore, QtNetwork, QtGui, QtWidgets, QtDBus, QtOpenGL, QtPrintSupport, QtSql, QtTest, QtXml, _QOpenGLFunctions_2_0, _QOpenGLFunctions_2_1, _QOpenGLFunctions_4_1_Core, pylupdate, pyrcc.
Generating the Qt bindings...
_in_process.py: /tmp/pip-install-m6oiyjbv/pyqt5_1965ddd1193045bab17bb1f59ff08aa1/sip/QtCore/qprocess.sip: line 99: column 5: 'Q_PID' is undefined
error: subprocess-exited-with-error
Γ Preparing metadata (pyproject.toml) did not run successfully.
β exit code: 1
β°β> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 /usr/local/lib/python3.9/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpo3qo2dlh
cwd: /tmp/pip-install-m6oiyjbv/pyqt5_1965ddd1193045bab17bb1f59ff08aa1
Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed
Γ Encountered error while generating package metadata.
β°β> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>The only 2 stuff I'm getting warn about are:</p>
<ul>
<li>Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: <a href="https://pip.pypa.io/warnings/venv" rel="nofollow noreferrer">https://pip.pypa.io/warnings/venv</a></li>
<li>The dbus-python package does not seem to be installed.</li>
</ul>
<p>Speaking about the first one, every package installation prints it; I don't care, as that's just an emulated test environment.<br />
Speaking about the second one, I've installed <em>libdbus-1-3</em> and <em>libdbus-1-dev</em> and ran both pip install <em>dbus-python</em> (which seems to be a deprecated module) and <em>dbus-next</em>, so I don't know what is still missing there.</p>
<p><strong>ISSUE</strong><br />
Apparently a Q_PID var in qprocess.sip seems to be undefined</p>
<blockquote>
<p>_in_process.py: /tmp/pip-install-m6oiyjbv/pyqt5_1965ddd1193045bab17bb1f59ff08aa1/sip/QtCore/qprocess.sip: line 99: column 5: 'Q_PID' is undefined</p>
</blockquote>
<p><strong>QUESTION</strong><br />
What am I doing wrong? How should I fix this error?</p>
<p><strong>EXTRA INFOS</strong><br />
List of installed packages:</p>
<pre class="lang-bash prettyprint-override"><code>root@debian-arm64:~# pip list
Package Version
------------------------- --------------
altgraph 0.17.3
certifi 2020.6.20
chardet 4.0.0
dbus-next 0.2.3
dbus-python 1.3.2
httplib2 0.18.1
idna 2.10
Mako 1.1.3
Markdown 3.3.4
MarkupSafe 1.1.1
packaging 23.0
pip 23.0.1
ply 3.11
pycurl 7.43.0.6
Pygments 2.7.1
pyinstaller 5.8.0
pyinstaller-hooks-contrib 2023.0
PySimpleSOAP 1.16.2
python-apt 2.2.1
python-debian 0.1.39
python-debianbts 3.1.0
PyYAML 5.3.1
reportbug 7.10.3+deb11u1
requests 2.25.1
setuptools 67.5.1
six 1.16.0
toml 0.10.2
urllib3 1.26.5
wheel 0.34.2
</code></pre>
<p>Python environment:</p>
<pre class="lang-bash prettyprint-override"><code>root@debian-arm64:~# python3 --version
Python 3.9.2
root@debian-arm64:~# pip --version
pip 23.0.1 from /usr/local/lib/python3.9/dist-packages/pip (python 3.9)
</code></pre>
<p>OS specs (debian-11.6.0-arm64):</p>
<pre class="lang-bash prettyprint-override"><code>root@debian-arm64:~# uname -a
Linux debian-arm64 5.10.0-21-arm64 #1 SMP Debian 5.10.162-1 (2023-01-21) aarch64 GNU/Linux
</code></pre>
<p>Are you guys able to help there?<br />
Thanks in advance,<br />
Hele.</p>
|
<python><pyqt5><arm64>
|
2023-03-08 09:44:24
| 3
| 413
|
Hele
|
75,671,277
| 230,270
|
python set windows environment parameters persistent
|
<p>I would like to set environment variables using Python on Windows 11.
Using commands like:</p>
<pre><code>import os
os.environ['USER'] = 'username'
print(os.environ['USER'])
</code></pre>
<p>will print <em>username</em>. Nevertheless, once I close the window, the environment variables I set disappear. It seems I need something like the <em>SETX</em> command in Windows. I found the article <a href="https://stackoverflow.com/questions/488366/how-do-i-make-environment-variable-changes-stick-in-python">how do I make environment variable changes stick in python</a>; however, the answers there use Windows commands.</p>
<p>Are there Python commands to do the same, instead of sending <em>subprocess</em> or <em>os.system</em> commands?</p>
|
<python><windows>
|
2023-03-08 09:22:07
| 1
| 3,496
|
Eagle
|
75,671,167
| 3,657,417
|
Pandas update previous records because future peaking is not possible
|
<p>This is what I have so far:</p>
<pre><code>import numpy as np
import pandas_ta as ta
from pandas import DataFrame, pandas
df = pandas.DataFrame({"color": [None, None, 'blue', None, None, None, 'orange', None, None, None, None],
'bottom': [1, 2, 7, 5, 9, 9, 5, 4, 5, 5, 3],
'top': [5, 5, 11, 8, 10, 10, 9, 7, 10, 6, 7]})
print(df)
"""
color down top
0 None 1 5
1 None 2 5
2 blue 7 11
3 None 5 8
4 None 9 10
5 None 9 10
6 orange 5 9
7 None 4 7
8 None 5 10
9 None 5 6
10 None 3 7
"""
# lookback period
N = 3
# Pivot each color to own column and shift
df2 = (df.pivot(columns='color', values=['top', 'bottom'])
.drop(columns=np.nan, level=1)
.ffill(limit=N-1).shift()
)
# compare current top with bottom & top from color occurance
out = df.join((df2['bottom'].le(df['top'], axis=0)
& df2['top'].ge(df['top'], axis=0)).astype(int))
print(out)
"""
color bottom top blue orange
0 None 1 5 0 0
1 None 2 5 0 0
2 blue 7 11 0 0
3 None 5 8 1 0
4 None 9 10 1 0
5 None 9 10 1 0
6 orange 5 9 0 0
7 None 4 7 0 1
8 None 5 10 0 0
9 None 5 6 0 1
10 None 3 7 0 0
"""
</code></pre>
<h3>Question:</h3>
<p>I only want to consume each color once. That means that for every blue or orange occurrence there can be only one 1 in the upcoming 3 rows.
(Two blues after each other will result in two 1s: one 1 for every blue.)</p>
<pre><code>"""
color bottom top blue orange
0 None 1 5 0 0
1 None 2 5 0 0
2 blue 7 11 0 0
3 None 5 8 1 0
4 None 9 10 1 0 --> this should be 0, blue already consumed on row 3
5 None 9 10 1 0 --> this should be 0, blue already consumed on row 3
6 orange 5 9 0 0
7 None 4 7 0 1
8 None 5 10 0 0
9 None 5 6 0 1 --> this should be 0, orange already consumed on row 7
10 None 3 7 0 0
"""
</code></pre>
<p>One bottleneck is that for this to function correctly I am not allowed to peek into the future. So I am not allowed to use <code>.shift(-3)</code> or <code>iloc[-1]</code>, for example.</p>
<p>That sort of kills my initial thinking about keeping track of a consumed state by using something like <code>.rolling(-3).max() == 1</code> .</p>
|
<python><pandas><dataframe>
|
2023-03-08 09:10:32
| 1
| 773
|
Florian
|
75,671,125
| 2,926,271
|
Autoencoder for dimensionality reduction of binary dataset for clustering
|
<p>Given a binary dataset (derived from yes/no questionnaire responses) intended for subsequent unsupervised cluster analysis, with significant multicollinearity and a total of 31 features by ~50,000 observations (subjects), it appeared sensible to reduce the dimensionality of the input data before performing cluster analysis. I attempted using an autoencoder for this, but surprisingly, the clusters (derived through k-medoids, chosen due to the nominal nature of the underlying data and its greater stability with respect to outliers/noise compared to e.g. k-means) were actually more distinct and clearly distinguished when using MCA, with a clear maximum Silhouette coefficient at k = 5.</p>
<p>Given that MCA with the first 5 PCs (explaining just ~75% of the variance, chosen through a scree plot) was used before I attempted the autoencoder way, it surprises me that an autoencoder did a worse job at extracting meaningful features at the same bottleneck dimension. <strong>The problem with the current autoencoder, appears to be that the data in the bottleneck layer, which is used in the clustering, is distorted...</strong></p>
<p>Below is the code I used to construct the autoencoder. Could it be that the hyper-parameters are off, or some details of the overall architecture? Random search over the number of layers, learning rate, batch size, layer dimensions, etc. has not yielded anything substantial. Loss is similar between the train and validation datasets, and levels out at around 0.15 after ~40 epochs.</p>
<p>I've also tried to identify studies where such data has been used, but not found anything useful.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt
input_dim = 31
layer_1_dim = 18
layer_2_dim = 10
bottleneck_dim = 5
learning_rate = 0.001
epochs = 100
batch_size = 300
# split data into training and validation
training_n = int(data.shape[0] * 0.8)
train_data = data[:training_n, :]
val_data = data[training_n:, :]
# define autoencoder initializer
initializer = tf.keras.initializers.GlorotUniform()
# autoencoder layers
input_layer = Input(shape=(input_dim,))
layer = Dense(layer_1_dim, activation='relu')(input_layer)
layer = Dense(layer_2_dim, activation='relu', kernel_initializer=initializer)(layer)
layer = Dense(bottleneck_dim, activation='relu', kernel_initializer=initializer, name="bottleneck-output")(layer)
layer = Dense(layer_2_dim, activation='relu', kernel_initializer=initializer)(layer)
layer = Dense(layer_1_dim, activation='relu', kernel_initializer=initializer)(layer)
output_layer = Dense(input_dim, activation='sigmoid', kernel_initializer=initializer)(layer)
# define and compile autoencoder model
autoencoder = Model(inputs=input_layer, outputs=output_layer)
optimizer = Adam(learning_rate=learning_rate)
autoencoder.compile(optimizer=optimizer, loss='binary_crossentropy')
# train the autoencoder model
history = autoencoder.fit(train_data, train_data, epochs=epochs, batch_size=batch_size, validation_data=(val_data, val_data))
# get bottleneck output
bottleneck_autoencoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer('bottleneck-output').output)
bottleneck_output = bottleneck_autoencoder.predict(data)
# plot loss in training and validation sets
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Autoencoder loss (binary cross-entropy)')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.savefig('output/embedding.png')
</code></pre>
|
<python><tensorflow><keras><artificial-intelligence><cluster-analysis>
|
2023-03-08 09:06:14
| 2
| 4,405
|
display-name-is-missing
|
75,671,070
| 7,483,211
|
How to make Snakemake run a rule once for all matching outputs, not once for each wildcard match
|
<p>In a Snakemake workflow, for efficiency reasons, I want to run a rule once for the <em>list</em> of wildcard matches - rather than once for each match.</p>
<p>What's the idiomatic way of doing this in Snakemake?</p>
<p>This is minimal starter code that does <em>not</em> do what I want, as it would call <code>rule produce_all_csv</code> once for each of the required outputs (here 3 times) rather than the desired one time.</p>
<pre><code>rule all:
input:
"out_1.csv",
"out_2.csv",
"out_3.csv",
rule produce_all_csv:
"""
This rule should be called _once_ for _all_ samples
Not once per sample
"""
input:
"in_{sample}.csv",
output:
"out_{sample}.csv",
shell:
"""
# Placeholder for a real command
# that takes a list of input files
# and produces a list of output file
"""
</code></pre>
<p>For concreteness, assume the tool has this API:</p>
<pre class="lang-bash prettyprint-override"><code>tool --inputs input_1.csv,input_2.csv --outputs output_1.csv,output_2.csv
</code></pre>
<p><em>This question is inspired by <a href="https://stackoverflow.com/questions/75603548/how-to-escape-missingoutputexception-while-running-a-for-loop-in-a-rule-in-snake">how to escape MissingOutputException while running a for loop in a rule in snakemake</a></em></p>
|
<python><wildcard><bioinformatics><snakemake>
|
2023-03-08 09:00:24
| 1
| 10,272
|
Cornelius Roemer
|
75,671,038
| 7,376,511
|
pytest: manually add test to discovered tests
|
<pre><code># tests/test_assert.py
@pytest.mark.mymark
def custom_assert():
assert True
</code></pre>
<p>How do I force pytest to discover this test?</p>
<p>In general, how do I dynamically add any test to pytest's list of discovered tests, even if they don't fit in the naming convention?</p>
|
<python><pytest>
|
2023-03-08 08:56:52
| 2
| 797
|
Some Guy
|
75,670,969
| 2,793,602
|
Hugging Face transformer - object not callable
|
<p>I am trying to make a text summarizer using the T5 transformer from Hugging Face.</p>
<p>From the Hugging Face site I am running this:</p>
<pre><code>from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5Model.from_pretrained("t5-small")
</code></pre>
<p>Before this I pip installed transformers, sentencepiece, and datasets transformers[sentencepiece]</p>
<p>When I run it I get the following:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [9], line 2
1 # Load the T5 model and tokenizer
----> 2 tokenizer = T5Tokenizer.from_pretrained("t5-small")
3 model = T5Model.from_pretrained("t5-small")
TypeError: 'NoneType' object is not callable
</code></pre>
<p>What am I doing wrong here?</p>
|
<python><huggingface-transformers>
|
2023-03-08 08:49:55
| 1
| 457
|
opperman.eric
|
75,670,692
| 5,159,404
|
Why does Pandas fail to open excel file?
|
<p>I am having trouble opening Excel files (xlsx) in pandas.
I had a small application that used to work, but then another application with the same code was failing miserably when opening the same files.</p>
<p>The files have not changed since and I cannot change them (they are provided externally)</p>
<p>After some digging I was able to find out that this seems to be linked to the version of openpyxl:</p>
<ul>
<li>If the version is set to <code>openpyxl==3.0.9</code>: files are opened without failing.</li>
<li>If the version is <code>openpyxl==3.1.1</code>: two out of three files fail to open, raising the exception <code>'ReadOnlyWorksheet' object has no attribute 'defined_names'</code></li>
</ul>
<p><code>File "path\venvtest\lib\site-packages\openpyxl\reader\workbook.py", line 109, in assign_names sheet.defined_names[name] = defn AttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names'</code></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
folder = 'C:\\Path\\of\\folder\\'
FILES = [
folder + 'file1.xlsx',
folder + 'file2.xlsx',
folder + 'file3.xlsx'
]
for file in FILES :
#make sure the file is readable by pandas
try :
sheet_name = 'Inventory Summary'
df = pd.read_excel(file, sheet_name=sheet_name, engine='openpyxl')
print(f'file : OK')
except Exception as e:
print(f'file : ERROR : {file}')
print(e)
continue
</code></pre>
<p>I can obviously keep the old version for now, but I am curious to know if anyone has a clearer understanding of what is going on.</p>
|
<python><pandas><openpyxl>
|
2023-03-08 08:18:27
| 0
| 1,002
|
Wing
|
75,670,679
| 7,376,511
|
Dynamically reuse integration testing scenarios in pytest
|
<p>I have a set of classes for which each property and method must be tested.</p>
<p>Something like</p>
<pre><code>datasets = [{
"klass": MyClass1,
"a": 1,
"c": 3,
},
{
"klass": MyClass2,
"a": 12,
"b": 13,
}]
</code></pre>
<p>Note that these are just quick examples; the test data is much more complex and larger.</p>
<p>Coming from ruby and the Shoulda framework, before I actually discovered how unwieldy python testing was, I had thought I could simply write some dynamic generator that would allow me to do something like</p>
<pre><code># tests/test_datasets_client_1.py
for dataset in datasets_client_1:
generate_test_case(**dataset)
# tests/test_datasets_client_2.py
for dataset in datasets_client_2:
generate_test_case(**dataset)
</code></pre>
<p>Which would really be calling</p>
<pre><code># tests/test_libraries.py
def generate_test_case(**kwargs):
# do some stuff to register TestCase(**kwargs) so that it's picked up by the testing framework
# stuff like markers and the test name would also be dynamically set here based on the kwargs
class TestCaseDynamic:
def __init__(klass, **kwargs):
self.instance = klass(kwargs["some_specific_parameter"])
self.kwargs = kwargs # or setattr, doesn't matter as long as they're available in the rest of the instance methods
def test_a(self):
if self.kwargs["a"]:
assert self.instance.a == self.kwargs["a"]
def test_b(self):
if self.kwargs["b"]:
assert self.instance.b > self.kwargs["b"]
def test_c(self):
if self.kwargs["c"]:
assert self.instance.c <= self.kwargs["c"]
</code></pre>
<p>However, Python testing is much worse than I anticipated, and there does not seem to be an immediate way to do this. Can anyone point me in the right direction? How can I dynamically generate a huge quantity of these tests without losing my sanity in the process?</p>
<p>How can I make this class instance actually be an instance with self persistence? <code>self.instance.b</code> and <code>self.instance.c</code> could be calling the same expensive method internally, which is cached between instances, so why would I have to wait 5 minutes for each of these tests when in the real world it would just be called once?</p>
<p>Given Python's dynamicity I had thought these were all rhetorical questions with easy answers, but after dipping my toes in pytest I am not so sure anymore.</p>
<p>Every single example code I've seen is overly complicated and relies on metaclasses, nested decorators and other hard to understand code that I have to sit down and study to achieve something that is basic, obvious behavior in other languages. I found some previous answers like <a href="https://stackoverflow.com/a/35580034/7376511">https://stackoverflow.com/a/35580034/7376511</a>, but there was no way to actually dynamically call that class with the dataset without redefining it in every test file, which defeats the aim of declaring the testing scenario once and importing it everywhere else.</p>
|
<python><testing><pytest><integration-testing>
|
2023-03-08 08:17:20
| 1
| 797
|
Some Guy
|
75,670,657
| 4,212,875
|
For each group in Pandas dataframe, return the most common value if it shows up more than `x%` of the time
|
<p>Given a pandas dataframe, I would like to return a column's (string datatype) most common value for each groupby if this value shows up in more than <code>n%</code> of the rows, otherwise return 'NA'.</p>
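<p>For illustration, a small made-up example of what I mean (the column names and the threshold <code>n = 50%</code> are hypothetical):</p>

```python
import pandas as pd

# Hypothetical data; the real frame is much larger.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B"],
    "value": ["x", "x", "y", "p", "q"],
})

# Desired result for n = 50 (most common value per group, if it covers > 50% of rows):
#   group A -> "x"  ("x" appears in 2 of A's 3 rows, i.e. ~67%)
#   group B -> "NA" ("p" and "q" each cover exactly 50% of B's rows, not more)
```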
|
<python><pandas><dataframe><group-by>
|
2023-03-08 08:15:07
| 2
| 411
|
Yandle
|
75,670,106
| 20,762,114
|
Is there a way to perform validations/assertions within method chaining in Polars?
|
<p>Is there any way to perform validations/assertions as part of method chaining?</p>
<p>This is very useful for long queries where we want to stop execution early on if a certain condition is not met.</p>
<p>Some possible examples:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
'A': ['X', 'X', 'Y', 'Z'],
})
(
df
.assert((pl.col('A') == 'X').any(), message="At least one element in A should be X")
.filter(pl.col('A') == 'X')
)
(
df
.assert((pl.col('A') == 'X').sum() > 1, message="At least 2 elements in A should be X")
.filter(pl.col('A') == 'X')
)
(
df
.lazy()
.assert([
pl.cond((pl.col('A') == 'X').sum() > 1).message("At least 2 elements in A should be X"),
pl.cond((pl.col('A') == 'Y').sum() > 1).message("At least 2 elements in A should be Y"),
])
.filter(pl.col('A') == 'X')
.collect()
)
</code></pre>
|
<python><python-polars>
|
2023-03-08 06:53:41
| 0
| 317
|
T.H Rice
|
75,669,517
| 1,289,479
|
How to send text from Python to DeepL desktop app
|
<p>I know DeepL has a library to send text to their servers for translation, but I want to use their desktop app which has no internet dependency.</p>
<p>The Desktop app has this feature where if you press Ctrl+C+C, it will put that highlighted text into the DeepL app and it works from there.</p>
<p>I was wondering if it's possible to automatically send text from Python to the DeepL desktop app this way.</p>
|
<python><desktop-application><deepl>
|
2023-03-08 04:46:52
| 0
| 311
|
user1289479
|
75,669,373
| 7,780,577
|
How do you avoid memory leaks in Spark/Pyspark for multiple dataframe edits and loops?
|
<p>There are 2 scenarios that I feel cause memory leaks that I struggle to know how to avoid.</p>
<p><strong>Scenario 1:</strong></p>
<p>There is a need to perform multiple edits to a df like below:</p>
<pre><code>df = method1()
df = method2(df)
df = method3(df)
</code></pre>
<p>If I'm not mistaken, this approach is discouraged because each df is lengthening the memory footprint. How do you get around this?</p>
<p><strong>Scenario 2:</strong></p>
<p>There is a need to execute looping in PySpark. For example, let's say I have 400 files I need to run a transformation on, and I loop through 10 at a time: read in 10 files, transform the data, write back out to file, then loop again. This also feels like it is causing a memory leak.</p>
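<p>Concretely, the loop in scenario 2 is shaped like this (pure Python; the Spark read/transform/write calls are elided and the file names are placeholders):</p>

```python
def chunks(items, size):
    """Yield successive fixed-size batches from a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

files = [f"file_{i:03d}.parquet" for i in range(400)]  # placeholder names

for batch in chunks(files, 10):
    # read in the 10 files, transform, write back out -- Spark calls elided
    pass

print(len(list(chunks(files, 10))))  # 40 batches of 10
```

<p>Each iteration creates new dataframes, and memory seems to keep growing across iterations.</p>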
<p>Should we be persisting data in both scenarios? How do we prevent memory build-up? Is there a way to refresh/kill the Spark context while keeping the loop going, to force the release of any memory?</p>
|
<python><apache-spark><pyspark><memory-management><memory-leaks>
|
2023-03-08 04:07:46
| 1
| 301
|
cauthon
|
75,669,344
| 7,021,137
|
`import matplotlib.pyplot` or `import pylab`?
|
<p>Excuse the blunt question, but what's the difference between <code>import pylab</code> and <code>import matplotlib.pyplot</code>? I realized my coworker and I have been using different imports and, to the best of our knowledge, there is no difference between the two imports.</p>
<p>Does anyone choose one over the other for a specific reason? What's the difference?</p>
|
<python><matplotlib>
|
2023-03-08 04:01:34
| 2
| 377
|
NoVa
|
75,669,330
| 2,073,640
|
Getting data from streamlit instance
|
<p>I made a Streamlit app that writes locally to a CSV on the instance, and just realized I have no way to get that data. I am pretty sure that if I push up a change via GitHub it'll overwrite the DB (can I use a <code>.gitignore</code> maybe?)</p>
<p>Pretty dumb problem, open to any suggestions how to recover the data :)</p>
|
<python><streamlit>
|
2023-03-08 03:56:44
| 1
| 358
|
SimonStern
|
75,669,316
| 7,021,137
|
Connecting Lowest Datapoints in Plot
|
<p>I need to make a fancy plot that draws a line through the lowest x-valued data points. The code below is a working example with similar-looking data to what I'm working with. Please use only <code>numpy</code> and <code>matplotlib</code>, and please continue using the <code>plt.subplots()</code> method.</p>
<pre><code>import numpy as np
import pylab as plt
y = np.random.randint(1,20,50)
fig, ax = plt.subplots()
ax.plot(y, '.')
ax.grid(alpha=0.5)
plt.show()
</code></pre>
<p>The above code makes this plot:</p>
<p><a href="https://i.sstatic.net/kM6cN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kM6cN.png" alt="enter image description here" /></a></p>
<p>I want a plot that "hugs" the left-most boundary of points, like this:</p>
<p><a href="https://i.sstatic.net/h9D5T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h9D5T.png" alt="enter image description here" /></a></p>
<p>Any ideas?</p>
|
<python><pandas><numpy><matplotlib>
|
2023-03-08 03:53:16
| 2
| 377
|
NoVa
|
75,669,198
| 7,475,225
|
Odd behavior with !help command in discord.py?
|
<p>So I am trying to define the brief description for the generic <code>!help</code> command and that works fine but I am also trying to define the extended help for the <code>!help command_name</code> command so I get get further details on how a command works.</p>
<p>In my command I have defined the details like so:</p>
<pre><code>@client.command()
async def box_custom(ctx, command, *args):
    """Allows the defining of custom commands for a given built in command.

    Example: "!custom clear box_clear" will add a new command called !clear.
    This will call the box_clear function!

    Args:
        command (str): The new command to add.
        args (List[str]): Any additional arguments for the original command.
    """
    # do some stuff here

box_custom.brief = "Define custom command for a given built in command."
</code></pre>
<p>When I run my <code>!help</code> Command everything looks fine:</p>
<pre><code>No Category:
  box_custom Define custom command for a given built in command.
  box_play   Plays audio from a specified URL.
  help       Shows this message

Type !help command for more info on a command.
You can also type !help category for more info on a category.
</code></pre>
<p>When I run the <code>!help box_custom</code> command I get what I defined but I also get some generic placeholder text telling me no description was given.</p>
<pre><code>!box_custom

Allows the defining of custom commands for a given built in command.

Example: "!custom clear box_clear" will add a new command called !clear.
This will call the box_clear function!

Args:
    command (str): The new command to add.
    args (List[str]): Any additional arguments for the original command.

Arguments:
    command No description given
    args No description given
</code></pre>
<p>I have not been able to find a workable way to either get rid of this extra <code>Arguments:</code> section or to at the very least be able to provide a description for it.</p>
<p>Am I missing something here? I have not normally worked with providing descriptions in functions, so this is kind of new to me.</p>
|
<python><python-3.x><discord><discord.py>
|
2023-03-08 03:27:46
| 1
| 15,266
|
Mike - SMT
|
75,669,165
| 7,993,977
|
Find time difference of first instance of a value and first instance of another value in dataframe
|
<p>I have an already sorted dataframe of panel data below, where a certain <code>diagnosisCode</code> is mapped to a positive or negative value in <code>hasDisease</code>. I have 2 arrays that indicate whether a code is positive or negative:</p>
<pre><code>positiveDiagnosisCodes = [A] # corresponds to hasDisease = 1
negativeDiagnosisCodes = [B, C, D] # corresponds to hasDisease = 0
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Date</th>
<th>diagnosisCode</th>
<th>hasDisease</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>01/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>02/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>03/01/2020</td>
<td>B</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>04/01/2020</td>
<td>B</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>07/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>02/01/2020</td>
<td>B</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>03/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>07/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>11/01/2020</td>
<td>B</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>15/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>18/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>19/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>20/01/2020</td>
<td>B</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>21/01/2020</td>
<td>C</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>22/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>25/01/2020</td>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>26/01/2020</td>
<td>D</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>28/01/2020</td>
<td>D</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I am interested in finding the time it takes for a patient to show signs of no disease. Specifically, I want to find the time difference between:</p>
<ol>
<li>the earliest timestamp where <code>hasDisease</code> = 1 and</li>
<li>*the earliest timestamp where <code>hasDisease</code> = 0</li>
</ol>
<p>*However, this is only valid for <code>negativeDiagnosisCodes</code> not previously recorded.</p>
<hr />
<p>Hence in the example above, the time difference (in days for simplicity, but actual data will be in hours) of <code>ID</code> 2 is 2 days, since the earliest timestamp where <code>Disease</code> = 1 is <code>01/01/2020</code> and the earliest timestamp where <code>Disease</code> = 0 is <code>03/01/2020</code>.</p>
<p>For <code>ID</code> 3, there are 2 results - 18 days and 23 days. We take the time difference of <code>21/01/2020</code> and <code>03/01/2020</code>, where the patient is first diagnosed with code <code>C</code>. We also take the time difference of <code>26/01/2020</code> and <code>03/01/2020</code>, where the patient is first diagnosed with code <code>D</code>. We ignore the entries on <code>11/01/2020</code> and <code>20/01/2020</code> since the patient was already diagnosed with code <code>B</code> on <code>02/01/2020</code>.</p>
<hr />
<p>I hope to get an output like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Time Difference</th>
<th>diagnosisCode</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>2 days</td>
<td>B</td>
</tr>
<tr>
<td>3</td>
<td>18 days</td>
<td>C</td>
</tr>
<tr>
<td>3</td>
<td>23 days</td>
<td>D</td>
</tr>
</tbody>
</table>
</div><hr />
<p>How can I achieve this efficiently? I have tried using pandas groupby() but I am stuck on the aggregation part. Do I need to use more complex data structures like graphs?</p>
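<p>In plain Python, the rule I'm trying to express looks roughly like this (a hand-rolled sketch on the ID 3 rows from above, which I'd like to translate into efficient pandas):</p>

```python
from datetime import date

def time_to_negative(rows):
    """rows: (date, code, has_disease) tuples for one ID, sorted by date.

    Returns (code, days) for the first occurrence of each negative code
    not already seen before the first positive row.
    Assumes at least one positive row exists.
    """
    first_pos = next(d for d, _, pos in rows if pos)
    # Negative codes recorded before the first positive don't count.
    reported = {c for d, c, pos in rows if not pos and d < first_pos}
    out = []
    for d, c, pos in rows:
        if not pos and d > first_pos and c not in reported:
            out.append((c, (d - first_pos).days))
            reported.add(c)
    return out

id3 = [
    (date(2020, 1, 2), "B", 0), (date(2020, 1, 3), "A", 1),
    (date(2020, 1, 7), "A", 1), (date(2020, 1, 11), "B", 0),
    (date(2020, 1, 15), "A", 1), (date(2020, 1, 18), "A", 1),
    (date(2020, 1, 19), "A", 1), (date(2020, 1, 20), "B", 0),
    (date(2020, 1, 21), "C", 0), (date(2020, 1, 22), "A", 1),
    (date(2020, 1, 25), "A", 1), (date(2020, 1, 26), "D", 0),
    (date(2020, 1, 28), "D", 0),
]
print(time_to_negative(id3))  # [('C', 18), ('D', 23)]
```

<p>This matches the expected output above, but looping per ID in Python will be slow on my real data.</p>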
|
<python><pandas><dataframe><algorithm><panel-data>
|
2023-03-08 03:20:04
| 1
| 375
|
MrSoLoDoLo
|
75,669,150
| 11,016,395
|
Retrieve relative position of multilevel list in docx file using Python
|
<p>I need to read docx files in Python and retrieve the relative positions of a multilevel list. See the example below:</p>
<p><a href="https://i.sstatic.net/Kbpnw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kbpnw.png" alt="enter image description here" /></a></p>
<p>I want to read only the text within the multilevel list, retrieve its relative position, and return a dictionary. The expected output would be:</p>
<pre><code>output = {'1': 'This is the first bullet point.',
'1-(a)': 'This is the first sub bullet point.',
'1-(b)': 'This is the second sub bullet point.',
'1-(b)-(i)': 'My name is Bob.',
'1-(b)-(ii)': 'My name is Dave.',
'2': 'This is the second bullet point.'
}
</code></pre>
<p>As 'This is a sample document.' and 'End of document.' are not within the multilevel list, these texts shouldn't be included in the dictionary.</p>
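<p>Independent of the docx parsing, the label scheme I'm after is essentially this (a sketch that turns a list-position path into a key; the letter/roman depth limits are just for illustration):</p>

```python
def label(path):
    """Turn a list-position path like [1, 2, 2] into '1-(b)-(ii)'."""
    letters = "abcdefghij"
    romans = ["i", "ii", "iii", "iv", "v"]
    parts = [str(path[0])]
    if len(path) > 1:
        parts.append(f"({letters[path[1] - 1]})")
    if len(path) > 2:
        parts.append(f"({romans[path[2] - 1]})")
    return "-".join(parts)

print(label([1]))        # 1
print(label([1, 2]))     # 1-(b)
print(label([1, 2, 2]))  # 1-(b)-(ii)
```

<p>What I can't figure out is how to recover those position paths from the docx file in the first place.</p>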
<p>I saw some related questions such as <a href="https://stackoverflow.com/questions/66374154/getting-the-list-numbers-of-list-items-in-docx-file-using-python-docx">this</a> and <a href="https://stackoverflow.com/questions/51829366/bullet-lists-in-python-docx">this</a> but they're different from my requirement. Appreciate your help on my question!</p>
|
<python><python-docx>
|
2023-03-08 03:17:25
| 1
| 483
|
crx91
|
75,669,055
| 4,420,797
|
Python: Read a .txt file and change the name of files in directory with .txt index
|
<p>I have a directory that contains many files, and I also have a <code>.txt</code> file that contains names in the format below. I would like to rename the files based on the titles in the text file. There are 100 names and 100 files in the folder.</p>
<p><strong>Inside Text Files</strong></p>
<pre><code>Flower Blue
Flower Red
Flower Black
Flower orange
</code></pre>
<p><strong>Inside Folder</strong></p>
<pre><code>001.mp4
002.mp4
003.mp4
004.mp4
</code></pre>
<p><strong>Code</strong> The code below does not rename the files:</p>
<pre><code>import os
import shutil

dirpath = "/media/cvpr/CM_22/inst_you/vidoes"

for file in os.listdir(dirpath):
    with open('/media/cvpr/CM_22/inst_you/vidoes_name') as f:
        lines = [line.rstrip('\n') for line in f]
    print(file)
    newfile = os.path.join(dirpath, file.split("_", 1)[1])
    print(newfile)
    os.rename(os.path.join(dirpath, file), newfile)
</code></pre>
|
<python><python-3.x><list>
|
2023-03-08 02:56:21
| 2
| 2,984
|
Khawar Islam
|
75,669,048
| 16,922,748
|
Randomly changing letters in list of string based on probability
|
<p>Given the following</p>
<p><code>data = ['AAACGGGATT\n','CTGTGTCAGT\n','AATCTCTACT\n']</code></p>
<p>For every letter in a string (not including <code>\n</code>), with probability 0.5 (i.e. a 50% chance of a change), I'd like to replace the letter with a randomly selected letter from the options (A, G, T, C), with the caveat that the replacement cannot be the same as the original.</p>
<p>This is what I've attempted thus far:</p>
<pre><code>import random

def Applyerrors(string, string_length, probability):
    i = 0
    while i < string_length:
        i = i + 1
        p = i/string_length
        if p > probability:
            new_var = string[i]
            options = ['A', 'G', 'T', 'C']
            [item.replace(new_var, '') for item in options]
            replacer = random.choice(options)
            [res.replace(new_var, replacer) for res in string]
        else:
            pass

# Testing
data_updated = [Applyerrors(unit, 10, 0.5) for unit in data]
data_updated
</code></pre>
<p>The result from this:</p>
<pre><code>[None, None, None]
</code></pre>
<p>In addition to not getting the desired result, my probability doesn't make sense as I'm hoping to achieve 50% overall change in the data_updated file.</p>
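<p>To be explicit about the per-letter behaviour I'm after, this is the rule in isolation (hand-rolled; the explicit <code>random.Random</code> instance is only there to make the example reproducible):</p>

```python
import random

def mutate(letter, rng, probability=0.5):
    """With the given probability, swap letter for a different base."""
    if rng.random() < probability:
        return rng.choice([b for b in "AGTC" if b != letter])
    return letter

rng = random.Random(0)
print("".join(mutate(b, rng) for b in "AAACGGGATT"))
```

<p>Applied across the whole list of strings, roughly half the letters should end up changed.</p>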
<p>Any insight would be greatly appreciated. Thanks!</p>
|
<python><string><list><replace><probability>
|
2023-03-08 02:54:25
| 2
| 315
|
newbzzs
|
75,668,928
| 14,608,529
|
How to export played audio that is merged and looped through via Python?
|
<p>I have code in Python that loops through and merges a series of audio files to play a customized sound as follows:</p>
<pre><code>import time

from pydub import AudioSegment
from pydub.playback import play

sounds = []
sounds.append(AudioSegment.from_file("Downloads/my_file.mp3"))

for i in range(4):
    sounds.append(sounds[i].overlay(sounds[0], position=delays[i+1]))

for i in range(len(sounds)):
    counter = 0
    while counter <= 2:
        play(sounds[i])
        time.sleep(delays[i])
        counter = counter + 1
</code></pre>
<p>How can I export the final sound that is played on my computer through this Python code?</p>
<p>I know it's possible to do something like:</p>
<pre><code>my_custom_sound.export("/path/to/new/my_custom_sound.mp4", format="mp4")
</code></pre>
<p>But how can I save the custom sound that's outputted through my loops?</p>
|
<python><python-3.x><file><audio><pydub>
|
2023-03-08 02:24:20
| 2
| 792
|
Ricardo Francois
|
75,668,916
| 8,751,871
|
Overriding dynamically created dag's attributes (start_date)
|
<p>Say I had a <code>dag1.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>@dag(
dag_id="my_dag",
start_date=pendulum.datetime(2022, 9, 28, tz="UTC"),
catchup=True
# ...
)
def some_dag_code(name="bob", age=21):
@task()
def print_name(name):
print(name)
@task()
def print_age(age):
print(age)
task1 = print_name(name)
task2 = print_age(age)
dag = some_dag_code(name="bob", age=21)
</code></pre>
<p>I can then re-use the logic in a <code>dag2.py</code> file by doing the following:</p>
<pre class="lang-py prettyprint-override"><code>from dags.dag1 import some_dag_code
dag = some_dag_code(name="michael", age=31)
dag.dag_id = "my_dag_but_for_michael" # this works
dag.start_date = pendulum.datetime(2023, 3, 5, tz="UTC") # does nothing (uses 28-sep)
# dag.catchup = False # this works
</code></pre>
<p>Any ideas how I can override the <code>start_date</code>?</p>
|
<python><airflow>
|
2023-03-08 02:22:56
| 1
| 2,670
|
A H
|