Columns: QuestionId (int64, 74.8M to 79.8M), UserId (int64, 56 to 29.4M), QuestionTitle (string, 15 to 150 chars), QuestionBody (string, 40 to 40.3k chars), Tags (string, 8 to 101 chars), CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18), AnswerCount (int64, 0 to 44), UserExpertiseLevel (int64, 301 to 888k), UserDisplayName (string, 3 to 30 chars)
78,577,545
3,466,818
How can I access a buffer or sequence in an unusual order?
<p>I have a block of memory representing RGB values for an 8 row x 32 column matrix. When writing into this block of memory, it would be convenient to treat it as properly ordered. When reading from the memory (and pushing into a peripheral), due to the way the electronics are wired, every other column is reversed.</p> <p>So the &quot;correct&quot; reading order due to the electronics should be something like:</p> <p>0 Red, 0 Green, 0 Blue, 1 Red, 1 Green, 1 Blue... 7 Red, 7 Green, 7 Blue, 15 Red, 15 Green, 15 Blue, 14 Red ... 8 Blue, 16 Red ... 23 Blue ... 31 -&gt; 24, 32 -&gt; 39, etc.</p> <p>(As a note, I can't just reverse the entire sequence of bytes 25-48, because then RGB would be in the wrong order).</p> <p>I'm trying to avoid duplicating the memory and I want as fast a read as possible - I'm hoping to avoid interpreted address translation.</p> <p>Is there any way to construct a sequence of memoryviews that can be addressed/sliced (read and write) like a single block? Or a clever sequence slicing that could reverse every other block of 24 bytes, and then every block of 3 bytes within that (to fix the RGB ordering)?</p> <p>I'd like to learn about such methods if they exist, but recognize that they still might not be the right tools for the job. In the absence of clever methods requested above (or even in the event they exist but there's something better), how else might I address/organize my data so I can access it (read/write) effectively for both programming and display?</p>
<python><list><slice><sequence><memoryview>
2024-06-04 20:14:30
1
706
Helpful
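As far as I know, a zero-copy memoryview cannot express this layout (per-block reversed strides aren't representable), so a common compromise is to keep writes in natural order and reorder once per read with NumPy fancy indexing. A sketch under the assumption that the panel streams whole columns of 8 pixels with every other column flipped (names are hypothetical):

```python
import numpy as np

ROWS, COLS = 8, 32

def to_wire_order(fb: np.ndarray) -> bytes:
    """Reorder a natural-order (ROWS, COLS, 3) framebuffer into serpentine
    wire order: columns stream one after another, with every other column
    flipped top-to-bottom. Whole 3-byte pixels are flipped, so the RGB
    order inside each pixel is preserved."""
    cols = fb.transpose(1, 0, 2)      # (COLS, ROWS, 3) view, column-major
    serp = cols.copy()
    serp[1::2] = cols[1::2, ::-1]     # reverse row order in odd columns
    return serp.tobytes()
```

This copies one frame's worth of bytes at read time rather than maintaining a duplicate buffer; for 8x32x3 bytes that is usually far cheaper than per-pixel address translation in interpreted code.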
78,577,362
10,727,283
Should a Protocol with @property change runtime behavior in Python?
<p>I thought python <code>Protocol</code> was only useful for type-hints, without any impact on the runtime behavior (except when using <code>@runtime_checkable</code> or default method implementation).</p> <p>But see this example:</p> <pre class="lang-py prettyprint-override"><code>from typing import Protocol class PortProto(Protocol): @property def port_id(self) -&gt; str: &quot;&quot;&quot;a read-only port id&quot;&quot;&quot; class MyPortA: port_id: str class MyPortB(PortProto): port_id: str my_port_a = MyPortA() my_port_a.port_id = &quot;some_id&quot; print(my_port_a.port_id) # prints &quot;some_id&quot; my_port_b = MyPortB() my_port_b.port_id = &quot;some_id&quot; # raises &quot;AttributeError: can't set attribute&quot; print(my_port_b.port_id) </code></pre> <p>Where line <code>my_port_b.port_id = &quot;some_id&quot;</code> raises <code>AttributeError: can't set attribute</code>.</p> <p>The only difference between <code>MyPortA</code> and <code>MyPortB</code> is the inheritance of the <code>Protocol</code>.<br /> Is it a bug in Python or the intended behavior?</p> <p>Yes, I know this line is a violation of the getter-only attribute defined in the Protocol, but this is a type-hint problem for tools like mypy, not something for the runtime.</p> <p>(Or maybe it's not even a violation of the type-hint, because a read-and-write attribute is a subtype of a read-only attribute).</p> <p>I expected to see no difference between classes that inherit a <code>Protocol</code> and classes that do not.</p> <p><em>Python version: 3.9.7</em></p>
<python><protocols><python-typing><mypy>
2024-06-04 19:27:02
1
1,004
Noam-N
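This is the intended behavior rather than a bug: explicitly subclassing a `Protocol` is ordinary inheritance, so the `property` object defined in the protocol body is inherited at runtime, and the class-level annotation `port_id: str` creates no attribute that would shadow it. A minimal sketch demonstrating the mechanism:

```python
from typing import Protocol

class PortProto(Protocol):
    @property
    def port_id(self) -> str:
        """a read-only port id"""

class MyPortB(PortProto):
    port_id: str      # only an annotation: it creates no class attribute

# The protocol body is a real class body, so port_id is a real property
# (getter only, no setter) that MyPortB inherits:
assert isinstance(vars(PortProto)["port_id"], property)

b = MyPortB()
try:
    b.port_id = "some_id"   # setattr hits the inherited read-only property
except AttributeError as exc:
    print(exc)              # "can't set attribute" (wording varies by version)
```

The usual remedies are to not inherit from the protocol at runtime (structural typing needs no inheritance, as `MyPortA` shows), or to declare the member in the protocol as a plain `port_id: str` annotation instead of a property.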
78,577,145
4,865,723
How to extract glm() results from R objects using Pythons rpy2?
<p>I use <em>Python</em> <code>rpy2</code> to run <code>glm()</code> (regression model) in <em>R</em>. My problem is to get a result back that I can handle. The best would be a nice data structure (dict, dataframe, ...) but I also would be satisfied with a human readable multi-line string.</p> <p>Using <code>glm()</code> directly in R produces output like the following, which is what I expect:</p> <pre><code>Call: glm(formula = C ~ B + A, family = poisson(), data = df, offset = D) Coefficients: Estimate Std. Error z value Pr(&gt;|z|) (Intercept) -2.391e+01 3.171e-03 -7540.871 &lt;2e-16 *** B -2.372e-04 1.331e-04 -1.782 0.0748 . A 1.285e-03 1.345e-04 9.552 &lt;2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 17500269 on 49999 degrees of freedom Residual deviance: 17500174 on 49997 degrees of freedom AIC: Inf Number of Fisher Scoring iterations: 16 </code></pre> <p>In the example code below I try two variants of extracting the result.</p> <p>In <em>Variant A</em> I use <code>capture.output()</code> on the R side to extract the console output as a string. But the result looks awkward (see doc string in code). In <em>Variant B</em> I use the <code>summary()</code> function via <code>rpy2</code>. 
Here I get a wired data structure I am not able to work with.</p> <pre><code>#!/usr/bin/env python3 import random import pandas as pd import rpy2 import rpy2.robjects as robjects from rpy2.robjects.packages import importr import rpy2.robjects.pandas2ri as pandas2ri pandas2ri.activate() random.seed(0) k = 10000 df = pd.DataFrame({ 'A': random.choices(range(100), k=k), 'B': random.choices([1, 2, 3], k=k), 'C': random.choices([0, 1], k=k), 'D': random.choices(range(20, 30), k=k), }) glm = robjects.r['glm'] reg = glm( formula='C ~ B', data=robjects.conversion.py2rpy(df), family=robjects.r['poisson'](), offset=df['D'] ) robjects.r('''get_result &lt;- function(reg) { result &lt;- paste(capture.output(reg), collapse=&quot;\n&quot;) return(result) } ''') get_result = robjects.globalenv['get_result'] result_variant_A = get_result(reg) &quot;&quot;&quot; [...SNIPPED...] L, `9948` = 23L, `9949` = 23L, `9950` = 22L, `9951` = 22L, \n`9952` = 29L, `9953` = 27L, `9954` = 26L, `9955` = 23L, `9956` = 28L, \n`9957` = 23L, `9958` = 22L, `9959` = 26L, `9960` = 21L, `9961` = 24L, \n`9962` = 28L, `9963` = 22L, `9964` = 24L, `9965` = 26L, `9966` = 29L, \n`9967` = 20L, `9968` = 25L, `9969` = 22L, `9970` = 20L, `9971` = 22L, \n`9972` = 28L, `9973` = 29L, `9974` = 24L, `9975` = 21L, `9976` = 28L, \n`9977` = 22L, `9978` = 27L, `9979` = 26L, `9980` = 23L, `9981` = 22L, \n`9982` = 22L, `9983` = 21L, `9984` = 22L, `9985` = 28L, `9986` = 23L, \n`9987` = 23L, `9988` = 21L, `9989` = 28L, `9990` = 28L, `9991` = 20L, \n`9992` = 20L, `9993` = 20L, `9994` = 24L, `9995` = 22L, `9996` = 24L, \n`9997` = 20L, `9998` = 20L, `9999` = 22L))\n\nCoefficients:\n(Intercept) B \n -27.79929 -0.01377 \n\nDegrees of Freedom: 9999 Total (i.e. 
Null); 9998 Residual\nNull Deviance:\t 33360 \nResidual Deviance: 33360 \tAIC: 43370'], dtype='&lt;U419704') &gt;&gt;&gt; type(result) &lt;class 'numpy.ndarray'&gt; &quot;&quot;&quot; result_variant_B = robjects.r['summary'](reg) &quot;&quot;&quot; &lt;rpy2.robjects.vectors.ListVector object at 0x000002C9B71F6A10&gt; [19] R classes: ('summary.glm',) [LangSexpV..., LangSexpV..., ListSexpV..., FloatSexp..., ..., FloatSexp..., IntSexpVe..., FloatSexp..., FloatSexp...] call: &lt;class 'rpy2.robjects.language.LangVector'&gt; Rlang( (function (formula, family = gaussian, data, weights, subset, ) terms: &lt;class 'rpy2.robjects.Formula'&gt; &lt;rpy2.robjects.Formula object at 0x000002C9B2630890&gt; [6] R classes: ('terms', 'formula') &lt;rpy2.robjects.vectors.ListVector object at 0x000002C9B71F6A10&gt; [19] R classes: ('summary.glm',) [LangSexpV..., LangSexpV..., ListSexpV..., FloatSexp..., ..., FloatSexp..., IntSexpVe..., FloatSexp..., FloatSexp...] deviance: &lt;class 'numpy.ndarray'&gt; array([33357.71065632]) ... contrasts: &lt;class 'numpy.ndarray'&gt; array([1.]) df.residual: &lt;class 'rpy2.robjects.vectors.IntVector'&gt; &lt;rpy2.robjects.vectors.IntVector object at 0x000002C9B2630890&gt; [13] R classes: ('integer',) [2, 9998, 2] null.deviance: &lt;class 'numpy.ndarray'&gt; array([[ 0.00138301, -0.00059514], [-0.00059514, 0.00029936]]) df.null: &lt;class 'numpy.ndarray'&gt; array([[ 0.00138301, -0.00059514], [-0.00059514, 0.00029936]]) &quot;&quot;&quot; </code></pre>
<python><rpy2>
2024-06-04 18:32:01
1
12,450
buhtz
78,577,010
13,721,819
How to suppress stdout within a specific python thread?
<p>I want to be able to suppress any print to stdout within a specific thread. Here is what I have tried:</p> <pre><code>import sys, io, time from threading import Thread def do_thread_action(): # Disable stdout sys.stdout = io.StringIO() print(&quot;don't print this 1&quot;) time.sleep(1) print(&quot;don't print this 2&quot;) time.sleep(1) print(&quot;don't print this 3&quot;) # Re-enable stdout sys.stdout = sys.__stdout__ thread = Thread(target=do_thread_action) thread.start() time.sleep(1.5) # Print this to stdout print('Print this') thread.join() </code></pre> <p>However this does not work because <code>sys.stdout</code> is global for both <code>thread</code> and the main thread.</p> <p>How do I suppress the prints inside <code>do_thread_action</code> within the thread, but not suppress the prints outside of it?</p>
<python><stdout><python-multithreading>
2024-06-04 17:56:39
1
612
Wilson
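One common workaround (a sketch, not from the question) is to install a single stdout proxy that forwards writes to the real stream unless the calling thread has registered itself as muted. `print()` still goes through `sys.stdout`, but the filtering decision happens per-thread at write time:

```python
import io
import sys
import threading

class ThreadFilteredStdout:
    """Forwards writes to the real stream unless the writing thread
    has registered itself as muted."""

    def __init__(self, real):
        self.real = real
        self.muted = set()                 # idents of silenced threads

    def write(self, text):
        if threading.get_ident() not in self.muted:
            self.real.write(text)

    def flush(self):
        self.real.flush()

    def mute(self):
        self.muted.add(threading.get_ident())

    def unmute(self):
        self.muted.discard(threading.get_ident())

proxy = ThreadFilteredStdout(sys.stdout)
sys.stdout = proxy          # install once; every thread's print() passes through

def do_thread_action():
    proxy.mute()            # silence only this thread
    try:
        print("don't print this")   # swallowed
    finally:
        proxy.unmute()

t = threading.Thread(target=do_thread_action)
t.start()
print("Print this")         # main thread is not muted, still visible
t.join()
sys.stdout = proxy.real     # restore
```

For code where each task already runs in its own call stack, `contextlib.redirect_stdout` plus a `contextvars.ContextVar` is an alternative, but the explicit thread-ident check above is the most direct fit for plain `threading.Thread` workers.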
78,576,942
11,402,025
Pydantic model : add case insensitive field
<p>I have a confirm field that accepts yes (case insensitive) input from the user. This is how I am implementing it:</p> <pre><code>class ConfirmEnum(str, Enum): yes = &quot;yes&quot; Yes = &quot;Yes&quot; YES = &quot;YES&quot; </code></pre> <pre><code>class OtherFields(CamelModel): data: int confirm: Optional[ConfirmEnum] ...other fields added here </code></pre> <pre><code>async def pet_classes( pet_service: PetService = Depends( PetService ), confirm: ConfirmEnum = Query(None, alias=&quot;confirm&quot;), response: Response = status.HTTP_200_OK, ): </code></pre> <p>I do not think this is the right way to do it; there must be a better way than just using an enum.</p>
<python><enums><model><fastapi><pydantic>
2024-06-04 17:37:41
1
1,712
Tanu
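Instead of enumerating every casing, one stdlib-only sketch is a single-member enum with a case-insensitive `_missing_` hook. Pydantic/FastAPI coerce by calling the enum (roughly `ConfirmEnum(value)`), which triggers `_missing_` whenever the exact value doesn't match, so mismatched casings should fold onto the one member (treat the framework-coercion detail as an assumption to verify against your Pydantic version):

```python
from enum import Enum

class ConfirmEnum(str, Enum):
    YES = "yes"

    @classmethod
    def _missing_(cls, value):
        # Called when the plain value lookup fails; fold the input to
        # lower case so "Yes", "YES", "yEs" all resolve to the same member.
        if isinstance(value, str):
            value = value.lower()
            for member in cls:
                if member.value == value:
                    return member
        return None     # anything else still raises ValueError
```

The field and query parameter declarations stay exactly as in the question; only the enum changes.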
78,576,803
5,568,409
Title is bold, but label is not. What to do?
<p>I made a small reproducible program, where you will see the <code>title</code> in bold font, but the <code>label</code> in normal font:</p> <pre><code>import matplotlib.pyplot as plt import seaborn as sns import numpy as np observed = np.array([28, 15, 21, 23, 17, 22, 19, 19, 24, 27, 20, 21, 25, 25, 16, 20, 23]) fig, ax = plt.subplots(figsize=(4, 2)) sns.histplot(ax = ax, x = list(observed), label = &quot;observed data&quot;) ax.set_title(&quot;Observed data&quot;, fontweight='bold') ax.legend() plt.show() </code></pre> <p><a href="https://i.sstatic.net/f0g8P16t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f0g8P16t.png" alt="enter image description here" /></a></p> <p>I tried to set the <code>label</code> in bold by putting the same code <code>fontweight='bold'</code> (after the <code>label =</code> term) <strong>inside</strong> the <code>sns.histplot</code> instruction, but this leads to an error:</p> <pre><code>sns.histplot(ax = ax, x = list(observed), label = &quot;observed data&quot;, fontweight='bold') </code></pre> <p>Okay, I admit that it wasn't a very smart attempt, but I don't know how to get the label in bold, in a simple way... Could someone help me to do so?</p>
<python><matplotlib><fonts>
2024-06-04 17:01:28
1
1,216
Andrew
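The `fontweight` keyword belongs to text-creating calls like `set_title`, not to `sns.histplot`; the legend owns its own text objects, so the font goes through the legend itself. A matplotlib-only sketch (seaborn builds its legend the same way, so `ax.legend(prop=...)` applies unchanged after the `histplot` call):

```python
import matplotlib
matplotlib.use("Agg")        # headless backend so this runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 2))
ax.hist([28, 15, 21, 23, 17, 22, 19, 19, 24], label="observed data")
ax.set_title("Observed data", fontweight="bold")

# Pass the font through `prop`; alternatively, call
# t.set_fontweight("bold") on each item of legend.get_texts() afterwards.
legend = ax.legend(prop={"weight": "bold"})
```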
78,576,526
237,225
PyTorch Startup High Memory Consumption Without Import
<p>I have a docker container running a Python (3.10) Flask app. The app had high baseline memory usage (1 GB+) when loading/idle. The app was formerly dependent on these 17 <code>requirements</code> dependencies. I removed two dependencies: <code>angle-emb</code> and <code>sentence-transformers</code>. <code>torch 2.3.0</code> is a transitive dependency of <code>angle-emb</code>.</p> <p>After removing only the dependencies, <strong>and changing zero lines of Python code</strong>, the memory usage immediately dropped by 63%. I want to know why.</p> <pre><code>angle-emb~=0.3.10 # Removed couchbase~=4.2.1 Flask~=3.0.3 Flask-Cors~=4.0.0 Flask-JWT-Extended~=4.6.0 google-cloud-aiplatform~=1.49.0 google-cloud-language~=2.13.3 langchain-community~=0.0.36 numpy~=1.26.4 pandas~=2.2.2 prometheus-flask-exporter~=0.23.0 python-dotenv~=1.0.1 requests~=2.31.0 scikit-learn~=1.4.2 sentence-transformers~=2.7.0 # Removed tiktoken~=0.6.0 vertexai~=1.49.0 </code></pre> <p>I have read the PyTorch docs and found <a href="https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html" rel="nofollow noreferrer">reference</a> to a thread called <code>pt_main_thread</code>, and <a href="https://stackoverflow.com/questions/62327089/high-cpu-consumption-pytorch">somewhat related StackOverflow posts</a>. All of them are explicitly <code>import</code>ing and invoking PyTorch for training. I had <strong>zero code</strong> that used PyTorch/<a href="https://github.com/SeanLee97/AnglE" rel="nofollow noreferrer">AnglE</a>/<a href="https://github.com/UKPLab/sentence-transformers/tree/master" rel="nofollow noreferrer">Sentence-Transformers</a>.</p> <p>What is the mechanism that is invoking something in PyTorch at Flask startup time, OR, how can I investigate more effectively?</p>
<python><python-3.x><memory><pytorch>
2024-06-04 15:58:08
0
3,719
JJ Zabkar
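A plausible mechanism (an assumption, not verifiable from the question alone) is a transitive import at module load: some remaining dependency probes for optional backends and imports `torch` if it is installed, and merely importing torch allocates its thread pools and allocator state. One way to investigate with only the stdlib is to import your entry module in a fresh process and see whether `torch` lands in `sys.modules`; the stand-in below demonstrates the check with `json`/`json.decoder`:

```python
import importlib
import sys

def pulls_in(entry: str, target: str) -> bool:
    """Import `entry` and report whether `target` (or any of its
    submodules) ended up in sys.modules as a side effect."""
    importlib.import_module(entry)
    return target in sys.modules or any(
        name.startswith(target + ".") for name in sys.modules
    )

# Stand-in demo: importing json drags json.decoder along. Run the same
# check in a fresh interpreter with your app module and "torch" to see
# which of the remaining 15 dependencies still imports it at startup.
print(pulls_in("json", "json.decoder"))
```

Bisecting with `python -X importtime -c "import yourapp"` is a complementary technique: it prints every module imported at startup with timings, so a stray `torch` import shows up immediately.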
78,576,521
3,124,181
How to run commands from Python console with the same config as Pycharm Terminal
<p>Is there a quick/practical way to run commands from the Pycharm Python console with exactly the same configurations as Pycharm Terminal?</p> <p>I.e. I am able to run certain software like <code>wget</code> from Pycharm's terminal but not from the python console <code>os.system(&quot;wget&quot;)</code> which gives me a &quot;<code>wget</code> is not recognized...&quot;</p> <p>I know there are other ways to achieve the same thing as <code>wget</code> but I'm only interested in knowing if there is a quick/practical way to configure it so that I get the same experience running system commands from python console as from the terminal.</p> <p>It's fine if it's not done through the <code>os</code> package specifically, but I need to get to a point where I can execute terminal commands exactly the same from terminal &amp; python console.</p>
<python><terminal><pycharm>
2024-06-04 15:57:16
0
903
user3124181
78,576,403
8,964,393
Open html file and insert a plot in Python
<p>I have an html file called <code>output.html</code> (see the html code at the bottom of this message).</p> <p>The html file contains the following table.</p> <p><a href="https://i.sstatic.net/zZNBto5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zZNBto5n.png" alt="enter image description here" /></a></p> <p>I also have a file called <code>foo.png</code> that I need to:</p> <ol> <li>open in Python</li> <li>insert below the table in the <code>output.html</code> file</li> </ol> <p>Here is the graph contained in the <code>foo.png</code> file.</p> <p><a href="https://i.sstatic.net/nS42TdgP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nS42TdgP.png" alt="enter image description here" /></a></p> <p>So, the final html file (called <code>finalOutput.html</code>)would look like this:</p> <p><a href="https://i.sstatic.net/3rsSXqlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3rsSXqlD.png" alt="enter image description here" /></a></p> <p>Is there a way to create the <code>finalOutput.html</code> in Python?</p> <p>Here is the html code of the <code>output.html</code> file.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>Univariate Analysis&lt;table border="1" class="dataframe"&gt; &lt;thead&gt; &lt;tr style="text-align: right;"&gt; &lt;th&gt;&lt;/th&gt; &lt;th&gt;Feature&lt;/th&gt; &lt;th&gt;Value&lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; &lt;tr&gt; &lt;th&gt;0&lt;/th&gt; &lt;td&gt;Feature Name&lt;/td&gt; &lt;td&gt;AgeM&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;1&lt;/th&gt; &lt;td&gt;Feature Type&lt;/td&gt; &lt;td&gt;int64&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;2&lt;/th&gt; &lt;td&gt;Number of Records&lt;/td&gt; &lt;td&gt;177607&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;3&lt;/th&gt; &lt;td&gt;Number of Zeros&lt;/td&gt; &lt;td&gt;0&lt;/td&gt; &lt;/tr&gt; 
&lt;tr&gt; &lt;th&gt;4&lt;/th&gt; &lt;td&gt;% Zeros&lt;/td&gt; &lt;td&gt;0.0&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;5&lt;/th&gt; &lt;td&gt;Number of Missing Records&lt;/td&gt; &lt;td&gt;0&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;6&lt;/th&gt; &lt;td&gt;% Missing Records&lt;/td&gt; &lt;td&gt;0.0&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;7&lt;/th&gt; &lt;td&gt;Minimum&lt;/td&gt; &lt;td&gt;18&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;8&lt;/th&gt; &lt;td&gt;Mean&lt;/td&gt; &lt;td&gt;48.44&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;9&lt;/th&gt; &lt;td&gt;Median&lt;/td&gt; &lt;td&gt;49.0&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;10&lt;/th&gt; &lt;td&gt;Maximum&lt;/td&gt; &lt;td&gt;111&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;11&lt;/th&gt; &lt;td&gt;Skewness&lt;/td&gt; &lt;td&gt;0.03&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;12&lt;/th&gt; &lt;td&gt;Kurtosis&lt;/td&gt; &lt;td&gt;-0.95&lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt;</code></pre> </div> </div> </p>
<python><html><file><html-table><format>
2024-06-04 15:32:46
0
1,762
Giampaolo Levorato
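One sketch (function names are mine): read the existing HTML as text, append an `<img>` tag below the table, and write the result out. Embedding the PNG as a base64 data URI makes `finalOutput.html` self-contained, so it renders even if the PNG is later moved:

```python
import base64
from pathlib import Path

def append_plot(html_in: str, png_in: str, html_out: str) -> None:
    """Append png_in below the existing content of html_in, embedding the
    image bytes as a base64 data URI, and write the result to html_out."""
    html = Path(html_in).read_text(encoding="utf-8")
    b64 = base64.b64encode(Path(png_in).read_bytes()).decode("ascii")
    html += f'\n<br/><img src="data:image/png;base64,{b64}" alt="plot"/>\n'
    Path(html_out).write_text(html, encoding="utf-8")

# append_plot("output.html", "foo.png", "finalOutput.html")
```

If self-containment doesn't matter, appending `<img src="foo.png"/>` works too, as long as the PNG ships next to the HTML file.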
78,576,317
6,930,340
Python hypothesis dataframe assume column not exclusively consisting of NaN values
<p>I am producing a <code>pd.DataFrame</code> using the <code>hypothesis</code> library like so:</p> <pre><code>import datetime from hypothesis import strategies as st from hypothesis.extra.pandas import columns as cols from hypothesis.extra.pandas import data_frames, indexes data_frames( columns=cols( [&quot;sec1&quot;, &quot;sec2&quot;, &quot;sec3&quot;], elements=st.floats(allow_infinity=False) ), index=indexes(elements=st.dates( min_value=datetime.date(2023,10,31), max_value=datetime.date(2024,5,31)) ), ).example() sec1 sec2 sec3 2024-01-05 -3.333333e-01 NaN NaN 2024-05-20 -9.007199e+15 NaN -2.000010e+00 2024-02-28 -1.175494e-38 NaN 1.500000e+00 2024-01-24 -1.100000e+00 NaN 1.100000e+00 2023-11-19 -1.175494e-38 NaN -2.000010e+00 2024-05-28 -1.000000e-05 NaN 2.541486e+16 2024-01-31 -1.797693e+308 NaN NaN 2024-05-03 4.940656e-324 NaN -6.647158e+16 </code></pre> <p>I need to make sure that an individual column doesn't exclusively consist of <code>NaN</code> values.</p> <p>Also, I want to avoid creating an empty <code>pd.DataFrame</code>.</p>
<python><pandas><python-hypothesis>
2024-06-04 15:19:15
1
5,167
Andi
78,576,288
5,558,497
expand based on a dictionary
<p>Considering I have the following Python dictionary:</p> <pre><code>d = {'A': ['a', 'b', 'c'], 'B': ['d', 'e', 'f']} </code></pre> <p>Considering the keys in <code>d</code>, I would like to produce a plot at a path with the pattern <code>f&quot;{key}/plots/{d[key]}.pdf&quot;</code>, for every element in <code>d[key]</code>.</p> <p>In other words, considering <code>d</code> above, the output files that I would like to have are:</p> <pre><code>f&quot;{A}/plots/{a}.pdf&quot; f&quot;{A}/plots/{b}.pdf&quot; f&quot;{A}/plots/{c}.pdf&quot; f&quot;{B}/plots/{d}.pdf&quot; f&quot;{B}/plots/{e}.pdf&quot; f&quot;{B}/plots/{f}.pdf&quot; </code></pre> <p>How can I define a <code>rule all</code> that selectively expands the input based on a dictionary?</p>
<python><snakemake>
2024-06-04 15:12:58
1
2,249
BCArg
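Snakemake's `expand()` takes the cross product of its wildcards, which would pair every key with every value. Since `rule all` accepts any Python list as input, one sketch is to build the target list with a plain comprehension that pairs each key only with its own values:

```python
d = {"A": ["a", "b", "c"], "B": ["d", "e", "f"]}

# Pair each key with its own values only (expand() would cross-product them):
targets = [f"{key}/plots/{name}.pdf" for key, names in d.items() for name in names]

# In the Snakefile this list is simply the input of `rule all`:
#   rule all:
#       input: targets
```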
78,576,233
1,417,735
How do you get 'additional' combinations when adding items to a list that I have already gotten all combinations for?
<p>Using Python I would like to calculate all combinations of 3 from a list.</p> <p>For example, list = [a,b,c,d] and combinations would be - [a,b,c], [a,b,d], [a,c,d], [b,c,d].</p> <p>And then I would like to add some items to the original list and get only the additional combinations of 3.</p> <p>For example, adding items [e,f] would generate new combinations - [a,b,e], [a,b,f], [a,c,e], [a,c,f], [a,d,e], [a,d,f], [a,e,f], [b,c,e], [b,c,f],...</p> <p>The lists will be large so we need to avoid generating the combinations twice and then filtering in order to get the 'additional combinations'.</p> <p>Background: I use itertools.combinations to get all combinations (of 3) for a list of about 100 items right now. That generates a lot of combinations and I'm doing a bunch of calculations and whatnot based on those combinations, looking for patterns and matches and stuff. I get through all that processing and if I don't have a 'successful' combination of 3 then I generate more candidates for the list (which in itself takes a long time). When I add the additional candidates to the list (usually like 10 or so), I then restart the analysis on the combinations which seems wasteful, so I would like to only be checking the 'additional' combinations.</p>
<python><combinations>
2024-06-04 15:01:42
1
1,287
shawn.mek
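The additional combinations are exactly those that contain at least one new item, and they can be generated directly without producing or filtering the old ones: for each count k of new items (1 to 3), combine every k-subset of the new items with every (3-k)-subset of the old items. A sketch (function name is mine):

```python
from itertools import combinations

def added_combinations(old_items, new_items, r=3):
    """Yield exactly the r-combinations that contain at least one new
    item. Combinations drawn purely from old_items are never produced,
    so nothing is generated twice and no filtering is needed."""
    for k in range(1, r + 1):                      # k new items, r - k old ones
        for new_part in combinations(new_items, k):
            for old_part in combinations(old_items, r - k):
                yield old_part + new_part
```

For 100 old items and 10 new ones this emits C(10,1)*C(100,2) + C(10,2)*C(100,1) + C(10,3) = 54,570 tuples instead of regenerating all C(110,3) = 215,820.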
78,576,088
10,633,596
How to create Pandas data frame with dynamic values within a for loop
<p>I'm new to data engineering, using Python, PySpark and Pandas to create a data frame, and I've been blocked for a very long time and could not get my head around it. It is a simple problem but I'm stuck here.</p> <p>Here is my code snippet which works fine (assuming there is only 1 primary key) without any loop iteration and generates the dataframe in the end.</p> <pre><code> primary_keys.append(primary_key) primary_keys = [primary_key] df = pd.DataFrame({'primary_key': primary_keys, mapped_column[0]: value}) # Define the schema for the Spark DataFrame schema = T.StructType([ T.StructField(&quot;primary_key&quot;, T.IntegerType(), True), # Integer primary key T.StructField(mapped_column[0], T.StringType(), True) # String column ]) self.logger.info(&quot;$$$$$$$$$$$$$$$ Creating DF $$$$$$$$$$$$$$$$$$$$$&quot;) # Create the Spark DataFrame from the Pandas DataFrame spark_df = spark.createDataFrame(df, schema) # (Optional) Verify the Spark DataFrame spark_df.printSchema() spark_df.show() </code></pre> <p>However, my requirement is to fill <code>primary_keys</code> from a loop where every primary key is generated with a dynamic value in each iteration. 
I tried to do the following but it keeps only the last object and I ended up getting the same values (values for primary key and mapped_column) in the final data frame generated.</p> <pre><code> primary_keys = [] # Iterate over the list and access values directly for row in column_values: ### some logic to generate the primary key from the loop iteration primary_keys.append(primary_key) df = pd.DataFrame({'primary_key': primary_keys, mapped_column[0]: value}) self.logger.info(&quot;$$$$$$$$$$$$$$$ For Loop Completed $$$$$$$$$$$$$$$$$$$$$&quot;) # Define the schema for the Spark DataFrame schema = T.StructType([ T.StructField(&quot;primary_key&quot;, T.IntegerType(), True), # Integer primary key T.StructField(mapped_column[0], T.StringType(), True) # String column ]) self.logger.info(&quot;$$$$$$$$$$$$$$$ Creating DF $$$$$$$$$$$$$$$$$$$$$&quot;) # Create the Spark DataFrame from the Pandas DataFrame spark_df = spark.createDataFrame(df, schema) # (Optional) Verify the Spark DataFrame spark_df.printSchema() spark_df.show() </code></pre> <p>I suspect the problem is here in the <code>df</code> object in this statement <code>df = pd.DataFrame({'primary_key': primary_keys, mapped_column[0]: value})</code> as it is replacing it rather than appending to the existing one.</p> <p>I would really appreciate it if someone could assist me here, thank you.</p>
<python><pandas><pyspark>
2024-06-04 14:37:11
1
1,574
vinod827
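The suspicion is right: the `pd.DataFrame(...)` inside the loop rebuilds the frame each pass, and `value` is never collected per iteration, so only the last one survives. The usual pattern is to accumulate one record per iteration and construct the frame once after the loop. A sketch with stand-in values for the key-generation logic:

```python
import pandas as pd

column_values = ["a", "b", "c"]            # stand-in for the real source list
rows = []
for i, value in enumerate(column_values):
    primary_key = 100 + i                  # stand-in for the real key logic
    # Collect one record per iteration instead of rebuilding the frame:
    rows.append({"primary_key": primary_key, "mapped_column": value})

df = pd.DataFrame(rows)                    # built once, all rows present
```

`spark.createDataFrame(df, schema)` then receives all the rows; the Spark part of the snippet needs no change.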
78,575,906
6,195,489
How to use a dask cluster as a scheduler for dask.compute
<p>I have a class that has something like the following context manager to create a dask client &amp; cluster:</p> <pre><code>class some_class(): def __init__(self,engine_kwargs: dict = None): self.distributed = engine_kwargs.get(&quot;distributed&quot;, False) self.dask_client = None self.n_workers = engine_kwargs.get( &quot;n_workers&quot;, int(os.getenv(&quot;SLURM_CPUS_PER_TASK&quot;, os.cpu_count())) ) @contextmanager def dask_context(self): &quot;&quot;&quot;Dask context manager to set up and close down client&quot;&quot;&quot; if self.distributed: if self.distributed_mode == &quot;processes&quot;: processes = True self.dask_cluster = LocalCluster(n_workers=self.n_workers, processes=processes) self.dask_client = Client(self.dask_cluster) try: yield finally: if self.dask_client is not None: self.dask_client.close() self.dask_cluster.close() </code></pre> <p>In the class I have a method that uses delayed, with the intention of distributing the work across the cluster.</p> <pre><code>def some_class_method( self, ): min_ind = segy_container.trace_headers[&quot;SOME_GROUPER&quot;].values.min() max_ind = segy_container.trace_headers[&quot;SOME_GROUPER&quot;].values.max() tasks = [ delayed(self._process_group)(index,some,other,method,arguments,here ) for index in range(min_ind, max_ind + 1) ] with ProgressBar(): with self.dask_context(): results = compute(*tasks, scheduler=self.dask_client) #scheduler=&quot;processes&quot;) </code></pre> <p>In the last line if I try to use the dask_client which is set up by the context manager I get the following error:</p> <pre><code>TypeError: cannot pickle '_asyncio.Task' object </code></pre> <p>If I get rid of the context manager, and use <code>scheduler=&quot;processes&quot;</code> it works fine.</p> <p>I assume <code>scheduler=&quot;processes&quot;</code> doesn't serialise the tasks, so it proceeds OK, but when trying to use the local cluster it does and throws the error.</p> <p>Is it possible to use the local cluster with delayed and compute in some way, or is there another approach to solving the problem?</p>
<python><dask><dask-delayed>
2024-06-04 14:02:21
1
849
abinitio
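One plausible reading (an assumption, not verifiable from the snippet): `delayed(self._process_group)` captures `self` as part of the bound method, and once the context manager has stored a live `Client`/`LocalCluster` on `self`, serialising the task graph drags those asyncio-backed objects along, hence the pickling error. The usual fix is to make the delayed target a module-level function (or `staticmethod`) that doesn't reference `self`; inside the `dask_context` block a plain `compute(*tasks)` then uses the client, which registers itself as the default scheduler on creation. A stdlib stand-in for the capture problem:

```python
import os
import pickle

class Engine:
    """Stand-in for the class above: holds an unpicklable handle the
    way self.dask_client holds a live client."""

    def __init__(self):
        self.handle = open(os.devnull)    # unpicklable, like a Client

    def _process_group(self, index):
        return index * 2

def process_group(index):
    """Module-level function: pickles by reference, drags no state."""
    return index * 2

engine = Engine()

# A bound method serialises its whole instance, handle and all; this is
# the shape of the "cannot pickle" error the distributed scheduler hits:
try:
    pickle.dumps(engine._process_group)
except TypeError as exc:
    print("bound method:", exc)

pickle.dumps(process_group)               # fine: no captured state
engine.handle.close()
```

This also explains why `scheduler="processes"` worked once the context manager was removed: at that point `self.dask_client` was still `None`, so the captured instance pickled cleanly.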
78,575,791
14,282,714
Submit results to fill a dataframe in streamlit
<p>I would like to build a streamlit app where the user gives some inputs and every time they use the <code>form_submit_button</code> button, it fills a dataframe row by row to collect user data. Currently I have the following app:</p> <pre><code>import streamlit as st import pandas as pd import numpy as np # page st.sidebar.header(&quot;Submit results&quot;) with st.form('Form1'): option = st.selectbox( &quot;Select the track you played:&quot;, (&quot;ds Mario Kart&quot;, &quot;Toad Harbour&quot;, &quot;Koopa Cape&quot;)) number = st.slider(&quot;Pick a number&quot;, 1, 12) submitted1 = st.form_submit_button('Submit 1') df = pd.DataFrame(np.array([[option, number]]), columns = [&quot;Track&quot;, &quot;Result&quot;]) dfs = [] dfs.append(df) df_combined = pd.concat(dfs) st.dataframe(df_combined) </code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/cWJri9gY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWJri9gY.png" alt="enter image description here" /></a></p> <p>As you can see it creates the dropdown and slider. These both return a value. When we change the value and hit the submit button, the value does change in the dataframe. Unfortunately, the dataframe remains only one row. So I was wondering how we can fill the dataframe row by row every time we hit the submit button?</p>
<python><streamlit>
2024-06-04 13:41:41
1
42,724
Quinten
78,575,696
293,420
Firebase admin authentication works in interpreter but fails in production
<p>I'm interfacing with the resources of a firebase project.</p> <p>From a VM in GCP I am using Python to get information from the database and files from the bucket. The VM has gcloud installed and the VM is authenticated using the firebase service account which should have all privileges for the firebase app.</p> <p>The VM is a plain generic E2-medium, default networks, default settings.</p> <p>When I ssh to the VM I use</p> <p><code>gcloud auth activate-service-account [ACCOUNT] --key-file=KEY_FILE </code></p> <p>so that the machine has credentials. These credentials are picked up by any SDK that is to be used. This allows me to read any GCP API using any programming language. I use Python. Python can be run from a file, or one can open an interpreter and type code.</p> <p>My main problem is that when I use the interpreter everything works, but when I call a function within a server (django/celery/redis) I get an error related to permissions. The code is the following. The exact code works in the interpreter but not in the service.</p> <p>code:</p> <pre><code>import firebase_admin from firebase_admin import auth, credentials, firestore from google.cloud import storage PROJECT_ID=&quot;myprojectid&quot; BUCKET_NAME=&quot;mybucket&quot; cred = credentials.ApplicationDefault() firebase_admin.initialize_app(cred, {'projectId': PROJECT_ID}) storage_client = storage.Client() db = firestore.client() bucket = storage_client.get_bucket(BUCKET_NAME) docs = list( db.collection(u'mycollection') .where(....) .stream() ) </code></pre> <p>As soon as I run this as part of a server in the VM I get the error</p> <pre><code>[reason: &quot;ACCESS_TOKEN_SCOPE_INSUFFICIENT&quot; domain: &quot;googleapis.com&quot; metadata { key: &quot;service&quot; value: &quot;firestore.googleapis.com&quot; } metadata { key: &quot;method&quot; value: &quot;google.firestore.v1.Firestore.RunQuery&quot; } </code></pre> <p>However, if I open an interpreter and type the exact same code it works perfectly.</p> <p>I know I can change the scope in the VM settings, but then why does it work in the Python interpreter? I can even upload files to the bucket.</p>
<python><firebase><firebase-authentication><gcloud><firebase-admin>
2024-06-04 13:23:54
0
3,654
lesolorzanov
78,575,692
13,946,204
How to use structlog with logfmt formatted logs in python?
<p>I want to print out root logs in <code>logfmt</code> format. <code>structlog</code> also should print logs in the same format. I tried <a href="https://www.structlog.org/en/stable/standard-library.html#rendering-using-logging-based-formatters" rel="nofollow noreferrer">this example</a> for JSON format and it looks like it doesn't work with <code>logfmt</code>.</p> <p>Here is a first try:</p> <pre class="lang-py prettyprint-override"><code>import logging import sys import logfmter import structlog handler = logging.StreamHandler(sys.stdout) handler.setFormatter(logfmter.Logfmter()) root_logger = logging.getLogger() root_logger.addHandler(handler) structlog.get_logger(&quot;test&quot;).warning(&quot;hello&quot;) logging.getLogger(&quot;test&quot;).warning(&quot;hello&quot;) </code></pre> <p>It prints:</p> <pre><code>2024-06-04 22:11:01 [warning ] hello at=WARNING msg=hello </code></pre> <p>Here the <code>structlog</code> line is definitely not <code>logfmt</code> formatted.</p> <p>I tried to add the <code>logfmt</code> renderer then:</p> <pre class="lang-py prettyprint-override"><code>import logging import sys import logfmter import structlog handler = logging.StreamHandler(sys.stdout) handler.setFormatter(logfmter.Logfmter()) root_logger = logging.getLogger() root_logger.addHandler(handler) structlog.configure( processors=[ structlog.stdlib.filter_by_level, structlog.stdlib.add_logger_name, structlog.stdlib.add_log_level, structlog.stdlib.PositionalArgumentsFormatter(), structlog.processors.TimeStamper(fmt='iso'), structlog.processors.StackInfoRenderer(), structlog.processors.format_exc_info, # ADD LOGFMT RENDER structlog.processors.LogfmtRenderer(), ], logger_factory=structlog.stdlib.LoggerFactory(), wrapper_class=structlog.stdlib.BoundLogger, cache_logger_on_first_use=True, ) structlog.get_logger(&quot;test&quot;).warning(&quot;hello&quot;) logging.getLogger(&quot;test&quot;).warning(&quot;hello&quot;) </code></pre> <p>And it fails again:</p> <pre><code>at=WARNING 
msg=&quot;event=hello logger=test level=warning timestamp=2024-06-04T13:12:54.637931Z&quot; at=WARNING msg=hello </code></pre> <p>Obviously correct log line should be like:</p> <pre><code>at=WARNING msg=hello logger=test level=warning timestamp=2024-06-04T13:12:54.637931Z </code></pre> <p>Is it possible to format both <code>structlog</code> and <code>logging</code> as <code>logfmt</code>?</p>
<python><logging><structlog>
2024-06-04 13:23:10
1
9,834
rzlvmp
78,575,645
25,413,271
Python re, extract path from text
<p>I receive a text like:</p> <pre><code>`D:\Programming\sit\bin\MyLab.json` </code></pre> <p>It may contain different kinds of quotes or may not contain them. Quotes, if present, are placed strictly at the beginning and at the end of the text, wrapping the path. But the text definitely contains an absolute Windows path of a file. This file may also be absent. I am struggling to write an algorithm extracting the path.</p> <p>I have tried a regex like:</p> <pre><code>re.findall(r'[a-zA-Z]:\\((?:[a-zA-Z0-9() ]*\\)*).*', a) </code></pre> <p>but I receive:</p> <pre><code>['Programming\\sit\\bin\\'] </code></pre> <p>But I expect to get a string with the path, like:</p> <pre><code>D:\Programming\sit\bin\MyLab.json </code></pre>
<python><python-re>
2024-06-04 13:17:25
3
439
IzaeDA
78,575,463
17,672,187
Fitting outer edge of a rough rectangle with rounded corners
<p><a href="https://i.sstatic.net/8Wa66uTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Wa66uTK.png" alt="enter image description here" /></a></p> <p>I have detected edges of an image with OpenCV (shown with green points) as</p> <pre><code> edges = cv2.Canny(gray, canny_0, canny_1) kernel = np.ones((int(kernel_size), int(kernel_size)), np.uint8) edges = cv2.dilate(edges, kernel, iterations=1) </code></pre> <p>Now I need to find the quadrilateral that tightly fits this contour (all the points should lie within the quadrilateral) - shown with red lines on the diagram. I am open to using OpenCV, scikit-image, or any other suitable Python library.</p>
<python><opencv><image-processing><scikit-image>
2024-06-04 12:40:39
1
691
Loma Harshana
78,575,343
2,749,397
How to set a distinct label for each plotted curve?
<p><a href="https://i.sstatic.net/lUcCFk9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lUcCFk9F.png" alt="enter image description here" /></a></p> <pre><code>In [9]: %matplotlib qt ...: from sympy import * ...: ...: x = symbols('x') ...: plot(4-x**3/2, x*x, (x, 0, 2), legend=1) </code></pre> <hr /> <p>As you can see, I can plot two curves and Sympy takes care of labelling them, but:</p> <p>I want to label the first curve &quot;cubic&quot; and the second one &quot;quadratic&quot;, but I don't know how to specify two labels. I tried with <code>label=['c', 'q']</code> but Sympy displayed <code>&quot;['c', 'q']&quot;</code> twice…</p> <p>How can I set a distinct label for each curve?</p>
<python><plot><sympy>
2024-06-04 12:12:55
0
25,436
gboffi
78,575,313
6,929,467
Need Help Reducing OpenAI API [Python] Costs: Here's My Code
<p>I then call the <code>get_category_and_subcategory</code> function about 500 times. It cost me about $60, which is too much for this type of job. Am I doing something wrong, and what could be improved?</p> <p>It seems my problem is that <code>category_and_subcategory</code> is huge; the context tokens it adds seem to be causing the biggest cost. I don't know how to reduce this cost.</p> <pre><code>def get_chatgpt_response(prompt): response = client.chat.completions.create( model=&quot;gpt-4o&quot;, messages=[ {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;You are a helpful assistant.&quot;}, {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: prompt} ], temperature=0.1, # Adjust the temperature for creativity top_p=0.1, response_format={&quot;type&quot;: &quot;json_object&quot;} ) return response.choices[0].message.content def get_category_and_subcategory(title, description, prompt, category_and_subcategory): product = {'title': title, 'description': description} response = get_chatgpt_response(prompt.format(json.dumps(product, ensure_ascii=False), json.dumps(category_and_subcategory, ensure_ascii=False))) response = json.loads(response) category, subcategory = response['category'], response['subcategory'] return category.strip(), subcategory.strip() </code></pre> <p>Here is my prompt:</p> <pre><code>prompt = &quot;&quot;&quot; You are given product details and a dictionary of possible categories and subcategories. Your task is to categorize the product into the most appropriate category and subcategory based on these details. Product: {} Choose the exact names of the Categories and Subcategories from the provided options: {} Ensure your response uses the exact names from the options provided. Format your response as JSON: `{{&quot;category&quot;: &quot;category&quot;, &quot;subcategory&quot;: &quot;subcategory&quot;}}` and nothing else. Do not deviate from the provided options.
&quot;&quot;&quot; title = &quot;Sample Product&quot; description = &quot;This is a sample product description.&quot; category_and_subcategory = { &quot;Electronics&quot;: [&quot;Mobile Phones&quot;, &quot;Laptops&quot;], &quot;Home Appliances&quot;: [&quot;Refrigerators&quot;, &quot;Washing Machines&quot;] } category, subcategory = get_category_and_subcategory(title, description, prompt, category_and_subcategory) print(f&quot;Category: {category}, Subcategory: {subcategory}&quot;) </code></pre>
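Since every call in the code above resends the full category dictionary, one common way to cut context-token cost is to classify products in batches, so the dictionary is serialized once per batch instead of once per product. A stdlib-only sketch of the batching side (the helper name, batch prompt wording, and response shape are my own assumptions, not part of the OpenAI API):

```python
import json

# Hypothetical batch prompt: one category dict serves many products
BATCH_PROMPT = (
    "You are given a list of products and a dictionary of possible categories "
    "and subcategories. For EACH product, pick the best category and subcategory.\n"
    "Products: {}\nOptions: {}\n"
    'Respond as JSON: {{"results": [{{"category": "...", "subcategory": "..."}}, ...]}}'
)

def build_batch_prompts(products, categories, batch_size=20):
    """Group products so the (large) category dict appears once per batch
    of `batch_size` products instead of once per product."""
    prompts = []
    for i in range(0, len(products), batch_size):
        batch = products[i:i + batch_size]
        prompts.append(BATCH_PROMPT.format(
            json.dumps(batch, ensure_ascii=False),
            json.dumps(categories, ensure_ascii=False),
        ))
    return prompts
```

With 500 products and a batch size of 20, the category dictionary is transmitted 25 times rather than 500.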
<python><openai-api><chatgpt-api><cost-management>
2024-06-04 12:08:55
1
2,720
innicoder
78,575,261
896,451
What's the significance of `SyntaxError: 'break' outside loop` v. `SyntaxError: 'continue' not properly in loop`?
<p>I accept that, due to Python language limitation, <code>continue</code> must be within a loop &quot;properly&quot; i.e. <a href="https://docs.python.org/3/reference/simple_stmts.html#the-continue-statement" rel="nofollow noreferrer">&quot;not nested in a function or class definition within that loop.&quot;</a></p> <pre><code>while True: def f(): continue # SyntaxError: 'continue' not properly in loop f() </code></pre> <p>What's the significance of the difference at <code>break</code>, where the (arguably false) complaint is that it is &quot;outside&quot; loop?</p> <pre><code>while True: def f(): break # SyntaxError: 'break' outside loop f() </code></pre> <p>I see no relevant difference in <a href="https://docs.python.org/3/reference/simple_stmts.html#the-break-statement%3E" rel="nofollow noreferrer">the language reference</a>.</p>
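Both snippets above are rejected at compile time, before any code runs; the difference is only in which diagnostic the compiler emits, since a function body is compiled as its own scope with no enclosing loop. A quick demonstration with `compile()`:

```python
# The def body is compiled as its own code object, so the compiler sees
# break/continue with no loop in the *function's* scope, regardless of
# the while loop surrounding the def.
snippets = {
    "break": "while True:\n    def f():\n        break\n",
    "continue": "while True:\n    def f():\n        continue\n",
}
for name, src in snippets.items():
    try:
        compile(src, "<demo>", "exec")
    except SyntaxError as exc:
        print(name, "->", exc.msg)  # exact wording varies by Python version
```

The fact that `compile()` alone raises (no execution needed) shows the check is purely syntactic in both cases.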
<python>
2024-06-04 11:56:28
2
2,312
ChrisJJ
78,575,177
10,396,491
Reading PETSc binary matrix in Python
<p>I am trying to read a sparse PETSc matrix saved from a Fortran code into a binary file like so:</p> <pre><code>CALL PetscViewerBinaryOpen(PETSC_COMM_SELF, TRIM(ADJUSTL(filename)), FILE_MODE_WRITE, viewer2, ier) CALL MatView(aa_symmetric, viewer2, ier) CALL PetscViewerDestroy(viewer2, ier) </code></pre> <p>When I run the Fortran code single-threaded, all works fine and I can read the matrix without any issues using the following Python code:</p> <pre><code>import petsc4py from petsc4py import PETSc # Initialize PETSc petsc4py.init(sys.argv) # Create a viewer for reading the binary file viewer = PETSc.Viewer().createBinary(filename, mode='r', comm=PETSc.COMM_WORLD) # Create a matrix and load data from the binary file A_petsc = PETSc.Mat().create(comm=PETSc.COMM_WORLD) A_petsc.setType(PETSc.Mat.Type.AIJ) A_petsc.setFromOptions() A_petsc.load(viewer) # Finalize PETSc PETSc.Finalize() </code></pre> <p>However, when I run the Fortran code on more processors (&quot;mpirun -n 2 my_exe&quot;), I get the following error on the Python side:</p> <pre><code> Traceback (most recent call last): File &quot;/home/Python/test_matrixImport_binary.py&quot;, line 80, in &lt;module&gt; A_petsc.load(viewer) File &quot;petsc4py/PETSc/Mat.pyx&quot;, line 2025, in petsc4py.PETSc.Mat.load petsc4py.PETSc.Error: error code 79 [0] MatLoad() at /home/lib/petsc-3.21.0/src/mat/interface/matrix.c:1344 [0] MatLoad_SeqAIJ() at /home/lib/petsc-3.21.0/src/mat/impls/aij/seq/aij.c:5091 [0] MatLoad_SeqAIJ_Binary() at /home/lib/petsc-3.21.0/src/mat/impls/aij/seq/aij.c:5142 [0] Unexpected data in file [0] Inconsistent matrix data in file: nonzeros = 460, sum-row-lengths = 761 </code></pre> <p>How can I fix this?</p>
<python><fortran><petsc>
2024-06-04 11:43:09
1
457
Artur
78,575,142
11,751,799
`matplotlib` figure border and text disagree about the figure limits
<p>I find the behavior of the following image to be surprising.</p> <pre><code>import matplotlib.pyplot as plt fig, ax = plt.subplots() plt.text(1.3, 1.3, &quot;ABC&quot;, color = 'red') # Add blue border # fig.patch.set_linewidth(3) fig.patch.set_edgecolor(&quot;blue&quot;) fig.text(0, 1, &quot;DEF&quot;, color = 'red') plt.show() plt.close() fig.patch.set_linewidth(3) fig.patch.set_edgecolor(&quot;blue&quot;) fig.text(0, 1, &quot;DEF&quot;) plt.show() plt.close() </code></pre> <p><a href="https://i.sstatic.net/Tp1LVXDJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tp1LVXDJ.png" alt="matplotlib surprise" /></a></p> <p>When I use <code>plt.text</code> to add text to the figure beyond the upper limit of 1, the border expands to contain that text in the top right.</p> <p>The border comes from a <code>fig</code> method, seemingly using the new bounds of the figure to determine border placement.</p> <p>However, the <code>fig.text</code> line, another <code>fig</code> method, puts the text at the original upper limit, what would have been the upper limit if I had not used the <code>plt.text</code> line.</p> <p>Why do these disagree when they come from methods on the same figure object, and how can I get the <code>fig.text</code> line to automatically consider the additional text?</p>
<python><matplotlib><plot><graph>
2024-06-04 11:37:22
0
500
Dave
78,574,926
351,410
Python: concise float format without losing order of magnitude
<p>For easy comparison at a glance, statistical results in my project are usually best formatted as <code>0.3f</code>, except (a) where decimal places are useless, or (b) where there is a small fraction such that the first 3 places are all zero. As of Python 3.11.8, there doesn't seem to be a built-in format that always gives the order of magnitude while minimizing useless digits. Using a loop over each character of a standard format, the result can be adjusted to meet these requirements, but it seems inefficient and unnecessarily detailed:</p> <pre><code>class FloatFormatter: def within(n, minimum, maximum): s = '{:.{m}f}'.format(n, m=maximum) decimal = truncate = False i = 0 for c in s: i += 1 match c: case '0': continue case '.': dot = i decimal = True case _: if (decimal): truncate = True break if decimal: places = (i - dot) if places &lt; minimum: return s[:dot+minimum].rstrip('0') # minimum precision modulo trailing zeros if truncate: return s[:i].rstrip('0') # precision up to first non-zero if n &gt; 1.0: return str(int(n)) + '.0_' else: return '* ' + '{:.0e}'.format(n)[1:] # sci-fi format with only the order of magnitude else: return s.rstrip('0').rstrip('.') # just the integer </code></pre> <p>Usage:</p> <pre><code>&gt;&gt;&gt; x = 1.23456 &gt;&gt;&gt; print(&quot;• x: &quot; + FloatFormatter.within(x, 3, 10)) • x: 1.234 </code></pre> <p>special case 1: show extra decimal places (up to 10) to preserve the order of magnitude</p> <pre><code>&gt;&gt;&gt; x = 1.000023456 &gt;&gt;&gt; print(&quot;• x: &quot; + FloatFormatter.within(x, 3, 10)) • x: 1.00002 </code></pre> <p>special case 2: precision 10 is insufficient to reach non-zero, so just show that it's slightly above zero</p> <pre><code>&gt;&gt;&gt; x = 1.00000000000023456 &gt;&gt;&gt; print(&quot;• x: &quot; + FloatFormatter.within(x, 3, 10)) • x: 1.0_ </code></pre> <p>special case 3: preserve the order of magnitude without giving more detail</p> <pre><code>&gt;&gt;&gt; x = 0.00000000000023456 
&gt;&gt;&gt; print(&quot;• x: &quot; + FloatFormatter.within(x, 3, 10)) • x: * e-13 </code></pre> <p>Is there a simpler way to accomplish the same basic idea?</p>
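A shorter take on the core idea in the code above: format once at maximum precision, then keep digits up to the first non-zero fractional digit. This sketch deliberately drops the author's order-of-magnitude placeholders (`1.0_`, `* e-13`) for the extreme cases, so it covers the common paths only:

```python
def concise(n: float, minimum: int = 3, maximum: int = 10) -> str:
    """Keep `minimum` decimal places, extending up to `maximum` until the
    first non-zero fractional digit is included; trailing zeros dropped."""
    head, _, frac = f"{n:.{maximum}f}".partition(".")
    # index of the first non-zero fractional digit, if any
    first_nonzero = next((i for i, c in enumerate(frac) if c != "0"), None)
    if first_nonzero is None:
        return head  # effectively an integer at this precision
    keep = max(minimum, first_nonzero + 1)
    return (head + "." + frac[:keep]).rstrip("0").rstrip(".")
```

The character-by-character loop reduces to a single `next(...)` scan; the special cases that survive only as placeholders in the original would still need explicit branches.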
<python><floating-point><format><precision>
2024-06-04 10:52:17
2
2,715
Byron Hawkins
78,574,898
5,306,861
How to find Base-line of Curved Text?
<p>Attached is a picture with curved lines, how can you find the <strong>Baseline</strong> of the text?</p> <p><a href="https://i.sstatic.net/269aSnEM.jpg" rel="noreferrer"><img src="https://i.sstatic.net/269aSnEM.jpg" alt="enter image description here" /></a></p> <p>The goal is to get lines like I drew by hand in the following picture: <a href="https://i.sstatic.net/A29bR6t8.png" rel="noreferrer"><img src="https://i.sstatic.net/A29bR6t8.png" alt="enter image description here" /></a></p> <p>I tried the following code, but letters like g p q y and similar break the line.</p> <pre class="lang-py prettyprint-override"><code>import cv2 as cv import numpy as np src = cv.imread(&quot;boston_cooking_a.jpg&quot;, cv.IMREAD_GRAYSCALE) src = cv.adaptiveThreshold(src=src, maxValue=255, blockSize=55, C=11, thresholdType=cv.THRESH_BINARY, adaptiveMethod=cv.ADAPTIVE_THRESH_MEAN_C) src = cv.dilate(src, cv.getStructuringElement(ksize=(3, 3), shape=cv.MORPH_RECT)) src = cv.erode(src, cv.getStructuringElement(ksize=(50, 3), shape=cv.MORPH_RECT)) src = cv.Sobel(src, ddepth=0, dx=0, dy=1, ksize=5) cv.imwrite(&quot;test.jpg&quot;, src) cv.imshow(&quot;src&quot;, src) cv.waitKey(0) </code></pre> <p><a href="https://i.sstatic.net/4gtjsiLj.jpg" rel="noreferrer"><img src="https://i.sstatic.net/4gtjsiLj.jpg" alt="enter image description here" /></a></p> <p><strong>EDIT:</strong></p> <p>Attached is another image to test your answer on, so we can make sure the answer doesn't suffer from &quot;overfitting&quot; to a single image.</p> <p><a href="https://i.sstatic.net/bZzzEeCU.jpg" rel="noreferrer"><img src="https://i.sstatic.net/bZzzEeCU.jpg" alt="enter image description here" /></a></p>
<python><algorithm><opencv><image-processing><ocr>
2024-06-04 10:46:09
2
1,839
codeDom
78,574,847
227,860
DST aware shift in polars
<p>I'm working with a dataset that is dependent on local dates and times. I want to add a lag so a row has access to the values from the previous <code>n</code> days. The challenge here comes from the DST transitions. Sometimes there are too few or too many records in one of the previous days. For example, when changing to summer time and using a half hour periodicity I could get:</p> <pre><code>┌───────────────────────────────┬───────────┬───────────┐ │ timestamp ┆ value │ lag_value │ │ --- ┆ --- │ --- │ │ datetime[ms, Europe/Brussels] ┆ f64 │ f64 │ ╞═══════════════════════════════╪═══════════╪═══════════╡ │ 2017-03-26 00:00:00 CET ┆ -0.453049 │ ... │ │ 2017-03-26 00:30:00 CET ┆ 1.696162 │ ... │ │ 2017-03-26 01:00:00 CET ┆ 0.10527 │ ... │ │ 2017-03-26 01:30:00 CET ┆ 0.93969 │ ... │ │ 2017-03-26 03:00:00 CEST ┆ 1.158872 │ ... │ &lt;- │ 2017-03-26 03:30:00 CEST ┆ -0.158087 │ ... │ ... </code></pre> <p>where values for the local time of 02:00 and 02:30 are missing. I am using half an hour here to make sure it isn't too verbose but I'm looking for a solution that works for any periodicity (that is a divisor of 60), e.g. 10 minutes. The DST strategy is easy: if there are too many records then pick the first entry. For example, if 01:00 appears twice the first value should be used. In the above case, with missing data, the missing value should be filled backwards so the values at 03:00 and 03:30 should be used as replacements for the missing values at 02:00 and 02:30. 
The expected result is thus:</p> <pre><code>┌───────────────────────────────┬───────────┬───────────┐ │ timestamp ┆ value │ lag_value │ │ --- ┆ --- │ --- │ │ datetime[ms, Europe/Brussels] ┆ f64 │ f64 │ ╞═══════════════════════════════╪═══════════╪═══════════╡ │ 2017-03-27 00:00:00 CEST ┆ 2.358506 │ -0.453049 │ │ 2017-03-27 00:30:00 CEST ┆ -1.235676 │ 1.696162 │ │ 2017-03-27 01:00:00 CEST ┆ -0.430255 │ 0.10527 │ │ 2017-03-27 01:30:00 CEST ┆ -1.460279 │ 0.93969 │ │ 2017-03-27 02:00:00 CEST ┆ -0.918418 │ 1.158872 │ x │ 2017-03-27 02:30:00 CEST ┆ -0.933531 │ -0.158087 │ x │ 2017-03-27 03:00:00 CEST ┆ -0.421031 │ 1.158872 │ │ 2017-03-27 03:30:00 CEST ┆ -0.800223 │ -0.158087 │ ... </code></pre> <p>I have tried with a generic shift, as you can fix the DST issue by then doing a partial shift of the affected/wrong columns of 2 entries backward/forward. However, I cannot find how to do a partial shift in some columns from a given index.</p> <p>Alternatively, I can calculate a second dataframe and join it with the first dataframe. This would ideally create <code>None</code> values, but runs into other polars issues:</p> <pre><code>from datetime import datetime df = pl.DataFrame({&quot;localdatetime&quot;: [datetime(2017, 3, 27, 2)]}).with_columns(pl.col(&quot;localdatetime&quot;).dt.replace_time_zone(&quot;Europe/Brussels&quot;)) df = df.with_columns( polars .col(&quot;localdatetime&quot;) .dt.offset_by(&quot;-1d&quot;) .alias(&quot;localdatetime_YTD&quot;) ) &gt;&gt;&gt; polars.exceptions.ComputeError: datetime '2017-03-26 02:00:00' is non-existent in time zone 'Europe/Brussels'. You may be able to use `non_existent='null'` to return `null` in this case. </code></pre> <p>A backward fill or forward fill works for hourly data as only a single value is missing, but does not work for finer granularity of 30 minutes or less. 
The gap in these cases is more than a single row and the outlined DST strategy is to shift a block of an entire hour, not simply fill with the next/last known value.</p>
<python><python-polars>
2024-06-04 10:36:59
1
852
kvaruni
78,574,826
6,997,665
Indexing from a 3D numpy array using another 3D numpy array
<p>I have a 3D array, say <code>a</code>. I want to create another array <code>b</code> which has <code>0</code>s at least and second least absolute values in <code>a</code>. My approach is as follows</p> <pre><code>import numpy as np a = np.random.randn(100, 4, 4) c = np.argsort(np.abs(a), axis=2)[..., :2] b = np.ones(a.shape) # I need to do b[c] = 0 </code></pre> <p>However the code <code>b[c] = 0</code> does not work. Also <code>c</code> has entries from <code>0</code> to <code>3</code> only, therefore the indices for first and second dimension in <code>b</code> need to be inferred from the location of the index in <code>c</code>. How does one go about it? Any help is appreciated.</p>
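`np.put_along_axis` does exactly this pairing of sorted indices with their originating positions along one axis, so no manual index grids are needed:

```python
import numpy as np

a = np.random.randn(100, 4, 4)
c = np.argsort(np.abs(a), axis=2)[..., :2]  # two smallest |values| per row

b = np.ones(a.shape)
# For each (i, j) position in the first two dimensions, set b[i, j, k] = 0
# for every k listed in c[i, j, :]
np.put_along_axis(b, c, 0, axis=2)
```

The equivalent manual form uses broadcast index arrays: `i = np.arange(100)[:, None, None]; j = np.arange(4)[None, :, None]; b[i, j, c] = 0`.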
<python><arrays><numpy>
2024-06-04 10:32:31
1
3,502
learner
78,574,815
3,929,481
Obtain type annotation of a function argument
<p>Since Python 3.5 type aliases are available (with a simplified syntax since 3.12),</p> <pre><code>type T = int # introduces type alias T t: T = 1 # annotates t as type T </code></pre> <p>The type annotation of variable <code>t</code> can be found in the <code>__annotations__</code> dict</p> <pre><code>&gt;&gt;&gt; __annotations__['t'] &quot;T&quot; </code></pre> <p>Is it possible to determine the annotation of a variable that was passed as an argument to a function? E.g. when above <code>t</code> is passed to a function, <code>f(t)</code></p> <pre><code>def f(x): # what is the type annotation of the variable passed in as x (if any)? pass </code></pre> <p>Unfortunately the annotation of the variable passed into a function is not maintained in the local frame of the function.</p> <p>Therefore my conclusion is that the type annotation is just not available in the function body anymore. Is this assessment correct?</p>
<python><python-typing>
2024-06-04 10:29:10
1
2,053
mcmayer
78,574,458
6,941,400
How can I automate the connection to an On Prem Windows VM that uses Azure AAD for authentication?
<p>My requirement is to automate the transfer of files, and running commands on the Windows VM, which is currently a manual process where I log in to the VM via RDP (and it prompts me for my username/password of my account).</p> <p>I have been doing a bit of digging on this, as I am able to automate the same thing for a Linux VM where no such AAD based authentication exists and I use the paramiko library to transfer content from my PC to the VM and run bash commands.</p> <pre><code>client = paramiko.SSHClient() client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # Connect to the server try: client.connect(host, username=username, password=password) # Initialize the SFTP client sftp = client.open_sftp() # Transfer the file sftp.put(local_file_path, remote_file_path) print(f&quot;File {local_file_path} has been transferred to {remote_file_path} on the remote server.&quot;) _stdin, _stdout,_stderr = client.exec_command(&quot;ls random*&quot;) </code></pre> <p>(Omitted content for brevity and confidentiality)</p> <p>However, getting a set of credentials to log in to the Windows VM isn't allowed as per the org policy.</p> <p>I am not really able to find any online resources on the same, beyond <a href="https://learn.microsoft.com/en-us/windows/win32/winrm/authentication-for-remote-connections" rel="nofollow noreferrer">this</a> and <a href="https://stackoverflow.com/questions/18961213/how-to-connect-to-a-remote-windows-machine-to-execute-commands-using-python">this</a> but in the latter, it seems all be related to a simpler scenario where there is a username/password combination. I am investigating the usage of Pywinauto to try this out, but I am still not really sure on what the best approach really is. Personally, I would prefer to use Python but I am amenable to any solution at this point (or at least being pointed in the right direction).</p>
<python><windows><ssh><automation><azure-active-directory>
2024-06-04 09:23:52
0
576
Anshuman Kumar
78,574,396
1,658,080
How to extract nested text while keeping line breaks?
<p>I want to extract text from an extremely nested website without any obvious pattern nor classes I can use.</p> <p>That's why I need to write a logic which is quite &quot;generic&quot; and works in multiple scenarios. That's where I need some support.</p> <p>If we have for example:</p> <pre><code>&lt;div&gt;&lt;span&gt;Hello&lt;br&gt;World&lt;/span&gt;, how are you doing?&lt;/div&gt; &lt;span&gt;&lt;span&gt;This&lt;br&gt;&lt;br&gt;&lt;br&gt;is difficult&lt;/span&gt;, at&lt;br&gt;least for me.&lt;/span&gt; </code></pre> <p>... I would like to extract <code>Hello&lt;br&gt;World, how are you doing?</code> as the first element, and then <code>This&lt;br&gt;&lt;br&gt;&lt;br&gt;is difficult, at&lt;br&gt;least for me.</code></p> <p>So it should keep the text (and line breaks) while grouping the elements together.</p> <p>I tried multiple approaches, the latest:</p> <pre><code>def is_visible_text(element): if isinstance(element, NavigableString): # Remove non-visible characters using regex text = re.sub(r'[\u200B-\u200D\uFEFF]', '', element) return text.strip() != '' return False def extract_deepest_text_elements(element): if isinstance(element, NavigableString) and is_visible_text(element): return [element] if element.name in ['br']: return [element] # List to hold extracted text and &lt;br&gt; elements extracted_elements = [] # Process child elements first for child in element.contents: extracted_elements.extend(extract_deepest_text_elements(child)) return extracted_elements def refine_content(input_file, output_file): with open(input_file, 'r', encoding='utf-8') as file: content = file.read() soup = BeautifulSoup(content, 'html.parser') new_body_content = soup.new_tag('div') # Start with the highest-order elements (div, span, p) elements = soup.find_all(['div', 'span', 'p']) for elem in elements: while elem: deepest_elements = extract_deepest_text_elements(elem) if deepest_elements: for element in deepest_elements: new_body_content.append(element) 
                    new_body_content.append(soup.new_tag('br')) # Ensure BRs after text # Move up to the parent element elem = elem.parent if elem.parent and elem.parent.name != 'body' else None new_soup = BeautifulSoup('&lt;html&gt;&lt;body&gt;&lt;/body&gt;&lt;/html&gt;', 'html.parser') new_soup.body.append(new_body_content) with open(output_file, 'w', encoding='utf-8') as file: file.write(new_soup.prettify()) </code></pre> <p>... it does not work as intended. Currently the elements appear multiple times in the output.</p> <p>I would highly appreciate your take on that challenge.</p>
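A stdlib-only sketch of the grouping idea, with no BeautifulSoup: track nesting depth with `html.parser` and flush a block each time depth returns to zero, keeping `<br>` tags verbatim. This assumes the intended grouping is one text block per top-level element, which matches the two example outputs:

```python
from html.parser import HTMLParser

class BlockExtractor(HTMLParser):
    """Group text by top-level element: flush a block whenever the tag
    nesting depth returns to zero, keeping <br> tags verbatim."""

    def __init__(self):
        super().__init__()
        self.depth = 0
        self.current = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "br":
            self.current.append("<br>")  # keep line breaks as-is
        else:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "br":
            return
        self.depth -= 1
        if self.depth == 0 and self.current:
            self.blocks.append("".join(self.current))
            self.current = []

    def handle_data(self, data):
        if data.strip():  # drop whitespace-only runs between elements
            self.current.append(data)
```

Because it never rebuilds a tree or walks back up through parents, each text node is visited exactly once, which avoids the duplicated output.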
<python><web-scraping><beautifulsoup>
2024-06-04 09:10:22
2
725
Clms
78,574,392
10,517,777
Create a log file only when there were logs
<p>Python version: 3.11.8</p> <p>Hi everyone,</p> <p>I am using the package logging to log the execution. I am using <code>logging.basicConfig</code> to output the logs in a log file. I added it at the beginning of my python script. I realised a file was created when invoking <code>logging.basicConfig</code>. However, I do not want to create a log file if there were no data to log. Currently, I am getting empty files in my repository and I am having issues with a lot of files created in that repository. This is a part of my script to show you how I am using the logging package. How can I create the log file only when there are logs?</p> <pre><code>def create_target_file(logs_file_path: str, platform: str, )-&gt; str: try: data ={} try: logging.basicConfig(filename=logs_file_path, filemode='a', level=logging.INFO, format=&quot;%(levelname)s | %(asctime)s | %(message)s&quot;, datefmt=&quot;%Y-%m-%d %H:%M:%S&quot;) except OSError as e: sleep(10) logging.basicConfig(filename=logs_file_path, filemode='a', level=logging.INFO, format=&quot;%(levelname)s | %(asctime)s | %(message)s&quot;, datefmt=&quot;%Y-%m-%d %H:%M:%S&quot;) if platform == 'DRD': logging.info(&quot;Logging some important stuff&quot;) except Exception as e: logging.error(str(e)) data['error_message'] = str(e) data['stack_trace'] = format_exc() finally: return dumps(data) </code></pre> <p>I am using a try-except block to prevent an error when creating the log file with <code>logging.basicConfig</code>. My goal is to create a log file only when the logging function is used. For example, if platform is not equal to &quot;DRD&quot; and there were no errors, then the file must not be created.</p>
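`logging.FileHandler` accepts `delay=True`, which postpones opening (and therefore creating) the file until the first record is actually emitted, so no log calls means no file. A sketch replacing `basicConfig` with an explicit handler:

```python
import logging

def get_file_logger(logs_file_path: str) -> logging.Logger:
    # delay=True: the file is only created when the first record is emitted
    handler = logging.FileHandler(logs_file_path, mode="a", delay=True)
    handler.setFormatter(logging.Formatter(
        "%(levelname)s | %(asctime)s | %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    ))
    logger = logging.getLogger("target_file")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger
```

`basicConfig` itself can also take this handler via its `handlers=` argument, e.g. `logging.basicConfig(handlers=[logging.FileHandler(path, delay=True)], ...)`, if you prefer to keep the one-call configuration style.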
<python><logging><python-logging>
2024-06-04 09:09:17
1
364
sergioMoreno
78,574,357
19,130,803
Given argument of list[int] | list[str], can't I be sure that the list is list[int] if element [0] is int, and vice versa?
<p>I have a <code>python</code> script and trying to add type hints to the code, Following is the sample code (without type hints the code works) using <code>mypy</code>.</p> <pre><code>values_int: list[int] = [1, 2, 3, 4] values_str: list[str] = [&quot;1&quot;, &quot;2&quot;, &quot;3&quot;, &quot;4&quot;] def bar(*, x: list[int]) -&gt; bool: # processing return True def baz(*, y: list[str]) -&gt; bool: # processing return True def foo(*, values: list[int] | list[str]) -&gt; bool: status: bool = False if isinstance(values[0], int): x: list[int] = values status = bar(x=x) elif isinstance(values[0], str): # case-1 # status = baz(y=values) # case-2 y: list[str] = values status = baz(y=y) return status foo(values=values_str) </code></pre> <p>Errors for:</p> <pre><code># case-1 # error: Argument &quot;y&quot; to &quot;baz&quot; has incompatible type &quot;list[int] | list[str]&quot;; expected &quot;list[str]&quot; # case-2 # error: Incompatible types in assignment (expression has type &quot;list[int] | list[str]&quot;, variable has type &quot;list[str]&quot;) </code></pre>
<python><mypy><python-typing>
2024-06-04 09:00:05
1
962
winter
78,574,168
2,398,430
pytest - collect information about test together with parameters
<p>I am using pytest. I would like to query the framework to get a list of all the tests that would be scheduled to run, together with the parameters for each test.</p> <p>By query, I mean an action to acquire that info.</p> <p>When I execute <code>pytest -s --co</code>, I get some info but not all the info. The issue is that I want to be able to get the explicit parameters used in each test, not a label like pytest_parameters0.</p> <pre><code>&lt;Module test_name.py&gt; &lt;Function test_do[pytest_parameters0] &lt;Function test_do[pytest_parameters1] </code></pre> <p>I have also tried to implement hook functions like &quot;pytest_collection_modifyitems&quot;, however I have not been able to retrieve the parameter information by this means either. Is there a way to do what I am seeking?</p> <p>Any help will be appreciated!</p>
<python><pytest>
2024-06-04 08:20:51
1
366
stackQA
78,574,071
8,869,003
Prevent django from writing e-mail information to stdout
<p>I'm using django 2.2 with python 3.6. In some cases my server stores stdout into a log used by managers.</p> <p>For some reason django started to write to stdout when sending mail some time ago. The managers do not like it because it looks like an error case to them.</p> <p>So the following sends the e-mail, but also writes to stdout:</p> <pre><code>^C(venv) &gt; python manage.py shell Python 3.6.15 (default, Sep 15 2021, 12:00:00) Type 'copyright', 'credits' or 'license' for more information IPython 7.5.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: from django.core.mail import send_mail In [2]: send_mail( ...: 'Problems in sync script', ...: &quot;kukkuu&quot;, ...: 'pxpro_sync@&lt;myOrg&gt;', ...: ['&lt;myEmail&gt;'], ...: fail_silently = False,) connection: {'fail_silently': False, 'host': 'localhost', 'port': 25, 'username': '', 'password': '', 'use_tls': False, 'use_ssl': False, 'timeout': None, 'ssl_keyfile': None, 'ssl_certfile': None, 'connection': None, '_lock': &lt;unlocked _thread.RLock object owner=0 count=0 at 0x7fa8e5a01180&gt;} mail: &lt;django.core.mail.message.EmailMultiAlternatives object at 0x7fa8e59b4ba8&gt; &lt;class 'django.core.mail.message.EmailMultiAlternatives'&gt; Out[2]: 1 </code></pre> <p>How can I get just e-mails sent without anything (like &quot;connection:...&quot;) displayed into stdout?</p> <p>It does not matter whether I define <code>EMAIL_BACKEND</code> as <code>'django.core.mail.backends.smtp.EmailBackend'</code> or not.</p>
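The `connection:`/`mail:` lines above look like stray debug prints, possibly from a locally patched backend, since stock django does not print on send. As a stopgap while tracking down the source, stdlib `contextlib.redirect_stdout` can silence a single call without touching global state permanently:

```python
import io
from contextlib import redirect_stdout

def call_quietly(func, *args, **kwargs):
    """Run func, discarding anything it prints to stdout; the return
    value and any raised exceptions pass through unchanged."""
    with redirect_stdout(io.StringIO()):
        return func(*args, **kwargs)
```

Usage would then be, for example, `sent = call_quietly(send_mail, subject, body, from_addr, [to], fail_silently=False)`; the real fix is still to find and remove whatever is printing.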
<python><django><smtp>
2024-06-04 07:58:20
0
310
Jaana
78,574,047
5,786,649
Python setattr and getattr: best practice with flexible, mutable object variables
<p>I have a class that holds an unspecified number of dicts as variables. I want to provide methods that allow the user to easily append to any of the dicts while naming the corresponding variable, without having to check whether the variable was already created. Example for my goal:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; c = Container() &gt;&gt;&gt; c.add_value(variable=&quot;first_dict&quot;, key=&quot;foo&quot;, value=1) &gt;&gt;&gt; c.add_value(variable=&quot;first_dict&quot;, key=&quot;bar&quot;, value=2) &gt;&gt;&gt; c.add_value(variable=&quot;second_dict&quot;, key=&quot;foo&quot;, value=2) &gt;&gt;&gt; print(c.first_dict) {&quot;foo&quot;:1, &quot;bar&quot;:2) &gt;&gt;&gt; print(c.second_dict) {&quot;foo&quot;:2) </code></pre> <p>Currently, this is my solution:</p> <pre class="lang-py prettyprint-override"><code>class Container(): def __init__(self): pass def add_value(self, variable: str, key: Any, value: Any): x = getattr(self, variable, {}) x[key] = value setattr(self, variable, x) </code></pre> <p>My concern is that accessing the attribute via <code>getattr</code>, then mutating it and setting it back via <code>setattr</code> introduces overhead that is not necessary.</p> <p>Is there a better way to write the <code>Container.add_value()</code>-method? Should I approach the whole problem differently?</p>
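The getattr/mutate/setattr round trip described above can be collapsed: dicts are mutable, so the dict only needs to be created once and then mutated in place. `dict.setdefault` on the instance `__dict__` does both steps at once:

```python
from typing import Any

class Container:
    def add_value(self, variable: str, key: Any, value: Any) -> None:
        # setdefault returns the existing dict (or inserts a fresh one),
        # and mutating it in place updates the attribute; no setattr needed
        self.__dict__.setdefault(variable, {})[key] = value
```

If attribute access (`c.first_dict`) is not a hard requirement, a `collections.defaultdict(dict)` keyed by name (`c.data["first_dict"]["foo"] = 1`) avoids the dynamic-attribute pattern entirely, which static type checkers handle better.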
<python><dictionary>
2024-06-04 07:55:43
1
543
Lukas
78,574,035
11,562,537
Tkinter - How can I disable the "undo" option for a default message saved in a text widget?
<p>I have a text widget with the &quot;undo&quot; option activated and a default message already inserted. In this case, how can I avoid affecting the default message using <kbd>Ctrl+Z</kbd>? In my mind it should work only for the new text inserted by the user but it always deletes all the text. How can I solve this issue?</p>
<python><tkinter><text><tkinter-text>
2024-06-04 07:52:48
1
918
TurboC
78,573,979
2,604,247
Does Ray Offer any Functional/Declarative Interface to Map a Remote Function to an Iterator/Iterable?
<h4>My present Code</h4> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 # encoding: utf-8 &quot;&quot;&quot;Demonstration of Ray parallelism&quot;&quot;&quot; import ray from typing import Iterator ray.init() @ray.remote def square(n:int)-&gt;int: return n*n references: Iterator[ray.ObjectRef] = map(lambda val: square.remote(val), range(10)) ray.get([*references]) ray.shutdown() </code></pre> <p>Basically, nothing but a form of <code>map(square, range(10))</code> powered by Ray.</p> <h4>Question</h4> <p>For such a standard and common pattern, the above operation looks too verbose. So does ray offer any API exposing a more declarative/functional interface to get the above result? In addition to map, best if it offers some kind of filter, reduce etc. as well.</p>
<python><concurrency><functional-programming><ray>
2024-06-04 07:41:42
2
1,720
Della
78,573,962
8,163,773
Getting google oauth tokens using "code" with python
<p>I want to create an auth flow using React + Python. Here is an <a href="https://github.com/MomenSherif/react-oauth/issues/12#issuecomment-1131408898" rel="nofollow noreferrer">example</a> of how to do it with React + Node.js:</p> <pre><code>const { tokens } = await oAuth2Client.getToken(req.body.code); </code></pre> <p>But I want to rewrite it using Python and I'm struggling to find any information about how to use that &quot;code&quot; I get from React with Python.</p>
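On the Python side, `getToken` corresponds to a plain POST against Google's token endpoint with the standard OAuth2 authorization-code parameters. A hedged sketch (the function names are illustrative; the `google-auth-oauthlib` library's `Flow.fetch_token` is the higher-level equivalent):

```python
def build_token_request(code, client_id, client_secret, redirect_uri):
    # Payload for the standard OAuth2 authorization-code exchange.
    return "https://oauth2.googleapis.com/token", {
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
        "grant_type": "authorization_code",
    }

def exchange_code_for_tokens(code, client_id, client_secret, redirect_uri):
    import requests  # deferred so the payload builder has no dependencies
    url, data = build_token_request(code, client_id, client_secret, redirect_uri)
    resp = requests.post(url, data=data)
    resp.raise_for_status()
    # JSON body contains access_token, refresh_token, id_token, expires_in, ...
    return resp.json()
```

The `redirect_uri` must match the one registered in the Google Cloud console and used by the React client when it obtained the code.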
<python><google-oauth>
2024-06-04 07:38:04
1
9,359
Arseniy-II
78,573,789
8,964,393
Export multiple pandas crosstabs into one html file with table of contents
<p>I have created a pandas dataframe as follows:</p> <pre><code>import pandas as pd import numpy as np ds = {'col1' : [1,1,2,3,4], 'col2' : [1,1,3,4,5], 'col3': [3,3,3,3,3]} df = pd.DataFrame(data=ds) </code></pre> <p>The dataframe looks like this:</p> <pre><code>print(df) col1 col2 col3 0 1 1 3 1 1 1 3 2 2 3 3 3 3 4 3 4 4 5 3 </code></pre> <p>I have then produced two crosstabs as follows:</p> <pre><code>x1 = pd.crosstab(df['col1'], df['col2'], normalize='index').sort_index() x2 = pd.crosstab(df['col1'], df['col3'], normalize='index').sort_index() </code></pre> <p>Which look like this:</p> <pre><code>print(x1) print(&quot;&quot;) print(x2) col2 1 3 4 5 col1 1 1.0 0.0 0.0 0.0 2 0.0 1.0 0.0 0.0 3 0.0 0.0 1.0 0.0 4 0.0 0.0 0.0 1.0 col3 3 col1 1 1.0 2 1.0 3 1.0 4 1.0 </code></pre> <p>I need to export those crosstabs results into one html file and that html file needs to have a table of contents at the top of the page containing the second element of the crosstab. So, the html would look something like this:</p> <pre><code>1. col2 2. col3 col2 1 3 4 5 col1 1 1.0 0.0 0.0 0.0 2 0.0 1.0 0.0 0.0 3 0.0 0.0 1.0 0.0 4 0.0 0.0 0.0 1.0 col3 3 col1 1 1.0 2 1.0 3 1.0 4 1.0 </code></pre>
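One low-dependency way to get that page is `DataFrame.to_html()` per crosstab plus hand-built anchor links for the table of contents. A sketch reusing the data above (the output file name is illustrative):

```python
import pandas as pd

ds = {"col1": [1, 1, 2, 3, 4], "col2": [1, 1, 3, 4, 5], "col3": [3, 3, 3, 3, 3]}
df = pd.DataFrame(data=ds)

# one crosstab per second column, keyed by that column's name
crosstabs = {col: pd.crosstab(df["col1"], df[col], normalize="index").sort_index()
             for col in ["col2", "col3"]}

# numbered table of contents with in-page anchor links, then each table
toc = "".join(f'<li><a href="#{col}">{col}</a></li>' for col in crosstabs)
sections = "".join(f'<h2 id="{col}">{col}</h2>{x.to_html()}'
                   for col, x in crosstabs.items())
html = f"<html><body><ol>{toc}</ol>{sections}</body></html>"

# hypothetical output file:
# with open("crosstabs.html", "w") as f:
#     f.write(html)
```

Because the `<ol>` numbers the entries and each heading carries an `id`, the links at the top jump straight to the matching crosstab.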
<python><html><templates><format>
2024-06-04 06:57:37
0
1,762
Giampaolo Levorato
78,573,661
893,254
Calculating a rolling mean window with Pandas with data which is non-periodic
<p>I have some existing Pandas code which calculates the mean of some timeseries data per calendar month.</p> <pre><code> df .groupby( pandas.Grouper( key='transaction_date', freq='M', ) ) .aggregate( { 'transaction_date': 'first', 'price': 'mean' } ) </code></pre> <p>The resulting data still maintains a large amount of variance. I would like to reduce this variance by applying a rolling mean operation over a period of 6 months.</p> <p>I have not found a good solution to this.</p> <ul> <li>I tried replacing <code>groupby</code> with <code>rolling</code> and using a <code>pandas.Grouper</code>. However, the <code>rolling</code> API does not work with <code>Grouper</code> objects.</li> <li>I tried the code below, however it failed with an exception.</li> </ul> <pre><code>raise ValueError(&quot;window must be an integer 0 or greater&quot;) ValueError: window must be an integer 0 or greater </code></pre> <pre><code> df_slow = ( df .copy() .set_index('transaction_date') .rolling( window=timedelta(days=1), ) .aggregate( { 'price': 'mean' } ) .rename(columns={'price': 'mean'}) ) df_slow['transaction_date'] = ( df .copy() .set_index('transaction_date') .index .to_series() .rolling( window=timedelta(days=1) ) .min() ) </code></pre> <ul> <li>Finally I tried this (below).</li> </ul> <pre><code> df_slow = ( df .copy() .groupby( pandas.Grouper( key='transaction_date', freq='D', ) ) .aggregate({'price': 'mean'}) ) df_slow['price_rolling'] = ( df ['price'] .rolling( window=180, # 180 days ) .aggregate('mean') ) </code></pre> <p>This works, but I would have thought there is a more straightforward solution to this problem.</p> <p>Can anyone help me understand what the sensible approach is here?</p>
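A more direct two-step version of the last snippet is to resample to calendar months first and then take a 6-month rolling mean of the monthly series. A sketch on synthetic data (column names taken from the question; `'MS'` month-start frequency is used here as an assumption about the desired binning):

```python
import pandas as pd
import numpy as np

# synthetic, irregular transaction data standing in for the real frame
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "transaction_date": pd.to_datetime(rng.integers(0, 730, 200),
                                       unit="D", origin="2022-01-01"),
    "price": rng.normal(100.0, 10.0, 200),
})

# step 1: one mean per calendar month (month-start frequency)
monthly = df.set_index("transaction_date")["price"].resample("MS").mean()

# step 2: smooth the monthly series with a 6-month rolling mean
smoothed = monthly.rolling(window=6, min_periods=1).mean()
```

Because the resampled series is periodic, the integer `window=6` means exactly six months, which sidesteps the "window must be an integer" error from the `timedelta` attempt.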
<python><pandas>
2024-06-04 06:26:55
0
18,579
user2138149
78,573,638
1,082,349
Reindex to expand and fill value only across one level of multi-index
<p>I have a dataframe with an index of (month, A, B):</p> <pre><code> foo N month A B 1983-03-01 3 9 0 1 1983-06-01 3 9 0 1 1983-09-01 3 9 0 1 1983-11-01 4 5 0 1 1984-05-01 4 5 0 1 1984-06-01 3 9 0 1 1984-09-01 3 9 0 2 </code></pre> <p>I would like to fill all missing dates, provided that a certain (A, B) combination exists in the index. What I do not want to do is to fill in the index for all (A, B) combinations.</p> <p>That is, I would like to have for (A=3, B=9) and for (A=4, B=5) month-indices running from 1983-03-01 to 1984-09-01 and 0s for filling. But I don't want there to be any records of (A=3, B=5) or (A=4, B=9).</p> <p>If this were a single index, I could simply</p> <pre><code>idx = pd.date_range(df['month'].min(), df['month'].max(), freq='M') df = df.set_index('month') df = df.reindex(idx, fill_value=0) </code></pre> <p>How would I approach it in this situation?</p> <p>Worth noting that this solution should scale with a large number of unique values for A, B.</p>
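One way to expand only observed (A, B) pairs is to group on those levels and reindex each group's month index separately. A sketch on toy data shaped like the question's (the `errors="ignore"` drop keeps it working whether or not `groupby.apply` passes the grouping columns through, which changed across pandas versions):

```python
import pandas as pd

# toy data shaped like the question's (month, A, B) index
months = pd.to_datetime(["1983-03-01", "1983-06-01", "1983-11-01"])
idx = pd.MultiIndex.from_arrays([months, [3, 3, 4], [9, 9, 5]],
                                names=["month", "A", "B"])
df = pd.DataFrame({"foo": [0, 0, 0], "N": [1, 1, 1]}, index=idx)

# one shared monthly range covering the whole frame
full = pd.date_range(df.index.get_level_values("month").min(),
                     df.index.get_level_values("month").max(), freq="MS")

# reindex each observed (A, B) pair separately; unobserved pairs never appear
out = (df.reset_index(["A", "B"])
         .groupby(["A", "B"], group_keys=True)
         .apply(lambda g: g.drop(columns=["A", "B"], errors="ignore")
                           .reindex(full, fill_value=0)))
```

The work is linear in the number of observed (A, B) pairs, so it scales with many unique combinations without materialising the full cross product.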
<python><pandas>
2024-06-04 06:19:19
2
16,698
FooBar
78,573,164
5,273,594
Value from F expression in Django ORM is not a decimal and Objects has to field named
<p>I have the following classes/models and I'm trying to do bulk update thousands of Recipt objects to have an Item JSON object in the details section</p> <pre><code>from django.db import models from schematics import models as schema_models class Recipt(models.Model): id = UUID() is_red = models.BooleanField(default=False) disputed_at = models.DateTimeField(null=True) total_amount = models.DecimalField(max_digits=12, decimal_places=2) details = JSONModelField(ReciptDetails) class ReciptDetails(PickleableModel): id = UUID() item = schema_types.ModelType(Item) class Item(schema_models.Model): id = schema_types.UUIDType() name = schema_types.StringType() amount = schema_types.DecimalType() description = schema_types.StringType() </code></pre> <p>When doing:</p> <p><code>Recipt.objects.filter(Q(details__item=None), is_red=True).update(details__item=Item({&quot;name&quot;: SOMECONST.HAPPY, &quot;amount&quot;: F('total_amount')})) </code></p> <p>I get the error:</p> <p><code> *** schematics.exceptions.DataError: {&quot;amount&quot;: [&quot;Value 'F(total_amount)' is not decimal.&quot;]}</code></p> <p>I'm guessing it's because the F expression is inside of the Item class (json object). What is the proper way to get the <code>total_amount</code> value from the Recipt to add to the item json?</p> <p>Moreover, when I try to set a decimal constant value (to bypass the issue above):</p> <p><code>Recipt.objects.filter(Q(details__item=None), is_red=True).update(details__item=Item({&quot;name&quot;: SOMECONST.HAPPY, &quot;amount&quot;: Decimal(&quot;10.00&quot;)})) </code></p> <p>I get that:</p> <p><code> django.core.exceptions.FieldDoesNotExist: Recipt has no field named 'details__item'</code></p> <p>Any ideas of how to fix these issues? I think it stems from the nesting of JSON objects</p>
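An <code>F()</code> expression is a SQL construct that only the database can resolve, so it cannot be serialised into a schematics <code>Item</code> in Python; and <code>update()</code> keyword arguments must be real model fields, which <code>details__item</code> is not — that accounts for both errors. One workaround sketch, untested against this schema (it assumes the <code>Recipt</code>, <code>Item</code> and <code>SOMECONST</code> names from the question are importable in scope), is to build each <code>Item</code> in Python and write the JSON field back with <code>bulk_update()</code>:

```python
def fill_missing_items(batch_size=500):
    # Rows whose details lack an item get one built in Python, using each
    # row's own total_amount; the whole JSON field is then written back in
    # batched UPDATE queries. (Assumes Recipt, Item and SOMECONST exist.)
    rows = list(Recipt.objects.filter(details__item=None, is_red=True))
    for r in rows:
        r.details.item = Item({"name": SOMECONST.HAPPY,
                               "amount": r.total_amount})
    Recipt.objects.bulk_update(rows, ["details"], batch_size=batch_size)
```

This trades a single UPDATE for fetch-then-batched-writes, but it is the straightforward route when the per-row value has to pass through Python-side validation.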
<python><json><django><django-models><orm>
2024-06-04 02:46:33
0
2,073
Fredy
78,573,014
9,279,753
Forward slash being changed to "%2F" when deploying Azure Function
<p>I have an Azure Function that connects to an Azure File Storage account.</p> <p>The File Storage Account is structured like the following:</p> <pre><code>ShareName ShareName/folder_1 ShareName/folder_1/subfolder_1 ShareName/folder_1/subfolder_1/other_folder ShareName/folder_1/subfolder_1/another_folder ShareName/folder_1/subfolder_1/... ShareName/folder_2 ShareName/folder_2/subfolder_1 ShareName/folder_3 </code></pre> <p>I am able to open the <code>ShareClient</code> to the account, I can access all files and folders that are on <code>ShareName/folder_1/subfolder_1</code> and retrieve all files and directories recursively, with the following code:</p> <pre><code>import os from azure.storage.fileshare import ShareClient def main(): share_client = ShareClient.from_share_url(share_url=os.environ['STACC_URL'], credential=os.environ['STACC_KEY']) for file in get_files(share_client, 'folder_1/subfolder_1'): print(file) def get_files(share_client: ShareClient, dir_name: str): for item in share_client.list_directories_and_files(dir_name): if item.is_directory: yield from get_files(share_client, dir_name + '/' + item.name) else: yield dir_name + '/' + item.name </code></pre> <p>That works perfectly while running Azure Functions locally, and I get some output like this:</p> <pre><code>folder_1/subfolder_1/new_folder/something.pdf folder_1/subfolder_1/other_folder/other.pdf folder_1/subfolder_1/other_folder/new.pdf ... </code></pre> <p>However, when I deploy the code to the Azure Function, it starts failing. In the log, I can see that it is requesting a URL like this: <code>https://nameofaccount.file.core.windows.net/sharename/folder_1%2Fsubfolder_1</code>. It is of course not going to find that URL because it does not exist.</p> <p>I've added a log statement for the directory that is going to be searched and it does say &quot;<code>Getting files from directory: folder_1/subfolder_1...</code>&quot;, but then immediately after, I get the information log referencing &quot;<code>Request URL: 'https://nameofaccount.file.core.windows.net/sharename/folder_1%2Fsubfolder_1?restype=REDACTED&amp;comp=REDACTED'</code>&quot;</p> <p>Basically, for some reason, the Azure Function is transforming the &quot;/&quot; in the requested directory path into the encoded &quot;%2F&quot;. I have tried to use &quot;\&quot; as the standard Python escape character, but the issue remains.</p> <p>How can I make the Azure Function run on Azure as it does locally and search only for the information in the specified subdirectory, recursively?</p>
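One workaround sketch: ask the `ShareClient` for a `ShareDirectoryClient` of the subdirectory and list from there, so the slash-containing path is never passed to `list_directories_and_files` where it may end up percent-encoded into a single URL segment (this mirrors the `item.is_directory`/`item.name` access used in the question; whether it resolves the local-versus-deployed encoding difference would need verifying against the deployed SDK version):

```python
def list_files_recursive(share_client, dir_name):
    # Obtain a directory client for the subdirectory instead of passing a
    # slash-containing path to list_directories_and_files, so the "/" does
    # not get percent-encoded into one URL path segment.
    dir_client = share_client.get_directory_client(dir_name)
    for item in dir_client.list_directories_and_files():
        if item.is_directory:
            yield from list_files_recursive(share_client,
                                            dir_name + "/" + item.name)
        else:
            yield dir_name + "/" + item.name
```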
<python><azure><azure-functions><azure-file-share>
2024-06-04 01:13:37
1
599
Jose Vega
78,572,954
3,610,310
Using CloudLinux to create a Python app getting error Specified directory already used by '/path/to/application'
<p>I am using cloudlinux through DirectAdmin (this shouldn't matter) and I am trying to create a new Django application using the &quot;Setup Python App&quot; create application option. I have uploaded my django files properly and I am certain that the correct permissions are set to the applications folder (i.e. <code>chowner -R username foldername</code> and <code>chmod -R 755 foldername</code>).</p> <p>Every time I try to add it get the error <code>Specified directory already used by '/path/to/application'</code>.</p> <p>I have removed the virtual environment and even tried putting the app in various locations in case the root wasn't good enough but I still received this error.</p>
<python><node.js><django><directadmin>
2024-06-04 00:39:37
1
3,475
jAC
78,572,945
14,673,832
× Encountered error while trying to install package. ╰─> typed-ast
<p>I have a Django project which I cloned onto my local machine, but I am unable to start backend coding because the requirements will not install. Every time I try to install requirements.txt, it gives me the following error.</p> <pre><code>note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─&gt; typed-ast note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. </code></pre> <p>I have tried the following things.</p> <ol> <li>Downgrading the Python version to 3.11.1</li> <li>Downgrading typed-ast to 1.4.2</li> <li>Installing the latest mypy, version 1.10.0</li> <li>Installing Desktop development with C++ and other packages.</li> </ol> <p>I have Windows 11. I cloned the project on another system which has Windows 10, and it worked perfectly; I didn't have to downgrade anything. My error is as follows:</p> <pre><code> Running setup.py install for typed-ast ... error error: subprocess-exited-with-error × Running setup.py install for typed-ast did not run successfully. │ exit code: 1 ╰─&gt; [127 lines of output] running install C:\ALL_FILES\Projected\kvwsmb\myvenv\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build\lib.win-amd64-cpython-311 creating build\lib.win-amd64-cpython-311\typed_ast copying typed_ast\ast27.py -&gt; build\lib.win-amd64-cpython-311\typed_ast copying typed_ast\ast3.py -&gt; build\lib.win-amd64-cpython-311\typed_ast copying typed_ast\conversions.py -&gt; build\lib.win-amd64-cpython-311\typed_ast copying typed_ast\__init__.py -&gt; build\lib.win-amd64-cpython-311\typed_ast creating build\lib.win-amd64-cpython-311\typed_ast\tests copying ast3\tests\test_basics.py -&gt; build\lib.win-amd64-cpython-311\typed_ast\tests running build_ext building '_ast27' extension creating build\temp.win-amd64-cpython-311 creating build\temp.win-amd64-cpython-311\Release creating build\temp.win-amd64-cpython-311\Release\ast27 creating build\temp.win-amd64-cpython-311\Release\ast27\Custom creating build\temp.win-amd64-cpython-311\Release\ast27\Parser creating build\temp.win-amd64-cpython-311\Release\ast27\Python &quot;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.40.33807\bin\HostX86\x64\cl.exe&quot; /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Iast27/Include -IC:\ALL_FILES\Project ed\kvwsmb\myvenv\include &quot;-IC:\Users\Aliza Paudel\AppData\Local\Programs\Python\Python311\include&quot; &quot;-IC:\Users\Aliza Paudel\AppData\Local\Programs\Python\Python311\Include&quot; &quot;-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.40.33807\include&quot; &quot;-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.40.33807\ATLMFC\include&quot; &quot;-IC:\ Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt&quot; &quot;-IC:\Program Files (x86)\Windows Kits\1 0\\include\10.0.22621.0\\um&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared&quot; &quot;-IC:\Program Files (x86)\Windows 
Kits\10\\include\10.0.22621.0\\winrt&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt&quot; &quot;-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um&quot; /Tcast27/Custom/typed_ast.c /Fobuild\temp.win-amd64-cpython-311\Release\ast27/Custom/typed_ast.obj typed_ast.c C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Include\../Include/asdl.h(32): error C2143: syntax error: missing ')' before '*' C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Include\../Include/asdl.h(32): error C2081: 'PyArena': name in formal parameter list illegal C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Include\../Include/asdl.h(32): error C2143: syntax error: missing '{' before '*' C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Include\../Include/asdl.h(32): error C2059: syntax error: ')' C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Include\../Include/asdl.h(33): error C2143: syntax error: missing ')' before '*' C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Include\../Include/asdl.h(33): error C2081: 'PyArena': name in formal parameter list illegal C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Include\../Include/asdl.h(33): error C2143: syntax error: missing '{' before '*' C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Include\../Include/asdl.h(33): error C2059: syntax error: ')' C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Custom\../Include/Python-ast.h(398): error C2143: 
syntax error: missing ')' before '*' [...the same C2143 / C2081 / C2059 error triplet repeats for dozens more declarations in asdl.h and Python-ast.h...] C:\Users\Aliza Paudel\AppData\Local\Temp\pip-install-o586qmu9\typed-ast_326fc7f310584db1b6b96f1c4187d291\ast27\Custom\../Include/Python-ast.h(459): fatal error C1003: error count exceeds 100; stopping compilation error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.40.33807\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─&gt; typed-ast note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure.
</code></pre> <p>My requirements.txt file</p> <pre><code>pytz==2021.1 # https://github.com/stub42/pytz python-slugify==4.0.1 # https://github.com/un33k/python-slugify argon2-cffi==20.1.0 # https://github.com/hynek/argon2_cffi redis==3.5.3 # https://github.com/andymccurdy/redis-py hiredis==1.1.0 # https://github.com/redis/hiredis-py typed-ast==1.4.2 tablib # Django # ------------------------------------------------------------------------------ django==3.1.7 # pyup: &lt; 3.2 # https://www.djangoproject.com/ django-environ==0.4.5 # https://github.com/joke2k/django-environ django-model-utils==4.1.1 # https://github.com/jazzband/django-model-utils django-allauth==0.44.0 # https://github.com/pennersr/django-allauth django-crispy-forms==1.11.1 # https://github.com/django-crispy-forms/django-crispy-forms django-redis==4.12.1 # https://github.com/jazzband/django-redis django-cors-headers django-rest-passwordreset pillow gunicorn Collectfast dj-database-url django-autoslug django-rest-knox django-import-export Werkzeug==1.0.1 # https://github.com/pallets/werkzeug ipdb==0.13.5 # https://github.com/gotcha/ipdb # https://github.com/psycopg/psycopg2 # Testing # ------------------------------------------------------------------------------ mypy==0.812 # https://github.com/python/mypy django-stubs==1.7.0 # https://github.com/typeddjango/django-stubs pytest==6.2.2 # https://github.com/pytest-dev/pytest pytest-sugar==0.9.4 # https://github.com/Frozenball/pytest-sugar # Documentation # ------------------------------------------------------------------------------ sphinx==3.5.1 # https://github.com/sphinx-doc/sphinx sphinx-autobuild==2020.9.1 # https://github.com/GaretJax/sphinx-autobuild # Code quality # ------------------------------------------------------------------------------ flake8==3.8.4 # https://github.com/PyCQA/flake8 flake8-isort==4.0.0 # https://github.com/gforcada/flake8-isort coverage==5.5 # https://github.com/nedbat/coveragepy black==20.8b1 # 
https://github.com/ambv/black pylint-django==2.4.2 # https://github.com/PyCQA/pylint-django pre-commit==2.10.1 # https://github.com/pre-commit/pre-commit # Django # ------------------------------------------------------------------------------ factory-boy==3.2.0 # https://github.com/FactoryBoy/factory_boy django-debug-toolbar==3.2 # https://github.com/jazzband/django-debug-toolbar django-extensions==3.1.1 # https://github.com/django-extensions/django-extensions django-coverage-plugin==1.8.0 # https://github.com/nedbat/django_coverage_plugin pytest-django==4.1.0 # https://github.com/pytest-dev/pytest-django whitenoise djangorestframework django-filter django-tables2 drf-yasg django-filter </code></pre>
<python><django><pip><requirements.txt><typed>
2024-06-04 00:35:05
1
1,074
Reactoo
78,572,914
896,451
On this Conditional Expression, what is the syntax error about?
<p>I get:</p> <pre><code> return r.group() if r := re.match(rx,l) else None ^ </code></pre> <p>SyntaxError: invalid syntax</p> <p>whereas</p> <pre><code> return r.group() if (r := re.match(rx,l)) else None </code></pre> <p>is accepted.</p> <p>What is invalid about the first's syntax? And what other interpretation of it is there, than the second, such that it is not unambiguous?</p>
<python><python-3.x><conditional-operator><python-assignment-expression>
2024-06-04 00:14:35
1
2,312
ChrisJJ
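The question above comes down to the grammar of assignment expressions: PEP 572 deliberately forbids an unparenthesized `:=` in several positions, including the condition of a conditional expression, so the walrus must be wrapped in parentheses there. A minimal sketch of the working form (function and variable names are illustrative, not the asker's code):

```python
import re

def first_match(rx: str, line: str):
    # Unparenthesized, `r := re.match(rx, line)` is a SyntaxError in this
    # position; the parentheses make the assignment expression a valid atom.
    # The condition is evaluated first, so `r` is bound before `r.group()` runs.
    return r.group() if (r := re.match(rx, line)) else None

print(first_match(r"\d+", "42 apples"))   # the leading digits match
print(first_match(r"\d+", "no digits"))   # no match at the start -> None
```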
78,572,908
1,593,783
Python initializes method parameters instead of using default values
<p>I have created a class:</p> <pre><code>class PolicyJSON: def __init__(this, path:str=&quot;&quot;, values:dict={}, children:list=[], parent=None): this.path = path this.values = values this.children = children this.parent = parent this.path = this.getfullpath() </code></pre> <p>This class is hierarchical - it can have many children of its type, and it can have a parent (of its type). I have written several hundred lines of code already, basically creating the whole hierarchy, and my code was working just fine, so I was happy, until now. I have started spawning multiple instances of these hierarchies (see <code>mymethod</code> below), only to find out that for some reason Python simply ignores the parameter default values defined in <code>__init__()</code> and, for some weird reason, puts existing variables in their place!</p> <p>Can someone please explain to me what is going on here, and how to make the class behave like a class?</p> <p>Basically, I have lots of code that generates <code>PolicyJSON</code> instances with hierarchical data, and then I do this:</p> <pre><code># here is lots of code that also creates PolicyJSON # instances, then I have something like below: def mymethod(someparams): result = PolicyJSON() return result myinstance = mymethod(...) </code></pre> <p>At this point, result, instead of being empty, already has data filled in - I can do result.children and it will return actual data! When I paused the debugger at that <code>result = PolicyJSON()</code> line and then stepped in (F11), the <code>__init__()</code> constructor had its parameters filled in with data (i.e. <code>children:list=[]</code> was not <code>[]</code>, but an actual array with data). How? Why? How can I make it behave &quot;the normal&quot; way?</p>
<python>
2024-06-04 00:11:21
1
10,128
ojek
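What the asker ran into is Python's standard mutable-default-argument behavior: `values:dict={}` and `children:list=[]` are evaluated once, at `def` time, so every `PolicyJSON()` created without arguments shares the same dict and list objects. A minimal sketch of the problem and the usual `None`-sentinel fix (class names are hypothetical, not the asker's real class):

```python
# Default values are evaluated ONCE, when `def` runs, so a mutable default
# is one shared object across every call that omits the argument.
class Broken:
    def __init__(self, children=[]):          # one list, created at def time
        self.children = children

class Fixed:
    def __init__(self, children=None):        # sentinel: build a fresh list
        self.children = [] if children is None else children

a, b = Broken(), Broken()
a.children.append("x")    # b.children is the SAME list object, so b sees it
c, d = Fixed(), Fixed()
c.children.append("x")    # d.children is a separate list and stays empty
```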
78,572,746
5,042,280
How to use Python Click's `ctx.with_resource` to capture tracebacks in (sub-)commands/groups
<p>In Click, <code>Context.with_resource</code>'s <a href="https://click.palletsprojects.com/en/8.1.x/api/#click.Context.with_resource" rel="nofollow noreferrer">documentation</a> states it can be used to:</p> <blockquote> <p>[r]egister a resource as if it were used in a <code>with</code> statement. The resource will be cleaned up when the context is popped.</p> </blockquote> <p>From this, I understand that the context manager that I pass to <code>Context.with_resource</code> will be exited after execution of the root CLI group and any of its sub-groups and commands. This seems to work fine with an example such as this, where I am redirecting <code>stdout</code> to a file:</p> <pre class="lang-py prettyprint-override"><code>import contextlib import click @contextlib.contextmanager def my_redirect_stdout(file): with open(file, mode=&quot;w&quot;) as fp: with contextlib.redirect_stdout(fp): yield @click.group() @click.pass_context @click.argument(&quot;file&quot;) def cli(ctx, file): ctx.with_resource(my_redirect_stdout(file)) @cli.command() def hello(): print(f&quot;this goes to a file&quot;) if __name__ == &quot;__main__&quot;: cli() </code></pre> <p>However, this does not work when I try to capture exception tracebacks in the following way:</p> <pre class="lang-py prettyprint-override"><code>import contextlib import sys import traceback import click @contextlib.contextmanager def capture_traceback(file): with open(file, mode=&quot;w&quot;) as fp: try: yield except: print(f&quot;exception!&quot;) traceback.print_exc(file=fp) sys.exit(1) @click.group() @click.pass_context @click.argument(&quot;file&quot;) def cli(ctx, file): ctx.with_resource(capture_traceback(file)) @cli.command() def hello(): raise ValueError(&quot;error!&quot;) if __name__ == &quot;__main__&quot;: cli() </code></pre> <p>The exception does not seem to be caught within the <code>capture_traceback</code> function. Instead, it is printed by the interpreter as usual. 
It seems as though Click is catching the error, closing the context manager, and then re-raising. How can I catch exceptions from any group or command, print the traceback to a file, and exit the program (without printing the traceback to the terminal/<code>stderr</code>)?</p>
<python><python-click>
2024-06-03 22:47:04
1
506
Adam
78,572,702
1,718,989
How to Slice Multi Dimensional Array?
<p>Dipping my toes into multi dimensional array slicing in Python, I am hitting a wall of confusion with the following code:</p> <pre><code># Example 2D array matrix = [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ] # Basic slicing result = matrix[0:2][1:2] print(result) </code></pre> <p>So when I read this example with the following line: result = matrix[0:2][1:2]</p> <p>I read this as <code>result = matrix[rows][columns]</code></p> <p>So to me I would think that this should return the first two rows of the matrix, and then the specified columns as per the slice [1:2].</p> <p>So I would imagine it should first return the rows -&gt; <code>[1, 2, 3], [4, 5, 6]</code></p> <p>Then after this it should take another slice with [1:2], so I would guess it should return [5]? Row 1 and up to the 2nd element?</p> <p>When I run the code, the following is the result:</p> <p><code>[[4,5,6]]</code></p> <p>I am totally wrong, so I was wondering what I am missing.</p> <p>I tried to google this kind of syntax where it has matrix[X:X][X:X], but I was not able to find out what this is called, so if anyone can shed some light or point me in the right direction I would greatly appreciate it!</p>
<python><arrays><multidimensional-array><slice>
2024-06-03 22:29:16
1
311
chilly8063
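The chained subscripts in the question above are two independent list operations, not a row/column selector: `matrix[0:2]` first builds a new list of rows, and the second `[1:2]` then slices that intermediate list (still selecting rows), which is why the result is `[[4, 5, 6]]`. Selecting rows and then columns from a plain nested list takes a comprehension (or NumPy, which does have true 2-D `[rows, cols]` indexing). A small sketch:

```python
matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

step1 = matrix[0:2]      # rows 0..1 -> [[1, 2, 3], [4, 5, 6]]
step2 = step1[1:2]       # slices THAT list of rows -> [[4, 5, 6]]

# Rows 0..1, column 1 only: slice each selected row individually.
rows_then_cols = [row[1:2] for row in matrix[0:2]]   # [[2], [5]]
```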
78,572,653
6,666,008
Automating pdf to pdf/A-2b conversion using ghostscript. How to overcome icc color profiling error?
<p>I'm trying to figure out how to convert pdf's to pdfa2b format for archiving. The batch process only takes in 600-800 files at a time, and we have over half a million files. It would take an eternity if we do it one by one (probably 20 months at this rate). Any help is appreciated. Is there a way within Adobe to achieve automation, or would anyone be able to help me point in the right direction with respect to open-source scripts?</p> <p>Note: Apart from adobe tools, Ive also tried using Ghostscript. I'm hitting a wall with respect to adding the proper color profiles.</p> <p>PDF VALIDATION ERROR WITHOUT ADDING THE ICC PROFILE:</p> <pre><code>Device process color used but no PDF/A OutputIntent Has Output Intent Base color space name Outside visible page area </code></pre> <p>Ghostscript parameters in python:</p> <pre><code>gs_command = [ r&quot;C:\Program Files\gs\gs9.55.0\bin\gswin64c.exe&quot;, # Full path to the Ghostscript executable &quot;-dPDFA=2&quot;, &quot;-dBATCH&quot;, &quot;-dNOPAUSE&quot;, &quot;-sDEVICE=pdfwrite&quot;, f&quot;-sColorConversionStrategy={color_conversion_strategy}&quot;, f&quot;-sProcessColorModel={process_color_model}&quot;, f&quot;-sOutputICCProfile={icc_profile_path}&quot;, # Path to the ICC profile &quot;-sPDFACompatibilityPolicy=1&quot;, f&quot;-sOutputFile={output_pdf}&quot;, input_pdf ] </code></pre> <p>Error:</p> <blockquote> <p>Error: /undefined in --runpdf-- Operand stack: --nostringval-- 1<br /> 0 --nostringval-- ( **** Error: PDF interpreter encountered an error processing the file.\n) Execution stack: %interp_exit<br /> .runexec2 --nostringval-- runpdf --nostringval-- 2<br /> %stopped_push --nostringval-- runpdf runpdf false 1<br /> %stopped_push 1949 1 3 %oparray_pop 1948 1 3<br /> %oparray_pop 1933 1 3 %oparray_pop 1934 1 3<br /> %oparray_pop runpdf Dictionary stack: --dict:753/1123(ro)(G)--<br /> --dict:0/20(G)-- --dict:86/200(L)-- --dict:2/10(L)-- Current allocation mode is local Last OS error: Permission deniedError: 
Command '['D:\Projects\PDFProcessing\packages\gs10.03.1\bin\gswin64c.exe', '-dPDFA=2', '-dBATCH', '-dNOPAUSE', '-sProcessColorModel=DeviceCMYK', '-sDEVICE=pdfwrite', '-sColorConversionStrategy=CMYK', '-sProcessColorModel=DeviceCMYK', '-sOutputICCProfile=../packages/Adobe ICC Profiles (end-user)/Generic Gray Gamma 2.2 Profile.icc', '-sPDFACompatibilityPolicy=1', '-sOutputFile=../resources/output_pdfa2b.pdf', '../resources/test.pdf']' returned non-zero exit status 1. GPL Ghostscript 10.03.1: Unrecoverable error, exit code 1 Command output: None</p> </blockquote>
<python><adobe><ghostscript><pdfa>
2024-06-03 22:09:22
1
682
ss301
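On the throughput side of the question above, each Ghostscript invocation is independent, so half a million files can be fanned out across worker processes with only the standard library. A hedged sketch (the `gs` binary path, ICC profile, and file locations are placeholders; the flag list mirrors the asker's own command, not a verified PDF/A recipe):

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

GS = r"C:\Program Files\gs\gs9.55.0\bin\gswin64c.exe"   # placeholder path

def build_command(src: Path, dst: Path, icc: Path) -> list:
    # Mirrors the asker's flags; ICC profile path is an assumption.
    return [
        GS, "-dPDFA=2", "-dBATCH", "-dNOPAUSE", "-sDEVICE=pdfwrite",
        "-sColorConversionStrategy=CMYK", "-sProcessColorModel=DeviceCMYK",
        f"-sOutputICCProfile={icc}", "-sPDFACompatibilityPolicy=1",
        f"-sOutputFile={dst}", str(src),
    ]

def convert(src: Path) -> int:
    dst = src.with_suffix(".pdfa.pdf")
    return subprocess.run(build_command(src, dst, Path("cmyk.icc"))).returncode

def convert_all(files, workers=8):
    # One Ghostscript process per file, `workers` at a time.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(convert, files))
```

Separately, the asker's failing run pairs a grayscale profile ("Generic Gray Gamma 2.2") with `-sProcessColorModel=DeviceCMYK`; the output ICC profile generally needs to match the process color model (a CMYK profile for CMYK), which may be why the PDF/A OutputIntent check fails.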
78,572,444
2,153,235
Python regular expression adorns string with visible delimiters, yields extra delimiters
<p>I am fairly new to Python and pandas. In my data cleaning, I would like to verify that I performed previous cleaning steps correctly on a string column. In particular, I want to see where the strings begin and end, regardless of whether they have leading/trailing white space.</p> <p>The following is meant to bookend each string with a pair of single underscores, but it seems to generate two extra unintended underscores at the end, resulting in a total of three trailing underscores:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'A':['DOG']}) &gt;&gt;&gt; df.A.str.replace(r'(.*)',r'_\1_',regex=True) 0 _DOG___ Name: A, dtype: object </code></pre> <p>I'm not entirely new to regular expressions, having used them with <code>sed</code>, <code>vim</code>, and <code>Matlab</code>. What is it about <a href="https://docs.python.org/3/library/re.html" rel="nofollow noreferrer">Python's implementation</a> that I'm not understanding?</p> <p>I am using Python 3.9 for compatibility with other work.</p>
<python><pandas><regex>
2024-06-03 20:41:02
0
1,265
user2153235
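The three trailing underscores in the question above are not a pandas quirk: since Python 3.7, `re.sub` also substitutes the zero-width match that `.*` finds at the end of the string, so `'DOG'` gets `_DOG_` for the full match plus an extra `__` for the trailing empty match. Anchoring the pattern so it can only match once, or skipping regex entirely, avoids it. A small sketch:

```python
import re

s = "DOG"
wrapped = re.sub(r"(.*)", r"_\1_", s)       # '_DOG___': full match + empty match at end
anchored = re.sub(r"\A(.*)\Z", r"_\1_", s)  # '_DOG_': \A pins the pattern to one match
simplest = "_" + s + "_"                    # no regex; in pandas: '_' + df.A + '_'
```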
78,572,415
10,994,166
Create a Sparse vector from Pyspark dataframe maintaining the index
<p>I have pyspark df like this:</p> <pre><code>+--------------------+-------+----------+----------+----------+----------+--------+ | user_id|game_id|3mon_views|3mon_carts|3mon_trans|3mon_views| dt| +--------------------+-------+----------+----------+----------+----------+--------+ |0006e38c8968431f8...|0418034| 1.0| 0.0| 0.0| 0.0|20230813| |0006e38c8968431f8...|0501080| 0.0| 1.0| 0.0| 0.0|20230813| |0006e38c8968431f8...|0601010| 3.0| 0.0| 0.0| 0.0|20230813| |0006e38c8968431f8...|0602002| 0.0| 2.0| 0.0| 0.0|20230813| |0006e38c8968431f8...|0603006| 0.0| 0.0| 5.0| 0.0|20230813| |0006e38c8968431f8...|0605004| 0.0| 0.0| 0.0| 1.0|20230813| |0006e38c8968431f8...|0608002| 0.0| 0.0| 0.0| 2.0|20230813| |0006e38c8968431f8...|0608006| 0.0| 0.0| 2.0| 0.0|20230813| |0006e38c8968431f8...|0608007| 0.0| 0.0| 0.0| 4.0|20230813| |0006e38c8968431f8...|0611004| 0.0| 1.0| 0.0| 0.0|20230813| |0006e38c8968431f8...|0614001| 0.0| 0.0| 0.0| 1.0|20230813| |0006e38c8968431f8...|0614008| 0.0| 0.0| 0.0| 2.0|20230813| |0006e38c8968431f8...|0615007| 9.0| 0.0| 0.0| 0.0|20230813| |0006e38c8968431f8...|1101004| 10.0| 0.0| 15.0| 0.0|20230813| |0006e38c8968431f8...|1101007| 0.0| 0.0| 5.0| 3.0|20230813| +--------------------+-------+----------+----------+----------+----------+--------+ </code></pre> <p>where for given <code>user_id, game_id</code> and given <code>dt</code> we have <code>4</code> features <code>3mon_views|3mon_carts|3mon_trans|3mon_views</code>. We have total <code>1000</code> unique <code>game_id</code></p> <p>Now I want to create a sparse matrix of <code>user_id</code> for given <code>dt</code> which will have all the features for that users corresponding to that game if feature for that <code>game_id</code> is present in user or else it'll be 0. 
So our sparse vector will be of shape <code>(4*1000, )</code>. Now, how can I create a sparse matrix after grouping by <code>user, dt</code> while maintaining an index for each <code>game_id</code>? For example, the 10th index in the sparse vector will belong to a specific <code>game_id, feature</code> combination; each index will belong to a specific <code>game_id, feature</code> pair, and the value will be the score from the dataframe if it exists, or 0.</p> <p>Result df:</p> <pre><code>user_id | sparsefeat_vec | dt </code></pre>
<python><dataframe><pyspark><apache-spark-sql><sparse-matrix>
2024-06-03 20:33:50
0
923
Chris_007
78,572,370
2,115,971
Difference between accuracy during training and accuracy during testing
<p>In the model below the accuracy reported at the end of the final validation stage is 0.46, but when reported during the manual testing the value is 0.53. What can account for this discrepancy?</p> <pre class="lang-py prettyprint-override"><code>import torch from torch import nn import torchvision.models as models import pytorch_lightning as pl from torchmetrics.classification import BinaryAccuracy from pytorch_lightning.loggers import NeptuneLogger from torch.utils.data import DataLoader, TensorDataset from os import environ class ResNet(pl.LightningModule): def __init__(self, n_classes=1, n_channels=3, lr=1e-3): super().__init__() self.save_hyperparameters() self.validation_accuracy = BinaryAccuracy() backbone = models.resnet18(pretrained=True) n_filters = backbone.fc.in_features if n_channels != 3: backbone.conv1 = nn.Conv2d(n_channels, 64, kernel_size=7, stride=2, padding=3, bias=False) layers = list(backbone.children())[:-1] self.feature_extractor = nn.Sequential(*layers) self.classifier = nn.Linear(n_filters, n_classes) self.loss_fn = nn.BCEWithLogitsLoss() def forward(self, x): features = self.feature_extractor(x) logits = self.classifier(features.squeeze()) return logits def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=self.hparams.lr) def training_step(self, batch, batch_idx): x, y = batch y_hat = self(x) loss = self.loss_fn(y_hat, y) self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True) return loss def validation_step(self, batch, batch_idx): x, y = batch y_hat = self(x) loss = self.loss_fn(y_hat, y) self.validation_accuracy.update(y_hat, y) self.log('val_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True) return loss def on_validation_epoch_end(self): val_acc = self.validation_accuracy.compute() self.log('val_acc', val_acc, on_epoch=True, prog_bar=True, logger=True) self.validation_accuracy.reset() def predict_step(self, batch, batch_idx): x, y = batch y_hat = self(x) preds 
= torch.sigmoid(y_hat) &gt; 0.5 return preds, y def get_dataloader(): # Create a simple synthetic dataset for demonstration purposes x = torch.randn(100, 3, 224, 224) y = torch.randint(0, 2, (100, 1)).float() dataset = TensorDataset(x, y) return DataLoader(dataset, batch_size=8) # Set up model, data, and trainer model = ResNet() train_loader = get_dataloader() val_loader = get_dataloader() logger = NeptuneLogger( api_key=environ.get(&quot;NEPTUNE_API_TOKEN&quot;), project=&quot;richbai90/ResnetTest&quot;, tags=[&quot;MRE&quot;, &quot;resnet&quot;], ) trainer = pl.Trainer(max_epochs=3, logger=logger, log_every_n_steps=1) # Train and validate the model trainer.fit(model, train_loader, val_loader) # Predict on validation data val_loader = get_dataloader() preds, targets = [], [] for batch in val_loader: batch_preds, batch_targets = model.predict_step(batch, 0) preds.extend(batch_preds) targets.extend(batch_targets) # Calculate accuracy manually for comparison preds = torch.stack(preds).view(-1) targets = torch.stack(targets).view(-1) manual_accuracy = (preds == targets).float().mean().item() print(f&quot;Manual accuracy: {manual_accuracy:.4f}&quot;) </code></pre>
<python><machine-learning><pytorch><pytorch-lightning>
2024-06-03 20:17:59
1
5,244
richbai90
78,572,233
23,260,297
Rename all files in a directory with a specific format
<p>I have 100s of files in a directory that are all in the same format like this:</p> <pre><code>Recon-2024Jun03.xlsx Recon-2024Jun02.xlsx Recon-2024Jun01.xlsx etc... </code></pre> <p>I need to rename them all with a new format like this:</p> <pre><code>Recon-240603.xlsx Recon-240602.xlsx Recon-240601.xlsx etc... </code></pre> <p>I have this piece of code, but I am unsure what to put inside the replace function:</p> <pre><code>import os [os.rename(f, f.replace('.', '.')) for f in os.listdir(path)] </code></pre> <p>here is the date format I need</p> <pre><code>'%y%m%d' </code></pre>
<python><regex>
2024-06-03 19:34:49
1
2,185
iBeMeltin
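The rename in the question above is a date reparse rather than a literal text replace: pull `2024Jun03` out of the name, run it through `datetime.strptime` with `'%Y%b%d'`, and format it back with the asker's target `'%y%m%d'`. A sketch (the directory loop is shown only in comments, and `%b` assumes an English locale for month abbreviations):

```python
import re
from datetime import datetime

def new_name(filename: str) -> str:
    # 'Recon-2024Jun03.xlsx' -> 'Recon-240603.xlsx'
    def reformat(m) -> str:
        return datetime.strptime(m.group(0), "%Y%b%d").strftime("%y%m%d")
    return re.sub(r"\d{4}[A-Za-z]{3}\d{2}", reformat, filename)

# Applied over a directory (path is illustrative):
# for f in os.listdir(path):
#     os.rename(os.path.join(path, f), os.path.join(path, new_name(f)))
```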
78,572,163
832,230
Python threading lock with cooldown period for rate-limiting
<p>I am using Python 3.12 or newer. I need a threading lock, blocking only, with a cooldown feature that serves as a simple rate limiter. This will help me to avoid hammering a resource excessively.</p> <p>Expected features:</p> <ol> <li>It must have the typical mutual exclusion feature of a threading lock.</li> <li>It must not be possible to reacquire it in the 1 second following its last release. This is the cooldown period which is set at initialization.</li> </ol> <p>The correspondingly expected interface is:</p> <pre><code>def __init__(self, cooldown: float = 1) def acquire(self) # Note: For simplicity, there is no 'blocking' or 'timeout' arg. # As with `threading.Lock`: def release(self) # Must not block! def __enter__(self) def __exit__(self, exc_type, exc_val, exc_tb): </code></pre> <p>I have a current solution which is posted below as an <a href="https://stackoverflow.com/a/78572164/">answer</a>, but I feel that it is not 100% thread safe due to a potential race condition, although it is close to it.</p> <p>Third-party packages are also acceptable if they have an accurate implementation.</p>
<python><multithreading><locking><rate-limiting>
2024-06-03 19:14:25
1
64,534
Asclepius
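One way to close the race the asker worries about is to do the cooldown wait on the *acquirer's* side, after the mutex is already held: `release` only records the next-allowed time, and the next `acquire` sleeps out any remainder while holding the lock, so no thread can slip in early. A hedged sketch of the interface given in the question (one possible design, not the asker's own solution):

```python
import threading
import time

class CooldownLock:
    """Mutex that cannot be reacquired for `cooldown` s after each release."""

    def __init__(self, cooldown: float = 1):
        self._cooldown = cooldown
        self._lock = threading.Lock()
        self._meta = threading.Lock()   # guards _ready_at
        self._ready_at = 0.0            # monotonic time of next allowed acquire

    def acquire(self) -> None:
        self._lock.acquire()
        # We now hold the mutex, so _ready_at cannot change under us;
        # sleep out whatever cooldown remains before returning.
        with self._meta:
            delay = self._ready_at - time.monotonic()
        if delay > 0:
            time.sleep(delay)

    def release(self) -> None:
        # Only touches a tiny critical section, so it effectively never blocks.
        with self._meta:
            self._ready_at = time.monotonic() + self._cooldown
        self._lock.release()

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.release()
```

Successive successful acquisitions are thus spaced at least `cooldown` apart, regardless of which thread acquires.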
78,572,051
4,663,429
How to create custom Pydantic type for Python's "ElementTree"
<p>I want to create a custom Pydantic type for Python's <code>ElementTree</code>. It should accept a string as input, parse the XML string via <code>ElementTree.fromstring</code>, and raise an appropriate error if invalid XML is found.</p> <p>I tried creating a custom type using Pydantic's <code>Annotated</code> mechanism but got an error saying <code>ElementTree.Element</code> does not define <code>__get_pydantic_core_schema__</code>.</p>
<python><pydantic><pydantic-v2>
2024-06-03 18:45:55
1
361
rishabhc32
78,572,006
982,402
AutoIt library not clicking dropdown of windows print pop-up using robot framework
<p>I am trying to select the <code>Save as PDF</code> option from the print pop-up of Windows, for which I have used <code>AutoItLibrary</code> along with <code>Robot Framework</code>. My code is below, but I am unable to click the <code>Destination</code> dropdown of the print pop-up. Please help me understand what could be wrong here. Is the <code>Intermediate D3D Window</code> value wrong? I got this <code>Class</code> value from the <code>AutoIt v3 window info tool</code>. Please also suggest any alternative solution.</p> <pre><code>*** Settings *** Library AutoItLibrary Library AutoItLibrary print_try open browser https://wcmshelp.ucsc.edu/advanced/print-button.html browser=Chrome Maximize Browser Window click element //button[.='Print this page'] Sleep 30s Win Wait Active Creating a Print Button Win Activate Creating a Print Button Control Focus Creating a Print Button ${EMPTY} Intermediate D3D Window Control Click Creating a Print Button ${EMPTY} Intermediate D3D Window Left Sleep 10s </code></pre> <p><a href="https://i.sstatic.net/v8XGMuYo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8XGMuYo.png" alt="enter image description here" /></a></p> <p>Please note the URL used is just an example; I do not own this URL.</p>
<python><robotframework><ui-automation><autoit>
2024-06-03 18:34:56
1
1,719
Anna
78,571,748
14,073,111
Encrypt message body using kerberos
<p>How, we would encrypt me message body of the request using kerberos? I have kerberos ticket active.</p> <p>My main goal is to send first request as authentication part and it is ok, i get 200 from the WEC. This is the code i am using:</p> <pre><code>import requests import kerberos def get_kerberos_token(service): __, krb_context = kerberos.authGSSClientInit(service) kerberos.authGSSClientStep(krb_context, &quot;&quot;) negotiate_details = kerberos.authGSSClientResponse(krb_context) return negotiate_details soap_message = &quot;soap xml&quot; auth_url = &quot;http://localhost:8081/wsman&quot; auth_headers = { 'Host': 'localhost:8081', 'Content-Length': '0', 'Authorization': f'Kerberos {get_kerberos_token(service=&quot;http@localhost&quot;)}', 'Content-Type': 'application/soap+xml;charset=UTF-8', 'Accept-Encoding': 'gzip' } session = requests.Session() auth_response = session.post(auth_url, headers=auth_headers) print(f&quot;Auth Response status code: {auth_response.status_code}&quot;) print(f&quot;Auth Response content: {auth_response.text}&quot;) boundary = &quot;Encrypted Boundary&quot; payload = f&quot;--{boundary}\r\nContent-Type: application/HTTP-Kerberos-session-encrypted\r\nOriginalContent: type=application/soap+xml;charset=UTF-8;Length={len(soap_message)}\r\n--{boundary}\r\nContent-Type: application/octet-stream\r\n{soap_message.encode('utf-8')}--{boundary}\r\n&quot; # Define headers for the second request soap_headers = { 'Host': 'localhost:8081', 'User-Agent': 'Go-http-client/1.1', 'Content-Type': f'multipart/encrypted;protocol=&quot;application/HTTP-Kerberos-session-encrypted&quot;;boundary=&quot;{boundary}&quot;', 'Accept-Encoding': 'gzip' } # Send the second request with the multipart/encrypted body soap_url = &quot;http://localhost:8081/wsman/subscriptions/24f5eb95-d9b1-1005-8062-697970657274/0&quot; soap_response = session.post(soap_url, headers=soap_headers, data=payload) print(f&quot;SOAP Response status code: {soap_response.status_code}&quot;) 
print(f&quot;SOAP Response content: {soap_response.text}&quot;) </code></pre> <p>This is what I get from the WEC if I run it:</p> <pre><code>Auth Response status code: 200 Auth Response content: SOAP Response status code: 400 SOAP Response content: failed to unwrap kerberos packet </code></pre> <p>From what I see, the request body of the second request needs to be encrypted with Kerberos, which I don't know how to do. Is there a way to do it?</p> <p>As a reference, this is what a WEC is expecting as well -&gt; <a href="https://github.com/cea-sec/openwec/blob/main/doc/protocol.md#the-client-sends-a-enumerate-request" rel="nofollow noreferrer">https://github.com/cea-sec/openwec/blob/main/doc/protocol.md#the-client-sends-a-enumerate-request</a></p>
<python><encryption><http-post><kerberos>
2024-06-03 17:28:23
1
631
user14073111
78,571,618
12,415,855
Selenium / How to select a value from a select-box?
<p>I try to select the option &quot;Psychatrie&quot; on a website using the following code:</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By print(f&quot;Checking Browser driver...&quot;) options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_argument('--log-level=3') options.add_experimental_option(&quot;prefs&quot;, {&quot;profile.default_content_setting_values.notifications&quot;: 1}) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service() link = &quot;https://asu.kvs-sachsen.de/arztsuche/pages/search.jsf&quot; driver = webdriver.Chrome (service=srv, options=options) waitWD = WebDriverWait (driver, 10) driver.get (link) waitWD.until(EC.presence_of_element_located((By.XPATH,'//div[@id=&quot;searchForm:specialismDetail:selectWindow&quot;]'))).send_keys(&quot;Psychatrie&quot;) </code></pre> <p>But i only get this error:</p> <pre><code>(selenium) C:\DEV\Fiverr2024\ORDER\schlosswaechter&gt;python temp1.py Checking Browser driver... 
Traceback (most recent call last): File &quot;C:\DEV\Fiverr2024\ORDER\schlosswaechter\temp1.py&quot;, line 23, in &lt;module&gt; waitWD.until(EC.presence_of_element_located((By.XPATH,'//div[@id=&quot;searchForm:specialismDetail:selectWindow&quot;]'))).send_keys(&quot;Psychatrie&quot;) File &quot;C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webelement.py&quot;, line 231, in send_keys self._execute( File &quot;C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webelement.py&quot;, line 395, in _execute return self._parent.execute(command, params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 347, in execute self.error_handler.check_response(response) File &quot;C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\errorhandler.py&quot;, line 229, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable (Session info: chrome=124.0.6367.202) Stacktrace: GetHandleVerifier [0x00007FF7E2331522+60802] (No symbol) [0x00007FF7E22AAC22] (No symbol) [0x00007FF7E2167B13] (No symbol) [0x00007FF7E21B09F7] (No symbol) [0x00007FF7E21AEB1A] (No symbol) [0x00007FF7E21DAB7A] (No symbol) [0x00007FF7E21AA7C6] (No symbol) [0x00007FF7E21DAD90] (No symbol) [0x00007FF7E21FA224] (No symbol) [0x00007FF7E21DA923] (No symbol) [0x00007FF7E21A8FEC] (No symbol) [0x00007FF7E21A9C21] GetHandleVerifier [0x00007FF7E26341BD+3217949] GetHandleVerifier [0x00007FF7E2676157+3488183] GetHandleVerifier [0x00007FF7E266F0DF+3459391] GetHandleVerifier [0x00007FF7E23EB8E6+823622] (No symbol) [0x00007FF7E22B5FBF] (No symbol) [0x00007FF7E22B0EE4] (No symbol) [0x00007FF7E22B1072] (No symbol) [0x00007FF7E22A18C4] BaseThreadInitThunk [0x00007FF8F4D0257D+29] RtlUserThreadStart [0x00007FF8F676AA48+40] </code></pre> <p>How can i select this value from the select-box on the 
page?</p>
<python><selenium-webdriver>
2024-06-03 16:52:09
1
1,515
Rapid1898
78,571,464
6,376,297
How to pass a list of strings as a python sys.argv in a script, when the script command must be executed from a string
<p>Suppose you have a <code>script.py</code> that takes various input arguments, so you can run it by a command line like:</p> <pre><code>python script.py arg1 arg2 </code></pre> <p><code>sys.argv</code> can be used <em>inside</em> <code>script.py</code> to read those arguments:</p> <pre><code>sys.argv[1] # gives you str(arg1) sys.argv[2] # gives you str(arg2) </code></pre> <p>Example of <code>script.py</code> :</p> <pre><code>import sys print(sys.argv[1]) print(sys.argv[2]) print(eval(sys.argv[2])) print(type(eval(sys.argv[2]))) </code></pre> <p>Suppose that the desired content of <code>arg2</code> is a <strong>list of strings</strong>, like <code>[&quot;x?47&quot;, &quot;b-12&quot;, &quot;k:4&quot;]</code>.</p> <p>If you are writing the command manually, you can of course do:</p> <pre><code>python script.py 156 '[&quot;x?47&quot;, &quot;b-12&quot;, &quot;k:4&quot;]' </code></pre> <p>By single-quoting outside the list, there is no problem.<br /> You get as output:</p> <pre><code>156 [&quot;x?47&quot;, &quot;b-12&quot;, &quot;k:4&quot;] ['x?47', 'b-12', 'k:4'] &lt;class 'list'&gt; </code></pre> <p>The last two lines indicate that string <code>sys.argv[2]</code> has been correctly interpreted as a list of strings, as desired. One could do further work with those strings.</p> <p>However, what if you must <strong>create</strong> the above command itself as a string <code>cmd</code> and execute it programmatically, e.g. 
by <code>os.system(cmd)</code>?</p> <p>I tried several different approaches, escaping by '', double quoting, etc., but none of them worked.<br /> It seems that <code>cmd</code> just cannot contain the two different types of quotes, or at least, not in a format that is then correctly processed by code like <code>os.system()</code>.</p> <p>Invariably, you end up with a <code>cmd</code> like:</p> <pre><code>python script.py 156 [&quot;x?47&quot;, &quot;b-12&quot;, &quot;k:4&quot;] </code></pre> <p>which obviously does not work:</p> <pre><code>156 [x?47, Traceback (most recent call last): File &quot;/mnt/code/PersonalFolders/GT/script.py&quot;, line 5, in &lt;module&gt; print(eval(sys.argv[2])) File &quot;&lt;string&gt;&quot;, line 1 [x?47, ^ SyntaxError: invalid syntax </code></pre> <p>I discussed this with some colleagues, who advised to either change the whole code, from using <code>sys.argv</code> to using <code>argparse</code> (probably best solution in the long term, but also more demanding to implement), or to pass the list as a single string containing only the strings that are inside the list, separated by commas or some other suitable character that I can split on.<br /> So in this case I would pass <code>&quot;x?47,b-12,k:4&quot;</code> and then do <code>sys.argv[2].split(',')</code>.</p> <p>I just wanted to understand from this expert community if it is really <em>wrong</em> to want to pass a list of strings as an argument in this situation, or if we missed something that would resolve the issue without having to change the script.</p> <hr /> <p><strong>EDIT</strong> : trying out John Gordon's solution</p> <p><a href="https://i.sstatic.net/6fF0GwBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6fF0GwBM.png" alt="enter image description here" /></a></p> <p>So this solution works for <code>os.system()</code>, which I mentioned as an example.<br /> On retry, it actually also works with the original job launcher I am using, namely Domino 
job_start (see <a href="https://github.com/dominodatalab/python-domino/blob/master/domino/domino.py" rel="nofollow noreferrer">https://github.com/dominodatalab/python-domino/blob/master/domino/domino.py</a>).</p>
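Not wrong at all — the list just has to survive the shell. A hedged sketch (the script name and arguments are taken from the question): when the command string is assembled in Python rather than typed, `shlex.quote` can add the outer single quotes programmatically, exactly as you would by hand:

```python
import json
import shlex

args_list = ["x?47", "b-12", "k:4"]
# shlex.quote wraps the serialized list in single quotes (escaping any
# embedded quotes), so the shell delivers it as a single argv entry.
cmd = "python script.py 156 " + shlex.quote(json.dumps(args_list))
print(cmd)
```

`os.system(cmd)` then hands `script.py` the same quoted string as the manual invocation; inside the script, `json.loads(sys.argv[2])` is a safer drop-in replacement for `eval`.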
<python>
2024-06-03 16:14:10
2
657
user6376297
78,571,410
11,233,365
Creating optional argparse arguments, and ones that takes either None or a str as input
<p>I'm writing <code>argparse</code> arguments for a command line function where:</p> <ol> <li>More than one of the functions are optional, and</li> <li>One of the functions defaults to <code>None</code> if no file path string is provided</li> </ol> <p>The code is as follows, and I would like to know if I've done both cases of it correctly.</p> <pre class="lang-py prettyprint-override"><code>import argparse # The function to be run def run(): parser = argparse.ArgumentParser( description=&quot;Convert individual image files into a stack&quot; ) # This is the mandatory argument parser.add_argument( nargs=1, dest=&quot;file&quot;, type=str, help=&quot;Path to any one of the files from the series to be processed&quot;, ) # This is one optional argument, which has a default str input parser.add_argument( &quot;--root-dir&quot;, default=&quot;images&quot;, type=str, help=&quot;Top subdirectory that raw files are stored in. Used to determine destination of the created image stacks&quot;, ) # This is the second optional argument, which should default to None if not included parser.add_argument( &quot;--metadata&quot;, default=None, type=str, help=&quot;Path to the metadata file associated with this dataset. If not provided, the script will use relative file paths to find what it thinks is the appropriate file&quot;, ) args = parser.parse_args() # The function they are plugged into create_the_stack(file=args.file, root_dir=args.root_dir, metadata=args.metadata) </code></pre>
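Broadly yes, with one wrinkle: `nargs=1` makes `args.file` a one-element *list*, not a string. A minimal sketch of the same three arguments (names taken from the question) showing that a bare positional yields a plain `str` and an omitted `--metadata` really is `None`:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("file")                       # positional: plain str, no nargs needed
parser.add_argument("--root-dir", default="images")
parser.add_argument("--metadata", default=None)   # stays None when not supplied

args = parser.parse_args(["some/file.tif"])
print(args.file, args.root_dir, args.metadata)
```

Note that argparse turns `--root-dir` into the attribute `args.root_dir` automatically.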
<python><argparse>
2024-06-03 16:03:25
0
301
TheEponymousProgrammer
78,571,372
2,977,256
Spatial data structures with efficient dynamic updates
<p>I am looking for a Python library which does approximate (or exact, no problem with that) nearest neighbor search and which has fast dynamic updates (it seems that the scipy.spatial ones, like KDTree or BallTree, do not). It would be even better if this were GPU compatible, but we should not be too greedy.</p>
<python><spatial><knn>
2024-06-03 15:54:55
1
4,872
Igor Rivin
78,571,287
5,924,264
How to ensure subprocess sees mock patched variable?
<p>I have unit tests that look like the following:</p> <pre><code>with patch('my_module.file_in_module.SOME_GLOBAL_VARIABLE', new=mocked_var): subprocess.check_call([ # CLI omitted ]) </code></pre> <p>If I print <code>my_module.file_in_module.SOME_GLOBAL_VARIABLE</code> right before the <code>subprocess</code> call it prints as <code>mocked_var</code>. However, when I step into the actual functionality that's being executed by the subprocess, and print <code>SOME_GLOBAL_VARIABLE</code> from <code>file_in_module</code>, it prints as the non-mocked variable. I think this makes sense since the <code>mock</code> only applies to the main thread and not to the subprocess. How can I mock the variable the subprocess sees in this case?</p>
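Exactly — the patch lives only in the parent interpreter; the subprocess re-imports `file_in_module` from scratch. One hedged workaround (the variable name is copied from the question; the module would have to be written to read it from the environment, e.g. `SOME_GLOBAL_VARIABLE = os.environ.get("SOME_GLOBAL_VARIABLE", default)`, which is an assumption here) is to pass the override through the inherited environment:

```python
import os
import subprocess
import sys

# Children inherit the parent's environment, so the override crosses the
# process boundary even though an in-memory mock.patch cannot.
env = dict(os.environ, SOME_GLOBAL_VARIABLE="mocked_value")
out = subprocess.check_output(
    [sys.executable, "-c",
     "import os; print(os.environ['SOME_GLOBAL_VARIABLE'])"],
    env=env,
)
print(out.decode().strip())
```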
<python><mocking><subprocess>
2024-06-03 15:36:17
1
2,502
roulette01
78,571,286
6,622,697
Could not translate host name "jdbc:postgresql" in Django and Python
<p>I've successfully created a Django project in Pycharm to talk to the default Sqlite3. But now I'd like to switch to Postgress. I have the following in <code>settings.py</code></p> <pre><code>DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'postgres', 'USER': 'postgres', 'PASSWORD': 'postgres', 'HOST': 'jdbc:postgresql://localhost:5432/postgres', 'PORT': '5432' } } </code></pre> <p>But I get this error when trying to run</p> <pre><code>django.db.utils.OperationalError: could not translate host name &quot;jdbc:postgresql://localhost:5432/postgres&quot; to address: Name or service not known </code></pre> <p>I get the same error when trying to run <code>migrate</code>, which I assume is necessary to create the tables</p>
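The `jdbc:postgresql://...` URL is Java's connection-string format; Django's PostgreSQL backend expects the bare host name, with the port in its own key. A sketch of the corrected settings (credentials copied from the question):

```python
# settings.py -- HOST is just the host name, not a JDBC URL.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "localhost",
        "PORT": "5432",
    }
}
print(DATABASES["default"]["HOST"])
```

With this in place, `python manage.py migrate` should reach the server (assuming the `psycopg2`/`psycopg` driver is installed).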
<python><django><postgresql><pycharm>
2024-06-03 15:36:09
2
1,348
Peter Kronenberg
78,571,230
2,522,673
Fine-tuning the llama3 model with torchtune gives an error
<p>Im trying to fine tune the llama3 model with torch tune.</p> <p>these are the steps that ive already done :</p> <pre><code>1.pip install torch 2.pip install torchtune 3.tune download meta-llama/Meta-Llama-3-8B --output-dir llama3 --hf-token ***(my token)*** 4.tune run lora_finetune_single_device --config llama3/8B_lora_single_device device=&quot;cpu&quot; </code></pre> <p>and then this error happens:</p> <pre><code>INFO:torchtune.utils.logging:Running LoRAFinetuneRecipeSingleDevice with resolved config: batch_size: 2 checkpointer: _component_: torchtune.utils.FullModelMetaCheckpointer checkpoint_dir: /tmp/Meta-Llama-3-8B/original/ checkpoint_files: - consolidated.00.pth model_type: LLAMA3 output_dir: /tmp/Meta-Llama-3-8B/ recipe_checkpoint: null compile: false dataset: _component_: torchtune.datasets.alpaca_cleaned_dataset train_on_input: true device: cpu dtype: bf16 enable_activation_checkpointing: true epochs: 1 gradient_accumulation_steps: 64 log_every_n_steps: null loss: _component_: torch.nn.CrossEntropyLoss lr_scheduler: _component_: torchtune.modules.get_cosine_schedule_with_warmup num_warmup_steps: 100 max_steps_per_epoch: null metric_logger: _component_: torchtune.utils.metric_logging.DiskLogger log_dir: /tmp/lora_finetune_output model: _component_: torchtune.models.llama3.lora_llama3_8b apply_lora_to_mlp: false apply_lora_to_output: false lora_alpha: 16 lora_attn_modules: - q_proj - v_proj lora_rank: 8 optimizer: _component_: torch.optim.AdamW lr: 0.0003 weight_decay: 0.01 output_dir: /tmp/lora_finetune_output profiler: _component_: torchtune.utils.profiler enabled: false resume_from_checkpoint: false seed: null shuffle: true tokenizer: _component_: torchtune.models.llama3.llama3_tokenizer path: /tmp/Meta-Llama-3-8B/original/tokenizer.model DEBUG:torchtune.utils.logging:Setting manual seed to local seed 2762364121. 
Local seed is seed + rank = 2762364121 + 0 Writing logs to /tmp/lora_finetune_output/log_1717420025.txt Traceback (most recent call last): File &quot;/home/ggpt/.local/bin/tune&quot;, line 8, in &lt;module&gt; sys.exit(main()) ^^^^^^ File &quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/_cli/tune.py&quot;, line 49, in main parser.run(args) File &quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/_cli/tune.py&quot;, line 43, in run args.func(args) File &quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/_cli/run.py&quot;, line 179, in _run_cmd self._run_single_device(args) File &quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/_cli/run.py&quot;, line 93, in _run_single_device runpy.run_path(str(args.recipe), run_name=&quot;__main__&quot;) File &quot;&lt;frozen runpy&gt;&quot;, line 286, in run_path File &quot;&lt;frozen runpy&gt;&quot;, line 98, in _run_module_code File &quot;&lt;frozen runpy&gt;&quot;, line 88, in _run_code File &quot;/home/ggpt/.local/lib/python3.12/site-packages/recipes/lora_finetune_single_device.py&quot;, line 510, in &lt;module&gt; sys.exit(recipe_main()) ^^^^^^^^^^^^^ File &quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/config/_parse.py&quot;, line 50, in wrapper sys.exit(recipe_main(conf)) ^^^^^^^^^^^^^^^^^ File &quot;/home/ggpt/.local/lib/python3.12/site-packages/recipes/lora_finetune_single_device.py&quot;, line 504, in recipe_main recipe.setup(cfg=cfg) File &quot;/home/ggpt/.local/lib/python3.12/site-packages/recipes/lora_finetune_single_device.py&quot;, line 182, in setup checkpoint_dict = self.load_checkpoint(cfg_checkpointer=cfg.checkpointer) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/ggpt/.local/lib/python3.12/site-packages/recipes/lora_finetune_single_device.py&quot;, line 135, in load_checkpoint self._checkpointer = config.instantiate( ^^^^^^^^^^^^^^^^^^^ File 
&quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/config/_instantiate.py&quot;, line 106, in instantiate return _instantiate_node(config, *args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/config/_instantiate.py&quot;, line 31, in _instantiate_node return _create_component(_component_, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/config/_instantiate.py&quot;, line 20, in _create_component return _component_(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/utils/_checkpointing/_checkpointer.py&quot;, line 517, in __init__ self._checkpoint_path = get_path(self._checkpoint_dir, checkpoint_files[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/ggpt/.local/lib/python3.12/site-packages/torchtune/utils/_checkpointing/_checkpointer_utils.py&quot;, line 44, in get_path raise ValueError(f&quot;{input_dir} is not a valid directory.&quot;) ValueError: /tmp/Meta-Llama-3-8B/original is not a valid directory. </code></pre> <p>should i copy the original folder from llama3 download path to /tmp folder ? its like 16g model. Can i gave the already downloaded model path to tune ?</p>
<python><pytorch><torchtune><llama3>
2024-06-03 15:25:27
1
1,718
Ahad Porkar
78,571,212
1,188,758
Implement WCF service in Python
<p>I have an existing web app developed in ASP.NET Framework 4.7.2 using C#, and it's very large. Because most NLP libraries (e.g. spaCy) are implemented in Python, I have the idea to define a WCF service interface to be called by the web app, but <strong>implement the WCF service in Python</strong> so I can use these specialized libraries, and avoid issues calling Python directly from C# or running separate Python process instances. I've been unable to find information on doing so. Or is doing something like using TCP sockets the only way? Any thoughts? Thanks.</p>
<python><c#><wcf>
2024-06-03 15:21:50
0
543
jtsoftware
78,571,193
2,973,447
Avoiding iteration in pandas when I want to update the value in a column x when a condition is true where x is given by another column
<p>I have the following pandas dataframe:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>key1</th> <th>key2</th> <th>col_name</th> <th>bool</th> <th>col_1</th> <th>col_2</th> <th>col_3</th> </tr> </thead> <tbody> <tr> <td>a1</td> <td>a2</td> <td>col_1</td> <td>0</td> <td>5</td> <td>10</td> <td>20</td> </tr> <tr> <td>b1</td> <td>b2</td> <td>col_3</td> <td>1</td> <td>10</td> <td>10</td> <td>5</td> </tr> <tr> <td>c1</td> <td>c2</td> <td>col_1</td> <td>1</td> <td>5</td> <td>15</td> <td>5</td> </tr> </tbody> </table></div> <p>Where bool==1, I would like to update the value in the column given by the col_name column to be 100.</p> <p>Expected output:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>key1</th> <th>key2</th> <th>col_name</th> <th>bool</th> <th>col_1</th> <th>col_2</th> <th>col_3</th> </tr> </thead> <tbody> <tr> <td>a1</td> <td>a2</td> <td>col_1</td> <td>0</td> <td>5</td> <td>10</td> <td>20</td> </tr> <tr> <td>b1</td> <td>b2</td> <td>col_3</td> <td>1</td> <td>10</td> <td>10</td> <td><strong>100</strong></td> </tr> <tr> <td>c1</td> <td>c2</td> <td>col_1</td> <td>1</td> <td><strong>100</strong></td> <td>15</td> <td>5</td> </tr> </tbody> </table></div> <p>I can do this by iterating through the table, but from what I've read this is never best practice. What would be the most efficient way of doing this?</p>
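One way to avoid per-row iteration is to loop only over the *distinct* values of `col_name` (typically a handful of columns, not thousands of rows) and make one vectorized masked assignment per target column — a sketch on the data from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "key1": ["a1", "b1", "c1"],
    "key2": ["a2", "b2", "c2"],
    "col_name": ["col_1", "col_3", "col_1"],
    "bool": [0, 1, 1],
    "col_1": [5, 10, 5],
    "col_2": [10, 10, 15],
    "col_3": [20, 5, 5],
})

# One vectorized assignment per *target column*, not per row.
for col in df.loc[df["bool"] == 1, "col_name"].unique():
    df.loc[(df["bool"] == 1) & (df["col_name"] == col), col] = 100

print(df[["col_1", "col_2", "col_3"]].to_string(index=False))
```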
<python><pandas>
2024-06-03 15:18:33
1
381
user2973447
78,571,071
5,985,921
Altair Show Axis Tick Labels As Percentage Point
<p>I would like to format axis tick labels as percentage point.</p> <pre class="lang-py prettyprint-override"><code>import altair as alt from vega_datasets import data source = data.jobs.url alt.Chart(source).mark_line().encode( alt.X('year:O'), alt.Y('perc:Q').axis(format='%'), alt.Color('sex:N') ).transform_filter( alt.datum.job == 'Welder' ) </code></pre> <p>So in the example above (taken from <a href="https://altair-viz.github.io/gallery/line_percent.html" rel="nofollow noreferrer">here</a>), it should show the y-axis tick labels as <code>'&lt;value&gt;%pt'</code> instead of <code>&lt;value&gt;%</code> - is this possible to achieve?</p>
<python><formatting><altair>
2024-06-03 14:57:16
1
1,651
clog14
78,570,966
6,037,956
Given a pandas DataFrame with several columns containing NaNs, the goal is to efficiently find the last non-null value for each row
<p>Given a pandas DataFrame with several columns containing potentially missing values (NaN), the goal is to efficiently find the last non-null value for each row. A similar question using polars DataFrame and solution is here: <a href="https://stackoverflow.com/q/77401947/6037956">Fill null values with the closest non-null value from other columns</a></p> <pre><code>data = { &quot;product_id&quot;: [&quot;1&quot;, &quot;2&quot;, &quot;3&quot;, &quot;4&quot;, &quot;5&quot;, &quot;6&quot;, &quot;7&quot;, &quot;8&quot;, &quot;9&quot;], &quot;col1&quot;: [&quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;a&quot;,], &quot;col2&quot;: [&quot;b&quot;, None, &quot;b&quot;, None, &quot;b&quot;, None, &quot;b&quot;, None, &quot;b&quot;], &quot;col3&quot;: [&quot;c&quot;, None, &quot;c&quot;, None, None, None, None, None, None], &quot;col4&quot;: [None, None, None, None, None, &quot;d&quot;, None, None, &quot;d&quot;] } df = pd.DataFrame(data) </code></pre> <p>Expected Output:</p> <pre><code> product_id col1 col2 col3 col4 row_wise_last_non_nulls 0 1 a b c None c 1 2 a None None None a 2 3 a b c None c 3 4 a None None None a 4 5 a b None None b 5 6 a None None d d 6 7 a b None None b 7 8 a None None None a 8 9 a b None d d </code></pre>
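A forward-fill across the columns (`axis=1`) pushes each row's last non-null value into the rightmost column, so the whole task reduces to a single vectorized expression — a sketch on a trimmed version of the data:

```python
import pandas as pd

df = pd.DataFrame({
    "product_id": ["1", "2", "5", "6"],
    "col1": ["a", "a", "a", "a"],
    "col2": ["b", None, "b", None],
    "col3": ["c", None, None, None],
    "col4": [None, None, None, "d"],
})

# ffill(axis=1) carries the last seen value rightwards; the final
# column then holds the row-wise last non-null.
value_cols = ["col1", "col2", "col3", "col4"]
df["row_wise_last_non_nulls"] = df[value_cols].ffill(axis=1).iloc[:, -1]
print(df["row_wise_last_non_nulls"].tolist())
```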
<python><pandas><numpy>
2024-06-03 14:36:30
2
2,072
Soudipta Dutta
78,570,944
2,545,680
Is it possible to execute a package wrapped in an archive as a module
<p>This is the structure:</p> <pre><code>__init__.py __main__.py server.py </code></pre> <p>Inside <code>__main__.py</code> I use relative imports:</p> <pre><code>from .server import app </code></pre> <p>When I zip the directory:</p> <pre><code>python -m zipapp app_builder </code></pre> <p>And then try to run it like this I get the error:</p> <pre><code>python app_builder.pyz </code></pre> <blockquote> <p>ImportError: attempted relative import with no known parent package</p> </blockquote> <p>which is expected. I need a way to run the code inside the archive similar to this:</p> <pre><code>python -m app_builder </code></pre> <p>which works fine.</p>
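A common workaround (the module contents below are invented stand-ins; the *layout* is the point): keep the package one level below the archive root and add a thin top-level `__main__.py` that launches it with `runpy`, so the relative import gets the parent package it needs — mirroring what `python -m app_builder` does:

```python
import pathlib
import subprocess
import sys
import tempfile
import zipapp

# Build src/app_builder/{__init__,server,__main__}.py plus a top-level
# src/__main__.py, then zip src/ into a .pyz and run it.
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "src" / "app_builder"
pkg.mkdir(parents=True)
(pkg / "__init__.py").write_text("")
(pkg / "server.py").write_text("app = 'the app'\n")
(pkg / "__main__.py").write_text("from .server import app\nprint(app)\n")
(root / "src" / "__main__.py").write_text(
    "import runpy\n"
    "runpy.run_module('app_builder', run_name='__main__')\n"
)

pyz = root / "app_builder.pyz"
zipapp.create_archive(root / "src", pyz)
out = subprocess.check_output([sys.executable, str(pyz)])
print(out.decode().strip())
```

The interpreter puts the archive on `sys.path` before running its top-level `__main__.py`, so `runpy` can import `app_builder` as a real package and execute its `__main__` submodule with `__package__` set correctly.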
<python>
2024-06-03 14:32:31
1
106,269
Max Koretskyi
78,570,938
5,349,291
Pandas aggregated groupby has incorrect size
<p>I have a puzzling situation with pandas groupby objects. I'm in a situation where I have a dataset with ids, features, and targets for training a machine learning model. In some cases, there are groups of features with differing target values, and since that doesn't make sense, I would like to compute the mean of target values within those groups.</p> <pre><code>id_cols = list(df.columns[:4]) features = list(df.columns[4:-1]) target = df.columns[-1] ad_id = id_cols[1] creative_id = id_cols[-1] </code></pre> <p>Unfortunately though, as I add a larger number of features names (there are around 200) into the groupby operation, the aggregated (means) dataset changes shape. In my understanding, the shape of the resulting dataset of means should be <em>exactly</em> the number of unique groups. After adding a threshold number of features, the number of entries in the aggregated dataset goes down to small numbers:</p> <pre><code>for n in [10,20,30, 35, 40, 50,100,200]: grpby = df.groupby(features[:n]) mean_targets = grpby[target].agg([&quot;mean&quot;]) print(n, len(grpby), mean_targets.shape) # 10 1349 (1349, 1) # 20 1882 (1882, 1) # 30 1978 (1978, 1) # 35 1978 (31, 1) # 40 1978 (31, 1) # 50 1978 (31, 1) # 100 1978 (19, 1) # 200 4870 (2, 1) </code></pre> <p>As you can see, after I add 35 features, my mean_targets series shape doesn't match the number of groups in the groupby object anymore.</p> <p>What could I be doing wrong - or could this be a pandas groupby limitation?</p>
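One plausible cause worth checking (hedged — whether it applies depends on your data): by default `groupby` silently drops any row whose key tuple contains a NaN, so the more feature columns you group on, the more likely each row is to hit a NaN somewhere and vanish from the aggregate. A minimal reproduction:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "f1": [1, 1, 2, 2],
    "f2": [1.0, np.nan, 2.0, 2.0],   # NaN in one grouping key
    "target": [10, 20, 30, 40],
})

# Default: the row whose key contains NaN disappears from the aggregate.
default_means = df.groupby(["f1", "f2"])["target"].agg(["mean"])
# dropna=False (pandas >= 1.1) keeps NaN as its own group value.
kept_means = df.groupby(["f1", "f2"], dropna=False)["target"].agg(["mean"])
print(default_means.shape, kept_means.shape)
```

If this is the cause, passing `dropna=False` (or filling the NaNs with a sentinel value before grouping) should make the aggregate shape match the group count again.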
<python><pandas><group-by>
2024-06-03 14:31:54
1
2,074
mchristos
78,570,901
10,734,452
How to get the number of explicit arguments in the __init__ of a class?
<p>I have a class A that is inherited by classes A0,A1...An, some of which are inherited by classes A0a, A0b, A0c, A1a, A2a,...</p> <pre><code>class A(): def __init__(self,att_1=12,att_2=&quot;Blue&quot;,att3=False,att4=None) class A0_end(A): def __init__(self) super().__init__(self,att_1=13) class A1(A): def __init__(self,att_1=0,att_2=&quot;Red&quot;,att_X1=&quot;hello&quot;) self.att_X1=att_X1 super().__init__(self,att_1=att_1,att_2=att_2) class A1a_end(A1): def __init__(self) super().__init__(self,att_1=att_1,att_2=att_2, att_X1=&quot;world&quot;) </code></pre> <p>I would like in the end classes to forbid the use of argument so there is only one way to instantiate the class.</p> <p>In the above example, all the class prefixed with '_end' respect this rule.</p> <p>For this, I think I need to know the number of possible argument of <code>__init__()</code></p> <p>I could make something in the class A</p> <pre><code> class A(): def __init__(self,att_1=12,att_2=&quot;Blue&quot;,att3=False,att4=None) if '_end' in __class__.__name__: if self.get_magic_number_of_init_attribute() != 0: print(&quot;error, this is not allowed&quot;) os._exit(1) </code></pre> <p>I would try not use kwargs if possible (and not change the <code>__init__()</code> arguments if possible).</p>
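`inspect.signature` can report that "magic number" without resorting to `kwargs` — a sketch (class bodies shortened; the `_end` naming convention is kept from the question):

```python
import inspect

class A:
    def __init__(self, att_1=12, att_2="Blue", att3=False, att4=None):
        self.att_1 = att_1

class A0_end(A):
    def __init__(self):
        super().__init__(att_1=13)

def init_arg_count(cls):
    # Number of __init__ parameters other than self.
    return len(inspect.signature(cls.__init__).parameters) - 1

print(init_arg_count(A), init_arg_count(A0_end))
```

The check could then live in `__init_subclass__`, which runs once at class-creation time, rather than inside every `__init__` call.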
<python>
2024-06-03 14:25:14
2
2,378
Guillaume D
78,570,866
6,751,456
cron log shows the job is running but actually the job is failing
<p>I have a cron job in <code>aws ec2 instance</code> defined as:</p> <pre><code>0 2 * * * /home/ec2-user/scripts/backup.sh 0 3 * * 0 /home/ec2-user/scripts/cleanup.sh 0 * * * * /home/ec2-user/scripts/health_check.sh </code></pre> <p>I checked whether the jobs are running by :</p> <pre><code>$ tail -f /var/log/cron Jun 3 02:00:01 ip-172-31-0-123 CROND[1234]: (ec2-user) CMD (/home/ec2-user/scripts/backup.sh) Jun 3 03:00:01 ip-172-31-0-123 CROND[1235]: (ec2-user) CMD (/home/ec2-user/scripts/cleanup.sh) Jun 3 04:00:01 ip-172-31-0-123 CROND[1236]: (ec2-user) CMD (/home/ec2-user/scripts/health_check.sh) Jun 3 05:00:01 ip-172-31-0-123 CROND[1237]: (ec2-user) CMD (/home/ec2-user/scripts/health_check.sh) </code></pre> <p>It looked like the jobs are running.</p> <p>But on manually running one of the jobs, I found that the job was actually failing and throwing an error:</p> <pre><code>$ /home/ec2-user/scripts/clean_up.sh Traceback (most recent call last): File &quot;/home/ec2-user/scripts/worker_script.py&quot;, line 15, in &lt;module&gt; connection = pika.BlockingConnection(pika.ConnectionParameters('localhost')) File &quot;/usr/local/lib/python3.8/site-packages/pika/adapters/blocking_connection.py&quot;, line 360, in __init__ self._impl = self._create_connection(parameters, _impl_class) File &quot;/usr/local/lib/python3.8/site-packages/pika/adapters/blocking_connection.py&quot;, line 451, in _create_connection raise self._reap_last_connection_workflow_error(error) pika.exceptions.AMQPConnectionError: Connection to 127.0.0.1:5672 failed: [Errno 111] Connection refused </code></pre> <p>Is there any way to know that the jobs are running successfully or not? and log the errors accordingly?</p>
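cron's own log only records that a command was *launched*, never its outcome. The usual fix is to redirect each job's stdout and stderr into a per-job log file — a sketch of the adjusted crontab (script paths copied from the question; the log directory is an assumption, so pick any path `ec2-user` can write to):

```shell
# crontab -e -- append each run's stdout and stderr to a per-job log
0 2 * * * /home/ec2-user/scripts/backup.sh >> /home/ec2-user/logs/backup.log 2>&1
0 3 * * 0 /home/ec2-user/scripts/cleanup.sh >> /home/ec2-user/logs/cleanup.log 2>&1
0 * * * * /home/ec2-user/scripts/health_check.sh >> /home/ec2-user/logs/health_check.log 2>&1
```

The `pika` traceback shown above would then land in `backup.log` (or whichever job produced it) instead of being discarded.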
<python><linux><bash><amazon-ec2><cron>
2024-06-03 14:19:47
1
4,161
Azima
78,570,861
179,014
Smarter way to create diff between two pandas dataframes?
<p>I have two pandas dataframes which represent a directory structure with file hashes like</p> <pre><code>import pandas as pd dir_old = pd.DataFrame([ {&quot;Filepath&quot;: &quot;dir1/file1&quot;, &quot;Hash&quot;: &quot;hash1&quot;}, {&quot;Filepath&quot;: &quot;dir1/file2&quot;, &quot;Hash&quot;: &quot;hash2&quot;}, {&quot;Filepath&quot;: &quot;dir2/file3&quot;, &quot;Hash&quot;: &quot;hash3&quot;}, ]) dir_new = pd.DataFrame([ # {&quot;Filepath&quot;: &quot;dir1/file1&quot;, &quot;Hash&quot;: &quot;hash1&quot;}, # deleted file {&quot;Filepath&quot;: &quot;dir1/file2&quot;, &quot;Hash&quot;: &quot;hash2&quot;}, {&quot;Filepath&quot;: &quot;dir2/file3&quot;, &quot;Hash&quot;: &quot;hash5&quot;}, # changed file {&quot;Filepath&quot;: &quot;dir1/file4&quot;, &quot;Hash&quot;: &quot;hash4&quot;}, # new file ]) </code></pre> <p>The <code>dir_new</code> shows the content of the directory structure after some changes. To compare these two dataframes I use</p> <pre><code>df_merged = pd.merge(dir_new, dir_old, how='outer', indicator=True) print(df_merged) </code></pre> <p>This will return</p> <pre><code> Filepath Hash _merge 0 dir1/file1 hash1 right_only 1 dir1/file2 hash2 both 2 dir1/file4 hash4 left_only 3 dir2/file3 hash3 right_only 4 dir2/file3 hash5 left_only </code></pre> <p>It is easy to identify the <code>right_only</code> rows as <code>deleted</code>, <code>both</code> as unchanged and <code>left_only</code> as new files. However what to do about the modified file <code>dir/file3</code> which appears twice as <code>right_only</code> and <code>left_only</code>? I did the following:</p> <pre><code># The indicator columns _merge has categorical values. 
# We need to convert it to string to be able to add later a new value `modified` df_merged[&quot;State&quot;] = df_merged[&quot;_merge&quot;].astype(str) df_merged = df_merged.drop(columns=[&quot;_merge&quot;]) # Identify the rows with duplicated filepath and only keep the new (left_only) ones modified = df_merged[df_merged.duplicated(subset=[&quot;Filepath&quot;], keep=False)] keep = modified[modified[&quot;State&quot;] == &quot;left_only&quot;] drop = modified[modified[&quot;State&quot;] == &quot;right_only&quot;] # Rename the state of the new modified files to `changed` and drop the old duplicated row df_merged.iloc[keep.index, df_merged.columns.get_loc(&quot;State&quot;)] = &quot;changed&quot; df_dropped = df_merged.drop(drop.index) # Finally rename the State for all the remaining rows df_final = df_dropped.replace(to_replace=[&quot;right_only&quot;, &quot;left_only&quot;, &quot;both&quot;], value=[&quot;deleted&quot;, &quot;created&quot;, &quot;equal&quot;]).reset_index(drop=True) print(df_final) </code></pre> <p>The output is</p> <pre><code> Filepath Hash State 0 dir1/file1 hash1 deleted 1 dir1/file2 hash2 equal 2 dir1/file4 hash4 created 3 dir2/file3 hash5 changed </code></pre> <p>So it works. But is strikes me as a very complicated solution. Is there maybe a smarter way create a diff between these two dataframes and especially to identify the modified rows between <code>dir_old</code> and <code>dir_new</code> ?</p>
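A hedged alternative: merge on `Filepath` only, so the old and new hashes land side by side in one row per file; `numpy.select` then labels all four states in a single pass, with no duplicate-row surgery needed:

```python
import numpy as np
import pandas as pd

dir_old = pd.DataFrame({
    "Filepath": ["dir1/file1", "dir1/file2", "dir2/file3"],
    "Hash": ["hash1", "hash2", "hash3"],
})
dir_new = pd.DataFrame({
    "Filepath": ["dir1/file2", "dir2/file3", "dir1/file4"],
    "Hash": ["hash2", "hash5", "hash4"],
})

# One row per file: old and new hashes sit side by side, so every state
# falls out of a single vectorized comparison.
m = dir_new.merge(dir_old, on="Filepath", how="outer",
                  suffixes=("_new", "_old"), indicator=True)
m["State"] = np.select(
    [m["_merge"] == "left_only",          # only in dir_new  -> created
     m["_merge"] == "right_only",         # only in dir_old  -> deleted
     m["Hash_new"] == m["Hash_old"]],     # same hash        -> equal
    ["created", "deleted", "equal"],
    default="changed",                    # present twice, hashes differ
)
m["Hash"] = m["Hash_new"].fillna(m["Hash_old"])
print(m[["Filepath", "Hash", "State"]].to_string(index=False))
```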
<python><pandas><dataframe><diff>
2024-06-03 14:18:53
3
11,858
asmaier
78,570,731
6,829,655
Build distribution package for different platform architecture
<p>We have a Jenkins pipeline that runs on x86 architecture and invokes the following command:</p> <pre><code>python setup.py sdist bdist_wheel </code></pre> <p>I would like to build a package for an ARM-based architecture. Is this possible by adding some additional configuration to setup.py?</p>
<python><jenkins><jenkins-pipeline><setup.py>
2024-06-03 13:55:19
1
651
datahack
78,570,583
5,662,005
Store methods in class in order they are written
<p>This might be a two part question. First is the context if better alternatives can be suggested. Second part is the (probably) XY problem for my solution.</p> <p>xy - Have a method in the class which returns a list of all functions in the class in order they were written/typed.</p> <p>My projects typically involve long sequences of queries/data transformations. I want to be able to wrap each step as a function so that the docstrings can be discoverable by those tools. But don't want to have to retype every function call at the end of each notebook.</p> <p>Undesirable way:</p> <pre><code>def print_something_else(): &quot;&quot;&quot; Some docstring here &quot;&quot;&quot; print(&quot;something else&quot;) def print_some_other_thing(): &quot;&quot;&quot; Some other docstring here &quot;&quot;&quot; print(&quot;something other thing&quot;) def print_something(): print(&quot;something&quot;) print_something_else() print_some_other_thing() print_something() </code></pre> <p>I would like to just define the functions and have some wrapper so that they are run in order.</p> <p>Attempt:</p> <pre><code>class RegisteredCommandSequence: def __init__(self): def __include(method): only_include_if = [ type(getattr(self, method)).__name__ == 'method', not(method.startswith('__')), method not in ('run_all', 'run_from_to')] return all(only_include_if) self._public_nonbuiltin_callable_methods = [ method for method in dir(self) if __include(method)] def run_all(self): for func in self._public_nonbuiltin_callable_methods: getattr(self, func)() class QueryBundle(RegisteredCommandSequence): def __init__(self): super().__init__() def print_something_else(self): &quot;&quot;&quot; Some docstring here &quot;&quot;&quot; print(&quot;something else&quot;) def print_some_other_thing(self): &quot;&quot;&quot; Some other docstring here &quot;&quot;&quot; print(&quot;something other thing&quot;) def print_something(self): print(&quot;something&quot;) test = QueryBundle() test.run_all() 
</code></pre> <pre class="lang-none prettyprint-override"><code>something other thing something something else </code></pre> <p>The problem is the dir() doesn't preserve the order in which the functions were written, which is vital.</p> <p>If in the class itself I have a list of all the in-scope methods, it'd be very convenient to be able to run subsequences like test.run_from_to('print_something_else', 'print_some_other_thing').</p> <p>Things I've considered</p> <ol> <li>Decorators which dictate the order of execution of each.</li> <li>Using the classes <code>__dict__</code>, which seems to preserve the order, but is empty when defined how I'm trying to use it.</li> </ol> <p>example:</p> <pre><code>class ShowEmptyDict: def __init__(self): self.print_methods_in_class() def print_methods_in_class(self): return(self.__dict__) demo = ShowEmptyDict() if demo.print_methods_in_class(): print('Not empty') else: print('empty') </code></pre>
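In Python 3.6+ the class body is recorded in definition order, so `vars(cls)` (the *class* `__dict__`) already has what `dir()` loses — the catch in the attempt above is that it inspected the *instance* `__dict__`, which only holds attributes set in `__init__`, never methods. A sketch:

```python
class QueryBundle:
    def print_something_else(self):
        print("something else")

    def print_some_other_thing(self):
        print("something other thing")

    def print_something(self):
        print("something")

# Class bodies execute top to bottom into an ordered namespace, so
# vars(cls) lists the methods in the order they were written.
def methods_in_definition_order(cls):
    return [name for name, obj in vars(cls).items()
            if callable(obj) and not name.startswith("__")]

print(methods_in_definition_order(QueryBundle))
```

Inherited methods would need a walk over `cls.__mro__`, and `run_from_to(a, b)` becomes a slice of this list between the two names.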
<python>
2024-06-03 13:30:08
5
3,899
Error_2646
78,570,412
7,389,168
How does the Python Interpreter check thread duration?
<p>My understanding is that historically, the Python interpreter counted lines of code executed and switched threads after a fixed amount. This was then changed to being time-dependent.</p> <p>What I am trying to figure out is how the running duration of the current thread is checked. Is the elapsed time checked after every line is executed?</p>
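A rough sketch of the current behavior (a CPython implementation detail, so hedged): since Python 3.2, switching is driven by a configurable time interval rather than per-instruction counting. The running thread does not poll a clock itself; a *waiting* thread that has been blocked longer than the switch interval sets a drop request, and the running thread notices that flag at the "eval breaker" check between bytecode instructions and releases the GIL. The interval is visible and tunable from Python:

```python
import sys

print(sys.getswitchinterval())   # default is 0.005 s (5 ms)

# Smaller values mean more frequent thread switches (and more overhead).
sys.setswitchinterval(0.001)
print(sys.getswitchinterval())
```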
<python><python-3.x><python-internals>
2024-06-03 12:59:15
0
601
FourierFlux
78,570,189
20,301,996
Is there an easier way to run async functions in Celery tasks in one event loop?
<p>I have some async code and I need to run it inside the Celery task.</p> <p>I tried the approach with using <code>asgiref.sync.async_to_sync()</code>, but it turned out that it creates new event loop every time. And it brakes my code since I use SQLAlchemy session pool and there are restrictions about using sessions in different threads and event loops.</p> <p><strong>It is important to note</strong> that I don't care about performance issues. I understand the overheads of using this approach. I just don't want to rewrite my async code to sync.</p> <p>Trying to run all my async functions in one event loop I created a simple helper module:</p> <pre><code>import asyncio import time from collections.abc import Awaitable from threading import Thread from typing import TypeVar class _SingleEventLoop: _loop: asyncio.AbstractEventLoop | None = None _loop_thread: Thread | None = None def _enshure_loop_is_running(self): if self._loop and (self._loop.is_running() is False): self._loop.close() self._loop = None if self._loop_thread.is_alive(): self._loop_thread.stop() if (self._loop is None) or (self._loop.is_running() is False): self._loop_thread = Thread(target=self._eventloop_thread_run, daemon=True) self._loop_thread.start() time.sleep(0.1) def _eventloop_thread_run(self): self._loop = asyncio.new_event_loop() asyncio.set_event_loop(self._loop) self._loop.run_forever() def execute_async_task(self, coroutine): self._enshure_loop_is_running() feature = asyncio.run_coroutine_threadsafe(coroutine, self._loop) return feature.result() _single_event_loop = _SingleEventLoop() R = TypeVar(&quot;R&quot;) def execute_async_task(coroutine: Awaitable[R]) -&gt; R: &quot;&quot;&quot;Executes async tasks in single event loop&quot;&quot;&quot; return _single_event_loop.execute_async_task(coroutine=coroutine) </code></pre> <p>I can execute my async code like it's shown below:</p> <pre><code>from single_event_loop_runner.run_async import execute_async_task @celery_app.task() def 
sync_celery_task(param): res = execute_async_task(async_func(param)) return res </code></pre> <p>It seems to work fine for my use case, but I want to know what the possible negative consequences of using this approach are? Can I use any easier way to solve my task?</p> <p><strong>UPD:</strong> updated code to use separate event loop (not to use event loop that might be already running in the current thread).</p>
<python><celery><python-asyncio>
2024-06-03 12:13:23
0
2,593
Yurii Motov
78,570,167
9,703,655
Error when update with Python Tortoise ORM
<p>I use <code>Tortoise ORM</code> for my Postgresql databse. I have two models. They are related via <code>OneToOneField</code>.</p> <pre class="lang-py prettyprint-override"><code>class User(Model): class Meta: table = &quot;users&quot; id = fields.UUIDField(primary_key=True) internal_id = fields.IntField(unique=True) info: fields.ReverseRelation[&quot;UserInfo&quot;] class UserInfo(Model): class Meta: table = &quot;user_infos&quot; id = fields.UUIDField(primary_key=True) rank = fields.IntField() user = fields.OneToOneField(&quot;app.User&quot;, related_name=&quot;info&quot;) </code></pre> <p>When I try to update <code>UserInfo</code> by user's <code>internal_id</code> I got an error:</p> <pre class="lang-py prettyprint-override"><code>await UserInfo.filter(user__internal_id=SOME_ID).update(rank=5) </code></pre> <pre><code>tortoise.exceptions.OperationalError: invalid reference to FROM-clause entry for table &quot;user_infos&quot; HINT: There is an entry for table &quot;user_infos&quot;, but it cannot be referenced from this part of the query. </code></pre> <p>SQL Query:</p> <pre class="lang-sql prettyprint-override"><code>UPDATE &quot;user_infos&quot; SET &quot;rank&quot;=$1 FROM &quot;user_infos&quot; &quot;user_infos_&quot; LEFT OUTER JOIN &quot;users&quot; &quot;user_infos__user&quot; ON &quot;user_infos__user&quot;.&quot;id&quot;=&quot;user_infos&quot;.&quot;user_id&quot; WHERE &quot;user_infos__user&quot;.&quot;internal_id&quot;=893706004; </code></pre> <p>If I call just filter without update it works fine. It returns UserInfo object:</p> <pre class="lang-py prettyprint-override"><code>await UserInfo.filter(user__internal_id=internal_id) </code></pre> <p>What I'm doing wrong making update?</p>
<python><postgresql><tortoise-orm>
2024-06-03 12:07:09
1
463
picKit
78,570,103
11,760,835
How to test Flask SQLAlchemy database with different data
<p>I am pretty newbie in web testing. I would like to test my Flask web and API which uses SQLAlchemy using pytest.</p> <p>I would like to test the behaviour of the Flask server when the database is empty and when the database is full.</p> <p>Having the database empty and filling it with values on each test is not efficient because I am reading the data from an Excel file.</p> <p>I have tried to create two different Flask applications at the same time but I end up having errors due to having different contexts. For example:</p> <pre><code>... AssertionError: Popped wrong app context. (&lt;flask.ctx.AppContext object at 0x000001D896BD6F00&gt; instead of &lt;flask.ctx.AppContext object at 0x000001D8980EE090&gt;) ... AssertionError: Popped wrong request context. (&lt;RequestContext 'http://localhost/' [GET] of project&gt; instead of &lt;RequestContext 'http://localhost/filters/edit/1' [GET] of project&gt; </code></pre> <p>My pytest set up:</p> <pre class="lang-py prettyprint-override"><code>from unittest import TestCase from paint_filter_manager import create_app class BaseTestCase(TestCase): def setUp(self): self.app = create_app(&quot;test&quot;) self.app_context = self.app.app_context() self.app_context.push() self.client = self.app.test_client() def tearDown(self): self.app_context.pop() self.app = None self.app_context = None self.client = None </code></pre> <p>Can I have two or more in-memory sqlite databases for this purpose? Is there any efficient solution?</p>
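On the narrower sqlite question: yes — every `sqlite3.connect(":memory:")` call opens its own private database, so an "empty" and a "full" database can coexist in one process (which is what two Flask apps each configured with an in-memory SQLAlchemy URI would get). A library-free sketch (table name invented for illustration):

```python
import sqlite3

# Two independent in-memory databases in the same process.
empty_db = sqlite3.connect(":memory:")
full_db = sqlite3.connect(":memory:")

full_db.execute("CREATE TABLE filters (name TEXT)")
full_db.execute("INSERT INTO filters VALUES ('red'), ('blue')")
full_db.commit()

count = full_db.execute("SELECT count(*) FROM filters").fetchone()[0]
print(count)  # the other connection never sees this table
```

For the "popped wrong context" errors, a common pytest pattern is one session-scoped fixture per scenario, entering contexts via `with app.app_context():` blocks rather than manual push/pop, so one test can never pop another app's context.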
<python><flask><pytest><flask-sqlalchemy>
2024-06-03 11:53:02
2
394
Jaime02
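A note on the question above: the core idea — one app per fixture, each pointing at its own in-memory SQLite database — can be sketched without Flask at all, since every `sqlite3.connect(":memory:")` call creates an independent, throwaway database. The table name and rows below are invented for illustration; with Flask-SQLAlchemy the equivalent is creating a separate app (with `SQLALCHEMY_DATABASE_URI = "sqlite:///:memory:"`) inside each fixture and pushing/popping its context there, rather than sharing one app across tests.

```python
import sqlite3

def make_db(rows):
    # each ":memory:" connection is a brand-new database, so an
    # "empty" and a "full" database can coexist in one test run
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE filters (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO filters (name) VALUES (?)", [(r,) for r in rows])
    conn.commit()
    return conn

empty_db = make_db([])            # fixture for the "empty database" tests
full_db = make_db(["red", "blue"])  # fixture for the "full database" tests
```

Loading the Excel data once into a plain Python structure and replaying it into the "full" database per test is usually cheap; it is the Excel parsing, not the inserts, that is slow.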
78,569,948
19,363,912
Vectorize processing of log table to determine the latest availability
<p>I have a log table which contains changes. Sign + means addition, sign - means deletion.</p> <pre><code>import pandas as pd history = pd.DataFrame({ &quot;First&quot;: [&quot;X&quot;,&quot;X&quot;, &quot;Y&quot;, &quot;Y&quot;, &quot;X&quot;, &quot;X&quot;, &quot;Y&quot;, &quot;Z&quot;], &quot;Last&quot;: [&quot;Y&quot;, &quot;X&quot;, &quot;Y&quot;, &quot;Y&quot;, &quot;X&quot;, &quot;X&quot;, &quot;Y&quot;, &quot;A&quot;], &quot;Change&quot;: [&quot;+&quot;, &quot;+&quot;, &quot;-&quot;, &quot;+&quot;, &quot;-&quot;, &quot;+&quot;, &quot;+&quot;, &quot;-&quot;], &quot;Date&quot;: [&quot;2022-05-01&quot;, &quot;2024-05-01&quot;, &quot;2024-06-01&quot;, &quot;2024-06-01&quot;, &quot;2024-05-03&quot;, &quot;2024-05-02&quot;, &quot;2024-06-02&quot;, &quot;2024-06-01&quot;] }) history = history.sort_values(by=[&quot;Date&quot;, &quot;Change&quot;]) # sort needed to process the entries chronologically </code></pre> <p>This produces</p> <pre><code> First Last Change Date 0 X Y + 2022-05-01 1 X X + 2024-05-01 5 X X + 2024-05-02 4 X X - 2024-05-03 3 Y Y + 2024-06-01 2 Y Y - 2024-06-01 7 Z A - 2024-06-01 6 Y Y + 2024-06-02 </code></pre> <p>In next step, I want to display only what is currently available.</p> <ul> <li>The last available sign needs to be + for item to be available. <ul> <li>Available: +, ++, +-+, -+, --+, etc.</li> <li>Not available: -, ++-, +-- etc.</li> </ul> </li> <li>Item is a combination of columns First and Last</li> <li>Sorting is done by Date and Change</li> </ul> <p>I build this logic using iteration which is very slow. 
Basically</p> <pre><code>latest = {} item_columns = [ &quot;First&quot;, &quot;Last&quot;, ] for _, row in history.iterrows(): key = tuple(row[column] for column in item_columns) if row[&quot;Change&quot;] == &quot;+&quot;: latest[key] = row elif row[&quot;Change&quot;] == &quot;-&quot; and key in latest: del latest[key] available = pd.DataFrame(latest.keys(), columns=item_columns) </code></pre> <p>This produces available items</p> <pre><code> First Last 0 X Y 1 Y Y </code></pre> <p>The issue is that loop is slow with big tables, e.g. 20 seconds for below</p> <pre><code>latest = {} item_columns = [ &quot;First&quot;, &quot;Last&quot;, ] duplicated = pd.concat([history.iloc[[1]]] * 50000, ignore_index=True) history = pd.concat([history, duplicated], ignore_index=True) for _, row in history.iterrows(): key = tuple(row[column] for column in item_columns) if row[&quot;Change&quot;] == &quot;+&quot;: latest[key] = row elif row[&quot;Change&quot;] == &quot;-&quot; and key in latest: del latest[key] available = pd.DataFrame(latest.keys(), columns=item_columns) </code></pre> <p>Any way to speed up?</p>
<python><pandas><performance><vectorization>
2024-06-03 11:17:21
1
447
aeiou
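For the log-table question above: given the sorting shown, the last `+`/`-` per `(First, Last)` pair fully determines availability (`+-+` ends in `+`, `++-` ends in `-`, and a `-` for a never-added key also ends in `-`), so the row loop can be replaced by `drop_duplicates(keep="last")` plus a filter. A sketch on the question's own sample data, reproducing its expected output:

```python
import pandas as pd

history = pd.DataFrame({
    "First":  ["X", "X", "Y", "Y", "X", "X", "Y", "Z"],
    "Last":   ["Y", "X", "Y", "Y", "X", "X", "Y", "A"],
    "Change": ["+", "+", "-", "+", "-", "+", "+", "-"],
    "Date":   ["2022-05-01", "2024-05-01", "2024-06-01", "2024-06-01",
               "2024-05-03", "2024-05-02", "2024-06-02", "2024-06-01"],
})
item_columns = ["First", "Last"]

# keep only the chronologically last entry per item, then require "+"
last_state = (
    history.sort_values(["Date", "Change"])
           .drop_duplicates(subset=item_columns, keep="last")
)
available = (
    last_state.loc[last_state["Change"] == "+", item_columns]
              .reset_index(drop=True)
)
```

This is vectorized, so it should scale to the 50k-row case far better than `iterrows`.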
78,569,898
205,147
What is the fastest way to drop consecutive duplicates a List[int] column?
<p>I have a Polars DataFrame which has a column containing a variable length list of numbers in each row. For each row I need to drop consecutive duplicates from the list.</p> <p>So for example:</p> <pre><code>[5, 5, 5, 4, 4, 5, 5, 6, 6, 7] =&gt; [5, 4, 5, 6, 7] [3, 4, 4, 5, 6] =&gt; [3, 4, 5, 6] [2, 2] =&gt; [2] [1] =&gt; [1] </code></pre> <p>What is the fastest and most performant way to do this in Polars?</p>
<python><python-polars>
2024-06-03 11:07:12
2
2,229
Hendrik Wiese
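A reference point for the Polars question above: the semantics being asked for are exactly `itertools.groupby`, which collapses runs of equal adjacent elements. This plain-Python version is not the fastest Polars-native answer (that would stay inside an expression, e.g. a `list.eval` filter against a shifted element), but it pins down the expected behaviour and serves as a correctness baseline:

```python
from itertools import groupby

def dedupe_consecutive(values):
    # groupby yields one (key, run) pair per run of equal adjacent values;
    # keeping only the keys drops the consecutive duplicates
    return [key for key, _run in groupby(values)]
```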
78,569,897
10,794,986
Survey data many periods: transformation to current and previous period (wide to long format)
<p>I have a data frame (survey data) called <code>df</code> that looks like this (this is sample data):</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>respondent_id</th> <th>r1age</th> <th>r2age</th> <th>r3age</th> <th>r4age</th> <th>r1smoke</th> <th>r2smoke</th> <th>r3smoke</th> <th>r4smoke</th> <th>r1income</th> <th>r2income</th> <th>r3income</th> <th>r4income</th> </tr> </thead> <tbody> <tr> <td>16178</td> <td>35</td> <td>38</td> <td>41</td> <td>44</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>60</td> <td>62</td> <td>68</td> <td>70</td> </tr> <tr> <td>161719</td> <td>65</td> <td>68</td> <td>71</td> <td>74</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>50</td> <td>52</td> <td>54</td> <td>56</td> </tr> <tr> <td>161720</td> <td>47</td> <td>50</td> <td>53</td> <td>56</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>80</td> <td>82</td> <td>85</td> <td>87</td> </tr> </tbody> </table></div> <p>The number after the &quot;r&quot; or &quot;h&quot; represents the wave or period of each interview. 
For this particular example, there are only four interviews for each respondent, and data for 3 different variables (age, whether the respondent smokes, and his/her gross annual income in $10,000).</p> <p>I'm interested in transforming this to get the following instead:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>respondent_id</th> <th>t_1_period</th> <th>t_age</th> <th>t_1_age</th> <th>t_smoke</th> <th>t_1_smoke</th> <th>t_income</th> <th>t_1_income</th> </tr> </thead> <tbody> <tr> <td>16178</td> <td>1</td> <td>38</td> <td>35</td> <td>1</td> <td>1</td> <td>62</td> <td>60</td> </tr> <tr> <td>16178</td> <td>2</td> <td>41</td> <td>38</td> <td>1</td> <td>1</td> <td>68</td> <td>62</td> </tr> <tr> <td>16178</td> <td>3</td> <td>44</td> <td>41</td> <td>1</td> <td>1</td> <td>70</td> <td>68</td> </tr> <tr> <td>161719</td> <td>1</td> <td>68</td> <td>65</td> <td>0</td> <td>0</td> <td>52</td> <td>50</td> </tr> <tr> <td>161719</td> <td>2</td> <td>71</td> <td>68</td> <td>0</td> <td>0</td> <td>54</td> <td>52</td> </tr> <tr> <td>161719</td> <td>3</td> <td>74</td> <td>71</td> <td>1</td> <td>0</td> <td>56</td> <td>54</td> </tr> <tr> <td>161720</td> <td>1</td> <td>50</td> <td>47</td> <td>1</td> <td>0</td> <td>82</td> <td>80</td> </tr> <tr> <td>161720</td> <td>2</td> <td>53</td> <td>50</td> <td>0</td> <td>1</td> <td>85</td> <td>82</td> </tr> <tr> <td>161720</td> <td>3</td> <td>56</td> <td>53</td> <td>1</td> <td>0</td> <td>87</td> <td>85</td> </tr> </tbody> </table></div> <p>I'm interested in repeating the respondents such that the number of observations for each respondent are the number of interviews/waves - 1 (that is, the unique transitions), and for each variable there must be t (current period) and t_1 (previous period) columns, again, for each transition. 
Additionally, I add a <code>t_1_period</code> column representing the number of the previous period for that observation.</p> <p>I have tried the following:</p> <pre class="lang-py prettyprint-override"><code>df = pd.melt(df, id_vars=[&quot;respondent_id&quot;]) variable_names = [&quot;age&quot;, &quot;smoke&quot;, &quot;income&quot;] new_rows = [] for respondent_id in df[&quot;respondent_id&quot;].unique(): df_temp = df[df[&quot;respondent_id&quot;] == respondent_id] for i in range(2, 5): new_row = {&quot;respondent_id&quot;: respondent_id, &quot;t_1_period&quot;: i-1} for var in variable_names: if var not in [&quot;income&quot;]: current_var = f&quot;r{i}{var}&quot; previous_var = f&quot;r{i-1}{var}&quot; new_row[f&quot;t_{var}&quot;] = df_temp[df_temp[&quot;variable&quot;] == current_var][&quot;value&quot;].values[0] new_row[f&quot;t_1_{var}&quot;] = df_temp[df_temp[&quot;variable&quot;] == previous_var][&quot;value&quot;].values[0] elif var == &quot;income&quot;: current_var = f&quot;h{i}{var}&quot; previous_var = f&quot;h{i-1}{var}&quot; new_row[f&quot;t_h{var}&quot;] = df_temp[df_temp[&quot;variable&quot;] == current_var][&quot;value&quot;].values[0] new_row[f&quot;t_1_h{var}&quot;] = df_temp[df_temp[&quot;variable&quot;] == previous_var][&quot;value&quot;].values[0] new_rows.append(new_row) df_periods = pd.DataFrame(new_rows) </code></pre> <p>In my real data, I have much more than 3 variables: I sometimes have up to 100. Additionally, all variables are always present for all periods, however some of them can have NaNs, but the columns are there. In terms of respondents, I can also have a lot: as much as 50,000 for example. Note that some variables start with &quot;h&quot; instead of &quot;r&quot;, and others with &quot;s&quot; (not present in this example).</p> <p>My question: is there a faster way of transforming this? Every time I want to transform the data in this t vs. 
t-1 version for all variables I decide to include in <code>variable_names</code> I have to wait a lot. I believe there must be a better way of doing this. I appreciate your help, thank you.</p>
<python><pandas><dataframe><performance><optimization>
2024-06-03 11:06:53
4
410
caproki
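For the survey-reshaping question above, the per-respondent loop can usually be avoided entirely: melt once, split the column name into period and variable with a regex (the `[rhs]` prefix pattern below is taken from the question), pivot back to one row per `(respondent, period)`, and build the previous-period columns with a grouped `shift`. The frame here is a toy reconstruction with two variables and three periods:

```python
import pandas as pd

# toy version of the survey frame: columns follow r{period}{variable}
df = pd.DataFrame({
    "respondent_id": [16178, 161719],
    "r1age": [35, 65], "r2age": [38, 68], "r3age": [41, 71],
    "r1income": [60, 50], "r2income": [62, 52], "r3income": [68, 54],
})

# melt once, then split e.g. "r2age" into period 2 and variable "age"
long = df.melt(id_vars="respondent_id")
parts = long["variable"].str.extract(r"^[rhs](\d+)([A-Za-z_]+)$")
long["period"] = parts[0].astype(int)
long["var"] = parts[1]

wide = (long.pivot_table(index=["respondent_id", "period"],
                         columns="var", values="value", aggfunc="first")
            .reset_index()
            .sort_values(["respondent_id", "period"]))

# previous-period columns via a grouped shift instead of Python loops
value_cols = ["age", "income"]
prev = wide.groupby("respondent_id")[value_cols].shift(1).add_prefix("t_1_")
out = wide.join(prev).dropna(subset=["t_1_age"]).reset_index(drop=True)
out["t_1_period"] = out["period"] - 1
```

With ~100 variables and 50,000 respondents this stays in vectorized pandas throughout, so it should be orders of magnitude faster than filtering the melted frame row by row.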
78,569,761
14,882,395
Type hint for an object that can be used as a type hint itself
<p>I have following code</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Type, overload T = TypeVar('T') @overload def foo(bar: Type[T]) -&gt; T: ... @overload def foo(bar: Type[T] | None) -&gt; T | None: ... def foo(bar: Type[T] | None) -&gt; T | None: # implementation goes here ... class Bar: ... bar = foo(Bar) bar2 = foo(Bar | None) # No overload variant of &quot;foo&quot; matches argument type &quot;UnionType&quot; </code></pre> <p>How to properly type hint case for <code>bar2</code>?</p> <hr /> <p>I tried some others:</p> <p><code>Type[T | None]</code>, mypy says <code>Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader</code></p> <p>removing 2nd overload (resulting in only <code>Type[T]</code> allowed), mypy says <code>No overload variant of &quot;foo&quot; matches argument type &quot;UnionType&quot;</code> (meaning 2nd overload is incorrect for that case anyways)</p>
<python><mypy><python-typing>
2024-06-03 10:38:38
1
2,213
sudden_appearance
78,569,602
9,827,719
Python receive txt/xml logs from Palo Alto HTTP
<p>I have a flask application that should receive txt/xml logs from a Palo Alto Firewall. How can I receive the traffic logs?</p> <p><strong>My Python Script: main.py</strong></p> <pre><code>import flask from flask import request # For development! app = flask.Flask(__name__) @app.route('/', methods=['GET', 'POST']) def __index(): # Request as TEXT/XML xml_data = None try: xml_data = request.form print(f&quot;requests.xml_data={xml_data}&quot;) except Exception as e: print(f&quot;Error #2 Could not get request.form data: {e}&quot;) if xml_data is None: raise Exception(f&quot;Error #3 Could not get json data because missing xml_data as post&quot;) # Flattern xml_data_flattern = xml_data.to_dict(flat=True) # Log JSON data print(f&quot;xml_data={xml_data}&quot;) print(f&quot;xml_data_flattern={xml_data_flattern}&quot;) # Finish program return {&quot;message&quot;: &quot;Finished&quot;, &quot;data&quot;: &quot;&quot;} if __name__ == '__main__': app.run(debug=False, host=&quot;0.0.0.0&quot;, port=8080) </code></pre> <p><strong>Palo Alto:</strong></p> <p>This is the Device-&gt;HTTP-&gt;HTTP Server Profile-&gt;Servers:</p> <p><a href="https://i.sstatic.net/DdgRwII4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdgRwII4.png" alt="enter image description here" /></a></p> <ul> <li>Name: MyServer</li> <li>Address: myserver.runn.app</li> <li>Protocol: HTTPS</li> <li>Port: 443</li> <li>TLS Version: 1.2</li> <li>Certificate profile: None</li> <li>HTTP Method: POST</li> <li>Username: admin</li> <li>Pasword: admin</li> </ul> <p>This is the Device-&gt;HTTP-&gt;HTTP Server Profile-&gt;Payload Format for Traffic:</p> <p><a href="https://i.sstatic.net/FymytOtV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FymytOtV.png" alt="enter image description here" /></a></p> <ul> <li>Name: Traffic-Payload</li> <li>HTTP Headers: content-type text/xml</li> <li>Payload: <code>&lt;request&gt;&lt;entry&gt;&lt;short_description&gt; 
$type&lt;/short_description&gt;&lt;/entry&gt;&lt;/request&gt;</code></li> </ul>
<python><request>
2024-06-03 10:04:30
1
1,400
Europa
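A note on the Flask question above: `request.form` only parses form-encoded or multipart bodies, so with `content-type: text/xml` it will be empty; the raw bytes are available via `request.get_data()` (or `request.data`) and can then be parsed with the standard library. The sketch below shows just the parsing step, Flask-free, on a payload shaped like the Palo Alto profile in the question:

```python
import xml.etree.ElementTree as ET

# raw body, as Flask's request.get_data() would return it
payload = b"<request><entry><short_description>TRAFFIC</short_description></entry></request>"

root = ET.fromstring(payload)
short_description = root.findtext("entry/short_description")
```

Inside the route, the only change needed is replacing `request.form` with `ET.fromstring(request.get_data())`.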
78,569,532
2,242,012
Symmetry of Best Fit Straight Line on Inverting Axes
<p>I have a set of data that I have created a scatter plot from. On top of this I overlay the best fit straight line. Everything was fine until I realised that, because of the nature of the data, it made more conceptual sense if the y-axis data was plotted on the x-axis. So I inverted the axes.</p> <p>Say a1 and b1 are the intercept and slope of the original best fit line. Say a2 and b2 are the intercept and slope after inverting and reperforming the best fit.</p> <p>Given that the original slope, b1, was approximately 1, I would expect that a2 would be approximately -a1.</p> <p>And I would expect that the small difference between b1 and 1 would be (very approximately) opposite in sign to the small difference between b2 and 1.</p> <p>Originally I was performing the best fit using numpy.polyfit with deg=1 (I learned this method is obsolete as part of my investigation):</p> <pre><code>before axes inversion np.polyfit gave intercept of 0.016 and slope of 1.005 after axes inversion np.polyfit gave intercept of 0.002 and slope of 0.75 </code></pre> <p>So the intercept and slope of the new best fit line do not have the properties I expected.</p> <p>I then switched to stats.linregress:</p> <pre><code>before axes inversion stats.linregress gave intercept of 0.016 and slope of 1.005 after axes inversion stats.linregress gave intercept of 0.002 and slope of 0.75 </code></pre> <p>The two algorithms are in agreement, meaning the mistake is on my side.</p> <p>As I see it, the possibilities of what is happening are the following: 1) my assumptions about the symmetry of an axes inversion are incorrect, 2) there are additional properties I should be passing to the best fit algorithm that allow it to widen the range of the best fit properties, 3) some mysterious third thing.</p> <p>So which is it?</p>
<python><numpy><scipy>
2024-06-03 09:48:27
1
460
sil
78,569,342
1,422,096
Import a script from a parent's subdir, with a filename and dirname starting with digits
<p>I know the usual Python import system, packages, <code>__init__.py</code>, the <code>__import__</code> function (see <a href="https://stackoverflow.com/questions/9090079/in-python-how-to-import-filename-starts-with-a-number">In python, how to import filename starts with a number</a>), etc. I know the classic <a href="https://stackoverflow.com/questions/2349991/how-do-i-import-other-python-files">How do I import other Python files?</a> answers.</p> <p>Disclaimer: I know the following is not a common file-structure, but sometimes you have to quickly deal with legacy code, without refactoring it.</p> <pre><code>| |- 1 UTILS | |- 123myscript.py |- 2 TESTS | |- test.py </code></pre> <p>When running <code>test.py</code>, how can I import <code>123myscript.py</code>, without resorting to <code>sys.path.append</code>?</p> <p>Is it possible with <code>importlib</code>? I haven't been able to do it with:</p> <pre><code>importlib.import_module(&quot;../1 UTILS/123myscript.py&quot;, package=None) </code></pre>
<python><import><python-import><python-importlib>
2024-06-03 09:06:14
2
47,388
Basj
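On the import question above: `importlib.import_module` takes a dotted *module name*, never a filesystem path, which is why the attempt in the question fails. Loading from an explicit path goes through `importlib.util.spec_from_file_location`, which does not care whether the file or directory names are valid identifiers. A self-testing sketch that recreates the `1 UTILS/123myscript.py` layout in a temp directory:

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

def load_by_path(path, name):
    # build a module spec straight from the file path; "1 UTILS" and
    # "123myscript" being invalid identifiers is irrelevant here
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module          # optional: make it importable by name later
    spec.loader.exec_module(module)
    return module

with tempfile.TemporaryDirectory() as tmp:
    script = Path(tmp) / "1 UTILS" / "123myscript.py"
    script.parent.mkdir()
    script.write_text("VALUE = 42\n")
    mod = load_by_path(script, "myscript")
```

In `test.py` the path would be built relative to `__file__`, e.g. `Path(__file__).parent.parent / "1 UTILS" / "123myscript.py"`.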
78,569,244
17,795,398
numpy.histogram is there a way to get a binning so there is at least one count per bin?
<p>I'm using <code>numpy.histogram</code> on my data and then I want to perform a fit to some curve where the number of occurrences in each bin appears as a divisor, so it cannot be zero. So I need each bin to contain at least one count. Is there any way to do that with <code>numpy.histogram</code> or <code>numpy.histogram_bin_edges</code>?</p> <p>This is just an example:</p> <pre><code>import numpy as np sizes = np.array([ 1, 1, 2, 3, 4, 1, 2, 4, 9, 9, 7, 9, 10, 10, 20, 21, ]) hist, bins = np.histogram(sizes) print(hist) print(bins) </code></pre> <p>Returns:</p> <pre><code>[5 3 0 1 5 0 0 0 0 2] [ 1. 3. 5. 7. 9. 11. 13. 15. 17. 19. 21.] </code></pre> <p>The idea is to avoid testing different numbers of bins manually.</p> <p>The fitting function is <code>f(N) = A/N**S</code>, where <code>N</code> is an integer number. Regular binning is not mandatory. In the code example <code>sizes</code> is just some integer random numbers I chose.</p>
<python><numpy><histogram><divide-by-zero>
2024-06-03 08:44:49
0
472
Abel Gutiérrez
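Since the question above says regular binning is not mandatory, one common approach is to derive the bin edges from data quantiles: each bin then holds roughly the same number of points, so empty bins are avoided in practice (heavy ties can collapse edges, which `np.unique` cleans up; pathological distributions could in principle still need a post-merge pass). On the question's own data this yields five populated bins:

```python
import numpy as np

sizes = np.array([1, 1, 2, 3, 4, 1, 2, 4, 9, 9, 7, 9, 10, 10, 20, 21])

# quantile-based edges give (roughly) equal-count bins; np.unique
# drops duplicate edges produced by tied values
edges = np.unique(np.quantile(sizes, np.linspace(0, 1, 6)))
hist, bins = np.histogram(sizes, bins=edges)
```

For the `A/N**S` fit, irregular bins mean the counts should be divided by the bin widths (`np.diff(bins)`) to get densities before fitting.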
78,569,237
6,837,132
FastAPI App using Zoom Webhook runs locally but not in Docker where Zoom Signature Rejected
<p>I built a FastAPI app that receives meeting/recordings events from the Zoom API through a Zoom webhook. I use ngrok for the webhook URL. Everything works perfectly when running it locally.</p> <p>However, I built an image for the app in Docker, and when I try to run it, I get &quot;signature rejected&quot;. I verified that I'm using the same environment variables and the same environment (even the same library versions, since I'm using Poetry, so the creation of the environment is specific and identical). I verified that the container timezone is the same as my host system timezone. But I'm still getting the error.</p> <p>Here is the code of the signature verification function:</p> <pre><code>def signature_verified(self, headers: dict, body: bytes): # Extract the necessary components from headers and body timestamp = headers['x-zm-request-timestamp'] if not timestamp: return False # Timestamp is missing, cannot verify # Construct the message string decoded_body = body.decode('utf-8') message = f&quot;v0:{timestamp}:{decoded_body}&quot; print(&quot;Constructed Message:&quot;, message) #get zoom webhook secret token secret_token = os.getenv('ZOOM_WEBHOOK_SECRET_TOKEN') if not secret_token: print(&quot;Secret token is missing.&quot;) return False # Create the HMAC SHA-256 hash hash_for_verify = hmac.new(secret_token.encode(), message.encode(), hashlib.sha256).hexdigest() # Create the signature generated_signature = f&quot;v0={hash_for_verify}&quot; print(&quot;Computed HMAC:&quot;, hash_for_verify) human_readable = datetime.fromtimestamp(int(timestamp)) print(&quot;Timestamp from Zoom:&quot;, headers['x-zm-request-timestamp']) print(&quot;Human-readable timestamp:&quot;, human_readable) print(&quot;Generated Signature:&quot;, generated_signature) print(&quot;Expected Signature:&quot;, headers.get('x-zm-signature')) return headers.get('x-zm-signature') == generated_signature </code></pre> <p>Here is the main.py where it triggers that function:</p> <pre><code>
@app.post(&quot;/webhook&quot;) async def webhook(request: Request): # Get the request body and headers client_host = request.client.host print(f&quot;Request from IP: {client_host}&quot;) body = await request.body() headers = request.headers try: # Verify the request came from Zoom if zoom_client.signature_verified(headers, body): # Parse the request body as JSON event_data = await request.json() # Check if this is a URL validation request if event_data[&quot;event&quot;] == &quot;endpoint.url_validation&quot;: logging.info(&quot;WEBHOOK: URL validation request received&quot;) return zoom_client.url_verification(event_data) elif event_data[&quot;event&quot;] == &quot;session.ended&quot;: # Handle session.ended event topic = event_data[&quot;payload&quot;][&quot;object&quot;][&quot;topic&quot;] id = event_data[&quot;payload&quot;][&quot;object&quot;][&quot;id&quot;] logging.info(f&quot;WEBHOOK: Session ended: {topic}({id}))&quot;) return {&quot;message&quot;: &quot;Received session.ended event.&quot;} else: logging.error(&quot;WEBHOOK: Unauthorized request to Webhook: Signature verified rejected&quot;) return {&quot;message&quot;: &quot;Unauthorized request to Zoom Webhook&quot;}, 401 except Exception as e: logging.error(f&quot;WEBHOOK: Error processing webhook: {e}&quot;) return {&quot;message&quot;: &quot;Error processing Zoom Webhook&quot;}, 500 </code></pre> <p>pyproject.toml file:</p> <pre><code> [tool.poetry] name = &quot;zoomerizer&quot; version = &quot;0.1.0&quot; description = &quot;An app that summarizes the recordings of zoom meetings. 
It communicates with Zoom API.&quot; authors = [&quot;&lt;email&gt;&quot;] readme = &quot;README.md&quot; [tool.poetry.dependencies] python = &quot;^3.11&quot; fastapi = &quot;^0.109.2&quot; redis = &quot;^5.0.1&quot; types-redis = &quot;^4.6.0.20240218&quot; uvicorn = &quot;^0.27.1&quot; python-dotenv = &quot;^1.0.1&quot; requests = &quot;^2.31.0&quot; logging = &quot;^0.4.9.6&quot; coloredlogs = &quot;^15.0.1&quot; verboselogs = &quot;^1.7&quot; SQLAlchemy = &quot;^2.0.27&quot; langchain = &quot;^0.1.8&quot; openai = &quot;^1.12.0&quot; langchain-openai = &quot;^0.0.8&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>Dockerfile:</p> <pre><code># For more information, please refer to https://aka.ms/vscode-docker-python FROM python:3-slim # Set the working directory in the container WORKDIR /app # Keeps Python from generating .pyc files in the container ENV PYTHONDONTWRITEBYTECODE=1 # Turns off buffering for easier container logging ENV PYTHONUNBUFFERED=1 # Install Poetry ENV POETRY_VERSION=1.8.2 RUN pip install &quot;poetry==$POETRY_VERSION&quot; # Copy only requirements to cache them in docker layer COPY pyproject.toml poetry.lock* ./ # Project initialization: # This does not actually populate your project with Python packages # It only copies the project setup files RUN poetry config virtualenvs.create false \ &amp;&amp; poetry install --no-dev --no-interaction --no-ansi # Exposing the port that Gunicorn will run on EXPOSE 8000 # Copying everything from the current directory into the container COPY . /app # Creates a non-root user with an explicit UID and adds permission to access the /app folder # For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers RUN adduser -u 5678 --disabled-password --gecos &quot;&quot; appuser &amp;&amp; chown -R appuser:appuser /app USER appuser # During debugging, this entry point will be overridden. 
For more information, please refer to https://aka.ms/vscode-docker-python-debug CMD [&quot;uvicorn&quot;, &quot;zoomerizer.main:app&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8000&quot;] </code></pre> <p>ngrok url (that is put in Zoom API Webhook) is forwarding to http://localhost:8000</p> <p>And here is the error I get when I run this command:</p> <pre><code>docker run -e TZ=Europe/Berlin -p 8000:8000 --env-file .env zoomerizer </code></pre> <p>Error:</p> <pre><code>INFO: 192.168.65.1:64127 - &quot;POST /webhook HTTP/1.1&quot; 200 OK 2024-05-30 14:38:40,320 - ERROR - WEBHOOK: Unauthorized request to Webhook: Signature verified rejected </code></pre> <p>The signature generated when tested with Zoom signature returns false, while locally without Docker it returns true. Can you please help me identify the reason?</p>
<python><docker><fastapi><ngrok><zoom-sdk>
2024-06-03 08:43:26
0
849
SarahData
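A diagnostic angle for the Docker question above: the HMAC computation itself is deterministic, so if it passes locally and fails in the container, the secret's *value inside the container* is the usual suspect — for instance, `docker run --env-file` passes quotes in `.env` literally (`ZOOM_WEBHOOK_SECRET_TOKEN="abc"` yields a secret containing the quote characters), whereas python-dotenv strips them locally; this specific cause is a hypothesis worth checking, not a certainty. The signature logic as a pure function makes the effect easy to demonstrate (secret and timestamp below are made up):

```python
import hashlib
import hmac

def zoom_signature(secret: str, timestamp: str, body: bytes) -> str:
    # same construction as Zoom's webhook docs: "v0:{ts}:{body}", HMAC-SHA256
    message = f"v0:{timestamp}:{body.decode('utf-8')}"
    digest = hmac.new(secret.encode(), message.encode(), hashlib.sha256)
    return "v0=" + digest.hexdigest()

body = b'{"event":"endpoint.url_validation"}'
good = zoom_signature("s3cret", "1717400000", body)
# a secret that kept its surrounding quotes from an env file verifies differently
bad = zoom_signature('"s3cret"', "1717400000", body)
```

Printing `repr(os.getenv("ZOOM_WEBHOOK_SECRET_TOKEN"))` inside the container would expose stray quotes or whitespace immediately.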
78,569,097
424,957
How to plot mutltiple geometries in correct projection by geoplot?
<p>I use the code below to plot multiple geometries:</p> <pre><code>newGdf = gpd.read_file(os.path.abspath(newFile), driver='KML') oldGdf = gpd.read_file(os.path.abspath(oldFile), driver='KML') lonMean, LagMean = newGdf.centroid.get_coordinates().mean() proj = geoplot.crs.Orthographic(central_longitude=lonMean, central_latitude=LagMean) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(18, 10.5), subplot_kw={&quot;projection&quot;: proj}) ax.set_aspect('equal') newPolyGdf = newGdf[newGdf['geometry'].apply(lambda x: isinstance(x, Polygon))] newLineGdf = newGdf[newGdf.geometry.type=='LineString'] geoplot.polyplot(newPolyGdf, ax=ax, facecolor=color, zorder=2) geoplot.polyplot(newLineGdf, ax=ax, facecolor='none', color='blue', zorder=3) oldPolyGdf = oldGdf[oldGdf['geometry'].apply(lambda x: isinstance(x, Polygon))] oldLineGdf = oldGdf[oldGdf.geometry.type=='LineString'] geoplot.polyplot(oldPolyGdf, ax=ax, facecolor=color, zorder=2) geoplot.polyplot(oldLineGdf, ax=ax, facecolor='none', color='blue', zorder=3) </code></pre> <p>As you can see in the plot, the oldGdf LineString is not drawn correctly. I think the reason is that the projection is taken from newGdf. How can I get both lines into the correct projection?</p> <p><a href="https://i.sstatic.net/Hl9j042O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hl9j042O.png" alt="enter image description here" /></a> I get an error when I do the transform before creating the projection; the plot result is as below if I transform after the projection:</p> <p><a href="https://i.sstatic.net/4U0NcFLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4U0NcFLj.png" alt="enter image description here" /></a></p>
<python><matplotlib><map-projections><geoplot>
2024-06-03 08:08:00
1
2,509
mikezang
78,569,067
567,797
How to avoid the entire app refresh on streamlit chat input
<p>I have the following piece of code:</p> <pre><code>def ask_q_n_a(data): question = streamlit.chat_input(&quot;Ask any question&quot;) if question: streamlit.write(ask_f(data, question)) </code></pre> <p>The problem is that as soon as the user enters the question, the page refreshes and the entire app reloads, fetching the data again. Using <code>streamlit.cache_data()</code> also doesn't help here. Any pointers?</p>
<python><streamlit>
2024-06-03 07:59:23
0
7,145
station
78,568,527
8,995,555
How to send large videos to Gemini AI API 1.5 Pro for inference?
<p>I'm currently working with the Gemini AI API 1.5 Pro (latest version) and need to send large video files for inference. These videos are several hundred megabytes each (~700MB) but are within the API's constraints (e.g., less than 1 hour in length). I want to upload them once and perform inference without re-uploading.</p> <p>In GPT-4o, there was an option to use <code>image_url</code>s to reference images. Is there a similar method or best practice for handling large video files with the Gemini AI API 1.5 Pro?</p> <p>The videos are too large to send repeatedly, so an efficient method for uploading and referencing them is crucial.</p> <p>Any guidance on API endpoints, required parameters, or example code snippets would be greatly appreciated.</p>
<python><server><openai-api><google-gemini>
2024-06-03 05:27:01
1
1,014
RukshanJS
78,568,499
828,647
Passing named arguments to test template in Robot framework
<p>I am using the following code in Robot framework in Python</p> <pre><code>*** Settings *** Library SeleniumLibrary Test Template Create New Guest of Specified Type *** Test Cases *** Verify creation of new guest of type Today guest_details = &amp;{TodayGuestDetails} schedule_type = Today guest_type = Visitor *** Keywords *** Create New Guest of Specified Type [Arguments] ${guest_details}=guest_details ${schedule_type}=schedule_type ${guest_type}=guest_type Fill Guest Details ${guest_details} *** Variables *** &amp;{TodayGuestDetails} firstname=Test lastname=Guest email=guest@mail.com </code></pre> <p>What is wrong with the above code? I am getting error <code>Resolving variable '${guest_details = {'firstname': 'Test', 'lastname': 'Guest', 'email': 'guest@mail.com'}}' failed: Variable '${guest_details }' not found.</code></p>
<python><selenium-webdriver><robotframework>
2024-06-03 05:20:17
1
515
user828647
78,568,483
22,912,974
FastAPI request with base64 image body takes too long to get response
<p>The API below takes too long to respond (more than 1 minute) in Swagger, but the print statement <code>print(len(doc))</code> runs instantly.</p> <pre><code># pip install fastapi # fastapi dev main.py from fastapi import FastAPI from typing import Any, Dict app = FastAPI() @app.post(&quot;/fast&quot;) async def root(data:Dict[Any,Any]): doc = data['doc'] print(len(doc)) # 1459305 return {&quot;success&quot;:len(doc)} </code></pre> <p>request body:</p> <pre><code>{&quot;doc&quot;:&quot;JVBERi0xLjMgCjEgMCBvYmoKPDwK..&lt;-- 1459305 total chars-- &gt;.Tk2MzU1NTM4OTY4NGIzYTBlNjIwZjA+XQo+PgpzdGFydHhyZWYKMTA5MzMzNQolJUVPRgo=&quot; </code></pre> <p>I expect the response to arrive quickly.</p>
<python><swagger><base64><fastapi>
2024-06-03 05:13:06
1
1,804
christopher johnson mccandless
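A pointer on the FastAPI question above: parsing a ~1.4 MB JSON string server-side takes milliseconds, which the instant `print` already hints at; the minute-long wait is most plausibly Swagger UI rendering the huge request/response text in the browser, not FastAPI itself — calling the endpoint with `curl` or `requests` instead of the docs page is the quick way to confirm. A standalone timing check of the parsing step at the question's payload size:

```python
import json
import time

# same size as the question's base64 document (1,459,305 chars)
payload = json.dumps({"doc": "A" * 1_459_305})

start = time.perf_counter()
data = json.loads(payload)
elapsed = time.perf_counter() - start
# parsing takes milliseconds, not minutes
```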
78,568,303
13,176,726
Stripe Checkout Session: No session_id Provided in Request After Successful Payment in Django
<p>I'm working on a Django project where users can purchase subscriptions to various packages using Stripe Checkout. However, after a successful payment, I'm encountering an issue where the session_id is not being provided in the request when redirected to the payment_success view.</p>

<p><strong>Here are the models.py:</strong></p>

<pre><code>class Package(models.Model):
    name = models.CharField(max_length=255)
    description = models.TextField()
    price_monthly = models.DecimalField(max_digits=10, decimal_places=2)
    price_annually = models.DecimalField(max_digits=10, decimal_places=2)
    tax_percentage = models.DecimalField(max_digits=5, decimal_places=2, default=0)

    def __str__(self):
        return self.name


class Subscription(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    stripe_customer_id = models.CharField(max_length=255)
    stripe_subscription_id = models.CharField(max_length=255)
    package = models.ForeignKey(Package, on_delete=models.CASCADE)
    interval = models.CharField(max_length=10)  # 'monthly' or 'annual'
    active = models.BooleanField(default=True)
</code></pre>

<p><strong>Here are the relevant views:</strong></p>

<pre><code>def package_list(request):
    packages = Package.objects.all()
    return render(request, 'subscriptions/package_list.html', {'packages': packages})


def create_checkout_session(request, package_id, interval):
    package = get_object_or_404(Package, id=package_id)
    if interval not in ['monthly', 'annually']:
        return redirect('package_list')
    if interval == 'monthly':
        price = package.price_monthly_including_tax()
        stripe_interval = 'month'
    else:
        price = package.price_annually_including_tax()
        stripe_interval = 'year'
    session = stripe.checkout.Session.create(
        payment_method_types=['card'],
        line_items=[{
            'price_data': {
                'currency': 'cad',
                'product_data': {
                    'name': package.name,
                },
                'unit_amount': int(price * 100),  # Stripe expects the amount in cents
                'recurring': {
                    'interval': stripe_interval,
                },
            },
            'quantity': 1,
        }],
        mode='subscription',
        success_url=request.build_absolute_uri('/subscriptions/payment-success/?session_id={CHECKOUT_SESSION_ID}'),
        cancel_url=request.build_absolute_uri('/subscriptions/packages/'),
        customer_email=request.user.email if request.user.is_authenticated else None,
    )
    return redirect(session.url, code=303)


def payment_success(request):
    session_id = request.GET.get('session_id')
    if not session_id:
        logger.error(&quot;No session_id provided in the request.&quot;)
        return redirect('subscriptions:package_list')
    try:
        session = stripe.checkout.Session.retrieve(session_id)
        logger.info(f&quot;Session retrieved: {session}&quot;)
    except stripe.error.StripeError as e:
        logger.error(f&quot;Stripe error: {e}&quot;)
        return redirect('subscriptions:package_list')
    customer_id = session.customer
    subscription_id = session.subscription
    try:
        subscription = Subscription.objects.get(stripe_subscription_id=subscription_id)
    except Subscription.DoesNotExist:
        logger.error(f&quot;No subscription found for ID: {subscription_id}&quot;)
        return redirect('subscriptions:package_list')
    context = {
        'session_id': session_id,
        'customer_id': customer_id,
        'subscription_id': subscription_id,
        'subscription': subscription,
    }
    return render(request, 'subscriptions/payment_success.html', context)


@csrf_exempt
def stripe_webhook(request):
    payload = request.body
    sig_header = request.META['HTTP_STRIPE_SIGNATURE']
    endpoint_secret = settings.STRIPE_WEBHOOK_SECRET
    event = None
    try:
        event = stripe.Webhook.construct_event(
            payload, sig_header, endpoint_secret
        )
    except ValueError as e:
        # Invalid payload
        logger.error(f&quot;Invalid payload: {e}&quot;)
        return JsonResponse({'status': 'invalid payload'}, status=400)
    except stripe.error.SignatureVerificationError as e:
        # Invalid signature
        logger.error(f&quot;Invalid signature: {e}&quot;)
        return JsonResponse({'status': 'invalid signature'}, status=400)

    # Handle the event
    if event['type'] == 'checkout.session.completed':
        session = event['data']['object']
        handle_checkout_session_completed(session)
    # Add more event types if needed
    return JsonResponse({'status': 'success'}, status=200)


def handle_checkout_session_completed(session):
    customer_id = session.get('customer')
    subscription_id = session.get('subscription')
    package_name = session['display_items'][0]['custom']['name']
    user_email = session['customer_email']
    interval = session['line_items']['data'][0]['price']['recurring']['interval']
    try:
        user = User.objects.get(email=user_email)
        package = Package.objects.get(name=package_name)
        Subscription.objects.create(
            user=user,
            stripe_customer_id=customer_id,
            stripe_subscription_id=subscription_id,
            package=package,
            interval=interval,
            active=True
        )
    except User.DoesNotExist:
        logger.error(f&quot;User with email {user_email} does not exist.&quot;)
    except Package.DoesNotExist:
        logger.error(f&quot;Package with name {package_name} does not exist.&quot;)
    except Exception as e:
        logger.error(f&quot;Error creating subscription: {e}&quot;)
</code></pre>

<p><strong>Issue:</strong> After a successful payment, the user is redirected to the payment_success view, but the session_id is not provided in the request, resulting in &quot;No session_id provided in the request.&quot;</p>

<p><strong>Expected behavior:</strong> The user should be redirected to the payment_success view with the session_id as a query parameter, allowing the view to retrieve the session details and display the subscription information.</p>

<p><strong>Actual behavior:</strong> The user is redirected to the package page, not to the success screen, because of the error.</p>

<p><strong>Question:</strong> How can I fix this? For both the invoice.payment_succeeded and checkout.session.completed events, all the response codes shown in the Stripe webhook dashboard for the POST deliveries are [500].</p>
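One thing worth checking (an assumption, not something visible in the snippet above) is that the literal `{CHECKOUT_SESSION_ID}` placeholder actually reaches Stripe unconsumed — Stripe substitutes it server-side when redirecting to the success URL, so if Python string formatting eats the braces first, the redirect arrives without a session_id. A small sketch (the host is hypothetical) showing two ways to build the URL that keep the placeholder intact:

```python
# Sketch: the success_url must contain the *literal* text
# {CHECKOUT_SESSION_ID}; Stripe replaces it when redirecting.
# If Python string formatting consumes the braces first, the
# redirect arrives without a session_id query parameter.

BASE = "https://example.com/subscriptions/payment-success/"  # hypothetical host

# Plain concatenation: nothing interprets the braces.
url_concat = BASE + "?session_id={CHECKOUT_SESSION_ID}"

# Inside an f-string the braces must be doubled to survive.
url_fstring = f"{BASE}?session_id={{CHECKOUT_SESSION_ID}}"

print(url_concat == url_fstring)
print("{CHECKOUT_SESSION_ID}" in url_concat)
```

Separately, the [500]s may come from `handle_checkout_session_completed` itself: as far as I know the `checkout.session.completed` webhook payload does not include `display_items` or `line_items`, so those subscript lookups raise `KeyError`; line items are normally fetched with a follow-up call such as `stripe.checkout.Session.list_line_items(session['id'])` — worth verifying against the Stripe docs for your API version.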
<python><django><stripe-payments>
2024-06-03 03:50:23
1
982
A_K
78,568,094
5,210,052
How to better include the optional filtering conditions for URL of API calls
<p>When working with API calls, we need to edit the URL. We typically can add many options for filtering in the URL.</p>

<p>For example, if I have two optional filtering conditions (lane_id and workflow_id), then I can naively write the following for all the possible combinations:</p>

<pre><code>def get_cards(board_id, lane_id=None, workflow_id=None):
    &quot;&quot;&quot;Get all the cards matching the given conditions.&quot;&quot;&quot;
    if lane_id is None and workflow_id is None:
        url = f'https://xxxx.kanbanize.com/api/v2/cards?board_ids={board_id}'
    if lane_id and workflow_id:
        url = f'https://xxxx.kanbanize.com/api/v2/cards?board_ids={board_id}&amp;lane_ids={lane_id}&amp;workflow_ids={workflow_id}'
    if lane_id and workflow_id is None:
        url = f'https://xxxx.kanbanize.com/api/v2/cards?board_ids={board_id}&amp;lane_ids={lane_id}'
    if lane_id is None and workflow_id:
        url = f'https://xxxx.kanbanize.com/api/v2/cards?board_ids={board_id}&amp;workflow_ids={workflow_id}'
</code></pre>

<p>The problem is, when the number of options grows, the number of possible combinations will soar.</p>

<p>Is there any easier way to write the options into the URL?</p>
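A common way to avoid enumerating the combinations (a sketch, not from the question — only the endpoint and parameter names are copied from it) is to collect the filters in a dict, drop the entries left as `None`, and let `urllib.parse.urlencode` build the query string:

```python
from urllib.parse import urlencode

def build_cards_url(board_id, **filters):
    """Build the cards URL, skipping any filter left as None."""
    params = {"board_ids": board_id}
    # Keep only the filters the caller actually supplied.
    params.update({k: v for k, v in filters.items() if v is not None})
    return "https://xxxx.kanbanize.com/api/v2/cards?" + urlencode(params)

url = build_cards_url(42, lane_ids=7, workflow_ids=None)
print(url)  # https://xxxx.kanbanize.com/api/v2/cards?board_ids=42&lane_ids=7
```

If you use the `requests` library, you can skip the manual step entirely and pass a `params=` dict to `requests.get`; it builds the query string for you and omits keys whose value is `None` (worth double-checking in its documentation).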
<python><url>
2024-06-03 01:36:03
2
1,828
John
78,568,011
2,016,632
How to compute the derivative of a spline in scipy, including the edges
<p>I'm having trouble with the derivative of a spline computed with <code>LSQUnivariateSpline</code>. Specifically, it is generating garbage at the edges... My example is a bit weird because I want to specify the knot locations rather than letting the algorithm choose (like <code>splrep</code>), but that should be fine, right?</p>

<p>By eye, I wonder if the derivative might be correct inside knot[1:-1]? Is there a way of getting a valid derivative across the whole domain?</p>

<p><a href="https://i.sstatic.net/9QvIb9PK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QvIb9PK.png" alt="enter image description here" /></a></p>

<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import LSQUnivariateSpline

# Example noisy data
np.random.seed(42)
NTime = 100
time = np.linspace(0, 1, NTime)
y_true = np.sin(2 * np.pi * time)
y_noisy = y_true + np.random.normal(scale=0.1, size=time.shape)

# Specify the number of knots and their initial locations
n_knots = 10
# Generate n_knots internal knots between the second and second-to-last time points
knots = np.linspace(time[1], time[-2], n_knots)
k = 3

# Fit the spline
spline = LSQUnivariateSpline(time, y_noisy, knots, k=3)

# Evaluate the spline and its derivative on a fine mesh
t_fine = np.linspace(time[0], time[-1], 1000)
y_fine = spline(t_fine)
y_fine_derivative = spline.derivative()(t_fine)

# Plot the results
plt.figure(figsize=(10, 6))
plt.plot(time, y_noisy, 'o', label='Noisy Data')
plt.plot(t_fine, y_fine, label='Fitted Spline')
plt.plot(t_fine, y_fine_derivative, label='Derivative')
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.title('LSQUnivariateSpline Fit')
plt.show()
</code></pre>
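A minimal sketch (synthetic data; the knot placement and tolerance are assumptions, not from the question) illustrating the usual workaround: the outermost polynomial pieces of a least-squares spline are only weakly constrained, so keep the interior knots well inside the data range and trust the derivative between them, checked here against the analytic derivative 2π·cos(2πx):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.01, size=x.size)

# Interior knots pulled away from the endpoints: LSQUnivariateSpline
# requires them strictly inside (x[0], x[-1]), and the closer they sit
# to the edges, the less data constrains the boundary pieces.
knots = np.linspace(0.1, 0.9, 8)
spl = LSQUnivariateSpline(x, y, knots, k=3)

# Evaluate the derivative only inside the knot span and compare it
# with the exact derivative of sin(2*pi*x).
d = spl.derivative()
x_in = np.linspace(0.1, 0.9, 50)
err = np.max(np.abs(d(x_in) - 2 * np.pi * np.cos(2 * np.pi * x_in)))
print(err < 0.5)
```

The same comparison evaluated at x = 0 or x = 1 typically shows a much larger discrepancy, which matches the "garbage at the edges" in the plot.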
<python><scipy><interpolation><spline><derivative>
2024-06-03 00:21:06
2
619
Tunneller
78,567,764
5,171,169
How to stop a thread? Is it possible to tie a thread to a global variable? thread2 in this script is not visible to the elif statement event
<pre><code>import PySimpleGUI as sg
from my_scripts import *
from my_constants import *
import sys
import glob
import yt_dlp
import threading
import time

global thread2

sg.LOOK_AND_FEEL_TABLE['MyCreatedTheme'] = {'BACKGROUND': '#000066',
                                            'TEXT': '#ffebbf',
                                            'INPUT': '#354230',
                                            'TEXT_INPUT': '#ffebbf',
                                            'SCROLL': '#99CC99',
                                            'BUTTON': ('#003333', '#FFCC66'),
                                            'PROGRESS': ('#D1826B', '#CC8019'),
                                            'BORDER': 1,
                                            'SLIDER_DEPTH': 0,
                                            'PROGRESS_DEPTH': 0, }

# Switch to use your newly created theme
sg.theme('MyCreatedTheme')
sg.theme_background_color('#4eae61')
sg.set_options(font=('Fira Code', 16), background_color='#555f9a')

l1 = sg.Text('Put url here', font=('Fira Code', 16), expand_x=True, justification='center')
t1 = sg.Text('', enable_events=True, font=('Fira Code', 16), justification='left')
t2 = sg.InputText(font=('Fira Code', 16), justification='left', default_text='5', key='-folder-')


def delete_video_in_D():
    list_of_files = [os.remove(p) for p in glob.glob(r'D:\\*.*')
                     if (os.path.isfile(p) and is_video(p))]


layout = [
    [l1],
    [t1, sg.InputText(key='-url-')],
    [t2],
    [sg.Button('Download', button_color='#b1defb')],
    [sg.Button('Cancel playing', button_color='#fbceb1')],
]

FOLDERS = {'1': Chef_Max_Mariola_PATH,
           '2': JuliaCrossBow_PATH,
           '3': French_Vibes_PATH,
           '4': Studio_Italiano_InClasse_PATH,
           '5': 'D:\\'}

window = sg.Window('Main Window', layout)

while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        break
    elif event == 'Download':
        delete_video_in_D()
        url = values['-url-']
        folder = values['-folder-']
        os.chdir(FOLDERS[folder])
        try:
            with yt_dlp.YoutubeDL() as ydl:
                thread1 = threading.Thread(target=ydl.download, args=(url, ), daemon=True)
                thread1.start()
                while True:
                    if thread1.is_alive():
                        pass
                    else:
                        list_of_files = [x for x in glob.glob(r'*.*')
                                         if os.path.isfile(x) and is_video(x)]
                        latest_file = max(list_of_files, key=os.path.getctime)
                        new_name = re.sub(r'\s*\[.*?\]\s*', '', latest_file)
                        os.rename(latest_file, new_name)
                        thread2 = threading.Thread(target=play_video,
                                                   args=(new_name,), daemon=True).start()
                        break
        except Exception as e:
            print('Error on line {}'.format(sys.exc_info()[-1].tb_lineno), type(e).__name__, e)
    elif event == 'Cancel playing':
        # thread2 is not visible here
        thread2.stop()
        # the error message I get when click on button 'Cancel playing' is
        # AttributeError: 'NoneType' object has no attribute 'stop'

window.close()
</code></pre>
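Two separate problems combine in the script above, and the sketch below (an assumption about how play_video could be restructured, not code from the question) addresses both: `threading.Thread(...).start()` returns `None`, so thread2 never holds the Thread object, and Python threads have no `.stop()` method in any case. The usual pattern is cooperative cancellation with a `threading.Event` that the worker checks periodically:

```python
import threading
import time

stop_event = threading.Event()

def play_video_loop(name):
    """Hypothetical stand-in for play_video: checks stop_event each tick."""
    while not stop_event.is_set():
        # ... advance playback by one frame/chunk here ...
        time.sleep(0.01)

# Keep the Thread object itself -- .start() returns None.
thread2 = threading.Thread(target=play_video_loop, args=("clip.mp4",), daemon=True)
thread2.start()

stop_event.set()        # the 'Cancel playing' handler would call this
thread2.join(timeout=1) # worker notices the flag and exits
print(thread2.is_alive())
```

Assigning `thread2` before the event loop starts (e.g. `thread2 = None`) also removes the need for `global`, since the loop and the handler then share the same module-level name.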
<python><pysimplegui>
2024-06-02 21:41:27
2
5,696
LetzerWille
78,567,601
15,433,308
Effect of python interpreter and environment on wheel creation
<p>I wrote a Python package that I want to publish. I created a setup.py file for it and managed to create a wheel file by running <code>python setup.py sdist bdist_wheel</code>.</p>

<p>My question is whether the Python interpreter that I run the command with affects the created wheel and its compatibility somehow, for example:</p>

<ol>
<li>Does it matter which packages are installed in the interpreter's environment? For example, if the environment has different packages installed than those specified in the <code>install_requires</code> parameter in the setup.py file.</li>
<li>Can the operating system affect the wheel's compatibility? For example, if I run the command on a Windows machine, will I be able to install it seamlessly on a Linux machine?</li>
</ol>

<p>My current understanding is that neither of these things should matter, but I'm afraid something unexpected might break.</p>

<p>Thanks.</p>
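For a pure-Python package (no compiled extensions), this can be checked from the wheel's filename itself: the last three dash-separated fields encode the interpreter, ABI, and platform tags, and `py3-none-any` means the wheel installs on any OS regardless of where it was built. A small sketch (the filename is hypothetical) parsing that tag triple:

```python
# A pure-Python build normally yields a wheel tagged "py3-none-any":
# interpreter tag "py3", ABI tag "none", platform tag "any".
# Compiled extensions instead produce platform-specific tags such as
# "cp311-cp311-win_amd64", which will NOT install on Linux.
wheel_name = "mypackage-1.0.0-py3-none-any.whl"   # hypothetical artifact

interpreter_tag, abi_tag, platform_tag = wheel_name[:-4].split("-")[-3:]
print(platform_tag == "any" and abi_tag == "none")
```

Note also that the packages installed in the build environment are not copied into the wheel; only the metadata declared in setup.py (e.g. `install_requires`) travels with it, and `pip` resolves those dependencies at install time.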
<python><setuptools>
2024-06-02 20:25:57
2
492
krezno