QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string, lengths 3 to 30 (nullable)
78,991,776
388,951
How can I reference the type hint for an instance variable on another class?
<p>The OpenAI Python types define a <code>Batch</code> class with a <a href="https://github.com/openai/openai-python/blob/192b8f2b6a49f462e48c1442858931875524ab49/src/openai/types/batch.py#L39-L42" rel="nofollow noreferrer"><code>status</code> field</a>:</p> <pre class="lang-py prettyprint-override"><code>class Batch(BaseModel): id: str # ... status: Literal[ &quot;validating&quot;, &quot;failed&quot;, &quot;in_progress&quot;, &quot;finalizing&quot;, &quot;completed&quot;, &quot;expired&quot;, &quot;cancelling&quot;, &quot;cancelled&quot; ] &quot;&quot;&quot;The current status of the batch.&quot;&quot;&quot; </code></pre> <p>I'd like to re-use the <code>Literal[...]</code> type hint from the <code>status</code> field in my own class. Here's an attempt:</p> <pre class="lang-py prettyprint-override"><code>from typing import Optional, TypedDict import openai class FileStatus(TypedDict): filename: str sha256: str file_id: Optional[str] batch_id: Optional[str] batch_status: Optional[openai.types.Batch.status] # ~~~~~~ # Variable not allowed in type expression Pylance(reportInvalidTypeForm) </code></pre> <p>What should I use as the type hint for <code>batch_status</code>? I'd prefer to avoid copy/pasting the <code>Literal[...]</code> expression from the OpenAI API if possible.</p>
<python><python-typing>
2024-09-16 20:56:16
1
17,142
danvk
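For the question above, one runtime option (a sketch, not from the post) is `typing.get_type_hints`, which resolves a class's annotations into a reusable object. Note that static checkers such as Pylance generally will not follow a dynamically built alias, so for full static support the `Literal[...]` still has to be duplicated unless the library exports a named alias. The `Batch` class below is a stand-in for `openai.types.Batch`, just to keep the sketch self-contained:

```python
from typing import Literal, get_args, get_type_hints

class Batch:  # stand-in for openai.types.Batch
    status: Literal["validating", "failed", "completed"]

# Extract the annotation object at runtime and reuse it as a type alias
BatchStatus = get_type_hints(Batch)["status"]
print(BatchStatus)  # typing.Literal['validating', 'failed', 'completed']

# get_args() yields the allowed strings, handy for runtime validation
print(get_args(BatchStatus))  # ('validating', 'failed', 'completed')
```

For runtime checks (e.g. validating an incoming status string), `value in get_args(BatchStatus)` works without copying the literal list.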
78,991,666
825,227
Looking to create column names from top rows in dataframe
<p>I have a dataframe that looks like this:</p> <pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 0 9.0 8.0 7.0 6.0 5.0 4.0 3.0 2.0 1.0 0.0 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9 .0 1 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2 4484.75 4485.0 4485.25 4485.5 4485.75 4486.0 4486.25 4486.5 4486.75 4487.0 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25 4489.5 3 4484.75 4485.0 4485.25 4485.5 4485.75 4486.0 4486.25 4486.5 4486.75 4487.0 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25 4489.5 4 4484.75 4485.0 4485.25 4485.5 4485.75 4486.0 4486.25 4486.5 4486.75 4487.0 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25 4489.5 5 4484.75 4485.0 4485.25 4485.5 4485.75 4486.0 4486.25 4486.5 4486.75 4487.0 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25 4489.5 </code></pre> <p>The first two rows of which are identifying values for the rest of rows (first row would be called <code>Position</code>, second row called <code>Side</code>). I'd like to create column names, with these rows as identifiers, that will allow the df to be easily sliced along both rows and columns.</p> <p>For instance, Id like to be able to filter the df according to column header values like this:</p> <pre><code>df.loc[:,[(df.Position.isin(0,1,2,3,4)) &amp; (df.Side==0)] </code></pre> <p>Would return this subset of values:</p> <pre><code> 10 11 12 13 14 2 4487.25 4487.5 4487.75 4488.0 4488.25 3 4487.25 4487.5 4487.75 4488.0 4488.25 4 4487.25 4487.5 4487.75 4488.0 4488.25 5 4487.25 4487.5 4487.75 4488.0 4488.25 </code></pre> <p>I've tried the <code>set_axis</code> attribute, but this doesn't allow the column names to be referenced. How would I achieve this?</p>
<python><pandas>
2024-09-16 20:16:13
1
1,702
Chris
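One way to get what the question above asks for (a sketch on hypothetical toy data, smaller than the real frame) is to promote the first two rows to a named column `MultiIndex` and then slice with boolean masks built from `get_level_values`, which replaces the invalid `df.Position.isin(0,1,2,3,4)` attempt:

```python
import pandas as pd

# Toy frame: row 0 plays the role of "Position", row 1 of "Side"
df = pd.DataFrame([
    [0.0, 1.0, 2.0, 3.0],     # Position
    [0.0, 0.0, 1.0, 1.0],     # Side
    [10.0, 11.0, 12.0, 13.0],
    [20.0, 21.0, 22.0, 23.0],
])

# Promote the first two rows to a named column MultiIndex, then drop them
df.columns = pd.MultiIndex.from_arrays([df.iloc[0], df.iloc[1]],
                                       names=["Position", "Side"])
df = df.iloc[2:]

# Boolean masks over the levels select columns by both identifiers
cols = df.columns
sub = df.loc[:, cols.get_level_values("Position").isin([0.0, 1.0])
              & (cols.get_level_values("Side") == 0.0)]
print(sub)
```

`sub` keeps only the columns whose `Position` is 0 or 1 and whose `Side` is 0, sliced across all remaining data rows.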
78,991,594
6,930,340
Polars: Advanced cross sectional trading algorithm
<p>I have a polars dataframe containing stock returns as well as cross-sectional ranks in a tidy/long <code>pl.DataFrame</code>.</p> <pre><code>import polars as pl prices = pl.DataFrame( { &quot;symbol&quot;: [ *[&quot;symbol1&quot;] * 7, *[&quot;symbol2&quot;] * 6, *[&quot;symbol3&quot;] * 6, *[&quot;symbol4&quot;] * 7, ], &quot;date&quot;: [ &quot;2023-12-30&quot;, &quot;2023-12-31&quot;, &quot;2024-01-03&quot;, &quot;2024-01-04&quot;, &quot;2024-01-05&quot;, &quot;2024-01-06&quot;, &quot;2024-01-07&quot;, &quot;2023-12-30&quot;, &quot;2024-01-03&quot;, &quot;2024-01-04&quot;, &quot;2024-01-05&quot;, &quot;2024-01-06&quot;, &quot;2024-01-07&quot;, &quot;2023-12-30&quot;, &quot;2023-12-31&quot;, &quot;2024-01-03&quot;, &quot;2024-01-04&quot;, &quot;2024-01-05&quot;, &quot;2024-01-07&quot;, &quot;2023-12-30&quot;, &quot;2023-12-31&quot;, &quot;2024-01-03&quot;, &quot;2024-01-04&quot;, &quot;2024-01-05&quot;, &quot;2024-01-06&quot;, &quot;2024-01-07&quot;, ], &quot;price&quot;: [ 100, 105, 110, 115, 120, 125, 100, 200, 210, 220, 230, 240, 250, 3000, 3100, 3200, 3300, 3400, 3700, 1000, 1050, 1080, 1090, 1300, 1350, 1400, ], } ) shape: (26, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════║ β”‚ symbol1 ┆ 2023-12-30 ┆ 100 β”‚ β”‚ symbol1 ┆ 2023-12-31 ┆ 105 β”‚ β”‚ symbol1 ┆ 2024-01-03 ┆ 110 β”‚ β”‚ symbol1 ┆ 2024-01-04 ┆ 115 β”‚ β”‚ symbol1 ┆ 2024-01-05 ┆ 120 β”‚ β”‚ symbol1 ┆ 2024-01-06 ┆ 125 β”‚ β”‚ symbol1 ┆ 2024-01-07 ┆ 100 β”‚ β”‚ symbol2 ┆ 2023-12-30 ┆ 200 β”‚ β”‚ symbol2 ┆ 2024-01-03 ┆ 210 β”‚ β”‚ symbol2 ┆ 2024-01-04 ┆ 220 β”‚ β”‚ symbol2 ┆ 2024-01-05 ┆ 230 β”‚ β”‚ symbol2 ┆ 2024-01-06 ┆ 240 β”‚ β”‚ symbol2 ┆ 2024-01-07 ┆ 250 β”‚ β”‚ symbol3 ┆ 2023-12-30 ┆ 3000 β”‚ β”‚ symbol3 ┆ 2023-12-31 ┆ 3100 β”‚ β”‚ symbol3 ┆ 2024-01-03 ┆ 3200 β”‚ β”‚ symbol3 ┆ 2024-01-04 ┆ 3300 β”‚ β”‚ 
symbol3 ┆ 2024-01-05 ┆ 3400 β”‚ β”‚ symbol3 ┆ 2024-01-07 ┆ 3700 β”‚ β”‚ symbol4 ┆ 2023-12-30 ┆ 1000 β”‚ β”‚ symbol4 ┆ 2023-12-31 ┆ 1050 β”‚ β”‚ symbol4 ┆ 2024-01-03 ┆ 1080 β”‚ β”‚ symbol4 ┆ 2024-01-04 ┆ 1090 β”‚ β”‚ symbol4 ┆ 2024-01-05 ┆ 1300 β”‚ β”‚ symbol4 ┆ 2024-01-06 ┆ 1350 β”‚ β”‚ symbol4 ┆ 2024-01-07 ┆ 1400 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>A simple cross-sectional trading algorithm would be something like this:</p> <ol> <li>Identify the best-performing symbol in a period (here: rank returns over two periods over all symbols for every date).</li> <li>Insert (buy) the symbol and put it into your portfolio.</li> <li>Repeat the process for every period (date).</li> </ol> <p>The actual results of the above algo can be computed as follows, whereas the column <code>in_portfolio</code> shows the actual signal.</p> <pre><code> returns_ranked = ( prices.with_columns(pl.col(&quot;price&quot;).pct_change(2).over(&quot;symbol&quot;).alias(&quot;return&quot;)) .with_columns( rank=pl.col(&quot;return&quot;) .rank(descending=True, method=&quot;random&quot;) .over(&quot;date&quot;) .cast(pl.UInt8) ) .with_columns(in_portfolio=pl.when(pl.col(&quot;rank&quot;) == 1).then(True).otherwise(False)) ) shape: (22, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price ┆ return ┆ rank ┆ in_portfolio β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 ┆ f64 ┆ u8 ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════β•ͺ══════════β•ͺ══════β•ͺ══════════════║ β”‚ symbol1 ┆ 2023-12-30 ┆ 100 ┆ null ┆ null ┆ false β”‚ β”‚ symbol1 ┆ 2023-12-31 ┆ 105 ┆ null ┆ null ┆ false β”‚ β”‚ symbol1 ┆ 2024-01-03 ┆ 110 ┆ 0.1 ┆ 1 ┆ true β”‚ β”‚ symbol1 ┆ 2024-01-04 ┆ 115 ┆ 0.095238 ┆ 2 ┆ false β”‚ β”‚ symbol1 ┆ 2024-01-05 ┆ 120 ┆ 
0.090909 ┆ 3 ┆ false β”‚ β”‚ symbol1 ┆ 2024-01-06 ┆ 125 ┆ 0.086957 ┆ 3 ┆ false β”‚ β”‚ symbol2 ┆ 2023-12-30 ┆ 200 ┆ null ┆ null ┆ false β”‚ β”‚ symbol2 ┆ 2024-01-03 ┆ 210 ┆ null ┆ null ┆ false β”‚ β”‚ symbol2 ┆ 2024-01-04 ┆ 220 ┆ 0.1 ┆ 1 ┆ true β”‚ β”‚ symbol2 ┆ 2024-01-05 ┆ 230 ┆ 0.095238 ┆ 2 ┆ false β”‚ β”‚ symbol2 ┆ 2024-01-06 ┆ 240 ┆ 0.090909 ┆ 2 ┆ false β”‚ β”‚ symbol3 ┆ 2023-12-30 ┆ 3000 ┆ null ┆ null ┆ false β”‚ β”‚ symbol3 ┆ 2023-12-31 ┆ 3100 ┆ null ┆ null ┆ false β”‚ β”‚ symbol3 ┆ 2024-01-03 ┆ 3200 ┆ 0.066667 ┆ 3 ┆ false β”‚ β”‚ symbol3 ┆ 2024-01-04 ┆ 3300 ┆ 0.064516 ┆ 3 ┆ false β”‚ β”‚ symbol3 ┆ 2024-01-05 ┆ 3400 ┆ 0.0625 ┆ 4 ┆ false β”‚ β”‚ symbol4 ┆ 2023-12-30 ┆ 1000 ┆ null ┆ null ┆ false β”‚ β”‚ symbol4 ┆ 2023-12-31 ┆ 1050 ┆ null ┆ null ┆ false β”‚ β”‚ symbol4 ┆ 2024-01-03 ┆ 1080 ┆ 0.08 ┆ 2 ┆ false β”‚ β”‚ symbol4 ┆ 2024-01-04 ┆ 1090 ┆ 0.038095 ┆ 4 ┆ false β”‚ β”‚ symbol4 ┆ 2024-01-05 ┆ 1300 ┆ 0.203704 ┆ 1 ┆ true β”‚ β”‚ symbol4 ┆ 2024-01-06 ┆ 1350 ┆ 0.238532 ┆ 1 ┆ true β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Just to show how to &quot;normally&quot; print a cross-sectional ranking, you can do</p> <pre><code>returns_ranked.pivot(on=&quot;symbol&quot;, index=[&quot;date&quot;], values=&quot;rank&quot;) shape: (7, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ symbol1 ┆ symbol2 ┆ symbol3 ┆ symbol4 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ u8 ┆ u8 ┆ u8 ┆ u8 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ 2023-12-30 ┆ null ┆ null ┆ null ┆ null β”‚ β”‚ 2023-12-31 ┆ null ┆ null ┆ null ┆ null β”‚ β”‚ 2024-01-03 ┆ 1 ┆ null ┆ 3 ┆ 2 β”‚ β”‚ 2024-01-04 ┆ 2 ┆ 1 ┆ 3 ┆ 4 β”‚ β”‚ 2024-01-05 ┆ 3 ┆ 2 ┆ 4 ┆ 1 β”‚ 
β”‚ 2024-01-06 ┆ 3 ┆ 2 ┆ null ┆ 1 β”‚ β”‚ 2024-01-07 ┆ 4 ┆ 2 ┆ 1 ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Now, let's make this algorithm a bit more advanced. Rather than simply buying the best performing symbol, the new algorithm looks like this:</p> <ol> <li>Identify the two best-performing symbols in a period (here: rank returns over two periods over all symbols for every date).</li> <li>Insert (buy) the symbols and put them into your portfolio.</li> <li>In the next period, you only sell the symbol(s) if they have a rank less than 3. Only then the highest ranked symbol(s) at that period will replace the previous symbols.</li> </ol> <p>The issue here is that this new algorithm always depends on the previous period. We need to know which symbols have been in the portfolio in the previous period in order to check if their current rank is less than 3. That is, we introduce a path dependency.</p> <p>I am looking for a vectorized solution. 
I am potentially working with a huge dataset.</p> <p>Here's the output I am looking for:</p> <pre><code>shape: (26, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price ┆ return ┆ rank ┆ in_portfolio β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 ┆ f64 ┆ u8 ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════β•ͺ═══════════β•ͺ══════β•ͺ══════════════║ β”‚ symbol1 ┆ 2023-12-30 ┆ 100 ┆ null ┆ null ┆ null β”‚ β”‚ symbol1 ┆ 2023-12-31 ┆ 105 ┆ null ┆ null ┆ null β”‚ β”‚ symbol1 ┆ 2024-01-03 ┆ 110 ┆ 0.1 ┆ 1 ┆ true β”‚ β”‚ symbol1 ┆ 2024-01-04 ┆ 115 ┆ 0.095238 ┆ 2 ┆ true β”‚ β”‚ symbol1 ┆ 2024-01-05 ┆ 120 ┆ 0.090909 ┆ 3 ┆ true β”‚ β”‚ symbol1 ┆ 2024-01-06 ┆ 125 ┆ 0.086957 ┆ 3 ┆ true β”‚ β”‚ symbol1 ┆ 2024-01-07 ┆ 100 ┆ -0.166667 ┆ 4 ┆ false β”‚ β”‚ symbol2 ┆ 2023-12-30 ┆ 200 ┆ null ┆ null ┆ null β”‚ β”‚ symbol2 ┆ 2024-01-03 ┆ 210 ┆ null ┆ null ┆ null β”‚ β”‚ symbol2 ┆ 2024-01-04 ┆ 220 ┆ 0.1 ┆ 1 ┆ true β”‚ β”‚ symbol2 ┆ 2024-01-05 ┆ 230 ┆ 0.095238 ┆ 2 ┆ true β”‚ β”‚ symbol2 ┆ 2024-01-06 ┆ 240 ┆ 0.090909 ┆ 2 ┆ true β”‚ β”‚ symbol2 ┆ 2024-01-07 ┆ 250 ┆ 0.086957 ┆ 2 ┆ true β”‚ β”‚ symbol3 ┆ 2023-12-30 ┆ 3000 ┆ null ┆ null ┆ null β”‚ β”‚ symbol3 ┆ 2023-12-31 ┆ 3100 ┆ null ┆ null ┆ null β”‚ β”‚ symbol3 ┆ 2024-01-03 ┆ 3200 ┆ 0.066667 ┆ 3 ┆ false β”‚ β”‚ symbol3 ┆ 2024-01-04 ┆ 3300 ┆ 0.064516 ┆ 3 ┆ false β”‚ β”‚ symbol3 ┆ 2024-01-05 ┆ 3400 ┆ 0.0625 ┆ 4 ┆ false β”‚ β”‚ symbol3 ┆ 2024-01-07 ┆ 3700 ┆ 0.121212 ┆ 1 ┆ true β”‚ β”‚ symbol4 ┆ 2023-12-30 ┆ 1000 ┆ null ┆ null ┆ null β”‚ β”‚ symbol4 ┆ 2023-12-31 ┆ 1050 ┆ null ┆ null ┆ null β”‚ β”‚ symbol4 ┆ 2024-01-03 ┆ 1080 ┆ 0.08 ┆ 2 ┆ true β”‚ β”‚ symbol4 ┆ 2024-01-04 ┆ 1090 ┆ 0.038095 ┆ 4 ┆ false β”‚ β”‚ symbol4 ┆ 2024-01-05 ┆ 1300 ┆ 0.203704 ┆ 1 ┆ false β”‚ β”‚ symbol4 ┆ 2024-01-06 ┆ 1350 ┆ 0.238532 ┆ 1 ┆ false β”‚ β”‚ symbol4 ┆ 
2024-01-07 ┆ 1400 ┆ 0.076923 ┆ 3 ┆ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ shape: (7, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ symbol1 ┆ symbol2 ┆ symbol3 ┆ symbol4 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ u8 ┆ u8 ┆ u8 ┆ u8 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ 2023-12-30 ┆ null ┆ null ┆ null ┆ null β”‚ β”‚ 2023-12-31 ┆ null ┆ null ┆ null ┆ null β”‚ β”‚ 2024-01-03 ┆ true ┆ null ┆ false ┆ true β”‚ β”‚ 2024-01-04 ┆ true ┆ true ┆ false ┆ false β”‚ β”‚ 2024-01-05 ┆ true ┆ true ┆ false ┆ false β”‚ β”‚ 2024-01-06 ┆ true ┆ true ┆ null ┆ false β”‚ β”‚ 2024-01-07 ┆ false ┆ true ┆ true ┆ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre>
<python><python-polars>
2024-09-16 19:52:28
0
5,167
Andi
78,991,587
901,444
getting last character from a string that may or may not be unicode
<p>I'm parsing a file that contains both alpha strings and unicode/UTF-8 strings containing IPA pronunciations.</p> <p>I want to be able to obtain the last character of a string, but sometimes those characters occupy two spaces, e.g.</p> <pre><code>syl = 'tyl' # plain ascii last_char = syl[-1] # last char is 'l' syl = 'tlΜ©' # contains IPA char last_char = syl[-1] # last char erroneously contains: 'Μ©' which is a diacritical mark on the l # want the whole character 'lΜ©' </code></pre> <p>If I try using <code>.decode()</code>, it fails with:</p> <blockquote> <p>'str' object has no attribute 'decode'</p> </blockquote> <p>How to obtain the last character of the Unicode/UTF-8 string (when you don't know if it's Ascii or Unicode string)?</p> <p>I guess I could use a lookup table to known characters and if it fails, go back and grab <code>syl[-2:]</code>. Is there an easier way?</p> <hr /> <p>In response to some comments, here is the complete list of IPA characters I've collected so far:</p> <pre><code>a, b, d, e, f, fΜ©, g, h, i, iΜ©, iΜ¬, j, k, l, lΜ©, m, n, nΜ©, o, p, r, s, sΜ©, t, tΜ©, tΜ¬, u, v, w, x, z, Γ¦, Γ°, Ε‹, Ι‘, Ι‘Μƒ, Ι’, Ι”, Ι™, ɚ, Ι›, ɜ, ɜ˞, ɝ, Ι‘, Ιͺ, Ι΅, ΙΉ, ΙΎ, Κƒ, ΚƒΜ©, ʊ, ʌ, Κ’, Κ€, ΞΈ, βˆ… </code></pre>
<python><string><unicode><utf-8><character>
2024-09-16 19:51:07
4
8,708
slashdottir
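For the question above, the standard library can do this without a lookup table: `unicodedata.combining()` returns non-zero for combining marks, so you can walk backwards from the end of the string until you reach the base character. This is a sketch that handles base-plus-diacritic sequences like the IPA list shown; the third-party `regex` module's `\X` grapheme-cluster matcher is a more general alternative for full Unicode segmentation:

```python
import unicodedata

def last_grapheme(s: str) -> str:
    """Return the final user-perceived character, keeping any trailing
    combining marks (e.g. IPA diacritics) attached to their base."""
    if not s:
        return s
    i = len(s) - 1
    while i > 0 and unicodedata.combining(s[i]):
        i -= 1          # step back over combining marks to the base char
    return s[i:]

print(last_grapheme("tyl"))        # 'l'
print(last_grapheme("tl\u0329"))   # 'l' plus U+0329, the syllabic marker
```

No `.decode()` is needed: in Python 3 the data is already `str`; the issue is purely that one perceived character can span several code points.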
78,991,568
615,525
How to Increase Accuracy of Text Read in Images Using Microsoft Azure Computer Vision AI
<p>I'm new to Microsoft Azure AI Computer Vision. I am using Cognitive Services and the Computer Vision Client in a Python Program to do two things:</p> <ol> <li>Extract text from a JPG Image using Optical Character Recognition (OCR)</li> <li>Use Cognitive Services to provide a Description of the Image</li> </ol> <p>After lots of configuration issues (and PIP installs!), I have achieved SOME results</p> <p>The Code for extracting the text from the image is:</p> <pre><code>#Create A ComputerVision Client client = ComputerVisionClient(ENDPOINT, CognitiveServicesCredentials(API_KEY)) image_path = '/Users/Owner/Documents/Bills Stuff/eBay/Images/Document_20240914_0008.jpg' #Use Azure AI Cognitive Services to Get the Title and Description of Image #For the TITLE, Use Optical Character Recognition (OCR) to Read the Text (Caption) on the Image with open(image_path, &quot;rb&quot;) as image_stream: ocr_results=client.recognize_printed_text_in_stream(image_stream) if ocr_results.regions: for region in ocr_results.regions: for line in region.lines: print(f&quot; Title: {' '.join([word.text for word in line.words])}&quot;) </code></pre> <p>My second Point - the Description is working great, BUT the code above is not extracting the Text from the Image accurately at all.</p> <p>It's CLOSE, but the actual text is: &quot;Scenic View of Horseshoe Curve on Pennsylvania Railroad&quot;</p> <p>The Code I presented above, returns: &quot;Inside of Horseshoe Curve on Cina Railroad&quot;</p> <p>Is there a way to improve my code to make this result more accurate?</p> <p>Adding: If I decrease/increase the size of my image, the code picks upmore or fewer words - maybe I need to somehow give the code more time to process the image??</p> <p>Maybe, if someone could answer my question if I phrase it more broadly:</p> <p>When I use the Microsoft Azure Computer Vision Sample AI online Tool Found Here:<a href="https://portal.vision.cognitive.azure.com/demo/extract-text-from-images" rel="nofollow 
noreferrer">https://portal.vision.cognitive.azure.com/demo/extract-text-from-images</a></p> <p>The text from the image is processed 100% correctly.</p> <p>The output displays blue boxes around each block of text. I think these are called Bounding Boxes.</p> <p>It appears that if Bounding Boxes are used then maybe the accuracy improves??</p> <p>Again, the Azure online Tool at the URL above is 100% correct.</p> <p>My code does not use Bounding Boxes and is about 75% accurate.</p> <p>Can someone point me in the right direction?</p>
<python><azure><ocr><azure-cognitive-services>
2024-09-16 19:43:15
1
353
user615525
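The legacy `recognize_printed_text_in_stream` call in the question above uses the old OCR model; the online demo the poster compares against runs the newer Read API, which in the same `azure-cognitiveservices-vision-computervision` SDK is the asynchronous `read_in_stream` / `get_read_result` pair. A sketch of that flow follows; the SDK call names match the documented quickstart, but treat the details (header parsing, status strings) as assumptions to verify against your SDK version:

```python
import time

def extract_text(client, image_path, poll_interval=1.0):
    """Submit an image to the Read API and poll until the job finishes.
    `client` is a ComputerVisionClient instance."""
    with open(image_path, "rb") as stream:
        job = client.read_in_stream(stream, raw=True)
    # The operation id is the last path segment of Operation-Location
    operation_id = job.headers["Operation-Location"].split("/")[-1]
    while True:
        result = client.get_read_result(operation_id)
        if result.status not in ("notStarted", "running"):
            break
        time.sleep(poll_interval)
    lines = []
    if result.status == "succeeded":
        for page in result.analyze_result.read_results:
            lines.extend(line.text for line in page.lines)
    return lines
```

Each returned line also carries a `bounding_box`, which is what the portal renders as the blue boxes; the accuracy gain, though, comes from the newer model, not from drawing the boxes.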
78,991,562
210,867
How to process tasks as they complete when using TaskGroup?
<p>I understand the <a href="https://textual.textualize.io/blog/2023/02/11/the-heisenbug-lurking-in-your-async-code/" rel="nofollow noreferrer">arguments</a> for using the newer <code>TaskGroup</code> in place of older mechanisms based on <code>create_task()</code>.</p> <p>However, <code>TaskGroup</code> exits after <em>all</em> tasks are done. What if you want to start processing the results as soon as the first task finishes, like you can with <code>asyncio.as_completed()</code> or <code>asyncio.wait( ..., return_when=FIRST_COMPLETED )</code>?</p> <p>If <code>TaskGroup</code> offers no way to do that, then it's a trade-off rather than a strict upgrade, isn't it?</p>
<python><parallel-processing><python-asyncio>
2024-09-16 19:39:05
2
8,548
odigity
78,991,298
4,577,467
Can a Python class derive from a SWIG-wrapped C++ class?
<p>I am using SWIG version 4.0.2 in a Windows Subsystem for Linux (WSL) Ubuntu distribution. I can wrap a C++ class (<code>EventProcessor</code>), create an instance of that class in Python, and provide that instance to a wrapped global C++ function (<code>registerEventProcessor()</code>). But, can I define a class in Python that derives from the base <code>EventProcessor</code> class?</p> <p>The simple C++ class and global function to wrap is represented by the following header file.</p> <h5>example4.h</h5> <pre><code>#pragma once class EventProcessor { private: int m_value; public: EventProcessor( void ) : m_value( 42 ) { } int getValue( void ) const { return m_value; } void setValue( int value ) { m_value = value; } }; void registerEventProcessor( int packetId, EventProcessor * pProc ) { if ( pProc ) { // Something simple to demonstrate that the function is successful: // Store the given packet ID. pProc-&gt;setValue( packetId ); } } </code></pre> <p>The SWIG interface file I use to wrap the C++ code is the following.</p> <h5>example4.swg</h5> <pre><code>%module example4 %{ #define SWIG_FILE_WITH_INIT #include &quot;example4.h&quot; %} %include &quot;example4.h&quot; </code></pre> <p>The command lines I use to generate the C++ wrapper code and then build the Python extension module are the following.</p> <pre><code>finch@laptop:~/work/swig_example$ swig -c++ -python example4.swg finch@laptop:~/work/swig_example$ python3 example4_setup.py build_ext --inplace running build_ext building '_example4' extension ... a lot of output, no errors ... </code></pre> <h1>Use base C++ class in Python - SUCCESS</h1> <p>The following is a demonstration that I can successfully create an instance of the base C++ class and provide that instance to the global function. 
The global function stores the given packet ID in the instance, which I can fetch afterwards.</p> <pre><code>finch@laptop:~/work/swig_example$ python3 Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import example4 &gt;&gt;&gt; b = example4.EventProcessor() &gt;&gt;&gt; b.getValue() 42 &gt;&gt;&gt; b.setValue( 20 ) &gt;&gt;&gt; b.getValue() 20 &gt;&gt;&gt; example4.registerEventProcessor( 30, b ) &gt;&gt;&gt; b.getValue() 30 </code></pre> <h1>Use derived Python class - FAILURE</h1> <p>The derived Python class is represented by the following file.</p> <h5>derived4.py</h5> <pre><code>import example4 class SOFProcessor( example4.EventProcessor ): def __init__( self ): print( &quot;SOFProcessor.ctor&quot; ) </code></pre> <p>The following is a demonstration that I can create an instance of the derived Python class.</p> <pre><code>finch@laptop:~/work/swig_example$ python3 Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. 
&gt;&gt;&gt; import example4 &gt;&gt;&gt; import derived4 &gt;&gt;&gt; d = derived4.SOFProcessor() SOFProcessor.ctor </code></pre> <p>However, I cannot even fetch the value stored in that instance.</p> <pre><code>&gt;&gt;&gt; d.getValue() Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/finch/work/swig_example/example4.py&quot;, line 72, in getValue return _example4.EventProcessor_getValue(self) TypeError: in method 'EventProcessor_getValue', argument 1 of type 'EventProcessor const *' </code></pre> <p>Neither can I provide that instance to the global function.</p> <pre><code>&gt;&gt;&gt; example4.registerEventProcessor( 100, d ) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/finch/work/swig_example/example4.py&quot;, line 83, in registerEventProcessor return _example4.registerEventProcessor(packetId, pProc) TypeError: in method 'registerEventProcessor', argument 2 of type 'EventProcessor *' </code></pre> <h1>A typedef to override ... something?</h1> <p>Is there a typedef I can define in the SWIG interface file so that an instance of the derived Python class can be correctly wrapped and provided to the C++ side?</p>
<python><c++><swig>
2024-09-16 18:07:51
1
927
Mike Finch
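The failures above are what SWIG's "director" feature addresses: without it, the generated proxy does not route calls through Python subclasses, and a derived `__init__` that never invokes the base constructor leaves no underlying C++ object at all, which matches the `argument 1 of type 'EventProcessor const *'` errors shown. A sketch of the interface-file change (syntax per the SWIG 4.x documentation; verify against your version):

```swig
// example4.swg -- enable directors for the class to be subclassed in Python
%module(directors="1") example4
%{
#define SWIG_FILE_WITH_INIT
#include "example4.h"
%}
%feature("director") EventProcessor;
%include "example4.h"
```

On the Python side, `SOFProcessor.__init__` must also call `super().__init__()`, otherwise the wrapped C++ instance is never constructed and every method call raises the `TypeError` shown. One caveat: for C++ code to dispatch back into Python overrides, the base-class methods need to be declared `virtual`, which the `EventProcessor` in the header currently is not.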
78,991,208
6,770,842
How to generate minimal epub file using ebooklib
<p>I would like to use python and ebooklib to generate a &quot;minimal&quot; epub from an existing one. In other words, I want to keep only the required subset of metadata (title, creator, date, publisher), cover image, toc, spine, and (obviously) the whole text, and remove all extra-stuff (extra metadata, extra images, fonts, and so on).</p> <p>Here is my current code:</p> <pre><code>import ebooklib as ebl from ebooklib import epub def minimal_epub(filename): old, new = epub.read_epub(filename), epub.EpubBook() for key in ['title', 'creator', 'date', 'publisher']: metadata = old.get_metadata('DC', key) if metadata: new.add_metadata('DC', key, metadata[0][0]) print(new.get_metadata('DC', key)) # show current metadata field print(new.metadata) # show all metadata fields # assume that the largest image in archive is the cover image sortkey = lambda item: -len(item.content) # sort images by decreasing sizes cover = sorted(old.get_items_of_type(ebl.ITEM_IMAGE), key=sortkey)[0] new.set_cover(cover.file_name, cover.content) # keep largest image as cover print(cover.file_name) # show cover filename for item in old.get_items_of_type(ebl.ITEM_DOCUMENT): new.add_item(item); print(item) # show name of document items for item in old.get_items_of_type(ebl.ITEM_STYLE): new.add_item(item); print(item) # show name of styles new.toc = old.toc; print(new.toc) # show links in table of content new.spine = old.spine; print(new.spine) # show spine new.add_item(epub.EpubNcx()); new.add_item(epub.EpubNav()) # navigation epub.write_epub(filename.replace('.','+.'), new) &gt;&gt;&gt; minimal_epub('path/to/file.epub') </code></pre> <p>I don't know how to find the cover image among all images stored in the <code>epub</code> file, so I assume that the one with largest size is the cover (the assumption works almost everytime).</p> <p>The generated (minimal) <code>epub</code> file can be correctly opened with a standard <code>epub</code> reader about 8 times out of 10, but in the remaining cases, 
the reader complains about incorrect format and cannot correctly display the book. So I guess that some other items in the archive should also be copied, but I am not a specialist of the <code>epub</code> format, and the documentation of the <code>ebooklib</code> package is rather short and doesn't cover all details.</p> <p>Is there an <code>epub</code> wizard here, to give me some hints?</p>
<python><epub><ebooklib>
2024-09-16 17:38:10
0
2,427
sciroccorics
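Two things commonly break output like the question above produces: the cover guess (largest-image is only a heuristic; the package usually flags the real cover in the manifest) and copied `toc`/`spine` entries that reference a nav/NCX the new book rebuilds. Below is a sketch of a safer cover pick, written against duck-typed items so it is easy to test; the attribute names (`id`, `properties`, `content`) follow `ebooklib.epub.EpubItem`, but treat them as assumptions:

```python
def pick_cover(image_items):
    """Prefer an image explicitly marked as the cover: EPUB3 sets the
    'cover-image' manifest property, and many EPUB2 files use an item
    id containing 'cover'. Fall back to the largest image only as a
    last resort."""
    flagged = [it for it in image_items
               if "cover-image" in getattr(it, "properties", [])
               or "cover" in (getattr(it, "id", "") or "").lower()]
    pool = flagged or list(image_items)
    return max(pool, key=lambda it: len(it.content))
```

In the question's code this would replace the size-sorted pick with `cover = pick_cover(old.get_items_of_type(ebl.ITEM_IMAGE))`. It also commonly helps to make the rebuilt spine reference the navigation document explicitly (e.g. starting `new.spine` with `'nav'`), so the `EpubNav()` the code adds is actually reachable.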
78,991,192
8,810,517
Race Condition in Confluent Kafka Consumer with Asyncio and ThreadPoolExecutor in Python
<p>I'm working on a Python application where I need to consume messages from a Kafka topic, process them by making an async API request, and produce a response to an outbound Kafka topic. Since the Kafka client I'm using is synchronous (confluent-kafka), I decided to use ThreadPoolExecutor to run the consumer in a separate thread and launch async tasks in the main event loop for I/O-bound operations.</p> <p>The code works, but I'm facing a race condition when two requests arrive simultaneously. The acknowledgment is sent for both requests, but the actual API request (inside <code>fetch_response_from_rest_service</code>) is sent for only one of the requests, twice. This issue is happening in the section of the code where I’ve marked a comment.</p> <p>Here’s the relevant code:</p> <pre><code>import json import confluent_kafka as kafka import asyncio from concurrent.futures import ThreadPoolExecutor import logging as logger async def run_prediction(inbound_topics, outbound_topics): consumer_args = {'bootstrap.servers': config.BOOTSTRAP_SERVERS, 'group.id': config.APPLICATION_ID, 'default.topic.config': {'auto.offset.reset': config.AUTO_OFFSET_RESET}, 'enable.auto.commit': config.ENABLE_AUTO_COMMIT, 'max.poll.interval.ms': config.MAX_POLL_INTERVAL_MS} training_consumer = kafka.Consumer(consumer_args) training_consumer.subscribe(inbound_topics) outbound_producer = kafka.Producer({'bootstrap.servers': config.BOOTSTRAP_SERVERS}) logger.info(f&quot;Listening to inbound topic {inbound_topics}&quot;) while True: msg = training_consumer.poll(timeout=config.KAFKA_POLL_TIMEOUT) if not msg: continue if msg.error(): logger.info(f&quot;Consumer error: {str(msg.error())}&quot;) continue try: send_ack(msg.value(), &quot;MESSAGE_RECEIVED&quot;) loop = asyncio.get_event_loop() loop.run_in_executor(executor, lambda: asyncio.run( fetch_response_from_rest_service(message=msg.value().decode(), callback=kafka_status_callback(msg, outbound_producer, outbound_topics)))) except Exception as 
ex: logger.exception(ex) async def fetch_response_from_rest_service(message, callback): # race condition happens at this point message variable when two requests come at same time message = json.loads(message) url = &quot;SOME_ENDPOINT&quot; headers = { &quot;Content-Type&quot;: &quot;application/json&quot; } response = None try: logger.info(f&quot;sending request to {url} for payload {message}&quot;) response = await async_request(&quot;POST&quot;, url, headers, data=json.dumps(message), timeout=10) response = json.loads(response) except Exception as ex: logger.exception(f&quot;All retries failed. Error: {ex}&quot;) finally: callback(response) asyncio.run(run_prediction([&quot;INBOUND_TOPIC&quot;], [&quot;OUTBOUND_TOPIC&quot;])) </code></pre> <p>send_ack method just sends a message on a kafka topic regarding acknowledgment of processing.</p> <p>I identified the race condition when multiple messages are processed at the same time. The <code>fetch_response_from_rest_service</code> function seems to be using the wrong message for one of the requests, causing it to reuse the some message twice and other one gets dropped.</p> <p>I tried solving this by locking the section of the code that processes the message variable:</p> <pre><code>async def fetch_response_from_rest_service(message, callback): message_copy_lock = asyncio.Lock() async with message_copy_lock: logger.info(f&quot;Got message : {json.loads(message)['conversationRequest']['requestId']}&quot;) message = json.loads(message) url = &quot;SOME_ENDPOINT&quot; headers = { &quot;Content-Type&quot;: &quot;application/json&quot; } response = None try: logger.info(f&quot;sending request to {url} for payload {message}&quot;) response = await async_request(&quot;POST&quot;, url, headers, data=json.dumps(message), timeout=10) response = json.loads(response) except Exception as ex: logger.exception(f&quot;All retries failed. 
Error: {ex}&quot;) finally: callback(response) </code></pre> <p>However, this did not fix the issue.</p> <p>My constraints:</p> <ul> <li>I want to keep using the synchronous Kafka client (Confluent Kafka) due to project constraints. I am unable to switch to an async Kafka client like AIOKafka. I considered using <code>asyncio.create_task()</code> if I was using an async Kafka client, where I could await for response from API call and still go ahead with polling requests, but I want to avoid that path due to project limitations.</li> </ul> <p>Questions:</p> <ol> <li>Why is this happening?</li> <li>How can I avoid the race condition in my current setup?</li> <li>How do multiple event loops on a single thread work vs event loops running on different threads work?</li> <li>Is my usage of asyncio.run_in_executor() correct in this context?</li> <li>Should I be doing something differently to handle parallel requests safely? Would any other synchronization technique work better than the asyncio.Lock()?</li> </ol> <p>Any help or suggestions would be appreciated!</p>
<python><multithreading><apache-kafka><async-await><python-asyncio>
2024-09-16 17:33:58
1
526
Abhay
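The likely culprit in the question above is not missing locking but late binding: the `lambda` handed to `run_in_executor` closes over the loop variable `msg`, and by the time the executor thread runs it, the consumer loop may already have polled the next message, so two tasks read the same (latest) value. Binding the payload eagerly, via the default-argument trick or `functools.partial`, fixes it. A minimal reproduction without Kafka (timings are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def demo(bind_eagerly: bool) -> set:
    seen = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        for msg in ("request-1", "request-2"):
            if bind_eagerly:
                # FIX: the default argument freezes the current value of msg
                pool.submit(lambda m=msg: (time.sleep(0.05), seen.append(m)))
            else:
                # BUG: msg is looked up when the lambda *runs*, after the
                # loop has already rebound it to the next message
                pool.submit(lambda: (time.sleep(0.05), seen.append(msg)))
    return set(seen)

print(demo(bind_eagerly=True))    # both requests processed
print(demo(bind_eagerly=False))   # typically only 'request-2', twice
```

Applied to the consumer loop: evaluate `msg.value().decode()` and `kafka_status_callback(msg, ...)` into locals before scheduling and pass them with `functools.partial`. The `asyncio.Lock()` attempt could never help, because each call creates a fresh lock and the wrong value has already been captured by the time the coroutine runs.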
78,991,070
6,618,225
Column manipulation based on headline values within rows
<p>I have a Pandas dataframe with a column that contains different types of values and I want to create a new column out of it based on the information inside that column. Every few rows there is a kind of &quot;headline&quot; row that should define that values for the following rows until the next headline row that then defines the values for the next rows and so on.</p> <p>To understand better, here is an example:</p> <pre><code>import pandas as pd import pandas as pd data = {'AA': ['', '', '', 'V_525-124', 'gsdgsd', 'hdfjhdf', 'gsdhsdhsd', 'gsdgsd', 'V_535-623', 'hosdfjk', 'hjodfjh', 'hjsdfjo', 'V_563-534', 'hojhdfhjdf', 'hodfjhjdfj', 'hofoj', 'hkdfphdf']} df = pd.DataFrame(data) print(df) </code></pre> <p>I want to create a new column BB that would look like that:</p> <pre><code>import pandas as pd data = {'AA': ['', '', '', 'V_525-124', 'gsdgsd', 'hdfjhdf', 'gsdhsdhsd', 'gsdgsd', 'V_535-623', 'hosdfjk', 'hjodfjh', 'hjsdfjo', 'V_563-534', 'hojhdfhjdf', 'hodfjhjdfj', 'hofoj', 'hkdfphdf'], 'BB': ['', '', '', 'V_525-124', 'V_525-124', 'V_525-124', 'V_525-124', 'V_525-124', 'V_535-623', 'V_535-623', 'V_535-623', 'V_535-623', 'V_563-534', 'V_563-534', 'V_563-534', 'V_563-534', 'V_563-534']} df = pd.DataFrame(data) print(df) </code></pre> <p>The number of rows under each &quot;headline&quot; varies, so the script should sort of check whether the next row is a headline-type, then add the headline value to column BB and then move on down the table until a new headline is detected. I can only think of a for-loop with indices and if-statements but I am sure Pandas offers a more elegant solution.</p> <p>The &quot;headlines&quot; all start with 'V_' if that helps.</p>
<python><pandas>
2024-09-16 16:50:30
3
357
Kai
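For the headline-rows question above, a sketch using `Series.where` plus a forward fill instead of an index loop (pandas assumed available; the data is copied from the question):

```python
import pandas as pd

data = {'AA': ['', '', '', 'V_525-124', 'gsdgsd', 'hdfjhdf', 'gsdhsdhsd',
               'gsdgsd', 'V_535-623', 'hosdfjk', 'hjodfjh', 'hjsdfjo',
               'V_563-534', 'hojhdfhjdf', 'hodfjhjdfj', 'hofoj', 'hkdfphdf']}
df = pd.DataFrame(data)

# Keep only the headline rows (those starting with 'V_'), turn everything
# else into NaN, then forward-fill the last seen headline downwards.
# Rows before the first headline stay NaN and are mapped back to ''.
df['BB'] = df['AA'].where(df['AA'].str.startswith('V_')).ffill().fillna('')
```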
78,991,020
179,014
Why are type annotations needed in Python dataclasses?
<p>In opposition to standard classes in Python dataclasses fields must have a type annotation. But I don't understand what the purpose of these type annotations really is. One can create a dataclass like this</p> <pre><code>from dataclasses import dataclass, fields @dataclass class Broken: field1: str = &quot;default_string1&quot;, field2: &quot;&quot; = &quot;default_string2&quot;, field3 = &quot;default_string3&quot; </code></pre> <p>And this class will be accepted by the python interpreter (and your IDE/static code checker might also not see anything wrong with it). However if you use the class</p> <pre><code>b = Broken() print(&quot;Wannabe type for field1:&quot;, fields(b)[0].type) print(&quot;Real type for field1:&quot;, type(b.field1)) print(&quot;Wannabe type for field2:&quot;, fields(b)[1].type) print(&quot;Real type for field2:&quot;, type(b.field2)) try: print(&quot;Wannabe type for field3:&quot;, fields(b)[2].type) except: print(&quot;field3 is not part of fields(b)&quot;) print(&quot;Real type for field3:&quot;, type(b.field3)) </code></pre> <p>you will get some surprising output</p> <pre><code>Wannabe type for field1: &lt;class 'str'&gt; Real type for field1: &lt;class 'tuple'&gt; </code></pre> <p>Did you notice the trailing comma after the default value of field1? So the type annotation is not used to check, that the default value has the correct type, so the real type is <code>tuple</code> instead of the type <code>str</code> used in the annotation.</p> <pre><code>Wannabe type for field2: Real type for field2: &lt;class 'tuple'&gt; </code></pre> <p>You can even use an empty string as type annotation and Python won't raise an eyebrow.</p> <pre><code>field3 is not part of fields(b) Real type for field3: &lt;class 'str'&gt; </code></pre> <p>If you leave away the type annotation completely, then the field will not be shown when calling the <code>fields()</code> function. 
However, it can still be accessed via the object.</p> <p>So what is the purpose of the type annotation for dataclasses? Is it really just used to check whether a field should be listed by the <code>fields()</code> function? Or did the Python maintainers anticipate future functionality for types in dataclasses that wasn't implemented yet? Why do I need to add type annotations to dataclasses in Python?</p>
<python><syntax><python-typing><python-dataclasses>
2024-09-16 16:33:32
2
11,858
asmaier
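To make the question's observation concrete: `fields()` lists only class attributes that carry an annotation, the annotation itself is stored as metadata, and nothing checks it at runtime. A small self-contained demonstration:

```python
from dataclasses import dataclass, fields

@dataclass
class Demo:
    field1: str = "default_string1"   # annotated -> becomes a dataclass field
    field3 = "default_string3"        # unannotated -> plain class attribute

d = Demo()
field_names = [f.name for f in fields(d)]  # only the annotated name appears
stored_type = fields(d)[0].type            # the annotation, kept as metadata
d.field1 = 123                             # not rejected: no runtime type check
```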
78,990,992
16,912,844
Circular Import During `dictConfig` in Python Logging
<p>I have a custom logger with the below structure, it uses <code>dictConfig</code> to read the <code>default.json</code> file for the logger. It works fine, but when I wanted to nest under a project root, I got an error while importing it. Any reason why that might happen? I double checked the path and it see to be correct.</p> <p><strong>Initial Structure</strong></p> <pre><code>tealogger/ - configuration/ - default.json - __init__.py - formatter.py - ... </code></pre> <p><strong>Nested Structure</strong></p> <pre><code>projectroot/ - tealogger/ - configuration/ - default.json - __init__.py - formatter.py - ... </code></pre> <p><strong>Error</strong></p> <pre><code>Traceback (most recent call last): File &quot;/opt/python/substance3d_scene_automation_110_m_1089/lib/python3.11/logging/config.py&quot;, line 400, in resolve found = getattr(found, frag) ^^^^^^^^^^^^^^^^^^^^ AttributeError: cannot access submodule 'tealogg' of module 'projectroot' (most likely due to a circular import) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/opt/python/substance3d_scene_automation_110_m_1089/lib/python3.11/logging/config.py&quot;, line 552, in configure formatters[name] = self.configure_formatter( ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/python/substance3d_scene_automation_110_m_1089/lib/python3.11/logging/config.py&quot;, line 664, in configure_formatter result = self.configure_custom(config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/python/substance3d_scene_automation_110_m_1089/lib/python3.11/logging/config.py&quot;, line 479, in configure_custom c = self.resolve(c) ^^^^^^^^^^^^^^^ File &quot;/opt/python/substance3d_scene_automation_110_m_1089/lib/python3.11/logging/config.py&quot;, line 403, in resolve found = getattr(found, frag) ^^^^^^^^^^^^^^^^^^^^ AttributeError: cannot access submodule 'tealogg' of module 'projectroot' (most likely due to a circular import) The above exception was the direct cause 
of the following exception: Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Users/yk/Desktop/Project/Repository/Adobe/3DIQE/projectroot/projectroot/tealogger/__init__.py&quot;, line 128, in &lt;module&gt; tealogger = TeaLogger( ^^^^^^^ File &quot;/Users/yk/Desktop/Project/Repository/Adobe/3DIQE/projectroot/projectroot/tealogger/__init__.py&quot;, line 100, in __new__ logging.config.dictConfig(DEFAULT_CONFIGURATION) File &quot;/opt/python/substance3d_scene_automation_110_m_1089/lib/python3.11/logging/config.py&quot;, line 823, in dictConfig dictConfigClass(config).configure() File &quot;/opt/python/substance3d_scene_automation_110_m_1089/lib/python3.11/logging/config.py&quot;, line 555, in configure raise ValueError('Unable to configure ' ValueError: Unable to configure formatter 'color' </code></pre> <p>The shorten version of the code is below, full version can be found committed to <a href="https://github.com/TeaFTI/Python-TeaLogger" rel="nofollow noreferrer">GitHub</a>. The result I got can be replicated with the <a href="https://github.com/TeaFTI/Python-TeaLogger/tree/develop-nest" rel="nofollow noreferrer">develop-nest branch</a>. 
I simply just started a Python shell, and ran <code>from projectroot import tealogger</code>.</p> <p><strong><code>projectroot/tealogger/__init__.py</code></strong></p> <pre class="lang-py prettyprint-override"><code>import json import logging import logging.config from pathlib import Path from typing import Union # Log Level CRITICAL = logging.CRITICAL FATAL = logging.FATAL ERROR = logging.ERROR WARNING = logging.WARNING WARN = logging.WARN INFO = logging.INFO DEBUG = logging.DEBUG NOTSET = logging.NOTSET # Default DEFAULT_CONFIGURATION = None CURRENT_MODULE_PATH = Path(__file__).parent.expanduser().resolve() with open( CURRENT_MODULE_PATH / 'configuration' / 'default.json', mode='r', encoding='utf-8' ) as file: DEFAULT_CONFIGURATION = json.load(file) class TeaLogger(logging.Logger): def __new__(cls, name: Union[str, None] = None, level: Union[int, str] = NOTSET, **kwargs): if kwargs.get('dictConfig'): # Dictionary logging.config.dictConfig(kwargs.get('dictConfig')) elif kwargs.get('fileConfig'): # File ... else: # Default if 'loggers' not in DEFAULT_CONFIGURATION: DEFAULT_CONFIGURATION['loggers'] = {} if name not in DEFAULT_CONFIGURATION['loggers']: # Configure new logger with default configuration DEFAULT_CONFIGURATION['loggers'][name] = { 'propagate': kwargs.get('propagate', False), 'handlers': kwargs.get('handler_list', ['default']) } # configuration['loggers'][name]['handlers'] = kwargs.get('handler_list') # NOTE: Override only individual configuration! # Overriding the entire configuration will cause this child # logger to inherit any missing configuration from the root # logger. (Even if the configuration was set previously.) 
DEFAULT_CONFIGURATION['loggers'][name]['level'] = logging.getLevelName(level) # configuration['loggers'][name]['level'] = level logging.config.dictConfig(DEFAULT_CONFIGURATION) # Get (Create) the Logger tea = logging.getLogger(name) return tea def __init__(self, name: str, level: Union[int, str] = NOTSET) -&gt; None: super().__init__(self, name=name, level=level) tea = TeaLogger(name=__name__, level=WARNING) def set_level(level: Union[int, str] = NOTSET): tea.setLevel(level) def log(level: Union[int, str], message: str, *args, **kwargs): if isinstance(level, str): level = logging.getLevelName(level) tea.log(level=level, msg=message, *args, **kwargs) def debug(message: str, *args, **kwargs): tea.debug(message, *args, **kwargs) ... </code></pre> <p><strong><code>projectroot/tealogger/configuration/default.json</code></strong></p> <pre class="lang-json prettyprint-override"><code>{ &quot;version&quot;: 1, &quot;formatters&quot;: { &quot;default&quot;: { &quot;format&quot;: &quot;[%(levelname)s %(name)s %(asctime)s] %(message)s&quot;, &quot;datefmt&quot;: &quot;%Y-%m-%dT%H:%M:%S%z&quot; }, &quot;short&quot;: { &quot;format&quot;: &quot;[%(levelname)-.1s %(asctime)s] %(message)s&quot;, &quot;datefmt&quot;: &quot;%Y-%m-%dT%H:%M:%S%z&quot; }, &quot;color&quot;: { &quot;()&quot;: &quot;projectroot.tealogger.formatter.ColorFormatter&quot;, &quot;record_format&quot;: &quot;[%(levelname)s %(name)s %(asctime)s] %(message)s&quot;, &quot;date_format&quot;: &quot;%Y-%m-%dT%H:%M:%S%z&quot; } }, &quot;filters&quot;: { &quot;stdout&quot;: { &quot;()&quot; : &quot;projectroot.tealogger.filter.StandardOutFilter&quot; } }, &quot;handlers&quot;: { &quot;default&quot;: { &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;formatter&quot;: &quot;default&quot;, &quot;filters&quot;: [], &quot;stream&quot;: &quot;ext://sys.stdout&quot; }, &quot;console&quot;: { &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;formatter&quot;: &quot;color&quot;, &quot;filters&quot;: [], 
&quot;stream&quot;: &quot;ext://sys.stdout&quot; }, &quot;stdout&quot;: { &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;level&quot;: &quot;DEBUG&quot;, &quot;formatter&quot;: &quot;color&quot;, &quot;filters&quot;: [ &quot;stdout&quot; ], &quot;stream&quot;: &quot;ext://sys.stdout&quot; }, &quot;stderr&quot;: { &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;level&quot;: &quot;ERROR&quot;, &quot;formatter&quot;: &quot;color&quot;, &quot;filters&quot;: [], &quot;stream&quot;: &quot;ext://sys.stderr&quot; } }, &quot;loggers&quot;: { &quot;base&quot;: { &quot;level&quot;: &quot;WARNING&quot;, &quot;propagate&quot;: false, &quot;filters&quot;: [], &quot;handlers&quot;: [ &quot;stderr&quot;, &quot;stdout&quot; ] }, &quot;tealogger&quot;: { &quot;level&quot;: &quot;WARNING&quot;, &quot;propagate&quot;: false, &quot;filters&quot;: [], &quot;handlers&quot;: [ &quot;console&quot; ] }, &quot;tealogger.test.conftest&quot;: { &quot;level&quot;: &quot;DEBUG&quot;, &quot;propagate&quot;: false, &quot;filters&quot;: [], &quot;handlers&quot;: [ &quot;stderr&quot;, &quot;stdout&quot; ] } }, &quot;root&quot;: { &quot;level&quot;: &quot;WARNING&quot;, &quot;filters&quot;: [], &quot;handlers&quot;: [ &quot;default&quot; ] }, &quot;incremental&quot;: false, &quot;disable_existing_loggers&quot;: false } </code></pre> <p><strong><code>projectroot/tealogger/formatter.py</code></strong></p> <pre class="lang-py prettyprint-override"><code>import logging from typing import Union ESC = '\x1b[' _COLOR_CODE = { # Reset 'RESET': f'{ESC}0m', # Foreground 'FOREGROUND_BLACK': f'{ESC}30m', ... 
} _LEVEL_COLOR_CODE = { 'NOTSET': _COLOR_CODE['RESET'], 'DEBUG': _COLOR_CODE['FOREGROUND_CYAN'], 'INFO': _COLOR_CODE['FOREGROUND_GREEN'], 'WARNING': _COLOR_CODE['FOREGROUND_YELLOW'], 'SUCCESS': _COLOR_CODE['FOREGROUND_GREEN'], 'ERROR': _COLOR_CODE['FOREGROUND_RED'], 'CRITICAL': f&quot;{_COLOR_CODE['FOREGROUND_RED']}{_COLOR_CODE['BACKGROUND_WHITE']}&quot;, } class ColorFormatter(logging.Formatter): def __init__(self, record_format: Union[str, None] = None, date_format: Union[str, None] = None) -&gt; None: super().__init__(fmt=record_format, datefmt=date_format) self._level_format = { logging.DEBUG: ( f&quot;{_LEVEL_COLOR_CODE['DEBUG']}&quot; f&quot;{record_format}&quot; f&quot;{_LEVEL_COLOR_CODE['NOTSET']}&quot; ), logging.INFO: ( f&quot;{_LEVEL_COLOR_CODE['INFO']}&quot; f&quot;{record_format}&quot; f&quot;{_LEVEL_COLOR_CODE['NOTSET']}&quot; ), ... } self._date_format = date_format def format( self, record: logging.LogRecord ) -&gt; str: ... </code></pre>
<python><logging>
2024-09-16 16:20:33
0
317
YTKme
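One way to sidestep the failing dotted-path lookup in the question above: per the `logging.config` documentation, the special `'()'` key may be an actual callable rather than a string, so building the config dict in Python and passing the formatter class object avoids `resolve()` entirely (and the string resolution is precisely what breaks while the package is still mid-import). A minimal sketch with a stand-in formatter class:

```python
import logging
import logging.config

class PlainFormatter(logging.Formatter):
    def __init__(self, record_format=None, date_format=None):
        super().__init__(fmt=record_format, datefmt=date_format)

# dictConfig resolves string values of '()' by importing the dotted path.
# Passing the class object itself skips that import machinery; remaining
# keys in the dict are forwarded as keyword arguments to the callable.
config = {
    "version": 1,
    "formatters": {
        "plain": {
            "()": PlainFormatter,
            "record_format": "[%(levelname)s] %(message)s",
        }
    },
    "handlers": {
        "default": {"class": "logging.StreamHandler", "formatter": "plain"}
    },
    "root": {"level": "WARNING", "handlers": ["default"]},
    "disable_existing_loggers": False,
}
logging.config.dictConfig(config)

formatter = logging.getLogger().handlers[0].formatter
formatted = formatter.format(
    logging.LogRecord("demo", logging.WARNING, __file__, 1, "hi", None, None)
)
```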
78,990,976
18,476,381
sqlalchemy contains_eager not loading nested relationships
<p>I have tables of motor and motor_cycle, which is a one to many relationship. I would like to join these tables and show all the motor_cycles for a motor with an <strong>OPTIONAL</strong> condition on the motor_cycle table. Originally I was using <strong>joinedload</strong> but found after applying the where condition it would just return all records. I then switched to <strong>contains_eager</strong> but there is some unusual behavior. When I pass in my filter condition it seems to apply correctly and return one motor_cycle record. But when I don't pass in any condition, instead of returning all the motor_cycle records it just returns one. The one it returns seems to be the same everytime and it doesn't match the condition I had used previously (Not that it matters). What can I do differently to load all the motor_cycle records into my response but at the same time be able to optionally filter them.</p> <p>EDIT: I've narrowed down the issue to the limit being applied. When sending in my request I am passing in a limit of 1. However this limit should be applied to the parent object not the children. Instead of applying the limit of 1 to motor it is applying it to the motor_cycle. 
Any idea on how to fix that?</p> <pre><code>async def search_motors( session: AsyncSession, serial_number: str = None, is_latest_fl: str = None, limit: int = 20, offset: int = 0, ): async with session: statement = ( select( DBMotor, ).options(contains_eager(DBMotor.motor_cycle)) .join(DBMotor.motor_cycle) ) if serial_number: statement = statement.where( DBMotor.serial_number.ilike(f&quot;%{serial_number}%&quot;) ) if is_latest_fl: statement = statement.where(DBMotorCycle.is_latest_fl == is_latest_fl) statement = statement.limit(limit).offset(offset) result = await session.scalars(statement) list_of_motors = result.unique().all() list_of_motor_models = ( [MotorSearchModel.model_validate(motor) for motor in list_of_motors] if list_of_motors else None ) return list_of_motor_models </code></pre>
<python><sql><sqlalchemy>
2024-09-16 16:15:16
1
609
Masterstack8080
78,990,912
5,838,180
Scatter plot on a region of the sky with a circle
<p>I am trying to do a scatter plot of a few data points on a region of the sky and then add a circle to the plot. Note, that I don't want a plot of the full sky, just of the region around the data points. The coordinates of the data points in degrees are:</p> <pre><code>ra = np.array([248.3, 249.8, 250.2, 250.5, 250.5]) dec = np.array([68.8, 67.7, 65.7, 72.2, 63.3]) </code></pre> <p>The centre of the circle should be at</p> <pre><code>ra1 = 270 dec1= 66 </code></pre> <p>It seems the packages for that (because we need to account for the curvature of the sky!) would be <code>astropy</code> and <code>regions</code>. But I just can't make it work. This post <a href="https://stackoverflow.com/a/77265601/5838180">here</a> is almost what I want to achieve, but it works with only one point and I don't see how to add more points. Thx for advice!</p>
<python><matplotlib><coordinates><region><astropy>
2024-09-16 15:53:13
1
2,072
NeStack
78,990,852
7,200,174
Improve Pandas and Vectorization Performance on large dataset
<p><strong>CONTEXT</strong></p> <p>I have a large dataset (100-250mb) in a CSV file and need to assign groupings to the population of people. The groupings are based on a dynamic ruleset defined in another CSV file. For ease of reproduction, I've added sample data and sample 'rulesets' / query strings</p> <p><strong>DATA</strong></p> <pre><code> # Data looks like this: ID Gender Age Country 1 Male 60 USA 2 Female 25 UK 3 Male 30 Australia </code></pre> <p><strong>CURRENT CODE</strong></p> <pre><code>import pandas as pd import numpy as np query1 = '(Gender in [&quot;Male&quot;,&quot;Female&quot;]) &amp; (Country==&quot;USA&quot;)' query2 = '(Country in [&quot;USA&quot;, &quot;UK&quot;]) &amp; (Gender==&quot;Male&quot;)' query3 = '(Age &gt; 40) &amp; (Gender==&quot;Male&quot;)' query_list = [query1, query2, query3] query_names = ['USA', 'MALE_USA_UK', 'MALE_OVER_40'] def assign_name(row, id_list, name, column_list): id = row['ID'] if name in column_list: if row[name] == 'Yes': return 'Yes' if str(id) in id_list: return 'Yes' return 'No' # Create a dataframe with random data data = { 'ID': range(1, 101), 'Gender': ['Male', 'Female'] * 50, 'Age': np.random.randint(18, 70, size=100), 'Country': ['USA', 'Canada', 'UK', 'Australia'] * 25 } df = pd.DataFrame(data) df = pd.DataFrame(data) tmp = df.copy() for query in query_list: name = query_names[query_list.index(query)] out = tmp.query(query) # Create a list of people that were derived in out. These are 'yes' person_list = out['ID'].to_list() column_list = out.columns.to_list() # Give them a 'Yes' or 'No' based on them being in the 'out' df df[name] = df.apply( lambda row: assign_name(row, person_list, name, column_list), axis = 1) </code></pre> <p><strong>PROBLEM</strong></p> <p>With larger datasets with 200k+ rows and 50+ different classification groups, this process takes a long time to run. I often get the DataFrame is highly fragmented error on .insert. 
I would like help building a solution that is quicker and more efficient.</p>
<python><pandas><dataframe><performance><vectorization>
2024-09-16 15:36:56
2
331
KL_
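For the performance question above, a sketch of one vectorized alternative: evaluate each ruleset with `DataFrame.eval()` (which supports `in` and `&` in query strings) to get a boolean mask in a single pass, and build all new columns in one `pd.concat` instead of inserting them one at a time, which avoids both `apply` and the fragmentation warning. Data and queries are copied from the question; the seeded generator is an illustrative substitution for `np.random.randint`:

```python
import numpy as np
import pandas as pd

query_list = ['(Gender in ["Male","Female"]) & (Country=="USA")',
              '(Country in ["USA", "UK"]) & (Gender=="Male")',
              '(Age > 40) & (Gender=="Male")']
query_names = ['USA', 'MALE_USA_UK', 'MALE_OVER_40']

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'ID': range(1, 101),
    'Gender': ['Male', 'Female'] * 50,
    'Age': rng.integers(18, 70, size=100),
    'Country': ['USA', 'Canada', 'UK', 'Australia'] * 25,
})

# One vectorized boolean mask per ruleset; all flag columns are assembled
# in a dict and concatenated to df in a single operation.
flags = {name: np.where(df.eval(query), 'Yes', 'No')
         for name, query in zip(query_names, query_list)}
df = pd.concat([df, pd.DataFrame(flags, index=df.index)], axis=1)
```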
78,990,761
12,352,239
Where are pip dependencies installed in a docker image?
<p>I have a docker file that is using poetry to install dependencies e.g.</p> <pre><code>FROM public.ecr.aws/lambda/python:3.12 # Install poetry RUN pip install &quot;poetry&quot; WORKDIR ${LAMBDA_TASK_ROOT} COPY poetry.lock pyproject.toml ${LAMBDA_TASK_ROOT}/ # Project initialization: RUN poetry config virtualenvs.create false &amp;&amp; poetry install --without test --no-interaction --no-ansi --no-root # Copy our Flask app to the Docker image ADD src ${LAMBDA_TASK_ROOT} # Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile) CMD [ &quot;handler.lambda_handler&quot; ] </code></pre> <p>My poetry file species a few specific test dependencies:</p> <pre><code>[tool.poetry.group.test.dependencies] pytest = &quot;^8.3.3&quot; </code></pre> <p>I want to verify that those dependencies are <strong>not</strong> included in my final docker image by seeing if the pip dependencies are present (or not) in the docker filesystem.</p> <p>when I inspect the container inside the docker desktop app I can view the docker filesystem.</p> <p><a href="https://i.sstatic.net/vg3gaWo7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vg3gaWo7.jpg" alt="enter image description here" /></a></p> <p>However, I cannot find where the <code>pip</code> installation actually is inside my docker container file system. Where can I find the python dependencies installed by pip inside my docker container?</p> <p>The reason I am digging through the docker container is to see if my test specific dependencies (e.g. pytest) are being installed or not (I do not want them to be installed).</p> <p>Teh</p>
<python><docker>
2024-09-16 15:07:12
1
480
219CID
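To answer the locating question generically: the interpreter itself can report where `pip install` puts packages, which saves digging through the image filesystem. A small sketch; from the host, something like `docker run --rm <image> pip show pytest` is one hedged way to check for a specific package, since `pip show` prints a `Location:` line when the package is installed and warns when it is not:

```python
import sysconfig

# `pip install` targets the interpreter's "purelib" path, typically
# .../site-packages (or .../dist-packages on Debian-based images).
purelib = sysconfig.get_paths()["purelib"]
print(purelib)
```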
78,990,748
16,226,816
Torch backward PowBackward0 causes nan gradient where it shouldn't
<p>I have a PyTorch tensor with NaN values inside. When I calculate the loss function using a simple MSE loss, the gradient becomes NaN even if I mask out the NaN values.</p> <p>Weirdly, this happens only when the mask is applied after calculating the loss, and only when the loss has a pow operation inside. The various cases follow:</p> <pre><code>import torch torch.autograd.set_detect_anomaly(True) x = torch.rand(10, 10) y = torch.rand(10, 10) w = torch.rand(10, 10, requires_grad=True) y[y &gt; 0.5] = torch.nan o = w @ x l = (y - o)**2 l = l[~y.isnan()] try: l.mean().backward(retain_graph=True) except RuntimeError: print('(y-o)**2 caused nan gradient') l = (y - o) l = l[~y.isnan()] try: l.mean().backward(retain_graph=True) except RuntimeError: pass else: print('y-o does not cause nan gradient') l = (y[~y.isnan()] - o[~y.isnan()])**2 l.mean().backward() print('masking before pow does not propagate nan gradient') </code></pre> <p>What makes NaN gradients propagate when passing through the backward pass of the pow function?</p>
<python><pytorch>
2024-09-16 15:04:08
1
378
Paul_0
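The mechanism behind the question above, in plain arithmetic: the backward of `(y - o)**2` multiplies the upstream gradient by the local derivative `2*(y - o)`, which is NaN at the masked positions. Masking *after* the pow only makes the upstream gradient zero there, and IEEE 754 defines `0 * NaN` as NaN, so the NaN survives. The subtraction's local derivative is the constant `-1`, independent of the NaN values, which is why the second case passes. A dependency-free illustration (signs simplified):

```python
import math

# At a masked-out element, the index/mask backward scatters an upstream
# gradient of 0, and the pow backward multiplies it by the local
# derivative 2*(y - o), which is NaN at that element.
upstream = 0.0
local_derivative = 2.0 * (math.nan - 0.5)  # derivative of the square term
grad_pow = upstream * local_derivative     # IEEE 754: 0 * NaN is NaN

# The plain subtraction's local derivative is the constant -1.0, which
# does not depend on the NaN value, so the zero upstream stays zero.
grad_sub = upstream * -1.0
```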
78,990,683
2,311,719
How can I run a vLLM model on a multi-GPU server
<p>I want to run an LLM model on a multi-gpu server (4 GPU) using vLLM.</p> <p>Here is the command line I run:</p> <pre><code>python3 -m vllm.entrypoints.api_server --model bjaidi/Phi-3-medium-128k-instruct-awq --quantization awq --dtype auto --gpu-memory-utilization 0.8 </code></pre> <p>I got this error:</p> <pre><code>[rank0]: torch.OutOfMemoryError: Error in model execution (input dumped to /tmp/err_execute_model_input_20240916-094901.pkl): CUDA out of memory. Tried to allocate 4.38 GiB. GPU 0 has a total capacity of 21.96 GiB of which 3.09 GiB is free. Including non-PyTorch memory, this process has 18.87 GiB memory in use. Of the allocated memory 18.58 GiB is allocated by PyTorch, and 63.57 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) </code></pre> <p>any hints ?</p>
<python><gpu><large-language-model><vllm>
2024-09-16 14:46:27
1
860
Ali Ait-Bachir
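For the question above, the usual lever is vLLM's tensor parallelism: the command shown loads the whole model onto GPU 0, while `--tensor-parallel-size` shards it across the available GPUs. A hedged command-line sketch (flag per vLLM's engine arguments; exact memory behaviour depends on the model and vLLM version):

```shell
python3 -m vllm.entrypoints.api_server \
    --model bjaidi/Phi-3-medium-128k-instruct-awq \
    --quantization awq \
    --dtype auto \
    --gpu-memory-utilization 0.8 \
    --tensor-parallel-size 4
```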
78,990,589
1,756,833
How to merge and match different length dataframes (lists) in Python/Pandas
<p>I have over 12 dataframes that I want to merge into a single dataframe, where row values match for each column (or null if they don't exist). Each dataframe has a different number of rows, but will never repeat values. The goal is to both identify common values and missing values.</p> <p>Eg.df1</p> <pre><code>id label 1 a-1 2 b-2 3 z-10 </code></pre> <p>Eg.df2</p> <pre><code>id label 1 b-2 2 d-4 3 e-5 </code></pre> <p>Eg.df3</p> <pre><code>id label 1 a-1 2 d-4 3 f-6 </code></pre> <p>Desired output</p> <p>Eg.final</p> <pre><code>id df1 df2 df3 1 a-1 null a-1 2 b-2 b-2 null 3 null d-4 d-4 4 null e-5 null 5 null null f-6 6 z-10 null null </code></pre> <p>I've investigated <code>join</code>, but these all seem to collapse values. <code>insert</code> seemed plausible, but I can't rectify the different row sizes/matching values to the same row. I want to maintain each <code>df</code> as it's own column.</p>
<python><pandas><dataframe>
2024-09-16 14:23:05
2
3,139
KHibma
78,990,487
3,801,449
ReadTheDocs Sphinx: WARNING: Field list ends without a blank line; unexpected unindent
<p>I'm documenting my Python package using Sphinx. But I've come across several warnings I have no idea, what to do about. All of them are occurring in my module-level docstrings, not inside of classes or methods.</p> <p>When trying to build my documentation on ReadTheDocs I'm getting the following error:</p> <pre><code>/home/docs/checkouts/readthedocs.org/user_builds/project/checkouts/latest/project/vqe_optimization.py:docstring of project.optimization:1: WARNING: Field list ends without a blank line; unexpected unindent. /home/docs/checkouts/readthedocs.org/user_builds/project/checkouts/latest/project/optimization.py:docstring of project.vqe_optimization:1: WARNING: Field list ends without a blank line; unexpected unindent. /home/docs/checkouts/readthedocs.org/user_builds/project/checkouts/latest/project/optimization.py:docstring of project.vqe_optimization:3: WARNING: Definition list ends without a blank line; unexpected unindent. </code></pre> <p>I'm completely puzzled by this, as my docstring looks like this:</p> <pre><code>&quot;&quot;&quot;Module containing the core Project engine. Core Project module comprising implementation of the core Project solver class together with optimizer interfaces etc. The module aims to contain all the logic behind Project solution, which is not directly connected to the properties of the electronic structure problem being solved or to the properties of logically-independent circuits like an initial orthogonal set or an ansatz. &quot;&quot;&quot; </code></pre> <p>Here I'm completely puzzled... 
<strong>What is incorrect on the first and third lines?</strong></p> <p>In my <code>conf.py</code> I'm using these extensions:</p> <pre><code>extensions = [ 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'myst_parser', 'nbsphinx', 'nbsphinx_link', 'sphinx.ext.autodoc' ] </code></pre> <p>The link to my project's build log is here: <a href="https://readthedocs.org/api/v2/build/25619818.txt" rel="nofollow noreferrer">https://readthedocs.org/api/v2/build/25619818.txt</a></p>
<python><python-sphinx><read-the-docs><autodoc><sphinx-napoleon>
2024-09-16 13:54:09
1
3,007
Eenoku
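General background for the warning above: in reStructuredText, a field list (and any indented field body) must be terminated by a blank line before the indentation drops back. Note also that the line numbers in the warning refer to the extracted docstring, not the source file, so the offending docstring may belong to a different object processed under that module rather than the module docstring quoted in the question. A hedged sketch of the shape Sphinx expects (field names illustrative):

```
:param config: a description that wraps onto a
    second, indented line.
:returns: the configured solver.

Normal text resumes here, after the blank line.
```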
78,990,400
12,719,086
How can I keep Dask workers busy when processing large datasets to prevent them from running out of tasks?
<p>I'm trying to process a large dataset (around 1 million tasks) using Dask distributed computing in Python. (I am getting data from a database to process it, and I am retriving around 1M rows). Here I have just made a simpler version of my code:</p> <p>Each task simulates some computation, and I want to efficiently distribute these tasks among multiple workers to maximize resource utilization.</p> <p>Here's a minimal reproducible example of my code:</p> <pre><code>from dask.distributed import Client, as_completed from tqdm import tqdm import time import random # Dummy computational function def compute_task(data): # Simulate some computation time.sleep(random.uniform(0.01, 0.02)) # Simulate computation time return data * data # Function to process a chunk of data def process_chunk(chunk): results = [] for item in chunk: result = compute_task(item) results.append(result) return results def main(scheduler_address, num_tasks=1000000, chunk_size=100, max_concurrent_tasks=1000): client = Client(scheduler_address) print(f&quot;Connected to Dask scheduler at {scheduler_address}&quot;) try: # Generate dummy data data = list(range(num_tasks)) total_chunks = (num_tasks + chunk_size - 1) // chunk_size # Create a generator for chunks def chunk_generator(): for i in range(0, len(data), chunk_size): yield data[i:i + chunk_size] chunks = chunk_generator() active_futures = [] # Initial submission of tasks for _ in range(min(max_concurrent_tasks, total_chunks)): try: chunk = next(chunks) future = client.submit(process_chunk, chunk) active_futures.append(future) except StopIteration: break completed_chunks = 0 with tqdm(total=total_chunks, desc=&quot;Processing data&quot;) as pbar: for completed_future in as_completed(active_futures): results = completed_future.result() # Here we could do something with the results pbar.update(1) completed_chunks += 1 # Submit new tasks to keep the pipeline full try: chunk = next(chunks) future = client.submit(process_chunk, chunk) 
active_futures.append(future) except StopIteration: pass # Remove completed future from the list active_futures.remove(completed_future) print(&quot;Processing complete.&quot;) finally: client.close() print(&quot;Client closed.&quot;) if __name__ == &quot;__main__&quot;: main(scheduler_address='tcp://localhost:8786') </code></pre> <p><strong>Explanation:</strong></p> <ul> <li>compute_task: A dummy function that simulates computational work by sleeping for a short random duration and returning the square of the input data.</li> <li>process_chunk: Applies compute_task to each item in a chunk.</li> <li>The main function: <ul> <li>Generates a list of numbers as dummy data.</li> <li>Splits the data into chunks.</li> <li>Submits tasks to workers, aiming to keep a certain number of tasks (tasks_per_worker) queued per worker.</li> <li>Processes results as tasks complete and tries to replenish the workers' queues.</li> </ul> </li> </ul> <p>The Problem:</p> <p>Despite this setup, the workers quickly run out of tasks and become idle. And the worker pool becomes deprived of tasks. It seems that my logic for submitting and replenishing tasks isn't keeping the workers sufficiently occupied, leading to inefficient utilization of resources. The workers process tasks faster than new tasks are being submitted, causing them to become idle while waiting for more tasks.</p> <p>My Questions:</p> <ul> <li>How can I improve my task submission logic to ensure that Dask workers remain busy until all data is processed?</li> <li>Is there a more efficient way to distribute tasks among workers to maximize throughput and resource utilization?</li> </ul> <p>I suspect that the overhead in my task submission and management logic is causing delays. Managing per-worker queues and specifying workers in client.submit might be introducing unnecessary complexity and latency. 
I am considering letting Dask handle the worker assignment by removing the <code>workers</code> parameter, but I'm unsure how to adjust my code accordingly.</p> <p>Any guidance or suggestions would be greatly appreciated!</p>
<python><dask><dask-distributed><dask-delayed>
2024-09-16 13:33:10
1
471
Polymood
78,990,308
1,103,595
How do I download wheels for the allosaurus python module?
<p>I'm having trouble understanding how to download wheels for a Python project. I need to build a set of wheels for a Blender addon - Blender has recently changed it's addon requirements, so I need to build wheels for it now.</p> <p>Anyhow, I want to use the allosaurus package. It has dependencies on a lot of other packages such as scipy and torch. Some of the dependencies have platform specific binary distros, while other dependencies are source only.</p> <p>If I use the <code>pip wheel allosaurus -w ./wheels</code> command, this downloads the wheels and dependencies, but only has the binaries for Windows (I'm working on a Windows machine). I also need to download the full set of wheels for Mac and Linux too. Unfortunately, when I try <code>pip download allosaurus --dest ./wheels-mac --only-binary=:all: --python-version=3.11 --platform=macosx_11_0_arm64</code>, it only downloads one very out of date version of allosaurus.</p> <p>I've tried downloading the wheels for the dependencies independently, however scipy on the Mac only has macosx versions 14, 12 and 10 available, while torch has macosx 11. I don't know if these are compatible. And when I try to download some of the source only dependencies, I get the zip file. I'm also getting multiple versions of numpy downloaded.</p> <p>I don't really understand what I'm doing. How can I download directories for Mac, Linux and Windows distros, with all the wheels they need?</p>
<python><windows><macos><pip>
2024-09-16 13:02:40
1
5,630
kitfox
78,990,178
17,082,611
Unable to import module 'src.main': No module named 'dependencies' when uploading FastAPI zipped project to AWS Lambda
<p>I am developing a simple FastAPI app and I want to deploy it on AWS Lambda.</p> <p>This is the structure of the project:</p> <p><a href="https://i.sstatic.net/XWxCJ1ec.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWxCJ1ec.png" alt="structure" /></a></p> <p>And I ran this command for creating the <code>dependencies</code> directory:</p> <pre><code>pip3 install -r requirements.txt --platform manylinux2014_x86_64 --target=dependencies --implementation cp --python-version 3.12 --only-binary=:all: --upgrade openai </code></pre> <p>Then adding the content of <code>dependencies</code> to <code>aws_lambda_artifact.zip</code>:</p> <pre><code>(cd dependencies; zip ../aws_lambda_artifact.zip -r .) </code></pre> <p>And finally adding to the <code>.zip</code> archive the content of <code>src</code> directory:</p> <pre><code>zip aws_lambda_artifact.zip -u -r src </code></pre> <p>You can see here that the archive contains the <code>src</code> directory:</p> <p><a href="https://i.sstatic.net/53iIGjtH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53iIGjtH.png" alt="src" /></a></p> <p>And this is its content:</p> <p><a href="https://i.sstatic.net/ZL25KSOm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZL25KSOm.png" alt="enter image description here" /></a></p> <p>Unfortunately when I upload the <code>.zip</code> archive into AWS Lambda, whose configuration is:</p> <p><a href="https://i.sstatic.net/1UP5CS3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1UP5CS3L.png" alt="lambda" /></a></p> <p>I get:</p> <pre><code>{ &quot;errorMessage&quot;: &quot;Unable to import module 'src.main': No module named 'dependencies'&quot;, &quot;errorType&quot;: &quot;Runtime.ImportModuleError&quot;, &quot;requestId&quot;: &quot;&quot;, &quot;stackTrace&quot;: [] } </code></pre> <p>In the &quot;test&quot; panel of the console. 
What's wrong?</p> <p>In case you need it, this is the <strong>main.py</strong> file:</p> <pre><code>from fastapi import FastAPI from mangum import Mangum from src import models from src.database import engine from src.routers import blog, user, authentication app = FastAPI() handler = Mangum(app) models.Base.metadata.create_all(bind=engine) app.include_router(blog.router) app.include_router(user.router) app.include_router(authentication.router) </code></pre> <p>And this is the <strong>requirements.txt</strong> file:</p> <pre><code>fastapi==0.114.2 mangum==0.17.0 SQLAlchemy==2.0.34 passlib==1.7.4 bcrypt==4.2.0 python-jose==3.3.0 python-multipart==0.0.9 </code></pre> <p>This is the full log output:</p> <pre><code>/var/task/pydantic/_internal/_config.py:341: UserWarning: Valid config keys have changed in V2: * 'orm_mode' has been renamed to 'from_attributes' warnings.warn(message, UserWarning) [ERROR] Runtime.ImportModuleError: Unable to import module 'src.main': No module named 'dependencies' Traceback (most recent call last):INIT_REPORT Init Duration: 1323.45 ms Phase: init Status: error Error Type: Runtime.Unknown /var/task/pydantic/_internal/_config.py:341: UserWarning: Valid config keys have changed in V2: * 'orm_mode' has been renamed to 'from_attributes' warnings.warn(message, UserWarning) [ERROR] Runtime.ImportModuleError: Unable to import module 'src.main': No module named 'dependencies' Traceback (most recent call last):INIT_REPORT Init Duration: 19347.21 ms Phase: invoke Status: error Error Type: Runtime.Unknown START RequestId: 6ae13d71-0d1f-428a-8793-e79ee0590f81 Version: $LATEST END RequestId: 6ae13d71-0d1f-428a-8793-e79ee0590f81 REPORT RequestId: 6ae13d71-0d1f-428a-8793-e79ee0590f81 Duration: 19376.13 ms Billed Duration: 19377 ms Memory Size: 128 MB Max Memory Used: 104 MB Status: error Error Type: Runtime.Unknown </code></pre> <p>This is the template <strong>API Gateway AWS Proxy</strong> I am using for testing the API:</p> <pre><code>{ 
&quot;body&quot;: &quot;eyJ0ZXN0IjoiYm9keSJ9&quot;, &quot;resource&quot;: &quot;/{proxy+}&quot;, &quot;path&quot;: &quot;/blogs&quot;, &quot;httpMethod&quot;: &quot;GET&quot;, &quot;isBase64Encoded&quot;: true, &quot;queryStringParameters&quot;: { &quot;foo&quot;: &quot;bar&quot; }, &quot;multiValueQueryStringParameters&quot;: { &quot;foo&quot;: [ &quot;bar&quot; ] }, &quot;pathParameters&quot;: { &quot;proxy&quot;: &quot;/blogs&quot; }, &quot;stageVariables&quot;: { &quot;baz&quot;: &quot;qux&quot; }, &quot;headers&quot;: { &quot;Accept&quot;: &quot;text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8&quot;, &quot;Accept-Encoding&quot;: &quot;gzip, deflate, sdch&quot;, &quot;Accept-Language&quot;: &quot;en-US,en;q=0.8&quot;, &quot;Cache-Control&quot;: &quot;max-age=0&quot;, &quot;CloudFront-Forwarded-Proto&quot;: &quot;https&quot;, &quot;CloudFront-Is-Desktop-Viewer&quot;: &quot;true&quot;, &quot;CloudFront-Is-Mobile-Viewer&quot;: &quot;false&quot;, &quot;CloudFront-Is-SmartTV-Viewer&quot;: &quot;false&quot;, &quot;CloudFront-Is-Tablet-Viewer&quot;: &quot;false&quot;, &quot;CloudFront-Viewer-Country&quot;: &quot;US&quot;, &quot;Host&quot;: &quot;1234567890.execute-api.us-east-1.amazonaws.com&quot;, &quot;Upgrade-Insecure-Requests&quot;: &quot;1&quot;, &quot;User-Agent&quot;: &quot;Custom User Agent String&quot;, &quot;Via&quot;: &quot;1.1 08f323deadbeefa7af34d5feb414ce27.cloudfront.net (CloudFront)&quot;, &quot;X-Amz-Cf-Id&quot;: &quot;cDehVQoZnx43VYQb9j2-nvCh-9z396Uhbp027Y2JvkCPNLmGJHqlaA==&quot;, &quot;X-Forwarded-For&quot;: &quot;127.0.0.1, 127.0.0.2&quot;, &quot;X-Forwarded-Port&quot;: &quot;443&quot;, &quot;X-Forwarded-Proto&quot;: &quot;https&quot; }, &quot;multiValueHeaders&quot;: { &quot;Accept&quot;: [ &quot;text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8&quot; ], &quot;Accept-Encoding&quot;: [ &quot;gzip, deflate, sdch&quot; ], &quot;Accept-Language&quot;: [ &quot;en-US,en;q=0.8&quot; ], 
&quot;Cache-Control&quot;: [ &quot;max-age=0&quot; ], &quot;CloudFront-Forwarded-Proto&quot;: [ &quot;https&quot; ], &quot;CloudFront-Is-Desktop-Viewer&quot;: [ &quot;true&quot; ], &quot;CloudFront-Is-Mobile-Viewer&quot;: [ &quot;false&quot; ], &quot;CloudFront-Is-SmartTV-Viewer&quot;: [ &quot;false&quot; ], &quot;CloudFront-Is-Tablet-Viewer&quot;: [ &quot;false&quot; ], &quot;CloudFront-Viewer-Country&quot;: [ &quot;US&quot; ], &quot;Host&quot;: [ &quot;0123456789.execute-api.us-east-1.amazonaws.com&quot; ], &quot;Upgrade-Insecure-Requests&quot;: [ &quot;1&quot; ], &quot;User-Agent&quot;: [ &quot;Custom User Agent String&quot; ], &quot;Via&quot;: [ &quot;1.1 08f323deadbeefa7af34d5feb414ce27.cloudfront.net (CloudFront)&quot; ], &quot;X-Amz-Cf-Id&quot;: [ &quot;cDehVQoZnx43VYQb9j2-nvCh-9z396Uhbp027Y2JvkCPNLmGJHqlaA==&quot; ], &quot;X-Forwarded-For&quot;: [ &quot;127.0.0.1, 127.0.0.2&quot; ], &quot;X-Forwarded-Port&quot;: [ &quot;443&quot; ], &quot;X-Forwarded-Proto&quot;: [ &quot;https&quot; ] }, &quot;requestContext&quot;: { &quot;accountId&quot;: &quot;123456789012&quot;, &quot;resourceId&quot;: &quot;123456&quot;, &quot;stage&quot;: &quot;prod&quot;, &quot;requestId&quot;: &quot;c6af9ac6-7b61-11e6-9a41-93e8deadbeef&quot;, &quot;requestTime&quot;: &quot;09/Apr/2015:12:34:56 +0000&quot;, &quot;requestTimeEpoch&quot;: 1428582896000, &quot;identity&quot;: { &quot;cognitoIdentityPoolId&quot;: null, &quot;accountId&quot;: null, &quot;cognitoIdentityId&quot;: null, &quot;caller&quot;: null, &quot;accessKey&quot;: null, &quot;sourceIp&quot;: &quot;127.0.0.1&quot;, &quot;cognitoAuthenticationType&quot;: null, &quot;cognitoAuthenticationProvider&quot;: null, &quot;userArn&quot;: null, &quot;userAgent&quot;: &quot;Custom User Agent String&quot;, &quot;user&quot;: null }, &quot;path&quot;: &quot;/blogs&quot;, &quot;resourcePath&quot;: &quot;/{proxy+}&quot;, &quot;httpMethod&quot;: &quot;GET&quot;, &quot;apiId&quot;: &quot;1234567890&quot;, &quot;protocol&quot;: 
&quot;HTTP/1.1&quot; } } </code></pre> <p>Amazon Q says:</p> <blockquote> <p>The error indicates that the Lambda function failed to import the module 'src.main' because it could not find the dependencies module that 'src.main' relies on. This often occurs when a module import is missing or incorrectly specified in the function code.</p> </blockquote> <p>Also:</p> <pre><code>1. Review your Lambda function code and ensure that the 'dependencies' module is correctly installed and available in the deployment package. 2. If the 'dependencies' module is not included in the deployment package, update your function code to include the missing module: - If using a virtual environment, install the 'dependencies' module and create a new deployment package with the updated virtual environment. - If not using a virtual environment, install the 'dependencies' module and include it in your deployment package. 3. After updating the deployment package with the missing 'dependencies' module, update the Lambda function code by uploading the new deployment package. 4. If the issue persists after updating the deployment package, review the Lambda function configuration and ensure that the correct handler is specified for the 'src.main' module. 5. If the issue still persists, review the Lambda function logs for any additional errors or information that could help identify the root cause. </code></pre>
<python><amazon-web-services><aws-lambda><fastapi>
2024-09-16 12:27:30
1
481
tail
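For the Lambda import error above: Lambda puts only the zip root (`/var/task`) on `sys.path`, so packages must sit either at the zip root or in a directory the code adds to `sys.path` itself. The traceback suggests some module under `src` does an `import dependencies...`-style import, or that the vendored packages ended up in a layout Python does not search. A minimal sketch of the shim for the vendored-subfolder layout — an assumption on my part, since why the import fails depends on how `src` actually imports things, and the helper name is invented here:

```python
import os
import sys

def add_vendor_dir(task_root, folder="dependencies"):
    """Make packages vendored under `<task_root>/<folder>` importable.

    Lambda only searches the zip root (/var/task); if third-party packages
    were zipped inside a subfolder instead, prepend that folder to sys.path.
    Returns the directory added, or None if it is missing or already added.
    """
    vendor = os.path.join(task_root, folder)
    if os.path.isdir(vendor) and vendor not in sys.path:
        sys.path.insert(0, vendor)
        return vendor
    return None
```

Called once at the top of the handler module (before the `src` imports), this makes a `dependencies/` subfolder behave like the zip root. The alternative, which the question's zip commands already attempt, is flattening the folder's *contents* into the zip root — but then nothing may reference `dependencies` as a package name.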
78,989,984
8,208,804
Optimize tree-traversal over table of data
<p>I have a tree which contains links between multiple nodes. I also have a dataframe with 1 million rows. There is a mapping between tree nodes and dataframe columns as follows:</p> <pre class="lang-py prettyprint-override"><code>import networkx as nx import pandas as pd import numpy as np g = nx.DiGraph() g.add_edge('a', 'b') g.add_edge('a', 'c') g.add_edge('b', 'd') g.add_edge('b', 'e') g.add_edge('c', 'f') g.add_edge('c', 'g') nx.draw(g, with_labels=True) df = pd.DataFrame({ &quot;l1&quot;: &quot;a&quot;, &quot;l2&quot;: ['b', 'c', 'b', 'c'], 'l3': ['d', 'f', 'e', 'g'], 'data': [1, 2, 3, 4] }) id_to_col_map = {'a': 'l1', 'b': 'l2', 'c': 'l2', 'd': 'l3', 'e': 'l3', 'f': 'l3', 'g': 'l3'} root_node_id = 'a' </code></pre> <p>Now, I need to find out the path taken by each row in the dataframe. Here is my solution:</p> <pre class="lang-py prettyprint-override"><code>leaf_cols = [] def helper(node_id, sub_df, parent_truthy, path=None): path = path or [] new_col_name = f&quot;matches_{','.join([*path, node_id])}&quot; col_name = id_to_col_map[node_id] sub_df[new_col_name] = parent_truthy &amp; (sub_df[col_name] == node_id) successors = list(g.successors(node_id)) if len(successors) == 0: leaf_cols.append(new_col_name) else: for successor in successors: helper(successor, sub_df, sub_df[new_col_name], path=[*path, node_id]) truthy = np.ones(len(df), dtype=bool) helper(root_node_id, df, truthy) df[&quot;path&quot;] = &quot;&quot; for leaf_col in leaf_cols: cond = (df[&quot;path&quot;].str.len() == 0) &amp; df[leaf_col] df[&quot;path&quot;] = np.where( cond, leaf_col.replace(&quot;matches_&quot;, &quot;&quot;), df[&quot;path&quot;] ) </code></pre> <p>Here is the how this solution works:</p> <ul> <li>By DFS (preorder) traversal over the tree, I am checking if the condition for the path so far has been met. For example, for the root node 'a', the condition is that 'l1' column should have 'a' as the value. 
For the next node 'b', the condition is that 'l2' column should have 'b' as the value (and should also satisfy the previous condition). This way, I will know for each row which nodes' conditions have been met, and I am adding these as columns to the dataframe itself.</li> <li>During this recursion, I am also capturing the leaf node columns. Later, I am looping through these columns and checking which of these has a value of <code>True</code> for each row. That helps me determine the path taken by each row.</li> </ul> <p>This algorithm is really slow. Please help me optimize this.</p>
<python><pandas><dataframe><tree><tree-traversal>
2024-09-16 11:37:18
1
1,462
Sreekar Mouli
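The traversal question above reduces to this observation: each row's `(l1, l2, l3)` tuple either equals exactly one root-to-leaf path or matches none, so all the per-node boolean columns can be replaced by one precomputed dict and one lookup per row. A pure-Python sketch of the idea, using plain dicts instead of the networkx/pandas objects from the question (function names invented here):

```python
def leaf_paths(tree, root):
    """DFS from the root, returning {leaf: "root,...,leaf"} for every leaf.

    `tree` is a plain adjacency dict: {node: [children]}.
    """
    paths, stack = {}, [(root, [root])]
    while stack:
        node, path = stack.pop()
        children = tree.get(node, [])
        if not children:
            paths[node] = ",".join(path)
        for child in children:
            stack.append((child, path + [child]))
    return paths

def row_paths(rows, tree, root, level_cols):
    """Label each row with the root-to-leaf path it matches ("" if none).

    rows: list of dicts; level_cols: column names ordered root -> leaf.
    One dict lookup per row replaces one boolean column per tree node.
    """
    paths = leaf_paths(tree, root)
    labels = []
    for row in rows:
        key = ",".join(str(row[c]) for c in level_cols)
        labels.append(key if paths.get(row[level_cols[-1]]) == key else "")
    return labels
```

With pandas, the same trick vectorizes: join the level columns into one key series and `.map()` it against the inverted path dict, which touches the million rows once instead of once per node.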
78,989,835
11,720,066
AWS Athena with python - is it possible to mock with moto while still testing the sql?
<p>My code performs sql queries on Athena using <code>boto3</code>.</p> <p>I want to be able to test the entire functionality, but avoiding the actual access to athena. I need the data to be fetched based on the query string my code sends and the actual data I put in mocked <code>s3</code> as part of the tests setup.</p> <p>Has anyone ever done something like this?</p> <p>All examples I see online are not really testing sql logic, but rather use predefined query results to be returned when starting Athena queries.</p>
<python><boto3><amazon-athena><moto>
2024-09-16 10:53:15
1
613
localhost
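On the Athena question above: moto's Athena backend only replays canned results — it never executes the SQL, which is why every online example looks like stubbing. A workaround some teams use is to run the very same query string against a local engine seeded with the test data. A sketch with stdlib `sqlite3` (an assumption worth stating: this only exercises SQL that is portable between SQLite and Athena's Trino/Presto dialect; DuckDB is a closer stand-in when queries use Presto-specific syntax):

```python
import sqlite3

def run_query_locally(query, tables):
    """Execute `query` against an in-memory SQLite DB seeded with `tables`.

    tables: {name: (columns, rows)} -- a local stand-in for the S3 data
    the real Athena query would scan. Only dialect-portable SQL works.
    """
    conn = sqlite3.connect(":memory:")
    for name, (cols, rows) in tables.items():
        conn.execute(f"CREATE TABLE {name} ({', '.join(cols)})")
        placeholders = ", ".join("?" for _ in cols)
        conn.executemany(f"INSERT INTO {name} VALUES ({placeholders})", rows)
    result = conn.execute(query).fetchall()
    conn.close()
    return result
```

In a test, the query string can come from the exact code path that would call `boto3`, so the SQL logic itself is what gets asserted — only the execution engine is swapped.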
78,989,606
2,491,541
Set Python Environment and Execute Python Code in PHP Script
<p>I am trying to convert a VCD file to a WaveDrom JSON file using the Python library</p> <p><a href="https://github.com/Toroid-io/vcd2wavedrom" rel="nofollow noreferrer">https://github.com/Toroid-io/vcd2wavedrom</a></p> <p>The code below works fine when I run it in my terminal:</p> <pre><code> source /var/www/xxx/vcd2wavedrom/verilog-env/bin/activate &amp;&amp; python3 /var/www/xxx/vcd2wavedrom/vcd2wavedrom/vcd2wavedrom.py -i /var/www/xxx/uuids/abcd/dump.vcd -o /var/www/xxx/uuids/abcd/dump.json </code></pre> <p>When I execute the same code in my PHP script as</p> <pre><code>$env = 'source /var/www/xxx/vcd2wavedrom/verilog-env/bin/activate'; $cmd = &quot;python3 /var/www/xxx/vcd2wavedrom/vcd2wavedrom/vcd2wavedrom.py -i &quot;.$uuid_dir.&quot;/dump.vcd -o &quot;.$uuid_dir.&quot;/dump.json&quot;; #!/bin/sh shell_exec($env .&quot; &amp;&amp; &quot;. $cmd); </code></pre> <p>I get a <strong>sh: 1: source: not found</strong> error.</p> <p>How can I set the venv from a PHP script?</p> <p>I am not a Python developer. Please elaborate your answer as much as possible so I can understand it better.</p> <p>I tried removing the <code>source</code>, but I got</p> <pre><code> sh: 1: /var/www/xxx/vcd2wavedrom/verilog-env/bin/activate: Permission denied Traceback (most recent call last): File &quot;/var/www/xxx/vcd2wavedrom/vcd2wavedrom/vcd2wavedrom.py&quot;, line 9, in &lt;module&gt; from vcdvcd.vcdvcd import VCDVCD ModuleNotFoundError: No module named 'vcdvcd' </code></pre> <p>I set the verilog-env folder to 777, but the same error persists.</p>
<python><php>
2024-09-16 09:41:08
0
6,891
Alaksandar Jesus Gene
78,989,501
4,105,307
Is there a simple way to hide database credentials in psycopg2 error messages
<p>I have recently realised that some error messages in psycopg2 display the full database URI. The URI's credentials are obviously secret, and they are guarded far more tightly than the logs are.</p> <pre><code>psycopg2.OperationalError: connection to server at &quot;my_database_uri.including_password.in_plain_text.com&quot; (db_ip_address), port 5432 failed: FATAL: the database system is shutting down </code></pre> <p>I'm planning to catch the errors and obfuscate the error message, but is there a simpler way to do it?</p>
<python><psycopg2>
2024-09-16 09:11:32
1
454
DarksteelPenguin
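Besides catching the exceptions one by one, a small sanitizer can be applied once at the logging boundary so anything credential-shaped is redacted before it is written out. A sketch — the regexes are my own guesses at the shapes a psycopg2 key=value DSN or `postgres://` URL can take, so they would need tuning against the real messages:

```python
import re

# Redact the password in key=value DSNs and in postgres:// URLs.
_PATTERNS = [
    (re.compile(r"(password=)\S+"), r"\1***"),
    (re.compile(r"(postgres(?:ql)?://[^:/\s]+:)[^@\s]+(@)"), r"\1***\2"),
]

def sanitize(message: str) -> str:
    """Return `message` with credential-looking substrings replaced by ***."""
    for pattern, repl in _PATTERNS:
        message = pattern.sub(repl, message)
    return message
```

Hooked into logging as a `logging.Filter` (or wrapped around `connect()` to re-raise with `sanitize(str(exc))`), this covers every code path in one place instead of per-call try/except blocks.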
78,989,466
13,849,446
Can not Install Ryu: AttributeError: module 'setuptools.command.easy_install' has no attribute 'get_script_args'
<p>I have been trying to install Ryu for few days but was not able to get around this error. I have been stuck for long and need to install it as soon as possible to continue working.</p> <pre><code>fani@fani-VMware-Virtual-Platform:~/ryu$ pip install . --break-system-packages Defaulting to user installation because normal site-packages is not writeable Processing /home/fani/ryu Preparing metadata (setup.py) ... error error: subprocess-exited-with-error Γ— python setup.py egg_info did not run successfully. β”‚ exit code: 1 ╰─&gt; [9 lines of output] Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; File &quot;&lt;pip-setuptools-caller&gt;&quot;, line 34, in &lt;module&gt; File &quot;/home/fani/ryu/setup.py&quot;, line 21, in &lt;module&gt; ryu.hooks.save_orig() File &quot;/home/fani/ryu/ryu/hooks.py&quot;, line 36, in save_orig _main_module()._orig_get_script_args = easy_install.get_script_args ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'setuptools.command.easy_install' has no attribute 'get_script_args' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ— Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre>
<python><python-3.x><sdn><ryu>
2024-09-16 09:03:49
4
1,146
farhan jatt
78,989,454
10,451,021
Unable to import selenium.webdriver.common.by
<p>While trying to run selenium commands in Python, I am not able to import <code>selenium.webdriver.common.by</code>. However, the library 'selenium' is finely imported. I am running the program in VS Code.</p> <p>Script:-</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By driver = webdriver.Chrome() driver.get(&quot;https://**.com&quot;) search_id = &quot;123&quot; search_elem = driver.find_element(By.ID, search_id) #search_elem.send_keys(&quot;**email.com&quot;) breakpoint() </code></pre> <p>Error:-</p> <pre><code>PS C:\Users\Desktop\python selenium&gt; python test.py DevTools listening on ws://***/devtools/browser/44ebdc50-e0c6-43e6-a329-85c2e96b0892 Traceback (most recent call last): File &quot;C:\Users\Desktop\python selenium\test.py&quot;, line 7, in &lt;module&gt; search_elem = driver.find_element(By.ID, search_id) File &quot;C:\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 748, in find_element return self.execute(Command.FIND_ELEMENT, {&quot;using&quot;: by, &quot;value&quot;: value})[&quot;value&quot;] File &quot;C:\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 354, in execute self.error_handler.check_response(response) File &quot;C:\Python310\lib\site-packages\selenium\webdriver\remote\errorhandler.py&quot;, line 229, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {&quot;method&quot;:&quot;css selector&quot;,&quot;selector&quot;:&quot;[id=&quot;i0116&quot;]&quot;} (Session info: chrome=128.0.6613.138); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception </code></pre> <p><a href="https://i.sstatic.net/VCgnP5th.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCgnP5th.png" alt="error" /></a></p>
<python><visual-studio-code><selenium-webdriver>
2024-09-16 08:56:00
0
1,999
Salman
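Note what the traceback above actually says: the `By` import succeeded; the failure is a `NoSuchElementException` because no element with id `i0116` exists when `find_element` runs (the page is still loading, or the field sits inside an iframe). Selenium's own remedy is an explicit wait — `WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, search_id)))`. The underlying poll-until-timeout pattern, sketched framework-free (the names and timings here are illustrative, not selenium's):

```python
import time

def wait_for(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Mirrors what selenium's WebDriverWait(driver, timeout).until(...) does:
    exceptions from the condition (e.g. NoSuchElementException) are swallowed
    and retried until the deadline, then re-raised as the failure cause.
    """
    deadline = time.monotonic() + timeout
    last_error = None
    while time.monotonic() < deadline:
        try:
            value = condition()
            if value:
                return value
        except Exception as exc:  # in selenium: NoSuchElementException etc.
            last_error = exc
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout}s") from last_error
```

In the question's script, replacing the bare `find_element` call with the explicit-wait equivalent (or checking for an iframe and calling `driver.switch_to.frame(...)` first) is the usual fix.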
78,989,293
19,369,310
InconsistentVersionWarning: Trying to unpickle estimator StandardScaler from version 1.2.2 when using version 1.3.2
<p>I am trying to train a machine learning model which uses the <code>PowerTransformer</code> from <code>scikit-learn</code> to transform my training data. And here is my code:</p> <pre><code>from sklearn import preprocessing from sklearn.preprocessing import PowerTransformer yj = PowerTransformer(method='yeo-johnson') df = yj.fit_transform(df) dump(yj, 'yeo_johnson_scaler.bin', compress=True) </code></pre> <p>and it works perfectly fine. Then when I actually deploy my model to new data, I reload the fitted transformer as follows:</p> <pre><code>yj=load('yeo_johnson_scaler.bin') df = yj.transform(df) </code></pre> <p>However, I got the following warning message:</p> <pre><code>/usr/local/lib/python3.10/dist-packages/sklearn/base.py:348: InconsistentVersionWarning: Trying to unpickle estimator StandardScaler from version 1.2.2 when using version 1.3.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to: https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations warnings.warn( </code></pre> <p>So as far as I understand, there is an update in scikit-learn that might lead to inconsistencies, I tried to read the link given but I don't really understand why this would lead to a problem in my code and how to resolve it.</p>
<python><scikit-learn>
2024-09-16 08:00:37
1
449
Apook
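On the warning above: the version is metadata baked into the pickle — the `StandardScaler` inside the dumped object was fitted under scikit-learn 1.2.2 and is being unpickled under 1.3.2, where internal attributes may have changed. The reliable fixes are pinning `scikit-learn==1.2.2` in the deployment environment, or re-fitting and re-dumping the transformer under 1.3.2. If the mismatch is deliberate, the warning can at least be caught and logged explicitly rather than silenced globally; a sketch of that pattern with the stdlib `warnings` machinery (a generic warning class stands in here for what I believe is `sklearn.exceptions.InconsistentVersionWarning` — verify the import against your sklearn version):

```python
import warnings

class VersionMismatchWarning(UserWarning):
    """Stand-in for sklearn.exceptions.InconsistentVersionWarning."""

def load_with_version_check(loader):
    """Run `loader` (e.g. a joblib.load call) and collect version warnings.

    Returns (loaded_object, list_of_warning_messages) so the caller can
    log or reject mismatched artifacts explicitly instead of ignoring them.
    """
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always", VersionMismatchWarning)
        obj = loader()
    mismatches = [w for w in caught if issubclass(w.category, VersionMismatchWarning)]
    return obj, [str(w.message) for w in mismatches]
```

Used as `scaler, issues = load_with_version_check(lambda: load('yeo_johnson_scaler.bin'))`, this turns a silent risk into an auditable decision.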
78,989,286
10,415,129
How to pass config options to plotly in Shiny for Python?
<p>How do I configure a <code>plotly</code> graph in Shiny for Python?</p> <p>Consider the following minimal Shiny example.</p> <pre class="lang-py prettyprint-override"><code>from shiny import ui, App from shinywidgets import output_widget, render_widget import plotly.express as px import plotly.graph_objects as go app_ui = ui.page_fluid( output_widget(&quot;plotly&quot;) ) def server(input, output, session): @render_widget def plotly(): df = px.data.gapminder().query(&quot;country=='Canada'&quot;) fig = px.line(df, x=&quot;year&quot;, y=&quot;lifeExp&quot;, title='Life expectancy in Canada') return go.FigureWidget(fig) app = App(app_ui, server) </code></pre> <p>How do I set <code>config = {'displayModeBar': False}</code>?</p> <p>I cannot use <code>fig.show(config=config)</code>, as this opens a separate tab with just the <code>plotly</code> graph.</p> <p><code>@render_widget</code> and <code>go.FigureWidget()</code> also do not seem to accept the option.</p> <p>All online resources point to using <code>fig.show()</code>, which I cannot use in Shiny for Python because it opens a separate tab with just the <code>plotly</code> graph. I have searched the internet but have not yet found a workaround.</p> <p>In R, I can just add the config to the fig object.</p> <p>Why is this not the case in Python?</p>
<python><plotly><py-shiny>
2024-09-16 07:59:44
1
351
Escalen
78,989,228
2,864,143
Python import from parent directory not working
<p>I have the following code structure:</p> <pre><code>root_package/common.py - common package code imported from different submodules, tests etc. root_package/core - core package functionality root_package/test/test.py - tests for core </code></pre> <p>I have read the <a href="https://docs.python.org/3/tutorial/modules.html" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/modules.html</a>. Not quite formal language, as usual, with lots of ambiguities, but whatever. What I got from it is that one can use dotted notation for intra-package references.</p> <p>However, I could not not manage to import common.py from inside test.py, as neither of these worked:</p> <pre><code>from ..root_package import common.py from ...root_package import common.py or w/o &quot;root_package&quot; </code></pre> <p>It returned:</p> <blockquote> <p>ImportError: attempted relative import with no known parent package</p> </blockquote> <p>I have empty __init__.py files in root_package_dir and test subdir.</p> <p>So do I have to resort to sys.path hack in my case?</p>
<python><python-3.x>
2024-09-16 07:38:18
1
960
Student4K
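The error in the question above is expected behavior: relative imports only resolve when a module is executed *as part of a package*, and running `test.py` directly makes it the top-level script with no parent — hence "no known parent package". The usual fix is to run `python -m root_package.test.test` from the directory that contains `root_package`, and write the absolute form `from root_package import common` inside `test.py`. If the `sys.path` hack really is the fallback, it can at least be made explicit and idempotent — a sketch (helper name invented here):

```python
import sys
from pathlib import Path

def add_project_root(script_path, levels_up=2):
    """Prepend the ancestor `levels_up` levels above the script to sys.path.

    For root_package/test/test.py, levels_up=2 yields the directory that
    *contains* root_package, so `from root_package import common` resolves.
    Idempotent: the path is only inserted once. Returns the path added.
    """
    root = str(Path(script_path).resolve().parents[levels_up])
    if root not in sys.path:
        sys.path.insert(0, root)
    return root
```

Calling `add_project_root(__file__)` at the top of `test.py` makes it runnable both directly and under a test runner, though `python -m` (or installing the package in editable mode) remains the cleaner option.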
78,989,207
315,182
Why is Python httpx.get or requests.get much slower than cURL for this API?
<p>For a home automation project I am trying to pull train delay data. <a href="https://v6.db.transport.rest/" rel="nofollow noreferrer">An API wrapper exists</a>, with cURL examples. These work fine, but both Python's <code>requests.get</code> and <code>httpx.get</code> are slow to pull data (up to a minute for <code>requests</code> and about 4 seconds for <code>httpx</code>) but <code>curl</code>, or pasting a link in the browser, returns almost immediately. Why?</p> <p>The internet suggested that some sites have anti-scraping protections and may throttle or block <code>requests</code>, as it uses HTTP1.0. On this API <code>httpx</code> does seem to be much faster, but nowhere near as fast as <code>curl</code> or the browser.</p> <p>Some examples - this Python snippet takes about 4 seconds:</p> <pre><code> import httpx client = httpx.Client(http2=True) response = client.get('https://v6.db.transport.rest/stations?query=berlin') print(response.text) </code></pre> <p>This takes up to a minute:</p> <pre><code> import requests response = requests.get('https://v6.db.transport.rest/stations?query=berlin') print(response.text) </code></pre> <p>This returns almost immediately:</p> <pre><code> import subprocess command = 'curl \'https://v6.db.transport.rest/stations?query=berlin\' -s' result = subprocess.run(command, capture_output=True, shell=True, text=True) print(result.stdout) print(result.stderr) </code></pre> <p>What's the magic here?</p>
<python><rest><curl><python-requests><httpx>
2024-09-16 07:28:06
1
898
0xDEADBEEF
78,989,006
20,793,070
aiogram 3 suddenly started giving an error
<p>I created the bot using aiogram 3.4.1 and it worked well. I left the project for two months and returned to it recently. When calling any menu, the bot gives an error:</p> <pre><code>aiogram.exceptions.TelegramBadRequest: Telegram server says - Bad Request: can't parse entities: Unsupported start tag &quot;&quot; at byte offset </code></pre> <p>Next are lines about dispatcher errors:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/.../lib/python3.10/site-packages/aiogram/dispatcher/dispatcher.py&quot;, line 309, in _process_update response = await self.feed_update(bot, update, **kwargs) File &quot;/Users/.../lib/python3.10/site-packages/aiogram/dispatcher/dispatcher.py&quot;, line 158, in feed_update response = await self.update.wrap_outer_middleware( File &quot;/Users/.../lib/python3.10/site-packages/aiogram/dispatcher/middlewares/error.py&quot;, line 25, in __call__ return await handler(event, data) ... and more </code></pre> <p>I have tried other versions of aiogram. I changed the quotes in the menu item definitions to single ones. Nothing helped. 
Searching didn't help either.</p> <p>Please help me understand what the problem is, because the bot worked well before, and I don’t want to completely rewrite it.</p> <p>main.py</p> <pre><code>from aiogram import Bot, Dispatcher from aiogram.enums import ParseMode from aiogram.fsm.storage.memory import MemoryStorage from aiogram.client.bot import DefaultBotProperties async def main() -&gt; None: bot = Bot(API_TOKEN, default=DefaultBotProperties(parse_mode=ParseMode.HTML)) dp = Dispatcher(storage=MemoryStorage()) dp.include_router(common.router) await bot.delete_webhook(drop_pending_updates=True) await dp.start_polling(bot) if __name__ == &quot;__main__&quot;: asyncio.run(main()) </code></pre> <p>common.py</p> <pre><code>from aiogram import F, Router from aiogram.filters import Command from aiogram.fsm.context import FSMContext from aiogram.types import Message from handlers.keyboard import start_keyboard from handlers import add_hand from handlers import manage_hand from handlers import count_hand from handlers import service_hand from config import admins router = Router() router.include_router(add_hand.router) router.include_router(manage_hand.router) router.include_router(count_hand.router) router.include_router(service_hand.router) @router.message(F.from_user.id.in_(admins), Command(commands=[&quot;start&quot;])) async def cmd(message: Message, state: FSMContext): await state.clear() keyboard=start_keyboard() await message.answer( text=&quot;Choose action&quot;, reply_markup=keyboard ) @router.message(Command(commands=[&quot;cancel&quot;])) @router.message(F.text.lower() == &quot;back&quot;) async def cmd(message: Message, state: FSMContext): await state.clear() keyboard=start_keyboard() await message.answer( text=&quot;Choose action&quot;, reply_markup=keyboard ) </code></pre> <p>keyboard.py</p> <pre><code>from aiogram.types import ReplyKeyboardMarkup, KeyboardButton def make_row_keyboard(items: list[str]) -&gt; ReplyKeyboardMarkup: row =
[KeyboardButton(text=item) for item in items] return ReplyKeyboardMarkup(keyboard=[row], resize_keyboard=True) def start_keyboard(): kb = [ [ KeyboardButton(text=&quot;Add&quot;), KeyboardButton(text='Manage'), KeyboardButton(text=&quot;Count&quot;), KeyboardButton(text=&quot;Service&quot;) ], ] keyboard = ReplyKeyboardMarkup( keyboard=kb, resize_keyboard=True, input_field_placeholder=&quot;Choose action&quot; ) return (keyboard) </code></pre>
<python><aiogram>
2024-09-16 06:17:17
1
433
Jahspear
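A note on the Telegram error in the question above: "can't parse entities: Unsupported start tag" is characteristic of `parse_mode=HTML`, which main.py sets globally — Telegram rejects any message text containing a bare `<` that does not start one of its few supported tags. The handlers shown send only plain strings, so the likely offender is dynamic text elsewhere in the bot (user names, database values) that now contains `<`. The usual guard is to escape every dynamic value with the stdlib `html` module before interpolating it into an HTML template; a sketch (helper name invented here):

```python
from html import escape

def safe_html(template, **fields):
    """Fill an HTML message template with escaped dynamic values.

    Only the template itself may contain real Telegram tags (<b>, <i>, ...);
    every field is escaped so parse_mode=HTML cannot choke on stray < or &.
    """
    return template.format(**{key: escape(str(value)) for key, value in fields.items()})
```

For example, `await message.answer(safe_html("<b>{name}</b>", name=user_name))` stays valid even when `user_name` contains `<` — whereas interpolating it raw reproduces exactly the "Unsupported start tag" error.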
78,988,545
5,937,757
`schemachange`, snowflake and oauth authentication in GitHub action
<p>I have been trying to use Oauth Authentication of <a href="https://github.com/Snowflake-Labs/schemachange?tab=readme-ov-file#oauth-authentication" rel="nofollow noreferrer"><code>schemachange</code></a> in my github action pipeline; however, I get the following error message</p> <pre><code>Traceback (most recent call last): File &quot;/opt/actions-runner/_work/_tool/Python/3.8.18/x64/bin/schemachange&quot;, line 8, in &lt;module&gt; Using Snowflake account *** sys.exit(main()) File &quot;/opt/actions-runner/_work/_tool/Python/3.8.18/x64/lib/python3.8/site-packages/schemachange/cli.py&quot;, line 1309, in main deploy_command(config) File &quot;/opt/actions-runner/_work/_tool/Python/3.8.18/x64/lib/python3.8/site-packages/schemachange/cli.py&quot;, line 601, in deploy_command session = SnowflakeSchemachangeSession(config) File &quot;/opt/actions-runner/_work/_tool/Python/3.8.18/x64/lib/python3.8/site-packages/schemachange/cli.py&quot;, line 283, in __init__ if self.set_connection_args(): File &quot;/opt/actions-runner/_work/_tool/Python/3.8.18/x64/lib/python3.8/site-packages/schemachange/cli.py&quot;, line 372, in set_connection_args oauth_token = self.get_oauth_token() File &quot;/opt/actions-runner/_work/_tool/Python/3.8.18/x64/lib/python3.8/site-packages/schemachange/cli.py&quot;, line 329, in get_oauth_token &quot;url&quot;: self.oauth_config[&quot;token-provider-url&quot;], TypeError: 'NoneType' object is not subscriptable </code></pre> <p>one might suspect that this is due to improper config on env variable in workflow, for that I have written a <code>python</code> script to get the token and authenticate with snowflake in the same pipeline, and it works. 
This is the Github pipleine:</p> <pre><code>name: testing on: push: branches: - feat/* jobs: Deploy: runs-on: ubuntu-latest permissions: id-token: write contents: read defaults: run: shell: bash environment: name: 'Dev-WB' # get environment variables from the environment env: SF_ACCOUNT: ${{ secrets.SF_ACCOUNT }} SF_USERNAME: ${{ secrets.SF_USERNAME }} SF_ROLE: ${{ secrets.SF_ROLE_MULESOFT }} SF_ROLE_TASK: ${{ secrets.SF_ROLE_TASK }} SF_WAREHOUSE: ${{ vars.SF_WAREHOUSE_MULESOFT }} SF_DATABASE: ${{ vars.SF_DATABASE }} SCHEMA_AI_MODEL: ${{ vars.SCHEMA_AI_MODEL }} SCHEMA_FEATURE_STORE: ${{ vars.SCHEMA_FEATURE_STORE }} SCHEMA_REPORT: ${{ vars.SCHEMA_REPORT }} AZURE_ORG_GUID: ${{ vars.AZURE_ORG_GUID }} CLIENT_ID: ${{ vars.CLIENT_ID }} CLIENT_SECRET: ${{ vars.CLIENT_SECRET }} SCOPE_URL_TEST: ${{ vars.SCOPE_URL_TEST }} SNOWFLAKE_AUTHENTICATOR: oauth steps: - name: Checkout repository uses: actions/checkout@v4 - name: Use Python 3.8.x uses: actions/setup-python@v5 with: python-version: 3.8.x - name: Install Python dependencies run: | pip install --upgrade pip pip install -r code/python/requirements.txt - name: Run python run: | python code/python/src/snowflake_conn_tocken.py - name: Run schemachange to deploy the snowflake Stored Procedures run: | schemachange -f ${{ github.workspace }}/code/schemachange/stored_procedure \ -a $SF_ACCOUNT \ -r $SF_ROLE \ -u $SF_USERNAME \ -w $SF_WAREHOUSE \ -d $SF_DATABASE \ -c $SF_DATABASE.MRVWORK_DNA_DISCOVER.CLAIM_SCHEMACHANGE_CHANGE_HISTORY --create-change-history-table \ --vars '{&quot;schema_ai_model&quot;: &quot;${{vars.SCHEMA_AI_MODEL}}&quot;, &quot;schema_feature_store&quot;: &quot;${{vars.SCHEMA_FEATURE_STORE}}&quot;, &quot;schema_report&quot;: &quot;${{vars.SCHEMA_REPORT}}&quot;}' # schema's names change based on the environment --oauth-config '{&quot;url&quot;: ${{ vars.AZURE_ORG_GUID }}, &quot;headers&quot;: {&quot;Content-Type&quot;: &quot;application/x-www-form-urlencoded&quot;, &quot;User-Agent&quot;: 
&quot;python/schemachange&quot;}, &quot;data&quot;: {&quot;client_id&quot;: CLIENT_ID: ${{ vars.CLIENT_ID }}, &quot;client_secret&quot;: ${{ vars.CLIENT_SECRET }}, &quot;grant_type&quot;: 'client_credentials', &quot;scope&quot;: ${{ vars.SCOPE_URL_TEST }}}' </code></pre> <p>I'm guessing the problem is that <code>schemachange</code> can't parse/access the env? Anyone can help me with this?</p>
<python><git><snowflake-cloud-data-platform><schemachange>
2024-09-15 23:42:48
0
423
mas
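Two things stand out in the workflow above. First, the traceback shows schemachange reading `self.oauth_config["token-provider-url"]`, so the config apparently needs that key, not `"url"`. Second, the `--oauth-config` line is not actually part of the schemachange command: the backslash continuation chain ends after the `--vars` argument, so schemachange runs with no oauth config at all (hence `oauth_config` is `None`), and the inline JSON itself is malformed (unquoted `${{ }}` values, a stray `CLIENT_ID:` token, single quotes nested inside single quotes). One way to sidestep the shell quoting entirely is to build the JSON in a prior step with `json.dumps`; a sketch (key names follow the traceback and my reading of the schemachange README — verify against the installed version):

```python
import json

def build_oauth_config(url, client_id, client_secret, scope):
    """Return the JSON string schemachange expects for --oauth-config."""
    return json.dumps({
        "token-provider-url": url,  # key name taken from the traceback lookup
        "headers": {
            "Content-Type": "application/x-www-form-urlencoded",
            "User-Agent": "python/schemachange",
        },
        "data": {
            "client_id": client_id,
            "client_secret": client_secret,
            "grant_type": "client_credentials",
            "scope": scope,
        },
    })
```

Writing the result to an environment variable (or a file) and passing it as a single shell-quoted argument removes the hand-written-JSON failure mode, and keeping `--oauth-config` on the same continued command line fixes the `NoneType` error.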
78,988,527
5,775,965
PyO3 pass mutable rust struct between Rust and Python callbacks
<p>I am trying to use Python as a scripting engine for my Rust project. I have some Rust code which calls a Python function which in turn should call some Rust functions. The problem is that there is mutable state that I need to keep track of in Rust-land.</p> <p>The way I have it set up right now is that <code>Descriptor::eval</code> calls the python</p> <pre><code>impl Descriptor { ... fn eval(&amp;self, &amp;PyRefMut&lt;Context&gt;) -&gt; String { Python::with_gil(|py| { // inner is a Py&lt;PyAny&gt; self.inner.call_method(py, &quot;to_string&quot;, (ctx,), None) .unwrap() .downcast_bound::&lt;PyString&gt;(py) .unwrap() .to_str() .unwrap() .to_string() }) } } </code></pre> <p>The python then calls back into the Rust:</p> <pre><code>class Class(object): @staticmethod def to_string(ctx): return rust.eval_template(ctx, '$foo') </code></pre> <pre><code>#[pyfunction] fn eval_template(mut ctx: PyRefMut&lt;ReprContext&gt;, s: &amp;str) -&gt; String { // do stuff with s and ctx } </code></pre> <p>The problem is that when the python code tries to call <code>rust.eval_template</code>, the dynamic borrow checker says that <code>ctx</code> is already borrowed.</p> <p>What I'm guessing is happening is that <code>call_method</code> borrows <code>ctx</code> and since <code>rust.eval_template</code> is called before that function returns, <code>eval_template</code> then also tries to borrow <code>ctx</code>. Though I'm a bit unclear as to what is happening on the whole.</p>
<python><rust><ffi><borrow-checker><pyo3>
2024-09-15 23:22:48
0
1,159
genghiskhan
78,988,482
7,495,123
Custom authorization restriction in django view
<p>I have an app. The app has users, posts, comments to post etc(it's kinda blog). The task is to limit users from editing objects, that do not belong to the user. Like User can not edit posts made by another one(or comments). I want to write a decorator for it in order to authorize user's actions. So i did, but now i got an error</p> <pre><code>ValueError: The view blog.views.wrapper didn't return an HttpResponse object. It returned None instead. </code></pre> <p>My code:</p> <pre><code>def authorize(func): def wrapper(*args, **kwargs): if kwargs.get('post_id'): instance = get_object_or_404(Post, id=kwargs.get('post_id')) if not args[0].user.id == instance.author_id: return redirect( 'blog:post_detail', post_id=kwargs.get('post_id') ) kwargs.update({'instance': instance}) elif kwargs.get('comment_id'): instance = get_object_or_404(Comment, id=kwargs.get('comment_id')) if not args[0].user.id == instance.author_id: return redirect( 'blog:post_detail', post_id=kwargs.get('post_id') ) kwargs.update({'instance': instance}) func(*args, **kwargs) return wrapper @login_required @authorize def edit_post(request, post_id, instance=None): instance = get_object_or_404(Post, id=post_id) form = PostForm(request.POST or None, instance=instance) context = {'form': form} if form.is_valid(): form.save() return render(request, 'blog/create.html', context) </code></pre> <p>What am i doing wrong?</p>
<python><django><decorator>
2024-09-15 22:40:37
1
1,137
chydik
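The error in the question above has a one-line cause: `wrapper` calls `func(*args, **kwargs)` but never returns its result, so the decorated view hands Django `None`. A minimal framework-free sketch of the fix (the names below are illustrative, not from the question's project):

```python
import functools

def authorize_sketch(func):
    """Decorator skeleton showing the missing `return`.

    The question's wrapper ends with `func(*args, **kwargs)` and implicitly
    returns None; Django then complains that the view "returned None
    instead" of an HttpResponse. Forwarding the result fixes it.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # ... permission checks would go here ...
        return func(*args, **kwargs)   # the crucial `return`
    return wrapper

@authorize_sketch
def edit_post(request, post_id):
    return f"response for post {post_id}"   # stands in for render(...)
```

With the same change in the question's `authorize` (making `return func(*args, **kwargs)` the last line of `wrapper`), the redirect and render responses propagate back to Django as expected.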
78,988,417
13,498,838
Why is await queue.get() Blocking and Causing My Async Consumers to Hang?
<p>I’m having trouble with an asynchronous queue. Specifically, the <code>await queue.get()</code> seems to block the rest of my application, causing it to hang.</p> <p>I came across a similar issue discussed here: <a href="https://stackoverflow.com/questions/56377402/why-is-asyncio-queue-await-get-blocking">Why is asyncio queue await get() blocking?</a> blocking, but I’m still having difficulty understanding how to resolve the problem in my case.</p> <p>In my setup, I am trying to create a web scraping API (with FastAPI) that allows users submit URLs which are batched and processed by different web scrapers running in consumer tasks. However, it seems that once the consumers are waiting for new batches from the queue, the app halts. Specifically, I’m calling <code>start_consumers()</code> during the FastAPI lifespan event before the application starts, but the app never fully initialises because it gets blocked.</p> <p>Is there a way to modify my setup so that the consumers can wait for new items in the queue without blocking the rest of the application? Or is this approach doomed?</p> <pre><code>import asyncio from asyncio.queues import Queue from internal.services.webscraping.base import BaseScraper class QueueController: _logger = create_logger(__name__) def __init__( self, scrapers: list[BaseScraper], batch_size: int = 50 ): self.queue = Queue() self.batch_size = batch_size self.scrapers = scrapers self.consumers = [] self.running = False async def put(self, urls: list[str]) -&gt; None: &quot;&quot;&quot; Add batches of URLs to the queue. &quot;&quot;&quot; # Split the list of URLs into batches of size self.batch_size and add them to the queue. # If the queue is full, wait until there is space available. for i in range(0, len(urls), self.batch_size): batch = urls[i:i + self.batch_size] await self.queue.put(batch) async def get(self) -&gt; list[str]: &quot;&quot;&quot; Retrieve a batch from the queue. &quot;&quot;&quot; # Get a batch of URLs from the queue. 
# If queue is empty, wait until an item is available. return await self.queue.get() async def consumer(self, scraper: BaseScraper) -&gt; None: &quot;&quot;&quot; Consumer coroutine that processes batches of URLs. &quot;&quot;&quot; # Consumer tasks are designed to run in an infinite loop (as long as self.running is # True) and fetch batches of URLs from the queue. while self.running: try: batch = await self.get() if batch: records = await scraper.run(batch) # TODO: Handle saving of result except Exception as e: # TODO: Add proper error handling ... raise e async def start_consumers(self) -&gt; None: &quot;&quot;&quot; Start the consumer tasks. Notes: https://docs.python.org/3/library/asyncio-task.html https://superfastpython.com/asyncio-task/ &quot;&quot;&quot; self.running = True self.consumers = [ asyncio.create_task(self.consumer(scraper)) for scraper in self.scrapers ] await asyncio.gather(*self.consumers) async def stop_consumers(self) -&gt; None: &quot;&quot;&quot; Stop all consumer tasks gracefully. &quot;&quot;&quot; self.running = False for task in self.consumers: task.cancel() </code></pre>
<python><asynchronous><queue><python-asyncio><fastapi>
2024-09-15 21:50:57
1
1,454
jda5
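The hang described above is consistent with `start_consumers()` awaiting `asyncio.gather(*self.consumers)`: gather only returns when the consumer tasks finish, and they loop forever, so the lifespan hook never completes. A minimal sketch (a simplified controller, not the question's full class) where startup just schedules the tasks and returns:

```python
import asyncio

class MiniController:
    def __init__(self, n_consumers: int = 2):
        self.queue: asyncio.Queue = asyncio.Queue()
        self.n_consumers = n_consumers
        self.tasks: list[asyncio.Task] = []
        self.processed: list[list[str]] = []

    async def consumer(self) -> None:
        while True:
            batch = await self.queue.get()   # waits without blocking the loop
            self.processed.append(batch)
            self.queue.task_done()

    async def start_consumers(self) -> None:
        # Schedule the tasks and return immediately -- do NOT gather here.
        self.tasks = [asyncio.create_task(self.consumer())
                      for _ in range(self.n_consumers)]

    async def stop_consumers(self) -> None:
        await self.queue.join()              # let queued batches drain
        for task in self.tasks:
            task.cancel()
        await asyncio.gather(*self.tasks, return_exceptions=True)

async def demo() -> list[list[str]]:
    ctrl = MiniController()
    await ctrl.start_consumers()             # returns right away
    for batch in (["a"], ["b"], ["c"]):
        await ctrl.queue.put(batch)
    await ctrl.stop_consumers()
    return ctrl.processed

results = asyncio.run(demo())
```

In FastAPI, calling a `start_consumers` written this way from the lifespan handler lets the app finish initialising; `stop_consumers` belongs in the shutdown phase.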
78,988,304
298,607
Split on regex (more than a character, maybe variable width) and keep the separator like GNU awk
<p>In GNU awk, there is a four argument version of <a href="https://www.gnu.org/software/gawk/manual/html_node/String-Functions.html#index-split_0028_0029-function" rel="noreferrer">split</a> that can optionally keep all the separators from the split in a second array. This is useful if you <a href="https://stackoverflow.com/a/70641933/298607">want to reconstruct</a> a select subset of columns from a file where the delimiter may be more complicated than just a single character.</p> <p>Suppose I have the following file:</p> <pre><code># sed makes the invisibles visible... # βˆ™ is a space; \t is a literal tab; $ is line end $ sed -E 's/\t/\\t/g; s/ /βˆ™/g; s/$/\$/' f.txt a\tβˆ™βˆ™bβˆ™c\tdβˆ™_βˆ™e$ aβˆ™βˆ™βˆ™bβˆ™c\tdβˆ™_βˆ™e$ βˆ™βˆ™βˆ™aβˆ™βˆ™βˆ™bβˆ™c\tdβˆ™_βˆ™e$ aβˆ™βˆ™βˆ™b_c\tdβˆ™_βˆ™e\t$ abcd$ </code></pre> <p>Here I have a field comprised of anything other than the delimiter character set, and a delimiter of one or more characters of the set <code>[\s_]</code>.</p> <p>With gawk, you can do:</p> <pre><code>gawk '{ printf &quot;[&quot; n=split($0, flds, /[[:space:]_]+/, seps) for(i=1; i&lt;=n; i++) printf &quot;[\&quot;%s\&quot;, \&quot;%s\&quot;]%s&quot;, flds[i], seps[i], i&lt;n ? 
&quot;, &quot; : &quot;]&quot; ORS } ' f.txt </code></pre> <p>Prints (where the first element is the field, the second is the match to the delimiter regexp):</p> <pre><code>[[&quot;a&quot;, &quot; &quot;], [&quot;b&quot;, &quot; &quot;], [&quot;c&quot;, &quot; &quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, &quot;&quot;]] [[&quot;a&quot;, &quot; &quot;], [&quot;b&quot;, &quot; &quot;], [&quot;c&quot;, &quot; &quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, &quot;&quot;]] [[&quot;&quot;, &quot; &quot;], [&quot;a&quot;, &quot; &quot;], [&quot;b&quot;, &quot; &quot;], [&quot;c&quot;, &quot; &quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, &quot;&quot;]] [[&quot;a&quot;, &quot; &quot;], [&quot;b&quot;, &quot;_&quot;], [&quot;c&quot;, &quot; &quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, &quot; &quot;], [&quot;&quot;, &quot;&quot;]] [[&quot;abcd&quot;, &quot;&quot;]] </code></pre> <p>Ruby's <a href="https://ruby-doc.org/3.3.5/String.html#method-i-split" rel="noreferrer">str.split</a>, unfortunately, does not have the same functionality. 
(Neither does <a href="https://docs.python.org/3/library/re.html#re.split" rel="noreferrer">Python's</a> or <a href="https://perldoc.perl.org/functions/split" rel="noreferrer">Perl's</a>.)</p> <p>What you <em>can</em> do is capture the match string from the delimiter regexp:</p> <pre><code>irb(main):053&gt; s=&quot;a b c d _ e&quot; =&gt; &quot;a b c d _ e&quot; irb(main):054&gt; s.split(/([\s_]+)/) =&gt; [&quot;a&quot;, &quot; &quot;, &quot;b&quot;, &quot; &quot;, &quot;c&quot;, &quot; &quot;, &quot;d&quot;, &quot; _ &quot;, &quot;e&quot;] </code></pre> <p>Then use that result with <code>.each_slice(2)</code> and replace the <code>nil</code>'s with <code>''</code>:</p> <pre><code>irb(main):055&gt; s.split(/([\s_]+)/).each_slice(2).map{|a,b| [a,b]} =&gt; [[&quot;a&quot;, &quot; &quot;], [&quot;b&quot;, &quot; &quot;], [&quot;c&quot;, &quot; &quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, nil]] irb(main):056&gt; s.split(/([\s_]+)/).each_slice(2).map{|a,b| [a,b]}.map{|sa| sa.map{|e| e.nil? ? &quot;&quot; : e} } =&gt; [[&quot;a&quot;, &quot; &quot;], [&quot;b&quot;, &quot; &quot;], [&quot;c&quot;, &quot; &quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, &quot;&quot;]] </code></pre> <p>Which allows gawk's version of split to be replicated:</p> <pre><code>ruby -ne 'p $_.gsub(/\r?\n$/,&quot;&quot;).split(/([\s_]+)/).each_slice(2). map{|a,b| [a,b]}.map{|sa| sa.map{|e| e.nil? ? 
&quot;&quot; : e} }' f.txt </code></pre> <p>Prints:</p> <pre><code>[[&quot;a&quot;, &quot;\t &quot;], [&quot;b&quot;, &quot; &quot;], [&quot;c&quot;, &quot;\t&quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, &quot;&quot;]] [[&quot;a&quot;, &quot; &quot;], [&quot;b&quot;, &quot; &quot;], [&quot;c&quot;, &quot;\t&quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, &quot;&quot;]] [[&quot;&quot;, &quot; &quot;], [&quot;a&quot;, &quot; &quot;], [&quot;b&quot;, &quot; &quot;], [&quot;c&quot;, &quot;\t&quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, &quot;&quot;]] [[&quot;a&quot;, &quot; &quot;], [&quot;b&quot;, &quot;_&quot;], [&quot;c&quot;, &quot;\t&quot;], [&quot;d&quot;, &quot; _ &quot;], [&quot;e&quot;, &quot;\t&quot;]] [[&quot;abcd&quot;, &quot;&quot;]] </code></pre> <p>So the same output (other than the line with trailing <code>\t</code> which gawk has as an empty field, delimiter combination.)</p> <p>In Python, roughly the same method also works:</p> <pre><code>python3 -c ' import sys, re from itertools import zip_longest with open(sys.argv[1]) as f: for line in f: lp=re.split(r&quot;([\s_]+)&quot;, line.rstrip(&quot;\r\n&quot;)) print(list(zip_longest(*[iter(lp)]*2, fillvalue=&quot;&quot;)) ) ' f.txt </code></pre> <p>I am looking for a <strong>general algorithm</strong> to replicate the functionality of gawk's four argument split in Ruby/Python/Perl/etc. The Ruby and Python I have here works.</p> <p>Most of solutions (other than for gawk) to <em>I want to split on this delimiter and keep the delimiter?</em> involve a unique regex more complex than simply matching the delimiter. Most seem to be either scanning for a field, delimiter combination or use lookarounds. I am specifically trying to use a <em>simple</em> regexp that matches the delimiter only without lookarounds. 
With roughly the same regexp I would have used with GNU awk.</p> <p>So stated generally:</p> <ol> <li>Take a regexp matching the delimiter fields (without having to think much about the data fields) and put inside a capturing group;</li> <li>Take the resulting array of <code>[field1, delimiter1, field2, delimiter2, ...]</code> and create array of <code>[[field1, delimiter1], [field2, delimiter2], ...]</code></li> </ol> <p>That method is easily used in Ruby (see above) and Python (see above) and Perl (I was too lazy to write that one...)</p> <p>Is this the best way to do this?</p>
<python><regex><ruby><algorithm><split>
2024-09-15 20:32:14
2
104,598
dawg
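The pairing idea above generalises to a small Python helper: capture the delimiter, let `re.split` interleave fields and separators, then pad the final field with an empty separator the way gawk leaves `seps[n]` empty. A sketch (assuming the delimiter pattern contains no capturing groups of its own; use `(?:...)` inside it if it must group):

```python
import re

def split_keep(delim_pattern: str, s: str) -> list[tuple[str, str]]:
    """gawk-style 4-argument split: [(field1, sep1), (field2, sep2), ...].

    `delim_pattern` matches only the delimiter, with no lookarounds.
    Wrapping it in one capturing group makes re.split return
    [field, sep, field, sep, ..., field]; the trailing field is then
    paired with "" just like gawk's empty seps[n].
    """
    parts = re.split(f"({delim_pattern})", s)
    if len(parts) % 2:                 # odd length: last field lacks a sep
        parts.append("")
    return list(zip(parts[::2], parts[1::2]))
```

For `"a b c d _ e"` with `r"[\s_]+"` this reproduces the gawk output shown above, including the leading empty field for lines that start with a delimiter.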
78,988,186
3,163,618
Function memoization, passing around precomputed values
<p>I have a function like this in my library, which depends on a large subset of the inputs being pre-calculated:</p> <pre><code>@cache def f(n, precompute): if n &lt; len(precompute): return precompute[n] # recursive calculations, which are memoized on return return some_compute(f(n-1, precompute)) </code></pre> <p>What's the most user-friendly way to pass around a precomputed list while using memoizing <code>functools.cache</code>, without caching the precompute?</p> <p>My ideas:</p> <ul> <li>Just make <code>precompute</code> a global/outer scope variable (impure, global variable has to exist in the module beforehand <a href="https://stackoverflow.com/questions/1977362/how-to-create-module-wide-variables-in-python">How to create module-wide variables in Python?</a>)</li> <li>Make <code>precompute</code> a function attribute to use it as an &quot;argument&quot; bypassing caching (works because the function is an object)</li> <li>Write a special memoization decorator that is only based on the first argument (suggested in <a href="https://stackoverflow.com/questions/66002642/caching-python-function-results-using-only-subset-of-arguments-as-identifier">Caching Python function results using only subset of arguments as identifier</a>)</li> <li>Turn it into a class so precomputed can be an instance attribute or class attribute (this is what <a href="https://docs.sympy.org/latest/modules/ntheory.html" rel="nofollow noreferrer">sympy.sieve</a> does)</li> <li>Load <code>cache(f)</code> with the values of precompute. My thinking is that this is less efficient as the cache will be a dictionary, while precompute is a simple list of size 10^7</li> <li>Create an inner helper recursive function with one argument that uses precompute in the outer scope (I don't think this works across multiple outer calls because the inner falls out of scope)</li> </ul>
<python><memoization><functools><function-attributes>
2024-09-15 19:26:25
0
11,524
qwr
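The third idea (a decorator keyed on a subset of the arguments) can be sketched in a few lines: `precompute` is forwarded on cache misses but never hashed, so it can stay a plain 10^7-element list. Illustrative code, not the only reasonable design:

```python
import functools

def cache_on_first_arg(func):
    """Memoize on the first positional argument only."""
    table = {}

    @functools.wraps(func)
    def wrapper(n, *rest):
        if n not in table:
            table[n] = func(n, *rest)   # `rest` never enters the key
        return table[n]

    wrapper.table = table               # exposed for introspection/tests
    return wrapper

@cache_on_first_arg
def f(n, precompute):
    if n < len(precompute):
        return precompute[n]
    return f(n - 1, precompute) + 1     # stand-in for some_compute(...)

pre = [0, 10, 20]
```

One caveat shared with the global-variable ideas: if `f` is ever called with a different precompute list, stale cached values survive, so the cache should be cleared (or keyed) whenever precompute changes.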
78,988,118
4,875,641
Python Multiprocessing - losing command prompt window
<p>I am running the following Python program to start a few processes to run simultaneously. Each process will run for several seconds. This is being started from a command prompt. But if I interrupt the execution (ctl-C) I often get a console message 'keyboard interrupt' along with a long traceback and the command prompt is forever lost. I must kill the command prompt and start a new one. Depending upon timing, sometimes it will allow new commands to be enterred, but there is no input prompt C\xxx\xxx&gt;.</p> <p>I expected the ctl-C just to terminate the mainline being executed which would in turn kill the child processes that were running. But this was not the case.</p> <p>Why can't this program be interrupted without bringing down the Command Prompt from running again?</p> <pre><code> from multiprocessing import Process, Pool import os import time def proc(name): print('\nStarting process name:',name, 'PID: ', os.getpid()) for i in range(1,5): print (os.getpid(),': ',i,'\n') time.sleep(1) if __name__ == '__main__': print ('\nMain process ID:', os.getpid(), '\n') numServers = 3 serverNum = [] for i in range(numServers): serverNum.append(i+1) print ('serverNum: ', serverNum) with Pool(numServers) as p: p.map(proc, serverNum) # --&gt; hang here until the processes complete # Execution stops when all processes complete print ('Ending mainline task', os.getpid()) exit(0) </code></pre> <p>The program output is:</p> <pre><code> Main process ID: 40536 serverNum: [1, 2, 3] Starting process name: 1 PID: 34508 34508 : 1 Starting process name: 2 PID: 2200 2200 : 1 Starting process name: 3 PID: 43476 43476 : 1 34508 : 2 2200 : 2 43476 : 2 34508 : 3 2200 : 3 43476 : 3 34508 : 4 2200 : 4 43476 : 4 Ending mainline task 40536 </code></pre>
<python><multiprocessing><python-multiprocessing>
2024-09-15 18:46:04
3
377
Jay Mosk
78,988,099
14,022,262
Increase speed of two for loops in Python
<p>I have a double for loop in a Python code. I want to increase the execution speed, but the matrix is built using data from the previous row and column. How can I achieve this?</p> <pre><code>import numpy as np num_rows, num_cols = 100, 5 matrix = np.zeros((num_rows, num_cols)) matrix[0, :] = np.random.rand(num_cols) matrix[:, 0] = np.random.rand(num_rows) coeff1 = np.random.rand(num_rows) coeff2 = np.random.rand(num_rows) coeff3 = np.random.rand(num_rows) result = np.zeros_like(matrix) for j in range(1, num_cols): for n in range(1, num_rows): term1 = coeff1[n] * matrix[n-1, j-1] term2 = coeff2[n] * matrix[n, j-1] term3 = coeff3[n] * matrix[n-1, j] result[n, j] = term1 + term2 + term3 </code></pre>
<python><for-loop>
2024-09-15 18:36:29
4
531
Vinicius B. de S. Moreira
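As posted, `result[n, j]` reads only from the input `matrix` and the coefficient vectors, never from previously computed entries of `result`, so there is no true recurrence and both loops vectorise into one broadcast expression. A sketch (if the real code instead updates `matrix` or reads back `result`, only the row loop can be collapsed this way):

```python
import numpy as np

def fill_loop(matrix, c1, c2, c3):
    # Original double loop, kept for comparison.
    result = np.zeros_like(matrix)
    for j in range(1, matrix.shape[1]):
        for n in range(1, matrix.shape[0]):
            result[n, j] = (c1[n] * matrix[n - 1, j - 1]
                            + c2[n] * matrix[n, j - 1]
                            + c3[n] * matrix[n - 1, j])
    return result

def fill_vectorized(matrix, c1, c2, c3):
    # Shifted views replace the indexing; coefficients broadcast per row.
    result = np.zeros_like(matrix)
    result[1:, 1:] = (c1[1:, None] * matrix[:-1, :-1]
                      + c2[1:, None] * matrix[1:, :-1]
                      + c3[1:, None] * matrix[:-1, 1:])
    return result

rng = np.random.default_rng(0)
m = rng.random((100, 5))
c1, c2, c3 = rng.random(100), rng.random(100), rng.random(100)
same = np.allclose(fill_loop(m, c1, c2, c3), fill_vectorized(m, c1, c2, c3))
```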
78,988,010
17,729,094
Explode multiple columns with different lengths
<p>I have a dataframe like:</p> <pre><code>data = { &quot;a&quot;: [[1], [2], [3, 4], [5, 6, 7]], &quot;b&quot;: [[], [8], [9, 10], [11, 12]], } df = pl.DataFrame(data) &quot;&quot;&quot; β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[i64] ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ [1] ┆ [] β”‚ β”‚ [2] ┆ [8] β”‚ β”‚ [3, 4] ┆ [9, 10] β”‚ β”‚ [5, 6, 7] ┆ [11, 12] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ &quot;&quot;&quot; </code></pre> <p>Each pair of lists may not have the same length, and I want to &quot;truncate&quot; the explode to the shortest of both lists:</p> <pre><code>&quot;&quot;&quot; β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 2 ┆ 8 β”‚ β”‚ 3 ┆ 9 β”‚ β”‚ 4 ┆ 10 β”‚ β”‚ 5 ┆ 11 β”‚ β”‚ 6 ┆ 12 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ &quot;&quot;&quot; </code></pre> <p>I was thinking that maybe I'd have to fill the shortest of both lists with <code>None</code> to match both lengths, and then <code>drop_nulls</code>. But I was wondering if there was a more direct approach to this?</p>
<python><dataframe><python-polars>
2024-09-15 17:45:39
2
954
DJDuque
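Outside any particular dataframe library, the core of the requested behaviour is that `zip` truncates to the shorter of its inputs; pairing per row and then flattening reproduces the expected table. A plain-Python sketch of that logic (in polars itself the lists would need to be trimmed to the row-wise minimum length before exploding; the exact expression API varies by version, so it is not shown here):

```python
def truncated_explode(a_lists, b_lists):
    """Pair row-wise, truncating each pair to the shorter list, then flatten."""
    rows = []
    for a, b in zip(a_lists, b_lists):
        rows.extend(zip(a, b))        # zip stops at min(len(a), len(b))
    return rows

data_a = [[1], [2], [3, 4], [5, 6, 7]]
data_b = [[], [8], [9, 10], [11, 12]]
```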
78,987,980
67,579
How do calls of models.Manager and custom managers work?
<p>The following question was <a href="https://stackoverflow.com/q/78833708/67579">asked by a different user</a> who subsequently removed the question. But I think it is useful to &quot;dig&quot; a bit into how Django's manager logic works.</p> <blockquote> <p>I extended the <code>models.Manager</code> class and created a custom manager.</p> <pre class="lang-py prettyprint-override"><code>class PublishedManager(models.Manager): def get_queryset(self): return super().get_queryset().filter(status=Post.Status.PUBLISHED)</code></pre> <p>It's more than understandable, BUT how do calls of managers work?</p> <pre class="lang-py prettyprint-override"><code>objects = models.Manager() # The default manager. published = PublishedManager() # Our custom manager.</code></pre> <p>I don't address the <code>get_queryset()</code> method, I just call constructors. Then how does it work?</p> </blockquote>
<python><django><django-managers>
2024-09-15 17:31:49
1
481,621
willeM_ Van Onsem
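Constructor calls alone look inert because the wiring happens later, in the model's metaclass: when the class body is processed, Django's `ModelBase` detects `Manager` instances among the attributes and invokes their `contribute_to_class` hook, binding each manager to its model so that `Post.published.get_queryset()` knows which model to query. A stripped-down, Django-free analogy of that mechanism (Django's real `ModelBase` and descriptor machinery is considerably more involved; this only sketches the principle):

```python
class Manager:
    """Minimal stand-in for django.db.models.Manager."""

    def contribute_to_class(self, model, name):
        self.model = model            # the manager now knows its model
        setattr(model, name, self)

    def get_queryset(self):
        return list(self.model._rows)

class ModelBase(type):
    """Toy metaclass: wires Manager instances to the class being built."""

    def __new__(mcs, name, bases, attrs):
        managers = {k: v for k, v in attrs.items() if isinstance(v, Manager)}
        cls = super().__new__(mcs, name, bases, attrs)
        for attr_name, manager in managers.items():
            manager.contribute_to_class(cls, attr_name)
        return cls

class PublishedManager(Manager):
    # Same override pattern as the question's custom manager.
    def get_queryset(self):
        return [r for r in super().get_queryset() if r["status"] == "published"]

class Post(metaclass=ModelBase):
    _rows = [{"id": 1, "status": "draft"},
             {"id": 2, "status": "published"}]
    objects = Manager()               # only constructors are called here...
    published = PublishedManager()    # ...the metaclass does the wiring
```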
78,987,882
10,430,394
Is pathlib.Path.read_text() better than _io.TextIOWrapper.read()?
<p>I've recently discovered that <code>pathlib</code> offers some methods for dealing with file paths. When looking through the list of methods, <code>.read_text()</code> caught my attention. It implied that I could keep an object of type <code>Path</code> that stores a bunch of useful info in one neat package as well as offering the option of getting the file string without having to manually close/open a text file or using <code>with open():</code>.</p> <p>I am about to change my default coding practices fundamentally unless I hear of a good reason why not to do it. So I wanted to know: is there a good reason why <code>_io.TextIOWrapper.read()</code> is better?</p>
<python><pathlib>
2024-09-15 16:53:47
1
534
J.Doe
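For whole-file reads the two are interchangeable: `Path.read_text()` is a thin convenience wrapper that opens the file, reads it, and guarantees the handle is closed, roughly the `with open(...)` idiom in one call. Neither is "better" for correctness; the wrapper removes boilerplate, while `with open()` remains the tool for streaming, line-by-line, or partial reads. A quick check of the equivalence:

```python
import tempfile
from pathlib import Path

def read_both_ways(path: Path) -> tuple[str, str]:
    via_pathlib = path.read_text(encoding="utf-8")   # open+read+close in one call
    with open(path, encoding="utf-8") as f:          # f is an io.TextIOWrapper
        via_open = f.read()
    return via_pathlib, via_open

tmp = Path(tempfile.mkdtemp()) / "demo.txt"
tmp.write_text("hello\nworld\n", encoding="utf-8")
a, b = read_both_ways(tmp)
```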
78,987,822
4,875,641
Python Multiprocessing - is it creating two threads per process
<p>This program starts two processes. It appears that there may actually be two threads created for each process. One, which is the mainline code of the spawned python program and the other being the execution of the function that was called to start the process. The program output shows that both the mainline of the program running in each process as well as running the called function specified when spawning the process.</p> <p>The mainline of each process ends with the displaying of the process ID. Each of the two spawned tasks also terminate by displaying their process ID. The log shows the mainline messages be printed first, then the spawned function is called after that.</p> <pre><code> from multiprocessing import Process import os import psutil def f(name): print('\nStarting process ID:', os.getpid()) print ('I am process with argument: ', name) print ('parent: ', os.getppid()) print ('process name: ', psutil.Process().name()) if __name__ == '__main__': print ('\nMain process ID:', os.getpid(), '\n') p = Process(target=f, name='process-1', args=('passed-name1',)) p.start() p.join() print ('\nAfter 1st join, PID:', os.getpid(),'\n') p = Process(target=f, name='process-2', args=('passed-name2',)) p.start() p.join() print ('\nAfter 2 joins, PID:',os.getpid(),'\n') print ('End mainline code for PID',os.getpid()) </code></pre> <p>I expected the message 'Executed at task end' to appear when the process completes its operation. 
But I find it executing BEFORE it runs the mainline code of the process.</p> <p>Here is the output of this program:</p> <pre><code> Main process ID: 5528 End mainline code for PID: 42368 Starting process ID: 42368 I am process with argument: passed-name1 parent: 5528 process name: python.exe After 1st join, PID: 5528 End mainline code for PID: 17764 Starting process ID: 17764 I am process with argument: passed-name2 parent: 5528 process name: python.exe After 2 joins, PID: 5528 End mainline code for PID: 5528 </code></pre> <p>Question 1: So does the Process() function both spawn the entire program as well as running the named function as a separate process?</p> <p>Question 2: Why is the process named 'python.exe' when I specifically named each process with a name to call it?</p> <p>Now here is another wrinkle is this multiprocessing dilemma. If you add the code</p> <pre><code>exit(0) </code></pre> <p>as the last line of the code, after the printed termination message, then the main line of the two spawned tasks do not execute at all. Here is the output of the code if the exit(0) is added as the last line:</p> <pre><code> Main process ID: 36996 End mainline code PID: 39976 After 1st join, PID: 36996 End mainline code PID: 28324 After 2 joins, PID: 36996 End mainline code PID: 36996 </code></pre> <p>So the mainline code of each process ran, but the function called by the Process() function, did not.</p> <p>I also noted that if instead of calling exit(0), I import time and call time.sleep(5) instead, the End of mainline message appears then the mainline code of the spawned task is executed after the 5 second delay. I would have expected the called function to be started immediately. This seems confusing as to executing what looks like two threads for the process. Or perhaps it executes the mainline followed by the called function after that. But important to understand as the code becomes more complex into mainline versus the function called in the Process() call.</p>
<python><multiprocessing><python-multiprocessing>
2024-09-15 16:21:40
1
377
Jay Mosk
78,987,693
11,747,861
polars: how to find out the number of columns in a polars expression?
<p>I'm building a package on top of Polars, and one of the functions looks like this</p> <pre class="lang-py prettyprint-override"><code>def func(x: IntoExpr, y: IntoExpr): ... </code></pre> <p>The business logic requires that x can include multiple columns, but y must be a single column.</p> <p>What should I do to check and validate this?</p>
<python><python-polars>
2024-09-15 15:11:09
2
2,757
Mark Wang
78,987,685
2,754,510
Abnormal interpolating spline with odd number of points
<p>I have implemented a cubic B-Spline interpolation, not approximation, as follows:</p> <pre><code>import numpy as np import math from geomdl import knotvector def cox_de_boor( d_, t_, k_, knots_): if (d_ == 0): if ( knots_[k_] &lt;= t_ &lt;= knots_[k_+1]): return 1.0 return 0.0 denom_l = (knots_[k_+d_] - knots_[k_]) left = 0.0 if (denom_l != 0.0): left = ((t_ - knots_[k_]) / denom_l) * cox_de_boor(d_-1, t_, k_, knots_) denom_r = (knots_[k_+d_+1] - knots_[k_+1]) right = 0.0 if (denom_r != 0.0): right = ((knots_[k_+d_+1] - t_) / denom_r) * cox_de_boor(d_-1, t_, k_+1, knots_) return left + right def interpolate( d_, P_, n_, ts_, knots_ ): A = np.zeros((n_, n_)) for i in range(n_): for j in range(n_): A[i, j] = cox_de_boor(d_, ts_[i], j, knots_) control_points = np.linalg.solve(A, P_) return control_points def create_B_spline( d_, P_, t_, knots_): sum = Vector() # just a vector class. for i in range( len(P_) ): sum += P_[i] * cox_de_boor(d_, t_, i, knots_) return sum def B_spline( points_ ): d = 3 # change to 2 for quadratic. 
P = np.array( points_ ) n = len( P ) ts = np.linspace( 0.0, 1.0, n ) knots = knotvector.generate( d, n ) # len = n + d + 1 control_points = interpolate( d, P, n, ts, knots) crv_pnts = [] for i in range(10): t = float(i) / 9 crv_pnts.append( create_B_spline(d, control_points, t, knots) ) return crv_pnts control_points = [ [float(i), math.sin(i), 0.0] for i in range(4) ] cps = B_spline( control_points ) </code></pre> <p>Result is OK when interpolating 4 points (control vertices): <a href="https://i.sstatic.net/fzYYsK86.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzYYsK86.png" alt="enter image description here" /></a></p> <p>Result is NOT OK when interpolating 5 points (control vertices): <a href="https://i.sstatic.net/V01YqBNt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V01YqBNt.png" alt="enter image description here" /></a></p> <p>Result is OK when interpolating 6 points (control vertices): <a href="https://i.sstatic.net/rU7VaQUk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rU7VaQUk.png" alt="enter image description here" /></a></p> <p>and so on...</p> <p>I noticed two things:</p> <ol> <li>The spline does not interpolate properly when the number of control vertices is odd.</li> <li>The spline interpolates properly with any number of vertices when the degree becomes quadratic. So, if you change <code>d = 2</code>, in the <code>B_spline</code> function, the curve will interpolate properly for odd and even number of control vertices.</li> </ol> <p>The <a href="https://en.wikipedia.org/wiki/De_Boor%27s_algorithm" rel="nofollow noreferrer">cox de boor</a> function is correct and according to the mathematical expression, but with a small alteration on the 2nd conditional expression <code>t[i] &lt;= t **&lt;=** t[i+1]</code> (see my previous SO question <a href="https://stackoverflow.com/questions/78960111/singular-matrix-during-b-spline-interpolation">here</a> for more details). 
Also, I used numpy to solve the <a href="https://numpy.org/doc/stable/reference/routines.linalg.html" rel="nofollow noreferrer">linear system</a>, which also works as expected. Other than <code>np.linalg.solve</code>, I have tried <code>np.linalg.lstsq</code> but it returns the same results.</p> <p>I honestly do not know where to attribute this abnormal behaviour. What could cause this issue?</p>
<python><numpy><interpolation><spline>
2024-09-15 15:06:49
1
3,276
Constantinos Glynos
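The odd/even pattern above is consistent with the altered degree-0 condition. With `<=` at both ends, a parameter site that lands exactly on an interior knot is counted in two spans at once, so the basis functions no longer sum to 1 and the collocation matrix is wrong; `np.linspace(0.0, 1.0, n)` produces such a coincidence for the clamped cubic knot vector exactly when n is odd (e.g. t = 0.5 for n = 5), and not for the quadratic knot vectors, which matches both observations. A sketch of the conventional half-open convention, with the usual closure at the final knot, verified via partition of unity (the knot values below mirror what `knotvector.generate(3, 5)` is expected to yield; worth double-checking against geomdl):

```python
def cox_de_boor(d, t, k, knots):
    if d == 0:
        # Half-open span [knots[k], knots[k+1]), plus a closure so that
        # t == knots[-1] belongs to the last non-empty span. The question's
        # version (<= at both ends) double-counts t on interior knots.
        last = knots[-1]
        in_span = knots[k] <= t < knots[k + 1]
        at_end = t == last and knots[k] < knots[k + 1] == last
        return 1.0 if (in_span or at_end) else 0.0
    left = right = 0.0
    denom_l = knots[k + d] - knots[k]
    if denom_l != 0.0:
        left = (t - knots[k]) / denom_l * cox_de_boor(d - 1, t, k, knots)
    denom_r = knots[k + d + 1] - knots[k + 1]
    if denom_r != 0.0:
        right = ((knots[k + d + 1] - t) / denom_r
                 * cox_de_boor(d - 1, t, k + 1, knots))
    return left + right

# Clamped cubic knot vector for n = 5 (the failing odd case): one interior
# knot at 0.5, which collides with the middle linspace site t = 0.5.
d, n = 3, 5
knots = [0.0] * (d + 1) + [0.5] + [1.0] * (d + 1)
```

With this convention the basis sums to 1 everywhere on [0, 1], including at the problematic interior knot, so the collocation matrix in `interpolate` stays consistent for odd point counts as well.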
78,987,427
3,565,923
Mocking pytest test with @patch unittest.mock decorator
<p><strong>Issue with Mocking <code>send_request()</code> in Unit Tests</strong></p> <p>I'm having trouble with mocking the <code>send_request()</code> function in my tests. Despite using <code>patch</code>, the original <code>send_request()</code> function is being called, and the mock isn't applied. Here's the relevant output and code:</p> <p><strong>Output:</strong></p> <pre><code>Send_click send_request mock: &lt;function send_request at 0x7f9c2146de40&gt; send request </code></pre> <p><strong>Code:</strong></p> <p><em><code>logic.py</code></em></p> <pre class="lang-py prettyprint-override"><code>def send_request(manager, action_method, url, data=None, params=None, headers=None): print(&quot;send request&quot;) </code></pre> <p><em><code>test_ui.py</code></em></p> <pre class="lang-py prettyprint-override"><code>from ..ui import UserInterface from unittest.mock import patch, Mock @patch(&quot;ui.send_request&quot;) def test_send_request(mock_send_request, qtbot): ui = UserInterface() ui.get_request_data = Mock() data = QByteArray(data_str_create.encode(&quot;utf-8&quot;)) action_method, url, data = ACTIONS_METHODS.POST, QUrl(&quot;http://example.com/api&quot;), data ui.get_request_data.return_value = action_method, url, data ui.send_click() mock_send_request.assert_called_once() </code></pre> <p><strong>Issue:</strong></p> <ul> <li><code>send_request()</code> is not mocked; the original function is being executed. 
Proven by <code>send request</code> in the output and <code>function</code> type in the output.</li> <li>The <code>AssertionError</code> in the last line of the test indicates that the mock was not applied.</li> </ul> <p><strong>How <code>send_request()</code> is used:</strong></p> <p><em><code>ui.py</code></em></p> <pre class="lang-py prettyprint-override"><code>from logic import handle_response, send_request class UserInterface(QWidget): # other methods def get_request_data(self): action_method = self.combo_box.currentText() json_dict = json.loads( self.body.toPlainText().encode(&quot;utf-8&quot;).decode(&quot;unicode_escape&quot;) ) json_data = json.dumps(json_dict).encode(&quot;utf-8&quot;) data = QByteArray(json_data) url = QUrl(self.url.text()) return action_method, url, data def send_click(self): print(&quot;Send_click&quot;) print(f&quot;send_request mock: {send_request}&quot;) send_request(self.rest_manager, *self.get_request_data()) </code></pre> <p>Can someone help figure out why the <code>send_request()</code> function is not being mocked as expected?</p>
<python><mocking><python-unittest><patch>
2024-09-15 12:56:27
1
350
user3565923
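`unittest.mock`'s rule is to patch the name where it is *looked up*, not where it is defined. Because the question's `ui.py` does `from logic import send_request`, the function is copied into ui's namespace; and since `test_ui.py` imports `..ui` relative to a package, the patch target likely needs that module's full dotted path as imported (e.g. `mypackage.ui.send_request`, a hypothetical name), not plain `ui.send_request`. A self-contained demonstration of the rule using synthetic modules (names are illustrative):

```python
import sys
import types
from unittest.mock import patch

# Build two throwaway modules mimicking logic.py and ui.py, where ui does
# `from logic import send_request`.
logic_mod = types.ModuleType("demo_logic")
exec("def send_request():\n    return 'real'", logic_mod.__dict__)
sys.modules["demo_logic"] = logic_mod

ui_mod = types.ModuleType("demo_ui")
exec(
    "from demo_logic import send_request\n"
    "def send_click():\n"
    "    return send_request()\n",
    ui_mod.__dict__,
)
sys.modules["demo_ui"] = ui_mod

def patched_where_used():
    # Correct target: the copy held in the *ui* module's namespace.
    with patch("demo_ui.send_request", return_value="mocked"):
        return ui_mod.send_click()

def patched_at_definition():
    # Wrong target: ui already holds its own reference, so the real
    # function still runs.
    with patch("demo_logic.send_request", return_value="mocked"):
        return ui_mod.send_click()
```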
78,987,084
14,517,452
How to use ctypes.memmove on memoryview objects in regular python
<p>First let me state the context for this issue. I'm using PySide6 to capture the screen, and I want to intercept the video feed and perform some image processing on the frames using opencv. I am able to collect a QVideoFrame, convert to QImage, then convert that to a numpy array and do my image processing. However, I also want to be able to pass that numpy array back to an output video stream so I can see the results of the image processing.</p> <p>I'm able to convert the numpy array into a QImage with this code;</p> <pre><code>arr = cv2.cvtColor(arr, cv2.COLOR_BGR2RGBA) image = QImage(arr.data, arr.shape[1], arr.shape[0], QImage.Format.Format_RGBA8888) </code></pre> <p>I can then start to create a QVideoFrame like so;</p> <pre><code>format_ = QVideoFrameFormat( image.size(), QVideoFrameFormat.pixelFormatFromImageFormat(image.format()) ) frame2 = QVideoFrame(format_) frame2.map(QVideoFrame.ReadWrite) </code></pre> <p>So far so good... but the next step is to copy the bytes from the QImage into the memory reserved for the QVideoFrame. There is <a href="https://stackoverflow.com/questions/71407367/how-to-convert-qimage-to-qvideoframe-so-i-would-set-it-as-frame-for-videosink-qt">this example</a> that shows how to do that in C++. I tried to get this working in python using the ctypes library like this;</p> <pre><code>ctypes.memmove( frame2.bits(0)[0], image.bits()[0], image.sizeInBytes() ) </code></pre> <p>This is where I got stuck. Basically <code>QImage.bits()</code> and <code>QVideoFrame.bits()</code> both return a <a href="https://docs.python.org/3/c-api/memoryview.html" rel="nofollow noreferrer">memoryview</a> object which I assume contains the actually data in bytes that I need to copy over. The issues with that code snippet above is that <code>bits()[0]</code> always returns zero which raises an error from attempting to access memory that is out of bounds. I think it needs to return the pointer of that memory as an integer. 
I have seen various suggestions that in CPython <code>id(object)</code> will give the pointer of an object, however I am not using CPython - so the question is how to do this in regular python?</p> <p>For reference, my project is using python 3.9.10 on Windows 11 Home version 23H2.</p>
<python><ctypes><pyside6>
2024-09-15 09:55:00
1
748
Edward Spencer
78,986,977
12,415,855
How can I close and open a new window?
<p>I try to close and open a windows using PyQt5 and the following code:</p> <pre class="lang-py prettyprint-override"><code>import PyQt5.QtWidgets as qtw from PyQt5 import uic import sys import os class Window2(qtw.QMainWindow): def __init__(self): super().__init__() uic.loadUi(&quot;window2.ui&quot;,self) class MainWindow(qtw.QMainWindow): def __init__(self): super().__init__() uic.loadUi(&quot;selectWindow.ui&quot;,self) self.path = os.path.abspath(os.path.dirname(sys.argv[0])) self.pbStart.clicked.connect(self.showData) def showData(self): print(self.leInput.text()) self.w = Window2() self.w.show() self.hide() if __name__ == '__main__': app = qtw.QApplication(sys.argv) win = MainWindow() win.show() sys.exit(app.exec_()) </code></pre> <p>The first windows appears, I can enter some value and press the button, but then I get the following error:</p> <pre class="lang-none prettyprint-override"><code>$ python selectWindow.py 3 Traceback (most recent call last): File &quot;D:\DEV\Fiverr2024\TRY\artfulrooms\selectWindow.py&quot;, line 20, in showData self.w = Window2() ^^^^^^^^^ File &quot;D:\DEV\Fiverr2024\TRY\artfulrooms\selectWindow.py&quot;, line 9, in __init__ uic.loadUi(&quot;window2.ui&quot;,self) File &quot;D:\DEV\.venv\pyqt\Lib\site-packages\PyQt5\uic\__init__.py&quot;, line 241, in loadUi return DynamicUILoader(package).loadUi(uifile, baseinstance, resource_suffix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\DEV\.venv\pyqt\Lib\site-packages\PyQt5\uic\Loader\loader.py&quot;, line 66, in loadUi return self.parse(filename, resource_suffix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\DEV\.venv\pyqt\Lib\site-packages\PyQt5\uic\uiparser.py&quot;, line 1037, in parse actor(elem) File &quot;D:\DEV\.venv\pyqt\Lib\site-packages\PyQt5\uic\uiparser.py&quot;, line 822, in createUserInterface self.toplevelWidget = self.createToplevelWidget(cname, wname) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;D:\DEV\.venv\pyqt\Lib\site-packages\PyQt5\uic\Loader\loader.py&quot;, line 57, in createToplevelWidget raise TypeError( TypeError: ('Wrong base class of toplevel widget', (&lt;class '__main__.Window2'&gt;, 'QDialog')) </code></pre> <p>How can I close the current window and open the new window2?</p>
<python><pyqt5>
2024-09-15 09:05:17
1
1,515
Rapid1898
78,986,910
4,466,240
FastAPI endpoint does not generate swagger doc for body parameter created without using Model
<p>I have an API endpoint in FastAPI wherein I'm <strong>not</strong> using Models to access the request body. I'm instead using the built-in type <code>Body</code> with a description.</p> <pre class="lang-py prettyprint-override"><code>@vault_router('/secret') def get_secret( token: Annotated[str, Header(description=&quot;Requires token to be sent in headers as access_token&quot;)], request: Request, path: Annotated[str, Body(description=&quot;key-vault path which needs to be accessed&quot;)], keys: Annotated[List | None, Body(description=&quot;list of keys to fetch from the specified path&quot;)]=None ): # code goes here </code></pre> <p>Redoc generates documentation that perfectly describes the parameters as Optional and Required with the correct description:</p> <p><a href="https://i.sstatic.net/DaopbBV4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DaopbBV4.png" alt="enter image description here" /></a></p> <p>However, SwaggerUI does not:</p> <p><a href="https://i.sstatic.net/jteRQ1fF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jteRQ1fF.png" alt="enter image description here" /></a></p> <p>I'm unable to find any solution on the Internet. Even if I use Models I cannot describe the arguments as I desire with comments, required, optional, etc.</p> <p><em>My inclination is towards <strong>not</strong> using Models for this solution.</em></p>
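For comparison only (the asker prefers to avoid models), here is a sketch of how a Pydantic model carries the same descriptions into the JSON schema that FastAPI embeds in `/openapi.json`, which both Swagger UI and Redoc render. The model and field names mirror the question; everything else is an assumption.

```python
from typing import List, Optional

from pydantic import BaseModel, Field

# Hypothetical model mirroring the question's Body() parameters; the
# Field descriptions land in the generated OpenAPI/JSON schema.
class SecretRequest(BaseModel):
    path: str = Field(description="key-vault path which needs to be accessed")
    keys: Optional[List[str]] = Field(
        default=None,
        description="list of keys to fetch from the specified path",
    )

# This is (roughly) what FastAPI serializes for the docs UIs.
schema = SecretRequest.model_json_schema()
```

Because `keys` has a default, it is omitted from the schema's `required` list, so the Optional/Required split the asker wants is preserved.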
<python><model><fastapi><pydantic>
2024-09-15 08:20:46
1
724
Ritik Saxena
78,986,801
14,472,571
ImportError: cannot import name 'Iterator' from 'typing_extensions' (/databricks/python/lib/python3.10/site-packages/typing_extensions.py)
<p>I'm facing the following issue on Azure Databricks when trying to import <code>from openai import OpenAI</code> after installing <code>openai</code>.</p> <p>Here's the error:</p> <pre><code>ImportError: cannot import name 'Iterator' from 'typing_extensions' (/databricks/python/lib/python3.10/site-packages/typing_extensions.py) </code></pre> <p>I looked up similar issues and found that using <code>--force-reinstall</code> like this:</p> <pre><code>pip install --force-reinstall typing-extensions==4.5 pip install --force-reinstall openai==1.8 </code></pre> <p>Worked for some users. However, it did not work in my case.</p> <p>How do I resolve this?</p>
<python><databricks><azure-databricks><openai-api>
2024-09-15 07:06:46
1
2,778
The Singularity
78,986,747
17,889,492
Sympy solve solutions missing
<p>Sympy's <code>solve</code> method doesn't return all solutions on this relatively simple problem:</p> <pre><code>import sympy as sp y1, y2, x = sp.symbols('y1 y2 x') y1 = x**2 y2 = x sp.solve(y1, y2, dict = True) </code></pre> <p>This returns <code>[{x: 0}]</code> but there should be an <code>x=1</code> solution as well.</p> <p>Am I using this correctly?</p>
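One likely reading of what went wrong (an assumption about intent, not from the question): `sp.solve(y1, y2)` tells SymPy to solve `y1 == 0` for the symbol `y2` — and since `y2` was rebound to `x`, that is just `x**2 = 0`, whose only root is 0. Equating the two expressions returns both roots:

```python
import sympy as sp

x = sp.symbols('x')
y1 = x**2
y2 = x

# solve(y1, y2) means "solve y1 == 0 for the symbol y2", i.e. x**2 = 0.
# To intersect the two curves, equate the expressions instead:
solutions = sp.solve(sp.Eq(y1, y2), x)   # solves x**2 = x
```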
<python><sympy><symbolic-math>
2024-09-15 06:27:50
2
526
R Walser
78,986,621
9,855,588
pydantic collect all failed validations from field_validator
<p>Here are a few field validators using Pydantic v2, but the behavior I'm experiencing is that the <code>field_validator</code> will return an error immediately instead of running all field validations and returning all field errors with the record being validated. This is probably expected, but I'm wondering if it's possible to run all <code>field_validations</code> for a record, and return all failures together.</p> <pre><code>class Foo(BaseModel): name: str age: int @field_validator(&quot;name&quot;, mode=&quot;after&quot;) def field_length(cls, value: str): value_length = len(value) if value_length &gt; 20: raise ValueError(&quot;you know the drill, less than 20!&quot;) return value @field_validator(&quot;age&quot;, mode=&quot;before&quot;) def cast_str_to_int(cls, value: str) -&gt; int: if not value.isnumeric(): raise ValueError(&quot;String read in does not conform to int type.&quot;) return int(value) </code></pre>
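Worth noting (a sketch, not from the original question): Pydantic v2 validates each field independently and already aggregates every field's failure into a single `ValidationError` when the model is constructed, so raising inside one `field_validator` does not stop the others from running:

```python
from pydantic import BaseModel, ValidationError, field_validator

class Foo(BaseModel):
    name: str
    age: int

    @field_validator("name", mode="after")
    @classmethod
    def field_length(cls, value: str) -> str:
        if len(value) > 20:
            raise ValueError("you know the drill, less than 20!")
        return value

    @field_validator("age", mode="before")
    @classmethod
    def cast_str_to_int(cls, value: str) -> int:
        if not value.isnumeric():
            raise ValueError("String read in does not conform to int type.")
        return int(value)

# Both fields are invalid; both failures arrive in ONE ValidationError.
try:
    Foo(name="x" * 25, age="twelve")
    errors = []
except ValidationError as exc:
    errors = exc.errors()   # a list with one entry per failed field
```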
<python><python-3.x><pydantic-v2>
2024-09-15 04:24:31
1
3,221
dataviews
78,986,437
4,529,605
Add variable "monthly" increments to a datetime column in pandas using another column
<p>I have the following dataset and I simply want to add the monthly increments (from a column within the dataframe) to my date column (another column in my dataframe).</p> <pre><code>index monthly_increment date 0 1 2018-12-01 1 2 2018-10-01 2 3 2018-12-01 3 4 2018-01-01 4 5 2018-06-01 </code></pre> <p>I'm hoping to get a new column:</p> <pre><code> index new_date 0 2019-01-01 1 2018-12-01 2 2019-03-01 3 2018-05-01 4 2018-11-01 </code></pre> <p>There are two relevant posts:</p> <ol> <li>Using <a href="https://stackoverflow.com/questions/23917144/python-pandas-dateoffset-using-value-from-another-column">this post</a>, but the timedelta does NOT take month increments</li> <li>Using <a href="https://stackoverflow.com/questions/18935783/pandas-simple-add-one-month-to-a-datetime-column-in-a-dataframe">this post</a>, but DateOffset does not take Series/dataframe column.</li> </ol> <p>When I try using DateOffset I get a <code>TypeError</code>.</p> <pre><code>&gt;&gt; df['new_date'] = df['date']+ pd.DateOffset(df['monthly_increment']) TypeError: `n` argument must be an integer, got &lt;class 'pandas.core.series.Series'&gt; </code></pre> <p>Is there any simple solution to this without using loops/lambda function? My hunch is pandas should have a straightforward solution to this. It is super easy in Spark.</p>
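A sketch of two possible workarounds on the sample data (both are assumptions about what fits the asker's constraints): building a per-row `DateOffset`, or doing vectorized month arithmetic on `datetime64[M]` values — the latter avoids per-row Python objects but assumes every date is a month start.

```python
import pandas as pd

df = pd.DataFrame({
    "monthly_increment": [1, 2, 3, 4, 5],
    "date": pd.to_datetime(["2018-12-01", "2018-10-01", "2018-12-01",
                            "2018-01-01", "2018-06-01"]),
})

# Option 1: DateOffset can't take a Series, but a Series of offsets can
# be built row by row and added to the datetime column.
df["new_date"] = df["date"] + df["monthly_increment"].map(
    lambda m: pd.DateOffset(months=int(m))
)

# Option 2: fully vectorized month arithmetic via numpy's month unit;
# assumes all dates are the first of a month.
months = df["monthly_increment"].to_numpy().astype("timedelta64[M]")
new_date_vec = df["date"].to_numpy().astype("datetime64[M]") + months
```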
<python><pandas><date-manipulation>
2024-09-15 01:14:11
2
405
Gobryas
78,986,411
3,855,264
How to diagnose a serial communication crash that happens randomly and very seldom?
<p>I'm connecting to a Lakeshore 336 Temperature controller from Linux Mint via pyserial. My app works relatively well and has operated for months at a time without error. However, it sometimes happen that the device in question stops responding generating the following error message:</p> <p><em>device reports readiness to read but returned no data (device disconnected or multiple access on port?)</em></p> <p>This error was also happening when I ran an earlier version of this GUI on a RaspberriPi4 thus likely not due to the specifics of the runtime environment.</p> <p>The API for this device inherits from a custom parent class which implements the read function as shown below. Note that I am using semaphores to try to avoid this issue but it still happens! Can someone suggest a way to diagnose (trigger) and perhaps cure this issue?</p> <pre><code>def read(self, command): response = '' if self.mutex.tryLock(self.waitLock): try: if self.ser is not None: if self.serialDevice: self.ser.write(bytes(command + self.settings['ending'],'utf-8')) response = self.ser.readline().decode('utf-8').strip() else: self.openSocket() self.ser.sendall((command+self.settings['ending']).encode()) response, listening = '', True while listening: response += self.ser.recv(128).decode() if self.settings['ending'] in response: response = response.strip() listening = False self.closeSocket() except Exception as e: print('While reading {} from {}, SerialDevice:read raised:'.format(command, self.settings['name']), e) response = '' finally: self.mutex.unlock() #pass else: print('{} not locking while attempting {}'.format(self.settings['name'], command)) return response </code></pre>
<python><mutex><pyserial>
2024-09-15 00:45:55
0
308
Alexis R Devitre
78,986,341
6,618,225
VLOOKUP in Python Pandas
<p>A question that has been asked similarly a million times but I cannot find the correct answer that applies to my case. I have two Excel files that I want to join like an Excel VLOOKUP.</p> <p>File A already has the columns created and they are <strong>partially</strong> filled (or empty). There is one column with addresses that will be the Join column for the other Excel file, File B. However, File A contains more addresses than File B, so File B will only fill the targeted columns based on the values for those addresses; the others should not be affected (so they either remain empty or keep their old value).</p> <p>File B has this column with addresses and then the columns with the values that are to be assigned to the respective addresses in File A. There are other columns in File B but I don't want those in File A.</p> <p>According to my research I need an Inner Left Join to accomplish this. The column in common is AdressID, the columns to merge let's call them AA, BB, CC and DD.</p> <p>My dataframes are called dfFileA and dfFileB.</p> <p>I tried the following code:</p> <pre><code>dfFileA = pd.merge(dfFileA[['AA', 'BB', 'CC', 'AdressID']], dfFileB[['AA', 'BB', 'CC', 'AdressID']], on=['AdressID'], how='left') </code></pre> <p>Error message is KeyError: &quot;['AA', 'BB', 'CC'] not in index&quot;</p> <p>Where is the mistake?</p> <p>Thanks a lot!</p> <p>[EDIT:] Here is an example of my problem, however, here it creates new columns with the same name but ending on _y and it renamed the original ones ending on _x (I don't know why it does that in this example and not in the original file):</p> <pre><code>import pandas as pd FileA = { 'AA' : ['', 'DEF', '', 'JKL'], 'BB' : ['MNO', '', 'STU', ''], 'CC' : ['WX', '', 'GDG', 'GJ'], 'DD' : ['GDSG', 'OFHHJHFS', 'GDJIO', 'GDKOS'], 'EE' : ['GHDH', 'IGHDH', 'GDJG', 'GODJS'], 'AdressID' : ['Great Street 1', 'Amazing Street 5', 'Perfect Street 21', 'Fantastic Street 88'] } FileB = { 'AA' : ['ABC', '', 'GHI', ''], 'BB' :
['', 'PQR', '', 'FAS'], 'CC' : ['', 'YZ', '', 'GJ'], 'FF' : ['GDSGH', 'GDKOJ', 'GDD', 'GDOKJ'], 'GG' : ['GIOJD', 'GDOK', 'GDJI', 'DGOJ'], 'AdressID' : ['Great Street 1', 'Amazing Street 5', 'Perfect Street 21', 'Fantastic Street 88'] } dfFileA = pd.DataFrame(FileA) dfFileB = pd.DataFrame(FileB) print(dfFileA) print(dfFileB) merged = pd.merge(dfFileA, dfFileB, on='AdressID', how='inner') print(merged) </code></pre> <p>Obviously, I want the output to be like</p> <pre><code>merged = { 'AA' : ['ABC', 'DEF', 'GHI', 'JKL'], 'BB' : ['MNO', 'PQR', 'STU', 'FAS'], 'CC' : ['WX', 'YZ', 'GDG', 'GJ'], 'DD' : ['GDSG', 'OFHHJHFS', 'GDJIO', 'GDKOS'], 'EE' : ['GHDH', 'IGHDH', 'GDJG', 'GODJS'], 'AdressID' : ['Great Street 1', 'Amazing Street 5', 'Perfect Street 21', 'Fantastic Street 88'] } </code></pre> <p>Hope this helps better. So I only want to keep the columns from FileB that exist also in FileA. Alternatively, I could just delete them selectively after the merge/combine_first.</p> <p>[Edit: I modified the example, I should have put more different columns in FileA and FileB that do not exist in the other.</p>
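One sketch that produces the desired output on the example data (an illustration, not the only approach): align both files on `AdressID`, treat empty strings as missing values, and let `combine_first` fill only the columns the two files share — so File A's extra columns are untouched and File B's extra columns never enter the result.

```python
import numpy as np
import pandas as pd

dfFileA = pd.DataFrame({
    "AA": ["", "DEF", "", "JKL"],
    "BB": ["MNO", "", "STU", ""],
    "CC": ["WX", "", "GDG", "GJ"],
    "DD": ["GDSG", "OFHHJHFS", "GDJIO", "GDKOS"],
    "EE": ["GHDH", "IGHDH", "GDJG", "GODJS"],
    "AdressID": ["Great Street 1", "Amazing Street 5",
                 "Perfect Street 21", "Fantastic Street 88"],
})
dfFileB = pd.DataFrame({
    "AA": ["ABC", "", "GHI", ""],
    "BB": ["", "PQR", "", "FAS"],
    "CC": ["", "YZ", "", "GJ"],
    "FF": ["GDSGH", "GDKOJ", "GDD", "GDOKJ"],
    "AdressID": ["Great Street 1", "Amazing Street 5",
                 "Perfect Street 21", "Fantastic Street 88"],
})

a = dfFileA.set_index("AdressID")
b = dfFileB.set_index("AdressID")
shared = a.columns.intersection(b.columns)          # only AA, BB, CC
# combine_first keeps A's value where present, else takes B's value
a[shared] = (a[shared].replace("", np.nan)
                      .combine_first(b[shared].replace("", np.nan)))
merged = a.reset_index()
```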
<python><pandas>
2024-09-14 23:28:49
2
357
Kai
78,986,294
14,250,641
Accurately Compute Overlapping and Non-Overlapping Genomic Intervals Across Two DataFrames in Python
<p>I want the <em>exact</em> intervals of DF1 that overlap with DF2. Also, I want the intervals that do not overlap with DF2. This is tricky because you must include A) the DF1 rows that are not overlapping at all B) the DF1 rows that overlap partially.</p> <p>You can see in the first row of DF1 and DF2, there is overlap-- the overlapping interval (in terms of DF1) is 10-20. The non-overlapping intervals (in terms of DF1) would be 1-9, 21-100. The other tricky part comes in when you consider the next row. The non-overlapping interval would be 1-59, which is correct if we just look at this row. But you must make sure ALL intervals (whether overlapping or not) do not overlap within themselves ON THE CHROMOSOME level. If you remember the row above has an overlapping interval from 10-20, which overlaps with 1-59 (so 1-59 is not a true non-overlapping interval).</p> <p>The second row on DF1 has no overlap with DF2, so you can count that entire interval as non-overlapping.</p> <p>Therefore, for <em>chromosome 1</em>: the overlapping intervals are: 10-20, 60-100, 150-200.</p> <p>The non-overlapping intervals are 1-9, 21-59 (for chromosome 1 looking at DF1 only).</p> <p>As you can see, a true test to see if you got the right output is if you add up all of the intervals (overlapping &amp; non-overlapping) it should be equal to the lengths of the regions from DF1.</p> <p>Example (ignoring chromosome 2 for simplicity):</p> <p>DF1: 1-100 + 150-200= 151 bases long (start/end inclusive)</p> <p>1-9 = 9</p> <p>10-20 = 11</p> <p>21-59 = 39</p> <p>60-100 = 41</p> <p>150-200 = 51</p> <p>9+11+39+41+51= 151 bases (matches DF1)</p> <p>DF1</p> <pre><code>chr start end 1 1 100 1 150 200 2 5 10 </code></pre> <p>DF2</p> <pre><code>chr start end 1 10 20 1 60 260 1 500 550 2 1 20 </code></pre> <p>UPDATE:</p> <p>Comment on proposed solution (by @Andrej Kesely):</p> <p>The proposed solution seems promising, but when you test the following example, it doesn't work as expected.</p> <p>DF1 *total
value count= 137</p> <pre><code>Chromosome Start End 0 chr1 200 227 1 chr1 613 721 </code></pre> <p>DF2</p> <pre><code>Chromosome Start End 0 chr1 1000 1227 </code></pre> <p>expected output: DF1 *total value count for left_only= 137</p> <pre><code>Chromosome Start End merge 0 chr1 200 227 left_only 1 chr1 613 721 left_only 2 chr1 1000 1227 right_only [optional to include right_only] </code></pre> <p>Incorrect DF output *total value count for left_only= 522 (does NOT match 137)</p> <pre><code> Chromosome_ range_first range_last _merge_first 1 chr1 1 200 721 left_only </code></pre> <p>You must keep within the ranges of DF1. In this example DF1, position 228-612 does not exist. Since these values are not in DF1 (or DF2) I do not want them reported in the final output.</p>
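A plain-Python sketch of the interval bookkeeping (no pandas/pyranges; the per-chromosome grouping and any extra columns are left out for brevity): merge DF2's intervals first, then split each DF1 interval into its overlapping and non-overlapping pieces. On the chr1 example this reproduces the expected intervals and the 151-base total.

```python
def merge_intervals(intervals):
    """Merge overlapping/adjacent inclusive [start, end] intervals."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(iv) for iv in merged]

def split_interval(start, end, merged_others):
    """Split inclusive [start, end] into (overlapping, non_overlapping)
    pieces relative to a pre-merged, sorted interval list."""
    overlapping, non_overlapping = [], []
    pos = start
    for s, e in merged_others:
        if e < start or s > end:        # no overlap with this interval
            continue
        s, e = max(s, start), min(e, end)  # clip to [start, end]
        if pos < s:
            non_overlapping.append((pos, s - 1))
        overlapping.append((s, e))
        pos = e + 1
    if pos <= end:
        non_overlapping.append((pos, end))
    return overlapping, non_overlapping

# chr1 data from the question
df2_chr1 = merge_intervals([(10, 20), (60, 260), (500, 550)])
over1, non1 = split_interval(1, 100, df2_chr1)    # DF1 row 1-100
over2, non2 = split_interval(150, 200, df2_chr1)  # DF1 row 150-200
```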
<python><pandas><dataframe><bioinformatics><pyranges>
2024-09-14 22:42:36
1
514
youtube
78,985,935
1,479,670
Python calling executable with Popen; file written by executable cannot be accessed
<p>I have a C++ file</p> <pre><code>#include &lt;cstdio&gt; int main(int iArgC, char *apArgV[]) { int iResult = -1; if (iArgC &gt; 1) { FILE *fOut = fopen(apArgV[1], &quot;wt&quot;); if (fOut != NULL) { fprintf(fOut, &quot;hello\n&quot;); fprintf(fOut, &quot;goodbye\n&quot;); fclose(fOut); printf(&quot;Written `%s`\n&quot;, apArgV[1]); iResult = 0; } else { printf(&quot;Couldn't open %s\n&quot;, apArgV[1]); } } else { printf(&quot;I need an argument\n&quot;); } return 0; } </code></pre> <p>which I compile like this</p> <pre><code>g++ -g -o hello hello.cpp </code></pre> <p>When I run <code>./hello /home/jody/hihi.txt</code> the file <code>hihi.txt</code> is written.</p> <p>Then I have this Python script</p> <pre><code>#!/usr/bin/python import subprocess as sp import time def callHello(file_name) : command = [&quot;/home/jody/progs/polyhedronLED/version_c/hello&quot;, file_name] proc = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE) sOut, sError = proc.communicate() iResult = proc.returncode print(&quot;return code: %d&quot;%iResult) print(&quot;Output: %s&quot;%sOut) # -- end def if __name__ == '__main__': file_name0 = &quot;/home/jody/hihi.txt&quot; callHello(file_name0) ff = open(file_name0, &quot;wt&quot;) lines = ff.readlines() ff.close() print(&quot;output\n%s&quot;%lines) # -- end if </code></pre> <p>When I run this script, I get the output:</p> <pre><code>return code: 0 Output: b'Written `/home/jody/hihi.txt`\n' Traceback (most recent call last): File &quot;/home/jody/progs/polyhedronLED/version_c/./hellocall.py&quot;, line 22, in &lt;module&gt; lines = ff.readlines() ^^^^^^^^^^^^^^ io.UnsupportedOperation: not readable </code></pre> <p>But when I subsequently check the output directory, the file <code>hihi.txt</code> has been written.</p> <p>I also tried it with a delay <code>time.sleep(5)</code> after the call to <code>callHello()</code> but the result was the same.</p> <p>I then wrote a bash script which calls <code>hello</code> and subsequently reads
the contents of the file: this worked as expected.</p> <p>When (or how) can a file written by a subprocess be actually accessed in python?</p>
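One plausible reading of the traceback (an observation, not from the question itself): the failure is in Python's own `open()` call, not in the subprocess — `&quot;wt&quot;` opens the file for *writing* (and truncates it), so `readlines()` raises `io.UnsupportedOperation: not readable`. A sketch with a tiny stand-in child process (it assumes nothing about the compiled C++ program):

```python
import os
import subprocess
import sys
import tempfile

path = os.path.join(tempfile.mkdtemp(), "hihi.txt")

# Stand-in for the compiled C++ `hello`: a child process writes the file.
subprocess.run(
    [sys.executable, "-c",
     f"open({path!r}, 'w').write('hello\\ngoodbye\\n')"],
    check=True,
)

# Open the result in *read* mode ("rt"), not write mode ("wt"):
with open(path, "rt") as ff:
    lines = ff.readlines()
```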
<python><subprocess>
2024-09-14 19:03:29
1
1,355
user1479670
78,985,781
15,662,114
How to fix this python import shadowing problem?
<h2>Problem Overview</h2> <p>I'm facing an import shadowing problem with a python 3.12 project that uses google protobuf.</p> <p>Below is the structure of my project.</p> <pre class="lang-none prettyprint-override"><code>Project β”œβ”€β”€ proto (generated by protoc) β”‚ β”œβ”€β”€ google β”‚ β”‚ └── api β”‚ β”‚ └── ... β”‚ └── my_proto (depends on google.api protos) β”‚ └── ... └── main.py Virtualenv └── lib └── python3.12 └── site-packages └── google └── protobuf </code></pre> <p>I'm using a virtual environment to manage dependencies and protoc to generate python code. The problem is that in some of the generated code <code>proto/my_proto</code>, it attempts to do both <code>import google.protobuf</code> (which is the <code>protobuf</code> package) and the generated proto code module <code>import google.api.xxxx</code>. One of the imports is guaranteed to fail because the <code>proto/google</code> module shadows the one in the virtual environment. How can I fix this problem?</p> <h2>Minimal Repro</h2> <h3>Get the Googleapi protos</h3> <p>clone this repo here: <code>https://github.com/googleapis/googleapis</code></p> <h3>Create a directory with the following content</h3> <pre class="lang-none prettyprint-override"><code>. 
β”œβ”€β”€ project β”‚ β”œβ”€β”€ proto β”‚ β”‚ └── __init__.py β”‚ β”œβ”€β”€ __init__.py β”‚ └── __main__.py β”œβ”€β”€ proto β”‚ └── my_proto.proto β”œβ”€β”€ build.sh └── pyproject.toml </code></pre> <h4><code>__main__.py</code></h4> <pre class="lang-py prettyprint-override"><code>from .proto.my_proto_pb2 import * def main(): pass </code></pre> <h4><code>my_proto.proto</code></h4> <pre class="lang-none prettyprint-override"><code>syntax = &quot;proto3&quot;; import &quot;google/api/annotations.proto&quot;; message MyMessage { string example = 1; } </code></pre> <h4><code>pyproject.toml</code></h4> <pre class="lang-ini prettyprint-override"><code>[tool.poetry] name = &quot;project&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [] readme = &quot;README.md&quot; [tool.poetry.scripts] main = &quot;project.__main__:main&quot; [tool.poetry.dependencies] python = &quot;^3.12&quot; grpcio-tools = &quot;1.66.1&quot; grpcio = &quot;1.66.1&quot; protobuf = &quot;5.27.2&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <h4><code>build.sh</code></h4> <pre class="lang-bash prettyprint-override"><code>proto_folder=&quot;./proto&quot; google_proto_folder=&quot;&lt;dir containg this repo: https://github.com/googleapis/googleapis&gt;&quot; poetry lock poetry install poetry run python -m grpc_tools.protoc \ -I&quot;$proto_folder&quot; \ -I&quot;$google_proto_folder/googleapis&quot; \ --experimental_allow_proto3_optional \ --python_out=&quot;./project/proto&quot; \ --grpc_python_out=&quot;./project/proto&quot; \ $google_proto_folder/googleapis/google/api/annotations.proto \ $proto_folder/my_proto.proto poetry install </code></pre> <h3>Build and Run</h3> <p>change the working directory to the project root and run the following:</p> <pre class="lang-bash prettyprint-override"><code>./build.sh poetry run main </code></pre> <p>the following error message will be shown:</p> <pre 
class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/lib/python3.12/importlib/__init__.py&quot;, line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1387, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1360, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1331, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 935, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 995, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 488, in _call_with_frames_removed File &quot;/home/tongweid/Documents/repos/avsim-service/_local/repro/project/project/__main__.py&quot;, line 1, in &lt;module&gt; from .proto.my_proto_pb2 import * File &quot;/home/tongweid/Documents/repos/avsim-service/_local/repro/project/project/proto/my_proto_pb2.py&quot;, line 25, in &lt;module&gt; from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 ModuleNotFoundError: No module named 'google.api' </code></pre> <p>I suspect this is because <code>google.protobuf</code> shadows the generated <code>google.api</code>.</p>
<python><protocol-buffers>
2024-09-14 17:48:03
1
646
dw218192
78,985,764
9,381,966
How to access checkout session metadata in Stripe webhook for payment methods in subscription mode
<p>I'm integrating Stripe Checkout in a Django application and handling webhooks to update user information based on payment events. However, I'm encountering issues accessing metadata associated with a <code>Checkout Session</code> when dealing with <code>payment_method</code> objects.</p> <h3>Context:</h3> <p>I have the following setup for Stripe Checkout:</p> <ul> <li><strong><code>StripeCheckoutMonthlyView</code> and <code>StripeCheckoutYearlyView</code></strong>: Both create a <code>Checkout Session</code> with metadata (e.g., <code>user_id</code>, <code>plan_type</code>).</li> <li><strong>Webhook Handler (<code>stripe_webhook</code>)</strong>: Processes different event types from Stripe.</li> </ul> <h3>Problem:</h3> <p>In the <code>payment_method.attached</code> event, I need to access metadata that was included in the <code>Checkout Session</code>. However, the <code>payment_method</code> object does not include metadata and does not directly reference the <code>Checkout Session</code>.</p> <p>Here’s how my webhook handler looks:</p> <pre class="lang-py prettyprint-override"><code>@csrf_exempt def stripe_webhook(request): payload = request.body event = None print('Stripe Webhook Received!') try: event = stripe.Event.construct_from( json.loads(payload), stripe.api_key ) except ValueError as e: # Invalid payload return HttpResponse(status=400) if event.type == 'payment_intent.succeeded': # Handle payment_intent.succeeded event pass elif event.type == 'payment_method.attached': payment_method = event.data.object # Issue: Payment method does not include metadata or session_id pass elif event.type == 'checkout.session.completed': session = event.data.object # Retrieve session metadata here pass else: print('Unhandled event type {}'.format(event.type)) return HttpResponse(status=200) </code></pre> <h3>What I Need:</h3> <p>I need to update user information based on metadata that was included in the <code>Checkout Session</code>. 
Specifically:</p> <ol> <li>Access metadata in <code>payment_method.attached</code> event.</li> <li>Retrieve metadata from the <code>Checkout Session</code> when handling <code>payment_method</code> events.</li> </ol> <h3>Solution Attempted:</h3> <p>I tried to use <code>payment_intent_data</code>:</p> <pre class="lang-py prettyprint-override"><code>class StripeCheckoutMonthlyView(APIView): def post(self, request, *args, **kwargs): # try: checkout_session = stripe.checkout.Session.create( line_items=[ { 'price': settings.STRIPE_PRICE_ID_MONTHLY, 'quantity': 1, }, ], payment_method_types=['card'], mode='subscription', success_url=settings.SITE_URL + '/pagamento/?success=true&amp;session_id={CHECKOUT_SESSION_ID}', cancel_url=settings.SITE_URL + '/?canceled=true', metadata={'user_id': request.user.id, 'plan_type': 'monthly'}, payment_intent_data={ 'metadata': { 'user_id': request.user.id, } } ) return Response({'url': checkout_session.url, 'id': checkout_session.id, }, status=status.HTTP_200_OK) </code></pre> <p>and then use it in the appropriate function:</p> <pre class="lang-py prettyprint-override"><code>def add_info_card(payment_method): &quot;&quot;&quot; Update the user's card details and payment date based on the payment intent. Args: payment_method: The payment intent object from Stripe containing card and charge details. user: The user object retrieved from the database. 
&quot;&quot;&quot; print('Payment Method: ', payment_method) user = get_object_or_404(User, id=payment_method.metadata.user_id) last4 = payment_method['card']['last4'] payment_date = datetime.now().strftime('%Y-%m-%d %H:%M:%S') brand_card = payment_method['card']['brand'] print('Last 4: ', last4) print('Payment Date: ', payment_date) # Update the user with card details and payment date user.card_last4 = last4 user.brand_card = brand_card user.payment_date = payment_date user.save() print(f&quot;User {user.id} updated with card details and payment date.&quot;) </code></pre> <p>but received the error:</p> <pre><code>stripe._error.InvalidRequestError: Request req_td3acLmE4ziQqi: You can not pass `payment_intent_data` in `subscription` mode. </code></pre> <h3>Questions:</h3> <ol> <li>How can I access <code>Checkout Session</code> metadata when handling <code>payment_method</code> events?</li> <li>What is the best way to link <code>payment_method</code> to <code>Checkout Session</code> metadata in the webhook?</li> </ol>
<python><django><stripe-payments>
2024-09-14 17:38:43
1
1,590
Lucas
78,985,516
3,801,449
How to automatically download or warn about a non-PyPi dependency of a Python package?
<p>I have a Python package, which is distributed on PyPi. It depends on a number of other packages available on PyPi and on <a href="https://psicode.org/" rel="nofollow noreferrer">Psi4</a>, which is only distributed on Conda repositories (<a href="https://anaconda.org/psi4/psi4" rel="nofollow noreferrer">https://anaconda.org/psi4/psi4</a>), not on PyPi.</p> <p>Now, my package is distributed as a <code>wheel</code> package via <code>hatchling</code>, so my <code>pyproject.toml</code> looks similar to this:</p> <pre><code>[build-system] requires = [&quot;hatchling&quot;] build-backend = &quot;hatchling.build&quot; [project] name = &quot;My project&quot; version = &quot;1.0.0&quot; authors = [ ] description = &quot;New method&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.12&quot; classifiers = [ &quot;Programming Language :: Python :: 3&quot;, &quot;License :: OSI Approved :: GNU General Public License v3 (GPLv3)&quot;, &quot;Operating System :: POSIX :: Linux&quot;, ] dependencies = [ &quot;qiskit==1..0&quot;, &quot;qiskit-nature&gt;=0.5.1&quot;, &quot;numpy&gt;=1.23.0&quot;, &quot;deprecated&gt;=1.2.0&quot;] </code></pre> <p>Is there any way to deal with such an external dependency automatically? Ideally it'd download and install Psi4 from its repositories, but if not, is there any way to get at least a warning before the download from PyPi starts?</p> <p>I had a look around and found this related question, which, unfortunately, got no answers:</p> <ul> <li><a href="https://stackoverflow.com/questions/36942209/distributing-pip-packages-that-have-non-pypi-dependencies">Distributing pip packages that have non-pypi dependencies</a></li> </ul>
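As far as I know there is no standard `pyproject.toml` field that can pull a dependency from conda, but a runtime check can at least turn the eventual `ImportError` into a clear, early warning. A sketch (the message text and function name are my own invention):

```python
import importlib.util
import warnings

def check_conda_dependency(module_name: str, hint: str) -> bool:
    """Warn early, instead of failing later with ImportError, when a
    conda-only dependency (e.g. Psi4) is missing from the environment."""
    if importlib.util.find_spec(module_name) is None:
        warnings.warn(
            f"{module_name} is not installed and is not available on PyPI; {hint}"
        )
        return False
    return True

# e.g. called from the package's __init__.py:
# check_conda_dependency("psi4",
#                        "install it with: conda install -c conda-forge psi4")
```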
<python><pypi><python-packaging><hatch>
2024-09-14 15:41:21
1
3,007
Eenoku
78,984,852
8,179,672
Renumerate a column in a group
<p>I'm looking for a scalable way to re-numerate groups (in my case cluster number) in a column. Below the current state with Pandas schema.</p> <pre><code>data = [ [&quot;Adam&quot;, 1, 22], [&quot;Eddy&quot;, 0, 22], [&quot;Boby&quot;, 9, 22], [&quot;Timy&quot;, 2, 26], [&quot;Carl&quot;, 9, 26], [&quot;Anna&quot;, 0, 33], [&quot;Paul&quot;, 5, 35], [&quot;Mike&quot;, 7, 51], ] current = pd.DataFrame(data, columns=['name', 'score', 'cluster'] ).astype({'name':'string', 'score':'Int64', 'cluster':'Int64'}) </code></pre> <p>current table looks like that:</p> <pre><code>+----+--------+---------+-----------+ | | name | score | cluster | |----+--------+---------+-----------| | 0 | Adam | 1 | 22 | | 1 | Eddy | 0 | 22 | | 2 | Boby | 9 | 22 | | 3 | Timy | 2 | 26 | | 4 | Carl | 9 | 26 | | 5 | Anna | 0 | 33 | | 6 | Paul | 5 | 35 | | 7 | Mike | 7 | 51 | +----+--------+---------+-----------+ </code></pre> <p>my desired state would be:</p> <pre><code>+----+--------+---------+-----------+ | | name | score | cluster | |----+--------+---------+-----------| | 0 | Adam | 1 | 1 | | 1 | Eddy | 0 | 1 | | 2 | Boby | 9 | 1 | | 3 | Timy | 2 | 2 | | 4 | Carl | 9 | 2 | | 5 | Anna | 0 | 3 | | 6 | Paul | 5 | 4 | | 7 | Mike | 7 | 5 | +----+--------+---------+-----------+ </code></pre> <p>Assume there will be millions of rows and hundreds of clusters. For a very small number of clusters, I can do it by hand, but I need a scalable solution.</p> <p>I've tried to iterate over groups after <code>.groupby(&quot;cluster&quot;)</code> and to assign consecutive numbers but I failed.</p>
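A sketch on the sample frame (one possible approach, not the only one): `groupby(...).ngroup()` — or equivalently `pd.factorize` — assigns consecutive group numbers without any Python-level loop, so it scales to millions of rows and hundreds of clusters.

```python
import pandas as pd

data = [
    ["Adam", 1, 22], ["Eddy", 0, 22], ["Boby", 9, 22],
    ["Timy", 2, 26], ["Carl", 9, 26], ["Anna", 0, 33],
    ["Paul", 5, 35], ["Mike", 7, 51],
]
current = pd.DataFrame(data, columns=["name", "score", "cluster"])

# ngroup() numbers groups 0..n-1; sort=False keeps first-appearance order
current["cluster"] = current.groupby("cluster", sort=False).ngroup() + 1

# equivalent alternative: pd.factorize(current["cluster"])[0] + 1
```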
<python><pandas><dataframe>
2024-09-14 10:07:03
2
739
Roberto
78,984,714
6,345,936
How to use asyncio gather with sqlalchemy session in FastAPI and the workaround
<p>I am trying to make asynchronous calls using asyncio gather to run multiple database queries. I am getting the error</p> <pre><code>sqlalchemy.exc.IllegalStateChangeError: Method 'close()' can't be called here; method '_connection_for_bind()' is already in progress and this would cause an unexpected state change to &lt;SessionTransactionState.CLOSED: 5&gt; (Background on this error at: https://sqlalche.me/e/20/isce) </code></pre> <p>Based on this GitHub discussion (<a href="https://github.com/sqlalchemy/sqlalchemy/discussions/9312" rel="nofollow noreferrer">https://github.com/sqlalchemy/sqlalchemy/discussions/9312</a>), it's not possible to share the same session while doing <code>asyncio.gather(func(session), func2(session))</code></p> <p>Is there any solution that will let us run some DB queries simultaneously?</p> <p><strong>Code:</strong></p> <pre class="lang-py prettyprint-override"><code>async def handler( app_db: AsyncSession = Depends(get_async_session_app), ): # some other codes conversation_task = get_one_app( app_db, query=text(&quot;SELECT data_schema FROM conversations WHERE id = :conversation_id&quot;), params={&quot;conversation_id&quot;: conversation_id}, ) messages_task = get_all_app( app_db, query=text(&quot;&quot;&quot; SELECT id, role, content FROM raw_messages WHERE conversation_id = :conversation_id ORDER BY created_at ASC, id ASC &quot;&quot;&quot;), params={&quot;conversation_id&quot;: conversation_id}, ) conversation, messages_result = await asyncio.gather(conversation_task, messages_task) #rest of the code </code></pre> <p>database config</p> <pre><code>SQLALCHEMY_DATABASE_URL = os.getenv(&quot;APP_DB_URL&quot;) engine = create_async_engine(SQLALCHEMY_DATABASE_URL, echo=True) AsyncSessionLocal = sessionmaker( autocommit=False, autoflush=False, bind=engine, class_=AsyncSession ) async def get_async_session(): async with AsyncSessionLocal() as session: try: yield session finally: await session.close() </code></pre>
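The usual workaround (a structural sketch, not a verified SQLAlchemy recipe): give each gathered task its *own* session, created from the session factory inside the coroutine, instead of passing the request-scoped session into both. Below, a dummy session class stands in for `AsyncSessionLocal` so no database is involved; only the concurrency shape is real.

```python
import asyncio

class DummySession:
    """Stand-in for an AsyncSession produced by AsyncSessionLocal()."""
    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False

    async def fetch(self, query: str) -> str:
        await asyncio.sleep(0.01)          # pretend to hit the database
        return f"rows for {query!r}"

async def run_query(query: str) -> str:
    # Each gathered task opens its OWN session instead of sharing one.
    async with DummySession() as session:
        return await session.fetch(query)

async def handler():
    return await asyncio.gather(
        run_query("SELECT data_schema FROM conversations"),
        run_query("SELECT id, role, content FROM raw_messages"),
    )

results = asyncio.run(handler())
```

With real SQLAlchemy the idea would presumably be `async with AsyncSessionLocal() as session:` inside each helper, rather than reusing the `Depends`-injected session in both tasks.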
<python><sqlalchemy><python-asyncio>
2024-09-14 09:01:03
1
695
kusiaga
78,984,536
1,388,799
Why does this tkinter image animation not delay properly?
<p>Here's my attempt, modified from another post, to create a <em>minimal</em> loop where I create a numpy array and display it. (For convenience I have a Start button.)</p> <p>It works fine, with one exception -- it runs at full speed, regardless of what delay I put in as the first argument to self.after (it's 1000 in the example below. Changing this arg has no visible effect.)</p> <p>What am I missing? (I am running python 3.8.3. Oh, that's apparently a very old version. Damn. I'll go update just in case that's related to the issue.)</p> <pre><code>from PIL import Image, ImageTk import numpy as np import tkinter as tk class App(tk.Tk): def __init__(self): super().__init__() self.canvas = tk.Canvas(self, width = 1200, height=1200) self.canvas.pack() tk.Button(self, text=&quot;Start&quot;, command=self.start_animation).pack() def start_animation(self): self.canvas.delete('all') # generate a random test frame im = np.random.randint(0, 255, (1024,1024,3), dtype=&quot;uint8&quot;) im1 = Image.fromarray(im, 'RGB') im2 = ImageTk.PhotoImage(im1) self.canvas.create_image(600, 600, image=im2) self.update() self.after(1000, self.start_animation()) if __name__ == '__main__': App().mainloop() </code></pre>
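The likely culprit is the trailing parentheses in `self.after(1000, self.start_animation())`: that *calls* `start_animation` immediately and hands `after` its return value (`None`), so the delay never applies. The fix is to pass the function object itself: `self.after(1000, self.start_animation)`. A tiny display-free sketch of the difference, where `after` is a toy stand-in for the Tk method:

```python
import time

def after(delay_ms, callback):
    """Toy stand-in for tkinter's widget.after(delay_ms, callback)."""
    time.sleep(delay_ms / 1000)
    callback()

calls = []

def step():
    calls.append(time.monotonic())

t0 = time.monotonic()
# Buggy form -- step() would run NOW, and after() would receive None:
#     after(200, step())
# Correct form -- hand over the callable itself:
after(200, step)
elapsed = calls[0] - t0
print(f"callback ran after {elapsed:.2f}s")
```

Two further notes on the Tk version: once the callback chain drives itself, `self.update()` is unnecessary, and the `PhotoImage` should be kept as an attribute (e.g. `self.im2 = im2`), since with the bug fixed the local reference is garbage-collected when the method returns and the canvas can go blank.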
<python><numpy><tkinter>
2024-09-14 07:30:55
1
359
Walt Donovan
78,984,405
7,516,523
Find duplicate "group of rows" in pandas DataFrame
<p>How can I find duplicates of a group of rows inside of a DataFrame? Or in other words, how can I find the indices of a specific duplicated DataFrame inside of a larger DataFrame?</p> <p>The larger DataFrame:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>index</th> <th>0</th> <th>1</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> <td>1</td> </tr> <tr> <td>1</td> <td>2</td> <td>3</td> </tr> <tr> <td>2</td> <td>4</td> <td>4</td> </tr> <tr> <td>3</td> <td>0</td> <td>1</td> </tr> <tr> <td>4</td> <td>2</td> <td>3</td> </tr> <tr> <td>5</td> <td>2</td> <td>3</td> </tr> <tr> <td>6</td> <td>0</td> <td>1</td> </tr> </tbody> </table></div> <p>The specific duplicated DataFrame (or group of rows):</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>index</th> <th>0</th> <th>1</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> <td>1</td> </tr> <tr> <td>1</td> <td>2</td> <td>3</td> </tr> </tbody> </table></div> <p>Indices I am looking for:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>index</th> </tr> </thead> <tbody> <tr> <td>0</td> </tr> <tr> <td>1</td> </tr> <tr> <td>3</td> </tr> <tr> <td>4</td> </tr> </tbody> </table></div> <p>(Note that the indices of the duplicated DataFrame do not matter, only the values).</p> <pre><code>import pandas as pd # larger DataFrame lrg_df = pd.DataFrame([[0, 1], [2, 3], [4, 4], [0, 1], [2, 3], [2, 3], [0, 1]]) # group of rows (i.e., duplicated DataFrame) dup_df = pd.DataFrame([[0, 1], [2, 3]]) # get indices of lrg_df that contain dup_df indcs = lrg_df[lrg_df == dup_df].index # Doesn't work of course </code></pre>
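One way to do this is to slide a window of `len(dup_df)` rows over the larger frame and compare each window to the block with NumPy, then expand every matching start position into its covered row labels. A minimal sketch (O(n·m) comparisons, fine for moderate sizes):

```python
import numpy as np
import pandas as pd

lrg_df = pd.DataFrame([[0, 1], [2, 3], [4, 4], [0, 1], [2, 3], [2, 3], [0, 1]])
dup_df = pd.DataFrame([[0, 1], [2, 3]])

n = len(dup_df)
target = dup_df.to_numpy()
vals = lrg_df.to_numpy()

# Every starting position where the next n rows equal the block exactly
starts = [i for i in range(len(vals) - n + 1)
          if np.array_equal(vals[i:i + n], target)]

# Expand each start into the n row labels it covers; the set handles
# overlapping matches without duplicating labels
idx = sorted({lrg_df.index[i + j] for i in starts for j in range(n)})
print(idx)  # [0, 1, 3, 4]
```

For very large frames, `numpy.lib.stride_tricks.sliding_window_view` can vectorize the window comparison instead of the Python loop.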
<python><pandas><dataframe><indexing><duplicates>
2024-09-14 05:59:47
3
345
Florent H
78,984,342
14,250,641
Find non-overlapping intervals within DNA coordinates
<p>I am trying to find the non-overlapping intervals for the start/end DNA coordinates (on the same chromosome). I am having a hard time developing a function that takes into account two rows on the same exon. The non-overlapping interval must be unique (not overlap with any other intervals).</p> <p>For example, on the first row below, the non-overlapping interval would be 1-49, 61-100. But, if you look at the second row, the non-overlapping interval would be 1-69, 81-100. I want the output intervals to be non-overlapping with <em>every</em> row's interval, so the true interval output I want is 1-49, 61-69, 81-100. Ideally, I would like these intervals to be separated into their own columns (non-overlap_start, non-overlap_end).</p> <p>*Please note: I have unique intervals <em>per chromosome</em>.</p> <p>starting DF</p> <pre><code>chrom exon_start exon_end start1 end1 1 1 100 50 60 1 1 100 70 80 1 150 155 155 160 2 5 50 25 100 </code></pre> <p>final DF</p> <pre><code>chrom exon_start exon_end start1 end1 non_overlap 1 1 100 50 60 [1-49, 61-69, 81-100] 1 1 100 70 80 [1-49, 61-69, 81-100] 1 150 155 155 160 [150-154] 2 5 50 25 100 [5-24] </code></pre>
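One way to get there is plain interval subtraction: for each row, start from the exon range and carve out every `start1`/`end1` interval that appears on the same chromosome (inclusive coordinates assumed, matching the sample output). A sketch:

```python
import pandas as pd

df = pd.DataFrame({
    "chrom": [1, 1, 1, 2],
    "exon_start": [1, 1, 150, 5],
    "exon_end": [100, 100, 155, 50],
    "start1": [50, 70, 155, 25],
    "end1": [60, 80, 160, 100],
})

def subtract(lo, hi, blocks):
    """Remove each inclusive [s, e] block from the inclusive range [lo, hi]."""
    pieces = [(lo, hi)]
    for s, e in blocks:
        nxt = []
        for a, b in pieces:
            if e < a or s > b:          # no overlap with this piece
                nxt.append((a, b))
                continue
            if a < s:                   # remainder left of the block
                nxt.append((a, s - 1))
            if b > e:                   # remainder right of the block
                nxt.append((e + 1, b))
        pieces = nxt
    return [f"{a}-{b}" for a, b in sorted(pieces)]

df["non_overlap"] = df.apply(
    lambda r: subtract(
        r.exon_start, r.exon_end,
        df.loc[df.chrom == r.chrom, ["start1", "end1"]].to_numpy()),
    axis=1,
)
print(df["non_overlap"].tolist())
```

To get separate `non_overlap_start` / `non_overlap_end` columns instead, return `(a, b)` tuples from `subtract` and `explode` the column into rows.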
<python><pandas><dataframe><bioinformatics>
2024-09-14 05:01:47
2
514
youtube
78,984,299
11,141,816
How to run an interactive Julia session in Python without using Python's julia library?
<p>I want to use python to send command to julia and then get string output as the return and then send the command again. i.e. <code>x=2</code> and then <code>x+1</code> return <code>3</code>. This required an interactive session through subprocess. I tried the <code>subprocess.Popen([&quot;julia&quot;, &quot;-e&quot;]</code>, the non interactive session and checked that the Julia interface worked. However, when I tried to use the following code to run the Julia interactive session, it got stuck.</p> <pre><code>import subprocess class PersistentJulia: def __init__(self): # Start Julia in interactive mode self.process = subprocess.Popen( [&quot;julia&quot;, &quot;-i&quot;], # Interactive mode stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, # Text mode for easier string handling bufsize=1, # Line-buffered ) def run_command(self, command): # Send the command to Julia and read the output self.process.stdin.write(command + &quot;\n&quot;) self.process.stdin.flush() # Read the output output = [] while True: line = self.process.stdout.readline() if not line or line.startswith(&quot;julia&gt;&quot;): break output.append(line) # Return the collected output as a single string return ''.join(output).strip() def close(self): # Close the Julia process self.process.stdin.write(&quot;exit()\n&quot;) self.process.stdin.flush() self.process.stdin.close() self.process.stdout.close() self.process.stderr.close() self.process.terminate() self.process.wait() # Example usage if __name__ == &quot;__main__&quot;: julia = PersistentJulia() # Run some Julia commands output1 = julia.run_command('x = 2') print(&quot;Output 1:&quot;, output1) output2 = julia.run_command('x + 3') print(&quot;Output 2:&quot;, output2) output3 = julia.run_command('println(&quot;The value of x is &quot;, x)') print(&quot;Output 3:&quot;, output3) # Close the Julia process julia.close() </code></pre> <p>The <code>julia = PersistentJulia()</code> could be ran successfully, but start 
with</p> <pre><code># Run some Julia commands output1 = julia.run_command('x = 2') print(&quot;Output 1:&quot;, output1) </code></pre> <p>the code got stuck and didn't work. It looked fine, but I'm not sure which part went wrong.</p> <p>I'm not sure if subprocess was the right way to go, and I don't want to use the Julia library in Python. How do I run an interactive Julia session in Python without using Python's julia library?</p>
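The `readline()` loop blocks because an interactive REPL writes its prompt to the terminal (or stderr), not to the stdout pipe, so the `julia>` marker never arrives on `self.process.stdout`. A common workaround is to have the REPL itself print a sentinel after every command and read until it appears. Below is a sketch of that pattern; it is demonstrated with `python -i` standing in for Julia so it runs anywhere, and for Julia you would construct it as `PersistentREPL(["julia", "-q", "-i", "--startup-file=no"], 'println("%s")')` (those flags are assumptions intended to keep the banner and startup file quiet):

```python
import subprocess
import sys

SENTINEL = "__CMD_DONE__"

class PersistentREPL:
    """Drive a line-based REPL through pipes using a sentinel marker."""

    def __init__(self, argv, print_stmt):
        self.print_stmt = print_stmt  # how this REPL prints a literal string
        self.process = subprocess.Popen(
            argv,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.DEVNULL,  # prompts and banners land here
            text=True,
            bufsize=1,
        )

    def run_command(self, command):
        # Send the command, then ask the REPL itself to emit the sentinel.
        self.process.stdin.write(command + "\n")
        self.process.stdin.write((self.print_stmt % SENTINEL) + "\n")
        self.process.stdin.flush()
        lines = []
        while True:
            line = self.process.stdout.readline()
            if not line or line.strip() == SENTINEL:
                break
            lines.append(line)
        return "".join(lines).strip()

    def close(self):
        self.process.stdin.close()  # EOF makes the REPL exit
        self.process.wait()

# Python stands in for Julia here purely so the sketch is runnable:
repl = PersistentREPL([sys.executable, "-u", "-i", "-q"], 'print("%s")')
repl.run_command("x = 2")
result = repl.run_command("print(x + 3)")
repl.close()
print(result)  # 5
```

For long-lived use you would also want a timeout around `readline()` and a way to detect errors (e.g. a second sentinel on stderr), but the sentinel trick is the core of making the blocking loop terminate.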
<python><julia><subprocess><interactive>
2024-09-14 04:27:11
0
593
ShoutOutAndCalculate
78,984,259
2,210,825
Consensus sequence of lists of strings via multiple sequence alignment?
<p>I'm working on a program whose goal is to make a playlist that &quot;flows&quot; nicely. To do this, I obtain for each song a list of the genres it belongs to from Spotify. I've assumed that the order of this list heuristically goes from &quot;bigger&quot; genres to &quot;nicher&quot; genres. Effectively, my data structure here is a list of lists of strings <code>[[&quot;rock&quot;, &quot;American rock&quot;, &quot;synth rock&quot;], [&quot;rock&quot;, &quot;French rock&quot;, &quot;synth rock&quot;], [&quot;pop&quot;, &quot;rock&quot;, &quot;alt rock&quot;]]</code>.</p> <p>Now, in order to figure out how to order my playlist, I need a &quot;consensus sequence&quot; of all the genres. Because I have a computational biology background, my first thought was to try to align these lists together and obtain a consensus from that alignment.</p> <p>Do you know of any packages that implement such an algorithm on sequences of strings rather than DNA/RNA/Protein alphabets in Python?</p>
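Worth noting before reaching for a bioinformatics package: the standard library's `difflib.SequenceMatcher` already aligns *any* sequences of hashable items — including lists of genre strings — so a rough progressive merge (a simplification of true MSA, with ordering ties resolved by whichever list is folded in first) can be sketched with no dependencies:

```python
from difflib import SequenceMatcher

playlists = [
    ["rock", "American rock", "synth rock"],
    ["rock", "French rock", "synth rock"],
    ["pop", "rock", "alt rock"],
]

def merge(a, b):
    """Pairwise-align two genre lists; keep matched genres once and
    interleave the unmatched ones in order."""
    sm = SequenceMatcher(a=a, b=b, autojunk=False)
    out = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            out.extend(a[i1:i2])
        else:                      # keep both sides' unmatched genres
            out.extend(a[i1:i2])
            out.extend(b[j1:j2])
    return out

# Progressive "alignment": fold each list into a growing consensus.
consensus = playlists[0]
for seq in playlists[1:]:
    consensus = merge(consensus, seq)
print(consensus)
```

For a principled consensus (guide trees, substitution scores over a genre-similarity metric) you would still want a real MSA implementation, but this shows the token-level alignment itself needs nothing biological.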
<python><bioinformatics><spotify><sequence-alignment>
2024-09-14 03:58:52
1
1,458
donkey
78,984,164
11,573,887
Accessing BigQuery Dataset from a Different GCP Project Using PySpark on Dataproc
<p>I am working with BigQuery, Dataproc, Workflows, and Cloud Storage in Google Cloud using Python.</p> <p>I have two GCP projects:</p> <ul> <li><p><strong>gcp-project1:</strong> contains the BigQuery dataset <strong>gcp-project1.my_dataset.my_table</strong></p> </li> <li><p><strong>gcp-project2:</strong> contains my <strong>myscript.py</strong> and my files stored in Cloud Storage</p> </li> </ul> <p>In <strong>myscript.py</strong>, I am trying to read a SQL query from a file stored in Cloud Storage (<strong>query1=gs://path/bq.sql</strong>) and query data from the BigQuery dataset in <strong>gcp-project1</strong>.</p> <p>According to the documentation <a href="https://github.com/GoogleCloudDataproc/spark-bigquery-connector?tab=readme-ov-file#reading-data-from-a-bigquery-query" rel="nofollow noreferrer">here</a>, when reading from BigQuery using a SQL query, I need to set the properties <strong>viewsEnabled=true</strong> and <strong>materializationDataset=dataset</strong>.</p> <p>Here are the approaches I tried:</p> <p><strong>Test 1:</strong></p> <pre><code>spark.conf.set(&quot;viewsEnabled&quot;, &quot;true&quot;) spark.conf.set(&quot;materializationDataset&quot;, &quot;my_dataset&quot;) </code></pre> <p>This fails because it searches for the dataset in <strong>gcp-project2</strong> (where <strong>myscript.py</strong> is running), but my dataset is in <strong>gcp-project1</strong>. 
The error is: <strong>Not found: Dataset gcp-project2:my_dataset was not found in location...</strong></p> <p><strong>Test 2:</strong></p> <pre><code>spark.conf.set(&quot;viewsEnabled&quot;, &quot;true&quot;) spark.conf.set(&quot;materializationDataset&quot;, &quot;gcp-project1.my_dataset&quot;) </code></pre> <p>This fails with the error: <strong>Dataset IDs must be alphanumeric (plus underscores) and must be at most 1024 characters long.</strong></p> <p><strong>Test 3:</strong></p> <pre><code>spark.conf.set(&quot;viewsEnabled&quot;, &quot;true&quot;) spark.conf.set(&quot;materializationDataset&quot;, &quot;my_dataset&quot;) try: df = spark.read.format('bigquery') \ .option('project', 'gcp-project1') \ #Adding gcp-project1 contains dataset .option('query', query1) \ .load() df.printSchema() df.show(10) except Exception as e: logger.error(f&quot;Failed to read data from BigQuery: {e}&quot;) sys.exit(1) </code></pre> <p>This also fails with the same error: <strong>Not found: Dataset gcp-project2:my_dataset was not found in location...</strong></p> <p><strong>Question:</strong></p> <p>How can I configure my PySpark script to read data from a BigQuery dataset in <strong>gcp-project1</strong> while running the script in <strong>gcp-project2</strong>?</p> <p>Any suggestions for interacting with datasets across different GCP projects would be appreciated.</p> <p>Thanks in advance!</p>
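The connector splits the two pieces of information into separate options: `materializationDataset` must be the bare dataset ID (hence the "alphanumeric" error in Test 2), and the project that owns it goes in `materializationProject`; a `parentProject` option also exists for the billing project if needed. A sketch of the option set — the Spark calls are commented out since they need a live Dataproc cluster, and the project/dataset names are this question's examples:

```python
# Cross-project read options for the spark-bigquery connector.
opts = {
    "viewsEnabled": "true",
    "materializationProject": "gcp-project1",  # project that owns my_dataset
    "materializationDataset": "my_dataset",    # dataset ID only, no prefix
}

# df = (spark.read.format("bigquery")
#           .options(**opts)
#           .option("query", query1)
#           .load())

for key, value in opts.items():
    print(f"{key}={value}")
```

The Dataproc service account in gcp-project2 also needs BigQuery permissions on the gcp-project1 dataset, independently of these options.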
<python><apache-spark><google-bigquery><google-cloud-dataproc>
2024-09-14 02:28:55
1
386
Henry Xiloj Herrera
78,983,916
4,921,103
Implementing Discriminated Unions in Pydantic without using Nested Models?
<p>I'm trying to implement discriminated unions in Pydantic to select the correct class based on user input using the <code>discriminator</code> parameter. While the <a href="https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions-with-str-discriminators" rel="nofollow noreferrer">documentation</a> suggests creating a nested model class to handle this easily, I'd like to use this functionality without introducing an additional nested model and have a similar behaviour as a normal Pydantic <code>BaseModel</code> class.</p> <p>I've tried to use <code>RootModel</code> as a workaround, but the resulting object is encapsulated within the <code>.root</code> property, which isn't ideal for my use case. I am able to do <code>.model_dump()</code> but unable to access the attributes on it directly.</p> <p>Is there a better way to implement this without creating a nested model or using <code>RootModel</code>?</p> <pre><code>from typing import Literal, Union, Annotated from pydantic import BaseModel, Field, RootModel class Cat(BaseModel): pet_type: Literal[&quot;cat&quot;] meows: int class Dog(BaseModel): pet_type: Literal[&quot;dog&quot;] barks: float class Lizard(BaseModel): pet_type: Literal[&quot;reptile&quot;, &quot;lizard&quot;] scales: bool Animal = Annotated[ Union[Cat, Dog, Lizard], Field(discriminator=&quot;pet_type&quot;), ] AnimalModel = RootModel[Animal] animal = AnimalModel.model_validate({&quot;pet_type&quot;: &quot;cat&quot;, &quot;meows&quot;: 3}) try: # want to access the attributes directly print(animal.pet_type) except AttributeError as e: print(e) #&gt; &quot;RootModel[Annotated[Union[Cat, Dog, Lizard], FieldInfo(annotation=NoneType, required=True, discriminator='pet_type')]]&quot; object has no attribute 'pet_type' # have to access the attributes by first accessing the .root attribute print(animal.root.pet_type) </code></pre> <p>(am using Pydantic v2.7)</p>
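In Pydantic 2 the usual way to validate a bare annotated union — returning the concrete member directly, with no `.root` indirection — is `TypeAdapter`. A sketch trimmed to two variants:

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, TypeAdapter

class Cat(BaseModel):
    pet_type: Literal["cat"]
    meows: int

class Dog(BaseModel):
    pet_type: Literal["dog"]
    barks: float

Animal = Annotated[Union[Cat, Dog], Field(discriminator="pet_type")]

# TypeAdapter wraps any annotation with validate/dump methods and
# returns the matching union member itself, so attributes work directly:
adapter = TypeAdapter(Animal)
animal = adapter.validate_python({"pet_type": "cat", "meows": 3})

print(type(animal).__name__, animal.pet_type, animal.meows)  # Cat cat 3
```

The trade-off versus `RootModel` is that `Animal` stays a type alias rather than a model class, so there is no `.model_dump()` on the alias — you call `adapter.dump_python(animal)` instead.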
<python><pydantic>
2024-09-13 22:34:18
1
48,014
Rahul Gupta
78,983,898
856,804
how to fix `"Series[Any]" not callable [operator]` mypy error
<p>For the code below</p> <pre><code>import pandas as pd df_data = pd.DataFrame( [ {&quot;a&quot;: 2, &quot;b&quot;: 1}, {&quot;a&quot;: 2, &quot;b&quot;: 10}, {&quot;a&quot;: 3, &quot;b&quot;: 77}, ] ) df_data.groupby(&quot;a&quot;).size().to_frame(&quot;size&quot;) </code></pre> <p>mypy complains</p> <pre><code>toy.py:12: error: &quot;Series[Any]&quot; not callable [operator] Found 1 error in 1 file (checked 1 source file) </code></pre> <p>I know the complaint is due to the <code>to_frame</code> call, but I am not sure why, or how to fix it.</p> <p>versions:</p> <ul> <li>pandas: <code>2.2.2</code></li> <li>pandas-stubs: <code>2.2.2.240807</code></li> <li>mypy: <code>1.11.2</code></li> <li>python: <code>3.12.4</code></li> </ul>
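If the end goal is simply the per-group size as a DataFrame, one pragmatic way to sidestep how the stubs type the chained `.size().to_frame()` call is named aggregation, which produces the same result in a single method call. Whether this silences mypy depends on the pandas-stubs version, so treat it as a workaround sketch rather than a guaranteed fix:

```python
import pandas as pd

df_data = pd.DataFrame(
    [{"a": 2, "b": 1}, {"a": 2, "b": 10}, {"a": 3, "b": 77}]
)

# Named aggregation: per-group size as a one-column DataFrame
out = df_data.groupby("a").agg(size=("b", "size"))
print(out)
```

The other escape hatch, if you want to keep the original chain exactly, is a targeted `# type: ignore[operator]` on that line.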
<python><pandas><python-typing><mypy>
2024-09-13 22:22:17
0
9,110
zyxue
78,983,868
17,729,094
Keep only rows that have at least one null
<p>I am trying to do basically the opposite of <code>drop_nulls()</code>. I want to keep all rows that have at least one null.</p> <p>I want to do something like (but I don't want to list all other columns):</p> <pre class="lang-py prettyprint-override"><code>for (name,) in ( df.filter( pl.col(&quot;a&quot;).is_null() | pl.col(&quot;b&quot;).is_null() | pl.col(&quot;c&quot;).is_null() ) .select(&quot;name&quot;) .unique() .rows() ): print( f&quot;Ignoring `{name}` because it has at least one null&quot;, file=sys.stderr, ) df = df.drop_nulls() </code></pre>
<python><dataframe><python-polars>
2024-09-13 22:07:20
1
954
DJDuque
78,983,752
367,181
Internationalization on the server side (changing languages dynamically multiple times per second with gettext)
<p>I am working on a websocket server in Python (FastAPI) which will work with multiple connected clients over websockets. The server is exchanging JSON messages with the clients. Some of the JSONs outputted by the server are UI-related i.e. they may include text messages which are then shown to the user in the client UI.</p> <p>How do I make the server-side multilingual?</p> <p>Let's suppose I always know each user's language and the language never changes as long as the websocket is open. All I want is my server to be able to translate messages on the fly, for each user.</p> <p>Python's <code>gettext</code> includes an example where the <code>install()</code> method is invoked on a translation object to do exactly that: change the language on the fly. But from what I understood the <code>install()</code> method seems to be meant for client-side usage, where the user changes the language in the client app. I want to be able to do the same thing on the server side, where I'll have hundreds or even thousands of connected websockets with potentially different languages. Which means the language switching can potentially happen for each serialized JSON message. With a large number of concurrent users (with different languages) it can theoretically be many times per second.</p> <p>Is calling the <code>Translation.install()</code> method just before serialization of each JSON message on the server side <em>really</em> the correct way of doing it? Can very frequent changes affect overall server performance or maybe have other side effects? If it's a no-go, what's the best practice?</p>
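Rather than calling `install()` — which rebinds a process-global `_` and is exactly the shared mutable state you don't want with thousands of concurrent sockets — the usual server-side pattern is to load one translation object per language, cache it, and call its `gettext` method explicitly when serializing each message. A sketch; the `messages` domain and `locale` directory are placeholder names, and `fallback=True` means a missing catalog degrades to the untranslated string:

```python
import gettext
from functools import lru_cache

@lru_cache(maxsize=None)
def get_translator(lang: str) -> gettext.NullTranslations:
    # Each catalog is loaded from disk at most once per process.
    return gettext.translation(
        "messages", localedir="locale", languages=[lang], fallback=True
    )

def render(lang: str, msgid: str) -> str:
    # Explicit per-call lookup: no install(), no global state, so two
    # websockets in different languages can serialize concurrently.
    return get_translator(lang).gettext(msgid)

print(render("de", "hello"))  # "hello" until a de/ catalog is compiled
```

After the first lookup per language this is just a cached dict access followed by a catalog lookup, so per-message language switching costs essentially nothing and has no cross-connection side effects.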
<python><internationalization><client-server><gettext>
2024-09-13 21:06:30
0
6,332
PawelRoman
78,983,656
20,302,906
Unittest can't find imported module in imported module
<p>Unittest modules import issue</p> <p>My project folder structure goes like this:</p> <pre><code>project/ __init__.py (empty) tests/ __init__.py (empty) tests.py src/ __init__.py a.py b.py c.py </code></pre> <p><em>tests/tests.py</em></p> <pre class="lang-py prettyprint-override"><code>import unittest from a import A my_a = A() class TestClass(unittest.TestCase): # Some tests pass </code></pre> <p><em>src/__init__.py</em></p> <pre><code>__all__ = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;] </code></pre> <p><em>src/a.py</em></p> <pre class="lang-py prettyprint-override"><code>from b import B </code></pre> <p><em>src/b.py</em></p> <pre class="lang-py prettyprint-override"><code>from c import C </code></pre> <p><em>src/c.py</em></p> <pre class="lang-py prettyprint-override"><code>pass </code></pre> <p>Everything works fine when I run the code from the command line but testing it throws this error: <em>ModuleNotFoundError: No module named 'b'</em></p> <p>I did lots of research that I have condensed into these links:</p> <ul> <li><a href="https://stackoverflow.com/questions/34986900/python-unittest-failing-to-resolve-import-statements">Python unittest failing to resolve import statements</a></li> <li><a href="https://stackoverflow.com/questions/1896918/running-unittest-with-typical-test-directory-structure">Running unittest with typical test directory structure</a></li> <li><a href="https://stackoverflow.com/questions/1944569/how-do-i-write-good-correct-package-init-py-files">How do I write good/correct package __init__.py files</a></li> <li><a href="https://docs.python.org/3/tutorial/modules.html#packages" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/modules.html#packages</a></li> </ul> <p>Right now I'm really confused about how <code>__init__.py</code> would work in this case and why my test can't find the imported modules' path. Can anyone lend me a hand with this please?</p>
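The root cause is that `from a import A` only resolves if `src/` itself is on `sys.path`; when you launch `python -m unittest` from the project root, it is the *root* that lands on `sys.path`, so the imports must be package-qualified: `from src.a import A` in the tests, and `from src.b import B` (or the relative `from .b import B`) inside the package. A second trap: default discovery only matches `test*.py`, so a file named `tests.py` inside the `tests/` package is silently skipped. A self-contained sketch that rebuilds a minimal version of the layout in a temp directory and runs discovery:

```python
import os
import subprocess
import sys
import tempfile

# Same layout, but with package-qualified imports and a test file name
# that the default discovery pattern (test*.py) actually matches.
files = {
    "src/__init__.py": "",
    "src/b.py": "class B:\n    value = 42\n",
    "src/a.py": "from src.b import B\n\nclass A(B):\n    pass\n",
    "tests/__init__.py": "",
    "tests/test_a.py": (
        "import unittest\n"
        "from src.a import A\n\n\n"
        "class TestClass(unittest.TestCase):\n"
        "    def test_value(self):\n"
        "        self.assertEqual(A().value, 42)\n"
    ),
}

with tempfile.TemporaryDirectory() as root:
    for relpath, text in files.items():
        full = os.path.join(root, relpath)
        os.makedirs(os.path.dirname(full), exist_ok=True)
        with open(full, "w") as fh:
            fh.write(text)
    # Run from the project root: '-m' puts the cwd on sys.path, which is
    # what makes 'src.a' (and, inside it, 'src.b') importable.
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"],
        cwd=root, capture_output=True, text=True,
    )

print(result.returncode)  # 0 means the test was discovered and passed
```

The `__init__.py` files only mark directories as packages; they never add anything to `sys.path`, which is why they alone cannot make `from b import B` work.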
<python><python-unittest>
2024-09-13 20:25:54
1
367
wavesinaroom
78,983,396
4,444,757
TypeError: MetaTrader5.initialize() missing 1 required positional argument: 'self' only on Linux, not on Windows
<p>I have seen the following posts (<a href="https://stackoverflow.com/questions/17534345/why-do-i-get-typeerror-missing-1-required-positional-argument-self">-</a> <a href="https://stackoverflow.com/questions/75362227/missing-1-required-positional-argument-self">-</a> <a href="https://stackoverflow.com/questions/17534345/why-do-i-get-typeerror-missing-1-required-positional-argument-self">-</a>) but they are all related to classes that people have written themselves. I call a function from a ready-made library that works fine on Windows but fails on Linux.</p> <p>I use <code>MetaTrader5</code> on an Ubuntu Server 22.04 GUI to run a Python script as a trader bot.</p> <p>My script definitely worked on Windows; however, when I ran it on the Linux server I got this error:</p> <pre><code>TypeError: MetaTrader5.initialize() missing 1 required positional argument: 'self' </code></pre> <p>This is the part of my script that caused the error:</p> <pre><code>from mt5linux import MetaTrader5 as mt5 mt5.initialize(login=111111, password='fjfklsfja', server = 'test-server') </code></pre> <p>The error occurs even if I use a simple function of <code>MetaTrader</code> such as:</p> <pre><code>mt5.version() </code></pre> <p>Why do I get this error even though I have provided all the required parameters of the function? Even the simple function <code>version</code>, which has no input parameters, hits this problem.</p> <p>Edit:</p> <p>Here is an example of <code>MetaTrader</code> initialization from its documentation.</p> <pre><code>import MetaTrader5 as mt5 # display data on the MetaTrader 5 package print(&quot;MetaTrader5 package author: &quot;,mt5.__author__) print(&quot;MetaTrader5 package version: &quot;,mt5.__version__) # establish MetaTrader 5 connection to a specified trading account if not mt5.initialize(login=25115284, server=&quot;MetaQuotes-Demo&quot;,password=&quot;4zatlbqx&quot;): print(&quot;initialize() failed, error code =&quot;,mt5.last_error()) quit() </code></pre> <p>It works on Windows but fails on Linux.</p>
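The error message itself is the clue: the Windows `MetaTrader5` package is a *module* with free functions, while on Linux `mt5linux` exposes `MetaTrader5` as a *class* (a client for the bridge server it runs), so its methods need an instance. The hedged fix is shown in the comments (the `host`/`port` values are assumptions and must match the running `mt5linux` server); the runnable part below reproduces the exact TypeError with a plain class:

```python
# Assumed fix on the Linux side -- instantiate the class first:
#   from mt5linux import MetaTrader5
#   mt5 = MetaTrader5(host="localhost", port=18812)
#   mt5.initialize(login=111111, password="...", server="test-server")
#   print(mt5.version())

# Why the original code fails: calling a method on the class object
# itself leaves 'self' unfilled, which is exactly the reported error.
class Client:
    def initialize(self):
        return True

try:
    Client.initialize()          # mirrors mt5.initialize(...) on the class
except TypeError as exc:
    message = str(exc)
    print(message)

assert Client().initialize() is True  # an instance fixes it
```

That is also why `mt5.version()` fails identically: every method on the class, parameters or not, is unbound until an instance exists.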
<python><linux><metatrader5>
2024-09-13 18:43:45
0
1,290
Sadabadi
78,983,184
5,560,898
Selenium Invalid Selector Exception
<p>I am trying to click a button on a page.</p> <pre class="lang-html prettyprint-override"><code>&lt;div class=&quot;button mt-8 text-center md:mt-12 lg:mt-16&quot;&gt; &lt;button class=&quot;cmp-button spacing-t-none spacing-b-none min-w-[12rem] justify-center bg-beige-200 text-black&quot; type=&quot;button&quot;&gt; &lt;!----&gt; &lt;span class=&quot;cmp-button__text&quot;&gt;View More&lt;/span&gt; &lt;/button&gt; &lt;/div&gt; </code></pre> <p>Here is the code I am using:</p> <pre class="lang-python prettyprint-override"><code>from selenium import webdriver import selenium import http.client from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import TimeoutException from selenium.common.exceptions import NoSuchElementException from selenium.common.exceptions import StaleElementReferenceException from selenium.common.exceptions import WebDriverException from datetime import datetime as dt from bs4 import BeautifulSoup stuff = browser.find_element( By.CLASS_NAME, #'text-center md:mt-12 lg:mt-16' 'cmp-button-spacing-t-none-spacing-b-none min-w-[12rem]-justify-center-bg-beige-200 text-black' #'cmp-button__text' ).click() </code></pre> <p>I have tried various combinations but I can't seem to click that button.</p> <p>Any suggestions?</p> <p><strong>Edit #1:</strong></p> <p>If I use <code>'cmp-button__text'</code>, it seems to work, but it works on earlier values, not this particular one that I want. If only there was a way to reference this particular button. hmm...</p>
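`By.CLASS_NAME` accepts exactly one class name — a space-separated list is an invalid selector, and tokens like `min-w-[12rem]` break even CSS parsing unless escaped. Two robust alternatives are a CSS selector on a single stable class (`button.cmp-button`) or an XPath keyed on the visible text, which pins down *this* button. Since Selenium needs a live browser, the sketch below checks the XPath idea against the posted markup with the standard library's ElementTree, and shows the equivalent Selenium calls in comments:

```python
import xml.etree.ElementTree as ET

html = """
<div class="button mt-8 text-center md:mt-12 lg:mt-16">
  <button class="cmp-button spacing-t-none spacing-b-none" type="button">
    <span class="cmp-button__text">View More</span>
  </button>
</div>
"""

root = ET.fromstring(html)
# Select the button whose <span> child reads exactly "View More".
btn = root.find(".//button[span='View More']")
print(btn.attrib["type"], btn.find("span").text)

# Equivalent Selenium calls (untested here -- they need a browser):
# browser.find_element(By.XPATH, "//button[span[text()='View More']]").click()
# browser.find_element(By.CSS_SELECTOR, "button.cmp-button").click()
```

If several `cmp-button__text` spans exist on the page, the text-based XPath is the one that reliably distinguishes the "View More" button from its siblings.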
<python><selenium-webdriver>
2024-09-13 17:25:41
1
3,572
Chicken Sandwich No Pickles
78,983,171
11,244,991
How to use cloud functions to create a document in Firebase Cloud Storage
<p>I have created a Python cloud function to create a new document in Cloud Storage. It works well with firebase emulators:start, but I get an error when trying to call it from my application:</p> <blockquote> <p>cloud-function.ts:20 Error executing cloud function upload_html_page FirebaseError: unauthenticated</p> </blockquote> <p>I have set my security rules to allow read, write: true in Firebase Cloud Storage.</p> <p>My cloud function is:</p> <pre><code>@https_fn.on_request() def upload_html_page(req: https_fn.Request) -&gt; https_fn.Response: &quot;&quot;&quot;Store an entire recorded HTML page.&quot;&quot;&quot; try: # Expecting the HTML content to be provided in the request body data = req.get_json().get('data', {}) print(f&quot;Received data: {data}&quot;) # Log received data html_content = data.get(&quot;htmlAsString&quot;) documentId = data.get('documentId') actionId = data.get('actionId') eventId = data.get('eventId') storage_client = gcs_storage.Client() # Reference to your bucket bucket = storage_client.bucket('***SECRET FOR STACK OVERFLOW***') # Create a new blob and upload the file's content. blob = bucket.blob(documentId + &quot;/&quot; + eventId + &quot;_&quot; + actionId) # Upload the file to Firebase Storage blob.upload_from_string(html_content) return https_fn.Response(status=200) except Exception as e: return https_fn.Response(f&quot;Error processing HTML content: {str(e)}&quot;, status=500) </code></pre> <p>And I call it in TypeScript with:</p> <pre><code>import { getApp } from './get-app'; import { getFunctions, httpsCallable } from 'firebase/functions'; const functions = getFunctions(getApp()); const payload: { [key: string]: any } = {}; payload[&quot;htmlAsString&quot;] = response.htmlAsString; payload[&quot;documentId&quot;] = documentId; payload[&quot;actionId&quot;] = actionId; payload[&quot;eventId&quot;] = eventId; const cloudFunction = httpsCallable(functions, &quot;upload_html_page&quot;); try { const result = await cloudFunction(payload); return result.data; } catch (ex) { console.error(&quot;Error executing cloud function&quot;, name, ex); return null; } </code></pre> <p>I am connected to a Firebase account in my application when I make the call.</p> <p>Is there anything I must configure in the Firebase console?</p>
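One mismatch worth checking first: the JS `httpsCallable` helper speaks Firebase's "callable" protocol — a POST whose body is `{"data": ...}`, answered by `{"result": ...}`, with auth and App Check handled by the SDK — whereas the function above is declared with `@https_fn.on_request()`. The usual pairing for `httpsCallable` is `@https_fn.on_call()` reading `req.data`, and deployed 2nd-gen functions may additionally need public invoker permission before the SDK stops reporting `unauthenticated` (both points are assumptions about this setup, not verifiable from here). The sketch below only illustrates the envelope `httpsCallable` exchanges, with a plain stand-in handler:

```python
import json

def handle_callable(body: str) -> str:
    """Stand-in showing the callable protocol's request/response shape."""
    payload = json.loads(body).get("data", {})   # client wraps args in "data"
    # ... real code would store payload["htmlAsString"] etc. in GCS ...
    return json.dumps({"result": {"stored": sorted(payload)}})

request_body = json.dumps(
    {"data": {"htmlAsString": "<html/>", "documentId": "d1"}}
)
resp = json.loads(handle_callable(request_body))
print(resp["result"]["stored"])  # ['documentId', 'htmlAsString']
```

With `@https_fn.on_call()` all of this wrapping and unwrapping is done for you, which is why mixing `on_request` with `httpsCallable` tends to surface as SDK-level errors rather than your own 500s.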
<javascript><python><typescript><firebase><google-cloud-functions>
2024-09-13 17:23:38
1
801
Kantine
78,983,089
12,904,419
Install `venv` without sudo access
<p>I am in a Ubuntu 22.04 cluster with <code>Python==3.11</code> and <code>pip==24.2</code> and no sudo access. When I try to create a virtual environment <code>python3 -m venv .env</code> I get:</p> <pre class="lang-none prettyprint-override"><code>The virtual environment was not created successfully because ensurepip is not available. On Debian/Ubuntu systems, you need to install the python3-venv package using the following command. apt-get install python3-venv </code></pre> <p>any idea how to do it? I only need 'venv', for 'virtualenv' or 'pyenv' I have found how to do it.</p>
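If `ensurepip` was stripped out by the distro and `python3-venv` cannot be installed without sudo, one workaround that stays within `venv` itself is to create the environment *without* pip and bootstrap pip into it afterwards. The sketch below shows the first step programmatically; the bootstrap step (commented) uses PyPA's standard get-pip script and needs network access:

```python
import os
import subprocess
import sys
import tempfile

# venv can skip the ensurepip step entirely, so a missing python3-venv
# package no longer aborts environment creation:
workdir = tempfile.mkdtemp()
env_dir = os.path.join(workdir, "env")
subprocess.run(
    [sys.executable, "-m", "venv", "--without-pip", env_dir], check=True
)

bindir = "Scripts" if os.name == "nt" else "bin"
env_python = os.path.join(env_dir, bindir, "python")
print(os.path.exists(env_python))  # True

# Then, to get pip inside the activated environment (network required):
#   source .env/bin/activate
#   curl -sS https://bootstrap.pypa.io/get-pip.py | python
```

Equivalently from the shell: `python3 -m venv --without-pip .env` followed by the get-pip bootstrap.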
<python><python-venv>
2024-09-13 16:55:01
1
962
ABarrier
78,983,053
6,843,153
Create a Pydantic 2.0 computed_field-like field in Pydantic 1.10
<p>I am using <strong>Pydantic 1.10</strong> and I want to define computed fields in my model, so I researched and found <a href="https://docs.pydantic.dev/2.0/usage/computed_fields/" rel="nofollow noreferrer">this doc</a> about <strong>Pydantic 2.0</strong>'s <code>computed_field</code> decorator. The problem is that I can't find anything similar for version 1.10 and I'm not allowed to update the version.</p> <p>How can I achieve the same thing in version 1.10 that I can do in version 2.0?</p>
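The 1.10 API has no `computed_field`, but two idioms cover most uses: a plain `@property` (computed on access, absent from `.dict()`), or a declared field filled in by a `root_validator` (computed at validation time and included in serialization, which is closest to `computed_field`'s behavior). A sketch of the latter, written against the 1.x API — the fallback import makes it also run under Pydantic 2's `pydantic.v1` compatibility shim:

```python
try:
    # Under Pydantic 2 (and late 1.10.x), the v1 API lives in pydantic.v1.
    from pydantic.v1 import BaseModel, root_validator
except ImportError:
    # Plain Pydantic 1.10: import directly.
    from pydantic import BaseModel, root_validator

class Rect(BaseModel):
    width: int
    height: int
    area: int = 0  # placeholder default; always overwritten below

    @root_validator
    def _compute_area(cls, values):
        # Runs after field validation, so width/height are already ints.
        values["area"] = values["width"] * values["height"]
        return values

r = Rect(width=3, height=4)
print(r.area, r.dict())  # 12 {'width': 3, 'height': 4, 'area': 12}
```

If the value should *not* appear in `.dict()`/`.json()`, the `@property` route is simpler: properties are not fields, so v1 models expose them without any validator machinery.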
<python><pydantic>
2024-09-13 16:44:47
1
5,505
HuLu ViCa
78,982,936
4,436,517
Why are marginals from scipy.linprog(method='highs') -0.?
<p>Please find code, output, and package versions below.</p> <p>I was under the impression that the <code>r.eqlin.marginals</code> array returned from <code>scipy.optimize.linprog</code> represents how much the cost function changes with respect to a small change in the equality constraints. In my example I have two equality constraints, and I would expect the marginals for both constraints to be <code>1</code>. However, the marginal of the first constraint becomes <code>-0.</code> and the marginal for the second constraint becomes <code>1</code>. It's not obvious to me why this happens and would appreciate any help on the matter. When manually increasing the value of <code>b_eq[0]</code>, the value of the cost function increases, so if my interpretation of what the marginals represents, it seems strange that <code>r.eqlin.marginals[0]</code> becomes <code>-0.</code>.</p> <p><strong>Code</strong></p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.optimize import linprog A_eq = np.array([[1, 1, 0], [0, 1, 1]]) b_eq = np.array([10, 10]) c = np.array([1, 1, 1]) r = linprog(c, A_eq=A_eq, b_eq=b_eq) print(r) </code></pre> <p><strong>Output</strong></p> <pre><code> con: array([0., 0.]) crossover_nit: 0 eqlin: marginals: array([-0., 1.]) residual: array([0., 0.]) fun: 10.0 ineqlin: marginals: array([], dtype=float64) residual: array([], dtype=float64) lower: marginals: array([1., 0., 0.]) residual: array([ 0., 10., -0.]) message: 'Optimization terminated successfully. (HiGHS Status 7: Optimal)' nit: 0 slack: array([], dtype=float64) status: 0 success: True upper: marginals: array([0., 0., 0.]) residual: array([inf, inf, inf]) x: array([ 0., 10., -0.]) </code></pre> <p><strong>Package versions</strong></p> <pre><code>numpy: 1.24.4 scipy: 1.9.0 </code></pre>
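What is going on here is degeneracy: this LP has multiple optimal dual solutions — any `(y1, y2)` with `y1 + y2 = 1` and `y1, y2 >= 0` is dual-optimal — and HiGHS simply reports one vertex of that set, `(0, 1)`. Consequently the objective's sensitivity to `b_eq[0]` is one-sided: raising it increases the optimum at rate 1, but lowering it changes nothing, and the reported `-0.` is the (perfectly valid) left-hand rate. The perturbation check below makes this concrete:

```python
import numpy as np
from scipy.optimize import linprog

A_eq = np.array([[1, 1, 0], [0, 1, 1]])
c = np.array([1, 1, 1])

def optimum(b1, b2):
    return linprog(c, A_eq=A_eq, b_eq=[b1, b2]).fun

base = optimum(10, 10)
eps = 1e-6

# One-sided sensitivities of the optimum w.r.t. the first RHS entry:
up = (optimum(10 + eps, 10) - base) / eps    # raising b_eq[0] costs ~1
down = (base - optimum(10 - eps, 10)) / eps  # lowering it is free: ~0

print(round(up, 3), round(down, 3))
```

So "how much the cost changes per unit change in the constraint" is only single-valued when the optimum is non-degenerate; here the marginal is a subgradient, and any value in `[0, 1]` for the first constraint (with `1` minus it for the second) would have been an equally legitimate report.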
<python><scipy><highs>
2024-09-13 16:10:48
0
1,159
rindis
78,982,732
7,253,901
Extracting data from two nested columns in one dataframe
<p>I have a pandas dataframe that contains transactions. A transaction is either booked as a payment, or a ledger_account_booking. A single transaction can have <em>multiple</em> payments and/or <em>multiple</em> ledger account bookings. Therefore, my columns <code>payments</code> and <code>ledger_account_bookings</code> contain a list of dicts, where the number of lists in a dict can vary. A small example dataframe looks as follows:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>transaction_id</th> <th>total_amount</th> <th>date</th> <th>payments</th> <th>ledger_account_bookings</th> </tr> </thead> <tbody> <tr> <td>4308</td> <td>645,83</td> <td>30-8-2024</td> <td>[]</td> <td>[]</td> </tr> <tr> <td>4254</td> <td>291,67</td> <td>2-7-2024</td> <td>[]</td> <td>[{'ledger_id': '4265', 'amount': '291,67'}]</td> </tr> <tr> <td>4128</td> <td>847</td> <td>14-2-2024</td> <td>[{'payment_id': '4128', 'amount': '847.0'}]</td> <td>[]</td> </tr> <tr> <td>4248</td> <td>4286,98</td> <td>25-6-2024</td> <td>[{'payment_id': '4261', 'amount': '400.0'},<br>Β {'payment_id': '4262', 'amount': '11.0'},<br>Β {'payment_id': '4263', 'amount': '1668.51'},<br>Β {'payment_id': '4264', 'amount': '1868.54'},<br>Β {'payment_id': '4265', 'amount': '20.91'},<br>Β {'payment_id': '4266', 'amount': '2.21'},<br>Β {'payment_id': '4267', 'amount': '309.62'}]</td> <td>[{'ledger_id' : '4265', 'amount': '6,19'}]</td> </tr> <tr> <td>4192</td> <td>6130,22</td> <td>24-4-2024</td> <td>[{'payment_id': '4193', 'amount': '9.68'}]</td> <td>[{'ledger_id': '4222', 'amount':'2106.0'},<br>Β {'ledger_id': '4222','amount': '4014.54'}]</td> </tr> <tr> <td>4090</td> <td>1158,98</td> <td>25-1-2024</td> <td>[{'id': '4110','amount': '16.22'},<br>Β {'id': '4111', 'amount': '84.0'},<br>Β {'id': '4112', 'amount': '41.99'},<br>Β {'id': '4113, 'amount': '9.11',}<br>Β {'id': '4114', 'amount': '10.0'},<br>Β {'id': '4115', 'amount': '997.16'}]</td> <td>[{'ledger_id': '4231', 'amount': '-0.32'},<br>Β 
{'ledger_id': '4231', 'amount': '-0.18'}]</td> </tr> </tbody> </table></div> <p>What I want is that every dict in one of the columns <code>payments</code> or <code>ledger_account_bookings</code> becomes a row in my dataframe. Expected result would look something like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>transaction_id</th> <th>total_amount</th> <th>date</th> <th>payment_id</th> <th>payment_amount</th> <th>ledger_id</th> <th>ledger_amount</th> </tr> </thead> <tbody> <tr> <td>4308</td> <td>645,83</td> <td>30-8-2024</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4254</td> <td>291,67</td> <td>2-7-2024</td> <td>Nan</td> <td>NaN</td> <td>4265</td> <td>291,67</td> </tr> <tr> <td>4128</td> <td>847</td> <td>14-2-2024</td> <td>4128</td> <td>847.0</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4248</td> <td>4286,98</td> <td>25-6-2024</td> <td>4261</td> <td>400.0</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4248</td> <td>4286,98</td> <td>25-6-2024</td> <td>4262</td> <td>11.0</td> <td>NaN</td> <td>Nan</td> </tr> <tr> <td>4248</td> <td>4286,98</td> <td>25-6-2024</td> <td>4263</td> <td>1668.51</td> <td>NaN</td> <td>Nan</td> </tr> <tr> <td>4248</td> <td>4286,98</td> <td>25-6-2024</td> <td>4264</td> <td>1868.4</td> <td>NaN</td> <td>Nan</td> </tr> <tr> <td>4248</td> <td>4286,98</td> <td>25-6-2024</td> <td>4265</td> <td>20.91</td> <td>NaN</td> <td>Nan</td> </tr> <tr> <td>4248</td> <td>4286,98</td> <td>25-6-2024</td> <td>4266</td> <td>2.21</td> <td>NaN</td> <td>Nan</td> </tr> <tr> <td>4248</td> <td>4286,98</td> <td>25-6-2024</td> <td>4267</td> <td>309.62</td> <td>NaN</td> <td>Nan</td> </tr> <tr> <td>4248</td> <td>4286,98</td> <td>25-6-2024</td> <td>NaN</td> <td>NaN</td> <td>4265</td> <td>6,19</td> </tr> <tr> <td>4192</td> <td>6130,22</td> <td>24-4-2024</td> <td>4193</td> <td>9.68</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4192</td> <td>6130,22</td> <td>24-4-2024</td> <td>NaN</td> <td>NaN</td> <td>4222</td> 
<td>2106</td> </tr> <tr> <td>4192</td> <td>6130,22</td> <td>24-4-2024</td> <td>NaN</td> <td>NaN</td> <td>4222</td> <td>4014.54</td> </tr> <tr> <td>4090</td> <td>1158,98</td> <td>25-1-2024</td> <td>4110</td> <td>16.22</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4090</td> <td>1158,98</td> <td>25-1-2024</td> <td>4111</td> <td>84.0</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4090</td> <td>1158,98</td> <td>25-1-2024</td> <td>4112</td> <td>41.99</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4090</td> <td>1158,98</td> <td>25-1-2024</td> <td>4113</td> <td>9.11</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4090</td> <td>1158,98</td> <td>25-1-2024</td> <td>4114</td> <td>10.0</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4090</td> <td>1158,98</td> <td>25-1-2024</td> <td>4115</td> <td>997.16</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4090</td> <td>1158,98</td> <td>25-1-2024</td> <td>NaN</td> <td>NaN</td> <td>4231</td> <td>0.32</td> </tr> <tr> <td>4090</td> <td>1158,98</td> <td>25-1-2024</td> <td>NaN</td> <td>NaN</td> <td>4231</td> <td>0.18</td> </tr> </tbody> </table></div> <p>For example, transaction 4248 has 7 payments and 1 ledger account booking. So the resulting dataframe would have 8 rows. transaction 4192 has 2 payments and 1 ledger account bookings, so resulting df should have 3 rows.</p> <p>I know how to achieve this for one column, for example by using the following code:</p> <pre><code>df_explode = df_financial_mutations.explode(['payments']) #Normalize the json column into separate columns df_normalized = json_normalize(df_explode['payments']) #Add prefix to the columns that were 'exploded' df_normalized = df_normalized.add_prefix('payments_') </code></pre> <p>The problem is, I don't know how to do it for two columns. If I would call explode on <code>ledger_account_bookings</code> again, the result becomes murky since I already have exploded the payments column, and therefore 'duplicate' rows were introduced into my dataframe. 
So, where a payment was exploded, I now have two rows with exactly the same values in the <code>ledger_account_bookings</code> column. When I explode again, this time on the other column, those 'duplicates' are also exploded, so that my dataframe now contains rows of data that don't make sense.</p> <p>How do I solve such a problem where I need to explode two columns at once? I've seen <a href="https://stackoverflow.com/questions/45846765/efficient-way-to-unnest-explode-multiple-list-columns-in-a-pandas-dataframe">Efficient way to unnest (explode) multiple list columns in a pandas DataFrame</a> but unfortunately the lists of <code>payments</code> and <code>ledger_account_bookings</code> can be of different size, and can be dynamic as well (e.g. it's possible to have 0-5 payments and 0-5 ledger_account_bookings, there is no fixed value)</p> <p>Any help would be greatly appreciated.</p>
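One possible approach, sketched below under the assumption that the two list-of-dicts columns can be exploded independently and the results stacked: explode and normalize each column against the shared key columns separately, then `concat` the two flat frames. (A transaction whose lists are both empty, like 4308 above, would additionally need an outer merge against the base columns to keep its all-NaN row.)

```python
import pandas as pd

# A small stand-in for the dataframe in the question.
df = pd.DataFrame({
    "transaction_id": [4254, 4248],
    "total_amount": ["291,67", "4286,98"],
    "payments": [[], [{"payment_id": "4261", "amount": "400.0"}]],
    "ledger_account_bookings": [
        [{"ledger_id": "4265", "amount": "291,67"}],
        [{"ledger_id": "4265", "amount": "6,19"}],
    ],
})

base_cols = ["transaction_id", "total_amount"]

def explode_one(frame, col, prefix):
    # Explode one list-of-dicts column; empty lists explode to NaN rows,
    # which are dropped before normalizing the dicts into flat columns.
    exploded = (
        frame[base_cols + [col]]
        .explode(col)
        .dropna(subset=[col])
        .reset_index(drop=True)
    )
    normalized = pd.json_normalize(exploded[col].tolist()).add_prefix(prefix)
    return pd.concat([exploded[base_cols], normalized], axis=1)

payments = explode_one(df, "payments", "payment_")
ledgers = explode_one(df, "ledger_account_bookings", "ledger_")

# Stack the two independently exploded frames; each side's columns are NaN
# on the other side's rows, matching the desired output shape.
result = pd.concat([payments, ledgers], ignore_index=True)
```

Because each column is exploded against the original (un-exploded) frame, no duplicate rows are ever cross-multiplied, which avoids the nonsense rows described above.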
<python><pandas><dataframe>
2024-09-13 15:10:42
2
2,825
Psychotechnopath
78,982,686
1,870,832
Filter polars dataframe on records where column values differ, catching nulls
<p><strong>Have:</strong></p> <pre><code>import polars as pl df = pl.DataFrame({'col1': [1,2,3], 'col2': [1, None, None]}) </code></pre> <p>in polars dataframes, those <code>None</code>s become <code>null</code>s:</p> <pre><code>&gt; df β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════║ β”‚ 1 ┆ 1 β”‚ β”‚ 2 ┆ null β”‚ β”‚ 3 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p><strong>Want:</strong> some command that returns the last two rows of <code>df</code>, since <code>2</code> &amp; <code>3</code> are not <code>null</code></p> <p><strong>Tried:</strong></p> <p>..., but everything I've thought to try seems to drop/ignore records where one column is <code>null</code>:</p> <ul> <li><code>df.filter(pl.col('col1')!=pl.col('col2')) # returns no rows</code></li> <li><code>df.filter(~pl.col('col1')==pl.col('col2')) # returns no rows</code></li> <li><code>df.filter(~pl.col('col1').eq(pl.col('col2'))) # returns no rows</code></li> <li>...</li> </ul>
<python><dataframe><python-polars>
2024-09-13 14:57:35
2
9,136
Max Power
78,982,506
2,925,620
pyqtgraph: LegendItem offset returns false values regardless of position
<p>I have a Qt5 window with a pyqtgraph in a dynamic situation, i.e., plots, axes, curves can be added or modified. Also, the legend can be shown and hidden. Since the user can also drag the legend to a different place using the mouse, I would like to store its position and perhaps restore this position later. Setting a position using LegendItems' <code>setOffset</code> works as expected, but regardless of the actual position, LegendItems' <code>offset</code> attribute always returns the same values. Here's a MWE:</p> <pre><code>import numpy as np import pyqtgraph as pg win = pg.plot() win.setWindowTitle('MWE legend offset') c1 = win.plot([np.random.randint(0,8) for i in range(10)], pen='r', name='curve1') legend = pg.LegendItem((80,60), offset=(70,20)) legend.setParentItem(win.graphicsItem()) legend.addItem(c1, 'curve1') print(f&quot;Before setting an offset: {legend.offset}&quot;) # Gives (70,20) legend.setOffset([300,300]) print(f&quot;After setting an offset: {legend.offset}&quot;) # Gives (70,20) as well if __name__ == '__main__': pg.exec() </code></pre> <p>Any idea how to get the real LegendItem position?</p> <p><strong>EDIT</strong>: The suggested <code>legend.opts['offset']</code> seems to work if the offset is set via <code>legend.setOffset([300,300])</code>, as suggested by <a href="https://stackoverflow.com/a/79012722/2925620">Rik</a>.</p> <p>However, my use case is a bit different: The legend can be moved by a mouse drag, but this does not seem to affect the offset. According to the source code of <a href="https://pyqtgraph.readthedocs.io/en/latest/_modules/pyqtgraph/graphicsItems/LegendItem.html#LegendItem" rel="nofollow noreferrer">LegendItem</a>, the <code>mouseDragEvent</code> modifies the <code>autoAnchor</code> which is a <a href="https://pyqtgraph.readthedocs.io/en/latest/_modules/pyqtgraph/graphicsItems/GraphicsWidgetAnchor.html" rel="nofollow noreferrer">GraphicsWidgetAnchor</a> object. So far, I am stuck here of how to obtain its values. 
(An option I see is subclassing LegendItem and overriding the <code>mouseDragEvent</code>, but I was wondering if that is necessary...)</p>
<python><pyqtgraph>
2024-09-13 14:10:01
1
357
emma
78,982,423
17,729,094
How to propagate `null` in a column after first occurrence?
<p>I have 2 data sets:</p> <p>The first one describes what I expect:</p> <pre class="lang-py prettyprint-override"><code>expected = { &quot;name&quot;: [&quot;start&quot;, &quot;stop&quot;, &quot;start&quot;, &quot;stop&quot;, &quot;start&quot;, &quot;stop&quot;, &quot;start&quot;, &quot;stop&quot;], &quot;description&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;, &quot;e&quot;, &quot;f&quot;, &quot;g&quot;, &quot;h&quot;], } </code></pre> <p>and the second one describes what I observe:</p> <pre><code>observed = { &quot;name&quot;: [&quot;start&quot;, &quot;stop&quot;, &quot;start&quot;, &quot;stop&quot;, &quot;stop&quot;, &quot;stop&quot;, &quot;start&quot;], &quot;time&quot;: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7], } </code></pre> <p>I want to match all my observations to descriptions based on the order I expect. But once I see an inconsistency, nothing should match anymore.</p> <p>I managed to find the first inconsistency like:</p> <pre class="lang-py prettyprint-override"><code>observed_df = pl.DataFrame(observed).with_row_index() expected_df = pl.DataFrame(expected).with_row_index() result = observed_df.join(expected_df, on=[&quot;index&quot;, &quot;name&quot;], how=&quot;left&quot;).select( &quot;description&quot;, &quot;time&quot; ) &quot;&quot;&quot; β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ description ┆ time β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════║ β”‚ a ┆ 0.1 β”‚ β”‚ b ┆ 0.2 β”‚ β”‚ c ┆ 0.3 β”‚ β”‚ d ┆ 0.4 β”‚ β”‚ null ┆ 0.5 β”‚ -&gt; First inconsistency gets a &quot;null&quot; description β”‚ f ┆ 0.6 β”‚ β”‚ g ┆ 0.7 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ &quot;&quot;&quot; </code></pre> <p>How can I propagate this <code>null</code> passed the first inconsistency?</p> <p>Also, my real data has an additional <code>id</code> column, where each <code>id</code> is a case like described above, and independent from other 
<code>id</code>s. Is it possible to somehow &quot;group by id&quot; and apply this logic all at once instead of working with each <code>id</code> separately:</p> <pre class="lang-py prettyprint-override"><code>observed = { &quot;id&quot;: [1, 2, 1, 2, 2], &quot;name&quot;: [&quot;start&quot;, &quot;start&quot;, &quot;stop&quot;, &quot;stop&quot;, &quot;stop&quot;], &quot;time&quot;: [0.1, 0.2, 0.3, 0.4, 0.5], } expected = { &quot;id&quot;: [1, 1, 2, 2], &quot;name&quot;: [&quot;start&quot;, &quot;stop&quot;, &quot;start&quot;, &quot;stop&quot;], &quot;description&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;], } result = { &quot;id&quot;: [1, 2, 1, 2, 2], &quot;description&quot;: [&quot;a&quot;, &quot;c&quot;, &quot;b&quot;, &quot;d&quot;, None], &quot;time&quot;: [0.1, 0.2, 0.3, 0.4, 0.5], } </code></pre>
<python><dataframe><python-polars>
2024-09-13 13:50:20
1
954
DJDuque
78,982,236
12,485,858
Making a Python script running in one shell terminal read stdin from another shell instance
<p>I have a python script which reads and input when provided and does something with the input (see process_input below):</p> <pre><code>import sys def process_input(input_data): # logic to process the input data print(f&quot;Processed input: {input_data}&quot;) return 1 # Return 1 to increment the counter if __name__ == &quot;__main__&quot;: input_count = 0 while True: try: input_data = sys.stdin.readline().strip() if input_data == &quot;exit&quot;: break input_count += process_input(input_data) print(f&quot;Input count: {input_count}&quot;) except (KeyboardInterrupt, EOFError): break </code></pre> <p>Now I want to be able to pass the input to this python script (which I will execute in the terminal) from another shell script (bash).</p> <p>How can I achieve this?</p> <p>I tried the following so far:</p> <pre><code>1.I started the python program 2. I found the PID using ps -xv command 3. I tried redirecting a simple echo input from another terminal using: </code></pre> <p><code>echo &quot;some text&quot; &gt; /proc/41267/fd/0 </code> where 41267 was the PID</p> <p>The result of these actions was that the terminal where the python program is running prints the echo text, but it does not execute the process_input function. I read this related post <a href="https://unix.stackexchange.com/questions/385771/writing-to-stdin-of-a-process">https://unix.stackexchange.com/questions/385771/writing-to-stdin-of-a-process</a> and as far as I understood the problem is that I am redirecting the stdin to a pseudo-terminal.</p> <p>In one of the comments it was mentioned to use mkfifo , but I can't understand how to actually use this with my script. What would be the correct way to implement this?</p>
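Writing to <code>/proc/PID/fd/0</code> only echoes to the terminal because that descriptor is a pseudo-terminal, not a readable stream for the script. A named pipe created with <code>mkfifo</code> gives both sides a shared path instead. A minimal sketch (the FIFO path is an arbitrary choice):

```python
import os

def read_fifo(fifo_path):
    """Yield stripped lines from a named pipe until an 'exit' line arrives."""
    while True:
        # Opening for read blocks until a writer connects; each
        # `echo "text" > fifo_path` from another shell delivers its lines here.
        with open(fifo_path) as fifo:
            for line in fifo:
                line = line.strip()
                if line == "exit":
                    return
                yield line

def main(fifo_path="/tmp/my_app_fifo"):  # hypothetical path, pick your own
    if not os.path.exists(fifo_path):
        os.mkfifo(fifo_path)
    count = 0
    for data in read_fifo(fifo_path):
        count += 1
        print(f"Processed input: {data} (count={count})")
    print(f"Input count: {count}")
```

With `main()` running in one terminal, another shell can send input with `echo "some text" > /tmp/my_app_fifo` and stop the loop with `echo "exit" > /tmp/my_app_fifo`.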
<python><shell><ipc>
2024-09-13 12:56:08
1
846
teoML
78,982,004
6,439,229
How to get stretch factor for QSpacerItems
<p>When you <code>addStretch()</code> or <code>insertStretch()</code> to a <code>QBoxLayout</code>, a <code>QSpacerItem</code> is added/inserted to the layout and you can set the stretch factor while doing so.</p> <p>Now I would like to retrieve that stretch factor from the SpacerItem but I can't find how to do so.<br /> <code>item.sizePolicy().verticalStretch()</code> always returns <code>0</code>, as can be demonstrated with this example:</p> <pre><code>from PyQt6.QtWidgets import QApplication, QWidget, QVBoxLayout, QPushButton, QSpacerItem class Window(QWidget): def __init__(self): super().__init__() self.setGeometry(700, 500, 100, 200) self.layout = QVBoxLayout(self) for i in range(3): but = QPushButton(f'Push {i}') self.layout.addWidget(but) if i == 0: but.clicked.connect(self.click) self.layout.insertStretch(1, 2) self.layout.insertStretch(3, 3) def click(self): for i in (1, 3): stretch = self.layout.itemAt(i) assert isinstance(stretch, QSpacerItem) pol = stretch.sizePolicy() print(pol.verticalStretch()) app = QApplication([]) window = Window() window.show() app.exec() </code></pre> <p>So where is the stretch factor stored? and is this info accessible somehow or would I have to keep a separate reference to keep the stretch info available?</p>
<python><spacing><pyqt6><qboxlayout>
2024-09-13 11:45:29
1
1,016
mahkitah
78,981,908
8,801,862
Application type API permission with Microsoft Graph API
<blockquote> <p>I want to create an app that will list all my emails in Outlook via Microsoft Graph API.</p> </blockquote> <p>What I did:</p> <p>1)</p> <ul> <li>Go to &quot;Microsoft Entra ID&quot; (former Active Directory)</li> <li>Head to &quot;App registrations&quot; -&gt; &quot;New registration&quot;</li> <li>Select &quot;Accounts in any organizational directory (Any Microsoft Entra ID tenant - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)&quot; as Supported Account Type.</li> <li>Create new Client Secret in &quot;Certificates &amp; secrets&quot;</li> <li>In &quot;Authentication&quot; set &quot;Redirect URIs&quot; to &quot;http://localhost:8000&quot;</li> <li>Set up permissions &quot;Mail.Read&quot; and &quot;Mail.ReadBasic&quot; along with the default &quot;User.Read&quot; in &quot;API Permissions&quot;. The type of the permissions is <strong>Application</strong>, not <strong>Delegated</strong> as I want my app to run in the background without any sign-ups.</li> </ul> <ol start="2"> <li>My Code:</li> </ol> <pre><code>import msal import requests client_id = &quot;my_client_id&quot; tenant_id = &quot;my_tenant_id&quot; client_secret = &quot;my_client_secret&quot; redirect_url = f&quot;http://localhost:8000&quot; authority = f&quot;https://login.microsoftonline.com/{tenant_id}/&quot; scopes = [&quot;https://graph.microsoft.com/.default&quot;] # This scope means all permissions granted to the app app = msal.ConfidentialClientApplication(client_id, client_credential=client_secret, authority=authority) result = app.acquire_token_for_client(scopes=scopes) #print(result) if &quot;access_token&quot; in result: access_token = result[&quot;access_token&quot;] print(&quot;Access Token:&quot;, access_token) # Example of making a request to Microsoft Graph headers = { &quot;Authorization&quot;: f&quot;Bearer {access_token}&quot;, &quot;Content-Type&quot;: &quot;application/json&quot; } endpoint = 
&quot;https://graph.microsoft.com/v1.0/users/user@outlook.com/messages&quot; # Adjust the endpoint as needed response = requests.get(endpoint, headers=headers) print(f&quot;Error: {response.status_code}, {response.json()}&quot;) </code></pre> <p>I always get: &quot;Error: the client application 'my_client_id' is missing service principal in the tenant 'SOME TENANT ID (it is interesting that this TENANT ID is <strong>NOT</strong> my tenant_id that I specify in the code)'</p>
<python><json><web-scraping><outlook><microsoft-graph-api>
2024-09-13 11:15:48
1
401
user13
78,981,868
13,443,954
How to escape a backslash when loading a JSON variable from dotenv in Python
<p>I have a dotenv variable saved into .env file (in json format). Example:</p> <pre><code>my_env_variable = '{&quot;ip&quot;: &quot;xx.xx.xxx.xxx&quot;, &quot;user&quot;: &quot;my\user&quot;, &quot;password&quot;: &quot;password&quot;}' </code></pre> <p>Reading this variable form my .py script (connection is a enum type DBConnection)</p> <pre><code>from os import environ as env from dotenv import load_dotenv load_dotenv() class DatabaseUtils: @staticmethod def get_data_from_db(connection:DBConnection , script_path, query): selected_connection = json.loads(env[connection.value].replace('\n', '')) user = selected_connection[&quot;user&quot;] </code></pre> <p>Unfortunately, I got an error message, due to backslash in user (coming from json) and gives error:</p> <pre><code>self = &lt;json.decoder.JSONDecoder object at 0x000001295223F0D0&gt; s = '{&quot;instance_ip&quot;: &quot;xx.xx.xxx.xxx&quot;, &quot;user&quot;: &quot;my\user&quot;, &quot;password&quot;: &quot;password&quot;' idx = 0 json.decoder.JSONDecodeError: Invalid \escape: line 1 column 70 (char 69) </code></pre> <p>What is the best solution to escape the backslash? Unfortunately, I cannot omit like r'value' because the parameter is in the .env file and cannot add &quot;r&quot; before the json</p> <p>Edit: I also try with json.dumps before json.loads, but in this case I cannot</p> <blockquote> <p>selected_connection[&quot;user&quot;]</p> </blockquote> <p>anymore, because selection_connection became a str</p>
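The value never survives <code>json.loads</code> intact unless the backslash is escaped in the .env file itself: JSON requires <code>\\</code> for one literal backslash. A small demonstration, where the string stands in for what the .env line should contain (dotenv's quoting rules permitting):

```python
import json

# The .env line should escape the backslash for JSON, e.g.:
# my_env_variable='{"ip": "xx.xx.xxx.xxx", "user": "my\\user", "password": "password"}'
# json.loads then sees the two-character sequence \\ and decodes it to one backslash.
raw = '{"ip": "xx.xx.xxx.xxx", "user": "my\\\\user", "password": "password"}'
# (In this Python source, "\\\\" is the two literal characters \\ , valid JSON.)

conn = json.loads(raw)
print(conn["user"])  # my\user
```

This keeps the loaded object a dict, so `selected_connection["user"]` still works, unlike the `json.dumps` detour mentioned in the edit.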
<python><json><dotenv>
2024-09-13 11:03:48
2
333
M AndrΓ‘s
78,981,779
21,049,944
Multiple dataframe logs into one node using python bigtree
<p>I would like to use bigtree to transform my polars/pandas dataframe to a tree in the following way. My data looks something like this:</p> <pre><code>id, parrent, val1, val2 1, 0, ..., ... 1, 0, ..., ... 1, 0, ..., ... 2, 1, ..., ... 2, 1, ..., ... 3, 1, ..., ... 3, 1, ..., ... 3, 1, ..., ... 4, 2, ..., ... 4, 2, ..., ... 4, 2, ..., ... 4, 2, ..., ... </code></pre> <p>I would like to transform them into a tree that looks like this:</p> <pre><code>1[[val1,val2], [val1,val2], [val1,val2]] |--2[[val1,val2], [val1,val2]] | |--4[[val1,val2], [val1,val2], [val1,val2], [val1,val2]] | |--3[[val1,val2], [val1,val2], [val1,val2]] </code></pre> <p>Is there a simple way to do this?</p>
<python><pandas><dataframe><tree>
2024-09-13 10:37:22
1
388
Galedon
78,981,714
3,941,955
How can I apply a one-time pad to a sound waveform file, assuming a same-size key is distributed on both sides?
<p>I am looking for a way to apply OTP on a sound waveform if possible. Some questions emerged before I ever started.</p> <p>Can I XOR a text file of the same size, with the waveform to produce a ciphertext that can be then xored back to the original sound, using the same text file? Is it as effective or is it better to OTP using a white noise waveform of the same length? Is there a way to implement any of these ways suggested easily in Python3?</p>
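XOR is symmetric, so any byte sequence of the right length works mechanically as a pad, but the security of a one-time pad requires the key to be uniformly random; ordinary text (and equally, structured "white noise" derived from low-entropy sources) leaks structure. A sketch using random key bytes, with a few raw sample bytes standing in for the waveform:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # One-time-pad step: XOR each data byte with the matching key byte.
    if len(key) < len(data):
        raise ValueError("key must be at least as long as the data")
    return bytes(d ^ k for d, k in zip(data, key))

waveform = b"\x00\x10\x7f\x80\xff"   # stand-in for raw PCM sample bytes
key = os.urandom(len(waveform))      # uniformly random key, as OTP requires

ciphertext = xor_bytes(waveform, key)
recovered = xor_bytes(ciphertext, key)   # XORing twice restores the original
```

For a real WAV file, only the sample-data chunk should be XORed (the header must stay parseable or be handled separately), and the key must never be reused across messages.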
<python><python-3.x><encryption><one-time-pad>
2024-09-13 10:20:00
0
466
George Eco
78,981,683
1,685,729
Error installing trottersuzuki package in venv: numpy not found even though it is installed
<p>It says numpy not installed even though it is installed. I thought may be the venv is not accessible to pip (which it should be, because numpy is installed inside the venv) and I installed it system wide using <code>sudo apt install python3-numpy</code> as you can see in the very last of the following snippet.</p> <pre><code>vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $ workon gpinn (gpinn) vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $ pip install trottersuzuki Collecting trottersuzuki Using cached trottersuzuki-1.6.2.tar.gz (218 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; [20 lines of output] Traceback (most recent call last): File &quot;/home/vanangamudi/.virtualenvs/gpinn/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/home/vanangamudi/.virtualenvs/gpinn/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/vanangamudi/.virtualenvs/gpinn/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-d6cdwe1m/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 332, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=[]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-d6cdwe1m/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 302, in _get_build_requires self.run_setup() File &quot;/tmp/pip-build-env-d6cdwe1m/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 
503, in run_setup super().run_setup(setup_script=setup_script) File &quot;/tmp/pip-build-env-d6cdwe1m/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 318, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 6, in &lt;module&gt; ModuleNotFoundError: No module named 'numpy' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. (gpinn) vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $ python -c 'import numpy' (gpinn) vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $ python -c 'import matplotlib' Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'matplotlib' (gpinn) vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $ </code></pre>
<python><numpy><pip><package>
2024-09-13 10:12:32
2
727
vanangamudi
78,981,590
1,728,502
Azure Function with Blob Trigger Running with InputStream but not with BlobClient Parameter
<p>I'm trying to get a blob-triggered Azure Function to work. I got the samples to run, and am all set up with my local development AzureWebJobsStorage connection string set up to use Azurite. I am testing by adding a file using Azure Storage Explorer.</p> <p>This function triggers and my code is run:</p> <pre><code>@app.blob_trigger(arg_name=&quot;myblob&quot;, path=&quot;inbox&quot;, connection=&quot;AzureWebJobsStorage&quot;) def BlobTriggerTest4(myblob: func.InputStream): logging.info(f&quot;Python blob trigger function processed blob. &quot; f&quot;Name: {myblob.name}, &quot; f&quot;Blob Size: {myblob.length} bytes&quot;) </code></pre> <p>Log output in the VS Code terminal (size shows as zero in this code from the MS sample, which is fine, because stream has not yet been read):</p> <pre><code>[2024-09-13T09:15:37.534Z] Host lock lease acquired by instance ID '000000000000000000000000FF46044C'. [2024-09-13T09:15:45.228Z] Executing 'Functions.BlobTriggerTest4' (Reason='New blob detected(LogsAndContainerScan): inbox/test.txt', Id=2bf0477b-06e1-4af3-af64-285dd0d65d20) [2024-09-13T09:15:45.230Z] Trigger Details: MessageId: e46320bd-d339-49a7-af17-2be2def490d1, DequeueCount: 1, InsertedOn: 2024-09-13T09:15:45.000+00:00, BlobCreated: 2024-09-13T09:15:44.000+00:00, BlobLastModified: 2024-09-13T09:15:44.000+00:00 [2024-09-13T09:15:45.300Z] Python blob trigger function processed blob. Name: inbox/test.txt, Blob Size: None bytes [2024-09-13T09:15:45.316Z] Executed 'Functions.BlobTriggerTest4' (Succeeded, Id=2bf0477b-06e1-4af3-af64-285dd0d65d20, Duration=140ms) </code></pre> <p>When I change it to use BlobClient, however, the function triggers buy my code isn't run! 
I want to use BlobClient so I can easily delete the blob once I have processed it.</p> <pre><code>@app.blob_trigger(arg_name=&quot;myblob&quot;, path=&quot;inbox&quot;, connection=&quot;AzureWebJobsStorage&quot;) def BlobTriggerTest5(myblob: blob.BlobClient): logging.info( f&quot;Python blob trigger function processed blob. Properties: {myblob.get_blob_properties()}. Blob content head: {myblob.download_blob().read(size = 1)}&quot; ) </code></pre> <p>Here is the output. I get the boilerplate framework output but my own logging code doesn't run:</p> <pre><code>[2024-09-13T09:17:58.634Z] Host lock lease acquired by instance ID '000000000000000000000000FF46044C'. [2024-09-13T09:18:02.117Z] Executing 'Functions.BlobTriggerTest5' (Reason='New blob detected(LogsAndContainerScan): inbox/test.txt', Id=af20f729-9414-4531-b22a-e75d9031b225) [2024-09-13T09:18:02.120Z] Trigger Details: MessageId: 548b4b12-5af0-4b5e-b424-2fe7222c4908, DequeueCount: 1, InsertedOn: 2024-09-13T09:18:02.000+00:00, BlobCreated: 2024-09-13T09:18:00.000+00:00, BlobLastModified: 2024-09-13T09:18:00.000+00:00 </code></pre> <p>Note that in this case with BlobClient I am not even getting the usual &quot;Executed 'Functions.BlobTriggerTest5'&quot; terminating message in the logs, almost as if the function has somehow got &quot;stuck&quot; right after triggering.</p> <p>I had more code in these test functions previously, and none of it was running. I cut these examples down to more clearly demonstrate the problem.</p> <p>I tried recreating the function from scratch in case the previous name had somehow been bound to the earlier implementation, etc. 
I can just switch back and forth between these two functions (only having one uncommented at a time since they trigger on the same blob path), and with the first using InputStream my function code runs and with the second using BlobClient it does not (despite the function showing it has been triggered!).</p> <p>Can anyone suggest what I might be doing wrong or what is happening here? Many thanks in advance.</p> <p>[Edit] I have now discovered that the BlobClient parameter version does indeed work when uploaded to Azure Functions, but it fails to run when triggered in the local debug environment. This means I can at least use it to access the blob, delete it, etc., but means I can't very easily debug it locally: I have to upload it and debug it there, using logging, etc., which is quite restrictive.</p>
<python><azure-functions><azure-blob-storage><azure-blob-trigger><azure-triggers>
2024-09-13 09:48:28
1
315
Rich
78,981,469
1,145,666
Can I create a method that returns a value, or yields data, based on a switch?
<p>I have this method:</p> <pre><code>def fetch_rows(connection, return_cursor = False): query = f&quot;&quot;&quot;SELECT ... &quot;&quot;&quot; cur = connection.cursor(cursor_factory=psycopg2.extras.RealDictCursor) cur.execute(query) if return_cursor: return cur else: info(f&quot;Found {cur.rowcount} record(s).&quot;) for row in cur: yield row </code></pre> <p>But, even as I set <code>return_cursor</code> to <code>True</code> in the call, I still get this error when trying to use the returned cursor:</p> <blockquote> <p>AttributeError: 'generator' object has no attribute 'rowcount'</p> </blockquote> <p>Is what I am trying even possible?</p>
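It is not possible in a single function body: the presence of `yield` anywhere makes the whole function a generator function, so calling it always returns a generator and the `return cur` branch merely becomes the value attached to `StopIteration`. Moving the yielding code into a helper restores the intended switch. A sketch using sqlite3 in place of psycopg2 so it runs standalone:

```python
import sqlite3

def fetch_rows(connection, return_cursor=False):
    # No `yield` in this body, so it is an ordinary function: it really
    # returns either the cursor itself or a generator from the helper.
    cur = connection.cursor()
    cur.execute("SELECT 1 UNION ALL SELECT 2")
    if return_cursor:
        return cur
    return _iter_rows(cur)

def _iter_rows(cur):
    # The generator lives here, isolated from the switching logic.
    for row in cur:
        yield row

conn = sqlite3.connect(":memory:")
cursor = fetch_rows(conn, return_cursor=True)       # a real cursor object
rows = list(fetch_rows(conn, return_cursor=False))  # lazily yielded rows
```

With psycopg2, the helper is also the natural place for the `rowcount` logging, since the count is only meaningful on the cursor branch anyway.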
<python><generator>
2024-09-13 09:14:09
0
33,757
Bart Friederichs
78,981,438
4,718,423
unittest: mocking input in class __init__ where an exception is raised
<pre><code>#!/usr/bin/env python3 import unittest from unittest.mock import patch class User(object): def __init__(self): self.__name = None self.__authorised_users = [&quot;me&quot;, &quot;you&quot;] local_input = input(&quot;please provide your windows 8 character lower case login: &quot;) if local_input not in self.__authorised_users: raise ValueError(&quot;you have no permission to run this app&quot;) else: self.__name = local_input class TestUser(unittest.TestCase): def testUserClassFound(self): self.assertNotIsInstance(ModuleNotFoundError, User) @patch('builtins.input', lambda *args:&quot;y&quot;) def testUserClassInit(self): # just check if user class has __name set to none local_object = User() self.assertEqual(local_object._User__name, None) if __name__ == &quot;__main__&quot;: unittest.main() </code></pre> <p>I would like to, in the second test, just assure when the class object is created, the tester checks the class has the attribute __name and set to None. I need to <strong>patch the raise ValueError</strong> from the class <strong>init</strong> , but I can't find the correct patch.</p>
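The ValueError fires because the patched input returns &quot;y&quot;, which is not in the authorised list. Two options, sketched below on a standalone copy of the class: patch input with an authorised name so __init__ completes, or keep &quot;y&quot; and assert the exception with assertRaises. (Note that on the success path __name ends up set to the input, not None, so the None check only holds before the assignment.)

```python
import unittest
from unittest.mock import patch

class User:
    def __init__(self):
        self.__name = None
        self.__authorised_users = ["me", "you"]
        local_input = input("please provide your login: ")
        if local_input not in self.__authorised_users:
            raise ValueError("you have no permission to run this app")
        self.__name = local_input

class TestUser(unittest.TestCase):
    @patch("builtins.input", lambda *args: "me")  # an authorised user, no exception
    def test_init_with_authorised_user(self):
        user = User()
        # name mangling: the private __name attribute is reachable as _User__name
        self.assertEqual(user._User__name, "me")

    @patch("builtins.input", lambda *args: "y")   # not authorised
    def test_init_rejects_unknown_user(self):
        with self.assertRaises(ValueError):
            User()
```

`assertRaises` is the standard way to "patch" an expected exception: it catches the ValueError and fails the test only if it is not raised.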
<python><unit-testing><patch>
2024-09-13 09:04:16
1
1,446
hewi
78,981,280
5,989,438
Django: after submitting login info, page does not lead to intended detail view
<p>I don't know what to do here after a few days of trying different ideas. After I submit the the prop login email and password, the page does not do anything. It just says &quot;Invalid password&quot;. I am not sure my login view is connected to the <code>sqllite</code>. I am thinking my mistake is where the login view file, but not sure. I also use chatgpt to see if it can identify where the mistake is, but it didn't come up with anything. If anyone can point me where I need to adjust, would be much appricated. If I go to the address of the <a href="http://127.0.0.1:8000/user/2/" rel="nofollow noreferrer">http://127.0.0.1:8000/user/2/</a>, then, the page shows correctly. Unfortunately, the codes are long.</p> <p>Here are all the codes:</p> <p>views.py:</p> <pre><code>import random from django.shortcuts import render from django.views.generic import (ListView, DetailView) from django.views.decorators.csrf import csrf_exempt from .models import CustomUser, Events from rest_framework.views import APIView from .serializers import CustomUserSerializer, EventsSerializer from rest_framework.response import Response from rest_framework import status from rest_framework.renderers import TemplateHTMLRenderer from django.contrib.auth import authenticate, login from django.contrib import messages from django.shortcuts import redirect, render from django.urls import reverse from .models import CustomUser # Assuming CustomUser is your user model def index(request): return render(request, &quot;users_application/index.html&quot;, {}) def user_detail(request, id): user = CustomUser.objects.get(id = id) events = user.events_set.all() return render(request, &quot;users_application/user_detail.html&quot;, {'user': user}) # Define a view function for the login page def user_login(request): if request.method == &quot;POST&quot;: email = request.POST.get('email') password = request.POST.get('password') if not CustomUser.objects.filter(email=email).exists(): messages.error(request, 
'Invalid Email')
            return redirect('login')

        user = authenticate(request, username=email, password=password)
        if user is None:
            messages.error(request, &quot;Invalid Password&quot;)
            return redirect('login')
        else:
            login(request, user)
            # Redirect to the user's detail page after login
            return redirect('user-detail', id=user.id)

    return render(request, 'users_application/login.html')


class CustomUserAPIView(APIView):
    def get(self, request):
        users = CustomUser.objects.all()
        serializer = CustomUserSerializer(users, many=True)
        return Response(serializer.data)


class CustomUserDetailView(APIView):
    renderer_classes = [TemplateHTMLRenderer]
    template_name = 'users_application/user_detail.html'  # Create this template

    def get_object(self, id):
        try:
            return CustomUser.objects.get(id=id)
        except CustomUser.DoesNotExist:
            raise Http404

    def get(self, request, id=None):
        if id:
            user = self.get_object(id)
            serializer = CustomUserSerializer(user)
            return Response({'user': serializer.data})
        else:
            users = CustomUser.objects.all()
            serializer = CustomUserSerializer(users, many=True)
            return Response({'users': serializer.data})
</code></pre> <p>urls.py</p> <pre><code>from django.urls import path
from .views import user_detail
from .views import CustomUserAPIView, EventAPIView, CustomUserDetailView, user_login
from . import views

urlpatterns = [
    path(&quot;&quot;, views.index, name=&quot;index&quot;),  # index is the function name from view.py
    #path('users/', views.user_list, name = &quot;users&quot;),
    #path('user/&lt;int:id&gt;/', views.user_detail, name = &quot;user_detail&quot;),
    path('users/', CustomUserAPIView.as_view(), name='user-list'),
    path('user/&lt;int:id&gt;/', CustomUserDetailView.as_view(), name='user-detail'),
    path('login/', views.user_login, name='login'),
]
</code></pre> <p>backends.py</p> <pre><code>from django.contrib.auth.backends import ModelBackend
from django.contrib.auth import get_user_model


class EmailBackend(ModelBackend):
    def authenticate(self, request, username=None, password=None, **kwargs):
        UserModel = get_user_model()
        try:
            user = UserModel.objects.get(email=username)
            if user.check_password(password):
                return user
        except UserModel.DoesNotExist:
            return None

    def get_user(self, user_id):
        UserModel = get_user_model()
        try:
            return UserModel.objects.get(pk=user_id)
        except UserModel.DoesNotExist:
            return None
</code></pre> <p>settings.py</p> <pre><code>from pathlib import Path

ALLOWED_HOSTS = [&quot;*&quot;]

# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'users_application',
    'rest_framework',
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

ROOT_URLCONF = 'REST_PROJECT.urls'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

WSGI_APPLICATION = 'REST_PROJECT.wsgi.application'

# Database
# https://docs.djangoproject.com/en/5.0/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    }
}

# Password validation
# https://docs.djangoproject.com/en/5.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

# Internationalization
# https://docs.djangoproject.com/en/5.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/5.0/howto/static-files/
STATIC_URL = 'static/'

# Default primary key field type
# https://docs.djangoproject.com/en/5.0/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'

AUTH_USER_MODEL = &quot;users_application.CustomUser&quot;

AUTHENTICATION_BACKENDS = ['users_application.backends.EmailBackend',
                           'django.contrib.auth.backends.ModelBackend']
</code></pre> <p>login.html</p> <pre><code>&lt;!DOCTYPE html&gt;
&lt;html lang=&quot;en&quot;&gt;
&lt;head&gt;
    &lt;meta charset=&quot;UTF-8&quot;&gt;
    &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt;
    &lt;title&gt;Login&lt;/title&gt;
    &lt;link href=&quot;https://maxcdn.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css&quot; rel=&quot;stylesheet&quot;&gt;
&lt;/head&gt;
&lt;body&gt;
&lt;div class=&quot;container mt-5&quot;&gt;
    &lt;!-- Login form --&gt;
    &lt;form class=&quot;col-6 mx-auto card p-3 shadow-lg&quot; method=&quot;post&quot; enctype=&quot;multipart/form-data&quot;&gt;
        &lt;h1 style=&quot;text-align: center;&quot;&gt;&lt;span style=&quot;color: green;&quot;&gt;USER LOGIN&lt;/span&gt;&lt;/h1&gt;
        {% csrf_token %} &lt;!-- CSRF token for security --&gt;

        &lt;!-- Login heading --&gt;
        &lt;h3&gt;Login&lt;/h3&gt;
        &lt;hr&gt;

        &lt;!-- Display error/success messages --&gt;
        {% if messages %}
        &lt;div class=&quot;alert alert-primary&quot; role=&quot;alert&quot;&gt;
            {% for message in messages %}
                {{ message }}
            {% endfor %}
        &lt;/div&gt;
        {% endif %}

        &lt;!-- Email input field --&gt;
        &lt;div class=&quot;form-group&quot;&gt;
            &lt;label for=&quot;exampleInputEmail1&quot;&gt;Email&lt;/label&gt;
            &lt;input type=&quot;email&quot; class=&quot;form-control&quot; name=&quot;email&quot; id=&quot;exampleInputEmail1&quot; aria-describedby=&quot;emailHelp&quot; placeholder=&quot;Enter email&quot; required&gt;
        &lt;/div&gt;

        &lt;!-- Password input field --&gt;
        &lt;div class=&quot;form-group&quot;&gt;
            &lt;label for=&quot;exampleInputPassword1&quot;&gt;Password&lt;/label&gt;
            &lt;input type=&quot;password&quot; name=&quot;password&quot; class=&quot;form-control&quot; id=&quot;exampleInputPassword1&quot; placeholder=&quot;Password&quot; required&gt;
        &lt;/div&gt;

        &lt;!-- Submit button --&gt;
        &lt;button type=&quot;submit&quot; class=&quot;btn btn-primary&quot;&gt;Submit&lt;/button&gt;
    &lt;/form&gt;
&lt;/div&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>and finally, models.py</p> <pre><code>from django.contrib.auth.models import AbstractUser
from django.contrib.auth.base_user import BaseUserManager
from django.db import models
from django.conf import settings


class CustomUserManager(BaseUserManager):
    def create_superuser(self, email, password=None, **extra_fields):
        extra_fields.setdefault('is_staff', True)
        extra_fields.setdefault('is_superuser', True)
        if extra_fields.get('is_staff') is not True:
            raise ValueError('Superuser must have is_staff=True')
        if extra_fields.get('is_superuser') is not True:
            raise ValueError('Superuser must have is_superuser=True')
        return self.create_user(email, password, **extra_fields)

    def create_user(self, email, password, **extra_fields):
        if not email:
            raise ValueError('The Email must be set')
        email = self.normalize_email(email)
        user = self.model(email=email, **extra_fields)
        user.set_password(password)
        user.save()
        return user


class CustomUser(AbstractUser):
    username = None
    email = models.EmailField((&quot;email address&quot;), unique=True)

    USERNAME_FIELD = &quot;email&quot;
    REQUIRED_FIELDS = []

    objects = CustomUserManager()

    def __str__(self):
        return self.email


class Events(models.Model):
    TYPES = (
        ('PRI', 'Rides'),
        ('CLN', 'Cleaning'),
        ('CPN', 'Companionship'),
    )

    #event = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    event = models.ForeignKey(CustomUser, related_name='events', on_delete=models.CASCADE)
    date = models.DateTimeField(auto_now_add=True, blank=True, null=True)
    category = models.CharField(max_length=50, choices = TYPES)
    previous_balance = models.DecimalField(max_digits=7, decimal_places=2, blank=True, null=True)
    spent = models.DecimalField(max_digits=7, decimal_places=2, blank=True, null=True)
    add = models.DecimalField(max_digits=7, decimal_places=2, blank=True, null=True)
    remaining_balance = models.DecimalField(max_digits=7, decimal_places=2, blank=True, null=True)
    destination = models.TextField(blank=True)
</code></pre>
<python><django><authentication><view><django-views>
2024-09-13 08:23:30
1
309
fishtang
78,981,172
13,250,589
convert a polars.DataFrames of List[Struct[2]] to a dict of polars.DataFrame
<p>I have a single-row <code>polars.DataFrame</code> <code>df</code> with schema</p> <pre class="lang-py prettyprint-override"><code>df.schema
&gt;&gt;&gt; Schema([('S1', List(Struct({'S1': Float64, 'timestamp': Datetime(time_unit='us', time_zone=None)}))),
        ('S2', List(Struct({'S2': Float64, 'timestamp': Datetime(time_unit='us', time_zone=None)}))),
        ('S3', List(Struct({'S3': Float64, 'timestamp': Datetime(time_unit='us', time_zone=None)}))),
        ('CO', List(Struct({'CO': Float64, 'timestamp': Datetime(time_unit='us', time_zone=None)}))),
        ('RH', List(Struct({'RH': Float64, 'timestamp': Datetime(time_unit='us', time_zone=None)}))),
        ('TP', List(Struct({'TP': Float64, 'timestamp': Datetime(time_unit='us', time_zone=None)})))])
</code></pre> <p>I want to convert it into a <code>dict</code> of <code>polars.DataFrame</code>. Currently I'm using the following technique:</p> <pre class="lang-py prettyprint-override"><code>{
    c: pl.DataFrame(v.explode()).unnest(c)
    for c, v in df.to_dict().items()
}
</code></pre> <p>which gives me a dictionary with the schema</p> <pre class="lang-py prettyprint-override"><code>{'S1': Schema([('S1', Float64), ('timestamp', Datetime(time_unit='us', time_zone=None))]),
 'S2': Schema([('S2', Float64), ('timestamp', Datetime(time_unit='us', time_zone=None))]),
 'S3': Schema([('S3', Float64), ('timestamp', Datetime(time_unit='us', time_zone=None))]),
 'CO': Schema([('CO', Float64), ('timestamp', Datetime(time_unit='us', time_zone=None))]),
 'RH': Schema([('RH', Float64), ('timestamp', Datetime(time_unit='us', time_zone=None))]),
 'TP': Schema([('TP', Float64), ('timestamp', Datetime(time_unit='us', time_zone=None))])}
</code></pre> <p>I cannot use <code>.explode()</code> on the original <code>df</code> because the list elements in the <code>df</code> all have different lengths. I would like to know if there is a more elegant or more <code>polars</code>-ish syntax for doing this.</p> <p>Here is some sample data:</p> <pre class="lang-py prettyprint-override"><code>import datetime as dt
import polars as pl

df = pl.DataFrame({
    'S1': [[
        {'S1': 102007.4,           'timestamp': dt.datetime(2024, 9, 4, 14, 58, 37)},
        {'S1': 102007.45454545454, 'timestamp': dt.datetime(2024, 9, 4, 14, 58, 54)},
        {'S1': 102005.83333333333, 'timestamp': dt.datetime(2024, 9, 4, 14, 59, 11)},
        {'S1': 102000.07692307692, 'timestamp': dt.datetime(2024, 9, 4, 14, 59, 28)},
        {'S1': 101996.0,           'timestamp': dt.datetime(2024, 9, 4, 15, 0, 1)},
    ]],
    'S2': [[
        {'S2': 50902.6,            'timestamp': dt.datetime(2024, 9, 4, 14, 58, 37)},
        {'S2': 50904.09090909091,  'timestamp': dt.datetime(2024, 9, 4, 14, 58, 54)},
        {'S2': 50904.833333333336, 'timestamp': dt.datetime(2024, 9, 4, 14, 59, 11)},
        {'S2': 50903.0,            'timestamp': dt.datetime(2024, 9, 4, 14, 59, 28)},
    ]],
    'S3': [[
        {'S3': 860903.6666666666,  'timestamp': dt.datetime(2024, 9, 4, 14, 58, 20)},
        {'S3': 860899.4545454546,  'timestamp': dt.datetime(2024, 9, 4, 14, 58, 54)},
        {'S3': 860862.5833333334,  'timestamp': dt.datetime(2024, 9, 4, 14, 59, 11)},
    ]],
    'CO': [[
        {'CO': 639162.2,           'timestamp': dt.datetime(2024, 9, 4, 14, 58, 37)},
        {'CO': 639161.2727272727,  'timestamp': dt.datetime(2024, 9, 4, 14, 58, 54)},
        {'CO': 639159.4166666666,  'timestamp': dt.datetime(2024, 9, 4, 14, 59, 11)},
        {'CO': 639167.5,           'timestamp': dt.datetime(2024, 9, 4, 14, 59, 44)},
        {'CO': 639163.2666666667,  'timestamp': dt.datetime(2024, 9, 4, 15, 0, 1)},
    ]],
    'RH': [[
        {'RH': 3655.3,             'timestamp': dt.datetime(2024, 9, 4, 14, 58, 37)},
        {'RH': 3655.2727272727275, 'timestamp': dt.datetime(2024, 9, 4, 14, 58, 54)},
    ]],
    'TP': [[
        {'TP': 2621.7,             'timestamp': dt.datetime(2024, 9, 4, 14, 58, 37)},
        {'TP': 2621.818181818182,  'timestamp': dt.datetime(2024, 9, 4, 14, 58, 54)},
    ]],
})
</code></pre>
<python><dataframe><python-polars>
2024-09-13 07:51:02
1
885
Hammad Ahmed
78,981,088
1,838,076
Is there a way to print incremental time instead of absolute time with Python logging
<p>When using Python logging, <code>asctime</code> is very handy in looking at the hotspots in code. However, I have to decode or post-process the log to show the incremental time taken between log messages.</p> <pre><code>logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
</code></pre> <p>Is there a way to print incremental time instead?</p> <p>I understand logging is meant to handle parallel threads etc., so incremental time may not make much sense there, but it helps a lot in simple cases.</p> <p>What I get now:</p> <pre><code>2024-09-13 12:37:19,981 - INFO - Got a Chunk of Data in 1.662097 seconds
2024-09-13 12:37:19,989 - INFO - Processed the Chunk in 0.008471 seconds
2024-09-13 12:37:19,993 - INFO - Optimized the Data in 0.002940 seconds
</code></pre> <p>What I am looking for:</p> <pre><code>1.662097 - INFO - Got a Chunk of Data in 1.662097 seconds
0.008471 - INFO - Processed the Chunk in 0.008471 seconds
0.002940 - INFO - Optimized the Data in 0.002940 seconds
</code></pre> <p>Or something similar.</p>
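One hedged sketch of how this could be done (the class and attribute names here are made up, not part of the stdlib): a `logging.Filter` attached to the handler can stamp each record with the seconds elapsed since the previous record, and the format string can then reference that attribute instead of `asctime`. (The built-in `%(relativeCreated)d` attribute is related but measures time since the logging module was loaded, not since the last message.)

```python
import io
import logging

class DeltaTimeFilter(logging.Filter):
    """Attach record.delta = seconds since the previous log record (hypothetical helper)."""
    def __init__(self):
        super().__init__()
        self.last = None

    def filter(self, record):
        # record.created is the epoch timestamp logging already captured
        record.delta = 0.0 if self.last is None else record.created - self.last
        self.last = record.created
        return True  # never drop the record, only annotate it

stream = io.StringIO()  # stand-in for stderr so the output is capturable
handler = logging.StreamHandler(stream)
handler.addFilter(DeltaTimeFilter())
handler.setFormatter(logging.Formatter('%(delta).6f - %(levelname)s - %(message)s'))

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("first message")
logger.info("second message")
print(stream.getvalue())
```

Attaching the filter to the handler (rather than one logger) means every record that handler emits gets the `delta` attribute, regardless of which logger produced it. Note the deltas are per-handler, so this is simplest with a single handler.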
<python><logging><python-logging>
2024-09-13 07:21:53
2
1,622
Krishna
78,981,052
1,125,062
Pytorch incorrect results with non_blocking assignment from Cuda to CPU
<p>I'm trying to assign to a tensor on the CPU the values I just obtained from the GPU; however, I'm getting incorrect results, and both tensors should obviously be the same:</p> <p>(To avoid any unnecessary chatter, I'd like to mention beforehand that I'm also aware of exactly <em><strong>two</strong></em> other methods of copying, such as 1) using the <code>copy_</code> method and 2) overwriting a variable altogether. However, I find the first slower and the second produces memory overhead, based on some of my tests. The slice assignment (<code>a[:] = ...</code>) seems to achieve the best performance so far, albeit the obtained results are incorrect when the <code>non_blocking</code> option is used.)</p> <pre><code>import torch

gpu = torch.device('cuda')
cpu = torch.device('cpu')

a = torch.rand((13223, 134, 4), dtype=torch.float32, device=cpu)
b = torch.rand((13223, 134, 4), dtype=torch.bfloat16, device=gpu)

for i in range(3):
    b.mul_(0.5)
    a[:] = b.to(device=cpu, memory_format=torch.preserve_format,
                dtype=torch.bfloat16, non_blocking=True)
    torch.cuda.synchronize()
    print(b[0, 0], a[0, 0])
</code></pre> <p>which prints:</p> <pre><code>tensor([0.0942, 0.1621, 0.2041, 0.1543], device='cuda:0', dtype=torch.bfloat16) tensor([0., 0., 0., 0.])
tensor([0.0471, 0.0811, 0.1021, 0.0771], device='cuda:0', dtype=torch.bfloat16) tensor([0.0942, 0.1621, 0.2041, 0.1543])
tensor([0.0236, 0.0405, 0.0510, 0.0386], device='cuda:0', dtype=torch.bfloat16) tensor([0.0471, 0.0811, 0.1021, 0.0771])
&gt;&gt;&gt;
</code></pre> <p>In my application the results are even weirder, as some of the values become negative (despite none of them being negative in the original tensor). I appreciate any input in advance; the goal is to transfer data to the CPU quickly and efficiently, and if you know any other methods I haven't mentioned that would help me achieve that, do leave a comment. :)</p>
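A plausible explanation, hedged since I cannot reproduce the exact timing here: this matches PyTorch's documented caveat that a device-to-host copy with `non_blocking=True` may return before the data has actually arrived. `b.to('cpu', non_blocking=True)` hands back a CPU staging tensor that is still being filled; the CPU-side `a[:] = ...` then reads it immediately, and the `torch.cuda.synchronize()` arrives one step too late — which would explain the one-iteration lag in the output above. A common pattern is to pre-allocate a pinned (page-locked) destination, copy into it with `copy_(..., non_blocking=True)`, and synchronize *before* anything reads the values (the function name below is made up for the sketch):

```python
import torch

def gpu_to_cpu_pinned(src_gpu: torch.Tensor, dst_pinned: torch.Tensor) -> torch.Tensor:
    # Asynchronous device-to-host copy into page-locked CPU memory...
    dst_pinned.copy_(src_gpu, non_blocking=True)
    # ...followed by a synchronize BEFORE anyone reads the destination.
    torch.cuda.synchronize()
    return dst_pinned

if torch.cuda.is_available():  # the sketch is only meaningful on a CUDA machine
    b = torch.rand((13223, 134, 4), dtype=torch.bfloat16, device='cuda')
    a = torch.empty(b.shape, dtype=b.dtype, pin_memory=True)
    for _ in range(3):
        b.mul_(0.5)
        gpu_to_cpu_pinned(b, a)
        assert torch.equal(a, b.cpu())  # values agree on every iteration
```

Whether this beats the slice assignment on any given workload would need measuring; the point of the sketch is only that the synchronize must sit between the copy and the first read of the CPU tensor.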
<python><python-3.x><pytorch><nonblocking>
2024-09-13 07:10:08
2
4,641
Anonymous