QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
77,105,594
7,848,740
Use Django messages framework when a webhook is called
<p>I have set up a URL in Django that is used as a webhook by another service. It is implemented like this:</p> <pre><code>@csrf_exempt
@require_POST
def hoof_alert(request):
    json_rx = json.loads(request.body.decode(&quot;utf-8&quot;))
    if json_rx[&quot;alerts&quot;][0][&quot;status&quot;] == &quot;firing&quot;:
        messages.error(request, &quot;Alarm fired&quot;)
    else:
        messages.success(request, &quot;No more alarm&quot;)
    return HttpResponse(status=200)
</code></pre> <p>The main issue is that even though the function is called (I can see it happen on the console, and my service gets back a 200 status), I can't see the message popping up on the front end.</p> <p>My guess is that the message is attached to the webhook request, which is not the request a user makes when accessing the website.</p> <p>So my question is: how do I make the message visible on the front end when the hook is called?</p> <p><strong>More Info:</strong> The webhook is called by a Grafana client when an alarm fires, so there is no human request behind it. I need the message to appear on the front end as soon as someone requests another page of my Django application.</p> <p>The idea is:</p> <ol> <li>Grafana calls the webhook</li> <li>The webhook fires and the message is created</li> <li>When someone accesses the Django application front end, the message must be shown</li> </ol>
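The messages framework ties each message to the request (and session) that created it, so a message queued during Grafana's webhook request can never reach a browser session. A framework-free sketch of the usual workaround (all names here are hypothetical; in real Django the shared store would be the database or cache, drained by a middleware or context processor on the next page view):

```python
# Minimal sketch: the webhook appends to shared storage, and a per-request
# hook (e.g. Django middleware) drains it for the next human visitor.
pending_alerts = []

def hoof_alert(payload):
    # what the webhook view would do instead of messages.error(...)
    status = payload["alerts"][0]["status"]
    pending_alerts.append("Alarm fired" if status == "firing" else "No more alarm")

def on_page_request():
    # what the middleware would do on the next page view:
    # show all pending alerts once, then clear them
    shown = list(pending_alerts)
    pending_alerts.clear()
    return shown

hoof_alert({"alerts": [{"status": "firing"}]})
first_view = on_page_request()   # alerts delivered here
second_view = on_page_request()  # already drained
```

In a multi-process Django deployment the in-memory list would not be shared, which is why the real store should be the cache or a small model table.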
<python><django><webhooks><django-messages>
2023-09-14 13:52:24
1
1,679
NicoCaldo
77,105,565
6,464,947
Why this error with pygbag: file not found in folder assets
<p>When I run the game on localhost it does not work, and the debug output says there is an error: it cannot find the png files. I put them into an assets folder, but it still doesn't find them, even though I have tried several different solutions.</p>
<python><pygbag>
2023-09-14 13:49:31
0
23,563
PythonProgrammi
77,105,434
264,136
Python: read stored credentials from Credential Manager
<pre><code>target_name = &quot;rtp-abs-cache.com&quot;
credential = win32cred.CredRead(target_name, win32cred.CRED_TYPE_GENERIC)
</code></pre> <p>gives error:</p> <pre><code>Traceback (most recent call last):
  File &quot;c:\code\daily_files_uploader.py&quot;, line 11, in &lt;module&gt;
    credential = win32cred.CredRead(target_name, win32cred.CRED_TYPE_GENERIC)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pywintypes.error: (1168, 'CredRead', 'Element not found.')
</code></pre> <p>But the credential is indeed present.</p> <p><a href="https://i.sstatic.net/ly7Lg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ly7Lg.png" alt="Credential Manager showing the stored credential" /></a></p>
<python>
2023-09-14 13:33:59
1
5,538
Akshay J
77,105,414
1,014,217
How to iterate over a pyspark dataframe to increment a value and reset it to 0
<p>I have a pyspark dataframe with the following fields:</p> <ul> <li><code>dt</code> = timestamp (one row per hour)</li> <li><code>rain_1h</code> = rain in mm that hour</li> </ul> <p>Now I need to calculate the number of dry hours: the counter should increment for every hour when it doesn't rain, and reset to 0 when it rains.</p> <p>I tried the following:</p> <pre><code>def calculate_dry_hours(df):
    window_spec = Window.orderBy(&quot;dt&quot;)

    # Create columns to flag dry and rainy hours
    df = df.withColumn('DryHour', when(col('rain_1h') == 0, 1).otherwise(0))
    df = df.withColumn('RainHour', when(col('rain_1h') &gt; 0, 1).otherwise(0))

    # Create a column &quot;lag_rain1h&quot; for the lag of &quot;rain_1h&quot;
    df = df.withColumn(&quot;lag_rain1h&quot;, lag(&quot;rain_1h&quot;).over(window_spec))

    dry_hour_window = Window.partitionBy().orderBy('dt')
    df = df.withColumn('DryHourCount', when(col('RainHour') == 0, sum('DryHour').over(dry_hour_window)).otherwise(0))
    df = df.withColumn('DryHourCount', when(col('RainHour') == 0 &amp; lag('RainHour', 1).over(dry_hour_window) == 1, 0).otherwise(col('DryHourCount')))
    return df
</code></pre> <p>However, this is not giving the desired results.</p>
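Setting pyspark aside, the reset-counter logic the question describes can be sketched in plain Python; a Spark version would typically build a group id as a cumulative sum of the rain flags and take a row number within each dry group. This sketch only illustrates the intended semantics, not the asker's window code:

```python
def dry_hour_counts(rain_mm):
    """Increment a counter for each consecutive dry hour; reset to 0 on any rain."""
    counts, run = [], 0
    for rain in rain_mm:
        run = run + 1 if rain == 0 else 0
        counts.append(run)
    return counts

# hourly rainfall in mm -> running count of consecutive dry hours
counts = dry_hour_counts([0, 0, 3, 0, 0, 0])
```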
<python><pandas><pyspark>
2023-09-14 13:31:36
1
34,314
Luis Valencia
77,105,226
561,243
Specify many options in argparse using *
<p>I have a question for you! I have developed a command-line tool with many options using argparse. Here is a simplified version of what I have:</p> <pre class="lang-py prettyprint-override"><code>import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--skip-part1', action='store_true', default=False)
parser.add_argument('--skip-part2', action='store_true', default=False)
parser.add_argument('--skip-part3', action='store_true', default=False)
parser.add_argument('--skip-part4', action='store_true', default=False)
</code></pre> <p>I would like the user to be able to select all the optional <em>skip</em> arguments by typing something like</p> <pre><code>myscript.py --skip-*
</code></pre> <p>Even more advanced, using a regular-expression-like pattern:</p> <pre><code>myscript.py --skip-part[1-3]
</code></pre> <p>to select the first three options only.</p> <p>Do you know how I could obtain such behavior?</p> <p>Thanks</p>
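argparse has no built-in wildcard handling, so one option (a sketch, not an argparse feature) is to expand glob-style patterns against the known option strings before handing them to `parse_args`, e.g. with `fnmatch`. Note the shell may expand `*` or `[ ]` itself, so the pattern usually needs quoting on the command line:

```python
import fnmatch

# the option strings registered on the parser (in a real script these could
# be derived from the parser itself; hardcoded here for illustration)
KNOWN_OPTIONS = ['--skip-part1', '--skip-part2', '--skip-part3', '--skip-part4']

def expand_argv(argv, known=KNOWN_OPTIONS):
    """Expand glob-style option patterns against the known option strings."""
    expanded = []
    for arg in argv:
        # treat options containing glob metacharacters as patterns
        if arg.startswith('--') and any(ch in arg for ch in '*?['):
            matches = fnmatch.filter(known, arg)
            expanded.extend(matches if matches else [arg])
        else:
            expanded.append(arg)
    return expanded

expanded = expand_argv(['--skip-part[1-3]'])
```

Usage would then be `parser.parse_args(expand_argv(sys.argv[1:]))`; unmatched patterns are passed through so argparse can still report them as errors.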
<python><argparse>
2023-09-14 13:10:12
2
367
toto
77,105,126
11,932,905
Pandas: re-assign groups to the same value in one column if they share at least one common value in another
<p>I have the following dataframe:</p> <pre><code>df = pd.DataFrame({&quot;zip&quot;: ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'C', 'C', 'C', 'C', 'C', 'C'],
                   &quot;zip_splitted&quot;: ['A_1', 'A_1', 'A_1', 'A_1', 'A_1', 'A_1', 'A_2', 'A_2', 'A_2', 'A_2', 'A_2', 'A_2', 'C_1', 'C_1', 'C_1', 'C_1', 'C_1', 'C_1'],
                   &quot;cluster&quot;: ['111', '111', '111', '112', '112', '112', '113', '113', '113', '114', '114', '114', '115', '115', '115', '116', '116', '116'],
                   &quot;cluster2&quot;: ['991', '991', '994', '991', '882', '991', '993', '991', '994', '992', '991', '991', '889', '889', '992', '998', '997', '999']
                   })
</code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">zip</th> <th style="text-align: left;">zip_splitted</th> <th style="text-align: left;">cluster</th> <th style="text-align: left;">cluster2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">111</td> <td style="text-align: left;">991</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">111</td> <td style="text-align: left;">991</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">111</td> <td style="text-align: left;">994</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">112</td> <td style="text-align: left;">991</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">112</td> <td style="text-align: left;">882</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">112</td> <td style="text-align: left;">991</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: 
left;">113</td> <td style="text-align: left;">993</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">113</td> <td style="text-align: left;">991</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">113</td> <td style="text-align: left;">994</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">114</td> <td style="text-align: left;">992</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">114</td> <td style="text-align: left;">991</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">114</td> <td style="text-align: left;">991</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">115</td> <td style="text-align: left;">889</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">115</td> <td style="text-align: left;">889</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">115</td> <td style="text-align: left;">992</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">116</td> <td style="text-align: left;">998</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">116</td> <td style="text-align: left;">997</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">116</td> <td style="text-align: left;">999</td> </tr> </tbody> </table> </div> <p>Main target is to re-assign values in 'cluster', so that if groups of values in 'cluster' have 
at least one common value in 'cluster2', they should be combined under the same cluster id.</p> <p>For the current case the output should be the following (clusters 111, 112, 113, 114, and 115 share at least one common value in cluster2, so they are re-assigned to 111; cluster 116 keeps its id):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">zip</th> <th style="text-align: left;">zip_splitted</th> <th style="text-align: left;">cluster</th> <th style="text-align: left;">cluster2</th> <th style="text-align: left;">cluster_new</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">111</td> <td style="text-align: left;">991</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">111</td> <td style="text-align: left;">991</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">111</td> <td style="text-align: left;">994</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">112</td> <td style="text-align: left;">991</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">112</td> <td style="text-align: left;">882</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_1</td> <td style="text-align: left;">112</td> <td style="text-align: left;">991</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">113</td> <td style="text-align: left;">993</td> <td style="text-align: 
left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">113</td> <td style="text-align: left;">991</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">113</td> <td style="text-align: left;">994</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">114</td> <td style="text-align: left;">992</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">114</td> <td style="text-align: left;">991</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">A_2</td> <td style="text-align: left;">114</td> <td style="text-align: left;">991</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">115</td> <td style="text-align: left;">889</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">115</td> <td style="text-align: left;">889</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">115</td> <td style="text-align: left;">992</td> <td style="text-align: left;">111</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">116</td> <td style="text-align: left;">998</td> <td style="text-align: left;">116</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">116</td> <td 
style="text-align: left;">997</td> <td style="text-align: left;">116</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: left;">C_1</td> <td style="text-align: left;">116</td> <td style="text-align: left;">999</td> <td style="text-align: left;">116</td> </tr> </tbody> </table> </div> <p>I'm currently stuck with groupbys, trying to build a list for mapping, but I'm not sure it's the correct approach.<br /> I'd appreciate any help.</p>
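One way to frame this is as connected components: two clusters are linked whenever they share a cluster2 value. A plain-Python union-find sketch over (cluster, cluster2) pairs (networkx's `connected_components` would do the same job); the resulting mapping can then be applied with `df['cluster'].map(...)`:

```python
def merge_clusters(pairs):
    """pairs: (cluster, cluster2) tuples. Returns {cluster: new_cluster_id},
    where each connected component keeps its smallest cluster label."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # link each cluster node to the cluster2 values it contains
    for c, c2 in pairs:
        parent[find(('cluster', c))] = find(('cluster2', c2))

    # group clusters by component root, then map each to the smallest label
    components = {}
    for c, _ in pairs:
        components.setdefault(find(('cluster', c)), set()).add(c)
    return {c: min(members) for members in components.values() for c in members}

# pairs taken from the question's dataframe (cluster, cluster2)
pairs = list(zip(
    ['111', '111', '111', '112', '112', '112', '113', '113', '113',
     '114', '114', '114', '115', '115', '115', '116', '116', '116'],
    ['991', '991', '994', '991', '882', '991', '993', '991', '994',
     '992', '991', '991', '889', '889', '992', '998', '997', '999'],
))
mapping = merge_clusters(pairs)
```

With the question's data, 991 links 111 to 114 and 992 links 114 to 115, so 111 through 115 form one component while 116 stays on its own.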
<python><pandas><group-by>
2023-09-14 12:58:35
2
608
Alex_Y
77,105,076
3,433,875
Curve text around a polar plot
<p>I am trying to curve text on a polar chart in matplotlib.</p> <p>Here is an example of my case:</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np

text = [&quot;Electoral Process&quot;, &quot;Political Pluralism&quot;, &quot;Rule of Law&quot;,
        &quot;Freedom of expression&quot;, &quot;Freedom of believe&quot;]
no_labels = len(text)
angle_size = int(360/no_labels)

# define the number of angles in degrees and convert it to radians
theta = [i/180*np.pi for i in range(0, 360, angle_size)]

# where to put the labels on the radial axis
radius = [1]*no_labels

# Find the mid point of each angle
mid_point = [i/180*np.pi for i in range(int(angle_size/2), 360, angle_size)]

fig, ax = plt.subplots(subplot_kw={'projection': 'polar'}, figsize=(10, 6), facecolor=&quot;white&quot;)
ax.set_theta_zero_location('N')
ax.set_theta_direction(-1)
ax.set_xticklabels([])  # remove the original xlabels
ax.set_yticklabels([])  # remove the original ylabels

# Arrange the grid into a number of equal parts in degrees
lines = plt.thetagrids(range(0, 360, int(360/len(text))))

# Place the text in the middle of each pie
for m, r, t in zip(mid_point, radius, text):
    ax.annotate(t, xy=[m, r], fontsize=12, ha=&quot;center&quot;, va=&quot;center&quot;)
</code></pre> <p>Which generates this:</p> <p><a href="https://i.sstatic.net/WoOAh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WoOAh.png" alt="polar chart with straight labels" /></a></p> <p>I have found other posts that do this, but I don't understand the code, so I am trying to build it from scratch. This is what I got so far:</p> <pre><code>import numpy as np
import matplotlib as mpl

text = [&quot;Electoral Process&quot;, &quot;Political Pluralism&quot;, &quot;Rule of Law&quot;,
        &quot;Freedom of expression&quot;, &quot;Freedom of believe&quot;]
no_labels = len(text)
angle_size = int(360/no_labels)

# define the number of angles in degrees and convert it to radians
theta = [i/180*np.pi for i in range(0, 360, angle_size)]

# where to put the labels on the radial axis
radius = [1]*no_labels

# Find the mid point of each angle
mid_point = [i/180*np.pi for i in range(int(angle_size/2), 360, angle_size)]

fig, ax = plt.subplots(subplot_kw={'projection': 'polar'}, figsize=(10, 6), dpi=100)

# Arrange the grid into a number of equal parts in degrees
lines = plt.thetagrids(range(0, 360, int(360/len(text))))
ax.set_theta_zero_location('N')
ax.set_theta_direction(-1)
ax.set_xticklabels([])  # remove the original xlabels
ax.set_yticklabels([])  # remove the original ylabels

# start with one label
start_text = theta[0]
text2 = [&quot;ELECTORAL PROCESS&quot;]
spacing = len(text[0]) + 4
end_text = theta[1]
x = np.linspace(start_text, end_text, spacing)
y = [1, 0.5]
for txt in text2:
    print(txt)
    for a, tx in zip(x, txt):
        ax.text(a, 1.05, tx, rotation=-28, fontsize=8, ha=&quot;center&quot;, va=&quot;center&quot;)
        print(a, tx)
</code></pre> <p>Which produces this: <a href="https://i.sstatic.net/637YS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/637YS.png" alt="polar chart with one curved label" /></a></p> <p>Now I have to iterate this for every label, rotate each letter to follow the curve (using a hardcoded angle at the moment), and also write upwards or downwards depending on where the text sits, and it seems like I might be complicating things.</p> <p>Has anyone done this using &quot;simple code&quot;, or can you explain how the code below works?</p> <p><a href="https://stackoverflow.com/questions/61844673/how-do-i-curve-text-in-a-polar-plot">Similar problem, but I don't understand the code</a></p>
<python><matplotlib><plot-annotations><polar-plot>
2023-09-14 12:53:21
2
363
ruthpozuelo
77,105,067
15,320,579
Create a new dictionary based on 2 dictionaries with different key names in Python
<p>I have the following 2 dictionaries:</p> <pre><code>dict1 = {'imgsss/modifiedmerged/modifiedmerged-2.png': 3,
         'imgsss/modifiedmerged/modifiedmerged-4.png': 5}
dict2 = {&quot;form1&quot;: &quot;POLICY NUMBER COMMERCIAL GENERAL&quot;,
         &quot;form2&quot;: &quot;ACP GLDO7285650787&quot;}
</code></pre> <p>I want to create a new dictionary based on the above 2 dictionaries. Basically, the value from <code>dict1</code> should become the digit at the end of the corresponding key of <code>dict2</code>. Everything else should stay the same. So the output dictionary should be:</p> <pre><code>op_dict = {
    &quot;form3&quot;: &quot;POLICY NUMBER COMMERCIAL GENERAL&quot;,
    &quot;form5&quot;: &quot;ACP GLDO7285650787&quot;
}
</code></pre> <p>Basically, the first element of <code>dict1</code> should be matched with the first element of <code>dict2</code>, the second element of <code>dict1</code> with the second element of <code>dict2</code>, and so on. The <code>form</code> keyword will remain constant, but the digit after it may change. Similarly, the values of <code>dict1</code> may change.</p> <p>I tried the following code:</p> <pre><code>op_dict = {}
for k, v in dict1.items():
    for k1, v1 in dict2.items():
        op_dict[f'form{str(v)}'] = v1
</code></pre> <p>but it gives incorrect output:</p> <pre><code>op_dict = {'form3': 'ACP GLDO7285650787', 'form5': 'ACP GLDO7285650787'}
</code></pre>
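The nested loops run dict2's whole inner loop for every dict1 entry, so every key ends up with dict2's last value. Since dicts preserve insertion order (Python 3.7+), the positional pairing can instead be done by zipping the two value sequences:

```python
dict1 = {'imgsss/modifiedmerged/modifiedmerged-2.png': 3,
         'imgsss/modifiedmerged/modifiedmerged-4.png': 5}
dict2 = {"form1": "POLICY NUMBER COMMERCIAL GENERAL",
         "form2": "ACP GLDO7285650787"}

# pair the i-th value of dict1 (the new digit) with the i-th value of dict2
op_dict = {f'form{digit}': text
           for digit, text in zip(dict1.values(), dict2.values())}
```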
<python><python-3.x><dictionary>
2023-09-14 12:52:14
2
787
spectre
77,105,050
9,139,930
Python relative imports that skip intermediate folder in VSCode
<p>I am trying to understand the code in someone else's mixed-language repository (part of a large, complicated code base). The directory structure is like this:</p> <pre><code>module_A/
    python/
        __init__.py
        script_A.py
        script_B.py
    other/
        other_scripts.cxx
</code></pre> <p>The contents of <code>__init__.py</code> are</p> <pre class="lang-py prettyprint-override"><code>__version__ = '1.0.0'
</code></pre> <p>The code has relative imports from <code>script_B.py</code> to <code>script_A.py</code> like the following:</p> <pre class="lang-py prettyprint-override"><code>from module_A import script_B
</code></pre> <p>Note that the intermediate directory <code>python</code> has been skipped. I know that the code runs, so python itself does not have a problem parsing this import statement. However, in VSCode, pylint throws a fit:</p> <pre><code>No name 'script_B' in module 'module_A'
</code></pre> <p>I would really like to resolve this error so that (a) I can use VSCode tools to help myself understand how the code works (e.g., function definitions on hover) and (b) I can more easily contribute to the code in the future (e.g., with code autocomplete).</p> <p>Can someone explain the following?</p> <ol> <li><p>What is going on here? I have never seen relative imports in python that skip an intermediate directory before.</p> </li> <li><p>How can I resolve the error in VSCode to recover the usual quality-of-life tools?</p> </li> </ol>
<python><visual-studio-code><python-import><python-module><relative-import>
2023-09-14 12:50:03
1
367
book_kees
77,105,015
1,652,219
Python Polars: Low memory read, process, writing of parquet to/from Hadoop
<p>I would like to be able to process very large files in Polars without running out of memory. In the documentation they suggest using scanning, lazy frames and sinks, but it is hard to find proper documentation of how to do this in practice. Hopefully some experts on here can help.</p> <p>Here I provide an example of what works for &quot;smaller&quot; files that can be handled in memory.</p> <h3>1. Setup</h3> <pre class="lang-python prettyprint-override"><code># Imports
import pandas as pd
import polars as pl
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq
from pyarrow._hdfs import HadoopFileSystem

# Setting up HDFS file system
hdfs_filesystem = HDFSConnection('default')
hdfs_out_path_1 = &quot;scanexample.parquet&quot;
hdfs_out_path_2 = &quot;scanexample2.parquet&quot;
</code></pre> <h3>2. Creating data</h3> <pre class="lang-python prettyprint-override"><code># Dataset
df = pd.DataFrame({
    'A': np.arange(10000),
    'B': np.arange(10000),
    'C': np.arange(10000),
    'D': np.arange(10000),
})

# Writing to Hadoop
pq_table = pa.Table.from_pandas(df)
pq_writer = pq.ParquetWriter(hdfs_out_path_1, schema=pq_table.schema, filesystem=hdfs_filesystem)

# Appending to parquet file
pq_writer.write_table(pq_table)
pq_writer.close()
</code></pre> <h3>3. Reading parquet into polars dataframe (in memory)</h3> <pre class="lang-python prettyprint-override"><code># Read file
pq_df = pl.read_parquet(source=hdfs_out_path_1,
                        use_pyarrow=True,
                        pyarrow_options={&quot;filesystem&quot;: hdfs_filesystem})
</code></pre> <h3>4. Making transforms and writing to file</h3> <pre class="lang-python prettyprint-override"><code># Transforms and write
pq_df.filter(pl.col('A') &gt; 9000)\
     .write_parquet(file=hdfs_out_path_2,
                    use_pyarrow=True,
                    pyarrow_options={&quot;filesystem&quot;: hdfs_filesystem})
</code></pre> <h3>5. Now doing the same with low memory</h3> <pre class="lang-python prettyprint-override"><code># Scanning file: Attempt 1
scan_df = pl.scan_parquet(source=hdfs_out_path_2)
ERROR: Cannot find file

# Scanning file: Attempt 2
scan_df = pl.scan_parquet(source=hdfs_filesystem.open_input_stream(hdfs_out_path_1))
ERROR: expected str, bytes or os.PathLike object, not pyarrow.lib.NativeFile
</code></pre> <p>According to the <a href="https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.scan_parquet.html" rel="nofollow noreferrer">polars documentation</a>, <code>scan_parquet</code> does not take pyarrow arguments. But it mentions taking some &quot;storage options&quot;, which I guess is what I need to use. But how?</p> <h3>6. Example without Hadoop</h3> <pre class="lang-python prettyprint-override"><code># Writing to parquet
df.to_parquet(path=&quot;testlocal.parquet&quot;)

# Read lazily
lazy_df = pl.scan_parquet(source=&quot;testlocal.parquet&quot;)

# Transforms and write
lazy_df.filter(pl.col('A') &gt; 9000).sink_parquet(path=&quot;testlocal.out.parquet&quot;)
</code></pre> <h1>UPDATE!</h1> <p>While the accepted answer lets you load your data into a LazyFrame, that lazy frame comes with limited functionality, as it cannot sink the data to a file without first collecting it all into memory!</p> <pre class="lang-python prettyprint-override"><code># Reading into LazyFrame
import pyarrow.dataset as ds
pq_lf = pl.scan_pyarrow_dataset(ds.dataset(hdfs_out_path_1, filesystem=hdfs_filesystem))

# Attempt at sinking to parquet
pq_lf.filter(pl.col('A') &gt; 9000).sink_parquet(path=&quot;testlocal.out.parquet&quot;)
PanicException: sink_parquet not yet supported in standard engine. Use 'collect().write_parquet()'
</code></pre>
<python><dataframe><parquet><python-polars><pyarrow>
2023-09-14 12:44:28
1
3,944
Esben Eickhardt
77,104,909
22,326,950
Why does the same win32com GetObject call (same line of code) only in certain circumstances find 'SAPGUI' in the Running Object Table?
<p>While writing a generic function (for Python and Excel VBA) to return an available SAP session (existing or new) for further scripting, I encountered a strange behavior of the <code>GetObject</code> call when it comes to finding the <code>SAPGUI</code> object in the ROT.</p> <ul> <li>With a session open, the function behaves as expected and returns the desired session object</li> <li>With no SAP open, the function opens one using <code>sapshcut.exe</code> and returns the session object as well</li> </ul> <h2>So far so good, but here comes the twist:</h2> <ul> <li>After a session was created by <code>sapshcut.exe</code>, the <code>win32com.GetObject('SAPGUI')</code> call no longer recognises sessions opened &quot;by hand&quot; <em>(meaning: when you open the logon pad and start a session from there)</em></li> <li>If the session is not closed, it is recognised again if you rerun the function</li> <li>If you close the session, the function opens and recognises a new one</li> <li>The only way to recognise a session opened &quot;by hand&quot; again is to restart the computer</li> </ul> <h2>Here is a minimal code snippet which reproduces the problem for me:</h2> <pre class="lang-py prettyprint-override"><code>from os import system
from win32com.client import GetObject
from pywintypes import com_error
from time import sleep

def open_and_close_shortcut():
    # check if 'SAPGUI' is in Windows Running Object Table (and get connection object)
    try:
        GetObject('SAPGUI').GetScriptingEngine.Children(0)
    # If not open, then open with sapshcut.exe and single sign on (no Logon is opened)
    except com_error:
        system('start sapshcut.exe -system=&lt;SID&gt; -client=&lt;CLI&gt;')  # put in your system and client
        sleep(2)  # tweak the time a little if it is not sufficient
    # get 'SAPGUI' from Windows Running Object Table (and get connection object)
    finally:
        con = GetObject('SAPGUI').GetScriptingEngine.Children(0)

    # get an available session and open TAC MD04
    for session in con.Children:
        if not session.busy:
            session.StartTransaction(&quot;MD04&quot;)  # change to any TAC you can access
            return

if __name__ == '__main__':
    open_and_close_shortcut()
</code></pre> <p><strong>Please note:</strong> I tried the same logic in Excel VBA and encountered the same behavior!</p> <h2>Fastest way to get the strange behavior for me is:</h2> <ol> <li>open a SAP session and start the script (see if MD04 is opened to confirm a successful run)</li> <li>close SAP and rerun the script (SAP should pop up; if the transaction is not executed, maybe you need a longer <code>sleep()</code> value)</li> <li>close SAP again and open it again manually (I do it with the logon pad)</li> </ol> <p>→ now the script should enter the <code>except</code> branch even though SAP is open, resulting in an SAP error/popup saying the system is already open</p> <h2>Can anyone explain this behavior to me and how to eliminate it?</h2> <p><strong>PS</strong> I found a workaround by programmatically opening the logon pad and starting the session from there, which does work as expected. So maybe there is a difference between a session from the logon pad and one from sapshcut?! Which is still mysterious, because both sessions are at some point recognised by the same line of code...</p>
<python><excel><vba><winapi><sap-gui>
2023-09-14 12:31:57
0
884
Jan_B
77,104,513
1,473,517
Why is numba popcount code twice as fast as equivalent C code?
<p>I have this simple python/numba code:</p> <pre><code>from numba import njit
import numba as nb

@nb.njit(nb.uint64(nb.uint64))
def popcount(x):
    b = 0
    while x &gt; 0:
        x &amp;= x - nb.uint64(1)
        b += 1
    return b

@njit
def timed_loop(n):
    summand = 0
    for i in range(n):
        summand += popcount(i)
    return summand
</code></pre> <p>It just adds the popcounts for the integers 0 to n - 1.</p> <p>When I time it I get:</p> <pre><code>%timeit timed_loop(1000000)
340 µs ± 1.08 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre> <p>It <a href="https://stackoverflow.com/questions/77102860/how-to-use-native-popcount-with-numba">turns out</a> that llvm cleverly converts the popcount function into the native CPU POPCNT instruction, so we should expect it to be fast. But the question is: how fast?</p> <p>I thought I would compare it to a C version to see the speed difference.</p> <pre><code>#include &lt;stdio.h&gt;
#include &lt;time.h&gt;

// Function to calculate the population count (number of set bits) of an integer using __builtin_popcount
int popcount(int num) {
    return __builtin_popcount(num);
}

int main() {
    unsigned int n;
    printf(&quot;Enter the value of n: &quot;);
    scanf(&quot;%d&quot;, &amp;n);

    // Variables to store start and end times
    struct timespec start_time, end_time;

    // Get the current time as the start time
    clock_gettime(CLOCK_MONOTONIC, &amp;start_time);

    int sum = 0;
    for (unsigned int i = 0; i &lt; n; i++) {
        sum += popcount(i);
    }

    // Get the current time as the end time
    clock_gettime(CLOCK_MONOTONIC, &amp;end_time);

    // Calculate the elapsed time in microseconds
    long long elapsed_time = (end_time.tv_sec - start_time.tv_sec) * 1000000LL +
                             (end_time.tv_nsec - start_time.tv_nsec) / 1000;

    printf(&quot;Sum of population counts from 0 to %d-1 is: %d\n&quot;, n, sum);
    printf(&quot;Elapsed time: %lld microseconds\n&quot;, elapsed_time);
    return 0;
}
</code></pre> <p>I then compiled this with <code>-march=native -Ofast</code>. I tried both gcc and clang and the results were very similar.</p> <pre><code>./popcount
Enter the value of n: 1000000
Sum of population counts from 0 to 1000000-1 is: 9884992
Elapsed time: 732 microseconds
</code></pre> <p>Why is the numba code twice as fast as the C code?</p>
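Before chasing the timing gap, it is worth confirming both loops compute the same quantity, and noting that the two versions differ in operand width (nb.uint64 in numba versus int in C), which is one variable to eliminate when comparing codegen. A pure-Python cross-check of the 9884992 figure printed by the C program:

```python
# sum of the popcounts of 0..999999, independent of both numba and C;
# bin(i).count('1') works on any Python 3 (int.bit_count needs 3.10+)
total = sum(bin(i).count('1') for i in range(1_000_000))
print(total)  # the question's C output reports 9884992 for n = 1000000
```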
<python><c><performance><x86-64><numba>
2023-09-14 11:36:18
2
21,513
Simd
77,104,419
17,561,414
Call one notebook from another notebook
<p>I would like to call one notebook from another notebook in databricks.</p> <p>lets say I have notebook_main and notebook_variable.</p> <p>I want to run the notebook_variable from notebook_main and the out of the notebook vairiable I want to pass in the notebook_main.</p> <p>For example:</p> <p>In notebook_variable I have defiend one string variable</p> <pre><code>variable = &quot;Select * From {table}&quot; </code></pre> <p><code>{table}</code> this has to become the dynamic variable in the second notebook.</p> <p>Currently I have hardcode things in the notebook_main but I want to make it dynamic.</p> <pre><code>dbutils.widgets.text(&quot;table&quot;, &quot;&quot;) table = dbutils.widgets.get(&quot;table&quot;) def update_changefeed(df, epochId): filtered_df = df.filter(col(&quot;_change_type&quot;).isin(&quot;insert&quot;, &quot;update_postimage&quot;, &quot;delete&quot;)) filtered_df.createOrReplaceGlobalTempView(&quot;test2&quot;) dfUpdates = sqlContext.sql(&quot;&quot;&quot; SELECT * from FROM global_temp.test2 &quot;&quot;&quot;) </code></pre> <p>So with the above <code>def</code> Im trying to filter the <code>df</code>. Then create the global temp view out of it. Note that now the temp view name is hard coded and I need to pass the {table} name in it which will also need to be pass in the below <code>select</code> statement instead of <code>global_temp.test2</code> it should be <code>global_temp.{table}</code>.</p> <p>I have two main problem. 
How can I output the notebook_variable result in this main notebook, and will the variables be passed correctly?</p> <p>My pseudo code:</p> <pre><code>dbutils.widgets.text(&quot;table&quot;, &quot;&quot;) table = dbutils.widgets.get(&quot;table&quot;) query = output of the notebook def update_changefeed(df, epochId): filtered_df = df.filter(col(&quot;_change_type&quot;).isin(&quot;insert&quot;, &quot;update_postimage&quot;, &quot;delete&quot;)) filtered_df.createOrReplaceGlobalTempView(table) dfUpdates = sqlContext.sql(query) </code></pre>
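Setting the notebook-to-notebook call aside, the templating part is plain Python — a sketch of how the {table} placeholder could be filled once the template string is available in notebook_main (in Databricks the table value would come from dbutils.widgets.get; here it is a stand-in):

```python
# The template string as it would come back from notebook_variable
query_template = "Select * From {table}"

# In Databricks this would be dbutils.widgets.get("table");
# a stand-in value is used here.
table = "global_temp.test2"

query = query_template.format(table=table)
print(query)  # Select * From global_temp.test2
```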
<python><sql><databricks><azure-databricks>
2023-09-14 11:24:14
1
735
Greencolor
77,104,373
903,011
Select n columns, with the rest of line added to the end of the last column, using read_csv()
<p>I am using Pandas to read with <code>read_csv()</code> a file with fields separated by spaces. There is a fixed number of columns, but a few lines do not follow the pattern.</p> <p>The solution I used so far is to add <code>on_bad_lines='warn'</code> which skips them and informs me that there is a faulty line.</p> <p>This was OK while there were a few lines I could look at separately. Unfortunately, the number of such lines increased.</p> <p>The solution I would be fine with would be to load only 10 columns: 9 that are always fine (and have predictable names), and the 10th one that would have the rest of the line as one column (called 'everything else').</p> <p>I was looking back and forth at the <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">documentation for <code>read_csv()</code></a> but I could not find the right parameter. Is there a way to limit the number of columns to read in (including a last column with the rest of the line)?</p> <p>To put some context: the kind of file I have</p> <pre><code>aaax bbb ccc ddd mmmxxx nnn ooo ppp sjkhdkjsh skdjhsksdkskjdh ksjh sdkjsdh ksjdh fffxx ggg hhh iii </code></pre> <p>I would like to retrieve four columns on each line, the fourth column would be (for each line)</p> <pre><code>ddd ppp sjkhdkjsh skdjhsksdkskjdh ksjh sdkjsdh ksjdh iii </code></pre>
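To make the target concrete, a plain-Python sketch of the split being asked for, using the three-plus-rest layout of the example (for the real file the maxsplit would be 9):

```python
# Sample lines from the question
lines = [
    "aaax bbb ccc ddd",
    "mmmxxx nnn ooo ppp sjkhdkjsh skdjhsksdkskjdh ksjh sdkjsdh ksjdh",
    "fffxx ggg hhh iii",
]

# Split on whitespace at most 3 times: the first three fields become
# columns, everything after the third separator stays joined.
rows = [line.split(maxsplit=3) for line in lines]
for row in rows:
    print(row[3])
# ddd
# ppp sjkhdkjsh skdjhsksdkskjdh ksjh sdkjsdh ksjdh
# iii
```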
<python><pandas><regex><csv>
2023-09-14 11:15:25
3
30,596
WoJ
77,104,355
7,465,516
How to open a textfile in python on windows with the correct encoding?
<p>I am aware that <a href="https://stackoverflow.com/a/436299/7465516">correctly determining the encoding of text is impossible</a>. I notice however, that programs such as notepad can correctly read a file encoded in utf-8, and I want to do the same in python.</p> <p><code>open</code> can not do it; it is <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow noreferrer">documented</a> in python3.11 to use the system's locale encoding as default, which is 'cp1252' on my machine, so the behaviour does not depend on any BOM or other magic that might be in the file or its metadata.</p> <p>The following code acts the same on python-versions 3.7, 3.10 and 3.11 (which is all that I tried):</p> <pre class="lang-py prettyprint-override"><code> fname = &quot;tmpWindowsOut.txt&quot; with open(fname, &quot;w&quot;, encoding=&quot;utf-8&quot;) as f: # if encoding wasn't specified in `open` this fails with the infamous # UnicodeEncodeError: 'charmap' codec can't encode character '\u03b2' in position 1: character maps to &lt;undefined&gt; f.write(&quot;βKHα&quot;) with open(fname, &quot;r&quot;) as f: print(f.read()) # prints βKHα # opening the file with notepad shows the correct output </code></pre> <p>I know I can correctly read the file by passing the encoding-parameter to the other <code>open</code>-call, the question is not how to read a file that is known to be encoded in <code>utf-8</code>. The question is how to either:</p> <ol> <li>do the <code>read</code>-portion correctly <em>without</em> specifying the encoding</li> <li>do the <code>write</code>-portion in a way that makes No.1 easier (or even just possible)</li> </ol>
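One approach worth sketching (an assumption about what Notepad does, not a definitive answer): write with the `utf-8-sig` codec so the file starts with a BOM, then sniff the BOM on read instead of hard-coding the encoding:

```python
import codecs
import os
import tempfile

fname = os.path.join(tempfile.gettempdir(), "tmpWindowsOut.txt")

# utf-8-sig prepends a BOM that readers can detect
with open(fname, "w", encoding="utf-8-sig") as f:
    f.write("βKHα")

# Sniff the BOM; fall back to the locale encoding if it is absent
with open(fname, "rb") as f:
    head = f.read(4)
encoding = "utf-8-sig" if head.startswith(codecs.BOM_UTF8) else "cp1252"

with open(fname, "r", encoding=encoding) as f:
    text = f.read()
print(text)  # βKHα
```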
<python><windows><unicode><utf-8><character-encoding>
2023-09-14 11:12:54
1
2,196
julaine
77,104,336
15,759,796
How to serialize a list of ndarray?
<p>I have a script which reads a video frame by frame and accumulates the frames in a deque.</p> <pre><code>labeled_frame_dequeue = [numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray] </code></pre> <p>I am trying to pass this list to a celery task</p> <pre><code>generate_clip_from_frames.delay(labeled_video_path, labeled_frame_dequeue, video_width, video_height, fps) </code></pre> <p>I have tried converting the deque to a list and a tuple, but I am getting the error <code>Object of type ndarray is not JSON serializable</code> while passing it to Celery.</p>
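Celery's default JSON serializer only handles plain Python types, so each frame would have to become nested lists first — with a real frame that is what `frame.tolist()` returns; plain lists stand in for the ndarray data in this sketch:

```python
import json

# Stand-ins for what ndarray.tolist() would produce for two tiny frames
frames_as_lists = [
    [[0, 1], [2, 3]],
    [[4, 5], [6, 7]],
]

# Unlike a list of ndarrays, this round-trips through JSON cleanly
payload = json.dumps(frames_as_lists)
restored = json.loads(payload)
print(restored == frames_as_lists)  # True
```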
<python><celery><numpy-ndarray><celery-task>
2023-09-14 11:10:39
2
477
raj-kapil
77,104,206
9,669,142
Python - curve_fitting automating initial guesses
<p>I asked the following question yesterday and this is an extension of that question: <a href="https://stackoverflow.com/questions/77099062/python-curve-fitting-doesnt-work-properly-with-high-x-values/77099276#77099276">Python - curve_fitting doesn&#39;t work properly with high x-values</a></p> <p>For completeness, I will explain the situation again. I have four XY-points for which I need to obtain a curve, hence I use <code>scipy.optimize.curve_fit</code>. Looking at the points, I want to use an exponential function. I had some issues with this, but someone was able to fix it and with that I have the code example below:</p> <pre><code>from scipy.optimize import curve_fit import matplotlib.pyplot as plt import numpy as np ## X # list_x1: [3.139, 2.53, 0.821, 0.27] # list_x2: [859.8791936328762, 805.5080517453312, 639.2578427310567, 496.3622821767497] ## Y # list_y1: [0.21, 0.49, 1.56, 23.97] # list_y2: [0.01, 0.01, 0.04, 2.46] list_x = [859.8791936328762, 805.5080517453312, 639.2578427310567, 496.3622821767497] list_y = [0.01, 0.01, 0.04, 2.46] def func_exp(x, a, b, c, d): return a + b*np.exp(-c*(x-d)) p0 = (0, max(list_x), 0, max(list_y)) list_line_info, pcov = curve_fit(func_exp, list_x, list_y, maxfev=100000, p0=p0) plot_x = np.linspace(min(list_x), max(list_x), 1000) plot_y = func_exp(plot_x, *list_line_info) plt.plot(plot_x, plot_y, linestyle=&quot;-&quot;, color=&quot;orangered&quot;) plt.plot(list_x, list_y, linestyle=&quot;--&quot;, color=&quot;dodgerblue&quot;) plt.scatter(list_x, list_y, color=&quot;black&quot;, zorder=2) plt.grid() </code></pre> <p>The code works in my program for all lists I enter, except for one combination. The problem is with the initial guess values. The suggestion made in the other question was to use explicit initial guesses, and this worked. Then I tried to automate this by using <code>p0 = (0, max(list_x), 0, max(list_y))</code>. 
This works for all graphs (all combinations of <code>list_x1</code>, <code>list_x2</code> and <code>list_y1</code>), but not for the combination <code>list_x2</code> and <code>list_y2</code>.</p> <p>Combination <code>list_x1</code> and <code>list_y2</code>:</p> <p><a href="https://i.sstatic.net/KXDao.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KXDao.png" alt="enter image description here" /></a></p> <p>Combination <code>list_x2</code> and <code>list_y2 </code>: <a href="https://i.sstatic.net/mgthI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mgthI.png" alt="enter image description here" /></a></p> <p>The initial guess <code>p0 = (0, max(list_x), 0.01, max(list_y))</code> does work in the combination <code>list_x2</code> and <code>list_y2</code>, but then I have the same issue with the combination <code>list_x1</code> and <code>list_y2</code></p> <p>Am I missing something here?</p> <p>EDIT</p> <p>I updated the code according to the suggestions made in the comments:</p> <pre><code>from scipy.optimize import curve_fit import matplotlib.pyplot as plt import numpy as np ## X # list_x1: [3.139, 2.53, 0.821, 0.27] # list_x2: [859.8791936328762, 805.5080517453312, 639.2578427310567, 496.3622821767497] ## Y # list_y1: [0.21, 0.49, 1.56, 23.97] # list_y2: [0.01, 0.01, 0.04, 2.46] list_x = [859.8791936328762, 805.5080517453312, 639.2578427310567, 496.3622821767497] list_y = [0.21, 0.49, 1.56, 23.97] def func_exp(x, a, b, c, d): return a + b*np.exp(-c*(x-d)) p0 = (0, max(list_y), 0, max(list_x)) list_line_info, pcov = curve_fit(func_exp, list_x, list_y, maxfev=100000, p0=p0) plot_x = np.linspace(min(list_x), max(list_x), 1000) plot_y = func_exp(plot_x, *list_line_info) plt.plot(plot_x, plot_y, linestyle=&quot;-&quot;, color=&quot;orangered&quot;) plt.plot(list_x, list_y, linestyle=&quot;--&quot;, color=&quot;dodgerblue&quot;) plt.scatter(list_x, list_y, color=&quot;black&quot;, zorder=2) plt.grid() </code></pre> <p>This suggestion does not 
work for one of the combinations used here:</p> <p><a href="https://i.sstatic.net/K29Gy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K29Gy.png" alt="enter image description here" /></a></p>
<python><scipy><curve-fitting>
2023-09-14 10:53:13
2
567
Fish1996
77,103,963
630,866
Method keyword argument is same in every call
<p>I have the following method</p> <pre><code>&gt;&gt;&gt; import uuid &gt;&gt;&gt; def get_sample(sample_value=str(uuid.uuid4())): ... return sample_value ... &gt;&gt;&gt; print(get_sample()) 2ca4a40c-62c4-4efa-add4-8f13067349d5 &gt;&gt;&gt; print(get_sample()) 2ca4a40c-62c4-4efa-add4-8f13067349d5 &gt;&gt;&gt; print(get_sample()) 2ca4a40c-62c4-4efa-add4-8f13067349d5 &gt;&gt;&gt; print(get_sample()) 2ca4a40c-62c4-4efa-add4-8f13067349d5 </code></pre> <p>Every invocation of the method returns the same value.</p> <p>I want to understand what's happening here. Can this be modified to return a new UUID in every call?</p>
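The default expression is evaluated once, at function-definition time, so every call reuses the same string. The usual fix (a sketch) is a `None` sentinel so the UUID is generated inside the body on each call:

```python
import uuid

def get_sample(sample_value=None):
    # Defaults are evaluated once at def-time; generating inside the
    # body makes a fresh UUID per call.
    if sample_value is None:
        sample_value = str(uuid.uuid4())
    return sample_value

print(get_sample())
print(get_sample())  # a different UUID each time
```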
<python>
2023-09-14 10:18:56
0
2,763
Bhaskar
77,103,869
6,645,564
How can I figure out the source of an error from code that occurs when I run it within a Jupyter notebook?
<p>I am running some code through a Jupyter notebook and I have been having an incredibly frustrating time trying to find the source of a bug that has been plaguing my work. I have set up my virtual environment (venv) in python to have these packages installed at these versions:</p> <pre><code>#For python 3.9.0 scipy==1.10.1 pandas==2.0.1 PyYAML==6.0 scikit-learn==1.2.2 statsmodels==0.14.0 natsort==8.3.1 plotly==5.14.1 seaborn==0.12.2 notebook==6.5.4 requests==2.31.0 openpyxl==3.1.2 kaleido==0.2.1 UpSetPlot==0.8.0 dash-bio==1.0.2 logomaker==0.8 pingouin==0.5.3 </code></pre> <p>After the virtual environment is set up and the relevant packages are installed, I boot up Jupyter notebook, import my code, and start using it to analyze some input data. Everything ostensibly works until I get to the statistical analysis part, where I get this error repeated multiple times:</p> <pre><code>/Users/bob/Documents/virtualEnv20230909/lib/python3.9/site-packages/scipy/stats/_continuous_distns.py:6832: RuntimeWarning: overflow encountered in _nct_sf return np.clip(_boost._nct_sf(x, df, nc), 0, 1) /Users/bob/Documents/virtualEnv20230909/lib/python3.9/site-packages/scipy/stats/_continuous_distns.py:6826: RuntimeWarning: overflow encountered in _nct_cdf return np.clip(_boost._nct_cdf(x, df, nc), 0, 1) /Users/bob/Documents/virtualEnv20230909/lib/python3.9/site-packages/scipy/stats/_continuous_distns.py:6832: RuntimeWarning: overflow encountered in _nct_sf return np.clip(_boost._nct_sf(x, df, nc), 0, 1) </code></pre> <p>Now, as far as I can tell, this error does not affect the results, but I'm not sure. What is more baffling is that my colleague, who ran the same code with her virtual environment set up with the same packages on her local machine, never experienced such an error.</p> <p>Using the traceback information from the error, I decided to dig deeper into the problem myself, which seemed to occur within the stats folder of scipy. 
I checked the file _continuous_distns.py, but the functions _boost._nct_sf and _boost._nct_cdf are not defined there. So I looked within the _boost folder, but I cannot find their definitions at all. They are referred to within <code>__init__.py</code>, but otherwise are not defined at all (you can check here <a href="https://github.com/pri-cph/scipy/tree/v1.10.1_dev/scipy/stats/_boost" rel="nofollow noreferrer">https://github.com/pri-cph/scipy/tree/v1.10.1_dev/scipy/stats/_boost</a>). Even more confusing, the message <code>RuntimeWarning: overflow encountered in </code> is not even mentioned in the same file as the two functions <code>_nct_sf</code> and <code>_nct_cdf</code>. So the flow from these two functions to that error is not clear at all.</p> <p>Debugging from the Jupyter notebook is primitive at best, since I need to restart the kernel anytime I modify the imported code. I tried to debug in VS Code, but even though I know approximately where the error occurs, I still can't pinpoint where the problem originates from, even with breakpoints.</p> <p>Given that this error does not occur on my colleague's machine, I could reasonably assume that the problem might be beyond the scope of the virtual environment we set up, but I have no way to ascertain that. Can anybody give me any tips for how I could more adequately debug the problem?</p>
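One debugging tactic that may help locate the trigger (a sketch with the stdlib warnings module): promote the warning category to an error, so the first occurrence raises with a full traceback pointing at the call site and input that caused it:

```python
import warnings

# Promote RuntimeWarning to an exception; the first occurrence then
# raises with a traceback instead of printing and continuing.
warnings.filterwarnings("error", category=RuntimeWarning)

caught = None
try:
    # Stand-in for the scipy call that emits the warning
    warnings.warn("overflow encountered in _nct_sf", RuntimeWarning)
except RuntimeWarning as exc:
    caught = exc
print(f"caught: {caught}")
```

In the real notebook, running `warnings.filterwarnings("error", category=RuntimeWarning)` before the statistics step should turn the overflow warning into an exception whose traceback shows exactly which of my inputs triggers it; numpy's `np.seterr(over='raise')` is the analogous switch for floating-point overflow in numpy operations.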
<python><debugging><jupyter-notebook>
2023-09-14 10:06:33
0
924
Bob McBobson
77,103,814
7,132,596
How to optimize Django's model's string representation database hits in browsable API
<p>I am currently trying to optimize my API and noticed that the browsable API is responsible for N+1 queries. This only happens when I add the parent-class in the child's <code>__str__</code> method.</p> <p>Comparison using django-debug-toolbar:</p> <ul> <li><code>f&quot;{self.name} ({self.client.name})&quot;</code>: default 531.44 ms (370 queries including 358 similar and 348 duplicates )</li> <li><code>f&quot;{self.name}&quot;</code>: default 130.54 ms (24 queries including 12 similar and 2 duplicates )</li> </ul> <p>My (simplified) models are the following:</p> <pre><code>class Client(models.Model): &quot;&quot;&quot;Model for a Client.&quot;&quot;&quot; name = models.CharField(max_length=128, unique=True, verbose_name=&quot;Company name&quot;) class Meta: &quot;&quot;&quot;Set ordering.&quot;&quot;&quot; ordering = [&quot;name&quot;] def __str__(self): &quot;&quot;&quot;Return a good string representation for a Client.&quot;&quot;&quot; return f&quot;{self.name}&quot; class Budget(models.Model): &quot;&quot;&quot;Model for representing a budget of a client.&quot;&quot;&quot; name = models.CharField(max_length=128, verbose_name=&quot;Budget name&quot;) client = models.ForeignKey(Client, related_name=&quot;budgets&quot;, on_delete=models.PROTECT) class Meta: &quot;&quot;&quot;Set ordering.&quot;&quot;&quot; constraints = [models.UniqueConstraint(fields=[&quot;client&quot;, &quot;name&quot;], name=&quot;unique_client_budget_name_combination&quot;)] ordering = [&quot;name&quot;] def __str__(self): &quot;&quot;&quot;Return a good string representation of a Budget.&quot;&quot;&quot; # self.client.name triggers multiple database hits return f&quot;{self.name} ({self.client.name})&quot; </code></pre> <p>This is the view:</p> <pre><code>class BudgetCreateAPIView(generics.ListCreateAPIView): &quot;&quot;&quot;Overview of all Budgets. 
Uses one serializer for creating and another for displaying.&quot;&quot;&quot; def get_serializer_class(self): &quot;&quot;&quot;Use the display_serializer_class for displaying the model instance.&quot;&quot;&quot; if self.request.method == &quot;GET&quot;: return self.display_serializer_class return self.serializer_class queryset = Budget.objects.all().select_related(&quot;client&quot;) serializer_class = BudgetSerializer display_serializer_class = BudgetDisplaySerializer </code></pre> <p>I have also tried using <code>select_related(&quot;client&quot;)</code> in the view, but this did not help. It seems like the browsable API is not using the view's queryset.</p> <p>How can I make the browsable API more efficient? Is that even possible?</p>
<python><django><django-rest-framework>
2023-09-14 09:58:42
1
956
Hans Bambel
77,103,813
1,254,528
Python 3.11, converting serializer to data model object getting error: object has no attribute '_sa_instance_state'
<p>I am new to Python and I am trying to convert a serializer object to a data model. My flow is: JSON request to serializer, and from serializer to model object in Python to persist the object.</p> <p>JSON request:</p> <pre><code>{ &quot;id&quot;: &quot;string&quot;, &quot;href&quot;: &quot;string&quot;, &quot;category&quot;: &quot;string&quot;, &quot;description&quot;: &quot;string&quot;, &quot;administrativeState&quot;: &quot;string&quot;, &quot;attachment&quot;: [ { &quot;id&quot;: &quot;string&quot;, &quot;attachmentType&quot;: &quot;string&quot; } ], &quot;operationalState&quot;: &quot;string&quot;, &quot;resourceStatus&quot;: &quot;string&quot; } </code></pre> <p>and my serializer class (with <code>Attachment</code> defined before <code>ResourceReq</code> references it)</p> <pre><code>class Attachment(BaseModel): id: str attachmentType: str class ResourceReq(BaseModel): id: Optional[int] href: str category: str description: str administrativeState: str attachment: List[Attachment] operationalState: str resourceStatus: str </code></pre> <p>and my model class</p> <pre><code>class Resource(Base): __tablename__ = 'resource' id = Column(Integer, primary_key=True, index=True, autoincrement=True) category = Column(String(255), nullable=True) description = Column(String(255), nullable=True) href = Column(String(255), nullable=True) operationalState = Column(String(255), nullable=True) resourceStatus = Column(String(255), nullable=True) attachment = relationship(&quot;AttachmentRefOrValue&quot;, back_populates=&quot;resource&quot; ) class AttachmentRefOrValue(Base): __tablename__ = 'attachment_ref_or_value' attachmentType = Column(String(255), nullable=True) id = Column(String(255), nullable=True) resource_id = Column(Integer(), ForeignKey(&quot;resource.id&quot;)) resource = relationship(&quot;Resource&quot;, back_populates=&quot;attachment&quot;) </code></pre> <p>Here I am converting the serializer to the model</p> <pre><code>def creatResource(resourcesInventory : ResourceReq) -&gt; resource_models.Resource : newResourcesInventory = 
resource_inventory_models.Resource(**resourcesInventory.__dict__) return newResourcesInventory </code></pre> <p>When I call this method I am getting the error below:</p> <p><strong>AttributeError: 'Attachment' object has no attribute '_sa_instance_state'</strong></p>
<python><python-3.x><list><sqlalchemy><fastapi>
2023-09-14 09:58:42
1
499
karthik selvaraj
77,103,791
5,663,844
Custom method for pandas df that selects a prespecified column by column name
<p>I would like to write a custom method for the <code>pd.DataFrame</code> class that would allow me to directly place new values at a prespecified location.</p> <pre class="lang-py prettyprint-override"><code>class myDF(pd.DataFrame): def func1(self, row, value): self.df.at[row,&quot;prespecified_column_name&quot;] = value </code></pre> <p>Like in the example above, I only have to specify the value and the row to be set, but the column is always the same. The problem is that <code>self.df</code> does not exist. So I was wondering how I would access columns of the DataFrame within the class using a column name.</p> <p>Before people complain about this being not very efficient: this is obviously a reduced example.</p>
<python><pandas>
2023-09-14 09:56:20
1
480
Janosch
77,103,773
3,480,297
Convert array of strings with single quotes to array of strings with double quotes
<p>Suppose I have a list of strings, where each string in the list can either start with a single or double quote, such as:</p> <pre><code>my_list = ['a string with a single quote', &quot;my organisation's goals&quot;, 'another string with double quote'] </code></pre> <p>Notice that the string that contains double quotes is as such because it contains an apostrophe.</p> <p>How can I make sure that my list contains each string within a double quote? In other words, I'd like for <code>my_list</code> to be:</p> <pre><code>my_list = [&quot;a string with a single quote&quot;, &quot;my organisation's goals&quot;, &quot;another string with double quote&quot;] </code></pre> <p>I've tried to simply use <code>.replace(&quot;'&quot;, '&quot;')</code>, but this doesn't work because it will replace the apostrophe with a double quote, which causes undesired behavior.</p> <p>I've also tried something like this, but similar to the above, it doesn't work.</p> <pre><code>new_list = [item[1:-1] if item.startswith(&quot;'&quot;) and item.endswith(&quot;'&quot;) else item for item in my_list] </code></pre>
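Worth noting: the quote character is only part of the source-code literal, not of the string object itself, so the two spellings already produce the same value; what looks like "quote style" only reappears in `repr()`, which picks quotes based on the string's contents:

```python
single = 'a string with a single quote'
double = "a string with a single quote"
print(single == double)  # True: the objects are identical

# repr() uses double quotes only when the string contains an
# apostrophe (and no double quote), single quotes otherwise.
print(repr("my organisation's goals"))
print(repr('plain string'))
```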
<python>
2023-09-14 09:54:39
2
2,612
Adam
77,103,701
4,391,360
Keep only two digits after the first digit other than zero
<p>I know how to keep only two decimal places after the decimal point. For example (there are other methods):</p> <pre><code>&gt;&gt;&gt; print(f'{10000.01908223295211791992:.2f}') 10000.02 </code></pre> <p>But I want to keep the first two digits other than zero after the decimal point:</p> <p>10000.01908223295211791992 will give: 10000.019</p> <p>0.0000456576578765 will give: 0.000046</p> <p>Is there a built-in method that I'm missing? Or is the only solution to code a test for (almost) every case?</p>
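For reference, the kind of helper being asked about (a sketch of my own, assuming positive inputs with a non-zero fractional part): find the decimal exponent of the fractional part's first non-zero digit, then round one digit past it:

```python
import math

def round_two_sig_frac(x: float) -> float:
    # Keep the first two non-zero digits of the fractional part.
    # Sketch: assumes x >= 0; returns x unchanged if there is no fraction.
    frac = x - math.floor(x)
    if frac == 0:
        return x
    first_nonzero = math.floor(math.log10(frac))  # e.g. -2 for 0.019...
    return round(x, -first_nonzero + 1)

print(round_two_sig_frac(10000.01908223295211791992))  # 10000.019
print(round_two_sig_frac(0.0000456576578765))          # 4.6e-05
```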
<python><rounding>
2023-09-14 09:46:21
4
727
servoz
77,103,687
11,729,033
How do I rename a Diskcache memoization cache?
<p><code>diskcache.Cache</code> objects provide a <code>memoize</code> function, which can be used like this:</p> <pre><code>cache = Cache(...) @cache.memoize() def long_computation(*args): ... </code></pre> <p>One can provide a <code>name</code> argument to <code>cache.memoize</code> to uniquely identify the callable being decorated. If one omits this argument (as in the example above), it's inferred from the full name of the callable.</p> <p>However, the full name of the callable depends on how (and whether) you're importing it - which is undesirable to me, as it means different scripts &quot;see&quot; the function as having different names, and therefore do not share the computation cache and wind up repeating work between them.</p> <p>The fix is simple: add an explicit <code>name</code> to the decorator call above. However, I've already done a fair amount of computation under the inferred name, and don't want to have to redo it. So I'd like some way to rename the existing cache to the new (explicit) name, so future calls can still use it.</p>
<python><python-3.x><diskcache>
2023-09-14 09:44:04
0
314
J E K
77,103,643
6,156,353
Polars - efficiently partition large csv dataset into parquet
<p>I need to partition a large CSV dataset into smaller year-month partitions. What I do:</p> <pre class="lang-py prettyprint-override"><code>for year in range(2016, 2019): year_df = (pl .scan_csv('some.csv', infer_schema_length=100000, null_values=['\\N'], cache=True) .with_columns( pl.col('origin_datetime').dt.year().alias('year'), pl.col('origin_datetime').dt.month().alias('month') ) .filter((pl.col('year') == year)) .cache() ) for month in range(1,13): (year_df .filter((pl.col('month') == month)) .collect(streaming=True) .write_parquet(f'/{year}_{month}.parquet') ) </code></pre> <p>I thought I could cache the data for every year so that filtering for each month would then be fast, but it seems every month still takes about the same time. Is there any way to improve this?</p>
<python><dataframe><csv><parquet><python-polars>
2023-09-14 09:39:56
2
1,371
romanzdk
77,103,376
10,970,202
pandas split string column on any character that is not an alphabet
<p>I have a dataframe something like the one below</p> <pre><code>a str_col 1 ABC*EFG 2 DDC/DSD 3. sew^sds ... </code></pre> <p>I want to split the strings on non-alphabet characters into a list. The desired df is as follows</p> <pre><code>a str_col. new_col 1 ABC*EFG. [ABC, EFG] 2 DDC/DSD. [DDC, DSD] 3. sew^sds [sew, sds] ... </code></pre> <p>I've tried</p> <ul> <li><code>df['str_col'].str.split('^[a-zA-Z]+')</code>, but it created something like <code>[, *EFG]</code></li> </ul>
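The pattern itself looks like the issue: outside square brackets `^` anchors at the start of the string, while inside `[...]` it negates the character class — a plain-Python check of what I suspect is the intended pattern:

```python
import re

values = ["ABC*EFG", "DDC/DSD", "sew^sds"]

# [^a-zA-Z]+ : caret INSIDE the brackets negates the class,
# i.e. split on runs of non-letter characters
split_values = [re.split(r"[^a-zA-Z]+", v) for v in values]
print(split_values)  # [['ABC', 'EFG'], ['DDC', 'DSD'], ['sew', 'sds']]
```

In pandas the same pattern would go into `str.split`, e.g. `df['str_col'].str.split(r'[^a-zA-Z]+')`.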
<python><pandas>
2023-09-14 09:07:30
3
5,008
haneulkim
77,103,226
8,588,743
VSCode does not recognize installed packages, despite correct interpreter selected and correct environment
<p>I'm trying to do some forecasting with facebook prophet in an environment named <code>'fbprophet'</code> where I've installed several dependencies. However, when I select the correct interpreter and run <code>'import pandas as pd'</code> I get the error: <code>'ModuleNotFoundError: No module named 'pandas''</code>. I've tried restarting VSCode as well as the suggested solutions <a href="https://stackoverflow.com/questions/64172961/vs-code-cant-import-module-which-is-already-installed?noredirect=1&amp;lq=1">here</a> and <a href="https://stackoverflow.com/questions/56658553/why-do-i-get-a-modulenotfounderror-in-vs-code-despite-the-fact-that-i-already">here</a> but none of them have been of any help. Below are some screenshots of my VSCode settings.</p> <p><a href="https://i.sstatic.net/xCQss.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xCQss.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/twopK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/twopK.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/GgX0i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GgX0i.png" alt="enter image description here" /></a></p>
<python><visual-studio-code>
2023-09-14 08:51:47
1
903
Parseval
77,103,225
6,664,393
Highlighting non-local variables in VS Code Python
<p>The VS Code syntax highlighter is able to highlight unused variables in a function body.</p> <p>I'm looking for the &quot;converse&quot;: variables that are used, but not parameters or defined inside the function. Instead they're defined somewhere else in the same module.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np x = np.arange(5) def f(): print(x[:, np.newaxis]) </code></pre> <p>In this snippet <code>x</code> would be highlighted as a free variable, but preferably not <code>np.newaxis</code> (which is defined to be <code>None</code>).</p> <p>Is it possible to do this using VS Code?</p>
<python><visual-studio-code>
2023-09-14 08:51:40
0
1,913
user357269
77,103,160
8,740,854
Check if (pos/neg) polygon is inside another (pos/neg) polygon
<h2>Description</h2> <p>I have the problem of deciding whether a region <code>A</code> is inside a region <code>B</code>. These regions are defined by lists of vertices <code>polyA</code> and <code>polyB</code>:</p> <ul> <li>If the vertices <code>polyA</code> are counter-clockwise, <ul> <li>then <code>A</code> is the interior region (purple) and <code>areaA &gt; 0</code> (positive polygon)</li> </ul> </li> <li>Else, <ul> <li><code>A</code> is the exterior region (pink) and <code>areaA &lt; 0</code> (negative polygon)</li> </ul> </li> </ul> <p><a href="https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Jordan_curve_theorem.svg/300px-Jordan_curve_theorem.svg.png" rel="nofollow noreferrer"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Jordan_curve_theorem.svg/300px-Jordan_curve_theorem.svg.png" alt="Example of internal and external regions" /></a></p> <p>I found this <a href="https://stackoverflow.com/questions/4833802/check-if-polygon-is-inside-a-polygon">SO question</a>, which covers a sub-problem of mine: it solves the case when <code>areaA &gt; 0</code> and <code>areaB &gt; 0</code>. 
Since my problem is more general, the given solution is not enough.</p> <h2>Problem</h2> <p>So far, I verify if all the vertices <code>polyA</code> are inside <code>polyB</code> and if there are intersections, but it gives wrong results when</p> <ol> <li>The 'minor' polygon is negative and 'major' polygon is positive, with same center</li> </ol> <pre class="lang-py prettyprint-override"><code>polyA = [(1, 1), (1, 2), (2, 2), (2, 1)] # Clockwise square of side 1 polyB = [(0, 0), (3, 0), (3, 3), (0, 3)] # Counter-clockwise square of side 3 polygon_in_polygon(polyA, polyB) # Expect False # Gives True since all vertices polyA are inside B </code></pre> <ol start="2"> <li><code>polyA</code> is the inverse of <code>polyB</code></li> </ol> <pre class="lang-py prettyprint-override"><code>polyA = [(0, 0), (1, 0), (1, 1), (0, 1)] # Counter-clockwise square of side 1 polyB = [(0, 0), (0, 1), (1, 1), (1, 0)] # Clockwise square of side 1 polygon_in_polygon(polyA, polyB) # Expect False # Gives True, cause all the vertices polyA are in the boundary of B </code></pre> <p>Can anyone help me finding an algorithm for it?</p> <h2>Current algorithm</h2> <p>So far my algorithm (in python) is</p> <pre class="lang-py prettyprint-override"><code>def polygon_and_polygon(polyA: tuple[Point], polyB: tuple[Point]) -&gt; bool: &quot;&quot;&quot;Checks the intersections of polyA and polyB Returns False if there are any point in each segment of A which is outside B &quot;&quot;&quot; ... 
def point_in_polygon(point: Point, polygon: tuple[Point]) -&gt; bool: &quot;&quot;&quot;Checks if given point is inside the region made by polygon&quot;&quot;&quot; area = compute_area(polygon) # area &gt; 0 if polygon is counter-clockwise # area &lt; 0 else wind = winding_number(point, polygon) return wind == 1 if area &gt; 0 else wind == 0 def polygon_in_polygon(polyA: tuple[Point], polyB: tuple[Point]) -&gt; bool: &quot;&quot;&quot;Checks if polyA is completely inside polyB&quot;&quot;&quot; for point in polyA: if not point_in_polygon(point, polyB): return False if polygon_and_polygon(polyA, polyB): return False # Insert code here return True </code></pre> <p>I expect hints or an algorithm so that the function <code>polygon_in_polygon</code> is able to handle the 2 presented cases, and also any problems that may arise which I don't see yet.</p> <p>I thought about creating some random points <code>rand_point</code> which are inside <code>A</code>, and verifying if they are inside <code>B</code>. But it's possible to have bad luck and get <code>(0, 1)</code> in the first example, which is indeed inside <code>B</code>, even though <code>A</code> is not inside <code>B</code>.</p> <h2>Additional info</h2> <p><strong>Note 1:</strong> Consider the coordinates of each point to be integers: disregard <code>float</code> precision problems</p> <p><strong>Note 2:</strong> The regions <code>A</code> and <code>B</code> are closed sets. That means, if an edge <code>edgeA</code> touches an edge <code>edgeB</code>, but doesn't cross <code>edgeB</code>, we consider <code>edgeA</code> to be inside <code>B</code>. 
Therefore</p> <pre class="lang-py prettyprint-override"><code>polyA = [(0, 0), (1, 0), (0, 1)] # Positive triangle of side 1 polyB = [(0, 0), (1, 0), (1, 1), (0, 1)] # Positive square of side 1 polygon_in_polygon(polyA, polyB) # Expect True polyA = [(0, 0), (0, 1), (1, 1), (1, 0)] # Negative square of side 1 polyB = [(0, 0), (0, 1), (1, 0)] # Negative triangle of side 1 polygon_in_polygon(polyA, polyB) # Expect True </code></pre>
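For context, `compute_area` is not shown above; a sketch of the standard shoelace formula that matches its documented sign convention (positive for counter-clockwise vertex order, negative for clockwise):

```python
def compute_area(polygon):
    # Shoelace formula: sum of cross-products of consecutive edges,
    # halved; sign encodes the winding direction.
    area = 0
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return area / 2

print(compute_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0  (CCW)
print(compute_area([(0, 0), (0, 1), (1, 1), (1, 0)]))  # -1.0 (CW)
```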
<python><algorithm><geometry><polygon><point-in-polygon>
2023-09-14 08:41:57
0
472
Carlos Adir
77,102,998
8,124,392
Loading .npy into torch throws a missing key error
<p>This is my code where I'm trying to load a .npy file to torch:</p> <pre><code># Load the state dictionary from the numpy file state_dict_np = np.load(model_path, allow_pickle=True).item() # Convert numpy arrays within the state dictionary to PyTorch tensors state_dict_torch = {k: torch.tensor(v, dtype=torch.float32).cpu() for k, v in state_dict_np.items()} # Load the converted state dictionary into the model self.model.load_state_dict(state_dict_torch) </code></pre> <p>This is the error I'm met with:</p> <pre><code>Error while subprocess initialization: Traceback (most recent call last): File &quot;/app/core/joblib/SubprocessorBase.py&quot;, line 62, in _subprocess_run self.on_initialize(client_dict) File &quot;/app/mainscripts/Extractor.py&quot;, line 73, in on_initialize self.rects_extractor = facelib.S3FDExtractor(place_model_on_cpu=place_model_on_cpu) File &quot;/app/facelib/S3FDExtractor.py&quot;, line 156, in __init__ self.model.load_state_dict(state_dict_torch) File &quot;/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py&quot;, line 2041, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for S3FD: Missing key(s) in state_dict: &quot;conv1_1.weight&quot;, &quot;conv1_1.bias&quot;, &quot;conv1_2.weight&quot;, &quot;conv1_2.bias&quot;, &quot;conv2_1.weight&quot;, &quot;conv2_1.bias&quot;, &quot;conv2_2.weight&quot;, &quot;conv2_2.bias&quot;, &quot;conv3_1.weight&quot;, &quot;conv3_1.bias&quot;, &quot;conv3_2.weight&quot;, &quot;conv3_2.bias&quot;, &quot;conv3_3.weight&quot;, &quot;conv3_3.bias&quot;, &quot;conv4_1.weight&quot;, &quot;conv4_1.bias&quot;, &quot;conv4_2.weight&quot;, &quot;conv4_2.bias&quot;, &quot;conv4_3.weight&quot;, &quot;conv4_3.bias&quot;, &quot;conv5_1.weight&quot;, &quot;conv5_1.bias&quot;, &quot;conv5_2.weight&quot;, &quot;conv5_2.bias&quot;, &quot;conv5_3.weight&quot;, &quot;conv5_3.bias&quot;, &quot;fc6.weight&quot;, 
&quot;fc6.bias&quot;, &quot;fc7.weight&quot;, &quot;fc7.bias&quot;, &quot;conv6_1.weight&quot;, &quot;conv6_1.bias&quot;, &quot;conv6_2.weight&quot;, &quot;conv6_2.bias&quot;, &quot;conv7_1.weight&quot;, &quot;conv7_1.bias&quot;, &quot;conv7_2.weight&quot;, &quot;conv7_2.bias&quot;, &quot;conv3_3_norm.weight&quot;, &quot;conv4_3_norm.weight&quot;, &quot;conv5_3_norm.weight&quot;, &quot;conv3_3_norm_mbox_conf.weight&quot;, &quot;conv3_3_norm_mbox_conf.bias&quot;, &quot;conv3_3_norm_mbox_loc.weight&quot;, &quot;conv3_3_norm_mbox_loc.bias&quot;, &quot;conv4_3_norm_mbox_conf.weight&quot;, &quot;conv4_3_norm_mbox_conf.bias&quot;, &quot;conv4_3_norm_mbox_loc.weight&quot;, &quot;conv4_3_norm_mbox_loc.bias&quot;, &quot;conv5_3_norm_mbox_conf.weight&quot;, &quot;conv5_3_norm_mbox_conf.bias&quot;, &quot;conv5_3_norm_mbox_loc.weight&quot;, &quot;conv5_3_norm_mbox_loc.bias&quot;, &quot;fc7_mbox_conf.weight&quot;, &quot;fc7_mbox_conf.bias&quot;, &quot;fc7_mbox_loc.weight&quot;, &quot;fc7_mbox_loc.bias&quot;, &quot;conv6_2_mbox_conf.weight&quot;, &quot;conv6_2_mbox_conf.bias&quot;, &quot;conv6_2_mbox_loc.weight&quot;, &quot;conv6_2_mbox_loc.bias&quot;, &quot;conv7_2_mbox_conf.weight&quot;, &quot;conv7_2_mbox_conf.bias&quot;, &quot;conv7_2_mbox_loc.weight&quot;, &quot;conv7_2_mbox_loc.bias&quot;. 
Unexpected key(s) in state_dict: &quot;conv1_1/weight:0&quot;, &quot;conv1_1/bias:0&quot;, &quot;conv1_2/weight:0&quot;, &quot;conv1_2/bias:0&quot;, &quot;conv2_1/weight:0&quot;, &quot;conv2_1/bias:0&quot;, &quot;conv2_2/weight:0&quot;, &quot;conv2_2/bias:0&quot;, &quot;conv3_1/weight:0&quot;, &quot;conv3_1/bias:0&quot;, &quot;conv3_2/weight:0&quot;, &quot;conv3_2/bias:0&quot;, &quot;conv3_3/weight:0&quot;, &quot;conv3_3/bias:0&quot;, &quot;conv4_1/weight:0&quot;, &quot;conv4_1/bias:0&quot;, &quot;conv4_2/weight:0&quot;, &quot;conv4_2/bias:0&quot;, &quot;conv4_3/weight:0&quot;, &quot;conv4_3/bias:0&quot;, &quot;conv5_1/weight:0&quot;, &quot;conv5_1/bias:0&quot;, &quot;conv5_2/weight:0&quot;, &quot;conv5_2/bias:0&quot;, &quot;conv5_3/weight:0&quot;, &quot;conv5_3/bias:0&quot;, &quot;fc6/weight:0&quot;, &quot;fc6/bias:0&quot;, &quot;fc7/weight:0&quot;, &quot;fc7/bias:0&quot;, &quot;conv6_1/weight:0&quot;, &quot;conv6_1/bias:0&quot;, &quot;conv6_2/weight:0&quot;, &quot;conv6_2/bias:0&quot;, &quot;conv7_1/weight:0&quot;, &quot;conv7_1/bias:0&quot;, &quot;conv7_2/weight:0&quot;, &quot;conv7_2/bias:0&quot;, &quot;conv3_3_norm/weight:0&quot;, &quot;conv4_3_norm/weight:0&quot;, &quot;conv5_3_norm/weight:0&quot;, &quot;conv3_3_norm_mbox_conf/weight:0&quot;, &quot;conv3_3_norm_mbox_conf/bias:0&quot;, &quot;conv3_3_norm_mbox_loc/weight:0&quot;, &quot;conv3_3_norm_mbox_loc/bias:0&quot;, &quot;conv4_3_norm_mbox_conf/weight:0&quot;, &quot;conv4_3_norm_mbox_conf/bias:0&quot;, &quot;conv4_3_norm_mbox_loc/weight:0&quot;, &quot;conv4_3_norm_mbox_loc/bias:0&quot;, &quot;conv5_3_norm_mbox_conf/weight:0&quot;, &quot;conv5_3_norm_mbox_conf/bias:0&quot;, &quot;conv5_3_norm_mbox_loc/weight:0&quot;, &quot;conv5_3_norm_mbox_loc/bias:0&quot;, &quot;fc7_mbox_conf/weight:0&quot;, &quot;fc7_mbox_conf/bias:0&quot;, &quot;fc7_mbox_loc/weight:0&quot;, &quot;fc7_mbox_loc/bias:0&quot;, &quot;conv6_2_mbox_conf/weight:0&quot;, &quot;conv6_2_mbox_conf/bias:0&quot;, 
&quot;conv6_2_mbox_loc/weight:0&quot;, &quot;conv6_2_mbox_loc/bias:0&quot;, &quot;conv7_2_mbox_conf/weight:0&quot;, &quot;conv7_2_mbox_conf/bias:0&quot;, &quot;conv7_2_mbox_loc/weight:0&quot;, &quot;conv7_2_mbox_loc/bias:0&quot;. </code></pre> <p>Previously I tried to rename the keys like this:</p> <pre><code>renamed_state_dict = {} for key, value in state_dict.items(): new_key = key.split(&quot;/&quot;)[0] if &quot;weight&quot; in key: new_key += &quot;.weight&quot; elif &quot;bias&quot; in key: new_key += &quot;.bias&quot; renamed_state_dict[new_key] = value self.model.load_state_dict(renamed_state_dict) </code></pre> <p>But then got this error:</p> <pre><code>Error while subprocess initialization: Traceback (most recent call last): File &quot;/app/core/joblib/SubprocessorBase.py&quot;, line 62, in _subprocess_run self.on_initialize(client_dict) File &quot;/app/mainscripts/Extractor.py&quot;, line 73, in on_initialize self.rects_extractor = facelib.S3FDExtractor(place_model_on_cpu=place_model_on_cpu) File &quot;/app/facelib/S3FDExtractor.py&quot;, line 165, in __init__ self.model.load_state_dict(renamed_state_dict) File &quot;/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py&quot;, line 2041, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for S3FD: While copying the parameter named &quot;conv1_1.weight&quot;, expected torch.Tensor or Tensor-like object from checkpoint but received &lt;class 'numpy.ndarray'&gt; While copying the parameter named &quot;conv1_1.bias&quot;, expected torch.Tensor or Tensor-like object from checkpoint but received &lt;class 'numpy.ndarray'&gt; While copying the parameter named &quot;conv1_2.weight&quot;, expected torch.Tensor or Tensor-like object from checkpoint but received &lt;class 'numpy.ndarray'&gt; While copying the parameter named &quot;conv1_2.bias&quot;, expected torch.Tensor or Tensor-like object from checkpoint but 
received &lt;class 'numpy.ndarray'&gt; ... While copying the parameter named &quot;conv3_1.weight&quot;, expected torch.Tensor or Tensor-like object from checkpoint but received &lt;class 'numpy.ndarray'&gt; </code></pre> <p>Where am I going wrong?</p> <p><strong>Update</strong></p> <p>Per user suggestion, I tried this:</p> <pre><code>def can_squeeze(t): shape_set = set(t.shape) if len(shape_set) == 2 and 1 in shape_set: return True return False def reshape_tensor(t): if can_squeeze(t): return t.squeeze() else: return torch.permute(t, [3, 2, 1, 0]) def rename_key(key): new_key = key.split(&quot;:&quot;)[0] # discard :0 and similar key_elements = new_key.split(&quot;/&quot;) new_key = &quot;.&quot;.join(key_elements) # replace every / with . return new_key # Load the state dictionary from the numpy file state_dict_np = np.load(model_path, allow_pickle=True).item() # Convert numpy arrays within the state dictionary to PyTorch tensors state_dict_torch = {rename_key(k): reshape_tensor(torch.tensor(v, dtype=torch.float32)).cpu() for k, v in state_dict_np.items()} # Load the converted state dictionary into the model self.model.load_state_dict(state_dict_torch) </code></pre> <p>and got this error:</p> <pre><code>Error while subprocess initialization: Traceback (most recent call last): File &quot;/app/core/joblib/SubprocessorBase.py&quot;, line 62, in _subprocess_run self.on_initialize(client_dict) File &quot;/app/mainscripts/Extractor.py&quot;, line 73, in on_initialize self.rects_extractor = facelib.S3FDExtractor(place_model_on_cpu=place_model_on_cpu) File &quot;/app/facelib/S3FDExtractor.py&quot;, line 172, in __init__ self.model.load_state_dict(state_dict_torch) File &quot;/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py&quot;, line 2041, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for S3FD: size mismatch for fc7.weight: copying a param with shape 
torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 1, 1]). size mismatch for conv3_3_norm.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([1, 256, 1, 1]). size mismatch for conv4_3_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([1, 512, 1, 1]). size mismatch for conv5_3_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([1, 512, 1, 1]). </code></pre>
<python><numpy><deep-learning><pytorch>
2023-09-14 08:20:57
1
3,203
mchd
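A minimal sketch of the key-renaming step alone for the question above (no torch required), assuming the checkpoint keys follow the TensorFlow-style `name/weight:0` pattern shown in the traceback: strip the `:0` suffix and replace `/` with `.`. The remaining size mismatches in the final error suggest some arrays are in TF axis order and need a permute to PyTorch's layout; which axes to permute depends on the actual model, so treat this as a starting point, not a fix.

```python
def rename_key(key: str) -> str:
    """Map a TF-style key like 'conv1_1/weight:0' to PyTorch-style 'conv1_1.weight'."""
    key = key.split(":")[0]       # drop the ':0' output suffix
    return key.replace("/", ".")  # scope separators become attribute paths

# Hypothetical keys taken from the error message above.
tf_keys = ["conv1_1/weight:0", "conv1_1/bias:0", "fc7_mbox_conf/weight:0"]
pt_keys = [rename_key(k) for k in tf_keys]
print(pt_keys)  # ['conv1_1.weight', 'conv1_1.bias', 'fc7_mbox_conf.weight']
```

After renaming, converting with `torch.from_numpy(v)` (rather than `torch.tensor`) avoids the "expected torch.Tensor ... received numpy.ndarray" error while sharing the array's memory.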
77,102,860
1,473,517
How to use native popcount with numba
<p>I am using numba 0.57.1 and I would like to exploit the native CPU popcount in my code. My existing code is too slow as I need to run it hundreds of millions of times. Here is a MWE:</p> <pre><code>import numba as nb @nb.njit(nb.uint64(nb.uint64)) def popcount(x): b=0 while(x &gt; 0): x &amp;= x - nb.uint64(1) b+=1 return b print(popcount(43)) </code></pre> <p>The current speed is:</p> <pre><code>%timeit popcount(255) 148 ns ± 0.369 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) </code></pre> <p>I believe that it should be possible to use the native CPU popcount (_mm_popcnt_u64 as a C intrinsic) instruction using <a href="https://numba.readthedocs.io/en/stable/extending/high-level.html#implementing-intrinsics" rel="nofollow noreferrer">numba intrinsics</a> but this area is new to me. The llvm intrinsic I need to use is, I think, <a href="https://llvm.org/docs/LangRef.html#llvm-ctpop-intrinsic" rel="nofollow noreferrer">ctpop</a>. Assuming this is right, how can I do this?</p>
<python><llvm><numba>
2023-09-14 08:00:54
1
21,513
Simd
77,102,848
17,561,414
Autoloader - creating tmp view
<p>I'm using Auto Loader to incrementally load data into the silver layer from bronze tables, where I use the change feed feature.</p> <p>This is how I read the <code>df</code>:</p> <pre><code>df = spark.readStream.format(&quot;delta&quot;) \ .option(&quot;readChangeFeed&quot;, &quot;true&quot;) \ .table(&quot;mdp_prd.bronze.nrq_customerassetproperty_autoloader_nodups&quot;) </code></pre> <p>I also define a function to pass to <code>foreachBatch</code> when writing the stream.</p> <p>Part of the function looks like this:</p> <pre><code>from pyspark.sql.functions import col def update_changefeed(df, epochId): filtered_df = df.filter(col(&quot;_change_type&quot;).isin(&quot;insert&quot;, &quot;update_postimage&quot;, &quot;delete&quot;)) filtered_df.createOrReplaceTempView(&quot;test2&quot;) </code></pre> <p>Running the writing part of the code fails:</p> <pre><code>df.writeStream.foreachBatch(update_changefeed) \ .option(&quot;checkpointLocation&quot;, checkpoint_directory) \ .trigger(availableNow=True).start().awaitTermination() </code></pre> <p>with an error telling me that <code>The table or view &quot;test2&quot; cannot be found</code>.</p> <p>I tested the function by creating the <code>tmp view</code> from the streaming <code>df</code> directly, and it worked. I think the problem is that when the function is passed to <code>foreachBatch</code>, it does not create the view there. Is there any fix or solution for this?</p>
<python><pyspark><azure-databricks><spark-structured-streaming><databricks-autoloader>
2023-09-14 07:58:34
1
735
Greencolor
77,102,842
10,744,889
Keras One hot encoding (to_categorical) returns wrong output
<p>I have an output with three distinct classes:</p> <pre><code>import numpy as np np.unique(y_train) </code></pre> <blockquote> <p>array([0, 2, 5], dtype=uint8)</p> </blockquote> <p>When I one-hot-encode it using the following code, I get an output with shape (number of instances, 6), not (number of instances, 3).</p> <pre><code>from keras.utils import to_categorical y_train = to_categorical(y_train) y_test = to_categorical(y_test) y_train.shape </code></pre> <blockquote> <p>(17302, 6)</p> </blockquote> <p>I also passed the number of classes (3) to the <code>to_categorical</code> function, but it complained:</p> <blockquote> <p>IndexError: index 5 is out of bounds for axis 1 with size 3</p> </blockquote> <p>Does anybody have any ideas about this? Thanks for your help.</p>
<python><tensorflow><keras>
2023-09-14 07:58:05
2
537
Mojtaba Abdi Khassevan
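The shape (n, 6) in the question above comes from `to_categorical` encoding by index value, so the max label 5 forces 6 columns. A sketch of the usual remedy: first remap the labels {0, 2, 5} to contiguous indices {0, 1, 2} with `np.unique(..., return_inverse=True)`, then one-hot encode (shown with plain NumPy so it runs without Keras; `to_categorical(y_idx)` would give the same result):

```python
import numpy as np

y_train = np.array([0, 2, 5, 2, 0, 5], dtype=np.uint8)

# Map the raw labels {0, 2, 5} onto contiguous indices {0, 1, 2}.
classes, y_idx = np.unique(y_train, return_inverse=True)
print(classes)  # [0 2 5]

# One-hot encode the remapped labels.
y_onehot = np.eye(len(classes))[y_idx]
print(y_onehot.shape)  # (6, 3)
```

Keep `classes` around to translate predictions back to the original label values.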
77,102,762
13,706,389
Plotly show minor tick values
<p>In Python, I'm making a large figure with 15 subplots using Plotly. On these subplots, I want to indicate some important points along the x-axis. Initially, I used fig.add_vline to draw a vertical line + a label at these points. For some reason, this is extremely slow:</p> <pre class="lang-py prettyprint-override"><code>import time from plotly.subplots import make_subplots import numpy as np import plotly.graph_objects as go n = 15 fig = make_subplots(n, 1) for i in range(1, n+1): fig.add_trace(go.Scatter(x=np.arange(0, 3000, 5), y=np.random.randn(600), mode=&quot;lines&quot;), row=i, col=1) for point in range(0,3000,300): fig.add_vline(x=point, line_width=2, line_dash=&quot;dash&quot;, line_color=&quot;green&quot;, annotation_text=&quot;test&quot;) fig.show() </code></pre> <p>Takes 16 seconds.</p> <p>Therefore, I tried to achieve something similar by setting the figure's tickvalues and ticktext to the values and labels of the points that I want to indicate.</p> <pre class="lang-py prettyprint-override"><code>fig = make_subplots(n, 1) for i in range(1, n+1): fig.add_trace(go.Scatter(x=np.arange(0, 3000, 5), y=np.random.randn(600), mode=&quot;lines&quot;), row=i, col=1) fig.update_xaxes(showticklabels=True, tickmode=&quot;array&quot;, tickvals=list(range(300, 3000, 300)), ticktext=['test']*10, gridcolor=&quot;green&quot;, griddash='dash', gridwidth=2) fig.show() </code></pre> <p>This takes about 0.2 seconds which is perfect. It gives a figure with almost the same information except for the values on the xaxis. I'd like to still show these as well. I thought of doing this by adding minor ticks and showing their values as well but I don't manage to do this. How can I do this or resolve this in another way?</p>
<python><plotly>
2023-09-14 07:46:59
1
684
debsim
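For the question above, since the `tickvals`/`ticktext` route was the fast one, one possible workaround keeps it and folds the numeric value into each label with an HTML line break, which Plotly renders inside tick text. Building the lists is plain Python (the Plotly call is shown commented, since it matches the question's own `update_xaxes` usage):

```python
# Combine the marker label and the x value in one tick label; Plotly
# renders "<br>" as a line break, so both appear under the tick.
tickvals = list(range(300, 3000, 300))
ticktext = [f"test<br>{v}" for v in tickvals]
print(ticktext[:2])  # ['test<br>300', 'test<br>600']

# fig.update_xaxes(tickmode="array", tickvals=tickvals, ticktext=ticktext,
#                  gridcolor="green", griddash="dash", gridwidth=2)
```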
77,102,707
10,090,254
Type hinting of dependency injection
<p>I'm creating a declarative http client and have a problem with mypy linting.</p> <p>Error:</p> <pre><code>Incompatible default for argument &quot;user&quot; (default has type &quot;Json&quot;, argument has type &quot;dict\[Any, Any\]&quot;) </code></pre> <p>I have a &quot;Dependency&quot; class that implements the logic of value validation against a type and request modification:</p> <pre><code>class Dependency(abc.ABC): def __init__( self, default: Any = Empty, field_name: Union[str, None] = None, ): self.default = default self._overridden_field_name = field_name ... @abc.abstractmethod def modify_request(self, request: RawRequest) -&gt; RawRequest: raise NotImplementedError </code></pre> <p>The dependencies inherit from <code>Dependency</code>, e.g. Json:</p> <pre><code>class Json(Dependency): location = Location.json def __init__(self, default: Any = Empty): &quot;&quot;&quot;Field name is unused for Json.&quot;&quot;&quot; super().__init__(default=default) def modify_request(self, request: &quot;RawRequest&quot;) -&gt; &quot;RawRequest&quot;: ... return request </code></pre> <p>Then I use them as function argument defaults to declare:</p> <pre><code>@http(&quot;GET&quot;, &quot;/example&quot;) def test_get(data: dict = Json()): ... </code></pre> <p>It works as expected, but mypy raises a lot of errors.</p> <p>The question is: how do I deal with the type hinting?</p> <p>I need it to work like Query() or Body() in FastAPI, without changing the way of declaration.</p> <p>I tried making the Dependency class generic, but it didn't help.</p> <p>UPD:</p> <p>Sorry, I forgot to mention that the type hint can be a dataclass, a pydantic model, or any other type. It will then be deserialized when the function executes.</p> <p>Dict as type annotation:</p> <pre><code>@http(&quot;GET&quot;, &quot;/example&quot;) def test_get(data: dict = Json()): ...
</code></pre> <p>Pydantic model as type annotation:</p> <pre><code>class PydanticModel(BaseModel): … @http(&quot;GET&quot;, &quot;/example&quot;) def test_get(data: PydanticModel = Json()): ... </code></pre> <p>Dataclass as type annotation:</p> <pre><code>@dataclasses.dataclass class DataclassModel(): … @http(&quot;GET&quot;, &quot;/example&quot;) def test_get(data: DataclassModel = Json()): ... </code></pre> <p>It should support any type provided in type hint.</p>
<python><python-typing><mypy>
2023-09-14 07:37:57
2
486
floydya
77,102,648
11,164,450
Escape JSON double quotes in Azure Pipeline
<p>I am trying to pass an Azure Pipeline template JSON parameter as a string, but I can't figure out how to get this right so that I can do <strong>json.loads</strong> in the main pipeline to convert the string to JSON. <br> Template pipeline:</p> <pre><code>trigger: none resources: repositories: - repository: P1-T_automation type: git name: One/P1-T_automation ref: afat parameters: - name: accountvars type: object default: | { &quot;hostgroup1&quot;: {&quot;account1&quot;: [&quot;hostname1&quot;, &quot;hostname2&quot;, &quot;hostname3&quot;]}, &quot;hostgroup2&quot;: {&quot;account2&quot;: [&quot;hostname4&quot;, &quot;hostname5&quot;, &quot;hostname6&quot;]} } extends: template: Pipelines/main.yaml@P1-T_automation parameters: accountvars: ${{ parameters.accountvars }} </code></pre> <p>The following task <code>- bash: echo &quot;${{ parameters.accountvars }}&quot;</code> in the main pipeline gives me output without double quotes (which are required for JSON format):</p> <pre><code>{ hostgroup1: {account1: [hostname1, hostname2, hostname3]}, hostgroup2: {account2: [hostname4, hostname5, hostname6]} } </code></pre> <p>If I make the parameter type <strong>string</strong>, I get the same output.
<br> If I escape every double quote like this:</p> <pre><code>- name: accountvars type: string default: | { \&quot;hostgroup1\&quot;: {\&quot;account1\&quot;: [\&quot;hostname1\&quot;, \&quot;hostname2\&quot;, \&quot;hostname3\&quot;]}, \&quot;hostgroup2\&quot;: {\&quot;account2\&quot;: [\&quot;hostname4\&quot;, \&quot;hostname5\&quot;, \&quot;hostname6\&quot;]} } </code></pre> <p>I get a good echo on the bash task:</p> <pre><code>{ &quot;hostgroup1&quot;: {&quot;account1&quot;: [&quot;hostname1&quot;, &quot;hostname2&quot;, &quot;hostname3&quot;]}, &quot;hostgroup2&quot;: {&quot;account2&quot;: [&quot;hostname4&quot;, &quot;hostname5&quot;, &quot;hostname6&quot;]} } </code></pre> <p>but after passing it to the python script as an argument:</p> <pre><code>- task: PythonScript@0 displayName: Create fake data inputs: scriptSource: filePath scriptPath: $(Pipeline.Workspace)/self/Pipelines/scripts/datain.py arguments: ${{ variables.first_var }} $(topicName) ${{ parameters.accountvars }} workingDirectory: $(Pipeline.Workspace) pythonInterpreter: $(Pipeline.Workspace)/python_venv/bin/python </code></pre> <p>in the python script, <code>print(sys.argv[3])</code> and <code>print(str(sys.argv[3]))</code> both give me:</p> <pre><code>{ \hostgroup1&quot;: {&quot;account1&quot;: [&quot;hostname1&quot;, &quot;hostname2&quot;, &quot;hostname3&quot;]}, &quot;hostgroup2&quot;: {&quot;account2&quot;: [&quot;hostname4&quot;, &quot;hostname5&quot;, &quot;hostname6&quot;]} } </code></pre> <p>Why is the first escaped double quote mangled like this? <br> I need to make it a one-line string with double quotes so it fits into json.loads in the python script.</p> <p>How can I achieve this?</p>
<python><json><yaml><azure-pipelines>
2023-09-14 07:29:46
2
321
gipcu
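Independent of the quoting fight above, a common trick is to make the payload shell-safe before it ever reaches `arguments:`: compact it to one line and base64-encode it so no quotes survive to be mangled, then decode in the Python script. A sketch (the base64 step is a suggestion of mine, not something from the pipeline in the question):

```python
import base64
import json

payload = {
    "hostgroup1": {"account1": ["hostname1", "hostname2", "hostname3"]},
    "hostgroup2": {"account2": ["hostname4", "hostname5", "hostname6"]},
}

# Producer side: one-line JSON, then base64 so the shell cannot eat quotes.
encoded = base64.b64encode(json.dumps(payload).encode()).decode()

# Consumer side (inside datain.py): sys.argv[3] would hold `encoded`.
decoded = json.loads(base64.b64decode(encoded))
print(decoded == payload)  # True
```

The base64 alphabet contains no quotes, spaces, or backslashes, so the argument passes through YAML, bash, and `sys.argv` untouched.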
77,102,437
2,383,070
Polars from_pandas() fails with "ComputeError: only string-like values are supported in dictionaries"
<p>Converting a Pandas DataFrame to a Polars DataFrame will result in an error if the Pandas DataFrame has any columns with categorical dtypes that are either <code>int</code> or <code>float</code>.</p> <p>For example, this...</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import polars as pl df = pd.DataFrame( { &quot;label1&quot;: pd.Categorical([1, 1, 1, 2, 2]), &quot;label2&quot;: pd.Categorical([0.5, 0.4, 0.3, 0.4, 0.3]), &quot;measurement&quot;: [1, 2, 3, 4, 5], } ) pl_df = pl.from_pandas(df) </code></pre> <p>produces this error...</p> <pre><code>ComputeError: only string-like values are supported in dictionaries </code></pre> <p>This example was originally shared <a href="https://github.com/pola-rs/polars/issues/4172" rel="nofollow noreferrer">here</a>, but no workaround was suggested. How can I get around this error?</p>
<python><pandas><python-polars>
2023-09-14 06:50:54
2
3,511
blaylockbk
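Since Polars categoricals must be string-like, one workaround for the question above is to pre-process the pandas frame: cast any categorical column whose categories are non-strings back to the plain category dtype (or render them as strings). A sketch of the first option using only pandas; the subsequent `pl.from_pandas(df)` call is then expected to succeed, though that final step is not exercised here:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "label1": pd.Categorical([1, 1, 1, 2, 2]),
        "label2": pd.Categorical([0.5, 0.4, 0.3, 0.4, 0.3]),
        "measurement": [1, 2, 3, 4, 5],
    }
)

# Un-categorize any categorical column whose categories are not strings.
for col in df.select_dtypes(include="category").columns:
    if df[col].cat.categories.dtype != object:
        df[col] = df[col].astype(df[col].cat.categories.dtype)

print(df.dtypes.to_dict())
# pl.from_pandas(df) should now work, at the cost of losing the categorical encoding.
```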
77,102,291
219,153
How to broadcast this array with Numpy?
<p>In this Python 3.11 code snippet:</p> <pre><code>import numpy as np state = np.arange(48, dtype='u1').reshape((2, 8, 3)) pixels = [3, 4, 5] colors = [[42, 43, 44], [0, 1, 2]] state[0, pixels] = colors[0] # line 1 state[1, pixels] = colors[1] # line 2 # state[:, pixels, :] = colors # error </code></pre> <p>I would like to replace <code>line 1</code> and <code>line 2</code> with a single line of NumPy magic. The last (commented-out) line raises a broadcasting error.</p>
<python><numpy><array-broadcasting>
2023-09-14 06:27:22
1
8,585
Paul Jurczak
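A sketch of one single-line replacement for the question above: the target slice `state[:, pixels, :]` has shape (2, 3, 3), while `colors` is (2, 3), so inserting a length-1 axis in the middle lets broadcasting pair each color row with all three pixels:

```python
import numpy as np

state = np.arange(48, dtype='u1').reshape((2, 8, 3))
pixels = [3, 4, 5]
colors = [[42, 43, 44], [0, 1, 2]]

# (2, 3) -> (2, 1, 3): broadcasts against the (2, 3, 3) target slice.
state[:, pixels, :] = np.asarray(colors)[:, None, :]

print(state[0, 3], state[1, 5])  # [42 43 44] [0 1 2]
```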
77,102,001
14,430,730
How to Package and Publish a Script That Calls Other Scripts as a Pip Wheel
<p>I'm trying to create wheel for scripts generated with bazel <code>py_binary</code> rule, and encountered some problem.</p> <p>I try to give a minor example of the problem. I have a directory like this:</p> <pre><code>. ├── MANIFEST.in ├── bin │ ├── __init__.py │ ├── mytest │ └── mytest.lib │ ├── used1 │ └── used2 └── setup.py </code></pre> <p>where <code>bin/mytest.lib</code> contains many scripts, and <code>bin/mytest</code> is the entry of this package, calling all the scripts in <code>bin/mytest.lib</code>. It could look like:</p> <pre class="lang-bash prettyprint-override"><code>#!/bin/bash bash $0.lib/used1 bash $0.lib/used2 </code></pre> <p>I write a <code>setup.py</code> to create a wheel:</p> <pre class="lang-py prettyprint-override"><code>import setuptools setuptools.setup( name=&quot;mytest&quot;, version=&quot;0.0.1&quot;, maintainer=&quot;&quot;, maintainer_email=&quot;&quot;, packages=setuptools.find_packages(), install_requires=[], scripts=[&quot;bin/mytest&quot;], include_package_data=True, ) </code></pre> <p>and <code>MANIFEST.in</code> tries to get <strong>all</strong> files in <code>bin</code>:</p> <pre><code>graft bin </code></pre> <p>Then, I <code>python setup.py bdist_wheel</code> and <code>pip install dist/mytest-0.0.1-py3-none-any.whl</code>.</p> <p>I tried to use <code>mytest</code> but failed as <code>MY_ENV/bin/mytest.lib/used1</code> doesn't exist. 
(Here <code>MY_ENV</code> == <code>/home/dev/conda_dev/devenv/Linux/envs/devenv-3.8-c</code>)</p> <p>There are 2 major problems:</p> <ol> <li><code>mytest.lib</code> is in <code>MY_ENV/lib/python3.8/site-packages/bin</code> rather than <code>MY_ENV/lib/python3.8/site-packages/mytest-0.0.1.dist-info/bin</code></li> <li><code>mytest</code> actually expects <code>mytest.lib</code> be in <code>MY_ENV/bin</code> rather than <code>MY_ENV/lib/python3.8/site-packages/mytest-0.0.1.dist-info/bin</code> or other directories.</li> </ol> <p>If I cannot modify <code>mytest</code> or <code>mytest.lib</code> as they are generated by bazel (but I can add something if needed), how can I make <code>mytest</code> available?</p>
<python><setuptools><python-wheel><distribute>
2023-09-14 05:26:23
1
336
XuanInsr
77,101,989
9,681,645
How to run Flet app without Flutter for mobile
<p>I discovered that a Flet app runs on desktop (Linux) without the need for Flutter. I want to know if there is any way to package or run a Flet app without Flutter on Android or iOS.</p> <pre><code>import flet as ft def main(page: ft.Page): page.add(ft.Text(&quot;Hello World!&quot;)) ft.app(target=main) </code></pre>
<python><flutter><mobile><flet>
2023-09-14 05:23:18
1
709
ganiular
77,101,602
3,228,800
Is there a way to find all models that contain a GenericRelation to a particular model in Django?
<p>Let's say we have the following model with a <code>GenericForeignKey</code>:</p> <pre><code>class Comment(models.Model): customer_visible = models.BooleanField(default=False) comment = models.TextField() content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) object_id = models.PositiveIntegerField() model = GenericForeignKey() </code></pre> <p>And in several places it is referenced from other models:</p> <pre><code>class Invoice(models.Model): ... comments = GenericRelation(Comment) ... class Contract(models.Model): ... comments = GenericRelation(Comment) </code></pre> <p>Is it possible to get a list of the types that reference the Comment model, e.g. <code>[Contract, Invoice]</code> in this example?</p> <p>Similar functionality is possible in, for instance, C# with reflection.</p> <p>I need something that will work when there are no existing references, so <code>Comment.objects.values('content_type')</code> won't work.</p>
<python><python-3.x><django><reflection>
2023-09-14 03:28:07
1
567
Kylelem62
77,101,575
3,400,316
Python 3.10 type hinting for decorator to be used in a method
<p>I'm trying to use <code>typing.Concatenate</code> alongside <code>typing.ParamSpec</code> to type hint a decorator to be used by the methods of a class. The decorator simply receives flags and only runs if the class has that flag as a member. Code shown below:</p> <pre><code>import enum from typing import Callable, ParamSpec, Concatenate P = ParamSpec(&quot;P&quot;) Wrappable = Callable[Concatenate[&quot;Foo&quot;, P], None] class Flag(enum.Enum): FLAG_1 = enum.auto() FLAG_2 = enum.auto() def requires_flags(*flags: Flag) -&gt; Callable[[Wrappable], Wrappable]: def wrap(func: Wrappable) -&gt; Wrappable: def wrapped_f(foo: &quot;Foo&quot;, *args: P.args, **kwargs: P.kwargs) -&gt; None: if set(flags).issubset(foo.flags): func(foo, *args, **kwargs) return wrapped_f return wrap class Foo: def __init__(self, flags: set[Flag] | None = None) -&gt; None: self.flags: set[Flag] = flags or set() super().__init__() @requires_flags(Flag.FLAG_1) def some_conditional_method(self, some_int: int): print(f&quot;Number given: {some_int}&quot;) Foo({Flag.FLAG_1}).some_conditional_method(1) # prints &quot;Number given: 1&quot; Foo({Flag.FLAG_2}).some_conditional_method(2) # does not print anything </code></pre> <p>The point of using <code>Concatenate</code> here is that the first parameter of the decorated function must be an instance of <code>Foo</code>, which aligns with methods of Foo (for which the first parameter is <code>self</code>, an instance of <code>Foo</code>). The rest of the parameters of the decorated function can be anything at all, hence allowing <code>*args</code> and <code>**kwargs</code></p> <p>mypy is failing the above code with the following:</p> <pre><code>error: Argument 1 has incompatible type &quot;Callable[[Foo, int], Any]&quot;; expected &quot;Callable[[Foo, VarArg(Any), KwArg(Any)], None]&quot; [arg-type] note: This is likely because &quot;some_conditional_method of Foo&quot; has named arguments: &quot;self&quot;. 
Consider marking them positional-only </code></pre> <p>It's having an issue with the fact that at the call site, I'm not explicitly passing in an instance of <code>Foo</code> as the first argument (as I'm calling it as a method). Is there a way that I can type this strictly and correctly? Does the wrapper itself need to be defined within the class somehow so that it has access to <code>self</code> directly?</p> <p>Note that if line 5 is updated to <code>Wrappable = Callable[P, None]</code> then mypy passes, but this is not as strict as it could be, as I'm trying to enforce in the type that it can only be used on methods of <code>Foo</code> (or free functions which receive a <code>Foo</code> as their first parameter).</p> <p>Similarly, if I update <code>some_conditional_method</code> to be a free function rather than a method on <code>Foo</code>, then mypy also passes (this aligns with the linked SO question below). In this case it is achieving the strictness that I'm after, but I really want to be able to apply this to methods, not just free functions (in fact, it doesn't need to apply to free functions at all).</p> <p>This question is somewhat of an extension to <a href="https://stackoverflow.com/questions/47060133/python-3-type-hinting-for-decorator">Python 3 type hinting for decorator</a> but has the nuanced difference of the decorator needing to be used in a method.</p> <p>To be clear, the difference between this and that question is that the following (as described in the linked question) works perfectly:</p> <pre><code>@requires_flags(Flag.FLAG_1) def some_conditional_free_function(foo: Foo, some_int: int): print(f&quot;Number given: {some_int}&quot;) some_conditional_free_function(Foo({Flag.FLAG_1}), 1) # prints &quot;Number given: 1&quot; </code></pre>
<python><decorator><mypy><typing>
2023-09-14 03:17:27
1
795
Coxy
77,101,344
22,371,917
selenium headless not functioning with buttons properly
<p>I have some code to open a search url for a site, click the first result, and click a button. This all works fine until I try to use headless Chrome.</p> <p>Code (working -&gt; not using headless Chrome):</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait browser = webdriver.Chrome() browser.get(&quot;https://www.google.com/search?q=chatgpt&quot;) f=browser.find_elements(By.TAG_NAME, &quot;h3&quot;) f[0].click() button = browser.find_elements(By.TAG_NAME, 'button') button[0].click() print(&quot;Page title was '{}'&quot;.format(browser.title)) input() </code></pre> <p>Output (which is what I want):</p> <pre><code>DevTools listening on ws://127.0.0.1:58358/devtools/browser/e8f33ba5-9423-4900-9d36-035615599b61 Page title was 'ChatGPT' [6464:11928:0914/044721.426:ERROR:device_event_log_impl.cc(225)] [04:47:21.426] USB: usb_service_win.cc:415 Could not read device interface GUIDs: The system cannot find the file specified. 
(0x2) </code></pre> <p>Code (not working -&gt; using headless Chrome):</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait chrome_options = webdriver.ChromeOptions() chrome_options.add_argument(&quot;--no-sandbox&quot;) chrome_options.add_argument(&quot;--headless&quot;) chrome_options.add_argument(&quot;--disable-gpu&quot;) browser = webdriver.Chrome(options=chrome_options) browser.get(&quot;https://www.google.com/search?q=chatgpt&quot;) f=browser.find_elements(By.TAG_NAME, &quot;h3&quot;) f[0].click() button = browser.find_elements(By.TAG_NAME, 'button') button[0].click() print(&quot;Page title was '{}'&quot;.format(browser.title)) input() </code></pre> <p>Error:</p> <pre><code>DevTools listening on ws://127.0.0.1:58289/devtools/browser/e4c33772-2e8c-4060-96b6-6aa730ef53c2 [0914/044642.023:INFO:CONSOLE(0)] &quot;Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'unload'.&quot;, source: (0) [0914/044644.747:INFO:CONSOLE(0)] &quot;Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'unload'.&quot;, source: (0) [0914/044645.283:INFO:CONSOLE(0)] &quot;Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'browsing-topics'.&quot;, source: (0) [0914/044645.284:INFO:CONSOLE(0)] &quot;Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'interest-cohort'.&quot;, source: (0) [0914/044645.435:INFO:CONSOLE(0)] &quot;Refused to execute script from 'https://chat.openai.com/cdn-cgi/challenge-platform/h/g/scripts/alpha/invisible.js?ts=1694649600' because its MIME type ('') is not executable, and strict MIME type checking is enabled.&quot;, source: about:blank (0) [0914/044646.033:INFO:CONSOLE(0)] &quot;Error with Permissions-Policy header: Origin trial controlled 
feature not enabled: 'browsing-topics'.&quot;, source: (0) [0914/044646.033:INFO:CONSOLE(0)] &quot;Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'interest-cohort'.&quot;, source: (0) Traceback (most recent call last): File &quot;d:\RapidAPI\willitworkorwillittwerk.py&quot;, line 14, in &lt;module&gt; button[0].click() ~~~~~~^^^ IndexError: list index out of range </code></pre> <p>I tried google, youtube, chatgpt, and tiktok. Most sites are giving me the same thing, but example.com worked for some reason.</p> <p>This is my first time trying to use headless mode. I asked chatgpt and it said this isn't intended behavior, so if anyone has any information, that would be great.</p> <p>I also tried undetected_chromedriver, which I might just use, but it gave similar errors.</p> <p>Chrome version: 117.0.5938.63, WebDriver version: 117.0.5938.62, Selenium version: 4.12.0. I added <code>--headless=new</code>; it removed the Cloudflare errors, but I'm still getting the index-out-of-range error, which I don't get when running non-headless.</p>
<python><selenium-webdriver><selenium-chromedriver><headless><google-chrome-headless>
2023-09-14 01:49:57
1
347
Caiden
77,101,323
11,107,192
Debugging azure python durable function on vscode
<p>I have faithfully followed <a href="https://learn.microsoft.com/en-us/azure/azure-functions/durable/quickstart-python-vscode?tabs=linux%2Cazure-cli-set-indexing-flag&amp;pivots=python-mode-configuration" rel="nofollow noreferrer">this tutorial</a> on how to create a python durable function and everything went well.</p> <p>However, while I am able to start the project from the CLI within the context of the <strong>virtual env</strong>, I am unable to start <strong>vscode</strong> debugging.</p> <p>The vscode python interpreter was set to the one inside <strong>.env</strong>, but I consistently get <code>ModuleNotFoundError</code>, more specifically for the module <code>'azure.durable_functions'</code> (which is definitely installed), when starting debugging.</p> <hr> <p><strong>requirements.txt</strong></p> <pre><code>azure-functions azure-functions-durable </code></pre> <p><strong>.vscode/launch.json</strong></p> <pre><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Attach to Python Functions&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;attach&quot;, &quot;port&quot;: 9091, &quot;preLaunchTask&quot;: &quot;func: host start&quot;, } ] } </code></pre> <p><strong>.vscode/tasks.json</strong></p> <pre><code>{ &quot;version&quot;: &quot;2.0.0&quot;, &quot;tasks&quot;: [ { &quot;type&quot;: &quot;func&quot;, &quot;label&quot;: &quot;func: host start&quot;, &quot;command&quot;: &quot;host start&quot;, &quot;problemMatcher&quot;: &quot;$func-python-watch&quot;, &quot;isBackground&quot;: true } ] } </code></pre> <hr> <p>This issue was reproduced on two different Windows machines and an Ubuntu one.</p> <p>Python version: 3.11 Vscode version: latest and 1.77.3</p>
<python><azure><visual-studio-code><vscode-debugger>
2023-09-14 01:42:03
1
717
basquiatraphaeu
77,101,303
10,132,474
python asyncio is much slower than threads when reading files from hard disk
<p>There are about 1M images, I need to read them and insert the bytes into redis with python. I have two choices, the first is to use a thread pool, and the second is to use asyncio, since this is an IO-only task. However, I found that the thread pool method is much faster than the asyncio method. A piece of example code is like this:</p> <pre><code>import pickle import os import os.path as osp import re import redis import asyncio from multiprocessing.dummy import Pool r = redis.StrictRedis(host='localhost', port=6379, db=1) data_root = './datasets/images' print('obtain name and paths') paths_names = [] for root, dis, fls in os.walk(data_root): for fl in fls: if re.search('JPEG$', fl) is None: continue pth = osp.join(root, fl) name = re.sub('\.+/', '', pth) name = re.sub('/', '-', name) name = 'redis-' + name paths_names.append((pth, name)) print('num samples in total: ', len(paths_names)) ### this is slower print('insert into redis') async def insert_one(path_name): pth, name = path_name if r.get(name): return with open(pth, 'rb') as fr: binary = fr.read() r.set(name, binary) async def func(cid, n_co): num = len(paths_names) for i in range(cid, num, n_co): await insert_one(paths_names[i]) n_co = 256 loop = asyncio.get_event_loop() tasks = [loop.create_task(func(cid, n_co)) for cid in range(n_co)] fut = asyncio.gather(*tasks) loop.run_until_complete(fut) loop.close() ### this is more than 10x faster def insert_one(path_name): pth, name = path_name if r.get(name): return with open(pth, 'rb') as fr: binary = fr.read() r.set(name, binary) def func(cid, n_co): num = len(paths_names) for i in range(cid, num, n_co): insert_one(paths_names[i]) with Pool(128) as pool: pool.map(func, paths_names) </code></pre> <p>Here I have two questions that puzzled me a lot:</p> <ol> <li><p>What is the problem with the asyncio method, which makes it slower than the thread method?</p> </li> <li><p>Is it encouraged to add millions of tasks to the <code>gather</code> function?
Like this:</p> </li> </ol> <pre><code> num_parallel = 1000000000 tasks = [loop.create_task(fetch_func(cid, num_parallel)) for cid in range(num_parallel)] await asyncio.gather(*tasks) </code></pre>
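A likely explanation for the slowdown: `open()`, `fr.read()`, and the synchronous `redis` client calls never yield to the event loop, so the asyncio version runs every insert strictly one after another (plus coroutine overhead), while the thread pool overlaps 128 blocking operations. A minimal sketch of offloading the blocking work with `asyncio.to_thread`; the redis client is replaced by a plain dict here to keep it self-contained (with a real server, an async client such as `redis.asyncio` would be the thing to try):

```python
import asyncio

# Hypothetical stand-in for the blocking redis client: a plain dict.
store = {}

def insert_one_blocking(path, name):
    # Blocking file read; safe because it runs in a worker thread.
    with open(path, "rb") as fr:
        store[name] = fr.read()

async def insert_all(paths_names, max_workers=128):
    # Bound concurrency instead of creating millions of simultaneous tasks.
    sem = asyncio.Semaphore(max_workers)

    async def worker(path, name):
        async with sem:
            # to_thread (Python 3.9+) moves the blocking call off the event loop.
            await asyncio.to_thread(insert_one_blocking, path, name)

    await asyncio.gather(*(worker(p, n) for p, n in paths_names))
```

On question 2: `gather` over millions of tasks is legal, but every task object costs memory; a fixed pool of worker coroutines consuming an `asyncio.Queue` is the usual alternative for very large workloads.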
<python><io><python-asyncio>
2023-09-14 01:35:22
2
1,147
coin cheung
77,101,296
1,445,660
"ENOTEMPTY: directory not empty" when running 'cdk deploy'
<p>I have this error when I run <code>cdk deploy</code>. How can I know what causes this?</p> <pre><code>node:internal/fs/rimraf:202 throw err; ^ Error: ENOTEMPTY: directory not empty, rmdir '\\?\C:\Users\ro\AppData\Local\Temp\jsii-kernel-WYBhQW\node_modules\@aws-cdk\asset-kubectl-v20\layer' at Object.rmdirSync (node:fs:1229:10) at _rmdirSync (node:internal/fs/rimraf:260:21) at rimrafSync (node:internal/fs/rimraf:193:7) at node:internal/fs/rimraf:253:9 at Array.forEach (&lt;anonymous&gt;) at _rmdirSync (node:internal/fs/rimraf:250:7) at rimrafSync (node:internal/fs/rimraf:193:7) at node:internal/fs/rimraf:253:9 at Array.forEach (&lt;anonymous&gt;) at _rmdirSync (node:internal/fs/rimraf:250:7) { errno: -4051, syscall: 'rmdir', code: 'ENOTEMPTY', path: '\\\\?\\C:\\Users\\ro\\AppData\\Local\\Temp\\jsii-kernel-WYBhQW\\node_modules\\@aws-cdk\\asset-kubectl-v20\\layer' } </code></pre>
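On Windows this typically means some other process (antivirus, a search indexer, or a stale node/cdk process) still holds a file inside the `jsii-kernel-*` temp directory while jsii tries to clean it up. Closing other cdk/node processes and clearing `%TEMP%\jsii-kernel-*` before deploying often resolves it. A hedged sketch of a retry-based cleanup helper (illustrative only, not a cdk API):

```python
import glob
import os
import shutil
import tempfile
import time

def rmtree_with_retry(path, attempts=5, delay=0.5):
    """Delete a directory tree, retrying to ride out transient file locks."""
    for i in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except OSError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

def clean_jsii_temp_dirs():
    # Remove leftover jsii-kernel-* directories from previous cdk runs.
    for d in glob.glob(os.path.join(tempfile.gettempdir(), "jsii-kernel-*")):
        rmtree_with_retry(d)
```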
<python><python-3.x><amazon-web-services><aws-cloudformation><aws-cdk>
2023-09-14 01:33:48
0
1,396
Rony Tesler
77,101,287
3,387,716
Python - wild grouping of friends
<p>I generated a <code>dict</code> with about 60×10<sup>6</sup> elements:</p> <pre class="lang-py prettyprint-override"><code>friends = { 'aaron': {'john'}, 'bob': {'john'}, 'amy': {'gael', 'joe'}, 'gael': {'amy'}, 'joe': {'amy', 'patrick'}, 'john': {'aaron', 'bob'}, 'patrick': {'joe'} } </code></pre> <p>I need to group the friends, the friends of the friends, the friends of the friends of the friends, etc., and get an equivalent of:</p> <pre class="lang-py prettyprint-override"><code>groups = { {'aaron', 'john', 'bob'}, {'amy', 'gael', 'joe', 'patrick'} } </code></pre> <p>My approach would be to join all the <code>sets</code>:</p> <pre class="lang-py prettyprint-override"><code>for n1 in friends: for n2 in list(friends[n1]): if not friends[n1] is friends[n2]: friends[n1].update(friends[n2]) friends[n2] = friends[n1] </code></pre> <p>Which results in:</p> <pre class="lang-py prettyprint-override"><code>friends = { 'aaron': {'bob', 'aaron', 'john'}, 'bob': {'bob', 'aaron', 'john'}, 'amy': {'patrick', 'gael', 'joe', 'amy'}, 'gael': {'patrick', 'gael', 'joe', 'amy'}, 'joe': {'patrick', 'gael', 'joe', 'amy'}, 'john': {'bob', 'aaron', 'john'}, 'patrick': {'patrick', 'gael', 'joe', 'amy'} } </code></pre> <p>And then get the unique sets out of it:</p> <pre class="lang-py prettyprint-override"><code>groups = set() for n in friends: groups.add(friends[n]) </code></pre> <p>But that isn't permitted:</p> <pre class="lang-none prettyprint-override"><code>TypeError: unhashable type: 'set' </code></pre> <p>I have to say, the whole method feels lacking when you consider the size of the input; I'm sure there is a better way to do the task. Could someone point me in the right direction?</p>
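One way around the `unhashable type: 'set'` error is to compute connected components directly and store each component as a `frozenset`, which is hashable. A sketch using an iterative traversal, which visits every person and friendship once (O(N + E)) instead of repeatedly merging sets:

```python
def friend_groups(friends):
    """Group people into connected components; returns a set of frozensets."""
    groups = set()
    seen = set()
    for person in friends:
        if person in seen:
            continue
        # Iterative traversal from each not-yet-visited person.
        component = []
        stack = [person]
        seen.add(person)
        while stack:
            current = stack.pop()
            component.append(current)
            for other in friends.get(current, ()):
                if other not in seen:
                    seen.add(other)
                    stack.append(other)
        # frozenset is hashable, so it can live inside the outer set.
        groups.add(frozenset(component))
    return groups
```

On the sample data this yields `{frozenset({'aaron', 'john', 'bob'}), frozenset({'amy', 'gael', 'joe', 'patrick'})}`.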
<python><set>
2023-09-14 01:30:20
2
17,608
Fravadona
77,101,265
8,471,995
python yaml save and load a nested class
<p>I have a nested class:</p> <pre class="lang-py prettyprint-override"><code># hello.py import pydantic as dan class Hello: @dan.dataclasses.dataclass class Data: data_a: int import yaml data = Hello.Data(1) filename = &quot;hi.yaml&quot; with open(filename, &quot;w&quot;) as fh: yaml.dump(data, fh) with open(filename, &quot;r&quot;) as fh: yaml.load(fh, yaml.Loader) # Causes the error </code></pre> <p>It seems the dumper doesn't recognize that the <code>Data</code> is defined inside <code>Hello</code> and saves the class as <code>python/object:hello.Data</code>. I expected <code>python/object:hello.Hello.Data</code>.</p> <p>Is there a workaround for this?</p> <p>I have a nested class because I have multiple classes that requires a dedicated <code>dataclass</code> for each.</p> <p>I found <a href="https://smarie.github.io/python-yamlable/generated/gallery/1_basic_usage_demo/" rel="nofollow noreferrer">this</a> library. It will be painful for me to add the decorator for each class. But I will try it for now.</p>
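PyYAML builds its default tag from `__module__` and `__name__` (not `__qualname__`), which is why the nested class is written as `hello.Data` and cannot be resolved again on load. One workaround is registering an explicit tag for the nested class. A sketch, with a stdlib `dataclass` standing in for the pydantic one:

```python
# Sketch: register an explicit tag for the nested class so PyYAML can both
# dump and load it (a stdlib dataclass stands in for the pydantic one here).
from dataclasses import dataclass

import yaml

class Hello:
    @dataclass
    class Data:
        data_a: int

TAG = "!Hello.Data"

def _represent_data(dumper, obj):
    return dumper.represent_mapping(TAG, {"data_a": obj.data_a})

def _construct_data(loader, node):
    return Hello.Data(**loader.construct_mapping(node))

yaml.add_representer(Hello.Data, _represent_data)
yaml.add_constructor(TAG, _construct_data, Loader=yaml.Loader)
```

`yaml.dump(Hello.Data(1))` now emits `!Hello.Data`, and `yaml.load(..., yaml.Loader)` reconstructs the nested class; the same representer/constructor pair can be generated in a loop for each nested dataclass, avoiding a per-class decorator.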
<python><yaml>
2023-09-14 01:20:27
2
1,617
Inyoung Kim 김인영
77,101,068
3,260,052
PySpark - How to filter dates with OR clause?
<p>I need to create a PySpark script that, among other conditions and filters, queries a given table to look for rows either after a given date OR another. This is how my code looks:</p> <pre class="lang-py prettyprint-override"><code>#imports from pyspark.sql import HiveContext from pyspark.sql.functions import col, concat_ws, collect_list from datetime import date ... hive_context = HiveContext(sc) today = str(date.today()) date_filter_condition = (col(&quot;created&quot;) &gt;= today) | (col(&quot;updated&quot;) &gt;= today) lookup = hive_context.table(&quot;MY_TABLE&quot;).filter(col(&quot;status&quot;) == 1 &amp; date_filter_condition) </code></pre> <p>The above code throws an error:</p> <blockquote> <p>An error occurred while calling o365.and. Trace: py4j.Py4JException: Method and([class java.lang.Integer]) does not exist at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318) at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326) at py4j.Gateway.invoke(Gateway.java:274) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:748)</p> </blockquote> <p>If I remove the part that includes the dates filter (&quot;&amp; date_filter_condition&quot;) the script works just fine. How to properly filter dates using pySpark? I'm using the date in the following format YYYY-MM-DD.</p>
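The `Method and([class java.lang.Integer]) does not exist` error points at Python operator precedence rather than the dates themselves: `&` binds tighter than `==`, so `col("status") == 1 & date_filter_condition` is parsed as `col("status") == (1 & date_filter_condition)`, asking py4j to AND an integer with a Column. Parenthesizing each comparison should fix it. The precedence effect, demonstrated with plain ints so it is runnable anywhere, with the corrected (unverified) PySpark line shown in a comment:

```python
# & binds tighter than ==, shown here with plain ints.
unparenthesized = (1 == 1 & 0)        # parsed as 1 == (1 & 0) -> 1 == 0 -> False
parenthesized = (1 == 1) & (0 == 0)   # -> True & True -> True

# The PySpark filter therefore needs parentheses around each comparison
# (sketch, same names as in the question, untested here):
#   lookup = hive_context.table("MY_TABLE").filter(
#       (col("status") == 1) & date_filter_condition
#   )
```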
<python><python-3.x><pyspark>
2023-09-13 23:56:40
1
1,305
Marcos Guimaraes
77,100,962
8,942,319
Which is best for defining a complex type: CustomType(TypedDict) or CustomType = NewType(...)?
<pre><code>from typing import TypedDict class CustomType(TypedDict): id: str path: str targets: list[str] </code></pre> <p>or</p> <pre><code>from typing import NewType CustomType = NewType(&quot;CustomType&quot;, dict[str, str, list[str]]) </code></pre> <p>Looks like in either case I can use <code>CustomType</code> in a type hint</p> <pre><code>def method(arg: CustomType) -&gt; None: print(arg) </code></pre> <p>I feel that the TypedDict method is better because it's more explicit and packs more info into it.</p> <p>But not sure which is better if my goal is to simply have type hints for complex data types.</p>
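One thing worth noting: `dict[str, str, list[str]]` is not a valid parameterization, since plain `dict` takes exactly two type arguments (key and value) and cannot express a different value type per key. That per-key typing is exactly what `TypedDict` exists for, and it also records the required key names. A small sketch of what the `TypedDict` buys you:

```python
from typing import TypedDict, get_type_hints

class CustomType(TypedDict):
    id: str
    path: str
    targets: list[str]

def method(arg: CustomType) -> str:
    # A static checker knows arg["targets"] is list[str] here;
    # a NewType over dict could not say so per key.
    return ",".join(arg["targets"])

item: CustomType = {"id": "a", "path": "/tmp/x", "targets": ["t1", "t2"]}
```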
<python><python-typing>
2023-09-13 23:08:11
1
913
sam
77,100,890
8,145,356
Pydantic v2 custom type validators with info
<p>I'm trying to update my code to pydantic v2 and having trouble finding a good way to replicate the custom types I had in version 1. I'll use my custom date type as an example. The original implementation and usage looked something like this:</p> <pre><code>from datetime import date from pydantic import BaseModel class CustomDate(date): # Override POTENTIAL_FORMATS and fill it with date format strings to match your data POTENTIAL_FORMATS = [] @classmethod def __get_validators__(cls): yield cls.validate_date @classmethod def validate_date(cls, field_value, values, field, config) -&gt; date: if type(field_value) is date: return field_value return to_date(field.name, field_value, cls.POTENTIAL_FORMATS, return_str=False) class ExampleModel(BaseModel): class MyDate(CustomDate): POTENTIAL_FORMATS = ['%Y-%m-%d', '%Y/%m/%d'] dt: MyDate </code></pre> <p>I tried to follow the <a href="https://docs.pydantic.dev/latest/usage/types/custom/#as-a-method-on-a-custom-type" rel="noreferrer">official docs</a> and the examples laid out <a href="https://github.com/pydantic/pydantic/discussions/5581" rel="noreferrer">here</a> below and it mostly worked, but the <code>info</code> parameter does not have the fields I need (<code>data</code> and <code>field_name</code>). 
Attempting to access them gives me an AttributeError.</p> <pre><code>info.field_name *** AttributeError: No attribute named 'field_name' </code></pre> <p>Both the <code>Annotated</code> and <code>__get_pydantic_core_schema__</code> approaches have this issue</p> <pre><code>from datetime import date from typing import Annotated from pydantic import BaseModel, BeforeValidator from pydantic_core import core_schema class CustomDate: POTENTIAL_FORMATS = [] @classmethod def validate(cls, field_value, info): if type(field_value) is date: return field_value return to_date(info.field_name, field_value, potential_formats, return_str=False) @classmethod def __get_pydantic_core_schema__(cls, source, handler) -&gt; core_schema.CoreSchema: return core_schema.general_plain_validator_function(cls.validate) def custom_date(potential_formats): &quot;&quot;&quot; :param potential_formats: A list of datetime format strings &quot;&quot;&quot; def validate_date(field_value, info) -&gt; date: if type(field_value) is date: return field_value return to_date(info.field_name, field_value, potential_formats, return_str=False) CustomDate = Annotated[date, BeforeValidator(validate_date)] return CustomDate class ExampleModel(BaseModel): class MyDate(CustomDate): POTENTIAL_FORMATS = ['%Y-%m-%d', '%Y/%m/%d'] dt: MyDate dt2: custom_date(['%Y-%m-%d', '%Y/%m/%d']) </code></pre> <p>If I just include the <code>validate_date</code> function as a regular <code>field_validator</code> I get <code>info</code> with all the fields I need, it's only when using it with custom types that I see this issue. How do I write a custom type that has access to previously validated fields and the name of the field being validated?</p>
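If the custom-type route keeps hiding `field_name`/`data`, one fallback that does expose them in v2 is a plain `field_validator`, whose `info` argument carries both attributes. A reduced sketch, assuming Pydantic v2; the `to_date` helper is replaced by an illustrative `parse_date` (an assumption, not the original implementation):

```python
# Sketch assuming Pydantic v2; parse_date is a hypothetical stand-in for to_date.
from datetime import date, datetime
from pydantic import BaseModel, field_validator

def parse_date(field_name, value, formats):
    if isinstance(value, date):
        return value
    for fmt in formats:
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            pass
    raise ValueError(f"{field_name}: no format matched {value!r}")

class ExampleModel(BaseModel):
    dt: date

    @field_validator("dt", mode="before")
    @classmethod
    def _parse_dt(cls, value, info):
        # On field validators, info exposes info.field_name and info.data.
        return parse_date(info.field_name, value, ["%Y-%m-%d", "%Y/%m/%d"])
```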
<python><python-3.x><pydantic>
2023-09-13 22:43:56
3
1,191
hamdog
77,100,877
3,842,845
How to select certain columns and rows based on keyword in column1?
<p>I am trying to extract/select the bottom two areas (blues) based on the values in column1.</p> <p>The data is in a CSV file.</p> <pre><code>Case1 Not Started: 12 Sent: 3 Completed: 3 Division Community ResidentName Date DocumentStatus Last Update Test Station Jane Doe 9/6/2023 Completed 9/4/2023 Test Station 2 John Doe 9/6/2023 Completed 9/4/2023 Alibaba Fizgerald Super Man 9/6/2023 Not Started Iceland Kingdom Super Woman 9/6/2023 Not Started ,,,,, Case2 Not Started: 6 Sent: 0 Completed: 2 Division Community Resident Name Date DocumentStatus Last Update Station Kingdom Pretty Woman 9/6/2023 Not Started My Goodness Ugly Man 5/24/2023 Not Started Landmark Cinema Nice Guys 5/25/2023 Not Started Iceland Kingdom Mr. Heroshi 1/24/2023 Not Started More Kingdom King Kong ,,,, Case3 Not Started: 8 Sent: 2 Completed: 3 Division Community Resident Name Date DocumentStatus Last Update Station Kingdom1 Pretty Woman2 1/6/2023 Completed My Goodness1 Ugly Man1 4/24/2023 Completed Landmark2 Nice Guys 9/25/2023 Not Started Iceland Kingdom2 Mr. 
Heroshi2 2/24/2023 Not Started More Kingdom 2 King Kong </code></pre> <p>I am trying to possibly use the Pandas library if it is easier.</p> <p><a href="https://i.sstatic.net/jpFlR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jpFlR.png" alt="enter image description here" /></a></p> <p>So, the logic is:</p> <p><strong>Case1</strong>:</p> <ol> <li>Find the row that has data where column1 = '<strong>Case1</strong>'</li> <li>Go down 3 rows, go right 1 column.</li> <li>Take all the data of 5 columns until it hits a row that has no data.</li> </ol> <p><strong>Case2</strong>:</p> <ol> <li>Find the row that has data where column1 = '<strong>Case2</strong>'</li> <li>Go down 3 rows, go right 1 column.</li> <li>Take all the data of 5 columns until it hits a row that has no data.</li> </ol> <p>There is <strong>Case3</strong> on the bottom of <strong>Case2</strong>, but I only need to include <strong>Case1</strong> and <strong>Case2</strong>.</p>
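The stated rule can be implemented with the stdlib `csv` module before involving pandas: locate the row whose first cell equals the case name, go down 3 rows, shift right 1 column, and collect 5 columns until an empty row. A sketch (the exact offsets are assumptions taken from the description and may need tuning against the real file):

```python
import csv
import io

def extract_case(csv_text, case_name, n_cols=5):
    """Return the data rows for one 'Case' block, per the stated rule:
    find the row whose first cell equals case_name, go down 3 rows,
    go right 1 column, and take n_cols columns until an empty row."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    start = next(i for i, r in enumerate(rows) if r and r[0].strip() == case_name)
    out = []
    for row in rows[start + 3:]:
        cells = [c.strip() for c in row[1:1 + n_cols]]
        if not any(cells):          # stop at the first row with no data
            break
        out.append(cells)
    return out
```

The two extracted blocks can then be wrapped with `pd.DataFrame(extract_case(text, "Case1"))` and `"Case2"` if DataFrames are wanted.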
<python><pandas>
2023-09-13 22:41:34
2
1,324
Java
77,100,832
643,357
conda install <package> in a read-only multiuser miniconda setup results in an orphaned package installation
<p>I am an admin in a computer science department where hundreds of users with limited filesystem usage quotas are trying to set up conda environments for various computational tasks. So I've been experimenting with setting up multi-user miniconda environments on shared storage in an attempt to get at least the package installs out of users' home directories.</p> <p>The simple method described here: <a href="https://docs.anaconda.com/free/anaconda/install/multi-user/" rel="nofollow noreferrer">https://docs.anaconda.com/free/anaconda/install/multi-user/</a> won't work for us because we have lots of users that can't necessarily be trusted, so a world or even group-writable conda install is not an option. Users could just cd to the miniconda directory and wreak havoc.</p> <p>The ideal I was aiming for was to set up a number of frequently implemented environments for using things like numpy and pytorch in <strong>/mnt/opt/miniconda/envs</strong>, which users could then activate by (for example)</p> <pre><code>source /mnt/opt/miniconda/bin/activate pytorch </code></pre> <p>This works, but users are unable to install supplemental packages because <strong>/mnt/opt/miniconda/envs/pytorch</strong> is read-only. What I thought might be possible is that these supplemental packages would be installed in <strong>/home/$USER/.conda/pkgs</strong> and associated with the environment for that user only, but this doesn't seem to work. However, in testing things, I ran into a rather strange anomaly. If I run</p> <pre><code>source /mnt/opt/miniconda/bin/activate numpy conda install scipy </code></pre> <p>the install fails with what amounts to a write permission error.
If, however, I set this environment variable in .bashrc first:</p> <pre><code>export CONDA_PKGS_DIRS=&quot;/home/$USER/.conda/pkgs&quot; </code></pre> <p>and then repeat:</p> <pre><code>source /mnt/opt/miniconda/bin/activate numpy conda install scipy </code></pre> <p>the packages get installed in the correct directory:</p> <pre><code>(numpy) pgoetz@texas-tea pkgs$ pwd /home/pgoetz/.conda/pkgs (numpy) pgoetz@texas-tea pkgs$ ls cache scipy-1.11.1-py311h08b1b3b_0 libgfortran5-11.2.0-h1234567_1 scipy-1.11.1-py311h08b1b3b_0.conda libgfortran5-11.2.0-h1234567_1.conda urls libgfortran-ng-11.2.0-h00389a5_1 urls.txt libgfortran-ng-11.2.0-h00389a5_1.conda </code></pre> <p>but they're not accessible in the environment:</p> <pre><code>(numpy) pgoetz@texas-tea pkgs$ python Python 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import numpy &gt;&gt;&gt; import scipy Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'scipy' &gt;&gt;&gt; </code></pre> <p>So these are ... orphaned conda packages? Does anyone know what's going on here, or is this just an unanticipated edge case?</p>
<python><anaconda><conda><miniconda>
2023-09-13 22:27:03
1
919
pgoetz
77,100,651
13,457,123
How to find confidence Yolo V8
<p>I trained a model to detect traffic lights and classify their color, <code>[red,green,yellow,off]</code>.</p> <p>I want to only display the lights with a confidence above 50%, but I can't figure out how to do that with yolo v8. I've tried using <code>.conf</code> and <code>.prob</code> as the documentation states but it's all empty. Here is my current script. It streams the webcam view and looks for stoplights.</p> <p>Any help to get confidence values or even just the classification values from this would be amazing. I have uploaded the <a href="https://github.com/Syazvinski/Traffic-Light-Detection-Color-Classification" rel="nofollow noreferrer">model to github here</a> for people that want to test.</p> <p>I recommend using the <code>best_traffic_nano_yolo.pt</code> model as it's the most lightweight.</p> <p>Googling <code>traffic light at intersection</code> and holding your phone up to the webcam should be enough for the model to detect and classify the light.</p> <pre class="lang-py prettyprint-override"><code>import cv2 from PIL import Image from ultralytics import YOLO # Load a pretrained YOLOv8n model model = YOLO('/Models/best_traffic_nano_yolo.pt') # open a video file or start a video stream cap = cv2.VideoCapture(0) # replace with 0 for webcam while cap.isOpened(): # Capture frame-by-frame ret, frame = cap.read() if not ret: break # flip the image # frame = cv2.flip(frame, -1) # Run inference on the current frame results = model(frame) # results list for r in results: frame = r.plot() # Display the resulting frame cv2.imshow('frame', frame) # Press 'q' on keyboard to exit if cv2.waitKey(1) &amp; 0xFF == ord('q'): break # After the loop release the cap object and destroy all windows cap.release() cv2.destroyAllWindows() </code></pre>
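In ultralytics YOLOv8 the per-box scores live on `r.boxes` rather than on the result itself: `r.boxes.conf` (confidence tensor) and `r.boxes.cls` (class indices); you can also pass `conf=0.5` into the call so the model drops low-confidence boxes itself. Treat those attribute names as assumptions to verify against your installed version. The filtering logic, shown with plain tuples so it is self-contained:

```python
# Hypothetical detections as (class_name, confidence) pairs, e.g. built from
# zip(r.boxes.cls.tolist(), r.boxes.conf.tolist()) in ultralytics YOLOv8.
def keep_confident(detections, threshold=0.5):
    """Keep only detections whose confidence exceeds the threshold."""
    return [(name, conf) for name, conf in detections if conf > threshold]

# In the webcam loop this would become (untested sketch):
#   results = model(frame, conf=0.5)        # let YOLO filter for you, or:
#   for r in results:
#       for cls_id, conf in zip(r.boxes.cls.tolist(), r.boxes.conf.tolist()):
#           label = model.names[int(cls_id)]   # e.g. 'red', 'green', ...
```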
<python><machine-learning><computer-vision><yolo><yolov8>
2023-09-13 21:40:28
1
598
Stephan Yazvinski
77,100,616
12,133,280
Using odeint on multidimensional array
<p>I would like to apply odeint to an array of initial conditions and return a derivative that has the same size as those initial conditions. I could loop over each initial condition, but I think that will be very slow with higher N. The example below is from the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html" rel="nofollow noreferrer">docs for odeint</a>. sol_1 works as expected, but sol_2 gives the error <code>ValueError: Initial condition y0 must be one-dimensional.</code></p> <p>Does anyone have a clever solution for how to make sol_2 run without just looping over each initial condition? Thanks.</p> <pre><code>import numpy as np from scipy.integrate import odeint def pend(y, t, b, c): theta, omega = y dydt = [omega, -b*omega - c*np.sin(theta)] return dydt b = 0.25 c = 5.0 t = np.linspace(0, 10, 101) # 0D initial values, directly from docs y0_1 = [np.pi - 0.1, 0] sol_1 = odeint(pend, y0_1, t, args=(b, c)) # 1D initial values, directly from docs y0_2 = [np.ones((3)) * (np.pi - 0.1), np.zeros((3))] # Error here sol2 = odeint(pend, y0_2, t, args=(b, c)) </code></pre>
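`odeint` really does require a 1-D `y0`, so the standard workaround is to flatten the whole batch of initial conditions into one long vector and reshape inside the derivative function; NumPy then evaluates all pendulums at once in a single solver call. A sketch:

```python
import numpy as np
from scipy.integrate import odeint

def pend_batched(y, t, b, c):
    # y holds [theta_1..theta_k, omega_1..omega_k] flattened to 1-D.
    theta, omega = y.reshape(2, -1)
    return np.concatenate([omega, -b * omega - c * np.sin(theta)])

b, c = 0.25, 5.0
t = np.linspace(0, 10, 101)

k = 3  # number of pendulums integrated at once
y0 = np.concatenate([np.full(k, np.pi - 0.1), np.zeros(k)])
sol = odeint(pend_batched, y0, t, args=(b, c))  # shape (101, 2*k)
thetas = sol[:, :k]  # theta trajectories, one column per pendulum
```

One caveat: this couples the solver's adaptive step size across all systems, which is fine when they behave similarly; otherwise `solve_ivp` per system (or in chunks) may be preferable.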
<python><numpy><scipy><differential-equations><odeint>
2023-09-13 21:31:02
2
447
pasnik
77,100,598
993,812
Percentage Monthly Data
<p>I've got some volume data on a daily level with a type column.</p> <pre><code> Day Type Volume 0 20230101 0 -23336.289 1 20230101 1 2930009.848 2 20230101 2 -2906673.559 3 20230102 0 7377.021 4 20230102 1 2892521.704 5 20230102 2 -2899898.724 </code></pre> <p>I'd like to create a column, maybe <code>mon_pct</code>, that uses the volumes from each day of type 0 and divides them by the monthly sum of volumes of type 1. For example, (-23336.289 / (Jan Sum Type 1)) * 100. Only rows of type 0 would get a resultant value.</p> <p>How can I accomplish this?</p>
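A sketch with `groupby(...).transform`: derive a month key from `Day`, broadcast each month's type-1 volume sum back onto every row, then divide only the type-0 rows (other rows are left as NaN in `mon_pct`):

```python
import pandas as pd

df = pd.DataFrame({
    "Day": [20230101, 20230101, 20230101, 20230102, 20230102, 20230102],
    "Type": [0, 1, 2, 0, 1, 2],
    "Volume": [-23336.289, 2930009.848, -2906673.559,
               7377.021, 2892521.704, -2899898.724],
})

month = df["Day"].astype(str).str[:6]                   # e.g. '202301'
type1_monthly = (df["Volume"].where(df["Type"].eq(1))   # NaN except type 1
                   .groupby(month).transform("sum"))    # monthly sum, broadcast
df.loc[df["Type"].eq(0), "mon_pct"] = df["Volume"] / type1_monthly * 100
```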
<python><pandas>
2023-09-13 21:27:57
4
555
John
77,100,467
1,659,599
FontFamily in music21 lyrics
<p>Is it possible to change the FontFamily of the lyrics of a note or a chord?</p> <p>I've tried <code>style.fontFamily</code> on <code>Lyric</code> but no success.</p> <p>With <code>TextBox</code> class I can change the fontFamily. But I cannot tell music21 to display the <code>TextBox</code> aligned with the note or chord.</p> <p>I've created a &quot;pianofont&quot; that I would like to use to display chords (see image below)</p> <p><a href="https://i.sstatic.net/pjdSC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pjdSC.png" alt="enter image description here" /></a></p>
<python><music21>
2023-09-13 21:00:38
1
7,359
wolfrevo
77,100,435
2,954,839
Re-usable Utility Methods in Digital Ocean Functions in Python
<p>I would like to be able to re-use code across multiple Digital Ocean Functions (their serverless tool). I do not want to publish a library if I can avoid it.</p> <p>I have tried a couple of approaches, but am willing to do about anything to get this to work.</p> <h2>Approach 1. Single File</h2> <p>My first attempt was using a single file with multiple Function entry points.</p> <p>Under <code>packages/tom/</code> I have the file <code>tomsfns.py</code>:</p> <pre><code>def t1(): return { &quot;body&quot;, make_msg(&quot;T1&quot;) } def t2(): return { &quot;body&quot;, make_msg(&quot;T2&quot;) } def make_msg(caller): return &quot;Hello &quot; + caller </code></pre> <p>Dumb as toast, but should work fine. Under packages in the project.yml I have the following:</p> <pre><code>- name: tom functions: - name: tomsfns main: t1 binary: false runtime: python:3.11 web: false - name: tomsfns main: t2 binary: false runtime: python:3.11 web: false </code></pre> <p>What actually happens when you deploy that is you get one Function, <code>tom/tomsfns</code> and the entirety of the code in that single file. What I hoped for was two Functions off the same codebase, even if they duplicated the code.</p> <h2>Multi File</h2> <p>My second attempt was was to break out the desired utility into it’s own python file and then have separate files call it. 
This is the ‘style’ that makes the most sense to me but doesn’t work either.</p> <p>Both the structure and the entirety of their contents are:</p> <pre><code> packages rick r1.py import make_msg def main(): return { &quot;body&quot;: make_msg(&quot;R1&quot;) } r2.py import make_msg def main(): return { &quot;body&quot;: make_msg(&quot;R2&quot;) } rickutils.py def make_msg(caller): return &quot;Hello &quot; + caller </code></pre> <p><em>(Begin: added after the original post)</em></p> <p>The relevant project.yml is:</p> <pre><code> - name: rick functions: - name: r1 runtime: python:3.11 web: false - name: r2 runtime: python:3.11 web: false </code></pre> <p><em>(End: Added after original post)</em></p> <p>This results in what appears to be the right structure and code in the published functions:</p> <pre><code> Deployed functions ('doctl sls fn get &lt;funcName&gt; --url' for URL): - rick/r1 - rick/r2 - rick/rickutils - tom/tomsfns </code></pre> <p>However</p> <ul> <li>I didn’t want rickutils as a function</li> <li>More importantly the functions do not run.</li> </ul> <p><code>stderr: Invalid function: No module named 'make_msg'</code></p> <p>I have tried every combination of import statement and file location I can think of to get that method into the Function code but nothing has worked.</p> <p>Does anybody have any thoughts or examples of this kind of thing?</p> <p>Thanks.</p> <h1>The Solution</h1> <p>(as I implemented it)</p> <p>from the proj root</p> <pre><code>. 
├── lib │ └── utils.py ├── packages │ └── sample │ └── r1 │ ├── .include │ └── r1.py └── project.yml </code></pre> <p>There are three key points.</p> <ol> <li>The lib directory holds the 'common' files.</li> </ol> <p>They will be copied into each function at build time.</p> <ol start="2"> <li>In your IDE, you will probably want to mark this directory as a 'Sources Root'.</li> </ol> <p>This way you can 'see' the utility methods during development.</p> <ol start="3"> <li>The .include file moves the files into place.</li> </ol> <p>Mine looks like this:</p> <pre><code> ../../../lib/utils.py </code></pre> <p><strong>NOTE:</strong></p> <p>I was able to avoid a build.sh for now but as soon as I start including external libraries, I will have to do that in the build.sh script. However, I do not expect that to change any of this.</p>
<python><function><digital-ocean>
2023-09-13 20:54:30
1
445
Dilapidus
77,100,252
1,668,622
How to reliably terminate a process started via asyncssh?
<p>I have the following snippet starting a long-running process locally via ssh:</p> <pre class="lang-py prettyprint-override"><code>from asyncio import gather, run import asyncssh async def listen(stream): async for line in stream: print(line) async def main(): async with asyncssh.connect(&quot;localhost&quot;, username=&quot;root&quot;) as conn: process = await conn.create_process(&quot;while true; do date; sleep 1; done&quot;) await gather( listen(process.stdout), listen(process.stderr), process.wait(), ) run(main()) </code></pre> <p>Works fine for me (you have to be able to connect via <code>ssh</code> locally to run the snippet, of course), except I don't know how to make sure the process gets terminated when the parent process terminates.</p> <p>My first idea was to <code>terminate()</code>/<code>kill()</code> the process in a <code>finally</code> block around <code>gather</code>. This works when the parent process gets a signal it can handle, e.g. <code>SIGINT</code>. But in case it's been executed within an IDE/debugger or it receives <code>SIGSEGV</code>, the child process will stay.</p> <p>Is there a way to couple the spawned process or ssh connection with the parent process in order to avoid stray processes?</p>
<python><ssh><multiprocessing><asyncssh>
2023-09-13 20:19:10
0
9,958
frans
77,100,239
13,112,739
Extract multicolumn(?) PDFs in python
<p>I'm trying to write a program to convert multi-page PDFs to plain text in bulk (think many-page textbooks). If I run it through <code>PyPDF2</code>, I find issues where if a particular page has 2 columns, it reads incorrectly.</p> <p>The best solution I have found is to use <a href="https://github.com/ocrmypdf/OCRmyPDF" rel="nofollow noreferrer">OCRmyPDF</a> to convert scanned PDFs to text PDFs, and use <code>tabulizer::extract_text()</code> in R (via <a href="https://stackoverflow.com/a/69211650/13112739">this solution</a>, which is an R wrapper of <a href="https://github.com/tabulapdf/tabula" rel="nofollow noreferrer">tabula PDF</a>, which has a python wrapper, but this function is based on <a href="https://pdfbox.apache.org/" rel="nofollow noreferrer">Apache PDFBox</a>, which does not have a <a href="https://github.com/lebedov/python-pdfbox/tree/master#notes" rel="nofollow noreferrer">current python wrapper</a>). The only python solution I can find is to run <code>tesseract</code> with both 1 and 2 column options and select the one with more semantic information, but this is incredibly slow.</p>
<python><pdf><ocr><pdfbox><text-mining>
2023-09-13 20:16:25
1
616
user760900
77,100,236
13,112,739
Extracting tables in line from PDF
<p>I'm trying to extract text from PDF files in bulk. I found that I can use tabula/camelot for extracting tables, but I'm unsure how I can put them in the appropriate places. The closest I've come is using <code>tabulizer::extract_text()</code> and <code>tabulizer::extract_tables()</code>, and trying to match table text to replace. This seems unwieldy - is there a better solution?</p>
<python><python-camelot><tabula-py>
2023-09-13 20:14:59
0
616
user760900
77,100,212
14,293,020
Which layers to use for a Neural Network fit of a strange function?
<p>I am using a neural network to find the best fit of a function that has a step aspect (blue is my real data, orange is the neural network prediction): <a href="https://i.sstatic.net/wmdJq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wmdJq.png" alt="enter image description here" /></a></p> <p>I am new to this and the paper I am basing myself on uses <code>relu</code> layers with a <code>linear</code> output but at the same time their data to fit looks much cleaner: <a href="https://i.sstatic.net/kCi8D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kCi8D.png" alt="enter image description here" /></a></p> <p><strong>Question:</strong> Do you know if there would be a combination of layers that would be more adapted to a case like mine, where most of the data has these strong variations early on (left of the 1st plot) and then stabilizes?</p> <p><strong>Structure of my Neural Network:</strong> I have 5 relu layers, 1 sigmoid and 1 linear layer acting as an output. I use Adam for optimization and a custom root-mean-square error cost function. In the code, I also use a K-fold cross-validation method because the paper was using that, but I did not include it because I first need to nail the layers structure.</p> <p><strong>Input Data:</strong> I work with column arrays, varying in size.
That's why I use the linear layer as an output.</p> <p><strong>Code:</strong></p> <pre><code>def create_model(): model = keras.Sequential() model.add(keras.layers.Dense(units = 64, activation = 'relu', input_shape=(1,))) model.add(keras.layers.Dense(units = 64, activation = 'relu')) model.add(keras.layers.Dense(units = 64, activation = 'relu')) model.add(keras.layers.Dense(units = 64, activation = 'sigmoid')) model.add(keras.layers.Dense(units = 64, activation = 'relu')) model.add(keras.layers.Dense(units = 64, activation = 'relu')) model.add(keras.layers.Dense(units = 1, activation = 'linear')) model.compile(loss=root_mean_squared_error, optimizer='adam', metrics=['accuracy']) return model </code></pre>
<python><keras><deep-learning><neural-network><curve-fitting>
2023-09-13 20:11:40
0
721
Nihilum
77,100,163
11,188,140
Building an array that combines indices and values in pairings
<p>Consider an array <code>a</code> whose rows hold unique values from 0 to 7, and no value matches its own column index.</p> <pre><code>a = np.array([[1, 0, 5, 6, 7, 2, 3, 4], [1, 0, 7, 4, 3, 6, 5, 2], [4, 2, 1, 7, 0, 6, 5, 3]]) </code></pre> <p>I plan to use <code>a</code> to build a new array <code>b</code> having the same shape. The method of producing <code>b</code> is described below in a scaled-down example:</p> <p>For each row of <code>a</code>, I need to:<br> i) select the values that are <strong>greater than their index values</strong>. For the 1st row of <code>a</code>, this involves indices <code>0, 2, 3, 4</code>, holding values <code>1, 5, 6, 7</code>.<br></p> <p>ii) then I need to fill a new array <code>b</code> with rows that <strong>combine the index and value pairings</strong>. The 1st row of <code>b</code> would be <code>[0,1, 2,5, 3,6, 4,7]</code>. (The unusual spacing is just to emphasize the index-value pairings) The completed <code>b</code> would be:</p> <pre><code>b = np.array([[[0,1, 2,5, 3,6, 4,7], [0,1, 2,7, 3,4, 5,6], [0,4, 1,2, 3,7, 5,6]]) </code></pre> <p>I need help with an efficient way to build <code>b</code>.</p>
<python><arrays><numpy>
2023-09-13 20:01:36
1
746
user109387
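A sketch of one vectorized answer to the question above. It assumes, as in the example, that every row of `a` contributes the same number of index/value pairs (so the result folds back into a rectangular array):

```python
import numpy as np

a = np.array([[1, 0, 5, 6, 7, 2, 3, 4],
              [1, 0, 7, 4, 3, 6, 5, 2],
              [4, 2, 1, 7, 0, 6, 5, 3]])

# mask of positions whose value exceeds the column index
mask = a > np.arange(a.shape[1])
rows, cols = np.nonzero(mask)  # row-major order: row by row, columns ascending

# interleave each (index, value) pair, then fold back into one row per input row
pairs = np.stack([cols, a[rows, cols]], axis=-1)
b = pairs.reshape(a.shape[0], -1)
```

`np.nonzero` walks the mask row by row, so the pairs come out already grouped by input row; the final `reshape` relies on the equal-pairs-per-row assumption.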
77,100,083
3,646,720
stack 2d density maps along the z axis direction
<p>I know how to do it using contourf, which has an option <code>contourf(X,Y,Z,...,offset=?,zdir=&quot;z&quot;)</code>; see the attached figure for illustration.</p> <p>I tried to do something similar for pcolor but couldn't seem to find a similar option. My question is: is matplotlib capable of stacking 2d density maps? If so, how?</p> <p><a href="https://i.sstatic.net/Jc8RG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jc8RG.png" alt="stacking 2d contourf plots along z axis" /></a></p>
<python><matplotlib>
2023-09-13 19:44:33
1
375
Rain
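One possible answer to the question above: `pcolor` has no `offset`/`zdir` option, but a flat `plot_surface` at each height, colored by the density values, produces the same stacked effect. A sketch with made-up density data (the `sin`/`cos` field and the filename are placeholders):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for scripting; drop when showing interactively
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

X, Y = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
for z0 in (0.0, 0.5, 1.0):
    Z = np.sin(4 * X * (1 + z0)) * np.cos(4 * Y)   # placeholder density map
    norm = (Z - Z.min()) / np.ptp(Z)               # scale values into [0, 1]
    # draw the density map as a flat surface at height z0
    ax.plot_surface(X, Y, np.full_like(X, z0),
                    facecolors=cm.viridis(norm),
                    rstride=1, cstride=1, shade=False)
fig.savefig('stacked_density.png')
```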
77,100,067
7,339,624
Fill the Diagonal of Each Matrix in a 3D numpy Array with a Vector
<p>I have a 3D numpy array where each 2D slice represents an individual matrix, and I'm looking to replace the diagonal elements of every matrix with a specific set of values.</p> <p>For instance, if I have a <code>3x3x3</code> array:</p> <pre><code>array([[[a1, a2, a3], [a4, a5, a6], [a7, a8, a9]], [[b1, b2, b3], [b4, b5, b6], [b7, b8, b9]], [[c1, c2, c3], [c4, c5, c6], [c7, c8, c9]]]) </code></pre> <p>I'd like to replace the diagonals <code>[a1, a5, a9]</code>, <code>[b1, b5, b9]</code>, and <code>[c1, c5, c9]</code> with a new set of values for each matrix. How can I achieve this?</p>
<python><arrays><numpy><matrix><diagonal>
2023-09-13 19:41:21
5
4,337
Peyman
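A sketch of one answer to the question above: fancy indexing hits the `(i, i)` positions of every slice at once, so no loop over slices is needed (the concrete numbers here are illustrative):

```python
import numpy as np

arr = np.arange(27, dtype=float).reshape(3, 3, 3)
new_diags = np.array([[10, 11, 12],
                      [20, 21, 22],
                      [30, 31, 32]])

n = arr.shape[-1]
idx = np.arange(n)
# broadcast one row of new_diags onto the diagonal of each 2D slice
arr[:, idx, idx] = new_diags
```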
77,100,051
893,254
Python Pandas apply a string splitting function to an index
<p>I have a Pandas DataFrame containing Postcodes and counts. It has been created using <code>value_count</code>.</p> <p>The DataFrame looks like this:</p> <pre><code> count Postcode AL1 1AJ 151 AL1 1AR 36 AL1 1AS 21 AL1 1AT 12 AL1 1AU 11 ... ... YO8 9YD 10 YO8 9YE 4 YO90 1UU 2 YO90 1WR 1 YO91 1RT 1 </code></pre> <p>I am attempting to split the index column using a string splitting function. My objective is to chop each postcode, returning only the first part.</p> <p>Here's a function which does (should do?) that.</p> <pre><code>def split_postcode(postcode): postcode_parts = postcode.split(' ') if len(postcode_parts) == 2: return postcode_parts[0] elif len(postcode_parts) == 1: return postcode else: print(f'unexpected postcode length: {len(postcode_parts)}') </code></pre> <p>I tried to apply it with</p> <pre><code># value_count_df is the above DataFrame value_count_df.apply(split_postcode, axis=0) </code></pre> <p>but this failed with the error</p> <pre><code>ValueError: Length mismatch: Expected axis has 1 elements, new values have 2 elements </code></pre> <p>What I am trying to do probably doesn't make much sense, because if I recall correctly, index columns are immutable.</p> <p>So I'm not sure how to proceed.</p> <p>It is likely that I created this DataFrame in a way which is less suitable than an alternative.</p> <hr /> <p>Here's some information about how I created the <code>value_count_df</code> object.</p> <ul> <li>I read Postcode data from an SQL file, and inserted all the values into a list.</li> <li>I then did this:</li> </ul> <pre><code>postcode_df = pandas.DataFrame(postcode_list) postcode_df.columns = ['Postcode'] value_count = postcode_df.value_counts() value_count_df = pandas.DataFrame(value_count) value_count_df.columns = ['Postcode', 'Count'] value_count_df = value_count_df.sort_index() # fails value_count_df.apply(split_postcode, axis=0) </code></pre> <p>How should I do things differently to achieve a sensible result?</p> <p>The final objective 
is to truncate the postcodes down to just the &quot;first&quot; part of the postcode (split by space <code>' '</code> character, and return the first string) and then obtain the value counts for each unique string.</p> <p>I currently have value counts for each unique postcode, I just want to repeat this for the &quot;truncated&quot; postcode.</p> <p>I could do it by creating a new list of truncated postcodes from the existing list, but this seems inefficient, and it would be good to lean how to do it using the data in the DataFrame directly.</p>
<python><pandas><dataframe><series>
2023-09-13 19:38:42
1
18,579
user2138149
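A sketch of one possible answer to the postcode question above. Two routes are shown: derive the outward code (first token) before counting, or map the split over the index of the existing value-count frame and re-aggregate. Sample postcodes are made up:

```python
import pandas as pd

postcodes = ['AL1 1AJ', 'AL1 1AR', 'AL1 1AS', 'YO8 9YD', 'YO8 9YE', 'YO90 1UU']
s = pd.Series(postcodes, name='Postcode')

# route 1: truncate first, then count once
outward_counts = s.str.split(' ').str[0].value_counts()

# route 2: the index itself supports .str, so no row-wise apply is needed;
# group the existing counts by the truncated index and sum
value_count_df = s.value_counts().to_frame('count')
outward_counts2 = (value_count_df
                   .groupby(value_count_df.index.str.split(' ').str[0])['count']
                   .sum())
```

The index is immutable in place, but `index.str.split(' ').str[0]` builds a new key array to group by, which sidesteps the `apply` error entirely.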
77,099,997
268,847
Passing a Python union type parameter to a function that expects a non-union type
<p>I have a data type that, because of where the data comes from, has to be flexible. It is a mapping from a string into either a string, an int, or a list of strings. However, when I run the code through the mypy type checker I get an error.</p> <p>Here is an example of the code and the result of running it through <code>mypy</code>:</p> <pre><code>AttributeDict = dict[str, int | str | list[str]] x: AttributeDict = {&quot;a&quot;: 1, &quot;b&quot;: &quot;2&quot;, &quot;c&quot;: [&quot;a&quot;, &quot;b&quot;]} def my_func(arg1: list[str]) -&gt; None: print(arg1) my_func(x[&quot;c&quot;]) </code></pre> <p>The error:</p> <pre><code>error: Argument 1 to &quot;my_func&quot; has incompatible type &quot;int | str | list[str]&quot;; expected &quot;list[str]&quot; [arg-type] </code></pre> <p>I would have thought that the type being a union of three types the type-checker would not complain, but it does. How do I annotate the function definition so that the type checker is happy with this?</p>
<python><types><mypy>
2023-09-13 19:27:43
0
7,795
rlandster
77,099,956
11,370,582
Filter pandas dataframe for first N unique values per group
<p>I need to filter a large dataset of variables with entries for multiple dates. In this instance I want to keep only data entered on the very first date.</p> <p>For example in the dataset below:</p> <pre><code>dfex = pd.DataFrame({'names':['jim','jim','jim','jim','jim','jim','jim','jim','jim', 'bob','bob','bob','bob','bob','bob', 'sara','sara','sara','sara','sara','sara','sara','sara','sara','sara'], 'dates':['01-01-19','01-01-19','01-01-19','01-05-19','01-06-19','01-07-19','01-08-19','01-09-19','01-10-19', '01-05-19','01-05-19','01-07-19','01-08-19','01-09-19','01-10-19', '01-02-19','01-02-19','01-02-19','01-02-19','01-05-19','01-06-19','01-07-19','01-08-19','01-09-19','01-10-19']}) dfex['dates'] = pd.to_datetime(dfex['dates']) dfex </code></pre> <p>jim would keep the first 3 rows, bob the first 2 and sara the first 5.</p>
<python><pandas><dataframe><date><datetime>
2023-09-13 19:21:26
1
904
John Conor
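A sketch of one possible answer to the question above, assuming "first date" means each name's minimum date: a grouped `transform('min')` gives a per-row earliest date to filter against (with the sample data shown, this keeps 3 rows for jim, 2 for bob, and the 4 rows on sara's earliest date):

```python
import pandas as pd

dfex = pd.DataFrame({
    'names': ['jim'] * 9 + ['bob'] * 6 + ['sara'] * 10,
    'dates': pd.to_datetime(
        ['01-01-19'] * 3
        + ['01-05-19', '01-06-19', '01-07-19', '01-08-19', '01-09-19', '01-10-19']
        + ['01-05-19'] * 2
        + ['01-07-19', '01-08-19', '01-09-19', '01-10-19']
        + ['01-02-19'] * 4
        + ['01-05-19', '01-06-19', '01-07-19', '01-08-19', '01-09-19', '01-10-19'])
})

# broadcast each group's earliest date back onto its rows, then filter
first = dfex.groupby('names')['dates'].transform('min')
result = dfex[dfex['dates'] == first]
```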
77,099,891
8,869,570
How to properly write a unit test that checks that a path doesn't exist?
<p>I need to write a unit test for a class:</p> <pre><code>import os class my_class: def __init__(self, path): assert os.path.exists(path) </code></pre> <p>I want to write a unit test that intentionally passes an incorrect path. Currently, I have this:</p> <pre><code>def unit_test(): non_existent_path = '/non_existent_path' my_class(non_existent_path) </code></pre> <p>and I expect this test to fail, but I don't like my definition <code>non_existent_path = '/non_existent_path'</code>. Is there a more robust way to specify a path that can never exist?</p>
<python>
2023-09-13 19:07:59
4
2,328
24n8
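A sketch of one possible answer to the question above: build the path under a freshly created temporary directory, whose name cannot collide with anything pre-existing, and name a child that was never created. (In a real pytest suite, `pytest.raises(AssertionError)` and the `tmp_path` fixture would replace the hand-rolled try/except.)

```python
import os
import tempfile


class MyClass:
    def __init__(self, path):
        assert os.path.exists(path)


def test_rejects_missing_path():
    # mkdtemp-style naming guarantees a unique directory; a child of it
    # that is never created cannot exist at check time
    with tempfile.TemporaryDirectory() as tmp:
        missing = os.path.join(tmp, 'does_not_exist')
        try:
            MyClass(missing)
        except AssertionError:
            return True   # constructor rejected the path, as intended
        return False


result = test_rejects_missing_path()
```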
77,099,794
1,473,517
Why can't you use bitwise & with numba and uint64?
<p>I have the following MWE:</p> <pre><code>import numba as nb @nb.njit(nb.uint64(nb.uint64)) def popcount(x): b=0 while(x &gt; 0): x &amp;= x - 1 b+=1 return b print(popcount(43)) </code></pre> <p>It fails with:</p> <pre><code>numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(&lt;built-in function iand&gt;) found for signature: &gt;&gt;&gt; iand(float64, float64) There are 8 candidate implementations: - Of which 4 did not match due to: Overload of function 'iand': File: &lt;numerous&gt;: Line N/A. With argument(s): '(float64, float64)': No match. - Of which 2 did not match due to: Operator Overload in function 'iand': File: unknown: Line unknown. With argument(s): '(float64, float64)': No match for registered cases: * (bool, bool) -&gt; bool * (int64, int64) -&gt; int64 * (int64, uint64) -&gt; int64 * (uint64, int64) -&gt; int64 * (uint64, uint64) -&gt; uint64 - Of which 2 did not match due to: Overload in function 'gen_operator_impl.&lt;locals&gt;._ol_set_operator': File: numba/cpython/setobj.py: Line 1508. With argument(s): '(float64, float64)': Rejected as the implementation raised a specific error: TypingError: All arguments must be Sets, got (float64, float64) raised from /home/user/python/mypython3.10/lib/python3.10/site-packages/numba/cpython/setobj.py:108 During: typing of intrinsic-call at /home/user/python/popcount.py (7) File &quot;popcount.py&quot;, line 7: def popcount(x): &lt;source elided&gt; while(x &gt; 0): x &amp;= x - 1 ^ </code></pre> <p>What is wrong with using uint64 for this?</p> <hr /> <p>The code fails with the same message even if I use:</p> <pre><code>print(popcount(nb.uint64(43)) </code></pre>
<python><numba>
2023-09-13 18:54:07
2
21,513
Simd
77,099,756
2,779,432
list assignment index out of range but range seems to work
<p>I'm using OpenCV to read some pixels from several images and store the pixels in an array of which each index represents a single image, like this:</p> <pre><code>path = os.getcwd() c = 0 for path in os.listdir(path + '\scribbles'): imgs[c] = cv2.imread(str(path),0) index[c] = np.where(imgs[c]!= [0]) c = c + 1 </code></pre> <p>All good here, now I want to create a list that has &quot;N = number of images&quot; as index, and for each index store all the X Coordinates, then do the same for the Y coordinates (on a separate list) like so:</p> <pre><code>path = os.getcwd() c = 0 for path in os.listdir(path + '\scribbles'): xCoord[c] = list(index[c][0]) yCoord[c] = list(index[c][1]) c = c + 1 IndexError: list assignment index out of range </code></pre> <p>I'm not sure what I'm doing wrong, I can print to screen the set of coordinates for each index element, like so:</p> <pre><code>print(index[5][0]) [308 308 309 309 309 310 310 310 310 311 311 311 311 312 312 312 312 312 312 313 313 313 313 313 313 313 313 314 314 314 314 314 314 314 314 315 315 315 315 315 315 316 316 316 316 316 316 316 317 317 317 317 317 317 317 317 318 318 318 318 318 318 318 318 318 318 318 318 319 319 319 319 319 319 319 319 319 319 319 319 320 320 320 320 320 320 320 320 321 321 321 321 321 321 321 321 322 322 322 322 322 322 322 322 322 323 323 323 323 323 323 323 323 323 324 324 324 324 324 324 324 324 324 325 325 325 325 325 325 325 325 325 326 326 326 326 326 326 326 326 327 327 327 327 327 327 327 327 327 328 328 328 328 328 328 328 328 329 329 329 329 329 329 329 329 329 330 330 330 330 330 330 330 330 330 331 331 331 331 331 331 331 331 332 332 332 332 332 332 333 333 333 333 334 334 334 335 335 335 336 372 372 372 372 372 373 373 373 373 373 373 374 374 374 374 374 374 374 375 375 375 375 375 375 375 376 376 376 376 376 376 377 377 377 377 378 378 378 378 379 379 379 379 379 379 380 380 380 380 380 381 381 381 381 381 381 381 382 382 382 382 382 382 382 382 382 382 382 383 383 383 383 
383 383 384 384 385 385 385 386 386 387 388 389 390 391 392 392 392 392 393 393 393 393 393 393 393 393 393 393 393 394 394 394 394 394 394 394 394 394 394 395 395 395 395 395 395 395 395 395 395 396 396 396 396 396 396 396 396 396 396 396 397 397 397 397 397 397 397 397 397 397 398 398 398 398 398 398 398 398 398 398 399 399 399 399 399 399 399 399 399 399 399 399 400 400 400 400 400 400 400 400 400 400 400 400 401 401 401 401 401 401 401 401 401 401 401 401 402 402 402 402 402 402 402 402 402 402 402 402 403 403 403 403 403 403 403 403 403 403 403 404 404 404 404 404 404 404 404 404 404 405 405 405 405 405 405 405 405 405 406 406 406 406 406 406 406 406 406 406 406 406 406 407 407 407 407 407 407 407 407 407 407 407 407 407 407 408 408 408 408 408 408 408 408 408 408 408 408 408 408 409 409 409 409 409 409 409 409 409 409 410 410 410 410 410 410 410 410 410 411 411 411 411 411 411 412 412 412 412 412 413 413 413 413 414 414 415 415 416 416 416 416 417 417 417 417 418 418 418 418 419 419 419 419 420 420 420 420 421 421 421 421 421 422 422 422 422 423 423 423 424 424 424 425 425 425 425 426 426 427 427 427 427 428 428 428 428 429 429 429 429 429 430 430 430 430 430 431 431 432 432 433 433 433 433 434 434 434 434 435 435 435 435 435 435 436 436 436 436 436 436 436 436 437 437 437 437 437 437 437 437 438 438 438 438 438 438 438 439 439 439 439] </code></pre> <p>and as expected, it fails if I use index 6 (i have 6 elements 0 to 5)</p> <p>Can somebody help me figure out where the problem is? Thank you</p> <p>EDIT: This is the code that ended up working for me</p> <pre><code>path = os.getcwd() xCoord = [] yCoord = [] c = 0 for path in os.listdir(path + '\scribbles'): print(c) xCoord.append(list(index[c][0])) yCoord.append(list(index[c][1])) c = c + 1 </code></pre>
<python>
2023-09-13 18:47:13
1
501
Francesco
77,099,749
1,304,376
Can I change the Anchor of a Path object in pathlib?
<p>I want to process files in a remote share and when I find a file I want, I want to copy it to my local machine. Something like:</p> <pre><code>copy \\remote_server\share\path\to\file.txt d:\path\to\file.txt </code></pre> <p>I could do something like:</p> <pre><code>path_str = r'\\remote_server\share\path\to\file.txt' anchor_str = Path(path_str).anchor new_path = Path(path_str.replace(anchor_str, 'd:')) </code></pre> <p>But is there a more pythonic way?</p>
<python><pathlib>
2023-09-13 18:45:58
1
1,676
Ching Liu
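A sketch of one possible answer to the question above: you cannot mutate a `Path`'s anchor, but `parts[0]` *is* the anchor, so joining the remaining parts under a new drive rebuilds the path without string surgery. `PureWindowsPath` is used so the Windows semantics hold on any OS:

```python
from pathlib import PureWindowsPath

src = PureWindowsPath(r'\\remote_server\share\path\to\file.txt')

# parts[0] is the anchor (UNC server+share here); keep the relative tail
dest = PureWindowsPath('d:\\').joinpath(*src.parts[1:])
```

On a Windows machine doing the actual copy, plain `Path` works the same way, and `shutil.copy(src, dest)` performs the transfer.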
77,099,705
876,375
How do I obtain Django's user password reset URL within Python (i.e., not within a template)
<p>I am generating a variable text in Python where I need to inject the URL for the user to navigate to after they requested password reset. I.e., I am building a string for content of an email and thus cannot use the standard template language <code>{% url 'password_reset_confirm' uidb64=uid token=token %}</code></p> <p>What I have so far is:</p> <pre><code>def myFunction(toEmail_): #-- find user to send the email to user = User.objects.get(email=toEmail_) #-- build email text resetUrl = &quot;????&quot; #TODO: WHAT TO PUT IN HERE? emailText = &quot;Hi, please reset your password on this url {}&quot;.format(resetUrl) #-- send email with the text ... </code></pre> <p>How do I obtain this reset url in Python?</p>
<python><django>
2023-09-13 18:38:03
1
1,123
Lenka Pitonakova
77,099,509
5,440,823
How to use back_populates() in SQLAlchemy with two (or more) foreign keys from same table?
<p>I have three tables: Competitor, Competition and Duel.</p> <pre><code>class Competitor(db.Model): # type: ignore __tablename__ = 'competitors' id_competitor = db.Column(db.Integer, primary_key=True, autoincrement=True) name = db.Column(db.String(255), nullable=False) class Duel(db.Model): # type: ignore __tablename__ = 'duels' id_competitor1 = db.Column(db.ForeignKey('competitors.id_competitor'), primary_key=True) id_competitor2 = db.Column(db.ForeignKey('competitors.id_competitor'), primary_key=True) id_competition = db.Column(db.ForeignKey('competitions.id_competition'), primary_key=True) phase = db.Column(db.Integer, primary_key=True) rel_competition = relationship('Competition', back_populates='rel_competition_duel') rel_competitor1 = relationship('Competitor', foreign_keys=[id_competitor1]) rel_competitor2 = relationship('Competitor', foreign_keys=[id_competitor2]) class Competition(db.Model): # type: ignore __tablename__ = 'competitions' id_competition = db.Column(db.Integer, primary_key=True, autoincrement=True) name = db.Column(db.String(255), nullable=False) rel_competition_duel = relationship('Duel', back_populates='rel_competition') </code></pre> <p>As one can see, there are two foreign keys in Duel that reference table Competitor. This is what works.</p> <p>However, I want to connect relationships with <code>back_populates()</code> so that modifying one also modifies the other.</p> <p>If I replace two relationships in Duel with these two:</p> <pre><code>rel_competitor1 = relationship('Competitor', back_populates='rel_duel1') rel_competitor2 = relationship('Competitor', back_populates='rel_duel2') </code></pre> <p>and add:</p> <pre><code>rel_duel1 = relationship('Duel', back_populates='rel_competitor1') rel_duel2 = relationship('Duel', back_populates='rel_competitor2') </code></pre> <p>to Competitor, I get the error: &quot;<code>[..] 
Original exception was: Could not determine join condition between parent/child tables on relationship Competitor.rel_duel1 - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.</code>&quot;</p> <p>So, how do I connect relationships with <code>back_populates()</code> when I have two (or possibly more) foreign keys from the same table?</p>
<python><postgresql><sqlalchemy><orm><flask-sqlalchemy>
2023-09-13 18:06:45
1
754
dosvarog
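A sketch of one possible answer to the question above: the error message's suggestion applies to *both* sides, so each `Competitor` relationship must also say which `Duel` column it pairs with. A trimmed plain-SQLAlchemy version (the flask-sqlalchemy `db.Model` form is analogous; `Competition` and non-key columns omitted for brevity):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Competitor(Base):
    __tablename__ = 'competitors'
    id_competitor = Column(Integer, primary_key=True)
    # each collection names the single FK column that ties it to its pair
    rel_duel1 = relationship('Duel', back_populates='rel_competitor1',
                             foreign_keys='Duel.id_competitor1')
    rel_duel2 = relationship('Duel', back_populates='rel_competitor2',
                             foreign_keys='Duel.id_competitor2')

class Duel(Base):
    __tablename__ = 'duels'
    id_competitor1 = Column(ForeignKey('competitors.id_competitor'), primary_key=True)
    id_competitor2 = Column(ForeignKey('competitors.id_competitor'), primary_key=True)
    phase = Column(Integer, primary_key=True)
    rel_competitor1 = relationship('Competitor', back_populates='rel_duel1',
                                   foreign_keys=[id_competitor1])
    rel_competitor2 = relationship('Competitor', back_populates='rel_duel2',
                                   foreign_keys=[id_competitor2])

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
```

With `foreign_keys` declared on all four relationships, assigning one side populates the other, which is exactly what `back_populates` promises.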
77,099,356
5,924,264
Is there an analogous "pass" keyword for attributes like the one for methods?
<p>I want to define a skeleton class (a class that doesn't yet have the implementations and initializations for the attributes).</p> <p>For methods of the class, I can just use <code>pass</code>, but how do I do something analogous for the attributes?</p> <p>e.g.,</p> <pre><code>class MyClass: def __init__(self): # what can I use in place of pass here self.var1 = pass self.var2 = pass def method1(self): pass </code></pre>
<python>
2023-09-13 17:38:35
2
2,502
roulette01
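A sketch of one possible answer to the question above: `pass` is a statement and cannot sit on the right of `=`; the usual placeholders are `None` (with an annotation documenting the eventual type) or bare class-level annotations, which declare attributes without assigning anything:

```python
class MyClass:
    def __init__(self):
        # None is the conventional "not implemented yet" value
        self.var1: int | None = None
        self.var2: str | None = None

    def method1(self):
        pass


# alternatively, bare annotations declare the attributes without creating them
class MySkeleton:
    var1: int
    var2: str
```

In the second form the names live only in `__annotations__`; accessing `MySkeleton.var1` raises `AttributeError` until something assigns it.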
77,099,182
3,808,179
Cannot unpack non-iterable 'datetime.time'
<p>My primary question is: <strong>How can one iterate over a column full of dates?</strong></p> <pre><code># Assume an excel file with one column called 'Time' that contains datetime.time values df = pd.read_excel(&quot;some/file.xls&quot;) for index, time in df['Time']: println(time + &quot; &quot; + index) # OR for index, time in enumerate(df['Time']): println(time + &quot; &quot; + index) </code></pre> <p>but both end up with <code>cannot unpack non-iterable 'datetime.time' object</code>. There has got to be a way to just loop through a column in a <code>DataFrame</code>...</p> <p>The reason I want to loop through them is to check if the time includes milliseconds and, if not, add <code>.000000</code> to the value. Which leads to my second related question.</p> <p>Secondary question: <strong>If I cannot loop through a <code>datetime.time</code> object, how can I force <code>to_datetime()</code> when the format is not the same?</strong></p> <p>I have two formats: <code>%H:%M:%S.%f</code> and <code>%H:%M:%S</code>. Setting format to mixed doesn't work. Apply also doesn't work (same error that datetime.time is not iterable).</p> <pre><code>def date_formatter(date): if '.' in date: return ... # %H:%M:%S.%f else: return ... # %H:%M:%S df['Time'].apply(date_formatter) </code></pre> <p>This is basic functionality; I don't understand why it has to be so hard. I can do this perfectly fine in a .csv file as the values are strings. Note: The Date column in excel is a formula (e.g. =A1-A2, =B1-B2,...)</p>
<python><pandas><excel><datetime>
2023-09-13 17:11:12
0
2,267
kristyna
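A sketch of one possible answer to the question above (sample `time` values stand in for the Excel column): iterating a single column yields scalar `datetime.time` values, so `for index, time in df['Time']` tries to unpack each scalar and fails; `enumerate` fixes the loop. And since `datetime.time` values already carry a `microsecond` field, `strftime('%H:%M:%S.%f')` normalizes every value to one format, making the two-format branching unnecessary:

```python
import datetime as dt
import pandas as pd

df = pd.DataFrame({'Time': [dt.time(1, 2, 3), dt.time(4, 5, 6, 123456)]})

# correct iteration over one column: enumerate yields (position, scalar)
for index, value in enumerate(df['Time']):
    pass  # work with value here

# normalize everything to %H:%M:%S.%f, then parse with a single format
as_text = df['Time'].map(lambda t: t.strftime('%H:%M:%S.%f'))
parsed = pd.to_datetime(as_text, format='%H:%M:%S.%f')
```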
77,099,174
11,277,108
Why will standard scrapy scrape an API but selenium version returns 401 error?
<p>I'm using a selenium driven scrapy scraper to scrape a website to avoid detection as a bot. Within the site are a number of APIs that I can call to get data without downloading a whole page. However, I keep getting a 401 error from the following:</p> <pre><code>import logging from scrapy import Request, Spider, signals from scrapy.crawler import Crawler, CrawlerProcess from scrapy.http import HtmlResponse from selenium.webdriver import Firefox, FirefoxOptions import scrape.constants as cs class FlashscoreMiddleware: @classmethod def from_crawler(cls, crawler: Crawler): middleware = cls() crawler.signals.connect(middleware.spider_opened, signals.spider_opened) crawler.signals.connect(middleware.spider_closed, signals.spider_closed) return middleware def spider_opened(self, *_) -&gt; None: self.driver = Firefox() def spider_closed(self, *_) -&gt; None: self.driver.quit() def process_request(self, request: Request, **_) -&gt; None: self.driver.get(request.url) return HtmlResponse( request.url, body=self.driver.page_source, encoding=&quot;utf-8&quot;, request=request, ) class FlashscoreSpider(Spider): name = &quot;flashscore&quot; custom_settings = { &quot;DOWNLOADER_MIDDLEWARES&quot;: {&quot;selenium_scraper.FlashscoreMiddleware&quot;: 400}, &quot;REQUEST_FINGERPRINTER_IMPLEMENTATION&quot;: &quot;2.7&quot;, &quot;LOG_LEVEL&quot;: logging.ERROR, } def start_requests(self): yield Request( url=&quot;https://global.flashscore.ninja/5/x/feed/df_st_1_WKM03Vff&quot;, headers={ &quot;X-Fsign&quot;: &quot;SW9D1eZo&quot;, &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36&quot;, }, callback=self.parse, ) def parse(self, response): if response.text.startswith(&quot;SE÷Match&quot;): print(f&quot;Successfully returned {response.url}&quot;) else: print(&quot;Unsuccessful&quot;) if __name__ == &quot;__main__&quot;: process = CrawlerProcess() process.crawl(FlashscoreSpider) 
process.start() </code></pre> <p>If I use a standard scrapy scraper though - all is fine:</p> <pre><code>import logging from scrapy import Request, Spider from scrapy.crawler import CrawlerProcess class FlashscoreSpider(Spider): name = &quot;flashscore&quot; custom_settings = { &quot;REQUEST_FINGERPRINTER_IMPLEMENTATION&quot;: &quot;2.7&quot;, &quot;LOG_LEVEL&quot;: logging.ERROR, } def start_requests(self): yield Request( url=&quot;https://global.flashscore.ninja/5/x/feed/df_st_1_WKM03Vff&quot;, headers={&quot;X-Fsign&quot;: &quot;SW9D1eZo&quot;}, callback=self.parse, ) def parse(self, response): if response.text.startswith(&quot;SE÷Match&quot;): print(f&quot;Successfully returned {response.url}&quot;) else: print(&quot;Unsuccessful&quot;) if __name__ == &quot;__main__&quot;: process = CrawlerProcess() process.crawl(FlashscoreSpider) process.start() </code></pre> <p>Why is the selenium version not returning the correct result?</p>
<python><selenium-webdriver><scrapy>
2023-09-13 17:10:19
0
1,121
Jossy
77,099,117
2,781,105
Pandas groupby to create additional columns from one column
<p>I have a dataframe that contains a date column and a discount value column, as follows:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'date': ['2023-09-01', '2023-09-02', '2023-09-03', '2023-09-04', '2023-09-05', '2023-09-06','2023-09-07', '2023-09-08', '2023-09-09', '2023-09-10'], 'discount': [30, 25, 0, 10, 15, 15,0,25,30,0]}) df </code></pre> <p><a href="https://i.sstatic.net/N4AII.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N4AII.png" alt="enter image description here" /></a></p> <p>I need to add additional columns that contain the values that precede a zero, where the number of additional columns is determined by the number of 0 delimiters, such that the resultant df looks like this...</p> <pre><code>df2 = pd.DataFrame({'date': ['2023-09-01', '2023-09-02', '2023-09-03', '2023-09-04', '2023-09-05', '2023-09-06','2023-09-07', '2023-09-08', '2023-09-09', '2023-09-10'], 'discount': [30, 25, 0, 10, 15, 15,0,25,30,0], 'split1': [30,25,0,0,0,0,0,0,0,0], 'split2': [0,0,0,10,15,15,0,0,0,0], 'split3': [0,0,0,0,0,0,0,25,30,0]}) df2 </code></pre> <p><a href="https://i.sstatic.net/sOAL4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sOAL4.png" alt="enter image description here" /></a></p> <p>So far my attempts have yielded the following;</p> <pre><code>for date, group in df.groupby('date'): num_splits = len(group) splits = group['discount'].tolist() for i in range(num_splits): df.loc[group.index[i], f'split{i+1}'] = splits[i] df </code></pre> <p><a href="https://i.sstatic.net/kWulW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kWulW.png" alt="enter image description here" /></a></p> <p>Note - there may be multiple, sequential zeros after a non-zero value in the 'discount' column, so the first non-zero value should be used to define the group.</p> <p>Guidance appreciated.</p>
<python><pandas><dataframe><group-by>
2023-09-13 17:00:50
3
889
jimiclapton
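A sketch of one possible answer to the question above: number the nonzero runs (a run starts at a nonzero value whose predecessor is zero, which also copes with consecutive zeros), then mask the discount column once per run:

```python
import pandas as pd

df = pd.DataFrame({
    'date': pd.date_range('2023-09-01', periods=10).astype(str),
    'discount': [30, 25, 0, 10, 15, 15, 0, 25, 30, 0],
})

nonzero = df['discount'].ne(0)
# a run starts wherever a nonzero value is not preceded by another nonzero value
run_id = (nonzero & ~nonzero.shift(fill_value=False)).cumsum()

for g in range(1, run_id.max() + 1):
    # keep this run's values, zero everywhere else
    df[f'split{g}'] = df['discount'].where(nonzero & run_id.eq(g), 0)
```

The number of `split` columns falls out of `run_id.max()`, so the zero-delimiter count never needs to be known in advance.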
77,099,080
8,248,194
datetime.utcfromtimestamp changing time in python
<p>I have a string representing a datetime (ts). I want to transform it to an int and then use <code>datetime.utcfromtimestamp</code> to recover the same datetime:</p> <pre class="lang-py prettyprint-override"><code>ts = '2023-07-10 14:06:22.000 UTC' int_ts = int(datetime.strptime(ts, &quot;%Y-%m-%d %H:%M:%S.%f %Z&quot;).timestamp() * 1000) # change this line, cannot change the previous or next back_to_ts = datetime.utcfromtimestamp(int_ts / 1000) print(f&quot;{back_to_ts = }, {ts = }, {int_ts = }&quot;) </code></pre> <p>However, I get the following output:</p> <pre><code>back_to_ts = datetime.datetime(2023, 7, 10, 12, 6, 22), ts = '2023-07-10 14:06:22.000 UTC', int_ts = 1688990782000 </code></pre> <p>I can only change the second line; how should I change it to get the following output?</p> <pre><code>back_to_ts = datetime.datetime(2023, 7, 10, 14, 6, 22), ts = '2023-07-10 14:06:22.000 UTC', int_ts = 1688990782000 </code></pre>
<python><datetime><timezone><utc>
2023-09-13 16:54:29
1
2,581
David Masip
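A sketch of one possible answer to the question above: `%Z` consumes the literal `UTC` but leaves the parsed datetime naive, so `.timestamp()` applies the *local* offset. Attaching `tzinfo=timezone.utc` on the same (second) line fixes the round trip while keeping the other lines untouched:

```python
from datetime import datetime, timezone

ts = '2023-07-10 14:06:22.000 UTC'
# replace(tzinfo=...) marks the naive result as UTC before converting to epoch
parsed = datetime.strptime(ts, '%Y-%m-%d %H:%M:%S.%f %Z').replace(tzinfo=timezone.utc)
int_ts = int(parsed.timestamp() * 1000)
back_to_ts = datetime.utcfromtimestamp(int_ts / 1000)
```

(`utcfromtimestamp` is kept because the question forbids changing that line; newer code would prefer `datetime.fromtimestamp(..., tz=timezone.utc)`.)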
77,099,062
9,669,142
Python - curve_fitting doesn't work properly with high x-values
<p>I have four XY-points for which I need to obtain a curve, hence I use <code>scipy.optimize.curve_fit</code>. Looking at the points, I want to use an exponential function. This works for the lower x-values, but not for the higher x-values.</p> <p>The code is the following (you can switch between <code>low_x_values</code> and <code>high_x_values</code>):</p> <pre><code>from scipy.optimize import curve_fit import matplotlib.pyplot as plt import numpy as np low_x_values = [3.139, 2.53, 0.821, 0.27] high_x_values = [859.8791936328762, 805.5080517453312, 639.2578427310567, 496.3622821767497] list_x = high_x_values list_y = [0.21, 0.49, 1.56, 23.97] def func_exp(x, a, b, c): return a + (b * np.exp(-c * x)) list_line_info, pcov = curve_fit(func_exp, list_x, list_y, maxfev=100000) plot_x = np.linspace(min(list_x), max(list_x), 1000) plot_y = list_line_info[0] + ((list_line_info[1]) * np.exp((-list_line_info[2] * plot_x))) plt.plot(plot_x, plot_y, linestyle='-', color='orangered') plt.plot(list_x, list_y, linestyle='--', color='dodgerblue') plt.scatter(list_x, list_y, color='black', zorder=2) plt.grid() </code></pre> <p>When I want to find the curve for the low values, it shows this: <a href="https://i.sstatic.net/kicuo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kicuo.png" alt="enter image description here" /></a></p> <p>However, when I want to do the same for the high values, I get this: <a href="https://i.sstatic.net/FcseO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FcseO.png" alt="enter image description here" /></a></p> <p>I'm not sure what I'm missing here.</p>
<python><scipy><curve-fitting>
2023-09-13 16:51:27
1
567
Fish1996
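A sketch of one possible explanation and fix for the question above: with x around 500-900, `np.exp(-c * x)` underflows to zero for `curve_fit`'s default starting guess `c = 1`, so the optimizer sees a flat function and never moves. Shifting x so the exponent starts near zero (and/or passing a sensible `p0`) usually restores convergence; the `p0` values below are assumed guesses, not fitted truth:

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.array([859.8791936328762, 805.5080517453312,
              639.2578427310567, 496.3622821767497])
y = np.array([0.21, 0.49, 1.56, 23.97])

def func_exp(x, a, b, c):
    return a + b * np.exp(-c * x)

# shift x so exp(-c*x) is O(1) at the start; p0 seeds a, b, c at plausible scales
x0 = x.min()
popt, _ = curve_fit(func_exp, x - x0, y,
                    p0=[y.min(), y.max() - y.min(), 1e-2], maxfev=100000)
pred = func_exp(x - x0, *popt)
```

When plotting, remember to evaluate the fitted curve on the shifted axis (`plot_x - x0`).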
77,098,930
2,015,461
Making comparison between specific columns two dataframes more efficient
<p>I have two datasets, each with a number of email addresses and a record ID. I am comparing the data, finding every record where an email in dataset1 matches a an email for a record in dataset 2.</p> <p>I take the record id's from each dataset, name of the columns that match &amp; values and add them to a dataset that I output at the end.</p> <p>Ive done it the beginner way, with loops and although it works, its horribly inefficient. For the two datasets, both with 150k records, its going to take a few thousand hours!</p> <p>I appreciate any advise on how to improve this.</p> <p>Note - I changed variable names in the below code so they would not indicate the systems im working with, apologies if i have introduced issues because of this.</p> <pre><code>import pandas as pd # Load the first CSV file (contact1_Contacts) contacts_1_df = pd.read_csv(&quot;C:/temp/Contacts1.csv&quot;) # Load the second CSV file (contact2_Contacts) contacts_2_df = pd.read_csv(&quot;C:/temp/contacts2.CSV&quot;) contacts_1_df.dropna(subset=['ContactId'], inplace=True) contacts_2_df.dropna(subset=['ContactID'], inplace=True) contacts_1_df.drop(['Full Name', 'Source Ref', 'Last Name', 'First Name', 'Birthdate', 'Mobile Phone', 'Date of Birth'], axis='columns', inplace=True) contacts_2_df.drop(['Title', 'First Name','Middle Name', 'Surname', 'Maiden Name', 'Known As', 'Gender', 'DOB', 'Deceased?', 'Deceased Date', 'Phone Number', 'Nationality'], axis='columns', inplace=True) #convert dataframes to lower contacts_1_df = contacts_1_df.apply(lambda x: x.astype(str).str.lower()) contacts_2_df = contacts_2_df.apply(lambda x: x.astype(str).str.lower()) matches_df = pd.DataFrame(columns = ['Contact1ContactGUID','Contact1ContactEmailType','Contact1ContactEmailValue','Contact2ContactID','Contact2EmailType','Contact2EmailValue']) contact1_record_count = 0 contact2_record_count = 0 #for each contact1 contact for contacts_1_index, contact1_row in contacts_1_df[['Preferred Email', 'Company Email', 'Email 
Address 2', 'Email 3', 'ContactId']].iterrows(): #variable for columnindex so i can get the column name contact1_colindex = 0 for contact1_email_col_value in contact1_row: #increment the column index contact1_colindex = contact1_colindex + 1 #dont test for nan values if(contact1_email_col_value != 'nan'): for contact2_index, contact2_row in contacts_2_df[['Email' ,'Email_1' ,'Email_2' ,'Email_3' ,'Email_4','ContactID']].iterrows(): #variable to hold the col index so i can get the column name contact2_colindex = 0 for contact2_email in contact2_row: #dont test for nan values if(contact2_email != 'nan'): if(contact2_email == contact1_email_col_value): #print('****************MATCH****************') match_row = {'Contact1ContactGUID': contact1_row[4], 'Contact1ContactEmailType': contact1_row.index[contact1_colindex], 'Contact1ContactEmailValue': contact1_email_col_value, 'Contact2ContactID': contact2_row[5], 'Contact2EmailType': contact2_row.index[contact2_colindex], 'Contact2EmailValue': contact2_email } matches_df = pd.concat([matches_df, pd.DataFrame([match_row])], ignore_index=True) contact2_colindex = contact2_colindex + 1 matches_df.to_csv('C:/temp/output.csv', encoding='utf-8', index=False) </code></pre> <p>Can anyone advise how this can be made more efficient?</p>
<python><pandas><performance>
2023-09-13 16:31:42
1
1,546
wilson_smyth
77,098,926
2,614,378
Snowflake error - 253003: While putting file(s) there was an error: 'HTTPError('403 Client Error: Forbidden for url
<p>I have the following snowflake code which will basically connect to snowflake cloud after which I try to read local csv file to df and then push df to Snowflake cloud db</p> <pre><code>import pandas as pd # The Snowflake Connector library. import snowflake.connector as snow from snowflake.connector.pandas_tools import write_pandas from snowflake.connector.pandas_tools import pd_writer from sqlalchemy import create_engine from urllib.parse import quote_plus class SnowFlakeHelper: def __init__(self,CONFIG): self.account = CONFIG['snowflake_accountname'] self.account_fullurl = CONFIG[&quot;snowflake_account_fullurl&quot;] self.dbname = CONFIG['snowflake_dbname'] self.tablename = CONFIG['snowflake_tablename'] self.uname = CONFIG['snowflake_uname'] self.password = CONFIG['snowflake_password'] self.syncdatacsvpath = CONFIG['data_csv_path'] self.cur,self.conn = self.connect() self.engine = self.connect_using_sqlalchemy() def connect_using_sqlalchemy(self): conn_string = f&quot;snowflake://{self.uname}:{quote_plus(self.password)}@{self.account}/{self.dbname}/PUBLIC&quot; engine = create_engine(conn_string) return engine def load_csv_with_sqlalchemy(self): df = pd.read_csv(self.syncdatacsvpath) if_exists = 'replace' #Write the data to Snowflake, using pd_writer to speed up loading print('.... Trying to send csv data to snowflake .....') with self.engine.connect() as con: df.to_sql(name=self.tablename.lower(), con=con, if_exists=if_exists, method=pd_writer, index=False) </code></pre> <p>Once I call the load_csv_with_sqlalchemy() with the correct username, password, account details I am facing the following problem -</p> <pre><code>Exception has occurred: OperationalError 253003: While putting file(s) there was an error: 'HTTPError('403 Client Error: Forbidden for url: https://sfc-in-ds1-99-customer-stage.s3.amazonaws.com/3f0c0000-s/stages/decf206a-e8c1-484e-9231-5d7feaa038f0/file0.txt')', this might be caused by your access to the blob storage provider, or by Snowflake. 
File &quot;D:\AAIN1464\reconciliation\lib\snowflake_helper.py&quot;, line 55, in load_csv_with_sqlalchemy df.to_sql(name=self.tablename.lower(), con=con, if_exists=if_exists, method=pd_writer, index=False) File &quot;D:\AAIN1464\reconciliation\run.py&quot;, line 104, in sync_data snowflakehelper.load_csv_with_sqlalchemy() File &quot;D:\AAIN1464\reconciliation\run.py&quot;, line 203, in &lt;module&gt; sync_data() snowflake.connector.errors.OperationalError: 253003: While putting file(s) there was an error: 'HTTPError('403 Client Error: Forbidden for url: https://sfc-in-ds1-99-customer-stage.s3.amazonaws.com/3f0c0000-s/stages/decf206a-e8c1-484e-9231-5d7feaa038f0/file0.txt')', this might be caused by your access to the blob storage provider, or by Snowflake. </code></pre> <p>I tried using the write_pandas() function and gave all access to my account role and still facing the issue , wonder what is causing the problem here ?</p> <p>From my debugger I know that the source of error is happening in the line below which means connection is successful but failing to add df data to snow flake due to some unknown restrictions -</p> <pre><code>df.to_sql(name=self.tablename.lower(), con=con, if_exists=if_exists, method=pd_writer, index=False) </code></pre>
<python><python-3.x><snowflake-cloud-data-platform>
2023-09-13 16:30:54
1
1,070
null
77,098,919
12,394,134
Wrapping head around sequential GitHub Actions
<p>I am wanting to run some sequential GitHub Actions. I have a GitHub action for a data cleaning step and then another one to produce some topline reports from these data.</p> <p>My thoughts were, that I could do a process like so.</p> <pre><code>setup.yml --&gt; clean.yml --&gt; topline.yml </code></pre> <p>This is what setup.yml looks like:</p> <pre><code>name: setup on: workflow_call: jobs: installation: runs-on: ubuntu-latest steps: - name: Checkout repository id: checkout-repo uses: actions/checkout@v3 - name: Install python id: setup-python uses: actions/setup-python@v4 with: python-version: 3.11 - name: Load cached poetry intallation id: cached-poetry uses: actions/cache@v3 with: path: ~/.local key: poetry-0 - name: Install poetry uses: snok/install-poetry@v1 with: virtualenvs-create: true virtualenvs-in-project: true installer-parallel: true - name: Load cached virtual environment id: cached-poetry-dependencies uses: actions/cache@v3 with: path: .venv key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }} - name: Install dependencies if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true' run: poetry install --no-interaction --no-root - name: Install project run: poetry install --no-interaction - name: Install Quarto id: install-quarto uses: quarto-dev/quarto-actions/setup@v2 - name: Activate poetry shell id: activate-poetry run: source .venv/bin/activate </code></pre> <p>Then, my hope was that I could then execute a python file that cleans the data</p> <pre><code>name: cleaning on: workflow_call: jobs: execute-cleaning: runs-on: ubuntu-latest steps: - name: Execute cleaning script run: | python src/twenty-three/sp-2023.py </code></pre> <p>Then, once the cleaning step is complete, I was hoping that I could then produce the topline reports with a topline workflow.</p> <pre><code>name: topline on: workflow_call: jobs: execute-topline: runs-on: ubuntu-latest steps: - name: Render the topline 
file id: render-topline run: | quarto render src/twenty-three/sp-2023-topline.qmd </code></pre> <p>All of these steps are managed with a file called build.yml</p> <pre><code>name: build on: [push] jobs: call-setup: uses: ./.github/workflows/setup.yml call-clean: needs: call-setup uses: ./.github/workflows/clean.yml call-topline: needs: call-clean uses: ./.github/workflows/topline.yml </code></pre> <p>The problem that I am running into is that between each of these workflows, things seem to restart.</p> <p>I've looked around at some of the documentation and I am new to GitHub Actions. So I am getting a little bit confused. Is my problem that I am using <code>workflow_call</code> instead of <code>workflow_run</code>? Would using <code>workflow_run</code> in my <code>build.yml</code> allow me to hand-off things from one workflow to the next, or do I have to store data from one to the next? I am just a little bit confused as to what the most efficient way to do this would be. I would definitely not prefer to have to re-install Python and my dependencies for the <code>clean</code> and <code>topline</code> workflows, that seems like a bit of a waste. Thus, trying to do things sequentially like this.</p> <p>I am sorry if this might seem like a duplicate question and I have looked around quite a bit, but I could really use someone helping me out by explaining some of the logic and where my confusion is coming from here.</p>
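One common layout (a sketch; the paths come from the question, and the poetry/Quarto install and caching steps from setup.yml are elided for brevity) is a single job whose steps run sequentially on one runner, so the installed environment is reused by the cleaning and rendering steps:

```yaml
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      # ... the poetry + Quarto install/caching steps from setup.yml ...
      - name: Clean data
        run: poetry run python src/twenty-three/sp-2023.py
      - name: Render topline
        run: poetry run quarto render src/twenty-three/sp-2023-topline.qmd
```

When separate jobs are genuinely required, `actions/upload-artifact` and `actions/download-artifact` are the usual way to hand files from one job's runner to the next.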
<python><github-actions><python-poetry>
2023-09-13 16:30:13
0
326
Damon C. Roberts
77,098,918
1,955,231
Printing out the OPCODES to which a Python Re is compiled?
<p>Python's <code>re</code> module allows for some compilation of a given regex pattern.</p> <p>For example,</p> <pre><code>import re c = re.compile('(?&lt;=abc)def') </code></pre> <p>How can I print out the code the expression is compiled to? Any links to some documentation about the implementation details would also be helpful</p> <p>The <a href="https://github.com/python/cpython/blob/3.11/Lib/re/_compiler.py" rel="nofollow noreferrer">source code</a> suggests that this compilation is done to some kind of internal VM (like <a href="https://swtch.com/%7Ersc/regexp/regexp1.html" rel="nofollow noreferrer">PikeVM</a>). The opcodes of this VM can be found <a href="https://github.com/python/cpython/blob/3.11/Lib/re/_constants.py" rel="nofollow noreferrer">here</a></p>
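For what it's worth, `re.compile(pattern, re.DEBUG)` prints the parse tree, and both compilation stages can be poked at directly through the private, version-dependent modules. A sketch (these are internal APIs, so names may shift between releases):

```python
try:  # Python 3.11+ keeps the engine under re._*
    from re import _parser as sre_parse, _compiler as sre_compile
    from re import _constants as sre_constants
except ImportError:  # older versions expose top-level sre_* modules
    import sre_parse, sre_compile, sre_constants

pattern = "(?<=abc)def"

parsed = sre_parse.parse(pattern)    # the parse tree (what re.DEBUG prints)
code = sre_compile._code(parsed, 0)  # flat list of ints for the internal VM
print(parsed)
print(code)

# Careful when decoding: opcodes and their arguments are interleaved in the
# list, so not every int is an opcode.  The last element is always SUCCESS.
print(sre_constants.OPCODES[code[-1]].name)
```

Each entry in `sre_constants.OPCODES` is a named int constant, so indexing it with an opcode value recovers the opcode's name.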
<python><regex><compilation>
2023-09-13 16:30:05
1
2,051
Agnishom Chattopadhyay
77,098,917
21,395,742
Inbuilt python command to run user input script
<p>Is there an inbuilt command to run user input as a Python script? e.g.</p> <pre><code>a = 3 command(&quot;Enter command: &quot;) </code></pre> <p>Then, when run in the terminal:</p> <pre><code>Enter command: print(a) 3 </code></pre>
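A sketch of what seems to be intended: the built-ins `exec()` (statements) and `eval()` (expressions) run a string as Python, and the stdlib `code` module provides a full interactive console. Note that running raw user input this way executes arbitrary code, so it is unsafe outside trusted tooling:

```python
a = 3

# exec() runs arbitrary statements; names from the enclosing scope are visible.
command = 'print(a)'        # stand-in for: command = input("Enter command: ")
exec(command)               # prints 3

# eval() is the expression-only variant and returns the value.
result = eval("a + 1")

# For a real sub-REPL, the stdlib offers:
#   import code; code.interact(local=locals())
```

Passing an explicit namespace dict to `exec` keeps the executed code from touching the caller's variables directly.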
<python><python-3.x>
2023-09-13 16:29:46
1
845
hehe
77,098,807
8,385,599
Allocating large shared memory leads to "Bus error (core dumped)"
<p>When allocating a large amount of shared memory (60% of total 128GiB RAM, 3GiB used), I got &quot;Bus error (core dumped)&quot; after several seconds and the memory usage went up. But if I allocate normal memory (in <code>b()</code>), it works ok.</p> <pre class="lang-py prettyprint-override"><code>from multiprocessing.managers import SharedMemoryManager import numpy as np SIZE = 70462337280 # Exact size is not quite important def spin(): import time while True: time.sleep(60) def a(): with SharedMemoryManager() as smm: shm = smm.SharedMemory(size=SIZE) buf = np.ndarray((SIZE,), dtype=np.uint8, buffer=shm.buf) buf.fill(0) print(&quot;Done&quot;) spin() def b(): buf = np.empty((SIZE,), dtype=np.uint8) buf.fill(0) print(&quot;Done&quot;) spin() # Bus error (core dumped) a() # Works fine # b() </code></pre> <p>I checked <code>/proc/sys/kernel/shmmax</code>, but that doesn't seem to be bounded.</p> <pre><code>cat /proc/sys/kernel/shmmax 18446744073692774399 </code></pre>
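One likely explanation (an assumption worth checking, not something the traceback proves): on Linux, `multiprocessing.shared_memory` segments are files on the tmpfs mounted at `/dev/shm`, which by default holds only 50% of RAM — 64 GiB here, just under the ~65.6 GiB requested. Writing past the tmpfs free space delivers SIGBUS, i.e. "Bus error", while `np.empty` in `b()` uses ordinary anonymous memory and is unaffected. A small check sketch:

```python
import os
import shutil

def shm_free_bytes(path="/dev/shm"):
    """Free bytes on the tmpfs backing POSIX shared memory, or None if absent."""
    if not os.path.isdir(path):
        return None  # e.g. macOS/Windows keep shared memory elsewhere
    return shutil.disk_usage(path).free

SIZE = 70_462_337_280
free = shm_free_bytes()
if free is not None and free < SIZE:
    print(f"/dev/shm has only {free} bytes free; "
          f"writing {SIZE} bytes would raise SIGBUS")
# The limit can be raised, e.g.: mount -o remount,size=80G /dev/shm
```

`kernel.shmmax` only bounds System V shared memory, which is why it looks unlimited here.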
<python><linux><shared-memory><coredump>
2023-09-13 16:12:48
1
677
obfish
77,098,690
1,914,781
Insert rows with previous row's values to keep column value continuously increasing
<p>I have some data with missing values; I need to insert a fake row based on the previous row.</p> <p>Example:</p> <pre><code>import pandas as pd data = [ [1,'A'], [2,'B'], [3,'C'], [4,'D'], [5,'E'], [7,'G'], [9,'H'], [10,'I'] ] df = pd.DataFrame(data,columns=['v1','v2']) print(df) </code></pre> <p>In this sample the v1 column should be continuously increasing, so #6 is missing; I would like to insert [6,'E'] to get the final result.</p>
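One way that seems to match the requirement: set `v1` as the index, reindex over the full integer range, and forward-fill the gap rows from the previous row:

```python
import pandas as pd

data = [[1, 'A'], [2, 'B'], [3, 'C'], [4, 'D'], [5, 'E'],
        [7, 'G'], [9, 'H'], [10, 'I']]
df = pd.DataFrame(data, columns=['v1', 'v2'])

full_range = range(df['v1'].min(), df['v1'].max() + 1)
out = (df.set_index('v1')
         .reindex(full_range)   # missing v1 values appear as NaN rows
         .ffill()               # fill each gap with the previous row
         .rename_axis('v1')
         .reset_index())
print(out)
```

With the sample data this inserts `[6, 'E']` and also `[8, 'G']`, since 8 is missing as well.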
<python><pandas>
2023-09-13 15:56:10
1
9,011
lucky1928
77,098,653
13,040,314
SQLAlchemy: select nested columns
<p>I have following models and associations. AA can have multiple BB i.e AA.bbs is a list of BB. I want to select only few columns from AA and BB i.e name of AA and name, username from BB.</p> <pre><code>aa2cc_association = Table( 'aa2bbassociation', Base.metadata, Column('aa_id', Integer, ForeignKey(get_full_table_name(TABLE_FOLDER_NAME) + '.id'), primary_key=True), Column('username', Unicode(512), ForeignKey(get_full_table_name(TABLE_FOLDER_TEST_NAME) + '.username'), primary_key=True), **_tablekwargs) class BB(Base): id = Column(&quot;id&quot;) name = Column(&quot;name&quot;) username = Column(&quot;username&quot;) class AA(Base): id = Column(&quot;id&quot;) name = Column(&quot;name&quot;) bbs = relationship(BB, secondary=aa2cc_association, backref='folders', lazy=&quot;joined&quot;) </code></pre> <p>Table AA</p> <pre><code>id name 1 aa1 2 aa2 </code></pre> <p>Table BB</p> <pre><code>id name username 1 bb1 u1 2 bb2 u2 </code></pre> <p>Table aa2bbassociation</p> <pre><code>aa_id username 1 u1 1 u2 </code></pre> <p>I perform this query</p> <pre><code>results = session.query(AA.id, AA.name, BB.name, BB.username).group_by(AA.name) </code></pre> <p>But it returns only one row.</p> <p>Returned result table</p> <pre><code>AA.name BB.name BB.username aa1 bb1 u1 </code></pre> <p>I am expecting something like this. How can I get this?</p> <pre><code> AA.id AA.name BB.name BB.username 1 aa1 bb1 [u1, u2] </code></pre>
<python><sqlalchemy><orm><foreign-keys>
2023-09-13 15:51:16
1
325
StaticName
77,098,619
16,864,869
Deepcopy of openpyxl workbook object
<p>I was wondering if there's a way to create a deep copy of a workbook object that's created using the <code>load_workbook</code> method.</p> <p>I want to create a process like this (this project will be in a Jupyter Notebook):</p> <pre class="lang-py prettyprint-override"><code> #Cell 1 from openpyxl import load_workbook #Cell 2 wb1 = load_workbook(r&quot;c:\temp\abc123.xlsm&quot;, keep_vba=True) #This file is large and takes several minutes to load #Cell 3 wb2 = wb1 #This doesn't create a deep copy but I would like something that creates a deep copy #Cell 4 #Changes to wb2 happen here e.g. sheets get deleted. I would like to go back and rerun cell 2 so that I can reuse the wb1 variable without needing to load the file again. </code></pre>
<python><openpyxl>
2023-09-13 15:47:28
1
1,348
Brian Gonzalez
77,098,480
6,560,267
Polars + Psycopg2: write column of lists to PostgreSQL
<p>I'm trying to write a table that has a column of lists. When writing, I was initially getting an error due to psycopg2 not being able to transform the <code>np.array</code> datatype to a postgre type.</p> <p>After reading <a href="https://stackoverflow.com/questions/39564755/programmingerror-psycopg2-programmingerror-cant-adapt-type-numpy-ndarray">this question</a> I learned of adapters and tried both the <code>AsIs</code> and the <code>QuotedString</code> ones, trying to imitate the string one would use in SQL.</p> <p>Let's see a MWE: in SQL, the following works</p> <pre class="lang-sql prettyprint-override"><code>CREATE TABLE test_arrays ( id serial PRIMARY KEY, foo REAL [], bar VARCHAR (24) ); INSERT INTO test_arrays (foo, bar) VALUES('{1.0, 4.0}', 'baz'); INSERT INTO test_arrays (foo, bar) VALUES('{2.0, 42.0}', 'qux'); </code></pre> <p>my attempt in Python to write the same table, but the <code>foo</code> column is of type text:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl def register_psycopg_adapters(): import numpy as np from psycopg2.extensions import register_adapter, QuotedString def addapt_numpy_array(numpy_array): # should return e.g. '{1.0, 4.0}' return QuotedString(&quot;{&quot; + &quot;, &quot;.join(map(str, numpy_array)) + &quot;}&quot;) register_adapter(np.ndarray, addapt_numpy_array) def test_writing_array_to_postgres(): conn = &quot;...&quot; # the connection string register_psycopg_adapters() df = pl.DataFrame( dict( foo=[[1.0, 42.0], [4.0, 7.0]], bar=[&quot;baz&quot;, &quot;qux&quot;], ) ) df.write_database(&quot;test_arrays&quot;, conn, if_exists=&quot;replace&quot;) </code></pre> <p>I'd like to know how to write the foo column as an ARRAY type.</p>
<python><postgresql><psycopg2><python-polars>
2023-09-13 15:28:02
1
913
Adrian
77,098,444
17,471,060
Disable font colour formatting for negative values in Python Polars generated Excel
<p>I would like to disable automatic font colouring of negative values in polars <code>write_excel</code>. Any tips?</p> <pre><code>import numpy as np import polars as pl import xlsxwriter df2 = pl.DataFrame(data=np.random.randint(-10, 10, 5*3).reshape(-1, 3), schema=['x', 'y', 'z']) with xlsxwriter.Workbook(r'_out_.xlsx') as workbook: df2.write_excel(workbook=workbook, worksheet='sheet1', autofit=True, table_style=None, column_formats=None, conditional_formats=False, float_precision=0, sparklines=None, formulas=None) </code></pre> <p>This is what it generates -</p> <p><a href="https://i.sstatic.net/TBumf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TBumf.png" alt="enter image description here" /></a></p>
<python><python-3.x><python-polars>
2023-09-13 15:23:33
2
344
beta green
77,098,398
18,108,367
Why is executed only one module import instead of 2?
<p>Suppose I have 3 files in the folder <code>project</code>, as shown below:</p> <pre><code>project |- module_b.py |- script_B1.py |- script_B2.py </code></pre> <h2>Content of the 3 Python files</h2> <p>The content of <code>module_b.py</code> is:</p> <pre><code>print(&quot;Inside module_b&quot;) </code></pre> <p>The content of <code>script_B1.py</code> is:</p> <pre><code>import module_b print(&quot;Inside script_B1&quot;) </code></pre> <p>The content of <code>script_B2.py</code> is:</p> <pre><code>import module_b import script_B1 print(&quot;Inside script_B2&quot;) </code></pre> <h2>Output of the execution of the 2 scripts</h2> <p>The output of the execution of <code>script_B1.py</code> is:</p> <pre><code>Inside module_b Inside script_B1 </code></pre> <p>The output of <code>script_B2.py</code> is:</p> <pre><code>Inside module_b Inside script_B1 Inside script_B2 </code></pre> <h2>Question about the output of <code>script_B2.py</code></h2> <p>In the execution of <code>script_B2.py</code> the module <code>module_b.py</code> is imported first by <code>script_B2.py</code> and then by <code>script_B1.py</code>.
So I was thinking that the output of the execution of <code>script_B2.py</code> would be:</p> <pre><code>Inside module_b Inside module_b Inside script_B1 Inside script_B2 </code></pre> <p>My question is: why does the output of the execution of <code>script_B2.py</code> contain only one <code>Inside module_b</code> line?</p> <hr /> <p><strong>EDIT</strong>: I have modified the file <code>script_B2.py</code>, adding 2 prints of the list <code>sys.modules.keys()</code>:</p> <pre><code>import sys import module_b print(sys.modules.keys()) import script_B1 print(sys.modules.keys()) print(&quot;Inside script_B2&quot;) </code></pre> <p>By these prints I can see that, after the <code>import module_b</code> inside the file <code>script_B2.py</code>, the list <code>sys.modules.keys()</code> contains the element <code>module_b</code>.<br /> This avoids the execution of the code of <code>module_b.py</code> when it is imported by <code>script_B1.py</code>.</p>
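The caching described in the edit can be reproduced with nothing but the stdlib. A sketch simulating what the import system does with `sys.modules` (the fabricated module here stands in for `module_b.py`):

```python
import sys
import types

# First import: create the module object, execute its body once,
# and cache the result in sys.modules.
fake = types.ModuleType("module_b")
exec('print("Inside module_b")', fake.__dict__)   # body runs exactly once
sys.modules["module_b"] = fake

# Every later `import module_b` is just a cache lookup; the body does
# not run again, which is why "Inside module_b" is printed only once.
import module_b
assert module_b is sys.modules["module_b"]
```

This is also why all importers of a module share one instance: they all receive the same object out of `sys.modules`.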
<python><import><module>
2023-09-13 15:17:16
1
2,658
User051209
77,098,347
9,415,280
UserWarning: Input dict contained keys which did not match any model input. They will be ignored by the model
<p>I am trying to convert my code to use <code>tf.data.Dataset</code>. I'm not far off, but I still have a problem with my features and the model input layer that I never saw before using <code>tf.data.Dataset</code>.</p> <p>I load a lot of .csv files with many feature columns; the CSVs have a header with string column names.</p> <p>My simple test code is:</p> <pre><code>import tensorflow as tf import pandas as pd bd_path = 'C:/Users/my doc/Python/mini_test/' keep_columns = ['precipitation', 'temperature_min', 'temperature_max', 'snow_depth_water_equivalent_max', 'streamflow'] name_columns = pd.read_csv(bd_path + 'camels_01022500+attributs_mensuels.csv').columns # Enable eager execution tf.config.run_functions_eagerly(True) # Load a single CSV file and preprocess it def load_and_preprocess_csv(filename): columns = name_columns dataset = tf.data.experimental.make_csv_dataset( file_pattern=filename, num_parallel_reads=2, batch_size=32, num_epochs=1, label_name='streamflow', column_names=columns, select_columns=keep_columns, shuffle_buffer_size=10000, header=True, field_delim=',' ) # Apply preprocessing to the dataset def preprocess_fn(features, label): # Normalize the features (example: scaling to [0, 1]) features['precipitation'] /= 100.0 features['temperature_min'] /= 100.0 features['temperature_max'] /= 100.0 features['snow_depth_water_equivalent_max'] /= 100.0 # last trial I did # Create a 'main_inputs' feature by stacking the selected columns features['main_inputs'] = tf.stack([ features['precipitation'], features['temperature_min'], features['temperature_max'], features['snow_depth_water_equivalent_max'] ], axis=-1) # here another trial without success...
# Rename the columns to match the model's input layer #features['main_inputs'] = tf.cast(features['main_inputs'], tf.float32) # Ensure the dtype is correct #features['main_inputs'] = tf.identity(features['main_inputs'], name='main_inputs') # Rename the feature return features, label dataset = dataset.map(preprocess_fn) return dataset # Create a list of file paths matching pattern file_paths = tf.io.gfile.glob(bd_path + '*.csv') # Load and preprocess CSV files in parallel building_datasets = [] for file_path in file_paths: dataset = load_and_preprocess_csv(file_path) building_datasets.append(dataset) # Combine the individual datasets into a single dataset combined_dataset = tf.data.Dataset.sample_from_datasets(building_datasets) # Optional, further transform, shuffle, and batch the dataset as needed # For example: combined_dataset = combined_dataset.shuffle(buffer_size=10000) #combined_dataset = combined_dataset.batch(64) # model tensor_input = tf.keras.layers.Input(shape=(4,), name='main_inputs') xy = tf.keras.layers.Dense(10, activation='linear')(tensor_input) xy = tf.keras.layers.Dropout(rate=0.2)(xy) out = tf.keras.layers.Dense(1, activation='linear')(xy) model = tf.keras.Model(inputs=tensor_input, outputs=out) optimizer = tf.keras.optimizers.Adam(learning_rate=0.001) model.compile(optimizer=optimizer, loss='mse') # Train the model history = model.fit(combined_dataset, epochs=1) </code></pre> <p>The warning I get is:</p> <pre><code>... \keras\engine\functional.py:637: UserWarning: Input dict contained keys ['temperature_min', 'snow_depth_water_equivalent_max', 'temperature_max', 'precipitation'] which did not match any model input. They will be ignored by the model. </code></pre> <p>My experience is with passing arrays directly to the model. Does the input layer need to be modified, or is it my dataset that needs more modification?</p>
<python><tensorflow><keras><deep-learning><tf.data.dataset>
2023-09-13 15:11:08
1
451
Jonathan Roy
77,098,297
1,479,670
TensorFlow: failure to convert elements of BinaryCrossentropy to Tensor
<p>I'm following an online course for TensorFlow, but one of the examples that works in the video does not work for me (I've checked - there are no typos).</p> <p>I have a feature array <code>X</code> of shape <code>(1000,2)</code> and a label array <code>y</code> of shape <code>(1000,)</code>. Here's the code that fails for me:</p> <pre><code># 1. Create the model using the Sequential API model_1 = tf.keras.Sequential([ tf.keras.layers.Dense(1) ]) # 2. Compile it model_1.compile(loss=tf.keras.losses.BinaryCrossentropy, optimizer=tf.keras.optimizers.SGD(), metrics=['accuracy']) # 3. Fit it model_1.fit(X, y, epochs=5) </code></pre> <p>The error occurs in the <code>fit()</code> method:</p> <pre><code> TypeError: Failed to convert elements of &lt;keras.src.losses.BinaryCrossentropy object at 0x79aa68216da0&gt; to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes. </code></pre> <p>I have tried to manually convert the numpy arrays to tensors with type <code>tf.float64</code>, but the error remains.</p> <p>I think it could be a version problem: in the video the TensorFlow version is 2.3.0, whereas mine is 2.13.0.</p> <p>So my question is, which elements would have to be cast to which types to make this code work?</p>
<python><tensorflow>
2023-09-13 15:03:41
0
1,355
user1479670
77,098,140
2,826,892
Reusing detached ORM objects across SQLAlchemy sessions during testing
<p>I'm trying to understand what SQLAlchemy does to ORM objects. They must have some state attached to them that I'm not grokking. In the below example, <code>test_one</code> will succeed, but <code>test_two</code> will fail because the Employee object will not be added to the session in <code>test_two</code>. Somewhere in SQLAlchemy internals it sees that the object was already added to a session and deleted from a session, even though the session is closed in the module scope, so <code>test_two</code> should have a brand new session. My workaround was to wrap the defaults in a function and return new instances of the ORM objects each time. I'm still curious what is happening.</p> <p>Some things I've tried: calling <code>expunge_all</code>, <code>expire_all</code>, <code>expunge</code> on the object, etc. when I close the session. On the second call to <code>add_all</code> in <code>test_two</code>, the objects are added to the session's <code>identity_map</code>, but on <code>commit()</code> they are never added to the db.
Hence the second assert fails.</p> <p>This is a Flask app using Flask-SQLAlchemy and SQLAlchemy 1.4</p> <p>defaults.py</p> <pre><code>from myapp.models import Employee EMPLOYEES = [ Employee(employee_id=8, given_name='first', family_name='last', email='test@example.com') ] </code></pre> <p>conftest.py <code>mysql_proc</code> is a fixture provided by pytest-mysql</p> <pre><code>@pytest.fixture(scope=&quot;session&quot;) def app(mysql_proc): url = f'mysql+mysqldb://root:@127.0.0.1:3307' # create the test database before we connect using TestingConfig engine = create_engine(url, echo=False, poolclass=NullPool) with engine.connect() as conn: conn.execute('CREATE DATABASE IF NOT EXISTS mydb') conn.execute('USE mydb') url = f'mysql+mysqldb://root:@127.0.0.1:3307/mydb' engine = create_engine(url, echo=True, poolclass=NullPool) with engine.connect() as conn: db.metadata.create_all(bind=conn) # this will use Flask-SQLAlchemy's session from now on app = create_app(TestingConfig()) yield app @pytest.fixture(scope=&quot;session&quot;) def db_engine(mysql_proc): url = f'mysql+mysqldb://root:@127.0.0.1:3307/mydb' engine = create_engine(url, echo=False, poolclass=NullPool) yield engine engine.dispose() @pytest.fixture(scope=&quot;session&quot;) def db_session_maker(db_engine): return scoped_session(sessionmaker(bind=db_engine)) @pytest.fixture(scope=&quot;module&quot;) def core_session(app, db_session_maker): session = db_session_maker() yield session session.rollback() session.close() @pytest.fixture(scope=&quot;module&quot;) def employees_fixture(core_session): core_session.add_all(EMPLOYEES) core_session.commit() yield core_session core_session.execute(delete(Employee)) core_session.commit() </code></pre> <p>test_one.py</p> <pre><code>import pytest from myapp.models import Employee def test_one(core_session, employees_fixture): assert core_session.query(Employee).count() == 1 </code></pre> <p>test_two.py</p> <pre><code>import pytest from myapp.models import Employee def 
test_two(core_session, employees_fixture): assert core_session.query(Employee).count() == 1 </code></pre>
<python><sqlalchemy><pytest><flask-sqlalchemy>
2023-09-13 14:43:07
1
924
mikew
77,098,111
4,239,879
QListWidget won't show in a custom QFrame
<p>I can't get the <code>ModMergeFrame</code> frame to show the <code>self.mod_list</code>. I have tried reading the docs and examples, but failed to understand and fix the problem.</p> <p>The code below in the <code>ModMergeFrame</code> class works correctly. Displaying the <code>self.mod_list.count()</code> returns 2, as expected. So the list item is being created and populated. The two <code>QPushButtons</code> are visible as well, but the <code>QListWidget</code> is not.</p> <p>Putting the same object in the <code>centralwidget</code> of <code>MainWindow</code> displays it correctly, though.</p> <pre><code>class CommonFrame(QFrame): &quot;&quot;&quot; Common frame that will be used as a base for all other frames &quot;&quot;&quot; def __init__(self, parent, settings, objectName): super().__init__(parent, objectName=objectName) layout = QGridLayout(self) self.setLayout(layout) self.hide() self.settings = settings class ModMergeFrame(CommonFrame): &quot;&quot;&quot; Mod merge frame that will be displayed in the main application window &quot;&quot;&quot; def __init__(self, parent, settings): super().__init__(parent, settings, objectName=&quot;Mod merge&quot;) test_button = QPushButton(self.objectName(), self) self.layout().addWidget(test_button) test_button = QPushButton(&quot;13&quot;, self) self.layout().addWidget(test_button) # Mod list for selection of mods to merge self.mod_list = QListWidget(self) self.mod_list.addItems([&quot;bla&quot;, &quot;bla2&quot;]) self.layout().addWidget(self.mod_list) class MainWindow(QMainWindow): &quot;&quot;&quot; Main application window &quot;&quot;&quot; def __init__(self, settings): try: QMainWindow.__init__(self) self.settings = settings self.setMinimumSize(QSize(700, 700)) self.setWindowTitle(&quot;Bla&quot;) toolbar = QToolBar(&quot;Main toolbar&quot;) self.addToolBar(toolbar) central_widget = QWidget(self) central_widget_layout = QVBoxLayout(central_widget) # Generate frames.
Those come from custom QFrame-based classes # Can be toggled with toolbar and are displayed in the central widget self.generate_mod_merge_frame(central_widget_layout, toolbar) central_widget.setLayout(central_widget_layout) self.setCentralWidget(central_widget) except Exception as exc: traceback_formatted = traceback.format_exc() logger.info(traceback_formatted) raise RuntimeError(traceback_formatted) from exc def generate_mod_merge_frame(self, central_widget_layout, toolbar): &quot;&quot;&quot; Generates main window frame with a given name &quot;&quot;&quot; frame_name = &quot;Mod merge&quot; frame = ModMergeFrame(self, self.settings) frame_action = QAction(frame_name, self) frame_action.setStatusTip(frame_name) frame_action.triggered.connect( lambda checked, frame_name=frame_name: self.show_frame_by_name(frame_name) ) central_widget_layout.addWidget(frame) toolbar.addAction(frame_action) </code></pre>
<python><pyqt6>
2023-09-13 14:38:26
1
2,360
Ivan
77,098,066
1,132,544
Why are rows missing when I split a dataframe on a condition and its opposite?
<p>I have a dataframe <code>df</code> which I want to split.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Product</th> <th>Category</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>non-food</td> </tr> <tr> <td>B</td> <td>food</td> </tr> <tr> <td>C</td> <td>non-food</td> </tr> <tr> <td>D</td> <td>NULL</td> </tr> <tr> <td>...</td> <td>....</td> </tr> <tr> <td>XYZ</td> <td>NULL</td> </tr> </tbody> </table> </div> <p>Therefore I have the following code:</p> <pre><code>print(df.count()) -&gt; 150.000 filter_cr = (df[&quot;Category&quot;].isNull()) &amp; (df[&quot;Category&quot;] != &quot;food&quot;) df_food = df.filter(filter_cr) print(df_food.count()) -&gt; 120.000 df_non_food = df.filter(~filter_cr) print(df_non_food.count()) -&gt; 22.000 </code></pre> <p>Now I am trying to understand where my <strong>8.000</strong> missing rows are. Does anybody have an idea?</p> <p>BR</p>
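The likely culprit is SQL-style three-valued logic, which Spark follows: a comparison with NULL yields NULL, not true or false, and a row whose predicate evaluates to NULL is dropped by both `df.filter(cond)` and `df.filter(~cond)` — so the rows with a NULL Category land in neither frame. The same behaviour, demonstrated with the stdlib `sqlite3` standing in for Spark:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (category TEXT)")
con.executemany("INSERT INTO products VALUES (?)",
                [("food",), ("non-food",), (None,)])

# A condition and its negation, applied to the same 3 rows:
n_food = con.execute(
    "SELECT COUNT(*) FROM products WHERE category = 'food'").fetchone()[0]
n_not_food = con.execute(
    "SELECT COUNT(*) FROM products WHERE NOT (category = 'food')").fetchone()[0]

# Only 2 of the 3 rows are accounted for: the NULL row satisfies neither
# branch, because NULL = 'food' evaluates to NULL rather than true or false.
print(n_food, n_not_food)
```

In PySpark the usual fix is to handle NULLs explicitly, e.g. with `isNull()`/`isNotNull()` or `eqNullSafe`, so every row falls into exactly one of the two frames.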
<python><apache-spark><pyspark><azure-synapse>
2023-09-13 14:31:36
1
2,707
Gerrit
77,097,909
5,379,479
Collect data from side table(s) in wikipedia page(s)
<p>I'm trying to create a python script that can collect information from the side tables in a wikipedia page. For an example see <a href="https://en.wikipedia.org/wiki/Ford_Fusion_(Americas)" rel="nofollow noreferrer">this page</a>. Along the right hand side of the page, there are 3 vertical looking HTML <code>table</code>s. The first is titled &quot;Ford Fusion&quot;, the 2nd &quot;First generation&quot;, and the 3rd &quot;Second generation&quot;.</p> <p>When I try to collect the HTML for the webpage, the tables on the right are not returned with code like this</p> <pre class="lang-py prettyprint-override"><code>import requests from bs4 import BeautifulSoup search_string = f&quot;Ford Fusion&quot; search_url = f&quot;https://en.wikipedia.org/w/api.php?action=query&amp;list=search&amp;format=json&amp;srsearch={search_string}&quot; search_response = requests.get(search_url) search_data = search_response.json() closest_match = search_data[&quot;query&quot;][&quot;search&quot;][0][&quot;title&quot;] page_url = f&quot;https://en.wikipedia.org/w/api.php?action=query&amp;prop=extracts&amp;format=json&amp;titles={closest_match}&quot; page_response = requests.get(page_url) page_data = page_response.json() page_id = list(page_data[&quot;query&quot;][&quot;pages&quot;].keys())[0] html_text = page_data[&quot;query&quot;][&quot;pages&quot;][page_id][&quot;extract&quot;] soup = BeautifulSoup(html_text, &quot;html.parser&quot;) tables = soup.find_all('table') print(len(tables)) &gt;&gt; 0 </code></pre> <p>I've inspected the <code>html_text</code> variable and for some reason the <code>table</code>s aren't even there, even though I can plainly see them when inspecting the webpage in my browser. How can I get these tables to be returned as part of the <code>request.get</code> call to the URL?</p>
<python><web-scraping><python-requests><wikipedia>
2023-09-13 14:13:36
1
2,138
Jed
77,097,816
1,585,652
Why is my KafkaConsumer stuck in a Joining Group Loop?
<p>I have a KafkaConsumer that is stuck in a &quot;joining group&quot; loop.</p> <p>It's a simple kafka-python consumer; here is the script:</p> <pre><code>from kafka import KafkaConsumer
import logging
import sys

root = logging.getLogger('kafka')
root.setLevel(logging.INFO)

handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.INFO)
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
root.addHandler(handler)

bootstrap_servers_sasl = ['node1.dev.company.local:9092',
                          'node2.dev.company.local:9092',
                          'node3.dev.company.local:9092']
topicName = 'test_sasl'

consumer = KafkaConsumer(
    topicName,
    bootstrap_servers = bootstrap_servers_sasl,
    security_protocol = 'SASL_PLAINTEXT',
    sasl_mechanism = 'SCRAM-SHA-512',
    sasl_plain_username = 'test_user',
    sasl_plain_password = 't3st_us3r',
    group_id = 'test_group'
)

try:
    for message in consumer:
        if message:
            print(f&quot;Received message: {message.value.decode('utf-8')}&quot;)
except Exception as e:
    print(f&quot;An exception occurred: {e}&quot;)
finally:
    consumer.close()
</code></pre> <p>When I include <code>group_id</code> when creating the KafkaConsumer, I see the following in the log over and over again forever, and the consumer never actually sees any message published to the topic it is supposed to be monitoring:</p> <pre><code>2023-09-13 08:35:44,102 - kafka.cluster - INFO - Group coordinator for test_group is BrokerMetadata(nodeId='coordinator-0', host='node1.dev.company.local', port=9092, rack=None)
2023-09-13 08:35:44,102 - kafka.coordinator - INFO - Discovered coordinator coordinator-0 for group test_group
2023-09-13 08:35:44,102 - kafka.coordinator - INFO - (Re-)joining group test_group
2023-09-13 08:35:44,104 - kafka.coordinator - WARNING - Marking the coordinator dead (node coordinator-0) for group test_group: [Error 16] NotCoordinatorForGroupError.
</code></pre> <p>If I don't include <code>group_id</code>, everything works fine.</p> <p>The Kafka brokers are Confluent Kafka, with protocol version 3.4-IV0.</p> <p>I can provide any other information, but am not sure what would be most useful to include.</p> <p>The controller.log on the Kafka server side contains this:</p> <pre><code>[2023-09-13 08:56:03,345] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2023-09-13 08:56:03,345] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2023-09-13 08:56:03,345] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 HashMap() (kafka.controller.KafkaController)
[2023-09-13 08:56:03,345] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
[2023-09-13 08:56:03,345] DEBUG [Controller id=0] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController)
[2023-09-13 08:56:03,345] TRACE [Controller id=0] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
[2023-09-13 08:56:03,345] DEBUG [Controller id=0] Topics not in preferred replica for broker 2 HashMap() (kafka.controller.KafkaController)
[2023-09-13 08:56:03,345] TRACE [Controller id=0] Leader imbalance ratio for broker 2 is 0.0 (kafka.controller.KafkaController)
</code></pre> <p>I'm not sure whether that is indicative of a problem, or just general information.</p> <p><strong>Edit:</strong></p> <p>Here is the server.properties file (from node1; there are three nodes in total):</p> <pre><code>broker.id=0
listeners=LISTENER_ONE://:9092,LISTENER_TWO://:9096
inter.broker.listener.name=LISTENER_ONE
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-512
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin
allow.everyone.if.no.acl.found=true
security.protocol=SASL_PLAINTEXT
advertised.listeners=LISTENER_ONE://node1.dev.company.local:9092,LISTENER_TWO://node1.dev.company.local:9096
listener.security.protocol.map=LISTENER_ONE:SASL_PLAINTEXT,LISTENER_TWO:PLAINTEXT
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/var/lib/kafka
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=node1.dev.company.local:2181,node2.dev.company.local:2181,node3.dev.company.local:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=3
auto.create.topics.enable=false
inter.broker.protocol.version=3.4-IV0
</code></pre> <p>I also ran kafka-console-consumer from one of the broker machines to test:</p> <pre><code>kafka-console-consumer --bootstrap-server node1.dev.company.local:9092,node2.dev.company.local:9092,node3.dev.company.local:9092 --group test_group --topic test_sasl --consumer.config consumer.config
</code></pre> <p>With the following in consumer.config:</p> <pre><code>security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username=&quot;test_user&quot; \
  password=&quot;t3st_us3r&quot;;
</code></pre> <p>I don't get any errors, but also no indication that it is processing any of the test messages I produce (although when I run it without --group, I also get no indication that it is processing messages; this is different from what I am seeing with the Kafka Python client).</p> <p><strong>Edit:</strong></p> <p>I ran the following command:</p> <pre><code>kafka-topics --bootstrap-server node1.dev.company.local:9092,node2.dev.company.local:9092,node3.dev.company.local:9092 --describe --topic test_sasl --command-config
admin-plaintext.config
</code></pre> <p>Where <code>admin-plaintext.config</code> contains this:</p> <pre><code>security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username=&quot;admin-username&quot; \
  password=&quot;admin-password&quot;;
</code></pre> <p>The response is:</p> <pre><code>Topic: test_sasl  TopicId: Y3hiju-ZQsOvdRgLk4vpZw  PartitionCount: 1  ReplicationFactor: 2  Configs: cleanup.policy=delete,segment.bytes=1073741824,retention.ms=86400000,unclean.leader.election.enable=true
    Topic: test_sasl  Partition: 0  Leader: 2  Replicas: 2,0  Isr: 0,2
</code></pre>
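One more debugging aside, based on my understanding of how coordinator lookup works (so treat this as an assumption, not something from my setup): the group coordinator is the broker that leads partition abs(hashCode(group.id)) % N of the internal __consumer_offsets topic, where hashCode is Java's String.hashCode and N is offsets.topic.num.partitions (50 by default). A NotCoordinatorForGroupError loop can mean the broker the client is redirected to does not believe it leads that partition, e.g. because __consumer_offsets is unhealthy. A small Python sketch to work out which partition to inspect:

```python
def java_string_hash_code(s: str) -> int:
    """Java's String.hashCode(): h = 31*h + c, with 32-bit signed overflow."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    # Reinterpret the 32-bit value as a signed int, as Java would.
    return h - 0x100000000 if h >= 0x80000000 else h

def coordinator_partition(group_id: str, offsets_partitions: int = 50) -> int:
    # Mirrors Kafka's lookup: abs(groupId.hashCode) % offsets.topic.num.partitions
    # (assuming the default of 50 partitions for __consumer_offsets).
    return abs(java_string_hash_code(group_id)) % offsets_partitions

# Sanity check against a well-known Java value: "abc".hashCode() == 96354
print(java_string_hash_code("abc"))      # 96354
# Which __consumer_offsets partition hosts this group's coordinator:
print(coordinator_partition("test_group"))
```

With that partition number, running kafka-topics --describe --topic __consumer_offsets shows whether that partition's leader matches the broker the client keeps marking dead, and whether its ISR looks healthy.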
<python><apache-kafka><kafka-consumer-api><kafka-python>
2023-09-13 14:01:23
0
1,997
jceddy