Dataset schema (column: type, observed min/max):

QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (length 15 to 150)
QuestionBody: string (length 40 to 40.3k)
Tags: string (length 8 to 101)
CreationDate: date string (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (length 3 to 30)
QuestionId: 76,617,531
UserId: 7,026,980
How to construct a `np.record` in NumPy
<p>The following code constructs an array with a structured dtype:</p> <pre><code>import numpy as np

arr = np.zeros(3, dtype=[('a', 'i4'), ('b', bool)]).view(np.recarray)
x = arr[0]
</code></pre> <p><code>x</code>'s type is <code>np.record</code> and I can use <code>x.a</code> and <code>x.b</code> to access its fields.</p> <p>How can I construct a similar <code>np.record</code> object directly (without creating a <code>recarray</code> first) and set its values to <code>(3, True)</code>? The <a href="https://numpy.org/doc/stable/reference/generated/numpy.record.html#numpy.record" rel="nofollow noreferrer">documentation of <code>np.record</code></a> does not give enough information.</p>
Tags: <python><numpy>
CreationDate: 2023-07-05 06:17:01
AnswerCount: 2
UserExpertiseLevel: 2,594
UserDisplayName: doraemon
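A minimal sketch of one possible answer to the question above (an assumption, not a confirmed accepted solution): `np.rec.array` builds a one-element recarray from a tuple, and indexing it yields an `np.record` scalar with the requested values.

```python
import numpy as np

# Build a one-element recarray from a tuple, then take its scalar:
# the result is an np.record with fields a=3 and b=True.
dt = np.dtype([('a', 'i4'), ('b', bool)])
rec = np.rec.array([(3, True)], dtype=dt)[0]

print(type(rec).__name__, rec.a, rec.b)  # record 3 True
```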
QuestionId: 76,617,521
UserId: 13,490,662
How to deal with Unicode characters in Python on Windows
<p>I recently developed a PyQt app on Linux and I am trying to run it on Windows. I am not used to Windows at all.</p> <p>In some places the program reads and interprets a .csv file I created on Linux.</p> <p>I read the .csv file line by line, and for each line I detect a number of fields using a field separator character (in my case ⎈). On Linux the Python program detects the correct number of fields, but not on Windows.</p> <p>When I print the lines in the terminal on Windows, they are unreadable.</p> <p>How can I tell Python to handle Unicode strings the same way on both platforms without having to change the program in many places?</p> <p>Here is a simplified example.</p> <p><strong>example.csv</strong></p> <pre><code>♆⎈deux⎈trois⎈quatre⎈cinq
♆⎈two⎈three⎈four⎈five
</code></pre> <p><strong>testRead.py</strong></p> <pre><code>fileObj = open('example.csv', 'r')
# lines = fileObj.readlines()
char = None
end = False
line = ''
stop_char = '♆'

# read line after line
while True:
    line = ''
    while True:
        char = fileObj.read(1)
        if char == stop_char:
            break
        if not char:
            break
        line += char
    if not char:  # end of file
        end = True
        # we do not break here to allow the last line treatment
    print('line is ' + line)
    if line != '':  # to avoid the first empty line
        x_array = line.split('⎈')
        print('length of line is ' + str(len(x_array)))
    if end:
        break
fileObj.close()
</code></pre> <p>Output on Linux</p> <pre><code>(venvbase) [jaaf@fedora project]$ python testRead.py
line is 
line is ⎈deux⎈trois⎈quatre⎈cinq
length of line is 5
line is ⎈two⎈three⎈four⎈five
length of line is 5
(venvbase) [jaaf@fedora project]$
</code></pre> <p><strong>Output on Windows</strong></p> <pre><code>(.venv) PS E:\project&gt; python testRead.py
line is ♆⎈deux⎈trois⎈quatre⎈cinq
♆⎈two⎈three⎈four⎈five
length of line is 1
</code></pre>
Tags: <python><windows><unicode>
CreationDate: 2023-07-05 06:14:19
AnswerCount: 1
UserExpertiseLevel: 535
UserDisplayName: Meaulnes
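A hedged sketch of the usual fix for the question above (my assumption about the cause): open the file with an explicit `encoding='utf-8'` so the bytes decode identically on both platforms, instead of relying on the platform default (often cp1252 on Windows). The file name and sample content mirror the question.

```python
SAMPLE = "♆⎈deux⎈trois⎈quatre⎈cinq\n♆⎈two⎈three⎈four⎈five\n"

# Write and read back with an explicit encoding; without encoding='utf-8'
# Windows would decode with its locale default and garble the separators.
with open('example.csv', 'w', encoding='utf-8') as f:
    f.write(SAMPLE)

lengths = []
with open('example.csv', 'r', encoding='utf-8') as f:
    for record in f.read().split('♆'):
        record = record.strip()
        if record:
            lengths.append(len(record.split('⎈')))

print(lengths)  # [5, 5]
```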
QuestionId: 76,617,236
UserId: 1,570,408
UTF-8 encoding error while posting a request after adding boundaries to the original file content
<p>I am trying to read a file, add boundaries around it, and post it to my URL. I read the file, decode it (using <code>latin1</code>, as <code>utf-8</code> was not working), concatenate the boundary strings, and save the file with <code>utf-8</code> encoding. All of this works fine.</p> <p>But when I tried to post this file it gave me the error <em><strong>&quot;'utf-8' codec can't decode byte 0x94 in position 23: invalid start byte&quot;</strong></em>. How do I solve this problem? Is there a way to post encodings other than <code>utf-8</code> in requests?</p> <pre><code>gen_data = open(fw_file, 'rb').read()

boundary = '------WebKitFormBoundaryV2I0Y3IdM8aQPLv3\r\nContent-Disposition: form-data; name=&quot;file&quot;; filename=&quot;'\
    + file_name + '&quot;\r\nContent-Type: application/octet-stream\r\n\r\n'
boundary_end = '\r\n\r\n------WebKitFormBoundaryV2I0Y3IdM8aQPLv3--\r\n'

def create_data(boundary, data, end):
    return boundary + data.decode('latin1') + end

data = create_data(boundary, gen_data, boundary_end)
export_file = rf'c:\temp\{file_name}'
open(export_file, 'w', encoding=&quot;utf-8&quot;).write(data)

resp = requests.post(target, files=export_file, headers=headers, verify=False)
</code></pre> <p>The following is the exception log:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\api.py&quot;, line 61, in request return session.request(method=method, url=url, **kwargs) File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py&quot;, line 515, in request prep = self.prepare_request(req) File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py&quot;, line 443, in prepare_request p.prepare( File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\models.py&quot;, line 321, in prepare self.prepare_body(data, files, json) File 
&quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\models.py&quot;, line 514, in prepare_body (body, content_type) = self._encode_files(files, data) File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\models.py&quot;, line 128, in _encode_files files = to_key_val_list(files or {}) File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\utils.py&quot;, line 343, in to_key_val_list raise ValueError('cannot encode objects that are not 2-tuples') ValueError: cannot encode objects that are not 2-tuples python-BaseException </code></pre> <p><em><strong>EDIT</strong></em> I tried to upload the data directly using the following code: <code>resp = requests.post(target, data=data, headers=headers, verify=False)</code></p> <p>But got the following exception:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py&quot;, line 699, in urlopen httplib_response = self._make_request( File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py&quot;, line 445, in _make_request six.raise_from(e, None) File &quot;&lt;string&gt;&quot;, line 3, in raise_from File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py&quot;, line 440, in _make_request httplib_response = conn.getresponse() File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\http\client.py&quot;, line 1372, in getresponse response.begin() File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\http\client.py&quot;, line 320, in begin version, status, reason = self._read_status() File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\http\client.py&quot;, line 289, in _read_status raise RemoteDisconnected(&quot;Remote end closed connection without&quot; 
http.client.RemoteDisconnected: Remote end closed connection without response During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\adapters.py&quot;, line 440, in send resp = conn.urlopen( File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py&quot;, line 755, in urlopen retries = retries.increment( File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\util\retry.py&quot;, line 532, in increment raise six.reraise(type(error), error, _stacktrace) File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\packages\six.py&quot;, line 769, in reraise raise value.with_traceback(tb) File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py&quot;, line 699, in urlopen httplib_response = self._make_request( File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py&quot;, line 445, in _make_request six.raise_from(e, None) File &quot;&lt;string&gt;&quot;, line 3, in raise_from File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py&quot;, line 440, in _make_request httplib_response = conn.getresponse() File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\http\client.py&quot;, line 1372, in getresponse response.begin() File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\http\client.py&quot;, line 320, in begin version, status, reason = self._read_status() File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\http\client.py&quot;, line 289, in _read_status raise RemoteDisconnected(&quot;Remote end closed connection without&quot; urllib3.exceptions.ProtocolError: ('Connection aborted.', 
RemoteDisconnected('Remote end closed connection without response')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\api.py&quot;, line 61, in request return session.request(method=method, url=url, **kwargs) File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py&quot;, line 533, in request resp = self.send(prep, **send_kwargs) File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py&quot;, line 649, in send r = adapter.send(request, **kwargs) File &quot;C:\Users\username\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\adapters.py&quot;, line 501, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) python-BaseException </code></pre>
Tags: <python><python-3.x><python-requests><urllib3>
CreationDate: 2023-07-05 05:02:22
AnswerCount: 0
UserExpertiseLevel: 6,107
UserDisplayName: Ulysses
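For the question above, a hedged sketch of the usual fix (my assumption about the cause): requests' `files` argument expects a dict or list of 2-tuples rather than a path string (hence the "not 2-tuples" ValueError), and passing the raw bytes lets requests build the multipart boundary itself so nothing is ever decoded as UTF-8. The URL and file name below are placeholders.

```python
import requests

def build_upload(target, payload, file_name):
    # Pass raw bytes; requests generates the multipart boundary and
    # never tries to decode the binary payload as UTF-8.
    files = {'file': (file_name, payload, 'application/octet-stream')}
    return requests.Request('POST', target, files=files).prepare()

# non-UTF-8 bytes (0x94 is the byte from the error message)
req = build_upload('http://example.com/upload', b'\x94\x00\xffdata', 'fw.bin')
print(req.headers['Content-Type'])  # multipart/form-data; boundary=...
```

Using `Request(...).prepare()` here builds the body without touching the network, which also makes the boundary handling easy to inspect.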
QuestionId: 76,617,107
UserId: 2,575,970
String separator for a DataFrame
<p>Below is my extracted string:</p> <pre><code>extractedString = &quot;1) No structured exercise.\n\n2) Above ideal body Mass index.\n\n3) Cancer gene testing.\n\n4) Suboptimal vitamin D.\n\n5) Slight anaemia.&quot;
</code></pre> <p>What I am looking for is the output you see for this DataFrame:</p> <pre><code>rows = [['No structured exercise'], ['Above ideal body Mass index'], ['Cancer gene testing'],
        ['Suboptimal vitamin D'], ['Slight anaemia']]
df = pd.DataFrame(rows)
print(df)
</code></pre> <p>Output:</p> <pre><code>                             0
0       No structured exercise
1  Above ideal body Mass index
2          Cancer gene testing
3         Suboptimal vitamin D
4               Slight anaemia
</code></pre> <p>How best can I achieve this?</p>
Tags: <python><pandas><dataframe>
CreationDate: 2023-07-05 04:28:53
AnswerCount: 2
UserExpertiseLevel: 416
UserDisplayName: WhoamI
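One possible answer sketch for the question above (assuming the "&lt;n&gt;) " prefixes are the intended delimiters): split on the numbered markers with a regex, strip whitespace and the trailing period, and feed the result to `pd.DataFrame`.

```python
import re
import pandas as pd

extractedString = ("1) No structured exercise.\n\n2) Above ideal body Mass index.\n\n"
                   "3) Cancer gene testing.\n\n4) Suboptimal vitamin D.\n\n5) Slight anaemia.")

# Split on the "<number>) " markers, then tidy each fragment.
items = [s.strip().rstrip('.') for s in re.split(r'\d+\)\s*', extractedString) if s.strip()]
df = pd.DataFrame(items)
print(df)
```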
QuestionId: 76,617,067
UserId: 4,624,500
Display math in Google Colab
<p>Can someone explain why the code below works in Jupyter (the string is updated and displayed when modified by the widget) but does not work in Google Colab? (And how can I make it work in Google Colab?)</p> <pre><code>from IPython.display import display, Math

def combustion(nH2i, nO2i, nH2Oi):
    x = nH2i/2
    if nO2i - x &lt; 0:
        x = nO2i
    nH2f = nH2i - 2*x
    nO2f = nO2i - x
    nH2Of = nH2Oi + 2*x
    reponse = r&quot;&quot;&quot;$\begin{array}{cccc}
2.H_2&amp;+&amp;O_2&amp;\to&amp;2.H_2O\\
{nH2i}&amp;+&amp;{nO2i}&amp;&amp;{nH2Oi}\\
{nH2f}&amp;+&amp;{nO2f}&amp;&amp;{nH2Of}\\
\end{array}$&quot;&quot;&quot;
    liste = [&quot;nH2i&quot;, &quot;nO2i&quot;, &quot;nH2Oi&quot;, &quot;nH2f&quot;, &quot;nO2f&quot;, &quot;nH2Of&quot;]
    for x in liste:
        reponse = reponse.replace(x, str(eval(x)))
    display(Math(reponse))

from ipywidgets import interact
interact(combustion, nH2i=(0,10,1), nO2i=(0,10,1), nH2Oi=(0,10,1))
</code></pre> <p>Edit: It doesn't work because it prints the uninterpreted LaTeX string (as shown in Mark's comment) instead of rendering it as a LaTeX interpreter should: an array with the values!</p>
Tags: <python><widget><latex><google-colaboratory><display>
CreationDate: 2023-07-05 04:15:27
AnswerCount: 1
UserExpertiseLevel: 326
UserDisplayName: user4624500
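For the question above, the missing piece in Colab is plausibly that interactive ipywidgets need the custom widget manager enabled (`google.colab.output.enable_custom_widget_manager()`); that cannot be shown in a standalone snippet. Independent of the rendering issue, the `eval`-based string substitution can be sketched more safely with named `str.format` placeholders (a hypothetical refactor of the question's `combustion` function, not a fix for the widget behaviour):

```python
# Row template with named placeholders instead of eval()-driven replace().
TEMPLATE = (r"2.H_2&+&O_2&\to&2.H_2O\\"
            r"{nH2i}&+&{nO2i}&&{nH2Oi}\\"
            r"{nH2f}&+&{nO2f}&&{nH2Of}")

def combustion_latex(nH2i, nO2i, nH2Oi):
    x = min(nH2i / 2, nO2i)  # limiting-reagent amount
    return TEMPLATE.format(nH2i=nH2i, nO2i=nO2i, nH2Oi=nH2Oi,
                           nH2f=nH2i - 2 * x, nO2f=nO2i - x,
                           nH2Of=nH2Oi + 2 * x)

print(combustion_latex(4, 1, 0))
```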
QuestionId: 76,616,997
UserId: 736,662
Python/Locust unique values for each virtual user
<p>I have a working script like this:</p> <pre><code>from locust import HttpUser, between, task
import random
import copy

TS_IDs = [
    '10158',
    '11016',
    '10479',
    '10482',
    '11045',
    '10311',
    '10159'
]

values = [
    {
        &quot;id&quot;: 219468,
        &quot;values&quot;: [
            {
                &quot;from&quot;: &quot;2023-07-04T08:00:00.000Z&quot;,
                &quot;to&quot;: &quot;2023-07-04T09:00:00.000Z&quot;,
                &quot;value&quot;: 33
            }
        ]
    }
]

values_new = copy.deepcopy(values)
for index, item in enumerate(values_new):
    item[&quot;id&quot;] = TS_IDs[index]
print(values_new)

class SaveValues(HttpUser):
    def _run_write_ts(self, values_new):
        resp = self.client.put(f'/api/SaveValues', json=values_new,
                               headers={'X-API-KEY': 'AKjCg9hTcYQ=', 'Content-Type': 'application/json'})
        print(&quot;Response status code:&quot;, resp.status_code)
        print(values_new)

    @task(1)
    def test_save_ts_1(self):
        self._run_write_ts(values_new)
</code></pre> <p>So when running this Locust test the console prints only the value structure using the first element in the TS_IDs list (10158).</p> <p>How can I make my script go sequentially through the TS_IDs list for each virtual user? (The list is much longer than in this example.)</p>
Tags: <python><locust>
CreationDate: 2023-07-05 03:51:10
AnswerCount: 1
UserExpertiseLevel: 1,003
UserDisplayName: Magnus Jensen
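A hedged sketch of one common pattern for the question above (assuming a shared, module-level iterator is acceptable; Locust itself is left out so the snippet runs standalone): `itertools.cycle` hands out the next TS_ID on every call, so successive task executions, across virtual users, walk the list sequentially and wrap around.

```python
import copy
from itertools import cycle

TS_IDs = ['10158', '11016', '10479', '10482', '11045', '10311', '10159']
base = {'id': None, 'values': []}

# One shared iterator for the whole process; inside a Locust task you
# would call next(ts_id_iter) just before self.client.put(...).
ts_id_iter = cycle(TS_IDs)

def next_payload():
    item = copy.deepcopy(base)
    item['id'] = next(ts_id_iter)
    return [item]  # the API in the question expects a list

ids = [next_payload()[0]['id'] for _ in range(9)]
print(ids)
```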
QuestionId: 76,616,925
UserId: 13,494,917
Iterate and compare values from two different columns in different dataframes; if values match, add a new field/column
<p>Say I have a dataframe (df1) that looks something like this:</p> <pre><code>df1
+----+-------+
| id | name  |
+----+-------+
| 1  | name1 |
+----+-------+
| 2  | name2 |
+----+-------+
| 3  | name3 |
+----+-------+
| 4  | name4 |
+----+-------+
</code></pre> <p>and I've got another dataframe that has a subset of the values under the &quot;name&quot; column.</p> <p>For example, it has a column that looks like:</p> <pre><code>df2
+-------+
| name  |
+-------+
| name1 |
+-------+
| name2 |
+-------+
| name4 |
+-------+
| name5 |
+-------+
</code></pre> <p>I am grabbing a list of all the values in the &quot;name&quot; column in df1</p> <pre class="lang-py prettyprint-override"><code>TypeNameList = df1['name'].tolist()
</code></pre> <p>and then iterating through the rows of df2 and seeing if the &quot;name&quot; value is in the list (because while the name column here is a subset of the name column in df1, it can also contain other values that are not in df1):</p> <pre class="lang-py prettyprint-override"><code>for index, row in df2.iterrows():
    if row[&quot;name&quot;] in TypeNameList:
        print(row[&quot;name&quot;])
        # Would like to take the value found in the &quot;id&quot; column in df1 that is mapped to this value and insert it into this dataframe
</code></pre> <p>What I would like to do, if the conditions are met, is take the id from df1 that is mapped to the specific value in &quot;name&quot; and insert it into a new column in df2 that also maps to that value.</p> <p>So the final product would look like this:</p> <pre><code>+-------+----+
| name  | id |
+-------+----+
| name1 | 1  |
+-------+----+
| name2 | 2  |
+-------+----+
| name4 | 4  |
+-------+----+
| name5 |    |
+-------+----+
</code></pre> <p>(name5 would not have a value in the id column because name5 is not in df1.)</p>
Tags: <python><pandas><dataframe>
CreationDate: 2023-07-05 03:21:35
AnswerCount: 1
UserExpertiseLevel: 687
UserDisplayName: BlakeB9
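The row-by-row loop in the question above can usually be replaced by a single left merge (a sketch with the question's sample data; df2 drives the join, so names missing from df1 such as name5 get NaN in id):

```python
import pandas as pd

df1 = pd.DataFrame({'id': [1, 2, 3, 4],
                    'name': ['name1', 'name2', 'name3', 'name4']})
df2 = pd.DataFrame({'name': ['name1', 'name2', 'name4', 'name5']})

# Keep every row of df2 and pull the matching id across from df1;
# unmatched names come back with NaN in the new id column.
out = df2.merge(df1, on='name', how='left')
print(out)
```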
QuestionId: 76,616,781
UserId: 7,502,914
How to find the minimum-cost combination that satisfies (or exceeds) the goal?
<p>I have an algorithm problem created by myself, and I am seeking some guidance here. The goal is to get <strong>at least</strong> X gold. There are different dealers selling different amounts of gold at different prices. I need to find an optimal combination that <strong>minimizes the cost</strong> while buying <strong>at least</strong> X gold.</p> <p>Rules:</p> <ul> <li>You must buy the gold in full amounts</li> <li>You can buy any number of the same item</li> </ul> <p>This resembles an unbounded knapsack problem, but in this problem there is no limit on the amount of gold to buy. Solving this with a knapsack will give me solutions that are below the required amount of the goal, which are obviously wrong answers. Obviously I could brute-force it by gradually increasing the limit of the knapsack, but that would not be optimal in terms of performance.</p> <p>As mentioned, I am seeking some <strong>guidance</strong> here on some lesser-known algorithms that would solve this problem. 
I am <strong>not</strong> asking for code.</p> <hr /> <p>In an example, these are the prices available.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Item ID</th> <th style="text-align: left;">Gold</th> <th style="text-align: left;">Price</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">120</td> <td style="text-align: left;">0.99</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">600</td> <td style="text-align: left;">4.99</td> </tr> <tr> <td style="text-align: left;">3</td> <td style="text-align: left;">1,960</td> <td style="text-align: left;">14.99</td> </tr> <tr> <td style="text-align: left;">4</td> <td style="text-align: left;">3,960</td> <td style="text-align: left;">29.99</td> </tr> <tr> <td style="text-align: left;">5</td> <td style="text-align: left;">4,970</td> <td style="text-align: left;">38.89</td> </tr> <tr> <td style="text-align: left;">6</td> <td style="text-align: left;">6,560</td> <td style="text-align: left;">49.99</td> </tr> <tr> <td style="text-align: left;">7</td> <td style="text-align: left;">12,960</td> <td style="text-align: left;">99.99</td> </tr> <tr> <td style="text-align: left;">8</td> <td style="text-align: left;">14,000</td> <td style="text-align: left;">104.99</td> </tr> </tbody> </table> </div> <p>For this example, my task is to buy at least 12,880 gold. I need to find the combination and how many of each item I need to buy to satisfy the goal, which is get at least 12,880 gold while minimizing the cost.</p> <hr /> <p>My attempt to solve is the process of finding the algorithm to solve it. Here is a sheet with the different combinations I have tested, but I still cannot find a viable algorithm to find the optimal combination. 
<a href="https://i.sstatic.net/e8uyo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8uyo.png" alt="Sheet showing my attempt to find the algorithm" /></a></p> <p>In the image, you can see that buying 2 item_4 and 1 item_5 is currently my best solution. But it is not certain to be the optimal solution.</p> <hr /> <h3>Edit 1:</h3> <p>Added some more details/rules above.</p> <h3>Edit 2:</h3> <p>I am just asking for some <strong>guidance</strong> here and not any code from you guys. I have solved similar problems with the unbounded knapsack, but that solution cannot be applied here at all. This is because I need <strong>at least</strong> the amount, and <strong>not less than or equal</strong> to the amount. These are two different problems, and I don't see any reason to post unrelated knapsack code that does not even work as an attempt to solve the problem.</p> <h3>Edit 3:</h3> <p>I realized that I need to scope this question a bit to make it reasonable. So here are the constraints:</p> <pre><code>n: Amount of gold in the goal
k: Number of dealers/items
m: Maximum amount of gold that will be sold as a single purchase

n &lt;= 1000000
k &lt;= 1000
m &lt;= 10000000
</code></pre> <p>I am again not looking for any code from you, just some advice on possible solutions, and I will write my own code.</p>
Tags: <python><algorithm><optimization><dynamic-programming>
CreationDate: 2023-07-05 02:35:12
AnswerCount: 2
UserExpertiseLevel: 995
UserDisplayName: Timmy Chan
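One guidance-level sketch for the question above (my own formulation, not a known named algorithm): run the unbounded min-cost DP over states 0..goal, but clamp any purchase that overshoots to the goal state, so "at least X" is handled directly instead of growing the knapsack capacity.

```python
def min_cost_at_least(goal, items):
    """items: list of (gold, price); returns min cost to hold >= goal gold."""
    INF = float('inf')
    dp = [INF] * (goal + 1)  # dp[g] = min cost to hold at least g gold
    dp[0] = 0.0
    for have in range(goal + 1):
        if dp[have] == INF:
            continue
        for gold, price in items:
            reach = min(goal, have + gold)  # clamp overshoot to the goal state
            if dp[have] + price < dp[reach]:
                dp[reach] = dp[have] + price
    return dp[goal]

# tiny instance that can be checked by hand: buying the 10-gold item
# (or two 3-gold items) reaches 5 gold for cost 4.0
print(min_cost_at_least(5, [(3, 2.0), (4, 2.5), (10, 4.0)]))  # 4.0
```

Forward iteration is valid because every purchase moves to a strictly higher state; runtime is O(n * k), which fits the stated constraints.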
QuestionId: 76,616,733
UserId: 2,813,606
How to create new pandas column while checking for NoneType
<p>I am working on creating a dataframe of location data, using Nominatim to pull longitude and latitude from city names. Here is my code so far:</p> <pre><code>from geopy.geocoders import Nominatim
import pandas as pd

geolocator = Nominatim(user_agent=&quot;measurements&quot;, timeout=3)

locations = ['Portland, Oregon', 'Seattle, Washington', 'New York, New York', 'Columbia, South Carolina']
df = pd.DataFrame(locations, columns=['locations'])
df['latlon'] = df['locations'].apply(lambda x: geolocator.geocode(x, addressdetails=True, language='en'))
</code></pre> <p>I want to pull the latitude and longitude out of the 2nd column but have been having issues parsing the Location(...) data properly. So I've edited my code above to parse it more directly:</p> <pre><code>df['lat'] = df['locations'].apply(lambda x: geolocator.geocode(x, addressdetails=True, language='en').latitude)
df['lon'] = df['locations'].apply(lambda x: geolocator.geocode(x, addressdetails=True, language='en').longitude)
</code></pre> <p>This all works for my reproducible example above, but I am running into a problem where I get the following error:</p> <pre><code>AttributeError: 'NoneType' object has no attribute 'latitude'
</code></pre> <p>How can I write the second chunk of code above so that it checks for <code>None</code> and handles that case instead of failing, while still evaluating the lambda expression for valid results?</p>
Tags: <python><pandas><lambda><geolocation><nominatim>
CreationDate: 2023-07-05 02:16:18
AnswerCount: 2
UserExpertiseLevel: 921
UserDisplayName: user2813606
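A hedged sketch of the None-safe extraction for the question above (the geocoder is mocked with a tiny stand-in class so the snippet runs offline; in the real code the latlon column would come from `geolocator.geocode`):

```python
import pandas as pd

class FakeLocation:
    # stand-in for geopy's Location result
    def __init__(self, lat, lon):
        self.latitude, self.longitude = lat, lon

def safe_attr(loc, attr):
    # return the attribute if a result was found, else None
    return getattr(loc, attr) if loc is not None else None

df = pd.DataFrame({'locations': ['Portland, Oregon', 'Nowhere, Atlantis']})
df['latlon'] = [FakeLocation(45.52, -122.67), None]  # mocked geocode output

# geocode once, keep the result, then extract lat/lon None-safely
# (this also avoids calling the geocoder three times per row)
df['lat'] = df['latlon'].apply(lambda x: safe_attr(x, 'latitude'))
df['lon'] = df['latlon'].apply(lambda x: safe_attr(x, 'longitude'))
print(df[['lat', 'lon']])
```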
QuestionId: 76,616,687
UserId: 15,656,276
TensorFlow loaded model outputting a list of `False` instead of embeddings
<h1><strong>Background Information</strong></h1> <p>I have downloaded a pretrained face-detection model (13.6 MB) which outputs the bounding box of the detected face. However, I don't need a face-detection model; I need a model that calculates embeddings for a face. So I want to extract the layer from the face-detection model which outputs the embeddings.</p> <p><code>main.py</code>:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import cv2
import numpy as np

# load the pretrained face detection model
model = tf.keras.models.load_model(&quot;RFB&quot;)

# make a new model which only calculates embeddings
# layer 187 outputs a 1D tensor of length 4420, which I assumed to be the layer that outputs the embeddings
new_model = tf.keras.models.Model(inputs=model.input, outputs=model.layers[187].output)

## load image
image = cv2.imread(&quot;image.jpg&quot;, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (240, 320))

## convert to batch format
image = np.array(image).reshape((1, 240, 320, 3)).astype(np.float32)

# predict
print(new_model.predict(image)[0])
</code></pre> <p>For context, here is the output of <code>model.summary()</code> for the original face-detection model:</p> <pre><code>Model: &quot;functional_1&quot; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 240, 320, 3 0 [] )] basenet.0.0_padding (ZeroPaddi (None, 242, 322, 3) 0 ['input_1[0][0]'] ng2D) basenet.0.0_conv (Conv2D) (None, 120, 160, 16 432 ['basenet.0.0_padding[0][0]'] ) basenet.0.1_bn (BatchNormaliza (None, 120, 160, 16 64 ['basenet.0.0_conv[0][0]'] tion) ) basenet.0.2_relu (ReLU) (None, 120, 160, 16 0 ['basenet.0.1_bn[0][0]'] ) basenet.1.0_padding (ZeroPaddi (None, 122, 162, 16 0 ['basenet.0.2_relu[0][0]'] ng2D) ) basenet.1.0_dconv (DepthwiseCo 
(None, 120, 160, 16 144 ['basenet.1.0_padding[0][0]'] nv2D) ) basenet.1.1_bn (BatchNormaliza (None, 120, 160, 16 64 ['basenet.1.0_dconv[0][0]'] tion) ) basenet.1.2_relu (ReLU) (None, 120, 160, 16 0 ['basenet.1.1_bn[0][0]'] ) basenet.1.3_conv (Conv2D) (None, 120, 160, 32 512 ['basenet.1.2_relu[0][0]'] ) basenet.1.4_bn (BatchNormaliza (None, 120, 160, 32 128 ['basenet.1.3_conv[0][0]'] tion) ) basenet.1.5_relu (ReLU) (None, 120, 160, 32 0 ['basenet.1.4_bn[0][0]'] ) basenet.2.0_padding (ZeroPaddi (None, 122, 162, 32 0 ['basenet.1.5_relu[0][0]'] ng2D) ) basenet.2.0_dconv (DepthwiseCo (None, 60, 80, 32) 288 ['basenet.2.0_padding[0][0]'] nv2D) basenet.2.1_bn (BatchNormaliza (None, 60, 80, 32) 128 ['basenet.2.0_dconv[0][0]'] tion) basenet.2.2_relu (ReLU) (None, 60, 80, 32) 0 ['basenet.2.1_bn[0][0]'] basenet.2.3_conv (Conv2D) (None, 60, 80, 32) 1024 ['basenet.2.2_relu[0][0]'] basenet.2.4_bn (BatchNormaliza (None, 60, 80, 32) 128 ['basenet.2.3_conv[0][0]'] tion) basenet.2.5_relu (ReLU) (None, 60, 80, 32) 0 ['basenet.2.4_bn[0][0]'] basenet.3.0_padding (ZeroPaddi (None, 62, 82, 32) 0 ['basenet.2.5_relu[0][0]'] ng2D) basenet.3.0_dconv (DepthwiseCo (None, 60, 80, 32) 288 ['basenet.3.0_padding[0][0]'] nv2D) basenet.3.1_bn (BatchNormaliza (None, 60, 80, 32) 128 ['basenet.3.0_dconv[0][0]'] tion) basenet.3.2_relu (ReLU) (None, 60, 80, 32) 0 ['basenet.3.1_bn[0][0]'] basenet.3.3_conv (Conv2D) (None, 60, 80, 32) 1024 ['basenet.3.2_relu[0][0]'] basenet.3.4_bn (BatchNormaliza (None, 60, 80, 32) 128 ['basenet.3.3_conv[0][0]'] tion) basenet.3.5_relu (ReLU) (None, 60, 80, 32) 0 ['basenet.3.4_bn[0][0]'] basenet.4.0_padding (ZeroPaddi (None, 62, 82, 32) 0 ['basenet.3.5_relu[0][0]'] ng2D) basenet.4.0_dconv (DepthwiseCo (None, 30, 40, 32) 288 ['basenet.4.0_padding[0][0]'] nv2D) basenet.4.1_bn (BatchNormaliza (None, 30, 40, 32) 128 ['basenet.4.0_dconv[0][0]'] tion) basenet.4.2_relu (ReLU) (None, 30, 40, 32) 0 ['basenet.4.1_bn[0][0]'] basenet.4.3_conv (Conv2D) (None, 30, 40, 64) 2048 
['basenet.4.2_relu[0][0]'] basenet.4.4_bn (BatchNormaliza (None, 30, 40, 64) 256 ['basenet.4.3_conv[0][0]'] tion) basenet.4.5_relu (ReLU) (None, 30, 40, 64) 0 ['basenet.4.4_bn[0][0]'] basenet.5.0_padding (ZeroPaddi (None, 32, 42, 64) 0 ['basenet.4.5_relu[0][0]'] ng2D) basenet.5.0_dconv (DepthwiseCo (None, 30, 40, 64) 576 ['basenet.5.0_padding[0][0]'] nv2D) basenet.5.1_bn (BatchNormaliza (None, 30, 40, 64) 256 ['basenet.5.0_dconv[0][0]'] tion) basenet.5.2_relu (ReLU) (None, 30, 40, 64) 0 ['basenet.5.1_bn[0][0]'] basenet.5.3_conv (Conv2D) (None, 30, 40, 64) 4096 ['basenet.5.2_relu[0][0]'] basenet.5.4_bn (BatchNormaliza (None, 30, 40, 64) 256 ['basenet.5.3_conv[0][0]'] tion) basenet.5.5_relu (ReLU) (None, 30, 40, 64) 0 ['basenet.5.4_bn[0][0]'] basenet.6.0_padding (ZeroPaddi (None, 32, 42, 64) 0 ['basenet.5.5_relu[0][0]'] ng2D) basenet.6.0_dconv (DepthwiseCo (None, 30, 40, 64) 576 ['basenet.6.0_padding[0][0]'] nv2D) basenet.6.1_bn (BatchNormaliza (None, 30, 40, 64) 256 ['basenet.6.0_dconv[0][0]'] tion) basenet.6.2_relu (ReLU) (None, 30, 40, 64) 0 ['basenet.6.1_bn[0][0]'] basenet.6.3_conv (Conv2D) (None, 30, 40, 64) 4096 ['basenet.6.2_relu[0][0]'] basenet.6.4_bn (BatchNormaliza (None, 30, 40, 64) 256 ['basenet.6.3_conv[0][0]'] tion) basenet.6.5_relu (ReLU) (None, 30, 40, 64) 0 ['basenet.6.4_bn[0][0]'] basenet.7.branch2.0_conv (Conv (None, 30, 40, 8) 512 ['basenet.6.5_relu[0][0]'] 2D) basenet.7.branch2.0_bn (BatchN (None, 30, 40, 8) 32 ['basenet.7.branch2.0_conv[0][0]' ormalization) ] basenet.7.branch2.1_padding (Z (None, 32, 42, 8) 0 ['basenet.7.branch2.0_bn[0][0]'] eroPadding2D) basenet.7.branch2.1_conv (Conv (None, 30, 40, 12) 864 ['basenet.7.branch2.1_padding[0][ 2D) 0]'] basenet.7.branch0.0_conv (Conv (None, 30, 40, 8) 512 ['basenet.6.5_relu[0][0]'] 2D) basenet.7.branch1.0_conv (Conv (None, 30, 40, 8) 512 ['basenet.6.5_relu[0][0]'] 2D) basenet.7.branch2.1_bn (BatchN (None, 30, 40, 12) 48 ['basenet.7.branch2.1_conv[0][0]' ormalization) ] basenet.7.branch0.0_bn 
(BatchN (None, 30, 40, 8) 32 ['basenet.7.branch0.0_conv[0][0]' ormalization) ] basenet.7.branch1.0_bn (BatchN (None, 30, 40, 8) 32 ['basenet.7.branch1.0_conv[0][0]' ormalization) ] basenet.7.branch2.1_relu (ReLU (None, 30, 40, 12) 0 ['basenet.7.branch2.1_bn[0][0]'] ) basenet.7.branch0.1_padding (Z (None, 32, 42, 8) 0 ['basenet.7.branch0.0_bn[0][0]'] eroPadding2D) basenet.7.branch1.1_padding (Z (None, 32, 42, 8) 0 ['basenet.7.branch1.0_bn[0][0]'] eroPadding2D) basenet.7.branch2.2_padding (Z (None, 32, 42, 12) 0 ['basenet.7.branch2.1_relu[0][0]' eroPadding2D) ] basenet.7.branch0.1_conv (Conv (None, 30, 40, 16) 1152 ['basenet.7.branch0.1_padding[0][ 2D) 0]'] basenet.7.branch1.1_conv (Conv (None, 30, 40, 16) 1152 ['basenet.7.branch1.1_padding[0][ 2D) 0]'] basenet.7.branch2.2_conv (Conv (None, 30, 40, 16) 1728 ['basenet.7.branch2.2_padding[0][ 2D) 0]'] basenet.7.branch0.1_bn (BatchN (None, 30, 40, 16) 64 ['basenet.7.branch0.1_conv[0][0]' ormalization) ] basenet.7.branch1.1_bn (BatchN (None, 30, 40, 16) 64 ['basenet.7.branch1.1_conv[0][0]' ormalization) ] basenet.7.branch2.2_bn (BatchN (None, 30, 40, 16) 64 ['basenet.7.branch2.2_conv[0][0]' ormalization) ] basenet.7.branch0.1_relu (ReLU (None, 30, 40, 16) 0 ['basenet.7.branch0.1_bn[0][0]'] ) basenet.7.branch1.1_relu (ReLU (None, 30, 40, 16) 0 ['basenet.7.branch1.1_bn[0][0]'] ) basenet.7.branch2.2_relu (ReLU (None, 30, 40, 16) 0 ['basenet.7.branch2.2_bn[0][0]'] ) basenet.7.branch0.2_padding (Z (None, 34, 44, 16) 0 ['basenet.7.branch0.1_relu[0][0]' eroPadding2D) ] basenet.7.branch1.2_padding (Z (None, 36, 46, 16) 0 ['basenet.7.branch1.1_relu[0][0]' eroPadding2D) ] basenet.7.branch2.3_padding (Z (None, 40, 50, 16) 0 ['basenet.7.branch2.2_relu[0][0]' eroPadding2D) ] basenet.7.branch0.2_conv (Conv (None, 30, 40, 16) 2304 ['basenet.7.branch0.2_padding[0][ 2D) 0]'] basenet.7.branch1.2_conv (Conv (None, 30, 40, 16) 2304 ['basenet.7.branch1.2_padding[0][ 2D) 0]'] basenet.7.branch2.3_conv (Conv (None, 30, 40, 16) 2304 
['basenet.7.branch2.3_padding[0][ 2D) 0]'] basenet.7.branch0.2_bn (BatchN (None, 30, 40, 16) 64 ['basenet.7.branch0.2_conv[0][0]' ormalization) ] basenet.7.branch1.2_bn (BatchN (None, 30, 40, 16) 64 ['basenet.7.branch1.2_conv[0][0]' ormalization) ] basenet.7.branch2.3_bn (BatchN (None, 30, 40, 16) 64 ['basenet.7.branch2.3_conv[0][0]' ormalization) ] basenet.7_cat (Concatenate) (None, 30, 40, 48) 0 ['basenet.7.branch0.2_bn[0][0]', 'basenet.7.branch1.2_bn[0][0]', 'basenet.7.branch2.3_bn[0][0]'] basenet.7.convlinear_conv (Con (None, 30, 40, 64) 3072 ['basenet.7_cat[0][0]'] v2D) basenet.7.convlinear_bn (Batch (None, 30, 40, 64) 256 ['basenet.7.convlinear_conv[0][0] Normalization) '] basenet.7.shortcut_conv (Conv2 (None, 30, 40, 64) 4096 ['basenet.6.5_relu[0][0]'] D) tf_op_layer_basenet.7_mul (Ten (None, 30, 40, 64) 0 ['basenet.7.convlinear_bn[0][0]'] sorFlowOpLayer) basenet.7.shortcut_bn (BatchNo (None, 30, 40, 64) 256 ['basenet.7.shortcut_conv[0][0]'] rmalization) basenet.7_add (Add) (None, 30, 40, 64) 0 ['tf_op_layer_basenet.7_mul[0][0] ', 'basenet.7.shortcut_bn[0][0]'] basenet.7_relu (ReLU) (None, 30, 40, 64) 0 ['basenet.7_add[0][0]'] basenet.8.0_padding (ZeroPaddi (None, 32, 42, 64) 0 ['basenet.7_relu[0][0]'] ng2D) basenet.8.0_dconv (DepthwiseCo (None, 15, 20, 64) 576 ['basenet.8.0_padding[0][0]'] nv2D) basenet.8.1_bn (BatchNormaliza (None, 15, 20, 64) 256 ['basenet.8.0_dconv[0][0]'] tion) basenet.8.2_relu (ReLU) (None, 15, 20, 64) 0 ['basenet.8.1_bn[0][0]'] basenet.8.3_conv (Conv2D) (None, 15, 20, 128) 8192 ['basenet.8.2_relu[0][0]'] basenet.8.4_bn (BatchNormaliza (None, 15, 20, 128) 512 ['basenet.8.3_conv[0][0]'] tion) basenet.8.5_relu (ReLU) (None, 15, 20, 128) 0 ['basenet.8.4_bn[0][0]'] basenet.9.0_padding (ZeroPaddi (None, 17, 22, 128) 0 ['basenet.8.5_relu[0][0]'] ng2D) basenet.9.0_dconv (DepthwiseCo (None, 15, 20, 128) 1152 ['basenet.9.0_padding[0][0]'] nv2D) basenet.9.1_bn (BatchNormaliza (None, 15, 20, 128) 512 ['basenet.9.0_dconv[0][0]'] tion) 
basenet.9.2_relu (ReLU) (None, 15, 20, 128) 0 ['basenet.9.1_bn[0][0]'] basenet.9.3_conv (Conv2D) (None, 15, 20, 128) 16384 ['basenet.9.2_relu[0][0]'] basenet.9.4_bn (BatchNormaliza (None, 15, 20, 128) 512 ['basenet.9.3_conv[0][0]'] tion) basenet.9.5_relu (ReLU) (None, 15, 20, 128) 0 ['basenet.9.4_bn[0][0]'] basenet.10.0_padding (ZeroPadd (None, 17, 22, 128) 0 ['basenet.9.5_relu[0][0]'] ing2D) basenet.10.0_dconv (DepthwiseC (None, 15, 20, 128) 1152 ['basenet.10.0_padding[0][0]'] onv2D) basenet.10.1_bn (BatchNormaliz (None, 15, 20, 128) 512 ['basenet.10.0_dconv[0][0]'] ation) basenet.10.2_relu (ReLU) (None, 15, 20, 128) 0 ['basenet.10.1_bn[0][0]'] basenet.10.3_conv (Conv2D) (None, 15, 20, 128) 16384 ['basenet.10.2_relu[0][0]'] basenet.10.4_bn (BatchNormaliz (None, 15, 20, 128) 512 ['basenet.10.3_conv[0][0]'] ation) basenet.10.5_relu (ReLU) (None, 15, 20, 128) 0 ['basenet.10.4_bn[0][0]'] basenet.11.0_padding (ZeroPadd (None, 17, 22, 128) 0 ['basenet.10.5_relu[0][0]'] ing2D) basenet.11.0_dconv (DepthwiseC (None, 8, 10, 128) 1152 ['basenet.11.0_padding[0][0]'] onv2D) basenet.11.1_bn (BatchNormaliz (None, 8, 10, 128) 512 ['basenet.11.0_dconv[0][0]'] ation) basenet.11.2_relu (ReLU) (None, 8, 10, 128) 0 ['basenet.11.1_bn[0][0]'] basenet.11.3_conv (Conv2D) (None, 8, 10, 256) 32768 ['basenet.11.2_relu[0][0]'] basenet.11.4_bn (BatchNormaliz (None, 8, 10, 256) 1024 ['basenet.11.3_conv[0][0]'] ation) basenet.11.5_relu (ReLU) (None, 8, 10, 256) 0 ['basenet.11.4_bn[0][0]'] basenet.12.0_padding (ZeroPadd (None, 10, 12, 256) 0 ['basenet.11.5_relu[0][0]'] ing2D) basenet.12.0_dconv (DepthwiseC (None, 8, 10, 256) 2304 ['basenet.12.0_padding[0][0]'] onv2D) basenet.12.1_bn (BatchNormaliz (None, 8, 10, 256) 1024 ['basenet.12.0_dconv[0][0]'] ation) basenet.12.2_relu (ReLU) (None, 8, 10, 256) 0 ['basenet.12.1_bn[0][0]'] basenet.12.3_conv (Conv2D) (None, 8, 10, 256) 65536 ['basenet.12.2_relu[0][0]'] basenet.12.4_bn (BatchNormaliz (None, 8, 10, 256) 1024 ['basenet.12.3_conv[0][0]'] ation) 
basenet.12.5_relu (ReLU) (None, 8, 10, 256) 0 ['basenet.12.4_bn[0][0]'] extras_convbias (Conv2D) (None, 8, 10, 64) 16448 ['basenet.12.5_relu[0][0]'] extras_relu1 (ReLU) (None, 8, 10, 64) 0 ['extras_convbias[0][0]'] extras_sep_dconv_padding (Zero (None, 10, 12, 64) 0 ['extras_relu1[0][0]'] Padding2D) extras_sep_dconvbias (Depthwis (None, 4, 5, 64) 640 ['extras_sep_dconv_padding[0][0]' eConv2D) ] extras_sep_relu (ReLU) (None, 4, 5, 64) 0 ['extras_sep_dconvbias[0][0]'] reg_0_sep_dconv_padding (ZeroP (None, 32, 42, 64) 0 ['basenet.7_relu[0][0]'] adding2D) reg_1_sep_dconv_padding (ZeroP (None, 17, 22, 128) 0 ['basenet.10.5_relu[0][0]'] adding2D) reg_2_sep_dconv_padding (ZeroP (None, 10, 12, 256) 0 ['basenet.12.5_relu[0][0]'] adding2D) extras_sep_convbias (Conv2D) (None, 4, 5, 256) 16640 ['extras_sep_relu[0][0]'] reg_0_sep_dconvbias (Depthwise (None, 30, 40, 64) 640 ['reg_0_sep_dconv_padding[0][0]'] Conv2D) reg_1_sep_dconvbias (Depthwise (None, 15, 20, 128) 1280 ['reg_1_sep_dconv_padding[0][0]'] Conv2D) reg_2_sep_dconvbias (Depthwise (None, 8, 10, 256) 2560 ['reg_2_sep_dconv_padding[0][0]'] Conv2D) extras_relu2 (ReLU) (None, 4, 5, 256) 0 ['extras_sep_convbias[0][0]'] reg_0_sep_relu (ReLU) (None, 30, 40, 64) 0 ['reg_0_sep_dconvbias[0][0]'] reg_1_sep_relu (ReLU) (None, 15, 20, 128) 0 ['reg_1_sep_dconvbias[0][0]'] reg_2_sep_relu (ReLU) (None, 8, 10, 256) 0 ['reg_2_sep_dconvbias[0][0]'] reg_0_sep_convbias (Conv2D) (None, 30, 40, 12) 780 ['reg_0_sep_relu[0][0]'] reg_1_sep_convbias (Conv2D) (None, 15, 20, 8) 1032 ['reg_1_sep_relu[0][0]'] reg_2_sep_convbias (Conv2D) (None, 8, 10, 8) 2056 ['reg_2_sep_relu[0][0]'] reg_3_convbias (Conv2D) (None, 4, 5, 12) 27660 ['extras_relu2[0][0]'] reshape (Reshape) (None, 3600, 4) 0 ['reg_0_sep_convbias[0][0]'] reshape_1 (Reshape) (None, 600, 4) 0 ['reg_1_sep_convbias[0][0]'] reshape_2 (Reshape) (None, 160, 4) 0 ['reg_2_sep_convbias[0][0]'] reshape_3 (Reshape) (None, 60, 4) 0 ['reg_3_convbias[0][0]'] concatenate (Concatenate) (None, 4420, 4) 0 
['reshape[0][0]', 'reshape_1[0][0]', 'reshape_2[0][0]', 'reshape_3[0][0]'] tf_op_layer_strided_slice_1 (T (None, 4420, 2) 0 ['concatenate[0][0]'] ensorFlowOpLayer) tf_op_layer_strided_slice (Ten (None, 4420, 2) 0 ['concatenate[0][0]'] sorFlowOpLayer) tf_op_layer_Mul_2 (TensorFlowO (None, 4420, 2) 0 ['tf_op_layer_strided_slice_1[0][ pLayer) 0]'] cls_0_sep_dconv_padding (ZeroP (None, 32, 42, 64) 0 ['basenet.7_relu[0][0]'] adding2D) cls_1_sep_dconv_padding (ZeroP (None, 17, 22, 128) 0 ['basenet.10.5_relu[0][0]'] adding2D) cls_2_sep_dconv_padding (ZeroP (None, 10, 12, 256) 0 ['basenet.12.5_relu[0][0]'] adding2D) tf_op_layer_Mul (TensorFlowOpL (None, 4420, 2) 0 ['tf_op_layer_strided_slice[0][0] ayer) '] tf_op_layer_Exp (TensorFlowOpL (None, 4420, 2) 0 ['tf_op_layer_Mul_2[0][0]'] ayer) cls_0_sep_dconvbias (Depthwise (None, 30, 40, 64) 640 ['cls_0_sep_dconv_padding[0][0]'] Conv2D) cls_1_sep_dconvbias (Depthwise (None, 15, 20, 128) 1280 ['cls_1_sep_dconv_padding[0][0]'] Conv2D) cls_2_sep_dconvbias (Depthwise (None, 8, 10, 256) 2560 ['cls_2_sep_dconv_padding[0][0]'] Conv2D) tf_op_layer_Mul_1 (TensorFlowO (None, 4420, 2) 0 ['tf_op_layer_Mul[0][0]'] pLayer) tf_op_layer_Mul_3 (TensorFlowO (None, 4420, 2) 0 ['tf_op_layer_Exp[0][0]'] pLayer) cls_0_sep_relu (ReLU) (None, 30, 40, 64) 0 ['cls_0_sep_dconvbias[0][0]'] cls_1_sep_relu (ReLU) (None, 15, 20, 128) 0 ['cls_1_sep_dconvbias[0][0]'] cls_2_sep_relu (ReLU) (None, 8, 10, 256) 0 ['cls_2_sep_dconvbias[0][0]'] tf_op_layer_AddV2 (TensorFlowO (None, 4420, 2) 0 ['tf_op_layer_Mul_1[0][0]'] pLayer) tf_op_layer_RealDiv (TensorFlo (None, 4420, 2) 0 ['tf_op_layer_Mul_3[0][0]'] wOpLayer) tf_op_layer_RealDiv_1 (TensorF (None, 4420, 2) 0 ['tf_op_layer_Mul_3[0][0]'] lowOpLayer) cls_0_sep_convbias (Conv2D) (None, 30, 40, 6) 390 ['cls_0_sep_relu[0][0]'] cls_1_sep_convbias (Conv2D) (None, 15, 20, 4) 516 ['cls_1_sep_relu[0][0]'] cls_2_sep_convbias (Conv2D) (None, 8, 10, 4) 1028 ['cls_2_sep_relu[0][0]'] cls_3_convbias (Conv2D) (None, 4, 5, 6) 13830 
['extras_relu2[0][0]'] tf_op_layer_Sub (TensorFlowOpL (None, 4420, 2) 0 ['tf_op_layer_AddV2[0][0]', ayer) 'tf_op_layer_RealDiv[0][0]'] tf_op_layer_AddV2_1 (TensorFlo (None, 4420, 2) 0 ['tf_op_layer_AddV2[0][0]', wOpLayer) 'tf_op_layer_RealDiv_1[0][0]'] reshape_4 (Reshape) (None, 3600, 2) 0 ['cls_0_sep_convbias[0][0]'] reshape_5 (Reshape) (None, 600, 2) 0 ['cls_1_sep_convbias[0][0]'] reshape_6 (Reshape) (None, 160, 2) 0 ['cls_2_sep_convbias[0][0]'] reshape_7 (Reshape) (None, 60, 2) 0 ['cls_3_convbias[0][0]'] tf_op_layer_concat (TensorFlow (None, 4420, 4) 0 ['tf_op_layer_Sub[0][0]', OpLayer) 'tf_op_layer_AddV2_1[0][0]'] concatenate_1 (Concatenate) (None, 4420, 2) 0 ['reshape_4[0][0]', 'reshape_5[0][0]', 'reshape_6[0][0]', 'reshape_7[0][0]'] tf_op_layer_Minimum (TensorFlo (None, 4420, 4) 0 ['tf_op_layer_concat[0][0]'] wOpLayer) softmax (Softmax) (None, 4420, 2) 0 ['concatenate_1[0][0]'] tf_op_layer_Maximum (TensorFlo (None, 4420, 4) 0 ['tf_op_layer_Minimum[0][0]'] wOpLayer) concatenate_2 (Concatenate) (None, 4420, 6) 0 ['softmax[0][0]', 'tf_op_layer_Maximum[0][0]'] tf_op_layer_Shape (TensorFlowO (3,) 0 ['concatenate_2[0][0]'] pLayer) tf_op_layer_strided_slice_3 (T (2,) 0 ['tf_op_layer_Shape[0][0]'] ensorFlowOpLayer) tf_op_layer_strided_slice_2 (T (None, 4420) 0 ['softmax[0][0]'] ensorFlowOpLayer) tf_op_layer_Shape_1 (TensorFlo (3,) 0 ['concatenate_2[0][0]'] wOpLayer) tf_op_layer_Prod (TensorFlowOp () 0 ['tf_op_layer_strided_slice_3[0][ Layer) 0]'] tf_op_layer_Shape_2 (TensorFlo (3,) 0 ['concatenate_2[0][0]'] wOpLayer) tf_op_layer_Greater (TensorFlo (None, 4420) 0 ['tf_op_layer_strided_slice_2[0][ wOpLayer) 0]'] tf_op_layer_strided_slice_4 (T (0,) 0 ['tf_op_layer_Shape_1[0][0]'] ensorFlowOpLayer) tf_op_layer_concat_1/values_1 (1,) 0 ['tf_op_layer_Prod[0][0]'] (TensorFlowOpLayer) tf_op_layer_strided_slice_5 (T (1,) 0 ['tf_op_layer_Shape_2[0][0]'] ensorFlowOpLayer) tf_op_layer_Reshape_1 (TensorF (None,) 0 ['tf_op_layer_Greater[0][0]'] lowOpLayer) tf_op_layer_concat_1 
(TensorFl (2,) 0 ['tf_op_layer_strided_slice_4[0][ owOpLayer) 0]', 'tf_op_layer_concat_1/values_1[0 ][0]', 'tf_op_layer_strided_slice_5[0][ 0]'] tf_op_layer_Where (TensorFlowO (None, 1) 0 ['tf_op_layer_Reshape_1[0][0]'] pLayer) tf_op_layer_Reshape (TensorFlo (None, 6) 0 ['concatenate_2[0][0]', wOpLayer) 'tf_op_layer_concat_1[0][0]'] tf_op_layer_Squeeze (TensorFlo (None,) 0 ['tf_op_layer_Where[0][0]'] wOpLayer) tf_op_layer_GatherV2 (TensorFl (None, 6) 0 ['tf_op_layer_Reshape[0][0]', owOpLayer) 'tf_op_layer_Squeeze[0][0]'] tf_op_layer_strided_slice_6 (T (None, 4) 0 ['tf_op_layer_GatherV2[0][0]'] ensorFlowOpLayer) tf_op_layer_strided_slice_7 (T (None,) 0 ['tf_op_layer_GatherV2[0][0]'] ensorFlowOpLayer) tf_op_layer_NonMaxSuppressionV (None,) 0 ['tf_op_layer_strided_slice_6[0][ 3 (TensorFlowOpLayer) 0]', 'tf_op_layer_strided_slice_7[0][ 0]'] tf_op_layer_GatherV2_1 (Tensor (None, 6) 0 ['tf_op_layer_GatherV2[0][0]', FlowOpLayer) 'tf_op_layer_NonMaxSuppressionV3 [0][0]'] tf_op_layer_Shape_3 (TensorFlo (2,) 0 ['tf_op_layer_GatherV2_1[0][0]'] wOpLayer) tf_op_layer_strided_slice_8 (T () 0 ['tf_op_layer_Shape_3[0][0]'] ensorFlowOpLayer) tf_op_layer_strided_slice_9 (T (None,) 0 ['tf_op_layer_GatherV2_1[0][0]'] ensorFlowOpLayer) tf_op_layer_Minimum_1 (TensorF () 0 ['tf_op_layer_strided_slice_8[0][ lowOpLayer) 0]'] tf_op_layer_TopKV2 (TensorFlow [(None,), 0 ['tf_op_layer_strided_slice_9[0][ OpLayer) (None,)] 0]', 'tf_op_layer_Minimum_1[0][0]'] tf_op_layer_GatherV2_2 (Tensor (None, 6) 0 ['tf_op_layer_GatherV2_1[0][0]', FlowOpLayer) 'tf_op_layer_TopKV2[0][1]'] ================================================================================================== Total params: 281,100 Trainable params: 276,292 Non-trainable params: 4,808 </code></pre> <p><code>model.layers[187]</code> which I used as the output for <code>new_model</code> refers to this layer: <code>tf_op_layer_Greater (TensorFlowOpLayer)</code></p> <p>Also, here is the structure of the directory <code>RFB</code>:</p> 
<pre><code>.
└── RFB/
    ├── variables/
    │   ├── variables.data-00000-of-00001
    │   └── variables.index
    └── saved_model.py
</code></pre> <h1><strong>Problem</strong></h1> <p>This is the output I get from running <code>main.py</code>:</p> <pre><code>WARNING:tensorflow:SavedModel saved prior to TF 2.5 detected when loading Keras model. Please ensure that you are saving the model with model.save() or tf.keras.models.save_model(), *NOT* tf.saved_model.save(). To confirm, there should be a file named &quot;keras_metadata.pb&quot; in the SavedModel directory.
WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.
1/1 [==============================] - 1s 617ms/step
Prediction: [False False False ... False False False]
</code></pre> <p>The prediction is indeed a list of length <code>4420</code>, but I expected the values to be floating point numbers, not <code>False</code>.</p>
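A note on the symptom: the layer chosen as the output, <code>tf_op_layer_Greater</code>, is an elementwise <code>&gt;</code> comparison, so its output is boolean by construction; the floating-point scores live one step earlier in the graph (the softmax / slice tensors feeding it). A minimal NumPy sketch of the distinction (the threshold and shapes are assumptions, not read from the saved graph):

```python
import numpy as np

# Stand-in for the graph around model.layers[187]: float class scores for
# 4420 anchors, followed by the Greater op that layers[187] corresponds to.
scores = np.random.default_rng(0).random(4420).astype(np.float32)
mask = scores > 0.5            # what tf_op_layer_Greater computes

print(mask.dtype)              # bool    -> why the prediction prints False
print(scores.dtype)            # float32 -> the tensor to cut the model at instead
```

So to get floats out of `new_model`, the output should be the tensor feeding the `Greater` op (e.g. the softmax output) rather than `model.layers[187]` itself.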
<python><tensorflow>
2023-07-05 01:59:44
0
520
chai
76,616,665
7,530,306
numba `if condition` performance
<p>I have this code</p> <pre><code>from numba import njit
import numpy as np

@njit()
def build_permutations(position_length,
                       starting_leverage,
                       ending_leverage,
                       iteration_size=100_000_000,
                       starting_amount=100_000
                       ):
    sublist = np.zeros((100_000_000, position_length), dtype=np.int16)
    sublist_index = 0
    current_permutation = np.full(position_length, starting_leverage)

    while True:
        # Apply the filter condition
        sum_of_leverage = np.sum(current_permutation)
        if (sum_of_leverage &gt; 20) and (sum_of_leverage &lt;= 100):
            sublist[sublist_index] = current_permutation
            sublist_index += 1
            if sublist_index == iteration_size:
                yield sublist[:sublist_index]
                sublist.fill(0)
                sublist_index = 0

        # Generate the next permutation
        i = position_length - 1
        while i &gt;= 0 and current_permutation[i] == ending_leverage:
            current_permutation[i] = starting_leverage
            i -= 1
        if i &lt; 0:
            break
        current_permutation[i] += 1

    # Yield the remaining sublist if it's not empty
    if sublist_index &gt; 0:
        yield sublist[:sublist_index]
</code></pre> <p>changing the filter condition to be this</p> <pre><code>call_leverage = 0
put_leverage = 0
for index, val in enumerate(current_permutation):
    if index_to_position_type[index] == 1:
        put_leverage += val
    if index_to_position_type[index] == -1:
        call_leverage += val
max_leverage = max(call_leverage, put_leverage)
min_leverage = min(call_leverage, put_leverage)
if (min_leverage &gt;= 35) and (max_leverage &lt;= 40):
</code></pre> <p>instead of</p> <pre><code>sum_of_leverage = np.sum(current_permutation)
if (sum_of_leverage &gt; 20) and (sum_of_leverage &lt;= 100):
</code></pre> <p>slows it from around 1 second to around 90 seconds. To work around this, I have been thinking of parallelizing the function. Is there any reason adding this for loop + if condition would increase the time so much? The original filter was also summing the permutation, just without ifs. Why would this take so much longer? Is there a way to index into another array with numpy's sum function, something like</p> <p><code>np.sum(perm, where=index_to_position_type[{{perm_index}}]==1)</code></p>
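One branch-free alternative worth sketching: precompute the two boolean masks once (outside the permutation loop) and replace the per-element `if`s with masked sums. The data below is a made-up four-position example, not taken from the question:

```python
import numpy as np

# Hypothetical data: +1 marks a put position, -1 a call position.
index_to_position_type = np.array([1, -1, 1, -1], dtype=np.int8)
current_permutation = np.array([10, 12, 9, 11], dtype=np.int64)

# Precompute the masks once, outside the hot loop; the filter then becomes
# two branch-free masked sums instead of a loop full of ifs.
put_mask = index_to_position_type == 1
call_mask = index_to_position_type == -1

put_leverage = int(current_permutation[put_mask].sum())    # 10 + 9  = 19
call_leverage = int(current_permutation[call_mask].sum())  # 12 + 11 = 23

# np.sum's `where=` keyword works too, as the question guessed:
alt = int(np.sum(current_permutation, where=put_mask))     # 19
```

Both forms also compile cleanly under `@njit`, so the masks can be passed into `build_permutations` as an extra argument.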
<python><performance><numba>
2023-07-05 01:52:11
0
665
sf8193
76,616,436
11,317,931
Link from Django Model's "absolute url" to its admin page
<p>According to the <a href="https://docs.djangoproject.com/en/4.2/ref/models/instances/#get-absolute-url" rel="nofollow noreferrer">docs</a>, you can specify a <code>get_absolute_url</code> in your Django Model for a &quot;View on site&quot; button to show up on Django Admin.</p> <p>However, how would you do the opposite? A link on the above defined &quot;absolute url&quot; that takes you to the admin page for that instance? I can think of ways to hardcode the admin url and fill in just the instance ID, but I was hoping for a pythonic way (e.g. <code>instance.get_admin_url()</code>)</p> <p>Thanks for the help!</p>
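For the reverse direction: Django's admin registers each change view under the URL name <code>admin:&lt;app_label&gt;_&lt;model_name&gt;_change</code>, so a <code>get_admin_url()</code> method can be built with <code>django.urls.reverse</code> on the instance's <code>_meta</code>. The sketch below only exercises the name construction so it runs without a configured Django project; the real method body is shown in the comment (app/model names are placeholders):

```python
# With Django available, the model method would be roughly:
#
#     from django.urls import reverse
#
#     def get_admin_url(self):
#         return reverse(
#             f"admin:{self._meta.app_label}_{self._meta.model_name}_change",
#             args=[self.pk],
#         )
#
# The helper below only builds the admin URL *name* itself.
def admin_change_url_name(app_label: str, model_name: str) -> str:
    return f"admin:{app_label}_{model_name}_change"

print(admin_change_url_name("shop", "order"))   # admin:shop_order_change
```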
<python><django><django-models><django-admin>
2023-07-05 00:27:01
1
1,017
Krishnan Shankar
76,616,374
3,933,143
How to update index.html file to add static to its path during build process?
<p>I want to add <code>/static/</code> in front of all the paths in index.html, since Flask looks for static files inside the <code>/root_folder/static</code> folder. Even though I have placed all the files inside the <code>static</code> folder, it is unable to get the files, as the path in the index.html file is not correct.</p> <p>My current directory structure in Flask looks like this</p> <pre><code>web_ui
src
 -static
 -web_app.py
logs
config
</code></pre> <p>This is what the path to various files in index.html looks like.</p> <pre><code>&lt;script src=&quot;assets/js/apexcharts.min.js&quot;&gt;&lt;/script&gt;
&lt;script src=&quot;assets/js/bootstrap.bundle.min.js&quot;&gt;&lt;/script&gt;
</code></pre> <p>I would like to have</p> <pre><code>&lt;script src=&quot;/static/assets/js/apexcharts.min.js&quot;&gt;&lt;/script&gt;
&lt;script src=&quot;/static/assets/js/bootstrap.bundle.min.js&quot;&gt;&lt;/script&gt;
</code></pre> <p>Is there any way to change the angular.json file? I have changed the output directory</p> <pre><code>&quot;architect&quot;: {
  &quot;build&quot;: {
    &quot;builder&quot;: &quot;@angular-devkit/build-angular:browser&quot;,
    &quot;options&quot;: {
      &quot;outputPath&quot;: &quot;../web_ui/src/static/&quot;, #&lt;----
      &quot;index&quot;: &quot;src/index.html&quot;,
      &quot;main&quot;: &quot;src/main.ts&quot;,
      &quot;polyfills&quot;: &quot;src/polyfills.ts&quot;,
      &quot;tsConfig&quot;: &quot;tsconfig.app.json&quot;,
      &quot;assets&quot;: [
        &quot;src/favicon.ico&quot;,
        &quot;src/assets&quot;
      ],
      &quot;styles&quot;: [
        &quot;src/custom-theme.scss&quot;,
        &quot;src/styles.css&quot;
      ],
      &quot;scripts&quot;: [
        &quot;src/assets/js/main.js&quot;,
        &quot;node_modules/apexcharts/dist/apexcharts.min.js&quot;
      ]
</code></pre> <p>I have looked into this <a href="https://stackoverflow.com/questions/45692749/angular-4-frontend-with-python-flask-backend-how-to-render-simple-index-page">answer</a> but the person has not explained how to do it.
Do I have to manually change the path?</p>
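Two sketches of how this is usually handled. The cleaner route is to let Angular do it at build time with <code>ng build --base-href /static/</code>, which makes relative asset URLs resolve under <code>/static/</code>. If you do want to post-process <code>index.html</code> yourself, a small rewrite script works; the regex below is a hedged sketch that only prefixes relative <code>src</code>/<code>href</code> values (absolute and external URLs are left alone):

```python
import re

# Post-process index.html after `ng build`, prefixing local asset URLs with
# /static/ so Flask can serve them from its static folder.
def prefix_static(html: str) -> str:
    # rewrite only relative src/href values; skip "http(s)://..." and "/..."
    return re.sub(r'(src|href)="(?!https?://|/)([^"]+)"',
                  r'\1="/static/\2"', html)

html = '<script src="assets/js/apexcharts.min.js"></script>'
print(prefix_static(html))
# <script src="/static/assets/js/apexcharts.min.js"></script>
```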
<python><angular><flask>
2023-07-05 00:01:34
1
1,269
Sam
76,616,241
14,245,686
How to encode person names for use in XGBoost?
<p>I appreciate the help. Googling for how to use string features with XGBoost mainly returns results for categorical variables, which I think is different from what I am doing. I am trying to use XGBoost to predict race from first name and last name. What I want to do is character-encode my names, similar to what I am doing for an LSTM in PyTorch. There, I use this function</p> <pre class="lang-py prettyprint-override"><code>def encode_name(name: str) -&gt; torch.Tensor:
    return torch.tensor(
        [VALID_NAME_CHARS_DICT[char] for char in name],
        device=DEVICE
    )
</code></pre> <p>How would I do something similar for XGBoost?</p>
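The key difference from the LSTM case is that XGBoost consumes a fixed-width numeric matrix, so the variable-length code lists must be padded or truncated to a common length. A hedged sketch using `ord()` codes as a stand-in for `VALID_NAME_CHARS_DICT` (which isn't shown in the question), with `MAX_LEN` an arbitrary choice:

```python
import numpy as np

# XGBoost needs a fixed-width numeric matrix, so pad or truncate each name
# to MAX_LEN character codes (0 = padding, i.e. "no character here").
MAX_LEN = 10

def encode_name(name: str, max_len: int = MAX_LEN) -> np.ndarray:
    codes = [ord(c) for c in name.lower()[:max_len]]
    codes += [0] * (max_len - len(codes))          # right-pad with zeros
    return np.array(codes, dtype=np.float32)

X = np.stack([encode_name(n) for n in ["Ada", "Grace"]])
print(X.shape)   # (2, 10) -- ready for xgboost.DMatrix / XGBClassifier.fit
```

Whether tree models learn much from raw character positions is debatable; character n-gram counts (e.g. via `sklearn`'s `CountVectorizer(analyzer="char")`) are a common alternative encoding for names.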
<python><machine-learning><nlp><xgboost>
2023-07-04 23:06:35
1
482
stressed
76,616,204
11,512,576
How to set up REST API connection to pull records from databricks using Python
<p>I'd like to upload and download mlflow experiments saved in Databricks using Python. I can set up the connection using the following code.</p> <pre><code>dbk_host = 'https://adb-xxxxxxxxxxxxxxxxxxxxxxxxx.clooud.databricks.com'
api_version = '/api/2.0'
api_command = '/jobs/runs/list'
url = f&quot;{dbk_host}{api_version}{api_command}&quot;

DBRKS_REQ_HEADERS = {'Authorization': 'Bearer ' + DBRKS_BEARER_TOKEN}

response = requests.get(url=url, headers=DBRKS_REQ_HEADERS, params={'limit': 25, 'offset': 0})
</code></pre> <p>However, when I switched to a slightly different code, <code>400 Client Error</code> is reported.</p> <pre><code>dbk_host = 'https://adb-xxxxxxxxxxxxxxxxxxxxxxxxx.clooud.databricks.com'
api_version = '/api/2.0'
api_command = '/mlflow/experiments/get-by-name'
url = f&quot;{dbk_host}{api_version}{api_command}&quot;

DBRKS_REQ_HEADERS = {'Authorization': 'Bearer ' + DBRKS_BEARER_TOKEN}
DBRKS_REQ_DATA = {&quot;experiment_name&quot;: &quot;mlflow-model-registry-example&quot;}

response = requests.get(url=url, data=DBRKS_REQ_DATA, headers=DBRKS_REQ_HEADERS)
</code></pre> <p>Is it because I'm missing <code>params</code>? How should I solve this issue? And more generally, how should I set up the connection so that I can use the REST APIs listed here:</p> <p><a href="https://www.mlflow.org/docs/latest/rest-api.html#search-experiments" rel="nofollow noreferrer">https://www.mlflow.org/docs/latest/rest-api.html#search-experiments</a></p>
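A likely culprit: <code>get-by-name</code> is a GET endpoint, and the working <code>jobs/runs/list</code> call passed its arguments as <code>params=</code> (query string), while the failing call passes <code>data=</code>, which sends a form-encoded request body instead. A hedged sketch of the URL the request should end up with (host and token are placeholders):

```python
import urllib.parse

# GET endpoints take their arguments in the query string, so build/send
# them with params=..., not data=... (data= is the likely cause of the 400).
dbk_host = "https://adb-xxxx.cloud.databricks.com"   # placeholder host
url = f"{dbk_host}/api/2.0/mlflow/experiments/get-by-name"
params = {"experiment_name": "mlflow-model-registry-example"}

full_url = url + "?" + urllib.parse.urlencode(params)
print(full_url)
# with requests:  requests.get(url, headers=DBRKS_REQ_HEADERS, params=params)
```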
<python><rest><databricks><mlflow>
2023-07-04 22:50:04
1
491
Harry
76,616,042
20,022,511
AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'
<ul> <li>I am trying to have images in my Tkinter GUI, hence I am using <a href="https://en.wikipedia.org/wiki/Python_Imaging_Library" rel="noreferrer">PIL</a>.</li> <li><em>Image.ANTIALIAS</em> is not working. However, <em>Image.BILINEAR</em> works.</li> </ul> <p>Here's some sample code:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk
from PIL import Image, ImageTk

window = tk.Tk()

image = Image.open(r&quot;VC.png&quot;)
image = image.resize((20, 20), Image.ANTIALIAS)
tk_image = ImageTk.PhotoImage(image)

image_label = tk.Label(window, image=tk_image)
image_label.pack()

window.mainloop()
</code></pre> <p>Here's the error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File &quot;&lt;module1&gt;&quot;, line 19, in &lt;module&gt;
AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'
</code></pre> <ul> <li>I tried reinstalling pip <em>and</em> <a href="https://en.wikipedia.org/wiki/Python_Imaging_Library" rel="noreferrer">Pillow</a>. It didn't work.</li> <li>I asked <a href="https://en.wikipedia.org/wiki/ChatGPT" rel="noreferrer">ChatGPT</a> about this, and it advised me to upgrade to Pillow's latest version. I am on the latest version (10.0.0).</li> </ul>
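Context for the error: Pillow 10.0 removed the long-deprecated `Image.ANTIALIAS` alias; the same filter lives on as `Image.Resampling.LANCZOS` (added in Pillow 9.1). A version-tolerant sketch:

```python
from PIL import Image

# Pillow 10 removed Image.ANTIALIAS; Resampling.LANCZOS is the same filter.
# The fallback keeps older Pillow versions working too.
try:
    LANCZOS = Image.Resampling.LANCZOS   # Pillow >= 9.1
except AttributeError:
    LANCZOS = Image.ANTIALIAS            # older Pillow

img = Image.new("RGB", (100, 100)).resize((20, 20), LANCZOS)
print(img.size)   # (20, 20)
```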
<python><python-imaging-library>
2023-07-04 21:58:01
9
1,373
MT_276
76,615,998
2,813,606
Use lambda function to pull latitude & longitude out of city names
<p>I have a dataframe of 1100 rows with moving data: things like origin cities and countries as well as destination cities and countries.</p> <p>The process I'm working through involves taking city names (e.g., Portland, Oregon) and sending them to the Nominatim search page (<a href="https://nominatim.openstreetmap.org/search/" rel="nofollow noreferrer">https://nominatim.openstreetmap.org/search/</a>) to pull out the latitude and longitude.</p> <p>I found a pretty good one-off example on Stack Overflow:</p> <pre><code>import requests
import urllib.parse

address = 'Portland, Oregon'
url = 'https://nominatim.openstreetmap.org/search/' + urllib.parse.quote(address) + '?format=json'
response = requests.get(url).json()
print(response[0][&quot;lat&quot;])
print(response[0][&quot;lon&quot;])
</code></pre> <p>This works great even when I have non-city entries (e.g., Texas, United States or Bavaria, Germany).</p> <p>The issue I'm running into now is that I can't quite get the code to run down my list of locations in my dataframe column and pull out the info I need.</p> <p>Here is my code:</p> <pre><code>segment1 = 'https://nominatim.openstreetmap.org/search/'
segment3 = '?format=json'

df1['json_location_data'] = df1.apply(lambda x: requests.get(segment1 + urllib.parse.quote(str(df1['Origin'])) + segment3).json())
</code></pre> <p>I'm getting an error that reads:</p> <blockquote> <p>ValueError: Expected a 1D array, got an array with shape (1100, 17)</p> </blockquote> <p>Not sure how to fix this error, so I created a reproducible example here:</p> <pre><code>import pandas as pd

locations = ['Portland, Oregon', 'Seattle, Washington', 'New York, New York', 'Texas, United States']
df = pd.DataFrame(locations, columns=['locations'])

segment1 = 'https://nominatim.openstreetmap.org/search/'
segment3 = '?format=json'

df['json_location_data'] = df.apply(lambda x: requests.get(segment1 + urllib.parse.quote(str(df['locations'])) + segment3).json())
</code></pre> <p>This works without producing
any errors, but returns a column with all NAs.</p> <p>How can I solve this issue and get the desired data?</p>
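Two bugs combine here: the lambda quotes the whole column `df['locations']` (stringified) instead of the current row's value, and `DataFrame.apply` defaults to column-wise (`axis=0`). Applying over the Series sidesteps both. A sketch that builds the URLs offline (the actual `requests.get(url).json()` per row then works as in the one-off example):

```python
import urllib.parse
import pandas as pd

segment1 = "https://nominatim.openstreetmap.org/search/"
segment3 = "?format=json"

df = pd.DataFrame({"locations": ["Portland, Oregon", "Seattle, Washington"]})

# Apply over the Series so the lambda receives one cell at a time,
# not a whole column (DataFrame.apply's default axis=0 behaviour).
df["url"] = df["locations"].apply(
    lambda loc: segment1 + urllib.parse.quote(loc) + segment3
)
print(df["url"][0])
# https://nominatim.openstreetmap.org/search/Portland%2C%20Oregon?format=json
```

Note that Nominatim's usage policy asks for a descriptive `User-Agent` header and rate-limited requests when looping over 1100 rows.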
<python><pandas><nominatim>
2023-07-04 21:42:27
2
921
user2813606
76,615,910
11,628,437
Why is my gaussian elimination algorithm failing?
<p>In Gaussian elimination, we take our matrix A and another matrix I.</p> <p>We proceed to convert the matrix A into an identity matrix. Once that's done, we apply the same steps that we did on A on the identity matrix. Then A becomes the identity matrix and I the inverse.</p> <p>However, in my code A becomes identity, but the first few rows of the inverse are incorrect.</p> <p>I have spent more than a couple of hours debugging. What I realized is that my method works as long as my matrix is relatively sparse. For instance, this works -</p> <p><code>A = [[10., 3, 10.], [0., 9., 0.], [10.,12.,1.]]</code></p> <p>And yes, my matrix <code>A</code> always becomes the identity matrix in the end. The problem is that <code>I</code> doesn't become <code>A^{-1}</code>.</p> <p>A quick explanation about my approach. I first worked on converting my matrix into an upper triangular matrix. Then I worked on converting it into an identity matrix. The code is written with <code>Python 2.7</code>.</p> <p>Here's my code -</p> <pre><code>from fractions import Fraction

A = [[10., 3, 10.], [0., 9., 0.], [10., 12., 1.]]

for i in range(len(A)):
    for j in range(len(A)):
        A[i][j] = Fraction(A[i][j])

row = len(A)
print &quot;row = &quot;, row

I = [[0 for i in range(len(A))] for j in range(len(A))]
for i in range(len(I)):
    for j in range(len(I)):
        if i == j:
            I[i][j] = 1
        I[i][j] = Fraction(I[i][j])
print I

foo = {}
for i in range(len(A)):
    for j in range(len(A)):
        if i == j:
            foo[i] = A[i][j]

print &quot;A before = &quot;, A
print &quot;I before = &quot;, I

i = 0
for k in range(len(A)):
    normalize = A[k][k]
    if normalize != Fraction(0, 1):
        for l in range(len(A)):
            A[k][l] = A[k][l]/normalize
        for l in range(len(I)):
            I[k][l] = I[k][l]/normalize
    i = k+1
    while i &lt; len(A):
        coeff = A[i][k]
        for j in range(len(A)):
            A[i][j] = A[k][j]*coeff - A[i][j]
            I[i][j] = I[k][j]*coeff - I[i][j]
        i = i + 1

print &quot;A intermediate = &quot;, A
print &quot;I intermediate = &quot;, I

for i in range(len(A)):
    for j in range(len(A)):
        k = i
        while k+1 &lt; len(A):
            coeff = A[i][k+1]
            A[i][j] = A[i][j] - A[k+1][j]*coeff
            print &quot;A[i][j] final = &quot;, A[i][j]
            I[i][j] = I[i][j] - I[k+1][j]*coeff
            k = k+1

print &quot;A final = &quot;, A
print &quot;I final = &quot;, I
</code></pre> <p>Here's the output -</p> <pre><code>A final = [[Fraction(1, 1), Fraction(0, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(1, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(0, 1), Fraction(1, 1)]]
I final = [[Fraction(-1, 90), Fraction(-13, 90), Fraction(1, 9)], [Fraction(0, 1), Fraction(1, 9), Fraction(0, 1)], [Fraction(1, 9), Fraction(1, 9), Fraction(-1, 9)]]
</code></pre> <p><strong>Edit 1</strong> - I need to solve this using only standard Python libraries.</p> <p><strong>Edit 2</strong> - For instance, if I give this input -</p> <pre><code>A = [[5,10,20], [2, 8, 12], [4, 8, 8]]
</code></pre> <p>I get the following output which is wrong -</p> <pre><code>A final = [[Fraction(1, 1), Fraction(0, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(1, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(0, 1), Fraction(1, 1)]]
I final = [[Fraction(0, 1), Fraction(-1, 2), Fraction(1, 2)], [Fraction(-1, 5), Fraction(1, 4), Fraction(1, 8)], [Fraction(1, 10), Fraction(0, 1), Fraction(-1, 8)]]
</code></pre> <p>Here's the right answer -</p> <p><a href="https://i.sstatic.net/pk3H5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pk3H5.png" alt="enter image description here" /></a></p>
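For comparison, a stdlib-only Gauss-Jordan sketch that eliminates above and below the pivot in a single pass, applying identical row operations to A and I (like the original, it assumes nonzero pivots, i.e. no row swaps). One likely source of the bug above: in the back-substitution loop, `coeff = A[i][k+1]` is re-read inside the `j` loop after earlier columns of row `i` have already been rewritten, so the operations applied to `I` stop mirroring the ones applied to `A`.

```python
from fractions import Fraction

def invert(matrix):
    """Gauss-Jordan inversion over exact Fractions (no pivoting)."""
    n = len(matrix)
    A = [[Fraction(x) for x in row] for row in matrix]
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        # normalize the pivot row in both matrices
        piv = A[k][k]
        A[k] = [x / piv for x in A[k]]
        I[k] = [x / piv for x in I[k]]
        # eliminate column k from every other row, above AND below the pivot;
        # read the coefficient once per row, before modifying that row
        for i in range(n):
            if i != k:
                c = A[i][k]
                A[i] = [a - c * b for a, b in zip(A[i], A[k])]
                I[i] = [a - c * b for a, b in zip(I[i], I[k])]
    return I

inv = invert([[5, 10, 20], [2, 8, 12], [4, 8, 8]])
print(inv[0])   # first row of the inverse, as exact Fractions
```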
<python><math><matrix><linear-algebra><matrix-inverse>
2023-07-04 21:15:56
1
1,851
desert_ranger
76,615,881
857,832
Django: aggregate django fields to avoid N + 1 problem
<p>I have 3 tables/classes that are relevant to each other:</p> <ul> <li>CourseStudent - represents a Student signed up to the course</li> <li>Presence - represents the attendance list of the CourseStudent</li> <li>CourseStudentPayment - represents the payments list for CourseStudent</li> </ul> <p>In the code it looks like this:</p> <pre><code>class CourseStudentPayment(models.Model):
    course_student = models.ForeignKey(
        &quot;CourseStudent&quot;,
        on_delete=models.CASCADE,
        related_name=&quot;course_student_payments&quot;,
    )
    start_date = models.DateField(db_index=True)
    # other fields: price, currency, etc
    price = models.DecimalField(default=0, max_digits=10, decimal_places=2)

    def lessons_complete(self) -&gt; int:
        return (
            Presence.objects.filter(
                course_student=self.course_student,
            )
            .filter(date__gte=self.start_date)
            .count()
        )


class Presence(models.Model):
    course_student = models.ForeignKey(&quot;CourseStudent&quot;, on_delete=models.CASCADE)
    date = models.DateField()
    # some other fields


class CourseStudent(models.Model):
    # some course-related information
    student = models.CharField(...)

    def last_payment(self) -&gt; CourseStudentPayment:
        return CourseStudentPayment.objects.filter(course_student=self).order_by(&quot;-start_date&quot;).first()
</code></pre> <p>So the <code>lessons_complete</code> function calculates the number of attendances since the payment date. Both <code>CourseStudentPayment</code> and <code>Presence</code> objects have a <code>CourseStudent</code> pk.</p> <p>I want to render a list of payments for the students with <code>lessons_complete</code> in an efficient way. The dumb solution would be:</p> <ul> <li>get the list of payments: <code>course_payments = CourseStudentPayment.objects.all()</code></li> <li>for each payment, call <code>lessons_complete</code>.</li> </ul> <p>This solution creates the N+1 problem, where for each payment I do a Presence lookup.</p> <p>In SQL I would just join the two tables (pseudocode):</p> <pre><code>SELECT csp.*, count(p.id)
FROM CourseStudentPayment csp
JOIN Presence p ON csp.course_student_id = p.course_student_id
WHERE p.date &gt; csp.start_date
GROUP BY csp.id
</code></pre> <p>Is it possible to aggregate Presence table results and use them within CourseStudentPayment rows?</p>
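A sketch of the single query this should become. Below it is demonstrated with stdlib sqlite3 so it runs standalone; note the date comparison belongs in the join condition (with a LEFT JOIN) so payments with zero matching presences still appear. Table and column names are simplified stand-ins for the models above:

```python
import sqlite3

# One join + GROUP BY gives the presence count per payment in a single
# query (no N+1).  Simplified schema standing in for the Django models.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE payment  (id INTEGER PRIMARY KEY, course_student_id INT, start_date TEXT);
CREATE TABLE presence (id INTEGER PRIMARY KEY, course_student_id INT, date TEXT);
INSERT INTO payment  VALUES (1, 7, '2023-06-01');
INSERT INTO presence VALUES (1, 7, '2023-06-10'), (2, 7, '2023-05-20');
""")
rows = con.execute("""
SELECT csp.id, COUNT(p.id)
FROM payment csp
LEFT JOIN presence p
  ON p.course_student_id = csp.course_student_id
 AND p.date >= csp.start_date
GROUP BY csp.id
""").fetchall()
print(rows)   # [(1, 1)] -- only the presence on/after start_date counts
```

In the ORM the same shape is (hedged, not executed here): `CourseStudentPayment.objects.annotate(lessons_complete=Count("course_student__presence", filter=Q(course_student__presence__date__gte=F("start_date"))))` with `Count`, `Q`, `F` from `django.db.models`.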
<python><sql><django><django-models>
2023-07-04 21:08:09
1
954
Rustam Ganeyev
76,615,848
3,937,811
Raise ValueError("Wedge sizes 'x' must be non negative values") ValueError: Wedge sizes 'x' must be non negative values
<p>I am making a simple Python program for task management and financial reporting. The error that I am receiving is</p> <pre><code>Traceback (most recent call last):
  File &quot;/Users/evangertis/development/PythonAutomation/IGTS/Budgeting/financial_progress_report.py&quot;, line 57, in &lt;module&gt;
    plt.pie(values, labels=labels, autopct=&quot;%1.1f%%&quot;)
  File &quot;/usr/local/lib/python3.11/site-packages/matplotlib/pyplot.py&quot;, line 2772, in pie
    return gca().pie(
           ^^^^^^^^^^
  File &quot;/usr/local/lib/python3.11/site-packages/matplotlib/__init__.py&quot;, line 1442, in inner
    return func(ax, *map(sanitize_sequence, args), **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/usr/local/lib/python3.11/site-packages/matplotlib/axes/_axes.py&quot;, line 3196, in pie
    raise ValueError(&quot;Wedge sizes 'x' must be non negative values&quot;)
ValueError: Wedge sizes 'x' must be non negative values
</code></pre> <p>and the program is</p> <pre><code>import openpyxl
import matplotlib.pyplot as plt

# Prompt the user to answer the morning checklist questions
priority1 = input(&quot;What is your top priority for the day? &quot;)
priority2 = input(&quot;What is your second priority for the day? &quot;)
priority3 = input(&quot;What is your third priority for the day? &quot;)
advice = input(&quot;Who can you call for advice on achieving your goals? &quot;)
actions = input(&quot;What specific actions do you need to take to move closer to your objectives? &quot;)
devotional = input(&quot;Did you take 10-15 minutes for morning devotional or meditation? &quot;)
workout = input(&quot;Did you complete a 30-45 minute workout or core exercise routine? &quot;)
budget_review = input(&quot;Did you review your budget and expenses for the day? &quot;)
money_made_today = input(&quot;How much money did you make today? &quot;)
money_spent_today = input(&quot;How much money did you spend today? &quot;)
weekly_expenses = input(&quot;How much money do you need to live for this week? &quot;)
last_week_expenses = input(&quot;How much money did you need to live for last week? &quot;)
money_made_this_week = input(&quot;How much money did you make this week? &quot;)
money_made_last_week = input(&quot;How much money did you make last week? &quot;)
projected_monthly_income = input(&quot;How much money will you make by the end of the month? &quot;)
last_month_income = input(&quot;How much money did you make last month? &quot;)
tax_allocation = input(&quot;How much money needs to be allocated for taxes? &quot;)
gratitude = input(&quot;Did you write down three things you are grateful for today? &quot;)
adjustments = input(&quot;What adjustments do you need to make to reach your financial and exercise goals for the week? &quot;)
affirmation = input(&quot;What positive affirmation or quote inspires you to be productive and focused? &quot;)

# Save the responses in an Excel file
workbook = openpyxl.Workbook()
sheet = workbook.active
sheet.title = &quot;Morning Checklist&quot;

sheet[&quot;A1&quot;] = &quot;Priority 1&quot;
sheet[&quot;B1&quot;] = &quot;Priority 2&quot;
sheet[&quot;C1&quot;] = &quot;Priority 3&quot;
sheet[&quot;D1&quot;] = &quot;Advice&quot;
sheet[&quot;E1&quot;] = &quot;Actions&quot;
sheet[&quot;F1&quot;] = &quot;Devotional&quot;
sheet[&quot;G1&quot;] = &quot;Workout&quot;
sheet[&quot;H1&quot;] = &quot;Budget Review&quot;
sheet[&quot;I1&quot;] = &quot;Money Made Today&quot;
sheet[&quot;J1&quot;] = &quot;Money Spent Today&quot;
sheet[&quot;K1&quot;] = &quot;Weekly Expenses&quot;
sheet[&quot;L1&quot;] = &quot;Last Week Expenses&quot;
sheet[&quot;M1&quot;] = &quot;Money Made This Week&quot;
sheet[&quot;N1&quot;] = &quot;Money Made Last Week&quot;
sheet[&quot;O1&quot;] = &quot;Projected Monthly Income&quot;
sheet[&quot;P1&quot;] = &quot;Last Month Income&quot;
sheet[&quot;Q1&quot;] = &quot;Tax Allocation&quot;
sheet[&quot;R1&quot;] = &quot;Gratitude&quot;
sheet[&quot;S1&quot;] = &quot;Adjustments&quot;
sheet[&quot;T1&quot;] = &quot;Affirmation&quot;

sheet.append([priority1, priority2, priority3, advice, actions, devotional, workout,
              budget_review, money_made_today, money_spent_today, weekly_expenses,
              last_week_expenses, money_made_this_week, money_made_last_week,
              projected_monthly_income, last_month_income, tax_allocation,
              gratitude, adjustments, affirmation])

workbook.save(&quot;morning_checklist.xlsx&quot;)

# Display a visualization of the financial data
labels = [&quot;Money Made Today&quot;, &quot;Money Spent Today&quot;, &quot;Weekly Expenses&quot;,
          &quot;Last Week Expenses&quot;, &quot;Money Made This Week&quot;, &quot;Money Made Last Week&quot;,
          &quot;Projected Monthly Income&quot;, &quot;Last Month Income&quot;, &quot;Tax Allocation&quot;]
values = [float(money_made_today), -float(money_spent_today), -float(weekly_expenses),
          -float(last_week_expenses), float(money_made_this_week), float(money_made_last_week),
          float(projected_monthly_income), float(last_month_income), -float(tax_allocation)]

plt.bar(x=range(len(values)), height=values)
plt.title(&quot;Financial Overview&quot;)
plt.show()
</code></pre> <p>Input:</p> <pre><code>What is your top priority for the day? create a contact list for marketing campaign
What is your second priority for the day? update marketing strategy
What is your third priority for the day? call mentor for advice and update a marketing strategy
Who can you call for advice on achieving your goals? X
What specific actions do you need to take to move closer to your objectives? create contact list
Did you take 10-15 minutes for morning devotional or meditation? yes
Did you complete a 30-45 minute workout or core exercise routine? yes
Did you review your budget and expenses for the day? yes
How much money did you make today? 0
How much money did you spend today? 0
How much money do you need to live for this week? 1174
How much money did you need to live for last week? 439
How much money did you make this week? 17.89
How much money did you make last week? 3.50
How much money will you make by the end of the month? 3522
How much money did you make last month? 89.50
How much money needs to be allocated for taxes? 300
Did you write down three things you are grateful for today? yes
What adjustments do you need to make to reach your financial and exercise goals for the week? call certifified financial planner (CFP) and my friend who is a personal trainer then collect their advice and update my budget and exercise routine
What positive affirmation or quote inspires you to be productive and focused? you are wonderfully and fearfully made
</code></pre> <p>Expected output:</p> <ol> <li>An Excel sheet with the answers</li> <li>A visualization showing the progress</li> </ol> <h2>Update</h2> <p>I followed the advice from the answers below and I'm almost there. <a href="https://i.sstatic.net/TtuND.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TtuND.png" alt="enter image description here" /></a></p> <p>I would like to get this to show how to correlate the priorities to the money generated/lost. Is this possible?</p>
<python><matplotlib><openpyxl>
2023-07-04 20:58:04
2
2,066
Evan Gertis
76,615,773
5,867,094
How to get element's index for PandasArray object
<p>I have this kind of code: <code>a=pd.array(['a', 'b'], dtype=str)</code>. How can I get the index of the element 'a'?</p>
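A couple of hedged sketches that may answer this: a `PandasArray` has no `.index()` method of its own, but it can be treated as a plain sequence or compared element-wise (the sample array below is the asker's).

```python
import numpy as np
import pandas as pd

a = pd.array(['a', 'b'], dtype=str)

# Option 1: treat the array as a plain sequence
idx_seq = list(a).index('a')

# Option 2: element-wise comparison, then the first matching position
idx_np = int(np.flatnonzero(np.asarray(a) == 'a')[0])

print(idx_seq, idx_np)
```

The NumPy variant also finds *all* occurrences (`np.flatnonzero` returns every matching position), which the sequence variant does not.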
<python><pandas>
2023-07-04 20:43:00
3
891
Tiancheng Liu
76,615,478
3,535,537
how to configure pylint for E0611: No name 'xxxx' in module 'xxx' (no-name-in-module)?
<pre><code>├── src
├──── events
├────── foo_bar
├──────── pop_bou
├────────── app.py

def handler() -&gt; Optional[dict]:
    return None
</code></pre> <p><strong>and my pytest file is:</strong></p> <pre><code>def test_foo():
    from events.foo_bar.pop_bou.app import handler
    ...
</code></pre> <p>When I run pylint, I have this error:</p> <pre><code>E0611: No name 'foo_bar' in module 'events' (no-name-in-module)
</code></pre>
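A hedged sketch of one common fix (assuming the package root is `src`, as in the tree above): point pylint at the source root with an `init-hook`, so `events` resolves as a package. If the layout is intentional and resolved elsewhere, the check can instead be silenced.

```ini
; .pylintrc — sketch, adjust "src" to the actual source root
[MASTER]
init-hook='import sys; sys.path.append("src")'

[MESSAGES CONTROL]
; alternatively, silence the check if the import path is resolved at runtime:
; disable=no-name-in-module
```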
<python><pytest><pylint>
2023-07-04 19:40:37
0
11,934
Stéphane GRILLON
76,615,437
5,410,696
Psycopg inserting only two rows on data being passed via for loop
<p>I'm using a function to insert data to a PostGIS table using psycopg.</p> <pre><code>def shptoPosGIS(a,b,c):
    fieldRecord = &quot;&quot;&quot; insert into 'tablename' &quot;&quot;&quot;
    fieldValue = (a,b,c)
    db_cursor.execute(fieldRecord, fieldValue)
    db_connection.commit()

for i in range (20000):
    shptoPostGIS(data[&quot;a&quot;].iloc[i], data[&quot;b&quot;].iloc[i], data[&quot;c&quot;].iloc[i])
</code></pre> <p>My Python script is inserting only two rows rather than 20000 to the PostGIS table. I'm guessing that the problem is related to synchronization. My for loop may run faster than the db_cursor.execute() method. Could anyone give a direction on how to solve this?</p>
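For what it's worth, `cursor.execute()` is synchronous, so the loop cannot outrun it; a per-row commit is also far slower than one batched `executemany` followed by a single commit. Below is a hedged sketch of the batched pattern, using stdlib `sqlite3` as a stand-in since no live PostGIS is available here; with psycopg the placeholder style would be `%s` instead of `?`, and the table and column names are made up for illustration.

```python
import sqlite3

# Stand-in for the PostGIS connection; table and columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shapes (a TEXT, b TEXT, c TEXT)")

rows = [(f"a{i}", f"b{i}", f"c{i}") for i in range(20000)]

# One parameterized statement, one commit for the whole batch.
conn.executemany("INSERT INTO shapes (a, b, c) VALUES (?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM shapes").fetchone()[0]
print(count)
```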
<python><data-science><gis><postgis><psycopg2>
2023-07-04 19:32:01
1
335
HelpOverFlow
76,615,393
3,247,006
response.set_cookie() vs response.cookies[] in Django
<p>I could set the cookies with <a href="https://docs.djangoproject.com/en/4.2/ref/request-response/#django.http.HttpResponse.set_cookie" rel="nofollow noreferrer">response.set_cookie()</a> and <code>response.cookies[]</code> as shown below. *I use <strong>Django 4.2.1</strong>:</p> <pre class="lang-py prettyprint-override"><code># &quot;my_app1/views.py&quot;

from django.http import HttpResponse

def test(request):
    response = HttpResponse('Test')
    response.set_cookie('first_name', 'John')  # Here
    response.cookies['last_name'] = 'Smith'    # Here
    return response
</code></pre> <p><a href="https://i.sstatic.net/cwzMf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwzMf.png" alt="enter image description here" /></a></p> <p>Then, I could only delete <code>response.set_cookie()</code>'s cookie <code>first_name</code> rather than <code>response.cookies[]</code>'s cookie <code>last_name</code> with <a href="https://docs.djangoproject.com/en/4.2/ref/request-response/#django.http.HttpResponse.delete_cookie" rel="nofollow noreferrer">response.delete_cookie()</a> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;my_app1/views.py&quot;

from django.http import HttpResponse

def test(request):
    response = HttpResponse('Test')
    response.delete_cookie('first_name')  # Deleted
    response.delete_cookie('last_name')   # Undeleted
    return response
</code></pre> <p><a href="https://i.sstatic.net/XehkC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XehkC.png" alt="enter image description here" /></a></p> <p>So, what is the difference between <code>response.set_cookie()</code> and <code>response.cookies[]</code> in Django?</p>
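Under the hood `response.cookies` is a stdlib `SimpleCookie`, and (as I understand it) `set_cookie()` is a wrapper that also fills in morsel attributes such as `path`, while bare item assignment leaves them empty — which matters because `delete_cookie()` expires a cookie by matching `path`. A hedged, stdlib-only illustration of that difference:

```python
from http.cookies import SimpleCookie

cookies = SimpleCookie()

# Bare assignment, like response.cookies['last_name'] = 'Smith':
cookies['last_name'] = 'Smith'

# What set_cookie() roughly does in addition (simplified sketch):
cookies['first_name'] = 'John'
cookies['first_name']['path'] = '/'

print(cookies['first_name']['path'])        # '/'
print(repr(cookies['last_name']['path']))   # '' — no path recorded
```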
<python><django><cookies><django-views><django-cookies>
2023-07-04 19:22:44
1
42,516
Super Kai - Kazuya Ito
76,615,359
536,650
Sum columns of one dataframe based on another dataframe
<p>I have two dataframes that look like those:</p> <pre><code>df1 = pl.DataFrame(
    {
        &quot;Name&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;],
        &quot;Year&quot;: [2001, 2003, 2003, 2004]
    }
)

df2 = pl.DataFrame(
    {
        &quot;Name&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;],
        &quot;2001&quot;: [111, 112, 113, 114],
        &quot;2002&quot;: [221, 222, 223, 224],
        &quot;2003&quot;: [331, 332, 333, 334],
        &quot;2004&quot;: [441, 442, 443, 444]
    }
)
</code></pre> <p>I'd like to sum each year column of the second df (df2), taking in account only names whose corresponding year in df1 is the same year or later. Desired output:</p> <pre><code>┌──────┬──────┐
│ Year ┆ Sum  │
╞══════╪══════╡
│ 2001 ┆ 111  │
│ 2002 ┆ 221  │
│ 2003 ┆ 996  │  (= 331 + 332 + 333)
│ 2004 ┆ 1770 │  (= 441 + 442 + 443 + 444)
└──────┴──────┘
</code></pre> <p>I'm new to Polars (coming from Pandas), and I'm not sure how to do this. Any help will be appreciated.</p>
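Not a Polars answer as such, but since the asker is coming from Pandas, here is the same melt–join–filter–aggregate logic expressed in Pandas as a hedged reference point; the Polars version would use `unpivot`/`join`/`group_by` analogously.

```python
import pandas as pd

df1 = pd.DataFrame({"Name": ["A", "B", "C", "D"], "Year": [2001, 2003, 2003, 2004]})
df2 = pd.DataFrame({
    "Name": ["A", "B", "C", "D"],
    "2001": [111, 112, 113, 114],
    "2002": [221, 222, 223, 224],
    "2003": [331, 332, 333, 334],
    "2004": [441, 442, 443, 444],
})

# Wide -> long: one row per (Name, Year, Value)
long = df2.melt(id_vars="Name", var_name="Year", value_name="Value")
long["Year"] = long["Year"].astype(int)

# Attach each name's starting year from df1
merged = long.merge(df1, on="Name", suffixes=("", "_start"))

# Keep a name's value only from its df1 year onwards
kept = merged[merged["Year"] >= merged["Year_start"]]

result = (kept.groupby("Year", as_index=False)["Value"].sum()
              .rename(columns={"Value": "Sum"}))
print(result)
```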
<python><python-polars>
2023-07-04 19:16:10
2
1,576
kodkod
76,615,297
5,423,185
cv::solve with DECOMP_SVD using C++ has different results from Python and Java(Kotlin)
<p>I found an interesting issue with different results of SVD (Singular Value Decomposition) in my project. I need to find intersections by lines (4 points). I have 2 vertical lines and 2 horizontal lines.</p> <p>I wrote the following code for <strong>Python</strong>, and it works correctly:</p> <pre class="lang-py prettyprint-override"><code>a = numpy.array([
    [0.99773484, -0.06726905],
    [0.02908471, 0.9995769]
])
b = numpy.array([
    [180.0],
    [264.66666]
])

_, (x0, y0) = cv2.solve(a, b, flags=cv2.DECOMP_SVD)

# x0 = 197.87232204
# y0 = 259.02119276
</code></pre> <p>Also I wrote the same code using <strong>Kotlin</strong>, and it works correctly, too:</p> <pre class="lang-kotlin prettyprint-override"><code>val a = Mat.zeros(2, 2, CvType.CV_32F)
a.put(0, 0, 0.99773484)
a.put(0, 1, -0.06726905)
a.put(1, 0, 0.02908471)
a.put(1, 1, 0.9995769)

val b = Mat.zeros(2, 1, CvType.CV_32F)
b.put(0, 0, 180.0)
b.put(1, 0, 264.66666)

val linearFit = Mat()
Core.solve(a, b, linearFit, Core.DECOMP_SVD)

val x0 = linearFit.get(0, 0)[0]
val y0 = linearFit.get(1, 0)[0]

// x0 = 197.87232971191406
// y0 = 259.02117919921875
</code></pre> <p>But when I wrote this code using <strong>C++</strong>, I got different results:</p> <pre class="lang-c prettyprint-override"><code>cv::Mat a = cv::Mat::zeros(2, 2, CV_32F);
a.at&lt;float&gt;(0, 0) = 0.99773484;
a.at&lt;float&gt;(0, 1) = -0.06726905;
a.at&lt;float&gt;(1, 0) = 0.02908471;
a.at&lt;float&gt;(1, 1) = 0.9995769;

cv::Mat b = cv::Mat::zeros(2, 1, CV_32F);
b.at&lt;float&gt;(0, 0) = 180.0;
b.at&lt;float&gt;(1, 0) = 264.66666;

cv::Mat linearFit;
cv::solve(a, b, linearFit, cv::DECOMP_SVD);

double x0 = linearFit.at&lt;cv::Vec3b&gt;(0, 0)[0];
double y0 = linearFit.at&lt;cv::Vec3b&gt;(1, 0)[0];

// x0 = 81.000000
// y0 = 182.000000
</code></pre> <p>Is this a problem with my syntax or with opencv2.framework for iOS? I tested on opencv2.framework v.3.4.19 and v.4.1.0.</p> <p>P.S. The function to find intersections I got in this answer: <a href="https://stackoverflow.com/a/46572063">https://stackoverflow.com/a/46572063</a></p>
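As a sanity check, the Python/Kotlin numbers can be reproduced with a plain NumPy SVD-based solve, which suggests the C++ `cv::solve` itself is fine and the difference lies in how the result is read back: `linearFit.at&lt;cv::Vec3b&gt;(...)` reinterprets the float matrix as bytes, whereas the solution is stored as floats (`at&lt;float&gt;`). That is my reading, not a verified fix; the hedged cross-check:

```python
import numpy as np

# float64 cross-check of the 2x2 system from the question
a = np.array([[0.99773484, -0.06726905],
              [0.02908471, 0.9995769]])
b = np.array([[180.0], [264.66666]])

x = np.linalg.lstsq(a, b, rcond=None)[0]  # SVD-based least squares
x0, y0 = float(x[0, 0]), float(x[1, 0])
print(x0, y0)
```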
<python><c++><opencv><svd>
2023-07-04 19:03:46
2
673
Danil Valov
76,615,215
4,127,330
Strange behavior of simple Python CGI script
<p>For testing purposes, I created a very simple CGI script in Python 2.7, as shown below:</p> <pre><code>#!/usr/bin/env python

import os
import sys

version = sys.version.split('\n')[0]

print &quot;Content-Type: text/html&quot;
print

print &quot;&lt;html&gt;&quot;
print &quot;&lt;head&gt;&lt;title&gt;First Python HTTP Programming &lt;/title&gt;&lt;/head&gt;&quot;
print &quot;&lt;body&gt;&quot;
print &quot;&lt;h2&gt;Hello World!&lt;/h2&gt;&quot;
print &quot;&lt;h3&gt;Python Version: &lt;/h3&gt;&quot; + version
print &quot;&lt;/body&gt;&quot;
print &quot;&lt;/html&gt;&quot;
</code></pre> <p>In a local or remote host, it works just fine!</p> <p>I then added just a few lines to this script, in order to check if one specific package is available:</p> <pre><code>#!/usr/bin/env python

import sys

version = sys.version.split('\n')[0]

print &quot;Content-Type: text/html&quot;

print &quot;&lt;html&gt;&quot;
print &quot;&lt;head&gt;&lt;title&gt;First Python HTTP Programming &lt;/title&gt;&lt;/head&gt;&quot;
print &quot;&lt;body&gt;&quot;
print &quot;&lt;h3&gt;Python Version: &lt;/h3&gt;&quot; + version
try:
    from bs4 import BeautifulSoup
    print &quot;&lt;h3&gt; Already installed &lt;/h3&gt;&quot;
except ImportError as e:
    print &quot;&lt;h3&gt;Not installed&lt;/h3&gt;&quot;
print &quot;&lt;/body&gt;&quot;
print &quot;&lt;/html&gt;&quot;
</code></pre> <p>And then the script fails with a dreaded &quot;Internal server failure&quot; error!</p> <p>I cannot see anything obviously wrong with such an ordinary piece of code. Could someone out there provide me with an explanation?</p> <p>Please notice that I am using Python 2.7 because I have no privileged access to the server and that is the version they have available (but I do not believe the error is related to this or that version of Python).</p> <p>Thanks in advance!</p>
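One thing that stands out (and may well be the culprit): the second script drops the bare `print` after the `Content-Type` line, so the blank line that must separate CGI headers from the body is never emitted, and many servers answer malformed headers with a 500. A small Python 3 sketch of the rule:

```python
import io

def write_cgi_response(out, body):
    """Headers first, then a mandatory blank line, then the body."""
    out.write("Content-Type: text/html\n")
    out.write("\n")  # without this blank line, the server sees malformed headers
    out.write(body)

buf = io.StringIO()
write_cgi_response(buf, "<html><body>Hello</body></html>\n")
print(buf.getvalue().startswith("Content-Type: text/html\n\n"))
```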
<python><python-2.7><cgi>
2023-07-04 18:49:47
0
1,607
maurobio
76,615,093
12,113,958
Merge two Dataframe based on Column that contains name and surname but in different order
<p>I have two dataframes, each with a column containing a name and a surname, but in different orders: in the first one the order is &quot;name surname&quot;, and in the second one it is &quot;surname name&quot;.</p> <p>How can I do the merge while ignoring the word order in the <code>name</code> column?</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({'name':['Dominik Hull D',
                            'Lulu Castaneda',
                            'Zachary Pearce',
                            'Paul Lewis',
                            'Neave Potts',
                            'Ruth Callahan',
                            'Evelyn Haney W',
                            'Julie Mclaughlin',
                            'Kaleb Hardin',
                            'Kayleigh Little',
                            ]})

df2 = pd.DataFrame({'name':['Mclaughlin Julie',
                            'Hardin Kaleb',
                            'Hull D Dominik',
                            'Castaneda Lulu',
                            'Callahan Ruth',
                            'Haney W Evelyn',
                            'Pearce Zachary',
                            'Lewis Paul',
                            'Potts Neave',
                            'Little Kayleigh',
                            ],
                    'value':[0,1,2,3,4,5,6,7,8,9]})

new_df = pd.merge(df1, df2, on='name', how='left')
print(new_df)
</code></pre>
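A hedged sketch of one way to do this: build an order-insensitive join key by sorting the whitespace-separated tokens of each name, then merge on that key. This assumes the two names differ only in token order; if different people could share the same sorted tokens, the key would need extra disambiguation.

```python
import pandas as pd

df1 = pd.DataFrame({"name": ["Dominik Hull D", "Lulu Castaneda"]})
df2 = pd.DataFrame({"name": ["Hull D Dominik", "Castaneda Lulu"],
                    "value": [2, 3]})

def name_key(s):
    """Order-insensitive key: sort the whitespace-separated tokens."""
    return " ".join(sorted(s.split()))

merged = (
    df1.assign(key=df1["name"].map(name_key))
       .merge(df2.assign(key=df2["name"].map(name_key)).drop(columns="name"),
              on="key", how="left")
       .drop(columns="key")
)
print(merged)
```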
<python><pandas><dataframe>
2023-07-04 18:25:57
2
529
luka
76,615,030
15,363,250
How can I add 2 100% stacked bars (Y Axis) to each element in the X Axis of my chart in plotly?
<p>I managed to create the following chart in plotly using python. It's pretty much all I need, but the final thing is to make the whole chart 100% stacked, so that instead of 0.35 the max value on the Y axis would be 1 and all the bars would be evenly aligned. Is there any way to do that in plotly?</p> <p>Here is the code I used to generate the chart:</p> <pre><code>fig = go.Figure()

categories = grouped_values2['portfolio_category'].unique()
category_colors = [&quot;#0070C0&quot;, &quot;#FFC000&quot;, &quot;#AD1457&quot;, &quot;#00A651&quot;, &quot;#DB4437&quot;, &quot;#6E3159&quot;, &quot;#F4A000&quot;, &quot;#00788C&quot;, &quot;#5FB3C3&quot;]

for i, category in enumerate(categories):
    num1 = random.randint(0, 30)
    category_data = grouped_values2[grouped_values['portfolio_category'] == category]
    fig.add_trace(
        go.Bar(
            x=category_data['month'],
            y=category_data['importancia_realizado'],
            name=f'R - {category}',
            marker_color=category_colors[i],
            offsetgroup=0
        )
    )
    fig.add_trace(
        go.Bar(
            x=category_data['month'],
            y=category_data['importancia_referencia'],
            name=f'P - {category}',
            marker_color=category_colors[i],
            offsetgroup=1
        )
    )

fig.update_layout(
    barmode='group',
    title='IMPORTÂNCIA PROJETADA (P) vs IMPORTANCIA REALIZADA',
    plot_bgcolor=&quot;rgba(0, 0, 0, 0)&quot;,
    xaxis_title='Mês',
    yaxis_title='Importância'
)

fig.show()
</code></pre> <p><a href="https://i.sstatic.net/gkhqp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gkhqp.png" alt="enter image description here" /></a></p> <p>Here is the data:</p> <pre><code>month,month_number,portfolio_category,gmv_mtd,importancia_referencia,importancia_realizado,
0,Apr,4,CAT1,3242989.05,0.230,0.188
1,Apr,4,CAT2,3490670.33,0.190,0.202
2,Apr,4,CAT3,2970437.35,0.170,0.172
3,Apr,4,CAT4,6310704.34,0.160,0.365
4,Apr,4,CAT5,1046692.36,0.120,0.610
5,Apr,4,CAT6,216612.53,0.130,0.130
6,Jun,6,CAT1,4449799.95,0.210,0.274
7,Jun,6,CAT2,5209480.79,0.240,0.321
8,Jun,6,CAT3,2478664.66,0.160,0.153
9,Jun,6,CAT4,1318476.99,0.130,0.810
10,Jun,6,CAT5,2349817.17,0.130,0.145
11,Jun,6,CAT6,425692.50,0.130,0.260
12,May,5,CAT1,4057372.58,0.210,0.284
13,May,5,CAT2,3504235.05,0.250,0.245
14,May,5,CAT3,2863510.23,0.170,0.201
15,May,5,CAT4,900040.46,0.130,0.630
16,May,5,CAT5,2564349.59,0.130,0.180
17,May,5,CAT6,385128.35,0.120,0.270
</code></pre>
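A hedged sketch of the normalization step: dividing each value by its month's total makes every month sum to 1, and the resulting `share` column can then be fed to the existing `go.Bar` traces (with `barmode='stack'`) to get a 100% stacked look. Plotly itself isn't run here; only the groupby arithmetic is shown, on a few illustrative rows.

```python
import pandas as pd

df = pd.DataFrame({
    "month": ["Apr", "Apr", "Jun", "Jun", "May", "May"],
    "portfolio_category": ["CAT1", "CAT2", "CAT1", "CAT2", "CAT1", "CAT2"],
    "importancia_realizado": [0.188, 0.202, 0.274, 0.321, 0.284, 0.245],
})

# Each month's shares sum to 1.0
df["share"] = (df["importancia_realizado"]
               / df.groupby("month")["importancia_realizado"].transform("sum"))

month_totals = df.groupby("month")["share"].sum()
print(month_totals)
```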
<python><plotly><bar-chart>
2023-07-04 18:15:59
1
450
Marcos Dias
76,614,653
5,356,096
Indexing and searching over a nested JSON field in Redis Python
<p>I am trying to set an index to a nested field inside Redis to search over it easily, specifically a numeric field representing a timestamp, but I can't figure it out. The documentation is quite complicated and ever since RediSearch was merged with main Redis, I've been struggling to find any good examples.</p> <p>Here's my attempt:</p> <pre class="lang-py prettyprint-override"><code>import time

from redis import Redis
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.field import NumericField
from redis.commands.search.query import Query, NumericFilter


def main():
    r = None
    test_dict1 = {&quot;context&quot;: {&quot;test&quot;: {&quot;other&quot;: &quot;test&quot;},
                              &quot;messages&quot;: [{&quot;text&quot;: &quot;mytext&quot;, &quot;timestamp&quot;: str(time.time())}]}}
    test_dict2 = {&quot;context&quot;: {&quot;test&quot;: {&quot;other&quot;: &quot;test&quot;},
                              &quot;messages&quot;: [{&quot;text&quot;: &quot;mytext2&quot;, &quot;timestamp&quot;: str(time.time() + 10)}]}}
    try:
        r = Redis()
        r.json().set(&quot;uuid:4587-7d5f9-4545&quot;, &quot;$&quot;, test_dict1)
        r.json().set(&quot;uuid:4587-7d5f9-4546&quot;, &quot;$&quot;, test_dict2)
        r.ft('timestamp').create_index(fields=(NumericField(&quot;$.messages.timestamp&quot;)),
                                       definition=IndexDefinition(prefix=['uuid:'], index_type=IndexType.HASH))
        print(r.json().get(&quot;uuid:4587-7d5f9-4545&quot;, &quot;$.context.test.other&quot;))
        q = Query(&quot;*&quot;).add_filter(NumericFilter(field=&quot;$.messages.timestamp&quot;, minval=0, maxval=time.time()))
        print(r.ft('timestamp').search(q))
    except Exception as e:
        raise e
    finally:
        if r is not None:
            r.flushall()


if __name__ == &quot;__main__&quot;:
    main()
</code></pre> <p>That currently returns 0 results, but doesn't throw any errors.</p>
<python><python-3.x><redis><redis-py>
2023-07-04 17:02:46
2
1,665
Jack Avante
76,614,582
16,284,229
Where to find a more complete documentation for Selenium?
<p>For example, the <code>get_log</code> method on the driver instance below allows me to receive a collection of console log messages. I'm wondering why I couldn't find this in the official Selenium 4 documentation. The documentation seems to be very sparse and lacking in detail. Is there a better alternative than rummaging through the source code? I don't want to be looking at multiple chained function calls in the original source code. Would be a great help, thanks.</p> <pre><code>dc = DesiredCapabilities.CHROME
dc[&quot;goog:loggingPrefs&quot;] = {&quot;browser&quot;:&quot;INFO&quot;}

driver = webdriver.Chrome(service=service, options=options, desired_capabilities=dc)
driver.get(&quot;http://www.thepools.com&quot;)
time.sleep(5)

for entry in driver.get_log('browser'):
    print(entry)
</code></pre>
<python><selenium-webdriver><selenium-chromedriver>
2023-07-04 16:49:29
0
305
Tech Visionary
76,614,479
12,131,472
How to use pd.json_normalize to retrieve the data I need from 2 parts (I'm almost there already, only need last piece of data)
<p>I have this JSON list in Python:</p> <pre><code>[{'id': 'TD3$-FFA', 'shortCode': 'TD3$-FFA', 'dataSet': {'id': 'TD3C', 'shortCode': 'TD3C', 'shortDescription': 'Dirty Middle East Gulf to China', 'displayGroup': 'BDTI', 'datumUnit': 'Worldscale', 'datumPrecision': 2, 'data': [{'value': 56.67, 'date': '2023-06-30'}], 'apiIdentifier': 'RDSSYGSJBFEV9P2FLSCXGQC3510G2EGE'}, 'datumUnit': '$/mt', 'datumPrecision': 3, 'projectionStartOn': '2010-05-10T00:00:00', 'projectionEndOn': '2023-06-30T00:00:00', 'groupings': [{'date': '2023-06-30T00:00:00', 'groups': [{'periodType': 'm', 'projections': [{'identifier': 'TD3$BALMO', 'period': 'Jul 23', 'value': 14.4, 'validFrom': '2023-07-01', 'validTo': '2023-07-31', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$CURMON', 'period': 'Jul 23', 'value': 14.4, 'validFrom': '2023-07-01', 'validTo': '2023-07-31', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+1_M', 'period': 'Aug 23', 'value': 13.662, 'validFrom': '2023-08-01', 'validTo': '2023-08-31', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+2_M', 'period': 'Sep 23', 'value': 13.716, 'validFrom': '2023-09-01', 'validTo': '2023-09-29', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+3_M', 'period': 'Oct 23', 'value': 13.83, 'validFrom': '2023-10-01', 'validTo': '2023-10-31', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+4_M', 'period': 'Nov 23', 'value': 14.619, 'validFrom': '2023-11-01', 'validTo': '2023-11-30', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+5_M', 'period': 'Dec 23', 'value': 16.389, 'validFrom': '2023-12-01', 'validTo': '2023-12-22', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}]}, {'periodType': 'q', 'projections': [{'identifier': 'TD3$CURQ', 'period': 'Q3 23', 'value': 13.926, 'validFrom': '2023-07-01', 'validTo': 
'2023-09-29', 'nextRolloverDate': '2023-09-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+1Q', 'period': 'Q4 23', 'value': 14.946, 'validFrom': '2023-10-01', 'validTo': '2023-12-22', 'nextRolloverDate': '2023-09-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+2Q', 'period': 'Q1 24', 'value': 13.056, 'validFrom': '2024-01-01', 'validTo': '2024-03-29', 'nextRolloverDate': '2023-09-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+3Q', 'period': 'Q2 24', 'value': 11.818, 'validFrom': '2024-04-01', 'validTo': '2024-06-28', 'nextRolloverDate': '2023-09-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+4Q', 'period': 'Q3 24', 'value': 11.407, 'validFrom': '2024-07-01', 'validTo': '2024-09-30', 'nextRolloverDate': '2023-09-29', 'archiveDate': '2023-06-30'}]}, {'periodType': 'y', 'projections': [{'identifier': 'TD3$+1CAL', 'period': 'Cal 24', 'value': 12.693, 'validFrom': '2024-01-01', 'validTo': '2024-12-24', 'nextRolloverDate': '2023-12-22', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+2CAL', 'period': 'Cal 25', 'value': 12.057, 'validFrom': '2025-01-01', 'validTo': '2025-12-24', 'nextRolloverDate': '2023-12-22', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+3CAL', 'period': 'Cal 26', 'value': 11.756, 'validFrom': '2026-01-01', 'validTo': '2026-12-24', 'nextRolloverDate': '2023-12-22', 'archiveDate': '2023-06-30'}, {'identifier': 'TD3$+4CAL', 'period': 'Cal 27', 'value': 11.683, 'validFrom': '2027-01-01', 'validTo': '2027-12-24', 'nextRolloverDate': '2023-12-22', 'archiveDate': '2023-06-30'}]}]}], 'apiIdentifier': 'RPSVJJTJBXBCAF2FAG2PQAVYN4UGQ9LN'}, {'id': 'TD20$-FFA', 'shortCode': 'TD20$-FFA', 'dataSet': {'id': 'TD20', 'shortCode': 'TD20', 'shortDescription': 'Dirty West Africa to UK-Continent', 'displayGroup': 'BDTI', 'datumUnit': 'Worldscale', 'datumPrecision': 2, 'data': [{'value': 101.14, 'date': '2023-06-30'}], 'apiIdentifier': 'RDSU23QH0OX6DZZDC5BYZTQIZ9TXHUQR'}, 'datumUnit': '$/mt', 'datumPrecision': 3, 
'projectionStartOn': '2014-08-01T00:00:00', 'projectionEndOn': '2023-06-30T00:00:00', 'groupings': [{'date': '2023-06-30T00:00:00', 'groups': [{'periodType': 'm', 'projections': [{'identifier': 'TD20$BALMO', 'period': 'Jul 23', 'value': 19.093, 'validFrom': '2023-07-01', 'validTo': '2023-07-31', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$CURMON', 'period': 'Jul 23', 'value': 19.093, 'validFrom': '2023-07-01', 'validTo': '2023-07-31', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+1_M', 'period': 'Aug 23', 'value': 17.896, 'validFrom': '2023-08-01', 'validTo': '2023-08-31', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+2_M', 'period': 'Sep 23', 'value': 17.832, 'validFrom': '2023-09-01', 'validTo': '2023-09-29', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+3_M', 'period': 'Oct 23', 'value': 18.61, 'validFrom': '2023-10-01', 'validTo': '2023-10-31', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+4_M', 'period': 'Nov 23', 'value': 19.417, 'validFrom': '2023-11-01', 'validTo': '2023-11-30', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+5_M', 'period': 'Dec 23', 'value': 20.272, 'validFrom': '2023-12-01', 'validTo': '2023-12-22', 'nextRolloverDate': '2023-07-29', 'archiveDate': '2023-06-30'}]}, {'periodType': 'q', 'projections': [{'identifier': 'TD20$CURQ', 'period': 'Q3 23', 'value': 18.274, 'validFrom': '2023-07-01', 'validTo': '2023-09-29', 'nextRolloverDate': '2023-09-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+1Q', 'period': 'Q4 23', 'value': 19.433, 'validFrom': '2023-10-01', 'validTo': '2023-12-22', 'nextRolloverDate': '2023-09-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+2Q', 'period': 'Q1 24', 'value': 17.142, 'validFrom': '2024-01-01', 'validTo': '2024-03-29', 'nextRolloverDate': '2023-09-29', 
'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+3Q', 'period': 'Q2 24', 'value': 14.091, 'validFrom': '2024-04-01', 'validTo': '2024-06-28', 'nextRolloverDate': '2023-09-29', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+4Q', 'period': 'Q3 24', 'value': 12.478, 'validFrom': '2024-07-01', 'validTo': '2024-09-30', 'nextRolloverDate': '2023-09-29', 'archiveDate': '2023-06-30'}]}, {'periodType': 'y', 'projections': [{'identifier': 'TD20$+1CAL', 'period': 'Cal 24', 'value': 14.904, 'validFrom': '2024-01-01', 'validTo': '2024-12-24', 'nextRolloverDate': '2023-12-22', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+2CAL', 'period': 'Cal 25', 'value': 14.184, 'validFrom': '2025-01-01', 'validTo': '2025-12-24', 'nextRolloverDate': '2023-12-22', 'archiveDate': '2023-06-30'}, {'identifier': 'TD20$+3CAL', 'period': 'Cal 26', 'value': 13.831, 'validFrom': '2026-01-01', 'validTo': '2026-12-24', 'nextRolloverDate': '2023-12-22', 'archiveDate': '2023-06-30'}]}]}], 'apiIdentifier': 'RPSRTIFJYJVDT9TFWIYQMLXN2ZN7RRK1'}] </code></pre> <p>Now I use <code>df_usd_mt = pd.json_normalize(response_usd_mt, record_path=['groupings', 'groups', 'projections'], meta=['shortCode', 'datumUnit'])</code> to ALMOST get everything I need.</p> <p>my current dataframe looks like this</p> <pre><code> identifier period value ... archiveDate shortCode datumUnit 0 TD3$BALMO Jul 23 14.400 ... 2023-06-30 TD3$-FFA $/mt 1 TD3$CURMON Jul 23 14.400 ... 2023-06-30 TD3$-FFA $/mt 2 TD3$+1_M Aug 23 13.662 ... 2023-06-30 TD3$-FFA $/mt 3 TD3$+2_M Sep 23 13.716 ... 2023-06-30 TD3$-FFA $/mt 4 TD3$+3_M Oct 23 13.830 ... 2023-06-30 TD3$-FFA $/mt 5 TD3$+4_M Nov 23 14.619 ... 2023-06-30 TD3$-FFA $/mt 6 TD3$+5_M Dec 23 16.389 ... 2023-06-30 TD3$-FFA $/mt 7 TD3$CURQ Q3 23 13.926 ... 2023-06-30 TD3$-FFA $/mt 8 TD3$+1Q Q4 23 14.946 ... 2023-06-30 TD3$-FFA $/mt 9 TD3$+2Q Q1 24 13.056 ... 2023-06-30 TD3$-FFA $/mt 10 TD3$+3Q Q2 24 11.818 ... 2023-06-30 TD3$-FFA $/mt 11 TD3$+4Q Q3 24 11.407 ... 
2023-06-30 TD3$-FFA $/mt
12 TD3$+1CAL Cal 24 12.693 ... 2023-06-30 TD3$-FFA $/mt
13 TD3$+2CAL Cal 25 12.057 ... 2023-06-30 TD3$-FFA $/mt
14 TD3$+3CAL Cal 26 11.756 ... 2023-06-30 TD3$-FFA $/mt
15 TD3$+4CAL Cal 27 11.683 ... 2023-06-30 TD3$-FFA $/mt
16 TD20$BALMO Jul 23 19.093 ... 2023-06-30 TD20$-FFA $/mt
17 TD20$CURMON Jul 23 19.093 ... 2023-06-30 TD20$-FFA $/mt
18 TD20$+1_M Aug 23 17.896 ... 2023-06-30 TD20$-FFA $/mt
19 TD20$+2_M Sep 23 17.832 ... 2023-06-30 TD20$-FFA $/mt
20 TD20$+3_M Oct 23 18.610 ... 2023-06-30 TD20$-FFA $/mt
21 TD20$+4_M Nov 23 19.417 ... 2023-06-30 TD20$-FFA $/mt
22 TD20$+5_M Dec 23 20.272 ... 2023-06-30 TD20$-FFA $/mt
23 TD20$CURQ Q3 23 18.274 ... 2023-06-30 TD20$-FFA $/mt
24 TD20$+1Q Q4 23 19.433 ... 2023-06-30 TD20$-FFA $/mt
25 TD20$+2Q Q1 24 17.142 ... 2023-06-30 TD20$-FFA $/mt
26 TD20$+3Q Q2 24 14.091 ... 2023-06-30 TD20$-FFA $/mt
27 TD20$+4Q Q3 24 12.478 ... 2023-06-30 TD20$-FFA $/mt
28 TD20$+1CAL Cal 24 14.904 ... 2023-06-30 TD20$-FFA $/mt
29 TD20$+2CAL Cal 25 14.184 ... 2023-06-30 TD20$-FFA $/mt
30 TD20$+3CAL Cal 26 13.831 ... 2023-06-30 TD20$-FFA $/mt
</code></pre> <p>I only wish to have one additional column with the information under 'dataSet'&gt;&gt;'id'. For example, for the first 16 rows I need 'TD3C' as the value of this additional column; this info can be seen near the top of the JSON list (to be precise, in the third field of each record). For the following rows I need the value &quot;TD20&quot;.</p> <p>I really can't figure it out. I asked ChatGPT, and it gave me this code, which looks correct but just generates a traceback:</p> <pre><code>df_usd_mt = pd.json_normalize(response_usd_mt,
                              record_path=['groupings', 'groups', 'projections'],
                              meta=['shortCode', 'datumUnit', ['dataSet', 'id']])
</code></pre>
<python><json><pandas><json-normalize>
2023-07-04 16:30:34
1
447
neutralname
76,614,342
2,301,970
Running pytest from pycharm
<p>I am learning to write tests and I am having issues with the PyCharm configuration.</p> <p>First, I had <a href="https://stackoverflow.com/questions/59043307/python-pytest-hangs-for-instance-pytest-version-simply-hangs">this issue</a> in Windows 11 where the pytest command would get stuck. The solution was to run a pytest command as an administrator once first.</p> <p>Now, if I click on the green run symbols next to my test functions, they are launched but get stuck at the collecting phase:</p> <pre><code>============================= test session starts =============================
collecting ...
</code></pre> <p>I think this is because pytest cannot find my project folder. If I run pytest within the pycharm terminal:</p> <blockquote> <p>python -m pytest ./tests</p> </blockquote> <p>I get this error:</p> <pre><code>ModuleNotFoundError: No module named 'myproject'
</code></pre> <p>This is my folder structure:</p> <pre><code>myproject/
    docs/
    examples/
    src/
        myproject/
            functionsA.py
            functionsB.py
    tests/
        test_functionsA.py
</code></pre> <p>I wonder if anyone could please help me to avoid these issues and properly run pytest from the pycharm sidebar launch icons and/or pycharm terminal.</p>
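A hedged sketch of the usual fix for a `src` layout: tell pytest where the importable package lives. The `pythonpath` ini option needs pytest ≥ 7; on older versions, an editable install (`pip install -e .`) achieves the same effect.

```ini
; pytest.ini — at the project root, next to src/ and tests/
[pytest]
pythonpath = src
testpaths = tests
```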
<python><pycharm><pytest>
2023-07-04 16:08:08
0
693
Delosari
76,614,300
13,058,252
Python logging format: Add a colon right after `levelname` and then pad it
<p>I'm trying to achieve the following format:</p> <pre><code>INFO: Application shutdown complete.
</code></pre> <p>My logger code currently looks like this:</p> <pre><code>import logging

logging.basicConfig(
    level=logging.INFO,
    format=&quot;%(levelname)-9s %(message)s [%(name)s]&quot;,
)
logger = logging.getLogger()
</code></pre> <p>My logged messages look like this:</p> <pre><code>INFO Application shutdown complete.
</code></pre> <p>How can I add a colon right after the <code>levelname</code> before the padding?</p>
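One hedged approach: `%`-style padding like `%(levelname)-9s` can only pad the level name itself, so a small `Formatter` subclass can build a combined `levelname + ':'` token and pad that instead. A self-contained sketch (the `levelprefix` attribute name is made up for this example):

```python
import logging

class ColonFormatter(logging.Formatter):
    """Pads levelname *including* the trailing colon to a fixed width."""
    def format(self, record):
        record.levelprefix = f"{record.levelname + ':':<9}"
        return super().format(record)

formatter = ColonFormatter("%(levelprefix)s%(message)s")

# Demonstrated on a hand-built record; a real setup would attach the
# formatter to a handler via handler.setFormatter(formatter).
rec = logging.LogRecord("app", logging.INFO, __file__, 1,
                        "Application shutdown complete.", None, None)
formatted = formatter.format(rec)
print(formatted)
```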
<python><logging>
2023-07-04 16:00:22
0
320
duplxey
76,614,167
5,513,532
Bubble up error from asyncio "create_task" in Python
<p>I have a long running task that may raise an exception randomly. This task is an infinite loop: it's not expected to finish or to be awaited. It is tied to an instance of a specific class, and I cancel the task when the instance is garbage collected.</p> <p>I'd like to raise potential errors &quot;outside&quot; of this task but have failed to do so.</p> <pre class="lang-py prettyprint-override"><code>def start_worker(self):
    self.worker = asyncio.create_task(
        self._worker()
    )

    def should_never_be_done(t: asyncio.Task):
        if t.exception() is not None:
            logger.exception(t.exception())
            raise t.exception()
        raise Exception(
            &quot;_worker task is done, yet it should never finish&quot;)

    self.worker.add_done_callback(should_never_be_done)
</code></pre> <p>I see the exception in my log, but it doesn't crash the whole app, which is what I want in this case instead of a silent failure leading to infinite loading states in my UI.</p> <p><a href="https://stackoverflow.com/questions/55708097/python-asyncio-exceptions-raised-from-loop-create-task">This related question</a> shows only how to log the error but does not aim at raising it, hence the separate question.</p>
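One hedged pattern: exceptions raised inside a done callback only reach the loop's exception handler, so instead of (or in addition to) the callback, a supervisor can await the worker together with the rest of the app; the worker's exception then propagates out of `asyncio.run()` and actually crashes the process. A minimal self-contained sketch (names are illustrative, not the asker's code):

```python
import asyncio

async def worker():
    # Stand-in for the infinite worker loop; raises at some point.
    await asyncio.sleep(0.01)
    raise RuntimeError("worker crashed")

async def rest_of_app():
    await asyncio.sleep(10)

async def main():
    # Awaiting gather() re-raises the worker's exception as soon as it
    # dies, so the failure bubbles up instead of being swallowed.
    await asyncio.gather(worker(), rest_of_app())

def run():
    try:
        asyncio.run(main())
    except RuntimeError as exc:
        return f"app crashed: {exc}"
    return "clean exit"

print(run())
```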
<python><python-asyncio>
2023-07-04 15:41:54
1
5,136
Eric Burel
76,614,066
18,756,733
Replace \ character with blank in pandas
<p>I want to replace the \ character with an empty string in a pandas series</p> <pre><code>df['Review'].apply(lambda x:x.replace('\','') if isinstance(x,str) else x)[0]
</code></pre> <p>But this code is not working. What should I change in the code?</p>
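For what it's worth, the snippet fails because `'\'` escapes the closing quote, so the string literal never terminates; the backslash itself has to be escaped as `'\\'`. A small sketch (the sample data is made up):

```python
import pandas as pd

s = pd.Series(["good\\review", "ok", None])

# '\\' in source code is a single backslash character at runtime
cleaned = s.apply(lambda x: x.replace('\\', '') if isinstance(x, str) else x)

# A vectorized alternative: s.str.replace('\\', '', regex=False)
print(cleaned[0])
```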
<python><pandas>
2023-07-04 15:29:08
1
426
beridzeg45
76,613,917
1,942,868
Retrieve function for viewset without model
<p>I am using an object without an RDS entry.</p> <p><code>Model</code></p> <pre><code>class MyDummy(object):
    def __init__(self, **kwargs):
        for field in ('id', 'detail'):
            setattr(self, field, kwargs.get(field, None))

mydummys = {
    1: MyDummy(id=1, detail={&quot;test&quot;:&quot;test&quot;}),
    2: MyDummy(id=2, detail={&quot;test2&quot;:&quot;test2&quot;})
}
</code></pre> <p><code>Serializer</code></p> <pre><code>class MyDummySerializer(serializers.Serializer):
    id = serializers.IntegerField(read_only=True)
    detail = serializers.JSONField(allow_null=True)

    def create(self, validated_data):
        return MyDummy(id=None, **validated_data)

    def update(self, instance, validated_data):
        for field, value in validated_data.items():
            setattr(instance, field, value)
        return instance
</code></pre> <p><code>list</code> function works well for now</p> <pre><code>class MyDummyViewSet(viewsets.ViewSet):
    serializer_class = s.MyDummySerializer

    def list(self, request):
        serializer = s.MyDummySerializer(
            instance=m.mydummys.values(), many=True)
        return Response(serializer.data)
</code></pre> <p>However now I want to make <code>retrieve</code> work for a URL like <code>api/mydummy/1</code></p> <pre><code>class MyDummyViewSet(viewsets.ViewSet):
    serializer_class = s.MyDummySerializer

    def list(self, request):
        serializer = s.MyDummySerializer(
            instance=m.mydummys.values(), many=True)
        return Response(serializer.data)

    def retrieve(self, request, pk=None):
        serializer = s.MyDummySerializer(
            instance=m.mydummys.values(), many=False)
        return Response(serializer.data)
</code></pre> <p>It returns,</p> <pre><code>{
    &quot;detail&quot;: null
}
</code></pre> <p>I think I should narrow by id in retrieve.</p> <p>How can I make it?</p>
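The narrowing itself is just a dictionary lookup on `pk` (which DRF passes in as a string). Stripped of the DRF plumbing, a hedged sketch of the retrieve logic — in the real viewset the found object would be handed to `MyDummySerializer(instance=obj, many=False)` and a miss would raise `rest_framework.exceptions.NotFound`:

```python
# Plain-Python stand-in for m.mydummys from the question.
mydummys = {
    1: {"id": 1, "detail": {"test": "test"}},
    2: {"id": 2, "detail": {"test2": "test2"}},
}

def retrieve(pk):
    """pk arrives from the URL as a string, so convert before the lookup."""
    obj = mydummys.get(int(pk))
    if obj is None:
        return (404, {"detail": "Not found."})
    return (200, obj)

print(retrieve("1"))
```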
<python><django>
2023-07-04 15:08:23
1
12,599
whitebear
76,613,877
3,792,852
How to send commands to a UART CLI using Python and parse the returned text
<p>I have a LiPo charger that is running on FreeRTOS and exposes a small command line interface via UART.</p> <p>The charger's firmware and manual are located <a href="https://github.com/AlexKlimaj/LiPow-Firmware" rel="nofollow noreferrer">here</a>. Note that I am not the developer of this device and that I have little experience with the hardware side of UART and FreeRTOS. Feel free to ask all sorts of complex questions, but I probably won't be able to answer them.</p> <p>I'm using a Raspberry Pi 3 with Raspbian 10 lite. When I connect the charger to the RPi via GPIO 14 and 15 and run a serial console application, using e.g.</p> <p><code>cu -l /dev/ttyS0 -s 921600</code></p> <p>, I can properly connect to the device's CLI</p> <pre><code>pi@dronebox:~ $ cu -l /dev/ttyS0 -s 921600
Connected.
</code></pre> <p>and execute queries. In the following, I execute <code>help</code> and <code>stats</code></p> <pre><code>&gt;help
help: Lists all the registered commands
stats: Displays a table showing the system stats
cal: Calibrates the ADC based on a known input voltage. Expects one argument as a float in milivolts. Connect input voltage to cells 1-4 and the XT60 battery output.
write_otp: Writes the calibration scalars to OTP flash. Will fail if scalars not set or out of range. Must run cal first with known accurate voltage. Can run up to ~32 times.
task-stats: Displays a table showing the state of each FreeRTOS task
run-time-stats: Displays a table showing how much processing time each FreeRTOS task has used
[Press ENTER to execute the previous command again]
&gt;stats
Variable                     Value
************************************************
Battery Voltage MCU(V)       0.000
Battery Voltage Reg (V)      2.880
Charging Current (A)         0.000
Charging Power (W)           0.000
Cell One Voltage (V)         0.000
Cell Two Voltage (V)         0.000
Cell Three Voltage (V)       0.000
Cell Four Voltage (V)        0.000
2 Series Voltage (V)         0.000
3 Series Voltage (V)         0.000
4 Series Voltage (V)         0.000
MCU Temperature (C)          29
VDDa (V)                     3.291
XT60 Connected               0
Balance Connection State     0
Number of Cells              0
Battery Requires Charging    0
Balancing State/Bitmask      0
Regulator Connection State   1
Charging State               0
Max Charge Current           0.000
Vbus Voltage (V)             5.056
Input Current (A)            0.000
Input Power (W)              0.000
Efficiency (OutputW/InputW)  nan
Battery Error State          0
[Press ENTER to execute the previous command again]
&gt;
</code></pre> <p>This tells me that the hardware side seems to be in order.</p> <p>Now, I'd like to integrate this charger into a project and constantly monitor the charging parameters, using the <code>stats</code> command shown above. For testing, I wrote a Python script, using pySerial, that is supposed to connect to the serial interface, issue the <code>stats</code> command and output the device's reply to stdout.</p> <pre><code>import serial
from time import sleep

ser = serial.Serial(&quot;/dev/ttyS0&quot;, baudrate=921600,
                    parity=serial.PARITY_NONE,
                    stopbits=serial.STOPBITS_ONE,
                    bytesize=8, timeout=1)

print(&quot;sending...&quot;)
ser.write('help\n'.encode())

print('receiving...')
received_data = ser.read()    # read serial port
sleep(0.03)
data_left = ser.inWaiting()   # check for remaining bytes
received_data += ser.read(data_left)
print(received_data.decode())
</code></pre> <p>However, when I run this script, I get the following output:</p> <pre><code>sending...
receiving...
e
Command not recognized. Enter 'help' to view a list of available commands.
[Press ENTER to execute the previous command again]
&gt;
Command not recognized. Enter 'help' to view a list of available commands.
[Press ENTER to execute the previous command again]
&gt;
Command not recognized. Enter 'help' to view a list of available commands.
[Press ENTER to execute the previous command again]
&gt;
</code></pre> <p>I would expect it to return the output for the <code>stats</code> command as shown above. It's also not very helpful that the interface doesn't echo the command it didn't recognize, so I can only guess that somehow my command didn't correctly arrive.</p> <p>I've also tried</p> <ul> <li>sending <code>\r</code> and <code>\r\n</code> at the end of my command with the same result.</li> <li>using <code>.encode('utf-8')</code> instead of <code>.encode()</code>.</li> <li>using <code>b'stats'</code> instead of <code>.encode()</code>.</li> </ul>
<python><uart><freertos>
2023-07-04 15:01:32
0
355
user3792852
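Whatever fixes the write side of the UART question above (reading until the `>` prompt instead of a fixed `sleep` is generally more robust than timing), the returned `stats` text still has to be parsed. A sketch of a tolerant parser for that table (the sample below is a trimmed, hypothetical copy of the device output):

```python
def parse_stats(text):
    """Parse the charger's `stats` table into {variable_name: float}.

    Assumes each data row ends in one numeric token; the header row,
    the '****' rule and the '>' prompt are skipped.
    """
    stats = {}
    for line in text.splitlines():
        parts = line.strip().rsplit(None, 1)  # split off the last token
        if len(parts) != 2:
            continue
        name, raw = parts
        try:
            stats[name] = float(raw)
        except ValueError:
            continue  # not a numeric row
    return stats

# Trimmed, hypothetical copy of the device output.
sample = """Variable                     Value
************************************
Battery Voltage MCU(V)       0.000
Vbus Voltage (V)             5.056
MCU Temperature (C)          29
>"""
print(parse_stats(sample)["Vbus Voltage (V)"])  # 5.056
```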
76,613,856
3,187,106
failing on OSError libgssapi_krb5.so: cannot open shared object file while installing using pip python 3.11
<p>I'm trying to install the requests-kerberos package with <code>pip install requests-kerberos</code>, but it fails on <code>OSError: ...:/libgssapi_krb5.so: cannot open shared object file: No such file or directory</code></p> <p>I have libgssapi_krb5.so in my ../python/lib/. I have tried to add ../python/lib/ to my LD_LIBRARY_PATH, but with no success.</p> <p>I'm using Python version 3.11.1</p> <p>What can be the issue? And why can the library not be found, although LD_LIBRARY_PATH includes its directory?</p> <p>Full error:</p> <pre><code>Installing build dependencies ... done Running command Getting requirements to build wheel Traceback (most recent call last): File &quot;my_area/python_tst/my_py/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;my_area/python_tst/my_py/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;my_area/python_tst/my_py/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-r4fyocus/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 341, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-r4fyocus/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 323, in _get_build_requires self.run_setup() File &quot;/tmp/pip-build-env-r4fyocus/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 338, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 206, in 
__init__ self._handle = _dlopen(self._name, mode) ^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: python3/3.11.1/lib64:python3/3.11.1/lib:python3/3.11.1/lib64:python3/3.11.1/lib:python3/3.11.1/lib:python3/3.11.1/lib64:/libgssapi_krb5.so: cannot open shared object file: No such file or directory error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. full command: my_area/python_tst/my_py/bin/python3 my_area/python_tst/my_py/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py get_requires_for_build_wheel /tmp/tmpcq6mx3_f cwd: /tmp/pip-install-l9aynml0/gssapi_40233975a0f24def9d5db604f4e4b0bd Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. </code></pre> <p>Thanks in advance</p>
<python><pip><python-3.11>
2023-07-04 14:57:07
1
426
Saeed isa
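A detail worth noting in the traceback above: the path glued onto `/libgssapi_krb5.so` looks like an entire colon-separated search path (several `lib`/`lib64` entries), not a single directory, as if a path list were passed where one shared-object path is expected. Checking what the environment actually contains and whether the loader can resolve the library by name is a reasonable first step; a stdlib-only sketch (the library name is taken from the error message):

```python
import os
from ctypes.util import find_library

# Where the loader is told to look. The error message suggests the whole
# LD_LIBRARY_PATH value is being concatenated onto the library name, so
# inspect it for stray separators or missing entries.
ld_path = os.environ.get("LD_LIBRARY_PATH", "")
print("LD_LIBRARY_PATH entries:", [p for p in ld_path.split(":") if p])

# None means the dynamic loader cannot resolve the library by name.
print("resolved:", find_library("gssapi_krb5"))
```

If `find_library` returns `None` even with the directory on `LD_LIBRARY_PATH`, running `ldconfig -p | grep gssapi` (or installing the system `libkrb5-dev`/`krb5-libs` package) is the usual next step.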
76,613,677
2,060,596
Python: module name conflicts with upstream module name
<p>I have a Python project which has a module (folder with <code>__init__.py</code>) which is named <code>gitlab</code>. I also use the python-gitlab library in my project.</p> <p>There seems to be a naming conflict:</p> <p>In my gitlab module there is an action.py with the following content:</p> <pre><code>import gitlab # this should be import python-gitlab # [...] _gl = gitlab.Gitlab(url=gitlab_url, private_token=os.getenv('GIT_PASSWORD')) </code></pre> <p>This code does not work. The error is <code>NameError: name 'Gitlab' is not defined</code></p> <p>When I rename my module to <code>mygitlab</code> everything is fine.</p> <p>So my question is: How can I import the &quot;upstream&quot; gitlab explicitly, as obviously the import statement the way I use it imports itself (?) and I have no <code>Gitlab</code> defined in my code.</p>
<python>
2023-07-04 14:33:23
1
6,062
Dakkar
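A quick way to confirm this kind of shadowing is to check which file a module name actually resolves to. The sketch below uses the stdlib `json` module purely as a stand-in (since `python-gitlab` may not be installed here): with a local `./gitlab/__init__.py` on `sys.path`, `whereis("gitlab")` would point into the project instead of `site-packages`. The usual fixes are renaming the local package, as the asker did, or arranging `sys.path` so the package's parent directory, not the package itself, is on it.

```python
import importlib
import sys

def whereis(modname):
    """Return the file a module name resolves to; handy for spotting a
    local package shadowing an installed one of the same name."""
    mod = importlib.import_module(modname)
    return getattr(mod, "__file__", "<built-in>")

# Stand-in demonstration with a stdlib package.
print(whereis("json"))
print(sys.path[0])  # the first sys.path entry wins the import race
```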
76,613,672
1,961,574
Python: fastest way of checking if there are more than x files in a folder
<p>I am looking for a very rapid way to check whether a folder contains more than 2 files.</p> <p>I worry that <code>len(os.listdir('/path/')) &gt; 2</code> may become very slow if there are a lot of files in <strong>/path/</strong>, especially since this function will be called frequently by multiple processes at a time.</p>
<python>
2023-07-04 14:33:11
6
2,712
bluppfisk
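For the question above: `os.listdir` builds the whole list, but `os.scandir` yields entries lazily, so the check can stop after `n + 1` entries regardless of how large the directory is. A sketch (this counts entries of any type; add an `entry.is_file()` filter if only files should count):

```python
import itertools
import os
import tempfile

def has_more_than(path, n):
    """True if `path` contains more than n entries, reading at most n + 1."""
    with os.scandir(path) as it:
        return len(list(itertools.islice(it, n + 1))) > n

# Tiny demo in a temporary directory with 3 files.
with tempfile.TemporaryDirectory() as d:
    for name in ("a", "b", "c"):
        open(os.path.join(d, name), "w").close()
    print(has_more_than(d, 2))  # True
    print(has_more_than(d, 3))  # False
```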
76,613,542
4,451,315
Make a categorical column which has categories ['a', 'b', 'c'] in Polars
<p>How do I make a Categorical column which has:</p> <ul> <li>elements: <code>['a', 'b', 'a', 'a']</code></li> <li>categories <code>['a', 'b', 'c']</code></li> </ul> <p>in polars?</p> <p>In pandas, I would do:</p> <pre class="lang-py prettyprint-override"><code>In [31]: pd.Series(pd.Categorical(['a', 'b', 'a', 'a'], categories=['a', 'b', 'c'])) Out[31]: 0 a 1 b 2 a 3 a dtype: category Categories (3, object): ['a', 'b', 'c'] </code></pre> <p>I have no idea how to do this in polars, the docs for <code>Categorical</code> look completely empty: <a href="https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.Categorical.html" rel="nofollow noreferrer">https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.Categorical.html</a></p>
<python><dataframe><python-polars><categorical-data>
2023-07-04 14:16:05
3
11,062
ignoring_gravity
76,613,438
10,309,712
What is responsible for this TypeError: DataUndersampler.transform() missing 1 required positional argument: 'y'?
<p>This is a custom support-vector-based data undersampler, from an answer to my previous <a href="https://stackoverflow.com/a/76396909/10309712">question</a>.</p> <p>The main idea is to undersample the majority class in an informed way, by fitting an SVC to the data, finding the support vectors, and then undersampling the majority class based on the distances to these support vectors.</p> <p>Code:</p> <pre class="lang-py prettyprint-override"><code>from sklearn.base import BaseEstimator, TransformerMixin from sklearn.utils import resample from sklearn.svm import SVC import numpy as np from sklearn.multiclass import OneVsOneClassifier from imblearn.pipeline import Pipeline from sklearn.ensemble import RandomForestClassifier class DataUndersampler(BaseEstimator, TransformerMixin): def __init__(self, random_state=None): self.random_state = random_state self.svc = SVC(kernel='linear') def fit(self, X, y): # Fit SVC to data self.svc.fit(X, y) return self def transform(self, X, y): # Get support vectors support_vectors = self.svc.support_vectors_ # Get indices of support vectors support_vector_indices = self.svc.support_ # Separate majority and minority classes majority_class = y.value_counts().idxmax() minority_class = y.value_counts().idxmin() X_majority = X[y == majority_class] y_majority = y[y == majority_class] X_minority = X[y == minority_class] y_minority = y[y == minority_class] # Calculate distances of majority class samples to nearest support vector distances = np.min(np.linalg.norm(X_majority.values[:, np.newaxis] - support_vectors, axis=2), axis=1) # Sort the majority class samples by distance and take only as many as there are in minority class sorted_indices = np.argsort(distances) indices_to_keep = sorted_indices[:len(y_minority)] # Combine the undersampled majority class with the minority class X_resampled = pd.concat([X_majority.iloc[indices_to_keep], X_minority]) y_resampled = pd.concat([y_majority.iloc[indices_to_keep], y_minority]) return X_resampled, y_resampled 
</code></pre> <p>MWE:</p> <pre class="lang-py prettyprint-override"><code>from sklearn.datasets import make_classification X, y = make_classification(n_samples=10_000, n_classes=5, weights=[22.6, 3.7, 16.4, 51.9], n_informative=4) rf_clf = model = RandomForestClassifier() resampler = DataUndersampler(random_state=234) pipeline = Pipeline([('sampler', resampler), ('clf', rf_clf)]) classifier = OneVsOneClassifier(estimator=pipeline) classifier.fit(X, y) </code></pre> <p>Produces the error:</p> <pre class="lang-py prettyprint-override"><code>----&gt; 7 classifier.fit(X, y) 18 frames /usr/local/lib/python3.10/dist-packages/sklearn/utils/_set_output.py in wrapped(self, X, *args, **kwargs) 138 @wraps(f) 139 def wrapped(self, X, *args, **kwargs): --&gt; 140 data_to_wrap = f(self, X, *args, **kwargs) 141 if isinstance(data_to_wrap, tuple): 142 # only wrap the first output for cross decomposition TypeError: DataUndersampler.transform() missing 1 required positional argument: 'y' </code></pre>
<python><machine-learning><scikit-learn><classification>
2023-07-04 14:03:04
2
4,093
arilwan
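The error above comes from an API mismatch: sklearn wires `transform(X)` with a single positional argument, while imblearn pipelines call `fit_resample(X, y)` on sampler steps, so a resampler that changes `y` must expose `fit_resample`, not `fit`/`transform`. A minimal sketch of that API shape (numpy only; the SVM-distance ranking from the question is replaced by a trivial stand-in):

```python
import numpy as np

class DataUndersampler:
    """Sampler-style API: imblearn pipelines call fit_resample(X, y),
    not fit/transform, so resampling y is allowed here."""

    def fit_resample(self, X, y):
        X = np.asarray(X)
        y = np.asarray(y)
        classes, counts = np.unique(y, return_counts=True)
        n_min = counts.min()
        keep = []
        for cls in classes:
            idx = np.flatnonzero(y == cls)
            # Stand-in for the SVM-distance ranking in the question:
            # simply keep the first n_min samples per class.
            keep.extend(idx[:n_min])
        keep = np.sort(keep)
        return X[keep], y[keep]

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 7 + [1] * 3)
Xr, yr = DataUndersampler().fit_resample(X, y)
print(yr.tolist())  # [0, 0, 0, 1, 1, 1]
```

Subclassing `imblearn.base.BaseSampler` (and implementing `_fit_resample`) would give the same behavior plus input validation, but the essential fix is the method name and signature.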
76,613,437
7,657,180
Disable ads by adding the adblock extension (Selenium, Python)
<p>I am trying the following code that is supposed to add the extension <code>adblock</code> to the driver</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from webdriver_manager.chrome import ChromeDriverManager def navigate_to_url(url): options = Options() options.add_argument('start-maximized') options.add_extension('adblock.crx') options.add_experimental_option('detach', True) options.add_experimental_option('excludeSwitches', ['enable-logging']) driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options) driver.get(url) return driver if __name__ == '__main__': driver = navigate_to_url('https://www.google.com') print(driver.current_url) </code></pre> <p>The code is working and the extension is already there, but there is a new tab opened by the adblock extension. How can I prevent such a page from being opened automatically?</p>
<python><selenium-webdriver><adblock>
2023-07-04 14:02:58
1
9,608
YasserKhalil
76,613,406
14,313,852
Manim not installing on Google Colab
<p>I did <code>pip install manim</code> and got an error:</p> <pre><code> × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. </code></pre> <p>Then I tried <code>!pip install manimce</code> and got an exception:</p> <pre><code>Exception: You have installed Manim from `manimce` PyPI package which is deprecated and no longer updated. Please uninstall `manimce` and install Manim from `manim` PyPI package. </code></pre> <p>Plz help...</p>
<python><google-colaboratory><manim>
2023-07-04 13:58:49
2
647
Shub
76,613,183
1,991,502
In python, should attributes only be accessed by methods?
<p>Here are two examples of classes that handle their attribute differently.</p> <pre><code>class MyClassA(): def __init__(self): self.my_attribute = 1 class MyClassB(): def __init__(self): self._my_attribute = 1 def get_my_attribute(self): return self._my_attribute def set_my_attribute(self, value): self._my_attribute = value </code></pre> <p>Is there a python consensus on which is best practice?</p>
<python><attributes>
2023-07-04 13:32:52
2
749
DJames
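The usual Python consensus for the question above: start with plain attributes and, only when validation or computation becomes necessary, switch to `@property`. Call sites keep using attribute syntax either way, so explicit `get_`/`set_` methods buy nothing in Python. A sketch of both stages:

```python
class MyClassA:
    def __init__(self):
        self.my_attribute = 1  # plain attribute: fine until logic is needed

class MyClassWithValidation:
    def __init__(self):
        self._my_attribute = 1

    @property
    def my_attribute(self):          # reads stay `obj.my_attribute`
        return self._my_attribute

    @my_attribute.setter
    def my_attribute(self, value):   # writes stay `obj.my_attribute = x`
        if value < 0:
            raise ValueError("my_attribute must be non-negative")
        self._my_attribute = value

obj = MyClassWithValidation()
obj.my_attribute = 5
print(obj.my_attribute)  # 5
```

The key point is that migrating from `MyClassA`'s plain attribute to the property version breaks no caller, which is exactly why pre-emptive getters and setters are considered unidiomatic.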
76,613,140
9,640,238
How to specify the file encoding for pyarrow.csv.write_csv?
<p>When reading a CSV file with pyarrow, you can specify the encoding with a <code>pyarrow.csv.ReadOptions</code> constructor.</p> <p>However, I find no equivalent on <code>pyarrow.csv.write_csv</code>:</p> <ul> <li>The <code>pyarrow.csv.WriteOptions</code> does not have an <code>encoding</code> argument</li> <li>If you pass a file-like object instead of a file name, it must be opened in binary mode, so the encoding cannot be specified when instantiating the file object</li> </ul> <p>So how can I specify the encoding when writing to a CSV file?</p>
<python><csv><pyarrow><apache-arrow>
2023-07-04 13:27:06
0
2,690
mrgou
76,612,786
11,598,948
Create all combinations based on a subset of variables with Polars?
<p>I have a <code>DataFrame</code> that looks like this:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;country&quot;: [&quot;France&quot;, &quot;France&quot;, &quot;UK&quot;, &quot;UK&quot;, &quot;Spain&quot;], &quot;year&quot;: [2020, 2021, 2019, 2020, 2022], &quot;value&quot;: [1, 2, 3, 4, 5], } ) df shape: (5, 3) ┌─────────┬──────┬───────┐ │ country ┆ year ┆ value │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 │ ╞═════════╪══════╪═══════╡ │ France ┆ 2020 ┆ 1 │ │ France ┆ 2021 ┆ 2 │ │ UK ┆ 2019 ┆ 3 │ │ UK ┆ 2020 ┆ 4 │ │ Spain ┆ 2022 ┆ 5 │ └─────────┴──────┴───────┘ </code></pre> <p>I'd like to make a balanced panel by creating all country-year pairs. In R, I could use <code>tidyr::complete()</code> for this, but I didn't find a built-in way to do this in Polars. Is there something like this? If not, what would be the fastest way to mimick it?</p> <p>Expected output:</p> <pre class="lang-py prettyprint-override"><code>shape: (12, 3) ┌─────────┬──────┬───────┐ │ country ┆ year ┆ value │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 │ ╞═════════╪══════╪═══════╡ │ France ┆ 2019 ┆ null │ │ France ┆ 2020 ┆ 1 │ │ France ┆ 2021 ┆ 2 │ │ France ┆ 2022 ┆ null │ │ UK ┆ 2019 ┆ 3 │ │ UK ┆ 2020 ┆ 4 │ │ UK ┆ 2021 ┆ null │ │ UK ┆ 2022 ┆ null │ │ Spain ┆ 2019 ┆ null │ │ Spain ┆ 2020 ┆ null │ │ Spain ┆ 2021 ┆ null │ │ Spain ┆ 2022 ┆ 5 │ └─────────┴──────┴───────┘ </code></pre> <hr /> <hr /> <p><strong>Edit:</strong> the example above is quite simple because it only has 2 vars to complete but it started being trickier with 3 vars and I don't see how to adapt the <code>pivot()</code> + <code>unpivot()</code>:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;orig&quot;: [&quot;France&quot;, &quot;France&quot;, &quot;UK&quot;, &quot;UK&quot;, &quot;Spain&quot;], &quot;dest&quot;: [&quot;Japan&quot;, &quot;Vietnam&quot;, &quot;Japan&quot;, &quot;China&quot;, &quot;China&quot;], &quot;year&quot;: 
[2020, 2021, 2019, 2020, 2022], &quot;value&quot;: [1, 2, 3, 4, 5], } ) df shape: (5, 4) ┌────────┬─────────┬──────┬───────┐ │ orig ┆ dest ┆ year ┆ value │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 ┆ i64 │ ╞════════╪═════════╪══════╪═══════╡ │ France ┆ Japan ┆ 2020 ┆ 1 │ │ France ┆ Vietnam ┆ 2021 ┆ 2 │ │ UK ┆ Japan ┆ 2019 ┆ 3 │ │ UK ┆ China ┆ 2020 ┆ 4 │ │ Spain ┆ China ┆ 2022 ┆ 5 │ └────────┴─────────┴──────┴───────┘ </code></pre> <p>While the original works, it is much slower than <code>tidyr::complete()</code> (66ms for Polars, 1.8ms for <code>tidyr::complete()</code>):</p> <pre class="lang-py prettyprint-override"><code>import time tic = time.perf_counter() ( df .select(&quot;orig&quot;) .unique() .join(df.select(&quot;dest&quot;).unique(), how=&quot;cross&quot;) .join(df.select(&quot;year&quot;).unique(), how=&quot;cross&quot;) .join(df, how=&quot;left&quot;, on=[&quot;country&quot;, &quot;year&quot;]) ) toc = time.perf_counter() print(f&quot;Lazy eval: {toc - tic:0.4f} seconds&quot;) shape: (36, 4) ┌───────┬─────────┬──────┬───────┐ │ orig ┆ dest ┆ year ┆ value │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 ┆ i64 │ ╞═══════╪═════════╪══════╪═══════╡ │ Spain ┆ Japan ┆ 2021 ┆ null │ │ Spain ┆ Japan ┆ 2022 ┆ null │ │ Spain ┆ Japan ┆ 2019 ┆ null │ │ Spain ┆ Japan ┆ 2020 ┆ null │ │ … ┆ … ┆ … ┆ … │ │ UK ┆ Vietnam ┆ 2021 ┆ null │ │ UK ┆ Vietnam ┆ 2022 ┆ null │ │ UK ┆ Vietnam ┆ 2019 ┆ null │ │ UK ┆ Vietnam ┆ 2020 ┆ null │ └───────┴─────────┴──────┴───────┘ &gt;&gt;&gt; &gt;&gt;&gt; toc = time.perf_counter() &gt;&gt;&gt; print(f&quot;Lazy eval: {toc - tic:0.4f} seconds&quot;) Lazy eval: 0.0669 seconds </code></pre> <p>In R:</p> <pre class="lang-r prettyprint-override"><code>test &lt;- data.frame( orig = c(&quot;France&quot;, &quot;France&quot;, &quot;UK&quot;, &quot;UK&quot;, &quot;Spain&quot;), dest = c(&quot;Japan&quot;, &quot;Vietnam&quot;, &quot;Japan&quot;, &quot;China&quot;, &quot;China&quot;), year = c(2020, 2021, 2019, 2020, 2022), value = c(1, 2, 3, 4, 5) ) 
bench::mark( test = tidyr::complete(test, orig, dest, year), iterations = 100 ) #&gt; # A tibble: 1 × 6 #&gt; expression min median `itr/sec` mem_alloc `gc/sec` #&gt; &lt;bch:expr&gt; &lt;bch:tm&gt; &lt;bch:tm&gt; &lt;dbl&gt; &lt;bch:byt&gt; &lt;dbl&gt; #&gt; 1 test 1.61ms 1.81ms 496. 4.6MB 10.1 </code></pre>
<python><dataframe><python-polars>
2023-07-04 12:45:01
3
8,865
bretauv
76,612,363
17,471,060
Regex - Discard any word that starts with a non-decimal character in a string
<p>In this case, I would not like to include the first element '223'. Any suggestions?</p> <pre><code>re.findall(pattern=r&quot;[^\D][-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?&quot;, string=&quot; LComb_1_2_223 -4.00000000E+00 500000E+01 -1.09000000E-02 2.00E+00 22 5.23 &quot;) # out: ['223', '4.00000000E+00', '500000E+01', '1.09000000E-02', '0.00000000E+00'] </code></pre>
<python><regex>
2023-07-04 11:53:58
1
344
beta green
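One way is to replace `[^\D]` (which is just `\d`, forcing each match to start with a digit that it then consumes) with a negative lookbehind, so a number cannot start immediately after a word character. That excludes digits embedded in identifiers like `LComb_1_2_223` while still allowing leading signs. A sketch, assuming numbers are otherwise delimited by whitespace or a sign:

```python
import re

# (?<![\w.]) : the match may not start right after a letter, digit,
# underscore or dot, so digits inside identifiers are skipped.
pattern = r"(?<![\w.])[-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?"
s = " LComb_1_2_223 -4.00000000E+00 500000E+01 -1.09000000E-02 2.00E+00 22 5.23 "
print(re.findall(pattern, s))
# ['-4.00000000E+00', '500000E+01', '-1.09000000E-02', '2.00E+00', '22', '5.23']
```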
76,612,296
8,832,008
Index a directory of geotif files
<p>I have a directory of many GeoTIFF files that are tiles of global map. Is there a python module that can index such a directory and then provide access to the data using coordinates in the respective CRS? Say I have a coordinate or polygon, I would like the module to return the file(s) that contain the coordinate/polygon.</p>
<python><geotiff><rasterio>
2023-07-04 11:46:17
2
1,334
cmosig
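For the tile-index question above, the usual stack is rasterio (to read each tile's bounds once) plus a spatial index such as the `rtree` package or shapely's `STRtree`, but the core is just a file-to-bounding-box map queried by intersection. A dependency-free sketch with hypothetical tile names (in practice the bounds dict would be filled once from `rasterio.open(path).bounds` and cached):

```python
class TileIndex:
    """Map files to (minx, miny, maxx, maxy) bounds and return the files
    whose bounds contain a point or intersect a query box. Linear scan;
    swap in an R-tree (e.g. the `rtree` package) for large directories."""

    def __init__(self, bounds_by_file):
        self.bounds_by_file = bounds_by_file

    def query_point(self, x, y):
        return [
            f for f, (x0, y0, x1, y1) in self.bounds_by_file.items()
            if x0 <= x <= x1 and y0 <= y <= y1
        ]

    def query_box(self, x0, y0, x1, y1):
        return [
            f for f, (a0, b0, a1, b1) in self.bounds_by_file.items()
            if a0 <= x1 and a1 >= x0 and b0 <= y1 and b1 >= y0
        ]

idx = TileIndex({
    "tile_0_0.tif": (0, 0, 10, 10),    # hypothetical tiles
    "tile_10_0.tif": (10, 0, 20, 10),
})
print(idx.query_point(5, 5))        # ['tile_0_0.tif']
print(idx.query_box(8, 2, 12, 4))   # ['tile_0_0.tif', 'tile_10_0.tif']
```

All bounds must be in one shared CRS for the comparisons to be meaningful; for arbitrary polygons rather than boxes, shapely's `intersects` would replace the box test.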
76,612,168
14,282,714
Remove pillow package from anaconda environment
<p>I would like to uninstall the <code>pillow</code> package from my conda environment. The problem is that although the package is listed, I can't uninstall it. First I check if the package is in the environment:</p> <pre><code>conda list pillow # Name Version Build Channel pillow 10.0.0 pypi_0 pypi </code></pre> <p>So we can see that the package is installed in the environment. To remove the package I tried the following:</p> <pre><code>conda remove pillow </code></pre> <p>Which returns:</p> <pre><code>PackageNotFoundError: The following packages are missing from the target environment: - pillow </code></pre> <p>This is surprising because we can see that it is installed. As I understand it, the <code>Channel</code> is <code>pypi</code>, which means that I have mixed up <code>pip</code> and <code>conda</code>. So according to <a href="https://stackoverflow.com/questions/58404007/conda-wont-remove-package">here</a>, I have to uninstall <code>pillow</code> using <code>pip</code> like this:</p> <pre><code>pip uninstall pillow </code></pre> <p>This works; when checking <code>pip list</code> there is no <code>pillow</code> package anymore. But when I try again to remove the package from the <code>conda</code> environment I get the same error. So I was wondering if anyone knows how to uninstall a package from a <code>conda</code> environment in this case?</p> <hr /> <p>Please note: I use Windows.</p>
<python><pip><anaconda><python-imaging-library><uninstallation>
2023-07-04 11:31:28
1
42,724
Quinten
76,612,163
13,086,128
What are the advantages of a polars LazyFrame over a Dataframe?
<p>Python Polars is pretty similar to pandas.</p> <p>I know that in pandas we do not have LazyFrames.</p> <p>We can create LazyFrames just like DataFrames in polars.</p> <pre><code>import polars as pl data = {&quot;a&quot;: [1, 2, 3], &quot;b&quot;: [5, 4, 8]} lf = pl.LazyFrame(data) </code></pre> <p>I want to know: what are the advantages of a <code>LazyFrame</code> over a <code>DataFrame</code>?</p> <p>It would help if someone could explain with examples.</p> <p>Thanks.</p>
<python><dataframe><python-polars><lazyframe>
2023-07-04 11:30:46
1
30,560
Talha Tayyab
76,612,118
5,868,293
How to add a separate title to each stacked bar plot in Plotly Express (Python)
<p>I have the following data frame:</p> <pre><code>import pandas as pd ppp2 = pd.DataFrame({'cluster_cj': {0: 'A', 1: 'A', 2: 'A', 3: 'A', 4: 'A', 5: 'A', 6: 'A', 7: 'A', 8: 'A', 9: 'A', 10: 'A', 11: 'A', 12: 'A', 13: 'C', 14: 'C', 15: 'C', 16: 'C', 17: 'C', 18: 'C', 19: 'C', 20: 'C', 21: 'C', 22: 'C', 23: 'C', 24: 'C', 25: 'C', 26: 'H', 27: 'H', 28: 'H', 29: 'H', 30: 'H', 31: 'H', 32: 'H', 33: 'H', 34: 'H', 35: 'H', 36: 'H', 37: 'H', 38: 'H', 39: 'H', 40: 'H', 41: 'H', 42: 'M', 43: 'M', 44: 'M', 45: 'M', 46: 'M', 47: 'M', 48: 'M', 49: 'M', 50: 'M', 51: 'M', 52: 'S', 53: 'S', 54: 'S', 55: 'S', 56: 'S', 57: 'S', 58: 'S', 59: 'S', 60: 'S', 61: 'S', 62: 'S', 63: 'S'}, 'cluster': {0: 'A', 1: 'A', 2: 'A', 3: 'A', 4: 'C', 5: 'C', 6: 'H', 7: 'H', 8: 'H', 9: 'H', 10: 'M', 11: 'M', 12: 'S', 13: 'A', 14: 'A', 15: 'C', 16: 'C', 17: 'C', 18: 'C', 19: 'H', 20: 'H', 21: 'H', 22: 'M', 23: 'M', 24: 'M', 25: 'M', 26: 'A', 27: 'A', 28: 'C', 29: 'C', 30: 'C', 31: 'C', 32: 'H', 33: 'H', 34: 'H', 35: 'H', 36: 'M', 37: 'M', 38: 'M', 39: 'M', 40: 'S', 41: 'S', 42: 'C', 43: 'C', 44: 'C', 45: 'C', 46: 'H', 47: 'M', 48: 'M', 49: 'M', 50: 'M', 51: 'S', 52: 'C', 53: 'H', 54: 'H', 55: 'H', 56: 'M', 57: 'M', 58: 'M', 59: 'M', 60: 'S', 61: 'S', 62: 'S', 63: 'S'}, 'cluster_confidence_label': {0: 'confident', 1: 'confused', 2: 'low', 3: 'moderate', 4: 'confident', 5: 'confused', 6: 'confident', 7: 'confused', 8: 'low', 9: 'moderate', 10: 'confused', 11: 'low', 12: 'moderate', 13: 'low', 14: 'moderate', 15: 'confident', 16: 'confused', 17: 'low', 18: 'moderate', 19: 'confident', 20: 'low', 21: 'moderate', 22: 'confident', 23: 'confused', 24: 'low', 25: 'moderate', 26: 'confident', 27: 'moderate', 28: 'confident', 29: 'confused', 30: 'low', 31: 'moderate', 32: 'confident', 33: 'confused', 34: 'low', 35: 'moderate', 36: 'confident', 37: 'confused', 38: 'low', 39: 'moderate', 40: 'confident', 41: 'confused', 42: 'confident', 43: 'confused', 44: 'low', 45: 'moderate', 46: 'moderate', 47: 'confident', 
48: 'confused', 49: 'low', 50: 'moderate', 51: 'confused', 52: 'confused', 53: 'confident', 54: 'confused', 55: 'moderate', 56: 'confident', 57: 'confused', 58: 'low', 59: 'moderate', 60: 'confident', 61: 'confused', 62: 'low', 63: 'moderate'}, '%': {0: 24.0, 1: 18.0, 2: 24.0, 3: 34.0, 4: 50.0, 5: 50.0, 6: 21.4, 7: 21.4, 8: 14.3, 9: 42.9, 10: 83.3, 11: 16.7, 12: 100.0, 13: 66.7, 14: 33.3, 15: 42.1, 16: 12.4, 17: 8.9, 18: 36.7, 19: 35.7, 20: 21.4, 21: 42.9, 22: 17.8, 23: 36.0, 24: 14.0, 25: 32.2, 26: 50.0, 27: 50.0, 28: 31.2, 29: 35.4, 30: 20.8, 31: 12.5, 32: 39.5, 33: 20.2, 34: 14.7, 35: 25.6, 36: 17.9, 37: 7.7, 38: 30.8, 39: 43.6, 40: 60.0, 41: 40.0, 42: 7.4, 43: 55.6, 44: 11.1, 45: 25.9, 46: 100.0, 47: 25.6, 48: 29.8, 49: 19.3, 50: 25.4, 51: 100.0, 52: 100.0, 53: 42.9, 54: 23.8, 55: 33.3, 56: 20.9, 57: 41.8, 58: 24.2, 59: 13.2, 60: 44.2, 61: 30.8, 62: 3.8, 63: 21.2}, '%_all': {0: 43.9, 1: 43.9, 2: 43.9, 3: 43.9, 4: 5.3, 5: 5.3, 6: 12.3, 7: 12.3, 8: 12.3, 9: 12.3, 10: 36.8, 11: 36.8, 12: 1.8, 13: 1.8, 14: 1.8, 15: 52.2, 16: 52.2, 17: 52.2, 18: 52.2, 19: 2.8, 20: 2.8, 21: 2.8, 22: 43.1, 23: 43.1, 24: 43.1, 25: 43.1, 26: 5.2, 27: 5.2, 28: 20.6, 29: 20.6, 30: 20.6, 31: 20.6, 32: 55.4, 33: 55.4, 34: 55.4, 35: 55.4, 36: 16.7, 37: 16.7, 38: 16.7, 39: 16.7, 40: 2.1, 41: 2.1, 42: 4.8, 43: 4.8, 44: 4.8, 45: 4.8, 46: 1.2, 47: 93.1, 48: 93.1, 49: 93.1, 50: 93.1, 51: 0.9, 52: 1.2, 53: 12.7, 54: 12.7, 55: 12.7, 56: 54.8, 57: 54.8, 58: 54.8, 59: 54.8, 60: 31.3, 61: 31.3, 62: 31.3, 63: 31.3}} ) </code></pre> <p>and I am using the following code to create a stacked bar plot</p> <pre><code>import plotly_express as px fig = px.bar(ppp2.reset_index(), x='cluster_cj', color='cluster_confidence_label', y='%', facet_row='cluster', color_discrete_map={'confident': '#104E8B', 'moderate': '#1874CD', 'low': '#1C86EE', 'confused': '#1E90FF'}, category_orders={'cluster_confidence_label': ['confident','moderate','low','confused']}, text=ppp['%'], custom_data=['%_all'] ) 
fig.for_each_annotation(lambda a: a.update(text=a.text.split(&quot;=&quot;)[-1])) fig.show() </code></pre> <p>which gives me</p> <p><a href="https://i.sstatic.net/koZTf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/koZTf.png" alt="enter image description here" /></a></p> <p>I would like to add, on top of each stacked bar plot, a title, which will be the column <code>'%_all'</code>. Maybe I could somehow leverage the <code>custom_data</code> argument? Any ideas?</p>
<python><plotly><plotly-express>
2023-07-04 11:24:16
1
4,512
quant
76,612,047
3,927,335
How is this number series generated in Python?
<p>Can anyone explain how this number series result is produced? What is the calculation?</p> <pre><code>def tri_recursion(k): if(k &gt; 0): result = k + tri_recursion(k - 1) print(result) else: result = 0 return result print(&quot;\n\nRecursion Example Results&quot;) tri_recursion(10) Result 1 3 6 10 15 21 </code></pre>
<python>
2023-07-04 11:15:29
1
1,139
PRANAV
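The printed values are the triangular numbers: the call recurses down to `k = 0`, then each frame on the way back prints `k` plus the sum of everything below it, i.e. 1, 1+2, 1+2+3, and so on. An equivalent iterative sketch:

```python
def tri_iterative(n):
    """Same values tri_recursion prints: the partial sums 1, 1+2, 1+2+3, ..."""
    result = 0
    outputs = []
    for k in range(1, n + 1):
        result += k          # result = k + (sum of 1..k-1), as in the recursion
        outputs.append(result)
    return outputs

print(tri_iterative(6))  # [1, 3, 6, 10, 15, 21]
# Closed form for the k-th printed value: k * (k + 1) // 2
```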
76,611,965
12,436,050
Split pandas column by separator and merge to a column of another dataframe
<p>I have the following dataframes. One of them has a column with comma-separated values (df1: col2), which I want to split and join on a column of another dataframe (df2: col4).</p> <pre><code>df1 col1 col2 a abc, df b ert c xyz, ghi df2 col3 col4 id1 abc id2 erg id3 ghi </code></pre> <p>In the end, I would like to get this output</p> <pre><code>col1 col2 col3 col4 a abc, df id1 abc c xyz, ghi id3 ghi </code></pre> <pre><code>df1 = (df1.assign(col2 = df1['col2'].str.split(',')) .explode('col2') .merge(df1, on=['col4'], how='left') .groupby(['col1'], as_index=False, sort=False) .agg({'col1':','.join})) </code></pre> <p>I am getting this error:</p> <pre><code>KeyError: 'col4' </code></pre> <p>However, this column exists in the dataframe. Any help will be highly appreciated</p>
<python><pandas><dataframe>
2023-07-04 11:04:59
1
1,495
rshar
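The `KeyError` above comes from splitting into `col2` but merging on `col4`, which only exists in `df2` (and from merging `df1` with itself). Naming the exploded column `col4` and stripping the space left by the `', '` separator lets the merge align. A sketch with the question's data:

```python
import pandas as pd

df1 = pd.DataFrame({"col1": ["a", "b", "c"],
                    "col2": ["abc, df", "ert", "xyz, ghi"]})
df2 = pd.DataFrame({"col3": ["id1", "id2", "id3"],
                    "col4": ["abc", "erg", "ghi"]})

# Split into a new column named like df2's key, explode, strip, merge.
exploded = (df1.assign(col4=df1["col2"].str.split(","))
               .explode("col4"))
exploded["col4"] = exploded["col4"].str.strip()

# how="inner" keeps only the rows whose token exists in df2.
out = exploded.merge(df2, on="col4", how="inner")[["col1", "col2", "col3", "col4"]]
print(out.to_string(index=False))
```

This yields the two expected rows: (a, "abc, df", id1, abc) and (c, "xyz, ghi", id3, ghi).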
76,611,714
13,314,132
For GPT4All, cannot seem to load downloaded model file. Getting error llama_init_from_file: failed to load model (bad f16 value 5)
<p>I am trying to use the following code for using GPT4All with langchain but am getting the above error:</p> <p>Code:</p> <pre><code>import streamlit as st from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.agents.agent_toolkits import create_python_agent from langchain.tools.python.tool import PythonREPLTool PATH = 'D:\Python Projects\LangchainModels\models\ggml-stable-vicuna-13B.q4_2.bin' llm = GPT4All(model=PATH, verbose=True) agent_executor = create_python_agent( llm=llm, tool=PythonREPLTool(), verbose=True ) st.title('🦜🔗 GPT For Y\'all') prompt = st.text_input('Enter your prompt here!') if prompt: response = agent_executor.run(prompt) st.write(response) </code></pre> <p>And the error traceback from the code being run:</p> <pre><code>llama_model_load: loading model from 'D:\Python Projects\LangchainModels\models\ggml-stable-vicuna-13B.q4_2.bin' - please wait ... llama_model_load: n_vocab = 32001 llama_model_load: n_ctx = 512 llama_model_load: n_embd = 5120 llama_model_load: n_mult = 256 llama_model_load: n_head = 40 llama_model_load: n_layer = 40 llama_model_load: n_rot = 128 llama_model_load: f16 = 5 llama_model_load: n_ff = 13824 llama_model_load: n_parts = 2 llama_model_load: type = 2 llama_model_load: invalid model file 'D:\Python Projects\LangchainModels\models\ggml-stable-vicuna-13B.q4_2.bin' (bad f16 value 5) llama_init_from_file: failed to load model </code></pre> <p>I have also reported the same error to the gpt4all repository, with no feedback yet. Are there any version dependencies? For example, even though it is not specified in the documentation, I know langchain needs Python &gt;= 3.8 to run successfully.</p>
<python><langchain><gpt4all>
2023-07-04 10:31:23
1
655
Daremitsu
76,611,574
3,650,983
Can't close open3d window on Mac
<p>After running</p> <pre><code>o3d.visualization.draw_geometries(...) </code></pre> <p>in a Jupyter notebook, I get the following error:</p> <blockquote> <p>[Open3D WARNING] GLFW Error: Cocoa: Failed to find service port for display</p> </blockquote> <p>when trying to close the open3d window. The window stays open (Not Responding) until the kernel restarts.</p> <p>Environment:</p> <pre><code>Mac M2 Pro, Python 3.8.16, open3d 0.17.0, IPython 8.3.0, ipykernel 6.9.1, jupyter_client 7.2.2 jupyter_core 4.10.0 jupyter_server 2.7.0 notebook 6.5.4. </code></pre> <p>I found the following GitHub issues: <a href="https://github.com/isl-org/Open3D/issues/1673" rel="nofollow noreferrer">1673</a> , <a href="https://github.com/isl-org/Open3D/issues/4813" rel="nofollow noreferrer">4813</a></p>
<python><jupyter-notebook><ipython><open3d>
2023-07-04 10:13:13
1
4,119
ChaosPredictor
76,611,386
11,318,472
groupby(..).sum() of `inf` values resulting in `NaN`
<p>I have the following <code>pandas.DataFrame</code>:</p> <p><a href="https://i.sstatic.net/swrMP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/swrMP.png" alt="enter image description here" /></a></p> <p>where the column <code>p_nom_max</code> contains <code>inf</code> values:</p> <pre><code>&gt; df[&quot;p_nom_max&quot;] == np.inf Generator DE0 1 nuclear True DE0 2 nuclear True DE0 3 nuclear False Name: p_nom_max, dtype: bool </code></pre> <p>When aggregating over this column using <code>.sum()</code> the result is as expected <code>inf</code>:</p> <pre><code>&gt; df[&quot;p_nom_max&quot;].sum() inf </code></pre> <p>However when doing a <code>.groupby(&quot;carrier&quot;)</code> and then aggregating over <code>p_nom_max</code>, the standard functions result in <code>NaN</code> values, only a <code>lambda-wrapper</code> gives the correct result:</p> <pre><code>&gt; df.groupby(&quot;carrier&quot;).agg({&quot;p_nom_max&quot;: [&quot;sum&quot;, np.sum, lambda x: np.sum(x)]}) p_nom_max sum sum &lt;lambda_0&gt; carrier nuclear NaN NaN inf </code></pre> <p>Can someone help me understand what is going on here and why the standard groupby gives <code>NaN</code> instead of <code>inf</code> as a result?</p>
<python><pandas><dataframe><group-by>
2023-07-04 09:53:14
1
1,319
euronion
76,611,276
8,995,555
Convert dot-separated values into Go structs using Python
<p>This is a specific requirement for an application whose configurations can be changed (specifically the WSO2 Identity Server, since I'm writing a Kubernetes operator for it using Go). But it's really not relevant here. I want to create a solution which makes it easy to manage a lot of config mappings used to generate Go structs. These configs are mapped within a .csv</p> <p>Link to .csv - <a href="https://drive.google.com/file/d/1EeHNEt1XWdWQXa7WCFgdU_fczUW__4kC/view?usp=sharing" rel="nofollow noreferrer">my_configs.csv</a></p> <p>I want to <strong>write a python script that automatically generates the Go structs</strong>, so that any change to the application configs can be picked up by regenerating the corresponding Go structs, simply by executing the python script. I'm referring to the configs of the application itself. For example, the toml key names in the csv can be changed and new values can be added.</p> <p>I have so far been able to create a python script that <strong>nearly achieves my goal</strong>. The script is:</p> <pre><code>import pandas as pd

def convert_to_dict(data):
    result = {}
    for row in data:
        current_dict = result
        for item in row[:-1]:
            if item is not None:
                if item not in current_dict:
                    current_dict[item] = {}
                current_dict = current_dict[item]
    return result

def extract_json_key(yaml_key):
    if isinstance(yaml_key, str) and '.' in yaml_key:
        return yaml_key.split('.')[-1]
    else:
        return yaml_key

def add_fields_to_struct(struct_string, go_var, go_type, json_key, toml_key):
    struct_string += str(go_var) + &quot; &quot; + str(go_type) + ' `json:&quot;' + str(json_key) + ',omitempty&quot; toml:&quot;' + str(toml_key) + '&quot;` ' + &quot;\n&quot;
    return struct_string

def generate_go_struct(struct_name, struct_data):
    struct_name = &quot;Configurations&quot; if struct_name == &quot;&quot; else struct_name
    struct_string = &quot;type &quot; + struct_name + &quot; struct {\n&quot;
    yaml_key = df['yaml_key'].str.split('.').str[-1]
    # Base case: generate fields for the current struct level
    for key, value in struct_data.items():
        selected_rows = df[yaml_key == key]
        if len(selected_rows) &gt; 1:
            go_var = selected_rows['go_var'].values[1]
            toml_key = selected_rows['toml_key'].values[1]
            go_type = selected_rows['go_type'].values[1]
            json_key = selected_rows['json_key'].values[1]
        else:
            go_var = selected_rows['go_var'].values[0]
            toml_key = selected_rows['toml_key'].values[0]
            go_type = selected_rows['go_type'].values[0]
            json_key = selected_rows['json_key'].values[0]
        # Add fields to the body of the struct
        struct_string = add_fields_to_struct(struct_string, go_var, go_type, json_key, toml_key)
    struct_string += &quot;}\n\n&quot;
    # Recursive case: generate struct definitions for nested structs
    for key, value in struct_data.items():
        selected_rows = df[yaml_key == key]
        if len(selected_rows) &gt; 1:
            go_var = selected_rows['go_var'].values[1]
        else:
            go_var = selected_rows['go_var'].values[0]
        if isinstance(value, dict) and any(isinstance(v, dict) for v in value.values()):
            nested_struct_name = go_var
            nested_struct_data = value
            struct_string += generate_go_struct(nested_struct_name, nested_struct_data)
    return struct_string

# Read the csv
csv_file = &quot;~/Downloads/my_configs.csv&quot;
df = pd.read_csv(csv_file)

# Remove rows where all columns are NaN
df = df.dropna(how='all')

# Create the 'json_key' column using the custom function
df['json_key'] = df['yaml_key'].apply(extract_json_key)

data = df['yaml_key'].values.tolist()               # Read the 'yaml_key' column
data = pd.DataFrame({'column': data})               # Convert to dataframe
data = data['column'].str.split('.', expand=True)   # Split by '.'
nested_list = data.values.tolist()                  # Convert to nested list
data = nested_list
result_json = convert_to_dict(data)                 # Convert to dict (JSON)

# The generated Go code
go_struct = generate_go_struct(&quot;&quot;, result_json)

# Write to file
file_path = &quot;output.go&quot;
with open(file_path, &quot;w&quot;) as file:
    file.write(go_struct)
</code></pre> <p>The problem is (look at the below part of the csv),</p> <pre><code>authentication.authenticator.basic
authentication.authenticator.basic.parameters
authentication.authenticator.basic.parameters.showAuthFailureReason
authentication.authenticator.basic.parameters.showAuthFailureReasonOnLoginPage
authentication.authenticator.totp
authentication.authenticator.totp.parameters
authentication.authenticator.totp.parameters.showAuthFailureReason
authentication.authenticator.totp.parameters.showAuthFailureReasonOnLoginPage
authentication.authenticator.totp.parameters.encodingMethod
authentication.authenticator.totp.parameters.timeStepSize
</code></pre> <p>Here, since the field <code>parameters</code> is repeated for both <code>basic</code> and <code>totp</code>, the script confuses itself and produces two <code>TotpParameters</code> structs. The expected outcome is to have <code>BasicParameters</code> and <code>TotpParameters</code> structs. Many similar repeating words are present in the csv's <code>yaml_key</code> column.</p> <p>I understand this has something to do with the index being hardcoded as 1 in <code>go_var = selected_rows['go_var'].values[1]</code> but I'm having a hard time fixing this.</p> <p>Could anyone please point me to an answer?
I think</p> <ol> <li>an issue with the recursive function, or</li> <li>an issue in the code that generates the JSON</li> </ol> <p>might be the root cause of this issue. Thanks!</p> <p>I also tried ChatGPT, but since this has something to do with nesting and recursion, the answers it provided were not valid.</p> <p><strong>UPDATE</strong></p> <p>I found that the problem exists for the rows that contain <code>properties</code>, <code>poolOptions</code>, <code>endpoint</code> and <code>parameters</code> fields. This is because they are repeated in the <code>yaml_key</code> column.</p>
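The collision comes from looking rows up by only the last dotted segment (`yaml_key.str.split('.').str[-1]`), so `basic.parameters` and `totp.parameters` match the same rows. One way out is to key the nested dict by the full dotted path and derive struct names from the path's tail. A sketch of that idea — the two-segment naming scheme here is illustrative, not the exact `go_var` convention from the csv:

```python
def build_tree(dotted_keys):
    # Key every node by its FULL dotted path, so that
    # "...basic.parameters" and "...totp.parameters" stay distinct.
    tree = {}
    for dotted in dotted_keys:
        parts = dotted.split(".")
        node = tree
        for depth in range(len(parts)):
            path = ".".join(parts[: depth + 1])
            node = node.setdefault(path, {})
    return tree

def struct_name(path):
    # "authentication.authenticator.totp.parameters" -> "TotpParameters"
    return "".join(p.capitalize() for p in path.split(".")[-2:])

keys = [
    "authentication.authenticator.basic.parameters.showAuthFailureReason",
    "authentication.authenticator.totp.parameters.encodingMethod",
]
tree = build_tree(keys)
```

With full paths as node keys, the recursive generator can select rows with `df[df['yaml_key'] == path]` directly instead of matching on an ambiguous last segment, which also removes the need for the hardcoded `values[1]` index.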
<python><pandas><csv><go><recursion>
2023-07-04 09:41:27
2
1,014
RukshanJS
76,611,154
2,876,079
How to automatically sort functions in classes by their usage?
<p><strong>a)</strong> The book Clean Code by Robert C. Martin suggests sorting functions according to the &quot;step down rule&quot;:</p> <blockquote> <p>We want the code to read like a top-down narrative. We want every function to be followed by those at the next level of abstraction so that we can read the program, descending one level of abstraction at a time as we read down the list of functions. I call this The Stepdown Rule.</p> </blockquote> <p><strong>b)</strong> PyCharm can show functions in <strong>alphabetical</strong> order in the Structure View. Also see <a href="https://stackoverflow.com/questions/45883520/can-pycharm-sort-methods-alphabetically">Can PyCharm sort methods alphabetically?</a> Therefore, there is <strong>no need to manually sort functions in alphabetical order</strong> and I can use the order of functions in a class for a different purpose.</p> <p><strong>c)</strong> Until now I manually <strong>sort them by their usage/abstraction-level/call stack</strong>. (I also put static and public functions on top and functions starting with &quot;_&quot; below. I put properties below functions.) However, some colleagues in my team are not aware of that order or might follow a different order.</p> <p><strong>d)</strong> Instead of <strong>manually</strong> sorting functions by their usage/abstraction-level/call stack, I would prefer a tool similar to <code>black</code> (unfortunately, black <a href="https://github.com/psf/black/issues/3029" rel="nofollow noreferrer">does not sort functions</a>), doing that formatting/sorting work <strong>automatically</strong> for us on save actions. (The order of the corresponding unit-tests should be the same as the order of the functions.)</p> <p><strong>=&gt;</strong> How can I achieve that in PyCharm? Is there some <strong>code</strong>/plugin I could start with, already knowing about the usage order/calling tree?</p>
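I'm not aware of a PyCharm action or black option that does this, but a starting point for a custom tool could be the standard-library `ast` module: extract a class's methods, build the intra-class call graph, and emit a caller-before-callee order. A rough sketch — it handles only `self.x()` calls and leaves out the static/underscore/property tie-breaking rules from (c):

```python
import ast

def stepdown_order(source, class_name):
    """Order a class's methods caller-before-callee (a rough step-down
    sort; methods not reached via calls keep their definition order)."""
    tree = ast.parse(source)
    cls = next(n for n in ast.walk(tree)
               if isinstance(n, ast.ClassDef) and n.name == class_name)
    methods = [n for n in cls.body if isinstance(n, ast.FunctionDef)]
    names = {m.name for m in methods}
    # For each method, the set of sibling methods it calls via self.<name>(...)
    calls = {m.name: {c.func.attr for c in ast.walk(m)
                      if isinstance(c, ast.Call)
                      and isinstance(c.func, ast.Attribute)
                      and c.func.attr in names}
             for m in methods}
    called = {callee for callees in calls.values() for callee in callees}
    ordered, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        ordered.append(name)
        for m in methods:                 # visit callees in definition order
            if m.name in calls[name]:
                visit(m.name)

    for m in methods:                     # entry points (never called) first
        if m.name not in called:
            visit(m.name)
    for m in methods:                     # then any remaining methods
        visit(m.name)
    return ordered

SRC = """
class Job:
    def _helper(self): pass
    def run(self):
        self._step1()
        self._step2()
    def _step1(self):
        self._helper()
    def _step2(self): pass
"""
print(stepdown_order(SRC, "Job"))
```

A save-action script could apply this order by rewriting the class body; hooking it into PyCharm via an external tool or file watcher is left as an exercise.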
<python><sorting><pycharm>
2023-07-04 09:24:59
1
12,756
Stefan
76,611,116
3,787,692
Explore all possible BOOLEAN combinations of variables
<p>I have a list: [A,B,C] and I want to explore all possible logic combinations. I have got as far as generating:</p> <pre><code>[A,AND,B,AND,C]
[A,AND,B,OR,C]
[A,OR,B,AND,C]
[A,OR,B,OR,C]
[A,AND,C,OR,B]
</code></pre> <p>This was <em>partially</em> achieved using the following:</p> <pre><code>def get_macro_comb(keys):
    all_comb = []
    for combination in itertools.product(BOOL, repeat=get_needed_bool(keys)):
        all_comb.append(combination)
    expanded_list = list(set(all_comb))
    all_aud = []
    for boo in expanded_list:
        aud = [x for x in chain(*zip_longest(keys, boo)) if x is not None]
        all_aud.append(aud)
    return all_aud

def get_needed_bool(l):
    needed_bool = len(l) - 1
    return needed_bool

BOOL = ['AND','OR']

full_aud = []
for comb in get_macro_comb(['A','B','C']):
    aud = ' '.join(comb)
    full_aud.append(comb)
</code></pre> <p>and I want to group this for a query, so taking the second example I want:</p> <pre><code>(A AND B) OR C
A AND (B OR C)
</code></pre> <p>and I want this generalised to <code>n</code> items in a list.</p> <p>I have only managed to think of a brute-force approach:</p> <pre><code>x = ['A','AND','B','OR','C']

aud = '({})'.format(' '.join(x[i-1:i+2]))
rest_aud = ' '.join(x[i+2:])
aud = aud + ' ' + rest_aud
print(aud)

aud = '({})'.format(' '.join(x[i+1:]))
rest_aud = ' '.join(x[i-1:i+1])
aud = rest_aud + ' ' + aud
print(aud)
</code></pre> <p>which returns:</p> <pre><code>(A AND B) OR C
A AND (B OR C)
</code></pre> <p>which is clearly not generalisable.</p>
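The grouping part is the classic "all binary parenthesizations" problem (Catalan-number many per choice of operators): recursively split the token list at each operator position. A sketch that generalises to n items — note it emits an outermost pair of parentheses, which you can strip for the final query text if you prefer:

```python
import itertools

def parenthesizations(tokens):
    # tokens alternates operands and operators, e.g. ['A','AND','B','OR','C']
    if len(tokens) == 1:
        return [tokens[0]]
    results = []
    for i in range(1, len(tokens), 2):          # every operator position
        for left in parenthesizations(tokens[:i]):
            for right in parenthesizations(tokens[i + 1:]):
                results.append(f"({left} {tokens[i]} {right})")
    return results

def all_groupings(items, operators=("AND", "OR")):
    out = []
    for ops in itertools.product(operators, repeat=len(items) - 1):
        # interleave operands and operators into one token list
        tokens = [x for pair in itertools.zip_longest(items, ops)
                  for x in pair if x]
        out.extend(parenthesizations(tokens))
    return out

res = all_groupings(["A", "B", "C"])
print(res)
```

For 3 items this yields 4 operator assignments × 2 parenthesizations = 8 expressions; for n items the count grows as 2^(n-1) times the (n-1)-th Catalan number.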
<python><python-itertools>
2023-07-04 09:21:08
2
1,089
laila
76,611,006
4,391,360
pip can't find a package even though it's installed
<p>When a dependency (<code>dipy</code>) is installed with <code>pip</code> on an appveyor windows build, the installation fails because a module is not installed (<code>packaging</code>).</p> <p>However, it is installed (in <code>C:\Python37\lib\site-packages</code>).</p> <p>In fact, when we look at the traceback, we'll notice that at first it's <code>C:\Python37\lib\site-packages</code> that's used, then when we search for the hook, we switch to <code>C:\Users\appveyor\AppData\Local\Temp\1\pip-build-env-en87c6ej\overlay\Lib\site-packages</code> (vide infra), where I think <code>packaging</code> isn't installed.</p> <p>I don't understand why this is happening and how to fix it.</p> <pre><code>Collecting dipy (from mia-processes&gt;=2.3.0-&gt;populse-mia==2.4.1.dev0+8df8ccf7)
  Downloading dipy-1.7.0.tar.gz (12.4 MB)
     --------------------------------------- 12.4/12.4 MB 29.7 MB/s eta 0:00:00
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'error'
  error: subprocess-exited-with-error

  Getting requirements to build wheel did not run successfully.
  exit code: 1

  [19 lines of output]
  Traceback (most recent call last):
    File &quot;C:\Python37\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 353, in &lt;module&gt;
      main()
    File &quot;C:\Python37\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 335, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File &quot;C:\Python37\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 118, in get_requires_for_build_wheel
      return hook(config_settings)
    File &quot;C:\Users\appveyor\AppData\Local\Temp\1\pip-build-env-en87c6ej\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 341, in get_requires_for_build_wheel
      return self._get_build_requires(config_settings, requirements=['wheel'])
    File &quot;C:\Users\appveyor\AppData\Local\Temp\1\pip-build-env-en87c6ej\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 323, in _get_build_requires
      self.run_setup()
    File &quot;C:\Users\appveyor\AppData\Local\Temp\1\pip-build-env-en87c6ej\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 488, in run_setup
      self).run_setup(setup_script=setup_script)
    File &quot;C:\Users\appveyor\AppData\Local\Temp\1\pip-build-env-en87c6ej\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 338, in run_setup
      exec(code, locals())
    File &quot;&lt;string&gt;&quot;, line 35, in &lt;module&gt;
    File &quot;C:\Users\appveyor\AppData\Local\Temp\1\pip-install-1nml3j3e\dipy_a5069f489a1242509a5cd4120d68f564\cythexts.py&quot;, line 7, in &lt;module&gt;
      from packaging.version import Version
  ModuleNotFoundError: No module named 'packaging'
  [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

Getting requirements to build wheel did not run successfully.
exit code: 1
See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<python><pip><appveyor>
2023-07-04 09:07:48
0
727
servoz
76,610,995
6,117,400
SLURM sbatch fails while interactive srun succeeds (PyTorch distributed run)
<p>I have a python script that tests connectivity in the <code>torch.distributed.run</code> environment. Allocating nodes and running the command by hand results in a successful execution.</p> <p>Running an sbatch script results in a timeout and no execution at all.</p> <p>Python Script:</p> <pre class="lang-py prettyprint-override"><code>import os
import socket

import torch
import torch.distributed as dist

socket.AF_INET6 = socket.AF_INET

print(&quot;Starting python script&quot;)

local_rank = int(os.environ[&quot;LOCAL_RANK&quot;])
torch.cuda.set_device(local_rank)
device = torch.device(&quot;cuda&quot;, local_rank)
hostname = socket.gethostname()
gpu = f&quot;[{hostname}-{local_rank}]&quot;

try:
    # test distributed
    dist.init_process_group(&quot;nccl&quot;)
    dist.all_reduce(torch.ones(1).to(device), op=dist.ReduceOp.SUM)
    dist.barrier()

    # test cuda is available and can allocate memory
    torch.cuda.is_available()
    torch.ones(1).cuda(local_rank)

    # global rank
    rank = dist.get_rank()
    world_size = dist.get_world_size()

    print(f&quot;{gpu} is OK (global rank: {rank}/{world_size})&quot;)

    dist.barrier()
    if rank == 0:
        print(f&quot;pt={torch.__version__}, cuda={torch.version.cuda}, nccl={torch.cuda.nccl.version()}&quot;)

except Exception:
    print(f&quot;{gpu} is broken&quot;)
    raise
</code></pre> <p>SLURM SBATCH file:</p> <pre class="lang-bash prettyprint-override"><code>#!/bin/bash
#SBATCH --job-name=rtx-distribution-test   # name
#SBATCH --nodes=2                          # nodes
#SBATCH --ntasks-per-node=1                # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=4                  # number of cores per tasks
#SBATCH --partition=clara
#SBATCH --gres=gpu:rtx2080ti:1             # number of gpus
#SBATCH --time 01:15:00                    # maximum execution time (HH:MM:SS)
#SBATCH --output=clara_rtx/%x-%j.out       # output file name
#SBATCH --mail-type=ALL

# load modules
module load Python

srun pip install --user -r requirements.txt

MASTER_PORT=12340
MASTER_ADDR=$(scontrol show hostnames &quot;$SLURM_JOB_NODELIST&quot; | head -n 1)
echo &quot;MASTER_ADDR=&quot;$MASTER_ADDR

GPUS_PER_NODE=1
export NCCL_DEBUG=INFO
LOGLEVEL=INFO

srun bash -c &quot;NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID --master_addr $MASTER_ADDR --master_port $MASTER_PORT torch-distributed-gpu-test_no_flock.py&quot;
</code></pre> <p>Interactive Run Command:</p> <pre class="lang-bash prettyprint-override"><code>srun bash -c 'NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 1 --nnodes 2 --node_rank $SLURM_PROCID --master_addr clara06 --master_port 12340 torch-distributed-gpu-test_no_flock.py'
</code></pre> <p>Output of Interactive Command:</p> <pre><code>[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:426] [c10d] The server socket cannot be initialized on [::]:12340 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
Starting python script
Starting python script
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
clara06:3883842:3883842 [0] NCCL INFO Bootstrap : Using ib0:10.10.10.6&lt;0&gt;
clara06:3883842:3883842 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
clara06:3883842:3883842 [0] NCCL INFO cudaDriverVersion 11060
NCCL version 2.14.3+cuda11.7
...
</code></pre> <p>Output of SBATCH script:</p> <pre><code>[W socket.cpp:426] [c10d] The server socket cannot be initialized on [::]:12340 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.CENSORED]:12340 (errno: 97 - Address family not supported by protocol).
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd: error: *** STEP 3111195.1 ON clara23 CANCELLED AT --- DUE TO TIME LIMIT ***
slurmstepd: error: *** JOB 3111195 ON clara23 CANCELLED AT --- DUE TO TIME LIMIT ***
</code></pre> <p>If run long enough (by increasing the time limit), the job will cancel with a <code>RuntimeError: Socket Timeout</code> with the following stack trace:</p> <pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
  File &quot;/software/all/staging/Python/3.10.8-GCCcore-12.2.0/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File &quot;/software/all/staging/Python/3.10.8-GCCcore-12.2.0/lib/python3.10/runpy.py&quot;, line 86, in _run_code
    exec(code, run_globals)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/run.py&quot;, line 798, in &lt;module&gt;
    main()
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py&quot;, line 346, in wrapper
    return f(*args, **kwargs)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/run.py&quot;, line 794, in main
    run(args)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/run.py&quot;, line 785, in run
    elastic_launch(
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/launcher/api.py&quot;, line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/launcher/api.py&quot;, line 241, in launch_agent
    result = agent.run()
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py&quot;, line 129, in wrapper
    result = f(*args, **kwargs)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py&quot;, line 723, in run
    result = self._invoke_run(role)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py&quot;, line 858, in _invoke_run
    self._initialize_workers(self._worker_group)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py&quot;, line 129, in wrapper
    result = f(*args, **kwargs)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py&quot;, line 692, in _initialize_workers
    self._rendezvous(worker_group)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py&quot;, line 129, in wrapper
    result = f(*args, **kwargs)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py&quot;, line 549, in _rendezvous
    workers = self._assign_worker_ranks(store, group_rank, group_world_size, spec)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py&quot;, line 129, in wrapper
    result = f(*args, **kwargs)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py&quot;, line 624, in _assign_worker_ranks
    role_infos = self._share_and_gather(store, group_rank, group_world_size, spec)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py&quot;, line 661, in _share_and_gather
    role_infos_bytes = store_util.synchronize(
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/utils/store.py&quot;, line 64, in synchronize
    agent_data = get_all(store, rank, key_prefix, world_size)
  File &quot;/home/CENSORED/.local/lib/python3.10/site-packages/torch/distributed/elastic/utils/store.py&quot;, line 34, in get_all
    data = store.get(f&quot;{prefix}{idx}&quot;)
RuntimeError: Socket Timeout
</code></pre> <p>The interactive srun takes around 2 seconds to complete, while the sbatch times out after 15 minutes.</p> <ul> <li>I have tried different srun commands using <code>torchrun</code> and <code>torch.distributed.launch</code>.</li> <li>I also replaced the <code>&quot;</code> with <code>'</code> in the srun command inside the sbatch script.</li> <li>I have tried different configurations with v100 gpus and more gpus per node.</li> <li>I have added <code>--jobid</code> to the srun command, with no difference.</li> </ul>
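One pitfall worth ruling out here relates to the quote swap mentioned above: inside a batch script, double quotes make the submitting shell expand `$SLURM_PROCID` once, before srun fans the command out to the nodes, so every task can end up with the same `--node_rank`; single quotes defer expansion to each task, but then variables like `MASTER_ADDR` must be exported so the inner shell can see them. Whether this is the whole story for this job is unclear, but the expansion difference itself is easy to demonstrate outside SLURM:

```shell
#!/bin/sh
VAR=submit-time

# Double quotes: the OUTER shell expands $VAR before the inner shell
# (standing in for an srun-launched task) ever starts.
OUTER=$(bash -c "echo $VAR")

# Single quotes: $VAR reaches the inner shell literally and is expanded
# there, against the per-task environment (set via the VAR= prefix).
INNER=$(VAR=task-time bash -c 'echo $VAR')

echo "outer=$OUTER inner=$INNER"
```

Applied to the sbatch script, that would mean `export MASTER_ADDR MASTER_PORT GPUS_PER_NODE` and single-quoting the `srun bash -c '...'` command so `$SLURM_PROCID` expands per task, as in the working interactive command.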
<python><pytorch><artificial-intelligence><slurm><multi-gpu>
2023-07-04 09:06:26
0
649
Scorix
76,610,986
15,991,297
Filtering Rows where Value has Decreased by x% then Flatlined in Pandas Dataframe
<p><strong>Outline:</strong></p> <p>I have a large dataframe of test results. Each test has a unique testID and within each test there are multiple unitIDs. unitIDs may appear in multiple tests. degCent values are listed at 1 second intervals for each unitID.</p> <p>I want to filter for unitIDs where the degCent has declined by more than x% in a particular test and then flatlined.</p> <p>This is best shown in a chart (the data extract is below). The chart shows the decrease starting at the beginning of the test but it may occur at any point in the data series:</p> <p><a href="https://i.sstatic.net/WTezg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WTezg.png" alt="enter image description here" /></a></p> <p>So, in this example unitID 40190525 in testID 215299311 has declined from 30.2 at 19/06/2023 13:44:59.000 to 8.6 at 19/06/2023 13:45:05.000 and then flatlined from there.</p> <p><strong>Detail</strong></p> <p>First we need to calculate the decrease. For now I would like to identify unitIDs whose degCents have decreased by more than 70% from their high to their &quot;lowest low&quot; before flatlining.</p> <p>From those unitIDs whose degCents have decreased by more than 70% I need to identify those that have flatlined. By &quot;flatlined&quot; I mean unitIDs whose degCents have stayed within +5% of the lowest low for 5 or more seconds.</p> <p>I hope that makes sense. It is a simple concept but not so simple to explain.</p> <p>Please ask if anything does not make sense. I am relatively new to Pandas and am not even sure this is possible.
The data is stored in a dataframe but if there is a better library I am ok with using that.</p> <p>Thanks in advance.</p> <p><strong>Example extract</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>testID</th> <th>Date/Time</th> <th>unitID</th> <th>degCent</th> </tr> </thead> <tbody> <tr> <td>215299311</td> <td>19/06/2023 13:44:59.000</td> <td>40190525</td> <td>30.2</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:44:59.000</td> <td>28677460</td> <td>4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:44:59.000</td> <td>13180675</td> <td>8.6</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:00.000</td> <td>40190525</td> <td>28.2</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:00.000</td> <td>28677460</td> <td>4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:00.000</td> <td>13180675</td> <td>8.6</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:01.000</td> <td>40190525</td> <td>22.4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:01.000</td> <td>28677460</td> <td>3.95</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:01.000</td> <td>13180675</td> <td>8.4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:02.000</td> <td>40190525</td> <td>15.3</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:02.000</td> <td>28677460</td> <td>3.95</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:02.000</td> <td>13180675</td> <td>8.4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:03.000</td> <td>40190525</td> <td>12.3</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:03.000</td> <td>28677460</td> <td>3.95</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:03.000</td> <td>13180675</td> <td>8.4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:04.000</td> <td>40190525</td> <td>8.8</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:04.000</td> <td>28677460</td> <td>3.95</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:04.000</td> <td>13180675</td> 
<td>8.4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:05.000</td> <td>40190525</td> <td>8.6</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:05.000</td> <td>28677460</td> <td>3.95</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:05.000</td> <td>13180675</td> <td>8.4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:06.000</td> <td>40190525</td> <td>8.7</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:06.000</td> <td>28677460</td> <td>3.95</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:06.000</td> <td>13180675</td> <td>8.4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:07.000</td> <td>40190525</td> <td>8.7</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:07.000</td> <td>28677460</td> <td>3.95</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:07.000</td> <td>13180675</td> <td>8.4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:08.000</td> <td>40190525</td> <td>8.8</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:08.000</td> <td>28677460</td> <td>3.95</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:08.000</td> <td>13180675</td> <td>8.4</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:09.000</td> <td>40190525</td> <td>8.6</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:09.000</td> <td>28677460</td> <td>3.95</td> </tr> <tr> <td>215299311</td> <td>19/06/2023 13:45:09.000</td> <td>13180675</td> <td>8.4</td> </tr> </tbody> </table> </div>
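One way to phrase the two conditions in pandas is per (testID, unitID) group: compare the peak to the lowest low for the decline, then check the tail of the series against the +5% band for the flatline. A sketch using the thresholds given above (70% drop, +5% band, 5 seconds) and the example data; note the simplifying assumption that the flatline sits at the end of each series — a mid-series flatline would need a rolling-window check instead of the tail:

```python
import pandas as pd

def flagged_units(df, drop_pct=0.70, band=0.05, flat_secs=5):
    """(testID, unitID) pairs whose degCent fell more than drop_pct from
    its peak AND whose last flat_secs samples sit within +band of the
    lowest low.  Assumes one reading per second per unit."""
    flagged = []
    for (test_id, unit_id), g in df.sort_values("Date/Time").groupby(["testID", "unitID"]):
        vals = g["degCent"].to_numpy()
        peak, low = vals.max(), vals.min()
        if peak <= 0 or (peak - low) / peak <= drop_pct:
            continue                                  # decline too small
        tail = vals[-flat_secs:]
        if len(tail) == flat_secs and (tail <= low * (1 + band)).all():
            flagged.append((test_id, unit_id))
    return flagged

# The declining unit from the example extract, plus a steady one.
vals_a = [30.2, 28.2, 22.4, 15.3, 12.3, 8.8, 8.6, 8.7, 8.7, 8.8, 8.6]
vals_b = [4.0, 4.0] + [3.95] * 9
rows = []
for t, (a, b) in enumerate(zip(vals_a, vals_b)):
    rows.append({"testID": 215299311, "Date/Time": t, "unitID": 40190525, "degCent": a})
    rows.append({"testID": 215299311, "Date/Time": t, "unitID": 28677460, "degCent": b})
df = pd.DataFrame(rows)
result = flagged_units(df)
print(result)
```

Unit 40190525 is flagged (30.2 → 8.6 is a ~71.5% drop, and its last five readings stay below 8.6 × 1.05 = 9.03), while the steady units are skipped at the decline check.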
<python><pandas>
2023-07-04 09:05:23
1
500
James
76,610,964
4,615,389
How to identify the Add button within chrome://settings/passwords page using Selenium Python
<p>I'm writing a test which needs to store a password inside the browser for a dumb site, and I'm performing this task with Selenium Python. I'm able to load <code>chrome://settings/passwords</code> without problems.</p> <p>The task requires clicking the &quot;Add&quot; button and filling the form with dumb data if they are not present. The problem I am facing is the selection of the button, or of any other element of the settings page.</p> <p>This is the HTML element of the button:</p> <pre><code>&lt;cr-button id=&quot;addPasswordButton&quot; title=&quot;Add new password&quot; role=&quot;button&quot; tabindex=&quot;0&quot; aria-disabled=&quot;false&quot;&gt;
  Add
</code></pre> <p>The idea I had was to select it with one of these options:</p> <pre><code>btn = web_driver.find_element(By.ID, &quot;addPasswordButton&quot;)
btn = web_driver.find_element(By.XPATH, &quot;//*[@id='addPasswordButton')]&quot;)
</code></pre> <p>but both return:</p> <pre><code>RtlGetAppContainerNamedObjectPath [0x773C7B3E+238]
</code></pre> <p>The same error occurs with any other element in the page, except the <code>settings-ui</code> at top level. I've also tried to select elements based on their tag name, which seems quite custom, but got nothing.</p> <p>Searching for the error has not given any clue either, due to the scarcity of results. It seems to me that the settings page is built from nested subpages and that Selenium is not able to inspect them.</p> <p>I've also tried to switch to the inner page, without results:</p> <pre><code>a = web_driver.find_element(By.XPATH, &quot;//settings-ui&quot;)
web_driver.switch_to(a)
</code></pre> <p>leads to:</p> <pre><code>{TypeError}TypeError(&quot;'SwitchTo' object is not callable&quot;)
</code></pre> <p>Is there a way to solve this issue and select an element that is provided with a unique ID?</p>
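The hunch about nested subpages matches how chrome://settings is built: it is assembled from nested custom elements, each with its own shadow root, which ordinary `find_element` calls cannot see through (Selenium 4 also exposes `element.shadow_root` for stepping through them one level at a time). One common approach is to walk the `shadowRoot` chain via `execute_script`; a small helper that builds such a script — the element chain used below is a guess at Chrome's current internals and should be verified in DevTools, since it changes between versions:

```python
def shadow_query_script(selectors):
    """Build a JS snippet that descends a chain of shadow roots and
    returns the element matched by the last selector."""
    js = "let el = document.querySelector('{}');".format(selectors[0])
    for sel in selectors[1:]:
        # each hop crosses one shadow boundary
        js += " el = el.shadowRoot.querySelector('{}');".format(sel)
    return js + " return el;"

# Hypothetical chain for the passwords page -- inspect the real nesting
# in DevTools first.
script = shadow_query_script(["settings-ui", "settings-main", "#addPasswordButton"])
print(script)
# btn = web_driver.execute_script(script)
# btn.click()
```

The returned element is a regular `WebElement`, so `click()` and `send_keys()` work on it as usual.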
<python><google-chrome><selenium-webdriver><webdriverwait><shadow-dom>
2023-07-04 09:02:11
2
707
Marco
76,610,386
2,347,063
Excluding sub-folders when uploading files to an Azure container on blob storage
<p>I'm using Azure's <a href="https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-upload-python" rel="nofollow noreferrer">sample code</a> in Python to upload a file into a container on Azure blob storage. Here's the example:</p> <pre><code>def upload_blob_file(self, blob_service_client: BlobServiceClient, container_name: str):
    container_client = blob_service_client.get_container_client(container=container_name)
    with open(file=os.path.join('filepath', 'filename'), mode=&quot;rb&quot;) as data:
        blob_client = container_client.upload_blob(name=&quot;sample-blob.txt&quot;, data=data, overwrite=True)
</code></pre> <p>When I run this on a Mac, in the container on Azure, I only see the file itself uploaded. However, when I run the very same code on a Windows machine (10 Pro, in my case,) I see that all sub-folders also get uploaded with the file. For example, let's say my container's name on Azure is <code>container1</code> and the file I want to upload to Azure is stored on my local machine in the following path: <code>/Users/[my-username]/PycharmProjects/project1/data/file1.txt</code>. When I run the code on:</p> <ul> <li>Mac, container content on Azure: <code>container1/file1.txt</code></li> <li>Windows, container content on Azure: <code>container1/C/Users/[my-username]/PycharmProjects/project1/data/file1.txt</code></li> </ul> <p>How can I exclude the folders and sub-folders that are also created on my blob storage when I run the code from Windows?</p>
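The "virtual folders" in a container come entirely from the blob name passed to `upload_blob`: any path separators in the name become folder levels. So if the Windows run is effectively passing a full local path as the name, stripping it down to the bare file name should flatten the layout. A sketch — `PureWindowsPath` accepts both `/` and `\` as separators, so the same helper works for paths produced on either OS:

```python
from pathlib import PureWindowsPath

def flat_blob_name(local_path: str) -> str:
    # Keep only the final path component, whichever separator style
    # the calling platform used.
    return PureWindowsPath(local_path).name

# blob_client = container_client.upload_blob(
#     name=flat_blob_name(file_path), data=data, overwrite=True)
print(flat_blob_name(r"C:\Users\me\PycharmProjects\project1\data\file1.txt"))
```

If some folder structure is actually wanted in the container, the same idea applies in reverse: build the blob name explicitly (e.g. `f"data/{flat_blob_name(path)}"`) rather than letting the local path leak into it.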
<python><azure><azure-blob-storage>
2023-07-04 07:48:21
1
2,629
Pedram
76,610,058
15,002,748
Optimize GROUPBY in PySpark to run faster
<pre><code>df_grouped = (
    df.groupBy(&quot;Date&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;, &quot;F&quot;, &quot;G&quot;, &quot;H&quot;, &quot;I&quot;, &quot;J&quot;, &quot;K&quot;, &quot;L&quot;)
    .agg(
        first(&quot;M&quot;).alias(&quot;M&quot;),
        first(&quot;N&quot;).alias(&quot;N&quot;),
        first(&quot;O&quot;).alias(&quot;O&quot;),
        first(&quot;P&quot;).alias(&quot;P&quot;),
        first(&quot;Q&quot;).alias(&quot;Q&quot;),
        first(&quot;R&quot;).alias(&quot;R&quot;),
        first(&quot;S&quot;).alias(&quot;S&quot;)
    )
    .withColumn(&quot;Year&quot;, year(col(&quot;Date&quot;)))
    .withColumn(&quot;Month&quot;, month(col(&quot;Date&quot;)))
    .select(&quot;Date&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;, &quot;F&quot;, &quot;G&quot;, &quot;H&quot;, &quot;I&quot;, &quot;J&quot;, &quot;K&quot;, &quot;L&quot;, &quot;M&quot;, &quot;N&quot;, &quot;O&quot;, &quot;P&quot;, &quot;Q&quot;, &quot;R&quot;, &quot;S&quot;, &quot;Year&quot;, &quot;Month&quot;)
    .filter(col(&quot;Date&quot;) &gt;= date_sub(current_date(), 600))
)
</code></pre> <p>Hello,</p> <p>I have the code shown above. It takes a lot of time to run when there are billions of rows. How can I optimize this code so that it runs more efficiently and faster? I have searched for alternative methods but couldn't find any; I'd appreciate any help or advice, thank you!</p>
<python><apache-spark><optimization><pyspark><group-by>
2023-07-04 07:00:53
1
1,127
weizer
76,609,435
4,210,950
Json Normalize to Pandas Dataframe multiple list of dicts
<p>I am having a nested json file like below:</p> <p><strong>test.json</strong></p> <pre><code>{ &quot;resourceType&quot;: &quot;test&quot;, &quot;url&quot;: &quot;/test/abc-oq-001&quot;, &quot;version&quot;: &quot;1.0&quot;, &quot;title&quot;: &quot;xyz test&quot;, &quot;status&quot;: &quot;active&quot;, &quot;date&quot;: &quot;1990-01-01T10:10:10+08:00&quot;, &quot;description&quot;: &quot;one&quot;, &quot;code&quot;: [ { &quot;system&quot;: &quot;http://aaa.com/ValueSet&quot;, &quot;code&quot;: &quot;test&quot; } ], &quot;item&quot;: [ { &quot;linkId&quot;: &quot;twb&quot;, &quot;text&quot;: &quot;twb&quot;, &quot;type&quot;: &quot;group&quot;, &quot;required&quot;: true, &quot;item&quot;: [ { &quot;linkId&quot;: &quot;twb.1&quot;, &quot;text&quot;: &quot;Input&quot;, &quot;type&quot;: &quot;group&quot;, &quot;item&quot;: [ { &quot;linkId&quot;: &quot;twb.1.1&quot;, &quot;code&quot;: [ { &quot;system&quot;: &quot;http://aaa.com/ValueSete&quot;, &quot;code&quot;: &quot;oq001&quot; } ], &quot;text&quot;: &quot;sfd&quot;, &quot;type&quot;: &quot;quantity&quot;, &quot;required&quot;: true }, { &quot;linkId&quot;: &quot;twb.1.2&quot;, &quot;code&quot;: [ { &quot;system&quot;: &quot;http://aaa.com/ValueSet&quot;, &quot;code&quot;: &quot;oq002&quot; } ], &quot;text&quot;: &quot;fsd&quot;, &quot;type&quot;: &quot;quantity&quot;, &quot;required&quot;: true } ] } ] }, { &quot;linkId&quot;: &quot;sfds&quot;, &quot;text&quot;: &quot;sfds&quot;, &quot;type&quot;: &quot;group&quot;, &quot;required&quot;: true, &quot;item&quot;: [ { &quot;linkId&quot;: &quot;fds-history.1&quot;, &quot;code&quot;: [ { &quot;system&quot;: &quot;http://aaa.com/ValueSet&quot;, &quot;code&quot;: &quot;oq004&quot; } ], &quot;prefix&quot;: &quot;1&quot;, &quot;text&quot;: &quot;Are you a dsfdf?&quot;, &quot;type&quot;: &quot;choice&quot;, &quot;required&quot;: true, &quot;answerValueSet&quot;: &quot;http://aaa.com/ValueSet&quot; } ] } ] } </code></pre> <p>I need to normalize this file to have the 
respective columns and rows</p> <pre><code>file_0 = r'test.json' with open(file_0) as json_file_0: json_data_0 = json.load(json_file_0) # pd.json_normalize(json_data_0) pd.json_normalize(json_data_0, record_path=['code'], meta=['resourceType', 'url', 'version', 'title', 'status', 'date', 'description']) </code></pre> <p>However, more than one column contains a nested list of dictionaries. How should this be completely normalized? I understand flatten would completely flatten the data to the respective columns, but I would miss the hierarchy.</p> <p>How should I completely normalize a JSON file having multiple objects with nested lists of dicts?</p>
<python><json><pandas><nested-json>
2023-07-04 04:36:20
0
685
blackfury
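One possible way to keep the hierarchy while flattening is to give `json_normalize` the full path to the innermost list and pull the intermediate identifiers in through `meta` paths. A reduced sketch on a cut-down version of the question's document (only a couple of the original fields are kept):

```python
import pandas as pd

doc = {
    "url": "/test/abc-oq-001",
    "item": [
        {
            "linkId": "twb",
            "item": [
                {
                    "linkId": "twb.1",
                    "code": [{"system": "http://aaa.com/ValueSete", "code": "oq001"}],
                }
            ],
        }
    ],
}

# Walk item -> item -> code, carrying the parent linkIds along as meta columns
# so the hierarchy survives the flattening.
flat = pd.json_normalize(
    doc,
    record_path=["item", "item", "code"],
    meta=["url", ["item", "linkId"], ["item", "item", "linkId"]],
)
```

The resulting frame has one row per innermost `code` entry, with `item.linkId` and `item.item.linkId` columns recording which group each row came from; for the real file, one such call per distinct nested path would likely be needed.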
76,609,373
1,145,760
Is out of bounds access safe in python? Is it bad form?
<pre><code>&gt; [1, 2, 3][1:int(1e9)] [2, 3] </code></pre> <p>Coming from a C background the above code gives me a mini heart attack. But it seems to work. Is it guaranteed to work in any <code>python3</code>? Is it as efficient as <code>&gt; [1, 2, 3][1:int(2)]</code>? Is it considered bad form as in 'prefer to use anything else before relying on this behavior'?</p> <hr /> <p>OK, I've probably described the problem badly. This is my code. The last access is out of bounds most of the time. Is that fine?</p> <pre><code>def cut(b: bytes, max: int) -&gt; [bytes]: '''Chop into pieces no longer than max bytes.''' return [b[i:i+max] for i in range(0, len(b), max)] </code></pre>
<python><arrays><indexing>
2023-07-04 04:14:41
1
9,246
Vorac
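For what it's worth, the clamping the question relies on is documented behaviour for the built-in sequence types: an out-of-range slice endpoint is silently truncated to the sequence length, it never raises, and the cost depends only on how many elements are actually copied. A quick sketch (with `max` renamed, since it shadows the built-in):

```python
def cut(b: bytes, max_len: int) -> list:
    """Chop b into pieces no longer than max_len bytes."""
    return [b[i:i + max_len] for i in range(0, len(b), max_len)]

# The oversized end index in the final slice is clamped,
# exactly as in [1, 2, 3][1:int(1e9)].
pieces = cut(b"abcdefg", 3)  # -> [b'abc', b'def', b'g']
```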
76,609,077
3,204,942
python.lib or python.a or a python.wasm static library usable for nodejs
<p>Is there a way I can use a <code>python.lib</code> or <code>python.a</code> or a <code>python.wasm</code> static library in a nodejs extension to be used as imports? I am looking for the same for other interpreters like <code>ruby</code>, <code>php</code>, <code>perl</code>, <code>rust</code> as well.</p>
<python><php><ruby><perl>
2023-07-04 02:28:26
1
2,349
Gary
76,608,916
3,247,006
How to set a dictionary or list to a cookie and get it properly in Django?
<p>I could set a dictionary or list to a session and get it properly as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; from django.http import HttpResponse def my_view(request): request.session['teacher'] = {'name': 'John', 'age': 36} print(request.session['teacher']) # {'name':'John','age':36} print(request.session['teacher']['name']) # John print(request.session['teacher']['age']) # 36 request.session['student'] = ['David', 21] print(request.session['student']) # ['David', 21] print(request.session['student'][0]) # David print(request.session['student'][1]) # 21 return HttpResponse(&quot;Test&quot;) </code></pre> <p>But, I couldn't set a dictionary or list to a cookie and get it properly as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; from django.http import HttpResponse def my_view(request): response = HttpResponse('Test') response.set_cookie('teacher', {'name': 'John', 'age': 36}) response.cookies['student'] = ['David', 21] return response </code></pre> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; from django.http import HttpResponse def my_view(request): print(request.COOKIES['teacher']) # {'name': 'John', 'age': 36} print(request.COOKIES['teacher']['name']) # Error print(request.COOKIES['teacher']['age']) # Error print(request.COOKIES['student']) # ['David', 21] print(request.COOKIES['student'][0]) # [ print(request.COOKIES['student'][1]) # ' HttpResponse(&quot;Test&quot;) </code></pre> <p><a href="https://i.sstatic.net/4Lbbz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Lbbz.png" alt="enter image description here" /></a></p> <p>So, how can I set a dictionary or list to a cookie and get it properly in Django?</p>
<python><django><list><dictionary><django-cookies>
2023-07-04 01:31:39
0
42,516
Super Kai - Kazuya Ito
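A hedged note on the question above: cookies only carry strings, which is why the dict and list come back mangled; sessions serialize values for you, cookies don't. The usual workaround is to serialize to JSON before `set_cookie` and parse on the way back. Only the serialization round-trip is runnable here; the Django view wiring is shown as comments:

```python
import json

teacher = {"name": "John", "age": 36}
student = ["David", 21]

# In the view that sets the cookie:
# response.set_cookie("teacher", json.dumps(teacher))
encoded = json.dumps(teacher)

# In the view that reads it, request.COOKIES["teacher"] is just a str,
# so it has to be parsed back:
decoded = json.loads(encoded)
```

After `json.loads`, `decoded["name"]` and `decoded["age"]` work again, and the same pattern applies to the list-valued `student` cookie.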
76,608,775
1,767,106
fast.ai course lesson 1: object has no attribute 'fine_tune'
<p>I'm following the official course</p> <p>Notebook For Lesson 1: <a href="https://www.kaggle.com/code/jhoward/is-it-a-bird-creating-a-model-from-your-own-data" rel="nofollow noreferrer">https://www.kaggle.com/code/jhoward/is-it-a-bird-creating-a-model-from-your-own-data</a></p> <p>Tutorial docs for the fast.ai vision library: <a href="https://docs.fast.ai/tutorial.vision.html" rel="nofollow noreferrer">https://docs.fast.ai/tutorial.vision.html</a></p> <p>this code:</p> <pre class="lang-py prettyprint-override"><code> dls = DataBlock( blocks=[ImageBlock, CategoryBlock], get_items=get_image_files, splitter=RandomSplitter(valid_pct=0.2, seed=42), get_y=parent_label, item_tfms=[Resize(192, method='squish')] ).dataloaders(path, bs=32) learn = vision_learner(dls, resnet18, metrics=error_rate) learn.fine_tune(3) </code></pre> <p>results in this error:</p> <pre><code>Traceback (most recent call last): File &quot;/Projects/fastai/l1-image-classification/l1_image_classification/build_model.py&quot;, line 39, in &lt;module&gt; build_model() File &quot;/Projects/fastai/l1-image-classification/l1_image_classification/build_model.py&quot;, line 25, in build_model learn.fine_tune(3) ^^^^^^^^^^^^^^^ File &quot;/Users/sys/Library/Caches/pypoetry/virtualenvs/l1-image-classification-ApaWH_9y-py3.11/lib/python3.11/site-packages/fastcore/basics.py&quot;, line 496, in __getattr__ if attr is not None: return getattr(attr,k) ^^^^^^^^^^^^^^^ File &quot;/Users/sys/Library/Caches/pypoetry/virtualenvs/l1-image-classification-ApaWH_9y-py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1614, in __getattr__ raise AttributeError(&quot;'{}' object has no attribute '{}'&quot;.format( AttributeError: 'Sequential' object has no attribute 'fine_tune' </code></pre> <p>What am I doing wrong?</p>
<python><fast-ai>
2023-07-04 00:35:26
1
20,816
clay
76,608,550
10,982,177
Using Categorical Variables for locations as an additional regressor to the Prophet model
<p>I am currently trying to estimate the ridership counts for a few selected locations. Say, I have 3 bus stops labeled <code>A,B,C</code>. On the one hand, I can obviously create 3 separate Prophet models to predict each location individually. On the other hand, I wonder if I could use one-hot encoding for the labels, and then pass the encoded values as a new regressor(via</p> <pre><code>m.add_regressor() </code></pre> <p>to the Prophet model to have a single model, instead of 3 to predict the ridership at each location. This would imply that I would also need to pass the encoded category along with the future date for the prediction.</p> <p>If I am wrong, I would be glad to hear if there's a different approach to using multiple geographical locations in a single model or if it is actually unrealistic.</p>
<python><time-series><prediction><facebook-prophet>
2023-07-03 23:09:12
0
612
Alex.Kh
76,608,528
6,284,287
How can I get the extra_package option to work for a Dataflow Flex Template?
<p>I have a Dataflow flex template I am trying to run which has to install a private repo. I followed the Beam documentation <a href="https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/" rel="nofollow noreferrer">here</a> which says to use the --extra_package pipeline option to specify the path to a tarball and the Dataflow documentation <a href="https://cloud.google.com/dataflow/docs/guides/templates/configuring-flex-templates#specify-options" rel="nofollow noreferrer">here</a> that says to specify the option as a parameter in the metadata file:</p> <pre><code>{ &quot;description&quot;: &quot;Dataflow flex template test&quot;, &quot;name&quot;: &quot;dataflow-flex-test&quot;, &quot;parameters&quot;: [ { &quot;name&quot;: &quot;kafka_topic&quot;, &quot;label&quot;: &quot;kafka_topic&quot;, &quot;helpText&quot;: &quot;Specify a confluent kafka topic to read from.&quot; }, { &quot;name&quot;: &quot;extra_package&quot;, &quot;label&quot;: &quot;extra_package&quot;, &quot;helpText&quot;: &quot;Specify a local package.&quot; } ] } </code></pre> <p>And here is my run command:</p> <pre><code>gcloud dataflow flex-template run ${JOB_NAME} \ --template-file-gcs-location ${GCS_PATH}/templates/${TEMPLATE_TAG}/${TEMPLATE_NAME}.json \ --region ${GCP_REGION} \ --staging-location ${GCS_PATH}/staging \ --temp-location ${GCS_PATH}/temp \ --subnetwork ${SUBNETWORK} \ --parameters kafka_topic=${KAFKA_TOPIC} \ --parameters extra_package=${PACKAGE} </code></pre> <p>Where package is just the name of my package &lt;my_package.tar.gz&gt; which is in the same directory. When I run the template I get:</p> <pre><code>ModuleNotFoundError: No module named &lt;my_module&gt; </code></pre> <p>I'm wondering if the extra_package option is even supported in Flex templates. I've looked through the logs and extra_package is in the launch args but it appears to do absolutely nothing. I also checked the staging bucket which has the SDK tarball, the pickled main session etc - it's not there either when I think it should be. How can I install my private repo to use for Dataflow jobs? Thank you.</p>
<python><google-cloud-dataflow><apache-beam>
2023-07-03 23:02:07
2
626
CClarke
76,608,404
272,023
How to include Python namespace package in AWS Lambda distribution zip?
<p>I have a Python namespace package that I want to install into an AWS lambda. The package directory structure looks like this:</p> <pre><code>mypackage foo __init__.py bar.py </code></pre> <p>Note that there is no <code>__init__.py</code> at the top level because it is a namespace package.</p> <p>I want to include it in my lambda directly without installing it into a layer, if possible. My distribution zip contents looks like this:</p> <pre><code>mypackage foo __init__.py bar.py my_app __init__.py handler.py </code></pre> <p>In the above, <code>handler.py</code> imports <code>mypackage.foo.bar.ExampleClass</code>.</p> <p>I'm getting &quot;Unable to import ...&quot; errors because &quot;No module named mypackage&quot;. That <code>mypackage</code> package I am importing needs to be a namespace package so I can't just put a <code>__init__.py</code> file in it.</p> <p>Is there a way of using Python namespace packages in AWS Lambda?</p>
<python><amazon-web-services><aws-lambda>
2023-07-03 22:22:13
1
12,131
John
76,608,243
12,224,591
Get X of Very Large Polynomial at Y? (Python 3.10, NumPy)
<p>I'm attempting to calculate all real X-values, at a certain Y-value, of a 20-degree polynomial provided in a coefficent list of ascending powers. I'm using Python 3.10 as well as the roots function of the NumPy library.</p> <p>(I previously posted a <a href="https://stackoverflow.com/questions/76603915/get-polynomial-x-at-y-python-3-10-numpy">simillar question here on StackOverflow</a>, attempting to solve this issue by producing a much simpler example with a far smaller polynomial. However, while the solution to that posting did solve the issue for small polynomials, it did not solve the issue for when I use large polynomials.)</p> <p>Here is my example code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def main(): # Declare 20-degree polynomial coeffs = np.array([0.028582380269090664, -11.07209382330515, 81.98886137780704, -231.55726577581814, 344.8963540923082, -312.0774305574022, 185.36300988588312, -75.58767327080196, 21.660357329823697, -4.380346453954649, 0.6125779086029393, -0.05524029639869697, 0.002489646479713822, 4.159253423505381e-05, -9.927184104751197e-06, 8.634292922740336e-08, 3.278890552085848e-08, -3.350925351280405e-10, -1.1510786631391544e-10, -2.3053456641329463e-13, 3.9927010006271344e-13, 8.499990550259627e-15, -1.2514130885401332e-15, -5.653183054629818e-17, 3.5580325101956145e-18, 2.634844330077943e-19, -1.2086461102288157e-20, -9.772155900613053e-22, 8.336597931160041e-23, -2.286862805703104e-24, 2.2638043801338818e-26], dtype = float) y = -5 # Create polynomial data on interval (0, 17) polyDataX = np.linspace(0, 17) polyDataY = np.empty(shape = len(polyDataX), dtype = float) for i in range(len(polyDataX)): polyDataY[i] = 0 for j in range(len(coeffs)): polyDataY[i] += coeffs[j] * pow(polyDataX[i], j) # Solve for all real solutions coeffs[-1] -= y sols = np.roots(coeffs).tolist() i = 0 while (i &lt; len(sols)): if (sols[i].imag != 0) or (sols[i].real &lt; 0) or (sols[i].real &gt; 17): del sols[i] i -= 1 else: 
sols[i] = round(sols[i].real, 2) i += 1 # Plot polynomial &amp; solutions plt.xlim([0, 17]) plt.ylim([-30, 30]) plt.plot(polyDataX, polyDataY, color = &quot;gray&quot;) plt.axhline(y, color = &quot;blue&quot;) for sol in sols: plt.axvline(sol, color = &quot;red&quot;) plt.title(&quot;Sols: &quot; + str(sols)) plt.show() plt.close() plt.clf() if (__name__ == &quot;__main__&quot;): main() </code></pre> <p>First I generate a 20-degree polynomial (stored in <code>coeffs</code>), and use the <code>roots</code> function in order to acquire all X-values (stored in <code>sols</code>), at a certain Y-value (stored in <code>y</code>). I subsequently filter out all the imaginary solutions, leaving only the real ones at the desired interval of <code>[0, 17]</code>, and I plot the polynomial, alongside the desired Y-value and found real X-values. I colored the polynomial in <code>gray</code>, the desired Y-value in <code>blue</code> and all the filtered X-values in <code>red</code>.</p> <p>In the above example, I attempt to acquire all real X-values at the Y-value of <code>-5</code>. The X-values are listed as <code>[3.92, 1.34, 1.13]</code>, which is clearly incorrect according to the generated plot: <a href="https://i.sstatic.net/I4Vf6l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I4Vf6l.png" alt="enter image description here" /></a></p> <p>According to the plot, the actual X-values for a Y-value of <code>-5</code> should be ~<code>11.2</code> and ~<code>16</code>.</p> <p>What am I doing wrong here? Why are the listed solutions from the <code>roots</code> function incorrect here?</p> <p>Thanks for reading my post, any guidance is appreciated.</p>
<python><numpy><polynomials>
2023-07-03 21:40:26
1
705
Runsva
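Two details in the question's setup may explain the wrong roots, and both come from coefficient ordering: `np.roots` expects coefficients from highest power to lowest, while `coeffs` there is ascending; and for an ascending list, the constant term to shift by `y` is `coeffs[0]`, not `coeffs[-1]`. The `numpy.polynomial` module works in ascending order directly. A small sketch with a known quadratic:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Ascending coefficients for 6 - 5x + x^2 = (x - 2)(x - 3).
coeffs = np.array([6.0, -5.0, 1.0])
y = 0.0

shifted = coeffs.copy()
shifted[0] -= y               # constant term is index 0 in ascending order

roots = P.polyroots(shifted)  # polyroots takes ascending coefficients
```

For the high-degree case in the question the same pattern would apply: `P.polyroots` on the ascending `coeffs` with `coeffs[0] -= y`, then filter to real roots in `[0, 17]` as before.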
76,608,166
9,291,575
How do I parse a list of datetimes with a 's' resolution in pandas 2?
<p>I have a csv with a datetime column. However, dates go all the way from 1 AD to 3000 AD. They are out of range for a normal <code>datetime64[ns]</code> dtype.</p> <p>In pandas &lt; 2, my workaround was to have this column with a &quot;period[H]&quot; dtype. I had a custom function doing this, which I passed as a <code>date_parser</code> in <code>pd.read_csv</code>.</p> <p>In pandas &gt;= 2, an option to have timestamps with other resolutions has been introduced. The <code>date_parser</code> argument has been deprecated, so the previous solution is also not doable. I thought I could simply cast the whole column of strings with <code>pd.to_datetime</code>.</p> <p>But that doesn't work.</p> <p>Full example:</p> <p><code>example.csv</code>:</p> <pre><code>index,date A,&quot;0004-04-04T12:30&quot; B,&quot;2004-04-04T12:30&quot; C,&quot;3004-04-04T12:30&quot; </code></pre> <h3>1. Clueless optimism - Fails</h3> <p>Dates are ISO8601, easiest of the formats.</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_csv('example.csv', parse_dates=['date']) print(df.date.dtype, '|', type(df.date.iloc[0])) </code></pre> <p>Got a warning:</p> <pre class="lang-none prettyprint-override"><code>&lt;ipython-input-103-059359fbaeab&gt;:1: UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format </code></pre> <p>And output: <code>object | &lt;class 'str'&gt;</code></p> <p>If <code>example.csv</code> only contains <code>datetime64[ns]</code>-valid dates, this works perfectly.</p> <h3>2. As documented - Fails</h3> <p>Doc says: &quot;read in as object and then apply to_datetime() as-needed.&quot;</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_csv('example.csv') pd.to_datetime(df.date) </code></pre> <p>Fails with</p> <pre class="lang-none prettyprint-override"><code>OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 0004-04-04T12:30, at position 0 </code></pre> <p><code>pd.to_datetime(df.date, unit='s')</code> gives:</p> <pre class="lang-none prettyprint-override"><code>ValueError: non convertible value 0004-04-04T12:30 with the unit 's', at position 0 </code></pre> <p>so I guess the <code>unit</code> argument here doesn't have the meaning I expected.</p> <h3>3. Per-element conversion - Partially works</h3> <pre class="lang-py prettyprint-override"><code>def to_timestamp(val): if pd.isna(val): return pd.Timestamp('NaT', unit='s') return pd.Timestamp(val, unit='s') df = pd.read_csv('example.csv') df['date'] = df.date.apply(_to_timestamp) print(df.date.dtype, '|', type(df.date.iloc[0])) </code></pre> <p>Prints: <code>object | &lt;class 'pandas._libs.tslibs.timestamps.Timestamp'&gt;</code></p> <p>So indeed, the individual elements have been converted to timestamps, yay! But the column itself still has an &quot;object&quot; dtype, not a &quot;datetime&quot; one.</p> <p><code>df.date.astype('datetime64[s]')</code> fails with</p> <pre class="lang-none prettyprint-override"><code>OutOfBoundsDatetime: Cannot cast 0004-04-04 12:30:00 to unit='ns' without overflow., at position 0 </code></pre> <p>even though I specified a &quot;s&quot; resolution? Same with <code>pd.to_datetime(df.date)</code>.</p> <h3>4. Step outside pandas - Works</h3> <pre class="lang-py prettyprint-override"><code>import numpy as np def to_d64(val): if pd.isna(val): return np.datetime64('') return np.datetime64(val) df = pd.read_csv('example.csv') df['date'] = np.array(df.date.apply(to_d64), dtype='M8[s]') print(df.date.dtype, '|', type(df.date.iloc[0])) </code></pre> <p>Success! It prints:</p> <pre class="lang-none prettyprint-override"><code>datetime64[s] | &lt;class 'pandas._libs.tslibs.timestamps.Timestamp'&gt; </code></pre> <p>But at what price? In the real code, my csv is thousands of lines long. The pandas &lt; 2 code made sure to avoid converting element-per-element, as this had performance issues. I was hoping that the support for &quot;s&quot; resolution would be accessible directly from the string-parsing step. I also note that my final example works because my strings are in an ISO8601 format, which is supported by numpy. That is not a general solution at all.</p> <p>Did I miss something? Does anybody have a cleaner solution?</p> <p>EDIT: I am using pandas 2.0.3 and numpy 1.24.4, Python 3.11.4 on Linux (Fedora 37).</p>
<python><pandas><numpy><datetime>
2023-07-03 21:21:38
1
708
Aule Mahal
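A note on the question above: since the strings are ISO 8601, one vectorized route that avoids per-element `apply` is to hand the whole string column to numpy at once, which parses ISO dates natively at the requested resolution; in pandas >= 2 the resulting `datetime64[s]` array can then be wrapped in a Series without being forced to nanoseconds (the pandas step is shown as a comment and not verified against every 2.x version):

```python
import numpy as np

dates = ["0004-04-04T12:30", "2004-04-04T12:30", "3004-04-04T12:30"]

# numpy parses ISO 8601 strings directly, vectorized, at second resolution;
# years 4 and 3004 are both in range for datetime64[s].
arr = np.array(dates, dtype="datetime64[s]")

# In pandas >= 2 this should keep the 's' resolution:
# df["date"] = pd.Series(arr)   # dtype: datetime64[s]
```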
76,607,983
12,596,824
converting column with multiple datatypes pandas to character
<p>I read in a csv file and I have col1 having multiple data types. Col1 is apparently of type int64, but the values are shown below and they are not integers.</p> <pre><code>col1 1.0 1 NaN 0.0 0 2+ </code></pre> <p>When I do <code>df['col1'].astype(int)</code> I get an error: cannot convert float NaN to integer...but this shouldn't work anyway. I want to make all the floats to be integers and then convert to characters.</p> <p>Expected output should be</p> <pre><code>col1 1 1 NaN 0 0 2+ </code></pre>
<python><pandas>
2023-07-03 20:43:18
1
1,937
Eisen
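Since the column above mixes floats, strings, `NaN`, and values like `'2+'`, no whole-column cast can work; one possible sketch is a small per-value normalizer that strips the trailing `.0` from float-like values and leaves everything else untouched (the function name is my own):

```python
import numpy as np
import pandas as pd

def normalize(v):
    """1.0 -> '1', 0.0 -> '0'; NaN and strings like '2+' pass through."""
    if pd.isna(v):
        return v
    s = str(v)
    return s[:-2] if s.endswith(".0") else s

col = pd.Series([1.0, "1", np.nan, 0.0, "0", "2+"], dtype=object)
cleaned = col.map(normalize)
```

The result keeps `NaN` as missing while every other value becomes a string, matching the expected output in the question.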
76,607,968
7,668,453
Why are custom unicode characters sometimes not printing to the terminal in python?
<p>I have created a custom font and mapped a few new characters to unmapped unicode codepoints. This works fairly well, but these characters are seemingly randomly getting converted into their codepoint representation ('\u2fd6' for example) instead of showing the custom font.</p> <p>I've isolated the issue to a printing error when printing tuples and arrays. (Perhaps on more occasions too.)</p> <p>The following code shows when it works and when it does not. (Remember I have a custom font and do see a character where the box is, which is the desired behaviour):</p> <pre><code>char = &quot;⿖&quot; print(char) # Ok tuple = (&quot;a&quot;, &quot;⿖&quot;) print(tuple) # not Ok print(tuple[0], tuple[1]) # Ok array = [&quot;a&quot;, &quot;⿖&quot;] print(array) # not Ok print(array[0], array[1]) # Ok </code></pre> <p>Produces the following output in the terminal:</p> <pre><code>⿖ ('a', '\u2fd6') a ⿖ ['a', '\u2fd6'] a ⿖ </code></pre> <p>I don't even know where to start. Is this a python issue? Any ideas how to avoid this conversion to unicode codepoints?</p>
<python><python-unicode>
2023-07-03 20:38:14
0
351
Torben Nordtorp
76,607,902
18,769,241
How to replicate rows of a dataframe a fixed number of times?
<p>I want to replicate rows of a dataframe to prepare for adding a column. The dataframe contains a <em>years</em> column and I want to add a fixed column of <em>months</em>. The idea is to replicate each same-year row exactly 12 times then add a fixed value column (1-12). My code is the following:</p> <pre><code> all_years = dataframe[&quot;Year&quot;].unique().tolist() new_dataset = pd.DataFrame() for idx, year in enumerate(all_years): rows_dataframe = pd.concat( [dataframe.where(dataframe[&quot;Year&quot;] == year).dropna()] * 12, ignore_index=True) new_dataset = pd.concat([rows_dataframe, new_dataset], ignore_index=True) </code></pre> <p>The results are correct, but can I avoid the for loop here, and implement this in a more &quot;pandas-ic&quot; way?</p> <p>EDIT: expected results for one value of years (here 2012) is: (to note that months column is not added through my code, but added it to show the final output)</p> <pre><code>+-------+--------+---------+ | Years | Months | SomeCol | +-------+--------+---------+ | 2011 | 12 | val1 | +-------+--------+---------+ | 2012 | 1 | val1 | +-------+--------+---------+ | 2012 | 2 | val1 | +-------+--------+---------+ | 2012 | 3 | val1 | +-------+--------+---------+ | 2012 | 4 | val1 | +-------+--------+---------+ | 2012 | 5 | ... | +-------+--------+---------+ | 2012 | 6 | ... | +-------+--------+---------+ | 2012 | 7 | val1 | +-------+--------+---------+ | 2012 | 8 | val1 | +-------+--------+---------+ | 2012 | 9 | val1 | +-------+--------+---------+ | 2012 | 10 | | +-------+--------+---------+ | 2012 | 11 | | +-------+--------+---------+ | 2012 | 12 | | +-------+--------+---------+ | 2013 | 1 | ... | +-------+--------+---------+ </code></pre>
<python><pandas>
2023-07-03 20:24:21
4
571
Sam
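A loop-free way to get the same result as the question above is `Index.repeat` to duplicate every row 12 times and `np.tile` for the fixed month column. A sketch (column names reduced to the ones shown in the expected output):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Year": [2011, 2012], "SomeCol": ["val1", "val2"]})

# Each original row becomes 12 consecutive copies.
out = df.loc[df.index.repeat(12)].reset_index(drop=True)

# One fixed 1..12 block per original row.
out["Month"] = np.tile(np.arange(1, 13), len(df))
```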
76,607,814
6,018,285
pytest-bdd Retrieve entire unique row based on particular value of outline table
<p>I am looking to get the entire unique row based on test_type_to_run value, for example I want to only retrieve the rows having test_type_to_run==regression and skip the rows with 'smoke'</p> <pre><code>Feature: Add a new module using API Scenario Outline: the admin can Add a project Given need to call api with &lt;type&gt; for &quot;createNewModule&quot; and &lt;apipayload&gt; value and &lt;test_type_to_run&gt; Given set header &quot;Content-Type:application/json&quot; with &lt;role&gt; Given create payload as &lt;apipayload&gt; When Call &quot;createNewModule&quot; api using &quot;POST&quot; method Then validate the status code as &lt;respcode&gt; in response body Then validate the message is returned as &lt;message&gt; Then validate the DB records if the &lt;message&gt; is Success } </code></pre> <pre><code>Examples: |type |apipayload |respcode|msg |role|test_type_to_run | |ValidValues|ValidValsAPI |200 |Success|Tester|smoke| |InValidVals |InValidValsAPI|400 |Error|Tester|smoke| |ValidNumbers|ValidNumbersAPI|200 |Success|Developer|regression| |ValidIntegers|ValidIntegersAPI|202 |Success|Developer Admin|regression| |GarbageData|GarbageDataAPI|400 |Error|Developer Admin|regression| </code></pre> <p>Expected Test method:</p> <pre><code>@given(parsers.parse('need to call api with {type} for &quot;{api}&quot; and {api_payload} value and {test_type_to_run} with role {role}')) def need_to_call_api_for(api, test_type, api_payload, test_type_to_run): if test_type_to_run == &quot;regression&quot;: site_url = call_api(api, payload=api_payload, test_type=type, role=role) </code></pre> <p>The above function should only receive the input below based on <strong>test_type_to_run==&quot;smoke&quot;</strong>:</p> <pre><code>|ValidValues|ValidValsAPI |200 |Success|Tester|smoke| |InValidVals |InValidValsAPI|400 |Error|Tester|smoke| </code></pre> <p>Can something like this be possible in pytest-bdd?</p>
<python><pytest><python-behave><pytest-bdd>
2023-07-03 20:06:58
1
454
Automation Engr
76,607,703
1,860,222
Correct way to define properties in an abstract python class
<p>I have a project where I want to create an abstract base class to define some shared properties and methods. I created a simple abstract class that looks like:</p> <pre><code>from abc import ABCMeta, abstractmethod class Person(metaclass=ABCMeta): #this doesn't seem to work # password: str def __init__(self): pass @abstractmethod def getRoles(self) -&gt; list: pass #neither does this @abstractmethod @property def password(self): pass @abstractmethod @password.setter def password(self, password): pass </code></pre> <p>I then have a child class that looks like:</p> <pre><code>from src.Person import Person class User(Person): def __init__(self, username: str = &quot;blank&quot;, password: str =&quot;&quot;): super(User, self).__init__() self.roles = [&quot;System User&quot;] self.username = username self.password = password def getRoles(self): return self.roles </code></pre> <p>So far so good. Where things get tricky is that I also want a 'wrapper' function that will extend any Person class at runtime. To do that I created a wrapper function that looks like this:</p> <pre><code>from src.Person import Person def person_wrapper(person: Person): class MyWrapper(person): def __init__(self, mask: bool, extra_roles: list): super(MyWrapper, self).__init__() self.mask = mask self.extra_roles = extra_roles def getRoles(self) -&gt; list: return self.getRoles() + self.extra_roles def getPassword(self) -&gt; str: if self.mask: return self.password.encode(&quot;ascii&quot;) return MyWrapper </code></pre> <p>That wrapper then gets called from my main like:</p> <pre><code>if __name__ == '__main__': base_env: Person = get_environment(&quot;user&quot;) env = person_wrapper(base_env)( True, [&quot;abc&quot;, &quot;123&quot;]) print(env.getRoles()) print(env.getPassword()) </code></pre> <p>The problem I'm running into is that properties like 'password' don't seem to be visible in the wrapper function. I tried defining them as an instance variable (password: str) and as a property using decorators. In both cases the IDE flags password as an unresolved attribute reference. If I try to run the code with the field defined as a property I get the error:</p> <pre><code>AttributeError: attribute '__isabstractmethod__' of 'property' objects is not writable </code></pre> <p>I'm not quite sure where I'm going wrong here. What is the correct way to define abstract variables?</p>
<python><python-3.x><inheritance><abstract-class>
2023-07-03 19:47:38
1
1,797
pbuchheit
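The `AttributeError: attribute '__isabstractmethod__' of 'property' objects is not writable` in the question above comes from the decorator order: `@property` has to be the outermost (topmost) decorator, with `@abstractmethod` applied underneath it. A sketch of the abstract class with that ordering and a concrete subclass:

```python
from abc import ABCMeta, abstractmethod

class Person(metaclass=ABCMeta):
    @property                # property outermost...
    @abstractmethod          # ...abstractmethod innermost
    def password(self):
        ...

    @password.setter
    @abstractmethod
    def password(self, value):
        ...

class User(Person):
    def __init__(self, password=""):
        self._password = password

    @property
    def password(self):
        return self._password

    @password.setter
    def password(self, value):
        self._password = value
```

With this order the property on `Person` stays abstract (so incomplete subclasses can't be instantiated), while `User` overrides it with a normal property.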
76,607,498
131,874
Fit single string to a template
<p>I want to fit a string to a template like this</p> <pre><code>s = '12345678901234' print('{0}.{1}.{2}/{3}-{4}'.format(s[0:2], s[2:5], s[5:8], s[8:12], s[12:])) 12.345.678/9012-34 </code></pre> <p>Or using formatted string literals:</p> <pre><code>print(f'{s[0:2]}.{s[2:5]}.{s[5:8]}/{s[8:12]}-{s[12:]}') </code></pre> <p>Is there some formatting string syntax to fit the string without slicing it first?</p>
<python><string><format>
2023-07-03 19:05:57
2
126,654
Clodoaldo Neto
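On the question above: there is no slice-aware conversion in the format-spec mini-language, but one way to avoid explicit slicing is to unpack the string into per-character arguments of a single template (a sketch; whether it is clearer than slicing is a matter of taste):

```python
s = "12345678901234"

# Fourteen single-character placeholders, filled by unpacking the string.
template = "{}{}.{}{}{}.{}{}{}/{}{}{}{}-{}{}"
formatted = template.format(*s)  # -> '12.345.678/9012-34'
```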
76,607,484
9,328,846
How to extract the hour and minute from a given date string in Python
<p>I have the following string in Python:</p> <pre><code>2023-07-03T14:30:00.000Z </code></pre> <p>How can I extract the hour and minute from this string in the most elegant way?</p>
<python><python-3.x><datetime><python-datetime>
2023-07-03 19:03:15
2
2,201
edn
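One standard-library route for the question above, assuming the trailing `Z` means UTC: `datetime.fromisoformat`, with a small shim because Python < 3.11 does not accept the `Z` suffix:

```python
from datetime import datetime

def hour_minute(ts: str):
    """Return (hour, minute) from an ISO 8601 string like '...T14:30:00.000Z'."""
    # Python < 3.11 fromisoformat doesn't accept a trailing 'Z'.
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return dt.hour, dt.minute
```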
76,607,225
15,578,536
pandas erasing scatterplot on ax
<p>I want to use pandas to plot lines for the ease of use with handling the dates on the x-axis. However, I also want to overlay the line plot with a scatterplot to show each individual point. I was able to do this successfully in a previous version of pandas (1.4.2 I believe?) but I am now on a new computer with version 1.5.3 and this is no longer working. See the MVP. Note that the red scatterplots aren't showing up, but that removing the s.plot call shows the scatterplots. Somehow, pandas is erasing the scatter points.</p> <pre><code>s = pd.Series(np.random.randn(100)) s.index = pd.period_range(start = &quot;2000&quot;, freq = &quot;M&quot;, periods = 100).to_timestamp() f, ax = plt.subplots() s.plot(ax = ax) ax.scatter(s.index, s, color = &quot;red&quot;) plt.show() </code></pre> <p>Pandas version = 1.5.3 Matplotlib version = 3.7.0</p> <p>plotting using %matplotlib inline</p>
<python><python-3.x><pandas><matplotlib>
2023-07-03 18:08:10
2
1,030
John Sorensen
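One explanation consistent with the symptoms above: when pandas plots a datetime-indexed series it installs its own x-axis units, so a later `ax.scatter` with raw timestamps can land at coordinates far outside the visible range rather than being "erased". A workaround sketch that keeps everything in pandas' units by drawing the markers with `Series.plot` as well (behaviour may differ across pandas versions):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the example
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

s = pd.Series(np.random.randn(100))
s.index = pd.period_range(start="2000", freq="M", periods=100).to_timestamp()

f, ax = plt.subplots()
s.plot(ax=ax)                          # line, pandas-managed x units
s.plot(ax=ax, style="o", color="red")  # markers drawn in the same units
```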
76,607,126
2,444,909
Inconsistent results between pow, np.power and direct multiplication between windows and linux in python
<p>Python outputs inconsistent values between methods to square a float, under windows:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; pow(1.3938562725915322, 2) 1.9428353086427599 &gt;&gt;&gt; np.power(1.3938562725915322, 2) 1.9428353086427599 &gt;&gt;&gt; 1.3938562725915322 * 1.3938562725915322 1.9428353086427597 </code></pre> <p>Using <strong>Decimal</strong> confirms that the first 2 outputs have a floating error, and the third one is not rounded:</p> <pre><code>&gt;&gt;&gt; from decimal import Decimal &gt;&gt;&gt; np.power(Decimal(1.3938562725915322), 2) Decimal('1.942835308642759772416265092') </code></pre> <p>The same code under linux outputs <code>1.9428353086427597</code> all three times.</p> <p>Is there a way to make the computations consistent between platforms ?</p>
<python><windows><floating-point><precision>
2023-07-03 17:52:01
2
17,915
mxdbld
76,607,079
6,260,154
Need help in modifying existing regex pattern in python to not include last word in pattern searching
<p>I need help in modifying existing regex pattern to not include the last word in pattern searching.</p> <p>Currently, this is the <a href="https://regex101.com/r/tZRR8Y/1" rel="nofollow noreferrer">regex</a> pattern I have <code>.*\w(?=\s+\w+-)</code> and this is the input string:</p> <pre><code> 13wfe + 123dg Tetest-xt ldf-dfdlj-dfldjf-dfs test </code></pre> <p>Now, this pattern returns the match which is expected:</p> <pre><code> 13wfe + 123dg Tetest-xt </code></pre> <p>But, I have another scenario, where my input string is:</p> <pre><code> 13wfe + 123dg Tetest-xt ldf-dfdlj-dfldjf-dfs test-test </code></pre> <p>Now, my expected match is</p> <pre><code> 13wfe + 123dg Tetest-xt </code></pre> <p>But I am getting</p> <pre><code> 13wfe + 123dg Tetest-xt ldf-dfdlj-dfldjf-dfs </code></pre> <p>So now, basically I want to modify the <a href="https://regex101.com/r/TEBqOy/1" rel="nofollow noreferrer">regex</a> pattern which will basically ignore the last word in sentence during pattern search. Kindly guide me on this.</p>
<python><regex>
2023-07-03 17:43:53
2
1,016
Tony Montana
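One way to tighten the lookahead from the question above (under the assumption that the intended rule is: the hyphenated token must not be the sentence's final word, i.e. it must itself be followed by more whitespace-separated text):

```python
import re

# Require the hyphenated token after the match to be followed by whitespace,
# so a hyphenated *last* word no longer extends the match.
pattern = r".*\w(?=\s+\w+-\S*\s)"

s1 = " 13wfe + 123dg Tetest-xt ldf-dfdlj-dfldjf-dfs test"
s2 = " 13wfe + 123dg Tetest-xt ldf-dfdlj-dfldjf-dfs test-test"
m1 = re.search(pattern, s1)
m2 = re.search(pattern, s2)
```

With this pattern both inputs yield the same match ending at `Tetest-xt`, because `test-test` at the end of the line has no trailing whitespace and therefore cannot satisfy the lookahead.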
76,607,056
12,596,824
Randomly sample a dataframe while reading a csv in pandas
<p>I want to randomly sample a data frame without reading the entire csv in pandas. Is this possible?</p> <p>There's an argument <code>nrows</code> but I think it gets the first n rows and it's not actually random.</p> <p>I don't want to use <code>.sample()</code> because that means I have to read the entire csv first.</p> <p>My code</p> <pre><code>sample_size = 10 df = pd.read_csv(input_data, nrows=sample_size) </code></pre>
<python><pandas>
2023-07-03 17:39:46
0
1,937
Eisen
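One sketch for the question above (the function name is illustrative): count the data rows first, then hand `read_csv` a random `skiprows` set, so pandas only parses and holds the sampled rows. The file is still scanned once for the count, but never fully loaded into a DataFrame.

```python
import random

import pandas as pd

def sample_csv(path, sample_size, seed=None):
    # Count data rows without loading them into pandas (header excluded).
    with open(path) as f:
        n_rows = sum(1 for _ in f) - 1
    rng = random.Random(seed)
    # File line numbers to drop (line 0 is the header, so data starts at 1).
    skip = sorted(rng.sample(range(1, n_rows + 1), n_rows - sample_size))
    return pd.read_csv(path, skiprows=skip)
```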
76,607,008
2,160,936
Choose column 'row' value based on row value in panda data frame
<p>I got a data frame with multiple rows like this:</p> <pre><code>N amount country codeA codeB 0 -15678 NAN CH US 1 3475 NAN POL FR </code></pre> <p>How can I select column codeA for the value of country when the amount is negative and codeB when positive</p> <p>I would like to have something like this as output</p> <pre><code>N amount country codeA codeB 0 -15678 CH CH US 1 3475 FR POL FR </code></pre>
<python><pandas><dataframe>
2023-07-03 17:31:23
5
519
Sallyerik
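A sketch of one way to do the row-wise choice from the question above, using `numpy.where` to pick per row between the two code columns:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "amount": [-15678, 3475],
    "codeA": ["CH", "POL"],
    "codeB": ["US", "FR"],
})
# Negative amount -> take codeA, otherwise take codeB.
df["country"] = np.where(df["amount"] < 0, df["codeA"], df["codeB"])
```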
76,606,995
5,623,899
Python 3.11 decorator to convert a function into a dataclass for scikit-learn BaseEstimator TransformerMixin including type annotations
<h1>Update</h1> <h2>Question / Goal</h2> <blockquote> <p>How can I write a decorator that takes in a function <code>fn</code> and creates a dataclass where each argument / keyword-argument is a field and the docstring is copied over for better intellisense support. I don't want to see <code>**kwargs:Any</code> I want to know what the variables are.</p> </blockquote> <p>In the MWE below there are the following things:</p> <ul> <li><code>get_func_params</code>: utility function for getting default parameters from a function</li> <li><code>SomeModuleType</code>: a dummy type to check if intelli-sense is showing what kind of types hints are being copied-over</li> <li><code>mwe_func</code>: a dummy function that has a docstring and some type annotations. The decorator <code>func_to_class</code> should copy over the docstring and function annotations for the new class constructor. Ideally all the arguments are fields in of the new dataclass.</li> <li><code>Class2Subclass</code>: a dummy class for the decorated class to subclass.</li> <li><code>mwe_class</code>: the dummy class that we are decorating</li> <li><code>func_to_class</code>: the decorator function</li> </ul> <h2>M.W.E.</h2> <h3>Imports</h3> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, field, fields, _FIELD from typing import get_type_hints, Optional, NamedTuple, Callable, Dict, Any, List import inspect from inspect import Signature, Parameter from functools import wraps import numpy as np </code></pre> <h3>Utils</h3> <pre class="lang-py prettyprint-override"><code>def get_func_params( fn: Callable, drop_self: Optional[bool] = True, drop_before: Optional[int] = 0, drop_idxs: Optional[List[int]] = list(), drop_names: Optional[List[str]] = list(), drop_after: Optional[int] = None, ) -&gt; Dict[str, Any]: params = inspect.signature(fn).parameters params = {k: v.default for k, v in params.items()} if drop_self and 'self' in params: params.pop('self') params = { n: p for i, (n, 
p) in enumerate(params.items()) if ( # is before &lt;= i &lt; after (drop_before &lt;= i or (drop_after is not None and i &lt; drop_after)) # i not in drop_idxs and n not in drop_names and (i not in drop_idxs and n not in drop_names) ) } return params </code></pre> <h3>Dummy Functions / Classes</h3> <pre class="lang-py prettyprint-override"><code>class SomeModuleType(NamedTuple): a: int b: str def mwe_func(data:np.ndarray, a_bool:bool=False, a_thing:Optional[SomeModuleType]=None) -&gt; np.ndarray: ''' Parameters ---------- data : np.ndarray A numpy array of data a_bool : bool, default=False A boolean a_thing : Optinoal[SomeModuleType] A thing ''' # ... return data class Class2Subclass: def expected_method(self, a:int=0): pass pass </code></pre> <h3>Example</h3> <pre class="lang-py prettyprint-override"><code>@func_to_class(mwe_func) class mwe_class: pass </code></pre> <p>No intelli-sense (that is co-pilot suggesting things)</p> <p><a href="https://i.sstatic.net/arLky.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/arLky.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/IK5I0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IK5I0.png" alt="enter image description here" /></a></p> <hr /> <h1>Original</h1> <p>I am working with scanpy and sklearn. 
Currently I have the following:</p> <pre class="lang-py prettyprint-override"><code>import scanpy as sp, anndata as ad, numpy as np, pandas as pd from sklearn.base import BaseEstimator, TransformerMixin from dataclasses import dataclass import inspect from functools import wraps from typing import List, Any, Optional, Callable, Union, Tuple, Iterable, Set, TypeAlias, Type, Dict def filter_kwargs_for_func(fn: Callable, **kwargs:Optional[dict]): params = inspect.signature(fn).parameters return {k:v for k,v in kwargs.items() if k in params} def filter_kwargs_for_class(cls: Callable, **kwargs:Optional[dict]): params = inspect.signature(cls.__init__).parameters return {k:v for k,v in kwargs.items() if k in params} def wrangle_kwargs_for_func( fn: Callable, defaults: Optional[dict]=None, **kwargs:Optional[dict] ) -&gt; dict: # copy defaults params = (defaults or {}).copy() # update with kwargs of our function params.update(kwargs or {}) # filter for only the params that other function accepts params = filter_kwargs_for_func(fn, **params) return params def wrangle_kwargs_for_class( cls: Callable, defaults: Optional[dict]=None, **kwargs:Optional[dict] ) -&gt; dict: # copy defaults params = (defaults or {}).copy() # update with kwargs of our class params.update(kwargs or {}) # filter for only the params that other class accepts params = filter_kwargs_for_class(cls, **params) return params def get_func_params( fn: Callable, drop_self: Optional[bool] = True, drop_before: Optional[int] = 0, drop_idxs: Optional[List[int]] = list(), drop_names: Optional[List[str]] = list(), drop_after: Optional[int] = None, ) -&gt; Dict[str, Any]: params = inspect.signature(fn).parameters params = {k: v.default for k, v in params.items()} if drop_self and 'self' in params: params.pop('self') params = { n: p for i, (n, p) in enumerate(params.items()) if ( # is before &lt;= i &lt; after (drop_before &lt;= i or (drop_after is not None and i &lt; drop_after)) # i not in drop_idxs and n not in 
drop_names and (i not in drop_idxs and n not in drop_names) ) } return params @dataclass class MyPipeline: ... def preprocess_data(self, min_genes: int = 200, min_cells: int = 3): sc.pp.filter_cells(self.data, min_genes=min_genes) sc.pp.filter_genes(self.data, min_cells=min_cells) self.data.raw = self.data sc.pp.normalize_total(self.data, target_sum=1e4) sc.pp.log1p(self.data) sc.pp.highly_variable_genes(self.data, min_mean=0.0125, max_mean=3, min_disp=0.5) self.data = self.data[:, self.data.var.highly_variable] sc.pp.scale(self.data, max_value=10) sc.tl.pca(self.data, svd_solver='arpack') sc.pp.neighbors(self.data, n_neighbors=10, n_pcs=40) sc.tl.umap(self.data) </code></pre> <p>Where the focus here is on <code>MyPipeline</code>. Right now it isn't very flexible because very few keyword arguments are exposed and in the event some functions share the same keyword argument which function it belongs to.</p> <p>Initially all I wanted was a way to to specify something like</p> <pre class="lang-py prettyprint-override"><code>@fn_kwargs(sc.pp.filter_cells) class FilterCellKWArgs: pass ... def pipeline(filter_cells_kwargs:FilterCellKWArgs, ...): ... ... </code></pre> <p>and then have intellisense show me (or anyone else) what args / keyword arguments these functions have available, what their defaults are and maybe even the docstring of the original function. That doesn't seem too tenable.</p> <p>So now I am think it be useful to wrap functions like <code>sc.pp.filter_cells</code> , <code>sc.pp.highly_variable_genes</code> and <code>sc.pp.scale</code> as <code>sklearn</code> operators e.g. <code>BaseEstimator</code>s / <code>TransformerMixin</code> / etc.</p> <p>As the arguments / keyword arguments would be set on construction and then an sklearn <code>Pipeline</code> can handle the rest. 
So I am looking for something like this</p> <pre class="lang-py prettyprint-override"><code>@scop(sc.pp.filter_cells) @dataclass class FilterCells: pass </code></pre> <p>which should be functionally equivalent to</p> <pre class="lang-py prettyprint-override"><code>@dataclass class FilterCells(BaseEstimator, TransformerMixin): # NOTE: these are the defaults for sc.pp.filter_cells # you can get them from inspect.signature(sc.pp.filter_cells) # data: ad.Anndata min_counts: Optional[int] = None min_genes: Optional[int] = None max_counts: Optional[int] = None max_genes: Optional[int] = None inplace: bool = True copy: bool = False def fit(self, X: ad.AnnData, y=None): # NOTE: this is a dummy method # as we don't need to fit anything, just call the wrapped # function sc.pp.filtered_cells pass def transform(self, X): Y = sc.pp.filter_cells( X, min_counts=self.min_counts, min_genes=self.min_genes, max_counts=self.max_counts, max_genes=self.max_genes, inplace=self.inplace, copy=self.copy ) return X if self.inplace else Y def fit_transform(self, X, y=None): return self.fit(X, y).transform(X) </code></pre> <p>And I have tried quite a few things (see below).</p> <h1>Specific Question</h1> <p>How can I write a decorator that in one of these functions (or any function really) and creates a dataclass where each argument / keyword-argument is a field and the docstring is copied over for better intellisense support. 
I don't want to see <code>**kwargs:Any</code> I want to know what the variables are.</p> <h1>Attempts</h1> <h2>Current Attempt</h2> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, field, fields, _FIELD from typing import get_type_hints from inspect import Signature, Parameter def scop(fn): params = get_func_params(fn, drop_self=False) params = {k: v for k, v in params.items() if v is not inspect.Parameter.empty} def class_decorator(cls): cls = dataclass(cls) # Ensure cls is a dataclass # Add fields from fn to cls for name, default in params.items(): if name not in get_type_hints(cls): field_obj = field(default=default) setattr(cls, name, field_obj) cls.__annotations__[name] = type(default) # Update __init__ method to include new fields def __init__(self, **kwargs): for name, value in kwargs.items(): setattr(self, name, value) cls.__init__ = __init__ # Update __init__ method signature sig = inspect.signature(fn) parameters = [ Parameter(name, Parameter.KEYWORD_ONLY, default=default) for name, default in params.items() ] cls.__init__.__signature__ = sig.replace(parameters=parameters) # Add methods to cls def fit(self, X, y=None): return self def transform(self, X): kwargs = {f.name: getattr(self, f.name) for f in fields(self)} fn(X, **kwargs) return X def fit_transform(self, X, y=None): return self.fit(X, y).transform(X) cls.fit = fit cls.transform = transform cls.fit_transform = fit_transform # Update docstrings cls.__doc__ = fn.__doc__ cls.fit.__doc__ = fn.__doc__ cls.transform.__doc__ = fn.__doc__ cls.fit_transform.__doc__ = fn.__doc__ return cls return class_decorator </code></pre> <p>which does basically work:</p> <pre class="lang-py prettyprint-override"><code>@dataclass @scop(sc.pp.filter_cells) class FilterCells: pass fc = FilterCells(min_cells=3) print(fc.min_cells) # 3 </code></pre> <p>But. 
I <em>still</em> can not see the docstring / args / keyword-args when I am typing <code>FilterCells(...)</code> and now <code>fit</code>, <code>fit_transform</code> just show <code>Any</code> rather than <code>(X, y=None)</code></p> <p>I also loose the <code>BaseEstimator</code> <code>repr</code> so there is that...</p> <h2>Another Notable Attempt</h2> <pre class="lang-py prettyprint-override"><code>def scop(fn): params = get_func_params(fn, drop_self=False) def class_decorator(cls): class Wrapper(cls, BaseEstimator, TransformerMixin): fn_params = {k: v for k, v in params.items() if v is not inspect.Parameter.empty} def __init__(self, **kwargs): self.params = {**self.fn_params, **kwargs} super().__init__() def fit(self, X, y=None): return self def transform(self, X): fn(X, **self.params) return X def fit_transform(self, X, y=None): return self.fit(X, y).transform(X) Wrapper.__name__ = cls.__name__ Wrapper.__doc__ = fn.__doc__ Wrapper.__annotations__ = {**cls.__annotations__, **params} return Wrapper return class_decorator </code></pre> <p>that can be used like</p> <pre class="lang-py prettyprint-override"><code>@scop(sc.pp.filter_cells) @dataclass class FilterCells: pass fc = FilterCells(min_genes=200) fc </code></pre> <p>Of note:</p> <ul> <li>we have <code>BaseEstimator</code> <code>repr</code> ... 
(stop..class_decorator..Wrapper()</li> <li><code>fc.min_genes</code> results in <code>AttributeError</code>, so we lose access to dataclass fields.</li> </ul> <h2>Original-ish Attempt</h2> <pre class="lang-py prettyprint-override"><code>def scop(fn): params = get_func_params(fn, drop_self=False) class Wrapper(BaseEstimator, TransformerMixin): @wraps(fn) def __init__(self, *args, **kwargs) -&gt; None: super().__init__() print('ARGS', args) print('KWARGS', kwargs) print('SIGNATURE', inspect.signature(sc.pp.filter_cells)) print('PARAMS', params) @wraps(fn) def fit(self, X, y=None): return self @wraps(fn) def transform(self, X): fn(self.data, X) return self.data @wraps(fn) def fit_transform(self, X, y=None): return self.fit(X, y).transform(X) Wrapper.__init__.__doc__ = fn.__doc__ Wrapper.__init__.__annotations__ = fn.__annotations__ # Wrapper = inspect.signature(sc.pp.filter_cells) Wrapper.fit.__doc__ = fn.__doc__ Wrapper.fit.__annotations__ = fn.__annotations__ Wrapper.transform.__doc__ = fn.__doc__ Wrapper.transform.__annotations__ = fn.__annotations__ Wrapper.fit_transform.__doc__ = fn.__doc__ Wrapper.fit_transform.__annotations__ = fn.__annotations__ Wrapper.__name__ = fn.__name__ Wrapper.__doc__ = fn.__doc__ Wrapper.__annotations__ = fn.__annotations__ # methods = { # '__init__': Wrapper.__init__, # 'fit': Wrapper.fit, # 'transform': Wrapper.transform, # 'fit_transform': Wrapper.fit_transform, # } # Wrapper = type(fn.__name__, (BaseEstimator, TransformerMixin), methods) return Wrapper </code></pre> <p>but notice that it prints <code>ARGS (&lt;class '__main__.FilterCells'&gt;,)</code> as this decorator gets called over the class not on class initialization.</p>
<python><python-3.x><scikit-learn><python-decorators><scanpy>
2023-07-03 17:29:46
0
5,218
SumNeuron
76,606,857
11,720,193
Issue with invoking Mock in PyCharm
<p>I have taken up Python recently.<br /> I'm trying to use <code>unittest.Mock()</code> for testing my code in PyCharm. But, PyCharm fails with the error:</p> <pre><code>requests = Mock() TypeError: 'module' object is not callable </code></pre> <p>My program:</p> <pre><code>import requests from unittest import Mock requests = unittest.Mock() def get_holidays(): r = requests.get(&quot;http://localhost/holidays&quot;) if r.status_code == 200: return r.json() return None </code></pre> <p>Can anyone please help to identify where I am going wrong. Thanks</p>
<python><python-unittest><python-unittest.mock>
2023-07-03 17:04:50
1
895
marie20
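The traceback in the question above suggests that the name `Mock` ended up bound to a module rather than the class. A sketch of the usual pattern (import `Mock` from `unittest.mock`, then configure the stubbed response; the holiday payload is made up for illustration):

```python
from unittest.mock import Mock

requests = Mock()  # stands in for the real requests module in this test

def get_holidays():
    r = requests.get("http://localhost/holidays")
    if r.status_code == 200:
        return r.json()
    return None

# Configure the fake response before exercising the code under test.
requests.get.return_value.status_code = 200
requests.get.return_value.json.return_value = {"12/25": "Christmas"}
```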
76,606,843
3,247,006
How to get the session's data of date with `request.session.get()` in Django?
<p>I could get the session's data of data with <code>request.session['key']['key']</code> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; from django.http import HttpResponse def my_view(request): request.session['person'] = {'name':'John','age':27} print(request.session['person']['name']) # John print(request.session['person']['age']) # 27 return HttpResponse(&quot;Test&quot;) </code></pre> <p>But, I couldn't get the session's data of data with <a href="https://docs.djangoproject.com/en/4.2/topics/http/sessions/#django.contrib.sessions.backends.base.SessionBase.get" rel="nofollow noreferrer">request.session.get('key/key')</a> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; from django.http import HttpResponse def my_view(request): request.session['person'] = {'name':'John','age':27} print(request.session.get('person/name')) # None print(request.session.get('person/age')) # None return HttpResponse(&quot;Test&quot;) </code></pre> <p>So, how can I get the session's data of data with <code>request.session.get()</code>?</p>
<python><django><django-views><django-sessions><django-request>
2023-07-03 17:03:01
1
42,516
Super Kai - Kazuya Ito
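Session keys are flat, so `'person/name'` is not a nested path. One sketch: chain `.get()` calls, relying on the fact that `request.session` exposes a dict-like interface (a plain dict stands in for it below).

```python
# request.session behaves like a dict of top-level keys; a plain dict
# stands in for it here, so the same .get() chaining applies in a view.
session = {"person": {"name": "John", "age": 27}}

# Chain .get() with {} as the default so a missing top-level key
# yields None instead of raising.
name = session.get("person", {}).get("name")
age = session.get("person", {}).get("age")
city = session.get("address", {}).get("city")
```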
76,606,760
4,212,875
Given a numpy array of box coordinates, how to find total area of boxes taking into account box overlap?
<p>Given an nx4 numpy array of box coordinates where each has the format [xtl, ytl, xbr, ybr], is there an efficient (vectorized) way of finding the total area of the boxes in the array, taking into account that some of the boxes will overlap? i.e. if a box is completely inside of another box, its area should not be counted.</p>
<python><numpy>
2023-07-03 16:51:23
0
411
Yandle
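One sketch for the union-area question above (array-based per box rather than fully vectorized over boxes) uses coordinate compression: collect every x and y edge, mark which grid cells any box covers, and sum the covered cell areas — overlapping regions are then counted only once.

```python
import numpy as np

def union_area(boxes):
    """Total area covered by the union of [xtl, ytl, xbr, ybr] boxes."""
    boxes = np.asarray(boxes)
    xs = np.unique(boxes[:, [0, 2]])  # all distinct x edges
    ys = np.unique(boxes[:, [1, 3]])  # all distinct y edges
    covered = np.zeros((len(xs) - 1, len(ys) - 1), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        i1, i2 = np.searchsorted(xs, [x1, x2])
        j1, j2 = np.searchsorted(ys, [y1, y2])
        covered[i1:i2, j1:j2] = True  # mark every grid cell this box spans
    # Cell areas: outer product of the grid step widths and heights.
    return (np.outer(np.diff(xs), np.diff(ys)) * covered).sum()
```

This runs in roughly O(n · g²) for n boxes over a g×g compressed grid, which is fine for moderate n; a box fully inside another simply marks cells that are already marked.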
76,606,737
1,003,798
Linear Regression (No Intercept) RSquare using Numpy
<p>For Linear Regression with 1 Variable and an Intercept, I can compute the RSquare as -</p> <pre><code>R^2 = (np.sum(((x - np.mean(x)) / np.std(x, ddof=1)) * ((y - np.mean(y)) / np.std(y, ddof=1))) / (len(x) - 1)) ** 2 </code></pre> <p>How do I compute R Square for Linear Regression with 1 Variable and without an intercept, and without having to deal with <code>statsmodels.api</code> OLS or <code>linregress</code> or any of the third party packages. Is the understanding correct that <code>np.mean(y) = 0 </code> for Linear Regresssion without intercept?</p> <p>What is the fastest way in numpy to get the RSquare for Linear Regression with 1 Variable and no intercept?</p>
<python><numpy><regression><linear-regression>
2023-07-03 16:47:29
3
999
godimedia
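For the no-intercept model y ≈ b·x, the least-squares slope is b = Σxy / Σx², and the conventional (uncentered) R² compares residuals to Σy² rather than to deviations from the mean — effectively treating the baseline prediction as 0, which is the sense in which np.mean(y) plays no role. A sketch:

```python
import numpy as np

def r2_no_intercept(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b = np.dot(x, y) / np.dot(x, x)  # least-squares slope through the origin
    ss_res = np.sum((y - b * x) ** 2)
    ss_tot = np.sum(y ** 2)          # uncentered total sum of squares
    return 1.0 - ss_res / ss_tot
```

Note this uncentered R² is not comparable to the centered R² of a model with an intercept, which is also why statsmodels reports them differently.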
76,606,675
1,760,405
PyCharm crashes on README.md files (on Linux in Docker)
<p>Running PyCharm 2023.1.1 (Community Edition) on Linux in Docker, I get the following error when trying to edit <code>.md</code> files like <code>README.md</code>:</p> <pre><code> [0628/172358.898878:FATAL:setuid_sandbox_host.cc(157)] The SUID sandbox helper binary was found, but is not configured correctly. Rather than run without sandboxing I'm aborting now. You need to make sure that /pycharm-community-2023.1.1/jbr/lib/chrome-sandbox is owned by root and has mode 4755. Trace/breakpoint trap (core dumped) </code></pre>
<python><pycharm>
2023-07-03 16:36:04
1
4,063
Brad Grissom
76,606,557
4,995,349
How to embed a qr code png image in html template page?
<p>I am generating the qr image using the below code.</p> <pre><code>url=&quot;https://www.google.com&quot; img = qrcode.make(url) img.save(&quot;displayQrInvite.png&quot;) return render_template('invite.html') </code></pre> <p>Below is the html template file.</p> <pre><code> &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;title&gt;My App&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Scan below qr code image&lt;/h1&gt; &lt;img src=&quot;/displayQrInvite.png&quot;/&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Somehow the image is not visible on any of the browsers chrome,edge.c</p> <p><a href="https://i.sstatic.net/MaL81.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MaL81.png" alt="enter image description here" /></a></p>
<python><html><flask><render><qr-code>
2023-07-03 16:19:51
1
776
Karan Nayyar
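The `<img src="/displayQrInvite.png"/>` request fails because Flask only serves files from the static folder by default. One sketch (the helper name is illustrative): either save the image under `static/` and reference it with `url_for('static', ...)`, or embed the PNG bytes as a base64 data URI so no extra route or file is needed:

```python
import base64
import io

def png_to_data_uri(png_bytes):
    # Embed raw PNG bytes directly in the HTML, avoiding a static-file route.
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return f"data:image/png;base64,{encoded}"

# With the qrcode image from the question (not re-run here):
#   buf = io.BytesIO()
#   img.save(buf, format="PNG")
#   uri = png_to_data_uri(buf.getvalue())
# then pass uri to render_template and use <img src="{{ uri }}"/> in the template.
```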
76,606,445
2,307,934
SELECT list incompatible with DISTINCT with alias and order by
<p>I have two tables (MySQL 8, SQLAlchemy 1.4):</p> <pre class="lang-sql prettyprint-override"><code>CREATE TABLE user ( id BIGINT auto_increment NOT NULL, offset BIGINT NULL, CONSTRAINT user_PK PRIMARY KEY (id) ) INSERT INTO user (offset) values (1); INSERT INTO user (offset) values (2); INSERT INTO user (offset) values (3); INSERT INTO user (offset) values (4); CREATE TABLE score ( user_id BIGINT NOT NULL, score BIGINT NOT NULL, CONSTRAINT score_FK FOREIGN KEY (user_id) REFERENCES user(id) ) INSERT INTO score (user_id, score) values (1, 10); INSERT INTO score (user_id, score) values (1, 9); INSERT INTO score (user_id, score) values (1, 8); INSERT INTO score (user_id, score) values (2, 7); INSERT INTO score (user_id, score) values (2, 6); INSERT INTO score (user_id, score) values (2, 5); INSERT INTO score (user_id, score) values (3, 10); INSERT INTO score (user_id, score) values (3, 9); INSERT INTO score (user_id, score) values (3, 8); INSERT INTO score (user_id, score) values (4, 7); INSERT INTO score (user_id, score) values (4, 6); INSERT INTO score (user_id, score) values (4, 5); </code></pre> <p>I want to write a sqlalchemy query the translates to this SQL (the following code is a very simplified version of the actual query):</p> <pre class="lang-sql prettyprint-override"><code>select distinct id, max(score) OVER (PARTITION BY id) AS max_score, offset from user left join score on id = user_id order by max_score + offset </code></pre> <p>Expected output :</p> <pre><code>| id | score | offset | | -- | ----- | ------ | | 3| 10| 3| | 1| 10| 1| | 4| 7| 4| | 2| 7| 2| </code></pre> <p>I tried using the <code>over</code> clause both in the <code>select</code> and the <code>order by</code> and I tried defining an element to reuse it, as follows :</p> <pre class="lang-py prettyprint-override"><code>max_score = func.max(Score.score).over(partition_by=User.id).label(&quot;max_score&quot;) q = ( session.query( User.id, User.offset, max_score, ) .outerjoin(Score) 
.order_by((max_score + User.offset).desc()) ).distinct().all() </code></pre> <p>Both methods produce the error &quot;SELECT list incompatible with DISTINCT&quot; because in the generated SQL, the order by clause uses the definition of <code>max_score</code> and not the &quot;alias&quot; :</p> <pre class="lang-sql prettyprint-override"><code>ORDER BY max(score.score) OVER (PARTITION BY user.id) + user.offset DESC </code></pre> <p>In the generated SQL, if I replace the previous clause by :</p> <pre class="lang-sql prettyprint-override"><code>ORDER BY max_score + user.offset DESC </code></pre> <p>it works as expected.</p> <p>The error comes from Mysql I know, but how can I write my query so that SQLAlchemy uses the alias.</p> <p>Note: tried with <code>aliased</code> also, didn't work.</p>
<python><mysql><sqlalchemy>
2023-07-03 16:03:13
0
1,240
edg
76,606,382
2,125,671
How to reference object methods in a class variable in Python
<p>I have this class which works :</p> <pre><code>class A: def __init__(self): self.checks = (self.check1, self.check2) def check1(self): print(&quot;check1&quot;) def check2(self): print(&quot;check2&quot;) def run_all_checks(self): for check in self.checks: check() a = A() a.run_all_checks() </code></pre> <p>Now I think attribute <code>checks</code> should not belong to every A object, but should be a class attribute, so I'd like to write :</p> <pre><code>class A: checks = (self.check1, self.check2) def check1(self): print(&quot;check1&quot;) def check2(self): print(&quot;check2&quot;) def run_all_checks(self): for check in self.checks: check() a = A() a.run_all_checks() </code></pre> <p>This version does not work, giving error :</p> <pre><code>NameError: name 'self' is not defined </code></pre> <p>Is there a way to achieve that ?</p>
<python><class><methods><attributes>
2023-07-03 15:53:38
2
27,618
Philippe
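Inside the class body the methods are plain functions and `self` does not exist yet; one sketch is to build `checks` after the method definitions and pass the instance explicitly when calling, since tuple elements do not go through the descriptor protocol:

```python
class A:
    def check1(self):
        print("check1")

    def check2(self):
        print("check2")

    # Plain functions at class-body scope; no `self` needed to reference them.
    checks = (check1, check2)

    def run_all_checks(self):
        for check in self.checks:
            check(self)  # unbound functions: pass the instance explicitly

A().run_all_checks()
```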
76,606,159
18,769,241
distribute randomly a value to a fixed size list of values
<p>I have a value (a sum) (in the implementation it is <code>n</code>) that I want to distribute &quot;randomly&quot; to become a list of 12 elements summing up to that value (sum), I wrote the following method to achieve that:</p> <pre><code>def create_list_summing_up_to(n): values = [] for i in range(1,13): value = random.randint(0, int(n / i)) values.append(value) n -= value if n&lt;=0 and i&lt;12: values.extend(np.zeros(12-i).tolist()) break if n&gt;0: for idx,el in enumerate(values): values[idx] = values[idx]+int (n/12) return values </code></pre> <p>My question is: is the method efficient even for large sums (of the order of millions)? How could it be more efficient?</p>
<python><numpy>
2023-07-03 15:25:37
3
571
Sam
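If "randomly distribute" means each unit of the sum independently lands in one of the 12 slots with equal probability, `numpy`'s multinomial sampler does this in a single draw: the counts are non-negative, sum to `n` exactly, and the cost does not grow per-unit the way an explicit loop over the sum would, so it stays fast even for sums in the millions.

```python
import numpy as np

def create_list_summing_up_to(n, k=12, seed=None):
    rng = np.random.default_rng(seed)
    # One multinomial draw: n trials over k equally likely slots.
    return rng.multinomial(n, np.full(k, 1.0 / k)).tolist()
```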
76,606,132
17,487,457
How to map class id to class name in summary plot's legend
<p>I fitted a random forest classifier on the iris dataset like so:</p> <pre class="lang-py prettyprint-override"><code>iris = datasets.load_iris() X = iris.data y = iris.target # dividing X, y into train and test data X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0) model = RandomForestClassifier() model.fit(X_train, y_train) predictions = model.predict(X_test) </code></pre> <p>Then plot the shap values summary this way:</p> <pre class="lang-py prettyprint-override"><code>import shap explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(X_test) shap.summary_plot(shap_values, X_test) </code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/0wTdK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0wTdK.png" alt="enter image description here" /></a></p> <p>I want to map the <code>class</code> ids to the class name in the plot legend:</p> <pre class="lang-py prettyprint-override"><code>iris.target_names array(['setosa', 'versicolor', 'virginica'], dtype='&lt;U10') </code></pre> <p>Such that: <code>Class 0 -&gt; setosa, Class 1 -&gt; versicolor, Class 2 -&gt; virginica</code></p>
<python><machine-learning><classification><shap>
2023-07-03 15:22:29
1
305
Amina Umar
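One sketch for the legend-mapping question above: `shap.summary_plot` accepts a `class_names` argument (an assumption about the installed shap version), so passing the dataset's `target_names` relabels the legend. Since shap is not re-run here, only the mapping and the assumed call are shown:

```python
import numpy as np

target_names = np.array(["setosa", "versicolor", "virginica"])

# Assumed shap call (shap itself is not imported here):
#   shap.summary_plot(shap_values, X_test, class_names=list(target_names))

# The legend mapping this produces:
legend = {f"Class {i}": name for i, name in enumerate(target_names)}
```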