Column              Type         Min                    Max
QuestionId          int64        74.8M                  79.8M
UserId              int64        56                     29.4M
QuestionTitle       string       15 chars               150 chars
QuestionBody        string       40 chars               40.3k chars
Tags                string       8 chars                101 chars
CreationDate        date string  2022-12-10 09:42:47    2025-11-01 19:08:18
AnswerCount         int64        0                      44
UserExpertiseLevel  int64        301                    888k
UserDisplayName     string       3 chars                30 chars
75,711,746
12,767,247
How to call and pass information to a Python script in a Laravel 9 project?
<p>I have an HTML form in my Laravel 9 project saved to the browser's localStorage using JS (with jQuery). I also have a Python script that needs to take a CSV-formatted database, modify it based on the information from localStorage, and convert it to JSON. Lastly, I have a JavaScript file that takes the JSON and builds an HTML table. All of these parts work separately, but I'm having trouble integrating Python into my Laravel 9 project.</p> <p>What is the best way to call and pass information to a Python script within a Laravel 9 project?</p> <p>Any help or guidance on this would be greatly appreciated. Thank you in advance!</p>
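One common pattern (sketched below; the `category` field and `apply_form_data` helper are illustrative, not from the question) is to keep the Python side a plain command-line script: Laravel invokes it, e.g. via Symfony Process, passing the localStorage payload as a JSON argument, and the script prints JSON back on stdout for the JS table builder to consume.

```python
import json
import sys

def apply_form_data(rows, form):
    # Hypothetical transform: keep only the rows whose "category"
    # matches the value saved from the form. Replace with your real
    # CSV-modification logic.
    wanted = form.get("category")
    return [row for row in rows if wanted is None or row.get("category") == wanted]

if __name__ == "__main__" and len(sys.argv) > 1:
    # Laravel side (PHP): new Process(['python3', 'script.py', $jsonPayload])
    form = json.loads(sys.argv[1])
    rows = []  # load your CSV here, e.g. with csv.DictReader
    print(json.dumps(apply_form_data(rows, form)))
```

The script stays testable in isolation, and Laravel only needs to shell out and capture stdout.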
<javascript><python><json><laravel><laravel-9>
2023-03-12 07:53:45
1
500
andreasv
75,711,452
258,662
appropriate method to read ungridded lat / long csv as raster in python?
<p>I have a csv file or data.frame of un-gridded latitude/longitude coordinates which I would like to coerce to a raster object in python (e.g. ideally using <code>rasterio</code>).</p> <p>While GDAL's 'xyz' driver only accepts gridded coordinates in a text file, spatial packages in R are happy to do this coercion from a data frame to a raster object in one line, like this:</p> <pre class="lang-r prettyprint-override"><code>df &lt;- readr::read_csv(&quot;https://minio.carlboettiger.info/shared-data/gbif.csv&quot;)
r &lt;- terra::rast(df, crs=&quot;epsg:4326&quot;)
terra::plot(r)
</code></pre> <p><a href="https://i.sstatic.net/QwYZK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QwYZK.png" alt="r plot of raster" /></a></p> <p>I'd love to find the analogously concise command in python (preferably in <code>rasterio</code>) but haven't been able to figure this out. I realize I can get a spatial <em>vector</em> object of points as simple features, e.g. using geopandas, but that's not what I'm looking for here.</p>
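rasterio has no one-line data-frame-to-raster constructor, so a common workaround is to do the gridding with NumPy and hand the array plus an affine transform to rasterio for writing. The sketch below assumes, as in the terra example, that cells hold point counts and that the CSV has `longitude`/`latitude` columns (both assumptions, not from the question):

```python
import numpy as np
import pandas as pd

def points_to_grid(df, res):
    """Bin ungridded lon/lat points onto a regular grid of cell counts.

    The returned array plus rasterio.transform.from_origin(west, north,
    res, res) can then be written via rasterio.open(..., 'w',
    driver='GTiff', crs='EPSG:4326', transform=...).
    """
    west = df['longitude'].min()
    south = df['latitude'].min()
    cols = ((df['longitude'] - west) // res).astype(int)
    rows = ((df['latitude'] - south) // res).astype(int)
    grid = np.zeros((rows.max() + 1, cols.max() + 1))
    # Flip rows so row 0 is the northern edge, as rasters expect.
    np.add.at(grid, ((rows.max() - rows).to_numpy(), cols.to_numpy()), 1)
    return grid
```

It is more than one line, but the binning step is vectorized and keeps the whole workflow in pandas/NumPy until the final write.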
<python><geospatial><raster><spatial><rasterio>
2023-03-12 06:36:44
1
12,767
cboettig
75,711,311
10,284,437
python selenium chrome driver 111 path is not handled
<p>I have this code:</p> <pre><code>#!/usr/bin/env python
import selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

chrome_options = Options()

import undetected_chromedriver as uc
driver = uc.Chrome(executable_path='/usr/local/bin/chromedriver', options=chrome_options)
# code
</code></pre> <p>When I run the script, I get:</p> <pre><code>from session not created: This version of ChromeDriver only supports Chrome version 111
Current browser version is 110.0.5481.100
</code></pre> <pre><code>$ /usr/local/bin/chromedriver --version
ChromeDriver 111.0.5563.64 (c710e93d5b63b7095afe8c2c17df34408078439d-refs/branch-heads/5563@{#995})
</code></pre> <p>What's wrong?</p> <p>Tested also with</p> <pre><code>from selenium import webdriver
import chromedriver_autoinstaller

chromedriver_autoinstaller.install()
</code></pre> <p>Same error.</p>
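The error message itself is the diagnosis: the installed browser is Chrome 110 while the driver on PATH is ChromeDriver 111, and the two must share a major version. A minimal sketch of that compatibility check:

```python
def major(version: str) -> int:
    """Extract the major component of a Chrome/ChromeDriver version string."""
    return int(version.split(".")[0])

def compatible(browser_version: str, driver_version: str) -> bool:
    # ChromeDriver only drives the matching major version of Chrome;
    # in the question the browser is 110 but the driver is 111.
    return major(browser_version) == major(driver_version)
```

The fix is to align the versions: either update Chrome to 111, or pin the driver to the browser, e.g. undetected_chromedriver accepts a `version_main` argument (`uc.Chrome(version_main=110, options=chrome_options)`) that asks it to use a driver matching that major version.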
<python><selenium-webdriver><selenium-chromedriver>
2023-03-12 05:59:38
1
731
Mévatlavé Kraspek
75,711,289
11,099,842
How to convert pandas column to numeric if there are strings?
<p>I have a dataset that has numerical values, empty values and text values. I want to do the following in pandas:</p> <ol> <li>Numerical Values -&gt; Float</li> <li>Empty Values -&gt; N/A</li> <li>Text Values -&gt; N/A</li> </ol> <p>When I try to run <code>astype('float')</code>, I get an error:</p> <pre><code>import pandas as pd

data = ['5', '4', '3', '', 'NO DATA ', '5']
df = pd.DataFrame({'data': data})
df[['data']].astype('float')
</code></pre> <p>I've looked over the documentation and Stack Overflow, but I didn't find out how to do this.</p>
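`astype('float')` raises on anything unparseable; `pd.to_numeric` with `errors='coerce'` does exactly the mapping the three bullet points describe:

```python
import pandas as pd

data = ['5', '4', '3', '', 'NO DATA ', '5']
df = pd.DataFrame({'data': data})

# errors='coerce' converts anything that cannot be parsed as a number
# (empty strings, free text) to NaN and parses the rest as float64.
df['data'] = pd.to_numeric(df['data'], errors='coerce')
```

After this, `df['data']` is float64 with NaN in place of the empty and text entries.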
<python><pandas>
2023-03-12 05:54:03
1
891
Al-Baraa El-Hag
75,711,286
2,218,086
Decimal numbers in Python are displayed as floats
<p>I have defined my variables as Decimal, but they are still output as floats when I print them.</p> <p>My actual problem has complex calculations and I want to print the values to verify my results, but I’m getting floats, not decimals.<br> Does this simply not matter, as the calculations will come out correctly?</p> <pre><code>from decimal import Decimal
from decimal import getcontext

getcontext().prec = 6
a=Decimal(0.4)
print(a)
b=Decimal(0.3)
print(b)
x=(a/b)
print(&quot;x=&quot;+str(x))
</code></pre> <p>Output</p> <pre><code>0.40000000000000002220446049250313080847263336181640625
0.299999999999999988897769753748434595763683319091796875
x=1.33333

** Process exited - Return Code: 0 **
Press Enter to exit terminal
</code></pre> <p><strong>Update</strong><br></p> <p>This makes for some pretty silly looking code.</p> <pre><code>from decimal import Decimal
from decimal import getcontext
from random import random

getcontext().prec = 6
a=Decimal(str(random()))
print(a)
b=Decimal(str(random()))
print(b)
x=(a/b)
print(&quot;x=&quot;+str(x))
</code></pre> <p><strong>Result</strong> <br> I still didn’t get the precision I was expecting, but I can work with that :)</p> <pre><code>0.27010708573406494
0.9726705675299723
x=0.277696
</code></pre>
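The long digits come from constructing a Decimal from a binary float: `Decimal(0.4)` converts the float's exact (already imprecise) value, while `Decimal('0.4')` parses the decimal literal. Context precision applies to operations, not to construction, which is why the division still came out to 6 digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 6

# Construct from strings, not floats, to get exact decimal values.
a = Decimal('0.4')
b = Decimal('0.3')
x = a / b           # the context precision (6) applies here
```

`print(a)` now shows `0.4`, and there is no need for the `Decimal(str(random()))` workaround pattern beyond what it already does: route the value through a decimal string.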
<python>
2023-03-12 05:52:23
1
411
David P
75,711,193
6,446,053
Running Selective Unit Tests in Jetbrains Idea or Pycharm
<p>I have a test suite with multiple test methods, but I want to run only a specific test method (i.e., test_one) using a JetBrains product (e.g., PyCharm, IDEA). In this case, I'm trying to avoid the command-line approach.</p> <p>How can I do that?</p> <p>The unit test is as below:</p> <pre><code>import unittest

class TestMyMethods(unittest.TestCase):

    def setUp(self):
        # set up any necessary state before each test method is run
        pass

    def test_one(self):
        # test method
        print('Running test: one')

    def test_two(self):
        # test method
        print('Running test: two')

if __name__ == '__main__':
    suite = unittest.TestSuite()
    suite.addTest(TestMyMethods('test_one'))
    unittest.TextTestRunner().run(suite)
</code></pre> <p>Running the above code generates the following</p> <pre><code>Testing started at 1:09 PM ...
Launching unittests with arguments python -m unittest C:\Users\mpx\so.py in C:\Users\Users

Running test: one
Running test: two

Ran 2 tests in 0.002s

OK

Process finished with exit code 0
</code></pre>
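The output shows why both tests ran: PyCharm launched `python -m unittest <file>`, which discovers every test method and never executes the `__main__` block, so the hand-built suite is ignored. In the IDE, click the run (gutter) icon next to `test_one` itself, or edit the run configuration to target `TestMyMethods.test_one`. The sketch below just demonstrates that the suite approach is selective when it actually executes (i.e. when the script is run directly rather than through the unittest discoverer):

```python
import unittest

class TestMyMethods(unittest.TestCase):
    def test_one(self):
        print('Running test: one')

    def test_two(self):
        print('Running test: two')

# A suite holding only test_one; `python -m unittest file.py`
# bypasses this entirely, which is why both tests ran in the IDE.
suite = unittest.TestSuite([TestMyMethods('test_one')])
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

`result.testsRun` is 1 here, confirming that only the selected method executed.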
<python><unit-testing>
2023-03-12 05:18:31
1
3,297
rpb
75,710,936
2,947,218
Match rule json with data json to find value in python3
<p><strong>PYTHON3</strong></p> <h2>Need to find an articleId by applying rules to data</h2> <p>Given a set of rules in JSON format, which includes article IDs and corresponding property rules.</p> <p>Eg: <code>rules.json</code></p> <pre><code>{
  &quot;rules&quot;: [
    {
      &quot;articleId&quot;: &quot;art1&quot;,
      &quot;properties_rule&quot;: [
        { &quot;condition&quot;: &quot;EQ&quot;, &quot;logicalOperator&quot;: &quot;AND&quot;, &quot;propertyId&quot;: 487, &quot;value&quot;: &quot;aaaa&quot; },
        { &quot;condition&quot;: &quot;EQ&quot;, &quot;logicalOperator&quot;: &quot;&quot;, &quot;propertyId&quot;: 487, &quot;value&quot;: &quot;zzzz&quot; }
      ]
    },
    {
      &quot;articleId&quot;: &quot;art2&quot;,
      &quot;properties_rule&quot;: [
        { &quot;condition&quot;: &quot;GTE&quot;, &quot;logicalOperator&quot;: &quot;AND&quot;, &quot;propertyId&quot;: 487, &quot;value&quot;: &quot;bbbb&quot; },
        { &quot;condition&quot;: &quot;LTE&quot;, &quot;logicalOperator&quot;: &quot;&quot;, &quot;propertyId&quot;: 487, &quot;value&quot;: &quot;eeee&quot; }
      ]
    },
    {
      &quot;articleId&quot;: &quot;art3&quot;,
      &quot;properties_rule&quot;: [
        { &quot;condition&quot;: &quot;GTE&quot;, &quot;logicalOperator&quot;: &quot;&quot;, &quot;propertyId&quot;: 487, &quot;value&quot;: &quot;ffff&quot; }
      ]
    }
  ]
}
</code></pre> <p>As well as a set of data in JSON format, which includes values for certain properties. Eg: <code>data.json</code></p> <pre><code>{
  &quot;data&quot;: {
    &quot;1&quot;: { &quot;properties_values&quot;: [ { &quot;value&quot;: { &quot;property_id&quot;: 487, &quot;property_value&quot;: &quot;aaaa&quot;, &quot;response_id&quot;: 1 } } ] },
    &quot;2&quot;: { &quot;properties_values&quot;: [ { &quot;value&quot;: { &quot;property_id&quot;: 487, &quot;property_value&quot;: &quot;bbbb&quot;, &quot;response_id&quot;: 2 } } ] },
    &quot;3&quot;: { &quot;properties_values&quot;: [ { &quot;value&quot;: { &quot;property_id&quot;: 487, &quot;property_value&quot;: &quot;eeee&quot;, &quot;response_id&quot;: 3 } } ] }
  }
}
</code></pre> <p>The task is to apply the rules to the data and determine the article ID that matches the rules.</p> <p>How can we use the provided rules and data JSON to determine the corresponding &quot;articleId&quot; for each entry in the &quot;data&quot; JSON, based on the conditions specified in the &quot;properties_rule&quot; arrays of the rules JSON? The &quot;property_id&quot; field in the &quot;data&quot; JSON corresponds to the &quot;propertyId&quot; field in the &quot;rules&quot; JSON, and the &quot;property_value&quot; field in the &quot;data&quot; JSON corresponds to the &quot;value&quot; field in the &quot;rules&quot; JSON.</p> <hr /> <h2>Tried this to solve it, but didn't like so many for loops</h2> <p>To determine the corresponding &quot;articleId&quot; for each entry in the &quot;data&quot; JSON based on the conditions specified in the &quot;properties_rule&quot; arrays of the rules JSON, we can follow these steps in Python:</p> <ol> <li>Load the rules and data JSON into Python dictionaries using the json module.</li> <li>Loop through each entry in the &quot;data&quot; JSON.</li> <li>For each entry, loop through the &quot;rules&quot; JSON to find a matching article ID based on the conditions specified in the &quot;properties_rule&quot; arrays.</li> <li>For each rule, loop through the &quot;properties_values&quot; array in the data entry to find a matching property ID.</li> <li>If a matching property ID is found, check if the property value meets the condition specified in the rule. If it does not meet the condition, move on to the next rule.</li> <li>If all conditions are met, return the article ID associated with the rule.</li> <li>If no matching article ID is found, return a default value or raise an error.</li> </ol> <p>Here is sample code that implements this logic:</p> <pre><code>import json

# Load the rules and data JSON files into memory
with open('rules.json', 'r') as f:
    rules_data = json.load(f)
rules = rules_data['rules']

with open('data.json', 'r') as f:
    data = json.load(f)
data = data['data']

# Loop through each rule and check if it matches the data
for rule in rules:
    properties_rule = rule['properties_rule']
    articleId = rule['articleId']

    # Check if all the conditions in the properties_rule array are satisfied
    matched = True
    for prop_rule in properties_rule:
        propertyId = prop_rule['propertyId']
        value = prop_rule['value']
        condition = prop_rule['condition']

        # Check if any of the data entries match the condition
        response_matched = False
        for key, value_dict in data.items():
            properties_values = value_dict['properties_values']
            for prop_value in properties_values:
                response = prop_value['value']
                if response['property_id'] == propertyId:
                    property_value = response['property_value']
                    if condition == 'EQ' and property_value == value:
                        response_matched = True
                    elif condition == 'GTE' and property_value &gt;= value:
                        response_matched = True
                    elif condition == 'LTE' and property_value &lt;= value:
                        response_matched = True

        # If none of the entries match the condition, set matched to False
        if not response_matched:
            matched = False
            break

    # If all the conditions are satisfied, return the articleId of that rule
    if matched:
        print(&quot;Article ID: &quot;, articleId)
        break

# If none of the rules match the data, return a default articleId
if not matched:
    print(&quot;Default Article ID&quot;)
</code></pre> <hr /> <h2>Looking for an efficient way of doing it, without so many for loops</h2>
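A flatter approach is to pre-index each data entry by `property_id` and dispatch conditions through a small operator table. This sketch makes the same assumptions as the loop version above: every condition in a rule is AND-ed together, and values are compared as strings (which is what the sample data contains):

```python
import operator

# Condition dispatch table; add operators here instead of new branches.
OPS = {'EQ': operator.eq, 'GTE': operator.ge, 'LTE': operator.le}

def index_entry(entry):
    # {property_id: property_value} -- rule checks become dict lookups
    # instead of a nested scan over properties_values.
    return {v['value']['property_id']: v['value']['property_value']
            for v in entry['properties_values']}

def match_article(entry, rules):
    values = index_entry(entry)
    for rule in rules:
        if all(p['propertyId'] in values and
               OPS[p['condition']](values[p['propertyId']], p['value'])
               for p in rule['properties_rule']):
            return rule['articleId']
    return None  # default when nothing matches
```

Per entry, this is one pass over the rules with constant-time property lookups; the remaining loop over rules is inherent to the problem.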
<python><json><python-3.x>
2023-03-12 03:39:39
1
606
Sudipta Dhara
75,710,928
7,932,972
How can I ensure every US state is accounted for in a pandas dataframe?
<p>I'm very new to pandas.</p> <p>I have a CSV that contains 43 states and a count of how many times something has happened in that state.</p> <pre><code>STATE,Count
AL,1
AK,4
AZ,7
</code></pre> <p>My CSV does not contain every state; how can I ensure that every state is accounted for? If it's not in the original dataframe it should have a <code>Count</code> of 0.</p> <p>Here's what I have so far, but it's giving me <code>Count_x</code> and <code>Count_y</code>, and it still hasn't got all 50 states.</p> <pre class="lang-py prettyprint-override"><code># Original CSV only has 43 states
states = pd.read_csv(&quot;states.csv&quot;)

# Create a new dataframe with all states and count set to 0
all_states = [[&quot;AL&quot;, 0], [&quot;AK&quot;, 0], [&quot;AZ&quot;, 0], [&quot;AR&quot;, 0], [&quot;CA&quot;, 0],
              [&quot;CO&quot;, 0], [&quot;CT&quot;, 0], [&quot;DE&quot;, 0], [&quot;FL&quot;, 0], [&quot;GA&quot;, 0],
              [&quot;HI&quot;, 0], [&quot;ID&quot;, 0], [&quot;IL&quot;, 0], [&quot;IN&quot;, 0], [&quot;IA&quot;, 0],
              [&quot;KS&quot;, 0], [&quot;KY&quot;, 0], [&quot;LA&quot;, 0], [&quot;ME&quot;, 0], [&quot;MD&quot;, 0],
              [&quot;MA&quot;, 0], [&quot;MI&quot;, 0], [&quot;MN&quot;, 0], [&quot;MS&quot;, 0], [&quot;MO&quot;, 0],
              [&quot;MT&quot;, 0], [&quot;NE&quot;, 0], [&quot;NV&quot;, 0], [&quot;NH&quot;, 0], [&quot;NJ&quot;, 0],
              [&quot;NM&quot;, 0], [&quot;NY&quot;, 0], [&quot;NC&quot;, 0], [&quot;ND&quot;, 0], [&quot;OH&quot;, 0],
              [&quot;OK&quot;, 0], [&quot;OR&quot;, 0], [&quot;PA&quot;, 0], [&quot;RI&quot;, 0], [&quot;SC&quot;, 0],
              [&quot;SD&quot;, 0], [&quot;TN&quot;, 0], [&quot;TX&quot;, 0], [&quot;UT&quot;, 0], [&quot;VT&quot;, 0],
              [&quot;VA&quot;, 0], [&quot;WA&quot;, 0], [&quot;WV&quot;, 0], [&quot;WI&quot;, 0], [&quot;WY&quot;, 0]]
all_states = pd.DataFrame(all_states, columns=[&quot;STATE&quot;, &quot;Count&quot;])

# Merge the two Dataframes
new_df = states.merge(all_states, on=&quot;STATE&quot;)

# Still only has 43 states
new_df
</code></pre> <p>Notice <code>AK</code> is still missing (and a few other states) <a href="https://i.sstatic.net/hO5ix.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hO5ix.png" alt="States" /></a></p>
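Two things go wrong in the merge: the default `how='inner'` keeps only states present in both frames, and since both frames have a `Count` column the overlap gets suffixed into `Count_x`/`Count_y`. Reindexing on the full state list avoids both problems (sketched with three states; use the full 50-code list in practice):

```python
import pandas as pd

# Toy version of the 43-state CSV.
states = pd.DataFrame({'STATE': ['AL', 'AK', 'AZ'], 'Count': [1, 4, 7]})
all_codes = ['AL', 'AK', 'AZ', 'AR', 'CA']  # full 50-code list in practice

# reindex() keeps existing counts and fills missing states with 0 --
# no merge, no Count_x/Count_y suffixes.
full = (states.set_index('STATE')
              .reindex(all_codes, fill_value=0)
              .reset_index())
```

An outer merge followed by `fillna(0)` would also work, but then the second frame should not carry its own `Count` column.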
<python><pandas><dataframe>
2023-03-12 03:37:24
2
357
Joey Stout
75,710,514
384,936
cannot plot the predictions
<p>When I run the code below, it errors out with: <code>IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices</code>.</p> <p>How can I convert this to ensure I can plot the predictions?</p> <pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
from datetime import datetime, timedelta
import yfinance as yf

# Download the AAPL stock price data from Yahoo Finance
symbol = 'AAPL'
start_date = datetime.now() - timedelta(days=365 * 2)
end_date = datetime.now()
df = yf.download(symbol, start=start_date, end=end_date)

# Extract the daily adjusted closing price data
data = df.filter(['Adj Close']).values

# Scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
print(scaler)
scaled_data = scaler.fit_transform(data)

# Split the data into training and testing sets
training_data_len = int(len(data) * 0.8)
train_data = scaled_data[:training_data_len, :]
test_data = scaled_data[training_data_len:, :]

# Prepare the training data
x_train = []
y_train = []
for i in range(60, len(train_data)):
    x_train.append(train_data[i-60:i, 0])
    y_train.append(train_data[i, 0])

# Convert the data to numpy arrays
x_train, y_train = np.array(x_train), np.array(y_train)

# Reshape the data for the LSTM model
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))

# Build the LSTM model
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(50, return_sequences=True, input_shape=(x_train.shape[1], 1)))
model.add(tf.keras.layers.LSTM(50, return_sequences=False))
model.add(tf.keras.layers.Dense(25))
model.add(tf.keras.layers.Dense(1))

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model
model.fit(x_train, y_train, batch_size=1, epochs=1)

# Prepare the testing data
x_test = []
y_test = scaled_data[training_data_len + 60:, :]
for i in range(60, len(test_data)):
    x_test.append(test_data[i-60:i, 0])

# Convert the data to numpy arrays
x_test = np.array(x_test)

# Reshape the data for the LSTM model
print(&quot;1&quot;)
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
print(&quot;2&quot;)

# Get the model's predicted price values
predictions = model.predict(x_test)
print(&quot;3&quot;)
predictions = scaler.inverse_transform(predictions)

# Calculate the root mean squared error (RMSE)
rmse = np.sqrt(np.mean(predictions - y_test)**2)
print(f&quot;RMSE: {rmse}&quot;)

# Plot the data
train = data[:training_data_len, :]
valid = data[training_data_len:, :]
valid['Predictions'] = predictions
plt.figure(figsize=(16,8))
plt.title('LSTM Model')
plt.xlabel('Date')
plt.ylabel('Adj Close Price')
plt.plot(train['Adj Close'])
plt.plot(valid[['Adj Close', 'Predictions']])
plt.legend(['Train', 'Valid', 'Predictions'], loc='lower right')
plt.show()

# Make predictions for the next 10 days
last_60_days = data[-60:]
last_60_days_scaled = scaler.transform(last_60_days)
X_test = []
X_test.append(last_60_days_scaled)
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_price = model.predict(X_test)
predicted_price = scaler.inverse_transform(predicted_price)
print(f&quot;Predicted price for the next day: ${predicted_price[0][0]}&quot;)
</code></pre>
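The IndexError comes from `valid['Predictions'] = predictions`: because of `data = df.filter(['Adj Close']).values`, `data` (and hence `valid`) is a plain NumPy array, and string-indexing an ndarray raises exactly this error. One fix is to build the plotting frame by slicing the original DataFrame instead, where column assignment works. A sketch with stand-in values for `df`, `training_data_len` and `predictions` (the real ones come from the code above, where the lengths must also line up):

```python
import numpy as np
import pandas as pd

# Stand-ins for the question's df / training_data_len / predictions.
n, training_data_len = 100, 80
df = pd.DataFrame({'Adj Close': np.linspace(100.0, 120.0, n)})
predictions = np.zeros((n - training_data_len, 1))

# Slice the DataFrame, not the .values array; .copy() avoids
# SettingWithCopy warnings on the new column.
valid = df.iloc[training_data_len:].copy()
valid['Predictions'] = predictions   # works: valid is a DataFrame
```

The same applies to `train = data[:training_data_len, :]`: keep it as `df.iloc[:training_data_len]` so `train['Adj Close']` is valid too.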
<python><machine-learning>
2023-03-12 01:02:31
0
1,465
junkone
75,710,491
12,389,536
Colorful notebook output with rich library
<p>Can someone tell me how to get that colorful output back in a Jupyter notebook without using <code>rich.print</code>? I use VSCode. I got this <em>feature</em> with kedro=0.18.4 and <em>lost</em> it with kedro=0.18.5. <strong>Kedro</strong> requires <strong>rich</strong> as a dependency.</p> <p><a href="https://i.sstatic.net/KeeZJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KeeZJ.png" alt="colorful output vs white output" /></a></p> <p>I think it was a <code>rich</code> bug, because after updating dependencies in my project I lost this <em>feature</em>. ;) Previous similar topic with this bug: <a href="https://stackoverflow.com/questions/74632228/vscode-jupyter-interfering-with-rich-log-formatting-library-for-python-and-i-s">VSCode/Jupyter interfering with Rich (log formatting library for Python) and I see an &lt;/&gt; for each output</a></p> <p>I want to set it up again, but I cannot. I tried</p> <pre class="lang-bash prettyprint-override"><code>pip install rich[jupyter]
</code></pre> <pre class="lang-py prettyprint-override"><code>from rich import pretty
from rich import traceback

pretty.install(indent_guides=True, expand_all=True)
traceback.install(show_locals=True, indent_guides=True)
%load_ext rich
</code></pre> <p>but I still have the common <strong>white</strong> output. I can force it with</p> <pre class="lang-py prettyprint-override"><code>from rich import pretty
pretty.Pretty(float)
</code></pre> <p>which returns</p> <p><a href="https://i.sstatic.net/fcf5h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fcf5h.png" alt="Pretty" /></a></p> <p>or</p> <pre class="lang-py prettyprint-override"><code>from rich import pretty
pretty.pprint(float)
</code></pre> <p>which returns</p> <p><a href="https://i.sstatic.net/aCHTZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aCHTZ.png" alt="pprint" /></a></p> <p>or</p> <pre class="lang-py prettyprint-override"><code>from rich import print
print(int)
</code></pre> <p>which returns</p> <p><a href="https://i.sstatic.net/4Lp9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Lp9F.png" alt="print" /></a></p> <p>but I cannot get it for plain Jupyter output without calling <code>print</code>, like before the update.</p>
<python><visual-studio-code><jupyter-notebook><kedro><rich>
2023-03-12 00:56:50
0
339
matt91t
75,710,267
525,865
Working with BeautifulSoup - defining the entities for getting all the data of the target page - perhaps pandas would solve this even better
<p>I am in the middle of a task with BeautifulSoup - the awesome Python library for all things scraping. The aim: I want to get the data out of this page: <a href="https://schulfinder.kultus-bw.de" rel="nofollow noreferrer">https://schulfinder.kultus-bw.de</a> Note: it's a public page for finding all schools in a certain region.</p> <p>So a typical dataset will look like:</p> <pre><code>Adresse
Name
Adresse 2
Kategorie
Straße
PLZ und Ort
Tel 1
Tel 2
Mail
</code></pre> <p>With Python I would go about it like so: first I have to send a request to the URL and get the page HTML content:</p> <pre><code>url = 'https://schulfinder.kultus-bw.de'
response = requests.get(url)
html_content = response.content
</code></pre> <p>Afterwards, the next step: I have to create a BeautifulSoup object and find the HTML elements that contain the school names:</p> <pre><code>soup = BeautifulSoup(html_content, 'html.parser')
schools = soup.find_all('a', {'class': 'dropdown-item'})
</code></pre> <p>Then extract the school names from the HTML elements and store them in a list:</p> <pre><code>school_names = [school.text.strip() for school in schools]
</code></pre> <p>and subsequently I need to print the list of school names:</p> <pre><code>print(school_names)
</code></pre> <p>The complete code would look like this:</p> <pre><code>import requests
from bs4 import BeautifulSoup

url = 'https://schulfinder.kultus-bw.de'
response = requests.get(url)
html_content = response.content
soup = BeautifulSoup(html_content, 'html.parser')
schools = soup.find_all('a', {'class': 'dropdown-item'})
school_names = [school.text.strip() for school in schools]
print(school_names)
</code></pre> <p>But I need to have the whole dataset -</p> <pre><code>Adresse
Name
Adresse 2
Kategorie
Straße
PLZ und Ort
Tel 1
Tel 2
Mail
</code></pre> <p>Best would be to output it in CSV format; if I were a bit more familiar with Python I would work with pandas - I guess pandas would be much easier for that kind of thing.</p> <p><strong>Update:</strong> see some images of the page:</p> <p><a href="https://i.sstatic.net/cU8wl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cU8wl.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/VSfJA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VSfJA.png" alt="enter image description here" /></a></p> <p><strong>Update 2:</strong> I try to run this in Google Colab and get the following errors. <strong>Question:</strong> do I need to install some of the packages into Colab?</p> <pre><code>import pandas as pd
from tqdm import tqdm
from multiprocessing import Pool
from string import ascii_lowercase as chars
from itertools import product
</code></pre> <p>Do I need to take care of any preliminaries in Google Colab?</p> <p>See the <strong>error log that I got</strong>:</p> <pre><code>100%|██████████| 676/676 [00:00&lt;00:00, 381711.03it/s]
0it [00:00, ?it/s]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3360             try:
-&gt; 3361                 return self._engine.get_loc(casted_key)
   3362             except KeyError as err:

5 frames
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'branches'

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3361                 return self._engine.get_loc(casted_key)
   3362             except KeyError as err:
-&gt; 3363                 raise KeyError(key) from err
   3364
   3365         if is_scalar(key) and isna(key) and not self.hasnans:

KeyError: 'branches'
</code></pre> <p><strong>End</strong> of the <strong>error log</strong> from <strong>Google Colab</strong>.</p> <p>See <strong>below</strong> the <strong>errors</strong> that I got from <strong>Anaconda</strong> (at home):</p> <pre><code>100%|██████████| 676/676 [00:00&lt;00:00, 9586.24it/s]
0it [00:00, ?it/s]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
~/anaconda3/lib/python3.9/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3628             try:
-&gt; 3629                 return self._engine.get_loc(casted_key)
   3630             except KeyError as err:

~/anaconda3/lib/python3.9/site-packages/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
~/anaconda3/lib/python3.9/site-packages/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'branches'

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
/tmp/ipykernel_27106/2163647892.py in &lt;module&gt;
     36 df = pd.DataFrame(all_data)
     37
---&gt; 38 df = df.explode('branches')
     39 df = df.explode('trades')
     40 df = pd.concat([df, df.pop('branches').apply(pd.Series).add_prefix('branch_')], axis=1)

~/anaconda3/lib/python3.9/site-packages/pandas/core/frame.py in explode(self, column, ignore_index)
   8346         df = self.reset_index(drop=True)
   8347         if len(columns) == 1:
-&gt; 8348             result = df[columns[0]].explode()
   8349         else:
   8350             mylen = lambda x: len(x) if is_list_like(x) else -1

~/anaconda3/lib/python3.9/site-packages/pandas/core/frame.py in __getitem__(self, key)
   3503             if self.columns.nlevels &gt; 1:
   3504                 return self._getitem_multilevel(key)
-&gt; 3505             indexer = self.columns.get_loc(key)
   3506             if is_integer(indexer):
   3507                 indexer = [indexer]

~/anaconda3/lib/python3.9/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3629                 return self._engine.get_loc(casted_key)
   3630             except KeyError as err:
-&gt; 3631                 raise KeyError(key) from err
   3632             except TypeError:
   3633                 # If we have a listlike key, _check_indexing_error will raise

KeyError: 'branches'
</code></pre> <p><strong>Conclusion:</strong> I am trying to find out more - I am eagerly trying to get more insights and to run the code...</p> <p>Many thanks for all the help - and for encouraging me to dive into all things Python... this is awesome.<br /> Have a great day...</p>
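One likely reason the `find_all` approach returns nothing is that the schulfinder page renders its results with JavaScript, so the plain HTML fetched by `requests` contains no school entries; the data arrives via an XHR call visible in the browser's network tab. Once the JSON records are in hand, pandas can flatten them to the desired CSV. The sketch below is illustrative only: the field names (`name`, `street`, ...) are stand-ins, not the real payload keys, and must be matched to what the endpoint actually returns.

```python
import pandas as pd

def to_rows(schools):
    """Flatten JSON records (as returned by the site's XHR endpoint)
    into one row per school. Field names here are hypothetical."""
    return pd.DataFrame([
        {
            'Name': s.get('name'),
            'Straße': s.get('street'),
            'PLZ und Ort': f"{s.get('postcode', '')} {s.get('city', '')}".strip(),
            'Tel 1': s.get('phone'),
            'Mail': s.get('email'),
        }
        for s in schools
    ])

# df = to_rows(response.json())
# df.to_csv('schools.csv', index=False)
```

In Colab, `pandas`, `tqdm`, `requests` and `bs4` are preinstalled; the KeyError: 'branches' in the logs just means the fetched records lack a `branches` column, i.e. the download step returned empty or differently-shaped data.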
<python><pandas><csv><beautifulsoup>
2023-03-11 23:42:05
1
1,223
zero
75,710,142
1,506,589
Way around Pagination safeguards
<p>I'm trying to scrape <a href="https://www.autoscout24.de/lst/bmw?atype=C&amp;cy=D&amp;desc=0&amp;ocs_listing=include&amp;sort=standard&amp;ustate=N%2CU" rel="nofollow noreferrer">this website</a> with Python/Selenium, but they implemented a cunning way to stop scrapers - the pagination doesn't extend past 20 pages.</p> <p>If you <a href="https://www.autoscout24.de/lst/bmw?atype=C&amp;cy=D&amp;desc=0&amp;ocs_listing=include&amp;sort=standard&amp;ustate=N%2CU" rel="nofollow noreferrer">click this link</a> you'll see there are 60,000 results, but 10 results per page across 20 pages doesn't give me all the data.</p> <p>Any ideas on how to circumvent this safeguard?</p>
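The usual workaround for a hard page cap is to re-partition the result set with filters so that each slice stays under the cap, then crawl every slice. A sketch using price bands; the `pricefrom`/`priceto`/`page` parameter names are assumptions based on the site's filter UI, not verified:

```python
def banded_urls(base, bands, page_count=20):
    """Generate one URL per (price band, page) pair so each band's
    result set fits inside the 20-page pagination limit."""
    urls = []
    for lo, hi in bands:
        for page in range(1, page_count + 1):
            urls.append(f"{base}&pricefrom={lo}&priceto={hi}&page={page}")
    return urls
```

Bands that still exceed the cap can be split further (binary-search style) until every slice is fully paginatable; other filters (year, mileage, region) work the same way.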
<python><selenium-webdriver><web-scraping>
2023-03-11 23:12:12
1
593
Csongor
75,710,106
12,125,777
How to use a map function instead of a loop in Python
<p>I am working on a Django project. I would like to avoid using a loop because of the number of rows in the file to upload.</p> <blockquote> <p>With a loop, I have this (it's working nicely with small files):</p> </blockquote> <pre><code>my_list = []
for ligne in my_json:
    network = Network(
        x1=ligne[&quot;x1&quot;],
        x2=ligne.get(&quot;x2&quot;, None),
        x3=ligne.get(&quot;x3&quot;, None),
    )
    my_list.append(network)
</code></pre> <p>I tried to use Python's map function like:</p> <pre><code>my_map = map(
    lambda x: (x[&quot;x1&quot;], x.get(&quot;x2&quot;), x.get(&quot;x3&quot;)),
    my_json)
list(my_map)
</code></pre> <p>How can I do that with map?</p>
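The map attempt builds tuples of field values; to be equivalent to the loop, the lambda must construct the model instances themselves. A sketch (with a plain class standing in for the Django model):

```python
# Stand-in for the Django model in the question.
class Network:
    def __init__(self, x1, x2=None, x3=None):
        self.x1, self.x2, self.x3 = x1, x2, x3

my_json = [{"x1": 1, "x2": 2}, {"x1": 3, "x3": 4}]

# The lambda returns Network instances, not tuples of field values.
my_list = list(map(
    lambda ligne: Network(
        x1=ligne["x1"],
        x2=ligne.get("x2"),
        x3=ligne.get("x3"),
    ),
    my_json,
))
```

Note that `map` is not meaningfully faster than the loop here; for large uploads in Django, the real saving is `Network.objects.bulk_create(my_list)`, which issues one query instead of one per row.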
<python><django>
2023-03-11 23:00:39
1
542
aba2s
75,710,027
11,741,232
Sending an image with an ID across processes in Python, Windows
<p>I have a project with multiple Python processes on the same computer.</p> <p>Python process A is the central node, and it receives images and data from sensors.</p> <p>Python process A sends images to process B for image processing/feature detection, and process B sends back some information to process A. The system is much more complex than this, which is why it's separated in the first place.</p> <p>One idea that I want to implement is to tag the images with an integer, let's say <code>x</code>. This way, process B can say &quot;Hi process A, this is the data for image <code>x</code>&quot;.</p> <p><strong>The problem is, how can I send both images and this integer elegantly and rapidly across processes?</strong></p> <p>Ideas:</p> <ul> <li>Use MQTT or ZeroMQ to communicate. Turn image into string, put it in a dict with the id, send it. I think these will be slow because we are serializing lots of images and sending lots of data over these chunky protocols.</li> <li>Use <a href="https://docs.python.org/3/library/multiprocessing.shared_memory.html" rel="nofollow noreferrer">Shared memory</a> to share the image. I have prototyped this, it is very fast to send images, but encoding the integer is not elegant. I would maybe encode the integer into the RGB value of a few pixels in the corner, but this feels janky.</li> <li>Pickling might be better than string serialization since it's bytes vs. strings and so much faster? The <a href="https://superfastpython.com/multiprocessing-pipe-in-python/#How_to_Use_the_Pipe" rel="nofollow noreferrer">multiprocessing pipes and queues</a> seem like they are for this, but the sender and receiver seem to need to start in the same Python thread.</li> <li>Find a unique hash for each image in every process. I'm leaning towards this option right now, it's very easy to understand but probably not the fastest.</li> </ul> <p>I recognize this is an architecture question and those are frowned upon, but I feel like there has to be a more elegant solution and I think this is an interesting design problem to ask.</p>
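One way to combine the first two ideas (a sketch, not the only design): keep the pixels in shared memory, and send only a small metadata record - id, buffer name, shape, dtype - over whatever channel is convenient (a queue, ZeroMQ, MQTT). The integer then never needs to be encoded into pixel values:

```python
import numpy as np
from multiprocessing import shared_memory

def publish(image, image_id):
    """Copy the pixels into shared memory; return the handle plus the
    small metadata dict to send over the messaging channel."""
    shm = shared_memory.SharedMemory(create=True, size=image.nbytes)
    np.ndarray(image.shape, dtype=image.dtype, buffer=shm.buf)[:] = image
    meta = {'id': image_id, 'name': shm.name,
            'shape': image.shape, 'dtype': str(image.dtype)}
    return shm, meta

def attach(meta):
    """In the receiving process: map the same buffer - zero copies."""
    shm = shared_memory.SharedMemory(name=meta['name'])
    img = np.ndarray(meta['shape'], dtype=meta['dtype'], buffer=shm.buf)
    return shm, img
```

The metadata dict is a few dozen bytes regardless of image size, so the serialization cost of the messaging layer becomes irrelevant; lifetime management (who calls `close()`/`unlink()`) is the main design point left to settle.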
<python><shared-memory>
2023-03-11 22:42:25
1
694
kevinlinxc
75,709,969
15,724,084
Python Scrapy framework: adding a proxy to my code
<p>I am trying new feature for myself as adding proxy port to my python scraper code.</p> <p>I took free proxy from this <a href="https://scrapingant.com/free-proxies/" rel="nofollow noreferrer">site</a>, and looked for an answer from <a href="https://stackoverflow.com/questions/4710483/scrapy-and-proxies">SO</a>. With help of user @dskrypa I changed in my code <code>meta={'proxy':'103.42.162.50:8080'}</code></p> <p>Now it gives an error which continues all along if I do not stop the code run.</p> <pre><code>File &quot;C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\downloader\handlers\http11.py&quot;, line 279, in _get_agent proxyScheme, proxyNetloc, proxyHost, proxyPort, proxyParams = _parse(proxy) File &quot;C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\downloader\webclient.py&quot;, line 39, in _parse return _parsed_url_args(parsed) File &quot;C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\downloader\webclient.py&quot;, line 20, in _parsed_url_args host = to_bytes(parsed.hostname, encoding=&quot;ascii&quot;) File &quot;C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\utils\python.py&quot;, line 108, in to_bytes raise TypeError('to_bytes must receive a str or bytes ' TypeError: to_bytes must receive a str or bytes object, got NoneType 2023-03-12 02:47:32 [scrapy.core.scraper] ERROR: Error downloading &lt;GET https://dvlaregistrations.dvla.gov.uk/search/results.html?search=N11CKY&amp;action=index&amp;pricefrom=0&amp;priceto=&amp;prefixmatches=&amp;currentmatches=&amp;limitprefix=&amp;limitcurrent=&amp;limitauction=&amp;searched=true&amp;openoption=&amp;language=en&amp;prefix2=Search&amp;super=&amp;super_pricefrom=&amp;super_priceto=&gt; </code></pre> <p>Here is my code;</p> <pre><code>import scrapy from scrapy.crawler import CrawlerProcess import pandas as pd import scrapy_xlsx itemList=[] class 
plateScraper(scrapy.Spider): name = 'scrapePlate' allowed_domains = ['dvlaregistrations.dvla.gov.uk'] FEED_EXPORTERS = {'xlsx': 'scrapy_xlsx.XlsxItemExporter'} custom_settings = {'FEED_EXPORTERS' :FEED_EXPORTERS,'FEED_FORMAT': 'xlsx','FEED_URI': 'output_r00.xlsx', 'LOG_LEVEL':'INFO','DOWNLOAD_DELAY': 0} DOWNLOADER_MIDDLEWARES = { 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 1 } def start_requests(self): df=pd.read_excel('data.xlsx') columnA_values=df['PLATE'] for row in columnA_values: global plate_num_xlsx plate_num_xlsx=row base_url =f&quot;https://dvlaregistrations.dvla.gov.uk/search/results.html?search={plate_num_xlsx}&amp;action=index&amp;pricefrom=0&amp;priceto=&amp;prefixmatches=&amp;currentmatches=&amp;limitprefix=&amp;limitcurrent=&amp;limitauction=&amp;searched=true&amp;openoption=&amp;language=en&amp;prefix2=Search&amp;super=&amp;super_pricefrom=&amp;super_priceto=&quot; url=base_url yield scrapy.Request(url,callback=self.parse, cb_kwargs={'plate_num_xlsx': plate_num_xlsx},meta={'proxy':'103.42.162.50:8080'}) def parse(self, response, plate_num_xlsx=None): plate = response.xpath('//div[@class=&quot;resultsstrip&quot;]/a/text()').extract_first() price = response.xpath('//div[@class=&quot;resultsstrip&quot;]/p/text()').extract_first() try: a = plate.replace(&quot; &quot;, &quot;&quot;).strip() if plate_num_xlsx == plate.replace(&quot; &quot;, &quot;&quot;).strip(): item = {&quot;plate&quot;: plate_num_xlsx, &quot;price&quot;: price.strip()} itemList.append(item) print(item) yield item else: item = {&quot;plate&quot;: plate_num_xlsx, &quot;price&quot;: &quot;-&quot;} itemList.append(item) print(item) yield item except: item = {&quot;plate&quot;: plate_num_xlsx, &quot;price&quot;: &quot;-&quot;} itemList.append(item) print(item) yield item process = CrawlerProcess() process.crawl(plateScraper) process.start() import winsound winsound.Beep(555,333) </code></pre>
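The traceback points at `_parse(proxy)` failing to produce a hostname. A likely cause (a sketch of the diagnosis, not a verified fix for this particular proxy) is that the proxy value lacks a URL scheme, so the parser finds no netloc and hands `None` to `to_bytes`:

```python
from urllib.parse import urlparse

# Without "//" after a scheme, urlparse never populates netloc,
# so parsed.hostname is None — exactly what to_bytes() then rejects
# with "got NoneType".
assert urlparse("103.42.162.50:8080").hostname is None

# With an explicit scheme the hostname parses correctly:
assert urlparse("http://103.42.162.50:8080").hostname == "103.42.162.50"

# So the likely fix in the spider is:
# meta={"proxy": "http://103.42.162.50:8080"}
```

If that is the cause, prefixing the proxy with `http://` should let the request parse; whether this specific free proxy actually responds is a separate question.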
<python><proxy><scrapy>
2023-03-11 22:30:58
1
741
xlmaster
75,709,963
1,647,792
CDKTF Iterator On AWS Resources
<p>I've gone through the source code of CDKTF TerraformIterator, and the AWS Subnets to try and find a way to iterate over Subnet when setting for_each. I'm not sure how to do this as the iterator requires a list of strings and I don't see anything that would return subnet ids from that class. Here is some Python pseudo code:</p> <pre><code>public_subnets = Subnet(self, 'default-public', for_each = public_subnet_iterator, availability_zone_id = public_subnet_iterator.get_string('availability_zone'), cidr_block = public_subnet_iterator.get_string('CIDR'), vpc_id = default_vpc.id tags = Tags.commons_with_name('default-public') ) route_table = RouteTable(self, 'public-subnets', vpc_id = default_vpc.id, RouteTableRoute( cidr_block = '0.0.0.0/0', gateway_id = default_gateway.id ), tags = Tags.commons_with_name('public-subnet-routing') ) routable_subnets = TerraformIterator.fromList(public_subnets.?????) RouteTableAssociation(self, 'public-subnet-routes', for_each = routable_subnets, etc... ) </code></pre> <p>How can I iterate over the subnets to generate these route table associations?</p>
<python><amazon-web-services><terraform-cdk>
2023-03-11 22:30:04
1
399
jazzmasterkc
75,709,894
734,748
Display new chart or table in streamlit on demand
<p>All the tutorials I found online about Streamlit create a fixed type of table/chart, etc. I.e., the developer writes the code to display a table with a fixed query to extract the data.</p> <p>Imagine I have this use case:</p> <ol> <li>The user types something into the chat window: Show me the population growth per country in the past 3 years. Then we display this line chart.</li> <li>The user next types in: show me the export in dollar amount per country in the past 3 years. Then we display another line chart. Note the second chart basically shows up from the bottom and pushes the first chart up, like a chat window.</li> </ol> <p>How can I achieve this using Streamlit? I am open to other ideas using a Python framework.</p>
<python><streamlit>
2023-03-11 22:17:13
1
3,367
drdot
75,709,741
10,870,383
Pydantic nested setting objects load env variables from file
<p>Using pydantic setting management, how can I load env variables on nested setting objects on a main settings class? In the code below, the <code>sub_field</code> env variable field doesn't get loaded. <code>field_one</code> and <code>field_two</code> load fine. How can I load an environment file so the values are propagated down to the nested <code>sub_settings</code> object?</p> <pre><code>from typing import Optional from pydantic import BaseSettings, Field class SubSettings(BaseSettings): sub_field: Optional[str] = Field(None, env='SUB_FIELD') class Settings(BaseSettings): field_one: Optional[str] = Field(None, env='FIELD_ONE') field_two: Optional[int] = Field(None, env='FIELD_TWO') sub_settings: SubSettings = SubSettings() settings = Settings(_env_file='local.env') </code></pre>
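One plausible explanation (an assumption, not verified against every pydantic version): the default `sub_settings: SubSettings = SubSettings()` is constructed once, when the class body executes — before `Settings(_env_file='local.env')` ever reads the file — and `_env_file` is not propagated to nested models anyway. A pydantic-free sketch of the timing issue:

```python
import os

# Make sure the variable is genuinely unset before the class body runs.
os.environ.pop("DEMO_SUB_FIELD", None)

class Demo:
    # Evaluated once, when the class body runs — just like
    # `sub_settings: SubSettings = SubSettings()` in the question.
    captured = os.environ.get("DEMO_SUB_FIELD")

# Anything set afterwards (e.g. values read later from local.env)
# is never seen by the already-built default.
os.environ["DEMO_SUB_FIELD"] = "set-later"
assert Demo.captured is None
```

Commonly suggested workarounds, both to be treated as assumptions to verify against the installed pydantic version: give `SubSettings` its own `Config` with `env_file = 'local.env'`, or declare `sub_settings: SubSettings = Field(default_factory=SubSettings)` so the nested object is built per instance rather than at class definition.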
<python><pydantic>
2023-03-11 21:47:08
2
541
stormakt
75,709,429
5,957
Django child model's Meta not available in a @classmethod
<p>I am trying to write a check for django abstract base model that verifies some facts about the inner <code>Meta</code> of a derived class. As per <a href="https://docs.djangoproject.com/en/4.1/topics/checks/#field-model-manager-and-database-checks" rel="nofollow noreferrer">documentation</a> the check is a <code>@classmethod</code>. The problem is that in the method the <code>cls</code> parameter is the correct type of the derived class (<code>Child1</code> or <code>Child2</code>), but its <code>Meta</code> is the <code>Base.Meta</code> and not the <code>Meta</code> of the appropriate child class.</p> <p>How can I access the inner <code>Meta</code> of a child model in a base model's <code>@classmethod</code>?</p> <p>I've tried both with a simple <code>Meta</code> class (no base class - <code>Child1</code> below) and with one that derives from the base class' <code>Meta</code> (<code>Child2</code>) - as per <a href="https://docs.djangoproject.com/en/4.1/topics/db/models/#meta-inheritance" rel="nofollow noreferrer">https://docs.djangoproject.com/en/4.1/topics/db/models/#meta-inheritance</a></p> <pre class="lang-py prettyprint-override"><code>from django.db import models class Mixin: @classmethod def method(cls): print(f&quot;{cls=}, {cls.Meta=}&quot;) print([attr for attr in dir(cls.Meta) if not attr.startswith(&quot;__&quot;)]) class Base(Mixin, models.Model): name = models.TextField(blank=False, null=False) age = models.IntegerField(blank=False, null=False) class Meta: abstract = True class Child1(Base): city = models.TextField(blank=True, null=True) class Meta: db_table = &quot;city_table&quot; verbose_name = &quot;12345&quot; class Child2(Base): city = models.TextField(blank=True, null=True) class Meta(Base.Meta): db_table = &quot;other_city_table&quot; verbose_name_plural = &quot;qwerty&quot; Child1.method() Child2.method() </code></pre> <p>In both cases the output is:</p> <pre><code>cls=&lt;class 'mnbvc.models.Child1'&gt;, cls.Meta=&lt;class 
'mnbvc.models.Base.Meta'&gt; ['abstract'] cls=&lt;class 'mnbvc.models.Child2'&gt;, cls.Meta=&lt;class 'mnbvc.models.Base.Meta'&gt; ['abstract'] </code></pre> <p>While I want <code>cls.Meta</code> to be the Meta of the child class.</p>
<python><django><django-models>
2023-03-11 20:47:55
1
4,159
Kasprzol
75,709,404
2,479,786
python httpx use --compressed like curl
<p>How can I use httpx library to do something similar to</p> <pre><code>curl --compressed &quot;http://example.com&quot; </code></pre> <p>Meaning I want the library to check if server supports compression, if so send the right header and return the data to me as if it was not compressed?</p>
<python><httpx>
2023-03-11 20:43:24
2
2,931
Ehud Lev
75,709,399
7,747,759
What does it mean if -1 is returned for .get_device() for torch tensor?
<p>I am using pytorch geometric to train a graph neural network. The problem that led to this question is the following error:</p> <blockquote> <p>RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)</p> </blockquote> <p>So, I am trying to check which device the tensors are loaded on, and when I run <code>data.x.get_device()</code> and <code>data.edge_index.get_device()</code>, I get <code>-1</code> for each. What does -1 mean?</p> <p>In general, I am a bit confused as to when I need to transfer data to the device (whether cpu or gpu), but I assume for each epoch, I simply use .to(device) on my tensors to add to the proper device (but as of now I am not using .to(device) since I am just testing with cpu).</p> <p><strong>Additional context:</strong></p> <p>I am running ubuntu 20, and I didn't see this issue until installing cuda (i.e., I was able to train/test the model on cpu but only having this issue after installing cuda and updating nvidia drivers). I have <code>cuda 11.7</code> installed on my system with an nvidia driver compatible up to cuda 12 (e.g., cuda 12 is listed with <code>nvidia-smi</code>), and the output of <code>torch.version.cuda</code> is <code>11.7</code>. Regardless, I am simply trying to use the cpu at the moment, but will use the gpu once this device issue is resolved.</p>
<python><pytorch><pytorch-geometric>
2023-03-11 20:42:02
2
511
Ralff
75,709,225
3,885,446
Yolo V8 on Raspberry Pi
<p>I am trying to localise my robot using a camera. After months trying to use classical computer vision to pinpoint landmarks in my garden I gave up and created a custom dataset and quickly trained a yolov8 nano model which was outstandingly effective. Now I have just got to work on speed. I ran the following code to see the effect of image size:</p> <pre><code>import time import numpy as np import cv2 from ultralytics import YOLO model = YOLO(&quot;best.pt&quot;) # load a pretrained model (recommended for training) img = cv2.imread('house.jpg') sizes = [320,480,640,1280] for sz in sizes: times = [] resized = cv2.resize(img, (sz,sz), interpolation = cv2.INTER_AREA) for i in range(10): timeStart = time.time() results = model.predict(source= resized,show= False,verbose = False) # predict on an image timeEnd = time.time() times.append(timeEnd-timeStart) ar = np.array(times) print(f'size:{sz:4}, mean:{int(ar.mean()*1000):3}, st dev:{int(ar.std()*1000):3}, min:{int(ar.min()*1000):3}, max:{int(ar.max()*1000):3}') </code></pre> <p>The results on my laptop which has an i9 processor and a small GPU are:</p> <p>size: 320, mean: 19, st dev: 16, min: 10, max:134</p> <p>size: 480, mean: 16, st dev: 4, min: 6, max: 31</p> <p>size: 640, mean: 15, st dev: 4, min: 10, max: 20</p> <p>size:1280, mean: 16, st dev: 4, min: 8, max: 30</p> <p>all times are milliseconds</p> <p>I did not expect much difference as I assumed that they would all fit on the GPU and be done in parallel. Clearly however the smaller size takes longer, but I think that for some unknown reason it is simply the first run that is slower, perhaps loading weights. To an extent the results on the laptop are academic - they are very fast, however the results on the RPI 4 are another story.
Using the same piece of code:</p> <p>size: 320, mean:2002, st dev:391, min:1846, max:3177</p> <p>size: 480, mean:1895, st dev:26, min:1845, max:1929</p> <p>size: 640, mean:1933, st dev:30, min:1902, max:1992</p> <p>size:1280, mean:1931, st dev:33, min:1896, max:1991</p> <p>Again the first size is slower and all sizes are depressingly slow.</p> <p>I had expected/hoped that reducing the size on the CPU would have speeded things up. So my questions are: why are smaller sizes not faster on a CPU, and what, if anything, can I do to speed things up? I tried overclocking to 1800 and this produced an insignificant speedup.</p>
<python><raspberry-pi><yolo>
2023-03-11 20:07:59
2
575
Alan Johnstone
75,709,199
2,627,777
SimpleDirectoryReader cannot be downloaded via llama_index's download_loader
<p>I am using llama_index package to index some of our own documents and query them using GPT. It works fairly well with individual PDFs. However we have a large anout of PDFs which I would like to load in a single run as using its SimpleDirectoryReader. But I am getting the following error when the following commands were run.</p> <pre><code>from llama_index import download_loader SimpleDirectoryReader = download_loader(&quot;SimpleDirectoryReader&quot;) FileNotFoundError: [Errno 2] No such file or directory: C:\\Users\\XXXXX\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\gpt_index\\readers\\llamahub_modules/file/base.py' </code></pre> <p>The readers\llamahub_modules\file folder only has a folder called 'pdf'. It doesn't have a base.py file. How</p> <p>I tried uninstalling and re-installing llama_index python module but there was no impact. My python version is 3.8.2</p> <p>How can I get it working?</p>
<python><gpt-3>
2023-03-11 20:04:05
2
1,724
Ishan Hettiarachchi
75,709,193
310,370
How to calculate the image similarity of 2 given images using the OpenAI CLIP model - which method / AI model is best for calculating image similarity?
<p>I have prepared a small example code but It is throwing error. Can't solve the problem because it is supposed to work.</p> <p>Also do you think are there any better approaches to calculate image similarity? I want to find similar cloth images. e.g. I will give an image of a coat and I want to find similar coats.</p> <p>also would this code handle all dimensions of images and all types of images?</p> <p>here the code</p> <pre><code>import torch import torchvision.transforms as transforms import urllib.request from transformers import CLIPProcessor, CLIPModel, CLIPTokenizer from PIL import Image # Load the CLIP model device = &quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot; model_ID = &quot;openai/clip-vit-base-patch32&quot; model = CLIPModel.from_pretrained(model_ID).to(device) preprocess = CLIPProcessor.from_pretrained(model_ID) # Define a function to load an image and preprocess it for CLIP def load_and_preprocess_image(image_path): # Load the image from the specified path image = Image.open(image_path) # Apply the CLIP preprocessing to the image image = preprocess(image).unsqueeze(0).to(device) # Return the preprocessed image return image # Load the two images and preprocess them for CLIP image_a = load_and_preprocess_image('/content/a.png') image_b = load_and_preprocess_image('/content/b.png') # Calculate the embeddings for the images using the CLIP model with torch.no_grad(): embedding_a = model.encode_image(image_a) embedding_b = model.encode_image(image_b) # Calculate the cosine similarity between the embeddings similarity_score = torch.nn.functional.cosine_similarity(embedding_a, embedding_b) # Print the similarity score print('Similarity score:', similarity_score.item()) </code></pre> <p>here the error message</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) [&lt;ipython-input-24-e95a926e1bc8&gt;](https://localhost:8080/#) in &lt;module&gt; 25 26 # 
Load the two images and preprocess them for CLIP ---&gt; 27 image_a = load_and_preprocess_image('/content/a.png') 28 image_b = load_and_preprocess_image('/content/b.png') 29 3 frames [/usr/local/lib/python3.9/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in _call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2579 2580 if not _is_valid_text_input(text): -&gt; 2581 raise ValueError( 2582 &quot;text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) &quot; 2583 &quot;or `List[List[str]]` (batch of pretokenized examples).&quot; ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples) </code></pre>
<python><huggingface-transformers><clip><huggingface>
2023-03-11 20:03:21
1
23,982
Furkan Gözükara
75,709,118
3,147,690
How are attribute names with underscore managed in Pydantic Models?
<p>Can anyone explain how Pydantic manages attribute names with an underscore?</p> <p>In Pydantic models, there is a weird behavior related to attribute naming when using the underscore. That behavior does not occur in python classes. The test results show some allegedly &quot;unexpected&quot; errors.</p> <p>The following code is catching some errors for simple class and instance operations:</p> <pre class="lang-py prettyprint-override"><code>class PydanticClass(BaseModel): a: int = &quot;AAA&quot; _z: int = &quot;_Z_Z_Z&quot; @classmethod def get_a(cls): return cls.a @classmethod def get_z(cls): return cls._z class NonPydanticClass: a: int = &quot;AAA&quot; _z: int = &quot;_Z_Z_Z&quot; @classmethod def get_a(cls): return cls.a @classmethod def get_z(cls): return cls._z print (&quot;PYDANTIC MODEL CLASS TEST:&quot;) pydantic_instance = PydanticClass() try: msg1, msg2 = &quot;SUCCESS&quot;, f&quot;{pydantic_instance.get_a()}&quot; except Exception as e: msg1, msg2 = &quot;ERROR&quot;, f&quot;{e}&quot; print('{0:&lt;7} :: {1:&lt;27} :: {2:&lt;65} :: {3}'.format(msg1, &quot;1 pydantic_instance.get_a()&quot;, &quot;Accessing non-underscored attributes names in a class method&quot;, msg2)) try: msg1, msg2 = &quot;SUCCESS&quot;, f&quot;{pydantic_instance.get_z()}&quot; except Exception as e: msg1, msg2 = &quot;ERROR&quot;, f&quot;{e}&quot; print('{0:&lt;7} :: {1:&lt;27} :: {2:&lt;65} :: {3}'.format(msg1, &quot;2 pydantic_instance.get_z()&quot;, &quot;Accessing underscored attributes names in a class method&quot;, msg2)) try: msg1, msg2 = &quot;SUCCESS&quot;, f&quot;{PydanticClass.a}&quot; except Exception as e: msg1, msg2 = &quot;ERROR&quot;, f&quot;{e}&quot; print('{0:&lt;7} :: {1:&lt;27} :: {2:&lt;65} :: {3}'.format(msg1, &quot;3 PydanticClass.a &quot;,&quot;Accessing non-underscored attribute names as a 'static attribute'&quot;, msg2)) try: msg1, msg2 = &quot;SUCCESS&quot;, f&quot;{PydanticClass._z}&quot; except Exception as e: msg1, msg2 = &quot;ERROR&quot;, 
f&quot;{e}&quot; print('{0:&lt;7} :: {1:&lt;27} :: {2:&lt;65} :: {3}'.format(msg1, &quot;4 PydanticClass._z&quot;,&quot;Accessing underscored attributes names as a 'static attribute'&quot;, msg2)) print (&quot;\nNON PYDANTIC CLASS TEST:&quot;) nom_pydantic_instance = NonPydanticClass() try: msg1, msg2 = &quot;SUCCESS&quot;, f&quot;{nom_pydantic_instance.get_a()}&quot; except Exception as e: msg1, msg2 = &quot;ERROR&quot;, f&quot;{e}&quot; print('{0:&lt;7} :: {1:&lt;27} :: {2:&lt;65} :: {3}'.format(msg1, &quot;1 class_instance.get_a()&quot;, &quot;Accessing non-underscored attributes names in a class method&quot;, msg2)) try: msg1, msg2 = &quot;SUCCESS&quot;, f&quot;{nom_pydantic_instance.get_z()}&quot; except Exception as e: msg1, msg2 = &quot;ERROR&quot;, f&quot;{e}&quot; print('{0:&lt;7} :: {1:&lt;27} :: {2:&lt;65} :: {3}'.format(msg1, &quot;2 class_instance.get_z()&quot;, &quot;Accessing underscored attributes names in a class method&quot;, msg2)) try: msg1, msg2 = &quot;SUCCESS&quot;, f&quot;{NonPydanticClass.a}&quot; except Exception as e: msg1, msg2 = &quot;ERROR&quot;, f&quot;{e}&quot; print('{0:&lt;7} :: {1:&lt;27} :: {2:&lt;65} :: {3}'.format(msg1, &quot;3 PydanticClass.a &quot;,&quot;Accessing non-underscored attribute names as a 'static attribute'&quot;, msg2)) try: msg1, msg2 = &quot;SUCCESS&quot;, f&quot;{NonPydanticClass._z}&quot; except Exception as e: msg1, msg2 = &quot;ERROR&quot;, f&quot;{e}&quot; print('{0:&lt;7} :: {1:&lt;27} :: {2:&lt;65} :: {3}'.format(msg1, &quot;4 PydanticClass._z&quot;,&quot;Accessing underscored attributes names as a 'static attribute'&quot;, msg2)) </code></pre> <p>Here are the results:</p> <pre class="lang-none prettyprint-override"><code>PYDANTIC MODEL CLASS TEST: ERROR :: 1 pydantic_instance.get_a() :: Accessing non-underscored attributes names in a class method :: type object 'PydanticClass' has no attribute 'a' SUCCESS :: 2 pydantic_instance.get_z() :: Accessing underscored attributes names in a class method :: 
_Z_Z_Z ERROR :: 3 PydanticClass.a :: Accessing non-underscored attribute names as a 'static attribute' :: type object 'PydanticClass' has no attribute 'a' SUCCESS :: 4 PydanticClass._z :: Accessing underscored attributes names as a 'static attribute' :: _Z_Z_Z NON PYDANTIC CLASS TEST: SUCCESS :: 1 class_instance.get_a() :: Accessing non-underscored attributes names in a class method :: AAA SUCCESS :: 2 class_instance.get_z() :: Accessing underscored attributes names in a class method :: _Z_Z_Z SUCCESS :: 3 PydanticClass.a :: Accessing non-underscored attribute names as a 'static attribute' :: AAA SUCCESS :: 4 PydanticClass._z :: Accessing underscored attributes names as a 'static attribute' :: _Z_Z_Z </code></pre>
<python><pydantic>
2023-03-11 19:51:47
2
474
villamejia
75,707,839
4,321,525
Pandas read_csv() parse_dates does not limit itself to the specified column - how to fix?
<p>I want to read in different CSV files and while doing that, convert the time column to seconds since the epoch. However, the date_parser gets applied to more then the specified column, and my data is butchered.</p> <p>here is my code and some example data:</p> <pre><code>import pandas as pd TIME_STG = &quot;Datum (UTC)&quot; PRICE_STG = &quot;Day Ahead Auktion (DE-LU)&quot; PRICE_FILE = &quot;booking_algorythm/data/energy-charts_Stromproduktion_und_Börsenstrompreise_in_Deutschland_2021.csv&quot; def get_data(file, *columns): types_dict = {} parse_dates_list = [] for column in columns: if column == TIME_STG: types_dict.update({column: str}) parse_dates_list.append(column) else: types_dict.update({column: float}) data = pd.read_csv(file, sep=&quot;,&quot;, usecols=columns, dtype=types_dict, parse_dates=parse_dates_list, date_parser=lambda col: pd.to_datetime(col, utc=True)).astype(int) // 10**9 data_np = data.to_numpy() return data_np def get_price_vector(): data = get_data(PRICE_FILE, PRICE_STG, TIME_STG) return data def main(): vector = get_price_vector() print(vector) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>example data</p> <pre><code>&quot;Datum (UTC)&quot;,&quot;Kernenergie&quot;,&quot;Nicht Erneuerbar&quot;,&quot;Erneuerbar&quot;,&quot;Last&quot;,&quot;Day Ahead Auktion (DE-LU)&quot; 2021-01-01T00:00:00.000Z,8151.12,35141.305,11491.71,43516.88,48.19 2021-01-01T00:15:00.000Z,8147.209,34875.902,11331.25,42998.01,48.19 2021-01-01T00:30:00.000Z,8154.02,34825.553,11179.375,42494.2,48.19 2021-01-01T00:45:00.000Z,8152.82,34889.11,11072.377,42320.32,48.19 2021-01-01T01:00:00.000Z,8156.53,34922.123,10955.356,41598.39,44.68 2021-01-01T01:15:00.000Z,8161.601,34856.2,10867.771,41214.32,44.68 2021-01-01T01:30:00.000Z,8158.36,35073.1,10789.049,40966.95,44.68 2021-01-01T01:45:00.000Z,8151.3,34972.501,10657.209,40664.63,44.68 2021-01-01T02:00:00.000Z,8145.589,34911.037,10637.605,40502.78,42.92 </code></pre> <p>and this is the unexpected output - I had 
expected the price column to be actual data like 44.68.</p> <p><code>.astype(int) // 10**9</code> is a fast conversion to seconds since the epoch that I found here on StackOverflow.</p>
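A likely culprit is the trailing `.astype(int) // 10**9`, which is applied to the *entire* DataFrame returned by `read_csv`, truncating the price column along with the timestamps. A minimal sketch (using a two-column sample of the data shown) that converts only the time column and leaves the rest alone:

```python
import io

import pandas as pd

# Two-column sample of the CSV shown above.
csv = io.StringIO(
    '"Datum (UTC)","Day Ahead Auktion (DE-LU)"\n'
    "2021-01-01T00:00:00.000Z,48.19\n"
)
df = pd.read_csv(csv)

# Convert ONLY the time column to seconds since the epoch;
# every other column keeps its real values.
epoch = pd.Timestamp("1970-01-01", tz="UTC")
df["Datum (UTC)"] = (
    pd.to_datetime(df["Datum (UTC)"], utc=True) - epoch
) // pd.Timedelta("1s")

print(df)
```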
<python><pandas><datetime><timestamp>
2023-03-11 18:17:19
1
405
Andreas Schuldei
75,706,571
12,846,701
How to filter rows using custom function in pyarrow
<p>I've a parquet dataset that contains latitude and longitude values as separate columns. And I want to filter those rows that are inside a polygon, I'm able to do this in pandas dataframe but unable to do in pyarrow table.</p> <p>I'm using pyarrow to read the parquet files, as it's quite fast.</p> <p>Here's how I'm doing this in pandas:</p> <pre><code>import pyarrow as pa from shapely.geometry import shape, Point def point_in_polygon(df, polygon): return df.apply(lambda x: shape(polygon).intersects(Point(x.lon, x.lat)), axis=1) res: pa.Table = ParquetDataset(....) res.to_pandas().loc[lambda df: point_in_polygon(df, polygon)] </code></pre> <p>But the problem with above approach is that it's quite slow. I know of filters in pyarrow and pyarrow.compute but unable to figure out how I can achieve this.</p> <p>If more information is needed please let me know :) <br> Thanks :)</p>
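As far as I know, `pyarrow.compute` has no point-in-polygon kernel, so one workable pattern (a sketch, not the only option) is to build a boolean mask from the lat/lon columns in Python — or, presumably much faster, with shapely 2.x's vectorized `shapely.contains_xy` on the columns as NumPy arrays — and pass that mask to `Table.filter`. A dependency-free ray-casting sketch of the mask idea, with the pyarrow wiring shown only as hedged comments:

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (lon, lat) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray from (lon, lat) cross this edge?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside


square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True
print(point_in_polygon(2.0, 2.0, square))  # False

# With pyarrow (untested here), the mask feeds straight into filter():
#   mask = [point_in_polygon(lon, lat, poly)
#           for lon, lat in zip(table["lon"].to_pylist(), table["lat"].to_pylist())]
#   filtered = table.filter(pa.array(mask))
```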
<python><pandas><geolocation><pyarrow>
2023-03-11 17:07:16
1
356
cicada_
75,706,482
6,245,473
Select specific column in Python dictionary?
<p>I would like to select a specific field from a python dictionary. The following code produces a result of 152 lines, but I only want to select one line and output it as a dataframe.</p> <pre><code>import pandas as pd from yahooquery import Ticker AAPL = Ticker('AAPL') AAPL.earnings_trend </code></pre> <p>Code above returns 152 lines. Below is a portion of it.</p> <pre><code>{'AAPL': {'trend': [{'maxAge': 1, 'period': '0q', , {'maxAge': 1, 'period': '-5y', 'endDate': None, 'growth': 0.21908002, 'earningsEstimate': {'avg': {}, 'low': {}, 'high': {}, 'yearAgoEps': {}, 'numberOfAnalysts': {}, 'growth': {}}, 'downLast90days': {}}}], 'maxAge': 1}} </code></pre> <p>I want to print line 130 as a dataframe (<code>'growth': 0.21908002</code> from above):</p> <p><a href="https://i.sstatic.net/5LKDs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5LKDs.png" alt="enter image description here" /></a></p>
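A sketch of one way to pull out just that entry, using a trimmed-down stand-in for the structure shown above (the real yahooquery payload may differ in detail): select the trend record whose `period` is `'-5y'` and read its `growth` key; wrapping that dict in `pd.DataFrame([...])` then yields a one-row DataFrame.

```python
# Trimmed-down stand-in for the yahooquery payload shown above;
# the real response has many more keys.
data = {
    "AAPL": {
        "trend": [
            {"maxAge": 1, "period": "0q", "growth": 0.05},
            {"maxAge": 1, "period": "-5y", "growth": 0.21908002},
        ],
        "maxAge": 1,
    }
}

# Pick the trend entry whose period is "-5y", then read its growth.
row = next(t for t in data["AAPL"]["trend"] if t["period"] == "-5y")
print(row["growth"])  # 0.21908002

# One-row DataFrame, if that presentation is wanted:
# import pandas as pd
# pd.DataFrame([{"period": row["period"], "growth": row["growth"]}])
```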
<python><pandas><dataframe><dictionary><yfinance>
2023-03-11 16:51:12
2
311
HTMLHelpMe
75,706,131
17,378,883
How to split the list into sublists with length between 2 and 4?
<p>I want to write a function which splits the list into sublists with length between 2 and 4, no more and no less.</p> <p>For example this list <code>[&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;]</code> can turn into this <code>[[&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;],[&quot;a&quot;,&quot;a&quot;,&quot;a&quot;],[&quot;a&quot;,&quot;a&quot;]]</code> or <code>[[&quot;a&quot;,&quot;a&quot;,&quot;a&quot;],[&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;],[&quot;a&quot;,&quot;a&quot;]]</code>.</p> <p>I wrote this function:</p> <pre><code>def split_list(lst): result = [] sublist = [] for i in range(len(lst)): sublist.append(lst[i]) if len(sublist) == 4 or (len(sublist) == 2 and i == len(lst)-1): result.append(sublist) sublist = [] elif len(sublist) == 3 and i == len(lst)-1: result.append(sublist) return result </code></pre> <p>but the output for <code>[&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;,&quot;a&quot;]</code> is <code>[['a', 'a', 'a', 'a'], ['a', 'a', 'a', 'a']]</code>, which is wrong.</p> <p>For a list with fewer than three values, it must output that list unchanged: for example <code>[&quot;a&quot;,&quot;a&quot;]</code> must not change.</p> <p>How could I do that?</p>
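One possible greedy fix (a sketch; other orderings of chunk sizes are equally valid per the question): take four items at a time, except take three when exactly five remain so that no single item is stranded, and return short lists unchanged:

```python
def split_list(lst):
    # Lists with fewer than three items are returned unchanged,
    # as the question requires.
    if len(lst) < 3:
        return lst
    result = []
    i = 0
    n = len(lst)
    while i < n:
        remaining = n - i
        if remaining <= 4:
            take = remaining      # final chunk of 2, 3 or 4
        elif remaining == 5:
            take = 3              # avoid stranding a single leftover item
        else:
            take = 4
        result.append(lst[i:i + take])
        i += take
    return result


print(split_list(["a"] * 9))  # [['a', 'a', 'a', 'a'], ['a', 'a', 'a'], ['a', 'a']]
```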
<python><python-3.x><list><function>
2023-03-11 15:57:28
3
397
gh1222
75,706,096
3,672,883
Is it "correct" or pythonic to make getters this way, or is it better to use @property?
<p>I have a class where the attribute is a numpy array, and a lot of getters, for several slices of that array.</p> <p>The question is about what is a more pythonic way of doing this</p> <pre><code>def get_right_knee(self) -&gt; YoloV7PoseKeypoint: return self._get_x_y_conf(49) </code></pre> <p>or</p> <pre><code>@property def right_knee(self) -&gt; YoloV7PoseKeypoint: return self._get_x_y_conf(49) </code></pre> <p>the _get_x_y_conf function is:</p> <pre><code>def _get_x_y_conf(self, start_index: int) -&gt; YoloV7PoseKeypoint: &quot;&quot;&quot; Get the x, y, and confidence values for a single keypoint. :param start_index: The index at which to start reading values from the raw_keypoints list. :return: A YoloV7PoseKeypoint object representing a single keypoint. &quot;&quot;&quot; end_index = start_index + self._step_keypoint x = self.raw_keypoints[start_index:end_index][0] y = self.raw_keypoints[start_index:end_index][1] conf = self.raw_keypoints[start_index:end_index][2] return YoloV7PoseKeypoint(x=x, y=y, conf=conf) </code></pre> <p>thanks</p>
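For a value that behaves like a read-only attribute and takes no arguments, `@property` is generally considered the more Pythonic choice (PEP 8 discourages explicit `get_` accessors). A minimal sketch, with hypothetical index values standing in for the real keypoint layout:

```python
class Pose:
    """Minimal stand-in for the real class, which wraps the raw keypoint array."""

    _step_keypoint = 3  # x, y, conf per keypoint (as in the question)

    def __init__(self, raw_keypoints):
        self.raw_keypoints = raw_keypoints

    @property
    def right_knee(self):
        # 49 is the hypothetical start index used in the question.
        start = 49
        x, y, conf = self.raw_keypoints[start:start + self._step_keypoint]
        return (x, y, conf)


pose = Pose(list(range(60)))
print(pose.right_knee)  # (49, 50, 51)
```

Callers then read `pose.right_knee` like a plain attribute, which keeps the call sites short when there are many such slices.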
<python><python-class>
2023-03-11 15:50:40
1
5,342
Tlaloc-ES
75,706,064
20,122,390
How to do web-scraping with python on pages with dynamic content?
<p>I'm trying to do web scraping on flight pages to compare prices for a specific month on the airlines, at the moment I have this:</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.options import Options from bs4 import BeautifulSoup options = Options() options.add_argument('--headless') options.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36') # Dates dates = ['2023-04-01', '2023-04-02', '2023-04-03', '2023-04-04', '2023-04-05', '2023-04-06', '2023-04-07', '2023-04-08', '2023-04-09', '2023-04-10', '2023-04-11', '2023-04-12', '2023-04-13', '2023-04-14', '2023-04-15', '2023-04-16', '2023-04-17', '2023-04-18', '2023-04-19', '2023-04-20', '2023-04-21', '2023-04-22', '2023-04-23', '2023-04-24', '2023-04-25', '2023-04-26', '2023-04-27', '2023-04-28', '2023-04-29', '2023-04-30'] prices = {} for date in dates: driver = webdriver.Chrome(options=options) url = f'https://www.latamairlines.com/co/es/ofertas-vuelos?origin=BOG&amp;inbound=null&amp;outbound={date}T17%3A00%3A00.000Z&amp;destination=BGA&amp;adt=1&amp;chd=0&amp;inf=0&amp;trip=OW&amp;cabin=Economy&amp;redemption=false&amp;sort=RECOMMENDED' driver.get(url) html = driver.page_source driver.quit() soup = BeautifulSoup(html, 'html.parser') spans = soup.find_all('span', {'class': 'display-currencystyle__CurrencyAmount-sc__sc-19mlo29-2 fMjBKP'}) prices[date] = [] for span in spans: texto = span.text.strip() if texto not in prices[date]: prices[date].append(texto) print(prices) </code></pre> <p>The idea is to go through all the pages for all the dates of the month, since for each date the URL changes. My idea was to apply this implementation to more web pages (changing the url and the search html tags accordingly). 
However, I have come across pages like: <a href="https://www.easyfly.com.co/" rel="nofollow noreferrer">https://www.easyfly.com.co/</a> in which when I configure a date and destination, I do not get a URL with the parameters, but rather an application that I suppose loads dynamically:</p> <p><a href="https://i.sstatic.net/RKsRo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RKsRo.png" alt="enter image description here" /></a></p> <p>What should be done in these cases? To which URL should I make the request? Thank you.</p>
<python><selenium-webdriver><web-scraping><beautifulsoup>
2023-03-11 15:44:56
0
988
Diego L
75,706,046
21,343,992
boto3 Multiple ways to wait for an instance to be ready, which is the correct one?
<p>I'm investigating a sporadic bug with <code>send_command()</code> and I think it might be related to the instance not being completely ready before sending the command.</p> <p>I was using this to wait for the instance:</p> <pre><code>instance.wait_until_running() </code></pre> <p>However I have also seen:</p> <pre><code>waiter = ec2.get_waiter('instance_running') waiter.wait(InstanceIds=[instance.id]) </code></pre> <p>and:</p> <pre><code>waiter = ec2.get_waiter('instance_status_ok') waiter.wait(InstanceIds=[instance.id]) </code></pre> <p>What is the difference/correct way to wait for an instance to be completely ready to send a command?</p>
<python><amazon-web-services><amazon-ec2><boto3>
2023-03-11 15:41:11
1
491
rare77
75,706,032
10,735,076
Why is flask running on AWS Lambda corrupting GeoJSON files served using static folder?
<p>I am serving a simple folium map using the following flask app running as an AWS Lambda function:</p> <pre><code>from flask import Flask import serverless_wsgi from Map import fullscreen_map import logging logger = logging.getLogger() logger.setLevel(logging.INFO) app = Flask(__name__, static_folder=&quot;maps&quot;, static_url_path=&quot;/maps&quot;) @app.route(&quot;/&quot;) def homepage(): return fullscreen_map().get_root().render() def handler(event, context): return serverless_wsgi.handle_request(app, event, context) </code></pre> <p>The folium app (imported into the above):</p> <pre><code>from branca.element import Template, MacroElement import folium from flask import request def fullscreen_map(): m = folium.Map( location=(40.746759, -74.042197), zoom_start=16, tiles=&quot;cartodb positron&quot; ) heights_building_footprints_url = f&quot;{request.base_url}/maps/heights-building-footprints.geojson&quot; logging.info(f&quot;heights_building_footprints_url: {heights_building_footprints_url}&quot;) folium.GeoJson( heights_building_footprints_url, name=&quot;Building Footprints&quot;, style_function=lambda feature: { 'fillColor': 'grey', 'color': 'black', 'weight': 0.5, 'dashArray': '3, 3' }, ).add_to(m) return m </code></pre> <p>The problem I am having is that for some reason either flask or something in the Lambda handler seem to be encoding the geojson files as binary. 
I don't know why, because it serves HTML files in the same static folder properly.</p> <p>You can see this for yourself in a test deployment:</p> <p><a href="https://rna.chilltownlabs.com/maps/test.html" rel="nofollow noreferrer">https://rna.chilltownlabs.com/maps/test.html</a> - flask serves this fine, so the static directory is working.</p> <p><a href="https://rna.chilltownlabs.com/maps/boundaries-rna.geojson" rel="nofollow noreferrer">https://rna.chilltownlabs.com/maps/boundaries-rna.geojson</a> - this appears to have some kind of binary encoding.</p> <p>Obviously the app dies, because the <code>folium.GeoJson</code> constructor in <code>fullscreen_map</code> tries to load this file via the web and throws a JSON error.</p> <p>I assume this has something to do with the HTTP Content-Type header, but I am not sure what the right fix is. A new route handler just for the GeoJSON files?</p> <p>This app is deployed as part of an AWS CDK v2 stack (python) but I don't think that's relevant.</p> <p>This is a very low-traffic app and I would prefer not to go through the hassle of setting up an nginx server just to serve 5 static files.</p> <p>How can I fix this?</p> <p>I expected a GeoJSON file with the proper encoding to be served up by flask. So far I have not tried to solve this, but would likely create a new route handler with a Response set just for the GeoJSON.</p>
<python><aws-lambda><folium>
2023-03-11 15:38:59
0
313
Anthony Townsend
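A plausible mechanism, worth verifying against your deployment: serverless-wsgi base64-encodes any response whose Content-Type it does not recognize as text, and Flask derives the Content-Type for static files from Python's `mimetypes` registry, which has no default entry for `.geojson`. If that is the cause, registering the type once at import time fixes it without a dedicated route:

```python
import mimetypes

# .geojson is normally absent from the default registry, so guess_type()
# returns (None, None) and the file is treated as opaque binary
print(mimetypes.guess_type("boundaries-rna.geojson"))

# register it once, before Flask serves anything
mimetypes.add_type("application/geo+json", ".geojson")
assert mimetypes.guess_type("boundaries-rna.geojson")[0] == "application/geo+json"
```

If the response is still base64-encoded after this, check the text MIME type list (`TEXT_MIME_TYPES`) in your installed serverless-wsgi version; extending it, or serving these files with a `Content-Type` of `application/json`, keeps them on the text path.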
75,705,912
4,989,403
Update a non trainable variable at the beginning of each epoch
<p>This question is similar to <a href="https://stackoverflow.com/questions/60102214/tensorflow-keras-modify-model-variable-from-callback">Tensorflow Keras modify model variable from callback</a>. I am unable to get the solution there to work (maybe there have been changes in TensorFlow 2.x since the solution was posted).</p> <p>Below is demo code. I apologise if there is a typo.</p> <p>I want to use a callback to update a non trainable variable (<code>weighted_add_layer.weight</code>) that affects the output of the layer.</p> <p>I have tried many variants such as putting <code>tf.keras.backend.set_value(weighted_add_layer.weight, value)</code> in <code>update function</code>.</p> <p>In all cases, after the model is compiled, <code>fit</code> uses the value of <code>weighted_add_layer.weight</code> at the time of compilation and does not update the value later.</p> <pre><code>class WeightedAddLayer(tf.keras.layers.Layer): def __init__(self, weight=0.00, *args, **kwargs): super(WeightedAddLayer, self).__init__(*args, **kwargs) self.weight = tf.Variable(0., trainable=False) def add(self, inputA, inputB): return (self.weight * inputA + self.weight * inputB) def update(self, weight): tf.keras.backend.set_value(self.weight, weight) input_A = tfkl.Input( shape=(32), batch_size=32, ) input_B = tfkl.Input( shape=(32), batch_size=32, ) weighted_add_layer = WeightedAddLayer() output = weighted_add_layer.add(input_A, input_B) model = tfk.Model( inputs=[input_A, input_B], outputs=[output], ) model.compile( optimizer='adam', loss=losses.MeanSquaredError() ) # Custom callback function def update_fun(epoch, steps=50): weighted_add_layer.update( tf.clip_by_value( epoch / steps, clip_value_min=tf.constant(0.0), clip_value_max=tf.constant(1.0),) ) # Custom callback update_callback = tfk.callbacks.LambdaCallback( on_epoch_begin=lambda epoch, logs: update_fun(epoch) ) # train model history = model.fit( x=train_data, epochs=EPOCHS, validation_data=valid_data, 
callbacks=[update_callback], ) </code></pre> <p>Any suggestions? Thanks much!</p>
<python><tensorflow><keras>
2023-03-11 15:20:52
2
499
Anirban Mukherjee
75,705,893
384,936
How to broadcast operands with same shape?
<p>I get error when --&gt; 73 rmse = np.sqrt(np.mean(predictions - y_test)**2) 74 print(f&quot;RMSE: {rmse}&quot;) 75</p> <p>ValueError: operands could not be broadcast together with shapes (41,1) (101,1) How do i fix it?</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf from sklearn.preprocessing import MinMaxScaler from datetime import datetime, timedelta import yfinance as yf # Download the AAPL stock price data from Yahoo Finance symbol = 'AAPL' start_date = datetime.now() - timedelta(days=365 * 2) end_date = datetime.now() df = yf.download(symbol, start=start_date, end=end_date) # Extract the daily adjusted closing price data data = df.filter(['Adj Close']).values # Scale the data scaler = MinMaxScaler(feature_range=(0, 1)) print(scaler) scaled_data = scaler.fit_transform(data) # Split the data into training and testing sets training_data_len = int(len(data) * 0.8) train_data = scaled_data[:training_data_len, :] test_data = scaled_data[training_data_len:, :] # Prepare the training data x_train = [] y_train = [] for i in range(60, len(train_data)): x_train.append(train_data[i-60:i, 0]) y_train.append(train_data[i, 0]) # Convert the data to numpy arrays x_train, y_train = np.array(x_train), np.array(y_train) # Reshape the data for the LSTM model x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1)) # Build the LSTM model model = tf.keras.models.Sequential() model.add(tf.keras.layers.LSTM(50, return_sequences=True, input_shape=(x_train.shape[1], 1))) model.add(tf.keras.layers.LSTM(50, return_sequences=False)) model.add(tf.keras.layers.Dense(25)) model.add(tf.keras.layers.Dense(1)) # Compile the model model.compile(optimizer='adam', loss='mean_squared_error') # Train the model model.fit(x_train, y_train, batch_size=1, epochs=1) # Prepare the testing data x_test = [] y_test = data[training_data_len:, :] for i in range(60, len(test_data)): x_test.append(test_data[i-60:i, 0]) # Convert the 
data to numpy arrays x_test = np.array(x_test) # Reshape the data for the LSTM model print(&quot;1&quot;) x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1)) print(&quot;2&quot;) # Get the model's predicted price values predictions = model.predict(x_test) print(&quot;3&quot;) predictions = scaler.inverse_transform(predictions) # Calculate the root mean squared error (RMSE) </code></pre> <p>rmse = np.sqrt(np.mean(predictions - y_test)**2)<br /> print(f&quot;RMSE: {rmse}&quot;)</p> <pre><code># Plot the data train = data[:training_data_len, :] valid = data[training_data_len:, :] valid['Predictions'] = predictions plt.figure(figsize=(16,8)) plt.title('LSTM Model') plt.xlabel('Date') plt.ylabel('Adj Close Price') plt.plot(train['Adj Close']) plt.plot(valid[['Adj Close', 'Predictions']]) plt.legend(['Train', 'Valid', 'Predictions'], loc='lower right') plt.show() # Make predictions for the next 10 days last_60_days = data[-60:] last_60_days_scaled = scaler.transform(last_60_days) X_test = [] X_test.append(last_60_days_scaled) X_test = np.array(X_test) X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1)) predicted_price = model.predict(X_test) predicted_price = scaler.inverse_transform(predicted_price) #print(f&quot;Predicted price for the next day: ${predicted_price[0][0]} </code></pre>
<python><machine-learning>
2023-03-11 15:17:59
1
1,465
junkone
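The shape mismatch is pure arithmetic: `test_data` has `len(data) - training_data_len` rows, but the loop only builds windows from index 60 onward, so `predictions` ends up 60 rows shorter than `y_test` (41 vs 101 in the error). In this tutorial pattern the usual fix is to start the test slice 60 rows early so every test target has a full window; note also that `np.mean(predictions - y_test)**2` squares the mean error instead of averaging squared errors. A shapes-only sketch with synthetic data (no model, scaling omitted):

```python
import numpy as np

window = 60
data = np.arange(200, dtype=float).reshape(-1, 1)   # stand-in price series
split = int(len(data) * 0.8)

# start the test slice `window` rows early so the first window
# predicts the first real test target
test_data = data[split - window:]
x_test = np.array([test_data[i - window:i, 0] for i in range(window, len(test_data))])
y_test = data[split:, :]
predictions = y_test + 1.0                          # stand-in for model.predict

assert x_test.shape[0] == y_test.shape[0]           # aligned, no broadcast error
rmse = np.sqrt(np.mean((predictions - y_test) ** 2))  # square first, then mean
print(rmse)  # 1.0 for this synthetic offset
```

With the real data, remember that `test_data` is scaled while `y_test` is raw; that is fine as long as the predictions go through `scaler.inverse_transform` before the RMSE is computed, as they already do.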
75,705,845
13,460,543
Difference between ' and " in read_json?
<p>I have the following strings:</p> <pre class="lang-py prettyprint-override"><code>d1 = &quot;[{'col 1':'a','col 2':'b'}]&quot; d2 = '[{&quot;col 1&quot;:&quot;a&quot;,&quot;col 2&quot;:&quot;b&quot;}]' </code></pre> <p>These strings contain the same data.</p> <p>I only inverted the use of single and double quotes.</p> <p>But for <code>read_json</code> they are not the same:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; pd.read_json(d1, orient='records') # builtins.ValueError: Expected object or value &gt;&gt;&gt; pd.read_json(d2, orient='records') # col 1 col 2 # 0 a b </code></pre> <p>I have done some unsuccessful research and I don't understand what's going on.</p> <p>So I have done some tests:</p> <pre><code>&gt;&gt;&gt; type(d1) # &lt;class 'str'&gt; &gt;&gt;&gt; type(d2) # &lt;class 'str'&gt; &gt;&gt;&gt; d1==d1 # True &gt;&gt;&gt; d2==d2 # True &gt;&gt;&gt; d1==d2 # False </code></pre> <p>So same type (str) and seemingly the same content, but they are not equal. Why?</p>
<python><json><pandas><dataframe>
2023-03-11 15:10:48
0
2,303
Laurent B.
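The short answer is that only `d2` is JSON: the JSON grammar (RFC 8259) requires double-quoted strings, so `d1` is a Python literal rather than a JSON document, and `read_json` hands the text to a strict JSON parser. (The `d1==d2` result is unsurprising: the two strings really are different character sequences.) A stdlib demonstration, plus the usual workaround when you are stuck with single-quoted input:

```python
import ast
import json

d1 = "[{'col 1':'a','col 2':'b'}]"   # Python-literal syntax, not JSON
d2 = '[{"col 1":"a","col 2":"b"}]'   # valid JSON

try:
    json.loads(d1)
except json.JSONDecodeError:
    print("d1 is rejected by a strict JSON parser")

assert json.loads(d2) == [{"col 1": "a", "col 2": "b"}]

# for single-quoted input, parse it as a Python literal instead
assert ast.literal_eval(d1) == json.loads(d2)
```

So `pd.read_json(d2)` succeeds, while the `d1` form would need something like `pd.DataFrame(ast.literal_eval(d1))`.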
75,705,324
7,848,740
Edit an Object of the PostgresSQL Database in Django from Celery Worker
<p>I'm trying to edit an object which is saved into my Postgress Database used by Django from a Celery Worker.</p> <p>The Worker is called when the signal <code>post_save</code> is called with</p> <pre><code>@receiver(post_save, sender=Task) def send_task_creation(sender, instance, created, raw, using, **kwargs): print(str(instance.pk)) rephrase_task.delay(instance.pk) </code></pre> <p>My <code>task.py</code> contains the function for the Worker</p> <pre><code>from celery import shared_task from celery import Celery from celery_progress.backend import ProgressRecorder from .models import Task from time import sleep from celery.utils.log import get_task_logger celery = Celery(__name__) celery.config_from_object(__name__) @shared_task(bind=True) def rephrase_task(self, primary_key): print('ID Worker ' + self.request.id) Task.objects.filter(pk=primary_key).update(TaskId=str(self.request.id)) progress_recorder = ProgressRecorder(self) for i in range(0,100): sleep(1) i += 1 progress_recorder.set_progress(i, 100) return 1 </code></pre> <p>Now, everything seems to work expect the fact that <code>Task.objects.filter(pk=primary_key).update(TaskId=str(self.request.id))</code> does really nothing</p> <p>I'm seeing the Celery task processing correctly but it seems it can't access the database</p> <p>My Celery settings in settings.py</p> <pre><code>REDIS_HOST = 'localhost' REDIS_PORT = '6379' BROKER_URL = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0' BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600} CELERY_RESULT_BACKEND = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0' CELERY_ACCEPT_CONTENT = ['application/json'] CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' </code></pre> <p>How can I access the database from the Celery Worker</p>
<python><django><database><celery>
2023-03-11 13:39:11
1
1,679
NicoCaldo
75,705,280
2,312,666
Python - Assesing each line in a file against an IF
<p>The below works for a single ip or fqdn, but for a list of assets its failing and assigning them all as FQDNs.</p> <pre><code>#!/usr/bin/env python import argparse import ipaddress import re def parse_args(): # Assess arguments parser = argparse.ArgumentParser(description='do some sh*t') parser.add_argument('-T', required=True, choices=['single','filename'], help='Type of input i.e. single asset name or file of assets') parser.add_argument('-A', required=True, help='Asset name or file name') args = parser.parse_args() args = vars(args) asset_type = (args['T']) if asset_type == 'single': single_asset(args) elif asset_type == 'filename': filename_asset(args) else: raise Exception('Unexpected asset type') def single_asset(args): # Function for single asset type print ('single_asset function') asset = (args['A']) identify_asset_type(asset) def filename_asset(args): # Function for file of assets print ('filename_asset function') filename = (args['A']) with open(filename, 'r') as filename_input: for asset in filename_input: identify_asset_type(asset) def identify_asset_type(asset): # Identify if IP address, FQDN or other try: if ipaddress.ip_address(asset): print(f'IP address - {asset}') except ValueError: if re.match(r'^(?!.{255}|.{253}[^.])([a-z0-9](?:[-a-z-0-9]{0,61}[a-z0-9])?\.)*[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?[.]?$', asset, re.IGNORECASE): print (f'FQDN - valid {asset}') else: print ('Not an IP or FQDN') </code></pre> <p>This is with a list of assets such as:</p> <pre><code>apple 123.4.5.1 bob.com 20.10.4.6 300.100.200.50 </code></pre> <p>How do i structure this code please so that when parse_args is run with -T filename -A name_of_file it will assess each line properly? Especially as the last item in the test input is not a valid IP.</p> <p>Thank you</p>
<python><for-loop><if-statement>
2023-03-11 13:31:51
1
309
yeleek
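The likely culprit is that iterating over a file yields each line with its trailing newline: `ipaddress.ip_address('123.4.5.1\n')` raises `ValueError`, and because `re.match` with a `$` anchor still matches just before a trailing `'\n'`, every line then falls through to the FQDN branch. Calling `.strip()` on each asset fixes the IP detection; rejecting IP-lookalikes such as `300.100.200.50` (which an all-numeric hostname regex otherwise accepts) needs one extra check. A hedged sketch with a simplified hostname pattern:

```python
import ipaddress
import re

HOSTNAME_RE = re.compile(
    r'^(?=.{1,253}$)'                                # overall length limit
    r'(?:[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?\.)*'   # dotted labels
    r'[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?$',
    re.IGNORECASE,
)
DOTTED_QUAD_RE = re.compile(r'^\d{1,3}(?:\.\d{1,3}){3}$')

def classify(asset: str) -> str:
    asset = asset.strip()            # file iteration keeps the trailing '\n'
    try:
        ipaddress.ip_address(asset)
        return 'ip'
    except ValueError:
        pass
    if DOTTED_QUAD_RE.match(asset):
        return 'invalid'             # looks like an IPv4 address but is not one
    if HOSTNAME_RE.match(asset):
        return 'fqdn'
    return 'invalid'

for line in ['apple\n', '123.4.5.1\n', 'bob.com\n', '20.10.4.6\n', '300.100.200.50\n']:
    print(classify(line))            # fqdn, ip, fqdn, ip, invalid
```

The same `.strip()` in `identify_asset_type` (or on `asset` inside the file loop) is the minimal change to the original script.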
75,705,140
2,889,970
How to properly manage Python application dependencies?
<h2>Problem</h2> <p>I have a Python application (not library) that i want to publish as a <code>pip</code> installable package. In order for the application to install and run predictably with <code>pip install</code> now, in a month or in 2 years i want to pin all dependencies, for example with <a href="https://github.com/jazzband/pip-tools#pip-tools--pip-compile--pip-sync" rel="nofollow noreferrer"><code>pip-tools</code></a></p> <h2>Current workflow</h2> <p>Currently for my package there are 3 files involved:</p> <ul> <li><code>requirements.in</code>: Listing the direct application dependencies. No or loose version pinning. Source file for <code>pip-compile</code>.</li> <li><code>requirements.txt</code>: Full lockfile for all dependencies <em>(also transitive)</em> generated with <code>pip-compile</code> from <code>requirements.in</code>. The <code>requirements.txt</code> file is not part of the Python wheel, so <code>pip install</code> cannot use it.</li> <li><code>setup.py</code>: Here in <code>install_requires</code> i read in the compiled <code>requirements.txt</code> so that the complete, pinned dependencies find their way in the wheel metadata for <code>pip install</code>. <em>(Example see below)</em>:</li> </ul> <pre class="lang-py prettyprint-override"><code># setup.py setup( ... # pseudocode &quot;install_requires&quot;: pip.parse(Path(&quot;.&quot;)/&quot;requirements.txt&quot;) ... ) </code></pre> <h2>Questions</h2> <ul> <li>Is that how it should be done or are there better approaches?</li> <li>What is considered best practice for Python application dep. management? Pinning all deps. in setup.py, also transitive ones? Or just the direct ones <em>(trusting them with the pinning/locking of their deps.)</em>?</li> <li>How does the approach to dynamically read in the <code>requirements.txt</code> translate to the modern, declarative approach with <code>pyproject.toml</code>? 
Or does it not, and instead of that automation do I need to manually copy the compiled <code>requirements.txt</code> content into the <code>pyproject.toml</code> dependencies section?</li> </ul>
<python><pip><dependency-management><setup.py><pyproject.toml>
2023-03-11 13:08:24
2
2,564
timmwagener
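On the last question: the dynamic `setup.py` trick does not translate directly, but it also stops being necessary, because pip-tools can read `pyproject.toml` as its input. One common pattern (project names below are hypothetical) is to keep only the loose direct dependencies declarative and let the compiled lockfile live outside the package metadata:

```toml
[project]
name = "myapp"                # hypothetical application name
version = "1.0.0"
dependencies = [              # direct deps only, loosely pinned
    "requests>=2.28,<3",
    "click>=8",
]
```

Then `pip-compile pyproject.toml -o requirements.txt` regenerates the lockfile, and a deployment runs `pip install -r requirements.txt` followed by `pip install --no-deps .`, so the pins stay authoritative without being copied into the wheel's `install_requires`. Whether transitive pins belong in the metadata at all is debated; for an application deployed this way, pinning in the lockfile alone is usually enough.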
75,705,043
13,944,524
Why some Django view classes had not been defined as abstract base class?
<p>I'm writing a small and lightweight Django-like back-end framework for myself just for experiment.</p> <p>If we look at <code>ProcessFormView</code> view(and some other views):</p> <pre class="lang-py prettyprint-override"><code>class ProcessFormView(View): def get(self, request, *args, **kwargs): return self.render_to_response(self.get_context_data()) def post(self, request, *args, **kwargs): form = self.get_form() if form.is_valid(): return self.form_valid(form) else: return self.form_invalid(form) ... </code></pre> <p>To me it sounds like a valid case to define this class as an &quot;abstract base class&quot;.</p> <p>After all, It needs that sub-classes provide <code>render_to_response()</code>, <code>get_context_data()</code>, <code>get_form()</code>, <code>form_valid()</code>, <code>form_invalid()</code>. (They will be provided by <code>TemplateResponseMixin</code> and <code>FormMixin</code>.)</p> <p>I can do something like this:</p> <pre class="lang-py prettyprint-override"><code>class ProcessFormView(View, metaclass=ABCMeta): @abstractmethod def render_to_response(self, context): pass @abstractmethod def get_context_data(self): pass @abstractmethod def get_form(self): pass @abstractmethod def form_valid(self, form): pass @abstractmethod def form_invalid(self, form): pass def get(self, request, *args, **kwargs): return self.render_to_response(self.get_context_data()) def post(self, request, *args, **kwargs): form = self.get_form() if form.is_valid(): return self.form_valid(form) else: return self.form_invalid(form) ... </code></pre> <p>Or even better we can factor these abstract methods out to another ABC class and inherit from it to clear things up.</p> <p>I know that of course it's a made decision and nothing is wrong with it. I mostly interested to know if do what I just showed, how can it cause problem in future? 
What would be the cons that I'm not aware of?</p> <p>The only disadvantage I can think of is that I would then have to write many abstract classes, which makes the code base much bigger.</p>
<python><python-3.x><django><django-views><abstract-class>
2023-03-11 12:49:16
1
17,004
S.B
75,704,833
14,853,907
Sizing the Python standard library
<p>I want to find out how many objects and methods there are in the standard library for a given Python 3.x installation, to compare to different languages. For this, I wrote a <code>script.py</code>.</p> <pre class="lang-py prettyprint-override"><code>from sys import stdlib_module_names from inspect import isclass from importlib import import_module modules = stdlib_module_names - {'this', 'antigravity'} col = [] for m in modules: try: col.append(dir(import_module(m))) except: pass col = {item for sublist in col for item in sublist} cnt = 0 for thing in col: try: if isclass(eval(thing)): cnt += len(dir(thing)) except: pass print(cnt + len(col)) </code></pre> <p>I notice <code>dir</code> is not exact, it gives different values per run.</p> <ul> <li>command line: <code>python script.py</code> gives 14075 to 14092 things,</li> <li>VSCode REPL: 1st and 2nd run are different, consequent runs are identical to 2nd.</li> </ul> <p><strong>Questions</strong></p> <ol> <li>Is this result in the ballpark of being correct?</li> <li>Is there an invariant way to get the number of functions and methods in <code>stdlib_module_names</code>?</li> </ol>
<python><python-3.x>
2023-03-11 12:11:37
1
4,470
Donald Seinen
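On question 2: `dir()` on a module is deterministic, but the script flattens names from all modules into one set and then resolves each bare name with `eval` in the script's own namespace, so the result depends on which module a duplicated name happens to be attributed to and on what the imports did to each other's namespaces. Resolving each name with `getattr` on its own module, keyed by `(module, name)`, removes both sources of wobble; a hedged sketch restricted to an explicit module list:

```python
import importlib
import inspect

def count_public_names(module_names):
    """Count public top-level names, plus methods of public classes,
    resolving each name in its own module rather than via eval()."""
    seen = set()
    method_count = 0
    for mod_name in sorted(module_names):
        try:
            mod = importlib.import_module(mod_name)
        except ImportError:
            continue
        for name in dir(mod):
            if name.startswith('_') or (mod_name, name) in seen:
                continue
            seen.add((mod_name, name))
            obj = getattr(mod, name, None)
            if inspect.isclass(obj):
                method_count += len(dir(obj))
    return len(seen) + method_count

# stable for a fixed module list, unlike the set-plus-eval version
a = count_public_names(['math', 'json'])
b = count_public_names(['math', 'json'])
assert a == b and a > 0
```

On question 1, the ballpark depends heavily on what you decide to count (re-exported names, class attributes inherited from `object`, private members); any cross-language comparison needs the same counting rules on both sides to mean anything.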
75,704,644
2,079,875
unable to install confluent-kafka python module on alpine 3.13.5
<p>I am trying to install <code>confluent-kafka</code> on a <code>alpine-3.13.5</code>. Basically i am trying the same in <code>docker in docker</code> - <a href="https://hub.docker.com/_/docker" rel="nofollow noreferrer">https://hub.docker.com/_/docker</a> as part of my <code>gitlab runner</code>.</p> <p>This is how my <code>.gitlab-ci.yml</code> looks like</p> <pre><code>default: image: name: docker:19.03 entrypoint: - '/usr/bin/env' - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' tags: - general services: - docker:dind before_script: - apk update - apk add gcc libc-dev libffi-dev librdkafka-dev - apk add --update --no-cache python3 python3-dev &amp;&amp; ln -sf python3 /usr/bin/python - python3 -m ensurepip - pip3 install --no-cache --upgrade pip setuptools - pip install poetry - poetry install - source `poetry env info --path`/bin/activate </code></pre> <p>Now one of my python dependency is <code>confluent-kafka==1.8.2</code> , which is failing during <code>poetry install</code> . 
I am getting below error</p> <pre><code>building 'confluent_kafka.cimpl' extension creating build/temp.linux-x86_64-cpython-38 creating build/temp.linux-x86_64-cpython-38/tmp creating build/temp.linux-x86_64-cpython-38/tmp/tmphywbhg0_ creating build/temp.linux-x86_64-cpython-38/tmp/tmphywbhg0_/confluent-kafka-1.8.2 creating build/temp.linux-x86_64-cpython-38/tmp/tmphywbhg0_/confluent-kafka-1.8.2/src creating build/temp.linux-x86_64-cpython-38/tmp/tmphywbhg0_/confluent-kafka-1.8.2/src/confluent_kafka creating build/temp.linux-x86_64-cpython-38/tmp/tmphywbhg0_/confluent-kafka-1.8.2/src/confluent_kafka/src gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fomit-frame-pointer -g -fno-semantic-interposition -fomit-frame-pointer -g -fno-semantic-interposition -fomit-frame-pointer -g -fno-semantic-interposition -DTHREAD_STACK_SIZE=0x100000 -fPIC -I/tmp/tmpvzpv4nqv/.venv/include -I/usr/include/python3.8 -c /tmp/tmphywbhg0_/confluent-kafka-1.8.2/src/confluent_kafka/src/Admin.c -o build/temp.linux-x86_64-cpython-38/tmp/tmphywbhg0_/confluent-kafka-1.8.2/src/confluent_kafka/src/Admin.o In file included from /tmp/tmphywbhg0_/confluent-kafka-1.8.2/src/confluent_kafka/src/Admin.c:17: /tmp/tmphywbhg0_/confluent-kafka-1.8.2/src/confluent_kafka/src/confluent_kafka.h:66:2: error: #error &quot;confluent-kafka-python requires librdkafka v1.6.0 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html&quot; 66 | #error &quot;confluent-kafka-python requires librdkafka v1.6.0 or later. 
Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html&quot; | ^~~~~ error: command '/usr/bin/gcc' failed with exit code 1 at /usr/lib/python3.8/site-packages/poetry/installation/chef.py:152 in _prepare 148│ 149│ error = ChefBuildError(&quot;\n\n&quot;.join(message_parts)) 150│ 151│ if error is not None: → 152│ raise error from None 153│ 154│ return path 155│ 156│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -&gt; Path: Note: This error originates from the build backend, and is likely not a problem with poetry but with confluent-kafka (1.8.2) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 &quot;confluent-kafka (==1.8.2)&quot;'. </code></pre>
<python><docker><gitlab-ci-runner><alpine-linux>
2023-03-11 11:36:09
0
3,282
curiousguy
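The compiler error is explicit: confluent-kafka-python 1.8.2 needs librdkafka >= 1.6.0, and the `librdkafka-dev` package in the Alpine 3.13 repositories is older than that. One hedged fix (assuming the newer build is compatible with the rest of the image) is to pull the package from the `edge/community` repository, so the relevant `before_script` lines become:

```yaml
before_script:
  - apk update
  # Alpine 3.13 ships librdkafka-dev older than the 1.6.0 that
  # confluent-kafka 1.8.2 requires; take a newer build from edge/community
  - apk add --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community librdkafka-dev
  - apk add gcc libc-dev libffi-dev
  - apk add --update --no-cache python3 python3-dev && ln -sf python3 /usr/bin/python
  # (remaining lines unchanged)
```

Alternatively, a base image built on a newer Alpine release avoids mixing repositories at all.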
75,704,611
4,533,188
Getting the original name of an argument
<h1>Task</h1> <p>I would like to have a way to access the original name of a passed argument. I looked at the answers of</p> <ul> <li><a href="https://stackoverflow.com/questions/18425225/getting-the-name-of-a-variable-as-a-string">Getting the name of a variable as a string</a></li> <li><a href="https://stackoverflow.com/questions/2749796/how-to-get-the-original-variable-name-of-variable-passed-to-a-function">How to get the original variable name of variable passed to a function</a></li> </ul> <p>which seemed a bit complicated to me. Another idea is to write</p> <pre><code>import pandas as pd my_var = pd.DataFrame(...) def f(dataset: pd.DataFrame): for name, obj in globals().items(): if id(obj) == id(dataset): return name f(my_var) </code></pre> <p>which returns 'my_var'. This makes sense to me and does not seem to be brittle.</p> <p>However, since I did not see such an answer anywhere, I am wondering if I am missing something and this answer is actually a bad idea.</p> <h1>Questions</h1> <ol> <li>Is this code a good/valid idea?</li> <li>If not why is a different (which?) answer better?</li> </ol> <h1>What am I NOT asking</h1> <p>I am NOT asking how to &quot;how to get the original variable name of variable passed to a function&quot;. I am asking whether/why my suggested code is problematic. This is a different question.</p> <h1>Background</h1> <p>I want to use this as a helper function in a data analytics task, where I use the variable's name as a label for the later plot.</p> <pre><code>import pandas as pd def f(dataset: pd.DataFrame): return_dataframe = do_stuff(dataset) for name, obj in globals().items(): if id(obj) == id(dataset): return_dataframe[&quot;group&quot;] = name return return_dataframe data_frame_for_plotting = pd.concat([f(data_train), f(data_test)]) </code></pre>
<python>
2023-03-11 11:30:56
2
13,308
Make42
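The main problem with the `globals()`/`id()` scan is that "the original name" is not well-defined: an object can be bound to zero, one, or many global names, the scan cannot see names local to the calling function, and temporaries like `f(pd.concat([...]))` have no name at all. A small demonstration of the ambiguity (plain lists stand in for DataFrames):

```python
def name_of(obj):
    """Return every *global* name bound to obj; there may be 0, 1, or many."""
    return sorted(n for n, v in globals().items() if v is obj)

train = [1.0, 2.0]
alias = train                  # same object, a second name

print(name_of(train))          # ['alias', 'train']: which one is "original"?
print(name_of([3.0]))          # []: inline objects have no global name
```

For the plotting use case, passing the label explicitly (e.g. `f(data_train, group='train')`) is the robust version of the same idea; it costs one argument and removes every failure mode above.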
75,704,397
673,600
How to group data with color but still show a trendline for the entire dataset using Plotly Express?
<p>I'm getting a trend line, however the issue is that with color and a trendline (set to max of data points), I'm getting multiple lines. I only want one line on the entire data set. Here is my code:</p> <pre><code>fig = px.scatter(df, &quot;Date&quot;, y=&quot;Number&quot;, color=&quot;Technology&quot;, labels={ &quot;Date&quot;: &quot;Year&quot;, &quot;Number&quot;: &quot;Number&quot; }, trendline=&quot;expanding&quot;, trendline_options=dict(function=&quot;max&quot;)) </code></pre> <p><a href="https://i.sstatic.net/7CntK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7CntK.png" alt="enter image description here" /></a></p>
<python><dataframe><plotly>
2023-03-11 10:51:52
1
6,026
disruptive
75,704,050
12,438,249
Process multiline JSON file without Commas at the end of new Record
<p>I have a multiline JSON File with no commas after a new record. I have tried to read it using both JSON and BSON libraries but it's not working. Below are the sample two records from the file.</p> <pre><code>{ &quot;_id&quot; : , &quot;form&quot; : , &quot;owner&quot; : , &quot;deleted&quot; : null, &quot;metadata&quot; : [ ], &quot;data&quot; : { &quot;action&quot; : &quot;Document Uploaded&quot;, &quot;comments&quot; : &quot;New Document Added&quot;, &quot;descriptionEventLog&quot; : &quot;Ledger&quot;, &quot;status&quot; : &quot;Needs More Info&quot;, &quot;submissionIdCase&quot; : &quot;&quot;, &quot;timestamp&quot; : &quot;&quot;, &quot;type&quot; : &quot;&quot;, &quot;user&quot; : &quot;&quot;, &quot;submissionIdPaymentBatch&quot; : &quot;&quot;, &quot;submissionIdPayment&quot; : &quot;&quot;, &quot;reason&quot; : &quot;&quot; }, &quot;validationErrors&quot; : [ ], &quot;formArchive&quot; : null, &quot;isRevision&quot; : false, &quot;moduleArchiveIdAtCreated&quot; : , &quot;platformVersionAtCreated&quot; : &quot;6.72.5-2023-02-08&quot;, &quot;created&quot; : ISODate(&quot;2023-02-27T17:14:11.560+0000&quot;), &quot;modified&quot; : ISODate(&quot;2023-02-27T17:14:11.561+0000&quot;), &quot;__v&quot; : NumberInt(0) } { &quot;_id&quot; : , &quot;form&quot; : , &quot;owner&quot; : &quot;&quot;, &quot;deleted&quot; : null, &quot;metadata&quot; : [ ], &quot;data&quot; : { &quot;action&quot; : &quot;Document Uploaded&quot;, &quot;comments&quot; : &quot;New Document Added&quot;, &quot;descriptionEventLog&quot; : &quot;Lease&quot;, &quot;status&quot; : &quot;Needs More Info&quot;, &quot;submissionIdCase&quot; : &quot;&quot;, &quot;timestamp&quot; : &quot;&quot;, &quot;type&quot; : &quot;&quot;, &quot;user&quot; : &quot;&quot;, &quot;submissionIdPaymentBatch&quot; : &quot;&quot;, &quot;submissionIdPayment&quot; : &quot;&quot;, &quot;reason&quot; : &quot;&quot; }, &quot;validationErrors&quot; : [ ], &quot;formArchive&quot; : null, &quot;isRevision&quot; : false, 
&quot;moduleArchiveIdAtCreated&quot; : ObjectId(&quot;60e631e70b10c402e799dede&quot;), &quot;platformVersionAtCreated&quot; : &quot;6.72.5-2023-02-08&quot;, &quot;created&quot; : ISODate(&quot;2023-02-27T17:13:06.428+0000&quot;), &quot;modified&quot; : ISODate(&quot;2023-02-27T17:13:06.428+0000&quot;), &quot;__v&quot; : NumberInt(0) } </code></pre> <p>I have tried both json.load and json.loads but it is not working.</p> <p>Can anyone share some leads to solve the issue?</p> <p>Thanks</p>
<python><json><python-3.x><mongodb>
2023-03-11 09:44:12
1
532
Abdul Haseeb
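The file is not plain JSON for two separate reasons: the documents are simply concatenated with nothing between them, and the values use MongoDB shell syntax (`ISODate(...)`, `ObjectId(...)`, `NumberInt(...)`, plus empty values such as `"_id" : ,`). `json.JSONDecoder.raw_decode` handles the concatenation, and a few regex rewrites can strip the shell wrappers. This sketch assumes the quoted values never contain unescaped quotes or parentheses and that `: ,` never occurs inside a string:

```python
import json
import re

def parse_mongo_dump(text):
    # rewrite Mongo shell wrappers into plain JSON values
    text = re.sub(r'ISODate\((".*?")\)', r'\1', text)
    text = re.sub(r'ObjectId\((".*?")\)', r'\1', text)
    text = re.sub(r'NumberInt\((-?\d+)\)', r'\1', text)
    text = re.sub(r':\s*,', ': null,', text)        # empty values: `"_id" : ,`
    decoder = json.JSONDecoder()
    docs, idx, text = [], 0, text.strip()
    while idx < len(text):
        doc, end = decoder.raw_decode(text, idx)    # one document per call
        docs.append(doc)
        while end < len(text) and text[end].isspace():
            end += 1                                # skip gap between documents
        idx = end
    return docs

sample = (
    '{ "_id" : , "created" : ISODate("2023-02-27T17:14:11.560+0000"), '
    '"__v" : NumberInt(0) }\n'
    '{ "_id" : ObjectId("60e631e70b10c402e799dede"), "__v" : NumberInt(1) }'
)
docs = parse_mongo_dump(sample)
print(len(docs))   # 2
```

For anything beyond a one-off conversion, a dedicated Extended-JSON parser (e.g. `bson.json_util` against a dump produced in canonical Extended JSON) is more robust than regex rewrites.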
75,704,000
12,027,232
Correctly zipping two columns with different data types in cuDF
<p>I have the following DataFrame in cuDF:</p> <pre><code> Context Questions 0 Architecturally, the school has a Catholic cha... [To whom did the Virgin Mary allegedly appear ... 1 As at most other universities, Notre Dame's st... [When did the Scholastic Magazine of Notre dam... 2 The university is the major seat of the Congre... [Where is the headquarters of the Congregation... 3 The College of Engineering was established in ... [How many BS level degrees are offered in the ... 4 All of Notre Dame's undergraduate students are... [What entity provides help with the management... </code></pre> <p>The <code>context</code> column is a single string, whilst the <code>Questions</code> column is a list of strings. What I wish to obtain is a new column that represents the zipped versions as a list such as <code>[(Context, question_i)]</code>.</p> <p>The following code works for the SQuaD-v.1 data-set:</p> <pre><code>data = cudf.read_csv(DATA_PATH) pattern = '([^&quot;]+\?)' data[&quot;Questions&quot;] = data['QuestionAnswerSets'].str.replace('Question\\&quot; -&gt; \\&quot;', '').str.findall(pattern) </code></pre> <p>Obstacles: I do not wish to invoke the list constructor, as this would create memory transfers from device to host. Furthermore, when trying to use user-defined-functions, such as:</p> <pre><code>def zip_context_question_pairs(row): return row['Context'], row['Questions'] df = df.apply_rows(zip_context_question_pairs, incols=['Context', 'Questions'], outcols={'Context_QuestionPairs': 'object'}, kwargs={}) </code></pre> <p>This will error as you can not use UDF's on different data types. 
How do I correctly zip the string and list into a new column whilst still having data on device?</p> <p>To reproduce:</p> <pre><code>df = cudf.DataFrame({ 'context': 'Architecturally, the school has a Catholic character.', 'question': [['To whom did the Virgin Mary allegedly appear?', &quot;another question&quot;]], }) context = df[&quot;context&quot;][0] questions = df[&quot;question&quot;][0] desired_result = [] # This for loop is what I would like to transform to a cuDF method to avoid lists for question in questions : desired_result.append((question, context)) print(desired_result) </code></pre>
<python><rapids><cudf>
2023-03-11 09:35:15
1
410
JOKKINATOR
75,703,828
842,476
Problem with getting data from Yahoo finance with python
<p>I am trying to get data from Yahoo finance but keep getting the same error that says:</p> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-23-3fd0d552ddbc&gt; in &lt;module&gt; 6 enddate = dt.datetime.now() 7 ----&gt; 8 data = web.get_data_yahoo(y_symbols, start=startdate, end=enddate) 3 frames /usr/local/lib/python3.9/dist-packages/pandas_datareader/yahoo/daily.py in _read_one_data(self, url, params) 151 try: 152 j = json.loads(re.search(ptrn, resp.text, re.DOTALL).group(1)) --&gt; 153 data = j[&quot;context&quot;][&quot;dispatcher&quot;][&quot;stores&quot;][&quot;HistoricalPriceStore&quot;] 154 except KeyError: 155 msg = &quot;No data fetched for symbol {} using {}&quot; TypeError: string indices must be integers </code></pre> <p>The code I am using is:</p> <pre><code>import matplotlib.pyplot as plt import pandas as pd import pandas_datareader as web import datetime as dt import yfinance as yf yf.pdr_override() currency = 'BTC' against_currency='USD' y_symbols = [currency,against_currency] startdate = dt.datetime(2016,1,1) enddate = dt.datetime.now() data = web.get_data_yahoo(y_symbols, start=startdate, end=enddate) </code></pre> <p>I tried updating the packages and many other solutions i found but nothing worked.</p>
<python><yahoo-finance>
2023-03-11 09:03:01
0
302
Ayham
75,703,815
3,865,151
How to help Pylance understand dynamically added class properties
<p>I'm using code such as this to add properties dynamically to a class:</p> <pre class="lang-py prettyprint-override"><code>class A: def __init__(self, props) -&gt; None: for name in props: setattr(self.__class__, name, property(lambda _: 1)) </code></pre> <p>This will work as expected:</p> <pre class="lang-py prettyprint-override"><code>A([&quot;x&quot;]).x # outputs 1 </code></pre> <p>However, Pylance (at least in vscode) will (understandably) complain with <code>Cannot access member &quot;x&quot; for type &quot;A&quot; Member &quot;x&quot; is unknown</code>.</p> <p>Is there a way to help Pylance understand what is going on?</p> <p>Note: Adding the properties to <code>__annotations__</code> doesn't help, the warning remains:</p> <pre class="lang-py prettyprint-override"><code>class A: def __init__(self, props) -&gt; None: self.__class__.__annotations__ = {} for name in props: setattr(self.__class__, name, property(lambda _: 1)) self.__class__.__annotations__[name] = int </code></pre> <h2>Variant B:</h2> <p>What if the code was less dynamic, like so?</p> <pre class="lang-py prettyprint-override"><code>class A: PROPS = [&quot;x&quot;, &quot;y&quot;, &quot;z&quot;] def __init__(self) -&gt; None: for name in self.PROPS: setattr(self.__class__, name, property(lambda _: 1)) </code></pre> <p>Any chance to help Pylance in this scenario?</p>
<python><python-3.x><pylance>
2023-03-11 09:00:32
3
551
myke
75,703,773
8,618,242
ROS Service to Show Image
<p>I'm using <code>Ubuntu 20.04</code>, <code>ROS Noetic</code>, and <code>OpenCV version 4.2.0.34</code>. I'm creating a ROS service to show an image:</p> <pre class="lang-py prettyprint-override"><code>#! /usr/bin/env python3 # -*- coding: utf-8 -*- import rospy import cv2 from img_reader.srv import ImgReader, ImgReaderResponse def showImage(path): img = cv2.imread(path) cv2.namedWindow('img', cv2.WINDOW_NORMAL) cv2.imshow('img', img) cv2.waitKey(0) cv2.destroyWindow('img') def readImgCB(req): try: showImage(req.img_path) msg = &quot;&quot; return ImgReaderResponse(True, msg) except: msg = &quot;Image Path is Incorrect&quot; return ImgReaderResponse(False, msg) if __name__ == &quot;__main__&quot;: try: rospy.init_node(&quot;img_reader&quot;) rospy.Service(&quot;img_reader_srv&quot;, ImgReader, readImgCB) rospy.spin() except rospy.ROSInterruptException: rospy.logerr(&quot;Error&quot;) </code></pre> <h3>ImgReader.srv</h3> <pre><code># Desired image path. string img_path --- uint16 result_code # Status message (empty if service succeeded). string message </code></pre> <p>The service works fine the first time, but I can't call it again because <code>cv2.namedWindow()</code> can't create a new window, even though I destroyed the last one!</p> <p>Can you please tell me how I can fix this? Thanks in advance.</p>
<python><opencv><ros>
2023-03-11 08:50:57
1
4,115
Bilal
75,703,668
7,421,447
Error when trying to use custom loss function in Pytorch binary classifier
<p>I am trying to create a binary classification pytorch model using a custom loss function with the help of this <a href="https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html" rel="nofollow noreferrer">tutorial</a>. The model works when using inbuilt loss functions such as <code>nn.CrossEntropyLoss()</code> like in the tutorial. However, It gives an error when introducing the custom loss function. Model code:</p> <pre><code>def train_model(model, criterion, optimizer, scheduler, num_epochs=25): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print(f'Epoch {epoch}/{num_epochs - 1}') print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'val']: if phase == 'train': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) if phase == 'train': scheduler.step() epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}') # deep copy the model if phase == 'val' and epoch_acc &gt; best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) print() time_elapsed = time.time() - since print(f'Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s') print(f'Best val Acc: 
{best_acc:4f}') # load best model weights model.load_state_dict(best_model_wts) return model model_conv2 = torchvision.models.efficientnet_b7(weights = 'EfficientNet_B7_Weights.IMAGENET1K_V1') #model_conv = torchvision.models.efficientnet_b7(weights = 'EfficientNet_B7_Weights.IMAGENET1K_V1') for param in model_conv2.parameters(): param.requires_grad = False # Parameters of newly constructed modules have requires_grad=True by default num_ftrs = model_conv2.classifier[1].in_features model_conv2.classifier[1] = nn.Linear(num_ftrs, 2) model_conv2 = model_conv2.to(device) criterion = nn.CrossEntropyLoss() # Observe that only parameters of final layer are being optimized as # opposed to before. optimizer_conv = optim.SGD(model_conv2.classifier[1].parameters(), lr=0.001, momentum=0.9) # Decay LR by a factor of 0.1 every 7 epochs exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1) </code></pre> <p>Also, what is the meaning of the output of the below code.</p> <pre><code>outputs = model(inputs) _, preds = torch.max(outputs, 1) loss = criterion(labels,preds) outputs = tensor([[-0.1908, 0.4115], [-1.0019, -0.1685], [-1.1265, -0.3025], [-0.5925, -0.6610], [-0.4076, -0.4897], [-0.6450, -0.2863], [ 0.1632, 0.4944], [-1.0743, 0.1003], [ 0.6172, 0.5104], [-0.2296, -0.0551], [-1.3165, 0.3386], [ 0.2705, 0.1200], [-1.3767, -0.6496], [-0.5603, 1.0609], [-0.0109, 0.5767], [-1.1081, 0.8886]], grad_fn=&lt;AddmmBackward0&gt;) </code></pre> <p>From what I understand 16 rows represent the batch size. However, what do the value ranges mean? 
If these are probabilities, which probability represents which class (is the first value class 0 and the second value class 1)? Furthermore, why are there negative values and values greater than one?</p> <p>Please find the custom loss function below:</p> <pre><code>#Original Function def pfbeta_fast(labels, predictions, beta): pTP = np.sum(labels * predictions) pFP = np.sum((1-labels) * predictions) num_positives = np.sum(labels) # = pTP+pFN pPrecision = pTP/(pTP+pFP) pRecall = pTP/num_positives beta_squared = beta**2 if (pPrecision &gt; 0 and pRecall &gt; 0): pF1 = (1+beta_squared) * pPrecision * pRecall/(beta_squared*pPrecision + pRecall) return pF1 else: return 0 #Adjusted function to fit model (by me) def pfbeta_torch(predictions, labels , beta=1.3): #labels = torch.tensor(labels.clone().detach().requires_grad(True),dtype=torch.float64) #predictions = torch.tensor(predictions.clone().detach().requires_grad(True),dtype=torch.float64) predictions = [1 if x &gt; 0.5 else 0 for x in predictions[:,1]] #Get second element and recode as 1 if value &gt; than .5 else 0 predictions = torch.tensor(predictions, requires_grad=True) #Loss function changed here pTP = torch.sum(labels * predictions) pFP = torch.sum((1-labels) * predictions) num_positives = torch.sum(labels) # = pTP+pFN pPrecision = pTP/(pTP+pFP) pRecall = pTP/num_positives beta_squared = beta**2 if (pPrecision &gt; 0 and pRecall &gt; 0): pF1 = (1+beta_squared) * pPrecision * pRecall/(beta_squared*pPrecision + pRecall) return pF1 else: return 0 </code></pre> <p>Error message after plugging the second function into the model:</p> <pre><code>--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /tmp/ipykernel_27/1892088165.py in &lt;module&gt; ----&gt; 1 model_conv = train_model(model_conv, criterion, optimizer_conv, exp_lr_scheduler, num_epochs=1) /tmp/ipykernel_27/3271713126.py in train_model(model, criterion, optimizer, scheduler, num_epochs) 33 _, preds = 
torch.max(outputs, 1) 34 print(outputs,labels) ---&gt; 35 loss = criterion(outputs,labels) #Changed here to criterion(labels,preds) 36 37 # backward + optimize only if in training phase /tmp/ipykernel_27/3035900216.py in pfbeta_torch(predictions, labels, beta) 3 #predictions = torch.tensor(predictions.clone().detach().requires_grad(True),dtype=torch.float64) 4 predictions = [1 if x &gt; 0.5 else 0 for x in predictions[:,1]] #Get second element and recode as 1 if value &gt; than .5 else 0 ----&gt; 5 predictions = torch.tensor(predictions,requires_grad=True) 6 pTP = torch.sum(labels * predictions) 7 pFP = torch.sum((1-labels) * predictions) RuntimeError: Only Tensors of floating point and complex dtype can require gradients </code></pre> <p>Additional code for understandability.</p> <pre><code>model_conv2 = torchvision.models.efficientnet_b7(weights = 'EfficientNet_B7_Weights.IMAGENET1K_V1') #model_conv = torchvision.models.efficientnet_b7(weights = 'EfficientNet_B7_Weights.IMAGENET1K_V1') for param in model_conv2.parameters(): param.requires_grad = False # Parameters of newly constructed modules have requires_grad=True by default num_ftrs = model_conv2.classifier[1].in_features model_conv2.classifier[1] = nn.Linear(num_ftrs, 2) model_conv2 = model_conv2.to(device) criterion = nn.CrossEntropyLoss() # Observe that only parameters of final layer are being optimized as # opposed to before. 
optimizer_conv = optim.SGD(model_conv2.classifier[1].parameters(), lr=0.001, momentum=0.9) # Decay LR by a factor of 0.1 every 7 epochs exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1) model_conv2 = train_model(model_conv2, criterion, optimizer_conv, exp_lr_scheduler, num_epochs=25) </code></pre> <p>EDIT: I changed my loss function as below:</p> <pre><code>def pfbeta_torch(predictions, labels, beta=1.3): #labels = torch.tensor(labels.clone().detach(), dtype=torch.float64, requires_grad=True) predictions = torch.tensor(predictions, dtype=torch.float64, requires_grad=True) pTP = torch.sum(labels * predictions) pFP = torch.sum((1 - labels) * predictions) num_positives = torch.sum(labels) # = pTP+pFN pPrecision = pTP / (pTP + pFP) pRecall = pTP / num_positives beta_squared = beta ** 2 # x=0 if (pPrecision &gt; 0 and pRecall &gt; 0): pF1 = (1 + beta_squared) * pPrecision * pRecall / (beta_squared * pPrecision + pRecall) return pF1 else: return torch.tensor(0, dtype=torch.float64, requires_grad=True) </code></pre> <p>The inputs to the <code>loss</code> function are as below.</p> <pre><code>preds: tensor([0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0]) labels: tensor([0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1]) </code></pre> <p>Now I am still getting a warning message this time:</p> <pre><code>/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:3: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). This is separate from the ipykernel package so we can avoid doing imports until </code></pre> <p>Would anyone be able to let me know whether I am going about the (second) custom function correctly, and what the reason for the error is, please?</p> <p>Thanks &amp; Best Regards AMJS</p>
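For context on the error above: the thresholding step (`[1 if x > 0.5 else 0 ...]`) discards gradients entirely, which is why a new tensor with `requires_grad=True` has to be manufactured (and why nothing can actually be learned through it). A common alternative is a "soft" pF-beta computed directly on the predicted probabilities, where every operation stays differentiable when ported to torch. A NumPy sketch of the idea (names and the sample values are illustrative; the precision/recall > 0 guards from the original are omitted for brevity):

```python
import numpy as np

def soft_pfbeta(probs, labels, beta=1.0):
    """Probabilistic F-beta computed on raw probabilities, with no
    hard thresholding, so it remains differentiable in torch."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    ptp = np.sum(labels * probs)           # probabilistic true positives
    pfp = np.sum((1.0 - labels) * probs)   # probabilistic false positives
    num_positives = np.sum(labels)         # pTP + pFN
    precision = ptp / (ptp + pfp)
    recall = ptp / num_positives
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

score = soft_pfbeta([0.9, 0.1, 0.8, 0.7], [1, 0, 1, 1])
print(round(score, 4))  # 0.8727
```

The torch version would apply `softmax` to the model outputs and feed the class-1 column straight into the same arithmetic, keeping everything inside the autograd graph.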
<python><machine-learning><pytorch>
2023-03-11 08:24:31
0
713
Alain Michael Janith Schroter
75,703,636
6,947,156
Running all cells again adds new rows instead of returning a new dataframe
<p>I am creating a class similar to the <code>lazypredict</code> library, but with k-fold support, that returns a data frame with the mean score of every model. Everything works and I can print the dataframe, but when I run it again it adds more rows to the old dataframe instead of showing a new one. I also tried the <code>inplace</code> parameter, but got the same result.</p> <pre class="lang-py prettyprint-override"><code>import time from typing import List import numpy as np import pandas as pd pd.set_option(&quot;display.precision&quot;, 4) from sklearn.model_selection import train_test_split from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import cross_validate from sklearn.metrics import classification_report from sklearn.metrics import ConfusionMatrixDisplay from tqdm.notebook import tqdm class Compare_Models: scoring = [ &quot;accuracy&quot;, &quot;balanced_accuracy&quot;, &quot;roc_auc&quot;, &quot;precision_weighted&quot;, &quot;recall_weighted&quot;, &quot;f1_weighted&quot;, ] f1_list = [] name_list = [] recall_list = [] roc_auc_list = [] accuracy_list = [] fit_time_list = [] precision_list = [] score_time_list = [] balanced_accuracy_list = [] def __init__( self, X: pd.DataFrame, y: pd.DataFrame, models: List, should_print_report: bool = False, ) -&gt; pd.DataFrame: self.X = X self.y = y self.X_train, self.X_test, self.y_train, self.y_test = train_test_split( X, y, random_state=0 ) for model in tqdm(models): name = model.__class__.__name__ if should_print_report: self.__single_run(name, model) self.__cv_run(name, model) def __single_run(self, name, model): start = time.time() clf = model.fit(self.X_train, self.y_train) fit_time = time.time() - start start = time.time() y_pred = clf.predict(self.X_test) predict_time = time.time() - start report = classification_report(self.y_test, y_pred) print(name) print(&quot;Fit Time:&quot;, fit_time) print(&quot;Predict Time:&quot;, predict_time) print(report) ConfusionMatrixDisplay.from_predictions( self.y_test, y_pred, 
normalize=&quot;pred&quot;, ) def __cv_run(self, name, model): cv_results = cross_validate(model, self.X, self.y, scoring=self.scoring) fit_time = np.mean(cv_results[&quot;fit_time&quot;]) f1 = np.mean(cv_results[&quot;test_f1_weighted&quot;]) roc_auc = np.mean(cv_results[&quot;test_roc_auc&quot;]) score_time = np.mean(cv_results[&quot;score_time&quot;]) accuracy = np.mean(cv_results[&quot;test_accuracy&quot;]) recall = np.mean(cv_results[&quot;test_recall_weighted&quot;]) precision = np.mean(cv_results[&quot;test_precision_weighted&quot;]) balanced_accuracy = np.mean(cv_results[&quot;test_balanced_accuracy&quot;]) self.f1_list.append(f1) self.name_list.append(name) self.recall_list.append(recall) self.roc_auc_list.append(roc_auc) self.accuracy_list.append(accuracy) self.fit_time_list.append(fit_time) self.precision_list.append(precision) self.score_time_list.append(score_time) self.balanced_accuracy_list.append(balanced_accuracy) def get_scores(self): scores = pd.DataFrame( { &quot;Model&quot;: self.name_list, &quot;Accuracy&quot;: self.accuracy_list, &quot;Balanced Accuracy&quot;: self.balanced_accuracy_list, &quot;ROC AUC&quot;: self.roc_auc_list, &quot;Precision Weighted&quot;: self.precision_list, &quot;Recall Weighted&quot;: self.recall_list, &quot;F1 Weighted&quot;: self.f1_list, &quot;Fit Time&quot;: self.fit_time_list, &quot;Score Time&quot;: self.score_time_list, } ) scores = scores.sort_values([&quot;Fit Time&quot;, &quot;Balanced Accuracy&quot;]) return scores </code></pre> <p><a href="https://i.sstatic.net/kLYAs.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kLYAs.jpg" alt="enter image description here" /></a></p>
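The accumulation is consistent with the score lists (`f1_list`, `name_list`, etc.) being defined at class level rather than in `__init__`: class attributes are shared by every instance, so each new `Compare_Models` run appends to the same underlying lists. A minimal illustration of the difference (hypothetical class names):

```python
class SharedLists:
    items = []            # class attribute: one list shared by every instance

    def add(self, x):
        self.items.append(x)

class PerInstanceLists:
    def __init__(self):
        self.items = []   # instance attribute: a fresh list per object

    def add(self, x):
        self.items.append(x)

a, b = SharedLists(), SharedLists()
a.add(1)
b.add(2)
print(a.items)  # [1, 2]  (both writes landed in the same shared list)

c, d = PerInstanceLists(), PerInstanceLists()
c.add(1)
d.add(2)
print(c.items)  # [1]
```

Moving the list initialization into `__init__` (as in `PerInstanceLists`) would give each run its own empty lists.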
<python><pandas><dataframe><visual-studio-code><jupyter-notebook>
2023-03-11 08:16:03
1
1,076
Burhan Khanzada
75,703,607
13,362,665
Segmenting multiple partial ellipses from each other in an image
<p>I have an image of multiple partially drawn ellipses, and I am trying to separate them from each other in order to calculate the diameter of each one. In other words, I want to produce three more images, one per ellipse, and have the algorithm know which one is the outer circle, which is the middle circle, and which is the inner one.</p> <p>Example of an image: <a href="https://i.sstatic.net/ezWqV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ezWqV.png" alt="enter image description here" /></a></p> <p>I have tried detecting the outer circle only and using it as a reference that the other circles must lie inside, but I didn't get it to fully work that way.</p> <p>Also, in a real image scenario (which I can't share for confidentiality reasons), there will not be a clean background; rather, it will contain many random dots and noise.</p> <p>I am not looking exactly for code on how to do it, but rather for different approaches and algorithms that can be used to do this.</p>
<python><image-processing><computer-vision><image-segmentation>
2023-03-11 08:09:34
1
593
Rami Janini
75,703,494
4,874,204
Python: Want to write zipfile.ZipFile object to disk as "foo.zip"
<p>It feels like I'm missing something obvious here.</p> <p>I have a zipfile.ZipFile object, which was created by writing files into it using io.BytesIO() called <code>buffer</code> as follows:</p> <pre><code>with zipfile.ZipFile(buffer, &quot;a&quot;) as zip: # write some things into the zip using zip.write() </code></pre> <p>then I returned <code>zip</code>, so I have a zipfile.ZipFile object.</p> <p>I want to write this object to disk as a zipped file, and I don't know how.</p> <pre><code>zip.write('foo.zip') </code></pre> <p>would write things <em>into</em> the object, and</p> <pre><code>with open('foo.zip', 'wb') as f: f.write(zip) </code></pre> <p>doesn't work because <code>zip</code> isn't a bytes object.</p> <pre><code>with open('foo.zip', 'wb') as f: f.write(zip.read()) </code></pre> <p>doesn't work because it's expecting a &quot;name&quot;, which I presume is the name of one of the files saved within zip. I would have assumed there was just some simple way to do this, e.g. <code>zip.to_zip(&quot;foo.zip&quot;)</code>?</p> <p>UPDATE: Because <code>zip</code> was originally written into the buffer, I tried changing zip.filename to <code>&quot;test.zip&quot;</code> which does write the zip to disk. However, the contents of <code>zip</code> aren't there. This could be a different issue, but I'm unsure.</p>
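One sketch of the missing piece: the archive bytes live in the `BytesIO` buffer, not in the `ZipFile` object, so after the `with` block finalizes the archive you can dump `buffer.getvalue()` to disk (a temp directory and `writestr` are used here just to keep the example self-contained):

```python
import io
import os
import tempfile
import zipfile

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "a") as zf:
    zf.writestr("hello.txt", "hello")  # stand-in for the real zf.write() calls

# After the with-block the central directory is written; dump the raw bytes.
out_path = os.path.join(tempfile.mkdtemp(), "foo.zip")
with open(out_path, "wb") as f:
    f.write(buffer.getvalue())

# Round-trip check: reopen from disk and read the member back.
with zipfile.ZipFile(out_path) as zf:
    print(zf.read("hello.txt"))  # b'hello'
```

This would also explain the UPDATE: setting `zip.filename` only changes metadata, it never copies the buffered bytes anywhere, so the on-disk file ends up without the contents.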
<python><io><zip>
2023-03-11 07:41:06
1
379
rorance_
75,703,265
10,625,950
Nothing happens after extracting .tar file in Python
<p>I downloaded the dataset from Yelp: <a href="https://www.yelp.com/dataset" rel="nofollow noreferrer">https://www.yelp.com/dataset</a> which is in .tar format.</p> <p>I tried the following in Python, but nothing happened afterwards. Am I doing anything wrong?</p> <pre><code>#import module import tarfile path = &quot;path to file&quot; # open file file = tarfile.open(path) # extracting file file.extractall() file.close() </code></pre>
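Worth noting: `extractall()` with no argument extracts into the current working directory and prints nothing on success, so "nothing happens" may simply mean it worked silently. A self-contained round-trip sketch that makes the result visible (built entirely inside a temp directory):

```python
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()

# Build a tiny .tar so the example is self-contained.
src = os.path.join(workdir, "data.txt")
with open(src, "w") as f:
    f.write("hello")
tar_path = os.path.join(workdir, "archive.tar")
with tarfile.open(tar_path, "w") as tar:
    tar.add(src, arcname="data.txt")

# Extract into an explicit directory and list what came out.
dest = os.path.join(workdir, "extracted")
with tarfile.open(tar_path) as tar:
    print(tar.getnames())   # ['data.txt']
    tar.extractall(path=dest)

print(os.listdir(dest))     # ['data.txt']
```

Passing an explicit `path=` and printing `getnames()` makes it easy to confirm both where the files went and what the archive contained.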
<python>
2023-03-11 06:42:20
0
684
Xin
75,702,975
21,343,992
AWS Boto3 send_command() always fails on Ubuntu "An error occurred (InvalidInstanceId) when calling the SendCommand operation"
<p>I'm using boto3 to create an EC2 instance and then <code>send_command()</code> to send Linux commands to execute. I created an IAM role and pass this to <code>create_instances()</code>.</p> <p>For the Amazon Linux AMI <code>ami-0329eac6c5240c99d</code> it works 99% of the time. For the Ubuntu 22.04 AMI <code>ami-0b828c1c5ac3f13ee</code> it fails every time.</p> <p>I get this error:</p> <blockquote> <p>botocore.errorfactory.InvalidInstanceId: An error occurred (InvalidInstanceId) when calling the SendCommand operation: Instances [[&lt;my_instance_id&gt;]] not in a valid state for account &lt;my_account_id&gt;</p> </blockquote> <p>This Ubuntu page implies the SSM Agent should work when passing an IAM role via <code>create_instances()</code>:</p> <blockquote> <p>Every instance of Ubuntu server and Ubuntu Pro server comes with the AWS Systems Manager (SSM) agent installed. To enable it, it is only necessary to attach an IAM role that will allow the agent to interact with SSM.</p> </blockquote> <p><a href="https://ubuntu.com/tutorials/how-to-use-aws-ssm-session-manager-for-accessing-ubuntu-pro-instances#1-overview" rel="nofollow noreferrer">https://ubuntu.com/tutorials/how-to-use-aws-ssm-session-manager-for-accessing-ubuntu-pro-instances#1-overview</a></p> <ol> <li>Should Ubuntu 22.04 work with SSM by default?</li> <li>Why does this occasionally happen with Amazon AMI too?</li> </ol>
<python><amazon-web-services><amazon-ec2><boto3><aws-ssm>
2023-03-11 05:27:06
1
491
rare77
75,702,792
882,134
Error when attempting to run Google Colab Deforum Stable Diffusion with a local GPU -- ModuleNotFoundError: No module named 'helpers.save_images'
<p>I'm attempting to run Deforum in Google Colab (<a href="https://colab.research.google.com/github/deforum-art/deforum-stable-diffusion/blob/main/Deforum_Stable_Diffusion.ipynb" rel="nofollow noreferrer">https://colab.research.google.com/github/deforum-art/deforum-stable-diffusion/blob/main/Deforum_Stable_Diffusion.ipynb</a>).</p> <p>I was able to successfully get Jupyter installed and the GPU to successfully connect. However, when attempting to run the Environment Setup, I'm encountering the following error:</p> <p>ModuleNotFoundError: No module named 'helpers.save_images', which appears to be referencing line 57: 57 from helpers.save_images import get_output_folder</p> <p>Any ideas what I may need to do or install to resolve the issue?</p> <p>Thank you!</p>
<python><google-colaboratory><stable-diffusion>
2023-03-11 04:32:44
0
309
user882134
75,702,735
8,194,364
How to handle multiple url calls using global driver with selenium web driver?
<p>I have a function that returns a webdriver as a singleton (only instantiate once AND global_driver is a global variable):</p> <pre><code>global_driver = None ... def getDriver(): global global_driver if not global_driver: options = Options() options.add_argument('--headless') global_driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options) return global_driver else: return global_driver </code></pre> <p>and another function that properly closes and quits the driver:</p> <pre><code>def quitDriver(): global global_driver global_driver.close() global_driver.quit() global_driver = None </code></pre> <p>I have another function that calls <code>getDriver()</code> to parse a url:</p> <pre><code>def clickQuarterlyButton(url): driver = getDriver() driver.get(url) WebDriverWait(driver, constants.SLEEP_TIME).until(EC.element_to_be_clickable((By.XPATH, '//*[@id=&quot;Col1-1-Financials-Proxy&quot;]/section/div[1]/div[2]/button'))).click() soup = BeautifulSoup(driver.page_source, 'html.parser') return soup </code></pre> <p>I call the <code>clickQuarterlyButton(url)</code> function in a for loop like this for multiple urls:</p> <pre><code>for url in urls: clickQuarterlyButton(url) quitDriver() </code></pre> <p>I want to create a new instance of web driver to read and parse for each url to avoid multi threading issues but I am still running into multi threading issues. What is the right way to set up the webdriver to handle multiple urls?</p>
<python><selenium-webdriver>
2023-03-11 04:15:53
0
359
AJ Goudel
75,702,723
1,525,788
Can't parse IP address from PDF file, no error, just empty
<p>I'm using Tika to parse IP addresses from a PDF file. Below is my code:</p> <pre><code>import tika from tika import parser import re # Press the green button in the gutter to run the script. if __name__ == '__main__': tika.initVM() # opening pdf file parsed_pdf = parser.from_file(&quot;static_hosts.pdf&quot;) text = parsed_pdf[&quot;content&quot;] regex = '(?:[0-9]{1,3}\.){3}[0-9]{1,3}$' match = re.findall(regex, text) print(match) </code></pre> <p>I have tested the regex online and found that it works properly. I even tried these, but none of them work:</p> <pre><code>regex = '(?:[0-9]{1,3}\.){3}[0-9]{1,3}$' regex = ^'(?:[0-9]{1,3}\.){3}[0-9]{1,3}$' regex = r'(?:[0-9]{1,3}\.){3}[0-9]{1,3}$' </code></pre> <p>Could you please show me what I missed?</p> <p>Thank you. Huy</p>
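A likely culprit worth isolating from Tika entirely: the trailing `$` anchors the match to the end of the string (or, with `re.MULTILINE`, the end of a line), so an IP in the middle of the extracted text never matches. The behaviour on plain text (sample text is made up):

```python
import re

text = "gateway 10.0.0.1 and host 192.168.1.20 are static"

# Anchored pattern: only matches an IP at the very end of the string.
print(re.findall(r'(?:[0-9]{1,3}\.){3}[0-9]{1,3}$', text))   # []

# Unanchored, with word boundaries so longer digit runs don't bleed in.
print(re.findall(r'\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b', text))
# ['10.0.0.1', '192.168.1.20']
```

Online testers often default to multiline mode, which would explain why the same pattern appeared to work there.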
<python><regex><pdf><tika-python>
2023-03-11 04:11:00
1
1,556
Huy Than
75,702,658
3,015,449
Numpy array copy slower than Python list copy
<p>I've seen several posts here about accessing individual items in numpy arrays and python lists via a for loop.</p> <p>My program is a little different. What I'm doing is copying a small array (or list) of about 10 elements and then using it. I'm doing this many times, so I want it to be fast. The application if you're interested is that I'm searching a tree, and each small array/list is a 'state' in the tree.</p> <p>But I'm finding that the <code>numpy.copy()</code> function is slower than the Python <code>list()</code> function.</p> <p>To demonstrate what I'm saying, here's a small program with timings:</p> <pre><code>import time import numpy as np def numPyArrays(iterations:int): initialArray = np.array([1,0,0,1,0,1,1,1,0,0]) for i in range(iterations): nextArray = initialArray.copy() print(f&quot;Numpy Arrays:\n{nextArray}&quot;) return def pythonLists(iterations:int): initialList = [1,0,0,1,0,1,1,1,0,0] for i in range(iterations): nextList = list(initialList) print(f&quot;Python Lists:\n{nextList}&quot;) return def main(): numIterations = 10000000 startTime = time.time() numPyArrays(numIterations) print(f&quot;Time taken: {round(time.time() - startTime, 2)} seconds.\n&quot;) startTime = time.time() pythonLists(numIterations) print(f&quot;Time taken: {round(time.time() - startTime, 2)} seconds.\n&quot;) main() </code></pre> <p>Timings:</p> <pre><code>Numpy Arrays: [1 0 0 1 0 1 1 1 0 0] Time taken: 4.68 seconds. Python Lists: [1, 0, 0, 1, 0, 1, 1, 1, 0, 0] Time taken: 1.5 seconds. </code></pre> <p>I would have thought the numpy.copy function would have been as fast as a list copy.</p> <p>EDIT: For those wanting to know what the underlying problem is, it's an Advent of Code problem. Day 19, 2022. <code>https://adventofcode.com/2022/day/19</code></p>
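The gap is expected for tiny arrays: each NumPy call pays a fixed per-call overhead (argument handling, dtype/shape setup) that dominates when there are only ~10 elements, while `list(...)` on a small list is a single cheap C-level copy. The copies are equivalent; only the constant factor differs, and it flips for larger arrays. A rough way to see this (timings are machine-dependent, so the exact numbers are illustrative):

```python
import timeit

import numpy as np

for n in (10, 10_000):
    arr = np.arange(n)
    lst = list(range(n))
    # Time 1,000 copies of each representation at this size.
    t_np = timeit.timeit(arr.copy, number=1_000)
    t_list = timeit.timeit(lambda: list(lst), number=1_000)
    print(f"n={n}: numpy copy {t_np:.5f}s, list copy {t_list:.5f}s")
```

For ~10-element tree states, plain lists or tuples are often the faster representation; NumPy tends to pay off once the arrays are large enough for vectorized work.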
<python><list><numpy><performance>
2023-03-11 03:47:14
2
724
davo36
75,702,573
412,234
Pythonic way to check if x is a module
<p>This works for builtin types (str, list, int, etc) and for classes that are imported properly, but not module:</p> <pre><code>type(x) is module #NameError: name 'module' is not defined. </code></pre> <p>There are workarounds:</p> <pre><code>str(type(x)) == &quot;&lt;class 'module'&gt;&quot; type(x) is type(os) # Or any imported module object. </code></pre> <p>But is there a more <em>pythonic</em> way?</p>
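For reference, the module type is exposed in the standard library as `types.ModuleType`, which supports both `isinstance` and exact-type checks without the `str()` or `type(os)` workarounds:

```python
import os
import types

print(isinstance(os, types.ModuleType))        # True
print(isinstance("hello", types.ModuleType))   # False

# The exact-type check from the question, spelled directly:
print(type(os) is types.ModuleType)            # True
```

`inspect.ismodule(x)` is an equivalent stdlib spelling of the `isinstance` form.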
<python><syntax>
2023-03-11 03:17:51
1
3,589
Kevin Kostlan
75,702,555
9,542,989
DESCRIBE TABLE Equivalent for Apache Ignite
<p>I am looking to get the column names and the data types for my Apache Ignite tables. Is there a SQL query that can be used to accomplish this? Maybe an equivalent to the <code>DESCRIBE TABLE</code> command?</p> <p>If this is not possible using SQL, can it be done using the Python driver for Apache Ignite: <code>pyignite</code>?</p>
<python><sql><apacheignite>
2023-03-11 03:08:26
1
2,115
Minura Punchihewa
75,702,521
5,617,608
Recognizing drop caps in PDF in Python
<p>I'm currently using pymupdf to extract text blocks from a <a href="https://drive.google.com/file/d/1QgtIbxQusZGluMo-jir3HryKv5_VKJzj/view?usp=sharing" rel="nofollow noreferrer">file</a> in python.</p> <p><a href="https://i.sstatic.net/2RA37.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2RA37.png" alt="enter image description here" /></a></p> <pre><code>import fitz doc = fitz.open(filename) for page in doc: text = page.get_text(&quot;blocks&quot;) for item in text: print(item[4]) </code></pre> <p>The problem is that drop caps are recognized weirdly. For example, &quot;N is recognized in multiple lines as:</p> <pre><code>£ £ &quot;1L ^ L I JL ^1 </code></pre> <p>I thought it can be an encoding problem so I tried utf-8 encoding as follows:</p> <pre><code>text = page.get_text().encode(&quot;utf8&quot;) </code></pre> <p>However, the problem is still the same. How can I solve this? Thanks in advance!</p>
<python><extract><pymupdf>
2023-03-11 03:00:24
1
1,759
Esraa Abdelmaksoud
75,702,242
5,212,614
How to make KMeans Clustering more Meaningful for Titanic Data?
<p>I'm running this code.</p> <pre><code>import pandas as pd titanic = pd.read_csv('titanic.csv') titanic.head() #Import required module from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.cluster import KMeans from sklearn.metrics import adjusted_rand_score documents = titanic['Name'] vectorizer = TfidfVectorizer(stop_words='english') X = vectorizer.fit_transform(documents) from sklearn.cluster import KMeans # initialize kmeans with 20 centroids kmeans = KMeans(n_clusters=20, random_state=42) # fit the model kmeans.fit(X) # store cluster labels in a variable clusters = kmeans.labels_ titanic['kmeans'] = clusters titanic.tail() Finally... from sklearn.decomposition import PCA documents = titanic['Name'] vectorizer = TfidfVectorizer(stop_words='english') X = vectorizer.fit_transform(documents) # initialize PCA with 2 components pca = PCA(n_components=2, random_state=42) # pass our X to the pca and store the reduced vectors into pca_vecs pca_vecs = pca.fit_transform(X.toarray()) # save our two dimensions into x0 and x1 x0 = pca_vecs[:, 0] x1 = pca_vecs[:, 1] # assign clusters and pca vectors to our dataframe titanic['cluster'] = clusters titanic['x0'] = x0 titanic['x1'] = x1 titanic.head() import plotly.express as px fig = px.scatter(titanic, x='x0', y='x1', color='kmeans', text='Name') fig.show() </code></pre> <p>Here is the plot that I see.</p> <p><a href="https://i.sstatic.net/S5fx3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S5fx3.png" alt="enter image description here" /></a></p> <p>I guess it's working...but my question is...how can we make the text more dispersed and/or remove outliers so the chart is more meaningful? 
I'm guessing that the clustering is correct, because I'm not doing anything special here, but is there some way to make the clustering more significant or meaningful?</p> <p>Data is sourced from here.</p> <p><a href="https://www.kaggle.com/competitions/titanic/data?select=test.csv" rel="nofollow noreferrer">https://www.kaggle.com/competitions/titanic/data?select=test.csv</a></p>
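One sketch for the "more dispersed / fewer outliers" part: clip the scatter to a central percentile band of the two PCA coordinates before plotting, so a handful of extreme points stop compressing the rest. Percentile thresholds and the stand-in data here are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the x0/x1 PCA columns, with two artificial outliers appended.
x0 = np.concatenate([rng.normal(0, 1, 500), [40.0, -35.0]])
x1 = np.concatenate([rng.normal(0, 1, 500), [-50.0, 45.0]])

# Keep only points inside the 1st-99th percentile band on both axes.
lo0, hi0 = np.percentile(x0, [1, 99])
lo1, hi1 = np.percentile(x1, [1, 99])
mask = (x0 >= lo0) & (x0 <= hi0) & (x1 >= lo1) & (x1 <= hi1)

print(len(x0), int(mask.sum()))  # total points vs. points kept for plotting
```

Applied to the dataframe above, something like `titanic[mask]` passed to `px.scatter` (and dropping `text='Name'` in favour of `hover_name='Name'`) would both spread the points out and stop the labels from overplotting.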
<python><python-3.x><cluster-analysis><k-means>
2023-03-11 01:14:51
1
20,492
ASH
75,702,225
8,012,864
Python remove row from list of lists
<p>I have a list of lists that looks like this in Python...</p> <pre><code>[ [&quot;item 1&quot;, 'green', 'round', 'sold'], [&quot;item73&quot;, 'red', 'square', 'for sale'], ['item477', 'blue', 'rectangle', 'on hold'] ] </code></pre> <p>I have the name of the item <code>item73</code> generated by another process. How can I delete the matching list from this list of lists so I end up with...</p> <pre><code>[ [&quot;item 1&quot;, 'green', 'round', 'sold'], ['item477', 'blue', 'rectangle', 'on hold'] ] </code></pre> <p>It seems like a simple task, but I am stuck and unable to do it. I have tried...</p> <pre><code>mylist.remove('item73') </code></pre> <p>But this is not working.</p>
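The `.remove('item73')` call fails because the list's elements are sub-lists, not strings, so `'item73'` never equals any element. Two common sketches, filtering by the first field or locating the matching row and removing it whole:

```python
items = [
    ["item 1", "green", "round", "sold"],
    ["item73", "red", "square", "for sale"],
    ["item477", "blue", "rectangle", "on hold"],
]

# Option 1: build a new list keeping rows whose first field differs.
filtered = [row for row in items if row[0] != "item73"]
print(filtered)

# Option 2: find the matching sub-list and remove it in place.
for row in items:
    if row[0] == "item73":
        items.remove(row)   # remove the whole sub-list, not the string
        break
print(items)
```

Both leave only the `"item 1"` and `"item477"` rows.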
<python>
2023-03-11 01:10:31
2
443
jsmitter3
75,702,101
3,886,898
Create a labeling project in Azure Machine Learning using Python code
<p>The GUI in Azure Machine Learning for creating datasets is straightforward, but I have a hard time creating one through Python code. I'm using the Python 3.8 Azure ML kernel. Here is the code I have, but it's running into a bug and I'm not able to debug it.</p> <pre><code>from azureml.core import Workspace, Dataset from azureml.contrib.dataset import FileHandlingOption from azureml.contrib.dataset.labeled_dataset import _LabeledDatasetFactory # Authenticate and create a workspace object ws = Workspace.from_config() # Get a reference to the registered dataset dataset = Dataset.get_by_name(ws, 'my-registered-dataset') # Create a labeled dataset factory labeled_dataset_factory = _LabeledDatasetFactory() # Create the labeling project project = labeled_dataset_factory.from_input_data_reference( dataset.as_named_input('my_data').as_download(), label_column_name='my-label-column', file_handling_option=FileHandlingOption.SKIP_DOWNLOAD ) # Register the labeling project project.register(ws) </code></pre> <p>I'm receiving this error message:</p> <pre><code>AttributeError: '_LabeledDatasetFactory' object has no attribute 'from_input_data_reference' </code></pre> <p>What attribute should I use here to get this running?</p>
<python><azure><azure-functions><azure-machine-learning-service>
2023-03-11 00:37:23
2
1,108
Mohammad
75,702,065
7,267,480
h5py problem - driver lock request failed - how to fix? (under WSL Ubuntu)
<p>I am trying to use the h5py (<a href="https://pypi.org/project/h5py/" rel="nofollow noreferrer">https://pypi.org/project/h5py/</a>) package to handle the data of datasets in that format.</p> <p>I am using a third-party library to write datasets in that format; it uses h5py.</p> <p>I got the following error when I tried to use their code:</p> <blockquote> <p>driver lock request failed\n File &quot;H5FDsec2.c&quot;, line 1002, in H5FD__sec2_lock\n unable to lock file, errno = 11, error message = 'Resource temporarily unavailable'\n\nEnd of HDF5 error back trace\n\nUnable to open/create file 'samples_file.hdf5'</p> </blockquote> <p>It creates an HDF file in the directory I am working in, but it seems to be locked and can't be written.</p> <p>What is the problem?</p> <p>How can I fix it? Please let me know if any additional information is needed.</p> <p>Some information about the configuration I have:</p> <pre><code>{'platform': 'Linux', 'platform-release': '5.15.90.1-microsoft-standard-WSL2', 'platform-version': '#1 SMP Fri Jan 27 02:56:13 UTC 2023', 'architecture': 'x86_64', 'hostname': 'note-4', 'ip-address': '127.0.1.1', 'mac-address': '00:15:5d:aa:8e:c1', 'processor': 'x86_64', 'ram': '15 GB'} </code></pre> <p><em>UPDATE:</em></p> <p>I will show you a small part of the code that is running here:</p> <pre><code>if use_hdf5: # check for existing test case file check_case_file(case_file) h5f = h5py.File(case_file, &quot;a&quot;) # loop over given number of samples for i in range(min(dataset_range), max(dataset_range)): sample_group = f'sample_{i}' if sample_group in h5f: if ('exp_pw' in h5f[sample_group]) and ('theo_pw' in h5f[sample_group]) and ('theo_par' in h5f[sample_group]): if overwrite: sample_and_write(case_file, i, particle_pair, experiment, solver, open_data, fixed_resonance_ladder, vary_Erange, use_hdf5) else: samples_not_being_generated.append(i) else: sample_and_write(case_file, i, particle_pair, experiment, solver, open_data, fixed_resonance_ladder, vary_Erange, use_hdf5) 
else: sample_and_write(case_file, i, particle_pair, experiment, solver, open_data, fixed_resonance_ladder, vary_Erange, use_hdf5) h5f.close() </code></pre> <p>In the function that writes file..:</p> <pre><code>def sample_and_write(case_file, isample, particle_pair, experiment, solver, open_data, fixed_resonance_ladder, vary_Erange, use_hdf5): ... # write data if use_hdf5: exp_pw_df.to_hdf(case_file, f&quot;sample_{isample}/exp_pw&quot;) ... f = h5py.File(case_file, 'a') if 'exp_cov' in f[f&quot;sample_{isample}&quot;].keys(): del f[f'sample_{isample}/exp_cov'] else: pass f.create_dataset(f'sample_{isample}/exp_cov', data=CovT) f.close() else: exp_pw_df.to_csv(os.path.join(case_file,f'sample_{isample}','exp_pw'), index=False) theo_pw_df.to_csv(os.path.join(case_file,f'sample_{isample}','theo_pw'), index=False) </code></pre>
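<p>One workaround worth trying (an assumption on my side, not verified against every HDF5 build): HDF5 exposes an environment switch to disable its POSIX file locking, which WSL filesystems sometimes reject with exactly this <code>errno = 11</code> error. It must be set before the HDF5 library is first loaded, i.e. before <code>h5py</code> is imported. Separately, note that the outer <code>h5py.File(case_file, &quot;a&quot;)</code> handle is still open while <code>sample_and_write</code> opens the same file again via <code>to_hdf</code>, which could also trigger the lock conflict.</p>

```python
import os

# Set this BEFORE importing h5py (HDF5 reads it once, at library load time).
# This disables HDF5's advisory file locking, which some WSL mounts reject.
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"

# only after this point:
# import h5py
# h5f = h5py.File("samples_file.hdf5", "a")
```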
<python><h5py><hdf>
2023-03-11 00:25:44
0
496
twistfire
75,701,995
5,323,311
Polars: Naming Dataframe Columns when using scan_csv on headerless CSVs
<p>It doesn't look like Polars' <code>scan_csv</code> supports user-provided column names. This can make it a little awkward to work with headerless CSVs because you then have to either mutate your data for Polars or load with <code>read_csv</code> which has the <code>new_columns</code> argument (and while <code>new_columns</code> achieves the desired outcome, its name makes it feel like it isn't necessarily intended for use this way).</p> <p>What is the current best way for working with headerless CSVs in Polars? Were there design decisions at play in not releasing Polars with a <code>columns_names</code> or <code>headers</code> arg?</p>
<python><dataframe><python-polars>
2023-03-11 00:06:21
1
895
Tshimanga
75,701,948
14,729,820
How to merge many data frames into one
<p>I have a directory that has many folders; each folder contains (<code>images folder &amp; labels text</code>) and I want to combine them into one dataframe file by concatenating folder names with image names to make them unique names. The structure of my directory is like below:</p> <pre><code>$ tree . ├── sample │ ├---- folder_1 ----|-- -- train.jsonl | | |----- imgs | | | └───├── 0.png | | | └── 1.png | | | └── 2.png | | | └── 3.png .. .. ... ... | | | └── n.png │ ├---- folder_2 ----|-- -- train.jsonl | | |----- imgs | | | └───├── 0.png | | | └── 1.png | | | └── 2.png | | | └── 3.png .. .. ... ... | | | └── n.png │ ├---- folder_3 ----|-- -- train.jsonl | | |----- imgs | | | └───├── 0.png | | | └── 1.png | | | └── 2.png | | | └── 3.png .. .. ... ... | | | └── n.png </code></pre> <p>In each folder, the <code>train.jsonl</code> file contains the image name and the corresponding text, for example in <code>folder_1</code>:</p> <pre><code>{&quot;file_name&quot;: &quot;0.png&quot;, &quot;text&quot;: &quot;Hello&quot;} {&quot;file_name&quot;: &quot;1.png&quot;, &quot;text&quot;: &quot;there&quot;} </code></pre> <p>And likewise in the others, e.g. <code>folder_2</code>:</p> <pre><code>{&quot;file_name&quot;: &quot;0.png&quot;, &quot;text&quot;: &quot;Hi&quot;} {&quot;file_name&quot;: &quot;1.png&quot;, &quot;text&quot;: &quot;there from the second dir&quot;} </code></pre> <p>....</p> <p>What I want is to update the <code>file_name</code> path by reading those json lines with pandas or python and concatenating the parent directories with the image name:</p> <p>Now after the update by <a href="https://stackoverflow.com/questions/75701948/how-to-merge-many-data-frames-to-only-one/75702105#75702105"><strong>@RAI</strong></a></p> <pre><code>import pandas as pd import os df = pd.DataFrame(columns=['file_name', 'text']) # Traverse the directory recursively for root, dirs, files in os.walk('sample'): for file in files: if file == 'train.jsonl': df_temp = pd.read_json(os.path.join(root, file), lines=True) 
df_temp['file_name'] = os.path.join(root, 'imgs', df_temp['file_name']) df = df.append(df_temp, ignore_index=True) print(df) </code></pre> <p>I get this issue:</p> <pre><code>Traceback (most recent call last): File &quot;merage_files.py&quot;, line 11, in &lt;module&gt; print(os.path.join(root, 'imgs', df_temp['file_name'])) File &quot;/usr/lib/python3.8/posixpath.py&quot;, line 90, in join genericpath._check_arg_types('join', a, *p) File &quot;/usr/lib/python3.8/genericpath.py&quot;, line 152, in _check_arg_types raise TypeError(f'{funcname}() argument must be str, bytes, or ' TypeError: join() argument must be str, bytes, or os.PathLike object, not 'Series' </code></pre> <p>So the expected df should look like this:</p> <pre><code> file_name text 0 sample/folder_1/0.png Hello 1 sample/folder_1/1.png there 2 sample/folder_2/0.png Hi 3 sample/folder_2/1.png there from the second dir .......... ........ </code></pre> <p>This makes the names unique, and then we can loop through one combined data frame.</p>
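<p>For reference, the <code>TypeError</code> comes from passing a whole <code>Series</code> to <code>os.path.join</code>, which only accepts single path segments. A hedged sketch of the element-wise alternative (the folder layout here is a toy stand-in for one parsed <code>train.jsonl</code>):</p>

```python
import os

import pandas as pd

# toy stand-in for one parsed train.jsonl
df_temp = pd.DataFrame({"file_name": ["0.png", "1.png"],
                        "text": ["Hello", "there"]})
root = os.path.join("sample", "folder_1")

# prepend the parent directories element-wise instead of calling
# os.path.join on the Series itself
df_temp["file_name"] = root + os.sep + "imgs" + os.sep + df_temp["file_name"]

print(df_temp["file_name"].tolist())
```

Also worth noting: <code>DataFrame.append</code> was removed in pandas 2.x, so collecting each <code>df_temp</code> in a list and calling <code>pd.concat</code> once at the end is the more future-proof pattern.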
<python><json><pandas><dataframe><deep-learning>
2023-03-10 23:53:47
1
366
Mohammed
75,701,899
8,361,830
Cannot login with QRCode in TikTok using Selenium
<p>I have written a simple script to log in to TikTok using a QR code, but when I scan it, it shows that the URL is undefined:</p> <pre class="lang-py prettyprint-override"><code>from time import sleep from selenium import webdriver if __name__ == &quot;__main__&quot;: driver = webdriver.Chrome(driver_executable_path=&quot;path&quot;) sleep(1) driver.get(&quot;https://www.tiktok.com&quot;) sleep(1) driver.execute_script(f'document.querySelector(&quot;[data-e2e=nav-login-button]&quot;).click();') sleep(1) driver.execute_script(f'document.querySelector(&quot;a[href*=qrcode]&quot;).click();') sleep(160) </code></pre> <p>Result:</p> <pre><code>undefined? next_url=undefined%3Fclient_secret%3DQCTI6LYM </code></pre> <p>Screenshot: <a href="https://i.sstatic.net/E7qB1.png" rel="nofollow noreferrer">https://i.sstatic.net/E7qB1.png</a></p>
<python><selenium-webdriver><selenium-chromedriver><tiktok>
2023-03-10 23:39:08
2
396
Dori
75,701,878
817,659
how to read a json web response representing a pandas dataframe
<p>I have a <code>json</code> response from a <code>FastAPI</code> <code>REST</code> <code>service</code> that is generated like this:</p> <pre><code>dfjson = df.to_json(orient='records', date_format='iso', indent=4) </code></pre> <p>and looks like this:</p> <pre><code>[ { &quot;Date&quot;:&quot;2017-12-01T00:00:00.000Z&quot;, &quot;RR&quot;:0.0, &quot;Symbol&quot;:&quot;AAPL&quot; }, { &quot;Date&quot;:&quot;2018-03-12T00:00:00.000Z&quot;, &quot;RR&quot;:-0.0655954215, &quot;Symbol&quot;:&quot;AAPL&quot; }, { &quot;Date&quot;:&quot;2018-03-14T00:00:00.000Z&quot;, &quot;RR&quot;:-0.0493162968, &quot;Symbol&quot;:&quot;AAPL&quot; }, { &quot;Date&quot;:&quot;2018-03-15T00:00:00.000Z&quot;, &quot;RR&quot;:-0.0539632781, &quot;Symbol&quot;:&quot;AAPL&quot; }, ... </code></pre> <p>On the <code>client</code> side, I am trying to convert this <code>json</code> into a <code>pandas</code> <code>dataframe</code> thus:</p> <pre><code>df = pd.read_json(response.text) print(df) </code></pre> <p>But I get an error:</p> <pre><code>.... raise ValueError(&quot;If using all scalar values, you must pass an index&quot;) ValueError: If using all scalar values, you must pass an index </code></pre> <p>I am not sure what I am doing wrong?</p>
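<p>As a sanity check, a records-style payload parses fine when treated as plain JSON first, so one possibility (an assumption, not confirmed from the question) is that <code>response.text</code> is not the raw array — e.g. it was serialized twice, once by <code>to_json</code> and once more by the framework. A hedged sketch:</p>

```python
import json

import pandas as pd

# a trimmed copy of the payload shape from the question
payload = '[{"Date": "2017-12-01T00:00:00.000Z", "RR": 0.0, "Symbol": "AAPL"}]'

records = json.loads(payload)
# if the service double-encoded, json.loads returns a str instead of a list
if isinstance(records, str):
    records = json.loads(records)

df = pd.DataFrame(records)
print(df.shape)
```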
<python><json><pandas><rest>
2023-03-10 23:34:38
1
7,836
Ivan
75,701,809
13,376,511
Why does set subtraction and .difference() run at different speeds
<p>To find the difference between two sets there are the <code>-</code> operator and <code>.difference()</code>. I'm using this code to time each of those:</p> <pre><code>import timeit print(timeit.timeit(''' a.difference({b}) ''', setup=''' a = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} b = 3 ''')) # =&gt; 0.2423834060318768 print(timeit.timeit(''' a - {b} ''', setup=''' a = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} b = 3 ''')) # =&gt; 0.2027170000365004 </code></pre> <p>When I run this in CPython I get this:</p> <pre><code>0.24530324200168252 0.205820870003663 </code></pre> <p>This made sense to me because <code>.difference()</code> can take any iterable, not just sets. However, when I run it in PyPy I get this:</p> <pre><code>0.14613953093066812 0.23659668595064431 </code></pre> <p>The times are completely flipped, so surely it can't be because <code>.difference()</code> can take any iterable. What is the difference between the implementations of <code>.difference()</code> and <code>-</code>? Is there any difference between CPython and PyPy's implementations?</p>
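<p>For what it's worth, the two spellings differ behaviorally even before any timing: the operator insists on a set operand while the method takes any iterable, so they go through different code paths (a small sketch; results checked on CPython):</p>

```python
a = {1, 2, 3, 4, 5}

# same result either way when both operands are sets
assert a - {3} == a.difference({3}) == {1, 2, 4, 5}

# the method accepts any iterable...
assert a.difference([3, 4]) == {1, 2, 5}

# ...but the operator requires a set and raises TypeError otherwise
try:
    a - [3, 4]
    raised = False
except TypeError:
    raised = True
print(raised)
```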
<python><set><cpython><pypy><timeit>
2023-03-10 23:20:21
1
11,160
Michael M.
75,701,791
7,133,942
How to use the current value of a variable in Python in another class without using arguments in the method
<p>I have the following 2 Python files:</p> <p><strong>Test.py</strong></p> <pre><code>import Test2 variable1_Test = 1 #Call function from Test2 if __name__ == &quot;__main__&quot;: variable1_Test = 5 Test2.print_variables() </code></pre> <p><strong>Test2.py</strong></p> <pre><code>def print_variables (): import Test print(f&quot;Test.variable1_Test: {Test.variable1_Test}&quot;) </code></pre> <p>Strangely, when I run the main code in Test.py (which should be the file for the main execution) the output of <code>variable1_Test</code> is 1 and not 5, although I clearly updated its value to 5 before calling the method <code>Test2.print_variables()</code>.</p> <p>So my question is why this is happening and why Python does not see the updated variable? Further, I would like to know how to use the current value of a variable in Python in another file without using arguments in the method. I know a workaround is to pass arguments to the method <code>print_variables</code> in Test2.py. But I don't want to do this as I have a big simulation with over 100 parameters that I would have to pass. So I just want to change certain parameters in Test.py and then run the method in Test2.py with updated variables. How can I do that?</p>
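<p>A minimal reconstruction of what I believe is happening (module objects simulated with the stdlib, names illustrative): running <code>Test.py</code> as a script registers it under the name <code>__main__</code>, so <code>import Test</code> inside <code>Test2</code> executes the same file a <em>second</em> time and produces a separate module object whose <code>variable1_Test</code> is still 1 — its <code>__name__</code> is <code>&quot;Test&quot;</code>, so the guarded block never runs there.</p>

```python
import types

# stand-in for Test.py executed as a script: this module object is "__main__"
main_mod = types.ModuleType("__main__")
main_mod.variable1_Test = 1    # the top-level assignment
main_mod.variable1_Test = 5    # the update inside the __main__ guard

# stand-in for what `import Test` creates inside Test2: a *second* module
# object, in which only the top-level assignment runs
test_mod = types.ModuleType("Test")
test_mod.variable1_Test = 1

print(main_mod.variable1_Test, test_mod.variable1_Test)
```

One common workaround that avoids passing 100+ arguments is to keep the parameters in a dedicated third module (say <code>config.py</code>) that both files import and mutate, so there is only ever one module object holding them.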
<python>
2023-03-10 23:16:46
1
902
PeterBe
75,701,709
4,812,479
How do I align the display of non-aligning dataframe columns containing Arabic, Thai, CJK, etc Unicode characters?
<p>I have a dataframe containing some foreign Unicode characters, such as Arabic and Thai, that causes my dataframe's columns to be misaligned when viewed by a <code>print(df)</code> on the Terminal.</p> <p>I've tried <code>pd.set_option(&quot;display.unicode.east_asian_width&quot;, True)</code>, but it's only aligning the rows containing CJK characters and not the Arabic and Thai rows.</p> <p>How do I properly align dataframe columns when they contain Arabic and Thai characters (or possibly other misaligning Unicode characters / languages) when viewed by a <code>print(df)</code>?</p> <p>Example code:</p> <pre><code>import pandas as pd unicode_chars_list = ['我是一個蘋果', '麺作りは美味しいです', '나는 공원에서 걷고 뛰는 것을 좋아합니다', 'اكل البيتزا لذيذ جدا اريد المزيد.', 'มีจระเข้อยู่ในแม่น้ำ โปรดระวัง', 'Ευχαριστώ τους προγόνους μας.', 'Normal English sentence'] data = { 'a':[2134,4,547,56,3,234,124], 'language': ['Chinese', 'Japanese', 'Korean', 'Arabic', 'Thai', 'Greek', 'English'], 'unicode_chars': unicode_chars_list, 'b': [1,2,3,4,5,6,7], 'c': ['I am a long string here', '2nd string I want this to be some length', '3rd string please', '4th', '5th very loooooong',' 6 be good to the dataframe', '7 the end'] } df = pd.DataFrame(data) </code></pre> <p>Dataframe's columns are aligned when viewed without foreign Unicode characters.</p> <pre><code>print(df.loc[:,['a','language','b','c']]) # a language b c # 0 2134 Chinese 1 I am a long string here # 1 4 Japanese 2 2nd string I want this to be some length # 2 547 Korean 3 3rd string please # 3 56 Arabic 4 4th # 4 3 Thai 5 5th very loooooong # 5 234 Greek 6 6 be good to the dataframe # 6 124 English 7 7 the end </code></pre> <p>When viewing the whole dataframe, the dataframe's CJK (Chinese, Japanese, Korean), Arabic, and Thai columns are not aligned.</p> <pre><code>print(df) # a language unicode_chars b c # 0 2134 Chinese 我是一個蘋果 1 I am a long string here # 1 4 Japanese 麺作りは美味しいです 2 2nd string I want this to be some length # 2 547 Korean 나는 공원에서 걷고 
뛰는 것을 좋아합니다 3 3rd string please # 3 56 Arabic اكل البيتزا لذيذ جدا اريد المزيد. 4 4th # 4 3 Thai มีจระเข้อยู่ในแม่น้ำ โปรดระวัง 5 5th very loooooong # 5 234 Greek Ευχαριστώ τους προγόνους μας. 6 6 be good to the dataframe # 6 124 English Normal English sentence 7 7 the end </code></pre> <p>After applying <code>pd.set_option(&quot;display.unicode.east_asian_width&quot;, True)</code>, CJK (Chinese, Japanese, Korean) rows are aligned, but not the Arabic and Thai rows.</p> <pre><code>pd.set_option(&quot;display.unicode.east_asian_width&quot;, True) print(df) # Run this yourself to see the output, pasting onto here doesn't give the dataframe's true display. </code></pre> <p>Here are the screenshots of what it looks like when I run the above in the Terminal:</p> <p><a href="https://i.sstatic.net/4wYRz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4wYRz.jpg" alt="Misaligned columns in Terminal" /></a></p> <p>And when I run in Jupyter Lab:</p> <p><a href="https://i.sstatic.net/PRZ9P.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PRZ9P.jpg" alt="Misaligned columns in Jupyter Lab" /></a></p> <p>How do I align the dataframe's columns containing Arabic and Thai characters when viewing it with a <code>print(df)</code>? Is there a way?</p> <p>I'm on macOS Ventura 13.2, using Python 3.11.2.</p>
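<p>For context, the pandas option only consults the Unicode East Asian Width property, under which Arabic and Thai characters are classed as narrow, so pandas pads them as one terminal cell each even when the terminal's font renders them wider (or shapes/stacks them). A rough stdlib sketch of that width model — it deliberately ignores Arabic/Thai shaping, which is exactly where terminals disagree:</p>

```python
import unicodedata

def display_width(s: str) -> int:
    """Approximate terminal columns: Wide/Fullwidth chars count as 2."""
    width = 0
    for ch in s:
        if unicodedata.combining(ch):
            continue  # combining marks take no extra column
        width += 2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
    return width

print(display_width("English"), display_width("我是一個蘋果"))
```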
<python><pandas><dataframe><unicode>
2023-03-10 23:00:27
0
316
Aeronautix
75,701,634
3,261,292
Pandas update field value with the output of another condition
<p>I have the following dataframe:</p> <p><a href="https://i.sstatic.net/4e389.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4e389.png" alt="enter image description here" /></a></p> <p>What I want to achieve is, if id1 is not equal to id2, I want to update the y values (in <code>name</code> column) with x value (x is when id1 equals to id2).</p> <pre><code>test = pd.DataFrame({'id1':[1, 1, 1, 2, 2, 1], 'id2': [1, 1, 1, 1, 1, 1], 'name': ['x', 'x', 'x', 'y', 'y', 'x']}) </code></pre> <p>what I did is this:</p> <pre><code>test.loc[test['id1'] != test['id2'], 'name'] = test[test['id1'] == test['id2']]['name'].tolist()[0] </code></pre> <p>It works well, but I don't like the way I solved it (because of this <code>.tolist()[0]</code>). I feel there is another more correct solution.</p> <p>Edit:</p> <p>Sometimes we don't have <code>id1==id2</code> in this Dataframe, so my solution will produce an error, because there no elements in <code>test[test['id1'] == test['id2']]['name'].tolist()[0]</code>.</p>
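<p>One mask-based variant I'd consider (a sketch, not the only idiom), which also survives the no-match case from the edit without indexing into an empty list:</p>

```python
import pandas as pd

test = pd.DataFrame({'id1': [1, 1, 1, 2, 2, 1],
                     'id2': [1, 1, 1, 1, 1, 1],
                     'name': ['x', 'x', 'x', 'y', 'y', 'x']})

matches = test.loc[test['id1'] == test['id2'], 'name']
if not matches.empty:                      # guard: maybe no row has id1 == id2
    test.loc[test['id1'] != test['id2'], 'name'] = matches.iloc[0]

print(test['name'].tolist())
```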
<python><pandas><dataframe><indexing><mask>
2023-03-10 22:47:01
1
5,527
Minions
75,701,576
6,676,101
How do we dispatch to different class methods based on the number of input arguments?
<p>We want to have a class with two methods of the same name, however:</p> <blockquote> <ol> <li>there exists a method which accepts exactly one argument.</li> <li>The other method accepts two or more arguments.</li> </ol> </blockquote> <p>Maybe, to implement multi-dispatching a person can download a third party library.</p> <p>Maybe, another option is to decorate the function with a decorator which <em><strong>curries</strong></em> the function.</p> <p>Maybe, we then use <code>functools.singledispatch</code> to decorate the curried function?</p> <pre class="lang-python prettyprint-override"><code>class K: @dispatch def method(): pass @method.register def method_of_one(one_arg): return the_slowball_klass # method.register(method_of_one) @method.register def method_of_two(one_arg, two_args): return the_slowball_klass # method_of_two = method.register(method_of_two) </code></pre>
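<p>For the record, <code>functools.singledispatch</code> dispatches on the <em>type</em> of the first argument, not on the argument count, so it may not fit here even with currying; something hand-rolled on arity could look like this hypothetical helper (names invented, plain functions used to keep the sketch short):</p>

```python
def arity_dispatch(*funcs):
    # hypothetical helper: pick an implementation by number of positional args
    table = {f.__code__.co_argcount: f for f in funcs}

    def dispatcher(*args):
        return table[len(args)](*args)

    return dispatcher

def one(a):
    return ("one", a)

def two(a, b):
    return ("two", a, b)

method = arity_dispatch(one, two)
print(method(1), method(1, 2))
```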
<python><python-3.x><overloading><dispatch><overload-resolution>
2023-03-10 22:37:58
2
4,700
Toothpick Anemone
75,701,437
2,593,810
Why do we multiply learning rate by gradient accumulation steps in PyTorch?
<p>Loss functions in pytorch use &quot;mean&quot; reduction. So it means that the model gradient will have roughly the same magnitude given any batch size. It makes sense that you want to scale the learning rate up when you increase batch size because your gradient doesn't become bigger as you increase batch size.</p> <p>For gradient accumulation in PyTorch, it will &quot;sum&quot; the gradient N times where N is the number of times you call <code>backward()</code> before you call <code>step()</code>. My intuition is that this would increase the magnitude of the gradient and you should reduce the learning rate, or at least not increase it.</p> <p>But I saw people wrote multiplication to gradient accumulation steps in <a href="https://github.com/huggingface/diffusers/blob/1a7e9f13fdc23f0766ad8a31171d00781c51b131/examples/textual_inversion/textual_inversion.py#L659-L662" rel="noreferrer">this repo</a>:</p> <pre class="lang-py prettyprint-override"><code>if args.scale_lr: args.learning_rate = ( args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes ) </code></pre> <p>I also see similar code in <a href="https://github.com/CompVis/stable-diffusion/blob/21f890f9da3cfbeaba8e2ac3c425ee9e998d5229/main.py#L686" rel="noreferrer">this repo</a>:</p> <pre class="lang-py prettyprint-override"><code>model.learning_rate = accumulate_grad_batches * ngpu * bs * base_lr </code></pre> <p>I understand why you want to increase the learning rate by batch size. But I don't understand why they try to increase the learning rate by the number of accumulation steps.</p> <ol> <li>Do they divide the loss by N to reduce the magnitude of the gradient? Otherwise why do they multiply learning rate by the accumulation steps?</li> <li>How are gradients from different GPUs accumulated? Is it using mean or sum? If it's sum, why are they multiplying the learning rate by nGPUs?</li> </ol>
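<p>A toy check of the intuition above, outside PyTorch: summing N mean-reduced micro-batch gradients (identical data here, for simplicity) scales the step by N, unless the loss is first divided by N — which is what many accumulation loops quietly do before each <code>backward()</code>, and would explain why scaling the learning rate back up is then consistent:</p>

```python
import numpy as np

# gradient of loss = mean((w*x - y)**2) wrt w for one micro-batch
w = 2.0
x = np.array([1.0, 2.0])
y = np.array([1.0, 1.0])
g_single = np.mean(2.0 * (w * x - y) * x)

accum_steps = 4
g_summed = sum(g_single for _ in range(accum_steps))                 # plain accumulation
g_scaled = sum(g_single / accum_steps for _ in range(accum_steps))   # loss divided by N first

print(g_summed, g_scaled)
```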
<python><deep-learning><pytorch><gradient-descent><learning-rate>
2023-03-10 22:15:02
1
4,257
offchan
75,701,348
6,676,101
What is the most Pythonic way to call a super class `__getattr__` method?
<p>In the code below we attempt to overload the dot-operator.</p> <p>I am not sure how to call <code>__getattr__</code> from the super class inside of the <code>__getattr__</code> method defined inside of the sub-class.</p> <pre class="lang-python prettyprint-override"><code>class Floor: def __init__(this_floor, surface_area:float): this_floor._surface_area = float(surface_area) class DrSeussHouse: def __init__(this_hoos, original_year_built:int, original_floor:Floor): this_hoos._floor = str(original_floor) this_hoos._year_built = int(original_year_built) def __getattr__(this_hoos, attr_name:str): # HELP IS DESIRED RIGHT HERE, INSIDE OF... # ... THIS IMPLEMENTATION OF `__getattr__` # return getattr(this._wahoos._floor, attr_name) # return getattr(super(type(this_hoos)), attr_floor)(attr_floor) # super().__getattr__(this._wahoos._floor, attr_name) </code></pre>
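<p>For context on the commented-out attempts: plain <code>object</code> defines no <code>__getattr__</code> at all, so <code>super().__getattr__(...)</code> raises <code>AttributeError</code> here; the usual pattern is to delegate the lookup to the wrapped object directly. A trimmed, runnable sketch of that delegation (class bodies simplified):</p>

```python
class Floor:
    def __init__(self, surface_area: float):
        self._surface_area = float(surface_area)

class DrSeussHouse:
    def __init__(self, original_floor: Floor):
        self._floor = original_floor

    def __getattr__(self, attr_name: str):
        # __getattr__ is only invoked for attributes *not* found normally,
        # so this safely forwards unknown lookups to the wrapped Floor
        return getattr(self._floor, attr_name)

house = DrSeussHouse(Floor(42))
print(house._surface_area)
```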
<python><python-3.x><attributes><operator-overloading>
2023-03-10 21:59:40
1
4,700
Toothpick Anemone
75,701,313
726,730
python - pyinstaller hidden_import exe file then subprocess Popen the exe file
<p>Is it possible in the .spec file to include an exe file as a hidden_import, and then, in the main py script, use subprocess.Popen to run it?</p> <p>I am getting an error like:</p> <pre class="lang-py prettyprint-override"><code>Traceback (most recent call last): File &quot;running_example.py&quot;, line 40, in &lt;module&gt; s = shout.Shout() File &quot;shout.py&quot;, line 66, in __init__ self.p1 = Popen([resource_path0(&quot;msys_shout.exe&quot;)], stdin=PIPE, stdout=PIPE,shell=False, universal_newlines=True, bufsize=1, close_fds=ON_POSIX) File &quot;subprocess.py&quot;, line 969, in __init__ File &quot;subprocess.py&quot;, line 1438, in _execute_child FileNotFoundError: [WinError 2] The system cannot find the file specified </code></pre> <p>where the resource_path0 function is:</p> <pre class="lang-py prettyprint-override"><code>def resource_path0(relative_path): &quot;&quot;&quot; Get absolute path to resource, works for dev and for PyInstaller &quot;&quot;&quot; base_path = getattr( sys, '_MEIPASS', os.path.dirname(os.path.abspath(__file__))) return os.path.join(base_path, relative_path) </code></pre> <p>The .spec file is:</p> <pre class="lang-py prettyprint-override"><code># -*- mode: python ; coding: utf-8 -*- block_cipher = None a = Analysis( ['running_example.py'], pathex=[], binaries=[], datas=[], hiddenimports=[&quot;msys_shout.*&quot;,&quot;shout.*&quot;,&quot;*.mp3&quot;], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False, ) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE( pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [], name='running_example', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, upx_exclude=[], runtime_tmpdir=None, console=True, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, ) </code></pre>
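<p>For reference, <code>hiddenimports</code> only covers Python modules that PyInstaller's analysis misses — a bundled exe and the mp3s would instead go in <code>datas</code>, e.g. <code>datas=[('msys_shout.exe', '.'), ('*.mp3', '.')]</code> (source paths assumed relative to the spec file; adjust as needed). The <code>resource_path0</code> helper then resolves them at runtime. A trimmed, runnable sketch of that helper, with the fallback simplified to the current directory for this demo:</p>

```python
import os
import sys

def resource_path(relative_path: str) -> str:
    # sys._MEIPASS exists only inside a PyInstaller onefile bundle;
    # outside one, fall back to the current directory (simplified here)
    base_path = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base_path, relative_path)

print(resource_path("msys_shout.exe"))
```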
<python><pyinstaller><popen>
2023-03-10 21:52:12
1
2,427
Chris P
75,701,278
12,106,577
Flask - Using javascript and url_for to replace html
<p>I am new to Flask (as well as whatever else is involved in what I'm trying to do, apparently) and I'm creating a small app using M. Grinberg's Mega-Tutorial as reference (Flask==2.2.3, jQuery==3.6.4).</p> <p>I'm trying to incorporate a range html input element</p> <p><code>&lt;input id=&quot;slider&quot; type=&quot;range&quot; min=&quot;0&quot; max=&quot;10&quot; name=&quot;slider&quot; value=&quot;1&quot;/&gt;</code></p> <p>whose value I'm trying to take via jQuery and use to trigger an endpoint to create an image:</p> <p><code>@main.route(&quot;/graph.png/order=&lt;order&gt;&quot;)</code></p> <p>which I then want to display in another element in the page, without re-rendering the entire thing from scratch.</p> <p>I'm incorporating a script in the template:</p> <pre><code>{% block base_script %} &lt;script src=&quot;{{ url_for('static', filename='js/client_render.js') }}&quot;&gt;&lt;/script&gt; {% endblock %} </code></pre> <p>I would think that this would do the trick:</p> <pre><code>$(document).on('input', '#polyfit_order_slider', function () { slider_val = $('#polyfit_order_slider').get()[0]['value']; console.log(slider_val); $(document.getElementById('cur-input')).text('Current input: ' + slider_val) $(document.getElementById('graph_fit')).html( &quot;&lt;img src='{{ url_for('main.graph', order=&quot;+slider_val+&quot;) }}'&gt;&lt;/img&gt;&quot; ) }); </code></pre> <p>but it so happens that every time I move the input slider I'm getting a yellow 404 in the terminal <code>&quot;GET /%7B%7B%20url_for( HTTP/1.1&quot;</code> and a <code>GET http://localhost:8000/%7B%7B%20url_for( 404 (NOT FOUND)</code> from the console.</p> <p><a href="https://i.sstatic.net/4v94K.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4v94K.gif" alt="" /></a></p> <p><strong>What is the correct syntax to use for this scenario?</strong> I've implemented the functionality using entire page re-renders on the server side by having the user click submit on a flask wtf form but I
feel a more dynamic client interaction would be better, if possible.</p> <p><a href="https://i.sstatic.net/Iw8eb.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Iw8eb.gif" alt="" /></a></p> <p><strong>Notes:</strong></p> <ul> <li><p>I'm trying to follow Flask docs' <a href="https://flask.palletsprojects.com/en/2.2.x/patterns/javascript/" rel="nofollow noreferrer">Generating URLs</a> but when putting anything like <code>const user_url = {{ url_for(&quot;user&quot;, id=current_user.id)|tojson }}</code> in the tags I'm getting SyntaxErrors.</p> </li> <li><p>The reference I mentioned does something very similar <a href="https://github.com/miguelgrinberg/microblog/blob/main/app/templates/base.html#L87" rel="nofollow noreferrer">here</a>.</p> </li> </ul>
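<p>The 404 on <code>%7B%7B%20url_for(</code> happens because Flask serves files under <code>/static</code> verbatim — Jinja never renders them — so the literal <code>{{ url_for(...) }}</code> text reaches the browser and gets URL-encoded. The two usual options are to define the URL in a small inline <code>&lt;script&gt;</code> inside the template (which <em>is</em> rendered) or to build the URL by hand in the static JS file. A sketch of the latter, with the route shape copied from the question:</p>

```javascript
// Build the endpoint URL in plain JS instead of relying on Jinja,
// which does not process files served from /static.
function graphUrl(order) {
  return "/graph.png/order=" + encodeURIComponent(order);
}

console.log(graphUrl(3));
```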
<javascript><python><jquery><flask>
2023-03-10 21:47:01
1
399
John Karkas
75,701,273
1,473,517
Can callback show improved solutions with basinhopping without declaring a global variable?
<p>I am using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.basinhopping.html" rel="nofollow noreferrer">basinhopping</a> from scipy and I would like to see the progress that the optimizer is making. Currently I do:</p> <pre><code>def show_bh(a, b, c): global MIN if b &lt; MIN: print([round(x, 2) for x in a], b) MIN = b print(&quot;Optimizing using basinhopping&quot;) MIN = 10 res = basinhopping(distance, x0 = points, niter=1000, minimizer_kwargs={&quot;bounds&quot;: bounds, &quot;constraints&quot;: cons, &quot;method&quot;: &quot;SLSQP&quot;}, callback=show_bh) </code></pre> <p>But this is horrible because of the global variable MIN. Is there a more elegant way to show progress?</p>
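<p>One global-free option: since any callable with the <code>(x, f, accept)</code> signature works as <code>callback=</code>, the running minimum can live on a small callable object instead of a module-level variable. A sketch of that form, exercised here without scipy:</p>

```python
class ImprovementLogger:
    """Callable suitable for basinhopping's callback=... argument."""

    def __init__(self):
        self.best = float("inf")

    def __call__(self, x, f, accepted):
        if f < self.best:
            self.best = f
            print([round(v, 2) for v in x], f)

logger = ImprovementLogger()
logger([0.1234], 3.0, True)   # improvement: printed, best updated
logger([0.5000], 5.0, True)   # no improvement: ignored
print(logger.best)
```

Usage would then be <code>basinhopping(..., callback=logger)</code>; the state stays on <code>logger</code>.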
<python><scipy>
2023-03-10 21:46:00
2
21,513
Simd
75,701,272
3,047,531
In Python, does a function capture variables created after the function is defined?
<p>I am trying to better understand closures. Examples 1 - 3 make sense to me according to my understanding, but I think it must be incorrect since example 4 does not make sense to me.</p> <h1>Example 1</h1> <pre class="lang-py prettyprint-override"><code>def f(): print(x) x=1 f() # prints 1 </code></pre> <p>At the time <code>f()</code> is defined, <code>x</code> does not exist, but that's fine. It will be looked up when <code>f()</code> is invoked, at which point <code>x</code> exists.</p> <h1>Example 2</h1> <pre class="lang-py prettyprint-override"><code>y=2 def g(): print(y) g() # prints 2 </code></pre> <p>This is a closure. When <code>g()</code> is defined, <code>y</code> exists and <code>g()</code> captures a reference to it. If I were to change the value of <code>y</code> after defining <code>g()</code>, this change would be reflected when <code>g()</code> is invoked. As far as I know, this does not involve a closure.</p> <h1>Example 3</h1> <pre class="lang-py prettyprint-override"><code>def h(): x=3 def i(): print(x) return i j=h() j() # prints 3 </code></pre> <p>This just shows that a closure retains a reference to variables even after they go out of scope. Once <code>h()</code> returns, I cannot refer to directly <code>x</code> since it has gone out of scope, but <code>j()</code> retains a reference to it.</p> <h1>Example 4</h1> <pre class="lang-py prettyprint-override"><code>def k(): def l(): print(x) x=4 return l m=k() m() # prints 4 </code></pre> <p>This one is confusing to me. I would think (as I claimed in example 1) that <code>l()</code> is not a closure, so it doesn't retain a reference to <code>x</code>; it would just attempt to look up <code>x</code> when invoked. But, in fact, when we invoke it via <code>m()</code>, <code>x</code> is printed even though it has gone out of scope. Does this mean that <code>f()</code> from example 1 was also a closure? It seems surprising that a function could capture variables created after its definition. 
Unless it's wrong to say that it captures variables per se. Is it more accurate to say that it captures the &quot;environment&quot; or &quot;scope&quot; in which it is defined?</p>
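<p>A small check of the "scope, not snapshot" framing: a nested function that references an enclosing function's local gets a <code>__closure__</code> cell regardless of whether the assignment appears before or after its definition, while a function reading a global never does:</p>

```python
def k():
    def l():
        return x      # refers to k's local x, assigned *after* l is defined
    x = 4
    return l

m = k()

def f():
    return x          # plain global lookup, not a closure (f is never called)

print(m(), m.__closure__ is not None, f.__closure__ is None)
```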
<python><closures>
2023-03-10 21:45:52
1
444
master_latch
75,701,262
346,977
Pandas dataframe: resampling time intervals and dividing values proportionally?
<p>I have the following pandas dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>start_time</th> <th>end_time</th> <th>amount</th> </tr> </thead> <tbody> <tr> <td>foo</td> <td>2023-03-11 09:45:27</td> <td>2023-03-11 09:58:39</td> <td>48</td> </tr> <tr> <td>bar</td> <td>2023-03-11 09:59:00</td> <td>2023-03-11 10:09:00</td> <td>20</td> </tr> </tbody> </table> </div> <p>I'm hoping to split the data by hourly intervals, such that amounts would be:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>interval</th> <th>amount</th> </tr> </thead> <tbody> <tr> <td>2023-03-11 09:00-10:00</td> <td>50</td> </tr> <tr> <td>2023-03-11 10:00-11:00</td> <td>18</td> </tr> </tbody> </table> </div> <p>The logic being:</p> <ul> <li>1 minute (=10%) of <code>bar</code> falls within the 09:00-10:00 interval, so 10% of <code>bar</code>'s amount is added to the 09:00-10:00 interval, for 48 + 20*0.1 = 50</li> <li>The remainder of <code>bar</code>'s amount (18) is assigned to 10:00-11:00</li> </ul> <p>From what I understand, dataframe's <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.resample.html" rel="nofollow noreferrer">.resample</a> method is best suited for splitting by intervals, however:</p> <ul> <li>It seems like resample is intended for individual times, not a start &amp; end time</li> <li>There doesn't seem to be a way to proportionally divide times based on intervals</li> </ul> <p>I'm guessing this is a common need when working with time series in Python. Before I go building out something convoluted...is there any built-in/simple way to tackle it?</p>
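<p>Before reaching for <code>resample</code>, the proportional split itself is small enough to write directly; a stdlib sketch of the interval logic for one row (pandas could then aggregate the per-hour pieces across rows):</p>

```python
from datetime import datetime, timedelta

def split_hourly(start, end, amount):
    """Apportion `amount` across the hour buckets that [start, end) touches,
    proportionally to the time spent in each bucket."""
    total = (end - start).total_seconds()
    out, cur = {}, start
    while cur < end:
        hour = cur.replace(minute=0, second=0, microsecond=0)
        nxt = min(hour + timedelta(hours=1), end)
        out[hour] = out.get(hour, 0) + amount * (nxt - cur).total_seconds() / total
        cur = nxt
    return out

# the `bar` row from the question: 1 of 10 minutes falls in the 09:00 hour
parts = split_hourly(datetime(2023, 3, 11, 9, 59), datetime(2023, 3, 11, 10, 9), 20)
print(parts)
```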
<python><pandas><group-by><resampling>
2023-03-10 21:43:30
1
12,635
PlankTon
75,701,258
1,663,762
Clone a Poetry project onto my PC and continue using Poetry
<p>It seems a simple question. I forked someone else's Github repository. This project is developed with and dependencies are managed by Poetry. The directory structure is there, pyproject.toml is there, poetry.lock is there.</p> <p>I have set up poetry on my local PC. And now what?</p> <p>I have gone through 473 tutorials and <em>all</em> tell me how to set up a <em>new</em> Poetry project. In the best case they also tell me how to create a new repo on Github, and create a new Poetry project out of it. Some explain how to migrate an existing non-Poetry project to Poetry.</p> <p>I don't want to. This is what I want:</p> <ul> <li>Clone a project which is developed using Poetry</li> <li>Run the Python application, find the bug, whatever, change the code, retest, and commit to git.</li> <li>Just as the upstream developer did when he was working on this project.</li> </ul> <p>I know how to use Git. I know how to use Github. I have installed Poetry and created a new test project. I can run it, I have a virtual environment, I can use Poetry shell.</p> <p>I just don't know how to clone an <em>existing</em> Github project and continue developing in the Poetry environment.</p> <p>I am using a Linux platform but I think the question is platform independent.</p>
<python><github><python-poetry>
2023-03-10 21:43:09
1
353
Johannes Linkels
75,701,207
15,176,150
Which Design Pattern combines an Interface, a Factory, and Data?
<h3>Problem</h3> <p>I've tried reading <a href="https://python-patterns.guide/gang-of-four/composition-over-inheritance/#solution-2-the-bridge-pattern" rel="nofollow noreferrer">this resource</a> on the naming conventions for common Python design patterns, but I can't find a pattern that matches the class I'm creating.</p> <p>The closest I could find was the <a href="https://python-patterns.guide/gang-of-four/composite/" rel="nofollow noreferrer">Composite Pattern</a>, although I don't feel that matches my use case.</p> <h3>About the Class</h3> <p>I'm working on a Machine Learning Explainability package. One way of explaining a model is to build a simpler <code>surrogate</code> model that mimics the behaviour of the complex model.</p> <p>A common way to implement this is the <a href="https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html" rel="nofollow noreferrer">SHAP algorithm</a>, which requires you to sample the dataset that you used to create the original complex model.</p> <p>How you sample is dependent on a single row of features, which you build the <code>surrogate</code> model around. This makes it useful to have a <code>Sampler</code> object that stores the background dataset and takes a single row as input.</p> <p>This gives me three strongly related complex objects and one dataset. It makes sense to me to store these in the same place and obfuscate their connections with a simple interface.</p> <p>However, I don't think this is an <a href="https://realpython.com/python-interface/" rel="nofollow noreferrer">interface</a> because the methods won't change to match the objects. The class will not be abstract, and it will hold all the important data.</p> <p>I'd like to contain the functionality for building the <code>surrogate</code> model, and <code>Sampler</code> inside this class too. 
So it's partially a <a href="https://en.wikipedia.org/wiki/Factory_method_pattern" rel="nofollow noreferrer">Factory design pattern</a>.</p> <h3>Question</h3> <p>Is there a pre-existing name for a class like this? Maybe the Borg design pattern? Or should I rethink before I create something like this?</p>
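The shape described, a single concrete class that stores the data and hides how its collaborating objects are built and used behind a small interface, is usually called a Facade, here combined with factory methods for the parts it constructs. A minimal sketch of that idea (all names below are illustrative assumptions, not taken from any SHAP library):

```python
class SurrogateExplainer:
    """Facade: stores the background dataset and hides how the
    sampler and surrogate model are constructed and wired together."""

    def __init__(self, background_data):
        self.background_data = background_data  # the stored dataset

    def _make_sampler(self, row):
        # Factory method: callers never construct a sampler directly.
        return {"row": row, "background": self.background_data}

    def explain(self, row):
        sampler = self._make_sampler(row)
        # ...a real implementation would fit a surrogate model around `row`...
        return sampler


explainer = SurrogateExplainer(background_data=[1, 2, 3])
result = explainer.explain(row=0)
print(result["background"])  # [1, 2, 3]
```

The point of the facade is exactly what the question describes: callers see one simple interface while the class owns the data and the construction of its collaborators.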
<python><design-patterns><architecture><software-design>
2023-03-10 21:33:43
1
1,146
Connor
75,701,064
5,394,072
Why documentation etc. use @ in place of * for multiplication
<p>Why do many documentation pages and blog posts use @ in place of * (the multiplication operator) in Python? <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html#scipy.optimize.linprog" rel="nofollow noreferrer">Here</a> is an example: they use <code>C@x</code> instead of <code>c*x</code> (and similarly in the following lines on that page). Is @ used to indicate vector multiplication?</p>
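For context, an illustrative sketch (not taken from the linked page): since PEP 465, `@` is Python's dedicated matrix-multiplication operator, while `*` on NumPy arrays is element-wise, so the two are not interchangeable. On 1-D vectors, `@` is the dot product, which is what `c @ x` denotes in the `linprog` docs:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

elementwise = A * B   # element-by-element (Hadamard) product
matmul = A @ B        # true matrix product, the PEP 465 operator

print(elementwise)  # [[ 5 12] [21 32]]
print(matmul)       # [[19 22] [43 50]]

# On 1-D vectors, @ is the dot product: the c @ x of the linprog docs.
c = np.array([1.0, 2.0])
x = np.array([3.0, 4.0])
print(c @ x)  # 11.0
```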
<python><numpy>
2023-03-10 21:13:19
1
738
tjt
75,701,011
2,510,104
Python multiprocessing pipe max write size
<p>I am implementing a piece of code using Python's multiprocessing package, running on Ubuntu. For more efficient communication between processes, I'm collecting the data to be sent through multiprocessing.Pipe in an array and sending the whole array once its size goes beyond 1000 elements. I noticed there are occasional deadlocks in my code, and it seems I'm hitting them where I send the batched data. I tried a simple script to check whether the size of the object sent over the Pipe might cause the deadlock, and that seems to be the case. After a couple of rounds of experiments, 64 KiB of pickled data appears to be the magic number: anything larger causes a deadlock, while 64 KiB or less goes through instantly. The documentation talks about 32 MiB+ object sizes raising an exception, but my experiment shows 64 KiB. Do you know if this is expected, or am I doing something really wrong here?</p> <pre><code>import multiprocessing as mp import sys import pickle c, p = mp.Pipe(duplex=False) batch_list = [] for i in range(21912): batch_list.append(i) print(f&quot;Size of payload is: {sys.getsizeof(batch_list)/1024} KiB&quot;) pkl_batch_list = pickle.dumps(batch_list) print(f&quot;Size of pickled payload is: {sys.getsizeof(pkl_batch_list)/1024} KiB&quot;) p.send(batch_list) while c.poll(): try: c.recv() except Exception: print(&quot;got exception&quot;) print(&quot;Done!&quot;) </code></pre>
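For context, the observed 64 KiB threshold matches the operating-system pipe buffer rather than any pickle limit: on Linux the kernel pipe buffer is commonly 64 KiB, and `Connection.send()` blocks once that buffer is full until the other end reads. With no reader running, that block becomes a deadlock. A hedged sketch of one way around it (a thread stands in as the reader for brevity; in a real program a child process would drain the pipe):

```python
import multiprocessing as mp
import threading

recv_end, send_end = mp.Pipe(duplex=False)

payload = list(range(200_000))  # pickles to far more than 64 KiB
received = []

# Drain the pipe concurrently so send() can complete even though the
# pickled payload exceeds the kernel pipe buffer.
reader = threading.Thread(target=lambda: received.append(recv_end.recv()))
reader.start()
send_end.send(payload)  # could block indefinitely without a concurrent reader
reader.join()

print(received[0] == payload)  # True
```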
<python><multiprocessing><pipe><pickle>
2023-03-10 21:04:39
1
541
Amir
75,700,990
46,799
How can I print the value of a clicked point in altair_viz scatter plot?
<p>I have created a scatter plot from a dataframe. One of the columns contains large amounts of text, which isn't feasible to display in the hover bubble.</p> <p>I would like to be able to click on a point and the values in the corresponding row to be printed outside the scatter plot. I don't mind if it gets printed as text to the cell output or to another altair panel.</p> <p>Basically I'm thinking of a 'context-detail' type relationship, although it doesn't have to be fancy. I just want to investigate specific points on my chart.</p> <p>EDIT: Some relevant code</p> <pre class="lang-py prettyprint-override"><code>chart = alt.Chart(tsne_df).mark_point().encode( x = 'x', y = 'y', color = 'is_query', tooltip = ['x', 'y', 'is_query', 'source'] ).properties( width=800, height=500) </code></pre> <p>The <code>source</code> column contains too much info for the tooltip.</p>
<python><jupyter-notebook><altair><vega-lite>
2023-03-10 21:02:30
0
10,655
Shahbaz
75,700,946
2,580,302
Modify and re-queue a message using pika on python
<p>I'm using Pika/RabbitMQ lib for processing messages in a python 3.8 project. When processing the messages the function can fail due to several reasons. There are a few cases when the message can be recovered in part if modified. I'm currently creating a new message with the modified body and queuing it. This is not ideal since the processor is the one re-queuing a message that it did not send.</p> <p>I'm wondering if there's a way to modify the original message body and send a <code>basic_nack</code> in order to re-queue a modified message to be processed again. In this way I don't have to recreate the message broker and message sender.</p>
<python><python-3.x><rabbitmq><pika>
2023-03-10 20:56:17
1
17,662
zetacu
75,700,807
633,318
Picking/filtering element from pandas table where data is between column header values
<p>I have some 2D data that has boundaries (<code>bins</code>) like this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as npy import pandas as pd # These are the boundaries of our wind speeds and directions speed_bins = npy.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 100.0]) dir_bins = npy.linspace(0,360,9) # Random LPF values from 0–0.5 data = npy.random.rand(len(dir_bins)-1, len(speed_bins)-1)*0.5 dataTable = pd.DataFrame(data, index=dir_bins[1:],columns=speed_bins[1:]) </code></pre> <p>I have some meteorological data like this:</p> <pre class="lang-py prettyprint-override"><code># Assume all arrays are the same length speeds = npy.array([...]) directions = npy.array([...]) X = npy.array([...]) metData = pd.DataFrame({'speeds':speeds,'directions':directions,'X':X}) </code></pre> <p>What I need to do is find the element in <code>dataTable</code> where each combination of <code>speed</code> and <code>direction</code> is bounded.</p> <p>I'm sure there is some pandas #magic that will do the following:</p> <ul> <li>For each <code>(speed, direction)</code> pair: <ul> <li>Find the row in <code>dataTable</code> that bounds the <code>direction</code></li> <li>Find the column in <code>dataTable</code> that bounds the <code>speed</code></li> <li>return the <code>data</code> variable.</li> </ul> </li> </ul> <p><strong>For example:</strong> if my <code>speed</code> is <code>0.75</code> and my <code>direction</code> is <code>350</code> then I want to get the value at element <code>dataTable.iat[7,1]</code>. <code>0.75</code> is between <code>0.5</code> and <code>1.0</code> (the column header values) and similarly for the direction, but with the index values.</p> <p>I hope I've described this well.</p>
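One concise approach (a sketch, assuming the bin edges are sorted and every value falls strictly inside them) is `numpy.searchsorted`, which maps each value to the bin whose edges bound it, vectorized over all pairs at once:

```python
import numpy as np
import pandas as pd

speed_bins = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 100.0])
dir_bins = np.linspace(0, 360, 9)

rng = np.random.default_rng(0)
data = rng.random((len(dir_bins) - 1, len(speed_bins) - 1)) * 0.5
dataTable = pd.DataFrame(data, index=dir_bins[1:], columns=speed_bins[1:])

speeds = np.array([0.75, 3.0])
directions = np.array([350.0, 100.0])

# searchsorted returns the insertion index, so subtracting 1 gives the
# positional row/column of the bounding bin for every pair at once.
rows = np.searchsorted(dir_bins, directions) - 1
cols = np.searchsorted(speed_bins, speeds) - 1
values = dataTable.to_numpy()[rows, cols]

print(values[0] == dataTable.iat[7, 1])  # True: speed 0.75, direction 350
```

The same `rows`/`cols` arrays can be assigned straight back onto `metData` as a new column.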
<python><pandas><numpy><filter>
2023-03-10 20:37:15
1
15,194
jlconlin
75,700,499
13,742,058
How does WebDriverWait throw a NoSuchElementException if no element is found?
<p>My code does not throw NoSuchElementException even if no element found using WebDriverWait ..</p> <pre><code>import time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as ec from selenium.common.exceptions import NoSuchElementException, TimeoutException from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager options = webdriver.ChromeOptions() options.add_argument(&quot;start-maximized&quot;) s = Service(ChromeDriverManager().install()) driver = webdriver.Chrome(service=s, options=options) driver.get(r&quot;https://www.google.com/&quot;) invalid_xpath = &quot;//h2&quot; time.sleep(2) try: element = WebDriverWait(driver,20).until(ec.visibility_of_element_located((By.XPATH,invalid_xpath))) #ignored_exceptions=Nosuchelement does not work # element = driver.find_element(By.XPATH,invalid_xpath) except NoSuchElementException: print(&quot;No such element exception occurs&quot;) except TimeoutException: print(&quot;Timeout exception occurs&quot;) </code></pre> <p>The code element = driver.find_element(By.XPATH,invalid_xpath) works but I need to wait for the element to become visible as well.</p>
<python><selenium-webdriver><web-scraping><webdriverwait><nosuchelementexception>
2023-03-10 19:58:45
0
308
fardV
75,700,411
6,534,818
Python: str split on repeated instances in single string
<p>How can I split and remove repeated patterns from a string, such as?</p> <pre><code># sample s1 = '-c /home/test/pipeline/pipelines/myspace4/.cache/sometexthere/more --log /home/test1/pipeline2/pipelines1/myspace1/.cache/sometexthere/more --arg /home/test4/pipeline3/pipelines3/myspace3/.cache/sometexthere/more --newarg etc.' # expected expected = '-c sometexthere/more --log sometexthere/more --arg sometexthere/more --newarg etc.' # attempt only yields last value rather than all ''.join([ s for s in s1.split('/.cache/')[-1] ]) </code></pre>
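One way to express "drop everything up to and including `/.cache/` in each path token" is a single regular-expression substitution rather than `split` (a sketch, assuming the paths never contain whitespace):

```python
import re

s1 = ('-c /home/test/pipeline/pipelines/myspace4/.cache/sometexthere/more '
      '--log /home/test1/pipeline2/pipelines1/myspace1/.cache/sometexthere/more '
      '--arg /home/test4/pipeline3/pipelines3/myspace3/.cache/sometexthere/more '
      '--newarg etc.')

# For every non-whitespace run that ends in '/.cache/', delete that prefix.
cleaned = re.sub(r'\S*/\.cache/', '', s1)

print(cleaned)
# -c sometexthere/more --log sometexthere/more --arg sometexthere/more --newarg etc.
```

The original attempt keeps only the text after the last `/.cache/` because `split` breaks the whole string apart; `re.sub` instead edits each occurrence in place.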
<python>
2023-03-10 19:46:30
1
1,859
John Stud
75,700,342
417,896
Python get microseconds from time stamp into uint64
<p>The purpose of this function is to get the microseconds from a numpy timestamp elapsed since the unix epoch.</p> <p>Would this conversion eventually overflow because the data type is restricted to an int64 instead of a uint64?</p> <pre><code>dt = np.datetime64('2023-01-01T00:00:00') timestamp_microsec = int((dt - np.datetime64('1970-01-01T00:00:00')) / np.timedelta64(1, 'us')) </code></pre> <p>And is there a way to do it using numpy <code>u8</code> data type that won't have that problem?</p>
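As a quick sanity check (a sketch; note NumPy has no unsigned datetime representation, so a `u8` route would not help here anyway): casting through `datetime64[us]` yields the microsecond count directly, and a signed 64-bit microsecond counter has roughly 292,000 years of headroom, so overflow is not a practical concern:

```python
import numpy as np

dt = np.datetime64('2023-01-01T00:00:00')

# Microseconds since the Unix epoch, as a signed 64-bit integer.
micros = dt.astype('datetime64[us]').astype(np.int64)
print(micros)  # 1672531200000000

# Headroom of int64 microseconds, in Julian years from the epoch.
years_of_headroom = (2**63 - 1) / 1e6 / (365.25 * 24 * 3600)
print(int(years_of_headroom))  # roughly 292,000 years
```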
<python><numpy>
2023-03-10 19:36:58
0
17,480
BAR
75,700,322
1,028,133
When is d1==d2 not equivalent to d1.__eq__(d2)?
<p>According <a href="https://docs.python.org/3.8/reference/datamodel.html#basic-customization" rel="nofollow noreferrer">to the docs</a> (in Python 3.8):</p> <blockquote> <p>By default, <code>object</code> implements <code>__eq__()</code> by using <code>is</code>, returning <code>NotImplemented</code> in the case of a false comparison: <code>True if x is y else NotImplemented</code>.</p> </blockquote> <p>And also:</p> <blockquote> <p>The correspondence between operator symbols and method names is as follows: [...] <code>x==y</code> calls <code>x.__eq__(y)</code></p> </blockquote> <p>So I expect</p> <ol> <li><code>==</code> to be equivalent to <code>__eq__()</code> and</li> <li>a custom class without an explicitly defined <code>__eq__</code> to return <code>NotImplemented</code> when using <code>==</code> to compare two different instances of the class. Yet in the following, <code>==</code> comparison returns <code>False</code>, while <code>__eq__()</code> returns <code>NotImplemented</code>:</li> </ol> <pre><code>class Dummy(): def __init__(self, a): self.a = a d1 = Dummy(3) d2 = Dummy(3) d1 == d2 # False d1.__eq__(d2) # NotImplemented </code></pre> <p>Why?</p>
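The missing piece is that `x == y` is not a single call to `x.__eq__(y)`: when that call returns `NotImplemented`, Python tries the reflected `y.__eq__(x)`, and if both operands decline, it falls back to an identity (`is`) comparison, which is where the `False` comes from. A small demonstration:

```python
class Dummy:
    def __init__(self, a):
        self.a = a

d1 = Dummy(3)
d2 = Dummy(3)

# Both operands decline the comparison...
print(d1.__eq__(d2))  # NotImplemented
print(d2.__eq__(d1))  # NotImplemented

# ...so == falls back to identity instead of returning NotImplemented.
print(d1 == d2)  # False (different objects)
print(d1 == d1)  # True  (same object)
```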
<python><python-3.x>
2023-03-10 19:32:53
3
744
the.real.gruycho
75,700,147
6,676,101
How do you unpack a nested iterator/iterable as an argument to a variadic function?
<p>Consider the following snippet of code written in Python-style syntax:</p> <pre class="lang-python prettyprint-override"><code>class k: def __init__(self, *args): self._args = args def mimethod(self, attrname:str): return type(self)(getattr(arg, attrname) for arg in self._args) </code></pre> <p>Notice that <code>getattr(arg, attrname) for arg in self._args</code> returns an iterator.</p> <p>We are in danger of having the following two undesirable things happen:</p> <ul> <li>an iterator is assigned to <code>self._args[0]</code></li> <li><code>len(args) == 1</code>.</li> </ul> <p>How do you flatten a deeply nested iterator which is passed into a variadic function such that each string of length one becomes an element of args?</p> <p>A second example of passing an iterator into a variadic function is shown below:</p> <pre class="lang-python prettyprint-override"><code>import sys import io def variadic_function(*args, file_stream = sys.stdout): print(&quot;\n&quot;.join(str(arg) for arg in args), file=file_stream) strm = io.StringIO() # names = tuple(x.strip() for x in &quot;LUciA, SoFiA, mArIA, MARTInA, pAuLa, LuCaS, hUgO, mArTiN, dAnIeL, pAbLo&quot;.split(&quot;,&quot;)) names = ('LUciA', 'SoFiA', 'mArIA', 'MARTInA', 'pAuLa', 'LuCaS', 'hUgO', 'mArTiN', 'dAnIeL', 'pAbLo') variadic_function((name[0].upper()+ name[1:].lower() for name in names), file_stream=strm) print(strm.getvalue()) </code></pre>
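One common recipe (a sketch; whether single-character strings should be split further is a design choice, and here strings are kept atomic, since iterating a one-character string yields that same string forever) is a recursive generator combined with `*` unpacking at the call site:

```python
from collections.abc import Iterable

def flatten(items):
    """Yield the leaves of arbitrarily nested iterables, keeping strings whole."""
    for item in items:
        if isinstance(item, Iterable) and not isinstance(item, str):
            yield from flatten(item)
        else:
            yield item

def variadic(*args):
    return args

nested = [1, [2, (3, 4)], "ab"]

# The * operator spreads each flattened leaf into its own positional
# argument, so no single iterator ends up as args[0] with len(args) == 1.
result = variadic(*flatten(nested))
print(result)  # (1, 2, 3, 4, 'ab')
```

The same `*` unpacking fixes the second example: `variadic_function(*(name.capitalize() for name in names), file_stream=strm)` passes one argument per name instead of one generator.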
<python><iterable>
2023-03-10 19:11:33
2
4,700
Toothpick Anemone
75,700,124
9,105,621
What does {method 'poll' of 'select.poll'...} mean in cprofile output?
<p>I have a flask app that I'm trying to profile (I'm new to this). When I ran it, I had this result:</p> <pre><code>3327 function calls (3247 primitive calls) in 7.350 seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 7/1 0.000 0.000 7.350 7.350 {built-in method builtins.exec} 1 0.000 0.000 7.350 7.350 &lt;string&gt;:1(&lt;module&gt;) 1 0.000 0.000 7.350 7.350 vo_app_v2.py:1094(run_app) 1 0.000 0.000 7.350 7.350 app.py:1064(run) 1 0.000 0.000 7.349 7.349 serving.py:936(run_simple) 1 0.000 0.000 7.348 7.348 serving.py:739(serve_forever) 1 0.000 0.000 7.348 7.348 socketserver.py:215(serve_forever) 15 0.000 0.000 7.347 0.490 selectors.py:403(select) 15 7.347 0.490 7.347 0.490 {method 'poll' of 'select.poll' objects} </code></pre> <p>What does the last line mean? How do I trace it to the line in my code?</p>
<python><cprofile>
2023-03-10 19:09:19
1
556
Mike Mann
75,700,004
20,358
How to check in Python if the value in a key value pair of a dict is a list or a string
<p>I have an object like this</p> <pre><code>ds_obj = { &quot;Labels&quot; : [ { &quot;Label&quot; : &quot;pop&quot;, &quot;People&quot; : { &quot;Composer&quot; : &quot;John&quot; }, }, { &quot;Label&quot; : &quot;classical&quot;, &quot;People&quot; : { &quot;Composer&quot; : [ &quot;Johann&quot;, &quot;Jason&quot; ] }, }, { &quot;Label&quot; : &quot;classical-2&quot;, &quot;People&quot; : { &quot;Composer&quot; : [ &quot;Jacob&quot; ] }, } ] } </code></pre> <p>I need to iterate thru each of the labels and perform some actions on each of the composers, but first I need to find out if each label has a single composer or multiple. To do this I am running the following code to find out if the key <code>Composer</code> has an array or a string. If it has an array, I will run a nested loop thru it and work with each composer. The code below however is calling out every value as an array. What is wrong with my code? Is there a more efficient way to do this?</p> <pre><code> for label in ds_obj ['Labels']: if hasattr(label['People']['Composer'], &quot;__len__&quot;): print('IS ARRAY') else: print('NOT Array') </code></pre>
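The reason every value reports as an array is that Python strings also define `__len__`, so `hasattr(x, "__len__")` is `True` in both cases. Testing the type explicitly is more direct; a sketch over a trimmed copy of the data:

```python
ds_obj = {
    "Labels": [
        {"Label": "pop", "People": {"Composer": "John"}},
        {"Label": "classical", "People": {"Composer": ["Johann", "Jason"]}},
        {"Label": "classical-2", "People": {"Composer": ["Jacob"]}},
    ]
}

composers = []
for label in ds_obj["Labels"]:
    value = label["People"]["Composer"]
    if isinstance(value, list):   # a string would also pass a __len__ check
        composers.extend(value)
    else:
        composers.append(value)

print(composers)  # ['John', 'Johann', 'Jason', 'Jacob']
```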
<python><dictionary>
2023-03-10 18:55:42
2
14,834
user20358
75,699,994
20,266,647
Sklearn-classifier, issue with freeze (pod pending in K8s)
<p>I got a freeze of the Sklearn-classifier in MLRun (the job is still running after 5, 10, 20, ... minutes); see the log output:</p> <pre><code>2023-02-21 13:50:15,853 [info] starting run training uid=e8e66defd91043dda62ae8b6795c74ea DB=http://mlrun-api:8080 2023-02-21 13:50:16,136 [info] Job is running in the background, pod: training-tgplm </code></pre> <p>See the freeze/pending issue in the Web UI:</p> <p><a href="https://i.sstatic.net/6IhBB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6IhBB.png" alt="enter image description here" /></a></p> <p>I used this source code, and <code>classifier_fn.run(train_task, local=False)</code> triggers the freeze:</p> <pre><code># Import the Sklearn classifier function from the function hub classifier_fn = mlrun.import_function('hub://sklearn-classifier') # Prepare the parameters list for the training function training_params = {&quot;model_name&quot;: ['risk_xgboost'], &quot;model_pkg_class&quot;: ['sklearn.ensemble.GradientBoostingClassifier']} # Define the training task, including the feature vector, label and hyperparams definitions train_task = mlrun.new_task('training', inputs={'dataset': transactions_fv.uri}, params={'label_column': 'n4_pd30'} ) train_task.with_hyper_params(training_params, strategy='list', selector='max.accuracy') # Specify the cluster image classifier_fn.spec.image = 'mlrun/mlrun' # Run training classifier_fn.run(train_task, local=False) </code></pre> <p>Did you encounter and solve the same issue?</p>
<python><kubernetes><scikit-learn><classification><mlrun>
2023-03-10 18:54:03
1
1,390
JIST
75,699,951
4,348,400
Finding where Pandera schemas are different
<p>I have two Pandas data frames <code>df1</code> and <code>df2</code> which <em>should</em> have the same inferred Pandera schema. Unfortunately they do not, because when I run <code>pa.infer_schema(df1) == pa.infer_schema(df2)</code> I get a return of <code>False</code>. The printout (which should use <code>__repr__</code>) of these schemas looks identical under visual inspection, so I suspect the difference has something to do with different instances. But I am unsure about that.</p> <p>How can I get a &quot;diff&quot; between Pandera schemas to help me more quickly understand why they are not equal?</p>
<python><pandas><validation><schema><pandera>
2023-03-10 18:49:16
1
1,394
Galen
75,699,877
6,676,101
What is an example of a class decorator which overrides `__getattribute__` or `__getattr__`?
<p>Suppose that we want to overload the dot-operator, also known as <code>__getattribute__</code>:</p> <pre class="lang-python prettyprint-override"><code>obj.insert() obj.__getattribute__(&quot;insert&quot;)() height = rectangle.height height = rectangle.__getattribute__(&quot;height&quot;) </code></pre> <p>Is there some way to write a class decorator which overrides <code>__getattribute__</code> or <code>__getattr__</code>?</p> <p>How do you override <code>__getattribute__</code> or <code>__getattr__</code> in the interface, but not the implementation?</p> <pre class="lang-python prettyprint-override"><code>import itertools @decorate class MultitudeOfThings: def __init__(self, *args, **kwargs): if len(kwargs) &gt; 0: raise NotImplementedError() self._args = args self._kwargs = kwargs def __copy__(self, *args, **kwargs): if len(kwargs) &gt; 0: raise NotImplementedError() left = self._args[0] leftovers = self._args[1:] return type(self)(itertools.chain(iter([left]), iter(leftovers))) </code></pre> <p>So, we have...</p> <pre class="lang-python prettyprint-override"><code>MultitudeOfThings = decorate(MultitudeOfThings) </code></pre> <p>Now, the new <code>MultitudeOfThings</code> should have a custom-made method named <code>__getattribute__</code> or named <code>__getattr__</code>.</p> <p>However, the line of code <code>self._args = args</code> inside of the old <code>MultitudeOfThings.__init__</code> should use the old <code>MultitudeOfThings.__getattribute__</code>, not the new <code>MultitudeOfThings.__getattribute__</code>.</p> <p>Below is an example of the overloaded (overridden?) dot operator:</p> <pre class="lang-python prettyprint-override"><code>def __getattribute__(self, attrname:str): return type(self)(getattr(arg, attrname) for arg in self._args) </code></pre>
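A minimal sketch of such a decorator (names are illustrative; the broadcast returns a plain tuple here for brevity where the question returns `type(self)(...)`). Injecting `__getattr__` rather than `__getattribute__` is what keeps the override "in the interface but not the implementation": `__getattr__` only fires after normal lookup fails, so code inside `__init__` such as `self._args = args` still uses ordinary attribute access:

```python
def decorate(cls):
    def __getattr__(self, attrname):
        # Called only when normal attribute lookup fails, so internal
        # attributes such as self._args are untouched by the override.
        return tuple(getattr(arg, attrname) for arg in self._args)

    cls.__getattr__ = __getattr__
    return cls

@decorate
class MultitudeOfThings:
    def __init__(self, *args):
        self._args = args  # plain lookup: __getattr__ is not involved

class Rect:
    def __init__(self, height):
        self.height = height

things = MultitudeOfThings(Rect(1), Rect(2))
print(things._args[0].height)  # 1: existing attribute, normal lookup
print(things.height)           # (1, 2): missing attribute, broadcast
```

Overriding `__getattribute__` instead would intercept every lookup, including `self._args`, and needs `object.__getattribute__(self, "_args")` internally to avoid infinite recursion.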
<python><python-3.x><attributes><operator-overloading>
2023-03-10 18:38:46
0
4,700
Toothpick Anemone
75,699,856
14,269,252
DuplicateWidgetID: There are multiple identical st.checkbox widgets with the same generated key
<p>I am building a Streamlit app. I defined two functions, a sidebar, tabs, etc. The output of the first function is simply a data frame, and the output of the second function is a chart. I get the error below, which seems to be caused by the second function.</p> <pre><code> def func(): option_1 = st.sidebar.checkbox('x1', value=True) option_2 = st.sidebar.checkbox('x2') option_3 = st.sidebar.checkbox('x3') dfall = read() dfs = [] if option_1 : dfs.append(dfall[0]) if option_2 : dfs.append(dfall[1]) if option_3 : dfs.append(dfall[2]) if len(dfs) &gt; 1: df = pd.concat(dfs) elif len(dfs) == 1: df = dfs[0] do something on df.. return df def chart(): df = func() plot the chart return with tab1: intro() with tab2: func() with tab3: chart() </code></pre> <p>The error:</p> <p>DuplicateWidgetID: There are multiple identical st.checkbox widgets with the same generated key.</p> <p>When a widget is created, it's assigned an internal key based on its structure. Multiple widgets with an identical structure will result in the same internal key, which causes this error.</p> <p>To fix this error, please pass a unique key argument to st.checkbox.</p>
<python><streamlit>
2023-03-10 18:35:53
1
450
user14269252