QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (15 to 150 chars)
QuestionBody: string (40 to 40.3k chars)
Tags: string (8 to 101 chars)
CreationDate: string date (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (3 to 30 chars, nullable)
75,941,725
10,176,057
How to find transparent background location in image?
<p>I have a HTML code where I will place text in the transparent background area of an image. Since I don't know how to get the exact coordinate on the fly using JS or HTML, the only solution I was able to start was using the <code>pyautogui</code> Python library. But I was also not being able to find the correct transparent area coordinate.</p> <p>PSD and PNG files shared from my OneDrive: <a href="https://1drv.ms/f/s!AgnBdF_BgESzjfg33R9YFCAEcbIKhA?e=gQODRC" rel="nofollow noreferrer">Pictures</a></p> <p>I will pass the coordinate to this style:</p> <pre><code>.code-container { position: absolute; top: 250px; left: 500px; transform: translate(-50%, -50%); } </code></pre> <p>I tried with Python, but can be another solution as well. My Python code:</p> <pre><code>import pyautogui import os import pathlib from PIL import Image # Load your PNG file and cut out the desired image Img = os.path.join(os.path.sep, pathlib.Path(__file__).parent.resolve(), 'test.png') image = Image.open(Img) image.show() alpha_channel = image.split()[-1] # get the alpha channel transparent_pixels = [] for x in range(image.width): for y in range(image.height): if alpha_channel.getpixel((x, y)) == 0: transparent_pixels.append((x, y)) pyautogui.moveTo(x, y) print(transparent_pixels) </code></pre> <p>The output was not in the transparent area, and I don't know how or why.</p> <p>So my question is, how can I find that <strong>transparent area coordinate</strong> to pass to my HTML code no matter the app opens the image or the Windows theme?</p> <p>On Photoshop I can see the background:</p> <p><a href="https://i.sstatic.net/rIkAF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rIkAF.png" alt="Photoshop" /></a></p> <p>When I open with Paint I can see the white area:</p> <p><a href="https://i.sstatic.net/v4qkj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v4qkj.png" alt="Paint" /></a></p> <p>But when I open the image using the <code>PIL</code> Python library, it opens 
via the &quot;Photos&quot; Windows app, and there the color is black in this case:</p> <p><a href="https://i.sstatic.net/48Hj1.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/48Hj1.jpg" alt="Windows app" /></a></p> <p>And the color found using the <code>pyautogui.displayMousePosition()</code> function is: <code>X: 966 Y: 634 RGB: ( 28, 32, 44)</code></p> <p><strong>----- UPDATE -----</strong></p> <p>What I need is to place the code in the middle of this transparent background area, but this is what I am getting:</p> <p><a href="https://i.sstatic.net/bsbGn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bsbGn.png" alt="What I need" /></a></p>
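One way around the viewer-dependent colors (a sketch, not the asker's code): stay entirely in image coordinates instead of reading the screen with `pyautogui`, since the on-screen pixel values depend on which app renders the image and on the Windows theme. The helper below scans an alpha channel (e.g. rows built from Pillow's `image.split()[-1]`) and returns the centre of the transparent region's bounding box; that `(x, y)` pair, scaled to the rendered size, is what the CSS `top`/`left` values should be based on.

```python
def transparent_center(alpha_rows, threshold=0):
    """Centre (x, y) of the bounding box of fully transparent pixels.

    `alpha_rows` is the alpha channel as rows of values, e.g. with Pillow:
      alpha = image.split()[-1]
      rows = [[alpha.getpixel((x, y)) for x in range(image.width)]
              for y in range(image.height)]
    """
    xs, ys = [], []
    for y, row in enumerate(alpha_rows):
        for x, a in enumerate(row):
            if a <= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # image has no transparent pixels
    return ((min(xs) + max(xs)) // 2, (min(ys) + max(ys)) // 2)
```

Because the result is in image space, it stays the same no matter which app opens the file; only the scale factor between the source image and the rendered `<img>` needs to be applied.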
<javascript><python><html><image><pyautogui>
2023-04-05 16:07:09
1
649
Guilherme Matheus
75,941,716
10,957,844
Selenium Proxy Error when initializing webdriver
<p>When using Selenium for work behind an enterprise firewall, it's necessary to connect to a proxy to access the internet. However, Selenium returns a proxy error even when just initializing the webdriver.</p> <pre><code>from selenium import webdriver proxy='x.x.x.x:xxx' chrome_options = webdriver.ChromeOptions() chrome_options.add_argument(f'--proxy-server={proxy}') driver = webdriver.Chrome(options=chrome_options) </code></pre> <p>I get a <code>401</code> Unauthorized error even before visiting any webpage.</p>
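A `401` before any page load usually means the proxy itself demands credentials, which `--proxy-server` cannot carry for Chrome. One common workaround (a sketch; host, port and credentials are placeholders) is to package a tiny Chrome extension that fixes the proxy and answers its auth challenge, then load it with `chrome_options.add_extension(path)`. This uses Manifest V2, which current Chrome versions are phasing out; `selenium-wire` is another common route.

```python
import io
import json
import zipfile

def build_proxy_auth_extension(host, port, user, password):
    """Zip a minimal MV2 Chrome extension that fixes the proxy server and
    answers its Basic-auth challenge. All arguments are placeholders."""
    manifest = {
        "name": "Proxy Auth",
        "version": "1.0.0",
        "manifest_version": 2,
        "permissions": ["proxy", "webRequest", "webRequestBlocking", "<all_urls>"],
        "background": {"scripts": ["background.js"]},
    }
    background = """
var config = {
  mode: "fixed_servers",
  rules: { singleProxy: { scheme: "http", host: "%s", port: %d } }
};
chrome.proxy.settings.set({ value: config, scope: "regular" }, function () {});
chrome.webRequest.onAuthRequired.addListener(
  function (details) {
    return { authCredentials: { username: "%s", password: "%s" } };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
""" % (host, port, user, password)
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("manifest.json", json.dumps(manifest))
        zf.writestr("background.js", background)
    return buf.getvalue()

# add_extension() wants a file path, so write the bytes out first, e.g.:
# open("proxy_auth.zip", "wb").write(
#     build_proxy_auth_extension("x.x.x.x", 3128, "user", "pass"))
```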
<python><selenium-webdriver>
2023-04-05 16:06:16
1
3,457
Tim
75,941,644
7,626,198
How to read QVector with QDataStream using PyQt5
<p>I have a QDataStream:</p> <pre><code>from PyQt5 import QtCore infile = QtCore.QFile(path) if not infile.open(QtCore.QIODevice.ReadOnly): raise IOError(infile.errorString()) stream = QtCore.QDataStream(infile) </code></pre> <p>And I need to read a QVector. I need something like this:</p> <pre><code>var = QtCore.QVector() stream &gt;&gt; var </code></pre> <p>But it seems that the QVector class does not exist in PyQt5. Any ideas?</p>
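PyQt5 maps Qt's container classes to plain Python types, so there is no `QtCore.QVector` to stream into; instead the serialized form can be read element by element. A `QVector&lt;qint32&gt;` is written as a `quint32` element count followed by the elements, big-endian by default (assuming the stream version matches the writer), so with `QDataStream` that is `stream.readUInt32()` followed by a loop of `stream.readInt32()`. The same layout can be sketched with `struct` alone:

```python
import struct

def read_qvector_int32(data: bytes, offset: int = 0):
    """Parse a QDataStream-serialized QVector<qint32>: a big-endian
    uint32 count, then `count` big-endian int32 values."""
    (count,) = struct.unpack_from(">I", data, offset)
    values = struct.unpack_from(">%di" % count, data, offset + 4)
    return list(values)
```

Other element types change only the per-element read (`readDouble`, `readQString`, ...), and the writer's `setVersion` must be mirrored on the reader for types whose encoding varies by version.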
<python><pyqt><pyqt5><qvector><qdatastream>
2023-04-05 15:59:47
1
442
Juan
75,941,480
2,607,331
Writing large objects to MongoDB's GridFS & extracting them in Python
<p>Context - On daily basis application log files are generated for external REST calls. I am trying to parse the log files and extract information related to external REST call. The information extracted are StartTime of Rest Call, EndTime when response was received, payloads etc. This information is collected and collated for a particular session in a waterfall diagram so as to visualize how much time is spent in each call.</p> <p>Problem - For one of the log files, the collated information for a particular session is more than 16MB and hence I have to use GridFS. The REST call information is collected in a custom object. I serialised the object and uploaded to GridFS. On Jupyter Notebook, I am extracting the data, however the data has some additional characters. In the below example, I have converted the object to a string (idea was to use json.loads in Python), but when I extract in Python, I get the following.</p> <p>Extraction:</p> <pre class="lang-py prettyprint-override"><code>import javaobj.v2 as javaobj pObj = javaobj.loads(fs.get(record['calls']).read()) pObj.dump() </code></pre> <p>The output of dump instruction is:</p> <pre class="lang-none prettyprint-override"><code>'[String 7e0000: \'[{&quot;tokenId&quot;:&quot;a6fc52b3-0c60-4332-b866-09f3fe8b243f&quot;,&quot;responseTime&quot;:540.2587,&quot;requestPayload&quot;:null,&quot;responsePayload&quot;:&quot;{\\\\&quot;Identifiers\\\\&quot;:[{\\\\&quot;Name\\\\&quot;:\\\\&quot;XXXX MANAGEMENT LLC-CISA\\\\&quot;,\\\\&quot;Number\\\\&quot;:\\\\&quot;0012345\\\\&quot;,\\\\&quot;SubNumber\\\\&quot;:\\\\&quot;00099\\\\&quot;,\\\\&quot;Sourceode\\\\&quot;:null,\\\\&quot;SourceGroups\\\\&quot;:null,\\\\&quot;IsPrimary\\\\&quot;:false}],\\\\&quot;serviceConsumerId\\\\&quot;:\\\\&quot;YYY\\\\&quot;,\\\\&quot;serviceConsumerSystemId\\\\&quot;:\\\\&quot;001\\\\&quot;,\\\\&quot;service .... </code></pre> <p>The string has too many slash as escape character, then it starts with text &quot;String 7e0000&quot;. 
Obviously json.loads in Python will fail.</p> <p>Below is the code that I used to serialize the object and store:</p> <p>Method to convert the custom object to String</p> <pre class="lang-java prettyprint-override"><code>public static String convertToString(ArrayList&lt;LogStructure&gt; logs){ StringBuilder sBuilder = new StringBuilder(); sBuilder.append(&quot;[&quot;); for (LogStructure log: logs){ sBuilder.append(getString(log)); sBuilder.append(&quot;,&quot;); } return sBuilder.toString(); } public static String getString(LogStructure log){ StringBuilder sBuilder = new StringBuilder(); sBuilder.append(&quot;{logLevel:&quot;); sBuilder.append(log.logLevel); sBuilder.append(&quot;,logInjestionDateTime:&quot;); sBuilder.append(log.logInjestionDateTime); sBuilder.append(&quot;,serverName:&quot;); sBuilder.append(log.serverName); sBuilder.append(&quot;,serviceName:&quot;); sBuilder.append(log.serviceName); sBuilder.append(&quot;,serviceEndPoint:&quot;); sBuilder.append(log.serviceEndPoint); sBuilder.append(&quot;,startTime:&quot;); sBuilder.append(log.startTime); sBuilder.append(&quot;,endTime:&quot;); sBuilder.append(log.endTime); sBuilder.append(&quot;,responseTime:&quot;); sBuilder.append(log.responseTime); sBuilder.append(&quot;,message:&quot;); sBuilder.append(log.message); sBuilder.append(&quot;,endTime:&quot;); sBuilder.append(log.endTime); sBuilder.append(&quot;,requestId:&quot;); sBuilder.append(log.requestId); sBuilder.append(&quot;,tokenId:&quot;); sBuilder.append(log.tokenId); sBuilder.append(&quot;,userId:&quot;); sBuilder.append(log.userId); sBuilder.append(&quot;,requestPayload:&quot;); sBuilder.append(log.requestPayload); sBuilder.append(&quot;,responsePayload:&quot;); sBuilder.append(log.responsePayload); sBuilder.append(&quot;,payloadType:&quot;); sBuilder.append(log.payloadType); sBuilder.append(&quot;}&quot;); return sBuilder.toString(); } </code></pre> <p>Upload to GridFs in App.java</p> <pre><code>JSONArray list = new JSONArray(); 
for(LogStructure temp: tempObject){ totalCalls++; JSONObject obj = new JSONObject(); //BasicDBObject basicDBObject = new BasicDBObject(); obj.put(&quot;logLevel&quot;, temp.logLevel); obj.put(&quot;logInjestionDateTime&quot;, temp.logInjestionDateTime); obj.put(&quot;serverName&quot;, temp.serverName); obj.put(&quot;serviceName&quot;, temp.serviceName); obj.put(&quot;serviceEndPoint&quot;, temp.serviceEndPoint); obj.put(&quot;startTime&quot;, temp.startTime); obj.put(&quot;endTime&quot;, temp.endTime); obj.put(&quot;responseTime&quot;, temp.responseTime); obj.put(&quot;message&quot;, temp.message); obj.put(&quot;requestId&quot;, temp.requestId); obj.put(&quot;tokenId&quot;, temp.tokenId); obj.put(&quot;requestPayload&quot;, temp.requestPayload); obj.put(&quot;responsePayload&quot;, temp.responsePayload); obj.put(&quot;payloadType&quot;, temp.payloadType); list.add(obj); } ByteArrayOutputStream baos = new ByteArrayOutputStream(); ObjectOutputStream oos; try { oos = new ObjectOutputStream(baos); oos.writeObject(list.toJSONString()); oos.flush(); oos.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } InputStream is = new ByteArrayInputStream(baos.toByteArray()); // ObjectOutputStream is = new ObjectOutputStream(null); GridFSBucket gridFSBucket = mongo.getGridFsBucket(mDb, &quot;Logs&quot;); ObjectId objectId = mongo.uploadToGrid(is, gridFSBucket); doc.append(&quot;totalCalls&quot;, totalCalls); doc.append(&quot;calls&quot;, objectId); mongo.insertDocument(mCollection, doc); </code></pre> <p>I referred to the below links to understand about GridFs and serialization of object.</p> <p><a href="https://mongodb.github.io/mongo-java-driver/3.5/driver/tutorials/gridfs/" rel="nofollow noreferrer">https://mongodb.github.io/mongo-java-driver/3.5/driver/tutorials/gridfs/</a> <a href="https://howtodoinjava.com/java/serialization/custom-serialization-readobject-writeobject/" rel="nofollow 
noreferrer">https://howtodoinjava.com/java/serialization/custom-serialization-readobject-writeobject/</a> <a href="https://www.baeldung.com/java-serialization" rel="nofollow noreferrer">https://www.baeldung.com/java-serialization</a></p> <p>Currently, the way Java stores the data in MongoDB and the way I extract it leave some Java-specific text in the result. For example, the first attempt was to serialize the ArrayList object as-is; however, when I extracted the data in Python, the string contained &quot;System.*.ArrayList&quot;. Then I tried storing the data as a JsonObject using the simple.json library. Default serialization produced the object's memory address. Then I used the toJSONString method and tried to save it as a string, expecting the final string to be of the pattern</p> <blockquote> <p>&quot;{&quot;key&quot;: [{&quot;key1&quot;:value},{&quot;key2&quot;:value}]}&quot;</p> </blockquote> <p>This would allow me to use the json.loads function and get a dictionary.</p>
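The extra escaping and the `String 7e0000` prefix come from `java.io.ObjectOutputStream`, which wraps the string in Java's own serialization framing — that is why `javaobj` is needed at all. If the producer writes the JSON text as raw UTF-8 bytes instead (an assumption about the Java side, e.g. `baos.write(list.toJSONString().getBytes(StandardCharsets.UTF_8))` in place of the `ObjectOutputStream`), the Python side reduces to a plain decode (sketch):

```python
import json

def load_calls(raw: bytes):
    """Decode a GridFS blob that was written as plain UTF-8 JSON text,
    e.g. raw = fs.get(record['calls']).read()."""
    return json.loads(raw.decode("utf-8"))
```

With that change no Java-specific wrapper ever reaches MongoDB, so `javaobj` and the escape-stripping both become unnecessary.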
<python><java><json><mongodb><gridfs>
2023-04-05 15:43:44
0
687
Tushar Saurabh
75,941,470
17,902,018
Count overlapping filaments in image
<p>I am trying to count filaments in binary images. They are segmentation masks for cell <a href="https://en.wikipedia.org/wiki/Filopodia" rel="nofollow noreferrer">filopodia</a>, and they are often overlapping. This overlapping makes it difficult to accurately count them.</p> <p>I am trying with this code that works with thinning/skeletonization and branch/endpoint counting, but with no luck:</p> <pre><code>def find_endpoints(img): (rows,cols) = np.nonzero(img) skel_coords = [] for (r,c) in zip(rows,cols): # Extract an 8-connected neighbourhood (col_neigh,row_neigh) = np.meshgrid(np.array([c-1,c,c+1]), np.array([r-1,r,r+1])) # Cast to int to index into image col_neigh = col_neigh.astype('int') row_neigh = row_neigh.astype('int') # Convert into a single 1D array and check for non-zero locations pix_neighbourhood = img[row_neigh,col_neigh].ravel() != 0 # If the number of non-zero locations equals 2, add this to our list of co-ordinates if np.sum(pix_neighbourhood) == 2: skel_coords.append((r,c)) return len(skel_coords) def num_filopodia_demerged(mask): thinned = thin(mask) img_thin_labeled = skimage.measure.label(thinned.astype(np.uint8), connectivity=2) stats_bbox = skimage.measure.regionprops(img_thin_labeled.astype(np.uint8)) filopodia_count = 0 for i in range(0, len(stats_bbox)): bbox = stats_bbox[i].bbox bbox_region = img_thin_labeled[bbox[0]:bbox[2], bbox[1]:bbox[3]] value_counts = Counter(bbox_region.flatten()).most_common() most_frequent_value = value_counts[1][0] if len(value_counts) &gt; 1 else value_counts[0][0] bbox_region = (bbox_region == most_frequent_value) * 1 bbox_region_padded = np.pad(bbox_region, pad_width=4, mode='constant', constant_values=0) n_endpoints = find_endpoints(bbox_region_padded) filopodia_count += (n_endpoints - 1) return max(filopodia_count, 0) </code></pre> <p>For example, take these patterns:</p> <p><a href="https://i.sstatic.net/3HJly.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3HJly.png" alt="image 
explaining branching patterns expected results" /></a></p> <p>How can I deal with this? Take into account that in the actual images these filaments are not this straight and may overlap heavily, including in &quot;knot&quot; patterns (the shape of a &quot;6&quot;). An example of a &quot;real&quot; mask:</p> <p><a href="https://i.sstatic.net/eHXNE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eHXNE.png" alt="Binary mask example" /></a></p>
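One heuristic worth stating explicitly (a sketch of the idea already used above, not a drop-in fix): after skeletonizing, count endpoint pixels — skeleton pixels with exactly one 8-connected neighbour — and estimate roughly `endpoints / 2` filaments per connected component, since each open filament contributes two endpoints, so an X-crossing of two filaments correctly counts as two. It still undercounts closed loops and "6"-shaped knots, which add no endpoints. A pure-Python endpoint counter over a binary skeleton:

```python
def count_endpoints(skel):
    """Count skeleton pixels with exactly one 8-connected neighbour.

    `skel` is a 2-D binary skeleton as rows of 0/1 values (e.g. the
    thinned mask converted with .tolist()).
    """
    h, w = len(skel), len(skel[0])

    def val(r, c):
        return 1 if 0 <= r < h and 0 <= c < w and skel[r][c] else 0

    endpoints = 0
    for r in range(h):
        for c in range(w):
            if skel[r][c]:
                # 3x3 neighbourhood sum minus the centre pixel itself
                neigh = sum(val(r + dr, c + dc)
                            for dr in (-1, 0, 1)
                            for dc in (-1, 0, 1)) - 1
                if neigh == 1:
                    endpoints += 1
    return endpoints
```

For a plus-shaped crossing this returns 4 endpoints, i.e. an estimate of 2 filaments, which matches the expected result in the first figure.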
<python><numpy><image-processing><computer-vision><scikit-image>
2023-04-05 15:42:16
1
2,128
rikyeah
75,941,464
21,420,742
How to locate and add text to rows that don't fit the conditions, in Python
<p>I have a dataset that uses numpy and pandas with employment history, and in my code I look for those employees that report to a vacant manager spot being held by another manager in the meantime. Right now it kind of works but needs to be refined. Here is the current data and code that I have.</p> <p>Code:</p> <pre><code>m = df.groupby(['Emp_ID', 'Reporting_Manager_ID'])['Manager_Name'].transform('first').ne(df['Manager_Name']) df.loc[m,'Manager_Name'] += ' (Vacant)' </code></pre> <p>The output for this is:</p> <pre><code>Emp_ID Reporting_Manager_ID Manager_Name 1 4012 John Wick 1 4012 John Wick 2 2812 Sarah Smith 2 2812 Sarah Smith 2 2812 John Wick (Vacant) 3 9236 Peter Doe 3 9236 John Wick (Vacant) 3 9236 John Wick 4 1293 John Wick 4 1293 John Wick </code></pre> <p>The original Manager ID for 'John Wick' is 4012 and should show as it does; however, the other Manager IDs that he takes over [2812, 9236, 1293] should all show (Vacant) on all lines.</p> <p>Desired Output:</p> <pre><code>Emp_ID Reporting_Manager_ID Manager_Name 1 4012 John Wick 1 4012 John Wick 2 2812 Sarah Smith 2 2812 Sarah Smith 2 2812 John Wick (Vacant) 3 9236 Peter Doe 3 9236 John Wick (Vacant) 3 9236 John Wick (Vacant) 4 1293 John Wick (Vacant) 4 1293 John Wick (Vacant) </code></pre> <p>The dataset has about 300+ Reporting Manager IDs and this happens multiple times. Any suggestions on how to fix this?</p>
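A sketch of one way to get the desired output. It assumes each manager's own `Reporting_Manager_ID` is the first one they appear under, which holds for the sample; with real data you would map names to their true IDs from an authoritative table instead. Compute each manager's own ID, then flag every row whose ID differs from it — this marks all rows of a taken-over ID, not just the mismatched ones.

```python
import pandas as pd

df = pd.DataFrame({
    "Emp_ID": [1, 1, 2, 2, 2, 3, 3, 3, 4, 4],
    "Reporting_Manager_ID": [4012, 4012, 2812, 2812, 2812,
                             9236, 9236, 9236, 1293, 1293],
    "Manager_Name": ["John Wick", "John Wick", "Sarah Smith", "Sarah Smith",
                     "John Wick", "Peter Doe", "John Wick", "John Wick",
                     "John Wick", "John Wick"],
})

# each manager's "own" ID: the first Reporting_Manager_ID they appear under
own_id = df.groupby("Manager_Name")["Reporting_Manager_ID"].transform("first")

# a row is a stand-in (vacant) assignment when its ID is not the manager's own
vacant = df["Reporting_Manager_ID"].ne(own_id)
df.loc[vacant, "Manager_Name"] += " (Vacant)"
```

On the sample this tags every John Wick row under 2812, 9236 and 1293 while leaving 4012, Sarah Smith and Peter Doe untouched.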
<python><python-3.x><pandas><dataframe><numpy>
2023-04-05 15:41:38
1
473
Coding_Nubie
75,941,259
1,682,525
Append all logs to a text file and send it to Slack via an AWS Lambda Python script
<p>I have a python script that collect the data from aws cloudwatch and send it to slack. Its working completely fine. Its sending logs to slack in the form of text in slack channel.</p> <p>I would want the same logs data to be stored in a text file and send it to slack. I am not into python so, please excuse my code.</p> <p>In the code below I am trying to create a file and write the txt file with the message variable to store text.</p> <p>Also remove <code>json=message</code> with <code>files=file_dict</code></p> <pre><code># json=message response = requests.post(slack_webhook_url,files=file_dict) </code></pre> <p><strong>Python</strong></p> <pre><code>import boto3 import requests import json def lambda_handler(event, context): print(event) build_id = event['detail']['build-id'].split(':')[-1] logs_client = boto3.client('logs') log_group_name = &quot;/aws/codebuild/&quot; + event['detail']['project-name'] log_stream_name = build_id response = logs_client.get_log_events( logGroupName=log_group_name, logStreamName=log_stream_name, startFromHead=True ) message = { 'text': 'Logs for build {0}:\n'.format(build_id) } for event in response['events']: message['text'] += event['message'] + '\n' slack_webhook_url = 'https://hooks.slack.com/services/123123123/123123123/123123123'; with open(&quot;/tmp/codebuild-logs.txt&quot;, &quot;w&quot;) as a_file: a_file.write(message) file_dict = {&quot;/tmp/codebuild-logs.txt&quot;: a_file} # json=message response = requests.post(slack_webhook_url,files=file_dict) if response.status_code == 200: return { 'statusCode': 200, 'body': 'Message is sent to Slack successfully.' } else: return { 'statusCode': response.status_code, 'body': 'Failed to send message to Slack.' 
} </code></pre> <p>After running this script I get a Lambda error.</p> <pre><code>Response { &quot;errorMessage&quot;: &quot;write() argument must be str, not dict&quot;, &quot;errorType&quot;: &quot;TypeError&quot;, &quot;stackTrace&quot;: [ &quot; File \&quot;/var/task/lambda_function.py\&quot;, line 36, in lambda_handler\n a_file.write(message)\n&quot; ] } </code></pre> <p>Any help will be greatly appreciated. Thanks in advance.</p>
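The `TypeError` is because `message` is a dict: the file write needs `message['text']`, and the `files=` dict passed to `requests` should map a form-field name (not a path) to an open binary file object — the handle must also still be open at post time, which it isn't once the `with` block closes. A sketch of corrected file handling (the field name `"file"` is an assumption); note too that Slack incoming webhooks accept only JSON messages, so an actual file upload needs the `files.upload` Web API with a bot token rather than the webhook URL.

```python
import os
import tempfile

def write_log_file(message: dict, path: str) -> dict:
    """Write message['text'] (not the whole dict) and return a
    requests-style files dict for a multipart upload."""
    with open(path, "w") as f:
        f.write(message["text"])
    # reopen in binary mode; the dict key is the multipart field name
    return {"file": open(path, "rb")}

log_path = os.path.join(tempfile.gettempdir(), "codebuild-logs.txt")
files = write_log_file({"text": "Logs for build 123:\nline one\n"}, log_path)
# requests.post(upload_url, files=files) would go here
content = files["file"].read()
files["file"].close()
```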
<python><amazon-web-services><aws-lambda><amazon-cloudwatch>
2023-04-05 15:23:14
1
3,887
Santosh
75,941,255
6,706,419
Selenium - Get Text Attribute of Element
<p>I am trying to scrape a table and store in list/dict with intent to add this data to a Pandas Dataframe. I am having issues getting the desired output in my dictionary though.</p> <pre><code># import currencies from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import pandas as pd webdriver.Chrome() url = 'https://www.worlddata.info/currencies/' driver = webdriver.Chrome() driver.get(url) table_identifier = 'table.std100.hover[data-linktitles=&quot;Details and exchange rates: _country_&quot;]' table = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.CSS_SELECTOR, table_identifier))) currency = [] for tr in table.find_elements(By.XPATH, '//tbody/tr'): iso = tr.find_elements(By.XPATH, './/td[1]') ctr = tr.find_elements(By.XPATH, './/td[3]') temp = { 'iso': iso, 'country': ctr } currency.append(temp) print(currency) </code></pre> <hr /> <p>When I run this code I get the following output (trimmed fro the first 2 tr tag contents). I'm not sure why the data in for tr is just blank values and the data in the second tr is giving me these very unexpected values in <code>iso</code> and <code>country</code> field.</p> <pre><code>[ {'iso': [], 'country': []}, {'iso': [&lt;selenium.webdriver.remote.webelement.WebElement (session=&quot;d60827d43f7fc7f7b91dec76d4ab861c&quot;, element=&quot;399a04e6-e4e5-468b-88bb-7307083aacd3&quot;)&gt;], 'country': [&lt;selenium.webdriver.remote.webelement.WebElement (session=&quot;d60827d43f7fc7f7b91dec76d4ab861c&quot;, element=&quot;37b7b061-1983-4703-ac39-b28d386f9a98&quot;)&gt;] }, </code></pre> <p>I tried to update the code to access the <code>.text</code> attribute (where <code>iso</code> &amp; <code>ctr</code> is defined as well as in the <code>temp</code> array. 
Each returns an error saying there is no <code>.text</code> attribute.</p> <pre><code>line 20, in &lt;module&gt; iso = tr.find_elements(By.XPATH, './/td[1]').text ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'list' object has no attribute 'text' </code></pre> <hr /> <p><strong>Question:</strong> How do I access the text elements that exist in the tbody/tr/td tags?</p> <p>I'm not sure the loop (or dict assignment) is correct, but for now I can't even access the text to start debugging the rest of the code. I have also added a screenshot with the relevant page HTML for context.</p> <p><a href="https://i.sstatic.net/cGG0q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cGG0q.png" alt="enter image description here" /></a></p>
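`find_elements` (plural) returns a *list* of WebElements, and the list has no `.text`; either switch to `find_element` (singular), e.g. `tr.find_element(By.XPATH, './/td[1]').text`, or index into the list and skip rows that match no cells — the empty `{'iso': [], 'country': []}` entry comes from a header row. A browser-free sketch of the corrected loop, with a minimal stand-in for WebElement so it runs without Selenium:

```python
class Element:
    """Minimal stand-in for a selenium WebElement: real ones expose .text."""
    def __init__(self, text: str):
        self.text = text

def extract_currencies(rows):
    """`rows` mimics per-row find_elements results: one list of matched
    <td> elements per <tr> (header rows match nothing and come back empty)."""
    currency = []
    for tds in rows:
        if len(tds) < 3:  # skip header/spacer rows with no data cells
            continue
        currency.append({
            "iso": tds[0].text,      # .//td[1] -> index 0
            "country": tds[2].text,  # .//td[3] -> index 2
        })
    return currency
```

In the real script the same shape applies: collect `tds = tr.find_elements(By.XPATH, './/td')` once per row, guard against empty rows, then read `.text` from individual elements.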
<python><html><selenium-webdriver>
2023-04-05 15:23:06
1
14,588
urdearboy
75,941,202
4,865,723
RegEx: Disallow strings in a sub capture group
<p>I have a working regex pattern, but now I found a new case that isn't caught, and my first attempted fix interferes with the previous cases.</p> <p>I want to extract three parts of a string. I'm using Python3 for this and also tested around on regex101.com.</p> <p>Here is an example input string:</p> <pre><code>foo:bar][bluered </code></pre> <p>The resulting substrings should be <code>foo</code>, <code>bar</code> and <code>bluered</code>. I can handle this with this regex pattern (Python3):</p> <pre><code>^(?:([^:]+):)?(.*?)(?:]\[(.*))?$ </code></pre> <p>Let me describe it in my own words:</p> <ul> <li>The first substring (<code>foo</code>) before the <code>:</code> is optional. That pattern also works with input <code>bar][bluered</code>.</li> <li>The second and third should be there and divided via <code>][</code>.</li> </ul> <h1>Problem</h1> <p>Now the problem case, where the first substring is missing and the third substring contains a <code>:</code>.</p> <pre><code>bar][blue:red </code></pre> <p>The pattern gives me resulting substrings <code>bar][blue</code> and <code>red</code> and the third is ignored. The <em>expected</em> result here is an empty first substring and then <code>bar</code> as second and <code>blue:red</code> as third.</p> <h1>My approach</h1> <p>I added a check for <code>][</code>:</p> <pre><code>^(?:]\[([^:]+):)?(.*?)(?:]\[(.*))?$ ^^^ </code></pre> <p>This does catch input like <code>bar][blue:red</code> the way I need it. But the <em>problem</em> now is that the <code>:</code> between the first and second substring is ignored:</p> <pre><code>foo:bar][blue:red </code></pre> <p>results in an empty first substring and <code>foo:bar</code> as second and <code>blue:red</code> as third. I do need <code>foo</code> as first, <code>bar</code> as second and <code>blue:red</code> as third.</p> <p>Now I'm a bit confused about how to proceed. 
I think somehow I need to ignore the <code>:</code> after the <code>][</code> in the third group, even when there is no <code>:</code> between the first and second group (before the <code>][</code>).</p> <p>I need to find a solution that works for both types of input strings:</p> <pre><code>bar][blue:red foo:bar][blue:red </code></pre> <p><em>EDIT</em>: I tried a lookahead, but it doesn't work. It should mean: match <code>:</code> only if it is in front of a <code>][</code>.</p> <pre><code>^(?:([^:]+(?=\]\[)):)?(.*?)(?:]\[(.*))?$ ^^^^^^^^^ </code></pre> <p>But maybe it is the right direction?</p>
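One pattern that handles both inputs (a sketch; it assumes `]` and `[` never occur inside the first two parts): disallow brackets in the first two groups, so the optional leading group can only consume a colon that appears *before* the `][` separator, and any colon after `][` is left to the third group.

```python
import re

# group 1: optional prefix before ":", no ":", "]" or "[" allowed inside
# group 2: middle part, no "]" or "[" allowed, so it stops at the separator
# group 3: everything after "][", colons included
PATTERN = re.compile(r"^(?:([^:\[\]]+):)?([^\[\]]*)(?:\]\[(.*))?$")

def split_parts(s: str):
    return PATTERN.match(s).groups()
```

Because group 1 cannot cross `]`, the engine gives up on it for `bar][blue:red` instead of swallowing `bar][blue` as it did with `([^:]+):`.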
<python><regex>
2023-04-05 15:16:41
3
12,450
buhtz
75,941,135
5,212,614
Trying to Minimize Wages/Workers Problem Using SciPy Optimize
<p>I just recently learned about PuLP and now I am trying to learn about SciPy Optimize. If I have a simple dataframe, like the one below, how can I optimize/minimize wages, workers, or both?</p> <pre><code>import pandas as pd import numpy as np import scipy.optimize as sco from io import StringIO with StringIO( '''Time Windows,Shift 1,Shift 2,Shift 3,Shift 4,Workers Required 6:00 - 9:00, 1, 0, 0, 1, 55.0 9:00 - 12:00, 1, 0, 0, 0, 46.0 12:00 - 15:00, 1, 1, 0, 0, 59.0 15:00 - 18:00, 0, 1, 0, 0, 23.0 18:00 - 21:00, 0, 1, 1, 0, 60.0 21:00 - 24:00, 0, 0, 1, 0, 38.0 24:00 - 3:00, 0, 0, 1, 1, 20.0 3:00 - 6:00, 0, 0, 0, 1, 30.0 Wage_Rate, 135, 140, 190, 188, 0.0''') as f: df = pd.read_csv(f, skipinitialspace=True, index_col='Time Windows') is_shift = df.columns.str.startswith('Shift') is_wage = df.index == 'Wage_Rate' shifts = df.loc[~is_wage, is_shift] wage_rate = df.loc[is_wage, is_shift].squeeze() workers_req = df.loc[~is_wage, 'Workers Required'] # create arrays min_wages = wage_rate.values min_workers = workers_req.values # add constraints constraints = (min_workers, min_wages) # min 0 workers and increase in 10% increments up to 100% # there is no 10% of a worker, but .1 could be multiplied by 'workers_req' bound = (0.0,.1) bounds = tuple(bound for item in range(min_workers)) result = sco.minimize(method='SLSQP', bounds=bounds, constraints=constraints) </code></pre> <p>Everything compiles, up to the last two lines of code. 
When I try to set 'bounds' to 'min_workers', I get this error.</p> <pre><code>Traceback (most recent call last): Cell In[101], line 1 bounds = tuple(bound for item in range(min_workers)) TypeError: only integer scalar arrays can be converted to a scalar index </code></pre> <p>So, I'm not even sure if this is the right approach, but it seems pretty close. Basically, I am trying to minimize wages/workers, and set this as the constraint. How can I make this work?</p>
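This problem is a linear program — minimize total wage cost subject to coverage constraints — so `scipy.optimize.linprog` is a better fit than `minimize` (which also requires an objective function and a starting point `x0`, and does not accept constraints as raw arrays; the `range(min_workers)` error is just `range` being handed an array instead of `len(min_workers)`). A sketch with the shift matrix and requirements taken from the question's table:

```python
import numpy as np
from scipy.optimize import linprog

# rows: time windows, cols: shifts 1-4 (1 = that shift covers the window)
A = np.array([
    [1, 0, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
])
required = np.array([55, 46, 59, 23, 60, 38, 20, 30])
wage = np.array([135, 140, 190, 188])  # cost per worker on each shift

# minimize wage @ x  subject to  A @ x >= required,  x >= 0
# (linprog takes A_ub @ x <= b_ub, hence the sign flip)
res = linprog(c=wage, A_ub=-A, b_ub=-required, bounds=(0, None))
print(res.x, res.fun)
```

On recent SciPy, passing `integrality=1` restricts the solution to whole workers; here the LP optimum is already integral.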
<python><python-3.x><optimization><scipy-optimize><scipy-optimize-minimize>
2023-04-05 15:11:29
0
20,492
ASH
75,941,121
848,277
Replicating pandas to_dict in polars
<p>In pandas I can do the following to get a key/value dictionary from <code>to_dict</code>:</p> <pre><code>d = [{'key':'a', 'value':1}, {'key':'b', 'value':2}, {'key':'b', 'value':1}] df = pd.DataFrame(d) df.groupby('key')['value'].sum().to_dict() Out[13]: {'a': 1, 'b': 3} </code></pre> <p>When I attempt to replicate this in Polars, I get the following:</p> <pre><code>df = pl.DataFrame(d) In [133]: df.groupby('key').agg(pl.col('value').sum()).to_dict(as_series=False) Out[133]: {'key': ['a', 'b'], 'value': [1, 3]} In [134]: df.groupby('key').agg(pl.col('value').sum()).transpose().to_dict(as_series=False) Out[134]: {'column_0': ['b', '3'], 'column_1': ['a', '1']} </code></pre> <p>Although it's technically correct, unpacking this into the dict that pandas returns will be slow for a large dataframe. How can I have Polars return the same key/value dict as the pandas snippet above?</p>
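The transpose is not needed: build the dict directly from the two columns of the aggregated frame. In recent Polars `dict(out.iter_rows())` does this in one step (API names vary across versions — `groupby` later became `group_by`), and given the `to_dict(as_series=False)` output above, a plain `zip` suffices:

```python
def columns_to_mapping(d: dict) -> dict:
    """Convert polars' to_dict(as_series=False) output,
    e.g. {'key': ['a', 'b'], 'value': [1, 3]}, into {'a': 1, 'b': 3}."""
    return dict(zip(d["key"], d["value"]))
```

Both routes are a single O(n) pass over the result columns, with no per-cell string conversion like the transposed version.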
<python><pandas><dataframe><python-polars>
2023-04-05 15:10:08
1
12,450
pyCthon
75,940,723
1,631,306
create masking using image boundaries
<p>I have an image where I have defined some boundaries, and I am trying to mask these different boundaries.</p> <p><a href="https://i.sstatic.net/EGc7B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EGc7B.png" alt="enter image description here" /></a></p> <p>So, I want to create masks for the green circle (eye) and the blue and yellow regions. How can I do that? I also have the x,y coordinates for these boundaries, but I am not able to build the masks.</p> <p>So far I have this:</p> <pre><code>img = cv2.imread(path + filenames[0], cv2.IMREAD_COLOR) RGB_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(RGB_img) #color boundaries [B, G, R] lower = np.array([0,90,0]) upper = np.array([50,235,100]) # threshold on green color thresh = cv2.inRange(RGB_img, lower, upper) plt.imshow(thresh) </code></pre> <p>and I get this: <a href="https://i.sstatic.net/oRigi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oRigi.png" alt="enter image description here" /></a></p>
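`cv2.inRange` already produces one binary mask per colour range, so the usual approach is simply one `lower`/`upper` pair per region (the blue and yellow ranges below are illustrative guesses, not measured from the image). A NumPy-only equivalent of `inRange`, to make explicit what it computes:

```python
import numpy as np

def in_range(rgb, lower, upper):
    """255 where every channel of `rgb` lies in [lower, upper], else 0 -
    the same contract as cv2.inRange on an RGB image."""
    lower = np.asarray(lower, dtype=rgb.dtype)
    upper = np.asarray(upper, dtype=rgb.dtype)
    inside = np.all((rgb >= lower) & (rgb <= upper), axis=-1)
    return inside.astype(np.uint8) * 255

# illustrative per-region ranges (RGB order) - tune against the real image
ranges = {
    "green": ([0, 90, 0], [50, 235, 100]),
    "blue": ([0, 0, 90], [100, 100, 255]),
    "yellow": ([180, 180, 0], [255, 255, 120]),
}
```

Looping `for name, (lo, hi) in ranges.items(): masks[name] = in_range(RGB_img, lo, hi)` yields one mask per region; alternatively, with the known x,y boundary coordinates, `cv2.fillPoly` on a zero image builds the mask geometrically without any colour thresholding.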
<python><opencv>
2023-04-05 14:32:09
2
4,501
user1631306
75,940,698
4,877,683
Unable to change the default download directory for Google Chrome while downloading using Selenium - Python
<p>I'm trying to download a file from a URL using Selenium in Python. I'm trying to change the default download directory using <em>Options</em></p> <p>The script runs without any error message, but the download directory doesn't get updated. I'm using a jupyter notebook on Windows. This is what my code looks like:</p> <pre><code>options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('useAutomationExtension', False) options.add_experimental_option(&quot;prefs&quot;, { &quot;download.default_directory&quot;: r&quot;C:\Users\danish.zahid\Downloads\pubsmed\\&quot;, &quot;download.prompt_for_download&quot;: False, &quot;download.directory_upgrade&quot;: True, }) # specify the title of the study you want to download study_title = &quot;Pan-cancer single-cell landscape of tumor-infiltrating T cells&quot; # start the browser and navigate to the PubMed website s = Service(r&quot;C:\Users\danish.zahid\Downloads\chromedriver_win32\chromedriver.exe&quot;) browser = webdriver.Chrome(service=s, options=options) browser.get(&quot;https://pubmed.ncbi.nlm.nih.gov/&quot;) # find the search box, enter the study title, and submit the form search_box = browser.find_element(By.ID, &quot;id_term&quot;) search_box.send_keys(study_title) search_box.send_keys(Keys.RETURN) # find the save button to and click it save_button = browser.find_element(By.XPATH, &quot;//*[@id='save-results-panel-trigger']&quot;) save_button.click() # Select Pubmed from drop down dropdownlist = browser.find_element(By.ID, &quot;save-action-format&quot;) select_dropdown = Select(dropdownlist) select_dropdown.select_by_visible_text(&quot;PubMed&quot;) download_file = browser.find_element(By.XPATH, &quot;//*[@id='save-action-panel-form']/div[2]/button[1]&quot;) download_file.click() browser.quit() </code></pre> <p>Went through a lot of possible solutions online but cant seem to 
make it work. What am I doing wrong here?</p>
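Two things commonly bite here (hedged guesses, since the script itself runs without error): the prefs path ends with a trailing `\\`, which Chrome can reject, and Chrome silently falls back to the default folder if the target directory does not exist. Normalizing the path and creating the folder first usually fixes it; also note the download may still be in flight when `browser.quit()` runs, so wait for the file before quitting. A sketch of building the prefs (a temp-dir location stands in for the real Downloads path):

```python
import os
import tempfile

# stands in for r"C:\Users\danish.zahid\Downloads\pubsmed"
download_dir = os.path.join(tempfile.gettempdir(), "pubsmed")
os.makedirs(download_dir, exist_ok=True)  # Chrome ignores prefs pointing at a missing folder

prefs = {
    "download.default_directory": os.path.normpath(download_dir),  # no trailing separator
    "download.prompt_for_download": False,
    "download.directory_upgrade": True,
}
# options.add_experimental_option("prefs", prefs)
```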
<python><css><selenium-webdriver><web-scraping>
2023-04-05 14:30:18
2
703
Danish Zahid Malik
75,940,697
2,893,053
Importing uninstalled Python package inside bash script doesn't fail but its __file__ is None
<p>I am facing a weird situation I've never encountered before.</p> <p>I have a bash script that contains the following line:</p> <pre><code>python -c &quot;import foo; print(foo.__file__); import pdb; pdb.set_trace()&quot; </code></pre> <p>When I run this bash script directly, I get None for <code>foo.__file__</code>, and the package points to some &quot;_frozen_importlib_external._NamespaceLoader object&quot;, which is confusing to me:</p> <pre><code>None --Return-- &gt; &lt;string&gt;(1)&lt;module&gt;()-&gt;None (Pdb) foo &lt;module 'foo' (&lt;_frozen_importlib_external._NamespaceLoader object at 0x7fbf30923790&gt;)&gt; (Pdb) foo.__file__ </code></pre> <p>But when I run this command in the terminal directly (not through a bash script), I get the following instead:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'foo' </code></pre> <p>What could account for such an inconsistency? (I am expecting the terminal output, that is, reporting ModuleNotFoundError, not silently proceeding with a None <code>__file__</code> attribute.)</p>
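The likely cause: the directory the bash script runs from (or something else on its `sys.path`) contains a bare `foo/` folder without an `__init__.py`, e.g. a build artifact. Since PEP 420, Python imports such a directory as an implicit *namespace package* — the import succeeds, `__file__` is `None`, and the loader is `_NamespaceLoader` — whereas in the other terminal's working directory no `foo/` exists, so the import fails as expected. A reproducible sketch:

```python
import importlib
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as base:
    os.mkdir(os.path.join(base, "foo"))  # bare folder, no __init__.py
    sys.path.insert(0, base)
    try:
        foo = importlib.import_module("foo")
        file_attr = foo.__file__      # None: namespace package, not a module file
        loader_name = type(foo.__loader__).__name__
    finally:
        sys.path.remove(base)
        sys.modules.pop("foo", None)

print(file_attr, loader_name)
```

Comparing `os.getcwd()` and `sys.path` in both environments should expose the stray directory.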
<python><bash>
2023-04-05 14:30:01
0
1,528
zkytony
75,940,559
6,357,916
Logging everything to file and only DEBUG messages to console in python
<p>I want logger to log everything to file but only debug messages to console. <a href="https://stackoverflow.com/a/17109663/6357916">This answer</a> suggests creating custom handler. I tried following:</p> <pre><code>import logging import sys class DebugConsoleHandler(logging.StreamHandler): def __init__(self): super().__init__(stream = sys.stdout) def emit(self, record): if not record.levelno == logging.DEBUG: return super().emit(record) class Module1(): def __init__(self) -&gt; None: self.logFilePath = 'logging_exp.txt' self.configureLogger() def configureLogger(self): self.logger = logging.getLogger(__name__) # log time and message to file formatter1 = logging.Formatter(&quot;%(asctime)s %(levelname)s %(message)s&quot;) file_hander = logging.FileHandler(self.logFilePath) file_hander.setLevel(logging.INFO) file_hander.setFormatter(formatter1) # log only message to console formatter2 = logging.Formatter(&quot;%(levelname)s %(message)s&quot;) console_handler = DebugConsoleHandler() console_handler.setFormatter(formatter2) self.logger.addHandler(file_hander) self.logger.addHandler(console_handler) def testLogging(self): a = 5 self.logger.info('Starting up ...') self.logger.debug('a:%s', a) m = Module1() m.testLogging() </code></pre> <p>This only created file <code>logging_expt.txt</code>, but did not log anything to anywhere, neither to file nor to console. What I am missing?</p>
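The records never reach either handler because the logger's own level was never set: a fresh logger sits at `NOTSET` and inherits the root logger's default of `WARNING`, so `info()` and `debug()` calls are dropped before the handler levels are even consulted (which is why the file gets created by the `FileHandler` but stays empty). Adding `self.logger.setLevel(logging.DEBUG)` in `configureLogger` fixes it. A minimal sketch, writing the "console" to a `StringIO` so the behaviour is checkable without a terminal:

```python
import io
import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)  # the missing line: without it the effective
                                # level is the root's WARNING default

class DebugOnlyHandler(logging.StreamHandler):
    """Emit only DEBUG records, like the question's DebugConsoleHandler."""
    def emit(self, record):
        if record.levelno == logging.DEBUG:
            super().emit(record)

buf = io.StringIO()             # stands in for sys.stdout here
handler = DebugOnlyHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("Starting up ...")  # passes the logger, filtered by the handler
logger.debug("a:%s", 5)         # passes both and reaches the console
```

The same principle applies to the file side: the logger-level gate runs first, then each handler's own level (`INFO` for the file handler) filters per destination.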
<python><python-logging>
2023-04-05 14:17:19
1
3,029
MsA
75,940,472
353,337
Preserve custom dtype when creating NumPy array
<p>For compatibility with legacy software, I need to create a NumPy array with dtype</p> <pre class="lang-py prettyprint-override"><code>np.dtype(&quot;(2,)u8&quot;) </code></pre> <p>(i.e., an array of tuples of two <code>u8</code> integers each). When creating such an array, NumPy automatically converts the array to an array of shape <code>(4,2)</code> with dtype <code>u8</code> though:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np a = np.empty(4, dtype=np.dtype(&quot;(2,)u8&quot;)) print(a.shape, a.dtype) </code></pre> <pre><code>(4, 2) uint64 </code></pre> <p>Any way to prevent that? Explicitly changing the dtype afterwards would also work.</p>
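One workaround I'm aware of is to hide the subarray dtype inside a one-field structured dtype, which the array constructors do not unpack (the field name `pair` here is an arbitrary choice):

```python
import numpy as np

# A bare subarray dtype gets absorbed into the array shape, so wrap it
# as the single field of a structured dtype instead.
wrapped = np.dtype([("pair", "u8", (2,))])

a = np.zeros(4, dtype=wrapped)
print(a.shape)        # the outer array keeps shape (4,)

pairs = a["pair"]     # a (4, 2) uint64 view for code that wants the raw pairs
```

Whether legacy consumers accept the structured wrapper depends on how they read the buffer, but the underlying bytes are laid out the same as the intended `(2,)u8` items.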
<python><numpy><dtype>
2023-04-05 14:10:11
0
59,565
Nico SchlΓΆmer
75,940,278
13,300,255
How to access various Plotter methods outside my class?
<p>I'm plotting a mesh with PyVista using <a href="https://docs.pyvista.org/api/core/_autosummary/pyvista.UnstructuredGrid.html#pyvista-unstructuredgrid" rel="nofollow noreferrer"><code>UnstructuredGrid</code></a>. It's going pretty well and I managed to plot the geometry I want using <code>grid.plot()</code>.</p> <p>I now want to access methods outside of <code>UnstructuredGrid</code>: for example, I want to be able to click and select on mesh elements and I see this is possible to do with the <a href="https://docs.pyvista.org/api/plotting/_autosummary/pyvista.Plotter.enable_point_picking.html#enable-point-picking" rel="nofollow noreferrer"><code>enable_point_picking</code></a> method. The only problem is that this method does not belong to <code>UnstructuredGrid</code>, but to the <code>Plotter</code> class.</p> <p>Does this mean I cannot use that method, or are there any workarounds?</p>
<python><pyvista>
2023-04-05 13:50:24
1
439
man-teiv
75,940,265
21,351,146
How do I create a multithreaded async server application with Python
<p>For clarity, I am using Python 3.10.9.</p> <p>The program I am trying to make consists of a ThreadPool and some type of async runtime environment. If a connection comes in on a listening socket, the async environment should be able to handle multiple connections concurrently. Database writing and other processing work I want to be able to offload to threads.</p> <p>I have never used async in Python, and per my understanding multithreading is hard but can be achieved with something like the <code>multiprocessing</code> module.</p> <p>Can this type of application be built in Python? Help appreciated!</p> <p>EDIT: I suppose for a little more context, I found several of these types of examples in other programming books, so if there is a similar Python tutorial I may have overlooked, please send it my way!</p>
<python><python-3.x><multithreading><asynchronous><python-asyncio>
2023-04-05 13:49:29
1
301
userh897
75,940,177
5,574,107
Merge dataframes where df1['row']<=df2['row']
<p>I have two tables:</p> <pre><code>d = {'ID': ['A', 'B', 'C'], 'Month': [1,3,5], 'group': ['x','x','x']} df1 = pd.DataFrame(data=d) d2 = {'Month': [1,2,3,4,5], 'value': [0.8,0.2,0.5,0.3,0.7], 'group': ['x','x','x','x','x']} df2 = pd.DataFrame(data=d2) </code></pre> <p>I would like to join on group, which in my real table is not all the same, and keep only the rows of df2 whose month is &lt;= the month in df1; the SQL equivalent is a join on df2.month &lt;= df1.month, with output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>Month</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>1</td> <td>0.8</td> </tr> <tr> <td>B</td> <td>1</td> <td>0.8</td> </tr> <tr> <td>B</td> <td>2</td> <td>0.2</td> </tr> <tr> <td>B</td> <td>3</td> <td>0.5</td> </tr> <tr> <td>C</td> <td>1</td> <td>0.8</td> </tr> <tr> <td>C</td> <td>2</td> <td>0.2</td> </tr> <tr> <td>C</td> <td>3</td> <td>0.5</td> </tr> <tr> <td>C</td> <td>4</td> <td>0.3</td> </tr> <tr> <td>C</td> <td>5</td> <td>0.7</td> </tr> </tbody> </table> </div> <p>I will just drop the group column after the merge.</p> <p>A functional but slow way to do this is to just do the full join and then filter the result on df2's month column being &lt;= df1's. But my table is large and that would be inefficient. Is there a way to do this in one step with pd.merge or pd.join?</p>
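For reference, the join-then-filter approach can at least be written as one chained expression; note it still materializes the per-group product internally, so it sketches the shape of the answer rather than removing the memory cost. The frames below are a cleaned-up reconstruction of the sample, assuming the `group` lists are all `'x'`:

```python
import pandas as pd

df1 = pd.DataFrame({"ID": ["A", "B", "C"], "Month": [1, 3, 5], "group": ["x"] * 3})
df2 = pd.DataFrame({"Month": [1, 2, 3, 4, 5],
                    "value": [0.8, 0.2, 0.5, 0.3, 0.7],
                    "group": ["x"] * 5})

# Equi-join on group, then keep rows where df2's month <= df1's month.
merged = df1.merge(df2, on="group", suffixes=("_df1", "_df2"))
out = (merged[merged["Month_df2"] <= merged["Month_df1"]]
       .rename(columns={"Month_df2": "Month"})
       [["ID", "Month", "value"]]
       .reset_index(drop=True))
print(out)
```

If memory is the real constraint, processing one group at a time, or using a tool with native conditional-join support, may be necessary.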
<python><pandas><dataframe>
2023-04-05 13:41:34
1
453
user13948
75,940,073
19,980,284
Calculate the percent occurrence of values from summed rows and plot as bars
<p>I have survey data based on 6 questions, where each column corresponds to one question (case), and each row corresponds to one survey respondent who answered all 6 questions. <code>1.0</code> means the respondent would give fluid, and <code>0.0</code> means they would not give fluid:</p> <pre><code> case_1 case_2 case_3 case_4 case_5 case_6 0 0.0 0.0 0.0 0.0 0.0 1.0 1 1.0 1.0 1.0 0.0 0.0 1.0 2 1.0 1.0 0.0 1.0 1.0 1.0 3 0.0 0.0 1.0 0.0 1.0 1.0 4 0.0 0.0 1.0 1.0 1.0 0.0 ... ... ... ... ... ... ... 517 1.0 1.0 0.0 1.0 0.0 1.0 518 0.0 0.0 1.0 1.0 1.0 1.0 519 1.0 1.0 1.0 0.0 1.0 1.0 520 1.0 0.0 0.0 0.0 1.0 0.0 521 0.0 1.0 0.0 1.0 1.0 1.0 </code></pre> <p>I want to generate the following plot, which illustrates the percentage of respondents who never gave fluid, gave fluid in only one case, gave fluid in 2 cases etc.</p> <p><a href="https://i.sstatic.net/QqL5O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QqL5O.png" alt="enter image description here" /></a></p>
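A sketch of the counting step, with a seeded random 522×6 frame standing in for the survey data: sum each row to get the number of fluid-giving cases per respondent, normalize the value counts into percentages, and draw the bars (axis labels are illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the sketch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 2, size=(522, 6)).astype(float),
                  columns=[f"case_{i}" for i in range(1, 7)])  # stand-in for the survey

# Number of cases each respondent gave fluid in, then share of respondents per count.
counts = df.sum(axis=1).astype(int)
pct = counts.value_counts(normalize=True).sort_index() * 100
pct = pct.reindex(range(7), fill_value=0)  # make sure 0..6 all appear on the axis

ax = pct.plot(kind="bar", rot=0)
ax.set_xlabel("Number of cases with fluid given")
ax.set_ylabel("% of respondents")
```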
<python><pandas><matplotlib><bar-chart>
2023-04-05 13:31:53
1
671
hulio_entredas
75,940,020
8,573,615
What is best practice for sharing config files across multiple functions in an AWS API Gateway?
<p>I am deploying an API Gateway to AWS that has multiple endpoints, grouped into multiple groups. In the interests of speed, readability and reusability, I've deployed this as a set of different lambda functions, one per group. In the interests of portability I've defined it all in one cloudformation template.</p> <p>So file structure looks somewhat like this:</p> <pre><code>package_name | | - group1/ | |- handler1.py | | def function11(event, context): | ... | | def function12(event, context): | ... | - group2/ | |- handler2.py | | def function21(event, context): | ... | | def function22(event, context): | ... | - config.py | S3_BUCKET_NAME = &quot;my_bucket&quot; | - template.yaml Resources: Group1Function: Properties: CodeUri: group1/ Handler: handler1.function11 ...etc... </code></pre> <p>... corresponding to endpoints</p> <pre><code>&lt;url&gt;/group1/function11 &lt;url&gt;/group1/function12 &lt;url&gt;/group2/function21 &lt;url&gt;/group2/function22 </code></pre> <p>The intent is that when endpoint function11 is requested, lambda only loads handler1.py, when function22 is requested it only loads handler2.py, and so on.</p> <p>Question: what is the best approach for sharing common config options across multiple functions? Normally I would put them in /config then import that into each function:</p> <pre><code>from package_name.config import S3_BUCKET_NAME </code></pre> <p>...but in this case I think cloudformation only packages the code in group1/ and group2/ for deployment, which doesn't include config.</p> <p>What's the best approach to scale up to a large deployment with a lot of endpoints and quite a lot of config options?</p>
<python><aws-lambda><aws-api-gateway>
2023-04-05 13:27:35
0
396
weegolo
75,939,929
17,267,064
Unable to transfer ownership of a google sheet owned by my service account to my personal account
<p>I want to transfer ownership of a google sheet that I created using my service account to my personal account using Python.</p> <pre><code>from google.oauth2 import service_account from googleapiclient.discovery import build creds = service_account.Credentials.from_service_account_file('credentials.json') service = build('drive', 'v3', credentials=creds) permission = { 'type': 'user', 'role': 'writer', 'transferOwnership': 'true', 'pendingOwner': 'true', 'emailAddress': new_owner_email } service.permissions().create( fileId=sheet_id, body=permission, transferOwnership=True ).execute() </code></pre> <p>I get below error.</p> <pre><code>HttpError: &lt;HttpError 403 when requesting https://www.googleapis.com/drive/v3/files/1X7O2jv-KDqZhvjTW3Dif1tuyvPA8O15Ck3e4foAf7-Y/permissions?transferOwnership=true&amp;alt=json returned &quot;The transferOwnership parameter must be enabled when the permission role is 'owner'.&quot;. Details: &quot;[{'message': &quot;The transferOwnership parameter must be enabled when the permission role is 'owner'.&quot;, 'domain': 'global', 'reason': 'forbidden', 'location': 'transferOwnership', 'locationType': 'parameter'}]&quot;&gt; </code></pre> <p>What do I change to make my personal account as the owner of my google sheet?</p>
<python><google-sheets><gspread>
2023-04-05 13:18:08
0
346
Mohit Aswani
75,939,770
10,499,953
How can I construct a DataFrame that uses the PyArrow backend directly (i.e., without a pd.read_xx() method)?
<p>Pandas 2.0 introduces the option to use PyArrow as the backend rather than NumPy. As of version 2.0, using it seems to require either calling one of the <code>pd.read_xxx()</code> methods with <code>dtype_backend='pyarrow'</code>, or else constructing a <code>DataFrame</code> that's NumPy-backed and then calling <code>.convert_dtypes</code> on it.</p> <p>Is there a more direct way to construct a PyArrow-backed <code>DataFrame</code>?</p>
<python><pandas><pyarrow>
2023-04-05 13:01:31
1
417
Attila the Fun
75,939,717
3,214,872
How to generate dummy op for torch-to-ONNX export?
<p>I'm trying to export a torch model to ONNX and, due to some custom operations in the model, I need to generate custom ONNX nodes while exporting. I would like to be able to define a &quot;dummy&quot; operation that I can then use to register a symbolic function handling the ONNX node generation; however, I don't know how to tell torchscript to treat the function as an &quot;op&quot; rather than entering and tracing its content:</p> <pre class="lang-py prettyprint-override"><code>import torch # What decorator here? @torch.jit.script parses the body def my_dummy_fn(first: torch.Tensor, second: torch.Tensor): return torch.tensor(0), torch.tensor([1]), torch.tensor([2]) def my_symbolic_function(g, first, second): # generate the custom ONNX op via g.op() return g.op(&quot;my_custon_onnx&quot;, first, second) torch.onnx.register_custom_op_symbolic( symbolic_name=&quot;??????&quot;, # &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt; What to use here? Ideally, the name of my dummy op symbolic_fn=my_symbolic_function, opset_version=12, ) </code></pre>
<python><pytorch><torchscript>
2023-04-05 12:57:48
0
19,263
GPhilo
75,939,550
1,000,343
globals() Doesn't Work Inside of a Package
<p>It is sometimes convenient to source part of a .py file into the global workspace when you're doing interactive tasks (I am aware that <code>exec</code> has code smell&quot;). I have a function that takes a file and regex markers of what start and end points to source lines from and assigns the contents of the file lines to the global workspace. This works as expected when you source the function in a script interactively but when I try to put this function into a convenience package it runs but the objects aren't added to the global workspace.</p> <p><strong>How can I make this function add the sourced objects to the global workspace when it is added to a package?</strong></p> <h2>Source Lines Function (Works until you put it in a package)</h2> <pre><code>from re import search from munch import DefaultMunch import tempfile import os def source_lines(path:str, start_line:str, end_line:str): with open(path, &quot;r&quot;) as f: script = f.readlines() starts = [i for i, line in enumerate(script) if search(start_line, line)] ends = [i for i, line in enumerate(script) if search(end_line, line)] out = [] for s, e in zip(starts, ends): out.append(script[s:e]) out_flat = [item for sublist in out for item in sublist] with tempfile.TemporaryDirectory() as td: f_name = os.path.join(td, 'tempsource.py') with open(f_name, 'w+t') as fh: fh.write('\n'.join(out_flat)) fh.seek(0) exec(open(f_name).read(), globals()) </code></pre> <h2>Test Code (I want this to work if <code>source_lines</code> is added to a package)</h2> <pre><code>pretend_file = ['## Dependencies\n', 'from datetime import datetime\n', '#\n', '\n', &quot;data = 'd'*100\n&quot;, '\n', '## Main fun\n', 'def date_to_epoch(date: str | datetime = None) -&gt; str:\n', '\n', ' if isinstance(date, datetime):\n', ' date = str(date.date())\n', '\n', ' return str(int(datetime.strptime(date, &quot;%Y-%m-%d&quot;).timestamp() * 1000))\n', '\n', '\n', '\n', '\n', '\n', '\n', &quot;data2 = 'e'*100\n&quot;, '#\n', '\n', '\n', 
&quot;s='d'&quot; ] with open('delete_me.py', 'a') as f: f.write('\n'.join(pretend_file)) ## Source to global source_lines( path = 'delete_me.py', start_line = '^## (Dependencies|Main fun)', end_line = '^#\s*$' ) ## Now have access to date_to_epoch, datetime, data2 (This does not work if `source_lines` was added to a package) date_to_epoch(datetime.strptime('2023-02-02', '%Y-%m-%d')) data2 os.remove('delete_me.py') </code></pre>
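For what it's worth, the usual explanation for this symptom is that `globals()` inside a packaged module refers to *that module's* namespace, not to the interactive session that calls it. A hedged sketch of the common fix: let the caller supply its own namespace (defaulting to the caller's globals via a CPython frame introspection detail):

```python
import sys

def run_in_caller(code, namespace=None):
    """Execute code in the supplied namespace, defaulting to the caller's globals."""
    if namespace is None:
        # Reach one frame up: the module/session that called us.
        # Note: sys._getframe is a CPython implementation detail.
        namespace = sys._getframe(1).f_globals
    exec(code, namespace)

# Explicit namespace keeps the demo self-contained:
ns = {}
run_in_caller("sourced_value = 41 + 1", ns)
print(ns["sourced_value"])
```

`source_lines` could take the same optional `namespace` parameter and pass it to `exec` instead of `globals()`.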
<python><python-3.x>
2023-04-05 12:41:30
1
110,512
Tyler Rinker
75,939,403
9,577,700
How to print only specific text to multiline window in pysimplegui
<p>I have a situation where I am creating application which is storing log information. I have added UI using pysimplegui package. what I want to achieve is</p> <ol> <li>Print some information into text file</li> <li>Print few information line to UI multiline box.</li> </ol> <p>Currently whatever I have printed is going to multiline window. Attaching code below which is small replica of my bigger code.</p> <pre><code>import threading import time import PySimpleGUI as sg import schedule THREAD_EVENT = '-THREAD-' cp = sg.cprint def the_thread(window): def the_thread1(): print(&quot;Avoid this line to get printed into the UI&quot;) #fruits = [1, &quot;banana&quot;, &quot;cherry&quot;] for x in range(10): print(&quot;Avoid this line to get printed into the UI&quot;) if x == 5: #print(&quot;x:&quot; + str(x)) print(&quot;Avoid this line to get printed into the UI&quot;) window.write_event_value('-THREAD-', (threading.current_thread().name, &quot;NMDC&quot;,&quot;THIS IS MESSAGE&quot;)) #cp('This is cheating from the thread', c='white on red') print(&quot;Avoid this line to get printed into the UI&quot;) schedule.every(5).seconds.do(the_thread1) while 1: schedule.run_pending() time.sleep(1) def main(): &quot;&quot;&quot; The demo will display in the multiline info about the event and values dictionary as it is being returned from window.read() Every time &quot;Start&quot; is clicked a new thread is started Try clicking &quot;Dummy&quot; to see that the window is active while the thread stuff is happening in the background &quot;&quot;&quot; layout = [ [sg.Text('Output Area - cprint\'s route to here', font='Any 15')], [sg.Multiline(size=(65,20), key='-ML-', autoscroll=True, reroute_stdout=True, write_only=True, reroute_cprint=True)], [sg.T('Input so you can see data in your dictionary')], [sg.Input(key='-IN-', size=(30,1))], [sg.B('Start A Thread'), sg.B('Dummy'), sg.Button('Exit')] , [sg.Text(&quot;TotaL messages &quot;,font=('HP Simplified',9),size=(20, 
1)),sg.Input(&quot;0000&quot;,size=(20, 2),key='-msgcount-')]] window = sg.Window('Window Title', layout) while True: # Event Loop event, values = window.read() #cp(event, values) if event == sg.WIN_CLOSED or event == 'Exit': break if event.startswith('Start'): threading.Thread(target=the_thread, args=(window,), daemon=True).start() if event == THREAD_EVENT: #cp(f'Data from the thread ', colors='white on purple', end='') #cp(f'{values[THREAD_EVENT]}', colors='white on blue') print(&quot;Avoid this line to get printed into the UI&quot;) ## Print only below line of code to multiline window. ignore the other print command cp(values[THREAD_EVENT][1]+ &quot;:&quot; + values[THREAD_EVENT][2]) window.Element('-msgcount-').update(values[THREAD_EVENT][1]+ &quot;:&quot; + values[THREAD_EVENT][2]) print(&quot;Avoid this line to get printed into the UI&quot;) #window.Element('-msgcount-').update(values[THREAD_EVENT][1]) window.close() if __name__ == '__main__': main() </code></pre> <p>Please help me. I tried searching doc but couldn't find anything.</p>
<python><pysimplegui>
2023-04-05 12:24:56
0
481
dnyanesh ambhore
75,939,141
1,446,710
Create task from within another running task
<p>In Python I create two async tasks:</p> <pre><code>tasks = [ asyncio.create_task(task1(queue)), asyncio.create_task(task2(queue)), ] await asyncio.gather(*tasks) </code></pre> <p>Now, I have a need to create a third task &quot;task3&quot; <strong>within</strong> task1.</p> <p>So I have:</p> <pre><code>async def task1(queue): # and here I need to create the &quot;task3&quot;: asyncio.create_task(task3(queue)) # and how can I schedule this? </code></pre> <p>So I wish to schedule task3 also, without hurting task1 and task2 (they shall stay running).</p> <p>How am I supposed to do this?</p>
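`asyncio.create_task()` called from inside a running coroutine schedules the new task on the already-running loop immediately, so no extra scheduling step is needed; the main thing is to keep a reference and await (or gather) it somewhere so it can finish and isn't garbage-collected mid-flight. A minimal self-contained sketch:

```python
import asyncio

results = []  # records execution order for the demo

async def task3(queue):
    results.append("task3 ran")

async def task1(queue):
    # Scheduling happens immediately on the running loop.
    t3 = asyncio.create_task(task3(queue))
    results.append("task1 ran")
    await t3  # or stash the reference and gather it later

async def task2(queue):
    results.append("task2 ran")

async def main():
    queue = asyncio.Queue()
    await asyncio.gather(task1(queue), task2(queue))

asyncio.run(main())
print(results)
```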
<python><python-asyncio>
2023-04-05 11:56:31
1
2,725
Daniel
75,939,055
3,630,230
Determine the current binlog file with python-mysql-replication
<p>I'm using the <a href="https://github.com/julien-duponchelle/python-mysql-replication" rel="nofollow noreferrer">python-mysql-replication</a> package to consume the MySQL binlog. The current binlog position is accessible as shown below, but I'm unable to access the current binlog file, which I'd need if I want to restart the process and resume from the last position.</p> <pre class="lang-py prettyprint-override"><code>stream = BinLogStreamReader( connection_settings=mysql_connection, server_id=server_id, blocking=True, resume_stream=True, only_events=[DeleteRowsEvent, WriteRowsEvent, UpdateRowsEvent], ) print('Current position:', binlogevent.packet.log_pos) </code></pre> <p>Is the binlog file name for a particular event made available through this API? If not, are there any recommendations for resuming from the last-read position in the log?</p>
<python><mysql>
2023-04-05 11:49:06
1
343
Weston Sankey
75,938,908
19,580,067
Find the index of the word in sentence even if it matches partially (fuzzy)
<p>I need to find the start index of a word in a sentence even if it matches only partially.</p> <p>I tried the <code>find()</code> method, but it matches only if the word is an exact match.</p> <p>Code:</p> <pre><code>body = 'Charles DοΏ½Silva | Technical Officer Anglo (UK) Limited' word = 'ANGLO (UK) LTD' start_idx = body.lower().find(word.lower()) print(start_idx) </code></pre> <p>So I need to get the start index of <strong>Anglo (UK) Limited</strong> in the sentence, and also the word in the sentence that partially matches (Anglo (UK) Limited).</p> <p>Any suggestions for the above issue?</p>
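If a standard-library route is acceptable, `difflib.SequenceMatcher.find_longest_match` returns both the start index in the sentence and the longest partially matching span. A sketch (an ASCII apostrophe stands in for the garbled character in the sample text; note the result is the longest common substring, which can be shorter than the full phrase):

```python
from difflib import SequenceMatcher

body = "Charles D'Silva | Technical Officer Anglo (UK) Limited"
word = "ANGLO (UK) LTD"

matcher = SequenceMatcher(None, body.lower(), word.lower(), autojunk=False)
m = matcher.find_longest_match(0, len(body), 0, len(word))

start_idx = m.a                     # start index of the partial match in body
matched = body[m.a:m.a + m.size]    # the partially matching text
print(start_idx, repr(matched))
```

For full fuzzy matching with an error tolerance, the third-party `regex` module's fuzzy syntax (e.g. `(?:pattern){e<=2}`) is another option.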
<python><regex><string><fuzzy-search>
2023-04-05 11:34:35
2
359
Pravin
75,938,884
9,544,904
OSMNX find nearest nodes within 1km euclidean distance of a co-ordinate point
<p>I have an osmnx graph <code>G</code> and some co-ordinate points <code>P</code>. Now I want to find all the nodes which are within 1 km Euclidean distance of at least one co-ordinate point of <code>P</code>. My code is as follows:</p> <pre class="lang-py prettyprint-override"><code> import osmnx as ox G = ox.graph.graph_from_place('New York City, New York, United States', network_type=&quot;all&quot;, retain_all=True) P = [(40.718797266, -73.753347584), (40.713511106, -73.759968316), ...] </code></pre> <p>There are about 44k items in <code>P</code>. Is there an efficient way to achieve this?</p>
<python><osmnx>
2023-04-05 11:32:09
1
443
Sihan Tawsik
75,938,882
1,498,199
Slice and assign in a pd.DataFrame
<p>At first we create a small <code>pd.DataFrame</code> with MultiIndex on both axis:</p> <pre><code>columns = pd.MultiIndex.from_tuples([('a', 2), ('a', 3), ('b', 1), ('b', 3)], names=['col_1', 'col_2']) index = pd.MultiIndex.from_tuples([(pd.Timestamp('2023-03-01'), 'A'), (pd.Timestamp('2023-03-01'), 'B'), (pd.Timestamp('2023-03-01'), 'C'), (pd.Timestamp('2023-03-02'), 'A'), (pd.Timestamp('2023-03-02'), 'B'), (pd.Timestamp('2023-03-03'), 'B'), (pd.Timestamp('2023-03-03'), 'C')], names=['idx_1', 'idx_2']) data = np.arange(len(index) * len(columns)).reshape(len(index), len(columns)) df = pd.DataFrame(index=index, columns=columns, data=data) </code></pre> <p>So we get</p> <pre><code>col_1 a b col_2 2 3 1 3 idx_1 idx_2 2023-03-01 A 0 1 2 3 B 4 5 6 7 C 8 9 10 11 2023-03-02 A 12 13 14 15 B 16 17 18 19 2023-03-03 B 20 21 22 23 C 24 25 26 27 </code></pre> <p>Now I want the 'A' and the 'B' rows to be equal:</p> <pre><code>col_1 a b col_2 2 3 1 3 idx_1 idx_2 2023-03-01 A 4 5 6 7 B 4 5 6 7 C 8 9 10 11 2023-03-02 A 16 17 18 19 B 16 17 18 19 2023-03-03 B 20 21 22 23 C 24 25 26 27 </code></pre> <p>I can do it likes this:</p> <pre><code>df = df.unstack() df.loc[:, pd.IndexSlice[:, :, 'A']] = df.loc[:, pd.IndexSlice[:, :, 'B']].values df = df.stack().reindex(index) </code></pre> <p>I wonder if there is another approach without de facto copying the data twice.</p>
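One unstack-free alternative: select the `B` rows, relabel them as `A` on the `idx_2` level, and let `.loc` assignment align on the index; `B` rows without an `A` partner (like 2023-03-03) simply find nothing to align with and are left alone. A sketch that rebuilds the example frame for self-containment:

```python
import numpy as np
import pandas as pd

columns = pd.MultiIndex.from_tuples([("a", 2), ("a", 3), ("b", 1), ("b", 3)],
                                    names=["col_1", "col_2"])
index = pd.MultiIndex.from_tuples(
    [(pd.Timestamp("2023-03-01"), k) for k in "ABC"]
    + [(pd.Timestamp("2023-03-02"), k) for k in "AB"]
    + [(pd.Timestamp("2023-03-03"), k) for k in "BC"],
    names=["idx_1", "idx_2"])
df = pd.DataFrame(np.arange(len(index) * len(columns)).reshape(len(index), -1),
                  index=index, columns=columns)

idx = pd.IndexSlice
# B rows relabelled as A; the assignment aligns on (idx_1, idx_2).
df.loc[idx[:, "A"], :] = df.loc[idx[:, "B"], :].rename(index={"B": "A"}, level="idx_2")
print(df)
```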
<python><pandas><dataframe><multi-index>
2023-04-05 11:32:04
3
1,296
JoergVanAken
75,938,581
5,016,028
How can I use sympy sum over elements of a list in python?
<p>I am just trying to use a sympy sum over a list of elements, for example:</p> <pre><code>x = [1,2,3,4,5] expression = sympy.sum(x[i], (i,0,2)) * sympy.sum(x[j], (j,1,3)) </code></pre> <p>and I just want it to numerically evaluate this. I always get index out of range, although it's clearly not out of range. Any ideas?</p>
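`sympy.Sum` (capital `S`) builds a symbolic sum, and a symbolic index cannot subscript a Python list — `x[i]` is evaluated eagerly before SymPy ever sees it, which is where the index error comes from. The usual pattern is an `IndexedBase` placeholder that gets substituted with the concrete values at the end:

```python
import sympy

x = [1, 2, 3, 4, 5]
xs = sympy.IndexedBase("x")                       # symbolic stand-in for the list
i, j = sympy.symbols("i j", integer=True)

expr = sympy.Sum(xs[i], (i, 0, 2)) * sympy.Sum(xs[j], (j, 1, 3))
# Expand the sums, then plug in the concrete list values.
value = expr.doit().subs({xs[k]: x[k] for k in range(len(x))})
print(value)
```

If nothing needs to stay symbolic, plain `sum(x[0:3]) * sum(x[1:4])` gives the same number directly.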
<python><python-3.x><list><sympy>
2023-04-05 10:57:16
1
4,373
Qubix
75,938,574
13,839,945
Matplotlib animation.ArtistAnimation showing all plots at once for initial figure
<p>I want to create live plots of my results and found this SO question which did most of the job: <a href="https://stackoverflow.com/questions/49158604/matplotlib-animation-update-title-using-artistanimation">Matplotlib animation update title using ArtistAnimation</a></p> <p>BUT, when I start the script, the first figure shown is every figure at the same time (see the screenshot below). What am I doing wrong? Also, the axis is plotted twice but shifted (like in the initial figure) for each animation step. I would like to have the first plotted figure as the starting figure and then go to the next one after <code>interval=1000</code> milliseconds.</p> <p><a href="https://i.sstatic.net/gTyPE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gTyPE.png" alt="Initial Figure" /></a></p> <p>I used the code from the other question without any additions:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt from matplotlib import animation import numpy as np a = np.random.rand(10,10) fig, ax=plt.subplots() container = [] for i in range(a.shape[1]): line, = ax.plot(a[:,i]) title = ax.text(0.5,1.05,&quot;Title {}&quot;.format(i), size=plt.rcParams[&quot;axes.titlesize&quot;], ha=&quot;center&quot;, transform=ax.transAxes, ) container.append([line, title]) ani = animation.ArtistAnimation(fig, container, interval=200, blit=False) plt.show() </code></pre> <p>Using <code>Python=3.7.16</code> <code>matplotlib==3.5.3</code></p>
<python><matplotlib><plot><matplotlib-animation>
2023-04-05 10:56:31
1
341
JD.
75,938,471
17,973,259
Compare two image variables in pygame
<p>I have this method that when called it changes the weapon for the player, I want to somehow be able to determine if the current weapon is already the new weapon. I tried with <code>is</code> and <code>pygame.surface.get_at()</code> but it didn't work. Eventually I will do another action when the weapon is the same, the print statement is there just for now to check if it works.</p> <pre><code>class BulletsManager: &quot;&quot;&quot;The BulletsManager class manages the player bullets.&quot;&quot;&quot; def __init__(self, game): self.game = game self.thunder_weapon = pygame.image.load(WEAPONS['thunderbolt']) self.fire_weapon = pygame.image.load(WEAPONS['firebird']) def change_weapon(self, player, weapon_name): &quot;&quot;&quot;Change the player weapon.&quot;&quot;&quot; if player == &quot;thunderbird&quot;: new_weapon = pygame.image.load(WEAPONS[weapon_name]) if new_weapon == self.thunder_weapon: print('New weapon is the same as the current one') else: self.thunder_weapon = new_weapon elif player == &quot;phoenix&quot;: self.fire_weapon = pygame.image.load(WEAPONS[weapon_name]) </code></pre>
<python>
2023-04-05 10:44:30
1
878
Alex
75,938,329
938,021
Paramiko SFTPClient class throws "Authentication failed", but Transport class and WinSCP work
<p>I have an issue using Paramiko and connecting to an SFTP server.</p> <p>The connecting work fine using same credentials in WinSCP.<br /> I am also able to use pysftp.<br /> But I would prefer to be able to use Paramiko, which gives access to better control of timeouts (removed in the example below).</p> <p>I get the following error in the last Paramiko example:</p> <blockquote> <p>paramiko.ssh_exception.AuthenticationException: Authentication failed.</p> </blockquote> <p>What is even more strange, the exact same code, works for my colleague. So anyone have a clue what I should look into?</p> <pre class="lang-py prettyprint-override"><code>import pysftp from base64 import decodebytes import paramiko username = &quot;my_username&quot; password = &quot;my_secret_password&quot; hostname = &quot;ftp.hostname.tld&quot; directory = &quot;/protected/directory&quot; port = 22 keydata = b&quot;&quot;&quot;AAAAB...TuQ==&quot;&quot;&quot; key = paramiko.RSAKey(data=decodebytes(keydata)) # PYSFTP is working! cnopts = pysftp.CnOpts() cnopts.hostkeys.add(hostname, 'ssh-rsa', key) with pysftp.Connection(host=hostname, username=username, password=password, cnopts=cnopts) as sftp: files = sftp.listdir_attr(directory) test = &quot;test&quot; # This pure Paramiko is also working! with paramiko.Transport((hostname,port)) as transport: transport.connect( hostkey=key, username=username, password=password ) with paramiko.SFTPClient.from_transport(transport) as sftp: files = sftp.listdir_attr(directory) test = &quot;test&quot; # This version where the transport is established using sshclient, does not work. 
ssh_client = paramiko.SSHClient() ssh_client.get_host_keys().add(hostname=hostname, keytype=&quot;ssh-rsa&quot;, key=key) ssh_client.connect( hostname=hostname, username=username, password=password, port=port ) sftp = ssh_client.open_sftp() files = sftp.listdir_attr(directory) test = &quot;test&quot; </code></pre> <p>Paramiko log from a failed session:</p> <pre class="lang-none prettyprint-override"><code>DEBUG:paramiko.transport:starting thread (client mode): 0x5dad0e50 DEBUG:paramiko.transport:Local version/idstring: SSH-2.0-paramiko_3.1.0 DEBUG:paramiko.transport:Remote version/idstring: SSH-2.0-srtSSHServer_11.00 INFO:paramiko.transport:Connected (version 2.0, client srtSSHServer_11.00) DEBUG:paramiko.transport:=== Key exchange possibilities === DEBUG:paramiko.transport:kex algos: diffie-hellman-group14-sha256, diffie-hellman-group-exchange-sha256, diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha1, diffie-hellman-group1-sha1, diffie-hellman-group-exchange-sha512@ssh.com DEBUG:paramiko.transport:server key: ssh-rsa DEBUG:paramiko.transport:client encrypt: aes256-ctr, aes128-ctr, aes256-cbc, twofish256-cbc, aes128-cbc, twofish256-ctr, aes192-ctr, twofish192-ctr, twofish128-ctr, twofish128-cbc DEBUG:paramiko.transport:server encrypt: aes256-ctr, aes128-ctr, aes256-cbc, twofish256-cbc, aes128-cbc, twofish256-ctr, aes192-ctr, twofish192-ctr, twofish128-ctr, twofish128-cbc DEBUG:paramiko.transport:client mac: hmac-sha3-512, hmac-sha3-384, hmac-sha3-256, hmac-sha3-224, hmac-sha2-512, hmac-sha2-384, hmac-sha2-256, hmac-sha2-224, hmac-sha1 DEBUG:paramiko.transport:server mac: hmac-sha3-512, hmac-sha3-384, hmac-sha3-256, hmac-sha3-224, hmac-sha2-512, hmac-sha2-384, hmac-sha2-256, hmac-sha2-224, hmac-sha1 DEBUG:paramiko.transport:client compress: none DEBUG:paramiko.transport:server compress: none DEBUG:paramiko.transport:client lang: &lt;none&gt; DEBUG:paramiko.transport:server lang: &lt;none&gt; DEBUG:paramiko.transport:kex follows: False 
DEBUG:paramiko.transport:=== Key exchange agreements === DEBUG:paramiko.transport:Kex: diffie-hellman-group-exchange-sha256 DEBUG:paramiko.transport:HostKey: ssh-rsa DEBUG:paramiko.transport:Cipher: aes128-ctr DEBUG:paramiko.transport:MAC: hmac-sha2-256 DEBUG:paramiko.transport:Compression: none DEBUG:paramiko.transport:=== End of kex handshake === DEBUG:paramiko.transport:Got server p (2048 bits) DEBUG:paramiko.transport:kex engine KexGexSHA256 specified hash_algo &lt;built-in function openssl_sha256&gt; DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Trying discovered key b'f28c2e56806b573d7fc45b9722242e5b' in C:\Users\my-user/.ssh/id_rsa DEBUG:paramiko.transport:userauth is OK DEBUG:paramiko.transport:Finalizing pubkey algorithm for key of type 'ssh-rsa' DEBUG:paramiko.transport:Our pubkey algorithm list: ['rsa-sha2-512', 'rsa-sha2-256', 'ssh-rsa'] DEBUG:paramiko.transport:Server did not send a server-sig-algs list; defaulting to our first preferred algo ('rsa-sha2-512') DEBUG:paramiko.transport:NOTE: you may use the 'disabled_algorithms' SSHClient/Transport init kwarg to disable that or other algorithms if your server does not support them! INFO:paramiko.transport:Authentication (publickey) failed. INFO:paramiko.transport:Disconnect (code 2): unexpected service request </code></pre>
<python><sftp><paramiko><pysftp>
2023-04-05 10:29:13
2
1,302
jakobdo
75,938,285
857,741
Get values of range object without using 'for' loop
<p>It seems <code>range()</code> doesn't have a method of getting values (something like <code>dict().values()</code>).</p> <p>Is there a way of getting the values without doing &quot;manual&quot; iteration (like <code>[x for x in range(10)]</code>)?</p>
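A `range` is already a lazy immutable sequence, so its values can be materialized, or used directly, without writing the loop yourself:

```python
r = range(10)

as_list = list(r)      # materialize all values at once
as_tuple = tuple(r)
first, *rest = r       # unpacking works too

# Membership and indexing need no materialization at all:
contains = 7 in r
fifth = r[4]
```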
<python>
2023-04-05 10:23:57
3
6,914
LetMeSOThat4U
75,938,229
5,868,293
Create indicator column threshold of values of another column for different rows with groupby in pandas
<p>I have the following dataframe</p> <pre><code>import pandas as pd pd.DataFrame({'id':[1,1,2,2,3,3], 'phase': ['pre', 'post','pre', 'post','pre', 'post'], 'n': [5,6,7,3,10,10]}) </code></pre> <p>I want to create a new column (<code>new_col</code>) which indicates if <code>n&gt;=5</code> for <strong>both</strong> <code>pre</code> &amp; <code>post</code> by <code>id</code>.</p> <p>The output dataframe looks like this</p> <pre><code>pd.DataFrame({'id':[1,1,2,2,3,3], 'phase': ['pre', 'post','pre', 'post','pre', 'post'], 'n': [5,6,7,3,10,10], 'new_col':[1,1,0,0,1,1]}) </code></pre> <p>I would like to avoid any solution using <code>pd.pivot_table</code></p> <p>How could I do that ?</p>
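`groupby(...).transform` broadcasts a per-group result back onto the original rows, which expresses "n >= 5 for both phases of this id" without any pivoting — here via the per-group minimum, which is >= 5 exactly when every value in the group is:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 2, 2, 3, 3],
                   "phase": ["pre", "post"] * 3,
                   "n": [5, 6, 7, 3, 10, 10]})

# Per-id minimum of n, broadcast to every row of that id, then thresholded.
df["new_col"] = df.groupby("id")["n"].transform("min").ge(5).astype(int)
print(df)
```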
<python><pandas>
2023-04-05 10:17:48
2
4,512
quant
75,938,214
9,861,647
Duplicate rows in pandas on condition
<p>I have this pandas dataframe:</p> <pre><code> Column1 Column2 Column3 1 A C 2 A D 3 B </code></pre> <p>If there is a &quot;D&quot; in my Column2, I want to duplicate the row with its values and reset the index, like this:</p> <pre><code> Column1 Column2 Column3 1 A C 2 A D 3 A D 4 B </code></pre> <p>How do I do this in pandas?</p>
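One common pattern for conditional duplication is `Index.repeat` with a per-row repeat count (2 for rows to duplicate, 1 otherwise). This sketch assumes the "D" flag lives in the third printed column (`Column3`), matching the sample tables, and that `Column1` is a running number to be rebuilt afterwards:

```python
import pandas as pd

df = pd.DataFrame({"Column1": [1, 2, 3],
                   "Column2": ["A", "A", "B"],
                   "Column3": ["C", "D", ""]})

repeats = (df["Column3"] == "D") + 1          # 2 where the row should be duplicated
out = df.loc[df.index.repeat(repeats)].reset_index(drop=True)
out["Column1"] = out.index + 1                # renumber after the insert
print(out)
```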
<python><pandas>
2023-04-05 10:15:55
1
1,065
Simon GIS
75,938,122
14,997,346
Flask MJPEG stream has long loading time
<p>I wrote a class to display images on a small webpage (mainly images captured by a camera, some image processing in between). I worked after <a href="https://blog.miguelgrinberg.com/post/video-streaming-with-flask" rel="nofollow noreferrer">Miguel's blog post</a>.<br /> Since few weeks I have an issue that the website sometimes needs a long time to load. To be more presice: the static stuff in the index.html is shown directly, but the images need 30s to 1min to start showing on the website.</p> <p>I made a smaller example of my class to look for this issue but I cannot discover the problem. Does someone has a tipp for me?</p> <p>The class:</p> <pre><code>import flask import cv2 import multiprocessing as mp import numpy as np import time import select import sys ## class display # # - show images on html site # - saves images for logging class display: ##constructor # - starts separate process for the website def __init__(self): self.__controleQueue = mp.Queue() self.__imageQueue = mp.Queue() self.__imageSemaphore = mp.Semaphore(30) self.imgdelete = 0 print(&quot;start controle process&quot;) self.__controleProcess = mp.Process(target=self.__startApp) self.__controleProcess.start() print(&quot;class initialized&quot;) ## destructor def exit(self): print(&quot;stop process&quot;) self.__controleQueue.put(&quot;exit&quot;) while True: print(&quot;exit loop main&quot;) if self.__controleQueue.get() == &quot;exit done&quot;: print(&quot;process stopped&quot;) break else: time.sleep(1) return True ## website app # - called by constructor as separate process # - starts website as separate process # - waits for message to stop website def __startApp(self): print(&quot;create app&quot;) self.app = self.__createApp() ip = &quot;192.168.62.144&quot; print(&quot;ip for display website: &quot;, ip) print(&quot;start working process/app&quot;) self.__workingProcess = mp.Process(target=self.app.run, kwargs={&quot;host&quot;:ip, &quot;port&quot;:&quot;80&quot;, 
&quot;debug&quot;:False, &quot;threaded&quot;:True, &quot;use_reloader&quot;:False}) self.__workingProcess.deamon = True self.__workingProcess.start() print(&quot;working process/app started, waiting for commands&quot;) while True: if self.__controleQueue.get() == &quot;exit&quot;: print(&quot;stop working process&quot;) if self.__workingProcess.is_alive(): self.__workingProcess.terminate() self.__workingProcess.join(2) while self.__workingProcess.is_alive(): time.sleep(0.01) print(&quot;working process stopped&quot;) self.__controleQueue.put(&quot;exit done&quot;) break else: time.sleep(0.1) return ## new image # - send new image to display # # @param image image to show on website def newImage(self, image): # self.logger.debug(&quot;new image&quot;) # if len(image.shape) &gt; 2: # image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # self.__imageQueue.put(image) if self.__imageSemaphore.acquire(block=False): # # print(&quot;semaphore yes&quot;) if len(image.shape) &gt; 2: image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) self.__imageQueue.put(image) else: print(&quot;\t\t\t\t\t\t\t\tsemaphore full&quot;) self.imgdelete += 1 print(&quot;\t\t\t\t\timage delete: &quot;, self.imgdelete) ## generater function to prepare new image # - receives image from queue # - resizes image to defined maximum size # - returns encoded image def __generate(self): print(&quot;__generate call&quot;) fpsCounter = 0 fpsTime=time.time() sleeptime = 0.0001 while True: print(&quot;loop \t&quot;+str(fpsCounter)+&quot;\t&quot;+str(time.time()-fpsTime)) fpsCounter += 1 if fpsCounter == 3200: sleeptime=1 # if fpsCounter == 1010: # sleeptime=0.0001 # if fpsCounter == 1000: # sleeptime=1 time.sleep(sleeptime) output = self.__imageQueue.get() self.__imageSemaphore.release() # output = np.ones((500,500,3), dtype = np.uint8)*255 cv2.putText(output, &quot;test display &quot;+str(fpsCounter), (0, output.shape[0]-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,0), 2) # self.logger.debug(&quot;yes new image&quot;) 
flag, frame = cv2.imencode(&quot;.jpg&quot;, output) if not flag: print(&quot;shit&quot;) continue yield(b'--frame\r\nContent-Type: image/jpeg\r\n\r\n' + frame.tobytes() + b'\r\n') ## start website app # - create app # - setup routes def __createApp(self): # app = flask.Flask(&quot;cam1&quot;, template_folder='/home/root/camino/templates') app = flask.Flask(&quot;test display&quot;, static_folder='tests') # render html page @app.route(&quot;/&quot;) def index(): return flask.render_template(&quot;2index.html&quot;) # return &quot;Congratulations, it's a web app!&quot; @app.route(&quot;/video_feed&quot;) def videoFeed(): # print(&quot;videoFeed: &quot;, self.__getCurrentMemoryUsage(), &quot; MB&quot;) return flask.Response(self.__generate(), mimetype = &quot;multipart/x-mixed-replace; boundary=frame&quot;) return app if __name__ == &quot;__main__&quot;: disp = display() sizex = 1500 sizey = 100 image = np.zeros((sizey,sizex,3),dtype=np.uint8) row = 0 col = 0 step = 10 change = True while True: if change: image[row:row+step, col:col+step,:] = 255 col += step if col &gt;= sizex: col = 0 row += step if row &gt;= sizey: row = 0 image = np.zeros((sizey,sizex,3),dtype=np.uint8) print(&quot;new image done&quot;) disp.newImage(image) time.sleep(0.02) i,o,e = select.select( [sys.stdin], [], [], 0.001) # print(row, col, &quot;terminal read&quot;, i) if i: key = sys.stdin.readline().strip() print(key) if key == 'q': break if key == 'c': change = not change disp.exit() print(&quot;end&quot;) </code></pre> <p>2index.html</p> <pre><code>&lt;html&gt; &lt;head&gt; &lt;link rel=&quot;icon&quot; href=&quot;data:,&quot;&gt; &lt;title&gt;test display&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Welcome to FlaskApp!&lt;/h1&gt; &lt;h2&gt;2index.html&lt;/h2&gt; &lt;img src=&quot;{{ url_for('videoFeed') }}&quot;&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I tried to generate the &quot;image&quot; at different points in the code. 
It doesn't look like the queue, the semaphore, or the communication between the processes is the problem. I also figured out that (on my PC) the images start showing in the browser at about image no. 3180, but this depends on the size of the image. (As said, the headers are shown directly after loading the page.)</p> <p>I also tried the original example from Miguel (<a href="https://github.com/miguelgrinberg/flask-video-streaming" rel="nofollow noreferrer">https://github.com/miguelgrinberg/flask-video-streaming</a>) with the 3 images. The same problem occurs, but it takes much longer for the first image to show. So I changed the sleep time between images from 1 s to 10 ms; now it takes about 3-4 min until the first image is shown.</p> <p>These are Firefox's time measurements for the GET video_feed. <a href="https://i.sstatic.net/UfrWq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UfrWq.png" alt="Firefox's time measurements for the GET video_feed." /></a></p> <p>The server receives the <code>&quot;GET /video_feed HTTP/1.1&quot; 200 -</code> directly when the website loads, but it takes the waiting time (here 3.71 min) to show the first image.</p> <p>I should mention that this runs on a compute module; the website is opened on my PC over the local network. As I said before, this used to work well on this system.</p>
<python><flask><mjpeg>
2023-04-05 10:04:50
0
331
chillking
75,937,978
10,938,315
structlog: display log level in Cloudwatch
<p>I have set up my logger like this:</p> <pre><code>import logging import structlog class Logging: @staticmethod def my_logger() -&gt; logging.Logger: structlog.configure( processors=[ structlog.processors.add_log_level, structlog.processors.TimeStamper(fmt=&quot;iso&quot;, key=&quot;ts&quot;), structlog.processors.JSONRenderer(), ], wrapper_class=structlog.make_filtering_bound_logger(logging.INFO), ) logger = structlog.getLogger() return logger </code></pre> <p>Events in Cloudwatch now look like this:</p> <pre><code>2023-04-05T10:44:52.920+01:00 {&quot;event&quot;: &quot;My logging message&quot;, &quot;level&quot;: &quot;info&quot;, &quot;ts&quot;: &quot;2023-04-05T09:44:52.919867Z&quot;} </code></pre> <p>Instead, I want to see the log level right at the beginning like I would with the default logging module:</p> <pre><code>2023-04-05T10:44:52.920+01:00 [INFO] {&quot;event&quot;: &quot;My logging message&quot;, &quot;level&quot;: &quot;info&quot;, &quot;ts&quot;: &quot;2023-04-05T09:44:52.919867Z&quot;} </code></pre> <p>How can I accomplish this?</p>
<python><amazon-web-services><structlog>
2023-04-05 09:49:18
1
881
Omega
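For the structlog question above, one hedged approach: structlog's final processor may return the string that actually gets logged, so a custom renderer can prepend the level before the JSON payload. The sketch below only assumes structlog's documented processor signature `(logger, method_name, event_dict)`; the function itself is plain Python and would replace `JSONRenderer()` as the last entry in `processors`.

```python
import json

def level_prefixed_renderer(logger, method_name, event_dict):
    # Final structlog processor: render "[LEVEL] {json payload}".
    # Assumes add_log_level already stored "level" in event_dict.
    level = event_dict.get("level", method_name)
    return f"[{level.upper()}] " + json.dumps(event_dict)

# Sketch of the wiring (inside structlog.configure, not run here):
# processors=[add_log_level, TimeStamper(fmt="iso", key="ts"),
#             level_prefixed_renderer]
```

Cloudwatch would then show `[INFO] {...}` per event, since it displays whatever string the renderer produces.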
75,937,954
5,674,660
How to convert a date field so as to extract and filter by the current month in an SQLite db using func.strftime() in SQLAlchemy
<p>I want to know the subscribed events (events that I am following) that took place in the current month. This is irrespective of whether they took place 5 years ago or a year ago, so long as they took place in the current month. I am convinced my problem emanates from <code>Event.event_date</code>. How do I convert this field into a date so that <code>strftime()</code> can extract the month from it using <code>'%m'</code> and then compare it to the current month?</p> <p>However, the query gives no events that took place in the current month, yet I have events that indeed took place in April (the current month). The current month is to be automated, which I have successfully accomplished through <code>datetime.today().strftime('%m')</code>. Then I want to compare this month with all events that took place in the same month. Here is the full query code:</p> <pre><code>monthly_events = current_user.followed_events().filter(Event.event_date &lt; datetime.today().date()).filter(func.strftime('%m', Event.event_date == datetime.today().strftime('%m'))).order_by(Event.timestamp.desc()) </code></pre> <p>By breaking the query into sections, I was able to find where the problem is. The section of the query <code>current_user.followed_events().filter(Event.event_date &lt; datetime.today().date())</code> gives all events that have passed (yesterday and beyond). This part works correctly.</p> <p>The section <code>current_user.followed_events().filter(Event.event_date &lt; datetime.today().date()).order_by(Event.timestamp.desc())</code> arranges these past events in descending order, and this section works correctly as well.</p> <p>However, the part with the problem is <code>.filter(func.strftime('%m', Event.event_date == datetime.today().strftime('%m')))</code>, where the aim is to filter out events that took place in the current month, irrespective of the year they took place in.</p> <p>Note that I have imported the modules <code>from sqlalchemy import func</code> and <code>from datetime import datetime</code> at the top of <code>routes.py</code>.</p> <p>The <code>event_date</code> field in <code>models.py</code> is stored as a <code>db.DateTime</code> with a <code>default = datetime.utcnow</code>. I am using <code>Flask</code> with an <code>SQLite</code> db, but will change it to <code>Postgresql</code> later.</p> <p>I hope the information is enough; otherwise let me know if additional information is needed.</p>
<python><datetime><flask><sqlalchemy><func>
2023-04-05 09:46:54
1
1,012
chibole
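The query in the question above has a misplaced parenthesis: the comparison `Event.event_date == ...` sits *inside* the `func.strftime(...)` call, so SQLite receives a boolean as the second argument instead of the column. The fix is `func.strftime('%m', Event.event_date) == datetime.today().strftime('%m')`. A plain-sqlite3 sketch of the intended SQL (table and column names mirror the question's model; the data is made up):

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (name TEXT, event_date TEXT)")
conn.executemany(
    "INSERT INTO event VALUES (?, ?)",
    [("old april", "2018-04-10 12:00:00"),
     ("last june", "2022-06-01 08:30:00"),
     ("this april", "2023-04-01 09:00:00")],
)

# strftime('%m', event_date) extracts the month; the comparison to the
# current month happens *outside* the strftime call
this_month = datetime(2023, 4, 5).strftime("%m")  # stand-in for today()
rows = conn.execute(
    "SELECT name FROM event WHERE strftime('%m', event_date) = ?",
    (this_month,),
).fetchall()
# -> both April events, regardless of year
```

The SQLAlchemy equivalent would be `.filter(func.strftime('%m', Event.event_date) == datetime.today().strftime('%m'))` — note the closing parenthesis before `==`.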
75,937,941
12,667,081
concurrent futures threadpoolexecutor repeating list items
<p>I'm new to multi-threading in Python, and trying to manage a script that calls thousands of APIs. I've read a number of answers and articles and got to this:</p> <pre><code>import requests import json import time import sys import concurrent.futures import threading thread_local = threading.local() pet_data = [] def get_session(): if not hasattr(thread_local, &quot;session&quot;): thread_local.session = requests.Session() return thread_local.session def scrape(url): session = get_session() with session.get(url) as response: info = response.text pet_data.append(json.loads(info)) def run_scrapes(sites): with concurrent.futures.ThreadPoolExecutor(max_workers=12) as executor: executor.map(scrape, sites) executor.shutdown(wait=True) </code></pre> <p>sites is a list of the URLs to call (it's a paginated API, so it's a simple list of 'api.endpoint&amp;page='+str(i) URLs).</p> <p>It works fine, but the issue I'm having is that it repeats calls, many times (based on debugging through logs, each URL gets called 6 times, even though there's only 1 of each in the list).</p> <p>Is there something off in the code I've pieced together from articles/answers? I admit to not fully understanding the get_session function, which I suppose could be the issue.</p>
<python><python-3.x><list><multithreading>
2023-04-05 09:45:33
1
545
Brian - RGY Studio
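One observation on the threading question above: `executor.map` calls the function exactly once per list item, and the explicit `shutdown(wait=True)` inside the `with` block is redundant (the context manager already waits). A self-contained sketch with a hypothetical stand-in for the network call, showing once-per-URL behavior and a lock-protected shared list — if each URL really is logged 6 times in the real script, the duplication more likely comes from how `sites` is built or from `run_scrapes` being invoked repeatedly:

```python
import concurrent.futures
import threading

results = []
lock = threading.Lock()

def scrape(url):
    # hypothetical stand-in for the real request; append under a lock so
    # the shared list is safe to mutate from many threads
    with lock:
        results.append(url)

def run_scrapes(sites):
    with concurrent.futures.ThreadPoolExecutor(max_workers=12) as executor:
        executor.map(scrape, sites)
    # the with-block already waits for all tasks; no shutdown() needed

sites = [f"api.endpoint&page={i}" for i in range(50)]
run_scrapes(sites)
# each URL is processed exactly once
```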
75,937,892
14,649,310
Can an SQLAlchemy ARRAY column be mapped to a python tuple?
<p>I have seen the SQLAlchemy ARRAY columns be mapped to lists with a default empty list value like this:</p> <pre><code>tags = Column(ARRAY(Text, dimensions=1), default=[]) </code></pre> <p>Can this mapped to a non-mutable python object like a tuple?</p>
<python><sqlalchemy><flask-sqlalchemy>
2023-04-05 09:38:56
1
4,999
KZiovas
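For the ARRAY-to-tuple question above, SQLAlchemy's documented hook for this kind of coercion is a `TypeDecorator` with `process_bind_param` (tuple β†’ list on the way to the database) and `process_result_value` (list β†’ tuple on the way out), plus `default=()`. The sketch keeps the two conversions as standalone, testable functions and shows — commented out and unverified against a live database — how they would slot into a decorator:

```python
def bind_tuple_as_list(value):
    # what process_bind_param would do: the DB driver expects a list
    return list(value) if value is not None else None

def result_list_as_tuple(value):
    # what process_result_value would do: hand the app an immutable tuple
    return tuple(value) if value is not None else None

# Sketch of the SQLAlchemy wiring (not run here):
# class TupleArray(TypeDecorator):
#     impl = ARRAY(Text, dimensions=1)
#     cache_ok = True
#     def process_bind_param(self, value, dialect):
#         return bind_tuple_as_list(value)
#     def process_result_value(self, value, dialect):
#         return result_list_as_tuple(value)
#
# tags = Column(TupleArray, default=())
```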
75,937,792
7,295,936
Group a list of strings by similar values
<p>I want to group a list of strings by similarity.</p> <p>In my case (here simplified cause this list can be huge) it's a list of path to zip files like this:</p> <pre><code>[&quot;path/002_AC_HELICOPTEROS-MARINOS_20230329_210358_145T3_21049_00748_DMAU.zip&quot;, &quot;path/002_AC_NOLAS_20230326_234440_145T2_20160_06473_VMS_UMS.zip&quot;, &quot;path/002_AC_HELICOPTEROS-MARINOS_20230329_211105_145T3_21049_00748_FDCR.zip&quot;, &quot;path/002_AC_HELICOPTEROS-MARINOS_20230329_205916_145T3_21049_00747_VMS_UMS.zip&quot;, &quot;path/002_AC_NOLAS_20230326_235504_145T2_20160_06473_FDCR.zip&quot;] </code></pre> <p>I would like to group the strings in that list by a key, but I don't know yet how to define it (I guess with a lambda but I can't figure it out) in order to get a result list like this:</p> <pre><code>[[&quot;path/002_AC_HELICOPTEROS-MARINOS_20230329_210358_145T3_21049_00748_DMAU.zip&quot;, &quot;path/002_AC_HELICOPTEROS-MARINOS_20230329_211105_145T3_21049_00748_FDCR.zip&quot;], [&quot;path/002_AC_HELICOPTEROS-MARINOS_20230329_205916_145T3_21049_00747_VMS_UMS.zip&quot;], [&quot;path/002_AC_NOLAS_20230326_234440_145T2_20160_06473_VMS_UMS.zip&quot;, &quot;path/002_AC_NOLAS_20230326_235504_145T2_20160_06473_FDCR.zip&quot;]] </code></pre> <p>To give you an example the first grouping key would be:</p> <pre class="lang-none prettyprint-override"><code>*_HELICOPTEROS-MARINOS_20230329_*_21049_00748_*.zip </code></pre> <p>second would be:</p> <pre class="lang-none prettyprint-override"><code>*_HELICOPTEROS-MARINOS_20230329_*_21049_00747_*.zip </code></pre> <p>and third:</p> <pre class="lang-none prettyprint-override"><code>*_NOLAS_20230326_*_20160_06473_*.zip </code></pre>
<python><python-3.x><string><list><grouping>
2023-04-05 09:27:42
2
1,560
FrozzenFinger
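One way to define the grouping key for the question above, assuming the stable fields are the 3rd, 4th, 7th, and 8th underscore-separated parts of the basename (operator name, date, and the two numeric IDs — exactly the fields the wildcard patterns keep):

```python
import os
from collections import defaultdict

def group_key(path):
    # keep fields 2, 3, 6, 7 of the underscore-split basename
    # (operator, date, and the two numeric IDs); everything else
    # is treated as a wildcard
    parts = os.path.basename(path).split("_")
    return (parts[2], parts[3], parts[6], parts[7])

def group_paths(paths):
    groups = defaultdict(list)
    for p in paths:
        groups[group_key(p)].append(p)
    return list(groups.values())

paths = [
    "path/002_AC_HELICOPTEROS-MARINOS_20230329_210358_145T3_21049_00748_DMAU.zip",
    "path/002_AC_NOLAS_20230326_234440_145T2_20160_06473_VMS_UMS.zip",
    "path/002_AC_HELICOPTEROS-MARINOS_20230329_211105_145T3_21049_00748_FDCR.zip",
    "path/002_AC_HELICOPTEROS-MARINOS_20230329_205916_145T3_21049_00747_VMS_UMS.zip",
    "path/002_AC_NOLAS_20230326_235504_145T2_20160_06473_FDCR.zip",
]
groups = group_paths(paths)
# -> three groups, sized 2, 1, 2, as in the desired output
```

If the filenames are less regular than in the example, the same structure works with a regex capturing the stable fields instead of positional splitting.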
75,937,745
2,885,763
get reference from parent object
<p>Is there a more efficient/Pythonic way to get a reference to the object that owns a composed object? I don't mean inheritance between classes, but composition: the child object is an attribute of a parent object (directly or inside a list). Something like <code>PythonicRef(self)</code> in this concept sample:</p> <pre><code>class TheChild : def GetParentColor( self ) : myVal = PythonicRef(self).Color class TheParent : Color = &quot;Red&quot; AChild = TheChild() </code></pre> <p>Currently I pass the reference into the child via <code>__init__( self, MyParent )</code>, store it as an attribute of TheChild, and create the child with <code>AChild = TheChild( self )</code>, but this is heavy, and it makes the child class less useful on its own because it becomes very dependent on the parent.</p> <p>In other words, I want an equivalent of super(), but for owning objects rather than base classes.</p> <p>In my specific case, a master object holds a value that basic objects of another class need (the value, not necessarily the reference). For example, the Height of a Decor object is used by several Wall objects. This works if the walls are created by the Decor object itself, but it is a problem if they are created outside and then added (and the height is needed when a wall is created and can't be changed later).</p> <p>The goal is not (in my case) to change the parent's attribute value this way, but just to read a value/function from the parent that owns the object as an attribute. The child code does assume that the specific function/attribute exists on the parent, but that is not the problem here.</p>
<python><python-3.x>
2023-04-05 09:23:12
1
10,057
NeronLeVelu
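Python has no built-in way to discover which object owns an attribute (an object can be referenced by many owners at once), so for the question above the usual pattern is still an explicit back-reference — but the parent can set it when the child is attached, which keeps the child's constructor clean and also works for children created outside and added later. A sketch (class and attribute names illustrative), using `weakref` to avoid a parentβ†”child reference cycle:

```python
import weakref

class TheChild:
    def __init__(self):
        self._parent_ref = None          # set by the parent on attach

    @property
    def parent(self):
        return self._parent_ref() if self._parent_ref is not None else None

    def get_parent_color(self):
        # the child assumes its owner exposes .Color
        return self.parent.Color

class TheParent:
    Color = "Red"

    def __init__(self):
        self.a_child = TheChild()
        self.attach(self.a_child)

    def attach(self, child):
        # works for children created outside and added afterwards, too
        child._parent_ref = weakref.ref(self)

parent = TheParent()
outside_child = TheChild()
parent.attach(outside_child)
```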
75,937,636
6,375,025
Optimizing Complex Data Processing: Advanced Techniques for Efficiently Handling Massive CSV Files in Python
<p>I'm working on a project where I need to process massive CSV files that contain hundreds of millions of rows and dozens of columns. I've tried using various Python libraries such as pandas, Dask, and PySpark, but I'm still encountering serious performance issues, and my code keeps crashing due to memory errors.</p> <p>One of the challenges is that I need to perform complex transformations and aggregations on the data, such as merging multiple CSV files, filtering and grouping data based on multiple criteria, and computing complex statistical metrics.</p> <p>Given the massive scale of the data and the complexity of the processing, what are some advanced techniques and algorithms that I can use to optimize my code and achieve the best possible performance? Are there any cutting-edge tools or frameworks that I should be aware of? And how can I ensure that my code is scalable and can handle even larger datasets in the future?</p>
<python><pandas><performance><csv><memory-management>
2023-04-05 09:10:14
0
435
Millet Antoine
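Whatever library ends up being used for the question above, the memory-safe shape is the same: stream the rows and keep only running aggregates, so memory is proportional to the number of groups rather than the number of rows. A dependency-free sketch of a one-pass group-by-sum (pandas' `read_csv(chunksize=...)`, Dask, Polars, or DuckDB apply the same idea at scale; the column names here are made up):

```python
import csv
import io
from collections import defaultdict

def streaming_group_sum(line_iter, key_col, val_col):
    # single pass; memory is O(number of groups), not O(number of rows)
    sums = defaultdict(float)
    for row in csv.DictReader(line_iter):
        sums[row[key_col]] += float(row[val_col])
    return dict(sums)

# tiny in-memory demo; with a real file: with open(path, newline="") as f
demo = io.StringIO("city,amount\na,1\nb,2\na,3\n")
totals = streaming_group_sum(demo, "city", "amount")
```

Merging multiple CSVs fits the same pattern: feed each file through the loop and keep accumulating into the same dictionary.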
75,937,624
1,240,108
Python 3.8 and pickle load()
<p>I am struggling to read a pickle file. I keep getting <code>EOFError: Ran out of input</code>. My file is definitely not empty, and I am using Python 3.8 with pandas 1.4.4.</p> <p><a href="https://i.sstatic.net/Br1na.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Br1na.png" alt="enter image description here" /></a></p> <p>When I try to load the file with Python I get:</p> <p><a href="https://i.sstatic.net/WbbSx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WbbSx.png" alt="enter image description here" /></a></p> <p>Note that the pickle object was saved with HIGHEST_PROTOCOL. Is there any incompatibility with 3.8 and pandas?</p>
<python><pandas><pickle><python-3.8>
2023-04-05 09:08:53
0
1,079
JPV
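For the pickle question above: `EOFError: Ran out of input` means the stream looked empty or truncated to the reader — typical causes are a file opened in text mode, a write that was never flushed or closed, or a path pointing at a different (empty) file. The protocol is not the issue: `HIGHEST_PROTOCOL` on Python 3.8 is 5, which 3.8 reads back fine. A minimal round-trip sketch (a plain object stands in for the DataFrame; `pd.read_pickle` wraps the same mechanics):

```python
import os
import pickle
import tempfile

obj = {"col": [1, 2, 3]}              # stand-in for the DataFrame

path = os.path.join(tempfile.mkdtemp(), "data.pkl")
with open(path, "wb") as f:           # 'wb': binary mode is essential
    pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL)

with open(path, "rb") as f:           # 'rb' likewise when reading
    loaded = pickle.load(f)
```

If this round trip works but the original file still fails, checking `os.path.getsize()` on the exact path being opened usually reveals a truncated or wrong file.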
75,937,608
275,224
How to make VSCode continue search for modules to import under debugging when having several packages under same namespace
<p>I have a Python project where I depend on another Python package I develop.</p> <p>Call them 'moduleA' and 'moduleB'. Both are distributed as Python packages.</p> <p>I have given the packages a namespace where the company name is the top level. i.e. 'companyA.moduleA' and 'companyA.moduleB'.</p> <p>In my repo (for moduleB), I have a 'src' layout:</p> <pre><code>projectB β”œβ”€β”€ build β”‚Β Β  └── venv β”œβ”€β”€ .env β”œβ”€β”€ setup.cfg β”œβ”€β”€ setup.py β”œβ”€β”€ src β”‚Β Β  └── companyA β”‚Β Β  └── moduleB β”‚Β Β  β”œβ”€β”€ classB.py β”‚Β Β  └── __init__.py └── tests </code></pre> <p>For debugging I make a virtual environment under the build folder. In that venv, I have installed the dependency moduleA with pip. So, I can use 'from companyA.moduelA import classA' in moduleB code.</p> <p>But, when I debug moduleB in VSCode, I fail to import from moduleA because it looks in the folder 'src/companyA' for 'moduleA', and does not look into the site-packages folder of the venv.</p> <p>Is it possible to configure VSCode so that if it does not find 'moduleA' under 'src/companyA', then search under 'companyA' in the venv site-packages?</p> <p>PS! I am running with Python v3.11+</p> <p>PS2! When installing both packages in a virtualenv, it works (since both are installed under the same namespace). The problem is when debugging in VSCode.</p>
<python><python-3.x><visual-studio-code>
2023-04-05 09:08:00
2
576
pablaasmo
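For the VSCode question above, one commonly used knob (hedged — behavior differs between the language server and the debugger) is extending the analysis search path so the venv's site-packages is considered alongside `src`; it is also worth checking that `src/companyA` contains no `__init__.py`, so it remains a PEP 420 implicit namespace package that can merge with the installed `companyA.moduleA`. A sketch of `.vscode/settings.json` (the site-packages path is illustrative):

```json
{
  "python.analysis.extraPaths": [
    "src",
    "build/venv/lib/python3.11/site-packages"
  ]
}
```

For the debugger itself, selecting `build/venv` as the workspace's Python interpreter (or adding a `PYTHONPATH=src` line to the existing `.env` file) is the usual complement.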
75,937,446
7,330,316
pandas - How to show whole numbers along with floats?
<p>Imagine I have this DataFrame:</p> <pre class="lang-py prettyprint-override"><code>df = pandas.DataFrame([1, 2.321, 13.3444, 7.5, 5]) </code></pre> <p>How can I make the output table show whole numbers without decimal places and others with, say, 2 decimal places? Something like this:</p> <pre><code>1 2.32 13.34 7.50 5 </code></pre> <p>instead of this:</p> <pre><code>1.00 2.32 13.34 7.50 5.00 </code></pre>
<python><pandas>
2023-04-05 08:52:33
1
2,462
Shahriar
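For the formatting question above, pandas' `display.float_format` option accepts a callable, so a formatter that drops the decimals only for whole numbers can be plugged in globally; the formatting logic itself is plain Python:

```python
def smart_format(x):
    # whole values print bare; everything else gets two decimals
    return str(int(x)) if float(x).is_integer() else f"{x:.2f}"

# With pandas it could be installed globally (sketch, not run here):
# pd.set_option("display.float_format", smart_format)
# or applied per column: df[0].map(smart_format)
```

Note this produces display strings; the underlying column stays a numeric float64.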
75,937,434
13,793,478
How to perform 2 operations in MathFilters (Django)
<p>I want to perform 2 operations: first subtract entry_price from exit_price, then multiply the result by size. I am getting the error:</p> <pre><code>Could not parse some characters: |(journals.exit_price||sub:journals.entry_price)|mul:journals.size </code></pre> <pre><code>&lt;p class=&quot;pnl&quot;&gt; {% if journals.trade_direction == 'Long - Buy' %} {{ (journals.exit_price|sub:journals.entry_price)|mul:journals.size }} {% else %} {{ (journals.entry_price|sub:journals.exit_price)|mul:journals.size }} {% endif %} &lt;/p&gt; </code></pre>
<python><django>
2023-04-05 08:50:59
1
514
Mt Khalifa
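Django's template language has no parentheses, which is what the parse error above complains about; filters chain left to right, so the intended grouping can usually be written as `{{ journals.exit_price|sub:journals.entry_price|mul:journals.size }}` (each filter's output feeds the next). For anything more involved, computing the value in Python is simpler. A sketch of the P&L logic that could become a model property or a custom template filter (function and field names follow the question):

```python
def pnl(entry_price, exit_price, size, trade_direction):
    # long: (exit - entry) * size; otherwise: (entry - exit) * size
    if trade_direction == "Long - Buy":
        diff = exit_price - entry_price
    else:
        diff = entry_price - exit_price
    return diff * size
```

As a model property it would render in the template as simply `{{ journals.pnl }}`, with no filter chaining at all.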
75,937,355
18,091,040
Fill pandas dataframe missing data based on correlation with other columns
<p>I have a pandas DataFrame with many rows and a column of strings named <code>error_text</code> that contains several empty values. I wanted to fill these missing data based on the correlation of this column with another one.</p> <pre><code>mydf_example = pd.DataFrame({'a':[1,2,3,4,5,6,3],'b':[10,20,30,40,50,60,30],'c':['a','b','c','d','e','f','c'], 'error_text':[np.nan,'some_text','other_text',np.nan,'more_text','another_text',np.nan]}) mydf_example a b c error_text 0 1 10 a NaN 1 2 20 b some_text 2 3 30 c other_text 3 4 40 d Nan 4 5 50 e more_text 5 6 60 f another_text 6 3 30 c NaN </code></pre> <p>First I created a <code>sub_df</code> dropping the rows of missing data:</p> <pre><code>mydf_example = mydf_example.dropna() mydf_example a b c error_text 1 2 20 b some_text 2 3 30 c other_text 4 5 50 e more_text 5 6 60 f another_text </code></pre> <p>Then I converted the <code>error_text</code> column to category and calculated the correlation:</p> <pre><code>mydf_example['error_text'] = mydf_example['error_text'].astype('category').cat.codes mydf_example.corr()['error_text'] a -0.989949 b -0.989949 error_text 1.000000 </code></pre> <p>I was thinking if there is a way to fill the missing data in the <code>error_text</code> column based on the data of the other columns, for example, the last row would be filled with 'other_text', as the other values are equal to row 2. Of course, in my original dataset the correlation (or descorrelation) is not high as in the example, but I didn't find a way to set a value based on this info.</p>
<python><pandas><dataframe><missing-data>
2023-04-05 08:41:11
1
640
brenodacosta
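For the fill-missing question above, an alternative to correlation: treat the fully-known rows as a lookup table keyed by the other columns, then fill each missing value from a row with a matching key. In pandas this is essentially a `groupby(...)['error_text'].transform('first')` plus `fillna`; the underlying idea in dependency-free Python (data follows the question's example):

```python
def fill_from_matches(rows, key_cols, target):
    # learn (key columns) -> target from the complete rows ...
    lookup = {}
    for r in rows:
        if r[target] is not None:
            lookup[tuple(r[c] for c in key_cols)] = r[target]
    # ... then fill incomplete rows whose keys match a known row
    for r in rows:
        if r[target] is None:
            r[target] = lookup.get(tuple(r[c] for c in key_cols))
    return rows

rows = [
    {"a": 3, "b": 30, "c": "c", "error_text": "other_text"},
    {"a": 5, "b": 50, "c": "e", "error_text": "more_text"},
    {"a": 3, "b": 30, "c": "c", "error_text": None},   # like row 6
]
filled = fill_from_matches(rows, ["a", "b", "c"], "error_text")
# the last row picks up "other_text" from the matching row 2
```

Rows with no matching known key simply stay `None`, which mirrors what a groupby-based fill would do.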
75,937,200
6,871,867
How to select option from a combobox in python selenium?
<p>I am trying to put a value in a combo-box using Python Selenium. As text is entered, it suggests different possible values in a dropdown. How do I select the first value in the dropdown? As soon as I try to click on it, it removes the text and makes it blank.</p> <p><a href="https://i.sstatic.net/8iOxE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8iOxE.jpg" alt="enter image description here" /></a> <a href="https://i.sstatic.net/WZsXw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WZsXw.png" alt="enter image description here" /></a></p> <p>So, for example, I am sending the text <code>Stockholms kommun, Stockholms lΓ€n</code> in the <code>combobox</code> by:</p> <pre><code>text = 'Stockholms kommun, Stockholms lΓ€n' inputcity = driver.find_element(By.XPATH, '//*[@id=&quot;price-calculator-address&quot;]') inputcity.send_keys(text) </code></pre> <p>However, I am not able to click (as shown in image-2) on the dropdown which appears as I enter the text.</p> <p>Link to the website: <a href="https://www.hemnet.se/annonsera-bostad#priskalkylatorn" rel="nofollow noreferrer">https://www.hemnet.se/annonsera-bostad#priskalkylatorn</a></p> <p>Can anyone help me with it?</p>
<python><selenium-webdriver>
2023-04-05 08:22:31
1
462
Gopal Chitalia
75,937,184
10,735,143
How to remove all ampersands in a string except for specific tokens
<p>I need to remove every ampersand in a string except tokens like 'd&amp;g' or 'black&amp;jones'. For this purpose it must be done using regex. I used this pattern <code>r'[a-zA-Z]&amp;[a-zA-Z]'</code>, but it is not working. Any help or guidance would be greatly appreciated.</p>
<python><regex><string>
2023-04-05 08:20:14
2
634
Mostafa Najmi
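The pattern in the question above *matches* the ampersands to keep, whereas `re.sub` needs a pattern that matches the ones to delete; it also consumes the neighboring letters. Zero-width lookarounds solve both: remove `&` unless a letter sits on both sides. (This keeps *every* letter&letter token; to keep only a fixed whitelist like `d&g`, match the whitelist explicitly with alternation instead.)

```python
import re

# delete '&' when it is NOT flanked by letters on both sides
AMP = re.compile(r"(?<![A-Za-z])&|&(?![A-Za-z])")

def clean(text):
    return AMP.sub("", text)
```

The lookbehind/lookahead are zero-width, so the surrounding characters are left untouched (only the `&` itself is removed).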
75,937,154
13,647,125
Dash title while updating
<p>I would like to change the app title <a href="https://i.sstatic.net/tIczX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tIczX.png" alt="enter image description here" /></a></p> <p>This is what my app.callback looks like:</p> <pre><code>@app.callback( Output('table', 'data'), [Input('interval-component', 'n_intervals'), Input('my-input', 'value'), ] ) </code></pre> <p>I would like to change the title from 'Updating...' to 'MyApp'. I have tried</p> <pre><code>app = dash.Dash(__name__, title= 'MyApp') </code></pre> <p>and</p> <pre><code>app.title = 'MyApp' </code></pre> <p>But it is still not working.</p>
<python><plotly-dash><title>
2023-04-05 08:17:34
1
755
onhalu
75,936,946
15,877,202
How can I add and drop columns and rows in a pandas dataframe followed by saving it down into a csv (adding and dropping works but doesn't save)?
<p>So I have some code that drops a few columns in a pandas dataframe and adds one column to the pandas dataframe:</p> <pre><code>import pandas as pd df = pd.read_csv('TradedInstrument_20230331_real.txt', sep='\t', encoding='unicode_escape') cols = ['stopTrdgAllag', 'stopTrdgsAllowTrdg', 'stopTrdleInTrdg', 'presholdChf', 'billentCode', 'preTradeBsholdChf', 'billinntDesc'] x = ['e'] * len(df.index) df['new_col'] = x df.drop(columns=cols).to_csv('output.csv', index=False) </code></pre> <p>So what I am trying to do is add a new column <code>new_col</code> with <code>e</code> as the entry in each row, and also to drop all of the columns that are in <code>cols</code>. While the code works fine to achieve this, when I run the above, in the output only the columns are dropped and the new column is NOT added!</p> <p>I can kind of see why this is: it is because the <code>to_csv</code> only incorporates the dropping of the columns. So how can I make the new column appear in the output CSV as well?</p>
<python><dataframe>
2023-04-05 07:56:47
1
566
Patrick_Chong
75,936,937
7,132,482
Python3: Update date format
<p>I have a tricky problem with date formats in time-series data. In my dataframe of over one hundred thousand rows I have a datetime column with date values, but the format is %M:%S.%f.</p> <p>Example:</p> <pre><code>datetime 0 59:57.7 1 00:09.7 2 00:21.8 </code></pre> <p>What I want is to convert this format to %m/%d/%Y %H:%M:%S.%f, with 01/01/2023 00:59:57.7 as the first date, and then increment the hours and days. It's time-series data spanning a few days.</p> <p>Result:</p> <pre><code>datetime ProcessTime 59:57.7 01/01/2023 00:59:57.7 00:09.7 01/01/2023 01:00:09.7 00:21.8 01/01/2023 01:00:21.8 </code></pre> <p>I wrote this code to change the first date, to have a reference point from which to change the others.</p> <pre><code>import pandas as pd from datetime import datetime # Example dataframe df = pd.DataFrame({'datetime': ['59:57.7', '00:09.7', '00:21.8']}) first_time_str = df['datetime'][0] first_time_obj = datetime.strptime(first_time_str, '%M:%S.%f') formatted_first_time = first_time_obj.replace(year=2023, month=1, day=1).strftime('%m/%d/%Y %H:%M:%S.%f') df['datetime'][0] = formatted_first_time </code></pre> <p>Thanks for your help. Regards</p>
<python><pandas><dataframe><datetime><timedelta>
2023-04-05 07:55:34
1
335
Laurent Cesaro
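Since the column in the question above stores only minutes:seconds, the hour (and day) must be reconstructed by detecting wrap-around: whenever the minute:second value decreases relative to the previous row, an hour boundary was crossed. A pandas-free sketch of that logic (the 2023-01-01 start instant follows the question; the column could then be built with `df['datetime'].pipe(lambda s: expand_times(s.tolist()))` or similar):

```python
from datetime import datetime, timedelta

def expand_times(values, start=datetime(2023, 1, 1)):
    out = []
    prev = None
    carry = timedelta(0)
    for v in values:
        t = datetime.strptime(v, "%M:%S.%f")
        offset = timedelta(minutes=t.minute, seconds=t.second,
                           microseconds=t.microsecond)
        if prev is not None and offset < prev:
            carry += timedelta(hours=1)   # MM:SS wrapped past 59:59.9
        prev = offset
        out.append(start + carry + offset)
    return out

stamps = expand_times(["59:57.7", "00:09.7", "00:21.8"])
# -> 2023-01-01 00:59:57.7, 01:00:09.7, 01:00:21.8
```

Caveat: this cannot distinguish a genuine gap of an hour or more from a wrap, so it assumes consecutive samples are less than an hour apart.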
75,936,847
11,333,604
cannot import name 'function' from 'file' on Colab
<p>I have a helper_functions.py file that I want to use some functions from, but nothing seems to work.</p> <p><code>from helper_functions import plot_decision_boundary</code></p> <p><em>ImportError: cannot import name 'plot_decision_boundary' from 'helper_functions' (/content/helper_functions.py)</em></p> <p><code>from helper_functions import *</code></p> <p>runs, but when I call any function I get</p> <p><em>name 'plot_decision_boundary' is not defined</em></p> <p>I tried the solutions to the two most similar questions <a href="https://stackoverflow.com/questions/42037936/cannot-import-function-name">this</a> and <a href="https://stackoverflow.com/questions/58405054/i-cannot-import-function-name-from-another-file">this</a>, but nothing seems to work.</p> <p>I am working on Colab and have manually uploaded the file.</p> <p><code>!ls</code> lists the file.</p>
<python><import><google-colaboratory>
2023-04-05 07:46:16
3
303
Iliasp
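For the Colab question above, a frequent cause is a stale import: an earlier version of helper_functions.py (without that function) was already imported, and `import` just returns the cached module. Restarting the runtime, or forcing `importlib.reload`, picks up the re-uploaded file. A self-contained sketch of the reload behavior (the module name here is made up; in Colab it would be `helper_functions`):

```python
import importlib
import os
import sys
import tempfile

# simulate "re-uploading" a module whose contents changed
mod_dir = tempfile.mkdtemp()
sys.path.insert(0, mod_dir)
path = os.path.join(mod_dir, "helper_mod.py")

with open(path, "w") as f:
    f.write("def old():\n    pass\n")
import helper_mod                       # first import gets cached

with open(path, "w") as f:              # the re-uploaded file
    f.write("def plot_decision_boundary():\n    return 'ok'\n")

importlib.invalidate_caches()
importlib.reload(helper_mod)            # pick up the new contents
result = helper_mod.plot_decision_boundary()
```

In a Colab cell the equivalent one-liner would be `import importlib, helper_functions; importlib.reload(helper_functions)` — or simply Runtime β†’ Restart runtime.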
75,936,744
9,270,562
Errors when trying to install psycopg2 using pip
<p>I was trying to install psycopg2 using pip when I got the errors:</p> <pre><code>subprocess-exited-with-error and legacy-install-failure </code></pre> <p>The following is the error description:</p> <pre><code>Collecting psycopg2 Using cached psycopg2-2.9.6.tar.gz (383 kB) Preparing metadata (setup.py) ... done Building wheels for collected packages: psycopg2 Building wheel for psycopg2 (setup.py) ... error error: subprocess-exited-with-error Γ— python setup.py bdist_wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; [40 lines of output] /home/suneha/miniconda3/envs/vulogrca_core/lib/python3.9/site-packages/setuptools/config/setupcfg.py:516: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. warnings.warn(msg, warning_class) running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-cpython-39 creating build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/_json.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/_ipaddress.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/tz.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/extras.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/__init__.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/errorcodes.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/_range.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/extensions.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/errors.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/pool.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/sql.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 running build_ext building 'psycopg2._psycopg' extension creating build/temp.linux-x86_64-cpython-39 creating build/temp.linux-x86_64-cpython-39/psycopg gcc -pthread -B /home/suneha/miniconda3/envs/vulogrca_core/compiler_compat -Wno-unused-result -Wsign-compare 
-DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/suneha/miniconda3/envs/vulogrca_core/include -I/home/suneha/miniconda3/envs/vulogrca_core/include -fPIC -O2 -isystem /home/suneha/miniconda3/envs/vulogrca_core/include -fPIC &quot;-DPSYCOPG_VERSION=2.9.6 (dt dec pq3 ext lo64)&quot; -DPSYCOPG_DEBUG=1 -DPG_VERSION_NUM=120002 -DHAVE_LO64=1 -DPSYCOPG_DEBUG=1 -I/home/suneha/miniconda3/envs/vulogrca_core/include/python3.9 -I. -I/home/aayush/miniconda3/envs/vunetenv/include -I/home/aayush/miniconda3/envs/vunetenv/include/server -I/home/aayush/miniconda3/envs/vunetenv/include -c psycopg/adapter_asis.c -o build/temp.linux-x86_64-cpython-39/psycopg/adapter_asis.o -Wdeclaration-after-statement In file included from psycopg/adapter_asis.c:28:0: ./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory #include &lt;libpq-fe.h&gt; ^~~~~~~~~~~~ compilation terminated. It appears you are missing some prerequisite to build the package from source. You may install a binary package by installing 'psycopg2-binary' from PyPI. If you want to install psycopg2 from source, please install the packages required for the build and try again. For further information please check the 'doc/src/install.rst' file (also at &lt;https://www.psycopg.org/docs/install.html&gt;). error: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for psycopg2 Running setup.py clean for psycopg2 Failed to build psycopg2 Installing collected packages: psycopg2 Running setup.py install for psycopg2 ... error error: subprocess-exited-with-error Γ— Running setup.py install for psycopg2 did not run successfully. β”‚ exit code: 1 ╰─&gt; [42 lines of output] /home/suneha/miniconda3/envs/vulogrca_core/lib/python3.9/site-packages/setuptools/config/setupcfg.py:516: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. 
warnings.warn(msg, warning_class) running install /home/suneha/miniconda3/envs/vulogrca_core/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build/lib.linux-x86_64-cpython-39 creating build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/_json.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/_ipaddress.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/tz.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/extras.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/__init__.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/errorcodes.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/_range.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/extensions.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/errors.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/pool.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 copying lib/sql.py -&gt; build/lib.linux-x86_64-cpython-39/psycopg2 running build_ext building 'psycopg2._psycopg' extension creating build/temp.linux-x86_64-cpython-39 creating build/temp.linux-x86_64-cpython-39/psycopg gcc -pthread -B /home/suneha/miniconda3/envs/vulogrca_core/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/suneha/miniconda3/envs/vulogrca_core/include -I/home/suneha/miniconda3/envs/vulogrca_core/include -fPIC -O2 -isystem /home/suneha/miniconda3/envs/vulogrca_core/include -fPIC &quot;-DPSYCOPG_VERSION=2.9.6 (dt dec pq3 ext lo64)&quot; -DPSYCOPG_DEBUG=1 -DPG_VERSION_NUM=120002 -DHAVE_LO64=1 -DPSYCOPG_DEBUG=1 -I/home/suneha/miniconda3/envs/vulogrca_core/include/python3.9 -I. 
-I/home/aayush/miniconda3/envs/vunetenv/include -I/home/aayush/miniconda3/envs/vunetenv/include/server -I/home/aayush/miniconda3/envs/vunetenv/include -c psycopg/adapter_asis.c -o build/temp.linux-x86_64-cpython-39/psycopg/adapter_asis.o -Wdeclaration-after-statement In file included from psycopg/adapter_asis.c:28:0: ./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory #include &lt;libpq-fe.h&gt; ^~~~~~~~~~~~ compilation terminated. It appears you are missing some prerequisite to build the package from source. You may install a binary package by installing 'psycopg2-binary' from PyPI. If you want to install psycopg2 from source, please install the packages required for the build and try again. For further information please check the 'doc/src/install.rst' file (also at &lt;https://www.psycopg.org/docs/install.html&gt;). error: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure Γ— Encountered error while trying to install package. ╰─&gt; psycopg2 note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. 
</code></pre> <p><strong>What I have already tried</strong></p> <ul> <li>I tried installing psycopg2-binary as suggested in the error description.</li> <li>I installed cmake package based on this <a href="https://stackoverflow.com/questions/71470989/python-setup-py-bdist-wheel-did-not-run-successfully">answer</a> and tried it again.</li> </ul> <p><strong>My system</strong></p> <ul> <li>Ubuntu 18.04</li> <li>Python 3.9.16</li> </ul> <p><strong>Edit:</strong></p> <p>The full text output I got when I executed <code>pip install psycopg2-binary</code> is as given below</p> <pre><code>Collecting psycopg2-binary Downloading psycopg2_binary-2.9.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.0/3.0 MB 3.9 MB/s eta 0:00:00 Installing collected packages: psycopg2-binary Successfully installed psycopg2-binary-2.9.6 </code></pre>
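The fatal error `libpq-fe.h: No such file or directory` means the PostgreSQL client headers are not installed, so psycopg2 cannot be built from source. On Ubuntu those headers come from `libpq-dev` (package name per the psycopg2 install docs; verify for your release). Note also that the failing gcc line mixes include paths from two different home directories (`/home/suneha/...` and `/home/aayush/...`), which suggests a stale build configuration copied from another machine or environment:

```shell
# System packages needed to build psycopg2 from source on Ubuntu
# (names taken from the psycopg2 install docs; verify for your release):
sudo apt-get update
sudo apt-get install libpq-dev python3-dev build-essential
pip install psycopg2
```

Since `pip install psycopg2-binary` already succeeded (per the edit), `import psycopg2` should work as-is: the binary wheel installs the same top-level `psycopg2` package, so no source build is needed unless you specifically want one.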
<python><python-3.x><pip>
2023-04-05 07:32:40
0
362
Suneha K S
75,936,663
15,010,256
If condition is met, skip from 2nd iteration onwards
<pre><code>#Can be more than two items also. item_list = [&quot;rpm&quot;, &quot;exe&quot;] extraction_status = True while extraction_status: file_status_flag = True file_status = some_other_function(arg1, arg2) # returns a nested dictionary of lists. for item in file_status['BU']: # file_status['BU'] = [&quot;HR&quot;, &quot;DEV&quot; ,&quot;Admin&quot;] if item['extensions'] in item_list: #rpm or exe or zip if (file_status['status'] == &quot;PUSHED&quot;): print(&quot;file from {} reached target..&quot;.format(item)) #do further action on that file continue # not working else: print(&quot;Status of {} is {}&quot;.format(item, file_status['status'])) # To check if all the items in item_list have been pushed. if (file_status_flag and (file_status['status'] == &quot;PUSHED&quot;)): &quot;All files reached target..make the script successful..&quot; extraction_status = False </code></pre> <p>Let me explain: if HR has status PUSHED, do certain actions on the first iteration only; from subsequent iterations onwards, skip it once the condition has been met.</p> <p><code>continue</code> skips the current iteration, which is not what I'm looking for.</p> <p>Is it possible?</p> <p>Example output:</p> <pre><code>1st iteration: file from BU reached target file from DEV In Progress file from Admin Not Prepared 2nd iteration: file from DEV In Progress file from Admin In Progress </code></pre> <p>and so on..</p>
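A minimal sketch of one common way to get this behaviour (assuming a simplified `statuses` mapping, since the real nested `file_status` structure isn't fully shown): keep a set of items whose PUSHED action has already run, and skip them on later passes.

```python
# Sketch: remember which items have already been handled, so the
# "PUSHED" action runs once per item and later passes skip it.
# `statuses` (item name -> status string) is a simplified stand-in
# for the real nested file_status structure.
handled = set()

def process(statuses):
    for item, status in statuses.items():
        if item in handled:
            continue                      # skip from the 2nd sighting onwards
        if status == "PUSHED":
            print("file from {} reached target..".format(item))
            handled.add(item)             # never act on this item again
        else:
            print("Status of {} is {}".format(item, status))
    # the outer while loop can stop once every expected item is in `handled`
    return handled

process({"HR": "PUSHED", "DEV": "In Progress", "Admin": "Not Prepared"})
process({"HR": "PUSHED", "DEV": "In Progress", "Admin": "In Progress"})
```

The second call prints nothing for HR, matching the "2nd iteration" output in the question.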
<python><python-3.x>
2023-04-05 07:20:37
2
466
Chel MS
75,936,529
1,478,875
Verifying Crowdstrike Webhooks
<p>I am attempting to verify webhooks that are sent from CrowdStrike to my Python application.</p> <p>I know the signature is generated using HmacSHA256, and I have the shared secret.</p> <p>However, I'm not sure what they're hashing to get the signature. I have tried hashing the body of the webhook with the secret, and it does not work. Are they using some other value as a nonce?</p> <p>This is what I have tried:</p> <pre><code>import hmac import hashlib import base64 body = request.get_data() secret = my_secret.encode() hash = hmac.new( secret, body, hashlib.sha256 ).digest() b64_hash = base64.b64encode(hash).decode() </code></pre> <p>Thanks</p>
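For reference, a generic HMAC-SHA256 verification sketch. Exactly which bytes CrowdStrike signs (the raw body alone, a timestamp plus body, etc.) has to come from their webhook documentation, so the signed payload here is an assumption; the comparison itself should always use `hmac.compare_digest`:

```python
import hmac
import hashlib
import base64

def signature_for(body: bytes, secret: bytes) -> str:
    """Base64-encoded HMAC-SHA256 of the raw body (generic pattern --
    whether the service signs exactly these bytes must come from its docs)."""
    digest = hmac.new(secret, body, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

def verify(body: bytes, secret: bytes, received_sig: str) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(signature_for(body, secret), received_sig)
```

If this still doesn't match, the service is most likely signing a different byte string (for example with a timestamp or nonce prepended), which only its documentation can confirm.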
<python><webhooks>
2023-04-05 07:03:45
1
659
mark_3094
75,936,515
14,269,252
Adjust the height of y axes in scatter plot in Plotly instead of having it fixed
<p>I wrote the code as follows using Plotly. This is connected to some filter. The problem is I can have varies number of category (code) on my Y axes.</p> <p>How can I adjust the height and space between two code which is shown in the attached picture if I choose 2 code it should be small, when I choose 50, it must be bigger. I want it to automatically adjust height based on number of selected code instead of having it fixed.</p> <pre><code>def chart(df): color_discrete_map = {'df1': 'rgb(255,0,0)', 'df2': 'rgb(0,255,0)', 'df3': '#11FCE4', 'df4': '#9999FF', 'df5': '#606060', 'df6': '#CC6600'} fig = px.scatter(df, x='DATE', y='CODE', color='SOURCE', width=1200, height=800,color_discrete_map=color_discrete_map) fig.update_layout(xaxis_type='category') fig.update_layout( margin=dict(l=250, r=0, t=0, b=20), ) fig.update_layout(xaxis=dict(tickformat=&quot;%y-%m&quot;, tickmode = 'linear', tick0 = 0.5,dtick = 0.75)) fig.update_xaxes(ticks= &quot;outside&quot;, ticklabelmode= &quot;period&quot;, tickcolor= &quot;black&quot;, ticklen=10, minor=dict( ticklen=4, dtick=7*24*60*60*1000, tick0=&quot;2016-07-03&quot;, griddash='dot', gridcolor='white') ) fig.update_layout(yaxis={&quot;dtick&quot;:1},margin={&quot;t&quot;:0,&quot;b&quot;:0},height=500) </code></pre> <p><a href="https://i.sstatic.net/q7JE7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q7JE7.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/hrIVM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hrIVM.png" alt="enter image description here" /></a></p>
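One way to get the effect (a sketch with assumed pixel values, not a Plotly built-in): compute the figure height from the number of distinct codes before calling `fig.update_layout`, instead of hardcoding `height=500`.

```python
def chart_height(n_codes: int, px_per_code: int = 25,
                 base: int = 120, floor: int = 250) -> int:
    """Pick a figure height that grows with the category count.
    px_per_code, base and floor are assumed values -- tune to taste."""
    return max(floor, base + px_per_code * n_codes)

# In the plotting function, instead of height=500:
# n = df['CODE'].nunique()
# fig.update_layout(height=chart_height(n))
```

With these assumed constants, 2 codes give the 250 px floor and 50 codes give 1370 px, so the row spacing stays roughly constant regardless of the filter selection.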
<python><plotly>
2023-04-05 07:01:04
0
450
user14269252
75,936,392
1,446,710
Asyncio: task got bad yield?
<p>My code here always produces an error message that neither Google nor ChatGPT is able to explain:</p> <pre><code>import asyncio import udiskie import udiskie.config import udiskie.udisks2 async def main(): device = '/dev/mmcblk0p1' config = udiskie.config.Config.from_file() udisks = await udiskie.udisks2.Daemon.create() await udisks.connect() try: await udisks.unmount(device) finally: await udisks.disconnect() if __name__ == '__main__': asyncio.run(main()) </code></pre> <p>When run:</p>
<python><python-3.x><python-asyncio>
2023-04-05 06:43:47
1
2,725
Daniel
75,936,278
8,820,616
Python time or datetime: list all available time zones
<p>Is there a way to list all timezones for the time and datetime libraries in Python,</p> <p>so they can be used as options to get the time in the respective timezone,</p> <p>as in this example:</p> <pre><code>from datetime import datetime from pytz import timezone south_africa = timezone('Africa/Johannesburg') sa_time = datetime.now(south_africa) print(sa_time.strftime('%Y-%m-%d_%H-%M-%S')) </code></pre>
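The standard `time`/`datetime` modules don't ship a list of zone names themselves, but both `pytz` (which the question's code already uses) and the Python 3.9+ standard-library `zoneinfo` module expose the IANA database:

```python
# Python 3.9+: list every IANA zone name from the standard library.
from zoneinfo import available_timezones

names = sorted(available_timezones())
print(len(names))
print([n for n in names if n.startswith("Africa/")][:3])

# With pytz, if it is installed:
try:
    import pytz
    print(len(pytz.all_timezones))     # the full list
    print(pytz.common_timezones[:3])   # a curated, shorter subset
except ImportError:
    pass
```

Any of these names can be passed straight to `pytz.timezone(...)` or `zoneinfo.ZoneInfo(...)` to build the options list.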
<python><datetime><time><timezone>
2023-04-05 06:30:00
1
694
Pradeep Padmanaban C
75,936,149
4,412,885
Convert TensorFlow to ONNX: Current implementation of RFFT or FFT only allows ComplexAbs as consumer not {'Imag', 'Real'}
<p>I am trying to convert this <a href="https://tfhub.dev/google/lite-model/spice/1" rel="nofollow noreferrer">TensorFlow model</a> to <a href="https://github.com/onnx/tensorflow-onnx" rel="nofollow noreferrer">onnx</a>. But I get an error message:</p> <pre><code>&gt; python -m tf2onnx.convert --saved-model .\spice --output model.onnx --opset 11 --verbose ... 2023-04-08 18:33:10,811 - ERROR - tf2onnx.tfonnx: Tensorflow op [Real: Real] is not supported 2023-04-08 18:33:10,812 - ERROR - tf2onnx.tfonnx: Tensorflow op [Imag: Imag] is not supported 2023-04-08 18:33:10,879 - ERROR - tf2onnx.tfonnx: Unsupported ops: Counter({'Real': 6, 'Imag': 6}) ... ValueError: make_sure failure: Current implementation of RFFT or FFT only allows ComplexAbs as consumer not {'Real', 'Imag'} </code></pre> <p>Same happens with the TensorFlow Light (tflight) model:</p> <pre><code>&gt; python -m tf2onnx.convert --opset 16 --tflite .\lite-model_spice_1.tflite --output spice.onnx ... ValueError: make_sure failure: Current implementation of RFFT or FFT only allows ComplexAbs as consumer not {'Imag', 'Real'} </code></pre> <p>I am on Windows 11, Python 3.10.10, TensorFlow 2.12</p> <p>This is my first attempt with TensorFlow / ONNX, so I am unsure where the error comes from.</p> <p><strong>Questions</strong></p> <ul> <li>Is it related to TensorFlow, tf2onnx, or the model?</li> <li>Would it work with another setup (maybe on Linux or other TF version)?</li> <li>How to fix the issue?</li> </ul>
<python><tensorflow><tensorflow-lite><onnx><tf2onnx>
2023-04-05 06:10:40
1
1,277
mihca
75,935,842
11,630,148
Pytest and pytest-django ends up with assert error
<p>I've been dealing with this for the last 30 mins and I just want to move on from it. Basically, the test returns Assertion Error:</p> <pre class="lang-py prettyprint-override"><code>E AssertionError: assert {} == {'rant': 'Kulugo', 'slug': 'apostrophe'} E Right contains 2 more items: E {'rant': 'Kulugo', 'slug': 'apostrophe'} E Full diff: E - {'rant': 'Kulugo', 'slug': 'apostrophe'} E + {} tests/rants/test_serializers.py:26: AssertionError </code></pre> <p>Now tracing back the error in line 26: <code>assert serializer.data == invalid_serializer_data</code></p> <p>I expect everything to pass since this written code is based on a tutorial from testdriven.io</p> <p>Here's the full code:</p> <pre class="lang-py prettyprint-override"><code>from main.serializers import RantSerializer, CategorySerializer def test_valid_rant_serializer(): valid_serializer_data = { 'id': 1, 'title': &quot;First Rant&quot;, 'slug': &quot;first-rant&quot;, } serializer = RantSerializer(data=valid_serializer_data) assert serializer.is_valid() assert serializer.validated_data == valid_serializer_data assert serializer.data == valid_serializer_data assert serializer.errors == {} def test_invalid_rant_serializer(): invalid_serializer_data = { 'rant': 'Kulugo', 'slug': 'apostrophe' } serializer = RantSerializer(data=invalid_serializer_data) assert not serializer.is_valid() assert serializer.validated_data == {} assert serializer.data == invalid_serializer_data assert serializer.errors == {&quot;id&quot;: [&quot;This field is required&quot;]} </code></pre> <p>This is the error:</p> <pre class="lang-py prettyprint-override"><code>FAILED tests/rants/test_serializers.py::test_valid_rant_serializer - AssertionError: assert False FAILED tests/rants/test_serializers.py::test_invalid_rant_serializer - AssertionError: assert {'title': [ErrorDetail(string='This field is required.', code='required')], 'categories': [ErrorDetail(st... </code></pre>
<python><django><pytest><pytest-django>
2023-04-05 05:11:19
1
664
Vicente Antonio G. Reyes
75,935,575
2,246,380
Identify missing data in python dataframe
<p>I have a dataframe that looks like this</p> <pre><code>location_id device_id phase timestamp date day_no hour quarter_hour data_received data_sent 10001 1001 Phase 1 2023-01-30 00:00:00 2023-01-30 1 0 00:00:00 150 98 10001 1001 Phase 1 2023-01-30 00:15:00 2023-01-30 1 0 00:15:00 130 101 10001 1001 Phase 1 2023-01-30 00:45:00 2023-01-30 1 0 00:45:00 121 75 10001 1001 Phase 1 2023-01-30 01:00:00 2023-01-30 1 1 01:00:00 104 110 10001 1001 Phase 1 2023-01-30 01:15:00 2023-01-30 1 1 01:15:00 85 79 10001 1001 Phase 1 2023-01-30 01:30:00 2023-01-30 1 1 01:45:00 127 123 . . . . . . . . . . 10001 1001 Phase 1 2023-02-03 23:30:00 2023-02-03 5 23 23:30:00 100 83 10001 1001 Phase 1 2023-02-03 23:45:00 2023-02-03 5 23 23:45:00 121 75 10001 1005 Phase 2 2023-02-15 02:15:00 2023-02-15 1 2 02:15:00 90 101 10001 1005 Phase 2 2023-02-15 02:30:00 2023-02-15 1 2 02:30:00 111 98 . . . . . . . . . . 10001 1005 Phase 2 2023-02-19 23:15:00 2023-02-19 5 23 23:15:00 154 76 10001 1005 Phase 2 2023-02-19 23:30:00 2023-02-19 5 23 23:30:00 97 101 10003 1010 Phase 2 2023-01-14 00:00:00 2023-01-14 1 0 00:00:00 112 87 10003 1010 Phase 2 2023-01-14 00:15:00 2023-01-14 1 0 00:15:00 130 101 10003 1010 Phase 2 2023-01-14 00:30:00 2023-01-14 1 0 00:30:00 89 91 . . . . . . . . . . 10003 1010 Phase 2 2023-01-18 23:45:00 2023-01-18 5 23 23:45:00 123 117 </code></pre> <p>Each location has various phases that happen one after the other and each phase has a device assigned to it and lasts 5 days. During the 5 days we get the status of device for every 15 mins as row in the dataframe. The dataframe can sometimes have missing rows of data or the device data for a previous phase can be missing too(not uploaded yet) and these missing rows need to be identified. Sometimes the device can be switched on after the phase has started and rows can be missing for the time when the device was switched off and these rows shouldn’t be identified as missing rows of data. 
The output for the above dataframe would be as follows</p> <pre><code>location_id phase day_no hour quarter_hour 10001 Phase 1 1 0 00:30:00 10001 Phase 2 5 23 23:45:00 10003 Phase 1 1 0 00:00:00 10003 Phase 1 1 0 00:15:00 10003 Phase 1 1 0 00:30:00 . . . . . 10003 Phase 1 5 23 23:45:00 </code></pre> <p>The following code works and I can identify all rows the missing of data but the code is very slow and especially below line is the bottleneck of my script.</p> <pre><code>day_quarter_hour_data_check = quarter_hourly_data_df[(quarter_hourly_data_df['location_id'].isin([location_id])) &amp; (quarter_hourly_data_df['phase'].isin([phase])) &amp; (quarter_hourly_data_df['day_no'].isin([cur_day])) &amp; (quarter_hourly_data_df['quarter_hour'].isin([quarter_hour]))]['timestamp'] quarter_hourly_data_df = pd.read_csv('location_quarter_hourly_data.csv') quarter_hourly_data_df = quarter_hourly_data_df.astype({'timestamp':'datetime64[ns]'}) quarter_hourly_data_df['quarter_hour'] = quarter_hourly_data_df['timestamp'].dt.time start_times_data_df = quarter_hourly_data_df.groupby(['location_id','device_id','phase']).agg(start_time=pd.NamedAgg(column=&quot;timestamp&quot;, aggfunc=&quot;min&quot;),day_no=pd.NamedAgg(column=&quot;day_no&quot;, aggfunc=&quot;min&quot;)).reset_index() start_times_data_df['start_time'] = start_times_data_df['start_time'].astype('datetime64[ns]') start_times_data_df['quarter_hour'] = start_times_data_df['start_time'].dt.time location_ids = start_times_data_df['location_id'].unique() location_ids.sort() phases = start_times_data_df['phase'].unique() phases.sort() location_phases_df = quarter_hourly_data_df[['location_id','phase']].drop_duplicates() location_last_phase_df = location_phases_df.groupby(['location_id']).agg(last_phase=pd.NamedAgg(column='phase', aggfunc='max')).reset_index() location_site_df = location_df[['site_id','location_id']].drop_duplicates() location_site_df.rename(columns={&quot;site_id&quot;: &quot;site_no&quot;}, inplace = True) 
quarter_hours= pd.date_range(&quot;00:00&quot;, &quot;23:45&quot;, freq=&quot;15min&quot;).time location_phase_quarter_hour_missing_data = [] for location_id in location_ids: location_last_phase = location_last_phase_df[(location_last_phase_df['location_id'].isin([location_id]))]['last_phase'].values[0] for phase in phases: if int(phase[6]) &lt;= int(location_last_phase[6]): # check if phase exists or not location_phase_data_check = start_times_data_df[(start_times_data_df['location_id'].isin([location_id])) &amp; (start_times_data_df['phase'].isin([phase]))]['start_time'] # if phase exists get start time of phase if not location_phase_data_check.empty: location_start_day = start_times_data_df[(start_times_data_df['location_id'].isin([location_id])) &amp; (start_times_data_df['phase'].isin([phase]))]['day_no'].values[0] location_start_time = start_times_data_df[(start_times_data_df['location_id'].isin([location_id])) &amp; (start_times_data_df['phase'].isin([phase]))]['quarter_hour'].values[0] for day_no in range(5): cur_day = day_no+1 for quarter_hour in quarter_hours: if ((quarter_hour &gt;= location_start_time) and (cur_day &gt;= location_start_day)) : #check if data exists for every quarter hour after start time day_quarter_hour_data_check = quarter_hourly_data_df[(quarter_hourly_data_df['location_id'].isin([location_id])) &amp; (quarter_hourly_data_df['phase'].isin([phase])) &amp; (quarter_hourly_data_df['day_no'].isin([cur_day])) &amp; (quarter_hourly_data_df['quarter_hour'].isin([quarter_hour]))]['timestamp'] # if data doesn't exist add row to missing data if day_quarter_hour_data_check.empty: location_phase_quarter_hour_missing_data.append({'location_id':location_id, 'phase':phase, 'day_no':cur_day, 'hour':pd.to_datetime(quarter_hour, format='%H:%M:%S').hour, 'quarter_hour':quarter_hour}) else: #if phase doesn't exist add all rows to missing data for day_no in range(5): cur_day = day_no+1 for quarter_hour in quarter_hours: 
location_phase_quarter_hour_missing_data.append({'location_id':location_id, 'phase':phase, 'day_no':cur_day, 'hour':pd.to_datetime(quarter_hour, format='%H:%M:%S').hour, 'quarter_hour':quarter_hour}) location_phase_missing_data_df = pd.DataFrame(location_phase_quarter_hour_missing_data) location_phase_missing_data_df.to_csv('location_phase_missing_rows.csv', index=False) </code></pre> <p>Are there other ways that are much faster to identify the missing rows or any way to optimize the code?</p>
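A sketch of a set-based alternative (on a toy frame, since the real CSV isn't available): build the full expected (location, phase, day, quarter-hour) grid once with `MultiIndex.from_product` and subtract the index of the rows that exist, instead of filtering the whole dataframe inside four nested loops. The per-phase start-time cutoff can then be applied with a single merge against `start_times_data_df` rather than per-row checks.

```python
import pandas as pd

# Toy frame: one phase that should cover 2 days x 4 quarter-hours but has gaps.
df = pd.DataFrame({
    "location_id": 10001, "phase": "Phase 1",
    "day_no": [1, 1, 1, 2],
    "quarter_hour": ["00:00", "00:15", "00:45", "00:00"],
})

days = [1, 2]
slots = ["00:00", "00:15", "00:30", "00:45"]

# Full expected grid for every (location, phase) combination...
full = pd.MultiIndex.from_product(
    [df["location_id"].unique(), df["phase"].unique(), days, slots],
    names=["location_id", "phase", "day_no", "quarter_hour"],
)
# ...minus the slots that actually have data:
have = pd.MultiIndex.from_frame(df[["location_id", "phase", "day_no", "quarter_hour"]])
missing = full.difference(have).to_frame(index=False)
print(missing)
```

This replaces the O(rows) boolean scans done once per quarter-hour with one index difference, which is usually orders of magnitude faster on large frames.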
<python><pandas><dataframe><missing-data>
2023-04-05 04:08:25
1
3,091
Ram
75,935,536
1,768,029
How to dynamically access string value?
<p>If I define 2 strings as:</p> <pre><code>Str1 = &quot;select * from tableA&quot; Str2 = &quot;select * from tableB&quot; </code></pre> <p>Now when i = 1, the value of Str1 should be retrieved, and when i = 2, the value of Str2 should be retrieved. How do I dynamically access the string value?</p> <pre><code>Src = &quot;Str&quot; for i in range(1,3): ******** print(new_str) </code></pre> <p>How do I display the value of Str1 first and then Str2 in the above code?</p>
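Rather than synthesizing variable names, the usual approach is to put the strings in a list or dict and index it; a `globals()` lookup also works but is generally discouraged. A sketch:

```python
# Preferred: store the queries in a container keyed by i.
queries = {
    1: "select * from tableA",
    2: "select * from tableB",
}

for i in range(1, 3):
    new_str = queries[i]
    print(new_str)

# Possible but discouraged: look the name up in the module namespace.
Str1 = "select * from tableA"
Str2 = "select * from tableB"
print(globals()["Str" + str(1)])  # same value as Str1
```

The dict version also makes it trivial to iterate, add, or remove queries without touching the loop.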
<python>
2023-04-05 04:01:20
3
425
user1768029
75,935,511
6,534,818
Pandas: How can I use str.extractall with another column as the pattern input?
<p>How can I use another column as my pattern for str.extract/all? The below example uses a hardcoded pattern, but I want Pandas to look at every row in the <code>Pattern</code> column and use that pattern for its extraction search.</p> <pre><code>df = pd.DataFrame({&quot;Pattern&quot;: ['a|c'], &quot;Files&quot;: ['a.csv, b.csv, c.csv, d.csv']}) # explode df['Files'] = df['Files'].str.split(',') df = df.explode(['Files']) # extract df['Expected'] = df['Files'].str.extract(r'([a|d])') # hardcoded # expected Pattern Files Expected 0 a|d a.csv a 0 a|d b.csv NaN 0 a|d c.csv NaN 0 a|d d.csv d </code></pre>
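`str.extract` applies one pattern to the whole column, so a per-row pattern needs a row-wise `apply` with `re`. A sketch using `re.match` (anchored at the start of the file name, mirroring the expected output; note `re.search` would also hit the `c` inside `.csv`):

```python
import re
import pandas as pd

df = pd.DataFrame({"Pattern": ["a|c"], "Files": ["a.csv, b.csv, c.csv, d.csv"]})
df["Files"] = df["Files"].str.split(", ")
df = df.explode("Files")

def first_match(row):
    # each row's own Pattern column drives the regex
    m = re.match(row["Pattern"], row["Files"])
    return m.group(0) if m else None

df["Expected"] = df.apply(first_match, axis=1)
print(df)
```

This is inherently row-wise (one `re` call per row); if all rows of a group share one pattern, grouping by `Pattern` and using vectorized `str.extract` per group is faster.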
<python><pandas>
2023-04-05 03:51:45
1
1,859
John Stud
75,935,442
9,644,490
Calling child method from parent method not working
<p>I have a Parent and Child class as follows:</p> <pre><code>class Parent: def __init__(self): self.__set_config() def __set_config(self): raise NotImplementedError def run(self): pass # do something class Child(Parent): def __set_config(self): self.config = {&quot;foo&quot;: &quot;bar&quot;} </code></pre> <p>The expectation is that, in the parent class's call to <code>__set_config()</code>, the child class's overridden method would be called. What in fact happens is that <code>NotImplementedError</code> is raised instead of the config object being set.</p> <p>Is it not possible to</p> <ul> <li>enforce implementation in a child class of a method,</li> <li>and also call the same method in the parent class?</li> </ul>
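The likely cause is name mangling: a double-underscore method inside `Parent` is compiled to `_Parent__set_config`, so the child's version (mangled to `_Child__set_config`) never overrides it. A single leading underscore (or `abc.abstractmethod`) gives the intended behaviour; a sketch:

```python
class Parent:
    def __init__(self):
        # single underscore: normal attribute lookup, so a child
        # override wins (double underscores would be name-mangled
        # to _Parent__set_config and bypass the override)
        self._set_config()

    def _set_config(self):
        raise NotImplementedError

class Child(Parent):
    def _set_config(self):
        self.config = {"foo": "bar"}

c = Child()
print(c.config)  # {'foo': 'bar'}
```

So both goals are possible; the `__` prefix is specifically designed to *prevent* subclass overrides, which is why it broke here.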
<python>
2023-04-05 03:33:53
0
409
kyuden
75,935,433
424,957
How to calculate a time between two datetimes in Python?
<p>I have two datetimes as below:</p> <pre><code>start=2023/04/05 08:00:00, end=2023/04/05 16:00:00 </code></pre> <p>I want to get a time by</p> <pre><code>lTime = start + (end - start) /3 or lTime = (start * 2 + end) /3 </code></pre> <p>I used code as below</p> <pre><code>lTime = start + timedelta(end - start) / 3 </code></pre> <p>but I got an error. What can I do?</p> <pre><code>unsupported operand type(s) for -: 'builtin_function_or_method' and 'datetime.datetime' </code></pre>
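The error comes from wrapping the difference in `timedelta(...)`: `end - start` already *is* a `timedelta`, and on Python 3 a `timedelta` can be divided by an int directly:

```python
from datetime import datetime

start = datetime(2023, 4, 5, 8, 0, 0)
end = datetime(2023, 4, 5, 16, 0, 0)

l_time = start + (end - start) / 3   # timedelta / int works on Python 3
print(l_time)                        # 2023-04-05 10:40:00
```

(The second formula `(start * 2 + end) / 3` can't work literally, because datetimes can't be multiplied or added to each other; the difference-based form above is the standard way to interpolate between two datetimes.)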
<python><datetime><timedelta>
2023-04-05 03:30:56
1
2,509
mikezang
75,935,429
5,212,614
How to normalize or explode a field from a JSON file?
<p>I'm reading a JSON file; it looks like this.</p> <pre><code> type features 0 FeatureCollection {'type': 'Feature', 'properties': {'GEO_ID': '... 1 FeatureCollection {'type': 'Feature', 'properties': {'GEO_ID': '... 2 FeatureCollection {'type': 'Feature', 'properties': {'GEO_ID': '... 3 FeatureCollection {'type': 'Feature', 'properties': {'GEO_ID': '... 4 FeatureCollection {'type': 'Feature', 'properties': {'GEO_ID': '... </code></pre> <p>I'm trying to figure out how to explode the column named 'features' but I can't get it working quite right.</p> <p>I tried the following code, and it does nothing.</p> <pre><code>import pandas as pd data = pd.read_json('C:\\Users\\ryans\\Desktop\\test.json') print(type(data)) # not working df = pd.json_normalize(data) </code></pre> <p>I also tried this code.</p> <pre><code>import pandas as pd import json from pandas.io.json import json_normalize with open('C:\\Users\\ryans\\Desktop\\test.json') as data_file: data = json.load(data_file) print(data) df = pd.DataFrame(data) </code></pre> <p>Even after running the second code sample, everything is jammed into the field named 'features'. How can I break out all the data in features into separate columns and push everything into a standardized dataframe?</p>
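The file appears to be a GeoJSON FeatureCollection: a single dict whose `features` key holds the list of records, so that list is what should go to `json_normalize` (via `record_path`). A sketch with a minimal stand-in dict, since the real file isn't available (the `GEO_ID`/`NAME` values here are made up):

```python
import pandas as pd

data = {  # minimal stand-in for the file's structure
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "properties": {"GEO_ID": "0500000US01001", "NAME": "Autauga"}},
        {"type": "Feature", "properties": {"GEO_ID": "0500000US01003", "NAME": "Baldwin"}},
    ],
}

# record_path walks into the 'features' list; nested dicts are
# flattened into dotted column names like 'properties.GEO_ID'.
df = pd.json_normalize(data, record_path="features")
print(df.columns.tolist())
```

With the real file, `json.load` the file first (as in the second snippet) and pass the loaded dict the same way; `pd.read_json` treats the top-level dict as columns, which is why `features` stayed jammed into one column.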
<python><json><python-3.x>
2023-04-05 03:30:19
1
20,492
ASH
75,935,363
2,825,403
Adding a column based on condition in Polars
<p>Let's say I have a Polars dataframe like so:</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({ 'a': [0.3, 0.7, 0.5, 0.1, 0.9] }) </code></pre> <p>And now I need to add a new column where 1 or 0 is assigned depending on whether a value in column <code>'a'</code> is greater or less than some threshold. In Pandas I can do this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np THRESHOLD = 0.5 df['new'] = np.where(df.a &gt; THRESHOLD, 0, 1) </code></pre> <p>I can also do something very similar in Polars:</p> <pre class="lang-py prettyprint-override"><code>df = df.with_columns( pl.lit(np.where(df.select('a').to_numpy() &gt; THRESHOLD, 0, 1).ravel()) .alias('new') ) </code></pre> <p>This works fine but I'm sure that using NumPy here is not the best practice.</p> <p>I've also tried something more like:</p> <pre class="lang-py prettyprint-override"><code>df = df.with_columns( pl.lit(df.filter(pl.col('a') &gt; THRESHOLD).select([0, 1])) .alias('new') ) </code></pre> <p>But with this syntax I keep running into the following error:</p> <pre class="lang-py prettyprint-override"><code>DuplicateError Traceback (most recent call last) Cell In[47], line 5 1 THRESHOLD = 0.5 2 DELAY_TOLERANCE = 10 4 df = df.with_columns( ----&gt; 5 pl.lit(df.filter(pl.col('a') &gt; THRESHOLD).select([0, 1])) 6 .alias('new') 7 ) 8 df.head() DuplicateError: column with name 'literal' has more than one occurrences </code></pre> <p>So my question is two-fold: what am I doing wrong here and what is the best practice in Polars for such conditional assignments?</p> <p>I did looks through docs and previous questions but couldn't find anything resembling my use-case.</p>
<python><dataframe><python-polars>
2023-04-05 03:11:16
1
4,474
NotAName
75,935,347
10,051,099
Can I set command output to basepython in tox.ini?
<p>Can I set command output to basepython in tox.ini? I want to do something like this:</p> <pre><code>[tox] envlist = pymain pynext [testenv:pymain] basepython=$(get_python_main_path) [testenv:pynext] basepython=$(get_python_next_path) </code></pre> <p>get_python_main_path and get_python_next_path are shell commands.</p>
<python><tox>
2023-04-05 03:07:23
0
3,695
tamuhey
75,935,293
10,756,025
pynecone cannot get detail information from item in the cards example (grid + foreach)
<p>This is our expected output.<br /> <a href="https://i.sstatic.net/t6Wnl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t6Wnl.png" alt="enter image description here" /></a></p> <p>And this is the current output.<br /> <a href="https://i.sstatic.net/2QPfz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2QPfz.png" alt="enter image description here" /></a></p> <p>And this is the source code for the current output.</p> <pre class="lang-py prettyprint-override"><code>import pynecone as pc def show_items(item): return pc.box( pc.text(item), bg=&quot;lightgreen&quot; ) class ExampleState(pc.State): my_objects = [[&quot;item1&quot;, &quot;desc1&quot;], [&quot;item2&quot;, &quot;desc2&quot;]] print(my_objects) def home(): homeContainer = pc.container( pc.hstack( pc.container( # watch list pc.vstack( pc.container(h=&quot;20px&quot;), pc.hstack( pc.heading(&quot;Example List&quot;, size=&quot;md&quot;), ), pc.grid( pc.foreach(ExampleState.my_objects, show_items), template_columns=&quot;repeat(5, 1fr)&quot;, h=&quot;20vh&quot;, width=&quot;100%&quot;, gap=4, ), justifyContent=&quot;start&quot;, align_items=&quot;start&quot;, ), height=&quot;100vh&quot;, maxWidth=&quot;auto&quot;, ), bg=&quot;#e8e5dc&quot;, ), ) return homeContainer app = pc.App(state=ExampleState) app.add_page(home, route=&quot;/&quot;) app.compile() </code></pre> <p>In this example, we cannot make our expected output. 
In the above example, if we change the code from</p> <pre><code>pc.text(item) </code></pre> <p>to</p> <pre><code>pc.text(item[0]) </code></pre> <p>, then we will get the following error message</p> <pre><code> File &quot;xxxx_some_code.py&quot;, line 47, in &lt;module&gt; app.add_page(home, route=&quot;/&quot;) File &quot;/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/app.py&quot;, line 261, in add_page raise e File &quot;/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/app.py&quot;, line 252, in add_page component = component if isinstance(component, Component) else component() ^^^^^^^^^^^ File &quot;xxxx_some_code.py&quot;, line 26, in home pc.foreach(ExampleState.my_objects, show_items), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/components/layout/foreach.py&quot;, line 48, in create children=[IterTag.render_component(render_fn, arg=arg)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/components/tags/iter_tag.py&quot;, line 71, in render_component component = render_fn(arg) ^^^^^^^^^^^^^^ File &quot;xxxx_some_code.py&quot;, line 5, in show_items pc.text(item[0]), ~~~~^^^ File &quot;/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/var.py&quot;, line 181, in __getitem__ raise TypeError( TypeError: Could not index into var of type Any. (If you are trying to index into a state var, add a type annotation to the var.) 
</code></pre> <p>We also read the document related to <a href="https://pynecone.app/docs/library/layout/grid" rel="nofollow noreferrer">pc.grid</a> and <a href="https://pynecone.app/docs/library/layout/foreach" rel="nofollow noreferrer">pc.foreach</a>.</p> <p>And still have no idea how to fix this issue from the two documents.</p> <p>So, what can we do if we want to get detailed information from the item and show it on the layout?</p>
<python><foreach><grid><python-reflex>
2023-04-05 02:54:13
1
4,045
Milo Chen
75,935,256
16,922,748
How to efficiently apply a function to every row in a dataframe
<p>Given the following table:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'code':['100M','60M10N40M','5S99M','1S25I100M','1D1S1I200M']}) </code></pre> <p>that looks like this:</p> <pre class="lang-none prettyprint-override"><code> code 0 100M 1 60M10N40M 2 5S99M 3 1S25I100M 4 1D1S1I200M </code></pre> <p>I'd like to convert the <code>code</code> column strings to numbers where M, N, D are each equivalent to (times 1), I is equivalent to (times -1) and S is equivalent to (times 0).</p> <p>The result should look like this:</p> <pre class="lang-none prettyprint-override"><code> code Val 0 100M 100 This is (100*1) 1 60M10N40M 110 This is (60*1)+(10*1)+(40*1) 2 5S99M 99 This is (5*0)+(99*1) 3 1S25I100M 75 This is (1*0)+(25*-1)+(100*1) 4 1D1S1I200M 200 This is (1*1)+(1*0)+(1*-1)+(200*1) </code></pre> <p>I wrote the following function to this:</p> <pre class="lang-py prettyprint-override"><code>def String2Val(String): # Generate substrings sstrings = re.findall('.[^A-Z]*.', String) KeyDict = {'M':'*1','N':'*1','I':'*-1','S':'*0','D':'*1'} newlist = [] for key, value in KeyDict.items(): for i in sstrings: if key in i: p = i.replace(key, value) lp = eval(p) newlist.append(lp) OutputVal = sum(newlist) return OutputVal df['Val'] = df.apply(lambda row: String2Val(row['code']), axis = 1) </code></pre> <p>After applying this function to the table, I realized it's not efficient and takes forever when applied to large datasets. How can I optimize this process?</p>
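A sketch of a faster approach: tokenize each code once with a compiled regex into (count, letter) pairs and sum count-times-weight with a dict lookup, avoiding `eval` and the nested loops entirely:

```python
import re
import pandas as pd

WEIGHTS = {"M": 1, "N": 1, "D": 1, "I": -1, "S": 0}
TOKEN = re.compile(r"(\d+)([MNDIS])")   # e.g. '60M10N40M' -> [('60','M'), ('10','N'), ('40','M')]

def code_value(code: str) -> int:
    return sum(int(n) * WEIGHTS[op] for n, op in TOKEN.findall(code))

df = pd.DataFrame({"code": ["100M", "60M10N40M", "5S99M", "1S25I100M", "1D1S1I200M"]})
df["Val"] = df["code"].map(code_value)
print(df)
```

`Series.map` over a plain function with one compiled regex per row is already far cheaper than `df.apply(..., axis=1)` plus `eval`; for very large frames, `str.findall` followed by an explode/groupby-sum is another purely vectorized option.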
<python><pandas><string><performance>
2023-04-05 02:43:10
5
315
newbzzs
75,935,251
262,875
Discord.py: How to get access to self in a check's predicate function?
<p>I've spent at least a day trying to find an answer to the question, including messing around with nested decorators for at least a couple hours but I don't seem to be able to figure it out.</p> <p>I am using Discord.py and it's <code>commands.check</code> &quot;decorator generator&quot;, but I think the problem (and likely it's solution) might not be really Discord.py specific.</p> <p>The problem statement is as follows:</p> <p>I would like to declare a Discord.py <code>check</code> on a <code>Cog</code>, but need access to the Cog's instance to do some data fetching to resolve the check.</p> <p>Now, Discord.py's syntax for declaring a check is to call the commands.check function given a predicate function, which in turn returns a decorator that can be applied to any of the Cog's methods, like</p> <pre><code>def custom_check(): async def predicate(ctx): # I'll need to do some data fetching here... return True return commands.check(predicate) class MyCog(commands.Cog): @commands.command() @custom_check() async func(self, ctx): pass async def some_data_fetching(): pass </code></pre> <p>I've been trying to figure out how to be able to solve this, but can't wrap my head around it. I've been exploring how to create a wrapping closure that has access to self, but</p> <ol> <li>I can't get it to work ('predicate called' never prints) and</li> <li>I still can't figure out how to inject self into the <code>predicate</code> function -- <code>commands.check</code> needs to be called (and predicate passed into it) before any method would be getting invoked and self could become available.</li> </ol> <pre><code>def custom_check(): async def predicate(ctx): print('predicate called') return False check_decorator = commands.check(predicate) def custom_decorator(f): async def wrapper(self, *args, **kwargs): return await f(self, *args, **kwargs) return wrapper return lambda f: check_decorator(custom_decorator(f)) </code></pre> <p>Any help or advice greatly appreciated.</p>
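In discord.py, the `Context` passed to a check on a cog command exposes the cog instance as `ctx.cog` (worth verifying against the docs for your installed version), so the predicate can simply `await ctx.cog.some_data_fetching()` without any decorator gymnastics. A plain-Python analogue of that pattern (`Ctx` here is a stand-in for discord's `Context`):

```python
import asyncio

class Ctx:
    """Stand-in for discord.py's Context, which carries the cog instance."""
    def __init__(self, cog):
        self.cog = cog

def custom_check():
    async def predicate(ctx):
        # `self` is reachable as ctx.cog at invocation time, which is
        # why the predicate never needs self at decoration time.
        return await ctx.cog.some_data_fetching()
    return predicate  # discord.py would wrap this: commands.check(predicate)

class MyCog:
    async def some_data_fetching(self):
        return True

async def main():
    pred = custom_check()
    return await pred(Ctx(MyCog()))

print(asyncio.run(main()))  # True
```

The key point is that the decorator only stores the predicate; the instance becomes available later, through the context object, when the command is actually invoked.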
<python><discord.py><python-decorators>
2023-04-05 02:42:09
1
11,089
Daniel Baulig
75,935,049
6,676,101
When does a regex interpret something as a string of many characters and when does a regex use a single letter or other character?
<p>When I look at the regular expression <code>(foo|bar|baz)</code> I get confused.</p> <p>Does the regex say:</p> <ul> <li><code>fo</code>, followed by <code>o</code> or <code>b</code>, followed by exactly one letter <code>a</code>, followed by <code>r</code> or <code>b</code>, followed by the string <code>az</code>?</li> <li><code>foo</code> or <code>bar</code> or <code>baz</code></li> </ul> <p>When does a regex infix operator such as vertical pipe <code>|</code></p> <ol> <li>use only <strong>one</strong> character to the left of the operator and <strong>one</strong> character to the right of the operator.</li> <li>use a string of one or <strong>more</strong> characters to the left of the operator and use a string of one or <strong>more</strong> characters to the right of the operator.</li> </ol> <p>Assume we are using Python if the ambiguity in regex flavor is a problem.</p>
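For reference, a quick check in Python confirms the second reading: `|` has the lowest precedence of the regex operators, so each alternative is the entire sequence up to the enclosing group boundary (or the pattern edges), not a single character:

```python
import re

pattern = re.compile(r"(foo|bar|baz)")

# Each alternative is the whole word, so "fob" does not match:
matches = pattern.findall("foo bar baz fob")

# To alternate over single characters, narrow the scope explicitly,
# e.g. with a character class:
narrow = re.compile(r"ba[rz]")
assert narrow.match("bar") and narrow.match("baz")
```

Here `matches` contains only the three whole words, and a string like `"faz"` does not match the pattern at all.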
<python><regex>
2023-04-05 01:41:30
2
4,700
Toothpick Anemone
75,934,965
14,892,434
Celery task stays running forever, but completes successfully
<p>My Task:</p> <pre class="lang-py prettyprint-override"><code>@app.task(ignore_result=True) def leaks_scan(target: str, remove_type: str = &quot;old&quot;, **kwargs): url = settings.API_URL + &quot;leaks&quot; skip = 0 limit = 1000 max_limit = 5000 if remove_type == &quot;all&quot;: remove_all_data(Leak, target) scan_id = None target_id = None while True: data = get_data(url, target, skip=skip, limit=limit) count = data.get(&quot;count&quot;) data = data.get(&quot;data&quot;, []) if not data: if not count: remove_all_data(Leak, target) break break scan_id, target_id = load_leak_data(data) skip += limit if skip &gt; max_limit: break if remove_type == &quot;old&quot; and scan_id and target_id: remove_old_data(Leak, scan_id, target_id) print(&quot;Leak scan completed&quot;) </code></pre> <p>Notice that this task has a print statement that prints <code>Leak scan completed</code> in the last line</p> <h3>Logs:</h3> <p>Task received:</p> <p><img src="https://user-images.githubusercontent.com/54645034/229554172-7f67bbba-7172-4c0f-bb1f-6bd94591a631.png" alt="resim" /></p> <p>Task finished (debug print line):</p> <p><img src="https://user-images.githubusercontent.com/54645034/229554419-ed99a734-4c41-4d33-a7b5-07eca892b391.png" alt="resim" /></p> <p>The task works as expected and completes successfully. 
However, when I check the Celery worker logs, the task stays in the running state indefinitely, even though it has been completed:</p> <p><img src="https://user-images.githubusercontent.com/54645034/229554760-5bb475ad-1403-4b16-bb9d-76b2bf3feba5.png" alt="resim" /></p> <p>Can anyone suggest a solution or provide any guidance on how to troubleshoot this issue?</p> <p>Thanks in advance for your help.</p> <h2>Environment &amp; Settings</h2> <p><strong>Celery version</strong>:</p> <b><code>celery report</code> Output:</b> <p> <pre><code>software -&gt; celery:5.2.7 (dawn-chorus) kombu:5.2.4 py:3.11.2 billiard:3.6.4.0 py-amqp:5.1.1 platform -&gt; system:Linux arch:64bit, ELF kernel version:5.15.90.1-microsoft-standard-WSL2 imp:CPython loader -&gt; celery.loaders.default.Loader settings -&gt; transport:amqp results:disabled deprecated_settings: None </code></pre> </p> <h3>Broker &amp; Result Backend Settings</h3> <pre><code>CELERY_BROKER_URL=&quot;redis://localhost:6379/0&quot; CELERY_RESULT_BACKEND=&quot;redis://localhost:6379/0&quot; </code></pre>
<python><django><celery>
2023-04-05 01:15:32
1
305
OmerFI
75,934,860
19,157,137
View all of the built in functions that take in a particular data type in Python
<p>How would I be able to view all of the built-in functions that take a particular object (<code>list, str, int, float</code>, ...) as a parameter? For example, some functions that accept a <code>list</code> are <code>max(), min(), len() ....</code>. I can view all of the methods simply with <code>dir(object)</code>, like <code>dir(list)</code>. How would I be able to do the same for functions?</p>
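There is no registry mapping types to the built-ins that accept them, but one can brute-force it: call every plain function in the <code>builtins</code> module with a sample value and keep those that do not raise <code>TypeError</code>. This is only a heuristic sketch (a few interactive built-ins such as <code>input</code> and <code>breakpoint</code> must be skipped, and some functions accept anything at all):

```python
import builtins

# Interactive or process-control builtins we must not actually call:
SKIP = {"input", "breakpoint", "exit", "quit", "help", "open",
        "print", "license", "copyright", "credits"}

def builtins_accepting(value):
    accepted = []
    for name in dir(builtins):
        obj = getattr(builtins, name)
        if name in SKIP or name.startswith("_"):
            continue
        if not callable(obj) or isinstance(obj, type):
            continue  # skip classes like int/list, keep plain functions
        try:
            obj(value)
        except TypeError:
            continue  # rejected this type (or this argument count)
        except Exception:
            pass      # accepted the type, failed for this value
        accepted.append(name)
    return accepted

names = builtins_accepting([3, 1, 2])
```

For a list, `names` ends up containing `len`, `max`, `min`, `sorted`, `sum`, `iter` and friends, while `abs` is correctly excluded. Note the heuristic is fuzzy: `hash` is excluded only because lists are unhashable, not because its signature rejects them.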
<python><function><object>
2023-04-05 00:42:20
0
363
Bosser445
75,934,857
18,551,983
generate a new dictionary based on list of values
<p>I have an input dictionary d1 and list l1 and want to generate output dictionary d2.</p> <pre><code>d1 = {'A1':['b1','b2','b3'], 'A2':['b2', 'b3'], 'A3':['b1', 'b5']} l1 = ['b2', 'b5', 'b1', 'b3'] </code></pre> <p>Output dictionary</p> <pre><code>d2 = {'b2':['A1','A2'], 'b5':['A3'], 'b1':['A1','A3'], 'b3':['A1','A2']} </code></pre> <p>In the output dictionary, all values of list l1 act as keys. For the values of dictionary d2, we search for the particular key in the values of d1; if the key is present, we select the corresponding key from the dictionary d1. For example, for key b2, we search it in the dictionary values; as it is present in the values of the 'A1' and 'A2' keys, we select 'A1' and 'A2' from d1. Is there any way to do it?</p>
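The inversion described above can be done with a single dict comprehension: for each key of `l1`, collect the `d1` keys whose value list contains it. A one-pass variant avoids rescanning `d1` for every key:

```python
from collections import defaultdict

d1 = {'A1': ['b1', 'b2', 'b3'], 'A2': ['b2', 'b3'], 'A3': ['b1', 'b5']}
l1 = ['b2', 'b5', 'b1', 'b3']

# Direct reading of the specification (scans d1 once per key of l1):
d2 = {k: [name for name, values in d1.items() if k in values] for k in l1}

# For large inputs, invert d1 once and then look keys up:
inverted = defaultdict(list)
for name, values in d1.items():
    for v in values:
        inverted[v].append(name)
d2_fast = {k: inverted[k] for k in l1}
```

Both versions produce `{'b2': ['A1', 'A2'], 'b5': ['A3'], 'b1': ['A1', 'A3'], 'b3': ['A1', 'A2']}`; the second does a single pass over all of `d1`'s values.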
<python><python-3.x><list><dictionary>
2023-04-05 00:41:24
1
343
Noorulain Islam
75,934,851
10,941,410
How to make sure list defined in class variable objects are not shared across different instances?
<p>I have the following:</p> <pre class="lang-py prettyprint-override"><code>class Loop: def __init__(self, group_class: Type[SegmentGroup], start: str, end: str): self.group_class = group_class self.start = start self.end = end self._groups: List[SegmentGroup] = [] def from_segments(self, segments): self._groups = [] # have to clear so this is not shared among other `Article` instances for segment in segments: group = self.group_class() if some_conditions_are_valid and after_have_populated_group: self._groups.append(group) class SegmentGroupMetaClass(type): def __new__(cls, name, bases, attrs): exclude = set(dir(type)) components: Dict[str, str] = {} loops: Dict[str, str] = {} for k, v in attrs.items(): if k in exclude: continue if isinstance(v, Segment): components[v.unique_tag] = k elif isinstance(v, Loop): loops[v.start] = k attrs.update(__components__=components, __loops__=loops) return super().__new__(cls, name, bases, attrs) class SegmentGroup(metaclass=SegmentGroupMetaClass): def from_segments(self, ...): &quot;&quot;&quot;May call `Loop.from_segments` </code></pre> <p>And then I have:</p> <pre class="lang-py prettyprint-override"><code>class Topic(SegmentGroup): infos = Segment(&quot;CAV&quot;) ... class Article(SegmentGroup): topics = Loop(Topic, ...) ... </code></pre> <p>I have a routine that essentially calls <code>Article().from_segments(segments)</code> where <code>segments</code> is injected from another service in a loop that creates a new <code>Article</code> instance on every new iteration.</p> <p>As you may have noticed, <code>Loop._groups</code> must be properly handled in order to not have its values shared among different <code>Article</code> instances. 
My (hacky) solution works fine if we call <code>Loop.from_segments</code> for all iterations, but this is not guaranteed as the <code>topics</code> segment loop may be missing in the injected <code>segments</code>.</p> <p>This means that for an <code>Article</code> without the <code>topics</code> segments, it'll actually use the values from the previous iteration (because <code>Loop._groups</code> won't be cleared since <code>Loop.from_segments</code> won't be called).</p> <p>I can think in a way to fix this by clearing <code>Loop._groups</code> at <code>Article.__init__</code>...</p> <pre><code>class SegmentGroup(metaclass=SegmentGroupMetaClass): def __init__(self) -&gt; None: for loop_label in self.__loops__.values(): setattr(getattr(self, loop_label), &quot;_groups&quot;, []) </code></pre> <p>...but it looks even hackier and non-elegant/less efficient to me.</p> <p>How would you clear <code>Loop._groups</code> for every new <code>Article</code> instance?</p>
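One non-hacky alternative is to stop storing the groups on the shared `Loop` object at all and key them by the owning instance instead, e.g. in a `WeakKeyDictionary`. Each `Article` then starts with a fresh empty list automatically, whether or not `from_segments` runs for it, and no clearing in `__init__` is needed. A minimal stand-alone sketch (the `startswith` matching is a placeholder for the real segment logic, and the method signatures are simplified assumptions):

```python
import weakref

class Loop:
    def __init__(self, start):
        self.start = start
        # Per-owner storage: the Loop object is shared, the lists are not.
        self._groups = weakref.WeakKeyDictionary()

    def groups_for(self, owner):
        # A fresh empty list the first time an owner instance shows up.
        return self._groups.setdefault(owner, [])

    def from_segments(self, owner, segments):
        groups = self._groups[owner] = []  # always rebuilt, never inherited
        groups.extend(s for s in segments if s.startswith(self.start))
        return groups

class Article:
    topics = Loop("topic")

a1, a2 = Article(), Article()
Article.topics.from_segments(a1, ["topic-1", "other", "topic-2"])
g1 = Article.topics.groups_for(a1)
g2 = Article.topics.groups_for(a2)  # fresh list: nothing leaks from a1
```

Because the dictionary holds weak references, group lists are also released automatically when an `Article` is garbage collected.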
<python><metaclass>
2023-04-05 00:39:54
3
305
Murilo Sitonio
75,934,631
3,380,902
Error connecting to aws redshift using dask
<p>I am using <code>dask</code> to connect to AWS Redshift and query the db. I run into an error when attempting to pass a connection string to <code>read_sql_query</code> method.</p> <p><code>config.py</code></p> <pre><code>import os # Create dictionary of environment variables env_var_dict = { # Amazon Redshift 'host':'xxxx.us-east-1.redshift.amazonaws.com', 'database':'db1', 'port':'5439', 'user':'user1', 'password':'xxxxx' } # set environment variables os.environ['host'] = env_var_dict['host'] os.environ['database'] = env_var_dict['database'] os.environ['port'] = env_var_dict['port'] os.environ['user'] = env_var_dict['user'] os.environ['password'] = env_var_dict['password'] # connect to aws redshift cluster import redshift_connector conn = redshift_connector.connect( host=os.environ['host'], database=os.environ['database'], port=int(os.environ['port']), user=os.environ['user'], password=os.environ['password'] ) import sqlalchemy as sa host=os.environ['host'], database=os.environ['database'], port=int(os.environ['port']), user=os.environ['user'], password=os.environ['password'] conn_str = f'redshift+redshift_connector://{user}:{password}@{host}:{port}/{database}' # dask import dask.dataframe as dd &quot;redshift+redshift_connector://('user',):pwd@hostname,):('5439',)/('tracking',)&quot; # Query table using dask dataframe query = ''' SELECT * FROM tbl WHERE type = 'xxx' and created_at &gt;= '2023-01-01 00:00:00' and created_at &lt;= '2023-12-01 00:00:00' ''' df = dd.read_sql_query(query, conn_str, index_col = 'id') ValueError: invalid literal for int() with base 10: &quot;('5439',)&quot; --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File &lt;command-2539550446659032&gt;:10 1 # Query table using dask dataframe 2 query = ''' 3 SELECT * 4 FROM pmf (...) 
7 and created_at &lt;= '2023-12-01 00:00:00' 8 ''' ---&gt; 10 df = dd.read_sql_query(query, conn_str, index_col = 'id') File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dask/dataframe/io/sql.py:107, in read_sql_query(sql, con, index_col, divisions, npartitions, limits, bytes_per_chunk, head_rows, meta, engine_kwargs, **kwargs) 104 raise TypeError(&quot;Must supply either 'divisions' or 'npartitions', not both&quot;) 106 engine_kwargs = {} if engine_kwargs is None else engine_kwargs --&gt; 107 engine = sa.create_engine(con, **engine_kwargs) </code></pre> <p>I have tried to pass <code>port</code> as <code>int</code> and <code>str</code>. How do I connect to aws redshift and run a query in dask?</p>
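The `ValueError: invalid literal for int() with base 10: "('5439',)"` points at the actual bug: the trailing commas on the assignment lines turn every value into a 1-tuple, so the f-string interpolates tuple reprs into the URL. A sketch of the fix (the connection details are placeholders):

```python
# A trailing comma silently builds a 1-tuple -- exactly what the
# traceback shows with "('5439',)":
port = '5439',
assert port == ('5439',)  # this is a tuple, not a string

# Without the commas each value is a plain string:
host, database, port, user, password = (
    'xxxx.us-east-1.redshift.amazonaws.com', 'db1', '5439', 'user1', 'xxxxx'
)
conn_str = f'redshift+redshift_connector://{user}:{password}@{host}:{port}/{database}'
```

With plain strings, SQLAlchemy can parse the URL and convert the port to an `int` itself, so passing `port` as `str` here is fine.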
<python><sqlalchemy><amazon-redshift><dask>
2023-04-04 23:38:13
1
2,022
kms
75,934,545
15,178,267
Django: How to loop through query set and change the boolean field to True in django?
<p>I am trying to loop through a queryset in Django and turn all the boolean fields in the queryset to true.</p> <p>I want to turn these booleans to True on the success_page, so I tried doing it like this</p> <pre><code>@login_required def PaymentSuccessView(request): ... order = get_object_or_404(CartOrder, stripe_payment_intent=session.id) cart_order_items = CartOrderItem.objects.filter(order=order) for c in cart_order_items: c.paid = True c.save() return render(request, &quot;payment/payment_success.html&quot;, {&quot;order&quot;: order}) </code></pre> <p>The problem is that this still does not work, even after I get redirected to the success page on a successful transaction.</p> <hr /> <p>I also tried doing this in the model. I am not sure if it's the right way to do this, but I tried it and it still did not work</p> <pre><code>class CartOrderItem(models.Model): order = models.ForeignKey(CartOrder, on_delete=models.CASCADE) vendor = models.ForeignKey(Vendor, on_delete=models.SET_NULL, null=True) paid = models.BooleanField(default=False) .... def save(self, *args, **kwargs): if self.order.payment_status == &quot;paid&quot;: self.paid = True return super(CartOrderItem, self).save(*args, **kwargs) </code></pre> <p>These are my model fields</p> <pre><code>class CartOrderItem(models.Model): order = models.ForeignKey(CartOrder, on_delete=models.CASCADE) vendor = models.ForeignKey(Vendor, on_delete=models.SET_NULL, null=True) paid = models.BooleanField(default=False) ... class CartOrder(models.Model): vendor = models.ManyToManyField(Vendor) buyer = models.ForeignKey(User, on_delete=models.SET_NULL, null=True, related_name=&quot;buyer&quot;) stripe_payment_intent = models.CharField(max_length=200,null=True, blank=True) </code></pre>
<python><django><django-rest-framework><django-views>
2023-04-04 23:19:05
1
851
Destiny Franks
75,934,493
13,916,049
Snakemake workflow error: tuple index out of range
<p>I want to create a reproducible workflow using <a href="https://snakemake.readthedocs.io/en/stable/tutorial/basics.html" rel="nofollow noreferrer">Snakemake</a>, which is built on top of Python.</p> <p>The <code>SAMPLES</code> is a list of strings in the <code>&quot;./input/SRR_Acc_List.txt&quot;</code> file and it matches to a substring of the <code>fastq</code> filenames.</p> <p>An answer <a href="https://bioinformatics.stackexchange.com/questions/2761/how-to-resolve-in-snakemake-error-target-rules-may-not-contain-wildcards">here</a> suggested that a master rule <code>rule all:</code> should be used. But it raised an error.</p> <p>First,</p> <pre><code>with open(&quot;./input/SRR_Acc_List.txt&quot;) as f: SAMPLES = ' '.join([line.rstrip() for line in f]) rule all: input: expand(html=&quot;qc/pretrim_fastqc/{sample}*.html&quot;, sample=SAMPLES.split(' ')), expand(zip=&quot;qc/pretrim_fastqc/{sample}*_fastqc.zip&quot;, sample=SAMPLES.split(' ')), &quot;plots/quals.svg&quot; # Step 1: Pre-trim Quality Check rule pretrim_fastqc: input: &quot;./input/{sample}*.fastq&quot; output: html=&quot;qc/pretrim_fastqc/{sample}*.html&quot;, zip=&quot;qc/pretrim_fastqc/{sample}*_fastqc.zip&quot; params: &quot;--quiet&quot; log: &quot;logs/pretrim_fastqc/{sample}.log&quot; threads: 1 wrapper: &quot;v1.25.0/bio/fastqc&quot; </code></pre> <p>Traceback:</p> <pre><code>tuple index out of range </code></pre> <p>SAMPLES:</p> <pre><code>SRR9200813 SRR9200814 SRR9200815 </code></pre> <p>Fastq files:</p> <p>SRR9200813_1.fastq SRR9200813_2.fastq SRR9200814_1.fastq SRR9200814_2.fastq SRR9200815_1.fastq SRR9200815_2.fastq</p>
<python><snakemake>
2023-04-04 23:08:26
1
1,545
Anon
75,934,421
4,560,996
What is Most Efficient Way to Split Python List
<p>I have a vector that is always a multiple of the objects <code>M</code> and <code>K</code>. In the example below, with <code>M=2</code> and <code>K=3</code>, the vector is of length 6. What I want to find is the best way to use the values of <code>M</code> and <code>K</code> to split my list so it results in a pandas dataframe with <code>M</code> columns and <code>K</code> rows in each column.</p> <p>I can see a way to loop over the list and grab and organize the stuff I need. However, I'm assuming there is a more efficient way and hoping someone can help point that out.</p> <pre><code>import pandas as pd result = [1,2,3,4,5,6] M = 2 K = 3 A = pd.DataFrame(result[0:3]) B = pd.DataFrame(result[3:6]) final = pd.concat([A.reset_index(drop = True), B], axis = 1) </code></pre>
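Since the list length is always `M*K`, a reshape plus transpose avoids the manual slicing and `concat` entirely: each of the `M` consecutive chunks of `K` values becomes one column.

```python
import numpy as np
import pandas as pd

result = [1, 2, 3, 4, 5, 6]
M, K = 2, 3

# reshape(M, K) lays out M rows of K values; transposing turns
# each chunk into a column of K rows:
final = pd.DataFrame(np.asarray(result).reshape(M, K).T)
```

This yields a `(K, M)`-shaped frame whose first column is `1, 2, 3` and second is `4, 5, 6`, matching the `concat`-based version, and it scales to any `M` and `K` without extra code.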
<python><pandas><numpy>
2023-04-04 22:51:24
2
827
dhc
75,934,399
5,054,505
How can I efficiently read paths stored in a column of a pyspark dataframe?
<p>The input to my process is a structured dataset where one column is a GCS path to a blob, something like:</p> <pre class="lang-py prettyprint-override"><code>df = spark.createDataFrame( [(&quot;gs://my/bucket/image.jpg&quot;,), (&quot;gs://my/bucket/image2.jpg&quot;,)], [&quot;image_path&quot;] ) </code></pre> <p>I want to read the (binary) data from the URIs in that column, and store that data in a new column. Something like:</p> <pre class="lang-py prettyprint-override"><code>import pyspark.sql.functions as F df.withColumn(&quot;image_blob&quot;, magic_read_function(F.col(&quot;image_path&quot;))) </code></pre> <p>Is there an efficient way to do this? I basically want to tell spark to batch the reads as much as possible, but without shuffling or collecting to a single worker... if possible.</p> <p>Some options I have considered are:</p> <ul> <li>I could get the list of paths with <code>paths = [row.image_path for row in df.collect()]</code> and then read in the list. This will force a shuffle, and gather all paths onto a single worker though, which I'd like to avoid.</li> <li>I could make a UDF that reads from the path, but then I get network overhead for each path being read, rather than batching the files to each worker.</li> <li>I could read all of the files from the parent bucket, and do an inner join on the URI column(<code>image_path</code> in this example). Then I still have to do a shuffle for the inner join though, and I have to assume all input paths are coming from the same bucket (this is typically the case for this process, but there's nothing currently guaranteeing it. I'd like to avoid adding this restriction to the process if possible)</li> </ul>
<python><image><apache-spark><pyspark>
2023-04-04 22:46:37
1
610
Patrick
75,934,397
9,927,519
Is it possible to create a TypedDict equivalent with its own methods and properties in Python?
<p>I like to work with <a href="https://docs.python.org/3/library/typing.html#typing.TypedDict" rel="nofollow noreferrer"><code>TypedDict</code></a>s to convey the batch and item information in PyTorch. It works well with the default collate function and adds the possibility to add types.</p> <p>What I don't like is that you need to call it with strings keys, that is error prone, and that you can't define methods.</p> <p>I usually overcome that by embedding the definition of the dicts as well as the keys and methods in a module. For example:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypedDict class Keys: source = &quot;source&quot; encoding = &quot;encoding&quot; prediction = &quot;prediction&quot; class Batch(TypedDict): source: list[str] encoding: Tensor # Shape NCHV prediction: Tensor # Shape N class Item(TypedDict): source: str encoding: Tensor # CHW result: float def item_at(index: int, batch: Batch): &quot;&quot;&quot;Extract an item from the batch&quot;&quot;&quot; return Item(...) def get_source(x: Batch | Item): return x[Keys.source] def set_source(x: Batch | Item, value: str | List[str]): x[Keys.source] = value </code></pre> <p>I think it could be more interesting to embed the methods into the <code>Batch</code> and <code>Item</code> classes.</p> <p>However that's not allowed by <code>TypedDict</code>. On the other hand, when I inherit from <code>dict</code>, I don't know how to add the types.</p> <p>Do you know how to to do the trick and have a <code>TypedDict</code> equivalent with its own methods and properties?</p>
<python><dictionary><pytorch><python-typing><typeddict>
2023-04-04 22:46:14
0
3,287
Xiiryo
75,934,322
12,043,177
Neo4j python driver results don't contain edge data
<p>This is my Cypher query and its equivalent result in neo4j Browser</p> <blockquote> <p>MATCH (a)-[b:REFERENCE]-&gt;(c) RETURN a,b,c LIMIT 200</p> </blockquote> <p><a href="https://i.sstatic.net/B1OZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B1OZP.png" alt="enter image description here" /></a></p> <p>The b column (or key in JSON) contains the relationship and all the data that is associated with it. This is my ideal response.</p> <p>However running the same query in my Python API using the neo4j driver(==5.7.0) gives me the following result</p> <p><a href="https://i.sstatic.net/kgHkm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kgHkm.png" alt="enter image description here" /></a></p> <p>The 'b' column now gives me redundant and incomplete information</p> <p>Following is a snippet of the API code</p> <pre><code>with self.driver.session() as session: return [record.data() for record in session.run(query)] </code></pre> <p><strong>What I tried:</strong></p> <ol> <li>Using different methods for Record object as seen <a href="https://neo4j.com/docs/api/python-driver/current/api.html#record" rel="nofollow noreferrer">here</a></li> <li>Using different methods for Response object as seen <a href="https://neo4j.com/docs/api/python-driver/current/api.html#result" rel="nofollow noreferrer">here</a></li> <li>Changing query to <blockquote> <p>MATCH p = ()-[:REFERENCE]-&gt;() RETURN p LIMIT 200</p> </blockquote> </li> </ol> <p>None of this yielded different results</p>
<python><neo4j><neo4j-python-driver>
2023-04-04 22:29:18
2
974
kenneth-rebello
75,934,227
1,552,698
How to flatten list of dictionaries
<p>I am learning how to traverse lists and dictionaries in Python. I am a beginner.</p> <pre><code>process = [ { 'process1': [ {&quot;subprocess1&quot;:[&quot;subprocess1_1&quot;,&quot;subprocess1_2&quot;]}, &quot;subprocess2&quot;, {&quot;subprocess3&quot;:[&quot;subprocess3_1&quot;, &quot;subprocess3_2&quot;]}, &quot;subprocess4&quot;, {&quot;subprocess5&quot;:[{&quot;subprocess5_1&quot;:[&quot;subprocess5_1_1&quot;,&quot;subprocess5_1_2&quot;]}]}, ], }, { 'process2': [ &quot;subprocess2_1&quot; ] } ] </code></pre> <p>How do I flatten above list of dictionaries into the following?</p> <pre><code>process1 = [subprocess1, subprocess1_1, subprocess1_2, subprocess2, subprocess3, subprocess3_1, subprocess3_2, subprocess4, subprocess5, subprocess5_1, subprocess5_1_1, subprocess5_1_2] process2 = [subprocess2_1] </code></pre>
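A small recursive generator handles the arbitrary nesting: dict keys are emitted before their children, list items are walked in order, and plain strings are emitted as-is. Building the two requested lists is then a dict comprehension over the top-level entries:

```python
def flatten(node):
    """Depth-first walk yielding dict keys, then their children, in order."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield key
            yield from flatten(value)
    elif isinstance(node, list):
        for item in node:
            yield from flatten(item)
    else:
        yield node

process = [
    {'process1': [
        {"subprocess1": ["subprocess1_1", "subprocess1_2"]},
        "subprocess2",
        {"subprocess3": ["subprocess3_1", "subprocess3_2"]},
        "subprocess4",
        {"subprocess5": [{"subprocess5_1": ["subprocess5_1_1",
                                            "subprocess5_1_2"]}]},
    ]},
    {'process2': ["subprocess2_1"]},
]

flat = {name: list(flatten(steps))
        for entry in process
        for name, steps in entry.items()}
```

`flat["process1"]` and `flat["process2"]` are the two flattened lists from the question (as lists of strings rather than bare names).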
<python><list><dictionary>
2023-04-04 22:11:11
3
597
user1552698
75,934,176
5,692,012
Can't recover memory from deleted objects in Python
<p>I'm working on a project where I temporarily store objects in lists. But it seems that even when all references to these objects are deleted, the memory isn't freed, which leads to very high RAM usage in my process.</p> <p>I've tried to reproduce a simple example and the results are indeed very strange; I can't explain them.</p> <p>Here's a simple <code>bench.py</code></p> <pre class="lang-py prettyprint-override"><code>import psutil import os import gc import sys import numpy as np mod = int(sys.argv[1]) process = psutil.Process(os.getpid()) print(f&quot;Empty memory: {process.memory_info().rss/1e6:.1f}MB&quot;) arrs = list() for i in range(1000): arr = np.arange(50000) * i if i % mod == 0: arrs.append(arr) print(f&quot;Memory before cleaning: {process.memory_info().rss/1e6:.1f}MB - {len(arrs)} elements in array&quot;) del arrs gc.collect() print(f&quot;Memory after cleaning: {process.memory_info().rss/1e6:.1f}MB&quot;) </code></pre> <p>If I run it with 1, 2, 3 and 500 arguments I get these results.</p> <pre><code># 1 Empty memory: 33.3MB Memory before cleaning: 435.4MB - 1000 elements in array Memory after cleaning: 34.4MB #2 Empty memory: 33.3MB Memory before cleaning: 234.2MB - 500 elements in array Memory after cleaning: 34.5MB #3 - Why is nothing cleaned in that case? Empty memory: 33.3MB Memory before cleaning: 167.8MB - 334 elements in array Memory after cleaning: 167.8MB #500 Empty memory: 33.3MB Memory before cleaning: 35.4MB - 2 elements in array Memory after cleaning: 34.4MB </code></pre> <p>It doesn't make any sense to me. Why is the memory not cleaned the same way in the 3 cases? Why does the biggest array get the most efficient cleaning?</p>
<python><memory><memory-leaks><garbage-collection><python-3.9>
2023-04-04 22:01:45
0
5,694
RobinFrcd
75,934,148
1,497,199
Django primary key field not showing up in postgres database
<p>I have a model class</p> <pre><code>import uuid import datetime, pytz from django.db import models from django.conf import settings class Topic(models.Model): key = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) name = models.CharField(max_length=168) owner = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, null=True) created = models.DateTimeField() updated = models.DateTimeField(default=datetime.datetime(2000, 1, 1, 0, 0, 0, 0, pytz.UTC)) keyword_json = models.TextField() model_bin = models.BinaryField() article_ttl = models.FloatField(default=48.0) timeliness = models.FloatField(default=4.0) </code></pre> <p>It results in a migrations file containing the block</p> <pre><code> name='Topic', fields=[ ('key', models.UUIDField(...)), ('name', models.CharField(...)), ('created', models.DateTimeField()), ('updated', models.DateTimeField(...)), ('keywords_json', models.TextField()), ('model_bin', models.BinaryField()), ('article_ttl', models.FloatField(...)), ('timeliness', models.FloatField(...)), ('owner', models.ForeignKey(...)) ] </code></pre> <p>However the <code>key</code> field is not present in the PostgreSQL database.</p> <p>This results in errors in any code that tries to access the <code>Topic</code> model objects from the database.</p> <p>This code is being run in kubernetes with Postgres hosted as its own service. I've gone into one of the pods running the service. This pod has the environment variables that define the host, port, user, database and password; I've checked the database using <code>psycopg2</code> running inside that pod and using those environment variables.
Therefore I am very confident that I am accessing the database in exactly the same way as the django webapp is.</p> <p>I've gone through the process of <code>python manage.py flush; python manage.py migrate</code>.</p> <p>I've gone through the process of explicitly deleting the tables used by the webapp (again, in python via <code>psycopg2</code> running in the pod where the django service itself runs) and then running <code>python manage.py migrate</code>. The tables do get reconstructed (so I'm confident I'm looking at the right database). However the <code>key</code> field is missing from the database.</p> <p>I don't have the permissions to create a completely new database, so removing the tables is the closest I can get to starting from a clean slate.</p> <pre><code># having run python manage.py shell on a pod running the django service &gt;&gt;&gt; import os &gt;&gt;&gt; import psycopg2 &gt;&gt;&gt; conn = psycopg2.connect(dbname=os.environ['POSTGRES_DB'], user=...rest from os.environ...) &gt;&gt;&gt; curr = conn.cursor() &gt;&gt;&gt; curr.execute(&quot;&quot;&quot;SELECT column_name, data_type, character_maximum_length, column_default, is_nullable FROM information_schema.columns WHERE table_name='channels_topic';&quot;&quot;&quot;) &gt;&gt;&gt; res = list(curr) &gt;&gt;&gt; res [('name', 'character_varying', 168, None, 'NO'), ('created', 'timestamp with time zone', None, None, 'NO'), ('updated', 'timestamp with time zone', Non, None, 'NO'), ('keyword_json', 'text', None, None, 'NO'), ('model_bin', 'bytea', None, None, 'NO'), ('article_ttl', 'double precision', None, None, 'NO'), ('timeliness', 'double precision', None, None, 'NO'), ('owner_id', 'integer', None, None, 'YES') ] </code></pre> <p>I've also been able to log in to pgadmin and verify in this system that postgres only sees 8 columns (name, created, updated, keyword_json, model_bin, article_ttl, timeliness, owner_id) instead of the the 9 fields in the Topic class.</p>
<python><django><postgresql>
2023-04-04 21:56:10
0
8,229
Dave
75,933,853
5,407,050
Why two datetime.time cannot be subtracted?
<p>To satisfy my curiosity, is there any reason why two datetime.time values cannot be directly subtracted?</p> <pre><code>seconds = (datetime.combine(date.today(), end_time) - datetime.combine(date.today(), start_time)).total_seconds() </code></pre> <p>works, but</p> <pre><code>seconds = (end_time - start_time).total_seconds() </code></pre> <p>does not. Is there any particular reason?</p> <p>Thank you.</p>
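The refusal is by design: a `datetime.time` is a wall-clock label, not a point on a timeline, so without a date (and its possible DST transitions) "17:00 minus 09:30" is ambiguous, and the stdlib only defines subtraction for `datetime` and `date` objects. A demonstration of both behaviours:

```python
from datetime import date, datetime, time

start_time, end_time = time(9, 30), time(17, 0)

try:
    end_time - start_time
    raised = False
except TypeError:
    raised = True  # unsupported operand type(s) for -: time and time

# Anchoring both times on the same date makes the operation well defined:
seconds = (datetime.combine(date.today(), end_time)
           - datetime.combine(date.today(), start_time)).total_seconds()
```

Here `seconds` comes out as 27000.0, i.e. 7.5 hours, which is exactly what the `datetime.combine` workaround in the question computes.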
<python>
2023-04-04 21:06:35
0
334
Daniel Lema
75,933,813
1,914,034
Numpy - calculate mean of groups
<p>I have an array of shape (N, 3) like this one for instance:</p> <pre><code>arr = np.array( [ [0,1,1], [0,1,2], [2,2,2] ] ) </code></pre> <p>I want to group this array by the first two columns to obtain an array like this</p> <pre><code>grouped_arr = np.array([[[0,1,1], [0,1,2]], [[2,2,2]]]) </code></pre> <p>Finally I would like to get only one element per group, where the third column is the mean of the group's third column</p> <pre><code>final_array = np.array([[0,1,1.5], [2,2,2]]) </code></pre> <p>I am trying something but not sure if it's correct and if it's an efficient way to achieve it:</p> <pre><code>import numpy as np arr = np.array([[0,1,1], [0,1,2], [2,2,2]]) stacked = np.vstack((arr[:,0], arr[:,1])).transpose() uniques_values = np.unique(stacked, axis=0) groups = [] for v in uniques_values: groups.append(arr[v]) final_arr = [] for group in groups: mean = np.mean(group[:,2], axis=0) final_arr.append(np.array([group[0][0], group[0][1], mean])) print(final_arr) &gt;&gt;&gt;[array([0. , 1. , 1.5]), array([2., 2., 2.])] </code></pre> <p>I am looking for a reliable and efficient suggestion. In my real data the dtype is float.</p>
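A fully vectorized version needs no Python loop: `np.unique(..., axis=0, return_inverse=True)` assigns each row a group label, and two `np.bincount` calls give the per-group sums and counts:

```python
import numpy as np

arr = np.array([[0, 1, 1],
                [0, 1, 2],
                [2, 2, 2]], dtype=float)

# Label each row by its (col0, col1) group:
keys, inv = np.unique(arr[:, :2], axis=0, return_inverse=True)
inv = inv.ravel()  # normalize the inverse shape across NumPy versions

# Per-group sum and size of the third column, then the mean:
sums = np.bincount(inv, weights=arr[:, 2])
counts = np.bincount(inv)
final = np.column_stack([keys, sums / counts])
```

For the sample input this produces `[[0, 1, 1.5], [2, 2, 2]]`, and it works unchanged for float data of any size.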
<python><numpy><average>
2023-04-04 21:00:33
1
7,655
Below the Radar
75,933,760
1,315,621
Python Boto3 check if file on s3 is equal to local file
<p>I have some files in a local folder. These files can be modified locally. I need to keep a copy of the updated files on S3. Is there a way I can check whether a local file is equal to a file on S3 (example: using checksum)? In this way I won't have to upload the files that have not been changed. I am using boto3 and Python.</p>
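One common scheme, with a caveat: for objects uploaded in a single part, the S3 `ETag` is the hex MD5 of the content, so comparing it to a locally computed MD5 avoids re-uploading unchanged files. Multipart-uploaded objects have ETags containing a `-` and would always compare unequal here, so those need a different strategy (e.g. storing your own checksum in object metadata). A sketch (the S3 call assumes a standard boto3 client):

```python
import hashlib

def local_md5(path, chunk_size=8 * 1024 * 1024):
    """Hex MD5 of a local file, streamed so large files stay cheap."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def needs_upload(s3_client, bucket, key, path):
    """True when the local file differs from (or is missing on) S3.

    Assumes the remote object was uploaded in a single part, so its
    ETag equals the plain MD5; multipart ETags always compare unequal.
    """
    try:
        head = s3_client.head_object(Bucket=bucket, Key=key)
    except s3_client.exceptions.ClientError:
        return True  # no remote copy (or no access): upload it
    return head["ETag"].strip('"') != local_md5(path)
```

Usage would be `if needs_upload(s3, "my-bucket", "folder/file.txt", "file.txt"): s3.upload_file(...)`.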
<python><amazon-web-services><amazon-s3><upload><boto3>
2023-04-04 20:52:04
3
3,412
user1315621
75,933,676
1,446,710
Unmount a drive with python with udisks2 via dbus
<p>I'm trying to achieve the title.</p> <p>Here is my code:</p> <pre><code>from dbus_next.aio import MessageBus async def unmount_device(device_path): bus = await MessageBus().connect() udisk_obj = bus.get_proxy_object('org.freedesktop.UDisks2', device_path) udisk_interface = udisk_obj.get_interface('org.freedesktop.UDisks2.Filesystem') udisk_interface.Unmount() </code></pre> <p><code>dbus-python</code> is unavailable on Debian Bullseye, so I chose dbus-next.</p> <p>But it's also hard and painful to use.</p> <p>What should I do to hook into UDisks and make it unmount my chosen drive?</p> <p>This code drops me an error:</p> <pre><code>dbus_next.errors.InvalidAddressError: DBUS_SESSION_BUS_ADDRESS not set and could not get DISPLAY environment variable to get bus address </code></pre> <p>But this is a headless system and I'm reaching it over SSH (but later I wish to use this script via systemd).</p> <p>Thank you if you could guide me a bit.</p>
<python><dbus><udisks>
2023-04-04 20:40:23
1
2,725
Daniel
75,933,400
8,382,452
how to prevent firefox from opening a downloaded file in selenium?
<p>I'm doing a selenium test to click on a button and download a file. It is downloading the file, but the browser always opens it in the middle of the execution, opening a new tab on firefox and breaking my test.</p> <p>I tried to set the preferences in code, but it's not working.</p> <pre><code>firefox_options = Options() firefox_options.set_preference(&quot;browser.download.folderList&quot;, 2) firefox_options.set_preference(&quot;browser.download.manager.showWhenStarting&quot;, False) firefox_options.set_preference(&quot;browser.download.dir&quot;, &quot;/tmp&quot;) firefox_options.set_preference(&quot;browser.helperApps.neverAsk.saveToDisk&quot;, &quot;application/pdf,text/csv&quot;) driver = Firefox(options=firefox_options) </code></pre> <p>Is there any additional preference to prevent this behavior?</p>
<python><selenium-webdriver>
2023-04-04 20:03:05
0
345
Yuri Costa
75,933,341
4,579,256
Dash: callback after clicking anywhere on a Scattermapbox map
<p>I am looking for a Dash callback that would access via <code>go.Scattermapbox()</code> any coordinate (longitude and latitude) after clicking or just hovering over a Mapbox map.</p> <p>It is possible to access image coordinates in case of a <code>dcc.Graph</code> containing an image, as shown here: <a href="https://github.com/AIMPED/plotly_dash/blob/master/plotlyDash_get_clickInfo.py" rel="nofollow noreferrer">https://github.com/AIMPED/plotly_dash/blob/master/plotlyDash_get_clickInfo.py</a></p> <p>It is also possible to access map coordinates with <code>dash-leaflet</code> via callback to <code>dl.Map()</code>. However, I would like to use Mapbox for my project.</p> <p>Finally, it is possible to get coordinates by hovering in Javascript, as shown on Mapbox website: <a href="https://docs.mapbox.com/mapbox-gl-js/example/mouse-position/" rel="nofollow noreferrer">https://docs.mapbox.com/mapbox-gl-js/example/mouse-position/</a> Is there also a purely Python-based functionality built in without using any Javascript?</p>
<python><mapbox><plotly-dash>
2023-04-04 19:56:06
1
350
petervanya
75,933,279
19,051,091
How to improve a PyTorch model?
<p>Good evening. I have 4 classes of black-and-white images; each class has 3000 training images and 600 test images. How can I improve this model? Here is my full code:</p> <pre><code>data_transform = transforms.Compose([ transforms.Grayscale(num_output_channels=1), transforms.Resize(size=(150, 150)), transforms.ToTensor(), transforms.Normalize(mean=[0.5], std=[0.5]), ]) </code></pre> <h1>Loading Images using ImageFolder</h1> <pre><code>train_data = datasets.ImageFolder(root=train_dir, transform=data_transform, # Transform the data target_transform=None) # Transform the Label test_data = datasets.ImageFolder(root=test_dir, transform=data_transform, # Transform the data target_transform=None) # Transform the Label train_data, test_data </code></pre> <h1>Turn the data into DataLoaders</h1> <pre><code>BATCH_SIZE = 8 train_dataloader = DataLoader( dataset= train_data, batch_size=BATCH_SIZE, # How many images our model can see at a time num_workers=8, # Number of CPU Cores shuffle=True ) test_dataloader = DataLoader( dataset= test_data, batch_size=BATCH_SIZE, # How many images our model can see at a time num_workers=8, # Number of CPU Cores shuffle=False ) </code></pre> <h1>Create class</h1> <pre><code>class TingVGG(nn.Module): def __init__(self, input_shape: int, hidden_units: int, output_shape: int) -&gt; None: super().__init__() self.conv_block1 = nn.Sequential( nn.Conv2d(in_channels=input_shape,out_channels=hidden_units,kernel_size=3,stride=1,padding=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units, kernel_size=3,stride=1,padding=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units, kernel_size=3,stride=1,padding=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2) ) self.dropout = nn.Dropout(0.4) self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(in_features=hidden_units*18*18 ,out_features=output_shape)) def forward(self, x: 
torch.Tensor): x = self.conv_block1(x) x = self.dropout(x) x = self.classifier(x) return x </code></pre> <h1>Train the Model</h1> <pre><code># Set random seed torch.manual_seed(42) torch.cuda.manual_seed(42) # Set number of epoches NUM_EPOCHS = 10 # Create and initialize of TinyVGG model_0 = TingVGG(input_shape=1, # Number of channels in the input image (c, h, w) -&gt; 3 hidden_units=128, output_shape=len(train_data.classes)).to(device) # Setup the loss function and optimizer loss_fn = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(params= model_0.parameters(), lr= 0.001) # Start the timer start_time = time.time() # Train model 0 model_0_results = train(model= model_0, train_dataloader= train_dataloader, test_dataloader= test_dataloader, optimizer= optimizer, loss_fn= loss_fn, epochs= NUM_EPOCHS ) 10%|β–ˆ | 1/10 [02:30&lt;22:37, 150.88s/it] Epoch: 1 | train_loss: 0.3549 | train_acc: 0.8668 | test_loss: 0.3059 | test_acc: 0.8842 20%|β–ˆβ–ˆ | 2/10 [04:57&lt;19:48, 148.59s/it] Epoch: 2 | train_loss: 0.1707 | train_acc: 0.9420 | test_loss: 0.2648 | test_acc: 0.9062 30%|β–ˆβ–ˆβ–ˆ | 3/10 [07:24&lt;17:14, 147.83s/it] Epoch: 3 | train_loss: 0.1153 | train_acc: 0.9627 | test_loss: 0.2790 | test_acc: 0.8962 40%|β–ˆβ–ˆβ–ˆβ–ˆ | 4/10 [09:52&lt;14:46, 147.71s/it] Epoch: 4 | train_loss: 0.0900 | train_acc: 0.9695 | test_loss: 0.2719 | test_acc: 0.8979 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 5/10 [12:19&lt;12:18, 147.65s/it] Epoch: 5 | train_loss: 0.0760 | train_acc: 0.9758 | test_loss: 0.2927 | test_acc: 0.8950 60%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 6/10 [14:47&lt;09:50, 147.57s/it] Epoch: 6 | train_loss: 0.0616 | train_acc: 0.9814 | test_loss: 0.3326 | test_acc: 0.8942 70%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 7/10 [17:15&lt;07:23, 147.76s/it] Epoch: 7 | train_loss: 0.0488 | train_acc: 0.9838 | test_loss: 0.3086 | test_acc: 0.8946 80%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 8/10 [19:42&lt;04:55, 147.60s/it] Epoch: 8 | train_loss: 0.0534 | train_acc: 0.9835 | test_loss: 0.3186 | test_acc: 0.9017 90%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 9/10 
[22:10&lt;02:27, 147.66s/it] Epoch: 9 | train_loss: 0.0422 | train_acc: 0.9878 | test_loss: 0.3317 | test_acc: 0.9012 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [24:38&lt;00:00, 147.80s/it] Epoch: 10 | train_loss: 0.0433 | train_acc: 0.9878 | test_loss: 0.3853 | test_acc: 0.9038 </code></pre> <h1>The train and test loss and accuracy</h1> <p><a href="https://i.sstatic.net/Y22sK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y22sK.png" alt="Train and loss accuracy" /></a></p> <h1>Confusion matrix</h1> <pre><code># Import tqdm for progress bar from tqdm.auto import tqdm # 1. Make predictions with trained model y_preds = [] model_0.eval() with torch.inference_mode(): for X, y in tqdm(test_dataloader, desc=&quot;Making predictions&quot;): # Send data and targets to target device X, y = X.to(device), y.to(device) # Do the forward pass y_logit = model_0(X) # Turn predictions from logits -&gt; prediction probabilities -&gt; predictions labels y_pred = torch.softmax(y_logit, dim=1).argmax(dim=1) # Put predictions on CPU for evaluation y_preds.append(y_pred.cpu()) # Concatenate list of predictions into a tensor y_pred_tensor = torch.cat(y_preds) from torchmetrics import ConfusionMatrix from mlxtend.plotting import plot_confusion_matrix # 2. Setup confusion matrix instance and compare predictions to targets confmat = ConfusionMatrix(num_classes=len(class_names), task='multiclass') confmat_tensor = confmat(preds=y_pred_tensor, target=torch.Tensor(test_data.targets)) # 3. Plot the confusion matrix fig, ax = plot_confusion_matrix( conf_mat=confmat_tensor.numpy(), # matplotlib likes working with NumPy class_names=class_names, # turn the row and column labels into class names figsize=(10, 7) ); </code></pre> <h1>Confusion Result</h1> <p><a href="https://i.sstatic.net/WdYOF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WdYOF.png" alt="Confusion matrix" /></a></p> <p>so what should I do?</p>
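The curves and logs above show classic overfitting: train accuracy climbs to ~0.99 while test accuracy plateaus near 0.90 and test loss bottoms out around epoch 2 before rising. Common remedies are data augmentation, weight decay (e.g. the `weight_decay` argument of `torch.optim.Adam`), and early stopping. Below is a sketch of early stopping in plain Python so it can wrap any training loop; the `EarlyStopper` name and API are illustrative, not part of the question's `train()` function.

```python
# Early stopping: stop training once validation loss has not improved
# for `patience` consecutive epochs, instead of running all epochs.
class EarlyStopper:
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait after the last improvement
        self.min_delta = min_delta    # minimum drop that counts as improvement
        self.best_loss = float("inf")
        self.bad_epochs = 0

    def should_stop(self, val_loss):
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Feeding it the first six test losses reported in the logs above:
losses = [0.3059, 0.2648, 0.2790, 0.2719, 0.2927, 0.3326]
stopper = EarlyStopper(patience=3)
stopped_at = next(i for i, l in enumerate(losses, 1) if stopper.should_stop(l))
# stops at epoch 5: three epochs in a row without beating the epoch-2 loss
```

With this run it would halt at epoch 5 and keep the epoch-2 checkpoint (saving the model whenever `best_loss` improves), rather than training to epoch 10 with a worse test loss.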
<python><pytorch>
2023-04-04 19:47:40
2
307
Emad Younan
75,933,263
13,742,058
How to set experimental options on undetected chromedriver in Selenium Python?
<p>Whenever I run this code:</p> <pre><code>import undetected_chromedriver as uc from selenium.webdriver.support.ui import Select from webdriver_manager.chrome import ChromeDriverManager from selenium import webdriver from selenium.webdriver.chrome.service import Service options = webdriver.ChromeOptions() options.add_argument(&quot;start-maximized&quot;) options.add_argument(&quot;--log-level=3&quot;) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') s = Service(ChromeDriverManager().install()) driver = uc.Chrome(service=s, options=options) driver.get(r&quot;https://www.google.com&quot;) </code></pre> <p>it shows this error:</p> <pre><code>selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: cannot parse capability: goog:chromeOptions from invalid argument: unrecognized chrome option: excludeSwitches </code></pre> <p>When I comment out <code>options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;])</code>, it shows another error:</p> <pre><code>selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: cannot parse capability: goog:chromeOptions from invalid argument: unrecognized chrome option: useAutomationExtension </code></pre>
<python><selenium-webdriver><webdriver><undetected-chromedriver>
2023-04-04 19:45:11
0
308
fardV
75,933,081
12,016,688
Where has the giant cpython switch-case statement moved?
<p>Not sure if this is off topic, but I don't know a better place to ask. In <code>cpython</code> there used to be a giant <code>switch</code>/<code>case</code> statement for executing each opcode. This switch statement was previously located in the <code>_PyEval_EvalFrameDefault</code> function. Here is the <a href="https://github.com/python/cpython/blob/3.8/Python/ceval.c#L745" rel="noreferrer">link</a>; the switch statement starts <a href="https://github.com/python/cpython/blob/3.8/Python/ceval.c#L1323" rel="noreferrer">here</a>. This was a core part of cpython, and everyone interested in cpython internals would probably explore it in detail. Recently I was searching for it and couldn't find it. In this version of <a href="https://github.com/python/cpython/blob/v3.12.0a6/Python/ceval.c#L702" rel="noreferrer"><code>_PyEval_EvalFrameDefault</code></a> I can't find it; the function is much shorter than the old one. I even tried to find the switch statement by searching for opcodes in my IDE, but even that didn't help me find where it is. Can anyone who is aware of the latest cpython development changes help me? Thanks in advance.</p>
<python><cpython>
2023-04-04 19:20:02
1
2,470
Amir reza Riahi