QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (15 to 150 chars)
QuestionBody: string (40 to 40.3k chars)
Tags: string (8 to 101 chars)
CreationDate: date (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (3 to 30 chars)
75,416,640
282,918
Python3: is there an elegant way to check nested attributes?
<p>For now I use this:</p> <pre><code>print(theObject.NestedObjectOne.NestedObjectTwo.NestedObjectThree
      if theObject and theObject.NestedObjectOne and theObject.NestedObjectOne.NestedObjectTwo
      else &quot;n/a&quot;)
</code></pre> <p>This is in order to support the case where <code>NestedObjectOne</code> is None.</p> <p>Is there a more elegant way to do this? Yes, I know I can write a function to traverse the object and check attributes recursively. I'm asking whether there's any construct in Python that does this as elegantly as, for instance, ES6's optional chaining:</p> <pre><code>console.log(theObj?.theFirstNested?.theSecondNested)
</code></pre>
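Python has no optional-chaining operator, but a small helper built from `functools.reduce` and `getattr` comes close. This is an illustrative sketch, not part of the original question; the helper name is made up:

```python
from functools import reduce

def get_nested(obj, path, default=None):
    """Return obj.a.b.c for path "a.b.c"; fall back to default if any
    intermediate attribute is missing or None."""
    try:
        # getattr raises AttributeError as soon as a link is None or absent
        return reduce(getattr, path.split("."), obj)
    except AttributeError:
        return default
```

`get_nested(theObject, "NestedObjectOne.NestedObjectTwo.NestedObjectThree", "n/a")` then reads much like `theObj?.a?.b ?? "n/a"` in ES6.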
<python><python-3.x>
2023-02-10 22:17:49
1
5,534
JasonGenX
75,416,580
9,773,920
Save dataframe to same excel workbook but different sheets
<p>I want to loop through my data and save each component's data to a separate sheet in the same Excel workbook in an S3 bucket.</p> <p>Dataframe df looks as below:</p> <p><a href="https://i.sstatic.net/zWPyg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zWPyg.png" alt="enter image description here" /></a></p> <p>Below is my code:</p> <pre><code>today = datetime.datetime.now().strftime('%m_%d_%Y_%H_%M_%S')
components = [&quot;COMP1&quot;, &quot;COMP2&quot;, &quot;COMP3&quot;]
filename = 'auto_export_' + today
for comp in components:
    df1 = df[df['component'] == comp]
    print(comp)
    print(df1)
    with io.BytesIO() as output:
        with pd.ExcelWriter(output, engine='xlsxwriter') as writer:
            df1.append_df_to_excel(writer, sheet_name=comp, index=False)
        data = output.getvalue()
    s3 = boto3.resource('s3')
    s3.Bucket('mybucket').put_object(Key='folder1/' + filename + '.xlsx', Body=data)
</code></pre> <p>This generates the Excel file correctly, but it writes only the COMP3 data into it; the COMP1 and COMP2 sheets are missing. Any guidance on how to fix this problem?</p>
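The likely culprit is that the `BytesIO` buffer, the `ExcelWriter`, and the S3 upload are all recreated inside the loop, so each iteration uploads a fresh one-sheet workbook that overwrites the previous object. A hedged sketch of the fix (using the standard `DataFrame.to_excel`, since `append_df_to_excel` looks like a custom helper; bucket and key names are illustrative, and the S3 call is not exercised here):

```python
import io
import pandas as pd

def export_all_components(df, bucket_name, key):
    """Write one sheet per component, then upload the workbook once.

    The fix: open the buffer and the ExcelWriter a single time, add all
    sheets inside the loop, and upload after the writer is closed.
    """
    import boto3  # assumed available at runtime; not exercised in this sketch

    with io.BytesIO() as output:
        with pd.ExcelWriter(output, engine="xlsxwriter") as writer:
            # groupby yields one sub-frame per component value
            for comp, sub in df.groupby("component"):
                sub.to_excel(writer, sheet_name=comp, index=False)
        data = output.getvalue()
    boto3.resource("s3").Bucket(bucket_name).put_object(Key=key, Body=data)
```

`groupby("component")` also replaces the manual `components` list and the repeated boolean filtering.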
<python><pandas><amazon-web-services><dataframe><aws-lambda>
2023-02-10 22:07:59
2
1,619
Rick
75,416,497
19,369,393
How to expand a filtering mask to cover more pixels of the area of interest in OpenCV?
<p>Take a look at <a href="https://i.sstatic.net/ELRl7.jpg" rel="nofollow noreferrer">this</a> image. I want to turn this card blue.</p> <p>I use Python and OpenCV to perform image processing.</p> <p><strong>Here's how I do it now:</strong></p> <pre><code>import cv2 import numpy as np # Load the image image = cv2.imread(&quot;input.jpg&quot;) # Convert the image to the HSV color space hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) # Threshold the HSV image to get only the red colors # Bitwise OR unites Hue value 170-179 and 0-10. mask = cv2.bitwise_or( cv2.inRange(hsv_image, np.array([0, 120, 100]), np.array([10, 255, 255])), cv2.inRange(hsv_image, np.array([170, 120, 100]), np.array([180, 255, 255])) ) # Perform median blurring mask = cv2.medianBlur(mask, 5) # Define a kernel for the morphological operations kernel = np.ones((5, 5), np.uint8) # Perform an opening operation to remove small objects mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2) # Perform a closing operation to fill small holes mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2) # Perform a gradient operation to extract object boundaries gradient_mask = cv2.morphologyEx(mask, cv2.MORPH_GRADIENT, kernel) # Modify hue value of every masked pixel hsv_image[:, :, 0][mask != 0] = (hsv_image[:, :, 0][mask != 0].astype(int, copy=False) + 120) % 180 # Convert the HSV image back to BGR color space result = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR) # Display the result cv2.namedWindow(&quot;output&quot;, cv2.WINDOW_NORMAL) cv2.imshow(&quot;output&quot;, result) # Save images of the mask, result and gradient cv2.imwrite('mask.jpg', mask) cv2.imwrite('result.jpg', result) cv2.imwrite('gradient.jpg', gradient_mask) # Wait for the window to close while cv2.getWindowProperty('output', 0) &gt;= 0: cv2.waitKey(50) cv2.destroyAllWindows() </code></pre> <p>It works well. 
<a href="https://i.sstatic.net/gIiKM.png" rel="nofollow noreferrer">The result</a>, <a href="https://i.sstatic.net/XuREG.png" rel="nofollow noreferrer">the filtering mask</a>.</p> <p>But if you take a closer look you'll see the problem: <a href="https://i.sstatic.net/aOljR.png" rel="nofollow noreferrer">link</a>, <a href="https://i.sstatic.net/a36Q6.png" rel="nofollow noreferrer">link</a></p> <p><strong>Some red pixels are still there, and here's why.</strong> They do not fall in the filtered range of the red color: 170 &lt;= Hue &lt;= 10, Saturation &gt;= 120, Value &gt;= 100. Their HSV color is close to (178, 32, 60), so the saturation and value fall outside the filter range.</p> <p><strong>Why can't I lower the saturation and value thresholds?</strong> Because in that case there would be too much noise on other backgrounds that contain more colors. That noise is hard to avoid even with multiple iterations of the opening morphological operation.</p> <p>I don't have much experience in image processing and with OpenCV, so my ideas may be far from the best solution. It's okay if you propose another approach.</p> <p><strong>Possible solution.</strong> Would it be possible to perform the dilate morphological operation on the filtering mask (expand it), but only into pixels that fall in another, broader range of red (with the saturation and value thresholds lowered to 10, while the hue range stays the same)? That way the mask would grow to cover all the red pixels that fall in the broader range of red AND are adjacent to pixels of the existing mask, so no background pixels would be added to create noise. If that is a good idea, how can I implement it, especially the part about dilating only into pixels that fall in the broader range of red? Maybe it's already implemented in OpenCV and I just don't know about it?</p> <p>Also I would be glad to hear any suggestions or recommendations. I am a student and I want to learn more. Thanks in advance!</p>
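What the question describes is known as geodesic dilation or morphological reconstruction (hysteresis thresholding is the same idea): grow the strict mask, but only into pixels allowed by a broader mask. As far as I know OpenCV has no single call for it, though `skimage.morphology.reconstruction` and `scipy.ndimage.binary_propagation` implement it. Here is a pure-NumPy sketch of the idea, independent of the original code:

```python
import numpy as np

def dilate8(m):
    """One step of 8-connected binary dilation done with array shifts."""
    out = m.copy()
    out[1:, :] |= m[:-1, :]; out[:-1, :] |= m[1:, :]
    out[:, 1:] |= m[:, :-1]; out[:, :-1] |= m[:, 1:]
    out[1:, 1:] |= m[:-1, :-1]; out[:-1, :-1] |= m[1:, 1:]
    out[1:, :-1] |= m[:-1, 1:]; out[:-1, 1:] |= m[1:, :-1]
    return out

def reconstruct(seed, broad):
    """Grow `seed` inside `broad` until it stops changing (geodesic dilation).

    seed:  the strict mask (high saturation/value thresholds)
    broad: the permissive mask (thresholds lowered to 10)
    Only broad-mask pixels connected to the strict mask survive, so the
    lenient thresholds cannot introduce isolated background noise.
    """
    prev = np.zeros_like(seed)
    cur = seed & broad
    while (cur != prev).any():
        prev = cur
        cur = dilate8(cur) & broad
    return cur
```

In the original pipeline, `seed` would be the existing `mask` and `broad` the same `cv2.inRange` computation with the saturation/value bounds dropped to 10.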
<python><opencv><image-processing><mathematical-morphology>
2023-02-10 21:58:19
1
365
g00dds
75,416,208
10,620,003
modify the dataframe based on ids and time (put all time series data for each id beside each other)
<p>I have a dataframe which has 3 columns: id, date and val. The ids are different. I want to put all the rows for each id beside each other. For example, the first rows contain only one id with different dates, then the next distinct id, and so on. Here is a simple example:</p> <pre><code>import pandas as pd

df = pd.DataFrame()
df['id'] = [10, 2, 3, 10, 10, 2, 2]
df['date'] = ['2020-01-01 12:00:00', '2020-01-01 12:00:00', '2020-01-01 12:00:00',
              '2020-01-01 13:00:00', '2020-01-01 14:00:00', '2020-01-01 13:00:00',
              '2020-01-01 14:00:00']
df['val'] = [0, 1, 2, -3, 4, 6, 7]
</code></pre> <p>The output which I want is:</p> <pre><code>   id                 date  val
0  10  2020-01-01 12:00:00    0
1  10  2020-01-01 13:00:00   -3
2  10  2020-01-01 14:00:00    4
3   2  2020-01-01 12:00:00    1
4   2  2020-01-01 13:00:00    6
5   2  2020-01-01 14:00:00    7
6   3  2020-01-01 12:00:00    2
</code></pre> <p>Explanation: we have three ids: 10, 2 and 3. First I want the values sorted by date for id=10, then the values sorted by date for id=2, and so on. The ids themselves do not need to be sorted.</p> <p>Can you please help me with that? Thank you.</p>
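One way to get that ordering (a sketch, not from the original post): rank each id by its first appearance, then do a stable sort on that rank and the date, so ids keep their original order (10, 2, 3) instead of being sorted numerically:

```python
import pandas as pd

df = pd.DataFrame({
    "id": [10, 2, 3, 10, 10, 2, 2],
    "date": ["2020-01-01 12:00:00", "2020-01-01 12:00:00", "2020-01-01 12:00:00",
             "2020-01-01 13:00:00", "2020-01-01 14:00:00", "2020-01-01 13:00:00",
             "2020-01-01 14:00:00"],
    "val": [0, 1, 2, -3, 4, 6, 7],
})

# Map each id to the position of its first occurrence, then stable-sort
# on that rank and on date.
first_seen = {v: i for i, v in enumerate(df["id"].drop_duplicates())}
out = (df.assign(_rank=df["id"].map(first_seen))
         .sort_values(["_rank", "date"], kind="stable")
         .drop(columns="_rank")
         .reset_index(drop=True))
```

If the ids may simply be sorted in any grouped order, `df.sort_values(["id", "date"])` is all that is needed.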
<python><dataframe>
2023-02-10 21:19:22
0
730
Sadcow
75,416,188
998,070
Get Evenly Spaced Points from a Curved Shape
<p>How may I take a shape that was created with more points at its curves and subdivide it so that the points are distributed more equally along the curve? In my research I thought that <code>numpy</code>'s <a href="https://numpy.org/doc/stable/reference/generated/numpy.interp.html" rel="nofollow noreferrer">interp</a> might be the right function to use, but I don't know what to use for the parameters (<code>x</code>, <code>xp</code>, <code>fp</code>, <code>left</code>, <code>right</code>, &amp; <code>period</code>). Any help would be very appreciated!</p> <p>Here is an animation showing the desired output.</p> <p><a href="https://i.sstatic.net/ycQUV.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ycQUV.gif" alt="Example of Shape with Even Distribution" /></a></p> <p>This is the code for the input rounded rectangle:</p> <pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt import numpy as np x_values = [1321.4, 598.6, 580.6, 563.8, 548.6, 535.4, 524.5, 516.2, 511, 509.2, 509.2, 511, 516.2, 524.5, 535.4, 548.6, 563.8, 580.6, 598.6, 1321.4, 1339.4, 1356.2, 1371.4, 1384.6, 1395.5, 1403.8, 1409, 1410.8, 1410.8, 1409, 1403.8, 1395.5, 1384.6, 1371.4, 1356.2, 1339.4, 1321.4] y_values = [805.4, 805.4, 803.5, 798.3, 790.1, 779.2, 766, 750.8, 734, 716, 364, 346, 329.2, 314, 300.8, 289.9, 281.7, 276.5, 274.6, 274.6, 276.5, 281.7, 289.9, 300.8, 314, 329.2, 346, 364, 716, 734, 750.8, 766, 779.2, 790.1, 798.3, 803.5, 805.4] fig, ax = plt.subplots(1) ax.plot(x_values,y_values) ax.scatter(x_values,y_values) ax.set_aspect('equal') plt.show() </code></pre> <p>Thank you!</p>
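For `np.interp(x, xp, fp)` here: `xp` is the cumulative arc length of the existing points, `fp` is one coordinate array, and `x` is a set of evenly spaced arc-length targets. A sketch (the function name is illustrative):

```python
import numpy as np

def resample_curve(x, y, n):
    """Redistribute n points evenly by arc length along a polyline."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # cumulative arc length of the input polyline, starting at 0
    d = np.concatenate([[0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))])
    # target arc-length positions, evenly spaced from start to end
    t = np.linspace(0.0, d[-1], n)
    # interpolate each coordinate as a function of arc length
    return np.interp(t, d, x), np.interp(t, d, y)
```

Applied to the rounded-rectangle arrays in the question, `resample_curve(x_values, y_values, 100)` spreads the points uniformly along the outline instead of clustering them at the corners.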
<python><numpy><matplotlib><scipy><curve>
2023-02-10 21:17:28
3
424
Dr. Pontchartrain
75,416,108
9,388,056
Polars YYYY week into a date
<p>Does anyone know how to parse YYYY Week into a date column in Polars?<br /> I have tried this code but it throws an error.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl pl.DataFrame({ &quot;week&quot;: [201901, 201902, 201903, 201942, 201943, 201944] }).with_columns(pl.col(&quot;week&quot;).cast(pl.String).str.to_date(&quot;%Y%U&quot;).alias(&quot;date&quot;)) </code></pre> <pre><code>InvalidOperationError: conversion from `str` to `date` failed in column 'week' for 6 out of 6 values: [&quot;201901&quot;, &quot;201902&quot;, … &quot;201944&quot;] </code></pre>
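The failure is not Polars-specific: strftime-style week directives (`%U`/`%W`) are ambiguous without a weekday, so parsers reject `"%Y%U"` on its own. Appending a fixed weekday digit resolves it, shown here with the standard library; the same trick should carry over to Polars' `str.to_date`, e.g. `(pl.col("week").cast(pl.String) + "1").str.to_date("%Y%U%w")`, though that exact Polars call is an untested assumption:

```python
from datetime import datetime

weeks = [201901, 201902, 201942]
# %U (Sunday-first week number) only takes effect when a weekday is also
# present, so append "1" (%w, Monday) to pin each week to its Monday.
dates = [datetime.strptime(f"{w}1", "%Y%U%w").date() for w in weeks]
```

Use `%W` instead of `%U` if the source data counts Monday-first weeks.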
<python><date><python-polars>
2023-02-10 21:05:32
2
620
Frank
75,416,041
5,212,614
Trying to plot data using networkx, and getting error about mapping nodes
<p>This is my first attempt at using networkx. Here is my sample code.</p> <pre><code>import pandas as pd import numpy as np data = [{'Circuit_Number': 1,'Description':'Stadium', 'Duration':10, 'Device_County':'Westchester', 'Picklist':1000, 'Postlist':50000}, {'Circuit_Number': 2, 'Description':'Stadium', 'Duration':12, 'Device_County':'Westchester', 'Picklist':3000, 'Postlist':40000}, {'Circuit_Number': 2, 'Description':'Arena', 'Duration':11, 'Device_County':'Westchester', 'Picklist':7000, 'Postlist':50000}, {'Circuit_Number': 3, 'Description':'Arena', 'Duration':8, 'Device_County':'Westchester', 'Picklist':3000, 'Postlist':40000}, {'Circuit_Number': 3, 'Description':'Casino', 'Duration':6, 'Device_County':'Queens', 'Picklist':5000, 'Postlist':6000}, {'Circuit_Number': 4, 'Description':'Casino', 'Duration':20, 'Device_County':'Queens', 'Picklist':5000, 'Postlist':4000}, {'Circuit_Number': 4, 'Description':'Library', 'Duration':15, 'Device_County':'Brooklyn', 'Picklist':5000, 'Postlist':9000}, {'Circuit_Number': 5, 'Description':'Library', 'Duration':7, 'Device_County':'Brooklyn', 'Picklist':6000, 'Postlist':10000}] df = pd.DataFrame(data) df ###################################################################################### import networkx as nx import matplotlib.pyplot as plt # Input data files check from subprocess import check_output import warnings warnings.filterwarnings('ignore') G = nx.Graph() for index, row in df.iterrows(): G.add_node(row['Circuit_Number'], group=row['Description'], nodesize=row['Duration']) for index, row in df.iterrows(): G.add_weighted_edges_from([(row['Device_County'], row['Picklist'], row['Postlist'])]) def draw_graph(G,size): nodes = G.nodes() color_map = {1000:'#f09494', 3000:'#eebcbc', 5000:'#72bbd0', 6000:'#91f0a1', 7000:'#629fff'} node_color = [color_map[d['Duration']] for n,d in G.nodes(data=True)] node_size = [d['Duration']*10 for n,d in G.nodes(data=True)] pos = nx.drawing.spring_layout(G,k=0.70,iterations=60) 
plt.figure(figsize=size) nx.draw_networkx(G,pos=pos,node_color=node_color,node_size=node_size,edge_color='#FFDEA2') plt.show() draw_graph(G,size=(30,30)) </code></pre> <p>This is the error that I get:</p> <pre><code>node_color = [color_map[d['Duration']] for n,d in G.nodes(data=True)] KeyError: 'Duration' </code></pre> <p>I am trying to feed rows from a dataframe into networkx, and produce a network graph, kind of like the one below, which I found online, but it's not plotting my data.</p> <p><a href="https://i.sstatic.net/F4iOj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F4iOj.png" alt="enter image description here" /></a></p>
<python><python-3.x><dataframe><networkx>
2023-02-10 20:57:03
0
20,492
ASH
75,415,993
4,655,673
How to parse a date-time string with extra characters like (+0) in Python?
<p>I am parsing log files which contain a timestamp with some extra text at the end.</p> <pre><code>2023-02-10 08:16:15.123456(+0)
</code></pre> <p>I am not sure what the (+0) means, but the values change randomly. (+12), (+123), (+1234) are all possible values.</p> <p>I can get the entire string except for what is in the parentheses. I noticed that if I add <code>(+0)</code> to the format it works, but only for exactly <code>(+0)</code>:</p> <pre><code>import datetime

datetime.datetime.strptime('2023-02-10 08:16:15.123456(+125)', '%Y-%m-%d %H:%M:%S.%f(+125)')
Out[176]: datetime.datetime(2023, 2, 10, 8, 16, 15, 123456)

datetime.datetime.strptime('2023-02-10 08:16:15.123456(+125)', '%Y-%m-%d %H:%M:%S.%f(+0)')
ValueError: time data '2023-02-10 08:16:15.123456(+125)' does not match format '%Y-%m-%d %H:%M:%S.%f(+0)'
</code></pre> <p>Is there a way to handle any value in the parentheses? Hopefully without having to remove them from all the strings.</p> <p>Update: as FObersteiner suggested, I split on &quot;(&quot; and it seems to work. I still don't know the meaning of the extra field, so I am ignoring it for now.</p> <pre><code>datetime.datetime.strptime(orderDF.iloc[0, 0].split(&quot;(&quot;)[0], '%Y-%m-%d %H:%M:%S.%f')
Out[189]: datetime.datetime(2023, 1, 26, 4, 16, 38, 130088)
</code></pre>
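An alternative to splitting that keeps the cleanup in one place: strip the parenthesised suffix with a regex before calling `strptime`. The helper name is illustrative:

```python
import re
from datetime import datetime

def parse_log_timestamp(s):
    """Drop a trailing parenthesised suffix like (+0) or (+1234), then parse."""
    cleaned = re.sub(r"\(\+\d+\)$", "", s)
    return datetime.strptime(cleaned, "%Y-%m-%d %H:%M:%S.%f")
```

The anchored pattern only removes a suffix of exactly that shape, so a malformed line still fails loudly in `strptime` rather than being silently truncated.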
<python><datetime>
2023-02-10 20:51:05
0
847
MichaelE
75,415,991
7,934,786
how to split the items of a space-separated list into columns in pandas
<p>I have a dataframe with multiple columns, and the content of one of the columns looks like a list:</p> <pre><code>df = pd.DataFrame({'Emojis': ['[1 2 3 4]', '[4 5 6]']})
</code></pre> <p>What I want to do is split the contents of these &quot;lists&quot; into columns. Since the sizes of the lists are not the same, I will have as many columns as the longest list (5 items is the max), and whenever a list has fewer items I will put null.</p> <p>So the output will be something like this:</p> <pre><code>      Emojis  it1  it2  it3  it4   it5
0  [1 2 3 4]    1    2    3    4  null
1    [4 5 6]    4    5    6  null  null
</code></pre> <p>I was doing it like this:</p> <pre><code>splitlist = df['Emojis'].apply(pd.Series)
df2 = pd.concat([df, splitlist], axis=1)
</code></pre> <p>But it's not close to what I want, since the &quot;list&quot; is not really a list; it is stored in the dataframe as a string without commas.</p>
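Since the column holds strings like `'[1 2 3 4]'` rather than real lists, the string accessor can do the whole job: strip the brackets, split on whitespace, and expand into columns. A sketch (shorter rows are padded with NaN rather than the literal `null`):

```python
import pandas as pd

df = pd.DataFrame({"Emojis": ["[1 2 3 4]", "[4 5 6]"]})

# Remove the surrounding brackets, split on whitespace, expand to columns.
split = df["Emojis"].str.strip("[]").str.split(expand=True)
split.columns = [f"it{i + 1}" for i in range(split.shape[1])]
out = pd.concat([df, split], axis=1)
```

This produces as many `it` columns as the longest list in the data; to force a fixed width of 5, `split.reindex(columns=[f"it{i + 1}" for i in range(5)])` pads the missing columns with NaN.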
<python><pandas><dataframe>
2023-02-10 20:50:52
2
2,170
sariii
75,415,671
7,800,760
Python: return type hint of arbitrary depth dictionary
<p>I am using a defaultdict collection to easily build an arbitrary depth python dictionary as follows:</p> <pre><code>from collections import defaultdict from datetime import datetime def recursive_dict() -&gt; defaultdict: &quot;&quot;&quot;enable arbitrary depth dictionary declaration&quot;&quot;&quot; return defaultdict(recursive_dict) dbdict = recursive_dict() dbdict[&quot;entity&quot;][&quot;surface&quot;] = &quot;this is a string&quot; dbdict[&quot;entity&quot;][&quot;spotlight&quot;][&quot;uri&quot;] = &quot;http://test.com/test&quot; dbdict[&quot;entity&quot;][&quot;spotlight&quot;][&quot;curation&quot;][&quot;date&quot;] = datetime.now() </code></pre> <p>which works fine as expected but mypy type checking fails with the following error message:</p> <pre><code>error: Missing type parameters for generic type &quot;defaultdict&quot; [type-arg] </code></pre> <p>I am confused as how to fix this since I'd like to use the recursive_dict function for any type of dictionary that I'll build.</p>
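The mypy error only says that `defaultdict` needs its type parameters. One hedged way to keep the recursive structure typed is a recursive type alias, which recent mypy versions accept; on older versions, `DefaultDict[str, Any]` silences the error at the cost of precision:

```python
from collections import defaultdict
from datetime import datetime
from typing import DefaultDict, Union

# Values are either leaves (str/datetime in this example) or another
# level of dict. The string forward reference makes the alias recursive.
RecursiveDict = DefaultDict[str, Union[str, datetime, "RecursiveDict"]]

def recursive_dict() -> "RecursiveDict":
    """enable arbitrary depth dictionary declaration"""
    return defaultdict(recursive_dict)
```

Note that with a precise value type, mypy will also start checking assignments like `dbdict["entity"]["surface"] = "this is a string"` against the declared leaf types, which may be exactly what is wanted.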
<python><dictionary><mypy>
2023-02-10 20:05:44
1
1,231
Robert Alexander
75,415,643
14,141,126
Return specific values from nested Elastic Object
<p>I have to preface this with the fact that I'm working with Elasticsearch module, which returns <code>elastic_transport.ObjectApiResponse</code>. My problem is that I need to select specific keys from this json/dictionary looking log. The indices come from different sources, and thus contain different key/value pairs. They values I need to select are <code>ip</code>, <code>port</code>, <code>username</code>, <code>rule_name</code>, <code>severity</code>, and <code>risk_score</code>. The problem is that they have different key names and each dictionary is vastly different from the other, but they all contain those values. After that, I'll throw them into a Pandas dataframe and create a table with those values. Should a value be missing, I'll fill them with a '-'.</p> <p>So my question is how I can iterate over these nested objects that are neither ordered nor standardized? Any help is appreciated. Below is a sample of the data.</p> <pre><code>{ 'took': 11, 'timed_out': False, '_shards': { 'total': 17, 'successful': 17, 'skipped': 0, 'failed': 0 }, 'hits': { 'total': {'value': 58, 'relation': 'eq'}, 'max_score': 0.0, 'hits': [ { '_index': '.siem-signals-default-000017', '_type': '_doc', '_id': 'abcd1234', '_score': 0.0, '_source': { '@timestamp': '2023-02-09T15:24:09.368Z', 'process': {'pid': 668, 'executable': 'C:\\Windows\\System32\\lsass.exe', 'name': 'lsass.exe'}, 'ecs': {'version': '1.10.0'}, 'winlog': { 'computer_name': 'SRVDC1', 'User': 'John.Smith', 'api': 'wineventlog', 'keywords': ['Audit Failure'] }, 'source':{'domain': 'SRVDC1', 'ip': '10.17.13.118', 'port': 42548}} 'rule': {'id': 'aaabbb', 'actions': [], 'interval': '2m', 'name': 'More Than 3 Failed Login Attempts Within 1 Hour '} }, { '_index': '.siem-signals-default-000017', '_type': '_doc', '_id': 'abc123', '_score': 0.0, '_source': { '@timestamp': '2023-02-09T15:24:09.369Z', 'log': {'level': 'information'}, 'user': { 'id': 'S-1-0-0', 'name': 'John.Smith', 'domain': 'ACME' }, 'related': { 'port': 
'42554', 'ip': '10.17.13.118' }, 'logon': {'id': '0x3e7', 'type': 'Network', 'failure': {'sub_status': 'User logon with misspelled or bad password'}}, 'meta': {'risk_score': 46, 'severity': 'medium'}}}, { '_index': '.siem-signals-default-000017', '_type': '_doc', '_id': 'zzzzz', '_score': 0.0, '_source': { 'source': { 'port': '56489', 'ip': '10.18.13.101' }, 'observer': { 'type': 'firewall', 'name': 'pfSense', 'serial_number': 'xoxo', 'product': 'Supermicro', 'ip': '10.7.3.253' }, 'process': {'name': 'filterlog', 'pid': '45005'}, 'tags': ['firewall', 'IP_Private_Source', 'IP_Private_Destination'], 'destination': {'service': 'microsoft-ds', 'port': '445', 'ip': '10.250.0.64'}, 'log': {'risk_score': 73, 'severity': 'high'}, 'rule':{'name': 'Logstash Firewall (NetBIOS and SMB Vulnerability)'}}}]}} </code></pre> <p>Expected Output</p> <p>The sample below is possible only when the logs have the same standard structure. <a href="https://i.sstatic.net/U5Zi0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U5Zi0.png" alt="enter image description here" /></a></p>
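Since the documents are not standardized, one approach (a sketch, not from the original post) is a depth-first walk that picks up the first occurrence of each target key wherever it nests. One caveat: keys like `ip` can appear under several parents (`source.ip`, `observer.ip`), so first-match order depends on document layout, and a production version may need per-source path preferences:

```python
def find_keys(obj, wanted):
    """Depth-first search a nested dict/list for the first scalar value
    of each wanted key; missing keys are filled with '-'."""
    found = {}

    def walk(node):
        if isinstance(node, dict):
            for k, v in node.items():
                # only record scalars, so a key like 'rule' (a dict) is skipped
                if k in wanted and k not in found and not isinstance(v, (dict, list)):
                    found[k] = v
                walk(v)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(obj)
    return {k: found.get(k, "-") for k in wanted}
```

Applying `find_keys(hit, ["ip", "port", "username", "rule_name", "severity", "risk_score"])` to each element of `hits["hits"]` yields uniform rows ready for `pd.DataFrame`.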
<python><elasticsearch>
2023-02-10 20:02:08
0
959
Robin Sage
75,415,616
238,074
AWS Boto3 sts get_caller_identity - catching exceptions if credentials are invalid
<p>In a Python app using Boto3 v1.26.59 (and botocore of the same version), about the first thing done is to try to get the username of the user. We have Identity Center (SSO) users. With aged credentials (an expired token), two exceptions are thrown and I don't seem to be able to catch them. Here is a snippet:</p> <pre class="lang-py prettyprint-override"><code>import boto3  # type: ignore
import botocore.errorfactory as ef
import botocore.exceptions as bcexp

def profile_user_name(profile_name: str) -&gt; Optional[str]:
    session = boto3.Session(profile_name=profile_name)
    sts = session.client(&quot;sts&quot;)
    try:
        user_id = sts.get_caller_identity().get(&quot;UserId&quot;)
        return user_id.split(&quot;:&quot;)[-1].split(&quot;@&quot;)[0]
    except ef.UnauthorizedException as e:
        _logger.error(f'Not authenticated. Please execute: aws sso login --profile {profile_name}')
        return None
    except bcexp.UnauthorizedSSOTokenError as e:
        _logger.error(f'Not authenticated. Please execute: aws sso login --profile {profile_name}')
        return None
    except Exception as e:
        _logger.error(f&quot;Encountered exception '{str(e)}'!&quot;)
        return None
</code></pre> <p>Exceptions thrown by the above code look like these:</p> <pre><code>Refreshing temporary credentials failed during mandatory refresh period.
Traceback (most recent call last): File &quot;/Users/kevinbuchs/lib/python3.11/site-packages/botocore/credentials.py&quot;, line 2121, in _get_credentials response = client.get_role_credentials(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/kevinbuchs/lib/python3.11/site-packages/botocore/client.py&quot;, line 530, in _api_call return self._make_api_call(operation_name, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/kevinbuchs/lib/python3.11/site-packages/botocore/client.py&quot;, line 960, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.UnauthorizedException: An error occurred (UnauthorizedException) when calling the GetRoleCredentials operation: Session token not found or invalid During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/Users/kevinbuchs/lib/python3.11/site-packages/botocore/credentials.py&quot;, line 510, in _protected_refresh metadata = self._refresh_using() ^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/kevinbuchs/lib/python3.11/site-packages/botocore/credentials.py&quot;, line 657, in fetch_credentials return self._get_cached_credentials() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/kevinbuchs/lib/python3.11/site-packages/botocore/credentials.py&quot;, line 667, in _get_cached_credentials response = self._get_credentials() ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/kevinbuchs/lib/python3.11/site-packages/botocore/credentials.py&quot;, line 2123, in _get_credentials raise UnauthorizedSSOTokenError() botocore.exceptions.UnauthorizedSSOTokenError: The SSO session associated with this profile has expired or is otherwise invalid. To refresh this SSO session run aws sso login with the corresponding profile. Encountered exception 'The SSO session associated with this profile has expired or is otherwise invalid. To refresh this SSO session run aws sso login with the corresponding profile.'! 
The auth profile 'dev-devaccess-default' is not logged in. Login with 'aws sso login --profile dev-devaccess-default' and retry! </code></pre> <p>I thought I would check to see if I am missing some trick before submitting a GitHub issue.</p>
<python><amazon-web-services><single-sign-on><boto><botocore>
2023-02-10 19:58:49
1
2,922
Kevin Buchs
75,415,472
19,854,658
How to visualize EM steps in GMM model?
<p>I would like to visualize the EM steps taken in a GMM model but don't know how I would go about doing that.</p> <p>I've generated some synthetic data and fitted a model:</p> <pre><code>a = np.random.normal(loc=[2,2,2], scale=1.0, size=(100,3)) b = np.random.normal(loc=[5,5,5], scale=1.0, size=(100,3)) c = np.random.normal(loc = [7,7,7], scale = 1.0, size = (100,3)) data = np.concatenate((a,b,c), axis = 0) df = pd.DataFrame(data, columns=['x', 'y', 'z']) gm = GaussianMixture(n_components = 3, random_state = 213).fit(df) res = gm.fit_predict(df) </code></pre> <p>I've used graspologic (package for graph statistics) to visualize the end result but would like to see how the EM algorithm iterates through the data.</p> <p>Any thoughts on how I can implement this?</p>
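scikit-learn does not expose per-iteration state directly; one commonly suggested approach is refitting with `GaussianMixture(n_components=3, warm_start=True, max_iter=1)` in a loop and recording `gm.means_` after each `fit` call, though that usage is a plausible assumption rather than something tested here. To show the idea end to end, below is a minimal, self-contained 1-D two-component EM that records the means after every iteration so the optimisation path can be plotted (e.g. over a histogram of the data):

```python
import numpy as np

def em_gmm_1d(x, n_iter=20):
    """Tiny 2-component EM; returns the component means after each
    iteration so the trajectory can be visualised."""
    mu = np.array([x.min(), x.max()], dtype=float)  # deterministic init
    sig = np.full(2, x.std())
    w = np.full(2, 0.5)
    history = [mu.copy()]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and spreads
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        history.append(mu.copy())
    return history
```

Plotting each entry of `history` (one scatter or arrow per iteration) gives exactly the "EM steps" animation the question asks for, and the same bookkeeping idea extends to the 3-D, 3-component case.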
<python><pandas><machine-learning><visualization><gmm>
2023-02-10 19:41:45
1
379
Jean-Paul Azzopardi
75,415,440
5,924,264
__init__() takes from 1 to 2 positional arguments but 4 were given after refactoring a class to inherit from a base class
<p>I have a class that currently looks something like this:</p> <pre><code>import attr @attr.s class my_class(object): var1 = attr.ib(default=5) var2 = attr.ib(default=5) var3 = attr.ib(default=5) @classmethod def func(cls): cls(1, 2, 3) my_class.func() </code></pre> <p>I need to create an abstract base class and refactor <code>var1, var2</code> to be under base class. <code>var3</code> will stay as is.</p> <p>So I tried to do this:</p> <pre><code>import attr from abc import ABC, abstractmethod @attr.s class base(ABC, object): var1 = attr.ib(default=5) var2 = attr.ib(default=5) @attr.s class my_class(base): var3 = attr.ib(default=5) @classmethod def func(cls): cls(1, 2, 3) my_class.func() </code></pre> <p>but it gives the error</p> <pre><code>TypeError: __init__() takes from 1 to 2 positional arguments but 4 were given </code></pre> <p>It seems I'm not calling <code>cls</code> properly here since now there's only <code>var3</code> associated with <code>my_class</code>. If I want to initialize <code>var1</code> and <code>var2</code>, how I do it in an analogous way?</p>
<python><inheritance><attr><python-attrs>
2023-02-10 19:37:37
1
2,502
roulette01
75,415,350
1,413,826
Fiducial detection - hamming distance alternative
<p>I have a custom fiducial marker that is an X shape. It's all black and on a white background. I am able to use thresholding and contours to identify potential fiducial candidates, which I then perspective warp and downsize to 10x10 pixels and compare to a &quot;fixed&quot; or template fiducial. My current comparison algorithm is a simple Hamming Distance calculation between my fiducial candidates and the &quot;fixed&quot; or template fiducial.</p> <p>This current setup works well but it could be better. Specifically, using Hamming Distance to compare the candidates with the &quot;fixed&quot; fiducial seems weak, and doesn't really take into account the &quot;shape&quot; of the fiducial candidate.</p> <p>Below is an image that shows a candidate fiducial that I know to be correct with a Hamming Distance of 28. Then Fig C and Fig D are other images that also have a hamming distance of 28 from the fixed fiducial, but which are definitely not fiducials. <a href="https://i.sstatic.net/fFMsh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fFMsh.png" alt="Fiducial candidate - Hamming Distance" /></a></p> <p>Visually it seems quite clear that Fig C and Fig D are not actually what I'm looking for and that the &quot;Moving Image&quot; is much more likely to be a fiducial, even though they all have a hamming dist of 28 with respect to the Fixed Image.</p> <p>What is a more robust way to compare fiducial candidates than Hamming distance for this use case?</p>
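One drop-in upgrade (an illustration, not the only option): compare candidates with Intersection-over-Union (Jaccard similarity) instead of Hamming distance. IoU normalises the overlap by the union of foreground pixels, so a candidate that is mostly ink tends to score poorly even when its raw per-pixel mismatch count equals that of a genuine X; `cv2.matchShapes` (Hu moments) and normalised cross-correlation are other standard choices:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-Union of two binary masks: 1.0 means identical coverage."""
    a = a.astype(bool)
    b = b.astype(bool)
    union = (a | b).sum()
    return (a & b).sum() / union if union else 1.0
```

On the 10x10 downsampled patches from the question, `iou(candidate, template)` gives a score in [0, 1] that can be thresholded directly, and unlike Hamming distance it is insensitive to the large shared background.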
<python><computer-vision><object-detection><hamming-distance><fiducial-markers>
2023-02-10 19:28:53
0
2,648
steve
75,415,211
2,382,483
Buffered FIFO queue for streaming bytes in python?
<p>I am doing a lot of processing and generating of csv and json files in python, to which I'm new-ish. Here's a typical example of what I have done so far:</p> <pre><code># return a db cursor that streaming rows of data row_iter = await get_row_iterator() # get a pointer to a google cloud storage object to store the csv gcs_object = self.gcs_client.bucket(self.env.temp_bucket).blob(manifest_key) with gcs_object.open(mode=&quot;w&quot;, content_type=&quot;text/csv&quot;) as f: async for row_obj in row_iter: f.write(f&quot;{row_obj.some_column},{row_obj.some_other_column}\n&quot;) </code></pre> <p>This works fine, but if I understand correctly, the problem is that there is a lot of waiting around. While I'm writing a row with <code>f.write()</code>, the row-reading iterator <code>row_iter</code> is just sitting there doing nothing. While I'm reading from the iterator, the writer is twiddling its thumbs. While the way I've done this conserves memory, from a speed perspective its no better than reading all the rows into memory first, processing them, and <em>then</em> starting an upload. There should be no reason I can't be uploading data that has already been processed while also downloading and processing the next item or batch of items.</p> <p>I'd like to stream this process, so that the row iterator can be reading rows and stuffing them into a fixed size FIFO buffer as fast as it possibly can, while on the other end I'm reading from the buffer and writing the csv to Google Cloud Storage (or S3, or a local file, etc) as fast as the network can handle. If the buffer fills up, writing to it should block until there is space. If the buffer empties, reading from it should block until there is enough to read so that the maximum amount of memory used by this pipeline is controlled.</p> <p>Essentially, I want an equivalent to <a href="https://nodejs.org/api/stream.html" rel="nofollow noreferrer">NodeJS's concept of streams</a> in python. 
It looks an awful lot like I should be able to do something like this with python's <a href="https://docs.python.org/3/library/io.html#buffered-streams" rel="nofollow noreferrer">Buffered Streams</a>, but I can't figure out how to use them to accomplish this, and I can't find any examples of how they should be used. I'll need to do this with csv, json, and probably other custom formats, so I'm not necessarily looking for a package that handles this for a specific format, but a more generic way, even a way to write my own custom streaming transforms and pipelines.</p>
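Since the row iterator is already async, `asyncio.Queue` with a `maxsize` gives exactly the bounded-FIFO semantics described: `put()` blocks (awaits) when the buffer is full, `get()` blocks when it is empty. A sketch with a plain list standing in for the GCS blob writer (the upload itself is omitted):

```python
import asyncio

async def produce(queue, rows):
    # Awaits when the queue is full, which bounds memory use.
    for row in rows:
        await queue.put(row)
    await queue.put(None)  # sentinel: no more data

async def consume(queue, sink):
    # Awaits when the queue is empty, so the consumer never busy-waits.
    while (row := await queue.get()) is not None:
        sink.append(f"{row['a']},{row['b']}\n")

async def pipeline(rows):
    queue = asyncio.Queue(maxsize=8)  # the fixed-size FIFO buffer
    sink = []
    # Producer and consumer run concurrently, overlapping read and write.
    await asyncio.gather(produce(queue, rows), consume(queue, sink))
    return sink

rows = [{"a": i, "b": i * i} for i in range(3)]
out = asyncio.run(pipeline(rows))
```

In the real pipeline the consumer would call `f.write(...)` on the open blob instead of appending to `sink`; the queue itself is format-agnostic, so the same skeleton serves csv, json, or custom transforms.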
<python><node.js>
2023-02-10 19:10:32
0
3,557
Rob Allsopp
75,415,208
17,160,160
Extract longest block of continuous non-null values from each row in Pandas data frame
<p>Suppose I have a Pandas data frame structured similarly to the following:</p> <pre><code>data = { 'A' : [5.0, np.nan, 1.0], 'B' : [7.0, np.nan, np.nan], 'C' : [9.0, 2.0, 6.0], 'D' : [np.nan, 4.0, 9.0], 'E' : [np.nan, 6.0, np.nan], 'F' : [np.nan, np.nan, np.nan], 'G' : [np.nan, np.nan, 8.0] } df = pd.DataFrame( data, index=['11','22','33'] ) </code></pre> <p>From each row, I would like to extract the longest continuous block of non-null values and append them to a list.</p> <p>So the following values from these rows:</p> <pre><code>row11: [5,7,9] row22: [2,4,6] row33: [6,9] </code></pre> <p>Giving me a list of values:</p> <pre><code>[5.0, 7.0, 9.0, 2.0, 4.0, 6.0, 6.0, 9.0] </code></pre> <p>My current approach uses <code>iterrows()</code> <code>first_valid_index()</code> and <code>last_valid_index()</code>:</p> <pre><code>mylist = [] for i, r in df.iterrows(): start = r.first_valid_index() end = r.last_valid_index() mylist.extend(r[start: end].values) </code></pre> <p>This works fine when the valid digits are blocked together such as <code>row11</code> and <code>row22</code>. However my approach falls down when digits are interspersed with null values such as in <code>row33</code>. In this case, my approach extracts the entire row as the first and last index contain non-null values. My solution (incorrectly) outputs a final list of:</p> <pre><code>[5.0, 7.0, 9.0, 2.0, 4.0, 6.0, 1.0, nan, 6.0, 9.0, nan, nan, 8.0] </code></pre> <p>I have the following questions:<br /> 1.) How can I combat the error I'm facing in the example of <code>row33</code>?<br /> 2.) Is there a more efficient approach than using <code>iterrows()</code>? My actual data has many thousands of rows. While it isn't necessarily too slow, I'm always wary of resorting to iteration when using Pandas.</p>
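For question 1, one way is to label the consecutive non-null runs in each row (a cumulative count of nulls is constant exactly within a run) and keep the longest. This sketch still visits each row, but via `itertuples`, which is considerably cheaper than `iterrows`; a fully vectorised version is possible but harder to read:

```python
import numpy as np
import pandas as pd

def longest_runs(df):
    """Collect, per row, the longest consecutive run of non-null values."""
    out = []
    for row in df.itertuples(index=False):
        vals = np.asarray(row, dtype=float)
        mask = ~np.isnan(vals)
        # cumsum over nulls: constant inside each non-null run,
        # so it acts as a run label
        run_id = np.cumsum(~mask)
        best, best_len = None, 0
        for rid in np.unique(run_id[mask]):
            run = vals[(run_id == rid) & mask]
            if len(run) > best_len:
                best, best_len = run, len(run)
        if best is not None:
            out.extend(best.tolist())
    return out
```

On ties this keeps the leftmost longest run, which matches the `row33` example where `[6.0, 9.0]` beats the single-element runs `[1.0]` and `[8.0]`.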
<python><pandas>
2023-02-10 19:10:21
3
609
r0bt
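A sketch of an answer to both sub-questions above, assuming the exact frame shown: label each row's runs of consecutive non-null cells with a cumulative sum over the null mask, then keep the longest run per row (the first run wins ties). It still visits rows one by one, but it does a single pass per row and handles the interspersed-null case of `row33` correctly; the name `longest_runs` is illustrative, not from the question.

```python
import numpy as np
import pandas as pd

def longest_runs(df: pd.DataFrame) -> list:
    """For each row, collect the values from its longest run of
    consecutive non-null cells (first such run wins ties)."""
    out = []
    for _, row in df.iterrows():
        mask = row.notna().to_numpy()
        run_id = np.cumsum(~mask)          # constant within each non-null run
        runs: dict = {}
        for rid, val, ok in zip(run_id, row.to_numpy(), mask):
            if ok:
                runs.setdefault(rid, []).append(val)
        if runs:
            out.extend(max(runs.values(), key=len))
    return out

data = {
    'A': [5.0, np.nan, 1.0],    'B': [7.0, np.nan, np.nan],
    'C': [9.0, 2.0, 6.0],       'D': [np.nan, 4.0, 9.0],
    'E': [np.nan, 6.0, np.nan], 'F': [np.nan, np.nan, np.nan],
    'G': [np.nan, np.nan, 8.0],
}
df = pd.DataFrame(data, index=['11', '22', '33'])
print(longest_runs(df))  # [5.0, 7.0, 9.0, 2.0, 4.0, 6.0, 6.0, 9.0]
```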
75,415,048
17,274,113
Python lidar package dealing with NoData values
<p>I am attempting to use the python <code>lidar</code> package function <code>lidar.ExtractSinks()</code>. I have a raster dataset that looks like the following.</p> <p><a href="https://i.sstatic.net/4Txla.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Txla.png" alt="enter image description here" /></a></p> <p>Not a very useful image, but it gets the point across that I am only interested in the white area. The rest of it has been clipped, masked, or something to remove actual elevation values. This area is causing problems with the function.</p> <p>At this stage, I have attempted to replace 0 values with <code>nodata</code> I believe based on another post using rasterio <code>src.nodata = 0 # set the nodata value</code>. The reason I think This worked is that my data histogram now looks like the following, whereas before the histogram showed that I had a huge number of cells of value 0.</p> <p><a href="https://i.sstatic.net/XlowO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XlowO.png" alt="enter image description here" /></a></p> <p>Regardless of whether I apply this change to the black cells in the raster image, I receive the following error: <code>AttributeError: module 'numpy' has no attribute 'float'.</code>, which can be traced back to a line in the source code of the function: <code>max_elev = np.float(np.max(dem[dem != no_data]))</code>. This line is what makes me think this is a NoData cells issue. The documentation of the error suggests that <code>np.float</code> is deprecated and to use <code>float</code> instead, but this is not my code, so I cannot change it.</p> <p>Any suggestions as to how to get around this, or if I am even correct about the issue?</p> <p>Also, if it would be helpful to post more of the code I have used please let me know.</p> <p>Thanks!</p>
<python><lidar><no-data>
2023-02-10 18:49:58
0
429
Max Duso
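Regarding the `AttributeError` in the question above: it is a NumPy-version problem rather than a NoData problem — numpy 1.24 removed the long-deprecated `np.float` alias that the `lidar` source still calls. The clean fix is pinning `numpy<1.24` (or asking upstream to switch to plain `float`); a quick workaround, hacky but workable when you cannot edit the package, is restoring the alias before importing it:

```python
import numpy as np

# numpy >= 1.24 removed the deprecated np.float alias; restore it so
# older third-party code (here: lidar's `np.float(...)` call) keeps working.
if not hasattr(np, "float"):
    np.float = float  # type: ignore[attr-defined]

print(np.float(3.5))  # 3.5
```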
75,415,039
12,244,355
Python: Issue with reassigning columns to DataFrame
<p>I have a DataFrame with multiple columns. I am trying to normalize all the columns except for one, <code>price</code>.</p> <p>I found a code that works perfectly on a sample DataFrame I created, but when I use it on the original DataFrame I have, it gives an error <code>ValueError: Columns must be same length as key</code></p> <p>Here is the code I am using:</p> <pre><code>df_final_1d_normalized = df_final_1d.copy() cols_to_norm = df_final_1d.columns[df_final_1d.columns!='price'] df_final_1d_normalized[cols_to_norm] = df_final_1d_normalized[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min())) </code></pre> <p>The issue is with reassigning the columns to themselves in the third line of code.</p> <p>Specifically, this works <code>df_final_1d_normalized[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min()))</code>.</p> <p>But, this does not work <code>df_final_1d_normalized[cols_to_norm] = df_final_1d_normalized[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min()))</code></p> <p>Here is a sample dataframe in case you want to test it out to see that it actually works on other DataFrames</p> <pre><code>df = pd.DataFrame() df['A'] = [1,2,3,4, np.nan, np.nan] df['B'] = [2,4,2,4,5,np.nan] df['C'] = [np.nan, np.nan, 4,5,6,3] df['D'] = [np.nan, np.nan, np.nan, 5,4,9] df_norm = df.copy() cols_to_norm = df.columns[df.columns!=&quot;D&quot;] df_norm[cols_to_norm] = df_norm[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min())) </code></pre> <p>What could the error be?</p>
<python><pandas><dataframe><normalize><reassign>
2023-02-10 18:49:16
1
785
MathMan 99
75,415,020
12,394,134
plotly error: ValueError: The data property of a figure may only be assigned a list or tuple that contains a permutation of a subset of itself
<p>I am simulating some data and trying to plot various samples of it using <code>plotly</code> and <code>ipythonwidgets</code>. I created dropdowns to let people choose the sample size and the number of samples that they want to collect from a population distribution generated with this:</p> <pre><code>from random import seed from numpy.random import normal, negative_binomial, binomial import pandas as pd population_N = 1000000 population_data = pd.DataFrame({ &quot;data.normal&quot;: normal(0, 1, population_N), &quot;data.poisson&quot;: negative_binomial(1, 0.5, population_N), &quot;data.binomial&quot;: binomial(1, 0.5, population_N) }) </code></pre> <p>For them to sample (either with or without a condition) and to average over the sample for multiple samples, I created the following function:</p> <pre><code>def custom_simple(df, sample_size, type = &quot;random&quot;): &quot;&quot;&quot; Description ---- Take the population data.frame and generate a sample with a specific sample size Parameters ---- df(pd.DataFrame): the population dataset sample_size(int): the number of rows in the sample type(str): if random, pull a random sample Returns ---- sample(pd.DataFrame) &quot;&quot;&quot; if type == &quot;random&quot;: single = df.sample(sample_size) else: condition = df[&quot;data.normal&quot;] &lt; 1 single = df[condition].sample(sample_size) return sample def mean_samples(df, sample_size, type = &quot;random&quot;, num = 1): &quot;&quot;&quot; Description ---- Take each sample and calculate the mean for each variable in the sample Parameters ---- df(pd.DataFrame): the population dataset sample_size(int): the number of rows in the dataset type(str): if random, then do random sample num(int): number of samples to take Returns ---- sample_means(pd.DataFrame) &quot;&quot;&quot; sample_means = pd.DataFrame() def repeated_sample(df, sample_size, type = type, num = num): &quot;&quot;&quot; Description: ---- Take the population dataset and come up with a specified number 
of samples Parameters ---- df(pd.DataFrame): the population dataset sample_size(int): the number of rows in the dataset type(str): if random, then randomly sample num(int): the number of samples to generate Returns ---- sample_list(list(pd.DataFrame)): a list of samples stored as DataFrames &quot;&quot;&quot; sample_list = [custom_simple(df, sample_size, type) for n in range(num)] return sample_list raw_samples = repeated_sample(df, sample_size, type = type, num = num) for i in raw_samples: sample_mans = pd.concat([sample_means, i.mean(axis = 0).to_frame().T]) return sample_means </code></pre> <p>What these functions essentially do is take the <code>population_data</code> dataframe object, uses <code>pd.DataFrame.sample()</code> to pass varying sample sizes and to then repeat these samples to then be averaged over.</p> <p>I execute this function with a for loop that stores a dictionary element which is the column average for each sample size for each sample number:</p> <pre><code>sample_data = {} sample_sizes = [20, 50, 100, 200, 500, 1000, 2000] num_of_samples = [1, 2, 5, 10, 20, 50, 100] for j in num_of_samples: for i in sample_sizes: sample_data['size_{}_sample_{}'.format(i,j)] = mean_samples(df = population_data, sample_size = i, type = &quot;random&quot;, num = j) sample_data[&quot;population_data&quot;] = population_data </code></pre> <p>I then create a function that uses <code>ipython.widgets.Dropdown</code> to give me a dropdown for various sample and number of samples which should connect to one of the <code>dict</code> objects in <code>sample_data</code>. 
The function also contains information to pass chosen <code>dict</code> object to a plotly histogram.</p> <pre><code>def samples_histogram(dict, variable, type = &quot;random&quot;): sample_widget = widgets.Dropdown( options = [&quot;20&quot;, &quot;50&quot;, &quot;100&quot;, &quot;200&quot;, &quot;500&quot;, &quot;1000&quot;, &quot;2000&quot;], value = &quot;100&quot;, description = &quot;Sample size:&quot; ) sampling_widget = widgets.Dropdown( options = [&quot;2&quot;, &quot;5&quot;, &quot;10&quot;, &quot;20&quot;, &quot;50&quot;, &quot;100&quot;], value = &quot;2&quot;, description = &quot;# of samples&quot; ) trace = go.Histogram(x = dict.get(&quot;population_data&quot;)[variable]) fig = go.FigureWidget(data = trace) def response(change): if sample_widget.value == &quot;20&quot; and sampling_widget.value == &quot;2&quot;: temp_df = dict.get(&quot;size_20_sample_1&quot;) temp_df = list(dict.items()) temp_df = temp_df[temp_df[0]=='size_20_sample_1'][1] with fig.batch_update(): fig.data = [temp_df[variable].tolist()] sample_widget.observe(response, names = &quot;value&quot;) sampling_widget.observe(response, names = &quot;value&quot;) box = widgets.VBox([ sample_widget, sampling_widget, fig ]) display(box) </code></pre> <p>The problem I am running into, is that the <code>dict</code> object that I am passing, even when I convert it to a list (see this line):</p> <pre><code> with fig.batch_update(): fig.data = [temp_df[variable].tolist()] </code></pre> <p>It gives me this error:</p> <pre><code>ValueError: The data property of a figure may only be assigned a list or tuple that contains a permutation of a subset of itself. Received element value of type &lt;class 'list'&gt; </code></pre> <p>I am not sure if there is some way I can transform the objects I am passing to play nicer with plotly or if I am just missing something.</p>
<python><plotly><python-interactive>
2023-02-10 18:47:27
1
326
Damon C. Roberts
75,414,955
12,890,458
How to view runtime warnings in PyCharm when running tests using pytest?
<p>When running tests in PyCharm 2022.3.2 (Professional Edition) using pytest (6.2.4) and Python 3.9 I get the following result in the PyCharm console window:</p> <blockquote> <p>D:\cenv\python.exe &quot;D:/Program Files (x86)/JetBrains/PyCharm 2022.3.2/plugins/python/helpers/pycharm/_jb_pytest_runner.py&quot; --path D:\tests\test_k.py Testing started at 6:49 PM ... Launching pytest with arguments D:\tests\test_k.py --no-header --no-summary -q in D:\tests</p> <p>============================= test session starts ============================= collecting ... collected 5 items</p> <p>test_k.py::test_init test_k.py::test_1 test_k.py::test_2 test_k.py::test_3 test_k.py::test_4</p> <p>======================= 5 passed, 278 warnings in 4.50s =======================</p> <p>Process finished with exit code 0 PASSED [ 20%]PASSED [ 40%]PASSED [ 60%]PASSED [ 80%]PASSED [100%]</p> </blockquote> <p>So the actual warnings don't show. Only the number of warnings (278) is shown.</p> <p>I tried:</p> <ol> <li><p>selecting: Pytest: do not add &quot;--no-header --no-summary -q&quot; in advanced settings</p> </li> <li><p>Setting Additional arguments to -Wall in the Run/Debug configurations window</p> </li> <li><p>Setting Interpreter options to -Wall in the Run/Debug configurations window</p> </li> </ol> <p>and all permutations, all to no avail. 
Is there a way to show all runtime warnings when running tests using pytest in PyCharm in the PyCharm Console window?</p> <p>EDIT:</p> <p>@Override12</p> <p>When I select do not add &quot;--no-header --no-summary -q&quot; in advanced settings I get the following output:</p> <blockquote> <p>D:\Projects\S\SHARK\development_SCE\cenv\python.exe &quot;D:/Program Files (x86)/JetBrains/PyCharm 2020.3.4/plugins/python/helpers/pycharm/_jb_pytest_runner.py&quot; --path D:\Projects\S\SHARK\development_SCE\cenv\Lib\site-packages\vistrails-3.5.0rc0-py3.9.egg\vistrails\packages\SHARK\analysis\tests\test_fairing_1_plus_k.py -- --jb-show-summary Testing started at 10:07 AM ... Launching pytest with arguments D:\Projects\S\SHARK\development_SCE\cenv\Lib\site-packages\vistrails-3.5.0rc0-py3.9.egg\vistrails\packages\SHARK\analysis\tests\test_fairing_1_plus_k.py in D:\Projects\S\SHARK\development_SCE\cenv\Lib\site-packages\vistrails-3.5.0rc0-py3.9.egg\vistrails\packages</p> <p>============================= test session starts ============================= platform win32 -- Python 3.9.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- D:\Projects\S\SHARK\development_SCE\cenv\python.exe cachedir: .pytest_cache rootdir: D:\Projects\S\SHARK\development_SCE\cenv\Lib\site-packages\vistrails-3.5.0rc0-py3.9.egg\vistrails\packages plugins: pytest_check-1.0.5 collecting ... 
collected 5 items</p> <p>SHARK/analysis/tests/test_fairing_1_plus_k.py::test_init SHARK/analysis/tests/test_fairing_1_plus_k.py::test_without_1_k_fairing SHARK/analysis/tests/test_fairing_1_plus_k.py::test_1_k_fairing_given SHARK/analysis/tests/test_fairing_1_plus_k.py::test_without_1_k_fairing_only_3_values_under_threshold SHARK/analysis/tests/test_fairing_1_plus_k.py::test_1_k_fairing_given_only_3_values_under_threshold</p> <p>============================== warnings summary =============================== ......\pyreadline\py3k_compat.py:8 D:\Projects\S\SHARK\development_SCE\cenv\lib\site-packages\pyreadline\py3k_compat.py:8: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working return isinstance(x, collections.Callable)</p> <p>......\nose\importer.py:12 D:\Projects\S\SHARK\development_SCE\cenv\lib\site-packages\nose\importer.py:12: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses from imp import find_module, load_module, acquire_lock, release_lock</p> <p>SHARK/analysis/tests/test_fairing_1_plus_k.py: 276 warnings D:\Projects\S\SHARK\development_SCE\cenv\lib\site-packages\pymarin\objects\key.py:1101: UserWarning: siUnits is deprecated, use siUnit warnings.warn('siUnits is deprecated, use siUnit')</p> <p>-- Docs: <a href="https://docs.pytest.org/en/stable/warnings.html" rel="noreferrer">https://docs.pytest.org/en/stable/warnings.html</a> ======================= 5 passed, 278 warnings in 5.79s =======================</p> <p>Process finished with exit code 0 PASSED [ 20%]PASSED [ 40%]PASSED [ 60%]PASSED [ 80%]PASSED [100%]</p> </blockquote> <p>So 4 warnings are displayed. However I would like to see all 278 warnings.</p> <p>When I run pytest from the command line outside PyCharm I get the same result. 
So it seems to be a pytest problem rather than a PyCharm one.</p>
<python><pycharm><pytest><warnings>
2023-02-10 18:39:22
2
460
Frank Tap
75,414,867
14,366,102
How to properly encode a file in Polish in Python?
<p>I want to open the inflation data from the Polish Statistics Office.</p> <p>The code below:</p> <pre><code>import pandas as pd inflation_data_url = 'https://stat.gov.pl/download/gfx/portalinformacyjny/pl/defaultstronaopisowa/4741/1/1/miesieczne_wskazniki_cen_towarow_i_uslug_konsumpcyjnych_od_1982_roku.csv' pd = pd.read_csv(inflation_data_url, sep=';', encoding=&quot;UTF-8&quot;) print(pd.head()) </code></pre> <p>Gives me the following error:</p> <blockquote> <p>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf3 in position 42: invalid continuation byte</p> </blockquote> <p>If I try a different encoding:</p> <pre><code>import pandas as pd inflation_data_url = 'https://stat.gov.pl/download/gfx/portalinformacyjny/pl/defaultstronaopisowa/4741/1/1/miesieczne_wskazniki_cen_towarow_i_uslug_konsumpcyjnych_od_1982_roku.csv' pd = pd.read_csv(inflation_data_url, sep=';', encoding=&quot;ISO-8859-1&quot;) print(pd.head()) </code></pre> <p>I get no errors, but the encoding is clearly wrong, since &quot;Wskaźnik cen towarów i usług konsumpcyjnych&quot; is being decoded as &quot;Wskaw i us³ug konsumpcyjnych&quot;.</p> <p>How can I download this data in Python and open it so it displays proper characters?</p>
<python><pandas>
2023-02-10 18:30:18
1
811
Aleister Crowley
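The 0xf3 byte ('ó') and the '³'-for-'ł' substitution shown in the question both point to a Central European single-byte encoding — most likely Windows-1250 (`cp1250`), common for Polish government files; ISO-8859-2 is the other candidate. A small demonstration of why Latin-1 produces exactly the mojibake shown (the `read_csv` line is untested against the live URL and is an assumption about how the file was written):

```python
text = "Wskaźnik cen towarów i usług konsumpcyjnych"
raw = text.encode("cp1250")      # how the file was (probably) written

print(raw.decode("cp1250"))      # correct Polish
print(raw.decode("latin-1"))     # 0xB3 ('ł' in cp1250) renders as '³'

# For the CSV itself, something like:
# pd.read_csv(inflation_data_url, sep=';', encoding='cp1250')
```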
75,414,808
18,756,733
How to combine two HTML files into one in Python
<p>I created two plotly charts and saved them as HTML files separately. Is there a way to combine them into one HTML file? For example I can do this with PDF files using the following code:</p> <pre><code>from PyPDF2 import PdfFileMerger, PdfFileReader merger = PdfFileMerger() merger.append(PdfFileReader(open(filename1, 'rb'))) merger.append(PdfFileReader(open(filename2, 'rb'))) merger.write(&quot;merged.pdf&quot;) </code></pre> <p>Is there any library that can merge HTML files this way?</p>
<python><charts><merge>
2023-02-10 18:24:51
1
426
beridzeg45
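There is no drop-in HTML counterpart to `PdfFileMerger`, because two complete HTML documents cannot simply be concatenated — each carries its own `<html>`/`<head>`/`<body>` shell. A minimal sketch (the function name and shell markup are mine, not from any library): extract each file's `<body>` contents and stack them in a single document. For plotly exports this keeps each chart's embedded `<script>` block; a real parser such as BeautifulSoup would be more robust than the regex used here.

```python
import re

def merge_html(paths, out_path, title="merged"):
    """Naive merge: pull <body>...</body> from each file and stack them.
    Assumes well-formed input files."""
    bodies = []
    for p in paths:
        with open(p, encoding="utf-8") as f:
            html = f.read()
        m = re.search(r"<body[^>]*>(.*)</body>", html, re.S | re.I)
        bodies.append(m.group(1) if m else html)
    merged = (
        "<!DOCTYPE html><html><head><meta charset='utf-8'>"
        f"<title>{title}</title></head><body>"
        + "\n".join(bodies)
        + "</body></html>"
    )
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(merged)
```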
75,414,637
6,672,237
Altair + panel: chart dynamic dropdown filter doesn't work
<p>I almost have it, but the filter selection only works when executing the cell in jupyter. It doesn't get updated when the dropdown selector is used. I have three parts:</p> <ol> <li>A scatterplot that has to have a filter/dropdown menu bound to a list of values from a <code>df</code>'s column <code>Island</code>.</li> <li>Altair brush of type <code>selection_interval</code> that is connected to the scatterplot.</li> <li>When the scatterplot is rendered (in jupyter) and the brush selector is used, a table based on selected records from the <code>df</code> is rendered below the scatterplot and the records shown in the table are dynamically presented based on the brush selection.</li> </ol> <p>It almost works perfectly, but the dropdown menu doesn't get updated for the scatterplot, BUT DOES get updated for the table. What am I missing here? I need the dropdown to filter the chart too. Below are the code and the animated gif:</p> <pre><code>import panel as pn import pandas as pd import altair as alt pn.extension('vega', template='fast-list') penguins_url = &quot;https://raw.githubusercontent.com/vega/vega/master/docs/data/penguins.json&quot; df = pd.read_json(penguins_url) brush = alt.selection_interval(name='brush') # selection of type &quot;interval&quot; island = pn.widgets.Select(name='Island', options=df.Island.unique().tolist()) chart = alt.Chart(df.query(f'Island == &quot;{island.value}&quot;')).mark_point().encode( x=alt.X('Beak Length (mm):Q', scale=alt.Scale(zero=False)), y=alt.Y('Beak Depth (mm):Q', scale=alt.Scale(zero=False)), color=alt.condition(brush, 'Species:N', alt.value('lightgray')) ).properties( width=700, height=200 ).add_selection(brush) vega_pane = pn.pane.Vega(chart, debounce=5) def filtered_table(selection, island): if not selection: return '## No selection' query = ' &amp; '.join( f'{crange[0]:.3f} &lt;= `{col}` &lt;= {crange[1]:.3f} &amp; Island == &quot;{island}&quot;' for col, crange in selection.items() ) return pn.Column( f'Query: {query}', pn.pane.DataFrame(df.query(query).query(f'Island == &quot;{island}&quot;'), width=600, height=300) ) pn.Column( pn.Row(island, vega_pane), pn.bind(filtered_table, selection = vega_pane.selection.param.brush, island=island)) </code></pre> <p><a href="https://i.sstatic.net/bwzsi.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bwzsi.gif" alt="chart rendering and faulty filtering" /></a></p>
<python><panel><altair>
2023-02-10 18:02:09
1
562
kuatroka
75,414,598
17,696,880
Check if a string matches any of the lines of a txt file, and if it doesn't match any then add it to that txt file
<pre class="lang-py prettyprint-override"><code>import os if (os.path.isfile('data_file.txt')): data_memory_file_path = 'data_file.txt' else: open('data_file.txt', &quot;w&quot;).close() data_memory_file_path = 'data_file.txt' #Example input list with info in sublists reordered_input_info_lists = [ [['corre'], ['en el patio'], ['2023-02-05 00:00 am']], [['corre'], ['en el patio'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']], [['salta'], ['en el bosque'], ['2023-02-05 00:00 am']], [['salta'], ['en el patio'], ['2023-02-05 00:00 am']], [['dibuja'], ['en el bosque'], ['2023-02-05 00:00 am']], [['dibuja'], ['en el patio'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']]] #I decompose the main list into the sublists that compose it, and each sublist will be a string # that will be evaluated if it matches any of the already existing lines in the .txt for info_list in reordered_input_info_lists: #I convert the list to string to have it ready to compare it with the lines of the txt file info_list_str = repr(info_list) #THIS IS WHERE I HAVE THE PROBLEM, AND IT IS WHERE THE CHECK OF THE TXT LINES SHOULD BE </code></pre> <p>This is the text content that is contained in <code>data_file.txt</code> (assuming it is already created in this case)</p> <pre><code>[['analiza'], ['en la oficina'], ['2022-02-05 00:00 am']] [['corre'], ['en el bosque'], ['2023-02-05 00:00 am']] [['corre'], ['en el bosque'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']] [['corre'], ['en el patio'], ['2023-02-05 00:00 am']] [['corre'], ['en el patio'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']] [['dibuja'], ['en el estudio de animación'], ['2023-02-05 00:00 am']] [['dibuja'], ['en el estudio de animación'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']] [['dibuja'], ['en la escuela'], ['2023-02-05 00:00 am']] [['dibuja'], ['en la escuela'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']]] </code></pre> <p>After adding all the lines that were not there in the 
<code>data_file.txt</code>, the content of the file would look like this:</p> <pre><code>[['analiza'], ['en la oficina'], ['2022-02-05 00:00 am']] [['corre'], ['en el bosque'], ['2023-02-05 00:00 am']] [['corre'], ['en el bosque'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']] [['corre'], ['en el patio'], ['2023-02-05 00:00 am']] [['corre'], ['en el patio'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']] [['dibuja'], ['en el bosque'], ['2023-02-05 00:00 am']], [['dibuja'], ['en el estudio de animación'], ['2023-02-05 00:00 am']] [['dibuja'], ['en el estudio de animación'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']] [['dibuja'], ['en el patio'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']]] [['dibuja'], ['en la escuela'], ['2023-02-05 00:00 am']] [['dibuja'], ['en la escuela'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']]] [['salta'], ['en el bosque'], ['2023-02-05 00:00 am']] [['salta'], ['en el patio'], ['2023-02-05 00:00 am']] </code></pre> <p>One thing that is important is that the lines within the file must be arranged alphabetically. For code speed reasons, I don't know if it's convenient to alphabetize the lines at the end (i.e. after adding all the necessary lines) or if it's better for the program to put each line in its alphabetical place one by one, assuming that the file's previous lines are already sorted.</p> <pre class="lang-py prettyprint-override"><code>data_memory_file = open(data_memory_file_path) for line in sorted(data_memory_file.readlines()): print (line) </code></pre>
<python><python-3.x><string><file><writefile>
2023-02-10 17:57:45
1
875
Matt095
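For the check the question above asks about (and the speed concern at the end): a `set` gives O(1) membership tests, and sorting once after all insertions is simpler and cheaper than maintaining alphabetical order on every insert. A sketch under the assumption that every record occupies exactly one line, as in the question's file; the function name is made up.

```python
def add_missing_lines(path, candidate_lines):
    """Add each candidate string that is not already a line of the file,
    then rewrite the file with all lines sorted alphabetically."""
    try:
        with open(path, encoding="utf-8") as f:
            existing = {line.rstrip("\n") for line in f}
    except FileNotFoundError:
        existing = set()                     # file not created yet
    merged = sorted(existing | set(candidate_lines))
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(merged) + "\n")
```

With the question's data this would be called as `add_missing_lines(data_memory_file_path, [repr(info_list) for info_list in reordered_input_info_lists])`.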
75,414,564
7,519,434
Inherit template without using super()
<p>I have the following <code>base.html</code></p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;script src=&quot;https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.3/jquery.min.js&quot;&gt;&lt;/script&gt; &lt;link rel = &quot;stylesheet&quot; href = &quot;{{ url_for('static', filename = 'css/base.css') }}&quot; type = &quot;text/css&quot;/&gt; {% block head %} {% endblock %} &lt;/head&gt; &lt;body&gt; {% block content %} {% endblock %} &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Then I have <code>base_header.html</code>, which I want for pages with a header</p> <pre><code>{% extends &quot;base.html&quot; %} {% block head %} &lt;link rel = &quot;stylesheet&quot; href = &quot;{{ url_for('static', filename = 'css/header.css') }}&quot; type = &quot;text/css&quot;/&gt; {% endblock %} {% block content %} &lt;header&gt; ... &lt;/header&gt; {% endblock %} </code></pre> <p>In order for the header to show up, I have to call <code>super()</code> in both blocks</p> <pre><code>{% extends &quot;base_header.html&quot; %} {% block head %} {{ super() }} &lt;title&gt;Page title&lt;/title&gt; {% endblock %} {% block content %} {{ super() }} &lt;h1&gt;Page header&lt;/h1&gt; {% endblock %} </code></pre> <p>Is it possible to have this sort of template inheritance without having to include <code>super()</code> every time I want to include the header? I would like it so if I decide a page needs a header, I can just change <code>extends &quot;base.html&quot;</code> to <code>extends &quot;base_header.html&quot;</code> without further changes.</p>
<python><jinja2>
2023-02-10 17:53:35
1
3,989
Henry
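A common Jinja pattern for the question above is to put the header markup outside any overridable block in <code>base_header.html</code> and expose fresh nested blocks for pages to fill — then no <code>super()</code> call is ever needed. A sketch (the block names <code>page_head</code>/<code>page_content</code> are my invention):

```jinja
{% extends "base.html" %}

{% block head %}
    <link rel = "stylesheet" href = "{{ url_for('static', filename = 'css/header.css') }}" type = "text/css"/>
    {% block page_head %}{% endblock %}
{% endblock %}

{% block content %}
    <header> ... </header>
    {% block page_content %}{% endblock %}
{% endblock %}
```

Pages then override <code>page_head</code>/<code>page_content</code> instead of <code>head</code>/<code>content</code>. For the one-line <code>extends</code> switch to keep working, <code>base.html</code> would also need to define the same empty <code>page_head</code>/<code>page_content</code> blocks inside its own <code>head</code>/<code>content</code> blocks.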
75,414,525
935,376
Python: looping faster using inbuilt functions
<p>I took a python test where I had to code a function to solve the problem below. It passed some test cases but failed runtime for some others because it took a long time. The function feels bloated. How to make the function faster?</p> <p>Here is the problem:</p> <p>A truck fleet dispatcher is trying to determine which routes are still accessible after heavy rains flood certain highways. During their trips, trucks must follow linear, ordered paths between 26 waypoints labeled A through Z; in other words, they must traverse waypoints in either standard or reverse alphabetical order.</p> <p>The only data the dispatcher can use is the trip logbook, which contains a record of the recent successful trips. The logbook is represented as a list of strings, where each string (corresponding to one entry) has two characters corresponding to the trip origin and destination waypoints respectively. If the logbook contains a record of a successful trip between two points, it can be assumed that all of the waypoints between those points are also accessible. Note that logbook entries imply that both directions of the traversal are valid. For example, an entry of RP means that trucks can move along both R --&gt; Q --&gt; P and P --&gt; Q --&gt; R.</p> <p>Given an array of logbook entries, your task is to write a function to return the length of the longest consecutive traversal possible; in other words, compute the maximum number of consecutive edges known to be safe.</p> <p>Example</p> <p>For logbook = [&quot;BG&quot;, &quot;CA&quot;, &quot;FI&quot;, &quot;OK&quot;], the output should be solution(logbook) = 8.</p> <p>Because we can get both from A to C and from B to G, we can thus get from A to G. Because we can get from F to I and access I from G, we can therefore traverse A --&gt; I. This corresponds to a traversal length of 8, since 8 edges connect these 9 waypoints. O through K is a length 4 traversal. 
These two paths are disjoint, so no longer consecutive paths can be found and the answer is 8.</p> <p>Conditions: The run time should be less than 4 seconds</p> <pre><code>def solution(logbook): tpf = &quot;ABCDEFGHIJKLMNOPQRSTUVWXYZ&quot; tpf_list = list(tpf) flag=0 longest_route = [0 for x in range(26)] for i in logbook: p = sorted(i) st_idx = p[0] ed_idx = p[1] for j in range(26): if tpf_list[j]==st_idx: flag = 1 if flag == 1 and tpf_list[j]!=ed_idx: longest_route[j] = 1 if flag == 1 and tpf_list[j]==ed_idx: flag =0 summ =1 list2=[] for i in range(25): if longest_route[i] == 0: summ =1 if longest_route[i] == 1 and longest_route[i+1] == 1 : summ+=1 if longest_route[i] == 1 and longest_route[i+1] == 0: list2.append(summ) return max(list2) </code></pre>
<python><python-3.x>
2023-02-10 17:50:58
0
2,064
Zenvega
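Since the waypoint alphabet is fixed, the problem above reduces to 25 booleans — one per edge A–B through Y–Z: mark every edge covered by any logbook entry, then scan once for the longest run of marked edges. This is O(26 · len(logbook)) and drops the inner alphabet scan and flag bookkeeping of the posted attempt:

```python
def solution(logbook):
    safe = [False] * 25                    # safe[i]: edge between letter i and i+1
    for entry in logbook:
        a, b = sorted(entry)               # normalise direction, e.g. "CA" -> A..C
        for i in range(ord(a) - ord("A"), ord(b) - ord("A")):
            safe[i] = True
    best = run = 0
    for s in safe:                         # longest run of consecutive safe edges
        run = run + 1 if s else 0
        best = max(best, run)
    return best

print(solution(["BG", "CA", "FI", "OK"]))  # 8
```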
75,414,333
3,281,427
How to override the autocomplete view for a specific model field in the Django admin?
<p>I need to override the way a Select2 widget of a particular <code>ForeignKey</code> is behaving in the Django (4.1) admin.</p> <p>Namely, I am trying to increase the number of objects returned by the AJAX requests and defined by <code>paginate_by</code> in <a href="https://github.com/django/django/blob/main/django/contrib/admin/views/autocomplete.py" rel="nofollow noreferrer"><code>AutocompleteJsonView</code></a>.</p> <p>Sadly <a href="https://stackoverflow.com/a/56865950/3281427">this great solution</a> no longer works with Django 4.</p> <p>How can I extend <code>AutocompleteJsonView</code> and somehow tell Django to use my custom view?</p> <p>EDIT: my current workaround is to override <code>get_paginator</code> on the <code>ModelAdmin</code> by setting <code>paginator.per_page</code> to whatever value is suitable, but that doesn't answer to broader question of customising the autocomplete behaviour of Select2 widgets in the Django admin.</p>
<python><django><jquery-select2>
2023-02-10 17:29:12
1
1,694
Buddyshot
75,414,227
14,895,107
WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op while saving in tensorflow
<p>I just trained a CNN with 99% accuracy on the MNIST dataset. My model is working fine and giving accurate results, but when I converted my <code>h5</code> model to a <code>tflite</code> model, I'm getting only one result every time, i.e.: <a href="https://i.sstatic.net/AD93n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AD93n.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/TnK1X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TnK1X.png" alt="enter image description here" /></a></p> <p>Code to convert my <code>h5</code> model into a <code>tflite</code> model:</p> <pre><code>tf_lite_interpreter=tf.lite.TFLiteConverter.from_keras_model(model) with open(&quot;mnist_tflite.tflite&quot;,&quot;wb&quot;) as f: f.write(tf_lite_interpreter.convert()) </code></pre> <p>I noticed that I get this warning when converting:</p> <pre><code>WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op while saving (showing 1 of 1). These functions will not be directly callable after loading. </code></pre> <p>Also, when I removed the <code>Conv2D</code> and <code>MaxPooling2D</code> layers, the warning was gone.</p> <p>Model's structure:</p> <pre><code>model=tf.keras.models.Sequential([ tf.keras.layers.Conv2D(64,(3,3),input_shape=(28,28,1),activation=tf.nn.relu), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dropout(0.25), tf.keras.layers.Dense(128,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax), ]) </code></pre> <p>Why is this happening?</p> <p>Any help would be appreciated.</p>
<python><tensorflow><conv-neural-network><tensorflow2.0><tensorflow-lite>
2023-02-10 17:16:57
0
903
Abhimanyu Sharma
75,414,218
5,446,815
Rotate pie chart in interactive window without replotting
<p>I am drawing a pie chart out of provided data, and this can potentially get out of hand as the length of the labels can be pretty long, and there can be a lot of them overlapping each other. Because of this, it is crucial to find a good startangle parameter to my pie chart drawing.</p> <p>Conceptually, I want to use a mouse scroll event to rotate the whole pie chart by 5 degrees every time the user uses the scroll wheel. Rotating the wedges isn't too much trouble with their theta1 and theta2 properties, but repositioning the labels and autotexts is serious trouble because of alignment properties and the lining up with wedges. I also want to retain an interactive frame rate, so clearing the figure and redrawing is not an option.</p> <p>Here is one such situation where this is useful. The labels are too big and rotating the whole pie chart would help reposition them in sight. Of course in this case it would be enough to resize the chart instead of rotating but you get my point.</p> <p><a href="https://i.sstatic.net/Vri1a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vri1a.png" alt="Pie chart with labels going off the side" /></a></p> <p>Is there a way to achieve this that does not imply rewriting the entire label and autotext positioning code for my own use?</p> <p>In particular, I'm wondering if it wouldn't be possible to do something that is conceptually equivalent to making the same <code>pyplot.pie</code> call as before, only with a different startangle. Alternatively, maybe Text objects have methods I can use for positioning them around the newly rotated wedges that spare me working with just positions and sizes.</p>
<python><matplotlib>
2023-02-10 17:16:13
0
652
Matrefeytontias
75,414,174
11,710,304
Testing if strip in column was successful with polars
<p>I have developed a function to strip a dataset using polars. Now I want to check with a test if the strip was successful. For this I want to use the following logic. But this code is in python. How can I solve this using polars?</p> <pre><code>def test_strip(): df = pd.DataFrame({ 'ID': [1, 1, 1, 1, 1], 'Entity': ['Entity 1 ', 'Entity 2', 'Entity 3', 'Entity 4', 'Entity 5'], 'Table': ['Table 1', ' Table 2', 'Table 3', 'Table 4', None], 'Local': ['Local 1', 'Local 2 ', None, 'Local 4', 'Local 5'], 'Global': ['Global 1', ' Global 2', 'Global 3', None, ' Global 5'], 'mandatory': ['M', 'M', 'M', 'CM ', 'M'] }) job = first_job( config=test_config, copying_list=copying, ) result = job.run(df) df_clean, *_ = result for column in df_clean.columns: for value in df_clean[column]: if isinstance(value, str) and (value.startswith(&quot; &quot;) or value.endswith(&quot; &quot;)): raise AssertionError(f&quot;Strip failed for column '{column}'&quot;) </code></pre>
<python><python-polars>
2023-02-10 17:12:01
1
437
Horseman
75,414,167
7,376,511
Dynamically generate mypy-compliant property setters
<p>I am trying to declare a base class with certain attributes for which the (very expensive) calculation differs depending on the subclass, but that accepts injecting the value if it was previously calculated:</p> <pre><code>class Test: _value1: int | None = None _value2: str | None = None _value3: list | None = None _value4: dict | None = None @property def value1(self) -&gt; int: if self._value1 is None: self._value1 = self._get_value1() return self._value1 @value1.setter def value1(self, value1: int) -&gt; None: self._value1 = value1 def _get_value1(self) -&gt; int: raise NotImplementedError class SubClass(Test): def _get_value1(self) -&gt; int: time.sleep(1000000) return 1 instance = SubClass() instance.value1 = 1 print(instance.value1) # doesn't wait </code></pre> <p>As you can see it becomes very verbose, with every property having three different functions associated with it.</p> <p>Is there a way to dynamically declare at the very least the setter, so that mypy knows it's always the same function but with proper typing? Or in general, is there a more concise way to declare, in bulk, this kind of writable property whose underlying implementation must be provided by each subclass?</p> <p>Declaring <code>__setattr__</code> doesn't seem to be viable, because just having <code>__setattr__</code> declared tricks mypy into thinking I can just assign any value to anything else that's not overloaded, while I still want errors to show up in case I'm trying to assign the wrong attributes. It also doesn't fix that I still need to declare setters, otherwise mypy thinks the value is immutable.</p>
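One way to cut the boilerplate, as a sketch, is a small generic descriptor that plays the role of getter, setter, and cache at once; mypy handles `__get__`/`__set__` typing reasonably well, though not identically to `@property`. The name `LazyValue` is made up for illustration:

```python
from typing import Any, Generic, Optional, TypeVar

T = TypeVar("T")

class LazyValue(Generic[T]):
    """Computes obj._get_<name>() on first read unless a value was injected."""

    def __set_name__(self, owner: type, name: str) -> None:
        self._name = name
        self._attr = "_" + name

    def __get__(self, obj: Any, objtype: Optional[type] = None) -> T:
        if obj is None:
            return self  # type: ignore[return-value]  # class-level access
        if getattr(obj, self._attr, None) is None:
            setattr(obj, self._attr, getattr(obj, "_get_" + self._name)())
        return getattr(obj, self._attr)

    def __set__(self, obj: Any, value: T) -> None:
        setattr(obj, self._attr, value)

class Test:
    value1: LazyValue[int] = LazyValue()

    def _get_value1(self) -> int:
        raise NotImplementedError

class SubClass(Test):
    def _get_value1(self) -> int:
        return 1  # stands in for the expensive computation

instance = SubClass()
instance.value1 = 5      # injected: the expensive getter never runs
print(instance.value1)   # 5
print(SubClass().value1) # 1, computed lazily on first access
```

Each additional attribute then needs only one class-level declaration plus its `_get_<name>` override, instead of three functions.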
<python><mypy><setter>
2023-02-10 17:11:31
2
797
Some Guy
75,413,996
10,010,688
Python encode spaces in url only and not other special characters
<p>I know this question has been asked many times but I can't seem to find the variation that I'm looking for specifically.</p> <p>I have a URL, let's say it's:</p> <pre><code>https://somethingA/somethingB/somethingC/some spaces here </code></pre> <p>I want to convert it to:</p> <pre><code>https://somethingA/somethingB/somethingC/some%20spaces%20here </code></pre> <p>I know I can do it with the <code>replace</code> function like below:</p> <pre><code>url = 'https://somethingA/somethingB/somethingC/some spaces here' url = url.replace(' ', '%20') </code></pre> <p>But I have a feeling that the best practice is probably to use the <code>urllib.parse</code> library. The problem is that when I use it, it encodes other special characters like <code>:</code> too.</p> <p>So if I do:</p> <pre><code>url = 'https://somethingA/somethingB/somethingC/some spaces here' urllib.parse.quote(url) </code></pre> <p>I get:</p> <pre><code>https%3A//somethingA/somethingB/somethingC/some%20spaces%20here </code></pre> <p>Notice the <code>:</code> also gets converted to <code>%3A</code>. So my question is, is there a way I can achieve the same thing as replace with <code>urllib</code>? I think I would rather use a tried and tested library that is designed specifically to encode URLs instead of reinventing the wheel, since I may be missing something that leads to a security loophole. Thank you.</p>
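For reference, `urllib.parse.quote` takes a `safe` parameter (default `"/"`) listing characters to leave unencoded, so keeping the colon intact only needs:

```python
from urllib.parse import quote

url = "https://somethingA/somethingB/somethingC/some spaces here"

# safe defaults to "/"; adding ":" keeps the scheme separator unencoded
encoded = quote(url, safe=":/")
print(encoded)
# https://somethingA/somethingB/somethingC/some%20spaces%20here
```

For URLs that also carry query strings, splitting the URL first and quoting each component is the more robust approach, but for path-only encoding this is sufficient.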
<python>
2023-02-10 16:54:33
1
3,858
Mark
75,413,774
5,722,359
X Error of failed request: BadAlloc (insufficient resources for operation) while running tkinter
<p>I encountered the following error message while running a Python script that I am working on. It occurred while tkinter was loading ~1800 thumbnail images, each being 200x200 pixels large, into individual <code>ttk.Checkbutton</code>s. It did not finish this process and the program crashed with this error message.</p> <pre><code>X Error of failed request: BadAlloc (insufficient resources for operation) Major opcode of failed request: 53 (X_CreatePixmap) Serial number of failed request: 140089 Current serial number in output stream: 140097 </code></pre> <p>The system on which I ran this Python tkinter script has a large RAM capacity. Before the program crashed, the RAM utilisation was only around 10%.</p> <p>I don't understand this error message. Can you please explain what is causing this error? Is this error caused by a limitation of the tkinter package or of the system's hardware (e.g. RAM, GPU, etc.)? Any way to overcome this?</p> <p>I found a <a href="https://stackoverflow.com/questions/15577540/x-error-of-failed-request-badalloc-insufficient-resources-for-operation">similar question</a> but it was posted 9 years ago. I do not know if it is still relevant.
Appreciate your assistance.</p> <p><strong>Update:</strong></p> <p>Following what was tried by the similar question, I added:</p> <pre><code>Option &quot;VideoRam&quot; &quot;65536&quot; Option &quot;CacheLines&quot; &quot;1980&quot; </code></pre> <p>into the <code>Section &quot;Device&quot;</code> segment of /etc/X11/xorg.conf , i.e it is now:</p> <pre><code>Section &quot;Device&quot; Identifier &quot;Device0&quot; Driver &quot;nvidia&quot; VendorName &quot;NVIDIA Corporation&quot; Option &quot;VideoRam&quot; &quot;65536&quot; Option &quot;CacheLines&quot; &quot;1980&quot; EndSection </code></pre> <p>The error changed to:</p> <pre><code>X Error of failed request: BadAlloc (insufficient resources for operation) Major opcode of failed request: 53 (X_CreatePixmap) Serial number of failed request: 128544 Current serial number in output stream: 128552 </code></pre> <p>The <code>Current serial number in output stream: 140097</code> dropped to 128552.</p> <p>The explanation of XCreatePixmap is given <a href="https://tronche.com/gui/x/xlib/pixmap-and-cursor/XCreatePixmap.html" rel="nofollow noreferrer">here</a>. It states that <strong>BadAlloc</strong> occurs when the server failed to allocate the requested source or server memory.</p> <p>I tried to create a test code to simulate the error. Although it failed to simulate the X BadAlloc error, it provides a simplified view of the scenario when the error occurred in my code. Nonetheless, I did learn that the max grid rows or columns allowed by tkinter is &lt; 10000 because it will return <code>_tkinter.TclError: row out of bounds</code>. So if my original code is hitting the error when the quantity value used is 1800, is not related to <code>_tkinter.TclError: row out of bounds</code>. 
I wonder if my error is related to one of the thumbnails supplied to my original code.</p> <p><strong>Test code:</strong></p> <pre><code>import tkinter as tk import tkinter.ttk as ttk from PIL import Image, ImageTk class App(ttk.Frame): def __init__(self, parent, file, quantity): super().__init__(parent) self.parent = parent self.quantity = quantity self.image = ImageTk.PhotoImage(Image.open(file)) self.checkbuttons = [] self.checkvalues = [] self.create_widgets() def create_widgets(self): for i in range(self.quantity): self.checkvalues.append(tk.BooleanVar()) self.checkbuttons.append( ttk.Checkbutton(self, image=self.image, variable= self.checkvalues[-1]) ) self.checkbuttons[-1].grid(row=i, column=0) if __name__ == &quot;__main__&quot;: root = tk.Tk() app = App(root, 'thumbnail.jpg', 9000) app.grid(row=0, column=0) root.columnconfigure(0, weight=1) root.rowconfigure(0, weight=1) root.mainloop() </code></pre> <p><a href="https://i.sstatic.net/VPLxG.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VPLxG.jpg" alt="thumbnail" /></a></p>
<python><tkinter><tk-toolkit>
2023-02-10 16:34:40
0
8,499
Sun Bear
75,413,603
11,622,712
How to retrieve the text removed by regex sub?
<p>I have a regex expression in Python that is expected to remove all occurrences of the word &quot;NOTE.&quot; and the following sentence. How can I do it correctly and also return all sentences being removed that way?</p> <pre><code>import re text = &quot;NOTE. This is the subsequent sentence to be removed. The weather is good. NOTE. This is another subsequent sentence to be removed. The sky is blue. Note that it's a dummy text.&quot; clean_text = re.sub(&quot;NOTE\..*?(?=\.)&quot;, &quot;&quot;, text) </code></pre> <p><strong>Expected result:</strong></p> <p>clean_text:</p> <pre><code>The weather is good. The sky is blue. Note that it's a dummy text. </code></pre> <p>unique_sentences_removed:</p> <pre><code>[&quot;This is the subsequent sentence to be removed.&quot;, &quot;This is another subsequent sentence to be removed.&quot;] </code></pre>
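One sketch of a solution: use a single pattern with a capture group, calling `re.findall` to collect the removed sentences and `re.sub` to build the cleaned text. The pattern also swallows the whitespace after each removed sentence so no double spaces remain:

```python
import re

text = ("NOTE. This is the subsequent sentence to be removed. The weather is good. "
        "NOTE. This is another subsequent sentence to be removed. The sky is blue. "
        "Note that it's a dummy text.")

# "NOTE." plus whitespace, then lazily capture up to the next period,
# then eat trailing whitespace; case-sensitive, so "Note" is untouched
pattern = r"NOTE\.\s*(.*?\.)\s*"

removed = re.findall(pattern, text)
clean_text = re.sub(pattern, "", text)

# de-duplicate while preserving order
unique_sentences_removed = list(dict.fromkeys(removed))

print(clean_text)
# The weather is good. The sky is blue. Note that it's a dummy text.
print(unique_sentences_removed)
```

`re.Pattern` could also be compiled once if this runs over many documents.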
<python><regex>
2023-02-10 16:17:30
3
2,998
Fluxy
75,413,514
10,872,199
Django channels. How to select consumer when sending through channel layer?
<p>In my application, I have two consumers with the same handler names:</p> <pre class="lang-py prettyprint-override"><code>class GeneralConsumer(AuthenticatedOnlyAsyncConsumer): async def new_message(self, data): ... async def message_update(self, data): ... async def message_delete(self, data): ... class ChatVisitorConsumer(AuthenticatedOnlyAsyncConsumer): async def new_message(self, data): ... async def message_update(self, data): ... async def message_delete(self, data): ... </code></pre> <p>The consumers share the same functionality but are used in different cases.</p> <hr /> <p>I wonder how I can pick a specific consumer's handler when calling <code>.send</code> or <code>.group_send</code> on the channel layer object obtained from the function <code>channels.layers.get_channel_layer</code>.</p>
<python><django><websocket><django-channels>
2023-02-10 16:09:52
0
896
PythonNewbie
75,413,389
7,162,827
CDK scope-error when executing external jobs
<p>I've created a setup that works to generate state machines, however for some reason it does not work for the two specific <code>Task</code>s <code>GlueStartJobRun</code> and <code>StepFunctionsStartExecution</code>. Here is part of my current setup:</p> <pre><code>. ├── app.py ├── cloudformation │ ├── stepfunctions.py # includes `stack` ├── statemachines │ ├── some_workflow.py │ └── another_workflow.py └── ... </code></pre> <p>Inside of the <code>stepfunctions.py</code>, i define my stack and a way to generate state machines:</p> <pre><code>import dataclasses from aws_cdk import aws_stepfunctions from constructs import Construct @dataclasses.dataclass class StateMachineUtil: name: str definition: aws_stepfunctions.Chain from aws_cdk import Stack class StepfunctionStack(Stack): def __init__(self, scope: Construct, env: Environment): super().__init__(scope, env=env) def generate_state_machines(self, state_machines): [self.__generate_state_machine(state_machine) for state_machine in state_machines] def __generate_state_machine(self, state_machine): return aws_stepfunctions.StateMachine( scope=self, id=f&quot;{state_machine.name}-statemachine&quot;, definition=state_machine.definition, state_machine_name=state_machine.name, ) </code></pre> <p>Then, i define a function inside the workflow-scripts which returns a <code>definition</code>, which is</p> <pre><code>from aws_cdk.aws_stepfunctions_tasks import CallAwsService from aws_cdk.aws_stepfunctions import Pass def generate_some_definition(scope: Construct) -&gt; aws_stepfunctions.Chain: step_1 = Pass( scope=scope, id=&quot;some-id&quot;, ) step_2 = CallAwsService( scope=scope, id=&quot;another-id&quot;, service=&quot;glue&quot;, action=&quot;GetTable&quot;, iam=[&quot;*&quot;], parameters={...} result_path=&quot;...&quot; ) step_3 = Pass( scope=scope, id=&quot;a-third-id&quot;, ) return step_1.next(step_2).next(step_3) </code></pre> <p>The script <code>app.py</code> should now simply put everything together:</p> 
<pre><code>from cloudformation.stepfunctions import StepfunctionStack, StateMachineUtil from statemachines.some_workflow import generate_some_definition from statemachines.another_workflow import generate_another_definition app = aws_cdk.App() env = aws_cdk.Environment(account=12345, region=&quot;eu-west-1&quot;) mystack = StepfunctionStack(scope=app, env=env) mystack.generate_state_machines( [ StateMachineUtil(name=&quot;a-name&quot;, definition=generate_some_definition(scope=app)), StateMachineUtil(name=&quot;another-name&quot;, definition=generate_another_definition(scope=app)), ] ) </code></pre> <p>This setup works perfectly fine. No problem: I run <code>cdk synth</code> and get no error message. However, if I try to set up a <code>workflow.py</code> that uses <code>StepFunctionsStartExecution</code> or <code>GlueStartJobRun</code>, then I get the error message:</p> <blockquote> <p>RuntimeError: StepFunctionsStartExecution at '' should be created in the scope of a Stack, but no Stack found</p> </blockquote> <p>Here is what the <code>workflow.py</code> could look like (same as before, but with 2 added tasks):</p> <pre><code>from aws_cdk.aws_stepfunctions_tasks import CallAwsService, GlueStartJobRun, StepFunctionsStartExecution from aws_cdk.aws_stepfunctions import Pass, IntegrationPattern, TaskInput def generate_some_definition(scope: Construct, crawler_state_machine: aws_stepfunctions.IStateMachine) -&gt; aws_stepfunctions.Chain: step_1 = Pass( scope=scope, id=&quot;a-fourth-id&quot;, ) step_2 = StepFunctionsStartExecution( scope=scope, id=&quot;Run separate state machine&quot;, state_machine=separate_state_machine, integration_pattern=IntegrationPattern.RUN_JOB, input=TaskInput.from_object({...}), result_selector={...} ) step_3 = GlueStartJobRun( scope=scope, id=&quot;Run glue job&quot;, glue_job_name=&quot;...&quot;, integration_pattern=IntegrationPattern.RUN_JOB, arguments=TaskInput.from_json_path_at(&quot;...&quot;) ) step_4 = CallAwsService( scope=scope,
id=&quot;another-one-id&quot;, service=&quot;glue&quot;, action=&quot;GetTable&quot;, iam=[&quot;*&quot;], parameters={...} result_path=&quot;...&quot; ) step_5 = Pass( scope=scope, id=&quot;an-eighth-id&quot;, ) return step_1.next(step_2).next(step_3).next(step_4).next(step_5) </code></pre> <p>And suddenly it breaks down, giving me the error message I posted above.</p>
<python><scope><aws-cdk><aws-step-functions>
2023-02-10 15:56:34
1
567
armara
75,413,249
15,176,150
What does `amd64` mean in Python .whl files?
<p>I'm trying to download some Python <code>.whl</code> files, but I'm not sure what version I should be downloading.</p> <p>I have a Windows machine with an Intel chip, but all the versions I see use <code>amd64</code>; does that mean the versions are for AMD chips only?</p>
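For context: `amd64` is just another name for the x86-64 architecture — the 64-bit extension of x86 originally designed by AMD and also implemented by Intel — so those wheels run fine on 64-bit Intel CPUs. A quick way to check what your own machine reports:

```python
import platform

machine = platform.machine()
print(machine)  # typically "AMD64" on 64-bit Windows, "x86_64" on Linux/macOS Intel

# both spellings denote the same 64-bit x86 architecture
is_x86_64 = machine.lower() in ("amd64", "x86_64")
print(is_x86_64)
```

`pip` uses the same platform tag logic internally when deciding which wheel is compatible.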
<python><x86-64><intel><python-wheel><amd-processor>
2023-02-10 15:44:11
1
1,146
Connor
75,413,155
19,425,874
How to scrape multiple tags in single iteration?
<p>I have a script below that works perfectly - it visits each HREF tag on a list of URLs, then returns the associated p tag information. It pushes the info directly to a Google Sheet.</p> <p>I noticed, the player &quot;position&quot; isn't included, because it is an H2 tag not a p... I started to redo the entire script separately to scrape these (2nd script below).</p> <p>Is there a way I can just re-write the first one to include a column that adds these h2 tags (position)?</p> <p><strong>WORKING (RETRIEVES ALL P TAGS)</strong></p> <pre><code> import requests from bs4 import BeautifulSoup import gspread gc = gspread.service_account(filename='creds.json') sh = gc.open_by_key('1DpasSS8yC1UX6WqAbkQ515BwEEjdDL-x74T0eTW8hLM') worksheet = sh.get_worksheet(3) # AddValue = [&quot;Test&quot;, 25, &quot;Test2&quot;] # worksheet.insert_row(AddValue, 3) def get_links(url): data = [] req_url = requests.get(url) soup = BeautifulSoup(req_url.content, &quot;html.parser&quot;) for td in soup.find_all('td', {'data-th': 'Player'}): a_tag = td.a name = a_tag.text player_url = a_tag['href'] print(f&quot;Getting {name}&quot;) req_player_url = requests.get( f&quot;https://basketball.realgm.com{player_url}&quot;) soup_player = BeautifulSoup(req_player_url.content, &quot;html.parser&quot;) div_profile_box = soup_player.find(&quot;div&quot;, class_=&quot;profile-box&quot;) row = {&quot;Name&quot;: name, &quot;URL&quot;: player_url} for p in div_profile_box.find_all(&quot;p&quot;): try: key, value = p.get_text(strip=True).split(':', 1) row[key.strip()] = value.strip() except: # not all entries have values pass data.append(row) return data urls = [ 'https://basketball.realgm.com/dleague/players/2022', 'https://basketball.realgm.com/dleague/players/2021', 'https://basketball.realgm.com/dleague/players/2020', 'https://basketball.realgm.com/dleague/players/2019', 'https://basketball.realgm.com/dleague/players/2018', ] res = [] for url in urls: print(f&quot;Getting: {url}&quot;) data = 
get_links(url) res = [*res, *data] if res != []: header = list(res[0].keys()) values = [ header, *[[e[k] if e.get(k) else &quot;&quot; for k in header] for e in res]] worksheet.append_rows(values, value_input_option=&quot;USER_ENTERED&quot; ) </code></pre> <p><strong>NOT WORKING, BUT AN ATTEMPT TO GET POSITIONS:</strong></p> <pre><code>import requests from bs4 import BeautifulSoup import gspread gc = gspread.service_account(filename='creds.json') sh = gc.open_by_key('1DpasSS8yC1UX6WqAbkQ515BwEEjdDL-x74T0eTW8hLM') worksheet = sh.get_worksheet(1) # AddValue = [&quot;Test&quot;, 25, &quot;Test2&quot;] # worksheet.insert_row(AddValue, 3) def get_links(url): data = [] req_url = requests.get(url) soup = BeautifulSoup(req_url.content, &quot;html.parser&quot;) for td in soup.find_all('td', {'data-th': 'Player'}): a_tag = td.a name = a_tag.text player_url = a_tag['href'] print(f&quot;Getting {name}&quot;) req_player_url = requests.get( f&quot;https://basketball.realgm.com{player_url}&quot;) soup_player = BeautifulSoup(req_player_url.content, &quot;html.parser&quot;) div_profile_box = soup_player.find(&quot;div&quot;, class_=&quot;profile-box&quot;) row = {&quot;Name&quot;: name, &quot;URL&quot;: player_url} for p in div_profile_box.find_all(&quot;h2&quot;): try: p.get_text(strip=True) except: # not all entries have values pass data.append(row) return data urls = [ 'https://basketball.realgm.com/dleague/players/2022', # 'https://basketball.realgm.com/dleague/players/2021', # 'https://basketball.realgm.com/dleague/players/2020', # 'https://basketball.realgm.com/dleague/players/2019', # 'https://basketball.realgm.com/dleague/players/2018', ] res = [] for url in urls: print(f&quot;Getting: {url}&quot;) data = get_links(url) res = [*res, *data] if res != []: header = list(res[0].keys()) values = [ header, *[[e[k] if e.get(k) else &quot;&quot; for k in header] for e in res]] worksheet.append_rows(values, value_input_option=&quot;USER_ENTERED&quot;) </code></pre>
<python><web-scraping><beautifulsoup><python-requests>
2023-02-10 15:36:38
1
393
Anthony Madle
75,413,037
12,255,379
Installing NumPy to a virtual environment on raspberry pi for python 3.7
<p>I'm trying to install NumPy for Python 3.7 on my Raspberry Pi as part of installing OpenCV. Originally, when running</p> <pre class="lang-bash prettyprint-override"><code>sudo pip3.7 install opencv-python mediapipe Flask==2.0.3 </code></pre> <p>I got an error while building the wheels for NumPy. I subsequently tried the <code>--no-binary</code> flag, but no luck either.</p> <p>I then tried to install NumPy using the apt package manager with <code>sudo apt install python3.7-numpy</code>, but this installed it into dist-packages in <code>/lib/python3</code> rather than in <code>/lib/python3.7</code>. Somehow, neither the python3 nor the python3.7 interactive shell can locate and import it.</p> <p>In any case I don't want to use the OS package manager, since I want to install it to a virtual environment.</p> <p>Below is what seems to me like the relevant fragment from the failed wheel build error message:</p> <pre><code>numpy/core/src/umath/loops_trigonometric.dispatch.c.src: In function ‘FLOAT_sin_NEON_VFPV4’: numpy/core/src/umath/loops_trigonometric.dispatch.c.src:202:20: internal compiler error: in convert_move, at expr.c:218 NPY_NO_EXPORT void NPY_CPU_DISPATCH_CURFX(FLOAT_@func@) </code></pre> <p>To address one of the comments: I did create and activate a virtual environment. The issue is that, to my knowledge, there is no way to force apt to install Python packages outside of the global/user site-packages or dist-packages.</p>
<python><numpy><raspberry-pi>
2023-02-10 15:26:07
0
769
Nikolai Savulkin
75,413,010
5,181,219
Within a Python context manager, prevent people from running PySpark queries
<p>I would like to create a Python context manager that you would use in this way:</p> <pre class="lang-py prettyprint-override"><code>with ContextManager() as cm: ... </code></pre> <p>and I would like to make it so within the context manager, any attempt of the user to interact with PySpark should return an exception. So, assuming <code>df</code> is a PySpark DataFrame, the following code:</p> <pre class="lang-py prettyprint-override"><code>with ContextManager() as cm: ... df.count() </code></pre> <p>should throw an exception. Similarly, users shouldn't be allowed to create new DataFrames or RDDs. I suspect I need to dynamically redefine some of the methods of the SparkSession in my <code>ContextManager</code>'s <code>__enter__</code> method, and reset them on <code>__exit__</code>, but I'm not sure what's the simplest way of doing that to achieve my goal.</p> <p>I'm not looking to obtain a strong security guarantee, I'm aware that anything I do in Python can be dynamically changed at runtime by users of the library. But I'm in a situation where doing direct PySpark operations in the <code>ContextManager</code> block is almost certainly a user misunderstanding, so I would like to detect it and let the user know that they probably didn't mean to do this.</p>
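As a sketch of that patch-on-enter, restore-on-exit idea, here is the generic pattern with a stand-in class; for real use you would pass `pyspark.sql.DataFrame` and the method names you want to block (e.g. `"count"`, `"collect"` — treat those names and the `MethodBlocker`/`BlockedError` names as illustrative assumptions):

```python
class BlockedError(RuntimeError):
    """Raised when a blocked method is called inside the context manager."""

class MethodBlocker:
    """Temporarily replace the named methods on `cls` with a raising stub."""

    def __init__(self, cls, method_names, message):
        self._cls = cls
        self._names = list(method_names)
        self._message = message
        self._originals = {}

    def __enter__(self):
        for name in self._names:
            self._originals[name] = getattr(self._cls, name)

            def blocked(*args, _name=name, **kwargs):
                raise BlockedError(f"{_name}: {self._message}")

            setattr(self._cls, name, blocked)
        return self

    def __exit__(self, exc_type, exc, tb):
        # always restore the original methods, even on error
        for name, fn in self._originals.items():
            setattr(self._cls, name, fn)
        return False  # don't swallow exceptions

# stand-in for pyspark.sql.DataFrame
class FakeDataFrame:
    def count(self):
        return 42

df = FakeDataFrame()
with MethodBlocker(FakeDataFrame, ["count"], "PySpark use is disabled in this block"):
    try:
        df.count()
    except BlockedError as e:
        print("blocked:", e)
print(df.count())  # 42 again outside the block
```

Because the patch lives on the class, it catches calls on every existing DataFrame instance, not just ones created inside the block; as noted above, this is a guard rail against misunderstandings, not a security boundary.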
<python><pyspark><dynamic><contextmanager>
2023-02-10 15:24:00
0
1,092
Ted
75,412,980
11,391,711
How MultiStepLR works in PyTorch
<p>I'm new to <code>PyTorch</code> and am working on a toy example to understand how decay of the learning rate passed into the optimizer works. When I use <a href="https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiStepLR.html" rel="nofollow noreferrer"><code>MultiStepLR</code></a>, I expected the learning rate to decrease at the given epoch numbers; however, it does not work as I intended. What am I doing wrong?</p> <pre><code>import random import torch import pandas as pd import numpy as np from torch import nn from torch.utils.data import Dataset,DataLoader,TensorDataset from torchvision import datasets, transforms model = nn.Sequential(nn.Linear(n_input, n_hidden), nn.ReLU(), nn.Linear(n_hidden, n_out), nn.ReLU()) loss_function = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.1) scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[2,4], gamma=0.1) for e in range(5): scheduler.step() print(e, ' : lr', scheduler.get_lr()[0],&quot;\n&quot;) 0 : lr 0.1 1 : lr 0.0010000000000000002 2 : lr 0.010000000000000002 3 : lr 0.00010000000000000003 4 : lr 0.0010000000000000002 </code></pre> <p>The expected behavior of the learning rate is <code>[0.1, 0.1, 0.01, 0.01, 0.001]</code></p>
<python><machine-learning><pytorch><learning-rate>
2023-02-10 15:21:44
1
488
whitepanda
75,412,875
15,803,668
PyQt5 SplashScreen for loading custom modules before opening Main Window
<p>I have a <code>pyqt5</code> application. In the program you can open various other windows from the MainWindow. The different windows are stored in a separate module and are imported into the main file, which represents the MainWindow. Furthermore I use several custom widgets, which are also stored in a separate module and imported into the main file.</p> <p>The program runs smoothly, but it takes a few seconds for the program to start. I cannot say exactly what causes the delay at startup. However, it seems to be due to my custom modules.</p> <p>The main file looks something like this:</p> <pre><code>#other imports ... #import custom modules from MyWidgets import MyTreeView from MyWindows import MySecondWindow basedir = Path(__file__).parent.absolute() class App(QMainWindow): def __init__(self): super(App, self).__init__() uic.loadUi(os.path.join(basedir, &quot;gui&quot;, &quot;MainWindow.ui&quot;), self) </code></pre> <p>Is it possible to load the modules in a thread and, after loading has finished, start the App that uses the modules loaded inside the thread?</p> <pre><code>from PyQt5.QtCore import QThread, pyqtSignal from PyQt5.QtWidgets import QApplication, QSplashScreen from PyQt5.QtGui import QPixmap class LoaderThread(QThread): loaded = pyqtSignal() def __init__(self, parent=None): super().__init__(parent) def run(self): # loading of heavy modules here from MyWidgets import MyTreeView from MyWindows import MySecondWindow self.loaded.emit() class SplashScreen(QSplashScreen): def __init__(self, parent=None): super().__init__(parent) # Set the splash screen image self.setPixmap(QPixmap('/Users/user/PycharmProjects/LR2/icons/home.png')) # Start loading the modules in a separate thread self.loader_thread = LoaderThread() self.loader_thread.loaded.connect(self.start_main_window) self.loader_thread.start() def start_main_window(self): self.close() self.main_window = App() self.main_window.show() if __name__ == '__main__': app = QApplication([])
splash = SplashScreen() splash.show() app.exec_() </code></pre>
<python><pyqt5>
2023-02-10 15:10:59
0
453
Mazze
75,412,546
3,951,933
How to get state of the streaming source in GStreamer-Python?
<p>I'm trying to analyze and take actions depending on the current state of the streaming source in gstreamer. I have a basic script to create pipeline elements, link them, and eventually see the live IP camera stream on my screen. However, most of the IP cameras seem to stop streaming the video at some point. The camera IP is accessible but RTSP drops the connection or displays an all-black screen. I want to detect when the stream gets dropped and set up a periodic check to reconnect to the stream again. I'm already listening to some bus messages at runtime but it doesn't seem like any of them provides what I need.</p> <p>It would be great to have some ideas on how to check the state of the stream at any given time.</p> <p>Here are some basic blocks from my code:</p> <pre><code>def on_src_pad_added(src, new_pad, depayer): sink_pad = depayer.get_static_pad(&quot;sink&quot;) if(sink_pad.is_linked()): print(&quot;We are already linked. Ignoring.&quot;) return # check the new pad's type new_pad_caps = new_pad.get_current_caps() new_pad_struct = new_pad_caps.get_structure(0) new_pad_type = new_pad_struct.get_name() ret = new_pad.link(sink_pad) return def gst_to_opencv(sample): buf = sample.get_buffer() caps = sample.get_caps() arr = np.ndarray( (caps.get_structure(0).get_value('height'), caps.get_structure(0).get_value('width'), 3), buffer=buf.extract_dup(0, buf.get_size()), dtype=np.uint8) return arr def new_buffer(sink, data): global image_arr sample = sink.emit(&quot;pull-sample&quot;) arr = gst_to_opencv(sample) image_arr = arr return Gst.FlowReturn.OK </code></pre> <p>After these callbacks I'm constructing my pipeline:</p> <pre><code>def main(): # Standard GStreamer initialization GObject.threads_init() Gst.init(None) # Create gstreamer elements # Create Pipeline element that will form a connection of other elements print(&quot;Creating Pipeline \n &quot;) pipeline = Gst.Pipeline() if not pipeline: sys.stderr.write(&quot; Unable to create Pipeline
\n&quot;) # Source element for reading from the file print(&quot;Creating Source \n &quot;) source = Gst.ElementFactory.make(&quot;rtspsrc&quot;, &quot;rtsp-cam-source&quot;) if not source: sys.stderr.write(&quot; Unable to create Source \n&quot;) depay = Gst.ElementFactory.make(&quot;rtph264depay&quot;, &quot;rtp-depay&quot;) if not depay: sys.stderr.write(&quot; Unable to create videoconvert \n&quot;) parser = Gst.ElementFactory.make(&quot;h264parse&quot;, &quot;h264-parser&quot;) if not parser: sys.stderr.write(&quot; Unable to create videoconvert \n&quot;) decoder = Gst.ElementFactory.make(&quot;avdec_h264&quot;, &quot;h264-decoder&quot;) if not decoder: sys.stderr.write(&quot; Unable to create videoconvert \n&quot;) ... Set plugin properties... Add plugins to the pipeline... Link plugins... ... </code></pre> <p>Lastly, my live streaming and message listening block is as follows:</p> <pre><code>... # start play back and listen to events print(&quot;Starting pipeline \n&quot;) ret = pipeline.set_state(Gst.State.PLAYING) if ret == Gst.StateChangeReturn.FAILURE: print(&quot;Unable to set the pipeline to the playing state.&quot;) exit(-1) # create an event loop and feed gstreamer bus mesages to it bus = pipeline.get_bus() bus.add_signal_watch() # Parse message while True: pipe_state = pipeline.get_state(Gst.CLOCK_TIME_NONE) print(pipe_state.state) message = bus.timed_pop_filtered(10000, Gst.MessageType.ANY) if image_arr is not None: cv2.imshow(&quot;Receive Image from Pipeline Buffer&quot;, image_arr) if cv2.waitKey(1) == ord('q'): break if message: if message.type == Gst.MessageType.ERROR: err, debug = message.parse_error() print((&quot;Error received from element %s: %s&quot; % ( message.src.get_name(), err))) print((&quot;Debugging information: %s&quot; % debug)) break elif message.type == Gst.MessageType.EOS: print(&quot;End-Of-Stream reached.&quot;) break elif message.type == Gst.MessageType.STATE_CHANGED: if isinstance(message.src, Gst.Pipeline): old_state, 
new_state, pending_state = message.parse_state_changed() print((&quot;Pipeline state changed from %s to %s.&quot; % (old_state.value_nick, new_state.value_nick))) else: # print(message.type) continue # cleanup pipeline.set_state(Gst.State.NULL) pipeline.send_event(Gst.Event.new_eos()) </code></pre>
<python><ffmpeg><camera><gstreamer><rtsp>
2023-02-10 14:38:59
1
447
doruk.sonmez
75,412,446
457,123
foreachBatch not getting executed in Spark stream write
<p>It seems like the <code>run_command</code> function is never run in the code below. What is wrong?</p> <pre><code>df.writeStream \ .foreachBatch(run_command) \ .format(&quot;delta&quot;) \ .outputMode(&quot;append&quot;) \ .option(&quot;checkpointLocation&quot;, &quot;/tmp/delta/events/_checkpoints/&quot;) \ .partitionBy(&quot;machineid&quot;) \ .toTable(&quot;mytimeseriesdata&quot;) n = 100 count = 0 def run_command(batchDF, epoch_id): global count print(&quot;something&quot;) count += 1 if count % n == 0: spark.sql(&quot;OPTIMIZE mytimeseriesdata ZORDER BY (timestamp)&quot;) print(&quot;Optimizing &quot; + count) </code></pre>
<python><apache-spark><stream><databricks>
2023-02-10 14:28:10
0
4,959
Mathias Rönnlund
75,412,207
18,168,625
GitHub Actions don't reuse cache
<p>I have a pretty simple step for CI on GitHub Actions, which is supposed to cache Python dependencies so it saves a lot of compute time.</p> <pre><code> some-step: name: 'Test step' runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - run: pipx install poetry - name: Set up Python 3.8 uses: actions/setup-python@v4 with: python-version: &quot;3.8&quot; architecture: &quot;x64&quot; cache: &quot;poetry&quot; - name: Install dependencies run: poetry install - run: poetry run ... </code></pre> <p>Every time I create a new PR, a new cache is generated, even if the dependencies didn't change. As I found out, this happens because of <a href="https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache" rel="nofollow noreferrer">cache branch restrictions</a>.</p> <p>My question is: how do I create a common cache, or remove the branch restrictions?</p> <p>I rarely have to rerun my actions, so this caching implementation doesn't give any benefit.</p>
<python><continuous-integration><github-actions><python-poetry>
2023-02-10 14:03:34
2
580
Aleks Kuznetsov
75,412,114
9,707,473
How to slice a numpy array using index arrays with different shapes?
<p>Let's say that we have the following 2d numpy array:</p> <pre><code>arr = np.array([[1,1,0,1,1], [0,0,0,1,0], [1,0,0,0,0], [0,0,1,0,0], [0,1,0,0,0]]) </code></pre> <p>and the following indices for rows and columns:</p> <pre><code>rows = np.array([0,2,4]) cols = np.array([1,2]) </code></pre> <p>The objective is to slice <code>arr</code> using <code>rows</code> and <code>cols</code> to obtain the following <em>expected result:</em></p> <pre><code>arr_sliced = np.array([[1,0], [0,0], [1,0]]) </code></pre> <p>Using the arrays directly as indices like <code>arr[rows, cols]</code> leads to:</p> <blockquote> <p>IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (3,) (2,)</p> </blockquote> <hr /> <p>So what is the straightforward way to achieve this kind of slicing?</p> <h2>Update: useful information about the solution</h2> <p>So the <a href="https://stackoverflow.com/questions/75412114/how-to-slice-a-numpy-array-using-index-arrays-with-different-shapes/75412195#75412195">solution</a> is simple enough, but it demands a basic understanding of numpy's broadcasting. The numpy docs provide some nice, if not entirely representative, <a href="https://numpy.org/doc/stable/user/basics.indexing.html#integer-array-indexing" rel="nofollow noreferrer">examples</a>. Also, the <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html#general-broadcasting-rules" rel="nofollow noreferrer">general broadcasting rules</a> explain why there is no <em>shape mismatch</em> in:</p> <pre><code>arr[rows[:, np.newaxis], cols] # rows[:, np.newaxis].shape == (3,1) # cols.shape == (2,) </code></pre>
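For reference, the same open-mesh indexing can also be spelled with `np.ix_`, which constructs the broadcastable index arrays for you:

```python
import numpy as np

arr = np.array([[1, 1, 0, 1, 1],
                [0, 0, 0, 1, 0],
                [1, 0, 0, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 1, 0, 0, 0]])
rows = np.array([0, 2, 4])
cols = np.array([1, 2])

# rows[:, np.newaxis] has shape (3, 1); broadcasting against (2,) gives (3, 2)
sliced = arr[rows[:, np.newaxis], cols]

# np.ix_ builds the same (3, 1) / (1, 2) open mesh
same = arr[np.ix_(rows, cols)]
print(sliced)
# [[1 0]
#  [0 0]
#  [1 0]]
```

Both forms select the full Cartesian product of `rows` and `cols`, which is exactly the "rectangle" slice the question asks for.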
<python><arrays><python-3.x><numpy>
2023-02-10 13:56:15
2
512
lezaf
75,412,082
11,167,163
How to add header & scrollbar to a tkinter table?
<p>I have the following code which is working, but I don't understand how to add a scroll bar to the table, and how to shows header at the top of each column.</p> <p>Also I would like to display gridlines, but I struggle to do it.</p> <p>Any help would be appreciated !</p> <pre><code>import tkinter as tk class Example(tk.Frame): def __init__(self, parent): tk.Frame.__init__(self, parent) b = tk.Button(self, text=&quot;Done!&quot;, command=self.upload_cor) b.pack() table = tk.Frame(self) table.pack(side=&quot;top&quot;, fill=&quot;both&quot;, expand=True) data = ( (45417, &quot;rodringof&quot;, &quot;CSP L2 Review&quot;, 0.000394, &quot;2014-12-19 10:08:12&quot;, &quot;2014-12-19 10:08:12&quot;), (45418, &quot;rodringof&quot;, &quot;CSP L2 Review&quot;, 0.000394, &quot;2014-12-19 10:08:12&quot;, &quot;2014-12-19 10:08:12&quot;), (45419, &quot;rodringof&quot;, &quot;CSP L2 Review&quot;, 0.000394, &quot;2014-12-19 10:08:12&quot;, &quot;2014-12-19 10:08:12&quot;), (45420, &quot;rodringof&quot;, &quot;CSP L2 Review&quot;, 0.000394, &quot;2014-12-19 10:08:12&quot;, &quot;2014-12-19 10:08:12&quot;), (45421, &quot;rodringof&quot;, &quot;CSP L2 Review&quot;, 0.000394, &quot;2014-12-19 10:08:12&quot;, &quot;2014-12-19 10:08:12&quot;), (45422, &quot;rodringof&quot;, &quot;CSP L2 Review&quot;, 0.000394, &quot;2014-12-19 10:08:12&quot;, &quot;2014-12-19 10:08:12&quot;), (45423, &quot;rodringof&quot;, &quot;CSP L2 Review&quot;, 0.000394, &quot;2014-12-19 10:08:12&quot;, &quot;2014-12-19 10:08:12&quot;), ) self.widgets = {} row = 0 for rowid, reviewer, task, num_seconds, start_time, end_time in (data): row += 1 self.widgets[rowid] = { &quot;rowid&quot;: tk.Label(table, text=rowid), &quot;reviewer&quot;: tk.Label(table, text=reviewer), &quot;task&quot;: tk.Label(table, text=task), &quot;num_seconds_correction&quot;: tk.Entry(table), &quot;num_seconds&quot;: tk.Label(table, text=num_seconds), &quot;start_time&quot;: tk.Label(table, text=start_time), &quot;end_time&quot;: 
tk.Label(table, text=start_time) } self.widgets[rowid][&quot;rowid&quot;].grid(row=row, column=0, sticky=&quot;nsew&quot;) self.widgets[rowid][&quot;reviewer&quot;].grid(row=row, column=1, sticky=&quot;nsew&quot;) self.widgets[rowid][&quot;task&quot;].grid(row=row, column=2, sticky=&quot;nsew&quot;) self.widgets[rowid][&quot;num_seconds_correction&quot;].grid(row=row, column=3, sticky=&quot;nsew&quot;) self.widgets[rowid][&quot;num_seconds&quot;].grid(row=row, column=4, sticky=&quot;nsew&quot;) self.widgets[rowid][&quot;start_time&quot;].grid(row=row, column=5, sticky=&quot;nsew&quot;) self.widgets[rowid][&quot;end_time&quot;].grid(row=row, column=6, sticky=&quot;nsew&quot;) table.grid_columnconfigure(1, weight=1) table.grid_columnconfigure(2, weight=1) # invisible row after last row gets all extra space table.grid_rowconfigure(row+1, weight=1) def upload_cor(self): for rowid in sorted(self.widgets.keys()): entry_widget = self.widgets[rowid][&quot;num_seconds_correction&quot;] new_value = entry_widget.get() print(&quot;%s: %s&quot; % (rowid, new_value)) if __name__ == &quot;__main__&quot;: root = tk.Tk() Example(root).pack(fill=&quot;both&quot;, expand=True) root.mainloop() </code></pre>
<python><tkinter>
2023-02-10 13:53:43
1
4,464
TourEiffel
75,411,837
2,444,023
How to make a TTF or OTF from SVG or BMP
<p>I am working on a pixel font which I am able to create with <code>Pillow</code>. <code>Pillow</code> can export <code>*.bmp</code> files. I would like to get from <code>*.bmp</code> to <code>*.ttf</code> or <code>*.otf</code>. In my current example I have a letter <code>a</code> (unicode <code>0061</code>) and letter <code>p</code> (unicode <code>0070</code>). I will eventually represent the full alphabet, but I use just these two letters for illustration.</p> <p><strong>How can I make a valid <code>ttf</code> or <code>otf</code> from my individual <code>*.bmp</code> letters?</strong></p> <p>I currently make <code>*.bmp</code> then convert to <code>*.svg</code> because I have read most font programs like &quot;Glyphs Mini 2&quot; and &quot;Font Forge&quot; can import <code>*.svg</code>. The problem is that I do not know how to combine my individual <code>*.svg</code> into a valid TTF or OTF font file.</p> <h2>Try 1</h2> <pre><code>from PIL import Image, ImageDraw from potrace import Bitmap import numpy as np import os # p p = np.array( [ [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 1, 1, 0, 0], [1, 0, 0, 1, 0], [1, 0, 0, 1, 0], [1, 0, 0, 1, 0], [1, 1, 1, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], ] ) # a a = np.array( [ [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 1, 1, 0, 0], [1, 0, 0, 1, 0], [1, 0, 0, 1, 0], [1, 0, 0, 1, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], ] ) dir_bmp = &quot;bmp/&quot; dir_svg = &quot;svg/&quot; def array_to_drawing(letter_name: str, array: np.array): &quot;&quot;&quot;Converts a numpy array to a drawing. Args: letter_name (str): The name of the letter to save the image as. array (np.array): The 2D numpy array to convert. 
&quot;&quot;&quot; # Create an image with a white background width = 320 height = 1024 img = Image.new(&quot;1&quot;, (width, height), color=&quot;white&quot;) # Create a draw object to draw on the image draw = ImageDraw.Draw(img) # Draw the letter which the 2d numpy array represents for row in range(array.shape[0]): for col in range(array.shape[1]): if array[row, col] == 1: x0 = col * 64 y0 = row * 64 draw.rectangle([x0, y0, x0 + 64, y0 - 64], fill=&quot;black&quot;) img.save(dir_bmp + letter_name + &quot;.bmp&quot;) print( &quot;potrace &quot; + dir_bmp + letter_name + &quot;.bmp --svg -o &quot; + dir_svg + letter_name + &quot;.svg&quot; ) os.system( &quot;potrace &quot; + dir_bmp + letter_name + &quot;.bmp --svg -o &quot; + dir_svg + letter_name + &quot;.svg&quot; ) array_to_drawing(&quot;p&quot;, p) array_to_drawing(&quot;a&quot;, a) </code></pre> <h1>Letter p</h1> <p><a href="https://i.sstatic.net/ajyPR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ajyPR.png" alt="letter p" /></a></p> <h1>Letter a</h1> <p><a href="https://i.sstatic.net/A6ZgX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A6ZgX.png" alt="letter a" /></a></p>
<python><svg><fonts><bitmap><truetype>
2023-02-10 13:34:39
0
2,838
Alex
75,411,807
11,197,301
pandas fill a dataframe according to column and row value operations
<p>Let's say that I have this dataframe:</p> <pre><code>,,,,,,
,,2.0,,,,
,2.0,,2.23606797749979,,,
,,2.23606797749979,,,2.0,
,,,,,2.23606797749979,
,,,2.0,2.23606797749979,,
,,,,,,
</code></pre> <p><a href="https://i.sstatic.net/iaFYL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iaFYL.png" alt="enter image description here" /></a></p> <p>I would like to get a two-dimensional vector with the index and column of each element that is not NaN.</p> <p>For example, in this case, I am expecting:</p> <pre><code>[[2,1],[1,2],[3,2],[2,3],[5,3],[3,5],[4,5],[5,4]]
</code></pre> <p>I am thinking about using <code>iloc</code> and the <code>np.where</code> functions, but I am not able to merge the two concepts.</p>
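One way to combine the two ideas without `iloc` is `np.argwhere` on the boolean `notna()` matrix. A sketch on a stand-in frame (the exact values don't matter, only the NaN pattern); it returns the same pairs as the expected output, just in row-major order:

```python
import numpy as np
import pandas as pd

# Rebuild a small frame: NaN everywhere except the positions shown in
# the question (the filled value is illustrative).
m = np.full((6, 6), np.nan)
for r, c in [(1, 2), (2, 1), (2, 3), (3, 2), (3, 5), (4, 5), (5, 3), (5, 4)]:
    m[r, c] = 2.0
df = pd.DataFrame(m)

# np.argwhere returns one [row, col] pair per True element of the
# boolean mask, scanning in row-major order.
pairs = np.argwhere(df.notna().to_numpy())
```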
<python><pandas><dataframe><select><indexing>
2023-02-10 13:31:17
1
623
diedro
75,411,652
7,376,511
Assign function to variable and get variable name from inside said function
<pre><code>def my_function():
    ...

my_variable = my_function
my_variable()
</code></pre> <p>In this case, is there a way to get <code>my_variable</code> as a string from inside <code>my_function</code>?</p>
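Strictly speaking, no: a function object never knows which name it was called through, and `__name__` stays fixed at definition time. A best-effort sketch (an introspection hack, not a robust API) is to scan the caller's namespace for names currently bound to the function — though it finds every such alias, not specifically the one used for the call:

```python
import inspect

def my_function():
    # Scan the calling frame's namespace for names bound to this
    # function.  This returns ALL aliases, not the one actually used.
    caller = inspect.currentframe().f_back
    return [name for name, val in caller.f_locals.items()
            if val is my_function]

my_variable = my_function
names = my_variable()   # contains both 'my_function' and 'my_variable'
```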
<python>
2023-02-10 13:16:02
1
797
Some Guy
75,411,547
10,714,156
Convert `DataFrame.groupby()` into dictionary (and then reverse it)
<p>Say I have the following <code>DataFrame()</code> where I have repeated observations per individual (column <code>id_ind</code>). Hence, first two rows belong the first individual, the third and fourth rows belong to the second individual, and so forth...</p> <pre><code>import pandas as pd X = pd.DataFrame.from_dict({'x1_1': {0: -0.1766214634108258, 1: 1.645852185286492, 2: -0.13348860101031038, 3: 1.9681043689968933, 4: -1.7004428240831382, 5: 1.4580091413853749, 6: 0.06504113741068565, 7: -1.2168493676768384, 8: -0.3071304478616376, 9: 0.07121332925591593}, 'x1_2': {0: -2.4207773498298844, 1: -1.0828751040719462, 2: 2.73533787008624, 3: 1.5979611987152071, 4: 0.08835542172064115, 5: 1.2209786277076156, 6: -0.44205979195950784, 7: -0.692872860268244, 8: 0.0375521181289943, 9: 0.4656030062266639}, 'x1_3': {0: -1.548320898226322, 1: 0.8457342014424675, 2: -0.21250514722879738, 3: 0.5292389938329516, 4: -2.593946520223666, 5: -0.6188958526077123, 6: 1.6949245117526974, 7: -1.0271341091035742, 8: 0.637561891142571, 9: -0.7717170035055559}, 'x2_1': {0: 0.3797245517345564, 1: -2.2364391598508835, 2: 0.6205947900678905, 3: 0.6623865847688559, 4: 1.562036259999875, 5: -0.13081282910947759, 6: 0.03914373833251773, 7: -0.995761652421108, 8: 1.0649494418154162, 9: 1.3744782478849122}, 'x2_2': {0: -0.5052556836786106, 1: 1.1464291788297152, 2: -0.5662380273138174, 3: 0.6875729143723538, 4: 0.04653136473130827, 5: -0.012885303852347407, 6: 1.5893672346098884, 7: 0.5464286050059511, 8: -0.10430829457707284, 9: -0.5441755265313813}, 'x2_3': {0: -0.9762973303149007, 1: -0.983731467806563, 2: 1.465827578266328, 3: 0.5325950414202745, 4: -1.4452121324204903, 5: 0.8148816373643869, 6: 0.470791989780882, 7: -0.17951636294180473, 8: 0.7351814781280054, 9: -0.28776723200679066}, 'x3_1': {0: 0.12751822396637064, 1: -0.21926633684030983, 2: 0.15758799357206943, 3: 0.5885412224632464, 4: 0.11916562911189271, 5: -1.6436210334529249, 6: -0.12444368631987467, 7: 1.4618564171802453, 8: 
0.6847234328916137, 9: -0.23177118858569187}, 'x3_2': {0: -0.6452955690715819, 1: 1.052094761527654, 2: 0.20190339195326157, 3: 0.6839430295237913, 4: -0.2607691613858866, 5: 0.3315513026670213, 6: 0.015901139336566113, 7: 0.15243420084881903, 8: -0.7604225072161022, 9: -0.4387652927008854}, 'x3_3': {0: -1.067058994377549, 1: 0.8026914180717286, 2: -1.9868531745912268, 3: -0.5057770735303253, 4: -1.6589569342151713, 5: 0.358172252880764, 6: 1.9238983803281329, 7: 2.2518318810978246, 8: -1.2781475121874357, 9: -0.7103081175166167}}) Y = pd.DataFrame.from_dict({'CHOICE': {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 3.0, 5: 2.0, 6: 1.0, 7: 1.0, 8: 2.0, 9: 2.0}}) Z = pd.DataFrame.from_dict({'z1': {0: 2.4196730570917233, 1: 2.4196730570917233, 2: 2.822802255159467, 3: 2.822802255159467, 4: 2.073171091633643, 5: 2.073171091633643, 6: 2.044165101485163, 7: 2.044165101485163, 8: 2.4001241292606275, 9: 2.4001241292606275}, 'z2': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 1.0, 5: 1.0, 6: 1.0, 7: 1.0, 8: 0.0, 9: 0.0}, 'z3': {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 2.0, 5: 2.0, 6: 2.0, 7: 2.0, 8: 3.0, 9: 3.0}}) id = pd.DataFrame.from_dict({'id_choice': {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0, 5: 6.0, 6: 7.0, 7: 8.0, 8: 9.0, 9: 10.0}, 'id_ind': {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 3.0, 5: 3.0, 6: 4.0, 7: 4.0, 8: 5.0, 9: 5.0}} ) # Create a dataframe with all the data data = pd.concat([id, X, Z, Y], axis=1) print(data.head(4)) # id_choice id_ind x1_1 x1_2 x1_3 x2_1 x2_2 \ # 0 1.0 1.0 -0.176621 -2.420777 -1.548321 0.379725 -0.505256 # 1 2.0 1.0 1.645852 -1.082875 0.845734 -2.236439 1.146429 # 2 3.0 2.0 -0.133489 2.735338 -0.212505 0.620595 -0.566238 # 3 4.0 2.0 1.968104 1.597961 0.529239 0.662387 0.687573 # # x2_3 x3_1 x3_2 x3_3 z1 z2 z3 CHOICE # 0 -0.976297 0.127518 -0.645296 -1.067059 2.419673 0.0 1.0 1.0 # 1 -0.983731 -0.219266 1.052095 0.802691 2.419673 0.0 1.0 1.0 # 2 1.465828 0.157588 0.201903 -1.986853 2.822802 0.0 1.0 2.0 # 3 0.532595 0.588541 0.683943 -0.505777 2.822802 0.0 1.0 2.0 
</code></pre> <hr /> <p>I want to perform two operations.</p> <ol> <li>First, I want to convert the DataFrame <code>data</code> into a dictionary of <code>DataFrame()</code>s where the keys are the number of individuals (in this particular case, numbers ranging from <code>1.0</code> to <code>5.0</code>.). I've done this below as suggested <a href="https://stackoverflow.com/questions/64640305/pandas-how-to-convert-dataframe-with-duplicate-index-values-to-a-dictionary">here</a>. Unfortunately, I am getting a dictionary of numpy values and not a dictionary of <code>DataFrame()</code>s.</li> </ol> <pre><code># Create a dictionary with the data for each individual data_dict = data.set_index('id_ind').groupby('id_ind').apply(lambda x : x.to_numpy().tolist()).to_dict() print(data_dict.keys()) # dict_keys([1.0, 2.0, 3.0, 4.0, 5.0]) print(data_dict[1.0]) #[[1.0, -0.1766214634108258, -2.4207773498298844, -1.548320898226322, 0.3797245517345564, -0.5052556836786106, -0.9762973303149007, 0.12751822396637064, -0.6452955690715819, -1.067058994377549, 2.4196730570917233, 0.0, 1.0, 1.0], [2.0, 1.645852185286492, -1.0828751040719462, 0.8457342014424675, -2.2364391598508835, 1.1464291788297152, -0.983731467806563, -0.21926633684030983, 1.052094761527654, 0.8026914180717286, 2.4196730570917233, 0.0, 1.0, 1.0]] </code></pre> <ol start="2"> <li>Second, I want to recover the original DataFrame <code>data</code> reversing the previous operation. The naive approach is as follows. However, it is, of course, not producing the expected result.</li> </ol> <pre><code># Naive approach res = pd.DataFrame.from_dict(data_dict, orient='index') print(res) # 0 1 #1.0 [1.0, -0.1766214634108258, -2.4207773498298844... [2.0, 1.645852185286492, -1.0828751040719462, ... #2.0 [3.0, -0.13348860101031038, 2.73533787008624, ... [4.0, 1.9681043689968933, 1.5979611987152071, ... #3.0 [5.0, -1.7004428240831382, 0.08835542172064115... [6.0, 1.4580091413853749, 1.2209786277076156, ... 
#4.0 [7.0, 0.06504113741068565, -0.4420597919595078... [8.0, -1.2168493676768384, -0.692872860268244,... #5.0 [9.0, -0.3071304478616376, 0.0375521181289943,... [10.0, 0.07121332925591593, 0.4656030062266639... </code></pre>
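Both operations get simpler if the `to_numpy().tolist()` step is dropped: `groupby` already yields `(key, sub-frame)` pairs, so the dict can keep real DataFrames, and `pd.concat` reverses it. A sketch on a trimmed stand-in for the `data` frame:

```python
import pandas as pd

# Stand-in for the `data` frame in the question: two individuals,
# two rows each.
data = pd.DataFrame({
    'id_choice': [1.0, 2.0, 3.0, 4.0],
    'id_ind':    [1.0, 1.0, 2.0, 2.0],
    'x1_1':      [0.1, 0.2, 0.3, 0.4],
})

# 1. Dict of DataFrames keyed by individual -- no conversion to lists.
data_dict = {k: g for k, g in data.groupby('id_ind')}

# 2. Reverse: concatenate the sub-frames and restore the row order.
restored = pd.concat(data_dict.values()).sort_index()
```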
<python><pandas><list><dataframe><dictionary>
2023-02-10 13:06:29
1
1,966
Álvaro A. Gutiérrez-Vargas
75,411,504
12,149,587
Gather items that cause all() to return false
<p>This question is about what one can/cannot do with the <code>all()</code> function.</p> <p>I would like to identify all elements that fail the condition set in the <code>all()</code> function.</p> <p>In the example below, <code>all()</code> will return <code>False</code> since not all elements are of type <code>int</code>: <code>'c'</code> and <code>'5'</code> fail the condition.</p> <pre><code>lst = [1, 2, 'c', 4, '5']
all(isinstance(li, int) for li in lst)
&gt;&gt;&gt; False
</code></pre> <p>I could parse the list myself in an equivalent function and build up a list of the failing elements, but I wonder if there is a cleverer way of getting <code>['c','5']</code> while still using <code>all()</code>.</p>
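There is no construct that makes `all()` surrender the offending items — it short-circuits and discards everything but a boolean — but filtering with the negated predicate gets `['c','5']` directly, and its truthiness replaces the `all()` call. A sketch:

```python
from itertools import filterfalse

lst = [1, 2, 'c', 4, '5']

# all() only reports a boolean, so collect the failures separately:
bad = [x for x in lst if not isinstance(x, int)]

# the same thing lazily, via itertools.filterfalse:
bad_lazy = list(filterfalse(lambda x: isinstance(x, int), lst))

# `not bad` is equivalent to all(isinstance(x, int) for x in lst)
ok = not bad
```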
<python><list><iterator><boolean>
2023-02-10 13:01:46
2
3,525
0buz
75,411,474
16,981,638
how to match data from a linked pages together by web scraping
<p>i'm trying to scrap some data from a web site and i have an issue in matching the data from every subpage to the data of the main page</p> <p>for Expample: the main page have a country name &quot;<strong>Alabama Trucking Companies</strong>&quot; and when i enter to it link, i'll found some cities(Abbeville, Adamsville,...etc), i need to clarify every city details (city name and city link) with it's country name</p> <p><strong>country names that i scraped from the main page:</strong></p> <p><a href="https://i.sstatic.net/Y57Gp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y57Gp.png" alt="enter image description here" /></a></p> <p><strong>city names that i scraped from the sub page:</strong></p> <p><a href="https://i.sstatic.net/qF6qa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qF6qa.png" alt="enter image description here" /></a></p> <p>the below code that i used is extracting the data from the main and sub pages individually without matching them to other, So how can i solve this issue please.</p> <p><strong>The code that i've used:-</strong></p> <pre><code>start_time = datetime.now() url = 'https://www.quicktransportsolutions.com/carrier/usa-trucking-companies.php' page_country = requests.get(url).content soup_country = BeautifulSoup(page_country, 'lxml') countries = soup_country.find('div',{'class':'col-xs-12 col-sm-9'}) countries_list = [] country_info = countries.find_all('div',{'class':'col-md-4 column'}) for i in country_info: title_country = i.text.strip() href_country = i.find('a', href=True)['href'] countries_list.append({'Country Title':title_country, 'Link':(f'https://www.quicktransportsolutions.com//carrier//{href_country}')}) countries_links = [] for i in pd.DataFrame(countries_list)['Link']: page_city = requests.get(i).content soup_city = BeautifulSoup(page_city, 'lxml') city = soup_city.find('div',{'align':'center','class':'table-responsive'}) countries_links.append(city) cities_list = [] for i in 
countries_links: city_info = i.find_all('td',&quot;&quot;) for i in city_info: title_city = i.text.strip() try: href_city = i.find('a', href=True)['href'] except: continue cities_list.append({'City Title':title_city,'City Link':href_city}) end_time = datetime.now() print(f'Duration: {end_time - start_time}') df = pd.DataFrame(cities_list) df = df.loc[df['City Link']!= '#'].drop_duplicates().reset_index(drop=True) df </code></pre> <p><strong>The expected data to see for every country is the below:-</strong></p> <p><a href="https://i.sstatic.net/U6szk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U6szk.png" alt="enter image description here" /></a></p>
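The mismatch comes from collecting `countries_list` and `countries_links` as two independent lists: by the time the cities are parsed, the country they came from is gone. The pairing survives if the city loop runs *inside* the country loop and copies the country's title onto every city row. An offline sketch of that structure, with made-up URLs and a hypothetical `fetch_cities()` helper standing in for the requests/BeautifulSoup code from the question:

```python
import pandas as pd

# Hypothetical stand-in for scraping one country's sub-page; replace
# with the requests + BeautifulSoup parsing from the question.
def fetch_cities(link):
    sample = {
        'https://example.com/alabama': [('Abbeville', '/abbeville'),
                                        ('Adamsville', '/adamsville')],
        'https://example.com/alaska':  [('Anchorage', '/anchorage')],
    }
    return sample[link]

countries = [
    {'Country Title': 'Alabama Trucking Companies',
     'Link': 'https://example.com/alabama'},
    {'Country Title': 'Alaska Trucking Companies',
     'Link': 'https://example.com/alaska'},
]

# Key idea: while still inside the country loop, attach the country's
# title to every city row scraped from its sub-page.
rows = []
for country in countries:
    for title, href in fetch_cities(country['Link']):
        rows.append({'Country Title': country['Country Title'],
                     'City Title': title,
                     'City Link': href})

df = pd.DataFrame(rows)
```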
<python><python-3.x><pandas><web-scraping>
2023-02-10 12:59:16
1
303
Mahmoud Badr
75,411,455
2,635,863
concatenate/merge multiple dataframes - pandas
<p>I have multiple data frames that I would like to concatenate. Some of these do not have certain columns so should be filled with NA.</p> <pre><code>df1_1 = pd.DataFrame({'id':[1,1,2,2,3,3], 'age':[22,22,55,55,53,53], 'group':1,'y':[1,2,3,4,5,6]}) df1_2 = pd.DataFrame({'id':[1,1,2,2,3,3], 'age':[22,22,55,55,53,53], 'group':1,'w':[7,8,9,10,11,12]}) df2 = pd.DataFrame({'id':[4,4,5,5], 'age':[39,39,54,54], 'group':2,'y':[1,2,3,4]}) df2_2 = pd.DataFrame({'id':[4,4,5,5], 'age':[39,39,54,54], 'group':2,'w':[5,6,7,8]}) df3 = pd.DataFrame({'id':[1,1,6,6,7,7,8,8], 'age':[23,23,63,63,43,43,25,25],'group':3,'w':[1,2,3,4,5,6,7,8]}) </code></pre> <p>Desired output:</p> <pre><code>id age group y w 1 22 1 1 7 1 22 1 2 8 2 55 1 3 9 2 55 1 4 10 3 53 1 5 11 3 53 1 6 12 4 39 2 1 5 4 39 2 2 6 5 54 2 3 7 5 54 2 4 8 1 23 3 NA 1 1 23 3 NA 2 6 63 3 NA 3 6 63 3 NA 4 7 43 3 NA 5 7 43 3 NA 6 8 25 3 NA 7 8 25 3 NA 8 </code></pre> <p>I tried</p> <pre><code>from functools import reduce dfs = [df1_1,df1_2,df2_1,df2_2,df3] df_merged = reduce(lambda left,right: pd.merge(left,right,on=['id','group','age'], how='outer'), dfs) df_merged = pd.concat(dfs, join='outer', axis=0) </code></pre> <p>but none of my attempts worked</p>
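One likely reason the `reduce`/`merge` attempt fails is that merging on `['id', 'group', 'age']` alone cross-joins the repeated ids (2 × 2 = 4 rows per id). A sketch on a trimmed version of the data, assuming — as the desired output suggests — that rows are meant to pair up positionally within each id:

```python
import pandas as pd

df1_1 = pd.DataFrame({'id': [1, 1, 2, 2], 'age': [22, 22, 55, 55],
                      'group': 1, 'y': [1, 2, 3, 4]})
df1_2 = pd.DataFrame({'id': [1, 1, 2, 2], 'age': [22, 22, 55, 55],
                      'group': 1, 'w': [7, 8, 9, 10]})
df3 = pd.DataFrame({'id': [6, 6], 'age': [63, 63], 'group': 3, 'w': [1, 2]})

# Add a within-id row counter and merge on it too, so the repeated ids
# pair up row by row instead of cross-joining.
for d in (df1_1, df1_2):
    d['obs'] = d.groupby('id').cumcount()

merged = df1_1.merge(df1_2, on=['id', 'age', 'group', 'obs'])

# df3 has no 'y' column; concat fills it with NaN, as in the desired output.
out = pd.concat([merged, df3], ignore_index=True).drop(columns='obs')
```

With the full data, the same pattern applies to the `df2`/`df2_2` pair before the final `concat`. (Note also that the attempt in the question references `df2_1`, while the frame was defined as `df2`.)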
<python><pandas>
2023-02-10 12:57:07
1
10,765
HappyPy
75,411,340
16,883,182
Unexpected behavior in the "scrap" module of Pygame when attempting to copy/paste Unicode text
<p>I'm currently working on a GUI toolkit in Pygame, and so far it's looking pretty good. I'm currently working on a text-box widget, and it is nearly done. But I would like to implement copy/paste support. I scrolled through the Pygame docs and found there is a module, <code>pygame.scrap</code>, which is for this purpose. When I first implemented it, I used the <code>pygame.SCRAP_TEXT</code> content type when making calls to the <code>scrap.get</code> and <code>scrap.put</code> methods. It worked like expected, but obviously copy/pasting Unicode text didn't work, since <code>pygame.SCRAP_TEXT</code> can only handle ASCII text. So I switched to using <code>&quot;text/plain;charset=utf-8&quot;</code> as the MIME type to pass to the scrap methods, and that's when everything stopped working correctly.</p> <p>This is the code I'm currently using to get the clipboard content, with an OS check to make sure that the Unicode MIME type is only used if the host OS is Windows or Linux:</p> <pre><code>def paste_text(self) -&gt; None: if sys.platform in (&quot;win32&quot;, &quot;linux&quot;): data_type = &quot;text/plain;charset=utf-8&quot; decode_format = &quot;utf-8&quot; else: data_type = pygame.SCRAP_TEXT decode_format = &quot;ascii&quot; try: text = pygame.scrap.get(data_type) except pygame.error: return None if text is None: return None print(&quot;Hex bytes:&quot;, &quot;&quot;.join(char + (&quot;&quot; if index % 2 or index == len(text.hex()) else &quot; &quot;) for index, char in enumerate(text.hex(), start=1))) print(&quot;Glyph Representation: \&quot;{}\&quot;&quot;.format(text.decode(&quot;utf-8&quot;, &quot;ignore&quot;))) </code></pre> <p>The one-liner print statement on line 14 is for debugging purposes, and it converts the <code>bytes</code> object into a string of hexadecimal numbers seperated by spaces, and prints this information to the terminal.</p> <p>Now, I've checked the code above many times to figure out where I'm making a mistake, but I just can't 
figure out what I'm doing wrong. There are all sorts of weird behavior when using the code above to get the clipboard content... After some testing, I've found the following two big issues:</p> <ol> <li>When an string containing only ASCII characters is copied to the clipboard from an external program, and then pasted into the Pygame application (which calls the method in the snippet above), the codepoint for a NULL character is added after each hex byte, plus two extra NULL characters at the end of the string. For example, the string <code>Hello, world!</code> should have the 13 hex bytes: <code>48 65 6c 6c 6f 2c 20 77 6f 72 6c 64 21</code>. But the hex bytes actually printed by the method above is the following: <code>48 00 65 00 6c 00 6c 00 6f 00 2c 00 20 00 77 00 6f 00 72 00 6c 00 64 00 21 00 00 00</code>.</li> </ol> <p>If this was the only problem, it wouldn't be that hard to fix. I could simply filter out the NULL characters using the Python <code>str.isprintable()</code> method. But it's <em>not</em> the only problem. It bugs out even worse if the string contains Unicode characters:</p> <ol start="2"> <li>When a string containing Unicode characters is copied from an external program and pasted into the Pygame application, the hex bytes that make up the string are replaced by a completely different byte sequence, with even the <em>length</em> of the byte sequence being different. Also, each Unicode string always gets corrupted into the same &quot;different&quot; byte sequence. It's like some kind of weird seeding algorithm. For example, if the Mandarin string <code>您好,世界!</code>, which should have the 18 hex bytes <code>e6 82 a8 e5 a5 bd ef bc 8c e4 b8 96 e7 95 8c ef bc 81</code> when encoded with UTF-8, is typed into an external text editor, then copy-pasted into the Pygame application, the hex bytes become: <code>a8 60 7d 59 0c ff 16 4e 4c 75 01 ff 00 00</code>, which is 4 bytes shorter... 
And when decoded with UTF-8, it becomes the string <code>`}YNLu </code>...</li> </ol> <p>And it's not just pasting, copying text <em>from</em> the Pygame application is also very buggy. The below is the method I'm calling to copy the text currently selected in the text-box widget to the clipboard:</p> <pre><code>def copy_text(self) -&gt; None: ... selection = self.text[self.select_start:self.select_end] if sys.platform in (&quot;win32&quot;, &quot;linux&quot;): data_type = &quot;text/plain;charset=utf-8&quot; text_bytes = selection.encode(&quot;utf-8&quot;, &quot;ignore&quot;) else: data_type = pygame.SCRAP_TEXT text_bytes = selection.encode(&quot;ascii&quot;, &quot;ignore&quot;) try: pygame.scrap.put(data_type, text_bytes) except pygame.error: return None </code></pre> <p>And as for the buggy behavior when copying:</p> <ol> <li>When <em>any</em> text, whether pure ASCII or Unicode, is copied from the Pygame application, it <strong>works correctly</strong> <em>without</em> messing up the bytes if pasted <em>back</em> into the Pygame application. <em>But</em>, if the text is pasted into an <em>external</em> text editor, the byte sequence is replaced with one of a different length, with the same weird &quot;seeding algorithm&quot; effect mentioned above, since the same string will always produce the same corrupted text. For example, if the string <code>Hello, world!</code> is typed into the Pygame application and copied to the clipboard, then pasted into an <em>external</em> text editor, the string becomes the characters <code>效汬Ɐ眠牯摬</code>, with the 18 hex bytes <code>e6 95 88 e6 b1 ac e2 b1 af e7 9c a0 e7 89 af e6 91 ac</code>. That's 5 bytes longer than the original string, where did the extra 5 bytes even come from? 
And if the string <code>您好,世界!</code> is typed into the Pygame application (via pygame.TEXTINPUT events) and copied, then pasted into an external text editor, the characters become the following: <code>苦붥볯隸闧¼</code>, with the 26 hex bytes <code>e8 8b a6 ee 96 a8 eb b6 a5 eb b3 af ee 92 8c e9 9a b8 e9 97 a7 ee be 8c c2 bc</code>. That's 8 bytes longer than the original string.</li> </ol> <p>What is causing all these weird issues? Am I making some huge mistake in my code without seeing it?</p> <hr /> <p>Update: As there doesn't seem to be a solution, I'm switching to using the <code>pyperclip</code> module instead of <code>pygame.scrap</code> for clipboard operations.</p>
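A hedged reading of the hex dumps above: they are exactly what UTF-16-LE — the native Windows clipboard encoding (CF_UNICODETEXT) — produces, including the trailing NUL terminator. That would explain both paste symptoms at once: interleaved `00` bytes for ASCII text, and "corrupted" shorter byte sequences when UTF-16 data is decoded as UTF-8. The mirror-image corruption on copy suggests the `put` side is likewise being treated as UTF-16. This is an inference about pygame's Windows backend, not documented behaviour; a sketch verifying the decode against the dumps quoted above:

```python
# Hex dumps copied verbatim from above; bytes.fromhex ignores the spaces.
ascii_dump = bytes.fromhex(
    "48 00 65 00 6c 00 6c 00 6f 00 2c 00 20 00 77 00 "
    "6f 00 72 00 6c 00 64 00 21 00 00 00")
cjk_dump = bytes.fromhex("a8 60 7d 59 0c ff 16 4e 4c 75 01 ff 00 00")

# Decoding as UTF-16-LE and stripping the NUL terminator recovers the
# original strings exactly.
text = ascii_dump.decode("utf-16-le").rstrip("\x00")
cjk_text = cjk_dump.decode("utf-16-le").rstrip("\x00")
```

If that holds, encoding with `"utf-16-le"` (plus a `b"\x00\x00"` terminator) before `scrap.put`, and decoding pastes the same way, should round-trip on Windows — or sidestep the module entirely with `pyperclip`, as the update already does.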
<python><unicode><utf-8><character-encoding>
2023-02-10 12:46:39
0
315
I Like Python
75,411,269
1,362,485
FileNotFound when trying to pickle TensorFlow object in GPU
<p>I'm running the code below, and it works perfectly if TensorFlow is installed <em>without</em> GPU. But if installed <em>with</em> GPU, I get a FileNotFound error when I try to load the object.</p> <p>I tried also with joblib and pickle directly, and I always get the same error.</p> <p>Any help will be greatly appreciated.</p> <pre><code>import tensorflow as tf import dill def Generator(): z_dim = 60 FEATURES_LIST = [&quot;aaa&quot;, &quot;bbb&quot;, &quot;ccc&quot; ] ME_FEATURES_LIST = [&quot;ddd&quot;, &quot;eee&quot;, &quot;fff&quot; ] NUM_FEATURES = len(FEATURES_LIST) NUM_ME_FEATURES = len(ME_FEATURES_LIST) z = tf.keras.layers.Input(shape=(z_dim,), dtype='float32') y = tf.keras.layers.Input(shape=(NUM_ME_FEATURES,), dtype='float32') tr = tf.keras.layers.Input(shape=(1,), dtype='bool') x = tf.keras.layers.concatenate([z, y]) x = tf.keras.layers.Dense(z_dim * NUM_ME_FEATURES, activation=&quot;relu&quot;)(x) out = tf.keras.layers.Dense(NUM_FEATURES, activation='sigmoid')(x) model = tf.keras.Model(inputs=[z, y, tr], outputs=(out, y)) return model G = Generator() with open(&quot;dill_functional&quot;, 'wb') as file: dill.dump(G, file) with open(&quot;dill_functional&quot;, 'rb') as file: G = dill.load(file) # &lt;--- error here print(str(G)) </code></pre> <blockquote> <p>C:\Users\igor-.cloned\gan&gt; python .\dill_test.py 2023-02-09 22:42:28.379108: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.</p> <p>2023-02-09 22:42:29.759547: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9426 MB memory: -&gt; device: 0, name: NVIDIA GeForce RTX 3080 Ti, pci bus id: 0000:01:00.0, compute capability: 8.6</p> <p>WARNING:tensorflow:Compiled the 
loaded model, but the compiled metrics have yet to be built. <code>model.compile_metrics</code> will be empty until you train or evaluate the model.</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\igor-\.cloned\gan\dill_test.py&quot;, line 32, in &lt;module&gt; G = dill.load(file) File &quot;C:\Users\igor-\anaconda3\envs\ai\lib\site-packages\dill\_dill.py&quot;, line 272, in load return Unpickler(file, ignore=ignore, **kwds).load() File &quot;C:\Users\igor-\anaconda3\envs\ai\lib\site-packages\dill\_dill.py&quot;, line 419, in load obj = StockUnpickler.load(self) File &quot;C:\Users\igor-\anaconda3\envs\ai\lib\site-packages\keras\saving\pickle_utils.py&quot;, line 47, in deserialize_model_from_bytecode model = save_module.load_model(temp_dir) File &quot;C:\Users\igor-\anaconda3\envs\ai\lib\site-packages\keras\utils\traceback_utils.py&quot;, line 70, in error_handler raise e.with_traceback(filtered_tb) from None File &quot;C:\Users\igor-\anaconda3\envs\ai\lib\site-packages\tensorflow\python\saved_model\load.py&quot;, line 933, in load_partial raise FileNotFoundError( FileNotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ram://fc47ea82-4f6b-4736-9394-980cc1f14358/variables/variables </code></pre> <p>You may be trying to load on a different device from the computational device. Consider setting the <code>experimental_io_device</code> option in <code>tf.saved_model.LoadOptions</code> to the io_device such as '/job:localhost'.</p> </blockquote>
<python><tensorflow><keras><pickle><dill>
2023-02-10 12:39:46
4
1,207
ps0604
75,411,254
5,432,214
Minimal reproducible example of a django project using appengine task queue (dev_appserver.py)
<p>I am trying to create a django project which uses AppEngine's task queue, and would like to test it locally before deploying (using gcloud's <code>dev_appserver.py</code>).</p> <p>I can't seem to find resources that help with local development; the closest thing was a Medium article that helps with setting up django with Datastore (<a href="https://medium.com/@bcrodrigues/quick-start-django-datastore-app-engine-standard-3-7-dev-appserver-py-way-56a0f90c53a3" rel="nofollow noreferrer">https://medium.com/@bcrodrigues/quick-start-django-datastore-app-engine-standard-3-7-dev-appserver-py-way-56a0f90c53a3</a>).</p> <p>Does anyone have an example that I could look into for understanding how to start my implementation?</p>
<python><django><google-app-engine><google-cloud-platform><dev-appserver>
2023-02-10 12:38:37
1
1,310
HitLuca
75,411,162
3,421,383
Not able to get undetected-chromedriver.exe download file
<p>Not able to get <code>undetected-chromedriver.exe</code> download file</p> <p>I am using <code>undetected-chromedriver</code> in Python. I have installed <code>undetected-chromedriver</code> using <code>pip install undetected-chromedriver</code></p> <p>Below is my code</p> <pre><code>import undetected_chromedriver as uc driver = uc.Chrome() # Setting Driver Implicit Time out for An Element driver.implicitly_wait(10) # Maximize the window driver.maximize_window() time.sleep(2000) print(&quot;maxmized window&quot;) driver.get(&quot;http://example.net&quot;) </code></pre> <p>Error I am getting with above code once browser is launched is as below</p> <pre><code>DevTools listening on ws://127.0.0.1:65496/devtools/browser/a66dad5d-073d-478c-a291-e29a127a2ecb [15220:11524:0210/192501.116:ERROR:device_event_log_impl.cc(218)] [19:25:01.116] USB: usb_device_handle_win.cc:1046 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) [15220:11524:0210/192501.123:ERROR:device_event_log_impl.cc(218)] [19:25:01.123] USB: usb_device_handle_win.cc:1046 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) </code></pre> <p><a href="https://pypi.org/project/undetected-chromedriver/#history" rel="nofollow noreferrer">https://pypi.org/project/undetected-chromedriver/#history</a></p> <p>I need exe of undetected-chromedriver , can you please help</p> <p>Also while instantiating I am getting can not find declaration, however I have installed <code>pip install undetected-chromedriver</code> <a href="https://i.sstatic.net/SOqlF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SOqlF.png" alt="enter image description here" /></a> Thanks in advance</p>
<python><google-chrome><selenium-chromedriver><undetected-chromedriver>
2023-02-10 12:30:49
1
814
simond
75,411,149
11,183,333
Create a partial pdf from bytes in python
<p>I have a pdf file somewhere. This pdf is being sent to the destination in equal-sized chunks of bytes (apart from the last chunk).</p> <p>Let's say this pdf file is being read like this in python:</p> <pre><code>with open(filename, 'rb') as file: chunk = file.read(3000) while chunk: #the sending method here await asyncio.sleep(0.5) chunk = file.read(3000) </code></pre> <p><strong>The question is</strong>: Can I construct a partial PDF file at the destination, while the leftover part of the document is still being sent?</p> <p>I tried it with pypdfium2 / PyPDF2, but they throw errors until the whole PDF file has arrived:</p> <pre><code>full_pdf = b'' def process(self, message): self.full_pdf += message partial = io.BytesIO(self.full_pdf) try: pdf=pypdfium2.PdfDocument(partial) print(len(pdf)) except Exception as e: print(&quot;error&quot;, e) </code></pre> <p>Basically I'd like to get the pages of the document, even if it's not the whole document yet.</p>
<python><pdf><stream><pypdf>
2023-02-10 12:30:11
1
324
Patrick Visi
75,410,827
12,014,637
How does masking work in Tensorflow Keras
<p>I have difficulty understanding how exactly masking works in Tensorflow/Keras. On the Keras website (<a href="https://www.tensorflow.org/guide/keras/masking_and_padding" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras/masking_and_padding</a>) they simply say that the neural network layers skip/ignore the masked values but it doesn't explain how? Does it force the weights to zero? (I know a boolean array is being created but I don't know how it's being used)</p> <p>For example check this simple example:</p> <pre class="lang-py prettyprint-override"><code>tf.random.set_seed(1) embedding = tf.keras.layers.Embedding(input_dim=10, output_dim=3, mask_zero=True) masked_output = embedding(np.array([[1,2,0]])) print(masked_output) </code></pre> <p>I asked the Embedding layer to mask zero inputs. Now look at the output:</p> <pre><code>tf.Tensor( [[[ 0.00300496 -0.02925059 -0.01254098] [ 0.04872786 0.01087702 -0.03656749] [ 0.00446818 0.00290152 -0.02269397]]], shape=(1, 3, 3), dtype=float32) </code></pre> <p>If you change the &quot;mask_zero&quot; argument to False you get the exact same results. Does anyone know what's happening behind the scene? Any resources explaining the masking mechanism more thoroughly is highly appreciated.</p> <p><strong>P.S:</strong> <em>This is also an example of a full Neural Network which gives an identical outcome with and without masking:</em></p> <pre class="lang-py prettyprint-override"><code>tf.random.set_seed(1) input = np.array([[1,2,0]]) # &lt;--- 0 should be masked and ignored embedding = tf.keras.layers.Embedding(input_dim=10, output_dim=3, mask_zero=True) masked_output = embedding(input) flatten = tf.keras.layers.Flatten()(masked_output) dense_middle = tf.keras.layers.Dense(4)(flatten) out = tf.keras.layers.Dense(1)(dense_middle) print(out) </code></pre>
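For what it's worth, my understanding (an assumption, not stated in the question): `mask_zero=True` does not change the embedding's output values and does not force any weights to zero. It only attaches a boolean mask that travels alongside the tensor, and only mask-aware layers (RNNs, attention, masked losses) consult it; `Flatten` and `Dense` ignore it, which would explain the identical outputs above. A framework-free sketch of the idea:

```python
# Plain-Python sketch of Keras-style masking (not the real Keras internals).
# The data tensor is unchanged; a boolean mask travels next to it, and only
# layers that *choose* to consume the mask (e.g. RNNs) skip the masked steps.
inputs = [[1, 2, 0]]                                     # 0 is the padding token
mask = [[token != 0 for token in row] for row in inputs]

def masked_mean(values, keep):
    """A 'mask-aware layer': averages only the unmasked positions."""
    kept = [v for v, k in zip(values, keep) if k]
    return sum(kept) / len(kept)

print(mask)                                      # [[True, True, False]]
print(masked_mean([10.0, 20.0, 99.0], mask[0]))  # 15.0 -- the masked 99.0 is skipped
```

A layer like `Dense` in this picture simply never looks at `mask`, so its output is identical whether the mask exists or not.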
<python><tensorflow><keras>
2023-02-10 11:58:51
2
618
Amin Shn
75,410,778
12,671,057
Sort by price and remove prices
<p>I have a shopping list:</p> <pre><code>items = [ ['Aspirin', 'Walgreens', 6.00], ['book lamp', 'Amazon', 2.87], ['popsicles', 'Walmart', 5.64], ['hair brush', 'Amazon', 6.58], ['Listerine', 'Walmart', 3.95], ['gift bag', 'Target', 1.50] ] </code></pre> <p>I want to sort the items from cheapest to highest price, and remove the prices. (I don't need them anymore then, I'll just buy from top down until I run out of money). Goal is:</p> <pre><code>items = [ ['gift bag', 'Target'], ['book lamp', 'Amazon'], ['Listerine', 'Walmart'], ['popsicles', 'Walmart'], ['Aspirin', 'Walgreens'], ['hair brush', 'Amazon'] ] </code></pre> <p>A way that works but looks clumsy (<a href="https://tio.run/##fZE9b4MwEIZ3/4rrBEiRlZKSppEYoq4dK3VADE56gBWwT2dXavrnqaEENVKEp/t4fH7fM118Y81mR9xrj52DHAoB4RTRwZFmbaIVRB@qrRnRuJBs5XpdribmaO0ZWtXRQB069WMHPpW75xkhS06fWnTToE6xD2Emt08z0yjNcOQv19zM2cpsNzNv2nkMevB2zka@ZDNT68rDUdUD8q64xoF4lNm6FKUQ@E148v8c3sEXnS1rWXY8d@@udXkTQX2vO7LswRKy8pbF9bdcqOJnPKYrOOMlvyJyqAVPQWmcJomoLIMGbYCVqTFu0fzdSpL9@PiYFLoMQ69hsU/LvmLbAVHQ7GFSQSQE0XRdjK04erXMYb8PEegKJnk5TEvH1mEwzNbUUdL/Ag" rel="nofollow noreferrer" title="Python 3.8 (pre-release) – Try It Online">demo/template</a>):</p> <pre><code>import operator items = sorted(items, key=operator.itemgetter(2)) for i in range(len(items)): items[i] = items[i][:2] </code></pre> <p>Is there a shorter way?</p>
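One shorter possibility (my sketch, not from the question) is to sort and slice off the price in a single list comprehension:

```python
import operator

items = [
    ['Aspirin', 'Walgreens', 6.00],
    ['book lamp', 'Amazon', 2.87],
    ['popsicles', 'Walmart', 5.64],
    ['hair brush', 'Amazon', 6.58],
    ['Listerine', 'Walmart', 3.95],
    ['gift bag', 'Target', 1.50],
]

# Sort by the price in column 2, then drop it while rebuilding the list.
items = [row[:2] for row in sorted(items, key=operator.itemgetter(2))]
print(items[0])  # ['gift bag', 'Target']
```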
<python><sorting>
2023-02-10 11:54:15
7
27,959
Kelly Bundy
75,410,769
5,594,712
Time response of a double sided frequency response
<p>I am trying to learn more about fourier transforms and the inverse fourier transform. For the example below, I am unable to understand the time response signal.</p> <p>Here is what I am doing:</p> <p><strong>Step 1:</strong> I am starting in this case from the frequency domain with this signal:</p> <p><a href="https://i.sstatic.net/CmbaM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CmbaM.png" alt="enter image description here" /></a></p> <p><strong>Step 2:</strong> Next, I am doing the inverse fourier transform of the above signal and this is what I get:</p> <p><a href="https://i.sstatic.net/MFdEF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MFdEF.png" alt="enter image description here" /></a></p> <p>I don't understand why I am seeing the second peak (highlighted in the maroon box) at the end of the time response. How could I remove the last peak?</p> <p>The code I've used is as follows:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy import fft start_time = 0 end_time = 10 fs = 100 dt = 1/fs t = np.arange(start_time, end_time, dt) freqs = fft.fftfreq(len(t)) * fs fq = list() signal = list() for f in freqs: fq += [f] signal += [np.sin(f) / (f)] fig, ax = plt.subplots() ax.plot(fq, signal) ax.set_xlabel('Frequency in Hertz [Hz]') ax.set_ylabel('Frequency Domain (Spectrum)') #removing contribution of f=0 signal1 = signal[1:] ift = fft.ifft(signal1) plt.plot(ift) </code></pre>
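My reading of the second peak (an assumption, not from the question): `ifft` output is indexed from t = 0 to t = N-1, so the negative-time half of a pulse centred at t = 0 wraps around to the end of the array. Applying `np.fft.fftshift` to the output moves the two halves back together. A minimal numpy-only demonstration with a smooth, even spectrum:

```python
import numpy as np

n = 256
freqs = np.fft.fftfreq(n)                 # ordering: 0, positive..., negative...
spectrum = np.exp(-(freqs * 20.0) ** 2)   # real, even spectrum -> real, even time signal

time_sig = np.fft.ifft(spectrum).real

# The pulse is centred at t = 0; its negative-time half wraps to the end,
# which shows up as a second "peak" at the right edge of the plot.
print(np.argmax(time_sig))                # 0
print(time_sig[-1] > time_sig[n // 2])    # True: the tail is large, the middle tiny

centered = np.fft.fftshift(time_sig)      # rotate so t = 0 sits in the middle
print(np.argmax(centered) == n // 2)      # True: one single, centred peak
```

The same `fftshift` on `ift` in the code above should fold the end-of-array peak back onto the main one rather than showing two.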
<python><scipy><signal-processing><fft>
2023-02-10 11:53:28
1
1,244
Adam Merckx
75,410,767
2,664,205
convert a string containing a valid file path into a list of path components
<p>Is there a function in Python that takes as input a variable holding a path (a string containing a valid path), e.g.</p> <pre><code>[&quot;/Users/xyz/Dropbox/figures/foo.txt&quot;,&quot;/Users/xyz/Dropbox/figures/folder/&quot;] </code></pre> <p>and converts it into a list of path components (e.g. directories)?</p> <p>e.g.</p> <pre><code>&quot;/hello/world&quot; -&gt; [&quot;hello&quot;, &quot;world&quot;] &quot;/hello/world/&quot; -&gt; [&quot;hello&quot;, &quot;world&quot;] </code></pre> <p>or on Windows</p> <pre><code>&quot;a:\\hello\\world&quot; -&gt; [&quot;a:&quot;, &quot;hello&quot;, &quot;world&quot;] </code></pre> <p>Using this function I would like to implement, for instance, functionality similar to that of</p> <pre><code>find . -maxdepth 4 </code></pre>
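One sketch (mine, not from the question) uses `pathlib`'s pure paths, which split strings without touching the filesystem and let you pick the path flavour explicitly:

```python
from pathlib import PurePosixPath, PureWindowsPath

def components(path, windows=False):
    """Split a path string into its components, dropping the root."""
    cls = PureWindowsPath if windows else PurePosixPath
    # .parts keeps the root ('/' or 'a:\\') as the first element;
    # stripping trailing separators turns 'a:\\' into 'a:' and '/' into ''.
    parts = (part.rstrip('\\/') for part in cls(path).parts)
    return [part for part in parts if part]

print(components("/hello/world"))                    # ['hello', 'world']
print(components("/hello/world/"))                   # ['hello', 'world']
print(components(r"a:\hello\world", windows=True))   # ['a:', 'hello', 'world']
```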
<python><path>
2023-02-10 11:53:20
1
1,064
witek
75,410,747
11,748,924
Audio to Spectrogram Image
<p>I expect I can convert an audio file or waveform to the spectrogram image where:</p> <ol> <li>X-axis represent time (horizontal axis), where goes to the right meaning to the ending duration of audio.</li> <li>Y-axis represent frequency (vertical axis), where goes to the up meaning to the maximum of frequency from audio that. So the range is <code>(20hz until the max possible audio frequency can reach)</code>. I also expect I can set scale in this axis such as linearly or logarithm or with my custom function like: <code>f(p) = 2p</code> where <code>p is n-th pixel from 0 to the maximum heigh of image</code> and <code>f(p)</code> is frequency.</li> <li>Black pixel represent no amplitude</li> <li>White pixel represent the max possible audio amplitudo can reach</li> <li>That's mean, gray pixel represent amplitude value that between of them</li> <li>I also expect I able to specify resolution of image such as <code>720*480</code></li> </ol> <p>So is there python library/package that can I install, or I should calculate manually which I should transform from time domain waveform to the frequency domain waveform using Fast Fourier Transform?</p>
<python><image><audio><spectrogram>
2023-02-10 11:51:29
1
1,252
Muhammad Ikhwan Perwira
75,410,653
7,295,599
PyUSB: communication with OWON Oscilloscope (2)
<p>After quite some hassle <a href="https://stackoverflow.com/q/59105167/7295599">talking to an OWON oscilloscope</a>, there are new problems during the installation on another PC.</p> <p>The USB driver seems to be installed, at least that's what Windows10 makes me believe.</p> <p><a href="https://i.sstatic.net/tTcjB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tTcjB.png" alt="enter image description here" /></a></p> <p>The software which can be downloaded from PeakTech's homepage (DS_Wave V2.2.2) doesn't seem to work. When trying to connect the device remotely via that software it causes a &quot;crash&quot;/restart of the oscilloscope.</p> <p>Anyway, I don't want to use their software but I would like to address the oscilloscope via pyUSB.</p> <p>This little Python program is working on the first PC but not on the second PC.</p> <pre><code>### communicate with a Peaktech 1337 Oscilloscope (OWON) import usb.core import usb.util from usb.backend import libusb1 def send(dev,cmd): # address taken from results of print(dev): ENDPOINT 0x3: Bulk OUT dev.write(3,cmd+'\r') # address taken from results of print(dev): ENDPOINT 0x81: Bulk IN result = (dev.read(0x81,10000,1000)).tobytes().decode('utf-8')[:-4] return result back = libusb1.get_backend() print(type(back)) dev = usb.core.find(idVendor=0x5345, idProduct=0x1234, backend=back) print(dev) dev.set_configuration() print(send(dev,&quot;*IDN?&quot;)) ### end of script </code></pre> <p><strong>Result on PC1:</strong> (Python 3.7.4)</p> <pre><code>&lt;class 'NoneType'&gt; DEVICE ID 5345:1234 on Bus 000 Address 001 ================= bLength : 0x12 (18 bytes) bDescriptorType : 0x1 Device bcdUSB : 0x200 USB 2.0 bDeviceClass : 0x0 Specified at interface bDeviceSubClass : 0x0 bDeviceProtocol : 0x0 bMaxPacketSize0 : 0x40 (64 bytes) idVendor : 0x5345 idProduct : 0x1234 bcdDevice : 0x294 Device 2.94 iManufacturer : 0x1 System CPU iProduct : 0x2 Oscilloscope iSerialNumber : 0x3 SERIAL bNumConfigurations : 0x1 
CONFIGURATION 1: 500 mA ================================== bLength : 0x9 (9 bytes) bDescriptorType : 0x2 Configuration wTotalLength : 0x20 (32 bytes) bNumInterfaces : 0x1 bConfigurationValue : 0x1 iConfiguration : 0x5 Bulk Data Configuration bmAttributes : 0xc0 Self Powered bMaxPower : 0xfa (500 mA) INTERFACE 0: Physical ================================== bLength : 0x9 (9 bytes) bDescriptorType : 0x4 Interface bInterfaceNumber : 0x0 bAlternateSetting : 0x0 bNumEndpoints : 0x2 bInterfaceClass : 0x5 Physical bInterfaceSubClass : 0x6 bInterfaceProtocol : 0x50 iInterface : 0x4 Bulk Data Interface ENDPOINT 0x81: Bulk IN =============================== bLength : 0x7 (7 bytes) bDescriptorType : 0x5 Endpoint bEndpointAddress : 0x81 IN bmAttributes : 0x2 Bulk wMaxPacketSize : 0x200 (512 bytes) bInterval : 0x0 ENDPOINT 0x3: Bulk OUT =============================== bLength : 0x7 (7 bytes) bDescriptorType : 0x5 Endpoint bEndpointAddress : 0x3 OUT bmAttributes : 0x2 Bulk wMaxPacketSize : 0x200 (512 bytes) bInterval : 0x0 ,P1337,1842237,V2.4. 
# &lt;--- that's the response I'm expecting after sending `*IDN?` </code></pre> <p><strong>Result on PC2:</strong> (Python 3.10.7)</p> <pre><code>&lt;class 'usb.backend.libusb1._LibUSB'&gt; DEVICE ID 5345:1234 on Bus 001 Address 049 ================= bLength : 0x12 (18 bytes) bDescriptorType : 0x1 Device bcdUSB : 0x200 USB 2.0 bDeviceClass : 0x0 Specified at interface bDeviceSubClass : 0x0 bDeviceProtocol : 0x0 bMaxPacketSize0 : 0x40 (64 bytes) idVendor : 0x5345 idProduct : 0x1234 bcdDevice : 0x294 Device 2.94 iManufacturer : 0x1 Error Accessing String iProduct : 0x2 Error Accessing String iSerialNumber : 0x3 Error Accessing String bNumConfigurations : 0x1 CONFIGURATION 1: 500 mA ================================== bLength : 0x9 (9 bytes) bDescriptorType : 0x2 Configuration wTotalLength : 0x20 (32 bytes) bNumInterfaces : 0x1 bConfigurationValue : 0x1 iConfiguration : 0x5 Error Accessing String bmAttributes : 0xc0 Self Powered bMaxPower : 0xfa (500 mA) INTERFACE 0: Physical ================================== bLength : 0x9 (9 bytes) bDescriptorType : 0x4 Interface bInterfaceNumber : 0x0 bAlternateSetting : 0x0 bNumEndpoints : 0x2 bInterfaceClass : 0x5 Physical bInterfaceSubClass : 0x6 bInterfaceProtocol : 0x50 iInterface : 0x4 Error Accessing String ENDPOINT 0x81: Bulk IN =============================== bLength : 0x7 (7 bytes) bDescriptorType : 0x5 Endpoint bEndpointAddress : 0x81 IN bmAttributes : 0x2 Bulk wMaxPacketSize : 0x200 (512 bytes) bInterval : 0x0 ENDPOINT 0x3: Bulk OUT =============================== bLength : 0x7 (7 bytes) bDescriptorType : 0x5 Endpoint bEndpointAddress : 0x3 OUT bmAttributes : 0x2 Bulk wMaxPacketSize : 0x200 (512 bytes) bInterval : 0x0 Traceback (most recent call last): File &quot;C:\Users\Lab\Programs\Test\Test_PeakTech.py&quot;, line 18, in &lt;module&gt; dev.set_configuration() File &quot;C:\Users\Lab\AppData\Local\Programs\Python\Python310\lib\site-packages\usb\core.py&quot;, line 915, in set_configuration 
self._ctx.managed_set_configuration(self, configuration) File &quot;C:\Users\Lab\AppData\Local\Programs\Python\Python310\lib\site-packages\usb\core.py&quot;, line 113, in wrapper return f(self, *args, **kwargs) File &quot;C:\Users\Lab\AppData\Local\Programs\Python\Python310\lib\site-packages\usb\core.py&quot;, line 158, in managed_set_configuration self.managed_open() File &quot;C:\Users\Lab\AppData\Local\Programs\Python\Python310\lib\site-packages\usb\core.py&quot;, line 113, in wrapper return f(self, *args, **kwargs) File &quot;C:\Users\Lab\AppData\Local\Programs\Python\Python310\lib\site-packages\usb\core.py&quot;, line 131, in managed_open self.handle = self.backend.open_device(self.dev) File &quot;C:\Users\Lab\AppData\Local\Programs\Python\Python310\lib\site-packages\usb\backend\libusb1.py&quot;, line 804, in open_device return _DeviceHandle(dev) File &quot;C:\Users\Lab\AppData\Local\Programs\Python\Python310\lib\site-packages\usb\backend\libusb1.py&quot;, line 652, in __init__ _check(_lib.libusb_open(self.devid, byref(self.handle))) File &quot;C:\Users\Lab\AppData\Local\Programs\Python\Python310\lib\site-packages\usb\backend\libusb1.py&quot;, line 604, in _check raise USBError(_strerror(ret), ret, _libusb_errno[ret]) usb.core.USBError: [Errno 13] Access denied (insufficient permissions) </code></pre> <p>What is going wrong here? I'm out of ideas and wasting my time</p> <ul> <li>which access and why is it denied?</li> <li>do I have the wrong driver at the right location or the right driver at the wrong location?</li> <li>might this be an issue between Python 3.7.4 and 3.10.7 ?</li> <li>any other info required to find out what the actual problem is?</li> </ul> <p>Thank you for any hints.</p>
<python><windows><usb><libusb><pyusb>
2023-02-10 11:41:47
0
27,030
theozh
75,410,631
1,934,903
pytest mocker.patch.object's return_value uses different mock than the one I passed it
<p>I'm using pytest to patch the os.makedirs method for a test. In a particular test I wanted to add a side effect of an exception.</p> <p>So I import the <code>os</code> object that I've imported in my script under test, patch it, and then set the side effect in my test:</p> <pre class="lang-py prettyprint-override"><code>import pytest from unittest.mock import MagicMock from infrastructure.scripts.src.server_administrator import os @pytest.fixture def mock_makedirs(self, mocker): mock = MagicMock() mocker.patch.object(os, &quot;makedirs&quot;, return_value=mock) return mock def test_if_directory_exist_exception_is_not_raised(self, administrator, mock_makedirs): mock_makedirs.side_effect = Exception(&quot;Directory already exists.&quot;) with pytest.raises(Exception) as exception: administrator.initialize_server() assert exception.value == &quot;Directory already exists.&quot; </code></pre> <p>The problem I ran into was that when the mock gets called in my script under test, the side effect no longer exists. While troubleshooting I stopped the tests in the debugger to look at the ID values for the mock I created and the mock that the patch should have set as the return value, and found that they are different instances:</p> <p><a href="https://i.sstatic.net/2njqk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2njqk.jpg" alt="paused in the debugger and the ids don't match" /></a></p> <p>I'm still relatively new to some of the testing tools in python, so I may be missing something in the documentation, but shouldn't the mock patched in here be the mock I created? Am I patching it wrong?</p> <h1>UPDATE</h1> <p>I even adjusted the import style to grab <code>makedirs</code> directly to patch it:</p> <pre class="lang-py prettyprint-override"><code>@pytest.fixture def mock_makedirs(self, mocker): mock = MagicMock() mocker.patch(&quot;infrastructure.scripts.src.server_administrator.makedirs&quot;, return_value=mock) return mock </code></pre> <p>And I still run into the same &quot;different mocks&quot; issue.</p>
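If I read the snippet right, the catch may be that `return_value=mock` does not install your mock as the patch; it creates a new MagicMock as the replacement attribute and merely makes `mock` the result of calling it. Passing the object as `new` installs it directly. A stdlib-only sketch using `unittest.mock` (pytest-mock wraps the same machinery, as far as I know):

```python
import os
from unittest.mock import MagicMock, patch

my_mock = MagicMock()

# return_value=... builds a brand-new MagicMock as the replacement;
# my_mock is only what that replacement *returns* when called.
with patch.object(os, "makedirs", return_value=my_mock):
    assert os.makedirs is not my_mock            # two different mock instances
    assert os.makedirs("anywhere") is my_mock    # my_mock is just the call result

# Passing the mock as `new` (third positional argument) installs it directly,
# so side effects set on it later are seen by the code under test.
with patch.object(os, "makedirs", my_mock):
    assert os.makedirs is my_mock
    my_mock.side_effect = Exception("Directory already exists.")
    try:
        os.makedirs("anywhere")
    except Exception as err:
        caught = str(err)

print(caught)  # Directory already exists.
```

With pytest-mock the equivalent would presumably be `mocker.patch.object(os, "makedirs", new=mock)`, or simply using the mock object that `mocker.patch.object(...)` itself returns.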
<python><unit-testing><mocking><pytest><python-unittest>
2023-02-10 11:40:01
1
21,108
Chris Schmitz
75,410,533
726,730
PyQt5 QTreeWidget disable on cell select style
<p><strong>File: preview_example.py</strong></p> <pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'C:\Users\chris\My Projects\papinhio-player\ui/menu-1/playlists/preview/preview_example.ui' # # Created by: PyQt5 UI code generator 5.15.7 # # WARNING: Any manual changes made to this file will be lost when pyuic5 is # run again. Do not edit this file unless you know what you are doing. from PyQt5 import QtCore, QtGui, QtWidgets class Ui_Dialog(object): def setupUi(self, Dialog): Dialog.setObjectName(&quot;Dialog&quot;) Dialog.resize(1171, 1511) Dialog.setMinimumSize(QtCore.QSize(0, 0)) icon = QtGui.QIcon() icon.addPixmap(QtGui.QPixmap(&quot;:/rest-windows/assets/images/rest-windows/preview-sound-file.png&quot;), QtGui.QIcon.Normal, QtGui.QIcon.Off) Dialog.setWindowIcon(icon) self.gridLayout_2 = QtWidgets.QGridLayout(Dialog) self.gridLayout_2.setContentsMargins(0, 0, 0, 0) self.gridLayout_2.setObjectName(&quot;gridLayout_2&quot;) self.scrollArea = QtWidgets.QScrollArea(Dialog) self.scrollArea.setWidgetResizable(True) self.scrollArea.setObjectName(&quot;scrollArea&quot;) self.scrollAreaWidgetContents = QtWidgets.QWidget() self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 1169, 1509)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Ignored, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents.setObjectName(&quot;scrollAreaWidgetContents&quot;) self.gridLayout_5 = QtWidgets.QGridLayout(self.scrollAreaWidgetContents) self.gridLayout_5.setObjectName(&quot;gridLayout_5&quot;) self.treeWidget = QtWidgets.QTreeWidget(self.scrollAreaWidgetContents) self.treeWidget.setMinimumSize(QtCore.QSize(0, 400)) 
self.treeWidget.setStyleSheet(&quot;QTreeWidget::item{\n&quot; &quot; height:60px;\n&quot; &quot;}\n&quot; &quot;\n&quot; &quot;QTreeWidget::item:selected{\n&quot; &quot; background-color:#3399ff;\n&quot; &quot;}&quot;) self.treeWidget.setAnimated(True) self.treeWidget.setObjectName(&quot;treeWidget&quot;) item_0 = QtWidgets.QTreeWidgetItem(self.treeWidget) item_0 = QtWidgets.QTreeWidgetItem(self.treeWidget) self.gridLayout_5.addWidget(self.treeWidget, 0, 0, 1, 1) self.scrollArea.setWidget(self.scrollAreaWidgetContents) self.gridLayout_2.addWidget(self.scrollArea, 0, 0, 1, 1) self.retranslateUi(Dialog) QtCore.QMetaObject.connectSlotsByName(Dialog) def retranslateUi(self, Dialog): _translate = QtCore.QCoreApplication.translate Dialog.setWindowTitle(_translate(&quot;Dialog&quot;, &quot;Προεπισκόπηση λίστας αναπαραγωγής&quot;)) Dialog.setWhatsThis(_translate(&quot;Dialog&quot;, &quot;Προεπισκόπηση αρχείου ήχου (για τοπική προβολή)&quot;)) self.treeWidget.headerItem().setText(0, _translate(&quot;Dialog&quot;, &quot;Α/Α&quot;)) self.treeWidget.headerItem().setText(1, _translate(&quot;Dialog&quot;, &quot;Τύπος&quot;)) self.treeWidget.headerItem().setText(2, _translate(&quot;Dialog&quot;, &quot;Τίτλος&quot;)) self.treeWidget.headerItem().setText(3, _translate(&quot;Dialog&quot;, &quot;Προεπισκόπιση&quot;)) __sortingEnabled = self.treeWidget.isSortingEnabled() self.treeWidget.setSortingEnabled(False) self.treeWidget.topLevelItem(0).setText(0, _translate(&quot;Dialog&quot;, &quot;1&quot;)) self.treeWidget.topLevelItem(0).setText(1, _translate(&quot;Dialog&quot;, &quot;1&quot;)) self.treeWidget.topLevelItem(0).setText(2, _translate(&quot;Dialog&quot;, &quot;1&quot;)) self.treeWidget.topLevelItem(0).setText(3, _translate(&quot;Dialog&quot;, &quot;1&quot;)) self.treeWidget.topLevelItem(1).setText(0, _translate(&quot;Dialog&quot;, &quot;2&quot;)) self.treeWidget.topLevelItem(1).setText(1, _translate(&quot;Dialog&quot;, &quot;2&quot;)) self.treeWidget.topLevelItem(1).setText(2, 
_translate(&quot;Dialog&quot;, &quot;2&quot;)) self.treeWidget.topLevelItem(1).setText(3, _translate(&quot;Dialog&quot;, &quot;2&quot;)) self.treeWidget.setSortingEnabled(__sortingEnabled) if __name__ == &quot;__main__&quot;: import sys app = QtWidgets.QApplication(sys.argv) app.setStyle('Fusion') Dialog = QtWidgets.QDialog() ui = Ui_Dialog() ui.setupUi(Dialog) Dialog.show() sys.exit(app.exec_()) </code></pre> <p>In this QDialog a have a QTreeWidget. I want when a row is selected the row to be blue with white text color (as it is) but i also <strong>don't want the 'cell' of selected row to be highlighted special.</strong></p> <p>Print screen: <a href="https://i.sstatic.net/MPvVc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MPvVc.png" alt="enter image description here" /></a></p> <p>In this example the third cell has wrong qss (because I clicked inside this cell) but the other has correct qss.</p> <p>Possible solutions: QDelegate or QStyleItemOption or QBrush or QPallete..</p> <p>Code snippet:</p> <pre class="lang-py prettyprint-override"><code>class Delegate(QtWidgets.QStyledItemDelegate): def __init__(self,treeWidget): self.treeWidget = treeWidget super().__init__(treeWidget) def initStyleOption(self, option, index): super().initStyleOption(option, index) def paint(self, painter, option, index): if ( index.row() == self.treeWidget.currentIndex().row() and index.column() == self.treeWidget.currentIndex().column() ): super().paint(painter, option, index) self.initStyleOption(option, index) painter.drawRect(option.rect) painter.setPen(QtCore.Qt.white) painter.setBrush(QtGui.QColor(&quot;#3399ff&quot;)) else: super().paint(painter, option, index) </code></pre> <p>The above code fix a bit the problem but the solid black rectangle is still there...</p> <p>Edit: This problem is well visibled in Fusion Style.</p>
<python><pyqt5><qtreewidget>
2023-02-10 11:30:40
1
2,427
Chris P
75,410,443
13,506,329
Check if the column value of a row is equal to values in current or other columns for other rows of the same dataframe
<p>I have a <code>dataframe</code> of the following</p> <pre><code> | a| b_1 | b_2 |b_3 |c_1 | c_2 | c_3 | |--|-----------|-----------|-----|-----------|-----------|-----| |e1|3295.000000|-775.000000|604.5|3575.000000|-626.000000|604.5| |e2|3615.000000|-731.000000|604.5|1 |0 |0 | |e3|3615.000000|-731.000000|604.5|3575.000000|-626.000000|604.5| |e2|3615.000000|-731.000000|604.5|1 |0 | 0 | |e1|3295.000000|-775.000000|604.5|3575.000000|-626.000000|604.5| |e4| 0 |0 | 0 |0 |0 | 0 | </code></pre> <p>I want my resultant <code>dataframe</code> to look as follows</p> <pre><code>|a | b |c | d | |--|--------------------------------|-------------------------------|------| |e1| [3295.000000,-775.000000,604.5]|[3575.000000,-626.000000,604.5]|e3| |e2| [3615.000000,-731.000000,604.5]|[1, 0, 0] |e3| |e3| [3615.000000,-731.000000,604.5]|[3575.000000,-626.000000,604.5]|e1, e2| |e4| [0, 0, 0] |[0, 0, 0] |None | </code></pre> <p>Please note that <code>b</code> and <code>c</code> hold <code>numpy</code> arrays of size 3.</p> <p>The parameters for populating the columns <code>d</code> are as follows:</p> <ol> <li>If the value in <code>b</code> of the active row is equal to any other records in <code>b</code> barring itself, then take those records' <code>a</code> value.</li> <li>If the value in <code>b</code> of the active row is equal to any other records in <code>c</code> barring itself, then take those records' <code>a</code> value.</li> <li>If the value in <code>c</code> of the active row is equal to any other records in <code>c</code> barring itself, then take those records' <code>a</code> value.</li> <li>None otherwise</li> </ol>
<python><python-3.x><pandas><dataframe>
2023-02-10 11:22:08
1
388
Lihka_nonem
75,410,431
4,875,766
Polars: create column from sampling function
<p>I am looking to make a new column on a polars data frame.</p> <p>Suppose I have 2 Lazy Dataframes, <code>df_1</code> and <code>df_2</code> of different sizes. I want to sample, for each record of <code>df_2</code>, a column of <code>df_1</code>. In pandas:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd df_1 = pd.DataFrame(data={ &quot;sample_col&quot;: list(&quot;aabccdeff&quot;) }) df_2 = pd.DataFrame(data={ &quot;col_1&quot;: range(30) }) rng = np.random.default_rng() unique = df_1[&quot;sample_col&quot;].unique() df_2[&quot;my_sample&quot;] = rng.choice(unique, len(df_2)) </code></pre> <p>The unique values are known up front but I'm struggling to move this over to a lazy approach. Pointers?</p> <p>Edit: updated pandas code</p>
<python><python-polars>
2023-02-10 11:21:12
2
331
TobyStack
75,410,355
10,051,099
How to find "os" module file path in Python3.11 without importing it?
<p>I can get the path of 'os.py' like this:</p> <pre class="lang-py prettyprint-override"><code>import os os.__file__ </code></pre> <p>But how can I get it without importing it? I found <a href="https://stackoverflow.com/questions/4693608/find-path-of-module-without-importing-in-python">this relevant question</a>, but none of these work for <strong>Python3.11</strong> , although they work for Python&lt;=3.10 .</p> <p>It seems that the newly introduced Python3.11 optimizations (<a href="https://docs.python.org/3/whatsnew/3.11.html#frozen-imports-static-code-objects" rel="nofollow noreferrer">doc</a>) broke these solutions.</p>
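One workaround sketch (my assumption, not a canonical answer): rather than interrogating the module object, ask `sysconfig` where the standard library lives and join the filename on. (`sysconfig` imports `os` internally, but your own code never has to name it, and this is unaffected by 3.11's frozen-module optimization as far as I can tell.)

```python
import sysconfig
from pathlib import Path

# Location of the stdlib source directory for this interpreter.
stdlib_dir = Path(sysconfig.get_path("stdlib"))
os_py = stdlib_dir / "os.py"

print(os_py)            # e.g. /usr/lib/python3.11/os.py
print(os_py.exists())
```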
<python><python-3.11>
2023-02-10 11:13:30
1
3,695
tamuhey
75,410,317
15,913,281
Duplicating and Transforming Data in Dataframe
<p>I have a dataframe of football results. It is laid out in date order with a row for each game. Each row contains the name of the home team and away team in different columns along with the result.</p> <p>I want to create a new dataframe that contains a series of all the games played by each team (home and away) in a column called &quot;Team&quot;, a separate column for the Opponent and a third column for the Result.</p> <p>Here's an example of the original dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>Home</th> <th>Away</th> <th>Result</th> </tr> </thead> <tbody> <tr> <td>Sunday, 21 May 2017, 15:00</td> <td>A</td> <td>B</td> <td>A</td> </tr> <tr> <td>Thursday, 18 May 2017, 19:45</td> <td>C</td> <td>D</td> <td>D</td> </tr> <tr> <td>Wednesday, 17 May 2017, 19:45</td> <td>E</td> <td>A</td> <td>E</td> </tr> <tr> <td>Tuesday, 16 May 2017, 20:00</td> <td>B</td> <td>C</td> <td>Draw</td> </tr> </tbody> </table> </div> <p>And this is what I want to achieve:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>Team</th> <th>Opponent</th> <th>Result</th> </tr> </thead> <tbody> <tr> <td>Sunday, 21 May 2017, 15:00</td> <td>A</td> <td>B</td> <td>A</td> </tr> <tr> <td>Wednesday, 17 May 2017, 19:45</td> <td>A</td> <td>E</td> <td>E</td> </tr> <tr> <td>Sunday, 21 May 2017, 15:00</td> <td>B</td> <td>A</td> <td>A</td> </tr> <tr> <td>Tuesday, 16 May 2017, 20:00</td> <td>B</td> <td>C</td> <td>Draw</td> </tr> <tr> <td>Tuesday, 16 May 2017, 20:00</td> <td>C</td> <td>B</td> <td>Draw</td> </tr> <tr> <td>Thursday, 18 May 2017, 19:45</td> <td>C</td> <td>D</td> <td>D</td> </tr> <tr> <td>Thursday, 18 May 2017, 19:45</td> <td>D</td> <td>C</td> <td>D</td> </tr> <tr> <td>Wednesday, 17 May 2017, 19:45</td> <td>E</td> <td>A</td> <td>E</td> </tr> </tbody> </table> </div> <p>I am new to Pandas and don't know where to start with this. Can anyone help?</p>
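One common pattern that seems to fit (a sketch; the column values here are stand-ins for the real data): build a "home" view and an "away" view of the same frame via `rename`, stack them with `concat`, and sort by team:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["Sunday, 21 May 2017, 15:00", "Thursday, 18 May 2017, 19:45",
             "Wednesday, 17 May 2017, 19:45", "Tuesday, 16 May 2017, 20:00"],
    "Home": ["A", "C", "E", "B"],
    "Away": ["B", "D", "A", "C"],
    "Result": ["A", "D", "E", "Draw"],
})

home = df.rename(columns={"Home": "Team", "Away": "Opponent"})
away = df.rename(columns={"Away": "Team", "Home": "Opponent"})

games = (
    pd.concat([home, away], ignore_index=True)   # one row per team per game
      [["Date", "Team", "Opponent", "Result"]]   # fix the column order
      .sort_values("Team", kind="stable")
      .reset_index(drop=True)
)
print(len(games))  # 8
```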
<python><pandas>
2023-02-10 11:09:59
1
471
Robsmith
75,410,264
1,728,544
PySide6: column of checkboxes for a boolean variable in a data table
<p>I am trying to create a table in PySide6 to show data from a database. One of the variables in my database is a boolean variable, and I would like that to show in the table as an editable checkbox, which I can simply click on to switch between True and False values.</p> <p>This, I would have thought, should be a really easy thing to do, but as far as I can tell, it isn't.</p> <p>Using QTableView, it appears to be phenomenally complicated. The question of how to do it has been asked many times before on this site and others (eg <a href="https://stackoverflow.com/questions/17748546/pyqt-column-of-checkboxes-in-a-qtableview">here</a>), and seems to involve things like defining your own class to replace QTableView or adding a Delegate item for the relevant column, and quite possibly both. It always seems to involve many dozens of lines of code.</p> <p>It does seem to be a lot simpler using QTableWidget instead of QTableView, and is simply a matter of doing something like this:</p> <pre><code>tbl = QTableWidget() tbl.setCellWidget(row, col, QCheckBox()) </code></pre> <p>(looks like you have to iterate through all the rows if you want it to apply to a whole column, but that's not too bad).</p> <p>However, as I understand it, QTableWidget can't connect to a data model in the same way that QTableView can, so I'd have to write my own code to connect the QTableWidget to the data, and I'm back in &quot;dozens of lines of code&quot; territory.</p> <p>Is it really that complicated, or am I missing something? It seems like it ought to be a very simple thing to do, but if it is, I haven't yet found the simple way of doing it.</p> <p>BTW, I'm not yet totally wedded to PySide6. It seems like a nice GUI tool in many ways, but if someone is going to tell me that this is way simpler in Tkinter or wxPython or something like that, then I might be willing to consider switching to a different GUI framework.</p> <p>Thanks</p>
<python><qt><pyside6>
2023-02-10 11:06:20
0
423
Adam Jacobs
75,410,261
6,930,340
How to check if occurrences of identical consecutive numbers is below a threshold in pandas series
<p>I need to check whether the number of identical consecutive occurrences stays below a certain threshold, e.g. at most two identical consecutive numbers.</p> <pre><code>pd.Series(data=[-1, -1, 2, -2, 2, -2, 1, 1]) # True pd.Series(data=[-1, -1, -1, 2, 2, -2, 1, 1]) # False </code></pre> <p>Further checks:<br /> Only the numbers <code>+1</code> and <code>-1</code> are allowed to occur as consecutive numbers, with a maximum of two occurrences.</p> <pre><code>pd.Series(data=[-1, 1, -2, 2, -2, 2, -1, 1]) # True pd.Series(data=[1, 1, -2, 2, -2, 2, -1, 1]) # True pd.Series(data=[-1, -1, 2, 2, -2, 1, 1, -2]) # False pd.Series(data=[-1, 1, -2, -2, 1, -1, 2, -2]) # False </code></pre>
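Under my reading of the rules (runs longer than two are never allowed, and only +1/-1 may form a run at all), `itertools.groupby` gives a compact check. I'm working on plain lists here; for a Series you could pass `s.tolist()`:

```python
from itertools import groupby

def runs_ok(values, repeatable=frozenset({1, -1}), max_run=2):
    """True iff every run of identical consecutive values is legal:
    values outside `repeatable` must not repeat at all, and no run
    may be longer than `max_run`."""
    for value, group in groupby(values):
        run_length = sum(1 for _ in group)
        if run_length > 1 and (value not in repeatable or run_length > max_run):
            return False
    return True

print(runs_ok([-1, -1, 2, -2, 2, -2, 1, 1]))  # True
print(runs_ok([-1, -1, -1, 2, 2, -2, 1, 1]))  # False
```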
<python><pandas><list>
2023-02-10 11:06:12
1
5,167
Andi
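One possible approach (a sketch, not from the original post): group the series into runs of equal consecutive values with `itertools.groupby`, then check each run's length and whether any value that repeats is among the allowed ones (+1/-1 here).

```python
from itertools import groupby

import pandas as pd

def runs_ok(s, max_run=2, allowed=(1, -1)):
    """True iff no run of equal consecutive values exceeds max_run,
    and any value that repeats consecutively is in `allowed`."""
    for value, group in groupby(s):
        run_length = sum(1 for _ in group)
        if run_length > max_run:
            return False
        if run_length > 1 and value not in allowed:
            return False
    return True

print(runs_ok(pd.Series([-1, -1, 2, -2, 2, -2, 1, 1])))   # True
print(runs_ok(pd.Series([-1, -1, -1, 2, 2, -2, 1, 1])))   # False
print(runs_ok(pd.Series([-1, -1, 2, 2, -2, 1, 1, -2])))   # False
```

`groupby` iterates the Series values directly, so no extra conversion is needed; the same function also covers the "further checks" cases in the question.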
75,410,243
7,376,511
del self followed by super() call raises RuntimeError
<p>I was playing around with inheritance today, and I came across this exception:</p> <pre><code>class A: def __init__(self): del self super().__init__(**locals()) A() # RuntimeError: super(): arg[0] deleted </code></pre> <p>Why does this happen? If I replace the line with <code>super().__init__(**{})</code> and remove the deletion of self it works just fine. I was under the impression that <code>self</code> in this case is only a reference to the instance, so why does deleting it break the super reference?</p>
<python><inheritance><garbage-collection>
2023-02-10 11:04:52
0
797
Some Guy
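For context, a sketch (the `Base` class below is made up for illustration): zero-argument `super()` retrieves the current instance from the method's first positional-argument slot, so `del self` leaves nothing for it to find, which is exactly what the message "super(): arg[0] deleted" reports. Keeping a second reference and using the explicit two-argument form avoids the implicit lookup.

```python
class Base:
    def __init__(self, **kwargs):
        self.captured = kwargs

class A(Base):
    def __init__(self):
        obj = self       # keep a reference before deleting the name
        del self         # empties the slot that zero-arg super() would read
        # The explicit two-argument form does not consult the frame's
        # first argument, so it still works after `del self`.
        super(A, obj).__init__(x=1)

a = A()
print(a.captured)  # {'x': 1}
```

So `self` is indeed just a local name, but the zero-argument `super()` machinery peeks into the frame's argument slot rather than resolving the name normally.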
75,410,127
7,415,134
Connecting redis to docker
<p>I have a docker image and I want to connect to the Redis database running locally.</p> <pre><code>redis = redis.Redis( host='localhost', port = 6379, charset=&quot;utf-8&quot;, decode_responses=True) </code></pre> <p>What I typed into the terminal:</p> <pre><code>sudo docker build -t scraper . sudo docker run scraper </code></pre> <p>redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.</p> <p>NB: There are a lot of articles on this here, but none of them solved my problem. I am not a geek, though, so simple explanations will be appreciated. :)</p>
<python><docker><redis><localhost>
2023-02-10 10:54:58
0
379
Sid
75,410,037
15,093,600
Different python package requirements for 32 and 64 bit
<p><strong>Problem</strong></p> <p>My code depends on two packages <code>A</code> and <code>B</code>.</p> <p>Recently I found an issue with the packages compatibility: maintainers of package <code>A</code> stopped releasing whl packages for 32 bit Python, while maintainers of package <code>B</code> removed some functionality, which package <code>A</code> relies on. Since package <code>A</code> relies on some functionality in package <code>B</code>, the code fails in runtime. When using 64 bit Python the problem goes away - all packages are synced nicely.</p> <p>Of course I can manually suppress the upper version for package <code>B</code>, but it does not make sense for 64 bit version. It is preferable to use the newest versions for 64 bit Python. Yet it is necessary to allow Python 32 bit run smoothly as well, with older package versions, which I can define myself.</p> <p><strong>Question</strong></p> <p>I wonder if there is an opportunity to specify different package requirements for different bit versions.</p> <p>Currently in <code>setup.py</code> we have <code>install_requires</code> field, but this is generic and not bit-specific.</p> <p><strong>Extra info</strong></p> <p>Excerpt from <code>setup.py</code>:</p> <pre class="lang-py prettyprint-override"><code>install_requires=[ 'A', 'B', ] </code></pre> <p>I install my package in a dev environment:</p> <pre class="lang-bash prettyprint-override"><code>$ pip install -e . </code></pre> <p><strong>Update</strong></p> <p>I probably should have mentioned explicitly that here I want to differentiate between different Python bit versions (32 bit vs 64 bit) rather than OS architecture.</p> <p>The only solution I managed to come up with is this:</p> <pre class="lang-py prettyprint-override"><code>is_32_bit = sys.maxsize &lt;= 2**32 if is_32_bit: install_requires = ['A&lt;=1.0', 'B&lt;=1.0'] else: install_requires = ['A'] setup( ... install_requires=install_requires, ) </code></pre>
<python><pip><setuptools>
2023-02-10 10:46:47
1
460
Maxim Ivanov
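For what it's worth: as far as I can tell, PEP 508 environment markers cover things like `platform_machine` and `python_version` but have no marker for pointer size, so a runtime check in `setup.py`, as in the update above, is the usual fallback. Two equivalent ways to detect interpreter bitness (the `A`/`B` package names mirror the question's placeholders):

```python
import struct
import sys

# Both checks report the interpreter's bitness, not the OS architecture:
# a 32-bit Python on a 64-bit OS is correctly detected as 32-bit.
is_32_bit = sys.maxsize <= 2**32
pointer_bits = struct.calcsize("P") * 8   # size of a C pointer, in bits

# Hypothetical package names, mirroring the question:
install_requires = ['A<=1.0', 'B<=1.0'] if is_32_bit else ['A', 'B']
print(pointer_bits)
```

`struct.calcsize("P")` is 4 on a 32-bit interpreter and 8 on a 64-bit one, so both checks always agree.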
75,409,843
7,483,211
How to programmatically collect a list of all stack traces during a function call, not just when an exception happens?
<p>In order to debug how a complicated library executes a function call, e.g. <code>pd.Series(np.NaN)|pd.Series(&quot;True&quot;)</code> I would like to generate a list of all states of the stack during the execution of that function. So for every line of Python code executed (even inside functions and functions called by functions), there should be one entry in the list of stack traces).</p> <p>This is a bit like a profile generated by a sampling profiler. But in contrast, I don't want a stack trace for every millisecond, but for every line of Python code executed.</p> <p>I know I can <em>manually</em> call this function using <code>pdb</code>, and repeatedly enter <code>step</code> to step into any function calls and <code>where</code> to print stack traces of the current state - but I want to do this <em>automatically</em>.</p> <p>How can I automate this stack trace collection? Is there a package for this? If not, how do I automate that with, for example, <code>pdb</code> or another tool that does the job? Is there an accepted word for such a list of stack traces?</p> <p>That stack list could be used for at least two purposes: a) quickly finding all code that is reached, reducing the scope for finding relevant lines, b) creating a &quot;flamegraph&quot; of execution.</p> <p>Somewhat related questions:<br /> <a href="https://stackoverflow.com/questions/1692866/what-cool-hacks-can-be-done-using-sys-settrace">What cool hacks can be done using sys.settrace?</a> <br /> <a href="https://stackoverflow.com/questions/9670931/sandboxing-running-python-code-line-by-line?noredirect=1&amp;lq=1">sandboxing/running python code line by line</a></p>
<python><debugging><trace><stack-trace><sys>
2023-02-10 10:30:50
1
10,272
Cornelius Roemer
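One way to automate this with `sys.settrace` (a sketch; tools like coverage.py and viztracer do related things at larger scale, and the `inner`/`outer` functions below are made up for illustration): the trace function fires on every `'line'` event, where `traceback.extract_stack` can capture the full call stack at that moment.

```python
import sys
import traceback

def collect_line_stacks(func, *args, **kwargs):
    """Run func and record a full stack snapshot for each executed line."""
    snapshots = []

    def tracer(frame, event, arg):
        if event == "line":
            snapshots.append(traceback.extract_stack(frame))
        return tracer  # keep tracing nested frames line by line

    sys.settrace(tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(None)
    return result, snapshots

def inner(x):
    y = x + 1
    return y

def outer(x):
    return inner(x) * 2

result, stacks = collect_line_stacks(outer, 1)
print(result)  # 4
print(len(stacks))  # one snapshot per executed line
```

Each snapshot is a `StackSummary` that can be formatted like a normal traceback, which gives both the "all code reached" view and raw material for a flamegraph-style visualization.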
75,409,689
11,491,600
Azure Cognitive Search:
<p>I have recently upgraded my Azure Cognitive Search instance so it has semantic search.</p> <p>However, when I add query_type=semantic, in the client search I get the following stacktrace...</p> <pre><code>Traceback (most recent call last): File &quot;call_semantic_search.py&quot;, line 34, in &lt;module&gt; c, r = main(search_text='what is a ') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;call_semantic_search.py&quot;, line 28, in main count: float = search_results.get_count() ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;.venv/lib/python3.11/site-packages/azure/search/documents/_paging.py&quot;, line 82, in get_count return self._first_iterator_instance().get_count() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;.venv/lib/python3.11/site-packages/azure/search/documents/_paging.py&quot;, line 91, in wrapper self._response = self._get_next(self.continuation_token) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;.venv/lib/python3.11/site-packages/azure/search/documents/_paging.py&quot;, line 115, in _get_next_cb return self._client.documents.search_post( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;.venv/lib/python3.11/site-packages/azure/search/documents/_generated/operations/_documents_operations.py&quot;, line 312, in search_post raise HttpResponseError(response=response, model=error) azure.core.exceptions.HttpResponseError: () The request is invalid. Details: parameters : Requested value 'semantic' was not found. Code: Message: The request is invalid. Details: parameters : Requested value 'semantic' was not found. 
</code></pre> <p>This is the code that I have been using to call the search index.</p> <pre class="lang-py prettyprint-override"><code>import logging from typing import Dict, Iterable, Tuple import settings as settings from azure.core.credentials import AzureKeyCredential from azure.search.documents import SearchClient from search import SearchableItem TOP = 10 SKIP = 0 def main(search_text: str) -&gt; Tuple[float, Iterable[Dict]]: client = SearchClient( api_version=&quot;2021-04-30-Preview&quot;, endpoint=settings.SEARCH_SERVICE_ENDPOINT, index_name=settings.SOCIAL_IDX_NAME, credential=AzureKeyCredential(key=settings.SEARCH_SERVICE_KEY) ) logging.info(f&quot;Calling: /search?top={TOP}&amp;skip={SKIP}&amp;q={search_text}&quot;) search_results = client.search( search_text=search_text, top=TOP, skip=SKIP, query_type=&quot;semantic&quot;, include_total_count=True, ) count: float = search_results.get_count() results = SearchableItem.from_result_as_dict(search_results) return count, results if __name__ == &quot;__main__&quot;: count, results = main(search_text='what is a ') print(count, list(results)) </code></pre> <p>And here is my Azure configuration (I'm able to perform Semantic searches via the portal:</p> <p><a href="https://i.sstatic.net/FHInu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FHInu.png" alt="enter image description here" /></a></p> <h2>EDITS</h2> <p>Taking @Thiago Custodio's advice;</p> <p>I enabled logging with:</p> <pre class="lang-py prettyprint-override"><code>import sys logger = logging.getLogger('azure') logger.setLevel(logging.DEBUG) # Configure a console output handler = logging.StreamHandler(stream=sys.stdout) logger.addHandler(handler) # ... search_results = client.search( search_text=search_text, top=TOP, skip=SKIP, query_type=&quot;semantic&quot;, include_total_count=True, logging_enable=True ) # ... 
</code></pre> <p>And I got the following:</p> <pre><code>DEBUG:azure.core.pipeline.policies._universal:Request URL: 'https://search.windows.net//indexes('idx-name')/docs/search.post.search?api-version=2020-06-30' Request method: 'POST' Request headers: 'Content-Type': 'application/json' 'Accept': 'application/json;odata.metadata=none' 'Content-Length': '86' 'x-ms-client-request-id': 'fbaafc9e-qwww-11ed-9117-a69cwa6c72e' 'api-key': '***' 'User-Agent': 'azsdk-python-search-documents/11.3.0 Python/3.11.1 (macOS-13.0-x86_64-i386-64bit)' </code></pre> <p>So this shows the request URL going out is pinned to <code>api-version=2020-06-30</code> - in the Azure Portal, if I change the search version to the same, semantic search is unavailable.</p> <p>I seem to have an outdated version of the search library even though I installed via:</p> <pre><code>pip install azure-search-documents </code></pre> <p>The most notable difference is that in my local <code>azure/search/documents/_generated/operations/_documents_operations.py</code> - the <code>api_version</code> seems to be hardcoded to <code>2020-06-30</code> see:</p> <p><a href="https://i.sstatic.net/4e5U1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4e5U1.png" alt="enter image description here" /></a></p> <p>Looking at the source, I actually need the <code>api_version</code> to be dynamically set, so at the caller I can pass it in the search client. This is something thats already implemented within there main branch of the source, see: <a href="https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/azure/search/documents/_generated/operations/_documents_operations.py#L739" rel="nofollow noreferrer">Source</a>, but for some reason, <em>my local version is different</em></p>
<python><azure>
2023-02-10 10:16:10
2
465
Bob
75,409,645
8,170,368
How to reshape matrices using index instead of shape inputs?
<p>Given an array of shape (8, 3, 4, 4), reshape it into an arbitrary new shape (8, 4, 4, 3) by specifying the new order of the old axes (0, 2, 3, 1).</p>
<python><numpy><matrix>
2023-02-10 10:12:36
2
388
mariogarcc
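What the question describes is an axes permutation rather than a reshape, and numpy exposes it directly: `np.transpose` (or `ndarray.transpose` / `np.moveaxis`) takes the new order of the old axes. A short sketch:

```python
import numpy as np

a = np.arange(8 * 3 * 4 * 4).reshape(8, 3, 4, 4)

# (0, 2, 3, 1): old axis 0 stays first, old axis 2 comes second, etc.
b = np.transpose(a, (0, 2, 3, 1))
print(b.shape)  # (8, 4, 4, 3)

# Unlike reshape, this moves data between positions instead of reflowing
# the same flat memory order:
assert b[0, 1, 2, 0] == a[0, 0, 1, 2]
```

`np.moveaxis(a, 1, -1)` produces the same result here and can read more clearly when only one axis moves.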
75,409,518
5,722,359
How to change the colour of a ttk.Checkbutton when it is !disabled and selected?
<p>How to change the colour of the indicator of <code>ttk.Checkbutton</code> when it is <code>!disabled</code> and <code>selected</code>? The following picture shows it is blueish in colour.</p> <p><a href="https://i.sstatic.net/HM5hJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HM5hJ.png" alt="sample" /></a></p> <p>Typing the following:</p> <pre><code>import tkinter as tk import tkinter.ttk as ttk root = tk.Tk() s = ttk.Style() print(f&quot;{s.map('TCheckbutton', 'indicatorcolor')=}&quot;) </code></pre> <p>returns:</p> <pre><code>s.map('TCheckbutton', 'indicatorcolor')=[('pressed', '#ececec'), ('!disabled', 'alternate', '#9fbdd8'), ('disabled', 'alternate', '#c0c0c0'), ('!disabled', 'selected', '#4a6984'), ('disabled', 'selected', '#a3a3a3')] </code></pre> <p>I tried to change <code>('!disabled', 'selected', '#4a6984')</code> to red color by using</p> <pre><code>s.map('TCheckbutton.indicatorcolor', background=[('pressed', '#ececec'), ('!disabled', 'alternate', 'red'), ('disabled', 'alternate', '#c0c0c0'), # ('!disabled', 'selected', '#4a6984'), # original ('!disabled', 'selected', 'red'), ('disabled', 'selected', '#a3a3a3')] ) </code></pre> <p>and replacing the word <code>background</code> with <code>foreground</code> and even removing the word entirely but these methods failed to work. I also tried the below syntax but to no avail.</p> <pre><code>ss.configure(&quot;TCheckbutton.indicator.&quot;, background=&quot;red&quot;, foreground=&quot;red&quot;, indicatorcolor=&quot;red&quot;) </code></pre>
<python><tkinter><tk-toolkit><ttk>
2023-02-10 10:01:50
1
8,499
Sun Bear
75,409,463
10,035,626
Python does not note every new chrome access log when retrieving target urls from redirect urls
<p>I'm applying the code of <a href="https://stackoverflow.com/a/70869691/10035626">Shahin Shirazi</a> to retrieve the target url of a redirect link using Python. I'm applying this code to multiple redirect links. However, when I run this code, in some cases Python writes the target url of the previous redirect link again. Obviously, Chrome has not updated the history file on time. I already added some sleep time.</p> <pre><code>from bs4 import BeautifulSoup as soup import webbrowser import sqlite3 import pandas as pd import shutil import time import os # ---------------------FILEREADING------------------------------------------ colnames = ['Column1'] Filename_Link = &quot;reflinks.csv&quot; data = pd.read_csv(Filename_Link, names=colnames) links = data.Column1.tolist() # ---------------------Create a new .csv file that we are going to fill with the target urls------------------------------------------ output_data = &quot;output_data.csv&quot; fd = open(output_data, &quot;a&quot;,encoding='utf-8-sig') topline = &quot;url, target_url,&quot; fd.write(topline) fd.write(&quot;\n&quot;) # ---------------------Define the history folder of the browser ------------------------------------------ #source file is where the history of your webbroser is saved, I was using chrome, but it should be the same process if you are using different browser source_file = 'C:\\Users\\xxx\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\History' # could not directly connect to history file as it was locked and had to make a copy of it in different location destination_file = 'C:\\Users\\xxx\\Downloads\\History' # ---------------------Run the code to get target urls for all redirect links ------------------------------------------ for link in links: webbrowser.open(link) time.sleep(30) # there is some delay to update the history file, so 30 sec wait give it enough time to make sure your last url get logged os.system(&quot;taskkill /im chrome.exe /f&quot;) 
shutil.copy(source_file,destination_file) # copying the file. time.sleep(10) # I added some delay to copy the files con = sqlite3.connect('C:\\Users\\xxx\\Downloads\\History') # connecting to browser history cursor = con.execute(&quot;SELECT * FROM urls&quot;) names = [description[0] for description in cursor.description] urls = cursor.fetchall() con.close() df_history = pd.DataFrame(urls,columns=names) last_url = df_history.loc[len(df_history)-1,'url'] print(last_url) fd.write(link.replace(&quot;,&quot;,&quot;&quot;).replace(&quot;\n&quot;,&quot; &quot;)) fd.write(&quot;,&quot;) fd.write(last_url.replace(&quot;,&quot;,&quot;&quot;).replace(&quot;\n&quot;,&quot; &quot;)) fd.write(&quot;,&quot;) fd.write(&quot;\n&quot;) fd.close() </code></pre> <p>Is there any solution to this?</p>
<python><google-chrome><http><url><http-redirect>
2023-02-10 09:57:51
0
561
Scijens
75,409,168
10,613,037
Ignore elements where the condition cannot be met with .mask()
<p>I have the following series:</p> <pre class="lang-py prettyprint-override"><code>s = pd.Series([0,1,'random',2,3,4]) s2 = pd.Series([5,6,7,8,9,10]) </code></pre> <p>How can I use <code>s.mask</code> to return a series where every even number in <code>s</code> is replaced by the corresponding value of <code>s2</code>, and elements in <code>s</code> that can't get evaluated per the condition (e.g. 'random') get ignored?</p> <p>I tried this, which gave a <code>ValueError: Array conditional must be same shape as self</code>:</p> <pre class="lang-py prettyprint-override"><code>def is_even_if_is_number(x): if isinstance(x, int): return x % 2 == 0 return False s.mask(lambda x: is_even_if_is_number(x), s2) </code></pre> <p>I want an output like this:</p> <pre><code>0 5 1 1 2 random 3 8 4 3 5 10 </code></pre>
<python><pandas>
2023-02-10 09:31:25
1
320
meg hidey
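One way to make this work (a sketch): the callable form of `mask` receives the whole Series, not one element, which is why the lambda above raises the shape error. Building the boolean mask separately with `Series.map` runs the check per value:

```python
import pandas as pd

s = pd.Series([0, 1, 'random', 2, 3, 4])
s2 = pd.Series([5, 6, 7, 8, 9, 10])

def is_even_if_is_number(x):
    # bool is a subclass of int, so exclude it explicitly
    return isinstance(x, int) and not isinstance(x, bool) and x % 2 == 0

cond = s.map(is_even_if_is_number).astype(bool)  # element-wise boolean mask
out = s.mask(cond, s2)
print(out.tolist())  # [5, 1, 'random', 8, 3, 10]
```

Values where the condition cannot be evaluated simply yield `False`, so they pass through untouched.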
75,409,082
5,368,122
SQLAlchemy unable to execute query python
<p>I have to connect to an Azure SQL database using Managed Identity. The connection is successful. The code I am using is:</p> <pre><code>self.logging.info(&quot;Connecting to database - connection string: {conn_str}&quot;.format(conn_str=self.conn_str)) credential = ManagedIdentityCredential(client_id = 'xxxxxxxxxxxxx') # TODO parametrize this client id, get from azurekeyvault database_token = credential.get_token(&quot;https://database.windows.net/.default&quot;) tokenstruct = self._bytes2mswin_bstr(database_token.token.encode()) self.engine = create_engine(self.conn_str,connect_args={'attrs_before': {self.SQL_COPT_SS_ACCESS_TOKEN:tokenstruct}}) self.logging.info(&quot;Connection successful&quot;) </code></pre> <p>I see in the logs that connection is successful: <a href="https://i.sstatic.net/7zKx2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7zKx2.png" alt="enter image description here" /></a></p> <p>But when executing a query using read_sql:</p> <pre><code>result_df = pd.read_sql(query, con=self.engine) self.logging.info('Query execution successfull, returned data dimensions: {shape}'.format(shape=result_df.shape)) </code></pre> <p>it gives an error that: <a href="https://i.sstatic.net/jZUYU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jZUYU.png" alt="enter image description here" /></a></p> <p>versions:</p> <pre><code>Name: **SQLAlchemy** Version: 1.4.46 Summary: Database Abstraction Library Home-page: https://www.sqlalchemy.org Author: Mike Bayer Author-email: mike_mp@zzzcomputing.com License: MIT Location: /anaconda/envs/localpy38/lib/python3.8/site-packages Requires: greenlet Required-by: Name: **pandas** Version: 1.5.2 Summary: Powerful data structures for data analysis, time series, and statistics Home-page: https://pandas.pydata.org Author: The Pandas Development Team Author-email: pandas-dev@python.org License: BSD-3-Clause Location: /anaconda/envs/localpy38/lib/python3.8/site-packages Requires: numpy, python-dateutil,
pytz Name: **pyodbc** Version: 4.0.35 Summary: DB API Module for ODBC Home-page: https://github.com/mkleehammer/pyodbc Author: Author-email: License: MIT Location: /anaconda/envs/localpy38/lib/python3.8/site-packages Requires: **ODBC drivers** unixODBC 2.3.7 DRIVERS............: /etc/odbcinst.ini SYSTEM DATA SOURCES: /etc/odbc.ini FILE DATA SOURCES..: /etc/ODBCDataSources USER DATA SOURCES..: /home/azureuser/.odbc.ini SQLULEN Size.......: 8 SQLLEN Size........: 8 SQLSETPOSIROW Size.: 8 </code></pre> <p>I am confused: how can it create the connection first and then complain about the user when executing the query? Also, the solution was working before, but my environment crashed and I had to create a new one. I don't remember the versions in the previous environment.</p>
<python><azure><sqlalchemy><odbc><pyodbc>
2023-02-10 09:22:09
1
844
Obiii
75,408,945
55,408
How to specify index content/entries in Markdown?
<p>Based on the <a href="https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-index" rel="nofollow noreferrer">Sphinx documentation</a>, this is how you specify various index entries (a <code>single</code> entry, for example) in reStructuredText:</p> <pre><code>.. index:: single: execution; context The execution context --------------------- ... </code></pre> <p>When using Myst to do the same in Markdown (and according to its <a href="https://myst-parser.readthedocs.io/en/v0.15.1/syntax/syntax.html#directives-a-block-level-extension-point" rel="nofollow noreferrer">documentation</a>), this should be its equivalent:</p> <pre><code>```{index} single: execution; context ``` </code></pre> <p><code>sphinx-build</code> reports an error: <code>Directive 'index': No content permitted</code>.</p> <p>Since adding content to other Sphinx directives (like <code>toctree</code>) works, my assumption is there is some hard-coded logic in the <code>myst_parser</code> Sphinx extension preventing adding the content for the <code>index</code> directive. Is my assumption correct or is there actually a way to add entries to the index in Markdown?</p> <hr /> <p><strong>UPDATE</strong>: as per <a href="https://stackoverflow.com/a/75411132/55408">Steve's answer</a>, it is possible to put one of the entries directly in the first line, like this:</p> <pre><code>```{index} single: execution; context ``` </code></pre> <p>But then the new question is how to add multiple entries into the same index item, which reStructuredText supports (an example from Sphinx docs):</p> <pre><code>.. index:: single: execution; context module: __main__ module: sys triple: module; search; path </code></pre>
<python><markdown><python-sphinx><restructuredtext><myst>
2023-02-10 09:07:06
1
19,094
Igor Brejc
75,408,909
12,814,680
Extracting values from non empty nested lists, randomly
<p>I have the following list of objects from which I would like to extract random values.</p> <pre><code>l = [ {'A':[],'B':['b1','b2'],'C':[],'D':['d1','d2','d3'],'E':['e1']}, {'A':['a4','a5','a6'],'B':['b6'],'C':['c4','c5','c6'],'D':[],'E':['e4','e5','e6']}, {'A':['a7'],'B':['b7','b8'],'C':['c7','c8','c9'],'D':['d7','d8','d9'],'E':[]}, ] </code></pre> <p>The goal is: For each of the 3 objects in list l, extract one nested value randomly. The result should be a list with 3 values in it. Examples of possible results : res = ['b1','b6','d9'] or res = ['e1','c4','a7']</p> <p>I have tried using random.choice() but the problem is when random lands on an empty list... If it does land on an empty list such as 'A' in the first object, then it has to try again until it lands on a non empty list.</p> <p>How can I achieve this?</p>
<python>
2023-02-10 09:02:46
2
499
JK2018
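One way (a sketch): flatten each object's non-empty lists first and pick once. `random.choice` then can never land on an empty list, and every value stays equally likely instead of retry-looping.

```python
import random

l = [
    {'A': [], 'B': ['b1', 'b2'], 'C': [], 'D': ['d1', 'd2', 'd3'], 'E': ['e1']},
    {'A': ['a4', 'a5', 'a6'], 'B': ['b6'], 'C': ['c4', 'c5', 'c6'], 'D': [], 'E': ['e4', 'e5', 'e6']},
    {'A': ['a7'], 'B': ['b7', 'b8'], 'C': ['c7', 'c8', 'c9'], 'D': ['d7', 'd8', 'd9'], 'E': []},
]

# Flatten all nested values of each object, then choose one of them.
res = [
    random.choice([v for values in obj.values() for v in values])
    for obj in l
]
print(res)  # e.g. ['b1', 'c5', 'd9'], one random value per object
```

If instead you want each non-empty key to be equally likely (rather than each value), first `random.choice` among the non-empty lists, then among that list's values.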
75,408,889
4,875,428
Learning rate for lightgbm with boosting_type = "rf"
<p>In the documentation I could not find anything on whether/how the learning_rate parameter is used with random forest as boosting type in the python lightgbm implementation: <a href="https://lightgbm.readthedocs.io/en/latest/Parameters.html" rel="nofollow noreferrer">https://lightgbm.readthedocs.io/en/latest/Parameters.html</a> -- Intuitively it makes sense to set the learning rate to 1 because there is no iterative approximation of the loss function's gradient with this &quot;boosting&quot; type.</p> <p>In this example, the learning rate is also actually set to 1: <a href="https://github.com/microsoft/LightGBM/issues/691" rel="nofollow noreferrer">https://github.com/microsoft/LightGBM/issues/691</a></p> <p>However, in other examples I found all over the internet it is not, and I haven't seen anybody point this out as a mistake.</p> <p><strong>What is the python lightgbm implementation (<a href="https://pypi.org/project/lightgbm/" rel="nofollow noreferrer">https://pypi.org/project/lightgbm/</a>) actually doing with the learning rate parameter when boosting_type = &quot;rf&quot;?</strong></p>
<python><random-forest><lightgbm>
2023-02-10 09:01:25
1
361
skeletor
75,408,638
7,848,740
Django Python MQTT Subscribe onMessage got executed two times
<p>I have my Mosquitto MQTT broker and I've created a simple Django app that subscribes to the topic <code>$SYS/broker/uptime</code> like below:</p> <pre><code>from django.apps import AppConfig from threading import Thread import paho.mqtt.client as mqtt class MqttClient(Thread): def __init__(self, broker, port, timeout, topics): super(MqttClient, self).__init__() self.client = mqtt.Client() self.broker = broker self.port = port self.timeout = timeout self.topics = topics self.total_messages = 0 # run method override from Thread class def run(self): self.connect_to_broker() def connect_to_broker(self): self.client.on_connect = self.on_connect self.client.on_message = self.on_message self.client.connect(self.broker, self.port, self.timeout) self.client.loop_forever() # The callback for when a PUBLISH message is received from the server. def on_message(self, client, userdata, msg): self.total_messages = self.total_messages + 1 print(str(msg.payload) + &quot;Total: {}&quot;.format(self.total_messages)) # The callback for when the client receives a CONNACK response from the server. def on_connect(self, client, userdata, flags, rc): # Subscribe to a list of topics using a lock to guarantee that a topic is only subscribed once for topic in self.topics: client.subscribe(topic) class AppMqtteConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'app_mqtt' def ready(self): MqttClient(&quot;localhost&quot;, 1883, 60, [&quot;$SYS/broker/uptime&quot;]).start() </code></pre> <p>For some reason, the <code>print</code> statement in the on_message callback gets executed twice, at least from what I'm seeing in the console. See screenshot. I can't understand why.</p> <p><a href="https://i.sstatic.net/QGLNh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QGLNh.png" alt="enter image description here" /></a></p>
<python><django><visual-studio-code><mqtt><mosquitto>
2023-02-10 08:34:44
1
1,679
NicoCaldo
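A likely cause (assuming `runserver` with the autoreloader enabled): Django's dev server imports the project in two processes, so `AppConfig.ready()`, and therefore the subscription, runs twice, and each client receives the retained `$SYS` message. A common guard keys off the `RUN_MAIN` environment variable that the reloader's worker process sets; sketched here as a small helper:

```python
import os

def should_start_client(environ=None):
    """True only in the autoreloader's worker process (RUN_MAIN=true).

    Django's `runserver` starts a watcher process plus a worker; only the
    worker sets RUN_MAIN, so gating on it keeps side effects in
    AppConfig.ready(), like starting an MQTT thread, from running twice.
    """
    if environ is None:
        environ = os.environ
    return environ.get("RUN_MAIN") == "true"

# In AppConfig.ready(), hypothetically:
#     if should_start_client():
#         MqttClient("localhost", 1883, 60, ["$SYS/broker/uptime"]).start()

print(should_start_client({"RUN_MAIN": "true"}))  # True
print(should_start_client({}))                    # False
```

Running with `runserver --noreload` is a quick way to confirm this diagnosis: if the message then prints once, the reloader's double import was the culprit.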
75,408,504
9,198,074
Selenium unable to click button
<p>There is a button at <a href="https://mokivezi.lt/leidiniai" rel="nofollow noreferrer">https://mokivezi.lt/leidiniai</a> called &quot;<strong>Open</strong>&quot;; you can see it under the catalog image.</p> <p>Its element is: <code>&lt;button data-href=&quot;open&quot; aria-label=&quot;Open UAB &amp;quot;Makveža&amp;quot; - Pagrindinis Moki-vezi kaininis leidinys&quot;&gt;Open&lt;/button&gt;</code></p> <p>Using <strong>Python Selenium</strong> and <strong>Edge/Chrome</strong> webdrivers, I am unable to click that button. I tried switching to a different iframe, but I still get the attached error.</p> <pre><code>--------------------------------------------------------------------------- TimeoutException Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_14248\1636534134.py in &lt;module&gt; ----&gt; 1 WebDriverWait(driver, 5).until( 2 EC.element_to_be_clickable((By.XPATH, &quot;//button[text()='Open']&quot;)) 3 ).click() ~\Anaconda3\envs\webscraping\lib\site-packages\selenium\webdriver\support\wait.py in until(self, method, message) 93 if time.monotonic() &gt; end_time: 94 break ---&gt; 95 raise TimeoutException(message, screen, stacktrace) 96 97 def until_not(self, method, message: str = &quot;&quot;): TimeoutException: Message: Stacktrace: Backtrace: Microsoft::Applications::Events::EventProperties::SetProperty [0x00007FF7262B16C2+15186] Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF72624A212+827554] (No symbol) [0x00007FF725F0ED90] (No symbol) [0x00007FF725F52225] (No symbol) [0x00007FF725F523AC] (No symbol) [0x00007FF725F8E087] (No symbol) [0x00007FF725F71F8F] (No symbol) [0x00007FF725F44C3E] (No symbol) [0x00007FF725F8B513] (No symbol) [0x00007FF725F71D23] (No symbol) [0x00007FF725F43B80] (No symbol) [0x00007FF725F42B0E] (No symbol) [0x00007FF725F44344] Microsoft::Applications::Events::EventProperties::SetProperty [0x00007FF72612C3B0+182752] (No symbol) [0x00007FF726000095]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF72618A6EA+42362] Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF72618D425+53941] Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00007FF7264A8AB3+1456595] Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF72625276A+861690] Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF726257854+882404] Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF7262579AC+882748] Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF72626097E+919566] BaseThreadInitThunk [0x00007FFF40C77AD4+20] RtlUserThreadStart [0x00007FFF436CA371+33] </code></pre> <p>What am I doing wrong? To click, I tried using <code>WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, &quot;//button[text()='Open']&quot;))).click()</code></p>
<python><selenium><selenium-webdriver><web-scraping>
2023-02-10 08:19:57
2
1,059
Gedas Miksenas
75,408,317
10,419,999
AWS bulk indexing using gives 'illegal_argument_exception', 'explicit index in bulk is not allowed')
<p>While I am trying to bulk index on AWS Opensearch Service (ElasticSearch V 10.1) using opensearch-py, I am getting the below error:</p> <pre><code>RequestError: RequestError(400, 'illegal_argument_exception', 'explicit index in bulk is not allowed') </code></pre> <pre><code>from opensearchpy.helpers import bulk bulk(client, format_embeddings_for_es_indexing(embd_data, titles_, _INDEX_)) </code></pre> <p>The <strong>format_embeddings_for_es_indexing()</strong> function yields:</p> <pre><code>{ '_index': 'test_v1', '_id': '208387', '_source': { 'article_id': '208387', 'title': 'Battery and Performance', 'title_vector': [ 1.77665558e-02, 1.95874255e-02,.....], ...... } } </code></pre> <p>I am able to index documents one by one using <code>client.index()</code>:</p> <pre><code>failed = {} for document in format_embeddings_for_es_indexing(embd_data, titles_, _INDEX_): res = client.index( **document, refresh = True ) if res['_shards']['failed'] &gt; 0: failed[document[&quot;body&quot;][&quot;article_id&quot;]] = res['_shards'] # document body for open search index { 'index': 'test_v1', 'id': '208387', 'body': { 'article_id': '208387', 'title': 'Battery and Performance', 'title_vector': [ 1.77665558e-02, 1.95874255e-02,.....], ...... } } </code></pre> <p>Please help.</p>
<python><elasticsearch><opensearch><amazon-opensearch>
2023-02-10 07:58:57
1
4,912
Shijith
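A plausible fix (a sketch worth verifying against your domain's settings): this error usually means the cluster rejects per-action index names (`rest.action.multi.allow_explicit_index: false`), so strip `_index` from each action and name the target index once at the request level instead.

```python
def without_explicit_index(actions):
    """Yield bulk actions with the per-document '_index' key removed."""
    for action in actions:
        yield {k: v for k, v in action.items() if k != "_index"}

# Hypothetical call mirroring the question, assuming the bulk helper
# forwards the `index` keyword to the bulk endpoint so no action needs
# its own '_index':
# bulk(client,
#      without_explicit_index(format_embeddings_for_es_indexing(embd_data, titles_, _INDEX_)),
#      index=_INDEX_)

doc = {'_index': 'test_v1', '_id': '208387', '_source': {'article_id': '208387'}}
print(list(without_explicit_index([doc]))[0])
# {'_id': '208387', '_source': {'article_id': '208387'}}
```

If the domain setting can be changed instead, allowing explicit indices in bulk would also make the original call work unmodified.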
75,408,055
16,853,253
Unable to create database when running flask app
<p>I'm new to Flask and I got stuck on one thing. I have a <code>run.py</code> file; inside it:</p> <pre><code>from market import app from market.models import db if __name__ == &quot;__main__&quot;: with app.app_context(): db.create_all() app.run(debug=True) </code></pre> <p>So when I call <code>python run.py</code>, the <code>db.create_all()</code> function works. But when I call <code>flask --app market run</code> or <code>flask --app run.py run</code>, <code>db.create_all()</code> doesn't get executed.</p> <p>Why does this happen?</p>
<python><flask><flask-sqlalchemy>
2023-02-10 07:27:01
3
387
Sins97
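The mechanism here can be demonstrated without Flask at all: `python run.py` executes the module with `__name__ == "__main__"`, so the guarded block runs, while the flask CLI imports the module instead, leaving `__name__` as the module name, so the guard is skipped. A minimal reproduction with `runpy`:

```python
import os
import runpy
import tempfile
import textwrap

# A tiny stand-in for run.py with the same __main__ guard.
src = textwrap.dedent("""
    ran_guard = False
    if __name__ == "__main__":
        ran_guard = True
""")
path = os.path.join(tempfile.mkdtemp(), "mini_run.py")
with open(path, "w") as f:
    f.write(src)

as_script = runpy.run_path(path, run_name="__main__")  # like `python run.py`
as_import = runpy.run_path(path, run_name="mini_run")  # like the flask CLI importing it
print(as_script["ran_guard"], as_import["ran_guard"])  # True False
```

A common fix, assuming the usual Flask layout, is to call `db.create_all()` inside the app factory (or a custom flask CLI command) so it runs regardless of how the server is launched.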
75,407,809
6,457,407
Knowing when you've read everything off a multiprocessing Queue
<p>I have some code that farms out work to tasks. The tasks put their results on a queue, and the main thread reads these results from the queue and deals with them.</p> <pre><code>from multiprocessing import Process, Queue, Pool, Manager import uuid def handle_task(arg, queue, end_marker): ... add some number of results to the queue . . . queue.put(end_marker) def main(tasks): manager = Manager() queue = manager.Queue() count = len(tasks) end_marker = uuid.uuid4() with Pool() as pool: pool.starmap(handle_task, ((task, queue, end_marker) for task in tasks)) while count &gt; 0: value = queue.get() if value == end_marker: count -= 1 else: ... deal with value ... </code></pre> <p>This code works, but it is incredibly kludgy and inelegant. What if <code>tasks</code> is an iterator? Why do I need to know how many tasks there are ahead of time and keep track of each of them?</p> <p>Is there a cleaner way of reading from a Queue and knowing that every process that will write to that queue is done, and you've read everything that they've written?</p>
<python><python-3.x><multiprocessing><queue>
2023-02-10 06:54:41
1
11,605
Frank Yellin
75,407,687
3,380,902
dealing with pip install dependency conflicts
<p>I have ~300 python packages that I am trying to install from a shell script that is configured to run when an instance is created. The script fails due to dependency conflicts.</p> <pre><code>ERROR: Cannot install ipykernel==6.16.2, jupyter-client==7.3.4, nbclassic==0.4.5, nbclient==0.7.0, nest-asyncio==1.5.6, notebook==6.5.1 and sparkmagic==0.20.0 because these package versions have conflicting dependencies. The conflict is caused by: The user requested nest-asyncio==1.5.6 ipykernel 6.16.2 depends on nest-asyncio jupyter-client 7.3.4 depends on nest-asyncio&gt;=1.5.4 nbclassic 0.4.5 depends on nest-asyncio&gt;=1.5 nbclient 0.7.0 depends on nest-asyncio notebook 6.5.1 depends on nest-asyncio&gt;=1.5 sparkmagic 0.20.0 depends on nest-asyncio==1.5.5 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts </code></pre> <p>How do I deal with the conflicts without having to individually update the version numbers for each package? I obtained all the packages by running the <code>pip freeze</code> command.</p>
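pip's first suggestion ("loosen the range of package versions") can be automated: strip the exact pins from the `pip freeze` output so the resolver is free to pick compatible versions. A rough sketch, which assumes simple `name==version` lines (extras, URLs, and environment markers would need more care):

```python
import re

def loosen(requirement_lines):
    # Drop exact '==x.y.z' pins so pip's resolver can choose versions itself.
    return [re.sub(r"==.*$", "", line).strip() for line in requirement_lines]

pinned = ["nest-asyncio==1.5.6", "sparkmagic==0.20.0", "ipykernel==6.16.2"]
print(loosen(pinned))  # ['nest-asyncio', 'sparkmagic', 'ipykernel']
```

Writing the loosened list back to a requirements file and installing from it lets pip resolve the `nest-asyncio` conflict on its own, at the cost of no longer reproducing the exact frozen versions.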
<python><pip>
2023-02-10 06:39:03
1
2,022
kms
75,407,672
8,547,986
split top three rows of pandas into three separate columns
<p>I have a pandas dataframe:</p> <pre><code> clientid date generatedTime feature featurePercentage 0 12345 2022-11-18 00:00:00 2022-11-23 08:58:09 timely_log 1.0 1 12345 2022-11-19 00:00:00 2022-11-24 08:55:46 red 0.822815 2 12345 2022-11-19 00:00:00 2022-11-24 08:55:46 timely_log 0.177185 </code></pre> <p>I need to group this dataframe by <code>clientid</code> and <code>date</code>, and then split the <code>feature</code> and <code>featurePercentage</code> values into separate columns, such that the highest value of <code>featurePercentage</code> is added to a new column called <code>First</code> and the corresponding value in the <code>feature</code> column is added to a column <code>First_feature</code>; similarly, the second-highest value is added to a column <code>Second</code> and the corresponding <code>feature</code> value to <code>Second_feature</code>, and likewise for the top three values. The output should look something like this:</p> <pre><code> clientid date generatedTime First_feature First Second_feature Second Third_feature Third 0 12345 2022-11-18 00:00:00 2022-11-23 08:58:09 timely_log 1.0 None None None None 1 12345 2022-11-19 00:00:00 2022-11-24 08:55:46 red 0.822815 timely_log 0.177185 None None </code></pre>
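The ranking step can be sketched in plain Python to make the target shape concrete (in pandas the same idea maps to `sort_values` + `groupby` + `cumcount` + a pivot, but that mapping is left as an assumption here):

```python
from collections import defaultdict

rows = [
    {"clientid": 12345, "date": "2022-11-18", "generatedTime": "2022-11-23 08:58:09",
     "feature": "timely_log", "featurePercentage": 1.0},
    {"clientid": 12345, "date": "2022-11-19", "generatedTime": "2022-11-24 08:55:46",
     "feature": "red", "featurePercentage": 0.822815},
    {"clientid": 12345, "date": "2022-11-19", "generatedTime": "2022-11-24 08:55:46",
     "feature": "timely_log", "featurePercentage": 0.177185},
]

# group by (clientid, date)
groups = defaultdict(list)
for r in rows:
    groups[(r["clientid"], r["date"])].append(r)

out = []
for (cid, day), grp in groups.items():
    # take the top three rows by featurePercentage, descending
    top = sorted(grp, key=lambda r: r["featurePercentage"], reverse=True)[:3]
    rec = {"clientid": cid, "date": day, "generatedTime": grp[0]["generatedTime"]}
    for label, row in zip(["First", "Second", "Third"], top):
        rec[f"{label}_feature"] = row["feature"]
        rec[label] = row["featurePercentage"]
    out.append(rec)

print(out[1])  # second group: red is First, timely_log is Second, no Third
```

Missing ranks are simply absent keys here; when building the final DataFrame they would be filled with `None` to match the desired output.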
<python><pandas>
2023-02-10 06:36:56
2
1,923
monte
75,407,605
4,470,126
Pyspark - Converting a stringtype nested json to columns in dataframe
<p>I am working on processing CDC data received via Kafka topics, and loading it into Databricks Delta tables. I am able to get it all working, except for a nested JSON string which is not getting loaded when using from_json, spark.read.json.</p> <p>When I try to fetch schema of the json from level 1, using &quot;spark.read.json(df.rdd.map(lambda row: row.value)).schema&quot;, the column INPUT_DATA is considered as string loaded as a string object. I am giving a sample JSON string, the code that I tried, and the expected results.</p> <p>I have many topics to process and each topic will have a different schema, so I would like to process dynamically, and do not prefer to store the schemas, since the schema may change over time, and I would like to have my code handle the changes automatically.</p> <p>I'd appreciate any help, as I have spent the whole day trying to figure this out. Thanks in advance.</p> <p>Sample JSON with nested tree:</p> <pre><code>after = { &quot;id_transaction&quot;: &quot;121&quot;, &quot;product_id&quot;: 25, &quot;transaction_dt&quot;: 1662076800000000, &quot;creation_date&quot;: 1662112153959000, &quot;product_account&quot;: &quot;40012&quot;, &quot;input_data&quot;:
&quot;{\&quot;amount\&quot;:[{\&quot;type\&quot;:\&quot;CASH\&quot;,\&quot;amount\&quot;:1000.00}],\&quot;currency\&quot;:\&quot;USD\&quot;,\&quot;coreData\&quot;:{\&quot;CustId\&quot;:11021,\&quot;Cust_Currency\&quot;:\&quot;USD\&quot;,\&quot;custCategory\&quot;:\&quot;Premium\&quot;},\&quot;context\&quot;:{\&quot;authRequired\&quot;:false,\&quot;waitForConfirmation\&quot;:false,\&quot;productAccount\&quot;:\&quot;CA12001\&quot;},\&quot;brandId\&quot;:\&quot;TOYO-2201\&quot;,\&quot;dealerId\&quot;:\&quot;1\&quot;,\&quot;operationInfo\&quot;:{\&quot;trans_Id\&quot;:\&quot;3ED23-89DKS-001AA-2321\&quot;,\&quot;transactionDate\&quot;:1613420860087},\&quot;ip_address\&quot;:null,\&quot;last_executed_step\&quot;:\&quot;PURCHASE_ORDER_CREATED\&quot;,\&quot;last_result\&quot;:\&quot;OK\&quot;,\&quot;output_dataholder\&quot;:\&quot;{\&quot;DISCOUNT_AMOUNT\&quot;:\&quot;0\&quot;,\&quot;BONUS_AMOUNT_APPLIED\&quot;:\&quot;10000\&quot;}&quot;, &quot;dealer_id&quot;: 1, &quot;dealer_currency&quot;: &quot;USD&quot;, &quot;Cust_id&quot;: 11021, &quot;process_status&quot;: &quot;IN_PROGRESS&quot;, &quot;tot_amount&quot;: 10000, &quot;validation_result_code&quot;: &quot;OK_SAVE_AND_PROCESS&quot;, &quot;operation&quot;: &quot;Create&quot;, &quot;timestamp_ms&quot;: 1675673484042 } </code></pre> <p>I have created following script to get all the columns of the json structure:</p> <pre><code>import json # table_column_schema = {} json_keys = {} child_members = [] table_column_schema = {} column_schema = [] dbname = &quot;mydb&quot; tbl_name = &quot;tbl_name&quot; def get_table_keys(dbname): table_values_extracted = &quot;select value from {mydb}.{tbl_name} limit 1&quot; cmd_key_pair_data = spark.sql(table_values_extracted) jsonkeys=cmd_key_pair_data.collect()[0][0] json_keys = json.loads(jsonkeys) column_names_as_keys = json_keys[&quot;after&quot;].keys() value_column_data = json_keys[&quot;after&quot;].values() column_schema = list(column_names_as_keys) for i in value_column_data: if 
(&quot;{&quot; in str(i) and &quot;}&quot; in str(i)): a = json.loads(i) for i2 in a.values(): if (str(i2).startswith(&quot;{&quot;) and str(i2).endswith('}')): column_schema = column_schema + list(i2.keys()) table_column_schema['temp_table1'] = column_schema return 0 get_table_keys(&quot;dbname&quot;) </code></pre> <p>The following code is used to process the json and create a dataframe with all nested jsons as the columns:</p> <pre><code>from pyspark.sql.functions import from_json, to_json, col from pyspark.sql.types import StructType, StructField, StringType, LongType, MapType import time dbname = &quot;mydb&quot; tbl_name = &quot;tbl_name&quot; start = time.time() df = spark.sql(f'select value from {mydb}.{tbl_name} limit 2') tbl_columns = table_column_schema[tbl_name] data = [] for i in tbl_columns: if i == 'input_data': # print('FOUND !!!!') data.append(StructField(f'{i}', MapType(StringType(),StringType()), True)) else: data.append(StructField(f'{i}', StringType(), True)) schema2 = spark.read.json(df.rdd.map(lambda row: row.value)).schema print(type(schema2)) df2 = df.withColumn(&quot;value&quot;, from_json(&quot;value&quot;, schema2)).select(col('value.after.*'), col('value.op')) </code></pre> <p>Note: The VALUE is a column in my delta table (bronze layer)</p> <p>Current dataframe output: <a href="https://i.sstatic.net/UMuKy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UMuKy.png" alt="enter image description here" /></a></p> <p>Expected dataframe output: <a href="https://i.sstatic.net/99FHN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/99FHN.png" alt="enter image description here" /></a></p>
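The core difficulty — `input_data` being a JSON-encoded string inside the JSON document — can be isolated from Spark. A hedged sketch of recursively decoding any string values that themselves contain JSON objects or arrays (in PySpark, a second `from_json` pass with a schema inferred from the decoded strings would produce the same result):

```python
import json

def decode_nested(value):
    # Recursively parse string values that themselves contain JSON objects/arrays.
    if isinstance(value, dict):
        return {k: decode_nested(v) for k, v in value.items()}
    if isinstance(value, list):
        return [decode_nested(v) for v in value]
    if isinstance(value, str):
        try:
            inner = json.loads(value)
        except ValueError:
            return value
        # only replace the string if it decoded to a structure, not a scalar
        return decode_nested(inner) if isinstance(inner, (dict, list)) else value
    return value

record = {"product_id": 25,
          "input_data": '{"currency": "USD", "coreData": {"CustId": 11021}}'}
parsed = decode_nested(record)
print(parsed["input_data"]["coreData"]["CustId"])  # 11021
```

Note that the sample `input_data` above also contains `output_dataholder`, whose inner quotes are not escaped; that fragment is not valid JSON as pasted and would fall through the `except ValueError` branch unchanged.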
<python><json><pyspark><databricks>
2023-02-10 06:25:56
1
3,213
Yuva
75,407,467
8,481,155
Apache Beam exit pipeline on condition Python SDK
<p>I have an Apache Beam pipeline which reads from a BigQuery table and does some processing. The Dataflow job would be triggered by a Cloud Function. My requirement is to check a date column in the BigQuery table it reads at the first step, and stop the pipeline from proceeding to the next stages if the date is the same as today.</p> <pre><code>data = ( pipeline | beam.io.ReadFromBigQuery(query=''' SELECT date, unique_key, case_number FROM `bigquery-public-data.chicago_crime.crime` LIMIT 100 ''', use_standard_sql=True) # Further data processing ) count = (data | beam.Filter(lambda line : line['date'] == datetime.now()) | beam.combiners.Count.Globally() ) # Further data processing </code></pre> <p>The 'data' PCollection is my actual processing. The solution I had in mind was creating a PCollection 'count' which checks if the date is the same as today. But how can I add logic to check if the count is greater than 0 and then exit the pipeline, logging the necessary information?</p> <p>Or is there a better way to do this instead?</p>
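One alternative (an assumption, not a Beam-specific feature) is to move the date check into the Cloud Function itself: run a cheap pre-check query first and only launch the Dataflow job when no row is from today. The guard can be sketched as:

```python
from datetime import date

def should_launch(rows, today=None):
    # rows: results of a lightweight pre-check query against the table
    today = today or date.today()
    return not any(r["date"] == today for r in rows)

rows = [{"date": date(2023, 2, 10)}, {"date": date(2023, 2, 9)}]
print(should_launch(rows, today=date(2023, 2, 10)))  # False: a row is from today
print(should_launch(rows, today=date(2023, 2, 11)))  # True: safe to launch
```

Inside a running pipeline there is no clean "exit early" primitive; using the count as a side input to a `beam.Filter` on the main branch is a common pattern, but it skips the downstream work rather than stopping the job.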
<python><python-3.x><google-cloud-dataflow><apache-beam>
2023-02-10 06:04:11
1
701
Ashok KS
75,407,432
14,847,960
How would I add concatenate items to a master dataframe that I receive from a requests.get()
<p>I have an api call that is returning the date and adjusted close prices for given tickers in a for loop.</p> <p>There are 1100 unique tickers x 252 days, and I want to create a dataframe that's 1100x252 with the index as date.</p> <p>The problem is that I can only query the api one ticker at a time, and it returns the below (a sample of the first several rows), which is for AAPL:</p> <pre><code>[{'date': '2020-01-02T00:00:00.000Z', 'adjClose': 73.4677943274}, {'date': '2020-01-03T00:00:00.000Z', 'adjClose': 72.7535410914}, {'date': '2020-01-06T00:00:00.000Z', 'adjClose': 73.3332603275}, {'date': '2020-01-07T00:00:00.000Z', 'adjClose': 72.9883640731}, {'date': '2020-01-08T00:00:00.000Z', 'adjClose': 74.1624789816}, {'date': '2020-01-09T00:00:00.000Z', 'adjClose': 75.7377498172}, {'date': '2020-01-10T00:00:00.000Z', 'adjClose': 75.908974908}] </code></pre> <p>What I am trying to create is a loop that extracts the <code>adjClose</code>, and merges them into a master dataframe, of sorts.</p> <p>I am currently looping through via:</p> <pre><code>tickers = list(data.ticker.unique()) for ticker in tickers: api_call = requests.get(f'api_call_from_site.com/stonkz&amp;ticker={ticker}') df = pd.DataFrame(api_call.json()) </code></pre> <p>I then want to concatenate the <code>adjClose</code> from each specific api call to said &quot;master dataframe,&quot; but I have no idea where to start. This would look like the below:</p> <p>NOTE: values for AAPL (and other tickers) would resemble above.</p> <pre><code>date AAPL TSLA AMD NVDA etc 2020-01-02 75 110 65 205 100 2020-01-03 76 111 66 206 101 2020-01-04 77 112 67 207 102 2020-01-05 78 113 68 208 103 2020-01-06 79 114 69 209 104 2020-01-07 80 115 70 210 105 2020-01-08 81 116 71 211 106 2020-01-09 82 117 72 212 107 2020-01-10 83 118 73 213 108 </code></pre> <p>Any and all help is appreciated, and thank you in advance.</p>
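The per-ticker merge can be sketched without pandas: accumulate a `{date: {ticker: adjClose}}` mapping across the loop, then hand it to a DataFrame at the end. (With pandas, collecting each call as a Series named after the ticker and finishing with `pd.concat(series_list, axis=1)` has the same shape.)

```python
def add_to_table(table, ticker, rows):
    # rows: the list of {'date': ..., 'adjClose': ...} dicts from one API call
    for row in rows:
        day = row["date"][:10]              # trim the 'T00:00:00.000Z' suffix
        table.setdefault(day, {})[ticker] = row["adjClose"]
    return table

table = {}
add_to_table(table, "AAPL", [{"date": "2020-01-02T00:00:00.000Z", "adjClose": 73.47}])
add_to_table(table, "TSLA", [{"date": "2020-01-02T00:00:00.000Z", "adjClose": 28.68}])
print(table)  # {'2020-01-02': {'AAPL': 73.47, 'TSLA': 28.68}}
```

From that mapping, `pd.DataFrame.from_dict(table, orient='index')` yields the date-indexed frame with one column per ticker.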
<python><pandas><python-requests>
2023-02-10 05:58:02
1
324
jd_h2003
75,407,365
2,588,860
Iterating over pyspark df becomes slower every iteration
<p>If I have a spark DF, it is my understanding that the DF is not really materialized until it is persisted or something similar, and that it is theoretically &quot;just an explain plan&quot; until said thing happens. For example, if I have <code>df</code> and remove some rows from it, until the DF needs to be materialized, it really just is <code>df - subset1</code>, and if I remove some more rows, it is <code>df - subset1 - subset2</code>.</p> <p>I have a use case in which this is done in a <code>for</code> loop, and every iteration consistently took at least twice as long as the previous one. After some research, doing this at the end of every loop fixed the problem:</p> <pre><code>data = data.rdd.toDF(schema) </code></pre> <p>In my case, that line takes about 1 minute; <code>data</code> is originally 10MM rows with 30 string columns and decreases by 1MM rows every iteration.</p> <ol> <li>Am I understanding this correctly?</li> <li>Is there a better way of doing this?</li> </ol>
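A toy model of why the loop slows down: each iteration wraps the previous plan, and an unmaterialized plan re-executes the whole chain every time it is used. "Materializing" — which is what `df.rdd.toDF(schema)` effectively forces, and what `checkpoint()` or `persist()` plus an action do more idiomatically — collapses the chain:

```python
data = list(range(10))

# lazy "plan": each transformation closes over the previous one
plan = lambda: data
for _ in range(3):
    prev = plan
    plan = (lambda p=prev: [x for x in p() if x % 2 == 0])  # plan grows each loop

# every call walks the whole chain, like re-evaluating a long Spark lineage
result = plan()
print(result)  # [0, 2, 4, 6, 8]

# "materialize": store the result and start a fresh, flat plan from it
materialized = result
plan = lambda: materialized
```

In Spark terms, truncating the lineage each iteration keeps the per-iteration cost roughly constant instead of compounding; `df.checkpoint()` (with a checkpoint directory set) is the usual way to do this without the round-trip through the RDD API.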
<python><pyspark>
2023-02-10 05:42:39
0
2,159
rodrigocf
75,407,351
13,115,571
How to use environment variable in odoo custom modules
<p>I installed Odoo on Ubuntu 20.04 and set up environment variables in <code>/etc/environment</code>. However, the Odoo service runs as the <code>odoo</code> user, so when I try to read the variables in an Odoo custom module using <code>os.environ.get()</code>, I don't get anything. How can I solve that?</p>
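`/etc/environment` is read by PAM for login sessions; a service started by systemd does not go through that path, which is one plausible reason `os.environ.get()` returns nothing here. Setting the variable in the service's unit file (`Environment=KEY=value` under `[Service]`, then reloading) is the usual fix. As a fallback, the file can be parsed directly — a rough sketch, assuming simple `KEY=VALUE` lines:

```python
import os
import tempfile

def load_env_file(path="/etc/environment"):
    # Parse simple KEY=VALUE lines; comments and blank lines are skipped.
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip().strip('"')
    return values

# demo against a temporary file instead of the real /etc/environment
demo = tempfile.NamedTemporaryFile("w", suffix=".env", delete=False)
demo.write('# comment\nMY_SETTING="hello"\nOTHER=1\n')
demo.close()
env = load_env_file(demo.name)
print(env)  # {'MY_SETTING': 'hello', 'OTHER': '1'}
os.unlink(demo.name)
```

Inside the module, `env.get("MY_SETTING")` would then work regardless of how the service was launched; `MY_SETTING` is a placeholder name for illustration.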
<python><environment-variables><odoo><ubuntu-20.04>
2023-02-10 05:40:23
2
430
Neural
75,407,200
4,348,400
Does PriorityQueue call sorted every time get is called?
<p>The docs for <a href="https://docs.python.org/3/library/queue.html#queue.PriorityQueue" rel="nofollow noreferrer"><code>queue.PriorityQueue</code></a> say:</p> <blockquote> <p>The lowest valued entries are retrieved first (the lowest valued entry is the one returned by <code>sorted(list(entries))[0]</code>).</p> </blockquote> <p>Does that mean that every time I use the <code>get</code> method of a <code>PriorityQueue</code>, the elements are re-sorted? That would seem inefficient.</p>
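No — in CPython, `queue.PriorityQueue` is backed by a binary heap (`heapq`), so the `sorted(list(entries))[0]` phrasing only specifies *which* element is returned, not the mechanism. `get()` is a `heappop` (O(log n) per call) and `put()` a `heappush`; nothing is fully re-sorted. The underlying operations:

```python
import heapq

entries = [5, 1, 4, 2, 3]
heapq.heapify(entries)             # O(n): establish the heap invariant in place
smallest = heapq.heappop(entries)  # O(log n): sift, not a full re-sort
heapq.heappush(entries, 0)         # O(log n) insert
next_up = heapq.heappop(entries)

print(smallest)  # 1
print(next_up)   # 0
```

This matches the stated invariant: the heap always yields the element that `sorted(...)[0]` would, without ever sorting the whole collection.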
<python><performance><sorting><priority-queue>
2023-02-10 05:11:31
1
1,394
Galen