Dataset columns (observed min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 chars
QuestionBody: string, 40 to 40.3k chars
Tags: string, 8 to 101 chars
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 chars
75,585,578
223,837
How to configure 'TLS1.2 only' in OpenSSL 1.0.2 config file?
<p>I would like to update the configuration of OpenSSL 1.0.2 (specifically 1.0.2k-fips as found on AWS's Amazon Linux 2 AMIs), so that any client using OpenSSL refuses TLSv1.1, TLSv1, or anything else below TLSv1.2.</p> <p>I have learned that for OpenSSL 1.1+, in the OpenSSL config file (e.g., /etc/pki/tls/openssl.cnf on Amazon Linux 2, or /usr/lib/ssl/openssl.cnf on Debian derivatives, or whatever <code>$OPENSSL_CONF</code> points to), one can specify <code>openssl_conf</code> -&gt; a section with <code>ssl_conf</code> -&gt; a section with <code>system_default</code> -&gt; a section with <code>MinProtocol=TLSv1.2</code>.</p> <p>However, that <code>ssl_conf</code> syntax is unknown in OpenSSL 1.0.2k, and instead it tries to load <code>libssl_conf.so</code>, which fails because that shared library does not exist.</p> <p>So <em>my question:</em> Is it possible to configure OpenSSL 1.0.2 to fail if one tries to use TLSv1.1 or below? At least if the <code>openssl</code> binary tries, or any Python code that I don't control using <a href="https://docs.python.org/3.7/library/ssl.html" rel="noreferrer">the ssl module for Python 3.9 or lower</a>?</p> <hr /> <p>Additional information: At least on Amazon Linux 2 with OpenSSL 1.0.2k-fips, using <code>grep</code> I cannot even find the string MinProtocol in any OpenSSL 1.0.2 related binary or shared library. (But it <em>does</em> occur in an OpenSSL 1.1.1s <code>libssl.so.1.1</code> that is shipped with an agent I happened to have on that same AL2 system.)</p> <p>So that confirms my suspicion that the answer to my question is: No, this is not possible.</p>
<python><openssl><tls1.2><configuration-files>
2023-02-27 21:07:21
4
9,789
MarnixKlooster ReinstateMonica
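A note on the OpenSSL question above: while OpenSSL 1.0.2 has no `MinProtocol`-style config directive, Python code one controls can still refuse old protocols, because the underlying `SSL_OP_NO_TLSv1` / `SSL_OP_NO_TLSv1_1` options do exist in OpenSSL 1.0.2 and are exposed by the `ssl` module. This is a per-application workaround, not the system-wide setting the question asks for, so it does not change the asker's conclusion. A minimal sketch:

```python
import ssl

# Build a context that refuses anything below TLS 1.2. The OP_NO_*
# flags map onto OpenSSL's SSL_OP_NO_* options, which exist in
# OpenSSL 1.0.2, unlike the MinProtocol config-file directive.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3
ctx.options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1
```

The limitation matches the question: this only helps for code whose `SSLContext` you can construct; third-party Python code using its own defaults is unaffected.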
75,585,475
2,908,017
How to make a Python FMX GUI Form jump to the front?
<p>I have a window <code>Form</code> that is created with the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI library for Python</a>.</p> <p>I want to know if there is a specific method that I can call to send the Form to the top above all other forms and/or windows from other apps.</p> <p>Here's my current code. I want to make sure the <code>NewForm</code> is sent to the top when the button is clicked via the <code>Button_OnClick</code> method:</p> <pre><code>from delphifmx import * class HelloForm(Form): def __init__(self, owner): self.Caption = 'Hello World' self.Width = 1000 self.Height = 500 self.Position = &quot;ScreenCenter&quot; self.myButton = Button(self) self.myButton.Parent = self self.myButton.Text = &quot;Bring other form to front&quot; self.myButton.Align = &quot;Client&quot; self.myButton.Margins.Top = 20 self.myButton.Margins.Right = 20 self.myButton.Margins.Bottom = 20 self.myButton.Margins.Left = 20 self.myButton.StyledSettings = &quot;&quot; self.myButton.TextSettings.Font.Size = 50 self.myButton.onClick = self.Button_OnClick self.NewForm = Form(self) self.NewForm.Width = 1000 self.NewForm.Height = 500 self.NewForm.Show() def Button_OnClick(self, sender): # What code do I write here to send to front? def main(): Application.Initialize() Application.Title = &quot;Hello World&quot; Application.MainForm = HelloForm(Application) Application.MainForm.Show() Application.Run() Application.MainForm.Destroy() main() </code></pre>
<python><windows><user-interface><window><firemonkey>
2023-02-27 20:54:38
1
4,263
Shaun Roselt
75,585,462
12,734,492
Pyspark: List subfolders names in AWS S3 folder
<p>Main folder: s3_path = &quot;s3a://prices/xml_inputs/stores/&quot;</p> <p>My try:</p> <pre><code>from pyspark.sql import SparkSession from pyspark import SparkContext import os from dotenv import load_dotenv # Load environment variables from the .env file load_dotenv() AWS_ACCESS_KEY_ID = os.getenv(&quot;AWS_ACCESS_KEY_ID&quot;) AWS_SECRET_ACCESS_KEY = os.getenv(&quot;AWS_SECRET_ACCESS_KEY&quot;) # Create the SparkSession spark = SparkSession.builder \ .appName(&quot;price-xml-s3&quot;) \ .config(&quot;spark.jars.packages&quot;, &quot;com.databricks:spark-xml_2.12:0.12.0&quot;) \ .config(&quot;spark.sql.execution.arrow.enabled&quot;, &quot;true&quot;) \ .getOrCreate() # Set S3 configuration spark.sparkContext._jsc.hadoopConfiguration().set(&quot;com.amazonaws.services.s3.enableV4&quot;, &quot;true&quot;) spark.sparkContext._jsc.hadoopConfiguration().set(&quot;fs.s3a.impl&quot;, &quot;org.apache.hadoop.fs.s3a.S3AFileSystem&quot;) spark.sparkContext._jsc.hadoopConfiguration().set(&quot;fs.s3a.aws.credentials.provider&quot;, &quot;com.amazonaws.auth.InstanceProfileCredentialsProvider,com.amazonaws.auth.DefaultAWSCredentialsProviderChain&quot;) spark.sparkContext._jsc.hadoopConfiguration().set(&quot;fs.AbstractFileSystem.s3a.impl&quot;, &quot;org.apache.hadoop.fs.s3a.S3A&quot;) spark.sparkContext._jsc.hadoopConfiguration().set(&quot;fs.s3a.access.key&quot;, AWS_ACCESS_KEY_ID) spark.sparkContext._jsc.hadoopConfiguration().set(&quot;fs.s3a.secret.key&quot;, AWS_SECRET_ACCESS_KEY) spark.sparkContext._jsc.hadoopConfiguration().set(&quot;fs.s3a.endpoint&quot;, &quot;s3.us-east-1.amazonaws.com&quot;) # Define the S3 path to the main folder s3_path = &quot;s3a://prices/xml_inputs/stores/&quot; # Get the FileSystem object fs = SparkContext.getOrCreate()._jvm.org.apache.hadoop.fs.FileSystem.get(spark.sparkContext._jsc.hadoopConfiguration()) # List the subfolders subfolders = [path.toString() for path in fs.listStatus(spark._jvm.org.apache.hadoop.fs.Path(s3_path)) if 
path.isDirectory()] # Print the subfolder names for subfolder in subfolders: print(subfolder) </code></pre> <p>In output I get :</p> <pre><code>Traceback (most recent call last): File &quot;C:\xxx\xxx\xxx.py&quot;, line 36, in &lt;module&gt; subfolders = [path.toString() for path in fs.listStatus(spark._jvm.org.apache.hadoop.fs.Path(s3_path)) if path.isDirectory()] : Wrong FS: s3a://prices/xml_inputs/stores, expected: file:/// </code></pre> <p>When I try to export a single file the path is OK but when I try to list subfolders I get an error.</p>
<python><amazon-web-services><apache-spark><amazon-s3><pyspark>
2023-02-27 20:53:05
0
487
Galat
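On the "Wrong FS: s3a://... expected: file:///" error above: `FileSystem.get(conf)` returns Hadoop's *default* filesystem, which is the local `file:///` one, so it refuses an `s3a://` path. A fix known from the Hadoop API is to ask the `Path` itself for its filesystem. This is an untested fragment (it assumes the question's live `spark` session, the S3A jars, and `s3_path`):

```python
# Untested sketch against a live SparkSession. FileSystem.get(conf)
# returns the *default* filesystem (file:///), which is exactly why
# listStatus rejects an s3a:// path with "Wrong FS". Asking the Path
# for its own filesystem yields an S3AFileSystem instead.
jvm = spark._jvm
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()

root = jvm.org.apache.hadoop.fs.Path(s3_path)
fs = root.getFileSystem(hadoop_conf)

# listStatus returns FileStatus objects; getPath()/isDirectory() are
# the Hadoop FileStatus accessors.
subfolders = [status.getPath().toString()
              for status in fs.listStatus(root)
              if status.isDirectory()]
for subfolder in subfolders:
    print(subfolder)
```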
75,585,244
733,583
Handling data skewness in pyspark rangeBetween
<p>I am trying to calculate count, sum, and average over a rolling window using rangeBetween in pyspark. Some of the mid values in my data are heavily skewed, which is why the computation is taking too long.</p> <p>I am first grouping the data at the epoch level and then applying the window function. This reduces the compute time, but it is still taking longer than expected.</p> <p>Can someone please suggest how to optimize this code, or any other approach?</p> <p>I am using the below code:</p> <pre><code>def get_window_spec(partitioncol, ordercol, start, end): return Window.partitionBy(partitioncol).orderBy(ordercol).rangeBetween(start, end) group_cols = [&quot;mid&quot;] midstatsdf = filtereddatadf.groupBy(group_cols + [&quot;epochtime&quot;])\ .agg(count(&quot;*&quot;).alias(&quot;mid_epoch_lvl_cnt&quot;), \ sum(&quot;amount&quot;).alias(&quot;mid_epoch_lvl_sum&quot;))\ .withColumn(&quot;mid_occur_num_1day&quot;, \ F.sum(col(&quot;mid_epoch_lvl_cnt&quot;)).over(get_window_spec(group_cols,&quot;epochtime&quot;,-24*60*60, 0))) \ .withColumn(&quot;mid_sum_1day&quot;, \ F.sum(col(&quot;mid_epoch_lvl_sum&quot;)).over(get_window_spec(group_cols,&quot;epochtime&quot;,-24*60*60, 0)))\ .withColumn(&quot;mid_avg_1day&quot;, \ col(&quot;mid_sum_1day&quot;)/col(&quot;mid_occur_num_1day&quot;))\ .drop(&quot;mid_epoch_lvl_cnt&quot;,&quot;mid_epoch_lvl_sum&quot;) </code></pre>
<python><apache-spark><pyspark><window-functions>
2023-02-27 20:27:40
0
2,002
Kundan Kumar
75,584,975
13,578,682
Generating combinations in order of their sum
<p>Itertools combinations seem to come out in lexicographic order:</p> <pre><code>&gt;&gt;&gt; for c in combinations([9,8,7,2,2,1], 2): ... print(c, sum(c)) ... (9, 8) 17 (9, 7) 16 (9, 2) 11 (9, 2) 11 (9, 1) 10 (8, 7) 15 (8, 2) 10 (8, 2) 10 (8, 1) 9 (7, 2) 9 (7, 2) 9 (7, 1) 8 (2, 2) 4 (2, 1) 3 (2, 1) 3 </code></pre> <p>But I want to generate them in order of greatest sum:</p> <pre><code>&gt;&gt;&gt; for c in sorted(combinations([9,8,7,2,2,1], 2), key=sum, reverse=True): ... print(c, sum(c)) ... (9, 8) 17 (9, 7) 16 (8, 7) 15 (9, 2) 11 (9, 2) 11 (9, 1) 10 (8, 2) 10 (8, 2) 10 (8, 1) 9 (7, 2) 9 (7, 2) 9 (7, 1) 8 (2, 2) 4 (2, 1) 3 (2, 1) 3 </code></pre> <p>Actually generating them all and sorting is not possible because the result is too large (this is a smaller example).</p> <p>I thought it might be possible to modify the recipe offered in the <a href="https://docs.python.org/3/library/itertools.html#itertools.combinations" rel="nofollow noreferrer">docs</a> as &quot;roughly equivalent&quot; python code:</p> <pre><code>def combinations(iterable, r): pool = tuple(iterable) n = len(pool) if r &gt; n: return indices = list(range(r)) yield tuple(pool[i] for i in indices) while True: for i in reversed(range(r)): if indices[i] != i + n - r: break else: return indices[i] += 1 for j in range(i+1, r): indices[j] = indices[j-1] + 1 yield tuple(pool[i] for i in indices) </code></pre> <p>But could not figure it out.</p> <p>If it helps, <code>r</code> will always be 2 and the input (~10k elements) is small enough that it could be sorted to a monotonically non-increasing list.</p> <p>There's also a rejection criteria that is expensive to compute. So it's not just the first combo which is selected, but the largest combo (by sum) that was not rejected after checking another condition. The vast majority of combos are rejected - we'll have to go through about 0.1% of them before finding one that is not rejected.</p>
<python><combinations><python-itertools><combinatorics>
2023-02-27 19:52:47
1
665
no step on snek
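For the combinations-by-sum question above: since `r` is always 2, the pairs can be generated lazily in non-increasing order of sum with a heap over a descending-sorted pool, so only as many combinations are materialized as the rejection loop actually inspects. A sketch (the function name is mine):

```python
import heapq

def pairs_by_sum_desc(items):
    """Yield all 2-combinations of items in non-increasing order of sum.

    With the pool sorted in descending order, each pair (i, j) with
    i < j dominates (i, j+1), and (i, i+1) dominates (i+1, i+2), so a
    max-heap seeded with (0, 1) enumerates pairs in sum order lazily.
    """
    a = sorted(items, reverse=True)
    n = len(a)
    if n < 2:
        return
    # Max-heap via negated sums; entries are (-sum, i, j).
    heap = [(-(a[0] + a[1]), 0, 1)]
    while heap:
        _, i, j = heapq.heappop(heap)
        yield (a[i], a[j])
        if j + 1 < n:
            heapq.heappush(heap, (-(a[i] + a[j + 1]), i, j + 1))
        # Open the next row exactly once, when its dominating pair is used.
        if j == i + 1 and j + 1 < n:
            heapq.heappush(heap, (-(a[j] + a[j + 1]), j, j + 1))
```

The heap grows by at most one entry per yielded pair, so retrieving the first k pairs costs O(k log k) time and O(k) memory, which suits the stated case where only roughly 0.1% of combinations are examined before one passes the rejection check.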
75,584,842
9,531,146
Python Lambda Function returning KeyError
<p>Currently trying to create a (simple) Lambda function in AWS using Python 3.8:</p> <pre><code>import json import urllib3 def lambda_handler(event, context): status_code = 200 array_of_rows_to_return = [ ] http = urllib3.PoolManager() try: event_body = event[&quot;body&quot;] payload = json.loads(event_body) rows = payload[&quot;data&quot;] for row in rows: row_number = row[0] from_currency = row[1] to_currency = row[2] response = http.request('GET','https://open.er-api.com/v6/latest/'+from_currency) response_data = response.data.decode('utf8').replace(&quot;'&quot;, '&quot;') data = json.loads(response_data) exchange_rate_value = data['rates'][to_currency] output_value = [exchange_rate_value] row_to_return = [row_number, output_value] array_of_rows_to_return.append(row_to_return) json_compatible_string_to_return = json.dumps({&quot;data&quot; : array_of_rows_to_return}) except Exception as err: status_code = 400 json_compatible_string_to_return = event_body return { 'statusCode': status_code, 'body': json_compatible_string_to_return } </code></pre> <p>When I deploy/attempt to test the function from within Lambda, I receive the following error/output message:</p> <p><strong>&quot;errorMessage&quot;: &quot;local variable 'event_body' referenced before assignment&quot;, &quot;errorType&quot;: &quot;UnboundLocalError&quot;</strong></p> <p>I don't think it's my code as the instructor in the tutorial was able to run his successfully without any changes. 
Could someone please help me determine what may be going on?</p> <p>Event_JSON below:</p> <pre><code>{ &quot;data&quot;: [ [ 0, &quot;USD&quot;, &quot;INR&quot; ] ] } </code></pre> <p><strong>Update: The Lambda function now returns the following:</strong></p> <pre><code>{ &quot;statusCode&quot;: 200, &quot;body&quot;: &quot;{\&quot;data\&quot;: [[0, [82.920013]]]}&quot; } </code></pre> <p>However, when I am trying to test the Amazon API Gateway implementation using the sample 'Request Body', I am receiving the following output:</p> <pre><code>{&quot;resource&quot;: &quot;/&quot;, &quot;path&quot;: &quot;/&quot;, &quot;httpMethod&quot;: &quot;POST&quot;, &quot;headers&quot;: null, &quot;multiValueHeaders&quot;: null, &quot;queryStringParameters&quot;: null, &quot;multiValueQueryStringParameters&quot;: null, &quot;pathParameters&quot;: null, &quot;stageVariables&quot;: null, &quot;requestContext&quot;: {&quot;resourceId&quot;: &quot;[removed]&quot;, &quot;resourcePath&quot;: &quot;/&quot;, &quot;httpMethod&quot;: &quot;POST&quot;, &quot;extendedRequestId&quot;: &quot;[requestid]&quot;, &quot;requestTime&quot;: &quot;27/Feb/2023:25:13:38 +0000&quot;, &quot;path&quot;: &quot;/&quot;, &quot;accountId&quot;: &quot;12345678&quot;, &quot;protocol&quot;: &quot;HTTP/1.1&quot;, &quot;stage&quot;: &quot;test-invoke-stage&quot;, &quot;domainPrefix&quot;: &quot;testPrefix&quot;, &quot;requestTimeEpoch&quot;: 36018142, ....{&quot;cognitoIdentityPoolId&quot;: null, &quot;cognitoIdentityId&quot;: null, &quot;apiKey&quot;: .......&quot;cognitoAuthenticationProvider&quot;: null, &quot;user&quot;: &quot;1234567&quot;}, .... &quot;body&quot;: &quot;{\r\n \&quot;data\&quot;:\r\n [\r\n [0,\&quot;USD\&quot;, \&quot;INR\&quot;]\r\n ]\r\n }&quot;, &quot;isBase64Encoded&quot;: false} </code></pre> <p>Request body below:</p> <pre><code>{ &quot;data&quot;: [ [0,&quot;USD&quot;, &quot;INR&quot;] ] } </code></pre>
<python><amazon-web-services><aws-lambda>
2023-02-27 19:37:39
1
765
John Wick
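On the Lambda question above: the `UnboundLocalError` is a secondary failure. `event[&quot;body&quot;]` raises `KeyError` when the function is tested from the Lambda console, because a console test event is the bare payload, whereas an API Gateway proxy event wraps the payload as a JSON string under `"body"`; the `except` block then reads `event_body` before it was ever assigned. A sketch of a body extraction that tolerates both event shapes (the helper name `extract_rows` is mine):

```python
import json

def extract_rows(event):
    """Return payload["data"] whether the event is the bare payload
    (a Lambda console test event) or an API Gateway proxy event whose
    "body" field holds the payload as a JSON string."""
    body = event.get("body")
    if isinstance(body, str):
        payload = json.loads(body)   # API Gateway proxy integration
    elif isinstance(body, dict):
        payload = body               # already-parsed body
    else:
        payload = event              # console test event: the payload itself
    return payload["data"]
```

With this, the handler can also initialize its fallback (`json_compatible_string_to_return`) before the `try` block, so the `except` path never references an unassigned name.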
75,584,813
13,342,062
What does the sequence #' mean in Python source code
<p>In <code>os.py</code> there is a strange marker <code>#'</code> under the docstring and just before the imports:</p> <pre><code>&gt;&gt;&gt; import os &gt;&gt;&gt; os?? File: /usr/local/lib/python3.11/os.py Source: r&quot;&quot;&quot;OS routines for NT or Posix depending on what system we're on. This exports: - all functions from posix or nt, e.g. unlink, stat, etc. - os.path is either posixpath or ntpath - os.name is either 'posix' or 'nt' - os.curdir is a string representing the current directory (always '.') - os.pardir is a string representing the parent directory (always '..') - os.sep is the (or a most common) pathname separator ('/' or '\\') - os.extsep is the extension separator (always '.') - os.altsep is the alternate pathname separator (None or '/') - os.pathsep is the component separator used in $PATH etc - os.linesep is the line separator in text files ('\r' or '\n' or '\r\n') - os.defpath is the default search path for executables - os.devnull is the file path of the null device ('/dev/null', etc.) Programs that import and use 'os' stand a better chance of being portable between different platforms. Of course, they must then only use functions that are defined by all platforms (e.g., unlink and opendir), and leave all pathname manipulation to os.path (e.g., split and join). &quot;&quot;&quot; #' import abc import sys import stat as st from _collections_abc import _check_methods GenericAlias = type(list[int]) ... </code></pre> <p>Does the #' have some special meaning there? I assume that if nobody removed it in over 20 years of eyes on this file, it must be there for a good reason (some macro, or some kind of marker for a text processor, but I could not find what). And, like the <code>#!</code>, it is difficult to search for information about.</p> <p>What is the point of that <code>#'</code> comment?</p>
<python>
2023-02-27 19:34:46
1
323
COVFEFE-19
75,584,773
303,704
Call native .NET interfaces from Python in a client-server architecture
<p>There is a game called Kerbal Space Program (KSP), written using the Unity game engine in C#. It has a rich modding scene. Some mods add new features and game object types with new APIs.</p> <p>There is one mod, called <a href="https://github.com/krpc/krpc" rel="nofollow noreferrer">kRPC</a>, which allows for remote procedure calls of Kerbal's and its mods' APIs in various languages, including Python, through a client-server architecture - the running Kerbal game being the server. The mod works, but has some drawbacks:</p> <ul> <li>it has stalled development which stopped at KSP v 1.5.1 (at least that's the version I was able to get working with my code)</li> <li>it requires writing definitions and a small amount of wrapper code for each of Kerbal's or its mods' methods to be able to call them from the client</li> </ul> <p>I suspect/assume the second drawback is also the main reason why the development has stalled and covers only a handful of the game's functionality, as with each breaking update or a new mod, the definitions and wrappers need to be fixed and/or extended.</p> <p>I am mainly interested in the Python language interface kRPC provides. I was wondering if there was a better way to achieve similar functionality, but without the need to re-define all the game's interfaces anew. What I would like to achieve:</p> <ul> <li>keep the client-server architecture, but make it very thin and lightweight</li> <li>write code in Python that queries or changes the state of the game using its native interfaces</li> <li>execute said code in the game and return the results in case of a query</li> <li>if the result is a reference to an object, reuse that reference for a subsequent call (e.g.
call 1 queries the spaceship object on the launchpad and returns a reference to it, code 2 references the spaceship and starts its engine when a condition is met within the context of the client)</li> </ul> <p>What solution would you design for such functionality and what technologies (and how) would you use?</p> <p>Would you think of a completely different solution to remove the second drawback of the kRPC mod?</p> <p>After doing some searching, I've found <a href="https://github.com/pythonnet/pythonnet" rel="nofollow noreferrer">pythonnet</a>, which seems it would solve the direct calling of the native game interfaces from within Python. I am not sure how to make the client-server part working seamlessly with .NET technologies.</p>
<python><c#><.net><unity-game-engine><client-server>
2023-02-27 19:29:18
1
870
Dalibor Frivaldsky
75,584,636
2,627,487
Fortran-like negative indexing in numpy for physics and math equations
<p>In contrast to most languages that use zero-based indexing, Fortran allows specifying a custom index range:</p> <pre><code>! array with 10 elements double precision arr(-8:1) arr(-8) = 0 ! 1st element arr( 0) = 1 ! second to last element arr( 1) = -1 ! last element </code></pre> <p>The same can be done in the C language by setting a pointer to the 9th element:</p> <pre class="lang-c prettyprint-override"><code>int _arr[10]; int *arr = &amp;_arr[8]; arr[-8] = 0; arr[ 0] = 1; arr[ 1] = -1; </code></pre> <p>Is it possible to do the same with numpy arrays? This would be awesome for the implementation of math equations that often feature negative indices, since it would make the code more readable and less error-prone, compared to the standard approach of offsetting indices.</p> <h4>One not very satisfying solution</h4> <p>One can overload the <code>__getitem__</code> method of <code>ndarray</code>, but in my opinion it is (1) cumbersome considering all the ways a numpy array can be indexed; (2) it adds noticeable overhead to the index operator; and (3) it is just dangerous to subclass <code>ndarray</code></p>
<python><arrays><numpy><performance><indexing>
2023-02-27 19:10:38
0
1,290
MrPisarik
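One common compromise for the numpy question above, short of subclassing `ndarray`, is a thin wrapper that stores the lower bound and shifts indices on access. A minimal sketch mirroring the Fortran `arr(-8:1)` example, supporting only scalar integer indices (slices and fancy indexing would need extra work; the class name is mine):

```python
import numpy as np

class OffsetArray:
    """Wrapper mapping indices lo..hi onto a zero-based numpy array,
    similar to Fortran's `double precision arr(-8:1)`. Only scalar
    integer indexing is handled in this sketch."""

    def __init__(self, lo, hi, dtype=float):
        self.lo = lo
        self.data = np.zeros(hi - lo + 1, dtype=dtype)

    def __getitem__(self, i):
        return self.data[i - self.lo]      # shift into zero-based storage

    def __setitem__(self, i, value):
        self.data[i - self.lo] = value

# Usage mirroring the Fortran example:
arr = OffsetArray(-8, 1)
arr[-8] = 0   # first element
arr[0] = 1    # second to last element
arr[1] = -1   # last element
```

The underlying buffer stays a plain `ndarray` (`arr.data`), so vectorized operations remain available on it; only the per-element index translation pays the Python-level overhead the question worries about.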
75,584,567
16,053,370
How to search and delete specific lines from a parquet file in pyspark? (data purge)
<p>I'm starting a project to adapt the data lake for targeted data purges, to comply with data privacy legislation.</p> <p>Basically the owner of the data opens a ticket requesting the deletion of records for a specific user, and I need to sweep all AWS S3 buckets, checking all parquet files, and delete that specific record from every parquet file in my data lake.</p> <p>Has anyone developed a similar project in python or pyspark?</p> <p>Can you suggest what would be good market practice for this case?</p> <p>Today what I'm doing is reading all the parquet files, loading them into a dataframe, filtering that dataframe to exclude the target record, and rewriting the partition where that record was. This solution works, but for a purge that needs to look at a 5-year history, the processing is very heavy.</p> <p>Can anyone suggest a more practical solution?</p> <p>Remember that my parquet files are in AWS S3, there are Athena tables on top of them, and my script will run on EMR (PySpark).</p> <p>Thanks</p>
<python><amazon-web-services><pyspark><parquet>
2023-02-27 19:03:07
2
373
Carlos Eduardo Bilar Rodrigues
75,583,940
5,356,096
Python skips __next__ directive and returns a generator object
<p>I'm trying to implement an iterable class, which I have done several times before, but I'm experiencing some unexpected behavior this time around, and I can't figure out why.</p> <p>My class contains the usual <code>__iter__(self)</code> method that returns <code>self</code>, and <code>__next__(self)</code> method that yields results, however, when I attempt to do the following:</p> <pre class="lang-py prettyprint-override"><code>with VSIFile(params) as vsi: for roi in vsi: print(roi) </code></pre> <p>The <code>roi</code> is in fact a generator object instead of a yielded result. After going into debug, I found that <code>__next__</code> never triggers, only <code>__iter__</code>. I tested making an iterator with a simple number counting class and that one works well.</p> <p>I expect <code>roi</code> to be a numpy array.</p> <p>Here's the full code:</p> <p><strong>vsi_file.py</strong></p> <pre class="lang-py prettyprint-override"><code>from typing import Tuple import javabridge import bioformats from tqdm import tqdm from cv2 import resize javabridge.start_vm(class_path=bioformats.JARS) class VSIFile: def __init__(self, vsi_file: str, roi_size: Tuple[int, int] = (1024, 1024), target_size: Tuple[int, int] = (256, 256), use_pbar: bool = True): self.file_path = vsi_file self.roi_size = roi_size self.target_size = target_size self.slide = None self.shape = None self.max_x_idx = None self.max_y_idx = None self.num_rois = None self.skip = [1, 2, 5, 11, 22, 45, 72] if use_pbar: self.pbar = tqdm() else: self.pbar = None def __enter__(self): self.slide = bioformats.ImageReader(self.file_path) self.shape = self.slide.rdr.getSizeY(), self.slide.rdr.getSizeX(), 3 self.max_x_idx = self.shape[1] // self.roi_size[1] self.max_y_idx = self.shape[0] // self.roi_size[0] self.num_rois = self.max_x_idx * self.max_y_idx if self.pbar is not None: self.pbar.total = self.num_rois self.pbar.refresh() return self def __exit__(self, exc_type, exc_val, exc_tb): self.slide.close() if 
self.pbar is not None: self.pbar.close() def __del__(self): self.slide.close() if self.pbar is not None: self.pbar.close() def __iter__(self): return self def __next__(self): while self.idx in self.skip: if self.idx == self.max_x_idx * self.max_y_idx: if self.pbar is not None: self.pbar.close() raise StopIteration self.idx += 1 if self.pbar is not None: self.pbar.update(1) if self.idx == self.max_x_idx * self.max_y_idx: if self.pbar is not None: self.pbar.close() raise StopIteration y = (self.idx // self.max_x_idx) * self.roi_size[0] x = (self.idx % self.max_x_idx) * self.roi_size[1] roi = self.get_roi(x, y, self.roi_size[0], self.roi_size[1]) roi = resize(roi, self.target_size) if self.target_size else roi yield roi self.idx += 1 if self.pbar is not None: self.pbar.update(1) </code></pre> <p><strong>process_vsi.py</strong> (relevant portion)</p> <pre class="lang-py prettyprint-override"><code>with VSIFile(os.path.abspath(os.path.join(data_dir, file))) as vsi: for roi in vsi: print(roi) </code></pre> <p>This prints <code>&lt;generator object VSIFile.__next__ at 0x000001AA3C4AFC80&gt;</code>.</p>
<python><iterator><python-3.8>
2023-02-27 17:53:42
1
1,665
Jack Avante
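The behavior in the iterator question above comes from a Python rule: any function whose body contains `yield` is a generator function, so the `__next__` shown returns a brand-new generator object on every call instead of the next value, and that object is exactly what the `for` loop prints. The usual fix is to move the yield-based logic into `__iter__` and drop `__next__` entirely (or keep `__next__` but use `return` plus `StopIteration`). A minimal sketch of the first pattern (class name is mine):

```python
class Squares:
    """Iterable demonstrating the fix: because `yield` turns a function
    into a generator function, the lazy per-item logic belongs in
    __iter__ (which then IS a generator), not in __next__."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        # Skip/progress-bar bookkeeping from the question would live here,
        # as local state of the generator, instead of instance attributes.
        for i in range(self.n):
            yield i * i
```

For example, `list(Squares(4))` yields the values `[0, 1, 4, 9]` rather than generator objects; applied to `VSIFile`, replacing `__iter__`/`__next__` with a single generator-style `__iter__` makes `for roi in vsi:` produce the numpy arrays directly.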
75,583,912
19,694,624
Can't run Chromedriver in headless mode using Selenium
<p>Running my Selenium script, I ran into an error that I don't know how to get past. I've googled and tried various solutions but they didn't work. Here is my code:</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.options import Options chrome_driver_binary = '/home/user/Projects/myproject/chromedriver' options = Options() options.add_argument(&quot;--start-maximized&quot;) options.add_argument(&quot;--no-sandbox&quot;) options.add_argument(&quot;--disable-dev-shm-usage&quot;) options.add_argument(&quot;--headless&quot;) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('useAutomationExtension', False) driver = webdriver.Chrome(executable_path=chrome_driver_binary, options = options) driver.get('http://www.ubuntu.com/') </code></pre> <p>Every time I run this I get the error &quot;The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.&quot;</p> <p>However, if I comment out the line &quot;options.add_argument(&quot;--headless&quot;)&quot; my code works perfectly fine. But I can't leave that commented out because I need that script to run in headless mode.</p> <p>I am using Ubuntu 22.04 on VirtualBox 7, Python 3.10.6, and run my code in a virtual environment.</p>
<python><python-3.x><google-chrome><selenium-webdriver><google-chrome-headless>
2023-02-27 17:50:35
1
303
syrok
75,583,768
5,446,749
Tell pip package to install build dependency for its own install and all install_requires installs
<p>I am installing a package whose dependency needs to import <code>numpy</code> inside its <code>setup.py</code>. It also needs <code>Cython</code> to correctly build this dependency. This dependency is <code>scikit-learn==0.21.2</code>. Here is the <code>setup.py</code> of my own package called <code>mypkgname</code>:</p> <pre class="lang-py prettyprint-override"><code>from setuptools import find_packages, setup import Cython # to check that Cython is indeed installed import numpy # to check that numpy is indeed installed setup( name=&quot;mypkgname&quot;, version=&quot;0.1.0&quot;, packages=find_packages(&quot;src&quot;, exclude=[&quot;tests&quot;]), package_dir={&quot;&quot;: &quot;src&quot;}, install_requires=[ &quot;scikit-learn==0.21.2&quot; ], ) </code></pre> <p>To make sure that <code>numpy</code> and <code>Cython</code> are available inside mypkgname's <code>setup.py</code> when <code>pip</code> installs <code>mypkgname</code>, I set up the <code>pyproject.toml</code> like this:</p> <pre class="lang-ini prettyprint-override"><code>[build-system] requires = [&quot;setuptools&gt;=40.8.0&quot;, &quot;Cython&quot;, &quot;numpy&gt;=1.11.0,&lt;=1.22.4&quot;, &quot;wheel&quot;] build-backend = &quot;setuptools.build_meta&quot; </code></pre> <p>After running <code>pip install -e .</code>, the <code>import numpy; import Cython</code> in mypkgname's <code>setup.py</code> work, but the <code>import Cython</code> inside the <code>scikit-learn==0.21.2</code> install does not:</p> <pre><code> File &quot;/home/vvvvv/.pyenv/versions/3.8.12/envs/withingswpm04-38/lib/python3.8/site-packages/numpy/distutils/misc_util.py&quot;, line 1016, in get_subpackage config = self._get_configuration_from_setup_py( File &quot;/home/vvvvv/.pyenv/versions/3.8.12/envs/withingswpm04-38/lib/python3.8/site-packages/numpy/distutils/misc_util.py&quot;, line 958, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File &quot;sklearn/utils/setup.py&quot;, line 8, in 
configuration from Cython import Tempita ModuleNotFoundError: No module named 'Cython' error: subprocess-exited-with-error </code></pre> <p>I don't understand why <code>Cython</code> is available for the install of my own mypkgname, but not for the <code>install_requires</code> packages of mypkgname. It is as if <code>Cython</code> were uninstalled(?) before the <code>install_requires</code> packages are installed. But judging from the logs of <code>pip install -v</code>, that does not seem to be the case.</p> <hr /> <p>I tried installing <code>setuptools</code> and <code>Cython</code> beforehand:</p> <pre class="lang-bash prettyprint-override"><code>pip install setuptools Cython pip install -e . </code></pre> <p>and it works. Indeed, it seems that <code>scikit-learn==0.21.2</code> needs these 2 packages to be installed beforehand in order to build properly. However, the scikit-learn version I am trying to install does not specify any <code>Cython</code> build requirements inside a <code>pyproject.toml</code>. Here is a link to the <a href="https://github.com/scikit-learn/scikit-learn/blob/0.21.2/setup.py" rel="nofollow noreferrer"><code>setup.py</code></a> of the scikit-learn package.</p> <p>If I just install <code>setuptools</code>, it still fails with the same error as in the first example (<code>ModuleNotFoundError: No module named 'Cython'</code>).</p>
I want the <code>numpy</code>, <code>Cython</code> and <code>setuptools</code> packages to be available in the install of <code>mypkgname</code> and in every install of the <code>install_requires</code> packages.</p> <p>Since it will be a package deployed on pypi, I don't want people to have anything other than <code>pip</code> and maybe <code>setuptools</code> already installed when running <code>pip install mypkgname</code>.</p>
<python><pip><cython><setup.py><pyproject.toml>
2023-02-27 17:35:25
2
32,794
vvvvv
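A likely explanation for the pip behavior above: pip builds each sdist from `install_requires` in its *own* isolated build environment, containing only that project's declared build requirements. Since scikit-learn 0.21.2 ships no `pyproject.toml`, its isolated build environment gets only the default setuptools and wheel; the Cython and numpy declared in mypkgname's `pyproject.toml` apply to mypkgname's build only and never reach sklearn's. One workaround consistent with what the asker observed (it changes the install procedure, not the packaging, so plain `pip install mypkgname` from PyPI would still need the same steps):

```shell
# Seed the real environment first, then tell pip not to build the
# scikit-learn sdist in an isolated environment (isolation would hide
# the just-installed Cython and numpy again). Pins mirror the question.
pip install "setuptools>=40.8.0" wheel Cython "numpy>=1.11.0,<=1.22.4"
pip install --no-build-isolation "scikit-learn==0.21.2"
pip install -e .
```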
75,583,746
161,816
How do I PDF-export one or more Confluence Datacenter/Server spaces?
<p>How do I export one or more Confluence spaces to PDF based on a search of all available spaces? Information is scarce, so I am making this a Q&amp;A to help others.</p> <p>I have read through a maze of API deprecations, replacements, and problem reports, and I understand that Confluence still does not allow PDF export through a modern RESTful API, only through its long-unsupported SOAP API. In 2023.</p> <p>Some of the more useful content I have read includes:</p> <p><a href="https://jira.atlassian.com/browse/CONFSERVER-9901" rel="nofollow noreferrer">https://jira.atlassian.com/browse/CONFSERVER-9901</a> <a href="https://community.atlassian.com/t5/Confluence-questions/RPC-Confluence-export-fails-with-TYPE-PDF/qaq-p/269310" rel="nofollow noreferrer">https://community.atlassian.com/t5/Confluence-questions/RPC-Confluence-export-fails-with-TYPE-PDF/qaq-p/269310</a> <a href="https://developer.atlassian.com/server/confluence/remote-api-specification-for-pdf-export/" rel="nofollow noreferrer">https://developer.atlassian.com/server/confluence/remote-api-specification-for-pdf-export/</a></p> <p>The following SO example is similar to what is needed, but it does not search spaces, which requires a different endpoint as of sometime before June 2015. Use of Ruby and PHP would also represent the introduction of a new language on my team, and we prefer to stick with C#, Python, and in emergency conditions, Java. <a href="https://stackoverflow.com/questions/21487525/how-to-export-a-confluence-space-to-pdf-using-remote-api">How to export a Confluence &quot;Space&quot; to PDF using remote API</a></p>
<python><python-3.x><soap><confluence><export-to-pdf>
2023-02-27 17:32:55
1
10,682
Charles Burns
75,583,649
13,231,896
How to check if two polygons have internal points in common, with geodjango and postgis
<p>I am using GeoDjango with a PostGIS backend. Given two polygons, I want to check whether they overlap, i.e. whether they have interior points in common. If we check</p> <pre><code>A.function(B) </code></pre> <p>In the following picture, &quot;Example 1&quot; would be False, &quot;Example 2&quot; would be False (because they only have edges in common), and &quot;Example 3&quot; would be True, because they have interior points in common. If both polygons are equal, the function would return True as well.</p> <p><a href="https://i.sstatic.net/Z4o45.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4o45.png" alt="Examples" /></a></p>
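A predicate that matches all three examples is the DE-9IM pattern `'T********'` ("the interiors of the two geometries intersect"). The sketch below uses Shapely rather than the asker's GeoDjango models (an assumption for illustration; `box` and `interiors_intersect` are names I introduced):

```python
# Sketch, not the asker's GeoDjango code: Shapely exposes the same DE-9IM
# machinery as PostGIS ST_Relate. Pattern 'T********' means "the interiors
# of the two geometries intersect": False for disjoint or edge-touching
# polygons, True for overlapping or equal polygons.
from shapely.geometry import box

a = box(0, 0, 2, 2)
touching = box(2, 0, 4, 2)     # shares only an edge with a  -> False
overlapping = box(1, 0, 3, 2)  # shares interior points      -> True

def interiors_intersect(g1, g2):
    return g1.relate_pattern(g2, "T********")
```

In GeoDjango the equivalent would presumably be the `relate` spatial lookup, roughly `Model.objects.filter(poly__relate=(other_poly, 'T********'))`, which issues `ST_Relate` on the PostGIS side (check the GeoDjango docs for the exact lookup syntax).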
<python><django><gis><postgis><geodjango>
2023-02-27 17:23:29
1
830
Ernesto Ruiz
75,583,269
12,932,447
Update a specific array element if it fulfils at least one of several conditions
<p>I am trying to understand (and fix) some code that is not mine, which uses PyMongo.</p> <p>We want to look for the document with <code>_id==_id</code> that has, inside its list <code>comments</code>, a comment with <code>id==comment_id</code> or <code>id==PyObjectId(comment_id)</code>. To that comment we want to add an answer. In code:</p> <pre class="lang-py prettyprint-override"><code>await db_collection.update_one( filter=query, update=update, array_filters=array_filters ) </code></pre> <p>where</p> <pre class="lang-py prettyprint-override"><code>query = { &quot;_id&quot;: _id, &quot;$or&quot;: [ {&quot;comments&quot;: {&quot;$elemMatch&quot;: {&quot;id&quot;: comment_id}}}, {&quot;comments&quot;: {&quot;$elemMatch&quot;: {&quot;id&quot;: PyObjectId(comment_id)}}}, ], } update = { &quot;$set&quot;: { &quot;comments.$[cmt].answer&quot;: jsonable_encoder(reply) } } array_filters = [{&quot;cmt.id&quot;: comment_id}] </code></pre> <p>My problem is that <code>array_filters</code> only checks whether <code>id==comment_id</code> and not also whether <code>id == PyObjectId(comment_id)</code>, like the query does. This way, when I have a <code>PyObjectId</code> as the id, no item is updated.</p> <p>I guess I should modify <code>array_filters</code> with something like</p> <pre class="lang-py prettyprint-override"><code>array_filters = [{&quot;cmt.id&quot;: comment_id}, {&quot;cmt.id&quot;: PyObjectId(comment_id)}] </code></pre> <p>or</p> <pre class="lang-py prettyprint-override"><code>array_filters = [{&quot;$or$&quot;: [{ &quot;cmt.id&quot;: comment_id}, {&quot;cmt.id&quot;: PyObjectId(comment_id)}]} </code></pre> <p>but sadly I can only test my code in production, and I'm trying to understand how this actually works before breaking things.</p> <p>Thank you all!</p>
<python><mongodb><pymongo>
2023-02-27 16:45:57
1
875
ychiucco
75,583,141
2,615,160
Conda-build fails on test with missing dependency
<p>For some reason I cannot build a conda package. My package is published on PyPI and installable with</p> <p><code>pip install molcomplib</code></p> <p>To build a conda package I do:</p> <ol> <li>I run grayskull <code>grayskull pypi molcomplib</code></li> <li>I run <code>conda build -c conda-forge molcomplib</code></li> </ol> <p>It looks like <code>rdkit</code> is installed during the process</p> <p><a href="https://i.sstatic.net/rUM7n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUM7n.png" alt="enter image description here" /></a></p> <p>But finally, after a long build stage, I get:</p> <p><code>molcomplib 1.0.1 requires rdkit, which is not installed.</code></p> <p>This is strange, because <code>rdkit</code> is a direct dependency in my meta.yaml and cannot simply be ignored by <code>conda</code></p> <pre><code>import: 'molcomplib' import: 'molcomplib' + pip check molcomplib 1.0.1 requires rdkit, which is not installed. Tests failed for molcomplib-1.0.1-py_0.tar.bz2 - moving package to /home/sergeyadmin/miniconda3/conda-bld/broken </code></pre>
<python><anaconda><conda><conda-build>
2023-02-27 16:34:27
0
1,434
Sergey Sosnin
75,583,115
469,476
Parsing / reformatting a tokenized list in python
<p>I have lists of tokens of the form &quot;(a OR b) AND c OR d AND c&quot; or &quot;(a OR b) AND c OR (d AND c)&quot;</p> <p>I want to reformat these tokens into a string of the form: Expressions are of arbitrary form like (a or b) and c or (d and c).</p> <p>Write python to reformat the expressions as: {{or {and {or a b} c} {and d c}}}</p> <p>I have code that works for some token lists, but not for others:</p> <pre><code>def parse_expression(tokens): if len(tokens) == 1: return tokens[0] # Find the top-level operator (either 'and' or 'or') parens = 0 for i in range(len(tokens) - 1, -1, -1): token = tokens[i] if token == ')': parens += 1 elif token == '(': parens -= 1 elif parens == 0 and token in {'AND', 'OR'}: op = token break else: print('Invalid expression') # Recursively parse the sub-expressions left_tokens = tokens[:i] right_tokens = tokens[i+1:] print(&quot;{i} left {left_tokens}&quot;) print(&quot;{i} right {right_tokens}&quot;) if op == 'AND': left = parse_expression(left_tokens) right = parse_expression(right_tokens) return f'(and {left} {right})' else: left = parse_expression(left_tokens) right = parse_expression(right_tokens) return f'(or {left} {right})' x=list() x = ['x', 'AND', 'y', 'AND', 'z', 'AND', '(', '(', 'a', 'AND', 'b', ')', 'OR', '(', 'c', 'AND', 'd', ')', ')'] y = ['x', 'AND', 'y', 'AND', 'z', 'AND', '(', 'w', 'AND', 'q', ')'] </code></pre> <p>It seems to work without parenthesis, but not when I use them.</p> <p>When I try to reformat these with the parser, I keep getting</p> <pre><code>Traceback (most recent call last): File &quot;./prog.py&quot;, line 41, in &lt;module&gt; File &quot;./prog.py&quot;, line 29, in parse_expression File &quot;./prog.py&quot;, line 27, in parse_expression UnboundLocalError: local variable 'op' referenced before assignment </code></pre> <p>What am I doing wrong?</p>
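The crash happens when the recursion is handed a token list that is entirely wrapped in parentheses (for example the right side <code>( ( a AND b ) OR ( c AND d ) )</code> of the first split): every operator then sits at paren depth &gt; 0, the scan finds no top-level operator, and <code>op</code> is never assigned. A sketch of one way to fix it, stripping redundant outer parentheses before splitting (variable names are mine, not from the post):

```python
def parse_expression(tokens):
    # Strip outer parentheses that wrap the WHOLE expression, e.g.
    # ['(', 'a', 'AND', 'b', ')'] -> ['a', 'AND', 'b'].  Without this,
    # no operator sits at depth 0 and the original code never assigns `op`.
    while len(tokens) >= 2 and tokens[0] == '(' and tokens[-1] == ')':
        depth = 0
        for tok in tokens[:-1]:
            if tok == '(':
                depth += 1
            elif tok == ')':
                depth -= 1
            if depth == 0:   # the opening paren closed before the end,
                break        # so it does not wrap the whole expression
        else:
            tokens = tokens[1:-1]
            continue
        break

    if len(tokens) == 1:
        return tokens[0]

    # Find the rightmost top-level (depth 0) operator, scanning right to left.
    depth = 0
    split = None
    for i in range(len(tokens) - 1, -1, -1):
        if tokens[i] == ')':
            depth += 1
        elif tokens[i] == '(':
            depth -= 1
        elif depth == 0 and tokens[i] in ('AND', 'OR'):
            split = i
            break
    if split is None:
        raise ValueError(f'invalid expression: {tokens}')

    left = parse_expression(tokens[:split])
    right = parse_expression(tokens[split + 1:])
    return f'({tokens[split].lower()} {left} {right})'

x = ['x', 'AND', 'y', 'AND', 'z', 'AND', '(', '(', 'a', 'AND', 'b', ')',
     'OR', '(', 'c', 'AND', 'd', ')', ')']
y = ['x', 'AND', 'y', 'AND', 'z', 'AND', '(', 'w', 'AND', 'q', ')']
```

With this change both of the asker's sample token lists parse; raising `ValueError` instead of printing "Invalid expression" also avoids the follow-on `UnboundLocalError`.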
<python><parsing><boolean><expression><reformatting>
2023-02-27 16:32:01
1
1,994
elbillaf
75,583,072
2,784,828
Use file input(stdin) in VS Code debug argument
<p>I have a script that accepts multiple arguments and takes a file input(stdin) as one of the last argument. Tried debugging the script in VS Code debugger. But the file input arg is not working. The script doesn't understand the fourth argument in the launch.json. here's what I have in my launch.json file:</p> <pre><code>{ &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python: Current File&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${file}&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;justMyCode&quot;: true, &quot;args&quot;: [ &quot;arg1&quot;, &quot;arg2&quot;, &quot;arg3&quot;, &quot;''&quot;, &quot;&lt;&quot;, &quot;Path/to/file/test.json&quot; ] } ] } </code></pre> <p>I basically put the arguments the same way as I run in the console which is this-</p> <p><code>python main.py arg1 arg2 arg3 '' &lt; Path/to/file/test.json</code></p> <p>Is there a different way vscode debugger takes an argument?</p>
<python><visual-studio-code><debugging>
2023-02-27 16:27:39
1
1,025
saz
75,582,995
912,757
Python AES256-CBC, decryption with Rust OpenSSL
<p>I'm using this OpenSSL call to symmetrically encrypt a file</p> <pre><code>key = secrets.token_bytes(32) iv = secrets.token_bytes(16) encrypted_file = file_path + &quot;.enc&quot; subprocess.run( [ &quot;openssl&quot;, &quot;enc&quot;, &quot;-aes-256-cbc&quot;, &quot;-K&quot;, key.hex(), &quot;-iv&quot;, iv.hex(), &quot;-in&quot;, file_path, &quot;-out&quot;, encrypted_file, ] ) </code></pre> <p>and this Rust code to decrypt it.</p> <pre><code>let mut ctx = CipherCtx::new().expect(&quot;Can't build CipherCtx&quot;); ctx.decrypt_init(Some(Cipher::aes_256_cbc()), Some(&amp;key), Some(&amp;iv)).unwrap(); let mut data = vec![]; archive.read_to_end(&amp;mut data).expect(&quot;Can't read encrypted archive file&quot;); let mut bytes = vec![]; ctx.cipher_update_vec(&amp;data, &amp;mut bytes).unwrap(); ctx.cipher_final_vec(&amp;mut bytes).unwrap(); let mut decrypted_file = tempfile().expect(&quot;Can't create temporary file&quot;); decrypted_file.write_all(&amp;bytes).expect(&quot;Can't write decrypted archive file to disk&quot;); </code></pre> <p><strong>It does work</strong>. But I want to use the Python <code>cryptography</code> library to encrypt my file instead. So I replaced the block above with</p> <pre><code>with open(file_path, &quot;rb&quot;) as raw_file, open(encrypted_file, &quot;wb&quot;) as enc_file: encryptor = Cipher(algorithms.AES256(key), modes.CBC(iv)).encryptor() enc_file.write(encryptor.update(raw_file.read())) enc_file.write(encryptor.finalize()) </code></pre> <p>which I think is doing the same thing (aes-256-cbc, same key/iv, but passed as bytes, not hex).
But now I get this error at the <code>ctx.cipher_final_vec(&amp;mut bytes)</code> line</p> <pre><code>ErrorStack([Error { code: 101077092, library: &quot;digital envelope routines&quot;, function: &quot;EVP_DecryptFinal_ex&quot;, reason: &quot;bad decrypt&quot;, file: &quot;../crypto/evp/evp_enc.c&quot;, line: 610 }]) </code></pre> <p>Because I use OpenSSL 1.1.1# in Rust, I also tried using <code>cryptography v36.0.2</code> (statically linked to OpenSSL 1.1.1n) instead of the latest version (39.0.1 is statically linked to OpenSSL 3.0.8), but I get the exact same error.</p> <p>What do I need to change for Rust to be able to decrypt my file?</p>
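One plausible cause (an assumption — the post doesn't confirm it): `openssl enc` applies PKCS#7 padding by default, while the `cryptography` library's raw CBC mode encrypts exactly the bytes it is given, so the Rust side's `cipher_final_vec` fails its padding check with "bad decrypt". The fix would be to pad the plaintext before encrypting, e.g. with `cryptography.hazmat.primitives.padding.PKCS7(128)`. A minimal stdlib sketch of what PKCS#7 padding adds (function names are mine):

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    # PKCS#7: append N copies of the byte N, where N is the number of
    # bytes needed to reach a block boundary (always 1..block_size, so
    # a full-block input still gains one whole padding block).
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len]) * pad_len

def pkcs7_unpad(padded: bytes) -> bytes:
    # The last byte tells us how many padding bytes to strip.
    return padded[:-padded[-1]]
```

With the `cryptography` library the equivalent before `encryptor.update(...)` would be roughly `padder = padding.PKCS7(128).padder()` then `padded = padder.update(raw_file.read()) + padder.finalize()` — check the library's symmetric-padding docs for the exact calls.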
<python><rust><cryptography>
2023-02-27 16:21:24
1
2,397
Nil
75,582,975
5,491,623
Using Netmiko to log in to 1000+ Cisco routers to check the clock
<p>I am using Python + Netmiko for network automation. We have thousands of devices to manage. I am using threading to perform the task faster, but I always receive read timeout exceptions once the threads reach 250+ devices.</p>
<python><automation><netmiko>
2023-02-27 16:19:25
0
643
Shashi Dhar
75,582,969
1,506,850
How does matplotlib deal with overplotting in time series?
<p>I am plotting a huge timeseries, having &gt; 1M (10^6) datapoints.</p> <p>I am plotting it using</p> <pre><code>import matplotlib.pyplot as plt plt.plot(timeseries) </code></pre> <p>Since my screen resolution is &lt;2k px, some overplotting is clearly taking place. How does matplotlib deal with number of points &gt;&gt; screen resolution?</p>
<python><matplotlib><plot>
2023-02-27 16:18:53
1
5,397
00__00__00
75,582,899
3,215,940
How to set and pass default values of kwargs from one function to another
<p>I have a few functions that call each other with a bunch of parameters. Because my use case contains several parameters, I wanted to check that they are being passed correctly whenever I modify them (will wrap this in a verbose mode afterwards, but it's mainly for debug purposes now). Here's a toy example below:</p> <pre><code>from functools import wraps import inspect def print_kwargs(func): @wraps(func) def wrapper(*args, **kwargs): # Print the function name print(f&quot;Calling function {func.__name__}&quot;) # Print the values of the passed keyword arguments if kwargs: print(&quot;Keyword arguments:&quot;) for key, value in kwargs.items(): print(f&quot; {key} = {value}&quot;) # Print the default values of keyword arguments defined in the function signature sig = inspect.signature(func) for param_name, param in sig.parameters.items(): print(f&quot;{param_name} : {param}&quot;) if param.default is not inspect.Parameter.empty: value = param.default if value is not None or (value is None and param_name in kwargs): print(f&quot; {param_name} = {value}&quot;) # Call the function return func(*args, **kwargs) return wrapper @print_kwargs def fun1(a, **kwargs): n_remove = kwargs.get('n_remove', 0) b = fun2(a[n_remove:], **kwargs) return b @print_kwargs def fun2(x, **kwargs): return x </code></pre> <p>If I call</p> <pre><code>fun1([1,2,3], n_remove=1) Calling function fun1 Keyword arguments: n_remove = 1 a : a kwargs : **kwargs Calling function fun2 x : x kwargs : kwargs [2, 3] </code></pre> <p>The keyword argument gets intercepted in the first call, but not the second one. <code>**kwargs</code> is printed &quot;as is&quot; instead of with the default value of 0. 
When I call the function with the default values I get nothing for keyword arguments (makes sense) and the same behavior for <code>**kwargs</code>.</p> <pre><code>fun1([1,2,3]) Calling function fun1 a : a kwargs : **kwargs Calling function fun2 x : x kwargs : kwargs [1, 2, 3] </code></pre> <p>I am not very familiar with the way **kwargs works and, although the functions seem to be working correctly and all the default values are properly set inside each function using the likes of <code>kwargs.get('n_remove', 0)</code>, I cannot figure out how to print the default values of my <code>kwargs</code>.</p>
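The behavior follows from how `**kwargs` appears in a signature: it is a single `VAR_KEYWORD` parameter with no default, so `inspect.signature` cannot see defaults that are applied inside the body via `kwargs.get('n_remove', 0)`. A small sketch confirming this, plus one workaround — declaring the default explicitly so it becomes introspectable (function names are mine, modeled on the post):

```python
import inspect

def fun_hidden(x, **kwargs):
    # Default applied in the body: invisible to inspect.signature.
    n_remove = kwargs.get('n_remove', 0)
    return x[n_remove:]

def fun_visible(x, n_remove=0, **kwargs):
    # Default declared in the signature: introspectable by the decorator.
    return x[n_remove:]

hidden = inspect.signature(fun_hidden).parameters['kwargs']
visible = inspect.signature(fun_visible).parameters['n_remove']
```

So a decorator can only print defaults that exist in the signature; defaults buried in `kwargs.get(...)` calls would need source inspection or an explicit registry to recover.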
<python><python-decorators><keyword-argument>
2023-02-27 16:12:11
0
4,270
Matias Andina
75,582,704
11,397,243
lambda binding upon call is causing issue in my generated code
<p>As the author of <a href="https://github.com/snoopyjc/pythonizer" rel="nofollow noreferrer">Pythonizer</a>, I'm translating some perl code to python that defines a function FETCH in 2 different packages in the same source file, like:</p> <pre><code>package Env; sub FETCH {...} package Env::Array; sub FETCH {...} </code></pre> <p>and my generated code needs to insert a special library call (<code>perllib.tie_call</code>) that handles the 'tie' operation at runtime. Here is a sample of my generated code for the above:</p> <pre><code>builtins.__PACKAGE__ = &quot;Env&quot; def FETCH(*_args): ... Env.FETCH = lambda *_args, **_kwargs: perllib.tie_call(FETCH, _args, _kwargs) builtins.__PACKAGE__ = &quot;Env.Array&quot; def FETCH(*_args): ... Env.Array.FETCH = lambda *_args, **_kwargs: perllib.tie_call(FETCH, _args, _kwargs) </code></pre> <p>What is happening, when I call <code>Env.FETCH</code>, it's invoking the second <code>def FETCH</code>, as that's the current one defined. I need to invoke the first <code>FETCH</code> for <code>Env.FETCH</code> and the second <code>FETCH</code> for <code>Env.Array.FETCH</code>. How can I modify my generated code to do that? In the situation that I don't need to sneak in a <code>perllib.tie_call</code>, my generated code is:</p> <pre><code>builtins.__PACKAGE__ = &quot;Env&quot; def FETCH(*_args): ... Env.FETCH = FETCH builtins.__PACKAGE__ = &quot;Env.Array&quot; def FETCH(*_args): ... Env.Array.FETCH = FETCH </code></pre> <p>which works as expected. With the <code>lambda</code>, the evaluation of <code>FETCH</code> gets delayed and the last one defined is being picked up. I need it to pick up the <code>FETCH</code> defined just above it instead.</p>
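This is Python's late binding of free variables: the lambda looks up `FETCH` when it is *called*, so both lambdas see whichever `FETCH` was defined last. The standard fix is to freeze the current value at definition time through a default argument. A standalone sketch of the pattern (using a stand-in for `perllib.tie_call`, which isn't available here):

```python
def tie_call(func, args, kwargs):
    # Stand-in for perllib.tie_call, just to make the sketch runnable.
    return func(*args, **kwargs)

handlers = []
for pkg in ('Env', 'Env.Array'):
    def FETCH(pkg=pkg):                # redefined on each iteration, like
        return f'FETCH from {pkg}'     # the two generated `def FETCH` blocks

    # Late binding (the bug): `lambda *a, **k: tie_call(FETCH, a, k)` would
    # resolve FETCH at call time and always hit the last definition.
    # Early binding (the fix): capture the current FETCH as a default value.
    handlers.append(lambda *a, _f=FETCH, **k: tie_call(_f, a, k))
```

Applied to the generated code, that would be `Env.FETCH = lambda *_args, _f=FETCH, **_kwargs: perllib.tie_call(_f, _args, _kwargs)`, and likewise for `Env.Array.FETCH`.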
<python><lambda><expression-evaluation>
2023-02-27 15:55:36
1
633
snoopyjc
75,582,686
2,908,017
How to change GroupBox background color in Python VCL GUI app
<p>I'm using the <a href="https://github.com/Embarcadero/DelphiVCL4Python" rel="nofollow noreferrer">DelphiVCL GUI library for Python</a> and trying to change the background color on a <code>GroupBox</code> component, but it's not working</p> <p>I have the following code to create the <code>Form</code> and the <code>GroupBox</code> on my Form:</p> <pre><code>from delphivcl import * class frmMain(Form): def __init__(self, owner): self.Caption = 'Hello World' self.Width = 1000 self.Height = 500 self.Position = &quot;poScreenCenter&quot; self.myGroupBox = GroupBox(self) self.myGroupBox.Parent = self self.myGroupBox.Align = &quot;alClient&quot; self.myGroupBox.Caption = &quot;Hello World!&quot; self.myGroupBox.Font.Size = 30 self.myGroupBox.AlignWithMargins = True self.myGroupBox.Margins.Top = 100 self.myGroupBox.Margins.Right = 100 self.myGroupBox.Margins.Bottom = 100 self.myGroupBox.Margins.Left = 100 self.myGroupBox.StyleElements = &quot;&quot; self.myGroupBox.Color = &quot;$00418964&quot; # Green Color </code></pre> <p>I'm trying to give it a Green background color ($00418964). I have the <code>StyleElements</code> cleared like mentioned in <a href="https://stackoverflow.com/a/75581205/2908017">this post</a>:</p> <pre><code>self.myGroupBox.StyleElements = &quot;&quot; </code></pre> <p>But even with <code>StyleElements</code> cleared, it still isn't working. My output Form then looks like this:</p> <p><a href="https://i.sstatic.net/8BhYS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8BhYS.png" alt="Python GUI app with Hello World GroupBox" /></a></p> <p>My &quot;Hello World!&quot; GroupBox should have the green background color on it, but it's not showing. I'm setting the background color with this piece of code:</p> <pre><code>self.myGroupBox.Color = &quot;$00418964&quot; </code></pre>
<python><user-interface><vcl>
2023-02-27 15:54:11
1
4,263
Shaun Roselt
75,582,590
1,232,087
pyspark - concatenating columns and surround by two strings
<p><strong>Question</strong>: How can we concatenate the columns in the following <code>df</code> and surround each column with two strings (as shown in the <code>desired</code> output)?</p> <pre><code>from pyspark.sql.functions import concat_ws from pyspark.sql import functions as F df = spark.createDataFrame([[&quot;1&quot;, &quot;2&quot;], [&quot;2&quot;, None], [&quot;3&quot;, &quot;4&quot;], [&quot;4&quot;, &quot;5&quot;], [None, &quot;6&quot;]]).toDF(&quot;a&quot;, &quot;b&quot;) df = df.withColumn(&quot;concat_ws&quot;, concat_ws(&quot;varchar(&quot;, *[F.col(c) for c in df.columns])) df.show() </code></pre> <p><strong>Current output</strong>:</p> <pre><code>+----+----+-----------+ | a| b| concat_ws | +----+----+-----------+ | 1| 2| 1varchar(2| | 2|null| 2| | 3| 4| 3varchar(4| | 4| 5| 4varchar(5| |null| 6| 6| +----+----+-----------+ </code></pre> <p><strong>Desired output</strong>:</p> <pre><code>+----+----+----------------------+ | a| b| concat_ws | +----+----+----------- + | 1| 2| varchar(1),varchar(2)| | 2|null| varchar(2) | | 3| 4| varchar(3),varchar(4)| | 4| 5| varchar(4),varchar(5)| |null| 6| varchar(6) | +----+----+----------------------+ </code></pre>
<python><apache-spark><pyspark><apache-spark-sql><databricks>
2023-02-27 15:45:26
2
24,239
nam
75,582,587
13,742,058
getpath in lxml etree is showing different output for absolute xpath
<p>I am trying to get the absolute XPath of an element but it is giving different output. I am trying to get the full XPath of <kbd>search button</kbd> in Google. Here's the code I have tried:</p> <pre><code>import time import random from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager from lxml import etree options = webdriver.ChromeOptions() options.add_argument(&quot;start-maximized&quot;) options.add_argument(&quot;--log-level=3&quot;) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') s = Service(ChromeDriverManager().install()) driver = webdriver.Chrome(service=s, options=options) main_link = r&quot;https://www.google.com&quot; driver.get(main_link) time.sleep(5) with open (&quot;dom.xml&quot;,&quot;w&quot;,encoding=&quot;utf-8&quot;) as domfile: domfile.write(driver.page_source) tree = etree.parse(&quot;dom.xml&quot;,parser=etree.XMLParser(recover=True)) print(tree) element = tree.xpath(&quot;(//input[@class='gNO89b'])[2]&quot;) print(element) # Trying to print absolute xpath . . print(tree.getpath(element[0])) </code></pre> <p>Output should be: <code>/html/body/div[1]/div[3]/form/div[1]/div[1]/div[4]/center/input[1]</code></p> <p>But it is giving me: <code>/html/head/meta/meta/meta/link/script[6]/br/body/div/div[2]/div[2]/form/div/div/div/div[2]/div[2]/div[7]/center/input</code></p>
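Part of the mismatch (an assumption, but consistent with the `/html/head/meta/meta/...` shape of the output): `driver.page_source` is HTML, not well-formed XML, and `XMLParser(recover=True)` "recovers" by nesting unclosed void elements like `<meta>` and `<br>` inside each other, producing a different tree than the browser's DOM. Parsing with lxml's HTML parser keeps void elements as siblings, so `getpath` matches the browser. A minimal sketch (the snippet is invented for illustration):

```python
from lxml import etree

# Void elements such as <meta> are legal HTML but unclosed as XML tags.
snippet = ("<html><head><meta charset='utf-8'><meta name='a'></head>"
           "<body><div><input class='gNO89b'></div></body></html>")

# XML parser in recover mode: each unclosed <meta> swallows what follows,
# which is where paths like /html/head/meta/meta/... come from.
xml_tree = etree.ElementTree(
    etree.fromstring(snippet, parser=etree.XMLParser(recover=True)))

# HTML parser: void elements stay siblings, like in the browser DOM.
html_tree = etree.ElementTree(etree.HTML(snippet))
inp = html_tree.xpath("//input[@class='gNO89b']")[0]
```

In the asker's script, replacing the `etree.parse(..., XMLParser(recover=True))` call with `etree.HTML(driver.page_source)` (wrapped in `ElementTree` for `getpath`) should give paths much closer to the browser's.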
<python><selenium-webdriver><web-scraping><lxml>
2023-02-27 15:44:59
1
308
fardV
75,582,565
1,317,018
How to use tqdm with while loop if counter is conditionally updated
<p>In the code below, I want <code>tqdm</code> to show a progress bar for the counter variable <code>i</code>. Note that <code>i</code> is conditionally updated, and the decision to increment it involves randomness.</p> <pre><code>import random totalItr = 100 i = 0 actualItr = 0 while i &lt; totalItr: actualItr += 1 if random.choice([True,False]): i += 1 print(actualItr) </code></pre> <p>I tried to explicitly update the tqdm progress bar as below:</p> <pre><code>import random from tqdm.notebook import tqdm pbar = tqdm(total=100) totalItr = 100 i = 0 actualItr = 0 while i &lt; totalItr: actualItr += 1 if random.choice([True,False]): i += 1 pbar.update(1) pbar.close() print(actualItr) </code></pre> <p>But this does not show any increments in the progress bar. Initially it only shows an empty progress bar, and then after execution completes it shows a green 100% progress bar. What am I missing here?</p> <p>PS: You can try the above code in <a href="https://colab.research.google.com/drive/1ML-YyUX6tWzuIzmbECKmcHdin3_4z8xi?usp=sharing" rel="nofollow noreferrer">this Colab notebook</a>.</p>
<python><tqdm>
2023-02-27 15:42:58
0
25,281
Mahesha999
75,582,446
6,216,161
descartes, python package, error in polygonpatch
<p>I have installed the latest Descartes package using pip3. But I cannot seem to run the example code provided in the <a href="https://pypi.org/project/descartes/" rel="nofollow noreferrer">website</a>.</p> <pre><code>` IndexError Traceback (most recent call last) /tmp/ipykernel_500743/916321365.py in &lt;module&gt; 20 21 dilated = line.buffer(0.5) ---&gt; 22 patch1 = PolygonPatch(dilated, fc=BLUE, ec=BLUE, alpha=0.5, zorder=2) 23 ax.add_patch(patch1) 24 ~/.local/lib/python3.8/site-packages/descartes/patch.py in PolygonPatch(polygon, **kwargs) 85 86 &quot;&quot;&quot; ---&gt; 87 return PathPatch(PolygonPath(polygon), **kwargs) ~/.local/lib/python3.8/site-packages/descartes/patch.py in PolygonPath(polygon) 60 &quot;A polygon or multi-polygon representation is required&quot;) 61 ---&gt; 62 vertices = concatenate([ 63 concatenate([asarray(t.exterior.coords)[:, :2]] + 64 [asarray(r)[:, :2] for r in t.interiors]) ~/.local/lib/python3.8/site-packages/descartes/patch.py in &lt;listcomp&gt;(.0) 62 vertices = concatenate([ 63 concatenate([asarray(t.exterior.coords)[:, :2]] + ---&gt; 64 [asarray(r)[:, :2] for r in t.interiors]) 65 for t in polygon]) 66 codes = concatenate([ ~/.local/lib/python3.8/site-packages/descartes/patch.py in &lt;listcomp&gt;(.0) 62 vertices = concatenate([ 63 concatenate([asarray(t.exterior.coords)[:, :2]] + ---&gt; 64 [asarray(r)[:, :2] for r in t.interiors]) 65 for t in polygon]) 66 codes = concatenate([ IndexError: too many indices for array: array is 0-dimensional, but 2 were indexed </code></pre> <p>There is a <a href="https://stackoverflow.com/questions/75287534/indexerror-descartes-polygonpatch-wtih-shapely">similar post</a>, I tried to apply the suggestions in that post, as you can see in the last few lines of the error, by simply changing</p> <pre><code>t.exterior </code></pre> <p>to</p> <pre><code>t.exterior.coords. </code></pre> <p>the error is not resolved. I have shapely version (2.0.1), and numpy (1.22.3) installed.</p>
<python><polygon><descartes>
2023-02-27 15:31:42
1
327
user252935
75,582,394
5,452,378
(Dataflow) Apache Beam Python requirements.txt file not installing on workers
<p>I'm trying to run an Apache Beam pipeline on Google Dataflow. This pipeline reads data from Google BigQuery, adds a schema, converts it to a Dataframe, and performs a transformation on that dataframe using a third-party library (<code>scrubadub</code>).</p> <p>From the Google Code CLI on GCP, I run:</p> <pre><code>/usr/bin/python /home/test_user/dataflow_beam_test.py --requirements_file /home/test_user/requirements.txt </code></pre> <p>Following the instructions <a href="https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/" rel="nofollow noreferrer">under &quot;PyPi Dependencies&quot; here</a>, my requirements.txt file contains (among other packages):</p> <pre><code>scrubadub==2.0.0 </code></pre> <p>I haven't been able to get the pipeline to install the third party Python library I'm working with (<code>scrubadub</code>) onto remote workers. I've verified this package works locally.</p> <p>Here is the relevant code:</p> <pre><code> with beam.Pipeline(argv=argv) as p: pcoll = (p | 'read_bq_view' &gt;&gt; beam.io.ReadFromBigQuery(query=BIGQUERY_SELECT_QUERY,use_standard_sql=True) | 'ToRows' &gt;&gt; beam.Map(lambda x: beam.Row(id=x['id'], user_id=x['user_id'],query=x['query'])) ) df = beam_convert.to_dataframe(pcoll) df['query'] = df['query'].apply(lambda x: scrubadub.clean(x)) </code></pre> <p>The last line in this code block is what causes the error (I've confirmed by commenting it out and running the pipeline successfully).</p> <p>I've tried importing scrubadub at the top level of the file and as part of my <code>run()</code> function; both throw the same error:</p> <pre><code>/usr/local/lib/python3.9/site-packages/dill/_dill.py&quot;, line 826, in _import_module return __import__(import_name) ModuleNotFoundError: No module named 'scrubadub' </code></pre> <p>Notably, it doesn't seem like <code>pip install -r requirements.txt</code> is ever running on the workers.</p>
<python><dependencies><python-import><google-cloud-dataflow><apache-beam>
2023-02-27 15:26:54
1
409
snark17
75,582,335
11,092,636
cmprsk python package example notebook doesn't work
<p>I'm using <code>Python 3.11.1</code>. I've installed <a href="https://github.com/OmriTreidel/cmprsk" rel="nofollow noreferrer">https://github.com/OmriTreidel/cmprsk</a> (<code>1.0.0</code>, the latest version) in a new virtual environment, I've installed <code>rpy2 3.4.5</code> as the documentation suggested, I'm using <code>R 4.2.2</code>, and I've installed the <code>cmprsk</code> package as suggested.</p> <p>When I run the example notebook provided on GitHub, I get <code>ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (6,) + inhomogeneous part.</code></p> <ul> <li>Processor: AMD Ryzen 7 5800H with Radeon Graphics</li> <li>Windows 11 Home</li> <li>I'm not using WSL</li> </ul>
<python><r><statistics>
2023-02-27 15:21:17
1
720
FluidMechanics Potential Flows
75,582,323
5,181,219
Check that class was called in a with statement
<p>I am building a class that people are supposed to use with a context manager:</p> <pre class="lang-py prettyprint-override"><code>with MyClass(params) as mc: mc.do_things() ... </code></pre> <p>I was wondering whether it was possible to make sure that people called it this way, and that code looking like this:</p> <pre class="lang-py prettyprint-override"><code>mc = MyClass(params) mc.do_things() </code></pre> <p>would raise an exception or an error message. I suppose I could set a private variable when <code>mc.__enter__()</code> is executed, and throw an exception if that variable isn't set when <code>mc.do_things()</code> is called. Is there a better way to do this?</p> <p>I'm not looking for bullet-proof security, just to mitigate the chances that users misuse the interface by mistake.</p>
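The flag-in-`__enter__` approach the question sketches is the usual pragmatic answer: gate the public methods on a private "entered" flag and raise a clear error otherwise (truly *forcing* `with` isn't possible without fragile frame inspection). A minimal sketch, with names assumed from the question:

```python
class MyClass:
    def __init__(self, params):
        self.params = params
        self._entered = False

    def __enter__(self):
        self._entered = True
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._entered = False
        return False  # don't swallow exceptions from the with-block

    def do_things(self):
        # Refuse to run outside a with-statement, with a helpful message.
        if not self._entered:
            raise RuntimeError(
                "use MyClass as a context manager: with MyClass(...) as mc:")
        return 'did things'
```

This isn't bullet-proof (a user can still call `__enter__` by hand), but it turns the accidental misuse in the question into an immediate, self-explanatory error.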
<python><contextmanager>
2023-02-27 15:19:58
1
1,092
Ted
75,582,319
5,058,116
Masking data frame with multidimensional key
<p>I have a data frame containing <code>value_1</code> and <code>value_2</code></p> <pre><code>df_1 = pd.DataFrame( { &quot;id_1&quot;: [101, 202], &quot;id_2&quot;: [101, 202], &quot;value_1&quot;: [5.0, 10.0], &quot;value_2&quot;: [10.0, 4.0], } ) df_1 = df_1.set_index([&quot;id_1&quot;, &quot;id_2&quot;]) </code></pre> <p>that looks like this:</p> <pre><code> value_1 value_2 id_1 id_2 101 101 5.0 10.0 202 202 10.0 4.0 </code></pre> <p>I have another data frame, that contains a flag for each value, i.e. <code>is_active_1</code> and <code>is_active_2</code>:</p> <pre><code>df_2 = pd.DataFrame( { &quot;id_1&quot;: [101, 202], &quot;id_2&quot;: [101, 202], &quot;is_active_1&quot;: [True, False], &quot;is_active_2&quot;: [False, False], } ) df_2 = df_2.set_index([&quot;id_1&quot;, &quot;id_2&quot;]) </code></pre> <p>that looks like this:</p> <pre><code> is_active_1 is_active_2 id_1 id_2 101 101 True False 202 202 False False </code></pre> <p>I want to multiply the <code>value</code> rows by <code>*3</code> in <code>df_1</code> depending on its flag in <code>df_2</code>. The end result should like this:</p> <pre><code> value_1 value_2 id_1 id_2 101 101 15.0 10.0 202 202 10.0 4.0 </code></pre> <p>i.e. the <code>is_active_1 = True</code> flag for <code>(id_1, id_2) = (101, 101)</code> causes <code>value_1 -&gt; 3 * 5.0 = 15.0</code></p> <p>I have tried the following:</p> <pre><code>df_1.loc[df_2[[&quot;is_active_1&quot;, &quot;is_active_2&quot;]], [&quot;value_1&quot;, &quot;value_2&quot;]] * 3 </code></pre> <p>but ended up with a value error <code>ValueError: Cannot index with multidimensional key</code>.</p>
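The `ValueError` comes from handing a whole boolean DataFrame to `.loc`, which expects labels or 1-D masks. One way around it (a sketch, not necessarily the only idiom): rename the flag columns to match the value columns, then use `DataFrame.mask`, which replaces values wherever the aligned condition is True:

```python
import pandas as pd

df_1 = pd.DataFrame(
    {"id_1": [101, 202], "id_2": [101, 202],
     "value_1": [5.0, 10.0], "value_2": [10.0, 4.0]}
).set_index(["id_1", "id_2"])

df_2 = pd.DataFrame(
    {"id_1": [101, 202], "id_2": [101, 202],
     "is_active_1": [True, False], "is_active_2": [False, False]}
).set_index(["id_1", "id_2"])

# Align column labels so the boolean frame lines up with the value frame,
# then triple the values wherever the flag is True.
flags = df_2.rename(columns={"is_active_1": "value_1",
                             "is_active_2": "value_2"})
result = df_1.mask(flags, df_1 * 3)
```

`mask` aligns on both the index and the (renamed) columns, so only the flagged `(101, 101) / value_1` cell becomes `15.0` and everything else is left untouched.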
<python><pandas>
2023-02-27 15:19:47
1
3,058
ajrlewis
75,582,201
16,578,438
pyspark fill missing dates in dataframe and fill other column with minimum of the two adjacent values
<p>looking to fill the pyspark dataframe and load the missing values. Existing Pyspark DataFrame -</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">ID</th> <th style="text-align: center;">Date</th> <th style="text-align: center;">Qty</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-01</td> <td style="text-align: center;">5</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-03</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-04</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-05</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-08</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-09</td> <td style="text-align: center;">11</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-10</td> <td style="text-align: center;">11</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-11</td> <td style="text-align: center;">10</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-13</td> <td style="text-align: center;">0</td> </tr> </tbody> </table> </div> <p>Expected Pyspark DataFrame (filling <strong>bold</strong> values) -</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">ID</th> <th style="text-align: center;">Date</th> <th style="text-align: center;">Qty</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-01</td> <td 
style="text-align: center;">5</td> </tr> <tr> <td style="text-align: center;"><strong>100</strong></td> <td style="text-align: center;"><strong>2023-02-02</strong></td> <td style="text-align: center;"><strong>3</strong> *<em>add row date and lowest adjacent qty</em></td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-03</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-04</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-05</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;"><strong>100</strong></td> <td style="text-align: center;"><strong>2023-02-06</strong></td> <td style="text-align: center;"><strong>3</strong> *<em>add row date and lowest adjacent qty</em></td> </tr> <tr> <td style="text-align: center;"><strong>100</strong></td> <td style="text-align: center;"><strong>2023-02-07</strong></td> <td style="text-align: center;"><strong>3</strong> *<em>add row date and lowest adjacent qty</em></td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-08</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-09</td> <td style="text-align: center;">11</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-10</td> <td style="text-align: center;">11</td> </tr> <tr> <td style="text-align: center;">100</td> <td style="text-align: center;">2023-02-11</td> <td style="text-align: center;">10</td> </tr> <tr> <td style="text-align: center;"><strong>100</strong></td> <td style="text-align: center;"><strong>2023-02-12</strong></td> <td style="text-align: center;"><strong>0</strong> *<em>add row date and lowest adjacent qty</em></td> </tr> <tr> <td style="text-align: 
center;">100</td> <td style="text-align: center;">2023-02-13</td> <td style="text-align: center;">0</td> </tr> </tbody> </table> </div> <p>I referred to an existing answer, but it doesn't fulfill my requirement of filling the lowest adjacent value (<a href="https://stackoverflow.com/questions/70467967/pyspark-generate-missing-dates-and-fill-data-with-previous-value">PySpark generate missing dates and fill data with previous value</a>).</p>
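Separately from the Spark specifics, the fill rule itself ("insert each missing date with the lower of the two surrounding quantities") can be sketched in plain Python; a PySpark version would typically build the full calendar per ID with `sequence`/`explode` and apply the same min-of-neighbours rule with window functions. This is a sketch of the rule only, on a subset of the question's data, and `fill_missing` is a hypothetical helper name:

```python
from datetime import date, timedelta

def fill_missing(rows):
    """rows: sorted list of (date, qty); insert each missing date with
    qty = min(previous known qty, next known qty)."""
    out = []
    for (d0, q0), (d1, q1) in zip(rows, rows[1:]):
        out.append((d0, q0))
        gap = d0 + timedelta(days=1)
        while gap < d1:
            out.append((gap, min(q0, q1)))  # lowest adjacent qty
            gap += timedelta(days=1)
    out.append(rows[-1])
    return out

rows = [(date(2023, 2, 1), 5), (date(2023, 2, 3), 3),
        (date(2023, 2, 11), 10), (date(2023, 2, 13), 0)]
filled = fill_missing(rows)
```

For the sample above this inserts (2023-02-02, 3) and (2023-02-12, 0), matching the bold rows in the expected table; for large data the same logic still has to be expressed in Spark.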
<python><dataframe><apache-spark><pyspark>
2023-02-27 15:10:20
2
428
NNM
75,582,190
4,451,315
Why does timestamp() show an extra microsecond compared with subtracting 1970-01-01?
<p>The following differ by 1 microsecond:</p> <pre class="lang-py prettyprint-override"><code>In [37]: datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc) - datetime(1970,1,1, tzinfo=dt.timezone.utc) Out[37]: datetime.timedelta(days=198841, seconds=6784, microseconds=986754) In [38]: datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc).timestamp() Out[38]: 17179869184.986755 </code></pre> <p>The number of microseconds is <code>986754</code> in the first case, and <code>986755</code> in the second.</p> <p>Is this just Python floating point arithmetic error, or is there something else I'm missing?</p>
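This is consistent with plain double-precision rounding rather than a datetime bug: `.timestamp()` returns a float, and near 1.7e10 seconds the gap between adjacent representable doubles is already coarser than one microsecond, so the exact value cannot survive the conversion. A quick check:

```python
import math
from datetime import datetime, timezone

dt = datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=timezone.utc)
ts = dt.timestamp()      # float seconds since the epoch, ~1.7e10

# spacing between adjacent representable floats at this magnitude
spacing = math.ulp(ts)   # about 3.8e-6 s, i.e. coarser than 1 microsecond
```

The timedelta subtraction keeps exact integer microseconds, which is why the first form still shows 986754; once the result must fit in a float, the nearest representable value rounds the fraction up to ...986755.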
<python><datetime><floating-point><timestamp>
2023-02-27 15:09:26
3
11,062
ignoring_gravity
75,582,153
3,894,818
-SOLVED- How to upload a photo to WordPress Media Library via WordPress REST API with Python?
<p>Unable to upload a photo to WordPress Media Library via WordPress REST API for the second day. <em>WordPress version 6.1.1</em> I am well aware that this and similar questions have been raised on StackOverflow many times, I have read probably all of them. I tried to use some of the suggested solutions, but unfortunately, none of the scripts uploaded the photo to WordPress.</p> <h1>I tried to minimize the code sample I expect to make work*:</h1> <pre><code>import requests photo_data = open(&quot;photo.jpg&quot;, 'rb').read() headers = {&quot;Content-Disposition&quot;: &quot;attachment; filename=photo.jpg&quot;, &quot;Content-Type&quot;: &quot;image/jpeg&quot;} r = requests.post(url='https://DOMAIN-NAME.com/wp-json/wp/v2/media', auth=('admin', 'APpL IcaT iONp assW ordW Padm'), headers=headers, data=photo_data) print(r) </code></pre> <p>*.Please note that I have intentionally not shared my account details as well as the domain of my real site.</p> <h1>So my question is what am I doing wrong and how do I get the code to work?</h1> <p><em><strong>PS All I get in response is data about the photos previously uploaded to WordPress Media Library via dashboard.</strong></em></p>
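One diagnostic worth running before changing the upload code: if the response body is a *list of existing media items*, the POST has most likely been turned into a GET along the way; a common cause is a 301 redirect (http to https, or www vs non-www) that `requests` follows while dropping the body. A hedged sketch of building and inspecting the request; the domain and credentials are placeholders from the question, and the network call is left commented out here:

```python
import base64

user, app_password = "admin", "APpL IcaT iONp assW ordW Padm"  # placeholders
token = base64.b64encode(f"{user}:{app_password}".encode()).decode()

headers = {
    "Authorization": f"Basic {token}",
    "Content-Disposition": 'attachment; filename="photo.jpg"',
    "Content-Type": "image/jpeg",
}

# r = requests.post("https://DOMAIN-NAME.com/wp-json/wp/v2/media",
#                   headers=headers, data=open("photo.jpg", "rb").read())
# print(r.status_code, r.history)  # a non-empty r.history reveals a redirect
```

If `r.history` is non-empty, using the exact canonical URL (scheme and host) the site redirects to should let the POST arrive intact.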
<python><python-3.x><wordpress><rest><request>
2023-02-27 15:06:56
1
811
Quanti Monati
75,582,039
1,473,517
How to optimize a function involving max
<p>I am having problems minimizing a simple if slightly idiosyncratic function. I have scipy.optimize.minimize but I can't get consistent results. Here is the full code:</p> <pre><code>from math import log, exp, sqrt from bisect import bisect_left from scipy.optimize import minimize from scipy.optimize import Bounds import numpy as np def new_inflection(x0, x1): return log((exp(x0)+exp(x1) + sqrt(exp(2*x0)+6*exp(x0+x1)+exp(2*x1)))/2) def make_pairs(points): new_points = [] for i in range(len(points)): for j in range(i+1, len(points)): new_point = new_inflection(points[i], points[j]) new_points.append(new_point) return new_points def find_closest_number(numbers, query): index = bisect_left(numbers, query) if index == 0: return numbers[0] if index == len(numbers): return numbers[-1] before = numbers[index - 1] after = numbers[index] if after - query &lt; query - before: return after else: return before def max_distance(target_points): pair_points = make_pairs(target_points) target_points = sorted(target_points) dists = [] return max(abs(point - find_closest_number(target_points, point)) for point in pair_points) num_points = 20 points = np.random.rand(num_points)*10 print(&quot;Starting score:&quot;, max_distance(points)) bounds = Bounds([0]*num_points, [num_points] * num_points) res = minimize(max_distance, points, bounds = bounds, options={'maxiter': 100}, method=&quot;SLSQP&quot;) print([round(x,2) for x in res.x]) print(res) </code></pre> <p>Every time I run it I get quite different results. This is despite the output saying <code>Optimization terminated successfully</code>. An example output:</p> <pre><code> message: Optimization terminated successfully success: True status: 0 fun: 0.4277378933292031 x: [ 5.710e+00 1.963e+00 ... 1.479e+00 6.775e+00] nit: 15 jac: [ 0.000e+00 0.000e+00 ... 
0.000e+00 0.000e+00] nfev: 364 njev: 15 </code></pre> <p>Sometimes I get a result as low as 0.40 and other times as high as 0.51.</p> <p>Is there any way to optimize this function properly in Python?</p>
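One likely explanation: the objective is a max of non-smooth terms, so its gradient jumps at kinks, and SLSQP (a gradient-based local method) only converges to whatever local stationary point is nearest the random start; each run "succeeds" at a different local minimum. Usual remedies are multi-start (keep the best of many random restarts), a derivative-free or global method (Nelder-Mead, `differential_evolution`), or an epigraph reformulation of the minimax. The kink is easy to see on a toy max-function (illustrative only, not the original objective):

```python
def f(x):
    # minimax-style objective: smooth pieces, but kinked where the max switches
    return max(abs(x - 1.0), abs(x + 1.0))

def num_grad(x, h=1e-6):
    # central-difference gradient estimate
    return (f(x + h) - f(x - h)) / (2 * h)

left, right = num_grad(-0.01), num_grad(0.01)
# the gradient flips sign across the kink at x = 0, the true minimiser,
# which is exactly the kind of point gradient-based solvers handle poorly
```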
<python><scipy><mathematical-optimization>
2023-02-27 14:56:49
3
21,513
Simd
75,581,946
3,672,883
Why doesn't this asyncio code run concurrently?
<p>Hello, I am trying to write a script that processes PDF files concurrently with asyncio. To do this I have the following code:</p> <pre><code>import click import asyncio from pdf2image import convert_from_path from functools import wraps def coro(f): @wraps(f) def wrapper(*args, **kwargs): return asyncio.run(f(*args, **kwargs)) return wrapper async def my_coroutine(path): print(path) return convert_from_path(path, fmt=&quot;ppm&quot;, poppler_path=&quot;&quot;) @click.command() @click.option(&quot;-s&quot;, &quot;settings_path&quot;, required=False, type=str) @coro async def dlr(settings_path) -&gt; None: paths = [...] responses = await asyncio.gather(*[my_coroutine(path) for path in paths]) @click.group() def cli() -&gt; None: pass cli.add_command(dlr) if __name__ == &quot;__main__&quot;: cli() </code></pre> <p>When run, this executes sequentially instead of in &quot;parallel&quot;. How can I improve this?</p> <p>Thanks</p>
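The core issue: `convert_from_path` is a blocking call, and wrapping it in an `async def` does not make it yield, so `asyncio.gather` still runs the coroutines one after another. One way to get real overlap is to push each blocking call into a worker thread with `asyncio.to_thread` (Python 3.9+; a `ProcessPoolExecutor` would suit heavier CPU-bound work). Sketched here with `time.sleep` standing in for the PDF conversion:

```python
import asyncio
import time

def blocking_convert(path):
    # stand-in for convert_from_path(path, fmt="ppm", ...)
    time.sleep(0.2)
    return f"pages-of-{path}"

async def main(paths):
    # run the blocking calls in threads so they overlap instead of queueing
    return await asyncio.gather(
        *[asyncio.to_thread(blocking_convert, p) for p in paths]
    )

start = time.perf_counter()
results = asyncio.run(main(["a.pdf", "b.pdf", "c.pdf"]))
elapsed = time.perf_counter() - start   # ~0.2 s, not ~0.6 s
```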
<python><python-asyncio><python-click>
2023-02-27 14:46:39
1
5,342
Tlaloc-ES
75,581,932
1,290,170
How do I input values from a 2-index-level dataframe into a 3-index-level dataframe?
<p>I have an empty dataframe <code>df1</code>:</p> <pre><code> column1 column2 A B C </code></pre> <p>I have a 2 dimensional dataframe <code>df2</code> with same column, but only B and C as indexes:</p> <pre><code> column1 column2 B C foo bar 123 123 foo2 bar2 456 456 </code></pre> <p>How do I efficiently input those values into the 3-dimensional dataframe, with a value of my choice, say 1, for the A column?</p> <p>The expected output would be (with the same value in all indexes in A:</p> <pre><code> column1 column2 A B C 1 foo bar 123 123 1 foo2 bar2 456 456 </code></pre> <p>I tried something like in that format, but couldn't find the right one:</p> <pre><code>df1.loc[('my_value',??,??),:] = df2.values </code></pre>
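One way to lift the 2-level frame into a 3-level one is `pd.concat` with a dict key and `names`, which prepends the chosen value as a new outermost index level; a sketch with frames shaped like the question's (the index names `A`/`B`/`C` follow the question):

```python
import pandas as pd

df2 = pd.DataFrame(
    {"column1": [123, 456], "column2": [123, 456]},
    index=pd.MultiIndex.from_tuples(
        [("foo", "bar"), ("foo2", "bar2")], names=["B", "C"]
    ),
)

# prepend a constant outermost level named "A" holding the value 1
df1 = pd.concat({1: df2}, names=["A"])
```

If `df1` already holds other rows, `pd.concat([df1, pd.concat({1: df2}, names=["A"])])` appends the new block while keeping the three levels.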
<python><pandas>
2023-02-27 14:45:10
1
1,567
alexx0186
75,581,793
51,816
How to resume download using MediaIoBaseDownload with Google Drive and Python?
<p>With large files I get various errors that stops the download, so I want to resume from where it stopped by appending to the file on disk properly.</p> <p>I saw that the FileIO has to be using 'ab' mode:</p> <pre><code>fh = io.FileIO(fname, mode='ab') </code></pre> <p>but I couldn't find how to specify where to continue from using MediaIoBaseDownload.</p> <p>Any idea on how to implement this?</p>

<python><download><google-drive-api>
2023-02-27 14:31:42
2
333,709
Joan Venge
75,581,695
13,219,123
Missing Python executable 'python3'
<p>I am running pyspark locally and had some issues with the paths to Python: running <code>python3</code> in the command prompt gave an error, while running <code>python</code> did not (I do have Python 3 installed), and I would get a <code>java.io.IOException</code> error when trying to run a pyspark job.</p> <p>Now I have added</p> <pre><code>import os import sys os.environ['PYSPARK_PYTHON'] = sys.executable os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable </code></pre> <p>which solves my problem. However, this does not seem like the best solution. Do I have to add this at the beginning of every file, or is there a smarter solution?</p>
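Instead of patching `os.environ` at the top of every script, the interpreter can be pinned once at the environment or Spark-config level; a hedged sketch assuming a standard Spark install (the interpreter paths are illustrative, use the output of `which python`):

```shell
# Option 1: set once in the shell profile (~/.bashrc) instead of per script
export PYSPARK_PYTHON=/usr/bin/python3
export PYSPARK_DRIVER_PYTHON=/usr/bin/python3

# Option 2: let Spark itself carry the setting, picked up by every job
# $SPARK_HOME/conf/spark-env.sh:
#   export PYSPARK_PYTHON=/usr/bin/python3
# $SPARK_HOME/conf/spark-defaults.conf:
#   spark.pyspark.python        /usr/bin/python3
#   spark.pyspark.driver.python /usr/bin/python3
```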
<python><pyspark>
2023-02-27 14:24:55
2
353
andKaae
75,581,571
13,596,037
in numpy, what is the difference between calling MA.masked_where and MA.masked_array?
<p>Calling <code>masked_array</code> (the class constructor) and the <code>masked_where</code> function both seem to do exactly the same thing, in terms of being able to construct a numpy masked array given the data and mask values. When would you use one or the other?</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; import numpy.ma as MA &gt;&gt;&gt; vals = np.array([0,1,2,3,4,5]) &gt;&gt;&gt; cond = vals &gt; 3 &gt;&gt;&gt; vals array([0, 1, 2, 3, 4, 5]) &gt;&gt;&gt; cond array([False, False, False, False, True, True], dtype=bool) &gt;&gt;&gt; MA.masked_array(data=vals, mask=cond) masked_array(data = [0 1 2 3 -- --], mask = [False False False False True True], fill_value = 999999) &gt;&gt;&gt; MA.masked_where(cond, vals) masked_array(data = [0 1 2 3 -- --], mask = [False False False False True True], fill_value = 999999) </code></pre> <p>The optional argument <code>copy</code> to <code>masked_where</code> (its only documented optional argument) is also supported by <code>masked_array</code>, so I don't see any options that are unique to <code>masked_where</code>. Although the converse is not true (e.g. <code>masked_where</code> doesn't support <code>dtype</code>), I don't understand the purpose of <code>masked_where</code> as a separate function.</p>
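A quick check confirms the two produce identical results for this usage; `masked_where` is essentially a readability convenience and the basis of the whole `masked_greater` / `masked_less` / `masked_inside` family, while `masked_array` exposes the full constructor surface (`dtype`, `fill_value`, `hard_mask`, ...). A small demonstration:

```python
import numpy as np
import numpy.ma as ma

vals = np.array([0, 1, 2, 3, 4, 5])
cond = vals > 3

a = ma.masked_array(data=vals, mask=cond)   # explicit constructor
b = ma.masked_where(cond, vals)             # convenience function
c = ma.masked_greater(vals, 3)              # same family, condition built for you
```

So the choice is mostly stylistic: `masked_where` (and its siblings) read well when the mask *is* a condition on the data; the constructor is the tool when you need its extra options.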
<python><numpy><masked-array>
2023-02-27 14:12:07
2
13,169
alani
75,581,476
8,388,965
2D to 3D projection opencv gives high error
<p>The purpose of my project is to convert some 2D/Image points into 3D/World coordinates.</p> <p>In order to determine the image coordinates, I have mapped the exact location of the image in meters/cms in the real world. Please see the image below,</p> <p><img src="https://i.sstatic.net/LfAlw.png" alt="Undistorted image using cv2.undistort 1" /> All the image points are highlighted in red on the image</p> <p>We assume the surface is flat therefore z=0, in world coordinates. This allows us to project x and y in 3D space whilst ignoring the Z axis.</p> <p>I use the following code, mostly consisting of OpenCV functions.</p> <pre><code>import numpy as np import cv2 # calibratiob done with cv2 calibtrate to get cam matrix and distortion params dist_params = np.array([-2.80467218e-01, 6.67589890e-02, 9.79684e-05, 7.560530e-04, 0]) cam_matrix = np.array([ [880.27, 0, 804.05388], [0.0, 877.2202, 431.85688], [0.0, 0.0, 1.0], ]) # values are in meters world_points_real = np.array([ # x, y, z [4.92, 0.0, 0.0], [4.92, -1.2, 0.0], [4.92, -2.44, 0.0], [4.92, -3.6, 0.0], [4.62, 5.66, 0.0], ], dtype=np.float32).reshape((-1,3)) img_points = np.array([ # u, v [ 937, 590], [ 1076, 597], [ 1220, 602], [ 1359, 608], [ 336, 543],, ], dtype=np.float32).reshape((-1,2)) # find rvecs and tvecs using OpenCV solvePnP methods ret, rvecs, tvecs, dist = cv2.solvePnPRansac(world_points_real, img_points, cam_matrix, dist_params)#, flags=cv2.SOLVEPNP_ITERATIVE) # # project 3d points to 2d img_points_project, jac = cv2.projectPoints(np.array([4.62, 5.66, 0.0]), rvecs, tvecs, cam_matrix, dist_params) print(&quot;img_points:&quot;, img_points_project) # this should be [ 305, 537] # gives (-215:Assertion failed) Q.size() == Size(4,4) # cv2.reprojectImageTo3D(img_points, r_and_t_vec) # We assume a flat surface; ie z=0, to do 2d to 3d projection. 
# Convert redian to Rodrigues rvecs_rod, _ = cv2.Rodrigues(rvecs) # create (3,4) shaped r&amp;t_vec r_and_t_vec = np.zeros((3,4)) r_and_t_vec[:,:-1] = rvecs_rod r_and_t_vec[:,3] = tvecs.reshape(-1) # find scaling factor # r and t vector times any world coordinate point [x, y, z, 1] sacling_factor = np.dot(r_and_t_vec, np.array([4.92, 0.0, 0.0 ,1]).reshape((4,1))) #drop r3 r_and_t_vec_nor3 = np.delete(r_and_t_vec,2,1) # since z = 0, we take out r3 for i in range(len(img_points)): # 2D points uv1 = np.array([img_points[i][0], img_points[i][1], 1]) # Homography matrix mat2 = np.dot(cam_matrix, r_and_t_vec_nor3) # Inverse it inv_mat2 = np.linalg.inv(mat2) # multiply with uv1 result2 = np.dot(inv_mat2, uv1) * sacling_factor[2] print(&quot;wprld_points:&quot;, result2) # this should be same as img_points </code></pre> <p>However, the projection is off by 2-3 meters. ChatGpt solution can only do 3D to 2D projection using cv2.projectpoints.</p> <p>I tried using CV2 <a href="https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga549c2075fac14829ff4a58bc931c033d" rel="nofollow noreferrer">solvepnp</a> to get the Rvecs and Tvecs. Then use them to generate the (3, 4) projection matrix. I use the inverse of the projection matrix to get the 3D projection given 2D image points. The rvecs and rvecs from solvepnp are below,</p> <pre><code>r_vecs = array([[ 1.24420712], [-1.23779433], [ 1.33559117]]) t_vecs = array([[1.54571503], [1.46783797], [2.78172814]]) </code></pre> <p>I was expecting the projected 3D points to be close to the <code>world_points_real</code> but they are 2-3 meters off. I have tried with more points with no improvements. Where is the error coming from?</p> <p>Reference: 1. <a href="https://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp?rq=1">SolvePnP in C++</a></p>
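One likely source of the multi-metre error in the code above is the scaling factor: it is computed once from a single world point and reused for every pixel, but in s·[u, v, 1]ᵀ = H·[X, Y, 1]ᵀ the scale s is different for every point. With z = 0 the homography H = K·[r1 r2 t] can simply be inverted and each back-projected point normalised by its *own* third component. A self-contained numpy check of that recipe with a synthetic camera (not the calibration from the question):

```python
import numpy as np

K = np.array([[880.0,   0.0, 800.0],
              [  0.0, 877.0, 430.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # synthetic pose: identity rotation,
t = np.array([0.0, 0.0, 5.0])      # camera 5 m above the z = 0 plane

H = K @ np.column_stack([R[:, 0], R[:, 1], t])   # z = 0 homography
H_inv = np.linalg.inv(H)

def back_project(u, v):
    w = H_inv @ np.array([u, v, 1.0])
    return w[:2] / w[2]            # per-point normalisation is the key step

# forward-project a known world point, then recover it exactly
Xw = np.array([1.5, -2.0, 1.0])    # homogeneous (X, Y, 1); z = 0 implied
uvw = H @ Xw
u, v = uvw[:2] / uvw[2]
recovered = back_project(u, v)
```

With real data the remaining error then comes from calibration/pose quality (and from distortion, which should be undone on the pixel coordinates before applying H⁻¹), not from a fixed global scale.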
<python><opencv><computer-vision><camera-calibration><robotics>
2023-02-27 14:04:39
0
1,224
Farshid Rayhan
75,581,470
536,262
comparing pathlib.Paths finding if one is a subdir of the other
<p>How can I find out if a pathlib.Path() is a subdir of another?</p> <p>I can take the string of pathlib.PurePath(), but it only works if I have the whole path.</p> <pre><code>with zipfile.ZipFile(fname, 'w') as zip: for f in files: myrelroot = relroot # relative root for this zipfile if f&quot;{pathlib.PurePath(relroot)}&quot; not in f&quot;{pathlib.PurePath(f)}&quot;: myrelroot = None # outside of zipfile so we do not get ..\\.. try: zip.write(f, arcname=os.path.relpath(f, myrelroot)) except FileNotFoundError as e: log.warning(f&quot;file '{f}' not found: {e}&quot;) else: log.info(f&quot;adding {f} to {fname}&quot;) </code></pre>
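On Python 3.9+ this exists as `PurePath.is_relative_to`; on older versions the same check is `relative_to` in a try/except. Both are purely lexical (no symlink or `..` resolution, so `Path.resolve()` first if that matters). A sketch:

```python
from pathlib import PurePath

def is_subpath(path, root):
    """True if `path` equals `root` or lies under it (lexically)."""
    path, root = PurePath(path), PurePath(root)
    try:
        path.relative_to(root)   # path.is_relative_to(root) on Python 3.9+
        return True
    except ValueError:
        return False
```

Unlike the substring comparison in the question, this correctly rejects sibling directories such as `a/bc` when the root is `a/b`.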
<python><pathlib>
2023-02-27 14:04:00
2
3,731
MortenB
75,581,337
8,293,726
OSError: [Errno 24] Too many open files: '/home/ec2-user/car/1016780737.jpg'
<p>I am doing predictions on 100K images using following yolov8 code:</p> <pre><code>model = YOLO(self.weightpath) src_dir = self.src_Dir src_dir = src_dir+ '*' img_list = glob.glob(src_dir) results = model(img_list, max_det = 1) </code></pre> <p>I am getting following error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/ec2-user/Deployment/main/Quality_check.py&quot;, line 63, in &lt;module&gt; File &quot;/home/ec2-user/Deployment/utils/save_img_bins.py&quot;, line 19, in run File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/ultralytics/yolo/engine/model.py&quot;, line 102, in __call__ File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/ultralytics/yolo/engine/model.py&quot;, line 202, in predict File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/torch/autograd/grad_mode.py&quot;, line 27, in decorate_context File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/ultralytics/yolo/engine/predictor.py&quot;, line 116, in __call__ File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/ultralytics/yolo/engine/predictor.py&quot;, line 147, in stream_inference File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/ultralytics/yolo/engine/predictor.py&quot;, line 135, in setup_source File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/ultralytics/yolo/data/build.py&quot;, line 164, in load_inference_source File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/ultralytics/yolo/data/build.py&quot;, line 148, in check_source File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/ultralytics/yolo/data/dataloaders/stream_loaders.py&quot;, line 339, in autocast_list File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/PIL/Image.py&quot;, line 3227, in open OSError: [Errno 24] Too many open files: '/home/ec2-user/car/1016780737.jpg' </code></pre>
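Passing 100K paths in one call makes Ultralytics open every image up front (the traceback ends in its `autocast_list`, which loads each file with PIL), which exhausts the per-process file-descriptor limit. Besides raising the limit (`ulimit -n`), the usual fix is to feed the model in chunks, or to pass the source directory / `stream=True` so images load lazily. A minimal chunking helper; the `model(...)` call is the question's and is shown commented:

```python
def chunked(seq, size):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# results = []
# for batch in chunked(img_list, 500):   # 500 open files at a time, not 100k
#     results.extend(model(batch, max_det=1))

batches = list(chunked(list(range(10)), 4))
```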
<python><amazon-ec2><pytorch><yolo>
2023-02-27 13:49:54
1
1,333
Hitesh
75,581,224
339,652
Google Cloud Vision OCR - Language hints seem to be ignored
<p>I am using the Python framework of Google Cloud Vision to OCR passports. I want to support Ukrainian now, but the system has trouble recognizing handwritten parts.</p> <p>I tried to improve the results by giving language hints to the system, but it seems like those are entirely ignored.</p> <p>No matter what I set there, the recognized text and the language the system detects do not change at all. So to me, it looks like it is just ignoring the hint I give.</p> <p>E.g. numbers are recognized as language &quot;en&quot;, even if I change the language hint to &quot;de&quot; they are still returned as language &quot;en&quot;. Even giving random strings as a language hint isn't producing any error.</p> <pre><code># [...] client = vision.ImageAnnotatorClient(client_options={&quot;api_endpoint&quot;: endpoint}) response = client.document_text_detection( image=vision_img, image_context={&quot;language_hints&quot;: [&quot;uk&quot;]} ) # [...] </code></pre> <p>Are those hints just not supported using Python?</p>
<python><google-cloud-vision>
2023-02-27 13:39:58
1
843
Plankalkül
75,581,204
2,908,017
How to change Panel background color in Python VCL GUI app
<p>I'm using the <a href="https://github.com/Embarcadero/DelphiVCL4Python" rel="nofollow noreferrer">DelphiVCL GUI library for Python</a> and trying to change the background color on a <code>Panel</code> component, but it's not working</p> <p>I have the following code to create the <code>Form</code> and the <code>Panel</code> on my Form:</p> <pre><code>from delphivcl import * class frmMain(Form): def __init__(self, owner): self.Caption = 'Hello World' self.Width = 1000 self.Height = 500 self.Position = &quot;poScreenCenter&quot; self.myPanel = Panel(self) self.myPanel.Parent = self self.myPanel.Align = &quot;alClient&quot; self.myPanel.AlignWithMargins = True self.myPanel.Margins.Top = 100 self.myPanel.Margins.Right = 100 self.myPanel.Margins.Bottom = 100 self.myPanel.Margins.Left = 100 self.myPanel.Caption = &quot;Hello World!&quot; self.myPanel.Font.Size = 30 self.myPanel.Color = &quot;$00D2E525&quot; # Aqua Color </code></pre> <p>I'm trying to give it an Aqua background color ($00D2E525). My output Form then looks like this: <a href="https://i.sstatic.net/enOjM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/enOjM.png" alt="Python GUI VCL Hello World App" /></a></p> <p>My &quot;Hello World!&quot; Panel should have the aqua background color on it, but it's not showing. I'm setting the background color with this piece of code:</p> <pre><code>self.myPanel.Color = &quot;$00D2E525&quot; </code></pre>
<python><user-interface><vcl>
2023-02-27 13:38:30
1
4,263
Shaun Roselt
75,581,199
14,640,406
Pcap data to .ts file script
<p>I want to get the UDP stream of a pcap file, extract the raw data and save it to a .ts file. I know I can do that in Wireshark via: <code>analyze-&gt;follow-&gt;UDP stream-&gt;show and save data as raw-&gt;save as-&gt;video.ts</code>.</p> <p>How can I write a script in Python that does the same thing?</p>
<python><wireshark><scapy><pcap>
2023-02-27 13:37:35
1
309
carraro
75,581,192
4,502,325
Production flask app running in parallel with main program on a Raspberry pi
<p>I have built a system around a Raspberry pi that continuously takes images, runs some computer vision analyses on the images, and reads some variables from a sensor.</p> <p>Image files are saved on board as are analysis results and sensor data (in sqlite databases).</p> <p>Now, I need to present collected data (images and analyses results) in real time. I am doing this with a Flask app. Currently, I am running the main program and the flask app from one script using threading, which seem to work okay. But, how do I do this with a production server, so that the app can be accessed from other computers on the same network as the pi?</p> <p>Here's how I am currently running the main program and the webapp in &quot;parallel&quot;:</p> <pre><code># Data import time import sqlite3 from datetime import datetime # Webapp import threading from flask import Flask import random as rand def databaseSetup(): # Initialize SQLite database connection and create table to store sensor data connSensor = sqlite3.connect('sensorVariables.db') cSensor = connSensor.cursor() cSensor.execute('''CREATE TABLE IF NOT EXISTS sensorVariables (timestamp TEXT, temperature REAL, pressure REAL, humidity REAL)''') return connSensor, cSensor def main(): connSensor, cSensor = databaseSetup() i = 0 while True: print(&quot;Still running main worker: &quot;, i) temperature = rand.randint(12,25) pressure = rand.randint(1020, 1200) humidity = rand.randint(40,80) # Get current timestamp timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S') # Insert data into env database cSensor.execute(&quot;INSERT INTO sensorVariables VALUES (?, ?, ?, ?)&quot;, (timestamp, temperature, pressure, humidity)) connSensor.commit() print(&quot;Sensor variables saved&quot;) i += 1 time.sleep(5) ######## WEBAPP ######## app = Flask(__name__) def runApp(): app.run(debug=True, use_reloader=False, port=5000, host='0.0.0.0') @app.route('/') def index(): # connect to the sqlite database conn = 
sqlite3.connect('sensorVariables.db') c = conn.cursor() # retrieve the latest weather data from the database c.execute('SELECT temperature, humidity, pressure, timestamp FROM sensorVariables ORDER BY timestamp DESC LIMIT 1') latest_data = c.fetchone() temperature = latest_data[0] humidity = latest_data[1] pressure = latest_data[2] timestamp = latest_data[3] # close the database connection conn.close() return f'Time: {timestamp}, Temperature: {temperature}, Humidity: {humidity}, Pressure: {pressure}' ######## RUN ######## if __name__ == '__main__': try: t1 = threading.Thread(target=runApp).start() t2 = threading.Thread(target=main).start() except Exception as e: print(e) </code></pre>
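One hedged way to productionize the setup above is to stop co-hosting both roles in one interpreter: run the sensor loop as its own process (the two already share state only through the SQLite file) and serve the Flask app with a WSGI server such as gunicorn bound to 0.0.0.0, so other machines on the network can reach it. A sketch, assuming the Flask object `app` lives in `app.py` and the loop in a hypothetical `sensor_loop.py`:

```shell
pip install gunicorn

# terminal / systemd service 1: the data-collection loop
python sensor_loop.py

# terminal / systemd service 2: serve Flask on all interfaces, port 5000
gunicorn --bind 0.0.0.0:5000 --workers 2 app:app
```

Since both processes open the same SQLite file, enabling WAL mode (`PRAGMA journal_mode=WAL`) reduces reader/writer contention.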
<python><flask>
2023-02-27 13:36:55
1
396
Hjalte
75,581,175
4,393,334
How to calculate a percentile for each value in a PySpark data frame?
<p>Let's say I have a PySpark data frame with a column &quot;data&quot;.</p> <p>I would like to assign to each value in this column a &quot;Percentile&quot; value, with a bin width of 5.</p> <p>Here is a sketch of Python code and the desired result:</p> <pre><code>import numpy as np # Array of data data = [5,6,9,87,2,3,5,7,2,6,5,2,3,4,69,4] # Calculate percentiles list_percentile = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95] p = np.percentile(data, list_percentile) # =&gt; Desired result data_with_percentile = [(5, 60), (6, 65), ..., (4, 55)] </code></pre> <p>where the (5, 60) pair means that the value &quot;5&quot; lies in the 55-60% percentile bin.</p> <p>Any help is welcome!</p>
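For the single-machine sketch, each value can be mapped to a 5%-wide bin with `np.searchsorted` against the precomputed percentile edges; in PySpark the same edges could come from `approxQuantile` (or `percentile_approx`) and be applied in a UDF. Note the label at tied values depends on the chosen convention ("first edge >= value" below), so exact labels may differ slightly from the question's example pairs:

```python
import numpy as np

data = [5, 6, 9, 87, 2, 3, 5, 7, 2, 6, 5, 2, 3, 4, 69, 4]
edges = np.percentile(data, np.arange(5, 100, 5))   # edges for 5%, 10%, ..., 95%

# index of the first percentile edge >= value, rescaled to a % label (5..100)
bins = (np.searchsorted(edges, data, side="left") + 1) * 5
data_with_percentile = list(zip(data, bins.tolist()))
```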
<python><percentile>
2023-02-27 13:35:00
1
3,113
Andrii
75,581,172
3,910,269
Event grid does not trigger Azure Function when out binding is specified
<p>When I upload a file to a storage account container I have an <strong>Event Grid System Topic</strong> to detect this. I have an Event Subscription associated to it, to trigger an <strong>Azure Function</strong> with a binding <strong>in</strong>. At this point everything is working.</p> <p>However when I add a binding <strong>out</strong> to a cosmos db like this:</p> <pre><code>import logging import json import azure.functions as func def main(event_in: func.EventGridEvent, doc: func.Out[func.Document]) -&gt; func.HttpResponse: result = json.dumps({ 'id': event_in.id, 'data': event_in.get_json(), 'topic': event_in.topic, 'subject': event_in.subject, 'event_type': event_in.event_type, }) logging.info('Python EventGrid trigger processed an event: %s', result) return func.HttpResponse( &quot;This HTTP triggered function executed successfully.&quot;, status_code=200 ) </code></pre> <p>With this binding:</p> <pre><code>{ &quot;scriptFile&quot;: &quot;func.py&quot;, &quot;bindings&quot;: [ { &quot;type&quot;: &quot;eventGridTrigger&quot;, &quot;name&quot;: &quot;event_in&quot;, &quot;direction&quot;: &quot;in&quot; }, { &quot;type&quot;: &quot;cosmosDB&quot;, &quot;direction&quot;: &quot;out&quot;, &quot;name&quot;: &quot;doc&quot;, &quot;databaseName&quot;: &quot;MyDb&quot;, &quot;collectionName&quot;: &quot;my-files&quot;, &quot;createIfNotExists&quot;: &quot;true&quot;, &quot;connectionStringSetting&quot;: &quot;AzureCosmosDBConnectionString&quot; }, { &quot;type&quot;: &quot;http&quot;, &quot;direction&quot;: &quot;out&quot;, &quot;name&quot;: &quot;$return&quot; } ] } </code></pre> <p>the <strong>Event Subscription is sent but directly failed to deliver.</strong> The Function is never triggered.</p> <p>Here is the error I have from log analytics:</p> <pre><code>outcome=NotFound, latencyInMs=952, id=a7d79278-811f-420d-82c9-c1ab7ad4a4c0, outputEventSystemId=13b45faa-7e28-4ee4-bc1b-faea85ceb802, state=FilteredFailingDelivery, deliveryTime=2/27/2023 10:37:59 AM, 
deliveryCount=6, probationCount=5, deliverySchema=EventGridEvent, trackedSystemTopicState=CreatedExplicitlyByUser, eventSubscriptionDeliverySchema=EventGridEvent, outputEventFields=InputEvent| EventSubscriptionId| DeliveryTime| DeliveryCount| State| Id| ProbationCount| LastHttpStatusCode| LastDeliveryOutcome| DeliverySchema| LastDeliveryAttemptTime| SystemId| UseMappedResourceArmIdForBilling| TrackedSystemTopicState, outputEventFieldCount=14, requestExpiration=1/1/0001 12:00:00 AM, delivered=False id=b44393e7-e01e-0002-59d8- 493e1c061d90, inputEventSystemId=4bded2d3-0037-4eaf-8e9e-fa1e93cef82a publishTime=2/26/2023 11:50:53 AM, eventTime=2/26/2023 11:50:52 AM, eventType=Microsoft.Storage.BlobCreated, deliveryTime=1/1/0001 12:00:00 AM, filteringState=FilteringPending, inputSchema=EventGridEvent, publisher=MICROSOFT-STORAGE-STORAGEACCOUNTS.WESTUS-1.EVENTGRID.AZURE.NET, size=796, subject=/blobServices/default/containers/files/blobs/file.wav, inputEventFields=Id| PublishTime| SerializedBody| EventType| Topic| Subject| FilteringHashCode| SystemId| Publisher| FilteringTopic| TopicCategory| DataVersion| MetadataVersion| InputSchema| EventTime| FilteringPolicy, inputEventFieldCount=16, type=AzureFunction, subType=NotApplicable, supportsBatching=False, aadIntegration=False, managedIdentityType=None, armId=/subscriptions/7dd8/resourceGroups/rg-01/providers/Microsoft.Web/sites/my-func/functions/EventGridTrigger, deliveryResponse=NotFound, errorCode=NotFound, HttpRequestMessage: httpVersion=1.1, HttpResponseMessage: HttpVersion=1.1, StatusCode=NotFound(NotFound), StatusDescription=Not Found, ConnectionInfo=defaultConnectionLimit=1024, reusePortSupported=True, reusePort=True, </code></pre> <p>It looks like the Event Grid can't find my Azure Function now, is it a limitation of the Event Grid service or did I made a mistake?</p> <p>Thanks for your help!</p>
<python><azure><azure-functions><azure-cosmosdb>
2023-02-27 13:34:50
1
4,913
fandro
75,581,040
7,890,561
Check if string could be a filename
<p>I am looking for a way to check if an input string given by a user could be a valid file name for an output file, let's say a gif.</p> <p>The user may provide different kinds of input, e.g., they may provide a full path like <code>a/b/c.gif</code> or just a directory <code>a/b</code> or <code>a/b/</code>. I can check these using <code>os.path.isdir(user_path)</code> and <code>os.path.isdir(os.path.dirname(user_path))</code>.</p> <p>How should I check if I just receive a file name without a directory? I could do something like <code>not os.path.dirname(user_path) and user_path.endswith('.gif')</code> but that would still allow a lot of unwanted characters etc.</p> <p>I am sure there is a neater way to check this. Looking forward to your solution =)</p> <p>EDIT: at the time I am trying to check the user input, I do not have access to the full path I will later write to; I have to determine the parent directory first. So a solution that does not touch the file or similar would be preferred.</p>
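Without touching the filesystem, a purely lexical check can combine three tests: the string has no directory part, it has the expected suffix, and its stem contains only characters from an allow-list. The allow-list below is an assumption to tighten or loosen as needed; `pathlib` keeps the rest readable:

```python
import re
from pathlib import PurePath

# assumption: word characters plus space, dot and dash are acceptable
_ALLOWED = re.compile(r"^[\w][\w .-]*$")

def is_valid_gif_name(name: str) -> bool:
    p = PurePath(name)
    return (
        p.name == name                      # no directory component
        and p.suffix.lower() == ".gif"      # expected extension
        and len(p.stem) > 0                 # not just ".gif"
        and _ALLOWED.match(p.stem) is not None
    )
```

If the output file may land on Windows, reserved device names (`CON`, `NUL`, ...) and backslashes would need extra checks on top of this sketch.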
<python>
2023-02-27 13:22:16
0
886
Nyps
75,581,024
4,393,951
getting edges / lines of vertical lines in an image
<h1>Original Post</h1> <p>I have a batch of similar images which I need to analyze.</p> <p>After</p> <ul> <li>thresholding</li> <li>hole filling using <code>skimage.morphology.reconstruction</code> (example of usage <a href="https://scikit-image.org/docs/dev/auto_examples/features_detection/plot_holes_and_peaks.html" rel="nofollow noreferrer">here</a>)</li> <li>image closing</li> </ul> <p>I get something like this image:</p> <p><a href="https://i.sstatic.net/5oC9n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5oC9n.png" alt="enter image description here" /></a></p> <p>I'm interested in the edges of shape, specifically the steepness of the slope along vertical lines in the original grayscale image. I thought I could use Canny to get the contours which indeed gave me:</p> <p><a href="https://i.sstatic.net/6EgTM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6EgTM.png" alt="enter image description here" /></a></p> <h3>My questions:</h3> <ol> <li>How would I go about separating the approximately vertical and horizontal edges? The have very different meaning for me, as I'm not at all interested in what happens between adjacent horizontal lines (I would rather just cut those parts from the image), and I'm interested in what happens around vertical lines (white surrounding in the BW image).</li> <li>I'm would assume my lines should be straight (there could be a slope, but the noise is too high). How would I smoothen the contour lines?</li> <li>Eventually I wish to go back to the gray scale image and look at the pixel statistics in the vicinity of vertical lines. 
How can I extract the location info of lines from the edges found by Canny?</li> </ol> <p>My code:</p> <pre><code>im_th = cv2.inRange(img, 0, 500, cv2.THRESH_BINARY) seed = np.copy(im_th) seed[1:-1, 1:-1] = im_th.max() mask = im_th filled = reconstruction(seed, mask, method='erosion').astype(np.uint8) kernel = cv2.getStructuringElement(cv2.MORPH_CROSS,(4,4)) closed = cv2.morphologyEx(filled,cv2.MORPH_CLOSE, kernel=kernel) edges = cv2.Canny(cv2.medianBlur(closed, 5), 50, 150) </code></pre> <h1>Edit</h1> <p>Per the suggestions of Christoph Rackwitz and mimocha, I attempted using <code>findContours</code> and <code>HoughLinesP</code>. Both results look promising, but require further work to be of use.</p> <h3>findContours approach</h3> <p>Code:</p> <pre><code>contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) contour_img = cv2.drawContours(cv2.cvtColor(closed, cv2.COLOR_GRAY2BGR), contours, -1, (255,0,0), 3) </code></pre> <p>Resulting image (overlay over the closed image):</p> <p><a href="https://i.sstatic.net/9VRah.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9VRah.png" alt="enter image description here" /></a></p> <p>The contours are found nicely, but I still have 151 contour lines.
I would want to smooth the result and get fewer lines.</p> <h3>HoughLinesP approach</h3> <p>Code:</p> <pre><code>threshold = 15 min_line_length = 30 max_line_gap = 200 line_image = np.copy(cv2.cvtColor(closed, cv2.COLOR_GRAY2BGR)) * 0 lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold, None, min_line_length, max_line_gap) for line in lines: for x1,y1,x2,y2 in line: cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),5) lines_edges = cv2.addWeighted(cv2.cvtColor(closed, cv2.COLOR_GRAY2BGR), 0.2, line_image, 1, 0) </code></pre> <p>Resulting image:</p> <p><a href="https://i.sstatic.net/WPIcG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WPIcG.png" alt="enter image description here" /></a></p> <p>HoughLinesP indeed guarantees straight lines, and it finds the lines nicely, but I still have some lines which are too &quot;thick&quot; (you can see in the image some thick lines which are actually parallel lines). Thus I need some method for refining these results.</p> <h3>Summary so far:</h3> <ol> <li>Both the contours approach and the Hough transform approach show promise.</li> <li>The Hough transform may have an advantage, as it returns straight lines and also allows separating horizontal from vertical lines as shown <a href="https://stackoverflow.com/questions/70476773/detecting-near-horizontal-lines-using-image-processing">here</a>.</li> <li>I still need a method to merge/cluster lines.</li> </ol>
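For the remaining merge/cluster step, one common lightweight approach is to classify each segment by its angle, then cluster near-vertical segments by x-position (and near-horizontal ones by y), averaging each cluster into a single line. A cv2-free sketch of that idea on plain (x1, y1, x2, y2) tuples; the tolerance and gap thresholds are assumptions to tune:

```python
import math

def split_by_orientation(segments, tol_deg=20):
    """Split segments into near-vertical and near-horizontal by angle."""
    vertical, horizontal = [], []
    for x1, y1, x2, y2 in segments:
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180
        if abs(angle - 90) < tol_deg:
            vertical.append((x1, y1, x2, y2))
        elif angle < tol_deg or angle > 180 - tol_deg:
            horizontal.append((x1, y1, x2, y2))
    return vertical, horizontal

def cluster_vertical(segments, x_gap=10):
    """Merge near-vertical segments whose mean x differs by < x_gap;
    return one representative x per cluster."""
    if not segments:
        return []
    xs = sorted((x1 + x2) / 2 for x1, y1, x2, y2 in segments)
    clusters = [[xs[0]]]
    for x in xs[1:]:
        if x - clusters[-1][-1] < x_gap:
            clusters[-1].append(x)
        else:
            clusters.append([x])
    return [sum(c) / len(c) for c in clusters]

segs = [(100, 0, 101, 50), (103, 5, 102, 60), (300, 0, 300, 40), (0, 10, 80, 12)]
vert, horiz = split_by_orientation(segs)
merged_x = cluster_vertical(vert)   # two "thick" parallels collapse to one x
```

The same per-cluster x positions then index straight back into the grayscale image for the pixel statistics around each vertical line.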
<python><opencv><image-processing><contour><edge-detection>
2023-02-27 13:20:25
2
499
Yair M
75,580,952
14,269,252
Filter data frames based on a list of data frames and keep column which is stored in a dictionary
<p>I wrote the code as follows to read data from ParquetDataset and save the result into a list.</p> <p><em>1- I want to do some filtration based on the different datasets in the list (see example 1).</em></p> <p><em>2- I want to iterate over the dictionary for each data frame, keep the columns that are defined in the dictionary, remove the rest, and at the end get a list of the manipulated data frames.</em></p> <p>Unfortunately I don't know how to modify my code to get the result.</p> <pre><code>def readfile()-&gt;pd.DataFrame: dfList=[] val1 = [1,2,4] for j in [&quot;df0&quot;,&quot;df1&quot;,&quot;df2&quot;,&quot;df3&quot;]: vars()[j] = pq.ParquetDataset( F&quot;{path}/{j}.parquet&quot;, filesystem=s3).read_pandas().to_pandas().query(&quot;id in @val1&quot;) print(j) dfList.append(vars()[j]) dfList[0].head(3) return dfList </code></pre> <p>Code for question 1:</p> <pre><code> if i in (&quot;df1&quot;,&quot;df2&quot;,&quot;df4&quot;): dfList[0][['T']] = &quot;&quot; dfList[1][['TYPE']] = &quot;&quot; dfList[2][['val']] = &quot;&quot; </code></pre> <pre><code> dic= {&quot;df0&quot;:{&quot;id&quot;:&quot;P_ID&quot;,&quot;result_date&quot;:&quot;DATE&quot;,&quot;test_name&quot;:&quot;CODE&quot;,&quot;unit&quot;:&quot;T&quot;,&quot;result&quot;:&quot;val&quot;}, &quot;df1&quot;:{&quot;id&quot;:&quot;T_ID&quot;,&quot;DIA_DATE&quot;:&quot;DATE&quot;,&quot;DI_CODE&quot;:&quot;CODE&quot;,&quot;D_CODE_TYPE&quot;:&quot;TYPE&quot;}, &quot;df2&quot;:{&quot;id&quot; :&quot;P_id&quot;,&quot;PATH_ID&quot;,&quot;PR_DATE&quot;:&quot;DATE&quot;,&quot;PR_CODE&quot;:&quot;CODE&quot;,&quot;PR_CODE_TYPE&quot;:&quot;TYPE&quot;}, &quot;df3&quot;:{&quot;id&quot;:&quot;P_ID&quot;,&quot;PATH_ID&quot;,&quot;birth&quot;:&quot;DATE&quot;}, } </code></pre>
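For part 2 of the question above, a possible sketch of keeping and renaming only the columns listed in the per-frame dictionary. The column names here are shortened, hypothetical stand-ins for the ones in the question:

```python
import pandas as pd

# hypothetical mapping per frame: old column name -> new column name
dic = {"df0": {"id": "P_ID", "result": "val"}}

df0 = pd.DataFrame({"id": [1, 2], "result": [10, 20], "extra": [0, 0]})

# keep only the mapped columns, then rename them; "extra" is dropped
kept = df0[list(dic["df0"])].rename(columns=dic["df0"])
```

Applied over the list, this would be `[df[list(m)].rename(columns=m) for df, m in zip(dfList, dic.values())]`, assuming list order matches the dictionary order.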
<python><pandas>
2023-02-27 13:14:41
1
450
user14269252
75,580,901
2,908,017
How to change Panel margins in Python VCL GUI app
<p>I'm using the <a href="https://github.com/Embarcadero/DelphiVCL4Python" rel="nofollow noreferrer">DelphiVCL GUI library for Python</a> and trying to change the margins on a <code>Panel</code> component, but it's not working</p> <p>I have the following code to create the <code>Form</code> and the <code>Panel</code> on my Form:</p> <pre><code>from delphivcl import * class frmMain(Form): def __init__(self, owner): self.Caption = 'Hello World' self.Width = 1000 self.Height = 500 self.Position = &quot;poScreenCenter&quot; self.myPanel = Panel(self) self.myPanel.Parent = self self.myPanel.Align = &quot;alClient&quot; self.myPanel.Margins.Top = 100 self.myPanel.Margins.Right = 100 self.myPanel.Margins.Bottom = 100 self.myPanel.Margins.Left = 100 self.myPanel.Caption = &quot;Hello World!&quot; self.myPanel.Font.Size = 30 self.myPanel.StyleElements = &quot;&quot; self.myPanel.Color = &quot;$00D2E525&quot; </code></pre> <p>My output Form then looks like this: <a href="https://i.sstatic.net/vQTP7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vQTP7.png" alt="Python Hello World GUI App" /></a></p> <p>My &quot;Hello World!&quot; Panel should have margins on top/right/bottom/left, but it's not showing. I'm setting the margins with this piece of code:</p> <pre><code>self.myPanel.Margins.Top = 100 self.myPanel.Margins.Right = 100 self.myPanel.Margins.Bottom = 100 self.myPanel.Margins.Left = 100 </code></pre>
<python><user-interface><vcl>
2023-02-27 13:09:45
1
4,263
Shaun Roselt
75,580,886
3,616,977
Open CV ImportError: /lib/x86_64-linux-gnu/libwayland-client.so.0: undefined symbol: ffi_type_uint32, version LIBFFI_BASE_7.0
<p>I have installed OpenCV and when trying to import cv2 in python, I get the following error. The import was working fine until I installed/un-installed and re-installed tensor flow.</p> <p>OpenCV has been installed in a conda environment using cmake. Any idea how to fix this?</p> <pre><code>Python 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import cv2 Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/deleeps/anaconda3/envs/zscore/lib/python3.10/site-packages/cv2/__init__.py&quot;, line 102, in &lt;module&gt; bootstrap() File &quot;/home/deleeps/anaconda3/envs/zscore/lib/python3.10/site-packages/cv2/__init__.py&quot;, line 90, in bootstrap import cv2 ImportError: /lib/x86_64-linux-gnu/libwayland-client.so.0: undefined symbol: ffi_type_uint32, version LIBFFI_BASE_7.0 &gt;&gt;&gt; $ ldconfig -p | grep libwayland-client libwayland-client.so.0 (libc6,x86-64) =&gt; /lib/x86_64-linux-gnu/libwayland-client.so.0 libwayland-client.so (libc6,x86-64) =&gt; /lib/x86_64-linux-gnu/libwayland-client.so libwayland-client++.so.0 (libc6,x86-64) =&gt; /lib/x86_64-linux-gnu/libwayland-client++.so.0 </code></pre>
<python><opencv><conda><undefined-symbol><libffi>
2023-02-27 13:07:53
2
557
user3616977
75,580,847
10,271,487
Issues with FuncAnimation in Matplotlib
<p>I have a set of scatter data that I am trying to animate with FuncAnimation but am not getting any output. I don't use matplotlib often and have never tried to animate anything before except in MATLAB.</p> <p>That being said, I have dataframes with x,y data but cannot get the plot to work. The cv_data, of_data, ol_data, nf_data, and nl_data variables are dataframes with 'ewm_x' and 'ewm_y' as the x,y axis values.</p> <pre><code>fig, ax = plt.subplots() sc = ax.scatter([],[]) def update(frame): x_pos = [cv_data['ewm_x'].loc[frame], of_data['ewm_x'].loc[frame], ol_data['ewm_x'].loc[frame], nf_data['ewm_x'].loc[frame], nl_data['ewm_x'].loc[frame]] y_pos = [cv_data['ewm_y'].loc[frame], of_data['ewm_y'].loc[frame], ol_data['ewm_y'].loc[frame], nf_data['ewm_y'].loc[frame], nl_data['ewm_y'].loc[frame]] sc.set_offsets(np.hstack((x_pos, y_pos))) return sc animation = FuncAnimation(fig, update, frames=cv_data.index.values, interval=20) plt.show() </code></pre> <p>This is based on this earlier question <a href="https://stackoverflow.com/questions/26892392/matplotlib-funcanimation-for-scatter-plot">scatter example 1</a></p>
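One likely problem in the `update` function above: `set_offsets` expects an (N, 2) array of points, while `np.hstack` on two length-N lists produces a flat length-2N array. A minimal sketch of the difference, with no matplotlib needed:

```python
import numpy as np

x_pos = [1.0, 2.0, 3.0]
y_pos = [4.0, 5.0, 6.0]

flat = np.hstack((x_pos, y_pos))         # shape (6,): not what set_offsets wants
pairs = np.column_stack((x_pos, y_pos))  # shape (3, 2): one (x, y) row per point
```

Under that assumption, `sc.set_offsets(np.column_stack((x_pos, y_pos)))` would place the five points correctly.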
<python><matplotlib>
2023-02-27 13:03:27
1
309
evan
75,580,833
3,214,538
Can I get an extended class object as a result of an SQLAlchemy relationship?
<p>I have the following structure for my Python project:</p> <pre><code>app/ ├─ classes/ │ ├─ person │ ├─ address ├─ database.py ├─ main.py ├─ models.py </code></pre> <p>database.py contains the connection details and creates the engine and session to be used:</p> <pre><code># database.py from sqlalchemy import create_engine from sqlalchemy.orm import declarative_base, sessionmaker, scoped_session Base = declarative_base() engine = create_engine('sqlite://', pool_pre_ping=True) session_factory = sessionmaker(bind=engine) ScopedSession = scoped_session(session_factory) class Repr: def __repr__(self) -&gt; str: # custom implementation of the __repr__ method print('some text') </code></pre> <p>models.py has the actual table definitions and relationships between the tables:</p> <pre><code># models.py from database import Base, Repr, engine from sqlalchemy import Column, ForeignKey, Integer, String, Text from sqlalchemy.orm import backref, relationship class PersonDB(Base, Repr): __tablename__ = 'persons' id = Column(Integer, primary_key=True, nullable=False) address_id = Column(Integer, ForeignKey('address.id'), nullable=False) first_name = Column(String(45)) last_name = Column(String(45)) # relationships address = relationship('AddressDB', backref=backref('persons')) class AddressDB(Base, Repr): __tablename__ = 'address' id = Column(Integer, primary_key=True, nullable=False) street = Column(Text) city = Column(Text) Base.metadata.create_all(engine) </code></pre> <p>In the classes folder each table is added as its own file inheriting from the relevant table class in models.py, potentially extending it with additional methods and properties, and otherwise merely representing an alias to be imported from classes:</p> <pre><code># person.py from models import PersonDB class Person(PersonDB): @property def full_name(self) -&gt; str: return f'{self.first_name} {self.last_name}' </code></pre> <pre><code># address.py from models import AddressDB class Address(AddressDB): # no changes to the base class required.
# The class merely exists as an alias importable from classes. pass </code></pre> <p>I like this approach because it separates the extended logic from the base table classes, keeps the individual files more manageable, and encapsulates the logic. However, the problem with this is that if I use a relationship to get from one table to another, it returns the base class rather than the extended class, meaning I can't use additionally implemented properties or methods. Example:</p> <pre><code>for p in a.persons: print(p.full_name) # Error: p is PersonDB not Person object </code></pre> <p>Is there a way around this, or should I reconsider my approach? Ideally it would work seamlessly, i.e. the relationship is changed in a way that a.persons returns a list of Person objects rather than having to somehow convert a PersonDB object into a Person object in the code that calls a.persons.</p>
<python><inheritance><sqlalchemy><relationship>
2023-02-27 13:02:14
0
443
Midnight
75,580,780
15,915,737
Run flow on prefect cloud without running local agent locally?
<p>I'm trying to deploy my flow, but I don't know what I should do to completely deploy it (serverless).</p> <p>I'm using the free tier of Prefect Cloud and I have created a storage and a process block.</p> <p>The steps I have done:</p> <ul> <li>Build deployment</li> </ul> <pre><code>$ prefect deployment build -n reporting_ff_dev-deployment flow.py:my_flow </code></pre> <ul> <li>Apply configuration</li> </ul> <pre><code>$ prefect deployment apply &lt;file.yaml&gt; </code></pre> <ul> <li>Create block</li> </ul> <pre><code>from prefect.filesystems import LocalFileSystem from prefect.infrastructure import Process #STORAGE my_storage_block = LocalFileSystem( basepath='~/ff_dev' ) my_storage_block.save( name='ff-dev-storage-block', overwrite=True) #INFRA my_process_infra = Process( working_dir='~/_ff_dev_work', ) my_process_infra.save( name='ff-dev-process-infra', overwrite=True) </code></pre> <ul> <li>deploy block</li> </ul> <pre><code>$ prefect deployment build -n &lt;name&gt; -sb &lt;storage_name&gt; -ib &lt;infra_name&gt; &lt;entry_point.yml&gt; -a </code></pre> <p>I know that Prefect Cloud is a control system rather than a storage medium, but as I understand it, a storage block -&gt; stores the code and a process block -&gt; runs the code. What is the next step to run the flow without a local agent?</p>
<python><deployment><prefect>
2023-02-27 12:56:30
2
418
user15915737
75,580,592
3,053,450
Why is tqdm output directed to sys.stderr and not to sys.stdout?
<p>According to the <a href="https://docs.python.org/3/library/sys.html" rel="nofollow noreferrer">Python documentation concerning <code>sys.stdout</code> and <code>sys.stderr</code></a>:</p> <blockquote> <p>stdout is used for the output of print() and expression statements and for the prompts of input();</p> </blockquote> <blockquote> <p>The interpreter’s own prompts and its error messages go to stderr.</p> </blockquote> <p>Nevertheless, according to the documentation of <a href="https://tqdm.github.io/docs/tqdm/" rel="nofollow noreferrer">tqdm</a>, the default output is sys.stderr.</p> <p>I am confused about why this would be the case, given that it does not seem to be related to the interpreter's own prompts or error messages. What am I missing? Why is tqdm output directed to sys.stderr and not to sys.stdout?</p> <p>Edit: I think that the discussion here sort of answers this: <a href="https://stackoverflow.com/questions/35461687/when-to-use-sys-stdout-instead-of-sys-stderr">When to use sys.stdout instead of sys.stderr?</a></p>
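A common rationale behind the convention the question asks about: writing progress to stderr keeps stdout clean for the program's actual output, so results can be piped or redirected without progress-bar noise mixed in. A small stdlib-only sketch of this separation (this is an illustration of the principle, not tqdm itself):

```python
import sys
from contextlib import redirect_stdout, redirect_stderr
from io import StringIO

out, err = StringIO(), StringIO()
with redirect_stdout(out), redirect_stderr(err):
    print("final result")                    # real output -> stdout (pipeable)
    print("progress: 50%", file=sys.stderr)  # status/diagnostics -> stderr

# stdout now holds only the data a downstream pipe would consume
```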
<python><stdout><stderr><sys><tqdm>
2023-02-27 12:39:16
1
11,221
Heberto Mayorquin
75,580,522
14,751,525
Why do edge detection results look different for uint8 and float32?
<p>I use OpenCV's Sobel filter for edge detection. I was wondering why the output looks different when I run the 2 following lines.</p> <pre><code># uint8 output img_uint8 = cv2.Sobel(img_gray, cv2.CV_8U, dx=1, dy=0, ksize=5) # float32 output img_float32 = cv2.Sobel(img_gray, cv2.CV_32F, dx=1, dy=0, ksize=5) </code></pre> <p>This is how the 2 outputs look:</p> <p><a href="https://i.sstatic.net/niN2U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/niN2U.png" alt="enter image description here" /></a></p> <p>The float32 output appears mostly gray whereas the uint8 output appears mostly black. Are negative values of float32 output displayed as gray? Or how are negative values treated when we display the image?</p>
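The behaviour the question describes comes from two separate effects: writing Sobel responses into a `uint8` destination saturates them into [0, 255], so negative edges vanish (black), while a float destination keeps them, and a display that linearly maps min to black and max to white then shows a response of 0 as mid-gray. The saturation part can be illustrated with numpy alone, no OpenCV needed (a sketch of the effect, not OpenCV's internal code):

```python
import numpy as np

responses = np.array([-300.0, -10.0, 0.0, 10.0, 300.0])

# what a uint8 destination effectively does: saturate into [0, 255], so negatives vanish
as_uint8 = np.clip(responses, 0, 255).astype(np.uint8)

# what a linear display of the float image does: stretch [min, max] onto [0, 1],
# so a response of 0 lands in the middle (gray)
stretched = (responses - responses.min()) / (responses.max() - responses.min())
```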
<python><opencv><matplotlib><image-processing>
2023-02-27 12:30:52
1
694
Abdulwahab Almestekawy
75,580,508
13,320,076
YOLOv8 does not take torch tensor as input in python script
<p>I have a custom neural network where I want to use the output of a YOLO model run on the same input as part of my target. (E.g. number of objects in the image.) For that I built my own class as follows:</p> <pre><code>class yolo_mask_model(pl.LightningModule): def __init__(self, weight_path = '/mypath/weights/best.pt'): super(yolo_mask_model, self).__init__() self.save_hyperparameters() self.pretrained_yolo = YOLO(weight_path) def forward(self, input): model_input = list(255 * np.transpose(input.cpu().numpy(),(0,2,3,1))) yolo_output = self.pretrained_yolo(model_input, stream=False) ... some more code </code></pre> <p>However, I find the step to go from <code>torch.tensor</code> to a list of numpy arrays very inefficient, since it requires shifting data from my GPU to the CPU and back.</p> <p>When I just plug in the tensor directly I receive the following error message:</p> <p><code>Exception has occurred: AssertionError Expected PIL/np.ndarray image type, but got &lt;class 'torch.Tensor'&gt;</code></p> <p>Is there any way around this behaviour?</p>
<python><pytorch><neural-network><object-detection><yolo>
2023-02-27 12:29:30
1
631
Tom S
75,580,486
11,665,178
Firebase app check in python AWS lambda not working because of cryptography compilation
<p>I am trying to validate App Check tokens from my AWS Lambda in Python under the x86_64 architecture.</p> <p>When I execute my AWS Lambda, I get the following error:</p> <pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': /var/task/cryptography/hazmat/bindings/_rust.abi3.so: invalid ELF header Traceback (most recent call last): </code></pre> <p>I have seen this <a href="https://stackoverflow.com/questions/70649979/migrate-to-arm64-on-aws-lambda-show-error-unable-to-import-module-encryptor-la">answer</a>, but I am working on macOS x86_64, so I have the same architecture as the target.</p> <p>How can I solve this issue, since it seems I have compiled it for the correct architecture?</p> <p>Note: I know that this is from Firebase App Check because it's the only dependency I have in my lambda, and if I remove it everything is fine.</p> <p>EDIT: here is my other <a href="https://stackoverflow.com/questions/75644507/python-aws-lambda-in-error-because-of-pyjwtcrypto-cryptography">question</a>, very related to this one. If someone answers the other, I will just have to compile <code>firebase-admin</code> on my docker ubuntu container.</p>
<python><amazon-web-services><aws-lambda><firebase-admin><firebase-app-check>
2023-02-27 12:26:58
1
2,975
Tom3652
75,580,410
3,668,129
How to convert string of time to milliseconds?
<p>I have a string which contains time (Hour-Minute-Second-Millisecond):</p> <pre><code>&quot;00:00:12.750000&quot; </code></pre> <p>I tried to convert it to a milliseconds number without any success:</p> <pre><code>dt_obj = datetime.strptime(&quot; 00:00:12.750000&quot;,'%H:%M:%S') millisec = dt_obj.timestamp() * 1000 </code></pre> <p>I'm getting this error:</p> <pre><code>ValueError: time data ' 00:00:12.750000' does not match format '%H:%M:%S' </code></pre> <p>How can I convert it to milliseconds? (it needs to be 12*1000+750 = 12750)</p>
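A stdlib-only sketch of the conversion asked for above: parse the fractional seconds with `%f` and sum the time components (note the stray leading space in the question's string also has to be stripped for `strptime` to match):

```python
from datetime import datetime

s = "00:00:12.750000"

# %f consumes the microseconds part that %H:%M:%S alone rejects
t = datetime.strptime(s.strip(), "%H:%M:%S.%f")
millisec = (t.hour * 3600 + t.minute * 60 + t.second) * 1000 + t.microsecond // 1000
```

Summing the fields directly avoids `timestamp()`, which would mix in the (epoch) date and local timezone.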
<python><python-3.x>
2023-02-27 12:19:01
1
4,880
user3668129
75,580,363
5,710,684
Apply varying function for pandas dataframe depending on column arguments being passed
<p>I would like to apply a function to each row of a pandas dataframe. Instead of the argument being variable across rows, it's the function itself that is different for each row depending on the values in its columns. Let's be more concrete:</p> <pre><code>import pandas as pd from scipy.interpolate import interp1d d = {'col1': [1, 2], 'col2': [2, 4], 'col3': [3, 6]} df = pd.DataFrame(data=d) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;"></th> <th style="text-align: center;">col1</th> <th style="text-align: center;">col2</th> <th style="text-align: center;">col3</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">0</td> <td style="text-align: center;">1</td> <td style="text-align: center;">2</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">2</td> <td style="text-align: center;">4</td> <td style="text-align: center;">6</td> </tr> </tbody> </table> </div> <p>Now, what I would like to achieve is to extrapolate columns 1 to 3 row-wise. For the first row, this would be:</p> <pre><code>f_1 =interp1d(range(df.shape[1]), df.loc[0], fill_value='extrapolate') </code></pre> <p>with the extrapolated value <code>f_1(df.shape[1]).item() = 4.0</code>.</p> <p>So the column I would like to add would be:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">col4</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">4</td> </tr> <tr> <td style="text-align: center;">8</td> </tr> </tbody> </table> </div> <p>I've tried something like following:</p> <pre><code>import numpy as np def interp_row(row): n = row.shape[1] fun = interp1d(np.arange(n), row, fill_value='extrapolate') return fun(n+1).item() df['col4'] = df.apply(lambda row: interp_row(row)) </code></pre> <p>Can I make this work?</p>
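A sketch of the row-wise linear extrapolation described above, using a degree-1 fit instead of `interp1d`. Two notes on the question's attempt: a row passed by `apply` is a Series, so `row.shape[1]` fails; and evaluating at `n` (not `n+1`) gives the first index past the existing columns. `axis=1` is needed so `apply` passes rows:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [2, 4], 'col3': [3, 6]})

def interp_row(row):
    n = len(row)
    # fit a line through (0, col1), (1, col2), (2, col3), then evaluate at x = n
    slope, intercept = np.polyfit(np.arange(n), row.to_numpy(dtype=float), 1)
    return slope * n + intercept

df['col4'] = df.apply(interp_row, axis=1)
```

For exactly linear rows this matches `interp1d(..., fill_value='extrapolate')` evaluated at `n`; for noisy rows it is a least-squares fit rather than an extension of the last segment, which may or may not be what is wanted.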
<python><pandas><apply>
2023-02-27 12:13:12
1
609
HannesZ
75,580,338
8,406,122
Converting a string to a list separated by spaces
<p>Say I have a string like this</p> <pre><code>s=&quot;a b c de fgh&quot; </code></pre> <p>Now I want to convert it to a list like this</p> <p><code>list1=['a','b','c','de','fgh']</code></p> <p>That is, I will separate the string at each space, make each part a string of its own, and store them in a list. Is there an easier way to do this in a single-line command, rather than iterating over the string, breaking it up, and adding to the list whenever a space is found?</p>
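The built-in `str.split` does exactly what the question describes; with no arguments it splits on any run of whitespace:

```python
s = "a b c de fgh"
list1 = s.split()  # one string per whitespace-separated chunk
```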
<python>
2023-02-27 12:10:35
1
377
Turing101
75,580,337
9,274,940
pandas group by and aggregate in a list based on condition
<p>This code groups <em>X</em> columns and aggregates <em>Y</em> column in one list when the other values are the same. I want to do the same, aggregate into a list but <strong>based in a condition</strong>:</p> <p>As you can see in the example, from 4 rows I obtain 2 rows, since it's aggregating the <em>age</em> column. My condition is that I only want to aggregate them if the bins are next to each other.</p> <p>In other words: bin [0,20] can only be aggregated with bin [21,40], and bin [21,40] can be aggregated with [0,20] and [41,60], and so on...</p> <p>I've used the .agg method for that.</p> <p>Any ideas or suggestions are welcome.</p> <pre><code># Import pandas library import pandas as pd # Create a sample DataFrame df = pd.DataFrame({ 'country': ['1', '1', '1', '1'], 'age': [5, 25, 45, 70], 'gender': ['M', 'M', 'M', 'M'], 'language': ['A', 'B', 'B', 'A'] }) # Define the age bins with custom groups age_bins = [(0, 20), (21, 40), (41, 60), (61, 100)] age_labels = ['0-20', '21-40', '41-60', '61-100'] # Define a custom function to group the data based on adjacent age ranges def custom_age_group(age): for i, (start, end) in enumerate(age_bins): if age &gt;= start and age &lt;= end: return age_labels[i] return 'Unknown' # Apply the custom function to create a new column with the custom age groups df['age_group'] = df['age'].apply(custom_age_group) # Group the data based on country, gender, and custom age group df_out = df.groupby(['gender', 'country','language'])['age_group'].agg(list).reset_index() # Print the result print(df_out) </code></pre>
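The adjacency condition described above can be handled outside pandas: sort the labels by their bin order and group consecutive runs. A sketch (the `merge_adjacent` helper name is my own, not from the question; it would run on each group's label list before or instead of the plain `.agg(list)`):

```python
age_labels = ['0-20', '21-40', '41-60', '61-100']
order = {label: i for i, label in enumerate(age_labels)}

def merge_adjacent(labels):
    """Group bin labels into runs of bins that sit directly next to each other."""
    s = sorted(set(labels), key=order.get)
    groups, current = [], [s[0]]
    for label in s[1:]:
        if order[label] == order[current[-1]] + 1:
            current.append(label)   # adjacent bin: extend the current run
        else:
            groups.append(current)  # gap between bins: start a new run
            current = [label]
    groups.append(current)
    return groups

result = merge_adjacent(['0-20', '21-40', '61-100'])
```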
<python><pandas><dataframe>
2023-02-27 12:10:30
1
551
Tonino Fernandez
75,580,288
1,878,788
How to preserve category dtype in pandas multiindex when concatenating data frames?
<p>I'm handling data (in my real use case, chromosome names, but here I used dummy names) for which I want to be able to control the sort order, and that will be part of a <code>MultiIndex</code> (also containing positions within chromosomes: I want to sort my data by chromosome, then position).</p> <p>To ensure the desired sort order, it seems possible to have a <code>Categorical</code> in the index. However, the dtype is lost from the <code>MultiIndex</code> once I concatenate dataframes.</p> <p>(In the following example, &quot;A&quot; plays the role of my chromosome information, and &quot;B&quot; plays the role of the positional information. &quot;C&quot; is some unique locus identifier.)</p> <pre><code>df1 = pd.DataFrame({ &quot;A&quot;: pd.Categorical( [&quot;X9&quot;, &quot;X9&quot;, &quot;X10&quot;, &quot;X10&quot;], categories=[&quot;X8&quot;, &quot;X9&quot;, &quot;X10&quot;], ordered=True), &quot;B&quot;: [1, 2, 1, 2], &quot;C&quot;: [&quot;9_1&quot;, &quot;9_2&quot;, &quot;10_1&quot;, &quot;10_2&quot;], &quot;1&quot;: [1, 2, 3, 4]} ).set_index([&quot;A&quot;, &quot;B&quot;, &quot;C&quot;]) print(df1.index.dtypes) df2 = pd.DataFrame({ &quot;A&quot;: pd.Categorical( [&quot;X8&quot;, &quot;X8&quot;, &quot;X10&quot;, &quot;X10&quot;], categories=[&quot;X8&quot;, &quot;X9&quot;, &quot;X10&quot;], ordered=True), &quot;B&quot;: [1, 2, 1, 2], &quot;C&quot;: [&quot;8_1&quot;, &quot;8_2&quot;, &quot;10_1&quot;, &quot;10_2&quot;], &quot;2&quot;: [1, 2, 3, 4]} ).set_index([&quot;A&quot;, &quot;B&quot;, &quot;C&quot;]) print(df2.index.dtypes) df = pd.concat([df1, df2], axis=1).sort_index() print(df.index.dtypes) print(df.to_string()) </code></pre> <p>The above code generates the following output:</p> <pre><code>A category B int64 C object dtype: object A category B int64 C object dtype: object A object B int64 C object dtype: object 1 2 A B C X10 1 10_1 3.0 3.0 2 10_2 4.0 4.0 X8 1 8_1 NaN 1.0 2 8_2 NaN 2.0 X9 1 9_1 1.0 NaN 2 9_2 2.0 NaN </code></pre> <p>We can see 
that the index-sorted concatenated dataframe is sorted alphabetically on level &quot;A&quot;, which is coherent with the fact that the dtype is not categorical any more, but I want &quot;8&quot; and &quot;9&quot; to come before &quot;10&quot;, and I can't just drop the &quot;X&quot; and convert these names to ints (remember that these are supposed to be chromosome names, where, in the case of humans, we can have chromosomes &quot;X&quot; and &quot;Y&quot;).</p> <p>How can I preserve the index dtypes when concatenating?</p>
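One workaround for the question above is to restore the categorical dtype on the index level after the concat, before sorting. A reduced sketch (two index levels instead of the question's three, smaller frames):

```python
import pandas as pd

cat = pd.CategoricalDtype(["X8", "X9", "X10"], ordered=True)

df1 = pd.DataFrame({"A": pd.Categorical(["X9", "X10"], dtype=cat),
                    "B": [1, 1], "1": [1, 3]}).set_index(["A", "B"])
df2 = pd.DataFrame({"A": pd.Categorical(["X8", "X10"], dtype=cat),
                    "B": [1, 1], "2": [1, 3]}).set_index(["A", "B"])

df = pd.concat([df1, df2], axis=1)
# concat dropped the categorical dtype on level "A"; re-apply it before sorting
df = df.reset_index()
df["A"] = df["A"].astype(cat)
df = df.set_index(["A", "B"]).sort_index()
```

This round-trips through columns, which is wasteful for large frames but keeps the code simple; the ordered dtype then makes `sort_index` put "X8" and "X9" before "X10".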
<python><pandas><bioinformatics><multi-index><categorical-data>
2023-02-27 12:05:10
1
8,294
bli
75,580,278
6,013,016
How to reorder telegram posts already sent (python)
<p>Is it possible to change the order of sent posts on telegram channel. If so how to do that? Let's say I have these posts sent in my telegram channel:</p> <p><a href="https://i.sstatic.net/tw04d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tw04d.png" alt="enter image description here" /></a></p> <p>I wanna change the order of 2nd and 3rd posts like this:</p> <p><a href="https://i.sstatic.net/awTqa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/awTqa.png" alt="enter image description here" /></a></p> <p>And achieve this:</p> <p><a href="https://i.sstatic.net/ntndt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ntndt.png" alt="enter image description here" /></a></p> <p>Is that possible? Or am I the funniest person in the history of programming?</p>
<python><telegram><telegram-bot>
2023-02-27 12:04:48
1
5,926
Scott
75,580,189
4,113,031
filter None value from python list returns: TypeError: boolean value of NA is ambiguous
<p>I used to filter out None values from a python (3.9.5) list using the &quot;filter&quot; method. Currently while upgrading several dependencies (pandas 1.3.1, numpy 1.23.5, etc.) I get the following:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd x = pd.array(['This is', 'some text', None, 'data.'], dtype=&quot;string&quot;) x = list(x.unique()) list(filter(None, x)) </code></pre> <p>returns: <code>TypeError: boolean value of NA is ambiguous</code></p> <p>I'll appreciate any good explanation of what was changed and how to solve it, please.</p>
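The change comes from pandas' nullable "string" dtype: its missing value is `pd.NA`, not `None`, and `bool(pd.NA)` raises the `TypeError` seen above, which is exactly what `filter(None, ...)` triggers. A sketch of an NA-safe filter:

```python
import pandas as pd

x = pd.array(['This is', 'some text', None, 'data.'], dtype="string")
vals = list(x.unique())

# filter(None, vals) calls bool() on each item, and bool(pd.NA) raises;
# test for missingness explicitly instead
cleaned = [v for v in vals if not pd.isna(v)]
```

Alternatively, dropping the missing values before leaving pandas (`x.dropna()`) avoids the issue entirely.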
<python><python-3.x><pandas>
2023-02-27 11:57:47
3
1,178
Niv Cohen
75,580,118
5,931,672
Replace column value based on columns matches with another dataframe
<p>I have these two dataframes</p> <p>DataFrame A</p> <pre><code> column1 column2 column3 column4 1 a 2 True 23 2 b 2 False cdsg 3 c 3 False asdf 4 a 2 False sdac 5 b 1 False asdcd </code></pre> <p>Dataframe B is a single-row dataframe like</p> <pre><code> column1 column2 column3 column4 1 c 3 False asdmn </code></pre> <p>What I want to do is match the first 3 columns and, if found, replace the value of <code>column4</code>, so that the result is this.</p> <pre><code> column1 column2 column3 column4 1 a 2 True 23 2 b 2 False cdsg 3 c 3 False asdmn 4 a 2 False sdac 5 b 1 False asdcd </code></pre> <p>Otherwise, if there is no match, attach it at the end. I could do that last part with a simple <code>pd.append</code>, but I first need to make the first part work.</p>
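A sketch using the three key columns as an index: `DataFrame.update` overwrites `column4` where the keys match, and the unmatched rows of B are appended afterwards (data reconstructed from the question's tables):

```python
import pandas as pd

a = pd.DataFrame({"column1": ["a", "b", "c", "a", "b"],
                  "column2": [2, 2, 3, 2, 1],
                  "column3": [True, False, False, False, False],
                  "column4": ["23", "cdsg", "asdf", "sdac", "asdcd"]})
b = pd.DataFrame({"column1": ["c"], "column2": [3],
                  "column3": [False], "column4": ["asdmn"]})

key = ["column1", "column2", "column3"]
a_i, b_i = a.set_index(key), b.set_index(key)

a_i.update(b_i)                             # replace column4 on matching keys
new_rows = b_i[~b_i.index.isin(a_i.index)]  # rows of B with no match in A
result = pd.concat([a_i, new_rows]).reset_index()
```

Note `update` aligns on the index, so this assumes the three key columns identify rows uniquely in A.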
<python><pandas>
2023-02-27 11:51:30
2
4,192
J Agustin Barrachina
75,580,048
6,224,975
poetry does not install the package when run in Docker, but all the dependencies is installed
<p>I have a docker-image in which I want to install my own package, <code>mypackage</code>, using the <code>.toml</code> and <code>.lock</code> file from <code>poetry</code> - inspired by <a href="https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker">this</a> SO answer.</p> <p>It seems to work fine: all the dependencies in the <code>.toml</code> are being installed, but the package itself is not.</p> <p>If I add a <code>RUN poetry add mypackage</code> I get an error <code>Package 'mypackage' is listed as a dependency of itself</code>. If I don't, I get a <code>ModuleNotFoundError: No module named 'mypackage'</code>. If I spawn a terminal in the container, I can <code>pip list</code> the dependencies, but not <code>mypackage</code>.</p> <p>I have also tried copying <code>mypackage/</code> into my <code>workdir</code> (<code>scripts/</code>) in the <code>Dockerfile</code>, but the same issue persists. Maybe there's some file structure that's wrong? I'm not very experienced with Docker, so some of the files might not be where I think they are.</p> <pre><code># pyproject.toml [tool.poetry] name = &quot;mypackage&quot; version = &quot;1&quot; description = &quot;&quot; authors = [&quot;me&quot;] readme = &quot;README.md&quot; [tool.poetry.dependencies] python = &quot;~3.10&quot; waitress = &quot;2.1.2&quot; flask = &quot;2.2.2&quot; scikit-learn = &quot;1.2.1&quot; cython = &quot;^0.29.33&quot; gensim = &quot;^4.3.0&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>and</p> <pre><code>FROM python:3.10-slim-bullseye RUN apt update &amp;&amp; apt install RUN apt install python3-pip -y RUN pip install &quot;poetry==1.3.1&quot; RUN mkdir /scripts COPY poetry.lock pyproject.toml /scripts/ # COPY mypackage/ /scripts/ WORKDIR /scripts RUN poetry config virtualenvs.create false \ &amp;&amp; poetry install --no-dev --no-interaction --no-ansi # RUN poetry add mypackage CMD poetry run
python -c &quot;import mypackage;&quot; </code></pre>
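One hedged guess at the cause of the question above: `poetry install` can only install the project itself when the package source sits next to `pyproject.toml` and is laid out as a `mypackage/` directory matching the `name` field; copying only the manifests installs just the dependencies. A sketch of a Dockerfile ordering under that assumption (the paths are illustrative, not verified against the original project):

```dockerfile
FROM python:3.10-slim-bullseye
RUN pip install "poetry==1.3.1"
WORKDIR /scripts
# copy the dependency manifests first (cacheable layer), then the source itself
COPY poetry.lock pyproject.toml /scripts/
COPY mypackage/ /scripts/mypackage/
RUN poetry config virtualenvs.create false \
    && poetry install --no-dev --no-interaction --no-ansi
CMD ["python", "-c", "import mypackage"]
```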
<python><docker><python-poetry>
2023-02-27 11:44:35
0
5,544
CutePoison
75,579,904
10,545,426
MkDocs with auto generated References
<p>I am building a TensorFlow model and have a ton of functions and modules that have proper docstrings.</p> <p>I installed mkdocs due to popular demand and the documentation does appear to be very easy to write.</p> <p>Nevertheless, I don't want to manually write up the entire API reference of all my modules inside this package. I am using <code>mkdocstrings</code> but I am unable to find a way to automate all of these and store them in the references section in mkdocs, as you see with documentation sites like numpy/pandas.</p> <p>I tried <a href="https://pdoc3.github.io/pdoc/" rel="nofollow noreferrer">pdoc3</a>, but it only solves 1 problem for me.</p> <p><a href="https://github.com/davidenunes/mkgendocs" rel="nofollow noreferrer">mkgendocs</a> was something I hoped would work, but it requires another config file! I followed <a href="https://towardsdatascience.com/easily-automate-and-never-touch-your-documentation-again-a98c91ce1b95" rel="nofollow noreferrer">this post</a> but it was not working for me.</p> <p>Any suggestions/resources on how I can autogenerate all my API docstrings into an API references page in mkdocs? Sphinx is too advanced to work with, sorry. I am trying to get my team to document more, so I need something easy to use, and MkDocs looks like the best option currently.</p>
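One small-script approach to the automation asked about above: walk the package with `pkgutil` and emit one markdown stub per module containing a mkdocstrings `::: module` directive, which mkdocstrings then expands into full API docs at build time. The output directory and the use of the stdlib `json` package as a stand-in for the real package are assumptions for the demo:

```python
import pathlib
import pkgutil
import tempfile
import json as target_pkg  # stand-in for your own package

out_dir = pathlib.Path(tempfile.mkdtemp()) / "reference"
out_dir.mkdir()

# one stub page per submodule; mkdocstrings expands "::: name" into rendered docs
names = [m.name for m in pkgutil.iter_modules(target_pkg.__path__,
                                              target_pkg.__name__ + ".")]
for name in names:
    (out_dir / f"{name}.md").write_text(f"::: {name}\n")
```

In a real setup this would write into `docs/reference/` and the generated pages would be listed (or globbed via a plugin such as mkdocs-gen-files plus literate-nav) in `mkdocs.yml`.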
<python><mkdocs>
2023-02-27 11:30:51
1
399
Stalin Thomas
75,579,822
8,406,122
Removing space from a list of strings taken from a dataframe
<p>I have a data frame like this</p> <p><a href="https://i.sstatic.net/c72X9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c72X9.png" alt="enter image description here" /></a></p> <p>What I want to do is take these as different strings in a list. For that I have written the following line</p> <pre><code>list_phonemes= df['HEADER'].astype(str).values.tolist() </code></pre> <p>This does the job, but the problem is that it retains the spaces and the output is like this</p> <pre><code>['M EH S S AH G EH', 'P L AH S M EH S S EH NX JH AH R', 'S K R IY EH N R IH K OO RX D IH NX G', 'Y UH AH TH UH B EY V IY D IH Y OO S', 'AE M EH ZH AA N P R AY M IY', 'V AH R K OO AH T S S K AE N N AH R', 'AA N V IY D IH Y OO', 'N OO N IH', 'P R AY M IY V IY D IH Y OO S', 'P R OO F AY L EH'] </code></pre> <p>But I want the strings without any spaces like</p> <pre><code>['MEHSSAHGEH', 'PLAHSMEHSSEHNXJHAHR', . . . ] </code></pre> <p>Is there anything that I can add to this line <code>list_phonemes= df['HEADER'].astype(str).values.tolist()</code> so that I can achieve the output I am looking for rather than iterating over the entire list again just to remove the spaces?</p>
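The vectorized string accessor can drop the spaces column-wide before the conversion to a list; a sketch with a small stand-in for the HEADER column shown in the screenshot:

```python
import pandas as pd

df = pd.DataFrame({"HEADER": ["M EH S S AH G EH", "P L AH S"]})

# remove all spaces in one vectorized pass, then convert to a plain list
list_phonemes = df["HEADER"].astype(str).str.replace(" ", "", regex=False).tolist()
```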
<python><dataframe>
2023-02-27 11:22:42
3
377
Turing101
75,579,814
1,432,980
static resources within package are not found after wheel installation
<p>I have <code>pyproject.toml</code> that looks like this</p> <pre><code>[tool.poetry] name = &quot;cmd-tool&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [] readme = &quot;README.md&quot; repository = &quot;&quot; documentation = &quot;&quot; packages = [ { include = &quot;src&quot; } ] [tool.poetry.dependencies] python = &quot;^3.10&quot; click = &quot;^8.1.3&quot; pandas = &quot;^1.5.2&quot; openpyxl = &quot;^3.1.1&quot; jinja2 = &quot;^3.1.2&quot; [tool.poetry.scripts] execute-script = &quot;src.cli:cli&quot; [tool.poetry.group.dev.dependencies] pre-commit = &quot;^2.20.0&quot; pytest = &quot;^7.1&quot; pytest-cov = &quot;^4.0.0&quot; hypothesis = &quot;^6.62.0&quot; black = &quot;^22.12.0&quot; mypy = &quot;^0.940&quot; [build-system] requires = [&quot;poetry-core&gt;=1.0.0&quot;] build-backend = &quot;poetry.core.masonry.api&quot; [tool.pycln] all = true verbose = false [tool.black] line-length = 88 target-version = [&quot;py310&quot;] [tool.mypy] python_version = &quot;3.10&quot; plugins = [&quot;pydantic.mypy&quot;] disallow_untyped_defs = true show_error_context = true show_column_numbers = true ignore_missing_imports = true pretty = true [tool.isort] profile = &quot;black&quot; </code></pre> <p>And my project structure looks like</p> <pre><code>cmd-tool/ src/ __init__.py cli.py excel/ ast/ .... 
/static mapping.json </code></pre> <p>where <code>cli.py</code> is a basic command-line function</p> <pre><code>@click.command() @click.option(&quot;--operation&quot;, &quot;-o&quot;, &quot;op&quot;, required=True, help='command to execute (generate-columns, generate-payload)') @click.option(&quot;--source&quot;, &quot;-s&quot;, &quot;src&quot;, type=File('rb'), required=True, help='source file which will be transformed') @click.option(&quot;--destination&quot;, &quot;-d&quot;, &quot;dest&quot;, required=True, help='destination file to which write the change') def cli(op: str, src: File, dest: str): &quot;&quot;&quot;Manipulates files&quot;&quot;&quot; # ugly workaround to support multiple file types if dest.endswith('xlsx'): dest_file = open(dest, 'wb') else: dest_file = open(dest, 'w') match op: case 'generate-columns': ExcelProcessingService().generate_columns(src, dest_file) case 'generate-payload': ExcelProcessingService().generate_payload(src, dest_file) dest_file.close() # dest parameter is required </code></pre> <p>When I run my script <code>execute-script</code> using <code>poetry run execute-script ....</code> it works fine. 
However if I build the project using <code>poetry build</code> and install created <code>whl</code> package - it fails with the error <code>No such file or directory: 'src/static/mapping.json'</code></p> <p>This error comes from this file</p> <pre><code>from src.ast.util import parse_args_method_signature, parse_kwargs_method_signature # load mapping import json with open('src/static/mapping.json', 'r') as json_file: mapping = json.load(json_file) def _render_decimal(value): decimal_type, left_right = parse_args_method_signature(value) if left_right: return f&quot;{ mapping[decimal_type] }(left_digits={left_right[0]},right_digits={left_right[1]})&quot; else: return f&quot;{ mapping[decimal_type] }&quot; def _post_process_field(value): if not value.endswith(')'): value = f'{value}()' return value def render_fields(value): if value.startswith('decimal'): return _post_process_field(_render_decimal(value)) else: return _post_process_field(mapping.get(value)) def render_faker_expresison(attribute_name, expr): return [attribute_name, *parse_kwargs_method_signature(expr)] </code></pre> <p>That is located in <code>excel/column/processor.py</code></p> <p>Why it works when I run it as poetry run but does not work as <code>whl</code> package?</p> <p>After installation of the wheel package it is certainly there (in <code>lib/python3.10/site-package</code>)</p> <p><a href="https://i.sstatic.net/QPCMC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QPCMC.png" alt="enter image description here" /></a></p> <p>What is the problem?</p>
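One likely cause, offered as a sketch rather than a verified fix: `open('src/static/mapping.json')` is resolved against the current working directory, which happens to be the project root under `poetry run` but not once the wheel is installed into `site-packages`. A package-relative lookup with `importlib.resources` (Python 3.9+) survives installation; the throwaway package built below only mimics the `src/static` layout so the example is self-contained. With Poetry you may also need an `include` entry in `pyproject.toml` so the JSON file ships inside the wheel.

```python
import json
import sys
import tempfile
from importlib import resources
from pathlib import Path

# Build a throwaway package that mimics the question's src/static layout
tmp = Path(tempfile.mkdtemp())
(tmp / "src" / "static").mkdir(parents=True)
(tmp / "src" / "__init__.py").touch()
(tmp / "src" / "static" / "__init__.py").touch()
(tmp / "src" / "static" / "mapping.json").write_text(json.dumps({"decimal": "pydecimal"}))
sys.path.insert(0, str(tmp))

# Package-relative lookup: works from the source tree AND from site-packages,
# because it asks the import system where the package actually lives
mapping = json.loads(
    resources.files("src.static").joinpath("mapping.json").read_text()
)
print(mapping)
```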
<python><python-packaging><python-poetry><python-wheel>
2023-02-27 11:22:08
0
13,485
lapots
75,579,680
110,963
Fine grained logging configuration for pytest
<p>When running my <a href="https://docs.pytest.org/en/7.2.x/" rel="nofollow noreferrer">Pytest</a>-based tests, I want to enable debug logging for my own code, but not for 3rd-party libraries. For example, boto3 gets very noisy if you enable debug logging. According to <a href="https://docs.pytest.org/en/7.1.x/how-to/logging.html" rel="nofollow noreferrer">the docs</a> it is only possible to set the overall log level for all loggers.</p> <p>Is there a known solution or workaround to use a <a href="https://docs.python.org/3/library/logging.config.html#logging-config-fileformat" rel="nofollow noreferrer">logging config file</a> to define exactly what I want to see?</p>
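One workaround (a sketch, assuming an autouse fixture in `conftest.py` is acceptable): apply a standard `logging` dict or file config yourself, raising the level of the noisy third-party loggers while the root logger stays at DEBUG. The logger names below are the usual boto3 ones, not taken from the question.

```python
import logging
import logging.config

# dictConfig equivalent of a logging config file; in a pytest setup this could
# run from an autouse, session-scoped fixture in conftest.py (or via
# logging.config.fileConfig pointed at an .ini file)
logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "root": {"level": "DEBUG"},
    "loggers": {
        # silence noisy third-party libraries while your own code logs DEBUG
        "boto3": {"level": "WARNING"},
        "botocore": {"level": "WARNING"},
    },
})

print(logging.getLogger("boto3").getEffectiveLevel())   # 30 == WARNING
print(logging.getLogger("myapp").getEffectiveLevel())   # 10 == DEBUG (inherited from root)
```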
<python><pytest>
2023-02-27 11:07:49
2
15,684
Achim
75,579,081
10,012,856
PoetryException failed to install gssapi-1.8.2.tar.gz
<p>I need to use ArcGIS API, so I've created a poetry environment and I've tried to install the API: <code>poetry add arcgis</code>. Unfortunately I see the error message below:</p> <pre><code> • Installing gssapi (1.8.2): Failed CalledProcessError Command '['/home/max/.cache/pypoetry/virtualenvs/arcgis-api-UPBhVADH-py3.8/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/home/max/.cache/pypoetry/virtualenvs/arcgis-api-UPBhVADH-py3.8', '--no-deps', '/home/max/.cache/pypoetry/artifacts/64/da/a9/a06afe5c1d5ef8f2f266a3d030f065759a2632a554b16410139032456d/gssapi-1.8.2.tar.gz']' returned non-zero exit status 1. at /usr/lib/python3.8/subprocess.py:516 in run 512│ # We don't call process.wait() as .__exit__ does that for us. 513│ raise 514│ retcode = process.poll() 515│ if check and retcode: → 516│ raise CalledProcessError(retcode, process.args, 517│ output=stdout, stderr=stderr) 518│ return CompletedProcess(process.args, retcode, stdout, stderr) 519│ 520│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/max/.cache/pypoetry/virtualenvs/arcgis-api-UPBhVADH-py3.8/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/home/max/.cache/pypoetry/virtualenvs/arcgis-api-UPBhVADH-py3.8', '--no-deps', '/home/max/.cache/pypoetry/artifacts/64/da/a9/a06afe5c1d5ef8f2f266a3d030f065759a2632a554b16410139032456d/gssapi-1.8.2.tar.gz'] errored with the following return code 1, and output: Processing /home/max/.cache/pypoetry/artifacts/64/da/a9/a06afe5c1d5ef8f2f266a3d030f065759a2632a554b16410139032456d/gssapi-1.8.2.tar.gz Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'error' error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. 
│ exit code: 1 ╰─&gt; [21 lines of output] /bin/sh: 1: krb5-config: not found Traceback (most recent call last): File &quot;/home/max/.cache/pypoetry/virtualenvs/arcgis-api-UPBhVADH-py3.8/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py&quot;, line 363, in &lt;module&gt; main() File &quot;/home/max/.cache/pypoetry/virtualenvs/arcgis-api-UPBhVADH-py3.8/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py&quot;, line 345, in main json_out['return_val'] = hook(**hook_input['kwargs']) File &quot;/home/max/.cache/pypoetry/virtualenvs/arcgis-api-UPBhVADH-py3.8/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py&quot;, line 130, in get_requires_for_build_wheel return hook(config_settings) File &quot;/tmp/pip-build-env-gh_kkuy0/overlay/lib/python3.8/site-packages/setuptools/build_meta.py&quot;, line 338, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File &quot;/tmp/pip-build-env-gh_kkuy0/overlay/lib/python3.8/site-packages/setuptools/build_meta.py&quot;, line 320, in _get_build_requires self.run_setup() File &quot;/tmp/pip-build-env-gh_kkuy0/overlay/lib/python3.8/site-packages/setuptools/build_meta.py&quot;, line 335, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 109, in &lt;module&gt; File &quot;&lt;string&gt;&quot;, line 22, in get_output File &quot;/usr/lib/python3.8/subprocess.py&quot;, line 415, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File &quot;/usr/lib/python3.8/subprocess.py&quot;, line 516, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command 'krb5-config --libs gssapi' returned non-zero exit status 127. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. 
│ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. at ~/.local/share/pypoetry/venv/lib/python3.8/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: → 1476│ raise EnvCommandError(e, input=input_) 1477│ 1478│ return decode(output) 1479│ 1480│ def execute(self, bin: str, *args: str, **kwargs: Any) -&gt; int: The following error occurred when trying to handle this error: PoetryException Failed to install /home/max/.cache/pypoetry/artifacts/64/da/a9/a06afe5c1d5ef8f2f266a3d030f065759a2632a554b16410139032456d/gssapi-1.8.2.tar.gz at ~/.local/share/pypoetry/venv/lib/python3.8/site-packages/poetry/utils/pip.py:51 in pip_install 47│ 48│ try: 49│ return environment.run_pip(*args) 50│ except EnvCommandError as e: → 51│ raise PoetryException(f&quot;Failed to install {path.as_posix()}&quot;) from e 52│ </code></pre> <p>And the package isn't installed.</p> <p>Here the <code>pyproject.toml</code>:</p> <pre><code>[tool.poetry] name = &quot;arcgis-api&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [] readme = &quot;README.md&quot; packages = [{include = &quot;arcgis_api&quot;}] [tool.poetry.dependencies] python = &quot;&gt;=3.8,&lt;3.10&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>I'm on Ubuntu 20.04 with Poetry 1.2.1 and I can't use Conda for this project.</p>
<python><linux><arcgis><python-poetry>
2023-02-27 10:06:44
1
1,310
MaxDragonheart
75,578,852
5,056,347
Django sitemap for dynamic static pages
<p>I have the following in my <strong>views.py</strong>:</p> <pre><code>ACCEPTED = [&quot;x&quot;, &quot;y&quot;, &quot;z&quot;, ...] def index(request, param): if not (param in ACCEPTED): raise Http404 return render(request, &quot;index.html&quot;, {&quot;param&quot;: param}) </code></pre> <p>Url is simple enough:</p> <pre><code>path('articles/&lt;str:param&gt;/', views.index, name='index'), </code></pre> <p>How do I generate a sitemap for this path only for the accepted params defined in the <code>ACCEPTED</code> constant? Usually examples I've seen query the database for a list of detail views.</p>
<python><django><django-urls><django-sitemaps>
2023-02-27 09:43:53
1
8,874
darkhorse
75,578,804
10,633,596
Getting # collapsed multi-line command error in GitLab CI config file scripts
<p>I'm trying to put a multi-line shell script command in the GitLab CI YAML file as shown below:-</p> <pre><code>test: stage: promote-test-reports image: &quot;937583223412.dkr.ecr.us-east-1.amazonaws.com/core-infra/linux:centos7&quot; before_script: - &quot;yum install unzip -y&quot; variables: GITLAB_PRIVATE_TOKEN: $GITLAB_TOKEN allow_failure: true script: - &gt; #!/bin/bash cd $CI_PROJECT_DIR echo running set -o errexit -o pipefail -o nounset API=&quot;${CI_API_V4_URL}/projects/${CI_PROJECT_ID}&quot; AUTH_HEADER=&quot;PRIVATE-TOKEN: ${GITLAB_PRIVATE_TOKEN}&quot; CHILD_PIPELINES=$(curl -sS --header &quot;${AUTH_HEADER}&quot; &quot;${API}/pipelines/${CI_PIPELINE_ID}/bridges&quot;) echo &quot;$CHILD_PIPELINES&quot; | jq . &gt; bridges-$CI_PIPELINE_ID.json CHILD_PIPELINES=$(echo $CHILD_PIPELINES | jq &quot;.[].downstream_pipeline.id&quot;) echo &quot;$CHILD_PIPELINES&quot; | while read cp do # Fetch the IDs of their &quot;build:*&quot; jobs that completed successfully JOBS=$(curl -sS --header &quot;${AUTH_HEADER}&quot; &quot;${API}/pipelines/$cp/jobs?scope=success&quot;) echo &quot;$JOBS&quot; | jq . &gt;&gt; job-$cp.json JOBS=$(echo &quot;$JOBS&quot; | jq '.[] | select(([.name] | inside([&quot;test&quot;, &quot;coverage&quot;])) and .artifacts_file != null) | .id') [[ -z &quot;$JOBS&quot; ]] &amp;&amp; echo &quot;No jobs in $cp&quot; &amp;&amp; continue echo &quot;$JOBS&quot; | while read job do echo 'DOWNLOADING ARTIFACT: $job' curl -sS -L --header '${AUTH_HEADER}' --output artifacts-$job.zip '${API}/jobs/$job/artifacts' done done if ls artifacts-*.zip &gt;/dev/null then unzip -o artifacts-\*.zip else echo &quot;No artifacts&quot; fi when: always rules: - if: $CI_PIPELINE_SOURCE == 'web' artifacts: reports: junit: &quot;**/testresult.xml&quot; coverage_report: coverage_format: cobertura path: &quot;**/**/coverage.xml&quot; coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/' </code></pre> <p>Error log:-</p> <pre><code>Complete! 
$ #!/bin/bash cd $CI_PROJECT_DIR echo running set -o errexit -o pipefail -o nounset API=&quot;${CI_API_V4_URL}/projects/${CI_PROJECT_ID}&quot; AUTH_HEADER=&quot;PRIVATE-TOKEN: ${GITLAB_PRIVATE_TOKEN}&quot; CHILD_PIPELINES=$(curl -sS --header &quot;${AUTH_HEADER}&quot; &quot;${API}/pipelines/${CI_PIPELINE_ID}/bridges&quot;) echo &quot;$CHILD_PIPELINES&quot; | jq . &gt; bridges-$CI_PIPELINE_ID.json CHILD_PIPELINES=$(echo $CHILD_PIPELINES | jq &quot;.[].downstream_pipeline.id&quot;) echo &quot;$CHILD_PIPELINES&quot; | while read cp do # Fetch the IDs of their &quot;build:*&quot; jobs that completed successfully JOBS=$(curl -sS --header &quot;${AUTH_HEADER}&quot; &quot;${API}/pipelines/$cp/jobs?scope=success&quot;) echo &quot;$JOBS&quot; | jq . &gt;&gt; job-$cp.json JOBS=$(echo &quot;$JOBS&quot; | jq '.[] | select(([.name] | inside([&quot;test&quot;, &quot;coverage&quot;])) and .artifacts_file != null) | .id') [[ -z &quot;$JOBS&quot; ]] &amp;&amp; echo &quot;No jobs in $cp&quot; &amp;&amp; continue echo &quot;$JOBS&quot; | while read job # collapsed multi-line command /scripts-36531461-3838222101/step_script: eval: line 177: syntax error near unexpected token `do' </code></pre> <p>The while loop is throwing error (<strong>syntax error near unexpected token `do'</strong>) for some reason which I'm not able to figure it out i.e.</p> <p>echo &quot;$JOBS&quot; | while read job do</p> <p>I tried removing the pipe '|' but no vein. Must be some silly mistake I'm doing. Any idea?</p>
<python><bash><shell><gitlab-ci><gitlab-ci-runner>
2023-02-27 09:39:13
1
1,574
vinod827
75,578,764
5,090,059
How to create a cumulative list of values, by group, in a Pandas dataframe?
<p>I'm trying to add a new column to the DataFrame, that consists of a cumulative list (by group) of another column.</p> <p>For example:</p> <pre><code>df = pd.DataFrame(data={'group1': [1, 1, 2, 2, 2], 'value': [1, 2, 3, 4, 5]}) </code></pre> <p>Expected output:</p> <pre><code> group1 value cumsum_column 0 1 1 [1] 1 1 2 [1, 2] 2 2 3 [3] 3 2 4 [3, 4] 4 2 5 [3, 4, 5] </code></pre> <p>What is the best way to accomplish this?</p> <p>One way I've tried that doesn't work:</p> <pre><code>df['value_list'] = [[i] for i in df['value']] df['cumsum_column'] = df.groupby('group1')['value_list'].cumsum() </code></pre> <p>This throws the error:</p> <pre><code>TypeError: cumsum is not supported for object dtype </code></pre> <p>EDIT: To be clearer, I'm looking to find out <em>why</em> this is not working + looking for the <em>fastest</em> way for this to happen — as I'm looking to use it on big dataframes.</p>
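Why it fails: `cumsum` is only implemented for numeric dtypes, and a column of one-element lists is object dtype, hence the `TypeError`. One working approach (a sketch): accumulate the one-element lists by concatenation within each group, then align the result back by index.

```python
import operator
from itertools import accumulate

import pandas as pd

df = pd.DataFrame(data={"group1": [1, 1, 2, 2, 2], "value": [1, 2, 3, 4, 5]})

parts = []
for _, s in df.groupby("group1")["value"]:
    # accumulate concatenates the one-element lists: [1], [1, 2], ...
    cumulative = list(accumulate(([v] for v in s), operator.add))
    parts.append(pd.Series(cumulative, index=s.index))

# concat keeps the original index, so assignment aligns row-for-row
df["cumsum_column"] = pd.concat(parts)
print(df)
```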
<python><pandas><group-by><cumsum>
2023-02-27 09:35:19
3
429
Nils Mackay
75,578,679
139,150
How to optimize this numpy code to make it faster?
<p>This code is working as expected and calculates the cosine distance between two embeddings, but it takes a lot of time. I have tens of thousands of records to check and I am looking for a way to make it quicker.</p> <pre><code>import pandas as pd import numpy as np from numpy import dot from numpy.linalg import norm import ast df = pd.read_csv(&quot;https://testme162.s3.amazonaws.com/cosign_dist.csv&quot;) for k, i in enumerate(df[&quot;embeddings&quot;]): df[&quot;dist&quot; + str(k)] = df.embeddings.apply( lambda x: dot(ast.literal_eval(x), ast.literal_eval(i)) / (norm(ast.literal_eval(x)) * norm(ast.literal_eval(i))) ) </code></pre>
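A sketch of the usual vectorized rewrite (random data stands in for the CSV): parse each embedding string once with `ast.literal_eval`, stack the results into a matrix, row-normalize it once, and a single matrix product then yields every pairwise cosine similarity — instead of re-parsing and re-applying in an O(n²) nested `apply`.

```python
import numpy as np

# Stand-in for: np.stack([ast.literal_eval(x) for x in df["embeddings"]])
# i.e. 6 records with 8-dimensional embeddings
emb = np.random.rand(6, 8)

unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # normalize rows once
sim = unit @ unit.T  # sim[i, j] == cosine similarity of embeddings i and j
print(sim.shape)  # (6, 6)
```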
<python><pandas><numpy>
2023-02-27 09:25:42
2
32,554
shantanuo
75,578,556
5,024,631
pandas: add a grouping variable to clusters of rows that meet a criteria
<p>I have this dataframe:</p> <pre><code>df = pd.DataFrame({'forms_a_cluster': [False, False, True, True, True, False, False, False, True, True, False, True, True, True, False], 'cluster_number':[False, False, 1, 1, 1, False, False, False, 2, 2, False, 3, 3, 3, False]}) </code></pre> <p>The idea is that I have some criteria which, when certain rows have met it, selects those cases as True, and when consecutive rows meet the criteria, they then form a cluster. I want to be able to label each cluster as <code>cluster_1</code>, <code>cluster_2</code>, <code>cluster_3</code> etc. I've given an example of the hoped for output with the column <code>cluster_number</code>. But I have no idea how to do this, given that in the real data, I have to do it many times on different datasets which have a different number of rows and the cluster sizes will be different every time. Do you have any idea how to go about this?</p>
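One way to do this (a sketch on the question's own column): a cluster starts at any row where the flag is True and the previous row's flag is False; a cumulative sum of those start markers numbers the runs, and `where` blanks out the non-cluster rows.

```python
import pandas as pd

flags = pd.Series([False, False, True, True, True, False, False, False,
                   True, True, False, True, True, True, False])

# True only on the first row of each consecutive run of Trues
starts = flags & ~flags.shift(fill_value=False)

# cumsum numbers the runs; astype(object) keeps literal False (not 0) outside them
cluster_number = starts.cumsum().astype(object).where(flags, False)
print(cluster_number.tolist())
```

If string labels like `cluster_1` are wanted, a final `.map(lambda v: v if v is False else f"cluster_{v}")` would do it.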
<python><pandas><dataframe><group-by>
2023-02-27 09:12:35
2
2,783
pd441
75,578,448
6,761,328
How to set the directory where the script is in in to working directory?
<p>I want to import multiple files which will always be in the same directory as the script itself. But the script might be copied and pasted to somewhere else. I know how to set and change the working directory but I don't know how to set the working directory dynamically/automatically to where the <code>.py</code> file is located and run from. The path must not be hard-coded.</p> <p>I've seen <a href="https://stackoverflow.com/questions/45861083/changing-current-working-directory-to-the-directory-from-where-script-is-running">Changing current working directory to the directory from where script is running (Python)</a> but there is not really an answer.</p>
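A sketch of the usual answer: `__file__` always points at the script file itself, wherever it was copied to, so resolving its parent gives the wanted directory without hard-coding anything. The fallback to `argv[0]` only guards environments (REPL, some notebooks) where `__file__` is undefined.

```python
import os
import sys
from pathlib import Path

# __file__ follows the script wherever it is copied; fall back to argv[0]
# (and finally the current directory) when it is not defined
script_path = Path(globals().get("__file__", sys.argv[0]) or ".").resolve()
script_dir = script_path.parent if script_path.is_file() else Path.cwd()

os.chdir(script_dir)  # files next to the script can now be opened by bare name
print(script_dir)
```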
<python>
2023-02-27 08:58:31
0
1,562
Ben
75,578,379
4,464,596
Can't update Tkinter label from Thread
<p>I'm trying to update the text of a label from a Thread but it doesn't work. Here is a sample of the code</p> <pre><code>def search_callback(): class Holder(object): done = False holder = Holder() t = threading.Thread(target=animate, args=(holder,)) t.start() #long process here holder.done = True # inform thread long process is finished def animate(holder): for c in itertools.cycle(['|', '/', '-', '\\']): if holder.done: break print('\rRecherche ' + c) # This work! label_wait.configure(text='\rRecherche ' + c) # this doesn't work nothing appear in my label time.sleep(0.5) label_wait.configure(text='\rTerminé! ') print('\rTerminé! ') </code></pre> <p>But I don't understand why <code>label_wait.configure</code> doesn't work while <code>print</code> works well.</p> <p>I've tried to use the <code>after</code> method in my thread but it doesn't change anything.</p>
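The underlying issue, as far as I can tell: Tkinter widgets are not thread-safe, so `configure` calls made from a worker thread are unreliable even though `print` works fine. A common workaround is to let the worker only push text onto a `queue.Queue`, and have the Tk main loop poll that queue with `after()`. The sketch below keeps the GUI half in comments so it runs without a display.

```python
import queue
import threading
import time

msg_q = queue.Queue()
done = threading.Event()

def animate(q, done):
    # worker thread: only puts text on the queue, never touches the widget
    for c in "|/-\\":
        if done.is_set():
            break
        q.put("Recherche " + c)
        time.sleep(0.01)
    q.put("Terminé!")

# Main (Tk) thread side, polled with after() -- sketched as comments because
# it needs a display:
# def poll():
#     try:
#         label_wait.configure(text=msg_q.get_nowait())
#     except queue.Empty:
#         pass
#     root.after(100, poll)
# poll()

t = threading.Thread(target=animate, args=(msg_q, done))
t.start()
t.join()
messages = [msg_q.get_nowait() for _ in range(msg_q.qsize())]
print(messages[-1])  # Terminé!
```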
<python><multithreading><tkinter>
2023-02-27 08:51:27
1
1,234
simon
75,578,232
4,847,250
What is the input dimension for a LSTM in Keras?
<p>I'm trying to use deeplearning with LSTM in keras . I use a number of signal as input (<code>nb_sig</code>) that may vary during the training with a fixed number of samples (<code>nb_sample</code>) I would like to make parameter identification, so my output layer is the size of my parameter number (<code>nb_param</code>)</p> <p>so I created my training set of size (<code>nb_sig</code> x <code>nb_sample</code>) and the label (<code>nb_param</code> x <code>nb_sample</code>)</p> <p>my issue is I cannot find the correct dimension for the deep learning model. I tried this :</p> <pre><code>import numpy as np from keras.models import Sequential from keras.layers import Dense, LSTM nb_sample = 500 nb_sig = 100 # number that may change during the training nb_param = 10 train = np.random.rand(nb_sig,nb_sample) label = np.random.rand(nb_sig,nb_param) print(train.shape,label.shape) DLmodel = Sequential() DLmodel.add(LSTM(units=nb_sample, return_sequences=True, input_shape =(None,nb_sample), activation='tanh')) DLmodel.add(Dense(nb_param, activation=&quot;linear&quot;, kernel_initializer=&quot;uniform&quot;)) DLmodel.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['accuracy', 'mse'], run_eagerly=True) print(DLmodel.summary()) DLmodel.fit(train, label, epochs=10, batch_size=nb_sig) </code></pre> <p>but I get this error message:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\maxime\Desktop\SESAME\PycharmProjects\LargeScale_2022_09_07\di3.py&quot;, line 22, in &lt;module&gt; DLmodel.fit(train, label, epochs=10, batch_size=nb_sig) File &quot;C:\Python310\lib\site-packages\keras\utils\traceback_utils.py&quot;, line 70, in error_handler raise e.with_traceback(filtered_tb) from None File &quot;C:\Python310\lib\site-packages\keras\engine\input_spec.py&quot;, line 232, in assert_input_compatibility raise ValueError( ValueError: Exception encountered when calling layer &quot;sequential&quot; &quot; f&quot;(type Sequential). 
Input 0 of layer &quot;lstm&quot; is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (100, 500) Call arguments received by layer &quot;sequential&quot; &quot; f&quot;(type Sequential): • inputs=tf.Tensor(shape=(100, 500), dtype=float32) • training=True • mask=None </code></pre> <p>I don't understand what I'm supposed to put as <code>input_shape</code> for the LSTM layer, and as the number of signals I use during the training will change, this is not so clear to me.</p>
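The error says it directly: a Keras `LSTM` expects 3-D input `(batch, timesteps, features)`, while `train` is the 2-D `(100, 500)`. Treating each signal as a sequence of `nb_sample` scalar steps means adding a trailing feature axis and declaring `input_shape=(nb_sample, 1)`. A NumPy-only sketch of that reshape (the commented layer line is an untested illustration):

```python
import numpy as np

nb_sig, nb_sample = 100, 500  # values from the question; nb_sig may vary
train = np.random.rand(nb_sig, nb_sample)

# (batch, timesteps, features): each signal = one sequence of nb_sample scalars
train3d = train[..., np.newaxis]
print(train3d.shape)  # (100, 500, 1)

# The matching layer declaration would then be, roughly:
# DLmodel.add(LSTM(units=..., input_shape=(nb_sample, 1), return_sequences=False))
```

Note also that `return_sequences=True` yields one output per timestep; for a single `nb_param` vector per signal, `return_sequences=False` before the final `Dense(nb_param)` is the usual choice.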
<python><keras><lstm>
2023-02-27 08:28:49
3
5,207
ymmx
75,578,098
7,800,760
SQLAlchemy: proper way to specify connection URL in python
<p>Using SQLAlchemy (version <strong>2.0.4</strong>) to connect to a remote MySQL server from Python via the pymysql driver.</p> <p>If I declare my connection URL in the &quot;simple&quot; way, it works:</p> <pre><code>import sqlalchemy as db engine = db.create_engine(&quot;mysql+pymysql://dbuser:verysecret@myhost.com:9001/mydb&quot;) </code></pre> <p>but I'd prefer this other version I've found in a tutorial:</p> <pre><code>import sqlalchemy as db connect_url = db.engine.url.URL( &quot;mysql+pymysql&quot;, username=&quot;dbuser&quot;, password=&quot;verysecret&quot;, host=&quot;myhost.com&quot;, port=&quot;9001&quot;, database=&quot;mydb&quot;, ) engine = db.create_engine(connect_url) </code></pre> <p>but this fails on the connect_url line as follows:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/bob/Documents/work/code/mysqltest/mysqltest/sqlalch.py&quot;, line 3, in &lt;module&gt; connect_url = db.engine.url.URL( TypeError: URL.__new__() missing 1 required positional argument: 'query' </code></pre> <p>EDIT: thanks to @snakecharmer's reply below I then tried to add a valid query parameter as follows:</p> <pre><code>import sqlalchemy as db connect_url = db.engine.url.URL( &quot;mysql+pymysql&quot;, username=&quot;root&quot;, password=&quot;ghj12WQ_AA1997&quot;, host=&quot;isagog.com&quot;, port=9001, database=&quot;athena&quot;, query=dict(charset=&quot;utf8mb4&quot;), ) engine = db.create_engine(connect_url) </code></pre> <p>And this does work well. But I am still not sure why I get an error if the query parameter is not provided.</p>
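The reason for the error, as best I understand it: since SQLAlchemy 1.4 the `URL` class is an immutable named tuple and is not meant to be instantiated directly, so calling it makes every field — including `query` — a required positional argument of `__new__`. The supported constructor is `URL.create()`, where `query` is optional:

```python
from sqlalchemy.engine import URL

# URL.create fills in sensible defaults for omitted fields, including query
connect_url = URL.create(
    "mysql+pymysql",
    username="dbuser",
    password="verysecret",
    host="myhost.com",
    port=9001,          # note: an int, not the string "9001"
    database="mydb",
)
print(str(connect_url))
```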
<python><mysql><sqlalchemy>
2023-02-27 08:14:33
1
1,231
Robert Alexander
75,578,069
9,727,704
swapping list variables in nested loop unexpected result (single array?)
<p>If i have a multidimensional array that is a square like this:</p> <pre><code>[[1, 2, 3], [4, 5, 6], [7, 8, 9]] </code></pre> <p>And I want to turn it 90 degrees clockwise like this:</p> <pre><code>[[7, 4, 1], [8, 5, 2], [9, 6, 3]] </code></pre> <p>I can print out the changed square like this:</p> <pre><code>for i in range(0,len(a)): for j in range(len(a),0,-1): print(f&quot;{a[j-1][i]}&quot;,end=&quot;,&quot;) print() </code></pre> <p>The output is:</p> <pre><code>7,4,1, 8,5,2, 9,6,3, </code></pre> <p>But if I tried to re-arrange the python list in place, I get the wrong order.</p> <pre><code>for i in range(0,len(a)): x = 0 for j in range(len(a),0,-1): # pythonism to swap w/o temp variable a[i][x],a[j-1][i] = a[j-1][i],a[i][x] x += 1 print(a) </code></pre> <p>The output is:</p> <pre><code> [[3, 6, 1], [8, 5, 2], [9, 4, 7]] </code></pre> <p>To clarify, so far my thinking was that in order to do this we make the following transformation:</p> <pre><code>a0 a1 a2 =&gt; c0 b0 a0 b0 b1 b2 =&gt; c1 b1 a1 c0 c1 c2 =&gt; c2 b2 a2 </code></pre> <p>Because of the comments I see that the issue was because I am changing the array cells more than once as I move it.</p> <p>What I am trying to do is change the square not more than using O(1) additional memory. Is there a way to do this without using a new array?</p>
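For reference, the textbook O(1)-extra-memory rotation avoids the double-overwrite problem entirely by doing two passes that each touch a cell at most once: transpose across the main diagonal, then reverse every row.

```python
def rotate90_cw(a):
    """Rotate a square list-of-lists 90 degrees clockwise, in place."""
    n = len(a)
    # transpose: swap each pair across the main diagonal exactly once
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j], a[j][i] = a[j][i], a[i][j]
    # then reverse each row in place
    for row in a:
        row.reverse()
    return a

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(rotate90_cw(a))  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
```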
<python><list><algorithm>
2023-02-27 08:10:34
2
765
Lucky
75,578,039
21,295,456
How to resolve 'ModuleNotFoundError: No module named 'termios' Error
<p>for the following code:</p> <p><code>from google.colab.patches import cv2_imshow </code> I get</p> <blockquote> <p>`ModuleNotFoundError: No module named 'termios'</p> </blockquote> <p>I saw in the other similar questions that there is no <code>termios</code> module in Windows.</p> <p>If so, is there any substitute for this (I'm using jupyter notebook)</p> <p>Thanks</p>
<python><jupyter-notebook><anaconda><google-colaboratory>
2023-02-27 08:07:29
0
339
akashKP
75,577,983
14,958,374
How to explicitly set the amount of visible memory from inside the docker container
<p>I am using a complex multi-container setup via docker compose. I wish to limit each container's maximum amount of memory. I set the maximum via the <code>deploy.resources.limits.memory</code> section in docker compose:</p> <pre><code> deploy: resources: limits: memory: 3gb </code></pre> <p>After deployment I check the memory visible inside the container, and get the following:</p> <pre><code>$grep MemTotal /proc/meminfo MemTotal: 65669412 kB </code></pre> <p>So, my application can see all 64 GB of RAM.</p> <p><strong>Is there a way to set the amount of memory my application can see from inside the docker container?</strong> (I'm using Python apps, so it is essential for memory management)</p>
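Some background plus a sketch: `/proc/meminfo` is not namespaced, so inside the container it still reports the host's physical RAM; the limit that is actually enforced is exposed through the cgroup filesystem instead. A best-effort reader covering both the cgroup v2 and v1 paths (returns `None` when no limit applies or when not on Linux):

```python
from pathlib import Path

def container_memory_limit():
    """Best-effort read of the enforced cgroup memory limit in bytes, or None."""
    # cgroup v2: "max" means unlimited
    v2 = Path("/sys/fs/cgroup/memory.max")
    if v2.exists():
        raw = v2.read_text().strip()
        return None if raw == "max" else int(raw)
    # cgroup v1: an enormous number effectively means unlimited
    v1 = Path("/sys/fs/cgroup/memory/memory.limit_in_bytes")
    if v1.exists():
        return int(v1.read_text().strip())
    return None

print(container_memory_limit())
```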
<python><docker><docker-compose><fastapi>
2023-02-27 07:59:47
1
331
Nick Zorander
75,577,553
5,056,347
How to print HTML content from a Django template in a view?
<p>Let's say I have a template called <code>index.html</code> with a bunch of block tags and variables and URLs. I can display it from my view using the <code>render()</code> function like so:</p> <pre><code>def index(request): return render(request, &quot;index.html&quot;, {...}) </code></pre> <p>I would like to print the actual content that is being generated here in a normal python function. Something like this:</p> <pre><code>def get_html(): html_content = some_function(&quot;index.html&quot;, {...}) # Takes in the template name and context print(html_content) </code></pre> <p>Is this possible to do using Django?</p>
<python><django><django-views><django-templates>
2023-02-27 07:04:16
1
8,874
darkhorse
75,577,478
1,539,757
Execute javascript code in nodejs using spawn
<p>I wrote the code below in Node.js to execute Python code and print logs and return output using spawn,</p> <pre><code>const { spawn } = require('node:child_process'); const ls = spawn('node', ['-c', &quot;python code content will come here&quot;]); ls.stdout.on('data', (data) =&gt; { console.log(`stdout: ${data}`); }); </code></pre> <p>It works fine, but now I want to execute JavaScript code in place of the Python code. Does anybody have an idea how to do this, or is my command wrong?</p> <p>Below is my code to execute JavaScript code</p> <pre><code>const { spawn } = require('node:child_process'); jscode = `var hello = function() { console.log(&quot;log from DB function!&quot;); console.log(`{&quot;return&quot;:&quot;Return from DB function!&quot;}`); }; hello(); ` const ls = spawn('node', ['-c', jscode]); ls.stdout.on('data', (data) =&gt; { console.log(`stdout: ${data}`); }); </code></pre>
<javascript><python><node.js>
2023-02-27 06:54:07
1
2,936
pbhle
75,577,468
3,323,526
Why didn't Python multi-processing reduce processing time to 1/4 on a 4-cores CPU
<p>Multi-threading in CPython cannot use more than one CPU in parallel because the existence of GIL. To break this limitation, we can use multiprocessing. I'm writing Python code to demonstrate that. Here is my code:</p> <pre><code>from math import sqrt from time import time from threading import Thread from multiprocessing import Process def time_recorder(job_name): &quot;&quot;&quot;Record time consumption of running a function&quot;&quot;&quot; def deco(func): def wrapper(*args, **kwargs): print(f&quot;Run {job_name}&quot;) start_epoch = time() func(*args, **kwargs) end_epoch = time() time_consume = end_epoch - start_epoch print(f&quot;Time consumption of {job_name}: {time_consume}&quot;) return wrapper return deco def calc_sqrt(): &quot;&quot;&quot;Consume the CPU&quot;&quot;&quot; i = 2147483647 for j in range(20 * 1000 * 1000): i -= 1 sqrt(i) @time_recorder(&quot;one by one&quot;) def one_by_one(): for _ in range(8): calc_sqrt() @time_recorder(&quot;multi-threading&quot;) def multi_thread(): t_list = list() for i in range(8): t = Thread(name=f'worker-{i}', target=calc_sqrt) t.start() t_list.append(t) for t in t_list: t.join() @time_recorder(&quot;multi-processing&quot;) def multi_process(): p_list = list() for i in range(8): p = Process(name=f&quot;worker-{i}&quot;, target=calc_sqrt) p.start() p_list.append(p) for p in p_list: p.join() def main(): one_by_one() print('-' * 40) multi_thread() print('-' * 40) multi_process() if __name__ == '__main__': main() </code></pre> <p>Function &quot;calc_sqrt()&quot; is the CPU-consuming job, which calculates square root for 20 million times. Decorator &quot;time_recorder&quot; calculates the running time of the decorated functions. 
And there are 3 functions which run the CPU-consuming job one by one, in multiple threads and in multiple processes respectively.</p> <p>By running the above code on my laptop, I got the following output:</p> <pre><code>Run one by one Time consumption of one by one: 39.31295585632324 ---------------------------------------- Run multi-threading Time consumption of multi-threading: 39.36112403869629 ---------------------------------------- Run multi-processing Time consumption of multi-processing: 23.380358457565308 </code></pre> <p>Time consumption of &quot;one_by_one()&quot; and &quot;multi_thread()&quot; are almost the same, which are as expected. But time consumption of &quot;multi_process()&quot; is a little bit confusing. My laptop has an Intel Core i5-7300U CPU, which has 2 cores, 4 threads. Task manager simply shows that there are 4 (logic) CPUs in my computer. Task manager also shows that the CPU usage of all the 4 CPUs are 100% during the execution. But the processing time didn't reduce to 1/4 but rather 1/2, why? The operating system of my laptop is Windows 10 64-bit.</p> <p>Later, I tried this program on a Linux virtual machine, and got the following output, which is more reasonable:</p> <pre><code>Run one by one Time consumption of one by one: 33.78603768348694 ---------------------------------------- Run multi-threading Time consumption of multi-threading: 34.396817684173584 ---------------------------------------- Run multi-processing Time consumption of multi-processing: 8.470374584197998 </code></pre> <p>This time, processing time with multi-processing reduced to 1/4 of that with multi-threading. Host of this Linux server equipped with an Intel Xeon E5-2670, which has 8 cores and 16 threads. The host OS is CentOS 7. 
The VM is assigned with 4 vCPUs and the OS is Debian 10.</p> <p>The questions are:</p> <ul> <li>why didn't the processing time of the multi-processing job reduce to 1/4 but rather to just 1/2 on my laptop?</li> <li>Is it a CPU issue, which means that the 4 threads of Intel Core i5-7300U are not &quot;real parallel&quot; and may impact each other, and Intel Xeon E5-2670 doesn't have that issue?</li> <li>Or is it an OS issue, which means that Windows 10 doesn't support multi-processing well, processes may impact each other when running in parallel?</li> </ul>
<python><linux><windows><multiprocessing><cpu-architecture>
2023-02-27 06:52:32
2
3,990
Vespene Gas
75,577,379
1,852,526
Python Popen writing command prompt output to logfile
<p>Firstly I am running this Python script on Windows.</p> <p>I am trying to open a new command prompt window every time and want to execute a .exe with some arguments. After I execute, I want to copy the command prompt output to a log file. I created a log file at the location say &quot;log.log&quot;. When I run the script, it doesn't seem to write the contents to the log file at all.</p> <p>The coreServerFullPath is like <code>C:\Users\nanduk\Desktop\SourceCode\Core\CoreServer\Server\CoreServer\bin\Debug</code></p> <p>And here in this location I created a blank text document and named it to log.log</p> <pre><code>def OpenCoreServer(): os.chdir(coreServerFullPath) #logging.basicConfig(filename=&quot;log.log&quot;, level=logging.INFO) logging.info('your text goes here') #I see this line in the log. result = subprocess.Popen('start cmd /k &quot;CoreServer.exe -c -s&quot; &gt;&gt; log.log 2&gt;&amp;1', shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True) time.sleep(4) print(&quot;ABCD&quot;) with open('log.log', 'r') as logg: print(logg.read()) </code></pre>
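What seems to be happening: `start cmd /k` spawns a brand-new console window, and the `>> log.log` redirection applies to the `start` command itself (which prints nothing), not to what runs inside that new window. Running the executable directly with `stdout` pointed at the file captures everything; the sketch substitutes a portable stand-in command for `CoreServer.exe -c -s`, which is the only hypothetical part.

```python
import subprocess
import sys

# Hypothetical stand-in for ["CoreServer.exe", "-c", "-s"] so the sketch
# runs anywhere; swap the real command list back in practice
cmd = [sys.executable, "-c", "print('server started')"]

# stdout goes straight into the file; stderr is folded into the same stream
with open("log.log", "w") as log:
    subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT, check=True)

with open("log.log") as log:
    content = log.read()
print(content)
```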
<python><logging><popen><python-logging>
2023-02-27 06:35:09
1
1,774
nikhil
75,577,350
9,236,039
How to fetch data and store into multiple files based on condition
<p>test.csv</p> <pre><code>name,age,n1,n2,n3 a,21,1,2,3 b,22,4,9,0 c,25,4,5,6 d,25,41,5,6 e,25,4,66,6 f,25,4,5,66 g,25,4,55,6 h,25,4,5,56 i,25,41,5,61 j,25,4,51,60 k,20,40,50,60 l,21,40,51,60 </code></pre> <p>My code so far, reading and storing into a dict:</p> <pre><code>import pandas as pd input_file = pd.read_csv(&quot;test.csv&quot;) for i in range(0, len(input_file['name'])): dict1 = {} dict1[&quot;name&quot;] = str(input_file['name'][i]) dict1[&quot;age&quot;] = str(input_file['age'][i]) dict1[&quot;n1&quot;] = str(input_file['n1'][i]) dict1[&quot;n2&quot;] = str(input_file['n2'][i]) dict1[&quot;n3&quot;] = str(input_file['n3'][i]) </code></pre> <p>I want to generate output in multiple files, one for every 5 rows of data. (I need to do this using the writelines function in Python, as I do many other things in writelines.) The file name should be generated dynamically, and the input is dynamic as well (meaning more rows can come).</p> <p>Example of the expected output (here the file name must be dynamic):</p> <pre><code>out_file = open('File1.xml', 'w') out_file.writelines(I will process with dictionary data row by row) out_file.writelines(&quot;\n&quot;) </code></pre> <p>File1</p> <pre><code>a,21,1,2,3 b,22,4,9,0 c,25,4,5,6 d,25,41,5,6 e,25,4,66,6 </code></pre> <p>File2</p> <pre><code>f,25,4,5,66 g,25,4,55,6 h,25,4,5,56 i,25,41,5,61 j,25,4,51,60 </code></pre> <p>File3</p> <pre><code>k,20,40,50,60 l,21,40,51,60 </code></pre>
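One way to sketch this (`split_rows` is a hypothetical helper name; the `File&lt;n&gt;.xml` naming follows the question's example): read all rows, then slice them into chunks of 5 and write each chunk with `writelines`, so per-row processing can be inserted where each line is built.

```python
import csv

def split_rows(in_path, chunk=5):
    # Read the CSV into a list of dicts (one per data row).
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    names = []
    # Step through the rows in blocks of `chunk`; the last block may
    # be shorter, which naturally handles a dynamic number of rows.
    for i in range(0, len(rows), chunk):
        out_name = f"File{i // chunk + 1}.xml"
        lines = []
        for row in rows[i:i + chunk]:
            # Per-row processing goes here; as a placeholder, join the
            # dict values back into one comma-separated line.
            lines.append(",".join(row.values()) + "\n")
        with open(out_name, "w") as out:
            out.writelines(lines)
        names.append(out_name)
    return names
```

With the 12-row test.csv above, this produces File1.xml and File2.xml with 5 rows each and File3.xml with the remaining 2.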
<python><pandas><dataframe><csv>
2023-02-27 06:30:44
1
357
Dhananjaya D N
75,577,330
2,975,438
FastAPI in docker container: how to get automatic detection of changes in python file?
<p>I was looking at this similar <a href="https://stackoverflow.com/questions/63038345/how-to-make-fastapi-pickup-changes-in-an-api-routing-file-automatically-while-ru">question</a>, but for some reason it does not work for me.</p> <p>I have the following folder structure:</p> <pre><code>api.service - app -- main.py - Dockerfile - requirements.txt docker-compose.yml </code></pre> <p>main.py:</p> <pre><code>from fastapi import FastAPI, File, HTTPException, Request, Body app = FastAPI() @app.get(&quot;/&quot;) def read_root(): return {&quot;Hello&quot;: &quot;World!!!&quot;} @app.get(&quot;/items/{item_id}&quot;) def read_item(item_id: int, q: str = None): return {&quot;item_id&quot;: item_id, &quot;q&quot;: q} </code></pre> <p>docker-compose.yml:</p> <pre><code>version: '3' services: api-service: build: ./api.service ports: - &quot;8180:80&quot; networks: - default volumes: - ./api.service/app:/app/app </code></pre> <p>Dockerfile:</p> <pre><code>FROM tiangolo/uvicorn-gunicorn-fastapi:python3.11 COPY ./requirements.txt /app/requirements.txt RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt </code></pre> <p>I tried playing with the volumes in docker-compose as well as with the Dockerfile, but no luck so far.</p>
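A likely cause: the bind mount does update the files inside the container, but the `tiangolo/uvicorn-gunicorn-fastapi` image starts gunicorn workers by default, and gunicorn does not watch for file changes, so the changed code is never re-imported. One development-only option (a sketch; the module path `app.main:app` is assumed from the folder layout above) is to override the container command with uvicorn in `--reload` mode:

```yaml
version: '3'
services:
  api-service:
    build: ./api.service
    ports:
      - "8180:80"
    volumes:
      - ./api.service/app:/app/app
    # Override the image's default gunicorn startup, which never watches
    # for file changes, with uvicorn's auto-reload (development only):
    command: uvicorn app.main:app --host 0.0.0.0 --port 80 --reload
```

After `docker-compose up`, edits under `./api.service/app` should trigger a reload; for production the default gunicorn startup (without `--reload`) remains the better choice.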
<python><docker><fastapi>
2023-02-27 06:26:47
0
1,298
illuminato
75,577,207
1,668,622
With argparse, is it possible to process arguments only up to the first positional argument?
<p>I'm writing a tool which passes arguments to another command provided with the arguments like this:</p> <pre><code>foo --arg1 -b -c bar -v --number=42 </code></pre> <p>In this example <code>foo</code> is my tool and <code>--arg1 -b -c</code> should be the arguments parsed by <code>foo</code>, while <code>-v --number=42</code> are the arguments going to <code>bar</code>, which is the <em>command</em> called by <code>foo</code>.</p> <p>So this is quite similar to <code>strace</code>, where you can provide arguments to <code>strace</code> while still providing a command with custom arguments.</p> <p><code>argparse</code> provides <code>parse_known_args()</code>, but it will parse <em>all</em> arguments it knows, even those coming after <code>bar</code>.</p> <p>Some tools use a special syntax element (e.g. <code>--</code>) to separate arguments with different semantics, but since I know <code>foo</code> will only process named arguments, I'd like to avoid this.</p> <p>It can't be too hard to manually find the first positional argument, you might think, and this is what I'm currently doing:</p> <pre class="lang-py prettyprint-override"><code>parser.add_argument(&quot;--verbose&quot;, &quot;-v&quot;, action=&quot;store_true&quot;) all_args = args or sys.argv[1:] with suppress(StopIteration): split_at = next(i for i, e in enumerate(all_args) if not e.startswith(&quot;-&quot; )) return parser.parse_args(all_args[:split_at]), all_args[split_at:] raise RuntimeError(&quot;No command provided&quot;) </code></pre> <p>And this works with the example I've provided. 
But with <code>argparse</code> you can specify arguments with values which can, but don't have to, be provided with a <code>=</code>:</p> <pre><code>foo --file=data1 --file data2 bar -v --number=42 </code></pre> <p>So here it would be much harder to manually identify <code>data2</code> as a value for the second <code>--file</code> argument, and <code>bar</code> as the first positional argument.</p> <p>My current approach is to manually split the arguments (backwards) and see if all 'left-hand' arguments parse successfully:</p> <pre class="lang-py prettyprint-override"><code>def parse_args(): class MyArgumentParser(ArgumentParser): def error(self, message: str) -&gt; NoReturn: raise RuntimeError() parser = MyArgumentParser() parser.add_argument(&quot;--verbose&quot;, &quot;-v&quot;, action=&quot;store_true&quot;) parser.add_argument(&quot;--with-value&quot;, type=str) all_args = sys.argv[1:] for split_at in (i for i, e in reversed(list(enumerate(all_args))) if not e.startswith(&quot;-&quot;)): with suppress(RuntimeError): return parser.parse_args(all_args[:split_at]), all_args[split_at:] parser.print_help(sys.stderr) print(&quot;No command provided&quot;, file=sys.stderr) raise SystemExit(-1) </code></pre> <p>That works for me, but besides the clumsy extra <code>MyArgumentParser</code>, needed just to be able to handle parser errors manually, I now have to classify mistakes manually, since a genuine <code>ArgumentError</code> is indistinguishable from the errors that naturally occur while probing split points.</p> <p>So is there a way to tell <code>argparse</code> to parse only until the first positional argument and then stop, even if there are arguments it knows after that one?</p>
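argparse can in fact be told to stop at the first positional: declare the command as a positional with `nargs=argparse.REMAINDER`, which collects everything from the first positional argument onward verbatim, including option-like tokens. A sketch with a hypothetical `--file` option mirroring the harder example above:

```python
import argparse

parser = argparse.ArgumentParser(prog="foo")
parser.add_argument("--file", action="append")
parser.add_argument("--verbose", "-v", action="store_true")
# Everything from the first positional onward is gathered untouched,
# so -v and --number=42 after `bar` are NOT parsed as foo's options.
parser.add_argument("cmd", nargs=argparse.REMAINDER,
                    help="command to run, with its own arguments")

args = parser.parse_args(
    ["--file=data1", "--file", "data2", "bar", "-v", "--number=42"])
print(args.file)  # ['data1', 'data2']
print(args.cmd)   # ['bar', '-v', '--number=42']
```

`--file data2` is still consumed as an option value because option parsing only stops once the first genuine positional (`bar`) is reached. One caveat: REMAINDER misbehaves if the very first remainder token itself starts with `-`, but for a strace-style "command plus its arguments" layout that case doesn't arise.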
<python><command-line-arguments><argparse><positional-argument>
2023-02-27 06:04:32
1
9,958
frans
75,576,623
13,162,807
Docker python app can't find `PATH_INFO` env variable
<p>I'm trying to run a Python backend using Docker; however, the app can't find the <code>PATH_INFO</code> environment variable. On another machine it works well; the difference is the OS version: the previous one is <code>ubuntu 21.04</code> and the current one is <code>ubuntu 22.04</code>.</p> <p>The error is caused by this piece of code:</p> <pre><code>path = environ.get(&quot;PATH_INFO&quot;, &quot;/&quot;).lower() </code></pre> <p>Here <code>environ</code> is a dict containing all environment variables.</p> <p>The app is run using <code>docker-compose</code>, which creates a container whose <code>ENTRYPOINT</code> includes the <code>docker_entry</code> file:</p> <pre><code>import bjoern from index import app bjoern.run(app, '0.0.0.0', 80) </code></pre> <p>where <code>app</code> is</p> <pre><code>if __name__ == &quot;__main__&quot;: wsgiref.handlers.CGIHandler().run(app) </code></pre>
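One thing worth checking (a sketch, since the full app isn't shown): under `CGIHandler` the request metadata is taken from `os.environ`, because CGI passes it as real environment variables, but under bjoern (or any normal WSGI server) `PATH_INFO` exists only in the per-request `environ` dict passed to the app callable. Code that reads `os.environ` therefore breaks when the server changes. A minimal WSGI app reading it the portable way:

```python
def app(environ, start_response):
    # PATH_INFO is a per-request key in the WSGI environ dict the server
    # supplies, not an OS environment variable; read it here, inside the
    # callable, with a default for the root path.
    path = environ.get("PATH_INFO", "/").lower()
    body = f"path is {path}".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```

This app runs unchanged under `bjoern.run(app, '0.0.0.0', 80)` and under `CGIHandler().run(app)`, since both hand the request data to the callable the same way.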
<python><docker><nginx>
2023-02-27 03:49:59
1
305
Alexander P
75,576,409
12,596,824
Duplicating rows and creating an ID column and a repeating type column in Python
<p>I have the following input dataframe:</p> <p><strong>Input Dataframe:</strong></p> <pre><code>c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 56 1 2 4 1.0 1 4.0 1.0 2 2 18000.0 52 2 2 5 3.0 1 4.0 1.0 1 1 0.0 82 2 2 5 4.0 2 4.0 1.0 1 1 0.0 26 1 2 4 2.0 1 4.0 1.0 2 2 12000.0 65 1 2 4 1.0 1 4.0 23.0 2 1 324900.0 </code></pre> <p>In the input dataframe, I want to duplicate each row 3 times. The calculated <code>id</code> column is the same number that repeats three times and represents the row number. The calculated <code>type</code> column goes 1,2,3 for each original record.</p> <p>How can I do this in Python?</p> <p><strong>Expected Output:</strong></p> <pre><code>id type c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 1 1 56 1 2 4 1.0 1 4.0 1.0 2 2 18000.0 1 2 56 1 2 4 1.0 1 4.0 1.0 2 2 18000.0 1 3 56 1 2 4 1.0 1 4.0 1.0 2 2 18000.0 2 1 52 2 2 5 3.0 1 4.0 1.0 1 1 0.0 2 2 52 2 2 5 3.0 1 4.0 1.0 1 1 0.0 2 3 52 2 2 5 3.0 1 4.0 1.0 1 1 0.0 3 1 82 2 2 5 4.0 2 4.0 1.0 1 1 0.0 3 2 82 2 2 5 4.0 2 4.0 1.0 1 1 0.0 3 3 82 2 2 5 4.0 2 4.0 1.0 1 1 0.0 4 1 26 1 2 4 2.0 1 4.0 1.0 2 2 12000.0 4 2 26 1 2 4 2.0 1 4.0 1.0 2 2 12000.0 4 3 26 1 2 4 2.0 1 4.0 1.0 2 2 12000.0 5 1 65 1 2 4 1.0 1 4.0 23.0 2 1 324900.0 5 2 65 1 2 4 1.0 1 4.0 23.0 2 1 324900.0 5 3 65 1 2 4 1.0 1 4.0 23.0 2 1 324900.0 </code></pre> <p>I have the following code, but I don't like it because I have to use assign() twice, and I'm also not sure how to calculate the id column... I just put placeholder code there. I seem to get the type column correct, though.</p> <p><strong>My Attempt:</strong></p> <pre><code>(df .dropna() .assign(id = lambda x: range(1, len(x) + 1) ) .pipe(lambda x: x.loc[x.index.repeat(3)]) .assign(id = lambda x: np.r_[:len(x)] % 3 + 1, type = lambda x: np.r_[:len(x)] % 3 + 1)) </code></pre>
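One pass with `index.repeat` plus `np.repeat`/`np.tile` avoids the double `assign`: `id` is the original row position repeated 3 times, and `type` is the 1..3 pattern tiled once per original row. A sketch (`triple` is a hypothetical helper name) demonstrated on a trimmed-down frame:

```python
import numpy as np
import pandas as pd

def triple(df, reps=3):
    # Repeat each row `reps` times, then rebuild a clean 0..n index.
    out = df.loc[df.index.repeat(reps)].reset_index(drop=True)
    # type cycles 1..reps within each original row's block...
    out.insert(0, "type", np.tile(np.arange(1, reps + 1), len(df)))
    # ...and id is the (1-based) original row number, repeated.
    out.insert(0, "id", np.repeat(np.arange(1, len(df) + 1), reps))
    return out

df = pd.DataFrame({"c1": [56, 52], "c2": [1, 2]})
print(triple(df))
```

`insert(0, ...)` places the new columns at the front, so the column order matches the expected output without an extra reordering step.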
<python><pandas>
2023-02-27 02:54:55
2
1,937
Eisen
75,576,273
5,366,075
Generate PDF containing multiple SVG barcodes
<p>I am trying to generate a PDF containing all the barcodes in SVG format. Here is the code I've written so far.</p> <pre><code>from code128 import Code128 import csv from reportlab.graphics.shapes import Group, Drawing from reportlab.graphics import renderPDF from reportlab.lib.pagesizes import letter # Scale of SVG images SVG_SCALE = 10 # Read the CSV file with open('products.csv', newline='') as csvfile: reader = csv.DictReader(csvfile) products = [(row['code'], row['name']) for row in reader] # Generate barcodes for each product and add to drawing drawing = Drawing(width=letter[0], height=letter[1]) x, y = 50, 50 for product_code, product_name in products: # Generate SVG barcode using code128 library ean = Code128(product_code) barcode_svg = ean.svg(scale=SVG_SCALE) # Create a reportlab group for the barcode and set its transform barcode_group = Group() barcode_group.transform = (SVG_SCALE, 0, 0, SVG_SCALE, x, y) barcode_group.add(barcode_svg) drawing.add(barcode_group) # Add product name below barcode drawing.addString(x + (barcode_svg.width * SVG_SCALE) / 2, y - 10, product_name, fontSize=10, textAnchor=&quot;middle&quot;) # Move cursor to the next barcode position y += 75 if y &gt; 750: y = 50 x += 200 # Render drawing as PDF renderPDF.drawToFile(drawing, &quot;barcodes.pdf&quot;, &quot;Barcode Sheet&quot;) </code></pre> <p>The error I get is <code>ImportError: cannot import name 'Code128' from 'code128'</code></p> <p>I can't find another way to achieve what I'm after. Can someone please help me fix the imports/code or another way of achieving this?</p>
<python><svg><python-barcode>
2023-02-27 02:14:44
1
1,863
usert4jju7
75,576,166
18,758,062
How to run matplotlib FuncAnimation using multiprocessing?
<p>It takes a long time (20 secs) to render 500 frames of this matplotlib animation using the code below.</p> <p>Is it possible to produce the same animation <code>anim</code> but make use of more CPU cores, such as with the use of <code>multiprocessing</code> to create more processes that runs <code>FuncAnimation</code> for different ranges of frames, then combine all the frames together at the end into <code>anim</code>?</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import matplotlib.animation import numpy as np plt.rcParams[&quot;animation.html&quot;] = &quot;jshtml&quot; plt.rcParams[&quot;figure.dpi&quot;] = 100 plt.ioff() def animate(frame_num): x = np.linspace(0, 2*np.pi, 100) y = np.sin(x + 2*np.pi * frame_num/100) line.set_data((x, y)) return line fig, ax = plt.subplots() line, = ax.plot([]) ax.set_xlim(0, 2*np.pi) ax.set_ylim(-1.1, 1.1) matplotlib.animation.FuncAnimation( fig, animate, frames=100, ) </code></pre>
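`FuncAnimation` itself is single-process, but since every frame here depends only on `frame_num`, the frames can be rasterized independently by a process pool and stitched afterwards (e.g. with `ffmpeg -i frame_%03d.png out.mp4`, assuming ffmpeg is available separately). A sketch under that assumption, which gives up the in-notebook `anim` object in exchange for parallel rendering:

```python
import multiprocessing as mp

import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, safe inside worker processes
import matplotlib.pyplot as plt

def render_frame(args):
    # One worker renders one frame to a PNG; each process builds its own
    # figure, since matplotlib figures can't be shared across processes.
    frame_num, out_dir = args
    fig, ax = plt.subplots()
    x = np.linspace(0, 2 * np.pi, 100)
    ax.plot(x, np.sin(x + 2 * np.pi * frame_num / 100))
    ax.set_xlim(0, 2 * np.pi)
    ax.set_ylim(-1.1, 1.1)
    path = f"{out_dir}/frame_{frame_num:03d}.png"
    fig.savefig(path)
    plt.close(fig)  # release the figure's memory in the worker
    return path

def render_all(out_dir, frames=100, workers=None):
    with mp.Pool(processes=workers) as pool:
        return pool.map(render_frame,
                        [(i, out_dir) for i in range(frames)])
```

The per-figure setup cost is paid once per frame rather than once per animation, so the speedup is below the core count, but for many slow frames the pool still wins substantially.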
<python><matplotlib><multiprocessing><jupyter>
2023-02-27 01:46:21
0
1,623
gameveloster