Dataset columns (dtype, min / max value or string length):

Column              Type           Min                   Max
QuestionId          int64          74.8M                 79.8M
UserId              int64          56                    29.4M
QuestionTitle       stringlengths  15                    150
QuestionBody        stringlengths  40                    40.3k
Tags                stringlengths  8                     101
CreationDate        stringdate     2022-12-10 09:42:47   2025-11-01 19:08:18
AnswerCount         int64          0                     44
UserExpertiseLevel  int64          301                   888k
UserDisplayName     stringlengths  3                     30
75,349,681
13,849,446
Extract chat id from https://web.telegram.org/z/ using selenium
<p>I want to extract the chat-id from Telegram web version <strong>z</strong>. I have done it on Telegram web version <strong>k</strong>, but it is not present in the <strong>z</strong> version. I looked everywhere but could not find any element containing the chat-id. I know I can get the chat-id from the URL after opening the chat, but I cannot open the chat for certain reasons. The following is the basic code to open Telegram.</p>
<pre><code>from seleniumwire import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.firefox import GeckoDriverManager
import sys

URL = 'https://web.telegram.org/'
firefox_options = webdriver.FirefoxOptions()
path_to_firefox_profile = &quot;output_files\\firefox\\ghr2wgpa.default-release&quot;
profile = webdriver.FirefoxProfile(path_to_firefox_profile)
profile.set_preference(&quot;dom.webdriver.enabled&quot;, False)
profile.set_preference('useAutomationExtension', False)
firefox_options.set_preference(&quot;general.useragent.override&quot;, 'user-agent=Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0')
profile.set_preference(&quot;security.mixed_content.block_active_content&quot;, False)
profile.update_preferences()
firefox_options.add_argument(&quot;--width=1400&quot;)
firefox_options.add_argument(&quot;--height=1000&quot;)
driver_installation = GeckoDriverManager().install()
service = Service(driver_installation)
if sys.platform == 'win32':
    from subprocess import CREATE_NO_WINDOW
    service.creationflags = CREATE_NO_WINDOW
driver = webdriver.Firefox(options=firefox_options, firefox_profile=profile, service=service)
driver.get(URL)
driver.close()
driver.quit()
</code></pre>
<p>All I know is that the chat-id is passed to some function in common.js when we click on the chat to open it. The chat-id is available as the following attribute.</p>
<p><em>data-peer-id=&quot;777000&quot;</em></p>
<p>Thanks in advance.</p>
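A hedged sketch of one way to approach the question above: once `driver.page_source` is in hand, every `data-peer-id` value can be pulled out with the standard-library HTML parser, without having to click the chat open. The markup below is a hypothetical stand-in for the real page source.

```python
from html.parser import HTMLParser

class PeerIdCollector(HTMLParser):
    """Collect every data-peer-id attribute value seen in the markup."""
    def __init__(self):
        super().__init__()
        self.peer_ids = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "data-peer-id":
                self.peer_ids.append(value)

# Hypothetical fragment standing in for driver.page_source:
page_source = '<div class="chat" data-peer-id="777000"><span>Telegram</span></div>'
collector = PeerIdCollector()
collector.feed(page_source)
print(collector.peer_ids)  # ['777000']
```

Inside Selenium itself the equivalent lookup would be `driver.find_elements(By.CSS_SELECTOR, "[data-peer-id]")` followed by `get_attribute("data-peer-id")` on each element, assuming the attribute is present in the z-version DOM at all.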
<python><selenium><selenium-webdriver><telegram>
2023-02-05 02:04:06
1
1,146
farhan jatt
75,349,639
3,019,570
Indirect inheritance in Python through decorators
<p>I want to define a decorator (let's name it subcollection) that behaves like this:</p>
<pre class="lang-py prettyprint-override"><code>class ParentClass:
    name = None

    def __init__(self, name=None):
        self.name = name


@subcollection
class ChildClass:
    def greet(self):
        print(f'Hello from {self.outter.name}')


parent_class = ParentClass('Mahdi')
child_class = parent_class.ChildClass()
assert child_class.greet() == 'Hello from Mahdi'
</code></pre>
<p>I'm trying to achieve a complicated inheritance pattern in Python that has me confused. This is needed to create a nested unstructured ORM.</p>
<p>I probably could do this with factory functions, but I'm wondering if there is any workaround to pass the <code>parent_class</code> instance at the time of the <code>child_class</code> creation with decorators?!</p>
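One possible sketch (an assumption about the design, not the only option): nest the decorated class inside the parent and make the decorator a descriptor, so that accessing `ChildClass` through a parent instance returns a factory that wires up the parent reference. The attribute spelling `outter` follows the question; `greet` returns the string here so the question's `assert` can pass.

```python
class subcollection:
    """Descriptor: accessing the wrapped class via a parent instance
    yields a factory that attaches that instance to each child."""
    def __init__(self, cls):
        self.cls = cls

    def __get__(self, instance, owner):
        child_cls = self.cls

        def factory(*args, **kwargs):
            obj = child_cls(*args, **kwargs)
            obj.outter = instance  # spelling follows the question
            return obj
        return factory


class ParentClass:
    def __init__(self, name=None):
        self.name = name

    @subcollection
    class ChildClass:
        def greet(self):
            return f'Hello from {self.outter.name}'


parent = ParentClass('Mahdi')
child = parent.ChildClass()
print(child.greet())  # Hello from Mahdi
```

The descriptor protocol is what lets `parent.ChildClass` see the `parent` instance; a module-level `ChildClass`, as written in the question, has no access to any particular parent, which is why a plain decorator alone cannot do this.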
<python><class><inheritance><python-decorators>
2023-02-05 01:52:46
0
1,977
Mahdi Jadaliha
75,349,592
3,788
Format HTML output of a Pandas dataframe based on row and column?
<p>I have the following code which outputs an HTML table:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({
    'Color': ['Red', 'Red', 'Yellow', 'Yellow'],
    'Fruit': ['Apple', 'Strawberry', 'Banana', 'Peach'],
    'Weight': [5, 3, 8, 6]
})
df = pd.pivot(df, index='Color', columns='Fruit', values='Weight')
print(df.to_html())
</code></pre>
<p>I would like the output of each cell to reference its row (index) and column value. For example, I'd like a format function that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def format_value(val, row, col):
    return f&quot;&lt;a href=/{row}/{col}&gt;{val}&lt;/a&gt;&quot;
</code></pre>
<p>However, I cannot figure out how to do this. This is what I've tried:</p>
<h2>df.applymap()</h2>
<pre class="lang-py prettyprint-override"><code>df = df.applymap(lambda x: format_value(x, row=x.index, col=x.name))
</code></pre>
<p>This returns an error:</p>
<pre><code>AttributeError: 'float' object has no attribute 'index'
</code></pre>
<h2>to_html(formatter=...)</h2>
<pre><code>def format_cell(val, col, row):
    def f(val):
        return f&quot;/{row}/{col}&quot;
    return f

format_map = {}
for col in df.columns:
    format_map[col] = format_cell(col, df.index)

print(df.to_html(formatter=format_map))
</code></pre>
<p>However, <code>df.index</code> returns all of the indexes, so there is no way to know which row specifically the formatter is referring to.</p>
<h2>Create a new dataframe</h2>
<p>I was able to make this work in a cumbersome way, by creating a new dataframe and using <code>iterrows()</code>:</p>
<pre class="lang-py prettyprint-override"><code>df_formatted = df.copy()
for index, row in df.iterrows():
    for c in df.columns:
        df_formatted[c][index] = f&quot;&lt;a href=/{index}/{c}&gt;{df[c][index]}&lt;/a&gt;&quot;
</code></pre>
<p>How can I create an HTML formatter which is aware of both the index and column for each value?</p>
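One way the question above can be answered without `iterrows()` (a sketch, not the only idiom): rebuild the frame with a comprehension, since `df.at[row, col]` gives access to both labels at once, then render with `escape=False` so the anchor tags survive.

```python
import pandas as pd

def format_value(val, row, col):
    # Same formatter shape as in the question.
    return f"<a href=/{row}/{col}>{val}</a>"

df = pd.DataFrame({'Apple': [5, 8], 'Peach': [3, 6]},
                  index=pd.Index(['Red', 'Yellow'], name='Color'))

# Rebuild cell by cell; every cell knows its own row and column label.
formatted = pd.DataFrame(
    {c: [format_value(df.at[r, c], r, c) for r in df.index] for c in df.columns},
    index=df.index,
)
html = formatted.to_html(escape=False)  # escape=False keeps the <a> tags literal
print(formatted.at['Red', 'Apple'])  # <a href=/Red/Apple>5</a>
```

The `to_html(formatters=...)` route genuinely cannot do this, since a formatter only ever receives the scalar value, never its position.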
<python><pandas><dataframe>
2023-02-05 01:39:46
1
19,469
poundifdef
75,349,515
3,247,006
"on_session" vs "off_session" in Stripe
<p>I can set <code>on_session</code> and <code>off_session</code> for <a href="https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-payment_intent_data-setup_future_usage" rel="nofollow noreferrer">payment_intent_data.setup_future_usage</a> and <a href="https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-payment_method_options-card-setup_future_usage" rel="nofollow noreferrer">payment_method_options.card.setup_future_usage</a> as shown below. I use <strong>Python</strong>:</p>
<pre class="lang-py prettyprint-override"><code>checkout_session = stripe.checkout.Session.create(
    # ...
    payment_intent_data={
        &quot;setup_future_usage&quot;: &quot;on_session&quot;  # Here
    },
    payment_method_options={
        &quot;card&quot;: {
            &quot;setup_future_usage&quot;: &quot;on_session&quot;  # Here
        },
    },
    # ...
)
</code></pre>
<pre class="lang-py prettyprint-override"><code>checkout_session = stripe.checkout.Session.create(
    # ...
    payment_intent_data={
        &quot;setup_future_usage&quot;: &quot;off_session&quot;  # Here
    },
    payment_method_options={
        &quot;card&quot;: {
            &quot;setup_future_usage&quot;: &quot;off_session&quot;  # Here
        }
    },
    # ...
)
</code></pre>
<p>But I cannot quite understand what <code>on_session</code> and <code>off_session</code> are, even though I read <a href="https://stripe.com/docs/payments/payment-intents#future-usage" rel="nofollow noreferrer">Optimizing cards for future payments</a> in the Stripe docs.</p>
<p>So, what are <code>on_session</code> and <code>off_session</code>?</p>
<python><stripe-payments><payment><checkout><python-stripe-api>
2023-02-05 01:16:28
3
42,516
Super Kai - Kazuya Ito
75,349,427
7,583,953
Is there a difference between math.inf and cmath.inf in Python?
<p>Is there any difference between the infinities returned by the <code>math</code> module and <code>cmath</code> module?</p> <p>Does the complex infinity have an imaginary component of 0?</p>
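Both parts of the question above can be checked directly in the interpreter: `cmath.inf` is the same plain IEEE-754 float as `math.inf`, and a complex number built from it has imaginary part `0.0` unless you reach for `cmath.infj`.

```python
import math
import cmath

# Both modules expose the same float infinity.
print(math.inf == cmath.inf)   # True
print(type(cmath.inf))         # <class 'float'>

# A complex number with infinite real part has imaginary part 0.0 ...
z = complex(cmath.inf, 0)
print(z.imag)                  # 0.0

# ... while cmath.infj is the constant with an infinite *imaginary* part.
print(cmath.infj == complex(0.0, math.inf))  # True
```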
<python><python-3.x><complex-numbers><infinity><python-cmath>
2023-02-05 00:54:45
1
9,733
Alec
75,349,313
5,051,937
Mock loading a dataframe during testing the function
<p>I am fairly new to unit testing in Python and have been using pytest. I have been using fixtures for some unit tests, but there is a simple function (see below) that I want to test which has me baffled.</p>
<p>It simply reads a parquet file from S3 and, as there are duplicate rows in the source data, dedupes on a given list of columns.</p>
<p>So in the unit test I just want to mock loading the data from S3, as I don't want to perform this read. Any pointers on how to approach this?</p>
<pre><code>def load_parquet(bucket_name: str,
                 bucket_prefix: str,
                 folder: str,
                 columns_required: list,
                 log_date_start: date,
                 log_date_end: date):
    '''Function to load event data for given date

    Args:
        bucket_name: S3 bucket
        bucket_prefix: S3 bucket input path to load parquet
        folder: S3 folder
        log_date_start: Start of logdate
        log_date_end: End of logdate

    Returns:
        Dataframe with required columns and deduped
    '''
    s3_path = f's3://{bucket_name}/{bucket_prefix}/{folder}/'
    df = wr.s3.read_parquet(path=s3_path,
                            columns=columns_required,
                            partition_filter=lambda x: log_date_start &lt;= x[&quot;date&quot;] &lt;= log_date_end,
                            dataset=True,
                            use_threads=True)
    # Partition filter creates unnecessary columns, so keep the required ones and drop any duplicates
    df_deduped = df[columns_required].drop_duplicates()
    return df_deduped
</code></pre>
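A sketch of the mocking side of the question above, with plain lists standing in for the DataFrame so it stays dependency-free. In a real test the patch target would be the module-qualified name of the reader as the tested module sees it, e.g. `mock.patch("my_module.wr.s3.read_parquet", return_value=fake_df)` (module name hypothetical).

```python
from unittest import mock

def read_from_s3(path):
    # Stand-in for wr.s3.read_parquet; a real call would hit the network.
    raise RuntimeError("no network in unit tests")

def load_rows(path, columns_required):
    """Simplified analogue of load_parquet: read, trim columns, dedupe."""
    rows = read_from_s3(path)
    trimmed = [tuple(r[c] for c in columns_required) for r in rows]
    return list(dict.fromkeys(trimmed))  # order-preserving dedupe

fake_rows = [{"id": 1, "x": "a", "junk": 0},
             {"id": 1, "x": "a", "junk": 1},   # duplicate on (id, x)
             {"id": 2, "x": "b", "junk": 2}]

# Patch the reader in this module so the S3 call never happens.
with mock.patch(f"{__name__}.read_from_s3", return_value=fake_rows) as fake:
    result = load_rows("s3://bucket/prefix/folder/", ["id", "x"])

print(result)           # [(1, 'a'), (2, 'b')]
print(fake.call_count)  # 1
```

With pytest specifically, `monkeypatch.setattr(my_module.wr.s3, "read_parquet", fake_fn)` achieves the same thing; either way the key is patching the name where it is looked up, not where it is defined.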
<python><unit-testing><mocking><pytest>
2023-02-05 00:23:31
0
323
pmanDS
75,349,191
2,713,263
How do I cimport scipy.spatial.ckdtree?
<p>I am rewriting some of my code using Cython, and want to type a <code>ckdtree</code> variable. I am currently doing</p>
<pre><code>cimport scipy.spatial.ckdtree as ckdtree

cdef tuple remove_labels(double[:, :] x_train, double[:, :] y_train):
    cdef ckdtree.KDTree tree
    # ...
    tree = ckdtree.KDTree(x_rest_view)
</code></pre>
<p>but I get several errors:</p>
<p>When importing, I get the error</p>
<pre><code>'scipy/spatial/ckdtree.pxd' not found
</code></pre>
<p>In line</p>
<pre><code>cdef ckdtree.KDTree tree
</code></pre>
<p>I get</p>
<pre><code>'KDTree' is not a type identifier
</code></pre>
<p>In line</p>
<pre><code>tree = ckdtree.KDTree(x_rest_view)
</code></pre>
<p>I get</p>
<pre><code>cimported module has no attribute 'KDTree'
</code></pre>
<p>What am I doing wrong?</p>
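A hedged observation on the question above: SciPy does not ship a `ckdtree.pxd`, so there is no Cython declaration file to `cimport` (which is exactly what the first error says). The usual workaround is a plain runtime `import`, with the variable typed as a generic `object` in Cython (`cdef object tree`). The sketch below is plain Python so it runs anywhere SciPy is installed.

```python
import numpy as np
from scipy.spatial import cKDTree  # runtime import; in Cython: cdef object tree

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tree = cKDTree(points)

# Nearest neighbour of a query point.
dist, idx = tree.query([0.1, 0.1], k=1)
print(idx)  # 0
```

Typing the tree as `object` forgoes C-level attribute access, but the tree's query methods are already compiled internally, so little speed is lost by calling them through Python.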
<python><scipy><cython>
2023-02-04 23:52:54
0
1,276
Rahul
75,349,029
1,960,266
how to interpret this MNIST tensor?
<p>I have found some code that converts the data from the MNIST dataset into tensors. The code is the following:</p>
<pre><code>import torch
import torchvision
import matplotlib.pyplot as plt

batch_size_test = 1000
test_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('/files/', train=False, download=True,
                               transform=torchvision.transforms.Compose(
                                   [torchvision.transforms.ToTensor(),
                                    torchvision.transforms.Normalize(
                                        (0.1307,), (0.3081,))
                                    ])),
    batch_size=batch_size_test, shuffle=True
)
examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
print(example_data.shape)
</code></pre>
<p>When I print the shape of example_data I get the following:</p>
<pre><code>torch.Size([1000, 1, 28, 28])
</code></pre>
<p>So, from what I know, it is a tensor of 1000 samples, 1 channel and images of 28 pixels of height and 28 pixels of width. Graphically, I imagine it as a sort of 4D array in which I have cubes, 1000 stacked one over another, each of them formed by 28 x 28 x 1 data.</p>
<p>I have also tried the following instruction:</p>
<pre><code>print(example_data[2][0])
</code></pre>
<p>but the output is something like:</p>
<pre><code>tensor([[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
        [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
</code></pre>
<p>I see that each part between brackets, like a sort of one-dimensional array, contains 28 numbers horizontally, but why do I also have 28 of these [] vertically?</p>
<p>Also, in this part: <code>print(example_data[2][0])</code>, the <code>2</code> refers to the sample at index 2, but why do I have to put the <code>[0]</code>?</p>
<p>Sorry if it seems like two questions in one post, but I believe they are closely related to each other.</p>
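The indexing questions above can be reproduced with a plain NumPy array of the same shape (a stand-in for the torch tensor; torch indexing behaves the same way): the first `[2]` picks a sample, and the extra `[0]` drops the channel axis.

```python
import numpy as np

batch = np.zeros((1000, 1, 28, 28))  # (samples, channels, height, width)

# batch[2] is one sample: it still carries the channel axis.
print(batch[2].shape)   # (1, 28, 28)

# The extra [0] selects the single channel, leaving a 2-D image:
img = batch[2][0]
print(img.shape)        # (28, 28)

# 28 rows stacked vertically, each a bracketed group of 28 numbers -
# exactly the printout in the question.
print(len(img), len(img[0]))  # 28 28
```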
<python><tensorflow><mnist>
2023-02-04 23:18:43
1
3,477
Little
75,349,004
14,924,173
can not understand labeled concept in loc indexer in pandas
<p>I have a question and want to get an understanding of it. In pandas we have two indexers (loc, iloc) that we can use to select specific columns and rows. <em><strong>loc is a label-based selection method, which means we have to pass the name of the row or column we want to select</strong></em>, and <em><strong>iloc is an index-based selection method, which means we have to pass an integer index to select a specific row/column</strong></em>. So say I have the below data frame, with column names of type &quot;string&quot;:</p> <p><a href="https://i.sstatic.net/TV4Fv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TV4Fv.png" alt="enter image description here" /></a></p> <p>I can use loc as below:</p> <p><a href="https://i.sstatic.net/pCUFy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pCUFy.png" alt="enter image description here" /></a></p> <p>Now, if I rename the column to a number, so that the type of the column name becomes 'integer', as below:</p> <p><a href="https://i.sstatic.net/ZirGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZirGy.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/tBae0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tBae0.png" alt="enter image description here" /></a></p> <p>then, using loc, if I provide the name of the column that is of type integer, why does this work without errors:</p> <p><a href="https://i.sstatic.net/idrc4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/idrc4.png" alt="enter image description here" /></a></p> <p><strong>I mean, if loc is for labeled data, why does it work on column names that are the numbers 0, 1, 2, 3, with type &quot;INTEGER&quot;</strong>?</p>
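The behaviour asked about above can be reproduced in a few lines (a minimal sketch): `loc` is label-based, but labels are allowed to *be* integers; after the rename, the integer `0` simply is the column's label. The default `RangeIndex` on rows illustrates the same point.

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "score": [1, 2]})

# Label-based selection: here 0 is a *row label* (the default RangeIndex),
# not a position, even though it happens to be an integer.
print(df.loc[0, "name"])  # a

# Rename the columns to integers - the integers become the column labels.
df.columns = [0, 1]
print(df.loc[0, 0])       # a  - 0 is now also a valid *column label*

# iloc is purely positional either way:
print(df.iloc[0, 0])      # a
```

The distinction only becomes visible when labels and positions disagree, e.g. after sorting or slicing: then `df.loc[0]` still finds the row labelled `0`, while `df.iloc[0]` returns whatever row is physically first.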
<python><python-3.x><pandas><dataframe><logic>
2023-02-04 23:13:28
1
496
Mohammad Al Jadallah
75,348,861
2,713,263
How can I speed up a loop that queries a kd-tree?
<p>The following section of my code is taking ages to run (it's the only loop in the function, so it's the most likely culprit):</p>
<pre><code>tree = KDTree(x_rest)

for i in range(len(x_lost)):
    _, idx = tree.query([x_lost[i]], k=int(np.sqrt(len(x_rest))), p=1)
    y_lost[i] = mode(y_rest[idx][0])[0][0]
</code></pre>
<p>Is there a way to speed this up? I have a few suggestions from Stack Overflow:</p> <ul> <li><a href="https://stackoverflow.com/a/13067268/2713263">This answer suggests using Cython</a>. I'm not particularly familiar with it, but I'm not very against it either.</li> <li><a href="https://stackoverflow.com/a/52856203/2713263">This answer uses a multiprocessing Pool</a>. I'm not sure how useful this will be: my current execution takes over 12h to run, and I'm hoping for at least a 5-10x speedup (though that may not be possible).</li> </ul>
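One likely speedup worth sketching before reaching for Cython or multiprocessing (an assumption about the bottleneck, not a measurement): `KDTree.query` accepts a whole 2-D array, so the Python-level loop and the per-point `mode` call can be replaced by one batched query plus a vectorised majority vote. The data below is synthetic; integer class labels are assumed.

```python
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(0)
x_rest = rng.random((100, 2))
y_rest = rng.integers(0, 3, size=100)  # integer class labels assumed
x_lost = rng.random((5, 2))

k = int(np.sqrt(len(x_rest)))
tree = KDTree(x_rest)

# One call for every query point; idx has shape (len(x_lost), k).
_, idx = tree.query(x_lost, k=k, p=1)

# Majority vote per row via bincount instead of scipy.stats.mode.
neighbour_labels = y_rest[idx]
y_lost = np.array([np.bincount(row).argmax() for row in neighbour_labels])
print(y_lost.shape)  # (5,)
```

`query` also takes a `workers=-1` argument in recent SciPy to parallelise the search itself, which stacks with the batching.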
<python><python-3.x><numpy>
2023-02-04 22:43:47
1
1,276
Rahul
75,348,803
19,051,091
How to make adaptive Threshold at part on image in Opencv python?
<p>Maybe my question is a bit strange, but I need to apply an adaptive threshold to the part of the image that the user selects with the mouse. Here is my code:</p>
<pre><code>import cv2

img = cv2.imread(&quot;test.png&quot;)
# img2 = cv2.imread(&quot;flower.jpg&quot;)

# variables
ix = -1
iy = -1
drawing = False


def draw_reactangle_with_drag(event, x, y, flags, param):
    global ix, iy, drawing, img
    if event == cv2.EVENT_LBUTTONDOWN:
        drawing = True
        ix = x
        iy = y
    elif event == cv2.EVENT_MOUSEMOVE:
        if drawing == True:
            img2 = cv2.imread(&quot;test.png&quot;)
            cv2.rectangle(img2, pt1=(ix, iy), pt2=(x, y), color=(0, 255, 255), thickness=1)
            img = img2
    elif event == cv2.EVENT_LBUTTONUP:
        drawing = False
        img2 = cv2.imread(&quot;test.png&quot;)
        cv2.rectangle(img2, pt1=(ix, iy), pt2=(x, y), color=(0, 255, 255), thickness=1)
        img = img2
        gray = cv2.cvtColor(img2[y: iy, x: ix], cv2.COLOR_BGR2GRAY)
        th = cv2.adaptiveThreshold(gray, 255,  # maximum value assigned to pixel values exceeding the threshold
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # gaussian weighted sum of neighborhood
                                   cv2.THRESH_BINARY,  # thresholding type
                                   5,  # block size (5x5 window)
                                   3)  # constant
        img = th


cv2.namedWindow(winname=&quot;Title of Popup Window&quot;)
cv2.setMouseCallback(&quot;Title of Popup Window&quot;, draw_reactangle_with_drag)

while True:
    cv2.imshow(&quot;Title of Popup Window&quot;, img)
    if cv2.waitKey(10) == 27:
        break

cv2.destroyAllWindows()
</code></pre>
<p>and this is what I got (see the attached screenshots)<a href="https://i.sstatic.net/Bglv0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bglv0.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/vE7J8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vE7J8.png" alt="enter image description here" /></a> What am I missing?</p>
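A dependency-free illustration of the likely culprit in the code above (an assumption from reading it, not a tested fix): `img2[y: iy, x: ix]` slices from the *release* coordinate to the *press* coordinate, which yields an empty array whenever the drag goes top-left to bottom-right; ordering the bounds first gives the intended crop. NumPy stands in for the OpenCV image here.

```python
import numpy as np

img = np.arange(100).reshape(10, 10)  # stand-in for a 10x10 image
ix, iy = 2, 3   # mouse-press corner
x, y = 7, 8     # mouse-release corner

# Slicing "release:press" the way the question does yields nothing:
print(img[y:iy, x:ix].shape)  # (0, 0)

# Ordering the bounds gives the intended region of interest:
y1, y2 = sorted((iy, y))
x1, x2 = sorted((ix, x))
roi = img[y1:y2, x1:x2]
print(roi.shape)              # (5, 5)
```

The same `sorted`-bounds crop would feed `cv2.cvtColor`/`cv2.adaptiveThreshold` a non-empty region regardless of drag direction.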
<python><opencv>
2023-02-04 22:32:15
1
307
Emad Younan
75,348,791
257,924
seek(0) versus flush() on tempfile.NamedTemporaryFile before calling subprocess.run on a generated script
<p>Below is an example script to generate a Bash script into a temporary file, execute it, allow stdout and stderr to be emitted to the stdout of the calling script, and automatically delete the file.</p>
<p>When I set <code>use_fix = True</code> as stated below, and set <code>use_seek</code> to either <code>True</code> or <code>False</code>, the script works: I see output. <em>But</em> if I set <code>use_fix = False</code>, it does not work.</p>
<p>My question is this: Under the case of <code>use_fix = True</code>, which setting of <code>use_seek</code> is correct? More interestingly, why would I prefer one value over the other? My gut tells me that <code>f.flush()</code> is correct, because that is what I think is required: The buffered I/O needs to be flushed to disk before the subsequent child process can open that script and execute it. Could it be that the seek is also forcing a flush, as a side effect?</p>
<pre><code>import subprocess
import sys
import tempfile

use_fix = True
use_seek = False
# use_seek = True

# Create a temporary file with a bash script
with tempfile.NamedTemporaryFile(prefix=&quot;/tmp/xxx.out.&quot;, suffix=&quot;.sh&quot;, mode='w+t', delete=True) as f:
    f.write(&quot;#!/bin/bash\necho 'This is a temporary bash script'\necho 'error message' &gt;&amp;2&quot;)
    if use_fix:
        # Allow subprocess.run to &quot;see&quot; the temporary file contents on disk:
        if use_seek:
            f.seek(0)
        else:
            # Allow subprocess.run to &quot;see&quot; the output.
            f.flush()
    # Execute the bash script and capture its standard output and standard error
    result = subprocess.run([&quot;bash&quot;, f.name], stdout=sys.stdout, stderr=subprocess.STDOUT)
</code></pre>
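The flush-before-subprocess point in the question above can be demonstrated with nothing but the standard library: a second reader opening the file by name only sees the bytes once the writer's buffer reaches the OS, and `seek()` on a buffered text file does flush the write buffer as a side effect, which is why both settings appear to work. The sketch checks the guaranteed positive case; `sh` is used so no bash-specific behaviour is assumed.

```python
import subprocess
import tempfile

with tempfile.NamedTemporaryFile(mode="w+t", suffix=".sh", delete=True) as f:
    f.write("echo hello\n")

    # Without a flush, a short write may still sit in Python's buffer;
    # flush() pushes it to the OS so another process can see it.
    f.flush()

    # A separate reader (like the child shell) now sees the content.
    with open(f.name) as reader:
        on_disk = reader.read()
    print(repr(on_disk))  # 'echo hello\n'

    result = subprocess.run(["sh", f.name], capture_output=True, text=True)
    print(result.stdout)  # hello
```

Preferring `flush()` over `seek(0)` states the intent directly: the goal is visibility on disk, not moving the file position, and relying on seek's implicit flush makes the code harder to read.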
<python><temporary-files>
2023-02-04 22:30:03
1
2,960
bgoodr
75,348,648
12,870,750
QtWidgets.QGraphicsLineItem issue when setting viewport to QOpenGLWidget()
<p><strong>Situation</strong></p>
<p>I have a PyQt5 app that shows lines, text and circles. It shows them correctly, but the text rendering is a bit slow. I have a custom class for <code>QGraphicsView</code> that implements all this.</p>
<p><strong>Problem</strong></p>
<p>When I set the following on the graphics view, I start getting rendering errors such as the example below. The text and circles render correctly at a much faster render time (much better), but the lines get the rendering error.</p>
<pre><code>self.gl_widget = QOpenGLWidget()
format = QSurfaceFormat()
# format.setVersion(3, 0)
format.setProfile(QSurfaceFormat.CoreProfile)
self.gl_widget.setFormat(format)
self.setViewport(self.gl_widget)
</code></pre>
<p>The rendering of text gets much, much better and it shows them as it should, but a problem comes with the lines, which start having strange behavior.</p>
<p><strong>example with issue</strong></p>
<p><a href="https://i.sstatic.net/ZmIgh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZmIgh.png" alt="enter image description here" /></a></p>
<p><strong>example without issue</strong></p>
<p><a href="https://i.sstatic.net/jWhQ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jWhQ8.png" alt="enter image description here" /></a></p>
<p>Note how the width of the lines is variable even though it is set to a single value; also, when I zoom out or in, some of these lines appear and disappear randomly.</p>
<p>As soon as I use a path item the problems begin; just a line item does not create this problem.</p>
<p>Does anybody have any idea what this could mean?</p>
<p><strong>what to look for?</strong></p>
<p>The issue is that the widths of the lines are random, and not the set value I put in the code. Also, when you zoom in or out, lines disappear.</p>
<p>It seems to have something to do with the set width, as a bigger width helps, but does not remove it.</p>
<p><strong>minimal reproducible example</strong></p>
<pre><code>import sys
from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsTextItem
from PyQt5 import QtWidgets, QtCore, QtGui
from PyQt5.QtWidgets import QOpenGLWidget
import numpy as np
from PyQt5.QtGui import QPainterPath, QPen
from PyQt5.QtWidgets import QGraphicsPathItem, QGraphicsLineItem, QGraphicsPolygonItem
from PyQt5.QtGui import QPolygonF
from PyQt5.QtCore import QLineF, QPointF
from PyQt5.QtGui import QSurfaceFormat


class GraphicsView(QtWidgets.QGraphicsView):
    def __init__(self):
        super(GraphicsView, self).__init__()
        self.pos_init_class = None
        # &quot;INITIAL VARIABLES&quot;
        self.scale_factor = 1.5
        # &quot;SET FRAME LINES&quot;
        self.setFrameShape(QtWidgets.QFrame.VLine)
        # &quot;ENABLE MOUSE POSITION TRACKING&quot;
        self.setMouseTracking(True)
        # &quot;REMOVE SCROLL BARS&quot;
        self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
        self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
        # &quot;SET ANCHOR TO ZOOM ON THE SAME POINT&quot;
        self.setTransformationAnchor(QtWidgets.QGraphicsView.AnchorUnderMouse)
        self.setResizeAnchor(QtWidgets.QGraphicsView.AnchorUnderMouse)
        # &quot;IMPROVE VECTOR RENDERING&quot;
        self.setRenderHint(QtGui.QPainter.Antialiasing, False)
        self.setRenderHint(QtGui.QPainter.SmoothPixmapTransform, False)
        self.setRenderHint(QtGui.QPainter.TextAntialiasing, False)
        self.setRenderHint(QtGui.QPainter.HighQualityAntialiasing, False)
        self.setRenderHint(QtGui.QPainter.NonCosmeticDefaultPen, True)
        self.setOptimizationFlag(QtWidgets.QGraphicsView.DontAdjustForAntialiasing, True)
        self.setOptimizationFlag(QtWidgets.QGraphicsView.DontSavePainterState, True)
        self.setCacheMode(QtWidgets.QGraphicsView.CacheBackground)
        self.setViewportUpdateMode(QtWidgets.QGraphicsView.BoundingRectViewportUpdate)

        # Try OpenGL stuff
        # self.gl_widget = QOpenGLWidget()
        # self.setViewport(self.gl_widget)
        self.gl_widget = QOpenGLWidget()
        format = QSurfaceFormat()
        format.setVersion(2, 8)
        format.setProfile(QSurfaceFormat.CoreProfile)
        self.gl_widget.setFormat(format)
        self.setViewport(self.gl_widget)

    def mousePressEvent(self, event):
        pos = self.mapToScene(event.pos())
        # &quot;PAN MOUSE&quot;
        if event.button() == QtCore.Qt.MiddleButton:
            self.pos_init_class = pos
            QtWidgets.QApplication.setOverrideCursor(QtCore.Qt.ClosedHandCursor)
        super(GraphicsView, self).mousePressEvent(event)

    def mouseReleaseEvent(self, event):
        # PAN AND RENDER TEXT
        if self.pos_init_class and event.button() == QtCore.Qt.MiddleButton:
            # PAN
            self.pos_init_class = None
            QtWidgets.QApplication.setOverrideCursor(QtCore.Qt.ArrowCursor)
        super(GraphicsView, self).mouseReleaseEvent(event)

    def mouseMoveEvent(self, event):
        if self.pos_init_class:
            # &quot;PAN&quot;
            delta = self.pos_init_class - self.mapToScene(event.pos())
            r = self.mapToScene(self.viewport().rect()).boundingRect()
            self.setSceneRect(r.translated(delta))
        super(GraphicsView, self).mouseMoveEvent(event)

    def wheelEvent(self, event):
        old_pos = self.mapToScene(event.pos())
        # Determine the zoom factor
        if event.angleDelta().y() &gt; 0:
            zoom_factor = self.scale_factor
        else:
            zoom_factor = 1 / self.scale_factor
        # Apply the transformation to the view
        transform = QtGui.QTransform()
        transform.translate(old_pos.x(), old_pos.y())
        transform.scale(zoom_factor, zoom_factor)
        transform.translate(-old_pos.x(), -old_pos.y())
        # Get the current transformation matrix and apply the new transformation to it
        current_transform = self.transform()
        self.setTransform(transform * current_transform)

    def zoom_extent(self):
        x_range, y_range, h_range, w_range = self.scene().itemsBoundingRect().getRect()
        rect = QtCore.QRectF(x_range, y_range, h_range, w_range)
        self.setSceneRect(rect)
        unity = self.transform().mapRect(QtCore.QRectF(0, 0, 1, 1))
        self.scale(1 / unity.width(), 1 / unity.height())
        viewrect = self.viewport().rect()
        scenerect = self.transform().mapRect(rect)
        factor = min(viewrect.width() / scenerect.width(),
                     viewrect.height() / scenerect.height())
        self.scale(factor, factor)


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.view = GraphicsView()
        self.scene = QGraphicsScene()
        self.view.setScene(self.scene)
        self.generate_random_lines()
        self.setCentralWidget(self.view)
        self.showMaximized()
        self.view.zoom_extent()

    def rotate_vector(self, origin, point, angle):
        &quot;&quot;&quot;
        ROTATE A POINT COUNTERCLOCKWISE BY A GIVEN ANGLE AROUND A GIVEN ORIGIN.
        THE ANGLE SHOULD BE GIVEN IN RADIANS.
        :param origin: SOURCE POINT ARRAYS, [X_SOURCE, Y_SOURCE], LEN N
        :param point: DESTINATION POINT, [X_DEST, Y_DEST], LEN N
        :param angle: ARRAY OF ANGLE TO ROTATE VECTOR (ORIGIN --&gt; POINT), [ANG], LEN N
        :return:
        &quot;&quot;&quot;
        ox, oy = origin
        px, py = point
        qx = ox + np.cos(angle) * (px - ox) - np.sin(angle) * (py - oy)
        qy = oy + np.sin(angle) * (px - ox) + np.cos(angle) * (py - oy)
        return qx, qy

    def create_line_with_arrow_path(self, x1, y1, x2, y2, arr_width, arr_len):
        &quot;&quot;&quot;
        This function creates a line with an arrowhead at the end.
        The line is created between two points (x1, y1) and (x2, y2).
        The arrowhead is defined by its width (arr_width) and length (arr_len).
        Returns a QGraphicsPathItem with the line and arrowhead.
        &quot;&quot;&quot;
        # Initialize the path for the line and arrowhead
        path = QPainterPath()
        path.moveTo(x1, y1)
        path.lineTo(x2, y2)
        # Calculate the midpoint of the line
        mid_x = (x1 + x2) / 2
        mid_y = (y1 + y2) / 2
        # Define the points of the arrowhead
        arrow_x = np.array([arr_width, -arr_len, -arr_width, -arr_len, arr_width]) * 5
        arrow_y = np.array([0, arr_width, 0, -arr_width, 0]) * 5
        arrow_x += mid_x
        arrow_y += mid_y
        # Calculate the angle of the line
        angle = np.rad2deg(np.arctan2(y2 - y1, x2 - x1))
        # Rotate the arrowhead points to align with the line
        origin = (np.array([mid_x, mid_x, mid_x, mid_x, mid_x]),
                  np.array([mid_y, mid_y, mid_y, mid_y, mid_y]))
        point = (arrow_x, arrow_y)
        self.x_init, self.y_init = self.rotate_vector(origin, point, np.deg2rad(angle))
        # Add the arrowhead to the path
        arrow_path = QtGui.QPainterPath()
        arrow_path.moveTo(self.x_init[0], self.y_init[0])
        for i in range(1, len(arrow_x)):
            arrow_path.lineTo(self.x_init[i], self.y_init[i])
        path.addPath(arrow_path)
        # Create a QGraphicsPathItem with the line and arrowhead
        item = QGraphicsPathItem(path)
        pen = QPen()
        pen.setWidthF(0.1)
        item.setPen(pen)
        return item, angle

    def create_line_with_arrow_item(self, x1, y1, x2, y2, arr_width, arr_len):
        # Calculate the midpoint of the line
        mid_x = (x1 + x2) / 2
        mid_y = (y1 + y2) / 2
        # Define the coordinates for the arrow
        arrow_x = np.array([arr_width, -arr_len, -arr_width, -arr_len, arr_width]) * 10
        arrow_y = np.array([0, arr_width, 0, -arr_width, 0]) * 10
        arrow_x += mid_x
        arrow_y += mid_y
        # Calculate the angle of the line
        angle = np.rad2deg(np.arctan2(y2 - y1, x2 - x1))
        # Rotate the arrow to align with the line
        origin = (np.array([mid_x, mid_x, mid_x, mid_x, mid_x]),
                  np.array([mid_y, mid_y, mid_y, mid_y, mid_y]))
        point = (arrow_x, arrow_y)
        x_init, y_init = self.rotate_vector(origin, point, np.deg2rad(angle))
        # Create the line and arrow
        line = QLineF(x1, y1, x2, y2)
        arrow = QPolygonF([QPointF(x_init[0], y_init[0]),
                           QPointF(x_init[1], y_init[1]),
                           QPointF(x_init[2], y_init[2]),
                           QPointF(x_init[3], y_init[3]),
                           QPointF(x_init[4], y_init[4])])
        item = QGraphicsLineItem(line)
        item_arrow = QGraphicsPolygonItem(arrow)
        # Set the pen for both line and arrow
        pen = QPen()
        pen.setWidthF(1)
        item.setPen(pen)
        item_arrow.setPen(pen)
        # Return the line and arrow items
        return item, item_arrow, angle

    def generate_random_lines(self):
        case = 'issue'
        x = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) * 10
        y = np.array([0, 20, 10, 0, 35, 90, 10, 60, 60, 90, 100]) * 10
        for pos, i in enumerate(range(len(x) - 1)):
            x1 = x[i]
            y1 = y[i]
            x2 = x[i + 1]
            y2 = y[i + 1]
            if case in ['issue']:
                # add lines
                path, angle = self.create_line_with_arrow_path(x1, y1, x2, y2, 0.5, 1.5)
                self.scene.addItem(path)
                # add text
                text1 = QGraphicsTextItem()
                text1.setPlainText(str(pos))
                text1.setPos(x1, y1)
                text1.setRotation(angle)
                self.scene.addItem(text1)
            else:
                # add lines
                line, arrow, angle = self.create_line_with_arrow_item(x1, y1, x2, y2, 0.5, 1.5)
                self.scene.addItem(line)
                self.scene.addItem(arrow)
                # add text
                text1 = QGraphicsTextItem()
                text1.setPlainText(str(pos))
                text1.setPos(x1, y1)
                text1.setRotation(angle)
                self.scene.addItem(text1)


if __name__ == &quot;__main__&quot;:
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())
</code></pre>
<p><strong>figures for minimal example</strong></p>
<p>this is an example with the issue</p>
<p><a href="https://i.sstatic.net/AotKy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AotKy.png" alt="this is an example whit the issue" /></a></p>
<p>this is an example without the issue</p>
<p><a href="https://i.sstatic.net/sk8Hg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sk8Hg.png" alt="enter image description here" /></a></p>
<python><pyqt5><pyopengl>
2023-02-04 21:58:12
1
640
MBV
75,348,588
16,665,831
Parse values from JSON body
<p>I have the following <code>actions</code> parameter as part of my JSON data.</p>
<pre><code>'actions': [{'action_type': 'onsite_conversion.post_save', 'value': '1'},
            {'action_type': 'link_click', 'value': '2'},
            {'action_type': 'post', 'value': '3'},
            {'action_type': 'post_reaction', 'value': '4'},
            {'action_type': 'video_view', 'value': '5'},
            {'action_type': 'post_engagement', 'value': '6'},
            {'action_type': 'page_engagement', 'value': '7'}],
</code></pre>
<p>The API can send the following options at any time:</p>
<pre><code>action_types = [&quot;onsite_conversion.post_save&quot;, &quot;link_click&quot;, &quot;post&quot;, &quot;post_reaction&quot;,
                &quot;comment&quot;, &quot;video_view&quot;, &quot;post_engagement&quot;, &quot;page_engagement&quot;]
</code></pre>
<p>I tried to write a Python script that parses these possible values (<strong>in action_types list order</strong>) from the JSON body. As seen from the sample JSON data, the API doesn't send a <strong>comment</strong> value, so in that case the script should write 0 to the returned list. Below is my script:</p>
<pre><code>def get_action_values(insight):
    action_types = [&quot;onsite_conversion.post_save&quot;, &quot;link_click&quot;, &quot;post&quot;, &quot;post_reaction&quot;,
                    &quot;comment&quot;, &quot;video_view&quot;, &quot;post_engagement&quot;, &quot;page_engagement&quot;]
    action_type_values = []
    for action in action_types:
        if action in [item[&quot;action_type&quot;] for item in insight[&quot;actions&quot;]]:
            for item in insight[&quot;actions&quot;]:
                if action in item[&quot;action_type&quot;]:
                    if &quot;value&quot; in item:
                        action_type_values.append(item[&quot;value&quot;])
                    else:
                        action_type_values.append(&quot;Null&quot;)
        else:
            action_type_values.append(0)
    return action_type_values
</code></pre>
<p>I am expecting it to return <code>[1,2,3,4,0,5,6,7]</code>, but it returns <code>[1, 2, 1, 3, 4, 6, 4, 0, 5, 6, 7]</code>.</p>
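The duplicates in the output above come from the substring test `action in item["action_type"]` (e.g. `'post'` also matches `'post_reaction'` and `'post_engagement'`). One possible fix, sketched here: build an exact-match dict once and look each type up; the question's `"Null"` fallback is folded into `0` for brevity, and the string values are converted to ints to match the expected output.

```python
def get_action_values(insight, action_types):
    # Map each action_type to its value exactly once (exact match, not substring).
    lookup = {item["action_type"]: item.get("value", 0)
              for item in insight["actions"]}
    # One entry per requested type, 0 when the API did not send it.
    return [int(lookup.get(a, 0)) for a in action_types]

insight = {"actions": [
    {"action_type": "onsite_conversion.post_save", "value": "1"},
    {"action_type": "link_click", "value": "2"},
    {"action_type": "post", "value": "3"},
    {"action_type": "post_reaction", "value": "4"},
    {"action_type": "video_view", "value": "5"},
    {"action_type": "post_engagement", "value": "6"},
    {"action_type": "page_engagement", "value": "7"},
]}
action_types = ["onsite_conversion.post_save", "link_click", "post", "post_reaction",
                "comment", "video_view", "post_engagement", "page_engagement"]

print(get_action_values(insight, action_types))  # [1, 2, 3, 4, 0, 5, 6, 7]
```

The dict also makes the function O(n) instead of rescanning `insight["actions"]` for every requested type.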
<python><json><pandas><parsing><jsonparser>
2023-02-04 21:45:34
3
309
Ugur Selim Ozen
75,348,491
9,773,920
How to extract username and password form secrets manager to use in psycopg2 connect
<p>My secrets manager is throwing secrets in the below format:</p> <pre><code>{&quot;username&quot;:&quot;uname&quot;,&quot;password&quot;:&quot;mypassword&quot;} </code></pre> <p>I want to extract the username and password to use in the psycopg2 connection string, but it's not working. Below is my code:</p> <pre><code>username = secret['username'] passw = secret['password'] conn = psycopg2.connect(host=&quot;hostname&quot;,port='5432',database=&quot;db&quot;, user=username, password=passw) </code></pre> <p>I see this error: <code>[ERROR] TypeError: string indices must be integers</code></p> <p>How do I get the username, password and host from Secrets Manager to use in the psycopg2 connection string?</p>
<python><amazon-web-services><aws-lambda><psycopg2><aws-secrets-manager>
2023-02-04 21:24:01
0
1,619
Rick
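The `TypeError: string indices must be integers` in the question above is the usual symptom of indexing a JSON *string* rather than a parsed dict: Secrets Manager returns the secret as raw JSON text, so it must be parsed first. A minimal sketch (the psycopg2 call itself stays as the asker wrote it):

```python
import json

# The SecretString comes back from Secrets Manager as raw JSON text;
# indexing it with secret['username'] walks characters, hence the TypeError.
secret_string = '{"username":"uname","password":"mypassword"}'
secret = json.loads(secret_string)  # now a dict

username = secret['username']
passw = secret['password']
print(username, passw)  # uname mypassword
```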
75,348,436
1,559,030
Get an evenly distributed subset of combinations without repetition
<p>I'm trying to get a subset of combinations such that every option is used the same amount of times, <em>or close to it</em>, from the total set of combinations without repetition. For example, I have 8 options (let's say A-H) and I need combinations of 4 letters where order doesn't matter. That would give me 70 possible combinations. I would like to take a subset of those combinations such that A appears as much as each other letter does, and A appears with B as much as C appears with D, etc. I know there are subsets where it is impossible to have each letter appear the same amount of times and appear with another letter the same amount of times so when I say &quot;same amount of times&quot; in this post, I mean the same amount or close to it.</p> <p>If the options are written out in an organized list as is shown below, I couldn't just select the first N options because that would give A far more use than it would H. Also, A and B would appear together more than C and D. The main idea is to get as evenly distributed use of each letter combination as possible.</p> <p>ABCD ABCE ABCF ABCG ABCH ABDE ABDF ABDG ABDH ABEF ABEG ABEH ABFG ABFH ABGH ACDE ACDF ACDG ACDH ACEF ACEG ACEH ACFG ACFH ACGH ADEF ADEG ADEH ADFG ADFH ADGH AEFG AEFH AEGH AFGH BCDE BCDF BCDG BCDH BCEF BCEG BCEH BCFG BCFH BCGH BDEF BDEG BDEH BDFG BDFH BDGH BEFG BEFH BEGH BFGH CDEF CDEG CDEH CDFG CDFH CDGH CEFG CEFH CEGH CFGH DEFG DEFH DEGH DFGH EFGH</p> <p>I could take a random sample but being random, it doesn't exactly meet my requirements of taking a subset intentionally to get an even distribution. It could randomly choose a very uneven distribution.</p> <p>Is there a tool or a mathematical formula to generate a list like I'm asking for? Building one in Python or some other coding language is an option if I had an idea of how to go about it.</p> <p><strong>EDIT</strong> - Think of it like ingredients. A-H are each unique ingredients such as flour, sugar, butter, etc. 
I'm creating recipes that use four of the available ingredients (not worrying about measurements, just what ingredients are used).</p> <p>There are 70 possible combinations of the 8 ingredients (put in 8 objects and 4 sample size <a href="https://www.calculatorsoup.com/calculators/discretemathematics/combinations.php" rel="nofollow noreferrer">here</a>). I want a list of a subset of those combinations, of a size I put into the formula. As an example, let's say I want 20 of the combinations. I want each ingredient to show up as much as the other ingredients, meaning among those combinations, maybe A (flour) is used 3 times, B (sugar) is used 3 times, C (butter) is used 3 times, etc.</p> <p>I also want A (flour) and B (sugar) to be used together as many times as B (sugar) and C (butter) are used together in a combination.</p> <p>Most subset sizes won't allow perfect distribution of items, but it should give a list that is as close to an even distribution as possible.</p> <p>I have created a Python script that I think does what I want. I'm cleaning it up and testing it more, then I'll post it here.</p> <p>To show an example of correct results, I found that a subset of size 14 will produce a perfect distribution:</p> <pre><code>ABCD ADEG BDGH ABEH ADFH CDEH ABFG BCEG CDFG ACEF BCFH EFGH ACGH BDEF </code></pre> <p>You can see that in that subset, each letter is used 7 times, and each pairing (such as A and B) appear 3 times, and each triplet (such as A and B and C) appears once. This is the exact type of result I'm looking for. I hope that clarifies the problem for readers.</p> <p>I'll post my python code soon.</p>
<python><statistics><combinations>
2023-02-04 21:15:13
2
678
Marty
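The size-14 subset quoted in the question can be checked mechanically; it happens to be the block set of a Steiner quadruple system on 8 points (each block is paired with its complement, e.g. ABCD with EFGH), which is exactly why every letter appears 7 times and every pair 3 times. A small verification sketch:

```python
from itertools import combinations
from collections import Counter

subset = ["ABCD", "ADEG", "BDGH", "ABEH", "ADFH", "CDEH", "ABFG",
          "BCEG", "CDFG", "ACEF", "BCFH", "EFGH", "ACGH", "BDEF"]

# Count how often each letter, and each unordered pair, occurs in the subset.
letter_counts = Counter(c for combo in subset for c in combo)
pair_counts = Counter(p for combo in subset
                      for p in combinations(sorted(combo), 2))

print(set(letter_counts.values()))  # {7}
print(set(pair_counts.values()))    # {3}
```

The same counters make a handy objective for a search: among candidate subsets of a given size, prefer the one minimizing the spread of `letter_counts` and `pair_counts` values.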
75,348,406
9,317,361
How to use fakebrowser driver and control it through Selenium Python
<p><strong>Summary of the question</strong><br /> The question is about how I can incorporate JavaScript modules in Python. I need to operate the <a href="https://www.npmjs.com/package/fakebrowser" rel="nofollow noreferrer">fakebrowser driver</a> in Python the same way it does in JavaScript so it can keep all the manipulated values of the browser fingerprint &amp; system.</p> <hr /> <p><strong>Full question</strong><br /> I have created an automated script for a website using Selenium &amp; Python. What my code does is manage multiple accounts, but after some time the website can detect that the accounts are being used from the same devices even after using various methods to make it undetectable (proxies, user agent, driver options, and a ton more).</p> <p>So the best module I found that can get my job done is <a href="https://www.npmjs.com/package/fakebrowser" rel="nofollow noreferrer">fakebrowser</a> because it can bypass most of the known ways websites use to detect automation. 
Fakebrowser can build a chromium driver but the problem I am facing is that it is using javascript and I don't want to rewrite all my code in javascript while I have already written it in Python (because I know only the basics in Javascript).</p> <p>I have tried a lot of other solutions, below are only two of them but nothing I have tried so far have worked.</p> <hr /> <p><strong>Attempt #1</strong><br /> I have tried using the chromium build which <a href="https://www.npmjs.com/package/fakebrowser" rel="nofollow noreferrer">fakebrowser</a> creates.</p> <pre class="lang-py prettyprint-override"><code>driver = webdriver.Chrome(executable_path=r&quot;C:\Users\Hidden\.cache\puppeteer\chrome\win64-1069273\chrome-win\chrome.exe&quot;) driver.get('https://pixelscan.com/') </code></pre> <p>It still leaks most of the information about my system &amp; browser fingerprint because most of the modifications to the browser are done when you open the chrome build using fakebrowser's module.</p> <hr /> <p><strong>Attempt #2</strong><br /> I have tried to open the driver using javascript and then start controlling it using selenium python which I found in here: <a href="https://stackoverflow.com/questions/8344776/can-selenium-interact-with-an-existing-browser-session">https://stackoverflow.com/questions/8344776/can-selenium-interact-with-an-existing-browser-session </a></p> <hr /> <p><strong>Fakebrowser Javascript Code</strong></p> <pre class="lang-js prettyprint-override"><code>const {FakeBrowser} = require('fakebrowser'); function sleep(ms) { return new Promise(resolve =&gt; setTimeout(resolve, ms)); } !(async () =&gt; { const createBrowserAndGoto = async (userDataDir, url) =&gt; { const builder = new FakeBrowser.Builder() .vanillaLaunchOptions({ headless: false, args: ['--remote-debugging-port=9223'] }) .userDataDir(userDataDir); const fakeBrowser = await builder.launch(); const page = await fakeBrowser.vanillaBrowser.newPage(); await page.goto(url); await sleep(1000000000); 
await fakeBrowser.shutdown(); }; createBrowserAndGoto('./fakeBrowserUserData1', 'http://pixelscan.net/').then(e =&gt; e); })(); </code></pre> <hr /> <p><strong>Selenium Python Code</strong></p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver from selenium.webdriver.chromium.options import ChromiumOptions class Driver: def __init__(self): self.options = ChromiumOptions() self.options.add_experimental_option(&quot;debuggerAddress&quot;, &quot;127.0.0.1:9223&quot;) def launch(self): driver = webdriver.Chrome(r&quot;C:\Users\User\.cache\puppeteer\chrome\win64-1069273\chrome-win\chrome.exe&quot;, chrome_options=self.options) driver.get('http://pixelscan.net/') Driver().launch() </code></pre> <hr /> <h2>Observation</h2> <p>Instead of connecting to the already open browser, it creates a new one but it's not using the same values as the fakebrowser driver does. So it's still leaking most of the information of my system &amp; browser.</p> <p>I need a way to control the same one that is already open or maybe I can use the driver variable created in JavaScript and pass it to the python code but I am not sure if that's even possible</p>
<javascript><python><selenium><selenium-webdriver>
2023-02-04 21:09:51
1
374
Speci
75,348,329
6,202,327
Numpy accurately computing the inverse of a matrix?
<p>I am trying to compute the eigenvalues of a matrix built by a matrix product M^{-1}K.</p> <p>I know M and K, and I have initialized them properly. I thus try to compute the inverse of M:</p> <pre class="lang-py prettyprint-override"><code>M_inv = np.linalg.inv(M) with np.printoptions(threshold=np.inf, precision=10, suppress=True,linewidth=20000): print(np.matrix(M_inv * M)) </code></pre> <p>That should print the identity, but I get:</p> <p><a href="https://i.sstatic.net/l9Lvx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l9Lvx.png" alt="enter image description here" /></a></p> <p>Which clearly is not the identity. I need to find the eigenvalues of M_inv * K, but if M_inv is so inaccurate I won't get anything useful. What do I do?</p> <p>This is the matrix:</p> <p><a href="https://i.sstatic.net/AWyTz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AWyTz.png" alt="enter image description here" /></a></p> <p>And this is my initialization code:</p> <pre class="lang-py prettyprint-override"><code>def mij(i, j, h): if i==j: return 2.0 * h / 3.0 else: return h / 6.0 def kij(i, j, h): if i==j: return 2.0 / h else: return -1 / h n = 500 size=n+1 h = 1 / n t=np.linspace(0,1,n) # Get A M = np.zeros((n, n)) K = np.zeros((n, n)) for i in range(0, n): M[i,i] = mij(i, i, h) if i+1 &lt; n: M[i,i+1] = mij(i, i+1, h) if i-1 &gt;= 0: M[i,i-1] = mij(i, i-1, h) K[i,i] = kij(i, i, h) if i+1 &lt; n: K[i,i+1] = kij(i, i+1, h) if i-1 &gt;= 0: K[i,i-1] = kij(i, i-1, h) </code></pre>
<python><arrays><math><matrix><eigenvalue>
2023-02-04 20:57:18
1
9,951
Makogan
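A likely explanation for the non-identity result in the question above: for plain NumPy arrays, `*` is *elementwise* multiplication, so `M_inv * M` never computes the matrix product at all; `@` (or `np.matmul`) does. A minimal sketch with a small, well-conditioned test matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)  # small, well-conditioned

M_inv = np.linalg.inv(M)
print(np.allclose(M_inv * M, np.eye(4)))   # False: '*' is elementwise
print(np.allclose(M_inv @ M, np.eye(4)))   # True:  '@' is the matrix product
```

For the eigenproblem itself, `scipy.linalg.eig(K, M)` solves the generalized problem K v = λ M v directly, which avoids forming the explicit inverse at all.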
75,348,321
817,659
redis.client import string_keys_to_dict, dict_merge
<p>I have tried using this from both python 3.6 and 3.9 but get the same error:</p> <pre><code>pip install serialized-redis-interface Collecting serialized-redis-interface Using cached serialized_redis_interface-0.3.1-py3-none-any.whl (7.8 kB) Requirement already satisfied: redis&gt;3 in /home/idf/anaconda3/envs/works/lib/python3.6/site-packages (from serialized-redis-interface) (4.3.5) Requirement already satisfied: packaging&gt;=20.4 in /home/idf/anaconda3/envs/works/lib/python3.6/site-packages (from redis&gt;3-&gt;serialized-redis-interface) (21.3) Requirement already satisfied: typing-extensions in /home/idf/anaconda3/envs/works/lib/python3.6/site-packages (from redis&gt;3-&gt;serialized-redis-interface) (4.1.1) Requirement already satisfied: importlib-metadata&gt;=1.0 in /home/idf/anaconda3/envs/works/lib/python3.6/site-packages (from redis&gt;3-&gt;serialized-redis-interface) (4.8.3) Requirement already satisfied: async-timeout&gt;=4.0.2 in /home/idf/anaconda3/envs/works/lib/python3.6/site-packages (from redis&gt;3-&gt;serialized-redis-interface) (4.0.2) Requirement already satisfied: zipp&gt;=0.5 in /home/idf/anaconda3/envs/works/lib/python3.6/site-packages (from importlib-metadata&gt;=1.0-&gt;redis&gt;3-&gt;serialized-redis-interface) (3.6.0) Requirement already satisfied: pyparsing!=3.0.5,&gt;=2.0.2 in /home/idf/anaconda3/envs/works/lib/python3.6/site-packages (from packaging&gt;=20.4-&gt;redis&gt;3-&gt;serialized-redis-interface) (3.0.9) Installing collected packages: serialized-redis-interface Successfully installed serialized-redis-interface-0.3.1 </code></pre> <p>This code doesn't work (but it used to work, not sure what this new installation broke):</p> <pre><code>import serialized_redis def connect_redis(redis_host, redis_port, redis_password): print(&quot;Connecting redis&quot;) try: # The decode_repsonses flag here directs the client to convert the responses from Redis into Python strings # using the default encoding utf-8. This is client specific. 
redis_connection_object = serialized_redis.JSONSerializedRedis(host=redis_host, port=redis_port, password=redis_password, db=0) #redis.StrictRedis(host=redis_host, port=redis_port, password=redis_password, decode_responses=True) if None == redis_connection_object: print(&quot;redis is not connected&quot;) return redis_connection_object except Exception as e: print(e) return None </code></pre> <p>The error:</p> <pre><code>Traceback (most recent call last): File &quot;options.py&quot;, line 42, in &lt;module&gt; from redis_connection import connect_redis File &quot;/home/idf/Downloads/backup/amplify/redis_connection.py&quot;, line 1, in &lt;module&gt; import serialized_redis File &quot;/home/idf/anaconda3/envs/works/lib/python3.6/site-packages/serialized_redis/__init__.py&quot;, line 5, in &lt;module&gt; from redis.client import string_keys_to_dict, dict_merge ImportError: cannot import name 'dict_merge' </code></pre>
<python><redis><anaconda><redisclient>
2023-02-04 20:56:06
0
7,836
Ivan
75,348,247
443,626
Python - program crashes every 60 seconds making new requests in a while loop
<p>I have a script running every 60 seconds making a request using rest API:</p> <pre><code> def get_conn(): try: ms = datetime.now() - timedelta(days=xxx) current_time = time.mktime(ms.timetuple()) ms = datetime.now() current_time2 = time.mktime(ms.timetuple()) api_Url = &quot;xxxxxx&quot;.format(int(xxxx), int(xxxx)) conn = http.client.HTTPSConnection(&quot;xxxxx&quot;) conn.request(&quot;GET&quot;, api_Url, payload, headers) print('Getting response...') res = conn.getresponse() data = res.read() data = data.decode(&quot;utf-8&quot;) data = json.loads(data) return data except Exception as e: handle_crash(e) def handle_crash(e): print('Restarting in 60 seconds ' + str(e)) time.sleep(60) # Restarts the script after 60 seconds start() def start(): while True: data = get_conn() time.sleep(60) start() </code></pre> <p>And the output:</p> <pre><code> Getting response... Restarting in 60 seconds Expecting value: line 1 column 1 (char 0) Getting response... Restarting in 60 seconds Expecting value: line 1 column 1 (char 0) Getting response... Restarting in 60 seconds Expecting value: line 1 column 1 (char 0) Getting response... Restarting in 60 seconds Expecting value: line 1 column 1 (char 0) Getting response... Restarting in 60 seconds Expecting value: line 1 column 1 (char 0) </code></pre> <p>I am not showing the output of the first request as the first request is successful, but after the first request immediately after 60 seconds, the handle_crash(e) gets called every time.</p> <p>When I test this calling start() every 1 second I get a few requests back but after 5 or 6 requests the script crashes, so I tried running the script every 60 seconds.</p> <p>Can someone see where the script is going wrong?</p>
<python>
2023-02-04 20:44:56
1
2,543
redoc01
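Independent of the JSON decode error itself, note that `handle_crash` in the question above calls `start()` again, which re-enters the `while True` loop *recursively*, so every failure adds stack frames and the loops multiply. A sketch of a flat retry loop, with a hypothetical `fetch` callable standing in for the real request (the `delay` and `max_iterations` parameters are illustrative):

```python
import time

def run_forever(fetch, delay=60, max_iterations=None):
    """Flat retry loop: a failed request is logged and the loop simply
    continues, so the call stack never grows."""
    count = 0
    while max_iterations is None or count < max_iterations:
        try:
            data = fetch()
            print("Got data:", data)
        except Exception as e:
            print("Request failed, retrying:", e)
        count += 1
        time.sleep(delay)
    return count

# Demo with a flaky fetcher and a tiny delay (max_iterations keeps it finite).
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("Expecting value: line 1 column 1 (char 0)")
    return {"ok": True}

print(run_forever(flaky, delay=0, max_iterations=4))  # 4
```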
75,348,156
1,533,576
Scale matplotlib text artist to fill rectangle patch bounding box
<p>Given a rectangle patch and text artist in matplotlib, is it possible to scale the text such that it fills the rectangle as best as possible without overfilling on either dimension?</p> <p>e.g.</p> <pre><code>import matplotlib as mpl import matplotlib.pyplot as plt f, ax = plt.subplots() ax.set(xlim=(0, 6), ylim=(0, 3)) x0, y0 = 1, 1 x1, y1 = 5, 1.5 width = x1 - x0 height = y1 - y0 rect = mpl.patches.Rectangle((x0, y0), width, height, fc=&quot;none&quot;, ec=&quot;y&quot;) ax.add_patch(rect) text = ax.text(x0 + width / 2, y0 + height / 2, &quot;yellow&quot;, ha=&quot;center&quot;, va=&quot;center&quot;) </code></pre> <p>Produces</p> <p><img src="https://i.sstatic.net/JmzxR.png" alt="enter image description here" /></p> <p>I would like to programmatically rescale the text to fit within the rectangle (including descenders), akin to this result found using trial-and-error:</p> <p><img src="https://i.sstatic.net/qI9gM.png" alt="enter image description here" /></p> <p>We can assume some simplifying constraints, namely that the figure size and axis limits are known and set ahead of time and don't need to change after this operation.</p>
<python><matplotlib>
2023-02-04 20:26:15
1
49,310
mwaskom
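One way to approach the text-fitting question above: draw once to obtain pixel extents of both artists, then scale the font size by the more constraining ratio. Font metrics scale close to linearly with size, so a single pass is usually near-optimal, and descenders are included in the text's window extent. A sketch under those assumptions:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so draw() works anywhere
from matplotlib.patches import Rectangle
import matplotlib.pyplot as plt

f, ax = plt.subplots()
ax.set(xlim=(0, 6), ylim=(0, 3))
rect = Rectangle((1, 1), 4, 0.5, fc="none", ec="y")
ax.add_patch(rect)
text = ax.text(3, 1.25, "yellow", ha="center", va="center")

f.canvas.draw()  # extents are only valid after a draw
renderer = f.canvas.get_renderer()
tbox = text.get_window_extent(renderer)
rbox = rect.get_window_extent(renderer)

# Scale the font by the more constraining dimension so neither overflows.
scale = min(rbox.width / tbox.width, rbox.height / tbox.height)
text.set_fontsize(text.get_fontsize() * scale)
```

If exactness matters, a second draw-and-measure pass (or a few bisection steps) corrects for the slight nonlinearity of font metrics.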
75,348,116
1,584,906
gRPC becomes very slow all of a sudden
<p>I use gRPC for Python RPC on the same machine. It has been working great till yesterday. Then, all of a sudden, it started being very slow. The <code>helloworld</code> example now takes about 78s to complete. I tested it on three computers on the same network, all Ubuntu 18.04, with the same results. At home, the same example runs almost instantaneously. I suspect some networking issue, maybe an automatic update on the gateway, but I'm at a loss on how to troubleshoot the problem. Any suggestions?</p> <p>EDIT:</p> <p>I still don't know what happened, but I found a workaround. Replacing <code>localhost</code> with <code>127.0.0.1</code> in the <code>grpc.insecure_channel</code> connection string makes gRPC responsive again.</p>
<python><grpc><grpc-python>
2023-02-04 20:19:43
0
1,465
Wolfy
75,347,974
5,151,909
mypy: Untyped decorator makes function "main" untyped for FastAPI routes
<p>Consider the following:</p> <pre class="lang-py prettyprint-override"><code>@app.post(&quot;/&quot;) def main(body: RequestBody) -&gt; dict[str, object]: pass </code></pre> <p>I'm getting the following error:</p> <blockquote> <p>Untyped decorator makes function &quot;main&quot; untyped [misc]mypy(error)</p> </blockquote> <p>Couldn't find anything online about it. What am I missing? Thanks</p>
<python><fastapi><mypy>
2023-02-04 19:56:29
0
4,011
galah92
75,347,830
562,769
(Why) is there a performance benefit of using list.clear?
<p>I've recently noticed that the built-in <a href="https://docs.python.org/3/tutorial/datastructures.html" rel="nofollow noreferrer">list has a <code>list.clear()</code> method</a>. So far, when I wanted to ensure a list is empty, I just create a new list: <code>l = []</code>.</p> <p>I was curious if it makes a difference, so I measured it:</p> <pre><code>$ python --version Python 3.11.0 $ python -m timeit 'a = [1, 2, 3, 4]; a= []' 5000000 loops, best of 5: 61.5 nsec per loop $ python -m timeit 'a = [1, 2, 3, 4]; a.clear()' 5000000 loops, best of 5: 57.4 nsec per loop </code></pre> <p>So creating a new empty list is about 7% slower than using <code>clear()</code> for small lists.</p> <p>For bigger lists, it seems to be faster to just create a new list:</p> <pre><code>$ python -m timeit 'a = list(range(10_000)); a = []' 2000 loops, best of 5: 134 usec per loop $ python -m timeit 'a = list(range(10_000)); a = []' 2000 loops, best of 5: 132 usec per loop $ python -m timeit 'a = list(range(10_000)); a = []' 2000 loops, best of 5: 134 usec per loop $ python -m timeit 'a = list(range(10_000)); a.clear()' 2000 loops, best of 5: 143 usec per loop $ python -m timeit 'a = list(range(10_000)); a.clear()' 2000 loops, best of 5: 139 usec per loop $ python -m timeit 'a = list(range(10_000)); a.clear()' 2000 loops, best of 5: 139 usec per loop </code></pre> <p>why is that the case?</p> <p>edit: Small clarification: I am aware that both are (in some scenarios) not semantically the same. If you clear a list you could still have two pointers to it. When you create a new list object, there will not be a pointer to it. For this question, I don't care about this potential difference. Just assume there is exactly one reference to that list.</p>
<python><cpython><python-3.11>
2023-02-04 19:35:30
1
138,373
Martin Thoma
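Worth keeping next to the benchmark above: the two spellings also differ observably whenever a second reference exists, which is exactly the semantic difference the edit sets aside. A minimal illustration:

```python
a = [1, 2, 3]
b = a
a.clear()          # mutates the one shared list object in place
print(b)           # []

a = [1, 2, 3]
b = a
a = []             # rebinds the name only; b still holds the old list
print(b)           # [1, 2, 3]
```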
75,347,762
12,574,341
Is using getters for read-only access pointless in Python?
<p>A method to achieve read-only access is to create a <code>getter</code> with no <code>setter</code>. This is the implementation in Python.</p> <pre class="lang-py prettyprint-override"><code>class Inventory: _items: list[Item] @property def items(self) -&gt; list[Item]: return self._items </code></pre> <p>But given that Python has no notion of access restriction, <code>._items</code> will still be viewable, readable, and modifiable externally.</p> <p><a href="https://i.sstatic.net/9u7R6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9u7R6.png" alt="enter image description here" /></a></p> <p>Instead, I could remove the <code>getter</code> and treat <code>.items</code> as a normal member - since Python won't restrict access either way - reducing code overhead and the number of members to keep track of.</p> <pre class="lang-py prettyprint-override"><code>class Inventory: items: list[Item] </code></pre> <p>The main benefit I can still see with the <code>getter</code> is that it signals to other developers by convention to avoid accessing the member. Are there any other arguments in its support?</p>
<python><oop>
2023-02-04 19:23:10
1
1,459
Michael Moreno
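One concrete point the question above understates: a setter-less property *does* fail loudly on attribute assignment, even though the underscored member remains reachable, and returning a copy (or a tuple) guards the list's contents as well. A sketch:

```python
class Inventory:
    def __init__(self):
        self._items = ["sword"]

    @property
    def items(self):
        # Return an immutable snapshot so callers cannot mutate the list.
        return tuple(self._items)

inv = Inventory()
print(inv.items)  # ('sword',)

try:
    inv.items = ["shield"]     # assignment through the property fails
except AttributeError as e:
    print("blocked:", e)
```

So the getter buys more than a naming convention: assignments raise, and with a defensive copy, external mutation of the backing list is blocked too (short of deliberately touching `_items`).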
75,347,753
17,696,880
Replace ")" by ") " if the parenthesis is followed by a letter or a number using regex
<pre class="lang-py prettyprint-override"><code>import re input_text = &quot;((PL_ADVB)dentro ). ((PL_ADVB)ñu)((PL_ADVB) 9u)&quot; input_text = re.sub(r&quot;\s*\)&quot;, &quot;)&quot;, input_text) print(repr(input_text)) </code></pre> <p>How do I make it so that, if the closing parenthesis <code>)</code> is in front of a letter (uppercase or lowercase) or a number, it converts the <code>)</code> to <code>) </code> (with a trailing space), so that the following output can be obtained with this code?</p> <pre><code>&quot;((PL_ADVB) dentro). ((PL_ADVB) ñu)((PL_ADVB) 9u)&quot; </code></pre>
<python><python-3.x><regex><string><regexp-replace>
2023-02-04 19:21:58
3
875
Matt095
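For the question above, a lookahead keeps the two steps independent: first collapse any whitespace before `)`, then insert a space only when `)` is directly followed by a word character (under Python's default Unicode matching, `\w` covers digits and letters including `ñ`, plus the underscore). A sketch:

```python
import re

input_text = "((PL_ADVB)dentro ). ((PL_ADVB)ñu)((PL_ADVB) 9u)"

out = re.sub(r"\s+\)", ")", input_text)   # drop whitespace before ')'
out = re.sub(r"\)(?=\w)", ") ", out)      # ')' right before a letter/digit gets a space
print(repr(out))  # '((PL_ADVB) dentro). ((PL_ADVB) ñu)((PL_ADVB) 9u)'
```

The lookahead `(?=\w)` matches without consuming, so the following character is left in place and no double spaces are introduced.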
75,347,524
9,213,682
How to convert string to key value in python
<p>I have a Django application and want to convert a value from a comma-separated string field to a key-value pair and add it to a JSON data block.</p> <pre><code>class MyClass1(models.Model): keywords = models.TextField(_('Keywords'), null=True, blank=True) </code></pre> <p>Example of list:</p> <pre><code>blue,shirt,s,summer,for women </code></pre> <p>The JSON data in my code:</p> <pre><code>data = { &quot;name&quot;: self.name, &quot;type&quot;: self.type, ... &quot;keywords&quot;: [] } </code></pre> <p>I want to split the comma-separated string of self.keywords and append it to the keywords field in my JSON, but as an array like this:</p> <pre><code>{ &quot;name&quot;: keyword, }, </code></pre> <p>I do the split with the split function, but I don't know how to create a key-value pair as an array and append it to keywords.</p> <p>Expected output:</p> <pre><code>data = { &quot;name&quot;: &quot;Name of item&quot;, &quot;type&quot;: &quot;Type of item&quot;, ... &quot;keywords&quot;: [ { &quot;name&quot;: &quot;blue&quot; }, { &quot;name&quot;: &quot;shirt&quot; }, ... ] } </code></pre>
<python><arrays><django-models><key-value>
2023-02-04 18:51:07
1
549
sokolata
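For the question above, a list comprehension over `str.split` produces the array of one-key dicts directly; sketched here with the sample values from the question:

```python
keywords = "blue,shirt,s,summer,for women"

data = {
    "name": "Name of item",
    "type": "Type of item",
    # Split on commas, trim stray spaces, skip empties, wrap each in a dict.
    "keywords": [{"name": kw.strip()} for kw in keywords.split(",") if kw.strip()],
}

print(data["keywords"][:2])  # [{'name': 'blue'}, {'name': 'shirt'}]
```

In the model method this would read `self.keywords.split(",")`, with a guard for the field being `None` since it is nullable.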
75,347,457
12,941,578
Python asyncio, possible to run a secondary event loop?
<p>Is there a way to create a secondary asyncio loop(or prioritize an await) that when awaited does not pass control back to the main event loop buts awaits for those 'sub' functions? IE</p> <pre><code>import asyncio async def priority1(): print(&quot;p1 before sleep&quot;) await asyncio.sleep(11) print(&quot;p1 after sleep&quot;) async def priority2(): print(&quot;p2 before sleep&quot;) await asyncio.sleep(11) print(&quot;p2 after sleep&quot;) async def foo(): while True: print(&quot;foo before sleep&quot;) #do not pass control back to main event loop here but run these 2 async await asyncio.gather(priority1(),priority2()) print(&quot;foo after sleep&quot;) async def bar(): while True: print(&quot;bar before sleep&quot;) await asyncio.sleep(5) print(&quot;bar after sleep&quot;) async def main(): await asyncio.gather(foo(),bar()) asyncio.run(main()) </code></pre> <p>I would like foo to wait for priority1/2 to finish before passing control back to the main event loop.</p> <p>Right now it would go:</p> <pre><code>foo before sleep bar before sleep p1 before sleep p2 before sleep bar after sleep </code></pre> <p>I would like to see:</p> <pre><code>foo before sleep bar before sleep p1 before sleep p2 before sleep p1 after sleep p2 after sleep bar after sleep </code></pre> <p>Is this possible? thanks</p>
<python><python-3.x><asynchronous><python-asyncio>
2023-02-04 18:40:59
1
452
Pearl
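There is no built-in way to stop the event loop from running other ready tasks at an `await`, but the effect asked for above can be approximated with a shared lock that the "background" tasks must acquire, so the priority section blocks them for its whole duration. A sketch with shrunken timings (the lock name and structure are illustrative, not part of asyncio's API beyond `asyncio.Lock` itself):

```python
import asyncio

order = []

async def priority(name, delay):
    order.append(f"{name} start")
    await asyncio.sleep(delay)
    order.append(f"{name} end")

async def foo(lock):
    async with lock:                 # hold the lock while priority work runs
        order.append("foo before")
        await asyncio.gather(priority("p1", 0.01), priority("p2", 0.02))
        order.append("foo after")

async def bar(lock):
    async with lock:                 # blocked until foo releases the lock
        order.append("bar ran")

async def main():
    lock = asyncio.Lock()
    await asyncio.gather(foo(lock), bar(lock))

asyncio.run(main())
print(order)
```

`gather` schedules `foo` first, and acquiring a free `asyncio.Lock` does not yield, so `foo` holds the lock before `bar` ever runs; `bar` then waits until both priority coroutines and `foo`'s epilogue have finished.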
75,347,443
13,571,242
Error with `python3.10` when running `apt install software-properties-common` when building dockerfile
<p>Currently my dockerfile is just:</p> <pre><code>FROM ubuntu:latest RUN apt-get update RUN apt install software-properties-common -y </code></pre> <p>However when building the dockerfile and running the step <code>apt install software-properties-common -y</code> the following error is in the messages:</p> <pre><code>#0 41.07 Setting up python3.10-minimal (3.10.6-1~22.04.2) ... #0 41.16 [Errno 13] Permission denied: '/usr/lib/python3.10/__pycache__/__future__.cpython-310.pyc.139723958934016'dpkg: error processing package python3.10-minimal (--configure): #0 41.16 installed python3.10-minimal package post-installation script subprocess returned error exit status 1 #0 41.17 Errors were encountered while processing: #0 41.17 python3.10-minimal #0 41.18 E: Sub-process /usr/bin/dpkg returned an error code (1) ------ failed to solve: executor failed running [/bin/sh -c apt install software-properties-common -y]: exit code: 100 </code></pre> <p>Was wondering if you guys could please help me solve this error in order to finish building the dockerfile?</p>
<python><linux><docker><ubuntu><apt>
2023-02-04 18:38:33
2
407
James Chong
75,347,375
17,696,880
Set a regex pattern to condition placing or removing spaces before or after a ) according to the characters that are before or after
<pre class="lang-py prettyprint-override"><code>import re input_text = &quot;((NOUN) ) ) de el auto rojizo, algo) ) )\n Luego ((PL_ADVB)dentro ((NOUN)de baúl ))abajo.) ).&quot; input_text = input_text.replace(&quot; )&quot;, &quot;) &quot;) print(repr(input_text)) </code></pre> <p>Simply using the <code>.replace(&quot; )&quot;, &quot;) &quot;)</code> function, I get this bad output, as it doesn't consider the conditional replacements that a function using regex patterns could, for example using <code>re.sub( , , input_text, flags=re.IGNORECASE)</code></p> <pre><code>'((NOUN)) ) de el auto rojizo, algo)) ) \n Luego ((PL_ADVB)dentro ((NOUN)de baúl) )abajo.)) .' </code></pre> <p>The goal is to get this output, where closing parentheses are stripped of leading whitespace and a single whitespace is added after, as long as the closing parenthesis <code>)</code> is not in front of a dot <code>.</code>, a newline <code>\n</code> or the end of the line <code>$</code>:</p> <pre><code>'((NOUN))) de el auto rojizo, algo)))\n Luego ((PL_ADVB)dentro ((NOUN)de baúl))abajo.)).' </code></pre>
<python><regex>
2023-02-04 18:27:15
2
875
Matt095
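For the sample in the question above, the expected output is actually reproduced by the stripping step alone — every `)` that survives is already followed by a space, `.`, `)`, `\n`, or the end of the string — so a single `re.sub` suffices:

```python
import re

input_text = ("((NOUN) ) ) de el auto rojizo, algo) ) )\n"
              " Luego ((PL_ADVB)dentro ((NOUN)de baúl ))abajo.) ).")

out = re.sub(r"\s+\)", ")", input_text)  # strip any whitespace run before ')'
print(repr(out))
```

Note that the stated rule ("add a space after `)` unless before `.`, `\n`, or `$`") would also insert spaces inside tokens like `((PL_ADVB)dentro`, which the expected output keeps unchanged — so for these samples the stripping step alone is what matches; a negative-lookahead insertion step would need to exclude that tag pattern as well.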
75,347,354
19,363,912
Convert tuple with quotes to csv like string
<p>How to convert tuple</p> <pre><code>text = ('John', '&quot;n&quot;', '&quot;ABC 123\nDEF, 456GH\nijKl&quot;\r\n', '&quot;Johny\nIs\nHere&quot;') </code></pre> <p>to csv format</p> <pre><code>out = '&quot;John&quot;, &quot;&quot;&quot;n&quot;&quot;&quot;, &quot;&quot;&quot;ABC 123\\nDEF, 456\\nijKL\\r\\n&quot;&quot;&quot;, &quot;&quot;&quot;Johny\\nIs\\nHere&quot;&quot;&quot;' </code></pre> <p>or even omitting the special chars at the end</p> <pre><code>out = '&quot;John&quot;, &quot;&quot;&quot;n&quot;&quot;&quot;, &quot;&quot;&quot;ABC 123\\nDEF, 456\\nijKL&quot;&quot;&quot;, &quot;&quot;&quot;Johny\\nIs\\nHere&quot;&quot;&quot;' </code></pre> <p>I came up with this monster</p> <pre><code>out1 = ','.join(f'&quot;&quot;{t}&quot;&quot;' if t.startswith('&quot;') and t.endswith('&quot;') else f'&quot;{t}&quot;' for t in text) out2 = out1.replace('\n', '\\n').replace('\r', '\\r') </code></pre>
<python><csv><export-to-csv><quotes>
2023-02-04 18:23:46
1
447
aeiou
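Rather than hand-building the quoting, the `csv` module already implements the doubling of embedded quotes shown in the desired output; a sketch writing the tuple to an in-memory buffer and then making the control characters visible with `replace`:

```python
import csv
import io

text = ('John', '"n"', '"ABC 123\nDEF, 456GH\nijKl"\r\n', '"Johny\nIs\nHere"')

buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL, lineterminator="").writerow(text)
row = buf.getvalue()               # embedded '"' become '""', fields quoted

row = row.replace("\r", "\\r").replace("\n", "\\n")  # show escapes literally
print(row)
```

The one cosmetic difference from the target string is the separator: `csv` joins fields with `","` rather than `", "`.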
75,347,314
65,886
How to run python flet scripts on repl.it
<p>I'm trying to run a python flet script on repl.it. Here's what I did:</p> <ol> <li>created a new repl, for a python script; repl.it creates a main.py file</li> <li>from the Packages tool I imported flet</li> <li>I wrote this code into main.py:</li> </ol> <pre><code>import flet as ft def main(page:ft.Page): page.add(ft.Text(&quot;Hello World&quot;)) page.update() ft.app(target=main) </code></pre> <p>And I clicked the Run button. I got this error:</p> <pre><code>Traceback (most recent call last): File &quot;main.py&quot;, line 8, in &lt;module&gt; ft.app(target=main) File &quot;/home/runner/test/venv/lib/python3.10/site-packages/flet/flet.py&quot;, line 102, in app conn = _connect_internal( File &quot;/home/runner/test/venv/lib/python3.10/site-packages/flet/flet.py&quot;, line 192, in _connect_internal port = _start_flet_server( File &quot;/home/runner/test/venv/lib/python3.10/site-packages/flet/flet.py&quot;, line 361, in _start_flet_server subprocess.Popen( File &quot;/nix/store/hd4cc9rh83j291r5539hkf6qd8lgiikb-python3-3.10.8/lib/python3.10/subprocess.py&quot;, line 971, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File &quot;/nix/store/hd4cc9rh83j291r5539hkf6qd8lgiikb-python3-3.10.8/lib/python3.10/subprocess.py&quot;, line 1847, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) PermissionError: [Errno 13] Permission denied: '/home/runner/test/venv/lib/python3.10/site-packages/flet/bin/fletd' </code></pre> <p>I've gotten this same <code>Permission denied</code> error with every flet script I've tried. (Non flet scripts don't have errors.)</p>
<python><replit><flet>
2023-02-04 18:18:11
1
5,407
Al C
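The traceback above ends in `Permission denied` on the bundled `fletd` binary, which suggests its execute bit was lost during install; one hedged workaround is to add the bit back with `os.chmod` on the path shown in the traceback. Demonstrated here on a temp-file stand-in (the real target would be `.../site-packages/flet/bin/fletd`):

```python
import os
import stat
import tempfile

# Stand-in for the fletd binary path from the traceback (hypothetical here).
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o644)                     # simulate a non-executable file
mode = os.stat(path).st_mode
os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

print(bool(os.stat(path).st_mode & stat.S_IXUSR))  # True
os.remove(path)
```

On repl.it the same effect can be had from the shell tab with `chmod +x` on the binary; whether that fully fixes flet there depends on the sandbox.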
75,347,301
1,418,618
Using map to convert pandas dataframe to list
<p>I am using <code>map</code> to convert some columns in a dataframe to <code>list</code> of <code>dicts</code>. Here is a MWE illustrating my question.</p> <pre><code>import pandas as pd df = pd.DataFrame() df['Col1'] = [197, 1600, 1200] df['Col2'] = [297, 2600, 2200] df['Col1_a'] = [198, 1599, 1199] df['Col2_a'] = [296, 2599, 2199] print(df) </code></pre> <p>The output is</p> <pre><code> Col1 Col2 Col1_a Col2_a 0 197 297 198 296 1 1600 2600 1599 2599 2 1200 2200 1199 2199 </code></pre> <p>Now say I want to extract only those columns whose name ends with a suffix &quot;_a&quot;. One way to do it is the following -</p> <pre><code>list_col = [&quot;Col1&quot;,&quot;Col2&quot;] cols_w_suffix = map(lambda x: x + '_a', list_col) print(df[cols_w_suffix].to_dict('records')) </code></pre> <p><code>[{'Col1_a': 198, 'Col2_a': 296}, {'Col1_a': 1599, 'Col2_a': 2599}, {'Col1_a': 1199, 'Col2_a': 2199}]</code></p> <p>This is expected answer. However, if I try to print the same expression again, I get an empty dataframe.</p> <pre><code>print(df[cols_w_suffix].to_dict('records')) </code></pre> <p><code>[]</code></p> <p>Why does it evaluate to an empty dataframe? I think I am missing something about the behavior of map. Because when I directly pass the column names, the output is still as expected.</p> <pre><code>df[[&quot;Col1_a&quot;,&quot;Col2_a&quot;]].to_dict('records') </code></pre> <p><code>[{'Col1_a': 198, 'Col2_a': 296}, {'Col1_a': 1599, 'Col2_a': 2599}, {'Col1_a': 1199, 'Col2_a': 2199}]</code></p>
<python><pandas><debugging>
2023-02-04 18:16:27
1
1,593
honeybadger
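The empty second result above is standard iterator behavior: `map` returns a one-shot iterator, and the first `df[cols_w_suffix]` consumes it, so the second indexing sees no column names at all. A dependency-free illustration:

```python
cols = map(lambda x: x + "_a", ["Col1", "Col2"])

print(list(cols))  # ['Col1_a', 'Col2_a']
print(list(cols))  # []  -- the map object is already exhausted

# A list comprehension gives a reusable sequence instead:
cols = [x + "_a" for x in ["Col1", "Col2"]]
print(cols, cols)  # usable as many times as needed
```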
75,347,177
4,100,282
Spinning globe GIF with cartopy
<p>I'm failing to find the most efficient way to generate a simple animation of a spinning globe with a filled contour using cartopy. The following code yields a static gif, probably because the figure is not redrawing itself? Is there a way for the animation function to just change the geographic projection, without calling a <code>contourf()</code> again (which is computationally expensive)?</p> <pre class="lang-py prettyprint-override"><code>from pylab import * import cartopy.crs as ccrs from matplotlib.animation import FuncAnimation lon, lat = meshgrid( (linspace(-180,180,361)+0.5)[::4], (linspace(-90,90,181)+0.5)[::4], ) h = (abs(lon)&lt;20).astype(int) * (abs(lat)&lt;10).astype(int) fig = figure(figsize=(3,3)) ax = fig.add_subplot(1, 1, 1, projection = ccrs.Orthographic()) ax.contourf(lon, lat, h, transform=ccrs.PlateCarree()) def update_fig(t): ax.projection = ccrs.Orthographic(t) ani = FuncAnimation( fig, update_fig, frames = linspace(0,360,13)[:-1], interval = 100, blit = False, ) ani.save('mwe.gif') </code></pre>
<python><cartopy><matplotlib-animation>
2023-02-04 17:57:17
1
305
Mathieu
75,346,979
19,916,174
Is using else faster than returning value right away?
<p>Which of the following is faster?</p> <p>1.</p> <pre><code>def is_even(num: int): if num%2==0: return True else: return False </code></pre> <ol start="2"> <li></li> </ol> <pre><code>def is_even(num: int): if num%2==0: return True return False </code></pre> <p>Regardless of which is faster, is there anything in a PEP that states when to use one over the other (I couldn't find anything...)?</p>
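Not an authoritative benchmark, but both the generated bytecode and a quick `timeit` run can be inspected directly (the function names below are illustrative, not from any PEP):

```python
import dis
import timeit

def is_even_else(num: int):
    if num % 2 == 0:
        return True
    else:
        return False

def is_even_early(num: int):
    if num % 2 == 0:
        return True
    return False

# Compare the compiled instructions; CPython typically compiles both
# variants to near-identical bytecode, so any speed difference is noise.
dis.dis(is_even_else)
dis.dis(is_even_early)

# Time both versions on the same input.
print(timeit.timeit(lambda: is_even_else(7), number=100_000))
print(timeit.timeit(lambda: is_even_early(7), number=100_000))
```

As a stylistic aside, `num % 2 == 0` already evaluates to a bool, so `return num % 2 == 0` sidesteps the branch entirely; that is a readability choice rather than a PEP rule.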
<python><performance><if-statement><optimization><return>
2023-02-04 17:27:06
1
344
Jason Grace
75,346,978
15,724,084
python tkinter wait until entry box widget has an input by user ONLY after then continue
<p>I have a code snippet which runs perfectly. In some cases I need user input, but there are also cases where user input is not necessary and the code works perfectly without it. So, in those cases, I conditionally create a flow where an <code>Entry</code> widget is created and then destroyed after its value is read with <code>get()</code>. But I cannot make the code wait (pause) until the user has entered a value, and only then continue to run.</p> <p>The code is below:</p> <pre><code>varSheetname_GS = '' if varsoundTitle_usernameHeroContainer == 'FloatingBlueRecords' or varsoundTitle_usernameHeroContainer == 'DayDoseOfHouse': varSheetname_GS = varsoundTitle_usernameHeroContainer else: # look for sheetname as an input value entered by user new_sheetname_entryBox=tk.Entry(canvas2,width=30) new_sheetname_entryBox.pack() new_sheetname_entryBox.focus() var_new_sheetName =new_sheetname_entryBox.get() new_sheetname_entryBox.destroy() varSheetname_GS = var_new_sheetName #input(&quot;Enter the sheetname in (GooSheets):&quot;) </code></pre> <p>I have looked at <a href="https://stackoverflow.com/questions/60082377/how-to-wait-for-input-in-entry-widget-in-tkinter">so_01</a> and <a href="https://stackoverflow.com/questions/9342757/tkinter-executing-functions-over-time/9343402#9343402">so_02</a>, which are related to the topic, but I was not able to apply them to my situation. Any guidance towards those answers would be greatly appreciated. Thanks ahead!</p>
<python><tkinter><tkinter-entry>
2023-02-04 17:27:04
1
741
xlmaster
75,346,930
6,367,971
Creating dataframe from XML file with non-unique tags
<p>I have a directory of XML files, and I need to extract 4 values from each file and store to a dataframe/CSV.</p> <p>The problem is some of the data I need to extract uses redundant tags (e.g., <code>&lt;PathName&gt;</code>) so I'm not sure of the best way to do this. I could specify the exact line # to extract, because it appears consistent with the files I have seen; but I am not certain that will always be the case, so doing it that way is too brittle.</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt; &lt;BxfMessage xsi:schemaLocation=&quot;http://smpte-ra.org/schemas/2021/2019/BXF BxfSchema.xsd&quot; id=&quot;jffsdfs&quot; dateTime=&quot;2023-02-02T20:11:38Z&quot; messageType=&quot;Info&quot; origin=&quot;url&quot; originType=&quot;Delivery&quot; userName=&quot;ABC Corp User&quot; destination=&quot; System&quot; xmlns=&quot;http://sffe-ra.org/schema/1999/2023/BXF&quot; xmlns:xsi=&quot;http://www.w9.org/4232/XMLSchema-instance&quot;&gt; &lt;BxfData action=&quot;Spotd&quot;&gt; &lt;Content timestamp=&quot;2023-02-02T20:11:38Z&quot;&gt; &lt;NonProgramContent&gt; &lt;Details&gt; &lt;SpotType&gt;Paid&lt;/SpotType&gt; &lt;SpotType&gt;Standard&lt;/SpotType&gt; &lt;Spotvertiser&gt; &lt;SpotvertiserName&gt;Spot Plateau&lt;/SpotvertiserName&gt; &lt;/Spotvertiser&gt; &lt;Agency&gt; &lt;AgencyName&gt;Spot Plateau&lt;/AgencyName&gt; &lt;/Agency&gt; &lt;Product&gt; &lt;Name&gt;&lt;/Name&gt; &lt;BrandName&gt;zzTop&lt;/BrandName&gt; &lt;DirectResponse&gt; &lt;PhoneNo&gt;&lt;/PhoneNo&gt; &lt;PCode&gt;&lt;/PCode&gt; &lt;DR_URL&gt;&lt;/DR_URL&gt; &lt;/DirectResponse&gt; &lt;/Product&gt; &lt;/Details&gt; &lt;ContentMetSpotata&gt; &lt;ContentId&gt; &lt;BHGXId idType=&quot;CISC&quot; auth=&quot;Agency&quot;&gt;AAAA1111999Z&lt;/BHGXId&gt; &lt;/ContentId&gt; &lt;Name&gt;Pill CC Dutch&lt;/Name&gt; &lt;Policy&gt; &lt;PlatformType&gt;Spotcast&lt;/PlatformType&gt; &lt;/Policy&gt; &lt;Media&gt; &lt;BaseBand&gt; &lt;Audio VO=&quot;true&quot;&gt; 
&lt;AnalogAudio primAudio=&quot;false&quot; /&gt; &lt;DigitalAudio&gt; &lt;MPEGLayerIIAudio house=&quot;false&quot; audioId=&quot;1&quot; dualMono=&quot;false&quot; /&gt; &lt;/DigitalAudio&gt; &lt;/Audio&gt; &lt;Video withlate=&quot;false&quot; sidebend=&quot;false&quot;&gt; &lt;Format&gt;1182v&lt;/Format&gt; &lt;CCs&gt;true&lt;/CCs&gt; &lt;/Video&gt; &lt;AccessServices&gt; &lt;AudioDescription_DVS&gt;false&lt;/AudioDescription_DVS&gt; &lt;/AccessServices&gt; &lt;QC&gt;Passed QC (AAAA1111103H )&lt;/QC&gt; &lt;/BaseBand&gt; &lt;MediaLocation sourceType=&quot;Primary&quot;&gt; &lt;Location&gt; &lt;AssetServer PAA=&quot;true&quot; FTA=&quot;true&quot;&gt; &lt;PathName&gt;zzTap_zzTop_AAAA1111999Z_30s_Pill_aa-bb.mp4&lt;/PathName&gt; &lt;/AssetServer&gt; &lt;/Location&gt; &lt;SOM&gt; &lt;SmpteTimeCode&gt;00:00:00;00&lt;/SmpteTimeCode&gt; &lt;/SOM&gt; &lt;Duration&gt; &lt;SmpteDuration&gt; &lt;SmpteTimeCode&gt;00:00:30;00&lt;/SmpteTimeCode&gt; &lt;/SmpteDuration&gt; &lt;/Duration&gt; &lt;/MediaLocation&gt; &lt;MediaLocation sourceType=&quot;Proxy&quot; qualifer=&quot;Low-res&quot;&gt; &lt;Location&gt; &lt;AssetServer PAA=&quot;true&quot; FTA=&quot;true&quot;&gt; &lt;PathName&gt;https://app.url.com/DMM/DL/wew52f&lt;/PathName&gt; &lt;/AssetServer&gt; &lt;/Location&gt; &lt;SOM&gt; &lt;SmpteTimeCode&gt;00:00:00;00&lt;/SmpteTimeCode&gt; &lt;/SOM&gt; &lt;Duration&gt; &lt;SmpteDuration&gt; &lt;SmpteTimeCode&gt;00:00:30;00&lt;/SmpteTimeCode&gt; &lt;/SmpteDuration&gt; &lt;/Duration&gt; &lt;/MediaLocation&gt; &lt;MediaLocation sourceType=&quot;Preview&quot; qualifer=&quot;Thumbnail&quot;&gt; &lt;Location&gt; &lt;AssetServer PAA=&quot;true&quot; FTA=&quot;true&quot;&gt; &lt;PathName&gt;https://f9-int-5.rainxyz.com/url.com/media/t43fs/423gs-389a-40a4.jpg?inline&lt;/PathName&gt; &lt;/AssetServer&gt; &lt;/Location&gt; &lt;SOM&gt; &lt;SmpteTimeCode&gt;00:00:00;00&lt;/SmpteTimeCode&gt; &lt;/SOM&gt; &lt;Duration&gt; &lt;SmpteDuration&gt; 
&lt;SmpteTimeCode&gt;00:00:00;00&lt;/SmpteTimeCode&gt; &lt;/SmpteDuration&gt; &lt;/Duration&gt; &lt;/MediaLocation&gt; &lt;/Media&gt; &lt;/ContentMetSpotata&gt; &lt;/NonProgramContent&gt; &lt;/Content&gt; &lt;/BxfData&gt; &lt;/BxfMessage&gt; </code></pre> <p>Is there a more flexible method so that I can get consistent output like:</p> <pre><code>FileName Brand ID URL zzTap_zzTop_AAAA1111999Z_30s_Pill_aa-bb zzTop AAAA1111999Z https://app.url.com/DMM/DL/wew52f zzTap_zzTab_BAAA1111999Z_30s_Pill_aa-cc zzTab BAAA1111999Z https://app.url.com/DMM/DL/wew52c zzTap_zzTan_CAAA1111999Z_30s_Pill_aa-dd zzTan CAAA1111999Z https://app.url.com/DMM/DL/wew523 zzTap_zzTon_DAAA1111999Z_30s_Pill_aa-zz zzTon DAAA1111999Z https://app.url.com/DMM/DL/wew52y </code></pre>
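One structure-based approach (a sketch; the function and column names are my own, and the `{*}` namespace wildcard needs Python 3.8+): instead of relying on line numbers, disambiguate the repeated `<PathName>` elements by the `sourceType` attribute of their enclosing `<MediaLocation>`, and collect one dict per file:

```python
import xml.etree.ElementTree as ET

def extract_fields(xml_text):
    """Pull the brand, content id, and Primary/Proxy PathName values by
    walking the XML structure. The {*} prefix matches any namespace
    (Python 3.8+), so the BXF default namespace needs no registration."""
    root = ET.fromstring(xml_text)
    brand = root.find('.//{*}BrandName')
    content_id = root.find('.//{*}BHGXId')
    filename = url = None
    # The repeated <PathName> tags are disambiguated via the parent
    # <MediaLocation sourceType="..."> attribute.
    for loc in root.findall('.//{*}MediaLocation'):
        path = loc.find('.//{*}PathName')
        if path is None:
            continue
        if loc.get('sourceType') == 'Primary':
            filename = path.text
        elif loc.get('sourceType') == 'Proxy':
            url = path.text
    return {
        'FileName': filename,
        'Brand': brand.text if brand is not None else None,
        'ID': content_id.text if content_id is not None else None,
        'URL': url,
    }
```

Applying `extract_fields` to each file in the directory and passing the resulting list of dicts to `pd.DataFrame(...)` would produce the table layout shown above, regardless of where the tags happen to fall in each file.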
<python><pandas><xml><beautifulsoup>
2023-02-04 17:22:32
3
978
user53526356
75,346,830
10,558,697
Reuse file handle in python ThreadPoolExecutor
<p>The background to my question is the following: I have a search index implemented in whoosh, and I want to get the rankings for a batch of queries. I want to speed this up by handling multiple queries at a time, using <code>ThreadPoolExecutor.map</code>.</p> <p>In whoosh you have a <code>Searcher</code> object, which is (among other things) a wrapper that handles multiple open files and has an internal state where it currently points at which file. According to the <a href="https://whoosh.readthedocs.io/en/latest/threads.html" rel="nofollow noreferrer">whoosh documentation</a> you have to <code>use one Searcher per thread in your code</code>, but <code>if you can share one across multiple search requests, it’s a big performance win</code>. So my understanding is that every time the <code>ThreadPoolExecutor</code> opens a new thread it should be initialized with a <code>Searcher</code> object to the index, but when that thread is reused for another query it should keep that object instead of creating a new one per query.</p> <p>My attempt was along the lines of this:</p> <pre class="lang-py prettyprint-override"><code>from whoosh import Index from concurrent.futures import ThreadPoolExecutor from typing import List class MyExecutor: def __init__(self, whoosh_index: Index): self.searcher = whoosh_index.searcher() def __call__(self, query): return self.searcher.search(query) def query_batched(queries: List[str], whoosh_index: Index, num_threads: int): with ThreadPoolExecutor(max_workers=num_threads) as pool: return pool.map(MyExecutor(whoosh_index), queries) </code></pre> <p>But this runs into an exception <code>UnpicklingError: could not find MARK</code> somewhere deep down in the whoosh code. <a href="https://stackoverflow.com/questions/35879096/pickle-unpicklingerror-could-not-find-mark">This</a> suggests that the error occurs because a file handle is being read when it is not positioned at the beginning of the file. 
Does this mean that the code still uses only one <code>Searcher</code> across multiple threads, which is not thread-safe?</p> <p>How can I fix the code so that each thread has its own <code>Searcher</code> object but reuses it every time it is called?</p>
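One way to get exactly one searcher per worker thread (a sketch; `PerThreadSearcher` is my own name, and whether reusing a searcher this way is safe still depends on whoosh's guarantees) is `threading.local`, which creates the searcher lazily the first time each thread calls it and reuses it on every later call in that thread:

```python
import threading

class PerThreadSearcher:
    """Callable that lazily creates one searcher per thread and reuses it.

    `make_searcher` is any zero-argument factory, e.g. `whoosh_index.searcher`
    (an assumption for illustration); it is invoked at most once per worker
    thread, not once per query.
    """

    def __init__(self, make_searcher):
        self._make_searcher = make_searcher
        self._local = threading.local()

    def __call__(self, query):
        # threading.local() gives each thread its own attribute namespace,
        # so the first call in a given thread creates that thread's searcher.
        if not hasattr(self._local, 'searcher'):
            self._local.searcher = self._make_searcher()
        return self._local.searcher.search(query)
```

`pool.map(PerThreadSearcher(whoosh_index.searcher), queries)` would then create at most `num_threads` searchers while never sharing one across threads.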
<python><multithreading><concurrency><whoosh>
2023-02-04 17:06:53
0
324
Josef
75,346,782
1,572,146
Type check that a module has specific functions?
<p>How do I check if a module implements specific functions in Python?</p> <p>Say I have two modules that both implement two functions: <code>f</code> and <code>g</code>, with equal arity. However, in <code>mod0</code> <code>g</code> returns a <code>float</code> and in <code>mod1</code> <code>g</code> returns an <code>int</code>.</p> <hr /> <h4><code>mod0/__init__.py</code></h4> <pre><code>def f(a: int) -&gt; int: return a def g(a: int) -&gt; float: return float(a) </code></pre> <h4><code>mod1/__init__.py</code></h4> <pre><code>def f(a: int) -&gt; int: return a def g(a: int) -&gt; int: return a # Or even `g = f`? </code></pre> <hr /> <p>Attempt:</p> <h4>type_checker.py</h4> <pre><code>from typing import Protocol class ProtoFg(Protocol): @staticmethod def f(a: int) -&gt; int: pass @staticmethod def g(a: int) -&gt; float: pass def check_conformance(mod: ProtoFg): pass import mod0, mod1 check_conformance(mod0) check_conformance(mod1) # I want an error here </code></pre> <p>What I am looking for is an error similar to: &quot;mod1 doesn't conform to ProtoFg as <code>mod1.g</code> has return type of <code>int</code>, not <code>float</code>&quot;</p> <p>Related: <a href="https://github.com/microsoft/TypeScript/issues/420#issuecomment-1030025402" rel="nofollow noreferrer">equivalent in TypeScript</a></p>
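If a static mypy error is not strictly required, a runtime check of the annotated signatures can be sketched like this (the helper below is my own construction, not a standard API, and it compares annotations for exact equality rather than doing real subtype checks):

```python
from typing import get_type_hints

def check_conformance(mod, proto):
    """Runtime sketch (not a static mypy check): verify that `mod` has a
    same-named function for every public function on `proto`, with
    identical annotated signatures."""
    for name in vars(proto):
        if name.startswith('_'):
            continue
        proto_fn = getattr(proto, name)  # getattr unwraps staticmethod
        if not callable(proto_fn):
            continue
        mod_fn = getattr(mod, name, None)
        if mod_fn is None:
            raise TypeError(f'{mod.__name__} is missing function {name!r}')
        if get_type_hints(mod_fn) != get_type_hints(proto_fn):
            raise TypeError(
                f'{mod.__name__}.{name} has signature {get_type_hints(mod_fn)}, '
                f'expected {get_type_hints(proto_fn)}'
            )
```

The check works on anything with function attributes, so it accepts either a real module or a namespace standing in for one.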
<python><protocols><python-typing>
2023-02-04 16:59:18
1
1,930
Samuel Marks
75,346,686
15,549,110
Find line number of replaced key value
<p>How do I get the line number of the replaced key value? Currently I have separate functions for it; how do I combine them so that I get the line number at the time the string is replaced?</p> <pre><code># filedata is the path of the file in which I need to replace strings. old_new_dict = {'hi':'bye','old':'new'} def replace_oc(file): lines = file.readlines() line_number = None for i, line in enumerate(lines): line_number = i + 1 break return line_number def replacee(path, pattern): for key, value in old_new_dict.items(): if key in filedata: print(&quot;there&quot;) filedata = filedata.replace(key, value) else: print(&quot;not there&quot;) </code></pre>
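A combined sketch (the function name and the shape of the report are my own choices): read the file once, replace key by key per line, and record the 1-based line numbers at which each key was replaced:

```python
old_new_dict = {'hi': 'bye', 'old': 'new'}

def replace_and_report(path, mapping):
    """Replace each key of `mapping` with its value throughout the file
    at `path`, returning {key: [1-based line numbers where it matched]}."""
    hits = {}
    with open(path, encoding='utf-8') as fh:
        lines = fh.readlines()
    for i, original in enumerate(lines, start=1):
        for key, value in mapping.items():
            # Test against the original line so a replacement cannot
            # accidentally create a match for a later key.
            if key in original:
                hits.setdefault(key, []).append(i)
                lines[i - 1] = lines[i - 1].replace(key, value)
    with open(path, 'w', encoding='utf-8') as fh:
        fh.writelines(lines)
    return hits
```

Calling `replace_and_report('some_file.txt', old_new_dict)` (a hypothetical file) would then return something like `{'hi': [1], 'old': [3]}` while rewriting the file in place.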
<python><python-3.x><dictionary><for-loop><replace>
2023-02-04 16:46:58
1
379
pkk
75,346,623
5,422,354
How to customize polynomial print in Numpy?
<p>I have polynomials whose coefficients are computed using numerical integration methods. Mathematically, I use the Gram-Schmidt algorithm to produce orthogonal polynomials from a given probability distribution function, which involves integrals in the associated Hilbert space. Hence, some coefficients are approximated as floating point numbers very close to zero, although I know that their mathematical value is zero. I would like to customize the printing so that these values are not printed at all.</p> <p>For example, the script:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np p = np.polynomial.Polynomial([1.23456789e-15, 1.0, 1.23456789e-13, 2.0]) print(&quot;p = &quot;, p) </code></pre> <p>produces:</p> <pre><code>p = 1.23456789e-15 + 1.0·x¹ + 1.23456789e-13·x² + 2.0·x³ </code></pre> <p>but I would like to print:</p> <pre><code>p = 1.0·x¹ + 2.0·x³ </code></pre> <p>How can I do this?</p>
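As far as I can tell numpy's polynomial printing has no threshold option (it prints zero terms too), so one sketch is a small custom formatter; `poly_str` and the default tolerance are my own choices, not numpy API:

```python
import numpy as np

def poly_str(p, tol=1e-10):
    """Format a numpy Polynomial like '1.0·x¹ + 2.0·x³', skipping any
    coefficient whose magnitude is below tol."""
    superscripts = str.maketrans('0123456789', '⁰¹²³⁴⁵⁶⁷⁸⁹')
    terms = []
    for power, coef in enumerate(p.coef):
        if abs(coef) < tol:
            continue  # treat numerical noise as an exact zero
        if power == 0:
            terms.append(f"{coef}")
        else:
            terms.append(f"{coef}·x{str(power).translate(superscripts)}")
    return " + ".join(terms) if terms else "0.0"

p = np.polynomial.Polynomial([1.23456789e-15, 1.0, 1.23456789e-13, 2.0])
print("p = ", poly_str(p))  # p =  1.0·x¹ + 2.0·x³
```

The tolerance should be chosen to match the accuracy of the numerical integration, so that genuinely small (but nonzero) coefficients are not discarded by accident.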
<python><numpy>
2023-02-04 16:40:21
1
1,161
Michael Baudin
75,346,434
7,267,141
Automatic attribute copying from member class to parent class at class definition using ABC's __init_subclass__ in Python
<p>I have this code:</p> <pre><code>from abc import ABC class ConfigProto(ABC): def __init__(self, config_dict): self._config_dict = config_dict def parse(self, cfg_store: dict) -&gt; None: # do something with the config dict and the cfg_store and setup attributes ... class ConfigurableComponent(ABC): class Config(ConfigProto): ... def __init__(self, parsed_config: Config): for key, value in parsed_config.__dict__.items(): setattr(self, key, value) def __init_subclass__(cls, **kwargs): &quot;&quot;&quot; Adds relevant attributes from the member Config class to itself. This is done for all subclasses of ConfigurableComponent, so that the attributes are not required to be specified in two places (in the Config class and the component class). If there are significant differences between the Config attributes and the component attributes, they can be also specified in the component class, and then they will not be overwritten by the Config attributes. &quot;&quot;&quot; super().__init_subclass__(**kwargs) config = getattr(cls, 'Config', None) if config: # copy non-callable, non-protected, non-private attributes from config to component class conf_class_attributes = [attr_ for attr_ in dir(config) if not attr_.startswith('_') and not callable(getattr(config, attr_))] for attr_name in conf_class_attributes: # ignore private attributes, methods, and members already defined in the class if not attr_name.startswith('_') \ and not callable(getattr(config, attr_name)) \ and not hasattr(cls, attr_name): setattr(cls, attr_name, getattr(config, attr_name)) class ExampleComponent(ConfigurableComponent): class Config(ConfigProto): param1: int param2: str def parse(self, cfg_store: dict) -&gt; None: ... example_component = ExampleComponent(ExampleComponent.Config(config_dict={'param1': 1, 'param2': 'test'})) assert hasattr(example_component, 'param1') </code></pre> <p>It does not work. 
When subclassing the parent class <code>ConfigurableComponent(ABC)</code>, the <code>__init_subclass__</code> is called, but the <code>config</code> class variable does not contain the attributes defined in <code>ExampleComponent.Config</code> despite it showing it is the correct type. I expect the <code>__init_subclass__</code> method to be called after the subclass is defined (including its members). Still, even though the members are here (there is a member named &quot;Config&quot; in the ExampleComponent class), they are not initialised - the <code>Config</code> class seems to be empty.</p> <p><a href="https://i.sstatic.net/eMI3A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eMI3A.png" alt="debugger printscr" /></a></p> <p>So far I think the reason is that the member class gets fully initialised only after the owner class gets instantiated into an object, but I am not sure and can't seem to find the details in the documentation.</p> <p>Does anybody have an idea how to make this code work so that I can: <strong>define attributes in the member class <code>Config</code> and get them added to the owning class automatically when subclassing the <code>ConfigurableComponent</code> class?</strong></p>
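One thing worth ruling out (an observation, not necessarily the whole story for the ABC machinery): bare annotations such as `param1: int` never create class attributes at all, so `dir()`-based copying finds nothing; they are only recorded in `__annotations__`:

```python
class Config:
    # Bare annotations are recorded in __annotations__, but no attribute
    # is created on the class object itself.
    param1: int
    param2: str
    param3 = 42  # only this one becomes a real class attribute

print(hasattr(Config, 'param1'))     # False
print(hasattr(Config, 'param3'))     # True
print(list(Config.__annotations__))  # ['param1', 'param2']
```

So an `__init_subclass__` that wants to mirror `param1`/`param2` would have to read `cls.Config.__annotations__` (or `typing.get_type_hints(cls.Config)`) rather than relying on `dir(config)`.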
<python><subclassing><abc>
2023-02-04 16:11:45
0
603
Adam Bajger
75,346,397
3,397,007
Can annotations be used to narrow types in Python
<p>I think in reality I'm going to use a different design, where <code>_attrib</code> is set in the constructor and can therefore not be <code>None</code>, however I'm fascinated to see if there's a way to make MyPy happy with this approach. I have a situation where an attribute (in this instance <code>_attrib</code>) is set after the construction of the <code>Thing</code> and there are a number of methods which require it to be set. It seemed reasonable, therefore, to spin up a teeny decorator to validate if the <code>_attrib</code> was set and to chuck an exception if it's not. The below code, however, causes MyPy to still complain - although the error is <code>Item &quot;None&quot; of &quot;Optional[Any]&quot; has no attribute &quot;upper&quot;</code>, so I think the type of <code>self</code> is getting completely lost. I'm also getting <code>Incompatible return value type (got &quot;Callable[[Any, VarArg(Any), KwArg(Any)], Any]&quot;, expected &quot;F&quot;)</code> on <code>return inner</code>.</p> <pre class="lang-py prettyprint-override"><code>from typing import Any, Callable, TypeVar class AttribIsNone(Exception): ... F = TypeVar(&quot;F&quot;, bound=Callable[..., Any]) def guard(func: F) -&gt; F: def inner(self, *args, **kwargs): if self._attrib is None: raise AttribIsNone() return func(self, *args, **kwargs) return inner class Thing: _attrib: str | None @guard def guarded_function(self) -&gt; str: return self._attrib.upper() </code></pre> <p>Ideally I think I'd bind <code>F</code> to something like <code>Callable[[Thing,...], Any]</code> but that's not a valid thing to do right now.</p> <p>Similarly I tried creating a TypeVar for the return value:</p> <pre><code>R = TypeVar(&quot;R&quot;) F = TypeVar(&quot;F&quot;, bound=Callable[..., R]) def guard(func: F) -&gt; F: def inner(self, *args, **kwargs) -&gt; R: ... 
</code></pre> <p>However this is not allowed either.</p> <p><a href="https://peps.python.org/pep-0612/" rel="nofollow noreferrer">PEP612</a> offers a <code>ParamSpec</code> but I can't seem to work out how to construct one, and I'm not even sure if I did that it would help much!</p>
<python><types><mypy>
2023-02-04 16:06:19
1
389
Richard Vodden
75,346,270
1,039,860
Add menu to libreoffice that persists app restarts via a script
<p>I'd like to run one script once that adds a menu that persists through <code>LibreOffice</code> restarts.</p> <p>I understand how to add a menu to <code>LibreOffice</code> <a href="https://stackoverflow.com/questions/75331489/how-to-run-libreoffice-python-script-using-scriptforge?noredirect=1#comment132933276_75331489">as posted here</a> (but it disappears on app restart).</p> <p>I see that it's possible to modify the start macro manually <a href="https://stackoverflow.com/questions/75331489/how-to-run-libreoffice-python-script-using-scriptforge?noredirect=1#comment132933276_75331489">as posted here</a>.</p> <p>Is it possible to modify the start script and add it to the <code>autoexec</code> section via a single script?</p> <p>I tried to add the menu manually (as in Tools &gt; Macros &gt; Customize) but my (latest 7.4.5.1 x64) version does not have a Customize section of the Macros menu.</p> <p><a href="https://i.sstatic.net/ik5re.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ik5re.png" alt="Customize is missing!" 
/></a></p> <p>I am feeling very foolish as I can't see where you are suggesting to add the start macro <a href="https://i.sstatic.net/D130j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D130j.png" alt="customize dialog" /></a></p> <p>I'm trying this code (which is supposed to add a popup to the startup), but the length of basic_libraries is always zero:</p> <pre><code>import uno from com.sun.star.beans import PropertyValue print('here 1') # Connect to LibreOffice localContext = uno.getComponentContext() print('here 1.1') resolver = localContext.ServiceManager.createInstanceWithContext(&quot;com.sun.star.bridge.UnoUrlResolver&quot;, localContext) print('here 1.2') ctx = resolver.resolve(&quot;uno:socket,host=localhost,port=8100;urp;StarOffice.ComponentContext&quot;) print('here 1.3') smgr = ctx.ServiceManager print('here 2') # Open a new document desktop = smgr.createInstanceWithContext(&quot;com.sun.star.frame.Desktop&quot;, ctx) doc = desktop.loadComponentFromURL(&quot;private:factory/scalc&quot;, &quot;_blank&quot;, 0, ()) print('here 3') # Add a start macro macro_name = &quot;Module1.start&quot; macro_code = &quot;Sub Main\n msgbox \&quot;Hello World!\&quot;\nEnd Sub&quot; basic_libraries = doc.BasicLibraries print(basic_libraries) print(len(basic_libraries)) for lib in basic_libraries: print(f'checking {lib}') if lib.Name == &quot;Standard&quot;: print('adding macro') lib.createModule(macro_name, macro_code) break print('here 4') # Save the document prop = PropertyValue() prop.Name = &quot;FilterName&quot; prop.Value = &quot;writer_pdf_Export&quot; print('here 5') doc.dispose() </code></pre> <p>Before running the script, I am starting scalc like this:</p> <pre><code>scalc.exe --accept=socket,host=127.0.0.1,port=8100;urp </code></pre> <p>My goal is to run the Python script that adds the menu instead of the popup (so if you have any idea how to replace it, I would appreciate any help!)</p>
<python><libreoffice>
2023-02-04 15:45:10
1
1,116
jordanthompson
75,346,172
1,506,850
rolling unique value count in pandas across multiple columns
<p>there are several answers around rolling count in pandas <a href="https://stackoverflow.com/questions/65072737/rolling-unique-value-count-in-pandas">Rolling unique value count in pandas</a> <a href="https://stackoverflow.com/questions/46470743/how-to-efficiently-compute-a-rolling-unique-count-in-a-pandas-time-series">How to efficiently compute a rolling unique count in a pandas time series?</a></p> <p>How do I count unique values across multiple columns? For one column, I can do:</p> <pre><code>df[my_col]=df[my_col].rolling(300).apply(lambda x: len(np.unique(x))) </code></pre> <p>How to extend to multipe columns, counting unique values overall across all values in the rolling window?</p>
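One direct approach (a sketch; the helper name is mine, and a Python-level loop is hardly optimal for large windows, but it does count distinct values over all columns jointly): slide a trailing window over the underlying array and apply `np.unique` to the whole 2-D slice at once:

```python
import numpy as np
import pandas as pd

def rolling_nunique_all_cols(df, window):
    """Distinct values in each trailing window, counted across ALL
    columns at once; NaN until the window is full, mirroring the
    alignment of pandas' rolling(...)."""
    values = df.to_numpy()
    out = [np.nan] * (window - 1)
    for end in range(window, len(df) + 1):
        # np.unique flattens the 2-D slice, so every cell in the
        # window contributes to a single distinct-value count.
        out.append(len(np.unique(values[end - window:end])))
    return pd.Series(out, index=df.index)

df = pd.DataFrame({'a': [1, 1, 2, 5], 'b': [1, 2, 2, 3]})
print(rolling_nunique_all_cols(df, 2))
```

For large windows, an incremental approach (maintaining a counter of values entering and leaving the window) would avoid recomputing `np.unique` from scratch at every position.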
<python><pandas><unique><rolling-computation>
2023-02-04 15:30:47
2
5,397
00__00__00
75,346,138
5,221,435
Python Flask - volumes don't work after dockerizing
<p>I'm trying to dockerize a Python-Flask application, also using volumes in order to have a live update when I change the code, but the volumes don't work and I have to stop the containers and run them again. This is the code that I try to change (main.py):</p> <pre><code>from flask import Flask import pandas as pd import json import os app = Flask(__name__) @app.route(&quot;/&quot;) def hello(): return &quot;Hello&quot; </code></pre> <p>My dockerfile.dev:</p> <pre><code>FROM python:3.9.5-slim-buster WORKDIR '/app' COPY requirements.txt . RUN pip3 install -r requirements.txt RUN pip install python-dotenv COPY ./ ./ ENV FLASK_APP=main.py EXPOSE 5000 CMD [ &quot;python3&quot;, &quot;-m&quot; , &quot;flask&quot;, &quot;run&quot;, &quot;--host=0.0.0.0&quot;] </code></pre> <p>My docker-compose.yaml</p> <pre><code>version: &quot;3&quot; services: backend: build: context: . dockerfile: Dockerfile.dev ports: - &quot;5000:5000&quot; expose: - &quot;5000&quot; volumes: - .:/app stdin_open: true environment: - CHOKIDAR_USEPOLLING=true - PGHOST=db - PGUSER=userp - PGDATABASE=p - PGPASSWORD=pgpwd - PGPORT=5432 - DB_HOST=db - POSTGRES_DB=p - POSTGRES_USER=userp - POSTGRES_PASSWORD=pgpwd depends_on: - db db: image: postgres:latest restart: always environment: - POSTGRES_DB=db - DB_HOST=127.0.0.1 - POSTGRES_USER=userp - POSTGRES_PASSWORD=pgpwd - POSTGRES_ROOT_PASSWORD=pgpwd volumes: - db-data-p:/var/lib/postgresql/data pgadmin-p: container_name: pgadmin4_container_p image: dpage/pgadmin4 restart: always environment: PGADMIN_DEFAULT_EMAIL: admin@admin.com PGADMIN_DEFAULT_PASSWORD: root ports: - &quot;5050:80&quot; logging: driver: none volumes: db-data-p: </code></pre> <p>To start, I execute <code>docker-compose up</code></p> <p>The /app volume does not seem to work.</p>
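One thing worth noting (an assumption about the intent, since the compose file mounts the source but never enables Flask's reloader): the Flask development server only watches files for changes when debug mode is on, so even a correctly mounted volume shows no live reload without it. A minimal sketch of the relevant compose fragment:

```yaml
# docker-compose.yaml (backend service) -- enable the auto-reloader
services:
  backend:
    environment:
      - FLASK_DEBUG=1   # Flask's dev server then restarts on code changes
    volumes:
      - .:/app
```

Alternatively, the Dockerfile CMD can pass the flag directly (Flask 2.2+): `CMD ["python3", "-m", "flask", "run", "--host=0.0.0.0", "--debug"]`. If the mount itself is in doubt, `docker compose exec backend ls /app` shows whether edits made on the host actually appear inside the container.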
<python><docker><flask><docker-volume>
2023-02-04 15:25:48
1
1,585
Giacomo Brunetta
75,346,115
6,337,701
Unable to capture class with Brackets
<p>I'm using the following code in Python to capture certain text values from a webpage.</p> <pre><code>from bs4 import BeautifulSoup import requests url=&quot;https://example.com/page1.html&quot; response=requests.get(url) soup=BeautifulSoup(response.content,'html5lib') spans=soup.find_all('a',&quot;menu-tags&quot;) for span in spans: print(span.text) </code></pre> <p>It works perfectly when the input HTML page has the following:</p> <pre><code> &lt;li class=&quot;foodie&quot;&gt; &lt;a href=&quot;../../-/british/&quot; class=&quot;menu-tags&quot; data-clickstream-city-cuisine-module&gt;British&lt;/a&gt; &lt;span&gt;,&amp;nbsp&lt;/span&gt; &lt;a href=&quot;../../-/indian/&quot; class=&quot;menu-tags&quot; data-clickstream-city-cuisine-module&gt;Indian&lt;/a&gt; &lt;span&gt;,&amp;nbsp&lt;/span&gt; &lt;a href=&quot;../../-/french/&quot; class=&quot;menu-tags&quot; data-clickstream-city-cuisine-module&gt;French&lt;/a&gt; </code></pre> <p>and correctly produces the following output:</p> <pre><code>British Indian French </code></pre> <p>However, when I use the following modified code on an input HTML page containing a class which has brackets (), the output is NOT generated:</p> <pre><code>from bs4 import BeautifulSoup import requests url=&quot;https://example.com/page1.html&quot; response=requests.get(url) soup=BeautifulSoup(response.content,'html5lib') spans=soup.find_all('span',&quot;Fw(600)&quot;) for span in spans: print(span.text) </code></pre> <p>HTML code input:</p> <pre><code>&lt;span class=&quot;Fw(600)&quot;&gt;Pineapple&lt;/span&gt;&lt;br/&gt;&lt;span&gt;Animal&lt;/span&gt;: &lt;span class=&quot;Fw(600)&quot;&gt;Monkey&lt;/span&gt;&lt;br/&gt;&lt;span&gt; </code></pre> <p>Expected output is</p> <pre><code>Pineapple Monkey </code></pre> <p>But nothing is being generated. 
Is it because of the brackets in the class, and if so, how do I capture it?</p> <p>Using single or double backslashes before the brackets doesn't help either:</p> <pre><code>spans=soup.find_all('span',&quot;Fw\(600\)&quot;) spans=soup.find_all('span',&quot;Fw\\(600\\)&quot;) </code></pre>
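For what it's worth, the brackets themselves don't appear to be the obstacle when the snippet is parsed in isolation (sketch below); if this matches, it would point to the fetched page simply not containing those spans in its initial HTML (e.g. JavaScript-rendered content) rather than a selector problem:

```python
from bs4 import BeautifulSoup

html = ('<span class="Fw(600)">Pineapple</span><br/><span>Animal</span>: '
        '<span class="Fw(600)">Monkey</span>')
soup = BeautifulSoup(html, 'html.parser')

# BeautifulSoup matches class tokens literally, so brackets need no
# escaping at all.
for span in soup.find_all('span', 'Fw(600)'):
    print(span.text)
# Pineapple
# Monkey
```

Saving `response.content` to disk and searching it for `Fw(600)` would confirm whether the class is present in the HTML the server actually returns.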
<python><beautifulsoup>
2023-02-04 15:22:10
1
941
Aquaholic
75,346,073
4,418,481
Dash dropdown wont reset values once x clicked
<p>I created 2 Dash dropdowns where one dropdown (the lower) is based on the selection in the first dropdown (the upper)</p> <p><a href="https://i.sstatic.net/HSfCp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HSfCp.png" alt="enter image description here" /></a></p> <p>The selection and everything work fine.</p> <p>However, when I click the X button to remove all the options from the area-dropdown, it does remove all the options, but the city-dropdown values remain the same as when I clicked the X button.</p> <p>Why won't it reset?</p> <p>This is the code I'm using:</p> <pre><code>@app.callback( Output(&quot;city-dropdown&quot;, &quot;options&quot;), Input(&quot;area-dropdown&quot;, &quot;value&quot;), ) def update_city_dropdown(areas): if areas is None or None in areas or areas == []: return [] _area_codes = area_codes['area'][area_codes['name'].isin(areas)] cities = city_codes['name'][city_codes['area'].isin(_area_codes)] return [{'label': city, 'value': city} for city in cities] </code></pre> <p>where:</p> <pre><code>area_dropdown = dcc.Dropdown( options=area_codes['name'], placeholder=&quot;Select an area&quot;, multi=True, style=DROPDOWN_STYLE, id='area-dropdown' ) city_dropdown = dcc.Dropdown( placeholder=&quot;Select a city&quot;, options=[], multi=True, style=DROPDOWN_STYLE, id='city-dropdown' ) </code></pre> <p>Thank you</p>
<python><plotly-dash>
2023-02-04 15:17:34
1
1,859
Ben
75,346,044
4,307,872
How to extract only a Rect object in PyMuPDF
<p>I tried the solution from this thread here:</p> <p><a href="https://stackoverflow.com/questions/72916381/read-specific-region-from-pdf">Read specific region from PDF</a></p> <p>Sadly the following example from the thread by user Zach Young doesn't work for me.</p> <pre><code>import os.path import fitz from fitz import Document, Page, Rect # For visualizing the rects that PyMuPDF uses compared to what you see in the PDF VISUALIZE = True input_path = &quot;test.pdf&quot; doc: Document = fitz.open(input_path) for i in range(len(doc)): page: Page = doc[i] page.clean_contents() # https://pymupdf.readthedocs.io/en/latest/faq.html#misplaced-item-insertions-on-pdf-pages # Hard-code the rect you need rect = Rect(0, 0, 100, 100) if VISUALIZE: # Draw a red box to visualize the rect's area (text) page.draw_rect(rect, width=1.5, color=(1, 0, 0)) text = page.get_textbox(rect) print(text) if VISUALIZE: head, tail = os.path.split(input_path) viz_name = os.path.join(head, &quot;viz_&quot; + tail) doc.save(viz_name) </code></pre> <p>but if I set my respective values for the Rect object (which do seem reasonable) and issue</p> <pre><code>print(rect.is_empty) </code></pre> <p>it outputs <code>True</code>.</p> <p>Also, it doesn't draw the rectangle as it should, and there is no output from</p> <pre><code>text = page.get_textbox(rect) </code></pre> <p>But if I just issue</p> <pre><code>text = page.get_text() </code></pre> <p>that gives me some correct output.</p> <p>However, I wonder why it says that the rect is empty, because I need to extract the text only from a certain area.</p>
<python><extract><text-extraction><pymupdf>
2023-02-04 15:13:53
1
925
von spotz
75,345,963
51,167
Make BeautifulSoup recognize word breaks caused by HTML <li> elements
<p>BeautifulSoup4 does not recognize that it should break between <code>&lt;li&gt;</code> elements when extracting text:</p> <p>Demo program:</p> <pre><code>#!/usr/bin/env python3 HTML=&quot;&quot;&quot; &lt;html&gt; &lt;body&gt; &lt;ul&gt; &lt;li&gt;First Element&lt;/li&gt;&lt;li&gt;Second element&lt;/li&gt; &lt;/ul&gt; &lt;/body&gt; &quot;&quot;&quot; from bs4 import BeautifulSoup soup = BeautifulSoup( HTML, 'html.parser' ) print(soup.find('body').text.strip()) </code></pre> <p>Output:</p> <pre><code>First ElementSecond element </code></pre> <p>Desired output:</p> <pre><code>First Element Second element </code></pre> <p>I guess I could just globally add a space before all <code>&lt;li&gt;</code> elements. That seems like a hack?</p>
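For reference, `get_text()` accepts a separator inserted between text fragments, which avoids patching the HTML itself (a sketch using the demo document above):

```python
from bs4 import BeautifulSoup

HTML = """
<html>
<body>
<ul>
<li>First Element</li><li>Second element</li>
</ul>
</body>
"""
soup = BeautifulSoup(HTML, 'html.parser')
# strip=True trims each fragment, and the separator is placed between
# the remaining non-empty strings.
print(soup.find('body').get_text(' ', strip=True))
# First Element Second element
```

Note the separator is applied between every pair of text fragments, not only at `<li>` boundaries, so inline markup like `<b>` inside a word would also gain a space.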
<python><beautifulsoup>
2023-02-04 15:02:01
1
30,023
vy32
75,345,951
7,179,538
jupyter notebook python compatibility
<p>I am totally new to Python/Jupyter notebook.</p> <p>Using Windows 11. I installed the latest version of Python, <code>Python 3.11.1</code>, and the latest Anaconda - <code>conda 22.9.0</code>.</p> <p>My jupyter notebook (installed by anaconda) does not start from the command line:</p> <blockquote> <p>jupyter notebook Traceback (most recent call last): File &quot;C:\apps\anaconda\Scripts\jupyter-notebook-script.py&quot;, line 6, in from notebook.notebookapp import main File &quot;C:\apps\anaconda\lib\site-packages\notebook\notebookapp.py&quot;, line 77, in from .services.kernels.kernelmanager import MappingKernelManager, AsyncMappingKernelManager File &quot;C:\apps\anaconda\lib\site-packages\notebook\services\kernels\kernelmanager.py&quot;, line 18, in from jupyter_client.session import Session File &quot;C:\apps\anaconda\lib\site-packages\jupyter_client\__init__.py&quot;, line 8, in from .asynchronous import AsyncKernelClient # noqa File &quot;C:\apps\anaconda\lib\site-packages\jupyter_client\asynchronous\__init__.py&quot;, line 1, in from .client import AsyncKernelClient # noqa File &quot;C:\apps\anaconda\lib\site-packages\jupyter_client\asynchronous\client.py&quot;, line 6, in from jupyter_client.channels import HBChannel File &quot;C:\apps\anaconda\lib\site-packages\jupyter_client\channels.py&quot;, line 12, in import zmq.asyncio File &quot;C:\apps\anaconda\lib\site-packages\zmq\__init__.py&quot;, line 103, in from zmq import backend File &quot;C:\apps\anaconda\lib\site-packages\zmq\backend\__init__.py&quot;, line 31, in raise original_error from None File &quot;C:\apps\anaconda\lib\site-packages\zmq\backend\__init__.py&quot;, line 26, in _ns = select_backend(first) File &quot;C:\apps\anaconda\lib\site-packages\zmq\backend\select.py&quot;, line 31, in select_backend mod = import_module(name) File &quot;C:\apps\anaconda\lib\importlib\__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], 
package, level) File &quot;C:\apps\anaconda\lib\site-packages\zmq\backend\cython\__init__.py&quot;, line 6, in from . import ( ImportError: DLL load failed while importing _device: The specified module could not be found.</p> </blockquote> <p>Conda documentation states &quot;Anaconda supports Python 3.7, 3.8, 3.9 and 3.10. The current default is Python 3.9.&quot;</p> <p>Should I just downgrade my Python to 3.9 or 3.10 (I don't care which)? Any other workarounds?</p> <p>Updates</p> <ol> <li><p>I could open the jupyter notebook from the conda prompt. I have not started using it yet, so let's see; if Python compatibility issues need to be resolved, I will need to figure that out.</p> </li> <li><p>But I can't start the anaconda-navigator. When I try to start it from the Windows start menu, nothing happens. From the Conda prompt I get the following error: <code>&gt;anaconda-navigator Traceback (most recent call last): File &quot;C:\apps\anaconda\Scripts\anaconda-navigator-script.py&quot;, line 6, in &lt;module&gt; from anaconda_navigator.app.main import main File &quot;C:\apps\anaconda\lib\site-packages\anaconda_navigator\app\main.py&quot;, line 19, in &lt;module&gt; from anaconda_navigator.app.start import start_app File &quot;C:\apps\anaconda\lib\site-packages\anaconda_navigator\app\start.py&quot;, line 15, in &lt;module&gt; from qtpy.QtCore import QCoreApplication, QEvent, QObject, Qt # pylint: disable=no-name-in-module File &quot;C:\apps\anaconda\lib\site-packages\qtpy\QtCore.py&quot;, line 15, in &lt;module&gt; from PyQt5.QtCore import * ImportError: DLL load failed while importing QtCore: The specified procedure could not be found.</code></p> </li> </ol>
<python><jupyter-notebook><anaconda>
2023-02-04 14:59:29
0
2,005
Sam-T
75,345,929
14,190,526
Python patch: get Mock rather than MagicMock (with autospec)
<p>How can I get a <code>Mock</code> object as the patch result rather than a <code>MagicMock</code> in this short sample?</p> <pre><code>from unittest.mock import patch, Mock class X: x=1 with patch.object(X, 'x', autospec=True) as my_mock: print(type(my_mock)) # NonCallableMagicMock, but need just a Mock/NonCallableMock </code></pre>
<python><python-unittest>
2023-02-04 14:56:10
0
1,100
salius
75,345,866
1,942,868
Object update by PATCH for Django REST Framework
<p>I am using <code>viewsets.ModelViewSet</code></p> <pre><code>from rest_framework import viewsets class ProjectViewSet(viewsets.ModelViewSet): serializer_class = s.ProjectSerializer queryset = m.Project.objects.all() def patch(self,request,*args,**kwargs): instance = self.get_object() serializer = self.get_serializer(instance,data = request.data) if serializer.is_valid(): self.perform_update(serializer) return Response(serializer.data) return Response() </code></pre> <p>Then I test to update the object via django restframework UI.</p> <p><a href="https://i.sstatic.net/BcQ1y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BcQ1y.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/l2Btd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l2Btd.png" alt="enter image description here" /></a></p> <p>Then this error occurs.</p> <p>My basic idea that changing object data via PATCH is correct?</p> <p>How can I update the data via <code>Django REST Framework </code></p> <pre><code>Expected view ProjectViewSet to be called with a URL keyword argument named &quot;pk&quot;. Fix your URL conf, or set the `.lookup_field` attribute on the view correctly. </code></pre>
<python><django><django-rest-framework>
2023-02-04 14:48:14
1
12,599
whitebear
75,345,862
15,673,832
How to use tuples as slice start:end values
<p>I need to slice an array with arbitrary dimensions by two tuples. I can use slicing, such as <code>a[1:3, 4:6]</code>. But what do I do if 1,3 and 4,6 are tuples?</p> <p>While <code>a[(1, 3)]</code> works, I tried <code>a[(1, 3), (4, 6)]</code> and it didn't work; I think it ignores (4, 6). And I couldn't figure out how to make something like <code>a[ t1[0] : t1[1], t2[0] : t2[1] ]</code> work depending on how many dimensions there are.</p> <p>I'd like something like <code>a[(1, 3):(4, 6)]</code> that would also work for higher dimensions, e.g. <code>a[(1, 3, 2):(4, 6, 5)]</code></p>
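One possible approach, sketched below: build a tuple of `slice` objects from per-dimension start/stop tuples, which generalizes to any number of dimensions (the `start`/`stop` names are illustrative):

```python
import numpy as np

a = np.arange(100).reshape(10, 10)

start, stop = (1, 4), (3, 6)  # per-dimension starts and stops
idx = tuple(slice(s, e) for s, e in zip(start, stop))

# equivalent to a[1:3, 4:6], but works for any number of dimensions
print((a[idx] == a[1:3, 4:6]).all())  # True
```

The same expression handles the 3-D case: `zip((1, 3, 2), (4, 6, 5))` yields three slices, giving a `(3, 3, 3)` block.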
<python><arrays><numpy><slice><dimensions>
2023-02-04 14:47:41
1
678
Ford F150 Gaming
75,345,674
11,436,357
Why does Playwright miss the URL pattern?
<p>I need to handle requests with a certain URL, and I'm trying to do it like this:</p> <pre class="lang-py prettyprint-override"><code>await page.route(&quot;**/api/common/v1/play?**&quot;, handle_data_route) </code></pre> <p>But it also handles a url like this: <code>api/common/v1/play_random?</code></p>
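In glob syntax, `?` matches a single character rather than a literal question mark, so the `?` in the pattern consumes the `_` of `play_random` and `**` swallows the rest. One hedged workaround is to pass a compiled regex instead (the URLs below are made up for illustration):

```python
import re

# a regex pins the literal '?', which glob syntax treats as a wildcard
pattern = re.compile(r"/api/common/v1/play\?")

print(bool(pattern.search("https://example.test/api/common/v1/play?id=1")))         # True
print(bool(pattern.search("https://example.test/api/common/v1/play_random?id=1")))  # False

# hypothetical Playwright usage (not executed here):
# await page.route(pattern, handle_data_route)
```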
<python><playwright><playwright-python>
2023-02-04 14:17:25
1
976
kshnkvn
75,345,565
10,971,593
Does the read method of io.BytesIO return a copy of the underlying bytes data?
<p>I am aware that <code>io.BytesIO()</code> returns a binary stream object which uses an in-memory buffer, but it also provides <code>getbuffer()</code>, which gives a readable and writable view (a <code>memoryview</code> object) over the contents of the buffer without copying them.</p> <pre class="lang-py prettyprint-override"><code>obj = io.BytesIO(b'abcdefgh') buf = obj.getbuffer() </code></pre> <p>Now, we know <code>buf</code> points to the underlying data and, when sliced (<code>buf[:3]</code>), returns a memoryview object again without making a copy. So I want to know: if we do <code>obj.read(3)</code>, does it also use the in-memory buffer or make a copy? If it does use the in-memory buffer, what is the difference between <code>obj.read</code> and <code>buf</code>, and which one is preferable for efficiently reading the data in chunks from considerably long byte objects?</p>
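A small experiment, easily run locally, illustrates the difference: `read()` hands back a new immutable `bytes` object copied out of the buffer, while `getbuffer()` gives a view that shares, and can mutate, the underlying storage:

```python
import io

obj = io.BytesIO(b"abcdefgh")

chunk = obj.read(3)           # a new, immutable bytes object
print(chunk)                  # b'abc'

with obj.getbuffer() as buf:  # a writable view over the SAME storage
    buf[0:1] = b"Z"           # mutates the stream in place

obj.seek(0)
print(obj.read(3))            # b'Zbc' -- the stream changed...
print(chunk)                  # b'abc' -- ...but the earlier copy did not
```

So chunked `read()` costs one copy per chunk; slicing the `getbuffer()` view avoids copies, but while a view is alive the stream cannot be resized or closed.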
<python><buffer><bytesio><memoryview>
2023-02-04 13:55:50
1
417
Scarface
75,345,521
9,773,920
Get filename from S3 bucket path
<p>I am getting the last modified file from S3 bucket using the below code:</p> <pre><code>import boto3 import urllib.parse import json import botocore.session as bc import time from time import mktime from datetime import datetime print('Loading function') def lambda_handler(event, context): s3_client = boto3.client(&quot;s3&quot;) list_of_s3_objs = s3_client.list_objects_v2(Bucket=&quot;mybucket&quot;, Prefix=&quot;folder/sub/&quot;) # Returns a bunch of json contents = list_of_s3_objs[&quot;Contents&quot;] #get last modified sorted_contents = sorted(list_of_s3_objs['Contents'], key=lambda d: d['LastModified'], reverse=True) print(sorted_contents[0].get('Key')) </code></pre> <p>This prints the last modified file from the path 'mybucket/folder/sub' correctly. The output is:</p> <pre><code>folder/sub/2023-02-03_myfile.csv </code></pre> <p>How to extract just the filename '2023-02-03_myfile.csv' file from this path?</p>
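The key is a plain POSIX-style string, so either `rsplit` or `pathlib.PurePosixPath` (which sidesteps `os.path` platform differences) can take the last segment; a minimal sketch:

```python
from pathlib import PurePosixPath

key = "folder/sub/2023-02-03_myfile.csv"

print(key.rsplit("/", 1)[-1])   # 2023-02-03_myfile.csv
print(PurePosixPath(key).name)  # 2023-02-03_myfile.csv
```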
<python><amazon-web-services><amazon-s3><aws-lambda>
2023-02-04 13:49:57
1
1,619
Rick
75,345,190
323,631
pytest overrides existing warning filters
<p>It seems that ignoring warnings using <code>warnings.filterwarnings</code> is not respected by <code>pytest</code>. For example:</p> <pre><code>$ cat test.py import warnings warnings.filterwarnings('ignore', category=UserWarning) def test_warnings_filter(): warnings.warn(&quot;This is a warning&quot;, category=UserWarning) </code></pre> <p>When I run this I expect that my explicitly ignored warning will by ignored by <code>pytest</code>. Instead I get this:</p> <pre><code>$ pytest test.py =============================================================================== test session starts =============================================================================== platform darwin -- Python 3.10.8, pytest-7.2.1, pluggy-1.0.0 rootdir: /Users/aldcroft/tmp/pytest plugins: anyio-3.6.2 collected 1 item test.py . [100%] ================================================================================ warnings summary ================================================================================= test.py::test_warnings_filter /Users/aldcroft/tmp/pytest/test.py:7: UserWarning: This is a warning warnings.warn(&quot;This is a warning&quot;, category=UserWarning) -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ========================================================================== 1 passed, 1 warning in 0.01s =========================================================================== </code></pre> <p>I know about <code>pytest.ini</code> config file and the <code>-W</code> flag and <code>@pytest.mark.filterwarnings</code>, but these don't work well for my use case of integration testing a large number of installed packages via <code>&lt;pkg_name&gt;.test()</code>, where there are at least a dozen 3rd party warnings that need to be ignored for a clean output.</p> <p>Any ideas on how to make this work?</p>
<python><pytest><warnings>
2023-02-04 12:47:38
2
2,581
Tom Aldcroft
75,345,086
6,630,397
Pandas DataFrame.to_sql() doesn't work anymore with an sqlalchemy 2.0.1 engine.connect() as a context manager and doesn't throw any error
<p>This code with <a href="https://pandas.pydata.org/docs/reference/index.html" rel="nofollow noreferrer">pandas</a> <code>1.5.3</code> and <a href="https://www.sqlalchemy.org/" rel="nofollow noreferrer">sqlalchemy</a> <code>2.0.1</code> is not working anymore and, surprisingly, it doesn't raise any error; the code passes silently:</p> <pre class="lang-py prettyprint-override"><code># python 3.10.6 import pandas as pd # 1.5.3 import psycopg2 # '2.9.5 (dt dec pq3 ext lo64)' from sqlalchemy import create_engine # 2.0.1 def connector(): return psycopg2.connect(**DB_PARAMS) engine = create_engine('postgresql+psycopg2://', creator=connector) with engine.connect() as connection: df.to_sql( name='my_table', con=connection, if_exists='replace', index=False, ) </code></pre> <p>Currently, with sqlalchemy <code>2.0.1</code>, my table is no longer populated with the DataFrame content. It was, however, correctly populated with sqlalchemy version <code>1.4.45</code>.</p> <h3>Edit</h3> <p>Apparently, it works when I <em><strong>don't</strong></em> use a context manager:</p> <pre class="lang-py prettyprint-override"><code>connection = engine.connect() res.to_sql( name='my_table', con=connection, if_exists='replace', index=False ) Out[2]: 133 # &lt;- wondering what is this return code '133' here? connection.commit() connection.close() </code></pre> <p>How could I get it to work with a context manager (aka a <code>with</code> statement)?</p>
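One likely explanation, sketched against SQLite below so it runs standalone: in SQLAlchemy 2.0 a plain `engine.connect()` no longer autocommits, so the implicit transaction is rolled back when the `with` block exits without an explicit `commit()`. `engine.begin()` is the context manager that commits on clean exit (the postgres connector is replaced by an in-memory SQLite engine here purely for illustration):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite://")  # stand-in for the postgres engine
df = pd.DataFrame({"a": [1, 2, 3]})

# begin() opens a connection AND a transaction that is committed on clean exit
with engine.begin() as connection:
    df.to_sql(name="my_table", con=connection, if_exists="replace", index=False)

with engine.connect() as connection:
    out = pd.read_sql("SELECT COUNT(*) AS n FROM my_table", connection)
print(int(out["n"].iloc[0]))  # 3
```

As for the stray integer: in recent pandas, `to_sql` returns the number of rows affected, so the `133` is simply the row count, not an error code.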
<python><pandas><sqlalchemy><psycopg2><contextmanager>
2023-02-04 12:27:23
3
8,371
swiss_knight
75,344,761
13,962,514
Why is Python not found by VS Code but found in the VS Code integrated terminal?
<p>I was learning about logging in Python for the first time today. When I tried running my code from VS Code, I received this error message:</p> <p><code>/bin/sh: 1: python: not found</code></p> <p>However, when I run the code directly from my terminal, I get the expected result. I need help figuring out why the error appears when I run the code from VS Code.</p>
<python><visual-studio-code>
2023-02-04 11:26:42
3
311
Oluwasube
75,344,756
5,231,001
Python class function return super()
<p>So I was messing around with a readonly-modifyable class pattern which is pretty common in <code>java</code>. It involves creating a base class containing readonly properties, and extending that class for a modifyable version. Usually there is a function <code>readonly()</code> or something similar to revert a modifyable version back to a readonly version of itself.</p> <p>Unfortunately you cannot directly define a setter for a property defined in a super class in Python, but you can simply redefine it as shown below. More interestingly, in Python you have the 'magic function' <code>super</code> returning a proxy object of the parent/super class, which allows for a cheeky <code>readonly()</code> implementation that feels really hacky, but as far as I can test it just works.</p> <pre class="lang-py prettyprint-override"><code>class Readonly: _t:int def __init__(self, t: int): self._t = t @property def t(self) -&gt; int: return self._t class Modifyable(Readonly): def __init__(self, t: int): super().__init__(t) @property def t(self) -&gt; int: return self._t # can also be super().t @t.setter def t(self, t): self._t = t def readonly(self) -&gt; Readonly: return super() </code></pre> <p>In the above pattern I can call the <code>readonly</code> function and obtain a proxy to the parent object without having to instantiate a readonly version. The only problem would be that when setting the readonly attribute, instead of throwing <code>AttributeError: can't set attribute</code>, it will throw an <code>AttributeError: 'super' object has no attribute '&lt;param-name&gt;'</code></p> <p>So here are the questions for this:</p> <ul> <li>Can this cause problems (exposing the <code>super</code> proxy-object outside the class itself)?</li> <li>I tested this on Python 3.8.5, but I am not sure whether that works by accident and goes against the 'python-semantics', so to speak?</li> <li>Is there a better/more desirable way to achieve this? (I have no idea if it is even worth the ambiguous error message, in regards to performance for example)</li> </ul> <p>I would love to hear opinions and/or insights into this</p>
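One concrete issue with exposing the `super()` proxy: its type is `super`, so `isinstance(..., Readonly)` checks fail on it, on top of the misleading `AttributeError`. A hedged alternative sketch is to return a genuine `Readonly` built from the current state:

```python
class Readonly:
    def __init__(self, t: int):
        self._t = t

    @property
    def t(self) -> int:
        return self._t


class Modifyable(Readonly):
    @property
    def t(self) -> int:
        return self._t

    @t.setter
    def t(self, t):
        self._t = t

    def readonly(self) -> Readonly:
        # a real Readonly snapshot instead of a super() proxy
        return Readonly(self._t)


m = Modifyable(1)
m.t = 5
print(isinstance(m.readonly(), Readonly))  # True
```

Setting `t` on the snapshot now raises the expected `AttributeError` from a property with no setter; the trade-off is that the snapshot no longer tracks later mutations of the original.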
<python><python-3.x><design-patterns><super><readonly>
2023-02-04 11:26:19
1
1,922
n247s
75,344,613
13,285,583
How to get 1280x1280 from 3840x2160 output stream without scaling?
<p>My goal is to get 1280x1280 frame from nvarguscamerasrc. The problem is that <code>nvarguscamerasrc</code> scaled the 3840x2160 frame to 1280x720. The consequence is that the bottom of the frame is always black.</p> <p>JetsonCamera.py</p> <pre><code>def gstreamer_pipeline( # Issue: the sensor format used by Raspberry Pi 4B and NVIDIA Jetson Nano B01 are different # in Raspberry Pi 4B, this command # $ libcamera-still --width 1280 --height 1280 --mode 1280:1280 # uses sensor format 2328x1748. # However, v4l2-ctl --list-formats-ext do not have such format. capture_width=1920, capture_height=1080, display_width=640, display_height=360, framerate=21, flip_method=0, ): return ( &quot;nvarguscamerasrc ! &quot; &quot;video/x-raw(memory:NVMM), &quot; &quot;width=(int)%d, height=(int)%d, &quot; &quot;format=(string)NV12, framerate=(fraction)%d/1 ! &quot; &quot;nvvidconv flip-method=%d ! &quot; &quot;video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! &quot; &quot;videoconvert ! &quot; &quot;video/x-raw, format=(string)BGR ! 
appsink&quot; % ( capture_width, capture_height, framerate, flip_method, display_width, display_height, ) ) class Camera(object): frame_reader = None cap = None previewer = None def __init__(self, width=640, height=360): self.open_camera(width, height) def open_camera(self, width=640, height=360): self.cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0, display_width=width, display_height=height), cv2.CAP_GSTREAMER) if not self.cap.isOpened(): raise RuntimeError(&quot;Failed to open camera!&quot;) if self.frame_reader == None: self.frame_reader = FrameReader(self.cap, &quot;&quot;) self.frame_reader.daemon = True self.frame_reader.start() def getFrame(self, timeout = None): return self.frame_reader.getFrame(timeout) class FrameReader(threading.Thread): queues = [] _running = True camera = None def __init__(self, camera, name): threading.Thread.__init__(self) self.name = name self.camera = camera def run(self): while self._running: _, frame = self.camera.read() while self.queues: queue = self.queues.pop() queue.put(frame) def addQueue(self, queue): self.queues.append(queue) def getFrame(self, timeout = None): queue = Queue(1) self.addQueue(queue) return queue.get(timeout = timeout) def stop(self): self._running = False </code></pre> <p>main.py</p> <pre><code>exit_ = False if __name__ == &quot;__main__&quot;: camera = Camera(width=1280, height=1280) while not exit_: global frame frame = camera.getFrame(2000) cv2.imshow(&quot;Test&quot;, frame) key = cv2.waitKey(1) if key == ord('q'): exit_ = True </code></pre> <p><a href="https://i.sstatic.net/MyEPg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MyEPg.jpg" alt="frame's bottom is black" /></a></p>
<python><opencv><gstreamer><nvidia-jetson>
2023-02-04 10:58:32
0
2,173
Jason Rich Darmawan
75,344,574
2,700,593
How to prevent Pandas to_dict() from converting timestamps to string?
<p>I have a dataframe with a <em>date</em> field which appear to be represented as unix timestamps. When i call <code>df.to_dict()</code> on it the dates are getting converted to a string like this <strong>yyyy-mm-dd</strong> .... how can I prevent this from happening?</p> <p>I'm using the code to return a JSON in my FastAPI app ...</p> <pre><code>df_results = pd.read_sql_query(sql_query_str, _engine) return_object[&quot;results&quot;] = df_results.to_dict(orient='records') # outputs &quot;date&quot;: 2021-12-31&quot; in the json return_object[&quot;results&quot;] = json.loads(df_results.to_json(orient='records')) # outputs &quot;date&quot;: 1640908800000 in the json </code></pre>
<python><pandas><dataframe>
2023-02-04 10:53:09
1
3,203
rex
75,344,201
9,788,162
FastAPI Vercel deployment not starting app
<p>I can't get the API to run on Vercel, running locally works just fine.</p> <p>Since it's also my first time using Vercel, I can't seem to find logs. The build logs work fine (it's telling me 6 static pages, 0 of the rest), but when I go to the deployment page it serves me a <code>404</code> (not coming from my API).</p> <p>Port <code>8000</code> doesn't load anything at all.</p> <p><strong>Vercel.json</strong></p> <pre><code>{ &quot;version&quot;: 2, &quot;builds&quot;: [ { &quot;src&quot;: &quot;main.py&quot;, &quot;use&quot;: &quot;@vercel/python&quot; } ], &quot;routes&quot;: [ { &quot;src&quot;: &quot;/(.*)&quot;, &quot;dest&quot;: &quot;app/api.py&quot; } ] } </code></pre> <p><strong>main.py</strong></p> <pre><code>import uvicorn from os import getenv if __name__ == &quot;__main__&quot;: port = int(getenv(&quot;PORT&quot;, 8000)) uvicorn.run(&quot;app.api:app&quot;, host=&quot;0.0.0.0&quot;, port=port, reload=True) </code></pre> <p><strong>api.py</strong></p> <pre><code>from fastapi import FastAPI app = FastAPI() @app.get(&quot;/&quot;, tags=[&quot;Root&quot;]) async def read_root(): return {&quot;message&quot;: &quot;Welcome to this fantastic app!&quot;} </code></pre> <p><strong>Project structure</strong></p> <pre><code>app |- models/ |- routes/ |- __init__.py |- api.py |- database.py main.py requirements.txt Vercel.json </code></pre> <p>What am I missing here? I tried to deploy using the Vercel CLI with <code>Vercel .</code> and also by committing to my GitHub repo, which works. The API just doesn't seem to be running.</p>
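One hedged guess at the mismatch: the `builds` entry compiles `main.py`, but `routes` sends traffic to `app/api.py`, which was never built. A sketch of a consistent config pointing both at the same entry file (this assumes `main.py` also does `from app.api import app` so the ASGI `app` variable is importable from the built file; that detail is an assumption, not something confirmed by the question):

```json
{
  "version": 2,
  "builds": [{ "src": "main.py", "use": "@vercel/python" }],
  "routes": [{ "src": "/(.*)", "dest": "main.py" }]
}
```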
<python><deployment><fastapi><vercel><uvicorn>
2023-02-04 09:41:14
2
432
Dominic
75,344,021
3,682,549
How to access an object created inside a function from outside it
<p>I have the following function:</p> <pre><code>from __future__ import print_function, division from future.utils import iteritems from builtins import range, input # Note: you may need to update your version of future # sudo pip install -U future import numpy as np import matplotlib.pyplot as plt #from kmeans import plot_k_means, get_simple_data, cost def cost(X, R, M): cost = 0 for k in range(len(M)): # method 1 # for n in range(len(X)): # cost += R[n,k]*d(M[k], X[n]) # method 2 diff = X - M[k] sq_distances = (diff * diff).sum(axis=1) cost += (R[:,k] * sq_distances).sum() return cost def plot_k_means(X, K, max_iter=20, beta=3.0, show_plots=False): N, D = X.shape # R = np.zeros((N, K)) exponents = np.empty((N, K)) # initialize M to random initial_centers = np.random.choice(N, K, replace=False) M = X[initial_centers] costs = [] k = 0 for i in range(max_iter): k += 1 # step 1: determine assignments / resposibilities # is this inefficient? for k in range(K): for n in range(N): exponents[n,k] = np.exp(-beta*d(M[k], X[n])) R = exponents / exponents.sum(axis=1, keepdims=True) # step 2: recalculate means # decent vectorization # for k in range(K): # M[k] = R[:,k].dot(X) / R[:,k].sum() # oldM = M # full vectorization M = R.T.dot(X) / R.sum(axis=0, keepdims=True).T # print(&quot;diff M:&quot;, np.abs(M - oldM).sum()) c = cost(X, R, M) costs.append(c) if i &gt; 0: if np.abs(costs[-1] - costs[-2]) &lt; 1e-5: break if len(costs) &gt; 1: if costs[-1] &gt; costs[-2]: pass # print(&quot;cost increased!&quot;) # print(&quot;M:&quot;, M) # print(&quot;R.min:&quot;, R.min(), &quot;R.max:&quot;, R.max()) if show_plots: plt.plot(costs) plt.title(&quot;Costs&quot;) plt.show() random_colors = np.random.random((K, 3)) colors = R.dot(random_colors) plt.scatter(X[:,0], X[:,1], c=colors) plt.show() #print(&quot;Final cost&quot;, costs[-1]) final_cost = costs[-1] print(final_cost) return M, R, final_cost def get_simple_data(): # assume 3 means D = 2 # so we can visualize it more easily s = 
4 # separation so we can control how far apart the means are mu1 = np.array([0, 0]) mu2 = np.array([s, s]) mu3 = np.array([0, s]) N = 900 # number of samples X = np.zeros((N, D)) X[:300, :] = np.random.randn(300, D) + mu1 X[300:600, :] = np.random.randn(300, D) + mu2 X[600:, :] = np.random.randn(300, D) + mu3 return X def main(): X = get_simple_data() plt.scatter(X[:,0], X[:,1]) plt.show() costs = np.empty(10) costs[0] = None for k in range(1, 10): M, R, final_cost = plot_k_means(X, k, show_plots=False) c = cost(X, R, M) costs[k] = c plt.plot(costs) plt.title(&quot;Cost vs K&quot;) plt.show() print(costs) if __name__ == '__main__': main() </code></pre> <p>I want to access the <code>costs</code> array computed inside <code>main()</code> from outside the function, but if I just call <code>print(costs)</code> outside it, I do not get the correct values. How can I do this?</p>
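The usual fix is to have `main()` hand the object back with `return` and capture it at the call site; a minimal sketch with a stand-in body:

```python
def main():
    costs = [None, 4.2, 3.1]  # stand-in for the costs computed in the loop
    return costs

costs = main()                # the returned list is now visible out here
print(costs[1])  # 4.2
```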
<python><function>
2023-02-04 09:05:23
1
1,121
Nishant
75,343,916
2,706,344
Print full pandas index in Jupyter Notebook
<p>I have a pandas index with 380 elements and want to print the full index in Jupyter Notebook. I googled already but everything I've found did not help. For example this does not work:</p> <pre><code>with pd.option_context('display.max_rows', None, 'display.max_columns', None): print(my_index) </code></pre> <p>Neither this works:</p> <pre><code>with np.printoptions(threshold=np.inf): print(my_index.array) </code></pre> <p>In both cases only the first 10 and last 10 elements are shown. The elements in between are abbreviated by &quot;...&quot;.</p>
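For `Index` reprs the relevant option is `display.max_seq_items` (not `max_rows`/`max_columns`); a sketch:

```python
import pandas as pd

my_index = pd.Index([f"item{i}" for i in range(380)])

# None lifts the cap on how many sequence items a repr may show
with pd.option_context("display.max_seq_items", None):
    text = repr(my_index)

print("..." in text, "item379" in text)  # False True
```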
<python><pandas><indexing><printing>
2023-02-04 08:42:23
2
4,346
principal-ideal-domain
75,343,872
6,468,436
No autocompletion for other python class in PyCharm
<p>I am a beginner with Python. In this loop I am trying to use the methods of the variable &quot;data_point&quot;. Behind the variable &quot;data_point&quot; is a simple getter and setter class; nevertheless, the PyCharm autocompletion shows me only 2 methods instead of all of them. What do I have to do to see all methods of this class?<a href="https://i.sstatic.net/nGLC5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nGLC5.png" alt="enter image description here" /></a></p> <p>I have added the type, but the behavior is the same</p> <p><a href="https://i.sstatic.net/SA5j9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SA5j9.png" alt="enter image description here" /></a></p> <p>This is my model class with getter and setter <a href="https://i.sstatic.net/DBZQ1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DBZQ1.png" alt="enter image description here" /></a></p>
<python><autocomplete><pycharm>
2023-02-04 08:32:54
1
325
User1751
75,343,494
6,357,916
Empty `request.user.username` while handling a GET request
<p>I was trying out logging all URLs accessed by user along with user id and date time when it was accessed using django middleware as explained <a href="https://stackoverflow.com/questions/67081706/django-middleware-to-log-every-time-a-user-hits-a-page">here</a>.</p> <p>For some URLs it was not logging user id. I checked and found that the <code>request.user.username</code> was empty string. I checked views corresponding to those URL and found that those views did not have desired decorators. For example, I changed this:</p> <pre><code>def getXyz_forListView(request): # view body ... </code></pre> <p>to this:</p> <pre><code>@api_view(['GET']) @authentication_classes([TokenAuthentication,]) def getXyz_forListView(request): # view body ... </code></pre> <p>and it started working.</p> <p>However some views are created from classes:</p> <pre><code>class XyzView(View): def get(self, request): # view body ... </code></pre> <p>I added same decorators:</p> <pre><code>class XyzView(View): @api_view(['GET']) @authentication_classes([TokenAuthentication,]) def get(self, request): # view body ... </code></pre> <p>But it is still not working. What I am missing?</p> <p><strong>PS:</strong></p> <p>It is added to <code>urls.py</code> as follows:</p> <pre><code>urlpatterns = [ # ... url(r'^xyz/', XyzView.as_view(), name=&quot;xyz&quot;), ] </code></pre>
<python><django><django-rest-framework><django-views>
2023-02-04 06:58:44
1
3,029
MsA
75,343,091
12,331,179
JSON creation using a DataFrame
<p>We are using below dataframe to create json file</p> <p>Input file</p> <pre><code>import pandas as pd import numpy as np a1=[&quot;DA_STinf&quot;,&quot;DA_Stinf_NA&quot;,&quot;DA_Stinf_city&quot;,&quot;DA_Stinf_NA_ID&quot;,&quot;DA_Stinf_NA_ID_GRANT&quot;,&quot;DA_country&quot;] a2=[&quot;data.studentinfo&quot;,&quot;data.studentinfo.name&quot;,&quot;data.studentinfo.city&quot;,&quot;data.studentinfo.name.id&quot;,&quot;data.studentinfo.name.id.grant&quot;,&quot;data.country&quot;] a3=[np.NaN,np.NaN,&quot;StringType&quot;,np.NaN,&quot;BoolType&quot;,&quot;StringType&quot;] d1=pd.DataFrame(list(zip(a1,a2,a3)),columns=['data','action','datatype']) </code></pre> <p>We have to build below 2 structure using above dataframe in dynamic way we have fit above data in below format</p> <p>for schema e.g::</p> <pre><code>StructType([StructField(Column_name,Datatype,True)]) </code></pre> <p>for Data e.g::</p> <pre><code>F.struct(F.col(column_name)).alias(json_expected_name) </code></pre> <p>expected output structure for schema</p> <pre><code>StructType( [ StructField(&quot;data&quot;, StructType( [ StructField( &quot;studentinfo&quot;, StructType( [ StructField(&quot;city&quot;,StringType(),True), StructField(&quot;name&quot;,StructType( [ StructField(&quot;id&quot;, StructType( [ StructField(&quot;grant&quot;,BoolType(),True) ]) )] ) ) ] ) ), StructField(&quot;country&quot;,StringType(),True) ]) ) ]) </code></pre> <p>2)Expected data fetch</p> <pre><code>df.select( F.struct( F.struct( F.struct(F.col(&quot;DA_Stinf_city&quot;)).alias(&quot;city&quot;), F.struct( F.struct(F.col(&quot;DA_Stinf_NA_ID_GRANT&quot;)).alias(&quot;id&quot;) ).alias(&quot;name&quot;), ).alias(&quot;studentinfo&quot;), F.struct(F.col(&quot;DA_country&quot;)).alias(&quot;country&quot;) ).alias(&quot;data&quot;) ) </code></pre> <p>We have to use for loop and add these kind of entry in (data.studentinfo.name.id) data-&gt;studentinfo-&gt;name-&gt;id Which I have already add in expected output structure</p>
<python><json><pandas><for-loop><pyspark>
2023-02-04 05:05:44
1
386
Amol
75,342,979
9,206,667
Correctly Using Qmark (or Named) Style for SQL Queries
<p>I've got a a database filled with lots of data. Let's make it simple and say the schema looks like this:</p> <pre><code>CREATE TABLE foo ( col1 CHAR(25) PRIMARY KEY, col2 CHAR(2) NOT NULL, col3 CHAR(1) NOT NULL CONSTRAINT c_col2 (col2 = 'an' OR col2 = 'bx' OR col2 = 'zz') CONSTRAINT c_col3 (col3 = 'a' OR col3 = 'b' OR col3 = 'n') ) </code></pre> <p>There are lots of rows with lots of values, but let's say I've just done this:</p> <pre><code>cur.executemany('INSERT INTO foo VALUES(?, ?, ?)', [('xxx', 'bx', 'a'), ('yyy', 'bx', 'b'), ('zzz', 'an', 'b')]) </code></pre> <p>I have match lists for each of the values, and I want to return rows that match the UNION of all list values. For this question, assume no lists are empty.</p> <p>Say I have these match lists...</p> <pre><code>row2 = ['bx', 'zz'] # Consider all rows that have 'bx' OR 'zz' in row2 row3 = ['b'] # Consider all rows that have 'b' in row3 </code></pre> <p>I can build a text-based query correctly, using something like this..</p> <pre><code>s_row2 = 'row2 IN (' + ', '.join('&quot;{}&quot;'.format(x) for x in row2) + ')' s_row3 = 'row3 IN (' + ', '.join('&quot;{}&quot;'.format(x) for x in row3) + ')' query = 'SELECT col1 FROM foo WHERE ' + ' AND '.join([s_row2, s_row3]) for row in cur.execute(query): print(row) </code></pre> <ul> <li>Output should be just <code>yyy</code>.</li> <li><code>xxx</code> is NOT chosen because <code>col3</code> is <code>a</code> and not in the col3 match list.</li> <li><code>zzz</code> is NOT chosen because <code>col2</code> is <code>an</code> and not in the col2 match list.</li> </ul> <p>How would I do this using the safer <em>qmark</em> style, like my 'INSERT' above?</p> <p>edit: I just realized that I screwed up the notion of 'row' and 'col' here... Sorry for the confusion! I won't change it because it has perpetuated into the answer below...</p>
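The qmark version keeps the same string-building, but only for the *number* of placeholders (which is safe, since nothing but `?` characters is interpolated) and passes the values as parameters. A runnable sketch against the example data:

```python
import sqlite3

row2 = ["bx", "zz"]  # match list for col2
row3 = ["b"]         # match list for col3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (col1 TEXT PRIMARY KEY, col2 TEXT, col3 TEXT)")
conn.executemany("INSERT INTO foo VALUES (?, ?, ?)",
                 [("xxx", "bx", "a"), ("yyy", "bx", "b"), ("zzz", "an", "b")])

# one '?' per list element; the values themselves never touch the SQL text
s_col2 = "col2 IN ({})".format(", ".join("?" * len(row2)))
s_col3 = "col3 IN ({})".format(", ".join("?" * len(row3)))
query = "SELECT col1 FROM foo WHERE " + " AND ".join([s_col2, s_col3])

print(conn.execute(query, row2 + row3).fetchall())  # [('yyy',)]
```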
<python><python-3.x><sqlite>
2023-02-04 04:36:56
1
1,283
Lance E.T. Compte
75,342,909
6,653,602
UsePythonVersion@0 sets wrong Python versions in Azure Pipelines
<p>I added Python 3.9 to self hosted Agent toolcache successfully as shown in docs (<a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/use-python-version-v0?view=azure-pipelines&amp;viewFallbackFrom=azure-devops" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/use-python-version-v0?view=azure-pipelines&amp;viewFallbackFrom=azure-devops</a>)</p> <p>I am running the following pipeline:</p> <pre><code>trigger: none pool: name: self-hosted-agent demands: - Agent.Name -equals agentname - agent.name -equals agentname - Agent.OS -equals Linux steps: - task: UsePythonVersion@0 inputs: versionSpec: '3.9' addToPath: true - script: | python --version python3 --version python print('Hello!') - task: Bash@3 displayName: Run Python Code inputs: targetType: inline script: | python --version python print('Hello!') </code></pre> <p>I can see in the logs that <code>UsePythonVersion@0</code> can find this additionally installed python 3.9.1 version:</p> <pre><code>Starting: UsePythonVersion ============================================================================== Task : Use Python version Description : Use the specified version of Python from the tool cache, optionally adding it to the PATH Version : 0.214.0 Author : Microsoft Corporation Help : https://docs.microsoft.com/azure/devops/pipelines/tasks/tool/use-python-version ============================================================================== Found tool in cache: Python 3.9.1 x64 Prepending PATH environment variable with directory: /home/.../myagent/_work/_tool/Python/3.9.1/x64 Prepending PATH environment variable with directory: /home/.../myagent/_work/_tool/Python/3.9.1/x64/bin Finishing: UsePythonVersion </code></pre> <p>But when I check python version in script:</p> <pre><code>Starting: CmdLine ============================================================================== Task : Command line Description : Run a command line script 
using Bash on Linux and macOS and cmd.exe on Windows Version : 2.212.0 Author : Microsoft Corporation Help : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/command-line ============================================================================== Generating script. ========================== Starting Command Output =========================== /usr/bin/bash --noprofile --norc /home/.../myagent/_work/_temp/0225cb7f-9153-4b7c-ad42-321b8b9bfa97.sh Python 2.7.16 Python 3.7.3 </code></pre> <p>I get wrong versions (probably the ones that are installed globally in the Agent Pool machine) But as I understand <code>UsePythonVersion@0</code> task should change the version of python to the one defined in the task?</p>
<python><azure><azure-devops><azure-pipelines>
2023-02-04 04:14:42
0
3,918
Alex T
75,342,857
4,883,320
three dots aka ellipsis in import statement before package name
<p>I would like to know the purpose of a three-dots symbol aka ellipsis put before the package name in an import statement.</p> <p>Examples:</p> <p><a href="https://github.com/scikit-learn/scikit-learn/blob/b22f7fa552c03aa7f6b9b4d661470d0173f8db5d/sklearn/metrics/_plot/det_curve.py" rel="nofollow noreferrer">sklearn/metrics/_plot/det_curve.py</a>:</p> <pre class="lang-py prettyprint-override"><code>import scipy as sp from .base import _get_response from .. import det_curve from .._base import _check_pos_label_consistency from ...utils import check_matplotlib_support </code></pre> <p>or <a href="https://github.com/danielgtaylor/python-betterproto/issues/178" rel="nofollow noreferrer">this Github issue</a>, where they call it an &quot;ellipsis import&quot; and where the three dots are followed by a space:</p> <pre class="lang-py prettyprint-override"><code>from ... import MyContentContentType as __MyContentContentType__ from ... import MyDetectionDetectionGender as __MyDetectionDetectionGender__ </code></pre>
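For context: these are relative imports (PEP 328), not an `Ellipsis` — each leading dot climbs one package level, so `...` means "the grandparent package". A runnable sketch (the package names here are hypothetical, built on disk just to demonstrate the resolution):

```python
import os
import sys
import tempfile

# Build a throwaway package tree to show what each dot count resolves to.
root = tempfile.mkdtemp()
files = {
    "pkg/__init__.py": "",
    "pkg/utils.py": "def check():\n    return 'pkg.utils.check'\n",
    "pkg/sub/__init__.py": "",
    "pkg/sub/subsub/__init__.py": "",
    # One dot = current package (pkg.sub.subsub), two = parent (pkg.sub),
    # three = grandparent (pkg), so ...utils is pkg.utils.
    "pkg/sub/subsub/mod.py": "from ...utils import check\nRESULT = check()\n",
}
for rel, body in files.items():
    full = os.path.join(root, rel)
    os.makedirs(os.path.dirname(full), exist_ok=True)
    with open(full, "w") as f:
        f.write(body)

sys.path.insert(0, root)
from pkg.sub.subsub import mod
print(mod.RESULT)  # pkg.utils.check
```

The space in `from ... import Name` is the same construct: import names directly from the grandparent package itself.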
<python>
2023-02-04 04:00:25
1
1,293
KiriSakow
75,342,640
13,775,586
Python recv() doesn't wait for client response
<p>I'm trying to set up a communication via socket between a PHP page (client) and a Python script (server). The PHP page has a button that, when clicked, sends &quot;next&quot; to the server. This part works but the problem happens when I refresh the page. In this situation I'm not writing anything to the server, yet, the function <code>recv()</code> of my server seems to receive something (an empty string) because the next lines are executed. Can someone tell me what's going on ?</p> <p><code>client.php</code></p> <pre><code>&lt;?php $host = '127.0.0.1'; $port = 5353; $socket = socket_create(AF_INET, SOCK_STREAM, 0) or die('Could not create socket\n'); $result = socket_connect($socket, $host, $port) or die('Could not connect to server\n'); if(isset($_POST['btnNext'])) { $msg_to_server = 'next'; socket_write($socket, $msg_to_server, strlen($msg_to_server)) or die('Could not send data to server\n'); $msg_from_server = socket_read($socket, 1024) or die('Could not read server response\n'); echo 'Server said : ' . $msg_from_server; } ?&gt; &lt;form action='' method='POST' &gt; &lt;button name='btnNext' type='submit'&gt;Next&lt;/button&gt; &lt;/form&gt; </code></pre> <p><code>server.py</code></p> <pre><code>import socket host = '127.0.0.1' port = 5353 server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server_socket.bind((host, port)) server_socket.listen(5) while True: client_socket, addr = server_socket.accept() # doesn't wait for the client response : msg_from_client = client_socket.recv(5000).decode() print('Client said : ' + msg_from_client) </code></pre>
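What is happening: on refresh, the PHP script still runs `socket_connect` at the top, so a TCP connection is opened and then closed without any write. `recv()` returns immediately with `b''` on an orderly close — it only blocks while the peer keeps the connection open without sending. A hedged sketch of the server-side fix is to treat an empty read as a disconnect:

```python
import socket

def handle(server_socket):
    # Accept one client; read until it sends data or disconnects.
    client_socket, addr = server_socket.accept()
    data = client_socket.recv(5000)
    if not data:                       # b'' => peer closed without sending
        client_socket.close()
        return None
    msg = data.decode()
    client_socket.sendall(b"got it")   # reply so socket_read() succeeds
    client_socket.close()
    return msg
```

The accept/recv loop from the question would call this per iteration and simply `continue` when `None` comes back.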
<python><php><sockets><websocket>
2023-02-04 02:50:46
1
663
Raphaël Goisque
75,342,402
19,716,381
Load images into tensorflow from a single directory, with classes specified in a csv file
<p>Let's say I have a single directory <code>data</code>, which has pictures of both cats and dogs and a separate csv file <code>labels.csv</code>, which has the names of the files in the directory and it's labels. How can I load this image dataset into tensorflow?</p> <p>csv:</p> <pre><code>| filename | label | |__________|_________| | a.png | cat | | b.png | dog | |__________|_________| </code></pre> <p>Most of the image classification tutorials in keras' website or tensorflow's website use the <code>tf.keras.utils.image_dataset_from_directory</code>, but it needs images to be in separate folders.</p>
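One common approach (a sketch, using the hypothetical `data/` and `labels.csv` names from the question): parse the csv into parallel lists of file paths and integer labels, then hand those to `tf.data.Dataset.from_tensor_slices((paths, labels))` with a `map` that does `tf.io.read_file` + `tf.image.decode_png`. The csv half is plain Python:

```python
import csv
import os

def paths_and_labels(csv_path, image_dir):
    # Returns (file_paths, int_labels, class_names), ready to feed into
    # tf.data.Dataset.from_tensor_slices((file_paths, int_labels)).
    paths, names = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            paths.append(os.path.join(image_dir, row["filename"]))
            names.append(row["label"])
    classes = sorted(set(names))
    index = {c: i for i, c in enumerate(classes)}
    return paths, [index[n] for n in names], classes
```

This sidesteps `image_dataset_from_directory`'s one-folder-per-class requirement entirely.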
<python><tensorflow><keras>
2023-02-04 01:23:07
1
484
berinaniesh
75,342,312
20,959,773
Find pattern of elements between multiple lists
<p>Take these lists:</p> <pre><code>[545, 766, 1015] [546, 1325, 2188, 5013] [364, 374, 379, 384, 385, 386, 468, 496, 497, 547] </code></pre> <p>My actual pattern is just finding numbers that are ascending by one, only one from each list <code>(For this example, would be 545, 546 and 547).</code> After actually finding this exists, I want the first number in the sequence to be returned to me so I can continue with my other calculations.</p> <p>Preferred code execution example:</p> <pre><code>&gt;&gt;&gt;[12, 64, 135, 23] # input lists &gt;&gt;&gt;[84, 99, 65] &gt;&gt;&gt;[66, 234, 7, 43, 68] &gt;&gt;&gt;[64, 65, 66] # found numbers by said pattern. &gt;&gt;&gt;return 64 </code></pre> <p>I have not thought of any function worthy to be written ...</p>
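A sketch of one possible answer: convert every list after the first into a set, then for each candidate start value check that each subsequent list contains the next consecutive integer.

```python
def find_consecutive_run(lists):
    # For each candidate start in the first list, check that list i+1
    # contains start + i + 1; return the first start that works.
    sets = [set(l) for l in lists[1:]]
    for start in lists[0]:
        if all(start + i + 1 in s for i, s in enumerate(sets)):
            return start
    return None
```

Membership tests on sets make this O(total elements) rather than a nested scan.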
<python><list><math><logic><pattern-matching>
2023-02-04 00:54:46
2
347
RifloSnake
75,342,160
12,470,058
Partition of a list of integers into K sublists with equal sum
<p>Similar questions are <a href="https://stackoverflow.com/questions/27322804/partition-of-a-set-into-k-disjoint-subsets-with-equal-sum">1</a> and <a href="https://stackoverflow.com/questions/47557812/partition-a-set-into-k-subsets-with-equal-sum">2</a> but the answers didn't help. Assume we have a list of integers. We want to find <code>K</code> disjoint lists such that they completely cover the given list and all have the same sum. For example, if <code>A = [4, 3, 5, 6, 4, 3, 1]</code> and <code>K = 2</code> then the answer should be:</p> <pre><code>[[3, 4, 6], [1, 3, 4, 5]] or [[4, 4, 5], [1, 3, 3, 6]] </code></pre> <p>I have written a code that only works when <code>K = 2</code> and it works fine with small lists as input but with very larger lists, because of the code's high complexity, OS terminates the task. My code is:</p> <pre><code>def subarrays_equal_sum(l): from itertools import combinations if len(l) &lt; 2 or sum(l) % 2 != 0: return [] l = sorted(l) list_sum = sum(l) all_combinations = [] for i in range(1, len(l)): all_combinations += (list(combinations(l, i))) combinations_list = [i for i in all_combinations if sum(i) == list_sum / 2] if not combinations_list: return [] final_result = [] for i in range(len(combinations_list)): for j in range(i + 1, len(combinations_list)): first = combinations_list[i] second = combinations_list[j] concat = sorted(first + second) if concat == l and [list(first), list(second)] not in final_result: final_result.append([list(first), list(second)]) return final_result </code></pre> <p>An answer for any value of <code>K</code> is available <a href="https://www.techiedelight.com/k-partition-problem-print-all-subsets/" rel="nofollow noreferrer">here</a>. 
But if we pass the arguments <code>A = [4, 3, 5, 6, 4, 3, 1]</code> and <code>K = 2</code>, their code only returns <code>[[5, 4, 3, 1],[4, 3, 6]]</code> whereas my code returns all possible lists i.e.,</p> <p><code>[[[3, 4, 6], [1, 3, 4, 5]], [[4, 4, 5], [1, 3, 3, 6]]]</code></p> <p>My questions are:</p> <ol> <li>How to improve the complexity and cost of my code?</li> <li>How to make my code work with any value of <code>k</code>?</li> </ol>
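On question 1: enumerating every combination of every size is what blows up. A standard backtracking formulation finds *a* partition for any `k` far faster (enumerating *all* distinct partitions is inherently exponential). A hedged sketch:

```python
def k_partition(nums, k):
    # Backtracking: place each number (largest first) into one of k
    # buckets, pruning buckets that would overflow the target sum and
    # buckets whose current sum duplicates one already tried at this step.
    total = sum(nums)
    if k <= 0 or total % k:
        return None
    target = total // k
    nums = sorted(nums, reverse=True)
    if nums and nums[0] > target:
        return None
    buckets, sums = [[] for _ in range(k)], [0] * k

    def place(i):
        if i == len(nums):
            return True
        tried = set()
        for j in range(k):
            if sums[j] + nums[i] <= target and sums[j] not in tried:
                tried.add(sums[j])
                buckets[j].append(nums[i])
                sums[j] += nums[i]
                if place(i + 1):
                    return True
                sums[j] -= nums[i]
                buckets[j].pop()
        return False

    return buckets if place(0) else None
```

Since all numbers get placed and no bucket exceeds `target`, every bucket ends at exactly `target` when `place` succeeds.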
<python><python-3.x><algorithm><combinations>
2023-02-04 00:14:57
2
368
Bsh
75,341,949
14,790,056
groupby and apply multiple conditions
<p>This is a bit complicated but I will try to explain as best as I can.</p> <p>i have the following dataframe.</p> <pre><code> transaction_hash block_timestamp from_address to_address value data token_address 1 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0xe5d84152dd961e2eb0d6c202cf3396f579974983 0x1111111254eeb25477b68fb85ed929f73a960582 1.052e+20 trace ETH 2 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0x1111111254eeb25477b68fb85ed929f73a960582 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 1.052e+20 trace ETH 3 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0x1111111254eeb25477b68fb85ed929f73a960582 0xe5d84152dd961e2eb0d6c202cf3396f579974983 1.0652365814992255e+20 transfer stETH 4 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 0x1111111254eeb25477b68fb85ed929f73a960582 1.0652365814992255e+20 transfer stETH 5 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0x7f39c581f595b53c5cb19bd0b3f8da6c935e2ca0 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 6.391691160717606e+19 transfer stETH 6 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0xdc24316b9ae028f1497c275eb9192a3ea0f67022 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 4.260674654274649e+19 transfer stETH 7 0x00a0ff958f99fabe8a6bde12304436ed6c43524d1ab12bced426abf3a507d939 2023-01-04 07:34:47+00:00 0x1111111254eeb25477b68fb85ed929f73a960582 0xcb62961daac29b79ebac9a30e142da0e8ba8ead6 1.401493579633375e+20 trace ETH 8 0x00a0ff958f99fabe8a6bde12304436ed6c43524d1ab12bced426abf3a507d939 2023-01-04 07:34:47+00:00 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 0x1111111254eeb25477b68fb85ed929f73a960582 1.401493579633375e+20 trace ETH 9 0x00a0ff958f99fabe8a6bde12304436ed6c43524d1ab12bced426abf3a507d939 2023-01-04 
07:34:47+00:00 0xcb62961daac29b79ebac9a30e142da0e8ba8ead6 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 1.419e+20 transfer stETH 10 0x00a0ff958f99fabe8a6bde12304436ed6c43524d1ab12bced426abf3a507d939 2023-01-04 07:34:47+00:00 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 0x7f39c581f595b53c5cb19bd0b3f8da6c935e2ca0 4.257e+19 transfer stETH 11 0x00a0ff958f99fabe8a6bde12304436ed6c43524d1ab12bced426abf3a507d939 2023-01-04 07:34:47+00:00 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 0xdc24316b9ae028f1497c275eb9192a3ea0f67022 9.933e+19 transfer stETH </code></pre> <p>These 11 transfers represent two swap transactions: 2 unique hash) between ETH and stETH. It'd be very nice if there was two clean transactions with one ETH going from A to B, and the other one stETH going from B to A. But, in decentralized exchanges, things work via routers which send multiple transactions through various addresses to complete one swap transaction.</p> <p>Here, there are two transactions (as you can see, within one transaction (hash) made up of several transfers). I want to verify which ETH amounts corresponds to stETH. ETH and stETH prices are almost 1:1 so they should be quite close in value.</p> <p>So from the first transaction (1-6), there is one ETH value (1.052e+20) but three different stETH value (1.0652365814992255e+20, 6.391691160717606e+19, and 4.260674654274649e+19). 
Obviously, it is clear that the pair that corresponds to the 1.052e+20 ETH swap is 1.0652365814992255e+20 stETH as it is the closest in value.</p> <p>So, in order to filter out the right pair, I want to groupby <code>transaction_hash</code>, if there are more than 1 unique value of stETH, then I want to pick out the one that is closest to the ETH value.</p> <p>so, the desired output will be:</p> <pre><code> transaction_hash block_timestamp from_address to_address value data token_address 1 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0xe5d84152dd961e2eb0d6c202cf3396f579974983 0x1111111254eeb25477b68fb85ed929f73a960582 1.052e+20 trace ETH 2 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0x1111111254eeb25477b68fb85ed929f73a960582 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 1.052e+20 trace ETH 3 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0x1111111254eeb25477b68fb85ed929f73a960582 0xe5d84152dd961e2eb0d6c202cf3396f579974983 1.0652365814992255e+20 transfer stETH 4 0x00685b3aecf64de61bca7a7c7068c17879bb2a2f3ebfe65d4b9421b40ac63952 2023-01-02 03:12:59+00:00 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 0x1111111254eeb25477b68fb85ed929f73a960582 1.0652365814992255e+20 transfer stETH 5 0x00a0ff958f99fabe8a6bde12304436ed6c43524d1ab12bced426abf3a507d939 2023-01-04 07:34:47+00:00 0x1111111254eeb25477b68fb85ed929f73a960582 0xcb62961daac29b79ebac9a30e142da0e8ba8ead6 1.401493579633375e+20 trace ETH 6 0x00a0ff958f99fabe8a6bde12304436ed6c43524d1ab12bced426abf3a507d939 2023-01-04 07:34:47+00:00 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 0x1111111254eeb25477b68fb85ed929f73a960582 1.401493579633375e+20 trace ETH 7 0x00a0ff958f99fabe8a6bde12304436ed6c43524d1ab12bced426abf3a507d939 2023-01-04 07:34:47+00:00 0xcb62961daac29b79ebac9a30e142da0e8ba8ead6 0x53222470cdcfb8081c0e3a50fd106f0d69e63f20 1.419e+20 transfer stETH </code></pre> <p>Thanks!</p> 
<p>EDIT!</p> <p>I applied the code suggested below. Something strange is happening. My original df has a transaction like this</p> <pre><code> transaction_hash block_timestamp from_address to_address value data token_address 60347 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49 UTC 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 0x558247e365be655f9144e1a0140d793984372ef3 6917030000000000.0 trace ETH 61076 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49 UTC 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 0xb1720612d0131839dc489fcf20398ea925282fca 1220650000000000.0 trace ETH 399307 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49 UTC 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 0x6d02e95909da8da09865a26b62055bd6a1d5f706 8.4846e+17 trace ETH 30155 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49+00:00 0x6d02e95909da8da09865a26b62055bd6a1d5f706 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 8.800918863842962e+17 transfer stETH 625132 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49+00:00 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 0x4028daac072e492d34a3afdbef0ba7e35d8b55c4 8.800918863842962e+17 transfer stETH </code></pre> <p>when I apply the code, instead of getting what I want which is..</p> <pre><code>399307 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49 UTC 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 0x6d02e95909da8da09865a26b62055bd6a1d5f706 8.4846e+17 trace ETH 30155 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49+00:00 0x6d02e95909da8da09865a26b62055bd6a1d5f706 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 8.800918863842962e+17 transfer stETH 625132 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49+00:00 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 
0x4028daac072e492d34a3afdbef0ba7e35d8b55c4 8.800918863842962e+17 transfer stETH </code></pre> <p>I get more rows! and doesn't even remove the rows</p> <pre><code> transaction_hash block_timestamp from_address to_address value data token_address 30155 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49+00:00 0x6d02e95909da8da09865a26b62055bd6a1d5f706 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 8.800918863842962e+17 transfer stETH 30155 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49+00:00 0x6d02e95909da8da09865a26b62055bd6a1d5f706 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 8.800918863842962e+17 transfer stETH 30155 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49+00:00 0x6d02e95909da8da09865a26b62055bd6a1d5f706 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 8.800918863842962e+17 transfer stETH 60347 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49 UTC 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 0x558247e365be655f9144e1a0140d793984372ef3 6917030000000000.0 trace ETH 61076 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49 UTC 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 0xb1720612d0131839dc489fcf20398ea925282fca 1220650000000000.0 trace ETH 399307 0x001d443681cebc7d9520b19bbd7b4d2ac090c366cb6a2541f46573035a1d5947 2022-05-26 09:57:49 UTC 0xdef171fe48cf0115b1d80b88dc8eab59176fee57 0x6d02e95909da8da09865a26b62055bd6a1d5f706 8.4846e+17 trace ETH </code></pre> <p>what is going on?</p>
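The core per-group logic, stripped of pandas (a plain-Python sketch with a toy schema — `hash`, `token_address`, `value`; in pandas the same idea runs inside `df.groupby("transaction_hash")`, and the duplicated-rows symptom in the edit typically comes from concatenating group results back without dropping the originals):

```python
from collections import defaultdict

def filter_closest(rows):
    # rows: dicts with 'hash', 'token_address', 'value' (toy schema).
    groups = defaultdict(list)
    for r in rows:
        groups[r["hash"]].append(r)
    kept = []
    for rs in groups.values():
        eth = [r for r in rs if r["token_address"] == "ETH"]
        steth = [r for r in rs if r["token_address"] == "stETH"]
        if not eth or len({r["value"] for r in steth}) <= 1:
            kept.extend(rs)            # nothing to disambiguate
            continue
        ref = eth[0]["value"]
        # Keep only the stETH rows whose value is closest to the ETH leg.
        best = min({r["value"] for r in steth}, key=lambda v: abs(v - ref))
        kept.extend(eth + [r for r in steth if r["value"] == best])
    return kept
```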
<python><pandas><dataframe>
2023-02-03 23:28:58
1
654
Olive
75,341,900
1,438,082
Query Nested JSON Data
<p>I am trying to query using the code below to get the value etoday but it does not return a result.</p> <pre><code> Result = json.loads(json_string) # Next Line works perfect print(&quot;The value of msg&quot;, &quot;msg&quot;, &quot;is: &quot;, Result[&quot;msg&quot;]) #Next Line does not (I have a few variations here that I tried print(Result['data']['page']['records']['etoday']) print(Result[&quot;data&quot;][0][&quot;page&quot;][0][&quot;etoday&quot;]) print(Result['page']['records'][0]['etoday']) Print(&quot;The value of&quot;, &quot;etoday&quot;, &quot;is: &quot;, Result[&quot;etoday&quot;]) </code></pre> <p>File I am querying</p> <pre><code>{ &quot;success&quot;: true, &quot;code&quot;: &quot;0&quot;, &quot;msg&quot;: &quot;success&quot;, &quot;data&quot;: { &quot;inverterStatusVo&quot;: { &quot;all&quot;: 1, &quot;normal&quot;: 1, &quot;fault&quot;: 0, &quot;offline&quot;: 0, &quot;mppt&quot;: 0 }, &quot;page&quot;: { &quot;records&quot;: [ { &quot;id&quot;: &quot;zzz&quot;, &quot;sn&quot;: &quot;xxx&quot;, &quot;collectorSn&quot;: &quot;vvv&quot;, &quot;userId&quot;: &quot;ttt&quot;, &quot;productModel&quot;: &quot;3105&quot;, &quot;nationalStandards&quot;: &quot;68&quot;, &quot;inverterSoftwareVersion&quot;: &quot;3d0037&quot;, &quot;dcInputType&quot;: 2, &quot;acOutputType&quot;: 0, &quot;stationType&quot;: 0, &quot;stationId&quot;: &quot;1298491919448828419&quot;, &quot;rs485ComAddr&quot;: &quot;101&quot;, &quot;simFlowState&quot;: -1, &quot;power&quot;: 6.000, &quot;powerStr&quot;: &quot;kW&quot;, &quot;pac&quot;: 0.001, &quot;pac1&quot;: 0, &quot;pacStr&quot;: &quot;kW&quot;, &quot;state&quot;: 1, &quot;stateExceptionFlag&quot;: 0, &quot;fullHour&quot;: 1.82, &quot;totalFullHour&quot;: 191.83, &quot;maxDcBusTime&quot;: &quot;1675460767377&quot;, &quot;maxUac&quot;: 251.5, &quot;maxUacTime&quot;: &quot;1664022228160&quot;, &quot;maxUpv&quot;: 366.3, &quot;maxUpvTime&quot;: &quot;1663309960961&quot;, &quot;timeZone&quot;: 0.00, 
&quot;timeZoneStr&quot;: &quot;(UTC+00:00)&quot;, &quot;timeZoneName&quot;: &quot;(UTC+00:00)Europe/Dublin&quot;, &quot;dataTimestamp&quot;: &quot;1675460767377&quot;, &quot;dataTimestampStr&quot;: &quot;2023-02-03 21:46:07 (UTC+00:00)&quot;, &quot;fisTime&quot;: &quot;1663189116371&quot;, &quot;inverterMeterModel&quot;: 5, &quot;updateShelfBeginTime&quot;: 1671292800000, &quot;updateShelfEndTime&quot;: 1829059200000, &quot;updateShelfEndTimeStr&quot;: &quot;2027-12-18&quot;, &quot;updateShelfTime&quot;: &quot;5&quot;, &quot;collectorId&quot;: &quot;1306858901387600551&quot;, &quot;dispersionRate&quot;: 0.0, &quot;currentState&quot;: &quot;3&quot;, &quot;pow1&quot;: 0.0, &quot;pow2&quot;: 0.0, &quot;pow3&quot;: 0.0, &quot;pow4&quot;: 0.0, &quot;pow5&quot;: 0.0, &quot;pow6&quot;: 0.0, &quot;pow7&quot;: 0.0, &quot;pow8&quot;: 0.0, &quot;pow9&quot;: 0.0, &quot;pow10&quot;: 0.0, &quot;pow11&quot;: 0.0, &quot;pow12&quot;: 0.0, &quot;pow13&quot;: 0.0, &quot;pow14&quot;: 0.0, &quot;pow15&quot;: 0.0, &quot;pow16&quot;: 0.0, &quot;pow17&quot;: 0.0, &quot;pow18&quot;: 0.0, &quot;pow19&quot;: 0.0, &quot;pow20&quot;: 0.0, &quot;pow21&quot;: 0.0, &quot;pow22&quot;: 0.0, &quot;pow23&quot;: 0.0, &quot;pow24&quot;: 0.0, &quot;pow25&quot;: 0.0, &quot;pow26&quot;: 0.0, &quot;pow27&quot;: 0.0, &quot;pow28&quot;: 0.0, &quot;pow29&quot;: 0.0, &quot;pow30&quot;: 0.0, &quot;pow31&quot;: 0.0, &quot;pow32&quot;: 0.0, &quot;gridPurchasedTodayEnergy&quot;: 3.800, &quot;gridPurchasedTodayEnergyStr&quot;: &quot;kWh&quot;, &quot;gridSellTodayEnergy&quot;: 2.400, &quot;gridSellTodayEnergyStr&quot;: &quot;kWh&quot;, &quot;psumCalPec&quot;: &quot;1&quot;, &quot;batteryPower&quot;: 0.099, &quot;batteryPowerStr&quot;: &quot;kW&quot;, &quot;batteryPowerPec&quot;: &quot;1&quot;, &quot;batteryCapacitySoc&quot;: 20.000, &quot;parallelStatus&quot;: 0, &quot;parallelAddr&quot;: 0, &quot;parallelPhase&quot;: 0, &quot;parallelBattery&quot;: 0, &quot;batteryTodayChargeEnergy&quot;: 4.800, 
&quot;batteryTodayChargeEnergyStr&quot;: &quot;kWh&quot;, &quot;batteryTotalChargeEnergy&quot;: 449.000, &quot;batteryTotalChargeEnergyStr&quot;: &quot;kWh&quot;, &quot;batteryTodayDischargeEnergy&quot;: 5.500, &quot;batteryTodayDischargeEnergyStr&quot;: &quot;kWh&quot;, &quot;batteryTotalDischargeEnergy&quot;: 627.000, &quot;batteryTotalDischargeEnergyStr&quot;: &quot;kWh&quot;, &quot;bypassLoadPower&quot;: 0.000, &quot;bypassLoadPowerStr&quot;: &quot;kW&quot;, &quot;backupTodayEnergy&quot;: 0.000, &quot;backupTodayEnergyStr&quot;: &quot;kWh&quot;, &quot;backupTotalEnergy&quot;: 0.000, &quot;backupTotalEnergyStr&quot;: &quot;kWh&quot;, &quot;etotal&quot;: 1.153, &quot;etoday&quot;: 10.900, &quot;psum&quot;: -1.756, &quot;psumCal&quot;: -1.756, &quot;etotal1&quot;: 1153.000, &quot;etoday1&quot;: 10.900000, &quot;offlineLongStr&quot;: &quot;--&quot;, &quot;etotalStr&quot;: &quot;MWh&quot;, &quot;etodayStr&quot;: &quot;kWh&quot;, &quot;psumStr&quot;: &quot;kW&quot;, &quot;psumCalStr&quot;: &quot;kW&quot; } ], &quot;total&quot;: 1, &quot;size&quot;: 20, &quot;current&quot;: 1, &quot;orders&quot;: [ ], &quot;optimizeCountSql&quot;: false, &quot;searchCount&quot;: true, &quot;pages&quot;: 1 }, &quot;mpptSwitch&quot;: 0 } } </code></pre>
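The missing piece in each attempted path is that `records` is a *list*, so it needs an index before the field name: `Result["data"]["page"]["records"][0]["etoday"]`. A sketch:

```python
import json

def etoday(json_string):
    result = json.loads(json_string)
    # data -> page -> records is a list of record objects;
    # take the first one, then read its field.
    return result["data"]["page"]["records"][0]["etoday"]

# When more than one inverter can be returned, loop instead:
#   for rec in result["data"]["page"]["records"]:
#       print(rec["etoday"])
```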
<python><json><dictionary>
2023-02-03 23:17:41
2
2,778
user1438082
75,341,867
1,475,962
Regex question to have substring match given strings
<p>I want regex to combine</p> <pre><code>&quot;.*SimpleTaskv9MoreDetails.*&quot; </code></pre> <p>or</p> <pre><code>&quot;.*SimpleTaskv10MoreDetails.*&quot; </code></pre> <p>How can I create regex to match both of them? I know that below one matches v8 and v9</p> <pre><code>&quot;.*SimpleTaskv[89]MoreDetails.*&quot; </code></pre> <p>But if I want both v9 and v10 to be accepted? How do I do it?</p>
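A character class like `[89]` only matches a single character, so multi-digit versions need a non-capturing alternation:

```python
import re

# (?:10|9) — longer alternative first is a good habit, so "v10" is never
# half-consumed as "v1". For v8, v9 and v10 use v(?:10|[89]).
pattern = re.compile(r".*SimpleTaskv(?:10|9)MoreDetails.*")
```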
<python><regex>
2023-02-03 23:11:39
2
5,048
raju
75,341,854
728,438
How to zip a list with a list of tuples?
<p>Say I have the following lists:</p> <pre><code>list_a = [1, 4, 7] list_b = [(2, 3), (5, 6), (8, 9)] </code></pre> <p>How do I combine them so that it becomes</p> <pre><code>[(1, 2, 3), (4, 5, 6), (7, 8, 9)] </code></pre>
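One concise way: zip the lists and unpack each tuple into the new one with `*`:

```python
list_a = [1, 4, 7]
list_b = [(2, 3), (5, 6), (8, 9)]

# (a, *b) splices the tuple's elements in alongside the scalar.
combined = [(a, *b) for a, b in zip(list_a, list_b)]
print(combined)  # [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
```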
<python><list>
2023-02-03 23:09:40
2
399
Ian Low
75,341,818
11,649,973
Correct way to add image-related info in Wagtail
<p>I am setting up a simple site with the main purpose of having galleries of images. Using collections to group images and display them on a specific page is how I set it up first. However I am not sure what the recommended way is to add a description.</p> <p>My options I have tried are:</p> <ul> <li><a href="https://docs.coderedcorp.com/wagtail-crx/advanced/advanced02.html" rel="nofollow noreferrer">Custom page</a> This seems as the way to handle it, however I lose the nice feature of using collections to generate pages. This would mean I separate info of a image into a page model, fragmenting the image from the data. I see that creating a custom image is <a href="https://github.com/spapas/wagtail-faq" rel="nofollow noreferrer">recommended</a>, but I doubt it's for this purpose.</li> <li><a href="https://docs.wagtail.org/en/stable/advanced_topics/images/custom_image_model.html" rel="nofollow noreferrer">Custom image model</a> This allows me to add a description field completely replaces the image model since you can only have 1 image model afaik. In a webshop like site this seems doesn't seem suitable (price, reviews, stock etc.) but since this site focuses on the image specific it seems this kind of coupling is ok.</li> </ul> <p>Is the first option the way to handle this? Or how can I set this up so that a designer/user can tie data to a image without losing the base image or allow the user to select a 'collection' with the child pages resolving a description/model related to a image.</p> <p>Any tips, references or alternatives would be appreciated.</p> <p>ps: I just realized this boils down to not knowing how wagtail/django does mapping db/pojo to viewmodels.</p>
<python><django><wagtail>
2023-02-03 23:02:01
1
425
GreatGaja
75,341,446
10,007,302
Error trying to use a prepared statement in mysql.connector to update a database
<p>I'm fairly new to databases/python and i've read that you're supposed to use prepared statements instead of string formatting</p> <p>I had</p> <pre><code>conn = mysql.connector.connect(user=user, password=pw, host=host, database=db) cursor = conn.cursor(prepared=True) cursor.execute(f&quot;DESCRIBE {table_name}&quot;) fetch = cursor.fetchall() table_columns = [col[0] for col in fetch] </code></pre> <p>where table_name was the name of my sql table. I'm attampting to change it to the code below</p> <pre><code># Get the names of all the columns in the mySQL table statement = &quot;DESCRIBE %s&quot; cursor.execute(statement, (table_name,)) fetch = cursor.fetchall() table_columns = [col[0] for col in fetch] </code></pre> <p>but I keep getting the following error:</p> <pre><code>mysql.connector.errors.InterfaceError: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?' at line 1 </code></pre>
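The underlying rule: prepared-statement placeholders stand in for *values*, not *identifiers* — `DESCRIBE %s` sends the table name as a quoted string, which is a syntax error. The usual workaround is to validate the identifier yourself and interpolate it, keeping placeholders for the actual values. A sketch (the allow-list check is the important part):

```python
import re

def safe_identifier(name):
    # Allow only plain identifiers; anything else is rejected outright
    # rather than escaped, which keeps the format string injection-free.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"illegal table name: {name!r}")
    return name

# Identifier: validated then formatted (backticks guard reserved words):
#   cursor.execute(f"DESCRIBE `{safe_identifier(table_name)}`")
# Value: a real placeholder as usual:
#   cursor.execute("SELECT * FROM t WHERE id = %s", (user_id,))
```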
<python><mysql><mysql-connector>
2023-02-03 22:05:25
0
1,281
novawaly
75,341,427
1,932,930
Enum for bitwise operations with user readable strings table
<p>I am looking for an efficient and maintainable way to create a table in Python that can be used to look up user readable strings for enumeration values.</p> <p>Constraints:</p> <ul> <li>I want it to work with an enumeration that supports bitwise operations. For example: passing in a value of enumeration values that has been bitmasked together will return a list of strings for each bitmasked value.</li> <li>I want the user readable strings to be translated from the enumeration value names so I don't have to maintain a table that has to be updated every time the enumeration is modified.</li> <li>I want it to be efficient. For example, I don't want a static function that will do the conversion every time it's called. I want to create a static table that is initialized once with the strings. For Example, I want to create a static <code>dict</code> that looks like this: <code>{Privileges.CanAddPost: &quot;can add post&quot;, Privileges.CanDeletePost: &quot;can delete post&quot;, ...}</code></li> </ul> <pre><code>from enum import IntFlag, unique @unique class Privileges(IntFlag): &quot;&quot;&quot;Privileges enum that supports bitwise operations&quot;&quot;&quot; NoPrivileges = 0 CanAddPost = 1 CanDeletePost = 2 CanBanUser = 4 CanResetPasswords = 8 CanModerateDiscussions = 16 CanSuspendAccounts = 32 All = CanAddPost | CanDeletePost | CanBanUser |\ CanResetPasswords | CanModerateDiscussions | CanSuspendAccounts #Instantiate the static variable Privileges.strings_map = ... # How do initialize this table? </code></pre>
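A sketch meeting all three constraints: derive the readable string from each member's CamelCase name, build the dict once at import time, and decompose a bitmasked value by testing the single-bit members (power-of-two check keeps composites like `All` out, in a way that works across Python versions):

```python
import re
from enum import IntFlag

class Privileges(IntFlag):
    NoPrivileges = 0
    CanAddPost = 1
    CanDeletePost = 2
    CanBanUser = 4
    CanResetPasswords = 8
    CanModerateDiscussions = 16
    CanSuspendAccounts = 32
    All = 63  # OR of all single-bit flags above

def _humanize(name):
    # CamelCase -> "camel case"
    return re.sub(r"(?<!^)(?=[A-Z])", " ", name).lower()

# Built once at import: only single-bit members (value is a power of two).
_singles = [m for m in Privileges.__members__.values()
            if m.value and m.value & (m.value - 1) == 0]
Privileges.strings_map = {m: _humanize(m.name) for m in _singles}

def describe(value):
    return [s for m, s in Privileges.strings_map.items() if value & m]
```

No maintenance is needed when the enum grows: new members pick up their strings automatically.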
<python><enums><bitmask>
2023-02-03 22:02:10
1
768
tdemay
75,341,411
214,526
pathlib relative path issue for upper/parent level directories
<p>I'm facing an issue while trying to get relative path to a parent level directory using pathlib but os.path works. How to satisfy the requirement using pathlib APIs?</p> <pre><code>In [1]: import os In [2]: p1 = &quot;/a/b/c&quot; In [3]: p2 = &quot;/a/b/c/d/e&quot; In [4]: os.path.relpath(p1, p2) Out[4]: '../..' </code></pre> <p>But below code throws exception -</p> <pre><code>In [5]: import pathlib In [6]: c = pathlib.Path(p1).relative_to(pathlib.Path(p2)) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[6], line 1 ----&gt; 1 c = pathlib.Path(p1).relative_to(pathlib.Path(p2)) File ~/.conda/envs/slm-scratch-experiment/lib/python3.9/pathlib.py:939, in PurePath.relative_to(self, *other) 937 if (root or drv) if n == 0 else cf(abs_parts[:n]) != cf(to_abs_parts): 938 formatted = self._format_parsed_parts(to_drv, to_root, to_parts) --&gt; 939 raise ValueError(&quot;{!r} is not in the subpath of {!r}&quot; 940 &quot; OR one path is relative and the other is absolute.&quot; 941 .format(str(self), str(formatted))) 942 return self._from_parsed_parts('', root if n == 1 else '', 943 abs_parts[n:]) ValueError: '/a/b/c' is not in the subpath of '/a/b/c/d/e' OR one path is relative and the other is absolute. </code></pre>
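The short answer: `PurePath.relative_to` only walks *down* the tree — it never emits `..` segments. The options are `os.path.relpath`, or on Python 3.12+ the new `walk_up=True` parameter:

```python
import os

p1, p2 = "/a/b/c", "/a/b/c/d/e"

# Portable on any Python version:
rel = os.path.relpath(p1, p2)
print(rel)  # ../.. (on POSIX)

# Python 3.12+ only:
#   import pathlib
#   pathlib.PurePosixPath(p1).relative_to(p2, walk_up=True)  # PurePosixPath('../..')
```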
<python><python-3.x>
2023-02-03 22:00:15
0
911
soumeng78
75,341,338
7,045,589
Python Requests: why do I get the error "get() takes 2 positional arguments but 3 were given" when I only have two arguments?
<p>I'm trying to implement error-handling of 504 responses for my Requests statement when I query an API. I read through <a href="https://stackoverflow.com/questions/23267409/how-to-implement-retry-mechanism-into-python-requests-library">this question</a>, and am attempting to implement the solution given in the most-voted answer.</p> <p>My code keeps failing with the error <code>get() takes 2 positional arguments but 3 were given</code>. In my code (below), I'm only passing <code>get()</code> two arguments, though. Where am I going wrong?</p> <p>Here's the full error TraceBack:</p> <pre><code>Exception has occurred: TypeError get() takes 2 positional arguments but 3 were given File &quot; ... &quot;, line 113, in my_function s.get(&quot;https://epqs.nationalmap.gov/v1/json&quot;, payload) File &quot; ... &quot;, line 158, in &lt;module&gt; d = my_function(l) TypeError: get() takes 2 positional arguments but 3 were given </code></pre> <p>My code:</p> <pre><code>payload = { &quot;x&quot;: coordinate[0], &quot;y&quot;: coordinate[1], &quot;wkid&quot;: 4326, &quot;units&quot;: &quot;Meter&quot;, &quot;includeDate&quot;: &quot;false&quot;, } s = requests.Session() retries = Retry( total=10, backoff_factor=1, status_forcelist=[502, 503, 504], ) s.mount(&quot;https://&quot;, HTTPAdapter(max_retries=retries)) s.get(&quot;https://epqs.nationalmap.gov/v1/json&quot;, payload) </code></pre> <p>My previous code that didn't make use of <code>Session()</code> or <code>mount()</code> worked, so does <code>Session()</code> add some data behind the scenes to the <code>get()</code> request that I'm overlooking?</p> <p>Here's my previous code, fwiw:</p> <pre><code>payload = { &quot;x&quot;: coordinate[0], &quot;y&quot;: coordinate[1], &quot;wkid&quot;: 4326, &quot;units&quot;: &quot;Meter&quot;, &quot;includeDate&quot;: &quot;false&quot;, } r = requests.get(&quot;https://epqs.nationalmap.gov/v1/json&quot;, payload) </code></pre>
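The explanation: the module-level function is `requests.get(url, params=None, **kwargs)`, so a positional second argument lands in `params` — but the method is `Session.get(self, url, **kwargs)`, which takes only the URL positionally (the third "argument" in the error is the implicit `self`). The dict therefore has to be passed as `params=`. A hedged corrected snippet (the request line itself is commented out to avoid a live network call):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

s = requests.Session()
retries = Retry(total=10, backoff_factor=1, status_forcelist=[502, 503, 504])
s.mount("https://", HTTPAdapter(max_retries=retries))

# Session.get only takes the URL positionally; the query dict must be
# a keyword argument:
#   r = s.get("https://epqs.nationalmap.gov/v1/json", params=payload)
```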
<python><python-requests><http-status-code-504>
2023-02-03 21:49:27
2
309
the_meter413
75,341,308
4,668,368
How to remove json element with empty quotes?
<p>In the <code>python3</code> code below I read all the <code>objects</code> in the <code>json</code> then loop through the <code>elements</code> in the <code>json</code>. If I only search for the <code>id is not None</code> that element and the null value are removed. However when I add the <code>or</code> statement to remove the <code>id is not &quot;&quot;</code> the entire file is printed with neither of the <code>id</code> in the or statement removed. Being new to <code>python</code> I am not sure what I am doing wrong. Thank you :).</p> <p><strong>file</strong></p> <pre><code>{ &quot;objects&quot;: [ { &quot;version&quot;: &quot;1&quot;, &quot;list&quot;: [ { &quot; &quot;json&quot;: null }, &quot;id&quot;: &quot;&quot;, } ], &quot;h&quot;: &quot;na&quot; }, { &quot;version&quot;: &quot;1&quot;, &quot;list&quot;: [ { &quot; &quot;json&quot;: null }, &quot;id&quot;: &quot;This has text in it and should be printed&quot;, } ], &quot;h&quot;: &quot;na&quot; }, { &quot;version&quot;: &quot;1&quot;, &quot;list&quot;: [ { &quot; &quot;json&quot;: null }, &quot;id&quot;: null, } ], &quot;h&quot;: &quot;na&quot; } ] } </code></pre> <p><strong>desired</strong></p> <pre><code>{ &quot;version&quot;: &quot;1&quot;, &quot;list&quot;: [ { &quot; &quot;json&quot;: null }, &quot;id&quot;: &quot;This has text in it and should be printed&quot;, } ], &quot;h&quot;: &quot;na&quot; }, </code></pre> <p><strong>python3</strong></p> <pre><code>if v[&quot;id&quot;] is not None or v['id'] is not &quot;&quot;: </code></pre> <p>Also tried the below and <code>id</code> were changed:</p> <pre><code>for obj in data['objects']: if obj.get('id') is &quot;&quot;: obj['description'] = 'na' </code></pre>
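Two bugs explain the behavior: `is` / `is not` compares object identity, so `v["id"] is not ""` is unreliable (CPython even warns about it), and `x is not None or y is not ""` is true whenever *either* test passes — the condition needs `and`. A truthiness check covers both `None` and `""` at once. A sketch (the posted file is not valid JSON, so this assumes each object carries an `id` key):

```python
import json

def keep_with_id(json_string):
    data = json.loads(json_string)
    # A truthy id filters out both None and "" in one test.
    data["objects"] = [o for o in data["objects"] if o.get("id")]
    return data

doc = '{"objects": [{"id": ""}, {"id": "keep me"}, {"id": null}]}'
print(keep_with_id(doc))  # {'objects': [{'id': 'keep me'}]}
```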
<python><json>
2023-02-03 21:45:17
1
3,022
justaguy
75,341,301
7,841,330
Python - Resolve response Remote URL link with actual page
<p>I am trying to get the actual page URL using the remote URL from JSON response. Below is the remote URL I get in an JSON API response.</p> <pre><code>https://somesite.com/mainlink/1eb-68a8-40be-a3-5679e/utilities/927-40-b958-3b5?pagePath=teststaff </code></pre> <p>when I click the link, it resolves to actual page when opened in browser which in this case is</p> <pre><code>https://somesite.com/mainlink/recruitement/utilities/salesteam?pagePath=teststaff </code></pre> <p>How can programatically get this resolved URL without opening in browser ?</p>
<python><python-3.x><python-requests>
2023-02-03 21:44:41
1
669
Joe_12345
75,341,277
2,519,150
Python get datetime intervals of x minutes between two datetime objects
<p>How can I get the list of intervals (as 2-tuples) of <code>x</code> minutes between two datetimes? It should be easy, but even <code>pandas.date_range()</code> returns exact intervals, dropping the end time. For example:</p> <p>In (<code>from</code> is a reserved word in Python, so the inputs are renamed here):</p> <pre><code>start = '2023-01-05 16:01:58' end = '2023-01-05 17:30:15' x = 30 </code></pre> <p>Expected:</p> <pre><code>[ ('2023-01-05 16:01:58', '2023-01-05 16:30:00'), ('2023-01-05 16:30:00', '2023-01-05 17:00:00'), ('2023-01-05 17:00:00', '2023-01-05 17:30:00'), ('2023-01-05 17:30:00', '2023-01-05 17:30:15') ] </code></pre>
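One way to do this with plain `datetime`, snapping each step up to the next multiple of `x` minutes past the hour — a sketch that assumes `x` divides 60 evenly:

```python
from datetime import datetime, timedelta

def intervals(start, end, minutes):
    """Yield (start, end) string pairs snapped to `minutes` boundaries,
    keeping the ragged first and last pieces (assumes minutes divides 60)."""
    fmt = "%Y-%m-%d %H:%M:%S"
    cur = datetime.strptime(start, fmt)
    stop = datetime.strptime(end, fmt)
    step = minutes * 60
    out = []
    while cur < stop:
        # round cur up to the next multiple of `minutes` within its hour
        hour = cur.replace(minute=0, second=0, microsecond=0)
        secs = (cur - hour).total_seconds()
        nxt = min(hour + timedelta(seconds=(secs // step + 1) * step), stop)
        out.append((cur.strftime(fmt), nxt.strftime(fmt)))
        cur = nxt
    return out

result = intervals("2023-01-05 16:01:58", "2023-01-05 17:30:15", 30)
for pair in result:
    print(pair)
```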
<python><pandas><datetime>
2023-02-03 21:41:17
1
487
Stefano Lazzaro
75,341,238
17,696,880
Prevent a regex with capturing groups from acting on matches of a string that are preceded by a specific word and at the same time succeeded by another
<pre class="lang-py prettyprint-override"><code>import re input_text = &quot;((PL_ADVB)alrededor ((NOUN)(del auto rojizo, dentro de algo grande y completamente veloz)). Luego dentro del baúl rápidamente abajo de una caja por sobre ello vimos una caña.&quot; #example input place_reference = r&quot;(?i:[\w,;.]\s*)+?&quot; list_all_adverbs_of_place = [&quot;adentro&quot;, &quot;dentro&quot;, &quot;al rededor&quot;, &quot;alrededor&quot;, &quot;abajo&quot;, &quot;hacía&quot;, &quot;hacia&quot;, &quot;por sobre&quot;, &quot;sobre&quot;] list_limiting_elements = list_all_adverbs_of_place + [&quot;vimos&quot;, &quot;hemos visto&quot;, &quot;encontramos&quot;, &quot;hemos encontrado&quot;, &quot;rápidamente&quot;, &quot;rapidamente&quot;, &quot;intensamente&quot;, &quot;durante&quot;, &quot;luego&quot;, &quot;ahora&quot;, &quot;.&quot;, &quot;:&quot;, &quot;;&quot;, &quot;,&quot;, &quot;(&quot;, &quot;)&quot;, &quot;[&quot;, &quot;]&quot;, &quot;¿&quot;, &quot;?&quot;, &quot;¡&quot;, &quot;!&quot;, &quot;&amp;&quot;, &quot;=&quot;] pattern = re.compile(rf&quot;(?:(?&lt;=\s)|^)({'|'.join(re.escape(x) for x in list_all_adverbs_of_place)})?(\s+{place_reference})\s*((?={'|'.join(re.escape(x) for x in list_limiting_elements)}))&quot;, flags = re.IGNORECASE) input_text = re.sub(pattern, lambda m: f&quot;((PL_ADVB){m[1]}{m[2]}){m[3]}&quot; if m[2] else f&quot;((PL_ADVB){m[1]} NO_DATA){m[3]}&quot;, input_text) print(repr(input_text)) #--&gt; output </code></pre> <p>How to make the regex in the variable called as <code>pattern</code> only capture if and only if the text to capture is not in the middle of <code>((NOUN)</code> <strong>&quot;the captured text&quot;</strong> <code>)</code></p> <p>This way you should prevent this string</p> <p><code>((NOUN)(del auto rojizo, dentro de algo grande y completamente veloz))</code></p> <p>become in this other string...</p> <p><code>((NOUN)(del auto rojizo, ((PL_ADVB)dentro de algo grande y completamente veloz)))</code></p> <p>And this because 
<code>dentro de algo grande y completamente veloz</code> is in the middle of <code>((NOUN)</code> and <code>)</code>. What matters for the regex's decision not to capture is that the blocking area sits between these two delimiters.</p> <p>The correct output would be:</p> <pre><code>'((PL_ADVB)alrededor ((NOUN)(del auto rojizo, dentro de algo grande y completamente veloz)). Luego ((PL_ADVB)dentro del baúl) rápidamente ((PL_ADVB)abajo de una caja) ((PL_ADVB)por sobre ello) vimos una caña.' </code></pre> <p>As can be seen, in the rest of the string, where the <strong>blocking pattern</strong> (in this case <code>((NOUN)</code> <strong>blocking area</strong> <code>)</code>) was not present, the replacements were made.</p>
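A common way to get this behaviour is to match the blocking pattern as the *first* alternative and return it unchanged from the replacement callback, so the engine consumes protected regions before the adverb pattern can look inside them. A simplified sketch on a hypothetical English string, not the full pattern from the question:

```python
import re

# "Protect a region" trick: the ((NOUN)(...)) block is the first alternative,
# so the engine swallows it whole; the callback returns it untouched and only
# annotates matches of the second alternative.
text = "around ((NOUN)(inside something big)). Later inside the trunk."
protected = r"\(\(NOUN\)\(.*?\)\)"

def repl(m):
    if m.group(1):                      # a ((NOUN)(...)) block: leave untouched
        return m.group(1)
    return f"((PL_ADVB){m.group(2)})"   # otherwise annotate the adverb phrase

pattern = re.compile(rf"({protected})|(\b(?:inside|around)\b(?:\s+\w+)*)")
out = pattern.sub(repl, text)
print(out)
```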
<python><python-3.x><regex><string><regex-group>
2023-02-03 21:35:36
0
875
Matt095
75,341,236
6,918,112
How to run Netflix movies on Raspberry PIs using Docker container with Python and Selenium headless?
<p>I have a Raspberry PI and Docker containers in it.</p> <p>One of the containers (Python + Selenium script) is supposed to access Netflix headless and run a movie.</p> <p>It works fine on my Windows PC using Chrome (non-headless) or Firefox driver (headless and non-headless).</p> <p>Then, because it works on Firefox driver headless I decided to go with it and try it on the PI. Unfortunately it doesn't work and I get the same message as always (I am screenshotting the screen in headless mode):</p> <p>Firefox driver image: <a href="https://i.sstatic.net/qXDHW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qXDHW.png" alt="Firefox driver image:" /></a></p> <p>Chrome driver image: <a href="https://i.sstatic.net/h2Fay.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h2Fay.png" alt="Chrome driver image:" /></a></p> <p>I've read some possible fixes here and there with the libwidevine.so and so on, no luck yet!</p> <p>Anything else that I might have missed here? Anybody else in the same boat?</p> <p>Thanks a lot!</p>
<python><selenium><raspberry-pi><headless><netflix>
2023-02-03 21:35:27
0
640
Jeff Santos
75,341,170
1,848,244
Pandas Hierarchical MultiIndex: Concise way to use values from other levels in a row calc
<p>Probably easiest to demonstrate in code.</p> <p>Basically I have:</p> <ul> <li>A hierarchical dataframe. Something similar to:</li> </ul> <pre><code>level1 level2 root child1 10 10 child2 20 20 child3 30 30 child4 40 40 root2 child1 100 100 child2 200 200 child3 300 300 child4 400 400 </code></pre> <p>The real data is not massive, so manually mapping is feasible.</p> <p>I would like to:</p> <ul> <li>create a new column.</li> <li>the values in that column are a function of: <ul> <li>another column value and;</li> <li>another column value <em>in a different row</em>.</li> </ul> </li> </ul> <p>Unfortunately, the location of the different row in the real data isn't something easily predictable/ordered (so I do not think shift or rolling will work - even if you might be able to stretch one of those to work for this data.)</p> <p>I have a working solution, but I'm really new to PD, and I'm not sure its as concise as it could be, nor as canonical as it should be. (maybe it is great, happy to know if that's the case.)</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from io import StringIO data = &quot;&quot;&quot; level1,level2,value1,value2 root,child1,10,10 root,child2,20,20 root,child3,30,30 root,child4,40,40 root2,child1,100,100 root2,child2,200,200 root2,child3,300,300 root2,child4,400,400 &quot;&quot;&quot; df = pd.read_csv(StringIO(data), index_col=[0,1]) # We want the following result: #level1,level2,value1,value2,desiredcol #root,child1,10,10,(child4, value1) + (child1, value2) #root,child2,10,10,(child1, value1) + (child2, value2) #root,child3,10,10,(child2, value1) + (child3, value2) #root,child4,10,10,(child4, value2) # First - isolate the value1 column, and use unstack # to flip the dimension columnwise. value1s = df[['value1']].unstack() print('value1s unstacked:') print(value1s) # Copy the needed columns to the relevant mapped states. 
value1s[('intermediate', 'child1')] = value1s[('value1', 'child4')] value1s[('intermediate', 'child2')] = value1s[('value1', 'child1')] value1s[('intermediate', 'child3')] = value1s[('value1', 'child2')] value1s[('intermediate', 'child4')] = 0 # Get rid of the original values as they are still in df. value1s = value1s.drop(columns=['value1']) # Flip back to rows. value1s = value1s.stack() print(value1s) df = df.join(value1s, how='inner') # Now we can just row-wise sum the relevant columns. df['desiredcol'] = df[['value2', 'intermediate']].sum(axis=1) print(df) </code></pre>
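A more concise variant of the same idea: express the row mapping as a destination-to-source dict, pick the source columns from the unstacked frame, relabel them as the destinations, and align back with `add(..., fill_value=0)`. A sketch on the sample data (not the only canonical way, but it keeps the mapping in one place):

```python
import pandas as pd
from io import StringIO

data = """level1,level2,value1,value2
root,child1,10,10
root,child2,20,20
root,child3,30,30
root,child4,40,40
root2,child1,100,100
root2,child2,200,200
root2,child3,300,300
root2,child4,400,400
"""
df = pd.read_csv(StringIO(data), index_col=[0, 1])

# Destination -> source: which row's value1 feeds each row, per level1 group.
source = {"child1": "child4", "child2": "child1", "child3": "child2"}

u = df["value1"].unstack("level2")
picked = u[[source[dest] for dest in source]]            # source columns
picked.columns = pd.Index(list(source), name="level2")   # relabel as destinations
inter = picked.stack()

# child4 has no source row, so its intermediate contribution is 0.
df["desiredcol"] = df["value2"].add(inter, fill_value=0)
print(df)
```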
<python><pandas>
2023-02-03 21:26:06
0
437
user1848244
75,341,070
4,792,865
Qt QProgressBar not updating regularly when text is not visible
<p>I have a QProgressBar that is set to not display text. It appears that this causes the progress bar to not update regularly, but only after some chunks (roughly 5-10%).</p> <p>The following minimum Python example shows a window with two progress bars which get updated in sync. The one not displaying text is lagging behind.</p> <p><a href="https://i.sstatic.net/5MB4w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5MB4w.png" alt="Example" /></a></p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 from PyQt5 import QtCore, QtWidgets class Window(QtWidgets.QWidget): def __init__(self): super().__init__() layout = QtWidgets.QVBoxLayout(self) self.progressBar1 = QtWidgets.QProgressBar() self.progressBar2 = QtWidgets.QProgressBar() layout.addWidget(self.progressBar1) layout.addWidget(self.progressBar2) self.progressBar1.setTextVisible(False) self.progress = 0 QtCore.QTimer.singleShot(1000, self.updateProgress) def updateProgress(self): self.progress = (self.progress+1) % 100 self.progressBar1.setValue(self.progress) self.progressBar2.setValue(self.progress) QtCore.QTimer.singleShot(1000, self.updateProgress) app = QtWidgets.QApplication([]) win = Window() win.show() app.exec() </code></pre> <p>Is this a bug? Everything I could find about QProgressBar not updating correctly was about the main thread being stuck and not processing events, which doesn't seem to be the case here.</p> <p>The same behavior occurs in Python (PyQt5) and C++ (Qt 5.15.2, Qt 6.4.1)</p>
<python><qt><pyqt5>
2023-02-03 21:12:26
1
735
Andre
75,340,998
9,328,993
cannot import name 'compute_pycoco_metrics' from 'keras_cv.metrics.coco'
<p>I installed keras_cv on a MacBook M1:</p> <pre><code>pip install keras_cv </code></pre> <p>and ran this code:</p> <pre><code>import keras_cv </code></pre> <p>and got this error:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/opt/homebrew/lib/python3.10/site-packages/keras_cv/__init__.py&quot;, line 21, in &lt;module&gt; from keras_cv import callbacks File &quot;/opt/homebrew/lib/python3.10/site-packages/keras_cv/callbacks/__init__.py&quot;, line 14, in &lt;module&gt; from keras_cv.callbacks.pycoco_callback import PyCOCOCallback File &quot;/opt/homebrew/lib/python3.10/site-packages/keras_cv/callbacks/pycoco_callback.py&quot;, line 18, in &lt;module&gt; from keras_cv.metrics.coco import compute_pycoco_metrics ImportError: cannot import name 'compute_pycoco_metrics' from 'keras_cv.metrics.coco' (/opt/homebrew/lib/python3.10/site-packages/keras_cv/metrics/coco/__init__.py) </code></pre> <p>How can I fix this?</p>
<python><keras-cv>
2023-02-03 21:01:36
2
2,630
Sajjad Aemmi
75,340,900
21,116,910
Fast data transfer between a python program (server) and a Java program (client) on the same singleboard Linux computer (Raspberry Pi 3)
<p>I know Java reasonably well and used it to create a video game for the Raspberry Pi. The game must be controlled via a USB joystick. I searched and found many ways to read joystick data, but it seems the best option is evdev, which is written in Python. How can I transfer data from a Python script to a Java application? They run on the same Raspberry Pi with Raspberry Pi OS.</p> <p>One option would be to create files from Python and read them (or their names) from my Java game. The files would be named like 4_up_pressed.txt, 5_x+_035.txt, 6_y-_075.txt and so on, where the first number is the number of a command. But I think this approach is not fast. Can I transfer data from Python to Java directly and quickly on the same Raspberry Pi?</p>
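A lower-latency alternative to files is a localhost TCP socket: the Python evdev side writes newline-delimited commands, and the Java game reads them with a `java.net.Socket` plus a `BufferedReader`. A Python-only sketch, where the reading end below stands in for the Java client (event names follow the question's scheme; everything else is illustrative):

```python
import socket
import threading

ready = threading.Event()
port = [None]

def serve(events):
    # The Python/evdev side: accept one client and stream commands to it.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port[0] = srv.getsockname()[1]
    ready.set()
    conn, _ = srv.accept()
    for e in events:
        conn.sendall((e + "\n").encode())   # one command per line
    conn.close()
    srv.close()

t = threading.Thread(target=serve, args=(["4_up_pressed", "5_x+_035"],))
t.start()
ready.wait()

# Stand-in for the Java client: connect and read commands line by line.
client = socket.create_connection(("127.0.0.1", port[0]))
received = [line.strip() for line in client.makefile()]
client.close()
t.join()
print(received)
```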
<python><java><linux><raspberry-pi><gamepad>
2023-02-03 20:48:06
1
333
Alexander Gorodilov
75,340,879
7,157,059
Multi-Thread Binary Tree Search Algorithm
<p>I find implementing a multi-threaded binary tree search algorithm in Python can be challenging because it requires proper synchronization and management of multiple threads accessing shared data structures.</p> <p>One approach to achieve this, I think, would be to use a thread-safe queue data structure to distribute search tasks to worker threads, and use locks or semaphores to ensure that each node in the tree is accessed by only one thread at a time.</p> <p>How can you implement a multi-threaded binary tree search algorithm in Python that takes advantage of multiple cores, while maintaining thread safety and avoiding race conditions?</p>
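A sketch of the queue-based approach described above: workers pull subtrees from a `queue.Queue` (already thread-safe, so no per-node locks are needed when the tree is read-only) and a `threading.Event` signals success. Caveat: for pure-Python CPU-bound search the GIL serializes the work, so real multi-core gains need multiprocessing or an I/O-bound visit step.

```python
import queue
import threading

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def parallel_search(root, target, n_workers=4):
    tasks = queue.Queue()
    tasks.put(root)
    found = threading.Event()

    def worker():
        while not found.is_set():
            try:
                node = tasks.get(timeout=0.05)
            except queue.Empty:
                return              # queue drained: nothing left to search
            if node is None:
                continue
            if node.val == target:
                found.set()
            tasks.put(node.left)    # Queue handles its own locking
            tasks.put(node.right)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return found.is_set()

tree = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
print(parallel_search(tree, 6))    # True
print(parallel_search(tree, 99))   # False
```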
<python><multithreading><binary>
2023-02-03 20:44:20
2
749
Acerace.py
75,340,743
2,690,257
How to use a variable dynamically from an imported module in python 3.8
<p>I currently have the following file load.py which contains:</p> <pre><code>readText1 = &quot;test1&quot; name1 = &quot;test1&quot; readText2 = &quot;test2&quot; name2 = &quot;test2&quot; </code></pre> <p>Please note that the number will change frequently. Sometimes there might be 2, sometimes 20, etc.</p> <p>I need to do something with this data and then save it individually.</p> <p>In my file I import load like so:</p> <pre><code>from do.load import * #where do is a directory </code></pre> <p>I then create a variable to know how many items are in the file (which I know):</p> <pre><code>values = range(2) </code></pre> <p>I then attempt to loop and use each &quot;variable by name&quot; like so:</p> <pre><code>for x in values: x = x + 1 textData = readText + x nameSave = name + x </code></pre> <p>Notice I try to create a textData variable with the readText, but this won't work since readText isn't actually a variable. It errors. This was my attempt, but it's obviously not going to work. What I need to do is loop over each item in that file and then use its individual variable data. How can I accomplish that?</p>
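One way to keep the numbered variables but drop the hard-coded names is `getattr()` on the module object (so use `import` rather than `from ... import *`). A sketch with a stand-in module built in code, since `do/load.py` isn't available here:

```python
import types

# Stand-in for `from do import load`: a module object with the same
# numbered attributes as the question's load.py.
load = types.ModuleType("load")
load.readText1, load.name1 = "test1", "test1"
load.readText2, load.name2 = "test2", "test2"

# Count how many readTextN variables the module defines, then loop by name.
count = sum(1 for attr in vars(load) if attr.startswith("readText"))
pairs = []
for i in range(1, count + 1):
    textData = getattr(load, f"readText{i}")
    nameSave = getattr(load, f"name{i}")
    pairs.append((nameSave, textData))
print(pairs)
```

If you control load.py, a plain list or dict of entries is usually a better design than numbered variables, since it removes the need for dynamic lookup entirely.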
<python>
2023-02-03 20:30:17
3
3,353
FabricioG
75,340,739
10,976,654
Python locally installed package, module not defined
<p>I have a local package set up like this:</p> <pre><code>D:. | .gitignore | LICENSE | pyproject.toml | README.md | requirements.txt +---.venv | | ... +---mypackage | | __init__.py | +---moduleA | | | module_a_src.py | | | module_a_helpers.py | +---tools | | tools.py \---tests </code></pre> <p>The <code>__init__.py</code> file is empty. The <code>tools.py</code> file contains the following:</p> <pre><code>def working(string): print(string) print(&quot;here i am&quot;) </code></pre> <p>I installed the package in edit mode to my local venv using <code>pip install -e .</code></p> <p>I don't have/want entry points yet. I can run the following from the shell and it works as expected:</p> <pre><code>$ py -c &quot;from mypackage.tools import tools; tools.working('foo')&quot; here i am foo (.venv) </code></pre> <p><strong>However, I want to be able to run</strong> <code>py -c &quot;import mypackage; tools.working('foo')&quot;</code>. I tried adding the following to the <code>__init__.py</code> file:</p> <pre><code>from tools import tools # other things that didn't work and return the same error: # from .tools import tools # import tools.tools # from . import tools # from mypackage.tools import tools </code></pre> <p>But I get this:</p> <pre><code>$ py -c &quot;import mypackage; tools.working('foo')&quot; Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; NameError: name 'tools' is not defined here i am (.venv) </code></pre> <p>I tried adding an empty <code>__init__.py</code> to the tools folder, no luck.</p> <p>The <code>pyproject.toml</code> contains this, if that matters:</p> <pre><code>[tool.setuptools] include-package-data = true [tool.setuptools.packages.find] where = [&quot;.&quot;, &quot;mypackage&quot;] </code></pre>
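For what it's worth, `import mypackage` by itself can never create a bare name `tools` in the caller's namespace; after re-exporting the submodule in `__init__.py` you still call it *through* the package. A self-contained sketch that rebuilds the question's layout in a temp directory:

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Rebuild the layout: mypackage/__init__.py re-exports the tools module.
root = Path(tempfile.mkdtemp())
(root / "mypackage" / "tools").mkdir(parents=True)
(root / "mypackage" / "__init__.py").write_text("from mypackage.tools import tools\n")
(root / "mypackage" / "tools" / "__init__.py").write_text("")
(root / "mypackage" / "tools" / "tools.py").write_text(textwrap.dedent("""\
    def working(string):
        print(string)
        print("here i am")
"""))

sys.path.insert(0, str(root))
import mypackage

# Qualified through the package; a bare `tools.working(...)` would still
# require `from mypackage import tools` in the calling code.
mypackage.tools.working("foo")
```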
<python><package>
2023-02-03 20:29:55
0
3,476
a11
75,340,659
7,042,778
Easy code adding additional information into the dataframe
<p>Is there another way to add the information to the dataframe, an easier way?</p> <p>I have a dataframe which looks like the following:</p> <pre><code> Product Price 2023-01-03 Apple 2.00 2023-01-04 Apple 2.10 2023-01-05 Apple 1.90 2023-01-03 Banana 1.10 2023-01-04 Banana 1.15 2023-01-05 Banana 1.30 2023-01-03 Cucumber 0.50 2023-01-04 Cucumber 0.80 2023-01-05 Cucumber 0.55 </code></pre> <p>I have two additional pieces of information:</p> <pre><code>Apple Fuit Banana Fruit Cucumber Vegetable </code></pre> <p>and</p> <pre><code>Apple UK Banana Columbia Cucumber Mexico </code></pre> <p>These need to be added as additional columns, as shown in the result below.</p> <pre><code> Product Price Category Country 2023-01-03 Apple 2.00 Fuit UK 2023-01-04 Apple 2.10 Fuit UK 2023-01-05 Apple 1.90 Fuit UK 2023-01-03 Banana 1.10 Fuit Columbia 2023-01-04 Banana 1.15 Fuit Columbia 2023-01-05 Banana 1.30 Fuit Columbia 2023-01-03 Cucumber 0.50 Vegtable Mexico 2023-01-04 Cucumber 0.80 Vegtable Mexico 2023-01-05 Cucumber 0.55 Vegtable Mexico </code></pre> <p>The code looks like the following:</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'Product': ['Apple', 'Apple', 'Apple', 'Banana', 'Banana', 'Banana', 'Cucumber', 'Cucumber', 'Cucumber'], 'Price': [2.0, 2.1, 1.9, 1.1, 1.15, 1.3, 0.5, 0.8, 0.55]}, index=['2023-01-03', '2023-01-04', '2023-01-05', '2023-01-03', '2023-01-04', '2023-01-05', '2023-01-03', '2023-01-04', '2023-01-05'], ) df.index = pd.to_datetime(df.index) print(df) def category(row): if row['Product'] == 'Apple': return 'Fuit' elif row['Product'] == 'Banana': return 'Fuit' else: return 'Vegtable' def country(row): if row['Product'] == 'Apple': return 'UK' elif row['Product'] == 'Banana': return 'Columbia' else: return 'Mexico' df['Category'] = df.apply(lambda row: category(row), axis=1) df['Country'] = df.apply(lambda row: country(row), axis=1) print(df) </code></pre>
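The two `apply()` helpers collapse into dictionary lookups with `Series.map()`. A sketch on the sample data (spelling normalized to "Fruit"/"Vegetable" in the lookup tables):

```python
import pandas as pd

df = pd.DataFrame({
    "Product": ["Apple", "Apple", "Apple", "Banana", "Banana", "Banana",
                "Cucumber", "Cucumber", "Cucumber"],
    "Price": [2.0, 2.1, 1.9, 1.1, 1.15, 1.3, 0.5, 0.8, 0.55]},
    index=pd.to_datetime(["2023-01-03", "2023-01-04", "2023-01-05"] * 3))

# Each extra piece of information becomes a lookup dict...
category = {"Apple": "Fruit", "Banana": "Fruit", "Cucumber": "Vegetable"}
country = {"Apple": "UK", "Banana": "Columbia", "Cucumber": "Mexico"}

# ...and map() applies it vectorized, no row-wise apply() needed.
df["Category"] = df["Product"].map(category)
df["Country"] = df["Product"].map(country)
print(df)
```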
<python><dataframe>
2023-02-03 20:19:01
2
1,511
MCM
75,340,441
19,366,064
Traverse tree and find all possible paths to target
<p>Original question: Given the root of a binary tree and an integer targetSum, return <em>the number of paths where the sum of the values along the path equals</em> targetSum. The path does not need to start or end at the root or a leaf, but it must go downwards (i.e., traveling only from parent nodes to child nodes).</p> <p>I have posted my implementation below.</p> <p>It fails on the test case [1,null,2,null,3,null,4,null,5] with targetSum = 3: it returns 3 instead of 2.</p> <p><a href="https://i.sstatic.net/0QEt2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0QEt2.png" alt="enter image description here" /></a></p> <p>After some debugging I found out that the node with value 3 is recorded twice. Does anyone know why?</p> <pre><code># Definition for a binary tree node. # class TreeNode: # def __init__(self, val=0, left=None, right=None): # self.val = val # self.left = left # self.right = right class Solution: def pathSum(self, root: Optional[TreeNode], targetSum: int) -&gt; int: self.res = 0 def dfs(node, sum): if not node: return sum += node.val if sum == targetSum: self.res += 1 dfs(node.left, 0) dfs(node.right, 0) dfs(node.left, sum) dfs(node.right, sum) dfs(root, 0) return self.res </code></pre> <p>I asked ChatGPT and got a correct solution:</p> <pre><code>class Solution: def pathSum(self, root: Optional[TreeNode], targetSum: int) -&gt; int: self.res = 0 def dfs(node, sum): if not node: return sum += node.val if sum == targetSum: self.res += 1 dfs(node.left, sum) dfs(node.right, sum) def traverse(node): if not node: return dfs(node, 0) traverse(node.left) traverse(node.right) traverse(root) return self.res </code></pre> <p>Can someone explain the difference between the two implementations? I feel like they are doing the same thing.</p>
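The difference: the first version launches a fresh search (`dfs(child, 0)`) from every node it reaches while *extending* existing paths, so the same start node gets a fresh search launched multiple times; the second starts exactly one fresh search per node via `traverse`. For completeness, the prefix-sum technique (not from the question) counts all such paths in a single pass:

```python
from collections import defaultdict

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def path_sum(root, target):
    """Count downward paths summing to target via running prefix sums."""
    seen = defaultdict(int)
    seen[0] = 1                 # empty prefix
    count = 0

    def dfs(node, running):
        nonlocal count
        if not node:
            return
        running += node.val
        count += seen[running - target]   # paths ending at this node
        seen[running] += 1
        dfs(node.left, running)
        dfs(node.right, running)
        seen[running] -= 1      # backtrack when leaving this root-to-node path

    dfs(root, 0)
    return count

# The failing chain 1 -> 2 -> 3 -> 4 -> 5 from the question:
chain = TreeNode(1, None, TreeNode(2, None,
        TreeNode(3, None, TreeNode(4, None, TreeNode(5)))))
print(path_sum(chain, 3))   # 2: the paths [3] and [1, 2]
```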
<python><binary-tree><tree-traversal>
2023-02-03 19:53:22
1
544
Michael Xia
75,340,307
9,542,989
"Unable to connect: Adaptive Server is unavailable or does not exist" in SQL Server Docker Image
<p>I am trying to connect to a Docker container running SQL Server using <code>pymssql</code> and I am getting the following error,</p> <pre><code>Can't connect to db: (20009, b'DB-Lib error message 20009, severity 9: Unable to connect: Adaptive Server is unavailable or does not exist (localhost) Net-Lib error during Connection refused (111) DB-Lib error message 20009, severity 9: Unable to connect: Adaptive Server is unavailable or does not exist (localhost) Net-Lib error during Connection refused (111)\n') </code></pre> <p>Now, I know similar questions have been asked before, such as the one given below, <br> <a href="https://stackoverflow.com/questions/52401252/database-connection-failed-for-local-mssql-server-with-pymssql">Database connection failed for local MSSQL server with pymssql</a></p> <p>However, most of the solutions given revolve around opening up the Sql Server Configuration Manager and making changes through the UI or allowing network traffic to pass through. However, I am not entirely certain how to do this with a Docker container.</p> <p>How can I resolve this? Is there a different SQL Server I should run? Or a different Python package I should use to establish the connection?</p> <p>I have spin up my container based on the instructions given here, <br> <a href="https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker?view=sql-server-ver16&amp;pivots=cs1-bash" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker?view=sql-server-ver16&amp;pivots=cs1-bash</a></p> <p>Given below are the parameters I am using to establish the connection,</p> <pre><code>{ &quot;host&quot;: &quot;localhost&quot;, &quot;port&quot;: &quot;1433&quot;, &quot;user&quot;: &quot;SA&quot;, &quot;password&quot;: &quot;&lt;my-password&gt;&quot;, &quot;database&quot;: &quot;TestDB&quot; } </code></pre> <p>Update: I am actually trying to connect to the SQL Server Docker instance from within another container. 
It is the MindsDB container. This MindsDB container is, however, using <code>pymysql</code> to establish the connection.</p> <p>Update: Given below are the two commands I am using to run my containers. <br> SQL Server:</p> <pre><code>sudo docker run -e &quot;ACCEPT_EULA=Y&quot; -e &quot;MSSQL_SA_PASSWORD=&lt;my-password&gt;&quot; \ -p 1433:1433 --name sql1 --hostname sql1 \ -d \ mcr.microsoft.com/mssql/server:2022-latest </code></pre> <p>MindsDB:</p> <pre><code>docker run -p 47334:47334 -p 47335:47335 mindsdb/mindsdb </code></pre>
<python><sql-server><docker><pymssql><mindsdb>
2023-02-03 19:36:18
1
2,115
Minura Punchihewa