QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,685,844 | 6,494,707 | How to plot the NIR spectral signal from the hypercube data in python? | <p>I have the following data available from the hyperspectral imaging:</p>
<ul>
<li>cube_envi32.dat</li>
<li>cube_envi32.hdr</li>
<li>roi_masks.bmp</li>
</ul>
<p>How can I plot the reflectance vs wavelength plot? Is there any sample python code that I can use?</p>
| <python><python-3.x><satellite-image><spectral-python> | 2023-07-14 08:08:22 | 1 | 2,236 | S.EB |
76,685,822 | 10,863,083 | reduce milliseconds precision python | <p>I have this timestamp</p>
<pre><code>2023-07-12 21:12:21.940000
</code></pre>
<p>After converting it using :</p>
<pre><code>df_snowflake['last_activity']=df_snowflake['last_activity'].dt.strftime('%Y-%m-%d %H:%M:%S.%f')
</code></pre>
<p>I want to reduce the milliseconds precision and get the next datetime :</p>
<pre><code>2023-07-12 21:12:21.940
</code></pre>
| <python><pandas><datetime><timestamp><milliseconds> | 2023-07-14 08:05:01 | 1 | 417 | baddy |
76,685,818 | 5,521,699 | Python pinecone pod not completely full | <p>I am using pinecone in order to store 768-dimensional vectors in a vector database. From the docs, I read that the free tier allows to store up to 100,000 vectors. However, in my case, there seem to be only 1,466 vectors. Also, the dashboard reports a pod fullness of 0.0%.</p>
<p>In my client code, I upserted around 86,000 vectors, but only 1,466 are uploaded. I mention that I also add some metadata for each vector, namely an integer encoded as a string. Below is a screenshot from the dashboard, showing the number of vectors and pod fullness.</p>
<p><a href="https://i.sstatic.net/RAoLV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RAoLV.png" alt="enter image description here" /></a></p>
<p>How could I upload all 86,000 vectors?</p>
| <python><vector><pinecone> | 2023-07-14 08:04:09 | 1 | 710 | Polb |
76,685,730 | 10,570,372 | Understanding the Internal Stack Frames in a Recursive Function Call | <p>I'm trying to understand how the system's call stack works internally when a
recursive function is called. Specifically, I'm looking at a function that
computes the maximum depth of a binary tree using a postorder traversal.</p>
<p>I do know there is a stack solution out there, but I would one that mimics the internal call as closely as possible.</p>
<p>Here's the tree I'm working with:</p>
<pre><code> 1
/ \
2 3
/ \
4 5
</code></pre>
<p>And here's the recursive function:</p>
<pre class="lang-py prettyprint-override"><code>def maxDepth(root: Optional[TreeNode]) -> int:
def dfs_postorder(node: TreeNode) -> Union[int, Literal[0]]:
if not node:
return 0
left_depth = dfs_postorder(node.left)
right_depth = dfs_postorder(node.right)
return max(left_depth, right_depth) + 1
return dfs_postorder(root)
</code></pre>
<p>When this function is run on the tree, it first calls itself on the root, then
on the root's left child, then on the left child's left child, and so on, until
it reaches a leaf. At each step, it adds a frame to the call stack.</p>
<p>However, I'm confused about when the right children get added to the stack. From
my understanding of the code, it seems like the function would only start
processing the right children after it has finished with all the left children.</p>
<p>For instance, after the first call to the function (with the root node <code>1</code>), I
would expect the next call to be with the left child (<code>2</code>), and the system's
call stack to look like this:</p>
<pre class="lang-py prettyprint-override"><code>[(1, "processed"), (2, "process")]
</code></pre>
<p>where <code>process</code> means the node's left and right are yet to be explored and <code>processed</code> means
we explored its left and right and probably ready to be "popped" later.</p>
<p>However, I've been told that even at this early stage, the call stack would also
include a frame for the right child (<code>3</code>), like this:</p>
<pre class="lang-py prettyprint-override"><code>[(1, "processed"), (3, "process"), (2, "process")]
</code></pre>
<p>Why is this the case? How does the system's call stack actually work in this
situation?</p>
<hr />
<p>Stack Solution from Leetcode:</p>
<pre class="lang-py prettyprint-override"><code>class Solution:
def maxDepth_stack(self, root: BinaryTreeNode) -> Union[int, Literal[0]]:
if not root:
return 0
stack = [(1, root)] # The stack holds tuples of a node and its depth
max_depth = 0
while stack:
depth, node = stack.pop()
if node: # If the node is not None
# Update max_depth if current depth is greater
max_depth = max(max_depth, depth)
# Add the left child and its depth to the stack
stack.append((depth + 1, node.left))
# Add the right child and its depth to the stack
stack.append((depth + 1, node.right))
return max_depth
</code></pre>
| <python><algorithm><recursion><binary-tree> | 2023-07-14 07:52:51 | 3 | 1,043 | ilovewt |
76,685,578 | 3,564,977 | Is there a way to find overlaps in time periods in two dataframes on Python and return the max and min timestamps? | <p>I have two Pandas dataframes of events, with start and end times for time periods:</p>
<p><strong>DF1</strong></p>
<pre><code>Group amin amax
1 2023-07-03 10:45:00 2023-07-03 16:00:00
2 2023-07-04 11:00:00 2023-07-04 11:00:00
3 2023-07-04 11:30:00 2023-07-04 18:15:00
</code></pre>
<p><strong>DF2</strong></p>
<pre><code>Group amin amax
1 2023-07-03 13:30:00 2023-07-03 13:30:00
2 2023-07-03 14:30:00 2023-07-03 15:30:00
3 2023-07-03 16:30:00 2023-07-03 16:30:00
4 2023-07-03 17:00:00 2023-07-03 17:00:00
5 2023-07-04 15:45:00 2023-07-04 16:30:00
</code></pre>
<p>Ideally, I'd like to iterate through the two dataframes to create a new dataframe that would find the overlap between them, and give the min and max of the overall overlap:</p>
<pre><code>Group amin amax
1 2023-07-03 10:45:00 2023-07-03 17:00:00
2 2023-07-04 11:30:00 2023-07-03 18:15:00
</code></pre>
<p>Does anyone have any suggestions on how to do so? Thanks!</p>
| <python><pandas><datetime> | 2023-07-14 07:31:17 | 1 | 331 | Brian O'Halloran |
76,685,543 | 11,922,765 | python dataframe automatically convert numeric columns as float but don't drop non-numeric | <p>I have created the following function to take a data frame, convert numeric columns d-type to numeric. This is doing good job but the problem is, it drops the non-numeric columns as well, which I don't want to. Because, those columns carry some important information as well.</p>
<pre><code>def convert_dataframe_to_numeric_type(df):
def is_it_a_number(x):
try:
float(x)
return True
except:
return False
df = df[df.applymap(is_it_a_number)]
df = df.dropna(how='all',axis=1)
# after converting all non-numeric elements, transform them to numeric
df = df.transform(pd.to_numeric,errors='ignore')
return df
</code></pre>
| <python><python-3.x><pandas><dataframe><numeric> | 2023-07-14 07:27:19 | 1 | 4,702 | Mainland |
76,685,471 | 583,464 | Can't get right the concatenate layer dimensions | <p>I am using this unet:</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import BatchNormalization, Conv2D, Activation,\
MaxPooling2D, Conv2DTranspose, Dropout, Input, Concatenate, \
LeakyReLU, Flatten, Reshape, Lambda, MaxPool2D
def conv2d_block(input, num_filters):
x = Conv2D(num_filters, 3, padding="same")(input)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Conv2D(num_filters, 3, padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
return x
n_filters = 16
def build_unet(input_shape):
inputs = Input(input_shape)
c1 = conv2d_block(inputs, num_filters=n_filters * 1)
p1 = MaxPooling2D((2, 2))(c1)
c2 = conv2d_block(p1, num_filters=n_filters * 2)
p2 = MaxPooling2D((2, 2))(c2)
c3 = conv2d_block(p2,num_filters=n_filters * 4)
p3 = MaxPooling2D((2, 2))(c3)
c4 = conv2d_block(p3, num_filters=n_filters * 8)
p4 = MaxPooling2D((2, 2))(c4)
c5 = conv2d_block(p4, num_filters=n_filters * 16)
p5 = MaxPooling2D((2, 2))(c5)
p5 = Dropout(0.2)(p5)
c6 = conv2d_block(p5, num_filters=n_filters * 32)
c6 = Dropout(0.2)(c6)
u6 = Conv2DTranspose(n_filters * 16, (3, 3), strides=(2, 2),
padding='same')(c6)
u6 = Concatenate()([u6, c6])
c7 = conv2d_block(u6, num_filters=n_filters * 16)
u7 = Conv2DTranspose(n_filters * 8, (3, 3), strides=(2, 2),
padding='same')(c7)
u7 = Concatenate()([u7, c7])
c8 = conv2d_block(u7, num_filters=n_filters * 8)
u8 = Conv2DTranspose(n_filters * 4, (3, 3), strides=(2, 2),
padding='same')(c8)
u8 = Concatenate()([u8, c8])
c9 = conv2d_block(u8, num_filters=n_filters * 4)
u9 = Conv2DTranspose(n_filters * 2, (3, 3), strides=(2, 2),
padding='same')(c9)
u9 = Concatenate()([u9, c9])
c9 = conv2d_block(u9, num_filters=n_filters * 2)
u10 = Conv2DTranspose(n_filters * 1, (3, 3), strides=(2, 2),
padding='same')(c9)
u10 = Concatenate()([u10, c1])
u10 = Dropout(0.3)(u10)
c10 = conv2d_block(u10, num_filters=n_filters * 1)
outputs = Conv2D(2, (1, 1), activation='relu') (c10)
model = Model(inputs=[inputs], outputs=[outputs])
return model
INPUT_SHAPE = (156, 156, 2)
model = build_unet(INPUT_SHAPE)
</code></pre>
<p>I have images with 2 channels.</p>
<p>At the first concatenation layer, <code>u6</code>, I am receiving:</p>
<pre><code> A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 8, 8, 256), (None, 4, 4, 512)]
</code></pre>
<p>If I change all <code>Conv2DTranspose</code> strides, with <code>strides=(1,1)</code> except the last <code>Conv2DTranspose</code> layer, and if I use <code>strides=(39, 39)</code> , then it works! But <code>39</code> strides?? Way too much.</p>
<pre><code>...
u6 = Conv2DTranspose(n_filters * 16, (3, 3), strides=(1, 1),
padding='same')(c6)
u6 = Concatenate()([u6, c6])
c7 = conv2d_block(u6, num_filters=n_filters * 16)
u7 = Conv2DTranspose(n_filters * 8, (3, 3), strides=(1, 1),
padding='same')(c7)
u7 = Concatenate()([u7, c7])
c8 = conv2d_block(u7, num_filters=n_filters * 8)
u8 = Conv2DTranspose(n_filters * 4, (3, 3), strides=(1, 1),
padding='same')(c8)
u8 = Concatenate()([u8, c8])
c9 = conv2d_block(u8, num_filters=n_filters * 4)
u9 = Conv2DTranspose(n_filters * 2, (3, 3), strides=(1, 1),
padding='same')(c9)
u9 = Concatenate()([u9, c9])
c9 = conv2d_block(u9, num_filters=n_filters * 2)
u10 = Conv2DTranspose(n_filters * 1, (3, 3), strides=(39, 39),
padding='same')(c9)
u10 = Concatenate()([u10, c1])
...
</code></pre>
<p>How to find the right dimensions?</p>
| <python><tensorflow><deep-learning> | 2023-07-14 07:16:42 | 1 | 5,751 | George |
76,684,967 | 4,141,279 | Repeatedly process big list of images with changing parameters using multiple cores in python | <p>I have a big list of images <code>list_img</code>, say 20k that I need to process multiple times with changing arguments out of a list <code>params = [arg1, arg2, ...]</code>. Ideally, I want to use multiple processes to do so. But I need all processes to first use <code>arg1</code> and then <code>arg2</code> on chunks of my list <code>list_img</code>. The processing time for each <code>arg</code> in <code>params</code> varies greatly. So if I would distribute the list <code>params</code> over my processes instead of the list of images (core 1: arg1, core 2: arg2, ...) it happens that after a while most of the processes are idle (finished) while very few are still crunching data.</p>
<p>My current (working) solution looks like that:</p>
<pre><code>from multiprocessing import Pool
import numpy as np
def calc_image(argument, image):
val = argument * image # not the real process, just demo
return val
if __name__ == "__main__":
pool = Pool(processes=8)
list_img = [np.ones((100, 100))] * 20000 # for demo only
params = list(range(100)) # for demo only
for par in params:
par_list = [par] * len(list_img)
return_vals = pool.starmap(calc_image, zip(par_list, list_img))
pool.close()
</code></pre>
<p>How can I avoid to copy the list <code>list_img</code> every time the variable <code>par</code> changes in the for-loop? I also would like to avoid using global variables, if possible.</p>
| <python><multiprocessing><pool><starmap> | 2023-07-14 05:40:06 | 2 | 1,597 | RaJa |
76,684,873 | 12,415,287 | Replacing tag in Beautiful soup doesn't work | <p>I have this code in <code>BeautifulSoup4</code>:</p>
<pre><code>import glob
from bs4 import BeautifulSoup
# get all ccl_tags
# rearrange
# replace current ccl_tag with data replaced
# print to output file
def rearrange_xml(input_file, output_file, tag_ordering):
# Reading the data inside the xml
# file to a variable under the name
# data
with open(input_file, "r") as f:
data = f.read()
# Passing the stored data inside
# the beautifulsoup parser, storing
# the returned object
Bs_data = BeautifulSoup(data, "xml")
# Finding all instances of tag
# `ccl_tag` asign to "ccls"
ccls = Bs_data.find_all("ccl_tag")
# rearrange the ccl based on the given ordering
for ccl in ccls:
# declare new ccl
new_ccl = BeautifulSoup()
tags_to_rearrange = []
# find all tags in tag ordering which are also inside ccl
for tag_name in tag_ordering:
tags = ccl.find_all(tag_name)
tags_to_rearrange.extend(tags)
# find only the found tags to rearrage and apply to new ccl
for tag in tags_to_rearrange:
tag.extract()
# only add the tags specified
new_ccl.append(tag)
print(new_ccl)
# replace with doesn"t work?
ccl.replace_with(new_ccl)
print(Bs_data.find("ccl_tag"))
with open(output_file, "w") as file:
file.write(Bs_data.prettify())
# Get a list of XML files in the same directory
xml_files = glob.glob("*.xml")
tag_ordering = [
"date_tag",
"req_tag",
"req2_tag",
"req3_tag"
]
# Rearrange each XML file
for input_file in xml_files:
# don't include output_ files
if not input_file.startswith("output_"):
output_file = f"output_{input_file}"
rearrange_xml(input_file, output_file, tag_ordering)
</code></pre>
<p>which get's the xml files in current directory and creates a new xml file from the input file.</p>
<p>It then finds a tag, ccl_tag, and replaces that tag with the same tag only rearranged, and with specific tags included.</p>
<p>However, the code to replace the tag does not work. I can confirm by printing new_ccl that the values are correct, but does not copy properly.</p>
<p>What could be wrong?</p>
| <python><xml><beautifulsoup> | 2023-07-14 05:12:41 | 0 | 2,820 | Prosy A. |
76,684,617 | 15,613,481 | Python decode bytearray | <p>I need send some data over serialport, but I have problem with <code>bytearray</code>
I wrote simple script, like this:</p>
<pre class="lang-py prettyprint-override"><code>msg = bytearray([255,1,134,0,0,0,0,121])
print(msg)
</code></pre>
<p>But output from this looks like this:</p>
<pre><code>python3 bytearrtest.py
bytearray(b'\xff\x01\x86\x00\x00\x00\x00y')
</code></pre>
<p>What is it this <code>y</code>? Why output do not have <code>0x79</code> at the end instead of <code>y</code>? And why 4 times <code>Zero</code> is converted to two times <code>0x00</code> ?</p>
<p>Context of question:
I wrote very simple code in Micropython:</p>
<pre><code>from machine import UART, Pin
import time
uart1 = UART(1, baudrate=9600, tx=Pin(8), rx=Pin(9),bits=8, parity=None, stop=1)
msg = b"\xff\x01\x86\x00\x00\x00\x00\x00\x79"
print(msg)
uart1.write(msg) # write
time.sleep_ms(2000)
print("Response:")
print(uart1.read())
</code></pre>
<p>And response of this code is:</p>
<pre><code>b'\xff\x01\x86\x00\x00\x00\x00\x00y'
Response:
b'\x00\x01\x86\x00\x00\x00\x00\x00y\x00'
</code></pre>
<p>Last line of output come from another device. But If I do it in terminal BRAY, on command: <code>FF 01 86 00 00 00 00 79</code> response is: <code>FF 86 02 55 3E 00 02 00 E3</code> and this is ok, because response must staart from <code>FF 86...</code></p>
<p>So, why it works in console, but not in MicroPython?</p>
| <python><serial-port><python-bytearray> | 2023-07-14 03:51:52 | 0 | 1,532 | sosnus |
76,684,358 | 3,247,006 | How to change the default cookie name `django_language` set by `set_language()` in Django? | <p>I could set and use <a href="https://docs.djangoproject.com/en/4.2/topics/i18n/translation/#the-set-language-redirect-view" rel="nofollow noreferrer">set_language()</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "core/urls.py"
...
urlpatterns += [
path("i18n/", include("django.conf.urls.i18n")) # Here
]
</code></pre>
<p>Now, I want to change the default cookie name <code>django_language</code> below considering the security because users can easily know that <strong>Django</strong> is used for the website:</p>
<p><a href="https://i.sstatic.net/xz9W8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xz9W8.png" alt="enter image description here" /></a></p>
<p>So, how can I change the default cookie name <code>django_language</code>?</p>
| <python><django><default><django-i18n><django-cookies> | 2023-07-14 02:34:31 | 1 | 42,516 | Super Kai - Kazuya Ito |
76,684,335 | 8,026,780 | What is the task.info in Celery? | <p>I found print task.info will display the worker'hostname and pid,but I not found any doc in celery documents.
Could I modify the info method to gather other data such as worker time,worker memroy,worker cpu load?</p>
| <python><celery> | 2023-07-14 02:26:19 | 1 | 453 | Cherrymelon |
76,684,282 | 266,185 | global variable in a task | <p>I am using celery director to write celery task. And I need to pass parameter from one step to downstream step. So I write the following decorator to do that. But the decorator need to access a parameter named params defined in the task, so I used a global variable (global params). The question is: will the global variable result in case condition? Suppose the first execution set params to {"test":{"v1":1}}, and the second execution set params to {"test":{"v1":2}} because they pass different kwargs. If they are running at nearly t he same time, is it for both execution see the same params value since they are global?</p>
<pre><code>def inject_params(func):
def inner(*args, **kwargs):
inherit_params = {}
for arg in args:
if isinstance(arg, dict):
if "params" in arg:
inherit_params = arg["params"] |inherit_params
new_kwargs = inherit_params|kwargs
result = func(*args, **new_kwargs)
global params
return {"params": params|inherit_params, "result": result}
return result
inner.__name__ = func.__name__
return inner
params={}
@task(name="EVALUATE_DELETE_PERFORMANCE")
@inject_params
def evaluate_delete_performance(*args, **kwargs):
global params
params = {"test":kwargs}
return "some value"
</code></pre>
| <python><celery> | 2023-07-14 02:01:47 | 1 | 6,013 | Daniel Wu |
76,684,195 | 6,646,161 | how to best filter exceptions on their __cause__ (or __context__)? | <p>Given a base exception type:</p>
<pre><code>class MyModuleError(Exception):
pass
</code></pre>
<p>Suppose we have code that explicitly raises it, using exception chaining:</p>
<pre><code>def foo():
try:
#some code
except (ZeroDivisionError, OSError) as e:
raise MyModuleError from e
</code></pre>
<p>Now, in the calling code...</p>
<pre><code>try:
foo()
except MyModuleError as e:
# Now what?
</code></pre>
<p>how can I idiomatically write the <code>except</code> clause, so that the exception handling depends on the <code>__cause__</code> (chained exception)?</p>
<p>I thought of these approaches:</p>
<p>a) using <code>type(e)</code> like:</p>
<pre><code> # filter here
t=type(e.__cause__)
if t is ZeroDivisionError:
doStuff()
elif t is OSError:
doOtherStuff()
else:
raise
</code></pre>
<p>b) using <code>isinstance()</code> like:</p>
<pre><code> # filter here
if isinstance(e.__cause__, ZeroDivisionError):
doStuff()
elif isinstance(e.__cause__, OSError):
doOtherStuff()
else:
raise
</code></pre>
<p>c) re-raising like:</p>
<pre><code> # filter here
try:
raise e.__cause__
except ZeroDivisionError:
doStuff()
except OSError:
doOtherStuff()
except:
raise e #which should be the "outer" exception
</code></pre>
| <python><python-3.x><exception> | 2023-07-14 01:34:02 | 2 | 327 | calestyo |
76,684,101 | 5,942,100 | Pairing corresponding values to a columns values using Pandas - no duplicates | <p>I would like to pair corresponding matching values to a specific columns' values using Pandas without getting duplicates.</p>
<p><strong>Data</strong></p>
<p>df1</p>
<pre><code>IN name1 size ttee date days new date year month
AA1 dd 1 FALSE 1/14/2023 10 1/14/2023 2023 January
AA1 ff 2 FALSE 1/9/2023 10 1/9/2023 2023 January
AA1 jj 3 FALSE 1/8/2023 10 1/8/2023 2023 January
AA1 mm 4 FALSE 1/9/2023 10 1/9/2023 2023 January
AA1 nn 5 FALSE 1/29/2023 10 1/29/2023 2023 January
</code></pre>
<p>df2</p>
<pre><code>name stat year month
AA1 KZZ:1 2022 January
AA1 KZZ:1 2023 April
AA1 KZZ:1 2023 March
AA1 KZZ:1 2022 February
AA1 KZZ:1 2022 April
AA1 KZZ:1 2022 September
AA1 KZZ:1 2022 February
AA1 KZZ:1 2022 March
AA1 KZZ:1 2023 June
AA1 KZZ:1 2022 December
AA1 KZZ:1 2022 January
AA1 KZZ:1 2022 January
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>IN name1 size ttee date days new date year month stat
AA1 dd 1 FALSE 1/14/2023 10 1/14/2023 2023 January KZZ:1
AA1 ff 2 FALSE 1/9/2023 10 1/9/2023 2023 January KZZ:1
AA1 jj 3 FALSE 1/8/2023 10 1/8/2023 2023 January KZZ:1
AA1 mm 4 FALSE 1/9/2023 10 1/9/2023 2023 January KZZ:1
AA1 nn 5 FALSE 1/29/2023 10 1/29/2023 2023 January KZZ:1
</code></pre>
<p>I've tried doing this:</p>
<pre><code>out = pd.merge(df1,df2.drop_duplicates(), left_on=['IN'], right_on= ['name'], how="left")
</code></pre>
<p>However, the above script is giving an exploded output with combinations and does not retain the original left dataframe row count.</p>
| <python><pandas><numpy> | 2023-07-14 01:02:03 | 1 | 4,428 | Lynn |
76,684,051 | 1,056,563 | What is wrong with this pytest parametrize invocation? | <p>I have extensively used <code>pytest.mark.parametrize</code> but am stumbling on its use in one test method:</p>
<pre><code> @pytest.mark.parametrize("is_except, except_pattern, in_query, table, out_query",[
(False,None,"WHERE Table1.Col1='Foo'", "Table1","select * from Table1"),
(True,None,"Table in query (Table11) must match the parameter value (Table1)", "Table1",None)
])
def test_all_parameters(self,is_except,except_pattern,in_query,table,out_query):
# logic ..
</code></pre>
<p>This is resulting in:</p>
<blockquote>
<p>E TypeError: TestDeltaLakeReader.test_all_parameters() missing 5 required positional arguments: 'is_except', 'except_pattern', 'in_query', 'table', and 'out_query'</p>
</blockquote>
<p>Full stacktrace:</p>
<pre><code>(TestDeltaLakeReader.test_all_parameters)
self = <unittest.case._Outcome object at 0x121d78b20>
test_case = <framework.tests.ddex.reader.test_deltalake_reader.TestDeltaLakeReader testMethod=test_all_parameters>
isTest = True
@contextlib.contextmanager
def testPartExecutor(self, test_case, isTest=False):
old_success = self.success
self.success = True
try:
> yield
/usr/local/Cellar/python@3.10/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/unittest/case.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/Cellar/python@3.10/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/unittest/case.py:591: in run
self._callTestMethod(testMethod)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <framework.tests.ddex.reader.test_deltalake_reader.TestDeltaLakeReader testMethod=test_all_parameters>
method = <bound method TestDeltaLakeReader.test_all_parameters of <framework.tests.ddex.reader.test_deltalake_reader.TestDeltaLakeReader testMethod=test_all_parameters>>
def _callTestMethod(self, method):
> method()
E TypeError: TestDeltaLakeReader.test_all_parameters() missing 5 required positional arguments: 'is_except', 'except_pattern', 'in_query', 'table', and 'out_query'
/usr/local/Cellar/python@3.10/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/unittest/case.py:549: TypeError
</code></pre>
<p>Any ideas?</p>
<p><strong>Update</strong> I brought this down to a single testing parameter and still get the error.</p>
<p><strong>Another update</strong> This same <code>parametrize</code> works fine when the method is extracted out into a standalone function. We have many many uses of <code>parametrize</code> inside test classes so I don't understand this.</p>
| <python><pytest> | 2023-07-14 00:41:57 | 3 | 63,891 | WestCoastProjects |
76,683,954 | 1,911,133 | PNG image frombytes turns black using Python Pillow. How to keep colors? | <p>I want to rotate this image.</p>
<p><a href="https://i.sstatic.net/aJpWQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aJpWQ.png" alt="enter image description here" /></a></p>
<p>I have it as bytes with:</p>
<pre><code>img = Image.open(image_path)
img.tobytes()
</code></pre>
<p>But when I decode it:</p>
<pre><code>image = Image.frombytes('P', (width, height), image_data)
</code></pre>
<p>I get a black square.</p>
<p>How can I read the image from bytes and keep the colors? This is happening for PNG images.</p>
<p>The farthest I've got is getting a black background with a barely noticeable shape of the original image in white.
With</p>
<pre><code>image = Image.frombytes('P', (width, height), image_data).convert('L')
</code></pre>
<p>I'm using Pillow. I'm open to use anything.</p>
| <python><image-processing><python-imaging-library> | 2023-07-14 00:07:16 | 1 | 1,109 | AAlvz |
76,683,917 | 4,858,233 | How to get PANDAS EWMA results to match online EWMA calculator results? | <p>Here is my code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'values': [345,373,373,359,376,376,413,383,467,464,540,628,545,631,617,607,590,576,565,617]})
ewm = df['values'].ewm(span=7, adjust=False).mean()
ewm
</code></pre>
<p>The result is:</p>
<pre><code>0 345.000000
1 352.000000
2 357.250000
3 357.687500
4 362.265625
5 365.699219
6 377.524414
7 378.893311
8 400.919983
9 416.689987
10 447.517490
11 492.638118
12 505.728588
13 537.046441
14 557.034831
15 569.526123
16 574.644592
17 574.983444
18 572.487583
19 583.615687
</code></pre>
<p>But online calculator (<a href="https://goodcalculators.com/exponential-moving-average-calculator" rel="nofollow noreferrer">https://goodcalculators.com/exponential-moving-average-calculator</a>) using the same input values and period (or span, 7) gives these results:</p>
<pre><code>363.667,366,367.75,365.563,368.172,370.129,380.847,381.385,402.789,418.092,448.569,493.427,506.32,537.49,557.367,569.776,574.832,575.124
</code></pre>
<p>Why are the results different?</p>
<p>How can the code be modified to give results matching the online calculator?</p>
| <python><pandas><statistics> | 2023-07-13 23:53:21 | 0 | 1,561 | Ray Zhang |
76,683,800 | 689,242 | Program invoked with subprocess.Popen() does not recognize two arguments passed as one | <p>I have this function in python script on WSL2 and I run it as a super user:</p>
<pre><code>import subprocess
def flash() -> None:
p = subprocess.Popen(
["JLinkExe", ""],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
universal_newlines=True,
text=True
)
# Read the response from JLinkExe
response = p.stdout.read()
print(response)
</code></pre>
<p>As you can see I try to run program <code>JLinkExe</code> which is a command line utility with it's own shell <code>J-Link></code>. For some reason my <code>print()</code> function prints this:</p>
<pre class="lang-none prettyprint-override"><code>SEGGER J-Link Commander V7.60 (Compiled Dec 14 2021 11:40:17)
DLL version V7.60, compiled Dec 14 2021 11:40:00
Unknown command line option .
</code></pre>
<p>First two lines indicate that <code>application JLinkExe</code> did run, but for some reason there is a <code>.</code> appended at the back of the arguments or empty argument is not recognized which is weird.</p>
<hr />
<p>If I change this line:</p>
<pre><code>["JLinkExe", "-nogui 1"]
</code></pre>
<p>The response also changes:</p>
<pre class="lang-none prettyprint-override"><code>SEGGER J-Link Commander V7.60 (Compiled Dec 14 2021 11:40:17)
DLL version V7.60, compiled Dec 14 2021 11:40:00
Unknown command line option -nogui 1.
</code></pre>
<p>Again <code>.</code> is appended to the back of the arguments...</p>
<hr />
<p>If I run the same command as a super user directly in WSL everything looks fine:</p>
<pre class="lang-none prettyprint-override"><code>$ sudo JLinkExe -nogui 1
SEGGER J-Link Commander V7.60 (Compiled Dec 14 2021 11:40:17)
DLL version V7.60, compiled Dec 14 2021 11:40:00
Connecting to J-Link via USB...O.K.
Firmware: J-Link STLink V21 compiled Aug 12 2019 10:29:20
Hardware version: V1.00
S/N: 775087052
VTref=3.300V
Type "connect" to establish a target connection, '?' for help
J-Link>
</code></pre>
<p>Why are my arguments not recognized?</p>
| <python><subprocess> | 2023-07-13 23:15:01 | 1 | 1,505 | 71GA |
76,683,540 | 15,412,256 | How to vectorize complex cumulative aggregation problem? | <p>Dataset:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>date</th>
<th>time_index</th>
<th>identifier</th>
<th>value</th>
<th>cum_value</th>
<th>bar_index</th>
<th>desired_output</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-06-01</td>
<td>1</td>
<td>stackoverflow</td>
<td>5</td>
<td>5</td>
<td>NaN</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>2</td>
<td>stackoverflow</td>
<td>10</td>
<td>15</td>
<td>NaN</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>3</td>
<td>stackoverflow</td>
<td>10</td>
<td>25</td>
<td>NaN</td>
<td>1</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>1</td>
<td>cross_validated</td>
<td>4</td>
<td>4</td>
<td>NaN</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>2</td>
<td>cross_validated</td>
<td>6</td>
<td>10</td>
<td>NaN</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>3</td>
<td>cross_validated</td>
<td>20</td>
<td>30</td>
<td>NaN</td>
<td>1</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>4</td>
<td>cross_validated</td>
<td>5</td>
<td>35</td>
<td>NaN</td>
<td>2</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>1</td>
<td>stackoverflow</td>
<td>2</td>
<td>2</td>
<td>NaN</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>2</td>
<td>stackoverflow</td>
<td>10</td>
<td>12</td>
<td>NaN</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>1</td>
<td>cross_validated</td>
<td>20</td>
<td>20</td>
<td>NaN</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>2</td>
<td>cross_validated</td>
<td>3</td>
<td>23</td>
<td>NaN</td>
<td>1</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>3</td>
<td>cross_validated</td>
<td>3</td>
<td>26</td>
<td>NaN</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>Code that generates the dataset:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df = pd.DataFrame({
"date": ["2023-06-01", "2023-06-01", "2023-06-01", "2023-06-01", "2023-06-01", "2023-06-01", "2023-06-01", "2023-06-02", "2023-06-02", "2023-06-02", "2023-06-02", "2023-06-02"],
"time_index": [1, 2, 3, 1, 2, 3, 4, 1, 2, 1, 2, 3],
"identifier": ["stackoverflow", "stackoverflow", "stackoverflow", "cross_validated", "cross_validated", "cross_validated", "cross_validated",
"stackoverflow", "stackoverflow", "cross_validated", "cross_validated", "cross_validated"],
"value": [5, 10, 10, 4, 6, 20, 5, 2, 10, 20, 3, 3]
})
df["cum_value"] = df.groupby(["identifier", "date"])["value"].cumsum()
df["bar_index"] = np.nan
df["desired_output"] = [0, 0, 1, 0, 0, 1, 2, 0, 0, 0, 1, 1]
</code></pre>
<p>I want to sample <code>bar_index</code> for each <code>identifier</code> and <code>date</code> according to a fixed (for now) threshold <code>τ</code>=10, using a column's <code>value</code> and/or <code>cum_value</code>.</p>
<ul>
<li><code>τ</code> = 10</li>
<li>date: 2023-06-01 = <code>d1</code> & 2023-06-02 = <code>d2</code></li>
<li>identifier: stackoverflow = <code>id1</code> & cross_validated = <code>id2</code></li>
<li>time_index ∈ <code>{t1, t2,...,tn} ∀ d, id</code></li>
</ul>
<ol>
<li><p>Observation {id1, d1, t1} has a value less than the threshold of 10 so we continue to the next entries. If we add the <code>value</code> of {id1, d1, t1} and {id1, d1, t2} together we reach a <code>cum_value</code> (cumulative value) of 15, which exceeds the threshold. Therefore we would sample {id1, d1, t1} as well as {id1, d1, t2} as <code>bar_index</code> 0.</p>
</li>
<li><p>If we encounter an observation with a very large value (for example {id2, d1, t3}) and the previous bar ended (cumulative value exceeded the threshold from the last trade), we would sample this observation along as a <code>bar_index</code>. The next observation starts a new accumulation (in theory).</p>
</li>
</ol>
<p>Current non-vectorized approach:</p>
<pre class="lang-py prettyprint-override"><code>def aggregate_bars(group, threshold):
cum_value = 0
bar_index = 0
for i in range(len(group)):
cum_value += group.iloc[i]["value"]
if cum_value >= threshold:
group["bar_index"].iloc[i] = bar_index
bar_index += 1
cum_value = 0
elif cum_value < threshold:
group["bar_index"].iloc[i] = bar_index
return group
df = df.groupby(["identifier", "date"]).apply(lambda x: aggregate_bars(x, 10))
df
</code></pre>
<p>Output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>date</th>
<th>time_index</th>
<th>identifier</th>
<th>value</th>
<th>cum_value</th>
<th>bar_index</th>
<th>desired_output</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-06-01</td>
<td>1</td>
<td>stackoverflow</td>
<td>5</td>
<td>5</td>
<td>0.0</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>2</td>
<td>stackoverflow</td>
<td>10</td>
<td>15</td>
<td>0.0</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>3</td>
<td>stackoverflow</td>
<td>10</td>
<td>25</td>
<td>1.0</td>
<td>1</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>1</td>
<td>cross_validated</td>
<td>4</td>
<td>4</td>
<td>0.0</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>2</td>
<td>cross_validated</td>
<td>6</td>
<td>10</td>
<td>0.0</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>3</td>
<td>cross_validated</td>
<td>20</td>
<td>30</td>
<td>1.0</td>
<td>1</td>
</tr>
<tr>
<td>2023-06-01</td>
<td>4</td>
<td>cross_validated</td>
<td>5</td>
<td>35</td>
<td>2.0</td>
<td>2</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>1</td>
<td>stackoverflow</td>
<td>2</td>
<td>2</td>
<td>0.0</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>2</td>
<td>stackoverflow</td>
<td>10</td>
<td>12</td>
<td>0.0</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>1</td>
<td>cross_validated</td>
<td>20</td>
<td>20</td>
<td>0.0</td>
<td>0</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>2</td>
<td>cross_validated</td>
<td>3</td>
<td>23</td>
<td>1.0</td>
<td>1</td>
</tr>
<tr>
<td>2023-06-02</td>
<td>3</td>
<td>cross_validated</td>
<td>3</td>
<td>26</td>
<td>1.0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>How to vectorize the code so it can effectively process trillions of rows?</p>
| <python><pandas><vectorization><aggregation><stock> | 2023-07-13 22:08:44 | 1 | 649 | Kevin Li |
76,683,399 | 5,032,387 | Populating large matrix with values | <p>I have a 100K by 12 by 100K matrix that I need to populate with computation results. I tried creating it using numpy.empty but got a memory error.</p>
<p>So I turned to dask instead. I'm able to create the dask array. I'm running a function that creates a vector as I traverse through the 0th and 1st dimension in a for loop. I then populate this vector into the i,jth position of the matrix. If I just populate the dask array as is, just the assignment step takes 50 milliseconds, which is way too long when extrapolated for all atomic cells in the matrix.</p>
<p>It seems it should be possible to speed up the assignment with dask's delayed function, but can't figure it out.</p>
<p>Here's how this would look without delay:</p>
<pre><code>import dask.array as da
import dask.delayed as delayed
from dask import compute
import numpy as np
test_arr = da.empty(shape=(10000, 12, 10000), dtype='float32')
for i in range(test_arr.shape[0]):
for j in range(test_arr.shape[1]):
vals = np.random.normal(size=test_arr.shape[2])
test_arr[i,j,:] = vals
</code></pre>
<p>And here is my attempt at using delay:</p>
<pre><code>def populate_array(i, j, vec):
test_arr[i, j, :] = vec
return test_arr
for i in range(test_arr.shape[0]):
for j in range(test_arr.shape[1]):
vals = np.random.normal(size=test_arr.shape[2])
delayed(populate_array)(i, j, vals)
compute(test_arr)
</code></pre>
<p>The latter doesn't error but just seems to return an array with all zeroes.<br />
I know that I can also speed this up by getting rid of the for loop and vectorizing but assume that is currently not feasible.</p>
<p>I'm not tied to dask per se but it seems like a practical approach with a familiar syntax if coming from pandas / numpy.</p>
<p>Update:
Accepted answer works but the task stream has a lot of blank spaces. I bring this up because my actual use case with a complex create_array_chunk formula just hangs. Cannot see the dashboard or what's going on.</p>
<p><a href="https://i.sstatic.net/TeElO.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TeElO.gif" alt="enter image description here" /></a></p>
| <python><numpy><dask> | 2023-07-13 21:39:23 | 1 | 3,080 | matsuo_basho |
76,683,379 | 2,908,017 | Is there a short one-line way to set the margins of a component in a Python VCL GUI App? | <p>I know you can set each margin (Top, Right, Bottom, Left) individually like the following code:</p>
<pre><code>self.myPanel.AlignWithMargins = True
self.myPanel.Margins.Top = 100
self.myPanel.Margins.Right = 100
self.myPanel.Margins.Bottom = 100
self.myPanel.Margins.Left = 100
</code></pre>
<p>But is there a way to set all four margins with just one line of code instead of having four lines of code (one for each margin)?</p>
<p>I was hoping for something like <code>self.myPanel.Margins = [100, 100, 100, 100]</code>, but this doesn't work. I get the following error <code>AttributeError: Error in setting property Margins (Error: Expected a Pascal object)</code>.</p>
<p>Is there a shorter/better way to set all four margins in one line?</p>
| <python><user-interface><vcl> | 2023-07-13 21:34:58 | 1 | 4,263 | Shaun Roselt |
76,683,183 | 420,157 | Assigning python functions to C structure variables using SWIG | <p>The problem I'm currently facing is to assign a python function to a C structure member variable that is a function pointer.</p>
<p>I get the error from the setter function generated by Swig when we call the SWIG_ConvertFunctionPtr for the assigned value stating that the expected type is not correct. For the sake of simplicity lets consider (void*) function pointer.</p>
<p>C code:</p>
<pre><code>typedef void (*MyFunctionPtr)();
struct MyStruct {
MyFunctionPtr func;
};
</code></pre>
<p>Python Code:</p>
<pre><code>import example
def my_python_function():
print("Hello from Python!")
my_struct = example.MyStruct()
my_struct.func = my_python_function
</code></pre>
<p>the assignment would throw a run time error.</p>
<p>I did not add anything in my swig code to handle this except for including the header file.
I wasn't able to find any tricks from the documentation or other forums.
I won't be able to modify the C code, hence looking for solutions via python/swig.</p>
<p>Do you happen to have any ideas regarding how I can workaround this? Any pointers would be really helpful.</p>
<p>Edit (experiments):</p>
<p>Expt 1:
typemap to typecast the input to void pointer (but still got the same error for mismatched type)</p>
<pre><code>%typemap(in) void* MyFunctionPtr %{
PyObject *func;
if (PyCallable_Check($input)) {
func = $input;
} else {
PyErr_SetString(PyExc_TypeError, "funcptr must be a callable object");
return NULL;
}
$1 = (void*)func;
%}
</code></pre>
<p>Expt 2:
I came across the following trick for C callback in swig. But in this case they are passing the python function as a PyObject and then type casting it to void in the function arguments.</p>
<p><a href="http://www.fifi.org/doc/swig-examples/python/callback/widget.i" rel="nofollow noreferrer">http://www.fifi.org/doc/swig-examples/python/callback/widget.i</a></p>
<p>Still won't be able to achieve the structure assignment in this case.</p>
<p><strong>Partial Solution:</strong></p>
<ul>
<li>created a global PyObject* to store the function. (Bad design according to swig)</li>
<li>override the constructor of the structure to accept the function as an input and assign it to the global variable.</li>
</ul>
| <python><c><python-3.x><swig> | 2023-07-13 20:56:13 | 0 | 777 | Maverickgugu |
76,682,995 | 15,439,115 | psycopg2.InterfaceError: connection already closed on 2nd request: | <p>I am using postgres and this is my code to first get id , If it exist then it will delete data against that record with some conditions . So now Issue is 1st time query work fines even record is there but if I run 2nd time this error I get "psycopg2.InterfaceError: connection already"</p>
<p>My python code to do this is this :</p>
<pre><code>def clean_tables(account_id, node_id, scope_id, node_type):
with conn:
cur = conn.cursor()
print(account_id, "account_id__")
check_id_query = "SELECT uuid FROM my_accounts WHERE name = %s"
cur.execute(check_id_query, (account_id,))
result = cur.fetchone()
if result:
uuid = result[0]
print("ID:", uuid)
scope_id_string = "and cmetadata->>'scope_id' = %s" if scope_id else ''
clean_record_query = "DELETE FROM my_collections WHERE collection_id = %s " \
"AND cmetadata->>'node_type' = %s AND cmetadata->>'node_id' = %s {}".format(scope_id_string)
print(clean_record_query , "clean_record_query_")
cur.execute(clean_record_query, (uuid, node_type, node_id, scope_id))
conn.commit()
if cur.rowcount > 0:
print("Record deleted successfully.")
else:
print("No record found to delete.")
else:
print("No ID found for the given account_id.")
</code></pre>
| <python><postgresql> | 2023-07-13 20:21:54 | 0 | 309 | Ninja |
76,682,993 | 836,026 | How to use pretrained encoder for customized Unet | <p>if you have a standard Unet encoder such as resnet50, then it's easy to add pertaining to it. for example:</p>
<pre><code>ENCODER = 'resnet50'
ENCODER_WEIGHTS = 'imagenet'
CLASSES = class_names
ACTIVATION = 'sigmoid' # could be None for logits or 'softmax2d' for multiclass segmentation
# create segmentation model with pretrained encoder
model = smp.Unet(
encoder_name=ENCODER,
encoder_weights=ENCODER_WEIGHTS,
classes=len(CLASSES),
activation=ACTIVATION,
)
preprocessing_fn = smp.encoders.get_preprocessing_fn(ENCODER, ENCODER_WEIGHTS)
</code></pre>
<p>However, suppose you have a custom-made Unet (not necessarily use resent50) encoder such as:</p>
<pre><code>class VGGBlock(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels):
super().__init__()
self.relu = nn.ReLU(inplace=True)
self.conv1 = nn.Conv2d(in_channels, middle_channels, 3, padding=1)
self.bn1 = nn.BatchNorm2d(middle_channels)
self.conv2 = nn.Conv2d(middle_channels, out_channels, 3, padding=1)
self.bn2 = nn.BatchNorm2d(out_channels)
def forward(self, x):
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
return out
class UNet(nn.Module):
def __init__(self, num_classes, input_channels=3, **kwargs):
super().__init__()
nb_filter = [32, 64, 128, 256, 512]
self.pool = nn.MaxPool2d(2, 2)
self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
self.conv0_0 = VGGBlock(input_channels, nb_filter[0], nb_filter[0])
self.conv1_0 = VGGBlock(nb_filter[0], nb_filter[1], nb_filter[1])
self.conv2_0 = VGGBlock(nb_filter[1], nb_filter[2], nb_filter[2])
self.conv3_0 = VGGBlock(nb_filter[2], nb_filter[3], nb_filter[3])
self.conv4_0 = VGGBlock(nb_filter[3], nb_filter[4], nb_filter[4])
self.conv3_1 = VGGBlock(nb_filter[3]+nb_filter[4], nb_filter[3], nb_filter[3])
self.conv2_2 = VGGBlock(nb_filter[2]+nb_filter[3], nb_filter[2], nb_filter[2])
self.conv1_3 = VGGBlock(nb_filter[1]+nb_filter[2], nb_filter[1], nb_filter[1])
self.conv0_4 = VGGBlock(nb_filter[0]+nb_filter[1], nb_filter[0], nb_filter[0])
self.final = nn.Conv2d(nb_filter[0], num_classes, kernel_size=1)
def forward(self, input):
x0_0 = self.conv0_0(input)
x1_0 = self.conv1_0(self.pool(x0_0))
x2_0 = self.conv2_0(self.pool(x1_0))
x3_0 = self.conv3_0(self.pool(x2_0))
x4_0 = self.conv4_0(self.pool(x3_0))
x3_1 = self.conv3_1(torch.cat([x3_0, self.up(x4_0)], 1))
x2_2 = self.conv2_2(torch.cat([x2_0, self.up(x3_1)], 1))
x1_3 = self.conv1_3(torch.cat([x1_0, self.up(x2_2)], 1))
x0_4 = self.conv0_4(torch.cat([x0_0, self.up(x1_3)], 1))
output = self.final(x0_4)
return output
</code></pre>
<p>How to do Imagenet pretraining for the encoder. I assume doing pretraining for the encoder from scratch will take long time. Is there a way to utilize an existing pre-trained encoder such as the resnet50 for such Unet.</p>
| <python><deep-learning><pytorch><pre-trained-model><unet-neural-network> | 2023-07-13 20:21:42 | 1 | 11,430 | user836026 |
76,682,983 | 12,466,687 | How to rename column on basis of condition in Polars python? | <p>I am trying to <strong>rename column</strong> on basis of a <strong>condition</strong> in <code>Polars python</code> but getting errors.</p>
<p><strong>Data:</strong></p>
<pre><code>import polars as pl
test_df = pl.DataFrame({'Id': [100118647578,
100023274028,100023274028,100023274028,100118647578,
100118647578,100118647578,100023274028,100023274028,
100023274028,100118647578,100118647578,100023274028,
100118647578,100118647578,100118647578,100118647578,
100118647578,100118647578,100023274028,100118647578,
100118647578,100118647578,100118647578,100023274028,
100118647578,100118647578,100118647578,100023274028,
100118647578,100118647578,100023274028],
'Age': [49,22,25,18,41,45,42,30,28,
20,44,56,26,53,40,35,29,
8,55,23,54,36,52,33,29,
10,34,39,27,51,19,31],
'Status': [2,1,1,1,1,1,1,3,2,1,1,
1,2,1,1,1,1,1,1,2,1,1,1,1,2,1,1,
1,1,1,1,4]})
</code></pre>
<p><strong>Below code</strong> is to <strong>filter</strong> the data on basis of value from argument and rename on same basis:</p>
<pre><code>def Age_filter(status_filter_value = 1):
return (
test_df
.filter(pl.col('Status') == status_filter_value)
.sort(['Id','Age'])
.groupby('Id')
.agg( pl.col('Age').first())
.sort('Id')
# below part of code is giving error
.rename({'Age' : pl.when(status_filter_value == 1)
.then('30_DPD_MOB')
.otherwise(pl.when(status_filter_value == 2)
.then('60_DPD_MOB')
.otherwise(pl.when(status_filter_value == 3)
.then('90_DPD_MOB')
.otherwise('120_DPD_MOB')
)
)
})
)
Age_filter()
</code></pre>
<p>this gives an error: <code>TypeError: argument 'new': 'Expr' object cannot be converted to 'PyString'</code></p>
<p>I have also tried <strong>below code</strong> but that is also not working:</p>
<pre><code>def Age_filter1(status_filter_value = 1):
{
renamed_value = pl.when(status_filter_value == 1)
.then('30')
.otherwise(pl.when(status_filter_value == 2)
.then('60')
.otherwise(pl.when(status_filter_value == 3)
.then('90')
.otherwise('120')
)
)
return (
test_df
.filter(pl.col('Status') == status_filter_value)
.sort(['Id','Age'])
.groupby('Id')
.agg( pl.col('Age').first())
.sort('Id')
.rename({'Age' : renamed_value
})
)
}
Age_filter1()
</code></pre>
| <python><dataframe><python-polars> | 2023-07-13 20:19:39 | 1 | 2,357 | ViSa |
76,682,960 | 10,491,381 | Single machine scheduling - due dates constraints | <p>I am trying to program the single machine scheduling/bottleneck optimization, with python pulp, without making for loops to better understand.</p>
<p>I have 2 tasks called task1 and task2 to schedule that respectively last 1H and 3H, with due dates respectively at 1H and 4H.</p>
<p>Here are two solutions cases: first one is bad, the second one is good.</p>
<pre><code># Ko, because the task 1 due date is outdated
# hours 1 2 3 4
# task1 -
# task2 - - -
# Ok, because due dates are respected
# hours 1 2 3 4
# task1 -
# task2 - - -
</code></pre>
<p>I want the pulp solver to find solution 2 above.</p>
<p>Here is my Pulp code, solution is close, maybe you can help me write the "due dates" type constraints, I can't do it.There are no FOR loops, it's done on purpose to try to understand...</p>
<pre><code>import pulp as p
# Tasks
tasks = ["task1","task2"]
duration = {1,3}
due = {1,4}
starts = {1,2,3,4}
# Create problem
prob = p.LpProblem('single_machine_scheduling', p.LpMinimize)
# Each possible "time slots" to select for tasks
task1_1 = p.LpVariable("task1_1", 0, None, p.LpBinary)
task1_2 = p.LpVariable("task1_2", 0, None, p.LpBinary)
task1_3 = p.LpVariable("task1_3", 0, None, p.LpBinary)
task1_4 = p.LpVariable("task1_4", 0, None, p.LpBinary)
task2_1 = p.LpVariable("task2_1", 0, None, p.LpBinary)
task2_2 = p.LpVariable("task2_2", 0, None, p.LpBinary)
task2_3 = p.LpVariable("task2_3", 0, None, p.LpBinary)
task2_4 = p.LpVariable("task2_4", 0, None, p.LpBinary)
# Theses constraints should be used for due dates constraints eventually...
start_1 = p.LpVariable("start_1", 0, None, p.LpContinuous)
start_2 = p.LpVariable("start_2", 0, None, p.LpContinuous)
start_3 = p.LpVariable("start_3", 0, None, p.LpContinuous)
start_4 = p.LpVariable("start_4", 0, None, p.LpContinuous)
# Objective : Minimize timespan
prob += 1 * task1_1 + 1 * task1_2 + 1 * task1_3 + 1 * task1_4 + 3 * task2_1 + 3 * task2_2 + 3 * task2_3 + 3 * task2_4
# Constraints
# Only one task1 and one task2 can be selected
prob += task1_1 + task1_2 + task1_3 + task1_4 == 1
prob += task2_1 + task2_2 + task2_3 + task2_4 == 1
# Due dates constraints ( How to ?)
# Solve
prob.solve()
# Print variables
for v in prob.variables():
print(v.name, "=", v.varValue)
# Print objective
print("Minimized timespan = ", p.value(prob.objective))
start_3 = 0.0
start_4 = 0.0
task1_1 = 0.0
task1_2 = 0.0
task1_3 = 0.0
task1_4 = 1.0
task2_1 = 0.0
task2_2 = 0.0
task2_3 = 0.0
task2_4 = 1.0
Minimized timespan = 4.0
</code></pre>
<p>Maybe you can point me in the right direction? I also can't write them in the mathematical model, The ones I found include additional parameters like weight, but I don't want that here..</p>
<p>Edit: Oops sorry, I have learned that it's called "Total Tardiness minimization", and the Moore and Hodgson algorithm .</p>
| <python><optimization><pulp> | 2023-07-13 20:15:36 | 1 | 347 | harmonius cool |
76,682,932 | 8,820,463 | Issue with authenticating Google Earth Engine in Jupyter notebook | <p>I have a conda environment on my local computer through which I have installed the earthengine-api. If I type <code>python</code> to run python inline, and then do <code>import ee</code> followed by <code>ee.Authenticate()</code>, I am able to authenticate earth engine no problem using gCloud. However, when I go to jupyter notebook in that same environment and try to run the same code (<code>import ee</code> followed by <code>ee.Authenticate()</code> ), I get directed to a slightly different looking authentication page (titled 'notebook authenticator', which has a button that says 'generate token'.</p>
<p>However, when I click that button, I get an error that says "<em>Project has an incompatible OAuth2 Client configuration. Please select a different project. See <a href="https://developers.google.com/earth-engine/guides/auth#troubleshooting" rel="nofollow noreferrer">https://developers.google.com/earth-engine/guides/auth#troubleshooting</a> for details.</em>"</p>
<p>Any thought of what could be going on?</p>
| <python><jupyter-notebook><google-earth-engine> | 2023-07-13 20:11:46 | 0 | 483 | Ana |
76,682,859 | 12,279,326 | extracting a piece of text from HTML | <p>I need to be able to extract snippet one, without capturing snippet two, which is disabled, using a single CSS selector.</p>
<pre><code>snippet one
<button data-v-4e0029d1=""><div data-v-4e0029d1="">Small</div></button>
snippet two
<button data-v-4e0029d1=""><div data-v-4e0029d1="">Large</div><div class="disabled" data-v-4e0029d1="">disabled</div></button>
</code></pre>
<p>I am drawing a blank, I have tried the following:</p>
<pre><code>html.css("button > div:first-child:not(:has(button > div + div.disabled))")
</code></pre>
<pre><code>html.css("button > div:not(:has(div.disabled))")
</code></pre>
<pre><code>html.css("button > div:not(.disabled)")
</code></pre>
<pre><code>html.css("button > div[data-v-^]")
</code></pre>
<p>here is a reproducible example</p>
<pre><code>from selectolax.parser import HTMLParser
html = """
<button data-v-4e0029d1=""><div data-v-4e0029d1="">Small</div></button>
<button data-v-4e0029d1=""><div data-v-4e0029d1="">Large</div><div class="disabled" data-v-4e0029d1="">disabled</div></button>
html = HTMLParser(html)
"""
</code></pre>
<p>Any help would be most appreciated. FYI I used selectolax for parsing however for this example and support I do not mind a bs4 flavour.</p>
| <python><web-scraping> | 2023-07-13 20:00:27 | 3 | 948 | dimButTries |
76,682,815 | 5,052,482 | ipywidgets clear and reset TextArea | <p>After I add some text to <code>TextArea</code> and run some preprocessing on that text I would like to reset the TextArea widget so I can add new text to it.</p>
<p>For example:</p>
<pre><code>q1 = Textarea(layout=Layout(width='70%', height='100px'))
q1 # opens a window and then I add text to it
</code></pre>
<p><a href="https://i.sstatic.net/wkfF2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wkfF2.png" alt="enter image description here" /></a></p>
<pre><code>q2 = re.sub(r'[^\x00-\x7f]',r'', q1.value)
# do some more preprocessing
q1.reset() # I am looking for something like this
</code></pre>
<p>After preprocessing is complete, I'd like to reset <code>q1</code> to a blank cell so it is ready to take in as input a completely new value.</p>
<p>Any help on this is appreciated.</p>
| <python><jupyter-notebook><ipython><ipywidgets> | 2023-07-13 19:54:03 | 1 | 2,174 | Jash Shah |
76,682,762 | 12,860,716 | Python multiprocessing and shared memory: AttributeError: 'module' object has no attribute 'SharedMemory' | <p>I'm working on a Python project where I need to use multiprocessing and shared memory to efficiently process large amounts of data. However, when attempting to use the SharedMemory class from the multiprocessing module, I encounter <strong>an AttributeError: 'module' object has no attribute 'SharedMemory' error</strong>. I'm confused because I thought SharedMemory was a valid class in the multiprocessing module. Can someone please shed some light on this issue and provide guidance on how to properly use shared memory in multiprocessing?</p>
<p><strong>Code snippet:</strong></p>
<pre><code>import multiprocessing as mp
def worker(shared_memory):
# Perform some operations using shared memory
if __name__ == "__main__":
shared_memory = mp.SharedMemory(name='my_shared_memory', create=True, size=100)
# Rest of the code...
</code></pre>
<p><strong>Expected behavior:</strong>
I expect to be able to create a shared memory segment using SharedMemory and use it within the worker function to process data concurrently.</p>
<p><strong>Error message:</strong></p>
<pre><code>AttributeError: 'module' object has no attribute 'SharedMemory'
</code></pre>
<p><strong>Additional information:</strong></p>
<p>Python version: 3.11</p>
<p>Operating system: Windows 10</p>
<p>I have verified that the multiprocessing module is imported correctly without any syntax errors.
Any help or insights on how to resolve this issue and successfully utilize shared memory in multiprocessing would be greatly appreciated.</p>
| <python><memory><concurrency><multiprocessing> | 2023-07-13 19:45:50 | 1 | 1,151 | Bialomazur |
76,682,761 | 895,029 | SQLAlchemy - Bind params and compile an INSERT statement from TextClause | <p>I'm trying to generate the raw SQL required for an insert statement using a DataFrame of values. I am aware of <code>df.to_sql()</code>, but I need the actual text query.</p>
<p>I'm not sure how to bind the values to the insert statement itself (when only using the simply <code>TextClause</code> instance).</p>
<p>For example,</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import sqlalchemy as sql
df = pd.DataFrame(data={"foo": [1, 2, 3], "bar": ["a", "b", "c"]})
query = sql.text(
"""
INSERT INTO some_table(Foo, Bar)
VALUES(:foo, :bar)
"""
)
conn = sql.engine.create_engine("mysql+pymysql://user:pass@localhost:1234/some_db")
# Psuedo code of what I'd like to do
query_with_values = query.bind(df.to_dict(orient="records"))
print(query_with_values.compile(conn))
# And what I'd like to see
# INSERT INTO some_table(Foo, Bar) VALUES (1, "a"), (2, "b"), (3, "c")
</code></pre>
<p>Bonus points if you know how I can specify the type of the columns (as I can if I were using <code>bindparam</code>)</p>
<p>Thanks!</p>
| <python><sql><python-3.x><pandas><sqlalchemy> | 2023-07-13 19:45:50 | 0 | 4,506 | rwb |
76,682,739 | 12,436,050 | QueryBadFormed: QueryBadFormed: SPARQLWrapper | <p>I am trying to insert triples into blazegraph using below sparql query.</p>
<pre><code>for index, row in df_omim.iterrows():
omim = row['mim_number']
omim_label = row['preferred_title_symbol']
rel = row['skos_rel']
meddra = row['MDR_code']
queryString = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
INSERT DATA {
GRAPH <http://example.org/mapping>
{ %s skos:%s %s ;
rdfs:%s %s .}}
""" %(omim, rel, meddra, 'label', omim_label)
sparql = SPARQLWrapper("http://blazegraph/namespace/HC2/sparql/update")
sparql.setQuery(queryString)
sparql.method = 'POST'
sparql.query()
</code></pre>
<pre><code>Sample triples are:
<https://www.omim.org/entry/202110> skos:exactMatch <https://identifiers.org/meddra:10000014> ;
rdfs:label ADRENAL HYPERPLASIA, CONGENITAL, DUE TO 17-ALPHA-HYDROXYLASE DEFICIENCY .
</code></pre>
<p>The value of omim and meddra are <a href="https://www.omim.org/entry/202110" rel="nofollow noreferrer">https://www.omim.org/entry/202110</a>, <a href="https://identifiers.org/meddra:10000014" rel="nofollow noreferrer">https://identifiers.org/meddra:10000014</a></p>
<p>I am getting below error while running above SPARQL query.
<code>QueryBadFormed: QueryBadFormed: A bad request has been sent to the endpoint: probably the SPARQL query is badly formed.</code></p>
<p>Any help is highly appreciated</p>
| <python><sparql><sparqlwrapper><blazegraph> | 2023-07-13 19:41:23 | 0 | 1,495 | rshar |
76,682,576 | 726,730 | Python PyQt5 - Convert QTime to python time object | <p>I have to solve this error:</p>
<pre class="lang-py prettyprint-override"><code>day_frequency_parameters = self.main_self.ui_scheduled_transmitions_create_window.daily_one_time_timeEdit.time().strftime('%H:%M:%S')
AttributeError: 'QTime' object has no attribute 'strftime'
</code></pre>
<p>where <code>daily_one_time_timeEdit</code> is a QTimeEdit object.</p>
<p>Is there any way to convert QTimeEdit or QTime to python time object?</p>
| <python><strftime><qtime><qtimeedit> | 2023-07-13 19:15:03 | 1 | 2,427 | Chris P |
76,682,513 | 19,580,540 | GitHub Action Error: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt' while deploying on Azure App Service | <p>I built a simple <strong>Django</strong> application and am trying to deploy it using <strong>Azure App Service</strong> and <strong>GitHub Actions CI/CD</strong>. While deploying, I am getting the error-
<a href="https://i.sstatic.net/rpQ7M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rpQ7M.png" alt="enter image description here" /></a>
Here is my project directory structure-
<a href="https://i.sstatic.net/jnrT8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jnrT8.png" alt="enter image description here" /></a>
Can somebody please help me understand what the issue is here? I came accross this answer but this is not working for me- <a href="https://stackoverflow.com/questions/74067023/github-for-django-on-azure-could-not-open-requirements-file-errno-2-no-such">GitHub for Django on Azure: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'</a></p>
| <python><django><azure-web-app-service><github-actions><requirements.txt> | 2023-07-13 19:03:13 | 1 | 686 | ayush |
76,682,408 | 11,370,582 | Linear Decision Boundary in Logistic Regression | <p>Which line(s) of code are responsible for the additional linear decision boundaries, 15%, 30%, etc., in the following code?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
iris = datasets.load_iris()
print(list(iris.keys()))
X = iris['data'][:,3:] #petal width
y = (iris['target']==2).astype(np.int32) #1 if virginica, else 0
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.int32)
log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42)
log_reg.fit(X, y)
x0, x1 = np.meshgrid(
np.linspace(2.9, 7, 500).reshape(-1, 1),
np.linspace(0.8, 2.7, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = log_reg.predict_proba(X_new)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs")
plt.plot(X[y==1, 0], X[y==1, 1], "g^")
zz = y_proba[:, 1].reshape(x0.shape)
contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg)
left_right = np.array([2.9, 7])
boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1]
plt.clabel(contour, inline=1, fontsize=12)
plt.plot(left_right, boundary, "k--", linewidth=3)
plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center")
plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.axis([2.9, 7, 0.8, 2.7])
plt.show()
</code></pre>
<p>I'm working though a ML textbook with undocumented code and this is the one feature I can't figure out the origin of.</p>
| <python><matplotlib><machine-learning><scikit-learn><logistic-regression> | 2023-07-13 18:46:24 | 1 | 904 | John Conor |
76,682,394 | 5,405,669 | save to local and S3 object store (efficiently) | <p>I have an artifact that I would like to save to the local drive as well as to a S3 object store. I suspect there might be a more efficient way to</p>
<ol>
<li>write to the local drive (sync)</li>
<li>send a copy of the file to S3 (async)</li>
</ol>
<p>I'm torn on how to use <code>dump</code> and <code>dumps</code> in a way that makes sense.</p>
<pre class="lang-py prettyprint-override"><code>// res can be any python type
with open(cachepath, 'wb') as f:
logger.info("Writing results to cache (pkl).")
pickle.dump(res, f)
post_to_object_store(cachepath,
pickle.dumps(res),
async=True,
app=current_app)
</code></pre>
<p>The good is that I'm reusing the <code>res</code> value stored in memory. Is there a way to leverage the first conversion to pickle?</p>
| <python><amazon-web-services><flask><amazon-s3> | 2023-07-13 18:43:53 | 0 | 916 | Edmund's Echo |
76,682,341 | 13,394,817 | Not filled, white voids near boundaries of polar plot | <p>I have tried to create a <em>polar plot</em> using <em>matplotlib</em>'s <code>contourf</code>, but it produces some not-filled areas near the circle boundary, in just some sides of the plot. It seems, this problem is a common problem that we can see in many examples e.g. <a href="https://stackoverflow.com/a/9083017/13394817">1</a>.</p>
<p><a href="https://i.sstatic.net/GlJQe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GlJQe.png" alt="enter image description here" /></a></p>
<p>At first, I thought it may needs some interpolations and tried interpolating based on some available methods e.g. <a href="https://stackoverflow.com/a/15432819/13394817">2</a>. But I couldn't produce a perfect plot. Interpolating using <em>SciPy griddata</em> with <em>linear</em> method solved the main issue but produce some shadows on the plot, and the <em>cubic</em> method result in some inappropriate colors (which shown the results incorrect).</p>
<p><a href="https://i.sstatic.net/itDlG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itDlG.png" alt="enter image description here" /></a></p>
<p>Finally, I guess this issue may be related to figure size that I specified and the dpi that I used. With low dpi it was cured a lot, but will get a low quality png. when I deactivate the related line for specifying figure size (<code># plt.rcParams["figure.figsize"] = (19.2, 9.6)</code>) with <code>dpi=600</code>, it is shown fairly correct, but not on the needed figure size.</p>
<p><a href="https://i.sstatic.net/ctCug.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ctCug.png" alt="enter image description here" /></a></p>
<p><strong>The main question</strong> is how to solve the issue for saved files, with the desired specified figure size and dpi? It must be said that the problem is appearing on the <strong>saved files</strong>.</p>
<p>Besides the answer to solve the issue, I will be appreciated if any answer about these questions too:</p>
<ul>
<li><p>Why it happens?</p>
</li>
<li><p>Why it happens just in some sides of the plot? In this example it is problematic just on the right side of the quarter, not the upside. This issue makes me doubt that the data is correctly shown on the circle for analysis. Is it OK?</p>
</li>
<li><p>Do we need interpolating on such data? If so, which algorithms will be the best for that which does not show the results incorrectly as cubic method in Scipy interpolation and without shadowing as the linear method? If choosing between algorithms is based on the case, how to decide for that? It will be very helpful if be explained with examples.</p>
</li>
</ul>
<pre><code>import numpy as np
from matplotlib import cm
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (19.2, 9.6)
save_dpi = 600
Azimuth = np.tile(np.arange(0, 91, 10), 10)
Deviation = np.repeat(np.arange(0, 91, 10), 10)
color_data = np.array([2123, 2124, 2126, 2130, 2135, 2139, 2144, 2147, 2150, 2151, 2212,
2211, 2205, 2197, 2187, 2176, 2166, 2158, 2152, 2150, 2478, 2468,
2439, 2395, 2342, 2285, 2231, 2188, 2160, 2150, 2912, 2888, 2819,
2715, 2589, 2456, 2334, 2236, 2172, 2150, 3493, 3449, 3324, 3135,
2908, 2674, 2462, 2294, 2187, 2150, 4020, 4020, 3912, 3618, 3270,
2917, 2602, 2357, 2203, 2151, 4020, 4020, 4020, 4020, 3633, 3156,
2737, 2417, 2218, 2150, 4020, 4020, 4020, 4020, 3947, 3358, 2850,
2466, 2230, 2150, 4020, 4020, 4020, 4020, 4020, 3495, 2926, 2499,
2238, 2150, 4020, 4020, 4020, 4020, 4020, 3543, 2951, 2510, 2241,
2150])
ax = plt.subplot(projection='polar')
plt.xticks([])
plt.yticks([])
Az = np.unique(Azimuth)
Dev = np.unique(Deviation)
mAz, mDev = np.meshgrid(Az, Dev)
# way1: Original
Xi, Yi, Zi = np.deg2rad(mAz), mDev, color_data.reshape(mAz.shape)
contour_plot = ax.contourf(Xi, Yi, Zi, levels=256, cmap=cm.viridis_r, zorder=1)
ax.plot()
plt.savefig("way1.png", dpi=save_dpi)
# way2: Interpolation
# import scipy.interpolate as sci_int
# Xi = np.linspace(0, 2 * np.pi, 256, endpoint=True)
# Yi = np.linspace(0, 90, 256, endpoint=True)
# Zi = sci_int.griddata(np.stack((np.deg2rad(mAz), mDev), axis=2).reshape(len(Azimuth), 2), color_data,
# (Xi[None, :], Yi[:, None]), method='linear')
# contour_plot = ax.contourf(Xi, Yi, Zi, levels=256, cmap=cm.viridis_r, zorder=1)
# ax.plot()
# plt.savefig("way2.png", dpi=save_dpi)
</code></pre>
<p><strong>Tested on windows 10 by:</strong><br />
<strong>Python ver.:</strong> 3.8 & 3.10<br />
<strong>Matplotlib ver.:</strong> 3.5.3 & 3.7.2</p>
| <python><matplotlib><plot><cartopy> | 2023-07-13 18:35:06 | 1 | 2,836 | Ali_Sh |
76,682,210 | 5,942,100 | Retain original number of rows with dataset to be matched, when pairing values from two different datasets in Pandas | <p><strong>Data</strong></p>
<p>df1</p>
<pre><code>ID stat
AA1 exzone
BB2 exzone5
CC4 limit5
</code></pre>
<p>df2</p>
<pre><code>name state
AA1 NY
AA1 NY
AA1 NY
AA1 NY
BB2 GA
BB2 GA
BB2 GA
CC4 CA
CC4 CA
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>name stat state
AA1 exzone NY
BB2 exzone5 GA
CC4 limit5 CA
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>out = pd.merge(df1,df2, left_on=['ID'], right_on= ['name'], how="left")
</code></pre>
<p>however, the above script is giving an exploded output and does not retain the original Left dataframe row count. Any suggestion is appreciated.</p>
| <python><pandas><numpy> | 2023-07-13 18:13:40 | 1 | 4,428 | Lynn |
76,682,117 | 15,439,115 | AttributeError: 'int' object has no attribute 'encode when uploading txt file containing query to s3 | <p>Hi I have a query in this form :</p>
<pre><code>data = {
"pg_content": "CREATE OR REPLACE TABLE CREDITCARDS.CC_TRANSACTION(\n TRANSACTION_ID DECIMAL COMMENT 'Identifier.Values between 0 and 23162883'\n ACCOUNT_NUMBER VARCHAR COMMENT 'Categorical Attribute with specific values as ACC2060,ACC1188,ACC1437,ACC5552,ACC98,ACC2240,ACC4096,ACC5555,ACC22,ACC4232'\n TRANSACTION_AMOUNT DECIMAL COMMENT 'Values between -6125.96 and 600.00'\n TRANSACTION_DATE DATE COMMENT 'Values between 2023-01-09 and 2023-06-16'\n TRANSACTION_TYPE VARCHAR COMMENT 'Categorical Attribute with specific values as Purchase,Cash Advance,Void,Refund,Verification,Payment'\n MERCHANT_NAME VARCHAR COMMENT 'Categorical Attribute with specific values as Merchant 250575,Merchant 265897,Merchant 54632,Merchant 100866,Merchant 749929,Merchant 250268,Merchant 486642,Merchant 27292,Merchant 250396,Merchant 108175',\n PRIMARY KEY(TRANSACTION_ID),\n FOREIGN KEY(ACCOUNT_NUMBER) REFERENCES CREDITCARDS.CREDIT_CARD_ACCOUNT(ACCOUNT_NUMBER)\n); "
}
</code></pre>
<p>I want to write this content to a txt file and upload it to s3 but during upload I am getting this error :</p>
<p><strong>[ERROR] AttributeError: 'int' object has no attribute 'encode</strong></p>
<p>My code is :</p>
<pre><code>local_file_name = 'local_file'
metadata = {'_id':123 , 'name':'new'}
text_file_path = '/tmp/' + str(local_file_name) + '.csv'
s3_client = boto3.client('s3')
with open(text_file_path, 'w') as file:
file.write(data.get('pg_content'))
# Upload the text file to S3
s3_client.upload_file(text_file_path, bucket_name, file_name)
s3_client.put_object(
Bucket=bucket_name,
Key=file_name,
Metadata=metadata
)
</code></pre>
| <python><python-3.x><boto3> | 2023-07-13 18:01:36 | 2 | 309 | Ninja |
76,682,072 | 11,703,015 | ModuleNotFoundError: No module named 'imagekitio' | <p>I am trying to import the <code>imagekitio</code> module for a project I am working on. However, I am getting the follwing error:</p>
<pre><code>ModuleNotFoundError: No module named 'imagekitio'
</code></pre>
<p>At first, you might think that the module is not installed, but I checked and it is actually installed. For instance:</p>
<pre><code>pip show imagekitio
</code></pre>
<p>An the response is:</p>
<pre><code>Name: imagekitio
Version: 3.1.0
Summary: Python wrapper for the ImageKit API
Home-page: https://github.com/imagekit-developer/imagekit-python
Author:
Author-email:
License:
Location: C:\Users\nekov\Cloud-Drive\Máster Data Engineering\Módulo 2 - Arquitecturas transaccionales\M02-proyecto-consolidacion\Solucion proyecto consolidacion\project_env\Lib\site-packages
Requires: requests, requests-toolbelt, urllib3
Required-by:
</code></pre>
<p>I am executing this in a new environment I have created. I have tried deactivating/activating the environment again, uninstalling the <code>imagekitio</code> and installing again, but it does not work.</p>
<p>Do you know how I could solve this issue?</p>
| <python><modulenotfounderror> | 2023-07-13 17:52:47 | 1 | 516 | nekovolta |
76,682,015 | 11,152,224 | How to connect celery to rabbitmq in django from docker | <p>I learn docker and don't understand why celery can't connect to rabbitmq. I've created 2 images for rabbitmq and django, here is my docker-compose:</p>
<pre><code>version: "3.0"
services:
# WEB
django:
build: .
volumes:
- media_volume:/website/journal_website/media
- static_volume:/website/journal_website/static
- database_volume:/website/journal_website/database
ports:
- "8000:8000"
depends_on:
- rabbit
# RabbitMQ
rabbit:
hostname: rabbit
container_name: rabbitmq
image: rabbitmq:3.12-rc-management
environment:
- RABBITMQ_DEFAULT_USER=simple_user
- RABBITMQ_DEFAULT_PASS=simple_password
ports:
# AMQP protocol port
- "5672:5672"
# HTTP management UI
- "15672:15672"
restart: always
volumes:
media_volume:
static_volume:
database_volume:
</code></pre>
<p>my broker configuration in <code>settings.py</code> in django:</p>
<pre><code>CELERY_BROKER_URL = 'amqp://localhost:5672'
</code></pre>
<p><code>celery.py</code> file:</p>
<pre><code>from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "website.settings")
celery_application = Celery("website")
celery_application.config_from_object("django.conf:settings", namespace="CELERY")
celery_application.autodiscover_tasks()
</code></pre>
<p><code>entrypoint.sh</code> that's used as entrypoint in <code>Dockerfile</code>:</p>
<pre><code>#!/bin/bash
# Start Celery
celery -A website worker -l info
# Run Django server
python3.11 manage.py runserver 127.0.0.1:8000
</code></pre>
<p>and <code>Dockerfile</code>:</p>
<pre><code># Many many commands to install python and all dependencies in ubuntu 20.04
# Run Django server
EXPOSE 8000
ENTRYPOINT ["/entrypoint.sh"]
</code></pre>
<p>I use Docker Desktop on Windows 10, when I run <code>docker compose up</code> or run container in desktop application I get such error: <code>[Date: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.</code>, I've checked that <code>localhost</code> is <code>127.0.0.1</code>. What should I do to run celery properly?</p>
| <python><django><docker><rabbitmq><celery> | 2023-07-13 17:43:21 | 1 | 569 | WideWood |
76,681,969 | 1,249,581 | How to combine two match patterns with guards in Python | <p>Consider the following case:</p>
<pre class="lang-py prettyprint-override"><code>match var1:
case enum.A if var2 == 1:
{code block}
case enum.B if var2 == 2:
{code block}
...
</code></pre>
<p>Here <code>{code block}</code> is exactly the same code, which I don't want to repeat.</p>
<p>Is there a nice way of doing a fall-through or combining these two cases without refactoring <code>{code block}</code> into a function?</p>
| <python><pattern-matching> | 2023-07-13 17:34:19 | 2 | 145,636 | VisioN |
76,681,632 | 10,918,680 | Vectorizing or speed up for loop in Pandas for data transformation | <p>I have a dataframe in the following format:</p>
<pre><code>df = pd.DataFrame({'Parent_username': ['Bob1', 'Ron23', 'Lisa00', 'Joe_'],
'Parent_age': [38, None, 40, 26],
'Child1_name': ['Mike', 'John', 'Curt', 'Kelly'],
'Child1_age': [2, None, 1, 2],
'Child2_name': ['Pat', 'Dennis', None, None],
'Child2_age': [4, None, None, None]})
Parent_username Parent_age Child1_name Child1_age Child2_name Child2_age
0 Bob1 38.0 Mike 2.0 Pat 4.0
1 Ron23 NaN John NaN Dennis NaN
2 Lisa00 40.0 Curt 1.0 None NaN
3 Joe_ 26.0 Kelly 2.0 None NaN
</code></pre>
<p>As you can see above, each row corresponds to a parent (unique ID), and each parents can have multiple children. There can be many children but I have 2 listed, and each child can have many attributes, but I only have 2 (name, age) in this example. The child attribute columns follow the same convention.</p>
<p>I'd like to transform it to such:</p>
<pre><code>df2 = pd.DataFrame({'Child_name': ['Mike', 'Pat', 'John', 'Dennis', 'Curt', 'Kelly'],
'Child_number': [1, 2, 1, 2, 1, 1],
'Child_age': [2, 4, None, None, 1, 2],
'Parent_username': ['Bob1', 'Bob1', 'Ron23', 'Ron23', 'Lisa00', 'Joe_'],
'Parent_age': [38, 38, None, None, 40, 26]})
Child_name Child_number Child_age Parent_username Parent_age
0 Mike 1 2.0 Bob1 38.0
1 Pat 2 4.0 Bob1 38.0
2 John 1 NaN Ron23 NaN
3 Dennis 2 NaN Ron23 NaN
4 Curt 1 1.0 Lisa00 40.0
5 Kelly 1 2.0 Joe_ 26.0
</code></pre>
<p>Each row corresponds to a child, and the Child_number indicates if it's the first child or second child, etc.</p>
<p>In order to speed things up, I pre-allocated space for df2 by making an empty dataframe of the right size, rather than doing concatenation. I first looped through df1 by counting how many children each parent has, to get the number of rows needed for df2.</p>
<p>Then, I built dictionaries of indexes to map each child/parent to its row/rows in df2. I figure since dictionary lookup is fast, this is better than trying to find the row in df2 each time using where(). Again, a for loop was used for this.</p>
<p>Those actually does not take long. But the actual copying of the data from df to df2 takes a long time using for loop:</p>
<pre><code>for index in df.index:
for col in df.columns:
// copy df.loc[index, col] into the corresponding position in df2 using dataframe.loc
</code></pre>
<p>I'm really hoping there is a faster way to do this. I don't understand vectorization very well and I'm not sure if it works well for string columns.</p>
<p>Please advise.
Thanks</p>
| <python><pandas><numpy> | 2023-07-13 16:41:15 | 1 | 425 | user173729 |
76,681,559 | 1,494,193 | How to insert a list of values with corresponding index numbers into a pandas column? | <p>How can I update a column when I have the values with their corresponding index? Without iterating.</p>
<p>df1:</p>
<pre><code> index_copy Numeric Value
0 63 7.999600e+07
1 64 8.333368e+07
2 70 3.999600e+07
3 71 7.333368e+07
4 77 2.999600e+07
</code></pre>
<p>Into another dataframe column with empty values. In this case I want to add the values at index 63, 64, 70, 71 and 77.</p>
<p>df2['Column_to_be_updated']:</p>
<pre><code> Column_to_be_updated
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
</code></pre>
| <python><pandas><dataframe> | 2023-07-13 16:29:58 | 1 | 401 | Gorgonzola |
76,681,498 | 10,754,437 | Retrieving value from condition in CDK | <p>When creating a condition as described here <a href="https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk/CfnCondition.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk/CfnCondition.html</a>.</p>
<pre><code>my_param = aws_cdk.CfnParameter(self, "MyParm",
type="AWS::SSM::Parameter::Value<String>",
default="/path/name"
)
condition = aws_cdk.CfnCondition(self, "Example",
expression=aws_cdk.Fn.condition_equals(my_param.value_as_string, "someString")
)
</code></pre>
<p>How would I retrieve the value from the condition so I can use it in a different resource as input to a parameter? Like for example to sepcify if SSL should be used on a s3 bucket.</p>
<pre><code>s3.Bucket(scope, 'Bucket', {
enforceSSL: USE_BOOLEAN_RESULT_HERE,
});
</code></pre>
| <python><aws-cdk> | 2023-07-13 16:21:46 | 1 | 2,597 | SecretIndividual |
76,681,353 | 7,657,180 | Add header to existing pdf file in python | <p>I am trying to insert header <code>YasserKhalil</code> to <code>Sample.pdf</code> file.
Tried this code</p>
<pre><code>import PyPDF2
def add_header_footer_pdf(input_file, output_file, header_text):
with open(input_file, 'rb') as file:
pdf_reader = PyPDF2.PdfReader(file)
pdf_writer = PyPDF2.PdfWriter()
for page_num in range(len(pdf_reader.pages)):
page = pdf_reader.pages[page_num]
header = PyPDF2.pdf.PageObject.createBlankPage(None, page.mediaBox.getWidth(), 30)
header.mergeTranslatedPage(page, 0, 30)
header.mergeTranslatedPage(PyPDF2.pdf.PageObject.createTextObject(None, header_text), 0, 5)
pdf_writer.addPage(header)
with open(output_file, 'wb') as output:
pdf_writer.write(output)
if __name__ == '__main__':
add_header_footer_pdf('Sample.pdf', 'Output.pdf', 'YasserKhalil')
</code></pre>
<p>But I got an error</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Future\Desktop\demo.py", line 20, in <module>
add_header_footer_pdf('Sample.pdf', 'Output.pdf', 'YasserKhalil')
File "C:\Users\Future\Desktop\demo.py", line 11, in add_header_footer_pdf
header = PyPDF2.pdf.PageObject.createBlankPage(None, page.mediaBox.getWidth(), 30)
^^^^^^^^^^
AttributeError: module 'PyPDF2' has no attribute 'pdf'
</code></pre>
<p>I tried to modify this line <code>header = PyPDF2.PageObject.createBlankPage(None, page.mediabox.width(), 30)</code>
but got another error</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Future\Desktop\demo.py", line 20, in <module>
add_header_footer_pdf('Sample.pdf', 'Output.pdf', 'YasserKhalil')
File "C:\Users\Future\Desktop\demo.py", line 11, in add_header_footer_pdf
header = PyPDF2.PageObject.createBlankPage(None, page.mediabox.width(), 30)
^^^^^^^^^^^^^^^^^^^^^
TypeError: 'decimal.Decimal' object is not callable
</code></pre>
| <python><pdf> | 2023-07-13 16:01:41 | 1 | 9,608 | YasserKhalil |
76,681,320 | 4,180,168 | How to execute python script on Azure AKS - Job Deployment | <p>I am trying to execute a simple python script via Azure Cloud Shell (Powershell) that generates a simple log via a Kubernates Job, using the following yaml file:</p>
<pre><code> apiVersion: batch/v1
kind: Job
metadata:
name: your-job-name
spec:
template:
spec:
containers:
- name: your-container-name
image: python:3
command: ["python", "your-script.py"]
restartPolicy: Never
</code></pre>
<p>This is the python script:</p>
<pre><code> import sys
import datetime as dt
def print_to_stdout(*a):
# Here a is the array holding the objects
# passed as the argument of the function
print(*a, file=sys.stdout)
# Save the current time to a variable ('t')
t = dt.datetime.now()
while True:
delta = dt.datetime.now()-t
if delta.seconds >= 30:
print_to_stdout("Hello World, half minute has passed")
# Update 't' variable to new time
t = dt.datetime.now()
</code></pre>
<p>But I am getting stuck on this error: python: can't open file '//your-script.py': [Errno 2] No such file or directory.</p>
<p>Any hint on how to solve this. It seems like python cannot find the file with the source code.</p>
| <python><azure-aks><azure-cloud-shell> | 2023-07-13 15:57:14 | 1 | 435 | OuterSpace |
76,681,319 | 11,642,691 | Python rookie question on some unusual syntax | <p>I am reading thru some pygame examples and learning python, I thought I was doing well until I came upon this line of code in aliens.py example:</p>
<pre><code>class Alien(pg.sprite.Sprite):
"""An alien space ship. That slowly moves down the screen."""
speed = 13
animcycle = 12
images: List[pg.Surface] = []
</code></pre>
<p>I don't understand the use of a colon (:) as in,</p>
<pre><code>imgaes:
</code></pre>
<p>And even what comes after colon,</p>
<pre><code>List[pg.Surface]
</code></pre>
<p>is puzzling too, any hints or links to read up on would be greatly appreciated!</p>
| <python><list> | 2023-07-13 15:56:55 | 0 | 305 | PaulM |
76,681,318 | 11,829,398 | Why is RecursiveCharacterTextSplitter not giving any chunk overlap? | <p>I am trying to create chunks (max) 350 characters long with 100 chunk overlap.</p>
<p>I understand that <code>chunk_size</code> is an upper limit, so I may get chunks shorter than that. But why am I not getting any <code>chunk_overlap</code>?</p>
<p>Is it because the overlap also has to split on one of the separator chars? So it's 100 chars chunk_overlap if there is a <code>separator</code> within 100 chars of the split that it can split on?</p>
<pre class="lang-py prettyprint-override"><code>from langchain.text_splitter import RecursiveCharacterTextSplitter
some_text = """When writing documents, writers will use document structure to group content. \
This can convey to the reader, which idea's are related. For example, closely related ideas \
are in sentances. Similar ideas are in paragraphs. Paragraphs form a document. \n\n \
Paragraphs are often delimited with a carriage return or two carriage returns. \
Carriage returns are the "backslash n" you see embedded in this string. \
Sentences have a period at the end, but also, have a space.\
and words are separated by space."""
r_splitter = RecursiveCharacterTextSplitter(
chunk_size=350,
chunk_overlap=100,
separators=["\n\n", "\n", "(?<=\. )", " ", ""]
)
x = r_splitter.split_text(some_text)
print(x)
for thing in x:
print(len(thing))
</code></pre>
<p>Output</p>
<pre><code>["When writing documents, writers will use document structure to group content. This can convey to the reader, which idea's are related. For example, closely related ideas are in sentances. Similar ideas are in paragraphs. Paragraphs form a document.",
'Paragraphs are often delimited with a carriage return or two carriage returns. Carriage returns are the "backslash n" you see embedded in this string. Sentences have a period at the end, but also, have a space.and words are separated by space.']
248
243
</code></pre>
| <python><langchain><py-langchain> | 2023-07-13 15:56:54 | 2 | 1,438 | codeananda |
76,681,016 | 2,929,914 | Python - Pandas x Polars - Values mapping (Lookup value) | <p>One way in Pandas to map a Series from one value to another (or to do a 'lookup value') is with the <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.map.html" rel="nofollow noreferrer">map function</a>.</p>
<p>So If I have two DataFrames (say one dimension and one fact table):</p>
<pre><code># imports
import numpy as np
import pandas as pd
# Dimension Table - DataFrame to hold the letter and its ID.
letters_ids = pd.DataFrame({
'Letters': ['A', 'B', 'C'],
'Letters_id': [1, 2, 3]
}).set_index('Letters')
# Fact Table - DataFrame with millions of letters.
many_letters = pd.DataFrame({
'Letters': np.random.choice(['A', 'B', 'C'], 10_000_000)
})
</code></pre>
<p>And considering that the business model require that we add to the fact table the 'Letters_id' we could do:</p>
<pre><code>many_letters['letters_mapped'] = many_letters.Letters.map(letters_ids.squeeze())
</code></pre>
<p>That's clean and straightforward.</p>
<p>Now what about Polars?
I mean Polars have a <a href="https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.map.html" rel="nofollow noreferrer">map function</a> but doesn't seems to work like Pandas.</p>
<p>I found two ways with Polars to map a Series from one value to another (or to do a 'lookup value') but I'm feeling I'm missing a proper way of doing it.</p>
<p>So say we have the same dataset with Polars:</p>
<pre><code># imports
import numpy as np
import polars as pl
# Dimension Table - DataFrame to hold the letter and its ID.
letters_ids = pl.DataFrame({
'Letters': ['A', 'B', 'C'],
'Letters_id': [1, 2, 3]
})
# Fact Table - DataFrame with millions of letters.
many_letters = pl.DataFrame({
'Letters': np.random.choice(['A', 'B', 'C'], 10_000_000)
})
</code></pre>
<p>Now in order to add to the fact table the 'Letters_id':</p>
<p><strong>Approach # 1</strong> - We can use the <a href="https://docs.pola.rs/api/python/dev/reference/dataframe/api/polars.DataFrame.join.html" rel="nofollow noreferrer">join method</a>:</p>
<pre><code>many_letters = many_letters.join(
other=letters_ids,
on='Letters',
how='left'
).rename({'Letters_id': 'letters_mapped'})
</code></pre>
<p><strong>Approach # 2</strong> - Use the <a href="https://docs.pola.rs/api/python/dev/reference/expressions/api/polars.Expr.replace.html" rel="nofollow noreferrer">replace</a> (found the idea <a href="https://stackoverflow.com/questions/76238414/recursively-lookup-value-with-polars">at this SO question</a>)</p>
<pre><code># Convert the two columns DataFrame into a Python's dictionary.
letters_dict = dict(letters_ids.iter_rows())
# Maps the dictionary
many_letters = many_letters.with_columns(
pl.col('Letters').replace(letters_dict).alias('letters_mapped')
)
</code></pre>
<p>Just as an information when considering performance, both Polars approaches above are faster to do the job then Pandas:</p>
<ul>
<li>Pandas mapping execution time (for 10 million rows): average 0.35 seconds</li>
<li>Polars mapping execution time (for 10 million rows): average 0.21 seconds (Approach # 1)</li>
<li>Polars mapping execution time (for 10 million rows): average 0.20 seconds (Approach # 2)</li>
</ul>
<p>Which of these is preferred or is there a 3rd approach which may be better still?</p>
| <python><pandas><python-polars> | 2023-07-13 15:20:49 | 1 | 705 | Danilo Setton |
76,680,995 | 8,936,239 | Plot YOLOv5 instance segmentation predictions as masks | <p>I'm detecting tennis courts to then pull corner coordinates from. YOLOv5 instance segmentation provides a rough polygon in a txt file as prediction. How do you plot this YOLO polygon label?</p>
<p>Example of txt file from predictions:</p>
<pre><code>0 0.289062 0.24375 0.2875 0.245312 0.2875 0.248437 0.285937 0.25 0.285937 0.253125 0.284375 0.254687 0.282813 0.254687 0.282813 0.25625 0.28125 0.257812 0.28125 0.265625 0.279687 0.267188 0.279687 0.26875 0.276563 0.271875 0.276563 0.273438 0.275 0.275 0.275 0.282813 0.273438 0.284375 0.273438 0.285937 0.271875 0.2875 0.270312 0.2875 0.26875 0.289062 0.26875 0.301562 0.2625 0.307813 0.2625 0.315625 0.260938 0.317187 0.260938 0.31875 0.257812 0.321875 0.257812 0.323438 0.25625 0.325 0.25625 0.334375 0.253125 0.3375 0.251563 0.3375 0.25 0.339063 0.25 0.348437 0.248437 0.35 0.248437 0.351562 0.245312 0.354688 0.245312 0.35625 0.24375 0.357812 0.24375 0.365625 0.2375 0.371875 0.2375 0.378125 0.234375 0.38125 0.232812 0.38125 0.23125 0.382812 0.23125 0.390625 0.229687 0.392188 0.229687 0.395312 0.226562 0.398438 0.226562 0.4 0.225 0.401563 0.225 0.4125 0.223438 0.414062 0.223438 0.415625 0.21875 0.420312 0.21875 0.429688 0.217187 0.43125 0.217187 0.432813 0.214062 0.435937 0.214062 0.4375 0.2125 0.439063 0.2125 0.446875 0.209375 0.45 0.207813 0.45 0.20625 0.451562 0.20625 0.459375 0.204688 0.460938 0.204688 0.4625 0.201562 0.465625 0.201562 0.467187 0.2 0.46875 0.2 0.476562 0.19375 0.482812 0.19375 0.490625 0.192188 0.492188 0.192188 0.49375 0.189063 0.496875 0.189063 0.498437 0.1875 0.5 0.1875 0.507812 0.18125 0.514063 0.18125 0.521875 0.179688 0.523438 0.179688 0.526563 0.175 0.53125 0.175 0.539062 0.16875 0.545313 0.16875 0.55625 0.167187 0.557813 0.167187 0.559375 0.1625 0.564062 0.1625 0.571875 0.159375 0.575 0.157813 0.575 0.157813 0.576563 0.15625 0.578125 0.15625 0.5875 0.154687 0.589063 0.154687 0.590625 0.15 0.595312 0.15 0.603125 0.14375 0.609375 0.14375 0.61875 0.142188 0.620313 0.142188 0.621875 0.1375 0.626562 0.1375 0.634375 0.13125 0.640625 0.13125 0.651563 0.125 0.657812 0.125 0.665625 0.11875 0.671875 0.11875 0.68125 0.117188 0.682813 0.117188 0.684375 0.1125 0.689062 0.1125 0.696875 0.10625 0.703125 0.10625 0.710938 0.104687 0.7125 0.104687 0.714063 0.101562 0.717188 0.101562 0.71875 0.1 0.720312 0.1 0.728125 0.0953125 0.732813 0.0953125 0.735937 0.09375 0.7375 0.09375 0.74375 0.0921875 0.745313 0.0921875 0.746875 0.0875 0.751562 0.0875 0.759375 0.0828125 0.764063 0.0828125 0.765625 0.08125 0.767187 0.08125 0.771875 0.0796875 0.773438 0.0796875 0.775 0.0765625 0.778125 0.0765625 0.779688 0.075 0.78125 0.075 0.7875 0.0765625 0.789062 0.0765625 0.790625 0.078125 0.790625 0.0796875 0.792188 0.10625 0.792188 0.107813 0.790625 0.121875 0.790625 0.123438 0.789062 0.139062 0.789062 0.140625 0.7875 0.192188 0.7875 0.19375 0.785937 0.198437 0.785937 0.2 0.7875 0.384375 0.7875 0.385938 0.785937 0.432813 0.785937 0.434375 0.7875 0.440625 0.7875 0.442187 0.785937 0.679688 0.785937 0.68125 0.7875 0.81875 0.7875 0.820312 0.789062 0.832812 0.789062 0.834375 0.790625 0.86875 0.790625 0.870313 0.792188 0.921875 0.792188 0.923437 0.790625 0.923437 0.778125 0.917188 0.771875 0.917188 0.764063 0.915625 0.7625 0.915625 0.760938 0.9125 0.757812 0.9125 0.75625 0.910937 0.754687 0.910937 0.746875 0.909375 0.745313 0.909375 0.74375 0.907812 0.74375 0.904688 0.740625 0.904688 0.729688 0.903125 0.728125 0.903125 0.726562 0.898438 0.721875 0.898438 0.714063 0.896875 0.7125 0.896875 0.710938 0.89375 0.707812 0.89375 0.70625 0.892187 0.704687 0.892187 0.696875 0.890625 0.695312 0.890625 0.69375 0.889063 0.69375 0.885938 0.690625 0.885938 0.682813 0.884375 0.68125 0.884375 0.679688 0.88125 0.676562 0.88125 0.675 0.879687 0.673437 0.879687 0.664062 0.873438 0.657812 0.873438 0.646875 0.867188 0.640625 0.867188 0.632812 0.8625 0.628125 0.8625 0.626562 0.860937 0.625 0.860937 0.614062 0.859375 0.6125 0.857813 0.6125 0.854688 0.609375 0.854688 0.598437 0.853125 0.596875 0.853125 0.595312 0.848437 0.590625 0.848437 0.582812 0.84375 0.578125 0.84375 0.576563 0.842188 0.575 0.842188 0.564062 0.840625 0.5625 0.839063 0.5625 0.835938 0.559375 0.835938 0.55 0.834375 0.548437 0.834375 0.545313 0.832812 0.545313 0.83125 0.54375 0.83125 0.542188 0.829687 0.540625 0.829687 0.532812 0.825 0.528125 0.825 0.526563 0.823438 0.525 0.823438 0.514063 0.821875 0.5125 0.820312 0.5125 0.817187 0.509375 0.817187 0.5 0.815625 0.498437 0.815625 0.495313 0.810938 0.490625 0.810938 0.482812 0.80625 0.478125 0.80625 0.476562 0.804688 0.475 0.804688 0.464063 0.803125 0.4625 0.801562 0.4625 0.798437 0.459375 0.798437 0.446875 0.796875 0.445312 0.796875 0.44375 0.795313 0.44375 0.792188 0.440625 0.792188 0.432813 0.7875 0.428125 0.7875 0.426562 0.785937 0.425 0.785937 0.415625 0.784375 0.414062 0.784375 0.4125 0.782812 0.4125 0.779688 0.409375 0.779688 0.395312 0.775 0.390625 0.775 0.389062 0.773438 0.3875 0.773438 0.38125 0.771875 0.379687 0.771875 0.378125 0.76875 0.375 0.76875 0.373437 0.767187 0.371875 0.767187 0.364062 0.760938 0.357812 0.760938 0.345313 0.759375 0.34375 0.757812 0.34375 0.754687 0.340625 0.754687 0.33125 0.753125 0.329688 0.753125 0.326562 0.751562 0.326562 0.75 0.325 0.75 0.323438 0.748438 0.321875 0.748438 0.314063 0.746875 0.3125 0.746875 0.310937 0.74375 0.307813 0.74375 0.30625 0.742188 0.304688 0.742188 0.295312 0.735937 0.289062 0.735937 0.278125 0.734375 0.276563 0.734375 0.275 0.732813 0.275 0.729688 0.271875 0.729688 0.259375 0.723437 0.253125 0.723437 0.245312 0.721875 0.24375
</code></pre>
| <python><opencv><yolo> | 2023-07-13 15:17:37 | 1 | 779 | spazznolo |
76,680,977 | 4,489,998 | Using Enum values as type variables, without using Literal | <p>I'm trying to represent <a href="http://www-ksl.stanford.edu/knowledge-sharing/ontologies/html/physical-quantities/PHYSICAL-DIMENSION.html" rel="nofollow noreferrer">physical dimensions</a> (length, time, temperature, ...), and cannot find a nice way to do so, that is compatible with type hinting and generics.</p>
<p>Ideally, I want to be able to define an Enum whose names are types themselves (a metaenum ?):</p>
<pre><code>from enum import Enum
class Dim(Enum):
TIME = "t"
MASS = "m"
</code></pre>
<p>I can type hint dimensions (<code>dim: Dim</code>) but cannot do things like</p>
<pre><code>from typing import Generic, TypeVar
T = TypeVar("T", bound=Dim) # only accepts `Dim`
class PhysicalQuantity(Generic[T]):
pass
class Container:
some_time: PhysicalQuantity[Dim.TIME] # doesn't work
</code></pre>
<p><a href="https://stackoverflow.com/questions/63332993/type-annotation-for-specific-enum-value">because these are values</a>.</p>
<p><strong>Is there a construct as simple as Enum, but to make types instead of values ?</strong></p>
<p>Reasons why I want to keep <code>Enum</code>:</p>
<ul>
<li>very easy to define</li>
<li>very easy to associate to a value (str)</li>
<li>Ability to sort of "think of <code>Dim</code> as the type, and <code>Dim.TIME</code> as a subtype"</li>
</ul>
<hr />
<p>There are functional solutions, however I'm asking this to get a "best way" more than a "working way".
Here's what I found:</p>
<ol>
<li><p>The simplest solution is to do use <code>Literal</code>: <code>SomeGenericType[Literal[Dim.TIME]]</code>, but this is both annoying to write each time and counter-intuitive for people who expect Dim.TIME to behave as a type.</p>
</li>
<li><p>Switching to classes, the most intuitive idea:</p>
<pre><code>class Dimension:
pass
class TIME(Dimension):
pass
</code></pre>
<p>doesn't work, because I want <code>type(TIME)</code> to be <code>Dim</code>, to reproduce Enum behavior</p>
</li>
<li><p>That leads to using a metaclass:</p>
<pre><code>class Dimension(type):
# ... complete __init__ and __new__ to get TIME.symbol = "t"
class TIME(metaclass=Dimension, symbol="t"):
pass
</code></pre>
<p>This works, but I lose the ability to do <code>Dim.TIME</code>, to get <code>Dim.TIME</code> from <code>Dim('t')</code>, ...</p>
</li>
</ol>
| <python><enums><python-typing> | 2023-07-13 15:15:20 | 1 | 2,185 | TrakJohnson |
76,680,975 | 8,382,028 | How to merge byte files together in Python | <p>I am running into a situation where I need to generate multiple filled PDF forms, I have the data filled out and the 'bytes' file is accessible, but when I attempt to combine the 2 files in their byte representation, nothing happens, the file is overridden and the original is the only thing shown.</p>
<p>What am I doing wrong? This seems like it should be easy.</p>
<pre class="lang-py prettyprint-override"><code># don't worry about these, they are filled PDFs in byte form, this works as expected.
pdf1 = PDFFormManager.fill_with_jinja(file=template, data=data)
pdf2 = PDFFormManager.fill_with_jinja(file=template, data={})
# here is the issue
print(len(pdf1), len(pdf2)) # 177354 177354
print(type(pdf1), type(pdf2)) # <class 'bytes'> <class 'bytes'>
print(len(pdf1+ pdf2)) # 354708
# when I return this, I only get the single pdf, not the concatenated one
response = HttpResponse(pdf1+pdf2, content_type=f"application/pdf")
</code></pre>
| <python><django><byte> | 2023-07-13 15:14:40 | 1 | 3,060 | ViaTech |
76,680,962 | 11,365,539 | GDB --init-command - import python class | <p>I have a very minimal understanding of Python. I am also not too experienced with GDB. I'm trying to use GDB's <code>--init-command</code> flag to initialize some set up for me. I have two processors to debug which require setting up a JLink debug server. Since there's minor differences between the two, I wanted to create a class that would be imported by their respective init files. So Module1.init and Module2.init would import another file called Module.py which contains shared functions and a class to inherit from.</p>
<p>It looks something like the following.
Note: I've replaced some of the names and hid other information with <code>...</code> for confidentiality, but the structure is the same.<br />
<code>Module.py</code></p>
<pre class="lang-py prettyprint-override"><code>import os, sys, subprocess
import atexit
sys.path.insert(0, 'C:/.../libcxx-pretty-printers-master/src/')
from libcxx.v1.printers import register_libcxx_printers
register_libcxx_printers (None)
def handleSignals(e):
if (not isinstance(e, gdb.SignalEvent)):
return
elif (e.stop_signal == 'SIGILL'):
gdb.execute("continue")
class Module:
...
def connectToDebugServer(self):
gdb.execute(...)
</code></pre>
<p><code>Module1.init</code> (which looks almost identical to <code>Module2.init</code> except for parameters</p>
<pre class="lang-py prettyprint-override"><code>python
import os, sys
sys.path.insert(0, 'C:/.../scripts/GDB/init/')
import Module
class Module1(Module.Module):
def __init__(self):
super().__init__(...)
...
inst = Module1()
inst.startDebugServer()
inst.connectToDebugServer()
end
set print pretty on
set print array on
set print demangle on
set print asm-demangle on
set print object on
</code></pre>
<p>I call these with <code>gdb --init-command=/path/to/Module1.init</code>. I get this error</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 15, in <module>
File "C:/.../scripts/GDB/init\Module.py", line 49, in connectToDebugServer
gdb.execute(...)
NameError: name 'gdb' is not defined
C:\...\scripts\GDB\init\Module1.init:21: Error in sourced command file:
Error while executing Python code.
</code></pre>
<p>When I call it instead like <code>gdb --init-command=/path/to/Module1.init --data-directory=/path/containing/Module1.init and Module.py/</code>, I get the following error:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "<string>", line 5, in <module>
File "C:/.../scripts/GDB/init\Module.py", line 5, in <module>
from libcxx.v1.printers import register_libcxx_printers
File "C:/.../scripts/GDB/lib/libcxx-pretty-printers-master/src\libcxx\v1\printers.py", line 19, in <module>
import gdb
ModuleNotFoundError: No module named 'gdb'
C:\...\scripts\GDB\init\Module1.init:21: Error in sourced command file:
Error while executing Python code.
</code></pre>
<p>Alternatively, if there is a way to pass arguments to an init file that would work too. That was my original idea but I couldn't find a way to do it.</p>
| <python><gdb><gdbinit> | 2023-07-13 15:13:45 | 1 | 613 | Warpstar22 |
76,680,955 | 13,412,850 | python - optimisation with specific values to chose from in my bounds | <p>below are the codes</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import minimize
def objective(x):
x1 = x[0]
x2 = x[1]
x3 = x[2]
x4 = x[3]
return x1*x4*(x1+x2+x3)+x3
def const1(x):
return x[0]*x[1]*x[2]*x[3]-25
def const2(x):
sum_sq = 40
for i in range(4):
sum_sq = sum_sq - x[i]**2
return sum_sq
x0 = [1,5,5,1]
b = (1,5)
bnds = (b,b,b,b)
cons1 = {'type':'ineq','fun':const1}
cons2 = {'type':'eq','fun':const2}
cons = [cons1,cons2]
sol = minimize(objective,x0,method='SLSQP',bounds=bnds,constraints=cons)
print(sol)
</code></pre>
<p>I have defined my bounds here between 1 and 5 (b = (1,5))</p>
<p>but, what if I wanted my variables to have only specific values i.e. either 3,4 or 5. i.e. each of the 4 variables can have values 3,4 or 5. Is it possible?</p>
| <python><optimization><scipy-optimize><scipy-optimize-minimize> | 2023-07-13 15:12:46 | 1 | 529 | user13412850 |
76,680,683 | 10,358,900 | How to downgrade python and torch on Colab? | <p>I'm trying to reproduce the study from <a href="https://github.com/sanghyun-son/EDSR-PyTorch" rel="nofollow noreferrer">https://github.com/sanghyun-son/EDSR-PyTorch</a> on colab. To do that, it is necessary to use Python 3.6 and torch 1.2.0</p>
<p>I've successfully downgraded Python to 3.6.</p>
<pre><code>!python --version
# first install python 3.6
!sudo apt-get update -y
!sudo apt-get install python3.6
# change alternatives
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
# select python version
!sudo update-alternatives --config python3
# check python version
!python --version
# install pip for new python
!sudo apt-get install python3.6-distutils
!wget https://bootstrap.pypa.io/get-pip.py
!python get-pip.py
# upgrade pip
!sudo apt install python3-pip
!python -m pip install --upgrade pip
!python --version
!pip --version
</code></pre>
<p>I've uninstall pytorch and download and install the 1.2 torch version</p>
<pre><code>!wget https://download.pytorch.org/whl/cpu/torch-1.2.0%2Bcpu-cp36-cp36m-manylinux1_x86_64.whl
!pip uninstall torch -y
!pip install torch-1.2.0+cpu-cp36-cp36m-manylinux1_x86_64.whl
</code></pre>
<p>However, when I try to import torch, the <code>"No module named 'torch'"</code> error occurred</p>
<pre><code>import torch
print(torch.__version__)
</code></pre>
<p><a href="https://i.sstatic.net/Bmvsl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bmvsl.png" alt="enter image description here" /></a></p>
<p>Any idea why this is happening?</p>
| <python><pytorch><google-colaboratory> | 2023-07-13 14:44:02 | 2 | 313 | Luiz Fernando Puttow Southier |
76,680,549 | 1,021,306 | Normalize JSON data to Pandas DataFrame where columns and values are in lists | <p>I have the following complex JSON data that needs to be normalised to a Pandas DataFrame.</p>
<pre><code>{
"notice": {
"copyright": "Copyright Info",
"copyright_url": "This_is_URL"
},
"product_info": {
"refresh message": "Issued at 09:36 UTC Saturday 01/10/22 October 2022",
"issuance_time": "20221001T0936Z",
"product_name": "24hr historic data",
"ID": "XXXXYYYYZZZZ"
},
"data": [
{
"station_info": {
"wmo_id": 95214,
"station_name": "STATION 1",
"station_height": 3.8,
"station_type": "AWS"
},
"observation_data": {
"columns": [
"temp",
"dewt",
"rh",
"wnd_spd_kmh"
],
"data": [
[
27.2,
23.7,
81.0,
26.0
],
[
27.3,
23.5,
80.0,
28.0
],
[
27.4,
23.2,
78.0,
30.0
]
]
}
},
{
"station_info": {
"wmo_id": 95215,
"station_name": "STATION 2",
"station_height": 3.5,
"station_type": "AWS"
},
"observation_data": {
"columns": [
"temp",
"dewt",
"rh",
"wnd_spd_kmh"
],
"data": [
[
24.2,
25.7,
82.1,
21.0
],
[
24.3,
25.8,
79.6,
22.0
],
[
24.4,
25.9,
78.3,
16.0
]
]
}
}
]
}
</code></pre>
<p>The columns of the expected DataFrame is in a list in the JSON as "columns". So are the actual "data".</p>
<p>What is expected to output is as below:</p>
<p><a href="https://i.sstatic.net/nWf93.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nWf93.png" alt="enter image description here" /></a></p>
<p>What I have attempted:</p>
<pre><code>with gzip.open(json_file_path, "r") as f:
data = f.read()
j = json.loads (data.decode('utf-8'))
national_data = pd.json_normalize(j['data'])
</code></pre>
<p>However, the whole "columns" list was converted to a cell value.</p>
<pre><code> station_info.wmo_id station_info.station_name observation_data.columns observation_data.data
0 95214 STATION 1 [temp, dewt, rh, wnd_spd_kmh] [[27.2, 23.7, 81.0, 26.0],[...],[...]]
1 95215 STATION 2 [temp, dewt, rh, wnd_spd_kmh] [[24.2, 25.7, 82.1, 21.0],[...],[...]]
</code></pre>
| <python><pandas><dataframe><normalize> | 2023-07-13 14:31:11 | 1 | 3,579 | alextc |
76,680,444 | 7,802,354 | pyodbc errors when querying datetime column returns NULL value | <p>I'm connecting to a SYBASE database using pyodbc and querying a table that has NULL values for some <strong>datetime</strong> attributes:</p>
<blockquote>
<p>month must be in 1..12</p>
</blockquote>
<p>The attribute Type is defined as 'datetime' in the table, but as I mentioned it may be NULL.
The error happens in</p>
<pre><code>self.cursor.execute(stored_proc, params)
</code></pre>
<p>I get the error when the cursor attempts to fetch a record with NULL.
How can I configure pyodbc in a way that it can ignore NULL values for datetime?
I didn't find anything in it's document:
<a href="https://github.com/mkleehammer/pyodbc/wiki/Data-Types" rel="nofollow noreferrer">pyodbc</a></p>
<p>I have the following connection code:</p>
<pre><code>self.conn = pyodbc.connect('DRIVER={};SERVER={};UID={};PWD={};PORT=1234;'.format(self.db_config['driver'],self.db_config['server'], self.db_config['id'], self.db_config['pass']),autocommit=True)
</code></pre>
| <python><sql><pyodbc><sybase> | 2023-07-13 14:21:01 | 0 | 755 | brainoverflow |
76,680,222 | 350,685 | String indices must be integers, not 'str' when extracting value from JSON using python | <p>I have a JSON of the following format:</p>
<pre><code>{
"records": {
"expiryDates": [
'date1',
'date2'
]
}
}
</code></pre>
<p>I get the above json via an API call and the result of the request is cast into a JSON object like so:</p>
<pre><code>r = requests.get(url, headers)
dataJSON = json.loads(r.text)
</code></pre>
<p>To get expiryDates, I use <code>dataJSON["records"]["expiryDates"].</code>
Doing this gives me the error: TypeError: string indices must be integers, not 'str'</p>
<p>I read through various threads on this and understand that I should probably cycle through all the keys to get at the value for 'records'.<br />
Questions:</p>
<ol>
<li>Is the above method not the most optimal way of accessing items within a JSON object?</li>
<li>When executed as standalone python code, there are no errors, but when executed within a Flask application, it crashes. Any help in understanding why would be most helpful.</li>
</ol>
| <python><json> | 2023-07-13 13:58:51 | 0 | 10,638 | Sriram |
76,680,124 | 1,770,604 | Mypy throwing union-attr errors after instance checks before early return | <p>I am trying to implement the chain of responsibility pattern in python by passing a <code>Request</code> object into a handler "chain". The <code>Request</code> object passed in to each handler method in sequence has a few optional fields and some union types.</p>
<p>The general pattern I have for these handlers is to create a few boolean values at the top of the method to check if the current handler method should handle the request and if not, return early by calling the next handler in line.</p>
<p>The following is an example of my setup:</p>
<pre class="lang-py prettyprint-override"><code>
from pathlib import Path
from typing import Any, Callable
class Request:
data: int | Any | None = None
location: str | Path | None = None
def handle_int(request: Request, next_handler: Callable[[Request], int]) -> int:
is_int = isinstance(request.data, int)
is_str = isinstance(request.location, int)
if not (is_int and is_str):
return next_handler(request)
a = request.location.split("_")
return request.data * 3
</code></pre>
<p>The issue I am having is that mypy catches <code>union-attr</code> errors whenever I access the attributes on the request object:</p>
<pre><code>test_mypy.py:17:9: error: Item "Path" of "str | Path | None" has no attribute "split" [union-attr]
test_mypy.py:17:9: error: Item "None" of "str | Path | None" has no attribute "split" [union-attr]
test_mypy.py:19:12: error: Unsupported operand types for * ("None" and "int") [operator]
test_mypy.py:19:12: note: Left operand is of type "int | Any | None"
</code></pre>
<p>I can get rid of these by adding assertions <strong>after</strong> the return statement but that is creating basically duplicate code for the predicate checks before the early return and feels like a bad pattern.</p>
<p>Apart from creating more specific request types is there a way to let mypy know that the fact we are <strong>after</strong> the early return means that checks have been done on the form of the attributes n the request already?</p>
| <python><mypy> | 2023-07-13 13:47:50 | 1 | 691 | Martin O Leary |
76,680,049 | 14,693,246 | Is it safe to convert synchronous HTTP endpoint to asynchronous | <p>The framework we're using is Chalice (AWS) but in this case I think framework doesn't matter alot. Simply, it does not support async routes like some other Python frameworks and uses Python.</p>
<p>Chalice official documentation: <a href="https://aws.github.io/chalice/" rel="nofollow noreferrer">https://aws.github.io/chalice/</a></p>
<p>You can refer to the discussion here for further reading on sync/async issue: <a href="https://github.com/aws/chalice/issues/637" rel="nofollow noreferrer">https://github.com/aws/chalice/issues/637</a></p>
<p>We would like to use Beanie ODM for our project which is an ODM with asynchronous functions, so we needed to convert our routes into async one (since functions on Beanie are async).</p>
<p>What we have done to overcome this issue lays below. A simple Chalice backend would look like this;</p>
<h2>Before</h2>
<pre><code>from chalice import Chalice
app = Chalice(app_name="helloworld")
@app.route("/")
def index():
# You can't do await HelloModel.find_all() since route is not async
return {"hello": "world"}
</code></pre>
<p>So we prepared a basic asyncio decorator to run each request in coroutine;</p>
<h2>After</h2>
<pre><code>def async_route(function):
def run_async_task(*args, **kwargs):
# any algorithm can be used
result = asyncio.run(function(*args, **kwargs))
return result
return run_async_task
</code></pre>
<p>And added this decorator to the routes;</p>
<pre><code>from chalice import Chalice
app = Chalice(app_name="helloworld")
@app.route("/")
@async_route
def index():
# this way we can use async abilities.
await init_db()
hello_list = await HelloModel.find_all().to_list()
return {"hello": "world"}
</code></pre>
<ul>
<li>My first question is; Is it safe to use it like that?</li>
<li>Secondly; what's the best practice on operating asynchronous
operations outside of routes; like initializing db only once when
program starts up?</li>
</ul>
| <python><asynchronous><python-asyncio><chalice> | 2023-07-13 13:40:02 | 0 | 749 | Fatih Ersoy |
76,679,892 | 21,107,707 | Folder structure for TensorFlow image segmentation? | <p>I am using <a href="https://www.tensorflow.org/tutorials/images/segmentation" rel="nofollow noreferrer">this tutorial</a> to do image segmentation with tensorflow. However, I want to make my own data for this. What should my folder structure be if I'm trying to stay consistent with the tutorial? Currently, I have the following:</p>
<pre><code>path/to/image
train/
image/
1.png
...
segmentation_mask/
1.jpg
test/
image/
6.png
...
segmentation_mask/
6.jpg
...
</code></pre>
<p>I'm trying to read the dataset using <code>tf.data_folder.ImageFolder("path/to/image")</code> and then <code>.as_dataset()</code> to convert it to be a dataset. However, this raises errors throughout the rest of the program. So, how to I structure my folders and how do I load the dataset in properly so there would not be any more errors?</p>
| <python><tensorflow><image-segmentation> | 2023-07-13 13:26:31 | 1 | 801 | vs07 |
76,679,880 | 11,328,614 | Instantiate attrs class from dict with superfluous key/value pairs | <p>I decorated a class using <code>attrs.define</code> and added all important attributes/fields.</p>
<pre class="lang-py prettyprint-override"><code>import attrs
@attrs.define
class MyAttrsClass:
important1 = attrs.field(type = int)
important2 = attrs.field(type = str)
[...]
</code></pre>
<p>Now I would like to initialize instances of that class from a dict.
So what I do is calling the contructor with a dict-unpacking.</p>
<pre><code>test_dict = {'important1': 100, 'important2': 'foo'}
ci = MyAttrsClass(**test_dict)
</code></pre>
<p>This works in that simple example case, because I specify the same fields in the dict as well as in the <code>MyAttrsClass</code> class.</p>
<p>However, in the real scenario the dict is a deserialized response from a web-api (using <code>json.loads</code>) and it returns many key-value-pairs I'm not interested in, despite the important ones.
Then, I get the following error <code>TypeError: MyAttrsClass.__init__() got an unexpected keyword argument 'XYZ'</code>.</p>
<p>Is is possible to initialize an instance of <code>MyAttrsClass</code>, though?
I would like to discard the superfluous elements.
Do I need to manually remove entries from the input dict or is there a better way using attrs.</p>
| <python><python-3.x><dictionary><httpresponse><python-attrs> | 2023-07-13 13:25:04 | 2 | 1,132 | Wör Du Schnaffzig |
76,679,704 | 11,141,816 | How to calculate the Automatic differentiation of an arbitrary function in python | <p>I'm using the mpmath library to compute a self defined function <code>f(x)</code> and needed to compute its higher order derivatives.</p>
<p>I found the Automatic differentiation on Wikipedia and found it useful.</p>
<pre><code># Automatic Differentiation
import math
class Var:
def __init__(self, value, children=None):
self.value = value
self.children = children or []
self.grad = 0
print(children)
def __add__(self, other):
return Var(self.value + other.value, [(1, self), (1, other)])
def __mul__(self, other):
return Var(self.value * other.value, [(other.value, self), (self.value, other)])
def sin(self):
return Var(math.sin(self.value), [(math.cos(self.value), self)])
def calc_grad(self, grad=1):
self.grad += grad
for coef, child in self.children:
child.calc_grad(grad * coef)
# Example: f(x, y) = x * y + sin(x)
x = Var(2)
y = Var(3)
f = x * y + x.sin()
# Calculation of partial derivatives
f.calc_grad()
print("f =", f.value)
print("∂f/∂x =", x.grad)
print("∂f/∂y =", y.grad)
</code></pre>
<p>which returned</p>
<pre><code>None
None
[(3, <__main__.Var object at 0x000001EDDCCA2260>), (2, <__main__.Var object at 0x000001EDDCCA1300>)]
[(-0.4161468365471424, <__main__.Var object at 0x000001EDDCCA2260>)]
[(1, <__main__.Var object at 0x000001EDDCCA0A30>), (1, <__main__.Var object at 0x000001EDDCCA0BB0>)]
f = 6.909297426825682
∂f/∂x = 2.5838531634528574
∂f/∂y = 2
</code></pre>
<p>However, I have some question about how the code was implemented. For example, I guessed that the children list was the derivatives with Chain rule, but I'm not sure how the statement such as <code>[(1, self), (1, other)]</code> implemented this recursion.</p>
<p>Also, the Automatic Differentiation in this code seemed to be written as more of a "symbolic calculation" in disguise, and I'm not sure if it will work for arbitrary function and at arbitrary order. For example, I wanted to try the second order derivatives but <code>f.calc_grad(grad=2).value</code> and <code>f.calc_grad().calc_grad( ).value</code>didn't work, I suppose it was because the <code>cos</code> function was not defined.</p>
<p>How to calculate the Automatic differentiation of an arbitrary function in python? Can I not define a class function but to work with mpmath directly?</p>
| <python><class><numerical-methods><mpmath><numerical-computing> | 2023-07-13 13:05:31 | 0 | 593 | ShoutOutAndCalculate |
76,679,690 | 7,458,826 | RuntimeError: Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same | <p>I am training a CNN to classify some images. The objective is to classify them into two classes. I already executed the same code on Windows with a RTX3070, now I am trying to do the exact same on Ubuntu with a Nvidea A100-40Gb. The code I am using is this one:</p>
<pre><code>import warnings
warnings.filterwarnings('ignore')
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import plotly
import plotly.graph_objects as go
%matplotlib inline
import os
from sklearn.calibration import calibration_curve
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import itertools
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.optim import lr_scheduler
if torch.cuda.is_available():
print("CUDA available. Using GPU acceleration.")
device = "cuda"
else:
print("CUDA is NOT available. Using CPU for training.")
device = "cpu"
import pickle
def save_var(var,filename):
with open(filename, 'wb') as f:
pickle.dump(var, f)
def recover_var(filename):
with open(filename, 'rb') as f:
var = pickle.load(f)
return var
df = recover_var('dataframe_cnn.pickle') #my dataset
df = df.sample(frac=1).reset_index(drop=True)
df.columns = ['label'] + list(range(1,27649))
train =df[:int(0.7*len(df))]
test = df[int(0.7*len(df)):]
def preprocessing(train, test, split_train_size = 0.2):
# Split data into features(pixels) and labels(numbers from 0 to 9)
targets = train.label.values
features = train.drop(["label"], axis = 1).values
# Normalization
features = features/255.
X_test = test.values/255.
# Train test split. Size of train data is (1-split_train_size)*100% and size of test data is split_train_size%.
X_train, X_val, y_train, y_val = train_test_split(features,
targets,
test_size = split_train_size,
random_state = 42)
# Create feature and targets tensor for train set. I need variable to accumulate gradients. Therefore first I create tensor, then I will create variable
X_train = torch.from_numpy(X_train)
y_train = torch.from_numpy(y_train).type(torch.LongTensor) # data type is long
# Create feature and targets tensor for test set.
X_val = torch.from_numpy(X_val)
y_val = torch.from_numpy(y_val).type(torch.LongTensor) # data type is long
# Create feature tensor for train set.
X_test = torch.from_numpy(X_test)
return X_train, y_train, X_val, y_val, X_test
X_train, y_train, X_val, y_val, X_test = preprocessing(train, test)
print(f'Shape of training data: {X_train.shape}')
print(f'Shape training labels: {y_train.shape}')
print(f'Shape of validation data: {X_val.shape}')
print(f'Shape of valiation labels: {y_val.shape}')
print(f'Shape of testing data: {X_test.shape}')
# batch_size, epoch and iteration
BATCH_SIZE = 100
N_ITER = 2500
EPOCHS = 5
# I will be trainin the model on another 10 epochs to show flexibility of pytorch
EXTRA_EPOCHS = 10
# Pytorch train and test sets
train_tensor = torch.utils.data.TensorDataset(X_train, y_train)
val_tensor = torch.utils.data.TensorDataset(X_val, y_val)
test_tensor = torch.utils.data.TensorDataset(X_test)
# data loader
train_loader = torch.utils.data.DataLoader(train_tensor,
batch_size = BATCH_SIZE,
shuffle = True)
val_loader = torch.utils.data.DataLoader(val_tensor,
batch_size = BATCH_SIZE,
shuffle = False)
test_loader = torch.utils.data.DataLoader(test_tensor,
batch_size = BATCH_SIZE,
shuffle = False)
class CNNModel(nn.Module):
def __init__(self):
super(CNNModel, self).__init__()
# convolution 1
self.c1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(5,5), stride=1, padding=0)
self.relu1 = nn.ReLU()
# maxpool 1
self.maxpool1 = nn.MaxPool2d(kernel_size=(2,2))
# dropout 1
self.dropout1 = nn.Dropout(0.25)
# convolution 2
self.c2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3,3), stride=1, padding=0)
self.relu2 = nn.ReLU()
# maxpool 2
self.maxpool2 = nn.MaxPool2d(kernel_size=(2,2))
# dropout 2
self.dropout2 = nn.Dropout(0.25)
# linear 1
self.fc1 = nn.Linear(32*5*5, 256)
# dropout 3
self.dropout3 = nn.Dropout(0.25)
# linear 2
self.fc2 = nn.Linear(256, 10)
def forward(self, x):
out = self.c1(x) # [BATCH_SIZE, 16, 24, 24]
out = self.relu1(out)
out = self.maxpool1(out) # [BATCH_SIZE, 16, 12, 12]
out = self.dropout1(out)
out = self.c2(out) # [BATCH_SIZE, 32, 10, 10]
out = self.relu2(out)
out = self.maxpool2(out) # [BATCH_SIZE, 32, 5, 5]
out = self.dropout2(out)
out = out.view(out.size(0), -1) # [BATCH_SIZE, 32*5*5=800]
out = self.fc1(out) # [BATCH_SIZE, 256]
out = self.dropout3(out)
out = self.fc2(out) # [BATCH_SIZE, 10]
return out
# Create CNN
model = CNNModel()
# Optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
# Cross Entropy Loss
criterion = nn.CrossEntropyLoss()
# LR scheduler
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.5)
# On GPU if possible
if torch.cuda.is_available():
print("Model will be training on GPU")
model = model.cuda()
criterion = criterion.cuda()
else:
print("Model will be training on CPU")
def fit(epoch):
print("Training...")
# Set model on training mode
model.train()
# Update lr parameter
exp_lr_scheduler.step()
# Initialize train loss and train accuracy
train_running_loss = 0.0
train_running_correct = 0
train_running_lr = optimizer.param_groups[0]['lr']
for batch_idx, (data, target) in enumerate(train_loader):
data, target = Variable(data.view(BATCH_SIZE,1,144,192)), Variable(target)
if torch.cuda.is_available():
data = data.cuda()
target = target.cuda()
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
train_running_loss += loss.item()
_, preds = torch.max(output.data, 1)
train_running_correct += (preds == target).sum().item()
loss.backward()
optimizer.step()
if (batch_idx + 1)% 50 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch+1,
(batch_idx + 1) * len(data),
len(train_loader.dataset),
BATCH_SIZE * (batch_idx + 1) / len(train_loader),
loss.cpu().detach().numpy())
)
train_loss = train_running_loss/len(train_loader.dataset)
train_accuracy = 100. * train_running_correct/len(train_loader.dataset)
return train_loss, train_accuracy, train_running_lr
def validate(data_loader):
print("Validating...")
# Set model on validating mode
model.eval()
val_preds = torch.LongTensor().cuda()
val_proba = torch.LongTensor().cuda()
# Initialize validation loss and validation accuracy
val_running_loss = 0.0
val_running_correct = 0
for data, target in data_loader:
# Regarding volatile argument, check the note below
data, target = Variable(data.view(BATCH_SIZE,1,144,192), volatile=True), Variable(target)
if torch.cuda.is_available():
data = data.cuda()
target = target.cuda()
output = model(data)
loss = criterion(output, target)
val_running_loss += loss.item()
pred = output.data.max(1, keepdim=True)[1]
proba = torch.nn.functional.softmax(output.data)
val_running_correct += pred.eq(target.data.view_as(pred)).cpu().sum()
# Store val_predictions with probas for confusion matrix calculations & best errors made
val_preds = torch.cat((val_preds.float(), pred), dim=0).float()
val_proba = torch.cat((val_proba.float(), proba)).float()
val_loss = val_running_loss/len(data_loader.dataset)
val_accuracy = 100. * val_running_correct/len(data_loader.dataset)
return val_loss, val_accuracy, val_preds, val_proba
train_loss, train_accuracy = [], []
val_loss, val_accuracy = [], []
val_preds, val_proba = [], []
train_lr = []
for epoch in range(EPOCHS):
print(f"Epoch {epoch+1} of {EPOCHS}\n")
train_epoch_loss, train_epoch_accuracy, train_epoch_lr = fit(epoch)
val_epoch_loss, val_epoch_accuracy, val_epoch_preds, val_epoch_proba = validate(val_loader)
train_loss.append(train_epoch_loss)
train_accuracy.append(train_epoch_accuracy)
train_lr.append(train_epoch_lr)
val_loss.append(val_epoch_loss)
val_accuracy.append(val_epoch_accuracy)
val_preds.append(val_epoch_preds)
val_proba.append(val_epoch_proba)
print(f"Train Loss: {train_epoch_loss:.4f}, Train Acc: {train_epoch_accuracy:.2f}")
print(f'Val Loss: {val_epoch_loss:.4f}, Val Acc: {val_epoch_accuracy:.2f}\n')
</code></pre>
<p>However, it is returning the following error:</p>
<pre><code>Cell In[149], line 285
281 for epoch in range(EPOCHS):
283 print(f"Epoch {epoch+1} of {EPOCHS}\n")
--> 285 train_epoch_loss, train_epoch_accuracy, train_epoch_lr = fit(epoch)
286 val_epoch_loss, val_epoch_accuracy, val_epoch_preds, val_epoch_proba = validate(val_loader)
288 train_loss.append(train_epoch_loss)
Cell In[149], line 213, in fit(epoch)
210 target = target.cuda()
212 optimizer.zero_grad()
--> 213 output = model(data)
214 loss = criterion(output, target)
216 train_running_loss += loss.item()
File /opt/miniconda3/envs/mlgpu/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
Cell In[149], line 152, in CNNModel.forward(self, x)
150 def forward(self, x):
--> 152 out = self.c1(x) # [BATCH_SIZE, 16, 24, 24]
153 out = self.relu1(out)
154 out = self.maxpool1(out) # [BATCH_SIZE, 16, 12, 12]
File /opt/miniconda3/envs/mlgpu/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/miniconda3/envs/mlgpu/lib/python3.9/site-packages/torch/nn/modules/conv.py:457, in Conv2d.forward(self, input)
456 def forward(self, input: Tensor) -> Tensor:
--> 457 return self._conv_forward(input, self.weight, self.bias)
File /opt/miniconda3/envs/mlgpu/lib/python3.9/site-packages/torch/nn/modules/conv.py:453, in Conv2d._conv_forward(self, input, weight, bias)
449 if self.padding_mode != 'zeros':
450 return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
451 weight, bias, self.stride,
452 _pair(0), self.dilation, self.groups)
--> 453 return F.conv2d(input, weight, bias, self.stride,
454 self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same```
</code></pre>
<p>I already tried to typecast the output to float, torch.float, np.float32, but it still returned the same error. Moreover, I tried to change the type of the variable <code>x</code> on line 152, without results. How can I solve it?</p>
| <python><machine-learning><pytorch> | 2023-07-13 13:04:07 | 1 | 636 | donut |
76,679,583 | 5,609,595 | Pre-Commit hooks on MacOs require superuser to work | <p>I am working on a python project with a MacOs laptop, I just installed pre-commit with pip install pre-commit and then pre-commit install, and when I try to commit or execute <code>pre-commit run --all-files</code> it gives the following error:</p>
<pre><code>An unexpected error has occurred: OperationalError: unable to open database file
Failed to write to log at /Users/user11/.cache/pre-commit/pre-commit.log
### version information
pre-commit version: 3.3.3
git --version: git version 2.39.2
sys.version:
3.11.2 (v3.11.2:878ead1ac1, Feb 7 2023, 10:02:41) [Clang 13.0.0 (clang-1300.0.29.30)]
sys.executable: /Library/Frameworks/Python.framework/Versions/3.11/bin/python3.11
os.name: posix
sys.platform: darwin
### error information
An unexpected error has occurred: OperationalError: unable to open database file
Traceback (most recent call last):
.....
.....
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pre_commit/store.py", line 120, in connect
with contextlib.closing(sqlite3.connect(db_path)) as db:
^^^^^^^^^^^^^^^^^^^^^^^^
sqlite3.OperationalError: unable to open database file
</code></pre>
<p>nonetheless if I execute with: <code>sudo pre-commit run --all-files</code> then it works.</p>
<p>I have tried giving permission with <code>sudo chmod ug+x</code> to /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pre_commit/* and also to /Users/user11/.cache/pre-commit/*</p>
<p>but I am still getting the same error</p>
| <python><git><macos><pre-commit-hook><pre-commit.com> | 2023-07-13 12:51:41 | 1 | 317 | Thirsty |
76,679,582 | 9,477,704 | TypeError: Could not guess image MIME subtype | <p>I'm trying to capture the images of charts and attach them to the email template using the <code>img</code> tag. I'm able to send the email local but getting this <code>TypeError: Could not guess image MIME subtype</code> in the server.</p>
<p>I'm using Gramex capture API to capture the images and using email service in Gramex. When I checked in the server, the images are stored in the <code>png</code> format and right path. Not sure where the issue is.</p>
<p>Here's the code I'm using to capture and send the email.</p>
<pre><code>def send_email(handler):
user = handler.args.get('user', ['None'])[0]
chart_selector = ['.bar-chart', '.donut-chart', '.world_map', '.headerImg']
folder = os.path.dirname(os.path.abspath(__file__))
mailer = service.email["galaxy-email-smtp"]
try:
yield service.threadpool.submit(
load_file, user, chart_selector
)
except Exception as e:
print("An error occurred:", str(e))
temp_file = gc.open('email.digest.template.formatted.html', 'template')
file_path_barchart = os.path.join(folder, 'downloaded1.png')
file_path_donutchart = os.path.join(folder, 'downloaded2.png')
file_path_worldmap = os.path.join(folder, 'downloaded3.png')
file_path_header = os.path.join(folder, 'downloaded4.png')
html_content = temp_file.generate(data={"handler": handler}).decode('utf-8')
mailer.mail(
sender=<email>,
to=user,
subject=<subject>,
html=html_content,
images={
"headerImg": file_path_header,
"ricoImg": os.path.join(folder, 'assets/img/rico.png'),
"bar-chart-img": file_path_barchart,
"donut-chart-img": file_path_donutchart,
"world_map": file_path_worldmap,
},
)
return "email has been sent"
def load_file(user, chart_selector):
url = <url> + user
for index, each_chart_selector in enumerate(chart_selector):
file_name = f"downloaded{index+1}.png" # Generate the file name
chart_capture = capture.png(
url, selector=each_chart_selector, scale=2, timeout=60
)
with open(file_name, "wb") as f:
f.write(chart_capture)
</code></pre>
<p>Here's the traceback error:</p>
<pre><code>ERROR 13-Jul 12:25:14 tornado.application:web Uncaught exception GET /send?user=<user>
HTTPServerRequest(protocol='https', host=<hostname>, method='GET', uri='/send?user=<user>, version='HTTP/1.1', remote_ip='59.160.48.210')
Traceback (most recent call last):
File "/root/anaconda3/lib/python3.9/site-packages/tornado/web.py", line 1704, in _execute
result = await result
File "/root/anaconda3/lib/python3.9/site-packages/tornado/gen.py", line 769, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/root/anaconda3/lib/python3.9/site-packages/gramex/handlers/functionhandler.py", line 53, in get
item = yield item
File "/root/anaconda3/lib/python3.9/site-packages/tornado/gen.py", line 762, in run
value = future.result()
File "/root/anaconda3/lib/python3.9/site-packages/tornado/gen.py", line 775, in run
yielded = self.gen.send(value)
File "/mnt/apps/threat-research-email-digest/email_service.py", line 705, in email_digest_req
mailer.mail(
File "/root/anaconda3/lib/python3.9/site-packages/gramex/services/emailer.py", line 147, in mail
msg = message(**kwargs)
File "/root/anaconda3/lib/python3.9/site-packages/gramex/services/emailer.py", line 203, in message
img = MIMEImage(handle.read())
File "/root/anaconda3/lib/python3.9/email/mime/image.py", line 43, in __init__
raise TypeError('Could not guess image MIME subtype')
TypeError: Could not guess image MIME subtype
</code></pre>
<p>Noticed that in Gramex <code>emailer.py</code>, we are not passing any <code>_subtype</code> value in the below code:</p>
<pre><code>for name, path in images.items():
with open(path, 'rb') as handle:
img = MIMEImage(handle.read())
img.add_header('Content-ID', f'<{name}>')
html_part.attach(img)
</code></pre>
<p>When I tried giving a subtype in this line <code>img = MIMEImage(handle.read(), 'png')</code>, it is working fine. Is there any option to add the subtype in while adding images?
Thanks in advance!</p>
| <python><gramex> | 2023-07-13 12:51:39 | 1 | 499 | Lahari |
76,679,508 | 9,472,066 | How to set multiple values in Pandas in column of dtype np.array? | <p>I have a column of Numpy arrays in Pandas, something like:</p>
<pre><code> col1 col2 col3
0 1 a None
1 2 b [2, 4]
2 3 c None
</code></pre>
<p>The <code>[2, 4]</code> is really <code>np.array([2, 4])</code>. Now I need to impute the missing values, and I have a list of arrays for that. For example:</p>
<pre><code>vals_to_impute = [np.array([1, 2]), np.array([1, 4])]
</code></pre>
<p>I tried:</p>
<pre><code>mask = col3.isna()
df.loc[mask, "col3"] = vals_to_impute
</code></pre>
<p>This results in error:</p>
<pre><code>ValueError: Must have equal len keys and value when setting with an ndarray
</code></pre>
<p>I tried converting to Numpy array, extracting column etc., nothing worked. Is it actually possible to set this in a vectorized operation, or do I have to do a manual loop?</p>
| <python><arrays><pandas><numpy> | 2023-07-13 12:41:37 | 4 | 1,563 | qalis |
76,679,340 | 11,501,976 | Can I convert "sparse" contour to "dense" contour in Python? | <p>OpenCV's <code>findContours()</code> function provides contour compression.</p>
<ol>
<li>If <code>cv2.CHAIN_APPROX_NONE</code> option is passed, contour is not approximated and every point on the contour is returned (i.e., "dense" contour).</li>
<li>If <code>cv2.CHAIN_APPROX_SIMPLE</code> option is passed, the points forming a straight line segment are compressed so that the points in between are gone (i.e., "sparse" contour).</li>
</ol>
<p>I want to transform a sparse contour to a dense contour. I considered interpolating the sparse points, but I want the "exact" result without any parameter. In other words, I want what I would get if I re-drew the contour from sparse points and re-applied <code>findContours()</code> with <code>cv2.CHAIN_APPROX_NONE</code> option. Is there any way I can achieve this?</p>
| <python><opencv> | 2023-07-13 12:23:17 | 1 | 378 | JS S |
76,679,241 | 1,305,688 | Pandas pd.DataFrame from loop-data | <p>I am new to Python. I have some data that I get from a loop. <code>cat</code> and be between two and <code>n</code> and <code>TorF</code> will always be (<code>cat*5</code>) or (<code>cat*4</code>). My gold is to create a <code>pd.DataFrame</code> from two lists, like this</p>
<pre><code>cat = ['a', 'b']
TorF = [True, True, True, False, False, True, False, False, True, True]
</code></pre>
<p>I think my current solution is kind of clumpy with the <code>int((len(man_corr_n)/len(cat)))</code>,</p>
<pre><code>import pandas as pd
data = [[c, *TorF[i:i+int((len(TorF )/len(cat)))]] for i, c in enumerate(cat)]
df = pd.DataFrame(data)
</code></pre>
<p>if there a simpler way to do it?</p>
<p>My desired output is</p>
<pre><code> 0 1 2 3 4 5
0 a True True False False True
1 b True False True False True
</code></pre>
| <python><pandas> | 2023-07-13 12:12:13 | 3 | 8,018 | Eric Fail |
76,679,153 | 2,292,133 | Setting up async class-based unittests with pytest fixtures | <p>I'm trying to set up class-based unittests using pytest and it's fixtures but can't get it working.</p>
<p>The system I'm building is built with FastAPI and Tortoise ORM.</p>
<p>conftest.py</p>
<pre><code>from app.main import app
from httpx import AsyncClient
@pytest.fixture(scope='module')
async def client():
async with AsyncClient(app=app, base_url='http://test') as c:
yield c
</code></pre>
<p>tests.py</p>
<pre><code>from tortoise.contrib.test import TestCase
class MyTestcase(TestCase):
async def test_get_google(self):
r = await client.get('/dashboard/')
assert r.status_code == 200
</code></pre>
<p>I'd like to be able to use the <code>client</code> fixture inside the test, but I can't figure out how to include it. I've followed the patterns <a href="https://tonybaloney.github.io/posts/async-test-patterns-for-pytest-and-unittest.html" rel="nofollow noreferrer">here</a> but no luck; the author isn't using async class-based tests.</p>
<p>Thanks in advance</p>
| <python><asynchronous><fastapi><tortoise-orm> | 2023-07-13 12:02:05 | 0 | 767 | Tom Hamilton Stubber |
76,678,793 | 1,485,872 | PyTorch enforce non-negative values in convolution weigths inside the model | <p>As we know, the optimization parameters used in ML are not constrained, and thus you can't really enforce parameters to be positive. In scientific articles, whenever this is needed, I see it implemented as (pseudocode):</p>
<pre><code>optimizer.step()
model.weights=max(model.weights ,0)
</code></pre>
<p>Which works. However, I was wondering if there is a way to put this constraint inside the model?
Say, given a complex model where only some weights are required to be positive, is there a way to make an update on these apply this constraint, so its part of the model, rather than part of the training? e.g. by overloading however the weights are updated?</p>
<p>This makes sense conceptually in some models, as some model definitions require some weights to have this constraint, and I would like whoever uses the model to not need to remember to apply this at training, but the model to be defined as such.</p>
<p>Or the only way to do so is by doing it as described above?</p>
| <python><pytorch> | 2023-07-13 11:20:31 | 0 | 35,659 | Ander Biguri |
76,678,733 | 2,307,570 | When to put a dot in relative imports? And what does the m-switch have to do with it? | <p>I have a simple example project with these files:</p>
<pre><code>folder
__init__.py
foo.py
foobar.py
script.py
</code></pre>
<p>In <em>foobar.py</em> the function <code>foo</code> is imported with <code>from .foo import foo</code>.<br>
(It is then used in the function <code>foobar</code>).<br>
In <em>script.py</em> the function <code>foobar</code> is imported with <code>from .foobar import foobar</code>.<br>
(It is then executed.)</p>
<p>But putting the dots in the import statement is not always the right choice.<br>
Depending on how the script is run, their presence can cause:<br>
<code>ImportError: attempted relative import with no known parent package</code><br>
Or their absence can cause:<br>
<code>ModuleNotFoundError: No module named 'foobar'</code></p>
<p>The code can be found <a href="https://github.com/entenschule/examples_py/tree/main/a001_misc/b004_relative_import" rel="nofollow noreferrer">on GitHub</a>. (There is also <a href="https://github.com/entenschule/examples_py/tree/main/a001_misc/b005_absolute_import" rel="nofollow noreferrer">the same with absolute paths</a>.)</p>
<h2>with dots</h2>
<p>This works for the two usecases that I need:</p>
<ul>
<li>run script using the <a href="https://stackoverflow.com/questions/7610001">m-switch</a>: <code>python -m project.folder.script</code></li>
<li>import <code>foobar</code> in the Python console: <code>from project.folder.foobar import foobar</code></li>
</ul>
<p>This screenshot shows how running the script works with and fails without the m-switch:</p>
<p><a href="https://i.sstatic.net/uYROP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uYROP.png" alt="overview with dots" /></a></p>
<h2>without dots</h2>
<p>Gladly there is no need to run the script without the m-switch.<br>
(Although it would be nice, because the IDE has a button for that.)</p>
<p>This screenshot shows how running the script fails with and works without the m-switch:</p>
<p><a href="https://i.sstatic.net/U6MWZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U6MWZ.png" alt="enter image description here" /></a></p>
<h2>paths in detail</h2>
<p>It seems worthwhile to take a closer look at <code>__name__</code> and <code>__file__</code> in all cases.<br>
<code>__name__</code> of _script.py is always the string <code>'__main__'</code>.</p>
<p><a href="https://i.sstatic.net/Z6hYT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z6hYT.png" alt="enter image description here" /></a></p>
<h2>conclusion</h2>
<p>Apparently for me this just boils down to having to use the dot and the m-switch.<br>
(This is just mildly annoying, because my IDE does not have auto-completion for it.)</p>
<p>But I would still like to understand, why there are these two ways of doing essentially the same thing.
People often say, that the dot is for going up a directory (like <code>../</code> in HTML and PHP).<br>
But there is something wrong with that, because the location is the same in both cases.<br>
(We just talk about neighboring files in the same folder.)</p>
<p>This must be far from the first question about this topic.<br>
I hope it is clear enough to compensate for that redundancy.</p>
| <python><python-import><relative-import> | 2023-07-13 11:14:40 | 1 | 1,209 | Watchduck |
76,678,487 | 5,678,653 | python request equivalent to curl -L | <p>curl -L retains headers across redirects.</p>
<p>I am currently looking at a third party URL which with a normal curl (allows redirects) is as follows <em>(This is the same response I get using a python request.)</em></p>
<pre><code>$curl https://[xxx].fr/[xxxx]
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>307 Temporary Redirect</title>
</head><body>
<h1>Temporary Redirect</h1>
<p>The document has moved <a href="/redirect?token=lfbzjv3m2yabpbb8tggjjmsgz2">here</a>.</p>
</body></html>
</code></pre>
<p>using curl -L, I see the following</p>
<pre><code>$curl -L https://[xxx].fr/[xxxx]
<html>
<head>
<title>Detection des bots</title>
</head>
<body>
<h1>Bot detecte.</h1>
</body>
</html>
</code></pre>
<p>Despite that this reveals 3rd party cruft at the server, I want to ensure I get the "Detection des bots" message rather than the "307 Temporary Redirect" message.</p>
<p>I will include the actual url as a comment (and will subsequently delete it once a solution is found, as the url is likely to be modified).</p>
<p>My purpose is to provide a meaningful third party link status report - there is no malign intent. 'Detection des bots' is good enough for me.</p>
| <python><curl><python-requests> | 2023-07-13 10:45:06 | 1 | 2,248 | Konchog |
76,678,447 | 1,581,090 | How to use `pexpect` on windows to read the "welcome message" like `telnetlib3`? | <p>I want to use <code>pexpect</code> on Windows to open a telnet connection and to be able to read the "welcome message" (i.e. the output you get when you connect to some device). This is working with <code>telnetlib3</code> using the following code</p>
<pre><code>import asyncio
import telnetlib3
async def foo():
reader, writer = await telnetlib3.open_connection("192.168.200.10", 9000)
data = await asyncio.wait_for(reader.read(4096), timeout=2)
print(data)
asyncio.run(foo())
</code></pre>
<p>but this unfortunately uses asynchronous methods and hence is very cumbersome to use.</p>
<p>To avoid these issues I want to use <code>pexpect</code> to do the same, but with the following code I am not able to get that "welcome message" as for <code>telnetlib3</code>:</p>
<pre><code>import time
from pexpect import popen_spawn
session = popen_spawn.PopenSpawn("telnet 192.168.200.10 9000")
time.sleep(5)
data = session.read_nonblocking(4096, 1)
print(data)
</code></pre>
<p>In that case, I do not get any output. Is there something I am missing? How to get the "welcome message" using <code>pexpect</code> on Windows?</p>
<p>Running the command</p>
<pre><code>telnet 192.168.200.10 9000
</code></pre>
<p>in a Windows terminal works as expected, i.e. I get this "welcome message".</p>
<p>The problem seems to be part of the "telnet" "negotiation" when the server sends commands like "IAC" and "DO", which I do not read on the client side (see <a href="https://datatracker.ietf.org/doc/html/rfc1091" rel="nofollow noreferrer">HERE</a>). But why does this work for telnetlib3 but not for telnetlib? What is the difference?</p>
| <python><windows><telnet><pexpect> | 2023-07-13 10:40:33 | 3 | 45,023 | Alex |
76,678,420 | 6,494,707 | How to extract the spectra range within a roi mask? | <p>I am learning hyperspectral data analysis, so my question may sound simple.</p>
<p>I am reading a hypercube by using the following command:</p>
<pre><code>import spectral.io.envi as envi
hc = envi.open('cube_envi32.hdr','cube_envi32.dat')
</code></pre>
<p>'hc' has the following shape:</p>
<pre><code># Rows: 512
# Samples: 640
# Bands: 92
Interleave: BSQ
Quantization: 32 bits
Data format: float32
(512, 640, 92)
</code></pre>
<p>I want to extract the spectral (or pixel values of within a specific binary mask, as shown with rectangle here:</p>
<p><a href="https://i.sstatic.net/1S7Zf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1S7Zf.png" alt="enter image description here" /></a></p>
<p>My question includes two parts:</p>
<ol>
<li>which python library is suitable for spectra analysis and working with hypercubes?</li>
<li>what command should I write to extract the spectra values of the region of interest?</li>
</ol>
<p>Thanks</p>
| <python><signal-processing><hypercube><spectral-python> | 2023-07-13 10:37:04 | 1 | 2,236 | S.EB |
76,678,291 | 1,716,525 | map list based on dict internal values | <p>I have a list like:</p>
<pre><code>[{'sample': {'sample_context': {'board': u'9', 'encoder': u'1'}, 18421: u'1191'}}, {'sample': {'sample_context': {'board': u'5', 'encoder': u'1'}, 18422: u'2251'}}, {'sample': {'sample_context': {'board': u'9', 'encoder': u'1'}, 18423: u'2291'}}, {'sample': {25680: u'3321', 'sample_context': {'board': u'2', 'encoder': u'1'}}}, {'sample': {'sample_context': {'board': u'9', 'encoder': u'1'}, 29100: u'5591'}}]
</code></pre>
<p>How could I separate values i.e. <code>18421: u'1191', 18422: u'2251'</code> etc based per <code>sample_context</code> field</p>
<p>I tried to do but it doesn't work, I think I need to map values based on sample_context field</p>
<pre><code>[sample_sent['sample'] for sample_sent in x]
</code></pre>
<p>The goal is to answer give me all the values with context lets say: <code>{'board': u'9', 'encoder': u'1'}</code></p>
| <python><list><python-2.7><dictionary> | 2023-07-13 10:18:56 | 1 | 3,336 | Kumar Roshan Mehta |
76,678,289 | 16,452,929 | Create endpoint which can be called by other computers on same machine using flask framework | <p>I am currently trying to create an endpoint which can be called by other computers on the same network as me. Following is the sample code I am working with</p>
<pre><code>from flask import Flask, request, jsonify
import requests
app = Flask(__name__)
countries = [
{"id": 1, "name": "Thailand", "capital": "Bangkok", "area": 513120},
{"id": 2, "name": "Australia", "capital": "Canberra", "area": 7617930},
{"id": 3, "name": "Egypt", "capital": "Cairo", "area": 1010408},
]
@app.get("/countries")
def get_countries():
return jsonify(countries)
</code></pre>
<p>Currently the above endpoint can be called at</p>
<pre><code>http://127.0.0.1:5000/countries
</code></pre>
<p>from my own PC. How can I configure it such that it can be called from other computers on my network. I tried replacing <code>127.0.0.1</code> with my PC name but connection was refused.</p>
| <python><rest><flask> | 2023-07-13 10:18:37 | 1 | 517 | CS1999 |
76,678,275 | 7,301,792 | How could get weights considered in DataFrame.describe? | <p>I have such a sample with student's score and population of the score:</p>
<pre class="lang-py prettyprint-override"><code># Create the DataFrame
sample = pd.DataFrame(
{'score':[595, 594, 593, 592, 591, 590, 589, 588, 587, 586, 585, 584, 583,582, 581, 580, 579, 578, 577, 576],
'population':[ 705, 745, 716, 742, 722, 746, 796, 750, 816, 809, 815,821, 820, 865, 876, 886, 947, 949, 1018, 967]})
</code></pre>
<p>The I calculate it's weigthed average of scores:</p>
<pre class="lang-py prettyprint-override"><code>np.average(sample['score'], weights=sample['population'])
# 584.9062443219672
</code></pre>
<p>However, when I run sample.describe(), it not get weights considered:</p>
<pre class="lang-py prettyprint-override"><code>sample.describe()
score population
count 20.00000 20.000000
mean 585.50000 825.550000
std 5.91608 91.465539
min 576.00000 705.000000
25% 580.75000 745.750000
50% 585.50000 815.500000
75% 590.25000 878.500000
max 595.00000 1018.000000
</code></pre>
<p>How could get weights included in sample.describe()?</p>
| <python><pandas><dataframe><numpy> | 2023-07-13 10:16:30 | 1 | 22,663 | Wizard |
76,678,185 | 4,356,169 | BeautifulSoup previous_sibling not working | <p>Some items have title but some don't, sample html like this:</p>
<pre><code><div id="content">
<h5>Title1</h5>
<div class="text">text 1</div>
<h5>Title2</h5>
<div class="text">text 2</div>
<div class="text">text 3</div>
<div class="text">text 4</div>
</div>
</code></pre>
<p>Tried to get all the class <code>text</code>, and get their titles <code>h5</code>(if any).</p>
<p><code>find_previous_sibling</code> can get the title, but the last two <code>text</code> also list the title which is not owned by them.</p>
<p>and also tried <code>previous_sibling</code>, then judge whether it is <code>h5</code> or <code>div</code>, <code>h5</code> as title, but it returns nothing.</p>
<pre><code>html = BeautifulSoup(response.text,'lxml')
content = html.find('div',{'id': 'content'})
paras = content.find_all('div', {'class': 'text'})
for para in paras:
title = p.find_previous_sibling('h5')
if title:
print(title.get_text())
pr = para.previous_sibling
if pr:
print(pr)
</code></pre>
| <python><html><beautifulsoup> | 2023-07-13 10:04:31 | 1 | 1,088 | jdleung |
76,678,160 | 17,194,313 | How do I efficiently hash a Polars dataframe? | <p>I am trying to implement some caching logic on a function that acts on a Polars dataframe (this is all in Python).</p>
<p>To avoid needlessly re-computing the result, it'd be great if I could quickly check if the dataframe has changed - ie. a hash comparison.</p>
<p>I am currently using:</p>
<pre class="lang-py prettyprint-override"><code>_my_hash = df.hash_rows().sum() # int
</code></pre>
<p>But curious to know if there are better options.</p>
| <python><python-polars> | 2023-07-13 10:01:03 | 2 | 3,075 | MYK |
76,678,109 | 941,397 | How to return a pdf from an Azure App Service site with Python | <p>I am trying to return a PDF from an Azure App Service Python web site using <code>Flask</code> and <code>fpdf</code>. I used the Azure Flask example as a starter and it worked great. I then added <code>fpdf</code> in the <code>requirements.txt</code> and looked at the Azure log stream and I can see it being successfully installed.</p>
<p>I added the following code to my <code>app.py</code>.</p>
<pre><code>import os
from flask import (Flask, redirect, render_template, request,
send_from_directory, url_for, make_response)
from fpdf import FPDF
app = Flask(__name__)
@app.route('/')
def index():
print('Request for index page received')
return render_template('index.html')
@app.route('/favicon.ico')
def favicon():
return send_from_directory(os.path.join(app.root_path, 'static'),
'favicon.ico', mimetype='image/vnd.microsoft.icon')
@app.route('/hello', methods=['POST'])
def hello():
name = request.form.get('name')
if name:
print('Request for hello page received with name=%s' % name)
return render_template('hello.html', name = name)
else:
print('Request for hello page received with no name or blank name -- redirecting')
return redirect(url_for('index'))
@app.route('/jpg_to_pdf/<name>')
def jpg_to_pdf(name):
pdf = FPDF()
pdf.add_page()
pdf.image(os.path.join(app.root_path, 'static/iamges/'+ name + '.jpg'), 50, 50)
response = make_response(pdf.output(dest='S').encode('latin-1'))
response.headers.set('Content-Disposition', 'attachment', filename=name + '.pdf')
response.headers.set('Content-Type', 'application/pdf')
return response
@app.route('/pdf/')
def pdf():
# save FPDF() class into a variable pdf
pdf = FPDF()
# Add a page
pdf.add_page()
# set style and size of font
# that you want in the pdf
pdf.set_font("Arial", size = 15)
# create a cell
pdf.cell(200, 10, txt = "GeeksforGeeks", ln = 1, align = 'C')
# add another cell
pdf.cell(200, 10, txt = "A Computer Science portal for geeks.", ln = 2, align = 'C')
# save the pdf with name .pdf
# pdf.output("GFG.pdf")
response = make_response(pdf.output(dest='S').encode('latin-1'))
response.headers.set('Content-Disposition', 'attachment', 'GMT.pdf')
response.headers.set('Content-Type', 'application/pdf')
return response
</code></pre>
<p>However when I try the <code>pdf</code> and `jpg_to_pdf/test' routes I get the following error:</p>
<pre><code>Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
</code></pre>
<p>I'm not sure how to enable HTTP logging or where I can see what the error is. Can anyone help to guide me in how to debug and fix this?</p>
| <python><flask><pdf><azure-web-app-service><fpdf> | 2023-07-13 09:55:23 | 1 | 8,133 | Superdooperhero |
76,677,910 | 1,448,641 | Structuring documentation with sphinx autodoc | <p>So, I have two modules:</p>
<pre><code>"""
Module 1
=========
"""
def foo():
"""foo doc"""
</code></pre>
<p>and</p>
<pre><code>"""
Module 2
=========
"""
def bar():
"""bar doc"""
</code></pre>
<p>Then I have a rst file that goes like</p>
<pre><code>*************
Some heading
*************
.. automodule:: package.module1
:members:
.. automodule:: package.module2
:members:
</code></pre>
<p>The build gives me a side bar structure like this:</p>
<pre><code>[+] Some Heading
Module 1
foo
Module 2
bar
</code></pre>
<p>Obviously, the headings from the module doc strings are on the same level as the members of the same module. I don't like this.</p>
<p>Instead I found, that moving the headings from the module doc strings to the rst file like</p>
<pre><code>*************
Some heading
*************
Module 1
=========
.. automodule:: package.module1
:members:
Module 2
=========
.. automodule:: package.module2
:members:
</code></pre>
<p>gives me the structure that I expected, which is:</p>
<pre><code>[+] Some Heading
[+] Module 1
foo
[+] Module 2
func1
bar
</code></pre>
<p>How can I get autodoc (or sphinx?) to turn headings in module doc strings to expandable sections in the side bar?</p>
| <python><documentation><python-sphinx><autodoc> | 2023-07-13 09:33:38 | 1 | 5,519 | MaxPowers |
76,677,885 | 4,422,949 | Where is cget in membership test in tkinter? | <p>I spent quite a while looking at the line 17 (see setaview() below) and trying to comprehend where were the int, string and concatenation.</p>
<pre><code>if clsname in vcache22: # line 17
</code></pre>
<p>The error</p>
<pre><code>TypeError: can only concatenate str (not "int") to str
</code></pre>
<p>was introduced in the line 21: The right line should be:</p>
<pre><code>vcache22[clsname] = aview = cls()
</code></pre>
<p>The terminal session:</p>
<pre><code>$ python setaview.py
setaview(Track)
Exception in Tkinter callback
Traceback (most recent call last):
File "/usr/lib/python3.9/tkinter/__init__.py", line 1892, in __call__
return self.func(*args)
File "/home/vlz/Documents/ARCHIVE/musicbox/setaview.py", line 36, in <lambda>
command=lambda x=t: setaview(x))
File "/home/vlz/Documents/ARCHIVE/musicbox/setaview.py", line 17, in setaview
if clsname in vcache22:
File "/usr/lib/python3.9/tkinter/__init__.py", line 1652, in cget
return self.tk.call(self._w, 'cget', '-' + key)
TypeError: can only concatenate str (not "int") to str
</code></pre>
<p>The function with the error:</p>
<pre><code>def setaview(clsname):
global vcache22, aview
if clsname in vcache22: # line 17
aview = vcache22[clsname]
else:
cls = getattr(sys.modules['__main__'], clsname)
vcache22 = aview = cls() # line21
# To indicate that smth has done
print(f'setaview({clsname})')
</code></pre>
<p>This is the contrived example to isolate the problem I found in other longer code. The full script is below:</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
import tkinter as tk
from tkinter import ttk
aview = None
vcache22 = {}
class Track(tk.Listbox): pass # noqa
class Album(tk.Listbox): pass # noqa
class Playlist(tk.Listbox): pass # noqa
def setaview(clsname):
global vcache22, aview
if clsname in vcache22:
aview = vcache22[clsname]
else:
cls = getattr(sys.modules['__main__'], clsname)
vcache22 = aview = cls()
# To indicate that smth has done
print(f'setaview({clsname})')
def main():
root = tk.Tk()
box = tk.Frame(root)
box.grid(column=0, row=0)
selected = tk.StringVar()
i = 0
for t in 'Track', 'Album', 'Playlist':
rb = ttk.Radiobutton(
box, text=t, value=t, variable=selected,
command=lambda x=t: setaview(x))
rb.grid(column=i, row=0)
i += 1
if t == 'Track':
rb.invoke()
tk.mainloop()
if __name__ == '__main__':
main()
</code></pre>
<p>Using:<br />
Python 3.9.2<br />
Debian GNU/Linux 11 (bullseye)</p>
| <python><python-3.x><tkinter> | 2023-07-13 09:30:46 | 1 | 529 | Vladimir Zolotykh |
76,677,515 | 11,227,857 | Why are my environmental variables adding whitespace? | <p>I'm running Debian 12 and Python 3.9.16.</p>
<p>test.env:</p>
<pre><code>export TEST="ABC"
export SECONDTEST="DEF"
</code></pre>
<p>I run <code>source test.env</code></p>
<p>Then try to run this script in Python:</p>
<pre><code>import os
test1 = os.getenv("TEST")
test2 = os.getenv("SECONDTEST")
def printenv(envvar):
print(envvar)
for char in envvar:
print(char)
print("done")
if envvar == "ABC" or envvar == "DEF":
print("ok!")
else:
print("Not ok!")
printenv(test1)
printenv(test2)
printenv(test1.rstrip())
</code></pre>
<p>The environmental variables add a new line character to the end. The output is:</p>
<pre><code>A
B
C
done
Not ok!
DEF
D
E
F
done
Not ok!
ABC
A
B
C
done
ok!
</code></pre>
<p>Why are there newline characters in my environmental variables?</p>
| <python><linux><debian> | 2023-07-13 08:47:35 | 1 | 530 | gazm2k5 |
76,677,507 | 15,958,062 | F-string not formatting floats into an integer | <p>Usually, I use <code>%</code> string formatting. But, as I discovered <code>f-string</code> is the new way of string formatting and it is faster as well, I want to use it. <br/>
But, I am facing a problem when formatting a float into an integer using <code>f-string</code>. Following is what I have tried:<br/>
Using <code>%</code> string formatting</p>
<pre><code>'%5d'%1223 # yields --> ' 1223'
'%5d'%1223.555 # yields --> ' 1223'
</code></pre>
<p>Using <code>f-string</code> formatting</p>
<pre><code>f'{1223:5d}' # yields --> ' 1223' ==> correct
f'{1223.555:5d}' # gives error
# error is "ValueError: Unknown format code 'd' for object of type 'float'"
</code></pre>
<p>Am I missing something?</p>
| <python><python-3.x><string><f-string> | 2023-07-13 08:46:43 | 1 | 924 | Satish Thorat |
76,677,386 | 815,443 | How can I format string with thousands separator on Mac with the Python locale library? | <p>I'd like to format a number with <code>.</code> for thousands separator and <code>,</code> for decimal point. I know I could do that with a triple <code>replace</code> and the <code>{:,}</code> format.</p>
<p>Is there a cleaner way to do it with the standard library <code>locale</code>?</p>
<p>First I found that even today the Mac <code>de_DE</code> locale does not include the thousands separator and you need to use the trick</p>
<pre><code>locale._override_localeconv["thousands_sep"] = "."
locale._override_localeconv["grouping"] = [3, 0]
</code></pre>
<p>But next I do not know how to get the desired result</p>
<pre><code>import locale
locale.setlocale(locale.LC_ALL, 'de_DE')
locale._override_localeconv["thousands_sep"] = "."
locale._override_localeconv["grouping"] = [3, 0]
x=123456.789
# -> expected result "123.456,789"
"{:n}".format(x)
# "123457" : rounding and no grouping
locale.format_string('%f', x, grouping=True)
# "123.456,789000" : extra zeros
locale.format_string('%g', x, grouping=True)
# "123.457" : rounding
"{:,}".format(x)
# "123,456.789" : no locale
</code></pre>
<p>Is there any way I can use <code>locale</code> to get the result <code>"123.456,789"</code>?</p>
| <python><locale> | 2023-07-13 08:31:20 | 1 | 12,817 | Gere |
76,677,247 | 1,920,368 | How to get object's actual rotation after freeze? | <p><strong>Hi, how can I get object's actual rotation after freeze?</strong></p>
<p>For instance :</p>
<pre><code># create a cube
CudeTransformNode = cmds.polyCube()[ 0 ]
# rotate X 20 degree.
cmds.setAttr( f"{CudeTransformNode}.rotateX" , 20 )
# * now its like
# - freezed rotation X : 0
# - rotation translation X : 20
# - actual rotation X : 20
# freeze translation.
cmds.makeIdentity( CudeTransformNode , a = 1 , r = 1 )
# * then its like
# - freezed rotation X : 20
# - rotation translation X : 0
# - actual rotation X : 20
# for test, rotate X 30 more degree.
cmds.setAttr( f"{CudeTransformNode}.rotateX" , 30 )
# * now its like
# - freezed rotation X : 20
# - rotation translation X : 30
# - actual rotation X : 50
# From here
# how to get actual rotation
Foo() == 50
# or how to get freezed rotation
Boo() == 20
</code></pre>
<p>** Above example, my question is how can we get the real rotation??( how to get 50 or 20 )**</p>
<p>Because every method I found is only telling you how to get current rotation( * rotation translation )</p>
<p>For reference :</p>
<ul>
<li><a href="https://www.akeric.com/blog/?p=1067" rel="nofollow noreferrer">https://www.akeric.com/blog/?p=1067</a></li>
<li><a href="https://stackoverflow.com/questions/68937114/getting-rotation-from-matrix-openmaya">Getting rotation from matrix, OpenMaya</a></li>
<li><a href="https://stackoverflow.com/questions/56900173/is-there-a-way-to-calculate-3d-rotation-on-x-and-y-axis-from-a-4x4-matrix/56950130#56950130">Is there a way to calculate 3D rotation on X and Y axis from a 4x4 matrix</a></li>
</ul>
<p>All these are telling you to use Matrix to get rotation, but the Matrix returned from native commands are always reflecting the translated values only. Therefore, in above example, the calculated output will always be 30( current rotation ).</p>
<p>For instance :</p>
<pre><code>import maya.api.OpenMaya as om
Matrix = cmds.xform( CudeTransformNode, q = 1 , m = 1 )
_M = om.MMatrix( Matrix )
_TM = om.MTransformationMatrix( _M )
_rotation_radiants = _TM.rotation()
_rotation_radiants[ 0 ] <-- # equals to 30 degree
# But I want to get 20 or 50...
</code></pre>
<p><strong>Maybe the question is more correct to say, how to get overall rotation matrix?</strong></p>
<p>Thank you for your advices!!</p>
| <python><math><matrix><rotation><maya> | 2023-07-13 08:12:35 | 1 | 4,570 | Micah |
76,677,184 | 9,443,671 | How can I split a dataframe into multiple dataframes based on timestamp? | <p>Let's say I have a dataframe which contains content and different timestamps for each entry; how can I split the dataframe into two (or more) dataframes based on the difference in timestamps? For example, if an entry starts 6 hours after the one preceeding it, then I'd like all the rows coming after it (which are within 6 hour time interval) to be grouped together. You may assume that the timestamps come in ascending order in the dataframe.</p>
<p>Here's an example df:</p>
<pre><code> content timestamp
0 Content_100 2023-06-28 21:15:29.605277
1 Content_91 2023-07-10 21:15:29.605277
2 Content_49 2023-07-11 21:15:29.605277
3 Content_76 2023-06-22 21:15:29.605277
4 Content_42 2023-07-04 21:15:29.605277
5 Content_39 2023-07-04 21:15:29.605277
6 Content_76 2023-06-29 21:15:29.605277
7 Content_85 2023-06-27 21:15:29.605277
</code></pre>
| <python><pandas><dataframe><data-wrangling> | 2023-07-13 08:02:54 | 1 | 687 | skidjoe |
76,677,004 | 19,580,067 | Split a column into 2 columns like alphabetic text in one column and alphanumeric or numbers or anything in 2nd column | <p>I have a dataframe column which contains product and technical details merged. I just want to split them separately into 2 columns like actual product name in one column and other technical details in one column.</p>
<p>I tried to solve the problem using regex and splitted the technical details separately, but the product name was going null wherever the technical details get splitted. not sure what went wrong.</p>
<pre><code>This is the dataframe I tried
df = pd.DataFrame({'Description': ['WASHER tey DIN6340 10.5 C 35;', 'CABINET EL', 'CYLINDER SCREW', 'M12x N15']})
Code:
df['Technical Data'] = df['Description'].str.extract(r'^.*?(\s\w*\d+\w*\s.*)$')
df['Product Description'] = df['Description'].apply(lambda x: re.sub(r'^.*?(\w*\d+\w*\s.*)$', '', x))
</code></pre>
<p>The result I'm getting is
<a href="https://i.sstatic.net/xjerh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xjerh.png" alt="enter image description here" /></a></p>
<p>So I want the output to be like this</p>
<p><a href="https://i.sstatic.net/OLcoK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OLcoK.png" alt="enter image description here" /></a></p>
<p>Any suggestions on how to do that??</p>
| <python><regex><dataframe> | 2023-07-13 07:42:19 | 1 | 359 | Pravin |
76,676,986 | 6,455,731 | Using functools.wraps to get signature and type hints for free? | <p>I recently found myself using <code>functools.wraps</code> in order to get a signature and parameter type hints for free.</p>
<p>E.g. I have a class that features some rendering functionality and I 'borrow' the signature of <code>builtins.open</code> via <code>functools.wraps</code> like so:</p>
<pre><code>import functools
from typing import Callable
class Cls:
def render(self, call: Callable) -> None:
...
@functools.wraps(open)
def render_to_file(self, *args, mode="w", **kwargs) -> None:
with open(*args, mode=mode, **kwargs) as f:
self.render(f.write)
</code></pre>
<p>Has this some caveats I am missing or can I continue to use <code>functools.wraps</code> like this?</p>
| <python><functools> | 2023-07-13 07:38:05 | 1 | 964 | lupl |
76,676,913 | 6,162,092 | pdf2docx is not able to join tables that span across two pages | <p>I have quite simple code:</p>
<pre><code>def pdf_to_docx(pdf_file, docx_file):
try:
cv = Converter(pdf_file)
tables = cv.extract_tables()
cv.close()
doc = Document()
for table in tables:
#do stuff
</code></pre>
<p>However, I have a table which spans across two pages, when I look into 'Tables' it shows this Table as 2 different ones.</p>
<p>The issue I face is that because I don't have the header on the second page, my code cannot work out how to join them together.</p>
<p>Any idea how I can resolve this or how to join them up easily?</p>
<p>My end goal is to convert all tables in a PDF into Excel (i have to convert to word as some files I use are word and others are PDFs.</p>
<p>Thanks</p>
| <python><pypdf> | 2023-07-13 07:28:51 | 0 | 467 | SCramphorn |
76,676,841 | 1,367,705 | X-HTTP-Method-Override in Flask's Python Web server | <p>I'm learning HTTP and I try to learn how different headers work. I want to implement a simple HTTP server using Flask and Python just to test the<code>X-HTTP-Method-Override</code> header.</p>
<p>My code:</p>
<pre><code>class HTTPMethodOverrideMiddleware(object):
allowed_methods = frozenset(
["GET", "HEAD", "POST", "DELETE", "PUT", "PATCH", "OPTIONS"]
)
bodyless_methods = frozenset(["GET", "HEAD", "OPTIONS", "DELETE"])
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
method = environ.get("HTTP_X_HTTP_METHOD_OVERRIDE", "").upper()
if method in self.allowed_methods:
environ["REQUEST_METHOD"] = method
if method in self.bodyless_methods:
environ["CONTENT_LENGTH"] = "0"
return self.app(environ, start_response)
from flask import Flask, request
app = Flask(__name__)
@app.route("/")
def index():
return request.method
app.wsgi_app = HTTPMethodOverrideMiddleware(app.wsgi_app)
app.run(host="0.0.0.0", port=9090)
</code></pre>
<p>When I'm sending this curl request: <code>curl -X GET "http://127.0.0.1:9090"</code> server returns <code>GET</code>, which is OK, but with this: <code>curl -X GET "http://127.0.0.1:9090" --header "X-HTTP-Method-Override : POST"</code> it still shows <code>GET</code> instead of <code>POST</code>.</p>
<p>What is wrong? The code or curl?</p>
| <python><http><flask><http-headers> | 2023-07-13 07:19:31 | 1 | 2,620 | mazix |
76,676,758 | 9,985,032 | Compile and load a dll without saving to disk | <p>I'm using dynamic code generation with ctypes, currently to load my compiled code I have to create a DLL on disk:</p>
<pre><code>import pathlib
import subprocess
import ctypes
f = '''
#include <stdio.h>
__declspec(dllexport) int add(int a, int b) {
return a + b;
}
'''
subprocess.check_output(args=('clang -shared -x c - -o lib.dll').split(), input=f.encode('utf-8'))
lib = ctypes.CDLL(str(pathlib.Path().absolute() /'lib.dll'))
lib.add.argtypes = [ctypes.c_int, ctypes.c_int]
lib.add.restype = ctypes.c_int
print(lib.add(1, 2))
</code></pre>
<p>This is slow and wasteful since I'm never going to reuse the dll again, is there any way to load the compiled DLL directly to memory without using a hard drive?</p>
| <python><clang><ctypes> | 2023-07-13 07:08:16 | 0 | 596 | SzymonO |
76,676,582 | 2,742,533 | Trigger and read jenkin console logs | <p>I am planning to automate triggering jenkins job and read console logs to validate.
Currently I am able to trigger jenkins build using java.
Is there a way to read console logs using java?</p>
<p>Which other language will be useful to perform the same task?</p>
| <python><java><jenkins> | 2023-07-13 06:43:22 | 1 | 363 | Venu |
76,676,419 | 859,191 | Connecting a Qt5 Designer design with a slot function in my main() code without modifying the Designer file | <p>Being new in using Qt5 and Object Oriented Programming, I try to use the Designer for the design of my GUI.</p>
<p>I have foreseen a button that calls a custom slot once clicked. In Qt5 Designer I used a custom slot automatically set to slot1(). I save the design into an .ui file and I run pyuic5 in order to obtain a .py file for my GUI.</p>
<p>This .py file I can run but it gives me an AttributeError stating that the object 'Ui_HomeWindow' has no attribute 'slot1'.</p>
<p>This is as I expected because I did not provide the slot1 function in the class. From Designer I understand that the .py file obtained after running pyuic5 should not be modified (fully understandable).</p>
<p>I need to define the slot1 function in, let say, my main() so that it gets accepted as a method of the class as generated by Designer.</p>
<p>Having read a lot about polymorphism and inheritance I get completely lost (I am new in OOP).</p>
<p>Inheriting from the class defined by Designer after running pyuic5 will allow me to reuse methods in another class defined by myself. In that class I can then define the slot1 function.
Where I get stuck is that by clicking a button, the "Designer" generated class needs access to the slot1 function defined in my class. Translated in OOP concepts, the parent class (resulting from Designer) needs to access a function defined in a child class (defined by me).</p>
<p>Is this correct? Is this possible, and how can I make this happen? Could someone point me to a simple explanation please? What I read was always sophisticated and complex, while I cannot believe that one needs to be a guru in OOP for being able to use Qt5 Designer. Thank you very much in advance for the hints.</p>
| <python><qt5><qt-designer><signals-slots><python-3.10> | 2023-07-13 06:16:14 | 1 | 385 | noste99 |
76,676,293 | 11,939,660 | Poetry fail to find package while pip can | <p>I was trying to install the package <a href="https://pypi.org/project/google-cloud-aiplatform/" rel="nofollow noreferrer">google-cloud-aiplatform</a>. I have the following lines in my <code>pyproject.toml</code> file</p>
<pre><code>[tool.poetry.dependencies]
google-cloud-aiplatform = "^1.28"
</code></pre>
<p>However, running <code>poetry install</code> I got the error</p>
<pre><code>Because my_package depends on google-cloud-aiplatform (^1.28) which doesn't match any versions, version solving failed.
</code></pre>
<p>I don't know how to fix the issue, pip can install the package with no issues.</p>
<p><strong>UPDATE</strong></p>
<p>I solved the issue by deleting the lock file and run <code>poetry install</code> again. After further investigating, I found that the correct workflow to add a new dependency to your poetry project is</p>
<ol>
<li>use <code>poetry add [new_deps]</code></li>
<li>run <code>poetry install</code> again.</li>
</ol>
<p>I had this issue because I manually edited the <code>pyproject.toml</code> file. I still think this is a bug on the poetry side, the tool should allow user to edit config file. A perfect example would be Cargo for Rust.</p>
| <python><pip><python-poetry> | 2023-07-13 05:51:18 | 0 | 421 | Hongtao Yang |
76,676,288 | 2,802,576 | Invalid behavior of __main__.py and __init__.py files | <p>I am currently working on a Python package. The package can be imported or it can also run from command line with <code>-m</code> switch. Inside the package I am building a dependency injection container using - <a href="https://python-dependency-injector.ets-labs.org/" rel="nofollow noreferrer">python-dependency-injector</a>.</p>
<p>Now, here is my understanding of the <code>__init__.py</code> and <code>__main__.py</code> files -</p>
<ol>
<li>If package is imported - the content of <code>__init__.py</code> is executed.</li>
<li>If package is ran from command line with <code>-m</code> - the content of <code>__main__.py</code> is executed.</li>
</ol>
<p>For me weirdly, in either cases content of both files is being executed. Either I import the package or run it from command line the control flows through <code>__init__.py</code> to <code>__main__.py</code> and so on. All I am doing inside these files is building the dependency injection container -</p>
<pre><code>from .containers import Container
container = Container()
container.init_resources()
container.wire(packages=[__name__])
</code></pre>
<p>When I run the package from command line I also get a RunTimeWarning as -
<code>RunTimeWarning: 'package.__main__.py' found in sys_modules after import of package 'package', but prior to execution of 'package.__main__'; this may result in unpredictable behavior</code></p>
<p>And to fix this I added <code>del sys.modules['package.__main__']</code> in <code>__init__.py</code>.</p>
<p>Here is my folder structure -</p>
<pre><code>src
\package
\__init__.py
\__main__.py
\readers
\__init__.py
\reader_module.py
\writers
\__init__.py
\writer_module.py
\utility
\__init__.py
\container_module.py
test.py <-- imports the package
</code></pre>
| <python><python-3.11> | 2023-07-13 05:50:57 | 1 | 801 | arpymastro |
76,675,929 | 1,758,952 | Vertex AI endpoint 500 Internal Server Error | <p>I tried to deploy a custom container to Vertex AI endpoint using LLM model (PaLM), the container is successfully deployed to the endpoint with the following code and dockerfile. But when I tried to query it with Vertex AI API or gcloud cli, I get a 500 Internal Server Error reply.</p>
<p>May I know what's the cause of this error?</p>
<p>Am I using the right way to deploy the model?</p>
<p>Python Code</p>
<pre><code>import uvicorn
#import tensorflow as tf
import os
import numpy as np
#from enum import Enum
#from typing import List, Optional
#from pydantic import BaseModel
from fastapi import Request, FastAPI, Response
from fastapi.responses import JSONResponse
from langchain.vectorstores.matching_engine import MatchingEngine
from langchain.agents import Tool
from langchain.embeddings import VertexAIEmbeddings
from vertexai.preview.language_models import TextGenerationModel
embeddings = VertexAIEmbeddings()
INDEX_ID = "<index id>"
ENDPOINT_ID = "<index endpoint id>"
PROJECT_ID = '<project name>'
REGION = 'us-central1'
DOCS_BUCKET='<bucket name>'
TEXT_GENERATION_MODEL='text-bison@001'
def matching_engine_search(question):
vector_store = MatchingEngine.from_components(
index_id=INDEX_ID,
region=REGION,
embedding=embeddings,
project_id=PROJECT_ID,
endpoint_id=ENDPOINT_ID,
gcs_bucket_name=DOCS_BUCKET)
relevant_documentation=vector_store.similarity_search(question, k=8)
context = "\n".join([doc.page_content for doc in relevant_documentation])[:10000] #[:10000]
return str(context)
app = FastAPI(title="Chatbot")
AIP_HEALTH_ROUTE = os.environ.get('AIP_HEALTH_ROUTE', '/health')
AIP_PREDICT_ROUTE = os.environ.get('AIP_PREDICT_ROUTE', '/predict')
#class Prediction(BaseModel):
# response: str
@app.get(AIP_HEALTH_ROUTE, status_code=200)
async def health():
return {'health': 'ok'}
@app.post(AIP_PREDICT_ROUTE)#,
#response_model=Predictions,
#response_model_exclude_unset=True
async def predict(request: Request):
body = await request.json()
print(body)
question = body["question"]
matching_engine_response=matching_engine_search(question)
prompt=f"""
Follow exactly those 3 steps:
1. Read the context below and aggregrate this data
Context : {matching_engine_response}
2. Answer the question using only this context
3. Show the source for your answers
User Question: {question}
If you don't have any context and are unsure of the answer, reply that you don't know about this topic.
"""
model = TextGenerationModel.from_pretrained(TEXT_GENERATION_MODEL)
response = model.predict(
prompt,
temperature=0.2,
top_k=40,
top_p=.8,
max_output_tokens=1024,
)
print(f"Question: \n{question}")
print(f"Response: \n{response.text}")
outputs = response.text
return {"predictions": [{"response": response.text}] }#Prediction(outputs)
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0",port=8080)
</code></pre>
<p>Docker file</p>
<pre><code>FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim
RUN pip install --no-cache-dir google-cloud-aiplatform==1.25.0 langchain==0.0.187 xmltodict==0.13.0 unstructured==0.7.0 pdf2image==1.16.3 numpy==1.23.1 pydantic==1.10.8 typing-inspect==0.8.0 typing_extensions==4.5.0
COPY main.py ./main.py
</code></pre>
<p>Cloudbuild.yaml</p>
<pre><code>steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/<project name>/chatbot', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/<project name>/chatbot']
images:
- gcr.io/<project name>/chatbot
</code></pre>
<p>Code to query the model endpoint</p>
<pre><code>from google.cloud import aiplatform
aiplatform.init(project=PROJECT_ID,
location=REGION)
instances = [{"question": "<Some question>"}]
endpoint = aiplatform.Endpoint("projects/<project id>/locations/us-central1/endpoints/<model endpoint id>")
prediction = endpoint.predict(instances=instances)
print(prediction)
</code></pre>
<p>Error message</p>
<p><a href="https://i.sstatic.net/TXvr2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TXvr2.png" alt="enter image description here" /></a></p>
| <python><docker><google-cloud-platform><artificial-intelligence><google-cloud-vertex-ai> | 2023-07-13 04:25:07 | 1 | 497 | user1758952 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.