| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,231,965
| 1,783,793
|
Add hint of duration for each iteration in tqdm
|
<p>I have a list of tasks that each take a different amount of time. Let's say I have 3 tasks, with durations close to 1*x, 5*x, 10*x. My tqdm code is something like:</p>
<pre class="lang-py prettyprint-override"><code>from tqdm import tqdm
def create_task(n):
def fib(x):
if x == 1 or x == 0:
return 1
return fib(x - 1) + fib(x - 2)
return lambda: fib(n)
n = 1
tasks = [create_task(n), create_task(5*n), create_task(10*n)]
for task in tqdm(tasks):
task.run()
</code></pre>
<p>The problem is that tqdm thinks each iteration takes the same amount of time. As the first takes approximately 1/10 of the time, the ETA is unreliable.</p>
<p><strong>My question</strong>: is it possible to somehow add a hint to tqdm to inform how much each iteration takes compared to the first? Something like informing the duration weights of each iteration...</p>
<p>Thanks!</p>
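One way to get a weight-aware ETA (a sketch, not part of the original post): drive the bar manually, setting <code>total</code> to the sum of assumed per-iteration weights and calling <code>update(w)</code> after each task, so heavy tasks advance the bar proportionally more.

```python
from tqdm import tqdm

def fib(x):
    # naive recursion, as in the question
    return 1 if x <= 1 else fib(x - 1) + fib(x - 2)

tasks = [lambda: fib(10), lambda: fib(15), lambda: fib(18)]
weights = [1, 5, 10]  # assumed relative durations of the three tasks

# Give tqdm the total weight and advance by each task's weight,
# so the ETA reflects the declared cost of each iteration.
with tqdm(total=sum(weights)) as pbar:
    for task, w in zip(tasks, weights):
        task()
        pbar.update(w)
```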
|
<python><tqdm>
|
2023-05-11 22:21:29
| 2
| 2,634
|
fegemo
|
76,231,956
| 5,942,100
|
Separate numerical values from alphabetic characters within a column as well as expand dates
|
<p>I would like to separate the numerical values from the alphabetic characters within a column as well as to expand the quarters in a date.</p>
<p><strong>Data</strong></p>
<pre><code>state stat1 type stat2 qtr
NY up AAA01 Gr Q1 24
NY up AAA02 Re Q1 24
NY up BB01 Gr Q1 24
NY up DD01 Gr Q1 24
NY up DD02 Gr Q1 24
CA low AAA01 Re Q2 24
CA low DD01 Re Q2 24
CA low AAA01 Re Q2 24
CA low SSS01 Gr Q2 24
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>state stat1 type stat2 qtr
NY up AAA Gr Q1 2024
NY up AAA Re Q1 2024
NY up BB Gr Q1 2024
NY up DD Gr Q1 2024
NY up DD Gr Q1 2024
CA low AAA Re Q2 2024
CA low DD Re Q2 2024
CA low AAA Re Q2 2024
CA low SSS Gr Q2 2024
</code></pre>
<p><strong>Doing</strong></p>
<pre><code># Extract year from the 'qtr' column
df['year'] = df['qtr'].apply(lambda x: x.split(' ')[-1])
# Modify the 'type' column to include only the first three characters
df['type'] = df['type'].str[:3]
# Concatenate 'qtr' and 'year' columns
df['qtr'] = df['qtr'].apply(lambda x: x.split(' ')[0] + ' ' + x.split(' ')[-1])
</code></pre>
<p>However the output does not fully remove the numerical values from the characters within a column. The quarter transformation is not correct as well. Any suggestion is appreciated.</p>
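A sketch of one fix (using the sample's column names): strip all digits from <code>type</code> instead of slicing a fixed width, and expand the trailing two-digit year with a regex (assuming all years are 20xx).

```python
import pandas as pd

df = pd.DataFrame({
    "type": ["AAA01", "BB01", "SSS01"],
    "qtr": ["Q1 24", "Q2 24", "Q2 24"],
})

# Remove every digit, so 'BB01' -> 'BB' and 'AAA01' -> 'AAA' both work
# (a fixed str[:3] slice keeps the '0' of two-letter codes).
df["type"] = df["type"].str.replace(r"\d+", "", regex=True)

# Expand the trailing two-digit year: 'Q1 24' -> 'Q1 2024'.
df["qtr"] = df["qtr"].str.replace(r"(\d{2})$", r"20\1", regex=True)
```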
|
<python><pandas>
|
2023-05-11 22:19:10
| 1
| 4,428
|
Lynn
|
76,231,859
| 12,820,223
|
Chunking misses out some files
|
<p>I have 10000 files which I'm trying to split into 7 chunks and create simlinks to in a different place. For some reason, the last chunk ends up with exactly the same number of files as the other chunks, even though 10000 doesn't divide by 7 so it should have four extra. Why is this happening and how can I fix it?</p>
<pre><code>import os
StartDir = "OriginalDir"
FirstEndDir = "FirstSeventh"
SecondEndDir = "SecondSeventh"
ThirdEndDir = "ThirdSeventh"
FourthEndDir = "FourthSeventh"
FifthEndDir = "FifthSeventh"
SixthEndDir = "SixthSeventh"
SeventhEndDir = "SeventhSeventh"
AllFileList = os.listdir(StartDir)
SortedList = sorted(AllFileList, key=lambda x: int(x.split("_")[-1].split(".")[0]))
print SortedList[0]
n = len(SortedList)/7
FirstList = [SortedList[i:i + n] for i in xrange(0, n, 1)]
SecondList = [SortedList[i:i + n] for i in xrange(n, 2*n, 1)]
ThirdList = [SortedList[i:i + n] for i in xrange(2*n, 3*n, 1)]
FourthList = [SortedList[i:i + n] for i in xrange(3*n, 4*n, 1)]
FifthList = [SortedList[i:i + n] for i in xrange(4*n, 5*n, 1)]
SixthList = [SortedList[i:i + n] for i in xrange(5*n, 6*n, 1)]
SeventhList = [SortedList[i:i + n] for i in xrange(6*n, len(SortedList), 1)]
for ii in FirstList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(FirstEndDir,ii))
for ii in SecondList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(SecondEndDir,ii))
for ii in ThirdList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(ThirdEndDir,ii))
for ii in FourthList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(FourthEndDir,ii))
for ii in FifthList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(FifthEndDir,ii))
for ii in SixthList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(SixthEndDir,ii))
for ii in SeventhList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(SeventhEndDir,ii))
</code></pre>
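The likely cause (an editorial note, hedged): each comprehension builds a list of <code>n</code> slices and only <code>[0]</code> is used, and the seventh comprehension still takes slices of length <code>n</code>, so the four leftover files never land anywhere. A sketch of a chunker that spreads the remainder over the first chunks (plain Python, not the original script):

```python
def chunk(lst, k):
    # Split lst into k chunks whose sizes differ by at most one,
    # distributing the remainder over the first chunks.
    n, r = divmod(len(lst), k)
    chunks, start = [], 0
    for i in range(k):
        size = n + (1 if i < r else 0)
        chunks.append(lst[start:start + size])
        start += size
    return chunks

parts = chunk(list(range(10000)), 7)
# divmod(10000, 7) == (1428, 4): four chunks of 1429, three of 1428
```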
|
<python><python-2.7>
|
2023-05-11 21:58:59
| 1
| 411
|
Beth Long
|
76,231,846
| 1,750,360
|
Overriding not working when calling method using reference name
|
<p>My primary goal was to have a dictionary to select method to run. It was easy to do per <a href="https://stackoverflow.com/a/9168387/1750360">this</a> suggestion.
But now I am somehow blocked on using the inheritance concept to call child overridden method.</p>
<pre><code>class A():
def sum(self, id: int):
return 1
def multiply(self, id: int):
return 2
def process(self, type: str, id: int):
callable_method = self.__dispatcher.get(type)
return callable_method(self, id) # Runs
#callable_method(id) # Doesnt work with my object, says parameter mismatch
#self.callable_method(id) # Doesn't Work obviously as there is no callable method in self
__dispatcher = { "+": sum, "*": multiply }
class B(A):
def process(self, type: str, id: int):
return super(B, self).process(type, id)
def multiply(self, id: int):
return 3
# main class call:
ob = B()
ob.process("*", 0) # This is returning 2 instead of 3
</code></pre>
<p>The above overriding works perfectly well if I dont use the dictionary and method references and directly use the method in parent's process() like <code>self.multiply(id)</code></p>
<p>I might have an idea why this is not working but is there a way to make this work in Python?</p>
<p>Note:</p>
<ul>
<li>Dont want to use exec() , eval() due to security issues</li>
<li>I'm not trying to write a calculator, actual problem is related to software design</li>
<li>using Python 3.8</li>
</ul>
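The dictionary literal in the class body captures A's function objects at class-creation time, so lookups never go through the instance and B's override is bypassed. A sketch (a hypothetical rewrite, not the original design) that stores method names and resolves them with <code>getattr</code>, which follows the MRO at call time:

```python
class A:
    _dispatcher = {"+": "sum", "*": "multiply"}  # names, not function objects

    def sum(self, id: int):
        return 1

    def multiply(self, id: int):
        return 2

    def process(self, type: str, id: int):
        # getattr resolves through the instance, so a subclass
        # override is found at call time.
        return getattr(self, self._dispatcher[type])(id)

class B(A):
    def multiply(self, id: int):
        return 3
```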
|
<python>
|
2023-05-11 21:56:40
| 1
| 1,631
|
pxm
|
76,231,804
| 2,698,972
|
How to modularize code into multiple files with access to app decorators in FastAPI
|
<p>I want to distribute my FastAPI code across multiple files to make it clearer for the team, so instead of using the decorator I tried using APIRoute, but this way I can't use the FastAPI decorators in other files.
Let's say I have my <code>main.py</code> like this</p>
<pre><code>... import statements
rts = [
APIRoute("/{value}", methods=["get"], endpoint=read_item)
]
app = FastAPI(routes=rts)
uvicorn.run("main:app", host="0.0.0.0", port=80)
</code></pre>
<p>now I have defined <code>read_item</code> method in another file named <code>read_item.py</code> which i am importing in my main file.</p>
<pre><code>def read_item(value: int | float = None):
results = {"value": value}
return results
</code></pre>
<p>If I start using the @app decorator in the main file, it will become too large.
The problem is that I cannot access app decorators like <code>@app.on_event</code> in the <code>read_items.py</code> file, as app is declared in the <code>main</code> file while the method needs to be declared in the <code>read_item</code> file.
Is there any way around this in FastAPI?</p>
|
<python><python-3.x><fastapi>
|
2023-05-11 21:46:00
| 1
| 1,041
|
dev
|
76,231,768
| 1,118,236
|
Keras: changing the layer names of a saved model gives ValueError: Graph disconnected
|
<p>I would like to change layer names of a trained and saved Keras(2.3.1) model as follows: user_embedding_123 and item_embedding_284 are the only two Input layers of the model.</p>
<pre><code>from keras.models import load_model, save_model
from keras.layers import Input, Dense
from keras.models import Model
model = load_model('final_model.hdf5')
for layer in model.layers:
if layer.name == 'user_embedding_123':
layer.name = 'user_embedding'
if layer.name == 'item_embedding_284':
layer.name = 'item_embedding'
save_model(model, "final_model_renamed.hdf5")
model_renamed = load_model('final_model_renamed.hdf5') # Get an error
</code></pre>
<p>However I get an error:</p>
<pre><code>ValueError: Graph disconnected: cannot obtain value for tensor Tensor("user_embedding:0", shape=(?, 768), dtype=float32) at layer "user_embedding". The following previous layers were accessed without issue: []
</code></pre>
<p>The model summary was like:</p>
<pre><code>Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
user_embedding_123 (Input (None, 768) 0
__________________________________________________________________________________________________
item_embedding_284 (Input (None, 768) 0
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 1536) 0 user_embedding_123[0][0]
item_embedding_284[0][0]
__________________________________________________________________________________________________
hidden_0 (Dense) (None, 256) 393472 concatenate_1[0][0]
......
</code></pre>
<p>Now it is like:</p>
<pre><code>Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
user_embedding (InputLayer) (None, 768) 0
__________________________________________________________________________________________________
item_embedding (InputLayer (None, 768) 0
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 1536) 0 user_embedding[0][0]
item_embedding[0][0]
__________________________________________________________________________________________________
hidden_0 (Dense) (None, 256) 393472 concatenate_1[0][0]
</code></pre>
<p>The model summary looks good, but still get the error. Do I miss anything?</p>
|
<python><keras><tf.keras>
|
2023-05-11 21:37:37
| 1
| 4,041
|
Munichong
|
76,231,751
| 945,939
|
Python script using subprocess run has different output when run directly in Powershell than when run in Jenkins
|
<p>I have a python script that runs a git command <code>git log --pretty=format:%H branch -n 20</code></p>
<p>It runs the command using <code>process = subprocess.Popen(commandLine, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)</code></p>
<p>When we run the git command in powershell directly, or through the python script it works and returns the commits. When I run the powershell script through Jenkins the command returns the following with no commit data:</p>
<pre><code>09:12:00 H
09:12:00 H
09:12:00 H
09:12:00 H
09:12:00 H
09:12:00 H
09:12:00 H
09:12:00 H
</code></pre>
<p>Any ideas as to why this might happen and how to resolve it?</p>
<p>Edit: It looks like when running through Jenkins the % gets stripped out so the command ends up being <code>git log --pretty=format:H branch -n 20</code> and thus the failure. I'm not sure how to prevent this.</p>
<p>Thank you!</p>
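One way to sidestep the stripping (a sketch; the branch name is a placeholder): pass the command as an argument list with the default <code>shell=False</code>, so no shell or batch layer ever sees the <code>%</code>.

```python
import subprocess
import sys

# The real command would be passed like this (no shell=True):
# subprocess.run(["git", "log", "--pretty=format:%H", "branch", "-n", "20"],
#                capture_output=True, text=True)

# Demonstration that list arguments reach the child process verbatim:
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", "--pretty=format:%H"],
    capture_output=True, text=True,
).stdout.strip()
print(out)  # --pretty=format:%H
```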
|
<python><powershell><jenkins>
|
2023-05-11 21:33:38
| 1
| 599
|
James
|
76,231,449
| 9,161,607
|
How to transform the data based on the column name
|
<p>I am trying to transform my <code>df</code> where some of the column names are melted in wide-to-long form.</p>
<p>My <code>df</code> looks like this:</p>
<pre><code>id, university, # CA Holiday, # CA Hours, # WA Holiday, # WA Hours
1 abc 1 5 NA 3
2 def 5 NA 3 7
3 ijk 1 5 NA 3
</code></pre>
<p>here,</p>
<ul>
<li>My keys are: <code>[id, university]</code>. The Keys are not melted.</li>
<li>Output variables: <code>WA, CA</code>. Output variables are basically the short-form of the columns, which is broken down.</li>
</ul>
<p>My final transformed output <code>df</code> is arranged like this.</p>
<pre><code>id, university, Holiday_State, Holiday, Hours
1 abc CA 1 5
1 abc WA NA 3
2 def CA 5 NA
2 def WA 3 7
3 ijk CA 1 5
3 ijk WA NA 3
</code></pre>
<p>Could someone please provide assistance to help solve this problem?</p>
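One way to do this reshape (a sketch over the sample data; the intermediate names are mine): melt everything, split each column name into a state and a measure, then pivot the measures back out.

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2], "university": ["abc", "def"],
    "# CA Holiday": [1, 5], "# CA Hours": [5, None],
    "# WA Holiday": [None, 3], "# WA Hours": [3, 7],
})

long = df.melt(id_vars=["id", "university"], var_name="col", value_name="val")
# '# CA Holiday' -> state 'CA', measure 'Holiday'
long[["Holiday_State", "measure"]] = (
    long["col"].str.lstrip("# ").str.split(" ", expand=True)
)
out = (long.pivot(index=["id", "university", "Holiday_State"],
                  columns="measure", values="val")
           .reset_index())
```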
|
<python><pandas>
|
2023-05-11 20:41:41
| 2
| 2,793
|
floss
|
76,231,351
| 10,794,031
|
How to apply @enum.nonmember?
|
<p>I was trying to come up with a use case for the new <a href="https://docs.python.org/3/library/enum.html#enum.nonmember" rel="noreferrer"><code>@enum.nonmember</code></a> decorator in Python 3.11. The docs clearly mention it is a decorator meant to be applied to members.
However, when I tried literally decorating a member directly:</p>
<pre class="lang-py prettyprint-override"><code>import enum
class MyClass(enum.Enum):
A = 1
B = 2
@enum.nonmember
C = 3
</code></pre>
<p>this results in an error as:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\code.py", line 63, in runsource
code = self.compile(source, filename, symbol)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\codeop.py", line 153, in __call__
return _maybe_compile(self.compiler, source, filename, symbol)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\codeop.py", line 73, in _maybe_compile
return compiler(source, filename, symbol)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\codeop.py", line 118, in __call__
codeob = compile(source, filename, symbol, self.flags, True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<input>", line 9
C = 3
^
SyntaxError: invalid syntax
</code></pre>
<p>However, if I had declared an attribute as a property or a descriptor it also wouldn't become an Enum member... So how, when and why do you use <code>@enum.nonmember</code>?</p>
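The answer to the syntax error (a sketch; requires Python 3.11+): <code>nonmember</code> is applied as a call wrapped around the value, not stacked as a decorator line above the assignment.

```python
import enum
import sys

if sys.version_info >= (3, 11):
    class MyClass(enum.Enum):
        A = 1
        B = 2
        C = enum.nonmember(3)  # plain class attribute, not an Enum member

    # C is excluded from iteration and reads back as an ordinary value.
    assert [m.name for m in MyClass] == ["A", "B"]
    assert MyClass.C == 3
```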
|
<python><enums><python-decorators><python-3.11>
|
2023-05-11 20:20:07
| 1
| 13,254
|
bad_coder
|
76,231,321
| 5,563,584
|
Converting a numpy image array based on a boolean mask
|
<p>I have 2 numpy arrays. One is a 3D integer array (image RGB values) with dimensions (988, 790, 3) and the other is a mask boolean array with the same shape. I want to use the mask to convert False values in the image array to black and leave true values as is.</p>
<p>I tried <code>(image & mask)</code> which appears to convert the entire image to black (or white) instead of just the False locations. I want to avoid loops for efficiency so looking for a numpy solution.</p>
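A sketch of the numpy-only fix: <code>np.where</code> selects from the image where the mask is True and 0 elsewhere. (<code>image & mask</code> fails because the boolean mask upcasts to 0/1 and the bitwise AND keeps only the lowest bit of each pixel.)

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 5, 3), dtype=np.uint8)  # stand-in RGB
mask = np.zeros((4, 5, 3), dtype=bool)
mask[1:3, 1:4, :] = True

# Keep pixels where the mask is True, black out the rest.
black_out = np.where(mask, image, 0)
# equivalently: image * mask
```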
|
<python><python-3.x><numpy><numpy-ndarray>
|
2023-05-11 20:14:59
| 2
| 327
|
greenteam
|
76,230,996
| 11,956,484
|
Replace float values with strings based on two different conditions
|
<p>In my df I have a column of results:</p>
<pre><code>Results
755
1065
2733
40
116
241
345
176
516
5000
</code></pre>
<p>What I would like to do is replace all values <=55 with the string Low and all values >=3500 with the string High, retaining all other values. So the end result would be:</p>
<pre><code>Results
755
1065
2733
Low
116
241
345
176
516
High
</code></pre>
<p>The issue is if you do a simple <code>RM.loc[RM["Result"]<=55,"Result"]="Low"</code>, then it sets the entire column as strings and won't allow you to filter based on the second condition result>=3500. So I accomplished what I wanted by doing</p>
<pre class="lang-py prettyprint-override"><code>RM.loc[RM["Result"]<=55,"Result"]=-111
RM.loc[RM["Result"]>=3511,"Result"]=999
RM.loc[RM["Result"]==-111,"Result"]="Low"
RM.loc[RM["Result"]==999,"Result"]="High"
</code></pre>
<p>But there must be a concise, one-line way of doing this I just can't think of it?</p>
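A compact version (a sketch): compute both boolean masks from the original numeric column, then chain <code>Series.mask</code>, so the string written by one condition can't break the comparison in the other.

```python
import pandas as pd

RM = pd.DataFrame({"Result": [755, 1065, 2733, 40, 116, 241, 345, 176, 516, 5000]})

res = RM["Result"]
# Both comparisons use the untouched numeric series `res`.
RM["Result"] = res.astype(object).mask(res <= 55, "Low").mask(res >= 3500, "High")
```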
|
<python><pandas>
|
2023-05-11 19:21:11
| 3
| 716
|
Gingerhaze
|
76,230,979
| 6,591,677
|
OpenAI turbo-3.5 API returning error with complex prompts
|
<p>With simple prompts like "Hey" or "Tell me [this]" or "Summarize [this]", it works fine. But when I run more complex prompts like "List this and that and explain...", it breaks. There's no other change besides the complexity. The error message:
<code>APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))</code></p>
<p>I'm running this on JupyterLab.</p>
<p>I'd really appreciate help.</p>
<pre class="lang-py prettyprint-override"><code>def get_completion(prompt, model="gpt-3.5-turbo"):
messages = [{"role": "user", "content": prompt}]
response = openai.ChatCompletion.create(
model=model,
messages=messages,
temperature=0,
)
return response.choices[0].message["content"]
</code></pre>
|
<python><python-3.x><openai-api><chatgpt-api>
|
2023-05-11 19:19:02
| 0
| 479
|
superbot
|
76,230,931
| 219,153
|
What is the way to mutate 2D array with Python multiprocessing?
|
<p>I was expecting that array <code>a</code> will be mutated in this Python 3.11 script:</p>
<pre><code>import multiprocessing as mp
def incr(a):
for i in range(len(a)):
a[i] += 1
if __name__ == "__main__":
a = mp.Manager().list([[0, 1, 2], [3, 4, 5]])
with mp.Pool(2) as pool:
pool.map(incr, [a[0], a[1]])
print(a)
</code></pre>
<p>equivalent to:</p>
<pre><code> a = [[0, 1, 2], [3, 4, 5]]
incr(a[0])
incr(a[1])
</code></pre>
<p>but the result is:</p>
<pre><code>[[0, 1, 2], [3, 4, 5]]
</code></pre>
<p>How to mutate each row by a different process?</p>
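The snag (an editorial note, hedged) is that indexing a Manager list as <code>a[0]</code> returns a plain local copy of the inner list, so each worker increments its own copy. A sketch that avoids shared mutation entirely by returning new rows from the pool:

```python
import multiprocessing as mp

def incr(row):
    # Return a new row; mutating the argument would only change
    # the worker process's local copy.
    return [x + 1 for x in row]

if __name__ == "__main__":
    a = [[0, 1, 2], [3, 4, 5]]
    with mp.Pool(2) as pool:
        a = pool.map(incr, a)  # collect the incremented rows
    print(a)  # [[1, 2, 3], [4, 5, 6]]
```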
|
<python><python-multiprocessing>
|
2023-05-11 19:11:12
| 1
| 8,585
|
Paul Jurczak
|
76,230,846
| 8,618,242
|
How to increase the accuracy of my model fitting using Scipy Optimization
|
<p>I want to make a curve fitting of the following data:
<a href="https://github.com/vuillaut/datascience_intro/blob/main/Scipy/data/munich_temperatures_average.txt" rel="nofollow noreferrer">munich_temperatures_average.txt</a></p>
<p>I have tried:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
def func(temp, a, b, c):
return a * np.cos(2 * np.pi * temp + b) + c
date, temperature = np.loadtxt('munich_temperatures_average.txt', unpack=True)
result = optimize.curve_fit(func, date, temperature)
plt.plot(date, temperature, '.')
plt.plot(date, func(date, result[0][0], result[0][1], result[0][2]), c='red', zorder=10)
plt.ylim([-20, 30])
plt.xlabel("Year", fontsize=18)
plt.ylabel("Temperature", fontsize=18)
plt.show()
</code></pre>
<p>But as you can see in the output image, the <strong>oscillation magnitude</strong> of the model after fitting seems to be <strong>less</strong> than the actual, can you please tell me how can I make the fitting more accurate? thanks in advance.</p>
<p><a href="https://i.sstatic.net/RFz17.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RFz17.png" alt="enter image description here" /></a></p>
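With no <code>p0</code>, curve_fit starts every parameter at 1, and a cosine fit can stall in a poor local minimum with a shrunken amplitude. A sketch on synthetic stand-in data (the real file is not reproduced here) showing how rough, data-driven initial guesses anchor the fit:

```python
import numpy as np
from scipy import optimize

def func(t, a, b, c):
    return a * np.cos(2 * np.pi * t + b) + c

# Synthetic stand-in for the temperature data: amplitude 10, offset 9, noise.
rng = np.random.default_rng(1)
t = np.linspace(2008, 2012, 400)
y = func(t, 10, 0.5, 9) + rng.normal(0, 1, t.size)

# Rough starting values: half the data range for the amplitude,
# the mean for the offset.
p0 = [(y.max() - y.min()) / 2, 0, y.mean()]
popt, _ = optimize.curve_fit(func, t, y, p0=p0)
```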
|
<python><scipy><scipy-optimize><model-fitting>
|
2023-05-11 18:58:10
| 1
| 4,115
|
Bilal
|
76,230,831
| 964,143
|
Can you use a custom script to activate a python environment in vs code?
|
<p>Is it possible to set a custom activate script to active a Python environment, on terminal launch, in VS Code.</p>
<p>Right now, it uses the normal activate script, e.g. <code>/Users/..../my_python/bin/activate</code></p>
<p>It would be nice to be able to run another script that also does other things as part of terminal set up, as well as activate the Python environment.</p>
<p>I know that VS Code does support setting environment variables for a python terminal, but this would only solve part of the problem.</p>
|
<python><visual-studio-code>
|
2023-05-11 18:56:01
| 1
| 674
|
BrainPermafrost
|
76,230,795
| 918,866
|
Replace characters with a count of characters
|
<p>I have <code>XXXXXXX99</code> and I'd like to replace it with <code>X(7)99</code> because there are 7 <code>X</code> characters followed by <code>99</code>.</p>
<pre><code>>>> import re
>>> s = "XXXXXXX99"
>>> re.sub(r"(X+)", "X(the count)", s)
'X(the count)99'
</code></pre>
<p>Both of these:</p>
<pre><code>>>> re.sub(r"(X+)", "X(" + len(\1) + ")", s)
>>> re.sub(r"(X+)", "X(" + len(\\1) + ")", s)
</code></pre>
<p>give:
<code>SyntaxError: unexpected character after line continuation character</code></p>
<p>In the general case the string could be more complicated, such as <code>XXXX99_999999XX99.99</code>. I'm going to want to focus on repeats greater than, say 5, meaning this example would become <code>XXXX99_9(6)XX99.99</code>.</p>
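<code>re.sub</code> accepts a function as the replacement, which is what the length computation needs (<code>len(\1)</code> is not valid syntax inside a pattern or template string). A sketch with an adjustable minimum run length, covering both examples:

```python
import re

def squeeze(s, min_run=6):
    # Replace any run of min_run or more identical characters with
    # 'char(count)'; a function replacement can inspect the match object.
    pattern = r"(.)\1{%d,}" % (min_run - 1)
    return re.sub(pattern, lambda m: f"{m.group(1)}({len(m.group(0))})", s)

print(squeeze("XXXXXXX99"))            # X(7)99
print(squeeze("XXXX99_999999XX99.99")) # XXXX99_9(6)XX99.99
```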
|
<python><regex>
|
2023-05-11 18:50:29
| 4
| 1,701
|
jsf80238
|
76,230,597
| 1,631,306
|
Converting RGB to HSV loses the boundary
|
<p>I have following image
<a href="https://i.sstatic.net/ySbFl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ySbFl.png" alt="enter image description here" /></a></p>
<p>and I want to generate the mask for the area under green color. My code is following:</p>
<pre><code> img = cv2.imread(path + file, cv2.IMREAD_COLOR)
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([10, 180, 180])
upper = np.array([130, 255, 255])
thresh= cv2.inRange(hsv_img, lower, upper)
masked = cv2.bitwise_and(img,img, mask=thresh)
#plt.imshow(masked)
# get external contours
contours = cv2.findContours(thresh , cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# draw white filled contours on black background
mask = np.zeros_like(img)
for cntr in contours:
cv2.drawContours(mask, [cntr], 0, (255,255,255), -1)
# invert so black regions on white background if desired
mask = 255 - mask
# show results
#plt.imshow(img)
plt.imshow( mask)
</code></pre>
<p>My output is <a href="https://i.sstatic.net/iOpmZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iOpmZ.png" alt="enter image description here" /></a>. The boundary is disconnected on the left and that's why the contour is not filled. When I check the hsv_img, I get the following
<a href="https://i.sstatic.net/ofh3h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ofh3h.png" alt="enter image description here" /></a></p>
<p>So, something in the RGB to HSV conversion is causing the break in the green boundary. Any idea how to fix it? All I care about is generating the masks.</p>
|
<python><image><opencv><image-processing>
|
2023-05-11 18:20:49
| 1
| 4,501
|
user1631306
|
76,230,596
| 9,983,652
|
no minor tick setting for plotly go.Figure?
|
<p>when using plotly express, we can set minor tick like below post</p>
<p><a href="https://stackoverflow.com/questions/73756610/how-to-set-minor-ticks-in-plotly">How to set minor ticks in plotly</a></p>
<p>However, when using go.Figure(), I tried like in the manual below and got an error, so how do I set minor ticks for a date axis in go.Figure()? Thanks</p>
<p><a href="https://plotly.com/python/time-series/" rel="nofollow noreferrer">https://plotly.com/python/time-series/</a></p>
<pre class="lang-py prettyprint-override"><code>fig.update_xaxes(autorange=False,range=[start_date_dts_heatmap_prod,end_date_dts_heatmap_prod_1],
tickfont=dict(size=tick_font_size),
ticklabelmode= "period",
tickcolor= "black",
ticklen=10,
minor=dict(
ticklen=4,
dtick=7*24*60*60*1000,
tick0=start_date_dts_heatmap_prod,
griddash='dot',
gridcolor='white'),
row=3,col=1
) #autorange=False,range=y_range,row=2,col=1
</code></pre>
<pre><code>ValueError: Invalid property specified for object of type plotly.graph_objs.layout.XAxis: 'minor'
Did you mean "mirror"?
</code></pre>
|
<python><plotly>
|
2023-05-11 18:20:35
| 1
| 4,338
|
roudan
|
76,230,417
| 10,284,437
|
Python3, Selenium4 how to open a new 'TAB' from particular node?
|
<p>In Selenuim4 I have an input node that do a POST request when you click on it.</p>
<p>By default, it change the current window.</p>
<p>I prefer to open a new TAB to process this page and latter, return to main page to avoid</p>
<pre><code>selenium.common.exceptions.StaleElementReferenceException:
Message: stale element reference: stale element not found
</code></pre>
<p>I searched tons of examples, but they are for old Selenium.</p>
<p>Moreover, seems like there's a new TAB feature.</p>
<p>From: <a href="https://blog.testproject.io/2020/06/30/selenium-4-new-window-tab-screenshots/" rel="nofollow noreferrer">Java selenium-4-new-window-tab-screenshots</a></p>
<pre><code>WebDriver newTab = driver.switchTo().newWindow(WindowType.TAB);
</code></pre>
<p>How can I do it in Python with a particular node?</p>
<p>Retrieve product<strong>s</strong> input element<strong>s</strong></p>
<pre><code>products = driver.find_elements(By.XPATH, '//input[@attr="foobar"]')
for product in products:
# FIXME need new tab opened to retrieve one product
product("new window").click() # this is wrong, but you know what I mean
</code></pre>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2023-05-11 17:56:27
| 1
| 731
|
Mévatlavé Kraspek
|
76,230,297
| 482,819
|
Necessity of TypeAlias in Python
|
<p>I am trying to understand in which cases <code>TypeAlias</code> is necessary in Python. Consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>class Internal:
def f(self):
print("Hello!")
class Container:
internal = Internal
RenamedInternal = Internal
ContainerInternal = Container.internal
def f1() -> Internal:
obj = Internal()
obj.f()
return obj
def f2() -> RenamedInternal:
obj = RenamedInternal()
obj.f()
return obj
def f3() -> ContainerInternal:
obj = ContainerInternal()
obj.f()
return obj
f1()
f2()
f3()
</code></pre>
<p><code>mypy</code> reports</p>
<pre><code>example.py:24: error: Variable "example2.ContainerInternal" is not valid as a type [valid-type]
example.py:24: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases
</code></pre>
<p>That can be fixed by</p>
<pre class="lang-py prettyprint-override"><code>class Container:
internal: TypeAlias = Internal
</code></pre>
<p>It is not clear to me why it is necessary for <code>ContainerInternal</code> but not for <code>RenamedInternal</code>. Also does adding type alias has a functional effect?</p>
|
<python><class><alias><typing>
|
2023-05-11 17:40:02
| 0
| 6,143
|
Hernan
|
76,230,187
| 4,377,095
|
In Pandas DataFrame how can I increment the values per row if it matches a character
|
<p>This is somewhat similar to <a href="https://stackoverflow.com/questions/76211854/in-pandas-how-can-i-increment-values-correctly-for-each-row-and-column/76212089#76212089">this</a> question but a little bit complex.</p>
<p>Assuming I have this table:</p>
<pre><code>data = {'column1': ['A1', 'A1', 'A1', 'A1'],
'column2': ['A9', 'A1', 'A8', 'A1'],
'column3': ['D1', 'D1', 'D1', 'D1'],
'column4': ['A6', 'A2', 'A3', 'A4'],
'column5': ['H1', 'H1', 'H1', 'H1'],
'column6': ['A4', '', '', 'A3'],
'column7': ['A5', '', '', 'A9']}
df = pd.DataFrame(data)
+---------+---------+---------+---------+---------+---------+---------+
| column1 | column2 | column3 | column4 | column5 | column6 | column7 |
+---------+---------+---------+---------+---------+---------+---------+
| A1 | A9 | D1 | A6 | H1 | A4 | A5 |
| A1 | A1 | D1 | A2 | H1 | | |
| A1 | A8 | D1 | A3 | H1 | | |
| A1 | A1 | D1 | A4 | H1 | A3 | A9 |
+---------+---------+---------+---------+---------+---------+---------+
</code></pre>
<p>my goal here is to reset the number counterpart of all values containing "A" per row, starting with A1. if "A1" re-occurs on the same row, move to the next cell. Also, values that is not "A" and blanks should be ignored.</p>
<pre><code>+---------+---------+---------+---------+---------+---------+---------+
| column1 | column2 | column3 | column4 | column5 | column6 | column7 |
+---------+---------+---------+---------+---------+---------+---------+
| A1 | A2 | D1 | A3 | H1 | A4 | A5 |
| A1 | A1 | D1 | A2 | H1 | | |
| A1 | A2 | D1 | A3 | H1 | | |
| A1 | A1 | D1 | A2 | H1 | A3 | A4 |
+---------+---------+---------+---------+---------+---------+---------+
</code></pre>
|
<python><pandas><dataframe>
|
2023-05-11 17:26:06
| 2
| 537
|
Led
|
76,230,176
| 5,069,105
|
Load Python standard module from Modular Mojo
|
<p>From the available information and docs, Mojo claims to be fully compatible with Python syntax and modules.</p>
<p>However, from the <a href="https://www.modular.com/mojo" rel="nofollow noreferrer">Playground notebook</a>, I can't seem to be able to load any module from Python:</p>
<pre><code>In:
import os
import time
import sys
import numpy
Out:
error: Expression [1]:5:8: unable to locate module 'os'
import os
^
error: Expression [1]:7:8: unable to locate module 'time'
import time
^
error: Expression [1]:6:8: unable to locate module 'sys'
import sys
^
error: Expression [1]:8:8: unable to locate module 'numpy'
import numpy
^
</code></pre>
|
<python><mojolang>
|
2023-05-11 17:24:46
| 1
| 1,789
|
Raf
|
76,230,150
| 19,675,781
|
How to display intersection values instead of distinct values in Upset plot
|
<p>I tried to create an upset plot and display intersection among different sets. <br />
But my upset plot is displaying distinct value counts among sets. <br />
How do I change it to intersections instead of distinct counts?</p>
<p>This is my code:</p>
<pre><code>mammals = ['Cat', 'Dog', 'Horse', 'Sheep', 'Pig', 'Cattle', 'Rhinoceros', 'Moose']
herbivores = ['Horse', 'Sheep', 'Cattle', 'Moose', 'Rhinoceros']
domesticated = ['Dog', 'Chicken', 'Horse', 'Sheep', 'Pig', 'Cattle', 'Duck']
from upsetplot import from_contents
animals = from_contents({'mammal': mammals, 'herbivore': herbivores, 'domesticated': domesticated})
from upsetplot import UpSet
ax_dict = UpSet(animals, subset_size='count',show_counts=True).plot()
</code></pre>
<p>This is my output:</p>
<p><a href="https://i.sstatic.net/UQQre.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UQQre.png" alt="Output" /></a></p>
<p>The actual intersection between herbivores and mammals is 5 while my plot shows 2.<br />
Can anyone help me how to show intersections in upset plots?</p>
|
<python><python-3.x><visualization><upsetplot>
|
2023-05-11 17:21:27
| 1
| 357
|
Yash
|
76,230,112
| 12,820,223
|
Fast copying of files with two different names
|
<p>Edit: I've changed the names of the data so that they're all the same
Edit: I decided to use a symlink as suggested in the comments, but now <code>SeventhList</code> is too short, it has 1428 elements in it when it should have 1432 elements. Why is this and how can I change it? <code>len(AllFileList)</code> is 10000, which is correct...</p>
<p>I have a list of files <code>MC_File0.root... MC_File9999.root</code>.</p>
<p>They're all inside a directory <code>MCAll</code> and I want to split them into 7 equal chunks and copy them to their respective directories <code>MCFirstSeventh, MCSecondSeventh, ...</code></p>
<p>Following advice from the first commenter I have a better chunking system than before, but the copying is still slow. Is there a way to speed it up or is it hardware limited?</p>
<pre><code>StartDir = "/data05/padme/beth/MC_Run660/RawMCFiles"
FirstEndDir = "/data05/padme/beth/MC_Run660_FirstSeventh/RawMCFiles"
SecondEndDir = "/data05/padme/beth/MC_Run660_SecondSeventh/RawMCFiles"
ThirdEndDir = "/data05/padme/beth/MC_Run660_ThirdSeventh/RawMCFiles"
FourthEndDir = "/data05/padme/beth/MC_Run660_FourthSeventh/RawMCFiles"
FifthEndDir = "/data05/padme/beth/MC_Run660_FifthSeventh/RawMCFiles"
SixthEndDir = "/data05/padme/beth/MC_Run660_SixthSeventh/RawMCFiles"
SeventhEndDir = "/data05/padme/beth/MC_Run660_SeventhSeventh/RawMCFiles"
AllFileList = os.listdir(StartDir)
#sorted(AllFileList, key=lambda x: int(x.split("_")[-1].split(".")[0]))
n = len(AllFileList)/7
FirstList = [AllFileList[i:i + n] for i in xrange(0, n, 1)]
SecondList = [AllFileList[i:i + n] for i in xrange(n, 2*n, 1)]
ThirdList = [AllFileList[i:i + n] for i in xrange(2*n, 3*n, 1)]
FourthList = [AllFileList[i:i + n] for i in xrange(3*n, 4*n, 1)]
FifthList = [AllFileList[i:i + n] for i in xrange(4*n, 5*n, 1)]
SixthList = [AllFileList[i:i + n] for i in xrange(5*n, 6*n, 1)]
SeventhList = [AllFileList[i:i + n] for i in xrange(6*n, len(AllFileList), 1)]
print 6*n
for ii in FirstList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(FirstEndDir,ii))
for ii in SecondList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(SecondEndDir,ii))
for ii in ThirdList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(ThirdEndDir,ii))
for ii in FourthList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(FourthEndDir,ii))
for ii in FifthList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(FifthEndDir,ii))
for ii in SixthList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(SixthEndDir,ii))
for ii in SeventhList[0]:
print ii
os.symlink(os.path.join(StartDir,ii),os.path.join(SeventhEndDir,ii))
</code></pre>
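<p>For what it's worth, the short <code>SeventhList</code> comes from the truncating division <code>len(AllFileList)/7</code>: 10000 // 7 is 1428, so the last 4 files fall outside every slice. A hedged Python 3 sketch of chunking that never drops an element (each symlink loop would then iterate one chunk directly):</p>

```python
def split_into_chunks(items, k):
    # Split items into k nearly-equal contiguous chunks; the remainder is
    # spread over the first few chunks so no element is ever dropped.
    n, rem = divmod(len(items), k)
    chunks, start = [], 0
    for i in range(k):
        end = start + n + (1 if i < rem else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

files = ["MC_File%d.root" % i for i in range(10000)]
chunks = split_into_chunks(files, 7)
```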
|
<python><optimization><copy>
|
2023-05-11 17:15:49
| 0
| 411
|
Beth Long
|
76,230,067
| 13,132,640
|
Numpy conditional but only in some indices?
|
<p>I have an array of shape [10,200,50]. I would like to replace all values which are:</p>
<ol>
<li>Greater than 33</li>
<li>Fall within a set of indices on the third axis: indices=[0,1,15,20,19]</li>
</ol>
<p>So - any value which has any of those indices on axis 3 AND is greater than 33 will be replaced by the number 22.</p>
<p>I know this is quite trivial, but I'm having trouble searching for the right terms to find the solution. My instinct was to do: arr[arr[:,:,indices]==33]=22, but this doesn't work because the shape of the internal array does not match the shape of the outer.</p>
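<p>A hedged sketch of one way to do this, shown on the stated shape: fancy indexing on the third axis returns a copy, which can be modified and then assigned back into the original array.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
arr = rng.integers(0, 60, size=(10, 200, 50))
indices = [0, 1, 15, 20, 19]

# Fancy indexing returns a copy; modify it, then write the slice back.
sub = arr[:, :, indices]
sub[sub > 33] = 22
arr[:, :, indices] = sub
```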
|
<python><numpy>
|
2023-05-11 17:08:02
| 1
| 379
|
user13132640
|
76,230,057
| 7,179,546
|
Use keyFile in mongo5
|
<p>I'm trying to set up Mongo5 with docker-compose and it's failing for me. I'm using pymongo 3.13.0 and motor 2.5 in the application that attaches to this Mongo</p>
<p>My Dockerfile:</p>
<pre><code>FROM mongo:5.0
RUN mkdir /keys
COPY src/setup/replica.key /keys/replica.key
RUN chown 999:999 /keys/replica.key
RUN chmod 400 /keys/replica.key
</code></pre>
<p>The mongo section of my docker-compose:</p>
<pre><code>mongo:
build:
args:
REGISTRY_PATH: ${REGISTRY_PATH}
IMAGE_TAG: ${IMAGE_TAG:-latest}
context: .
dockerfile: build/mongo/Dockerfile
image: ${REGISTRY_PATH}mongo:${IMAGE_TAG:-latest}
environment:
MONGO_INITDB_DATABASE: <database>
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: <password>
MONGO_USERNAME: <user>
MONGO_PASSWORD: <password>
MONGO_HOSTNAME: mongo
BRANCH_NAME: ${BRANCH_NAME:-dev}
command: "--auth --oplogSize 32 --quiet --replSet -nameReplicaSet --keyFile /keys/replica.key --logpath /dev/stdout"
ports:
- 27017:27017
volumes:
- ${MINIKUBE_REPO_PATH}/src/setup/replica.key:/keys/replica.key:ro
- mongodb:/data/db
- ${MINIKUBE_REPO_PATH}/src/setup/mongo-initdb.d:/docker-entrypoint-initdb.d/
</code></pre>
<p>After <code>docker-compose up -d mongo</code></p>
<p>I get the error <code>pymongo.errors.OperationFailure: The $changeStream stage is only supported on replica sets, full error: {'ok': 0.0, 'errmsg': 'The $changeStream stage is only supported on replica sets', 'code': 40573, 'codeName': 'Location40573'}</code></p>
<p>However, it looks to me like I'm configuring a replica set, so in my opinion this error shouldn't show up.</p>
<p>What am I missing here?</p>
|
<python><mongodb>
|
2023-05-11 17:06:54
| 0
| 737
|
Carabes
|
76,230,048
| 5,179,643
|
Pandas: selecting specific rows and specific columns using .loc() and/or .iloc()
|
<p>I have a Pandas dataframe that looks like this:</p>
<pre><code>df = pd.DataFrame ({
'id': [1, 17, 19, 17, 22, 3, 0, 3],
'color': ['Green', 'Blue', 'Orange', 'Yellow', 'White', 'Silver', 'Purple', 'Black'],
'shape' : ['Circle', 'Square', 'Circle', 'Triangle', 'Rectangle', 'Circle', 'Square', 'Triangle'],
'person' : ['Sally', 'Bob', 'Tim', 'Sue', 'Bill', 'Diane', 'Brian', 'Sandy']
})
df
id color shape person
0 1 Green Circle Sally
1 17 Blue Square Bob
2 19 Orange Circle Tim
3 17 Yellow Triangle Sue
4 22 White Rectangle Bill
5 3 Silver Circle Diane
6 0 Purple Square Brian
7 3 Black Triangle Sandy
</code></pre>
<p>I set the index to <code>color</code>:</p>
<p><code>df.set_index ('color', inplace = True )</code></p>
<pre><code> id shape person
color
Green 1 Circle Sally
Blue 17 Square Bob
Orange 19 Circle Tim
Yellow 17 Triangle Sue
White 22 Rectangle Bill
Silver 3 Circle Diane
Purple 0 Square Brian
Black 3 Triangle Sandy
</code></pre>
<p>I'd like to select only the columns <code>id</code> and <code>person</code> and only the indices 2 and 3. To do so, I'm using the following:</p>
<pre><code>new_df = df.loc[:, ['id', 'person']][2:4]
new_df
id person
color
Orange 19 Tim
Yellow 17 Sue
</code></pre>
<p>It feels like this might not be the most 'elegant' approach. Instead of tacking on <code>[2:4]</code> to slice the rows, is there a way to effectively <em><strong>combine</strong></em> <code>.loc</code> (to get the columns) and <code>.iloc</code> (to get the rows)?</p>
<p>Thanks!</p>
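<p>One hedged way to combine the two accessors, rebuilding the example frame from above: chain <code>.iloc</code> for the positional rows with <code>.loc</code> for the label-based columns (each step returns a DataFrame, so the order can also be swapped), or translate the labels to positions for a single <code>.iloc</code> call.</p>

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 17, 19, 17, 22, 3, 0, 3],
    'color': ['Green', 'Blue', 'Orange', 'Yellow', 'White', 'Silver', 'Purple', 'Black'],
    'shape': ['Circle', 'Square', 'Circle', 'Triangle', 'Rectangle', 'Circle', 'Square', 'Triangle'],
    'person': ['Sally', 'Bob', 'Tim', 'Sue', 'Bill', 'Diane', 'Brian', 'Sandy'],
}).set_index('color')

# Positional rows first, label-based columns second:
new_df = df.iloc[2:4].loc[:, ['id', 'person']]

# Or a single .iloc call, translating column labels to positions:
cols = [df.columns.get_loc(c) for c in ['id', 'person']]
new_df2 = df.iloc[2:4, cols]
```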
|
<python><pandas>
|
2023-05-11 17:06:17
| 2
| 2,533
|
equanimity
|
76,229,976
| 1,056,563
|
How to use check if a pytest patched method were invoked [and with which parameters]?
|
<p>How can a <em>patch()</em> object be used to check if the method it represents has been invoked?
The following is pseudocode for the intended task:</p>
<pre><code> from unittest.mock import patch
# "filter" is a method on BlobFilter class
mpatch = patch('platform.blob_reader.BlobFilter.filter')
# other logic that should invoke that method..
mpatch.assert_called_once() # not supported this way, how to do it?
</code></pre>
<p>I want to use <em>patch</em> in particular because the invocation is deeply nested in our class hierarchy and trying to send/inject mocks down that hierarchy will disrupt the codebase substantially due to adding more parameters to existing methods.</p>
<p>An additional goal would be to ensure the method were invoked with the correct data types. These methods exist on mocks, but I can't send in mocks to this class hierarchy at the present time.</p>
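<p>A hedged sketch on a stand-in class (the real target string would be <code>'platform.blob_reader.BlobFilter.filter'</code>): using <code>patch</code> as a context manager yields the <code>MagicMock</code> that temporarily replaces the method, and that mock carries the assertion helpers, including argument checks.</p>

```python
from unittest.mock import patch

class BlobFilter:                      # stand-in for the real class
    def filter(self, data):
        return [d for d in data if d]

def deeply_nested_logic(f):            # code under test that calls it somewhere inside
    return f.filter([1, 0, 2])

with patch(f"{__name__}.BlobFilter.filter", return_value=[1, 2]) as mock_filter:
    result = deeply_nested_logic(BlobFilter())

mock_filter.assert_called_once()
# Checking the arguments (and hence their types) it was invoked with:
mock_filter.assert_called_once_with([1, 0, 2])
```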
|
<python><mocking><pytest>
|
2023-05-11 16:55:18
| 1
| 63,891
|
WestCoastProjects
|
76,229,957
| 578,721
|
How to increase a number within a lambda in Python?
|
<p>Is it possible to increase a number object like <code>int</code> within a lambda function?</p>
<p>Imagine having a peek function like:</p>
<pre class="lang-py prettyprint-override"><code>def _peek(cb, iter):
for i in iter:
cb(i)
</code></pre>
<p>How can I peek and add these values to a sum like in this following simple example:</p>
<pre class="lang-py prettyprint-override"><code>numbers = (1, 2, 3)
s = 0
# Doesn't work, because __add__ doesn't add inline
_peek(s.__add__, numbers)
# Doesn't work, because s is outside of scope (syntax error)
_peek(lambda x: s += x, numbers)
# Does work, but requires an additional function
def _sum(var):
nonlocal s
s += var
_peek(_sum, numbers)
# Does work, but reduces numbers
sum = reduce(lambda x, y: x+y, numbers)
</code></pre>
<p>Now this is a real world example:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Vote:
count = 0
def add_count(self, count: int):
self.count += count
vote = Vote()
# Doesn't work
_peek(lambda x: vote.count += x, map(lambda x: x['count'], data))
# Does work, but requires additional function
_peek(vote.add_count, map(lambda x: x['count'], data))
</code></pre>
<p>In Java, I can write easily:</p>
<pre class="lang-java prettyprint-override"><code>@Test
public void test_numbers() {
class Vote {
int count = 0;
}
var vote = new Vote();
var count = Stream.of(1,2,3).peek(i -> vote.count+=i).filter(i -> i > 1).count();
assert vote.count == 6;
assert count == 2;
}
</code></pre>
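<p>One hedged workaround in the spirit of the Java snippet: since a <code>lambda</code> body cannot contain statements like <code>+=</code>, mutate a container (or the dataclass above) through an expression instead.</p>

```python
def _peek(cb, it):
    for i in it:
        cb(i)

numbers = (1, 2, 3)

# A one-entry dict is a mutable cell an expression-only lambda can update;
# dict.update(...) is an expression, so it is legal inside the lambda body.
acc = {'s': 0}
_peek(lambda x: acc.update(s=acc['s'] + x), numbers)
```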
|
<python><lambda>
|
2023-05-11 16:52:44
| 2
| 450
|
Mike Reiche
|
76,229,874
| 3,247,006
|
How to save "Shipping address" to fill it automatically every time I go to Stripe Checkout with Django?
|
<p>I set <a href="https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-shipping_address_collection" rel="nofollow noreferrer">shipping_address_collection</a> in <strong>Django View</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
import stripe
from django.shortcuts import redirect
def test(request):
customer = stripe.Customer.search(query="email:'test@gmail.com'", limit=1)
checkout_session = stripe.checkout.Session.create(
customer=customer["data"][0]["id"] if customer['data'] else None,
line_items=[
{
"price_data": {
"currency": "USD",
"unit_amount_decimal": 1000,
"product_data": {
"name": "T-shirt",
},
},
"quantity": 2,
}
],
payment_method_options={
"card": {
"setup_future_usage": "on_session",
},
}, # ↓ ↓ ↓ Here ↓ ↓ ↓
shipping_address_collection={
"allowed_countries": ['US']
},
mode='payment',
success_url='http://localhost:8000',
cancel_url='http://localhost:8000'
)
return redirect(checkout_session.url, code=303)
</code></pre>
<p>But, I need to fill <strong>Shipping address</strong> of <strong>Shipping information</strong> manually every time I go to <strong>Stripe Checkout</strong> because it is not saved while <strong>Payment details</strong> is saved with <a href="https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-payment_method_options-card-setup_future_usage" rel="nofollow noreferrer">setup_future_usage</a> as shown below:</p>
<p><a href="https://i.sstatic.net/zqCys.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zqCys.png" alt="enter image description here" /></a></p>
<p>So now, how can I save <strong>Shipping address</strong> to fill it automatically every time I go to <strong>Stripe Checkout</strong>?</p>
<p>Is there like <code>setup_future_usage</code> for <strong>Shipping address</strong> to fill it automatically?</p>
|
<python><django><django-views><stripe-payments><python-stripe-api>
|
2023-05-11 16:42:32
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
76,229,805
| 817,659
|
Pandas DataFrame arithmetic
|
<p>Say you have a <code>DataFrame</code> with n rows, in this case 3 rows:</p>
<pre><code> High Low
0 100 90
1 110 95
2 105 80
</code></pre>
<p>I would like to compute the biggest difference between a row, and it's subsequent rows as follows.</p>
<p>So for example, the first computation is the High of the first row, subtracting the low of all rows: 100 - 90, 100 - 95 and 100 - 80. The biggest of those differences is 20. Move to the next row, 110 - 95, 110 - 80. The biggest of those differences is 30. Move on to the final row, 105 - 80 = 25. So the biggest of all the differences is 30, so I want to return the tuple (110, 80), or equivalently, where the <code>loc</code> of the <code>DataFrame</code> are, (1, 2)</p>
<p>I know I can use two <code>for loop</code>. Is there a <code>pythonic</code> or <code>pandas-onic</code>, I guess <code>functional</code> way to do this?</p>
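<p>A hedged loop-free sketch: the only Low that matters for row <code>i</code> is the minimum Low from row <code>i</code> onward, which a reversed <code>cummin</code> computes in one pass; the row pair can then be recovered from the winning difference.</p>

```python
import pandas as pd

df = pd.DataFrame({'High': [100, 110, 105], 'Low': [90, 95, 80]})

# Minimum Low from each row to the end of the frame (a reversed cummin):
future_min = df['Low'][::-1].cummin()[::-1]
diff = df['High'] - future_min            # best spread achievable from each row

hi_idx = diff.idxmax()                    # row supplying the High
lo_idx = df['Low'].loc[hi_idx:].idxmin()  # later (or same) row supplying the Low
best_pair = (df.loc[hi_idx, 'High'], df.loc[lo_idx, 'Low'])
```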
|
<python><pandas><functional-programming>
|
2023-05-11 16:32:51
| 3
| 7,836
|
Ivan
|
76,229,790
| 1,780,761
|
python slowing down every minute or so on windows
|
<p>I have a clean Windows PC with just Python and the bare minimum installed. The problem I am facing is that every 3 to 4 minutes the execution of my Python code slows down by a factor of 3 for a few seconds. It looks like the OS is taking some CPU power away from Python. The slowdown is not tied to any specific function; it just happens randomly, wherever execution happens to be. The app uses QThreads, and it is always just one of these threads that is affected (though not always the same thread); the other threads keep working normally.</p>
<p>How can I track down the root cause of this and solve it?</p>
|
<python><multithreading>
|
2023-05-11 16:31:12
| 0
| 4,211
|
sharkyenergy
|
76,229,577
| 1,484,601
|
How to report step failure if a test does not pass
|
<p>In a GitHub Actions workflow, I run pytest:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: running pytest
run: poetry run pytest
</code></pre>
<p>When this is run on the server, one of the tests does not pass:</p>
<pre><code>Run poetry run pytest
============================= test session starts ==============================
platform linux -- Python 3.9.16, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/runner/work/Package/Package
plugins: cov-4.0.0
collected 271 items
tests/test_base.py ....x......................................... [ 16%]
[...]
tests/test_utils.py ...... [100%]
=============================== warnings summary ===============================
.venv/lib/python3.9/site-packages/pkg_resources/__init__.py:121
/home/runner/work/package/Package/.venv/lib/python3.9/site-packages/pkg_resources/__init__.py:121: DeprecationWarning: pkg_resources is deprecated as an API
warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============ 270 passed, 1 xfailed, 1 warning in 182.06s (0:03:02) =============
</code></pre>
<p>yet the workflow step is reported as a success. How can I make the step report an error if one or more tests do not pass?</p>
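<p>Note that the run shown above actually ends with <code>270 passed, 1 xfailed</code> and a zero exit code: a test marked <code>xfail</code> that fails is an <em>expected</em> failure, so pytest still reports success and the step passes (GitHub Actions fails a step only on a non-zero exit code). If the goal is to have that test counted as a real failure, one option (a sketch, not the only approach) is pytest's <code>--runxfail</code> flag, which ignores the marker:</p>

```yaml
- name: running pytest
  run: poetry run pytest --runxfail   # report xfail-marked tests as ordinary ones
```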
|
<python><pytest><github-actions>
|
2023-05-11 16:02:43
| 1
| 4,521
|
Vince
|
76,229,471
| 7,085,162
|
Read an excel file, manipulate the data but retain the original formatting using Pandas
|
<p>I have code that essentially reads an excel file, that excel file has a header / filter on the first few lines, then the columns are below that. I have to load that into a dataframe, apply a filter and then export out to a new excel.</p>
<p>The issue however is that once exported, I obviously lose the header/filter. I need to ensure my final result contains those first 3 rows, as I use this file with a later script and it requires the format of the original file.</p>
<p>I have an excel file that looks like:</p>
<pre><code>+----------------+-----------+-------+--------+--+
| My Report | | | | |
+----------------+-----------+-------+--------+--+
| Report Filters | Include X | | | |
+----------------+-----------+-------+--------+--+
| | | | | |
+----------------+-----------+-------+--------+--+
| Name | ID | My ID | Status | |
+----------------+-----------+-------+--------+--+
| test | 1234 | 4321 | Open | |
+----------------+-----------+-------+--------+--+
</code></pre>
<p>My columns in this sample are "Name", "ID", etc.</p>
<p>When I do my filtering and export to excel, I lose the first 3 rows (My report etc).</p>
<p>My goal is to keep the first 3 rows. Everything else works except for that.</p>
<p>I tried concatenating the dataframes but then I end up with a wonky result.</p>
<p>Code (I changed variable names etc to keep it as a sample):</p>
<pre><code># Store the original dataset as is for later use (rebuild with header/filter rows)
self.original_data = pd.read_excel('original-data.xlsx')
# New dataframe copied from original, used for filtering data
self.new_data = pd.DataFrame(self.original_data.values[3:],
columns=self.original_data.iloc[2])
# Filter the data
self.filtered_data = self.new_data[self.new_data[self.filter_column].isin(self.filtered_list)]
# Grabs first three rows of original to keep integrity
self.filtered_data = pd.concat(objs=[
self.original_data.iloc[:3],
self.filtered_data])
self.filtered_data.to_excel('filtered_data/new-data.xlsx',
sheet_name='Yay',
index=False
)
</code></pre>
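<p>A hedged sketch of one way to keep the banner rows, built on an in-memory stand-in for <code>pd.read_excel('original-data.xlsx', header=None)</code>: with <code>header=None</code> the banner rows stay in the frame as ordinary rows, so nothing is lost on the round trip, and exporting with <code>header=False</code> stops pandas from adding its own header row on top.</p>

```python
import pandas as pd

# Stand-in for pd.read_excel('original-data.xlsx', header=None):
raw = pd.DataFrame([
    ['My Report', None, None, None],
    ['Report Filters', 'Include X', None, None],
    [None, None, None, None],
    ['Name', 'ID', 'My ID', 'Status'],
    ['test', 1234, 4321, 'Open'],
    ['other', 5678, 8765, 'Closed'],
])

col_row = 3                                   # row holding the real column names
body = raw.iloc[col_row + 1:].copy()
body.columns = raw.iloc[col_row]

filtered = body[body['Status'] == 'Open']     # example filter

# Re-attach banner + column-name rows; positions line up because both
# frames use the default integer column labels.
out = pd.concat([raw.iloc[:col_row + 1],
                 pd.DataFrame(filtered.values)], ignore_index=True)
# out.to_excel('new-data.xlsx', index=False, header=False)
```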
|
<python><pandas><excel><dataframe>
|
2023-05-11 15:50:01
| 1
| 1,069
|
William
|
76,229,276
| 1,732,969
|
AWS lambda boto3 invoke with file as payload
|
<p>I'm integrating a new AWS Lambda with an existing API that also runs in Lambda. I want to call that API from Lambda directly instead of using Postman.</p>
<p>Right now, I have a Postman collection that calls the API Lambda and then the flow starts. Now I need to code that AWS Lambda invoke from another Lambda.</p>
<p>This is the exported python code from postman:</p>
<pre><code>import requests
url = "my_awesome_url/api/1.0/ingest?query_param_1=my_query_param"
payload = "<file contents here>"
headers = {
'Content-Type': 'application/pdf'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
</code></pre>
<p>In postman, I'm attaching a pdf file as binary:</p>
<p><a href="https://i.sstatic.net/HmWru.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HmWru.png" alt="enter image description here" /></a></p>
<p>In the API, the file content is read and stored in another S3 location.
I can see in the code that the file manipulation is performed in the following way:</p>
<pre><code>document = S3.Object(bucket, renamed_file)
document.put(Body=base64.b64decode(event['body']))
</code></pre>
<p>Where bucket and renamed_file point to the new object location, and the file content goes in the "body".</p>
<p><strong>My particular question is: How to build the payload of the AWS invoke with the file in the body?</strong></p>
<p>Now, the good thing is that I have the file located in S3. So I have the bucket name, the file path and also the object url of that pdf file.</p>
<p>So far, the payload I build is the next one:</p>
<pre><code> payload = {
"path": "/api/1.0/ingest",
"headers": {
"Content-Type": content_type
},
"queryStringParameters": {
"query_param_1": my_query_param
},
"body": ¿?
}
</code></pre>
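<p>A hedged sketch of building that body: since the receiving API calls <code>base64.b64decode(event['body'])</code>, the payload's <code>"body"</code> should be the base64 text of the file bytes. In the real flow the bytes would come from <code>s3.get_object(...)['Body'].read()</code> and the JSON string would be passed as <code>Payload=</code> to the Lambda client's <code>invoke()</code>; both names here are stand-ins.</p>

```python
import base64
import json

# Stand-in for boto3.client('s3').get_object(Bucket=..., Key=...)['Body'].read()
pdf_bytes = b'%PDF-1.4 fake file contents'

payload = {
    "path": "/api/1.0/ingest",
    "headers": {"Content-Type": "application/pdf"},
    "queryStringParameters": {"query_param_1": "my_query_param"},
    # The API base64-decodes event['body'], so send base64 text, not raw bytes:
    "body": base64.b64encode(pdf_bytes).decode("ascii"),
}
wire = json.dumps(payload)   # what would be passed as Payload= to invoke()
```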
|
<python><amazon-web-services><amazon-s3><aws-lambda><boto3>
|
2023-05-11 15:26:05
| 0
| 1,593
|
eduardosufan
|
76,229,274
| 7,337,831
|
How to get last week number with year?
|
<p>I want to create a function that gives last week's number together with the year.</p>
<p>for example, the current week number is 19, so the function should give the result <code>2023_W18</code>. And the weeks should start from Monday to Sunday. So I run the script on Monday, and it will give last week's number.</p>
<p>I found a few examples:</p>
<pre><code>>>> import datetime
>>> datetime.date(2010, 6, 16).isocalendar()[1]
24
</code></pre>
<p>but how to get last week's number? Do I need to first subtract 7 days from the current day? It should also work correctly when the year changes.</p>
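<p>A hedged sketch: yes, subtracting 7 days always lands somewhere in the previous Monday-to-Sunday (ISO) week, and <code>isocalendar()</code> returns the matching ISO year, so the year rollover comes for free.</p>

```python
import datetime

def last_week_label(today=None):
    # Any date minus 7 days falls in the previous ISO (Mon-Sun) week;
    # isocalendar() gives the ISO year that goes with that week number.
    today = today or datetime.date.today()
    year, week, _ = (today - datetime.timedelta(days=7)).isocalendar()
    return f"{year}_W{week}"
```

Run on a day in week 19 of 2023 it returns <code>2023_W18</code>, and run in early January it rolls back to the last week of the previous ISO year.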
|
<python>
|
2023-05-11 15:25:53
| 2
| 4,526
|
lucy
|
76,229,246
| 6,141,688
|
How to attribute an image in a variable in unix inside a python script
|
<p>I have this code:</p>
<pre><code>os.system('convert xc:red xc:black xc:white +append swatch.png')
os.system('convert red_1.jpg +dither -remap swatch.png start.png')
</code></pre>
<p>In the first line I create a colored image, saved like this:</p>
<p>But I would just like to assign this image to a variable, without having to open the saved file again. Something like this, in a simple way:</p>
<pre><code>os.system('convert xc:red xc:black xc:white +append my_image')
</code></pre>
<p>In the second line, I need to read the images; I would like to use the variables directly there too, like this:</p>
<pre><code>os.system('convert input_image_1 +dither -remap input_image_2 output_image')
</code></pre>
<p>I am applying several processing steps to an image, and I would rather not save an image and then open it again at each step; I would like to pass it along directly. Is that possible?</p>
|
<python><unix><os.system>
|
2023-05-11 15:22:20
| 1
| 347
|
Greg Rov
|
76,228,990
| 3,815,773
|
Check which Python process is running 'myscript.py' on Windows, Linux
|
<p>Finding out that Python is running is rather easy. But how can I find out which Python process is the one running <code>myscript.py</code>?</p>
<p>I took help from this <a href="https://stackoverflow.com/questions/7787120/check-if-a-process-is-running-or-not-on-windows">Check if a process is running or not on Windows?</a> eventually using <code>psutil</code> as simple as:</p>
<pre><code>import psutil
for proc in psutil.process_iter(['pid', 'name']):
if "python" in str(proc.info).lower():
print(proc.info)
</code></pre>
<p>This results e.g. in this output:</p>
<pre><code>{'pid': 34461, 'name': 'python'}
{'pid': 1245157, 'name': 'python'}
{'pid': 1433410, 'name': 'python'}
{'pid': 2176252, 'name': 'python3.11'}
{'pid': 2894859, 'name': 'python'}
{'pid': 2973672, 'name': 'python'}
{'pid': 3354315, 'name': 'python'}
</code></pre>
<p>Very easy so far, but which of these Python processes is the one which runs <code>myscript.py</code>?</p>
<p>A solution must be possible from within Python and must run on Windows and Linux, and hopefully also on Mac.</p>
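<p>A hedged sketch: <code>psutil</code> also exposes each process's command line, and the script path shows up there as one of the interpreter's arguments, on Windows, Linux, and macOS alike.</p>

```python
import psutil

def pids_running_script(script_name):
    # The script path appears as an argument in the interpreter's command line.
    pids = []
    for proc in psutil.process_iter(['pid', 'name', 'cmdline']):
        cmdline = proc.info.get('cmdline') or []
        if any(arg.endswith(script_name) for arg in cmdline):
            pids.append(proc.info['pid'])
    return pids

matches = pids_running_script("myscript.py")
```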
|
<python><psutil>
|
2023-05-11 14:56:06
| 1
| 505
|
ullix
|
76,228,791
| 2,562,058
|
Conda 23.3.1: what shall be the content of build.sh?
|
<p>I found <code>grayskull</code> for creating <code>meta.yml</code> files and I found <a href="https://github.com/MichaelsJP/conda-package-publish-action" rel="nofollow noreferrer">this github action</a> for publishing on conda.</p>
<p>However, said github action requires a build.sh file, and according to <a href="https://docs.conda.io/projects/conda-build/en/stable/user-guide/tutorials/build-pkgs.html?highlight=build.sh#writing-the-build-script-files-build-sh-and-bld-bat" rel="nofollow noreferrer">the official guide</a> such a file must contain "...the text exactly as shown:"</p>
<pre><code>$PYTHON setup.py install # Python command to install the script.
</code></pre>
<p>Nevertheless, newest project have <code>pyproject.toml</code> and I am not sure the above guide applies today since it is fairly old, apparently.</p>
<p>As for conda 23.3.1, what shall be the content of said <code>build.sh</code>?</p>
<p><strong>EDIT:</strong> The question has been closed due to that it has been considered opinion-based.</p>
<p>Unfortunately, it is not, as the following Theorem shows.</p>
<p><em>Theorem: The question is NOT opinion based.</em></p>
<p><strong>Proof</strong>: By contradiction assume that the question is opinion based. Then, there should not exist any non-opinion based guide explaining what shall be included in build.sh. However, an official, non-opinion based guide exists and it is <a href="https://docs.conda.io/projects/conda-build/en/latest/resources/build-scripts.html" rel="nofollow noreferrer">here</a>, which contradicts the initial hypothesis. Hence, the question is not opinion-based. <strong>QED</strong></p>
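<p>For what it's worth, the conda-build resource linked in the proof above no longer centers on <code>setup.py</code>; for a <code>pyproject.toml</code> project, a commonly shown <code>build.sh</code> is a pip-based one-liner (a sketch, relying on the <code>$PYTHON</code> variable that conda-build injects into the build environment):</p>

```shell
#!/bin/bash
# conda-build sets $PYTHON to the interpreter of the build environment.
$PYTHON -m pip install . --no-deps -vv
```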
|
<python><anaconda><conda><miniconda>
|
2023-05-11 14:31:55
| 1
| 1,866
|
Barzi2001
|
76,228,787
| 16,383,578
|
How to check if an IP address is reserved without using ipaddress module in Python?
|
<p>According to <a href="https://en.wikipedia.org/wiki/Reserved_IP_addresses" rel="nofollow noreferrer">Wikipedia</a>, the following are all the reserved IPv4 addresses:</p>
<pre><code>RESERVED_IPV4 = [
'0.0.0.0/8',
'10.0.0.0/8',
'100.64.0.0/10',
'127.0.0.0/8',
'169.254.0.0/16',
'172.16.0.0/12',
'192.0.0.0/24',
'192.0.2.0/24',
'192.88.99.0/24',
'192.168.0.0/16',
'198.18.0.0/15',
'198.51.100.0/24',
'203.0.113.0/24',
'224.0.0.0/4',
'233.252.0.0/24',
'240.0.0.0/4',
'255.255.255.255/32'
]
</code></pre>
<p>I want to check if any given IP address is reserved without using <code>ipaddress</code>; this is trivial: I only need to check whether it falls into any of the address ranges.</p>
<p>I have programmatically converted the reserved networks to start, end IP pairs of <code>int</code>s:</p>
<pre><code>RESERVED_IPV4 = [
(0, 16777215),
(167772160, 184549375),
(1681915904, 1686110207),
(2130706432, 2147483647),
(2851995648, 2852061183),
(2886729728, 2887778303),
(3221225472, 3221225727),
(3221225984, 3221226239),
(3227017984, 3227018239),
(3232235520, 3232301055),
(3323068416, 3323199487),
(3325256704, 3325256959),
(3405803776, 3405804031),
(3758096384, 4294967295)
]
</code></pre>
<p>A naïve solution would be to check if a number falls into any of the ranges, like so:</p>
<pre><code>RESERVED_IPV4_RANGES = [range(*e) for e in RESERVED_IPV4]
def is_reserved_range(ip):
return any(ip in e for e in RESERVED_IPV4_RANGES)
</code></pre>
<p>But this is inefficient. Since these are IP addresses, all IP addresses in the same range share the same binary prefix:</p>
<pre><code>In [252]: [(f'{a:032b}', f'{b:032b}') for a, b in RESERVED_IPV4]
Out[252]:
[('00000000000000000000000000000000', '00000000111111111111111111111111'),
('00001010000000000000000000000000', '00001010111111111111111111111111'),
('01100100010000000000000000000000', '01100100011111111111111111111111'),
('01111111000000000000000000000000', '01111111111111111111111111111111'),
('10101001111111100000000000000000', '10101001111111101111111111111111'),
('10101100000100000000000000000000', '10101100000111111111111111111111'),
('11000000000000000000000000000000', '11000000000000000000000011111111'),
('11000000000000000000001000000000', '11000000000000000000001011111111'),
('11000000010110000110001100000000', '11000000010110000110001111111111'),
('11000000101010000000000000000000', '11000000101010001111111111111111'),
('11000110000100100000000000000000', '11000110000100111111111111111111'),
('11000110001100110110010000000000', '11000110001100110110010011111111'),
('11001011000000000111000100000000', '11001011000000000111000111111111'),
('11100000000000000000000000000000', '11111111111111111111111111111111')]
</code></pre>
<p>So a smarter approach would be checking if the binary representation of an IP address starts with any of the prefixes:</p>
<pre><code>def longest_common_prefix(a, b):
short = min(len(a), len(b))
for i in range(short):
if a[:i] != b[:i]:
break
return a[:i-1] if i else ''
RESERVED_IPV4_PREFIXES = tuple(longest_common_prefix(f'{a:032b}', f'{b:032b}') for a, b in RESERVED_IPV4)
def is_reserved_prefix(ip):
return f'{ip:032b}'.startswith(RESERVED_IPV4_PREFIXES)
</code></pre>
<p>The prefixes are:</p>
<pre><code>In [253]: RESERVED_IPV4_PREFIXES
Out[253]:
('00000000',
'00001010',
'0110010001',
'01111111',
'1010100111111110',
'101011000001',
'110000000000000000000000',
'110000000000000000000010',
'110000000101100001100011',
'1100000010101000',
'110001100001001',
'110001100011001101100100',
'110010110000000001110001',
'111')
</code></pre>
<p>I have confirmed the second method is more efficient than the first:</p>
<pre><code>In [254]: import random
In [255]: n = random.randrange(2**32)
In [256]: n
Out[256]: 634831452
In [257]: is_reserved_prefix(n)
Out[257]: False
In [258]: is_reserved_range(n)
Out[258]: False
In [259]: %timeit is_reserved_range(n)
1.92 µs ± 56 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [260]: %timeit is_reserved_prefix(n)
734 ns ± 6.59 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>But can it get any more efficient? Like directly using bitwise operations on <code>int</code>s? I know how to convert to and from IP addresses using bitwise operations, but I don't know how to use them in this case.</p>
<hr />
<p>I have confirmed directly comparing <code>int</code>s is slower than the smart approach as well:</p>
<pre><code>In [261]: %timeit any(a <= n <= b for a, b in RESERVED_IPV4)
1.71 µs ± 6.31 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<hr />
<p>I have finally managed to do it using bitwise and, however I don't know why this is actually slower than the <code>str.startswith</code> based approach. And what baffles me most is that <code>any</code> is actually slower than a <code>for</code> loop with early return:</p>
<pre><code>RESERVED_IPV4_MASKS = []
for a, b in RESERVED_IPV4:
count = b - a + 1
x = count.bit_length() - 1
y = 32 - x
mask = ((1<<y)-1)<<x
RESERVED_IPV4_MASKS.append((a, mask))
RESERVED_IPV4_MASKS = tuple(RESERVED_IPV4_MASKS)
def is_reserved_mask(ip):
return any(ip & mask == start for start, mask in RESERVED_IPV4_MASKS)
</code></pre>
<pre><code>In [332]: RESERVED_IPV4_MASKS = []
...:
...: for a, b in RESERVED_IPV4:
...: count = b - a + 1
...: x = count.bit_length() - 1
...: y = 32 - x
...: mask = ((1<<y)-1)<<x
...: RESERVED_IPV4_MASKS.append((a, mask))
...:
...: RESERVED_IPV4_MASKS = tuple(RESERVED_IPV4_MASKS)
In [333]: def is_reserved_mask(ip):
...: return any(ip & mask == start for start, mask in RESERVED_IPV4_MASKS)
In [334]: %timeit is_reserved_mask(n)
2.12 µs ± 53.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [335]: def is_reserved_mask(ip):
...: for start, mask in RESERVED_IPV4_MASKS:
...: if ip & mask == start:
...: return True
...: return False
In [336]: %timeit is_reserved_mask(n)
1.3 µs ± 97.4 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>I don't know why my code isn't as efficient as I imagined.</p>
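<p>For what it's worth, a hedged alternative that avoids scanning all 14 ranges on every call: since the merged <code>(start, end)</code> pairs above are sorted and non-overlapping, a single binary search with <code>bisect</code> finds the only range that can possibly contain the address.</p>

```python
import bisect

RESERVED_IPV4 = [
    (0, 16777215), (167772160, 184549375), (1681915904, 1686110207),
    (2130706432, 2147483647), (2851995648, 2852061183),
    (2886729728, 2887778303), (3221225472, 3221225727),
    (3221225984, 3221226239), (3227017984, 3227018239),
    (3232235520, 3232301055), (3323068416, 3323199487),
    (3325256704, 3325256959), (3405803776, 3405804031),
    (3758096384, 4294967295),
]
STARTS = [a for a, _ in RESERVED_IPV4]
ENDS = [b for _, b in RESERVED_IPV4]

def is_reserved_bisect(ip):
    # The rightmost range starting at or below ip is the only candidate.
    i = bisect.bisect_right(STARTS, ip) - 1
    return i >= 0 and ip <= ENDS[i]
```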
|
<python><python-3.x><ip-address>
|
2023-05-11 14:31:34
| 3
| 3,930
|
Ξένη Γήινος
|
76,228,595
| 2,562,058
|
conda won't update environment through environment.yml
|
<p>I have an <code>environment.yml</code> file with the following content:</p>
<pre><code>channels:
- conda-forge
- defaults
dependencies:
- python=3.10
- pandas
- matplotlib
- control
- scipy
- pathlib
- numpy
- tomli
- sphinx
- furo
- sphinx-toolbox
- sphinx-autodoc-typehints
- pytest
- mypy
- black
- flake8
- coverage
- h5py
</code></pre>
<p>I created an environment named <em>pippo</em> with conda <code>create --name pippo python=3.10</code>, then I run <code>conda activate pippo</code> and finally I run <code>conda env update --name pippo --file environment.yml --prune</code>.</p>
<p>The output is:</p>
<pre><code>conda-forge/osx-arm64 Using cache
conda-forge/noarch Using cache
pkgs/r/noarch No change
pkgs/main/noarch No change
pkgs/main/osx-arm64 No change
pkgs/r/osx-arm64 No change
Collect all metadata (repodata.json): done
Solving environment: done
#
# To activate this environment, use
#
# $ conda activate pippo
#
# To deactivate an active environment, use
#
# $ conda deactivate
</code></pre>
<p>whereas I expected the packages in <code>environment.yml</code> to be installed.</p>
<p>How to solve it?</p>
|
<python><anaconda><conda><miniconda>
|
2023-05-11 14:11:48
| 0
| 1,866
|
Barzi2001
|
76,228,417
| 21,881,451
|
"cannot import name 'EVENT_TYPE_OPENED' from 'watchdog.events' "
|
<p>I'm trying to make a REST API (I'm a beginner), but when I try to initialize the server from this code:</p>
<pre><code>from flask import Flask
app = Flask(__name__)
if __name__=='__main__':
app.run(debug=True, port=4000)
</code></pre>
<hr />
<p>i get this error in the prompt:</p>
<pre><code> from watchdog.events import EVENT_TYPE_OPENED
ImportError: cannot import name 'EVENT_TYPE_OPENED' from 'watchdog.events'
(C:\ ********* \Python\Python310\lib\site-packages\watchdog\events.py)
</code></pre>
<p>I'm expecting something like this (Min 8:27):
<a href="https://www.youtube.com/watch?v=GMppyAPbLYk&ab_channel=TechWithTim" rel="noreferrer">https://www.youtube.com/watch?v=GMppyAPbLYk&ab_channel=TechWithTim</a></p>
|
<python><flask><python-watchdog>
|
2023-05-11 13:54:46
| 4
| 323
|
brian valladares
|
76,228,371
| 375,432
|
Replace missing values with mean using Ibis
|
<p>How can I use <a href="https://ibis-project.org" rel="nofollow noreferrer">Ibis</a> to fill missing values with the mean?</p>
<p>For example, if I have this data:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import ibis
from ibis import _
ibis.options.interactive = True
df = pd.DataFrame(data={'fruit': ['apple', 'apple', 'apple', 'orange', 'orange', 'orange'],
'variety': ['gala', 'honeycrisp', 'fuji', 'navel', 'valencia', 'cara cara'],
'weight': [134 , 158, pd.NA, 142, 96, pd.NA]})
t = ibis.memtable(df)
</code></pre>
<p>Using Ibis code:</p>
<ul>
<li>How would I replace the <code>NA</code> values in the <code>weight</code> column with the overall mean of <code>weight</code>?</li>
<li>How would I replace the <code>NA</code> values in the <code>weight</code> column with the the mean within each group (apples, oranges)?</li>
</ul>
|
<python><ibis>
|
2023-05-11 13:50:00
| 1
| 763
|
ianmcook
|
76,228,323
| 14,385,814
|
How to past multiple list from ajax to Django
|
<p>I want to get all the data from a <code>list</code>: loop over every selected item and insert it into the database.
Currently, when I print it, the data looks like <code>['[object Object]', '[object Object]']</code>.
How can I insert (or at least print) these items one by one?</p>
<p><strong>I have this list</strong> which is selected_items I loop the data and then pass it to ajax</p>
<pre><code>selected_items = [];
for (var i = 0; i < checkBoxes.length; i++) {
var selected_obj ={
stock_id: checkBoxes[i].id,
quantity: row.cells[3].innerHTML
}
selected_items.push(selected_obj);
}
</code></pre>
<p><strong>When I console.log the selected_items</strong>, it looks like this:</p>
<p><a href="https://i.sstatic.net/wRrly.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wRrly.png" alt="enter image description here" /></a></p>
<p>So now I want to pass this list to Django using ajax:</p>
<pre><code>console.log(selected_items);
$.ajax({
type: "POST",
url: "{% url 'sales-item' %}",
data:{
multiple_list: selected_items.join(','), item_size: selected_items.length
}
}).done(function(data){...
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>out_items = request.POST.getlist('multiple_list[]')
print(out_items)
</code></pre>
<p><strong>and it prints like this</strong></p>
<pre><code>['[object Object]', '[object Object]']
</code></pre>
<p><strong>Updated Code:</strong> how can I loop over the data? This is what I tried, but it doesn't reflect the data at all:</p>
<pre><code>multiple_list: JSON.stringify(selected_items)
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>out_items = request.POST.get('multiple_list')
for i in out_items:
print(out_items1[i])
</code></pre>
<p><strong>How can I print it or insert it into the database?</strong></p>
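A sketch of the likely server-side fix (an assumption: the client sends <code>JSON.stringify(selected_items)</code> as in the update, so Django receives one JSON string; the payload below is illustrative sample data, not from the question):

```python
import json

# What request.POST.get('multiple_list') would contain after JSON.stringify
payload = '[{"stock_id": "3", "quantity": "2"}, {"stock_id": "7", "quantity": "5"}]'

items = json.loads(payload)  # back to a list of dicts
for item in items:
    # each item is a dict, so fields are addressable by key
    print(item["stock_id"], item["quantity"])
```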
|
<javascript><python><jquery><django><ajax>
|
2023-05-11 13:44:59
| 1
| 464
|
BootCamp
|
76,228,297
| 7,465,516
|
Is this PyCharm-inspection a false positive? "Cannot find reference '__class__' in 'None'"
|
<p>In my codebase there is some amount of <code>None.__class__</code>.
PyCharm marks this as a warning:</p>
<blockquote>
<p><code>Cannot find reference '__class__' in 'None'</code></p>
</blockquote>
<p>I am using PyCharm 2022.3 (Community Edition) if that matters.</p>
<p>However, when I try it out in REPL I get this sensible output which seems to be consistent across different python versions (including Python 2.7 and Python 3.10) from what I have tried:</p>
<pre class="lang-py prettyprint-override"><code>>>> None.__class__
<type 'NoneType'>
</code></pre>
<p>Is there a hidden danger I am not seeing?
The documentation <a href="https://docs.python.org/3/library/constants.html#None" rel="nofollow noreferrer">https://docs.python.org/3/library/constants.html#None</a> seems to suggest <code>NoneType</code> is a proper part of the language and not some implementation-quirk.</p>
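A quick sanity check (runnable in any CPython 3) suggesting the warning is a linter limitation rather than a language problem:

```python
# None is an ordinary singleton object whose class is NoneType.
assert None.__class__ is type(None)
assert type(None).__name__ == "NoneType"

# Since Python 3.10 the type is even directly importable:
# from types import NoneType
```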
|
<python><intellij-idea><pycharm><nonetype>
|
2023-05-11 13:41:00
| 2
| 2,196
|
julaine
|
76,228,147
| 4,681,355
|
Python selenium browser accepts cookies and returns `AttributeError`
|
<p>I'm scraping data from a sports website. This is one of the pages, where the code gets stuck:</p>
<p><a href="https://www.unitedrugby.com/clubs/dhl-stormers/kwenzo-blose" rel="nofollow noreferrer">https://www.unitedrugby.com/clubs/dhl-stormers/kwenzo-blose</a></p>
<p>When opening it with an incognito browser you'll see a Cookies window where one must accept the cookies. Here's how I do it:</p>
<pre class="lang-py prettyprint-override"><code>driver.get(url)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
WebDriverWait(driver, 8).until(
EC.element_to_be_clickable((By.XPATH, "//button[@class='flex items-center justify-center py-2.5 px-[18px] lg:px-[22px] border border-solid rounded-tl-lg rounded-br-lg text-base lg:text-xl tracking-[2px] font-urc-sans transition ease-linear duration-300 border-turquoise-primary bg-turquoise-primary bg-opacity-[0.08] text-slate-deep hover:text-turquoise-secondary mx-auto md:order-last md:ml-4 md:mr-0']"))
)
WebDriverWait(driver, 8).until(
EC.element_to_be_clickable((By.XPATH, "//button/span[contains(., 'ACCEPT ALL')]"))
).click()
el = driver.get_attribute("outerHTML")
el = BeautifulSoup(el, "html.parser")
...
</code></pre>
<p>I'm not a pro with selenium, I'm still learning, so I can't figure out what I'm doing wrong. It either gets stuck at the cookies window, or it crashes at the <code>.get_attribute()</code> call by saying:</p>
<pre><code>AttributeError: 'WebDriver' object has no attribute 'get_attribute'
</code></pre>
<p>There are > 800 players who have a page on this website, and I have successfully scraped the data of many of them every week. They must have changed something in the website's architecture, but even today the scraping of some other players went fine, and then this one just doesn't want to be scraped.</p>
<p>I hope someone can shed some light on the issue for me!</p>
|
<python><parsing><selenium-webdriver><web-scraping>
|
2023-05-11 13:25:15
| 2
| 622
|
schmat_90
|
76,228,098
| 2,386,113
|
How to update Spyder from version 5.4.1 to 5.4.3 with anaconda? [SSL: CERTIFICATE_VERIFY_FAILED]
|
<p>I am absolutely new to the installation of python IDE. I want to use the latest version of Spyder. To do that, I installed <strong>anaconda</strong>, with which Spyder is bundled together.</p>
<p>When I opened the Spyder, I got the messagebox (screenshot below) that I should update my Spyder version:</p>
<p><a href="https://i.sstatic.net/McUXQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/McUXQ.png" alt="enter image description here" /></a></p>
<p>I tried to execute the fist command <code>conda update anaconda</code> as per the screenshot's suggestion above. But I am getting the following error:</p>
<blockquote>
<p>Note: you may need to restart the kernel to use updated packages.</p>
</blockquote>
<blockquote>
<p>PackageNotInstalledError: Package is not installed in prefix.
prefix: C:\Users\xyz\AppData\Local\anaconda3
package name: anaconda</p>
</blockquote>
<p><a href="https://i.sstatic.net/Pts31.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pts31.png" alt="enter image description here" /></a></p>
<p><strong>PS:</strong> I am running Sypder as <strong>Administrator</strong>, and already tried to restart the kernel</p>
<p><strong>Update-1</strong> <code>conda update conda</code> also did not work and produced the error below:</p>
<blockquote>
<p>Collecting package metadata (current_repodata.json): ...working... failed
Note: you may need to restart the kernel to use updated packages.
CondaSSLError: Encountered an SSL error. Most likely a certificate verification issue.</p>
</blockquote>
<p><a href="https://i.sstatic.net/3zZhl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3zZhl.png" alt="enter image description here" /></a></p>
|
<python><anaconda><spyder>
|
2023-05-11 13:18:56
| 1
| 5,777
|
skm
|
76,228,059
| 15,569,921
|
3D surface plot with different length of arrays
|
<p>I want to create one 3D surface plot while the length of the arrays differs for each iteration, hence the axes have different lengths. I have a working example here for better understanding. You'll notice that <code>J</code> is dependent on <code>M</code>, so the length of the arrays in each iteration is different. When I plot this, it doesn't produce surface plots but just a 2D plot stretched over the <code>J</code> axis. Any help would be appreciated.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')
ax.set_xlabel("M")
ax.set_ylabel("J")
ax.set_zlabel("K")
for M in range(1, 5):
liss = []
for J in range(M, M*3):
K = M + J
liss.append(K)
L = np.array([liss])
J = np.array([i for i in range(M, M*3)])
M, J = np.meshgrid(M, J)
surf = ax.plot_surface(M, J, L, cmap=cm.coolwarm,
linewidth=0.1, alpha=0.7)
fig.colorbar(surf, shrink=0.5, aspect=5, pad=0.1);
</code></pre>
<p><a href="https://i.sstatic.net/XK6Yf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XK6Yf.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><3d><matplotlib-3d>
|
2023-05-11 13:14:19
| 0
| 390
|
statwoman
|
76,227,987
| 1,934,212
|
Get list of files within a time range based on file names containing timestamps
|
<p><a href="https://replit.com/@ralphfehrer/GlobFilesWithinFileRange?v=1" rel="nofollow noreferrer">A simplified python project</a> with the structure</p>
<p><a href="https://i.sstatic.net/tQMvh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tQMvh.png" alt="enter image description here" /></a></p>
<p>Contains a number of files with names encoding a timestamp using the format</p>
<pre><code>MMDDYYYY_HHMMSS
</code></pre>
<p>E.g., the name of the first file contains the timestamp May 11th, 2023, 08:01:34.</p>
<p>What I need is a function with the signature</p>
<pre><code>def get_files_between(starttime,endtime):
</code></pre>
<p>That returns all the file names within the range between starttime and endtime, where the starttime and endtime are given by hours and minutes.</p>
<p>For example,</p>
<pre><code>get_files_between("0801","0802")
</code></pre>
<p>should return</p>
<pre><code>['f_sometext_05112023_080134.csv','f_sometext_05112023_080155.csv','f_sometext_05112023_080218.csv','f_sometext_05112023_080225.csv']
</code></pre>
<p>It is relatively easy to realise something like that for a one-minute interval like 08:01 using glob:</p>
<pre><code>import glob
files = glob.glob(pathname="Files/f_*_*_0801*.csv")
print(files)
</code></pre>
<p>How can this code be generalised to intervals containing several minutes?</p>
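One possible generalisation (a sketch; the filename layout <code>..._MMDDYYYY_HHMMSS.csv</code> is assumed from the question, and both endpoints are treated as inclusive to match the example output): glob everything once, then filter by the parsed time:

```python
import glob
import re


def filter_files_between(names, starttime, endtime):
    # Compare the HHMM prefix of the HHMMSS stamp as strings; both
    # endpoints are inclusive, matching the example in the question.
    out = []
    for name in sorted(names):
        m = re.search(r"_\d{8}_(\d{6})\.csv$", name)
        if m and starttime <= m.group(1)[:4] <= endtime:
            out.append(name)
    return out


def get_files_between(starttime, endtime):
    return filter_files_between(glob.glob("Files/f_*_*_*.csv"), starttime, endtime)
```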
|
<python><glob>
|
2023-05-11 13:05:51
| 1
| 9,735
|
Oblomov
|
76,227,968
| 3,482,266
|
Type Hinting: Use of Ellipsis
|
<p>I'm trying to use Ellipsis. Here's the declaration of the method:</p>
<pre class="lang-py prettyprint-override"><code>def func_test(
self,
top_k: int,
tensor_1: torch.Tensor,
tensor_2: torch.Tensor,
) -> list[list[int], ...]:
</code></pre>
<p>Pylance, in Visual Studio Code, is giving me the error</p>
<pre><code>> "..." not allowed in this context
</code></pre>
<p>However, mypy <a href="https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html#useful-built-in-types" rel="nofollow noreferrer">docs</a> state that it should be possible. The hack I used to solve this was to use the var <code>Ellipsis</code>, but I'm not sure whether this is the pythonic way.</p>
|
<python><python-typing>
|
2023-05-11 13:03:23
| 1
| 1,608
|
An old man in the sea.
|
76,227,963
| 13,612,961
|
What is the right way to send Messages to a channel or user via a slackbot?
|
<p>I am working on a solution that automates a certain process using a slackbot.
The actual process is automated using a pipeline. If a user types the command, the slackbot starts the pipeline for the user via an API request. The slackbot should then inform the user when the pipeline is finished.</p>
<p>What is the right way to do this? Do I have to set up an API endpoint that the pipeline can call, which then makes the bot send the message?</p>
<p>I am using the Bolt for Python library.</p>
|
<python><slack-api><bolt><slack-bot>
|
2023-05-11 13:03:01
| 1
| 569
|
Lumberjack
|
76,227,893
| 12,176,973
|
ffmpeg black screen issue for video video generation from a list of frames
|
<p>I used a video to generate a list of frames, then I wanted to create multiple videos from this list of frames.
I've set starting and ending frame indexes for each "sub video", so for example,
<code>indexes = [[0, 64], [64, 110], [110, 234], [234, 449]]</code>, and those indexes will help my code generate 4 videos of various durations. The idea is to decompose the original video into multiple sub videos. My code is working just fine; the videos are generated.</p>
<p>But every sub video starts with multiple seconds of black screen; only the first generated video (the one using <code>indexes[0]</code> for starting and ending frames) is generated without this black screen part. I've tried changing the frame rate for each sub video according to the number of frames and things like that, but it didn't work. You can find my code below.</p>
<pre class="lang-py prettyprint-override"><code>for i, (start_idx, end_idx) in enumerate(self.video_frames_indexes):
if end_idx - start_idx > 10:
shape = cv2.imread(f'output/video_reconstitution/{video_name}/final/frame_{start_idx}.jpg').shape
os.system(f'ffmpeg -r 30 -s {shape[0]}x{shape[1]} -i output/video_reconstitution/{video_name}/final/frame_%d.JPG'
f' -vf "select=between(n\,{start_idx}\,{end_idx})" -vcodec libx264 -crf 25'
f' output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4')
</code></pre>
<p>Just the ffmpeg command</p>
<pre class="lang-bash prettyprint-override"><code>ffmpeg -r 30 -s {shape[0]}x{shape[1]} -i output/video_reconstitution/{video_name}/final/frame_%d.JPG -vf "select=between(n\,{start_idx}\,{end_idx})" -vcodec libx264 -crf 25 output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4
</code></pre>
|
<python><video><ffmpeg>
|
2023-05-11 12:56:56
| 1
| 311
|
arlaine
|
76,227,846
| 2,383,529
|
Numpy print floats <1 without the leading 0
|
<p>I need to display and inspect large numpy matrices of floats in the interval <code>[0,1]</code> and am trying to avoid line wrapping, so each printed value should be as short as possible. I'm already using <code>np.set_printoptions(precision=2)</code> which yields something like</p>
<pre><code>[[1. 0.93 0.87 0.9 0.9 1. 0.93 1. 0.9 0.97 0.97 0.97 0.77 0.73 0.73 0.83 0.67 0.87 0.8 0.87 0.9 0.8 0.9 0.97 0.97 0.83 0.57]
[0.93 1. 0.83 0.8 0.83 0.9 0.97 0.9 0.97 0.97 0.97 0.87 0.73 0.67 0.63 0.77 0.6 0.8 0.7 0.9 0.9 0.8 0.83 0.97 0.97 0.73 0.53]
[0.87 0.83 1. 0.7 0.73 0.9 0.8 0.93 0.8 0.83 0.9 0.77 0.67 0.7 0.67 0.67 0.6 0.73 0.63 0.7 0.73 0.77 0.87 0.93 0.83 0.63 0.5 ]
...
]
</code></pre>
<p>Since all of the float values are <code><=1</code>, I could save a character per column by getting rid of leading <code>0</code>s and hence view larger mats on my screen. E.g. <code>0.87</code> -> <code>.87</code>. <strong>How do I do this?</strong></p>
<p>I don't care what happens to the <code>1.</code>. It could stay the same (and have misaligned <code>.</code> with the other numbers), or be displayed as <code>1</code>. Similar for <code>0.</code>. It can stay <code>0.</code> or be <code>0</code>. I also don't care what the solution does if the matrix happens to contain values outside of <code>[0,1]</code>.</p>
<p>I have looked through <a href="https://numpy.org/doc/stable/reference/generated/numpy.set_printoptions.html" rel="nofollow noreferrer">numpy.set_printoptions</a> and <a href="https://numpy.org/devdocs/reference/generated/numpy.format_float_positional.html" rel="nofollow noreferrer">numpy.format_float_positional</a>, but they can't do this. Googling "numpy skip leading 0 for floats" merely brings up a lot of solutions for single floats non-numpy contexts (e.g. <code>printf("%0.2f", val)[1:]</code>)</p>
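One workaround sketch (hedged: it replaces the whole float formatter, so <code>1.</code> prints as <code>1.00</code> rather than <code>1.</code>): supply a custom <code>formatter</code> to <code>np.set_printoptions</code> that strips the leading zero:

```python
import numpy as np


def trim0(x):
    # "0.87" -> ".87"; values >= 1 keep their integer digit ("1.00")
    s = f"{x:.2f}"
    return s[1:] if s.startswith("0.") else s


np.set_printoptions(formatter={"float_kind": trim0})
print(np.array([[1.0, 0.93, 0.87], [0.9, 0.5, 0.0]]))
```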
|
<python><numpy><floating-point>
|
2023-05-11 12:51:44
| 2
| 1,901
|
waldol1
|
76,227,827
| 11,117,255
|
No data found for this date range, symbol may be delisted
|
<p>I tried to pull data on a stock symbol using the Yahoo finance python module. I checked the other questions that are on a similar topic and most of them say to try to run the program during market hours, but that does not help.</p>
<p>This is my code:</p>
<pre><code>import yfinance as yf
yf.download('QQQ', '2020-01-01', dt.datetime.now().strftime('%Y-%m-%d'))
</code></pre>
<p>This is the error:</p>
<pre><code>No data found for this date range, symbol may be delisted
</code></pre>
<p>But clearly the symbol exists:</p>
<blockquote>
<p><a href="https://finance.yahoo.com/quote/QQQ" rel="nofollow noreferrer">https://finance.yahoo.com/quote/QQQ</a></p>
</blockquote>
|
<python><pandas><yahoo-finance><yfinance>
|
2023-05-11 12:49:43
| 1
| 2,759
|
Cauder
|
76,227,735
| 5,321,862
|
How to choose whether to persist or discard changes in HDF5 file before closing
|
<p>I'd like to manipulate a set of data in an <code>hdf5</code> file and be able to decide, before closing the file, whether to discard all changes or not.
From the doc of <a href="https://docs.h5py.org/en/stable/high/file.html#file-drivers" rel="nofollow noreferrer">File drivers</a>:</p>
<blockquote>
<p>HDF5 ships with a variety of different low-level drivers, which map
the logical HDF5 address space to different storage mechanisms. You
can specify which driver you want to use when the file is opened:</p>
<pre><code>f = h5py.File('myfile.hdf5', driver=<driver name>, <driver_kwds>)
</code></pre>
<p>For example, the HDF5 “core” driver can be used to create a purely
in-memory HDF5 file, optionally written out to disk when it is closed.
Here’s a list of supported drivers and their options:</p>
<ul>
<li><p>‘core’:</p>
<p>Store and manipulate the data in memory, and optionally write it back out when the file is closed. Using this with an existing file and
a reading mode will read the entire file into memory. Keywords:</p>
<ul>
<li><p>backing_store:</p>
<p>If True (default), save changes to the real file at the specified path on close() or flush(). If False, any changes are
discarded when the file is closed.</p>
</li>
</ul>
</li>
</ul>
</blockquote>
<p>Regardless whether I perform a call to <code>flush()</code> or not, changes are always discarded (as expected). While, opening with default driver, changes are always persisted to the file on closure.</p>
<p>Based on what above, I've just created a very simple example:</p>
<pre class="lang-py prettyprint-override"><code>from h5py import File
# Create a dummy file from scratch
f = File('test.h5', 'w')
f.create_dataset("test_dataset", data=[1, 2, 3])
f.close()
# Open and modify the data
f = File('test.h5', 'r+') # In this case changes are always persisted
# f = File('test.h5', 'r+', driver='core', backing_store=False) # In this case changes are always discarded
ds = f["test_dataset"]
ds[...] = [3, 4, 5]
# f.flush() # Useless in this case, obviously
f.close() # Here changes should be discarded
# Read now `test_dataset`
f = File('test.h5', 'r')
print(f['test_dataset'][...])
f.close()
</code></pre>
<p>Is there a way to decide just before closing the file whether to save changes or not?</p>
<h2>EDIT 1: PyTables <code>undo</code> mechanism seems to work ONLY with newly created datasets, NOT with edits of pre-existing ones</h2>
<pre class="lang-py prettyprint-override"><code>import tables as t
import numpy as np
# Create the file
with t.open_file(r'test.h5', 'w') as fr:
fr.create_carray('/', 'TestArray', obj=np.array([1, 2, 3], dtype='uint8'))
with t.open_file('test.h5', 'r+') as fr:
# This will remove any previously created marks
if fr.is_undo_enabled():
fr.disable_undo()
fr.enable_undo() # Re-enable undo
fr.mark('MyMark')
# Create new array from scratch, and it will be discarded
new_arr = fr.create_carray('/', 'NewCreatedArray', obj=np.array([10, 11, 12]))
# Modify a pre-existing array! --> THIS WILL NOT BE DISCARDED
arr = fr.root.TestArray
arr[...] = np.array([3, 4, 5])
# Move back to when I opened the file
fr.undo('MyMark')
with t.open_file('test.h5', 'r+') as fr:
print(fr)
print('Test Array: ', fr.root.TestArray[:])
</code></pre>
<p>Result is:</p>
<pre><code>test.h5 (File) ''
Last modif.: '2023-05-18T07:26:13+00:00'
Object Tree:
/ (RootGroup) ''
/TestArray (CArray(3,)) ''
Test Array: [3 4 5]
</code></pre>
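If neither the driver flags nor the PyTables undo mechanism cover in-place edits, one generic workaround (a sketch with plain files, not h5py-specific; with h5py you would open <code>work_path</code> instead of the original) is to edit a scratch copy and copy it back only when the changes should be kept:

```python
import os
import shutil
import tempfile


class DiscardableFile:
    """Work on a scratch copy; overwrite the original only on save()."""

    def __init__(self, path):
        self.path = path
        fd, self.work_path = tempfile.mkstemp(suffix=".h5")
        os.close(fd)
        shutil.copyfile(path, self.work_path)  # edit this copy, e.g. with h5py

    def save(self):
        # Persist: copy the edited scratch file over the original.
        shutil.copyfile(self.work_path, self.path)

    def close(self):
        # Remove the scratch copy; without save() the original is untouched.
        os.remove(self.work_path)
```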
|
<python><hdf5><h5py><pytables>
|
2023-05-11 12:36:36
| 2
| 1,462
|
Buzz
|
76,227,724
| 7,822,387
|
how to call databricks notebook from python using rest api
|
<p>I want to create a python notebook on my desktop that pass an input to another notebook in databricks, and then return the output of the databricks notebook. For example, my local python file will pass a string into a databricks notebook, which will reverse the string and then output the result back to my local python file. What would be the best way to achieve this?</p>
<p>This is what I tried but when I try to create a new run, I get this error. Is my json formatted incorrectly or am I missing something else? Thanks</p>
<pre><code>import os
from databricks_cli.sdk.api_client import ApiClient
from databricks_cli.clusters.api import ClusterApi
os.environ['DATABRICKS_HOST'] = "https://adb-################.##.azuredatabricks.net/"
os.environ['DATABRICKS_TOKEN'] = "token-value"
api_client = ApiClient(host=os.getenv('DATABRICKS_HOST'), token=os.getenv('DATABRICKS_TOKEN'))
runJson = """
{
"name": "test job",
"max_concurrent_runs": 1,
"tasks": [
{
"task_key": "test",
"description": "test",
"notebook_task":
{
"notebook_path": "/Users/user@domain.com/api_test"
},
"existing_cluster_id": "cluster_name",
"timeout_seconds": 3600,
"max_retries": 3,
"retry_on_timeout": true
}
]
}
"""
runs_api = RunsApi(api_client)
runs_api.submit_run(runJson)
</code></pre>
<p>Error: Response from server:</p>
<pre class="lang-json prettyprint-override"><code>{
'error_code': 'MALFORMED_REQUEST',
'message': 'Invalid JSON given in the body of the request - expected a map'}
</code></pre>
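A guess at the cause, worth verifying against the databricks-cli version in use: the API may expect a parsed dict ("a map") rather than the raw JSON string. The conversion itself is a one-liner:

```python
import json

# Shortened version of the run spec from the question (illustrative).
run_json = '{"name": "test job", "max_concurrent_runs": 1}'

# Parse the string into a Python dict before handing it to the client,
# e.g. runs_api.submit_run(run_spec) -- hypothetical call, as in the question.
run_spec = json.loads(run_json)
assert isinstance(run_spec, dict)
```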
|
<python><rest><azure-databricks><databricks-rest-api>
|
2023-05-11 12:35:14
| 1
| 311
|
J. Doe
|
76,227,685
| 3,247,006
|
How to create tables with "models.py" in the main folder where "settings.py" is in Django?
|
<p>I'm trying to create a table with <code>models.py</code> in the main folder <code>core</code> where <code>settings.py</code> is as shown below:</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
| |-models.py # Here
| └-settings.py
|-app1
└-app2
</code></pre>
<p>This is <code>models.py</code> below:</p>
<pre class="lang-py prettyprint-override"><code># "core/models.py"
from django.db import models
class Person(models.Model):
name = models.CharField(max_length=20)
</code></pre>
<p>Then, I tried to make migrations as shown below below:</p>
<pre class="lang-none prettyprint-override"><code>python manage.py makemigrations
</code></pre>
<p>But, I couldn't make migrations getting the message below:</p>
<blockquote>
<p>No changes detected</p>
</blockquote>
<p>So, how can I create a table with <code>models.py</code> in the main folder <code>core</code>?</p>
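A likely missing piece (hedged: Django only discovers models in registered apps): add <code>core</code> itself to <code>INSTALLED_APPS</code>. A sketch assuming the default app list:

```python
# core/settings.py -- register the package that holds models.py as an app
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "core",  # without this, makemigrations cannot see core/models.py
]
```

Then `python manage.py makemigrations core` should pick up the `Person` model.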
|
<python><django><django-models><migration><django-migrations>
|
2023-05-11 12:31:18
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,227,399
| 547,231
|
How do I read doubles from a binary file in a loop?
|
<p>Simple question, but I can't find a helpful answer on the web. I have a file created with C++, where I first output a <code>std::size_t k</code> and then write <code>2 * k</code> <code>double</code>s.</p>
<p>I need to first read the <code>std::size_t k</code> in Python and then iterate in a loop from <code>0</code> to <code>k - 1</code>, reading two <code>double</code>s <code>x, y</code> in each iteration and doing something with them:</p>
<pre><code>with open('file', 'r') as f:
fig, ax = pyplot.subplots()
k = numpy.fromfile(f, numpy.uint64)[0] # does not work
for j in range(0, k):
# get double x and y somehow
x = numpy.fromfile(f, numpy.double)[0]
y = numpy.fromfile(f, numpy.double)[0]
ax.scatter(x = x, y = y, c = 0)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
</code></pre>
<p>The value I read in <code>k</code> is <code>3832614067495317556</code>, but it should be <code>4096</code>. And at the point where I read <code>x</code>, I immediately get an index out of range exception.</p>
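A sketch of a working reader (assumptions: the C++ side wrote a little-endian 8-byte <code>size_t</code> followed by <code>2 * k</code> 8-byte doubles): open the file in binary mode and use <code>count=</code> so each read consumes only what it needs instead of the whole file:

```python
import numpy as np


def read_points(path):
    with open(path, "rb") as f:          # 'rb', not 'r': this is binary data
        k = int(np.fromfile(f, dtype="<u8", count=1)[0])
        data = np.fromfile(f, dtype="<f8", count=2 * k)
    return data.reshape(k, 2)            # column 0 = x, column 1 = y
```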
|
<python><numpy>
|
2023-05-11 11:56:41
| 1
| 18,343
|
0xbadf00d
|
76,227,359
| 15,757,310
|
Is there a way to make a deployment to AWS EKS using boto3?
|
<p>I've been reading through a lot of documentation trying to figure out a way to deploy objects to EKS using a Python script or client. My objective is then to be able to trigger an AWS Lambda that deploys objects to EKS.</p>
<p>Is there a possible way?</p>
|
<python><amazon-web-services><aws-lambda><boto3><amazon-eks>
|
2023-05-11 11:52:43
| 0
| 863
|
arhe10
|
76,227,184
| 5,704,159
|
show Figure as plot instead of browser
|
<p>I'm new to plots/figures and I'm trying to show my output in a local plot window instead of in the browser.</p>
<p>This is my code, and my output shows in my browser.
How can I show it in a plot window?</p>
<pre><code>import plotly.express as px
import pandas as pd
df = pd.DataFrame(dict(value = [3739, 3770, 4761, 42125, 4957, 5028, 6048, 80985, 1474.2, 1558.6, 998.1, 38642.1],
variable = ['1', '2', '3', '4', '1', '2', '3', '4', '1', '2', '3', '4'],
group = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C']))
fig = px.line_polar(df, r = 'value', theta = 'variable', line_close = True, color = 'group')
fig.update_traces(fill = 'toself')
fig.show()
</code></pre>
|
<python><plotly>
|
2023-05-11 11:33:11
| 0
| 429
|
gogo
|
76,227,172
| 4,691,830
|
Create pandas data from one date and timestrings without colons
|
<p>I want to read times from a file that includes <a href="https://en.wikipedia.org/wiki/Satellite_navigation" rel="nofollow noreferrer">GNSS</a> times, among a lot of other data. The expected result is a pandas array (Index or Series) with datetime datatype, with the date of the dataset applied.</p>
<p>In an intermediate step, I have a list of timestamps in the format <code>hhmmss</code> with some invalid data mixed in:</p>
<pre><code>import datetime as dt
import pandas as pd
date = dt.date(2023, 5, 9)
times_from_file = [",,,,,,", "123456", "123457", "123458", "123459", "123500"]
</code></pre>
<p>I can get the desired output with this lengthy code snippet:</p>
<pre><code>datetimes = pd.to_datetime(
times_from_file, format="%H%M%S", errors="coerce"
).map(
lambda datetime: pd.NaT
if pd.isnull(datetime)
else dt.datetime.combine(date, datetime.time())
)
</code></pre>
<p>Output:</p>
<pre><code>DatetimeIndex([ 'NaT', '2023-05-09 12:34:56',
'2023-05-09 12:34:57', '2023-05-09 12:34:58',
'2023-05-09 12:34:59', '2023-05-09 12:35:00'],
dtype='datetime64[ns]', freq=None)
</code></pre>
<p>However, this looks overly complicated. I was hoping this could be solved with <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>pd.to_timedelta</code></a> instead but unfortunately that doesn't allow passing a format string. Even the <code>na_action</code> keyword of <a href="https://pandas.pydata.org/docs/reference/api/pandas.Index.map.html" rel="nofollow noreferrer"><code>pandas.Index.map</code></a> is ignored – that's why I used <code>if pd.isnull(datetime)</code> instead.</p>
<p>Is there a simpler way to do this, preferably leveraging purpose-built Pandas functions or methods?</p>
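One shorter variant (a sketch): prepend the known date to every time string and let a single <code>pd.to_datetime</code> call parse date and time together; invalid rows still coerce to <code>NaT</code>:

```python
import pandas as pd

date_str = "20230509"
times_from_file = [",,,,,,", "123456", "123457", "123500"]

# One combined parse: "20230509" + "123456" -> 2023-05-09 12:34:56;
# garbage rows fail the format check and become NaT via errors="coerce".
datetimes = pd.to_datetime(
    [date_str + t for t in times_from_file],
    format="%Y%m%d%H%M%S",
    errors="coerce",
)
```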
|
<python><pandas><timestamp><python-datetime>
|
2023-05-11 11:31:48
| 1
| 4,145
|
Joooeey
|
76,226,974
| 539,023
|
How to write unit test to check database creation
|
<p>I have a sample function as below that receives the database name and creates a new database. I would like to write a unit test for this function. Any idea how to write a test case for it? Many thanks.</p>
<pre><code>import psycopg2
def create_database(db_name):
conn = psycopg2.connect(
database="postgres", user='postgres', password='password', host='127.0.0.1', port= '5432'
)
conn.autocommit = True
#Creating a cursor object using the cursor() method
cursor = conn.cursor()
#Preparing query to create a database
sql = f'''CREATE database {db_name}''';
#Creating a database
cursor.execute(sql)
print("Database created successfully........")
#Closing the connection
conn.close()
</code></pre>
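One common approach (a sketch; it refactors the function slightly so the connection is injectable, then checks the issued SQL with a mock instead of a live PostgreSQL server):

```python
from unittest import mock


def create_database(db_name, connect):
    # 'connect' is injected so tests can supply a fake instead of psycopg2.connect
    conn = connect()
    conn.autocommit = True
    cursor = conn.cursor()
    cursor.execute(f"CREATE database {db_name}")
    conn.close()


def test_create_database_issues_create_statement():
    fake_conn = mock.MagicMock()
    create_database("mydb", connect=lambda: fake_conn)
    # The SQL reached the cursor, and the connection was closed afterwards.
    fake_conn.cursor.return_value.execute.assert_called_once_with("CREATE database mydb")
    fake_conn.close.assert_called_once()


test_create_database_issues_create_statement()
```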
|
<python><unit-testing>
|
2023-05-11 11:10:42
| 0
| 20,230
|
kta
|
76,226,866
| 2,749,397
|
Drawing dividers between different colors in a segmented colorbar
|
<p><a href="https://i.sstatic.net/IkRgs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IkRgs.png" alt="enter image description here" /></a></p>
<p>I would like to draw black horizontal dividers between the different individual colors in the segmented colorbar.</p>
<p><strong>mcve</strong></p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.cm import ScalarMappable
from matplotlib.colors import BoundaryNorm
plt.plot() # dummy Axes
nmin, N = 0, 11
bounds = [nmin-0.5]+[n+0.5 for n in range(nmin, N)]
norm = BoundaryNorm(bounds, 256)
cm = plt.get_cmap('cool')
cb = plt.colorbar(ScalarMappable(norm, cm), format='%02d', ax=plt.gca())
cb.set_ticks(range(nmin, N))
plt.show()
</code></pre>
|
<python><matplotlib><colorbar>
|
2023-05-11 10:57:01
| 1
| 25,436
|
gboffi
|
76,226,717
| 7,396,613
|
Memory Error in numpy.save() in list of lists but not list of dicts
|
<p>I am creating a dataset; ideally it would be composed of a list of different lists (a matrix).
The final size of the dataset should be <code>(n_data, 10, 20, 3)</code>, where n_data is the amount of data available.</p>
<p>I receive a list which contains n_data elements, each one a dict. As a test example, instead of <code>(10,20,3)</code>, I will use <code>(3,2,2)</code>:</p>
<pre><code> [{
'1': '[[1,2], [3,4]]',
'2': '[[5,6], [7,8]]',
'3': '[[9,1], [2,3]]'
},
# there are more elements in the list, n_data amount
]
</code></pre>
<p>So everything is a string. I can do <code>json.loads()</code> in order to obtain each value from each key as a list <code>[[1,2],[3,4]]</code>.</p>
<pre><code> final_matrix = []
for sequence in data:
temporal_list = []
for key, value in sequence.items():
parsed_value = json.loads(value)
temporal_list.append(parsed_value)
final_matrix.append(temporal_list)
np.save(file_path, np.array(final_matrix, dtype=object), allow_pickle=True)
</code></pre>
<p>Saving this gives me a <code>Memory Error</code> from numpy, but if the value is not parsed with <code>json.loads</code> and the sequence is just appended to <code>temporal_list</code>, then it can be saved properly. I could simply save it that way and parse the values whenever I need them, with no further problems, but I am now wondering: why is this? Is it because saving actual lists is different from saving strings?</p>
<p>Error code:</p>
<pre><code> np.save(file_path, np.array(final_matrix, dtype=object), allow_pickle=True)
File "<__array_function__ internals>", line 6, in save
File "/usr/local/lib/python3.7/dist-packages/numpy/lib/npyio.py", line 530, in save
pickle_kwargs=dict(fix_imports=fix_imports))
File "/usr/local/lib/python3.7/dist-packages/numpy/lib/format.py", line 680, in write_array
pickle.dump(array, fp, protocol=3, **pickle_kwargs)
MemoryError
</code></pre>
|
<python><json><numpy><numpy-ndarray>
|
2023-05-11 10:38:20
| 0
| 1,525
|
M.K
|
76,226,696
| 803,127
|
FastAPI + Uvicorn + multithreading. How to make a web app work with many requests in parallel?
|
<p>I'm new to Python development (but I have a .NET background).
I have a simple FastAPI application:</p>
<pre><code>from fastapi import FastAPI
import time
import logging
import asyncio
import random
app = FastAPI()
r = random.randint(1, 100)
logging.basicConfig(level="INFO", format='%(levelname)s | %(asctime)s | %(name)s | %(message)s')
logging.info(f"Starting app {r}")
@app.get("/")
async def long_operation():
logging.info(f"Starting long operation {r}")
await asyncio.sleep(1)
time.sleep(4) # I know this is blocking and the endpoint marked as async, but I actually do have some blocking requests in my code.
return r
</code></pre>
<p>And I run the app using this comand:</p>
<pre><code>uvicorn "main:app" --workers 4
</code></pre>
<p>And the app starts 4 instances in different processes:</p>
<pre><code>INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started parent process [22112]
INFO | 2023-05-11 12:32:43,544 | root | Starting app 17
INFO: Started server process [10180]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO | 2023-05-11 12:32:43,579 | root | Starting app 58
INFO: Started server process [29592]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO | 2023-05-11 12:32:43,587 | root | Starting app 12
INFO: Started server process [7296]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO | 2023-05-11 12:32:43,605 | root | Starting app 29
INFO: Started server process [15208]
INFO: Waiting for application startup.
INFO: Application startup complete.
</code></pre>
<p>Then I open the 3 browser tabs and start sending requests to the app as parallel as possible. And here is the log:</p>
<pre><code>INFO | 2023-05-11 12:32:50,770 | root | Starting long operation 29
INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK
INFO | 2023-05-11 12:32:55,774 | root | Starting long operation 29
INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK
INFO | 2023-05-11 12:33:00,772 | root | Starting long operation 29
INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK
INFO | 2023-05-11 12:33:05,770 | root | Starting long operation 29
INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK
INFO | 2023-05-11 12:33:10,790 | root | Starting long operation 29
INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK
INFO | 2023-05-11 12:33:15,779 | root | Starting long operation 29
INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK
INFO | 2023-05-11 12:33:20,799 | root | Starting long operation 29
INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK
INFO | 2023-05-11 12:33:25,814 | root | Starting long operation 29
INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK
INFO | 2023-05-11 12:33:30,856 | root | Starting long operation 29
INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK
</code></pre>
<p>My observations:</p>
<ol>
<li>Only 1 process is working. Others do not handle requests (I have tried many times. It is always like that.)</li>
<li>4 different instances are created.</li>
</ol>
<p>My questions:</p>
<ol>
<li>Why only one process does work and others don't?</li>
<li>If I want to have an in-memory cache. Can I achieve that?</li>
<li>Can I run 1 process which can handle some amount of requests in parallel?</li>
<li>Can this be somehow related to the fact that I do tests on Windows?</li>
</ol>
<p><strong>UPDATE+SOLUTION:</strong></p>
<p>My real problem was the <a href="https://fastapi.tiangolo.com/async/#path-operation-functions" rel="nofollow noreferrer">def/async def</a> behavior (which I find very confusing). I was trying to work around a blocked thread by using multiple workers, which also behaved strangely in my case (only 1 actually handled requests), probably because I tested with a single browser and many tabs. Once I tested the service using JMeter, it showed me that all workers were used. But the solution with multiple processes was not the right one for me. The better one was to unblock the single thread in a single process. At first, I used the <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor" rel="nofollow noreferrer">following approach</a> because I was using an external library with a sync IO function. However, I then found an async variant of that function, so the problem was solved by using the correct library.
Thank you all for your help.</p>
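<p>(For reference, the unblocking pattern mentioned above can be sketched without FastAPI; <code>blocking_io</code> here is a hypothetical stand-in for the sync library call.)</p>

```python
import asyncio
import time

def blocking_io():
    # Hypothetical stand-in for a blocking third-party library call
    time.sleep(0.1)
    return "done"

async def long_operation():
    loop = asyncio.get_running_loop()
    # Off-load the blocking call to the default thread-pool executor,
    # so the event loop stays free to serve other requests meanwhile
    return await loop.run_in_executor(None, blocking_io)

print(asyncio.run(long_operation()))
```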
|
<python><multithreading><fastapi><uvicorn>
|
2023-05-11 10:36:23
| 1
| 2,654
|
Anubis
|
76,226,693
| 8,110,010
|
How to decently destroy opengl resources when using QOpenGLWidget in pyqt5
|
<p>First of all, My codes are as follows:</p>
<pre class="lang-py prettyprint-override"><code>from PyQt5.QtWidgets import QApplication, QOpenGLWidget, QMainWindow
from PyQt5.QtGui import QPainter, QOpenGLShader, QOpenGLShaderProgram, QMatrix4x4, QOpenGLBuffer, QOpenGLVertexArrayObject
from PyQt5.QtCore import Qt, QTimer
import numpy as np
import OpenGL.GL as gl
import sys
class OpenGLWidget(QOpenGLWidget):
def __init__(self, parent=None):
super().__init__(parent)
self.frame_count = 0
def initializeGL(self):
self.program = QOpenGLShaderProgram()
self.program.addShaderFromSourceCode(QOpenGLShader.Vertex, """
attribute highp vec4 aPos;
void main() {
gl_Position = aPos;
}
""")
self.program.addShaderFromSourceCode(QOpenGLShader.Fragment, """
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
""")
self.program.link()
self.program.bind()
self.vertices = np.array([
[-0.5, -0.5, 0.0, 1.0],
[0.0, -0.5, 0.0, 1.0],
[-0.25, 0.0, 0.0, 1.0],
[-0.5, 0.5, 0.0, 1.0],
[0.0, 0.5, 0.0, 1.0],
], dtype=np.float32)
self.indices = np.array([
0,1,2,2,3,4
], dtype=np.uint16)
self.vao = QOpenGLVertexArrayObject()
self.vbo = QOpenGLBuffer(QOpenGLBuffer.VertexBuffer)
self.ibo = QOpenGLBuffer(QOpenGLBuffer.IndexBuffer)
self.vao.create()
self.vbo.create()
self.ibo.create()
self.vao.bind()
self.vbo.bind()
self.ibo.bind()
self.vbo.allocate(self.vertices.nbytes)
self.ibo.allocate(self.indices.nbytes)
self.vbo.write(0, self.vertices.tobytes(), self.vertices.nbytes)
self.ibo.write(0, self.indices.tobytes(), self.indices.nbytes)
posAttribLoc = self.program.attributeLocation("aPos")
self.program.setAttributeBuffer(posAttribLoc, gl.GL_FLOAT, 0, 4, 0)
self.program.enableAttributeArray(posAttribLoc)
self.vbo.release()
self.vao.release()
def paintGL(self):
gl = self.context().versionFunctions()
gl.glClearColor(0.33,0.33,0.33, 1.0)
gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
self.vao.bind()
self.program.bind()
gl.glDrawElements(gl.GL_TRIANGLES, len(self.indices), gl.GL_UNSIGNED_SHORT, None)
self.frame_count += 1
def resizeGL(self, width, height):
gl.glViewport(0, 0, width, height)
def __del__(self):
self.vao.destroy()
self.vbo.destroy()
self.ibo.destroy()
self.program.removeAllShaders()
pass
if __name__ == '__main__':
app = QApplication(sys.argv)
widget = OpenGLWidget()
window = QMainWindow()
window.setCentralWidget(widget)
window.setGeometry(0,0,640,480)
window.show()
sys.exit(app.exec_())
</code></pre>
<p>Everything goes well, I can render it as expected. But python complains:</p>
<pre><code>in __del__
RuntimeError: wrapped C/C++ object of type QOpenGLVertexArrayObject has been deleted
</code></pre>
<p>when I close the window. Guess the buffer is freed twice so I change the destructor to</p>
<pre><code> def __del__(self):
pass
</code></pre>
<p>while the interpreter complain</p>
<pre><code>QOpenGLVertexArrayObject::destroy() failed to restore current context
</code></pre>
<p>After closing. I think there must be something wrong on freeing, can anyone figure out proper way to release these buffers?</p>
<p><strong>UPDATE</strong><br />
Enlightened by the <a href="https://devdocs.io/qt%7E5.12/qopenglwidget" rel="nofollow noreferrer"><em>Resource Initialization and Cleanup</em> topic in the Qt docs</a>, we can connect the <code>aboutToBeDestroyed</code> signal to a customized slot function for cleanup:</p>
<pre><code> def __init__(self, parent=None):
super().__init__(parent)
self.frame_count = 0
self.glContext = None
def paintGL(self):
if self.glContext is None:
self.glContext = self.context()
self.glContext.aboutToBeDestroyed.connect(self.cleanup)
gl = self.glContext.versionFunctions()
gl.glClearColor(0.33,0.33,0.33, 1.0)
gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
self.vao.bind()
self.program.bind()
gl.glDrawElements(gl.GL_TRIANGLES, len(self.indices), gl.GL_UNSIGNED_SHORT, None)
self.frame_count += 1
def cleanup(self):
print('cleanup called')
self.makeCurrent()
self.vao.destroy()
self.vbo.destroy()
self.ibo.destroy()
self.program.removeAllShaders()
self.doneCurrent()
</code></pre>
<p>However, <code>cleanup</code> is still not called after closing the window.</p>
|
<python><python-3.x><pyqt5><pyopengl><qtopengl>
|
2023-05-11 10:35:45
| 1
| 865
|
Finley
|
76,226,619
| 12,858,691
|
GCP code editor python debugger does not show its output
|
<p>Testing the GCP code editor, I noticed that the Python debugging console does not show its own output. E.g., for <code>1+1</code> it should show <code>2</code>. I know from my VS Code that sometimes the output is shown in another console/terminal. Any suggestions on how to get the output within the debugging console?</p>
<p>The debugging console in GCP:
<a href="https://i.sstatic.net/LVxns.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LVxns.gif" alt="gcp debugger example" /></a></p>
<p>My debugging console on my local machine as reference:
<a href="https://i.sstatic.net/OsQhM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OsQhM.png" alt="local vsc debugger example" /></a></p>
|
<python><google-cloud-platform>
|
2023-05-11 10:26:15
| 0
| 611
|
Viktor
|
76,226,579
| 9,488,023
|
Change value in a Pandas dataframe column based on several conditions
|
<p>What I have is a long Pandas dataframe in Python that contains three columns named 'file', 'comment', and 'number'. A simple example is:</p>
<pre><code>import pandas as pd
df_test = pd.DataFrame(data = None, columns = ['file','comment','number'])
df_test.file = ['file_1', 'file_1', 'file_1_v2', 'file_2', 'file_2', 'file_3', 'file_4', 'file_4_v2', 'file_5']
df_test.comment = ['none: 2', 'old', 'Replacing: file_1', 'v1', 'v2', 'none', 'old', 'Replacing: file_4', 'none']
df_test.number = [12, 12, 12, 13, 13, 13, 14, 14, 15]
</code></pre>
<p>Each file should have a unique number associated with it, but the data currently has numerous errors where many unique files have been given the same number. There are also files which have the same name but are different versions and should share the same number, as well as files which have different names but whose comment shows that they are supposed to share a number.</p>
<p>In the example, files that have the same name or have a comment that starts with the string 'Replacing: ' should not have their number changed, but if a file has a different name and the same number as a previous file, I want the number of that file and every subsequent number to increase by one, meaning the end result here should be:</p>
<p>[12, 12, 12, 13, 13, 14, 15, 15, 16]</p>
<p>My idea was to check if each file has the same number as the previous in the list, and if it does, and the name of the file is not the same, and the comment does not start with the string 'Replacing: ', the value of the number and all following numbers will increase by one, but I am not sure how to write this code. Any help is really appreciated, thanks!</p>
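<p>(A rough sketch of the idea described above, using a running counter that only advances on genuinely new files; it assumes, as in the example, that a 'Replacing: ' row always directly follows the file it replaces.)</p>

```python
import pandas as pd

df_test = pd.DataFrame({
    'file': ['file_1', 'file_1', 'file_1_v2', 'file_2', 'file_2',
             'file_3', 'file_4', 'file_4_v2', 'file_5'],
    'comment': ['none: 2', 'old', 'Replacing: file_1', 'v1', 'v2',
                'none', 'old', 'Replacing: file_4', 'none'],
    'number': [12, 12, 12, 13, 13, 13, 14, 14, 15],
})

new_numbers = []
counter = df_test.number.iloc[0] - 1   # start one below the first number
prev_file = None
for file, comment in zip(df_test.file, df_test.comment):
    # Repeated file names and explicit replacements keep the current
    # number; everything else gets a fresh one
    if file != prev_file and not comment.startswith('Replacing: '):
        counter += 1
    new_numbers.append(counter)
    prev_file = file

df_test['number'] = new_numbers
print(df_test.number.tolist())  # [12, 12, 12, 13, 13, 14, 15, 15, 16]
```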
|
<python><pandas><dataframe><indexing><conditional-formatting>
|
2023-05-11 10:22:27
| 2
| 423
|
Marcus K.
|
76,226,385
| 1,115,237
|
Dynaconf Object Access Fails After Loading Configuration from S3 Unless Iterated Over
|
<p>I'm encountering a strange issue while loading TOML configuration files from an AWS S3 bucket into a Dynaconf object in Python.</p>
<p>Here's a simplified version of the code I'm using:</p>
<pre><code>import os
import boto3
from dynaconf import Dynaconf
def load_settings(template_name: str) -> Dynaconf:
s3 = boto3.client("s3")
key = f"{template_name}.toml"
obj = s3.get_object(Bucket="my_bucket", Key=key)
toml_str = obj["Body"].read().decode("utf-8")
temp_file = f"{template_name}.toml"
# Write the TOML string to the temporary file
    with open(temp_file, "w") as file:
file.write(toml_str)
settings = Dynaconf(
envvar_prefix="DYNACONF",
environments=True,
settings_files=[temp_file]
)
# Iterating over the items
for k, v in settings.items():
print(k, v)
# Now I can access the values
print(settings.my_value)
os.remove(temp_file)
return settings
</code></pre>
<p>The problem arises when I try to directly access a value from the settings object (for example, settings.my_value) after loading the configuration from the S3 bucket. This direct access fails unless I previously iterate over the items in settings.</p>
<p>Expected behavior: I should be able to directly access a value from the settings object without first iterating over all the items.</p>
<p>Actual behavior: Direct access fails with an error message stating that the requested key doesn't exist, unless I first iterate over the items in settings.</p>
<p>This is particularly puzzling because if I comment out the iteration over the items in settings, the print statement fails, stating that 'my_value' doesn't exist. But, if I leave the iteration in place, the print statement succeeds.</p>
<p>Any ideas why this might be happening? Is there something about how Dynaconf loads or accesses data that I'm missing here? Any help would be greatly appreciated!</p>
<p><strong>Update:</strong> even better, give me a guideline on what would be the proper way of loading a remote settings file.</p>
|
<python><amazon-web-services><amazon-s3><settings><dynaconf>
|
2023-05-11 09:58:59
| 2
| 8,953
|
Shlomi Schwartz
|
76,226,239
| 17,267,064
|
Mail move error | Imaplib | COPY command error: BAD [b'Command Argument Error. 12']
|
<p>I wish to move emails to my custom folder named 'Test Folder' using the imaplib library. I am able to move emails to the "Archive" folder but not to my custom-made folder.</p>
<p>I get the error below when I attempt to copy an email to my 'Test Folder'.</p>
<pre><code>error Traceback (most recent call last)
Cell In[30], line 3
1 for num in msg_num[0].split():
2 # imap.uid('COPY', num, 'archive')
----> 3 imap.copy(num, 'Test Folder')
File c:\Program Files\Python311\Lib\imaplib.py:486, in IMAP4.copy(self, message_set, new_mailbox)
481 def copy(self, message_set, new_mailbox):
482 """Copy 'message_set' messages onto end of 'new_mailbox'.
483
484 (typ, [data]) = .copy(message_set, new_mailbox)
485 """
--> 486 return self._simple_command('COPY', message_set, new_mailbox)
File c:\Program Files\Python311\Lib\imaplib.py:1230, in IMAP4._simple_command(self, name, *args)
1228 def _simple_command(self, name, *args):
-> 1230 return self._command_complete(name, self._command(name, *args))
File c:\Program Files\Python311\Lib\imaplib.py:1055, in IMAP4._command_complete(self, name, tag)
1053 self._check_bye()
1054 if typ == 'BAD':
-> 1055 raise self.error('%s command error: %s %s' % (name, typ, data))
1056 return typ, data
error: COPY command error: BAD [b'Command Argument Error. 12']
</code></pre>
<p>Below is my code.</p>
<pre><code>import imaplib
imap = imaplib.IMAP4_SSL("outlook.office365.com")
imap.login("Email", "Password")
imap.select("Inbox")
_, msg_num = imap.search(None, 'SINCE "25-Jan-2021" BEFORE "26-Jan-2021"')
for num in msg_num[0].split():
imap.copy(num, 'Test Folder')
</code></pre>
<p>I tried below code for moving email to archive and it moved and gave no error.</p>
<pre><code>imap.copy(num, 'archive')
</code></pre>
<p>What am I doing wrong? I tried different folder names as well, but encountered the same error with all of them.</p>
|
<python><python-3.x><imaplib>
|
2023-05-11 09:41:55
| 0
| 346
|
Mohit Aswani
|
76,226,189
| 16,423,684
|
What's an easy to get explanation for the difference between built-in types and objects?
|
<p>I'm referring to <a href="https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/RubyPythonCocoa/Articles/RubyPythonMacOSX.html" rel="nofollow noreferrer">this article</a>. I can't fully understand this sentence:</p>
<blockquote>
<p>While Python code can contain both objects and built-in types, in Ruby everything is an object. There are no primitive or built-in types, such as integers. Thus anything in Ruby code can accept messages.</p>
</blockquote>
<p>What's the difference between for example Ruby's <a href="https://ruby-doc.org/core-2.5.0/Integer.html" rel="nofollow noreferrer">Integer</a> and Python's <a href="https://docs.python.org/3/library/stdtypes.html#numeric-types-int-float-complex" rel="nofollow noreferrer">int</a>?</p>
<p>I mean: Integer, String, etc. are available in Ruby without any imports as well. Where is the difference?</p>
<p>Can someone give an easy explanation of what built-in types are and how they differ from the classes of the standard library?</p>
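<p>(For what it's worth, Python's built-in <code>int</code> is itself a class and its values are full objects with methods, which is part of why the quoted distinction is confusing:)</p>

```python
# Built-in types in Python are classes; 1 is an instance of int,
# which itself inherits from object
print((1).bit_length())       # 1
print(isinstance(1, object))  # True
print(type(1).__mro__)        # (<class 'int'>, <class 'object'>)
```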
|
<python><ruby>
|
2023-05-11 09:36:45
| 0
| 557
|
a.hess
|
76,225,994
| 1,021,819
|
How can I create and update a dict simultaneously?
|
<p>I see a tonne of answers on how to update an existing <code>dict</code> if a key doesn't exist, but my question is different. How can I update a dictionary but create the <em>dictionary</em> if it doesn't exist?</p>
<p>Use case - single cell of Jupyter notebook:</p>
<pre class="lang-py prettyprint-override"><code>my_dict |= function_return_dict
</code></pre>
<p>I want to do something like the above without doing:</p>
<pre class="lang-py prettyprint-override"><code>my_dict = {}
my_dict |= function_return_dict(arg1, arg2, ...)
</code></pre>
<p>so that I can run all in the notebook and/or update <code>my_dict</code> on multiple successive runs (with different function args).</p>
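<p>(One common notebook idiom for this is to guard the name with <code>try</code>/<code>except NameError</code>; <code>function_return_dict</code> below is a hypothetical stand-in for the real function, and <code>|=</code> needs Python 3.9+.)</p>

```python
def function_return_dict(key, value):
    # Hypothetical stand-in for the real function
    return {key: value}

try:
    my_dict  # probe whether the name is already bound
except NameError:
    my_dict = {}

my_dict |= function_return_dict("a", 1)
my_dict |= function_return_dict("b", 2)
print(my_dict)  # {'a': 1, 'b': 2}
```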
|
<python><jupyter-notebook>
|
2023-05-11 09:15:56
| 1
| 8,527
|
jtlz2
|
76,225,890
| 10,836,309
|
module 'PyPDF2' has no attribute 'ContentStream' error
|
<p>I am trying to run the following code to replace text inside a PDF file:</p>
<pre><code>import os
import re
import PyPDF2
from io import StringIO
# Define a function to replace text in a PDF file
def replace_text_in_pdf(input_pdf_path, output_pdf_path, search_text, replace_text):
# Open the input PDF file in read-binary mode
with open(input_pdf_path, 'rb') as input_file:
# Create a PDF reader object
pdf_reader = PyPDF2.PdfReader(input_file)
# Create a PDF writer object
pdf_writer = PyPDF2.PdfWriter()
# Iterate through each page of the PDF
for page_num in range(len(pdf_reader.pages)):
# Get the page object
page = pdf_reader.pages[page_num]
# Get the text content of the page
text = page.extract_text()
# Replace the search text with the replace text
new_text = re.sub(search_text, replace_text, text)
# Create a new page with the replaced text
new_page = PyPDF2.PageObject.create_blank_page(None, page.mediabox.width, page.mediabox.height)
new_page.merge_page(page) # Copy the original page content to the new page
new_page.add_transformation(PyPDF2.Transformation().translate(0, 0).scale(1, 1)) # Reset the transformation matrix
# Begin the text object
new_page._text = PyPDF2.ContentStream(new_page.pdf)
new_page._text.beginText()
# Set the font and font size
new_page._text.setFont("Helvetica", 12)
# Draw the new text on the page
x, y = 100, 100 # Replace with the desired position of the new text
new_page._text.setFontSize(12)
new_page._text.textLine(x, y, new_text)
# End the text object
new_page._text.endText()
# Add the new page to the PDF writer object
pdf_writer.addPage(new_page)
# Save the new PDF file
with open(output_pdf_path, 'wb') as output_file:
pdf_writer.write(output_file)
# Call the function to replace text in a PDF file
input_pdf_path = r'D:\file1.pdf' # Replace with your input PDF file path
output_pdf_path = r'D:\file1_replaced.pdf' # Replace with your output PDF file path
search_text = '<FirstName>' # Replace with the text you want to replace
replace_text = 'John' # Replace with the text you want to replace it with
replace_text_in_pdf(input_pdf_path, output_pdf_path, search_text, replace_text)
</code></pre>
<p>However, line: <code>new_page._text = PyPDF2.ContentStream(new_page.pdf)</code> is giving me the following error: <code>module 'PyPDF2' has no attribute 'ContentStream'</code>.</p>
<p>Can someone help how to fix it?</p>
|
<python><pypdf>
|
2023-05-11 09:05:10
| 2
| 6,594
|
gtomer
|
76,225,840
| 6,631,639
|
snakemake target rule as variable for python code
|
<p>I have a snakemake workflow that does a lot of work before it reaches the definition of the rules (querying files, accessing databases, filtering a data frame). Not all of this work is necessary, depending on the target rule I want to invoke.</p>
<p>How can I know which target rule is selected on the command line? Then I could have the pure-python work done upfront be limited to only what is necessary for that target rule.</p>
<p>Desired example, for which I need to know if the "snakemake.target_rule" variable exists and which variable it actually is:</p>
<pre class="lang-py prettyprint-override"><code>def do_work_for_target_a():
lots_of_work()
return table
def do_work_for_target_b():
lots_of_work()
return table
if snakemake.target_rule == 'a':
table_a = do_work_for_target_a()
elif snakemake.target_rule == 'b':
table_b = do_work_for_target_b()
else:
table_a = do_work_for_target_a()
table_b = do_work_for_target_b()
rule all:
input:
"output_a.txt",
"output_b.txt",
rule a:
input:
"output_a.txt",
rule b:
input:
"output_b.txt",
</code></pre>
|
<python><snakemake><directed-acyclic-graphs><orchestration>
|
2023-05-11 08:58:23
| 3
| 527
|
Wouter De Coster
|
76,225,668
| 12,285,101
|
Remove duplicated values appear in two columns in dataframe
|
<p>I have table similar to this one:</p>
<pre><code>index name_1 path1 name_2 path2
0 Roy path/to/Roy Anne path/to/Anne
1 Anne path/to/Anne Roy path/to/Roy
2 Hari path/to/Hari Wili path/to/Wili
3 Wili path/to/Wili Hari path/to/Hari
4 Miko path/to/miko Lin path/to/lin
5 Miko path/to/miko Dan path/to/dan
6 Lin path/to/lin Miko path/to/miko
7 Lin path/to/lin Dan path/to/dan
8 Dan path/to/dan Miko path/to/miko
9 Dan path/to/dan Lin path/to/lin
...
</code></pre>
<p>As you can see, the table kind of shows relationships between entities:<br />
Roy is with Anne,<br />
Wili with Hari,<br />
Lin with Dan and with Miko.</p>
<p>The table is actually showing overlap data, meaning that Hari and Wili, for example, have the same document, and I would like to remove one of them so as not to have duplicated files.
In order to do this, I would like to create a new table that has only one row per relationship, so I can later create a list of paths to remove.</p>
<p>The result table will look like this :</p>
<pre><code>index name_1 path1 name_2 path2
0 Roy path/to/Roy Anne path/to/Anne
1 Hari path/to/Hari Wili path/to/Wili
2 Miko path/to/miko Lin path/to/lin
3 Miko path/to/miko Dan path/to/dan
</code></pre>
<p>The idea is that I'll use the values of "path2" to remove files with that path, while still having the files in "path1".
For that reason,
this row:</p>
<pre><code>4 Lin path/to/lin Dan path/to/dan
</code></pre>
<p>is missing, as it will be removed using Miko...
any ideas how to do this ? :)</p>
<p>Edit:</p>
<p>I have tried this based on <a href="https://stackoverflow.com/questions/55480504/efficient-way-in-pandas-for-removing-columns-with-duplicate-values-in-different">this answer</a>:</p>
<pre><code>df_2= df[~pd.DataFrame(np.sort(df.values,axis=1)).duplicated()]
</code></pre>
<p>And it's true that I get fewer rows in my dataframe (it had 695 rows and now has 402), but I still have the first lines like this:</p>
<pre><code>index name_1 path1 name_2 path2
0 Roy path/to/Roy Anne path/to/Anne
1 Anne path/to/Anne Roy path/to/Roy
...
</code></pre>
<p>meaning I still get the same issue</p>
|
<python><pandas><drop>
|
2023-05-11 08:36:24
| 1
| 1,592
|
Reut
|
76,225,650
| 11,169,692
|
How to convert json into excel format using python
|
<p>I tried to convert a JSON file into Excel, but somehow pandas is not able to do it for all the keys.</p>
<p>I have a json input:</p>
<pre><code>{
"result": [
{
"level": "L3_SW",
"name": "L23",
"type": "CM"
},
{
"level": "L3_SW",
"name": "SOFT",
"type": "QM"
}],
"context": {
"config": {
"project_area_name": "XYZ",
"component_name": "Configuration",
"config_name": "_WorkOn",
"bu": "H"
},
"meta": {
"project": {
"name": "_WorkOn",
"key": "2023-05-02_96614cc50ac8dc7e121f7090",
"started_at": "2023-05-02-16-00"
},
"task": {
"started": "2023-05-02-16-00",
"finished": "2023-05-02-16-00",
"req_count": 1
}
}
}
}
</code></pre>
<p>This is just one example of a JSON file, and I don't know what the structure of other JSON inputs will be.</p>
<p>I tried with the pandas library, but I guess it requires a specific key, which I don't want.</p>
<pre><code>json_file_path = filedialog.askopenfilename(title='Select a JSON file', filetypes=[('JSON files','*.json')])
# Load the JSON file into a pandas dataframe
data = json.load(open(json_file_path))
print(data)
df = pd.DataFrame(data["result"]) #=============> this giving me output of excel column only of result array.
# Ask the user to select a location to save the Excel file
excel_file_path = filedialog.asksaveasfilename(title='Save as Excel',defaultextension='.xlsx')
# Save the dataframe as an Excel file
df.to_excel(excel_file_path, index=False)
</code></pre>
<p>How can I convert the entire JSON to Excel, irrespective of the JSON structure, so that I get columns for all the keys?</p>
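<p>(As a rough, structure-agnostic starting point, <code>pandas.json_normalize</code> can flatten the nested dicts without naming any keys; list-valued keys such as <code>result</code> stay as single cells and would still need a <code>record_path</code> or an <code>explode</code> to expand.)</p>

```python
import pandas as pd

data = {
    "result": [{"level": "L3_SW", "name": "L23", "type": "CM"}],
    "context": {"config": {"project_area_name": "XYZ", "bu": "H"}},
}

# Nested dicts become dotted column names; lists are left as-is
flat = pd.json_normalize(data, sep='.')
print(sorted(flat.columns))
# ['context.config.bu', 'context.config.project_area_name', 'result']
```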
|
<python><pandas>
|
2023-05-11 08:34:36
| 1
| 503
|
NoobCoder
|
76,225,612
| 1,278,896
|
Why some python class properties aren't visible within the class in certain positions?
|
<p>I've coded something like</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
'''
Illustration
'''
class C1:
lst1 = "one", "two", "three"
lst2 = "red", "green", "blue"
dct1 = { "ten": 10, "eleven": 11, "twelve": 12 }
lst3 = tuple(dct1.keys())
lst4 = tuple(x for x in lst2)
lst5 = tuple(x for x in lst1 if x not in lst2)
# vim:set ft=python ai et ts=4 sts=4 sw=4 cc=80:EOF #
</code></pre>
<p>then ran the code and got</p>
<pre><code>Traceback (most recent call last):
File "/home/jno/xxx.py", line 7, in <module>
class C1:
File "/home/jno/xxx.py", line 13, in C1
lst5 = tuple(x for x in lst1 if x not in lst2)
File "/home/jno/xxx.py", line 13, in <genexpr>
lst5 = tuple(x for x in lst1 if x not in lst2)
NameError: name 'lst2' is not defined
</code></pre>
<p>So, I wonder, why the hell is <code>lst2</code> <strong>visible</strong> during the creation of <code>lst4</code> and <strong>not visible</strong> in <code>lst5</code>?</p>
<p>I mean, why does the same symbol's visibility differ between clauses of the same expression?</p>
<p>Or how do I have to reference it here?</p>
<p>PS. <strong>RESOLVED</strong> (Thanks to moderator for way better refs than were provided at the time of question creation!)</p>
<p>It <strong>will</strong> work in this notation:</p>
<pre class="lang-py prettyprint-override"><code>class C1:
lst1 = "one", "two", "three"
lst2 = "red", "green", "blue"
dct1 = { "ten": 10, "eleven": 11, "twelve": 12 }
lst3 = tuple(dct1.keys())
lst4 = tuple(x for x in lst2)
# lst5 = tuple(x for x in lst1 if x not in lst2)
lst5 = (lambda P1, P2: tuple(x for x in P1 if x not in P2))(lst1, lst2) # move name resolution out of "function scope"
</code></pre>
<p>See <a href="https://stackoverflow.com/a/40849176/1278896">https://stackoverflow.com/a/40849176/1278896</a>
And <a href="https://docs.python.org/3/reference/executionmodel.html#resolution-of-names" rel="nofollow noreferrer">https://docs.python.org/3/reference/executionmodel.html#resolution-of-names</a></p>
|
<python>
|
2023-05-11 08:30:35
| 0
| 1,047
|
jno
|
76,225,556
| 9,488,023
|
Add a value to each cell in a Pandas dataframe column if another column contains a certain string
|
<p>I have a very long and complicated Pandas dataframe in Python consisting of many columns, but an example would be something like:</p>
<pre><code>df_test = pd.DataFrame(data = None, columns = ['file','comment','number'])
df_test.file = ['file_1', 'file_1_v2', 'file_2', 'file_3', 'file_4', 'file_4_v2', 'file_5']
df_test.comment = ['old', 'Replacing: file_1', 'none', 'new: 3', 'maybe', 'Replacing: file_4', 'none']
df_test.number = ['12', '12', '13', '13', '14', '14', '15']
</code></pre>
<p>What this shows is that the dataframe contains the names of several files, each of which has a comment and a number associated with it. Here, files which have a comment starting with 'Replacing: ' should have the same value in 'number' as the file they are replacing, but other files should not share a number, as 'file_2' and 'file_3' do here.</p>
<p>What I want to do is to increase the 'number' value whenever a duplicate of that value is found for both the duplicate and all files after that, as long as the 'comment' cell does not start with the string 'Replacing: '. This means that the 'number' column should end up looking like:</p>
<pre><code>[12, 12, 13, 14, 15, 15, 16]
</code></pre>
<p>I figured it might work with a for- and if-loop, but I'm really not sure and any help would be appreciated, thanks!</p>
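<p>(A simple sketch of one possible approach: a running counter that only advances on rows that are not replacements. It assumes, as in the example, that a 'Replacing: ' row always directly follows the file it replaces.)</p>

```python
import pandas as pd

df_test = pd.DataFrame({
    'file': ['file_1', 'file_1_v2', 'file_2', 'file_3', 'file_4', 'file_4_v2', 'file_5'],
    'comment': ['old', 'Replacing: file_1', 'none', 'new: 3', 'maybe',
                'Replacing: file_4', 'none'],
    'number': ['12', '12', '13', '13', '14', '14', '15'],
})

new_numbers = []
counter = int(df_test.number.iloc[0]) - 1   # start one below the first number
for comment in df_test.comment:
    # Only a row that is NOT a replacement gets a fresh number
    if not comment.startswith('Replacing: '):
        counter += 1
    new_numbers.append(counter)

df_test['number'] = new_numbers
print(df_test.number.tolist())  # [12, 12, 13, 14, 15, 15, 16]
```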
|
<python><pandas><dataframe><indexing>
|
2023-05-11 08:24:04
| 1
| 423
|
Marcus K.
|
76,225,517
| 15,239,717
|
How Can I Write and Run a Django User Registration Unit Test Case
|
<p>I am working on a Django project that contains two apps, account and realestate, and I have a user registration form. This registration form works correctly on my local machine, but I want to write and run a Django unit test for it to be sure everything is working well internally.
Below is my registration form code:</p>
<pre><code>from django import forms
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User
# Define the user_type choices
USER_TYPE_CHOICES = (
('landlord', 'Landlord'),
('agent', 'Agent'),
('prospect', 'Prospect'),
)
# Create the registration form
class RegistrationForm(UserCreationForm):
username = forms.CharField(widget=forms.TextInput(attrs={'class': 'form-control left-label'}))
password1 = forms.CharField(widget=forms.PasswordInput(attrs={'class': 'form-control left-label'}))
password2 = forms.CharField(widget=forms.PasswordInput(attrs={'class': 'form-control left-label'}))
user_type = forms.ChoiceField(choices=USER_TYPE_CHOICES, widget=forms.Select(attrs={'class': 'form-control left-label'}))
class Meta:
model = User
fields = ['username', 'password1', 'password2', 'user_type']
</code></pre>
<p>Here is my User Registration View code:</p>
<pre><code>def register(request):
if request.method == 'POST':
form = RegistrationForm(request.POST)
if form.is_valid():
# Create a new user object
user = form.save()
# Get the user_type value from the form
user_type = form.cleaned_data['user_type']
# Create a new object based on the user_type
if user_type == 'landlord':
Landlord.objects.create(user=user)
elif user_type == 'agent':
Agent.objects.create(user=user)
elif user_type == 'prospect':
Prospect.objects.create(user=user)
# Log the user in and redirect to the homepage
login(request, user)
return redirect('success-account')
else:
form = RegistrationForm()
context = {
'form': form,
'page_title': 'Register',
}
return render(request, 'account/register.html', context)
</code></pre>
<p>Here is my Unit Test Code:</p>
<pre><code>from django.contrib.auth.models import User
from django.test import TestCase, Client
from django.urls import reverse
from account.forms import RegistrationForm
from realestate.models import Landlord, Agent, Prospect
class RegisterViewTestCase(TestCase):
def setUp(self):
self.client = Client()
self.url = reverse('register-account')
def test_register_view_success(self):
data = {
'username': 'testuser',
'email': 'testuser@example.com',
'password1': 'testpassword',
'password2': 'testpassword',
'user_type': 'landlord'
}
form = RegistrationForm(data)
self.assertTrue(form.is_valid())
response = self.client.post(self.url, data)
self.assertEqual(response.status_code, 302)
self.assertRedirects(response, reverse('success-account'))
user = User.objects.get(username='testuser')
self.assertIsNotNone(user)
self.assertTrue(user.check_password('testpassword'))
landlord = Landlord.objects.get(user=user)
self.assertIsNotNone(landlord)
def test_register_view_invalid_form(self):
data = {
'username': 'testuser',
'email': 'testuser@example.com',
'password1': 'testpassword',
'password2': 'wrongpassword',
'user_type': 'landlord'
}
form = RegistrationForm(data)
self.assertFalse(form.is_valid())
response = self.client.post(self.url, data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'account/register.html')
form = response.context['form']
self.assertTrue(form.has_error('password2'))
user_exists = User.objects.filter(username='testuser').exists()
self.assertFalse(user_exists)
</code></pre>
<p>Any time I run <strong>python manage.py test account.tests.RegisterViewTestCase</strong> in the terminal, the error says: <strong>ERROR: RegisterViewTestCase (unittest.loader._FailedTest), AttributeError: module 'account.tests' has no attribute 'RegisterViewTestCase'</strong>.
What could be the issue here and how can it be fixed?</p>
|
<python><django>
|
2023-05-11 08:20:26
| 2
| 323
|
apollos
|
76,225,350
| 9,182,743
|
groupby mean of datetime64[ns] column
|
<p>I have a dataframe:</p>
<ul>
<li>user_id object</li>
<li>local time datetime64[ns]</li>
<li>value int32</li>
</ul>
<pre class="lang-py prettyprint-override"><code> user_id local time value
0 user1 2023-01-01 00:00:00 3
1 user1 2023-01-01 00:00:00 3
2 user1 2023-01-01 01:00:00 7
3 user1 2023-01-01 01:00:00 2
4 user2 2023-01-01 02:00:00 4
5 user2 2023-01-01 02:00:00 10
6 user2 2023-01-01 03:00:00 7
7 user2 2023-01-01 03:00:00 2
</code></pre>
<p>I want to:</p>
<ul>
<li>groupby user_id</li>
<li>mean of cols: "local time" (only time HH:MM:SS, not datetime) and "value"</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
# Set the random seed for reproducibility
np.random.seed(123)
# Define the number of users and values
num_users = 2
num_values = 4
# Generate the user IDs
user_ids = ['user{}'.format(i+1) for i in range(num_users)]
# Generate the local time values
local_time = pd.date_range(start='2023-01-01 00:00:00', periods=num_values, freq='H')
# Generate the random values
values = np.random.randint(1, 11, size=(num_values*num_users))
# Create the DataFrame
df = pd.DataFrame({
'user_id': np.repeat(user_ids, num_values),
'local time': np.repeat(local_time, num_users),
'value': values})
# calculate the mean of local time TIME - NOT datetime.
print (df)
print("expected_output")
'''
local time value
user1 00:30:00 3.75
user2 02:30:00 5.75
'''
df.groupby('user_id').mean()
</code></pre>
<h1>Expected Output:</h1>
<p>I want the mean of the time (hours, minutes and seconds, not the date) and the mean of value, grouped by user</p>
<pre class="lang-py prettyprint-override"><code> local time value
user1 00:30:00 3.75
user2 02:30:00 5.75
</code></pre>
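<p>One direction I've been sketching (untested beyond this toy data, and it leaves the result as a timedelta rather than a formatted HH:MM:SS string): convert the clock time to a timedelta since midnight, which averages cleanly under a groupby:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "user_id": ["user1"] * 4 + ["user2"] * 4,
    "local time": pd.to_datetime([
        "2023-01-01 00:00:00", "2023-01-01 00:00:00",
        "2023-01-01 01:00:00", "2023-01-01 01:00:00",
        "2023-01-01 02:00:00", "2023-01-01 02:00:00",
        "2023-01-01 03:00:00", "2023-01-01 03:00:00",
    ]),
    "value": [3, 3, 7, 2, 4, 10, 7, 2],
})

# Subtracting the normalized (midnight) timestamp leaves only the time of day
# as a timedelta, and timedeltas can be averaged per group.
tod = df["local time"] - df["local time"].dt.normalize()
out = df.assign(**{"local time": tod}).groupby("user_id").mean()
print(out)
```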
|
<python><pandas><group-by>
|
2023-05-11 08:01:39
| 2
| 1,168
|
Leo
|
76,225,163
| 14,365,042
|
Why can't Series.fillna() fill all NaN values?
|
<p>I want to fill the <code>NaN</code>s in a dataframe with random values:</p>
<pre><code>df1 = pd.DataFrame(
list(zip(
['0001', '0001', '0002', '0003', '0004', '0004'],
['a', 'b', 'a', 'b', 'a', 'b'],
['USA', 'USA', 'USA', 'USA', 'USA', 'USA'],
[np.nan, np.nan, 'Jan', np.nan, np.nan, 'Jan'],
[1,2,3,4,5,6])),
columns=['sample ID', 'compound', 'country', 'month', 'value'])
df1
</code></pre>
<p>Out:</p>
<pre><code> sample ID compound country month value
0 0001 a USA NaN 1
1 0001 b USA NaN 2
2 0002 a USA Jan 3
3 0003 b USA NaN 4
4 0004 a USA NaN 5
5 0004 b USA Jan 6
</code></pre>
<p>I slice the dataframe based on the <code>compound</code> column:</p>
<pre><code>df2 = df1.loc[df1.compound == 'a']
df2
</code></pre>
<p>Out:</p>
<pre><code> sample ID compound country month value
0 0001 a USA NaN 1
2 0002 a USA Jan 3
4 0004 a USA NaN 5
</code></pre>
<p>Then I tried to <code>fillna</code> with non-repeated values using <code>filler</code>:</p>
<pre><code>from numpy.random import default_rng
rng = default_rng()
filler = rng.choice(len(df2.month), size=len(df2.month), replace=False)
filler = pd.Series(-abs(filler))
df2.month.fillna(filler, inplace=True)
df2
</code></pre>
<p>Out:</p>
<pre><code> sample ID compound country month value
0 0001 a USA -1.0 1
2 0002 a USA Jan 3
4 0004 a USA NaN 5
</code></pre>
<p>I expected no <code>NaN</code> in the output, but some remain. Why?</p>
|
<python><pandas><fillna>
|
2023-05-11 07:37:28
| 2
| 305
|
Joe
|
76,225,072
| 2,473,382
|
How to set up the ID of a json element in rdflib
|
<p><strong>Context</strong></p>
<p>I am loading a rdf file in <a href="https://rdflib.readthedocs.io/en/stable/" rel="nofollow noreferrer">rdflib</a>, and am trying to export it in json-ld.</p>
<p>The original rdf looks like:</p>
<pre class="lang-xml prettyprint-override"><code><cim:Substation rdf:ID="_1234">
<cim:IdentifiedObject.name>A substation</cim:IdentifiedObject.name>
</cim:Substation>
</code></pre>
<p>The exported JSON-LD looks like:</p>
<pre class="lang-json prettyprint-override"><code>{
"@id": "file:///path/to/original/xml#_418866779",
"@type": "cim:Substation",
"cim:IdentifiedObject.name": "A substation"
},
</code></pre>
<p>I cannot seem to override the <code>file:///path/to/original/xml</code> component in json.</p>
<p><strong>Code</strong></p>
<p>This is what I am doing:</p>
<pre class="lang-py prettyprint-override"><code>from rdflib import Graph
g = Graph(
# Giving an identifier or base does not seem to change anything
)
g.parse('path to rdf')
js = g.serialize(
format="json-ld",
encoding="utf-8",
destination='path to json',
sort_keys=True, # repeatable output is Good.
context={ some relevant entries (rdf, cim) },
auto_compact=True,
# giving base does not seem to change anything
)
</code></pre>
<p><strong>Question</strong>
How can I set/override the first component of the ID in the exported json-ld?</p>
|
<python><rdf><json-ld><rdflib><linked-data>
|
2023-05-11 07:27:19
| 1
| 3,081
|
Guillaume
|
76,224,945
| 944,146
|
Fastest way to get a dictionary of counts of objects between 2 pandas series
|
<p>Let's say I have two equal-length lists: ss and ee. Each contains values such that ss[i] &lt;= ee[i] and ss[i+1] >= ee[i] for all i.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>ss = [0,10,20,30]
ee = [3,15,23,40]
vals = [0,1,2,5,7,10,11,16,21,22,23,29,31,35,45]
</code></pre>
<p>I want to return a dictionary of counts where the keys are the values of ss, and the values are the counts of vals that fall between ss and its corresponding values.</p>
<p>for the first iteration, the values of vals between 0 and 3 (inclusive) are 0,1,2 so the key of 0 would have a value of 3.</p>
<p>Here is the desired output for my examples: <code>{0: 3, 10: 2, 20: 3, 30: 2}</code></p>
<p>Here is my latest attempt:</p>
<pre class="lang-py prettyprint-override"><code>dCounts = {}
iv = 0
for i,e in enumerate(ss):
count = 0
s1 = e
s2 = ee[i]
while vals[iv] < s1:
iv += 1
while vals[iv] <= s2:
iv += 1
count += 1
dCounts[s1] = count
</code></pre>
<p>If len(ss) == n and len(vals) == m, then I think this runs in roughly O(m + n) time.</p>
<p>I can't think of a faster way to do it than that. I think I'm at the mercy of python's indexing for large lists though.</p>
<p>I've given this problem as simple python for clarity, but I'm really working with pandas series with datetime indices. I've been trying to leverage the speed I usually get out of pandas but can't seem to get anything fast enough to parse my large (~200,000) vectors in reasonable time.</p>
<p>I can't think of a good way to not have to interpret each time through the loop. I tried putting ss and ee into a data frame and using the .loc method on vals inside an apply function, but that performed worse than anything else I tried.</p>
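<p>One vectorized idea I've been toying with, in case it helps frame answers (a sketch that assumes <code>vals</code> is sorted, which holds for my datetime index): a pair of <code>np.searchsorted</code> calls gives each interval's count as a difference of insertion points:</p>

```python
import numpy as np

ss = np.array([0, 10, 20, 30])
ee = np.array([3, 15, 23, 40])
vals = np.array([0, 1, 2, 5, 7, 10, 11, 16, 21, 22, 23, 29, 31, 35, 45])

# For sorted vals, the count in [ss[i], ee[i]] (inclusive) is the difference
# between the right insertion point of ee[i] and the left insertion point of ss[i].
counts = np.searchsorted(vals, ee, side="right") - np.searchsorted(vals, ss, side="left")
d = dict(zip(ss.tolist(), counts.tolist()))
print(d)  # {0: 3, 10: 2, 20: 3, 30: 2}
```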
|
<python><pandas>
|
2023-05-11 07:08:07
| 2
| 664
|
pseudoabdul
|
76,224,917
| 21,404,794
|
Pandas replace(np.nan, value) vs fillna(value) which is faster?
|
<p>I'm trying to replace NaNs in different columns and I wanted to know which one is better (faster) for this task, replace or fillna.</p>
<p>Here's some sample code for the fillna option:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
other = pd.DataFrame({'key_2': ['K0.1', 'K1.1', 'K2.1'],
'B': ['B0', 'B1', 'B2']},index=[0,2,3])
result = df.join([other])
</code></pre>
<p>After this line the joined dataframe looks like this:</p>
<pre><code> key A key_2 B
0 K0 A0 K0.1 B0
1 K1 A1 NaN NaN
2 K2 A2 K1.1 B1
3 K3 A3 K2.1 B2
4 K4 A4 NaN NaN
5 K5 A5 NaN NaN
</code></pre>
<p>and after doing the fillna with</p>
<pre class="lang-py prettyprint-override"><code>result[['key','key_2']] = result[['key','key_2']].fillna('K0.0')
result[['A','B']] = result[['A','B']].fillna('B0.0')
</code></pre>
<p>it looks like this:</p>
<pre><code> key A key_2 B
0 K0 A0 K0.1 B0
1 K1 A1 K0.0 B0.0
2 K2 A2 K1.1 B1
3 K3 A3 K2.1 B2
4 K4 A4 K0.0 B0.0
5 K5 A5 K0.0 B0.0
</code></pre>
<p>Using the replace instead,</p>
<pre class="lang-py prettyprint-override"><code>result[['key','key_2']] = result[['key','key_2']].replace(np.nan,'K0.0')
result[['A','B']] = result[['A','B']].replace(np.nan,'B0.0')
</code></pre>
<p>The resulting dataframe is:</p>
<pre><code> key A key_2 B
0 K0 A0 K0.1 B0
1 K1 A1 K0.0 B0.0
2 K2 A2 K1.1 B1
3 K3 A3 K2.1 B2
4 K4 A4 K0.0 B0.0
5 K5 A5 K0.0 B0.0
</code></pre>
<p>As you can see, they both achieve the same result, at least as far as I've been able to test.</p>
<p>I have 2 questions:</p>
<ol>
<li>What kind of NaN does join create (seeing as np.nan is found, I think it's that one, but I want to be sure to catch every NaN created by the join method)</li>
<li>Which one is faster, fillna or replace?</li>
</ol>
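<p>For question 2, this is the kind of micro-benchmark I had in mind (a sketch; absolute timings will vary with machine, pandas version, and dtypes):</p>

```python
import numpy as np
import pandas as pd
import timeit

# A frame with a roughly 50/50 mix of NaNs and values in each column.
df = pd.DataFrame(np.random.choice([1.0, np.nan], size=(100_000, 4)),
                  columns=list("ABCD"))

t_fillna = timeit.timeit(lambda: df.fillna(0.0), number=20)
t_replace = timeit.timeit(lambda: df.replace(np.nan, 0.0), number=20)
print(f"fillna : {t_fillna:.3f}s")
print(f"replace: {t_replace:.3f}s")
```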
|
<python><pandas><dataframe><performance>
|
2023-05-11 07:04:02
| 1
| 530
|
David Siret Marqués
|
76,224,897
| 6,430,403
|
Scheduling job in APScheduler weekdays between 2 times
|
<p>I want to schedule a job Mon-Fri, between two given times, say 10:20 to 18:35, every 5 minutes.<br />
So it will run: 10:25, 10:30, 10:35, ..., 18:30, 18:35, every weekday.<br />
I tried combining CronTrigger with IntervalTrigger but wasn't able to make it work. Can anyone help?</p>
|
<python><multithreading><triggers><cron><apscheduler>
|
2023-05-11 07:01:26
| 1
| 401
|
Rishabh Gupta
|
76,224,705
| 7,921,635
|
How to validate a JSON list of dictionaries using Marshmallow in Python?
|
<p>How can I validate nested JSON data using Marshmallow?</p>
<p>This is what I came up with; currently I get:</p>
<p><code>{'_schema': ['Invalid input type.']}</code>
and I'm not sure why.</p>
<pre><code>from marshmallow import Schema, fields, validate
class AuthUserSchema(Schema):
user = fields.String()
password = fields.String()
enabled = fields.Boolean()
class AuthDataSchema(Schema):
data = fields.List(fields.Nested(AuthUserSchema))
data = [
{
"enabled": True,
"password": "6e5b5410415bde",
"user": "admin"
},
{
"enabled": True,
"password": "4e5b5410415bde",
"user": "guest"
},
]
schema = AuthDataSchema()
errors = schema.validate(data)
print(errors)
</code></pre>
|
<python><validation><marshmallow>
|
2023-05-11 06:26:50
| 1
| 427
|
Nir Vana
|
76,224,539
| 11,192,313
|
Remove/Erase object from image using python
|
<p>I have been trying to remove/erase an object, highlighted in red, from an image using OpenCV <a href="https://i.sstatic.net/U6Vj5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U6Vj5.png" alt="Original Image" /></a></p>
<p>I have achieved a somewhat closer result, but it only reduces the opacity of the object
<a href="https://i.sstatic.net/giyn1.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/giyn1.jpg" alt="Result I have achieved" /></a> while I want to make it completely invisible.</p>
<p>Here is the code:</p>
<pre><code>import cv2
import numpy as np
import sys

input = sys.argv[1]
# Load the image
img = cv2.imread(input)
# Define the color range of the object to be removed
lower_red = np.array([0, 0, 200])
upper_red = np.array([50, 50, 255])
# Create a mask that identifies the object to be removed
mask = cv2.inRange(img, lower_red, upper_red)
# Inpaint the object area using the TELEA algorithm
inpaint = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
# Save the inpainted image
cv2.imwrite(input, inpaint)
</code></pre>
|
<python><python-3.x><opencv><image-processing>
|
2023-05-11 05:58:43
| 2
| 345
|
Muhammad Adeel Shoukat
|
76,224,536
| 1,100,107
|
Integral on a polyhedral non-rectangular domain with Python
|
<p>You are a Python user and you want to evaluate this triple integral:</p>
<p><a href="https://i.sstatic.net/g5Nlo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g5Nlo.png" alt="enter image description here" /></a></p>
<p>for a certain function <code>f</code>, say <code>f(x,y,z) = x + y*z</code>.</p>
<p>In R, a possibility is to nest one-dimensional integrals:</p>
<pre class="lang-r prettyprint-override"><code>f <- function(x,y,z) x + y*z
integrate(Vectorize(function(x) {
integrate(Vectorize(function(y) {
integrate(function(z) {
f(x, y, z)
}, -10, 6 - x - y)$value
}), -5, 3 - x)$value
}), -5, 4)
## -5358.3 with absolute error < 9.5e-11
</code></pre>
<p>The point I don't like with this method is that it does not take into account
the error estimates on the first two integrals.</p>
<p>I'm interested in the Python way to realize this method.
But I'm also interested in a Python way to get a result with a more reliable error estimate.</p>
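<p>For reference, the most direct translation I can think of uses SciPy's <code>tplquad</code> (assuming SciPy is acceptable; note it expects the integrand with arguments in <code>(z, y, x)</code> order and returns a single absolute-error estimate for the whole triple integral):</p>

```python
from scipy.integrate import tplquad

# f(x, y, z) = x + y*z, but tplquad wants the integrand as func(z, y, x).
result, abserr = tplquad(
    lambda z, y, x: x + y * z,
    -5, 4,                    # x from -5 to 4
    lambda x: -5,             # y lower bound
    lambda x: 3 - x,          # y upper bound
    lambda x, y: -10,         # z lower bound
    lambda x, y: 6 - x - y,   # z upper bound
)
print(result, abserr)  # roughly -5358.3, matching the R result
```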
|
<python><integral>
|
2023-05-11 05:58:33
| 1
| 85,219
|
Stéphane Laurent
|
76,224,436
| 1,568,590
|
How to add an extra string to the display of an object with "display" in Jupyter, using Python?
|
<p>I have a number of mathematical objects created with SymPy, which display fine in Jupyter using "display". For example, there are a number of functions which I can display with:</p>
<pre><code>display(f1(x))
display(f2(x))
</code></pre>
<p>and so on. But what I want is to prefix each line with some explanatory text, so the output is, for example:</p>
<p>Question 1: f1(x)</p>
<p>Question 2: f2(x)</p>
<p>with the functions f1(x), f2(x) displayed properly typeset (which "display" provides).</p>
<p>The nearest I've got is with matrices:</p>
<pre><code>M = sympy.Matrix([["(a)",f1(x)],["(b)",f2(x)]])
display(M)
</code></pre>
<p>but this has two problems: (1) it shows the matrix delimiters, and (2) it strips the parentheses from "(a)" and "(b)".</p>
<p>So I'm still hoping for help: how to add a string to a line that uses "display". Thanks!</p>
|
<python><string><jupyter><display>
|
2023-05-11 05:40:37
| 1
| 1,414
|
Alasdair
|
76,224,359
| 21,305,238
|
PyCharm gives me a type warning about my metaclass; mypy disagrees
|
<p>I was trying to write a metaclass named <code>Singleton</code>, that, of course, implement the singleton design pattern:</p>
<pre class="lang-py prettyprint-override"><code>class Singleton(type):
def __new__(cls, name, bases = None, attrs = None):
if bases is None:
bases = ()
if attrs is None:
attrs = {}
new_class = type.__new__(cls, name, bases, attrs)
new_class._instance = None
return new_class
def __call__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = cls.__new__(cls, *args, **kwargs)
cls.__init__(cls._instance, *args, **kwargs)
return cls._instance
</code></pre>
<p>This seems to work correctly:</p>
<pre class="lang-py prettyprint-override"><code>class Foo(metaclass = Singleton):
pass
foo1 = Foo()
foo2 = Foo()
print(foo1 is foo2) # True
</code></pre>
<p>However, PyCharm gave me this warning for <code>cls._instance = cls.__new__(<b>cls</b>, *args, **kwargs)</code>:</p>
<pre class="lang-none prettyprint-override"><code>Expected type 'Type[Singleton]', got 'Singleton' instead
</code></pre>
<p>...and this for <code>cls.__init__(<b>cls._instance</b>, *args, **kwargs)</code>:</p>
<pre class="lang-none prettyprint-override"><code>Expected type 'str', got 'Singleton' instead
</code></pre>
<p>I ran mypy on the same file, and here's what it said:</p>
<pre class="lang-none prettyprint-override"><code># mypy test.py
Success: no issues found in 1 source file
</code></pre>
<p>I'm using Python 3.11, PyCharm 2023.1.1 and mypy 1.3.0 if that makes a difference.</p>
<p>So what exactly is the problem here? Am I doing this correctly? Is this a bug with PyCharm, with mypy or something else? If the error is on me, how can I fix it?</p>
|
<python><pycharm><mypy><python-typing><metaclass>
|
2023-05-11 05:20:57
| 2
| 12,143
|
InSync
|
76,224,352
| 10,576,322
|
Logging messages from __init__.py of an imported module in Python
|
<p>I wrote some libraries and part of the code relies on environment variables.</p>
<p>Therefore the <code>__init__.py</code> files of those libraries have some logic to parse env files or set defaults.</p>
<p>Since these env vars influence the code, I set up a logger in <code>__init__.py</code> and I log whether an env file was found, etc.</p>
<p>If I am now writing a script and want to configure a logger, I miss those messages if I don't configure the logger before the import. On the other hand, imports should come first.</p>
<p>What is a better way to solve this topic?</p>
|
<python><logging><import><python-import><python-logging>
|
2023-05-11 05:19:27
| 1
| 426
|
FordPrefect
|
76,224,256
| 4,348,400
|
What is the model argument in the super().__init__() in the example subclassing of pymc.Model?
|
<p>The <a href="https://www.pymc.io/projects/docs/en/stable/api/generated/pymc.Model.html" rel="nofollow noreferrer"><code>pymc.Model</code></a> docs show this example:</p>
<pre class="lang-py prettyprint-override"><code>class CustomModel(Model):
# 1) override init
def __init__(self, mean=0, sigma=1, name=''):
# 2) call super's init first, passing model and name
# to it name will be prefix for all variables here if
# no name specified for model there will be no prefix
super().__init__(name, model)
# now you are in the context of instance,
# `modelcontext` will return self you can define
# variables in several ways note, that all variables
# will get model's name prefix
# 3) you can create variables with the register_rv method
self.register_rv(Normal.dist(mu=mean, sigma=sigma), 'v1', initval=1)
# this will create variable named like '{name::}v1'
# and assign attribute 'v1' to instance created
# variable can be accessed with self.v1 or self['v1']
# 4) this syntax will also work as we are in the
# context of instance itself, names are given as usual
Normal('v2', mu=mean, sigma=sigma)
# something more complex is allowed, too
half_cauchy = HalfCauchy('sigma', beta=10, initval=1.)
Normal('v3', mu=mean, sigma=half_cauchy)
# Deterministic variables can be used in usual way
Deterministic('v3_sq', self.v3 ** 2)
# Potentials too
Potential('p1', pt.constant(1))
# After defining a class CustomModel you can use it in several
# ways
# I:
# state the model within a context
with Model() as model:
CustomModel()
# arbitrary actions
# II:
# use new class as entering point in context
with CustomModel() as model:
Normal('new_normal_var', mu=1, sigma=0)
# III:
# just get model instance with all that was defined in it
model = CustomModel()
# IV:
# use many custom models within one context
with Model() as model:
CustomModel(mean=1, name='first')
CustomModel(mean=2, name='second')
# variables inside both scopes will be named like `first::*`, `second::*`
</code></pre>
<p>Clearly this isn't a working example; rather, it is meant to give an overview of some of the internals of defining a custom PyMC model class.</p>
<p>I do have a question about part of this example though. When the parent class is initialized with <code>super().__init__(name, model)</code> I see there are two arguments passed: <code>name</code> and <code>model</code>. I think <code>name</code> might just be a string, but what is less clear to me is what <code>model</code> is supposed to be. Is <code>model</code> also supposed to be a string? Or something else?</p>
|
<python><subclass><pymc>
|
2023-05-11 04:57:06
| 1
| 1,394
|
Galen
|
76,224,122
| 199,166
|
Including zero counts for unobserved categoricals in Pandas DataFrame value_counts
|
<p>In Pandas, calling <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer">value_counts</a> on a <a href="https://pandas.pydata.org/docs/reference/api/pandas.Categorical.html" rel="nofollow noreferrer">Categorical</a> series will make sure that each possible value gets a count even when that count is zero, all of which is subtle, <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html#operations" rel="nofollow noreferrer">lightly documented</a>, and maybe seldom cared-about, but hold on tight. We're going down a rabbit hole.</p>
<p>Let's say I define a <a href="https://pandas.pydata.org/docs/reference/frame.html" rel="nofollow noreferrer">DataFrame</a> with two Categorical columns like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
abc_categorical = pd.CategoricalDtype(['a', 'b', 'c'], ordered=True)
size_categorical = pd.CategoricalDtype(['s', 'm', 'l'], ordered=True)
df = pd.DataFrame({
'a': pd.Categorical(list('aaabababababa'), dtype=abc_categorical),
'tsize': pd.Categorical(list('smmssmlmlslsl'), dtype=size_categorical),
}
)
</code></pre>
<pre><code> a tsize
0 a s
1 a m
2 a m
3 b s
4 a s
5 b m
6 a l
7 b m
8 a l
9 b s
10 a l
11 b s
12 a l
</code></pre>
<h2>Series.value_counts</h2>
<p>I have no rows where the 'a' column has the value 'c', so <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer">Series.value_counts</a> gives us a zero count for it.</p>
<pre class="lang-py prettyprint-override"><code>df.a.value_counts(sort=False)
</code></pre>
<pre><code>a 8
b 5
c 0
Name: a, dtype: int64
</code></pre>
<p>Because we bothered to define a Categorical, Pandas knows that the column 'a' could take on the value 'c', but that never occurs in the data, so we get a 0. So far, so good.</p>
<h2>DataFrame.value_counts</h2>
<p>However, if I call <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.value_counts.html" rel="nofollow noreferrer">DataFrame.value_counts</a> to count values of both columns in combination, I don't get a zero either for the 'b' & 'l' combination, of which there are zero, and nothing for 'c'.</p>
<pre class="lang-py prettyprint-override"><code>df.value_counts(sort=False)
</code></pre>
<pre><code>a tsize
a s 2
m 2
l 4
b s 3
m 2
dtype: int64
</code></pre>
<p>Bummer!</p>
<h2>Crosstab?</h2>
<p>The <a href="https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html" rel="nofollow noreferrer">pandas.crosstab</a> function does a little better, giving a zero count for 'b' & 'l', but still leaves out the 'c' values.</p>
<pre class="lang-py prettyprint-override"><code>pd.crosstab(df.a, df.tsize)
</code></pre>
<pre><code>tsize s m l
a
a 2 2 4
b 3 2 0
</code></pre>
<h2>Expected results</h2>
<p>I think, DataFrame.value_counts should return something like this:</p>
<pre><code> count
a tsize
a s 2
m 2
l 4
b s 3
m 2
l 0
c s 0
m 0
l 0
</code></pre>
<p>DataFrame.value_counts should do the same as Series.value_counts, or at least provide an option to do so, maybe "value_counts(include_zeros=True)". For that matter, crosstab should do likewise.</p>
<h2>My actual question</h2>
<p>My question: Is there a concise idiomatic way to get Pandas to do the count values of categoricals <em>including those with zero counts</em>?</p>
<p>Note: asked in context of Pandas 2.0.1</p>
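<p>For comparison, the closest I've gotten so far is a plain <code>groupby(...).size()</code> with <code>observed=False</code> (a sketch; I'm not sure it counts as idiomatic, and the <code>observed</code> default is slated to change in future pandas versions):</p>

```python
import pandas as pd

abc_categorical = pd.CategoricalDtype(["a", "b", "c"], ordered=True)
size_categorical = pd.CategoricalDtype(["s", "m", "l"], ordered=True)
df = pd.DataFrame({
    "a": pd.Categorical(list("aaabababababa"), dtype=abc_categorical),
    "tsize": pd.Categorical(list("smmssmlmlslsl"), dtype=size_categorical),
})

# observed=False keeps unobserved category combinations, yielding zero counts
# for ('b', 'l') and every 'c' combination.
counts = df.groupby(["a", "tsize"], observed=False).size()
print(counts)
```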
|
<python><pandas>
|
2023-05-11 04:16:42
| 1
| 12,578
|
cbare
|
76,224,121
| 17,128,041
|
Unable to install Contrast-agent package on Poetry
|
<p>I get the error below when I install the contrast package using the <code>poetry install</code> command:</p>
<p><code>RuntimeError: Failed to build Contrast C extension. It is necessary for autotools (autoconf, automake) to be installed in order for Contrast to build properly. On lightweight systems such as Alpine, it may be necessary to install linux-headers if they are not available already. Some other systems may require "build essential" packages to be installed.</code></p>
<p>Can someone help me fix this issue?</p>
|
<python><docker><python-poetry><contrast>
|
2023-05-11 04:16:40
| 1
| 1,599
|
sidharth vijayakumar
|
76,224,086
| 5,579,099
|
When (and how) do executors yield control back to the event loop?
|
<p>I am trying to wrap my head around <code>asyncio</code> for the first time, and I think I've got a basic grasp on coroutines & how to await their corresponding objects. Now, I've encountered <code>AbstractEventLoop.run_in_executor</code>, and the concept makes sense in the abstract (no pun intended): you have some blocking operation, so you kick it off to a thread so that the main thread (i.e. the thread running the main event loop) can continue its work.</p>
<p>The thing that I do not understand is how the event loop manages context switches between coroutines (cooperative multitasking), and this newly created thread (preemptive multitasking). As I understand it, every coroutine is <code>await</code>ed, and the act of <code>await</code>ing that coroutine yields control back to the event loop. This allows the event loop to start running another coroutine. But threads are not cooperative in this manner -- there is some other scheduler (I believe in your OS, but that may be wrong) that schedules when threads run, and threads can be stopped at any point in their execution. Additionally, why do you even call <code>run_in_executor</code> on the event loop? Shouldn't the newly created thread be completely separate from the thread that is running the event loop in order to allow for the concurrency that we're looking for?</p>
<p>My only guess at what could be happening is that since <code>run_in_executor</code> returns a coroutine (which is also a little confusing -- how do you get a coroutine out of a thread?), <code>await</code>ing this coroutine causes a context switch in the underlying thread, but I really have no idea how you could implement something like that.</p>
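<p>To anchor the question, this is the minimal pattern I'm reasoning about (names are illustrative; <code>blocking</code> stands in for any work that would otherwise stall the loop):</p>

```python
import asyncio
import concurrent.futures
import time

def blocking(n):
    time.sleep(0.1)  # placeholder for blocking work
    return n * 2

async def main():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # run_in_executor returns an asyncio Future tied to the worker thread;
        # awaiting it suspends this coroutine without blocking the event loop.
        return await asyncio.gather(
            *(loop.run_in_executor(pool, blocking, i) for i in range(4))
        )

print(asyncio.run(main()))  # [0, 2, 4, 6]
```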
|
<python><python-asyncio><python-multithreading>
|
2023-05-11 04:08:10
| 2
| 394
|
Pacopenguin
|
76,224,024
| 3,247,006
|
What is "can_delete_extra" for in Django Admin?
|
<p>I have <code>Person</code> model and <code>Email</code> model which has the foreign key of <code>Person</code> model as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
class Person(models.Model):
name = models.CharField(max_length=20)
def __str__(self):
return self.name
class Email(models.Model):
person = models.ForeignKey(Person, on_delete=models.CASCADE)
email = models.EmailField()
def __str__(self):
return self.email
</code></pre>
<p>Then, I set <a href="https://docs.djangoproject.com/en/4.2/topics/forms/formsets/#can-delete-extra" rel="nofollow noreferrer">can_delete_extra = True</a> to <code>Email</code> inline as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
class EmailInline(admin.TabularInline):
model = Email
can_delete_extra = True # Here
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
inlines = (EmailInline,)
</code></pre>
<p>Or, I set <code>can_delete_extra = False</code> to <code>Email</code> inline as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
class EmailInline(admin.TabularInline):
model = Email
can_delete_extra = False # Here
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
inlines = (EmailInline,)
</code></pre>
<p>But, nothing happens to <code>DELETE?</code> check boxes on <code>change</code> page as shown below:</p>
<p><a href="https://i.sstatic.net/a4Uzn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a4Uzn.png" alt="enter image description here" /></a></p>
<p>I know if setting <a href="https://docs.djangoproject.com/en/4.2/topics/forms/formsets/#can-delete" rel="nofollow noreferrer">can_delete = False</a> to <code>Email</code> inline as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
class EmailInline(admin.TabularInline):
model = Email
can_delete = False # Here
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
inlines = (EmailInline,)
</code></pre>
<p>Then, I can hide <code>DELETE?</code> check boxes on <code>change</code> page as shown below:</p>
<p><a href="https://i.sstatic.net/bCwlc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bCwlc.png" alt="enter image description here" /></a></p>
<p>So, what is <code>can_delete_extra</code> for in Django?</p>
|
<python><django><django-models><django-admin>
|
2023-05-11 03:47:23
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,223,664
| 6,395,930
|
Installing top2vec package, particularly in H2O Notebooks, and the error
|
<p>After installing the Python top2vec package in H2O notebooks (<code>!pip install top2vec</code>), I am getting the following error when importing top2vec:</p>
<pre><code>import top2vec
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
</code></pre>
<p>I tried the techniques used in <a href="https://github.com/MaartenGr/BERTopic/issues/392" rel="nofollow noreferrer">https://github.com/MaartenGr/BERTopic/issues/392</a> and also <a href="https://stackoverflow.com/questions/75045575/unable-to-install-top2vec">Unable to install top2vec</a>, but none of them were helpful. It looks like it is incompatible with the numpy package. Any help on how to tackle the error in the H2O environment is much appreciated!</p>
<p><a href="https://i.sstatic.net/rwUmZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rwUmZ.png" alt="enter image description here" /></a></p>
|
<python><package><incompatibility><h2o.ai><top2vec>
|
2023-05-11 02:04:37
| 1
| 853
|
Sam S.
|
76,223,497
| 19,506,623
|
How to create subelements with conditions in list?
|
<p>I have a list like this</p>
<pre><code>a = [3, 2, 1, 2, 3, 1, 3, 1, 2]
</code></pre>
<p>I'm trying to get a new list with the elements separated into sublists according to the following conditions:</p>
<p>1-) Separate into sublists the sequences from 1 to N (N in this case is 3)</p>
<p>2-) Separate into sublists the sequences from 1 to N even when there are missing numbers between 1 and N</p>
<p>3-) Put the remaining elements into their own sublists</p>
<p>The expected output would be like this.</p>
<pre><code>b = [[3], [2], [1, 2, 3], [1, 3], [1, 2]]
</code></pre>
<p>My current attempt is below, but it associates the elements incorrectly. Thanks for any help</p>
<pre><code>b = [None]*len(a)
for i in range(len(a)):
if 1 not in a[i][0]:
b[i] = [a[i]]
</code></pre>
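<p>In case it clarifies the intent, here is a rough sketch of the run-splitting logic I'm after (it starts a new sublist whenever a 1 appears or the sequence stops increasing; not thoroughly tested):</p>

```python
a = [3, 2, 1, 2, 3, 1, 3, 1, 2]

b, cur = [], []
for x in a:
    # A 1 always opens a new run; so does any non-increasing step.
    if cur and (x == 1 or x <= cur[-1]):
        b.append(cur)
        cur = []
    cur.append(x)
if cur:
    b.append(cur)
print(b)  # [[3], [2], [1, 2, 3], [1, 3], [1, 2]]
```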
|
<python><list>
|
2023-05-11 01:08:59
| 2
| 737
|
Rasec Malkic
|
76,223,362
| 19,048,408
|
In a Polars group_by aggregation, how do you concatenate string values in each group?
|
<p>When grouping a Polars dataframe in Python, how do you concatenate string values from a single column across rows within each group?</p>
<p>For example, given the following DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"col1": ["a", "b", "a", "b", "c"],
"col2": ["val1", "val2", "val1", "val3", "val3"]
}
)
</code></pre>
<p>Original df:</p>
<pre><code>shape: (5, 2)
┌──────┬──────┐
│ col1 ┆ col2 │
│ --- ┆ --- │
│ str ┆ str │
╞══════╪══════╡
│ a ┆ val1 │
│ b ┆ val2 │
│ a ┆ val1 │
│ b ┆ val3 │
│ c ┆ val3 │
└──────┴──────┘
</code></pre>
<p>I want to run a group_by operation, like:</p>
<pre class="lang-py prettyprint-override"><code>
df.group_by('col1').agg(
col2_g = pl.col('col2').some_function_like_join(',')
)
</code></pre>
<p>The expected output is:</p>
<pre><code>┌──────┬───────────┐
│ col1 ┆ col2_g │
│ --- ┆ --- │
│ str ┆ str │
╞══════╪═══════════╡
│ a ┆ val1,val1 │
│ b ┆ val2,val3 │
│ c ┆ val3 │
└──────┴───────────┘
</code></pre>
<p>What is the name of the <code>some_function_like_join</code> function?</p>
<p>I have tried the following methods, and none work:</p>
<pre class="lang-py prettyprint-override"><code>df.group_by('col1').agg(pl.col('col2').list.concat(','))
df.group_by('col1').agg(pl.col('col2').join(','))
df.group_by('col1').agg(pl.col('col2').list.join(','))
</code></pre>
|
<python><python-polars>
|
2023-05-11 00:23:05
| 2
| 468
|
HumpbackWhale194
|
76,223,056
| 21,767,810
|
Why does multithreading operations on a single numpy array improve performance?
|
<p>In short, my question is <strong>shouldn't Python block multiple accesses to the same memory?</strong></p>
<h4>In detail</h4>
<p>Say I want to do some computations on one hundred numpy arrays. My data is stored as a contiguous <code>np.ndarray</code> with shape <code>(100, 1000, 1000)</code>. As per the code below, I iterate through the data and call <code>multiply</code> for each block of size <code>(1000, 1000)</code>. To avoid unnecessary copying, the results are stored in an already initialized array.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import time
data = np.random.random((100, 1000, 1000))
result = np.zeros((100, 1000, 1000))
def multiply(source, dest):
"""Dummy calculation"""
np.divide(source, 1.5, out=dest)
start = time.time()
# Call multiply on each subarray
for i in range(100):
multiply(data[i], result[i])
print(f"For loop exec time {time.time() - start:.3f} seconds")
</code></pre>
<p>This, on my system, yields</p>
<pre><code>>>> For loop exec time 0.381 seconds
</code></pre>
<h4>With concurrency</h4>
<p>Next, I rewrite the code above to utilize tools from <code>concurrent.futures</code>. I launch a <code>ThreadPoolExecutor</code>, and submit ten asynchronous calls to <code>multiply</code> at a time, waiting them to finish, and ultimately processing all of my data.</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ThreadPoolExecutor, wait
threaded_result = np.zeros((100, 1000, 1000))
executor = ThreadPoolExecutor()
start = time.time()
for i in range(10):
futures = {
executor.submit(
multiply, # callable
data[i*10 + j],
threaded_result[i*10 + j],
) for j in range(10)
}
wait(futures)
print(f"Threaded exec time {time.time() - start:.3f} seconds")
executor.shutdown()
print(f"Both equal: {np.allclose(result, threaded_result)}") # Sanity check
</code></pre>
<p>Outputting</p>
<pre><code>>>> Threaded exec time 0.079 seconds
>>> Both equal: True
</code></pre>
<h4>What I don't understand</h4>
<p>I tried this just out of curiosity, expecting both approaches to run in an approximately equal amount of time and was surprised by the result. My understanding of numpy internals is, that when passing <code>array[index]</code> to a function, you are actually passing the entire <code>array</code>, but with an offset and size specific to the subarray.</p>
<p>So when the underlying implementation of <code>np.multiply</code> receives subarrays <code>data[i]</code> and <code>result[i]</code>, wouldn't CPython stop all other threads from operating on the <em>entire</em> <code>data</code> and <code>result</code>?</p>
<p>Is the speedup because of some technical detail (i.e. multiple threads can <em>read</em> <code>data</code>, calculate the result into a buffer, and then wait their turn to write it into <code>result</code>), or is my understanding of the underlying mechanics <strong>fundamentally</strong> wrong? Any insight or references are much appreciated!</p>
|
<python><multithreading><numpy>
|
2023-05-10 22:48:53
| 0
| 970
|
simeonovich
|