| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
74,844,191
| 9,373,756
|
Why is polars datetime subsetting slow?
|
<p>I'm looking for a faster way to subset a polars dataframe by datetime. I tried two different approaches, and I believe something faster exists because the equivalent pandas operation is quicker:</p>
<pre><code>import datetime

start_date = datetime.datetime(2009, 8, 3, 0, 0)
end_date = datetime.datetime(2009, 11, 3, 0, 0)
# - 1st
%%timeit
df_window = df_stock.filter(
(df_stock["date_time"] >= start_date)
& (df_stock["date_time"] <= end_date)
)
# 2.61 ms ± 243 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# - 2nd
%%timeit
df_window = df_stock.filter(
(pl.col("date_time") >= start_date) & (pl.col("date_time") <= end_date)
)
# 12.5 ms ± 801 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>Pandas's subsetting is faster for datetime.</p>
<pre><code>df_test = df_stock.to_pandas()
df_test.set_index('date_time', inplace=True)
%%timeit
df_test['2009-8-3':'2009-8-3']
# 1.02 ms ± 108 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<p>So why is pandas faster than polars in this case? Is there a faster way to subset by datetime in polars?</p>
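For what it's worth, part of the pandas speed here comes from slicing a sorted `DatetimeIndex`, which is a binary search on the two endpoints rather than a full-column boolean scan. A minimal sketch with hypothetical minute-level data standing in for `df_stock`:

```python
import numpy as np
import pandas as pd

# Hypothetical minute-level data standing in for df_stock
idx = pd.date_range("2009-01-01", periods=500_000, freq="min")
df = pd.DataFrame({"price": np.random.rand(len(idx))}, index=idx)

# Partial-string slicing on a sorted DatetimeIndex binary-searches the
# endpoints (O(log n)) instead of evaluating a boolean mask over every row.
window = df.loc["2009-08-03":"2009-11-03"]
print(len(window))
```

The boolean-filter forms in the question have to touch every row regardless of how narrow the window is, which is one reason the indexed pandas slice can win on sorted data.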
<p>Environment:</p>
<ul>
<li>Spyder version: 5.4.0 (conda)</li>
<li>Python version: 3.8.13 64-bit</li>
<li>Operating System: Linux 5.15.79.1-microsoft-standard-WSL2</li>
</ul>
|
<python><pandas><dataframe><python-polars>
|
2022-12-18 20:31:34
| 1
| 725
|
Artur
|
74,844,113
| 11,809,811
|
Hidden titlebar makes app not show up in task switcher?
|
<p>I have a simple tkinter app and I want to hide the titlebar. I am doing that via:</p>
<pre><code>root.overrideredirect(True)
</code></pre>
<p>and that works fine. However, the resulting window has no icon in the taskbar, and when switching windows using Alt + Tab the window does not appear. I created an exe file using Pyinstaller and was hoping that would solve the issue, but it doesn't. Is there a way to fix this?</p>
|
<python><tkinter>
|
2022-12-18 20:17:41
| 1
| 830
|
Another_coder
|
74,844,103
| 20,800,676
|
Project Euler #79: Using the Python standard library only, can a function be defined to sort the Passcode digits order from the given set?
|
<p>The problem in matter can be found here: <a href="https://projecteuler.net/problem=79" rel="nofollow noreferrer">Problem #79</a></p>
<p>"A common security method used for online banking is to ask the user for three random characters from a passcode. For example, if the passcode was 531278, they may ask for the 2nd, 3rd, and 5th characters; the expected reply would be: 317.</p>
<p>The text file, keylog.txt, contains fifty successful login attempts.</p>
<p>Given that the three characters are always asked for in order, analyse the file so as to determine the shortest possible secret passcode of unknown length."</p>
<p>So far I've been able to find the number of digits making up the passcode, but I can't order them.
I'm working on defining a simple function that will return the position of each digit relative to the others and finally figure out the final order and the passcode.</p>
<p>If anyone can help, here is my work so far:</p>
<pre><code># Open the file and create a list from a set in order to remove duplicates.
with open('D:\\Development\\keylog.txt', 'r') as file:
    logins = list(set(file.read().split()))
# Create three separate sets with the positions of the digits.
first_digits = set()
second_digits = set()
third_digits = set()
for login in logins:
    first_digits.add(login[0])
    second_digits.add(login[1])
    third_digits.add(login[2])
newSet = (first_digits | second_digits | third_digits)
# After the three sets are combined it will return another set of 8 digits.
# From this we can deduce that the shortest passcode is made from the 8 different digits.
# The set looks like this: {'3', '7', '2', '6', '1', '0', '9', '8'}
# def A function that gives the order of the digits in the passcode.
</code></pre>
<p>I've tried a few variations of functions like "is_before(x)" or "is_after(x)", but it gets way too complicated for me, and they also don't work or return the final answer.</p>
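Since the pairwise "digit a comes before digit b" constraints form a dependency graph, the standard library's `graphlib` module (Python 3.9+) can produce the ordering directly via a topological sort. A sketch with hypothetical login attempts for a made-up passcode `2468` (the real attempts come from keylog.txt):

```python
from graphlib import TopologicalSorter

# Hypothetical login attempts for the made-up passcode "2468"
logins = ["246", "268", "248", "468"]

# graph maps each digit to the set of digits that must come BEFORE it
graph = {}
for attempt in logins:
    for earlier, later in zip(attempt, attempt[1:]):
        graph.setdefault(later, set()).add(earlier)
        graph.setdefault(earlier, set())

# static_order() emits each digit only after all its predecessors
passcode = "".join(TopologicalSorter(graph).static_order())
print(passcode)  # 2468
```

This stays within the standard-library constraint of the question; with the real keylog data the same loop over all fifty attempts builds the full precedence graph.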
|
<python><python-3.x>
|
2022-12-18 20:14:53
| 2
| 541
|
Sigmatest
|
74,844,094
| 11,653,374
|
Projection onto unit simplex using gradient decent in Pytorch
|
<p>In Professor Boyd's <a href="https://see.stanford.edu/materials/lsocoee364b/hw4sol.pdf" rel="nofollow noreferrer">homework solution</a> for projection onto the unit simplex, he winds up with the following equation:</p>
<pre><code>g_of_nu = (1/2)*torch.norm(-relu(-(x-nu)))**2 + nu*(torch.sum(x) -1) - x.size()[0]*nu**2
</code></pre>
<p><a href="https://i.sstatic.net/BAetU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BAetU.png" alt="enter image description here" /></a></p>
<p>If one calculates <code>nu*</code>, then the projection to unit simplex would be <code>y*=relu(x-nu*1)</code>.</p>
<p>What he suggests is to find the maximizer of <code>g_of_nu</code>. Since <code>g_of_nu</code> is strictly concave, I multiply it by a negative sign (<code>f_of_nu</code>) and find its global minimizer using gradient descent.</p>
<hr />
<p><strong>Question</strong></p>
<p>My final vector <code>y*</code> does not add up to one. What am I doing wrong?</p>
<hr />
<p><strong>Code for replication</strong></p>
<pre><code>import torch
from torch.nn.functional import relu

torch.manual_seed(1)
x = torch.randn(10)#.view(-1, 1)
x_list = x.tolist()
print(list(map(lambda x: round(x, 4), x_list)))
nu_0 = torch.tensor(0., requires_grad = True)
nu = nu_0
optimizer = torch.optim.SGD([nu], lr=1e-1)
nu_old = torch.tensor(float('inf'))
steps = 100
eps = 1e-6
i = 1
while torch.norm(nu_old-nu) > eps:
    nu_old = nu.clone()
    optimizer.zero_grad()
    f_of_nu = -( (1/2)*torch.norm(-relu(-(x-nu)))**2 + nu*(torch.sum(x) -1) - x.size()[0]*nu**2 )
    f_of_nu.backward()
    optimizer.step()
    print(f'At step {i+1:2} the function value is {f_of_nu.item(): 1.4f} and nu={nu: 0.4f}' )
    i += 1
y_star = relu(x-nu).cpu().detach()
print(list(map(lambda x: round(x, 4), y_star.tolist())))
print(y_star.sum())
</code></pre>
<pre><code>[0.6614, 0.2669, 0.0617, 0.6213, -0.4519, -0.1661, -1.5228, 0.3817, -1.0276, -0.5631]
At step 1 the function value is -1.9618 and nu= 0.0993
.
.
.
At step 14 the function value is -1.9947 and nu= 0.0665
[0.5948, 0.2004, 0.0, 0.5548, 0.0, 0.0, 0.0, 0.3152, 0.0, 0.0]
tensor(1.6652)
</code></pre>
<p><strong>The function</strong></p>
<pre><code>import numpy as np
import torch
from torch.nn.functional import relu
import matplotlib.pyplot as plt

torch.manual_seed(1)
x = torch.randn(10)
nu = torch.linspace(-1, 1, steps=10000)
f = lambda x, nu: -( (1/2)*torch.norm(-relu(-(x-nu)))**2 + nu*(torch.sum(x) -1) - x.size()[0]*nu**2 )
f_value_list = np.asarray( [f(x, i) for i in nu.tolist()] )
i_min = np.argmin(f_value_list)
print(nu[i_min])
fig, ax = plt.subplots()
ax.plot(nu.cpu().detach().numpy(), f_value_list);
</code></pre>
<p>Here is the minimizer from the graph, which is consistent with the gradient descent.</p>
<pre><code>tensor(0.0665)
</code></pre>
<p><a href="https://i.sstatic.net/mCpCV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mCpCV.png" alt="enter image description here" /></a></p>
|
<python><pytorch><gradient-descent>
|
2022-12-18 20:12:43
| 1
| 728
|
Saeed
|
74,843,820
| 10,797,718
|
Compute Aspect Ratio of a Rectangle in Perspective
|
<p>A popular question but for which I can't find a <strong>definitive answer</strong>. <em>Math wizards we need you!</em></p>
<h2>The Problem is the following</h2>
<p>We have an <strong>unmodified photograph</strong> (not cropped) of a <strong>rectangular object</strong> taken from some angle. We don't know the real size of the rectangle or anything else about the picture.</p>
<p>GOAL : We want to find the <em><strong>aspect ratio / proportions</strong></em> of the rectangle with only this image as input.</p>
<p>This question isn't linked to a particular programming language (pseudo code allowed) but I want to implement the solution in <strong>Python</strong> using <strong>NumPy</strong>.</p>
<p><a href="https://i.sstatic.net/Mdo9ql.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mdo9ql.jpg" alt="Example image with a paper sheet in perspective" /></a></p>
<p>In this photograph it's an A4 paper sheet so the code/algorithm should find a ratio of <code>1.4142 : 1</code></p>
<h1>My Questions</h1>
<ol>
<li><strong>What is the best solution (see the Info section below) that handles edge cases? Is there another one I have not listed?</strong></li>
<li><strong>Is there an implementation (C++, Python, ...) in a maintained library package (OpenCV, SciPy, ...)?</strong></li>
</ol>
<h3>Info</h3>
<p>After digging through the internet, it seems that this problem isn't easy to solve. Here is some info:</p>
<ul>
<li>Research Paper by Zhengyou Zhang , Li-Wei He, "<strong>Whiteboard scanning and image enhancement</strong>" <a href="http://research.microsoft.com/en-us/um/people/zhang/papers/tr03-39.pdf" rel="nofollow noreferrer">http://research.microsoft.com/en-us/um/people/zhang/papers/tr03-39.pdf</a> / <a href="https://www.microsoft.com/en-us/research/publication/2016/11/Digital-Signal-Processing.pdf" rel="nofollow noreferrer">https://www.microsoft.com/en-us/research/publication/2016/11/Digital-Signal-Processing.pdf</a></li>
<li>Research Paper by Shen Cai,* Longxiang Huang, and Yuncai Liu, "<strong>Automatically obtaining the correspondences of four
coplanar points for an uncalibrated camera</strong>" <a href="https://sci-hub.hkvisa.net/10.1364/AO.51.005369" rel="nofollow noreferrer">https://sci-hub.hkvisa.net/10.1364/AO.51.005369</a></li>
<li>A blog post named "Aspect Ratio of a Rectangle in Perspective" by Andrew Kay <a href="https://andrewkay.name/blog/post/aspect-ratio-of-a-rectangle-in-perspective/" rel="nofollow noreferrer">https://andrewkay.name/blog/post/aspect-ratio-of-a-rectangle-in-perspective/</a></li>
</ul>
<h3>Others StackOverflow related questions</h3>
<p>There are some questions on StackOverflow with the same topic/subject/goal:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/1194352/proportions-of-a-perspective-deformed-rectangle">proportions of a perspective-deformed rectangle</a></li>
<li><a href="https://stackoverflow.com/questions/38285229/calculating-aspect-ratio-of-perspective-transform-destination-image">Calculating aspect ratio of Perspective Transform destination image</a></li>
<li><a href="https://stackoverflow.com/questions/8235288/perspective-correction-of-uiimage-from-points/47178521#47178521">Perspective correction of UIImage from Points</a></li>
<li><a href="https://stackoverflow.com/questions/54015836/get-imagerect-on-a-plane-with-perspective-correction-from-camera-2d-image">Get image(rect on a plane) with perspective correction from camera (2d image)</a></li>
</ul>
<p><em><strong>BUT</strong></em> the answers seem to differ in their implementations (even though they claim to be based on the same research paper) and are incomplete (not always handling parallel opposite sides).</p>
<h2>My current working implementations</h2>
<p>Based on all the info above I implemented the two methods in Python with OpenCV & NumPy:</p>
<h3>Method #1</h3>
<pre><code>import math
import numpy

def compute_aspect_ratio(image, corners):
    # Based on:
    # - https://andrewkay.name/blog/post/aspect-ratio-of-a-rectangle-in-perspective/
    # Step 1: Get image center, will be used as origin
    h, w = image.shape[:2]
    origin = (w * .5, h * .5)
    # Step 2: Point coords from image origin
    # /!\ CAREFUL: points need to be in zig-zag order (A, B, D, C)
    a = corners[0] - origin
    b = corners[1] - origin
    c = corners[3] - origin
    d = corners[2] - origin
    # Step 3: Check if the camera lies in the plane of the rectangle
    # Coplanar if three points are collinear; in that case the aspect ratio cannot be computed
    M = numpy.array([[b[0], c[0], d[0]], [b[1], c[1], d[1]], [1., 1., 1.]])
    det = numpy.linalg.det(M)
    if math.isclose(det, 0., abs_tol=.001):
        # Cannot compute the aspect ratio; the caller needs to check if the return value is 0.
        return 0.
    # Step 4: Create the matrices
    A = numpy.array([[1., 0., -b[0], 0., 0., 0.],
                     [0., 1., -b[1], 0., 0., 0.],
                     [0., 0., 0., 1., 0., -c[0]],
                     [0., 0., 0., 0., 1., -c[1]],
                     [1., 0., -d[0], 1., 0., -d[0]],
                     [0., 1., -d[1], 0., 1., -d[1]]], dtype=float)
    B = numpy.array([[b[0]-a[0]],
                     [b[1]-a[1]],
                     [c[0]-a[0]],
                     [c[1]-a[1]],
                     [d[0]-a[0]],
                     [d[1]-a[1]]], dtype=float)
    # Step 5: Solve it; this will give us [Ux, Uy, (Uz / λ), Vx, Vy, (Vz / λ)]
    s = numpy.linalg.solve(A, B)
    # Step 6: Compute λ, the focal length
    l = 0.
    l_sq = ((-(s[0] * s[3]) - (s[1] * s[4])) / (s[2] * s[5]))
    if l_sq > 0.:
        l = numpy.sqrt(l_sq)
    # If l_sq <= 0, λ cannot be computed: two sides of the rectangle's image are parallel
    # (either Uz and/or Vz equals zero), so we leave l = 0
    # Step 7: Get U & V
    u = numpy.linalg.norm([s[0], s[1], (s[2] * l)])
    v = numpy.linalg.norm([s[3], s[4], (s[5] * l)])
    return (v / u)
</code></pre>
<h3>Method #2</h3>
<pre><code>import math
import numpy

def compute_aspect_ratio(image, corners):
    # Based on:
    # - https://www.microsoft.com/en-us/research/publication/2016/11/Digital-Signal-Processing.pdf
    # - http://research.microsoft.com/en-us/um/people/zhang/papers/tr03-39.pdf
    # Step 1: Compute image center (a.k.a. origin)
    h, w = image.shape[:2]
    origin = (w * .5, h * .5)
    # Step 2: Make homogeneous coordinates
    # /!\ CAREFUL: points need to be in zig-zag order (A, B, D, C)
    p1 = numpy.array([*corners[0], 1.])
    p2 = numpy.array([*corners[1], 1.])
    p3 = numpy.array([*corners[3], 1.])
    p4 = numpy.array([*corners[2], 1.])
    k2 = numpy.dot(numpy.cross(p1, p4), p3) / numpy.dot(numpy.cross(p2, p4), p3)
    k3 = numpy.dot(numpy.cross(p1, p4), p2) / numpy.dot(numpy.cross(p3, p4), p2)
    # Step 3: Compute the U & V vectors; at this point the z value of these vectors is in the form z / f
    # where f is the focal length
    u = (k2 * p2) - p1
    v = (k3 * p3) - p1
    # Step 4: Unpack vectors to avoid using accessors
    uX, uY, uZ = u
    vX, vY, vZ = v
    # Step 5: Check if two opposite sides are parallel (or almost parallel)
    if math.isclose(uZ, .0, abs_tol=.01) or math.isclose(vZ, .0, abs_tol=.01):
        aspect_ratio = numpy.sqrt((vX ** 2 + vY ** 2) / (uX ** 2 + uY ** 2))
        return aspect_ratio
    # Step 6: Compute the focal length
    f = numpy.sqrt(numpy.abs((1. / (uZ * vZ)) * ((uX * vX - (uX * vZ + uZ * vX) * origin[0] + uZ * vZ * origin[0] * origin[0]) + (uY * vY - (uY * vZ + uZ * vY) * origin[1] + uZ * vZ * origin[1] * origin[1]))))
    A = numpy.array([[f, 0., origin[0]], [0., f, origin[1]], [0., 0., 1.]]).astype('float32')
    Ati = numpy.linalg.inv(numpy.transpose(A))
    Ai = numpy.linalg.inv(A)
    # Step 7: Calculate the real aspect ratio
    aspect_ratio = numpy.sqrt(numpy.dot(numpy.dot(numpy.dot(v, Ati), Ai), v) / numpy.dot(numpy.dot(numpy.dot(u, Ati), Ai), u))
    return aspect_ratio
</code></pre>
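One cheap sanity check for the cross-product step in Method #2: on a hypothetical fronto-parallel rectangle (no perspective at all), k2 and k3 must both equal 1, the z-components of U and V vanish, and the parallel-sides branch should report the plain side ratio:

```python
import numpy as np

# Zhang & He's k2/k3 construction on a hypothetical fronto-parallel
# 400x200 rectangle: with no perspective, k2 = k3 = 1 and the
# z-components of u and v vanish, so the ratio reduces to side lengths.
m1 = np.array([0.,   0., 1.])    # top-left     (homogeneous coords)
m2 = np.array([400., 0., 1.])    # top-right
m3 = np.array([0., 200., 1.])    # bottom-left
m4 = np.array([400., 200., 1.])  # bottom-right

k2 = np.dot(np.cross(m1, m4), m3) / np.dot(np.cross(m2, m4), m3)
k3 = np.dot(np.cross(m1, m4), m2) / np.dot(np.cross(m3, m4), m2)
u = k2 * m2 - m1
v = k3 * m3 - m1

assert abs(u[2]) < 1e-9 and abs(v[2]) < 1e-9  # parallel-sides branch applies
ratio = np.hypot(u[0], u[1]) / np.hypot(v[0], v[1])
print(ratio)  # 2.0 : width / height
```

If an implementation fails this degenerate case, the corner ordering (the zig-zag A, B, D, C convention) is the first thing to suspect.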
<p>If you have any info / help, don't hesitate to share!</p>
|
<python><computer-vision><geometry><computational-geometry>
|
2022-12-18 19:26:35
| 1
| 1,519
|
Ben Souchet
|
74,843,802
| 11,564,487
|
Code wrapping in Quarto pdf output documents
|
<p>Suppose we have the following <code>qmd</code> document:</p>
<pre><code>---
title: "Untitled"
format: pdf
---
```{python}
#| eval: false
print('Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam dapibus feugiat nibh, sed gravida ipsum rutrum nec. Donec tincidunt arcu scelerisque enim tempor blandit. Donec a neque facilisis, interdum leo et, luctus erat. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Integer ac nunc eu orci luctus feugiat.')
```
</code></pre>
<p>How can the python code be wrapped in the output pdf file?</p>
|
<python><latex><quarto>
|
2022-12-18 19:23:30
| 1
| 27,045
|
PaulS
|
74,843,770
| 12,113,684
|
Keras sequential model from functional API
|
<p>I have a Keras model built with the functional API; it looks like this:</p>
<pre><code>nn = keras.layers.Conv1D(300,19,strides=1,activation='relu')(inputs)
nn = keras.layers.Conv1D(300,19,strides=1,activation='relu')(nn)
nn = keras.layers.MaxPool1D(pool_size=3)(nn)
nn = keras.layers.Flatten()(nn)
nn = keras.layers.Dense(596,activation='relu')(nn)
logits = keras.layers.Dense(35, activation='linear')(nn)
outputs = keras.layers.Activation('sigmoid')(logits)
</code></pre>
<p>I want to convert it to a sequential model; however, I am confused about how the <strong>logits</strong> and <strong>output</strong> layers would look in a sequential model. So what I have so far:</p>
<pre><code>model.add(keras.layers.Conv1D(300, 19, activation='relu', input_shape=dataset['x_train'].shape[1:]))
model.add(keras.layers.Conv1D(300, 19, activation='relu'))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(596, activation='relu'))
</code></pre>
<p>I am confused about the next two layers. Can someone guide me on how to code them in a sequential model? Help will be much appreciated.</p>
|
<python><tensorflow><keras><deep-learning>
|
2022-12-18 19:17:38
| 1
| 845
|
John
|
74,843,703
| 9,138,097
|
How to download and save a .csv from a URL and process it in the next function using Python
|
<p>I'm trying to download and process .csv files and I'm stuck on one thing; the function below that processes the .csv works perfectly.</p>
<pre><code>def insertTimeStampcsv():
    rows = []
    with open(r'output.csv', 'w', newline='') as out_file:
        timestamp = datetime.now()
        df = pd.read_csv(getCsv())
        if 'Name' in df.columns:
            df.rename(columns = {"Name": "team", "Total": "cost"}, inplace = True)
        df.insert(0, 'date', timestamp)
        df.insert(1, 'resource_type', "pod")
        df.insert(2, 'resource_name', "kubernetes")
        df.insert(3, 'cluster_name', "talabat-qa-eks-cluster")
        df.drop(["CPU", "GPU", "RAM", "PV", "Network", "LoadBalancer", "External", "Shared", "Efficiency"], axis=1, inplace=True)
        df["team"] = df["team"].replace(["search-discovery"], "vendor-list")
        df.to_csv(out_file, index=False)
        return df

insertTimeStampcsv()
</code></pre>
<p>But when it comes to providing the .csv to the above function, I'm using another function to generate the .csv with the code below, but it does not work… any help would be appreciated.</p>
<pre><code>from datetime import datetime
import pandas as pd
import requests
headers = {
'Content-Type': 'application/x-www-form-urlencoded',
}
params = {
'window': 'today',
'aggregate': 'namespace',
'accumulate': 'false',
'shareTenancyCosts': 'true',
'shareNamespaces': 'kube-system,lens-metrics,istio-system,default,newrelic,webhook,cert-manager',
'shareIdle': 'true',
'format': 'csv',
}
# temp_file_name = 'input.csv'
def getCsv():
    # result = []
    r = requests.get('http://localhost:9090/model/allocation', headers=headers, params=params)
    lines = r.content
    print(lines)

getCsv()
</code></pre>
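For what it's worth, `pd.read_csv` accepts any file-like object, so one common shape for this kind of pipeline is to have the fetching function return an in-memory buffer that the processing function consumes directly. A sketch with hypothetical inline bytes standing in for `r.content` (not the cost-model endpoint above):

```python
import io
import pandas as pd

def get_csv():
    # Stand-in for: requests.get(...).content (raw CSV bytes)
    content = b"Name,Total\nsearch-discovery,12.5\nplatform,7.1\n"
    return io.BytesIO(content)   # a file-like object read_csv can consume

df = pd.read_csv(get_csv())
print(df.shape)  # (2, 2)
```

The key point is that the fetcher must `return` something (bytes wrapped in a buffer, or a saved file path), not just `print` the response body.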
|
<python><python-3.x><pandas><csv>
|
2022-12-18 19:07:01
| 1
| 811
|
Aziz Zoaib
|
74,843,655
| 15,649,230
|
MVC who should pass parent view to the child in GUI?
|
<p>I am trying to implement an application where I need the GUI framework to be easily swappable, so I decided to use a model-view-controller approach. To keep the controller unchanged when swapping the view component, I made the view responsible for contacting whatever GUI framework is used (tkinter, Qt, web); the design is summarized in the image below.</p>
<p><a href="https://i.sstatic.net/xl4ss.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xl4ssm.png" alt="model summary photo" /></a></p>
<p>Since the view elements of the parent and the child will be created separately, they will need to be linked in order to be rendered, and the component that links them needs to know about both the parent's and the child's view objects and how they work. So whose responsibility is it to link them?</p>
<p>The code looks like the following, using PySide2 as an example; the locations where the parent GUI can be linked to the child are all commented out.</p>
<pre><code>from PySide2.QtWidgets import QWidget

class parentView:
    def __init__(self):
        self.gui = QWidget()
        # framework specific code to initialize layout
    def add_child(self, child):
        pass
        # child.gui.parent = self.gui
        # child.parent = self.gui

class parent_controller:
    def __init__(self):
        self.view = parentView()
        self.gui = QWidget()
        self.child = child_controller(self)
        # self.child.view.gui.parent = self.view.gui ?
        # self.view.add_child(self.child.view.gui) ?
        # self.view.add_child(self.child.view) ?

class childView:
    def __init__(self, controller):
        self.controller = controller
        self.gui = QWidget()  # QWidget(self.parent.view.gui) ?
        # self.gui.parent = self.parent.view.gui ?

class child_controller:
    def __init__(self, parent):
        self.parent = parent
        self.view = childView(self)
        # self.view.gui.parent = self.parent.parent.view.gui ?
</code></pre>
<p>Creating the link in the parent view makes sense to me, as I can make it do framework-specific logic when the framework is changed, but I wanted to know what is best in terms of ease of use and maintainability, as I think I am creating too many objects.</p>
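For comparison, here is a framework-agnostic sketch of the "parent view owns the linking" option, with plain objects standing in for QWidget (all names hypothetical): the controller only hands the child's view across, and every GUI-specific detail stays inside `add_child`.

```python
class ParentView:
    def __init__(self):
        self.children = []           # stand-in for a QWidget/tk.Frame
    def add_child(self, child_view):
        child_view.parent = self     # framework-specific wiring lives here
        self.children.append(child_view)

class ChildView:
    def __init__(self):
        self.parent = None           # set by the parent view on linking

class ChildController:
    def __init__(self):
        self.view = ChildView()

class ParentController:
    def __init__(self):
        self.view = ParentView()
        self.child = ChildController()
        self.view.add_child(self.child.view)  # controller stays GUI-agnostic

pc = ParentController()
print(pc.child.view.parent is pc.view)  # True
```

With this split, swapping tkinter for Qt means rewriting only the two view classes; neither controller mentions the framework.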
|
<python><qt><user-interface><model-view-controller>
|
2022-12-18 19:00:55
| 0
| 23,158
|
Ahmed AEK
|
74,843,475
| 12,883,297
|
Create 2 new time columns in a dataframe which are cumulative sum of min columns in pandas
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame([["X","0 min","30 mins"],["X","1 hour 1 min","20 mins"],["X","1 min","30 mins"],["X","41 mins","28 mins"],
["Y","0 min","30 mins"],["Y","35 mins","25 mins"],["Y","1 hour 21 mins","30 mins"]],columns=["id","travel_time","dur"])
</code></pre>
<pre><code>id travel_time dur
X 0 min 30 mins
X 1 hour 1 min 20 mins
X 1 min 30 mins
X 41 mins 28 mins
Y 0 min 30 mins
Y 35 mins 25 mins
Y 1 hour 21 mins 30 mins
</code></pre>
<p>I want two more columns, <strong>start_time</strong> and <strong>end_time</strong>. Each id starts at 9:00 AM; for each row, start_time adds travel_time to the previous row's end_time, and end_time adds dur to start_time. Repeat this down the rows of each id.</p>
<p><strong>Expected Output:</strong></p>
<pre><code>df_out = pd.DataFrame([["X","0 min","30 mins","9:00 AM","9:00 AM"],["X","1 hour 1 min","20 mins","10:01 AM","10:21 AM"],
["X","1 min","30 mins","10:22 AM","10:52 AM"],["X","41 mins","28 mins","11:33 AM","12:01 PM"],
["Y","0 min","30 mins","9:00 AM","9:00 AM"],["Y","35 mins","25 mins","9:35 AM","10:00 AM"],
["Y","1 hour 21 mins","30 mins","11:21 AM","11:51 AM"]],columns=["id","travel_time","dur","start_time","end_time"])
</code></pre>
<pre><code>id travel_time dur start_time end_time
X 0 min 30 mins 9:00 AM 9:00 AM
X 1 hour 1 min 20 mins 10:01 AM 10:21 AM
X 1 min 30 mins 10:22 AM 10:52 AM
X 41 mins 28 mins 11:33 AM 12:01 PM
Y 0 min 30 mins 9:00 AM 9:00 AM
Y 35 mins 25 mins 9:35 AM 10:00 AM
Y 1 hour 21 mins 30 mins 11:21 AM 11:51 AM
</code></pre>
<p>How to do it in pandas?</p>
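One way to sketch the stated rule in pandas is to parse both columns as Timedelta, take a per-id cumulative sum of travel_time + dur, and anchor it at 9:00 AM (start_time is then just end_time minus that row's dur). Note this follows the rule literally, so the first X row ends at 9:30 AM rather than the 9:00 AM shown in the expected output:

```python
import pandas as pd

df = pd.DataFrame([["X","0 min","30 mins"],["X","1 hour 1 min","20 mins"],
                   ["X","1 min","30 mins"],["X","41 mins","28 mins"],
                   ["Y","0 min","30 mins"],["Y","35 mins","25 mins"],
                   ["Y","1 hour 21 mins","30 mins"]],
                  columns=["id","travel_time","dur"])

# "mins" -> "min" so pd.to_timedelta can parse the strings
travel = pd.to_timedelta(df["travel_time"].str.replace("mins", "min"))
dur = pd.to_timedelta(df["dur"].str.replace("mins", "min"))

base = pd.Timestamp("2022-01-01 09:00")   # arbitrary date, 9:00 AM anchor
# end_time = 9:00 AM + cumulative (travel + dur) within each id
end = base + (travel + dur).groupby(df["id"]).cumsum()
df["end_time"] = end
df["start_time"] = end - dur              # start is end minus this row's dur
print(df[["id", "start_time", "end_time"]])
```

Formatting back to `9:00 AM`-style strings is then a `dt.strftime` call on the two columns.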
|
<python><python-3.x><pandas><dataframe><datetime>
|
2022-12-18 18:20:44
| 1
| 611
|
Chethan
|
74,843,377
| 5,032,387
|
Getting spot gold price without API
|
<p>Apparently it was possible to query spot gold data using Pandas Datareader, but when I run the query mentioned in the <a href="https://github.com/pydata/pandas-datareader/issues/842" rel="nofollow noreferrer">issue</a>, I get an error. Alternative methods welcome.</p>
<pre><code>import pandas as pd
from pandas_datareader import data
import yfinance as yf
yf.pdr_override()
data.DataReader("GOLDAMGBD228NLBM", data_source="fred", start="2022-06-01")
</code></pre>
<blockquote>
<ul>
<li>GOLDAMGBD228NLBM: No data found for this date range, symbol may be delisted</li>
</ul>
</blockquote>
|
<python><pandas-datareader>
|
2022-12-18 18:05:36
| 0
| 3,080
|
matsuo_basho
|
74,843,359
| 16,395,449
|
How to add a top title above the header line
|
<p>I need to add a TITLE just above the header using a Pandas script.</p>
<pre><code>Sl_No Departure Arrival
1 UK India
2 US India
3 France India
4 New York Singapore
5 Tokyo Singapore
</code></pre>
<p>I tried the script below, but I'm not sure what changes I need to make in order to add a TITLE (top title).</p>
<pre><code>import os
os.chdir("/dev/alb_shrd/alb/alb_prj/Files/albt/alt/scrpts")
# Reading the csv file
import pandas as pd
print(pd.__file__)
col_names=["Sl_No","Departure","Arrival"]
df_new = pd.read_csv("SourceFile.csv", quotechar='"',names=col_names, sep="|",skiprows=1, low_memory=False,error_bad_lines=False,header=None).dropna(axis=1, how="all")
header = pd.MultiIndex.from_product([["TOP TITLE"],list(df_new.columns)])
df_new.head()
pd.DataFrame(df_new.to_numpy(), None , columns = header).head()
# Saving xlsx file
file = f"SourceFile_{pd.Timestamp('now').strftime('%Y%m%d_%I%M')}.xlsx"
df_new.to_excel(file, index=False)
</code></pre>
<p>Can anyone guide me on how to add a title at the top of the Excel file?</p>
<p>Thanks!</p>
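The MultiIndex idea in the script already builds the extra header row; the key is to assign it back to the frame before writing, rather than discarding the result. A minimal sketch with inline data instead of SourceFile.csv:

```python
import pandas as pd

df_new = pd.DataFrame({"Sl_No": [1, 2], "Departure": ["UK", "US"],
                       "Arrival": ["India", "India"]})

# Two-level header: "TOP TITLE" sits above every column name
df_new.columns = pd.MultiIndex.from_product([["TOP TITLE"], df_new.columns])
print(df_new)
# df_new.to_excel("out.xlsx")  # writes the title row above the header
```

With the two-level columns in place, `to_excel` emits the title row and the original header row beneath it.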
|
<python><python-3.x><pandas><dataframe>
|
2022-12-18 18:02:00
| 0
| 369
|
Jenifer
|
74,843,260
| 7,800,760
|
Stanford's Stanza NLP: find all word ids for a given span
|
<p>I am using a Stanza pipeline that extracts both words and named entities.</p>
<p>The sentence.entities gives me a list of recognized named entities with their start and end characters. Here is an example:</p>
<pre><code>{
"text": "Dante Alighieri",
"type": "PER",
"start_char": 1,
"end_char": 16
}
</code></pre>
<p>The sentence.words gives a list of all tokenized words also with their start and end characters: Here is a fragment of the corresponding example:</p>
<pre><code>{
"id": 1,
"text": "Dante",
"lemma": "Dante",
"upos": "PROPN",
"xpos": "SP",
"head": 3,
"deprel": "nsubj",
"start_char": 1,
"end_char": 6
}
{
"id": 2,
"text": "Alighieri",
"lemma": "Alighieri",
"upos": "PROPN",
"xpos": "SP",
"head": 1,
"deprel": "flat:name",
"start_char": 7,
"end_char": 16
}
{
"id": 3,
"text": "scrisse",
"lemma": "scrivere",
"upos": "VERB",
"xpos": "V",
"feats": "Mood=Ind|Number=Sing|Person=3|Tense=Past|VerbForm=Fin",
"head": 0,
"deprel": "root",
"start_char": 17,
"end_char": 24
}
</code></pre>
<p>I need to generate a list of all words that are included in the named entity span. Using the above example, those would be the words with IDs 1 and 2, but not 3.</p>
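Since both entities and words carry character offsets, the membership test is just a span-containment check. A sketch over plain dicts shaped like the output above (hypothetical data; with real Stanza objects the fields are attributes such as `word.start_char`):

```python
# Dicts mirroring the entity and word records shown above
entity = {"text": "Dante Alighieri", "start_char": 1, "end_char": 16}
words = [
    {"id": 1, "text": "Dante",     "start_char": 1,  "end_char": 6},
    {"id": 2, "text": "Alighieri", "start_char": 7,  "end_char": 16},
    {"id": 3, "text": "scrisse",   "start_char": 17, "end_char": 24},
]

# A word belongs to the entity if its character span lies inside the entity's
ids = [w["id"] for w in words
       if w["start_char"] >= entity["start_char"]
       and w["end_char"] <= entity["end_char"]]
print(ids)  # [1, 2]
```

Looping this over `sentence.entities` and `sentence.words` yields the word-id list per entity.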
|
<python><nlp><stanford-nlp>
|
2022-12-18 17:37:20
| 1
| 1,231
|
Robert Alexander
|
74,843,000
| 4,826,074
|
Calling Python code from C++ gives unexpected number of references
|
<p>I am running the C++ code below and it works fine if I comment out the <code>Py_XDECREF</code> lines. I suppose, however, that this causes memory leaks. When the <code>Py_XDECREF</code> lines are enabled, the code behaves unexpectedly. First of all, I don't understand the number of references. Why are there, for example, seven references to pName? Second, why do I get a negative-refcount error when using <code>Py_XDECREF</code>? As I understood it, <code>Py_XDECREF</code> should be safe in this respect, and the current error should only be possible with <code>Py_DECREF</code>. Third and most importantly: how do I change the code so that it works?</p>
<p>When executing the line <code>Py_XDECREF(pDict);</code> I get the following error:</p>
<pre><code>D:\a\1\s\Include\object.h:497: _Py_NegativeRefcount: Assertion failed: object has negative ref count
<object at 0000018CA6726410 is freed>
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: initialized
Current thread 0x000086b0 (most recent call first):
<no Python frame>
</code></pre>
<p>Here is the code:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <Python.h>
#include <stdio.h>
#include <iostream>
#include <sstream>
int main(int argc, char** argv) {
    // 1) initialize Python
    Py_Initialize();
    PyRun_SimpleString("import sys");
    PyRun_SimpleString("sys.path.append(\".\")");
    PyRun_SimpleString("print(sys.path)");
    // 2) Import the python module
    PyObject* pName = PyUnicode_FromString("module1");
    assert(pName != NULL);
    PyObject* pModule = PyImport_Import(pName);
    assert(pModule != NULL);
    // 3) Get a reference to the python function to call
    PyObject* pDict = PyModule_GetDict(pModule);
    assert(pDict != NULL);
    PyObject* pFunc = PyDict_GetItemString(pDict, (char*)"getInteger");
    assert(pFunc != NULL);
    // 4) Call function
    PyObject* pValue = PyObject_CallObject(pFunc, NULL);
    Py_XDECREF(pValue);
    Py_XDECREF(pFunc);
    Py_XDECREF(pDict);
    Py_XDECREF(pModule);
    Py_XDECREF(pName);
    Py_Finalize();
    return 0;
}
</code></pre>
<p>The imported module is called <code>module1.py</code> and has this content:</p>
<pre class="lang-py prettyprint-override"><code>def getInteger():
    print('Python function getInteger() called')
    c = 100*50/30
    return c
</code></pre>
|
<python><c++><cpython>
|
2022-12-18 16:49:59
| 1
| 380
|
Johan hvn
|
74,842,944
| 10,914,089
|
Capture the output of setup.py build_ext on stdout
|
<p>In my current project, I'm extensively using Cython. I have many separate setup.py files to build Cython code (*.pyx) with no issues at all (Python 3.8 and gcc 8.3), using the command:</p>
<pre><code>python setup.py build_ext --inplace
</code></pre>
<p>Here is a straightforward example:</p>
<pre><code># =============================================================
# Imports:
# =============================================================
import os
import sys
from distutils.core import setup
import setuptools
from distutils.extension import Extension
from Cython.Build import build_ext
from Cython.Build import cythonize
import numpy
import os.path
import io
import shutil
# =============================================================
# Modules:
# =============================================================
ext_modules = [
Extension("cc1", ["cc1.pyx"],
extra_compile_args=['-O3','-w','-fopenmp'],
extra_link_args=['-fopenmp','-ffast-math','-march=native'],
include_dirs=[numpy.get_include()],
language='c++'),
Extension("cc2", ["cc2.pyx"],
extra_compile_args=['-O3','-w','-fopenmp'],
extra_link_args=['-fopenmp','-ffast-math','-march=native'],
include_dirs=[numpy.get_include()],
language='c++'),
]
# =============================================================
# Class:
# =============================================================
class BuildExt(build_ext):
    def build_extensions(self):
        if '-Wstrict-prototypes' in self.compiler.compiler_so:
            self.compiler.compiler_so.remove('-Wstrict-prototypes')
        super().build_extensions()
# =============================================================
# Main:
# =============================================================
for e in ext_modules:
    e.cython_directives = {'embedsignature': True,'boundscheck': False,'wraparound': False,'linetrace': True, 'language_level': "3"}
setup(
name='jackProject',
version='0.1.0',
author='Jack',
ext_modules=ext_modules,
cmdclass = {'build_ext': BuildExt},
)
</code></pre>
<p>Everything works fine, and while the script is in execution, I can check the status on my console, as instance:</p>
<pre><code>running build_ext
cythoning cc1.pyx to cc1.cpp
cythoning cc2.pyx to cc2.cpp
building 'cc1' extension
creating build
creating build/temp.linux-x86_64-3.8
gcc -pthread -B /home/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/anaconda3/lib/python3.8/site-packages/numpy/core/include -I/home/anaconda3/include/python3.8 -c cc1.cpp -o build/temp.linux-x86_64-3.8/cc1.o -O3 -w -fopenmp
gcc -pthread -B /home/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/anaconda3/lib/python3.8/site-packages/numpy/core/include -I/home/anaconda3/include/python3.8 -c cc1.cpp -o build/temp.linux-x86_64-3.8/cc1.o -O3 -w -fopenmp
g++ -pthread -shared -B /home/anaconda3/compiler_compat -L/home/anaconda3/lib -Wl,-rpath=/home/anaconda3/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.8/cc1.o -o /home/Desktop/DRAFT_CODING/Python/return_many_values_with_cyt/cc1.cpython-38-x86_64-linux-gnu.so -fopenmp -ffast-math -march=native
building 'cc2' extension
gcc -pthread -B /home/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/anaconda3/lib/python3.8/site-packages/numpy/core/include -I/home/anaconda3/include/python3.8 -c cc2.cpp -o build/temp.linux-x86_64-3.8/cc2.o -O3 -w -fopenmp
g++ -pthread -shared -B /home/anaconda3/compiler_compat -L/home/anaconda3/lib -Wl,-rpath=/home/anaconda3/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.8/cc2.o -o /home/Desktop/DRAFT_CODING/Python/return_many_values_with_cyt/cc2.cpython-38-x86_64-linux-gnu.so -fopenmp -ffast-math -march=native
</code></pre>
<p>Now that I have transferred my project to my Raspberry Pi 4, everything still works, but when I run setuptools with the same command as usual on the remote terminal, for the same setup.py, this time nothing is written to the terminal while the script is cythonizing the source code.
Launching Python with the 'verbose' option is not much help.
In this case my virtualenv is based on Python 3.9 with gcc 10.21.
Can anyone tell me what is happening, or what the culprit is?
How can I get back the classic output messages I have always seen until today?
It is not the end of the world, but it was convenient for following the status of the execution, especially when creating many shared objects at the same time.</p>
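<p>One hedged workaround worth trying (an assumption, not a confirmed diagnosis): recent setuptools routes these status messages through the <code>distutils</code> logger, and a too-strict default threshold would swallow the INFO-level "cythoning ..." and compiler lines. Raising the threshold near the top of <code>setup.py</code>, before <code>setup()</code> is called:</p>

```python
# Hedged sketch: restore INFO-level build messages. Assumption: the
# silence comes from the distutils/setuptools log threshold, not from a
# redirected stdout on the remote terminal.
from distutils import log

log.set_threshold(log.INFO)  # show "cythoning ..." and compiler command lines
```

<p>Passing <code>-v</code> on the command line (<code>python setup.py build_ext --inplace -v</code>) should request the same verbosity without editing the file.</p>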
|
<python><cython><setup.py>
|
2022-12-18 16:36:48
| 1
| 457
|
JackColo_Ben4
|
74,842,630
| 17,908,075
|
How can I send messages or files to Discord without client triggers?
|
<p>I want to send files to my Discord channel.</p>
<p>But I want to send this file depending on some conditions, and these conditions are <strong>not related to Discord messages</strong>.</p>
<pre><code>def run_discord_bot():
    TOKEN = '111111111111111111111'
    intents = discord.Intents.default()
    intents.message_content = True
    client = discord.Client(intents=intents)

    @client.event
    async def on_ready():
        print(f'{client.user} is now running!')

    @client.event
    async def on_message(message):
        if message.author == client.user:
            return
        await send_message(message, user_message)

    client.run(TOKEN)
</code></pre>
<p>The <strong>on_message</strong> and <strong>on_ready</strong> methods are triggered by the <strong>discord client</strong>.</p>
<p>I don't want to trigger the function with channel messages or any client event.
Can I create a function here and trigger it from another script?
Just like an API.</p>
<p>I tried something like this:</p>
<pre><code>import discord
intents = discord.Intents.default()
client = discord.Client(intents=intents)
client.login(TOKEN)
channel = client.get_channel(1111111111111)
channel.send('Hello Discord!')
</code></pre>
<p>but it returns an error:</p>
<pre><code>RuntimeWarning: coroutine 'Client.login' was never awaited
client.login(TOKEN)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
</code></pre>
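<p>If the goal is just "send to a channel like an API", one hedged alternative avoids the gateway client entirely: a channel <em>webhook</em>. This is a sketch using only the standard library; <code>WEBHOOK_URL</code> is a placeholder you create under Channel Settings -> Integrations -> Webhooks:</p>

```python
import json
import urllib.request

# Placeholder URL: create a webhook in the channel settings and paste it here.
WEBHOOK_URL = "https://discord.com/api/webhooks/..."

def build_request(text: str) -> urllib.request.Request:
    # Discord webhooks accept a JSON body with a "content" field.
    payload = json.dumps({"content": text}).encode("utf-8")
    return urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send_to_channel(text: str) -> None:
    # Fire-and-forget send: no discord.Client, no event loop, no triggers.
    urllib.request.urlopen(build_request(text))
```

<p>Any other script can then import and call <code>send_to_channel(...)</code>; webhooks also accept file uploads via a multipart body.</p>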
|
<python><discord><discord.py>
|
2022-12-18 15:42:09
| 1
| 360
|
Selman
|
74,842,570
| 2,230,585
|
Change the font size of matplotlib.pyplot Axis after its been set
|
<p>How can I change the font size of the X-axis or Y-axis of a matplotlib.pyplot plot in Python?</p>
<p>I want to change the font size of text that has already been set, without re-entering the text. Is it possible to get the axis object and set its font size?</p>
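<p>Yes: the label and tick texts live on the Axes as <code>Text</code> objects, so they can be resized after the fact without re-entering any strings. A sketch (the label strings here are just examples):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlabel("time (s)")   # imagine these were set earlier in your code
ax.set_ylabel("amplitude")

# Axis titles: grab the existing Text objects and resize them in place.
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)

# Tick labels: resized in bulk, again without touching the text itself.
ax.tick_params(axis="both", labelsize=12)
```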
|
<python><matplotlib><plot>
|
2022-12-18 15:28:12
| 1
| 345
|
KaO
|
74,842,206
| 3,247,006
|
Are there any other cases which "select_for_update()" doesn't work but works with "print(qs)" in Django?
|
<p>I have <strong><code>Person</code> model</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/models.py"
from django.db import models
class Person(models.Model):
    name = models.CharField(max_length=30)
</code></pre>
<h2><<strong>CASE 1</strong>></h2>
<p>Then, when I use <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#select-for-update" rel="nofollow noreferrer"><strong>select_for_update()</strong></a> and <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.update" rel="nofollow noreferrer"><strong>update()</strong></a> of <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet" rel="nofollow noreferrer"><strong>a queryset</strong></a> together as shown below (I use <strong>Django 3.2.16</strong>):</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from django.db import transaction
from .models import Person
from django.http import HttpResponse
@transaction.atomic
def test(request):
    # Here # Here
    print(Person.objects.select_for_update().update(name="Tom"))
    # Here # Here
    print(Person.objects.select_for_update().all().update(name="Tom"))
    # Here # Here
    print(Person.objects.select_for_update().filter(id=1).update(name="Tom"))
    return HttpResponse("Test")
</code></pre>
<p>Only <strong><code>UPDATE</code> query</strong> is run without <strong><code>SELECT FOR UPDATE</code> query</strong> as shown below. *I use <strong>PostgreSQL</strong> and these logs below are <strong>the queries of PostgreSQL</strong> and you can check <a href="https://stackoverflow.com/questions/54780698/postgresql-database-log-transaction/73432601#73432601"><strong>on PostgreSQL, how to log queries with transaction queries such as "BEGIN" and "COMMIT"</strong></a>:</p>
<p><a href="https://i.sstatic.net/Dw2IM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dw2IM.png" alt="enter image description here" /></a></p>
<p>But, when I use <code>select_for_update()</code> and <code>update()</code> of <strong>a queryset</strong> separately then put <code>print(qs)</code> between them as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from django.db import transaction
from .models import Person
from django.http import HttpResponse
@transaction.atomic
def test(request):
    qs = Person.objects.select_for_update()
    print(qs) # Here
    qs.update(name="Tom")
    qs = Person.objects.select_for_update().all()
    print(qs) # Here
    qs.update(name="Tom")
    qs = Person.objects.select_for_update().filter(id=1)
    print(qs) # Here
    qs.update(name="Tom")
    return HttpResponse("Test")
</code></pre>
<p><strong><code>SELECT FOR UPDATE</code> and <code>UPDATE</code> queries</strong> are run as shown below:</p>
<p><a href="https://i.sstatic.net/h0gP2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h0gP2.png" alt="enter image description here" /></a></p>
<p>In addition, when I use <code>select_for_update()</code> and <a href="https://docs.djangoproject.com/en/4.1/ref/models/instances/#django.db.models.Model.save" rel="nofollow noreferrer"><strong>save()</strong></a> of <strong>an object</strong> separately as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from django.db import transaction
from .models import Person
from django.http import HttpResponse
@transaction.atomic
def test(request):
    # Here
    person1 = Person.objects.select_for_update().first()
    person1.name = "Tom"
    person1.save() # Here
    # Here
    person2 = Person.objects.select_for_update().all().first()
    person2.name = "Tom"
    person2.save() # Here
    # Here
    person3 = Person.objects.select_for_update().filter(id=1).first()
    person3.name = "Tom"
    person3.save() # Here
    # Here
    person4 = Person.objects.select_for_update().get(id=1)
    person4.name = "Tom"
    person4.save() # Here
    return HttpResponse("Test")
</code></pre>
<p><strong><code>SELECT FOR UPDATE</code> and <code>UPDATE</code> queries</strong> are run as shown below:</p>
<p><a href="https://i.sstatic.net/ObOj3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ObOj3.png" alt="enter image description here" /></a></p>
<h2><<strong>CASE 2</strong>></h2>
<p>And, when I use <code>select_for_update()</code> and <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.delete" rel="nofollow noreferrer"><strong>delete()</strong></a> of <strong>a queryset</strong> together as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from django.db import transaction
from .models import Person
from django.http import HttpResponse
@transaction.atomic
def test(request):
    # Here # Here
    print(Person.objects.select_for_update().delete())
    # Here # Here
    print(Person.objects.select_for_update().all().delete())
    # Here # Here
    print(Person.objects.select_for_update().filter(id=1).delete())
    return HttpResponse("Test")
</code></pre>
<p>Only <strong><code>DELETE</code> query</strong> is run without <strong><code>SELECT FOR UPDATE</code> query</strong> as shown below.</p>
<p><a href="https://i.sstatic.net/jZ4Ae.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jZ4Ae.png" alt="enter image description here" /></a></p>
<p>But, when I use <code>select_for_update()</code> and <code>delete()</code> of <strong>a queryset</strong> separately then put <code>print(qs)</code> between them as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from django.db import transaction
from .models import Person
from django.http import HttpResponse
@transaction.atomic
def test(request):
    qs = Person.objects.select_for_update()
    print(qs) # Here
    qs.delete()
    qs = Person.objects.select_for_update().all()
    print(qs) # Here
    qs.delete()
    qs = Person.objects.select_for_update().filter(id=1)
    print(qs) # Here
    qs.delete()
    return HttpResponse("Test")
</code></pre>
<p><strong><code>SELECT FOR UPDATE</code> and <code>DELETE</code> queries</strong> are run as shown below:</p>
<p><a href="https://i.sstatic.net/2YMM9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2YMM9.png" alt="enter image description here" /></a></p>
<p>In addition, when I use <code>select_for_update()</code> and <code>delete()</code> of <strong>an object</strong> together as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from django.db import transaction
from .models import Person
from django.http import HttpResponse
@transaction.atomic
def test(request):
    # Here # Here
    print(Person.objects.select_for_update().first().delete())
    # Or
    # Here # Here
    print(Person.objects.select_for_update().all().first().delete())
    # Or
    # Here # Here
    print(Person.objects.select_for_update().filter(id=1).first().delete())
    # Or
    # Here # Here
    print(Person.objects.select_for_update().get(id=1).delete())
    return HttpResponse("Test")
</code></pre>
<p><strong><code>SELECT FOR UPDATE</code> and <code>DELETE</code> queries</strong> are run as shown below:</p>
<p><a href="https://i.sstatic.net/QQ7ZE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QQ7ZE.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/ARgR2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ARgR2.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/3VGTi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3VGTi.png" alt="enter image description here" /></a></p>
<p>I know <a href="https://docs.djangoproject.com/en/4.1/topics/db/queries/#querysets-are-lazy" rel="nofollow noreferrer"><strong>QuerySets are lazy</strong></a> and <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#when-querysets-are-evaluated" rel="nofollow noreferrer"><strong>When QuerySets are evaluated</strong></a>.</p>
<p>So, are there any other cases which <code>select_for_update()</code> doesn't work but works with <code>print(qs)</code> in Django in addition to what I've shown above?</p>
|
<python><sql><python-3.x><django><select-for-update>
|
2022-12-18 14:37:35
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
74,841,979
| 17,889,840
|
How to remove a word from str in list in dictionary
|
<p>I want to remove all <code><start></code> and <code><end></code> from a dictionary like this one:</p>
<pre class="lang-py prettyprint-override"><code>my_dict = {1:['<start> the woman is slicing onions <end>',
'<start> a woman slices a piece of onion with a knife <end>',
'<start> a woman is chopping an onion <end>',
'<start> a woman is slicing an onion <end>',
'<start> a woman is chopping onions <end>',
'<start> a woman is slicing onions <end>',
'<start> a girl is cutting an onion <end>'],
2: ['<start> a large cat is watching and sniffing a spider <end>',
'<start> a cat sniffs a bug <end>',
'<start> a cat is sniffing a bug <end>',
'<start> a cat is intently watching an insect crawl across the floor <end>',
'<start> the cat checked out the bug on the ground <end>',
'<start> the cat is watching a bug <end>',
'<start> a cat is looking at an insect <end>'],
3:['<start> a man is playing a ukulele <end>',
'<start> a man is playing a guitar <end>',
'<start> a person is playing a guitar <end>',]}
</code></pre>
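<p>A sketch of one way to do it, assuming the markers only ever appear as a prefix/suffix (<code>str.removeprefix</code>/<code>removesuffix</code> need Python 3.9+):</p>

```python
def strip_markers(captions_by_id: dict) -> dict:
    # Drop the "<start>" prefix and "<end>" suffix from every caption,
    # then strip the leftover surrounding spaces.
    return {
        key: [s.removeprefix("<start>").removesuffix("<end>").strip()
              for s in captions]
        for key, captions in captions_by_id.items()
    }
```

<p>If the markers can occur anywhere inside a string, <code>s.replace('<start>', '').replace('<end>', '').strip()</code> is the blunter alternative.</p>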
|
<python><dictionary>
|
2022-12-18 14:06:45
| 2
| 472
|
A_B_Y
|
74,841,827
| 12,887,070
|
Python - How to trim black borders from video (top, bottom, sides)
|
<p>If I have a <code>.mp4</code> video file with black borders on the top, bottom, and/or sides, how can I trim these borders off with Python?</p>
<p>I do not want to replace the borders with anything; I just want to trim the video.</p>
<p>For example, if I run <code>magically_trim_black_boarders_from_vid(in_vid)</code> on a video that looks like the frame below, the video's height would not change, but its width would be reduced.</p>
<p><a href="https://i.sstatic.net/wnkaa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wnkaa.png" alt="enter image description here" /></a></p>
<p>Thanks!</p>
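<p>Without knowing the exact pipeline, here is a sketch of the detection half: find the bounding box of non-black pixels on a representative frame (a NumPy array, as moviepy's <code>get_frame</code> returns); <code>thresh</code> is a guessed tolerance for near-black borders:</p>

```python
import numpy as np

def content_bbox(frame: np.ndarray, thresh: int = 10):
    # frame: H x W x 3 (or H x W) array; returns (top, bottom, left, right)
    # of the smallest box containing all pixels brighter than `thresh`.
    gray = frame if frame.ndim == 2 else frame.mean(axis=2)
    mask = gray > thresh
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1
```

<p>The resulting coordinates could then be fed to moviepy's crop effect, e.g. something like <code>crop(clip, y1=top, y2=bottom, x1=left, x2=right)</code> (the variable names here are hypothetical).</p>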
|
<python><ffmpeg><moviepy>
|
2022-12-18 13:43:43
| 1
| 489
|
perry_the_python
|
74,841,806
| 544,542
|
Python Flask how to get values for items such as DN for LDAP authentication
|
<p>I am using <a href="https://flask-ldap3-login.readthedocs.io/en/latest/quick_start.html#basic-application" rel="nofollow noreferrer">https://flask-ldap3-login.readthedocs.io/en/latest/quick_start.html#basic-application</a> which comes with a sample script that contains the following</p>
<pre><code># Hostname of your LDAP Server
app.config['LDAP_HOST'] = 'ad.mydomain.com'
# Base DN of your directory
app.config['LDAP_BASE_DN'] = 'dc=mydomain,dc=com'
# Users DN to be prepended to the Base DN
app.config['LDAP_USER_DN'] = 'ou=users'
# Groups DN to be prepended to the Base DN
app.config['LDAP_GROUP_DN'] = 'ou=groups'
# The RDN attribute for your user schema on LDAP
app.config['LDAP_USER_RDN_ATTR'] = 'cn'
</code></pre>
<p>I presume I can get <code>app.config['LDAP_HOST']</code> by running <code>nslookup -type=srv _ldap._tcp.mydomain.co.uk</code> and using that value but how would I go about the rest?</p>
<p>I've run <code>get-aduser jo.bloggs -Properties *</code> but this gives me so many values I'm not sure where to start.</p>
<p>Any advice?</p>
<p>Thanks</p>
|
<python><ldap>
|
2022-12-18 13:41:00
| 1
| 3,797
|
pee2pee
|
74,841,782
| 11,274,362
|
Problem when download Python package and then install from .whl files
|
<p>I want to download Python packages from <code>pypi.org</code>, move them to another machine, and finally install those packages from the downloaded <code>.whl</code> files on that machine.
This is the <code>requirements.txt</code> file:</p>
<pre><code>amqp==5.1.1
anytree==2.8.0
asgiref==3.5.2
async-timeout==4.0.2
attrs==22.1.0
autobahn==22.7.1
Automat==22.10.0
beautifulsoup4==4.11.1
billiard==3.6.4.0
celery==5.2.7
certifi==2022.9.24
cffi==1.15.1
channels==4.0.0
channels-redis==4.0.0
charset-normalizer==2.1.1
click==8.1.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
constantly==15.1.0
coreapi==2.3.3
coreschema==0.0.4
cryptography==38.0.3
daphne==4.0.0
Deprecated==1.2.13
Django==4.0.8
django-celery-beat==2.3.0
django-celery-results==2.4.0
django-filter==22.1
django-jalali==6.0.0
django-timezone-field==5.0
djangorestframework==3.14.0
djangorestframework-simplejwt==5.2.2
drf-yasg==1.21.4
et-xmlfile==1.1.0
gunicorn==20.1.0
h2==4.1.0
hpack==4.0.0
hyperframe==6.0.1
hyperlink==21.0.0
idna==3.4
incremental==22.10.0
inflection==0.5.1
itypes==1.2.0
jdatetime==4.1.0
Jinja2==3.1.2
kombu==5.2.4
lxml==4.9.1
MarkupSafe==2.1.1
msgpack==1.0.4
multitasking==0.0.11
numpy==1.23.3
openpyxl==3.0.10
packaging==21.3
pandas==1.5.0
pandas-datareader==0.10.0
Pillow==9.2.0
priority==1.3.0
prompt-toolkit==3.0.31
psutil==5.9.2
psycopg2==2.9.4
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.21
PyJWT==2.6.0
pyOpenSSL==22.1.0
pyparsing==3.0.9
python-crontab==2.6.0
python-dateutil==2.8.2
python-dotenv==0.21.0
pytz==2022.4
redis==4.3.4
requests==2.28.1
ruamel.yaml==0.17.21
ruamel.yaml.clib==0.2.7
service-identity==21.1.0
simplejson==3.17.6
six==1.16.0
soupsieve==2.3.2.post1
sqlparse==0.4.3
Twisted==22.10.0
txaio==22.2.1
typing_extensions==4.4.0
tzdata==2022.5
Unidecode==1.3.6
uritemplate==4.1.1
urllib3==1.26.12
vine==5.0.0
wcwidth==0.2.5
wrapt==1.14.1
yfinance==0.1.74
zope.interface==5.5.1
</code></pre>
<p>I did download packages with:</p>
<pre><code>pip download -r requirements.txt
</code></pre>
<p>This is list of downloaded pacakges in <code>~/LocalPythonPackage</code> directory:</p>
<pre><code>→ ls
amqp-5.1.1-py3-none-any.whl lxml-4.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl
anytree-2.8.0-py2.py3-none-any.whl MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
asgiref-3.5.2-py3-none-any.whl msgpack-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
async_timeout-4.0.2-py3-none-any.whl multitasking-0.0.11-py3-none-any.whl
attrs-22.1.0-py2.py3-none-any.whl numpy-1.23.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
autobahn-22.7.1.tar.gz openpyxl-3.0.10-py2.py3-none-any.whl
Automat-22.10.0-py2.py3-none-any.whl packaging-21.3-py3-none-any.whl
beautifulsoup4-4.11.1-py3-none-any.whl pandas-1.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
billiard-3.6.4.0-py3-none-any.whl pandas_datareader-0.10.0-py3-none-any.whl
celery-5.2.7-py3-none-any.whl Pillow-9.2.0-cp310-cp310-manylinux_2_28_x86_64.whl
certifi-2022.9.24-py3-none-any.whl priority-1.3.0-py2.py3-none-any.whl
cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl prompt_toolkit-3.0.31-py3-none-any.whl
channels-4.0.0-py3-none-any.whl psutil-5.9.2-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl
channels_redis-4.0.0-py3-none-any.whl psycopg2-2.9.4.tar.gz
charset_normalizer-2.1.1-py3-none-any.whl pyasn1-0.4.8-py2.py3-none-any.whl
click-8.1.3-py3-none-any.whl pyasn1_modules-0.2.8-py2.py3-none-any.whl
click_didyoumean-0.3.0-py3-none-any.whl pycparser-2.21-py2.py3-none-any.whl
click_plugins-1.1.1-py2.py3-none-any.whl PyJWT-2.6.0-py3-none-any.whl
click_repl-0.2.0-py3-none-any.whl pyOpenSSL-22.1.0-py3-none-any.whl
constantly-15.1.0-py2.py3-none-any.whl pyparsing-3.0.9-py3-none-any.whl
coreapi-2.3.3-py2.py3-none-any.whl python-crontab-2.6.0.tar.gz
coreschema-0.0.4.tar.gz python_dateutil-2.8.2-py2.py3-none-any.whl
cryptography-38.0.3-cp36-abi3-manylinux_2_28_x86_64.whl python_dotenv-0.21.0-py3-none-any.whl
daphne-4.0.0-py3-none-any.whl pytz-2022.4-py2.py3-none-any.whl
Deprecated-1.2.13-py2.py3-none-any.whl redis-4.3.4-py3-none-any.whl
Django-4.0.8-py3-none-any.whl requests-2.28.1-py3-none-any.whl
django_celery_beat-2.3.0-py3-none-any.whl ruamel.yaml-0.17.21-py3-none-any.whl
django_celery_results-2.4.0-py3-none-any.whl ruamel.yaml.clib-0.2.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl
django_filter-22.1-py3-none-any.whl service_identity-21.1.0-py2.py3-none-any.whl
django_jalali-6.0.0-py3-none-any.whl setuptools-65.6.3-py3-none-any.whl
djangorestframework-3.14.0-py3-none-any.whl simplejson-3.17.6-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl
djangorestframework_simplejwt-5.2.2-py3-none-any.whl six-1.16.0-py2.py3-none-any.whl
django_timezone_field-5.0-py3-none-any.whl soupsieve-2.3.2.post1-py3-none-any.whl
drf_yasg-1.21.4-py3-none-any.whl sqlparse-0.4.3-py3-none-any.whl
et_xmlfile-1.1.0-py3-none-any.whl Twisted-22.10.0-py3-none-any.whl
gunicorn-20.1.0-py3-none-any.whl txaio-22.2.1-py2.py3-none-any.whl
h2-4.1.0-py3-none-any.whl typing_extensions-4.4.0-py3-none-any.whl
hpack-4.0.0-py3-none-any.whl tzdata-2022.5-py2.py3-none-any.whl
hyperframe-6.0.1-py3-none-any.whl Unidecode-1.3.6-py3-none-any.whl
hyperlink-21.0.0-py2.py3-none-any.whl uritemplate-4.1.1-py2.py3-none-any.whl
idna-3.4-py3-none-any.whl urllib3-1.26.12-py2.py3-none-any.whl
incremental-22.10.0-py2.py3-none-any.whl vine-5.0.0-py2.py3-none-any.whl
inflection-0.5.1-py2.py3-none-any.whl wcwidth-0.2.5-py2.py3-none-any.whl
itypes-1.2.0-py2.py3-none-any.whl wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl
jdatetime-4.1.0-py3-none-any.whl yfinance-0.1.74-py2.py3-none-any.whl
Jinja2-3.1.2-py3-none-any.whl zope.interface-5.5.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl
</code></pre>
<p>And after copying all <code>.whl</code> files to the target computer, I ran this code:</p>
<pre><code>pip install --no-index --find-links ~/LocalPythonPackage -r requirements.txt
</code></pre>
<p>But I got this error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement MarkupSafe==2.1.1 (from versions: none)
ERROR: No matching distribution found for MarkupSafe==2.1.1
</code></pre>
<p>I use <code>python3.11</code> and <code>Ubuntu 20.04.5 LTS</code> on both computers. I think this problem is caused by dependencies or by OS differences.
Can you help me solve this problem?</p>
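<p>One detail worth noting, as a hedged guess at the mismatch: the wheel names above carry interpreter tags like <code>cp310</code>, which only install on CPython 3.10, so if the interpreter resolving the install differs from the one that did the download, pip refuses them with exactly a "no matching distribution" error. pip can download for explicit target tags instead (a sketch; the platform tag is an example):</p>

```python
import sys

def pip_download_cmd(python_version: str, platform: str,
                     req: str = "requirements.txt",
                     dest: str = "LocalPythonPackage") -> list:
    # --only-binary=:all: is required when cross-downloading with
    # --python-version/--platform; run the result with subprocess.run(cmd).
    return [
        sys.executable, "-m", "pip", "download",
        "-r", req, "-d", dest,
        "--only-binary=:all:",
        "--python-version", python_version,
        "--platform", platform,
    ]

cmd = pip_download_cmd("311", "manylinux2014_x86_64")
```

<p>Running the returned command via <code>subprocess.run(cmd, check=True)</code> fetches wheels for CPython 3.11 on manylinux regardless of the local interpreter.</p>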
|
<python><django><pip>
|
2022-12-18 13:38:33
| 2
| 977
|
rahnama7m
|
74,841,444
| 4,865,723
|
Use os.execvp() to call Python script (with Tkinter based GUI) via pkexec and setting environment variables before
|
<p>This is a follow-up question of <a href="https://stackoverflow.com/q/74840452/4865723">Use tkinter based PySimpleGUI as root user via pkexec</a>.</p>
<p>I have a Python GUI application. It should be able to run as user and as root. For the latter I know I have to set <code>$DISPLAY</code> and <code>$XAUTHORITY</code> to get a GUI application work under root. I use <code>pkexec</code> to start that application as root.</p>
<p>I assume the problem is how I use <code>os.execvp()</code> to call <code>pkexec</code> with all its arguments. But I don't know how to fix this. In the linked previous question and answer it works when calling <code>pkexec</code> directly via bash.</p>
<p>For this example the full path of the script should be <code>/home/user/x.py</code>.</p>
<pre><code>#!/usr/bin/env python3
# FILENAME need to be x.py !!!
import os
import sys
import getpass
import PySimpleGUI as sg
def main_as_root():
    # See: https://stackoverflow.com/q/74840452
    cmd = ['pkexec',
           'env',
           f'DISPLAY={os.environ["DISPLAY"]}',
           f'XAUTHORITY={os.environ["XAUTHORITY"]}',
           f'{sys.executable} /home/user/x.py']
    # output here is
    # ['pkexec', 'env', 'DISPLAY=:0.0', 'XAUTHORITY=/home/user/.Xauthority', '/usr/bin/python3 ./x.py']
    print(cmd)
    # replace the process
    os.execvp(cmd[0], cmd)


def main():
    main_window = sg.Window(title=f'Run as "{getpass.getuser()}".',
                            layout=[[]], margins=(100, 50))
    main_window.read()


if __name__ == '__main__':
    if len(sys.argv) == 2 and sys.argv[1] == 'root':
        main_as_root()  # no return because of os.execvp()
    # else
    main()
</code></pre>
<p>Calling that script as <code>/home/user/x.py root</code> means that the script will call itself again via <code>pkexec</code>. I got this output (translated from German to English):</p>
<pre><code>['pkexec', 'env', 'DISPLAY=:0.0', 'XAUTHORITY=/home/user/.Xauthority', '/usr/bin/python3 /home/user/x.py']
/usr/bin/env: „/usr/bin/python3 /home/user/x.py“: File or folder not found
/usr/bin/env: Use -[v]S, to takeover options via #!
</code></pre>
<p>To me it looks like the <code>python3</code> part of the command is interpreted by <code>env</code> and not by <code>pkexec</code>. Something is not as expected when <code>os.execvp()</code> interprets the <code>cmd</code>.</p>
<p>But when I do this on the shell it works well.</p>
<pre><code>pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY python3 /home/user/x.py
</code></pre>
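<p>A sketch of the likely fix: each <code>argv</code> element must be a single token, mirroring how the shell tokenizes the working command, so the interpreter and the script path go in as separate list items:</p>

```python
import os
import sys

def build_pkexec_cmd(script_path: str) -> list:
    # env executes its first non-assignment argument as a program name;
    # the single string "python3 /home/user/x.py" is looked up as one
    # (nonexistent) file, which matches the "File or folder not found" error.
    return [
        "pkexec", "env",
        f"DISPLAY={os.environ.get('DISPLAY', ':0')}",
        f"XAUTHORITY={os.environ.get('XAUTHORITY', '')}",
        sys.executable,   # separate item ...
        script_path,      # ... not joined into one string
    ]

cmd = build_pkexec_cmd("/home/user/x.py")
# os.execvp(cmd[0], cmd)  # replace the process, as before
```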
|
<python><environment-variables><polkit>
|
2022-12-18 12:47:03
| 1
| 12,450
|
buhtz
|
74,841,344
| 19,797,660
|
In Pandas dataframe, how to add metadata to a column?
|
<p>I am creating my own module for the calculation of price indicators, and I would like to add metadata to each column of a dataframe, so each column will have unique metadata.</p>
<p>This is important to me since I have multiple functions and I can recalculate the output of one function using another function; I also use some functions to do calculations inside the function I am calling.</p>
<p>For example:</p>
<pre><code>def average_day_range(price_df: PandasDataFrame, n: int = 14, calculation_tool: int = 0):
    '''
    0 - SMA, 1 - EMA, 2 - WMA, 3 - EWMA, 4 - VWMA, 5 - ALMA, 6 - LSMA, 7 - HULL MA
    :param price_df:
    :param n:
    :param calculation_tool:
    :return:
    '''
    if calculation_tool not in range(8):
        calculation_tool = 0

    function_dict = {
        0: {"func": sma, "name": "SMA"},
        1: {"func": ema, "name": "EMA"},
        2: {"func": wma, "name": "WMA"},
        3: {"func": ewma, "name": "EWMA"},
        4: {"func": volume_weighted_moving_average, "name": "VWMA"},
        5: {"func": arnaud_legoux_ma, "name": "ALMA"},
        6: {"func": least_squares_moving_average, "name": "LSMA"},
        7: {"func": hull_ma, "name": "HULL_MA"},
    }

    function, name = function_dict[calculation_tool]["func"], function_dict[calculation_tool]["name"]
    high_var = function(price_df=price_df, input_mode=3, n=n, from_price=True)
    low_var = function(price_df=price_df, input_mode=4, n=n, from_price=True)
    adr = high_var[f'{name}_{n}'] - low_var[f'{name}_{n}']
    adr.rename(columns={0: f'Average Day Range {name}{n}'}, inplace=True)
    return adr
</code></pre>
<p>As you see, I can use different functions to do the calculations inside one function, and then, if I would like to, I can recalculate the output using different functions.</p>
<p>So I want to avoid naming columns like <code>func1 with func2 from func3</code> etc., because it is very confusing, not to mention the horrendously long column names. Also, I don't want to store multiple dataframes separately but to concatenate the main dataframe with raw data and the dataframe with calculated output.</p>
<p>So let's say I have a dataframe with such columns <code>datetime, open, close, high, low, volume, indicator1, indicator2</code>.</p>
<p>How do I add metadata to each column in a dataframe separately, so I will be able to easily identify the column I am looking at? I would like the metadata to contain information such as the source of the calculation (whether it was <code>open, high, close</code> or <code>low</code> or something else) and more.
(Identifying columns when there are only a couple of them is easy, but if I have ~50 columns, with names that are very similar to each other, then anybody who is looking at it will just be confused).</p>
<p>If adding the metadata to each column is not possible, is there maybe a way to connect a <code>.json</code> file to each column? If I have a wrong thinking process, is there another way to do what I want to do?</p>
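<p>For what it's worth, pandas has no true per-column metadata, but <code>DataFrame.attrs</code> (a plain dict attached to the frame, documented as experimental) can be keyed by column name to approximate it. A sketch with hypothetical indicator names:</p>

```python
import pandas as pd

df = pd.DataFrame({"close": [10.0, 10.5], "SMA_14": [10.0, 10.2]})

# One entry per derived column: where it came from and how it was built.
df.attrs["SMA_14"] = {"source": "close", "window": 14, "func": "sma"}

def column_meta(frame: pd.DataFrame, col: str) -> dict:
    # Empty dict for raw columns that carry no annotation.
    return frame.attrs.get(col, {})
```

<p>Since <code>attrs</code> is not guaranteed to survive every operation (some concat/merge paths drop it), mirroring the same dict to a sidecar JSON file is a reasonable belt-and-braces addition.</p>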
|
<python><pandas><dataframe>
|
2022-12-18 12:33:30
| 0
| 329
|
Jakub Szurlej
|
74,841,340
| 490,648
|
No method to rename Pandas index names (not labels) inline?
|
<p>I have a dataframe <code>bmi</code>. I can summarize it like this:</p>
<p><a href="https://i.sstatic.net/V9Vo9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V9Vo9.png" alt="enter image description here" /></a></p>
<p>Now I want to summarize the dataframe based on <code>Age</code> values measured in decades. So I do the following:</p>
<pre><code>bmi.groupby(by=bmi.Age//10).describe().stack()
</code></pre>
<p>which shows me the following summary:
<a href="https://i.sstatic.net/KzsjL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KzsjL.png" alt="enter image description here" /></a></p>
<p>Note that this creates an index named <code>Age</code> which I'd rather name <code>Decade</code>.
How can I rename the name of this index (to be precise, index level) during the generation of the dataframe?</p>
<p>So I am looking for a function <code>which_func()</code> so that my code can read like this:</p>
<pre><code>bmi.groupby(by=bmi.Age//10).describe().which_func({'Age':'Decade'}).stack()
</code></pre>
<p>and I get the output:
<a href="https://i.sstatic.net/jLaVd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jLaVd.png" alt="enter image description here" /></a></p>
<p>The shortest single-line version I can come up with is:</p>
<pre><code>bmi.groupby(by=bmi.Age//10).describe().reset_index(level='Age', names='Decades').set_index('Decades').stack()
</code></pre>
<p>which seems redundant.</p>
<p>The built-in method <code>DataFrame.rename()</code> renames the labels of an index, not its name, and other methods require extracting the index or their names and then renaming, overwriting them so the code cannot be chained.</p>
<p>Given that this is a very common situation in <code>groupby()</code>-<code>aggregate()</code> setup, is there really no way to do this with chained code?</p>
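<p>It appears <code>DataFrame.rename_axis</code> is the <code>which_func</code> being looked for: with a dict passed to <code>index=</code> it renames index <em>levels</em> (not labels) and it chains. A sketch with made-up data standing in for <code>bmi</code>:</p>

```python
import pandas as pd

bmi = pd.DataFrame({"Age": [23, 27, 34, 38], "BMI": [21.5, 24.0, 26.2, 23.1]})

summary = (
    bmi.groupby(by=bmi.Age // 10)
       .describe()
       .rename_axis(index={"Age": "Decade"})  # renames the level name only
       .stack()
)
```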
|
<python><pandas>
|
2022-12-18 12:32:42
| 1
| 494
|
farhanhubble
|
74,840,937
| 16,682,019
|
RegEx works in regexr but not in Python re
|
<p>I have this regex: <code>If you don't want these messages, please [a-zA-Z0-9öäüÖÄÜ<>\n\-=#;&?_ "/:.@]+settings<\/a></code>. It works on <a href="https://regexr.com/" rel="nofollow noreferrer">regexr</a> but not when I am using the <code>re</code>
library in Python:</p>
<pre><code>data = "<my text (comes from a file)>"
search = "If you don't want these messages, please [a-zA-Z0-9öäüÖÄÜ<>\n\-=#;&?_ \"/:.@]+settings<\/a>" # this search string comes from a database, so it's not hardcoded into my script
print(re.search(search, data))
</code></pre>
<p>Is there something I don't see?</p>
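<p>Hard to say without the real <code>data</code>, but two hedged things to check: print <code>repr(search)</code> and <code>repr(data)</code> to see what Python actually received (a pattern loaded from a database contains real backslash characters, whereas a non-raw source literal turns <code>\n</code> into a newline before <code>re</code> ever sees it), and note that the character class allows <code>\n</code> but not <code>\r</code>, so Windows-style <code>\r\n</code> line endings in the data break the match. A sketch of that second failure mode:</p>

```python
import re

# Example data with \r\n line endings (a guessed failure mode, not your file):
data = ('If you don\'t want these messages, please '
        '<a href="https://example.com/settings">\r\n'
        'change your settings</a>')

# Original character class plus \r:
search = (r"If you don't want these messages, please "
          r"[a-zA-Z0-9öäüÖÄÜ<>\r\n\-=#;&?_ \"/:.@]+settings</a>")

m = re.search(search, data)
```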
|
<python><regex><python-re>
|
2022-12-18 11:29:53
| 1
| 1,900
|
Julius Babies
|
74,840,911
| 10,563,068
|
How to split a dataframe into multiple dataframe in pandas dynamically based on the column count?
|
<p>I am still a newbie in pandas. I have an Excel file which contains more than 700 columns and 600 rows. I am trying to split it into multiple dataframes so I can insert them into a SQL table. The reason I am splitting is that I am getting an error: <code>The statement has been terminated. (3621); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Cannot create a row of size 8349 which is greater than the allowable maximum row size of 8060</code>. I want to split it into 250 columns per dataframe and also keep the first column in each chunk so I can join them back in SQL.</p>
<p>Another reason why I cannot hard-code the dataframe ranges is that different Excel files might have different numbers of columns. Let's say the Excel file contains 800 columns but I set the range as 700:850; then <code>iloc</code> raises <code>IndexError: single positional indexer is out-of-bounds</code>.</p>
<p>If there is a way to do it dynamically by splitting every 250 columns, that would be great. Below is my code, which works but requires me to specify the index ranges:</p>
<pre><code> def sqlcol(dfparam):
    dtypedict = {}
    for i, j in zip(dfparam.columns, dfparam.dtypes):
        if "object" in str(j):
            dtypedict.update({i: sqla.types.NVARCHAR(length=255)})
        if "datetime" in str(j):
            dtypedict.update({i: sqla.types.DateTime()})
        if "float" in str(j):
            dtypedict.update({i: sqla.types.Float()})
        if "int" in str(j):
            dtypedict.update({i: sqla.types.BIGINT()})
        if "decimal" in str(j):
            dtypedict.update({i: sqla.types.DECIMAL()})
    return dtypedict

def import_varchar_to_hst03(db: str, tb_name: str, df):
    n = 1
    import pandas as pd
    import sqlalchemy as sqla
    import urllib
    import pyodbc
    t = sqlcol(df1)
    a = sqlcol(df2)
    c = sqlcol(df3)
    quoted = urllib.parse.quote_plus("DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE="+db+";Trusted_Connection=yes;")
    engine = sqla.create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted), fast_executemany=True)
    df1.to_sql(tb_name, schema='dbo', con=engine, index=False, dtype=t, if_exists='replace')
    df2.to_sql(tb_name + ("A" * n), schema='dbo', con=engine, index=False, dtype=a, if_exists='replace')
    df3.to_sql(tb_name + ("B" * n), schema='dbo', con=engine, index=False, dtype=c, if_exists='replace')

import pandas as pd
import numpy as np

Ex = pd.read_excel(r'C:\Users\sriram.ramasamy\Desktop\Testsriram.xlsx', sheet_name=None)
for sh, v in Ex.items():
    df = pd.DataFrame(v)
    df1 = df.iloc[:, :255]
    df2 = df.iloc[:, np.r_[0:1, 256:500]]
    df3 = df.iloc[:, np.r_[0:1, 501:700]]
    import_varchar_to_hst03('InsightMaster', sh, df)
print('data imported to database')
</code></pre>
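<p>One dynamic approach (a sketch; the chunk size and the join-key assumption are illustrative, not part of the original code) is to walk <code>df.columns</code> in slices, prepending the first column to every chunk so the pieces can be joined back together in SQL:</p>

```python
import pandas as pd

def split_by_columns(df, chunk_size=250):
    """Yield sub-frames of at most `chunk_size` columns, repeating the
    first column (assumed to be the join key) in every chunk."""
    key = df.columns[0]
    rest = df.columns[1:]
    step = chunk_size - 1          # room left after the repeated key column
    for start in range(0, len(rest), step):
        cols = [key] + list(rest[start:start + step])
        yield df[cols]

# Small stand-in for a wide Excel sheet: 10 columns, chunks of 4
demo = pd.DataFrame({f"c{i}": range(3) for i in range(10)})
chunks = list(split_by_columns(demo, chunk_size=4))
```

<p>Each chunk can then be passed to <code>to_sql</code> with a distinct table-name suffix, regardless of how many columns the source file happens to have.</p>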
|
<python><sql><pandas><dataframe><dynamic>
|
2022-12-18 11:24:38
| 1
| 453
|
Sriram
|
74,840,864
| 7,713,770
|
How to show content with zip in view with django?
|
<p>I have a Django application, and I am trying to show values from the backend in the template. There is a method <code>show_extracted_data_from_file</code> where I combine three methods:</p>
<pre><code>class FilterText:
    def total_cost_fruit(self):
        return [3588.20, 5018.75, 3488.16, 444]

    def total_cost_fruit2(self):
        return [3588.20, 5018.75, 3488.99]

    def total_cost_fruit3(self):
        return [3588.20, 5018.75, 44.99]

    def show_extracted_data_from_file(self):
        regexes = [
            self.total_cost_fruit(),
            self.total_cost_fruit2(),
            self.total_cost_fruit3(),
        ]
        matches = [regex for regex in regexes]
        return zip(matches)
</code></pre>
<p>and I have the view:</p>
<pre><code>def test(request):
    content_pdf = ""
    filter_text = FilterText()
    content_pdf = filter_text.show_extracted_data_from_file()
    context = {"content_pdf": content_pdf}
    return render(request, "main/test.html", context)
</code></pre>
<p>and html:</p>
<pre><code><div class="wishlist">
<table>
<tr>
<th>Method 1</th>
<th>Method 2</th>
<th>Method 3</th>
</tr>
{% for value in content_pdf %}
<tr>
<td>{{value.0}}</td>
<td>{{value.1}}</td>
<td>{{value.2}}</td>
</tr>
{% endfor %}
</table>
</div>
</code></pre>
<p>But it looks now:</p>
<pre><code>Method 1 Method 2 Method 3
[3588.2, 5018.75, 3488.16, 444]
[3588.2, 5018.75, 3488.99]
[3588.2, 5018.75, 44.99]
</code></pre>
<p>But of course I want the values under each other:</p>
<pre><code>Method 1 Method 2 Method 3
3588.2 3588.2 3588.2
5018.75, 44.99 3488.99
5018.75,
5018.75
3488.16
444
</code></pre>
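<p>As written, <code>zip(matches)</code> yields one 1-tuple per list, which is why each whole list lands in a single cell. To line the values up per column, the lists have to be transposed; since they differ in length, <code>itertools.zip_longest</code> with an empty fill value keeps the table rectangular. A sketch of what <code>show_extracted_data_from_file</code> could return instead:</p>

```python
from itertools import zip_longest

# The three lists returned by the question's methods
cost1 = [3588.20, 5018.75, 3488.16, 444]
cost2 = [3588.20, 5018.75, 3488.99]
cost3 = [3588.20, 5018.75, 44.99]

# Transpose: row i holds the i-th value of each list, padded with ""
rows = list(zip_longest(cost1, cost2, cost3, fillvalue=""))
# The template's value.0 / value.1 / value.2 now index one cell each.
```

<p>Iterating over <code>rows</code> in the template then renders one value per column per table row.</p>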
|
<python><html><django>
|
2022-12-18 11:13:39
| 1
| 3,991
|
mightycode Newton
|
74,840,859
| 17,945,841
|
Rotate the xlabels with seaborn
|
<p>I've made a boxplot, but as you see the xlabels are too long so I need to rotate them a little bit:</p>
<p><a href="https://i.sstatic.net/wk7jw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wk7jw.png" alt="enter image description here" /></a></p>
<p>I've found several topics for this, here on stack, and on other sites. Some of my tries and the errors I got:</p>
<p><code>long_box.set_xticklabels(long_box.get_xticklabels(),rotation=30)</code></p>
<p>or</p>
<p><code>plt.setp(long_box.get_xticklabels(), rotation=45)</code></p>
<p>give this error:</p>
<pre><code>AttributeError: 'Text' object has no attribute 'set_xticklabels'
</code></pre>
<p>and, this code <code>long_box.tick_params(axis='x', labelrotation=90)</code></p>
<p>produces this error:</p>
<pre><code>AttributeError: 'Text' object has no attribute 'tick_params'
</code></pre>
<p>I don't understand what a 'Text' object means here. How do I fix this?</p>
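<p>The errors suggest <code>long_box</code> is a matplotlib <code>Text</code> object (for example the return value of a <code>set_title</code> call) rather than the <code>Axes</code> that <code>sns.boxplot</code> returns. Called on the Axes, the rotation methods work. A minimal sketch with made-up data, using plain matplotlib since seaborn's <code>boxplot</code> returns the same <code>Axes</code> type:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

# Stand-in for `ax = sns.boxplot(...)`: seaborn returns a matplotlib Axes,
# so the same calls apply to the object it gives back.
fig, ax = plt.subplots()
ax.boxplot([[1, 2, 3], [2, 3, 4]])
ax.set_xticks([1, 2])
ax.set_xticklabels(["a fairly long label", "another long label"])

# Rotate the x tick labels on the Axes, not on a Text object
ax.tick_params(axis="x", labelrotation=45)

rotations = [label.get_rotation() for label in ax.get_xticklabels()]
```

<p>If only the <code>Text</code> object is at hand, <code>plt.gca()</code> recovers the current Axes and the same call applies.</p>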
|
<python><seaborn><boxplot>
|
2022-12-18 11:12:01
| 1
| 1,352
|
Programming Noob
|
74,840,717
| 14,862,885
|
Difference between today,week start
|
<p>How do I get the difference between <code>datetime.datetime.utcnow()</code> and the start of the week?</p>
<p>I want the start of week for<br />
<code>"date=17,month=12,year=2022,hour=x,minute=y,second=z"</code> to be:
<code>"date=12,month=12,year=2022,hour=0,minute=0,second=0"</code>, where <em>x, y, z are variables</em> that must all be converted to 0.</p>
<p>I tried the following code:</p>
<pre><code>from datetime import datetime as dt, timezone as tz, timedelta as td

print(dt.isoformat(dt.utcnow() - td(days=dt.weekday(dt.now(tz.utc)))))
</code></pre>
<p>I used ISO format to demonstrate what I mean. For the 'now' value given above, where date=17,<br />
I get the following output:<br />
<code>"date=12,month=12,year=2022,hour=x,minute=y,second=z"</code> (not what I expected).</p>
<p><em>Q: What's the best way to make these hours, minutes, and seconds 0, taking into account all <strong>special cases</strong>:</em></p>
<ol>
<li>A range that includes Feb 29 of a leap year</li>
<li>A case where the start of the week falls in the previous month</li>
<li>etc.</li>
</ol>
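<p>A sketch that zeroes the time fields with <code>replace</code> after stepping back <code>weekday()</code> days; <code>timedelta</code> arithmetic already handles month boundaries and leap years, so no special-casing is needed:</p>

```python
from datetime import datetime, timedelta

def start_of_week(now: datetime) -> datetime:
    """Monday 00:00:00 of the week containing `now`."""
    monday = now - timedelta(days=now.weekday())
    return monday.replace(hour=0, minute=0, second=0, microsecond=0)

# Sat 2022-12-17 13:45:09 -> Mon 2022-12-12 00:00:00
week_start = start_of_week(datetime(2022, 12, 17, 13, 45, 9))

# Crossing a leap-year February: Sun 2020-03-01 -> Mon 2020-02-24
leap_case = start_of_week(datetime(2020, 3, 1, 5, 0, 0))
```

<p>For the UTC case, pass <code>datetime.utcnow()</code> (or <code>datetime.now(timezone.utc)</code>) into the same function.</p>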
|
<python><datetime><dayofweek><weekday>
|
2022-12-18 10:44:19
| 1
| 3,266
|
redoc
|
74,840,642
| 19,251,893
|
How to use xmltodict.parse with non-printable data
|
<p>I tried to convert XML to a dict with Python:</p>
<pre><code>import xmltodict
xmltodict.parse('<a>as\x08</a>')
</code></pre>
<p>But I got an exception</p>
<pre><code>ExpatError Traceback (most recent call last)
<ipython-input-8-XXXXXX> in <module>
----> 1 xmltodict.parse('<a>as\x08</a>')
~\AppData\Local\Programs\Python\Python37\lib\site-packages\xmltodict.py in parse(xml_input, encoding, expat, process_namespaces, namespace_separator, disable_entities, process_comments, **kwargs)
376 parser.Parse(b'',True)
377 else:
--> 378 parser.Parse(xml_input, True)
379 return handler.item
380
ExpatError: not well-formed (invalid token): line 1, column 5
</code></pre>
<p>How can I use the xmltodict library with non-printable characters?</p>
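<p>expat rejects <code>\x08</code> because most C0 control characters are illegal in XML 1.0, whether printable or not. One option (a sketch) is to strip them before parsing; the cleaned string can then go to <code>xmltodict.parse</code>. The example below uses the stdlib parser only, so it stays dependency-free:</p>

```python
import re
import xml.etree.ElementTree as ET

# XML 1.0 forbids C0 controls except tab (\x09), LF (\x0a) and CR (\x0d)
_ILLEGAL_XML = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def clean_xml(text: str) -> str:
    return _ILLEGAL_XML.sub("", text)

cleaned = clean_xml("<a>as\x08</a>")   # "<a>as</a>"
root = ET.fromstring(cleaned)          # parses without an ExpatError
# xmltodict.parse(cleaned) would likewise succeed on the cleaned input.
```

<p>If the control characters carry meaning, replacing them with a placeholder instead of deleting them is the safer variant.</p>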
|
<python><python-3.x><dictionary><xmltodict>
|
2022-12-18 10:31:10
| 0
| 345
|
python3.789
|
74,840,608
| 5,836,951
|
When using TensorFLow / Keras, what is the most efficient way to get some metadata inside a custom cost function?
|
<p>In my dataset, I have a binary Target column, some Features columns, and a Date column. I want to write a custom cost function that would first compute a cost-by-date quantity, then add all the costs up. But to do this, I would need to know inside the cost function the corresponding date for each data point in <code>y_pred</code> and <code>y_true</code>.</p>
<p>What would be the best way to do this to maximize performance? I have a couple of ideas:</p>
<ul>
<li>Make the target variable a tuple <code>(target, date)</code>, have a custom first layer that extracts the first entry of the tuple, and have the cost function extract the second entry of the tuple <code>y_true</code></li>
<li>Make the target column variable an index, and have the custom first layer as well as the custom cost function pull the relevant values from a global variable based on index</li>
</ul>
<p>What is the most efficient way to get this information inside the custom cost function?</p>
|
<python><tensorflow><keras>
|
2022-12-18 10:26:35
| 1
| 487
|
Gabi
|
74,840,591
| 17,176,270
|
Mock object with pyTest
|
<p>I have a function I need to test:</p>
<pre><code>def get_user_by_username(db, username: str) -> Waiter:
    """Get user object based on given username."""
    user = db.query(Waiter).filter(Waiter.username == username).first()
    return user
</code></pre>
<p>Here I try to mock the DB call and return the correct <code>Waiter</code> object, so my test is:</p>
<pre><code>def test_get_user_by_username(mocker):
    waiter = Waiter(id=10, username="string", password="1111")
    db_mock = mocker.Mock(return_value=waiter)
    result = get_user_by_username(db=db_mock, username="string")
    assert result.json()["username"] == "string"
</code></pre>
<p>But I get the error <code>TypeError: 'Mock' object is not subscriptable</code>. How can I fix it?</p>
<p>According to @Don Kirkby's answer, the right solution is:</p>
<pre><code>def test_get_user_by_username(mocker):
    waiter = Waiter(id=10, username="string", password="1111")
    filter_method = mocker.Mock()
    filter_method.first = mocker.Mock(return_value=waiter)
    query = mocker.Mock()
    query.filter = mocker.Mock(return_value=filter_method)
    db_mock = mocker.Mock()
    db_mock.query = mocker.Mock(return_value=query)
    result = get_user_by_username(db=db_mock, username="string")
    assert vars(result)["username"] == "string"
</code></pre>
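<p>Since <code>Mock</code> auto-creates attributes, the same chain can be configured more compactly through <code>return_value</code> (a sketch using stdlib <code>unittest.mock</code>; pytest-mock's <code>mocker.Mock</code> behaves identically):</p>

```python
from unittest.mock import Mock

waiter = Mock(username="string")

db_mock = Mock()
# Configure the whole query(...).filter(...).first() chain in one line
db_mock.query.return_value.filter.return_value.first.return_value = waiter

# This is what get_user_by_username(db=db_mock, ...) would execute:
result = db_mock.query("Waiter").filter("Waiter.username == 'string'").first()
```

<p>Because the mock chain returns the real <code>waiter</code> object, plain attribute access (<code>result.username</code>) works, with no need for <code>.json()</code> or <code>vars()</code>.</p>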
|
<python><unit-testing><mocking><pytest>
|
2022-12-18 10:24:17
| 1
| 780
|
Vitalii Mytenko
|
74,840,576
| 5,404,647
|
Cannot import name 'structures' from partially initialized module 'music'
|
<p>I'm trying to use the package <code>music</code>, which I installed using <code>pip3 install music</code>.
It installed the dependencies correctly, but now with sample code like the following</p>
<pre><code>from music import *
# create a clarinet
clarinet = Clarinet()
# create a song
song = Song()
# add notes to the song
song.addNote(Note('C4', QUARTER))
song.addNote(Note('D4', QUARTER))
song.addNote(Note('E4', QUARTER))
</code></pre>
<p>I get the following error</p>
<pre><code>Traceback (most recent call last):
File "ex.py", line 3, in <module>
from music import *
File "/home/norhther/.local/lib/python3.8/site-packages/music/__init__.py", line 1, in <module>
from . import utils, tables, synths, effects, structures, singing, core
ImportError: cannot import name 'structures' from partially initialized module 'music' (most likely due to a circular import) (/home/norhther/.local/lib/python3.8/site-packages/music/__init__.py)
</code></pre>
|
<python><python-3.x>
|
2022-12-18 10:22:37
| 1
| 622
|
Norhther
|
74,840,452
| 4,865,723
|
Use tkinter based PySimpleGUI as root user via pkexec
|
<p>I want to show a GUI window (<code>PySimpleGUI</code>, which is based on <code>tkinter</code>) as the root user.
I'm using <code>pkexec</code> for that, on GNU/Linux Debian stable.</p>
<p>But I got the error</p>
<pre><code>no display name and no $DISPLAY environment variable
</code></pre>
<p>I understand that a bit. But I don't know how to solve it. I tried to set <code>DISPLAY = ":0.0"</code> but this doesn't work either.</p>
<pre><code>couldn't connect to display ":0.0"
</code></pre>
<p>This is my test call include setting <code>DISPLAY</code>.</p>
<pre><code>pkexec python3 -c "import PySimpleGUI as sg;import os;os.environ['DISPLAY'] = ':0.0';sg.Window(title='title', layout=[[]], margins=(200, 100)).read()"
</code></pre>
<p>This is the full error output</p>
<pre><code>No protocol specified
No protocol specified
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.9/dist-packages/PySimpleGUI/PySimpleGUI.py", line 10075, in read
results = self._read(timeout=timeout, timeout_key=timeout_key)
File "/usr/local/lib/python3.9/dist-packages/PySimpleGUI/PySimpleGUI.py", line 10146, in _read
self._Show()
File "/usr/local/lib/python3.9/dist-packages/PySimpleGUI/PySimpleGUI.py", line 9886, in _Show
StartupTK(self)
File "/usr/local/lib/python3.9/dist-packages/PySimpleGUI/PySimpleGUI.py", line 16817, in StartupTK
_get_hidden_master_root()
File "/usr/local/lib/python3.9/dist-packages/PySimpleGUI/PySimpleGUI.py", line 16704, in _get_hidden_master_root
Window.hidden_master_root = tk.Tk()
File "/usr/lib/python3.9/tkinter/__init__.py", line 2270, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: couldn't connect to display ":0.0"
</code></pre>
|
<python><tkinter><pysimplegui><polkit>
|
2022-12-18 10:00:06
| 0
| 12,450
|
buhtz
|
74,840,385
| 9,079,461
|
How to trigger scala/python code when value in BQ table changes
|
<p>We want to continuously run (poll) Scala/Python code in a GCP VM, which should run an ETL program only when a value in a BigQuery table changes;
i.e., we add which ETL to run to the BQ table, and based on that the ETL program runs via the GCP VM.</p>
<p>If there are multiple values in the BQ table, it should run multiple ETL programs simultaneously.</p>
<p>How can we achieve this?</p>
|
<python><scala><google-cloud-platform>
|
2022-12-18 09:46:46
| 1
| 887
|
Khilesh Chauhan
|
74,840,293
| 4,451,521
|
How to find the place where a int conversion is failing in a dataframe?
|
<p>I have a dataframe with a column "<code>next</code>" which holds strings of numbers separated by <code>;</code>.</p>
<p>When I do</p>
<pre><code>thedataframe.next.str.split(';').explode().astype(int)
</code></pre>
<p>It usually works but sometimes it fails with the error</p>
<pre><code>ValueError: invalid literal for int() with base 10: ' '
</code></pre>
<p>So I am debugging; the column has 1135 rows.
I suspect one of these rows contains an invalid value, and this is causing the error.</p>
<p>However, with 1135 values, how can I find where in the column it is failing?</p>
<p>I tried</p>
<pre><code>>>> np.where(pd.isnull(thedataframe.next.str.split(';').explode()))
(array([], dtype=int64),)
</code></pre>
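<p><code>isnull</code> finds nothing because <code>' '</code> is not NaN; it is a real string that <code>int()</code> cannot parse. <code>pd.to_numeric</code> with <code>errors='coerce'</code> turns exactly the unparseable entries into NaN, so their index labels point at the failing source rows (a sketch with made-up data):</p>

```python
import pandas as pd

thedataframe = pd.DataFrame({"next": ["1;2", "3; ", "4;5"]})

exploded = thedataframe["next"].str.split(";").explode()
as_numbers = pd.to_numeric(exploded, errors="coerce")

# The entries that would break .astype(int), with their original row index
bad = exploded[as_numbers.isna()]
```

<p><code>bad.index</code> gives the row labels of the original dataframe where the conversion fails, and <code>bad.values</code> shows the offending strings themselves.</p>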
|
<python><pandas>
|
2022-12-18 09:27:22
| 1
| 10,576
|
KansaiRobot
|
74,840,258
| 1,525,936
|
Django DRF - nested serializer (level>2) does not show up in the response
|
<p>We have the following structure <code>(library->books->pages)</code>.</p>
<p>the first serializer</p>
<pre><code>class LibrarySerializer(serializers.ModelSerializer):
    books = BookSerializer(many=True)

    class Meta:
        model = Library
        fields = '__all__'

    @transaction.atomic
    def create(self, validated_data):
        # create logic here
</code></pre>
<p>the second serializer</p>
<pre><code>class BookSerializer(serializers.ModelSerializer):
    pages = PageSerializer(many=True, required=False)

    class Meta:
        model = Book
        fields = '__all__'
</code></pre>
<p>we have an endpoint <code>library/</code>, where we post the payload of the following format</p>
<pre><code>{
"ref": "43a0c953-1380-43dd-a844-bbb97a325586",
"books": [
{
"name": "The Jungle Book",
"author": "Rudyard Kipling",
"pages": [
{
"content": "...",
"pagenumber": 22
}
]
}
]
}
</code></pre>
<p>All the objects are created in the database, but the response does not contain the <code>pages</code> key. It looks like this:</p>
<pre><code>{
"id": 27,
"ref": "43a0c953-1380-43dd-a844-bbb97a325586",
"books": [
{
"id": 34,
"name": "The Jungle Book",
"author": "Rudyard Kipling"
}
]
}
</code></pre>
<p>The <code>depth</code> attribute does not seem to work. What do I have to do to make <code>pages</code> appear in the response?</p>
|
<python><django><django-rest-framework>
|
2022-12-18 09:21:39
| 1
| 1,323
|
Paramore
|
74,840,221
| 6,699,103
|
AttributeError: 'Flask' object has no attribute 'get'
|
<p>I'm getting the following error while running the code below:</p>
<pre><code>File "D:\Inteliji Projects\Python Flask\whatsgrouprestapi\src\app.py", line 13, in create_app
    @app.get("/")
AttributeError: 'Flask' object has no attribute 'get'
</code></pre>
<pre><code>from flask import Flask

def create_app(test_config=None):
    app = Flask(__name__, instance_relative_config=True)
    if test_config is None:
        app.config.from_mapping(
            SECRET_KEY="dev"
        )
    else:
        app.config.from_mapping(test_config)

    @app.get("/")
    def index():
        return "Hello World"

    @app.get("/hello")
    def say_hello():
        return {"message": "Hellos World"}

    return app
</code></pre>
<p>What is wrong with the code?</p>
<p><strong>Updated</strong></p>
<p>After I changed my code like this, it works fine, but I don't know what the issue was:</p>
<pre><code>def create_app(test_config=None):
    app = Flask(__name__, instance_relative_config=True)
    if test_config is None:
        app.config.from_mapping(SECRET_KEY=os.environ.get("SECRET_KEY"),)
    else:
        app.config.from_mapping(test_config)

    @app.route('/', methods=["GET"])
    def index():
        return "Hello World"

    @app.route('/hello', methods=["GET"])
    def say_hello():
        return {"message": "Hellos World"}

    return app
</code></pre>
|
<python><rest><flask>
|
2022-12-18 09:15:04
| 1
| 494
|
Mohan K
|
74,840,212
| 18,125,194
|
chi squared hypothesis test with several binary variables
|
<p>I have data about plants grown in a nursery: a variable for plant health and several factors.</p>
<p>I wanted to test whether any of the factors influence plant health, so I thought the best method would be a chi-squared test.</p>
<p>My method is below, but I get stuck after the crosstab:</p>
<pre><code># Example Data
df = pd.DataFrame({'plant_health': ['a','b','c','a','b','b'],
'factor_1': ['yes','no','no','no','yes','yes'],
'factor_2': ['yes','yes','no','no','yes','yes'],
'factor_3': ['yes','no','no','yes','yes','yes'],
'factor_4': ['yes','yes','no','no','yes','yes'],
'factor_5': ['yes','no','yes','no','yes','yes'],
'factor_6': ['yes','no','no','no','yes','yes'],
'factor_7': ['yes','yes','no','yes','yes','yes'],
'factor_8': ['yes','no','yes','no','yes','yes'],
'factor_9': ['yes','yes','yes','yes','yes','yes'],
})
# Melt dataframe
df = df.melt(id_vars='plant_health',
value_vars=['factor_1', 'factor_2', 'factor_3', 'factor_4', 'factor_5',
'factor_6', 'factor_7', 'factor_8', 'factor_9'])
# Create cross tab
pd.crosstab(df.plant_health, columns=[df.variable, df.value])
</code></pre>
<p>I can run the test with one factor but don't know how to extend that to all factors.</p>
<pre><code>from scipy.stats import chisquare
from scipy import stats
from scipy.stats import chi2_contingency
# Example with only the first factor
tab_data = [[1,1], [1,2],[1,0]]
chi2_contingency(tab_data)
</code></pre>
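<p>One way to extend this (a sketch) is to loop over the factor columns, build a crosstab against <code>plant_health</code> for each, and run <code>chi2_contingency</code> per table, skipping factors with a single level (such as <code>factor_9</code>, which is all "yes" and so carries no information):</p>

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    'plant_health': ['a', 'b', 'c', 'a', 'b', 'b'],
    'factor_1': ['yes', 'no', 'no', 'no', 'yes', 'yes'],
    'factor_2': ['yes', 'yes', 'no', 'no', 'yes', 'yes'],
    'factor_3': ['yes', 'no', 'no', 'yes', 'yes', 'yes'],
    'factor_4': ['yes', 'yes', 'no', 'no', 'yes', 'yes'],
    'factor_5': ['yes', 'no', 'yes', 'no', 'yes', 'yes'],
    'factor_6': ['yes', 'no', 'no', 'no', 'yes', 'yes'],
    'factor_7': ['yes', 'yes', 'no', 'yes', 'yes', 'yes'],
    'factor_8': ['yes', 'no', 'yes', 'no', 'yes', 'yes'],
    'factor_9': ['yes', 'yes', 'yes', 'yes', 'yes', 'yes'],
})

p_values = {}
for col in df.columns.drop("plant_health"):
    table = pd.crosstab(df["plant_health"], df[col])
    if min(table.shape) < 2:        # single-level factor: nothing to test
        continue
    chi2, p, dof, expected = chi2_contingency(table)
    p_values[col] = p
```

<p>One caveat: with counts this small, the chi-squared approximation is weak (many expected cells are below 5), so an exact test may be more appropriate for real data of this size.</p>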
|
<python><scipy><chi-squared>
|
2022-12-18 09:13:13
| 1
| 395
|
Rebecca James
|
74,840,179
| 11,883,900
|
Plotly line graph to follow ordered time series on X-axis
|
<p>I have a dataframe that looks like this :</p>
<pre><code>country = ['Cambodia',
'Cambodia',
'Cambodia',
'Cambodia',
'Cambodia',
'Lao PDR',
'Lao PDR',
'Lao PDR',
'Lao PDR']
month_year = ['2020-Aug',
'2020-Dec',
'2020-May',
'2020-Oct',
'2021-Mar',
'2020-Jul',
'2021-Mar',
'2021-May',
'2021-Nov']
count = [5, 4, 4, 4, 2, 6, 4, 4, 6]
# Make dictionary, keys will become dataframe column names
intermediate_dictionary = {'country':country, 'month_year':month_year, 'count':count}
# Convert dictionary to Pandas dataframe
df3 = pandas_dataframe = pd.DataFrame(intermediate_dictionary)
df3
</code></pre>
<p>Using the dataframe, I ordered the months correctly and then plotted a line graph this way:</p>
<pre><code>ordered_months = ["2020-May", "2020-Jul", "2020-Aug", "2020-Oct", "2020-Dec", "2021-Mar",
"2021-May", "2021-Nov"]
fig = px.line(df3, x="month_year", y="count", color='country', symbol='country')
fig.update_xaxes( categoryarray= ordered_months)
fig.show()
</code></pre>
<p>Now my problem is that the line for Cambodia does not follow the order of months as expected. Here is a photo of the graph:</p>
<p><a href="https://i.sstatic.net/jci7P.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jci7P.jpg" alt="enter image description here" /></a></p>
<p>According to the graph, Cambodia's line starts at <em>May-2020</em>, then skips <em>Aug-2020</em> and goes to <em>Oct-2020</em>, then to <em>Dec-2020</em>, before coming back to <em>Aug-2020</em>. Notice also that the line connecting to <em>Mar-2021</em> doesn't start from <em>Dec-2020</em> but from <em>Oct-2020</em>.</p>
<p>I want the line for Cambodia to follow the order of the x-axis: from <em>May-2020</em> to <em>Aug-2020</em>, then <em>Oct-2020</em>, then <em>Dec-2020</em>, and finally <em>Mar-2021</em>.</p>
<p>How can I fix this?</p>
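<p><code>px.line</code> connects points in DataFrame row order; <code>categoryarray</code> only reorders the axis labels, not the traces. Sorting the rows by the desired month order first (e.g. via an ordered <code>Categorical</code>) makes each country's trace follow the axis. A sketch using only pandas; the sorted frame is then passed to <code>px.line</code> exactly as before:</p>

```python
import pandas as pd

df3 = pd.DataFrame({
    "country": ["Cambodia"] * 5 + ["Lao PDR"] * 4,
    "month_year": ["2020-Aug", "2020-Dec", "2020-May", "2020-Oct", "2021-Mar",
                   "2020-Jul", "2021-Mar", "2021-May", "2021-Nov"],
    "count": [5, 4, 4, 4, 2, 6, 4, 4, 6],
})

ordered_months = ["2020-May", "2020-Jul", "2020-Aug", "2020-Oct",
                  "2020-Dec", "2021-Mar", "2021-May", "2021-Nov"]

# Make month_year an ordered categorical, then sort the rows per country
df3["month_year"] = pd.Categorical(df3["month_year"],
                                   categories=ordered_months, ordered=True)
df3 = df3.sort_values(["country", "month_year"]).reset_index(drop=True)

cambodia = df3.loc[df3["country"] == "Cambodia", "month_year"].tolist()
```

<p>With the rows sorted this way, each trace is drawn left to right along the categorical axis, and <code>fig.update_xaxes(categoryorder='array', categoryarray=ordered_months)</code> keeps the axis labels in the same order.</p>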
|
<python><pandas><plotly>
|
2022-12-18 09:05:42
| 1
| 1,098
|
LivingstoneM
|
74,840,161
| 12,883,297
|
Get the dictionary from elements in the list of a column where key-value pair are number of elements and element value in pandas
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame([["X",["A","B","C","D"]],["Y",["E","F","G"]],["Z",["H","I","J","K","L"]]],columns=["type","id"])
</code></pre>
<pre><code>type id
X [A, B, C, D]
Y [E, F, G]
Z [H, I, J, K, L]
</code></pre>
<p>I want 2 new columns: <strong>id_len</strong>, which holds the 1-based positions of the elements in the <code>id</code> list, and <strong>id_dict</strong>, which maps those positions to the elements as key-value pairs.</p>
<p><strong>Expected output</strong></p>
<pre><code>df_new = pd.DataFrame([["X",["A","B","C","D"],[1,2,3,4],{1:"A",2:"B",3:"C",4:"D"}],["Y",["E","F","G"],[1,2,3],{1:"E",2:"F",3:"G"}],["Z",["H","I","J","K","L"],[1,2,3,4,5],{1:"H",2:"I",3:"J",4:"K",5:"L"}]],columns=["type","id","id_len","id_dict"])
</code></pre>
<pre><code>
type id id_len id_dict
X [A, B, C, D] [1, 2, 3, 4] {1: 'A', 2: 'B', 3: 'C', 4: 'D'}
Y [E, F, G] [1, 2, 3] {1: 'E', 2: 'F', 3: 'G'}
Z [H, I, J, K, L] [1, 2, 3, 4, 5] {1: 'H', 2: 'I', 3: 'J', 4: 'K', 5: 'L'}
</code></pre>
<p>How to do it?</p>
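<p>A sketch using <code>apply</code>: <code>range</code> builds the 1-based position list, and <code>enumerate(..., start=1)</code> builds the position-to-element mapping:</p>

```python
import pandas as pd

df = pd.DataFrame([["X", ["A", "B", "C", "D"]],
                   ["Y", ["E", "F", "G"]],
                   ["Z", ["H", "I", "J", "K", "L"]]],
                  columns=["type", "id"])

# 1-based positions of each element in the id list
df["id_len"] = df["id"].apply(lambda ids: list(range(1, len(ids) + 1)))
# position -> element dictionary
df["id_dict"] = df["id"].apply(lambda ids: dict(enumerate(ids, start=1)))
```

<p>Both columns are derived from <code>id</code> alone, so the approach works for lists of any length per row.</p>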
|
<python><python-3.x><pandas><dataframe>
|
2022-12-18 09:00:55
| 1
| 611
|
Chethan
|
74,840,109
| 1,231,714
|
Python3 not recognized in bash scripts
|
<p>I removed Python 2.7 from my system and some of my old bash scripts stopped working. The problem is that they cannot find <code>python</code>. I aliased <code>python</code> to point to <code>python3</code> in <code>.bashrc</code> and everything works fine at the command line, but in the bash scripts I still get the error "python: command not found".</p>
<p>I made a simple script with the following two lines, and I get the error for the first line only:</p>
<pre><code>python --version
python3 --version
</code></pre>
<p>I could change "python" to "python3", but what else could be doing to avoid making changes in all the scripts?</p>
<p>Thanks.</p>
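<p>Aliases in <code>.bashrc</code> only apply to interactive shells, which is why the scripts still fail. Two common fixes that avoid editing every script: install Debian's <code>python-is-python3</code> package, or put a <code>python -> python3</code> symlink on the scripts' <code>PATH</code>. The symlink variant, sketched with a user-local bin directory (the directory choice is just an example):</p>

```shell
# Create a user-local "python" that resolves to python3
mkdir -p "$HOME/.local/bin"
ln -sf "$(command -v python3)" "$HOME/.local/bin/python"

# Make sure the directory is on PATH for scripts too
# (export this from ~/.profile, not just ~/.bashrc)
export PATH="$HOME/.local/bin:$PATH"

python --version   # now runs python3
```

<p>System-wide, <code>sudo apt install python-is-python3</code> achieves the same thing by shipping the symlink in <code>/usr/bin</code>.</p>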
|
<python><bash><raspberry-pi>
|
2022-12-18 08:49:42
| 1
| 1,390
|
SEU
|
74,839,927
| 386,861
|
How to get selenium working in VSCode python
|
<p>Trying to work through tutorial that works in Python using Selenium.</p>
<p>Tried this:</p>
<pre><code>import selenium
</code></pre>
<p>Didn't work. Tried to install again:</p>
<pre><code>pip install selenium
Requirement already satisfied: selenium in ./opt/anaconda3/lib/python3.9/site-packages (4.7.2)
Requirement already satisfied: urllib3[socks]~=1.26 in ./opt/anaconda3/lib/python3.9/site-packages (from selenium) (1.26.12)
Requirement already satisfied: certifi>=2021.10.8 in ./opt/anaconda3/lib/python3.9/site-packages (from selenium) (2022.9.24)
Requirement already satisfied: trio~=0.17 in ./opt/anaconda3/lib/python3.9/site-packages (from selenium) (0.22.0)
Requirement already satisfied: trio-websocket~=0.9 in ./opt/anaconda3/lib/python3.9/site-packages (from selenium) (0.9.2)
Requirement already satisfied: sortedcontainers in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (2.4.0)
Requirement already satisfied: attrs>=19.2.0 in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (21.4.0)
Requirement already satisfied: async-generator>=1.9 in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (1.10)
Requirement already satisfied: outcome in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (1.2.0)
Requirement already satisfied: sniffio in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (1.2.0)
Requirement already satisfied: idna in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (3.4)
Requirement already satisfied: exceptiongroup>=1.0.0rc9 in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (1.0.4)
Requirement already satisfied: wsproto>=0.14 in ./opt/anaconda3/lib/python3.9/site-packages (from trio-websocket~=0.9->selenium) (1.2.0)
Requirement already satisfied: PySocks!=1.5.7,<2.0,>=1.5.6 in ./opt/anaconda3/lib/python3.9/site-packages (from urllib3[socks]~=1.26->selenium) (1.7.1)
Requirement already satisfied: h11<1,>=0.9.0 in ./opt/anaconda3/lib/python3.9/site-packages (from wsproto>=0.14->trio-websocket~=0.9->selenium) (0.14.0)
</code></pre>
<p>The import still doesn't work. Why? What do I do to get this statement to work?</p>
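<p>A likely cause: <code>pip</code> installed into Anaconda's Python 3.9 (note the <code>./opt/anaconda3/...</code> paths), but VS Code may be running a different interpreter. A quick diagnostic sketch: print the executable actually in use, then either pick the matching interpreter via "Python: Select Interpreter" in VS Code, or install into the running one with <code>python -m pip install selenium</code>:</p>

```python
import sys

# The interpreter actually executing this code; compare it with the
# ./opt/anaconda3/... path that pip reported above.
print(sys.executable)

# Running `python -m pip install selenium` with this same interpreter
# guarantees the package lands where `import selenium` will look.
```

<p>If the printed path is not under <code>anaconda3</code>, the two environments are different, and that mismatch explains the failed import.</p>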
|
<python><selenium>
|
2022-12-18 08:10:14
| 1
| 7,882
|
elksie5000
|
74,839,916
| 2,853,908
|
Python OHLC convert weekly to biweekly
|
<p>I have weekly data from Yahoo Finance</p>
<pre><code> Open High Low Close Adj Close Volume
Date
1999-12-27 0.901228 0.918527 0.888393 0.917969 0.782493 1.638112e+08
2000-01-03 0.936384 1.004464 0.848214 0.888393 0.757282 3.055203e+09
2000-01-10 0.910714 0.912946 0.772321 0.896763 0.764417 3.345742e+09
2000-01-17 0.901786 1.084821 0.896763 0.993862 0.847186 3.383878e+09
2000-01-24 0.968192 1.019531 0.898438 0.907366 0.773455 2.068674e+09
2000-01-31 0.901786 0.982143 0.843750 0.964286 0.821975 2.384424e+09
2000-02-07 0.964286 1.045759 0.945871 0.970982 0.827682 1.664309e+09
2000-02-14 0.976004 1.070871 0.969866 0.993304 0.846710 1.754469e+09
2000-02-21 0.983259 1.063616 0.952567 0.985491 0.840050 1.520971e+09
2000-02-28 0.983259 1.179129 0.967634 1.142857 0.974192 2.408918e+09
2000-03-06 1.125000 1.152902 1.055804 1.122768 0.957068 1.280126e+09
2000-03-13 1.090402 1.129464 1.017857 1.116071 0.951359 1.859290e+09
2000-03-20 1.102679 1.342634 1.085938 1.238281 1.055533 2.306293e+09
2000-03-27 1.228795 1.292411 1.119978 1.212612 1.033652 1.541019e+09
2000-04-03 1.209821 1.245536 1.042411 1.176339 1.002732 1.948621e+09
2000-04-10 1.175781 1.185268 0.936384 0.998884 0.851467 2.892669e+09
2000-04-17 0.977679 1.162946 0.973772 1.061384 0.904743 2.042757e+09
2000-04-24 1.026786 1.149554 1.024554 1.107701 0.944224 1.778358e+09
2000-05-01 1.114955 1.127232 0.987165 1.010045 0.860980 1.636018e+09
</code></pre>
<p>I need to resample it to biweekly, i.e. one bar per 2 weeks.</p>
<p>Yahoo Finance only provides data for these intervals:</p>
<p>[1m, 2m, 5m, 15m, 30m, 60m, 90m, 1h, 1d, 5d, 1wk, 1mo, 3mo]</p>
<p>I need something between 1 week and 1 month, i.e. 2 weeks.</p>
<p>Please let me know how to resample.</p>
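<p>Since each OHLC field aggregates differently, resampling the weekly bars into 14-day bars needs a per-column aggregation: first Open, max High, min Low, last Close (and Adj Close), summed Volume. A sketch with a small stand-in frame (values illustrative); <code>origin="start"</code> anchors the 14-day bins at the first row so consecutive weekly bars pair up:</p>

```python
import pandas as pd

# Stand-in for the weekly frame from Yahoo Finance (Mondays)
idx = pd.date_range("1999-12-27", periods=6, freq="W-MON", name="Date")
weekly = pd.DataFrame({"Open":   [1, 2, 3, 4, 5, 6],
                       "High":   [2, 3, 4, 5, 6, 7],
                       "Low":    [0, 1, 2, 3, 4, 5],
                       "Close":  [1.5, 2.5, 3.5, 4.5, 5.5, 6.5],
                       "Volume": [10, 20, 30, 40, 50, 60]}, index=idx)

# Two weekly bars per bin, with OHLC-appropriate aggregation
biweekly = weekly.resample("14D", origin="start").agg(
    {"Open": "first", "High": "max", "Low": "min",
     "Close": "last", "Volume": "sum"})
```

<p>For the real frame, add <code>"Adj Close": "last"</code> to the aggregation dict. <code>"2W-MON"</code> works too, but its bin anchoring depends on the calendar rather than on the first row.</p>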
|
<python><dataframe><yahoo-finance><ohlc>
|
2022-12-18 08:06:59
| 1
| 479
|
Ashish
|
74,839,883
| 11,274,362
|
How to return empty queryset from django_filters.FilterSet in API view if querystring hasn't any valid key
|
<p>I use <code>django-filter</code> package with <code>djangorestframework</code> package for filter objects that return from API view. There are my files:</p>
<pre><code># models.py
class Symbol(models.Model):
    title = models.CharField(max_length=30, verbose_name='title')


# serializers.py
class SymbolSerializer(serializers.ModelSerializer):
    class Meta:
        model = Symbol
        fields = ('title',)


# filters.py
class SymbolFilter(django_filters.FilterSet):
    st = django_filters.CharFilter(method='get_st', label='search')

    def get_st(self, queryset, field_name, value):
        return queryset.filter(title__icontains=value)

    class Meta:
        model = Symbol


# views.py
@api_view(['GET'])
def symbol_list(request):
    queryset = Symbol.objects.all()
    view_filter = SymbolFilter(request.GET, queryset=queryset)
    if (view_filter.is_valid() is False) or (not view_filter.qs):
        return Response(None, status=status.HTTP_404_NOT_FOUND)
    ser = SymbolSerializer(view_filter.qs, many=True)
    return Response(ser.data, status=status.HTTP_200_OK)


# urls.py
from .views import *

urlpatterns = [
    path('symbol/list/', symbol_list, name='symbol_list'),
]
</code></pre>
<p>So, if I send a <code>GET</code> request to <code>localhost:8000/symbol/list/?st=sometitle</code>, all is well and I get the <code>Symbol</code> objects that have <code>sometitle</code> in their <code>title</code> field.
But when I remove <code>st</code> from the <code>querystring</code>, <code>django-filter</code> returns all objects of the <code>Symbol</code> model.
My question is:</p>
<blockquote>
<p><em><strong>How Can I return empty queryset if <code>st</code> key isn't in <code>querystring</code> or if <code>filter(title__icontains=value)</code> was empty?</strong></em></p>
</blockquote>
|
<python><django><django-rest-framework><django-filter>
|
2022-12-18 08:00:55
| 1
| 977
|
rahnama7m
|
74,839,830
| 17,550,177
|
HeidiSQL is not showing any data even when table size is more than 0B
|
<p>I have a PostgreSQL database in production and I'm trying to insert data from my localhost using the <code>psycopg2</code> library. I view my database through HeidiSQL. But after I finish inserting the data, HeidiSQL shows that the table size increased, yet when I try to view the data it shows 0 rows. I have tried dropping all tables and re-migrating and re-inserting the data, but it still shows 0 rows even though the table size is not 0 B.</p>
<p>I have tried inserting the tables one by one from the program by commenting out the other inserts, and that works. But I still can't understand why I can't see any data when I insert simultaneously.</p>
<p><a href="https://i.sstatic.net/jcvgI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jcvgI.jpg" alt="Database After Migrate" /></a></p>
<p><a href="https://i.sstatic.net/2SnKQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2SnKQ.jpg" alt="Database After Inserting" /></a></p>
|
<python><django><postgresql><heidisql>
|
2022-12-18 07:47:32
| 0
| 756
|
Michael Halim
|
74,839,707
| 501,333
|
What is the cause of the Sphinx warning "Block quote ends without a blank line; unexpected unindent" in a one line docstring
|
<p>When I build my Sphinx HTML documentation, I get a warning.</p>
<blockquote>
<p>checkdmarc.py:docstring of checkdmarc.BIMIRecordInWrongLocation:1: WARNING: Block quote ends without a blank line; unexpected unindent.</p>
</blockquote>
<p><code>BIMIRecordInWrongLocation</code> is a simple <code>Exception</code> class with a one line docstring.</p>
<pre class="lang-py prettyprint-override"><code>class BIMIRecordInWrongLocation(BIMIError):
"""Raised when a BIMI record is found at the root of a domain"""
</code></pre>
<p>Oddly, the warning persisted even after I replaced the docstring with <code>pass</code>.</p>
<p>For context, here is the <a href="https://github.com/domainaware/checkdmarc/blob/4fcdd758f90395312a6a030aef7491bd96dc4c09/checkdmarc.py#L296" rel="nofollow noreferrer">full code</a>, and the Sphinx <a href="https://github.com/domainaware/checkdmarc/actions/runs/3723676690/jobs/6315337532#step:5:20" rel="nofollow noreferrer">build log</a>.</p>
|
<python><python-sphinx><docstring><myst>
|
2022-12-18 07:19:03
| 1
| 5,152
|
Sean W.
|
74,839,586
| 1,779,091
|
Is there any purpose of using loc or iloc when filtering the dataframe?
|
<p>Taking into account the following:</p>
<pre><code>df[df.B == 9]
</code></pre>
<p>vs</p>
<pre><code>df.loc[df.B == 9]
</code></pre>
<p>I understand that the 1st is syntactic sugar for the 2nd.</p>
<p>Generally we use <code>loc</code> or <code>iloc</code> when we need to work with labels or integer positions respectively.</p>
<p>However, as shown in the examples above, when we are simply filtering the dataframe there doesn't seem to be a reason to choose between them. Is that correct?</p>
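<p>For pure reading the two are interchangeable, but <code>loc</code> matters as soon as you assign: chained indexing like <code>df[df.B == 9]['C'] = x</code> writes to a temporary copy (the classic <code>SettingWithCopyWarning</code> case), whereas <code>df.loc[df.B == 9, 'C'] = x</code> writes through to the frame. A small sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"B": [9, 3, 9], "C": [0, 0, 0]})

# Chained indexing assigns to a temporary copy; df is unchanged
# (pandas warns about exactly this pattern).
df[df.B == 9]["C"] = 99
unchanged = df["C"].tolist()      # still [0, 0, 0]

# loc selects rows and column in one operation and writes in place
df.loc[df.B == 9, "C"] = 99
changed = df["C"].tolist()
```

<p><code>loc</code> also lets you pick rows and columns in one step (<code>df.loc[df.B == 9, ["C"]]</code>), which chained bracket indexing can only do with an extra intermediate object.</p>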
|
<python><pandas>
|
2022-12-18 06:46:48
| 0
| 9,866
|
variable
|
74,839,415
| 3,761,310
|
hybrid_property.expression to return date from column or today's date
|
<p>I have a SQLAlchemy model in my Flask app that looks something like this:</p>
<pre><code>class MyModel(db.Model):
    _date_start = db.Column(db.Date, nullable=False, default=date.today)
    _date_end = db.Column(db.Date, nullable=True, default=None)

    @hybrid_property
    def date_start(self) -> date:
        return self._date_start

    @hybrid_property
    def date_end(self) -> date:
        return self._date_end

    @hybrid_property
    def date_last_active(self) -> date:
        if self.date_end:
            return self.date_end
        return date.today()

    @hybrid_method
    def contains(self, date: date) -> bool:
        return (date >= self.date_start and date <= self.date_last_active)

    @contains.expression
    def contains(cls, date: date) -> bool:
        return and_(date >= cls._date_start, date <= cls.date_last_active)
</code></pre>
<p>When I create an entry with a null <code>date_end</code>, my <code>contains</code> method fails to find that entry. This appears to be because of the <code>date_last_active</code> hybrid property:</p>
<pre><code>>>> entry = MyModel(_date_start=date(2022, 12, 1))
>>> # ... add and commit, then:
>>> MyModel.query.filter(MyModel.contains(date(2022, 12, 5)).count()
0
>>> MyModel.query.filter(date(2022, 12, 5) >= MyModel.date_start).count()
1
>>> MyModel.query.filter(date(2022, 12, 5) <= MyModel.date_last_active).count()
0
>>> MyModel.query.filter(date(2022, 12, 5) <= func.current_date()).count()
1
</code></pre>
<p>I'd guessed that the conditional is what's tripping it up, so I tried adding an expression with a <code>case</code> statement, but I can't get it to work. Two expressions I've tried are:</p>
<pre><code># case attempt 1
@date_last_active.expression
def date_last_active(cls) -> date:
return case(
(cls._date_end.is_(None), func.current_date()),
(not_(cls._date_end.is_(None)), cls._date_end)
)
# Raises: NotImplementedError: Operator 'getitem' is not supported on this expression
# case attempt 2
@date_last_active.expression
def date_last_active(cls) -> date:
return case(
{cls._date_end.is_(None): func.current_date()},
value=cls._date_end,
else_=cls._date_end
)
# Still doesn't find the entry:
# >>> MyModel.query.filter(date(2022, 12, 5) <= MyModel.date_last_active).count()
# 0
</code></pre>
<p>I feel like I'm close, but I'm just not seeing where I need to go. Thanks for the help!</p>
|
<python><sqlalchemy>
|
2022-12-18 06:06:12
| 1
| 718
|
DukeSilver
|
74,839,413
| 8,417,363
|
Add column from single dimension list pyspark efficiently
|
<p>I'm doing ensemble learning and I want to put the ensembled single column matrix into the original Dataframe used to gather the data.</p>
<p>The <code>df</code> looks like this, with repeating UserId every six rows.</p>
<pre><code>+------+-------+----------+----------+----------+----------+----------+----------+
|UserId|TrackId|Predictor1|Predictor2|Predictor3|Predictor4|Predictor5|Predictor6|
+------+-------+----------+----------+----------+----------+----------+----------+
|199810| 105760| 1| 1| 1| 1| 1| 1|
|199810| 18515| 1| 1| 1| 1| 1| 1|
|199810| 242681| 1| 1| -1| -1| -1| -1|
|199810| 74139| -1| -1| -1| -1| -1| -1|
|199810| 208019| -1| -1| -1| -1| -1| -1|
|199810| 9903| -1| -1| -1| -1| -1| -1|
|199812| 142408| 1| 1| 1| 1| 1| 1|
+------+-------+----------+----------+----------+----------+----------+----------+
</code></pre>
<p>and the <code>s_ensemble</code> matrix looks like</p>
<pre><code>[[ 0.72892909]
[ 0.72892909]
[-0.58959307]
...
[-0.72892909]
[-0.72892909]
[-0.72892909]]
</code></pre>
<p>both have the same amount of rows being 120000.</p>
<p>I'm creating a Dataframe for the matrix flattened into an array and adding an index column to both dataframes, joining the two on the same index and then dropping it.</p>
<pre><code>df = df.select('UserId', 'TrackID').withColumn("row_idx", row_number().over(Window.orderBy(monotonically_increasing_id())))
b = spark.createDataFrame([np.array(s_ensemble).flatten().tolist(),], ['Ensembled_Prediction']).withColumn("row_idx", row_number().over(Window.orderBy(monotonically_increasing_id())))
df = df.join(b, df.row_idx == b.row_idx).drop("row_idx")
</code></pre>
<p>This works to create it, but I am unable to see what it looks like, and I believe I'm maxing out my machine's memory as it spikes after merging these two Dataframes together. I see this exception when trying <code>df.show(5)</code>.</p>
<pre><code>Py4JJavaError: An error occurred while calling o1086.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 159.0 failed 1 times, most recent failure: Lost task 9.0 in stage 159.0 (TID 166) (host.docker.internal executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
</code></pre>
<p>I bumped my jupyter notebook to use 16 GB and I also maxed out RAM on the free version of colab.</p>
<p>Is there another way to add a single dimension matrix to an existing Dataframe without creating multiple temporary large Dataframes?</p>
|
<python><dataframe><pyspark><jupyter-notebook>
|
2022-12-18 06:05:25
| 1
| 3,186
|
Jimenemex
|
74,839,142
| 8,070,090
|
google sheet API Request had insufficient authentication scopes
|
<p>I'm trying to follow the Google Sheet API from its documentation here:
<a href="https://developers.google.com/sheets/api/guides/values" rel="nofollow noreferrer">https://developers.google.com/sheets/api/guides/values</a></p>
<p>I'm trying to run the snippets of the codes mentioned there, for example, Python code on <a href="https://developers.google.com/sheets/api/guides/values#reading_a_single_range" rel="nofollow noreferrer">Reading a single range</a></p>
<p>Here is an example of a snippet of code that I ran, <code>sheets_get_values.py</code></p>
<pre><code>from __future__ import print_function
import google.auth
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
def get_values(spreadsheet_id, range_name):
creds, _ = google.auth.default()
try:
service = build('sheets', 'v4', credentials=creds)
result = service.spreadsheets().values().get(
spreadsheetId=spreadsheet_id, range=range_name).execute()
rows = result.get('values', [])
print(f"{len(rows)} rows retrieved")
return result
except HttpError as error:
print(f"An error occurred: {error}")
return error
if __name__ == '__main__':
get_values("1CM29gwKIzeXsAppeNwrc8lbYaVMmUclprLuLYuHog4k", "A1:C2")
</code></pre>
<p><code>python3 sheets_get_values.py</code></p>
<p>But I got this message:</p>
<pre><code>An error occurred: <HttpError 403 when requesting https://sheets.googleapis.com/v4/spreadsheets/1CM29gwKIzeXsAppeNwrc8lbYaVMmUclprLuLYuHog4k/values/A1%3AC2?alt=json returned "Request had insufficient authentication scopes.". Details: "[{'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'ACCESS_TOKEN_SCOPE_INSUFFICIENT', 'domain': 'googleapis.com', 'metadata': {'method': 'google.apps.sheets.v4.SpreadsheetsService.GetValues', 'service': 'sheets.googleapis.com'}}]">
</code></pre>
<p>I have tried to set the <strong>GOOGLE_APPLICATION_CREDENTIALS</strong> environment variable:</p>
<p><code>export GOOGLE_APPLICATION_CREDENTIALS='/home/myUser/.config/gcloud/application_default_credentials.json'</code></p>
<p>I also already enable application default credentials:</p>
<p><code>gcloud auth application-default login</code></p>
<p><code>gcloud config set project project_name</code></p>
<p>I also try to solve this by following a few instructions related to <strong>Request had insufficient authentication scopes</strong> by Googling, for example <a href="https://stackoverflow.com/a/46686130/8070090">this one</a></p>
<p>I also tried modifying the default snippet of code from the Google documentation to be like this:</p>
<pre><code>SCOPES = [
'https://www.googleapis.com/auth/drive',
'https://www.googleapis.com/auth/drive.file',
'https://www.googleapis.com/auth/spreadsheets',
]
def append_values(spreadsheet_id, range_name, value_input_option,
_values):
creds, _ = google.auth.default(scopes=SCOPES)
...
...
</code></pre>
<p>But the error still appears when I run the snippet of code.</p>
<p>So, what should I do to solve this error?</p>
|
<python><ubuntu><google-sheets-api>
|
2022-12-18 04:33:18
| 2
| 3,109
|
Tri
|
74,839,133
| 19,939,086
|
A more efficient solution for balanced split of an array with additional conditions
|
<p>I appreciate your help in advance. This is a practice question from Meta's interview preparation website. I have solved it, but I wonder if any optimization can be done.</p>
<h4>Question:</h4>
<p>Is there a way to solve the following problem with a time complexity of O(n)?</p>
<h4>Problem Description:</h4>
<blockquote>
<p>You have been given an array nums of type int. Write a program that
returns the bool type as the return value of the solution function to
determine whether it is possible to split nums into two arrays A and B
such that the following two conditions are satisfied.</p>
<ol>
<li>The sum of the respective array elements of A and B is equal.</li>
<li>All the array elements in A are strictly smaller than all the array elements in B.</li>
</ol>
</blockquote>
<h4>Examples:</h4>
<blockquote>
<p>nums = [1,5,7,1] -> true since A = [1,1,5], B = [7]</p>
<p>nums = [12,7,6,7,6] -> false since A = [6,6,7], B = [7,12] failed the 2nd
requirement</p>
</blockquote>
<h4>What I have tried:</h4>
<p>I have used the sort function in Python, which has a time complexity of O(nlog(n)).</p>
<pre><code>from typing import List
def solution(nums: List[int]) -> bool:
total_sum = sum(nums)
# If the total sum of the array is 0 or an odd number,
# it is impossible to have array A and array B equal.
if total_sum % 2 or total_sum == 0:
return False
nums.sort()
curr_sum, i = total_sum, 0
while curr_sum > total_sum // 2:
curr_sum -= nums[i]
i += 1
if curr_sum == total_sum // 2 and nums[i] != nums[i - 1]:
return True
return False
</code></pre>
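<p>Not strictly O(n), but one observation that trims the sort: a valid split must fall on a value boundary (everything in A is strictly below everything in B), so it is enough to bucket the values and sort only the <em>distinct</em> ones. A hedged sketch:</p>

```python
from collections import Counter

def can_split(nums):
    total = sum(nums)
    if total == 0 or total % 2:
        return False
    half = total // 2
    counts = Counter(nums)           # O(n) bucketing
    below = 0                        # sum of all elements strictly below v
    for v in sorted(counts):         # sorts distinct values only
        if below == half and below > 0:
            return True              # A = everything below v, B = the rest
        below += v * counts[v]
    return False

print(can_split([1, 5, 7, 1]))      # True
print(can_split([12, 7, 6, 7, 6]))  # False
```

When the array has many duplicates, the sort runs over far fewer items; in the worst case (all values distinct) it is still O(n log n).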
|
<python><algorithm><sorting><optimization><time-complexity>
|
2022-12-18 04:30:09
| 1
| 938
|
KORIN
|
74,838,922
| 2,989,330
|
python3-config --ldflags doesn't include -lpython3.x
|
<h1>Problem statement</h1>
<p>When trying to build a sample Cython application with g++, I get the linker error <code>undefined reference to 'Py_Finalize'</code>. The Python linker arguments are provided by <code>python3-config</code>, but they appear to be incomplete. If I additionally include <code>-lpython3.x</code>, the build succeeds and the binary runs.</p>
<h1>Reproduction</h1>
<p>To reproduce the problem, I created a minimal (not) working example with Docker. The file that I want to compile is <code>main.cpp</code> and looks like this:</p>
<pre><code>#include <Python.h>
#include <iostream>
int main() {
Py_Initialize();
std::cout << "Python initialized" << std::endl;
Py_Finalize();
}
</code></pre>
<p>The corresponding <code>Dockerfile</code> is the following:</p>
<pre><code>FROM ubuntu:22.04
RUN apt-get update -y && \
apt-get install -y build-essential python3.10 libpython3.10 python3-dev libpython3-dev
COPY main.cpp /root
RUN g++ -g $(python3-config --cflags) /root/main.cpp $(python3-config --ldflags) -o /root/main
</code></pre>
<p>My system is Ubuntu 22.04 (but 20.04 exhibits the same problem) and Python 3.10 (but 3.8 and 3.9 exhibit the same problem).</p>
<h1>Observations</h1>
<p>The build log is the following:</p>
<pre><code>Step 4/4 : RUN g++ -g $(python3-config --cflags) /root/main.cpp $(python3-config --ldflags) -o /root/main
---> Running in b60bbdb5d14f
/usr/bin/ld: /tmp/ccU46Ddl.o: in function `main':
/root/main.cpp:5: undefined reference to `Py_Initialize'
/usr/bin/ld: /root/main.cpp:7: undefined reference to `Py_Finalize'
collect2: error: ld returned 1 exit status
The command '/bin/sh -c g++ -g $(python3-config --cflags) /root/main.cpp $(python3-config --ldflags) -o /root/main' returned a non-zero code: 1
</code></pre>
<p>It's missing <code>Py_Initialize</code> and <code>Py_Finalize</code>, i.e., those symbols from the Python3 library. The output of <code>python3-config --ldflags</code> is:</p>
<pre><code>-L/usr/lib/python3.10/config-3.10-x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -lcrypt -ldl -lm -lm
</code></pre>
<p>If I manually add the <code>-lpython3.10</code> option, the build succeeds:</p>
<pre><code>Step 4/4 : RUN g++ -g $(python3-config --cflags) /root/main.cpp $(python3-config --ldflags) -lpython3.10 -o /root/main && /root/main
---> Running in c2551fbfa979
Python initialized
Removing intermediate container c2551fbfa979
---> 47303fef29ad
Successfully built 47303fef29ad
</code></pre>
<p>The output of <code>python3-config --ldflags</code> on Debian and Python 3.7 (Docker image <code>python:3.7-slim</code>) <em>does</em> include <code>lpython3.7m</code>:</p>
<pre><code>$ docker run -it python:3.7-slim python3-config --ldflags
-L/usr/local/lib -lpython3.7m -lcrypt -lpthread -ldl -lutil -lm
</code></pre>
<p>On Debian with Python 3.10 (Docker image <code>python:3.10-slim</code>), we can observe this problem as well:</p>
<pre><code>$ docker run -it python:3.10-slim python3-config --ldflags
-L/usr/local/lib -lcrypt -lpthread -ldl -lutil -lm -lm
</code></pre>
<h1>Questions</h1>
<ol>
<li>Why doesn't <code>python3-config</code> give me all necessary linker options to build this application?</li>
<li>Am I supposed to add <code>-lpython3.10</code> by myself or are there better ways (i.e., more system-independent ways) to circumvent this?</li>
<li>Is the output of <code>python3-config</code> erroneous and why does it include <code>-lm</code> twice?</li>
</ol>
|
<python><c++><python-3.x><linux><compiler-errors>
|
2022-12-18 03:18:12
| 1
| 3,203
|
Green 绿色
|
74,838,796
| 7,987,118
|
Efficiently replace column value from a column of dict from pandas dict
|
<p>I would like help vectorizing my current code; any help or comments are appreciated.
I have a df with a weird column that is derived from an availability checker function, like this:</p>
<pre><code>original_df = pd.DataFrame({
'a':['a1', 'a2', 'a3', 'a4'],
'b':['b1', 'b20', 'b98', 'b4'],
'c':[{'a':'not_available', 'b': 'b1'}, {}, {'a':'a3', 'b': 'b98'}, {'a':'not_available', 'b': 'not_available'}],
})
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">a</th>
<th style="text-align: center;">b</th>
<th style="text-align: left;">c</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">a1</td>
<td style="text-align: center;">b1</td>
<td style="text-align: left;"><code>{'a': 'not_available', 'b': 'b11'}</code></td>
</tr>
<tr>
<td style="text-align: center;">a2</td>
<td style="text-align: center;">b20</td>
<td style="text-align: left;"><code> {}</code></td>
</tr>
<tr>
<td style="text-align: center;">a3</td>
<td style="text-align: center;">b98</td>
<td style="text-align: left;"><code> {'a': 'a3', 'b': 'b98'}</code></td>
</tr>
<tr>
<td style="text-align: center;">a4</td>
<td style="text-align: center;">b4</td>
<td style="text-align: left;"><code> {'a': 'not_available', 'b': 'not_available'}</code></td>
</tr>
</tbody>
</table>
</div>
<p>I would like to transform the columns <code>a</code> and <code>b</code> based on the dictionary of column <code>c</code>
So, the resulting DF looks something like this:</p>
<pre><code>desired_df = pd.DataFrame({
'a':['not_available', 'a2', 'a3', 'not_available'],
'b':['b1', 'b20', 'b98', 'not_available']})
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">a</th>
<th style="text-align: center;">b</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">not_available</td>
<td style="text-align: center;">b1</td>
</tr>
<tr>
<td style="text-align: center;">a2</td>
<td style="text-align: center;">b20</td>
</tr>
<tr>
<td style="text-align: center;">a3</td>
<td style="text-align: center;">b98</td>
</tr>
<tr>
<td style="text-align: center;">not_available</td>
<td style="text-align: center;">not_available</td>
</tr>
</tbody>
</table>
</div>
<p>Some things to note: if the dict in column <code>c</code> is empty, leave the values in the other columns as they are.
The values in the dict of <code>c</code> can only be the current value in the other column or <code>not_available</code>.</p>
<pre><code>for idx, row in original_df.iterrows():
for key, value in row.c.items():
original_df.loc[idx, key] = value
</code></pre>
<p>This is a downsampled scenario; the real dict contains 8 columns and the df usually has 20-60 rows.</p>
<p>This is my current code and it works, but it is very slow. This code is used in an API, and my profiler tells me that this function consumes the highest cumulative time. That makes sense since I'm iterating over everything, and I was hoping to get some help!</p>
<p>Shubam's answer has made this function go from 20 seconds to 0.208 seconds. Thank you!</p>
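<p>For reference, a vectorized sketch of the same transformation (it leans on the stated guarantee that the dict keys always name existing columns; <code>DataFrame.update</code> skips NaN, which covers the empty-dict rows):</p>

```python
import pandas as pd

original_df = pd.DataFrame({
    'a': ['a1', 'a2', 'a3', 'a4'],
    'b': ['b1', 'b20', 'b98', 'b4'],
    'c': [{'a': 'not_available', 'b': 'b1'}, {},
          {'a': 'a3', 'b': 'b98'},
          {'a': 'not_available', 'b': 'not_available'}],
})

# Expand the dicts into their own frame; keys missing from a dict
# become NaN, and DataFrame.update leaves NaN cells untouched.
overrides = pd.DataFrame(original_df['c'].tolist(), index=original_df.index)
result = original_df.drop(columns='c')
result.update(overrides)
print(result)
```

This does one column-aligned overwrite instead of a per-cell <code>.loc</code> assignment inside a double loop, which is where the original version spends its time.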
|
<python><python-3.x><pandas><performance><vectorization>
|
2022-12-18 02:35:34
| 2
| 363
|
Harsha
|
74,838,582
| 15,542,245
|
"NoneType" error message for variables of class string type
|
<p>I understand that re.search returns a match object <a href="https://docs.python.org/3/library/re.html" rel="nofollow noreferrer">re.search(pattern, string, flags=0)</a>
I am testing the success of the match & then retrieving the string.</p>
<pre><code># matchTest.py --- testing match object returns
import re
# expected_pattern = suburbRegex
suburbRegex = "(?s)(,_\S+\s)"
# line leading up to and including expected_pattern
mostOfLineRex = "(?s)(^.+,\S+)"
# expected_pattern to end of line
theRestRex = "(?s),_\S+\s\w+\s(.+)"
fileLines = ['173 ANDREWS John Frances 20 Bell_Road,_Sub_urbia Semi Retired\n']
for fileLine in fileLines:
result = re.search(suburbRegex, fileLine)
# print(type(result)) # re.Match
if(result):
patResult = re.search(suburbRegex, fileLine).group(0)
# print(patResult)
# print(type(patResult)) # str
# print(type(re.search(suburbRegex, fileLine).group(0))) # str
start = re.search(mostOfLineRex, fileLine)
if(start):
start = re.search(mostOfLineRex, fileLine).group(0)
# print(start)
print(type(start)) # str
end = re.search(theRestRex, fileLine)
if(end):
end = re.search(theRestRex, fileLine).group(0)
# print(end)
print(type(end)) # str
newFileLine = start + ' ' + end
else:
print("The listing does not have a suburb!")
# File "~\matchTest.py", line 31, in <module>
# newFileLine = start + ' ' + end
# TypeError: can only concatenate str (not "NoneType") to str
</code></pre>
<p>I checked that the types for start & end are <code><class 'str'></code>. So why do I get the <code>NoneType</code> error?</p>
|
<python><regex><typeerror>
|
2022-12-18 01:30:34
| 1
| 903
|
Dave
|
74,838,554
| 5,410,188
|
requests.exceptions.JSONDecodeError: Extra data: line 1 column 5 (char 4)
|
<p>newb to python and requests here. I'm working with an API trying to get a JSON response but am only able to get the text version of it. My line of <code>resp_random_1_100_int.json()</code> returns the error:</p>
<p><code>*** requests.exceptions.JSONDecodeError: Extra data: line 1 column 4 (char 3) resp_random_1_100_int.json()</code></p>
<p>but when I run my Insomnia(similar to Postman), I'm able to get a <code>JSON</code> response</p>
<p>here is my code:</p>
<pre><code>resp_random_1_100_int = requests.get(
f"http://numbersapi.com/random",
params={
"min": "0",
"max": "100"
}
)
resp_random_1_100_int.json() # <-- gives error
</code></pre>
<p>Here is my response from insomnia:</p>
<pre><code>{
"text": "1 is the number of Gods in monotheism.",
"number": 1,
"found": true,
"type": "trivia"
}
</code></pre>
<p>Here is what it looks like in insomnia:
<a href="https://i.sstatic.net/LomoW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LomoW.png" alt="enter image description here" /></a></p>
<p>Also, if I only <code>run resp_random_1_100_int.text</code>, the response comes back with a random fact and it works but I'm <strong>not</strong> able to access the other object attributes with <code>resp_random_1_100_int.number</code> or <code>resp_random_1_100_int.type</code></p>
<p>I've tried setting the headers with <code>headers = {'Content-type': 'application/json'}</code> but still no luck.</p>
<p>Any and all help/direction is appreciated.</p>
|
<python><json><python-requests><insomnia>
|
2022-12-18 01:20:43
| 1
| 1,542
|
Cflux
|
74,838,498
| 1,914,781
|
plotly - show x-axis with arrow head
|
<p>How to draw a x-axis with arrow head at the right side? example:</p>
<pre><code>import re
import numpy as np
import plotly.express as px
def save_fig(fig,pngname):
fig.write_image(pngname,format="png", width=800, height=300, scale=1)
print("[[%s]]"%pngname)
#plt.show()
return
def plot(x,y):
fig = px.scatter(
x=x,
y=y,
error_y=[0] * len(y),
error_y_minus=y
)
tickvals = [0,np.pi/2,np.pi,np.pi*3/2,2*np.pi]
ticktext = ["0","$\\frac{\pi}{2}$","$\pi$","$\\frac{3\pi}{4}$","$2\pi$"]
layout = dict(
title="demo",
xaxis_title="X",
yaxis_title="Y",
title_x=0.5,
margin=dict(l=10,t=20,r=0,b=40),
height=300,
xaxis=dict(
tickangle=0,
tickvals = tickvals,
ticktext=ticktext,
),
yaxis=dict(
showgrid=True,
zeroline=False,
showline=False,
showticklabels=True
)
)
fig.update_traces(
marker_size=14,
)
fig.update_layout(layout)
fig.show()
</code></pre>
<p>Current results:
<a href="https://i.sstatic.net/DXGeR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DXGeR.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2022-12-18 01:01:52
| 1
| 9,011
|
lucky1928
|
74,838,389
| 12,990,109
|
Import output of kubectl get pods -o json in to pandas dataframe
|
<p>I'd like to import the output of:</p>
<pre><code>kubectl get pods -o json
</code></pre>
<p>into a python pandas dataframe. This should contain also all containers and there resource request and limits.</p>
<p>My code starts as follows:</p>
<pre><code>import json
import numpy as np
import pandas as pd
import os
pods_raw = os.popen('kubectl get pods -o json').read()
pods_json = json.loads(pods_raw)['items']
</code></pre>
<p>From here on I struggle to get the data into a dataframe correctly; in particular, 'spec.containers' should be split up when multiple containers exist.</p>
|
<python><json><pandas><kubernetes>
|
2022-12-18 00:28:06
| 2
| 453
|
Jeromba6
|
74,838,294
| 13,983,136
|
Update a model with test data to make new forecasts after training it on training data
|
<p>I would like to understand if the procedure I'm following is standard or if I'm making some mistake.
I have a time series of 48 values (one value per month from 2018 to 2021), stored in the data frame <code>df</code>:</p>
<pre><code> Amount
2018-01 125.6
... ...
2020-12 145.2
2021-01 148.4
... ...
2021-12 198.8
</code></pre>
<p>I would like to create a model that can predict the quantity for the months I want.</p>
<p>In short, I take the first three years (36 months) and use this data to train my model, and then test it on the last year (2021), as follows:</p>
<pre><code>df_train = df[:36]
df_test = df[36:]
arima = pm.auto_arima(df_train, error_action='ignore', trace=True,
suppress_warnings=True, maxiter=50,
seasonal=True, m=12,
random_state=1)
# Best model: ARIMA(1,1,1)(0,1,0)[12]
predictions, conf_int = arima.predict(n_periods=12, return_conf_int=True)
df_predictions = pd.DataFrame(predictions, index=df_test.index)
df_predictions.columns = ['Predicted amount']
</code></pre>
<p>Then, I use:</p>
<pre><code>r2_score(df_test['Amount'], df_predictions['Predicted amount'])
</code></pre>
<p>getting about 0.92, so everything seems to be fine. Is this correct up to here?</p>
<p>Finally, I want to forecast 2022 amounts, where I have no control data.
To do this, I update the model and repeat the process from before:</p>
<pre><code>arima.update(df_test)
df_forecasts = pd.DataFrame(arima.predict(n_periods=12), index=pd.date_range(start='2022-01-01', end='2022-12-01', freq='MS'))
df_forecasts.columns = ['Forecasted amount']
</code></pre>
<p>I'm more unsure about this last part, is that correct?</p>
<p>I have made a very concise summary of the procedure, but I am interested in understanding if the path I have followed is standard and correct. Thanks to anyone who can answer me.</p>
|
<python><time-series><forecasting><arima><pmdarima>
|
2022-12-18 00:00:41
| 0
| 787
|
LJG
|
74,838,138
| 2,665,591
|
Python function with two overloads calls another one with the same overloads - explain to mypy
|
<p>How do we explain to mypy that this code is ok?</p>
<pre><code>from typing import overload, Union
@overload
def foo(x: float, y: str):
...
@overload
def foo(x: list, y: tuple):
...
def foo(x: Union[float, list], y: Union[str, tuple]):
bar(x, y) # type error!
@overload
def bar(x: float, y: str):
...
@overload
def bar(x: list, y: tuple):
...
def bar(x: Union[float, list], y: Union[str, tuple]):
print(x)
print(y)
</code></pre>
<p>The type errors are:</p>
<pre><code>foo.py:13: error: Argument 1 to "bar" has incompatible type "Union[float, List[Any]]"; expected "float" [arg-type]
foo.py:13: error: Argument 2 to "bar" has incompatible type "Union[str, Tuple[Any, ...]]"; expected "str" [arg-type]
</code></pre>
<p>In similar cases I would use <code>TypeVar</code>, but the problem is that here <code>x</code> and <code>y</code> are different (but correlated) types.</p>
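<p>For comparison, one commonly used workaround (a sketch, not the only option) is to type the non-overload implementation signatures as <code>Any</code>; the inner call then matches <code>bar</code>'s overloads without a complaint, while callers still see the precise overloaded types:</p>

```python
from typing import Any, overload

@overload
def bar(x: float, y: str) -> None: ...
@overload
def bar(x: list, y: tuple) -> None: ...
def bar(x: Any, y: Any) -> None:
    print(x)
    print(y)

@overload
def foo(x: float, y: str) -> None: ...
@overload
def foo(x: list, y: tuple) -> None: ...
def foo(x: Any, y: Any) -> None:
    bar(x, y)  # x and y are Any here, so mypy accepts the call

foo(1.0, "hi")
foo([1], (2,))
```

The trade-off is that the implementation body loses type checking; the overload signatures still guard every call site.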
|
<python><mypy>
|
2022-12-17 23:20:29
| 1
| 3,462
|
DanielSank
|
74,838,124
| 3,684,575
|
How to convert screen x,y (cartesian coordinates) to 3D world space crosshair movement angles (screenToWorld)?
|
<p>Recently I've been playing around with computer vision and neural networks.<br>
And came across experimental object detection within a 3D application.<br>
But, surprisingly to me, I've faced an issue of converting one coordinate system to another <em>(AFAIK cartesian to polar/spherical)</em>.</p>
<p><strong>Let me explain.</strong><br>
For example, we have a screenshot of a 3D application window <em>(some 3D game)</em>:
<a href="https://i.sstatic.net/kC8wG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kC8wG.png" alt="enter image description here" /></a></p>
<p>Now, using Open-CV or neural network I'm able to detect the round spheres <em>(in-game targets)</em>.<br>
As well as their <code>X, Y</code> coordinates within the game window <em>(x, y offsets)</em>.<br>
<a href="https://i.sstatic.net/TBTv8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TBTv8.png" alt="enter image description here" /></a></p>
<p>And if I will programmatically move a mouse cursor within the given <code>X, Y</code> coordinates in order to aim one of the targets.<br>
It will work only when I'm in desktop environment <em>(moving the cursor in desktop)</em>.<br>
But when I switch to the 3D game and thus, my mouse cursor is now within 3D game world environment - it does not work and does not aim the target.</p>
<p>So, I did a decent research on the topic.<br>
And what I came across, is that the mouse cursor is locked inside 3D game.<br>
Because of this, we cannot move the cursor using <code>MOUSEEVENTF_MOVE (0x0001) + MOUSEEVENTF_ABSOLUTE (0x8000)</code> flags within the <code>mouse_event</code> win32 call.</p>
<p>We are only able to move the mouse programmatically using relative movement.<br>
And, theoretically, in order to get this relative mouse movement offsets, we can calculate the offset of detections from the middle of the 3D game window.<br>
In such case, relative movement vector would be something like <code>(x=-100, y=0)</code> if the target point is <code>100px</code> left from the middle of the screen.</p>
<p>The thing is, that the crosshair inside a 3D game will not move <code>100px</code> to the left as expected.<br>
And will not aim the given target.<br>
But it will move a bit in a given direction.</p>
<p>After that, I've made more research on the topic.<br>
And as I understand, the crosshair inside a 3D game is moving using angles in 3D space.<br>
Specifically, there are only two of them: <code>horizontal movement angles</code> and <code>vertical movement angles</code>.</p>
<p>So the game engine takes our mouse movement and converts it to the movement angles within a given 3D world space.<br>
And that's how the crosshair movement is done inside a 3D game.<br>
But we don't have access to that, all we can is move the mouse with <code>win32</code> calls externally.</p>
<p>Then I've decided to somehow calculate <code>pixels per degree</code> <em>(the amount of pixels we need to use with <code>win32</code> relative mouse movement in order to move the crosshair by 1 degree inside the game)</em>.<br>
In order to do this, I've wrote down a simple calculation algorithm.<br>
Here it is:
<a href="https://i.sstatic.net/XHdZw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XHdZw.png" alt="enter image description here" /></a></p>
<p>As you can see, we need to move our mouse relatively with <code>win32</code> by <code>16400</code> pixels horizontally, in order to move the crosshair inside our game by 360 degrees.<br>
And indeed, it works.<br>
<code>16400/2</code> will move the crosshair by 180 degrees respectively.</p>
<p>What I did next, is I tried to convert our screen <code>X, Y</code> target offset coordinates to percentages (from the middle of the screen).<br>
And then convert them to degrees.<br></p>
<p>The overall formula looked like <em>(example for horizontal movement only)</em>:<br></p>
<pre><code>w = 100 # screen width
x_offset = 10 # target x offset
hor_fov = 106.26
degs = (hor_fov/2) * (x_offset /w) # 5.313 degrees
</code></pre>
<p>And indeed, it worked!<br>
But not quite as expected.<br>
The overall aiming precision was different, depending on how far the target is from the middle of the screen.</p>
<p>I'm not that great with trigonometry, but as I can say - there's something to do with polar/sphere coordinates.<br>
Because we can see only some part of the game world both horizontally & vertically.<br>
It's also called the <code>FOV (Field of view)</code>.</p>
<p>Because of this, in the given 3D game we are only able to view <code>106.26</code> degrees horizontally.<br>
And <code>73.74</code> degrees vertically.<br></p>
<p>My guess, is that I'm trying to convert coordinates from linear system to something non-linear.<br>
As a result, the overall accuracy is not good enough.</p>
<p>I've also tried to use <code>math.atan</code> in Python.<br>
And it works, but still - not accurate.<br></p>
<p>Here is the code:</p>
<pre><code>def point_get_difference(source_point, dest_point):
# 1000, 1000
# source_point = (960, 540)
# dest_point = (833, 645)
# result = (100, 100)
x = dest_point[0]-source_point[0]
y = dest_point[1]-source_point[1]
return x, y
def get_move_angle__new(aim_target, gwr, pixels_per_degree, fov):
game_window_rect__center = (gwr[2]/2, gwr[3]/2)
rel_diff = list(point_get_difference(game_window_rect__center, aim_target))
x_degs = degrees(atan(rel_diff[0]/game_window_rect__center[0])) * ((fov[0]/2)/45)
y_degs = degrees(atan(rel_diff[1] / game_window_rect__center[0])) * ((fov[1]/2)/45)
rel_diff[0] = pixels_per_degree * x_degs
rel_diff[1] = pixels_per_degree * y_degs
return rel_diff, (x_degs+y_degs)
get_move_angle__new((900, 540), (0, 0, 1920, 1080), 16364/360, (106.26, 73.74))
# Output will be: ([-191.93420990140876, 0.0], -4.222458785413539)
# But it's not accurate, overall x_degs must be more or less than -4.22...
</code></pre>
<p>Is there a way to precisely convert 2D screen <code>X, Y</code> coordinates into 3D game crosshair movement degrees?<br>
There must be a way, I just can't figure it out ...</p>
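<p>Not a full answer, but for reference: a perspective camera projects through a tangent, not linearly, so the screen-to-angle conversion goes through <code>atan</code> against a pixel focal length derived from the FOV. A sketch (assuming a symmetric pinhole projection, which a given game engine may or may not use):</p>

```python
import math

def screen_x_to_yaw_degrees(x_offset, screen_w, hfov_deg):
    # Focal length in pixels of a pinhole camera whose horizontal FOV
    # is hfov_deg over a screen_w-pixel-wide image.
    f = (screen_w / 2) / math.tan(math.radians(hfov_deg) / 2)
    # Yaw angle to a point x_offset pixels from the screen centre.
    return math.degrees(math.atan(x_offset / f))

# 1920px-wide window, 106.26 deg horizontal FOV:
print(screen_x_to_yaw_degrees(960, 1920, 106.26))  # screen edge -> ~53.13 deg
```

The same formula applies vertically with the vertical FOV. Near the centre it agrees with the linear approximation, and it diverges toward the edges, which would match the accuracy drop described above.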
|
<python><math><3d><camera><trigonometry>
|
2022-12-17 23:17:08
| 1
| 1,912
|
Abraham Tugalov
|
74,838,064
| 3,611,472
|
SSL pickle error when running multiprocess inside class's method and websocket
|
<p>I am creating a Python object that establishes a websocket connection.
Depending on the message that I receive, I would like to run a class's method in a child process using <code>multiprocessing</code>. However, I get the error <code>cannot pickle 'SSLSocket' object</code>. Why does it happen?</p>
<p>The following code reproduces the error.</p>
<pre><code>import websocket
import json
from multiprocessing import Process
def on_close(ws, close_status_code, close_msg):
print('closed')
def on_error(ws, error):
print(error)
class ObClient():
def __init__(self):
self.ws = None
def on_open(self, ws):
subscribe_message = {"method": "SUBSCRIBE",
"params": 'btcusdt@depth5@1000ms',
"id": 1}
ws.send(json.dumps(subscribe_message))
def func(self):
print('hello')
def on_message(self, ws, message):
process = Process(target=self.func)
process.start()
process.join()
def connect(self):
self.ws = websocket.WebSocketApp('wss://stream.binance.com:443/stream', on_close=on_close, on_error=on_error,
on_open=self.on_open, on_message=self.on_message)
self.ws.run_forever()
if __name__ == '__main__':
client = ObClient()
client.connect()
</code></pre>
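<p>For what it's worth, a minimal stand-alone reproduction of the cause (a sketch; <code>Holder</code> is hypothetical, not part of any library): on spawn-based platforms, <code>multiprocessing</code> pickles <code>Process(target=self.func)</code>, and pickling a bound method drags the whole instance along — including the unpicklable <code>SSLSocket</code> stored on it.</p>

```python
import pickle
import socket
import ssl

class Holder:
    """Hypothetical stand-in for ObClient: holds an SSL socket attribute."""
    def __init__(self):
        ctx = ssl.create_default_context()
        # Wrapping an unconnected socket is enough; no network I/O happens.
        self.sock = ctx.wrap_socket(socket.socket(), server_hostname="example.com")

    def func(self):
        print("hello")

h = Holder()
try:
    pickle.dumps(h.func)  # what Process(target=h.func) effectively attempts
except Exception as exc:
    print(type(exc).__name__, exc)
```

A common way around it is to make the worker a plain module-level function (or a <code>@staticmethod</code>) and pass it only picklable arguments, so the instance — and its socket — never crosses the process boundary.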
|
<python><python-3.x><websocket><multiprocessing><python-multiprocessing>
|
2022-12-17 23:04:08
| 1
| 443
|
apt45
|
74,838,036
| 11,573,910
|
Dataframe with Array Column to New Dataframe
|
<p>I have a <code>feather</code> file that I'm loading into a DataFrame.</p>
<p>This DataFrame has a column with <code>languages</code> in each row, like the one below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>student_id</th>
<th>name</th>
<th>created_at</th>
<th>languages</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Foo</td>
<td>2019-01-03 14:30:32.146000+00:00</td>
<td>[{'language_id': 1, 'name': 'English', 'optin_...</td>
</tr>
<tr>
<td>2</td>
<td>Bar</td>
<td>2019-01-03 14:30:32.146000+00:00</td>
<td>[{'language_id': 1, 'name': 'English', 'optin_...</td>
</tr>
</tbody>
</table>
</div>
<p>My question is: how can I generate a new DataFrame with only the <code>student_id</code> column plus the contents of the <code>languages</code> array?</p>
<p>For example, like the one below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>student_id</th>
<th>language_id</th>
<th>language_name</th>
<th>optin_at</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>English</td>
<td>2019-01-03T14:30:32.148Z</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>English</td>
<td>2021-05-30T00:33:02.915Z</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>Portuguese</td>
<td>2022-03-07T07:42:07.082Z</td>
</tr>
</tbody>
</table>
</div>
<p>// EDIT:</p>
<p>Exported Dataframe as JSON (<code>orient='records'</code>) for testing purposes:</p>
<pre><code>[{"student_id":"1","name":"Foo","created_at":"2019-01-03T14:30:32.146Z","languages":[{"language_id":1,"name":"English","optin_at":"2019-01-03T14:30:32.148Z"}]},{"student_id":"2","name":"Bar","created_at":"2019-01-03T14:30:32.146Z","languages":[{"language_id":1,"name":"English","optin_at":"2021-05-30T00:33:02.915Z"},{"language_id":2,"name":"Portuguese","optin_at":"2022-03-07T07:42:07.082Z"}]}]
</code></pre>
|
<python><pandas><dataframe><feather>
|
2022-12-17 22:58:54
| 1
| 331
|
Hudson Medeiros
|
74,838,017
| 3,215,940
|
Apply rolling custom function with pandas
|
<p>There are a few similar questions on this site, but I couldn't find a solution to my particular question.</p>
<p>I have a dataframe that I want to process with a custom function (the real function has a bit more pre-processing, but the gist is contained in the toy example <code>fun</code>).</p>
<pre><code>import statsmodels.api as sm
import numpy as np
import pandas as pd
mtcars=pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data)
def fun(col1, col2, w1=10, w2=2):
return(np.mean(w1 * col1 + w2 * col2))
# This is the behavior I would expect for the full dataset, currently working
mtcars.apply(lambda x: fun(x.cyl, x.mpg), axis=1)
# This was my approach to do the same with a rolling function
mtcars.rolling(3).apply(lambda x: fun(x.cyl, x.mpg))
</code></pre>
<p>The <code>rolling</code> version returns this error:</p>
<pre><code>AttributeError: 'Series' object has no attribute 'cyl'
</code></pre>
<p>I figured I don't fully understand how <code>rolling</code> works, since adding a print statement to the beginning of my function shows that <code>fun</code> is not getting the full dataset but an unnamed Series of length 3. What is the right approach to apply this rolling function in <code>pandas</code>?</p>
<p>Just in case, I am running</p>
<pre><code>>>> pd.__version__
'1.5.2'
</code></pre>
<h3>Update</h3>
<p>Looks like there is a very similar question <a href="https://stackoverflow.com/questions/60736556/pandas-rolling-apply-using-multiple-columns">here</a> which might partially overlap with what I'm trying to do.</p>
<p>For completeness, here's how I would do this in <code>R</code> with the expected output.</p>
<pre><code>library(dplyr)
fun <- function(col1, col2, w1=10, w2=2){
return(mean(w1*col1 + w2*col2))
}
mtcars %>%
mutate(roll = slider::slide2(.x = cyl,
.y = mpg,
.f = fun,
.before = 1,
.after = 1))
mpg cyl disp hp drat wt qsec vs am gear carb roll
Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4 102
Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4 96.53333
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 96.8
Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1 101.9333
Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2 105.4667
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 107.4
Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4 97.86667
Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2 94.33333
Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2 90.93333
Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4 93.2
Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4 102.2667
Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 107.6667
Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 112.6
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 108.6
Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4 104
Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4 103.6667
Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4 105
Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1 105
Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2 104.4667
Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1 97.2
Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1 100.6
Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2 101.4667
AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2 109.3333
Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4 111.8
Pontiac Firebird 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2 106.5333
Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1 101.6667
Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2 95.8
Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2 101.4667
Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4 103.9333
Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6 107
Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8 97.4
Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2 96.4
</code></pre>
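For comparison, one way to get a multi-column rolling computation in pandas is to roll over row positions and slice both columns for each trailing window, since <code>rolling().apply()</code> hands the function one column at a time. A sketch (trailing windows, so the values differ from the centered R window above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"cyl": [6, 6, 4, 6], "mpg": [21.0, 21.0, 22.8, 21.4]})

def fun(col1, col2, w1=10, w2=2):
    return np.mean(w1 * col1 + w2 * col2)

# roll over row positions and slice both columns for each trailing window
window = 3
roll = [
    fun(df["cyl"].iloc[max(0, i - window + 1): i + 1],
        df["mpg"].iloc[max(0, i - window + 1): i + 1])
    for i in range(len(df))
]
print(roll)
```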
|
<python><pandas>
|
2022-12-17 22:54:23
| 4
| 4,270
|
Matias Andina
|
74,837,978
| 19,111,974
|
SyntaxError: invalid non-printable character U+00A0 in python
|
<p>I'm getting the error:</p>
<p><code>SyntaxError: invalid non-printable character U+00A0</code></p>
<p>When I'm running the code below:</p>
<pre><code># coding=utf-8
from PIL import Image
img = Image.open("img.png")
</code></pre>
<p>I have tried to load different images with different formats (png, jpg, jpeg). I have tried using different versions of the Pillow library. I have also tried running it using python 2 and 3.</p>
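U+00A0 is a non-breaking space, typically pasted in with code copied from a web page; Python rejects it as indentation or whitespace between tokens. A small sketch for locating and replacing such characters (the offending line here is made up):

```python
# a stray non-breaking space hides after "img" in this hypothetical line
line = 'img\u00a0= Image.open("img.png")'

# report where each U+00A0 sits so it can be re-typed as a normal space
cols = [i for i, ch in enumerate(line) if ch == "\u00a0"]
print("U+00A0 at columns:", cols)

fixed = line.replace("\u00a0", " ")
print(fixed)
```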
|
<python><syntax-error><indentation>
|
2022-12-17 22:45:14
| 1
| 571
|
cris.sol
|
74,837,661
| 13,062,745
|
Pandas how to plot multiple 0/1 distributions in to one figure
|
<p>I have a dataset that looks like this:</p>
<pre><code>label colA colB colC
0 1 0 0
0 0 1 0
1 0 0 1
1 1 0 1
</code></pre>
<p>Each row will have label 0 or 1, and only one of colA, colB and colC will be 1; the others will be 0. <br>
I want to plot one figure that looks something like (<strong>Y will be counts, X will be 0/1</strong>): <br><br>
<a href="https://i.sstatic.net/7JvWR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7JvWR.png" alt="enter image description here" /></a> <br>
Based on the example given, there will be 6 bars in total: 3 for label 0 and 3 for label 1. How do I do that? I know how to plot one column with
<code>df[df['colA']==1]['label'].plot()</code>, but I'm not sure how to combine multiple columns.</p>
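One hedged sketch of the data shaping: aggregate the counts of 1s per label first, which yields a frame that plots as six grouped bars with a single <code>plot(kind='bar')</code> call:

```python
import pandas as pd

df = pd.DataFrame({"label": [0, 0, 1, 1],
                   "colA":  [1, 0, 0, 1],
                   "colB":  [0, 1, 0, 0],
                   "colC":  [0, 0, 1, 1]})

# one row per label, one column per indicator: the count of 1s in each
counts = df.groupby("label")[["colA", "colB", "colC"]].sum()
print(counts)
# counts.plot(kind="bar") then draws all six bars in one figure
```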
|
<python><pandas><matplotlib>
|
2022-12-17 21:44:42
| 2
| 742
|
zxcisnoias
|
74,837,591
| 386,861
|
No module called found - why?
|
<p>Still trying to get to grips with VSCode.</p>
<p>Trying to work through this: <a href="https://www.youtube.com/watch?v=7aIb6iQZkDw" rel="nofollow noreferrer">https://www.youtube.com/watch?v=7aIb6iQZkDw</a></p>
<p>I've run the first line of code:</p>
<p>from selenium import webdriver</p>
<p>The reply is:</p>
<pre><code>ModuleNotFoundError: No module named 'selenium'
</code></pre>
<p>Tried the normal pip install method having thought I'd already installed it:</p>
<pre><code>pip install selenium
Requirement already satisfied: selenium in ./opt/anaconda3/lib/python3.9/site-packages (4.7.2)
Requirement already satisfied: urllib3[socks]~=1.26 in ./opt/anaconda3/lib/python3.9/site-packages (from selenium) (1.26.12)
Requirement already satisfied: certifi>=2021.10.8 in ./opt/anaconda3/lib/python3.9/site-packages (from selenium) (2022.9.24)
Requirement already satisfied: trio~=0.17 in ./opt/anaconda3/lib/python3.9/site-packages (from selenium) (0.22.0)
Requirement already satisfied: trio-websocket~=0.9 in ./opt/anaconda3/lib/python3.9/site-packages (from selenium) (0.9.2)
Requirement already satisfied: sortedcontainers in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (2.4.0)
Requirement already satisfied: attrs>=19.2.0 in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (21.4.0)
Requirement already satisfied: async-generator>=1.9 in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (1.10)
Requirement already satisfied: outcome in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (1.2.0)
Requirement already satisfied: sniffio in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (1.2.0)
Requirement already satisfied: idna in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (3.4)
Requirement already satisfied: exceptiongroup>=1.0.0rc9 in ./opt/anaconda3/lib/python3.9/site-packages (from trio~=0.17->selenium) (1.0.4)
Requirement already satisfied: wsproto>=0.14 in ./opt/anaconda3/lib/python3.9/site-packages (from trio-websocket~=0.9->selenium) (1.2.0)
Requirement already satisfied: PySocks!=1.5.7,<2.0,>=1.5.6 in ./opt/anaconda3/lib/python3.9/site-packages (from urllib3[socks]~=1.26->selenium) (1.7.1)
Requirement already satisfied: h11<1,>=0.9.0 in ./opt/anaconda3/lib/python3.9/site-packages (from wsproto>=0.14->trio-websocket~=0.9->selenium) (0.14.0)
</code></pre>
<p>What's going on?</p>
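What's usually going on is an interpreter mismatch: <code>pip</code> installed selenium into the Anaconda interpreter, while VS Code runs a different one. A quick check is to run this both in the terminal and inside VS Code; if the printed paths differ, select the Anaconda interpreter in VS Code or install with <code>python -m pip install selenium</code> using VS Code's interpreter:

```python
import sys

# the interpreter actually executing this script
print(sys.executable)
print(sys.version)
```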
|
<python><visual-studio-code>
|
2022-12-17 21:30:55
| 1
| 7,882
|
elksie5000
|
74,837,586
| 6,346,514
|
Loop through .zip files and print length of excels inside zip
|
<p>I am having trouble looping through the .zip files (5 of them) in my directory, which contain Excel files with multiple sheets.</p>
<ol>
<li>I want to count the records in each .zip file (appending record count of each sheet)</li>
</ol>
<p>Right now I'm just getting the first row of the output below. I think I have an issue at the line <code>df = df[:1]</code>, as it limits the result to one row. Any ideas?</p>
<p>Output should be something like -</p>
<p><a href="https://i.sstatic.net/yi92u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yi92u.png" alt="enter image description here" /></a></p>
<p>Code:</p>
<pre><code># Import File Function
def read_excel_sheets(xls_path):
"""Read all sheets of an Excel workbook and return a single DataFrame"""
xl = pd.ExcelFile(xls_path)
df = pd.DataFrame()
for idx, name in enumerate(xl.sheet_names):
sheet = xl.parse(name, header=None, dtype=str, ignore_index=True)
# Drop Empty Columns
sheet.dropna(axis=1, how='all', inplace=True)
# Add sheet name as column
sheet['sheet'] = name.split(" ")[-1]
# Appending Data
if len(df.columns) >= len(sheet.columns):
df = df.append(sheet, ignore_index=True, sort=True)
else:
df = sheet.append(df, ignore_index=True, sort=True)
del sheet
return df
# Process File Function
def process_files(list_of_files):
df = pd.DataFrame()
for file in list_of_files:
# zip file handler
zip = zipfile.ZipFile(file)
# list available files in the container
zfiles = zip.namelist()
#extensions to process
extensions = (".xls",".xlsx")
# Importing to Dataframe
for zfile in zfiles:
if zfile.endswith(extensions):
# For Row Count Check
df_temp = read_excel_sheets(zip.open(zfile))
df = df.append(df_temp, ignore_index=True, sort=True)
df['Length'] = len(df)
df['FileName'] = file
df = df[['Length','FileName']]
df = df[:1]
return df
input_location = r'O:\Stack\Over\Flow'
month_to_process = glob.glob(input_location + "\\2022 - 10\\*.zip")
df = process_files(month_to_process)
print(df)
</code></pre>
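The root issue is that <code>df</code> keeps accumulating rows across files, so <code>df['Length'] = len(df)</code> records a running total and <code>df = df[:1]</code> keeps only the first file. A hedged sketch of the fix's general shape: build one summary record per file and assemble the DataFrame at the end (here the per-file lengths are supplied directly instead of being read from real zip files):

```python
import pandas as pd

def summarize(lengths_by_file):
    # one record per file, assembled into a DataFrame at the end; nothing
    # accumulates across iterations, so no per-file count can be overwritten
    records = [{"Length": n, "FileName": name}
               for name, n in lengths_by_file.items()]
    return pd.DataFrame(records)

print(summarize({"a.zip": 120, "b.zip": 87}))
```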
|
<python><pandas>
|
2022-12-17 21:28:47
| 1
| 577
|
Jonnyboi
|
74,837,553
| 1,123,336
|
How to fix a Mac base conda environment when sqlite3 is broken
|
<p>I recently updated the Python version of my base conda environment from 3.8 to 3.9, using <code>mamba update python=3.9</code>, but I can no longer run IPython, because the sqlite3 package appears to be broken.</p>
<pre><code>python
Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:55:37)
[Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/rosborn/opt/miniconda3/lib/python3.9/sqlite3/__init__.py", line 57, in <module>
from sqlite3.dbapi2 import *
File "/Users/rosborn/opt/miniconda3/lib/python3.9/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: dlopen(/Users/rosborn/opt/miniconda3/lib/python3.9/lib-dynload/_sqlite3.cpython-39-darwin.so, 0x0002): Symbol not found: (_sqlite3_enable_load_extension)
Referenced from: '/Users/rosborn/opt/miniconda3/lib/python3.9/lib-dynload/_sqlite3.cpython-39-darwin.so'
Expected in: '/usr/lib/libsqlite3.dylib'
</code></pre>
<p>Since I had another Python 3.9 environment that is still functional, I tried copying over the <code>envs/py39/lib/sqlite3.36.0</code> and <code>envs/py39/lib/python3.9/sqlite3</code> directories, as well as <code>envs/py39/lib/python3.9/lib-dynload/_sqlite3.cpython-39-darwin.so</code> because I assumed the sqlite3 libraries had been incorrectly compiled, but that doesn't fix the problem.</p>
<p>On the Homebrew Github, there was a <a href="https://github.com/Homebrew/legacy-homebrew/issues/49731" rel="noreferrer">related issue</a>, where someone suggested checking whether the missing symbol was there. It seems to be all present and correct.</p>
<pre><code>$ nm -gj /Users/rosborn/opt/miniconda3/lib/python3.9/lib-dynload/_sqlite3.cpython-39-darwin.so | grep enable_load_extension
_sqlite3_enable_load_extension
</code></pre>
<p>I don't know how Homebrew installs sqlite3, but the remaining fixes seemed to require checking the system libsqlite, which I don't have administrative access to. In case it's relevant, I am on an Intel Mac, so it's not related to the M1 chip, as some related issues appear to be.</p>
<p>Does the conda distribution attempt to link to the system libsqlite? If so, why does this problem not affect the <code>py39</code> environment?</p>
<p>Any tips will be welcome. If it were not the base environment, I would just delete the one with the problem and start again. I attempted a forced reinstall of sqlite3, but it appeared not to be installable as a separate package.</p>
|
<python><sqlite><conda>
|
2022-12-17 21:19:53
| 2
| 582
|
Ray Osborn
|
74,837,544
| 86,924
|
Exclude packages included in Lambda layers from being packaged with AWS Lambda function
|
<p>A number of the dependencies for my Python AWS Lambda function are in Lambda layers, so they do not have to be in the deployment package for the lambda. I'm using pipenv to manage my dependencies.</p>
<p>To test locally I need the dependencies that are in the layers to be in my Pipfile, but then they also end up in the deployment package. Is there a way to prevent that?</p>
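One common pattern (an assumption about your workflow, not a built-in pipenv feature): keep the layer-provided packages under <code>[dev-packages]</code> so they install locally for testing, then generate the production requirements without <code>--dev</code> when building the bundle, e.g. <code>pipenv requirements > requirements.txt</code> on recent pipenv versions (older versions use <code>pipenv lock -r</code>). A sketch of the Pipfile layout, with <code>boto3</code> and <code>numpy</code> as placeholder package names:

```toml
[packages]
boto3 = "*"     # shipped inside the deployment package

[dev-packages]
numpy = "*"     # provided by a Lambda layer; needed locally only
```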
|
<python><aws-lambda><pipenv>
|
2022-12-17 21:19:02
| 1
| 7,162
|
Dan Hook
|
74,837,332
| 14,190,526
|
Why is an autospecced mock of Enum not iterable (has no __iter__ method)?
|
<p>Why is an autospecced mock of <code>Enum</code> not iterable (has no <code>__iter__</code> method)?</p>
<pre class="lang-py prettyprint-override"><code>from unittest.mock import create_autospec
from enum import IntEnum
class IE(IntEnum):
Q = 1
W = 2
m = create_autospec(spec=IE, instance=False)
print(hasattr(IE, '__iter__')) # True
print(hasattr(m, '__iter__')) # False
print(m) # <MagicMock spec='IE' id='140008077774128'>
ml = create_autospec(spec=[1,2,3], instance=True)
print(hasattr(ml, '__iter__')) # True
</code></pre>
<p>Python version:</p>
<pre><code>Python 3.10.9 (main, Dec 7 2022, 01:12:00) [GCC 9.4.0] on linux
</code></pre>
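A hedged illustration of why: iterating an Enum class is implemented on its metaclass (<code>EnumMeta</code>), not on the class itself, so introspecting <code>IE</code>'s own attributes does not surface <code>__iter__</code>:

```python
from enum import IntEnum

class IE(IntEnum):
    Q = 1
    W = 2

# __iter__ lives on the metaclass, not on the Enum subclass itself,
# so dict-based introspection of IE alone misses it
print('__iter__' in IE.__dict__)        # False
print('__iter__' in type(IE).__dict__)  # True
```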
|
<python><mocking><python-unittest><python-unittest.mock>
|
2022-12-17 20:38:16
| 1
| 1,100
|
salius
|
74,837,241
| 4,864,195
|
use openAI CLIP with torch tensor or numpy array as input
|
<p>How do I correctly use a PyTorch tensor or an OpenCV image as input for OpenAI CLIP?</p>
<p>I tried the following, but it didn't work so far:</p>
<pre><code>device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, clip_preprocess = clip.load("ViT-B/32", device=device)
clip_preprocess(torch.from_numpy(OpenCVImage)).unsqueeze(0).to(device)
</code></pre>
<ul>
<li>the preprocess step fails with message <code>Process finished with exit code 137 (interrupted by signal 9: SIGKILL)</code></li>
<li>openCVImage is an object that was already processed with cv2.COLOR_BGR2RGB</li>
</ul>
|
<python><pytorch><openai-clip>
|
2022-12-17 20:21:38
| 3
| 333
|
werber bang
|
74,837,095
| 5,881,882
|
Torch: tensor to numpy - cycling problem .cpu() to .detach().numpy()
|
<p>I got a problem with converting a torch tensor to numpy array.</p>
<pre><code> avg_rewards = th.mean(rewards, dim=[0,1,3])
avg_targets = th.mean(th.mean(targets.reshape(rewards.shape), dim=[0, 1, 3]))
avg_score = th.max(avg_rewards, avg_targets)
avg_q = th.mean(q, dim=[0, 1, 3])
griefers = avg_score > avg_q
griefers = [i for i, x in enumerate(griefers) if x]
grieve_factor = th.tanh(th.clamp(avg_score / avg_q, min=0)).detach().numpy()
</code></pre>
<p>The last line raises an error. If I use <code>.detach().numpy()</code> I get a "use .cpu()..." message.
If, however, I use <code>.cpu()</code>, I get the "use <code>.detach().numpy()</code>" error.</p>
<p>I printed out the type of grieve_factor:</p>
<pre><code>the grieve factors are tensor([0., 0.], device='cuda:0', grad_fn=<TanhBackward0>)
</code></pre>
<p>I don't need any gradients on it; I just want it to be a [0., 0.] array.</p>
|
<python><numpy><tensor><torch>
|
2022-12-17 19:58:16
| 1
| 388
|
Alex
|
74,836,987
| 4,358,113
|
How can I extract all sections of a Wikipedia page in plain text?
|
<p>I have the following code in Python, which extracts only the introduction of the article on "Artificial intelligence", whereas I would like to extract all sections (History, Goals, ...):</p>
<pre><code>import requests
def get_wikipedia_page(page_title):
endpoint = "https://en.wikipedia.org/w/api.php"
params = {
"format": "json",
"action": "query",
"prop": "extracts",
"exintro": "",
"explaintext": "",
"titles": page_title
}
response = requests.get(endpoint, params=params)
data = response.json()
pages = data["query"]["pages"]
page_id = list(pages.keys())[0]
return pages[page_id]["extract"]
page_title = "Artificial intelligence"
wikipedia_page = get_wikipedia_page(page_title)
</code></pre>
<p>Someone proposed another approach that parses the HTML and uses BeautifulSoup to convert it to text:</p>
<pre><code>from urllib.request import urlopen
from bs4 import BeautifulSoup
url = "https://en.wikipedia.org/wiki/Artificial_intelligence"
html = urlopen(url).read()
soup = BeautifulSoup(html, features="html.parser")
# kill all script and style elements
for script in soup(["script", "style"]):
script.extract() # rip it out
# get text
text = soup.get_text()
# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)
print(text)
</code></pre>
<p>This is not a good-enough solution, as it includes all text that appears on the website (like image text), and it includes citations in the text (e.g. [1]), while the first script removes them.</p>
<p>I suspect the Wikipedia API offers a more elegant solution; it would be rather strange if one could only retrieve the first section.</p>
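It does: <code>exintro</code> is what restricts the extract to the content before the first section, so omitting it returns the plain text of the whole article (note the API may truncate extracts of very long pages). A sketch using only the standard library, with the URL construction split out:

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://en.wikipedia.org/w/api.php"

def build_full_extract_url(page_title):
    # same query as in the question, but *without* "exintro": the extract
    # then covers the whole article, not just the lead section
    params = {
        "format": "json",
        "action": "query",
        "prop": "extracts",
        "explaintext": "",
        "titles": page_title,
    }
    return ENDPOINT + "?" + urllib.parse.urlencode(params)

def get_wikipedia_page_full(page_title):
    with urllib.request.urlopen(build_full_extract_url(page_title)) as resp:
        pages = json.load(resp)["query"]["pages"]
    return next(iter(pages.values()))["extract"]

print(build_full_extract_url("Artificial intelligence"))
```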
|
<python><wikipedia-api>
|
2022-12-17 19:41:27
| 1
| 507
|
blindeyes
|
74,836,912
| 20,026,039
|
Selecting an element with multiple classes in python lxml using xpath
|
<p>I was trying to scrape a website using Python requests and lxml. I can easily select elements with a single class using <code>html.xpath()</code>, but I can't figure out how to select elements with multiple classes.</p>
<p>I used some code like this to select the elements in page with class "title":</p>
<p><code>page.xpath('//a[@class="title"]')</code></p>
<p>However, I couldn't select elements with multiple classes. I checked a few examples and tried to study XPath, but it seems like <code>lxml.html.xpath()</code> works differently; maybe it's my lack of understanding. The attempts that didn't work for me are given below.</p>
<p><strong>HTML code</strong></p>
<pre><code><a href="https://www.lovemycosmetic.de/skin1004-madagascar-centella-ampoule-30ml-" class="info text-center" title="SKIN1004 Madagascar Centella Ampoule 30ml"> <strong class="supplier"><font style="vertical-align: inherit;"><font style="vertical-align: inherit;">SKIN1004</font></font></strong><font style="vertical-align: inherit;"><font style="vertical-align: inherit;">SKIN1004 Madagascar Centella Ampoule 30ml</font></font></a>
</code></pre>
<p><strong>Test 1:</strong></p>
<p><code>page.xpath('//a[@class="info text-center"]')</code></p>
<p><strong>Test 2:</strong></p>
<p><code>page.xpath("//a[@class='info text-center']")</code></p>
<p><strong>Test 3:</strong></p>
<p><code>page.xpath('//a[@class="info.text-center"]')</code></p>
<p><strong>Test 4:</strong></p>
<p><code>page.xpath("//a[contains(@class, 'info') and contains(@class, 'text-center')]")</code></p>
<p>I ran a couple more tests too, but forgot to save the code. It would be great to know how to select elements with multiple classes using <code>lxml.html.xpath()</code>.</p>
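For reference, in XPath <code>@class</code> is a single literal string, so an exact match on the full value, spaces included, selects the element; Tests 1, 2 and 4 are valid XPath, and if they return nothing the served HTML probably differs from the DOM you inspected (e.g. classes added by JavaScript). The same predicate form works with the standard library's ElementTree, shown here so the snippet is self-contained:

```python
import xml.etree.ElementTree as ET

html = """<div>
  <a class="info text-center" href="#x">match</a>
  <a class="title" href="#y">other</a>
</div>"""

root = ET.fromstring(html)
# exact match on the whole class string, spaces included
# (lxml's xpath() accepts the same predicate)
hits = root.findall(".//a[@class='info text-center']")
print([a.text for a in hits])  # ['match']
```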
|
<python><html><python-3.x><xpath><lxml>
|
2022-12-17 19:29:23
| 2
| 464
|
Ajay Pun Magar
|
74,836,900
| 11,652,655
|
ValueError: [E966] `nlp.add_pipe` when changing the sentence segmentaion rule of spaCy model
|
<p>I am using Python 3.9.7 and the <code>spaCy</code> library and want to change the way the model segments a given sentence. Here is a sentence and the segmentation rule I created as an example:</p>
<pre class="lang-py prettyprint-override"><code>import spacy
nlp=spacy.load('en_core_web_sm')
doc2=nlp(u'"Management is doing the right things; leadership is doing the right things." -Peter Drucker')
def set_custom_boundaries(doc):
for token in doc[:-1]:
if token.text==";":
doc[token.i +1].is_sent_start=True
return doc
nlp.add_pipe(set_custom_boundaries, before='parser')
</code></pre>
<p>However, this produces the error message below:</p>
<pre class="lang-none prettyprint-override"><code>ValueError Traceback (most recent call last)
C:\Users\SEYDOU~1\AppData\Local\Temp/ipykernel_21000/1705623728.py in <module>
----> 1 nlp.add_pipe(set_custom_boundaries, before='parser')
~\Anaconda3\lib\site-packages\spacy\language.py in add_pipe(self, factory_name, name, before, after, first, last, source, config, raw_config, validate)
777 bad_val = repr(factory_name)
778 err = Errors.E966.format(component=bad_val, name=name)
--> 779 raise ValueError(err)
780 name = name if name is not None else factory_name
781 if name in self.component_names:
ValueError: [E966] `nlp.add_pipe` now takes the string name of the registered component factory, not a callable component. Expected string, but got <function set_custom_boundaries at 0x000002520A59CCA0> (name: 'None').
- If you created your component with `nlp.create_pipe('name')`: remove nlp.create_pipe and call `nlp.add_pipe('name')` instead.
- If you passed in a component like `TextCategorizer()`: call `nlp.add_pipe` with the string name instead, e.g. `nlp.add_pipe('textcat')`.
- If you're using a custom component: Add the decorator `@Language.component` (for function components) or `@Language.factory` (for class components / factories) to your custom component and assign it a name, e.g. `@Language.component('your_name')`. You can then run `nlp.add_pipe('your_name')` to add it to the pipeline.
</code></pre>
<p>I looked at some solutions online; however, as a beginner in Python, I could not solve the problem. How does one use a custom segmentation rule in the spaCy pipeline?</p>
|
<python><nlp><spacy>
|
2022-12-17 19:27:39
| 1
| 1,285
|
Seydou GORO
|
74,836,886
| 5,075,518
|
Deserialize JSON string to a class instance automatically in Python
|
<p>Assume I have several classes:</p>
<pre><code>class Assignee:
gid: str
name: str
class Task:
gid: str
name: str
created_at: datetime
assignee: Assignee
</code></pre>
<p>and I receive a JSON, that I want to translate into my Task class instance:</p>
<pre><code>{
"gid": "#1",
"name": "my task",
"created_at": "2022-11-02T10:25:49.806Z",
"assignee": {
"gid": "#a1",
"name": "assignee name"
}
}
</code></pre>
<p>I need to get a strongly typed result, not a dict. Is it possible to convert a JSON string to a class instance automatically?
In C# we can do it with <code>JsonConvert.DeserializeObject<Task>(json_string)</code>.
In Python I found the jsonpickle library, but as far as I can see, it cannot do this automatically. Am I missing anything?</p>
<p>In real life I have many more classes and JSON properties.</p>
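For a handful of classes, a hedged standard-library sketch with dataclasses does this explicitly (libraries such as pydantic or dacite automate the recursion for larger models):

```python
import json
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Assignee:
    gid: str
    name: str

@dataclass
class Task:
    gid: str
    name: str
    created_at: datetime
    assignee: Assignee

def task_from_json(raw):
    d = json.loads(raw)
    return Task(
        gid=d["gid"],
        name=d["name"],
        # before Python 3.11 fromisoformat() rejects a trailing "Z"
        created_at=datetime.fromisoformat(d["created_at"].replace("Z", "+00:00")),
        assignee=Assignee(**d["assignee"]),
    )

task = task_from_json('{"gid": "#1", "name": "my task",'
                      ' "created_at": "2022-11-02T10:25:49.806Z",'
                      ' "assignee": {"gid": "#a1", "name": "assignee name"}}')
print(task.assignee.name)  # assignee name
```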
|
<python><json>
|
2022-12-17 19:25:49
| 3
| 466
|
Alex Sham
|
74,836,777
| 4,061,068
|
Matplotlib in huggingfaces space cases Permission Denied Error
|
<p>I am trying to set up Hugging Face Spaces. Here is my Dockerfile:</p>
<pre><code>FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --prefer-binary -r /code/requirements.txt
RUN pip install --user matplotlib
COPY . .
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"]
</code></pre>
<p>When I try to build it and start it it gives me the following error:</p>
<pre><code>-->
===== Application Startup =====
Fetching model from: https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self
Traceback (most recent call last):
File "/usr/local/bin/uvicorn", line 8, in
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
......
File "/code/./app.py", line 3, in
gr.Interface.load("models/facebook/wav2vec2-large-960h-lv60-self").launch()
File "/usr/local/lib/python3.9/site-packages/gradio/interface.py", line 109, in load
return super().load(name=name, src=src, api_key=api_key, alias=alias, **kwargs)
File "/usr/local/lib/python3.9/site-packages/gradio/blocks.py", line 1154, in load
return external.load_blocks_from_repo(name, src, api_key, alias, **kwargs)
File "/usr/local/lib/python3.9/site-packages/gradio/external.py", line 58, in load_blocks_from_repo
blocks: gradio.Blocks = factory_methods[src](name, api_key, alias, **kwargs)
File "/usr/local/lib/python3.9/site-packages/gradio/external.py", line 311, in from_model
interface = gradio.Interface(**kwargs)
File "/usr/local/lib/python3.9/site-packages/gradio/interface.py", line 424, in __init__
self.flagging_callback.setup(
File "/usr/local/lib/python3.9/site-packages/gradio/flagging.py", line 187, in setup
os.makedirs(flagging_dir, exist_ok=True)
File "/usr/local/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: 'flagged'
</code></pre>
<p>I tried creating the folder and giving it permission by adding a <code>RUN chown</code> statement, but that does not seem to work. How can I work around this?</p>
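The traceback shows gradio trying to create a <code>flagged</code> directory in the working directory at startup. One workaround sketch (assuming the Space runs your app as a non-root user with <code>/code</code> as the working directory) is to pre-create that directory and open up its permissions in the image:

```dockerfile
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --prefer-binary -r /code/requirements.txt
COPY . .
# pre-create gradio's flagging directory, writable for any runtime user
RUN mkdir -p /code/flagged && chmod -R 777 /code/flagged
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"]
```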
|
<python><docker><matplotlib><huggingface>
|
2022-12-17 19:13:15
| 1
| 1,067
|
Andrius Solopovas
|
74,836,772
| 7,318,120
|
can seaborn normalise data such that y-axis is clear
|
<p>I plot time series of data where the y values of the data are orders of magnitude different.</p>
<p>I am using <code>seaborn.lmplot</code> and was expecting to find a <code>normalise</code> keyword, but have been unable to.</p>
<p>I tried to use a log scale, but this failed (see diagram).</p>
<p>This is my best attempt so far:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
gbp_stats = pd.read_csv('price_data.csv')
sns.lmplot(data=gbp_stats, x='numeric_time', y='last trade price', col='symbol')
plt.yscale('log')
plt.show()
</code></pre>
<p>Which gave me this:</p>
<p><a href="https://i.sstatic.net/OXX35.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OXX35.png" alt="enter image description here" /></a></p>
<p>As you can see, the result <strong>needs the y-axis scaled or normalized for each plot</strong>. I could do the normalization in pandas, but wanted to avoid that if possible.</p>
<p>So my question is this: does <code>seaborn</code> have a normalize feature such that the y-axes can be compared better than what I have achieved?</p>
|
<python><matplotlib><graph><seaborn>
|
2022-12-17 19:12:29
| 1
| 6,075
|
darren
|
74,836,446
| 7,800,760
|
Networkx: merge list of nodes into a new one conserving edges
|
<p>I am aware of the nx.contracted_nodes function but it only takes two nodes to merge.</p>
<p>Is there a simple, concise way of passing a list of nodes and merging them all into a new one? Here is an example:</p>
<pre><code>import networkx as nx
G = nx.DiGraph()
G.add_edges_from([
('A','B'),
('B','C'),
('C','D'),
('D','E'),
('F','B'),
('B','G'),
('B','D'),
])
nx.draw(
G,
pos=nx.nx_agraph.graphviz_layout(G, prog='dot'),
node_color='#FF0000',
with_labels=True
)
</code></pre>
<p>would generate the following graph:
<a href="https://i.sstatic.net/g3Icr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g3Icr.png" alt="original graph" /></a>
Now I would like to be able to provide a list of nodes such as:</p>
<pre><code>G2 = nodes_to_collapse(['B','C','D'])
</code></pre>
<p>and obtain the following graph:
<a href="https://i.sstatic.net/6CmUY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6CmUY.png" alt="modified graph" /></a>
where H is the new node that results from the collapse of B,C and D of the original graph.</p>
|
<python><networkx>
|
2022-12-17 18:22:06
| 2
| 1,231
|
Robert Alexander
|
74,836,347
| 494,134
|
sqlalchemy session with autocommit=True does not commit
|
<p>I'm trying to use a session with <code>autocommit=True</code> to create a row in a table, but it does not seem to work: the row is not saved to the table.</p>
<pre><code>import os
import sqlalchemy
from sqlalchemy import Table
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy import Column, create_engine, String
db_hostname = os.environ['DB_HOSTNAME']
db_username = os.environ['DB_USERNAME']
db_password = os.environ['DB_PASSWORD']
db_servicename = os.environ['DB_SERVICENAME']
engine_string = f"postgresql://{db_username}:{db_password}@{db_hostname}:5432/{db_servicename}"
engine = create_engine(engine_string, isolation_level='REPEATABLE READ',
poolclass=sqlalchemy.pool.NullPool
)
base = declarative_base()
class Testing(base):
__tablename__ = 'testing'
name = Column(String, primary_key=True)
comment = Column(String)
base.metadata.create_all(engine)
S1 = sessionmaker(engine)
with S1() as session:
test1 = Testing(name="Jimmy", comment="test1")
session.add(test1)
session.commit()
S2 = sessionmaker(engine, autocommit=True)
with S2() as session:
test2 = Testing(name="Johnny", comment="test2")
session.add(test2)
</code></pre>
<p>In this code, the first row with name="Jimmy" and an explicit <code>session.commit()</code> is saved to the table.</p>
<p>But the second row with name="Johnny" is not saved. Specifying <code>autocommit=True</code> on the session appears to have no effect.</p>
<p>What is the cause?</p>
|
<python><sqlalchemy><commit>
|
2022-12-17 18:07:56
| 1
| 33,765
|
John Gordon
|
74,836,331
| 10,088,866
|
SqlAlchemy + Postgres - query with @> ANY(ARRAY['["...]']::jsonb[])
|
<p>I have a postgres table represented in the sql alchemy like</p>
<pre><code>from sqlalchemy import Column
from sqlalchemy.dialects.postgresql import UUID, JSONB
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class ListingDatabaseDocument(Base):
__tablename__ = 'listing'
uuid = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4, unique=True)
doc = Column(JSONB, nullable=False)
</code></pre>
<p>My <code>doc</code> jsonb field looks like</p>
<pre><code>{"name": "SOME_NAME", "features": ["BALCONY", "GARAGE", "ELEVATOR"]}
</code></pre>
<p>Now I'd like to get all rows where the <code>doc->'features'</code> array contains <code>"ELEVATOR","GARAGE"</code>; in pure SQL I do it like this:</p>
<pre><code>SELECT * FROM listing
WHERE doc -> 'features' @> ANY(ARRAY['["ELEVATOR","GARAGE"]']::jsonb[])
</code></pre>
<p>How can I achieve this in SQLAlchemy? I tried something like</p>
<pre><code>from sqlalchemy.dialects.postgresql import JSONB, ARRAY
from sqlalchemy.sql.expression import cast
from sqlalchemy import any_
return session.query(ListingDatabaseDocument).filter(
ListingDatabaseDocument.doc['features'].op('@>')(any_(cast(['ELEVATOR','GARAGE'], ARRAY(JSONB))))
).all()
</code></pre>
<p>but it's not working. Thanks for your help!</p>
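<p>One sketch that may help (an assumption, not verified against the original database): pass the whole list as a single JSONB value, so Postgres renders a plain <code>doc -> 'features' @> CAST(... AS JSONB)</code> containment test, which matches rows whose array contains both elements:</p>

```python
# Compile-only sketch, no database needed. The model below is a minimal
# stand-in for the ListingDatabaseDocument from the question.
from sqlalchemy import Column, cast
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Listing(Base):
    __tablename__ = "listing"
    uuid = Column(postgresql.UUID(as_uuid=True), primary_key=True)
    doc = Column(JSONB, nullable=False)

# Renders roughly as: (listing.doc -> 'features') @> CAST(:param AS JSONB)
expr = Listing.doc["features"].op("@>")(cast(["ELEVATOR", "GARAGE"], JSONB))
print(expr.compile(dialect=postgresql.dialect()))
```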
|
<python><postgresql><sqlalchemy>
|
2022-12-17 18:05:05
| 1
| 2,164
|
Clyde Barrow
|
74,836,262
| 19,797,660
|
What to use instead of if-elif inside a function to choose between multiple functions?
|
<p>I am trying to create my own module that will calculate indicators, and I am implementing a function like this:</p>
<pre><code>def average_day_range(price_df: PandasDataFrame, n: int=14, calculation_tool: int=0):
'''
0 - SMA, 1 - EMA, 2 - WMA, 3 - EWMA, 4 - VWMA, 5 - ALMA, 6 - LSMA, 7 - HULL MA
:param price_df:
:param n:
:param calculation_tool:
:return:
'''
if calculation_tool == 0:
sma_high = sma(price_df=price_df, input_mode=3, n=n, from_price=True)
sma_low = sma(price_df=price_df, input_mode=4, n=n, from_price=True)
adr = sma_high[f'SMA_{n}'] - sma_low[f'SMA_{n}']
adr.rename(columns={0: f'Average Day Range SMA{n}'})
return adr
elif calculation_tool == 1:
ema_high = ema(price_df=price_df, input_mode=3, n=n, from_price=True)
ema_low = ema(price_df=price_df, input_mode=4, n=n, from_price=True)
adr = ema_high[f'EMA_{n}'] - ema_low[f'EMA_{n}']
adr.rename(columns={0: f'Average Day Range SMA{n}'})
return adr
elif calculation_tool == 2:
ema_high = wma(price_df=price_df, input_mode=3, n=n, from_price=True)
ema_low = wma(price_df=price_df, input_mode=4, n=n, from_price=True)
adr = ema_high[f'EMA_{n}'] - ema_low[f'EMA_{n}']
adr.rename(columns={0: f'Average Day Range SMA{n}'})
return adr
elif calculation_tool == 3:
ewma_high = ewma(price_df=price_df, input_mode=3, n=n, from_price=True)
ewma_low = ewma(price_df=price_df, input_mode=4, n=n, from_price=True)
adr = ewma_high[f'EMA_{n}'] - ewma_low[f'EMA_{n}']
adr.rename(columns={0: f'Average Day Range SMA{n}'})
return adr
etc(...)
</code></pre>
<p>Is there a more convenient way to do the same but without <code>if-elif</code> hell?</p>
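<p>One common pattern is a dispatch table: map each <code>calculation_tool</code> code to its moving-average function and call it once. A minimal sketch with trivial stand-in helpers (the real <code>sma</code>/<code>ema</code>/<code>wma</code>/... from the question are assumed, not reproduced):</p>

```python
# Dispatch-table sketch: the helpers below are trivial stand-ins for the
# question's real moving-average functions.
def sma(xs, n):
    return sum(xs[-n:]) / n

def ema(xs, n):          # placeholder, not a real EMA
    return xs[-1]

TOOLS = {0: sma, 1: ema}  # extend with 2: wma, 3: ewma, ...

def average_day_range(highs, lows, n=2, calculation_tool=0):
    f = TOOLS[calculation_tool]          # replaces the whole if/elif chain
    return f(highs, n) - f(lows, n)

print(average_day_range([10, 12], [8, 9], n=2, calculation_tool=0))  # 2.5
```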
|
<python>
|
2022-12-17 17:54:49
| 2
| 329
|
Jakub Szurlej
|
74,836,250
| 7,032,878
|
From normalized Pandas DataFrame to list of nested dicts
|
<p>Supposing I have obtained a normalized DataFrame starting from a list of nested dicts:</p>
<pre><code>sample_list_of_dicts = [
{ 'group1': { 'item1': 'value1', 'item2': 'value2' } },
{ 'group1': { 'item1': 'value3', 'item2': 'value4' } }
]
df = pd.json_normalize(sample_list_of_dicts)
</code></pre>
<p>Is there a way to revert back to the list of nested dicts from the DataFrame <code>df</code>?</p>
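<p>One way (a sketch, assuming the default dot separator that <code>json_normalize</code> uses for column names) is to split each dotted column name back into nesting levels:</p>

```python
import pandas as pd

sample = [
    {'group1': {'item1': 'value1', 'item2': 'value2'}},
    {'group1': {'item1': 'value3', 'item2': 'value4'}},
]
df = pd.json_normalize(sample)  # columns like 'group1.item1'

# Rebuild the nesting by splitting each dotted column name into levels
records = []
for row in df.to_dict(orient='records'):
    nested = {}
    for key, value in row.items():
        parts = key.split('.')
        d = nested
        for p in parts[:-1]:
            d = d.setdefault(p, {})
        d[parts[-1]] = value
    records.append(nested)

print(records == sample)  # True
```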
|
<python><pandas>
|
2022-12-17 17:52:59
| 3
| 627
|
espogian
|
74,836,227
| 511,542
|
When loading Keras checkpoint the validation loss increases on first evaluation
|
<p>I'm having an issue with model saving and loading (fine-tuning).</p>
<p>After I load a model that was saved at val_loss 0.8, on the first epoch it evaluates to 0.85 or so, even though the model never rose above 0.82, for example, during previous training (it just went flat).
ModelCheckpoint then saves it at that point (it seems to start from inf?), overwriting any previous progress.</p>
<p>I'm saving checkpoints like this:</p>
<pre><code>checkpoint = ModelCheckpoint(
"count_fingers.h5",
#filepath="weights.{epoch:02d}-{val_loss:.2f}.h5",
monitor= "val_loss",
verbose= 1,
save_best_only= True,
#save_weights_only= True,
save_freq="epoch"
)
</code></pre>
<p>and loading like this:</p>
<pre><code>model = tf.keras.models.load_model("count_fingers.h5")
</code></pre>
<p>I've tried also <code>model.load_weights</code>.</p>
<p>After some searching, I think it might be related to optimizer decay, which would give a large learning rate on the first epoch? Not sure.</p>
<p>Can you let me know what I'm doing wrong, why this is happening, and what the right approach is to continue exactly from where I left off?</p>
<p>Thank you,</p>
<p>Adrian</p>
|
<python><tensorflow><machine-learning><keras>
|
2022-12-17 17:50:44
| 0
| 1,002
|
Spider
|
74,836,151
| 12,228,795
|
nothing provides __cuda needed by tensorflow-2.10.0-cuda112py310he87a039_0
|
<p>I'm using mambaforge on WSL2 Ubuntu 22.04 with <code>systemd</code> enabled. I'm trying to install TensorFlow 2.10 with CUDA enabled, by using the command:</p>
<pre><code>mamba install tensorflow
</code></pre>
<p>And the command <code>nvidia-smi -q</code> from WSL2 gives:</p>
<pre><code>==============NVSMI LOG==============
Timestamp : Sat Dec 17 23:22:43 2022
Driver Version : 527.56
CUDA Version : 12.0
Attached GPUs : 1
GPU 00000000:01:00.0
Product Name : NVIDIA GeForce RTX 3070 Laptop GPU
Product Brand : GeForce
Product Architecture : Ampere
Display Mode : Disabled
Display Active : Disabled
Persistence Mode : Enabled
MIG Mode
Current : N/A
Pending : N/A
Accounting Mode : Disabled
Accounting Mode Buffer Size : 4000
Driver Model
Current : WDDM
Pending : WDDM
Serial Number : N/A
GPU UUID : GPU-f03a575d-7930-47f3-4965-290b89514ae7
Minor Number : N/A
VBIOS Version : 94.04.3f.00.d7
MultiGPU Board : No
Board ID : 0x100
Board Part Number : N/A
GPU Part Number : 249D-750-A1
Module ID : 1
Inforom Version
Image Version : G001.0000.03.03
OEM Object : 2.0
ECC Object : N/A
Power Management Object : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
GSP Firmware Version : N/A
GPU Virtualization Mode
Virtualization Mode : None
Host VGPU Mode : N/A
IBMNPU
Relaxed Ordering Mode : N/A
PCI
Bus : 0x01
Device : 0x00
Domain : 0x0000
Device Id : 0x249D10DE
Bus Id : 00000000:01:00.0
Sub System Id : 0x118C1043
GPU Link Info
PCIe Generation
Max : 3
Current : 3
Device Current : 3
Device Max : 4
Host Max : 3
Link Width
Max : 16x
Current : 8x
Bridge Chip
Type : N/A
Firmware : N/A
Replays Since Reset : 0
Replay Number Rollovers : 0
Tx Throughput : 0 KB/s
Rx Throughput : 0 KB/s
Atomic Caps Inbound : N/A
Atomic Caps Outbound : N/A
Fan Speed : N/A
Performance State : P8
Clocks Throttle Reasons
Idle : Active
Applications Clocks Setting : Not Active
SW Power Cap : Not Active
HW Slowdown : Not Active
HW Thermal Slowdown : Not Active
HW Power Brake Slowdown : Not Active
Sync Boost : Not Active
SW Thermal Slowdown : Not Active
Display Clock Setting : Not Active
FB Memory Usage
Total : 8192 MiB
Reserved : 159 MiB
Used : 12 MiB
Free : 8020 MiB
BAR1 Memory Usage
Total : 8192 MiB
Used : 1 MiB
Free : 8191 MiB
Compute Mode : Default
Utilization
Gpu : 0 %
Memory : 0 %
Encoder : 0 %
Decoder : 0 %
Encoder Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
FBC Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
Ecc Mode
Current : N/A
Pending : N/A
ECC Errors
Volatile
SRAM Correctable : N/A
SRAM Uncorrectable : N/A
DRAM Correctable : N/A
DRAM Uncorrectable : N/A
Aggregate
SRAM Correctable : N/A
SRAM Uncorrectable : N/A
DRAM Correctable : N/A
DRAM Uncorrectable : N/A
Retired Pages
Single Bit ECC : N/A
Double Bit ECC : N/A
Pending Page Blacklist : N/A
Remapped Rows : N/A
Temperature
GPU Current Temp : 46 C
GPU Shutdown Temp : 101 C
GPU Slowdown Temp : 98 C
GPU Max Operating Temp : 87 C
GPU Target Temperature : N/A
Memory Current Temp : N/A
Memory Max Operating Temp : N/A
Power Readings
Power Management : Supported
Power Draw : 12.08 W
Power Limit : 4294967.50 W
Default Power Limit : 80.00 W
Enforced Power Limit : 100.00 W
Min Power Limit : 1.00 W
Max Power Limit : 100.00 W
Clocks
Graphics : 210 MHz
SM : 210 MHz
Memory : 405 MHz
Video : 555 MHz
Applications Clocks
Graphics : N/A
Memory : N/A
Default Applications Clocks
Graphics : N/A
Memory : N/A
Deferred Clocks
Memory : N/A
Max Clocks
Graphics : 2100 MHz
SM : 2100 MHz
Memory : 6001 MHz
Video : 1950 MHz
Max Customer Boost Clocks
Graphics : N/A
Clock Policy
Auto Boost : N/A
Auto Boost Default : N/A
Voltage
Graphics : 637.500 mV
Fabric
State : N/A
Status : N/A
Processes
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 24
Type : G
Name : /Xwayland
Used GPU Memory : Not available in WDDM driver model
</code></pre>
<p>And my other environment works as expected:</p>
<pre><code>⬢ [Systemd] ❯ mamba activate tf
~ via 🅒 tf via 🐏 774MiB/19GiB | 0B/5GiB
⬢ [Systemd] ❯ python
Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:45:29)
[GCC 10.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2022-12-17 23:25:13.867166: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
</code></pre>
<p>Then, it tries to install package version <code>cuda112py39h9333c2f_1</code>, which uses Python 3.9, but I want Python 3.10. Whenever I try to install the version for 3.10, it shows the error:</p>
<pre><code>Could not solve for environment specs
Encountered problems while solving:
- nothing provides __cuda needed by tensorflow-2.10.0-cuda112py310he87a039_0
The environment can't be solved, aborting the operation
</code></pre>
<p>Why is this error occurring and how can I solve it?</p>
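<p>One thing that may help (an assumption, not verified on this machine): conda-forge packages that depend on the <code>__cuda</code> virtual package rely on the solver detecting the driver's CUDA capability, and that detection can fail under WSL2. conda/mamba accept an environment-variable override:</p>

```shell
# Tell the solver which CUDA version the driver supports (take the value
# from `nvidia-smi`; 11.2 here is just an example matching the build name)
CONDA_OVERRIDE_CUDA="11.2" mamba install "tensorflow=2.10.0=cuda112py310*"
```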
|
<python><tensorflow><conda><mamba><mambaforge>
|
2022-12-17 17:40:52
| 1
| 420
|
Otávio Augusto Silva
|
74,835,980
| 4,508,737
|
Yahoo Finance encrypt context Python fetch
|
<p>I'm using a Python script to fetch quote data, but Yahoo recently encrypted it: the quotes are no longer plain text in the page source, although they appear fine when viewed in a web browser.</p>
<pre><code>def _get_headers():
return {"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8",
"authority":"ca.finance.yahoo.com",
"accept-encoding": "gzip, deflate, br",
"accept-language": "en;q=0.9",
"cache-control": "no-cache",
"dnt": "1",
"sec-ch-ua-platform": "Windows",
"sec-fetch-dest": "document",
"sec-fetch-mode": "navigate",
"sec-fetch-user": "?1",
"upgrade-insecure-requests": "1",
"user-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"}
def get_yahoo_finance_price(ticker):
time.sleep(5)
url = 'https://finance.yahoo.com/quote/'+ticker+'/history?p='+ticker
html = requests.get(url, headers=_get_headers(), timeout=(3.05, 21)).text
soup = BeautifulSoup(html,'html.parser')
soup_script = soup.find("script",text=re.compile("root.App.main")).text
matched = re.search("root.App.main\s+=\s+(\{.*\})",soup_script)
# if matched:
json_script = json.loads(matched.group(1))
print(json_script)
data = json_script['context']['dispatcher']['stores']['HistoricalPriceStore']['prices'][0]
df = pd.DataFrame({'date': dt.fromtimestamp(data['date']).strftime("%Y-%m-%d"),
'close': round(data['close'], 2),
"adjusted close": round(data['adjclose'], 2),
'volume': data['volume'],
'open': round(data['open'], 2),
'high': round(data['high'], 2),
'low': round(data['low'], 2),
}, index=[0])
return df
</code></pre>
|
<python><python-3.x><yahoo-finance>
|
2022-12-17 17:15:15
| 2
| 767
|
Colin Zhong
|
74,835,912
| 309,812
|
ValueError: cannot join with no overlapping index names on the same dataframe
|
<p>I am encountering a strange problem with a pandas dataframe, wherein boolean filtering fails, complaining that it cannot join with no overlapping index names.</p>
<p>To reproduce this problem try below:</p>
<pre class="lang-py prettyprint-override"><code>import yfinance as yf
from datetime import datetime
startdate=datetime(2022,12,1)
enddate=datetime(2022,12,6)
y_symbols = ['GOOG', 'AAPL', 'MSFT']
data=yf.download(y_symbols, start=startdate, end=enddate, auto_adjust=True, threads=True)
data[data['Close'] > 100]
</code></pre>
<p>Then the raised error looks like:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
..
File "lib/python3.9/site-packages/pandas/core/indexes/base.py", line 229, in join
join_index, lidx, ridx = meth(self, other, how=how, level=level, sort=sort)
File "lib/python3.9/site-packages/pandas/core/indexes/base.py", line 4658, in join
return self._join_multi(other, how=how)
File "lib/python3.9/site-packages/pandas/core/indexes/base.py", line 4782, in _join_multi
raise ValueError("cannot join with no overlapping index names")
ValueError: cannot join with no overlapping index names
</code></pre>
<p>Here, <code>data</code> looks like:</p>
<pre class="lang-none prettyprint-override"><code> Close High ... Open Volume
AAPL GOOG MSFT AAPL GOOG MSFT ... AAPL GOOG MSFT AAPL GOOG MSFT
Date ...
2022-12-01 148.309998 101.279999 254.690002 149.130005 102.589996 256.119995 ... 148.210007 101.400002 253.869995 71250400 21771500 26041500
2022-12-02 147.809998 100.830002 255.020004 148.000000 101.150002 256.059998 ... 145.960007 99.370003 249.820007 65421400 18812200 21522800
2022-12-05 146.630005 99.870003 250.199997 150.919998 101.750000 253.820007 ... 147.770004 99.815002 252.009995 68826400 19955500 23435300
</code></pre>
<p>What could be missing in the dataframe that makes this not work?</p>
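<p>For context, a small sketch (synthetic data standing in for the yfinance download) showing why the boolean-DataFrame filter struggles with MultiIndex columns, and two filters that do work:</p>

```python
import pandas as pd

# Synthetic frame mimicking yfinance's (field, ticker) MultiIndex columns
cols = pd.MultiIndex.from_product([["Close", "Open"], ["AAPL", "GOOG"]])
data = pd.DataFrame(
    [[148.3, 101.2, 148.2, 101.4],
     [147.8, 100.8, 146.0, 99.4]],
    columns=cols,
)

# data["Close"] is itself a DataFrame, so data[data["Close"] > 100] tries to
# align a boolean DataFrame whose columns don't match the MultiIndex.

# Filter on one ticker's Close:
one = data[data[("Close", "AAPL")] > 148]

# Or keep rows where any ticker's Close exceeds 100:
any_over = data[(data["Close"] > 100).any(axis=1)]
print(len(one), len(any_over))  # 1 2
```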
|
<python><pandas><dataframe><valueerror>
|
2022-12-17 17:03:41
| 4
| 736
|
Nikhil Mulley
|
74,835,752
| 3,728,901
|
How to use GPU with TensorFlow?
|
<p>My PC</p>
<p><a href="https://i.sstatic.net/yIFDJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yIFDJ.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>Microsoft Windows [Version 10.0.22621.963]
(c) Microsoft Corporation. All rights reserved.
C:\Users\donhu>nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_19:00:59_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0
C:\Users\donhu>nvidia-smi
Sat Dec 17 23:40:44 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 512.77 Driver Version: 512.77 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... WDDM | 00000000:01:00.0 On | N/A |
| 34% 31C P8 16W / 125W | 1377MiB / 6144MiB | 4% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 3392 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 4484 C+G ...artMenuExperienceHost.exe N/A |
| 0 N/A N/A 6424 C+G ...n1h2txyewy\SearchHost.exe N/A |
| 0 N/A N/A 6796 C+G ...lPanel\SystemSettings.exe N/A |
| 0 N/A N/A 7612 C+G ...8bbwe\WindowsTerminal.exe N/A |
| 0 N/A N/A 9700 C+G ...8bbwe\WindowsTerminal.exe N/A |
| 0 N/A N/A 10624 C+G ...perience\NVIDIA Share.exe N/A |
| 0 N/A N/A 10728 C+G ...er Java\jre\bin\javaw.exe N/A |
| 0 N/A N/A 13064 C+G ...8bbwe\WindowsTerminal.exe N/A |
| 0 N/A N/A 14496 C+G ...462.46\msedgewebview2.exe N/A |
| 0 N/A N/A 17124 C+G ...ooting 2\BugShooting2.exe N/A |
| 0 N/A N/A 19064 C+G ...8bbwe\Notepad\Notepad.exe N/A |
| 0 N/A N/A 19352 C+G ...8bbwe\WindowsTerminal.exe N/A |
| 0 N/A N/A 20920 C+G ...y\ShellExperienceHost.exe N/A |
| 0 N/A N/A 21320 C+G ...e\PhoneExperienceHost.exe N/A |
| 0 N/A N/A 21368 C+G ...me\Application\chrome.exe N/A |
+-----------------------------------------------------------------------------+
C:\Users\donhu>
</code></pre>
<p>I train model</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow import keras
from tensorflow.keras import layers
def get_model():
model = keras.Sequential([
layers.Dense(512, activation="relu"),
layers.Dense(10, activation="softmax")
])
model.compile(optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"])
return model
model = get_model()
history_noise = model.fit(
train_images_with_noise_channels, train_labels,
epochs=10,
batch_size=128,
validation_split=0.2)
model = get_model()
history_zeros = model.fit(
train_images_with_zeros_channels, train_labels,
epochs=10,
batch_size=128,
validation_split=0.2)
</code></pre>
<p>source code <a href="https://github.com/donhuvy/deep-learning-with-python-notebooks/blob/master/chapter05_fundamentals-of-ml.ipynb" rel="nofollow noreferrer">https://github.com/donhuvy/deep-learning-with-python-notebooks/blob/master/chapter05_fundamentals-of-ml.ipynb</a></p>
<p><a href="https://i.sstatic.net/zbTgq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zbTgq.png" alt="enter image description here" /></a></p>
<p>How to use GPU with TensorFlow?</p>
|
<python><tensorflow><machine-learning><keras><nvidia>
|
2022-12-17 16:39:29
| 2
| 53,313
|
Vy Do
|
74,835,691
| 54,606
|
How to determine why a query acquire a lock in postgresql
|
<p>I've got a weird issue with Odoo and PostgreSQL. It happens when I try to install the forum module on Odoo 15. That may not be especially relevant, and I'm not sure it's repeatable, but if you want to reproduce the bug, that is how.</p>
<p>Using the answers from this question: <a href="https://stackoverflow.com/questions/26489244/how-to-detect-query-which-holds-the-lock-in-postgres">How to detect query which holds the lock in Postgres?</a></p>
<p>I can find that the SQL query acquiring a lock is the following:</p>
<pre><code>SELECT * FROM ir_translation
WHERE lang='en_US' AND
type='model_terms' AND
name='ir.ui.view,arch_db' AND
res_id IN (2618)
</code></pre>
<p>The res_id (2618) is being created within the same transaction. But this query returns nothing because there is apparently no translation loaded yet.</p>
<pre><code> Table "public.ir_translation"
Column | Type | Collation | Nullable | Default
----------+-------------------+-----------+----------+--------------------------------------------
id | integer | | not null | nextval('ir_translation_id_seq'::regclass)
name | character varying | | not null |
res_id | integer | | |
lang | character varying | | |
type | character varying | | |
src | text | | |
value | text | | |
module | character varying | | |
state | character varying | | |
comments | text | | |
Indexes:
"ir_translation_pkey" PRIMARY KEY, btree (id)
"ir_translation_code_unique" UNIQUE, btree (type, lang, md5(src)) WHERE type::text = 'code'::text
"ir_translation_comments_index" btree (comments)
"ir_translation_model_unique" UNIQUE, btree (type, lang, name, res_id) WHERE type::text = 'model'::text
"ir_translation_module_index" btree (module)
"ir_translation_res_id_index" btree (res_id)
"ir_translation_src_md5" btree (md5(src))
"ir_translation_type_index" btree (type)
"ir_translation_unique" UNIQUE, btree (type, name, lang, res_id, md5(src))
Foreign-key constraints:
"ir_translation_lang_fkey_res_lang" FOREIGN KEY (lang) REFERENCES res_lang(code)
</code></pre>
<p>That's the schema for <code>ir_translation</code>.</p>
<p>Then there are three queries that read fields from <code>res_groups_users_rel, ir_model_data, ir_model_access, ir_model, ir_rule, rule_group_rel</code>. None of those queries gets blocked.</p>
<p>After that, this query gets stuck waiting on a lock held by the <code>ir_translation</code> query.</p>
<pre><code>SELECT "website"."id" as "id", "website"."name" as "name", "website"."sequence" as "sequence", "website"."domain" as "domain", "website"."company_id" as "company_id", "website"."default_lang_id" as "default_lang_id", "website"."auto_redirect_lang" as "auto_redirect_lang", "website"."cookies_bar" as "cookies_bar", "website"."configurator_done" as "configurator_done", "website"."social_twitter" as "social_twitter", "website"."social_facebook" as "social_facebook", "website"."social_github" as "social_github", "website"."social_linkedin" as "social_linkedin", "website"."social_youtube" as "social_youtube", "website"."social_instagram" as "social_instagram", "website"."has_social_default_image" as "has_social_default_image", "website"."google_analytics_key" as "google_analytics_key", "website"."google_management_client_id" as "google_management_client_id", "website"."google_management_client_secret" as "google_management_client_secret", "website"."google_search_console" as "google_search_console", "website"."google_maps_api_key" as "google_maps_api_key", "website"."user_id" as "user_id", "website"."cdn_activated" as "cdn_activated", "website"."cdn_url" as "cdn_url", "website"."cdn_filters" as "cdn_filters", "website"."homepage_id" as "homepage_id", "website"."custom_code_head" as "custom_code_head", "website"."custom_code_footer" as "custom_code_footer", "website"."robots_txt" as "robots_txt", "website"."theme_id" as "theme_id", "website"."specific_user_account" as "specific_user_account", "website"."auth_signup_uninvited" as "auth_signup_uninvited", "website"."create_uid" as "create_uid", "website"."create_date" as "create_date", "website"."write_uid" as "write_uid", "website"."write_date" as "write_date", "website"."channel_id" as "channel_id", "website"."karma_profile_min" as "karma_profile_min" FROM "website" WHERE "website".id IN (1)
</code></pre>
<p>I don't really understand how a query on the website table, with no direct link to ir_translation, can get locked.</p>
<p>Here's the result from <code>select * from lock_monitor;</code> view created based on the linked question.</p>
<pre><code> locked_item | waiting_duration | blocked_pid | blocked_query | blocked_mode | blocking_pid | blocking_query | blocking_mode
-------------+------------------+-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+--------------+-------------------------------------------------------------------------------------------------------------------+--------------------------
website | 00:01:20.267783 | 21038 | SELECT "website"."id" as "id", "website"."name" as "name", "website"."sequence" as "sequence", "website"."domain" as "domain", "website"."company_id" as "company_id", "website"."default_lang_id" as "default_lang_id", "website"."auto_redirect_lang" as "auto_redirect_lang", "website"."cookies_bar" as "cookies_bar", "website"."configurator_done" as "configurator_done", "website"."social_twitter" as "social_twitter", "website"."social_facebook" as "social_facebook", "website"."social_github" as "social_github", "website"."social_linkedin" as "social_linkedin", "website"."social_youtube" as "social_youtube", "website"."social_instagram" as "social_instagram", "website"."has_social_default_image" as "has_social_default_image", "website"."google_analytics_key" as "google_analytics_key", "website"."google_management_client_id" as "google_management_client_id", "website"."google_management_client_secret" as "google_management_client_secret", "website"."google_search_console" as "google_search_console", "website"."goo | AccessShareLock | 21059 | SELECT * FROM ir_translation +| AccessShareLock
| | | | | | WHERE lang='en_US' AND type='model_terms' AND name='ir.ui.view,arch_db' AND res_id IN (2619) |
website | 00:01:20.267783 | 21038 | SELECT "website"."id" as "id", "website"."name" as "name", "website"."sequence" as "sequence", "website"."domain" as "domain", "website"."company_id" as "company_id", "website"."default_lang_id" as "default_lang_id", "website"."auto_redirect_lang" as "auto_redirect_lang", "website"."cookies_bar" as "cookies_bar", "website"."configurator_done" as "configurator_done", "website"."social_twitter" as "social_twitter", "website"."social_facebook" as "social_facebook", "website"."social_github" as "social_github", "website"."social_linkedin" as "social_linkedin", "website"."social_youtube" as "social_youtube", "website"."social_instagram" as "social_instagram", "website"."has_social_default_image" as "has_social_default_image", "website"."google_analytics_key" as "google_analytics_key", "website"."google_management_client_id" as "google_management_client_id", "website"."google_management_client_secret" as "google_management_client_secret", "website"."google_search_console" as "google_search_console", "website"."goo | AccessShareLock | 21059 | SELECT * FROM ir_translation +| RowShareLock
| | | | | | WHERE lang='en_US' AND type='model_terms' AND name='ir.ui.view,arch_db' AND res_id IN (2619) |
website | 00:01:20.267783 | 21038 | SELECT "website"."id" as "id", "website"."name" as "name", "website"."sequence" as "sequence", "website"."domain" as "domain", "website"."company_id" as "company_id", "website"."default_lang_id" as "default_lang_id", "website"."auto_redirect_lang" as "auto_redirect_lang", "website"."cookies_bar" as "cookies_bar", "website"."configurator_done" as "configurator_done", "website"."social_twitter" as "social_twitter", "website"."social_facebook" as "social_facebook", "website"."social_github" as "social_github", "website"."social_linkedin" as "social_linkedin", "website"."social_youtube" as "social_youtube", "website"."social_instagram" as "social_instagram", "website"."has_social_default_image" as "has_social_default_image", "website"."google_analytics_key" as "google_analytics_key", "website"."google_management_client_id" as "google_management_client_id", "website"."google_management_client_secret" as "google_management_client_secret", "website"."google_search_console" as "google_search_console", "website"."goo | AccessShareLock | 21059 | SELECT * FROM ir_translation +| RowExclusiveLock
| | | | | | WHERE lang='en_US' AND type='model_terms' AND name='ir.ui.view,arch_db' AND res_id IN (2619) |
website | 00:01:20.267783 | 21038 | SELECT "website"."id" as "id", "website"."name" as "name", "website"."sequence" as "sequence", "website"."domain" as "domain", "website"."company_id" as "company_id", "website"."default_lang_id" as "default_lang_id", "website"."auto_redirect_lang" as "auto_redirect_lang", "website"."cookies_bar" as "cookies_bar", "website"."configurator_done" as "configurator_done", "website"."social_twitter" as "social_twitter", "website"."social_facebook" as "social_facebook", "website"."social_github" as "social_github", "website"."social_linkedin" as "social_linkedin", "website"."social_youtube" as "social_youtube", "website"."social_instagram" as "social_instagram", "website"."has_social_default_image" as "has_social_default_image", "website"."google_analytics_key" as "google_analytics_key", "website"."google_management_client_id" as "google_management_client_id", "website"."google_management_client_secret" as "google_management_client_secret", "website"."google_search_console" as "google_search_console", "website"."goo | AccessShareLock | 21059 | SELECT * FROM ir_translation +| ShareUpdateExclusiveLock
| | | | | | WHERE lang='en_US' AND type='model_terms' AND name='ir.ui.view,arch_db' AND res_id IN (2619) |
website | 00:01:20.267783 | 21038 | SELECT "website"."id" as "id", "website"."name" as "name", "website"."sequence" as "sequence", "website"."domain" as "domain", "website"."company_id" as "company_id", "website"."default_lang_id" as "default_lang_id", "website"."auto_redirect_lang" as "auto_redirect_lang", "website"."cookies_bar" as "cookies_bar", "website"."configurator_done" as "configurator_done", "website"."social_twitter" as "social_twitter", "website"."social_facebook" as "social_facebook", "website"."social_github" as "social_github", "website"."social_linkedin" as "social_linkedin", "website"."social_youtube" as "social_youtube", "website"."social_instagram" as "social_instagram", "website"."has_social_default_image" as "has_social_default_image", "website"."google_analytics_key" as "google_analytics_key", "website"."google_management_client_id" as "google_management_client_id", "website"."google_management_client_secret" as "google_management_client_secret", "website"."google_search_console" as "google_search_console", "website"."goo | AccessShareLock | 21059 | SELECT * FROM ir_translation +| ShareRowExclusiveLock
| | | | | | WHERE lang='en_US' AND type='model_terms' AND name='ir.ui.view,arch_db' AND res_id IN (2619) |
website | 00:01:20.267783 | 21038 | SELECT "website"."id" as "id", "website"."name" as "name", "website"."sequence" as "sequence", "website"."domain" as "domain", "website"."company_id" as "company_id", "website"."default_lang_id" as "default_lang_id", "website"."auto_redirect_lang" as "auto_redirect_lang", "website"."cookies_bar" as "cookies_bar", "website"."configurator_done" as "configurator_done", "website"."social_twitter" as "social_twitter", "website"."social_facebook" as "social_facebook", "website"."social_github" as "social_github", "website"."social_linkedin" as "social_linkedin", "website"."social_youtube" as "social_youtube", "website"."social_instagram" as "social_instagram", "website"."has_social_default_image" as "has_social_default_image", "website"."google_analytics_key" as "google_analytics_key", "website"."google_management_client_id" as "google_management_client_id", "website"."google_management_client_secret" as "google_management_client_secret", "website"."google_search_console" as "google_search_console", "website"."goo | AccessShareLock | 21059 | SELECT * FROM ir_translation +| AccessExclusiveLock
| | | | | | WHERE lang='en_US' AND type='model_terms' AND name='ir.ui.view,arch_db' AND res_id IN (2619) |
(6 rows)
</code></pre>
<p>Technically, everything is executed in the same transaction, so I'm not certain why it is blocking on a query it previously made. It would make sense if there were two concurrent transactions, because the concurrency mode is set to serial, but in this case I just don't see how two unrelated tables can cause a lock.</p>
<p>But the PIDs seem to imply there are two processes, and I can see two processes in top. Is it possible for Postgres to create multiple processes for one cursor?</p>
|
<python><postgresql><odoo><deadlock>
|
2022-12-17 16:31:32
| 0
| 13,710
|
Loïc Faure-Lacroix
|
74,835,522
| 5,889,169
|
Django models - how to properly calculate multiple values with various conditions
|
<p>I want to calculate multiple values in a single query on my model. Each metric should have a different filter (or no filters at all). I'm using Django==2.2.3 and a <a href="https://www.djongomapper.com/" rel="nofollow noreferrer">Djongo</a> model.</p>
<p>MyModel columns:</p>
<pre><code>user_id = models.IntegerField(blank=True, null=True)
group_id = models.IntegerField(blank=True, null=True)
</code></pre>
<p>What I try to run, <a href="https://stackoverflow.com/a/58108675/5889169">suggested in another topic</a>, does not work for me.</p>
<p>Query:</p>
<pre><code>MyModel.objects.aggregate
(
total=Count('user_id'),
test=Count('user_id', filter=Q(user_id='just_a_fake_id')),
group_1_value=Count('user_id', filter=Q(group_id=1)),
group_2_value=Count('user_id', filter=Q(group_id=2)),
)
</code></pre>
<p>The results, <code>{'total': 0, 'test': 47479, 'group_1_value': 47479, 'group_2_value': 47479}</code>, do not make sense: all results (except <code>total</code>) return the same number, which is the count of all the records.</p>
<p>What I want to run is query similar to</p>
<pre><code>SELECT COUNT(user_id) as total,
COUNT(CASE WHEN group_id=1 THEN user_id END) as group_1_value,
COUNT(CASE WHEN group_id=2 THEN user_id END) as group_2_value
FROM MyModel
</code></pre>
<p>How do I modify the query in order to get the correct values?</p>
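<p>The <code>filter=</code> argument on aggregates requires support in the database backend, which Djongo may not translate correctly; the older <code>Count(Case(When(...)))</code> pattern compiles to exactly the CASE-based SQL shown above and may be a workaround. As a quick stdlib check of that SQL (SQLite standing in here for the real backend, with hypothetical sample rows), purely to confirm the counting logic:</p>

```python
import sqlite3

# Hypothetical sample rows standing in for MyModel (user_id, group_id).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mymodel (user_id INTEGER, group_id INTEGER)")
con.executemany("INSERT INTO mymodel VALUES (?, ?)",
                [(1, 1), (2, 1), (3, 2), (4, 2), (5, 2)])

# COUNT ignores NULLs, so the CASE form counts only matching rows.
total, g1, g2 = con.execute("""
    SELECT COUNT(user_id),
           COUNT(CASE WHEN group_id = 1 THEN user_id END),
           COUNT(CASE WHEN group_id = 2 THEN user_id END)
    FROM mymodel
""").fetchone()
print(total, g1, g2)  # 5 2 3
```

<p>The Django-side equivalent would be along the lines of <code>Count(Case(When(group_id=1, then='user_id')))</code>, which sidesteps reliance on the backend's <code>filter=</code> support.</p>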
|
<python><django><filter><orm><model>
|
2022-12-17 16:07:39
| 0
| 781
|
shayms8
|
74,835,369
| 7,984,318
|
Postgresql remove rows by duplicated column value
|
<p>I'm using python pandas with Postgresql,and I have a table named stock:</p>
<pre><code> open high low close volume datetime
383.97 384.22 383.66 384.08 1298649 2022-12-16 14:25:00
383.59 384.065 383.45 383.98 991327 2022-12-16 14:20:00
383.59 384.065 383.45 383.98 991327 2022-12-16 14:20:00
383.59 384.065 383.45 383.98 991327 2022-12-16 14:20:00
383.64 384.2099 383.54 383.61 1439271 2022-12-16 14:15:00
</code></pre>
<p>How can I remove the rows that have a duplicated datetime, keeping only one row per datetime (the latest one)?</p>
<p>The output should be:</p>
<pre><code> open high low close volume datetime
383.97 384.22 383.66 384.08 1298649 2022-12-16 14:25:00
383.59 384.065 383.45 383.98 991327 2022-12-16 14:20:00
383.64 384.2099 383.54 383.61 1439271 2022-12-16 14:15:00
</code></pre>
<p>Something like:</p>
<pre><code>delete from stock where datetime duplicated > 1
</code></pre>
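<p>On the pandas side, one sketch is <code>drop_duplicates</code>, assuming (as in the sample above) that the most recent row for each datetime appears first; otherwise use <code>keep="last"</code>:</p>

```python
import pandas as pd

# Trimmed version of the sample table; only close/datetime kept for brevity.
df = pd.DataFrame({
    "close":    [384.08, 383.98, 383.98, 383.98, 383.61],
    "datetime": ["2022-12-16 14:25:00", "2022-12-16 14:20:00",
                 "2022-12-16 14:20:00", "2022-12-16 14:20:00",
                 "2022-12-16 14:15:00"],
})

# Keep the first occurrence of each datetime (the latest row in this layout).
deduped = df.drop_duplicates(subset="datetime", keep="first")
print(len(deduped))  # 3
```

<p>On the PostgreSQL side, a common idiom for in-place deletion is <code>DELETE FROM stock a USING stock b WHERE a.ctid &lt; b.ctid AND a.datetime = b.datetime;</code>, which keeps one physical row per datetime.</p>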
|
<python><sql><pandas><postgresql>
|
2022-12-17 15:44:53
| 1
| 4,094
|
William
|
74,835,240
| 4,907,339
|
Comparing time of day on day of week to time of day on day of week
|
<p>Is there a straightforward way to test whether <code>now</code> is between Friday, 5pm and Sunday, 5pm, of the same week?</p>
<p>This attempt returns <code>False</code> because it does not compare <code>now.time()</code> relative to either <code>now.isoweekday() >= 5</code> or <code>now.isoweekday() <= 7</code> being <code>True</code> first.</p>
<pre class="lang-py prettyprint-override"><code>[in]:
import datetime
now = datetime.datetime.now()
print(now)
(now.isoweekday() >= 5 and now.time() >= datetime.time(17, 0, 0, 0)) and (now.isoweekday() <= 7 and now.time() <= datetime.time(17, 0, 0, 0))
[out]:
2022-12-17 10:00:32.253489
False
</code></pre>
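<p>One way to fix the logic is to build the two boundary datetimes of the current ISO week explicitly and test membership in that interval; a minimal stdlib sketch (the helper name <code>in_weekend_window</code> is made up here):</p>

```python
import datetime

def in_weekend_window(now: datetime.datetime) -> bool:
    # Monday of the week containing `now` (weekday(): Mon=0 ... Sun=6).
    monday = now.date() - datetime.timedelta(days=now.weekday())
    start = datetime.datetime.combine(monday + datetime.timedelta(days=4),
                                      datetime.time(17))  # Friday 17:00
    end = datetime.datetime.combine(monday + datetime.timedelta(days=6),
                                    datetime.time(17))    # Sunday 17:00
    return start <= now <= end

print(in_weekend_window(datetime.datetime(2022, 12, 17, 10, 0)))  # True  (Saturday morning)
print(in_weekend_window(datetime.datetime(2022, 12, 16, 16, 0)))  # False (Friday, before 5pm)
```

<p>Comparing full datetimes avoids the original pitfall of comparing times and weekdays independently.</p>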
|
<python><datetime><date-comparison>
|
2022-12-17 15:21:55
| 3
| 492
|
Jason
|
74,835,108
| 9,220,463
|
Graph analysis and optimisation. Transform a UCG in a UAG while maximising total edge weights
|
<p>I want to transform a UCG (undirected cyclic graph) into a UAG (undirected acyclic graph).</p>
<p>To do so I'm looking for an algorithm that can help me prune the edges.</p>
<p>Let's take as example the following graph:</p>
<pre><code>import pandas as pd
import igraph as ig
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
df = pd.DataFrame(
[
[0, 1, 100],
[0, 2, 110],
[2, 3, 70],
[3, 4, 100],
[5, 3, 90],
[1, 6, 85],
[6, 7, 90],
[7, 8, 100],
[5, 6, 10],
[4, 5, 60],
],
columns=["nodeA", "nodeB", "weigth"],
)
g = ig.Graph.DataFrame(df,directed=False)
ig.plot(
g,
target=ax,
edge_label=g.es["weigth"],
edge_color=["green"]*8+["red"]*2,)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/WOSS8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WOSS8.png" alt="enter image description here" /></a></p>
<p>The goal is to make the graph <strong>acyclic</strong> by dropping edges while respecting the following conditions:</p>
<ul>
<li>All the nodes in the graph must remain connected</li>
<li>Given two random nodes, after pruning there is a unique path connecting them</li>
<li>As a loss function, we should try to maximize the sum of the weights of the edges. In the case of multiple solutions with the same total graph weight, the one with fewer edges wins.</li>
</ul>
<p>In this example, we will drop the edges with weights 10 and 60 (The ones in red!) and we will obtain the following structure:</p>
<p><a href="https://i.sstatic.net/taZ1P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/taZ1P.png" alt="enter image description here" /></a></p>
<p>What kind of algorithm/ strategy could I use to prune the edges?</p>
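<p>The three conditions describe a spanning tree (connected, acyclic, unique paths), so maximizing total edge weight is the maximum spanning tree problem: Kruskal's algorithm with edges sorted in descending weight. In igraph this should also be reachable via <code>g.spanning_tree()</code> on negated weights. A self-contained Kruskal sketch on the example data (helper name made up here):</p>

```python
def max_spanning_tree(edges):
    """Kruskal with edges sorted by descending weight; union-find avoids cycles."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    kept = []
    for a, b, w in sorted(edges, key=lambda e: -e[2]):
        ra, rb = find(a), find(b)
        if ra != rb:          # joins two components, so no cycle is created
            parent[ra] = rb
            kept.append((a, b, w))
    return kept

edges = [(0, 1, 100), (0, 2, 110), (2, 3, 70), (3, 4, 100), (5, 3, 90),
         (1, 6, 85), (6, 7, 90), (7, 8, 100), (5, 6, 10), (4, 5, 60)]
tree = max_spanning_tree(edges)
dropped = set(edges) - set(tree)
print(sorted(w for _, _, w in dropped))  # [10, 60]
```

<p>On this graph the algorithm drops exactly the two red edges (weights 10 and 60) from the example.</p>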
|
<python><optimization><graph><igraph><mathematical-optimization>
|
2022-12-17 15:00:19
| 1
| 621
|
riccardo nizzolo
|
74,834,890
| 15,852,600
|
How to detect the language used in a column and put it in a new column?
|
<p>I have the following <code>df</code>:</p>
<pre><code>df = pd.DataFrame({
'user': ['Id159', 'Id758', 'Id146', 'Id477', 'Id212', 'Id999'],
'comment' : ["I inboxed you", '123', 123, 'je suis fatigué', "j'aime", 'ما نوع الجهاز بالضبط']
})
</code></pre>
<p>It has the following display:</p>
<pre><code> user comment
0 Id159 I inboxed you
1 Id758 123
2 Id146 123
3 Id477 je suis fatigué
4 Id212 j'aime
5 Id999 ما نوع الجهاز بالضبط
</code></pre>
<p>My <strong>goal</strong> is to get a new column containing the language used in the column <code>df['comment']</code>, as follows:</p>
<pre><code> user comment language
0 Id159 I inboxed you en
1 Id758 123 UNKNOWN
2 Id146 123 UNKNOWN
3 Id477 je suis fatigué fr
4 Id212 j'aime fr
5 Id999 ما نوع الجهاز بالضبط ar
</code></pre>
<p><strong>My code</strong></p>
<pre class="lang-py prettyprint-override"><code>from langdetect import detect
df['language'] = [detect(x) for x in df['comment']]
</code></pre>
<p>When I tried to use <code>detect</code> I faced the following message error:</p>
<pre><code>LangDetectException: No features in text.
</code></pre>
<p>I tried to add an <code>if else</code> statement, but it didn't solve the issue.</p>
<p>Any help from your side will be highly appreciated (I <strong>upvote</strong> all answers)</p>
<p>Thank you!</p>
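<p><code>detect</code> raises <code>LangDetectException</code> on inputs with no alphabetic features (such as <code>'123'</code>, or the integer <code>123</code>, which also fails because it is not a string). One way out is a small wrapper that coerces to <code>str</code> and catches the exception; a sketch (the helper name <code>safe_detect</code> is made up, and the langdetect import is guarded so the skeleton runs even without the package installed):</p>

```python
def safe_detect(text) -> str:
    text = str(text)
    if not any(ch.isalpha() for ch in text):
        return "UNKNOWN"  # purely numeric/symbolic input has no features
    try:
        from langdetect import detect
        from langdetect.lang_detect_exception import LangDetectException
    except ImportError:
        return "UNKNOWN"  # langdetect not installed in this environment
    try:
        return detect(text)
    except LangDetectException:
        return "UNKNOWN"

print(safe_detect(123))    # UNKNOWN
print(safe_detect("123"))  # UNKNOWN
```

<p>Applied column-wise it would look like <code>df['language'] = df['comment'].apply(safe_detect)</code>.</p>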
|
<python><pandas><dataframe><language-detection>
|
2022-12-17 14:29:44
| 3
| 921
|
Khaled DELLAL
|
74,834,753
| 1,088,305
|
Is there a way to perform lazy operations in Polars on multiple frames at the same time?
|
<p>Lazy mode and query optimization in Polars are a great tool for saving on memory allocations and CPU usage for a single data frame. I wonder if there is a way to do this across multiple lazy frames, as in:</p>
<pre><code>lpdf1 = pdf1.lazy()
lpdf2 = pdf2.lazy()
result_lpdf = -lpdf1/lpdf2
result_pdf = result_lpdf.collect()
</code></pre>
<p>The above code will not run, as division and negation are not implemented for <code>LazyFrame</code>. Yet my aim is to create the new <code>result_pdf</code> frame without materializing a temporary frame for the division and then yet another for the negation (as would be the case in <code>pandas</code> and <code>numpy</code>).</p>
<p>I'm trying to get some performance improvement relative to <code>-pdf1/pdf2</code>, on frames of size <code>(283681, 93)</code>. Any suggestions are welcome.</p>
|
<python><dataframe><hpc><python-polars><rust-polars>
|
2022-12-17 14:08:34
| 1
| 1,186
|
Mark Horvath
|
74,834,635
| 5,281,811
|
Efficient way to check if list of strings are in list of lists and get indexes
|
<p>I have two generated lists: one is a simple list of words, and the other a list of lists of words. What would be the fastest way to check if elements of the first list are in the list of lists, and get the indexes?</p>
<p>E.g.</p>
<pre><code>lists=[["apple","car"],["street","beer"],["plate"]]
Test=["apple","plate"]
# should return [(apple,0),(plate,2)] apple is inside first list and plate inside 3rd list
Test2=["car","street"]
# should return [(car,0),(street,1)]
Test3=["pineapple"]
# should return [] because pineapple isn't inside lists
</code></pre>
<p>I have difficulties implementing a solution because I have never worked with lists of lists. Can someone help me, or at least guide me?</p>
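<p>A dictionary mapping each word to the index of the sub-list it first appears in makes every lookup O(1), so the whole check is linear in the total input size; a sketch (the helper name <code>locate</code> is made up):</p>

```python
def locate(lists, words):
    # Build word -> sub-list index once; the first occurrence wins.
    index = {}
    for i, sub in enumerate(lists):
        for word in sub:
            index.setdefault(word, i)
    # Keep only the words that were actually found, paired with their index.
    return [(w, index[w]) for w in words if w in index]

lists = [["apple", "car"], ["street", "beer"], ["plate"]]
print(locate(lists, ["apple", "plate"]))  # [('apple', 0), ('plate', 2)]
print(locate(lists, ["pineapple"]))       # []
```

<p>If the same queries are repeated against the same <code>lists</code>, the <code>index</code> dict can be built once and reused.</p>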
|
<python><list>
|
2022-12-17 13:50:04
| 4
| 373
|
Laz22434
|
74,834,564
| 17,696,880
|
How to add an if conditional inside a lambda function to determine what string concatenate to another string?
|
<pre class="lang-py prettyprint-override"><code>def weeks_to_days(input_text, nro_days_in_a_week = 7):
input_text = re.sub(
r"(\d+)[\s|]*(?:semanas|semana)",
lambda m: str(int(m[1]) * nro_days_in_a_week) + " dias ",
input_text)
return input_text
input_text = "en 1 semana quizas este lista, aunque viendole mejor, creo esta muy oxidada y seguro la podre poner en marcha en 3 semanas"
print(weeks_to_days(input_text))
</code></pre>
<p>The problem with this lambda function is that it always puts <code>"dias"</code> in the plural no matter how many.</p>
<p>How do I place this conditional <strong>inside the lambda function</strong> to decide between the plural <code>"dias"</code> and the singular <code>"dia"</code> depending on the amount?</p>
<pre class="lang-py prettyprint-override"><code>if (str(m[1]) != "1"): str(int(m[1]) * nro_days_in_a_week) + " dias "
elif (str(m[1]) == "1"): str(int(m[1]) * nro_days_in_a_week) + " dia "
</code></pre>
<p>Taking that string from the example, we should get this output:</p>
<pre><code>"en 7 dias quizas este lista, aunque viendole mejor, creo esta muy oxidada y seguro la podre poner en marcha en 21 dias"
</code></pre>
<p>In this case, since they are weeks, the result will always stay plural; but if the week length parameter were 1 day, the problem would show up.</p>
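<p>A conditional expression (ternary) can be used directly inside the lambda; a sketch keeping the original pattern and parameter name as-is:</p>

```python
import re

def weeks_to_days(input_text, nro_days_in_a_week=7):
    # The ternary picks "dia" only when the converted amount is exactly 1.
    return re.sub(
        r"(\d+)[\s|]*(?:semanas|semana)",
        lambda m: str(int(m[1]) * nro_days_in_a_week)
        + (" dia " if int(m[1]) * nro_days_in_a_week == 1 else " dias "),
        input_text,
    )

print(weeks_to_days("en 3 semanas"))                       # en 21 dias 
print(weeks_to_days("en 1 semana", nro_days_in_a_week=1))  # en 1 dia 
```

<p>Note the test is on the converted day count, not on <code>m[1]</code> itself: "1 semana" with a 7-day week is 7 days and therefore still plural.</p>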
|
<python><regex><string><if-statement><lambda>
|
2022-12-17 13:40:34
| 1
| 875
|
Matt095
|
74,834,362
| 16,009,435
|
Know when upload to s3 bucket is complete
|
<p>I am using boto3 to upload an image to my s3 bucket, and the process below works fine. What I want is to know when the upload is complete, e.g. to print 'upload complete' when done. How can I achieve this? Thanks in advance.</p>
<pre><code>import boto3
s3 = boto3.resource('s3')
file = open(imagePath, 'rb')
object = s3.Object('s3BucketName', fileName)
ret = object.put(Body=file, Metadata={'example': 'test'})
#when complete
print('upload complete')
</code></pre>
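<p><code>Object.put</code> is a blocking call: once it returns without raising, the upload has finished. To make that explicit, one can inspect the standard <code>ResponseMetadata</code> in the returned dict; sketched here with a stubbed response instead of a real bucket (the helper name <code>report_upload</code> is made up):</p>

```python
def report_upload(response: dict) -> str:
    # boto3 responses carry the HTTP status under ResponseMetadata.
    status = response.get("ResponseMetadata", {}).get("HTTPStatusCode")
    return "upload complete" if status == 200 else f"upload failed (status={status})"

# Stubbed response standing in for the return value of `object.put(...)`.
fake_response = {"ResponseMetadata": {"HTTPStatusCode": 200}}
print(report_upload(fake_response))  # upload complete
```

<p>In the original code this would be <code>print(report_upload(ret))</code> right after the <code>put</code> call. For large files, <code>s3.meta.client.upload_file</code> with a progress <code>Callback</code> is another option if per-byte progress is wanted rather than a single completion message.</p>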
|
<python><amazon-web-services><amazon-s3><boto3>
|
2022-12-17 13:12:59
| 1
| 1,387
|
seriously
|