| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, ⌀ = null) |
|---|---|---|---|---|---|---|---|---|
75,192,381
| 547,231
|
Change x- and y-numbering in imshow
|
<p>I would like to plot a function of two variables in python. Similar to <a href="https://glowingpython.blogspot.com/2012/01/how-to-plot-two-variable-functions-with.html" rel="nofollow noreferrer">this article</a>, we can obtain an output like</p>
<p><a href="https://i.sstatic.net/tLJNd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tLJNd.png" alt="enter image description here" /></a></p>
<p>using this code:</p>
<pre><code>from numpy import exp,arange
from pylab import meshgrid,cm,imshow,contour,clabel,colorbar,axis,title,show
from matplotlib import pyplot

# the function that I'm going to plot
def z_func(x,y):
    return (1-(x**2+y**3))*exp(-(x**2+y**2)/2)

x = arange(-3.0,3.0,0.1)
y = arange(-3.0,3.0,0.1)
z = [[0] * len(y) for i in range(len(x))]
for i in range(len(x)):
    for j in range(len(y)):
        z[j][i] = z_func(x[i], y[j])

im = imshow(z,cmap=cm.RdBu, extent = [-3, 3, -3, 3], interpolation = "none", origin='lower') # drawing the function
# adding the Contour lines with labels
cset = contour(z,arange(-1,1.5,0.2),linewidths=2,cmap=cm.Set2)
clabel(cset,inline=True,fmt='%1.1f',fontsize=10)
colorbar(im) # adding the colorbar on the right
# latex fashion title
title('$z=(1-x^2+y^3) e^{-(x^2+y^2)/2}$')
show()
</code></pre>
<p>As you can see, the x- and y-labels go from 0 to 59 (which is the count of elements in <code>x</code> and <code>y</code>). How can I correct these values such that they range from -3 to 3?</p>
<p>A minor sub-question: Why do I need to "transpose" in <code>z[j][i] = z_func(x[i], y[j])</code>? Does Python treat the first dimension as "column" and the second as "row"?</p>
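On the sub-question: the double loop (and the manual "transpose") can be avoided with `meshgrid`, which also makes the row-is-y, column-is-x convention explicit. A minimal sketch of that equivalence, pure NumPy with no plotting:

```python
import numpy as np

def z_func(x, y):
    # same function as in the question
    return (1 - (x**2 + y**3)) * np.exp(-(x**2 + y**2) / 2)

x = np.arange(-3.0, 3.0, 0.1)
y = np.arange(-3.0, 3.0, 0.1)

# meshgrid broadcasts x along columns and y along rows,
# so Z[j, i] == z_func(x[i], y[j]) -- exactly the "transpose" the loop does
X, Y = np.meshgrid(x, y)
Z = z_func(X, Y)

# equivalent explicit double loop from the question
Z_loop = np.empty((len(y), len(x)))
for i in range(len(x)):
    for j in range(len(y)):
        Z_loop[j, i] = z_func(x[i], y[j])

assert np.allclose(Z, Z_loop)
```

`imshow` draws row `j` at vertical position `j`, which is why the first index plays the role of y; passing `extent=[-3, 3, -3, 3]` (as in the code above) is what maps the row/column indices back to data coordinates.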
|
<python><plot><imshow>
|
2023-01-21 09:37:57
| 1
| 18,343
|
0xbadf00d
|
75,192,220
| 10,380,766
|
NumPy convolve method has slight variance between equivalent for loop method for Volume Weighted Average Price
|
<p>I have a method for calculating the volume weighted average price given a stock. On the one hand, I have a readable, traditional for loop. However, it is very very slow.</p>
<p>I have tried to implement a version using numpy array method <code>convolve</code>. It performs SIGNIFICANTLY better (see RESULTS below), but the outputted values differ slightly from the standard for loop.</p>
<p>I wondered if it was the difference between integer and float division, but both of my VWAP methods are using float division. As of now I'm not sure what is accounting for the difference.</p>
<h2>VWAP methods</h2>
<pre class="lang-py prettyprint-override"><code>def calc_vwap(price, volume, period_lookback):
    """
    Calculates the volume-weighted average price (VWAP) for a given period of time.
    The VWAP is calculated by taking the sum of the product of each price and volume
    over a given period, and dividing by the sum of the volume over that period.

    Parameters:
        price (numpy.ndarray): A list or array of prices.
        volume (numpy.ndarray): A list or array of volumes, corresponding to the prices.
        period_lookback (int): The number of days to look back when calculating VWAP.

    Returns:
        numpy.ndarray: An array of VWAP values, one for each day in the input period.
    """
    vwap = np.zeros(len(price))
    for i in range(period_lookback, len(price)):
        lb = i - period_lookback  # lower bound
        ub = i + 1  # upper bound
        volume_sum = volume[lb:ub].sum()
        if volume_sum > 0:
            vwap[i] = (price[lb:ub] * volume[lb:ub]).sum() / volume_sum
        else:
            vwap[i] = np.nan
    return vwap


def calc_vwap_speedy(price, volume, period_lookback):
    # Calculate product of price and volume
    price_volume = price * volume
    # Use convolve to get the rolling sum of product of price and volume and volume array
    price_volume_conv = np.convolve(price_volume, np.ones(period_lookback), mode='valid')
    volume_conv = np.convolve(volume, np.ones(period_lookback), mode='valid')
    # Create a mask to check if the volume sum is greater than 0
    mask = volume_conv > 0
    # Initialize the vwap array
    vwap = np.zeros(len(price))
    # Use the mask to check if volume sum is greater than zero, if it is, proceed with
    # the division and store the result in the vwap array, otherwise store NaN
    vwap[period_lookback-1:] = np.where(mask, price_volume_conv / volume_conv, np.nan)
    return vwap
</code></pre>
<h2>Results</h2>
<pre class="lang-bash prettyprint-override"><code>RUN TIME
standard -> 8.046217331999998
speedy -> 0.09436071299998616
OUTPUT
standard -> [0. 0. 0. ... 0.49073531 0.48826866 0.49220622]
speedy -> [0. 0. 0. ... 0.49525183 0.48842067 0.49092021]
</code></pre>
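One likely source of the discrepancy (an observation on the posted code, not a confirmed diagnosis of the full setup): the loop sums over `price[i-period_lookback : i+1]`, which is `period_lookback + 1` elements, while `np.convolve` is given a kernel of only `period_lookback` ones, so the two methods average windows of different lengths. A sketch of the fix is to make the kernel one element longer and shift the output accordingly:

```python
import numpy as np

def calc_vwap(price, volume, period_lookback):
    # reference loop implementation from the question
    vwap = np.zeros(len(price))
    for i in range(period_lookback, len(price)):
        lb, ub = i - period_lookback, i + 1  # window of period_lookback + 1 bars
        vsum = volume[lb:ub].sum()
        vwap[i] = (price[lb:ub] * volume[lb:ub]).sum() / vsum if vsum > 0 else np.nan
    return vwap

def calc_vwap_speedy_fixed(price, volume, period_lookback):
    # kernel length must match the loop's window: period_lookback + 1
    kernel = np.ones(period_lookback + 1)
    pv = np.convolve(price * volume, kernel, mode='valid')
    v = np.convolve(volume, kernel, mode='valid')
    vwap = np.zeros(len(price))
    vwap[period_lookback:] = np.where(v > 0, pv / v, np.nan)
    return vwap

rng = np.random.default_rng(0)
price = rng.uniform(10, 20, 500)
volume = rng.uniform(0, 100, 500)
assert np.allclose(calc_vwap(price, volume, 20),
                   calc_vwap_speedy_fixed(price, volume, 20))
```

With `mode='valid'` and a kernel of length `period_lookback + 1`, output element `k` is the sum over input `[k, k + period_lookback]`, which lines up with `vwap[period_lookback + k]` in the loop version.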
|
<python><numpy>
|
2023-01-21 09:07:50
| 2
| 1,020
|
Hofbr
|
75,192,169
| 2,398,574
|
How to intercept the instantiated class name from the function?
|
<p>I encountered an issue in one of my projects; I managed to reduce it to the simplest example possible. Consider the following:</p>
<pre class="lang-py prettyprint-override"><code>class A:
    def f(self):
        return 'I am f()'

class B(A):
    def g(self):
        return 'I am g()'

a = A()
b = B()

print(a.f.__qualname__)
print(b.f.__qualname__)
print(b.g.__qualname__)
</code></pre>
<p>The output I am getting</p>
<pre><code>A.f
A.f
B.g
</code></pre>
<p>the output I am expecting</p>
<pre><code>A.f
B.f
B.g
</code></pre>
<p>because what I care about is not only the function name, but also the class name, not really the class in which the function is defined but rather the class that gets instantiated. Anyone has an idea how to get it?</p>
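`__qualname__` is fixed at class-definition time, so it always reports the defining class. If what is wanted is the runtime class of the instance, one option (a sketch, not the only approach; the helper names are mine) is to combine `type(self)` or a bound method's `__self__` with the method name:

```python
class A:
    def f(self):
        return 'I am f()'

class B(A):
    def g(self):
        return 'I am g()'

def runtime_qualname(bound_method):
    # type(bound_method.__self__) is the instantiated class,
    # even when the method is inherited from a base class
    return f"{type(bound_method.__self__).__name__}.{bound_method.__name__}"

a, b = A(), B()
print(runtime_qualname(a.f))  # A.f
print(runtime_qualname(b.f))  # B.f -- what the question expects
print(runtime_qualname(b.g))  # B.g
```

Inside a method the same idea is `f"{type(self).__name__}.{method_name}"`, since `type(self)` resolves to the class that was actually instantiated.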
|
<python><oop><inheritance><instance>
|
2023-01-21 08:58:30
| 2
| 1,563
|
Marek
|
75,192,128
| 470,081
|
Django deployment and virtual environment
|
<p>I am aware that there are many questions regarding Django and virtual environments, but I cannot wrap my head around the use of virtual environments with respect to deploying my Django app (locally) via uwsgi/nginx.</p>
<p>My setup includes a virtual environment (with Django and uwsgi), my Django app, nginx and PostgreSQL. The app was created before the virtual environment, and I applied only a single change to <code>manage.py</code>:</p>
<pre><code>#!/Users/snafu/virtualdjango/bin/python3
</code></pre>
<p>When I start up the uwsgi located in the virtual environment (with the appropriate <code>.ini</code> file), everything works right away, but I wonder why. I did not need to fiddle around with the $PYTHONPATH, or append the site packages directory to the system path in <code>manage.py</code>, or activate the virtual environment at any point (apart from the initial installation of packages), although the boilerplate comment in <code>manage.py</code> explicitly mentions an inactive virtual environment as a possible reason for an import error.</p>
|
<python><django><virtualenv>
|
2023-01-21 08:51:50
| 1
| 461
|
janeden
|
75,192,055
| 3,876,796
|
Incompatible shape error when using tf.map_fn to apply a python function on tensors
|
<p>While building some code to train a TensorFlow deep model, I am using <code>tf.map_fn</code> and <code>tf.py_function</code> as a wrapper to apply a scipy function as a loss function, mapping pairs of rows from a batch of two probability vectors p and q of shape [batch_size, num_classes]. When using KL divergence over this batch of vectors (p, q), the training works fine and there is no shape incompatibility issue:</p>
<pre><code>tf.reduce_sum(p*(tf.log(p + 1e-16) - tf.log(q + 1e-16)), axis=1) #KL divergence
</code></pre>
<p>However, when I tried to use the Wasserstein distance or the energy distance functions from scipy, I get an error about incompatible shapes [] and [5000]. Here, 5000 is the number of classes (p and q have shape [batch_size, 5000]):</p>
<pre><code>import tensorflow as tf

def compute_kld(p_logit, q_logit, divergence_type):
    p = tf.nn.softmax(p_logit)
    q = tf.nn.softmax(q_logit)
    if divergence_type == "KL_divergence":
        return tf.reduce_sum(p*(tf.log(p + 1e-16) - tf.log(q + 1e-16)), axis=1)
    elif divergence_type == "Wasserstein_distance":
        def wasserstein_distance(x,y):
            import scipy
            from scipy import stats
            return stats.wasserstein_distance(x,y)
        @tf.function
        def func(p,q):
            return tf.map_fn(lambda x: tf.py_function(func=wasserstein_distance, inp=[x[0], x[1]], Tout=tf.float32), (p, q), dtype=(tf.float32)) #, parallel_iterations=10)
        return func(p, q)
    elif divergence_type == "energy_distance":  # the Cramer distance
        def energy_distance(x,y):
            import scipy
            from scipy import stats
            return stats.energy_distance(x,y)
        @tf.function
        def func(p,q):
            return tf.map_fn(lambda x: tf.py_function(func=energy_distance, inp=[x[0], x[1]], Tout=tf.float32), (p, q), dtype=(tf.float32)) #, parallel_iterations=10)
        return func(p, q)
</code></pre>
<p>This is the code to test the loss functions with a batch of 5 and 3 classes, which all work fine individually:</p>
<pre><code>import tensorflow as tf

p = tf.constant([[1, 2, 3], [1, 2, 3], [14, 50, 61], [71, 83, 79], [110,171,12]])
q = tf.constant([[1, 2, 3], [1.2, 2.3, 3.2], [4.2, 5.3, 6.4], [7.5, 8.6, 9.4], [11.2,10.1,13]])

p = tf.reshape(p, [-1,3])
q = tf.reshape(q, [-1,3])

p = tf.cast(p, tf.float32)
q = tf.cast(q, tf.float32)

with tf.Session() as sess:
    divergence_type = "KL_divergence"
    res = compute_kld(p, q, divergence_type = divergence_type)

    divergence_type = "Wasserstein_distance"
    res2 = compute_kld(p, q, divergence_type = divergence_type)

    divergence_type = "energy_distance"
    res3 = compute_kld(p, q, divergence_type = divergence_type)

    print("############################## p")
    print(sess.run(tf.print(p)))
    print("##")
    print(sess.run(tf.print(tf.shape(p))))

    print("############################## KL_divergence")
    print(sess.run(tf.print(res)))
    print("##")
    print(sess.run(tf.print(tf.shape(res))))

    print("############################## Wasserstein_distance")
    print(sess.run(tf.print(res2)))
    print("##")
    print(sess.run(tf.print(tf.shape(res2))))

    print("############################## energy_distance")
    print(sess.run(tf.print(res3)))
    print("##")
    print(sess.run(tf.print(tf.shape(res3))))
</code></pre>
<p>This is the output:</p>
<pre><code>############################## p
[[1 2 3]
[1 2 3]
[14 50 61]
[71 83 79]
[110 171 12]]
None
##
[5 3]
None
############################## KL_divergence
[0 0.000939823687 0.367009342 1.1647588 3.09911442]
None
##
[5]
None
############################## Wasserstein_distance
[0 0.0126344115 0.204870835 0.237718046 0.120362818]
None
##
[5]
None
############################## energy_distance
[0 0.0917765796 0.41313991 0.438246906 0.316672504]
None
##
[5]
None
</code></pre>
<p>However, when using the wasserstein distance or the energy distance inside my training code, I get incompatible shape error:</p>
<pre><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: Tried to set a tensor with incompatible shape at a list index. Item element shape: [] list shape: [5000]
[[{{node gradients/TensorArrayV2Read/TensorListGetItem_grad/TensorListSetItem}}]]
</code></pre>
<p>I am wondering if the dtype for tf.map_fn or tf.py_function I am using is wrong or if I have to specify/impose shape somewhere ?</p>
<p>Here is a link for the whole code where I tried to replace KL-divergence with Wasserstein distance in method "compute_kld": <a href="https://github.com/shenyuanyuan/IMSAT/blob/master/imsat_cluster.py" rel="nofollow noreferrer">https://github.com/shenyuanyuan/IMSAT/blob/master/imsat_cluster.py</a></p>
<p>Thank you in advance for your kind help!</p>
<p>== UPDATE ==</p>
<p>I inspected all the provided batches and the shapes of p and q seem correct</p>
<pre><code>shape(p)
(?, 5000)
shape(q)
(?, 5000)
</code></pre>
<p>However, the type of func's returned object is . Thus, I have tried to reshape it with:</p>
<pre><code>return tf.reshape(func(p, q), [p.shape[0]])
</code></pre>
<p>However, this doesn't seem to change anything as the error is still the same. After providing the first batch, the code crashes before starting to process the second batch.</p>
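For reference, what `scipy.stats.wasserstein_distance` computes for two equal-length 1-D samples reduces to the mean absolute difference of the sorted samples. The sketch below pins down that per-row semantics in plain NumPy (note that it treats each row as a set of sample values, which is how `stats.wasserstein_distance(x, y)` interprets its arguments), and shows that each `map_fn` call should yield one scalar per row, i.e. an output of shape `[batch_size]`, not `[batch_size, 5000]`:

```python
import numpy as np

def wasserstein_1d(x, y):
    # 1-D Wasserstein-1 distance between two equal-length samples:
    # mean absolute difference of the sorted samples
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y))

p = np.array([[1.0, 2.0, 3.0], [14.0, 50.0, 61.0]])
q = np.array([[1.0, 2.0, 3.0], [4.2, 5.3, 6.4]])

# row-by-row mapping: one scalar per row pair -> shape (batch_size,)
dists = np.array([wasserstein_1d(pi, qi) for pi, qi in zip(p, q)])
assert dists.shape == (2,)
assert dists[0] == 0.0  # identical rows have zero distance
```

If the graph instead expects the per-example loss to have shape `[5000]`, that mismatch between a scalar-per-row output and a per-class shape would produce exactly an "incompatible shape [] vs [5000]" style error.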
|
<python><arrays><tensorflow><machine-learning><scipy.stats>
|
2023-01-21 08:36:54
| 1
| 1,124
|
Othmane
|
75,191,942
| 8,207,701
|
'float' object has no attribute 'rolling' when using lambda function
|
<p>So I'm trying to calculate the z score using the lambda function.</p>
<p>Here's the code,</p>
<pre><code>zscore_fun_improved = lambda x: ((x - x.rolling(window=200, min_periods=20).mean()) / x.rolling(window=200, min_periods=20).std())
df.Close.apply(zscore_fun_improved)
</code></pre>
<p>But it gives me the following error,</p>
<pre><code>AttributeError: 'float' object has no attribute 'rolling'
</code></pre>
<p>What am I doing wrong?</p>
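The usual cause of this error (offered as a likely explanation, not a certain diagnosis of the asker's data): `Series.apply` calls the function once per scalar element, so `x` inside the lambda is a float, and floats have no `.rolling`. Calling the function on the whole Series (or using `.pipe`) keeps `x` a Series:

```python
import pandas as pd

zscore = lambda x: ((x - x.rolling(window=200, min_periods=20).mean())
                    / x.rolling(window=200, min_periods=20).std())

s = pd.Series(range(100), dtype=float)

# call on the Series directly (or equivalently s.pipe(zscore)) --
# s.apply(zscore) would pass each float element into the lambda instead
z = zscore(s)

assert z.equals(s.pipe(zscore))
assert z.isna().iloc[:19].all()  # fewer than min_periods=20 observations -> NaN
```

`apply` is for element-wise transformations; rolling-window statistics operate on the Series as a whole.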
|
<python><pandas><dataframe><numpy>
|
2023-01-21 08:10:38
| 1
| 1,216
|
Bucky
|
75,191,903
| 14,256,643
|
python how to combine two list by matching parent_id
|
<p><code>variation_product</code> has the attribute <code>parent_product_id</code>, and I want to combine <code>parent_product</code> and <code>variation_product</code> into a single list by matching the <code>parent_product_id</code> attribute. How do I do that?</p>
<p>Here is what my data looks like:</p>
<pre><code> {
"parent_product": [
{
"parent_product_id": "sku01"
"product_title": "product1",
],
"variation_product": [
{
"variation_product_id": "001",
"parent_product_id": "sku01"
"user_id": "1"
}
],
"parent_product": [
{
"parent_product_id": "sku02"
"product_title": "product2",
],
"variation_product": [
"variation_product_id": "002",
"parent_product_id": "sku02"
"user_id": "2"
]
}
</code></pre>
<p>my expected result will be:</p>
<pre><code>{
    "parent_product": [
        {
            "parent_product_id": "sku01",
            "product_title": "product1",
            "variation_product_id": "001",
            "user_id": "1"
        },
        {
            "parent_product_id": "sku02",
            "product_title": "product2",
            "variation_product_id": "002",
            "user_id": "2"
        }
    ]
}
</code></pre>
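Assuming the data is lists of dicts as in the samples above, one common sketch is to index the variation products by `parent_product_id` and merge each matching pair. The helper name `merge_products` is mine, not from the question:

```python
def merge_products(parent_products, variation_products):
    # index variations by their parent id for O(1) lookups
    by_parent = {v["parent_product_id"]: v for v in variation_products}
    merged = []
    for parent in parent_products:
        combined = dict(parent)  # copy so the input is not mutated
        combined.update(by_parent.get(parent["parent_product_id"], {}))
        merged.append(combined)
    return merged

parents = [
    {"parent_product_id": "sku01", "product_title": "product1"},
    {"parent_product_id": "sku02", "product_title": "product2"},
]
variations = [
    {"variation_product_id": "001", "parent_product_id": "sku01", "user_id": "1"},
    {"variation_product_id": "002", "parent_product_id": "sku02", "user_id": "2"},
]

result = merge_products(parents, variations)
assert result[0]["variation_product_id"] == "001"
assert result[1]["user_id"] == "2"
```

If one parent can have several variations, the lookup would need to map each id to a list of variations instead of a single dict.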
|
<python><python-3.x><dictionary>
|
2023-01-21 08:01:36
| 2
| 1,647
|
boyenec
|
75,191,564
| 5,053,475
|
This pattern is interpreted as a regular expression, and has match groups - but with no capturing group
|
<p>I'm migrating a script to a new Python environment. I don't like the regex (I'd use \b instead), but I want to change the existing code as little as possible.</p>
<p>I get this error executing the script:</p>
<pre><code>UserWarning: This pattern is interpreted as a regular expression, and has match groups. To actually get the groups, use str.extract.
word_in_data = self.data['text'].str.contains(r"(?:^|[^a-zA-Z0-9])"+word+r"(?:$|[^a-zA-Z0-9])", na=False, regex=True).copy()
</code></pre>
<p>This is the row containing the regex:</p>
<pre><code>self.data['text'].str.contains(r"(?:^|[^a-zA-Z0-9])"+word+r"(?:$|[^a-zA-Z0-9])", na=False, regex=True).copy()
</code></pre>
<p>It's using non capturing matching groups, (?:)
why do I get this warning?</p>
<p>Thanks!</p>
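Since the question mentions preferring `\b` anyway, here is a hedged sketch of that variant: `\b` is a zero-width word boundary, so the pattern contains no groups at all and sidesteps the warning entirely. Using `re.escape` to guard against metacharacters inside `word` is my addition, not part of the original script:

```python
import re
import pandas as pd

data = pd.Series(["the cat sat", "concatenate", "cat!", None])

word = "cat"
pattern = r"\b" + re.escape(word) + r"\b"
matches = data.str.contains(pattern, na=False, regex=True)

assert matches.tolist() == [True, False, True, False]
```

One subtle difference from the original `[^a-zA-Z0-9]` classes: `\b` treats the underscore as a word character, so `foo_cat` would not match here but would match the original pattern.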
|
<python><pandas><regex>
|
2023-01-21 06:37:41
| 1
| 734
|
Daniele Rugginenti
|
75,191,442
| 10,035,190
|
How to run route from other route function flask?
|
<p>Are routes only run after clicking a submit button? I want to run a route from another route's function, not by clicking a submit button. I am doing it this way because the ajaxCall route only runs a function and does not render a template.</p>
<pre><code>from flask import Flask,render_template,send_file

app = Flask(__name__)

@app.route("/ajaxCall")
def home():
    # some code
    # i want to call @app.route("/name") here
    return render_template('home.html')

@app.route("/name")
def name():
    # some code
    return render_template('name.html')

if __name__=='__main__':
    app.run(debug=True,port=5002)
</code></pre>
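If the goal is for `/ajaxCall` to end up showing what `/name` renders, the usual pattern is either to redirect to the other route or to factor the shared logic into a plain function both routes call. A minimal sketch (plain strings stand in for the templates, since the question's `home.html`/`name.html` are not shown):

```python
from flask import Flask, redirect, url_for

app = Flask(__name__)

@app.route("/ajaxCall")
def home():
    # option 1: send the client on to the other route
    return redirect(url_for("name"))

@app.route("/name")
def name():
    # option 2 would be: both routes call a shared helper and
    # this view returns render_template('name.html')
    return "name page"

client = app.test_client()
resp = client.get("/ajaxCall", follow_redirects=True)
assert resp.data == b"name page"
```

Calling one view function directly from another also works in simple cases, but a redirect keeps the browser's URL consistent with the rendered page.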
|
<python><python-3.x><flask><flask-wtforms>
|
2023-01-21 06:01:37
| 1
| 930
|
zircon
|
75,191,438
| 6,758,739
|
Identifying the calling python script in the called python script and print
|
<p>How can I print the name of the script that called the current script in python? Are there any built-in functions to identify the source script?</p>
<p>Example: if <code>x.py</code> calls <code>y.py</code> and <code>y.py</code> calls <code>z.py</code>, I want to print <code>x.py</code> in <code>z.py</code> as <code>x.py</code> initiated the function.</p>
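One hedged sketch: `sys.argv[0]` holds the script the interpreter was started with, so it identifies `x.py` from anywhere in the call chain regardless of how many modules deep the call is (it reflects the process entry point, not the direct caller):

```python
import os
import sys

def entry_script():
    # sys.argv[0] is set once, to the path Python was asked to run;
    # imported modules like y.py and z.py see the same value
    return os.path.basename(sys.argv[0])

print(entry_script())
```

If the *direct* caller (`y.py` rather than `x.py`) is what is needed instead, walking `inspect.stack()` and reading each frame's filename is the usual alternative.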
|
<python>
|
2023-01-21 05:59:07
| 0
| 992
|
LearningCpp
|
75,191,375
| 2,676,598
|
Changing FPS in OpenCV
|
<p>I have an application that needs to capture only a few frames per second from a webcam. Setting videowriter in the below code to 3 frames per second results in the webcam's normal framerate of approximately 30 fps being saved.
What are the options to save only the recorded 3 frames per second, and let the other 27 or so go? Thanks in advance.</p>
<pre><code>import cv2
import numpy as np
import time
import datetime
import pathlib
import imutils

cap = cv2.VideoCapture(0)
if (cap.isOpened() == False):
    print("Unable to read camera feed")

capture_duration = 15
frame_per_sec = 3
frame_width = 80
frame_height = 60
out = cv2.VideoWriter('C:\\Users\\student\\Desktop\\videoFile.avi',cv2.VideoWriter_fourcc('m','j','p','g'),frame_per_sec, (frame_width,frame_height))

start_time = time.time()
while( int(time.time() - start_time) < capture_duration ):
    ret, frame = cap.read()
    if ret==True:
        frame = imutils.resize(frame, width=frame_width)
        out.write(frame)
        cv2.imshow('frame',frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
</code></pre>
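The `frame_per_sec` passed to `VideoWriter` only sets the playback rate recorded in the file's metadata; it does not drop frames. One approach is to keep reading at the camera's native rate but only call `out.write()` when at least `1/frame_per_sec` seconds have elapsed since the last written frame. A sketch of just that timing logic, with the OpenCV capture/write calls left out so it can run anywhere:

```python
def select_frames(timestamps, target_fps):
    """Given capture timestamps (seconds), return indices of frames to keep
    so that kept frames are at least 1/target_fps apart."""
    min_gap = 1.0 / target_fps
    kept = []
    last_kept = None
    for i, t in enumerate(timestamps):
        if last_kept is None or t - last_kept >= min_gap:
            kept.append(i)
            last_kept = t
    return kept

# a 1-second burst at ~30 fps reduces to 3 kept frames at a 3 fps target
timestamps = [i / 30 for i in range(30)]
kept = select_frames(timestamps, 3)
assert len(kept) == 3
```

In the question's loop this corresponds to remembering the time of the last `out.write(frame)` and skipping the write (while still calling `cap.read()` so the buffer drains) until the gap has elapsed.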
|
<python><opencv><video-processing><mjpeg>
|
2023-01-21 05:43:59
| 2
| 2,174
|
portsample
|
75,191,261
| 17,347,824
|
Multiple sets of criteria to subset python dataframe with Pandas
|
<p>I have a small data set that I am trying to filter out to create an even smaller dataframe. The issue I'm having is I don't know how to get the sets of criteria nested inside one another to work correctly.</p>
<p>The code below is the closest I have been able to get. It should be looking in the larger data frame with columns for 'material', 'waterbody', and 'mosscat'; and only returning those that satisfy the combination of stone, marsh, and average OR brick, river, average.</p>
<pre><code>dfy = dfx[
(dfx['material']=='stone') &
(dfx['waterbody']=='marsh') &
(dfx['mosscat']=='average'),
(dfx['material']=='brick') &
(dfx['waterbody']=='river') &
(dfx['mosscat']=='average')]
</code></pre>
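The comma in that indexer makes it a tuple, which pandas cannot use for boolean filtering; the two criteria sets need to be combined with `|` (OR), each wrapped in parentheses. A sketch with hypothetical sample data (the column values are invented to match the question's description):

```python
import pandas as pd

dfx = pd.DataFrame({
    'material':  ['stone', 'brick', 'stone', 'brick'],
    'waterbody': ['marsh', 'river', 'river', 'river'],
    'mosscat':   ['average', 'average', 'average', 'heavy'],
})

dfy = dfx[
    ((dfx['material'] == 'stone') &
     (dfx['waterbody'] == 'marsh') &
     (dfx['mosscat'] == 'average'))
    |
    ((dfx['material'] == 'brick') &
     (dfx['waterbody'] == 'river') &
     (dfx['mosscat'] == 'average'))
]

assert len(dfy) == 2
```

For many criteria sets, `dfx.query(...)` or an inner merge against a small DataFrame of allowed combinations scales better than chaining boolean masks.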
|
<python><pandas>
|
2023-01-21 05:11:15
| 1
| 409
|
data_life
|
75,191,187
| 1,389,057
|
TypeError: unhashable type: 'Path' (Importing any package)
|
<p><strong>EDIT</strong>: I've narrowed this down to django rest framework, but not exactly why. About to start deep package debugging :/</p>
<p>I've got a weird one here. I've setup a mini Django project, and in the urls.py file, if I try any "from package import X", it results in an exception: TypeError: unhashable type: 'Path'</p>
<p>With no imports in this file, it all works perfectly and the server starts etc.</p>
<pre><code>File "/app/app/urls.py", line 2, in <module>
2023-01-21T04:41:15.890469838Z from rest_framework.response import Response
2023-01-21T04:41:15.890486088Z File "/usr/local/lib/python3.8/site-packages/rest_framework/response.py", line 11, in <module>
2023-01-21T04:41:15.890490338Z from rest_framework.serializers import Serializer
2023-01-21T04:41:15.890492504Z File "/usr/local/lib/python3.8/site-packages/rest_framework/serializers.py", line 27, in <module>
2023-01-21T04:41:15.890516629Z from rest_framework.compat import postgres_fields
2023-01-21T04:41:15.890534421Z File "/usr/local/lib/python3.8/site-packages/rest_framework/compat.py", line 33, in <module>
2023-01-21T04:41:15.890571546Z import coreapi
2023-01-21T04:41:15.890591004Z File "/usr/local/lib/python3.8/site-packages/coreapi/__init__.py", line 2, in <module>
2023-01-21T04:41:15.890598088Z from coreapi import auth, codecs, exceptions, transports, utils
2023-01-21T04:41:15.890600588Z File "/usr/local/lib/python3.8/site-packages/coreapi/auth.py", line 1, in <module>
2023-01-21T04:41:15.890602963Z from coreapi.utils import domain_matches
2023-01-21T04:41:15.890605254Z File "/usr/local/lib/python3.8/site-packages/coreapi/utils.py", line 5, in <module>
2023-01-21T04:41:15.890724213Z import pkg_resources
2023-01-21T04:41:15.890736088Z File "/usr/local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3249, in <module>
2023-01-21T04:41:15.891071296Z def _initialize_master_working_set():
2023-01-21T04:41:15.891081671Z File "/usr/local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3223, in _call_aside
2023-01-21T04:41:15.891363838Z f(*args, **kwargs)
2023-01-21T04:41:15.891371088Z File "/usr/local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3261, in _initialize_master_working_set
2023-01-21T04:41:15.891722921Z working_set = WorkingSet._build_master()
2023-01-21T04:41:15.891733421Z File "/usr/local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 608, in _build_master
2023-01-21T04:41:15.891780379Z ws = cls()
2023-01-21T04:41:15.891790796Z File "/usr/local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 601, in __init__
2023-01-21T04:41:15.891883463Z self.add_entry(entry)
2023-01-21T04:41:15.891889546Z File "/usr/local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 655, in add_entry
2023-01-21T04:41:15.892080713Z self.entry_keys.setdefault(entry, [])
2023-01-21T04:41:15.892097713Z TypeError: unhashable type: 'Path'
</code></pre>
<p>Here is the full repo ready to run <a href="https://gitlab.com/mainmast/import-issue/-/tree/main" rel="nofollow noreferrer">https://gitlab.com/mainmast/import-issue/-/tree/main</a></p>
<p>You will need to add an .env file in services/shared/ folder. Then you can run "make up"</p>
<pre><code>POSTGRES_PASSWORD=ai
POSTGRES_USER=ai
POSTGRES_DBNAME=ai
POSTGRES_HOST=postgres-ai
POSTGRES_TEST=test_ai
SECRET_KEY=%&jxaf6ijs36d82@r+^hq%=d7w8*2)r9afv*z@5y=(vt$h38m5
REGISTRATION_SALT=sx<)XYH.G0\"3GhqGLfY7~3Nt'63\"Yp
DJANGO_DEBUG=True
DEBUG_DB=True
ALLOWED_HOSTS=*
CORS_ORIGIN_WHITELIST=http://0.0.0.0
CORS_ALLOWED_ORIGINS=http://0.0.0.0
WEB_SERVER=app.myapp.co
WEB_APP_URL=app.myapp.co
EMAIL_URL=
AXES_BEHIND_REVERSE_PROXY=off
SITE_ID=3
ACCESS_TOKEN_EXPIRE_SECONDS=600
SUPPORT_EMAIL=
ACCOUNT_ACTIVATION_DAYS=2
SENTRY_ID=
SENTRY_ENDPOINT=
SENTRY_ENVIRONMENT=SomeEnv
MAX_IMAGE_UPLOAD_SIZE_BYTES=10000000
# Application environment settings
APP_ENV=local
APP_USE_AWS_S3_STATIC=False
APP_USE_AWS_S3_MEDIA=False
SSL_CERT_CHAIN=ssl\/local.crt
SSL_CERT_KEY=ssl\/local.key
SSL_CERT_DHPARAM=ssl\/dhparams.local.pem
</code></pre>
<p>I've never come across this before, although this is the first time I'm doing experimenting with sharing Django code across smaller independent container services.</p>
<p>Any help would be greatly appreciated. I see that something around coreapi dies, and no matter what I import, coreapi seems to be there as the last thing trying to import pkg_resources - but I can't figure out why?</p>
|
<python><django>
|
2023-01-21 04:46:28
| 0
| 3,133
|
Trent
|
75,191,021
| 13,700,055
|
Overlay plot on an image inside a for loop Python
|
<p>I am trying to overlay a matplotlib plot on an image using the following code snippet.</p>
<pre><code>plt.imshow(image, zorder=0)
plt.plot(some_array, zorder=1)
plt.savefig('image_array.png')
</code></pre>
<p>If I now include this code inside a <code>for</code> loop to overlay a plot on a different image in each loop, the resulting images overlap. For example, <code>image_array2.png</code> sits on top of <code>image_array1.png</code> and so on. How do I correctly save the figures without overlapping?</p>
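The figures accumulate because pyplot keeps drawing into the same implicit figure across loop iterations; starting a new figure each iteration (and closing it after saving) keeps the saved images independent. A minimal sketch using the non-interactive Agg backend and in-memory buffers (the random image and array data are stand-ins, not the asker's):

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
import numpy as np

buffers = []
for k in range(3):
    plt.figure()                      # fresh figure per iteration
    plt.imshow(np.random.rand(8, 8), zorder=0)
    plt.plot(np.arange(8), zorder=1)
    buf = io.BytesIO()
    plt.savefig(buf, format="png")
    plt.close()                       # free the figure so nothing carries over
    buffers.append(buf.getvalue())

assert all(len(b) > 0 for b in buffers)
```

In the question's loop, replacing the buffer with `plt.savefig('image_array%d.png' % k)` and keeping the `plt.figure()` / `plt.close()` pair around each iteration gives one clean file per image.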
|
<python><matplotlib><overlay>
|
2023-01-21 03:44:28
| 3
| 403
|
Nanda
|
75,190,914
| 11,281,877
|
How to put values on a single row from multiple columns in Pandas
|
<p>I have been scratching my head for days about this problem. Please, find below the structure of my input data and the output that I want.
I color-coded per ID, Plot, Survey, Trial and the 3 estimation methods.
In the output, I want to get all the scorings for each group, which are represented by color, on the same row. By doing that, we should get rid of the Estimation Method column in the output. I kept it for sake of clarity.</p>
<p>This is my code. Thank you in advance for your time.</p>
<pre><code>import pandas as pd
import numpy as np
import functools
data_dict = {'ID': {0: 'id1',
1: 'id1',
2: 'id1',
3: 'id1',
4: 'id1',
5: 'id1',
6: 'id1',
7: 'id1',
8: 'id1',
9: 'id1',
10: 'id1',
11: 'id1',
12: 'id1',
13: 'id1',
14: 'id1',
15: 'id1',
16: 'id1',
17: 'id1',
18: 'id1',
19: 'id1',
20: 'id1',
21: 'id1',
22: 'id1',
23: 'id1'},
'Plot': {0: 'p1',
1: 'p1',
2: 'p1',
3: 'p1',
4: 'p1',
5: 'p1',
6: 'p1',
7: 'p1',
8: 'p1',
9: 'p1',
10: 'p1',
11: 'p1',
12: 'p1',
13: 'p1',
14: 'p1',
15: 'p1',
16: 'p1',
17: 'p1',
18: 'p1',
19: 'p1',
20: 'p1',
21: 'p1',
22: 'p1',
23: 'p1'},
'Survey': {0: 'Sv1',
1: 'Sv1',
2: 'Sv1',
3: 'Sv1',
4: 'Sv1',
5: 'Sv1',
6: 'Sv2',
7: 'Sv2',
8: 'Sv2',
9: 'Sv2',
10: 'Sv2',
11: 'Sv2',
12: 'Sv1',
13: 'Sv1',
14: 'Sv1',
15: 'Sv1',
16: 'Sv1',
17: 'Sv1',
18: 'Sv2',
19: 'Sv2',
20: 'Sv2',
21: 'Sv2',
22: 'Sv2',
23: 'Sv2'},
'Trial': {0: 't1',
1: 't1',
2: 't1',
3: 't2',
4: 't2',
5: 't2',
6: 't1',
7: 't1',
8: 't1',
9: 't2',
10: 't2',
11: 't2',
12: 't1',
13: 't1',
14: 't1',
15: 't2',
16: 't2',
17: 't2',
18: 't1',
19: 't1',
20: 't1',
21: 't2',
22: 't2',
23: 't2'},
'Mission': {0: 'mission1',
1: 'mission1',
2: 'mission1',
3: 'mission1',
4: 'mission1',
5: 'mission1',
6: 'mission1',
7: 'mission1',
8: 'mission1',
9: 'mission1',
10: 'mission1',
11: 'mission2',
12: 'mission2',
13: 'mission2',
14: 'mission2',
15: 'mission2',
16: 'mission2',
17: 'mission2',
18: 'mission2',
19: 'mission2',
20: 'mission2',
21: 'mission2',
22: 'mission2',
23: 'mission2'},
'Estimation Method': {0: 'MCARI2',
1: 'NDVI',
2: 'NDRE',
3: 'MCARI2',
4: 'NDVI',
5: 'NDRE',
6: 'MCARI2',
7: 'NDVI',
8: 'NDRE',
9: 'MCARI2',
10: 'NDVI',
11: 'NDRE',
12: 'MCARI2',
13: 'NDVI',
14: 'NDRE',
15: 'MCARI2',
16: 'NDVI',
17: 'NDRE',
18: 'MCARI2',
19: 'NDVI',
20: 'NDRE',
21: 'MCARI2',
22: 'NDVI',
23: 'NDRE'},
'MCARI2_sd': {0: 1.5,
1: np.nan,
2: np.nan,
3: 10.0,
4: np.nan,
5: np.nan,
6: 1.5,
7: np.nan,
8: np.nan,
9: 10.0,
10: np.nan,
11: np.nan,
12: 101.0,
13: np.nan,
14: np.nan,
15: 23.5,
16: np.nan,
17: np.nan,
18: 111.0,
19: np.nan,
20: np.nan,
21: 72.0,
22: np.nan,
23: np.nan},
'MACRI2_50': {0: 12.4,
1: np.nan,
2: np.nan,
3: 11.0,
4: np.nan,
5: np.nan,
6: 12.4,
7: np.nan,
8: np.nan,
9: 11.0,
10: np.nan,
11: np.nan,
12: 102.0,
13: np.nan,
14: np.nan,
15: 2.1,
16: np.nan,
17: np.nan,
18: 112.0,
19: np.nan,
20: np.nan,
21: 74.0,
22: np.nan,
23: np.nan},
'MACRI2_AVG': {0: 15.0,
1: np.nan,
2: np.nan,
3: 12.0,
4: np.nan,
5: np.nan,
6: 15.0,
7: np.nan,
8: np.nan,
9: 12.0,
10: np.nan,
11: np.nan,
12: 103.0,
13: np.nan,
14: np.nan,
15: 24.0,
16: np.nan,
17: np.nan,
18: 113.0,
19: np.nan,
20: np.nan,
21: 77.0,
22: np.nan,
23: np.nan},
'NDVI_sd': {0: np.nan,
1: 2.9,
2: np.nan,
3: np.nan,
4: 20.0,
5: np.nan,
6: np.nan,
7: 2.9,
8: np.nan,
9: np.nan,
10: 20.0,
11: np.nan,
12: np.nan,
13: 201.0,
14: np.nan,
15: np.nan,
16: 11.0,
17: np.nan,
18: np.nan,
19: 200.0,
20: np.nan,
21: np.nan,
22: 32.0,
23: np.nan},
'NDVI_50': {0: np.nan,
1: 21.0,
2: np.nan,
3: np.nan,
4: 21.0,
5: np.nan,
6: np.nan,
7: 21.0,
8: np.nan,
9: np.nan,
10: 21.0,
11: np.nan,
12: np.nan,
13: 201.0,
14: np.nan,
15: np.nan,
16: 12.0,
17: np.nan,
18: np.nan,
19: 300.0,
20: np.nan,
21: np.nan,
22: 39.0,
23: np.nan},
'NDVI_AVG': {0: np.nan,
1: 27.0,
2: np.nan,
3: np.nan,
4: 22.0,
5: np.nan,
6: np.nan,
7: 27.0,
8: np.nan,
9: np.nan,
10: 22.0,
11: np.nan,
12: np.nan,
13: 203.0,
14: np.nan,
15: np.nan,
16: 13.0,
17: np.nan,
18: np.nan,
19: 400.0,
20: np.nan,
21: np.nan,
22: 40.0,
23: np.nan},
'NDRE_sd': {0: np.nan,
1: np.nan,
2: 3.1,
3: np.nan,
4: np.nan,
5: 31.0,
6: np.nan,
7: np.nan,
8: 3.1,
9: np.nan,
10: np.nan,
11: 31.0,
12: np.nan,
13: np.nan,
14: 301.0,
15: np.nan,
16: np.nan,
17: 15.0,
18: np.nan,
19: np.nan,
20: 57.0,
21: np.nan,
22: np.nan,
23: 21.0},
'NDRE_50': {0: np.nan,
1: np.nan,
2: 33.0,
3: np.nan,
4: np.nan,
5: 32.0,
6: np.nan,
7: np.nan,
8: 33.0,
9: np.nan,
10: np.nan,
11: 32.0,
12: np.nan,
13: np.nan,
14: 302.0,
15: np.nan,
16: np.nan,
17: 16.0,
18: np.nan,
19: np.nan,
20: 58.0,
21: np.nan,
22: np.nan,
23: 22.0},
'NDRE_AVG': {0: np.nan,
1: np.nan,
2: 330.0,
3: np.nan,
4: np.nan,
5: 33.0,
6: np.nan,
7: np.nan,
8: 330.0,
9: np.nan,
10: np.nan,
11: 33.0,
12: np.nan,
13: np.nan,
14: 303.0,
15: np.nan,
16: np.nan,
17: 17.0,
18: np.nan,
19: np.nan,
20: 59.0,
21: np.nan,
22: np.nan,
23: 32.0}}
df_test = pd.DataFrame(data_dict)
def generate_data_per_EM(df):
    data_survey = []
    for (survey, mission, trial, em), data in df.groupby(['Survey', 'Mission', 'Trial', 'Estimation Method']):
        df_em = data.set_index('ID').dropna(axis=1)
        df_em.to_csv(f'tmp_data_{survey}_{mission}_{trial}_{em}.csv')  # This generates 74 files, but not sure how to join/merge them
        data_survey.append(df_em)
    # Merge the df_em column-wise
    df_final = functools.reduce(lambda left, right: pd.merge(left, right, on=['ID', 'Survey', 'Mission', 'Trial']), data_survey)
    df_final.to_csv(f'final_{survey}_{mission}_{em}.csv')  # Output is not what I expected
generate_data_per_EM(df_test)
</code></pre>
<p><a href="https://i.sstatic.net/fK4qb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fK4qb.jpg" alt="my data and the output" /></a></p>
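Since each group differs only in which estimation-method columns are non-null, one common sketch is to drop the Estimation Method column and take the first non-null value per group, which collapses each group onto a single row. Shown on a small hypothetical frame with the same shape as the question's data, not the full 24-row dict:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ID':     ['id1'] * 4,
    'Survey': ['Sv1', 'Sv1', 'Sv2', 'Sv2'],
    'Trial':  ['t1', 't1', 't1', 't1'],
    'Estimation Method': ['MCARI2', 'NDVI', 'MCARI2', 'NDVI'],
    'MCARI2_sd': [1.5, np.nan, 10.0, np.nan],
    'NDVI_sd':   [np.nan, 2.9, np.nan, 20.0],
})

# groupby(...).first() skips NaNs, so each group's scattered
# per-method columns collapse onto one row
out = (df.drop(columns=['Estimation Method'])
         .groupby(['ID', 'Survey', 'Trial'], as_index=False)
         .first())

assert len(out) == 2
assert out.loc[out['Survey'] == 'Sv1', 'MCARI2_sd'].iloc[0] == 1.5
assert out.loc[out['Survey'] == 'Sv1', 'NDVI_sd'].iloc[0] == 2.9
```

With the question's data the grouping keys would be `['ID', 'Plot', 'Survey', 'Trial', 'Mission']`; no per-file CSV round trip or `functools.reduce` merge is needed.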
|
<python><pandas><join><merge><functools>
|
2023-01-21 03:08:02
| 1
| 519
|
Amilovsky
|
75,190,911
| 7,021,137
|
Angle at Intersection of Two Lines
|
<p>This is a rather simple question, but I'm trying to track down a bug and could use a confirmation of my methods.</p>
<p>Given this example code, how would you calculate the angle at the intersection of the two lines <code>a</code> and <code>b</code>? i.e., at θ? Please only use <code>numpy</code>.</p>
<pre><code>import numpy as np
n = 1000
x = np.arange(n)
a = np.ones(n) * 3
b = -0.11 * x - 112
</code></pre>
<p>I'll post my method in an edit in a day or so, but I'm getting ~29 degrees.</p>
<p><a href="https://i.sstatic.net/dOQrn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dOQrn.png" alt="enter image description here" /></a></p>
<hr />
<p>EDIT: here's the solution that got me 29 degrees. Why is this incorrect?</p>
<pre><code>def _angle(a, b):
    return np.degrees(np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))

print(_angle(a,b))
</code></pre>
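One hedged way to check the ~29° figure: `a` and `b` are arrays of y-values, not direction vectors, so the dot-product formula in `_angle` measures the angle between two 1000-dimensional vectors rather than between the lines. For lines given by slopes, the angle at the intersection can be taken from the direction vectors `(1, m)`:

```python
import numpy as np

def line_angle_deg(m1, m2):
    # angle between direction vectors (1, m1) and (1, m2)
    v1 = np.array([1.0, m1])
    v2 = np.array([1.0, m2])
    cos_theta = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(cos_theta))

# a is horizontal (slope 0), b has slope -0.11
theta = line_angle_deg(0.0, -0.11)
assert abs(theta - np.degrees(np.arctan(0.11))) < 1e-9  # about 6.28 degrees
```

Note also that if the plot's x and y axes use very different scales, the angle measured on screen will differ from this geometric angle in data coordinates.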
|
<python><numpy><angle>
|
2023-01-21 03:07:35
| 1
| 377
|
NoVa
|
75,190,835
| 15,843,133
|
How to write a function that that generates the login for a mailbox with imap_tools?
|
<p>I created a helper function that logs into a mailbox.</p>
<pre><code>import imap_tools

def mailbox_login():
    try:
        with imap_tools.MailBoxUnencrypted(ENV["IMAP4_FQDN"]).login(
            ENV["RECEIVING_EMAIL_USER"], ENV["RECEIVING_EMAIL_PASSWORD"]
        ) as mailbox:
            print("Successfully logged into the mailbox.")
            return mailbox
    except imap_tools.MailboxLoginError as error:
        print(f"CRITICAL: Failed to login to the mailbox: {error}")
</code></pre>
<p>Another function requires a mailbox connection.</p>
<pre><code>def email_count():
    """
    Get all emails from the mail server via IMAP
    """
    msgs = []
    mailbox = mailbox_login()
    for msg in mailbox.fetch():
        msgs.append(msg)
    return msgs
</code></pre>
<p>When I run <code>email_count()</code>, I get the following error:</p>
<pre><code>imaplib.IMAP4.error: command SEARCH illegal in state LOGOUT, only allowed in states SELECTED
</code></pre>
<p>As soon as I leave the scope of the <code>with</code> statement, it logs out of the mailbox. Is there any way to maintain the connection after leaving <code>mailbox_login()</code>?</p>
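The `with` block closes the connection the moment `mailbox_login` returns, so the caller receives an already-logged-out mailbox. One sketch is to make the helper a context manager itself, so the caller's own `with` block controls the connection's lifetime. Shown with a stand-in mailbox class, since `imap_tools` needs a live server:

```python
from contextlib import contextmanager

class FakeMailbox:
    # stand-in for imap_tools' logged-in mailbox object
    def __init__(self):
        self.open = True
    def fetch(self):
        if not self.open:
            raise RuntimeError("command SEARCH illegal in state LOGOUT")
        return iter(["msg1", "msg2"])
    def logout(self):
        self.open = False

@contextmanager
def mailbox_login():
    # real code would do: imap_tools.MailBoxUnencrypted(...).login(...)
    mailbox = FakeMailbox()
    try:
        yield mailbox
    finally:
        mailbox.logout()  # connection stays open for the caller's whole with block

def email_count():
    with mailbox_login() as mailbox:
        return list(mailbox.fetch())

assert email_count() == ["msg1", "msg2"]
```

The other common option is to return the `MailBox` object without a `with` statement at all and have the caller log out explicitly (or wrap the caller in the `with`).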
|
<python><imap-tools>
|
2023-01-21 02:42:56
| 2
| 353
|
Trouble Bucket
|
75,190,832
| 6,814,713
|
How to coalesce multiple pyspark arrays?
|
<p>I have an arbitrary number of arrays of equal length in a PySpark DataFrame. I need to coalesce these, element by element, into a single list. The problem with coalesce is that it doesn't work by element, but rather selects the entire first non-null array. Any suggestions for how to accomplish this would be appreciated. Please see the test case below for an example of expected input and output:</p>
<pre class="lang-py prettyprint-override"><code>def test_coalesce_elements():
"""
Test array coalescing on a per-element basis
"""
from pyspark.sql import SparkSession
import pyspark.sql.types as t
import pyspark.sql.functions as f
spark = SparkSession.builder.getOrCreate()
data = [
{
"a": [None, 1, None, None],
"b": [2, 3, None, None],
"c": [5, 6, 7, None],
}
]
schema = t.StructType([
t.StructField('a', t.ArrayType(t.IntegerType())),
t.StructField('b', t.ArrayType(t.IntegerType())),
t.StructField('c', t.ArrayType(t.IntegerType())),
])
df = spark.createDataFrame(data, schema)
# Inspect schema
df.printSchema()
# root
# | -- a: array(nullable=true)
# | | -- element: integer(containsNull=true)
# | -- b: array(nullable=true)
# | | -- element: integer(containsNull=true)
# | -- c: array(nullable=true)
# | | -- element: integer(containsNull=true)
# Inspect df values
df.show(truncate=False)
# +---------------------+------------------+---------------+
# |a |b |c |
# +---------------------+------------------+---------------+
# |[null, 1, null, null]|[2, 3, null, null]|[5, 6, 7, null]|
# +---------------------+------------------+---------------+
# This obviously does not work, but hopefully provides the general idea
# Remember: this will need to work with an arbitrary and dynamic set of columns
input_cols = ['a', 'b', 'c']
df = df.withColumn('d', f.coalesce(*[f.col(i) for i in input_cols]))
# This is the expected output I would like to see for the given inputs
assert df.collect()[0]['d'] == [2, 1, 7, None]
</code></pre>
<p>Thanks in advance for any ideas!</p>
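<p>For clarity, the element-wise semantics being asked for, sketched in plain Python (hypothetical helper name; in Spark itself this would have to be expressed with higher-order array functions such as <code>zip_with</code>/<code>transform</code>, or a UDF):</p>

```python
def coalesce_elements(*arrays):
    """Return a list whose i-th element is the first non-None value
    among the i-th elements of the input arrays."""
    return [next((v for v in values if v is not None), None)
            for values in zip(*arrays)]
```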
|
<python><apache-spark><pyspark><coalesce>
|
2023-01-21 02:41:47
| 3
| 2,124
|
Brendan
|
75,190,755
| 11,478,305
|
TypeError: plot() missing 1 required positional argument: 'ys' (Python)
|
<p>I'm doing some basic plotting with matplotlib 3.6.2 of an artificial neural network for an online TensorFlow 2.0 class that uses Colab, but I'm running it in VS Code with TensorFlow 2.3.0. It works in Jupyter Notebooks, but I'm getting this error when running it in the console:</p>
<pre><code>Traceback (most recent call last):
File "annr.py", line 43, in <module>
plt.plot(r.history['loss'], label='loss')
File "C:\Users\calem\AppData\Local\Programs\Python\Python36\lib\site-packages\matplotlib\pyplot.py", line 2842, in plot
**({"data": data} if data is not None else {}), **kwargs)
TypeError: plot() missing 1 required positional argument: 'ys'
</code></pre>
<p>Here is the entire script up to this point:</p>
<pre><code>import tensorflow as tf
print('tf version:' + tf.__version__)
# %% [markdown]
#
# %%
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# %%
# Make the dataset
N = 1000
X = np.random.random((N, 2)) * 6 - 3 # uniformly distributed (-3, +3)
Y = np.cos(2*X[:,0]) + np.cos(3*X[:,1])
# %%
# y = cos(2x_1) + cos(3x_2)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:,0], X[:,1], Y)
# plt.show()
# %%
# %%
# Build the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(128, input_shape=(2,), activation='relu'),
    tf.keras.layers.Dense(1)
])
# %%
# Compile and fit & plot the loss
opt = tf.keras.optimizers.Adam(0.01)
model.compile(optimizer=opt, loss='mse')
r = model.fit(X, Y, epochs=100)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
</code></pre>
|
<python><artificial-intelligence>
|
2023-01-21 02:10:47
| 2
| 359
|
Cale McCollough
|
75,190,696
| 1,019,129
|
Shorten the number of logical checks
|
<p>Is there a way to shorten the number of checks</p>
<pre><code>def is_render(self):
    if not hasattr(self, 'p'): return True
    if hasattr(self, 'p') and 'render' not in self.p: return True
    if hasattr(self, 'p') and 'render' in self.p and self.p['render'] != 0: return True
    return False
</code></pre>
<p>i.e.</p>
<pre><code>if not A : true
elif A and not B : true
elif A and B and not C : true
else false
</code></pre>
<p>... even better if I can check simultaneously <code> x != 0 or x is not False</code></p>
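<p>One possible collapse of the chain, assuming <code>self.p</code> is a dict when present: treat a missing attribute or missing key as "render", and only an explicit zero (or <code>False</code>, since <code>False == 0</code>) as "don't render". Sketched as a standalone function on a stand-in object:</p>

```python
class Thing:
    """Minimal stand-in object for demonstration."""
    pass

def is_render(obj):
    # Missing attribute -> {}, missing key -> 1 (truthy default);
    # only an explicit 0 (or False, since False == 0) disables rendering.
    return getattr(obj, 'p', {}).get('render', 1) != 0
```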
|
<python><logic>
|
2023-01-21 01:41:03
| 2
| 7,536
|
sten
|
75,190,663
| 1,204,143
|
ThreadingHTTPServer eventually stops accepting sockets on macOS
|
<p>I'm running into a weird issue on macOS + Python (macOS 12.6.2, Python 3.10.7 from python.org) when using a ThreadingHTTPServer to serve requests, although I suspect the issue might not be limited to ThreadingHTTPServer. After around an hour or two of serving requests, the server eventually stops accepting new connections entirely. Using <code>lsof -T fqs</code>, I can see the server socket has a full listening queue:</p>
<p><code>Python 12345 user 3u IPv4 0xf0c5647762f3efe1 0t0 TCP *:8881 (LISTEN QR=0 QS=0 SO=ACCEPTCONN,PQLEN=5,QLEN=5,QLIM=5,RCVBUF=1048576,REUSEADDR,SNDBUF=1048576 TF=MSS=512,UNKNOWN=0xa0)</code></p>
<p>Note that <code>QLEN</code> is the number of established inbound connections that have not been accepted yet. <code>QLEN</code> is initially 0, and seems to gradually increase over time (right now, for the process that's been running for ~30 minutes, it's 3).</p>
<p>When I check the process with <code>spindump</code>, I can see that the main thread is spending most of its time (>99.5%) in <code>select_poll_poll -> poll</code>, but I also see a couple of other calls, indicating that the process is occasionally breaking out of the <code>poll</code> call. This is expected: <code>socketserver.BaseServer.serve_forever</code> calls <code>selector.select</code> (which calls <code>poll</code>) with a short timeout in a loop. <code>serve_forever</code> accepts exactly one socket each time <code>selector.select</code> returns successfully.</p>
<p>From this, it looks like the program is somehow failing to accept listening sockets. I'm not doing anything fancy in my code, so I wonder if this could possibly be a bug with poll() on macOS (which <a href="https://daniel.haxx.se/blog/2016/10/11/poll-on-mac-10-12-is-broken/" rel="nofollow noreferrer">would not be the first time</a>) or in the way in which Python is using poll() for server sockets on macOS?</p>
<p>The program itself is a proxy server using a vanilla ThreadingHTTPServer with a BaseHTTPRequestHandler subclassed to handle CONNECT, GET, POST etc. I'm running connections from two clients through the proxy. The code looks like this:</p>
<pre><code>from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
class ProxyHandler(BaseHTTPRequestHandler):
    def do_CONNECT(self): ...
    def do_other(self): ...
    do_GET = do_other
    do_POST = do_other
    do_PUT = do_other
    do_DELETE = do_other
    do_PATCH = do_other
    do_OPTIONS = do_other

server = ThreadingHTTPServer(("0.0.0.0", 8881), ProxyHandler)
print("Serving on %s:%d" % server.server_address)
try:
    server.serve_forever()
except KeyboardInterrupt:
    print("Caught Ctrl+C, exiting...")
server.server_close()
</code></pre>
<p>I've omitted the implementations of <code>do_CONNECT</code> and <code>do_other</code> because I don't believe they're relevant: these functions run in their own thread (per ThreadingHTTPServer) and should not affect the main thread.</p>
|
<python><macos><sockets>
|
2023-01-21 01:27:29
| 0
| 180,592
|
nneonneo
|
75,190,654
| 9,934,348
|
pip doesn't install the latest version of a package
|
<p>I'm trying to install the latest version (<code>0.0.7</code>) of the package <a href="https://pypi.org/project/revolutionhtl/" rel="nofollow noreferrer">https://pypi.org/project/revolutionhtl/</a> using the command</p>
<p><code>pip install revolutionhtl</code></p>
<p>After running this command, the installed version is <code>0.0.4</code>. Below you can see the output of the command; note that the third line says <code>Using cached revolutionhtl-0.0.7-py3-none-any.whl (29 kB)</code>, so pip seems to detect version <code>0.0.7</code>; nevertheless, it is not installed.</p>
<pre class="lang-bash prettyprint-override"><code>Defaulting to user installation because normal site-packages is not writeable
Collecting revolutionhtl
Using cached revolutionhtl-0.0.7-py3-none-any.whl (29 kB)
Collecting networkx>=2.8
Using cached networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting numpy>=1.22.3
Using cached numpy-1.24.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
Requirement already satisfied: tqdm>=4.63.0 in /usr/lib/python3/dist-packages (from revolutionhtl) (4.64.0)
Collecting pandas>=1.4.2
Using cached pandas-1.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.1 MB)
Collecting revolutionhtl
Using cached revolutionhtl-0.0.6-py3-none-any.whl (29 kB)
Using cached revolutionhtl-0.0.5-py3-none-any.whl (29 kB)
Using cached revolutionhtl-0.0.4-py3-none-any.whl (29 kB)
Installing collected packages: revolutionhtl
Successfully installed revolutionhtl-0.0.4
</code></pre>
<p>Also, I tried with the command <code>pip install revolutionhtl==0.0.7</code>, obtaining as output:</p>
<pre class="lang-bash prettyprint-override"><code>Defaulting to user installation because normal site-packages is not writeable
Collecting revolutionhtl==0.0.7
Using cached revolutionhtl-0.0.7-py3-none-any.whl (29 kB)
ERROR: Could not find a version that satisfies the requirement itertools (from revolutionhtl) (from versions: none)
ERROR: No matching distribution found for itertools
</code></pre>
<p>What should I do to install version <code>0.0.7</code>?</p>
<p>My python version: 3.10.9.</p>
<pre class="lang-bash prettyprint-override"><code>$ pip --version
pip 22.3 from /usr/lib/python3/dist-packages/pip (python 3.10)
</code></pre>
<p>Since revolutionhtl requires Python >= 3.7, the problem shouldn't be my Python version.</p>
|
<python><pip><version><pypi>
|
2023-01-21 01:25:43
| 2
| 304
|
Antonio Ramírez
|
75,190,628
| 9,090,340
|
write data using pyodbc iterate over a dictionary
|
<p>I have the below python snippet where I am parsing data from a dictionary, to produce into a table using pyodbc library.</p>
<pre><code>import pyodbc
data = {
"demographic": [
{
"id": 1,
"country": {
"code": "AU",
"name": "Australia"
},
"state": {
"name": "New South Wales"
},
"location": {
"time_zone": {
"name": "(UTC+10:00) Canberra, Melbourne, Sydney",
"standard_name": "AUS Eastern Standard Time",
"symbol": "AUS Eastern Standard Time"
}
},
"address_info": {
"address_1": "",
"address_2": "",
"city": "",
"zip_code": ""
}
},
{
"id": 2,
"country": {
"code": "AU",
"name": "Australia"
},
"state": {
"name": "New South Wales"
},
"location": {
"time_zone": {
"name": "(UTC+10:00) Canberra, Melbourne, Sydney",
"standard_name": "AUS Eastern Standard Time",
"symbol": "AUS Eastern Standard Time"
}
},
"address_info": {
"address_1": "",
"address_2": "",
"city": "",
"zip_code": ""
}
},
{
"id": 3,
"country": {
"code": "US",
"name": "United States"
},
"state": {
"name": "Illinois"
},
"location": {
"time_zone": {
"name": "(UTC-06:00) Central Time (US & Canada)",
"standard_name": "Central Standard Time",
"symbol": "Central Standard Time"
}
},
"address_info": {
"address_1": "",
"address_2": "",
"city": "",
"zip_code": "60611"
}
}
]
}
result = [(d["id"], d["country"]["name"], d["address_info"]["zip_code"]) for d in data["demographic"] if d["country"]["code"] == "US"]
</code></pre>
<p>Now I need to write this result to a table which has id, country_name, zip</p>
<p>I'm trying the below code but not sure how to loop over all the records where the country code is US and map the values to the parameter marker. I have the cursor setup but not how to read from the python dictionary.</p>
<pre><code>for r in result:
sql = """INSERT INTO tname (id, country_name, zip) VALUES (?, ?, ?)"""
cursor.execute(sql, params)
conn.commit()
</code></pre>
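<p>Since <code>result</code> already holds <code>(id, country_name, zip)</code> tuples, each tuple can be passed directly as the parameters, or the whole list handed to <code>executemany</code>. A sketch using an in-memory sqlite3 database as a stand-in for the pyodbc connection (both use <code>?</code> qmark placeholders, and pyodbc cursors also support <code>executemany</code>):</p>

```python
import sqlite3

# stand-in data, as produced by the list comprehension above
result = [(3, "United States", "60611")]

conn = sqlite3.connect(":memory:")  # with pyodbc this would be pyodbc.connect(...)
cursor = conn.cursor()
cursor.execute("CREATE TABLE tname (id INTEGER, country_name TEXT, zip TEXT)")

sql = "INSERT INTO tname (id, country_name, zip) VALUES (?, ?, ?)"
cursor.executemany(sql, result)  # each tuple maps onto the ? markers in order
conn.commit()
```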
|
<python>
|
2023-01-21 01:18:36
| 1
| 937
|
paone
|
75,190,363
| 4,531,757
|
Pandas - Build sequnce in the group while resetting and create new dataframe with summary
|
<p>I got some help in the past and was able to advance well. Now I have an additional need to create a summary dataset for the study. Please help me if you can.</p>
<p>This is my current dataset:</p>
<pre><code>import pandas as pd

df2 = pd.DataFrame({'patient': ['one', 'one', 'one', 'three', 'three', 'two', 'two', 'two', 'two'],
                    'pattern': ['A', 'B', '000', 'C', 'A', '000', 'D', 'A', 'C'],
                    'date': ['11/20/2022', '11/22/2022', '11/23/2022', '11/8/2022', '11/9/2022', '11/14/2022', '11/20/2022', '11/22/2022', '11/23/2022']})

m = df2['pattern'] == '000'
df2['result'] = (df2[~m].groupby(['patient', m.cumsum()])
                        .cumcount().add(1)
                        .reindex(df2.index, fill_value=0))
df2
</code></pre>
<p>From the above current Dataset, I like to create a summary dataset like shown below. Can you help me how extract the below summary dataset from the above dataset, please?</p>
<pre><code>required_dataset = pd.DataFrame({'pattern': ['A,B', 'C,A', 'D,A,C'],  # pattern runs, in date order
                                 'patients': [1, 1, 1]})  # total number of unique patients
required_dataset
</code></pre>
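<p>Not a definitive answer, but one sketch of how the summary could be derived from the same <code>m.cumsum()</code> grouping already used above: join each run of consecutive non-<code>'000'</code> patterns per patient into a string, then count unique patients per run string:</p>

```python
import pandas as pd

df2 = pd.DataFrame({'patient': ['one', 'one', 'one', 'three', 'three', 'two', 'two', 'two', 'two'],
                    'pattern': ['A', 'B', '000', 'C', 'A', '000', 'D', 'A', 'C']})
m = df2['pattern'] == '000'

# One row per (patient, run of consecutive non-'000' patterns)
runs = (df2[~m].groupby(['patient', m.cumsum().rename('block')])['pattern']
               .agg(','.join)
               .reset_index())

# Unique patients per pattern string
required = (runs.groupby('pattern')['patient']
                .nunique()
                .reset_index(name='patients'))
```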
|
<python><pandas><numpy>
|
2023-01-21 00:04:45
| 2
| 601
|
Murali
|
75,190,328
| 8,816,642
|
Calculate AUC by different segments in python
|
<p>I have a dataset which contains <strong>id, datetime, model features, ground truth labels</strong> and the <strong>predicted probability</strong>.</p>
<pre><code>id datetime feature1 feature2 feature3 ... label probability
001 2023-01-01 a1 b3 c1 ... Rejected 0.98
002 2023-01-04 a2 b1 c1 ... Approved 0.28
003 2023-01-04 a1 b2 c1 ... Rejected 0.81
004 2023-01-08 a2 b3 c2 ... Rejected 0.97
005 2023-01-09 a2 b1 c1 ... Approved 0.06
006 2023-01-09 a2 b2 c2 ... Approved 0.06
007 2023-01-10 a1 b1 c2 ... Approved 0.13
008 2023-01-11 a2 b2 c1 ... Approved 0.18
009 2023-01-12 a2 b1 c1 ... Approved 0.16
010 2023-01-12 a1 b1 c2 ... Rejected 0.96
011 2023-01-09 a2 b3 c2 ... Approved 0.16
...
</code></pre>
<p>I want to know what is the AUC of each segment under different features. How can I manipulate the dataset to get results?</p>
<p>What I have done is to use the groupby method on date to get the monthly AUC for all features together.</p>
<pre><code>def group_auc(x, col_tar, col_scr):
    from sklearn import metrics
    return metrics.roc_auc_score(x[col_tar], x[col_scr])

def map_y(x):
    if x == 'Rejected':
        return 1
    elif x == 'Approved':
        return 0
    return x

## example
y_name = 'label'
df[y_name] = df[y_name].apply(map_y)

# Remove NA rows
df = df.dropna(subset=[y_name])

df['Month_Year'] = df['datetime'].dt.to_period('M')
group_data_monthly = df.groupby('Month_Year').apply(group_auc, y_name, 'probability').reset_index().rename(columns={0: 'AUC'})
</code></pre>
<p>My expected output will be like,</p>
<pre><code>datetime features value AUC
2023-01-01 feature1 a1 0.98
2023-01-01 feature1 a2 ...
2023-01-01 feature1 a3 ...
2023-01-01 feature2 b1 ...
2023-01-01 feature2 b2 ...
2023-01-01 feature2 b3 ...
2023-01-01 feature3 c1 ...
2023-01-01 feature3 c2 ...
2023-01-04 feature1 a1 ...
2023-01-04 feature1 a2 ...
2023-01-04 feature1 a3 ...
2023-01-04 feature2 b1 ...
...
</code></pre>
<p>I have also tried to use the <code>stack</code> method to transpose the dataframe, but the script failed due to the huge size of the dataframe.</p>
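<p>One way to get per-feature-value AUCs without stacking the full frame is to <code>melt</code> only the feature columns and group on the long frame. A sketch on toy data, with a small rank-based AUC (the Mann-Whitney formulation) standing in for <code>sklearn.metrics.roc_auc_score</code> so the example is self-contained; note each group must contain both classes for AUC to be defined:</p>

```python
import pandas as pd

def roc_auc(y_true, y_score):
    # Mann-Whitney U formulation of AUC (equivalent to roc_auc_score)
    d = pd.DataFrame({'y': y_true, 's': y_score})
    ranks = d['s'].rank()
    n_pos = d['y'].sum()
    n_neg = len(d) - n_pos
    return (ranks[d['y'] == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

data = pd.DataFrame({
    'datetime': pd.to_datetime(['2023-01-01'] * 4 + ['2023-01-04'] * 4),
    'feature1': ['a1', 'a1', 'a2', 'a2'] * 2,
    'label':    [1, 0, 1, 0] * 2,   # 1 = Rejected, 0 = Approved
    'probability': [0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.1],
})

# Long format: one row per (original row, feature) pair
long = data.melt(id_vars=['datetime', 'label', 'probability'],
                 value_vars=['feature1'], var_name='features', value_name='value')

auc = (long.groupby(['datetime', 'features', 'value'])
           .apply(lambda g: roc_auc(g['label'], g['probability']))
           .reset_index(name='AUC'))
```

<p>With real data, <code>value_vars</code> would be the full feature-column list, and the same groupby keys produce the <code>datetime / features / value / AUC</code> layout shown above.</p>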
|
<python><pandas><machine-learning><scikit-learn><auc>
|
2023-01-20 23:55:13
| 2
| 719
|
Jiayu Zhang
|
75,190,310
| 5,601,193
|
Need help implementing this state-variable filter
|
<p>Pursuant to <a href="https://dsp.stackexchange.com/questions/86305/how-do-i-use-the-lazzarini-timoney-dsvf-as-a-sine-generator">this question</a>, I rewrote some simple generator code into a Python filter class and tried to get the same result out of the filter itself. It didn't work. There is a bunch of code here and I'm not sure where the problem is exactly; it's probably somewhere in <code>DSVF.filter_sample()</code></p>
<p>This python code works, and is based on the Chamberlin DSVF:</p>
<pre class="lang-py prettyprint-override"><code>from numpy import sin, pi
import numpy as np
import pyaudio as pya
import matplotlib.pyplot as plt
import scipy.fft as fft
f0=np.single(440)
fs=np.single(96000)
frq = np.single(2*sin(pi*f0/fs))
a=frq
print(a)
c_sin = np.single(0)
c_cos = np.single(1)
pyAudio = pya.PyAudio()
audioOut = pyAudio.open(format=pya.get_format_from_width(width=2), channels=1, rate=int(fs), output=True)
sin_wave = []
sin_wave_np = []
for i in range(0, np.uint(fs)):
    # Convert to 16-bit scaled output
    # The generated sine wave overflows a bit, so scale it down
    sin_out = np.int16(0.999*(c_sin) * 0.5 * (2**16-1))

    # Iterate based on Chamberlin DSVF with q=0
    c_cos -= a*c_sin
    c_sin += a*c_cos

    # Numpy sine wave for reference
    sin_out_np = (sin(2*pi*i*f0/fs) * 0.5) * (2**16-1)

    # Build the arrays of samples
    sin_wave.append(np.int16(sin_out))
    sin_wave_np.append(np.int16(sin_out_np))

# Plot the sine waves one overlayed over the other
for s in [[sin_wave_np, 'red'], [sin_wave, 'blue']]:
    x = range(0, np.uint(f0))
    plt.plot(x, s[0][0:int(f0)], color=s[1])
    audioOut.write(np.array(s[0], dtype='int16').tobytes())
audioOut.stop_stream()
audioOut.close()
pyAudio.terminate()
plt.show()
# Concatenate both and run an FFT to ensure only f0 and -f0 show up
twosec = np.concatenate((sin_wave_np, sin_wave))
yf = fft.fft(twosec)
xf = [x / f0 for x in fft.fftfreq(np.uint(fs)*2,1/fs)]
plt.plot(xf, np.abs(yf))
plt.show()
</code></pre>
<p>This is my filter code attempting to replicate the same result; I'm having no luck figuring out what I did wrong. It implements both filter modes, but I'm only using the Chamberlin mode to test.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numpy import sin, tan, pi
from typing import Optional
import matplotlib.pyplot as plt
import scipy.fft as fft
class FilterModes:
    DSVF_CHAMBERLIN = 'Chamberlin'
    DSVF_LAZZARINI_TIMONEY = 'Lazzarini-Timoney'

class DSVF:
    def __init__(self,
                 sample_rate: int = 48000,
                 Q: float = np.sqrt(2),
                 frequency: float = 440,
                 bandpass_z1: float = 0,
                 lowpass_z1: float = 0,
                 mode: str = FilterModes.DSVF_CHAMBERLIN):
        self._mode = mode
        self._fs = sample_rate
        self.set_state(Q=Q,
                       q=None,
                       frequency=frequency,
                       bandpass_z1=bandpass_z1,
                       lowpass_z1=lowpass_z1,
                       highpass=0,
                       lowpass=0,
                       bandpass=0)

    def _get_f(self, frequency: float):
        f = None
        match self._mode:
            case FilterModes.DSVF_LAZZARINI_TIMONEY:
                f = np.tan(pi * frequency / self._fs)
            case _:
                f = 2 * np.sin(pi * frequency / self._fs)
        return f

    def filter_sample(self, sample: Optional[float] = 0):
        # Need the last cycle's z^-1
        lowpass_z1 = None
        match self._mode:
            case FilterModes.DSVF_LAZZARINI_TIMONEY:
                lowpass_z1 = self._lowpass + self._f * self._bandpass
            case _:
                lowpass_z1 = self._lowpass

        ##############################
        ## Generate Highpass output ##
        ##############################
        # FIXME: Check that proper lowpass feedback is being used.
        highpass = sample + self._bandpass_z1 * -self._q
        match self._mode:
            case FilterModes.DSVF_LAZZARINI_TIMONEY:
                highpass -= self._lowpass_z1
            case _:
                highpass -= self._lowpass

        ##############################
        ## Generate Bandpass Output ##
        ##############################
        f_highpass = highpass * self._f
        bandpass = self._bandpass_z1 + f_highpass
        bandpass_z1 = bandpass
        f_bandpass = None
        match self._mode:
            case FilterModes.DSVF_LAZZARINI_TIMONEY:
                bandpass_z1 = bandpass_z1 + f_highpass
                f_bandpass = bandpass * self._f
            case _:
                f_bandpass = self._bandpass_z1 * self._f

        #############################
        ## Generate Lowpass Output ##
        #############################
        lowpass = f_bandpass + self._lowpass_z1

        # Replace state
        self._highpass = highpass
        self._bandpass = bandpass
        self._bandpass_z1 = bandpass_z1
        self._lowpass = lowpass
        self._lowpass_z1 = lowpass_z1

    def get_state(self):
        return {'Lowpass': self._lowpass,
                'Bandpass': self._bandpass,
                'Highpass': self._highpass,
                'Bandpass z1': self._bandpass_z1,
                'Lowpass z1': self._lowpass_z1
                }

    def set_state(self,
                  Q: Optional[float] = None,
                  q: Optional[float] = None,
                  frequency: Optional[float] = None,
                  bandpass_z1: Optional[float] = None,
                  lowpass_z1: Optional[float] = None,
                  highpass: Optional[float] = None,
                  bandpass: Optional[float] = None,
                  lowpass: Optional[float] = None):
        if Q is not None: self._q = 1 / Q
        if q is not None: self._q = q
        if frequency is not None: self._f = self._get_f(frequency)
        if bandpass_z1 is not None: self._bandpass_z1 = bandpass_z1
        if lowpass_z1 is not None: self._lowpass_z1 = lowpass_z1
        if highpass is not None: self._highpass = highpass
        if bandpass is not None: self._bandpass = bandpass
        if lowpass is not None: self._lowpass = lowpass
dsvFilter = DSVF(bandpass_z1 = 1, lowpass_z1 = 0)
dsvFilter.set_state(q=0)
fs=int(48000)
sin_wave = np.array([],dtype='int16')
for i in range(0, fs):
    dsvFilter.filter_sample()
    s = dsvFilter.get_state()
    s16 = np.int16(0.999*(np.single(s['Lowpass z1'])) * 0.5 * (2**16-1))
    sin_wave = np.append(sin_wave, [s16])
plt.plot(range(0, np.uint(fs)), sin_wave)
plt.show()
</code></pre>
<p>It's giving me a warning as well, which I haven't been able to understand or resolve:</p>
<pre><code>dsvf.py:117: RuntimeWarning: invalid value encountered in cast
s16 = np.int16(0.999*(np.single(s['Lowpass z1'])) * 0.5 * (2**16-1))
</code></pre>
<p>What'd I do wrong and why is it wrong?</p>
|
<python><signal-processing>
|
2023-01-20 23:51:43
| 1
| 483
|
John Moser
|
75,190,302
| 807,797
|
py command breaks in ps1 but not in PowerShell terminal
|
<p>The <code>py -3 -m venv venv</code> command works when typed manually into a PowerShell terminal, but breaks when called from a ps1 script.</p>
<p>What specifically must be changed in the code below in order for the <code>setup.ps1</code> script to successfully run the <code>py -3 -m venv venv</code> command when the <code>setup.ps1</code> is called from outside using the <code>Invoke-Command -FilePath setup.ps1</code> or a similar replacement command?</p>
<p>The contents of the <code>setup.ps1</code> script are:</p>
<pre><code>py -3 -m venv venv
</code></pre>
<p>The command that calls <code>setup.ps1</code> including the error together look like:</p>
<pre><code>PS C:\path\to\dir\containing_setup_py> Invoke-Command -FilePath setup.ps1
Invoke-Command : Parameter set cannot be resolved using the specified named parameters.
At line:1 char:1
+ Invoke-Command -FilePath setup.ps1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Invoke-Command], ParameterBindingException
+ FullyQualifiedErrorId : AmbiguousParameterSet,Microsoft.PowerShell.Commands.InvokeCommandCommand
PS C:\path\to\dir\containing_setup_py>
</code></pre>
|
<python><powershell><python-venv><invoke-command>
|
2023-01-20 23:50:06
| 0
| 9,239
|
CodeMed
|
75,190,290
| 7,217,960
|
Is it possible to have two distinct installs of Python 3 of the same revision on a Windows system?
|
<p>I know it is possible to have two installs of Python of different versions on a Windows system, but I cannot manage to get two installs of the same revision (in my case 3.8.10) to coexist.</p>
<p>I'm designing an application that creates a Python process. That process needs to run from a specific version of Python with packages of specific versions installed on it. In order to fully control the Python install, decision was made to install it inside the application distribution directory, segregating it from any other Python installed on the system. No environment variable refers to it.</p>
<p>As part of the deployment/install process for the application, a PowerShell script downloads the Python installer and installs Python and the necessary packages into the application distribution directory. The Python installer is invoked as follows:</p>
<pre><code>.\\python-3.8.10-amd64.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0 TargetDir="$curDir\\Python" Include_exe=1 Include_lib=1 Include_pip=1 Include_tcltk=1 | Out-Null
</code></pre>
<p>It works well unless the system has already a Python install of the same version installed on it. In that case, running the installer will break the existing install, and not fully install the new one.</p>
<p>I tried to run the installer manually and I noticed that it is able, somehow, to detect that an install of the same revision exists on the system. In that case, it does not allow a new install. To do so, I would have to uninstall Python at its current location to be able to install it somewhere else.
<a href="https://i.sstatic.net/1mbd2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1mbd2.png" alt="enter image description here" /></a></p>
<p>Is there a way to have two distinct installs of Python 3 of the same revision on a Windows system? And if yes, how can it be done?</p>
|
<python><windows>
|
2023-01-20 23:48:06
| 1
| 412
|
Guett31
|
75,190,227
| 2,403,819
|
Why do the Pyproj and Haversine methods to calculate heading from two lat/lon points differ from what Google Earth displays?
|
<p>I am trying to implement a short python function to calculate the heading between two points. In other words if I am at point one, what heading relative to true north would I need to take to get to point two. I have used the following Python implementation which gives me essentially the same results.</p>
<pre><code>from pyproj import Geod
lat1 = 42.73864
lon1 = 111.8052
lat2 = 43.24844
lon2 = 110.6083
geod = Geod(ellps='WGS84')
# This implements the highly accurate Vincenty method
bearing = geod.inv(lon1, lat1, lon2, lat2)[0]
# >>> 60.31358
</code></pre>
<p>I have also used the following code that uses a Haversine method</p>
<pre><code>from math import degrees, radians, sin, cos, atan2
def bearing(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, [lat1, lon1, lat2, lon2])
    dLon = lon2 - lon1
    y = sin(dLon) * cos(lat2)
    x = cos(lat1)*sin(lat2) - sin(lat1)*cos(lat2)*cos(dLon)
    brng = degrees(atan2(y, x))
    if brng < 0: brng += 360.0
    return brng
</code></pre>
<p>With the same inputs from the previous implementation I get a result of 60.313 degrees, which matches the first implementation. However, when I use the Ruler function in google earth I get a result of 15.71 degrees. Furthermore when I activate the grid on google earth that shows the lines of longitude as a reference, 15.71 degrees makes far more sense. Why does the Google Earth implementation differ from the Python implementations?</p>
|
<python><geopy><pyproj>
|
2023-01-20 23:29:20
| 1
| 1,829
|
Jon
|
75,190,098
| 19,628,700
|
How to calculate mulitple rolling windows on one column in pandas quickly
|
<p>I am currently trying to calculate the rolling average of one column in my pandas dataframe over many rolling periods. My dataframe has one column of interest for which I wish to calculate a rolling average over every window from 2 to 40 periods, with the results stored in the same dataframe under the same index. This has proven a bit too slow, as my dataframe has ~6,000,000 rows over which these windows are calculated.</p>
<p>I've provided some sample data at the bottom that can be copy/pasted into the pd.DataFrame method; that is what my variable "df" stores.</p>
<h1>Current solution</h1>
<pre><code>df = pd.DataFrame(*dictionary at thebottom*)
for i in range(2, 41):
    df[f'roll_{i}'] = df['col1'].rolling(i).mean()
</code></pre>
<h2>Other methods:</h2>
<p>I've tried giving .mean the engine='pyarrow' parameter, but that doesn't seem to do much. Can someone help point me to speeding this calculation up?</p>
<h1>The Data</h1>
<pre><code>{Timestamp('2022-10-18 16:10:00'): 18.1065,
Timestamp('2022-10-18 16:11:00'): 18.120449999999998,
Timestamp('2022-10-18 16:12:00'): 18.1293,
Timestamp('2022-10-18 16:13:00'): 18.13035,
Timestamp('2022-10-18 16:14:00'): 18.1245,
Timestamp('2022-10-18 16:15:00'): 18.1049,
Timestamp('2022-10-18 16:16:00'): 18.1014,
Timestamp('2022-10-18 16:17:00'): 18.103499999999997,
Timestamp('2022-10-18 16:18:00'): 18.09375,
Timestamp('2022-10-18 16:19:00'): 18.0906,
Timestamp('2022-10-18 16:20:00'): 18.092699999999997,
Timestamp('2022-10-18 16:21:00'): 18.0855,
Timestamp('2022-10-18 16:22:00'): 18.055349999999997,
Timestamp('2022-10-18 16:23:00'): 18.05745,
Timestamp('2022-10-18 16:24:00'): 18.06645,
Timestamp('2022-10-18 16:25:00'): 18.06945,
Timestamp('2022-10-18 16:26:00'): 18.06465,
Timestamp('2022-10-18 16:27:00'): 18.062549999999998,
Timestamp('2022-10-18 16:28:00'): 18.06645,
Timestamp('2022-10-18 16:29:00'): 18.060449999999996,
Timestamp('2022-10-18 16:30:00'): 18.042675,
Timestamp('2022-10-18 16:31:00'): 18.046349999999997,
Timestamp('2022-10-18 16:32:00'): 18.0456,
Timestamp('2022-10-18 16:33:00'): 18.0444,
Timestamp('2022-10-18 16:34:00'): 18.039150000000003,
Timestamp('2022-10-18 16:35:00'): 18.040200000000002,
Timestamp('2022-10-18 16:36:00'): 18.039675000000003,
Timestamp('2022-10-18 16:37:00'): 18.0423,
Timestamp('2022-10-18 16:38:00'): 18.044249999999998,
Timestamp('2022-10-18 16:39:00'): 18.044249999999998,
Timestamp('2022-10-18 16:40:00'): 18.04035,
Timestamp('2022-10-18 16:41:00'): 18.0414,
Timestamp('2022-10-18 16:42:00'): 18.040499999999998,
Timestamp('2022-10-18 16:43:00'): 18.037349999999996,
Timestamp('2022-10-18 16:44:00'): 18.0213,
Timestamp('2022-10-18 16:45:00'): 18.01455,
Timestamp('2022-10-18 16:46:00'): 18.031200000000002,
Timestamp('2022-10-18 16:47:00'): 18.03225,
Timestamp('2022-10-18 16:48:00'): 18.02055,
Timestamp('2022-10-18 16:49:00'): 18.001875000000002,
Timestamp('2022-10-18 16:50:00'): 18.01735,
Timestamp('2022-10-18 16:51:00'): 18.02295,
Timestamp('2022-10-18 16:52:00'): 18.024,
Timestamp('2022-10-18 16:53:00'): 18.028200000000002,
Timestamp('2022-10-18 16:54:00'): 18.02295,
Timestamp('2022-10-18 16:55:00'): 18.02505,
Timestamp('2022-10-18 16:56:00'): 18.0219,
Timestamp('2022-10-18 16:57:00'): 18.0177,
Timestamp('2022-10-18 16:58:00'): 18.03225,
Timestamp('2022-10-18 16:59:00'): 18.0375}
</code></pre>
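<p>For reference, one way the loop above can be sped up considerably: since a rolling mean over window <code>w</code> is just a difference of cumulative sums, all 39 windows can be computed from a single <code>cumsum</code> pass in NumPy. A sketch (not the only approach; it assumes the column has no NaNs, which pandas' rolling would handle differently):</p>

```python
import numpy as np
import pandas as pd

def multi_rolling_mean(s: pd.Series, windows) -> pd.DataFrame:
    # rolling mean over window w at position i is (csum[i+1] - csum[i+1-w]) / w
    a = s.to_numpy(dtype=float)
    csum = np.concatenate(([0.0], np.cumsum(a)))
    out = {}
    for w in windows:
        col = np.full(len(a), np.nan)       # first w-1 positions stay NaN
        col[w - 1:] = (csum[w:] - csum[:-w]) / w
        out[f'roll_{w}'] = col
    return pd.DataFrame(out, index=s.index)
```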
|
<python><pandas><performance><rolling-computation>
|
2023-01-20 23:05:18
| 1
| 311
|
finman69
|
75,190,046
| 4,547,759
|
Duplicate rows when merging dataframes with repetitions
|
<p>Say I have the following dataframes:</p>
<pre><code>>>> df1 = pd.DataFrame({'fruit':['apple','orange','orange'],'taste':['sweet','sweet','sour']})
>>> df1
fruit taste
0 apple sweet
1 orange sweet
2 orange sour
>>> df2 = pd.DataFrame({'fruit':['apple','orange','orange'],'price':['high','low','low']})
>>> df2
fruit price
0 apple high
1 orange low
2 orange low
</code></pre>
<p>When I do <code>df3=df1.merge(df2,on='fruit')</code>, I got the following result:</p>
<pre><code> fruit taste price
0 apple sweet high
1 orange sweet low
2 orange sweet low
3 orange sour low
4 orange sour low
</code></pre>
<p>Here it looks like 2 duplicate rows were created; instead, I would expect something like</p>
<pre><code> fruit taste price
0 apple sweet high
1 orange sweet low
3 orange sour low
</code></pre>
<p>How should I understand this behavior and how to obtain the result I was looking for?</p>
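<p>This is standard relational join behavior: <code>merge</code> pairs up every matching row from both sides, so the two <code>orange</code> rows in <code>df1</code> each match both <code>orange</code> rows in <code>df2</code>, giving 2 × 2 = 4 combinations. If the duplicate rows on the right side carry no extra information, one sketch of a fix is to de-duplicate before merging:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'fruit': ['apple', 'orange', 'orange'],
                    'taste': ['sweet', 'sweet', 'sour']})
df2 = pd.DataFrame({'fruit': ['apple', 'orange', 'orange'],
                    'price': ['high', 'low', 'low']})

# Drop exact duplicates on the right so each fruit matches at most one price row
df3 = df1.merge(df2.drop_duplicates(), on='fruit')
```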
|
<python><pandas>
|
2023-01-20 22:56:17
| 2
| 363
|
Akahs
|
75,190,020
| 14,692,430
|
Python static method return subclass instead of superclass
|
<p>Intuitively this seems impossible, but here goes.</p>
<p>I am importing a class from a python module, where it has a static method that returns a new instance of the class and does some stuff to that instance, let's just call this method <code>make_instance</code>. I am trying to create a custom class with some overridden functionality that inherits from this class. Here comes the problem, there seems to be no way of overriding the <code>make_instance</code> in a way so that it returns my subclass instead of the super class.</p>
<p>Here's a minimal example:</p>
<pre class="lang-py prettyprint-override"><code># Note that I cannot edit this class, nor see the contents of this class, as it is from a python module
class SuperClass:
    @staticmethod
    def make_instance(number) -> "SuperClass":
        obj = SuperClass()
        obj.number = number * 2
        return obj

class SubClass(SuperClass):
    @staticmethod
    def make_instance(number) -> "SubClass":
        return super().make_instance(number)  # Returns a SuperClass object

    # Additional functionality of the subclass
<p>Is there potentially any way of achieving this? If not is there any other suggestions that could help with this kind of situation? Thanks.</p>
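<p>In case it helps, one sketch of a workaround when the superclass state lives in ordinary instance attributes: override the factory as a <code>classmethod</code>, build via the library factory, then copy the state onto a fresh instance of <code>cls</code>. The <code>SuperClass</code> below is a stand-in for the un-editable library class; this approach breaks down for classes using <code>__slots__</code> or C-extension state:</p>

```python
class SuperClass:  # stand-in for the un-editable library class
    @staticmethod
    def make_instance(number):
        obj = SuperClass()
        obj.number = number * 2
        return obj

class SubClass(SuperClass):
    @classmethod
    def make_instance(cls, number):
        base = SuperClass.make_instance(number)  # reuse the library factory
        obj = cls.__new__(cls)                   # fresh SubClass, no __init__ call
        obj.__dict__.update(base.__dict__)       # copy over the factory-set state
        return obj
```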
|
<python><class><inheritance><return><static-methods>
|
2023-01-20 22:52:30
| 1
| 352
|
DaNubCoding
|
75,190,008
| 2,396,640
|
Python BoundedSemaphore Acquire and then stuck
|
<p>While trying to investigate this issue: <a href="https://stackoverflow.com/questions/75161513/cant-pause-python-process-using-debug">Can't pause python process using debug</a>, I was trying to figure out what makes the subprocess get stuck, so I ended up logging every line to a file, and I encountered a weird situation where the subprocess gets stuck right after it acquires the semaphore from the BoundedSemaphore.</p>
<p>This is the code:</p>
<pre><code>my_queue = queue.Queue()
my_semaphore = BoundedSemaphore(10)

def execute_function(obj):
    log('execute function started')
    # do the thing...

def start_function(obj, my_semaphore):
    log('start_function started')
    my_semaphore.acquire()
    log('sema acquired')
    execute_function(obj)
    log('function executed')
    my_semaphore.release()
    log('sema released')

for object in objects:
    my_queue.put(object)

while not my_queue.empty():
    obj = my_queue.get()
    thread = multiprocessing.Process(target=start_function, args=[obj, my_semaphore])
    threads.append(thread)
    thread.start()

for t in threads:
    t.join()
</code></pre>
<p>My queue holds 1000-2000 records, and <code>execute_function</code> can run 2000 times with no issues; sometimes it just gets stuck after some runs.
While I wasn't able to see exactly where it stopped because of the issue I described in <a href="https://stackoverflow.com/questions/75161513/cant-pause-python-process-using-debug">Can't pause python process using debug</a>, I logged every line, and what I saw in the log of the process that stopped is that the last line it reached is 'sema acquired', meaning it successfully acquired the semaphore but for some reason didn't continue.</p>
|
<python><python-3.x><multithreading><python-multithreading><semaphore>
|
2023-01-20 22:49:40
| 0
| 369
|
user2396640
|
75,189,975
| 10,200,497
|
Replacing values with nan based on values of another column
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame(
{
'a': [np.nan, np.nan, np.nan, 3333, np.nan, np.nan, 10, np.nan, np.nan, np.nan, np.nan, 200, 100],
'b': [np.nan, 20, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 100, np.nan, np.nan, np.nan, np.nan]
}
)
</code></pre>
<p>And this is the output that I want:</p>
<pre><code> a b
0 NaN NaN
1 NaN 20.0
2 NaN NaN
3 3333.0 NaN
4 NaN NaN
5 NaN NaN
6 NaN NaN
7 NaN NaN
8 NaN 100.0
9 NaN NaN
10 NaN NaN
11 200.0 NaN
12 NaN NaN
</code></pre>
<p>Basically, if a value in column <code>b</code> is not NaN, I want to keep one value in column <code>a</code>, and then make the rest of the values in column <code>a</code> NaN until a value in column <code>b</code> is not NaN again.</p>
<p>For example, the first case is 20 in column <code>b</code>. After that, I want to keep 3333 because it is the first value below it that is not NaN, and I want to replace 10 with NaN because I've already got one non-NaN value after that entry in <code>b</code> (in this case 3333). The same applies for 100 in column <code>b</code>.</p>
<p>I've searched many posts on stackoverflow and also tried a couple of lines but it didn't work. I guess maybe it can be done by <code>fillna</code>.</p>
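<p>To make the rule concrete, here is the plain-Python logic I am trying to express, with NaN modeled as <code>None</code> (purely illustrative; the helper name is mine):</p>

```python
def keep_first_after_flag(a, b):
    """Keep only the first non-null value of `a` after each non-null in `b`."""
    out = []
    armed = False            # True once we've seen a value in b
    for av, bv in zip(a, b):
        if bv is not None:
            armed = True     # the next non-null in a may be kept
        if av is not None and armed:
            out.append(av)
            armed = False    # drop later values until b fires again
        else:
            out.append(None)
    return out

a = [None, None, None, 3333, None, None, 10, None, None, None, None, 200, 100]
b = [None, 20, None, None, None, None, None, None, 100, None, None, None, None]
print(keep_first_after_flag(a, b))
```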
|
<python><pandas><dataframe>
|
2023-01-20 22:45:09
| 2
| 2,679
|
AmirX
|
75,189,805
| 5,734,187
|
How do I implement a model that finds a correlation (not similarity) between query and target sentence?
|
<p>When building an NLP model (I'm going to use an attention-based one), how can we implement one for finding the <em>correlation</em>, not <em>similarity</em>, between the query and target sentences?
For instance, the two sentences "I am an environmentalist." and "We should raise the gas price, ban combustion-engine vehicles, and promote better public transit." are somehow <em>similar</em> and <em>positively correlated</em>. However, if the first sentence becomes "I am <strong>not</strong> an environmentalist.", the two sentences are still <em>similar</em> but now <em>negatively correlated</em>.</p>
<pre class="lang-py prettyprint-override"><code>import json
import azure.functions as func
from sentence_transformers import SentenceTransformer, util
query = ["I am an environmentalist.",
"I am not an environmentalist.",
"I am a tech-savvy person."]
target = ["We should raise the gas price, ban combustion-engine vehicles, and promote better public transit."]
embedder = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens')
query_embedding = embedder.encode(query, convert_to_tensor=True)
target_embedding = embedder.encode(target, convert_to_tensor=True)
searched = util.semantic_search(query_embedding, target_embedding)
print(searched)
# [
# [{'corpus_id': 0, 'score': 0.30188844}],
# [{'corpus_id': 0, 'score': 0.22667089}],
# [{'corpus_id': 0, 'score': 0.05061193}]
# ]
</code></pre>
<p>Are there any useful resources or information about this difference and/or finding the <em>correlation</em> by a model? I'm still new to the field of NLP (I have used the sentence transformer for some of my projects) so maybe I simply didn't do a good search on the web.</p>
|
<python><deep-learning><nlp><attention-model><sentence-similarity>
|
2023-01-20 22:14:33
| 0
| 1,132
|
kemakino
|
75,189,803
| 2,471,211
|
Inheriting from a class that has __new__ returning another class in python
|
<p>If have a class like this one:</p>
<pre><code>class ExA:
    def __new__(cls, obj):
        if type(obj) is dict:
            return ExADict(obj)
        if type(obj) is list:
            return ExAList(obj)
</code></pre>
</code></pre>
<p>Assuming I cannot change this class.
I want to inherit from it.</p>
<pre><code>class ExB(ExA):
    def new_method(self):
        ...
</code></pre>
</code></pre>
<p>I figure I need to write a <code>__new__</code> method that would handle this, but I can't figure it out. I think I don't fully understand how <code>super()</code> works.</p>
<pre><code>class ExB(ExA):
    def __new__(cls, arg):
        o = super().__new__(cls, list(arg))
</code></pre>
<p><code>o</code> is an <code>ExAList</code>. I'm close. I now need to return the equivalent of <code>class ExB(ExAList)</code>, but I don't get how to do so. (I know I could just go with <code>class ExB(ExAList)</code> and not bother with <code>__new__</code>, but at this point I just want to know!! :)</p>
<p>Edit: How would you re write ExA to make this easier?</p>
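<p>For the edit at the end, a sketch of how <code>ExA</code> could be rewritten so subclasses can steer the dispatch instead of overriding <code>__new__</code> itself. The implementation classes here are my own stand-ins for whatever <code>ExADict</code>/<code>ExAList</code> really are:</p>

```python
class ExADict(dict):
    pass

class ExAList(list):
    pass

class ExA:
    # Map input types to implementation classes at class level so that
    # subclasses can override the mapping rather than __new__ itself.
    _dispatch = {dict: ExADict, list: ExAList}

    def __new__(cls, obj):
        impl = cls._dispatch[type(obj)]
        return impl(obj)

class ExBList(ExAList):
    def new_method(self):
        return len(self)

class ExB(ExA):
    _dispatch = {dict: ExADict, list: ExBList}

o = ExB([1, 2, 3])
print(type(o).__name__)  # ExBList
```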
|
<python><inheritance>
|
2023-01-20 22:14:33
| 0
| 485
|
Flo
|
75,189,704
| 1,473,517
|
How to save a large dict with tuples as keys?
|
<p>I have large dict which has 3-tuples of integers as keys. I would like to save it to disk so I can read it in quickly. Sadly it seems I can't save it as a JSON file (which would let me use a fast JSON module such as <a href="https://pypi.org/project/orjson/" rel="nofollow noreferrer">orjson</a>). What are my options other than pickle?</p>
<p>A tiny example would be:</p>
<pre><code>my_dict = {
(1, 2, 3): [4, 5, 6],
(4, 5, 6): [7, 8, 9],
(7, 8, 9): [10, 11, 12]
}
</code></pre>
<p>I have about 500,000 keys and each value list is of length 500.</p>
<p>I will make this data once and it will not be modified after it is made. <code>my_dict</code> will only ever be used as a lookup table.</p>
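<p>A sketch of one option I am considering, assuming string keys are acceptable in the file: encode each tuple key as a comma-joined string on save and split it back on load (the helper expressions are mine):</p>

```python
import json

my_dict = {(1, 2, 3): [4, 5, 6], (4, 5, 6): [7, 8, 9]}

# Encode tuple keys as comma-joined strings so json (or orjson) accepts them.
encoded = {",".join(map(str, k)): v for k, v in my_dict.items()}
text = json.dumps(encoded)

# Decode back to tuple keys on load.
decoded = {tuple(map(int, k.split(","))): v
           for k, v in json.loads(text).items()}
print(decoded == my_dict)  # True
```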
|
<python><dictionary><tuples>
|
2023-01-20 21:59:17
| 1
| 21,513
|
Simd
|
75,189,699
| 1,489,422
|
Pytest mocking nested object
|
<p>I am trying to understand how to overcome an issue with mocking a python attribute coming from an imported module within a constructor.</p>
<p>I have a simplified Tableau python class which is defined like this:</p>
<pre class="lang-py prettyprint-override"><code>import tableauserverclient as TSC
import pandas as pd
from os import environ


class Tableau(object):
    def __init__(self):
        # BYPASS ALL OF THIS AUTHENTICATION?
        self.server = TSC.Server(environ.get("TABLEAU_URL"), use_server_version=False)
        self.server.version = environ.get("TABLEAU_API_VERSION")
        self.tableau_auth = TSC.PersonalAccessTokenAuth(
            environ.get("TABLEAU_TOKEN_NAME"),
            environ.get("TABLEAU_TOKEN_VALUE"),
            site_id="",
        )
        self.tableau_server = self.server.auth.sign_in(self.tableau_auth)

    def get_all_views(self) -> pd.DataFrame:
        data = []
        # HOW TO MOCK self.tableau_server.views?
        for view in TSC.Pager(self.tableau_server.views):
            data.append([view.name, view.id, view.workbook_id])
        df = pd.DataFrame(data, columns=["View", "Id", "Workbook_Id"])
        return df
</code></pre>
<p>How can I mock the output of <code>self.tableau_server.views</code> in <code>get_all_views()</code> from pytest to return a mocked list of views...</p>
<pre><code>[
    (id=1, name="view_a", workbook_id=1),
    (id=2, name="view_b", workbook_id=2),
    (id=3, name="view_c", workbook_id=3),
]
</code></pre>
<p><em>Note</em>: the return value needs to be iterable.</p>
<p>Here's what I tried so far... I have been running into "module not found" errors and errors within the constructor - so I think mocking is not working correctly.</p>
<pre class="lang-py prettyprint-override"><code>from pytest_mock import mocker
from connectors import Tableau
import tableauserverclient as TSC
import pandas as pd
from pandas.testing import assert_frame_equal


def get_mocked_tableau_views(mocker):
    a = mocker.Mock()
    a.name = "a"
    a.id = 1
    a.workbook_id = 1

    b = mocker.Mock()
    b.name = "b"
    b.id = 2
    b.workbook_id = 2

    c = mocker.Mock()
    c.name = "c"
    c.id = 3
    c.workbook_id = 3

    return mocker.Mock(return_value=iter([a, b, c]))


def test_initialize(mocker):
    mocker.patch.object(TSC, "__init__", return_value=None)
    mocker.patch.object(TSC.__init__, "server", return_value=None)
    mocker.patch.object(TSC.__init__, "server.version", return_value=None)


def test_get_all_views(mocker):
    mocked_tsc = mocker.MagicMock()
    mocked_tsc.Server.auth.sign_in = "test"
    with mocker.patch(
        "connectors.tableau.Tableau.tableau_server.views",
        return_value=get_mocked_tableau_views,
    ):
        tab = Tableau()
        df_actual = tab.get_all_views()
        df_expected = pd.DataFrame(
            {"Id": [1, 2, 3], "View": ["a", "b", "c"], "Workbook_Id": [1, 2, 3]}
        )
        assert_frame_equal(df_actual, df_expected)
</code></pre>
<p>Thanks in advance!</p>
|
<python><mocking><pytest>
|
2023-01-20 21:58:38
| 1
| 439
|
Michael K
|
75,189,681
| 17,945,841
|
After performing t-SNE dimensionality reduction, use k-means and check what features contribute the most in each individual cluster
|
<p>The following plot displays the t-SNE plot. I can show it here but unfortunately, I can't show you the labels. There are 4 different labels:</p>
<p><a href="https://i.sstatic.net/ztBgI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ztBgI.png" alt="enter image description here" /></a></p>
<p>The plot was created using a data frame called <code>scores</code>, which contains approximately 1100 patient samples and 25 features represented by its columns. The labels for the plot were sourced from a separate data frame called <code>metadata</code>. The following code was used to generate the plot, utilizing the information from both <code>scores</code> and <code>metadata</code> data frames.</p>
<pre><code>tsneres <- Rtsne(scores, dims = 2, perplexity = 6)
tsneres$Y = as.data.frame(tsneres$Y)
ggplot(tsneres$Y, aes(x = V1, y = V2, color = metadata$labels)) +
geom_point()
</code></pre>
<h3>My mission:</h3>
<p>I want to analyze the t-SNE plot and identify which features, or columns from the "scores" matrix, are most prevalent in each cluster. Specifically, I want to understand which features are most helpful in distinguishing between the different clusters present in the plot. Is it possible to use an alternative algorithm, such as PCA, that preserves the distances between data points in order to accomplish this task? Perhaps it's even a better choice than t-SNE?</p>
<p>This is an example of <code>scores</code>, this is not the real data, but it's similar:</p>
<pre><code>structure(list(Feature1 = c(0.1, 0.3, -0.2, -0.12, 0.17, -0.4,
-0.21, -0.19, -0.69, 0.69), Feature2 = c(0.22, 0.42, 0.1, -0.83,
0.75, -0.34, -0.25, -0.78, -0.68, 0.55), Feature3 = c(0.73, -0.2,
0.8, -0.48, 0.56, -0.21, -0.26, -0.78, -0.67, 0.4), Feature4 = c(0.34,
0.5, 0.9, -0.27, 0.64, -0.11, -0.41, -0.82, -0.4, -0.23), Feature5 = c(0.45,
0.33, 0.9, 0.73, 0.65, -0.1, -0.28, -0.78, -0.633, 0.32)), class = "data.frame", row.names = c("Patient_A",
"Patient_B", "Patient_C", "Patient_D", "Patient_E", "Patient_F",
"Patient_G", "Patient_H", "Patient_I", "Patient_J"))
</code></pre>
<h2>EDIT - PYTHON</h2>
<p>I got to the same point in Python. I tried PCA at first, but it produced very bad plots. So I first reduced dimensions using t-SNE, which produced much better results, and clustered the data using k-means. I still have the same question as before; just now I don't mind using R or Python.</p>
<p>This is the new plot:</p>
<p><a href="https://i.sstatic.net/Iq3ed.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Iq3ed.png" alt="enter image description here" /></a></p>
<p>And this is the code:</p>
<pre><code>from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, perplexity=30, learning_rate=200)
tsne_result = tsne.fit_transform(scores)
#create a dict to map the labels to colors
label_color_dict = {'label1':'blue', 'label2':'red', 'label3':'yellow', 'label4':'green'}
#create a list of colors based on the 'labels' column in metadata
colors = [label_color_dict[label] for label in metadata['labels']]
plt.scatter(tsne_result[:, 0], tsne_result[:, 1], c=colors, s=50)
plt.scatter(cluster_centers[:, 0], cluster_centers[:, 1], c='red', marker='o')
# Add labels to the cluster centers
for i, center in enumerate(cluster_centers,1):
plt.annotate(f"Cluster {i}", (center[0], center[1]),
textcoords="offset points",
xytext=(0,10), ha='center', fontsize=20)
</code></pre>
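<p>As a first pass at "which features drive each cluster", a rough sketch of one idea I am weighing (helper name and toy data are mine, not from any library): rank features by how far each cluster's mean deviates from the global mean, in units of the global standard deviation:</p>

```python
import numpy as np

def cluster_feature_ranking(X, labels, feature_names):
    """Rank features per cluster by signed z-score of the cluster mean."""
    X = np.asarray(X, dtype=float)
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
    ranking = {}
    for k in np.unique(labels):
        # How far this cluster's mean sits from the global mean, per feature
        z = (X[labels == k].mean(axis=0) - mu) / sd
        order = np.argsort(-np.abs(z))
        ranking[k] = [(feature_names[i], round(z[i], 2)) for i in order]
    return ranking

# Toy data: cluster 0 is high on f1, cluster 1 is low on f1
X = [[5, 0], [6, 0], [0, 1], [0, 2]]
labels = np.array([0, 0, 1, 1])
print(cluster_feature_ranking(X, labels, ["f1", "f2"]))
```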
|
<python><r><machine-learning><k-means><dimensionality-reduction>
|
2023-01-20 21:56:15
| 2
| 1,352
|
Programming Noob
|
75,189,529
| 3,507,584
|
Remove zero from ternary plot
|
<p>I have the following plot and I want to remove the 0 in the origins.</p>
<pre><code>import plotly.graph_objects as go
import plotly.express as px
fig = go.Figure(go.Scatterternary({
'mode': 'markers', 'a': [0.3],'b': [0.5], 'c': [0.6],
'marker': {'color': 'AliceBlue','size': 14,'line': {'width': 2} },
}))
fig.update_layout({
'ternary': {
'sum': 100,
'aaxis': {'nticks':1, 'ticks':""},
'baxis': {'nticks':1},
'caxis': {'nticks':1} }})
fig.update_traces( hovertemplate = "<b>CatA: %{a:.0f}<br>CatB: %{b:.0f}<br>CatC: %{c:.0f}<extra></extra>")
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/dAKpM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dAKpM.png" alt="Remove zero" /></a></p>
<p>I am surprised because documentation <a href="https://plotly.com/python/reference/layout/ternary/" rel="nofollow noreferrer">here</a> says the minimum of <code>nticks</code> is 1, not 0 (which does not work). How can I remove the 0 from the corners?</p>
<p><a href="https://i.sstatic.net/HBBFz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HBBFz.png" alt="Documentation" /></a></p>
|
<python><plotly><axis-labels><ternary>
|
2023-01-20 21:35:01
| 1
| 3,689
|
User981636
|
75,189,522
| 368,367
|
QtCreator's UI is not applied to the window, it's always empty (Qt for Python)
|
<p>I have a fresh QtCreator installation, and I set it up to run using a fresh install of Python3.8 on which I pip-installed both pyside2 and pyside6.</p>
<p>When I create a new Qt for Python - Window (UI file) application, whatever I do to the UI file the window always shows up empty and with the default size when I run the app.</p>
<p>I've tried with a QDialog and a QMainWindow, using PySide2 or PySide6, and I've checked that it was correctly loading the UI (and the right one); no dice. It just won't update, and appears not to have any reason not to.</p>
<p>Default code for completeness:</p>
<pre><code># This Python file uses the following encoding: utf-8
import os
from pathlib import Path
import sys

from PySide2.QtWidgets import QApplication, QDialog
from PySide2.QtCore import QFile
from PySide2.QtUiTools import QUiLoader


class Dialog(QDialog):
    def __init__(self):
        super(Dialog, self).__init__()
        self.load_ui()

    def load_ui(self):
        loader = QUiLoader()
        path = os.fspath(Path(__file__).resolve().parent / "form.ui")
        ui_file = QFile(path)
        ui_file.open(QFile.ReadOnly)
        loader.load(ui_file, self)
        ui_file.close()


if __name__ == "__main__":
    app = QApplication([])
    widget = Dialog()
    widget.show()
    sys.exit(app.exec_())
</code></pre>
<p>(In the UI I just drag-dropped a button right in the middle and saved the file)</p>
<p>Am I forgetting something fundamental? I'm only used to programming in C++ using QtCreator.</p>
|
<python><qt><pyside>
|
2023-01-20 21:33:47
| 1
| 1,002
|
Mister Mystère
|
75,189,505
| 15,810,170
|
PyMuPDF get optimal font size given a rectangle
|
<p>I am making an algorithm that performs certain edits to a PDF using the fitz module of PyMuPDF, more precisely inside widgets. Font size 0 behaves oddly, producing text that does not fit in the widget, so I thought of calculating the size myself.
But searching how to do so only led me to built-in/library functions in other programming languages.
Is there a way in PyMuPDF to get the optimal/maximal font size given a rectangle, the text and the font?</p>
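<p>In the meantime I sketched a generic binary search for the largest size that fits. <code>text_width</code> here is only a stand-in for a real measurement such as <code>fitz.Font.text_length(text, fontsize=size)</code>, the toy width model is purely illustrative, and the sketch assumes a single line of text:</p>

```python
def max_font_size(text, rect_w, rect_h, text_width, line_h=1.2, lo=1.0, hi=72.0):
    """Binary-search the largest font size whose rendered text fits the rect."""
    for _ in range(40):                 # 40 halvings: far below visual precision
        mid = (lo + hi) / 2
        fits = text_width(text, mid) <= rect_w and mid * line_h <= rect_h
        if fits:
            lo = mid                    # mid fits: try a larger size
        else:
            hi = mid                    # mid overflows: try a smaller size
    return lo

# Toy width model: width grows linearly with size (0.5 * size per char)
width = lambda t, s: len(t) * 0.5 * s
size = max_font_size("Hello", 100, 20, width)
print(round(size, 2))  # 16.67 (limited by the 20pt rect height: 20 / 1.2)
```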
|
<python><pymupdf>
|
2023-01-20 21:32:20
| 1
| 742
|
Clement Genninasca
|
75,189,435
| 11,380,243
|
Printing matrices between text in Python
|
<p>I have seen a few examples here that explain how to print a matrix to a .txt file. However, I haven't found a simple way to do what I would like: print several matrices to a .txt file in which additional text is also being printed. Note that this text is not meant to be a header, so DataFrame-based solutions are not really suitable.</p>
<p>Suppose I have the following matrices:</p>
<pre><code>M1 = np.zeros((6, 6))
M2 = np.zeros((6, 6))
M3 = np.zeros((6, 6))
</code></pre>
<p>I would like to print each matrix directly below its corresponding header line:</p>
<pre><code>with open("example.txt", "w") as f:
f.write("------- MATRIX 1 ------------------------------------------------------------\n")
f.write("------- MATRIX 2 ---------------------------------------------------------\n")
f.write("------- MATRIX 3 ---------------------------------------------------------\n")
</code></pre>
<p>The results I would be expecting would be a .txt file similar like this:</p>
<pre><code>------- MATRIX 1 ------------------------------------------------------------
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
------- MATRIX 2 ------------------------------------------------------------
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
------- MATRIX 3 ------------------------------------------------------------
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
</code></pre>
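<p>One approach that seems to work is passing the open file handle to <code>np.savetxt</code> right after each header write; sketched here against an in-memory buffer instead of <code>example.txt</code>, with my own header-padding convention:</p>

```python
import io
import numpy as np

M1 = np.zeros((6, 6))
M2 = np.zeros((6, 6))

buf = io.StringIO()  # stand-in for open("example.txt", "w")
for name, M in [("MATRIX 1", M1), ("MATRIX 2", M2)]:
    # pad the header line with dashes to a fixed width, like the desired output
    buf.write(("------- %s " % name).ljust(78, "-") + "\n")
    np.savetxt(buf, M, fmt="%d")  # writes the matrix rows below the header

print(buf.getvalue().splitlines()[0][:20])
```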
|
<python><numpy><matrix>
|
2023-01-20 21:22:39
| 3
| 438
|
Marc Schwambach
|
75,189,391
| 9,169,546
|
Why is Django check_password=True but authenticate=None
|
<p>I'm trying to write a unit test for a login endpoint in Django using as much of the built in functionality as possible.</p>
<p>There are existing tests that confirm that the account create endpoint is functioning properly.</p>
<p>In the login view, however, the <code>check_password()</code> function will return <code>True</code> for this test, but the <code>authenticate()</code> function returns <code>None</code>.</p>
<p>Is it safe to use the <code>check_password()</code> function instead?</p>
<p>Otherwise, how do I update this code to use the <code>authenticate()</code> function?</p>
<p><code>accounts.py</code></p>
<pre><code>class Account(AbstractUser):
username = models.CharField(max_length=150, unique=True, null=False, default='')
password = models.CharField(max_length=100)
...
REQUIRED_FIELDS = ['username', 'password']
class Meta:
app_label = 'accounts'
db_table = 'accounts_account'
objects = models.Manager()
</code></pre>
<p><code>test_login.py</code></p>
<pre><code>def test_login(self):
# Create account
request_create = self.factory.post('/accounts/account',
self.request_data,
content_type='application/json')
view = AccountView.as_view()
response_create = view(request_create)
# Login account
request_login = self.factory.post('/accounts/login',
self.request_data,
content_type='application/json')
view = LoginView.as_view()
response = view(request_login)
</code></pre>
<p><code>views.py</code></p>
<pre><code>class LoginView(View):
def post(self, request):
r = json.loads(request.body)
username = r.get('username')
password = r.get('password')
cp = check_password(password, Account.objects.get(username=username).password)
user = authenticate(username=username, password=password)
</code></pre>
<p>P.S. I've checked <a href="https://stackoverflow.com/questions/70998720/django-invalid-credentials-user-object-check-password-returns-true-but-authent">this</a> thread and <code>is_active</code> is set to true.</p>
|
<python><django><authentication>
|
2023-01-20 21:14:46
| 2
| 1,601
|
ang
|
75,189,350
| 12,386
|
How do I debug through a gdb helper script written in python?
|
<p>There may very well be an answer to this question, but it's really hard to google for.</p>
<p>You can add commands to gdb by writing them in Python. I am interested in debugging one of those Python scripts that's running in a gdb session.</p>
<p>My best guess is to run gdb on gdb, execute the user-added command, and somehow magically break on the Python program code?</p>
<p>Has anybody done anything like this before? I don't know the mechanism by which gdb calls Python code, so if it's not in the same process space as the gdb that's calling it, I don't see how I'd be able to set breakpoints in the Python program.</p>
<p>Or do I somehow get pdb to run in gdb? I guess I can put pdb.set_trace() in the Python program, but here's the extra catch: I'd like to be able to do all this from VS Code.</p>
<p>So I guess my question is: in what order do I need to run things to be able to debug, from VS Code, a Python script that was initiated by gdb?</p>
<p>Does anybody have any idea?</p>
<p>Thanks.</p>
|
<python><gdb><gdb-python>
|
2023-01-20 21:07:38
| 1
| 8,875
|
stu
|
75,189,298
| 9,373,756
|
The variables created using jupyter (.ipynb) do not work on .py files in vscode (in the same environment in WSL). Different terminals?
|
<p>Context: I'm using vscode with WSL and I also use conda for environment management.</p>
<p>I'm trying to create a variable in a jupyter notebook, let's say <code>x = [10, 20]</code>, and then use that same variable in a .py file (not on jupyter notebooks). I'm already using the same environment on both, but the terminal/kernel I believe is different for each. I believe this because when I run a cell on jupyter notebook, nothing happens on the terminal. However, when I run on .py files, the terminal runs the code I selected.</p>
<p>I would like to see the terminal running something for jupyter (.ipynb) and also for my .py files.</p>
<p>Any help would be really appreciated.</p>
|
<python><visual-studio><jupyter-notebook><environment-variables><conda>
|
2023-01-20 20:59:25
| 0
| 725
|
Artur
|
75,189,277
| 2,601,293
|
upgrade python offline - ubuntu apt
|
<p>I have an Ubuntu VM without internet access. It currently has Python 3.10 installed but I want to update to Python 3.11 (the latest at the time of this post).</p>
<p>On a machine with internet access, I used <code>apt</code> to download Python3.11.</p>
<pre><code>mkdir python_3.11
apt-get --download-only -o Dir::Cache="./python_3.11/" -o Dir::Cache::archives="./python_3.11/" install python3.11
$ ls python_3.11
libpython3.11-minimal_3.11.0~rc1-1~22.04_amd64.deb pkgcache.bin
libpython3.11-stdlib_3.11.0~rc1-1~22.04_amd64.deb python3.11-minimal_3.11.0~rc1-1~22.04_amd64.deb
lock python3.11_3.11.0~rc1-1~22.04_amd64.deb
partial srcpkgcache.bin
</code></pre>
<p>After transferring the files onto the VM, I tried running <code>sudo dpkg -i</code> on each of the files. This eventually "installed" them, but opening a Python shell still shows the old 3.10 version. <code>/usr/bin/python3</code> still points to <code>/usr/bin/python3.10</code>, and there is no <code>/usr/bin/python3.11</code>.</p>
<p><strong>Another thing I've tried</strong></p>
<pre><code># on the machine im trying to install
sudo apt-get update -oDir::Cache::archives="/path/to/downloaded/packages" --no-install-recommends --no-download
sudo apt-get -oDir::Cache::archives="/path/to/downloaded/packages" --no-install-recommends --no-download install python3.11-minimal
</code></pre>
<p>This ends up returning</p>
<blockquote>
<p>E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?</p>
</blockquote>
<p>I've since added the following to the download command. <code>python3.11-minimal libpython3.11-stdlib python3.11 libpython3.11-minimal python3.11-venv python3.11-doc binfmt-support python3-pip-whl python3-setuptools-whl</code>. During the install, I'm still getting the same error message on "Unable to fetch some archives".</p>
|
<python><ubuntu><offline><apt>
|
2023-01-20 20:57:36
| 1
| 3,876
|
J'e
|
75,189,220
| 425,895
|
How to modify the Timeseries forecasting for weather prediction example to increase the number of predictions?
|
<p>(And plot them all in the same figure).</p>
<p>I've been following the "Timeseries forecasting for weather prediction" code found here:<br />
<a href="https://keras.io/examples/timeseries/timeseries_weather_forecasting/" rel="nofollow noreferrer">https://keras.io/examples/timeseries/timeseries_weather_forecasting/</a></p>
<p>The article says:<br />
"The trained model above is now able to make predictions for 5 sets of values from validation set."</p>
<p>And it uses this code to get predictons and plot them:</p>
<pre><code>def show_plot(plot_data, delta, title):
    labels = ["History", "True Future", "Model Prediction"]
    marker = [".-", "rx", "go"]
    time_steps = list(range(-(plot_data[0].shape[0]), 0))
    if delta:
        future = delta
    else:
        future = 0

    plt.title(title)
    for i, val in enumerate(plot_data):
        if i:
            plt.plot(future, plot_data[i], marker[i], markersize=10, label=labels[i])
        else:
            plt.plot(time_steps, plot_data[i].flatten(), marker[i], label=labels[i])
    plt.legend()
    plt.xlim([time_steps[0], (future + 5) * 2])
    plt.xlabel("Time-Step")
    plt.show()
    return


for x, y in dataset_val.take(5):
    show_plot(
        [x[0][:, 1].numpy(), y[0].numpy(), model.predict(x)[0]],
        12,
        "Single Step Prediction",
    )
</code></pre>
</code></pre>
<p>On my computer, in order to downsample the series to 1 hour, instead of using "sampling_rate=6" I directly modified the frequency of the input data, and I'm using "sampling_rate=1".</p>
<p>Now, considering that the model was fitted properly... What do I need to modify if I want to get predictions for the next 500 intervals instead of just 5?</p>
<p><code>dataset_val.take(500)</code></p>
<p>Or something else?</p>
<p>The configuration at the beginning also says:</p>
<pre><code>split_fraction = 0.715
train_split = int(split_fraction * int(df.shape[0]))
step = 6
past = 720
future = 72
learning_rate = 0.001
batch_size = 256
epochs = 10
</code></pre>
<p>What values do I need to use now for past and future, if my data has a frequency of 1 hour and I want to predict 500 points forward?<br />
future = 500<br />
past = ? (it seems to be the number of timestamps taken backwards for training)</p>
<p>What about delta? It's fixed to 12, but it seems to be the value for future.</p>
|
<python><tensorflow><keras>
|
2023-01-20 20:50:39
| 1
| 7,790
|
skan
|
75,189,205
| 17,274,113
|
`TypeError: missing 1 required positional argument: 'self'` Whitebox tools
|
<p>I am attempting to use whitebox geospatial tools to analyze .tif files. Any whitebox tool I run, however, raises the error <code>TypeError: missing 1 required positional argument: 'self'</code>. I understand that this is a well-documented error within the Stack Overflow community; however, the way I understand the <code>self</code> argument, it is used in the creation of a class, which I am not doing as far as I can tell.</p>
<p>Additionally, upon adding the argument in an attempt to resolve the issue, as various other Stack Overflow answers have suggested, I receive a NameError stating that 'self' is not defined. Both cases are shown below.</p>
<p>Code:</p>
<pre><code>from whitebox_tools import WhiteboxTools as wbt
print(wbt.list_tools())
</code></pre>
<p>Result:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: list_tools() missing 1 required positional argument: 'self'
</code></pre>
<p>Code (self argument added):</p>
<pre><code>print(wbt.list_tools(self))
</code></pre>
<p>Result:</p>
<pre class="lang-none prettyprint-override"><code>NameError: name 'self' is not defined
</code></pre>
<p>Please excuse my lack of understanding of the <code>self</code> argument; it stems from a further lack of understanding of Python classes. Either way, every resolution to this problem I can find has been to add the <code>self</code> argument, which does not seem to work in this case.</p>
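<p>The pattern behind the error can be reproduced with a plain class (a whitebox-free sketch; my guess, which may be wrong, is that <code>WhiteboxTools</code> must be instantiated, e.g. <code>wbt = WhiteboxTools()</code>, rather than aliased at import time):</p>

```python
class Toolbox:
    def list_tools(self):          # instance method: needs a Toolbox object
        return ["tool_a", "tool_b"]

# Calling through the class fails: there is no instance to bind to `self`.
try:
    Toolbox.list_tools()
except TypeError as e:
    print(e)                       # ...missing 1 required positional argument: 'self'

# Instantiating first supplies `self` automatically.
wbt = Toolbox()                    # analogous to wbt = WhiteboxTools()
print(wbt.list_tools())
```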
|
<python><typeerror><self><positional-argument>
|
2023-01-20 20:48:45
| 1
| 429
|
Max Duso
|
75,189,181
| 1,353,951
|
Obtaining values from a foreign key Python model
|
<p>Let's say I have these models/classes:</p>
<pre><code>class User(models.Model):
    id = models.AutoField. . .
    group = models.ForeignKey(
        Group,
        . . .
    )
    . . .


class Group(models.Model):
    id = models.AutoField. . .
    name = models.CharField. . .
    . . .
</code></pre>
</code></pre>
<p>In a custom logging function called when there is a change being made, I do this:</p>
<pre><code>obj = # object/model being updated; in this case: User
old_values = {}
new_values = {}

for i in range(len(form.changed_data)):
    vname = obj._meta.get_field(form.changed_data[i]).verbose_name
    old_values.update(
        { vname: form[form.changed_data[i]].initial }
    )
    new_values.update(
        { vname: form.cleaned_data[form.changed_data[i]] }
    )
</code></pre>
</code></pre>
<p>That leads to this output:</p>
<pre><code>old_values = {'Group': 2}
new_values = {'Group': <Group: New Group Name>}
</code></pre>
<p>Looks like <code>form.initial</code> uses the <code>id</code> while <code>form.cleaned_data</code> uses some kind of unsightly object name format.</p>
<p>Neither are desired. I want the output to look like this:</p>
<pre><code>old_values = {'Group': 'Old Group Name'}
new_values = {'Group': 'New Group Name'}
</code></pre>
<p>How do I do this? <strong>I cannot explicitly import the model name and use it.</strong> <code>User</code> and <code>Group</code> are merely two of dozens of models that must be treated non-explicitly in this generic logging function.</p>
<p>I've tried <code>apps.get_model()</code>, <code>get_object_or_404()</code>, and other methods, but nothing has been working for me so far.</p>
|
<python><django><django-models>
|
2023-01-20 20:45:10
| 2
| 1,501
|
Ness
|
75,189,130
| 1,718,174
|
Python order of execution of logic check
|
<p>So my boss came up with this (by accident) after a quick search and replace on the code and opening a Pull Request, where tag is always a string:</p>
<pre><code>if "team_" in tag in tag:
</code></pre>
<p>To my surprise, that actually works! Not really sure why. I was expecting to parse it from left to right and get an error, like this</p>
<pre><code>>>> ("team_" in "something") in "something"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'in <string>' requires string as left operand, not bool
</code></pre>
<p>or even in the worst case, parse it from right to left (which I find it odd, but let's assume it works that way)</p>
<pre><code>>>> "team_" in ("something" in "something")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: argument of type 'bool' is not iterable
</code></pre>
<p>but what I got instead was</p>
<pre><code>>>> "team_" in "something" in "something"
False
>>>
>>> "some" in "something" in "something"
True
>>>
</code></pre>
<p>can someone explain to me how/why does that work?</p>
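<p>For reference, a self-contained snippet showing what I eventually pieced together: Python chains comparison operators, including <code>in</code>, so <code>a in b in c</code> is evaluated as <code>(a in b) and (b in c)</code>, with <code>b</code> evaluated only once (<code>tag</code> here is a made-up value):</p>

```python
tag = "team_alpha"

# Chained form and its expansion agree by the language's chaining rule.
assert ("team_" in tag in tag) == (("team_" in tag) and (tag in tag))

# A string is always a substring of itself, so `tag in tag` is True and
# the whole expression reduces to just `"team_" in tag`.
print("team_" in tag in tag)                   # True
print("team_" in "something" in "something")   # False
```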
|
<python><python-3.x>
|
2023-01-20 20:39:38
| 2
| 11,945
|
Vini.g.fer
|
75,189,084
| 5,471,957
|
Flask-Socketio emitting from an external process
|
<p>I've been working on a project using Flask, Flask-SocketIO, and Redis.</p>
<p>I have a server, and some modules I would like to be able to emit from outside of the server file.</p>
<p>server.py</p>
<pre><code>from flask import Flask, Response, request, json
from flask_socketio import SocketIO, join_room, leave_room, emit
app = Flask(__name__)
socketio = SocketIO()
socketio.init_app(
app,
cors_allowed_origins="*",
message_que="redis://127.0.0.1:6379/0"
)
@socketio.on('ready')
def ready(data):
socketio.emit('rollCall', { 'message': 'Ive got work for you' }, room='Ready')
...
</code></pre>
<p>jobque.py</p>
<pre><code>from modules.server.server import socketio
...
socketio.emit('rollCall', { 'message': 'Ive got work for you' }, room='Ready')
</code></pre>
<p>As it's currently configured, emits from the server file all work, the clients respond and they can talk back and forth. But when <code>jobque</code> goes to emit the same message, nothing happens. There's no error, and the client doesn't hear it.</p>
<p>I'm also using redis for things other than the websockets, I can get and set from it with no problem, in both files.</p>
<p>What do I need to do to get this external process to emit? I've looked through the <code>flask-socketio</code> documentation and this is exactly how they have it setup.</p>
<p>I've also tried creating a new SocketIO object inside <code>jobque.py</code> instead of importing the one from the server, but the results are the same:</p>
<pre><code>socket = SocketIO(message_queue="redis://127.0.0.1:6379/0")
socketio.emit('rollCall', { 'message': 'Ive got work for you' }, room='Ready')
</code></pre>
<p>I also went and checked whether I could see the websocket traffic in redis with the message queue setup, using <code>redis-cli > MONITOR</code>, but I don't see any. I only see the operations I'm using redis for directly with the redis module. This makes me think the message queue isn't actually being used, but I can't know for sure.</p>
|
<python><flask><redis><flask-socketio>
|
2023-01-20 20:36:00
| 1
| 1,276
|
SpeedOfRound
|
75,188,930
| 19,838,445
|
Poetry add same library for different Python versions
|
<p>I know how to add python constraint for a single library</p>
<pre class="lang-ini prettyprint-override"><code>flake8 = { version = "^6.0.0", python = ">=3.8.1" }
</code></pre>
<p>But what if I want to have the same library, but a different version, for a different Python version? If I add it with another constraint, it produces an error:</p>
<pre><code>Invalid TOML file /home/user/mylib/pyproject.toml: Key "flake8" already exists.
</code></pre>
<p>For example, I want my package to support Python <code>^3.7</code>, but the latest <code>flake8</code> is only compatible with <code>>=3.8.1</code>. How do I add a <code>flake8</code> specification which will be installed for <code>python = "<3.8.1"</code>?</p>
<p>Is it possible to achieve this at all? Should I create another release called <code>mylib-3.7</code> just to support earlier Python versions?</p>
|
<python><python-packaging><python-poetry><pyproject.toml>
|
2023-01-20 20:12:36
| 1
| 720
|
GopherM
|
75,188,852
| 20,102,061
|
match case in micropython - SyntaxError: invalid syntax
|
<p>I am using Python 3.10.5 on my Raspberry Pi Pico, and I am trying to use <code>match</code> & <code>case</code> instead of <code>if</code> statements. When I try to run the program, it returns an error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 22
SyntaxError: invalid syntax
</code></pre>
<p>Here is my function:</p>
<pre><code>async def BlueTooth(delay):
while True:
if uart.any():
command = uart.readline()
#print(command) # uncomment this line to see the received data
match command:
case b'1':
led.value(1)
case b'0':
led.value(0)
write_i2c("cmd: {}\n{}".format(command, Commands.get_command_action(str(command))))
await asyncio.sleep(delay)
</code></pre>
<p>I have checked, and everything should be normal; what could be causing the problem?<br />
BTW, I am using <code>Thonny</code> as my editor.</p>
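<p>In case it matters, here is the <code>if</code>-based fallback I can use instead, which runs on interpreters whose parser rejects <code>match</code> (a sketch with a hypothetical <code>set_led</code> callback standing in for <code>led.value</code>):</p>

```python
# Dispatch on the received command with a dict + `if`, which any Python
# parser (including ones without `match` support) accepts.
def handle(command, set_led):
    actions = {b'1': 1, b'0': 0}
    if command in actions:
        set_led(actions[command])

seen = []
handle(b'1', seen.append)
handle(b'0', seen.append)
handle(b'x', seen.append)  # unknown commands are ignored
print(seen)  # [1, 0]
```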
|
<python><python-3.x><micropython><python-3.10><raspberry-pi-pico>
|
2023-01-20 20:01:47
| 1
| 402
|
David
|
75,188,823
| 6,687,699
|
Override a template of a Django package
|
<p>How can I override the <code>change_list.html</code> template of a Django package, e.g. <a href="https://github.com/django-import-export/django-import-export" rel="nofollow noreferrer">Django import export</a>, in an existing Django app?</p>
<p>E.g. I want to override this package's <a href="https://github.com/django-import-export/django-import-export/blob/main/import_export/templates/admin/import_export/change_list_export.html" rel="nofollow noreferrer">template</a>; this is what I did in my project.</p>
<p>Path to the template file in my app: <code>app/templates/import_export/change_list.html</code></p>
<p>Below is how I am overriding it:</p>
<pre><code>{% extends 'import_export/change_list_export.html' %}
{% block object-tools-items %}
<div>
Hello there
</div>
{% endblock %}
</code></pre>
<p>I get this error :</p>
<p><a href="https://i.sstatic.net/TiKXJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TiKXJ.png" alt="enter image description here" /></a></p>
|
<python><django><django-templates>
|
2023-01-20 19:57:33
| 2
| 4,030
|
Lutaaya Huzaifah Idris
|
75,188,812
| 3,611,472
|
Evaluating algebraic expression of the keys of a dictionary on its values
|
<p>I have a class <code>A</code> whose objects have a dictionary as a property. This dictionary is <code>self.data</code>. The keys of the dictionary are strings, while the values are float numbers.</p>
<p>I want to define a <code>__getitem__(self, src:str)</code> method that processes a string <code>src</code> as an algebraic expression of the keys in the dictionary <code>self.data</code> and returns that algebraic expression evaluated on the corresponding values.</p>
<p>For example, if <code>self.data = {'open':1, 'close':1}</code>, I want <code>self.__getitem__('open+close')</code> to return <code>2</code>. Also, <code>self.__getitem__('(open+close)/2')</code> should return <code>1</code>.</p>
<p>My implementation is using the <code>ast</code> module and <code>eval()</code> in the following way.</p>
<pre><code>import ast
class A():
def __init__(self, data=None):
self.data = {} if data is None else data
def __getitem__(self, src:str):
parsed_expr = ast.parse(src, mode='eval')
compiled_expr = compile(parsed_expr, '', 'eval')
item = eval(compiled_expr, self.data)
return item
obj = A({'open':1,'close':1})
print(obj['close + open']) # it prints 2
</code></pre>
<p>However, this definition of <code>__getitem__</code> modifies the dictionary <code>self.data</code> by adding a <code>__builtins__</code> key.</p>
<p>For example, <code>print(obj.data)</code> returns</p>
<pre><code>{'close': 1, 'open': 1, '__builtins__': {'__name__': 'builtins', '__doc__': "Built-in functions, exceptions, and other objects. ...", '__package__': '', 'abs': <built-in function abs>, 'all': <built-in function all>, ...}}
</code></pre>
<p>(output truncated: the <code>__builtins__</code> entry goes on to list every built-in function, type and exception)</p>
<p>To overcome this problem, I can evaluate <code>eval</code> on a deep copy of <code>self.data</code>, but I don't like this solution.</p>
<p>Do you have any thoughts about how to achieve the same result without affecting <code>self.data</code>?</p>
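<p>One variant I am considering (an untested sketch): pass a throwaway dict as the globals argument and <code>self.data</code> as locals, since <code>eval</code> inserts <code>__builtins__</code> into the globals mapping only:</p>

```python
import ast

class A:
    def __init__(self, data=None):
        self.data = {} if data is None else data

    def __getitem__(self, src: str):
        parsed_expr = ast.parse(src, mode='eval')
        compiled_expr = compile(parsed_expr, '', 'eval')
        # A throwaway globals dict absorbs the __builtins__ key;
        # self.data is only read from, as the locals mapping.
        return eval(compiled_expr, {"__builtins__": {}}, self.data)

obj = A({'open': 1, 'close': 1})
print(obj['close + open'])         # 2
print('__builtins__' in obj.data)  # False
```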
|
<python><dictionary>
|
2023-01-20 19:56:40
| 1
| 443
|
apt45
|
75,188,789
| 7,447,976
|
How to get proper feature importance information when using categorical feature in h2O
|
<p>When I have categorical features in my dataset, <code>h2o</code> applies one-hot encoding and starts the training process. When I call the <code>summary</code> method to see the feature importances, though, it treats each encoded categorical level as a separate feature. My question is: how can I get the feature importance information for the original features?</p>
<pre><code>#import libraries
import pandas as pd
import h2o
import random
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
# initialize h2o
h2o.init(ip ='localhost')
h2o.remove_all()
#load a fake data
training_data = h2o.import_file("http://h2o-public-test-data.s3.amazonaws.com/smalldata/glm_test/gamma_dispersion_factor_9_10kRows.csv")
# Specify the predictors (x) and the response (y). I add a dummy categorical column named "0"
myPredictors = ["abs.C1.", "abs.C2.", "abs.C3.", "abs.C4.", "abs.C5.", '0']
myResponse = "resp"
#add a dummy column consisting of random string values
train = h2o.as_list(training_data)
train = pd.concat([train, pd.DataFrame(random.choices(['ya','ne','agh','c','de'], k=len(training_data)))], axis=1)
train = h2o.H2OFrame(train)
#define linear regression method
def linearRegression(df, predictors, response):
model = H2OGeneralizedLinearEstimator(family="gaussian", lambda_ = 0, standardize = True)
model.train(x=predictors, y=response, training_frame=df)
print(model.summary)
linearRegression(train, myPredictors, myResponse)
</code></pre>
<p>Once I run the model, here's the feature importance summary reported by <code>h2o</code>:</p>
<pre><code>Variable Importances:
variable relative_importance scaled_importance percentage
0 abs.C5. 1.508031 1.000000 0.257004
1 abs.C4. 1.364653 0.904924 0.232569
2 abs.C3. 1.158184 0.768011 0.197382
3 abs.C2. 0.766653 0.508380 0.130656
4 abs.C1. 0.471997 0.312989 0.080440
5 0.de 0.275667 0.182799 0.046980
6 0.ne 0.210085 0.139311 0.035803
7 0.ya 0.078100 0.051789 0.013310
8 0.c 0.034353 0.022780 0.005855
</code></pre>
<p>Is there a method by which I could get the feature importance for column <code>0</code>? Note that in reality I have many more categorical features; this is just a MWE.</p>
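<p>The closest workaround I can think of, sketched on a hypothetical importance table shaped like the output above, assuming that summing the one-hot levels' importances back to their parent column is an acceptable heuristic:</p>

```python
import pandas as pd

# Hypothetical variable-importance table, mirroring the summary above,
# where the one-hot levels of column "0" appear as "0.de", "0.ne", etc.
varimp = pd.DataFrame({
    "variable": ["abs.C5.", "abs.C4.", "0.de", "0.ne", "0.ya", "0.c"],
    "relative_importance": [1.508, 1.365, 0.276, 0.210, 0.078, 0.034],
})
original_cols = ["abs.C1.", "abs.C2.", "abs.C3.", "abs.C4.", "abs.C5.", "0"]

def parent(name):
    # An encoded level is "<original column>.<level>"; numeric columns
    # keep their original name unchanged.
    for col in original_cols:
        if name == col or name.startswith(col + "."):
            return col
    return name

varimp["parent"] = varimp["variable"].map(parent)
agg = varimp.groupby("parent")["relative_importance"].sum()
print(agg.loc["0"])  # combined importance of the categorical column "0"
```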
|
<python><regression><h2o><feature-selection>
|
2023-01-20 19:52:48
| 1
| 662
|
sergey_208
|
75,188,722
| 2,954,547
|
Wrapping a polling-based asynchronous API as an Awaitable
|
<p>Consider some library with an interface like this:</p>
<ul>
<li><code>RemoteTask.start()</code></li>
<li><code>RemoteTask.cancel()</code></li>
<li><code>RemoteTask.get_id()</code></li>
<li><code>RemoteTask.get_result()</code></li>
<li><code>RemoteTask.is_done()</code></li>
</ul>
<p>For example, <code>concurrent.futures.Future</code> implements an API like this, but I don't want to assume the presence of a function like <code>concurrent.futures.wait</code>.</p>
<p>In traditional Python code, you might need to <em>poll</em> for results:</p>
<pre class="lang-py prettyprint-override"><code>def foo():
task = RemoteTask()
while not task.is_done():
time.sleep(2)
return task.get_result()
</code></pre>
<p>Is there some general recommended best-practice technique for wrapping this in an <code>Awaitable</code> interface?</p>
<p>The desired usage would be:</p>
<pre class="lang-py prettyprint-override"><code>async def foo():
task = RemoteTask()
    return await run_remote_task(task)
</code></pre>
<p>I understand that the implementation details might differ across async libraries, so I am open to both general strategies for solving this problem, and specific solutions for Asyncio, Trio, AnyIO, or even Curio.</p>
<p>Assume that this library cannot be easily modified, and must be wrapped.</p>
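<p>For concreteness, the asyncio-only strategy I have in mind is to keep the polling loop but sleep cooperatively; <code>RemoteTask</code> below is a hypothetical stand-in for the real library:</p>

```python
import asyncio
import time

class RemoteTask:  # hypothetical stand-in for the polling-based library
    def __init__(self, duration=0.2):
        self._deadline = time.monotonic() + duration
    def is_done(self):
        return time.monotonic() >= self._deadline
    def get_result(self):
        return 42

async def run_remote_task(task, poll_interval=0.05):
    # Same polling loop as the blocking version, but `await asyncio.sleep`
    # yields to the event loop so other coroutines can run between checks.
    while not task.is_done():
        await asyncio.sleep(poll_interval)
    return task.get_result()

print(asyncio.run(run_remote_task(RemoteTask())))  # 42
```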
|
<python><asynchronous><python-asyncio><python-trio><python-anyio>
|
2023-01-20 19:44:01
| 2
| 14,083
|
shadowtalker
|
75,188,661
| 2,747,181
|
Running a feed forward Neural Network in PyTorch Results in an error when run layer by layer vs all at once
|
<p>This is my neural network:</p>
<pre><code>class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
# self.flatten = nn.Flatten()
self.cnn = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=4, kernel_size=3), # 28 x 28 --> 26 x 26 x 4
nn.ReLU(),
nn.MaxPool2d(kernel_size=1), # 26 x 26 x 4
nn.Flatten(), # --> (26 x 26 x 4)
nn.Linear(26*26*4, 64),
nn.ReLU(),
nn.Linear(64, 10)
)
def forward(self, X):
# x = self.flatten(X)
logits = self.cnn(X)
return logits
def w_size(self, X):
print(X.size())
for layer in self.cnn:
X = layer(X)
print(X.size())
return X
</code></pre>
<p>When I run the model like this:</p>
<pre><code>model.w_size(training_data[0][0])
</code></pre>
<p>I get this error:</p>
<pre><code>torch.Size([1, 28, 28])
torch.Size([4, 26, 26])
torch.Size([4, 26, 26])
torch.Size([4, 26, 26])
torch.Size([4, 676])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In [126], line 1
----> 1 model.w_size(training_data[0][0])
Cell In [121], line 23, in NeuralNetwork.w_size(self, X)
21 print(X.size())
22 for layer in self.cnn:
---> 23 X = layer(X)
24 print(X.size())
25 return X
File ~/.pyenv/versions/3.10.1/lib/python3.10/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~/.pyenv/versions/3.10.1/lib/python3.10/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x676 and 2704x64)
</code></pre>
<p>However, when I run the model like this:</p>
<pre><code>def train_loop(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
for batch, (X, y) in enumerate(dataloader):
# predictions
pred = model(X)
loss = loss_fn(pred, y)
# backprop
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch%100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"Current loss: {loss:>7f}, [{current:>5d}/{size:>5d}]")
</code></pre>
<p>it works perfectly fine and produces a training output and accuracy in the 80s.</p>
<p>My question is this: why does the model work when I run it the second way (just passing in the input data) but not the first way (passing in one training example to a function)?</p>
<p><a href="https://github.com/ytang07/nn_examples/blob/main/pytorch/cnn_epxloration.ipynb" rel="nofollow noreferrer">Full Code on GitHub here</a></p>
<p><a href="https://www.youtube.com/watch?v=bZKZt-bRaAc" rel="nofollow noreferrer">If you want to see the code experiments leading up to this</a></p>
|
<python><deep-learning><pytorch><neural-network><conv-neural-network>
|
2023-01-20 19:36:07
| 1
| 302
|
bujian
|
75,188,581
| 3,156,300
|
Using imagemagick to mask one image and output transparent png
|
<p>I've gone nuts here. I'm simply trying to do the following with ImageMagick and don't understand where I'm going wrong. I'm trying to blend my colored image onto a transparent background using the luminance of another image.</p>
<p><a href="https://i.sstatic.net/WXTjh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WXTjh.png" alt="enter image description here" /></a></p>
<p>Here are the images:
Color: <a href="https://www.dropbox.com/s/9grpa6ustt92ve1/color.png?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/9grpa6ustt92ve1/color.png?dl=0</a>
Mask: <a href="https://www.dropbox.com/s/wfx5j5i4s03hh2i/mask.png?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/wfx5j5i4s03hh2i/mask.png?dl=0</a></p>
<p>The code below masks the colored image but doesn't create a transparent png as expected. It's filled black.</p>
<pre><code>import subprocess

CONVERT_EXE = '.../convert.exe'
solid = 'color.png'    # the colored image linked above
mask = 'mask.png'      # the mask image linked above
output = 'result.png'  # output path
subprocess.run([CONVERT_EXE, solid, mask, '-alpha', 'off', '-compose', 'Multiply', '-composite', 'PNG32:' + output])
</code></pre>
|
<python><imagemagick>
|
2023-01-20 19:26:32
| 1
| 6,178
|
JokerMartini
|
75,188,563
| 5,120,843
|
How do I tell SpringBoot where to find Python packages?
|
<p>I've installed several Python libraries, like the "ML" library called "flair" into Ubuntu server 20.04.</p>
<p>Now, although I can run the flair Python program (as one example) fine <strong>from the command line</strong> in Ubuntu 20.04 server (Ubuntu knows where to look), when I run this same logic ($python3 + " " + program path.py + " " + data path) from SpringBoot (a WAR file), I get the message that:</p>
<p>import flair</p>
<pre><code>ModuleNotFoundError: No module named 'flair'
</code></pre>
<p>This Springboot approach works fine in Windows, but I'm missing something with Ubuntu.</p>
<p>I'm therefore not sure where SpringBoot looks for Python modules (or in general). There is no environment variable called "CLASSPATH", just PATH.</p>
<p>However, I changed the regular path variable to include the site-packages folder beneath <strong>/home/ubuntu/.local/lib/python3.8/site-packages</strong>, but it made no difference.</p>
<p>I created a CLASSPATH variable to see if that would help, but SpringBoot (the WAR file context) still doesn't find the packages in the Ubuntu path.</p>
<p>So, where do I need to put the Python modules, or how do I tell SpringBoot where they are now via environment variable or other method?</p>
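<p>To illustrate what I mean by "telling it where they are", a hedged sketch of controlling a child interpreter's module search path via <code>PYTHONPATH</code> (the site-packages path is the one from my server; I have not confirmed this is the right mechanism for the WAR-launched process):</p>

```python
import os
import subprocess
import sys

# A launched interpreter inherits the caller's environment, not the login
# shell's profile, so module lookup is governed by PYTHONPATH.
env = dict(os.environ)
env["PYTHONPATH"] = "/home/ubuntu/.local/lib/python3.8/site-packages"
proc = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env, capture_output=True, text=True,
)
print(proc.stdout)  # the PYTHONPATH entry now appears in sys.path
```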
<p>Thanks,</p>
|
<python><java><spring-boot><ubuntu>
|
2023-01-20 19:25:00
| 1
| 627
|
Morkus
|
75,188,523
| 13,455,916
|
Installing requirements.txt in a venv inside VSCode
|
<p>Apart from typing out commands, is there a good way to install <code>requirements.txt</code> inside <code>VSCode</code>?</p>
<p>I have a workspace with 2 folders containing different Python projects added; each has its own virtual environment. I would like to run a task to execute and install the requirements for each of these.</p>
<p>I have tried adding a task to <code>tasks.json</code> to try and do it for one with no success.</p>
<pre class="lang-json prettyprint-override"><code>{
"version": "2.0.0",
"tasks": [
{
"label": "Service1: Install requirements",
"type": "shell",
"runOptions": {},
"command": "'${workspaceFolder}/sources/api/.venv/Scripts/activate'; pip install -r '${workspaceFolder}/sources/api/requirements.txt'",
"problemMatcher": []
}
]
}
</code></pre>
<p>This runs, but you can see it refers to my global Python packages (<code>h:\program files\python311\lib\site-packages</code>), not the virtual environment.</p>
<p>I am running on Windows for this, but would like it to work eventually with Linux as well.</p>
|
<python><visual-studio-code><python-venv><requirements.txt>
|
2023-01-20 19:19:59
| 1
| 347
|
andrewthedev
|
75,188,501
| 1,323,044
|
Obtaining a pandas series by constructing its name as a string
|
<p>I'm looking to construct a series name as a string and get its values for a given index, or set its value for a particular index. For example:</p>
<pre><code>def getEntityValue(self, testCase, ent_order):
if ent_order == 1:
return self.testInputEnt1[testCase]
elif ent_order == 2:
return self.testInputEnt2[testCase]
elif ent_order == 3:
return self.testInputEnt3[testCase]
</code></pre>
<p>Or another one:</p>
<pre><code>def setEntityValue(self, testCase, ent_order, value):
if ent_order == 1:
self.testResultEnt1[testCase] = value
elif ent_order == 2:
self.testResultEnt2[testCase] = value
elif ent_order == 3:
self.testResultEnt3[testCase] = value
</code></pre>
<p>Is there a simpler way of constructing these <code>testInputEntX</code> series? I'm well aware that it would be ideal to use other data structures, where the values 1, 2, 3 could be another index and testInputEnt could be a list of series, but I will have to stick with these series for this application.</p>
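<p>For clarity, a sketch of the kind of simplification I'm hoping for, using <code>getattr</code>/<code>setattr</code>-style dynamic lookup on a hypothetical holder class mirroring my attributes:</p>

```python
import pandas as pd

class Tester:  # hypothetical holder mirroring the question's attributes
    def __init__(self):
        self.testInputEnt1 = pd.Series({"caseA": 10})
        self.testInputEnt2 = pd.Series({"caseA": 20})
        self.testResultEnt1 = pd.Series(dtype=float)
        self.testResultEnt2 = pd.Series(dtype=float)

    def get_entity_value(self, test_case, ent_order):
        # Build the attribute name as a string and look it up dynamically.
        return getattr(self, f"testInputEnt{ent_order}")[test_case]

    def set_entity_value(self, test_case, ent_order, value):
        getattr(self, f"testResultEnt{ent_order}")[test_case] = value

t = Tester()
print(t.get_entity_value("caseA", 2))  # 20
t.set_entity_value("caseA", 1, 99.0)
print(t.testResultEnt1["caseA"])       # 99.0
```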
|
<python><pandas><series>
|
2023-01-20 19:17:07
| 1
| 569
|
Baykal
|
75,188,488
| 12,313,380
|
How to render differently a django model object field on a html page?
|
<p>My model object has an IntegerField, but I want to be able to render it differently on my html page. Let's say the object's IntegerField is <code>500000</code>; I want to render it as <code>500 000$</code> on my html page, i.e. add a space before the <code>last 3 digits</code> and add a <code>$</code> at the end.</p>
<p>I have a model with an IntegerField that looks like this:</p>
<pre><code>class Listing(models.Model):
listing_price = models.IntegerField(max_length=100)
</code></pre>
<p>In my view I extract the models like this</p>
<pre><code>def home(request):
listing_object = Listing.objects.all()
context = {
"listing_object": listing_object,
}
return render(request, "main/index.html", context)
</code></pre>
<p>I render the data like this</p>
<pre><code>{% for listing in listing_new_object %}
{{listing.listing_price}}
{% endfor %}
</code></pre>
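<p>To pin down the transformation itself, here is a plain-Python sketch of the formatting I want (which could presumably be wrapped as a custom template filter; the function name is mine):</p>

```python
def format_price(value: int) -> str:
    # Group digits with spaces and append a dollar sign,
    # e.g. 500000 -> "500 000$".
    return f"{value:,}".replace(",", " ") + "$"

print(format_price(500000))   # 500 000$
print(format_price(1234567))  # 1 234 567$
```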
|
<python><django>
|
2023-01-20 19:15:46
| 1
| 576
|
tiberhockey
|
75,188,401
| 9,306,620
|
Python: Class Serialization: Convert class object to JSON
|
<p>I have a complex class object that I am trying to convert to a JSON array. It works, but the nested JSON ends up double-quoted. I tried to make the classes serializable using json.dumps, but this function converts them to strings.</p>
<pre><code>import datetime as dt
import json
class StatusDetails:
def __init__(self, Description, Value):
self.Description = Description
self.Value = Value
def toJSON(self):
return json.dumps(self, default=lambda o:o.__dict__)
class OrderRef:
def __init__(self, ID, Name):
self.ID = ID
self.Name = Name
def toJSON(self):
return json.dumps(self, default=lambda o:o.__dict__) #this converts the OrderRef object to a String literal
class WorkOrder:
def __init__(self, statusDetails, orderRef, RequestedDate):
self.RequestedDate = RequestedDate
self.StatusDetails = statusDetails
self.OrderRef= orderRef
listOfWO = []
_statusDetails = StatusDetails("OPEN", "OPEN").toJSON()
_orderRef = OrderRef('12345', 'SOME VALUE').toJSON()
_requestedDate = dt.datetime.now('US/Central').isoformat()
_wo = WorkOrder(_statusDetails , _orderRef , _requestedDate )
listOfWO.append(_wo)
_WorkOrderString = json.dumps([ob.__dict__ for ob in listOfWO]) # _orderRef and _statusDetails are literal JSON strings rather than JSON objects; how do I get them as JSON objects instead of literal strings?
print('posting workorder json: \n' + _WorkOrderString )
</code></pre>
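<p>For comparison, a trimmed sketch of the structure I am aiming for, serializing once at the top level so nested objects stay objects (class bodies reduced; the date is hard-coded here):</p>

```python
import json

class OrderRef:  # trimmed version of the class from the question
    def __init__(self, ID, Name):
        self.ID = ID
        self.Name = Name

class WorkOrder:
    def __init__(self, order_ref, requested_date):
        self.OrderRef = order_ref
        self.RequestedDate = requested_date

orders = [WorkOrder(OrderRef('12345', 'SOME VALUE'), '2023-01-20T19:06:15')]
# Serialize once at the top level and let `default` recurse into nested
# objects, instead of pre-converting them to strings with their own toJSON().
wo_json = json.dumps([o.__dict__ for o in orders], default=lambda o: o.__dict__)
print(wo_json)
```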
|
<python><json>
|
2023-01-20 19:06:15
| 2
| 1,041
|
SoftwareDveloper
|
75,188,362
| 523,612
|
How can I diagnose common errors in JSON data?
|
<p>I have to deal with putative JSON from a lot of different sources, and a lot of the time it seems that there is a problem with the data itself. I suspect that it sometimes isn't intended to be JSON at all; but a lot of the time it comes from a buggy tool, or it was written by hand for a quick test and has some typo in it.</p>
<p>Rather than ask about a specific error, I'm looking for a checklist: based on the error message, what is the most likely cause? What information is present in these error messages, and how can I use it to locate the problem in the data? Assume for these purposes that I can save the data to a temporary file for analysis, if it didn't already come from a file.</p>
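<p>As an example of the kind of checklist entry I mean, a sketch that reads the position information <code>json.JSONDecodeError</code> carries (the sample document and its trailing-comma typo are mine):</p>

```python
import json

doc = '{"name": "test", "value": 1,}'  # trailing comma: a common hand-editing typo
try:
    json.loads(doc)
except json.JSONDecodeError as err:
    # msg, lineno, colno and pos locate the offending character in the data.
    print(err.msg, err.lineno, err.colno)
    print(repr(doc[max(0, err.pos - 10):err.pos + 10]))  # context window
```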
|
<python><json><debugging><syntax>
|
2023-01-20 19:02:31
| 1
| 61,352
|
Karl Knechtel
|
75,188,254
| 11,825,717
|
PyTorch — proper way to compute loss on GPU?
|
<p>What is the proper way to handle loss values with <code>PyTorch</code> CUDA? For example:</p>
<ol>
<li>Should I store the loss value in GPU?</li>
<li>How do I move the loss value to GPU?</li>
<li>How do I update the loss value on GPU?</li>
</ol>
<h3>Inside __init__():</h3>
<pre><code>self.device = torch.device('cuda')
self.model = self.model.to(device)
self.total_loss = torch.Tensor([0]).to(device)
</code></pre>
<h3>For each batch:</h3>
<pre><code>self.loss1 = torch.Tensor(y_true - y_pred)
self.loss2 = 0.5 # some other loss
self.total_loss = self.loss1 + self.loss2
self.total_loss.backward()
</code></pre>
|
<python><pytorch>
|
2023-01-20 18:50:41
| 1
| 2,343
|
Jeff Bezos
|
75,188,016
| 4,764,604
|
'project.Account' has no ForeignKey to 'project.Object': How to link an account model to the objects of a project?
|
<p>I am trying to create an announcement website (<code>All</code>) that can be visible to others (the <code>Users</code>, for which I added an <code>Account</code>). For this I wanted to modify a little the user profile to add fields like telephone, email address...</p>
<p>So I modified <code>admin.py</code>:</p>
<pre><code>from django.contrib import admin
from .models import Todo, Account
from django.contrib.auth.models import User
class AccountInline(admin.StackedInline):
model = Account
can_delete = False
verbose_name_plural = 'Accounts'
class TodoAdmin(admin.ModelAdmin):
readonly_fields = ('created',)
inlines = (AccountInline, )
admin.site.unregister(User)
admin.site.register(Todo, TodoAdmin)
</code></pre>
<p>But got back:</p>
<pre><code><class 'todo.admin.AccountInline'>: (admin.E202) 'todo.Account' has no ForeignKey to 'todo.Todo'.
</code></pre>
<p>So I added a <code>ForeignKey</code> to <code>Todo</code> with <code>account = models.ForeignKey(Account, on_delete=models.CASCADE)</code>:</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
class Account(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
email = models.CharField(max_length=100)
firstname = models.CharField(max_length=30)
lastname = models.CharField(max_length=50)
company = models.CharField(max_length=5)
def __str__(self):
return self.user.username
class Todo(models.Model):
title = models.CharField(max_length=100)
datetime = models.DateTimeField()
memo = models.TextField(blank=True)
created = models.DateTimeField(auto_now_add=True)
datecompleted = models.DateTimeField(null=True, blank=True)
important = models.BooleanField(default=False)
user = models.ForeignKey(User, on_delete=models.CASCADE)
account = models.ForeignKey(Account, on_delete=models.CASCADE)
def __str__(self):
return self.title
</code></pre>
<p>But I still have the error, and I don't have any Users in the admin panel anymore</p>
<p><a href="https://i.sstatic.net/CDwNw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CDwNw.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><django><model><one-to-many>
|
2023-01-20 18:24:40
| 1
| 3,396
|
Revolucion for Monica
|
75,187,972
| 19,776,016
|
how to divide a column of a matrix to a constant number in numpy?
|
<p>I try this division and get these strange zeros instead of correct numbers.</p>
<pre><code>i = 0
data_x = np.array([[3104, 5, 1, 45], [1416, 3, 2, 40], [852, 2, 1, 35]])
mu_i = np.mean(data_x[:,i])
print(mu_i)
max_num = np.max(data_x[:,i])
min_num = np.min(data_x[:,i])
d = (max_num - min_num)
print(d)
data_x[:,i] = (data_x[:,i] - mu_i)/d
data_x
</code></pre>
<p>result:</p>
<pre><code>1790.6666666666667
0,5,1,45
0,3,2,40
0,2,1,35
</code></pre>
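<p>For what it's worth, the zeros in a case like this typically come from the array's integer dtype: <code>data_x</code> holds integers, so the float results of the division are truncated toward zero when assigned back into the column. A minimal sketch of the usual fix, casting to float first:</p>

```python
import numpy as np

# dtype=float keeps the division results from being truncated to integers
data_x = np.array([[3104, 5, 1, 45], [1416, 3, 2, 40], [852, 2, 1, 35]], dtype=float)

i = 0
mu_i = np.mean(data_x[:, i])
d = np.max(data_x[:, i]) - np.min(data_x[:, i])
data_x[:, i] = (data_x[:, i] - mu_i) / d
print(data_x[:, i])  # non-zero fractions instead of zeros
```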
|
<python><numpy>
|
2023-01-20 18:20:12
| 0
| 339
|
Gaff
|
75,187,862
| 10,576,322
|
Default variables with python-dotenv
|
<p>I want to make a package that works out of the box with reasonable default variables, like defining some servers, ports, etc., so that the code works for an average user as they would expect without further configuration.
But I want these environment variables to be overridden if a <code>.env</code> file exists, in order to allow configuration for other environments.
I read that python-dotenv's <code>load_values</code> will use defaults if no <code>.env</code> file exists, but there is no example on PyPI of how that would ideally be set up.</p>
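<p>One common pattern, sketched below with made-up variable names, is to keep the defaults in code and treat <code>load_dotenv</code> as an optional override layer; <code>load_dotenv()</code> simply does nothing when no <code>.env</code> file exists, so the <code>os.getenv</code> fallbacks supply the out-of-the-box behavior:</p>

```python
import os

try:
    from dotenv import load_dotenv  # pip install python-dotenv
    load_dotenv()  # no-op if there is no .env file
except ImportError:
    pass  # package still works without python-dotenv installed

# Defaults apply for anything the environment/.env did not define
# (MYPKG_SERVER / MYPKG_PORT are hypothetical names for this sketch)
SERVER = os.getenv("MYPKG_SERVER", "localhost")
PORT = int(os.getenv("MYPKG_PORT", "8080"))

print(SERVER, PORT)
```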
|
<python><environment-variables><python-dotenv>
|
2023-01-20 18:07:29
| 3
| 426
|
FordPrefect
|
75,187,849
| 10,328,083
|
Why does import <package> not work, but import <package.submodule> work?
|
<p>I am trying to use a python package called <code>nilearn</code>, but I think this issue could occur more generally, and I'm just trying to understand WHY this happens. I'd really appreciate any further references that could help me understand what's going on at a deeper level.</p>
<p>The very first instructions in the <a href="https://nilearn.github.io/stable/introduction.html" rel="nofollow noreferrer">intro nilearn tutorial</a> are</p>
<pre><code>import nilearn
print(nilearn.datasets.MNI152_FILE_PATH)
</code></pre>
<p>If I try to run this, I get the following error:</p>
<pre><code>AttributeError: module 'nilearn' has no attribute 'datasets'
</code></pre>
<p>However, if I try the following code, everything works</p>
<pre><code>import nilearn.datasets
print(nilearn.datasets.MNI152_FILE_PATH)
</code></pre>
<p>Clearly, <code>nilearn</code> does have a submodule called <code>datasets</code>. Why am I not able to use it when I just <code>import nilearn</code>?</p>
<p>More broadly, is this behavior specific to <code>nilearn</code>, or does it occur across Python packages generally? What exactly is going on?</p>
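<p>This is general Python behavior rather than anything nilearn-specific: importing a package only runs its <code>__init__.py</code>, and a submodule becomes an attribute of the package only if that <code>__init__.py</code> imports it (or you import the submodule explicitly). The standard library's <code>concurrent</code> package, whose <code>__init__.py</code> is empty, shows the same symptom in a fresh interpreter:</p>

```python
import concurrent

# Nothing has imported the submodule yet, so the attribute is missing
try:
    concurrent.futures
except AttributeError as exc:
    print(exc)

import concurrent.futures  # this import binds the attribute on the package

print(concurrent.futures.ThreadPoolExecutor)
```

<p>Packages like NumPy expose submodules because their <code>__init__.py</code> imports them eagerly; nilearn evidently does not do so for <code>datasets</code>.</p>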
|
<python><import><module><attributes><attributeerror>
|
2023-01-20 18:05:30
| 2
| 547
|
seeker_after_truth
|
75,187,730
| 11,002,498
|
Find emojis in messages and then react using discord.py
|
<p>I am trying to make a discord bot that finds emojis in messages in a channel and then reacts to those messages with the same emojis.</p>
<p>My code is:</p>
<pre><code>if message.channel.id == 645647298423613453:
custom_emojis = re.findall(r'<:\w*:\d*>', message.content)
custom_emojis = [e.split('<')[1].replace('>', '') for e in custom_emojis]
for item in custom_emojis:
await message.add_reaction(item)
</code></pre>
<p>and it perfectly reacts with all custom emojis (I used resources from <a href="https://stackoverflow.com/questions/67615363/discord-py-custom-emoji-reaction">here</a> and <a href="https://stackoverflow.com/questions/54859876/how-check-on-message-if-message-has-emoji-for-discord-py">here</a>). So, now I have to implement the same for normal emojis.</p>
<p>The <strong>first issue</strong> is how to find whether there are any of them in the sent message. In threads like <a href="https://stackoverflow.com/questions/36216665/find-there-is-an-emoji-in-a-string-in-python3">this</a> or <a href="https://stackoverflow.com/questions/43146528/how-to-extract-all-the-emojis-from-text">this</a>, the module <code>emoji.UNICODE_EMOJI</code> is used. However, as is said <a href="https://stackoverflow.com/questions/62544309/why-client-emojis-newer-version-of-client-get-all-emojis-returns-empy-list-wh">here</a>, you cannot use it, right?</p>
<p>The <strong>second issue</strong> is that the only way I have found to react with them is <a href="https://stackoverflow.com/questions/69362952/reacting-messages-with-emojis-in-discord-py">this</a>.</p>
<p>I know that <code>await message.add_reaction("🤩")</code> works perfectly but I lack the icon itself.</p>
<p>When I run this:</p>
<pre><code>msg = message.content.split()
print("msg is ", msg)
</code></pre>
<p>I get this list: msg is <code>['🔪', '🏳️\u200d⚧️', 'dasdass', 'aadsdasasd', '<:test2:1066037921007808612>']</code>, while my original discord message was the knife icon, the trans flag, two random strings and a custom emoji.</p>
<p>So the <strong>final question</strong> is: how do I get the non-custom emojis, and how do I react with them in the format in which I have extracted them?</p>
<p>For reference my code is hosted in replit.com and I have used this <a href="https://www.freecodecamp.org/news/create-a-discord-bot-with-python/" rel="nofollow noreferrer">source</a> to make the bot.</p>
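<p>As a rough, dependency-free starting point for the first issue (a sketch with a known limitation: it inspects one code point at a time, so multi-codepoint sequences such as the trans flag come back as their separate parts rather than one emoji), you can filter characters by Unicode category, since most single-codepoint emojis fall in the 'So' (Symbol, other) category:</p>

```python
import unicodedata

def find_simple_emojis(text):
    """Return the characters whose Unicode category is 'So',
    which covers most single-codepoint emojis."""
    return [ch for ch in text if unicodedata.category(ch) == "So"]

message = "🔪 dasdass aadsdasasd"
print(find_simple_emojis(message))  # ['🔪']
```

<p>Each returned item is a plain <code>str</code> that could be passed to <code>message.add_reaction(...)</code>; flags, skin tones and other ZWJ sequences would need fuller handling.</p>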
|
<python><discord><discord.py><emoji><replit>
|
2023-01-20 17:55:07
| 1
| 464
|
Skapis9999
|
75,187,724
| 1,443,098
|
How should I change my tkinter code to rearrange the elements on my page?
|
<p>I'm learning tkinter and getting stumped in one area. Here's the code:</p>
<pre><code>from tkinter import *
from tkinter.messagebox import showinfo
def button_press():
showinfo('info','pressed button')
root = Tk()
root.geometry('800x500')
f = Frame(root)
f.pack()
Label(f, text="this is a line of text").pack(side=LEFT)
s = StringVar(value='enter here')
Entry(f, textvariable=s, width=100).pack(side=LEFT)
Button(f, text='Button', command=button_press).pack(side=RIGHT)
root.mainloop()
</code></pre>
<p>It produces:</p>
<p><a href="https://i.sstatic.net/qYoqJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qYoqJ.png" alt="enter image description here" /></a></p>
<p>But I want to align the text vertically with the entry field like this:</p>
<p><a href="https://i.sstatic.net/ORTXQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ORTXQ.png" alt="enter image description here" /></a></p>
<p>What do I need to change to make that happen?</p>
|
<python><tkinter>
|
2023-01-20 17:54:14
| 1
| 7,755
|
user1443098
|
75,187,644
| 8,372,455
|
Adding new values to pandas df and increment timestamp
|
<p>I have a time series dataset in a Pandas dataframe <code>df</code>. I am trying to add a new value to the bottom of the df and then increment the timestamp, which is the df index.</p>
<p>For example the new value I can add to the bottom of the df like this:</p>
<pre><code>testday.loc[len(testday.index)] = testday_predict[0]
print(testday)
</code></pre>
<p>This seems to work, except that the new row's index is just an integer count rather than a timestamp:</p>
<pre><code> kW
Date
2022-07-29 00:00:00 39.052800
2022-07-29 00:15:00 38.361600
2022-07-29 00:30:00 38.361600
2022-07-29 00:45:00 38.534400
2022-07-29 01:00:00 38.880000
... ...
2022-07-29 23:00:00 36.806400
2022-07-29 23:15:00 36.806400
2022-07-29 23:30:00 36.633600
2022-07-29 23:45:00 36.806400
96 44.482361 <---- my predicted value added at the bottom good except for the time stamp value of 96
</code></pre>
<p>The value <code>96</code> is just the next integer position after the end of <code>df.index</code>; hopefully this makes sense.</p>
<p>If I try:</p>
<pre><code>from datetime import timedelta
last_index_stamp = testday.last_valid_index()
print(last_index_stamp)
</code></pre>
<p>This returns:</p>
<pre><code>Timestamp('2022-07-29 23:45:00')
</code></pre>
<p>And then I can add 15 minutes to this Timestamp (my data is 15 minute data) like this:</p>
<pre><code>new_timestamp = last_index_stamp + timedelta(minutes=15)
print(new_timestamp)
</code></pre>
<p>Which returns what I am looking for, instead of the value <code>96</code>:</p>
<pre><code>Timestamp('2022-07-30 00:00:00')
</code></pre>
<p>But how do I replace the value of <code>96</code> with <code>new_timestamp</code>? If I try:</p>
<pre><code>testday.index[-1:] = new_timestamp
</code></pre>
<p>This will error out:</p>
<pre><code>TypeError: Index does not support mutable operations
</code></pre>
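<p>For reference, a pandas <code>Index</code> is immutable, which is exactly what the <code>TypeError</code> says. A sketch of one way around it, using the 15-minute step from the question: append the prediction with the new timestamp as the label in the first place, so no integer label ever appears:</p>

```python
import pandas as pd
from datetime import timedelta

# tiny stand-in for the 15-minute series in the question
testday = pd.DataFrame(
    {"kW": [36.633600, 36.806400]},
    index=pd.to_datetime(["2022-07-29 23:30:00", "2022-07-29 23:45:00"]),
)

new_timestamp = testday.last_valid_index() + timedelta(minutes=15)
testday.loc[new_timestamp] = 44.482361  # label-based append keeps a DatetimeIndex

print(testday.tail(1))
```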
|
<python><pandas>
|
2023-01-20 17:46:56
| 1
| 3,564
|
bbartling
|
75,187,339
| 8,869,570
|
Printing list of class object's member variable
|
<p>I have a list of class objects, and each object has an attribute <code>a</code>.</p>
<p>e.g.,</p>
<pre><code>class my_class:
a = 3
lst = [my_class(), my_class()]
</code></pre>
<p>I would like to print every object's <code>a</code>. How can this be done without explicitly iterating through the list?</p>
<p>I tried</p>
<pre><code>print(lst.a)
print(lst[:].a)
</code></pre>
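<p>For what it's worth, plain Python lists have no vectorized attribute access, so iteration has to happen somewhere; it can, however, be a single expression. A minimal sketch:</p>

```python
class MyClass:
    a = 3

lst = [MyClass(), MyClass()]

print([obj.a for obj in lst])             # [3, 3]
print(list(map(lambda obj: obj.a, lst)))  # [3, 3]
```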
|
<python><list>
|
2023-01-20 17:19:22
| 0
| 2,328
|
24n8
|
75,187,259
| 14,500,576
|
Kill a Python.exe process after opening it with PowerShell and Task Scheduler
|
<p>I used to run Python scripts with Task Scheduler by executing a .bat file. However, since I'm running this on a Windows VM, this approach would fail after the computer received a Windows update. So I decided to open the scripts differently, I still call the .bat file from Task Scheduler but this file now opens a PowerShell script as admin which calls the Python script.</p>
<p>Here's the .bat file:</p>
<pre><code>Powershell.exe -executionpolicy Unrestricted -File C:\Users\UserName\Desktop\PsPythonTest\runPythonScript.ps1 -Verb RunAs
</code></pre>
<p>Here's the PowerShell script:</p>
<pre><code>$path = 'C:\Users\UserName\Desktop\PsPythonTest\test.py'
start-process PowerShell -verb runas
python $path
</code></pre>
<p>The problem that I have is that the Python.exe processes will be left open. I don't want to add a line to the PS script to kill ALL Python instances since I have several tasks running the same way.</p>
<p>How do I have it end the specific Python instance after it's done?</p>
<p>Thanks</p>
|
<python><powershell><windows-task-scheduler>
|
2023-01-20 17:11:33
| 0
| 355
|
ortunoa
|
75,187,149
| 13,943,207
|
matplotlib is indexing in the wrong way
|
<p>I'm trying to plot vertical lines on an existing (realtime) plot.<br></p>
<pre><code>def animate(ival):
df = pd.read_pickle("/Users/user/Workfiles/Python/rp/0.72.0.0/df.pkl")
ax1.clear()
mpf.plot(df, ax=ax1, type='candle', ylabel='p', warn_too_much_data=999999999999)
try:
ax1.hlines(y=price, xmin=df.shape[0]-10, xmax=df.shape[0], color='r', linewidth=1)
except UnboundLocalError:
pass
ani = FuncAnimation(fig, animate, interval=100)
mpf.show()
</code></pre>
<p>This works as it should:</p>
<p><a href="https://i.sstatic.net/5nQkP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5nQkP.png" alt="enter image description here" /></a></p>
<p>Now I need to add vertical lines. I have the index numbers of the rows I want to see plotted in this variable: <code>lows_peaks</code></p>
<p>df.iloc[lows_peaks]:</p>
<pre><code> open high ...
datetime ...
2023-01-20 15:07:30.776127 3919.0 3919.0 ...
2023-01-20 15:14:46.116836 3915.0 3915.0 ...
2023-01-20 15:23:23.845752 3928.0 3928.0 ...
2023-01-20 15:30:08.680839 3917.0 3917.0 ...
2023-01-20 15:37:26.709335 3938.0 3938.0 ...
2023-01-20 15:43:57.275134 3941.0 3941.0 ...
2023-01-20 15:55:56.717249 3951.0 3951.0 ...
2023-01-20 16:03:24.278924 3939.0 3939.0 ...
2023-01-20 16:10:05.334341 3930.0 3930.0 ...
2023-01-20 16:18:53.015390 3955.0 3955.0
</code></pre>
<p>Now adding the vlines:</p>
<pre><code>for i in df.iloc[lows_peaks].index:
ax1.vlines(x=i, ymin=df.low.min(), ymax=df.high.max(), color='r', linewidth=1)
</code></pre>
<p>result:</p>
<p><a href="https://i.sstatic.net/uElkW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uElkW.png" alt="enter image description here" /></a></p>
<p><code>i</code> are the correct timestamps:</p>
<pre><code>2023-01-20 15:07:30.776127
2023-01-20 15:14:46.116836
2023-01-20 15:23:23.845752
2023-01-20 15:30:08.680839
2023-01-20 15:37:26.709335
2023-01-20 15:43:57.275134
2023-01-20 15:55:56.717249
2023-01-20 16:03:24.278924
2023-01-20 16:10:05.334341
2023-01-20 16:18:53.015390
</code></pre>
<p>Why are the vertical lines somewhere far on the right side of the plot?</p>
<p>minimal reproducible code:</p>
<pre><code>import pandas as pd
import numpy as np
from matplotlib.animation import FuncAnimation
import mplfinance as mpf
times = pd.date_range(start='2022-01-01', periods=50, freq='ms')
df = pd.DataFrame(np.random.randint(3000, 3100, (50, 1)), columns=['open'])
df['high'] = df.open+5
df['low'] = df.open-2
df['close'] = df.open
df.set_index(times, inplace=True)
lows_peaks = df.low.nsmallest(5).index
print(lows_peaks)
fig = mpf.figure(style="charles",figsize=(7,8))
ax1 = fig.add_subplot(1,1,1)
def animate(ival):
ax1.clear()
for i in lows_peaks:
ax1.vlines(x=i, ymin=df.low.min(), ymax=df.high.max(), color='blue', linewidth=3)
mpf.plot(df, ax=ax1)
ani = FuncAnimation(fig, animate, interval=100)
mpf.show()
</code></pre>
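<p>A hedged explanation, based on mplfinance's default behavior: <code>mpf.plot</code> draws the candles at integer positions 0..N-1 rather than at datetime coordinates, so <code>vlines</code> given raw <code>Timestamp</code> x values land on matplotlib's date-number scale, far off to the right. Converting the timestamps to positional indices first avoids that; a sketch of just the conversion:</p>

```python
import numpy as np
import pandas as pd

times = pd.date_range(start="2022-01-01", periods=50, freq="ms")
df = pd.DataFrame({"low": np.random.randint(3000, 3100, 50)}, index=times)

lows_peaks = df.low.nsmallest(5).index

# integer positions (0..49) of the peak timestamps within the plotted index
positions = df.index.get_indexer(lows_peaks)
print(positions)

# then inside animate():
#   ax1.vlines(x=positions, ymin=..., ymax=..., color='blue', linewidth=3)
```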
|
<python><matplotlib>
|
2023-01-20 17:01:24
| 1
| 552
|
stanvooz
|
75,187,041
| 10,589,816
|
How do I extract the modal peaks from a 1-d data vector?
|
<pre><code>values = [ 8.42, 8.87, 8.88, 8.88, 8.88, 8.58, 8.58,
8.58, 8.58, 8.58, 8.58, 8.58, 8.58, 8.58, 0. , 8.58,
17.65, 17.65, 17.65, 17.65, 17.65, 17.65, 17.65, 17.65, 17.65,
17.65, 17.65, 17.65, 17.9 , 0. , 17.9 , 17.9 , 17.68, 17.68,
17.68, 17.68, 17.68, 17.68, 17.68, 17.68, 17.68, 17.68, 17.68,
8.89, 8.89, 9.86, 8. , 8.89, 8.89, 8.89, 8.93, 8.95,
]
data = pd.Series(values)
data.plot.kde()
</code></pre>
<p><a href="https://i.sstatic.net/0siJD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0siJD.png" alt="KDE Plot showing peaks" /></a></p>
<p>I have a list of values, and I can easily generate a kernel density plot which shows there are modal peaks at about 8 and 17.</p>
<p>I know that matplotlib is using <code>scipy.stats.gaussian_kde</code> to generate the curve, and that with the curve I should be able to use <code>scipy.signal.find_peaks</code> to find the stationary peaks... but I can't quite get anything working.</p>
<p>How do I extract the modal peaks from a 1-d data vector?</p>
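<p>A sketch of wiring those two pieces together: evaluate <code>gaussian_kde</code> on a fine grid, run <code>find_peaks</code> on the evaluated curve, and map the peak indices back to x values (the explicit <code>bw_method</code> is an assumption here, chosen so the two modes stay separated):</p>

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import gaussian_kde

values = np.array([8.42, 8.87, 8.88, 8.88, 8.58, 8.58, 8.58, 8.58,
                   17.65, 17.65, 17.65, 17.65, 17.9, 17.68, 17.68,
                   8.89, 9.86, 8.0, 8.89, 8.93, 8.95])

kde = gaussian_kde(values, bw_method=0.3)
xs = np.linspace(values.min() - 2, values.max() + 2, 1000)
ys = kde(xs)

peak_idx, _ = find_peaks(ys)
modes = xs[peak_idx]
print(modes)  # expect one mode near 8 and one near 17
```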
|
<python><kernel-density>
|
2023-01-20 16:50:42
| 1
| 918
|
Peter Prescott
|
75,186,965
| 13,488,334
|
Python - Creating installable packages from different unique project structure
|
<p>My goal is to establish a project structure that will limit future headaches. The project I am working on has 3 different packages that need to be installable and tested; the structure looks something like this:</p>
<pre><code>my-project
- src
- __init__.py
- widely_used_module_1.py
- widely_used_module_2.py
- widely_used_module_3.py
- package_2
- __init__.py
- package_2_module.py
- package_3
- __init__.py
- package_3_module.py
- tests
- src_tests
- package_2_tests
- package_3_tests
- pyproject.toml
- setup.cfg
- setup.py
</code></pre>
<p>Note that <code>\src</code> needs to be an installable package itself. There are modules within the <code>\src</code> directory that are used throughout the entire project, in each of the packages.</p>
<p>To my knowledge, I have a few options. I can leave the project as it is and pull my hair out trying to create and test 3 different packages in this format, OR I can create 3 different subdirectories within <code>\src</code>. In the latter option I would package the "widely_used_modules" into their own directory, keep <code>package_2</code> how it is, and add <code>package_3</code> into <code>src</code>.</p>
<p>There is a lot of coupling between these packages, as <code>package_3</code> acts as a "wrapper" around the other two, so refactoring subdirectories within <code>src</code> will take considerable time. But I'm willing to go this route if it will make life easier in the future.</p>
<p>My main issue with leaving the project structure in its current state is that I am not sure how to write the <code>pyproject.toml</code> or <code>setup.cfg</code> to create 3 different installable packages.</p>
<p>If anyone is aware of some existing projects that are structured in a similar way, please direct me to them. Also open to any other suggestions.</p>
|
<python><configuration><setuptools><python-packaging><project-structure>
|
2023-01-20 16:43:21
| 0
| 394
|
wisenickel
|
75,186,953
| 8,833,702
|
Pandas/Python read file with different separators
|
<p>I have a .txt file as follows:</p>
<pre><code>columnA;columnB;columnC;columnD
2022040200000000000000000000011 8000702 79005889 SPECIAL_AGENCY
</code></pre>
<p>You can observe that the names of the columns are separated by a semi column <code>;</code>, however, row values, have different separators. In this example, <code>columnA</code> has 3 spaces, <code>columnB</code> has 3, <code>columnC</code> has 2, and <code>columnD</code> has 7.</p>
<p>It is important to clarify, that I need to keep the spaces, hence the “real” separator is the last space.</p>
<p>Considering I have a schema, that tells me for each column what is the amount of spaces (separators?) I have, how can I turn it into a pandas dataframe?</p>
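<p>Since the schema gives per-column widths, slicing each line at fixed offsets keeps the spaces intact, unlike separator-based parsing (or <code>pandas.read_fwf</code>, which strips them). A sketch; the widths are derived from the single example row and treat only the last space of each run as the separator, as described:</p>

```python
import pandas as pd

header = "columnA;columnB;columnC;columnD"
a, b, c, d = "2022040200000000000000000000011", "8000702", "79005889", "SPECIAL_AGENCY"
line = a + "   " + b + "   " + c + "  " + d  # 3, 3 and 2 spaces, as in the question

names = header.split(";")
# schema: field width = value length + spaces, minus the one separator space
widths = [len(a) + 2, len(b) + 2, len(c) + 1, len(d)]

def split_fixed(row, widths):
    """Slice a row into fixed-width fields, skipping the single
    'real' separator space after each field."""
    fields, pos = [], 0
    for w in widths:
        fields.append(row[pos:pos + w])
        pos += w + 1
    return fields

df = pd.DataFrame([split_fixed(line, widths)], columns=names)
print(df)  # columnA still ends with its two non-separator spaces
```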
|
<python><pandas><dataframe><text-files>
|
2023-01-20 16:42:36
| 2
| 433
|
robsanna
|
75,186,853
| 4,332,480
|
How to properly run tests based on the APITestCase in Django?
|
<p>When I use the <code>python manage.py test</code> command, I see the following result in the console: <code>Ran 0 tests in 0.000s</code>.</p>
<p>How to run these <code>UNIT tests</code>?</p>
<p>Also, how can I check the correctness of the URLs I use in the <code>reverse</code> function?</p>
<p><strong>project/urls.py:</strong></p>
<pre><code>urlpatterns = [
path('', include('client.urls')),
]
</code></pre>
<p><strong>client/urls.py:</strong></p>
<pre><code>urlpatterns = [
path('clients', ClientView.as_view()),
path('clients/<int:pk>', ClientView.as_view()),
]
</code></pre>
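<p>For what it's worth, <code>reverse('client:client-list')</code> can only resolve if a <code>client</code> namespace and those route names are registered, and as written <code>client/urls.py</code> declares neither. A hedged sketch of what the tests' <code>reverse()</code> calls appear to expect (the <code>-list</code>/<code>-detail</code> names follow DRF's convention; adjust to your actual views):</p>

```python
# client/urls.py (sketch; imports assumed)
from django.urls import path
from .views import ClientView

app_name = 'client'  # registers the 'client' namespace used by reverse()

urlpatterns = [
    path('clients', ClientView.as_view(), name='client-list'),
    path('clients/<int:pk>', ClientView.as_view(), name='client-detail'),
]
```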
<p><strong>client/tests.py:</strong></p>
<pre><code>from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase
from client.models import Client
class ClientTestCase(APITestCase):
def setUp(self):
self.data = {
"first_name": "Jimmy",
"last_name": "Smith",
"email": "jimmysmith@gmail.com"
}
self.response = self.client.post(
reverse('client:client-list'),
self.data,
format="json"
)
def create_client(self):
self.assertEqual(self.response.status_code, status.HTTP_201_CREATED)
self.assertEqual(Client.objects.count(), 1)
self.assertEqual(Client.objects.get().first_name, 'Jimmy')
def get_clients(self):
response = self.client.get(reverse('client:client-list'))
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(Client.objects.count(), 1)
def get_client(self):
client = Client.objects.get()
response = self.client.get(
reverse('client:client-detail', kwargs={'pk': client.id}),
format="json"
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertContains(response, client)
def update_client(self):
client = Client.objects.get()
new_data = {
"first_name": "Bob",
"last_name": "Marley",
"email": "bobmarley@gmail.com"
}
response = self.client.put(
reverse('client:client-detail', kwargs={'pk': client.id}),
data=new_data,
format="json"
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(Client.objects.get().first_name, 'Bob')
def delete_client(self):
client = Client.objects.get()
response = self.client.delete(
reverse('client:client-detail', kwargs={'pk': client.id}),
format="json"
)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
self.assertEqual(Client.objects.count(), 0)
</code></pre>
<p><strong>ERROR:</strong></p>
<pre><code>Error
Traceback (most recent call last):
File "/Users/nurzhan_nogerbek/PycharmProjects/c02_tit/venv/lib/python3.9/site-packages/django/urls/base.py", line 71, in reverse
extra, resolver = resolver.namespace_dict[ns]
KeyError: 'person'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/nurzhan_nogerbek/PycharmProjects/project/person/tests.py", line 15, in setUp
reverse('person:person-list'),
File "/Users/nurzhan_nogerbek/PycharmProjects/project/venv/lib/python3.9/site-packages/django/urls/base.py", line 82, in reverse
raise NoReverseMatch("%s is not a registered namespace" % key)
django.urls.exceptions.NoReverseMatch: 'person' is not a registered namespace
</code></pre>
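<p>Separately, the <code>Ran 0 tests</code> part has a simpler cause worth noting: Django's test runner, like <code>unittest</code>, only collects methods whose names start with <code>test</code>. A small stdlib-only sketch of the discovery rule:</p>

```python
import unittest

class ClientTestCase(unittest.TestCase):
    def create_client(self):       # NOT collected: no test prefix
        self.assertTrue(True)

    def test_create_client(self):  # collected
        self.assertTrue(True)

names = unittest.TestLoader().getTestCaseNames(ClientTestCase)
print(names)  # ['test_create_client']
```

<p>Renaming the methods to <code>test_create_client</code>, <code>test_get_clients</code>, and so on should make <code>manage.py test</code> pick them up.</p>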
|
<python><django><django-rest-framework><django-testing><django-tests>
|
2023-01-20 16:33:25
| 1
| 5,276
|
Nurzhan Nogerbek
|
75,186,814
| 14,094,546
|
Stacked barplot inside a bar plot python
|
<p>I have the following barplot. It shows the distribution of the letters in my dataset (x) as percentages (y). Inside this barplot I want to show that, for example, 10% of L is 'male', 60% is 'female', 10% is 'neutral', 10% is 'other' and 10% is missing, and likewise for every other letter, as in the second attached plot. In other words: of all the L's analyzed, 10% are male, etc. So, a stacked barplot inside a barplot, perhaps with the female/male/etc. percentage labels inside the bars, since they are on a different scale (each letter's fractions sum to 100%).
How can I do that in python? Thanks a lot!</p>
<p><a href="https://i.sstatic.net/5fjWX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5fjWX.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/IxVqB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IxVqB.png" alt="enter image description here" /></a>
The reproducible code:</p>
<pre><code>data = {'L': 0.10128343899798979,
'A': 0.04587392402453482,
'G': 0.05204199096266515,
'V': 0.08343212549181313,
'E': 0.07848392694534645,
'S': 0.03242100922632854,
'I': 0.05353675927357696,
'K': 0.07614727763173719,
'R': 0.0878305241997835,
'D': 0.05932683882274109,
'T': 0.06166348813635036,
'P': 0.033915777537240344,
'N': 0.04120062539731629,
'Q': 0.03858907616445887,
'F': 0.033073896534542895,
'Y': 0.04503204302183736,
'M': 0.018126213425424805,
'H': 0.04008384447537069,
'C': 0.0014947683109118087,
'W': 0.016442451420029897}
import matplotlib.pyplot as plt
plt.bar(range(len(data)), list(data.values()), align='center')
plt.xticks(range(len(data)), list(data.keys()))
#stacked bar plot data subset
index,female,male,neutral,other,missing
L,0.40816326530612246,0.30612244897959184,0.02040816326530612,0.0,0.2653061224489796
A,0.34615384615384615,0.34615384615384615,0.0,0.0,0.3076923076923077
G,0.2962962962962963,0.1111111111111111,0.037037037037037035,0.0,0.5555555555555556
V,0.20833333333333334,0.5625,0.020833333333333332,0.0,0.20833333333333334
E,0.5,0.225,0.025,0.0,0.25
</code></pre>
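<p>One plain-matplotlib way to get this effect (a sketch with the subset numbers from the question, rounded) is to draw each letter's bar as a stack whose segments are the category fractions scaled by that letter's overall percentage, accumulating with the <code>bottom</code> keyword:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for the sketch
import matplotlib.pyplot as plt
import numpy as np

letters = ["L", "A", "G", "V", "E"]
totals = np.array([0.1013, 0.0459, 0.0520, 0.0834, 0.0785])

# per-letter fractions of female/male/neutral/other/missing (each row sums to 1)
fractions = np.array([
    [0.408, 0.306, 0.020, 0.0, 0.265],
    [0.346, 0.346, 0.0,   0.0, 0.308],
    [0.296, 0.111, 0.037, 0.0, 0.556],
    [0.208, 0.563, 0.021, 0.0, 0.208],
    [0.500, 0.225, 0.025, 0.0, 0.250],
])

segments = fractions * totals[:, None]  # scale each stack to the letter's height

fig, ax = plt.subplots()
bottom = np.zeros(len(letters))
for j, label in enumerate(["female", "male", "neutral", "other", "missing"]):
    ax.bar(letters, segments[:, j], bottom=bottom, label=label)
    bottom += segments[:, j]
ax.legend()
fig.savefig("stacked_inside.png")
```

<p>The in-bar percentage labels could then be added with <code>ax.bar_label</code> or <code>ax.text</code> per segment.</p>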
|
<python><matplotlib><seaborn><bar-chart><visualization>
|
2023-01-20 16:30:00
| 2
| 520
|
Chiara
|
75,186,800
| 11,974,225
|
Weighted average of a dictionary - Pandas
|
<p>I have the following column in a data-frame (it is an example):</p>
<p>First row is: <code>'{"100":10,"50":3,"-90":2}'</code>.</p>
<p>Second row is: <code>'{"100":70,"50":3,"-90":2,"-40":3}'</code>.</p>
<p>I want to calculate a weighted average where the dictionary's keys are the values and the dictionary's values are the weights of the weighted average.</p>
<p>The final value of the first row should be: <code>64.666</code>, which is <code>(100*10+50*3-90*2)/(10+3+2)</code>; and that of the second row should be: <code>87.82</code>.</p>
<p>For each dictionary there might be hundreds of keys/values, and the column might have thousands of rows. How can I code this efficiently, preferably in a vectorized way?</p>
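<p>There is no fully vectorized path while the values live inside JSON strings, but a single pass that parses each string once and does the arithmetic in NumPy is typically fast enough at this scale. A sketch (the column name is assumed):</p>

```python
import json

import numpy as np
import pandas as pd

df = pd.DataFrame({"counts": ['{"100":10,"50":3,"-90":2}',
                              '{"100":70,"50":3,"-90":2,"-40":3}']})

def weighted_avg(s):
    d = json.loads(s)
    keys = np.array([float(k) for k in d], dtype=float)  # the values
    weights = np.array(list(d.values()), dtype=float)    # the weights
    return (keys * weights).sum() / weights.sum()

df["wavg"] = df["counts"].map(weighted_avg)
print(df["wavg"].round(3).tolist())  # [64.667, 87.821]
```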
|
<python><pandas><numpy>
|
2023-01-20 16:29:03
| 3
| 907
|
qwerty
|
75,186,793
| 7,531,433
|
Single sourcing package dependencies with conda build, develop and pip
|
<p>I am writing a Python package, for which I provide packaging via setuptools as well as a recipe for conda build. This allows the package to be installed via <code>pip</code> and <code>conda</code>.
Additionally, I want to provide an easy way to setup a conda environment for development, so I provide an <code>environment.yml</code> and use conda develop.
Unfortunately, this means that I have multiple files in my repository which list dependencies for my package:</p>
<ul>
<li><code>meta.yml</code> Contains the build and run dependencies when packaging with conda build.</li>
<li><code>setup.cfg</code> Contains the run dependencies when packaging with setuptools.</li>
<li><code>pyproject.toml</code> Contains the build dependencies when packaging with setuptools.</li>
<li><code>environment.yml</code> Contains the build and run dependencies for development (and building the documentation).</li>
</ul>
<p>I have no problems with splitting up the dependencies in multiple files.
However, with this setup, I need to specify many of the same dependencies at different places, which I want to avoid:</p>
<ul>
<li>The run dependencies need to be specified in <code>meta.yml</code>, <code>setup.cfg</code> and <code>environment.yml</code>.</li>
<li>Some build dependencies need to be specified in <code>meta.yml</code>, <code>pyproject.toml</code> and <code>environment.yml</code>, while a few are specific to the packaging system and only need to be specified at one place.</li>
</ul>
<p>Is there a way for me to single-source my dependencies or at least reduce the amount of duplication I currently have?</p>
|
<python><pip><conda><setuptools><conda-build>
|
2023-01-20 16:28:09
| 0
| 709
|
tierriminator
|
75,186,701
| 16,591,513
|
How to automatically create and install requirements.txt file inside the Docker Build
|
<p>I have a question: how can I create a <code>requirements.txt</code> file inside my Docker build, so that I don't have to update it manually in the project's directory when releasing new versions of the app?</p>
<p>So, what I want is basically to construct the <code>requirements.txt</code> file inside the Docker build and then install from it.</p>
<p>My Dockerfile</p>
<pre><code>FROM --platform=arm64 python:3.9-buster
# Initializing Project Directory
CMD mkdir /project/dir/
# Setting up working directory
WORKDIR /project/dir/
ENV PYTHONUNBUFFERED=1
RUN pip install --upgrade pip
RUN pip freeze > requirements.txt
ADD ./requirements.txt ./requirements.txt # error occurs at this line
COPY . .
RUN pip install -r requirements.txt
RUN chmod +x ./run.sh
ENTRYPOINT ["sh", "./run.sh"]
</code></pre>
<p>But unfortunately there is an error occured: <code>failed to compute cache key: "/requirements.txt" not found: not found</code>.</p>
<p>Do you have any tips for implementation?</p>
|
<python><docker>
|
2023-01-20 16:20:02
| 1
| 449
|
CraZyCoDer
|
75,186,553
| 528,369
|
Python Pandas convert month/year dataframe with one column to dataframe where each row is one year
|
<p>Given a Pandas dataframe of the form</p>
<pre><code>January-2021,0.294
February-2021,0.252
March-2021,0.199
...
January-2022,0.384
February-2022,0.333
March-2022,0.271
...
</code></pre>
<p>how do I transform it to a dataframe with 12 columns, one for each month, so it looks like</p>
<pre><code>year,January,February,March,...
2021,0.294,0.252,0.199,...
2022,0.384,0.333,0.271,...
</code></pre>
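<p>A sketch of one way to do this (column names are assumed, since the sample has no header): split the <code>Month-Year</code> string, pivot with year as the index and month as the columns, then restore calendar order:</p>

```python
import calendar

import pandas as pd

df = pd.DataFrame({"period": ["January-2021", "February-2021", "March-2021",
                              "January-2022", "February-2022", "March-2022"],
                   "value": [0.294, 0.252, 0.199, 0.384, 0.333, 0.271]})

df[["month", "year"]] = df["period"].str.split("-", expand=True)
wide = df.pivot(index="year", columns="month", values="value")

# pivot sorts columns alphabetically; put the months back in calendar order
order = [m for m in calendar.month_name[1:] if m in wide.columns]
wide = wide[order].reset_index()
print(wide)
```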
|
<python><pandas><dataframe><data-wrangling>
|
2023-01-20 16:08:37
| 3
| 2,605
|
Fortranner
|
75,186,428
| 12,065,403
|
How to display two progress bar with tqdm on notebook on vscode?
|
<p>I would like to display two progress bars on my notebook using tqdm on vscode.</p>
<p>I wrote the following code but I do not understand why it is not working:</p>
<pre><code>import asyncio
from tqdm.notebook import tqdm
print('\n------ simple p bar ------\n')
for i in tqdm(range(10), desc='simple example'):
await asyncio.sleep(0.5)
print('\n------ two p bar ------\n')
primary_p_bar = tqdm(total=10, desc='primary')
for i in range(10):
secondary_p_bar = tqdm(total=5, leave=False, desc='seconday')
for j in range(5):
await asyncio.sleep(0.5)
secondary_p_bar.update(1)
secondary_p_bar.close()
primary_p_bar.update(1)
primary_p_bar.close()
</code></pre>
<p>It displays:
<a href="https://i.sstatic.net/guC28.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/guC28.png" alt="issu tqdm img" /></a></p>
<h2>Env</h2>
<ul>
<li>VsCode version: 1.74.2</li>
<li>In my Pipfile:</li>
</ul>
<pre><code>[packages]
tqdm = "*"
ipywidgets = "*"
[requires]
python_version = "3.8"
</code></pre>
<h2>things I tried</h2>
<p>I did test this code on a <code>.py</code> file and it works:</p>
<pre><code># type: ignore
import asyncio
from tqdm import tqdm
async def main():
print('\n------ simple p bar ------\n')
for i in tqdm(range(10), desc='simple example'):
await asyncio.sleep(0.5)
print('\n------ two p bar ------\n')
primary_p_bar = tqdm(total=10, desc='primary')
for i in range(10):
secondary_p_bar = tqdm(total=5, leave=False, desc='seconday')
for j in range(5):
await asyncio.sleep(0.5)
secondary_p_bar.update(1)
secondary_p_bar.close()
primary_p_bar.update(1)
primary_p_bar.close()
asyncio.run(main())
</code></pre>
<p>In the terminal it displays:
<a href="https://i.sstatic.net/hwN5w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hwN5w.png" alt="working example tqdm" /></a></p>
<p>I also tried replacing the import <code>from tqdm.notebook import tqdm</code> with <code>from tqdm import tqdm</code> in the notebook cell. It works for the single progress bar but not for two (the secondary progress bar is not displayed):
<a href="https://i.sstatic.net/0uAtc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0uAtc.png" alt="half working tqdm" /></a></p>
<p>I also tried downgrading <code>ipywidgets</code> in my Pipfile (<code>ipywidgets = "7.7.1"</code>), following <a href="https://stackoverflow.com/questions/73407189/vs-code-interactive-is-not-rendering-tqdm-notebook-properly">VS Code Interactive is not rendering tqdm.notebook properly</a>; it did not change anything.</p>
|
<python><visual-studio-code><jupyter-notebook><tqdm>
|
2023-01-20 15:57:36
| 0
| 1,288
|
Vince M
|
75,186,331
| 1,970,684
|
MWAA Airflow job getting SCRAM error when connecting to postgres
|
<p>I'm trying to query postgres from an MWAA instance of airflow. I'm not sure if there is a conflict due to airflow itself having a different version of postgres for its metadata or what, but I get this error when connecting to postgres:</p>
<pre><code> File "/usr/local/airflow/dags/transactions/transactions.py", line 62, in load_ss_exposures_to_s3
ss_conn = psycopg2.connect(
File "/usr/local/airflow/.local/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above
</code></pre>
<p>Locally I have psycopg2 version 2.9.5 and libpq version 140005. MWAA is using psycopg2 2.9.5 and libpq 90224. Is there a way for me to force MWAA to use another version? Maybe through airflow plugins? Airflow version is 2.4.3.</p>
|
<python><postgresql><amazon-web-services><airflow><mwaa>
|
2023-01-20 15:48:14
| 3
| 431
|
RagePwn
|
75,186,241
| 6,645,564
|
How can I add more than one continuous color scale in a single px.scatter plot?
|
<p>I currently am trying to make a bubble plot that is generated from the following code. An example of the bubble plot that is produced is also included.</p>
<pre><code>height = (df['terms'].nunique()*20)+100
fig = px.scatter(df, x='group', y='terms', size='-log10(pvalue)',
color='health_status', height=height, width=1500,
color_discrete_map={'healthy': 'red', 'sick':'blue'})
iplot(fig)
</code></pre>
<p><a href="https://i.sstatic.net/osQur.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/osQur.png" alt="bubbleplot_example" /></a></p>
<p>Now, although the size of the markers is already determined by the value of -log10(pvalue), I also want the color of the markers to be determined by -log10(pvalue). However, I want to distinguish between healthy and sick groups: the healthy groups would have a continuous color scale going from light to dark red, while the sick groups would have one going from light to dark blue (also in proportion to the value of -log10(pvalue)). I have been trying to figure out how to integrate these two continuous color scales into the plot, but have had no luck so far. As far as I can tell, it seems like you can only use one continuous color scale per plot, but I'm not sure. Any help would be highly appreciated.</p>
<p><strong>Update</strong>:</p>
<p>Here is a small snippet of the input data I am using to create the plot (separated by tabs):</p>
<pre><code>terms group health_status pvalue -LOG10(pvalue)
innate immune response in mucosa Group A healthy healthy 0.001312593 2.881869847
low-density lipoprotein particle remodeling Group A healthy healthy 0.004084727 2.388836964
nucleosome assembly Group A healthy healthy 0.005324106 2.273753336
antimicrobial humoral immune response mediated by antimicrobial peptide Group B healthy healthy 0.005932275 2.226778741
intermediate filament organization Group B healthy healthy 0.005932275 2.226778741
defense response to bacterium Group B healthy healthy 0.005932275 2.226778741
leukocyte migration involved in inflammatory response Group B healthy healthy 0.015600119 1.806872092
defense response to Gram-negative bacterium Group B healthy healthy 0.015600119 1.806872092
keratinization Group C healthy healthy 0.018984856 1.721592692
Golgi apparatus mannose trimming Group C healthy healthy 0.018984856 1.721592692
sequestering of zinc ion Group C healthy healthy 0.018984856 1.721592692
chylomicron remnant clearance Group A sick sick 0.018984856 1.721592692
neutrophil aggregation Group A sick sick 0.018984856 1.721592692
protein localization to CENP-A containing chromatin Group A sick sick 0.018984856 1.721592692
negative regulation of lipid biosynthetic process Group B sick sick 0.018984856 1.721592692
antibacterial humoral response Group B sick sick 0.022844656 1.641215378
positive regulation of cell growth Group B sick sick 0.023998364 1.619818356
</code></pre>
|
<python><plotly-express>
|
2023-01-20 15:39:55
| 1
| 924
|
Bob McBobson
|
75,186,150
| 13,742,665
|
Handle request payload using HTTPX to POST data and captcha key to Google reCAPTCHA v2
|
<p>I'm trying to send the key I got from Anti-Captcha; an example of the key response looks like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"errorId":0,
"status":"ready",
"solution":
{
"gRecaptchaResponse":"3AHJ_VuvYIBNBW5yyv0zRYJ75VkOKvhKj9_xGBJKnQimF72rfoq3Iy-DyGHMwLAo6a3"
},
"cost":"0.001500",
"ip":"46.98.54.221",
"createTime":1472205564,
"endTime":1472205570,
"solveCount":"0"
}
</code></pre>
<p>I'm trying to send the value in the <code>gRecaptchaResponse</code> key (it contains the captcha bypass token), but I don't know how to send the payload request to the reCAPTCHA URL. The network request looks like this:</p>
<p><a href="https://i.sstatic.net/0PXmV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0PXmV.png" alt="recapcha network" /></a></p>
<p>These are the headers and method for the captcha URL:</p>
<p><a href="https://i.sstatic.net/yKMcf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yKMcf.png" alt="headers and method for capcha URL" /></a></p>
<p>I want to use the key I got from Anti-Captcha to bypass the captcha, and then make the data-search request once the captcha has been bypassed. My code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>class Locator(object):
    """class to find school locator"""

    def __init__(
        self,
        url: Optional[str] = "https://mybaragar.com/index.cfm?",
        params: Optional[dict[str, Any]] = None,
        headers: Optional[dict[str, Any]] = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
        },
        api_key: Optional[str] = None,
        website_key: Optional[str] = None,
    ):
        self.url: str = url
        self.params: dict[str, Any] = params
        self.headers: dict[str, Any] = headers
        self.solver = recaptchaV2Proxyless()
        # solver setup
        self.api_key = api_key
        self.website_key = website_key
        self.client = Client()

    def solve_captcha(self, sitekey: str):
        """resolve captcha v2

        Args:
            sitekey (str): website key for bypassing the captcha
        """
        headers: dict = {
            "content-type": "application/x-protobuffer",
            "cookie": "__Secure-3PSID=SgjFUgcMhK-HVyybbLH0riteaGdyRWxselnkRXWJxq2rjwyvYXp3pavZE0ePt1obNzd12Q.; __Secure-3PAPISID=Y_7wkodD5a0ZuHN4/Ar2CPyucPiF7dfJA0; NID=511=s-gIuV4uvQNKsfuPUm6Z41MVf8EMHdBeCD0ydgE1o6bGIQ8JE0LHu2GagudzwTjPdCfI78TASrrK6Xq9XGzk6lVaLgvNl6I9SNQ5MaVQr2hDj75FqZozXKgYYAdBAMUc-6wi1xI89i6HazvhC_7SyvajPmbXdD8J6rob43zrjlERiPshgiV8oozdvh16VdSrGQzZfWYNxqFFN95iEMo6BAXj_iEWnaRv_7vWw5oHKYcvnK3B7w_-p3C2477AXkUCEmMzRCDLlzRJjWhQ78XJD5IixMImsXmYJyGMKpkUz7gPns358uq5_bSB1WjG72fuFr9YK6XT_CqoIbLaxW1a7EWeegUjGDaXkNRCwvCvzkcMqkqLaOcT6lhqxAVWTgkT64hQX5HAr1kLd2j2Of8gZozSWqsPhaGQmCwOi4jTFxzR9B_SKVQxBBTDEMPk7XPDVeRrviwOPJJAwW55AInDpkY2Tx789-jYhpUn3t--japlkbZzpjInu6GX_wXK7ABSMRVSfpeFIQky62f-BroPYLQkU8JalDmVnQ_9gO3HsKlx1pvxV96aXNkaYu9WdW0PJ8MKc__whIenFw; __Secure-3PSIDCC=AIKkIs2ZZCAwmd6CAU2iZePJJjBhda6W0UW1jQ9rQ3phpRz_zUnE3H4cXYZRAetQu1dHi34gM6Ga",
            "origin": "https://www.google.com",
            "referer": "https://www.google.com/recaptcha/api2/bframe?hl=en&v=Gg72x2_SHmxi8X0BLo33HMpr&k=6LelzS8UAAAAAGSL60ADV5rcEtK0x0lRsHmrtm62",
            "sec-ch-ua": '"Not_A Brand";v="99", "Google Chrome";v="109", "Chromium";v="109"',
            "sec-ch-ua-mobile": "?0",
            "sec-ch-ua-platform": "macOS",
            "sec-fetch-dest": "empty",
            "sec-fetch-mode": "cors",
            "sec-fetch-site": "same-origin",
            "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36",
            "x-client-data": ""
        }
        response = self.client.post(url, headers=headers)
        print(response.text)

    def parse(self):
        soup = self.client.get(url=self.url, params=self.params,)
        index_url = soup.css_first("form#SLSearchForm[action]").attrs["action"]
        base_url = "https://mybaragar.com/{}".format(index_url)

    def search(self, update: bool = False):
        searcher: SearchGenerator = SearchGenerator(
            search_url="https://geographic.org/streetview/canada/bc/west_vancouver.html",
            update_data=update,
        )
        reports = searcher.report()
        df = pd.read_csv(reports)
        datas = df.to_dict()
</code></pre>
<p>Can anyone help?</p>
|
<python><web-scraping><python-requests><recaptcha><httpx>
|
2023-01-20 15:32:15
| 1
| 833
|
perymerdeka
|
75,186,144
| 7,959,614
|
Create ranking within set of rows resulting from GROUP BY
|
<p>I have the following table</p>
<pre><code>CREATE TABLE "results" (
"player" INTEGER,
"tournament" INTEGER,
"year" INTEGER,
"course" INTEGER,
"round" INTEGER,
    "score" INTEGER
);
</code></pre>
<p>With the following data sample for a single <code>tournament</code> / <code>year</code> / <code>round</code>-combination.</p>
<pre><code>1 33 2016 895 1 20
2 33 2016 895 1 12
3 33 2016 895 1 25
4 33 2016 895 1 28
7 33 2016 895 1 25
8 33 2016 895 1 17
9 33 2016 895 1 12
</code></pre>
<p>I would like to create a new column called <code>ranking</code> that represents the ranking of the player for that particular <code>tournament</code> / <code>year</code> / <code>round</code>-combination. The player with the most points is #1. If players score the same, they are tied, which needs to be indicated with a "T".</p>
<p>The desired output looks as follows:</p>
<pre><code>1 33 2016 895 1 20 3
2 33 2016 895 1 12 T5
3 33 2016 895 1 25 T2
4 33 2016 895 1 28 1
7 33 2016 895 1 25 T2
8 33 2016 895 1 17 4
9 33 2016 895 1 12 T5
</code></pre>
<p>How can I achieve the above? Thanks</p>
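<p>A sketch of one possible approach (untested against the real schema): <code>DENSE_RANK()</code> gives the numeric rank, and a per-score <code>COUNT(*)</code> window decides when to prepend the "T". Shown via Python's <code>sqlite3</code> with the sample data:</p>

```python
import sqlite3

rows = [  # player, tournament, year, course, round, score
    (1, 33, 2016, 895, 1, 20), (2, 33, 2016, 895, 1, 12),
    (3, 33, 2016, 895, 1, 25), (4, 33, 2016, 895, 1, 28),
    (7, 33, 2016, 895, 1, 25), (8, 33, 2016, 895, 1, 17),
    (9, 33, 2016, 895, 1, 12),
]

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE results (
    player INTEGER, tournament INTEGER, year INTEGER,
    course INTEGER, round INTEGER, score INTEGER)""")
con.executemany("INSERT INTO results VALUES (?, ?, ?, ?, ?, ?)", rows)

query = """
SELECT player, score,
       CASE WHEN ties > 1 THEN 'T' || rnk ELSE CAST(rnk AS TEXT) END AS ranking
FROM (
    SELECT player, score,
           DENSE_RANK() OVER (PARTITION BY tournament, year, round
                              ORDER BY score DESC) AS rnk,
           COUNT(*) OVER (PARTITION BY tournament, year, round, score) AS ties
    FROM results
)
ORDER BY player
"""
ranking = {player: rank for player, score, rank in con.execute(query)}
print(ranking)  # {1: '3', 2: 'T5', 3: 'T2', 4: '1', 7: 'T2', 8: '4', 9: 'T5'}
```

<p>Window functions require SQLite 3.25+, which ships with recent Python versions.</p>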
|
<python><sqlite><count><window-functions><dense-rank>
|
2023-01-20 15:31:38
| 1
| 406
|
HJA24
|
75,186,036
| 1,169,091
|
Why does the last line in a cell generate output but preceding lines do not?
|
<p>Given this Jupyter notebook cell:</p>
<pre><code>x = [1,2,3,4,5]
y = {1,2,3,4,5}
x
y
</code></pre>
<p>When the cell executes, it generates this output:</p>
<pre><code>{1, 2, 3, 4, 5}
</code></pre>
<p>The last line in the cell generates output, the line above it has no effect. This works for any data type, as far as I can tell.</p>
<p>Here's a snip of the same code as above:</p>
<p><a href="https://i.sstatic.net/GmXmL.png" rel="noreferrer"><img src="https://i.sstatic.net/GmXmL.png" alt="Same code as above!" /></a></p>
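<p>For context, the notebook echoes only the value of the last bare expression in a cell; earlier bare expressions are evaluated and discarded. Printing explicitly shows every value, a minimal sketch:</p>

```python
x = [1, 2, 3, 4, 5]
y = {1, 2, 3, 4, 5}
print(x)  # printed regardless of position in the cell
print(y)
```

<p>In IPython/Jupyter this behavior is also configurable: setting <code>InteractiveShell.ast_node_interactivity = "all"</code> reportedly echoes every bare expression, though that is a shell setting rather than part of this snippet.</p>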
|
<python><jupyter-notebook>
|
2023-01-20 15:22:26
| 2
| 4,741
|
nicomp
|
75,186,020
| 6,525,082
|
How can I check for unused import in a python jupyterlab notebook?
|
<p>Suppose a Jupyter file has several imports:</p>
<pre><code>import sys
import pandas as pd
import p1
import p2
</code></pre>
<p>However, suppose <code>p2</code> is never used and I want to remove it. For regular Python scripts this is answered <a href="https://stackoverflow.com/questions/2540202/how-can-i-check-for-unused-import-in-many-python-files">here</a> and in several other questions. But notebook files are not regular <code>.py</code> files, so cleaning them up this way might not work, and I could not find out how to do it.</p>
<p>Question: How can I check for unused import like the example above in a python jupyterlab notebook?</p>
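<p>A sketch of one possibility (illustrative, not exhaustive): a <code>.ipynb</code> file is JSON, so the source of the code cells can be pulled out and the AST walked for imported names that are never referenced. The helper below ignores corner cases such as <code>__all__</code> re-exports or names used only inside strings:</p>

```python
import ast

def unused_imports(source: str):
    """Return imported names that never appear as a Name node in `source`."""
    tree = ast.parse(source)
    imported, used = {}, set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported[(alias.asname or alias.name).split(".")[0]] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = node.lineno
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(name for name in imported if name not in used)

# For a real notebook: cells = json.load(open("nb.ipynb"))["cells"]
# source = "\n".join("".join(c["source"]) for c in cells if c["cell_type"] == "code")
source = "import sys\nimport pandas as pd\nimport p1\nimport p2\n\nprint(pd.DataFrame, p1)"
print(unused_imports(source))  # ['p2', 'sys']
```

<p>Third-party tools such as <code>nbqa</code> (e.g. <code>nbqa pyflakes notebook.ipynb</code>) reportedly run linters directly on notebooks; that is not verified here.</p>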
|
<python><jupyter-notebook><jupyter-lab>
|
2023-01-20 15:20:58
| 0
| 1,436
|
wander95
|
75,186,010
| 9,392,446
|
Create binary columns out of data nested in another dfs columns
|
<p>This one is weird --</p>
<p>let's say I have a <code>df</code> like this:</p>
<pre><code>user_id city state network
123 austin tx att
113 houston tx tmobile
343 miami fl att
356 seattle wa verizon
</code></pre>
<p>and I have another <code>df1</code> like this (these 2 dfs wont be the same shape):</p>
<pre><code>col1
'network': 'att'
'city': 'austin'
'state': 'tx'
'city': 'seattle'
</code></pre>
<p>I'm trying to build a <code>final_df</code> like this:</p>
<pre><code>user_id is_network_att is_city_austin is_state_tx is_city_seattle
123 1 1 1 0
113 0 0 1 0
343 1 0 0 0
356 0 0 0 1
</code></pre>
<p>Easier to just show it, but in a sentence:
I'm trying to create conditional/true-false columns out of <code>df1.col1</code> in a new <code>final_df</code>, using the data in <code>df</code>'s columns.</p>
<p>Strategies I'm trying:</p>
<p>- Throw the <code>df1</code> rows into a list or dictionary, loop through each element, and then somehow loop through each row with an if-statement per row.</p>
<p>- Maybe make a makeshift column in <code>df1</code> containing the exact code that would create the column in <code>final_df</code>, and somehow evaluate the text in this column as code.</p>
<p>**here's a handful of the rows i'm trying to put in the dictionary</p>
<pre><code>Here's a handful of rows in that I'm trying to put in a dictionary:
912 'organization': 'atlantic metro communications'
913 'isp_name': 'Atlantic Metro Communications'
915 'location_name': 'martinez ca'
917 'location_name': 'martinez ca'
918 'location_name': 'martinez ca'
919 'location_name': 'martinez ca'
920 'isp_name': 'Hurricane Electric'
922 'organization': 'hurricane electric'
923 'organization': 'hurricane electric'
924 'isp_name': 'Hurricane Electric'
925 'count_users_per_ip': 28.0
926 'organization': 'atlantic metro communications'
927 'isp_name': 'Atlantic Metro Communications'
928 'isp_name': 'Hurricane Electric'
929 'organization': 'hurricane electric'
930 'isp_name': 'Hurricane Electric'
931 'organization': 'hurricane electric'
932 'location_name': 'hermosillo son'
933 'organization': 'atlantic metro communications'
934 'isp_name': 'Atlantic Metro Communications'
935 'location_state': ' son'
966 'count_users_per_ip': 28.0
1057 'count_users_per_device': 4.0
1218 'count_ips_per_user': 3.0
1408 'moderated_action': 'SOFT_BLOCK'
1418 'moderated_action': 'SOFT_BLOCK'
1430 'moderated_action': 'SOFT_BLOCK'
1438 'moderated_action': 'SOFT_BLOCK'
1517 'app_build': '405000004'
1605 'app_build': '405000004'
</code></pre>
<p>Update: here's as far as I've got:</p>
<pre><code>def transpose_features(df1, col1, main_df, attr1, attr2):
    from ast import literal_eval
    # dic = literal_eval(f"{{{', '.join(df1[col1])}}}")
    dic = {}
    for i in df1[attr1].tolist():
        dic[i] = df1[df1[attr1] == i][attr2].tolist()
    df_final = (main_df.drop(columns=list(dic))
                .join(main_df[list(dic)].eq(dic).astype(int)
                      .rename(columns=lambda x: f'is_{x}_{dic[x]}')
                      )
                )
    print(df_final.shape)
    return df_final


df_final = transpose_features(
    df1=df_features,
    col1='attr',
    main_df=df,
    attr1='attr1',
    attr2='attr2',
)
df_final.head()
</code></pre>
<p>This code pulls all the values into a list and attaches that list to each key in the dictionary. But the issue now is that I basically need an <code>or</code> condition in the method @mozway provided, one that asks "does the user have ANY of the values in the list for each dict key?".</p>
<p>Hard to even type that.</p>
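<p>One possible reading, as an untested sketch: treat each row of <code>df1.col1</code> as a (column, value) pair and turn each pair into one indicator column. That also sidesteps the duplicate-key problem ('city' appears twice, so a plain dict loses entries). Parsing each string with <code>ast.literal_eval</code> and comparing column-wise:</p>

```python
import ast
import pandas as pd

df = pd.DataFrame({
    "user_id": [123, 113, 343, 356],
    "city": ["austin", "houston", "miami", "seattle"],
    "state": ["tx", "tx", "fl", "wa"],
    "network": ["att", "tmobile", "att", "verizon"],
})

raw = ["'network': 'att'", "'city': 'austin'", "'state': 'tx'", "'city': 'seattle'"]
# each row parses to a one-entry dict; keep it as a (column, value) tuple
pairs = [next(iter(ast.literal_eval("{%s}" % s).items())) for s in raw]

final_df = df[["user_id"]].copy()
for col, val in pairs:
    final_df[f"is_{col}_{val}"] = df[col].eq(val).astype(int)

print(final_df)
```

<p>For the update's "any of the values" case, <code>df[col].isin(list_of_values)</code> gives the equivalent <code>or</code>.</p>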
|
<python><pandas>
|
2023-01-20 15:20:18
| 1
| 693
|
max
|
75,185,927
| 6,694,814
|
Python folium - conditional based class_name for DivIcon
|
<p>I would like to set the class_name based on some conditions in folium.</p>
<p>I tried:</p>
<pre><code> folium.Marker(location=[lat,lng],
icon = folium.DivIcon(html="<b>" + sp + "</b>",
if role == 'Contractor':
class_name= "mapText-Contractor"
else:
class_name= "mapText"
icon_anchor=(30,5))
).add_to(fs)
</code></pre>
<p>But the console says that my syntax is invalid.</p>
<p>I found that custom classes can be created, but with no further information:</p>
<p><a href="https://snyk.io/advisor/python/branca/functions/branca.element.MacroElement" rel="nofollow noreferrer">https://snyk.io/advisor/python/branca/functions/branca.element.MacroElement</a></p>
<p>Is there any way of making the class_name condition-based?</p>
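<p>For context, the syntax error comes from putting an <code>if</code> statement inside a keyword-argument list; argument <em>values</em> must be expressions. One sketch of a fix is to compute the class first (or use a conditional expression); folium itself is not needed to show the pattern, and the function name here is made up:</p>

```python
def div_icon_class(role):
    # a conditional *expression* is legal where a statement is not
    return "mapText-Contractor" if role == "Contractor" else "mapText"

print(div_icon_class("Contractor"))  # mapText-Contractor
print(div_icon_class("Surveyor"))    # mapText
```

<p>The result would then be passed as <code>class_name=div_icon_class(role)</code> inside <code>folium.DivIcon(...)</code>, assuming <code>DivIcon</code> accepts a <code>class_name</code> keyword as the question implies.</p>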
|
<python><folium>
|
2023-01-20 15:14:19
| 1
| 1,556
|
Geographos
|
75,185,926
| 6,683,176
|
Why does FastAPI's Depends() work without any parameter passed to it?
|
<p>I found the following FastAPI code for authenticating a user with information obtained from a form:</p>
<pre class="lang-py prettyprint-override"><code>@app.post("/token")
async def login_for_access_token(form_data: OAuth2PasswordRequestForm = Depends(),
                                 db: Session = Depends(get_db)):
    user = authenticate_user(form_data.username, form_data.password, db)
    if not user:
        raise token_exception()
    token_expires = timedelta(minutes=20)
    token = create_access_token(user.username,
                                user.id,
                                expires_delta=token_expires)
    return {"token": token}
</code></pre>
<p>I'm struggling to understand why in <code>form_data:OAuth2PasswordRequestForm = Depends()</code>, <code>Depends()</code> has no parameter passed to it? I thought that the whole point of <code>Depends()</code> was to be instantiated with a function that gets called before the endpoint function is called.</p>
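<p>For context, FastAPI documents this as a shortcut: when <code>Depends()</code> is given no callable, it falls back to calling the parameter's own type annotation, so <code>OAuth2PasswordRequestForm</code> itself is the dependency. A toy resolver illustrating the rule (names are made up; this is not FastAPI's real implementation):</p>

```python
import inspect

class Depends:
    def __init__(self, dependency=None):
        self.dependency = dependency

class FormData:  # stands in for OAuth2PasswordRequestForm
    def __init__(self):
        self.username = "alice"

def resolve_and_call(func):
    kwargs = {}
    for name, param in inspect.signature(func).parameters.items():
        if isinstance(param.default, Depends):
            # the shortcut: no explicit dependency -> use the annotation
            dependency = param.default.dependency or param.annotation
            kwargs[name] = dependency()
    return func(**kwargs)

def endpoint(form_data: FormData = Depends()):
    return form_data.username

print(resolve_and_call(endpoint))  # alice
```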
|
<python><dependency-injection><fastapi><depends>
|
2023-01-20 15:14:19
| 1
| 339
|
pablowilks2
|
75,185,794
| 5,287,190
|
In SQLAlchemy, how can I prevent the reassignment of a child in a one-to-many relationship?
|
<p>How can I dynamically prevent SQLAlchemy from reassigning a <code>many(Member)</code> to a different <code>one(Group)</code> if the member already belongs to a different group?</p>
<p>The semantics of the relationship are as follows:</p>
<ul>
<li>A <code>Group</code> may have <em>zero</em> or more <code>Members</code></li>
<li>A <code>Member</code> may belong to <em>zero</em> or <em>one</em> <code>Groups</code></li>
<li>Once assigned to a <code>Group</code>, a <code>Member</code> may not be assigned to a different <code>Group</code>, and attempting to do so should be a <em>no-op</em></li>
</ul>
<p>Example code:</p>
<pre class="lang-py prettyprint-override"><code>import sqlalchemy as sa
import sqlalchemy.orm as orm
from sqlalchemy import event


class Group:
    # Columns
    id = sa.Column(sa.Integer(), primary_key=True)

    # Relationships
    members = orm.relationship('Member', back_populates='group')


class Member:
    # Columns
    id = sa.Column(sa.Integer(), primary_key=True)

    # Relationships
    group_id = sa.Column(sa.Integer(), sa.ForeignKey('group.id'), nullable=True)
    group = orm.relationship('Group', back_populates='members')


@event.listens_for(Group.members, 'append', retval=True)
def _append_member(target, value, initiator):
    if value.group is not None:
        msg = f'Warning: Member {value.id} already belongs to Group {value.group.id}. Cannot reassign'
        print(msg)
        # do something here to prevent the append
        return None
</code></pre>
<p>Example usage:</p>
<pre class="lang-py prettyprint-override"><code>list_of_lists = get_or_create_sorted_members()
for member_list in list_of_lists:
group = Group()
group.members.extend(member_list)
</code></pre>
<p>This works, except that I can't figure out what to return from the event handler in order to signal to SQLAlchemy that no append should occur. Returning <code>None</code> produces an error during the next <code>session.flush()</code>. Returning any of the <code>orm.interfaces.EXT_XXX</code> constants produces immediate errors.</p>
<p>I can raise an exception, but that prevents subsequent calls from going through, and if I am adding to the relationship via <code>Group.members.extend(list_of_members)</code>, there is no opportunity to catch the exception so as to allow the other assignments to continue.</p>
<p>I could replace the direct relationship with a secondary link table that had a unique constraint <code>member.id</code>, but that seems overkill for a zero/one-to-many.</p>
<p>I have little control over the <code>get_or_create_sorted_members()</code> behavior.</p>
|
<python><sqlalchemy>
|
2023-01-20 15:03:22
| 1
| 2,255
|
Brian H.
|
75,185,612
| 1,920,003
|
Python Unittest: How to initialize selenium in a class and avoid having the browser opening twice?
|
<p>Consider the example below, since I'm initializing the driver in <code>setUp</code> method and using it in <code>test_login</code>, the browser will open twice, the first time during <code>setUp</code> and then it will be closed and the tests will begin.</p>
<p>If I remove the logic from <code>setUp</code> and put it in <code>test_login</code>, the driver will be undefined in <code>test_profile</code> and <code>tearDown</code></p>
<p>What's the correct way to initialize the driver and use it throughout the class while not causing the browser to open twice?</p>
<pre><code>from selenium import webdriver
import unittest
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager


class Test(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome(
            service=Service(ChromeDriverManager().install()))
        self.driver.get('https://example.com/login')
        self.current_url = self.driver.current_url
        self.dashboard_url = 'https://example.com/dashboard'

    def test_login(self):
        self.assertEqual(self.dashboard_url, self.current_url)

    def test_profile(self):
        self.driver.get('https://example.com/profile')

    def tearDown(self):
        self.driver.close()
</code></pre>
|
<python><unit-testing><selenium-webdriver><selenium-chromedriver><python-unittest>
|
2023-01-20 14:49:30
| 3
| 5,375
|
Lynob
|
75,185,545
| 5,901,870
|
After converting a Spark data frame to a pandas data frame, the pandas data frame throws KeyError: 0 when accessing a cell
|
<p>I am running this in a Databricks notebook.</p>
<p>I converted a Spark data frame [250 rows x 2 columns] to a pandas data frame using the method below:</p>
<pre><code>pandasDF = spark_df.toPandas()
</code></pre>
<p>This succeeds, and I was able to run the lines below successfully:</p>
<pre><code>print(pandasDF)
print(pandasDF.columns.tolist())
</code></pre>
<p>But when I try to access a cell in the pandas data frame, I get an error. Why is that?</p>
<pre><code>print(pandasDF [0][1])
KeyError: 0
</code></pre>
<p>Additional details about the exception:</p>
<pre><code>/databricks/python/lib/python3.8/site-packages/pandas/core/frame.py in __getitem__(self, key)
3022 if self.columns.nlevels > 1:
3023 return self._getitem_multilevel(key)
-> 3024 indexer = self.columns.get_loc(key)
3025 if is_integer(indexer):
3026 indexer = [indexer]
/databricks/python/lib/python3.8/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
3080 return self._engine.get_loc(casted_key)
3081 except KeyError as err:
-> 3082 raise KeyError(key) from err
</code></pre>
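<p>For context: <code>pandasDF[0]</code> is label-based, so it looks for a <em>column named</em> <code>0</code>, and since the Spark-derived frame has string column names, that raises <code>KeyError: 0</code>. Positional access goes through <code>.iloc</code>; column names in this sketch are illustrative:</p>

```python
import pandas as pd

pandasDF = pd.DataFrame({"name": ["a", "b", "c"], "value": [10, 20, 30]})

print(pandasDF.iloc[1, 0])    # row 1, column 0 (positional) -> 'b'
print(pandasDF["value"][1])   # column by name, then row label -> 20
# pandasDF[0] would raise KeyError: 0, because no column is named 0
```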
|
<python><pandas><dataframe><databricks>
|
2023-01-20 14:42:47
| 0
| 400
|
Mikesama
|
75,185,484
| 17,561,414
|
Further explode on a string datatype column in PySpark
|
<p>I have a df with a column called <code>data</code>. The <code>data</code> column can hold either a single value or a list of values per <code>identifier_field</code>; lists are shown with <code>[ ]</code> brackets under the <code>data</code> column. For example, <code>Allegren</code> under the <code>values</code> column happens to have only one <code>data</code> value for its <code>identifier_field</code>, but other <code>identifier_field</code>s can have more than one.</p>
<p>Moreover, a <code>physical_form</code> value can also have multiple <code>data</code> values. I would like to explode on the <code>data</code> column and present each value as a separate row.</p>
<p>schema of the df:</p>
<pre><code>root
|-- identifier_field: string (nullable = true)
|-- values: string (nullable = false)
|-- data: string (nullable = true)
|-- locale: string (nullable = true)
|-- scope: string (nullable = true)
</code></pre>
<p>How it looks now:
<a href="https://i.sstatic.net/FRSKP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FRSKP.png" alt="enter image description here" /></a></p>
<p>Desired OUTPUT:</p>
<p><a href="https://i.sstatic.net/mHgGn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mHgGn.png" alt="enter image description here" /></a></p>
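<p>One possible approach, as an untested sketch: since <code>data</code> is a string column, normalize each value into a Python list first (e.g. in a UDF) and then explode. The parsing helper below is plain Python so it can be shown standalone; the bracketed sample strings are illustrative:</p>

```python
import ast

def to_list(data):
    """'[a, b]' -> ['a', 'b']; scalar string -> [scalar]; None -> []."""
    if data is None:
        return []
    s = data.strip()
    if s.startswith("[") and s.endswith("]"):
        try:
            return [str(v) for v in ast.literal_eval(s)]
        except (ValueError, SyntaxError):
            # unquoted items like [Liquid, Gel] are not literals; split by hand
            return [p.strip() for p in s[1:-1].split(",") if p.strip()]
    return [s]

rows = [("physical_form", "[Liquid, Gel]"), ("values", "Allegren")]
exploded = [(field, v) for field, raw in rows for v in to_list(raw)]
print(exploded)
```

<p>In PySpark this could be wrapped as <code>udf(to_list, ArrayType(StringType()))</code> and combined with <code>F.explode</code>, or done natively with <code>F.split</code> after trimming the brackets.</p>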
|
<python><apache-spark><pyspark>
|
2023-01-20 14:37:07
| 2
| 735
|
Greencolor
|