Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
GPFlow Multiclass classification with vector inputs causes value error on shape mismatch<p>I am trying to follow the Multiclass classification in GPFlow (using v2.1.3) as described here:</p>
<p><a href="https://gpflow.readthedocs.io/en/master/notebooks/advanced/multiclass_classification.html" rel="nofollow noreferrer">https://gpflow.readthedocs.io/en/master/notebooks/advanced/multiclass_classification.html</a></p>
<p>The difference with the example is that the <em>X</em> vector is 10-dimensional and the number of classes to predict is 5. But there seems to be an error in dimensionality when using the inducing variables. I changed the kernel and used dummy data for reproducibility; I'm just looking to get this code to run. I put the dimensions of the variables below in case that is the issue. Any calculation of the loss causes an error like:</p>
<pre><code> ValueError: Dimensions must be equal, but are 10 and 5 for '{{node truediv}} = RealDiv[T=DT_DOUBLE](strided_slice_2, truediv/softplus/forward/IdentityN)' with input shapes: [200,10], [5].
</code></pre>
<p>It is as if it requires the <em>Y</em> results for the inducing variables, but the example on the gpflow site does not require it or it is confusing the length of the <em>X</em> input with the number of classes to predict.</p>
<p>I tried expanding the dimension of <em>Y</em> as in <a href="https://stackoverflow.com/questions/68419128/gpflow-classification-implementation">gpflow classification implementation</a>, but did not help.</p>
<p>Reproducible Code:</p>
<pre class="lang-py prettyprint-override"><code>import gpflow
from gpflow.utilities import ops, print_summary, set_trainable
from gpflow.config import set_default_float, default_float, set_default_summary_fmt
from gpflow.ci_utils import ci_niter
import random
import numpy as np
import tensorflow as tf
np.random.seed(0)
tf.random.set_seed(123)
num_classes = 5
num_of_data_points = 1000
num_of_functions = num_classes
num_of_independent_vars = 10
data_gp_train = np.random.rand(num_of_data_points, num_of_independent_vars)
data_gp_train_target_hot = np.eye(num_classes)[np.array(random.choices(list(range(num_classes)), k=num_of_data_points))].astype(bool)
data_gp_train_target = np.apply_along_axis(np.argmax, 1, data_gp_train_target_hot)
data_gp_train_target = np.expand_dims(data_gp_train_target, axis=1)
data_gp = ( data_gp_train, data_gp_train_target )
lengthscales = [0.1]*num_classes
variances = [1.0]*num_classes
kernel = gpflow.kernels.Matern32(variance=variances, lengthscales=lengthscales)
# Robustmax Multiclass Likelihood
invlink = gpflow.likelihoods.RobustMax(num_of_functions) # Robustmax inverse link function
likelihood = gpflow.likelihoods.MultiClass(num_of_functions, invlink=invlink) # Multiclass likelihood
inducing_inputs = data_gp_train[::5].copy() # inducing inputs (20% of obs are inducing)
# inducing_inputs = data_gp_train[:200,:].copy() # inducing inputs (20% of obs are inducing)
m = gpflow.models.SVGP(
    kernel=kernel,
    likelihood=likelihood,
    inducing_variable=inducing_inputs,
    num_latent_gps=num_of_functions,
    whiten=True,
    q_diag=True,
)
set_trainable(m.inducing_variable, False)
print_summary(m)
opt = gpflow.optimizers.Scipy()
opt_logs = opt.minimize(
    m.training_loss_closure(data_gp), m.trainable_variables, options=dict(maxiter=ci_niter(1000))
)
print_summary(m, fmt="notebook")
</code></pre>
<p>Dimensions:</p>
<pre><code>data_gp[0].shape
Out[132]: (1000, 10)
data_gp[1].shape
Out[133]: (1000, 5)
inducing_inputs.shape
Out[134]: (200, 10)
</code></pre>
<p>The error:</p>
<pre><code> ValueError: Dimensions must be equal, but are 10 and 5 for '{{node truediv}} = RealDiv[T=DT_DOUBLE](strided_slice_2, truediv/softplus/forward/IdentityN)' with input shapes: [200,10], [5].
</code></pre>
|
<p>When running your example I get a slightly different bug, but the issue is in how you define lengthscales and variances. You write:</p>
<pre class="lang-py prettyprint-override"><code>lengthscales = [0.1]*num_classes
variances = [1.0]*num_classes
kernel = gpflow.kernels.Matern32(variance=variances, lengthscales=lengthscales)
</code></pre>
<p>But the standard kernels require a scalar variance and the lengthscales to be a scalar or to match the number of <em>features</em>, so if you replace that code with the following:</p>
<pre class="lang-py prettyprint-override"><code>lengthscales = [0.1]*num_of_independent_vars
kernel = gpflow.kernels.Matern32(variance=1.0, lengthscales=lengthscales)
</code></pre>
<p>then it all runs fine.</p>
<p>This would give you a <em>shared</em> kernel for each output (class probability) with <em>independent</em> lengthscales per input dimension ("ARD").</p>
<p>If you want different kernels for each output (but, for example, isotropic lengthscale), you can achieve this with a <code>SeparateIndependent</code> multi-output kernel, see the <a href="https://gpflow.readthedocs.io/en/master/notebooks/advanced/multioutput.html#2.-Separate-independent-MOK-and-shared-independent-inducing-variables" rel="nofollow noreferrer">multioutput notebook example</a>.</p>
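<p>For completeness, here is a minimal sketch of that second option (untested, assuming GPflow 2.x's multioutput API): one <code>Matern32</code> per output with an isotropic lengthscale, sharing one set of inducing points across all outputs:</p>
<pre class="lang-py prettyprint-override"><code># one independent kernel per latent GP / class
kern_list = [gpflow.kernels.Matern32(lengthscales=0.1) for _ in range(num_of_functions)]
kernel = gpflow.kernels.SeparateIndependent(kern_list)

# share the same inducing points across all outputs
iv = gpflow.inducing_variables.SharedIndependentInducingVariables(
    gpflow.inducing_variables.InducingPoints(inducing_inputs)
)

m = gpflow.models.SVGP(
    kernel=kernel,
    likelihood=likelihood,
    inducing_variable=iv,
    num_latent_gps=num_of_functions,
)
</code></pre>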
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to store the outcome of a method<p>I am trying to draw squares in random positions with random rgb values, and I want 1000 of them to be created. The problem I'm facing is that every time the drawing loop runs, it randomizes everything again. Is there any way to stop this from happening?</p>
<pre><code>import pygame
import sys
import random
pygame.init()
win = pygame.display.set_mode((800,600))
pygame.display.set_caption("Simulation")
def safeZone():
    # Draws a top rectangle
    pygame.draw.rect(win, (50,205,50), (0, 0, 800, 100))

def dot():
    width = 10
    height = 10
    spawnX = random.randrange(1, 801)
    spawnY = random.randrange(1, 601)
    r = random.randrange(1, 256)
    g = random.randrange(1, 256)
    b = random.randrange(1, 256)
    pygame.draw.rect(win, (r, g, b), (spawnX, spawnY, width, height))

def population(size):
    for x in range(size):
        dot()

run = True
while run:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            run = False
    win.fill((255, 255, 255))
    safeZone() # Always draw dots after safe zone
    population(1000)
    pygame.display.update()

pygame.quit()
</code></pre>
|
<p>Create a dot collection, then just draw that dot collection. Now you can update the dot positions separately, and they will redraw in the new positions. Here, I'm having each dot move a random amount in every loop.</p>
<pre><code>import pygame
import sys
import random
pygame.init()
win = pygame.display.set_mode((800,600))
pygame.display.set_caption("Simulation")
class Dot:
    def __init__(self):
        self.spawnX = random.randrange(0, 800)
        self.spawnY = random.randrange(0, 600)
        self.r = random.randrange(0, 256)
        self.g = random.randrange(0, 256)
        self.b = random.randrange(0, 256)

def safeZone():
    # Draws a top rectangle
    pygame.draw.rect(win, (50,205,50), (0, 0, 800, 100))

def drawdot(dot):
    width = 10
    height = 10
    pygame.draw.rect(win, (dot.r, dot.g, dot.b), (dot.spawnX, dot.spawnY, width, height))

def population(dots):
    for dot in dots:
        dot.spawnX += random.randrange(-3,4)
        dot.spawnY += random.randrange(-3,4)
        drawdot(dot)

alldots = [Dot() for _ in range(1000)]

run = True
while run:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            run = False
    win.fill((255, 255, 255))
    safeZone() # Always draw dots after safe zone
    population(alldots)
    pygame.display.update()

pygame.quit()
</code></pre>
<p>A worthwhile modification is to store the whole rectangle in the object:</p>
<pre><code>...
class Dot:
    def __init__(self):
        self.location = [
            random.randrange(0, 800),
            random.randrange(0, 600),
            10, 10
        ]
        self.color = (
            random.randrange(0, 256),
            random.randrange(0, 256),
            random.randrange(0, 256)
        )

    def move(self, dx, dy):
        self.location[0] += dx
        self.location[1] += dy

def drawdot(dot):
    pygame.draw.rect(win, dot.color, dot.location)

def population(dots):
    for dot in dots:
        dot.move(random.randrange(-3,4), random.randrange(-3,4))
        drawdot(dot)
...
</code></pre>
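<p>One small addition worth making either way (not in your original code): the main loop in the first listing redraws 1000 rectangles as fast as the machine allows. Capping the frame rate with pygame's clock keeps the animation speed consistent and the CPU usage sane:</p>
<pre><code>clock = pygame.time.Clock()  # create once, before the main loop

while run:
    # ... event handling and drawing as above ...
    pygame.display.update()
    clock.tick(60)  # wait so the loop runs at most 60 times per second
</code></pre>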
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Subtraction assignment gives 0 and not a negative number<p>I'm messing around with probability and I'm simulating something where a user can bet <code>x</code> amount, if the user loses I want it to drain their overall "<code>bank</code>". If they win it adds money.</p>
<p>As you can see in the code when they win the money gets "added" using <code>+=</code> but how can I do this so the number becomes a negative when they lose (for example <code>-=</code>, that didn't work for me).</p>
<p>Code</p>
<pre class="lang-py prettyprint-override"><code>import random
def get_possible(x):
    return random.randrange(x)

def loop(possibilities, number, loop, bet):
    success = 0
    fail = 0
    for i in range(loop):
        if get_possible(x=possibilities) == number:
            success += 1
            bet += bet * 8
        else:
            fail += 1
            bet -= bet
    print("Success: " + str(success))
    print("Fail: " + str(fail))
    print("Total earned: " + str(bet))

loop(possibilities=14, number=1, loop=100, bet=100)
</code></pre>
<p>Here when they lose it says the total earned is <code>0</code>, I want it to say <code>- bet_amount</code></p>
|
<p>Introduce a new accumulator variable, e.g. <code>total</code> (better not to call it <code>sum</code>, since that would shadow the built-in function), and leave <code>bet</code> untouched:</p>
<pre><code>def loop(possibilities, number, loop, bet):
    success = 0
    fail = 0
    total = 0
    for i in range(loop):
        if get_possible(x=possibilities) == number:
            success += 1
            total += bet * 8
        else:
            fail += 1
            total -= bet
    print("Success: " + str(success))
    print("Fail: " + str(fail))
    print("Total earned: " + str(total))
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
mean of a column in pandas dataframe when the 'count' value is total<p>I have a dataframe like this</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
</tr>
</thead>
<tbody>
<tr>
<td>yes</td>
<td>4</td>
</tr>
<tr>
<td>yes</td>
<td>3</td>
</tr>
<tr>
<td>yes</td>
<td>3</td>
</tr>
<tr>
<td>total</td>
<td>nan</td>
</tr>
<tr>
<td>yes</td>
<td>5</td>
</tr>
<tr>
<td>yes</td>
<td>5</td>
</tr>
<tr>
<td>total</td>
<td>nan</td>
</tr>
</tbody>
</table>
</div>
<p>Desired output: when the value of the count column (A) is 'total', replace the NaN with the mean of the values above it, i.e. (4+3+3)/3 and (5+5)/2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
</tr>
</thead>
<tbody>
<tr>
<td>yes</td>
<td>4</td>
</tr>
<tr>
<td>yes</td>
<td>3</td>
</tr>
<tr>
<td>yes</td>
<td>3</td>
</tr>
<tr>
<td>total</td>
<td>3.33</td>
</tr>
<tr>
<td>yes</td>
<td>5</td>
</tr>
<tr>
<td>yes</td>
<td>5</td>
</tr>
<tr>
<td>total</td>
<td>5.0</td>
</tr>
</tbody>
</table>
</div>
|
<p>Let's load the data</p>
<pre><code>import pandas as pd
from io import StringIO
data = StringIO(
"""
A B
yes 4
yes 3
yes 3
total nan
yes 5
yes 5
total nan
""")
df = pd.read_csv(data, delim_whitespace=True)
df['B'] = df['B'].astype('float')
</code></pre>
<p>First we calculate the mean by group -- groups are defined as being separated by NaN</p>
<pre><code>dfm = df.groupby((df['B'].shift().isna()).cumsum(), as_index = False).mean()
dfm
</code></pre>
<p>we get</p>
<pre><code>
B
0 3.333333
1 5.000000
</code></pre>
<p>now we can assign these to the right cells in df:</p>
<pre><code>df.loc[df['A'] == 'total','B'] = dfm['B'].values
df
</code></pre>
<p>for the end result</p>
<pre><code>
A B
0 yes 4.000000
1 yes 3.000000
2 yes 3.000000
3 total 3.333333
4 yes 5.000000
5 yes 5.000000
6 total 5.000000
</code></pre>
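<p>A more compact alternative with the same grouping logic, using <code>transform</code> to broadcast each group's mean back onto its rows and <code>fillna</code> so that only the <code>total</code> rows are touched:</p>
<pre><code>grp = df['B'].shift().isna().cumsum()
df['B'] = df['B'].fillna(df.groupby(grp)['B'].transform('mean'))
</code></pre>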
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python subprocess terminal/space need to be adjusted to a value other than 80/24<p>I am working on invoking an executable as part of a script in Python using subprocess.Popen: <code>subprocess.Popen(command,stdin=subprocess.PIPE, stdout=subprocess.PIPE)</code>. The executable needs to open in a subprocess terminal/window/space larger than 80/24, else the results get truncated. I need to adjust the inputs/flags to subprocess so that the number of columns is changed. I have tried <code>env= {'COLUMNS':'300'}</code> but that doesn't help.</p>
<p>I use python 2.7 </p>
|
<p><code>subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)</code> executes the command and hands you a <code>Popen</code> object with pipes attached; it does not itself give you the output text. If all you need is the output as text, <code>os.popen</code> is simpler:</p>
<pre><code>import os
command = "some command"
res = os.popen(command).read() # get all content as text
res = list(os.popen(command)) # get lines as array elements
</code></pre>
<p>To change the terminal size (this only has an effect when the script runs in a real terminal that understands the escape sequences <code>resize</code> emits), use</p>
<pre><code>import os
rows = raw_input("rows: ")
cols = raw_input("cols: ")
os.system("resize -s {row} {col}".format(row=rows, col=cols))
</code></pre>
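<p>One more thing worth trying, on the assumption that your <code>env={'COLUMNS':'300'}</code> attempt failed because <code>env=</code> <em>replaces</em> the whole environment (so the child loses <code>PATH</code>, <code>TERM</code>, etc.): merge your overrides into a copy of the current environment instead. Whether the child actually honors <code>COLUMNS</code>/<code>LINES</code> depends on the program; some query the tty size directly.</p>
<pre><code>import os
import subprocess

env = dict(os.environ)   # copy the parent environment
env['COLUMNS'] = '300'   # honored by some, but not all, programs
env['LINES'] = '50'
p = subprocess.Popen(command, stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, env=env)
</code></pre>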
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Dictionary of Words as keys and the Sentences it appears in as values<p>I have a text which I split into a list of unique words using set. I also have split the text into a list of sentences. I then split that list of sentences into a list of lists (of the words in each sentence / maybe I don't need to do the last part)</p>
<pre><code>text = 'i was hungry. i got food. now i am not hungry i am full'
sents = ['i was hungry', 'i got food', 'now i am', 'not hungry i am full']
words = ['i', 'was', 'hungry', 'got', 'food', 'now', 'not', 'am', 'full']
split_sents = [['i', 'was', 'hungry'], ['i', 'got', 'food'], ['now', 'i', 'am', 'not','hungry','i','am','full']]
</code></pre>
<p>I want to write a loop or a list comprehension that makes a dictionary where each word in words is a key and if the word appears in a sentence each sentence is captured as a list value so I can then get some statistics like the count of sentences but also the average length of the sentences for each word...so far I have the following but it's not right.</p>
<pre><code>word_freq = {}
for sent in split_sents:
for word in words:
if word in sent:
word_freq[word] += sent
else:
word_freq[word] = sent
</code></pre>
<p>it returns a dictionary of word keys and empty values. Ideally, I'd like to do it without collections/Counter, though any solution is appreciated. I'm sure this question has been asked before but I couldn't find the right solution, so feel free to link and close if you link to a solution.</p>
|
<p>Here is an approach using list and dictionary comprehension</p>
<p><strong>Code:</strong></p>
<pre><code>text = 'i was hungry. i got food. now i am not hungry i am full'
sents = ['i was hungry', 'i got food', 'now i am', 'not hungry i am full']
words = ['i', 'was', 'hungry', 'got', 'food', 'now', 'not', 'am', 'full']
word_freq = {w:[s for s in sents if w in s.split()] for w in words }
print(word_freq)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>{
'i': ['i was hungry', 'i got food', 'now i am', 'not hungry i am full'],
'was': ['i was hungry'],
'hungry': ['i was hungry', 'not hungry i am full'],
'got': ['i got food'],
'food': ['i got food'],
'now': ['now i am'],
'not': ['not hungry i am full'],
'am': ['now i am', 'not hungry i am full'],
'full': ['not hungry i am full']
}
</code></pre>
<p><strong>Or if you want output sentences as list of words:</strong></p>
<pre><code>word_freq = {w:[s.split() for s in sents if w in s.split()] for w in words }
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>{
'i': [['i', 'was', 'hungry'], ['i', 'got', 'food'], ['now', 'i', 'am'], ['not', 'hungry', 'i', 'am', 'full']],
'was': [['i', 'was', 'hungry']],
'hungry': [['i', 'was', 'hungry'], ['not', 'hungry', 'i', 'am', 'full']],
'got': [['i', 'got', 'food']],
'food': [['i', 'got', 'food']],
'now': [['now', 'i', 'am']],
'not': [['not', 'hungry', 'i', 'am', 'full']],
'am': [['now', 'i', 'am'], ['not', 'hungry', 'i', 'am', 'full']],
'full': [['not', 'hungry', 'i', 'am', 'full']]}
</code></pre>
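<p>Since you said the end goal is statistics like the sentence count and average sentence length per word, those fall out of the second form directly, e.g.:</p>
<pre><code>stats = {
    w: {"count": len(sent_lists),
        "avg_len": sum(len(s) for s in sent_lists) / len(sent_lists)}
    for w, sent_lists in word_freq.items() if sent_lists
}
print(stats['am'])  # {'count': 2, 'avg_len': 4.0}
</code></pre>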
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
efficient way to check every value in a 2d python array<p>I have a 2D numpy array of values, a list of x-coordinates, and a list of y-coordinates. the x-coordinates increase left-to-right and the y-coordinates increase top-to-bottom.</p>
<p>For example:</p>
<pre><code>a = np.random.random((3, 3))
a[0][1] = 9.0
a[0][2] = 9.0
a[1][1] = 9.0
a[1][2] = 9.0
xs = list(range(1112, 1115))
ys = list(range(1109, 1112))
</code></pre>
<p>Output:</p>
<pre><code>[[0.48148651 9. 9. ]
[0.09030393 9. 9. ]
[0.79271224 0.83413552 0.29724989]]
[1112, 1113, 1114]
[1109, 1110, 1111]
</code></pre>
<p>I want to remove the values from the 2D array that are greater than 1. I also want to combine the lists <code>xs</code> and <code>ys</code> to get a list of all the coordinate pairs for points that are kept.</p>
<p>In this example I want to remove <code>a[0][1], a[0][2], a[1][1], a[1][2]</code> and I want the list of coordinate pairs to be</p>
<pre><code>[[1112, 1109], [1112,1110], [1112, 1111], [1113, 1111], [1114, 1111]]
</code></pre>
<p>I have been able to accomplish this using a double <code>for</code> loop and <code>if</code> statements:</p>
<pre><code>a_values = []
point_pairs = []

for i in range(0, a.shape[0]):
    for j in range(0, a.shape[1]):
        if (a[i][j] < 1):
            a_values.append(a[i][j])
            point_pairs.append([xs[j], ys[i]])

print(a_values)
print(point_pairs)
</code></pre>
<p>Output:</p>
<pre><code>[0.48148650831317796, 0.09030392566133771, 0.7927122386213029, 0.8341355206494774, 0.2972498933037804]
[[1112, 1109], [1112, 1110], [1112, 1111], [1113, 1111], [1114, 1111]]
</code></pre>
<p>What is a more efficient way of doing this?</p>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.nonzero.html" rel="nofollow noreferrer"><code>np.nonzero</code></a> to get the indices of the elements you removed:</p>
<pre><code>mask = a < 1
i, j = np.nonzero(mask)
</code></pre>
<p>The fancy indices <code>i</code> and <code>j</code> can be used to get the elements of <code>xs</code> and <code>ys</code> directly if they are numpy arrays:</p>
<pre><code>xs = np.array(xs)
ys = np.array(ys)
point_pairs = np.stack((xs[j], ys[i]), axis=-1)
</code></pre>
<p>You can also use <a href="https://numpy.org/doc/stable/reference/generated/numpy.take.html" rel="nofollow noreferrer"><code>np.take</code></a> to make the conversion happen under the hood:</p>
<pre><code>point_pairs = np.stack((np.take(xs, j), np.take(ys, i)), axis=-1)
</code></pre>
<p>The remaining elements of <code>a</code> are those not covered by the mask:</p>
<pre><code>a_points = a[mask]
</code></pre>
<p>Alternatively:</p>
<pre><code>i, j = np.nonzero(a < 1)
point_pairs = np.stack((np.take(xs, j), np.take(ys, i)), axis=-1)
a_points = a[i, j]
</code></pre>
<p>In this context, you can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> as a drop-in alias for <code>np.nonzero</code>.</p>
<p><strong>Notes</strong></p>
<ul>
<li><p>If you are using numpy, there is rarely a need for lists. Putting <code>xs = np.array(xs)</code>, or even just initializing it as <code>xs = np.arange(1112, 1115)</code> is faster and easier.</p>
</li>
<li><p>Numpy arrays should generally be indexed through a single index: <code>a[0, 1]</code>, not <code>a[0][1]</code>. For your simple case, the behavior just happens to be the same, but it will not be in the general case. <code>a[0, 1]</code> is an index into the original array. <code>a[0]</code> is a view of the first row of the array, i.e., a separate array object. <code>a[0][1]</code> is an index into that new object. You just happened to get lucky that you are getting a view that shares the base memory, so the assignment is visible in <code>a</code> itself. This would not be the case if you tried a mask or fancy index, for example.</p>
</li>
<li><p>On a related note, setting a rectangular swath in an array only requires one line; for your example (rows 0-1, columns 1-2) that is <code>a[:-1, 1:] = 9</code>.</p>
</li>
</ul>
<p>I would write your example something like this:</p>
<pre><code>a = np.random.random((3, 3))
a[:-1, 1:] = 9.0  # rows 0-1, cols 1-2, matching your example
xs = np.arange(1112, 1115)
ys = np.arange(1109, 1112)
i, j = np.nonzero(a < 1)
point_pairs = np.stack((xs[j], ys[i]), axis=-1)
a_points = a[i, j]
</code></pre>
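<p>With the slice matching your example, this reproduces your expected pairs:</p>
<pre><code>print(point_pairs)
# [[1112 1109]
#  [1112 1110]
#  [1112 1111]
#  [1113 1111]
#  [1114 1111]]
</code></pre>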
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Tests in Python of a class that requests to REST API<p>I am working on a desktop client application that accesses the OneDrive (Microsoft Graph) REST API to download and upload files.
The Onedrive class checks if the access token needs to be renewed before each request.</p>
<p>Should I unit test this class? Is there a way to mock these requests?</p>
<p>Here is the source code of the class:</p>
<pre class="lang-py prettyprint-override"><code>import json
from datetime import datetime, timedelta
from urllib.request import Request, urlopen
TOKEN_URL = 'https://login.microsoftonline.com/common/oauth2/v2.0/token'
DRIVE_BY_ID_URL = 'https://graph.microsoft.com/v1.0/drives/'
CLIENT_ID = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
class Onedrive:
    def __init__(self, access_token, refresh_token, expire_date):
        self._access_token = access_token
        self._refresh_token = refresh_token
        self._expire_date = expire_date
        self.timeout = 5

    def download(self, drive_id, file_id, filename):
        self._renew_token()
        url = f'{DRIVE_BY_ID_URL}/{drive_id}/items/{file_id}/content?AVOverride=1'
        with open(filename, 'wb+') as fp:
            headers = {'Authorization': self._access_token}
            request = Request(url, headers=headers)
            with urlopen(request, timeout=self.timeout) as response:
                fp.write(response.read())

    def upload(self, local_path, parent_drive_id, parent_id, filename):
        self._renew_token()
        url = f'{DRIVE_BY_ID_URL}/{parent_drive_id}/items/{parent_id}:/{filename}:/content'
        headers = {'Authorization': self._access_token, "Content-Type": "application/octet-stream"}
        with open(local_path, 'rb') as fp:
            data = fp.read()
            request = Request(url, headers=headers, data=data, method='PUT')
            with urlopen(request, timeout=self.timeout) as response:
                return json.load(response)

    def _renew_token(self):
        if self._expire_date <= datetime.now():
            body = f'client_id={CLIENT_ID}&refresh_token={self._refresh_token}&grant_type=refresh_token'
            self._acquire_token(body)

    def _acquire_token(self, body):
        now = datetime.now()
        request = Request(TOKEN_URL, data=body.encode(), method='POST')
        with urlopen(request, timeout=self.timeout) as response:
            data = json.load(response)
            self._access_token = data['access_token']
            self._refresh_token = data['refresh_token']
            expire = data['expires_in']
            self._expire_date = now + timedelta(seconds=int(expire))
</code></pre>
|
<p>Here is how you can improve the code above:</p>
<ol>
<li><p>The constructor (<code>__init__</code>) does not need to take <code>access_token</code>, <code>refresh_token</code> and an expiry date. To keep things simple for any module that consumes this class, all token-related work is handled inside the <code>Onedrive</code> class itself: it renews the token whenever you use the <code>download</code> or <code>upload</code> methods, which removes the explicit <code>self._renew_token()</code> calls from the other methods.</p></li>
<li><p>You can extract the lines below:</p></li>
</ol>
<pre><code>request = Request(url, headers=headers)
with urlopen(request, timeout=self.timeout) as response:
</code></pre>
<p>which are used in several methods, into their own function. This adheres to the single-responsibility principle: the new <code>request</code> method is solely responsible for performing HTTP requests, and the methods that consume it only care about the return value. With this design, if you later decide to switch to another library such as Requests or AIOHTTP, you only need to modify this one method.</p>
<ol start="3">
<li>For testing, you can now test the <code>request</code> method by patching <code>urlopen</code>. For <code>upload</code> and <code>download</code> there are a few approaches: mock the <code>request</code> method itself, or keep patching the <code>urlopen</code> used inside it. I recommend starting with unit tests (test each module with its dependencies mocked) and adding integration tests later (all modules connected).</li>
</ol>
<p>onedrive.py</p>
<pre><code>import json
from datetime import datetime, timedelta
from urllib.request import Request, urlopen

TOKEN_URL = "https://login.microsoftonline.com/common/oauth2/v2.0/token"
DRIVE_BY_ID_URL = "https://graph.microsoft.com/v1.0/drives/"
CLIENT_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"


class Onedrive:
    def __init__(self):
        # initialise the attributes that _renew_token reads, with the
        # expiry date in the past so the first call always acquires a
        # fresh token; in practice the refresh token would be loaded
        # from wherever you persist it
        self._access_token = None
        self._refresh_token = None
        self._expire_date = datetime.min
        (self._access_token,
         self._refresh_token,
         self._expire_date) = self._renew_token()

    @staticmethod
    def request(url, headers, method="GET", data=None):
        """
        Performs the actual HTTP request against the endpoint, built
        from the url, headers, method and data provided.
        """
        request = Request(url, headers=headers or {}, data=data, method=method)
        # no `with` block here: closing the response before returning it
        # would break the callers, which still need to read from it
        return urlopen(request, timeout=5)

    def download(self, drive_id, file_id, filename):
        url = f"{DRIVE_BY_ID_URL}/{drive_id}/items/{file_id}/content?AVOverride=1"
        with open(filename, "wb+") as fp:
            headers = {"Authorization": self._access_token}
            response = self.request(url, headers)
            fp.write(response.read())

    def upload(self, local_path, parent_drive_id, parent_id, filename):
        url = f"{DRIVE_BY_ID_URL}/{parent_drive_id}/items/{parent_id}:/{filename}:/content"
        headers = {
            "Authorization": self._access_token,
            "Content-Type": "application/octet-stream",
        }
        with open(local_path, "rb") as fp:
            response = self.request(
                url=url, headers=headers, data=fp.read(), method="PUT"
            )
            return json.load(response)

    def _renew_token(self):
        access_token = self._access_token
        refresh_token = self._refresh_token
        expire_date = self._expire_date
        if self._expire_date <= datetime.now():
            body = f"client_id={CLIENT_ID}&refresh_token={self._refresh_token}&grant_type=refresh_token"
            access_token, refresh_token, expire_date = self._acquire_token(body)
        return access_token, refresh_token, expire_date

    def _acquire_token(self, body):
        now = datetime.now()
        response = self.request(
            url=TOKEN_URL, headers=None, data=body.encode(), method="POST"
        )
        data = json.load(response)
        access_token = data["access_token"]
        refresh_token = data["refresh_token"]
        expire = data["expires_in"]
        expire_date = now + timedelta(seconds=int(expire))
        return access_token, refresh_token, expire_date
</code></pre>
<p>test_onedrive.py</p>
<pre><code>from unittest import TestCase
from unittest.mock import patch, MagicMock

from onedrive import Onedrive


class TestOneDrive(TestCase):
    # patch the name the module actually looks up: onedrive.py does
    # `from urllib.request import urlopen`, so the target has to be
    # onedrive.urlopen, not urllib.request.urlopen
    @patch("onedrive.urlopen")
    def test_request(self, mock_urlopen):
        mock = MagicMock()
        mock.getcode.return_value = 200
        mock.read.return_value = "some-contents"
        mock_urlopen.return_value = mock
        response = Onedrive.request(
            url="https://login.microsoftonline.com/common/oauth2/v2.0/token",
            headers={"Content-Type": "application/x-www-form-urlencoded"},
            method="POST",
            data={"data": "some_data"},
        )
        self.assertEqual(response.getcode(), 200)
        self.assertEqual(response.read(), "some-contents")

    @patch("onedrive.urlopen")
    def test_download(self, mock_urlopen):
        mock = MagicMock()
        mock.getcode.return_value = 200
        # the constructor's token request is parsed with json.load and
        # download() writes the body to a binary file, so the mocked
        # body has to be valid JSON bytes
        mock.read.return_value = (
            b'{"access_token": "t", "refresh_token": "r", "expires_in": 3600}'
        )
        mock_urlopen.return_value = mock
        Onedrive().download(
            drive_id="drive-id", file_id="file_id", filename="some_filename"
        )
        # now you just check whether the file was actually written
</code></pre>
<p>reference:
<a href="https://docs.python.org/3/howto/urllib2.html" rel="nofollow noreferrer">https://docs.python.org/3/howto/urllib2.html</a>
<a href="https://docs.python.org/3/library/unittest.mock.html" rel="nofollow noreferrer">https://docs.python.org/3/library/unittest.mock.html</a></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Converting a tensorflow object into a jpeg on local drive<p>I'm following the tutorial <a href="https://www.tensorflow.org/tutorials/generative/deepdream" rel="nofollow noreferrer">here</a>:</p>
<p>in order to create a Python program that will create a deep-dream style image and save it onto disk. I thought that changes to the following lines should do the trick:</p>
<pre><code> img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
tf.compat.v1.enable_eager_execution()
fname = '2.jpg'
with tf.compat.v1.Session() as sess:
enc = tf.io.encode_jpeg(img)
fwrite = tf.io.write_file(tf.constant(fname), enc)
result = sess.run(fwrite)'
</code></pre>
<p>the key line being encode_jpeg, however this gives me the following error:</p>
<pre><code> Traceback (most recent call last):
File "main.py", line 246, in <module>
enc = tf.io.encode_jpeg(img)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
packages/tensorflow/python/ops/gen_image_ops.py", line 1496, in encode_jpeg
name=name)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
packages/tensorflow/python/framework/op_def_library.py", line 470, in
_apply_op_helper
preferred_dtype=default_dtype)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
packages/tensorflow/python/framework/ops.py", line 1465, in convert_to_tensor
raise RuntimeError("Attempting to capture an EagerTensor without "
RuntimeError: Attempting to capture an EagerTensor without building a function.
</code></pre>
|
<p>You can simply convert the <code>img</code> tensor into a numpy array and then save it, since you have eager execution enabled (it's enabled by default in TF 2.x). There is no need for sessions or for <code>tf.compat.v1.enable_eager_execution()</code>.</p>
<p>So, the modified code for saving the image will be:</p>
<pre><code>import numpy as np
import PIL.Image  # both of these are already imported in the tutorial

img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
fname = '2.jpg'
PIL.Image.fromarray(np.array(img)).save(fname)
</code></pre>
<p>You don't have to use sessions in tf2.0 to get the values from tensor.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pickleable partial class<p>I need to partially instantiate a class and later use it in multiprocessing.
Multiprocessing pickles classes to pass between processes.
How can I pickle a partially instantiated class?</p>
<p>I'm targeting Python 3.8+</p>
<p>P.S. Here is my <code>partialclass</code> implementation:</p>
<pre class="lang-py prettyprint-override"><code>def partialclass(cls, *args, **kwargs):
    new_cls = type(cls.__name__, (cls,), {})
    new_cls.__init__ = partialmethod(cls.__init__, *args, **kwargs)
    return new_cls
</code></pre>
<p>EDIT: I thought about overriding <code>__reduce__</code> method, but class can potentially take arguments, which I don't know at the time of definition.</p>
|
<p>You are creating a <em>class</em>, not a 'partial instance'. Pickle doesn't serialise classes, as it assumes that all <em>code</em> can be loaded from source instead.</p>
<p>Instead, produce <em>instances</em> of a utility class, one that can be pickled, and when called does the same thing as calling your generated class:</p>
<pre><code>class Partial:
    def __init__(self, cls, *args, **kwargs):
        self.cls, self.args, self.kwargs = cls, args, kwargs

    def __call__(self, *args, **kwargs):
        return self.cls(*self.args, *args, **self.kwargs, **kwargs)
</code></pre>
<p>The class attributes are then pickled (here, a reference to a class object, a tuple and a dictionary), and you can call the instance to produce the actual class:</p>
<pre><code>>>> import pickle
>>> class Foo:
...     def __init__(self, *args, **kwargs):
...         self.a, self.k = args, kwargs
...     def __repr__(self):
...         return f"<Foo(*{self.a!r}, **{self.k!r})>"
...
>>> pickled = pickle.dumps(Partial(Foo, "bar", monty="python"))
>>> partial = pickle.loads(pickled)
>>> partial("baz", spam="ham")
<Foo(*('bar', 'baz'), **{'monty': 'python', 'spam': 'ham'})>
</code></pre>
<p>However, the above is <strong>basically</strong> re-inventing the <a href="https://docs.python.org/3/library/functools.html#functools.partial" rel="nofollow noreferrer"><code>functools.partial()</code> object</a>, which are instances of a utility class that can be pickled, and called, just like the above Python re-implementation:</p>
<pre><code>>>> import pickle
>>> from functools import partial
>>> pickled = pickle.dumps(partial(Foo, "bar", monty="python"))
>>> fpartial = pickle.loads(pickled)
>>> fpartial("baz", spam="ham")
<Foo(*('bar', 'baz'), **{'monty': 'python', 'spam': 'ham'})>
</code></pre>
<p><code>functools.partial()</code> is more memory efficient and faster, however.</p>
<p>The only possible advantage would be that you could use <code>isinstance(object, Partial)</code> to distinguish it from <code>functools.partial()</code>. But you could just use subclassing for that too:</p>
<pre><code>import functools
class CPartial(functools.partial): pass
</code></pre>
<p>and it'd work just the same.</p>
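<p>A quick check that the subclass really does round-trip through pickle (same <code>Foo</code> as above; <code>functools.partial.__reduce__</code> preserves the subclass type):</p>
<pre><code>>>> cp = pickle.loads(pickle.dumps(CPartial(Foo, "bar")))
>>> cp("baz")
<Foo(*('bar', 'baz'), **{})>
>>> isinstance(cp, CPartial)
True
</code></pre>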
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Panda dataframe conversion of series of 03Mar2020 date format to 2020-03-03<p>I'm not able to convert input</p>
<pre><code>Dates = {'dates': ['05Sep2009','13Sep2011','21Sep2010']}
</code></pre>
<p>to desired output</p>
<pre><code>Dates = {'dates': [2019-09-02,2019-09-13,2019-09-21]}
</code></pre>
<p>using Pandas Dataframe.</p>
<pre><code>data = {'dates': ['05Sep2009','13Sep2011','21Sep2010']}
df = pd.DataFrame(data, columns=['dates'])
df['dates'] = pd.to_datetime(df['dates'], format='%Y%m%d')
print (df)
</code></pre>
<p>Output:</p>
<pre><code>ValueError: time data '05Sep2009' does not match format '%Y%m%d' (match)
</code></pre>
<p>I'm new to this library. Help is appreciated.</p>
|
<p>Currently the months are abbreviated and are not numeric, so you can't use <code>%m</code>.
To convert abbreviated months and get the expected output use <code>%b</code>, like this:</p>
<pre><code>df['dates'] = pd.to_datetime(df['dates'], format='%d%b%Y')
</code></pre>
<p><strong>Update:</strong> to convert the DataFrame back to a dictionary you can use the function <code>to_dict()</code> but first, to get the desired output, you need to convert the column from <code>datetime</code> back to <code>string</code> type. You can achieve it through this:</p>
<pre><code>df['dates'] = df['dates'].astype(str)
df.to_dict('list')
</code></pre>
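<p>If you'd rather be explicit about the output format than rely on <code>astype(str)</code>, <code>dt.strftime</code> does the same job:</p>
<pre><code>df['dates'] = df['dates'].dt.strftime('%Y-%m-%d')
df.to_dict('list')  # {'dates': ['2009-09-05', '2011-09-13', '2010-09-21']}
</code></pre>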
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Add diagonal line to hist2d with matplotlib<p>I am analyzing the heights/widths of some ML label boxes by creating a 2D histogram with matplotlib:</p>
<pre><code>plt.hist2d(widths, heights, 100, [(0, 0.1), (0, 0.1)])
plt.gca().set_aspect(aspect=1)
plt.savefig(outimage)
</code></pre>
<p><a href="https://i.stack.imgur.com/vAYVl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vAYVl.png" alt="enter image description here" /></a></p>
<p>But I want to add a semi-transparent diagonal line from (0,0) to (0.1,0.1)--the line indicating a square label box--to emphasize how the majority of the aspect ratios tend towards the tall and narrow.</p>
<p>I've tried adding this line both before the <code>hist2d()</code> call and after:</p>
<pre><code>plt.plot(0, 0, 0.1, 0.1, marker = "o")
</code></pre>
<p>But as you can see in the picture, it just creates the endpoint dots, not the line. How do I add a line <em>on top of the existing plot</em>, ideally a semi-transparent line?</p>
|
<p>You should pass the coordinates of the line plot as a list of x values and a list of y values. For the transparency you can use the <code>alpha</code> parameter. So your code for the line should be</p>
<pre><code>plt.plot([0, 0.1], [0, 0.1], marker="o", alpha=0.5)
</code></pre>
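<p>As an aside: on matplotlib 3.3+ you can also draw the diagonal without hand-picking endpoints using <code>axline</code>, which extends the line across the whole axes:</p>
<pre><code>plt.axline((0, 0), slope=1, color="white", alpha=0.5)
</code></pre>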
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python: separate utilities file or use static methods?<p>In python I have getters and setters, and calculate utilities. The getters return the property, the setters set the property and the calculate utilities are functions that calculate stuff with the arguments. E.g.</p>
<p><code>obj.property = calculate_property(arguments)</code></p>
<p>However, there are various ways to implement <code>calculate_property</code>, it could</p>
<ol>
<li>be a part of the class <code>calculate_property(self)</code> and return <code>_property</code></li>
<li>It could be an independent function which could be moved to a separate file</li>
<li>It could be a static method</li>
</ol>
<p>My problem with 1) is that the arguments to calculate_property can be obscured because they might be contained within the object itself, and it is not possible to use the function outside of the object. In both 2 and 3 the functional form of <code>calculate</code> is retained but the namespaces are different.</p>
<p>So I'm ruling out 1) for this reason.</p>
<p>If I go with 2) the advantage is that I can split all my utilities up into small files e.g. utilities_a.py, utilities_b.py and import them as modules.</p>
<p>If I go with 3) the functions remain in the same file making it overall longer, but the functions are encapsulated better.</p>
<p>Which is preferred between 2) and 3)? Or am I missing a different solution?</p>
|
<p>You can achieve both <code>2</code> and <code>3</code> with this example of adding a static method to your class that is defined in a separate file.</p>
<p><code>helper.py</code>:</p>
<pre><code>def square(self):
    self.x *= self.x
</code></pre>
<p><code>fitter.py</code>:</p>
<pre><code>class Fitter(object):
    def __init__(self, x: int):
        self.x = x

    from helper import square

if __name__ == '__main__':
    f = Fitter(9)
    f.square()
    print(f.x)
</code></pre>
<p>Output:</p>
<pre><code>81
</code></pre>
<p>Adapted from this <a href="https://stackoverflow.com/a/47562412/2359945">answer</a>, which was doing this for a class method. Note that because <code>square</code> takes <code>self</code>, it binds as an ordinary instance method here rather than a true static method; if you want a real static method, wrap it with <code>staticmethod(...)</code> in the class body.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Problem loading a web app using flask, python, HTML and PostgreSQL; not able to connect python and html scripts plus Internal Server Error message<p>Recently, I have started working on a new project : a web app which will take a name as an input from a user and as result outputs the database rows related to the user input. The database is created using PostgreSQL and in order to complete the task I am using Python as a programming language, followed by Flask (I am new to it) and HTML. I have created 2 source codes, 1 in Python as below :</p>
<pre><code>import os
import psycopg2 as pg
import pandas as pd
import flask
app = flask.Flask(__name__)

@app.route('/')
def home():
    return "<a href='/search'>Input a query</a>"

@app.route('/search')
def search():
    term = flask.request.args.get('query')
    db = pg.connect(
        host="***",
        database="***",
        user="***",
        password="***")
    db_cursor = db.cursor()
    q = ('SELECT * FROM table1')
    possibilities = [i for [i] in db_cursor.execute(q) if term.lower() in i.lower()]
    return flask.jsonify({'html':'<p>No results found</p>' if not possibilities else '<ul>\n{}</ul>'.format('\n'.join('<li>{}</li>'.format(i) for i in possibilities))})

if __name__ == '__main__':
    app.run()
</code></pre>
<p>and HTML code :</p>
<pre><code><html>
<head>
<script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
</head>
<body>
<input type='text' name ='query' id='query'>
<button type='button' id='search'>Search</button>
<div id='results'></div>
</body>
<script>
$(document).ready(function(){
$('#search').click(function(){
var text = $('#query').val();
$.ajax({
url: "/search",
type: "get",
data: {query: text},
success: function(response) {
$("#results").html(response.html);
},
error: function(xhr) {
//Do Something to handle error
}
});
});
});
</script>
</html>
</code></pre>
<p>For these scripts I read the discussion <a href="https://stackoverflow.com/questions/50244721/how-to-take-html-user-input-and-query-it-via-python-sql">here</a>.
These scripts are giving me trouble and I have two main questions.
First: how are these two source files connected to each other? Whenever I run the Python script or the HTML, they look completely disconnected and are not functioning. Moreover, when I run the Python script it gives me this error message on the webpage:</p>
<blockquote>
<p>Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
</blockquote>
<p>and this message on terminal :</p>
<blockquote>
<p>Serving Flask app 'userInferface' (lazy loading)
Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
Debug mode: off
Running on....</p>
</blockquote>
<p>Can someone please help me by showing how can these 2 scripts connect and why am I getting such errors. Thank you.</p>
|
<p>You need to use <code>render_template</code> to connect Flask and your HTML code. Save your HTML file as <code>index.html</code> inside a <code>templates</code> folder next to your Python script (Flask looks there by default). For example:</p>
<pre><code>from flask import render_template

@app.route("/", methods=['GET'])
def index():
    return render_template('index.html')
</code></pre>
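<p>Separately, once the page and route are talking to each other, your <code>/search</code> view will still fail: with psycopg2, <code>cursor.execute()</code> returns <code>None</code>, so you cannot iterate over it. Fetch the rows explicitly (the column name below is just an assumption for illustration):</p>
<pre><code>db_cursor.execute('SELECT name FROM table1')
rows = db_cursor.fetchall()           # list of tuples
possibilities = [r[0] for r in rows if term.lower() in r[0].lower()]
</code></pre>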
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to use a list to print out a sentence in Python<p>I have this list of data</p>
<pre><code>[
{"type": "Square", "area": 150.5},
{"type": "Rectangle", "area": 80},
{"type": "Rectangle", "area": 660},
{"type": "Circle", "area": 68.2},
{"type": "Triangle", "area": 20}
]
</code></pre>
<p>I want to define an object to represent this data, which takes the values from 'type' and 'area' and store it in a class (I call this class Object).</p>
<p>Here is what I tried to do:</p>
<pre><code>def __init__(self, list):
    self.list = list
</code></pre>
<p>Then from this class I want to print out type and area for each object in the class.</p>
|
<p>If you want to print it, you can use an <code>f-string</code>:</p>
<pre><code>listt = [
    {"type": "Square", "area": 150.5},
    {"type": "Rectangle", "area": 80},
    {"type": "Rectangle", "area": 660},
    {"type": "Circle", "area": 68.2},
    {"type": "Triangle", "area": 20}
]

for i, shape in enumerate(listt, start=1):
    print(f'{i}- {shape["type"]} with area size {shape["area"]}')

>> 1- Square with area size 150.5
   2- Rectangle with area size 80
   3- Rectangle with area size 660
   4- Circle with area size 68.2
   5- Triangle with area size 20
</code></pre>
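<p>And if you do want the class you described wrapping this data, here is a minimal sketch (the <code>Shape</code> name and <code>describe</code> method are just illustrative, not anything standard):</p>
<pre><code>class Shape:
    def __init__(self, type, area):  # parameter names match the dict keys
        self.type = type
        self.area = area

    def describe(self):
        return f'{self.type} with area size {self.area}'

shapes = [Shape(**d) for d in listt]
for i, shape in enumerate(shapes, start=1):
    print(f'{i}- {shape.describe()}')
</code></pre>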
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
TypeError: 'type' object is not subscriptable during reading data<p>I'm pretty new to importing data and I'm trying to write a def that reads columns in the presented order. The error is in the function, not in the data.</p>
<pre class="lang-py prettyprint-override"><code>def read_eos(filename: str, order: dict[str, int] = {"density": 0, "pressure": 1,
                                                     "energy_density": 2}):
    # Paths to files
    current_dir = os.getcwd()
    INPUT_PATH = os.path.join(current_dir, "input")
    in_eos = np.loadtxt(os.path.join(INPUT_PATH, filename))
    eos = np.zeros(in_eos.shape)
    # Density
    eos[:, 0] = in_eos[:, order["density"]]
    eos[:, 1] = in_eos[:, order["pressure"]]
    eos[:, 2] = in_eos[:, order["energy_density"]]
    return eos
</code></pre>
|
<p>Looks like the problem is right in the type hint of one of your function parameters: <code>dict[str, int]</code>. As far as Python is concerned, <code>[str, int]</code> is a <em>subscript</em> of the type <code>dict</code>, but <code>dict</code> can't accept that subscript, hence your error message.</p>
<p>The fix is fairly simple. First, if you haven't done so already, add the following import statement above your function definition:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict
</code></pre>
<p>Then, change <code>dict[str, int]</code> to <code>Dict[str, int]</code>. (On Python 3.9+ the built-in <code>dict[str, int]</code> is valid as-is, per PEP 585; alternatively, on 3.7+ you can add <code>from __future__ import annotations</code> so that annotations are not evaluated at definition time.)</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
instances.setMetadata() - nothing changes<p>I'm trying to add a startup-script to an existing machine. When I do it from Google's API tester ('Try this API') it works, but with the Python client it seems like nothing's changing...
Here's the code (just an example of the request) + the response I get (which looks fine) and the machine's data after sending the request.</p>
<h1>My Request:</h1>
<pre><code>from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
from pprint import pprint
jsonPath = "myJSON.json"
credentials = GoogleCredentials.from_stream(jsonPath)
# Define gcp service
service = discovery.build('compute', 'v1', credentials=credentials)
#Request Body
bodyData = {"fingerprint": "***","items": [{"key": "startup-script","value": "#! /bin/bash\n\nservice start sshguard"}]}
#setMeta Data Request
instance = service.instances().setMetadata(project="<PROJECT>", zone="europe-west1-b", instance="<ID>", body=bodyData)
#Execute request
response = instance.execute()
# Get instacne details
instanceget = service.instances().get(project="<PROJECT>", zone="europe-west1-b", instance="<ID>").execute()
#Print response + New Metadata
pprint(response)
print("'New Metadata:", instanceget['metadata'])
</code></pre>
<h1>Response I Get</h1>
<pre><code>{'id': '6099825023953066427',
'insertTime': '2020-03-16T04:47:32.880-07:00',
'kind': 'compute#operation',
'name': 'operation-84359252330-5a0f7626e861e-cf743913-4f05cd',
'operationType': 'setMetadata',
'progress': 0,
'selfLink': 'https://www.googleapis.com/compute/v1/projects/<Project>/zones/europe-west1-b/operations/operation-1584359252330-5a0f7626e861e-cf743913-4f05cd31',
'startTime': '2020-03-16T04:47:32.899-07:00',
'status': 'RUNNING',
'targetId': '<ID>',
'targetLink': 'https://www.googleapis.com/compute/v1/projects/<Project>/zones/europe-west1-b/instances/<ID>',
'user': 'pubsub-aws@<Project>.iam.gserviceaccount.com',
'zone': 'https://www.googleapis.com/compute/v1/projects/<Project>/zones/europe-west1-b'}
'New Metadata: {'fingerprint': '***', 'kind': 'compute#metadata'}
</code></pre>
<p>As you can see, the startup-script never gets added to the Metadata... I thought maybe it's something with the JSON format? Maybe the request body needs to be encoded or something?
Will appreciate any help!
Thanks in advance.</p>
|
<p>Solved: I gave the service account the 'Owner' role and it worked, so the original credentials simply lacked the required permissions. (Owner is broader than necessary; a narrower role such as <code>roles/compute.instanceAdmin.v1</code> should also be enough for <code>setMetadata</code>.)
Thanks everyone!</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Load a gpx file on python<p>I'm a Mac user. I'm trying to load a .gpx file on python,using the following code:</p>
<pre><code>import gpxpy
import gpxpy.gpx
gpx_file = open('Downloads/UAQjXL9WRKY.gpx', 'r')
</code></pre>
<p>And I get the following message:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'Downloads/UAQjXL9WRKY.gpx'
</code></pre>
<p>Could someone help me figure out why? Thanks in advance.</p>
|
<p>Obviously, one reason would be that the file does not, in fact, exist, but let us assume that it does.</p>
<p>A relative filename (i.e., one that does not start with a <code>/</code>) is interpreted relative to the current working directory of the process. You are apparently expecting that to be the user's home directory, and you are apparently wrong.</p>
<p>One way around this would be to explicitly add the user's home directory to the filename:</p>
<pre><code>import os
home = os.path.expanduser('~')
absfn = os.path.join(home, 'Downloads/whatever.gpx')
gpx_file = open(absfn, ...)
</code></pre>
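<p>The same thing with <code>pathlib</code> (Python 3.5+), which many people find more readable:</p>
<pre><code>from pathlib import Path

gpx_path = Path.home() / 'Downloads' / 'UAQjXL9WRKY.gpx'
with open(gpx_path) as gpx_file:
    gpx = gpxpy.parse(gpx_file)
</code></pre>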
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
'Main' object has no attribute 'TabWidget' between parent Ui class "Ui_MainWindow" and child class "Ui_dialog_List"<p>According to the code below, when I try to take the MainTabIndex value from the addLineDialoge function, which is part of the parent Main () class, and try to use it in the child Dialoge_list class, an 'AttributeError:' Main 'object has no attribute' MainTabIndex 'error occurs.
Please tell me how to solve this problem.</p>
<p>Parent class:</p>
<pre><code>class Main(QMainWindow, Ui_MainWindow):
    def __init__(self):
        super(Main, self).__init__()
        self.setupUi(self)
        self.addLineButt.clicked.connect(self.addLineDialoge)

    def addLineDialoge(self):
        self.**MainTabIndex** = self.MainTab.currentIndex()
        self.**ListTabIndex** = self.ListTab.currentIndex()
        print(self.ListTabIndex)
        addList = Dialoge_list()
        addWaste = Dialoge_Waste()
        if self.MainTab.currentIndex() == 0:
            addList.exec()
        elif self.MainTab.currentIndex() == 1:
            addWaste.exec()
</code></pre>
<p>Child class:</p>
<pre><code>class Dialoge_list(QDialog, Ui_dialog_List):
    def __init__(self):
        super(Dialoge_list, self).__init__()
        self.setupUi(self)
        self.addItemButt.clicked.connect(self.addListToBase)

    def addListToBase(self):
        numCell = self.numLine.text()
        thikCell = self.thikLine.text()
        matCell = self.matLine.text()
        sizeCell = self.sizeLine.text()
        quantityCell = self.quantityLine.text()
        dateinCell = self.dateinLine.text()
        applyCell = self.applyLine.text()
        noticeCell = self.noticeLine.text()
        list_tab = [(numCell,
                     thikCell,
                     matCell,
                     sizeCell,
                     quantityCell,
                     dateinCell,
                     applyCell,
                     noticeCell)]
        if "".__eq__(numCell) and "".__eq__(thikCell) and "".__eq__(matCell) and "".__eq__(
                sizeCell) and "".__eq__(quantityCell) and "".__eq__(dateinCell) and "".__eq__(
                applyCell) and "".__eq__(noticeCell):
            emptyError = EmptyErrorDialoge()
            emptyError.exec()
        elif Main().**MainTabIndex** == 0 and Main().**ListTabIndex** == 0:
            . . . .
</code></pre>
|
<p>I found a solution. I needed to make the following changes to the <code>super()</code> call of the <code>Dialoge_list</code> class. Was:</p>
<pre><code>class Dialoge_list(QDialog, Ui_dialog_List):
    def __init__(self):
        super(Dialoge_list, self).__init__()
</code></pre>
<p>Became:</p>
<pre><code>class Dialoge_list(QDialog, Ui_dialog_List):
    def __init__(self, parent):
        super(Dialoge_list, self).__init__(parent)
</code></pre>
<p>I also made changes to the <code>addLineDialoge</code> function. Was:</p>
<pre><code>def addLineDialoge(self):
    self.MainTabIndex = self.MainTab.currentIndex()
    self.ListTabIndex = self.ListTab.currentIndex()
    print(self.ListTabIndex)
    addList = Dialoge_list()
    . . .
</code></pre>
<p>Became</p>
<pre><code>def addLineDialoge(self):
    self.MainTabIndex = self.MainTab.currentIndex()
    self.ListTabIndex = self.ListTab.currentIndex()
    print(self.ListTabIndex)
    addList = Dialoge_list(self)
    . . .
</code></pre>
<p>After that, the child class can read <code>MainTabIndex</code> and <code>ListTabIndex</code> from the parent; to do this you just need to go through the <code>parent()</code> method. Was:</p>
<pre><code>class Dialoge_list(QDialog, Ui_dialog_List):
    def __init__(self):
        super(Dialoge_list, self).__init__()
        self.setupUi(self)
        self.addItemButt.clicked.connect(self.addListToBase)
</code></pre>
<p>Became:</p>
<pre><code>class Dialoge_list(QDialog, Ui_dialog_List):
    def __init__(self, parent):
        super(Dialoge_list, self).__init__(parent)
        self.setupUi(self)
        self.addItemButt.clicked.connect(self.addListToBase)
        self.MainTabIndex = str(self.parent().MainTab.currentIndex())
        self.ListTabIndex = str(self.parent().ListTab.currentIndex())
</code></pre>
<p>After that, these values can be used anywhere in the child class.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Numpy masked array - find segment nearest to specific index<p>I have a series of small images, stored as 2d numpy arrays. After applying a threshold, I turn the input image into a masked array (mask out values below threshold). Based on this, as shown in the plot below I end up with a number of different objects (2 sometimes more). How can I identify the object that is nearest to center of this masked image, and return an array containing only that object?</p>
<p><a href="https://i.stack.imgur.com/8PUvu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8PUvu.png" alt="enter image description here" /></a></p>
<pre><code>img = masked_array(
data=[[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, 1.0, 1.0, --, --, --, --,
--, --, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, 1.0, 1.0, 1.0, --, --, --, --,
--, --, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, 1.0, 1.0, 1.0, --, --, --, --,
--, --, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, 1.0, 1.0, --, --, --, --,
--, --, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, 1.0, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, 1.0, 1.0, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, 1.0, 1.0, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, 1.0, 1.0, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --],
[--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --, --, --, --, --, --, --, --, --,
--, --, --, --, --, --, --, --]],
mask=[[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, False, False,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, False, False, False,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, False, False, False,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, False, False,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, False, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, False, False, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, False, False, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, False, False, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True],
[ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True]],
fill_value=1e+20)
</code></pre>
|
<p>Final result:<br />
<a href="https://i.stack.imgur.com/sKXyR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sKXyR.png" alt="enter image description here" /></a></p>
<h2>4 steps involved here:</h2>
<ol>
<li>get index of the center and indices of non-masked points</li>
<li>get the closest point to the center</li>
<li>get the "object" in which the point closest to the center belongs. You can get this using <a href="https://stackoverflow.com/questions/46441893/connected-component-labeling-in-python">opencv connected components</a> (or <a href="https://stackoverflow.com/questions/64345584/how-to-properly-use-cv2-findcontours-on-opencv-version-4-4-0">findContour</a>)</li>
<li>Modify the image to keep only the object you are interested in</li>
</ol>
<h1>Step 1</h1>
<p>Getting the indices of non-masked points to find the one closest to the center (the snippets below assume <code>import numpy as np</code>). Note that <code>img.nonzero()</code> yields indices in (row, col) order while <code>center</code> is built as [width // 2, height // 2]; both happen to be 20 here, but for a non-square image you would want to match the ordering.</p>
<pre class="lang-py prettyprint-override"><code>>>> center = [[img.shape[1] // 2, img.shape[0] // 2]]
>>> center
[[20, 20]]
>>> indices = np.array(list(zip(*img.nonzero())))
>>> indices
array([[18, 25],
[18, 26],
[19, 24],
[19, 25],
[19, 26],
[20, 24],
[20, 25],
[20, 26],
[21, 25],
[21, 26],
[22, 19],
[23, 19],
[23, 20],
[24, 19],
[24, 20],
[25, 19],
[25, 20]], dtype=int64)
</code></pre>
<h1>Step 2</h1>
<p>Compute the distance. I'll use scipy for this</p>
<pre class="lang-py prettyprint-override"><code>>>> from scipy.spatial.distance import cdist
>>> distance = cdist(indices, center)
>>> distance
array([[5.38516481],
[6.32455532],
[4.12310563],
[5.09901951],
[6.08276253],
[4. ],
[5. ],
[6. ],
[5.09901951],
[6.08276253],
[2.23606798],
[3.16227766],
[3. ],
[4.12310563],
[4. ],
[5.09901951],
[5. ]])
>>> np.argmin(distance)
10
>>> distance[10]
array([2.23606798])
>>> closest_point = indices[10]
>>> closest_point
array([22, 19], dtype=int64)
</code></pre>
<h1>Step 3</h1>
<p>Find the "object" ( = the connected component) closest to the center.<br />
I'll use OpenCV for this</p>
<pre class="lang-py prettyprint-override"><code>>>> import cv2
>>> N_cc, cc = cv2.connectedComponents(img.data.astype(np.uint8))
>>> N_cc
3 # 3 objects on your image: the 2 'black' and the white one
>>> obj = np.array(
list(zip(
*np.where(cc == cc[closest_point[0], closest_point[1]]) # getting all the points from the connected component having the same label as the point closest to the center
))
)
>>> obj
array([[22, 19], # the closest point to the center
[23, 19], # and all its neighbours in the same object
[23, 20],
[24, 19],
[24, 20],
[25, 19],
[25, 20]], dtype=int64)
</code></pre>
<h1>Step 4</h1>
<p>Keep only the main object in the original image</p>
<pre class="lang-py prettyprint-override"><code>>>> img.mask.fill(True)
>>> img.mask[obj[:, 0], obj[:, 1]] = False # numpy version of: for i in obj: img.mask[i[0], i[1]] = False
>>> img.nonzero()
(array([22, 23, 23, 24, 24, 25, 25], dtype=int64),
array([19, 19, 20, 19, 20, 19, 20], dtype=int64))
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Sort a dictionary depending of some value<pre><code>"users": {
"673336994218377285": {"votes": 5},
"541388453708038165": {"votes": 1},
"845444326065700865": {"votes": 9}
}
</code></pre>
<p>How can I sort a dictionary depending on the "votes" key of the nested dictionary? The dictionary should look like this:</p>
<pre><code>"users": {
"845444326065700865": {"votes": 9},
"673336994218377285": {"votes": 5},
"541388453708038165": {"votes": 1}
}
</code></pre>
|
<p>Dictionaries in Python keep their insertion order (an implementation detail in 3.6, guaranteed from 3.7 on), so you have to create a new dictionary with the elements inserted in their sorted order:</p>
<pre><code>users = {
"673336994218377285": {"votes": 5},
"541388453708038165": {"votes": 1},
"845444326065700865": {"votes": 9}
}
dict(sorted(users.items(), key=lambda x: x[1]['votes'], reverse=True))
</code></pre>
<p>The <code>key=lambda x: x[1]['votes']</code> makes it sort each element according to the 'votes' field of the value of each item.</p>
<p>If you're using an older version of Python then dictionaries will not be sorted, so you'll have to use this same approach with <code>collections.OrderedDict</code> instead of <code>dict</code>.</p>
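<p>A minimal sketch of the same approach for those older versions:</p>
<pre><code>from collections import OrderedDict

users = {
    "673336994218377285": {"votes": 5},
    "541388453708038165": {"votes": 1},
    "845444326065700865": {"votes": 9}
}

# OrderedDict remembers insertion order on any Python version
sorted_users = OrderedDict(
    sorted(users.items(), key=lambda x: x[1]['votes'], reverse=True)
)
</code></pre>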
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to set a new index<p>My df has the columns 'Country' and 'Country Code' as the current index. How can I remove this index and create a new one that just counts the rows? I'll leave a picture of how it's looking. All I want to do is add a new index next to Country. Thanks a lot!</p>
<p><a href="https://i.stack.imgur.com/WfYNd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WfYNd.png" alt="enter image description here" /></a></p>
|
<p>If you are using a pandas DataFrame and your DataFrame is called df, use <code>reset_index</code>. With <code>drop=False</code>, the current index levels ('Country' and 'Country Code') are moved back into ordinary columns, and a fresh integer index that just counts the rows takes their place:</p>
<pre><code>df = df.reset_index(drop=False)
</code></pre>
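<p>A minimal sketch with a made-up two-row frame (the data column is an assumption; only the index structure matches the screenshot):</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {"Population": [38928346, 2877797]},  # hypothetical data column
    index=pd.MultiIndex.from_tuples(
        [("Afghanistan", "AFG"), ("Albania", "ALB")],
        names=["Country", "Country Code"],
    ),
)

# 'Country' and 'Country Code' become ordinary columns again and a
# fresh 0..n-1 RangeIndex counts the rows
df = df.reset_index(drop=False)
print(df)
#        Country Country Code  Population
# 0  Afghanistan          AFG    38928346
# 1      Albania          ALB     2877797
</code></pre>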
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
how to list a name n-times?<p>I have a pandas data frame named <code>df</code> and an integer variable named <code>n</code>.</p>
<p>how can I create a list of <code>n</code>-times my dataframe's name <code>df</code>?</p>
<p>Example:</p>
<pre><code>n=3
l = [df,df,df]
</code></pre>
<p>Note: <code>n</code> changes almost randomly for every execution.</p>
|
<p>You can use a list comprehension to put the dataframe into a list <code>n</code> times:</p>
<pre><code>l = [df for _ in range(n)]
</code></pre>
<p>Though note that, as mentioned in the comments, this creates <code>n</code> references to the same object, so a change made through any of them will be reflected in all of them. If that is a problem (which it most likely is), take a new <code>copy</code> on each iteration:</p>
<pre><code>l = [df.copy() for _ in range(n)]
</code></pre>
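<p>A quick demonstration of the difference, using a toy dataframe:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
n = 3

refs = [df for _ in range(n)]
refs[0].loc[0, "a"] = 99       # mutates the single shared object
print(refs[1].loc[0, "a"])     # 99 -- every list entry sees the change

copies = [df.copy() for _ in range(n)]
copies[0].loc[0, "a"] = 0
print(copies[1].loc[0, "a"])   # still 99 -- the other copies are independent
</code></pre>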
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
HTML Img Src Variable<p>I am trying to pass an array of image urls (in String format) from a python application to an HTML web page. In this HTML web page, I run a Jinja for loop in which I try to display a series of images whose sources are in the array I passed to the HTML webpage. However, the images are not appearing. Below is my code for this attempt. imgLinks is the array that contains all the urls of my image sources, and the variable length represents the length of this array.</p>
<pre><code>{% for arrayPos in range(length) %}
<div class="card">
<div class="container">
<img src=imgLinks[arrayPos] width=30% height=20%> </img>
</div>
</div>
<br>
{% endfor %}
</code></pre>
<p>Per the request of someone, here is my python code. I am basically calling from an API to get my information.</p>
<pre><code>@app.route('/explorelaunches')
def explorelaunches():
response = requests.get("https://ll.thespacedevs.com/2.0.0/launch/upcoming/")
jsonResponse = response.json()
launches = jsonResponse["results"]
names = []
startTimes = []
padNames = []
descriptions = []
imgLinks = []
for launch in launches:
if (launch['name'] is None):
names.append("N/A")
else:
names.append(launch['name'])
if (launch['net'] is None):
startTimes.append("N/A")
else:
dateString = launch['net']
finalDate = parse(dateString).strftime('%B %d, %Y')
startTimes.append(finalDate)
if (launch['pad'] is None or launch['pad']['name'] is None):
padNames.append("N/A")
else:
padNames.append(launch['pad']['name'])
if (launch['mission'] is None or launch['mission']['description'] is None):
descriptions.append("N/A")
else:
descriptions.append(launch['mission']['description'])
if (launch['image'] is None):
imgLinks.append("N/A")
else:
imgLinks.append(launch['image'])
return render_template("explorelaunches.html", username=session.get(names=names, startTimes=startTimes, padNames=padNames, descriptions=descriptions, length=len(names), imgLinks=imgLinks)
</code></pre>
<p>Is there any way I can accomplish this task? Thanks in advance.</p>
|
<p>Given a list of URLs, you can iterate over the list directly in the Jinja template. I have made a small correction based on the information you provided:</p>
<pre><code>{% for image_link in imgLinks %}
<div class="card">
  <div class="container">
    <img src="{{ image_link }}" width="30%" height="20%"> <!-- the url must go inside {{...}}; quoting the attribute is safer -->
  </div>
</div>
<br>
{% endfor %}
</code></pre>
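<p>As an aside, the <code>render_template</code> call in the question's view won't run as written: the <code>session.get(</code> wraps the keyword arguments. A sketch of what was probably intended; the <code>"username"</code> session key is an assumption:</p>
<pre><code>return render_template(
    "explorelaunches.html",
    username=session.get("username"),  # assumed key; the original call was garbled
    names=names,
    startTimes=startTimes,
    padNames=padNames,
    descriptions=descriptions,
    length=len(names),
    imgLinks=imgLinks,
)
</code></pre>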
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
CreateProcessW failed error:2 ssh_askpass: posix_spawn: No such file or directory Host key verification failed, jupyter notebook on remote server<p>So I was following a <a href="https://towardsdatascience.com/running-jupyter-notebooks-on-remote-servers-603fbcc256b3" rel="nofollow noreferrer">tutorial</a> to connect to my jupyter notebook which is running on my remote server so that I can access it on my local windows machine.</p>
<p>These were the steps that I followed.</p>
<p>On my remote server :</p>
<pre><code>jupyter notebook --no-browser --port=8889
</code></pre>
<p>Then on my local machine</p>
<pre><code>ssh -N -f -L localhost:8888:localhost:8889 *******@**********.de.gyan.com
</code></pre>
<p>But I am getting an error</p>
<pre><code>CreateProcessW failed error:2
ssh_askpass: posix_spawn: No such file or directory
Host key verification failed.
</code></pre>
<p>How do I resolve this? Or is there is any other way to achieve the same?</p>
|
<p>If you need the DISPLAY variable set because you want to use VcXsrv or another X server on Windows 10, the workaround is to add the host you want to connect to into your known_hosts file.
This can be done by calling</p>
<pre><code>ssh-keyscan -t rsa host.example.com | Out-File ~/.ssh/known_hosts -Append -Encoding ASCII;
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to create a matrix of lists?<p>I need to create a matrix MxN where every element of the matrix is a list of integers. I need a list so that I can append new elements as I go, therefore a 3D matrix would not work here.
I'm not sure if what I need here would actually be a list of lists.</p>
|
<p>The following function creates a 2D matrix of empty lists:</p>
<pre><code>>>> def create(row,col):
... return [[[] for _ in range(col)] for _ in range(row)]
...
>>> L = create(2,3)
>>> L[1][2].extend([1,2,3]) # add multiple integers at a location
>>> for row in L:
... print(row)
...
[[], [], []]
[[], [], [1, 2, 3]]
>>> L[0][1].append(1) # add one more integer at a location
>>> for row in L:
... print(row)
...
[[], [1], []]
[[], [], [1, 2, 3]]
</code></pre>
<p>How it works:</p>
<ul>
<li><code>[]</code> is a new instance of an empty list.</li>
<li><code>[[] for _ in range(col)]</code> uses a list comprehension to create a list of "col" empty lists.</li>
<li><code>[[[] for _ in range(col)] for _ in range(row)]</code> creates "row" new lists of "col" empty lists each.</li>
</ul>
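<p>One related pitfall: building the grid with the <code>*</code> operator instead of comprehensions makes every cell the <em>same</em> list object:</p>
<pre><code>>>> bad = [[[]] * 3] * 2      # replicates references, not lists
>>> bad[0][0].append(1)
>>> bad
[[[1], [1], [1]], [[1], [1], [1]]]
</code></pre>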
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Can I use the column name as condition?<p>I have a pandas dataframe which contains around a hundred columns.
Most of these columns are dates and I want to iterate through all these.</p>
<p>Here is an example :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">date</th>
<th style="text-align: center;">nbDays</th>
<th style="text-align: right;">2020-12-20</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2020-12-30</td>
<td style="text-align: center;">4</td>
<td style="text-align: right;"></td>
</tr>
</tbody>
</table>
</div>
<p>If date + nbDays <= 2020-12-20, set the value of this column to TRUE, otherwise FALSE.
The only thing I can't do is take the column name as an argument in my condition, and do it for all these date columns.</p>
<p>Here's my expected output :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">date</th>
<th style="text-align: center;">nbDays</th>
<th style="text-align: center;">2020-12-20</th>
<th style="text-align: right;">2020-12-21</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2020-12-30</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">FALSE</td>
<td style="text-align: right;">FALSE</td>
</tr>
<tr>
<td style="text-align: left;">2020-12-18</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">TRUE</td>
<td style="text-align: right;">FALSE</td>
</tr>
</tbody>
</table>
</div>
<p>Maybe in a loop but it'll be long to run?</p>
|
<p><em>Ensure your date columns are converted to datetime for this to work</em></p>
<p>The basic steps I've used are:</p>
<ol>
<li>get pandas to identify the date columns</li>
<li>shift the "date" column by nbDays</li>
<li>compare the shifted date column to the dates in the columns</li>
</ol>
<pre><code>import numpy as np
import pandas as pd
from dateutil.relativedelta import relativedelta

# shift each row's date forward by its own nbDays; a Series is used
# because a plain list would not support the <= comparison below
shifted_date = pd.Series(
    t + relativedelta(days=nb_days)
    for t, nb_days in zip(df['date'], df['nbDays'])
)

# every datetime column except 'date' itself is a date to compare against
date_columns = df.select_dtypes(include=[np.datetime64]).columns.drop('date')
for date_column in date_columns:
    date_to_check = pd.to_datetime(date_column)
    df[date_column] = np.where(
        shifted_date <= date_to_check, True, False
    )
</code></pre>
<p>Don't be afraid to use a <code>for</code> loop here because the tough work is vectorised in the <code>np.where</code> function.</p>
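<p>A minimal end-to-end sketch on the sample rows from the question, writing the two result columns out explicitly instead of discovering them with <code>select_dtypes</code>:</p>
<pre><code>import pandas as pd
from dateutil.relativedelta import relativedelta

df = pd.DataFrame({
    "date": pd.to_datetime(["2020-12-30", "2020-12-18"]),
    "nbDays": [4, 2],
})

shifted = pd.Series(
    t + relativedelta(days=nb) for t, nb in zip(df["date"], df["nbDays"])
)

# each column name is itself the date to compare against
for col in ["2020-12-20", "2020-12-21"]:
    df[col] = shifted <= pd.to_datetime(col)

print(df)
</code></pre>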
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to add 91 to all the values in a column of a pandas data frame?<p>Consider my data frame as like this</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>S.no</th>
<th>Phone Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>9955290232</td>
</tr>
<tr>
<td>2</td>
<td>8752837492</td>
</tr>
<tr>
<td>3</td>
<td>9342832245</td>
</tr>
<tr>
<td>4</td>
<td>919485928837</td>
</tr>
<tr>
<td>5</td>
<td>917482482938</td>
</tr>
<tr>
<td>6</td>
<td>98273642733</td>
</tr>
</tbody>
</table>
</div>
<p>I want the values in the "Phone Number" column to be prefixed with 91.
If the value already has the 91 prefix, proceed to the next value.</p>
<p>My output</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>S.no</th>
<th>Phone Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>919955290232</td>
</tr>
<tr>
<td>2</td>
<td>918752837492</td>
</tr>
<tr>
<td>3</td>
<td>919342832245</td>
</tr>
<tr>
<td>4</td>
<td>919485928837</td>
</tr>
<tr>
<td>5</td>
<td>917482482938</td>
</tr>
<tr>
<td>6</td>
<td>919827364273</td>
</tr>
</tbody>
</table>
</div>
<p>How could this be done?</p>
|
<p>Simplest would be to convert to string, add <code>91</code> to the beginning and slice to the last 12 digits:</p>
<pre><code>df['New Phone Number'] = df['Phone Number'].astype(str).radd("91").str[-12:]
</code></pre>
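<p><code>radd</code> prepends rather than appends ("91" + number), and the <code>[-12:]</code> slice keeps already-prefixed numbers from being double-prefixed, assuming valid numbers are 10 digits:</p>
<pre><code>import pandas as pd

s = pd.Series([9955290232, 919485928837])
print(s.astype(str).radd("91").str[-12:].tolist())
# ['919955290232', '919485928837']
</code></pre>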
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to create an alphanumeric grid in a certain sequence allowing double digit numbers?<p>I have a grid feature class that varies in size and shape. My test shapefile is a 3x4 grid. I need to create an alphanumeric sequence that goes in a specific order but can be scaled for a different size grid. Below is the order the grid is in:</p>
<pre><code>A4 | B4 | C4
A3 | B3 | C3
A2 | B2 | C2
A1 | B1 | C1
</code></pre>
<p>and to use this alphanumeric sequence, the list will need to be printed in a specific order (starting from the bottom left of the table, moving to the right, and then returning to the left value on the next row up):
A1, B1, C1, A2, B2, C2, A3, B3, C3, A4, B4, C4</p>
<p>I had this:</p>
<pre><code>from itertools import product
from string import ascii_uppercase, digits
for x, y in product(ascii_uppercase, digits):
print('{}{}'.format(x, y))
</code></pre>
<p>It generates a sequence like: A0 through A9, then B0 through B9, and so forth.
However I also need larger grids so the script would have to compensate and allow the sequence to use double digits after 9 if the grid is larger than 9 high.
ie. A10, B10, C10</p>
<p>I then tried to make 2 lists and then combine them together, but I ran into the problem of joining these in the sequence I need.</p>
<pre><code>w = 3
h = 4
alpha = []
numeric = []
for letter in ascii_uppercase[:w]:
alpha.append(letter)
for num in range(1, h+1):
numeric.append(num)
</code></pre>
<p>I assume I might not need to make a numeric list, but don't know how to do it. I know slightly more than just the basics of python and have created so more complex scripts, but this is really puzzling for me! I feel like I am so close but missing something really simple from both of my samples above. Thank you for any help you can give me!</p>
<p>Solved, here is what I have for others who might need to use my question:</p>
<pre><code>w = 9
h = 20
alpha = []
numeric = []
for letter in ascii_uppercase[:w]:
alpha.append(letter)
for num in range(1, h+1):
numeric.append(num)
longest_num = len(str(max(numeric)))
for y in numeric:
for x in alpha:
print '{}{:0{}}'.format(x, y, longest_num)
</code></pre>
<p>I didn't need the code formatted as a table since I was going to perform a field calculation in ArcMap.</p>
|
<p>After you compute <code>numeric</code>, also do:</p>
<pre><code>longest_num = len(str(max(numeric)))
</code></pre>
<p>and change your format statement to:</p>
<pre><code>'{}{:0{}}'.format(x, y, longest_num)
</code></pre>
<p>This ensures that when you get to double digits you get the following result:</p>
<pre><code>A12 | B12 | C12
A11 | B11 | C11
...
A02 | B02 | C02
A01 | B01 | C01
</code></pre>
<p>To actually print the grid however you need to change your code:</p>
<pre><code>longest_num = len(str(max(numeric)))
for y in reversed(numeric):
print(" | ".join('{}{:0{}}'.format(x, y, longest_num)
for x in alpha))
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
python requests and bs4 how to navigate through the children of an element<p>so this is my code</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import time
URL = 'http://www.vn-meido.com/k1/index.php?board=17.0'
# loads page
r = requests.get(URL)
soup = BeautifulSoup(r.content, "html.parser")
# gets the newest book
book = soup.select_one('td[class^="subject windowbg2"]').text
while True:
# reloads the page
r = requests.get(URL)
soup = BeautifulSoup(r.content, "html.parser")
# gets the newest book
new_book = soup.select_one('td[class^="subject windowbg2"]').text
# checks if a new book has been uploaded
if book == new_book:
print("no new book found")
elif book != new_book:
print(new_book)
book = soup.select_one('td[class^="subject windowbg2"]').text
# repeats after 30 seconds
time.sleep(30)
</code></pre>
<p>If you go to the website and have a look, you'll see I get the text of the newest book uploaded, but I want to be able to separate the title and the author. The title and author are in different elements, and those elements have no way to identify them (like a class or an ID), so if you can help please do, thanks</p>
|
<p>Assuming the html remains consistent across entries (I only checked a few), then when new text is found under the pinned listings at the top (I assume this to be a new book) you need to extract the book url, visit that url, and then you can use <code>:-soup-contains</code> to target the author and book title by their label text and <code>next_sibling</code> to get the required return values.</p>
<p>N.B. I have removed the while loop for the purposes of this answer. The additions to the <code>elif</code> are the important ones.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
URL = 'http://www.vn-meido.com/k1/index.php?board=17.0'
# loads page
r = requests.get(URL)
soup = BeautifulSoup(r.content, "html.parser")
# gets the newest book
book = '' # for testing altered this line
r = requests.get(URL)
soup = BeautifulSoup(r.content, "html.parser")
# gets the newest book
new_book = soup.select_one('td[class^="subject windowbg2"]').text
# checks if a new book has been uploaded
if book == new_book:
print("no new book found")
elif book != new_book:
print(new_book)
new_book_url = soup.select_one('tr:not([class]) td:not([class*=stickybg]) ~ .subject a')['href']
r = requests.get(new_book_url)
soup = BeautifulSoup(r.content, "html.parser")
for member in ['TITLE ', 'AUTHOR']:
print(soup.select_one(f'strong:-soup-contains("{member}")').next_sibling.next_sibling)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pandas group by values in list (in series)<p>I am trying to group by items in a list in DataFrame Series. The dataset being used is the <a href="https://insights.stackoverflow.com/survey" rel="nofollow noreferrer">Stack Overflow 2020 Survey</a>.</p>
<p>The layout is roughly as follows:</p>
<pre><code> ... LanguageWorkedWith ... ConvertedComp ...
Respondent
1 Python;C 50000
2 C++;C 70000
</code></pre>
<p>I essentially want to use <code>groupby</code> on the unique values in the list of languages worked with, and apply a mean aggregator function to the ConvertedComp like so...</p>
<pre><code>LanguageWorkedWith
C++ 70000
C 60000
Python 50000
</code></pre>
<p>I have actually managed to achieve the desired output but my solution seems somewhat janky and being new to Pandas, I believe that there is probably a better way.</p>
<p>My solution is as follows:</p>
<pre><code># read csv
sos = pd.read_csv("developer_survey_2020/survey_results_public.csv", index_col='Respondent')
# seperate string into list of strings, disregarding unanswered responses
temp = sos["LanguageWorkedWith"].dropna().str.split(';')
# create new DataFrame with respondent index and rows populated withknown languages
langs_known = pd.DataFrame(temp.tolist(), index=temp.index)
# stack columns as rows, dropping old column names
stacked_responses = langs_known.stack().reset_index(level=1, drop=True)
# Re-indexing sos DataFrame to match stacked_responses dimension
# Concatenate reindex series to ConvertedComp series columnwise
reindexed_pays = sos["ConvertedComp"].reindex(stacked_responses.index)
stacked_with_pay = pd.concat([stacked_responses, reindexed_pays], axis='columns')
# Remove rows with no salary data
# Renaming columns
stacked_with_pay.dropna(how='any', inplace=True)
stacked_with_pay.columns = ["LWW", "Salary"]
# Group by LLW and apply median
lang_ave_pay = stacked_with_pay.groupby("LWW")["Salary"].median().sort_values(ascending=False).head()
</code></pre>
<p>Output:</p>
<pre><code>LWW
Perl 76131.5
Scala 75669.0
Go 74034.0
Rust 74000.0
Ruby 71093.0
Name: Salary, dtype: float64
</code></pre>
<p>which matches the value calculated when choosing specific language: <code>sos.loc[sos["LanguageWorkedWith"].str.contains('Perl').fillna(False), "ConvertedComp"].median()</code></p>
<p>Any tips on how to improve/functions that provide this functionality/etc would be appreciated!</p>
|
<p>Take a data frame with only the target columns, split the language names apart, and combine them with the salary. The next step is to convert the data from wide format to long format using melt. Then group by the language names to get the median. <a href="https://pandas.pydata.org/docs/reference/api/pandas.melt.html" rel="nofollow noreferrer">melt docs</a></p>
<pre><code>lww = sos[["LanguageWorkedWith","ConvertedComp"]]
lwws = pd.concat([lww['ConvertedComp'], lww['LanguageWorkedWith'].str.split(';', expand=True)], axis=1)
lwws.reset_index(drop=True, inplace=True)
df_long = pd.melt(lwws, id_vars='ConvertedComp', value_vars=lwws.columns[1:], var_name='lang', value_name='lang_name')
df_long.groupby('lang_name')['ConvertedComp'].median().sort_values(ascending=False).head()
lang_name
Perl 76131.5
Scala 75669.0
Go 74034.0
Rust 74000.0
Ruby 71093.0
Name: ConvertedComp, dtype: float64
</code></pre>
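<p>If your pandas version has <code>DataFrame.explode</code> (0.25 and later), the same median-per-language can be computed in one chain; a sketch, assuming the same <code>sos</code> frame:</p>
<pre><code>lang_median = (
    sos[["LanguageWorkedWith", "ConvertedComp"]]
    .assign(lang=lambda d: d["LanguageWorkedWith"].str.split(";"))
    .explode("lang")                       # one row per (language, salary) pair
    .groupby("lang")["ConvertedComp"]
    .median()
    .sort_values(ascending=False)
)
print(lang_median.head())
</code></pre>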
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Get first Tuesday of month every year<p>I want to use sql or python to get the first Tuesday in June for the current year. </p>
<p>For example:</p>
<ul>
<li>The year is 2020, so I need to return 06/02/2020</li>
<li>The year is 2021, so I need to return 06/01/2021</li>
</ul>
|
<p>Here's the python way.</p>
<pre class="lang-py prettyprint-override"><code>import datetime
def get_day(year):
d = datetime.datetime(year, 6, 1)
offset = 1-d.weekday() #weekday = 1 means tuesday
if offset < 0:
offset+=7
return d+datetime.timedelta(offset)
</code></pre>
<p>Pass the year into the function and it returns the first Tuesday as a <code>datetime</code> object.</p>
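<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>>>> get_day(2020)
datetime.datetime(2020, 6, 2, 0, 0)
>>> get_day(2021)
datetime.datetime(2021, 6, 1, 0, 0)
</code></pre>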
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to declare a global variable of pySerial with Python<p>Using Python I need to call the 'configSerialPort' function several times, and for that I have declared serialPort as global. This is my code:</p>
<pre><code>import serial
comPort = 'COM5'
serialPort = None
def configSerialPort(timeout):
serialPort = serial.Serial(port = comPort, baudrate = 9600, timeout = timeout)
def ping():
#serialPort = serial.Serial(port = comPort, baudrate = 9600, timeout = 1)
command = "AT\n"
serialPort.write(command.encode('ASCII'))
bufferRxSerial = serialPort.readline().decode('ASCII')
if( bufferRxSerial.strip() == "OK" ):
return True
else:
return False
def main():
global serialPort
configSerialPort(3)
flagConexion = ping()
if flagConexion == True:
print('The Modem is connected!')
else:
print('ERROR, no connection to modem!')
main()
</code></pre>
<p>but I have this error:</p>
<pre><code>File "C:/Users/WSR/AppData/Local/Programs/Python/Python39/TestSerialPort.py", line 12, in ping
serialPort.write(command.encode('ASCII'))
AttributeError: 'NoneType' object has no attribute 'write'
</code></pre>
<p>How can I correct it?</p>
|
<p>You must declare a variable as <code>global</code> inside every function that <em>assigns</em> to it; otherwise the assignment just creates a new local variable and the module-level <code>serialPort</code> stays <code>None</code>. (Merely reading a global needs no declaration, but the declaration does no harm.)</p>
<pre><code>def configSerialPort(timeout):
global serialPort
...
def ping():
global serialPort
...
</code></pre>
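<p>A minimal sketch of the rule:</p>
<pre><code>x = None

def set_x():
    global x      # without this line, 'x = 5' would create a local variable
    x = 5

def read_x():
    return x      # reading a global needs no declaration

set_x()
print(read_x())   # 5
</code></pre>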
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How do I stop the code running the remember() function every time I use speakRemember() function?<p>I have this code for a remembering system:</p>
<pre><code>def remember():
speak('What do you want me to remember sir?')
toRemember = input('What should I remember? ')
speak('ok, i will remember: ' + toRemember)
return toRemember
def speakRemember():
toRemember = remember()
speak('this is what you told mke to remember: '+toRemember)
</code></pre>
<p>whenever I trigger the speakRemember() function and it gets the data from the remember() it ends up running the remember() function. I think the error is here: <code>toRemember = remember()</code>
but I don't see why it would run the other function. If anyone knows if this is a bug or just human error please tell me!</p>
<p>(there are no errors)</p>
|
<p>To elaborate on my comment: in your original code, <code>toRemember = remember()</code> is a function call, so <code>remember()</code> will always run whenever <code>speakRemember()</code> runs. If you want to reuse a previously stored value, you need to keep state somewhere, for example in a class:</p>
<pre class="lang-py prettyprint-override"><code>class Remember:
def __init__(self, to_remember=None):
self.to_remember = to_remember
def __call__(self):
self.to_remember = input('What Should I remember? ')
def __str__(self):
return f"this is what you told me to remember: {toRemember}"
def __repr__(self):
retrun self.__str__
# If you want to print response while setting a value:
def remember_this(self):
print('What do you want me to remember sir?')
self()
print(f"ok, I will remember: {self.to_remember}")
remember = Remember()
remember()
>> What Should I remember? This
print(remember.to_remember)
>>> 'This'
print(remember)
>>> this is what you told me to remember: This
remember.remember_this()
>>> What do you want me to remember sir?
>>> What Should I remember? That
>>> ok, I will remember: That
print(remember)
>>> this is what you told me to remember: That
</code></pre>
<p>If you call the instance of your class (in this case <code>remember</code>) the <code>__call__</code> function is used. You can also create different function to change class attributes.</p>
<p>The <code>__str__</code> and <code>__repr__</code> functions are use to make it easy to print the values in a string.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Reading a (Wireshark) LiveCapture of USB keystrokes into python on Ubuntu 20.04?<h2>Context</h2>
<p>The bluetooth of my keyboard is unstable, it's a known problem of the device. The manufacturer says the micro-USB does not transmit keypresses and that it is for charging only. However, I inspected the USB data with Wireshark and detected it does transmit the keystrokes over its micro-USB connection. So I am trying to give the keyboard a second life through the micro-USB connection (and hopefully help people with the same issue).</p>
<h2>System</h2>
<p>Ubuntu 20.04</p>
<h2>Approach</h2>
<p>I've identified the USB port of my keyboard device in Wireshark, and recorded the stream of data across that port. That data is saved into a file called <code>abcd.pcapng</code> (I pressed the buttons <code>abcd</code> during the recording). Next, I wrote a basic python script that uses <code>tshark</code> to convert the <code>abcd.pcapng</code> file back into the original keypresses <code>abcd</code>.</p>
<h2>Code</h2>
<p>This is the Python code that converts the <code>abcd.pcapng</code> file into the letters <code>abcd</code>:</p>
<pre><code># This script extracts the keypresses from a pcapng file.
import os
pcapng_filename = "abcd.pcapng"
keypress_ids_filename = "keypress_ids.txt"
# create the output for
command_pcapng_to_keypress_ids = (
f"tshark -r '{pcapng_filename}' -T fields -e usb.capdata > {keypress_ids_filename}"
)
print(
f"Running the following bash command to convert the pcapng file to 00xx00000 nrs:\n{command_pcapng_to_keypress_ids}"
)
os.system(command_pcapng_to_keypress_ids)
# read keypress id file
switcher = {
"04": "a", # or A
"05": "b", # or B
"06": "c", # or C
"07": "d", # or D
"08": "e", # or E
"09": "f", # or F
"0A": "g", # or G
"0B": "h", # or H
"0C": "i", # or I
"0D": "j", # or J
"0E": "k", # or K
"0F": "l", # or L
"10": "m", # or M
"11": "n", # or N
"12": "o", # or O
"13": "p", # or P
"14": "q", # or Q
"15": "r", # or R
"16": "s", # or S
"17": "t", # or T
"18": "u", # or U
"19": "v", # or V
"1A": "w", # or W
"1B": "x", # or X
"1C": "y", # or Y
"1D": "x", # or Z
"1E": "1", # or !
"1F": "2", # or @
"20": "3", # or #
"21": "4", # or $
"22": "5", # or %
"23": "6", # or ^
"24": "7", # or &
"25": "8", # or *
"26": "9", # or (
"27": "0", # or )
"2D": "-", # or _
"2E": "+", # or =
"2F": "[", # or {
"30": "]", # or }
"31": '"', # or |
"33": ";", # or :
"34": "'", # or "
"35": "`", # or ~
"36": ",", # or <
"37": ".", # or >
"38": "/", # or ?
}
def readFile(filename):
fileOpen = open(filename)
return fileOpen
file = readFile(keypress_ids_filename)
print(f"file={file}")
# parse the 0000050000000000 etc codes and convert them into keystrokes
for line in file:
if len(line) == 17:
two_chars = line[4:6]
try:
print(
f"line={line[0:16]}, relevant characters indicating keypress ID: {two_chars} convert keypres ID to letter: {switcher[two_chars]}"
)
except:
pass
</code></pre>
<h2>Output</h2>
<p>The output of that script for the specified file is:</p>
<pre><code>Running the following bash command to convert the pcapng file to 00xx00000 nrs:
tshark -r 'abcd.pcapng' -T fields -e usb.capdata > keypress_ids.txt
file=<_io.TextIOWrapper name='keypress_ids.txt' mode='r' encoding='UTF-8'>
line=0000040000000000, relevant characters indicating keypress ID: 04 convert keypres ID to letter: a
line=0000050000000000, relevant characters indicating keypress ID: 05 convert keypres ID to letter: b
line=0000060000000000, relevant characters indicating keypress ID: 06 convert keypres ID to letter: c
line=0000070000000000, relevant characters indicating keypress ID: 07 convert keypres ID to letter: d
</code></pre>
<h2>Question</h2>
<p>How can I adjust the code to get the USB data directly as a continuous stream, instead of first having to start- and stop recording the USB data followed by having to create the output <code>abcd.pcapng</code> file?</p>
<p>For example, is there a Wireshark-api or tshark function that starts listening until the/some script is stopped?</p>
|
<h2>Solution</h2>
<p>The following script called <code>live_capture_keystrokes.py</code> captures the <code>Leftover Capture Data</code>, which contains the signals of the keystrokes; they are parsed live and continuously by the Python code.</p>
<p>I think it is important to activate usb monitoring each time you reboot the computer (if you want to run this script successfully). To do that, you can run:</p>
<pre><code>sudo modprobe usbmon
</code></pre>
<p>Ideally you would run that without sudo, which would also avoid having to run the Python file with sudo, but I have not yet figured out how to run <code>modprobe usbmon</code> without sudo.</p>
<h2>Code</h2>
<p>This <code>live_capture_keystrokes.py</code> script needs to be run with sudo if <code>modprobe usbmon</code> is run with sudo. To run the python script as sudo, one can type:</p>
<pre><code>sudo su
# sudo apt install python3
# pip install pyshark
# python3 live_capture_keystrokes.py
</code></pre>
<p>And the content of <code>live_capture_keystrokes.py</code> is:</p>
<pre><code>import pyshark
# Get keystrokes data
print("\n----- Capturing keystrokes from usbmon0 --------------------")
capture = pyshark.LiveCapture(interface='usbmon0', output_file='output.pcap')
# Source: https://www.programcreek.com/python/example/92561/pyshark.LiveCapture
for i, packet in enumerate(capture.sniff_continuously()):
try:
        data = packet[1].usb_capdata.split(":")
print(data)
except:
pass
capture.clear()
capture.close()
print(f'DONE')
</code></pre>
<h2>Output</h2>
<p>The output is not yet parsed back to keystrokes; I will add that later (a possible decoding sketch follows the output below). However, this is the output: each time you press a key on your usb keyboard, it prints the accompanying list directly to the screen.</p>
<pre><code>a['00', '00', '04', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
b['00', '00', '05', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
c['00', '00', '06', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
d['00', '00', '07', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
e['00', '00', '08', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
f['00', '00', '09', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
g['00', '00', '0a', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
h['00', '00', '0b', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
i['00', '00', '0c', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
j['00', '00', '0d', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
k['00', '00', '0e', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
l['00', '00', '0f', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
m['00', '00', '10', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
n['00', '00', '11', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
o['00', '00', '12', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
p['00', '00', '13', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
q['00', '00', '14', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
r['00', '00', '15', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
s['00', '00', '16', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
t['00', '00', '17', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
u['00', '00', '18', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
v['00', '00', '19', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
w['00', '00', '1a', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
x['00', '00', '1b', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
y['00', '00', '1c', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
z['00', '00', '1d', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
0['00', '00', '62', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
1['00', '00', '59', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
2['00', '00', '5a', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
3['00', '00', '5b', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
4['00', '00', '5c', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
5['00', '00', '5d', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
6['00', '00', '5e', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
7['00', '00', '5f', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
8['00', '00', '60', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
9['00', '00', '61', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
1['00', '00', '1e', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
2['00', '00', '1f', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
3['00', '00', '20', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
4['00', '00', '21', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
5['00', '00', '22', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
6['00', '00', '23', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
7['00', '00', '24', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
8['00', '00', '25', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
9['00', '00', '26', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
0['00', '00', '27', '00', '00', '00', '00', '00']
['00', '00', '00', '00', '00', '00', '00', '00']
</code></pre>
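<p>Until the parsing step is added, here is a minimal sketch of how the captured lists could be mapped back to letters, reusing the idea of the keycode table from the question (a hypothetical helper; only a&ndash;z shown, and the byte at index 2 is the HID keycode in the lowercase hex that pyshark prints):</p>
<pre><code># hypothetical helper, not part of pyshark
KEYCODES = {f"{0x04 + i:02x}": chr(ord("a") + i) for i in range(26)}

def decode(data):
    return KEYCODES.get(data[2], "")   # '' for key-release/empty packets

print(decode(['00', '00', '04', '00', '00', '00', '00', '00']))  # -> a
</code></pre>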
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Updating column-entries when using groupby+apply iteratively<p>I use the groupby+apply methods on a dataframe and store the return values of the applied function in a new column.</p>
<p>The initial dataframe df is:</p>
<pre><code>In[1]: df
Out[1]:
tag a b
0 tag1 15 1
1 tag1 26 2
2 tag2 20 2
3 tag3 11 3
4 tag3 15 3
5 tag3 24 4
</code></pre>
<p>The groupby+apply procedure is the following:</p>
<pre><code>In[2]: grouped = df.groupby('tag')
In[3]: df['a+b'] = grouped.get_group('tag1').apply(function,axis=1)
In[4]: df
Out[4]:
tag a b a+b
0 tag1 15 1 16
1 tag1 26 2 28
2 tag2 20 2 nan
3 tag3 11 3 nan
4 tag3 15 3 nan
5 tag3 24 4 nan
In[5]: df['a+b'] = grouped.get_group('tag2').apply(function,axis=1)
In[6]: df
Out[6]:
tag a b a+b
0 tag1 15 1 nan
1 tag1 26 2 nan
2 tag2 20 2 22
3 tag3 11 3 14
4 tag3 15 3 nan
5 tag3 24 4 nan
</code></pre>
<p>First I chose to apply the function only to entries with 'tag1'. This is because, in the original case, the dataframe is huge and I am only interested in applying the function to a small number of specific groups.</p>
<p>The problem, which you can see from <em>In[5]</em> onwards, is that when the code from <em>In[3]</em> is repeated in <em>In[5]</em> for a different group, the entries in column 'a+b' for the group 'tag1' get lost.</p>
<p>How can I simply update the column entries of 'a+b' without overwriting them? Is there a best-practice example for this kind of problem?</p>
|
<p>This is what I found works best. I use pandas.Series.update() which updates a single column of the Dataframe:</p>
<pre><code>for key, item in grouped:
    series = item.apply(function, axis=1)  # 'item' is already the group; no need for get_group
    if 'a+b' in df.columns:
        df['a+b'].update(series)           # only overwrites rows whose index appears in 'series'
else:
df['a+b'] = series
</code></pre>
<p>This code works best, because I need the function to be applied iteratively to each group (or a specified set or groups) and not all groups at once. This is needed due to limited memory capacities and also because I need the information of the applied function only for a small number of groups.</p>
<p>Any comments on this code are very much appreciated!</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Calling a certain function based on a variable<p>I am trying to call certain functions based on a certain variable. If I have lots of functions based on this variable it quickly gets very ugly with all if statements. My questions is if there is a more elegant and "pythonic" solution than doing something like the code below?</p>
<pre><code>if variable == 0:
function_0()
elif variable == 1:
function_1()
elif variable == 2:
function_2()
</code></pre>
|
<p>Create an array of the functions, index with the variable and call the function.</p>
<pre><code>[function_0, function_1, function_2][variable]()
</code></pre>
<p>Or do it via a dictionary</p>
<pre><code>dd = {0 : function_0, 1 : function_1, 2 : function_2}
dd[variable]()
</code></pre>
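<p>A usage sketch, including a safe fallback for unexpected values via <code>dict.get</code>:</p>
<pre><code>def function_0(): print("zero")
def function_1(): print("one")
def function_2(): print("two")

dd = {0: function_0, 1: function_1, 2: function_2}

variable = 1
dd[variable]()                                 # prints "one"
dd.get(variable, lambda: print("unknown"))()   # safe if the key is missing
</code></pre>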
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
What method do you use a lot when controlling p4 command with Python?<p>What method do you use a lot when controlling p4 command with Python?</p>
<ol>
<li>Use p4 module</li>
<li>Control p4 command using subprocess ("p4 change")</li>
</ol>
<p>I'm currently creating a tool using the second method.</p>
|
<p>I find the simplest way to translate from command line to P4Python is the <code>p4.run()</code> command. You just pass in the command you want to run as the first argument and then add each P4 argument after that.</p>
<p>For example, in the terminal:</p>
<p><code>p4 changes</code></p>
<p>In Python would be:</p>
<p><code>p4.run("changes")</code></p>
<p>Terminal:</p>
<p><code>p4 describe 153</code></p>
<p>Python:</p>
<p><code>p4.run("describe", 153)</code></p>
<p>So a super simple example of just printing the results of p4 changes would be something like this:</p>
<pre><code>from P4 import P4
p4 = P4()
p4.connect()
changelists = p4.run("changes")
print(changelists)
p4.disconnect()
</code></pre>
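<p>By default P4Python runs in tagged mode, so <code>p4.run</code> returns a list of dictionaries rather than raw text. A small sketch of iterating over the result (the <code>"change"</code> and <code>"desc"</code> keys come from p4's tagged output):</p>
<pre><code>for change in p4.run("changes", "-m", "5"):   # the 5 most recent changelists
    print(change["change"], change["desc"])
</code></pre>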
<p>There are other ways to run certain commands and some extra steps for things like submitting changelists so be sure to check the documentation: <a href="https://www.perforce.com/manuals/p4python/Content/P4Python/python.p4.html#Instance_Methods_..39" rel="nofollow noreferrer">https://www.perforce.com/manuals/p4python/Content/P4Python/python.p4.html#Instance_Methods_..39</a></p>
<p>Hopefully this helps you get started!</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Need download voice message from Telegram on Python<p>I started developing a pet project related to telegram bot. One of the points was the question, <strong>how to download a voice message from the bot?</strong></p>
<p>Task: Need to download a audiofile from telegram bot and save in project folder.</p>
<p><strong>GetUpdates</strong>
<a href="https://api.telegram.org/bot" rel="nofollow noreferrer">https://api.telegram.org/bot</a>/getUpdates:</p>
<pre><code>{"duration":2,"mime_type":"audio/ogg","file_id":"<file_id>","file_unique_id":"<file_unique_id>","file_size":8858}}}]}
</code></pre>
<p><br/>
I checked <strong><a href="https://github.com/eternnoir/pyTelegramBotAPI" rel="nofollow noreferrer">pyTelegramBotAPI</a> documentation</strong>, but I didn't find an explanation for exactly how to download the file.</p>
<p>I created the code based on the documentation:</p>
<pre><code>@bot.message_handler(content_types=['voice'])
def voice_processing(message):
file_info = bot.get_file(message.voice.file_id)
file = requests.get('https://api.telegram.org/file/bot{0}/{1}'.format(cfg.TOKEN, file_info.file_path))
</code></pre>
<pre><code>print(type(file), file)
------------------------------------------------------------
Output: <class 'requests.models.Response'>, <Response [200]>
</code></pre>
<p><br/>
I also found one <strong>example</strong> where the author downloaded the audio in <strong>chunks</strong>. I didn't quite understand how, but it used a <strong>similar function</strong>:</p>
<pre><code>def read_chunks(chunk_size, bytes):
while True:
chunk = bytes[:chunk_size]
bytes = bytes[chunk_size:]
yield chunk
if not bytes:
break
</code></pre>
|
<p>In the github of the project there is an <a href="https://github.com/eternnoir/pyTelegramBotAPI/blob/master/examples/download_file_example.py" rel="nofollow noreferrer">example</a> for that:</p>
<pre><code>@bot.message_handler(content_types=['voice'])
def voice_processing(message):
file_info = bot.get_file(message.voice.file_id)
downloaded_file = bot.download_file(file_info.file_path)
with open('new_file.ogg', 'wb') as new_file:
new_file.write(downloaded_file)
</code></pre>
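<p>For completeness, the <code>requests</code>-based attempt from the question was nearly there as well; the raw bytes of the response live in <code>.content</code>, so you could finish it like this (a sketch reusing the question's variables):</p>
<pre><code>@bot.message_handler(content_types=['voice'])
def voice_processing(message):
    file_info = bot.get_file(message.voice.file_id)
    file = requests.get('https://api.telegram.org/file/bot{0}/{1}'.format(cfg.TOKEN, file_info.file_path))
    with open('new_file.ogg', 'wb') as new_file:
        new_file.write(file.content)   # Response.content is the body as bytes
</code></pre>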
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Broadcasting a M*D matrix to N*D matrix in python (D is greater than 1, M>N)<p>I would like to subtract the rows of a MXD matrix from a NXD matrix (D is greater than 1, M > N) without using any for loops in Python. e.g. suppose I want to subtract the rows of a 100*25 matrix from the rows of a 20*25 matrix. How to write the code without for loops (I know I can do it using broadcasting but can't seem to code).</p>
|
<p>Method 1:</p>
<pre><code>import numpy as np

def subtract(A, B):
m = A.shape[0]
n = B.shape[0]
C = np.empty_like(A)
for i in range(m // n):
C[i*n : (i+1)*n] = A[i*n : (i+1)*n] - B
return C
</code></pre>
<p>Method 2:</p>
<pre><code>def subtract(A, B):
m = A.shape[0]
n = B.shape[0]
return A - np.tile(B, (m // n, 1))
</code></pre>
<p>Method 3:</p>
<pre><code>def subtract(A, B):
    reps = A.shape[0] // B.shape[0]   # how many copies of B are needed (the original hardcoded 5 only fit the 100/20 example)
    B_ = np.repeat(B, reps).reshape(B.size, -1).T.reshape(-1, B.shape[1])
    return A - B_
</code></pre>
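<p>Since the question explicitly asks for broadcasting: assuming, as the methods above do, that <code>m</code> is a multiple of <code>n</code>, reshaping <code>A</code> into blocks lets NumPy broadcast <code>B</code> with no tiling and no Python loop:</p>
<pre><code>def subtract(A, B):
    m, n = A.shape[0], B.shape[0]
    # view A as (m//n, n, D); B broadcasts across the leading axis
    return (A.reshape(m // n, n, -1) - B).reshape(m, -1)
</code></pre>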
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
shutil.copy2 gives "SameFileError" altho files are not at all the same - why?<pre><code> File "C:\WPy64-3810\python-3.8.1.amd64\lib\shutil.py", line 239, in copyfile
raise SameFileError("{!r} and {!r} are the same file".format(src, dst))
SameFileError: 'G:\\My Drive\\xxxxxxxxxxxx\\Customers (CR, Kit, & Consulting)\\xxxxx\\reports\\old drafts\\Rxxxxxxxx-1E0 (canceled pilot).doc' and
'G:\\Shared drives\\Studies sorted by model\\Executed - updated 2020-03-22 15h05m55s\\EAE in C57BL_6 mice, therapeutic\\MOG35-55\\Rxxxxxxxx-1E0 (canceled pilot) xxxxx__Therapeutic EAE studies in C57BL_6 mice.doc' are the same file
</code></pre>
<p>What the heck is happening here? </p>
<p>Python 3.8, x64, Windows - the two files it prints are clearly not at all the same, yet it says "SameFileError".</p>
<p>I've redacted the path with "xxxxx" in a few places (these are customer files). And inserted a newline to make the source/dest filenames line up (easier to compare).</p>
<p>FWIW, both the source and destination filepaths are on Google Drive File Stream (G:); that may have something to do with it.</p>
|
<p>It's a bug related to how shutil reads a Google Drive File Stream file system.</p>
<p>See here:
<a href="https://bugs.python.org/issue33935" rel="nofollow noreferrer">https://bugs.python.org/issue33935</a></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to get xml elements which have childs with a certain tag and attribute<p>I want to find xml elements which have certain child elements. The child elements need to have a given tag and an attribute set to a specific value.</p>
<p>To give a concrete example (based on the <a href="https://docs.python.org/3/library/xml.etree.elementtree.html#parsing-xml" rel="nofollow noreferrer">official documentation</a>). I want to find all <code>country</code> elements which have a child element <code>neighbor</code> with attribute <code>name="Austria"</code>:</p>
<pre><code>import xml.etree.ElementTree as ET
data = """<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<neighbor name="Malaysia" direction="N"/>
<partner name="Austria"/>
</country>
<country name="Panama">
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>
"""
root = ET.fromstring(data)
</code></pre>
<p>What I've tried without success:</p>
<pre><code>countries1 = root.findall('.//country[neighbor@name="Austria"]')
countries2 = root.findall('.//country[neighbor][@name="Austria"]')
countries3 = root.findall('.//country[neighbor[@name="Austria"]]')
</code></pre>
<p>which all give:</p>
<blockquote>
<p>SyntaxError: invalid predicate</p>
</blockquote>
<hr />
<p>Following solutions are obviously wrong, as too much elements are found:</p>
<pre><code>countries4 = root.findall('.//country/*[@name="Austria"]')
countries5 = root.findall('.//country/[neighbor]')
</code></pre>
<p>where <code>countries4</code> contains all elements having an attribute <code>name="Austria"</code>, but including the <code>partner</code> element. <code>countries5</code> contains all elements which have <em>any</em> neighbor element as a child.</p>
|
<blockquote>
<p>I want to find all country elements which have a child element neighbor with attribute name="Austria"</p>
</blockquote>
<p>The XPath subset built into <code>xml.etree.ElementTree</code> is limited and rejects nested predicates such as <code>[neighbor[@name="Austria"]]</code> (hence the <em>invalid predicate</em> error). A plain list comprehension works instead; see below.</p>
<pre><code>import xml.etree.ElementTree as ET
data = """<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<neighbor name="Malaysia" direction="N"/>
<partner name="Austria"/>
</country>
<country name="Panama">
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>
"""
root = ET.fromstring(data)
countries_with_austria_as_neighbor = [c.attrib['name'] for c in root.findall('.//country') if
'Austria' in [n.attrib['name'] for n in c.findall('neighbor')]]
print(countries_with_austria_as_neighbor)
</code></pre>
<p>output</p>
<pre><code>['Liechtenstein']
</code></pre>
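<p>If adding a dependency is an option: <code>lxml</code> implements full XPath 1.0, so the nested predicate that ElementTree rejects should work directly. A minimal sketch with the same <code>data</code>:</p>
<pre><code>from lxml import etree

root = etree.fromstring(data.encode())   # encode to bytes; safest when an XML declaration is present
countries = root.xpath('.//country[neighbor/@name="Austria"]')
print([c.get('name') for c in countries])   # ['Liechtenstein']
</code></pre>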
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Append inner 0 & 1st index to all elements in 2nd index two-dimensional List of Lists - python<p>Hello new to python here... wondering what the best way is to solve a problem like this. </p>
<p>I have a 2d array that look something like this:</p>
<pre><code>a = [['October 17', 'Manhattan', '10024, 10025, 10026'],
['October 17', 'Queen', '11360, 11362, 11365, 11368']]
</code></pre>
<p>Would like to iterate over this to create a new list or data frame that looks like the following:</p>
<pre><code>10024, October 17, Manhattan
10025, October 17, Manhattan
10026, October 17, Manhattan
11360, October 17, Queens
11362, October 17, Queens
11365, October 17, Queens
11368, October 17, Queens
</code></pre>
<p>Any insight would be greatly appreciated. </p>
<p>Thank you. </p>
|
<p>You can iterate over the rows, and for each row iterate over the comma-separated ids it contains:</p>
<pre><code>import pandas as pd

values = [['October 17', 'Manhattan', '10024, 10025, 10026'],
          ['October 17', 'Queens', '11360, 11362, 11365, 11368']]
result = [[int(idx), row[0], row[1]]
          for row in values
          for idx in row[2].split(',')]   # int() tolerates the leading space
df = pd.DataFrame(result, columns=['idx', 'date', 'place'])
</code></pre>
<p>To obtain</p>
<pre><code>[[10024, 'October 17', 'Manhattan'], [10025, 'October 17', 'Manhattan'],
[10026, 'October 17', 'Manhattan'], [11360, 'October 17', 'Queens'],
[11362, 'October 17', 'Queens'], [11365, 'October 17', 'Queens'],
[11368, 'October 17', 'Queens']]
idx date place
0 10024 October 17 Manhattan
1 10025 October 17 Manhattan
2 10026 October 17 Manhattan
3 11360 October 17 Queens
4 11362 October 17 Queens
5 11365 October 17 Queens
6 11368 October 17 Queens
</code></pre>
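<p>If the data is headed for a DataFrame anyway, a pandas-only variant does the same thing (assuming pandas &gt;= 0.25 for <code>explode</code>; the column names are just illustrative):</p>
<pre><code>import pandas as pd

a = [['October 17', 'Manhattan', '10024, 10025, 10026'],
     ['October 17', 'Queens', '11360, 11362, 11365, 11368']]

df = pd.DataFrame(a, columns=['date', 'place', 'idx'])
df = df.assign(idx=df['idx'].str.split(',')).explode('idx')  # one row per zip code
df['idx'] = df['idx'].str.strip().astype(int)
df = df[['idx', 'date', 'place']].reset_index(drop=True)
</code></pre>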
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pandas - calculate monthly average from data with mixed frequencies<p>Suppose I have a dataset consisting of monthly, quarterly and annual average occurrences of an event:</p>
<pre><code>multi_index = pd.MultiIndex.from_tuples([("2022-01-01", "2022-12-31"),
("2022-01-01", "2022-03-30"),
("2022-03-01", "2022-03-30"),
("2022-04-01", "2022-04-30")])
multi_index.names = ['period_begin', 'period_end']
df = pd.DataFrame(np.random.randint(10, size=4), index=multi_index)
df
0
period_begin period_end
2022-01-01 2022-12-31 4
2022-03-30 3
2022-03-01 2022-03-30 5
2022-04-01 2022-04-30 8
</code></pre>
<p>I want to calculate the monthly averages as a (simple) sum of these overlapping data. For instance, the mean in March 2022 should be equal to the sum of the observations March-2022, Q1-2022 and Y-2022. For April 2022, it's the sum of April-2022 and Y-2022 (Q2-2022 does not show up and has no observation). In the end, what I would like to have is:</p>
<pre><code>month_begin Monthly_Avg
2022-01-01 7
2022-02-01 7
2022-03-01 12
2022-04-01 15
...
2022-12-01 4
</code></pre>
<p>I tried <code>pd.Grouper()</code> but it didn't work. Does anybody have an idea? I would be grateful!</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>date_range</code></a> in list comprehension for months values, create DataFrame and aggregate <code>sum</code>:</p>
<pre><code>L = [(x, v) for (s, e), v in df[0].items() for x in pd.`(s, e, freq='MS')]
df = (pd.DataFrame(L, columns=['month_begin','Data'])
.groupby('month_begin', as_index=False)['Data']
.sum())
print (df)
month_begin Data
0 2022-01-01 7
1 2022-02-01 7
2 2022-03-01 12
3 2022-04-01 12
4 2022-05-01 4
5 2022-06-01 4
6 2022-07-01 4
7 2022-08-01 4
8 2022-09-01 4
9 2022-10-01 4
10 2022-11-01 4
11 2022-12-01 4
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
In R, Error for No Boto3 to connect Athena even though Boto3 Installed<p>I am trying to connect to Athena from R. After setup 'RAthena' and connection, I got this error:</p>
<pre><code>Error: Boto3 is not detected please install boto3 using either: `pip install boto3` in terminal or `install_boto()`.
Alternatively `reticulate::use_python` or `reticulate::use_condaenv` will have to be used if boto3 is in another environment.
</code></pre>
<p>So by using <code>pip install</code>, I installed <code>boto3</code> in both Python 2 and Python 3.</p>
<pre><code>Requirement already up-to-date: boto3 in ./Library/Python/2.7/lib/python/site-packages (1.12.39)
</code></pre>
<pre><code>Requirement already satisfied: boto3 in ./Library/Python/3.7/lib/python/site-packages (1.12.39)
</code></pre>
<p>But in <code>R</code>, I am still having the same error. Then I tried using <code>install_boto()</code> in <code>R</code>.
It tells me to do as follow:</p>
<pre><code>Installation complete. Please restart R.
</code></pre>
<p>Then I would stay in this <code>Restarting R session...</code> output forever and never see any note for successful restart.
And at the end, <code>R</code> still can't detect <code>boto3</code>.</p>
|
<p>Really sorry to hear you are having an issue with the <code>RAthena</code> package. Can you let me know what version of the package you are running?</p>
<p>Have you tried setting which python you are using through <code>reticulate</code>? For example:</p>
<pre><code>library(DBI)
# specifying python conda environment
reticulate::use_condaenv("RAthena")
# Or specifying a python virtual environment
reticulate::use_virtualenv("RAthena")
con <- dbConnect(RAthena::athena())
</code></pre>
<p>Can you also check if <code>numpy</code> is installed? I remember <code>reticulate</code> binds to python environments better when <code>numpy</code> is a part of them.</p>
<p>Alternatively you can use <a href="https://github.com/DyfanJones/noctua" rel="nofollow noreferrer"><code>noctua</code></a>. <code>noctua</code> works exactly the same as <code>RAthena</code> but instead of using python's <code>boto3</code> it uses R's <code>paws</code> package.</p>
<p>If you are still struggling I can raise this as an issue on Github. I thought I had resolved this issue by adding <code>numpy</code> to the installation function <code>install_boto</code>, however I am happy to re-open this issue.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to append value to list in a row based on another column python?<p>I have a dataframe that looks like:</p>
<pre><code> body label
the sky is blue [noun]
the apple is red. [noun]
Let's take a walk [verb]
</code></pre>
<p>I want to add an item to the list in label depending on if there is a color in the body column.</p>
<p>Desired Output:</p>
<pre><code> body label
the sky is blue [noun, color]
the apple is red. [noun, color]
Let's take a walk [verb]
</code></pre>
<p>I have tried:</p>
<pre><code>data.loc[data.body.str.contains("red|blue"), 'label'] = data.label.str.append('color')
</code></pre>
|
<p>One option is to use <code>apply</code> on the Series and then directly append to list:</p>
<pre><code>data.loc[data.body.str.contains('red|blue'), 'label'].apply(lambda lst: lst.append('color'))  # append mutates each list in place
data
body label
0 the sky is blue [noun, color]
1 the apple is red. [noun, color]
2 Let's take a walk [verb]
</code></pre>
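<p>A caveat: the <code>apply</code> above works only because <code>list.append</code> mutates each list in place (the return value of <code>apply</code>, a Series of <code>None</code>, is simply discarded). If you'd rather not rely on that side effect, an assignment-based variant:</p>
<pre><code>mask = data.body.str.contains('red|blue')
data.loc[mask, 'label'] = data.loc[mask, 'label'].apply(lambda lst: lst + ['color'])  # builds new lists
</code></pre>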
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pandas: Splitting datetime into weekday, month, hour columns<p>I have a dataset with date-time values like this,</p>
<pre><code> datetime
0 2012-04-01 07:00:00
. .
. .
</code></pre>
<p>I would like to create separate columns of <strong>weekday, hour, month</strong> like,</p>
<pre><code> datetime weekday_1 ... weekday_7 hour_1 ... hour_7 ... hour_24 month_1 ... month_4 ... month_12
0 2012-04-01 07:00:00 0 1 0 1 0 0 1 0
</code></pre>
<p><em>(taking monday as weekday_1, the example date is sunday: weekday_7)</em></p>
<p>The only way I know how to extract from datetime is this,</p>
<pre><code>df['month'] = df['datetime'].dt.month
</code></pre>
<p>But I cannot seem to apply this to fit my problem.</p>
<p>Sorry if this sounds repetitive, i am fairly new to this. But similar question answers were not helpful enough.
Thanks in advance.</p>
|
<p>Create a custom function:</p>
<pre><code># Use {i:02} to get a number on two digits
cols = [f'weekday_{i}' for i in range(1, 8)] \
     + [f'hour_{i}' for i in range(1, 25)] \
     + [f'month_{i}' for i in range(1, 13)]

def get_dummy(dt):
    l = [0] * (7 + 24 + 12)
    l[dt.weekday()] = 1             # weekday slots: indices 0-6 (Monday=0 -> weekday_1)
    l[7 + (dt.hour - 1) % 24] = 1   # hour slots: indices 7-30 (hour 0 maps to hour_24)
    l[30 + dt.month] = 1            # month slots: indices 31-42
    return pd.Series(dict(zip(cols, l)))

df = df.join(df['datetime'].apply(get_dummy))
</code></pre>
<p>Output:</p>
<pre><code>>>> df.iloc[0]
datetime 2012-04-01 07:00:00
weekday_1 0
weekday_2 0
weekday_3 0
weekday_4 0
weekday_5 0
weekday_6 0
weekday_7 1 # <- Sunday
hour_1 0
hour_2 0
hour_3 0
hour_4 0
hour_5 0
hour_6 0
hour_7 1 # <- 07:00
hour_8 0
hour_9 0
hour_10 0
hour_11 0
hour_12 0
hour_13 0
hour_14 0
hour_15 0
hour_16 0
hour_17 0
hour_18 0
hour_19 0
hour_20 0
hour_21 0
hour_22 0
hour_23 0
hour_24 0
month_1 0
month_2 0
month_3 0
month_4 1 # <- April
month_5 0
month_6 0
month_7 0
month_8 0
month_9 0
month_10 0
month_11 0
month_12 0
Name: 0, dtype: object
</code></pre>
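<p>As an alternative sketch, <code>pd.get_dummies</code> can build the columns with much less bookkeeping. Be aware it only creates columns for values that actually occur in the data, so reindex the result against the full column list if you need all 43 columns present (and the mapping of hour 0 to <code>hour_24</code> is my assumption):</p>
<pre><code>import pandas as pd

dt = df['datetime'].dt
parts = pd.DataFrame({
    'weekday': dt.weekday + 1,        # pandas: Monday=0, so +1 gives weekday_1..7
    'hour': dt.hour.replace(0, 24),   # assumption: hour_24 denotes midnight
    'month': dt.month,
})
df = df.join(pd.get_dummies(parts, columns=['weekday', 'hour', 'month']))
</code></pre>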
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Resuming debug from the middle in pycharm<p>This should be a common issue and I believe it should have been asked somewhere! But I couldn't find a wording that leads my search to an answer!</p>
<p>Suppose you have a python program that runs for 1 hour! The issue that you want to debug and ultimately rerun (possibly in several rounds) happens after 45 mins! Is there a way to kinda Cache or save your variables in debugger space, and rerun the program from that point onwards? (Especially for Python/Pycharm). I already thought of Pickling my variables but first, there are too many variables and second, not all objects can be pickled in python!</p>
|
<p><strong>Proper way</strong>:
You can accomplish this with the <a href="https://docs.python.org/3/library/pdb.html" rel="nofollow noreferrer">post-mortem functionality from pdb</a>. <a href="https://paris-swc.github.io/python-testing-debugging-profiling/07-debugging-post-mortem.html" rel="nofollow noreferrer">[more info]</a>. This lets you use the debugger after an exception is raised.</p>
<p><strong>Quick and dirty way</strong>: This isn't the best way to do it, but a quick way to post-mortem debug is to place the part of the code where you want to check the variables in a try-except block with a breakpoint set in the except block. If you do this with the built-in <code>breakpoint()</code> function, you can play with the variables and then continue execution by entering <code>c</code> at the debugger prompt.</p>
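<p>A minimal sketch of the quick-and-dirty variant (the function name is just a placeholder for your long-running step):</p>
<pre><code>def long_computation():
    ...   # the ~45-minute step that misbehaves

try:
    result = long_computation()
except Exception:
    breakpoint()   # drops into pdb with all locals intact; type 'c' to continue
    raise
</code></pre>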
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Split a dictionary to explictly call out 'Key' : dict.keys() and "Value' : dict.values() for JSON data going into an API<p>I'm currently working with the CampaignMonitor API to support with email subscription management and email marketing. I need to send JSON data through the API to make any changes to the subscription list. My current code looks like the following,</p>
<pre><code>df = test_json.groupby(['EmailAddress', 'Name']).apply(lambda x : x[['Location', 'location_id', 'customer_id', 'status', 'last_visit']].astype(str).to_dict(orient = 'records'))
df = df.reset_index().rename(columns = {0 : 'CustomFields'})
#df['CustomFields'].apply(lambda x : print(x[0].items()))
</code></pre>
<p>Which returns the following</p>
<pre><code>{'EmailAddress' : 'fake@gmail.com', 'Name' : 'John Smith', 'CustomFields' : [{'Location': 'H6GO', 'location_id': 'D8047', 'customer_id': '2963', 'status': 'Active', 'last_visit': '2020-06-23'}]}
</code></pre>
<p>However, the Campaign Monitor API specifically wants for the CustomFields to contain an explicit call out for the key and value of each dictionary pairing. Is there a simple way to expand an existing dictionary and create a sub dictionary within a dictionary calling out the the key and value? Right now, I'm thinking of using apply to do it row by row but am struggling for how to break these down so the key and value callouts remain in one dictionary,</p>
<pre><code>{'EmailAddress' : 'fake@gmail.com', 'Name' : 'John Smith', 'CustomFields' : [{'Key' : 'Location', 'Value' : 'H6GO'},
{'Key' : 'location_id', 'Value' : 'D8047'},
{'Key' : 'customer_id', 'Value' : '2963'},
{'Key' : 'status', 'Value' : 'Active'},
{'Key' : 'last_visit', 'Value' : '2020-06-23'}
]}
</code></pre>
|
<p>Try this:</p>
<pre><code>d["CustomFields"] = [{"key": k, "value": v} for k,v in d["CustomFields"][0].items()]
</code></pre>
<p>output:</p>
<pre><code>{'EmailAddress': 'fake@gmail.com',
'Name': 'John Smith',
'CustomFields': [{'key': 'Location', 'value': 'H6GO'},
{'key': 'location_id', 'value': 'D8047'},
{'key': 'customer_id', 'value': '2963'},
{'key': 'status', 'value': 'Active'},
{'key': 'last_visit', 'value': '2020-06-23'}]}
</code></pre>
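<p>To apply that transformation to every row of the dataframe rather than a single dict, something like this should work (flattening over all records in case a group ever contains more than one):</p>
<pre><code>df['CustomFields'] = df['CustomFields'].apply(
    lambda records: [{'Key': k, 'Value': v}
                     for rec in records
                     for k, v in rec.items()]
)
</code></pre>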
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
What is the most efficient way to edit the values in a list of dictionaries?<p>I have multiple dictionaries inside the list, what is an efficient and possible way to update and limit all the float values to only two decimal points?
For example: Make the value of <code>'AmazonEC2': 22.740000000000002</code> to <code>'AmazonEC2': 22.74</code></p>
<pre><code>[{
'AmazonEC2': 22.740000000000002,
'awskms': 0.09,
'AmazonDynamoDB': 6.740000000000002,
'AmazonElastiCache': 0.01,
'AmazonS3': 5.54,
'AmazonCloudWatch': 1.08,
'AWSAmplify': 0.55,
'AmazonRDS': 0.01
}, {
'awskms': 0.740000000000003,
'AmazonS3': 5.740000000000004,
'AmazonCloudWatch': 1.740000000000003,
'AmazonDynamoDB': 6.740000000000006,
'AmazonEC2': 22.740000000000002,
'AWSAmplify': 0.49,
'AmazonRDS': 0.01,
'AmazonElastiCache': 0.01
}]
</code></pre>
|
<pre><code>for item in your_list:
    for key in item:                     # iterating a dict yields its keys
        item[key] = round(item[key], 2)  # round in place to two decimals
</code></pre>
<p>The outer loop runs over the list; each dict lookup and assignment is <code>O(1)</code>, so the whole pass is linear in the total number of values.</p>
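<p>If you'd rather leave the original list untouched, a nested comprehension builds rounded copies instead:</p>
<pre><code>rounded = [{k: round(v, 2) for k, v in d.items()} for d in your_list]
</code></pre>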
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to get a column name to display even when there is no data for that column?<p><strong>I want to get the names of the columns of a pandas dataframe even when I don't have data according to certain search criteria</strong></p>
<p><strong>that's how it is now:</strong></p>
<pre><code>Empty DataFrame
Columns: []
Index: []
</code></pre>
<hr />
<p><strong>as I want it to be:</strong></p>
<pre><code>Empty DataFrame
Columns: [column_name]
Index: []
</code></pre>
<p><em><strong>I select the data directly from the MYSQL DB, I do not save it in memory !</strong></em></p>
|
<p>I am not sure of your need, but if you want to have a dataframe with column names, you can initialize it with the column names :</p>
<pre class="lang-py prettyprint-override"><code> df = pd.DataFrame(columns=['A', 'B', 'C'])
</code></pre>
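<p>If your query can come back with no columns at all, you can also reindex the empty result against the columns you expect. This is a sketch; <code>expected_cols</code> is a placeholder for whatever your SELECT returns:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

expected_cols = ['column_name']   # placeholder: the columns your query selects
df = pd.DataFrame()               # stand-in for your empty query result
df = df.reindex(columns=expected_cols)
print(df)
# Empty DataFrame
# Columns: [column_name]
# Index: []
</code></pre>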
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to add hex bytes in a string?<p>I have this string of hex bytes, separated by spaces:</p>
<pre><code>byteString = "7E 00 0A 01 01 50 01 00 48 65 6C 6C 6F"
</code></pre>
<p>How to add bytes in this way:</p>
<pre><code>01 + 01 + 50 + 01 + 00 + 48 + 65 + 6C + 6C + 6F = 247
</code></pre>
|
<p>Considering that you have the <em>hex</em> sequence as a <em>str</em> (<em>bytes</em>), what you need to do is:</p>
<ul>
<li>Split the sequence in smaller strings each representing a byte (2 <em>hex</em> digits): "<em>7E</em>", "<em>00</em>", ...</li>
<li>Convert each such string to the integer value corresponding to the <em>hex</em> representation (the result will be a list of integers)</li>
<li>Add the desired values (ignoring the 1<sup>st</sup> 3)</li>
</ul>
<blockquote>
<pre class="lang-py prettyprint-override"><code>>>> byte_string = "7E 00 0A 01 01 50 01 00 48 65 6C 6C 6F"
>>>
>>> l = [int(i, 16) for i in byte_string.split(" ")] # Split and conversion to int done in one step
>>> l
[126, 0, 10, 1, 1, 80, 1, 0, 72, 101, 108, 108, 111]
>>>
>>> [hex(i) for i in l] # The hex representation of each element (for checking only)
['0x7e', '0x0', '0xa', '0x1', '0x1', '0x50', '0x1', '0x0', '0x48', '0x65', '0x6c', '0x6c', '0x6f']
>>>
>>> s = sum(l[3:])
>>>
>>> s
583
>>> hex(s)
'0x247'
</code></pre>
</blockquote>
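<p>The same thing as a compact one-liner, skipping the first three bytes:</p>
<pre class="lang-py prettyprint-override"><code>total = sum(int(b, 16) for b in byte_string.split()[3:])
print(total, hex(total))   # 583 0x247
</code></pre>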
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
ValueError building a neural network with 2 outputs in Keras<p>I tried to build a network having a single input X (a 2-dimensions matrix of size Xa*Xb) and 2 outputs Y1 and Y2 (both in 1 dimension). Even though it isn't the case in the code I posted below, Y1 is supposed to be a classifier that outputs a one-hot vector and Y2 is supposed to be for regression (the original code raised the same error).</p>
<p>When training the network I get the following error:</p>
<p><code>ValueError: Shapes (None, None) and (None, 17, 29) are incompatible</code></p>
<p>Obviously, <code>(None, 17, 29)</code> translates to <code>(None, size_Xa, size_Y1)</code>, and I don't understand why Xa and Y1 should be related (independantly from Xb) in the first place.</p>
<p>Here is my code. I tried to reduce it to the minimum in order to make it easier to understand.</p>
<pre><code>import numpy as np
from keras.layers import Dense, LSTM, Input
from keras.models import Model
def dataGenerator():
while True:
yield makeBatch()
def makeBatch():
"""generates a batch of artificial training data"""
x_batch, y_batch = [], {}
x_batch = np.random.rand(batch_size, size_Xa, size_Xb)
#x_batch = np.random.rand(batch_size, size_Xa)
y_batch['output1'] = np.random.rand(batch_size, size_Y1)
y_batch['output2'] = np.random.rand(batch_size, size_Y2)
return x_batch, y_batch
def generate_model():
input_layer = Input(shape=(size_Xa, size_Xb))
#input_layer = Input(shape=(size_Xa))
common_branch = Dense(128, activation='relu')(input_layer)
branch_1 = Dense(size_Y1, activation='softmax', name='output1')(common_branch)
branch_2 = Dense(size_Y2, activation='relu', name='output2')(common_branch)
model = Model(inputs=input_layer,outputs=[branch_1,branch_2])
losses = {"output1":"categorical_crossentropy", "output2":"mean_absolute_error"}
model.compile(optimizer="adam",
loss=losses,
metrics=['accuracy'])
return model
batch_size=5
size_Xa = 17
size_Xb = 13
size_Y2 = 100
size_Y1 = 29
model = generate_model()
model.fit( x=dataGenerator(),
steps_per_epoch=50,
epochs=15,
validation_data=dataGenerator(), validation_steps=50, verbose=1)
</code></pre>
<p>If I uncomment the 2 commented lines in makeBatch and generate_model, the error disappears. So if the input X is in 1 dimension it runs, but when I change it to 2 dimensions (keeping everything else the same) the error appears.</p>
<p>Is this related to the architecture with 2 outputs? I think there is something I'm missing here, any help is welcome.</p>
<p>I add the full error log for reference:</p>
<pre><code>Epoch 1/15
Traceback (most recent call last):
File "neuralnet_minimal.py", line 41, in <module>
model.fit( x=dataGenerator(),
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 848, in fit
tmp_logs = train_function(iterator)
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 505, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2657, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:532 train_step **
loss = self.compiled_loss(
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:205 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:143 __call__
losses = self.call(y_true, y_pred)
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:246 call
return self.fn(y_true, y_pred, **self._fn_kwargs)
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:1527 categorical_crossentropy
return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits)
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/keras/backend.py:4561 categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
/path/of/my/project/venv/lib/python3.8/site-packages/tensorflow/python/framework/tensor_shape.py:1117 assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (None, None) and (None, 17, 29) are incompatible
</code></pre>
|
<p>Strangely enough, the error disappears when I add a <code>Flatten()</code> layer before the network splits... It has to do with the shape of the network but I still don't get the real reason behind all of this.</p>
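<p>My understanding of the cause (treat this as a sketch of the reasoning rather than gospel): <code>Dense</code> only operates on the last axis. With an input of shape <code>(None, 17, 13)</code> the common branch becomes <code>(None, 17, 128)</code> and <code>output1</code> becomes <code>(None, 17, 29)</code>, while the one-hot targets are <code>(None, 29)</code>; those are exactly the shapes in the error. <code>Flatten()</code> collapses the extra axis before the split:</p>
<pre><code>from keras.layers import Dense, Flatten, Input
from keras.models import Model

input_layer = Input(shape=(size_Xa, size_Xb))
flat = Flatten()(input_layer)                        # (None, 17*13)
common_branch = Dense(128, activation='relu')(flat)  # (None, 128)
branch_1 = Dense(size_Y1, activation='softmax', name='output1')(common_branch)
branch_2 = Dense(size_Y2, activation='relu', name='output2')(common_branch)
model = Model(inputs=input_layer, outputs=[branch_1, branch_2])
</code></pre>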
<p>I will mark this as correct answer as it solves the problem, unless someone else posts something. Please tell me if this is not the right way to do it.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
how do I pass the √ untouched<p>is it possible to pass the <code>√</code> through this untouched or am i asking too much</p>
<pre><code>import urllib.request
path = 'html'
links = 'links'
with open(links, 'r', encoding='UTF-8') as links:
for link in links: #for each link in the file
print(link)
with urllib.request.urlopen(link) as linker: #get the html
print(linker)
with open(path, 'ab') as f: #append the html to html
f.write(linker.read())
</code></pre>
<p>links</p>
<pre><code>https://myanimelist.net/anime/27899/Tokyo_Ghoul_√A
</code></pre>
<p>output</p>
<pre><code>File "PYdown.py", line 7, in <module>
with urllib.request.urlopen(link) as linker:
File "/usr/lib64/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib64/python3.6/urllib/request.py", line 526, in open
response = self._open(req, data)
File "/usr/lib64/python3.6/urllib/request.py", line 544, in _open
'_open', req)
File "/usr/lib64/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/usr/lib64/python3.6/urllib/request.py", line 1392, in https_open
context=self._context, check_hostname=self._check_hostname)
File "/usr/lib64/python3.6/urllib/request.py", line 1349, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/usr/lib64/python3.6/http/client.py", line 1254, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib64/python3.6/http/client.py", line 1265, in _send_request
self.putrequest(method, url, **skips)
File "/usr/lib64/python3.6/http/client.py", line 1132, in putrequest
self._output(request.encode('ascii'))
UnicodeEncodeError: 'ascii' codec can't encode character '\u221a' in position 29: ordinal not in range(128)
</code></pre>
|
<p>You need to quote Unicode chars in URL. You have file which contains list of urls you need to open, so you need to split each url <em>(using <a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlsplit" rel="nofollow noreferrer"><code>urllib.parse.urlsplit()</code></a>)</em>, quote <em>(with <a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.quote" rel="nofollow noreferrer"><code>urllib.parse.quote()</code></a>)</em> host and every part of path <em>(to split paths you can use <a href="https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.parts" rel="nofollow noreferrer"><code>pathlib.PurePosixPath.parts</code></a>)</em> and then form URL back <em>(using <a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlunsplit" rel="nofollow noreferrer"><code>urllib.parse.urlunsplit()</code></a>)</em>.</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import PurePosixPath
from urllib.parse import urlsplit, urlunsplit, quote, urlencode, parse_qsl
def normalize_url(url):
splitted = urlsplit(url) # split link
path = PurePosixPath(splitted.path) # initialize path
parts = iter(path.parts) # first element always "/"
quoted_path = PurePosixPath(next(parts)) # "/"
for part in parts:
quoted_path /= quote(part) # quote each part
return urlunsplit((
splitted.scheme,
splitted.netloc.encode("idna").decode(), # idna
str(quoted_path),
urlencode(parse_qsl(splitted.query)), # force encode query
splitted.fragment
))
</code></pre>
<p>Usage:</p>
<pre class="lang-py prettyprint-override"><code>links = (
"https://myanimelist.net/anime/27899/Tokyo_Ghoul_√A",
"https://stackoverflow.com/",
"https://www.google.com/search?q=√2&client=firefox-b-d",
"http://pfarmerü.com/"
)
print(*(normalize_url(link) for link in links), sep="\n")
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>https://myanimelist.net/anime/27899/Tokyo_Ghoul_%E2%88%9AA
https://stackoverflow.com/
https://www.google.com/search?q=%E2%88%9A2&client=firefox-b-d
http://xn--pfarmer-t2a.com/
</code></pre>
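<p>If you only need a quick-and-dirty version (and your URLs don't involve IDN hosts or already-encoded query strings), quoting the whole link while whitelisting the URL structure characters may be enough. A sketch:</p>
<pre class="lang-py prettyprint-override"><code>from urllib.parse import quote

safe_link = quote(link, safe=':/?&=#')   # leaves the URL's structural characters alone
</code></pre>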
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Start index (under FIeld) from 1 with pandas DataFrame<p>I would like to start the index from 1 undes the "Field" column</p>
<pre><code>df = pd.DataFrame(list(zip(total_points, passing_percentage)),
columns =['Pts Measured', '% pass'])
df = df.rename_axis('Field').reset_index()
df["Comments"] = ""
df
</code></pre>
<p>Output:</p>
<pre><code> Field Pts Measured % pass Comments
0 0 92909 90.66
1 1 92830 91.85
2 2 130714 99.99
</code></pre>
|
<p>I found a similar question here: <a href="https://stackoverflow.com/questions/32249960/in-python-pandas-start-row-index-from-1-instead-of-zero-without-creating-additi">In Python pandas, start row index from 1 instead of zero without creating additional column</a></p>
<p>For your question, it would be as simple as adding the following line:</p>
<pre><code>df["Field"] = np.arange(1, len(df) + 1)
</code></pre>
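<p>Alternatively, since <code>Field</code> is just the reset index, you can shift the index before resetting it:</p>
<pre><code>df.index = df.index + 1
df = df.rename_axis('Field').reset_index()
</code></pre>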
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
SQLite3 in python update where multiple possible matches<p>Perhaps you can help me. I am selecting the oldest # rows in a database, then want to update the date column for each item I selected.</p>
<pre><code>from datetime import datetime
import sqlite3
conn = sqlite3.connect('testing.db')
cursor = conn.cursor()
# Create the table
cursor.execute('CREATE TABLE IF NOT EXISTS players (player_tag TEXT, update_date TEXT, UNIQUE(player_tag));')
max_player_tags = 2
# Insert some data with old dates
arr = [['tag1','20200123T05:06:07'], ['tag2','20200123T05:06:07'], ['tag3','20200123T05:06:07'], ['tag4', datetime.now().isoformat()]]
cursor.executemany('INSERT OR IGNORE INTO PLAYERS values (?, ?)', arr)
conn.commit()
#Select the oldest 2 items
old_tags = [i[0] for i in cursor.execute('SELECT player_tag FROM players ORDER BY update_date DESC LIMIT 2')]
print(old_tags)
#Now update the dates to now
cursor.execute('UPDATE players SET update_date = datetime("now") WHERE player_tag in %s' % old_tags)
print([i for i in 'SELECT * FROM players'])
cursor.close()
conn.close()
</code></pre>
<p>The error I get is</p>
<pre><code> cursor.execute('UPDATE players SET update_date = datetime("now") WHERE player_tag in %s' % old_tags)
sqlite3.OperationalError: no such table: 'tag1', 'tag2'
['tag1', 'tag2']
</code></pre>
<p>I have also tried:</p>
<pre><code>cursor.executemany('INSERT OR IGNORE INTO PLAYERS values (?, ?) ON DUPLICATE KEY UPDATE', upd_arr)
</code></pre>
<p>Any ideas?</p>
|
<p>I would just use a single query here:</p>
<pre><code>sql = """UPDATE players
SET update_date = datetime("now")
WHERE player_tag IN (SELECT player_tag FROM players
ORDER BY update_date DESC LIMIT 2)"""
cursor.execute(sql)
</code></pre>
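<p>One thing worth double-checking: <code>ORDER BY update_date DESC</code> picks the <em>newest</em> rows, and the question says you want the oldest ones. If so, switch to <code>ASC</code>, and remember to commit afterwards:</p>
<pre><code>sql = """UPDATE players
            SET update_date = datetime("now")
          WHERE player_tag IN (SELECT player_tag FROM players
                                ORDER BY update_date ASC LIMIT 2)"""
cursor.execute(sql)
conn.commit()   # persist the change
</code></pre>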
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
if an item in a list doesn't match a column name in a data frame, produce exception statement<p>I have the following code which creates a list, takes inputs of column names the user wants, then a for loop applies each list attribute individually to check in the if statement if the user input matches the columns in the data frame.</p>
<p>Currently this produces an exception handling statement if all inputs to the list are unmatching, but if item in the list matches the column in the dataframe but others do not, then jupyter will produce its own error message "KeyError: "['testcolumnname'] not in index", because it is trying to move onto the else part of my statement and create the new dataframe with this but it cant (because those columns do not exist)</p>
<p>I want it to be able to produce this error message 'Attribute does not exist in Dataframe. Make sure you have entered arguments correctly.' if even 1 inputted list attribute does not match the dataframe and all other do. But Ive been struggling to get it to do that, and it produces this KeyError instead.</p>
<p>My code:</p>
<pre><code> lst = []
lst = [item for item in str(input("Enter your attributes here: ")).lower().split()]
for i in lst:
if i not in df.columns:
print('Attribute does not exist in Dataframe. Make sure you have entered arguments correctly.')
break
else:
df_new = df[lst]
# do other stuff
</code></pre>
<p>for example if i have a dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>NA</td>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<td>yes</td>
<td>no</td>
<td>yes</td>
</tr>
</tbody>
</table>
</div>
<p>and my list contains:</p>
<pre><code>['A','B','C']
</code></pre>
<p>It works correctly, and follows the else statement, because all list items match the dataframes columns so it has no problem.</p>
<p>Or if it has this:</p>
<pre><code>['x','y','z']
</code></pre>
<p>It will give the error message I have, correctly. Because no items match the data frames items so it doesn't continue.</p>
<p>But if it is like this, where one attribute is matching the dataframe, and others not...</p>
<pre><code>['A','D','EE']
</code></pre>
<p>it gives the jupyter KeyError message but I want it to bring back the print message i created ('Attribute does not exist in Dataframe. Make sure you have entered arguments correctly.').</p>
<p>The KeyError appears on the line of my else statement: 'df_new = df[lst]'</p>
<p>Can anyone spot an issue i have here that will stop it from going this? Thank you all</p>
|
<p>Rather than printing the message, raise an exception; that stops execution before the failing <code>df[lst]</code> line ever runs. You also need to fix your indentation so the new DataFrame is only built after every attribute has been checked:</p>
<pre class="lang-py prettyprint-override"><code>lst = []
lst = [item for item in str(input("Enter your attributes here: ")).lower().split()]
for i in lst:
if i not in df.columns:
raise ValueError('Attribute does not exist in Dataframe. Make sure you have entered arguments correctly.')
df_new = df[lst]
# do other stuff
</code></pre>
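<p>A set difference gives the same check without an explicit loop, and lets you report all missing columns at once:</p>
<pre class="lang-py prettyprint-override"><code>missing = set(lst) - set(df.columns)
if missing:
    raise ValueError(
        f"Attributes {sorted(missing)} do not exist in Dataframe. "
        "Make sure you have entered arguments correctly."
    )
df_new = df[lst]
</code></pre>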
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pandas - merge dataframe to keep all values on left and 'insert' values from right if 'no key on left' else 'update' existing 'key' in left<p>I have two dataframes df1 and df2.</p>
<pre><code>np.random.seed(0)
df1= pd.DataFrame({'key': ['A', 'B', 'C', 'D'],'id': ['2', '23', '234', '2345'], '2021': np.random.randn(4)})
df2= pd.DataFrame({'key': ['B', 'D', 'E', 'F'], 'id': ['23', '2345', '67', '45'],'2022': np.random.randn(4)})
key id 2021
0 A 2 1.764052
1 B 23 0.400157
2 C 234 0.978738
3 D 2345 2.240893
key id 2022
0 B 23 1.867558
1 D 2345 -0.977278
2 E 67 0.950088
3 F 45 -0.151357
</code></pre>
<p>I want to have unique keys. If key found already just update the key else insert new row.
I am not sure if I have to use merge/concat/join. Can anyone give insight on this please?</p>
<p>Note:I have used full outer join, it returns duplicate columns. Have edited the input dataframes after posting the question.</p>
<p>Thanks!</p>
|
<p>You can do it using the <code>merge</code> function. Merging on both shared columns (<code>key</code> and <code>id</code>) avoids the duplicated <code>id_x</code>/<code>id_y</code> columns you saw with a plain outer join on <code>key</code> alone:</p>
<pre><code>df = df1.merge(df2, on=['key', 'id'], how='outer')
df
  key    id      2021      2022
0   A     2  1.764052       NaN
1   B    23  0.400157  1.867558
2   C   234  0.978738       NaN
3   D  2345  2.240893 -0.977278
4   E    67       NaN  0.950088
5   F    45       NaN -0.151357
</code></pre>
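<p>As a side note, when the two frames share data columns (not just keys) and you want "df2 updates df1 where keys match, inserts otherwise", <code>combine_first</code> is the usual upsert pattern. A sketch:</p>
<pre><code>out = (df2.set_index(['key', 'id'])
          .combine_first(df1.set_index(['key', 'id']))   # df2 wins where both have values
          .reset_index())
</code></pre>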
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Tweepy, a bytes-like object is required, not str. How do I fix this error?<p>Error:</p>
<pre><code>Traceback (most recent call last):
  File "C:\Users\zakar\PycharmProjects\Tweepy-bots\main.py", line 29, in <module>
    c=c.replace("im ","")
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>error <a href="https://i.stack.imgur.com/qOQH4.png" rel="nofollow noreferrer">Error picture</a></p>
<p>this is how the code looks like:</p>
<pre><code>import tweepy
import tweepy as tt
import time
import sys
import importlib
importlib.reload(sys)
#login credentials twitter account
consumer_key = '-NOT GOING TO PUT THE ACTUAL KEY IN-'
consumer_secret = '-NOT GOING TO PUT THE ACTUAL KEY IN-'
access_token = '-NOT GOING TO PUT THE ACTUAL KEY IN-'
access_secret = '-NOT GOING TO PUT THE ACTUAL KEY IN-'
#login
auth = tt.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tt.API(auth)
search_query = "barca"
user = api.me()
print(user.name)
max_tweets = 100
for tweet in tweepy.Cursor(api.search, q=search_query).items(max_tweets):
c=tweet.text.encode('utf8')
c=c.replace("im ","")
answer="@"+tweet.user.screen_name+" Hi " + c + ", I'm a bot!"
print ("Reply:",answer)
api.update_status(status=answer,in_reply_to_status_id=tweet.id)
    time.sleep(300) #every 5 minutes
</code></pre>
|
<p>The problem is that you encode the text first and then try to call <code>replace</code> with <code>str</code> arguments on the resulting bytes:</p>
<pre><code>c=tweet.text.encode('utf8')
c=c.replace("im ","")
</code></pre>
<p><code>encode()</code> returns bytes, not a string, so the arguments to <code>replace</code> must be bytes as well:</p>
<pre><code>c=tweet.text.encode('utf8')
c=c.replace(b"im ",b"")
</code></pre>
<p>Alternatively, do the replacement on the string first and only encode afterwards.</p>
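<p>In fact, since <code>api.update_status</code> accepts a plain <code>str</code>, you most likely don't need the <code>encode</code> step at all; something like:</p>
<pre><code>c = tweet.text.replace("im ", "")   # operate on the str directly
answer = "@" + tweet.user.screen_name + " Hi " + c + ", I'm a bot!"
</code></pre>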
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Remove python, can I still use the the virtual Environment<p>I created a virtual environment, for example on my computer, python3.9 -m venv myenv
if I uninstall python3.9 can I still launch my python script once myenv is activated ?</p>
|
<p>No, once you uninstall Python 3.9 you won't be able to run your script from that virtual environment anymore.</p>
<p>From <a href="https://docs.python.org/3/library/venv.html" rel="nofollow noreferrer">https://docs.python.org/3/library/venv.html</a> (venv — Creation of virtual environments, new in version 3.3):</p>
<blockquote>
<p>The venv module provides support for creating lightweight “virtual environments” with their own site directories, optionally isolated from system site directories. Each virtual environment has its own Python binary (which matches the version of the binary that was used to create this environment) and can have its own independent set of installed Python packages in its site directories.</p>
</blockquote>
<p>See PEP 405 for more information about Python virtual environments.</p>
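<p>On most POSIX systems that "own Python binary" is just a symlink to the interpreter that created the environment, which is why removing it breaks the venv. You can check from inside the activated environment (paths will differ on your machine):</p>
<pre><code>import os, sys

print(sys.executable)                     # e.g. /home/me/myenv/bin/python
print(os.path.realpath(sys.executable))   # typically resolves to the base python3.9
</code></pre>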
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to make python read input as a float?<p>I need to take an input in the following form "score/max" (Example 93/100) and store it as a float variable. The problem I run into is that python does the division indicated by the slash, and since the two numbers are integers, the result is 0. Even if I convert my input into a float the result is 0.0.
Here is my code for reference:</p>
<pre><code>#!/usr/bin/env python
exam1=float(input("Input the first test score in the form score/max:"))
</code></pre>
<p>If 93/100 is entered, exam1 variable will be equal to 0.0 instead of the intended 0.93. </p>
|
<p><strong>Note:</strong></p>
<h2><code>input()</code></h2>
<blockquote>
<p>reads a line from input, converts it to a string
(stripping a trailing newline), and returns that.</p>
</blockquote>
<p>You may want to try the following code,</p>
<pre><code>string = input("Input the first test score in the form score/max: ")
scores = string.strip().split("/")
exam1 = float(scores[0]) / float(scores[1])
print(exam1)
</code></pre>
<p>Input:</p>
<pre><code>Input the first test score in the form score/max: 93/100
</code></pre>
<p>Output:</p>
<pre><code>0.93
</code></pre>
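<p>As a side note, <code>fractions.Fraction</code> already understands the <code>"score/max"</code> notation, so the parsing can be delegated entirely:</p>
<pre><code>from fractions import Fraction

exam1 = float(Fraction(input("Input the first test score in the form score/max: ").strip()))
print(exam1)   # 93/100 -> 0.93
</code></pre>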
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Using one column values as index to list type values in another in pandas<p>We have data representing temperature forecast for every 3 hours period from the moment. We also know number of 3 hour periods after which the weather is needed. So, we have dataframe:</p>
<pre><code>import pandas as pd
d = {'T_forecast': [[11.98, 10.84, 8.74, 6.31, 4.52],[11.29, 7.87, 3.94, 5.02, 7.97],[16.22, 14.87, 11.31, 10.54, 10.72]
,[9.77, 7.54, 5.96, 2.75, 4.99],[18.61, 16.52, 13.52, 11.62, 16.44]], 'delta_hours_divided_by_3': [3, 1,2,3,1]}
df = pd.DataFrame(data=d)
print(df)
T_forecast delta_hours_divided_by_3
0 [11.98, 10.84, 8.74, 6.31, 4.52] 3
1 [11.29, 7.87, 3.94, 5.02, 7.97] 1
2 [16.22, 14.87, 11.31, 10.54, 10.72] 2
3 [9.77, 7.54, 5.96, 2.75, 4.99] 3
4 [18.61, 16.52, 13.52, 11.62, 16.44] 1
</code></pre>
<p>We need to create column with particular element from the list in 'T_forecast' columns.The result should be:</p>
<pre><code> T_forecast delta_hours_divided_by_3 T_by_the_shift_start
0 [11.98, 10.84, 8.74, 6.31, 4.52] 3 8.74
1 [11.29, 7.87, 3.94, 5.02, 7.97] 1 11.29
2 [16.22, 14.87, 11.31, 10.54, 10.72] 2 14.87
3 [9.77, 7.54, 5.96, 2.75, 4.99] 3 5.96
4 [18.61, 16.52, 13.52, 11.62, 16.44] 1 18.61
</code></pre>
<p>I can get it for particular value, with code:</p>
<pre><code>print(df['T_forecast'][0][df['delta_hours_divided_by_3'][0]-1])
8.74
</code></pre>
<p>But I struggle creating column out:</p>
<pre><code>df['T_by_the_shift_start']=df['T_forecast'][df['delta_hours_divided_by_3']-1]
ValueError: cannot reindex on an axis with duplicate labels
</code></pre>
<p>Using the for loop is not an option since the original dataframe is very large and the server will choke. What can lead to solving the issue?</p>
|
<h3>Loop solution</h3>
<pre><code>df['T_by_the_shift_start'] = [a[b - 1] for a, b in df.to_numpy()]
</code></pre>
<h3>Non-loop solution</h3>
<p>Note: the lists must all have the same length for this to work.</p>
<p>This solution only performs around 2x better on large data sets (&gt;= 500K rows):</p>
<pre><code>import numpy as np

df['T_by_the_shift_start'] = np.array([*df['T_forecast']])[range(len(df)), df['delta_hours_divided_by_3'] - 1]
</code></pre>
<hr />
<pre><code> T_forecast delta_hours_divided_by_3 T_by_the_shift_start
0 [11.98, 10.84, 8.74, 6.31, 4.52] 3 8.74
1 [11.29, 7.87, 3.94, 5.02, 7.97] 1 11.29
2 [16.22, 14.87, 11.31, 10.54, 10.72] 2 14.87
3 [9.77, 7.54, 5.96, 2.75, 4.99] 3 5.96
4 [18.61, 16.52, 13.52, 11.62, 16.44] 1 18.61
</code></pre>
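<p>One caveat on the loop solution: <code>df.to_numpy()</code> assumes the frame contains exactly those two columns in that order. Zipping the two columns explicitly is safer when other columns exist:</p>
<pre><code>df['T_by_the_shift_start'] = [
    lst[i - 1] for lst, i in zip(df['T_forecast'], df['delta_hours_divided_by_3'])
]
</code></pre>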
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Image is not updating in Django<p>Please help me. I am trying to update the profile in which username and email are updated but the image dose not. My code is....</p>
<p><strong>profile.html</strong></p>
<pre><code> <form method="POST" enctype="multipart/form-data">
{% csrf_token %}
<fieldset class="form-group">
<legend class="border-bottom mb-4">Profile Info</legend>
{{ u_form|crispy }}
{{ p_form|crispy }}
</fieldset>
<br>
<div class="form-group">
<button class="btn btn-outline-info" type="submit">Update</button>
</div>
</form>
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>@login_required
def profile(request):
if request.method == 'POST':
u_form = UserUpdateForm(request.POST, instance=request.user)
p_form = ProfileUpdateForm(request.FILES, instance=request.user.profile)
if u_form.is_valid() and p_form.is_valid():
u_form.save()
p_form.save()
messages.success(request, f'Account Successfully Updated!')
return redirect('profile')
else:
u_form = UserUpdateForm(request.POST, instance=request.user)
p_form = ProfileUpdateForm(request.POST, instance=request.user.profile)
context = {
'u_form' : u_form,
'p_form' : p_form
}
return render(request, 'users/profile.html', context)
</code></pre>
<p><strong>forms.py</strong></p>
<pre><code>class ProfileUpdateForm(forms.ModelForm):
class Meta:
model = Profile
fields = ['image']
</code></pre>
|
<p>I believe your issue lies in views.py.</p>
<p>Firstly, you are checking to see if the method for retrieving the view is POST. If it is not, you are initializing a form with the POST data that is not present. I have simplified that for you below.</p>
<p>Secondly, you are not passing the POST information to the second form, only the files portion. Have you tried changing the p_form to take both parameters like below?</p>
<pre><code>@login_required
def profile(request):
if request.method == 'POST':
u_form = UserUpdateForm(request.POST, instance=request.user)
p_form = ProfileUpdateForm(request.POST, request.FILES, instance=request.user.profile)
if u_form.is_valid() and p_form.is_valid():
u_form.save()
p_form.save()
messages.success(request, f'Account Successfully Updated!')
return redirect('profile')
else:
u_form = UserUpdateForm(instance=request.user)
p_form = ProfileUpdateForm(instance=request.user.profile)
context = {
'u_form' : u_form,
'p_form' : p_form
}
return render(request, 'users/profile.html', context)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
get chained queryset ajax django<p>I want to get a queryset through ajax request and use it in the same way I can do with django queryset in template.</p>
<p>I have the following codes:</p>
<pre><code># view.py
def ajax_get_allocates_by_date(request):
"""
ajax 요청 함수
"""
today = timezone.localdate()
date = request.GET.get('kw', today.strftime('%Y-%m-%d'))
date=parse_date(date)
d=Date.objects.get(date=date)
allocate_list = Allocate.objects.filter(date=d)
data = {
'date': serializers.serialize(
'json',
[d]
),
'allocate_list': serializers.serialize(
'json',
allocate_list
)
}
return JsonResponse(data)
</code></pre>
<p>template</p>
<pre><code><form method="get">
...
<input type="text" id="refer-date" name="refer-date" class="frm_input2 frm_date">
...
</form>
...
<script>
$('#refer-date').datepicker({
onSelect: function(dateText, inst) {
let date = $(this).val()
console.log(date)
$.ajax({
url: "{% url 'allocate:get-allocates-by-date' %}",
data: {
'kw': date
},
success: function(data) {
let date = JSON.parse(data.date)
let queryset = JSON.parse(data.allocate_list)
console.log(date)
console.log(queryset)
}
})
}
})
</script>
</code></pre>
<p>If the value of <code>#refer-date</code> is changed, I get ajax response and it is logged in console. However, my <code>Allocate</code> model has some other ForeignKey relationship, which I also want to display in the template. For now, it seems I can only render some id if the field is ForeignKey.</p>
<p>The best way would be to use django for loop as ajax response, but I'm not sure how I can achieve that.</p>
<p>Any suggestion would be highly appreciated. Thank you.</p>
|
<p>I solved it by using Django Rest Framework. Here's how.</p>
<p>Suppose I have the following models</p>
<pre><code>class Employee(models.Model):
number = models.CharField('사원번호', max_length=30, unique=True)
dept = models.ForeignKey(
'config.Department',
on_delete=models.SET_NULL,
null=True,
verbose_name='부서',
)
class Person(models.Model):
employee = models.OneToOneField(Employee, on_delete=models.CASCADE, verbose_name='사원번호')
name = models.CharField('한글', max_length=30)
</code></pre>
<p>The two models above are in the same app <code>employee</code>, whereas the <code>Department</code> model is in the <code>config</code> app. So in my terms, Employee is parent to Person (ForeignKey), and Department is parent to Employee.</p>
<p>First, <a href="https://www.django-rest-framework.org/#installation" rel="nofollow noreferrer">install rest framework and set up</a>.</p>
<p>Second, make a file in app(in my case, employee) called <code>serializers.py</code>.</p>
<p>Third, in the file I wrote the following:</p>
<pre><code>from rest_framework import serializers
from employee.models import Employee, Person
from config.models import Department
class DepartmentSerializer(serializers.ModelSerializer):
class Meta:
model = Department
fields = ('code', 'name', 'employee_set')
class EmployeeSerializer(serializers.ModelSerializer):
class Meta:
model = Employee
fields = ('number',)
class PersonSerializer(serializers.ModelSerializer):
employee = EmployeeSerializer(required=True)
department = serializers.ReadOnlyField(source='employee.dept.name')
class Meta:
model = Person
fields = ('employee', 'name', 'department')
</code></pre>
<p>One important takeaway in my opinion is <code>ReadOnlyField</code> and <code>source</code>. Basically ReadOnlyField enables to make a field that can be read like Model attribute. <code>source</code> is where you want to fetch the value from.</p>
<p>Next, set urlConf in <code>urls.py</code> like so:</p>
<pre><code>urlpatterns = [
...
path('rest/persons/', views.person_list, name='rest-persons'),
...
]
</code></pre>
<p>Finally I can send the HTTP request from my template file. I modified <code>views.py</code> as follows:</p>
<pre><code>from rest_framework.decorators import api_view
from rest_framework.response import Response
from employee.serializers import PersonSerializer
...
@login_required
@api_view(['GET'])
def person_list(request):
"""
    API that returns JSON with the employee number, name, and department name for an employee search
"""
if request.method == 'GET':
        kw = request.GET.get('kw', '')
persons = Person.objects.filter(
Q(name__contains=kw)|
Q(employee__number__contains=kw)
)
serializer = PersonSerializer(persons, many=True)
return Response(serializer.data)
</code></pre>
<p>template</p>
<pre><code><script>
$(document).ready(function() {
$.ajax({
url: '{% url "employee:rest-persons" %}',
method: 'GET',
data: {'kw': val}
}).done(function(data) {
console.log(val)
        // clear the previous query results
$('#parent').empty();
for(let i=0; i<data.length; i++) {
var number=data[i].employee.number
var department=data[i].department
var name=data[i].name
var url = "{% url "employee:detail" 1234 %}".replace(/1234/, number.toString())
$('#parent').append(
`<tr>
<td><a href=${url}>${number}</a></td>
<td><a href=${url}>${name}</a></td>
<td><a href=${url}>${department}</a></td>
</tr>`
)
}
}).fail(function(data) {
console.log('error')
})
})
</script>
</code></pre>
<p><em>Voila</em>. Now when I send a request (through the url), I can see the actual field values that I wanted to show.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to stop tkinter canvas widget / after method from skipping elements in a list?<p>I'm trying to create a line that updates every second. This update needs to show up on the screen, so you should be able to see the change in the window. For example, each of the lines below should run after a second.</p>
<pre><code>canvas.create_line(1, 2, 10, 20, smooth="true")
canvas.create_line(1, 2, 10, 20, 50, 60, smooth="true")
canvas.create_line(1, 2, 10, 20, 50, 60, 100, 110, smooth="true")
</code></pre>
<p>This is what I have so far:</p>
<pre><code>def make_line(index):
while index < len(database):
x, y, z = database[index]
# database is a list of tuples with numbers for coordinates
coordinates.append(x) # coordinates is an empty list
coordinates.append(y)
#don't need z since 2D map
index += 1
if index == 2:
# noinspection PyGlobalUndefined
global line
line = canvas.create_line(coordinates, smooth="true")
# same as canvas.create_line(1, 2, 10, 20, smooth="true")
elif index > 2:
canvas.after(1000, lambda: canvas.coords(line, coordinates))
# it's jumping from the 1st to the 4th element
else:
pass
make_line(0)
</code></pre>
<p>I believe the problem is with the canvas.after method and canvas.coords.
By the time it runs that line when index = 3, coordinates already has 1, 2, 10, 20, 50, 60, 100, and 110, when it should only have 1, 2, 10, 20, 50, and 60.
Thanks in advance.</p>
|
<p>I ended up having to move the while loop outside the function and add root.update_idletasks() and root.update() instead of having root.mainloop(). From <a href="https://stackoverflow.com/questions/29158220/tkinter-understanding-mainloop">this post</a> I learned that your program will basically stop at mainloop, while update allows the program to continue.
This is what I ended up with:</p>
<pre><code>def add_coordinates(index):
x, y, z = database[index]
coordinates.append(x) # coordinates is an empty list
coordinates.append(y)
ind = 0 # ind means index
while ind < len(database):
if ind < 2:
add_coordinates(ind)
elif ind == 2:
trajectory = canvas.create_line(coordinates, smooth="true")
add_coordinates(ind)
canvas.after(1000)
elif ind > 2:
add_coordinates(ind)
        canvas.coords(trajectory, coordinates)  # update the line in place
        canvas.after(1000)  # after() with no callback simply pauses
else:
pass
ind += 1
root.update_idletasks()
root.update()
</code></pre>
<p>Of course, I have things like import statements at the beginning and root.mainloop() at the very end of the file.</p>
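<p>For reference, a minimal sketch of the more conventional tkinter pattern (this is my variant, not the code above): keep <code>root.mainloop()</code> and let <code>after()</code> re-schedule the next update itself, so nothing blocks. It assumes the same <code>database</code>, <code>coordinates</code>, <code>canvas</code>, and <code>add_coordinates</code> as above:</p>
<pre><code>def step(ind, trajectory=None):
    if ind >= len(database):
        return  # all points drawn, stop re-scheduling
    add_coordinates(ind)
    if ind == 1:  # two points collected, enough to draw the line
        trajectory = canvas.create_line(coordinates, smooth="true")
    elif ind > 1:
        canvas.coords(trajectory, coordinates)
    canvas.after(1000, step, ind + 1, trajectory)  # re-schedule, no blocking

step(0)
root.mainloop()
</code></pre>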
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pandas: rolling difference between rows based on alternating value changes in the other column<p>I have a dataframe:</p>
<pre class="lang-python prettyprint-override"><code>df = pd.DataFrame([['ann', 23.3, 0], ['bob', 36.5, 0], ['don', 29.3, 1], ['jul', 45.8, 0], ['ken', 36.2, -1], ['nic', 38.9, 1], ['pal', 16.7, 0], ['qiu', 32.5, -1], ['sun', 33.9, 0], ['tom', 28.5, 1]], columns = ['name', 'score', 'grade'])
df
name score grade
0 ann 23.3 0
1 bob 36.5 0
2 don 29.3 1
3 jul 45.8 0
4 ken 36.2 -1
5 nic 38.9 1
6 pal 16.7 0
7 qiu 32.5 -1
8 sun 33.9 0
9 tom 28.5 1
</code></pre>
<p>that I need to calculate the rolling (consecutive) difference between <code>score</code> where <code>grade</code> moves <code>from 1 to -1</code> or <code>from -1 to 1</code> then add them in two new columns:</p>
<pre class="lang-python prettyprint-override"><code>df
name score grade down up
0 ann 23.3 0
1 bob 36.5 0
2 don 29.3 1 ↰
3 jul 45.8 0 ⎥
4 ken 36.2 -1 6.9 ↲↰
5 nic 38.9 1 2.7 ↲↰
6 pal 16.7 0 ⎥
7 qiu 32.5 -1 -6.4 ↲↰
8 sun 33.9 0 ⎥
9 tom 28.5 1 -4.0 ↲
</code></pre>
<p>that is, column <code>down</code> has <code>score</code> of [row 4 - row 2] as <code>grade</code> is from 1 to -1, column <code>up</code> has <code>score</code> of [row 5 - row 4] as <code>grade</code> is from -1 to 1, and so on.</p>
<p>Is there a pandas way to get the desired results without using for loop?</p>
<p>Note: 1 & -1s in <code>grade</code> always alternate.</p>
|
<p>First, we build an intermediate DataFrame that contains only the rows with nonzero grades. Since 1s and -1s always alternate, it suffices to analyze the difference between consecutive <code>grade</code> values.</p>
<p>Again, since 1s and -1s alternate, the difference between consecutive <code>grade</code> values can be either -2 or 2, so depending on what it is, we can identify whether a row is an <code>up</code> row or a <code>down</code> row.</p>
<pre class="lang-py prettyprint-override"><code>tmp = df.loc[df['grade'].ne(0), ['score','grade']].diff()
down_idx = tmp['grade'].lt(0).loc[lambda x: x].index
df.loc[down_idx, 'down'] = tmp.loc[down_idx, 'score']
up_idx = tmp['grade'].gt(0).loc[lambda x: x].index
df.loc[up_idx, 'up'] = tmp.loc[up_idx, 'score']
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code> name score grade down up
0 ann 23.3 0 NaN NaN
1 bob 36.5 0 NaN NaN
2 don 29.3 1 NaN NaN
3 jul 45.8 0 NaN NaN
4 ken 36.2 -1 6.9 NaN
5 nic 38.9 1 NaN 2.7
6 pal 16.7 0 NaN NaN
7 qiu 32.5 -1 -6.4 NaN
8 sun 33.9 0 NaN NaN
9 tom 28.5 1 NaN -4.0
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Is it possible to programmatically check for a new connection using PySerial?<p>I found <a href="https://stackoverflow.com/questions/21050671/how-to-check-if-device-is-connected-pyserial/49450813">this</a> post which asks a similar question. However, the answers were not what I expected to find so I'm going to try asking it a little differently.</p>
<p>Let's assume a function <code>searching_for_connection</code> will run indefinitely in a <code>while True</code> loop. In that function, we'll loop and perform a check to see if a new connection has been made with <code>/dev/ttyAMA0</code>. If that connection exists, we exit the loop, finish <code>searching_for_connection</code>, and begin some other processes. Is this possible to do, and how would I go about doing it?</p>
<p>My current approach is sending a carriage return and checking for a response. My problem is that this method has been pretty spotty and hasn't yielded consistent results for me. Sometimes this method works and sometimes it just stops working.</p>
<pre><code>def serial_device_connected(serial_device: "serial.Serial") -> bool:
try:
serial_device.write(b"\r")
return bool(serial_device.readlines())
    except serial.SerialException:
return False
</code></pre>
|
<p>I suggest having a delay to allow time for the device to respond.</p>
<pre class="lang-py prettyprint-override"><code>import time
def serial_device_connected(serial_device: "serial.Serial") -> bool:
try:
serial_device.write(b"\r")
time.sleep(0.01)
return bool(serial_device.readlines())
    except serial.SerialException:
return False
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to jump to the next line when defining a function in Python?<p>I am using a Mac terminal to learn Python basics at the moment and I can't figure out how to write a new line when defining a function, because whenever I hit "Enter", it just throws an error.</p>
<pre><code> >>> def f():
... a = 10
File "<stdin>", line 2
a = 10
^
IndentationError: expected an indented block after function definition on line 1
>>>
</code></pre>
|
<p>Indentation indicates where blocks begin and end. Everything inside a function definition is indented:</p>
<pre><code>>>> def f():
... a = 10
... print(a)
...
>>> f()
10
</code></pre>
<p>The first line that is <em>not</em> indented indicates the function is over.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to run a simulation of CAN messages on Python<p>I am currently learning how to use libraries in Python and I have a project in mind which requires me to use the CAN library <a href="https://python-can.readthedocs.io/en/master/index.html" rel="nofollow noreferrer">https://python-can.readthedocs.io/en/master/index.html</a>. I would like to simulate CAN messages to test and create a platform that can convert them into readable messages. I just can't seem to understand how to get the simulator from the library.</p>
|
<p>I'm unfamiliar with <code>python-can</code> but if all you want to do is import the module, here's a snippet that imports the library and sends out a simple message (receiving them is a whole other matter). You might want to keep exploring the docs for ways to capture messages and do stuff with them.</p>
<pre class="lang-py prettyprint-override"><code>import time
import can
# note: 'bustype' is just the legacy alias of 'interface', so passing both
# with conflicting values is wrong; use the in-process virtual interface only
bus = can.interface.Bus(interface='virtual', channel='vcan0', bitrate=500000)
def producer(bus, id):
for i in range(10):
        msg = can.Message(arbitration_id=0xc0ffee,
                          data=[id, i, 0, 1, 3, 1, 4, 1],
                          is_extended_id=True)  # 0xc0ffee does not fit in a standard 11-bit ID
try:
bus.send(msg)
print("Message sent on {}".format(bus.channel_info))
time.sleep(1)
except can.CanError:
print("Message NOT sent")
producer(bus, 1)
</code></pre>
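<p>Receiving is, as said, a whole other matter, but as a rough sketch (not part of the snippet above; it relies on python-can's virtual interface delivering messages to every other Bus on the same channel within the same process), reading them back could look like this:</p>
<pre class="lang-py prettyprint-override"><code># hypothetical companion to the producer above: create this bus *before*
# sending, so it is already attached to the virtual channel
receiver = can.interface.Bus(interface='virtual', channel='vcan0')

def consumer(bus, count):
    for _ in range(count):
        msg = bus.recv(timeout=2.0)  # blocks up to 2 s; returns None on timeout
        if msg is not None:
            print(msg)
</code></pre>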
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Gevent monkey patching - OverflowError<p>I tried to run my Flask project with gevent on Python3.7 on Raspberry Pi with gevent.monkey.patch_all() on the first line. But it ended up with this error:</p>
<pre><code>Traceback (most recent call last):
File "src/gevent/_hub_local.py", line 71, in gevent._gevent_c_hub_local.get_hub
File "src/gevent/_hub_local.py", line 80, in gevent._gevent_c_hub_local.get_hub_noargs
File "/home/pi/server/venv/lib/python3.7/site-packages/gevent/hub.py", line 445, in __init__
self.loop = self.loop_class(flags=loop, default=default) # pylint:disable=not-callable
File "/home/pi/server/venv/lib/python3.7/site-packages/gevent/hub.py", line 459, in loop_class
return GEVENT_CONFIG.loop
File "/home/pi/server/venv/lib/python3.7/site-packages/gevent/_config.py", line 50, in getter
return self.settings[setting_name].get()
File "/home/pi/server/venv/lib/python3.7/site-packages/gevent/_config.py", line 146, in get
self.value = self.validate(self._default())
File "/home/pi/server/venv/lib/python3.7/site-packages/gevent/_config.py", line 248, in validate
return self._import_one_of([self.shortname_map.get(x, x) for x in value])
File "/home/pi/server/venv/lib/python3.7/site-packages/gevent/_config.py", line 219, in _import_one_of
return self._import_one(item)
File "/home/pi/server/venv/lib/python3.7/site-packages/gevent/_config.py", line 237, in _import_one
module = importlib.import_module(module)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 980, in _find_and_load
File "<frozen importlib._bootstrap>", line 149, in __enter__
File "<frozen importlib._bootstrap>", line 88, in acquire
File "src/gevent/_semaphore.py", line 273, in gevent._gevent_c_semaphore.Semaphore.__enter__
File "src/gevent/_semaphore.py", line 274, in gevent._gevent_c_semaphore.Semaphore.__enter__
File "src/gevent/_semaphore.py", line 175, in gevent._gevent_c_semaphore.Semaphore.acquire
File "/home/pi/server/venv/lib/python3.7/site-packages/gevent/thread.py", line 121, in acquire
acquired = BoundedSemaphore.acquire(self, blocking, timeout)
File "src/gevent/_semaphore.py", line 175, in gevent._gevent_c_semaphore.Semaphore.acquire
File "src/gevent/_semaphore.py", line 200, in gevent._gevent_c_semaphore.Semaphore.acquire
OverflowError: Python int too large to convert to C long
</code></pre>
<p>On my PC (Python 3.8), where everything is working OK, I am getting this warning:</p>
<pre><code>init.py:1: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/usr/local/lib/python3.8/dist-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/usr/local/lib/python3.8/dist-packages/urllib3/util/ssl_.py)'].
</code></pre>
<p>I need to have the monkey patch, because when I remove it everything else works well, but socket events emitted from external threads are stacking and buffering and arriving at the JavaScript handlers after a long time.</p>
<p>My versions of modules:</p>
<pre><code>Flask==1.1.2
Flask-SocketIO==5.0.1
python-engineio==4.0.0
python-socketio==5.04
gevent==20.12.1
gevent-websocket==0.10.1
</code></pre>
<p>Anyone knows, how can I solve this issue?</p>
<p>Thanks.</p>
|
<p>The problem was that I was running a 32-bit instead of a 64-bit Python 3 on the Raspberry Pi.</p>
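<p>For anyone hitting the same error, a quick way to check which build you are running (this diagnostic is mine, not part of the original fix) is to look at the machine type and the pointer size:</p>
<pre class="lang-py prettyprint-override"><code>import platform
import struct

print(platform.machine())        # e.g. 'armv7l' (32-bit) vs 'aarch64' (64-bit)
print(struct.calcsize("P") * 8)  # pointer size in bits: 32 or 64
</code></pre>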
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
searching for numbers greater than 60 in a list<p>I can't seem to get the second for loop to work correctly, nor the third. I've tried removing them and switching a few things around in the last if statement, but it's only gotten worse.</p>
<pre><code>scores = []
passed = 0
passing = 60
tests = int(input("enter the number of tests:"))
for x in range (0,tests):
marks = int(input("enter the number of marks:"))
scores.append(marks)
for z in range (0,tests):
if any(scores > passing for y in len(scores)):
passed = passed + 1
print("")
print(passed, "have passed")
</code></pre>
|
<p>In the second for loop, you need to reference each individual score on the list with <strong>scores[z]</strong> in the condition,
not just <code>scores</code>:</p>
<pre><code>if any(scores[z] > passing for y in scores):
</code></pre>
<p>(In fact, the <code>any()</code> is not strictly needed; a plain <code>if scores[z] > passing:</code> does the same job here.)</p>
<p>all code :</p>
<pre><code>scores = []
passed = 0
passing = 60
tests = int(input("enter the number of tests:"))
for x in range(0, tests):
marks = int(input("enter the number of marks:"))
scores.append(marks)
for z in range(0, tests):
if any(scores[z] > passing for y in scores):
passed = passed + 1
print("")
print(passed, "have passed")
</code></pre>
<p>output sample :</p>
<pre><code>enter the number of tests:5
enter the number of marks:75
enter the number of marks:50
enter the number of marks:35
enter the number of marks:85
enter the number of marks:95
3 have passed
</code></pre>
<p>Hope it helps you.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Lineplot - plot a single legend for uneven number of subplots<p>I'm working on the following graph, where I'd like to plot a single legend that applies to all subplots; essentially this would be a small box where the blue line is AB=0 and the green line is AB=1.</p>
<p>Moreover, I'm using <code>plt.subplot(...</code> since it is possible that I might have to deal with an uneven number of columns to plot.</p>
<p>I tried positioning it outside of the box, but it was not visible anywhere.</p>
<p><a href="https://i.stack.imgur.com/R7R4W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R7R4W.png" alt="Plot" /></a></p>
<pre><code>plt.figure(figsize=(16,10))
plt.subplots_adjust(hspace=0.3)
plt.suptitle("Some title", fontsize=18, y=0.95)
plt.style.use('seaborn-darkgrid')
for i, col in enumerate(tms_0.columns):
ax = plt.subplot(3,4,i+1)
ax.plot(tms_0.index, tms_0[col], label=col, color='skyblue')
ax.plot(tms_1.index, tms_1[col], label=col, color='green')
#plt.legend(loc='upper left')
#ax.set_title(col.upper())
ax.set_xticks([])
fig.legend(["X", "Y"], loc='lower right', bbox_to_anchor=(1,-0.1), ncol=2, bbox_transform=fig.transFigure)
plt.show()
</code></pre>
<p><code>col</code> in this code is actually a column in the dataframe, so I can't use it in the normal fashion, which is why I'm using it in the <code>set_title</code>.</p>
|
<p>I found a working option based on this thread: <a href="https://stackoverflow.com/questions/39500265/how-to-manually-create-a-legend">How to manually create a legend</a></p>
<pre><code>legend_elements = [plt.Line2D([0], [0], color='skyblue', lw=2.5, label='ClientAB=0'),
plt.Line2D([0], [0], color='green', lw=2.5, label='ClientAB=1')]
ax.legend(handles=legend_elements, bbox_to_anchor=(1.2, 4.05))
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Is there a way to solve import issues with __init__.py?<p>I have a package with this structure</p>
<pre><code>framework/
__init__.py
file0.py
file1.py
file2.py
file3.py
</code></pre>
<p>I want to be able to import it as <code>import framework</code>, but I'm not able to do it that way; only <code>from framework import *</code> works, and that too without autocomplete. <code>file0.py</code> and <code>file1.py</code> contain classes, and the others have only functions.
Autocomplete does work, however, when I manually do this in the file I'm working on in an external directory:</p>
<pre><code>from framework import file0
from file0 import *
# and so on for the others
</code></pre>
<p>This is my <code>__init__.py</code></p>
<pre><code>from framework.file0 import file0
from framework.file1 import file1
from framework.file2 import *
from framework.file3 import *
</code></pre>
<p>I've tried putting</p>
<pre><code>from framework import file0
from file0 import *
# and so on for the others
</code></pre>
<p>on <code>__init.py__</code> but it doesn't solve the issue, autocomplete still doesn't work unless I put them in the actual file I'm working on in another directory. I want it to able to work on <code>import framework</code><br />
Is there a way?<br />
Thanks in advance. I'm new to this, so any advice is appreciated.</p>
|
<p>What you want for your <code>__init__.py</code> is <code>from .file0 import file0</code>, or whatever content from <code>file0.py</code> you want to import.</p>
<p>See <a href="https://docs.python.org/3/reference/import.html#package-relative-imports" rel="nofollow noreferrer">Package relative imports</a> in Python docs.</p>
<p>However, I think it's ill-advised to import everything in a package to <code>__init__.py</code>. If you do that, it will be loaded into Python every time you try to import anything from that package. Say you want to do <code>from framework.file0 import SomeClass</code> into some other package, and that class is all you need from <code>framework</code>. If you import everything in <code>__init__.py</code>, you will be loading all that every time you touch that package, since <code>__init__.py</code> is always loaded when accessing the package.</p>
<p>If you want a way to import everything from the package, maybe you should put that in another file, say <code>all.py</code>, and then do <code>from framework import all as framework</code>?</p>
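<p>As a minimal sketch of that last suggestion (the name <code>all.py</code> is just an example), the aggregator module would simply hold the imports that used to live in <code>__init__.py</code>:</p>
<pre><code># framework/all.py -- opt-in aggregator, loaded only when explicitly imported
from framework.file0 import file0
from framework.file1 import file1
from framework.file2 import *
from framework.file3 import *
</code></pre>
<p>Consumers that really want everything can then do <code>from framework import all as framework</code>, while everyone else pays only for what they actually import.</p>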
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Sympy: AttributeError: Multiply polynomial by complex constant<p>I'm trying to multiply a Sympy polynomial by the complex coefficient "i".
However I am getting an error.
I am using Python 3.6 and Sympy 1.8.</p>
<p>Code:</p>
<pre><code>from sympy import *
from sympy.abc import x, y, z, w
p = Poly(1.0*x, x, domain='C')
p*I
</code></pre>
<p>Error:</p>
<pre><code>AttributeError: 'ComplexField' object has no attribute 'from_GaussianIntegerRing'
</code></pre>
<p>Call Stack:</p>
<pre><code><ipython-input-106-a133adf8aac7> in <module>
----> 1 p*I
~/Code/University/Tesi/tesi/lib/python3.6/site-packages/sympy/polys/polytools.py in wrapper(f, g)
80 return result
81 else:
---> 82 return func(f, g)
83 else:
84 return NotImplemented
~/Code/University/Tesi/tesi/lib/python3.6/site-packages/sympy/polys/polytools.py in __mul__(f, g)
4110 @_polifyit
4111 def __mul__(f, g):
-> 4112 return f.mul(g)
4113
4114 @_polifyit
~/Code/University/Tesi/tesi/lib/python3.6/site-packages/sympy/polys/polytools.py in mul(f, g)
1493 return f.mul_ground(g)
1494
-> 1495 _, per, F, G = f._unify(g)
1496
1497 if hasattr(f.rep, 'mul'):
~/Code/University/Tesi/tesi/lib/python3.6/site-packages/sympy/polys/polytools.py in _unify(f, g)
485 G = DMP(dict(list(zip(g_monoms, g_coeffs))), dom, lev)
486 else:
--> 487 G = g.rep.convert(dom)
488 else:
489 raise UnificationFailed("can't unify %s with %s" % (f, g))
~/Code/University/Tesi/tesi/lib/python3.6/site-packages/sympy/polys/polyclasses.py in convert(f, dom)
297 return f
298 else:
--> 299 return DMP(dmp_convert(f.rep, f.lev, f.dom, dom), dom, f.lev)
300
301 def slice(f, m, n, j=0):
</code></pre>
<p>Is there a workaround to multiply the polynomial as intended?</p>
<p>Thanks,
Marco</p>
|
<p>You can try the domain "EX":</p>
<pre><code>>>> p = Poly(1.0*x, x, domain='EX')
>>> p*I
Poly(1.0*I*x, x, domain='EX')
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Fieldsets don't do anything in admin django<p>I'm learning Django and found that we can use fieldsets to customise the way the admin creation form looks. Here is the code I'm using:</p>
<pre><code>class CustomUserAdmin(UserAdmin):
add_form = CustomUserCreationForm
form = CustomUserChangeForm
model = CustomUser
list_display = ('email', 'age', 'is_staff', 'is_active',)
list_filter = ('email', 'age', 'is_staff', 'is_active',)
fieldsets = (
('Advanced options', {
'classes': ('wide',),
'fields': ('email', 'age', 'is_staff', 'is_active',),
}),
)
admin.site.register(CustomUser, CustomUserAdmin)
</code></pre>
<p>Here is the result:</p>
<p><a href="https://i.stack.imgur.com/5ZeDN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5ZeDN.png" alt="enter image description here" /></a></p>
<p>As you can see, this "fieldsets" does nothing for this creation page, and if I completely remove it nothing changes. What did I do wrong here? I want my fieldsets to work.</p>
<p>Thank you!</p>
|
<p>The UserAdmin separates the <code>add</code> action (user creation) from other actions. This is because it only wants to deal with username and password and then the rest of the fields.</p>
<p>So the UserAdmin does some special work and you have both <code>fieldsets</code> and <code>add_fieldsets</code>.</p>
<p>Below is copied from a custom user in one of my projects. It has a required field on the User model called display_name, and so it must be submitted via the admin as well.</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib import admin
from django.contrib.auth.admin import UserAdmin as AuthUserAdmin
from django.utils.translation import gettext as _
from core import models
from core.forms.admin import UserCreationForm
@admin.register(models.User)
class UserAdmin(AuthUserAdmin):
fieldsets = (
(None, {"fields": ("username", "password")}),
(_("Personal info"), {"fields": ("display_name", "email")}),
(
_("Permissions"),
{
"fields": (
"is_active",
"is_staff",
"is_superuser",
"groups",
"user_permissions",
),
},
),
(_("Important dates"), {"fields": ("last_login", "date_joined")}),
)
add_fieldsets = (
(
None,
{"classes": ("wide",), "fields": ("username", "password1", "password2"),},
),
(_("Profile related"), {"fields": ("display_name",)}),
)
add_form = UserCreationForm
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Error while importing Tensorflow. TypeError: expected bytes, descriptor found<p>I installed TensorFlow (CPU only) on Windows 10 with</p>
<pre><code>pip3 install --upgrade tensorflow
</code></pre>
<p>It downloaded and installed correctly, but when I tried to import it, it gave me the following error:</p>
<pre><code> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 40, in <module>
from tensorflow.python.eager import context
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\python\eager\context.py", line 32, in <module>
from tensorflow.core.framework import function_pb2
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\function_pb2.py", line 16, in <module>
from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\tensor_pb2.py", line 16, in <module>
from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py", line 112, in <module>
'__module__' : 'tensorflow.core.framework.tensor_shape_pb2'
TypeError: expected bytes, Descriptor found
</code></pre>
|
<p>I installed protobuf with</p>
<pre><code>pip install protobuf-py3
</code></pre>
<p>But another problem came up</p>
<pre><code>import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 40, in <module>
from tensorflow.python.eager import context
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\python\eager\context.py", line 32, in <module>
from tensorflow.core.framework import function_pb2
File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\function_pb2.py", line 10, in <module>
from google.protobuf import symbol_database as _symbol_database
File "C:\Users\Eloy\anaconda3\lib\site-packages\google\protobuf\symbol_database.py", line 184, in <module>
_DEFAULT = SymbolDatabase(pool=descriptor_pool.Default())
AttributeError: module 'google.protobuf.descriptor_pool' has no attribute 'Default'
</code></pre>
<p>This solved my new problem:
<a href="https://stackoverflow.com/questions/59910041/getting-module-google-protobuf-descriptor-pool-has-no-attribute-default-in-m">Getting module 'google.protobuf.descriptor_pool' has no attribute 'Default' in my python script</a></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pandas forward fill with scalar multiple of last value<p>Suppose I have the following DataFrame</p>
<pre><code>import pandas as pd
import numpy as np
dict1 = {'A': [100,200, np.nan, 300, 500, np.nan, np.nan, 50],
'B': [0,1,np.nan,1,10, np.nan, np.nan, 5],
'C' : [100,200, np.nan, 300, 500, np.nan, np.nan, 200]}
df1 = pd.DataFrame(data=dict1)
</code></pre>
<p>I wish to forward fill the data such that the <code>np.nan</code> values are a scalar multiple <code>s</code> of the last value. Say <code>s = 0.5</code>.</p>
<p>I tried <code>df1.fillna(0.5*df1)</code> but it didn't work</p>
<p>My expected outcome would be:</p>
<pre><code> A B C
0 100.0 0.0 100.0
1 200.0 1.0 200.0
2 100.0 0.5 100.0
3 300.0 1.0 300.0
4 500.0 10.0 500.0
5 250.0 5.0 250.0
6 250.0 5.0 250.0
7 50.0 5.0 200.0
</code></pre>
|
<p>IIUC, try:</p>
<pre><code>df1.fillna(df1.ffill().mul(.5))
</code></pre>
<p>Output:</p>
<pre><code> A B C
0 100.0 0.0 100.0
1 200.0 1.0 200.0
2 100.0 0.5 100.0
3 300.0 1.0 300.0
4 500.0 10.0 500.0
5 250.0 5.0 250.0
6 250.0 5.0 250.0
7 50.0 5.0 200.0
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Repeat pattern using python regex<p>Well, I'm cleaning a dataset using Pandas.
I have a column called "Country", where different rows could have numbers or other information in parentheses that I have to remove, for example:
Australia1,
Perú (country),
3Costa Rica, etc. To do this, I'm taking the column and applying a mapping over it.</p>
<pre><code>pattern = "([a-zA-Z]+[\s]*[a-aZ-Z]+)(?:[(]*.*[)]*)"
df['Country'] = df['Country'].str.extract(pattern)
</code></pre>
<p>But I have a problem with this regex: I cannot match names such as "United States of America", because it only captures "United ". How can I repeat the pattern of the first group indefinitely to match the whole name?<br />
Thanks!</p>
|
<p>In this situation, I will clean the data step by step.</p>
<pre><code>import io

import pandas as pd

df_str = '''
Country
Australia1
Perú (country)
3Costa Rica
United States of America
'''
df = pd.read_csv(io.StringIO(df_str.strip()), sep='\n')
# handle the data
(df['Country']
 .str.replace(r'\d+', '', regex=True)  # remove numbers (raw string avoids escape warnings)
 .str.split(r'\(').str[0]              # get items before `(`
.str.strip() # strip spaces
)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Allocate an integer randomly across k bins<p>I'm looking for an efficient Python function that randomly allocates an integer across <code>k</code> bins.
That is, some function <code>allocate(n, k)</code> will produce a <code>k-sized</code> array of integers summing to <code>n</code>.</p>
<p>For example, <code>allocate(4, 3)</code> could produce <code>[4, 0, 0]</code>, <code>[0, 2, 2]</code>, <code>[1, 2, 1]</code>, etc.</p>
<p>It should be randomly distributed per item, assigning each of the <code>n</code> items randomly to each of the <code>k</code> bins.</p>
|
<p>Adapting Michael Szczesny's <a href="https://stackoverflow.com/questions/71888628/allocate-an-integer-randomly-across-k-bins?noredirect=1#comment127034648_71888743">comment</a> based on numpy's new paradigm:</p>
<pre class="lang-py prettyprint-override"><code>def allocate(n, k):
return np.random.default_rng().multinomial(n, [1 / k] * k)
</code></pre>
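<p>For example (the output is random; the values shown are only illustrative):</p>
<pre class="lang-py prettyprint-override"><code>>>> allocate(4, 3)
array([1, 2, 1])  # any 3 non-negative ints summing to 4
</code></pre>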
<p><a href="https://colab.research.google.com/drive/1LG6vb-aigJN-rddQXG3E1Nw_oF8Yv1Py#scrollTo=Z7sP5209VuEd" rel="nofollow noreferrer">This notebook</a> verifies that it returns the same distribution as <a href="https://stackoverflow.com/a/71888629/1840471">my brute-force approach</a>.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python DataFrame Merging<p>I need to merge two dataframes. I create several of the dataframes below from reading files.</p>
<p>What I need to do is pull the 'Depth' column and insert it into a new dataframe. I will then rename the column 'Depth' in the merged dataframe to the serial number of that part, then repeat. </p>
<p>sigData example</p>
<pre><code> Current Depth Time Velocity
0 130 11066 0.1 26516
1 150 13716 0.2 24090
2 153 15995 0.3 25052
3 157 19109 0.4 26596
4 160 20298 0.5 19947
</code></pre>
<p>The resulting dataframe after looping through all 'sigData' files should look like this:</p>
<p>depthDF example</p>
<pre><code> Time Sn1 Sn2 Sn3
0 0.1 11066 00001 00001
1 0.2 13716 00002 00002
2 0.3 15995 00003 00003
3 0.4 19109 00004 00004
4 0.5 20298 00005 00005
</code></pre>
<p>I will do the same with 'Current' and 'Velocity'. The result should be three dataframes. One with the 'Depth' of all parts, one with 'Velocity' of all parts, and one with 'Current' of all parts. </p>
<pre><code>velocityDF = pd.DataFrame(columns=['Time'])
velocityDF = velocityDF.join(sigData['Velocity'], on='Time', sort='True')
velocityDF.rename(columns={'Velocity': row['SerialNumber']}, inplace=True)
</code></pre>
<p>results in:</p>
<pre><code>Empty DataFrame
Columns: [Time, 400602902, 400621787, 400621434, 400619512]
Index: []
</code></pre>
<p>and</p>
<pre><code>depthDF = pd.DataFrame(columns=['Time'])
depthDF = depthDF.merge(sigData['Depth'], on='Time', sort='True')
depthDF.rename(columns={'Depth': row['SerialNumber']}, inplace=True)
</code></pre>
<p>results in:</p>
<pre><code>ValueError: can not merge DataFrame with instance of type <class 'pandas.core.series.Series'>
</code></pre>
|
<p>Try this. The error happens because <code>sigData['Depth']</code> is a Series; <code>merge</code> needs a DataFrame that also contains the <code>'Time'</code> key column:</p>
<pre><code>depthDF = depthDF.merge(sigData[['Depth','Time']], on='Time', sort=True, how='right')
</code></pre>
<p>Do the same for velocityDF.</p>
<p>Hope it helps and resolves your error.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
List of max values in a sequence?<p>This is my first question on Stack Overflow; I've been looking for a solution for days and it is very frustrating.</p>
<p>Imagine, I have a sequence every 2 seconds...</p>
<pre><code>sequence--> {55361500: 262.6, 55372250: 81.5, 55422280: 9.5}
sequence--> {55361500: 284.48, 55372250: 90.0, 55422280: 12.2}
sequence--> {55361500: 284.47, 55372250: 88.5, 55422280: 8.1}
sequence--> {55361500: 262.59, 55372250: 95.5, 55422280: 6.7}
sequence--> {55361500: 249.32, 55372250: 81.5, 55422280: 5.5}
</code></pre>
<p>I would like to have a list of max values, I mean when a sequence number decrease, the number of the maximum value is stored in that list. For example:</p>
<pre><code>sequence--> {55361500: 262.6, 55372250: 81.5, 55422280: 9.5}
lst_max_seq = [262.6, 81.5, 9.5]
sequence--> {55361500: 284.48, 55372250: 90.0, 55422280: 12.2}
lst_max_seq = [284.48, 90.0, 12.2]
sequence--> {55361500: 284.47, 55372250: 88.5, 55422280: 8.1}
lst_max_seq = [284.48, 90.0, 12.2]
sequence--> {55361500: 262.59, 55372250: 95.5, 55422280: 6.7}
lst_max_seq = [284.48, 95.5, 12.2]
sequence--> {55361500: 249.32, 55372250: 81.5, 55422280: 5.5}
lst_max_seq = [284.48, 95.5, 12.2]
</code></pre>
<p>When I run my code, the values of lst_max_seq are the same as the values of the sequence.
Any tips?</p>
<pre><code>while True:
>>code to get sequence<<
for key in dt_seq:
if len(lista_max_seq) < len(dt_seq):
lista_max_seq.append(dt_seq[key])
for k, elem in enumerate(lista_max_seq):
if elem > lista_def_max[k]:
lista_def_max[k] = elem
time.sleep(2)
</code></pre>
<p>If you have questions, just ask me and I will answer quickly. Thank you =P</p>
|
<p>You need to clear your <code>lista_max_seq</code> at the beginning of each iteration:</p>
<pre class="lang-py prettyprint-override"><code>while True:
...
lista_max_seq = []
</code></pre>
<p>Otherwise, after the first iteration, your program never passes the <code>len(lista_max_seq) &lt; len(dt_seq)</code> check.</p>
<p>You do not need your first <code>for</code> loop: you can retrieve dict values using <code>dict.values()</code>: <code>list_of_values = list(sequence.values())</code>.</p>
<pre class="lang-py prettyprint-override"><code>stored_max_values = [0, 0, 0] # initialize your list
# if you know sequence's length, you can do:
# stored_max_values = [0] * length
while True:
... # code to get sequence
new_values = list(sequence.values()) # replaces your first loop
for i, (new_val, stored_val) in enumerate(zip(new_values, stored_max_values)):
if new_val > stored_val:
stored_max_values[i] = new_val
time.sleep(2)
</code></pre>
<p>This example uses <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow noreferrer"><code>zip()</code></a> and <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow noreferrer"><code>enumerate()</code></a>.</p>
<p><strong>If you want to keep labels</strong>, you can store values in another dict. Initialize it with your first sequence, then iterate over the new values and replace in the stored dict the higher values.</p>
<p>You can iterate over <code>dict.items()</code> to get both keys and values.</p>
<pre class="lang-py prettyprint-override"><code>stored_max_values = first_sequence # <- a dict
# if you can't get first sequence here, you could do:
# stored_max_values = {k: 0 for k in labels}
# where labels are a list of labels
# this would give {55361500: 0, 55372250: 0, 55422280: 0} here
while True:
... # code to get sequence as new_sequence
for k, v in new_sequence.items():
if v > stored_max_values[k]:
stored_max_values[k] = v
time.sleep(2)
</code></pre>
<p><code>stored_max_values</code> is then a dict containing (label, max value) pairs.
This second example uses a <a href="https://www.python.org/dev/peps/pep-0274/#examples" rel="nofollow noreferrer">dict comprehension</a>.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Tensorflow: What's the best practice to get a section of a manual from a question?<p>I would like to use Tensorflow to create a smart FAQ. I've seen how to manage a chatbot, but what I need is to let the user search for help, where the result should be the most probable chapter or section of a manual.</p>
<p>For example the user can ask: </p>
<blockquote>
<p>"What are the O.S. supported?"</p>
</blockquote>
<p>The reply must be a list of all the sections of the manual that could contain the correct answer.
My text record set for the training procedure is only the manual itself. I've followed the text classification example, but I don't think it is what I need, because in that case it would only tell me whether a given text belongs to one category or another.</p>
<p>What's the best practice to accomplish this task (i use Python)?</p>
<p>Thank you in advance</p>
|
<p>An idea could be to build <a href="https://en.wikipedia.org/wiki/Word_embedding" rel="nofollow noreferrer">embeddings</a> of each section of your text using <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">Bert</a> or other pretrained models (take a look at <a href="https://github.com/huggingface/transformers" rel="nofollow noreferrer">transformers</a>), then compare those embeddings with the embedding of your query (the question), for instance using cosine distance, and interpret the most similar ones as the sections or chapters containing the answer.</p>
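<p>As a minimal sketch of that idea (this uses the <code>sentence-transformers</code> wrapper rather than raw Bert; the model name and section strings are placeholder assumptions, not part of the suggestion itself):</p>
<pre class="lang-py prettyprint-override"><code>from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')  # any pretrained sentence model

# one string per chapter/section of the manual (placeholders here)
sections = [
    "Chapter 1: Supported operating systems ...",
    "Chapter 2: Installation ...",
    "Chapter 3: Troubleshooting ...",
]
section_embeddings = model.encode(sections, convert_to_tensor=True)

query = "What are the O.S. supported?"
query_embedding = model.encode(query, convert_to_tensor=True)

# cosine similarity between the question and every section
scores = util.cos_sim(query_embedding, section_embeddings)[0]
for idx in scores.argsort(descending=True)[:3].tolist():
    print(float(scores[idx]), sections[idx])
</code></pre>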
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pandas dataframe change values in a column based on conditions<p>I have a large Dataframe below:</p>
<p>The data used as the example here 'education_val.csv' can be found here <a href="https://github.com/ENLK/Py-Projects-/blob/master/education_val.csv" rel="nofollow noreferrer">https://github.com/ENLK/Py-Projects-/blob/master/education_val.csv</a></p>
<pre><code>import pandas as pd
edu = pd.read_csv('education_val.csv')
del edu['Unnamed: 0']
edu.head(10)
ID Year Education
22445 1991 higher education
29925 1991 No qualifications
76165 1991 No qualifications
223725 1991 Other
280165 1991 intermediate qualifications
333205 1991 No qualifications
387605 1991 higher education
541285 1991 No qualifications
541965 1991 No qualifications
599765 1991 No qualifications
</code></pre>
<p>The values in the column <code>Education</code> are:</p>
<pre><code>edu.Education.value_counts()
intermediate qualifications 153705
higher education 67020
No qualifications 55842
Other 36915
</code></pre>
<p>I want to replace the values in the column Education in the following ways:</p>
<ol>
<li><p>If an <code>ID</code> has the value <code>higher education</code> in a year in the column <code>Education</code> then all future years for that <code>ID</code> will also have <code>higher education</code> in the <code>Education</code> column.</p>
</li>
<li><p>If an <code>ID</code> has the value <code>intermediate qualifications</code> in a year, then all future years for that <code>ID</code> will have <code>intermediate qualifications</code> in the corresponding <code>Education</code> column. However, if the value <code>higher education</code> occurs in any of the subsequent years for this <code>ID</code>, then <code>higher education</code> replaces <code>intermediate qualifications</code> in the subsequent years, regardless of whether <code>Other</code> or <code>No qualifications</code> occur.</p>
</li>
</ol>
<p>For example, in the DataFrame below, <code>ID</code> 22445 has the value <code>higher education</code> in the year <code>1991</code>, so all subsequent values of <code>Education</code> for <code>22445</code> should be replaced with <code>higher education</code> in the later years, up to the year <code>2017</code>.</p>
<pre><code>edu.loc[edu['ID'] == 22445]
ID Year Education
22445 1991 higher education
22445 1992 higher education
22445 1993 higher education
22445 1994 higher education
22445 1995 higher education
22445 1996 intermediate qualifications
22445 1997 intermediate qualifications
22445 1998 Other
22445 1999 No qualifications
22445 2000 intermediate qualifications
22445 2001 intermediate qualifications
22445 2002 intermediate qualifications
22445 2003 intermediate qualifications
22445 2004 intermediate qualifications
22445 2005 intermediate qualifications
22445 2006 intermediate qualifications
22445 2007 intermediate qualifications
22445 2008 intermediate qualifications
22445 2010 intermediate qualifications
22445 2011 intermediate qualifications
22445 2012 intermediate qualifications
22445 2013 intermediate qualifications
22445 2014 intermediate qualifications
22445 2015 intermediate qualifications
22445 2016 intermediate qualifications
22445 2017 intermediate qualifications
</code></pre>
<p>Similarly, <code>ID</code> 1587125 in the Dataframe below has the value <code>intermediate qualifications</code> in the year <code>1991</code>, and changes to <code>higher education</code> in <code>1993</code>. All subsequent values in the column <code>Education</code> in the future years (from 1993 onwards) for <code>1587125</code> should be <code>higher education</code>.</p>
<pre><code>edu.loc[edu['ID'] == 1587125]
ID Year Education
1587125 1991 intermediate qualifications
1587125 1992 intermediate qualifications
1587125 1993 higher education
1587125 1994 higher education
1587125 1995 higher education
1587125 1996 higher education
1587125 1997 higher education
1587125 1998 higher education
1587125 1999 higher education
1587125 2000 higher education
1587125 2001 higher education
1587125 2002 higher education
1587125 2003 higher education
1587125 2004 Other
1587125 2005 No qualifications
1587125 2006 intermediate qualifications
1587125 2007 intermediate qualifications
1587125 2008 intermediate qualifications
1587125 2010 intermediate qualifications
1587125 2011 higher education
1587125 2012 higher education
1587125 2013 higher education
1587125 2014 higher education
1587125 2015 higher education
1587125 2016 higher education
1587125 2017 higher education
</code></pre>
<p>There are 12,057 unique <code>ID</code>s in the data, and the column <code>Year</code> spans from 1991 to 2017. How does one change the values of <code>Education</code> for all 12,057 according to the above conditions? I'm not sure how to do this in a uniform way for all unique <code>ID</code>s. The sample data used as the example here is attached in the GitHub link above. Many thanks in advance.</p>
|
<p>You can do it using <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html#categorical-data" rel="nofollow noreferrer">categorical data</a> like this:</p>
<pre><code>df = pd.read_csv('https://raw.githubusercontent.com/ENLK/Py-Projects-/master/education_val.csv')
eddtype = pd.CategoricalDtype(['No qualifications',
'Other',
'intermediate qualifications',
'higher education'],
ordered=True)
df['EducationCat'] = df['Education'].str.strip().astype(eddtype)
df['EduMax'] = df.sort_values('Year').groupby('ID')['EducationCat']\
.transform(lambda x: eddtype.categories[x.cat.codes.cummax()] )
</code></pre>
<p>It is broken up explicitly so you can see the data manipulations I am using.</p>
<ol>
<li>Create an Education <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html#categoricaldtype" rel="nofollow noreferrer">categorical dtype with order</a></li>
<li>Next, change the dtype of the Education column to use that categorical
dtype (EducationCat)</li>
<li>Use the codes of the categorical to perform the cummax calculation</li>
<li>Use indexing to return the category defined by the cummax calculation (EduMax)</li>
</ol>
<p>Outputs:</p>
<pre><code>df[df['ID'] == 1587125]
ID Year Education EducationCat EduMax
18 1587125 1991 intermediate qualifications intermediate qualifications intermediate qualifications
12075 1587125 1992 intermediate qualifications intermediate qualifications intermediate qualifications
24132 1587125 1993 higher education higher education higher education
36189 1587125 1994 higher education higher education higher education
48246 1587125 1995 higher education higher education higher education
60303 1587125 1996 higher education higher education higher education
72360 1587125 1997 higher education higher education higher education
84417 1587125 1998 higher education higher education higher education
96474 1587125 1999 higher education higher education higher education
108531 1587125 2000 higher education higher education higher education
120588 1587125 2001 higher education higher education higher education
132645 1587125 2002 higher education higher education higher education
144702 1587125 2003 higher education higher education higher education
156759 1587125 2004 Other Other higher education
168816 1587125 2005 No qualifications No qualifications higher education
180873 1587125 2006 intermediate qualifications intermediate qualifications higher education
192930 1587125 2007 intermediate qualifications intermediate qualifications higher education
204987 1587125 2008 intermediate qualifications intermediate qualifications higher education
217044 1587125 2010 intermediate qualifications intermediate qualifications higher education
229101 1587125 2011 higher education higher education higher education
241158 1587125 2012 higher education higher education higher education
253215 1587125 2013 higher education higher education higher education
265272 1587125 2014 higher education higher education higher education
277329 1587125 2015 higher education higher education higher education
289386 1587125 2016 higher education higher education higher education
301443 1587125 2017 higher education higher education higher education
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Module installed but "ModuleNotFoundError: No module named &lt;module_name&gt;" while running gunicorn<p>I am trying to deploy this website using nginx. The site is configured correctly, as verified by running <code>python manage.py runserver</code>. But when I run the same using gunicorn and wsgi, <code>gunicorn --bind 0.0.0.0:8002 config.wsgi</code>, it throws the error <code>ModuleNotFoundError: No module named 'crispy_forms'</code>.</p>
<p>crispy_forms is installed, which has been verified using <code>pip freeze</code>. The virtual environment is also activated, and 'crispy_forms' is already added to INSTALLED_APPS in <code>settings.py</code>.</p>
<p>Following is my error code:</p>
<pre><code>root@devvm:/tmp/tmc_site# gunicorn --bind 0.0.0.0:8002 config.wsgi
[2022-02-03 23:03:07 +0000] [6333] [INFO] Starting gunicorn 20.1.0
[2022-02-03 23:03:07 +0000] [6333] [INFO] Listening at: http://0.0.0.0:8002 (6333)
[2022-02-03 23:03:07 +0000] [6333] [INFO] Using worker: sync
[2022-02-03 23:03:07 +0000] [6335] [INFO] Booting worker with pid: 6335
[2022-02-04 04:03:07 +0500] [6335] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/gunicorn/arbiter.py", line 589, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.8/dist-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.8/dist-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.8/dist-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.8/dist-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.8/dist-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.8/dist-packages/gunicorn/util.py", line 359, in import_app
mod = importlib.import_module(module)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmp/tmc_site/config/wsgi.py", line 16, in <module>
application = get_wsgi_application()
File "/usr/local/lib/python3.8/dist-packages/django/core/wsgi.py", line 12, in get_wsgi_application
django.setup(set_prefix=False)
File "/usr/local/lib/python3.8/dist-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python3.8/dist-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/usr/local/lib/python3.8/dist-packages/django/apps/config.py", line 223, in create
import_module(entry)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'crispy_forms'
[2022-02-04 04:03:07 +0500] [6335] [INFO] Worker exiting (pid: 6335)
[2022-02-03 23:03:07 +0000] [6333] [INFO] Shutting down: Master
[2022-02-03 23:03:07 +0000] [6333] [INFO] Reason: Worker failed to boot.
</code></pre>
<p><a href="https://i.stack.imgur.com/L8jz4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L8jz4.png" alt="enter image description here" /></a></p>
<p>Here is my project structure; I have changed the folder name of the configuration files to "config".</p>
<pre><code>tmc_site
-- .
-- ..
-- assets
-- config
-- asgi.py
-- __init__.py
-- __pycache__
-- settings.py
-- urls.py
-- wsgi.py
-- db.json
-- db.sqlite3
- .git
-- .gitattributes
-- .gitignore
-- manage.py
-- requirements.txt
-- static
-- venv
-- website
</code></pre>
<p>Following are my wsgi related content in files:</p>
<p>settings.py: <code>WSGI_APPLICATION = 'config.wsgi.application'</code></p>
<p>wsgi.py:</p>
<pre><code>import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')
application = get_wsgi_application()
</code></pre>
<p>Any help would be appreciated.</p>
|
<p>You can see from the error message that the system-installed gunicorn tries to use the global Python environment in <code>/usr/local/lib/python3.8/dist-packages/</code>, not the project-specific virtual environment in the <code>/tmp/tmc_site/venv</code> directory.</p>
<p>Install gunicorn in the <code>venv</code> virtual env, so that after you activate it, the <code>gunicorn --bind 0.0.0.0:8002 config.wsgi</code> command will use the local one with the correct environment. You can also call gunicorn directly from the virtual env via <code>./venv/bin/gunicorn</code>.</p>
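<p>A minimal sketch of the commands, assuming the virtualenv lives at <code>./venv</code> inside the project directory as shown in your structure:</p>
<pre><code>cd /tmp/tmc_site
source venv/bin/activate
pip install gunicorn
gunicorn --bind 0.0.0.0:8002 config.wsgi

# or, without activating the env:
./venv/bin/gunicorn --bind 0.0.0.0:8002 config.wsgi
</code></pre>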
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to transfer variable from python file to python flask<p>I want to transfer label_count and card_m to my main Flask python file. How do I do that? I already tried importing them, but it didn't work. And if there is any solution for card_m, I don't want to repeat the requests so many times.</p>
<pre><code>import requests
import json
from itertools import chain
from collections import Counter
url = "https://api.trello.com/1/boards/OIeEN1vG/cards"
query = {
'key': 'e8cac9f95a86819d54194324e95d4db8',
'token': 'aee28b52f9f8486297d8656c82a467bb4991a1099e23db539604ac35954d5633'
}
response = requests.request(
"GET",
url,
params=query
)
data = response.json()
card_labels_string = list(chain.from_iterable([d['labels']for d in data]))
card_labels = [c ["color"] for c in card_labels_string]
label_count = dict((i, card_labels.count(i)) for i in card_labels)
cards = dict(zip([d['name']for d in data],[d['shortLink']for d in data]))
card_m = {}
for key,value in cards.items():
url_card = "https://api.trello.com/1/cards/{}/members".format(value)
res = requests.request(
"GET",
url_card,
params=query
)
names = [f['fullName']for f in res.json()]
card_m.update({key : names})
print(label_count, card_m)
</code></pre>
|
<p>OK, based on your comments I think I can help you out now. There are two things you should do to make this as clean as possible and to avoid bugs later on.</p>
<p>Right now your code is in the global scope. You should avoid doing this at all costs unless there is literally no other option. So the first thing you should do is create a static class for holding this data. Maybe something like this:</p>
<pre><code>class LabelHelper(object):
card_m = {}
label_count = None
@classmethod
def startup(cls):
url = "https://api.trello.com/1/boards/OIeEN1vG/cards"
query = {
'key': 'e8cac9f95a86819d54194324e95d4db8',
'token': 'aee28b52f9f8486297d8656c82a467bb4991a1099e23db539604ac35954d5633'
}
response = requests.request(
"GET",
url,
params=query
)
data = response.json()
card_labels_string = list(chain.from_iterable([d['labels'] for d in data]))
card_labels = [c["color"] for c in card_labels_string]
cls.label_count = dict((i, card_labels.count(i)) for i in card_labels)
cards = dict(zip([d['name'] for d in data], [d['shortLink'] for d in data]))
for key, value in cards.items():
url_card = "https://api.trello.com/1/cards/{}/members".format(value)
res = requests.request(
"GET",
url_card,
params=query
)
names = [f['fullName'] for f in res.json()]
cls.card_m.update({key: names})
@classmethod
def get_data(cls):
return cls.label_count, cls.card_m
</code></pre>
<p>Now we need to run that <code>startup</code> method on this class before we start up Flask via <code>app.run</code>. It can look something like this:</p>
<pre><code>if __name__ == '__main__':
LabelHelper.startup()
app.run("your interface", your_port)
</code></pre>
<p>Now we have populated those static variables with the data. You just need to import the static class in whatever file you want, call <code>get_data</code>, and you will get what you want. Like this:</p>
<pre><code>from labelhelper import LabelHelper
def some_function():
    label_count, card_m = LabelHelper.get_data()
</code></pre>
<p>FYI, <code>labelhelper</code> in the import is lowercase because in general you would name the file containing that class <code>labelhelper.py</code>.</p>
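<p>For completeness, a minimal sketch of how this could look wired together (the route name and port are placeholders, not from your code):</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, jsonify

from labelhelper import LabelHelper

app = Flask(__name__)

@app.route("/labels")
def labels():
    # the Trello data was fetched once at startup, so this is fast
    label_count, card_m = LabelHelper.get_data()
    return jsonify(label_count=label_count, card_m=card_m)

if __name__ == '__main__':
    LabelHelper.startup()
    app.run("0.0.0.0", 5000)
</code></pre>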
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to turn a list of lists into columns of a pandas dataframe?<p>I would like to ask how I can unnest a list of lists and turn it into different columns of a dataframe. Specifically, I have the following dataframe, where the <code>Route_set</code> column is a list of lists:</p>
<pre><code> Generation Route_set
0 0 [[20. 19. 47. 56.] [21. 34. 78. 34.]]
</code></pre>
<p>The desired output is the following dataframe:</p>
<pre><code> route1 route2
0 20 21
1 19 34
2 47 78
3 56 34
</code></pre>
<p>Any ideas how I can do it? Thank you in advance!</p>
|
<p>You can try using <code>df.explode</code> and <code>df.apply</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(data= {'Generation': 0, 'Route_set':[[[20., 19., 47., 56.], [21., 34., 78., 34.]]]})
df['route1']=df['Route_set'].apply(lambda x: x[0])
df['route2']=df['Route_set'].apply(lambda x: x[1])
df = df.explode(['route1', 'route2'], ignore_index=True)
df2 = df[df.columns.difference(['Route_set', 'Generation'])]
</code></pre>
<pre><code>| | route1 | route2 |
|---:|---------:|---------:|
| 0 | 20 | 21 |
| 1 | 19 | 34 |
| 2 | 47 | 78 |
| 3 | 56 | 34 |
</code></pre>
<p>Or you can just create a new dataframe with the values like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(data= {'Generation': 0, 'Route_set':[[[20., 19., 47., 56.], [21., 34., 78., 34.]]]})
df1 = pd.DataFrame.from_dict(dict(zip(['route1', 'route2'], df.Route_set.to_numpy()[0])), orient='index').transpose()
</code></pre>
<pre><code>| | route1 | route2 |
|---:|---------:|---------:|
| 0 | 20 | 21 |
| 1 | 19 | 34 |
| 2 | 47 | 78 |
| 3 | 56 | 34 |
</code></pre>
<p><strong>Update 1</strong>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data= {'Generation': 0, 'Route_set':[
[[20.0, 19.0, 47.0, 56.0, 43.0, 53.0, 18.0, -1.0, -1.0, -1.0, -1.0, -1.0], [20.0, 51.0, 46.0, 37.0, 2.0, 57.0, 49.0, 36.0, 25.0, 5.0, 4.0, 34.0], [54.0, 23.0, 5.0, 46.0, 34.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [57.0, 48.0, 46.0, 35.0, 25.0, 27.0, 52.0, 8.0, 39.0, 22.0, 51.0, 28.0], [57.0, 16.0, 45.0, 25.0, 49.0, 38.0, 0.0, 46.0, 13.0, 18.0, 19.0, 20.0], [21.0, 11.0, 6.0, 33.0, 25.0, 49.0, 57.0, 29.0, 12.0, 3.0, -1.0, -1.0], [9.0, 15.0, 47.0, 42.0, 25.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [51.0, 25.0, 22.0, 14.0, 39.0, 8.0, 40.0, 0.0, 10.0, 26.0, 32.0, 47.0], [1.0, 33.0, 24.0, 46.0, 56.0, 30.0, 48.0, 51.0, -1.0, -1.0, -1.0, -1.0], [25.0, 31.0, 50.0, 17.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [57.0, 12.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [20.0, 41.0, 47.0, 15.0, 46.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [14.0, 44.0, 39.0, 25.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [20.0, 51.0, 25.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [57.0, 49.0, 5.0, 20.0, 37.0, 46.0, 36.0, 25.0, 39.0, 51.0, 48.0, -1.0], [5.0, 0.0, 33.0, 55.0, 25.0, 48.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [51.0, 32.0, 33.0, 24.0, 35.0, 8.0, 25.0, 4.0, 46.0, 1.0, 7.0, -1.0], [5.0, 25.0, 34.0, 46.0, 1.0, 9.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [38.0, 57.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [12.0, 57.0, 49.0, 25.0, 9.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0]],
]})
data = df.Route_set.to_numpy()[0]
df = pd.DataFrame.from_dict(dict(zip(['route{}'.format(i) for i in range(1, len(data)+1)], [data[i] for i in range(len(data))])), orient='index').transpose()
df = df.apply(lambda x: x.explode() if 'route' in x.name else x)
df[sorted(df.columns)]
print(df.to_markdown())
</code></pre>
<pre><code>| | route1 | route2 | route3 | route4 | route5 | route6 | route7 | route8 | route9 | route10 | route11 | route12 | route13 | route14 | route15 | route16 | route17 | route18 | route19 | route20 |
|---:|---------:|---------:|---------:|---------:|---------:|---------:|---------:|---------:|---------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|
| 0 | 20 | 20 | 54 | 57 | 57 | 21 | 9 | 51 | 1 | 25 | 57 | 20 | 14 | 20 | 57 | 5 | 51 | 5 | 38 | 12 |
| 1 | 19 | 51 | 23 | 48 | 16 | 11 | 15 | 25 | 33 | 31 | 12 | 41 | 44 | 51 | 49 | 0 | 32 | 25 | 57 | 57 |
| 2 | 47 | 46 | 5 | 46 | 45 | 6 | 47 | 22 | 24 | 50 | -1 | 47 | 39 | 25 | 5 | 33 | 33 | 34 | -1 | 49 |
| 3 | 56 | 37 | 46 | 35 | 25 | 33 | 42 | 14 | 46 | 17 | -1 | 15 | 25 | -1 | 20 | 55 | 24 | 46 | -1 | 25 |
| 4 | 43 | 2 | 34 | 25 | 49 | 25 | 25 | 39 | 56 | -1 | -1 | 46 | -1 | -1 | 37 | 25 | 35 | 1 | -1 | 9 |
| 5 | 53 | 57 | -1 | 27 | 38 | 49 | -1 | 8 | 30 | -1 | -1 | -1 | -1 | -1 | 46 | 48 | 8 | 9 | -1 | -1 |
| 6 | 18 | 49 | -1 | 52 | 0 | 57 | -1 | 40 | 48 | -1 | -1 | -1 | -1 | -1 | 36 | -1 | 25 | -1 | -1 | -1 |
| 7 | -1 | 36 | -1 | 8 | 46 | 29 | -1 | 0 | 51 | -1 | -1 | -1 | -1 | -1 | 25 | -1 | 4 | -1 | -1 | -1 |
| 8 | -1 | 25 | -1 | 39 | 13 | 12 | -1 | 10 | -1 | -1 | -1 | -1 | -1 | -1 | 39 | -1 | 46 | -1 | -1 | -1 |
| 9 | -1 | 5 | -1 | 22 | 18 | 3 | -1 | 26 | -1 | -1 | -1 | -1 | -1 | -1 | 51 | -1 | 1 | -1 | -1 | -1 |
| 10 | -1 | 4 | -1 | 51 | 19 | -1 | -1 | 32 | -1 | -1 | -1 | -1 | -1 | -1 | 48 | -1 | 7 | -1 | -1 | -1 |
| 11 | -1 | 34 | -1 | 28 | 20 | -1 | -1 | 47 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 |
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Group and sum by week<p>I have a dataframe where the columns are by day in this format:</p>
<pre><code>a b c 01/01/2020 01/02/2020 01/03/2020 ...
1000 2000 3000 2 5 7
.
.
.
</code></pre>
<p>These are just arbitrary values. What I want is to sum the date columns and group them by week, like <code>week_1, week_2,...</code> so on and so forth. So for the example above it would look like:</p>
<pre><code>a b c week_1...
1000 2000 3000 14
.
.
.
</code></pre>
<p>Is there a clean way to do it for columns? I know I can sum all the columns by selecting the date columns and summing them on the axis, but I'm not sure how to do it per week. Any help is appreciated!</p>
|
<p>You can do:</p>
<pre><code># move `a`, `b`, `c` out of columns
df = df.set_index(['a','b','c'])
# convert columns to datetime
df.columns = pd.to_datetime(df.columns)
# groupby sum:
(df.groupby(df.columns.week, axis=1)
.sum()
.add_prefix('week_')
.reset_index()
)
</code></pre>
<p>Output:</p>
<pre><code> a b c week_1
0 1000 2000 3000 14
</code></pre>
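<p>Note: on newer pandas versions (1.1+) <code>DatetimeIndex.week</code> is deprecated, so something like this should work instead (untested sketch):</p>
<pre><code># isocalendar() returns a frame with year/week/day; use its week numbers
weeks = df.columns.isocalendar().week.to_numpy()
(df.groupby(weeks, axis=1)
   .sum()
   .add_prefix('week_')
   .reset_index()
)
</code></pre>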
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Subtract column in DataFrame1 from matching indices in long-form DataFrame2<p>I have two dataframes, one with reference data and one with "experimental" data. I want to compute the error associated with the experimental values by subtracting the reference values. However, the experimental DataFrame is in long form, and contains several variables for the same index. I want only to match the indices, such that sometimes the same reference value is used in the subtraction. The index in both dataframes is "Reaction".</p>
<p>Specifically I would like to create two new columns in the experimental dataframe called "BSE" and "BSE_CP". These should be computed as shown by the following pseudo-code</p>
<pre><code>experimental['BSE'] = experimental['Delta_E'] - reference['Delta_E']
experimental['BSE_CP'] = experimental['Delta_E_CP'] - reference['Delta_E']
</code></pre>
<p>Naturally I have tried the above code, but it returns a ValueError:</p>
<pre><code>ValueError: cannot reindex from a duplicate axis
</code></pre>
<p>I can do some manual labor, and loop over the various basis sets and compute the errors, then store these in temporary list, and finally assign the concatenated dataframe as a new variable. The below code works, <strong>but my (limited) pandas intuition tells me that there is a simpler way.</strong></p>
<pre><code>bses = []
bse_cps = []
for basis in exp.Basis_set.unique():
    exp_sub = exp.loc[exp.Basis_set == basis]
    bse = exp_sub['Delta_E'] - ref['Delta_E']
    bse_cp = exp_sub['Delta_E_CP'] - ref['Delta_E']
    bses.append(bse)
    bse_cps.append(bse_cp)
exp['BSE'] = pd.concat(bses, axis=0)
exp['BSE_CP'] = pd.concat(bse_cps, axis=0)
</code></pre>
<p>Sample from experimental dataframe:</p>
<pre><code> Basis_set Functional Delta_E Delta_E_CP BSSE
Reaction
Cr-Alkene-1 pc-3 PBE -24.950271 -24.922770 0.027485
Cr-Alkene-2 pc-3 PBE -20.674572 -20.633017 0.041541
Cr-Alkene-3 pc-3 PBE -9.621059 -9.560187 0.060868
Cr-Alkene-4 pc-3 PBE -15.913920 -15.821342 0.092578
Cr-Alkene-5 pc-3 PBE -9.925094 -9.836789 0.088305
Cr-Alkene-6 pc-3 PBE -16.365306 -16.266877 0.098429
Cr-CO pc-3 PBE -43.738982 -43.698595 0.040412
Cr-H2 pc-3 PBE -19.050313 -19.054649 -0.004336
Cr-MeCN pc-3 PBE -29.415768 -29.384396 0.031375
Cr-MeOH pc-3 PBE -18.165318 -18.120964 0.044365
Cr-THF pc-3 PBE -19.518354 -19.486973 0.031375
Cr-Water pc-3 PBE -16.643343 -16.582746 0.060617
Fe-MeOH pc-3 PBE -14.514893 -14.432698 0.082196
Ni-Alkene-1 pc-3 PBE -16.365802 -16.323111 0.042671
Ni-Alkene-2 pc-3 PBE -12.029692 -11.976059 0.053652
Ni-Alkene-3 pc-3 PBE -6.764403 -6.670935 0.093468
Ni-Alkene-4 pc-3 PBE -9.027397 -8.934491 0.092907
Ni-Alkene-5 pc-3 PBE -6.373132 -6.259096 0.114035
Ni-Alkene-6 pc-3 PBE -9.282549 -9.182826 0.099723
Ni-CO pc-3 PBE -29.330640 -29.271458 0.059174
Ni-MeCN pc-3 PBE -16.075560 -16.034989 0.040600
Ni-MeOH pc-3 PBE -7.261460 -7.210546 0.050891
Ni-NHC-1 pc-3 PBE -36.680622 -36.615234 0.065388
Ni-NHC-2 pc-3 PBE -36.232223 -36.115631 0.116592
Ni-THF pc-3 PBE -8.198476 -8.157920 0.040537
Ni-Water pc-3 PBE -6.052186 -5.988283 0.063902
Cr-Alkene-1 6-311++G(2df,2pd) PBE -25.843776 -24.979298 0.864478
Cr-Alkene-2 6-311++G(2df,2pd) PBE -22.012592 -20.741707 1.270885
Cr-Alkene-3 6-311++G(2df,2pd) PBE -11.692782 -9.797260 1.895522
Cr-Alkene-4 6-311++G(2df,2pd) PBE -17.853916 -15.858710 1.995206
Cr-Alkene-5 6-311++G(2df,2pd) PBE -12.365642 -10.000622 2.365020
Cr-Alkene-6 6-311++G(2df,2pd) PBE -18.460674 -16.333490 2.127184
Cr-CO 6-311++G(2df,2pd) PBE -44.629594 -43.514245 1.115349
Cr-H2 6-311++G(2df,2pd) PBE -19.422439 -19.074368 0.348071
Cr-MeCN 6-311++G(2df,2pd) PBE -30.350801 -29.453226 0.897575
Cr-MeOH 6-311++G(2df,2pd) PBE -19.176455 -18.105223 1.071232
Cr-THF 6-311++G(2df,2pd) PBE -20.776291 -19.514903 1.261388
Cr-Water 6-311++G(2df,2pd) PBE -17.581627 -16.548968 1.032659
Fe-MeOH 6-311++G(2df,2pd) PBE -15.773194 -14.295214 1.477980
Ni-Alkene-1 6-311++G(2df,2pd) PBE -17.343889 -16.455705 0.888184
Ni-Alkene-2 6-311++G(2df,2pd) PBE -13.206122 -12.113601 1.092521
Ni-Alkene-3 6-311++G(2df,2pd) PBE -8.452805 -6.882294 1.570512
Ni-Alkene-4 6-311++G(2df,2pd) PBE -10.640900 -9.033991 1.606909
Ni-Alkene-5 6-311++G(2df,2pd) PBE -8.377379 -6.450635 1.926744
Ni-Alkene-6 6-311++G(2df,2pd) PBE -11.016182 -9.303782 1.712400
Ni-CO 6-311++G(2df,2pd) PBE -30.283896 -29.304677 0.979219
Ni-MeCN 6-311++G(2df,2pd) PBE -16.837946 -16.065847 0.772099
Ni-MeOH 6-311++G(2df,2pd) PBE -8.014220 -7.085813 0.928407
Ni-NHC-1 6-311++G(2df,2pd) PBE -38.170826 -36.886904 1.283922
Ni-NHC-2 6-311++G(2df,2pd) PBE -38.598700 -36.387356 2.211343
Ni-THF 6-311++G(2df,2pd) PBE -9.091911 -8.048848 1.043063
Ni-Water 6-311++G(2df,2pd) PBE -6.754093 -5.892542 0.861551
</code></pre>
<p>The reference dataframe:</p>
<pre><code> Delta_E
Reaction
Cr-Alkene-1 -24.984980
Cr-Alkene-2 -20.698715
Cr-Alkene-3 -9.620706
Cr-Alkene-4 -15.898494
Cr-Alkene-5 -9.984087
Cr-Alkene-6 -16.350411
Cr-Water -16.612333
Cr-MeOH -18.159461
Cr-THF -19.541941
Cr-MeCN -29.429611
Cr-CO -43.758283
Cr-H2 -19.092310
Ni-Alkene-1 -16.326735
Ni-Alkene-2 -11.955749
Ni-Alkene-3 -6.644702
Ni-Alkene-4 -8.922958
Ni-Alkene-5 -6.323173
Ni-Alkene-6 -9.171335
Ni-Water -5.925627
Ni-MeOH -7.149769
Ni-THF -8.095105
Ni-MeCN -15.941426
Ni-CO -29.236219
Ni-NHC-1 -36.582247
Ni-NHC-2 -36.093587
Fe-MeOH -14.469599
</code></pre>
<p>Desired output</p>
<pre><code> Basis_set Functional Delta_E Delta_E_CP BSSE BSE BSE_CP
Reaction
Cr-Alkene-1 6-31+G(d) PBE -28.366635 -26.271858 2.094777 -3.381654 -1.286877
Cr-Alkene-2 6-31+G(d) PBE -24.810519 -21.984532 2.825986 -4.111804 -1.285817
Cr-Alkene-3 6-31+G(d) PBE -14.328868 -10.097466 4.231402 -4.708163 -0.476760
Cr-Alkene-4 6-31+G(d) PBE -21.041370 -16.296561 4.744809 -5.142876 -0.398067
Cr-Alkene-5 6-31+G(d) PBE -15.232350 -9.631952 5.600398 -5.248263 0.352135
...
</code></pre>
|
<p>I would recommend that you merge the experimental data frame with the reference data frame on the Reaction Id.</p>
<pre><code>import pandas as pd
import numpy as np
# reset the index so 'Reaction' becomes a regular column to merge on
mergedData = pd.merge(ref.reset_index(), exp.reset_index(), how='left', on='Reaction', suffixes=('_ref', '_exp'), indicator='Exists')
</code></pre>
<p>Since you have the column Delta_E with the same name in both data frames, you can specify a suffix for the column name on merge, meaning the merged result will have two columns, Delta_E_ref and Delta_E_exp. Finally, the indicator Exists will have the value 'both' when the reaction id is in both data frames; these are the rows where you want to subtract (per your pseudo-code, experimental minus reference):</p>
<pre><code>mergedData['bse'] = np.where(mergedData['Exists'] == 'both', mergedData['Delta_E_exp'] - mergedData['Delta_E_ref'], np.nan)
mergedData['bse_cp'] = np.where(mergedData['Exists'] == 'both', mergedData['Delta_E_CP'] - mergedData['Delta_E_ref'], np.nan)
mergedData.drop('Exists', axis=1, inplace=True)  # dropping the Exists column
</code></pre>
<p>This is the link from the pandas library on the merge function if you want to know more: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html</a></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to ignore certain scripts while testing flask app using pytest in gitlab CI/CD pipeline?<p>I have a <code>flask-restx</code> folder with the following structure</p>
<pre><code>.
├── app
│ ├── extensions.py
│ ├── __init__.py
│ └── pv_dimensioning
│ ├── controller.py
│ ├── __init__.py
│ ├── models
│ │ ├── dto.py
│ │ ├── __init__.py
│ │ ├── input_output_model.py
│ │ └── vendor_models.py
│ ├── services
│ │ ├── calculator.py
│ │ ├── database.py
│ │ ├── db_crud.py
│ │ ├── db_input_output.py
│ │ ├── __init__.py
│ │ └── processor.py
│ └── utils
│ ├── decode_verify_jwt.py
│ ├── decorator.py
│ └── __init__.py
├── config.py
├── main.py
├── package.json
├── package-lock.json
├── Pipfile
├── Pipfile.lock
├── README.md
├── serverless.yml
└── tests
└── test_processor.py
</code></pre>
<p>The app is connected to a database, that's why there are many scripts in the app that require VPN connection.</p>
<p>I am writing tests using <code>pytest</code> which I will then run on the <code>gitlab CI/CD</code> to get a proper <code>coverage</code>. I would like to avoid or omit scripts that can only run when VPN is connected and write tests only for scripts that don't require VPN. (I have tests for the scripts that require VPN, but I just don't want to run them in the CI/CD pipeline)</p>
<p><strong>The only script that doesn't require VPN is the <code>processor.py</code> and the test for that is in <code>test_processor.py</code>.</strong></p>
<p>The scripts that I would like to avoid are in the <code>.coveragerc</code>:</p>
<pre><code>[run]
omit =
    */site-packages/*
    */distutils/*
    tests/*
    /usr/*
    app/__init__.py
    app/extensions.py
    app/pv_dimensioning/models/*
    app/pv_dimensioning/utils/*
    app/pv_dimensioning/controller.py
    app/pv_dimensioning/services/calculator.py
    app/pv_dimensioning/services/database.py
    app/pv_dimensioning/services/db_crud.py
    app/pv_dimensioning/services/db_input_output.py
[html]
directory = htmlcov
</code></pre>
<p>the coverage part of <code>.gitlab-ci.yml</code></p>
<pre><code>stages:
  - coverage

coverage:
  image: python:3.7
  stage: coverage
  artifacts:
    paths:
      - htmlcov/
  before_script:
    - apt-get -y update
    - apt-get install curl
    - pip install pipenv
    - pipenv install --dev
  script:
    - pipenv run python -m coverage run -m pytest
    - pipenv run python -m coverage report -m
    - pipenv run python -m coverage html
  after_script:
    - pipenv run bash <(curl -s https://codecov.io/bash)
</code></pre>
<p>When I run the test in the pipeline, I get the following error:</p>
<pre><code>$ pipenv run python -m coverage run -m pytest
============================= test session starts ==============================
platform linux -- Python 3.7.10, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /builds/EC/tool/dt-service
plugins: cov-2.11.0
collected 0 items / 1 error
==================================== ERRORS ====================================
___________________ ERROR collecting tests/test_processor.py ___________________
/usr/local/lib/python3.7/urllib/request.py:1350: in do_open
encode_chunked=req.has_header('Transfer-encoding'))
/usr/local/lib/python3.7/http/client.py:1277: in request
self._send_request(method, url, body, headers, encode_chunked)
/usr/local/lib/python3.7/http/client.py:1323: in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
/usr/local/lib/python3.7/http/client.py:1272: in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
/usr/local/lib/python3.7/http/client.py:1032: in _send_output
self.send(msg)
/usr/local/lib/python3.7/http/client.py:972: in send
self.connect()
/usr/local/lib/python3.7/http/client.py:1439: in connect
super().connect()
/usr/local/lib/python3.7/http/client.py:944: in connect
(self.host,self.port), self.timeout, self.source_address)
/usr/local/lib/python3.7/socket.py:707: in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
/usr/local/lib/python3.7/socket.py:752: in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
tests/test_processor.py:2: in <module>
from app.pv_dimensioning.services.processor import PreProcessings
app/pv_dimensioning/__init__.py:4: in <module>
from .controller import admin_crud_ns as admin_crud_namespace
app/pv_dimensioning/controller.py:5: in <module>
from .services.calculator import DimensionCalculator
app/pv_dimensioning/services/calculator.py:3: in <module>
from .database import DatabaseService
app/pv_dimensioning/services/database.py:6: in <module>
from ..utils.decode_verify_jwt import verifier
app/pv_dimensioning/utils/decode_verify_jwt.py:12: in <module>
with urllib.request.urlopen(keys_url) as f:
/usr/local/lib/python3.7/urllib/request.py:222: in urlopen
return opener.open(url, data, timeout)
/usr/local/lib/python3.7/urllib/request.py:525: in open
response = self._open(req, data)
/usr/local/lib/python3.7/urllib/request.py:543: in _open
'_open', req)
/usr/local/lib/python3.7/urllib/request.py:503: in _call_chain
result = func(*args)
/usr/local/lib/python3.7/urllib/request.py:1393: in https_open
context=self._context, check_hostname=self._check_hostname)
/usr/local/lib/python3.7/urllib/request.py:1352: in do_open
raise URLError(err)
E urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>
=========================== short test summary info ============================
ERROR tests/test_processor.py - urllib.error.URLError: <urlopen error [Errno ...
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 1.62s ===============================
</code></pre>
<p>In the trace, it can be seen that the scripts that I am trying to ignore aren't being ignored. What is the mistake I am doing?</p>
|
<p>As I understand it, <em>coverage</em> is about reporting how much of your codebase is tested, not which tests to run. What you're doing is excluding things from a report, not stopping the data for the report being created.</p>
<p>What you should do is skip tests if you know they're going to fail (due to external configuration). Fortunately <a href="https://docs.pytest.org/en/6.2.x/skipping.html#skipping-test-functions" rel="nofollow noreferrer">pytest provides for this</a> with the <code>skipif</code> decorator.</p>
<p>I would create a function in <code>tests/conftest.py</code> which skips tests unless the VPN is reachable. Something like:</p>
<pre class="lang-py prettyprint-override"><code>import socket
import pytest
def _requires_vpn():
has_vpn = False
try:
socket.gethostbyname("<some address only accessible in the VPN>")
has_vpn = True
except socket.error:
pass
return pytest.mark.skipif(not has_vpn, reason="access to the vpn is required")
requires_vpn = _requires_vpn() # this is like the minversion example on the link
</code></pre>
<p>Then you can ignore entire test files adding this at the top:</p>
<pre><code>pytestmark = requires_vpn
</code></pre>
<p>And you can skip specific tests by decorating them with it:</p>
<pre><code>@requires_vpn
def test_this_will_be_skipped():
    pass
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Score function in rock paper scissors python issue and others<p>I am new to programming & trying to create a best-of-5 RPS game for my class, but I'm running into a few problems... when I call the score function that I created separately, it works, but it then makes my while loop keep going; when I don't call the score function, the while loop does stop at 3.</p>
<pre><code>def gamerps():
    import random
    YS = 0
    CS = 0
    rules = {("R", "S"), ("P", "R"), ("S", "P")}
    msgs = {"R": "rock beats scissors", "P": "paper covers rock", "S": "scissors cut paper"}
    while YS or CS <= 3:
        YM = input("R/P/S?")
        CM = random.choice(["R", "P", "S"])
        if (YM, CM) in rules:
            print("You won! %s" % (msgs[YM]))
            YS = YS + 1
            print(scorerps(YS, CS))
        elif (CM, YM) in rules:
            print("You lost! %s" % (msgs[CM]))
            CS = CS + 1
            print(scorerps(YS, CS))
        elif YM == CM:
            print("Tie! Go again!")
        else:
            print("ERROR, please choose R, P, or S for [R]ock [P]aper [S]cissors")

gamerps()
</code></pre>
<hr>
<p>The score function works fine on its own, and it does add up the score throughout the RPS game when applied; it just makes the game no longer stop at first-to-3.</p>
<pre><code>def scorerps(YS, CS):
    if YS == CS:
        print("scores are tied at", YS, "-", CS, "!")
    elif YS > CS:
        if YS == 3:
            print("You won ", YS, "-", CS, "!")
        else:
            print("You lead ", YS, "-", CS, "!")
    elif CS > YS:
        if CS == 3:
            print("Computer won ", CS, "-", YS, "!")
        else:
            print("computer leads ", CS, "-", YS, "!")

scorerps(1, 2)
</code></pre>
|
<p>Check the condition of your <code>while</code> loop:</p>
<p><code>while YS or CS <= 3:</code></p>
<p>means that the loop keeps running as long as <code>YS != 0</code> or <code>CS <= 3</code>, which is probably not what you wanted.</p>
<p>You probably wanted the loop to run until one of the variables exceeds <code>3</code>, so you have to use <code>and</code>:</p>
<p><code>while YS <= 3 and CS <= 3:</code></p>
<p>=> This way the condition becomes <code>False</code> when <strong>one</strong> of the variables gets <code>> 3</code></p>
<p>Keep in mind that this also means you need to win <code>4</code> rounds in order to win the game, because <code><= 3</code> is still <code>True</code> after one side won its 3rd round. So you might want to check for <code>< 3</code> instead.</p>
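<p>So a minimal sketch of the fixed loop head for a first-to-3 game would be:</p>
<pre><code>while YS < 3 and CS < 3:
    # play one round and update YS / CS exactly as before
    ...
</code></pre>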
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Efficient HDF5 / PyTables Layout for saving and operating on large tensors<p>I am trying to figure out the best data layout for my use case (a research project). This is not my speciality so while I can articulate what I want, and what I think may work, I am trying to steer away from failure paths.</p>
<p>For now, assume that the raw data are similar to several large corpora of text that are split into sequences (e.g. sentences), which each include a number of tokens (e.g. words). I extract, process and save information on a sentence-token basis, but require different operations on it in the following analyses.
Specifically, each token, in each sentence, is associated with a large vector (which can be numerical) that is prepared by a number of operations that are already implemented. Each sequence is associated with some metadata. This operation and thereby the preparation of this data occurs only once.</p>
<p>So: the output of the inital operation is a three dimensional tensor D[x,y,z] plus metadata associated with the x dimension. The x dimension denotes the sequence, the y the token position in the sequence (but not the unique token-id, e.g. the word encoding, that is part of the sequence metadata), and the z are the columns (many thousands) of information for that token. So each sequence is associated with a matrix of tokens as rows, and information as columns. The metadata can probably be made to fit into the first row if necessary. Note that each sequence is of the same length.</p>
<pre><code>Sequence 1
Meta-data: [..]
Column 1 | Column 2 | ...
Token 1 | [...] | [...] | ...
Token 2 | [...] | [...] | ...
...
Token N | [...] | [...] | ...
Sequence 2
Meta-data: [..]
Column 1 | Column 2 | ...
Token 1 | [...] | [...] | ...
Token 2 | [...] | [...] | ...
...
Token N | [...] | [...] | ...
</code></pre>
<p>This data is ingested multiple times by different subsequent analyses. I therefore require different "views" of this data, as follows:</p>
<ol>
<li><p>I need to be able to query each sequence and get the full matrix of token->values. That is simply the output 3D tensor, where I query along the first dimension. It would be nice to be able to "slice" multiple sequences at once (e.g. random batches for ML models etc.)</p></li>
<li><p>I would like to be able to query by unique token-id (e.g. the word "hello"), noting that each token may occur in several sequences and at different positions. This is not a query into the dimension of a tensor, but rather requires data that maps unique token-ids to their positions in the sequences (or metadata within each sequence allowing such a query).</p></li>
<li><p>I finally generate and save further summary values for each token per sequence, that I seek to query extremely quickly, where other information from that sequence is not relevant.</p></li>
</ol>
<p>What all subsequent modeling has in common is</p>
<ul>
<li><p>I need as much RAM as possible for the subsequent analyses, or in other words, the data may or may not need to be pushed to disk. That is why I am looking for a solution that allows both in-memory and out-memory access. In particular, the whole tensor may not fit into memory at all (it is built up subsequently over the x dimension)</p></li>
<li><p>Given the fixed structure, indexing and slicing is relatively straightforward, but I may often need to select non-adjacent entries, such as tokens from unrelated sequences.</p></li>
<li><p>The whole thing should not bottleneck the subsequent analyses. It would also be beneficial if it is somewhat portable and does not require additional software, such that the results can be distributed and reproduced easily by other researchers. In fact, I would like to make this data available to download if it turns out to be possible (legally)</p></li>
<li><p>Since this is an input, I am primarily interested in speed to access these data from python or other languages.</p></li>
</ul>
<p>Based on this, I have tentatively settled on using either h5py or pyTables, but I am open to other options.</p>
<p>While the data is large, it is not so large that disk space is an issue (on a moderately sized server). I further iterate over each sequence at least once to perform the initial operations. I therefore plan to save each required "view" into separate datasets, each laid out to enable efficient access.</p>
<p><strong>My plan is as follows:</strong></p>
<ol>
<li><p>I save the output tensor as a multi-dimensional array in pyTables. The index dimension is going to be the sequence number. I may query several sequences, but always ingest the 2D table of a whole sequence. My hope is that pyTables allows me to keep the whole 3D tensor on disk, and only read the required data into RAM.</p></li>
<li><p>I will save a new dataset that has the unique token-id as index, sequence-id as second column and then the required information as array. This way, I can query by token-id and get all data associated in all sequences. This includes a lot of duplication, but should allow for fast querying (?)</p></li>
<li><p>I will finally make a smaller dataset with the associated summary data for each token-id (as index) for each sequence.</p></li>
</ol>
<p>Do you think that would be efficient in terms of computation time?</p>
<p>The other route I see would be a relational database, like SQL. Here, I could simply make entries for each actual word in a sequence, with associated token-id, sequence number and the data I need. An SQL query could then be used to get the data in any way I choose. Further, any metadata could be saved in other tables either by sequence or token without much restrictions.</p>
<p>However, I am not sure if that is the fastest option, since I do not require many things SQL provides, such as additional flexibility (my queries / views are fixed and indexing/slicing is always along a fixed dimension) or all the access protections and all that. Plus, portability is better if its just some dataset files.</p>
<p>I am also not sure how SQL handles in-memory and out-memory issues. There may be instances when large parts of my data actually fits in RAM, so I want the flexbility there as well.</p>
<p><strong>Questions:</strong></p>
<ul>
<li><p>What is your sense of the best approach? Is my plan sound?</p></li>
<li><p>SQL seems clearly more flexible, is it perhaps even faster?</p></li>
<li><p>What I do not yet understand in HDF5 is how chunking and groups play into this. It seems I can not really chunk my data, because I need to be able to query non-successive data with high frequency. Is it correct that for my use-case, I should not chunk?</p></li>
<li><p>Similarly, groups and links. My data structure does not resemble a tree, because each token may occur in many sequences, which is why I chose to just produce different datasets entirely. Would it be more efficient to try to use hard links or groups?</p></li>
<li><p>How would the memory model of HDF5 work (as implemented in python)? Is it true that I can query, say, the 3D tensor, and only keep the results in memory, but also have a cache for sequences or tokens that are frequently queried?</p></li>
</ul>
<p>If my description is not clear, please let me know. Thank you for taking the time to read all this.</p>
|
<p>For anyone coming across this question, let me give you the result.</p>
<p>The above works as intended using pyTables. It can be made reasonably fast. However, the logic rapidly produces files of humorously gigantic proportions, so I can only recommend finding a different way. In particular, disk space turned out to be more problematic than RAM usage, especially if things can be sparsified.</p>
<p>A custom solution to subset the data into memory was more successful than using pyTables chunking. So in effect, in all but knife-edge cases, the above is probably not a good idea.</p>
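<p>For reference, a minimal sketch of the kind of pyTables layout described in the plan above (the dimensions here are made up):</p>
<pre><code>import numpy as np
import tables as tb

with tb.open_file("corpus.h5", "w") as f:
    # extendable array that grows along the sequence (x) axis
    tensor = f.create_earray(f.root, "tensor", atom=tb.Float32Atom(),
                             shape=(0, 40, 4096), expectedrows=1_000_000)
    tensor.append(np.zeros((1, 40, 4096), dtype="float32"))  # one sequence at a time
</code></pre>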
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to pass to a function that takes two arguments an element from one list together with each element of another list?<p>I am new to Python and I need help with the following.
I do have a list <code>a = [range(1, 50, 10)]</code>
and another list <code>b = [2, 4, 5, 8, 12, 34]</code>
Now, I have a function that calculates something, let's call it "SomeFunction", that takes two arguments.
I want to pass to SomeFunction each element of list <strong>a</strong> together with each element of list <strong>b</strong>. So, I want values for: SomeFunction(1, 2), SomeFunction(1, 4), SomeFunction(1, 5), ..... SomeFunction(50, 2), SomeFunction(50, 4), SomeFunction(50, 5), etc.</p>
<p>I think it should be done somehow with for loops, but I do not know how...</p>
|
<p>You'd need a nested <code>for</code> loop:</p>
<pre><code>a = range(1, 50, 10)
b = [2, 4, 5, 8, 12, 34]
for aval in a:
for bval in b:
print(aval, bval) # or any other function call
</code></pre>
<p>This just goes through all values in <code>b</code> for each value in <code>a</code>. (Note that you don't want <code>range</code> inside square brackets as you have in your question; I removed those.)</p>
<p>A more advanced version: "all pairs of values" is also known as the Cartesian product, which is provided by the <code>itertools</code> module. Thus you could write</p>
<pre><code>from itertools import product
for aval, bval in product(a, b):
print(aval, bval)
</code></pre>
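<p>If you want to collect the return values rather than just call the function, a list comprehension works too:</p>
<pre><code>results = [SomeFunction(aval, bval) for aval, bval in product(a, b)]
</code></pre>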
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Stacked barplot in seaborn using numeric data as hue<p>I have a simple pandas dataframe of 3 columns (month, amount, category) where each row represents an expense of a certain category:</p>
<pre><code>import pandas as pd
d = {'Month': ['Jan', 'Jan', 'Jan', 'Feb', 'Feb', 'Mar', 'Mar', 'Mar', 'Mar'], 'Amount': [5, 65, 29, 200, 28.5, 12, 4, 100, 21], 'Category': ['Travel', 'Food', 'Dentist', 'Dentist', 'Food', 'Travel', 'Food', 'Sport', 'Sport']}
df = pd.DataFrame(data=d)
</code></pre>
<p>I'd like to create a seaborn bar plot where each bar represents the total amount of expenses per month, and each bar is split into different colors, where each hue represents the total expense of a particular category in that month.</p>
<p>I was able to achieve the result using a pretty convoluted method and the plotting using matplotlib:</p>
<pre><code>df = df.groupby(['Month', 'Category']).sum()
df.reset_index(inplace=True)
pivot_df = df.pivot(index='Month', columns='Category', values='Amount')
pivot_df.plot.bar(stacked=True, colormap='tab20')
</code></pre>
<p>But this method gives an error when trying to use seaborn, and it seems unnecessarily complicated.</p>
<p>Is there a better way to achieve the desired result?</p>
|
<p>Your initial method is complicated because you have unnecessary steps. You <code>groupby</code> and <code>pivot</code>, but the same aggregation and reshaping can be done at once with <code>pivot_table</code>. From your initial DataFrame:</p>
<pre><code>df_pivot = pd.pivot_table(df, index='Month', columns='Category', values='Amount', aggfunc='sum')
df_pivot.plot.bar(stacked=True, colormap='tab20')
</code></pre>
<p><a href="https://i.stack.imgur.com/AviVz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AviVz.png" alt="enter image description here" /></a></p>
<hr />
<p>As for using <code>seaborn</code>, I wouldn't. They don't really support a stacked barplot, and all of their examples which <a href="https://seaborn.pydata.org/examples/part_whole_bars.html" rel="nofollow noreferrer"><em>look</em> like stacked plots</a> only have two categories where they plot the total, and then overlay one group (giving the impression it's stacked). But this method doesn't easily extend to more than 2 groups.</p>
<p>But if you want that <em>seaborn feel</em> you can use their defaults.</p>
<pre><code>import seaborn as sns
sns.set()
df_pivot = pd.pivot_table(df, index='Month', columns='Category', values='Amount', aggfunc='sum')
df_pivot.plot.bar(stacked=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/3hwS4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3hwS4.png" alt="enter image description here" /></a></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Breaking the hash<p>I have to break 4 hash codes and find their numbers, but my code is not working.</p>
<p>These are the hash codes (in a CSV file):</p>
<pre><code>javad :f478525457dcd5ec6223e52bd3df32d1edb600275e18d6435cdeb3ef2294e8de
milad : 297219e7de424bb52c040e7a2cbbd9024f7af18e283894fe59ca6abc0313c3c4
tahmine : 6621ead3c9ec19dfbd65ca799cc387320c1f22ac0c6b3beaae9de7ef190668c4
niloofar : 26d72e99775e03d2501416c6f402c265e628b7d02eee17a7671563c32e0cd9a3
</code></pre>
<p>My code:</p>
<pre><code>import hashlib
import itertools as it
import csv
from typing import Dict
number = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
code = hashlib.sha256()
passwords = list(it.permutations(number, 4))

with open('passwords.csv', newline='') as theFile:
    reader = csv.reader(theFile)
    passdic = dict()

# hpass is hash password
for hpass in passwords:
    encoded_hpass = ''.join(map(str, hpass)).encode('ascii')
    code = hashlib.sha256()
    code.update(encoded_hpass)
    passdic[encoded_hpass] = code.digest()

for row in theFile:
    for key, value in row.items():
        passdic[key].append(value)
</code></pre>
<p>And my result is:</p>
<pre><code>'C:\Users\Parsa\AppData\Local\Programs\Python\Python38-32\python.exe' 'c:\Users\Parsa\.vscode\extensions\ms-python.python-2021.12.1559732655\pythonFiles\lib\python\debugpy\launcher' '3262' '--' 'c:\Users\Parsa\Desktop\project\hash breaker.py'
Traceback (most recent call last):
File "c:\Users\Parsa\Desktop\project\hash breaker.py", line 24, in <module>
for row in theFile :
ValueError: I/O operation on closed file.
</code></pre>
|
<p>You're trying to read from a closed file, which is impossible.</p>
<p>I don't know what your code is supposed to do, but here are the illogical parts:</p>
<p>This opens the file to parse it as CSV</p>
<pre><code>with open('passwords.csv', newline='') as theFile:
reader = csv.reader(theFile)
</code></pre>
<p>Then later on you run:</p>
<pre><code> for row in theFile :
for key, value in row.items():
</code></pre>
<p>But now, you're outside of the <code>with</code> block and the file is closed.</p>
<p>I guess you should use <code>reader</code> in place of <code>theFile</code>. If you really intend to loop over the raw lines of the file, you need to wrap the loop again in a <code>with open</code> statement.</p>
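<p>A rough sketch of the restructured script (untested; note that your sample looks colon-separated rather than comma-separated, so adjust the parsing to your actual file format):</p>
<pre><code>import csv
import hashlib
import itertools as it

# precompute a lookup table: sha256 hex digest -> 4-digit code
# (hexdigest matches the hex strings shown in the file)
lookup = {}
for hpass in it.permutations(range(10), 4):
    code = ''.join(map(str, hpass))
    lookup[hashlib.sha256(code.encode('ascii')).hexdigest()] = code

with open('passwords.csv', newline='') as the_file:
    for row in csv.reader(the_file):  # iterate while the file is still open
        name, hashed = row[0].strip(), row[1].strip()
        print(name, lookup.get(hashed, 'not found'))
</code></pre>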
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Scrape URL loop with BeautifulSoup<p>I want to scrape information from different pages of the same site, societe.com, and I have several questions.</p>
<p>First of all, here is the code that I managed to put together; I am a bit of a novice, I admit.</p>
<p>I only put in 2 URLs and some of the fields, to see if the loop worked; I can add more once everything works.</p>
<pre><code>urls = ["https://www.societe.com/societe/decathlon-france-500569405.html","https://www.societe.com/societe/go-sport-312193899.html"]
for url in urls:
for url in urls:
    response = requests.get(url, headers={'User-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36'})
    soup = BeautifulSoup(response.text, "html.parser")
    numrcs = soup.find("td", class_="numdisplay")
    nomcommercial = soup.find("td", class_="break-word")
    print(nomcommercial.text)
    print(numrcs.text.strip())
    numsiret = soup.select('div[id^=siret_number]')
    for div in numsiret:
        print(div.text.strip())
    formejuri = soup.select('div[id^=catjur-histo-description]')
    for div in formejuri:
        print(div.text.strip())
    infosend = {
        'numrcs': numrcs,
        'nomcommercial': nomcommercial,
        'numsiret': numsiret,
        'formejuri': formejuri
    }
    tableau.append(infosend)

print(tableau)

my_infos = ['Numéro RCS', 'Numéro Siret', 'Forme Juridique']
my_columns = [
    np.tile(np.array(my_infos), len(nomcommercial))
]
df = pd.DataFrame(tableau, index=nomcommercial, columns=my_columns)
df
</code></pre>
<p>When I run the loop I have the right information coming out, like for example</p>
<pre><code>DECATHLON FRANCE
Lille Metropole B 500569405
50056940503239
SASU Société par actions simplifiée à associé unique
</code></pre>
<p>but I would like to put all this information in a table, and I can't quite manage it: only the last company appears, and the data makes no sense. I tried to follow a tutorial, without success.</p>
<p>If you can help me I would be really happy.</p>
|
<p>To get data about the companies you can use the following example:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import pandas as pd
from bs4 import BeautifulSoup
urls = [
    "https://www.societe.com/societe/decathlon-france-500569405.html",
    "https://www.societe.com/societe/go-sport-312193899.html",
]

headers = {
    "User-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36"
}

data = []
for url in urls:
    soup = BeautifulSoup(
        requests.get(url, headers=headers).content, "html.parser"
    )
    title = soup.select_one("#identite_deno").get_text(strip=True)
    rcs = soup.select_one('td:-soup-contains("Numéro RCS") + td').get_text(
        strip=True
    )
    siret_number = soup.select_one("#siret_number").get_text(strip=True)
    form = soup.select_one("#catjur-histo-description").get_text(strip=True)
    data.append([title, url, rcs, siret_number, form])

df = pd.DataFrame(
    data,
    columns=["Title", "URL", "Numéro RCS", "Numéro Siret", "Forme Juridique"],
)
print(df.to_markdown())
</code></pre>
<p>Prints:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">Title</th>
<th style="text-align: left;">URL</th>
<th style="text-align: left;">Numéro RCS</th>
<th style="text-align: right;">Numéro Siret</th>
<th style="text-align: left;">Forme Juridique</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">DECATHLON FRANCE (DECATHLON DIRECTION GENERALE FRANCE)</td>
<td style="text-align: left;"><a href="https://www.societe.com/societe/decathlon-france-500569405.html" rel="nofollow noreferrer">https://www.societe.com/societe/decathlon-france-500569405.html</a></td>
<td style="text-align: left;">Lille Metropole B 500569405</td>
<td style="text-align: right;">50056940503239</td>
<td style="text-align: left;">SASU Société par actions simplifiée à associé unique</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">GO SPORT</td>
<td style="text-align: left;"><a href="https://www.societe.com/societe/go-sport-312193899.html" rel="nofollow noreferrer">https://www.societe.com/societe/go-sport-312193899.html</a></td>
<td style="text-align: left;">Grenoble B 312193899</td>
<td style="text-align: right;">31219389900191</td>
<td style="text-align: left;">Société par actions simplifiée</td>
</tr>
</tbody>
</table>
</div>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Saving Tiff with specific Tiff Tags using PIL<p>I am trying to save a TIFF image using PIL with custom tags.</p>
<pre><code>import numpy as np
import PIL.Image
numrows=10
numcols=10
data = np.random.randint(0, 255, (numrows,numcols)).astype(np.uint8)
rawtiff=PIL.Image.fromarray(data)
custtifftags={'Photometric':1, 'Compression':1, 'BitsPerSample':32,\
'SamplePerPixel':1, 'SampleFormat':3,'ImageLength':10,\
'ImageWidth':10, 'PlanarConfiguration':1, 'ResolutionUnit':2}
rawtiff.save('test.tiff', tiffinfo=custtifftags)
</code></pre>
<p>I am getting the following error:</p>
<pre><code>TypeError: '<' not supported between instances of 'str' and 'int'
</code></pre>
<p>What is causing this error and how can I save images using PIL while setting my own tags?</p>
|
<p>I figured it out. You need to key the dictionary by the numeric TIFF tag IDs (rather than names) and format the values as tuples:</p>
<pre><code>custtifftags = {262: (1,),   # PhotometricInterpretation
                259: (0,),   # Compression
                258: (32,),  # BitsPerSample
                277: (1,),   # SamplesPerPixel
                339: (3,),   # SampleFormat
                257: (10,),  # ImageLength
                256: (10,),  # ImageWidth
                284: (1,),   # PlanarConfiguration
                296: (2,)}   # ResolutionUnit
</code></pre>
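<p>If you would rather keep readable names, Pillow also ships a tag registry in <code>PIL.TiffTags</code> that you can invert (a sketch; the names must match Pillow's canonical spellings, e.g. <code>PhotometricInterpretation</code> rather than <code>Photometric</code>):</p>
<pre><code>from PIL import TiffTags

# map canonical tag names to their numeric TIFF tag ids
name_to_code = {info.name: code for code, info in TiffTags.TAGS_V2.items()}

named = {'PhotometricInterpretation': (1,), 'Compression': (1,),
         'BitsPerSample': (32,), 'SamplesPerPixel': (1,)}
custtifftags = {name_to_code[name]: value for name, value in named.items()}
</code></pre>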
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Divide multiple columns by a fixed number in pandas<p>How can I divide multiple columns by a fixed number?</p>
<pre><code> A B C D
0 1 100 2000 10
1 2 200 3000 00
2 3 300 4000 20
3 4 400 5000 40
4 5 500 4000 24
5 6 600 2000 23
</code></pre>
<p>I would like to divide each number in columns "B" and "C" by 1000 and get a new DataFrame with the other columns unchanged.</p>
|
<p>You can use broadcasting:</p>
<pre><code>df[['B','C']] /= 1000
</code></pre>
<p>Output:</p>
<pre><code> A B C D
0 1 0.1 2.0 10
1 2 0.2 3.0 0
2 3 0.3 4.0 20
3 4 0.4 5.0 40
4 5 0.5 4.0 24
5 6 0.6 2.0 23
</code></pre>
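<p>Equivalently, if you prefer a method call (e.g. for chaining), <code>div</code> does the same thing:</p>
<pre><code>df[['B','C']] = df[['B','C']].div(1000)
</code></pre>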
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
CIFAR-10 python architecture<p>I'm following this tutorial <a href="https://machinelearningmastery.com/how-to-develop-a-cnn-from-scratch-for-cifar-10-photo-classification/" rel="nofollow noreferrer">here</a>.</p>
<pre><code>model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(32, 32, 3)))
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(10, activation='softmax'))
</code></pre>
<p>I am trying to understand the given code which uses the CIFAR-10 dataset.</p>
<ul>
<li><p>why is he using <code>kernel_initializer='he_uniform'?</code></p>
</li>
<li><p>why did he choose the 128 for the dense layer?</p>
</li>
<li><p>what will happen if we add more dense layers to the code, like:<br />
<code>model.add(Dense(512, activation='relu', kernel_initializer='he_uniform'))</code></p>
</li>
<li><p>is there any way to increase the accuracy of the model?</p>
</li>
<li><p>what would be a suitable dropout rate?</p>
</li>
</ul>
|
<blockquote>
<p>why is he using <code>kernel_initializer='he_uniform'</code>?</p>
</blockquote>
<p>The weights in a layer of a neural network are initialized randomly. How though? Which distribution should they follow? <code>he_uniform</code> is a strategy for initializing the weights of that layer.</p>
<blockquote>
<p>why did he choose the 128 for the dense layer?</p>
</blockquote>
<p>This was chosen arbitrarily.</p>
<blockquote>
<p>What will happen if we add more dense layer to the code like:<br />
<code>model.add(Dense(512, activation='relu', kernel_initializer='he_uniform'))</code></p>
</blockquote>
<p>I assume you mean to add it where the other 128-neuron Dense layer is (there it won't break the model). The model will become deeper and have a <strong>much higher</strong> number of parameters (i.e. your model will become more complex), with whatever positives or negatives come along with that.</p>
<blockquote>
<p>what would be a suitable dropout rate?</p>
</blockquote>
<p>Usually you see rates in the range of [0.2, 0.5]. Higher rates reduce overfitting but might cause training to become more unstable.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to Block Python ThreadPoolExecutor<p>I have a threadpool that I'd like to limit, not only in the max number of workers but also in the max number of jobs that can be submitted to the threadpool at once. The reason for limiting the jobs is that they are generated much faster than the threadpool workers can execute them, which can exhaust all available memory quickly.</p>
<p>How i'd like to interface with a "blocking" threadpool:</p>
<pre><code>with ThreadPoolExecutor(max_workers=10) as executor:
    for i in range(100_000_000):
        executor.submit(do_work, i, block=True)
</code></pre>
<p>But <code>block=True</code> is not a thing on the executor.</p>
<p>Is there a blocking threadpool I can use which will block submission to the queue if the number of jobs in queue is at max_size? If not, what would be the best way to implement a blocking threadpool?</p>
|
<p>Looking at the <a href="https://github.com/python/cpython/blob/3.10/Lib/concurrent/futures/thread.py#L118" rel="nofollow noreferrer">implementation</a>, there seems to be a relatively non-intrusive way to define one yourself:</p>
<pre><code>import queue
from concurrent.futures import ThreadPoolExecutor


class BlockingThreadPoolExecutor(ThreadPoolExecutor):
    def __init__(self, *, queue_size=0, **kwargs):
        super().__init__(**kwargs)
        # the stdlib uses an unbounded SimpleQueue here; queue.Queue is
        # bounded and blocks on put() once maxsize items are waiting
        self._work_queue = queue.Queue(maxsize=queue_size)
</code></pre>
<p>All this does is replace the unbounded work queue with a bounded one (note that the stdlib's <code>SimpleQueue</code> takes no <code>maxsize</code> argument, hence the switch to <code>queue.Queue</code>). Calls to <code>submit</code> will now block on their call to <code>self._work_queue.put</code> when the queue is full.</p>
<p>(This definition assumes you'll use keywords arguments, even though <code>ThreadPoolExecutor.__init__</code> does not require them.)</p>
<p>All the standard warnings about modifying private class details apply, but this is a pretty minimal change. As long as no future version of <code>ThreadPoolExecutor</code> changes the name of the attribute or relies on queue behaviour that <code>queue.Queue</code> does not provide, it should work fine.</p>
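<p>Usage then looks like your original goal, minus the <code>block=True</code> flag, since submission blocks automatically once the queue is full (a sketch with made-up numbers):</p>
<pre><code>import time

def do_work(i):
    time.sleep(0.01)
    return i * i

with BlockingThreadPoolExecutor(max_workers=10, queue_size=100) as executor:
    for i in range(100_000):
        executor.submit(do_work, i)  # blocks while 100 jobs are already queued
</code></pre>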
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
as_completed identifying coroutine objects<p>I'm using asyncio to await a set of coroutines in the following way:</p>
<pre><code># let's assume we have fn defined and that it can throw an exception
coros_objects = []
for x in range(10):
    coros_objects.append(fn(x))

for c in asyncio.as_completed(coros_objects):
    try:
        y = await c
    except Exception:
        # something
        # if possible print(x)
        pass
</code></pre>
<p>The question is: how can I know which coroutine failed, and for which argument?
I could append <code>"x"</code> to the outputs, but this would give me info about successful executions only.</p>
<p>I can't work it out from the completion order, because it's different from the order of <code>coros_objects</code>.</p>
<p>Can I somehow identify which coroutine just yielded a result?</p>
|
<blockquote>
<p>Question is how can I know which coroutine failed and for which argument?</p>
</blockquote>
<p>You can't with the current <code>as_completed</code>. Once <a href="https://bugs.python.org/issue33533" rel="nofollow noreferrer">this PR</a> is merged, it will be possible by attaching the information to the future (because <code>as_completed</code> will then yield the original futures). At the moment there are two workarounds:</p>
<ul>
<li>wrap the coroutine execution in a wrapper that catches exceptions and stores them, and also stores the original arguments that you need (see the sketch below this list), or</li>
<li>not use <code>as_completed</code> at all, but write your own loop using tools like <code>asyncio.wait</code>.</li>
</ul>
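<p>A minimal sketch of the first (wrapper) option, reusing <code>fn</code> from the question:</p>
<pre><code>async def wrapped(x):
    try:
        return x, await fn(x), None
    except Exception as e:
        return x, None, e

for c in asyncio.as_completed([wrapped(x) for x in range(10)]):
    x, y, err = await c
    if err is not None:
        print(f'{err} happened while processing {x}')
</code></pre>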
<p>The second option is easier than most people expect, so here it is (untested):</p>
<pre><code># create a list of tasks and attach the needed information to each
tasks = []
for x in range(10):
    t = asyncio.create_task(fn(x))
    t.my_task_arg = x
    tasks.append(t)

# emulate as_completed with asyncio.wait()
while tasks:
    done, tasks = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in done:
        try:
            y = await t
        except Exception as e:
            print(f'{e} happened while processing {t.my_task_arg}')
</code></pre>
|