| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,593,966
| 11,267,783
|
Colored background for gridspec subplots
|
<p>I wanted to have a background for each subplot of my figure.
In my example, I want the left side to be red and the right side to be blue.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.gridspec import GridSpec
fig = plt.figure()
gs = GridSpec(1,2,figure=fig)
data1 = np.random.rand(10,10)
data2 = np.random.rand(10,10)
ax_left = fig.add_subplot(gs[:,0], facecolor='red')
ax_left.set_title('red')
img_left = ax_left.imshow(data1, aspect='equal')
ax_right = fig.add_subplot(gs[:,1], facecolor='blue')
ax_right.set_title('blue')
img_right = ax_right.imshow(data2, aspect='equal')
plt.show()
</code></pre>
<p>How can I code this behavior?</p>
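One low-tech sketch (an assumption, not the only approach): since `imshow` paints over the axes' `facecolor`, painting figure-level rectangles behind each grid cell gives the red/blue halves. The rectangle extents (0 to 0.5 and 0.5 to 1 in figure coordinates) assume a 1x2 grid.

```python
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.gridspec import GridSpec
from matplotlib.patches import Rectangle

fig = plt.figure()
gs = GridSpec(1, 2, figure=fig)

# paint the two halves of the figure behind the axes (figure coordinates)
fig.add_artist(Rectangle((0.0, 0.0), 0.5, 1.0, transform=fig.transFigure,
                         color='red', zorder=0))
fig.add_artist(Rectangle((0.5, 0.0), 0.5, 1.0, transform=fig.transFigure,
                         color='blue', zorder=0))

ax_left = fig.add_subplot(gs[:, 0])
ax_left.set_title('red')
ax_left.imshow(np.random.rand(10, 10), aspect='equal')

ax_right = fig.add_subplot(gs[:, 1])
ax_right.set_title('blue')
ax_right.imshow(np.random.rand(10, 10), aspect='equal')
```

The rectangles sit below the axes because they are drawn with the figure's background pass; the data axes then render on top of them.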
|
<python><matplotlib><matplotlib-gridspec>
|
2023-02-28 15:20:05
| 2
| 322
|
Mo0nKizz
|
75,593,873
| 5,817,109
|
Most Efficient Way to Fetch Large Amount of Data and Write It to Postgresql
|
<p>I am writing a Python Lambda function in AWS that does the following:</p>
<ul>
<li>fetch data from an external API using the requests library</li>
<li>write the data to a postgresql table</li>
</ul>
<p>The catch: the returned json has only 2 fields that need to be written to the db, but the data is fairly large, and there are thousands (or hundreds of thousands) of entries. What is the most effective Python tool to do this, and also what should be my approach? For example, set up a while loop and retrieve 100 entries at a time? Any suggestions, resources or pseudo code will be appreciated!</p>
<p>Thank you!</p>
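A dependency-free sketch of the batching part of such a loop; the psycopg2 lines in the comment are a hypothetical follow-up (`entries`, `cur`, and the table name are illustrative, not from the question):

```python
from itertools import islice

def batched(iterable, size):
    """Yield lists of at most `size` items from any iterable (Python 3.8+)."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

print(list(batched(range(5), 2)))  # [[0, 1], [2, 3], [4]]

# hypothetical follow-up with psycopg2 (not run here):
# from psycopg2.extras import execute_values
# for chunk in batched(entries, 1000):
#     execute_values(cur, "insert into my_table (field_a, field_b) values %s", chunk)
```

Batching keeps memory bounded regardless of how many entries the API returns, which matters inside a memory-capped Lambda.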
|
<python><postgresql>
|
2023-02-28 15:12:50
| 0
| 305
|
Moneer81
|
75,593,863
| 4,971,866
|
Continue to match next case after match?
|
<p>I have a match case that I want to execute even if a previous statement already matched.</p>
<p>Here's what I currently have:</p>
<pre class="lang-py prettyprint-override"><code>key: Literal['all', 'a', 'b'] = 'a'

def do_a():
    pass

def do_b():
    pass

match key:
    case 'a':
        do_a()
    case 'b':
        do_b()
    case 'all':
        do_a()
        do_b()
</code></pre>
<p>Is there any way to simplify the code so I can remove the <code>case 'all'</code>?</p>
<p>Something like</p>
<pre class="lang-py prettyprint-override"><code>match key:
    case 'a' | 'all':
        do_a()
    case 'b' | 'all':
        do_b()
</code></pre>
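For what it's worth, `match` stops at the first matching case, so the `'a' | 'all'` version would run only `do_a()` for `'all'`. One common workaround is a plain dispatch table keyed by the same literals (names here mirror the question):

```python
calls = []

def do_a():
    calls.append('a')

def do_b():
    calls.append('b')

# match/case has no C-style fall-through, so map each key to all the
# actions it should trigger
ACTIONS = {'a': (do_a,), 'b': (do_b,), 'all': (do_a, do_b)}

def run(key):
    for action in ACTIONS[key]:
        action()

run('all')  # appends 'a' then 'b'
```

This keeps each action listed once per key and avoids repeating the bodies in a `case 'all'` branch.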
|
<python><switch-statement>
|
2023-02-28 15:12:02
| 2
| 2,687
|
CSSer
|
75,593,855
| 9,550,917
|
Constrain elements in a PyTorch tensor to be equal
|
<p>I have a PyTorch tensor and would like to <strong>impose equality constraints on its elements while optimizing</strong>. An example tensor of 2 * 9 is shown below, where the same color indicates the elements should always be equal.</p>
<p><a href="https://i.sstatic.net/TVXUN.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TVXUN.jpg" alt="Example tensor" /></a></p>
<p>Let's make a minimal example of 1 * 4, and initialize the first two and last two elements to be equal respectively.</p>
<pre class="lang-py prettyprint-override"><code>import torch
x1 = torch.tensor([1.2, 1.2, -0.3, -0.3], requires_grad=True)
print(x1)
# tensor([ 1.2000, 1.2000, -0.3000, -0.3000])
</code></pre>
<p>If I perform a simple least squares directly, the equality no longer holds.</p>
<pre class="lang-py prettyprint-override"><code>y = torch.arange(4)
opt_1 = torch.optim.SGD([x1], lr=0.1)
opt_1.zero_grad()
loss = (y - x1).pow(2).sum()
loss.backward()
opt_1.step()
print(x1)
# tensor([0.9600, 1.1600, 0.1600, 0.3600], requires_grad=True)
</code></pre>
<hr />
<p>I tried to express this tensor as <strong>a weighted sum of masks</strong>.</p>
<pre class="lang-py prettyprint-override"><code>def weighted_sum(c, masks):
    return torch.sum(torch.stack([c[0] * masks[0], c[1] * masks[1]]), axis=0)

c = torch.tensor([1.2, -0.3], requires_grad=True)
masks = torch.tensor([[1, 1, 0, 0], [0, 0, 1, 1]])
x2 = weighted_sum(c, masks)
print(x2)
# tensor([ 1.2000, 1.2000, -0.3000, -0.3000])
</code></pre>
<p>In this way, the equality remains after optimization.</p>
<pre class="lang-py prettyprint-override"><code>opt_c = torch.optim.SGD([c], lr=0.1)
opt_c.zero_grad()
y = torch.arange(4)
x2 = weighted_sum(c, masks)
loss = (y - x2).pow(2).sum()
loss.backward()
opt_c.step()
print(c)
# tensor([0.9200, 0.8200], requires_grad=True)
print(weighted_sum(c, masks))
# tensor([0.9200, 0.9200, 0.8200, 0.8200], grad_fn=<SumBackward1>)
</code></pre>
<p>However, the biggest issue of this solution is that I have to maintain <strong>a large set of masks</strong> when the input dimension is high; surely it will result in <strong>out of memory</strong>. Suppose the shape of input tensor is <code>d_0 * d_1 * ... * d_m</code>, and the number of equality blocks is <code>k</code>, then there will be a huge mask of shape <code>k * d_0 * d_1 * ... * d_m</code>, which is unacceptable.</p>
<hr />
<p>Another solution might be <strong>upsampling</strong> the low resolution tensor like <a href="https://stackoverflow.com/questions/55734651/stretch-the-values-of-a-pytorch-tensor">this one</a>. However, it cannot be applied to irregular equality blocks, e.g.,</p>
<pre><code>tensor([[ 1.2000, 1.2000, 1.2000, -3.1000, -3.1000],
[-0.1000, 2.0000, 2.0000, 2.0000, 2.0000]])
</code></pre>
<p>So... is there a smarter way of implementing such equality constraints in a PyTorch tensor?</p>
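One memory-friendly sketch (not from the question; the name `idx` is illustrative): advanced indexing with an integer "block id" tensor shaped like the target replaces the `k * d_0 * ... * d_m` mask stack with a single integer tensor of shape `d_0 * ... * d_m`, and gradients of equal elements automatically sum into the shared parameter:

```python
import torch

# one scalar parameter per equality block; an integer index expands it
c = torch.tensor([1.2, -0.3], requires_grad=True)
idx = torch.tensor([0, 0, 1, 1])  # block id of each element; any shape works

y = torch.arange(4, dtype=torch.float32)
opt = torch.optim.SGD([c], lr=0.1)
opt.zero_grad()
x = c[idx]                   # advanced indexing, no mask stack needed
loss = (y - x).pow(2).sum()  # gradients of equal elements sum back into c
loss.backward()
opt.step()
print(c)  # tensor([0.9200, 0.8200], requires_grad=True)
```

This reproduces the weighted-mask result from the question, and irregular blocks are just different block ids in `idx`.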
|
<python><pytorch><tensor>
|
2023-02-28 15:11:14
| 3
| 341
|
Cheng
|
75,593,703
| 17,996,202
|
Allowing passing a function as an argument with an optional kwarg in Python
|
<p>I have a function that takes in another function as a parameter, and then runs it when a condition is met. However, some of the functions passed in will have a keyword argument, and if it is present, I want it to be filled. This should also work with <code>threading</code>, so try/except is not an option here. Here is an example of what I am talking about.</p>
<pre><code>def runFuncOnNumLargerThanTen(func, thread = False):
    larger = False
    while not larger:
        num = int(input("Enter a number"))
        larger = (num > 10)
    kwargnumpresentinfunc = # I don't know what to put here
    if thread:
        if kwargnumpresentinfunc:
            thread = threading.Thread(target=func, kwargs={"num": num})
            thread.start()
        else:
            thread = threading.Thread(target=func)
            thread.start()
    else:
        if kwargnumpresentinfunc:
            func(num=num)
        else:
            func()

def customDisplayFunc(num = None):
    print(f"Num {num} is greater than 10!")

def customDisplayFunc1():
    print("Num larger than 10!")

runFuncOnNumLargerThanTen(customDisplayFunc, thread = True) # kwargnumpresentinfunc = True
runFuncOnNumLargerThanTen(customDisplayFunc1) # kwargnumpresentinfunc = False
</code></pre>
<p>How would I work out <code>kwargnumpresentinfunc</code>? I thought about using <code>inspect</code>, however the function passed in to <code>runFuncOnNumLargerThanTen</code> may also be an inbuilt function, and <code>inspect.getfullargspec(func)</code> doesn't work for inbuilts. If this is bad practice, I am open to alternative suggestions.</p>
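A possible sketch using `inspect.signature`, which, unlike `getfullargspec`, works for many builtins and raises `ValueError` for the rest (the helper name `accepts_kwarg` is made up here):

```python
import inspect

def accepts_kwarg(func, name):
    """Best-effort check: can `func` be called with keyword argument `name`?"""
    try:
        sig = inspect.signature(func)
    except (ValueError, TypeError):
        return False  # some C builtins expose no signature at all
    params = sig.parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return True  # func takes **kwargs, so any keyword is accepted
    p = params.get(name)
    return p is not None and p.kind in (
        inspect.Parameter.POSITIONAL_OR_KEYWORD,
        inspect.Parameter.KEYWORD_ONLY,
    )

def f(num=None):
    pass

def g():
    pass

print(accepts_kwarg(f, "num"), accepts_kwarg(g, "num"))  # True False
```

Treating "no signature available" as `False` is a design choice; falling back to calling without the keyword is the safe default for opaque builtins.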
|
<python><multithreading>
|
2023-02-28 14:57:42
| 2
| 318
|
Yetiowner
|
75,593,629
| 7,668,453
|
How to load a tensorflow .yaml config file now that model_from_yaml is deprecated?
|
<p>I am trying to load an object detection model from the official <a href="https://github.com/tensorflow/models/blob/master/official/vision/MODEL_GARDEN.md" rel="nofollow noreferrer">tensorflow model zoo</a>. I have one problem, however: they all seem to use the old config + checkpoint system. The config files are saved in .yaml format, but the <a href="https://tf.keras.models.model_from_yaml" rel="nofollow noreferrer">model_from_yaml</a> method is deprecated and throwing an error.</p>
<p>I am trying to convert <a href="https://github.com/tensorflow/models/blob/master/official/vision/configs/experiments/image_classification/imagenet_resnetrs152_i256.yaml" rel="nofollow noreferrer">this file</a>. Converting to a json file and trying model_from_json results in the following error:</p>
<pre><code>Exception has occurred: JSONDecodeError
Expecting value: line 1 column 1 (char 0)
StopIteration: 0
</code></pre>
<p>Trying to read the yaml file separately with this code also returns an error:</p>
<pre><code>config = {}
with open("C:/Users/XXX/Downloads/resnet-rs-152-i256/imagenet_resnetrs152_i256.yaml", "r") as stream:
    try:
        config = yaml.safe_load(stream)
    except yaml.YAMLError as exc:
        print(exc)

loaded_model = tf.keras.models.model_from_config(config)
Exception has occurred: ValueError
Improper config format for {...}
Expecting python dict contains `class_name` and `config` as keys
</code></pre>
<p>Are the configs wrong? Is there no way to load any of these models?</p>
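For what it's worth, the YAML itself parses fine as a plain dictionary; as far as I can tell, these Model Garden files are *experiment/training* configs, not serialized Keras models, so `model_from_config` has no `class_name`/`config` layer spec to instantiate. A minimal sketch, with a made-up fragment standing in for the downloaded file:

```python
import yaml

# stand-in for the downloaded experiment config (hypothetical fragment)
cfg_text = """
runtime:
  distribution_strategy: mirrored
task:
  model:
    num_classes: 1001
"""
cfg = yaml.safe_load(cfg_text)  # a plain nested dict, no Keras layers inside
print(cfg["task"]["model"]["num_classes"])  # 1001
```

Loading the dict is not the hard part; the config is meant to be consumed by the Model Garden training loop together with the checkpoint, not by `tf.keras.models.model_from_config`.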
|
<python><tensorflow><parsing><keras><yaml>
|
2023-02-28 14:50:40
| 1
| 351
|
Torben Nordtorp
|
75,593,531
| 4,971,866
|
Python custom package: how to specify paths of yaml files within the package?
|
<p>I want to create a python package with below folder structure:</p>
<pre><code>/package
    /package
        __init__.py
        /main
            __init__.py
            main.py
            ...
        /data
            __init__.py
            constants.py <--
            data.yml
    pyproject.toml
    ...
</code></pre>
<p>And inside <code>constants.py</code>, I defined the path to <code>data.yml</code>:
<code>DATA_PATH = './data.yml'</code> and used it in <code>main.py</code>:</p>
<pre><code>from package.data.constants import DATA_PATH

with open(DATA_PATH, 'r') as f:
    ...
</code></pre>
<p>Then I built and installed the package into another project.</p>
<p>But when some code from <code>main</code> is used, it complains about "No such file or directory".</p>
<p>I also tried <code>'./data/data.yml'</code> and <code>'./package/data/data.yml'</code>, and neither worked.</p>
<p>How should I define the path?</p>
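A common sketch: a bare `'./data.yml'` resolves against the *process's working directory*, not the module's location, so it breaks once the package is installed elsewhere. Anchoring the path to the module itself avoids that:

```python
# package/data/constants.py (sketch)
from pathlib import Path

# resolves next to this module, wherever the package is installed
DATA_PATH = Path(__file__).resolve().parent / "data.yml"

# For zip-safe installs, importlib.resources is the more robust option:
# from importlib.resources import files
# DATA_PATH = files("package.data") / "data.yml"

print(DATA_PATH.name)  # data.yml
```

Either way, remember to declare `data.yml` as package data in `pyproject.toml` so it actually ships with the wheel.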
|
<python>
|
2023-02-28 14:41:54
| 1
| 2,687
|
CSSer
|
75,593,515
| 998,318
|
How to pass columns to select as a parameter?
|
<p>How do I pass as parameters the columns I want to select, especially when I want to select an unknown number of columns?</p>
<p>I'm using psycopg2 in python</p>
<p>This is my query:</p>
<pre><code>columns_to_select = "errors, requests"
sql = "select bucket, %(metrics_to_select)s from metrics"
cursor.execute(sql, {"metrics_to_select": columns_to_select})
</code></pre>
<p>I want this to produce this query:</p>
<pre><code>select bucket, errors, requests from metrics
</code></pre>
<p>Nothing that I have tried works.</p>
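This fails because query parameters can only carry *values*, never identifiers; psycopg2 quotes the string, producing `select bucket, 'errors, requests' from metrics`. The canonical fix is `psycopg2.sql.SQL`/`sql.Identifier` composition; a dependency-free sketch of the same idea, validating column names against a whitelist before string-building (the whitelist contents are illustrative):

```python
ALLOWED_COLUMNS = {"bucket", "errors", "requests"}  # known-good identifiers

def build_query(columns):
    """Build the SELECT after checking every column against a whitelist."""
    for col in columns:
        if col not in ALLOWED_COLUMNS:
            raise ValueError(f"unknown column: {col}")
    return "select bucket, {} from metrics".format(", ".join(columns))

print(build_query(["errors", "requests"]))
# select bucket, errors, requests from metrics
```

With psycopg2 itself, the equivalent would be roughly `sql.SQL("select bucket, {} from metrics").format(sql.SQL(", ").join(map(sql.Identifier, columns)))`, which quotes each identifier safely without a whitelist.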
|
<python><postgresql><psycopg2>
|
2023-02-28 14:40:33
| 1
| 16,064
|
Moshe Shaham
|
75,593,477
| 8,681,249
|
gpytorch, regression on targets and classification of gradients to negative or positive
|
<p>I would like to set up the following model in GPyTorch:
I have 4 inputs and I want to predict an output (regression).
At the same time, I want to constrain the gradients of 3 inputs to be positive and of 1 input to be negative (with respect to the input).</p>
<p>However, I don't know how to set this problem up with multiple likelihoods.
Up to now, I have been generating the gradients with torch's autograd and adding/subtracting their probit function.</p>
<pre><code> dist1 = torch.distributions.normal.Normal(gradspred[:,0], grads_var[:,0])
dist2 = torch.distributions.normal.Normal(gradspred[:,1], grads_var[:,1])
dist3 = torch.distributions.normal.Normal(gradspred[:,2], grads_var[:,2])
dist4 = torch.distributions.normal.Normal(gradspred[:,3], grads_var[:,3])
loss_1 = torch.mean(dist1.cdf(torch.tensor(0)))
loss_2 = 1-torch.mean(dist2.cdf(torch.tensor(0)))
loss_3 = 1-torch.mean(dist3.cdf(torch.tensor(0)))
loss_4 = 1-torch.mean(dist4.cdf(torch.tensor(0)))
loss = -torch.mean(self.mll(self.output, self.train_y))\
+100*(loss_1 + loss_2 + loss_3 + loss_4)
</code></pre>
<p>but this is not working correctly. Ultimately, I would like to either have 2 MLLs, a Gaussian and a Bernoulli, or recreate this paper:</p>
<p><a href="http://proceedings.mlr.press/v9/riihimaki10a/riihimaki10a.pdf" rel="nofollow noreferrer">http://proceedings.mlr.press/v9/riihimaki10a/riihimaki10a.pdf</a></p>
<p>Note: I have found a similar implementation in GPy, namely so:</p>
<pre><code>def fd(x):
    return -np.ones((x.shape[0], 1))

def test_multioutput_model_with_ep():
    f = lambda x: np.sin(x) + 0.1*(x - 2.)**2 - 0.005*x**3
    N = 10
    sigma = 0.05
    x = np.array([np.linspace(1, 10, N)]).T
    y = f(x)
    print(y)
    M = 15
    xd = x
    yd = fd(x)
    # squared exponential kernel:
    se = GPy.kern.RBF(input_dim=1, lengthscale=1.5, variance=0.2)
    # We need to generate a separate kernel for the derivative observations and give the created kernel as an input:
    se_der = GPy.kern.DiffKern(se, 0)
    # Then
    gauss = GPy.likelihoods.Gaussian(variance=sigma**2)
    probit = GPy.likelihoods.Binomial(gp_link=GPy.likelihoods.link_functions.ScaledProbit(nu=100))
    inference = GPy.inference.latent_function_inference.expectation_propagation.EP(ep_mode='nested')
    m = GPy.models.MultioutputGP(X_list=[x, xd], Y_list=[y, yd], kernel_list=[se, se_der], likelihood_list=[gauss, probit], inference_method=inference)
    m.optimize(messages=0, ipython_notebook=False)
</code></pre>
<p>but this breaks for multi-dimensional inputs because the EP is only implemented for 1D.
Any help will be more than welcome, and I don't care which library is used.
Best</p>
|
<python><gpytorch><gpy>
|
2023-02-28 14:35:48
| 0
| 413
|
john
|
75,593,462
| 1,239,123
|
Is there a practical way to determine whether two (Python) regular expressions are (likely to be) mutually exclusive?
|
<p>TL;DR: There's no generic solution, but is there something better than a simple (Python) string equality test for identical regexes?</p>
<p>From reading this <a href="https://stackoverflow.com/questions/2967991/mutually-exclusive-regular-expressions">answer</a>, I gather that a generic solution is theoretically impossible.</p>
<p>However, I work on <a href="https://github.com/autokey/autokey" rel="nofollow noreferrer">AutoKey</a>, a Python application that compares a window's title and/or class to more than one regex to determine which of several actions to take when AutoKey is triggered by a particular trigger. See <a href="https://github.com/autokey/autokey/issues/365" rel="nofollow noreferrer">this</a> for the actual issue.</p>
<p>Currently, when a new action is added, its associated regex is checked against all other actions with the identical trigger.</p>
<p>The test in use now is a simple string compare of the two regexes. If they are identical, the new action is not allowed to be added.</p>
<p>This very weak test has proven sufficient most of the time, but if it fails, it results in undefined behavior - (probably) the first action discovered to match is used, where the search order is implementation dependent.</p>
<p>Most of the regexes used in practice are either pure constant literal strings like <code>kate.kate</code> or things like <code>.*app_title.*</code> and <code>.*ivaldi.*|.*brave.*</code>. They only get weird when a negative window filter is needed (to match all windows <em>except</em> those whose title <em>or</em> class match the expression).</p>
<p>So, given that a perfect solution is unavailable is there some half way measure that will at least be better than a simple equality test?</p>
<p>Is there something that could estimate the likelihood that two regexes are not mutually exclusive, or would at least detect trivial things like <code>.*string</code> being equivalent to <code>.*.*string</code>?</p>
<p>After writing this, I came up with a big <a href="https://github.com/autokey/autokey/issues/365#issuecomment-1448262185" rel="nofollow noreferrer">improvement</a> in the form of a purely empirical heuristic.</p>
<p>I still want to know if there's a more elegant solution/approach.</p>
<p>Note: I am not expert in any of this, so any suggested improvement has to be fairly simple to understand and implement for it to be useful.</p>
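In the same empirical-heuristic spirit, one crude sketch (a rough approximation, easy to fool in both directions): strip the `.*` wildcards to recover each pattern's literal core, then check whether either full pattern matches the other's core.

```python
import re

def likely_overlap(r1, r2):
    """Crude heuristic: strip `.*` wildcards to get each pattern's literal
    core, then see whether either pattern matches the other's core."""
    core1 = re.sub(r'\.\*', '', r1)
    core2 = re.sub(r'\.\*', '', r2)
    return bool(re.search(r1, core2) or re.search(r2, core1))

print(likely_overlap('.*string', '.*.*string'))  # True  (equivalent patterns)
print(likely_overlap('.*abc.*', '.*xyz.*'))      # False (disjoint literals)
```

It handles the mostly-literal window filters described above; alternation and negative filters would still slip through, so it is a screen, not a proof.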
|
<python><regex>
|
2023-02-28 14:34:38
| 0
| 391
|
Joe
|
75,593,291
| 2,998,077
|
Python Numpy polyfit gets the same as Excel Linear for slope
|
<p>By the use of the code below, I get the slope of a list of numbers.</p>
<p>It's referenced from the answer to this question, <a href="https://stackoverflow.com/questions/42920537/finding-increasing-trend-in-pandas">Finding increasing trend in Pandas</a>.</p>
<pre><code>import numpy as np
import pandas as pd
def trendline(data, order=1):
    coeffs = np.polyfit(data.index.values, list(data), order)
    slope = coeffs[-2]
    return float(slope)

score = [275,1625,7202,6653,1000,2287,3824,3812,2152,4108,255,2402]
df = pd.DataFrame({'Score': score})
slope = trendline(df['Score'])
print(slope)
# -80.84965034965013
</code></pre>
<p>In Excel, the slope is about the same when the trendline is plotted with the Linear method. The slope is different when Excel plots it using the Polynomial method.</p>
<p>The Python function "trendline" is defined with "np.polyfit". Why does it calculate the same value as Excel does with Linear?</p>
<p>(Or have I applied it wrongly somewhere?)</p>
<p><a href="https://i.sstatic.net/L8Hp8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L8Hp8.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/6ynTG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6ynTG.png" alt="enter image description here" /></a></p>
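The agreement is expected: a degree-1 `polyfit` minimizes the same squared error as the textbook least-squares slope, which is also what Excel's linear trendline and `SLOPE()` compute; Excel's Polynomial trendline fits a higher-degree curve, so its leading coefficient differs. A sanity-check sketch:

```python
import numpy as np

score = [275, 1625, 7202, 6653, 1000, 2287, 3824, 3812, 2152, 4108, 255, 2402]
x = np.arange(len(score), dtype=float)
y = np.array(score, dtype=float)

slope = np.polyfit(x, y, 1)[0]  # degree-1 fit, as in trendline()

# closed-form ordinary-least-squares slope (the formula behind Excel SLOPE())
manual = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()

print(slope)   # -80.84965034965...
print(manual)  # same value
```

Both routes solve the identical minimization, so any difference is floating-point noise.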
|
<python><excel><numpy><trend>
|
2023-02-28 14:20:48
| 1
| 9,496
|
Mark K
|
75,593,217
| 7,093,241
|
Outer scope variable not registered in contained functions
|
<p>I was coding this code snippet provided as a leetcode solution in Python but found that my variable <code>flag</code> isn't available.</p>
<pre><code>public class Solution {
    boolean flag = false;

    public boolean checkInclusion(String s1, String s2) {
        permute(s1, s2, 0);
        return flag;
    }

    public String swap(String s, int i0, int i1) {
        if (i0 == i1)
            return s;
        String s1 = s.substring(0, i0);
        String s2 = s.substring(i0 + 1, i1);
        String s3 = s.substring(i1 + 1);
        return s1 + s.charAt(i1) + s2 + s.charAt(i0) + s3;
    }

    void permute(String s1, String s2, int l) {
        if (l == s1.length()) {
            if (s2.indexOf(s1) >= 0)
                flag = true;
        } else {
            for (int i = l; i < s1.length(); i++) {
                s1 = swap(s1, l, i);
                permute(s1, s2, l + 1);
                s1 = swap(s1, l, i);
            }
        }
    }
}
</code></pre>
<p>This is how I rewrote it in Python together with the checks. What I did was move the helper functions to be part of the <code>checkInclusion</code> function so that the helper functions can run within the context of, for lack of a better word, "main" function. What I noticed is that my assertions fail. In trying to debug, I tried to print out the <code>flag</code> variable but nothing was printed out.</p>
<pre><code>from typing import List

class Solution:
    def checkInclusion(self, s1: str, s2: str) -> bool:
        def swap(string: str, left: int, right: int) -> str:
            stringList = list(string)
            stringList[left], stringList[right] = stringList[right], stringList[left]
            return "".join(stringList)

        def permute(string: str, left: int, right: int, permutations: List[int]) -> List[int]:
            """left and right are indices for the string"""
            if left == right:
                permutations.append(string)
                return permutations
            else:
                for i in range(left, right + 1):
                    swapped = swap(string, left, i)
                    permute(swapped, left + 1, right, permutations)
                return permutations

        permutations = permute(s1, 0, len(s1) - 1, [])
        # print("permutations =", permutations)
        for permutation in permutations:
            if permutation in s2:
                return True
        return False

# brute force - NOT creating a list of the permutations
# n = len(s1)
# m = len(s2)
# Time : O(n! + nm) - creating the permutations and nm for
# comparing the s1 in s2. Depends on relative size of s1 and s2
# to determine what dominates
# Space: O(n!) - n! for the list of permutations, O(n x n) for the
# recursion tree depth and length of the string but n! dominates
# for large n.

from typing import List

class Solution:
    def checkInclusion(self, s1: str, s2: str) -> bool:
        flag = False

        def swap(string: str, left: int, right: int) -> str:
            stringList = list(string)
            stringList[left], stringList[right] = stringList[right], stringList[left]
            return "".join(stringList)

        def permute(s1: str, s2: str, pos: int) -> None:
            print("flag in permute =", flag)
            if pos == len(s1):
                print("flag in if statement", flag)
                if s2.find(s1) > -1:
                    flag = True
            else:
                for i in range(1, len(s1)):
                    s1 = swap(s1, pos, i)
                    permute(s1, s2, pos + 1)
                    s1 = swap(s1, pos, i)

        permute(s1, s2, 0)
        return flag

sol = Solution()
s1 = "x"
s2 = "ab"
assert sol.checkInclusion(s1, s2) == True
s1 = "a"
s2 = "ab"
print(sol.checkInclusion(s1, s2))
assert sol.checkInclusion(s1, s2) == True
s1 = "ab"
s2 = "eidbaooo"
assert sol.checkInclusion(s1, s2) == True
s1 = "ab"
s2 = "eidboaoo"
assert sol.checkInclusion(s1, s2) == False
</code></pre>
<p>The error I got doesn't print out <code>flag</code>.</p>
<pre><code>Traceback (most recent call last):
File "567. Permutation in String.py", line 75, in <module>
assert sol.checkInclusion(s1, s2) == True
File "567. Permutation in String.py", line 68, in checkInclusion
permute(s1, s2, 0)
File "567. Permutation in String.py", line 57, in permute
print("flag in permute =", flag)
UnboundLocalError: local variable 'flag' referenced before assignment
</code></pre>
<p>Although I would like to know how to make a better conversion into Python, I feel I am missing a crucial bit of information on why <code>flag</code> appears to be referenced before it is assigned.</p>
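The crucial bit is that assigning to a name anywhere inside a function makes that name local to the *whole* function, so the `flag = True` in `permute` shadows the outer `flag`, and even the first `print` then reads an unassigned local. A minimal sketch of the fix, `nonlocal` (Java's field access has no such distinction, which is why the original worked):

```python
def check():
    flag = False

    def permute_like():
        # the assignment below would otherwise make `flag` local to
        # permute_like, so even *reading* it first raises UnboundLocalError
        nonlocal flag
        flag = True

    permute_like()
    return flag

print(check())  # True
```

An alternative design, closer to the second rewrite above, is to avoid the shared flag entirely and have `permute` return a boolean.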
|
<python><java>
|
2023-02-28 14:14:13
| 0
| 1,794
|
heretoinfinity
|
75,593,113
| 14,269,252
|
Plot timeseries data against categorical column
|
<p>I plotted a huge dataset with the x axis as date and the y axis as a categorical variable.
I tried the code below, but the plot is unreadable.</p>
<p>I want to show that on a specific date a specific event (Code) happened, based on different sources, such as creating a timeline with lines, dates, and text.</p>
<pre><code> ID DATE CODE SOURCE
0 P04 2016-08-08 f m1
1 P04 2015-05-08 f m1
2 P04 2010-07-20 v m3
3 P04 2013-12-06 g m4
4 P08 2018-03-01 h m4
x2 = df.groupby(['DATE', 'CODE']).size()
x2.plot.bar()
</code></pre>
<p><a href="https://i.sstatic.net/0wa62.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0wa62.png" alt="enter image description here" /></a></p>
<p>Expected output (creating a timeline with lines, dates, and text, where the text here is the code):</p>
<p><a href="https://i.sstatic.net/6kLq7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6kLq7.png" alt="enter image description here" /></a></p>
<p>My other attempt, with the code provided by r-beginners:
<a href="https://i.sstatic.net/DxEMT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DxEMT.png" alt="enter image description here" /></a></p>
|
<python><dataframe><matplotlib><plot><plotly>
|
2023-02-28 14:03:28
| 2
| 450
|
user14269252
|
75,593,046
| 2,908,017
|
How to create a masked password box in a Python FMX GUI app?
|
<p>I'm making an app using <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI library for Python</a>. I need to have a component where I can enter a password and have that password masked. Sure I can use an <code>Edit</code> component, but then the password is in plain text and not masked.</p>
<p>How do I mask a password in an Edit component or is there a component specifically for passwords?</p>
|
<python><user-interface><passwords><firemonkey>
|
2023-02-28 13:57:13
| 1
| 4,263
|
Shaun Roselt
|
75,592,945
| 5,506,400
|
Django: Model attribute lookup for field works in view, but not in template
|
<p>Say I have a model:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models

class Foo(models.Model):
    fav_color = models.CharField(max_length=250, help_text="What's your fav color?")

print(Foo.fav_color.field.help_text)  # prints: What's your fav color?
</code></pre>
<p>Say I have a view with context:</p>
<pre class="lang-py prettyprint-override"><code>{
'Foo': Foo
}
</code></pre>
<p>And in a template <code>{{ Foo.fav_color.field.help_text }}</code>, the result will be blank. However <code>{{ Foo }}</code> will print the string representation of the model, so I know the models is getting passed into the template.</p>
<p>Why do the attribute lookups to get the <code>help_text</code> fail in the template? Why is the <code>help_text</code> empty in the template?</p>
|
<python><django><django-models><django-templates><django-model-field>
|
2023-02-28 13:47:51
| 2
| 2,550
|
run_the_race
|
75,592,867
| 2,908,017
|
How do I display tooltips in a Python FMX GUI app?
|
<p>I have created a window <code>Form</code> with a <code>Button</code> on it with the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI library for Python</a>. I want to be able to add a hover tooltip to the button which explains what it does. Here's an example of a button tooltip on a Form, when I hover on it with my mouse, then it shows "I am a button":</p>
<p><a href="https://i.sstatic.net/PgjGe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PgjGe.png" alt="Form with Button and tooltip" /></a></p>
<p>Here's the code I currently have to create my Form with Button:</p>
<pre><code>from delphifmx import *
class frmMain(Form):
    def __init__(self, owner):
        self.Caption = 'My Form'
        self.Width = 1000
        self.Height = 500
        self.Position = "ScreenCenter"

        self.myButton = Button(self)
        self.myButton.Parent = self
        self.myButton.Text = "Hello World!"
        self.myButton.Align = "Client"
        self.myButton.Margins.Top = 20
        self.myButton.Margins.Right = 20
        self.myButton.Margins.Bottom = 20
        self.myButton.Margins.Left = 20
        self.myButton.StyledSettings = ""
        self.myButton.TextSettings.Font.Size = 50
        self.myButton.onClick = self.Button_OnClick

    def Button_OnClick(self, sender):
        print("Hello World!")

def main():
    Application.Initialize()
    Application.Title = "My Application"
    Application.MainForm = frmMain(Application)
    Application.MainForm.Show()
    Application.Run()
    Application.MainForm.Destroy()

main()
</code></pre>
<p>I tried doing <code>self.myButton.toolTip = "Click me!"</code>, but this doesn't work.</p>
<p>How do tooltips work in the DelphiFMX library?</p>
|
<python><user-interface><tooltip><firemonkey>
|
2023-02-28 13:41:10
| 1
| 4,263
|
Shaun Roselt
|
75,592,701
| 610,569
|
How to read directory that contains different parquets in parts?
|
<p>Given a parquet dataset with a job_id <code>abc</code>, saved in parts as such:</p>
<pre><code>my_dataset/
    part0.abc.parquet
    part1.abc.parquet
    part2.abc.parquet
</code></pre>
<p>It is possible to read the dataset with vaex or pandas:</p>
<pre><code>import vaex
df = vaex.open('my_dataset')
import pandas as pd
df = pd.read_parquet('my_dataset')
</code></pre>
<p>But sometimes our ETL pipeline appends parquet parts from another job id <code>xyz</code> to the <code>my_dataset</code> directory, which causes the directory to become something like:</p>
<pre><code>my_dataset/
    part0.abc.parquet
    part0.xyz.parquet
    part1.abc.parquet
    part1.xyz.parquet
    part2.abc.parquet
    part2.xyz.parquet
</code></pre>
<p>The main problem is we don't know the job ID created by the ETL pipeline, but we know that they are unique.</p>
<p>Is there some method in <code>pandas.read_parquet</code> to automatically group the parts together? E.g.</p>
<pre><code>import pandas as pd
dfs = pd.read_parquet('my_dataset')
</code></pre>
<p>[out]:</p>
<pre><code>{
    'abc': pd.DataFrame,  # that reads from `part*.abc.parquet`
    'xyz': pd.DataFrame   # that reads from `part*.xyz.parquet`
}
</code></pre>
<p>I've tried doing some glob reading</p>
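A sketch of the glob-and-group idea, splitting the job id out of each file name; the commented `pd.concat` line is a hypothetical follow-up, left out of the runnable part to keep the sketch dependency-free:

```python
from collections import defaultdict
from pathlib import Path

def group_parts(dataset_dir):
    """Map job id -> its part files, e.g. 'part0.abc.parquet' -> key 'abc'."""
    groups = defaultdict(list)
    for path in sorted(Path(dataset_dir).glob("part*.parquet")):
        job_id = path.name.split(".")[1]  # ['part0', 'abc', 'parquet'][1]
        groups[job_id].append(path)
    return dict(groups)

# hypothetical follow-up with pandas:
# dfs = {job: pd.concat(map(pd.read_parquet, files))
#        for job, files in group_parts("my_dataset").items()}
```

`pandas.read_parquet` itself has no job-id grouping parameter as far as I know, so the grouping has to happen on the file names before the reads.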
|
<python><pandas><dataframe><parquet><partitioning>
|
2023-02-28 13:24:48
| 0
| 123,325
|
alvas
|
75,592,567
| 9,034,438
|
Django auth error on azure sql server, can't find object (table)
|
<p>I'm currently building a custom API auth path in Django REST Framework; the application is connected to Azure SQL Server.
The connection to the database works perfectly, the tables and migrations have been applied to the database, and the necessary tables have been filled with the necessary columns and data, including Django admin tables such as <code>auth_user</code>, <code>auth_user_groups</code>, <code>auth_group</code>, <code>auth_permission</code>, etc.</p>
<p>A <code>python3 manage.py startapp users</code> was run to customize my login class.</p>
<p>users/models.py :</p>
<pre><code>from django.conf import settings
from django.dispatch import receiver
from django.db.models.signals import post_save
from rest_framework.authtoken.models import Token
from django.contrib.auth.models import AbstractUser
from django.db import models

class User(AbstractUser):
    def __str__(self):
        return self.username

@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_auth_token(sender, instance=None, created=False, **kwargs):
    if created:
        Token.objects.create(user=instance)
</code></pre>
<p>The user model is serialized</p>
<p>users/serializers.py :</p>
<pre><code>from rest_framework import serializers
from .models import User
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('id', 'username', 'first_name', 'last_name', 'email')
</code></pre>
<p>users/views.py :</p>
<pre><code>from rest_framework import viewsets
from rest_framework.permissions import IsAuthenticated
from rest_framework.authtoken.views import ObtainAuthToken
from rest_framework.authtoken.models import Token
from rest_framework.response import Response
from .models import User
from .serializers import UserSerializer
class UserViewSet(viewsets.ModelViewSet):
    queryset = User.objects.all()
    serializer_class = UserSerializer
    permission_classes = []

class UserLogIn(ObtainAuthToken):
    def post(self, request, *args, **kwargs):
        serializer = self.serializer_class(data=request.data,
                                           context={'request': request})
        serializer.is_valid(raise_exception=True)
        user = serializer.validated_data['user']
        token = Token.objects.get(user=user)
        return Response({
            'token': token.key,
            'id': user.pk,
            'username': user.username
        })
</code></pre>
<p>urls :</p>
<pre><code>path('api-user-login/', UserLogIn.as_view()),
</code></pre>
<p>settings :</p>
<pre><code>AUTH_USER_MODEL = 'users.User'
</code></pre>
<p>On POST to the auth login path I get a</p>
<pre><code>ProgrammingError at /api-user-login/
('42S02', "[42S02] [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Invalid object name 'users_user'. (208) (SQLExecDirectW)")
</code></pre>
<blockquote>
<p>Request Method: POST
Request URL: http://localhost:8000/api-user-login/
Django Version: 4.1.7
Exception Type: ProgrammingError
Exception Value: ('42S02', "[42S02] [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Invalid object name 'users_user'. (208) (SQLExecDirectW)")
Raised during: users.views.UserLogIn</p>
</blockquote>
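If it helps, the usual cause of `Invalid object name 'users_user'` is that the table for the new `users` app was never created because no migration exists for it. A sketch of the typical fix, assuming the app label is `users`:

```shell
# generate and apply migrations for the custom user app
python manage.py makemigrations users
python manage.py migrate
```

One caveat worth hedging: Django's documentation warns that switching `AUTH_USER_MODEL` *after* the initial `migrate` has already created `auth_user` generally requires recreating the database (or careful manual schema surgery), since the existing migrations reference the old user model.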
|
<python><sql-server><django><django-rest-framework>
|
2023-02-28 13:12:22
| 0
| 452
|
Dezz H
|
75,592,412
| 6,352,677
|
Inherited class methods with different signatures in python
|
<p>Let's consider the following code snippet:</p>
<pre class="lang-py prettyprint-override"><code>class A:
    def foo(self) -> None:
        raise NotImplementedError

class B(A):
    def foo(self) -> None:
        print("I'm B(A)")

class C(A):
    def foo(self, x: int) -> None:
        print(f"I'm C(A), x={x}")

bob = B()
bob.foo()
charly = C()
charly.foo(4)
</code></pre>
<p>When ran, it provides the expected result:</p>
<pre><code>I'm B(A)
I'm C(A), x=4
</code></pre>
<p>However mypy is raising an error:</p>
<pre><code>$ mypy subclass.py
subclass.py:10: error: Signature of « foo » incompatible with supertype « A » [override]
subclass.py:10: note:      Superclass:
subclass.py:10: note:          def foo(self) → None
subclass.py:10: note:      Subclass:
subclass.py:10: note:          def foo(self, x: int) → None
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>Is that a bad design? What would be a good alternative, other than simply removing the abstract method from the parent class?</p>
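One hedged alternative that keeps mypy happy without dropping the abstract method: give the base class (and therefore both overrides) the same widened signature, with a default for callers that don't need `x`. This is a design choice, not the only fix; overloads or splitting the hierarchy are others. The default value `0` below is an assumption:

```python
class A:
    def foo(self, x: int = 0) -> None:
        raise NotImplementedError

class B(A):
    def foo(self, x: int = 0) -> None:  # B simply ignores x
        print("I'm B(A)")

class C(A):
    def foo(self, x: int = 0) -> None:
        print(f"I'm C(A), x={x}")

B().foo()
C().foo(4)
```

The mypy error exists because the original `C.foo` violates substitutability: code holding an `A` may call `foo()` with no argument, which would crash on a `C`. Widening the base signature restores that guarantee.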
|
<python><class><abstract><mypy><python-typing>
|
2023-02-28 13:00:23
| 2
| 2,096
|
Colas
|
75,592,279
| 4,478,466
|
Is it possible to get object in Python from a string name?
|
<p>I'm trying to create dynamic filtering in SQLAlchemy with relationship entities.
I have the following SQLAlchemy models:</p>
<pre><code>class Owner(Model):
    __tablename__ = 'Owners'

    Id = Column(Integer, primary_key=True, autoincrement=True)
    Name = Column(String(100))


class Vehicle(Model):
    __tablename__ = 'Vehicles'

    Id = Column(Integer, primary_key=True, autoincrement=True)
    LicensePlate = Column(String(20))
    Model = Column(String(20))
    Owner_id = Column(Integer, ForeignKey('Owners.Id'))
    Owner = relationship("Owner")
</code></pre>
<p>So let's say I have the following filters:</p>
<pre><code>filters = [("LicensePlate", "TEST"), ("Model", "TEST")]
</code></pre>
<p>This would be equivalent to the following query:</p>
<pre><code>Vehicle.query.filter(Vehicle.LicensePlate == "TEST", Vehicle.Model == "TEST")
</code></pre>
<p>I then construct my query with:</p>
<pre><code>query_filters = []
for f in filters:
    query_filters.append(getattr(Vehicle, f[0]) == f[1])

Vehicle.query.filter(*query_filters)
</code></pre>
<p>But now let's say I would also like to filter my Vehicle model by the Owner property.
In order to achieve that I can construct the following query:</p>
<pre><code>Vehicle.query.filter(Vehicle.LicensePlate == "TEST", Vehicle.Model == "TEST").join(Vehicle.Owner).filter(Owner.Name == "John")
</code></pre>
<p>And I would like my dynamic filters to be constructed as:</p>
<pre><code>filters = [("LicensePlate", "TEST"), ("Model", "TEST"), ("Owner.Name", "John")]
</code></pre>
<p>I could then easily split the property name and use it to perform the join and create the filter.</p>
<pre><code>property_type, property_attr = "Owner.Name".split(".")
</code></pre>
<p>However, I have not found a way to use reflection in Python to get the object from a string name, as can be done for the property by using <code>getattr</code>.</p>
<p>The join part is not a problem, since Owner is an attribute on the Vehicle:</p>
<pre><code>.join(getattr(Vehicle, property_type))
</code></pre>
<p>So for achieving the same as:</p>
<pre><code>.filter(Owner.Name == "John")
</code></pre>
<p>I would need to somehow get the object from its name and then use it with the attribute name to construct the filter.</p>
<pre><code>my_obj = ? property_type ?
.filter(getattr(my_obj, property_attr) == "John")
</code></pre>
<p>I have temporarily solved this by creating a custom dictionary that has the mapping between names and objects:</p>
<pre><code>mapping = {
"Owner": Owner
}
</code></pre>
<p>So I can then construct the filter as:</p>
<pre><code>property_type, property_attr = "Owner.Name".split(".")
.filter(getattr(mapping[property_type], property_attr) == "John")
</code></pre>
<p>However this dictionary is something that I have to maintain manually so I would prefer a solution that would use reflection to get the object from its name.</p>
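<p>One way to avoid the hand-maintained dictionary (an editor's sketch, not from the question) is to read the related class off the relationship attribute itself: <code>getattr(Vehicle, "Owner").property.mapper.class_</code> resolves to the <code>Owner</code> class, so the mapping is derived from the models instead of being duplicated.</p>

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import configure_mappers, declarative_base, relationship

Base = declarative_base()

class Owner(Base):
    __tablename__ = 'Owners'
    Id = Column(Integer, primary_key=True)
    Name = Column(String(100))

class Vehicle(Base):
    __tablename__ = 'Vehicles'
    Id = Column(Integer, primary_key=True)
    LicensePlate = Column(String(20))
    Owner_id = Column(Integer, ForeignKey('Owners.Id'))
    Owner = relationship("Owner")

configure_mappers()

def resolve(model, dotted):
    # "Owner.Name" -> (Owner class, "Name"): the related class is read
    # from the relationship's mapper, no name->class dict required.
    rel_name, attr_name = dotted.split(".")
    related = getattr(model, rel_name).property.mapper.class_
    return related, attr_name

cls, attr = resolve(Vehicle, "Owner.Name")
criterion = getattr(cls, attr) == "John"
```

<p><code>criterion</code> can then be handed to <code>.filter()</code> after the corresponding <code>.join()</code>, exactly as in the question's manual version.</p>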
|
<python><sqlalchemy><flask-sqlalchemy>
|
2023-02-28 12:47:33
| 1
| 658
|
Denis Vitez
|
75,592,276
| 827,281
|
typing definitions without circular imports
|
<p>I have a package where I would like to</p>
<ol>
<li>use typing for code-hinting</li>
<li>export some of the typing variables for users of the package to allow them to use typing as well, but simultaneously use them in the package it-self</li>
</ol>
<p>My main problem is that this will often result in circular imports.</p>
<p>Consider a code:</p>
<p><code>package/foo.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Union

class A:
    def hello(self, arg: Union[str, dict]) -> Any:
        ...
</code></pre>
<p><code>package/bar.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Union

from . import foo
# I can't do: from package.typing import BarBType

class B:
    def hello(self, arg: Union[foo.A, str, dict]) -> Any:
        ...
</code></pre>
<p>I would then like to have a typing subpackage:</p>
<p><code>package/typing.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union

from package import foo
from package import bar

BarBType = Union[foo.A, str, dict]
FooBarType = Union[foo.A, bar.B]
</code></pre>
<p>However, this forces a duplicate definition of <code>BarBType</code>, i.e. I cannot use it from within <code>package.bar</code> itself?</p>
<p>How should one structure the files and typing information in packages which exposes complex typing constructs that should be used locally, but also exposed globally?<br />
I would prefer if everything could be in-line typed, i.e. no stub files (<code>.pyi</code>). It would also be great if documentation utilities could use this for improving the final document.</p>
<p>I have thought about exposing these where they are used, i.e. in <code>bar.py</code> define the types, then let <code>typing.py</code> import everything that it will expose. This just seems a bit counter-intuitive...</p>
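<p>One common pattern (an editor's sketch, not from the question) combines postponed annotation evaluation with <code>typing.TYPE_CHECKING</code>: a module can then annotate with names from <code>package.typing</code> without importing them at runtime, which breaks the cycle. The <code>package.typing</code> import below mirrors the question's layout and is hypothetical.</p>

```python
from __future__ import annotations  # annotations become lazy strings

from typing import TYPE_CHECKING, Any, Union

if TYPE_CHECKING:
    # Executed only by type checkers, so there is no runtime
    # circular import even though package.typing imports this module.
    from package.typing import BarBType

class A:
    def hello(self, arg: Union[str, dict]) -> Any:
        return arg

class B:
    def hello(self, arg: BarBType) -> Any:  # resolved by the checker only
        return arg

assert B().hello("hi") == "hi"
```

<p>At runtime the <code>BarBType</code> annotation stays an unevaluated string, so the module imports cleanly even though the name only exists for the type checker.</p>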
|
<python><python-typing>
|
2023-02-28 12:47:22
| 0
| 792
|
nickpapior
|
75,592,229
| 7,479,675
|
How to copy text with formatting using Python + Selenium?
|
<p>I'm using Python and Selenium and I need to copy text from a webpage to the OS Windows clipboard with formatting.</p>
<p>For example, when you copy text from a webpage by pressing the Ctrl+C key combination, and then paste it into Microsoft Word using the Ctrl+V key combination, you can see that the text is copied with formatting.</p>
<p>I want to achieve the same result with a Python + Selenium script that navigate to a website, and copy the formatted text to the clipboard. Then, I want manually open Microsoft Word, press Ctrl+V, and paste the text with its formatting.</p>
<p>Here's an example of my code, but it only copies the styles and not the formatting:</p>
<pre><code>import time
import pyperclip
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
# Configure Chrome options
options = Options()
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
# Start Chrome driver
driver = webdriver.Chrome(options=options)
# Go to the webpage
driver.get("https://example.com/")
time.sleep(2) # wait for page to load
# Find the element containing the text
element = driver.find_element("xpath", "/html/body/div")
text = element.get_attribute("innerHTML")
# ----> OR: text = driver.find_element("tag name", "body").text # also does not copy from formatting
# Copy text to clipboard
pyperclip.copy(text)
# Quit driver
driver.quit()
</code></pre>
<p>Question: How can I copy text with formatting using Selenium?</p>
<p>Note: I know that I can simulate keystrokes</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
from time import sleep
# Create a new instance of the Chrome driver
driver = webdriver.Chrome()
# Navigate to example.com
driver.get("https://en.wikipedia.org/wiki/List_of_lists_of_lists")
# Find the element that contains the text to copy
element = driver.find_element(By.XPATH, '//*[@id="mw-content-text"]/div[1]/ul[4]')
# Select all the text on the page and copy it to the clipboard
actions = ActionChains(driver)
actions.move_to_element(element)
ActionChains(driver).click(element).key_down(Keys.CONTROL).send_keys("a").key_up(Keys.CONTROL).key_down(Keys.CONTROL).send_keys("c").key_up(Keys.CONTROL).perform()
# Quit the driver
driver.quit()
</code></pre>
<p>but this option is not suitable, because I do not know how to select a separate element and not the whole document</p>
|
<python><selenium-webdriver>
|
2023-02-28 12:43:21
| 1
| 392
|
Oleksandr Myronchuk
|
75,592,052
| 2,776,885
|
Azure Synapse: Connect to Apache spark pool using a local Python script
|
<p>I am working with Azure Synapse. Here, I can create a notebook, select one of the Apache Spark Pools and execute the following code:</p>
<pre><code>%%pyspark
df = spark.sql("SELECT * FROM DataBaseName")
df.show(10)
</code></pre>
<p>I have the use-case where I need to be able execute code from a local Python script. How do I create a connection with a specific pool, push the code to the pool for execution and if applicable get the results?</p>
<p>As my local editor I am using Spyder.</p>
|
<python><azure><apache-spark><azure-synapse>
|
2023-02-28 12:26:01
| 0
| 4,040
|
The Dude
|
75,591,563
| 6,930,340
|
Mocking datetime.now using pytest-mock
|
<p>I have a function <code>func_to_mock</code> in module <code>module_to_mock.py</code>. My unit test is located in <code>test_func_to_mock.py</code></p>
<p>I am trying to mock <code>datetime.datetime.now</code>, however, I am struggling. I get the error <code>TypeError: cannot set 'now' attribute of immutable type 'datetime.datetime'</code>.</p>
<p>What am I missing here?</p>
<p>module_to_mock.py:</p>
<pre><code># module_to_mock.py
import datetime
def func_to_mock() -> datetime.datetime:
return datetime.datetime.now()
</code></pre>
<p>My unit test:</p>
<pre><code># test_func_to_mock.py
import datetime
from pytest_mock import MockFixture
import port_bt.module_to_mock
def test_func_to_mock(mocker: MockFixture) -> None:
# This will error with:
# TypeError: cannot set 'now' attribute of immutable type 'datetime.datetime'
mocked_date = mocker.patch(
"datetime.datetime.now", return_value=datetime.datetime(2023, 1, 31, 0, 0, 0)
)
assert port_bt.module_to_mock.func_to_mock() == mocked_date
</code></pre>
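<p><code>datetime.datetime</code> is an immutable C type, so its <code>now</code> attribute cannot be patched in place; the usual fix is to patch the name where the module under test looks it up (<code>module_to_mock.datetime</code>), e.g. <code>mocker.patch("port_bt.module_to_mock.datetime")</code> with pytest-mock. A self-contained sketch using <code>unittest.mock</code> (the module is built inline so the example runs anywhere; in a real test you would simply import your module):</p>

```python
import datetime as real_datetime
import sys
import types
from unittest import mock

# Stand-in for the question's module_to_mock, constructed in memory.
mod = types.ModuleType("module_to_mock")
mod.datetime = real_datetime
exec("def func_to_mock():\n    return datetime.datetime.now()", mod.__dict__)
sys.modules["module_to_mock"] = mod

frozen = real_datetime.datetime(2023, 1, 31)

# Patch module_to_mock.datetime, not the immutable datetime.datetime type.
with mock.patch.object(mod, "datetime") as fake:
    fake.datetime.now.return_value = frozen
    assert mod.func_to_mock() == frozen
```

<p>Outside the <code>with</code> block the module sees the real <code>datetime</code> again, so other tests are unaffected.</p>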
|
<python><mocking><pytest><pytest-mock>
|
2023-02-28 11:40:18
| 1
| 5,167
|
Andi
|
75,591,496
| 20,920,790
|
Why doesn't autopep8 format on save?
|
<p>Autopep8 isn't working at all.</p>
<p>Here's my setting:</p>
<pre><code>{
"editor.defaultFormatter": "ms-python.autopep8",
"editor.formatOnSave": true,
"editor.formatOnPaste": true,
"editor.formatOnType": true,
"autopep8.showNotifications": "always",
"indentRainbow.colorOnWhiteSpaceOnly": true,
"[python]": {
"editor.formatOnType": true
},
"workbench.colorTheme": "Monokai Dimmed",
"editor.codeActionsOnSave": {
},
"autopep8.args": [
"--in-place --aggressive --aggressive"
]
}
</code></pre>
<p>For example, I am using this code:</p>
<pre><code>import math, sys;
def example1():
####This is a long comment. This should be wrapped to fit within 72 characters.
some_tuple=( 1,2, 3,'a' );
some_variable={'long':'Long code lines should be wrapped within 79 characters.',
'other':[math.pi, 100,200,300,9876543210,'This is a long string that goes on'],
'more':{'inner':'This whole logical line should be wrapped.',some_tuple:[1,
20,300,40000,500000000,60000000000000000]}}
return (some_tuple, some_variable)
def example2(): return {'has_key() is deprecated':True}.has_key({'f':2}.has_key(''));
class Example3( object ):
def __init__ ( self, bar ):
#Comments should have a space after the hash.
if bar : bar+=1; bar=bar* bar ; return bar
else:
some_string = """
Indentation in multiline strings should not be touched.
Only actual code should be reindented.
"""
return (sys.path, some_string)
</code></pre>
<p>The only way I can use autopep8 is to import it as a module and run <code>autopep8.fix_code('''code example''')</code>, and even then I can't get the result that I want.
What am I doing wrong?</p>
<p>Thanks!</p>
<p>P. S. I'm using VS Code 1.75.1, Python 3.9.16, autopep8 v2022.2.0.</p>
<p>Expected result:</p>
<pre><code>import math
import sys
def example1():
# This is a long comment. This should be wrapped to fit within 72
# characters.
some_tuple = (1, 2, 3, 'a')
some_variable = {
'long': 'Long code lines should be wrapped within 79 characters.',
'other': [
math.pi,
100,
200,
300,
9876543210,
'This is a long string that goes on'],
'more': {
'inner': 'This whole logical line should be wrapped.',
some_tuple: [
1,
20,
300,
40000,
500000000,
60000000000000000]}}
return (some_tuple, some_variable)
def example2(): return ('' in {'f': 2}) in {'has_key() is deprecated': True}
class Example3(object):
def __init__(self, bar):
# Comments should have a space after the hash.
if bar:
bar += 1
bar = bar * bar
return bar
else:
some_string = """
Indentation in multiline strings should not be touched.
Only actual code should be reindented.
"""
return (sys.path, some_string)
</code></pre>
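<p>A likely culprit, added here as an editor's note (the question does not confirm it): the autopep8 VS Code extension expects each command-line flag as its own list element, and <code>--in-place</code> is handled by the extension itself, so the combined string <code>"--in-place --aggressive --aggressive"</code> is passed as one unrecognized argument. A sketch of the corrected setting:</p>

```json
"autopep8.args": [
    "--aggressive",
    "--aggressive"
]
```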
|
<python><visual-studio-code><autopep8>
|
2023-02-28 11:33:44
| 1
| 402
|
John Doe
|
75,591,413
| 12,320,370
|
Dictionary values sometimes containing multiple items
|
<p>I have a pandas data frame, and I would like to make a new column/s based on the dictionary values.</p>
<p>Here is my df and dictionary:</p>
<pre><code>data = ['One', 'Two', 'Three', 'Four']
df = pd.DataFrame(data, columns=['Count'])
dictionary = {'One':'Red', 'Two':['Red', 'Blue'], 'Three':'Green','Four':['Green','Red', 'Blue']}
</code></pre>
<p>This is the result I would like to achieve,</p>
<p><a href="https://i.sstatic.net/gWqlJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gWqlJ.png" alt="enter image description here" /></a></p>
<p>Preferably with blank fields instead of <strong>None</strong> values, does anybody know a way?</p>
<p>I tried the below:</p>
<pre><code>df = pd.DataFrame([(k, *v) for k, v in dictionary.items()])
df.columns = ['name'] + [f'n{x}' for x in df.columns[1:]]
df
</code></pre>
<p>However, for keys that do not have multiple values, it seems to split the actual string per letter over the columns like so:
<a href="https://i.sstatic.net/vne83.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vne83.png" alt="enter image description here" /></a></p>
<p>A solution where it maps the values to one columns separated with a delimiter (,) would also be helpful.</p>
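<p>One possible approach (an editor's sketch, not the only way): normalise every dictionary value to a list before expanding, which prevents single strings from being split per letter; joining with a comma gives the delimiter variant.</p>

```python
import pandas as pd

data = ['One', 'Two', 'Three', 'Four']
df = pd.DataFrame(data, columns=['Count'])
dictionary = {'One': 'Red', 'Two': ['Red', 'Blue'],
              'Three': 'Green', 'Four': ['Green', 'Red', 'Blue']}

# Wrap bare strings in a one-element list so expansion is per item,
# never per character.
as_lists = df['Count'].map(dictionary).apply(
    lambda v: v if isinstance(v, list) else [v])

# One column per value, with blanks instead of NaN for missing slots.
wide = df.join(pd.DataFrame(as_lists.tolist()).fillna(''))

# Delimiter variant: a single comma-separated column.
df['Colours'] = as_lists.str.join(', ')
```

<p>The extra columns in <code>wide</code> are named <code>0, 1, 2</code> by default and can be renamed as in the question's own snippet.</p>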
|
<python><pandas><dictionary><lambda><key-value>
|
2023-02-28 11:26:49
| 3
| 333
|
Nairda123
|
75,591,403
| 12,013,353
|
Cannot open pickle file because of missing module which created it
|
<p>I created a pickle file from a Python script that uses a custom module of mine, named <code>module_v9.py</code>. I have since made some insignificant modifications and saved the module as <code>module_v10.py</code>, and now when I try to open the pickle <code>.p</code> file I get the error "no module named module_v9". How does pickle know the name of the module that created the data, and why can't I just open this pickle file containing one single dictionary object? When I rename the module back to v9, it works.</p>
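<p>An editor's demonstration of what is happening: pickle serialises class instances <em>by reference</em>, recording the defining module's name in the stream (the dictionary presumably holds instances of classes defined in <code>module_v9</code>; a dict of only built-in values would unpickle without the module). The stand-in module below is built in memory so the sketch is self-contained; the alias trick at the end is one way to load old pickles after a rename.</p>

```python
import pickle
import sys
import types

# Stand-in for the question's module_v9.
mod = types.ModuleType("module_v9")
exec("class Thing:\n    pass", mod.__dict__)
sys.modules["module_v9"] = mod

blob = pickle.dumps({"item": mod.Thing()})
assert b"module_v9" in blob  # the defining module's name is in the stream

# Simulate the rename to module_v10: loading now fails...
del sys.modules["module_v9"]
sys.modules["module_v10"] = mod
try:
    pickle.loads(blob)
except ModuleNotFoundError:
    pass

# ...unless the old name is aliased back before loading.
sys.modules["module_v9"] = sys.modules["module_v10"]
restored = pickle.loads(blob)
assert isinstance(restored["item"], mod.Thing)
```

<p>With real files, the alias is a one-liner before <code>pickle.load</code>: <code>import module_v10; sys.modules['module_v9'] = module_v10</code>.</p>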
|
<python><pickle>
|
2023-02-28 11:25:54
| 1
| 364
|
Sjotroll
|
75,591,315
| 11,162,983
|
ModuleNotFoundError: No module named 'tensorflow'--Pycharm
|
<p>I installed TensorFlow with <code>pip install tensorflow==2.2.0</code>, following <a href="https://towardsdatascience.com/installing-tensorflow-gpu-in-ubuntu-20-04-4ee3ca4cb75d" rel="nofollow noreferrer">this guide</a>.</p>
<p>when I run these lines:</p>
<pre><code>import tensorflow as tf
tf.config.list_physical_devices("GPU")
</code></pre>
<p>From the terminal is ok or jupyter.</p>
<p>The output is:</p>
<pre><code>[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
</code></pre>
<p>But from <strong>Pycharm</strong> is not working.</p>
<p>I think that I need to change the <strong>python interpreter</strong>, but I do not know how to change that and which one will work.</p>
<p>Please, your help.</p>
|
<python><tensorflow><pycharm>
|
2023-02-28 11:18:59
| 0
| 987
|
Redhwan
|
75,591,314
| 3,482,266
|
Overwriting a method, during runtime, to raise an exception, on first pass only
|
<p>I want to rewrite, at runtime, the print method of the class below so that the first call raises an exception, but the second time it runs normally. I <strong>cannot</strong> change the code of the <code>Test</code> class. Assume it's in a file I don't have access to, and I cannot mock it (nor use a deepcopy, since in reality I have socket objects which are not picklable).</p>
<pre><code>class Test:
def __init__(self,a,b):
self.a = a
self.b = b
def print(self):
print(f"a-> {self.a}, and b-> {self.b}")
</code></pre>
<p>I tried the following (doesn't work, with infinite recursion):</p>
<pre><code>test_class = Test(1,2)
test_class.num_raise = 0
copy_test = test_class
def raise_exception():
print("Inside save_clusters")
if test_class.num_raise ==0:
test_class.num_raise +=1
raise Exception("save_clusters method exception")
elif test_class.num_raise ==1:
return copy_test.print()
test_class.print = raise_exception
try:
test_class.print() # should raise an exception
except:
test_class.print() # should print as normal
</code></pre>
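<p>An editor's sketch of one way to get the intended behaviour: the original code recurses because <code>copy_test</code> aliases <code>test_class</code>, so <code>copy_test.print()</code> finds the patched function again. Capturing the original <em>bound method</em> before shadowing it with an instance attribute avoids that (the <code>Test</code> class is copied from the question; the other names are illustrative).</p>

```python
class Test:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def print(self):
        print(f"a-> {self.a}, and b-> {self.b}")

test_obj = Test(1, 2)
original_print = test_obj.print  # bound method, resolved on the class
state = {"raised": False}

def raise_once():
    # First call raises; later calls delegate to the captured original,
    # which still refers to the untouched class-level method.
    if not state["raised"]:
        state["raised"] = True
        raise Exception("print method exception")
    return original_print()

test_obj.print = raise_once  # instance attribute shadows the class method

try:
    test_obj.print()   # raises
except Exception:
    test_obj.print()   # prints normally
```

<p>Deleting the instance attribute later (<code>del test_obj.print</code>) restores the class method entirely.</p>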
|
<python>
|
2023-02-28 11:18:33
| 1
| 1,608
|
An old man in the sea.
|
75,591,296
| 15,759,796
|
'EntryPoints' object has no attribute 'get' during running pre-commit
|
<p>I am getting the error <code>AttributeError: 'EntryPoints' object has no attribute 'get'</code> from importlib_metadata when pre-commit runs, even though I have already installed <code>importlib_metadata==4.13.0</code> as suggested by the StackOverflow answer linked below.
What am I doing wrong here?</p>
<p>This is my pre-commit config yaml file that I am using -</p>
<pre><code>exclude: 'settings.py'
fail_fast: true
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks.git
rev: v4.0.1
hooks:
- id: trailing-whitespace
- id: check-yaml
- id: check-json
- id: check-docstring-first
- id: requirements-txt-fixer
- id: debug-statements
- id: check-toml
- id: pretty-format-json
args: [--autofix]
- id: no-commit-to-branch
args: [--branch, develop, --branch, master]
- repo: https://github.com/pycqa/isort
rev: 5.11.5
hooks:
- id: isort
name: isort
exclude: ^site-pacakges/
- repo: https://github.com/asottile/pyupgrade
rev: v2.26.0
hooks:
- id: pyupgrade
exclude: ^site-pacakges/
args: ["--py37-plus"]
- repo: https://github.com/psf/black
rev: 23.1.0
hooks:
- id: black
exclude: ^site-pacakges/
language_version: python3.7
- repo: https://github.com/PyCQA/flake8
rev: 3.9.2
hooks:
- id: flake8
additional_dependencies: [flake8-typing-imports==1.10.0]
exclude: ^site-pacakges/
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.910
hooks:
- id: mypy
name: Mypy typing checks
args: ["--config-file=pyproject.toml"]
exclude: ^site-pacakges/
- repo: local
hooks:
- id: tests
name: run tests
entry: pytest
pass_filenames: false
language: system
</code></pre>
<p><a href="https://stackoverflow.com/questions/73929564/entrypoints-object-has-no-attribute-get-digital-ocean">Possible duplicate - 'EntryPoints' object has no attribute 'get'</a></p>
<p>Error generating from flake8</p>
<pre><code> File "/Users/X/.cache/pre-commit/repo_fr5yg6d/py_env-python3.7/lib/python3.7/site-packages/flake8/plugins/manager.py", line 254, in _load_entrypoint_plugins
eps = importlib_metadata.entry_points().get(self.namespace, ())
AttributeError: 'EntryPoints' object has no attribute 'get
</code></pre>
|
<python><django><pre-commit>
|
2023-02-28 11:16:54
| 1
| 477
|
raj-kapil
|
75,591,264
| 5,404,647
|
Calling module in a folder in docker
|
<p>I have the following project structure</p>
<pre><code>.
├── app
│   ├── main.py
│   ├── foo.py
│   └── bar.py
├── Dockerfile
├── input
│   └── data
├── output
├── README.md
├── requirements.txt
└── rest_api.py
<p>And my Dockerfile</p>
<pre><code>FROM python:3.9-slim
# install require OS packages
# we use apt-get instead of apt because of the stable CI
RUN apt-get update && \
apt-get install --no-install-recommends -y build-essential gcc git && \
apt-get clean && rm -rf /var/lib/apt/lists/* && \
pip3 install --upgrade pip
# define and create working directory
WORKDIR /usr/application
COPY /requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY / .
ENV PYTHONUNBUFFERED=TRUE
CMD ["python3", "rest_api.py", "--environment-config-file", "/..."]
</code></pre>
<p>The file <code>rest_api.py</code> imports the submodule with <code>from app.main import main</code>,
which seems to work. However, in <code>main.py</code>, using the syntax <code>from foo import *</code>
raises <code>ModuleNotFoundError</code>.</p>
<p>Being in the same folder, why is this happening and how can I resolve the issue?</p>
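<p>The likely cause (an editor's assumption): inside a package, <code>from foo import *</code> is an <em>absolute</em> import, and when <code>rest_api.py</code> runs it is <code>/usr/application</code>, not <code>/usr/application/app</code>, that is on <code>sys.path</code>, so <code>foo</code> is not importable as a top-level name. An explicit relative import, <code>from .foo import *</code>, resolves within the package. A self-contained reproduction of the working layout:</p>

```python
import pathlib
import subprocess
import sys
import tempfile

# Recreate the question's layout in a temp dir; the fix is the explicit
# relative import inside app/main.py.
root = pathlib.Path(tempfile.mkdtemp())
app = root / "app"
app.mkdir()
(app / "__init__.py").write_text("")
(app / "foo.py").write_text("VALUE = 42\n")
(app / "main.py").write_text(
    "from .foo import VALUE  # not: from foo import VALUE\n"
    "def main():\n"
    "    return VALUE\n"
)
(root / "rest_api.py").write_text("from app.main import main\nprint(main())\n")

result = subprocess.run([sys.executable, "rest_api.py"],
                        cwd=root, capture_output=True, text=True)
assert result.stdout.strip() == "42"
```

<p>Nothing here is Docker-specific: the same layout fails or works identically on the host, which makes the import rules easy to test before rebuilding the image.</p>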
|
<python><docker>
|
2023-02-28 11:14:18
| 0
| 622
|
Norhther
|
75,591,225
| 11,546,773
|
Dask delayed data mismatch
|
<p>I wish to combine many dataframes into 1 dataframe with dask. However when I try to read those dataframes with dd.from_delayed(parts, meta=types) I get the error <code>Metadata mismatch found in 'from_delayed'</code>.</p>
<p><strong>The full error:</strong></p>
<pre><code>Metadata mismatch found in `from_delayed`.
Partition type: `pandas.core.frame.DataFrame`
+--------+-------+----------+
| Column | Found | Expected |
+--------+-------+----------+
| 'col3' | - | object |
+--------+-------+----------+
</code></pre>
<p>I know this is because the dataframes I wish to combine do not have the same columns. Data that not exists in a column should be marked as NA. Setting verify_meta=False will silence these errors, but will lead to issues downstream since some of the partitions don't match the metadata.</p>
<p><strong>The code:</strong></p>
<pre><code>import pandas as pd
import dask.dataframe as dd
from dask.diagnostics import ProgressBar
from dask import delayed
import os
def dict_to_dataframe(dict):
return pd.DataFrame.from_dict(dict)
data_a = {'col1': [[1, 2, 3, 4], [5, 6, 7, 8]], 'col2': [[9, 10, 11, 12], [13, 14, 15, 16]]}
data_b = {'col1': [[17, 18, 19, 20], [21, 22, 23, 24]], 'col3': [[25, 26, 27, 28], [29, 30, 31, 32]]}
parts = [delayed(dict_to_dataframe)(fn) for fn in [data_a, data_b]]
types = pd.DataFrame(columns=['col1', 'col2', 'col3'], dtype=object)
ddf_result = dd.from_delayed(parts, meta=types)
print()
print('Write to file')
file_path = os.path.join('test.hdf')
with ProgressBar():
ddf_result.compute().sort_index().to_hdf(file_path, key=type, format='table')
written = dd.read_hdf(file_path, key=type)
</code></pre>
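<p>A possible fix (an editor's sketch, not from the question): make every partition carry the full column set by reindexing inside <code>dict_to_dataframe</code>, so each delayed part matches the meta handed to <code>from_delayed</code>. Shown here with pandas only; the same function would be wrapped in <code>delayed</code> exactly as in the question.</p>

```python
import pandas as pd

ALL_COLUMNS = ['col1', 'col2', 'col3']

def dict_to_dataframe(d):
    # Reindex to the union of columns; missing ones are filled with NaN,
    # so every partition agrees with the meta passed to from_delayed.
    return pd.DataFrame.from_dict(d).reindex(columns=ALL_COLUMNS)

data_a = {'col1': [[1, 2], [5, 6]], 'col2': [[9, 10], [13, 14]]}
data_b = {'col1': [[17, 18], [21, 22]], 'col3': [[25, 26], [29, 30]]}

parts = [dict_to_dataframe(d) for d in (data_a, data_b)]
assert all(list(p.columns) == ALL_COLUMNS for p in parts)
assert parts[0]['col3'].isna().all() and parts[1]['col2'].isna().all()
```

<p>With uniform partitions, <code>verify_meta</code> can stay at its default and downstream operations see consistent schemas.</p>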
|
<python><pandas><dask><dask-dataframe><dask-delayed>
|
2023-02-28 11:11:13
| 2
| 388
|
Sam
|
75,591,121
| 5,224,881
|
Updating label for specific version of a model in Model registry in Vertex AI
|
<p>I have a model with several different versions in model registry in Vertex AI:
<a href="https://i.sstatic.net/pBn4T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBn4T.png" alt="enter image description here" /></a></p>
<p>I want to change labels only for specific version programatically, using the python SDK, specifically the python-aiplatform <a href="https://github.com/googleapis/python-aiplatform/blob/v1.21.0/google/cloud/aiplatform/models.py" rel="nofollow noreferrer">https://github.com/googleapis/python-aiplatform/blob/v1.21.0/google/cloud/aiplatform/models.py</a>.
When I created model instances of specific versions, all versions still got updated, it seems the version is ignored. I went through the source code, but did not find how to update the labels just for the specific version.
I tried this:</p>
<pre><code>model6 = registry.get_model(6)
model6.version_id, model6
>>> ('6',
<google.cloud.aiplatform.models.Model object at 0x7f5bb81d9f40>
resource name: projects/981769406988/locations/europe-west1/models/3411204038849462272)
model6.update(labels={"version":"something6"})
model4 = registry.get_model(4)
model4.version_id, model4
>>> ('4',
<google.cloud.aiplatform.models.Model object at 0x7f5bb82287f0>
resource name: projects/981769406988/locations/europe-west1/models/3411204038849462272)
model4.update(labels={"version":"something4"})
</code></pre>
<p>and the result was<a href="https://i.sstatic.net/IKy7P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IKy7P.png" alt="enter image description here" /></a>
clearly, the version6 got replaced.</p>
|
<python><google-cloud-vertex-ai>
|
2023-02-28 11:00:51
| 1
| 1,814
|
Matěj Račinský
|
75,591,115
| 9,479,925
|
How to work with .mdf extension files using python?
|
<p>I have a file with an extension .mdf in a path as below.</p>
<pre><code>C:\A\B\sql\MP_000001.mdf;
</code></pre>
<p>I have installed pyodbc drivers as</p>
<p><a href="https://i.sstatic.net/Sz3e4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Sz3e4.png" alt="enter image description here" /></a></p>
<p>To connect to the said database i have created a connection string as.</p>
<pre><code>conn_str = (
r'DRIVER={SQL Server};'
r'SERVER=(local)\SQLEXPRESS;'
r'AttachDbFileName=C:\A\B\sql\MP_000001.mdf;'
r'Trusted_Connection=yes;'
)
cnxn = pyodbc.connect(conn_str)
</code></pre>
<p>It says an error as:</p>
<pre><code>OperationalError: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver][DBMSLPCN]SQL Server does not exist or access denied. (17) (SQLDriverConnect); [08001] [Microsoft][ODBC SQL Server Driver][DBMSLPCN]ConnectionOpen (Connect()). (123)')
</code></pre>
<p>How to specify connection parameters to open the .mdf file ?</p>
|
<python><sql-server><sqlalchemy><pyodbc>
|
2023-02-28 11:00:34
| 0
| 1,518
|
myamulla_ciencia
|
75,591,027
| 11,328,614
|
Annotating function argument accepting an instance of a class decorated with @define
|
<p>I'm using Python's <code>attrs</code> package to <code>@define</code> some classes containing a few members with little logic, for isolation purposes.
However, when I run the following stripped-down example:</p>
<pre><code>from attrs import define
@define
class DefTest:
attr1: int = 123
attr2: int = 456
attr3: str = "abcdef"
x = DefTest()
print(type(x))
</code></pre>
<p>it just outputs <code><class '__main__.DefTest'></code>, with no hint of which type it is derived from.
Even <code>type(x).mro()</code> outputs <code>[<class '__main__.DefTest'>, <class 'object'>]</code>, with no reference to some internally defined <code>attrs</code> class.</p>
<p>Now, imagine a function which takes an <code>attrs</code> class instance and performs some action on it. How can I add a proper type annotation, instead of the placeholder?</p>
<pre class="lang-py prettyprint-override"><code>from attrs import asdict, define
import json
def asjson(attrsclass: "DefineDecoratedClass"):
return json.dumps(asdict(attrsclass))
</code></pre>
<p>I tried annotating using <code>attrs.AttrsInstance</code>, which seems to be wrong from the linter's perspective.
Any idea?</p>
|
<python><python-typing><python-attrs>
|
2023-02-28 10:53:18
| 1
| 1,132
|
WΓΆr Du Schnaffzig
|
75,590,749
| 19,328,707
|
Python / MySQL - Python INSERT INTO not inserting data
|
<p>I'm testing a combination with Python and MySQL. For this example I want to insert data into a MySQL Database from my Python code. The problem I'm facing now is that there is no actual data inserted into the table when I run the SQL command via Python, but it works when I run it via the MySQL Command Line Client.</p>
<p>The DB name is <code>sha</code>.</p>
<p>The table was created as follows:</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TABLE sha_data (string varchar(255), sha256 char(64))
</code></pre>
<p>The Python code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import mysql.connector
def sql_run_command(sql, val):
try:
cursor.execute(sql, val)
result = cursor.fetchall()
except Exception as e:
print(f'Error on SQL command | {sql} | {e}')
return result
# Database connect
db = mysql.connector.connect(
host="localhost",
user="root",
password="1234",
database="sha"
)
# Cursor creation
cursor = db.cursor()
# Command & variables
sql = "INSERT INTO sha_data (string, sha256) VALUES (%s, %s)"
val = ("1", "6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b")
print(sql_run_command(sql, val))
# Connection close
db.close()
</code></pre>
<p>If I run this code the return is <code>[]</code>, and if I check it with <code>SELECT * FROM sha_data</code> there is no data.</p>
<p>I'm not quite sure where the problem is sitting. I would guess it's a simple syntax error.</p>
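<p>Two likely issues, added as an editor's note (assumptions the question does not confirm): <code>mysql.connector</code> does not autocommit by default, so the INSERT is rolled back unless <code>db.commit()</code> is called, and <code>fetchall()</code> after an INSERT has no result set to return (in mysql.connector it raises "No result set to fetch from"). The commit pattern, demonstrated with sqlite3 so it runs without a MySQL server (same DB-API 2.0 shape; MySQL uses <code>%s</code> placeholders instead of <code>?</code>):</p>

```python
import sqlite3

db = sqlite3.connect(":memory:")
cursor = db.cursor()
cursor.execute("CREATE TABLE sha_data (string TEXT, sha256 TEXT)")

sql = "INSERT INTO sha_data (string, sha256) VALUES (?, ?)"  # %s in MySQL
val = ("1", "6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b")
cursor.execute(sql, val)
db.commit()  # without this, the INSERT is rolled back on close

cursor.execute("SELECT * FROM sha_data")
assert len(cursor.fetchall()) == 1
```

<p>In the original function, the fix would be to call <code>db.commit()</code> after the execute and only <code>fetchall()</code> for statements that produce rows.</p>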
|
<python><mysql>
|
2023-02-28 10:28:03
| 1
| 326
|
LiiVion
|
75,590,659
| 4,233,642
|
Set figure options for Python with knitr
|
<p>I have a <code>knitr</code> document (either Rmd or Rnw) in which I want to generate some output with R and some with Python. However, I'm not able to change the figure options that are being used in Python.</p>
<p>Specifically, if I use <code>knitr::opts_chunk$set(fig.width = ..., fig.height = ...)</code> <strong>before</strong> the first Python chunk, then these options are used for <strong>all</strong> subsequent Python plots. But I don't seem to be able to change the figure options afterwards in order to generate different plots with different figure options.</p>
<p>In the R/Markdown document (<code>py.Rmd</code>) below I include a minimal example. After the Python setup chunk neither the explicit <code>knitr::opts_chunk$set()</code> nor the options of the individual <code>line</code> and <code>bar</code> chunks are honored.</p>
<pre><code>---
title: 'Figure options for Python with knitr'
output: html_document
---
These figure options work:
```{r opts}
knitr::opts_chunk$set(fig.width = 6, fig.height = 5)
```
Python setup works:
```{python setup}
import pandas as pd
from matplotlib import pyplot as plt
d = pd.DataFrame({"x": [1, 2, 3, 4, 5], "y": [1, 3, 2, 5, 4]})
```
Plots work but all figure options are ignored:
```{r opts2}
knitr::opts_chunk$set(fig.width = 2, fig.height = 6)
```
```{python line, fig.width = 2, fig.height = 6}
d.plot.line(x = 'x', y = 'y')
plt.show()
```
```{python bar, fig.width = 6, fig.height = 2}
d.plot.bar(x = 'x', y = 'y')
plt.show()
```
</code></pre>
<p>Using <code>rmarkdown::render("py.Rmd")</code> in R 4.2.1 with <code>knitr</code> 1.42, <code>rmarkdown</code> 2.20 and <code>reticulate</code> 1.28 I get the following output where both figures are 6 x 5 and not 2 x 6 or 6 x 2.</p>
<p><a href="https://i.sstatic.net/B0cb1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B0cb1.png" alt="Rendered py.Rmd" /></a></p>
|
<python><r><matplotlib><r-markdown><knitr>
|
2023-02-28 10:19:47
| 1
| 18,083
|
Achim Zeileis
|
75,590,618
| 9,183,027
|
PyGithub: How to create a branch off the master and check out?
|
<p>How can I create a new branch off the master and check out the branch?</p>
<p>The documentation at <a href="https://pygithub.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://pygithub.readthedocs.io/en/latest/</a> does not show how to do this.</p>
<p>Thanks a lot!!</p>
|
<python><github><pygithub>
|
2023-02-28 10:16:15
| 1
| 1,585
|
Kay
|
75,590,605
| 17,507
|
How can I mock Python's subprocess.check_output when it makes several different calls
|
<p>I have a function which makes several subprocess.check_output calls to glean various bits of information. To unit test the code I want to mock the results of check_output depending on the args so I tried this:</p>
<pre><code>@pytest.fixture
def overlay_subprocess_calls(mocker):
def my_outputs(*args, **kwargs):
if args:
print(f"args: {args}, {kwargs}")
return "called"
# if args[0] == "readelf":
# return fake_readelf
# elif args[0] == "ldd":
# return fake_ldd
# else:
# return fake_firmware_paths
return mocker.patch("subprocess.check_output", new_callable=my_outputs)
def test_qemu_overlay(tuxrun_args_qemu_overlay, lava_run, overlay_subprocess_calls):
main()
</code></pre>
<p>Have I misunderstood what new_callable is meant to do? I was hoping it would patch in my_outputs but when I run the test I get:</p>
<pre><code> # work out the loader
> interp = subprocess.check_output(["readelf", "-p", ".interp",
qemu_binary]).decode(
"utf-8"
)
E TypeError: 'NoneType' object is not callable
</code></pre>
<p>which implies we are returning nothing for the mocked call.</p>
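<p>An editor's note on what went wrong: <code>new_callable</code> names the factory used to construct the mock object itself, and patch calls it with no arguments; <code>my_outputs()</code> with empty <code>args</code> falls through and returns <code>None</code>, which is then installed as the replacement, hence "'NoneType' object is not callable". Dispatching on the call's arguments is what <code>side_effect</code> is for. A sketch with <code>unittest.mock</code> (with pytest-mock it is <code>mocker.patch(..., side_effect=...)</code>); the fake outputs are placeholders:</p>

```python
import subprocess
from unittest import mock

fake_readelf = b"fake readelf output"
fake_ldd = b"fake ldd output"
fake_firmware = b"fake firmware paths"

def fake_check_output(cmd, *args, **kwargs):
    # cmd is the argument list passed to check_output; dispatch on the
    # program name, as the question's commented-out branches intend.
    if cmd[0] == "readelf":
        return fake_readelf
    elif cmd[0] == "ldd":
        return fake_ldd
    return fake_firmware

with mock.patch("subprocess.check_output", side_effect=fake_check_output):
    out = subprocess.check_output(["readelf", "-p", ".interp", "qemu-bin"])
    assert out == fake_readelf
    assert subprocess.check_output(["ldd", "qemu-bin"]) == fake_ldd
```

<p>Because <code>side_effect</code> receives the real call arguments, one patch covers all the different <code>check_output</code> invocations the function makes.</p>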
|
<python><unit-testing><subprocess><python-mock>
|
2023-02-28 10:15:28
| 1
| 6,040
|
stsquad
|
75,590,589
| 423,839
|
Use key as byte array for DES3 in python
|
<p>I have a key and IV for DES3 as byte array (generated by C#):</p>
<pre><code>var _algo = TripleDES.Create();
_algo.GenerateIV();
_algo.GenerateKey();
Console.WriteLine(string.Join(", ", _algo.IV));
Console.WriteLine(string.Join(", ", _algo.Key));
</code></pre>
<p>I get these values for example:</p>
<pre><code>[220, 138, 91, 56, 76, 81, 217, 70]
[88, 221, 70, 78, 149, 105, 62, 50, 93, 32, 72, 240, 54, 53, 153, 41, 39, 135, 78, 19, 216, 208, 180, 50]
</code></pre>
<p>How do I properly use the key to decode the message in Python? I'm trying:</p>
<pre><code>from Crypto.Cipher import DES3
codedText = "hvAjofLh4mc="
iv = [220, 138, 91, 56, 76, 81, 217, 70]
key = [88, 221, 70, 78, 149, 105, 62, 50, 93, 32, 72, 240, 54, 53, 153, 41, 39, 135, 78, 19, 216, 208, 180, 50]
cipher_encrypt = DES3.new(bytearray(key), DES3.MODE_CBC, bytearray(iv))
v = cipher_encrypt.decrypt(codedText)
</code></pre>
<p>This gives me</p>
<pre><code>TypeError: Object type <class 'str'> cannot be passed to C code
</code></pre>
<p>But I think I am doing something wrong with the keys.</p>
<p>The code I used to generate the keys/message in C#:
<a href="https://dotnetfiddle.net/PXcNhl" rel="nofollow noreferrer">https://dotnetfiddle.net/PXcNhl</a></p>
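Incidentally, the `TypeError` is raised because `decrypt` receives the Base64 string itself; the ciphertext has to be decoded to raw bytes first. A sketch of just that step (the key/IV conversion with `bytes(...)` is then analogous):

```python
import base64

coded_text = "hvAjofLh4mc="
# Base64-decode the ciphertext before handing it to cipher.decrypt(...)
cipher_bytes = base64.b64decode(coded_text)
print(len(cipher_bytes))  # 8 -> exactly one DES3 block
```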
|
<python><pycryptodome>
|
2023-02-28 10:14:02
| 1
| 8,482
|
Archeg
|
75,590,580
| 2,115,409
|
Using pandas in django, how to release memory?
|
<p>In a django project that focuses on data analysis, I use pandas to transform the data to a required format. The process is nice and easy, but the project quickly starts using 1GB of RAM.</p>
<p>I know that Python doesn't really free up memory in this case (<a href="https://stackoverflow.com/a/39377643/2115409">https://stackoverflow.com/a/39377643/2115409</a>) and that pandas might have an issue (<a href="https://github.com/pandas-dev/pandas/issues/2659" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/2659</a>).</p>
<p>How do I use pandas in a django project without exploding memory?</p>
|
<python><django><pandas>
|
2023-02-28 10:13:24
| 1
| 2,704
|
Private
|
75,590,485
| 12,281,404
|
boost the speed of a recursive function
|
<p>The function is either too slow or never finishes; I didn't get a result. How can I fix that?</p>
<pre class="lang-py prettyprint-override"><code>def x(a):
return x(a - 1) + x(a - 2) + 42 if a > 1 else a
print(x(195))
</code></pre>
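The recursion recomputes the same subproblems exponentially many times, Fibonacci-style, which is why it never finishes. A memoized sketch (caching makes each value be computed only once, so `x(195)` finishes instantly):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def x(a):
    # Same recurrence, but every x(a) is computed only once
    return x(a - 1) + x(a - 2) + 42 if a > 1 else a

print(x(195))
```

An iterative loop carrying the last two values would avoid recursion entirely and works for inputs beyond the recursion limit.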
|
<python><python-3.x>
|
2023-02-28 10:05:16
| 2
| 487
|
Hahan't
|
75,590,442
| 18,756,733
|
How to change line width of a KDE plot in seaborn
|
<p>I can't change the thickness of the KDE line in seaborn.</p>
<p><a href="https://i.sstatic.net/ZHvUk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZHvUk.png" alt="enter image description here" /></a></p>
<p>Here is the line of the code:</p>
<pre><code>sns.kdeplot(ax=ax2,x=dots['Longitude'],y=dots['Latitude'],kde_kws={'linestyle':':'},lw=0.1,levels=10)
</code></pre>
<p>Am I missing something?</p>
|
<python><matplotlib><seaborn>
|
2023-02-28 10:00:52
| 1
| 426
|
beridzeg45
|
75,590,319
| 14,269,252
|
Interactive time series data over categorical feature
|
<p>I am new to Python and want to learn how to make an interactive plot. This is time series data: I want to show DATE over the categorical column (CODE) by selecting Source and ID, and also once for all data per ID. I also want to show the Code as a label over the bar (if this is a bar plot) or over a dot (if this is a dot plot).</p>
<p>I know how to make a simple graph, but I don't know how to develop it further.</p>
<pre><code>import matplotlib.pyplot as plt
ax = df.groupby(['DATE', 'CODE']).size().unstack().plot.bar()
ax
</code></pre>
<pre><code> ID DATE CODE SOURCE
0 P04 2016-08-08 f m1
1 P04 2015-05-08 f m1
2 P04 2010-07-20 v m3
3 P04 2013-12-06 g m4
4 P08 2018-03-01 h m4
</code></pre>
<p>Another try:</p>
<pre><code>
import pandas as pd
import seaborn as sns
p = sns.countplot(data=df, x='DATE', hue='CODE')
p.legend(title='CODE', bbox_to_anchor=(1, 1), loc='upper left')
for c in p.containers:
# set the bar label
p.bar_label(c, fmt='%.0f', label_type='edge')
</code></pre>
|
<python><matplotlib><plot><plotly><bokeh>
|
2023-02-28 09:47:42
| 0
| 450
|
user14269252
|
75,590,316
| 1,788,771
|
How to hook up two viewsets to a single django url
|
<p>I hava a standard DRF viewset for a model which I hook up in my <code>urls.py</code> as such:</p>
<pre class="lang-py prettyprint-override"><code>router = routers.SimpleRouter()
router.register("", ResourceViewSet, basename="resource")
urlpatterns = [
path(
"",
include(router.urls),
name="basic-resource-crud",
),
]
</code></pre>
<p>Resource is the only model in the app so it is hooked up to the root. Additionally I would like to hook up the <code>PATCH</code> method on the root url to a bulk update view:</p>
<pre class="lang-py prettyprint-override"><code>router = routers.SimpleRouter()
router.register("", ResourceViewSet, basename="notifications")
urlpatterns = [
path(
"",
BulkResourceUpdateViewSet.as_view(),
name="bulk-resource-update",
),
path(
"",
include(router.urls),
name="basic-resource-crud",
),
]
</code></pre>
<p>The <code>BulkResourceUpdateViewSet</code> class only defines a <code>patch</code> method.</p>
<p>However, with the setup above only the first route in the <code>urlpatterns</code> list is taken into consideration by Django and the other is ignored.</p>
<p>How can I achieve the URL structure I am looking for:</p>
<pre><code>GET / : ResourceViewSet
PATCH / : BulkResourceUpdateViewSet.patch
GET /<pk>/ : ResourceViewSet
POST /<pk>/ : ResourceViewSet
PATCH /<pk>/ : ResourceViewSet
PUT /<pk>/ : ResourceViewSet
</code></pre>
|
<python><django><django-rest-framework>
|
2023-02-28 09:47:33
| 2
| 4,107
|
kaan_atakan
|
75,590,290
| 16,511,234
|
How to extract nested list data from a GeoJSON file in each key into dataframe
|
<p>I have a GeoJSON file with multiple keys. I want to extract the <code>name</code> and <code>coordinates</code>. I can access each of the elements, but I do not know how to loop over them to collect them into a dataframe or another appropriate format, such as a dict. I want the name with the corresponding coordinates (lat, lon). The final use of the dataframe or dict will be to plot it with Plotly.</p>
<p>The GeoJSON looks like this:</p>
<pre><code>{
"type": "FeatureCollection",
"name": "sf_example",
"crs": {
"type": "name",
"properties": {
"name": "urn:ogc:def:crs:OGC:1.3:CRS84"
}
},
"features": [{
"type": "Feature",
"properties": {
"name": "FirstRoad"
},
"geometry": {
"type": "MultiLineString",
"coordinates": [[[-0.209607278927995, 51.516851589085569], [-0.209607278927995, 51.516851589085569], [-0.210671042775843, 51.51666770991379], [-0.217526409795308, 51.515064252076257]]]
}
}, {
"type": "Feature",
"properties": {
"name": "SecondRoad"
},
"geometry": {
"type": "MultiLineString",
"coordinates": [[[-0.208969020619286, 51.516005738748845], [-0.208969020619286, 51.516005738748845], [-0.211096548314982, 51.512688382789257]]]
}
}, {
"type": "Feature",
"properties": {
"name": "ThirdRoad"
},
"geometry": {
"type": "MultiLineString",
"coordinates": [[[-0.204725784826204, 51.517020757267986], [-0.204725784826204, 51.517020757267986], [-0.207798880386654, 51.517211990108905]]]
}
}
]
}
</code></pre>
<p>I read the data with the following code:</p>
<pre><code>import json
with open('sf_example.geojson', "r") as read_file:
data = json.load(read_file)
</code></pre>
<p>And with that code I can access the elements:</p>
<pre><code>data['features'][0]['properties']['name']
data['features'][0]['geometry']['coordinates'][0]
</code></pre>
<p>How do I loop so that all information will be extracted? What I want as a result is a dataframe or dict like this for example:</p>
<pre><code>name,lat,lon
FirstRoad,-0.209607278927995, 51.516851589085569,
FirstRoad,-0.209607278927995, 51.516851589085569,
FirstRoad,-0.210671042775843, 51.51666770991379,
FirstRoad,-0.217526409795308, 51.515064252076257,
SecondRoad,-0.208969020619286, 51.516005738748845,
SecondRoad,-0.208969020619286, 51.516005738748845,
SecondRoad,-0.211096548314982, 51.512688382789257
</code></pre>
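For what it's worth, the nested lists can be flattened with three loops over features / line strings / points; note that GeoJSON stores each point as [lon, lat], so the desired "lat,lon" columns need the order swapped. A sketch on a trimmed-down copy of the data:

```python
import json

geojson_str = """{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "properties": {"name": "FirstRoad"},
     "geometry": {"type": "MultiLineString",
                  "coordinates": [[[-0.2096, 51.5168], [-0.2106, 51.5166]]]}},
    {"type": "Feature",
     "properties": {"name": "SecondRoad"},
     "geometry": {"type": "MultiLineString",
                  "coordinates": [[[-0.2089, 51.5160]]]}}
  ]
}"""

data = json.loads(geojson_str)
rows = []
for feature in data["features"]:
    name = feature["properties"]["name"]
    for line in feature["geometry"]["coordinates"]:
        for lon, lat in line:  # GeoJSON order is [longitude, latitude]
            rows.append({"name": name, "lon": lon, "lat": lat})

print(rows[0])  # {'name': 'FirstRoad', 'lon': -0.2096, 'lat': 51.5168}
```

`rows` can then be passed straight to `pd.DataFrame(rows)` for plotting.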
|
<python><json><parsing><geojson>
|
2023-02-28 09:45:29
| 1
| 351
|
Gobrel
|
75,590,242
| 19,238,204
|
Python PyGMT Plotting Data Points from CSV
|
<p>I tried the tutorial here:</p>
<p><a href="https://www.pygmt.org/latest/tutorials/basics/plot.html#sphx-glr-tutorials-basics-plot-py" rel="nofollow noreferrer">https://www.pygmt.org/latest/tutorials/basics/plot.html#sphx-glr-tutorials-basics-plot-py</a></p>
<p>But instead of using their ready-made <code>japan_quakes</code> example, I created my own CSV, which works for the first two examples but not for the last one (the figure won't show up).</p>
<p>This is the problematic one:</p>
<pre><code>import pygmt
import pandas as pd
data = pygmt.datasets.load_sample_data(name="japan_quakes")
# `region` is needed by basemap(); derive it from the data extent
# as the tutorial does:
region = [
    data.longitude.min() - 1,
    data.longitude.max() + 1,
    data.latitude.min() - 1,
    data.latitude.max() + 1,
]
fig = pygmt.Figure()
fig.basemap(region=region, projection="M15c", frame=True)
fig.coast(land="black", water="skyblue")
pygmt.makecpt(cmap="viridis", series=[data.depth_km.min(), data.depth_km.max()])
fig.plot(
x=data.longitude,
y=data.latitude,
size=0.02 * 2**data.magnitude,
fill=data.depth_km,
cmap=True,
style="cc",
pen="black",
)
fig.colorbar(frame="af+lDepth (km)")
fig.show()
</code></pre>
<p>I also want to ask:</p>
<p><strong>How do I create a slider that changes the plot based on year, so that if the slider is on 1987 it shows all the data for 1987?</strong></p>
|
<python><pygmt>
|
2023-02-28 09:40:16
| 2
| 435
|
Freya the Goddess
|
75,590,142
| 7,802,894
|
Decorators to configure Sentry error and trace rates?
|
<p>I am using Sentry and <a href="https://docs.sentry.io/platforms/python/" rel="nofollow noreferrer">sentry_sdk</a> to monitor errors and traces in my Python application. I want to configure the error and trace rates for different routes in my FastAPI API. To do this, I want to write two decorators called <code>sentry_error_rate</code> and <code>sentry_trace_rate</code> that will allow me to set the sample rates for errors and traces, respectively.</p>
<p>The <code>sentry_error_rate</code> decorator should take a single argument <code>errors_sample_rate</code> (a float between 0 and 1) and apply it to a specific route.
The <code>sentry_trace_rate</code> decorator should take a single argument <code>traces_sample_rate</code> (also a float between 0 and 1) and apply it to a specific route.</p>
<pre class="lang-py prettyprint-override"><code>def sentry_trace_rate(traces_sample_rate: float = 0.0) -> callable:
""" Decorator to set the traces_sample_rate for a specific route.
This is useful for routes that are called very frequently, but we
want to sample them to reduce the amount of data we send to Sentry.
Args:
traces_sample_rate (float): The sample rate to use for this route.
"""
def decorator(func):
@wraps(func)
async def wrapper(*args, **kwargs):
# Do something here ?
return await func(*args, **kwargs)
return wrapper
return decorator
def sentry_error_rate(errors_sample_rate: float = 0.0) -> callable:
""" Decorator to set the errors_sample_rate for a specific route.
This is useful for routes that are called very frequently, but we
want to sample them to reduce the amount of data we send to Sentry.
Args:
errors_sample_rate (float): The sample rate to use for this route.
"""
def decorator(func):
@wraps(func)
async def wrapper(*args, **kwargs):
# Do something here ?
return await func(*args, **kwargs)
return wrapper
return decorator
</code></pre>
<p>Does someone have an idea whether this is possible and how it could be done?</p>
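Note that neither rate can be changed from inside the wrapper itself, because the SDK makes its sampling decision outside the route body. One hedged approach (the `_traces_sample_rate` attribute and the lookup from a sampler callback are assumptions, not official API) is to tag each endpoint with its desired rate and read that tag from a `traces_sampler` callback passed to `sentry_sdk.init`:

```python
import asyncio
from functools import wraps

def sentry_trace_rate(traces_sample_rate: float = 0.0):
    """Tag a route handler with a per-route trace sample rate."""
    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            return await func(*args, **kwargs)
        # Hypothetical attribute: a traces_sampler callback given to
        # sentry_sdk.init() would look this up and return it.
        wrapper._traces_sample_rate = traces_sample_rate
        return wrapper
    return decorator

@sentry_trace_rate(0.25)
async def my_route():
    return "ok"

print(my_route._traces_sample_rate)   # 0.25
print(asyncio.run(my_route()))        # ok
```

Error sampling has no per-event rate option that I know of, but `sentry_sdk.init` accepts a `before_send` hook, which could apply a similar per-route probability before forwarding the event.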
|
<python><fastapi><sentry>
|
2023-02-28 09:30:28
| 1
| 1,451
|
Yohann L.
|
75,590,032
| 282,328
|
How do I create a model instance using raw SQL in async SQLAlchemy?
|
<p>I have asyncio sqlalchemy code:</p>
<pre><code>import asyncio
from sqlalchemy.orm import sessionmaker, declarative_base
from sqlalchemy import text, Column, Integer, String
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
async_engine = create_async_engine(f'mysql+aiomysql://root:example@127.0.0.1:3306/spy-bot')
AsyncSession = sessionmaker(async_engine, class_=AsyncSession, expire_on_commit=False)
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
username = Column(String(255))
async def main():
async with AsyncSession() as session:
stmt = text('SELECT * FROM `users` LIMIT 1')
result = await session.execute(stmt)
user = result.one()
print(type(user), user)
asyncio.run(main())
</code></pre>
<p>How do I make the session query return instances of the <code>User</code> class while still using raw SQL?</p>
<p>In the sync version this would look like:</p>
<pre><code>result = session.query(User).from_statement(text(query))
</code></pre>
|
<python><python-3.x><sqlalchemy>
|
2023-02-28 09:20:04
| 2
| 8,574
|
Poma
|
75,590,029
| 2,304,074
|
Write Linux kernel in Python and compile with Cython
|
<p>Cython compiles Python to C-compatible code. Suppose the Linux kernel source files (something.c) were rewritten in Python and compiled with Cython. <br>
Is it possible to compile the Python files with Cython, without Python's memory management, and then insert the resulting objects into the linking step of the normal kernel build?</p>
|
<python><linux><kernel>
|
2023-02-28 09:19:49
| 0
| 337
|
user145453
|
75,589,788
| 9,811,964
|
Modify given regular expression for German mobile numbers
|
<p>I have a regular expression (regex) which matches German mobile numbers.</p>
<p>The expression looks like this:</p>
<p><code>[^\d]((\+49|0049|0)\s?(1|9)[1567]\d{1,2}([ \-/]*\d){7,8})(?!\d)</code></p>
<p>It covers number formats like this and many more:</p>
<pre><code>+4915368831169
+49 15207955279
+49 1739341284
</code></pre>
<p>As you can see in the <a href="https://regex101.com/r/w425Os/2" rel="nofollow noreferrer">regex demo</a>, the regular expression covers a whole range of possible formats, but unfortunately matches to both <code>0 910 0 910 -77 -1</code> and <code>0 910 0 910 -75 0</code> which are not German mobile numbers.</p>
<p>How can I modify the regular expression in a way that the regex still matches the German mobile numbers provided in the demo but will not match the given sequence of numbers?</p>
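As a starting point for iterating on the pattern, the behaviour is easy to reproduce locally with `re` (the leading space in each test string stands in for the required non-digit prefix):

```python
import re

pattern = re.compile(r"[^\d]((\+49|0049|0)\s?(1|9)[1567]\d{1,2}([ \-/]*\d){7,8})(?!\d)")

# A valid German mobile number matches...
print(bool(pattern.search(" +4915368831169")))       # True
# ...but so does this digit sequence, which should not match
print(bool(pattern.search(" 0 910 0 910 -77 -1")))   # True (the false positive)
```

The false positive slips through because `[ \-/]*` allows arbitrarily many separators between every pair of digits; tightening that quantifier (e.g. at most one separator) would be one place to start.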
|
<python><regex><match>
|
2023-02-28 08:56:24
| 1
| 1,519
|
PParker
|
75,589,667
| 4,865,723
|
How to realize one() like Python's any() and all()
|
<p>I like to use <code>any()</code> and <code>all()</code> in Python. But sometimes I need a <code>one()</code>.</p>
<p>I want to know if <strong>only one</strong> value or condition in a list is <code>True</code>.</p>
<p>Currently I do it with an ugly nested <code>if</code>.</p>
<pre><code># In the real world there could be more than just two elements
istrue = [True, False]
isfalse = [True, 'foobar']
isfalse2 = [False, False]
def one(values):
if values[0] and values[1]:
return False
if not values[0] and not values[1]:
return False
return True
one(istrue) # True
one(isfalse) # False
one(isfalse2) # False
</code></pre>
<p>I assume there is a more Pythonic way, isn't there?</p>
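For what it's worth, a common idiom is to count the truthy elements; unlike the nested `if`, this also works for lists of any length (a sketch, not the only way):

```python
def one(values):
    # Exactly one truthy element?
    return sum(map(bool, values)) == 1

print(one([True, False]))      # True
print(one([True, "foobar"]))   # False
print(one([False, False]))     # False
```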
|
<python>
|
2023-02-28 08:44:12
| 3
| 12,450
|
buhtz
|
75,589,568
| 14,617,762
|
Why am I getting a UnicodeDecodeError when installing manim-voiceover?
|
<p>I want to install manim-voiceover in a Windows environment, but a <code>UnicodeDecodeError</code> occurs. Even though the PYTHONIOENCODING environment variable is set to utf-8, the <code>UnicodeDecodeError</code> keeps happening. Below is the error message.</p>
<pre><code>C:\MyFirstFolder>pip install manim-voiceover
Collecting manim-voiceover
  Using cached manim_voiceover-0.3.0-py3-none-any.whl (37 kB)
Requirement already satisfied: pydub<0.26.0,>=0.25.1 in c:\tools\manim\lib\site-packages (from manim-voiceover) (0.25.1)
Collecting mutagen<2.0.0,>=1.46.0
Collecting python-dotenv<0.22.0,>=0.21.0
  Using cached python_dotenv-0.21.1-py3-none-any.whl (19 kB)
Collecting humanhash3<0.0.7,>=0.0.6
  Using cached humanhash3-0.0.6.tar.gz (5.4 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\FORYOUCOM\AppData\Local\Temp\pip-install-7gyhjr68\humanhash3_939d80744ac64b31a109f7a8eb5e4db4\setup.py", line 7, in <module>
long_description = f.read()
UnicodeDecodeError: 'cp949' codec can't decode byte 0xe2 in position 1081: illegal multibyte sequence
[end of output]
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>If anyone knows how to fix it, please comment. Thank you!</p>
|
<python><unicode><manim>
|
2023-02-28 08:31:51
| 1
| 563
|
Normalbut_Genuine
|
75,589,454
| 13,916,049
|
K-medoids clustering using a precomputed matrix
|
<p>I want to perform Gower clustering (on mixed binary and non-binary data) and then perform K-medoids clustering based on the distance matrix <code>dm</code>.</p>
<pre><code>import gower
from sklearn.preprocessing import MinMaxScaler  # was missing from the snippet
from sklearn_extra.cluster import KMedoids
dft = df.T
X = dft.iloc[:-5,:]
y = dft.iloc[-5:,:]
mms = MinMaxScaler()
mms.fit(X)
data_transformed = mms.transform(X)
dm = gower.gower_matrix(X, y)
K = range(1, 10)
for k in K:
kmedoids = KMedoids(n_clusters=k, metric="precomputed", method="pam").fit(dm, y)
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [33], in <cell line: 1>()
1 for k in K:
----> 3 kmedoids = KMedoids(n_clusters=k, metric="precomputed", method="pam").fit(dm, y)
5 distortions.append(sum(np.min(cdist(dm, kmedoids.cluster_centers_,'euclidean'), axis=1)) / dm.shape[0])
6 inertias.append(kmedoids.inertia_)
File ~/.local/lib/python3.9/site-packages/sklearn_extra/cluster/_k_medoids.py:196, in KMedoids.fit(self, X, y)
189 if self.n_clusters > X.shape[0]:
190 raise ValueError(
191 "The number of medoids (%d) must be less "
192 "than the number of samples %d."
193 % (self.n_clusters, X.shape[0])
194 )
--> 196 D = pairwise_distances(X, metric=self.metric)
197 medoid_idxs = self._initialize_medoids(
198 D, self.n_clusters, random_state_
199 )
200 labels = None
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/sklearn/metrics/pairwise.py:1851, in pairwise_distances(X, Y, metric, n_jobs, force_all_finite, **kwds)
1845 raise ValueError(
1846 "Unknown metric %s. Valid metrics are %s, or 'precomputed', or a callable"
1847 % (metric, _VALID_METRICS)
1848 )
1850 if metric == "precomputed":
-> 1851 X, _ = check_pairwise_arrays(
1852 X, Y, precomputed=True, force_all_finite=force_all_finite
1853 )
1855 whom = (
1856 "`pairwise_distances`. Precomputed distance "
1857 " need to have non-negative values."
1858 )
1859 check_non_negative(X, whom=whom)
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/sklearn/metrics/pairwise.py:175, in check_pairwise_arrays(X, Y, precomputed, dtype, accept_sparse, force_all_finite, copy)
173 if precomputed:
174 if X.shape[1] != Y.shape[0]:
--> 175 raise ValueError(
176 "Precomputed metric requires shape "
177 "(n_queries, n_indexed). Got (%d, %d) "
178 "for %d indexed." % (X.shape[0], X.shape[1], Y.shape[0])
179 )
180 elif X.shape[1] != Y.shape[1]:
181 raise ValueError(
182 "Incompatible dimension for X and Y matrices: "
183 "X.shape[1] == %d while Y.shape[1] == %d" % (X.shape[1], Y.shape[1])
184 )
ValueError: Precomputed metric requires shape (n_queries, n_indexed). Got (202, 5) for 202 indexed.
</code></pre>
<p>Data:</p>
<p><code>df.iloc[1:200,0:3]</code></p>
<pre><code>pd.DataFrame({'TCGA-2K-A9WE-01A': {'IGF2R': 0,
'NBEA': 0,
'KMT2D': 0,
'HERC2': 0,
'NEB': 0,
'TTN': 0,
'SF3B1': 0,
'DNAH5': 1,
'MDN1': 0,
'MET': 0,
'LRP6': 0,
'EML5': 0,
'RYR3': 0,
'COL18A1': 0,
'EP300': 0,
'GLS': 0,
'CUL3': 0,
'MUC17': 0,
'WDR81': 0,
'TP53': 0,
'SSH2': 0,
'RYR1': 0,
'CSMD2': 0,
'NFE2L2': 0,
'PKHD1': 0,
'MON2': 0,
'PARD6B': 0,
'KIF1B': 0,
'FLG': 0,
'BIRC6': 0,
'VCAN': 0,
'SYNE1': 0,
'TG': 0,
'ANK3': 0,
'FANCM': 0,
'DMXL2': 0,
'SRCAP': 0,
'ZZEF1': 0,
'KDM6A': 0,
'STAG2': 0,
'ARID1A': 0,
'SMARCA4': 0,
'RERE': 0,
'DST': 0,
'AHCTF1': 0,
'CENPE': 0,
'GRAMD1A': 0,
'SMC6': 0,
'WDFY3': 0,
'SLC5A12': 0,
'TENM3': 0,
'RYR2': 0,
'BRCA2': 0,
'CLTC': 0,
'HELZ2': 0,
'SETD2': 0,
'BAP1': 0,
'PRAG1': 0,
'NF2': 0,
'UBR3': 0,
'MACF1': 0,
'KMT2C': 0,
'PRR12': 0,
'RANBP2': 1,
'GBF1': 0,
'CHD8': 0,
'SYNE2': 0,
'SMG1': 0,
'TSC2': 0,
'ARHGEF28': 0,
'KIAA1109': 0,
'TET1': 0,
'MYT1': 0,
'SMARCB1': 0,
'LRP2': 0,
'LRP1B': 0,
'MUC16': 0,
'SAV1': 0,
'DIP2C': 0,
'PCF11': 0,
'XIRP2': 0,
'DNAH1': 0,
'DNAH8': 0,
'CSMD3': 0,
'TLN1': 0,
'DYNC1H1': 0,
'SHANK3': 0,
'LRBA': 0,
'CNOT1': 0,
'CEP250': 0,
'PRRC2C': 0,
'SRRM2': 0,
'POLR1A': 0,
'ADGRV1': 0,
'ZAN': 0,
'MED13': 0,
'PBRM1': 0,
'PCLO': 0,
'CACNA1S': 0,
'FAT1': 0,
'NUP214': 0,
'ATXN2': 0,
'AHNAK': 0,
'XRN1': 0,
'BOD1L1': 0,
'TEP1': 0,
'UNC13A': 0,
'SZT2': 0,
'USH2A': 0,
'GOLGB1': 0,
'ANK2': 0,
'UBR4': 0,
'HMCN1': 0,
'PTEN': 0,
'MYH4': 0,
'CNTNAP5': 0,
'APOB': 0,
'SPATA31D1': 0,
'DDX5': 0,
'DSCAM': 0,
'ATP1B1': 0,
'PLEKHA6': 0,
'LRIG3': 0,
'ACSF2': 0,
'CPLANE1': 0,
'GOLGA4': 0,
'TNRC18': 0,
'OBSCN': 0,
'CUBN': 0,
'NIPBL': 0,
'HERC1': 0,
'CREBBP': 0,
'CENPF': 0,
'TNRC6A': 0,
'MYOM2': 0,
'ARHGAP32': 0,
'VPS13C': 0,
'F12': 0,
'KAT6A': 1,
'DYNC2H1': 0,
'CLUH': 0,
'KNL1': 0,
'MEGF8': 1,
'JMJD1C': 0,
'FREM2': 0,
'SPEN': 0,
'hsa-let-7a-5p': 3.900520072456484,
'hsa-let-7a-3p': 1.7954225750902864,
'hsa-let-7b-5p': 3.684774825723109,
'hsa-let-7b-3p': 1.8237020381693356,
'hsa-let-7c-5p': 3.519438691136673,
'hsa-let-7d-5p': 2.889577781432674,
'hsa-let-7d-3p': 3.0524182512860767,
'hsa-let-7e-5p': 3.24169139997674,
'hsa-let-7f-5p': 3.806804681066821,
'hsa-miR-15a-5p': 2.8873344760439683,
'hsa-miR-16-5p': 3.169448223239302,
'hsa-miR-17-5p': 2.9038686633461497,
'hsa-miR-17-3p': 3.002610000350878,
'hsa-miR-19b-3p': 2.4763377001511224,
'hsa-miR-20a-5p': 2.8294381272239373,
'hsa-miR-21-5p': 4.231168046714244,
'hsa-miR-21-3p': 3.3505095806399603,
'hsa-miR-22-5p': 2.3380346380813744,
'hsa-miR-22-3p': 4.100304205695363,
'hsa-miR-23a-3p': 3.5631194957307266,
'hsa-miR-24-1-5p': -0.5174965677416278,
'hsa-miR-24-3p': 3.4240942526453217,
'hsa-miR-24-2-5p': 1.9897204362031105,
'hsa-miR-25-3p': 3.6011473778864613,
'hsa-miR-26a-5p': 3.459216786355344,
'hsa-miR-26b-5p': 3.244044676182424,
'hsa-miR-26b-3p': 2.1772832419542314,
'hsa-miR-27a-3p': 3.330214050573812,
'hsa-miR-28-5p': 2.8084680302131613,
'hsa-miR-28-3p': 3.445835549140937,
'hsa-miR-29a-5p': 1.4322013458364455,
'hsa-miR-29a-3p': 3.747380534542168,
'hsa-miR-30a-5p': 4.034264831009016,
'hsa-miR-30a-3p': 3.86192271659328,
'hsa-miR-32-5p': 2.2457925142206228,
'hsa-miR-92a-3p': 3.622342073475701,
'hsa-miR-93-5p': 3.4862692022139234,
'hsa-miR-98-5p': 2.673648173443926,
'hsa-miR-99a-5p': 3.306517273383935,
'hsa-miR-100-5p': 3.56281613854454,
'hsa-miR-101-3p': 3.823911869890638,
'hsa-miR-29b-3p': 3.225462772887926,
'hsa-miR-29b-2-5p': 2.387517043267465,
'hsa-miR-103a-3p': 3.913516891372183,
'hsa-miR-106a-5p': 1.5947921744868148,
'hsa-miR-107': 2.765560153240496,
'hsa-miR-192-5p': 3.444283905581549,
'hsa-miR-196a-5p': 2.8748375476624948,
'hsa-miR-197-3p': 3.2702909773912223,
'hsa-miR-199a-5p': 2.4174982443909765,
'hsa-miR-199a-3p': 2.7699786837570124,
'hsa-miR-148a-3p': 3.783478829387657,
'hsa-miR-30c-5p': 3.242366731126448},
'TCGA-2Z-A9J1-01A': {'IGF2R': 1,
'NBEA': 1,
'KMT2D': 0,
'HERC2': 0,
'NEB': 0,
'TTN': 0,
'SF3B1': 0,
'DNAH5': 0,
'MDN1': 0,
'MET': 0,
'LRP6': 0,
'EML5': 0,
'RYR3': 0,
'COL18A1': 0,
'EP300': 0,
'GLS': 0,
'CUL3': 0,
'MUC17': 0,
'WDR81': 0,
'TP53': 0,
'SSH2': 0,
'RYR1': 0,
'CSMD2': 0,
'NFE2L2': 0,
'PKHD1': 0,
'MON2': 0,
'PARD6B': 0,
'KIF1B': 0,
'FLG': 0,
'BIRC6': 0,
'VCAN': 0,
'SYNE1': 0,
'TG': 0,
'ANK3': 0,
'FANCM': 0,
'DMXL2': 0,
'SRCAP': 0,
'ZZEF1': 0,
'KDM6A': 0,
'STAG2': 0,
'ARID1A': 0,
'SMARCA4': 0,
'RERE': 0,
'DST': 0,
'AHCTF1': 0,
'CENPE': 0,
'GRAMD1A': 0,
'SMC6': 0,
'WDFY3': 0,
'SLC5A12': 0,
'TENM3': 0,
'RYR2': 0,
'BRCA2': 0,
'CLTC': 0,
'HELZ2': 0,
'SETD2': 0,
'BAP1': 0,
'PRAG1': 0,
'NF2': 0,
'UBR3': 0,
'MACF1': 0,
'KMT2C': 0,
'PRR12': 0,
'RANBP2': 0,
'GBF1': 0,
'CHD8': 0,
'SYNE2': 0,
'SMG1': 0,
'TSC2': 0,
'ARHGEF28': 0,
'KIAA1109': 0,
'TET1': 0,
'MYT1': 0,
'SMARCB1': 0,
'LRP2': 0,
'LRP1B': 0,
'MUC16': 0,
'SAV1': 0,
'DIP2C': 0,
'PCF11': 0,
'XIRP2': 0,
'DNAH1': 0,
'DNAH8': 0,
'CSMD3': 0,
'TLN1': 0,
'DYNC1H1': 0,
'SHANK3': 0,
'LRBA': 0,
'CNOT1': 0,
'CEP250': 0,
'PRRC2C': 0,
'SRRM2': 0,
'POLR1A': 0,
'ADGRV1': 0,
'ZAN': 0,
'MED13': 0,
'PBRM1': 0,
'PCLO': 0,
'CACNA1S': 0,
'FAT1': 0,
'NUP214': 0,
'ATXN2': 0,
'AHNAK': 0,
'XRN1': 0,
'BOD1L1': 0,
'TEP1': 0,
'UNC13A': 0,
'SZT2': 0,
'USH2A': 0,
'GOLGB1': 0,
'ANK2': 0,
'UBR4': 0,
'HMCN1': 0,
'PTEN': 0,
'MYH4': 0,
'CNTNAP5': 0,
'APOB': 0,
'SPATA31D1': 0,
'DDX5': 0,
'DSCAM': 0,
'ATP1B1': 0,
'PLEKHA6': 0,
'LRIG3': 0,
'ACSF2': 0,
'CPLANE1': 0,
'GOLGA4': 0,
'TNRC18': 0,
'OBSCN': 0,
'CUBN': 0,
'NIPBL': 0,
'HERC1': 0,
'CREBBP': 0,
'CENPF': 0,
'TNRC6A': 0,
'MYOM2': 0,
'ARHGAP32': 0,
'VPS13C': 0,
'F12': 0,
'KAT6A': 0,
'DYNC2H1': 0,
'CLUH': 0,
'KNL1': 0,
'MEGF8': 0,
'JMJD1C': 0,
'FREM2': 0,
'SPEN': 0,
'hsa-let-7a-5p': 3.861192885173194,
'hsa-let-7a-3p': 2.436967295678264,
'hsa-let-7b-5p': 3.6676128143486753,
'hsa-let-7b-3p': 1.958120697018377,
'hsa-let-7c-5p': 3.3710754357912336,
'hsa-let-7d-5p': 2.889071049014615,
'hsa-let-7d-3p': 3.0465318021072383,
'hsa-let-7e-5p': 3.338231516941466,
'hsa-let-7f-5p': 3.740974488020681,
'hsa-miR-15a-5p': 2.7906022803335286,
'hsa-miR-16-5p': 3.057024158120857,
'hsa-miR-17-5p': 3.029640730993288,
'hsa-miR-17-3p': 3.1058198055705417,
'hsa-miR-19b-3p': 2.818744239381312,
'hsa-miR-20a-5p': 2.962076181923545,
'hsa-miR-21-5p': 4.227227158408038,
'hsa-miR-21-3p': 3.442330003846007,
'hsa-miR-22-5p': 2.500159690761942,
'hsa-miR-22-3p': 4.057262683406518,
'hsa-miR-23a-3p': 3.643252279968008,
'hsa-miR-24-1-5p': 2.078515470188888,
'hsa-miR-24-3p': 3.520481477727354,
'hsa-miR-24-2-5p': 2.431900857024301,
'hsa-miR-25-3p': 3.663267809778581,
'hsa-miR-26a-5p': 3.5146580089710038,
'hsa-miR-26b-5p': 3.338697106675036,
'hsa-miR-26b-3p': 2.1201724506034294,
'hsa-miR-27a-3p': 3.570982413178969,
'hsa-miR-28-5p': 2.764316255528613,
'hsa-miR-28-3p': 3.5004439598626447,
'hsa-miR-29a-5p': 1.16677801013214,
'hsa-miR-29a-3p': 3.810914304779572,
'hsa-miR-30a-5p': 4.066145798006805,
'hsa-miR-30a-3p': 3.8994855613210233,
'hsa-miR-32-5p': 2.456425423115635,
'hsa-miR-92a-3p': 3.7222157388346138,
'hsa-miR-93-5p': 3.5301163501931785,
'hsa-miR-98-5p': 2.470234592223246,
'hsa-miR-99a-5p': 3.261243017454502,
'hsa-miR-100-5p': 3.446610715459669,
'hsa-miR-101-3p': 3.7492013918013103,
'hsa-miR-29b-3p': 3.215998607821593,
'hsa-miR-29b-2-5p': 2.055762962193227,
'hsa-miR-103a-3p': 3.848989613004589,
'hsa-miR-106a-5p': 1.310085727835549,
'hsa-miR-107': 2.741793996998357,
'hsa-miR-192-5p': 3.7004707413610975,
'hsa-miR-196a-5p': 3.240131922656889,
'hsa-miR-197-3p': 3.067231910554075,
'hsa-miR-199a-5p': 2.733243180896706,
'hsa-miR-199a-3p': 2.945706768544072,
'hsa-miR-148a-3p': 3.653025536010717,
'hsa-miR-30c-5p': 3.445354673139307},
'TCGA-2Z-A9J3-01A': {'IGF2R': 0,
'NBEA': 0,
'KMT2D': 0,
'HERC2': 0,
'NEB': 0,
'TTN': 0,
'SF3B1': 0,
'DNAH5': 0,
'MDN1': 0,
'MET': 0,
'LRP6': 0,
'EML5': 0,
'RYR3': 0,
'COL18A1': 0,
'EP300': 0,
'GLS': 0,
'CUL3': 0,
'MUC17': 0,
'WDR81': 0,
'TP53': 0,
'SSH2': 0,
'RYR1': 0,
'CSMD2': 0,
'NFE2L2': 0,
'PKHD1': 0,
'MON2': 0,
'PARD6B': 0,
'KIF1B': 1,
'FLG': 0,
'BIRC6': 0,
'VCAN': 0,
'SYNE1': 0,
'TG': 0,
'ANK3': 0,
'FANCM': 0,
'DMXL2': 1,
'SRCAP': 0,
'ZZEF1': 0,
'KDM6A': 0,
'STAG2': 0,
'ARID1A': 0,
'SMARCA4': 0,
'RERE': 0,
'DST': 0,
'AHCTF1': 0,
'CENPE': 0,
'GRAMD1A': 0,
'SMC6': 0,
'WDFY3': 0,
'SLC5A12': 0,
'TENM3': 0,
'RYR2': 0,
'BRCA2': 0,
'CLTC': 0,
'HELZ2': 0,
'SETD2': 0,
'BAP1': 0,
'PRAG1': 0,
'NF2': 0,
'UBR3': 0,
'MACF1': 0,
'KMT2C': 0,
'PRR12': 0,
'RANBP2': 0,
'GBF1': 1,
'CHD8': 0,
'SYNE2': 1,
'SMG1': 0,
'TSC2': 0,
'ARHGEF28': 0,
'KIAA1109': 0,
'TET1': 0,
'MYT1': 0,
'SMARCB1': 0,
'LRP2': 0,
'LRP1B': 0,
'MUC16': 0,
'SAV1': 0,
'DIP2C': 0,
'PCF11': 0,
'XIRP2': 0,
'DNAH1': 0,
'DNAH8': 0,
'CSMD3': 0,
'TLN1': 0,
'DYNC1H1': 0,
'SHANK3': 0,
'LRBA': 0,
'CNOT1': 0,
'CEP250': 0,
'PRRC2C': 0,
'SRRM2': 0,
'POLR1A': 0,
'ADGRV1': 0,
'ZAN': 0,
'MED13': 0,
'PBRM1': 0,
'PCLO': 0,
'CACNA1S': 0,
'FAT1': 0,
'NUP214': 1,
'ATXN2': 0,
'AHNAK': 0,
'XRN1': 1,
'BOD1L1': 1,
'TEP1': 1,
'UNC13A': 1,
'SZT2': 0,
'USH2A': 0,
'GOLGB1': 0,
'ANK2': 0,
'UBR4': 0,
'HMCN1': 0,
'PTEN': 0,
'MYH4': 0,
'CNTNAP5': 0,
'APOB': 0,
'SPATA31D1': 0,
'DDX5': 0,
'DSCAM': 0,
'ATP1B1': 0,
'PLEKHA6': 0,
'LRIG3': 0,
'ACSF2': 0,
'CPLANE1': 0,
'GOLGA4': 0,
'TNRC18': 0,
'OBSCN': 0,
'CUBN': 0,
'NIPBL': 0,
'HERC1': 0,
'CREBBP': 0,
'CENPF': 0,
'TNRC6A': 0,
'MYOM2': 0,
'ARHGAP32': 0,
'VPS13C': 0,
'F12': 0,
'KAT6A': 0,
'DYNC2H1': 0,
'CLUH': 0,
'KNL1': 0,
'MEGF8': 0,
'JMJD1C': 0,
'FREM2': 0,
'SPEN': 0,
'hsa-let-7a-5p': 3.8633445895677534,
'hsa-let-7a-3p': 1.8616291060705468,
'hsa-let-7b-5p': 3.5952286310865538,
'hsa-let-7b-3p': 1.7737597007222166,
'hsa-let-7c-5p': 3.4958162915921864,
'hsa-let-7d-5p': 2.813404761324024,
'hsa-let-7d-3p': 2.856133322589684,
'hsa-let-7e-5p': 3.4123362083155007,
'hsa-let-7f-5p': 3.756886844881003,
'hsa-miR-15a-5p': 2.653398804318862,
'hsa-miR-16-5p': 2.921140087273342,
'hsa-miR-17-5p': 2.9080161647528446,
'hsa-miR-17-3p': 3.0190101751309406,
'hsa-miR-19b-3p': 2.480118996139677,
'hsa-miR-20a-5p': 2.891742608011878,
'hsa-miR-21-5p': 4.243501026965872,
'hsa-miR-21-3p': 3.3603686240008086,
'hsa-miR-22-5p': 1.7812090487442638,
'hsa-miR-22-3p': 3.895577061008923,
'hsa-miR-23a-3p': 3.5647202862065206,
'hsa-miR-24-1-5p': 1.2851898171850389,
'hsa-miR-24-3p': 3.501409872615561,
'hsa-miR-24-2-5p': 1.758460793518147,
'hsa-miR-25-3p': 3.623862330680501,
'hsa-miR-26a-5p': 3.295878145092404,
'hsa-miR-26b-5p': 3.064662865081599,
'hsa-miR-26b-3p': 1.325302271821943,
'hsa-miR-27a-3p': 3.443832233436177,
'hsa-miR-28-5p': 2.848221798544944,
'hsa-miR-28-3p': 3.4708383926521824,
'hsa-miR-29a-5p': 1.5899199958930694,
'hsa-miR-29a-3p': 3.7701639464079206,
'hsa-miR-30a-5p': 4.071110462975572,
'hsa-miR-30a-3p': 3.888012759366865,
'hsa-miR-32-5p': 2.1380008457637745,
'hsa-miR-92a-3p': 3.610711811379474,
'hsa-miR-93-5p': 3.613165791702674,
'hsa-miR-98-5p': 2.4393265534195563,
'hsa-miR-99a-5p': 3.3070455119581856,
'hsa-miR-100-5p': 3.2005876993544486,
'hsa-miR-101-3p': 3.6838783148428313,
'hsa-miR-29b-3p': 3.21472389563324,
'hsa-miR-29b-2-5p': 1.5030824681905002,
'hsa-miR-103a-3p': 3.8048140947821127,
'hsa-miR-106a-5p': 0.222868225074613,
'hsa-miR-107': 2.662719216814865,
'hsa-miR-192-5p': 3.2684486143788165,
'hsa-miR-196a-5p': 2.74647744842236,
'hsa-miR-197-3p': 2.759067800835906,
'hsa-miR-199a-5p': 1.742599376179338,
'hsa-miR-199a-3p': 2.301979123150629,
'hsa-miR-148a-3p': 3.7855708054524713,
'hsa-miR-30c-5p': 3.3507475510336118}})
</code></pre>
|
<python><pandas><machine-learning><scikit-learn>
|
2023-02-28 08:20:17
| 1
| 1,545
|
Anon
|
75,589,451
| 1,506,850
|
storing numbers efficiently in a json
|
<p>For some reason, I need to pass some large numerical vectors through a json.</p>
<p>A trivial serialization leads to the following plain text:</p>
<pre><code>[[0, 0.0003963873381352339], [1, 0.0008297143834970196], [2, 0.0007295088369519877],
[3, 0.0007836414989179601], [4, 0.0007501355526877312], ...
</code></pre>
<p>This is quite inefficient, as we are using ~15 bytes to store 4-byte floating-point values.
What could be a more efficient way to encode numbers in a JSON file?
Efficient as in size of the generated files; parsing time/resources are somewhat less important.</p>
<p>Later consumers of this JSON are Python and JavaScript scripts, with common open-source libraries available for parsing.</p>
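One common stdlib-only approach (a sketch, assuming the consumers can Base64-decode) is to pack the values as little-endian float32 and store the blob as a single string: roughly 5.3 bytes per value after Base64, versus ~15 characters in the plain-text form. If the integer indices are just consecutive positions, they can be dropped entirely:

```python
import base64
import json
import struct

values = [0.0003963873381352339, 0.0008297143834970196, 0.0007295088369519877]

packed = struct.pack(f"<{len(values)}f", *values)  # 4 bytes per float32
payload = json.dumps({"dtype": "<f4",
                      "data": base64.b64encode(packed).decode("ascii")})

# Consumer side (Python; JavaScript would use atob + Float32Array):
obj = json.loads(payload)
raw = base64.b64decode(obj["data"])
decoded = struct.unpack(f"<{len(raw) // 4}f", raw)
print(decoded[0])  # ≈ 0.0003963873, within float32 precision
```

Note the round-trip is only float32-accurate; keep `"<d"` (8 bytes) if full double precision matters.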
|
<javascript><python><json><serialization>
|
2023-02-28 08:20:05
| 0
| 5,397
|
00__00__00
|
75,589,400
| 5,192,123
|
pandas apply zscore on certain columns
|
<p>I have a pandas dataframe:</p>
<pre><code> DATE value1 value2 value3 value4
0 2020-01-06 1432.474761 96.215891 1488.882633 96.015154
1 2020-01-07 1023.868069 97.645627 1054.100066 97.536370
2 2020-01-08 837.560896 98.281260 788.085172 98.445618
3 2020-01-09 1351.789373 96.560701 1800.800979 95.025800
4 2020-01-10 1102.373631 97.430358 991.444011 97.799854
</code></pre>
<p>I want to compute <code>zscore</code> for each column:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.stats import zscore
df = df.drop(columns=["DATE"])
df.apply(zscore)
</code></pre>
<p>The above works fine except that I have to drop <code>DATE</code> column. If I select a column as below:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.stats import zscore
df["value1"].apply(zscore)
</code></pre>
<p>it gives me an error:</p>
<pre><code>AxisError: axis 0 is out of bounds for array of dimension 0
</code></pre>
<p>Is there a way I can run <code>apply(zscore)</code> on selected columns?</p>
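One pattern that sidesteps the error (a sketch, not from the question): apply a z-score over just the numeric columns, leaving <code>DATE</code> untouched. Note that <code>df["value1"]</code> is a Series, so <code>.apply</code> there runs <code>zscore</code> on each scalar, which is what triggers the axis error; selecting with a list of columns keeps it a DataFrame. The lambda below reproduces <code>scipy.stats.zscore</code>'s default <code>ddof=0</code>, so SciPy is not even required:

```python
import pandas as pd

df = pd.DataFrame({
    "DATE": ["2020-01-06", "2020-01-07", "2020-01-08"],
    "value1": [1432.474761, 1023.868069, 837.560896],
    "value2": [96.215891, 97.645627, 98.281260],
})

# Every column except DATE
num_cols = df.columns.drop("DATE")

# Column-wise z-score; ddof=0 matches scipy.stats.zscore's default
df[num_cols] = df[num_cols].apply(lambda s: (s - s.mean()) / s.std(ddof=0))
```

The same selection works for a single column via a one-element list, e.g. `df[["value1"]]`.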
|
<python><pandas>
|
2023-02-28 08:13:58
| 2
| 2,633
|
MoneyBall
|
75,589,293
| 11,267,783
|
Issue with colorbar and imshow with gridspec
|
<p>I wanted to plot 2 imshow on a figure, but I only want the sub figure on the right to have the colorbar at the bottom of its plot.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.gridspec import GridSpec
cm = 1/2.54
fig = plt.figure()
fig.set_size_inches(21*cm,29.7*cm)
gs = GridSpec(1,2,figure=fig)
data1 = np.random.rand(100,1000)
data2 = np.random.rand(100,1000)
ax_left = fig.add_subplot(gs[:,0])
img_left = ax_left.imshow(data1, aspect='auto')
ax_right = fig.add_subplot(gs[:,1])
img_right = ax_right.imshow(data2, aspect='auto')
fig.colorbar(img_right,ax = [ax_right], location='bottom')
plt.show()
</code></pre>
<p>As you can see, the two <code>imshow</code> plots are not the same size (I think because of the colorbar). Do you have any ideas for getting the same figure, but with the right plot the same height as the left one (while keeping the colorbar for the right <code>imshow</code>)?</p>
|
<python><matplotlib><colorbar><imshow><matplotlib-gridspec>
|
2023-02-28 08:01:56
| 1
| 322
|
Mo0nKizz
|
75,589,224
| 3,811,401
|
Add zero vector row per group in pandas
|
<p>I want to create an equal-sized (zero-padded) NumPy array from pandas, ultimately to be given as input to a Keras model.</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[1, 1.2, 2.2],
[1, 3.2, 4.6],
[2, 5.5, 6.6]], columns = ['id', 'X1', 'X2']
)
df
>>
id X1 X2
0 1 1.2 2.2
1 1 3.2 4.6
2 2 5.5 6.6
</code></pre>
<p>Expected Output - 3d numpy array with padding</p>
<pre><code>array[
[
[1.2, 2.2],
[3.2, 4.6]
],
[
[5.5, 6.6],
[0, 0]
]
]
</code></pre>
<p>Can anyone help me?</p>
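A sketch of one way to get there (not from the question): split the frame per <code>id</code> with <code>groupby</code>, then copy each group into a pre-allocated zero array sized to the longest group.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 1.2, 2.2],
                   [1, 3.2, 4.6],
                   [2, 5.5, 6.6]], columns=["id", "X1", "X2"])

# One (rows, 2) array per id, in id order
groups = [g[["X1", "X2"]].to_numpy() for _, g in df.groupby("id")]

# Zero-pad every group up to the longest one
max_len = max(len(g) for g in groups)
padded = np.zeros((len(groups), max_len, 2))
for i, g in enumerate(groups):
    padded[i, :len(g), :] = g
```

Keras also ships `pad_sequences`, but its default dtype is `int32` and it pads at the front, so the explicit version above is easier to control for float features.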
|
<python><pandas><numpy><keras>
|
2023-02-28 07:54:52
| 1
| 4,820
|
Hardik Gupta
|
75,589,033
| 9,104,942
|
Schedule DAG to run once every two months on the last Sunday
|
<p>I need to run my Airflow DAG <strong>once every two months, on the last Sunday, at 3 am</strong>. For example February's last Sunday, April's last Sunday, June's last Sunday, and so on (it skips January, March, May, etc.).
I was thinking of using this cron expression, but it does not work as I want.</p>
<pre><code>0 3 * */2 7
dag = DAG(
    dag_id=DAG_ID,
    default_args=DEFAULT_ARGS,
    schedule_interval='0 3 * */2 7',
    start_date=datetime(2021, 6, 1),
    max_active_runs=1
)
</code></pre>
<p>How can I achieve this result in one cron expression? Is there any other way around?</p>
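For reference, standard cron cannot express "the last Sunday of the month", and `*/2` in the month field expands to 1,3,5,... (the odd months), so `0 3 * */2 7` fires every Sunday of every odd month. A common workaround (a sketch, not Airflow-specific): schedule `0 3 * * SUN` and let the first task short-circuit the run unless today is the last Sunday of an even month, e.g. with a check like:

```python
import calendar
from datetime import date

def is_last_sunday_of_even_month(d: date) -> bool:
    """True only on the last Sunday of February, April, June, ..."""
    if d.month % 2 != 0 or d.weekday() != calendar.SUNDAY:
        return False
    # It is the last Sunday iff no other Sunday fits in the remaining days
    return d.day + 7 > calendar.monthrange(d.year, d.month)[1]
```

In Airflow this check would typically live in a `ShortCircuitOperator`; on Airflow >= 2.2, a custom `Timetable` can also encode the schedule directly.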
|
<python><cron><airflow>
|
2023-02-28 07:33:03
| 1
| 659
|
Abdusoli
|
75,588,985
| 10,305,444
|
how to `set_shape` of `Tensor` for a multi-input dataset
|
<p>I'm trying to create a Dataset with multiple inputs, which can be fed into a model with multiple inputs. It works fine when I'm working with a single input, where I can easily set the shape using <code>set_shape</code> on that tensor inside my <code>Dataset.map</code> function. But now I don't know which shapes I should set the tensors to.</p>
<p>Related information:</p>
<ul>
<li><code>x_img1</code> is 10 dimensional 1024 by 1024 image</li>
<li><code>x_img2</code> is a 2D, 32 by 64 image</li>
</ul>
<p>Here is my code:</p>
<pre><code>def read_image(path, rel):
    # blah blah blah, read somehow
    return image

def read_image1(path1, filter0):
    # blah blah blah, read somehow
    return image

def preprocess(x, y):
    def func(x, y):
        x = json.loads(x)
        x_img1 = read_image(x['path'], x['rel'])  # 3D image
        x_img2 = read_image(x['path-fork'], x['filter'])  # 2D image with different shape
        # image resizing will lose data
        y = tf.keras.utils.to_categorical(y, num_classes=len(set(df['label'].values)))  # todo: yeah, i'll optimize it never
        return (x_img1, x_img2), y

    _x, _y = tf.numpy_function(func, [x, y], [tf.float32, tf.float32])
    # _x.set_shape([256, 256, 3]) <--- previously i used to do this
    _y.set_shape([10])
    return _x, _y

# here `x` is an array of string, and those strings are actually json/dictionary
def tf_dataset(x, y, batch=16):
    dataset = tf.data.Dataset.from_tensor_slices((x, y))
    dataset = dataset.shuffle(buffer_size=1000)
    dataset = dataset.map(preprocess)
    dataset = dataset.batch(batch)
    dataset = dataset.prefetch(16)
    return dataset
</code></pre>
<p><a href="https://i.sstatic.net/W37pM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W37pM.png" alt="multi input model with tf.data.Dataset" /></a></p>
<p>Now it's throwing an error saying:</p>
<pre><code>InternalError: Graph execution error:
Unsupported object type numpy.ndarray
[[{{node PyFunc}}]]
[[IteratorGetNext]] [Op:__inference_train_function_30745]
</code></pre>
<p>Here is the notebook and data(same data being replicated): <a href="https://gist.github.com/maifeeulasad/25975541c888aa9bf865ad1827010907" rel="nofollow noreferrer">https://gist.github.com/maifeeulasad/25975541c888aa9bf865ad1827010907</a></p>
|
<python><tensorflow><keras><tensorflow-datasets>
|
2023-02-28 07:27:41
| 0
| 4,689
|
Maifee Ul Asad
|
75,588,957
| 7,032,878
|
Managing scheduled functions with threading in Python script
|
<p>I have to set up a Python script which should always be up & running and should be responsible for calling several functions on fixed schedules.
I was considering the following approach, which leverages the libraries <code>pycron</code> and <code>threading</code>, assuming that <code>func1</code>, <code>func2</code> and <code>func3</code> are Python functions defined somewhere else.
My aim is to accomplish the following targets:</p>
<ol>
<li>Run different functions at specific times (with cron-like schedule)</li>
<li>Make possible for those function to overlap (= run simultaneously) if needed</li>
<li>Avoid that the same function is being started again if the previous run is not finished</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import time

import pycron
from threading import Thread
from custom_modules import func1, func2, func3

while True:
    if not t1.is_alive() and pycron.is_now('*/15 * * * *'):  # True every 15th minute
        t1 = Thread(target=func1)
        t1.start()
        time.sleep(60)
    if not t2.is_alive() and pycron.is_now('30 22 * * *'):  # True at 22:30 every day
        t2 = Thread(target=func2)
        t2.start()
        time.sleep(60)
    if not t3.is_alive() and pycron.is_now('0 0 * * SUN'):  # True at midnight on every Sunday
        t3 = Thread(target=func3)
        t3.start()
        time.sleep(60)
    else:
        time.sleep(1)
</code></pre>
<p>I'm not sure this approach is sound, especially for target (3) mentioned above.</p>
<p>What do you think?</p>
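Two concrete issues with the loop as posted: `t1.is_alive()` is evaluated before `t1` is ever assigned (a `NameError` on the first pass), and the final `else` pairs only with the third `if`, so iterations where just the first two checks fail spin without sleeping. Both go away if each function gets a small guard object (a sketch with a hypothetical `Job` name) that owns its thread, which also gives target (3) for free:

```python
import threading

class Job:
    """Starts its target in a thread, but never while a previous run is alive."""

    def __init__(self, func):
        self.func = func
        self._thread = None

    def idle(self) -> bool:
        return self._thread is None or not self._thread.is_alive()

    def maybe_start(self) -> bool:
        if not self.idle():
            return False          # previous run still going: skip (target 3)
        self._thread = threading.Thread(target=self.func, daemon=True)
        self._thread.start()
        return True

# The scheduler loop then becomes: for each job whose cron expression
# matches right now, call job.maybe_start(); sleep once at the bottom.
```

Checking every job on every pass and sleeping once at the end of the loop also avoids missing a match while an earlier branch sleeps.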
|
<python><python-multithreading>
|
2023-02-28 07:23:57
| 2
| 627
|
espogian
|
75,588,949
| 10,844,937
|
fastapi-RuntimeError: attached to a different loop
|
<p>I use <code>fastapi</code> to do async work with <code>BackgroundTasks</code> in a <code>post</code> request to run a heavy task. And I have a <code>get</code> request to check the status of the heavy task.</p>
<pre><code>from fastapi import BackgroundTasks

@router.post("/upload")
async def upload(background_tasks: BackgroundTasks):
    task_id = str(uuid.uuid1())
    background_tasks.add_task(run_task, task_id)
    return {'code': 200, 'task_id': task_id, 'status': True}

@router.get("/status")
async def upload():
    task_id = str(uuid.uuid1())
    query = await MyModel.filter(task_id=task_id).first()
    return {'code': 200, 'status': query.status, 'status': True}
</code></pre>
<p>Because I have to use <code>db</code> in <code>long_time_task</code>, here I use <code>asyncio</code> in <code>run_task</code> to connect the two.</p>
<pre><code>import asyncio

def run_task(task_id):
    loop = asyncio.new_event_loop()
    future = loop.create_task(long_time_task(task_id))
    loop.run_until_complete(asyncio.wait([future]))
    future.result()
</code></pre>
<p>Here is the <code>long_time_task</code>.</p>
<pre><code>from tortoise import Tortoise
from dbs.database import TORTOISE_ORM

async def long_time_task(task_id):
    await Tortoise.init(config=TORTOISE_ORM)
    await Tortoise.generate_schemas()
    await MyModel.create(task_id=task_id)
</code></pre>
<p>Here if I send one or mutiple <code>post</code> request, everything goes well.</p>
<p>However if I send a <code>get</code> request, here comes a problem.</p>
<pre><code>RuntimeError: Task <Task pending name='Task-17' coro=<RequestResponseCycle.run_asgi() running at /usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py:404> cb=[set.discard()]> got Future <Task pending name='Task-18' coro=<Pool._wakeup() running at /usr/local/lib/python3.10/site-packages/aiomysql/pool.py:203>> attached to a different loop
</code></pre>
<p>The code which causes the problem is <code>query = await MyModel.filter(task_id=task_id).first()</code></p>
<p>Does anyone know how to fix this?</p>
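For context, the error means an awaitable bound to one event loop was awaited from another: `run_task` builds a private loop, the Tortoise/aiomysql connection pool gets bound to it, and the later GET handler (running on uvicorn's loop) touches that same pool. The mechanism reproduces with nothing but the standard library:

```python
import asyncio

async def demo() -> str:
    loop_a = asyncio.new_event_loop()   # a second, never-running loop
    fut = loop_a.create_future()        # future bound to loop_a
    try:
        await fut                       # awaited from asyncio.run()'s loop
    except RuntimeError as exc:
        return str(exc)                 # "... attached to a different loop"
    finally:
        loop_a.close()
    return "no error"

message = asyncio.run(demo())
```

The usual fix is to create no extra loop at all: `BackgroundTasks` accepts a coroutine function directly (`background_tasks.add_task(long_time_task, task_id)`) and awaits it on the application's own loop, so the pool created by `Tortoise.init` and the one used by the GET handler coincide.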
|
<python><asynchronous><async-await><fastapi>
|
2023-02-28 07:21:54
| 1
| 783
|
haojie
|
75,588,757
| 809,440
|
Grammar file for key=value pairs
|
<p>I'm attempting to define a grammar file that parses the following:</p>
<pre><code>Section0:
Key0 = "Val0"
Key1 = "Val1"
…
</code></pre>
<p>My attempts thus far have just resulted in one long concatenation string with no real ability to split.</p>
<pre><code>Section0:
(var=ID '=' String)*
</code></pre>
<p>I'm looking to have a list of dictionary-like objects.</p>
|
<python><textx>
|
2023-02-28 06:57:12
| 1
| 387
|
5k1zk17
|
75,588,670
| 14,351,788
|
Cannot test my flask app on AWS EC2 via postman
|
<p>Update:
I updated my code by adding <code>host='0.0.0.0'</code>, but the problem is still there.</p>
<p>I created a simple Flask app on an AWS EC2 instance:</p>
<pre><code>import json
from sutime import SUTime
from flask import Flask, request

app = Flask(__name__)

@app.route("/test/", methods = ["POST"])
def time_word_reg():
    # this function is used to detect "time related words" (like today, tomorrow), and return the analysis result.
    raw = request.get_data()
    data = json.loads(raw)
    input_str = data['key_str']
    sutime = SUTime(mark_time_range = True)
    return json.dumps(sutime.parse(input_str), sort_keys = True, indent = 4)

if __name__ == "__main__":
    app.run(host = '0.0.0.0')
    #app.run(host = '0.0.0.0', port = 5000) # same result
</code></pre>
<p>About my security group setting, I open the ALL TCP port(0-65535), ALL ICMP - IP4, HTTPS(443), HTTP(80)</p>
<p>When I run my app, I got this:</p>
<pre><code>^C(venv) ubuntu@ip-172-31-57-24:~/project/p1$ python3 app.py
* Serving Flask app 'app'
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:5000
* Running on http://172.31.57.24:5000
Press CTRL+C to quit
</code></pre>
<p>I thought my app runs successfully</p>
<p>However, I tried to use Postman to test it with three addresses (instance public IP, 127.0.0.1, and 172.31.57.24), but all failed with "Could not send request. Error: Request timed out".</p>
<p>the postman error info as below:</p>
<pre><code>POST http://3.84.161.242:5000/test/
Error: Invalid character in header content ["Host"]
Request Headers
Content-Type: application/json
User-Agent: PostmanRuntime/7.31.1
Accept: */*
Postman-Token: 939c1f99-8edd-40e6-9f76-ea918b72cc50
Host: 3.84.161.242：5000
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
POST http://127.0.0.1:5000/test/
Error: connect ECONNREFUSED 127.0.0.1:5000
Request Headers
Content-Type: application/json
User-Agent: PostmanRuntime/7.31.1
Accept: */*
Postman-Token: e39fd4f8-7a10-42d8-a002-1437bfead04b
Host: 127.0.0.1:5000
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
POST http://172.31.57.24:5000/test/
Error: connect ETIMEDOUT 172.31.57.24:5000
Request Headers
Content-Type: application/json
User-Agent: PostmanRuntime/7.31.1
Accept: */*
Postman-Token: 04c83cbc-1c8c-4efe-ab20-1fd7d999d950
Host: 172.31.57.24:5000
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
</code></pre>
<p>What's more, I pinged my instance's public IP (3.84.161.242) and it works.</p>
<pre><code>C:\Users\admin>ping 3.84.161.242
Pinging 3.84.161.242 with 32 bytes of data:
Reply from 3.84.161.242: bytes=32 time=188ms TTL=39
Reply from 3.84.161.242: bytes=32 time=199ms TTL=39
Reply from 3.84.161.242: bytes=32 time=187ms TTL=39
Reply from 3.84.161.242: bytes=32 time=199ms TTL=39
</code></pre>
<p>I don't know how the problem arises or how to fix it.</p>
<p>P.S. Please point me to similar questions rather than closing my question. How can you be sure an answer from 7 years ago solves my problem?</p>
|
<python><amazon-web-services><flask><amazon-ec2>
|
2023-02-28 06:47:16
| 1
| 437
|
Carlos
|
75,588,271
| 8,550,160
|
Unable to determine why eigenvalue results are different between Julia and Python for specific case
|
<p>I'm using Julia to do some linear algebra calculations but it gave me negative eigenvalues when I know the matrix is positive definite.</p>
<p>I'm fairly new to Julia so is there some reason the Julia code below would have such different behavior than the corresponding python code? Could it be the <code>abs</code> function? At this point I'm at a loss.</p>
<p>The Julia code is:</p>
<pre><code>using LinearAlgebra

time = collect(range(0.0, 10.0, length=400))
H = 0.8
N = length(time)
C_N = Matrix{Float32}(undef, N, N)
for i in 1:N
    for j in 1:N
        ti, tj = time[i], time[j]
        C_N[i, j] = 0.5*(ti^(2*H) + tj^(2*H) - abs(ti - tj)^(2*H))
    end
end
Decomposition = eigen(C_N)
eigen_vals = Decomposition.values
has_negative = any(x -> x < 0.0, eigen_vals)
if has_negative
    @show "Has Negative eigenvalue"
else
    @show "Only positive eigenvalues"
end
has_negative
</code></pre>
<p>The output is:</p>
<pre class="lang-bash prettyprint-override"><code>"Has Negative eigenvalue" = "Has Negative eigenvalue"
</code></pre>
<p>The corresponding python code is:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

H = 0.8
N = 400
time = np.linspace(0.0, 10.0, num=N)
C_N = np.zeros((N, N))
print(time.shape)
for i in range(N):
    for j in range(N):
        ti, tj = time[i], time[j]
        C_N[i, j] = 0.5*(ti**(2*H) + tj**(2*H) - np.abs(ti - tj)**(2*H))
w, V = np.linalg.eig(C_N)
neg_mask = w < 0.0
if np.any(neg_mask):
    print("Negative eigenvalue found")
else:
    print("Only positive eigenvalues")
</code></pre>
<p>which outputs:</p>
<pre class="lang-bash prettyprint-override"><code>"Only positive eigenvalues"
</code></pre>
<p>For reference I am using Julia v"1.8.2".</p>
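The difference is precision, not the algorithm: the Julia code builds a `Matrix{Float32}` while NumPy defaults to float64. This covariance matrix is also only positive *semi*-definite (the time grid starts at 0, so its first row and column are exactly zero and one eigenvalue is exactly 0), which makes float32 rounding dip below zero easily. A vectorized NumPy sketch comparing the two dtypes (`eigh` is the appropriate routine for a symmetric matrix):

```python
import numpy as np

H = 0.8
t = np.linspace(0.0, 10.0, num=400)

# C[i, j] = 0.5 * (ti^2H + tj^2H - |ti - tj|^2H), built without loops
C = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))

# eigh exploits symmetry and returns real eigenvalues in ascending order
w64 = np.linalg.eigh(C.astype(np.float64))[0]

# In float32 the same matrix routinely yields (tiny) negative eigenvalues
w32 = np.linalg.eigh(C.astype(np.float32))[0]
```

The usual remedies: build the matrix in `Float64` in Julia as well, or clip/shift tiny negative eigenvalues (e.g. add a small multiple of the identity) before anything that requires strict positive definiteness.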
|
<python><numpy><julia><eigenvalue>
|
2023-02-28 05:47:23
| 1
| 703
|
jman
|
75,588,250
| 4,237,198
|
What are the downsides to relying purely on pyproject.toml?
|
<p>Say you have a Python program that you can successfully package using only <code>pyproject.toml</code>. What are the downsides? Why use <code>setup.py</code> or <code>setup.cfg</code> in this case?</p>
|
<python><setuptools><setup.py><python-packaging><pyproject.toml>
|
2023-02-28 05:43:21
| 1
| 367
|
Thermodynamix
|
75,588,227
| 14,546,482
|
Connect to many databases and run sql query using pyodbc - multi threading?
|
<p>I'm trying to connect to many Pervasive databases using pyodbc and then run a SQL query on each database. The current problem I'm facing is that this script takes too long, because it's trying to connect and run a SQL query one at a time. I thought multithreading might be a good solution for this. I'm very new to multithreading and was wondering what the best approach for something like this would be.</p>
<p>Any tips would be GREATLY appreciated, thanks for looking.</p>
<p>My connection looks something like this:</p>
<pre><code>import pyodbc
import pandas as pd
import logging
server = '1.1.1.1:111'
database = 'ABC'
username = 'test'
password = 'test123'
list_of_databases = list(large_list)
sql = "SELECT * FROM Table where Date between '20230226' and '20230227'"
</code></pre>
<p>my loop looks like this where I get all of the connections and establish a connection w/ all databases.</p>
<pre><code>def connect_to_pervasive(databases, server):
    try:
        logger.info("connecting to Pervasive Server")
        connect_string = 'DRIVER=Pervasive ODBC Interface;SERVERNAME={server_address};DBQ={db}'
        connections = [pyodbc.connect(connect_string.format(db=n, server_address=server)) for n in databases]
        cursors = [conn.cursor() for conn in connections]
        logger.info("Connection established!")
    except Exception as e:
        logger.critical(f"Error: {e}")
    return cursors
</code></pre>
<p>Here is where I think I should do my multithreading. Right now this is opening MANY connections, not closing them after they are used, and going one at a time. Ideally I'd want multiple connections running at once. Any tips would be GREATLY appreciated, thanks for looking.</p>
<pre><code>data = []
try:
    for cur in connections:
        rows = cur.execute(sql).fetchall()
        df = pd.DataFrame.from_records(rows, columns=[col[0] for col in cur.description])
        data.append(df)
except Exception as e:
    print(e)
    logger.critical(f'Error: {e}')
finally:
    for cur in connections:
        cur.close()
</code></pre>
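A common shape for this (a sketch: the connect callable and SQL text are parameters, so nothing Pervasive-specific is assumed): each task opens its own connection, runs the query, and closes the connection in a `finally`, while a `ThreadPoolExecutor` bounds how many databases are queried at once. pyodbc releases the GIL during I/O, so the threads genuinely overlap.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_all(databases, connect_fn, sql, max_workers=8):
    """Run `sql` against every database concurrently.

    connect_fn(db_name) must return a DB-API connection, e.g.
    lambda db: pyodbc.connect(connect_string.format(db=db, ...)).
    """
    def fetch_one(db):
        conn = connect_fn(db)
        try:
            cur = conn.cursor()
            cur.execute(sql)
            return cur.fetchall()
        finally:
            conn.close()              # always released, even on error

    results, errors = {}, {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_one, db): db for db in databases}
        for fut in as_completed(futures):
            db = futures[fut]
            try:
                results[db] = fut.result()
            except Exception as exc:
                errors[db] = exc      # one bad database doesn't sink the rest
    return results, errors
```

Each returned row set can then be turned into a DataFrame exactly as in the loop above.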
|
<python><pandas><multithreading><pyodbc>
|
2023-02-28 05:39:51
| 1
| 343
|
aero8991
|
75,588,158
| 1,905,276
|
How to insert a line at a particular offset in a file using Python
|
<pre class="lang-py prettyprint-override"><code>import fileinput

file_name = '/Users/xxxx/PycharmProjects/pythonProject/HWDataList.txt'
for line in fileinput.FileInput(file_name, inplace=1):
    if 'nail' in line:
        line = line.rstrip()
        line = line.replace(line, line+'\n'+'hammer'+'\n')
    print(line)
</code></pre>
<p>The text file looks like this:</p>
<pre><code>xxxx
yyyy
zzzzz
nail
kkkk
jjjjj
</code></pre>
<p>When it is done I want to see it like this:</p>
<pre><code>xxxx
yyyy
zzzzz
nail
hammer
kkkk
jjjjj
</code></pre>
<p>Please advise.</p>
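For reference, a tighter sketch of the same in-place edit that avoids the `rstrip`/`replace` newline juggling (with `inplace=True`, whatever is printed replaces the file's contents): print each original line untouched, then emit the new line whenever the needle matches.

```python
import fileinput

def insert_after(path, needle, new_line):
    """In-place: add `new_line` right after every line containing `needle`."""
    with fileinput.FileInput(path, inplace=True) as f:
        for line in f:
            print(line, end="")      # the original line, newline included
            if needle in line:
                print(new_line)      # the inserted line gets its own newline
```

Usage would be `insert_after(file_name, "nail", "hammer")`.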
|
<python>
|
2023-02-28 05:25:26
| 4
| 411
|
Santhosh Kumar
|
75,587,804
| 3,811,401
|
Keras Compute loss between 2 Ragged Tensors
|
<p>I have 2 ragged tensors defined as follows</p>
<pre><code>import tensorflow as tf
# Tensor 1
pred_score = tf.ragged.constant([
[[-0.51760715], [-0.18927467], [-0.10698503]],
[[-0.58782816], [-0.13076714], [-0.04999146], [-0.1772059], [-0.14299354]]
])
pred_score = tf.squeeze(pred_score, axis=-1)
pred_score_dist = tf.nn.softmax(pred_score, axis=-1)
print(pred_score_dist)
print(pred_score_dist.shape)
>> <tf.RaggedTensor [[0.25664675, 0.35639265, 0.38696054],
[0.1358749, 0.21460423, 0.23265839, 0.20486614, 0.21199636]]>
(2, None)
# Tensor 2
actual_score = tf.ragged.constant([
[3.0, 2.0, 2.0],
[3.0, 3.0, 1.0, 1.0, 0.0]
])
actual_score_dist = tf.nn.softmax(actual_score, axis=-1)
print(actual_score_dist)
print(actual_score_dist.shape)
<tf.RaggedTensor [[0.5761169, 0.21194157, 0.21194157],
[0.4309495, 0.4309495, 0.05832267, 0.05832267, 0.021455714]]>
(2, None)
</code></pre>
<p>I want to compute the KL divergence row by row, and finally the overall divergence. I tried running this, however it gives an error:</p>
<pre><code>loss = tf.keras.losses.KLDivergence()
batch_loss = loss(actual_score_dist, pred_score_dist)
</code></pre>
<blockquote>
<p><code>ValueError: TypeError: object of type 'RaggedTensor' has no len()</code></p>
</blockquote>
<p>Can someone please help me</p>
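While sorting out the ragged-tensor call (one workaround is densifying with `.to_tensor()` plus a mask, or looping over the rows), the quantity itself is cheap to cross-check in plain Python, which helps validate whatever TF version is eventually used:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions of the same length."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

actual = [
    [0.5761169, 0.21194157, 0.21194157],
    [0.4309495, 0.4309495, 0.05832267, 0.05832267, 0.021455714],
]
pred = [
    [0.25664675, 0.35639265, 0.38696054],
    [0.1358749, 0.21460423, 0.23265839, 0.20486614, 0.21199636],
]

per_row = [kl_divergence(a, p) for a, p in zip(actual, pred)]
batch_loss = sum(per_row) / len(per_row)   # Keras's KLDivergence averages rows
```

Each row has its own length, which is exactly why a dense batched implementation needs either ragged-aware ops or padding plus a mask.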
|
<python><tensorflow><keras><tensorflow2.0><tf.keras>
|
2023-02-28 04:09:58
| 1
| 4,820
|
Hardik Gupta
|
75,587,763
| 2,056,201
|
What does Mat[0,0,i,2] function do in Python with OpenCV
|
<p>I am trying to understand this code and to convert it to Java</p>
<p>Code is from this tutorial, full code snippet shown below
<a href="https://pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/" rel="nofollow noreferrer">https://pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/</a></p>
<p>I want to convert the line
<code>confidence = detections[0, 0, i, 2]</code> to Java</p>
<p><code>detections</code> is an OpenCV Mat class which is returned by <code>net.forward()</code></p>
<p>First I need to understand what this means</p>
<p>Is this an ROI? Why is there an array of 4 values, what do they represent?</p>
<p>Next, how will this line of code look in Java?</p>
<p>Also this line is incredibly confusing to read
<code>box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])</code></p>
<p>What does it mean and how can it be converted to Java?</p>
<pre><code># pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()

# loop over the detections
for i in np.arange(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the
    # prediction
    confidence = detections[0, 0, i, 2]
    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence > args["confidence"]:
        # extract the index of the class label from the `detections`,
        # then compute the (x, y)-coordinates of the bounding box for
        # the object
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")
        # display the prediction
        label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
        print("[INFO] {}".format(label))
        cv2.rectangle(image, (startX, startY), (endX, endY),
            COLORS[idx], 2)
        y = startY - 15 if startY - 15 > 15 else startY + 15
        cv2.putText(image, label, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
</code></pre>
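For context (a sketch with made-up numbers): for these SSD/Caffe detectors, `net.forward()` returns a 4-D blob of shape `(1, 1, N, 7)`, where each of the `N` rows reads `[batch_id, class_id, confidence, x1, y1, x2, y2]` with box corners normalized to `[0, 1]`. So `detections[0, 0, i, 2]` is simply the confidence of detection `i`, and `detections[0, 0, i, 3:7] * np.array([w, h, w, h])` scales the normalized corners to pixel coordinates. In Java the 4-D `Mat` is usually flattened first, e.g. `detections.reshape(1, (int) detections.total() / 7)`, then read row by row with `get(i, 2)` and `get(i, 3)`..`get(i, 6)`.

```python
import numpy as np

w, h = 300, 300   # original image size, as in the tutorial's (h, w) variables

# Fake forward() output: 1 batch, 1 "channel", 2 detections, 7 values each
detections = np.zeros((1, 1, 2, 7), dtype=np.float32)
detections[0, 0, 0] = [0, 15, 0.9, 0.1, 0.2, 0.5, 0.6]   # class 15, conf 0.9
detections[0, 0, 1] = [0, 7, 0.3, 0.0, 0.0, 1.0, 1.0]    # class 7,  conf 0.3

i = 0
confidence = detections[0, 0, i, 2]                      # scalar confidence
class_id = int(detections[0, 0, i, 1])                   # index into CLASSES
box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])  # to pixel coords
start_x, start_y, end_x, end_y = box.astype("int")
```

The `3:7` slice is therefore not an ROI but the four normalized box corners of one detection.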
|
<python><java><android><opencv>
|
2023-02-28 04:01:26
| 1
| 3,706
|
Mich
|
75,587,700
| 12,870,750
|
Vispy and PyQt5 fullscreen resize event problem
|
<p>I have a line viewer made in <code>PyQt5</code>; I managed to make a <code>vispy</code> scene as the viewport for the <code>QGrapichsView</code>. I have made it zoom and pan capable.</p>
<p><strong>Problem</strong></p>
<p>The viewer works very fast, but the problem comes when I manually maximize the window (pushing the fullscreen button in the upper right corner): the QGraphicsView (main widget) resizes fine, but the vispy SceneCanvas does not resize and stays the same.</p>
<p><strong>states</strong></p>
<p><em>Before</em> the manual fullscreen click. Even here it is not well positioned: try to pan or zoom and you will see there is a barrier on the right side.</p>
<p><a href="https://i.sstatic.net/nBT14.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nBT14.png" alt="enter image description here" /></a></p>
<p><em>After</em> the fullscreen click it goes to the bottom left and does not resize.</p>
<p><a href="https://i.sstatic.net/FBhPt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FBhPt.png" alt="enter image description here" /></a></p>
<p>This code comes from needing a faster approach to show many lines, but now the problem is the correct positioning of the canvas.</p>
<p><strong>What I have tried</strong></p>
<p>Mainly, I have made a signal from the <code>QGraphicsView</code> resize event to pass to the <code>vispy</code> canvas, but it just won't work.</p>
<p><strong>Code</strong></p>
<pre><code>import sys
from PyQt5.QtGui import QPainter, QPaintEvent
from PyQt5 import QtCore
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QGraphicsView, QApplication
import vispy.scene
from vispy.scene import visuals
import numpy as np
def dummy_generator(num_points, max_points):
    from scipy.spatial import Voronoi
    # Define the city dimensions
    city_width = 8000
    city_height = 6000
    points = np.random.rand(num_points, 2) * np.array([city_width, city_height])
    # Compute the Voronoi diagram of the points to generate the parcel borders
    vor = Voronoi(points)
    # Build a list of vertex coordinates for each polyline representing the parcel borders
    polylines = []
    pos_arr = []
    for ridge in vor.ridge_vertices:
        if ridge[0] >= 0 and ridge[1] >= 0:
            x1, y1 = vor.vertices[ridge[0]]
            x2, y2 = vor.vertices[ridge[1]]
            if x1 < 0 or x1 > city_width or y1 < 0 or y1 > city_height:
                continue
            if x2 < 0 or x2 > city_width or y2 < 0 or y2 > city_height:
                continue
            # Generate intermediate points on the line to obtain smoother polylines
            val = np.random.randint(3, max_points)
            xs = np.linspace(x1, x2, num=val)
            ys = np.linspace(y1, y2, num=val)
            polyline = [(xs[i], ys[i]) for i in range(val)]
            points = np.array(polyline).T
            polylines.append(points)
            pos_arr.append(max(points.shape))
    # Calculate the start and end indices of each contour
    pos_arr = np.array(pos_arr) - 1
    fill_arr = np.ones(np.sum(pos_arr)).astype(int)
    zero_arr = np.zeros(len(pos_arr)).astype(int)
    c = np.insert(fill_arr, np.cumsum(pos_arr), zero_arr)
    connect = np.where(c == 1, True, False)
    coords = np.concatenate(polylines, axis=1)
    return coords.T, connect
class VispyViewport(QGraphicsView):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.scale_factor = 1.5
        self.setRenderHint(QPainter.Antialiasing)
        self.setInteractive(True)
        # Create a VisPy canvas and add it to the QGraphicsView
        # self.canvas = canvas = vispy.scene.SceneCanvas(app='pyqt5', show=True, size=(2100, 600))
        self.canvas = canvas = vispy.scene.SceneCanvas(app='pyqt5', show=True)
        vispy_widget = canvas.native
        vispy_widget.setParent(self)
        # Set the VisPy widget as the viewport for the QGraphicsView
        self.setViewport(vispy_widget)
        self.setGeometry(QtCore.QRect(0, 0, 2100, 600))
        # Create a grid layout and add it to the canvas
        grid = canvas.central_widget.add_grid()
        # Create a ViewBox and add it to the grid layout
        self.view_vispy = grid.add_view(row=0, col=0, bgcolor='#c0c0c0')
        self.grid = self.canvas.central_widget.add_grid()
        line_data, connect = dummy_generator(5000, 50)
        self.line = visuals.Line(line_data, parent=self.view_vispy.scene, connect=connect, color=(0.50196, 0.50196, 0.50196, 1))
        self.view_vispy.camera = vispy.scene.PanZoomCamera()
        self.view_vispy.camera.set_range()
        self.view_vispy.camera.aspect = 1.0
        # get m transformer
        self.tform = self.view_vispy.scene.transform

    def wheelEvent(self, event):
        # Get the center of the viewport in scene coordinates
        pos = event.pos()
        # Determine the zoom factor
        if event.angleDelta().y() > 0:
            zoom_factor = 1 / self.scale_factor
        else:
            zoom_factor = self.scale_factor
        # map to vispy coordinates
        center = self.tform.imap((pos.x(), pos.y(), 0))
        # apply zoom factor to a center anchor
        self.view_vispy.camera.zoom(zoom_factor, center=center)

    def mousePressEvent(self, event):
        if event.button() == Qt.MiddleButton:
            self.setDragMode(QGraphicsView.ScrollHandDrag)
            self.setInteractive(True)
            self.mouse_press_pos = event.pos()
            self.mouse_press_center = self.view_vispy.camera.center[:2]
        else:
            super().mousePressEvent(event)

    def mouseReleaseEvent(self, event):
        if event.button() == Qt.MiddleButton:
            self.setDragMode(QGraphicsView.NoDrag)
            self.setInteractive(False)
        else:
            super().mouseReleaseEvent(event)

    def mouseMoveEvent(self, event):
        if self.dragMode() == QGraphicsView.ScrollHandDrag:
            # Get the difference in mouse position
            diff = event.pos() - self.mouse_press_pos
            # Get the movement vector in scene coordinates
            move_vec = self.tform.imap((diff.x(), diff.y())) - self.tform.imap((0, 0))
            # Apply panning and set center
            self.view_vispy.camera.center = (self.mouse_press_center[0] - move_vec[0], self.mouse_press_center[1] - move_vec[1])
        else:
            super().mouseMoveEvent(event)

    def paintEvent(self, event: QPaintEvent) -> None:
        # force send paintevent
        self.canvas.native.paintEvent(event)
        return super().paintEvent(event)


if __name__ == '__main__':
    app = QApplication(sys.argv)
    view = VispyViewport()
    view.show()
    # Start the Qt event loop
    sys.exit(app.exec_())
</code></pre>
|
<python><pyqt5><vispy>
|
2023-02-28 03:48:30
| 1
| 640
|
MBV
|
75,587,622
| 13,638,243
|
Accumulating lists in Polars
|
<p>Say I have a <code>pl.DataFrame()</code> with 2 columns: The first column contains <code>Date</code>, the second <code>List[str]</code>.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame([
pl.Series('Date', [2000, 2001, 2002]),
pl.Series('Ids', [
['a'],
['b', 'c'],
['d'],
])
])
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Ids</th>
</tr>
</thead>
<tbody>
<tr>
<td>2000</td>
<td><code>['a']</code></td>
</tr>
<tr>
<td>2001</td>
<td><code>['b', 'c']</code></td>
</tr>
<tr>
<td>2002</td>
<td><code>['d']</code></td>
</tr>
</tbody>
</table>
</div>
<p>Is it possible to accumulate the <code>List[str]</code> column so that each row contains itself and all previous lists in Polars? Like so:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Ids</th>
</tr>
</thead>
<tbody>
<tr>
<td>2000</td>
<td><code>['a']</code></td>
</tr>
<tr>
<td>2001</td>
<td><code>['a', 'b', 'c']</code></td>
</tr>
<tr>
<td>2002</td>
<td><code>['a', 'b', 'c', 'd']</code></td>
</tr>
</tbody>
</table>
</div>
|
<python><python-polars>
|
2023-02-28 03:30:40
| 2
| 363
|
Neotenic Primate
|
75,587,611
| 6,063,706
|
Scale gradient for specific parameter in pytorch?
|
<p>Suppose I have some neural network with parameters A,B,C. Whenever a gradient update is applied to C, I want it to be scaled differently than what the normal gradient would be (i.e. every gradient update is 2x or 1/3x what the calculated gradient is). I want the gradients applied to all other parameters to stay the same. How would I go about this in pytorch?</p>
|
<python><deep-learning><pytorch><neural-network>
|
2023-02-28 03:28:54
| 1
| 1,035
|
Tob
|
75,587,442
| 134,044
|
Validate Pydantic dynamic float enum by name with OpenAPI description
|
<p>Following on from <a href="https://stackoverflow.com/questions/67911340/pydantic-enum-load-name-instead-of-value">this question</a> and <a href="https://github.com/pydantic/pydantic/discussions/2980" rel="nofollow noreferrer">this discussion</a> I am now trying to create a Pydantic <code>BaseModel</code> that has a field with a <code>float</code> <code>Enum</code> that is <em>created dynamically</em> <strong>and</strong> is <em>validated by name</em>. (Down the track I will probably want to use <code>Decimal</code> but for now I'm dealing with <code>float</code>.)</p>
<p><a href="https://github.com/pydantic/pydantic/discussions/2980" rel="nofollow noreferrer">The discussion</a> provides a solution to convert all Enums to validate by name, but I'm looking for how to do this for one or more <em>individual</em> fields, not a universal change to all Enums.</p>
<p>I consider this to be a common use case. The model uses an Enum which hides implementation details from the caller. The valid field values that a caller can supply are a limited list of names. These names are associated with internal values (in this case <code>float</code>) that the back-end wants to operate on, without requiring the caller to know them.</p>
<p>The Enum valid names and values do change dynamically and are loaded at run time but for the sake of clarity this would result in an Enum something like the following. Note that the <code>Sex</code> enum needs to be treated normally and validated and encoded by value, but the <code>Factor</code> enum needs to be validated by name:</p>
<pre><code>from enum import Enum
from pydantic import BaseModel
class Sex(str, Enum):
MALE = "M"
FEMALE = "F"
class Factor(Enum):
single = 1.0
half = 0.4
quarter = 0.1
class Model(BaseModel):
sex: Sex
factor: Factor
class Config:
json_encoders = {Factor: lambda field: field.name}
model = Model(sex="M", factor="half")
# Error: only accepts e.g. Model(sex="M", factor=0.4)
</code></pre>
<p>This is what I want but doesn't work because the normal Pydantic Enum behaviour requires <code>Model(factor=0.4)</code>, but my caller doesn't know the particular <code>float</code> that's in use right now for this factor, it can and should only provide <code>"half"</code>. The code that manipulates the <code>model</code> internally always wants to refer to the <code>float</code> and so I expect it to have to use <code>model.factor.value</code>.</p>
<p>It's fairly simple to create the Enum dynamically, but that doesn't provide any Pydantic support for validating on <code>name</code>. It's all automatically validated by <code>value</code>. So I think this is where most of the work is:</p>
<pre><code>Factor = Enum("Factor", {"single": 1.0, "half": 0.4, "quarter": 0.1})
</code></pre>
<p>The standard way for Pydantic to customise serialization is with the <code>json_encoders</code> <code>Config</code> attribute. I've included that in the sample static Enum. That doesn't seem to be problematic.</p>
<p>Finally, there needs to be support to provide the right description to the OpenAPI schema.</p>
<p>Actually, in my use-case I only need the Enum name/values to be dynamically established. So an implementation that modifies a declared Enum would work, as well as an implementation that creates the Enum type.</p>
|
<python><dynamic><enums><openapi><pydantic>
|
2023-02-28 02:49:57
| 3
| 4,109
|
NeilG
|
75,587,316
| 3,819,007
|
Subtract values of single row polars frame from multiple row polars frame
|
<p>Let's say I have a Polars DataFrame like this in Python:</p>
<pre class="lang-py prettyprint-override"><code> newdata = pl.DataFrame({
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8],
'C': [9, 10, 11, 12],
'D': [13, 14, 15, 16]
})
</code></pre>
<p>And I want to subtract from every value in every column the corresponding value from another frame</p>
<pre class="lang-py prettyprint-override"><code>baseline = pl.DataFrame({
'A': [1],
'B': [2],
'C': [3],
'D': [4]
})
</code></pre>
<p>In pandas, and numpy, the baseline frame is automagically broadcasted to the size of the newdata, and I can just do;</p>
<pre class="lang-py prettyprint-override"><code>data=newdata-baseline
</code></pre>
<p>But that doesn't work in polars. So what is the cleanest way to achieve this in polars?</p>
|
<python><python-polars>
|
2023-02-28 02:21:59
| 1
| 3,325
|
visibleman
|
75,587,171
| 3,821,009
|
Pandas set values from another DataFrame based on dates
|
<p>This:</p>
<pre><code>periods = 5 * 3
df1 = pandas.DataFrame(dict(
v1=numpy.arange(2, 2 + periods) * 2,
v2=numpy.arange(3, 3 +periods) * 3),
index=pandas.date_range('2023-01-01', periods=periods, freq='8H'))
print(df1)
periods = 3
df2 = pandas.DataFrame(dict(
v3=numpy.arange(4, 4 + periods) * 4,
v4=numpy.arange(5, 5 + periods) * 5),
index=pandas.date_range('2023-01-02', periods=periods, freq='2D'))
print(df2)
df1.loc[df1.index.date, ['v3', 'v4']] = df2
print(df1)
</code></pre>
<p>results in:</p>
<pre><code> v1 v2
2023-01-01 00:00:00 4 9
2023-01-01 08:00:00 6 12
2023-01-01 16:00:00 8 15
2023-01-02 00:00:00 10 18
2023-01-02 08:00:00 12 21
2023-01-02 16:00:00 14 24
2023-01-03 00:00:00 16 27
2023-01-03 08:00:00 18 30
2023-01-03 16:00:00 20 33
2023-01-04 00:00:00 22 36
2023-01-04 08:00:00 24 39
2023-01-04 16:00:00 26 42
2023-01-05 00:00:00 28 45
2023-01-05 08:00:00 30 48
2023-01-05 16:00:00 32 51
v3 v4
2023-01-02 16 25
2023-01-04 20 30
2023-01-06 24 35
v1 v2 v3 v4
2023-01-01 00:00:00 4 9 NaN NaN
2023-01-01 08:00:00 6 12 NaN NaN
2023-01-01 16:00:00 8 15 NaN NaN
2023-01-02 00:00:00 10 18 16.0 25.0
2023-01-02 08:00:00 12 21 16.0 25.0
2023-01-02 16:00:00 14 24 16.0 25.0
2023-01-03 00:00:00 16 27 NaN NaN
2023-01-03 08:00:00 18 30 NaN NaN
2023-01-03 16:00:00 20 33 NaN NaN
2023-01-04 00:00:00 22 36 20.0 30.0
2023-01-04 08:00:00 24 39 20.0 30.0
2023-01-04 16:00:00 26 42 20.0 30.0
2023-01-05 00:00:00 28 45 NaN NaN
2023-01-05 08:00:00 30 48 NaN NaN
2023-01-05 16:00:00 32 51 NaN NaN
</code></pre>
<p>where each value from <code>df2</code> is copied to <code>df1</code> whenever date of <code>df1</code> matches the date of <code>df2</code> (i.e. ignoring the time component).</p>
<p>However, changing the <code>df1</code> index to have a time component (<code>01:00</code> in this example), i.e.:</p>
<pre><code>periods = 5 * 3
df1 = pandas.DataFrame(dict(
v1=numpy.arange(2, 2 + periods) * 2,
v2=numpy.arange(3, 3 +periods) * 3),
index=pandas.date_range('2023-01-01 01:00', periods=periods, freq='8H'))
print(df1)
periods = 3
df2 = pandas.DataFrame(dict(
v3=numpy.arange(4, 4 + periods) * 4,
v4=numpy.arange(5, 5 + periods) * 5),
index=pandas.date_range('2023-01-02', periods=periods, freq='2D'))
print(df2)
df1.loc[df1.index.date, ['v3', 'v4']] = df2
print(df1)
</code></pre>
<p>results in:</p>
<pre><code> v1 v2
2023-01-01 01:00:00 4 9
2023-01-01 09:00:00 6 12
2023-01-01 17:00:00 8 15
2023-01-02 01:00:00 10 18
2023-01-02 09:00:00 12 21
2023-01-02 17:00:00 14 24
2023-01-03 01:00:00 16 27
2023-01-03 09:00:00 18 30
2023-01-03 17:00:00 20 33
2023-01-04 01:00:00 22 36
2023-01-04 09:00:00 24 39
2023-01-04 17:00:00 26 42
2023-01-05 01:00:00 28 45
2023-01-05 09:00:00 30 48
2023-01-05 17:00:00 32 51
v3 v4
2023-01-02 16 25
2023-01-04 20 30
2023-01-06 24 35
...
KeyError: "None of [Index([2023-01-01, 2023-01-01, 2023-01-01, 2023-01-02, 2023-01-02, 2023-01-02,\n 2023-01-03, 2023-01-03, 2023-01-03, 2023-01-04, 2023-01-04, 2023-01-04,\n 2023-01-05, 2023-01-05, 2023-01-05],\n dtype='object')] are in the [index]"
</code></pre>
<p>so apparently:</p>
<pre><code>df1.loc[df1.index.date, ['v3', 'v4']] = df2
</code></pre>
<p>is not the appropriate way to set values based on date (i.e. ignoring time).</p>
<p>Questions:</p>
<ul>
<li><p>Why doesn't it work when there's a time component?</p>
</li>
<li><p>Since it doesn't work with a time component, why does it work when there's no time component by matching <em>all</em> times (i.e. not only <code>00:00</code>)?</p>
</li>
<li><p>What would be the correct way to do what I'm after?</p>
</li>
</ul>
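For the last question, one approach (a sketch, not necessarily the canonical way) is to align `df2` to `df1` by normalizing `df1`'s timestamps to midnight; this works whether or not the index has a time component:

```python
import pandas as pd

df1 = pd.DataFrame(
    {"v1": [1, 2, 3, 4]},
    index=pd.date_range("2023-01-01 01:00", periods=4, freq="12h"),
)
df2 = pd.DataFrame(
    {"v3": [16, 20]},
    index=pd.date_range("2023-01-02", periods=2, freq="2D"),
)

# Normalize df1's timestamps to midnight so they can be matched
# against df2's date-only index, then copy the values positionally.
aligned = df2.reindex(df1.index.normalize())
df1[df2.columns] = aligned.to_numpy()
```

Rows whose date is absent from `df2` come out as NaN, matching the behavior in the first example.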
|
<python><pandas><dataframe>
|
2023-02-28 01:41:54
| 1
| 4,641
|
levant pied
|
75,587,139
| 796,634
|
Get Jupyter notebook URL without using IPython.notebook.kernel.execute
|
<p>I'm attempting to access the current URL of a Jupyter notebook including parameters, however I'm operating in a custom environment that has disabled direct access to IPython from javascript so I can't use IPython.notebook.kernel.execute. AFAIK all other communication works fine, input widgets still work etc.</p>
<p>Is there a way to retrieve the current URL of the running notebook in Python that doesn't use IPython.notebook.kernel.execute?</p>
|
<python><url><jupyter-notebook>
|
2023-02-28 01:35:12
| 0
| 2,194
|
Geordie
|
75,587,046
| 1,418,326
|
Tensorfow 2.11.0: Cannot dlopen some GPU libraries. Skipping registering GPU devices
|
<p>I am following the instruction here:<a href="https://www.tensorflow.org/install/pip" rel="nofollow noreferrer">https://www.tensorflow.org/install/pip</a> I was able to follow the instruction step by step without any problems, but TF doesn't detect GPUs.</p>
<p>TensorFlow 2.11.0 and Python 3.7.10</p>
<p><a href="https://i.sstatic.net/03FH1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/03FH1.png" alt="enter image description here" /></a></p>
<pre><code>2023-02-28 01:02:51.625308: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-28 01:02:51.753482: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-02-28 01:02:53.042228: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/nvidia/lib64:/home/gesong/.conda/envs/tf/lib/:/home/gesong/.conda/envs/tf/lib/
2023-02-28 01:02:53.042457: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/nvidia/lib64:/home/gesong/.conda/envs/tf/lib/:/home/gesong/.conda/envs/tf/lib/
2023-02-28 01:02:53.042475: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-02-28 01:02:54.601061: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/nvidia/lib64:/home/gesong/.conda/envs/tf/lib/:/home/gesong/.conda/envs/tf/lib/
2023-02-28 01:02:54.602005: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1934] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
[]
</code></pre>
|
<python><tensorflow><gpu>
|
2023-02-28 01:14:41
| 1
| 1,707
|
topcan5
|
75,586,997
| 4,402,572
|
How can I enforce black's `--preview` option when invoking black with pre-commit?
|
<p>I just found out that <code>black</code>'s default behavior for long logfile messages does not enforce a maximum line length. I have verified that I can enforce max line length if I invoke black manually (v 23.1.0, in a Python 3.9.9 venv). Now I'd like to make sure that enforcement happens with every commit, using pre-commit, but I don't know how to do this.</p>
<p>Is it possible? If so, how?</p>
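It should be possible by passing the flag through the hook's `args` in `.pre-commit-config.yaml`; a sketch (the `rev` shown is an assumption — pin whichever version you actually use):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 23.1.0
    hooks:
      - id: black
        args: ["--preview"]
```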
|
<python><pre-commit><pre-commit.com><python-black>
|
2023-02-28 01:03:26
| 1
| 1,264
|
Jeff Wright
|
75,586,994
| 9,616,557
|
Pandas - How to enforce two digits after decimal point using regex?
|
<p>I have a <code>dataframe</code> which has some values like this:</p>
<pre><code>propvalue
3000.2343
4000.4334554
</code></pre>
<p>That column is a string (str) type.</p>
<p>How can I keep only two decimals after the point?</p>
<p>Desired:</p>
<pre><code>propvalue
3000.23
4000.43
</code></pre>
<p>I've researched and seems to be something like it:</p>
<pre><code>df['propvalue'] = df['propvalue'].replace('\..*','',regex=True)
</code></pre>
<p>But the above removes everything after the point. I need to keep two digits.</p>
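A capture-group pattern that keeps the dot plus the first two digits works here. A sketch with the stdlib `re` module — the same pattern can be passed to `df['propvalue'].str.replace(..., regex=True)` in pandas:

```python
import re

# Keep the dot and the first two digits, drop any further digits.
# Note this truncates rather than rounds.
pattern = re.compile(r"(\.\d{2})\d*")

def two_decimals(value: str) -> str:
    return pattern.sub(r"\1", value)
```

Values with no decimal point (or fewer than two decimals) pass through unchanged.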
|
<python><pandas>
|
2023-02-28 01:03:03
| 2
| 767
|
Astora
|
75,586,975
| 5,718,551
|
Change format of all logger instances
|
<p>I need to standardize logging format for multiple Python scripts.</p>
<p>I've written a custom handler; a handler is required rather than a custom format string, since I need to perform additional processing.</p>
<pre class="lang-py prettyprint-override"><code>import logging

class CustomHandler(logging.Handler):
    def emit(self, record: logging.Record):
        # ...
        pass
</code></pre>
<p>Which I've assigned to my default logger</p>
<pre class="lang-py prettyprint-override"><code>handler = CustomHandler()
logging.getLogger().addHandler(handler)
</code></pre>
<p>I want to <strong>force</strong> all packages (<code>fastapi</code>, <code>scrapy</code>, <code>pytest</code>, etc...) to use this handler as well. I am aware this is not good practice. It is what I would like to do.</p>
<p>I have tried re-assigning the handlers of all child loggers from <code>logging.root</code></p>
<pre class="lang-py prettyprint-override"><code>def bind():
    for logger in logging.root.manager.loggerDict.values():
        if isinstance(logger, logging.PlaceHolder):
            continue
        logger.handlers = [handler]
</code></pre>
<p>This has several drawbacks:</p>
<ol>
<li><p>Messages logged before <code>bind()</code> is called appear in whatever format was defined by the package. One would think this could be solved by importing and running it at the very start, but some processes like <code>uvicorn</code> (FastAPI) invoke your code as a subprocess.</p>
</li>
<li><p><code>pytest</code> appears to define its own custom <code>master</code> logging instance (something which <code>logging</code> advises against). This means I cannot reset its handlers in the same way</p>
</li>
</ol>
<p>I've read parts of the documentation and looked through the source, but haven't seen a neat solution. Open to any suggestions.</p>
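One variant of the re-binding idea (a sketch; it still does not catch messages logged before `bind()` runs, and pytest's capture remains a special case) is to route everything through the root logger via propagation, instead of assigning the handler to each child:

```python
import logging

class ListHandler(logging.Handler):
    """Collects messages in a list; stands in for the custom handler."""
    def __init__(self):
        super().__init__()
        self.messages = []

    def emit(self, record: logging.LogRecord):
        self.messages.append(record.getMessage())

def bind(handler: logging.Handler) -> None:
    """Make the root logger the single sink: clear child handlers
    and re-enable propagation so every record reaches `handler`."""
    root = logging.getLogger()
    root.handlers = [handler]
    for logger in logging.Logger.manager.loggerDict.values():
        if isinstance(logger, logging.PlaceHolder):
            continue
        logger.handlers.clear()
        logger.propagate = True

# Example: a third-party-style logger that installed its own handler
third_party = logging.getLogger("somepackage.module")
third_party.addHandler(logging.StreamHandler())

handler = ListHandler()
bind(handler)
third_party.warning("something happened")
```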
|
<python><python-logging>
|
2023-02-28 00:58:24
| 0
| 944
|
Inigo Selwood
|
75,586,935
| 5,530,152
|
Can functions in python reference previously logged events?
|
<p>Is there a way for python functions to reference previously logged events that occur within their scope or another specific function's scope?</p>
<p><strong>For example:</strong></p>
<p>Within a larger program, <code>process_files(files)</code> raises and handles an <code>OSError</code> as <code>logging.error(f'foo exception handled for {file} due to {bar} error')</code> for 4 out of 20 files, but completes the rest of the job successfully.</p>
<p>Later <code>progress_report()</code> will inform the user of any events of level <code>ERROR</code> or above that occurred in other functions.</p>
<p><strong>What I've Considered</strong></p>
<p>Creating my own class (below) to handle error logging, and setting an instance of it as a global variable, <code>my_logger = Error_Logger()</code>, that can then be used within every function to gather and record any errors. Other functions can then query the <code>errors</code> property or the <code>member_functions</code> property to see whether another function encountered an error, and to check the maximum error level and number of errors encountered.</p>
<pre class="lang-py prettyprint-override"><code>class Error_Logger:
    def __init__(self):
        self.errors = {}

    @property
    def member_functions(self):
        return self.errors.keys()

    def log_error(self, error, level='WARNING', data=None):
        caller_name = self._caller_name()
        num_level = logging.getLevelName(level)
        if not isinstance(num_level, int):
            num_level = 0
        level_name = logging.getLevelName(num_level)
        logger = getattr(logging, str(level_name.lower()))
        logger(f'{caller_name}: {error}')
        if caller_name not in self.errors:
            self.errors[caller_name] = {}
        self.errors[caller_name].update({len(self.errors[caller_name]):
                                         {'error': error,
                                          'level': num_level,
                                          'data': data}
                                         })

    @staticmethod
    def _error_set(errors):
        error_set = set()
        for k, v in errors.items():
            error = v.get('level', None)
            if error:
                error_set.add(error)
        return error_set

    def max_error(self, function=None):
        if function:
            errors = self.errors.get(function, None)
            if errors:
                max_error = max(self._error_set(errors))
            else:
                max_error = None
        else:
            errors = set()
            for k, v in self.errors.items():
                errors = errors.union(self._error_set(v))
            max_error = max(errors)
        return max_error

    def _caller_name(self, level=2):
        return inspect.stack()[level][3]


my_logger = Error_Logger()


def process_files(files):
    try:
        for f in files:
            ...  # do stuff with f
    except Exception as e:
        my_logger.log_error(f'bad things happened {e}', 'ERROR', e)
    return processed_files


def progress_report():
    if len(my_logger.errors['process_files']):
        print(f"bad things happened with {len(my_logger.errors['process_files'])}")


files = ['a', 'b', 'c']
foo = process_files(files)
progress_report()
</code></pre>
<p><strong>What I don't Like about This</strong></p>
<ul>
<li>I feel like I'm reinventing something that's been done better before me.</li>
<li>Global variable has to be init'd first and always in the global namespace when developing other functions.</li>
<li>Globals make it harder, if not impossible, to export the functions from this module in the future</li>
<li><a href="https://www.baeldung.com/cs/global-variables" rel="nofollow noreferrer">Globals are messy and can result in all sorts of unintentional changes that are hard to diagnose</a></li>
</ul>
|
<python><logging><python-logging>
|
2023-02-28 00:49:53
| 0
| 933
|
Aaron Ciuffo
|
75,586,757
| 12,043,946
|
Spyder issues without anaconda distribution "distfit" package. Cannot change PATH
|
<p>I am using Python 3.11. I was setting up a remote development environment and the Anaconda distribution was messing everything up. In order to get it to work I had to uninstall the conda distribution. Now that I have gotten the remote environment to work, my computer is messed up. I cannot run commands like <code>which python</code> in the terminal, but I can use pip in the terminal. I have a script that uses the distfit package, but it will no longer import into Spyder despite being usable from the terminal.</p>
<pre><code>pip install distfit
/Applications/Spyder.app/Contents/MacOS/python: No module named pip
Note: you may need to restart the kernel to use updated packages.
</code></pre>
<p>I have tried to add the Path of the python site-packages folder but everytime i try to add it spyder crashes and does not save it.</p>
<pre><code>Traceback (most recent call last):
  File "/Applications/Spyder.app/Contents/Resources/lib/python3.9/spyder/plugins/pythonpath/widgets/pathmanager.py", line 169, in <lambda>
    triggered=lambda x: self.add_path())
  File "/Applications/Spyder.app/Contents/Resources/lib/python3.9/spyder/plugins/pythonpath/widgets/pathmanager.py", line 456, in add_path
    if self.listwidget.row(self.user_header) < 0:
RuntimeError: wrapped C/C++ object of type QListWidgetItem has been deleted
</code></pre>
<p>I have reinstalled python a few times as well as spyder and I cannot find any answers.</p>
|
<python><scipy><spyder><scipy.stats>
|
2023-02-28 00:11:27
| 1
| 392
|
d3hero23
|
75,586,723
| 6,552,836
|
Scipy Optimize - Singular Matrix issue when creating constraints in a loop
|
<p>I'm trying to optimize a 20x5 matrix to maximize a return value y. The main constraint I need to include is:</p>
<p>The total sum of all the elements must be between a min and a max value.</p>
<p>However, I keep getting the singular matrix error below when I try to create the constraints in a loop.</p>
<pre><code>Singular matrix C in LSQ subproblem (Exit mode 6)
Current function value: -3.0867160133139926
Iterations: 1
Function evaluations: 261
Gradient evaluations: 1
</code></pre>
<p><strong>1) Creating constraints in a loop version</strong></p>
<pre><code># Import Libraries
import pandas as pd
import numpy as np
import scipy.optimize as so
import random
# Define Objective function
def obj_func(matrix):
    # Define the functions for each column
    return np.sum(output_matrix)

# Create optimizer function
def optimizer_result(tot_sum_max, tot_sum_min, col_min, col_max, matrix_input):
    constraints_list = [{'type': 'ineq', 'fun': lambda x: np.sum(x) - tot_sum_max},
                        {'type': 'ineq', 'fun': lambda x: -(np.sum(x) - tot_sum_min)}]

    # Create an initial matrix
    start_matrix = [random.randint(0, 3) for i in range(0, 260)]

    # Run optimizer
    optimizer_solution = so.minimize(obj_func, start_matrix, method='SLSQP',
                                     bounds=[(0, tot_sum_max)] * 20, tol=0.01,
                                     options={'disp': True, 'maxiter': 100},
                                     constraints=constraints_list,
                                     callback=callback)
</code></pre>
<p>However, the optimizer works fine when I manually type in the constraints, as displayed below (note: I did not include all 260 element constraints, for simplicity). I am unsure why this is the case.</p>
<p><strong>2) Manually creating constraints version</strong></p>
<pre><code># Import Libraries
import pandas as pd
import numpy as np
import scipy.optimize as so
import random
# Define Objective function
def obj_func(matrix):
    return np.sum(output_matrix)

# Create optimizer function
def optimizer_result(tot_sum_max, tot_sum_min, col_min, col_max, matrix_input):
    total_and_col_cons = [{'type': 'ineq', 'fun': lambda x: np.sum(x) - tot_sum_min},
                          {'type': 'ineq', 'fun': lambda x: -(np.sum(x) - tot_sum_max)}]

    # Create an initial matrix
    start_matrix = [random.randint(0, 3) for i in range(0, 260)]

    # Run optimizer
    optimizer_solution = so.minimize(obj_func, start_matrix, method='SLSQP',
                                     bounds=[(0, tot_sum_max)] * 20, tol=0.01,
                                     options={'disp': True, 'maxiter': 100},
                                     constraints=total_and_col_cons,
                                     callback=callback)
</code></pre>
|
<python><optimization><scipy><scipy-optimize>
|
2023-02-28 00:04:58
| 1
| 439
|
star_it8293
|
75,586,690
| 5,568,265
|
Is it possible to create "parameter decorators" (like TypeScript) in Python?
|
<p>TypeScript has a feature known as <a href="https://www.typescriptlang.org/docs/handbook/decorators.html#parameter-decorators" rel="nofollow noreferrer">parameter decorators</a>: literally a decorator that you can apply to a parameter of a function or method:</p>
<pre><code>class BugReport {
  // ...
  print(@required verbose: boolean) {
    // ...
  }
}
</code></pre>
<p>Another example from <a href="https://docs.nestjs.com/custom-decorators" rel="nofollow noreferrer">NestJS</a>:</p>
<pre><code>class SomeController {
  // ...
  async findOne(@User() user: UserEntity) {
    // ...
  }
}
</code></pre>
<p>Note in the above examples that the decorators are decorating parameters of each method, not the methods themselves.</p>
<p>I don't think that parameter decorators like this exist in Python (at least as of v3.11, nor could I find any open PEPs that cover it); however, I'm curious to know if there is a way to implement something like this in Python?</p>
<p>It doesn't have to have the exact same syntax, of course; just the same effect.</p>
<p>I'm not super familiar with how parameter decorators work under-the-hood, but my best understanding is that they attach metadata to the corresponding function or method at compile time (so they would likely need to work in tandem with a function/method decorator, metaclass, etc.).</p>
|
<python><decorator>
|
2023-02-27 23:56:52
| 2
| 1,142
|
todofixthis
|
75,586,687
| 2,369,000
|
Multi-line title alignment using plotly
|
<p>Is it possible to make the title have pixel-perfect alignment with the top of the image when using line breaks in the title? Normally you can do this with yanchor='top' and y=1, but line breaks in the title appear to break this.</p>
<p>Code:</p>
<pre><code>layout = go.Layout(
height=500, width=500,
margin=dict(l=5, r=5, b=5, t=20),
title=dict(
text=f"<b>Test Title<br>Subtitle</b>", font_size=15,
x=0.5, y=1, yanchor='top', xanchor='center'),
)
data = [go.Scatter(x=[1, 2, 3, 4], y=[6, 5, 4, 3], mode='lines')]
chart = go.Figure(data=data, layout=layout)
plotly.offline.plot(chart)
</code></pre>
<p>Result:
<a href="https://i.sstatic.net/APWR3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/APWR3.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2023-02-27 23:56:38
| 0
| 461
|
HAL
|
75,586,605
| 10,886,849
|
PyQT5 - how to use QWidget, QScrollArea and QBrush all together
|
<p><strong>I am trying to:</strong></p>
<ol>
<li>Display an image which is bigger than the window size;</li>
<li>Make the window scrollable, so I can roll through the whole image;</li>
<li>Draw over the image;</li>
</ol>
<p><strong>Problem is:</strong></p>
<ol>
<li>As soon as I make the window scrollable, I can't see the drawing anymore;</li>
</ol>
<p>How can I solve it?</p>
<p>Minimal example:</p>
<pre><code>from PIL import Image
from PyQt5 import QtCore, QtGui
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QBrush, QImage, QPalette
from PyQt5.QtWidgets import QLabel, QScrollArea, QVBoxLayout, QWidget


class DocumentWindow(QWidget):
    def __init__(self, main_obj):
        super().__init__()
        self.WIDTH = 500
        self.HEIGHT = 500
        self.resize(self.WIDTH, self.HEIGHT)

        file_path = 'invoice_example.jpeg'
        self.pages = {0: Image.open(file_path)}
        image = QImage(file_path)
        self.doc_original_width = image.width()
        self.doc_original_height = image.height()
        self.doc_new_width = image.width()
        self.doc_new_height = image.height()

        palette = QPalette()
        palette.setBrush(QPalette.Window, QBrush(image))
        self.setPalette(palette)

        label = QLabel(None, self)
        label.setGeometry(0, 0, self.WIDTH, self.HEIGHT)

        conWin = QWidget()
        conLayout = QVBoxLayout(self)
        conLayout.setContentsMargins(0, 0, 0, 0)
        conWin.setLayout(conLayout)

        scroll = QScrollArea()
        scroll.setVerticalScrollBarPolicy(Qt.ScrollBarAlwaysOn)
        scroll.setWidgetResizable(True)
        scroll.setWidget(conWin)

        scrollLayout = QVBoxLayout(self)
        # When I enable these lines, I get no visible drawing
        scrollLayout.addWidget(scroll)
        scrollLayout.setContentsMargins(0, 0, 0, 0)

        self.begin = QtCore.QPoint()
        self.end = QtCore.QPoint()
        self.show()

    def paintEvent(self, event):
        qp = QtGui.QPainter(self)
        br = QtGui.QBrush(QtGui.QColor(100, 10, 10, 40))
        qp.setBrush(br)
        qp.drawRect(QtCore.QRect(self.begin, self.end))
        rect = QtCore.QRect(self.begin, self.end)
        self.coordinates = rect.getCoords()

    def mousePressEvent(self, event):
        self.begin = event.pos()
        self.end = event.pos()
        self.update()

    def mouseMoveEvent(self, event):
        self.end = event.pos()
        self.update()

    def mouseReleaseEvent(self, event):
        x1, y1, x2, y2 = self.coordinates
        if x1 == x2 or y1 == y2:
            return
        self.width_factor = self.doc_original_width / self.doc_new_width
        self.height_factor = self.doc_original_height / self.doc_new_height
        self.normalized_coordinates = [
            int(x1 * self.width_factor), int(y1 * self.height_factor),
            int(x2 * self.width_factor), int(y2 * self.height_factor)]
        self.update()
</code></pre>
|
<python><pyqt><pyqt5>
|
2023-02-27 23:41:55
| 2
| 990
|
Julio S.
|
75,586,471
| 3,821,009
|
Pandas groupby transform yields Series instead of DataFrame on empty DataFrames
|
<p>Running this:</p>
<pre><code>for periods in [8, 4, 0]:
    print(f'--- periods {periods}')

    df = pandas.DataFrame(dict(
        v1=numpy.arange(periods),
        v2=numpy.arange(periods) * 2),
        index=pandas.date_range('2023-01-01', periods=periods, freq='6H'))

    dft = df.between_time('00:00', '06:00')
    dft = dft.reindex_like(df)
    dfc = dft['v1'] > 3
    df = df[dfc.groupby(dfc.index.date).transform(any)]

    print(df)
    print(df.dtypes)
    print(df.index)
    print()
</code></pre>
<p>results in:</p>
<pre><code>--- periods 8
v1 v2
2023-01-02 00:00:00 4 8
2023-01-02 06:00:00 5 10
2023-01-02 12:00:00 6 12
2023-01-02 18:00:00 7 14
v1 int64
v2 int64
dtype: object
DatetimeIndex(['2023-01-02 00:00:00', '2023-01-02 06:00:00',
'2023-01-02 12:00:00', '2023-01-02 18:00:00'],
dtype='datetime64[ns]', freq='6H')
--- periods 4
Empty DataFrame
Columns: [v1, v2]
Index: []
v1 int64
v2 int64
dtype: object
DatetimeIndex([], dtype='datetime64[ns]', freq='6H')
--- periods 0
Empty DataFrame
Columns: []
Index: []
Series([], dtype: object)
DatetimeIndex([], dtype='datetime64[ns]', freq='6H')
</code></pre>
<p>Why is the result for periods = 0 (i.e. empty DataFrame) a Series and not a DataFrame with columns <code>v1</code> and <code>v2</code>?</p>
<p>Aside from checking whether df is empty beforehand, is there a way to return a DataFrame with both <code>v1</code> and <code>v2</code>?</p>
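If the explicit empty check turns out to be acceptable after all, a sketch of a guarded helper that always preserves the columns (this simply special-cases the empty frame rather than changing what `groupby(...).transform` returns):

```python
import pandas as pd

def filter_days(df: pd.DataFrame, col: str = "v1", threshold: int = 3) -> pd.DataFrame:
    """Keep all rows of any calendar day where `col` exceeds `threshold`
    at least once; preserve columns and dtypes even when `df` is empty."""
    if df.empty:
        # .iloc[0:0] keeps both the columns and their dtypes intact
        return df.iloc[0:0]
    cond = df[col] > threshold
    return df[cond.groupby(cond.index.date).transform(any)]

idx = pd.date_range("2023-01-01", periods=4, freq="12h")
df = pd.DataFrame({"v1": [0, 5, 0, 0], "v2": [1, 2, 3, 4]}, index=idx)
```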
|
<python><pandas><dataframe>
|
2023-02-27 23:17:43
| 1
| 4,641
|
levant pied
|
75,586,451
| 15,781,591
|
Finding all possible solution pairs to an algebra problem with constraints in python
|
<p>I am trying to use Python to create a list of all pairs of numbers between 2 and 75 that average to 54, where each value in each solution pair can have up to 6 decimal places. I want to have a list of all of these solution pairs.</p>
<p>I have tried using <code>solve()</code> from <code>SymPy</code> to solve this as an algebra problem, but I am unsure how to handle this when the equation has many solutions, and further includes constraints. How can I approach this problem so that I account for my constraints, where all values in each solution pair need to be between 2 and 75?</p>
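Since a + b = 108 with both values in [2, 75] forces a, b into [33, 75], the solutions can be enumerated directly at six-decimal resolution. A sketch that counts in integer millionths to avoid float accumulation error (a lazy generator, since there are about 21 million pairs with a <= b):

```python
from itertools import islice

SCALE = 10 ** 6            # six decimal places => count in millionths
TARGET = 108 * SCALE       # a + b must equal 108 (i.e. average 54)
LO, HI = 2 * SCALE, 75 * SCALE

def solution_pairs():
    """Yield every (a, b) with a + b == 108 and 2 <= a <= b <= 75,
    to six decimal places."""
    a = max(LO, TARGET - HI)   # smallest feasible a is 33.0
    while 2 * a <= TARGET:     # stop once a would exceed b
        yield a / SCALE, (TARGET - a) / SCALE
        a += 1

first_three = list(islice(solution_pairs(), 3))
```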
|
<python><sympy><algebra>
|
2023-02-27 23:14:03
| 1
| 641
|
LostinSpatialAnalysis
|
75,586,277
| 6,471,140
|
Sagemaker training jobs metrics not shown in monitoring nor cloudwatch/no widget on this dashboard
|
<p>Using SageMaker, from previous experience and the official documentation we know it is possible to create plots and visualizations for metrics during training jobs (e.g. accuracy, error, etc.); however, now we cannot see any metrics. This applies both to the "Monitoring" tab for the SageMaker jobs and to CloudWatch. We are capturing the desired metrics in the logs and have defined the regular expressions for them, but we cannot see the metrics. Are there any additional configs besides adding the metric definitions (including regex)?</p>
<p>It's important to mention that not even the default metrics (captured automatically by SageMaker, e.g. CPU and memory utilization) are shown, so I guess it is not related to my metrics (or their regex).</p>
<p><strong>UPDATE</strong>: we found <code>"cloudwatch:PutMetricData"</code> was missing from the policy for the <code>NeptuneSageMakerIAMRole</code> (that was not originally in the documentation, but was added recently: <a href="https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-manual-setup.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-manual-setup.html</a>), and now I can see the metrics listed in CloudWatch. However, they are still not shown in the "Monitor" tab of the SageMaker job console (before adding PutMetricData to the policy, the page was blank and the message was not shown).
<a href="https://i.sstatic.net/Pc05y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pc05y.png" alt="enter image description here" /></a></p>
|
<python><amazon-web-services><machine-learning><amazon-cloudwatch><amazon-sagemaker>
|
2023-02-27 22:46:06
| 0
| 3,554
|
Luis Leal
|
75,586,210
| 4,487,457
|
Dataflow Flex Template: module not found
|
<p>I'm trying to use Dataflow to do some parallel operations in <code>pandas</code>. I understand that <code>pandas</code> is installed on the worker <a href="https://cloud.google.com/dataflow/docs/concepts/sdk-worker-dependencies#sdk-for-python_1" rel="nofollow noreferrer">nodes</a>. But every time I submit a Flex template to Dataflow, I get an error that <code>pandas</code> and the custom modules that I wrote cannot be imported.</p>
<p>I've also looked at all the SO posts related to this:</p>
<ul>
<li><p><a href="https://stackoverflow.com/questions/70341392/dataflow-flex-template-modulenotfounderror-no-module-name">Dataflow flex template: ModuleNotFoundError: no module name</a></p>
<ul>
<li>Here I created a <code>FLEX_TEMPLATE_PYTHON_EXTRA_PACKAGES</code> env var. Oddly enough, setting it to <code>${WORKDIR}</code> or <code>transform_lib</code> will complain that <code>Error occurred in the launcher container: could not validate extra package: file transform_lib is a directory</code></li>
</ul>
</li>
<li><p><a href="https://stackoverflow.com/questions/69439953/gcp-flex-template-error-py-options-not-set-in-envsetup-file-error-waiting-f">GCP Flex Template Error: py options not set in envsetup file ... error waiting for container: unexpected EOF</a></p>
<ul>
<li>Currently this is throwing the same error as above.</li>
</ul>
</li>
</ul>
<p>I've also taken a look at <a href="https://medium.com/cts-technologies/building-and-testing-dataflow-flex-templates-80ef6d1887c6" rel="nofollow noreferrer">this</a> and was able to validate that I was able to import <code>pandas</code> and <code>transform_lib</code> inside the container.</p>
<p>project structure</p>
<pre><code>root/
├── transform_lib/
│   ├── utils.py
│   └── dataflow/
│       └── beam.py
└── deploy/
    └── dataflow/
        ├── requirements.txt
        └── Dockerfile
</code></pre>
<p>beam.py (I understand that there is a <a href="https://beam.apache.org/documentation/dsls/dataframes/overview/" rel="nofollow noreferrer">Dataframe</a> API but in this case, I'm unsure if it will work. Most of the examples are just doing iterations row by row while I need to do operations on a DF resulting from a groupedBy key - more than happy to refactor this)</p>
<pre><code>from typing import Optional

import apache_beam as beam
import numpy as np
import pandas as pd
from apache_beam.options.pipeline_options import PipelineOptions

from transform_lib.utils import some_function


class DoPandasThings(beam.DoFn):
    """
    Do Pandas things
    """

    def process(self, element):
        # the element is a tuple of (ID, records as a list of dicts);
        # converting to a dataframe so I can use methods on it
        df = pd.DataFrame.from_dict(element[1])
        df = df.replace({None: np.nan})
        df["rolling_sum"] = (
            df["_raw_counts"].rolling(14, min_periods=7).sum()
        )
        df["custom_function"] = some_function(df["other_counts"])
        # ... write to Parquet somehow


def run(
    table,
    runtime_args: Optional[PipelineOptions] = None,
) -> None:
    pipeline_options = PipelineOptions(runtime_args, save_main_session=True)

    with beam.Pipeline(options=pipeline_options) as pipeline:
        _ = (
            pipeline
            | "ReadTable" >> beam.io.ReadFromBigQuery(table=table)
            | "Grouping" >> beam.GroupBy(lambda x: x["id"])
            | "Pandas" >> beam.ParDo(DoPandasThings())
        )


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--table", required=True, help="Table to read from")
    args, runtime_args = parser.parse_known_args()
    run(args.table, runtime_args)
</code></pre>
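<p>Independent of the packaging question, the per-group pandas step can be exercised locally with a plain DataFrame. This is only a sketch; the column names mirror the snippet above, but the records themselves are made up:</p>

```python
import numpy as np
import pandas as pd

# Hypothetical records standing in for one grouped element's payload
records = [{"id": 1, "_raw_counts": c, "other_counts": 2 * c} for c in range(1, 16)]

df = pd.DataFrame.from_dict(records)
df = df.replace({None: np.nan})

# 14-row rolling sum that starts emitting once 7 observations are available
df["rolling_sum"] = df["_raw_counts"].rolling(14, min_periods=7).sum()

print(df["rolling_sum"].head(8).tolist())  # first 6 are NaN, then 28.0, 36.0
```

<p>Running this outside Beam makes it easier to separate pandas bugs from Dataflow import/packaging failures.</p>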
<p>setup.py</p>
<pre><code>import setuptools
setuptools.setup(
name="transform_lib",
version="0.0.0",
install_requires=[],
packages=setuptools.find_packages(),
)
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM gcr.io/dataflow-templates-base/python38-template-launcher-base:20221018_RC00
ARG dataflow_file_path
ARG WORKDIR=/opt/dataflow
# creating standardized dataflow directory structure
RUN mkdir -p ${WORKDIR}
WORKDIR ${WORKDIR}
# copying over necessary files
COPY deploy/dataflow/requirements.txt .
COPY transform_lib/dataflow/beam.py .
# kitchen sink env var
# https://cloud.google.com/dataflow/docs/guides/templates/configuring-flex-templates#set_required_dockerfile_environment_variables
ENV FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE="${WORKDIR}/requirements.txt"
ENV FLEX_TEMPLATE_PYTHON_PY_FILE="${WORKDIR}/beam.py"
ENV FLEX_TEMPLATE_PYTHON_SETUP_FILE="${WORKDIR}/setup.py"
ENV FLEX_TEMPLATE_PYTHON_EXTRA_PACKAGES=""
ENV FLEX_TEMPLATE_PYTHON_PY_OPTIONS=""
ENV FLEX_TEMPLATE_PYTHON_SETUP_FILE=""
# following practices listed here:
# https://cloud.google.com/dataflow/docs/guides/troubleshoot-templates#python-timeout-polling
RUN apt-get update \
&& apt-get install -y libffi-dev git \
&& rm -rf /var/lib/apt/lists/* \
&& pip install --upgrade pip \
&& pip install apache-beam[gcp] \
&& pip install -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE \
# Download the requirements to speed up launching the Dataflow job.
&& pip download --no-cache-dir --dest /tmp/dataflow-requirements-cache -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE \
&& pip download --no-cache-dir --dest /tmp/dataflow-requirements-cache .
# install local python module
COPY deploy/dataflow/setup.py .
COPY transform_lib transform_lib/
RUN python setup.py install
ENV PIP_NO_DEPS=True
</code></pre>
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam>
|
2023-02-27 22:34:59
| 1
| 2,360
|
Minh
|
75,586,132
| 16,421,247
|
numpy selecting multiple columns excluding certain ones - concise way
|
<p>I have a short question about indexing in numpy. I'm trying to select a subset of columns of a 2D array. For example, if I wanted columns other than 3, 6 and 9, then I would plug in a list of indices excluding those positions:</p>
<pre><code>x = np.arange(20).reshape(2,10)
x[:, [i for i in range(len(x[0])) if i not in [3, 6, 9]]]
</code></pre>
<pre><code>[[ 0 1 2 4 5 7 8]
[10 11 12 14 15 17 18]]
</code></pre>
<p>The method works but I was wondering if there's a more concise way of doing the same thing?</p>
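<p>One shorter option, assuming the intent is simply to drop the listed columns, is <code>np.delete</code> with <code>axis=1</code>; a quick sketch:</p>

```python
import numpy as np

x = np.arange(20).reshape(2, 10)

# np.delete returns a copy of x with columns 3, 6 and 9 removed (axis=1)
result = np.delete(x, [3, 6, 9], axis=1)
print(result)
```

<p>This avoids building the index list by hand while producing the same 2x7 output.</p>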
|
<python><numpy>
|
2023-02-27 22:22:46
| 1
| 1,058
|
nightstand
|
75,586,082
| 3,449,093
|
Ignoring None as one of the values
|
<p>I need to plot two sets of data on the same chart with error bars. But I don't have one of the measurements.</p>
<pre><code>days = [0.43, 1.72, 3.97, 4.81, 9.54, 10.9]
magB = [None, 3.36, 3.9, 4.27, 5.47, 5.59]
magV = [10.8, 2.88, 3.59, 3.89, 5.57, 5.82]
fix, ax = plt.subplots()
plt.xlabel("days")
plt.ylabel("mag")
plt.yticks(np.arange(3, 6, 0.5))
plt.scatter(days, magB, label='magB')
plt.scatter(days, magV, label='magV')
plt.errorbar(days, magB, yerr=1, fmt="o")
plt.errorbar(days, magV, xerr=1, fmt="o")
plt.legend()
ax.invert_yaxis()
plt.show()
</code></pre>
<p>But it returns the error when calculating the error:</p>
<pre><code> File "test.py", line 26, in <module>
plt.errorbar(days, magB, yerr=0.1, fmt="o")
File "matplotlib/pyplot.py", line 2537, in errorbar
return gca().errorbar(
File "matplotlib/__init__.py", line 1442, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)
File "matplotlib/axes/_axes.py", line 3648, in errorbar
low, high = dep + np.row_stack([-(1 - lolims), 1 - uplims]) * err
TypeError: unsupported operand type(s) for +: 'NoneType' and 'float'
Process finished with exit code 1
</code></pre>
<p>I understand it cannot apply the error to the <code>None</code> value. What can I do about it?</p>
<p>I cannot simply ignore the first <code>days</code> value as it will be needed further (more data to plot on the same chart)</p>
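<p>A common workaround (a sketch, assuming the goal is just to keep the series aligned with <code>days</code>) is to cast the list to a float array, which turns <code>None</code> into <code>np.nan</code>; matplotlib skips NaN points rather than raising a <code>TypeError</code>:</p>

```python
import numpy as np

magB = [None, 3.36, 3.9, 4.27, 5.47, 5.59]

# Casting to float converts None to np.nan; NaN supports arithmetic,
# so errorbar's internal "dep + err" computation no longer fails
magB_arr = np.array(magB, dtype=float)

print(magB_arr)
```

<p>Using <code>plt.errorbar(days, magB_arr, yerr=1, fmt="o")</code> should then run without the error, simply leaving the first day unplotted for magB while the rest of the data stays in place for later plots.</p>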
|
<python><matplotlib>
|
2023-02-27 22:16:47
| 2
| 1,399
|
Malvinka
|
75,585,994
| 17,274,113
|
open cv cv2.imread a grayscale image pixel values
|
<p>I am attempting to load a grayscale image using <code>cv2.imread</code> for further blob extraction. To start, I have an image (as seen below) for which black pixels have values (typically between 0 and 1) and white pixels have a value of 0.</p>
<p><a href="https://i.sstatic.net/vtIrh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vtIrh.png" alt="enter image description here" /></a></p>
<p>I would like to resample this image to binary 0 (white pixels) and 1 (black pixels) in order to perform blob extraction on it using cv2. However, my resultant blob extraction returns 0 instances of blobs, even when I generalize the search parameters.</p>
<p>Upon looking into the image histograms, I found that I did not in fact have a binary image. Below I will include the code used to read in, reclassify, set blob parameters, and attempt to detect blobs.</p>
<p>link to image: <a href="https://drive.google.com/file/d/16AlNkVV9eQX3cTZjZP_OWa-EsMo_zrrD/view?usp=share_link" rel="nofollow noreferrer">tif_file</a></p>
<p>reclassification:</p>
<pre><code>import cv2
import numpy as np
import os
from osgeo import gdal
import matplotlib.pyplot as plt
from rasterio.plot import show
import skimage.io
driver = gdal.GetDriverByName('GTiff')
file = gdal.Open("tif_folder/Clip_depth_sink.tif")
band = file.GetRasterBand(1)
lista = band.ReadAsArray()
# reclassification
for j in range(file.RasterXSize):
for i in range(file.RasterYSize):
if lista[i,j] == 0:
lista[i,j] = 0
elif lista[i,j] != 0:
lista[i,j] = 1
else:
lista[i,j] = 2
## create new file
binary_sinks = driver.Create('tif_folder\\clipped_binary_sinks.tif', file.RasterXSize , file.RasterYSize , 1)
binary_sinks.GetRasterBand(1).WriteArray(lista)
# spatial ref system
proj = file.GetProjection()
georef = file.GetGeoTransform()
binary_sinks.SetProjection(proj)
binary_sinks.SetGeoTransform(georef)
binary_sinks.FlushCache()
</code></pre>
<p>Reclassified Image Histogram:</p>
<pre><code>image = cv2.imread('tif_folder\\clipped_binary_sinks.tif', -1)
plt.hist(image, bins=10, range=(-1,3))
</code></pre>
<p><a href="https://i.sstatic.net/f44zz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f44zz.png" alt="enter image description here" /></a></p>
<p>Blob Detection:</p>
<pre><code>image = cv2.imread('tif_folder\\clipped_binary_sinks.tif',-1)
plt.hist(image, bins=10, range=(-1,3))
#set blob parameters to look for
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 9
params.maxArea = 1000
params.filterByCircularity = True
params.minCircularity = 0.5
#create the detector out of the parameters
detector = cv2.SimpleBlobDetector_create(params)
#extract the blobs from the image
keypoints = detector.detect(image)
print(len(keypoints))
</code></pre>
<p>A few things confuse me about this. For instance, though the image pixel values fall around 0 and 1 as they should after the reclassification, they seem to fall into bins which are not exactly equal to 0 and 1, as we can see in the histogram. As I said earlier, no blobs were detected by OpenCV; this might make sense because, although adjacent pixels are close in value, they are slightly off according to the histogram.</p>
<p>Are there better ways of reclassifying an image to be binary that would solve this issue?
Is there another issue that I am missing which is causing blob detection not to be feasible?</p>
<p>Thanks in advance!</p>
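<p>As an aside, the per-pixel loop above can be sketched as one vectorized step that also yields the 8-bit 0/255 image <code>SimpleBlobDetector</code> expects. Note that with the default <code>blobColor=0</code> the detector looks for dark blobs, so the polarity may need flipping (or set <code>params.blobColor = 255</code>). The small array here is a stand-in, not the real raster:</p>

```python
import numpy as np

# Stand-in for the array returned by band.ReadAsArray()
lista = np.array([[0.0, 0.5, 0.0],
                  [0.9, 0.0, 0.2]])

# Vectorized reclassification: nonzero -> 255, zero -> 0, stored as uint8.
# An 8-bit image with well-separated values avoids the "almost binary"
# bins visible in the histogram.
binary = np.where(lista != 0, 255, 0).astype(np.uint8)

print(binary)
```

<p>Writing <code>binary</code> out (or passing it to the detector directly) sidesteps float rounding in the GeoTIFF round-trip.</p>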
|
<python><opencv><blob><arcgis><resampling>
|
2023-02-27 22:05:38
| 1
| 429
|
Max Duso
|
75,585,988
| 1,788,771
|
Django filter for not null
|
<p>I have a filter defined as such:</p>
<pre class="lang-py prettyprint-override"><code>class MessageFilterSet(filters.FilterSet):
seen = filters.BooleanFilter(field_name="seen_at", lookup_expr="isnull")
</code></pre>
<p>And it sort of works. But it's the wrong way around: passing seen=True will return all the unseen messages.</p>
<p>I don't want to have to change the name of the url parameter, how do I invert the lookup expression?</p>
|
<python><django><django-filter>
|
2023-02-27 22:04:40
| 1
| 4,107
|
kaan_atakan
|
75,585,933
| 3,713,236
|
Unable to export Jupyter notebook to Python Script in Azure ML Studio
|
<p>I am stuck on <a href="https://microsoftlearning.github.io/mslearn-azure-ml/Instructions/08-Script-command-job.html" rel="nofollow noreferrer">step 5 in this Azure tutorial</a>:
<a href="https://i.sstatic.net/2F764.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2F764.png" alt="enter image description here" /></a></p>
<p>Whenever I try to export to Python, I get a little message that says "Python export started, please wait..." that shows up for about 2 milliseconds, then disappears, with no popup or error message on my jupyter notebook. Has anyone else encountered this issue before? Please help!</p>
|
<python><azure><jupyter-notebook>
|
2023-02-27 21:56:14
| 1
| 9,075
|
Katsu
|
75,585,768
| 2,908,017
|
How can I prevent a Python FMX GUI Form from being resized?
|
<p>I have a window <code>Form</code> that is created with the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI library for Python</a>. My code and Form look like this:</p>
<pre><code>from delphifmx import *
class frmMain(Form):
def __init__(self, owner):
self.Caption = 'My Form'
self.Width = 1000
self.Height = 500
def main():
Application.Initialize()
Application.Title = "My Application"
Application.MainForm = frmMain(Application)
Application.MainForm.Show()
Application.Run()
Application.MainForm.Destroy()
main()
</code></pre>
<p><a href="https://i.sstatic.net/n1qv2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n1qv2.png" alt="Python empty GUI Form" /></a></p>
<p>The Form should not be able to be resized by dragging the sides of the Form and the Maximize button should be disabled or invisible.</p>
<p>How do you stop the app (Form) from being resized?</p>
|
<python><user-interface><resize><firemonkey>
|
2023-02-27 21:32:04
| 1
| 4,263
|
Shaun Roselt
|
75,585,728
| 17,517,315
|
Parse float from string in regex python with conversion unit
|
<p>I am trying to convert all of the cases below to a float number.</p>
<p>See below:</p>
<pre><code>
['$1.5 million',
'$2.6 million',
'$2.28 million',
'$600,000',
'$950,000',
'$858,000',
'$788,000',
'$1.35 million',
'$2.125 million',
'$1.5 million',
'$1.5 million',
'$2.2 million',
'$1.8 million',
'$3 million',
'$4 million',
'$2 million',
'$300,000',
'$1.8 million',
'$5 million',
'$4 million',
'$700,000',
'$6 million',
'under $1 million or $1,250,000',
'$2 million',
'$2.5 million',
'$4 million',
'$3.6 million',
'$3 million',
'$3 million',
'$3 million',
'$4.4β6 million',
'$4 million',
'$5 million',
'$4 million',
'$5 million',
'$8 million',
'AU$1 million',
'$5 million',
'$7.5 million',
'$10 million',
'$3.5 to 4 million',
'$5.25 million',
'$20 million',
'$4.5 million',
'$9 million',
'$6-8 million',
'$20 million',
'$18 million',
'$12 million',
'$14 million',
'$10 million',
'$17 million',
'$5 million',
'$8 million',
'$20 million',
'$11 million',
'$28 million',
'$44 million',
'$7.5 million',
'$14 million',
'$9 million',
'A$8.7 million',
'$31 million',
'$18 million',
'$5 million',
'$40 million',
'$20 million',
'$14 million',
'60 million Norwegian Kroner (around $8.7 million in 1989)',
'$35-40 million',
'$25 million',
'$15 million',
'$32 million',
'$14 million',
'$28 million',
'$12 million',
'$11 million',
'$28 million',
'$17 million',
'$30 million',
'$13 million',
'$5 million',
'$45 million',
'$31 million',
'$22 million',
'$30 million',
'$22 million',
['$32 million', '(estimated)'],
'$18 million',
'$55 million',
'$24 million',
'$15 million',
'$12 million',
'$30 million',
'$31 million',
'$38 million',
'$70 million',
'$15 million',
'$67 million',
'$32 million',
'$7 million',
'$85 million',
'$55 million',
'$3 million',
'$16 million',
'$80 million',
'$30 million',
'$24 million',
'$90 million',
'$15 million',
'$30 million',
'$120 million',
'$90 million',
'$65 million',
'$5 million',
'$130 million',
'$75β90 million',
'$10 million',
'$90 million',
'$80β85 million',
'$15 million β$30 million',
'$4,000,000 (estimated)',
'$127.5 million',
'$65 million',
'$30 million',
'$85 million',
'$100 million',
'$23 million',
'$90β120 million',
'$26 million',
'$25 million',
'$115 million',
'$33 million',
'$20 million',
'$5 million',
'$22 million',
'$80 million',
'$35 million',
'$19.2 million',
'$15 million',
'$65 million',
'$140 million',
'$20 million',
'$12 million',
'$46 million',
'$13 million',
'$20 million',
'$17 million',
'$94 million',
'$140 million',
'$26 million',
'$46 million',
'$90 million',
'$10 million',
'$28 million',
'$15 million',
'$110 million',
'$110 million',
'$45 million',
'$92β145 million',
'$100 million',
'$20 million',
'$56 million',
'$25 million',
'$50 million',
['Β₯', '2.4 billion', 'US$24 million'],
'$35 million',
'$35 million',
'$25 million',
'$150 million',
'$180 million',
'$30 million',
'$1 million',
'$40 million',
'$50 million',
'$80 million',
'$120 million',
'$225 million',
'$30 million',
'$24 million',
'$8 million',
'$17 million',
'$150 million',
'$300 million',
'$150 million',
'$25 million',
'$22 million',
'$85 million',
'$130 million',
'$7 million',
'$25 million',
'$225 million',
'$180 million',
'$20 million',
'$11 million',
'$50 million',
'$150 million',
'$80 million',
'$50 million',
'$30 million',
'$175 million',
'$150 million',
['Β₯', '3.4 billion', '(', 'US$', '34 million)'],
'$30β$35 million',
'$8 million ( β½ 350 million)',
'$175β200 million',
'$35 million',
'$105 million',
'$150 β$200 million',
'$150β200 million',
'$200 million',
'$150 million',
'$22 million',
'$30β$35 million',
'$35 million',
'$260 million',
'$170 million',
'$150 million',
'$8 million',
['$410.6 million (gross)', '$378.5 million (net)'],
'$200 million',
'$30 million',
'$45 million',
'$23 million',
['$306.6 million (gross)', '$263.7 million (net)'],
'$185 million',
'$25 million',
'$39 million',
'$30β35 million',
'$165 million',
'$200 million',
'$200 million',
'$225β250 million',
'$50 million',
'$150 million',
'$35 million',
'$25 million',
'$180β263 million',
'$50 million',
'(US$2.9 million)',
'$28 million',
'$165 million',
'$50 million',
'$17 million',
'$84.21β95 million',
'$180β190 million',
'$175 million',
'$175β200 million',
'$70β80 million',
'$150 million',
'$175β177 million',
'$170 million',
'$200 million',
'$140 million',
'$65 million',
'$15 million',
'$150β175 million',
'$160β255 million',
'$230β320 million',
'$175 million',
'131 crore',
['~$8 million', 'β½', '370 million'],
'$175β225 million',
'$100β130 million',
'$200 million',
'$65-70 million',
'$120β133 million',
'$175 million',
'$130 million',
'$170 million',
'$183 million',
'$200 million',
'$250β260 million',
'$185 million',
'$60 million',
'$150 million',
'$40 million',
'$42 million',
'$175β200 million',
'$125 million',
'$12.5 million (stage production)',
'$24 million',
'$200 million',
'$26 million',
'$150 million',
'β½650 million',
'$100 million+',
'$100β200 million',
'$200 million',
'$120β150 million',
'β½454 million',
'$175 million',
'$36 million',
'~$70 million',
'$200 million',
'$40 million',
'$135β180 million',
'$157.8 million',
'$175 million',
'$250β260 million',
'$53.4 million',
'$28 million',
'$858,000',
'$9 million',
'$85 million',
'$70 million',
'$75β90 million',
'$80 million',
'$130 million',
'$5 million',
'$18 million or $25 million',
'$4 million',
['$15,000,000', '(1 film)'],
'$3 million',
'$11β15 million',
'$35-40 million',
'$183 million',
'$100β200 million',
'$30 million',
'$200 million',
'$200 million',
['Total (5 films):', '$1.274β1.364 billion'],
'$183 million',
['Total (5 films):', '$1.274β1.364 billion'],
'$175β177 million',
'$150 million',
'$185 million',
'$150 million']
</code></pre>
<p>We have the following cases:</p>
<ul>
<li>$1 million</li>
<li>$300,000</li>
<li>Ranges of numbers like $1.5-$2 million (in such cases it would be ideal to get the average)</li>
<li>Sometimes we have lists; when we do, we just skip them to avoid more overhead.</li>
</ul>
<p>The following code works pretty well for the first two cases.</p>
<p>It does not work for ranges of numbers with "-", and it also fails in the following case:
'under $1 million or $1,250,000' --> in this case, I just want to extract 1_250_000 as a float number in Python.</p>
<p>If we have $ vs. other currencies, we just take the number that is after the $-sign.</p>
<p>My attempt:</p>
<p>My regular expression is the following:</p>
<pre><code>r"(?=.*mil.*).*\$(?P<value_to_convert>\d*(?:\.\d+)?)\s?(?P<unit>\w+)|.*\$(?P<raw_value>\d+,\d+)"
</code></pre>
<ul>
<li>This takes the positive lookahead to ensure that we have the word "mil"</li>
<li>It takes any digit sequence followed by an optional decimal part and the following word, which would be a unit of measurement</li>
<li>The | alternation matches either that or values that look like '$600,000'</li>
</ul>
<pre><code>import re
pattern = r"(?=.*mil.*).*\$(?P<value_to_convert>\d*(?:\.\d+)?)\s?(?P<unit>\w+)|.*\$(?P<raw_value>\d+,\d+)"
for i in sample:
match = re.match(pattern, i)
print(match.groupdict())
</code></pre>
<p>Any support is appreciated.</p>
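<p>One possible shape of a solution, sketched with hedged assumptions: take the figure after the last "$" (so 'under $1 million or $1,250,000' yields 1,250,000), average any range of numbers, stop reading at the first unit word so trailing text like "in 1989" is ignored, and skip non-strings (lists) entirely:</p>

```python
import re

MULTIPLIERS = {"million": 1e6, "billion": 1e9}
# Matches "1.5", "600,000", "4", etc.
NUM = re.compile(r"\d+(?:,\d{3})*(?:\.\d+)?")

def parse_money(text):
    """Parse one budget string to a float, or return None if it can't be."""
    if not isinstance(text, str) or "$" not in text:
        return None
    tail = text.rsplit("$", 1)[1].lower()
    mult = 1.0
    for word, factor in MULTIPLIERS.items():
        idx = tail.find(word)
        if idx != -1:
            mult = factor
            tail = tail[:idx]  # drop the unit word and anything after it
            break
    nums = [float(n.replace(",", "")) for n in NUM.findall(tail)]
    if not nums:
        return None
    return sum(nums) / len(nums) * mult  # ranges become their average

print(parse_money("$1.5 million"))                    # 1500000.0
print(parse_money("$600,000"))                        # 600000.0
print(parse_money("$6-8 million"))                    # 7000000.0 (average)
print(parse_money("under $1 million or $1,250,000"))  # 1250000.0
```

<p>Splitting into "find the number(s)" and "apply the unit" keeps the regex small; a single pattern trying to cover ranges, units and plain figures at once is much harder to maintain.</p>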
|
<python><python-3.x><regex><python-re>
|
2023-02-27 21:27:14
| 1
| 391
|
matt.aurelio
|
75,585,665
| 11,500,371
|
How can I rename columns based on matching data in another dataframe in Pandas?
|
<p>I have two dataframes where the labeling of products does not always match:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame(data={'Product 1':['Shoes'],'Product 1 Price':[25],'Product 2':['Shirts'],'Product 2 Price':[50],'Product 3':['Pants'],'Product 3 Price':24})
df2 = pd.DataFrame(data={'Product 1':['Shirts'],'Product 1 Price':[60],'Product 2':['Pants'],'Product 2 Price':[30],'Product 3':['Shoes'],'Product 3 Price':14})
</code></pre>
<p>df1</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Product 1</th>
<th>Product 1 Price</th>
<th>Product 2</th>
<th>Product 2 Price</th>
<th>Product 3</th>
<th>Product 3 Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>Shoes</td>
<td>25</td>
<td>Shirts</td>
<td>50</td>
<td>Pants</td>
<td>24</td>
</tr>
</tbody>
</table>
</div>
<p>df2</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Product 1</th>
<th>Product 1 Price</th>
<th>Product 2</th>
<th>Product 2 Price</th>
<th>Product 3</th>
<th>Product 3 Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>Shirts</td>
<td>60</td>
<td>Pants</td>
<td>30</td>
<td>Shoes</td>
<td>14</td>
</tr>
</tbody>
</table>
</div>
<p>Since I need to do an apples to apples comparison on the data, how can I rename the columns in df1 so that the product numbers are the same in df1 and df2?</p>
<p>Ideally, the end result would be a df1 that looks like:</p>
<pre><code>pd.DataFrame(data={'Product 1':['Shirts'],'Product 1 Price':[50],'Product 2':['Pants'],'Product 2 Price':[24],'Product 3':['Shoes'],'Product 3 Price':25})
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Product 1</th>
<th>Product 1 Price</th>
<th>Product 2</th>
<th>Product 2 Price</th>
<th>Product 3</th>
<th>Product 3 Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>Shirts</td>
<td>50</td>
<td>Pants</td>
<td>24</td>
<td>Shoes</td>
<td>25</td>
</tr>
</tbody>
</table>
</div>
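<p>A minimal sketch of one approach, assuming single-row frames as in the example: build a name-to-price lookup from df1, then rebuild df1's columns in the slot order that df2 uses:</p>

```python
import pandas as pd

df1 = pd.DataFrame(data={'Product 1': ['Shoes'], 'Product 1 Price': [25],
                         'Product 2': ['Shirts'], 'Product 2 Price': [50],
                         'Product 3': ['Pants'], 'Product 3 Price': [24]})
df2 = pd.DataFrame(data={'Product 1': ['Shirts'], 'Product 1 Price': [60],
                         'Product 2': ['Pants'], 'Product 2 Price': [30],
                         'Product 3': ['Shoes'], 'Product 3 Price': [14]})

n = 3  # number of product slots

# Map each product name in df1 to its price
price_by_name = {df1[f'Product {i}'].iloc[0]: df1[f'Product {i} Price'].iloc[0]
                 for i in range(1, n + 1)}

# Rebuild df1 using df2's slot assignment, so slots line up product-by-product
data = {}
for i in range(1, n + 1):
    name = df2[f'Product {i}'].iloc[0]
    data[f'Product {i}'] = [name]
    data[f'Product {i} Price'] = [price_by_name[name]]

df1_aligned = pd.DataFrame(data)
print(df1_aligned)
```

<p>With the slots aligned, column-by-column comparison between df1_aligned and df2 is apples to apples.</p>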
|
<python><pandas><dataframe>
|
2023-02-27 21:18:49
| 2
| 337
|
Sean R
|
75,585,657
| 2,908,017
|
How to specify where a Python FMX GUI Form opens?
|
<p>I'm creating a <code>Form</code> using <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI library for Python</a> and I would like to choose a specific location where the Form should open. Here's the code that I use to create my Form:</p>
<pre><code>from delphifmx import *
class frmMain(Form):
def __init__(self, owner):
self.Caption = 'My Form'
self.Width = 1000
self.Height = 500
def main():
Application.Initialize()
Application.Title = "My Application"
Application.MainForm = frmMain(Application)
Application.MainForm.Show()
Application.Run()
Application.MainForm.Destroy()
main()
</code></pre>
<p>Are there any built-in functions, procedures, or properties that I can use to change the position of the Form?</p>
<p>For some of my forms, I want them to be centered and for others, I want to set a custom X and Y coordinate.</p>
|
<python><user-interface><firemonkey><centering>
|
2023-02-27 21:17:26
| 1
| 4,263
|
Shaun Roselt
|