| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
74,815,268
| 11,946,045
|
How to find out the IP range for a certain country code in python?
|
<p>Is there a way to figure out the IP ranges for a certain country code without using <a href="https://pypi.org/project/maxminddb/" rel="nofollow noreferrer">MaxMind</a> or <a href="https://www.ip2location.com/development-libraries/ip2location/python" rel="nofollow noreferrer">IP2Location</a>? Both provide a binary file against which one has to check a single IP at a time.</p>
<p>Neither source provides a way to list the ranges for a certain country.</p>
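One alternative worth sketching (an assumption about the data source, not a confirmed recommendation): each regional registry publishes a plain-text "delegated" statistics file (e.g. a file like `delegated-ripencc-latest`; the exact file name is an assumption here) listing every allocation as a start address plus an address count, which can be filtered by country code without any binary database:

```python
import ipaddress

# Two lines in the RIR "delegated" stats format: registry|cc|type|start|count|date|status
sample = """\
ripencc|FR|ipv4|2.0.0.0|1048576|20100712|allocated
ripencc|DE|ipv4|2.16.38.0|512|20110323|allocated
"""

def country_ranges(text, cc):
    """List (first_ip, last_ip) ranges for one country code."""
    ranges = []
    for line in text.splitlines():
        parts = line.split("|")
        if len(parts) >= 7 and parts[1] == cc and parts[2] == "ipv4":
            first = ipaddress.IPv4Address(parts[3])
            # count addresses forward from the start of the block
            ranges.append((str(first), str(first + int(parts[4]) - 1)))
    return ranges

print(country_ranges(sample, "FR"))  # [('2.0.0.0', '2.15.255.255')]
```

The same parsing applies to the real downloaded file; only the two sample lines above are made up for illustration.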
|
<python><geolocation><maxmind><ip2location>
|
2022-12-15 17:15:53
| 1
| 814
|
Weed Cookie
|
74,815,173
| 19,069,334
|
FastAPI API Versioning - specifying deprecated version in Header
|
<p>I am using a <code>Header</code> named <code>Accept-Version</code> to handle API versioning in my FastAPI project at the function level. Following <a href="https://fastapi.tiangolo.com/tutorial/path-params/#predefined-values" rel="nofollow noreferrer">fastapi documentation</a>, a sub-class that inherits from <code>str</code> and from <code>Enum</code> is used to predefine the version that the header can accept.</p>
<p>As the project goes on, I want to mark the old versions as deprecated without removing them. The code is as follows.</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
from typing import Union
from fastapi import FastAPI, Header
from pydantic import BaseModel

app = FastAPI()

class VersionNumber(str, Enum):
    _1_1 = "1.1 (deprecated)"
    _1_2 = "1.2"
    _1_3 = "1.3"

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
async def update_item(item: Union[Item, None], accept_version: VersionNumber = Header(VersionNumber._1_2)):
    accept_version = accept_version.replace('(deprecated)', '')
    accept_version = float(accept_version)
    if accept_version == 1.1:
        return item
    elif accept_version >= 1.2 and accept_version < 1.3:
        return item.dict()["name"], item.dict()["price"]
    else:
        return item.dict()["name"]
</code></pre>
<p>Based on the code above, the Swagger UI is able to show version <code>1.1</code> as deprecated, which looks like this.</p>
<p><img src="https://i.sstatic.net/FtS9L.png" alt="" /></p>
<p>However, when sending the request, it is accepting <code>1.1 (deprecated)</code> instead of <code>1.1</code> as a value for <code>Accept-Version</code> header, which is not desirable. Is there a way to elegantly mark version <code>1.1</code> as deprecated and keep the header as <code>1.1</code>?</p>
|
<python><enums><fastapi><api-versioning>
|
2022-12-15 17:06:45
| 0
| 1,176
|
wavingtide
|
74,815,168
| 2,659,499
|
Method parameter type referring to the class it's defined in is causing "AttributeError: partially initialized module"
|
<p>I have a dataclass in <code>extensions</code> > <code>__init__.py</code> like:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Range:
    start: date
    end: date

    def overlap(self, other: Range) -> bool:
        # method body removed
</code></pre>
<p>My IDE is complaining about the method parameter</p>
<pre><code>Unresolved reference 'Range'
</code></pre>
<p>If I change the parameter type to</p>
<pre class="lang-py prettyprint-override"><code>def overlap(self, other: extensions.Range) -> bool:
</code></pre>
<p>IDE stops complaining but when I run my program I get</p>
<pre><code>partially initialized module 'extensions' has no attribute 'Range' (most likely due to a circular import)
</code></pre>
<p>I'm learning typing in Python and cannot figure out how to declare this type for my method parameter.</p>
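For reference, the usual way to annotate a method with its own enclosing class is a postponed (string) annotation, either via `from __future__ import annotations` or by quoting the name as `"Range"`; this also avoids importing the package's own top-level name from inside `extensions/__init__.py`, which is what triggers the circular-import error. A minimal sketch (the `overlap` body below is a placeholder, since the original was removed from the question):

```python
from __future__ import annotations  # PEP 563: all annotations become lazy strings

from dataclasses import dataclass
from datetime import date

@dataclass
class Range:
    start: date
    end: date

    # 'Range' now resolves even though the class is still being defined;
    # on older Pythons, quoting it as "Range" achieves the same thing
    def overlap(self, other: Range) -> bool:
        # placeholder body: two date ranges overlap when each starts
        # before the other ends
        return self.start <= other.end and other.start <= self.end

a = Range(date(2022, 1, 1), date(2022, 6, 1))
b = Range(date(2022, 5, 1), date(2022, 12, 1))
print(a.overlap(b))  # True
```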
|
<python><python-typing>
|
2022-12-15 17:06:30
| 1
| 1,975
|
Vahid
|
74,815,128
| 136,598
|
Separate or merged indices for index.yaml with GAE
|
<p>I do queries like this with Python GAE:</p>
<pre><code>a1 = Activity.query().filter(Activity.first=='foo').order(-Activity.date)
a2 = Activity.query().filter(Activity.second=='bar').order(-Activity.date)
</code></pre>
<p>Am I better off with two separate indices:</p>
<pre><code>indexes:
- kind: Activity
  properties:
  - name: first
  - name: date
    direction: desc

- kind: Activity
  properties:
  - name: second
  - name: date
    direction: desc
</code></pre>
<p>Or with a single index that covers both:</p>
<pre><code>indexes:
- kind: Activity
  properties:
  - name: first
  - name: second
  - name: date
    direction: desc
</code></pre>
<p>I suspect the former is better but wanted to check.</p>
<p>=====</p>
<p>To the person who voted to close as "opinion based", the two options above will change the index size and possibly also the speed of responding to queries. Definitely not opinion based. There is a right answer.</p>
|
<python><google-app-engine><indexing><google-cloud-datastore>
|
2022-12-15 17:04:07
| 1
| 16,643
|
minou
|
74,815,119
| 2,050,067
|
Unique variable naming patterns in deep learning code
|
<p>Need help on understanding some naming patterns I see so far.</p>
<p>Reading <a href="https://huggingface.co/blog/annotated-diffusion" rel="nofollow noreferrer">The Annotated Diffusion Model</a> source code, notice the camel case of some of the functions:</p>
<pre><code> def Upsample(dim, dim_out=None):
return nn.Sequential(
nn.Upsample(scale_factor=2, mode="nearest"),
nn.Conv2d(dim, default(dim_out, dim), 3, padding=1),
)
</code></pre>
<p>I wonder if it is because it returns an object. The function is used like a constructor?</p>
<p>It somehow reminds me of <code>import torch.nn.functional as F</code>, which I must have seen millions of times. But I can't recall why the capital F is used there. A Google search did not turn up any good answers.</p>
|
<python><pytorch><naming-conventions>
|
2022-12-15 17:03:21
| 0
| 2,884
|
neurite
|
74,815,086
| 1,473,517
|
How to find the smallest threshold from one list to fit another list
|
<p>I have two lists of marks for the same set of students. For example:</p>
<pre><code>A = [22, 2, 88, 3, 93, 84]
B = [66, 0, 6, 33, 99, 45]
</code></pre>
<p>If I accept only students above a threshold according to list A then I can look up their marks in list B. For example, if I only accept students with at least a mark of 80 from list A then their marks in list B are [6, 99, 45].</p>
<p>I would like to compute the smallest threshold for A which gives at least 90% of students in the derived set in B getting at least 50. In this example the threshold will have to be 93 which gives the list [99] for B.</p>
<p>Another example:</p>
<pre><code>A = [3, 36, 66, 88, 99, 52, 55, 42, 10, 70]
B = [5, 30, 60, 80, 80, 60, 45, 45, 15, 60]
</code></pre>
<p>In this case we have to set the threshold to 66 which then gives 100% of [60, 80, 80, 60] getting at least 50.</p>
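The two examples above can be solved with a brute-force sketch: try every candidate threshold taken from A in increasing order, and return the first one under which the selected B-marks meet the pass rate:

```python
def smallest_threshold(A, B, pass_mark=50, frac=0.9):
    # candidate thresholds only need to be the distinct values of A:
    # any other threshold selects the same subset as one of these
    for t in sorted(set(A)):
        selected = [b for a, b in zip(A, B) if a >= t]
        if selected and sum(b >= pass_mark for b in selected) >= frac * len(selected):
            return t
    return None  # no threshold satisfies the requirement

print(smallest_threshold([22, 2, 88, 3, 93, 84], [66, 0, 6, 33, 99, 45]))  # 93
print(smallest_threshold([3, 36, 66, 88, 99, 52, 55, 42, 10, 70],
                         [5, 30, 60, 80, 80, 60, 45, 45, 15, 60]))         # 66
```

This is O(n^2) in the number of students; sorting the pairs by A-mark once and sweeping from the top would bring it to O(n log n) if needed.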
|
<python><algorithm>
|
2022-12-15 17:00:38
| 2
| 21,513
|
Simd
|
74,814,895
| 7,171,984
|
Create new column based on condition from one column and the value from another column in pandas
|
<p>I have a DF with time given as hours, minutes or milliseconds.
The time column has the type float, and the time_unit column indicates whether it is given as hours, minutes or ms.</p>
<p>I want to create a new column that calculates the number of seconds. Thus, I need a function that first checks the time_unit, then takes the value from time and converts it to seconds.</p>
<p>For example:</p>
<pre><code>if df["time_unit"] == "h":
    return df["time"] * 60 * 60  # given hours as int
elif ...
</code></pre>
<p>My df looks like this:</p>
<p><a href="https://i.sstatic.net/gaTmL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gaTmL.png" alt="enter image description here" /></a></p>
<p>I want to create the green column (seconds). So, how do I do this in pandas?</p>
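A vectorized sketch of this (the unit labels `"h"`, `"min"`, `"ms"` are assumptions, since the screenshot's exact values are not visible): map each unit to a multiplicative factor, then multiply, with no row-by-row function at all:

```python
import pandas as pd

# assumed unit labels; adjust the keys to the labels actually in time_unit
df = pd.DataFrame({"time": [1.5, 30.0, 500.0], "time_unit": ["h", "min", "ms"]})

# one factor per unit; Series.map + multiplication is fully vectorized
factors = {"h": 3600.0, "min": 60.0, "ms": 0.001}
df["seconds"] = df["time"] * df["time_unit"].map(factors)
print(df["seconds"].tolist())  # [5400.0, 1800.0, 0.5]
```

Unmapped units become NaN, which makes typos in `time_unit` easy to spot.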
|
<python><pandas>
|
2022-12-15 16:46:21
| 2
| 305
|
Energizer1
|
74,814,866
| 2,049,273
|
gzip decoding error when using Python requests but not curl
|
<p>I'm making a POST request of some JSON to an endpoint (Vertex AI Endpoint) and the request works when I make it via curl:</p>
<pre class="lang-bash prettyprint-override"><code>curl \
-X POST \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
$API_URL \
-d "@input.json"
</code></pre>
<p>but if I try the same in Python:</p>
<pre class="lang-py prettyprint-override"><code>import json
import requests

with open("input.json") as f:
    input_dict = json.load(f)

response = requests.post(
    url,
    json=input_dict,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}"
    }
)
</code></pre>
<p>I get the following exception:</p>
<pre class="lang-py prettyprint-override"><code>requests.exceptions.ContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing data: incorrect header check'))
</code></pre>
<p>I inspected the headers to see the difference between the curl request and the python one and I see in Python, the header <code>Accept-Encoding: gzip, deflate</code> is added to the request. Is this causing a problem and if so, is there a way I can modify it?</p>
<p>Also, if I modify the request to decode the raw response, I can use it correctly:</p>
<pre class="lang-py prettyprint-override"><code>response = requests.post(
    url,
    json=input_dict,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}"
    },
    stream=True
)
parsed_response = json.loads(response.raw.read().decode('utf-8'))
</code></pre>
<p>but I'm still trying to understand why the first option doesn't work and where the issue lies.</p>
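One thing worth testing (an assumption about the cause, not a confirmed fix): requests adds `Accept-Encoding: gzip, deflate` automatically, and that header can be overridden per request so the server is asked not to compress at all. The sketch below only prepares the request without sending it, and the URL is a placeholder, not the real Vertex AI endpoint:

```python
import requests

# Prepare (but do not send) a request that asks the server to skip compression
req = requests.Request(
    "POST",
    "https://example.com/predict",  # placeholder URL
    json={"instances": []},
    headers={"Accept-Encoding": "identity", "Content-Type": "application/json"},
).prepare()

print(req.headers["Accept-Encoding"])  # identity
```

If the error disappears with `identity`, the server is likely declaring `Content-Encoding: gzip` on a body that is not actually gzip-compressed, which is a server-side mismatch rather than a client bug.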
|
<python><python-requests>
|
2022-12-15 16:44:10
| 0
| 1,271
|
swigganicks
|
74,814,799
| 3,087,409
|
Convert pandas dataframe to LaTeX without printing index
|
<p>I'm trying to print a dataframe as a LaTeX table, but I seem to have two bad options. (Note I'm using <code>io.formats.style.Styler.to_latex</code> rather than <code>dataframe.to_latex</code>, since there's a deprecation warning on the latter. But <code>dataframe.to_latex</code> doesn't solve my issue anyway; it just changes it to a different issue.)</p>
<p>By default the LaTeX table looks like this:</p>
<pre><code>------------------------
column name
index name
------------------------
data data
data data
</code></pre>
<p>with the name of the index one row down from the name of the column.</p>
<p>I can do:</p>
<pre><code>df[index_as_column] = df.index
df = df.reset_index(drop=True)
</code></pre>
<p>So my table looks like this:</p>
<pre><code>------------------------
index_as_column column name
------------------------
0 data data
1 data data
</code></pre>
<p>The index gets printed whether I like it or not (I don't).</p>
<p>So my question is, how do I get a table with the column names on the same line and no index printed?</p>
|
<python><pandas><latex>
|
2022-12-15 16:37:15
| 2
| 2,811
|
thosphor
|
74,814,697
| 2,897,989
|
LabelEncoding the target variable improves model massively vs. one-hot-encoding the same
|
<p>I'm using a Random Forest classifier on text data transformed into tf-idf (both the features and the target variable are text, the target variable being company names). Since using a LabelEncoder adds ordinality where there is none, I first tried to one-hot encode the companies (the full company name would be one column). This resulted in a 0.48 score. I changed it so that the companies are now LabelEncoded, and the score (with cross-validation but with the same parameters) jumped to 0.75.</p>
<p>I have two questions related to this.</p>
<p>Sklearn's <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html#sklearn.preprocessing.LabelEncoder" rel="nofollow noreferrer">documentation</a> does mention that the LabelEncoder can be used to encode target variables. Does that mean the added ordinality is somehow negated, and it's actually okay to use it to encode the target variable this way?</p>
<p>Also, can you help me understand what led to this big improvement? Is it a "fake" improvement somehow caused by the added ordinality?</p>
<p>Thank you!</p>
|
<python><machine-learning><random-forest>
|
2022-12-15 16:28:34
| 1
| 7,601
|
lte__
|
74,814,680
| 1,230,694
|
Merge data frame onto another but start from a certain row
|
<p>I have a large data frame which has reading data in it, and I want to merge another <code>dataframe</code> of the same structure but a subset of columns and far fewer rows.</p>
<p>The idea is that the large <code>dataframe</code> represents almost all of what I want but I will have a set of readings that might start at any point (row) in the larger frame that I need to drop the columns onto.</p>
<p>As an example if the large data frame looked similar to this and had 5 rows:</p>
<pre><code> A B
0 1 11
1 2 12
2 3 13
3 4 14
4 5 15
</code></pre>
<p>The smaller <code>dataframe</code> looks like the following and has fewer rows and only one of the columns:</p>
<pre><code> B
0 1000
1 2000
</code></pre>
<p>When I merge I want to have a <code>dataframe</code> that contains all the row count of the first, but I want to "overlay" the second frame onto it from a row I specify, so for example from row 2, so I would expect then for the new <code>dataframe</code> to look like this:</p>
<pre><code> A B
0 1 11
1 2 12
2 3 1000
3 4 2000
4 5 15
</code></pre>
<p>The end result is that the new <code>dataframe</code> is the same size as the first, but the value of column B has been updated, from a row I specify to the length of the second <code>dataframe</code> and only for the columns in the second data frame.</p>
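A sketch of this overlay using the example frames above: take a positional slice of the big frame's index, restrict the assignment to the small frame's columns, and assign the small frame's values:

```python
import pandas as pd

big = pd.DataFrame({"A": [1, 2, 3, 4, 5], "B": [11, 12, 13, 14, 15]})
small = pd.DataFrame({"B": [1000, 2000]})

start = 2  # the row position in `big` where the overlay should begin

# positional slice of the index, and only the overlay's own columns;
# .to_numpy() sidesteps index alignment between the two frames
big.loc[big.index[start:start + len(small)], small.columns] = small.to_numpy()
print(big["B"].tolist())  # [11, 12, 1000, 2000, 15]
```

Columns not present in `small` (here `A`) are left untouched, matching the expected output.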
|
<python><pandas><dataframe>
|
2022-12-15 16:27:08
| 2
| 3,899
|
berimbolo
|
74,814,676
| 1,700,890
|
My python module does not see other modules
|
<p>I have a folder with 2 files.</p>
<pre><code>import_test.py
main.py
</code></pre>
<p>Here is content of import_test.py:</p>
<pre><code>def some_function():
    df = pd.DataFrame({'col_1': [1,2,3],
                       'col_2': [1,2,3]})
    return df
</code></pre>
<p>Here is content of main.py:</p>
<pre><code>import import_test
import pandas as pd
import importlib
importlib.reload(import_test)
import_test.some_function()
</code></pre>
<p>When I execute <code>import_test.some_function()</code>, I get back the following error:</p>
<pre><code>NameError: name 'pd' is not defined
</code></pre>
<p>I guess I can solve this problem by adding <code>import pandas as pd</code> in my <code>import_test.py</code> file, but this seems redundant to me, since <code>main.py</code> already has the import statement for pandas. Is there way to avoid the redundancy?</p>
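The short answer is no: each module resolves names in its own namespace, so the `import pandas as pd` in `main.py` never makes `pd` visible inside `import_test.py`. The import is not redundant in the costly sense either, since Python caches modules after the first import; repeating the line in every module only binds a name. A self-contained demonstration (writing the module to a temp directory just to keep the example runnable in one file):

```python
import importlib
import os
import sys
import tempfile

# import_test.py must carry its own pandas import
module_src = """\
import pandas as pd  # required here; main.py's import does not reach this file

def some_function():
    return pd.DataFrame({'col_1': [1, 2, 3], 'col_2': [1, 2, 3]})
"""

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "import_test.py"), "w") as f:
        f.write(module_src)
    sys.path.insert(0, d)
    import_test = importlib.import_module("import_test")
    df = import_test.some_function()

print(df.shape)  # (3, 2)
```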
|
<python><import><module>
|
2022-12-15 16:26:42
| 2
| 7,802
|
user1700890
|
74,814,642
| 1,649,095
|
Predict with keras triggers OOM error message
|
<p>I fit a keras model without any problem. But when I try to predict a test sample, it triggers an OOM error message:</p>
<blockquote>
<p>InternalError: Failed copying input tensor from
/job:localhost/replica:0/task:0/device:CPU:0 to
/job:localhost/replica:0/task:0/device:GPU:0 in order to run
_EagerConst: Dst tensor is not initialized.</p>
<p>2022-12-15 14:52:24.087105: I
tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow
binary is optimized with oneAPI Deep Neural Network Library (oneDNN)
to use the following CPU instructions in performance-critical
operations: AVX AVX2 To enable them in other operations, rebuild
TensorFlow with the appropriate compiler flags. 2022-12-15
14:52:26.128616: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device
/job:localhost/replica:0/task:0/device:GPU:0 with 13623 MB memory: ->
device: 0, name: NVIDIA RTX A5000 Laptop GPU, pci bus id:
0000:01:00.0, compute capability: 8.6 2022-12-15 14:52:26.711913: I
tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of
the MLIR Optimization Passes are enabled (registered 2) 2022-12-15
14:52:27.768758: I tensorflow/stream_executor/cuda/cuda_blas.cc:1760]
TensorFloat-32 will be used for the matrix multiplication. This will
only be logged once. 2022-12-15 16:28:52.564740: W
tensorflow/core/common_runtime/bfc_allocator.cc:457] Allocator
(GPU_0_bfc) ran out of memory trying to allocate 16.84MiB (rounded to
17655296)requested by op model/output/Softmax</p>
</blockquote>
<p>The fact is my model's output is 137K in size (a list of products I apply a softmax over). Why does it only happen during the predict phase? I tried reducing the batch_size to 1, to no avail.</p>
|
<python><tensorflow><keras>
|
2022-12-15 16:24:23
| 0
| 2,703
|
Mark Morrisson
|
74,814,513
| 6,461,882
|
Copy python string characters to array.array (to be accessed later from C++)?
|
<p>How can one copy python string characters to <code>array.array</code>?</p>
<pre class="lang-py prettyprint-override"><code>from array import array
b = array('b', [0]*30)
s = 'abc'
# What should one do here to copy integer representation of 's' characters to 'b' ?
</code></pre>
<p>The integer representation of <code>s</code> characters should make sense to C++: i.e. if I convert the integers in <code>b</code> to C++ <code>char</code> string, I should get back "abc".</p>
<p>The best idea I have is below (but it would be good avoid explicit python loops):</p>
<pre class="lang-py prettyprint-override"><code>for n,c in enumerate(s): b[n] = ord(c)
</code></pre>
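The loop can indeed be avoided: encoding the string to bytes and slice-assigning does the copy in bulk at the C level, and the resulting integers are exactly the `char` values C++ will see:

```python
from array import array

b = array('b', [0] * 30)
s = 'abc'

# bulk copy without a Python-level loop: encode to bytes, then slice-assign
encoded = s.encode('ascii')
b[:len(encoded)] = array('b', encoded)

print(bytes(b[:len(encoded)]))  # b'abc'
```

For non-ASCII text, the encoding (e.g. `'utf-8'`) must match whatever the C++ side expects; with typecode `'b'` (signed char), bytes above 127 wrap to negative values, while `'B'` keeps them unsigned.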
<p>Thank you very much for your help!</p>
|
<python><c++><arrays><string><data-conversion>
|
2022-12-15 16:13:38
| 2
| 2,855
|
S.V
|
74,814,445
| 10,866,873
|
Tkinter <Enter> bind looping forever
|
<p>I am trying to make a selectable clock face for events. I have my Canvas object and created a bunch of circles with text inside them.</p>
<p>I am wanting to invert the colour of the circle and text when the mouse is hovering over.</p>
<p>The issues are:</p>
<ul>
<li>Leave detected when entering the text inside a circle.</li>
<li>Moving too fast causes a repeating Enter/Leave loop which crashes the program.</li>
</ul>
<p>code:</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import *
from math import sin, cos, pi

root = Tk()

class TimePicker(Canvas):
    def __init__(self, master, **kwargs):
        self.tformat = kwargs.pop('format', 24)
        self.width, self.height, self.radious = kwargs['width'], kwargs['height'], rd = (kwargs.pop('radious', 100)*2, )*3
        self.radious /= 4
        assert self.tformat in [12, 24], "Time Format must be '12' or '24'"
        super(TimePicker, self).__init__(master, **kwargs)
        self.active_line = None
        self.create_center_circle(self.width/2, self.height/2, self.radious*2, fill="white", outline="#000", width=1)
        self.circle_numbers(self.width/2, self.height/2, self.radious+5, 10, [12, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], 'Helvetica 11 bold', "Hours")
        self.circle_numbers(self.width/2, self.height/2, self.radious*2-15, 10, [0, 15, 30, 45], 'Helvetica 11 bold', "Minutes")

    def create_center_circle(self, x, y, r, **kwargs):
        return super().create_oval(x-r, y-r, x+r, y+r, **kwargs)

    def create_circle_arc(self, x, y, r, **kwargs):
        if "start" in kwargs and "end" in kwargs:
            kwargs["extent"] = kwargs["end"] - kwargs["start"]
            del kwargs["end"]
        return super().create_arc(x-r, y-r, x+r, y+r, **kwargs)

    def circle_numbers(self, x: int, y: int, r: int, cr: int, numbers: list, font: str, tp: str):
        _angle = 360/len(numbers)
        for n in numbers:
            ax = r * sin(pi * 2 * (360-_angle*n-180) / 360)
            ay = r * cos(pi * 2 * (360-_angle*n-180) / 360)
            tag = f'{tp}:{str(n)}'
            cl = self.create_center_circle(x+ax, y+ay, cr, fill="white", outline="#000", width=1, tag=tag)
            tx = self.create_text(x+ax, y+ay, text=str(n).zfill(2), fill="black", font=(font), tag='tx'+tag)
            self.tag_bind(f'{tp}:{str(n)}', '<Enter>', lambda e=Event(), c=(x+ax, y+ay), t=tag, s=True: self._hover(e, c, s, t))
            self.tag_bind(f'{tp}:{str(n)}', '<Leave>', lambda e=Event(), c=(x+ax, y+ay), t=tag, s=False: self._hover(e, c, s, t))
            self.tag_bind(f'{tp}:{str(n)}', '<Button-1>', lambda e=Event(), c=cl, s=tx, n=n, t=tp: self._set_number(e, c, s, n, t))

    def _hover(self, event, coords, state, tag):
        print('hover')
        if state:  # If hovering inside the object
            print("hovering")
            cl = event.widget.find_withtag(tag)  # the circle with the hovered tag
            tx = event.widget.find_withtag('tx'+tag)
            self.itemconfigure(cl, fill='black')
            self.itemconfigure(tx, fill="white")
            self.active_line = self.create_line(self.width/2, self.height/2, coords[0], coords[1], fill="black", width=2)  # create new line
        else:  # If left the object
            print("exited")
            if self.active_line is not None:  # if there is a line
                cl = event.widget.find_withtag(tag)  # the circle and text in the tag
                tx = event.widget.find_withtag('tx'+tag)
                self.itemconfigure(cl, fill='white')
                self.itemconfigure(tx, fill="black")
                self.delete(self.active_line)
                self.active_line = None

    def _set_number(self, event, cl, tx, number, tp):
        print('set', cl, tx, number, tp)

if __name__ == '__main__':
    tp = TimePicker(root, format=24, background="red", radious=100)
    tp.pack(side=TOP)
    root.mainloop()
</code></pre>
|
<python><tkinter><tkinter-canvas>
|
2022-12-15 16:08:50
| 1
| 426
|
Scott Paterson
|
74,814,437
| 13,142,245
|
JAX: Passing a dictionary rather than arg nums to identify variables for autodifferentiation
|
<p>I want to use JAX as a vehicle for gradient descent; however, I have a moderately large number of parameters and would prefer to pass them as a dictionary <code>f(func, dict)</code> rather than <code>f(func, x1, ...xn)</code>.</p>
<p>So instead of</p>
<pre class="lang-py prettyprint-override"><code># https://www.kaggle.com/code/grez911/tutorial-efficient-gradient-descent-with-jax/notebook
def J(X, w, b, y):
    """Cost function for a linear regression. A forward pass of our model.

    Args:
        X: a features matrix.
        w: weights (a column vector).
        b: a bias.
        y: a target vector.

    Returns:
        scalar: a cost of this solution.
    """
    y_hat = X.dot(w) + b            # Predict values.
    return ((y_hat - y)**2).mean()  # Return cost.

for i in range(100):
    w -= learning_rate * grad(J, argnums=1)(X, w, b, y)
    b -= learning_rate * grad(J, argnums=2)(X, w, b, y)
</code></pre>
<p>Something more like</p>
<pre class="lang-py prettyprint-override"><code>for i in range(100):
    w -= learning_rate * grad(J, arg_key='w')(arg_dict)
    b -= learning_rate * grad(J, arg_key='b')(arg_dict)
</code></pre>
<p>Is this possible?</p>
<p><strong>EDIT</strong>:</p>
<p>This is my current workaround:</p>
<pre class="lang-py prettyprint-override"><code>import jax.numpy as np
from jax import grad

# A features matrix.
X = np.array([
    [4., 7.],
    [1., 8.],
    [-5., -6.],
    [3., -1.],
    [0., 9.]
])

# A target column vector.
y = np.array([
    [37.],
    [24.],
    [-34.],
    [16.],
    [21.]
])

learning_rate = 0.01
w = np.zeros((2, 1))
b = 0.

def J(X, w, b, y):
    """Cost function for a linear regression. A forward pass of our model.

    Args:
        X: a features matrix.
        w: weights (a column vector).
        b: a bias.
        y: a target vector.

    Returns:
        scalar: a cost of this solution.
    """
    y_hat = X.dot(w) + b            # Predict values.
    return ((y_hat - y)**2).mean()  # Return cost.

# Define your function arguments as a dictionary
arg_dict = {
    'X': X,
    'w': w,
    'b': b,
    'y': y
}
idx_dict = {idx: name for idx, name in enumerate(arg_dict.keys())}
arg_arr = [arg_dict[idx_dict[idx]] for idx in range(len(arg_dict))]

for i in range(100):
    for idx, name in idx_dict.items():
        var = arg_dict[idx_dict[idx]]
        var -= learning_rate * grad(J, argnums=idx)(*arg_arr)
</code></pre>
<p>The gist is that now I don't need to write grad(...) for every single variable that needs autodifferentiation.</p>
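JAX supports this natively: `grad` differentiates with respect to any pytree, including a dict, and returns gradients with the same structure, so no `argnums` bookkeeping is needed. A sketch reusing the question's data (the convergence check at the end is against the exact solution w=[5, 2], b=3 for this dataset):

```python
import jax.numpy as jnp
from jax import grad

def J(params, X, y):
    # same linear-regression cost, but all trainable values live in one
    # dict (a pytree); grad(J) returns a dict with the same keys
    y_hat = X.dot(params["w"]) + params["b"]
    return ((y_hat - y) ** 2).mean()

X = jnp.array([[4., 7.], [1., 8.], [-5., -6.], [3., -1.], [0., 9.]])
y = jnp.array([[37.], [24.], [-34.], [16.], [21.]])
params = {"w": jnp.zeros((2, 1)), "b": 0.0}

learning_rate = 0.01
for _ in range(1000):
    grads = grad(J)(params, X, y)  # dict with keys "w" and "b"
    params = {k: params[k] - learning_rate * grads[k] for k in params}

print(float(J(params, X, y)))
```

Only the parameter dict needs to be the differentiated argument; fixed data like `X` and `y` can stay as ordinary arguments.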
|
<python><gradient-descent><jax>
|
2022-12-15 16:07:59
| 1
| 1,238
|
jbuddy_13
|
74,814,420
| 7,585,973
|
How to cut/sampling by csv filesize on pyspark
|
<p><code>df</code> is a pyspark dataframe; I need to cut it by size for production environment data, for example to 100 MB.</p>
<p>Here's the script I usually use (for the non-cutting scenario):</p>
<p><code>df.coalesce(1).write.csv(output_path+"/folder_name", header=True, mode='overwrite')</code></p>
<p>How do I cut off some initial rows (for example, the first 100 MB)?</p>
<p>I use <code>.limit(10000)</code> when I create df and want to minimize the file. What I actually need is to investigate how big a file my infra can save, because cutting the dataframe is still sampling, and I want as much sample as possible.</p>
|
<python><dataframe><pyspark>
|
2022-12-15 16:06:21
| 0
| 7,445
|
Nabih Bawazir
|
74,814,386
| 9,182,743
|
How to combine rows in groupby with several conditions?
|
<p>I want to combine rows in pandas df with the following logic:</p>
<ul>
<li>dataframe is grouped by users</li>
<li>rows are ordered by start_at_min</li>
<li>rows are combiend when:</li>
</ul>
<p>Case A: if start_at_min <= 200:</p>
<ul>
<li>combine when row2[start_at_min] - row1[stop_at_min] < 5</li>
<li>(e.g. 101 - 100 = 1 -> <strong>combine</strong>; 200 - 100 = 100 -> <strong>don't combine</strong>)</li>
</ul>
<p>Case B: if 200 < start_at_min < 400:</p>
<ul>
<li>change the threshold to 3</li>
</ul>
<p>Case C: if start_at_min > 400:</p>
<ul>
<li>never combine</li>
</ul>
<p>Example df</p>
<pre class="lang-py prettyprint-override"><code> user start_at_min stop_at_min
0 1 100 150
1 1 152 201 #row0 with row1 combine
2 1 205 260 #row1 with row 2 NO -> start_at_min above 200 -> threshol =3
3 2 65 100 #no
4 2 200 265 #no
5 2 300 451 #no
6 2 452 460 #no -> start_at_min above 400-> never combine
</code></pre>
<p>Expected output:</p>
<pre class="lang-py prettyprint-override"><code> user start_at_min stop_at_min
0 1 100 201 #row1 with row2 combine
2 1 205 260 #row2 with row 3 NO -> start_at_min above 200 -> threshol =3
3 2 65 100 #no
4 2 200 265 #no
5 2 300 451 #no
6 2 452 460 #no -> start_at_min above 400-> never combine
</code></pre>
<p>I have written the function <code>combine_rows</code>, which takes in 2 Series and applies this logic:</p>
<pre class="lang-py prettyprint-override"><code>def combine_rows(s1: pd.Series, s2: pd.Series):
    # take 2 rows and combine them if start_at_min of row2 - stop_at_min of row1 < 5
    if s2['start_at_min'] - s1['stop_at_min'] < 5:
        return pd.Series({
            'user': s1['user'],
            'start_at_min': s1['start_at_min'],
            'stop_at_min': s2['stop_at_min']
        })
    else:
        return pd.concat([s1, s2], axis=1).T
</code></pre>
</code></pre>
<p>However, I am unable to apply this function to the dataframe.
This was my attempt:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby('user').sort_values(by=['start_at_min']).apply(combine_rows) # this not working
</code></pre>
<p>Here is the full code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np

df = pd.DataFrame({
    "user": [1, 1, 2, 2],
    'start_at_min': [60, 101, 65, 200],
    'stop_at_min': [100, 135, 100, 265]
})

def combine_rows(s1: pd.Series, s2: pd.Series):
    # take 2 rows and combine them if start_at_min of row2 - stop_at_min of row1 < 5
    if s2['start_at_min'] - s1['stop_at_min'] < 5:
        return pd.Series({
            'user': s1['user'],
            'start_at_min': s1['start_at_min'],
            'stop_at_min': s2['stop_at_min']
        })
    else:
        return pd.concat([s1, s2], axis=1).T

df.groupby('user').sort_values(by=['start_at_min']).apply(combine_rows)  # this is not working
</code></pre>
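One reason the attempt fails is that `sort_values` must run on the DataFrame before `groupby`, not on the GroupBy object. Beyond that, pairwise combining is easier to express as a per-group sweep that merges into a running list, since a merge can absorb several consecutive rows. A sketch against the worked example from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "user":         [1,   1,   1,   2,   2,   2,   2],
    "start_at_min": [100, 152, 205, 65,  200, 300, 452],
    "stop_at_min":  [150, 201, 260, 100, 265, 451, 460],
})

def threshold(start):
    # gap threshold chosen from the incoming row's start_at_min
    if start <= 200:
        return 5
    if start < 400:
        return 3
    return None  # above 400: never combine

def merge_group(g):
    merged = []
    for row in g.sort_values("start_at_min").to_dict("records"):
        if merged:
            thr = threshold(row["start_at_min"])
            if thr is not None and row["start_at_min"] - merged[-1]["stop_at_min"] < thr:
                merged[-1]["stop_at_min"] = row["stop_at_min"]  # absorb this row
                continue
        merged.append(row)
    return pd.DataFrame(merged)

out = df.groupby("user", group_keys=False).apply(merge_group).reset_index(drop=True)
print(out.to_string(index=False))
```

This reproduces the expected output: user 1's first two rows merge into (100, 201) and everything else stays separate.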
|
<python><pandas><dataframe>
|
2022-12-15 16:04:15
| 1
| 1,168
|
Leo
|
74,814,336
| 1,142,881
|
How to catch a Python exception from VBA?
|
<p>I'm using xlwings and calling Python from VBA using the <code>RunPython</code> function. However, exceptions such as <code>ValueError</code> are not caught by VBA in the standard way, for example:</p>
<pre><code>Sub Test()
    On Error GoTo PythonError
    RunPython ("…")
    Exit Sub
PythonError:
    ' Do something sensible
End Sub
</code></pre>
<p>the error gets propagated to Excel nevertheless. What should I do differently to be able to catch and handle the exception from VBA?</p>
|
<python><excel><vba><xlwings>
|
2022-12-15 15:59:24
| 0
| 14,469
|
SkyWalker
|
74,814,262
| 6,131,259
|
Pandas change values based on previous value in same column
|
<p>I have the following dataframe:</p>
<p><a href="https://i.sstatic.net/gmAd2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gmAd2.png" alt="enter image description here" /></a></p>
<pre><code>import pandas as pd
import datetime

df = pd.DataFrame({'ID': [1, 2, 1, 1],
                   'Date': [datetime.date(year=2022, month=5, day=1), datetime.date(year=2022, month=11, day=1),
                            datetime.date(year=2022, month=10, day=1), datetime.date(year=2022, month=11, day=1)],
                   "Lifecycle ID": [5, 5, 5, 5]})
</code></pre>
<p>And I need to change the lifecycle based on the lifecycle 6 month ago (if it was 5, it should always be 6 (not +1)).</p>
<p>I'm currently trying:</p>
<pre><code>df.loc[(df["Date"] == (df["Date"] - pd.DateOffset(months=6))) & (df["Lifecycle ID"] == 5), "Lifecycle ID"] = 6
</code></pre>
<p>However, Pandas is <strong>not considering the ID</strong>, and I don't know how to include it.</p>
<p>The output should be this dataframe (only last Lifecycle ID changed to 6):</p>
<p><a href="https://i.sstatic.net/ouQw9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ouQw9.png" alt="enter image description here" /></a></p>
<p>Could you please help me here?</p>
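The comparison in the attempt only compares each row with itself. One sketch that respects the ID: build a lookup of every row shifted six months forward, then left-join it back on both ID and Date, so a match means "this ID had that lifecycle exactly six months ago":

```python
import pandas as pd
import datetime

df = pd.DataFrame({
    "ID": [1, 2, 1, 1],
    "Date": [datetime.date(2022, 5, 1), datetime.date(2022, 11, 1),
             datetime.date(2022, 10, 1), datetime.date(2022, 11, 1)],
    "Lifecycle ID": [5, 5, 5, 5],
})
df["Date"] = pd.to_datetime(df["Date"])

# every row, re-dated 6 months later, becomes a "what was true 6 months ago" record
past = df.assign(Date=df["Date"] + pd.DateOffset(months=6))
past = past.rename(columns={"Lifecycle ID": "Past Lifecycle"})[["ID", "Date", "Past Lifecycle"]]

merged = df.merge(past, on=["ID", "Date"], how="left")  # row order preserved
df.loc[merged["Past Lifecycle"].eq(5).to_numpy(), "Lifecycle ID"] = 6
print(df["Lifecycle ID"].tolist())  # [5, 5, 5, 6]
```

Only the last row changes, because only ID 1 has a lifecycle-5 record dated exactly six months before 2022-11-01.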
|
<python><pandas><dataframe>
|
2022-12-15 15:54:22
| 1
| 331
|
FriendlyGuy
|
74,814,185
| 3,834,837
|
OSMnx: combine several multidirectional graphs
|
<p>I used OSMnx to retrieve the road networks in France. For execution time constraints, I retrieved the data department by department: While trying to retrieve the road network of all of France at once, my script ran for 4 days without results. Therefore, I retrieve the data department by department. I stored the data on disk, and now I need to combine the graphs of all the departments to obtain the complete road network of France.</p>
<p>Is there any function with OSMnx to combine several graphs into one?</p>
<p>Below is my code to retrieve data by department</p>
<pre><code>departments = ["Ain", "Aisne", "Alpes de Haute-Provence", ..., "Seine-Saint-Denis", "Val-de-Marne", "Val-d'Oise"]

for department in departments:
    DepartmentName = department + ",France"
    G = ox.graph_from_place(DepartmentName, network_type="drive")
    filePathToSave = "AllDepartmentsData/" + department + ".graphml"
    ox.save_graphml(G, filePathToSave)
</code></pre>
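Since OSMnx graphs are `networkx.MultiDiGraph` objects, plain networkx composition applies: `nx.compose_all` takes a list of graphs and returns their union (for nodes present in several graphs, attributes from the later graph win). A toy sketch with two stand-in graphs; in real use, the list would come from loading each saved file, presumably with `ox.load_graphml` as the counterpart of `save_graphml`:

```python
import networkx as nx

# two toy multidigraphs standing in for two department graphs
g1 = nx.MultiDiGraph()
g1.add_edge("a", "b", length=10)
g2 = nx.MultiDiGraph()
g2.add_edge("b", "c", length=20)

combined = nx.compose_all([g1, g2])  # union of nodes and edges
print(sorted(combined.nodes))  # ['a', 'b', 'c']
```

Departments that share border nodes (same OSM node IDs) are stitched together automatically, since the union is keyed on node ID.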
|
<python><osmnx>
|
2022-12-15 15:49:03
| 1
| 410
|
Mouaici_Med
|
74,814,057
| 7,454,177
|
How to override/ bypass a mock decorator in Python?
|
<p>Most of our external services are mocked for testing. Now I built test cases for the external services (Stripe in the example) and I would like to use the actual methods for only those tests. I could just decorate all the other tests, but those are a lot of tests and the likelihood of errors is high. Also, the mocks are done in our test runner, so they cascade through to all the tests. This is our runner function (it is decorated with a lot of similar mocks; reduced here to the minimal reproducible example):</p>
<pre><code>@mock.patch('services.Stripe.sync_customers', side_effect=mock_sync_customers)
def run_tests(self, test_labels, extra_tests=None, *args, **kwargs):
    return super().run_tests(test_labels, extra_tests, **kwargs)
</code></pre>
<p>This is the test case where I would like to use the actual methods instead of the mocks:</p>
<pre><code>def test_customers(self):
    Company.objects.create(**customer_data)  # This should invoke the actual API calls
</code></pre>
<p>Is there a way to exclude this or overwrite the mocks with the actual methods cleanly? I tried overriding it with a similar decorator as on the <code>run_tests</code>, which didn't work because the function on the <code>Stripe</code> class is not callable. I tried mocking the entire class, which also didn't work.</p>
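One pattern that avoids touching the other tests (a sketch, with a local function standing in for `services.Stripe.sync_customers`): keep a module-level handle on the real callable before the runner's patch is applied, then re-patch with that original inside the one test that needs real behavior. `mock.patch` restores whatever it replaced on exit, so the outer mock comes back automatically:

```python
from unittest import mock

def sync_customers():
    # stands in for the real services.Stripe.sync_customers
    return "real call"

ORIGINAL_SYNC = sync_customers  # grab a handle before any patching happens

# the test runner's long-lived patch
with mock.patch(f"{__name__}.sync_customers", return_value="mocked"):
    assert sync_customers() == "mocked"

    # inside the one test, re-patch with the saved original to bypass the mock
    with mock.patch(f"{__name__}.sync_customers", new=ORIGINAL_SYNC):
        assert sync_customers() == "real call"

    # leaving the inner patch restores the outer mock
    assert sync_customers() == "mocked"

print(sync_customers())  # real call
```

In the real suite, the handle would be saved in the module that defines the mocks, before the runner decorator ever runs.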
|
<python><unit-testing><mocking>
|
2022-12-15 15:40:04
| 0
| 2,126
|
creyD
|
74,813,906
| 4,542,117
|
scipy.interpolate.griddata not working as expected
|
<p>I have a large dataset of temperatures throughout the US on a grid of 3km-3km:</p>
<p><a href="https://i.sstatic.net/hp0lb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hp0lb.png" alt="Temps of the USA" /></a></p>
<p>The temps (<code>temperature_data</code>), latitudes and longitudes of each grid point are available where the lat/lon data are <code>lats</code> and <code>lons</code>, all with shape (1060,1800).</p>
<p>I also have a sub-regional high-resolution grid onto which I would like to interpolate <code>temperature_data</code> at each grid point. I have the latitudes and longitudes of these locations as <code>rlats</code> and <code>rlons</code>, each with shape (720,3840).</p>
<p>Reading around, I found the <code>scipy.interpolate.griddata</code> function, and used it as the following:</p>
<pre><code>highres_temps = griddata((lats.flatten(),lons.flatten()),temperature_data.flatten(),(rlons,rlats),'nearest')
</code></pre>
<p>The results give me in the new grid all have the same value:</p>
<pre><code>print(np.amax(highres_temps))
print(np.amin(highres_temps))
28.175049
28.175049
</code></pre>
<p>And if we zoom into the area of interest of <code>highres_temps</code>:</p>
<p><a href="https://i.sstatic.net/3ntNT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3ntNT.png" alt="zoomedin_highres_temps" /></a></p>
<p>And zoom into the area of interest of <code>temperature_data</code>:</p>
<p><a href="https://i.sstatic.net/8Dss3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Dss3.png" alt="zoomedin_temperature_data" /></a></p>
<p>It is obvious the temps in <code>temperature_data</code> do not match with what should be extracted over the domain within <code>rlats</code> and <code>rlons</code>. Why might this be occurring?</p>
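<p>One thing I am double-checking is the coordinate ordering: above, the input points are given as <code>(lats, lons)</code> but the query points as <code>(rlons, rlats)</code>. A tiny synthetic sketch (made-up 5x5 grid, not my real data) shows <code>griddata</code> behaving as expected when both orderings match:</p>

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic 5x5 grid where value = lat + 10*lon, so swapped axes are obvious
lats, lons = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5), indexing='ij')
vals = lats + 10 * lons

rlats = np.array([[0.5]])
rlons = np.array([[0.25]])

# Input points and query points use the same (lat, lon) ordering
out = griddata((lats.ravel(), lons.ravel()), vals.ravel(), (rlats, rlons), method='nearest')
print(out)  # the query hits the grid node (0.5, 0.25) exactly -> 0.5 + 2.5 = 3.0
```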
|
<python><scipy><grid><interpolation>
|
2022-12-15 15:28:58
| 0
| 374
|
Miss_Orchid
|
74,813,871
| 13,517,174
|
Passing a shape to numpy.reshape in a numba njit environment fails, how can I create a suitable iterable for the target shape?
|
<p>I have a function that takes in an array, performs an arbitrary calculation, and returns a new shape into which the array can be reshaped.
I would like to use this function in a <code>numba.njit</code> environment:</p>
<pre><code>import numpy as np
import numba as nb

@nb.njit
def generate_target_shape(my_array):
    ### some functionality that calculates the desired target shape ###
    return tuple([2, 2])

@nb.njit
def test():
    my_array = np.array([1, 2, 3, 4])
    target_shape = generate_target_shape(my_array)
    reshaped = my_array.reshape(target_shape)
    print(reshaped)

test()
</code></pre>
<p>However, tuple creation is not supported in numba and I get the following error message when trying to cast the result of <code>generate_target_shape</code> to a tuple with the <code>tuple()</code> operator:</p>
<pre><code>No implementation of function Function(<class 'tuple'>) found for signature:
>>> tuple(list(int64)<iv=None>)
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload of function 'tuple': File: numba/core/typing/builtins.py: Line 572.
With argument(s): '(list(int64)<iv=None>)':
No match.
During: resolving callee type: Function(<class 'tuple'>
</code></pre>
<p>If I try to change the return type of <code>generate_target_shape</code> from <code>tuple</code> to <code>list</code> or <code>np.array</code>, I receive the following error message:</p>
<pre><code>Invalid use of BoundFunction(array.reshape for array(float64, 1d, C)) with parameters (array(int64, 1d, C))
</code></pre>
<p>Is there a way for me to create an iterable object inside a <code>nb.njit</code> function that can be passed to <code>np.reshape</code>?</p>
<p>EDIT: I worked around this problem as suggested in the accepted solution by using the <code>objmode</code> constructor.</p>
|
<python><numpy><numba>
|
2022-12-15 15:26:15
| 1
| 453
|
Yes
|
74,813,685
| 5,678,057
|
Pandas : 'to_datetime' function not consistent with dates
|
<p>When I read a date say <code>'01/12/2020'</code>, which is in the format <code>dd/mm/yyyy</code>, with <code>pd.to_datetime()</code>, it detects the month as <code>01</code>.</p>
<pre><code>pd.to_datetime('01/12/2020').month
>> 1
</code></pre>
<p>But this behavior is not consistent.</p>
<p>When we create a dataframe with a column containing dates in this format, and convert using the same <code>to_datetime</code> function, it then detects <code>12</code> as the month.</p>
<pre><code>tt.dt.month[0]
>> 12
</code></pre>
<p>What could be the reason ?</p>
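<p>For completeness, a minimal sketch showing that the parsing order can be pinned explicitly with the <code>dayfirst</code> argument, rather than relying on inference:</p>

```python
import pandas as pd

# The default parser treats '01' as the month (month-first)
assert pd.to_datetime('01/12/2020').month == 1

# dayfirst=True forces the dd/mm/yyyy interpretation
assert pd.to_datetime('01/12/2020', dayfirst=True).month == 12
```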
|
<python><pandas><date>
|
2022-12-15 15:13:28
| 1
| 389
|
Salih
|
74,813,635
| 1,780,761
|
openCV - Generating disparity map from stereo images
|
<p>I calibrated my cameras and took a picture with each. I have rectified the images and saved them. These are the images:</p>
<p><a href="https://i.sstatic.net/eICjp.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eICjp.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/Wu7xW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wu7xW.jpg" alt="enter image description here" /></a></p>
<p>despite my best efforts, I cannot produce a decent depthmap out of these.</p>
<p>this is the best I got so far:</p>
<p><a href="https://i.sstatic.net/8WQrF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8WQrF.png" alt="enter image description here" /></a></p>
<p>The code I use to try different values is this, where Left_Nice and Right_Nice are the images posted here:</p>
<pre><code>import cv2
import numpy as np

def nothing(x):
    # no-op callback required by cv2.createTrackbar
    pass

cv2.namedWindow('disp', cv2.WINDOW_NORMAL)
cv2.resizeWindow('disp', 800, 1200)

cv2.createTrackbar('numDisparities', 'disp', 1, 50, nothing)
cv2.createTrackbar('blockSize', 'disp', 5, 50, nothing)
cv2.createTrackbar('preFilterType', 'disp', 1, 1, nothing)
cv2.createTrackbar('preFilterSize', 'disp', 2, 25, nothing)
cv2.createTrackbar('preFilterCap', 'disp', 5, 62, nothing)
cv2.createTrackbar('textureThreshold', 'disp', 10, 100, nothing)
cv2.createTrackbar('uniquenessRatio', 'disp', 15, 100, nothing)
cv2.createTrackbar('speckleRange', 'disp', 0, 100, nothing)
cv2.createTrackbar('speckleWindowSize', 'disp', 3, 25, nothing)
cv2.createTrackbar('disp12MaxDiff', 'disp', 0, 100, nothing)
cv2.createTrackbar('minDisparity', 'disp', 0, 25, nothing)

stereo = cv2.StereoBM_create()

Left_nice = cv2.imread('c:/calimages/nice_cam_left_.jpg')
Right_nice = cv2.imread('c:/calimages/nice_cam_right_.jpg')
Left_nice = cv2.cvtColor(Left_nice, cv2.COLOR_BGR2GRAY)
Right_nice = cv2.cvtColor(Right_nice, cv2.COLOR_BGR2GRAY)

while True:
    # Updating the parameters based on the trackbar positions
    numDisparities = cv2.getTrackbarPos('numDisparities', 'disp') * 16
    blockSize = cv2.getTrackbarPos('blockSize', 'disp') * 2 + 5
    preFilterType = cv2.getTrackbarPos('preFilterType', 'disp')
    preFilterSize = cv2.getTrackbarPos('preFilterSize', 'disp') * 2 + 5
    preFilterCap = cv2.getTrackbarPos('preFilterCap', 'disp')
    textureThreshold = cv2.getTrackbarPos('textureThreshold', 'disp')
    uniquenessRatio = cv2.getTrackbarPos('uniquenessRatio', 'disp')
    speckleRange = cv2.getTrackbarPos('speckleRange', 'disp')
    speckleWindowSize = cv2.getTrackbarPos('speckleWindowSize', 'disp') * 2
    disp12MaxDiff = cv2.getTrackbarPos('disp12MaxDiff', 'disp')
    minDisparity = cv2.getTrackbarPos('minDisparity', 'disp')

    # Setting the updated parameters before computing disparity map
    stereo.setNumDisparities(numDisparities)
    stereo.setBlockSize(blockSize)
    stereo.setPreFilterType(preFilterType)
    stereo.setPreFilterSize(preFilterSize)
    stereo.setPreFilterCap(preFilterCap)
    stereo.setTextureThreshold(textureThreshold)
    stereo.setUniquenessRatio(uniquenessRatio)
    stereo.setSpeckleRange(speckleRange)
    stereo.setSpeckleWindowSize(speckleWindowSize)
    stereo.setDisp12MaxDiff(disp12MaxDiff)
    stereo.setMinDisparity(minDisparity)

    # Calculating disparity using the StereoBM algorithm
    disparity = stereo.compute(Left_nice, Right_nice)
    # NOTE: compute returns a 16bit signed single channel image,
    # CV_16S, containing a disparity map scaled by 16. Hence it
    # is essential to convert it to CV_32F and scale it down 16 times.

    # Converting to float32
    disparity = disparity.astype(np.float32)

    # Scaling down the disparity values and normalizing them
    disparity = (disparity / 16.0 - minDisparity) / numDisparities

    # Displaying the disparity map
    cv2.imshow("disp", disparity)

    # Close window using esc key
    if cv2.waitKey(1) == 27:
        break
</code></pre>
<p>taken from this sample code:
<a href="https://learnopencv.com/depth-perception-using-stereo-camera-python-c/" rel="nofollow noreferrer">https://learnopencv.com/depth-perception-using-stereo-camera-python-c/</a></p>
<p>any idea on what I am doing wrong?</p>
|
<python><opencv><stereo-3d><disparity-mapping>
|
2022-12-15 15:09:52
| 0
| 4,211
|
sharkyenergy
|
74,813,316
| 7,713,770
|
how to render values from multiple methods in table format in template django
|
<p>I have a django app.</p>
<p>And I have two methods:</p>
<pre><code>def total_cost_fruit(self):
    return [3588.20, 5018.75, 3488.16]

def total_cost_fruit2(self):
    return [3588.20, 5018.75, 3488.99]
</code></pre>
<p>And I try to render them as table.</p>
<p>so this is the views.py:</p>
<pre><code>def test(request):
    values1 = filter_text.total_cost_fruit()
    values2 = filter_text.total_cost_fruit2()

    context = {
        "values1": values1,
        "values2": values2,
        "all_values": list(chain(values1, values2)),
    }
    return render(request, "main/test.html", context)
</code></pre>
<p>and template:</p>
<pre><code><div class="wishlist">
<table>
<tr>
<th>Method 1</th>
<th>Method 2</th>
</tr>
{% for value in all_values %}
<tr>
<td>{{value.0}}</td>
<td>{{value.1}}</td>
</tr>
{% endfor %}
</table>
</div>
</code></pre>
<p>But nothing is displayed</p>
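<p>For reference, outside Django the difference between <code>chain</code> and row-wise pairing can be sketched like this (illustrative, using the same numbers as above) — <code>zip</code> yields one <code>(method1, method2)</code> tuple per row, which is the shape that <code>value.0</code>/<code>value.1</code> in the template would index into:</p>

```python
from itertools import chain

values1 = [3588.20, 5018.75, 3488.16]
values2 = [3588.20, 5018.75, 3488.99]

# chain flattens everything into one long list of floats
flat = list(chain(values1, values2))

# zip pairs the two lists element by element instead
rows = list(zip(values1, values2))
print(rows)  # [(3588.2, 3588.2), (5018.75, 5018.75), (3488.16, 3488.99)]
```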
|
<python><html><django>
|
2022-12-15 14:46:16
| 1
| 3,991
|
mightycode Newton
|
74,813,301
| 219,976
|
Perform post request using Django rest framework
|
<p>I've got a Django rest framework APIView:</p>
<pre><code>class MyAPIView(views.APIView):
    def post(self, request):
        field = request.POST.get("field")
        print(field)
        return Response({"field": field}, status=200)
</code></pre>
<p>I want to call it from separate process using Django API. I do it like this:</p>
<pre><code>from django.http import HttpRequest, QueryDict
request = HttpRequest()
request.method = "POST"
request.POST = QueryDict(mutable=True)
request.POST["field"] = "5"
response = MyAPIView.as_view()(request=request)
</code></pre>
<p>But when <code>field</code> is printed in MyAPIView, it's always <code>None</code>.
How can I call the post method using Django?</p>
|
<python><django><post><django-rest-framework>
|
2022-12-15 14:44:53
| 1
| 6,657
|
StuffHappens
|
74,813,241
| 4,329,348
|
PyBullet get names of bodies in world
|
<p>I load an SDF file which contains multiple robots/objects and I want to get back the "names" of these robots/objects as defined in the SDF. PyBullet when using <code>loadSDF</code> will return a list of unique IDs for these bodies, I want to convert these IDs to names. Is that possible?</p>
|
<python><pybullet>
|
2022-12-15 14:39:55
| 1
| 1,219
|
Phrixus
|
74,813,119
| 4,893,016
|
401 Client Error: Unauthorized for url [mozilla-django-oidc - Keycloack]
|
<p>I'm trying to integrate Django and Keycloak using <code>mozilla-django-oidc</code>, but unfortunately I'm not having much success as I keep getting <code>401 Client Error: Unauthorized for url...</code></p>
<p>I created a docker compose that runs Keycloak / the Keycloak DB / the Django app, like the following</p>
<p><code>docker-compose.yaml</code></p>
<pre><code>version: '3'

volumes:
  postgres_data:
    driver: local

services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
    networks:
      - local-keycloak

  keycloak:
    image: quay.io/keycloak/keycloak:latest
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: password
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: Pa55w0rd
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    command:
      - "start-dev"
    networks:
      - local-keycloak
    volumes:
      - .local/keycloak/:/opt/keycloak/data

  web:
    build:
      context: .
    ports:
      - "8000:8000"
    depends_on:
      - postgres
      - keycloak
    volumes:
      - .:/app
    networks:
      - local-keycloak

networks:
  local-keycloak:
</code></pre>
<p>I setup my Django project as described in <a href="https://mozilla-django-oidc.readthedocs.io/en/stable/installation.html#quick-start" rel="nofollow noreferrer">https://mozilla-django-oidc.readthedocs.io/en/stable/installation.html#quick-start</a></p>
<p><code>settings.py</code></p>
<pre><code>....
AUTHENTICATION_BACKENDS = (
"django.contrib.auth.backends.ModelBackend",
"mozilla_django_oidc.auth.OIDCAuthenticationBackend",
)
BASE_URL = "http://localhost:8080"
KEYCLOACK_IP = "192.168.224.3"
KEYCLOACK_URI = f"http://{KEYCLOACK_IP}:8080"
OIDC_RP_SIGN_ALGO = "RS256"
OIDC_OP_JWKS_ENDPOINT = f"{KEYCLOACK_URI}/realms/demo/protocol/openid-connect/certs"
OIDC_RP_CLIENT_ID = os.environ['OIDC_RP_CLIENT_ID']
OIDC_RP_CLIENT_SECRET = os.environ['OIDC_RP_CLIENT_SECRET']
OIDC_OP_AUTHORIZATION_ENDPOINT = f"{BASE_URL}/realms/demo/protocol/openid-connect/auth"
OIDC_OP_TOKEN_ENDPOINT = f"{KEYCLOACK_URI}/realms/demo/protocol/openid-connect/token"
OIDC_OP_USER_ENDPOINT = f"{KEYCLOACK_URI}/realms/demo/protocol/openid-connect/userinfo"
</code></pre>
<p><code>views.py</code></p>
<pre><code>from django.http import HttpResponse
from django.contrib.auth.decorators import login_required
from django.template import loader

def index(request):
    template = loader.get_template('index.html')
    return HttpResponse(template.render({}, request))
</code></pre>
<p><code>index.html</code></p>
<pre><code>{% if user.is_authenticated %}
  <p>Current user: {{ user.email }}</p>
  <form action="{% url 'oidc_logout' %}" method="post">
    {% csrf_token %}
    <input type="submit" value="logout">
  </form>
{% else %}
  <a href="{% url 'oidc_authentication_init' %}">Login</a>
{% endif %}
</code></pre>
<p>In the running Keycloak instance I created a new realm</p>
<p><a href="https://i.sstatic.net/iNUnj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iNUnj.png" alt="demo realm" /></a></p>
<p>And also a new client
<a href="https://i.sstatic.net/4bjaa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4bjaa.png" alt="enter image description here" /></a></p>
<p>Now when I try to authenticate, I get correctly redirected to Keycloak, but after entering the credentials, when Keycloak attempts to redirect to Django, I am always getting this error</p>
<p><code>401 Client Error: Unauthorized for url: http://192.168.224.3:8080/realms/demo/protocol/openid-connect/userinfo</code></p>
<pre><code>Request Method: GET
Request URL: http://localhost:8000/oidc/callback/?state=iXuTkJnT4qKW5ayiyJ6jeyDMXzJ2PaxJ&session_state=a26babe1-8fc1-4d80-931e-b779d2550eb7&code=02199b40-aa62-4f0f-b3df-1074b4e0385f.a26babe1-8fc1-4d80-931e-b779d2550eb7.f957a25e-09b9-4f2d-ac5a-1d9ba720cdb8
Django Version: 4.1.4
Exception Type: HTTPError
Exception Value:
401 Client Error: Unauthorized for url: http://192.168.224.3:8080/realms/demo/protocol/openid-connect/userinfo
Exception Location: /usr/local/lib/python3.10/site-packages/requests/models.py, line 1021, in raise_for_status
Raised during: mozilla_django_oidc.views.OIDCAuthenticationCallbackView
Python Executable: /usr/local/bin/python3
Python Version: 3.10.9
Python Path:
['/app',
'/usr/local/lib/python310.zip',
'/usr/local/lib/python3.10',
'/usr/local/lib/python3.10/lib-dynload',
'/usr/local/lib/python3.10/site-packages']
Server time: Thu, 15 Dec 2022 14:51:17 +0000
</code></pre>
<p>I should also mention that I tried both OIDC_OP_JWKS_ENDPOINT and OIDC_RP_IDP_SIGN_KEY and I'm getting exactly the same error (401). Also please note that for some of the settings I had to use the physical IP of the docker container in order for Keycloak to provide a token.</p>
<p>Any help would be greatly appreciated as right now I'm totally blocked.
Please let me know if you need more info from me.</p>
<p>Cheers</p>
|
<python><django><keycloak><openid-connect>
|
2022-12-15 14:30:30
| 1
| 649
|
NomadMonad
|
74,813,117
| 14,774,959
|
Split text in column into multiple rows each containing the same number of words
|
<p>Let's say I have the following dataframe:</p>
<pre><code>import pandas as pd
data = [
["a", "Lorem ipsum dolor sit amet, consectetur adipiscing elit."],
["b", "Etiam imperdiet fringilla est, eu tristique risus varius vitae."]
]
df = pd.DataFrame(data, columns=['name', 'text'])
>> name text
>> 0 a Lorem ipsum dolor sit amet, consectetur adipiscing elit.
>> 1 b Etiam imperdiet fringilla est, eu tristique risus varius vitae.
</code></pre>
<p>I would like to generate the following dataframe:</p>
<pre><code>name text
0 a Lorem ipsum dolor
1 a sit amet, consectetur
2 a adipiscing elit.
3 b Etiam imperdiet fringilla
4 b est, eu tristique
5 b risus varius vitae.
</code></pre>
<p>To be more clear, I would like to split each string in the "text" column over multiple rows. The number of rows depends on the number of words in each string. In the example I provided, I have chosen three as the number of words in each row. The values contained in the other columns should be kept.</p>
<p>Also, the last new row coming from the original "a" row contains only two words, because those are the only words left.</p>
<p>I found a similar <a href="https://stackoverflow.com/questions/62635214/split-column-into-unknown-number-of-columns-according-to-number-of-words-pandas">question</a>, which, however, splits the text over multiple columns. I'm not sure how to adapt it to split over rows.</p>
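<p>A sketch of one direction I am experimenting with (the <code>chunk</code> helper and the hard-coded chunk size of three are mine, purely illustrative): split each string into word lists, group them, and let <code>explode</code> create the rows:</p>

```python
import pandas as pd

data = [
    ["a", "Lorem ipsum dolor sit amet, consectetur adipiscing elit."],
    ["b", "Etiam imperdiet fringilla est, eu tristique risus varius vitae."]
]
df = pd.DataFrame(data, columns=['name', 'text'])

def chunk(words, n=3):
    # Join the word list back into strings of at most n words each
    return [" ".join(words[i:i + n]) for i in range(0, len(words), n)]

out = (df.assign(text=df["text"].str.split().apply(chunk))
         .explode("text", ignore_index=True))
print(out)
```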
|
<python><pandas><dataframe>
|
2022-12-15 14:30:06
| 1
| 3,444
|
ClaudiaR
|
74,812,929
| 4,862,845
|
How to combine properties defined by different parent classes in python?
|
<p>Suppose we have two classes and each computes a property <code>stuff</code> in a different way. Is it possible to combine their outputs in a method/property of a child class?</p>
<p>The following code shows the desired effect (though <code>get_stuff_from</code> needs to be replaced by a proper python construct, if such thing exists).</p>
<pre class="lang-py prettyprint-override"><code>class Foo():
    @property
    def stuff(self):
        return ['a','b','c']

class Bar():
    @property
    def stuff(self):
        return ['1','2','3']

class FooBar(Foo, Bar):
    @property
    def stuff(self):
        # Computes stuff from the internal state like Foo().stuff
        foo_stuff = get_stuff_from(Foo)
        # Computes stuff from the internal state like Bar().stuff
        bar_stuff = get_stuff_from(Bar)
        # Returns the combined results
        return foo_stuff + bar_stuff

foo_bar = FooBar()
print(foo_bar.stuff)
</code></pre>
<p>which should output:</p>
<pre><code>['a', 'b', 'c', '1', '2', '3']
</code></pre>
<p>If <code>stuff</code> were a <em>method</em> instead of a property, this would be simple to implement:</p>
<pre class="lang-py prettyprint-override"><code>class Foo():
    def stuff(self):
        return ['a','b','c']

class Bar():
    def stuff(self):
        return ['1','2','3']

class FooBar(Foo, Bar):
    def stuff(self):
        # Computes stuff from the internal state like Foo().stuff
        foo_stuff = Foo.stuff(self)
        # Computes stuff from the internal state like Bar().stuff
        bar_stuff = Bar.stuff(self)
        # Returns the combined results
        return foo_stuff + bar_stuff

foo_bar = FooBar()
print(foo_bar.stuff())
</code></pre>
<p>however, I would like to find out whether it is possible to do the same with properties.</p>
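<p>One direction I am exploring (a sketch — I am not sure it is idiomatic): a property object exposes its getter as <code>fget</code>, so the parent getters can be called explicitly from the child property:</p>

```python
class Foo:
    @property
    def stuff(self):
        return ['a', 'b', 'c']

class Bar:
    @property
    def stuff(self):
        return ['1', '2', '3']

class FooBar(Foo, Bar):
    @property
    def stuff(self):
        # Foo.stuff is the property object itself; .fget is its getter function
        return Foo.stuff.fget(self) + Bar.stuff.fget(self)

print(FooBar().stuff)  # ['a', 'b', 'c', '1', '2', '3']
```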
|
<python><properties><multiple-inheritance>
|
2022-12-15 14:15:29
| 4
| 317
|
LuizFelippe
|
74,812,798
| 19,339,998
|
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.8/dist-packages/grpc/__init__.py'
|
<p>I am using grpc in my app.py and I am going to run app.py inside SGX using Gramine. When I run the command <code>gramine-sgx ./python app.py</code> I get this error:</p>
<pre><code>Traceback (most recent call last):
File "app.py", line 2, in <module>
import grpc
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 844, in exec_module
File "<frozen importlib._bootstrap_external>", line 980, in get_code
File "<frozen importlib._bootstrap_external>", line 1037, in get_data
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.8/dist-packages/grpc/__init__.py'
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 72, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module>
from apport.report import Report
File "/usr/lib/python3/dist-packages/apport/report.py", line 32, in <module>
import apport.fileutils
File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 27, in <module>
from apport.packaging_impl import impl as packaging
File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 23, in <module>
import apt
File "/usr/lib/python3/dist-packages/apt/__init__.py", line 36, in <module>
apt_pkg.init_system()
apt_pkg.Error: E:Error reading the CPU table
Original exception was:
Traceback (most recent call last):
File "app.py", line 2, in <module>
import grpc
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 844, in exec_module
File "<frozen importlib._bootstrap_external>", line 980, in get_code
File "<frozen importlib._bootstrap_external>", line 1037, in get_data
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.8/dist-packages/grpc/__init__.py'
</code></pre>
<p>I checked the permissions of <code>__init__.py</code> and I even tried to run the code using sudo, but the same error came up</p>
<pre><code>-rw-r--r-- 1 root staff 82351 Dec 15 13:37 __init__.py
</code></pre>
|
<python><grpc><sgx>
|
2022-12-15 14:03:48
| 1
| 341
|
sama
|
74,812,534
| 4,757,604
|
'psycopg2.errors.UndefinedTable: relation "table" does not exist' when makemigrations
|
<p>I have a django app (Django==4.1.2) and I am trying to migrate from sqlite3 to postgres.</p>
<p>I have been following multiple guides on how to do this, and they all do more or less the same, and those are the steps I followed:</p>
<ol>
<li>Get a dumpdata with <code>python manage.py dumpdata > whole.json</code></li>
<li>Create db and user and connect to it</li>
<li>Config settings.py to set everything to the new database</li>
<li>Delete old migrations</li>
<li>run <code>python manage.py makemigrations</code> or <code>python manage.py migrate --run-syncdb</code></li>
</ol>
<hr />
<p>There are more steps, but I am stuck on this 5th one, getting 'psycopg2.errors.UndefinedTable: relation "table" does not exist'</p>
<p>Looking for solutions I've come to <a href="https://gist.github.com/sirodoht/f598d14e9644e2d3909629a41e3522ad?permalink_comment_id=2580695#gistcomment-2580695" rel="nofollow noreferrer">this</a> post which may help someone, though I might not be doing it right or something, but commenting out models has done nothing for me.</p>
<p>Thanks.</p>
|
<python><django><postgresql><sqlite><django-migrations>
|
2022-12-15 13:45:39
| 0
| 388
|
SaFteiNZz
|
74,812,145
| 16,424,940
|
selenium: stale element reference: element is not attached to the page document
|
<pre><code>from selenium.webdriver import Chrome
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import chromedriver_autoinstaller

chromedriver_autoinstaller.install()

TYPES = ['user', 'verified_audience', 'top_critics']
TYPE = TYPES[2]
URL = 'https://www.rottentomatoes.com/m/dunkirk_2017/reviews'
PAGES = 2

driver = Chrome()
driver.get(URL)

data_reviews = []
while PAGES != 0:
    wait = WebDriverWait(driver, 30)
    reviews = wait.until(lambda _driver: _driver.find_elements(
        By.CSS_SELECTOR, '.review_table_row'))

    # Extracting review data
    for review in reviews:
        if TYPE == 'top_critics':
            critic_name_el = review.find_element(
                By.CSS_SELECTOR, '[data-qa=review-critic-link]')
            critic_review_text_el = review.find_element(
                By.CSS_SELECTOR, '[data-qa=review-text]')
            data_reviews.append(critic_name_el.text)

    try:
        next_button_el = driver.find_element(
            By.CSS_SELECTOR, '[data-qa=next-btn]:not([disabled=disabled])'
        )
        if not next_button_el:
            PAGES = 0

        next_button_el.click()  # refresh new reviews
        PAGES -= 1
    except Exception as e:
        driver.quit()
</code></pre>
<p>Here, a Rotten Tomatoes review page is being opened and the reviews are being scraped, but when the next button is clicked and the new reviews are about to be scraped, this error pops up... I am guessing that the new reviews have not been loaded and trying to access them is causing the problem. I tried <code>driver.implicitly_wait</code>, but that doesn't work either.</p>
<p>The error originates from <code>line 33, data_reviews.append(critic_name_el.text)</code></p>
|
<python><selenium><selenium-webdriver><selenium-chromedriver><staleelementreferenceexception>
|
2022-12-15 13:15:56
| 1
| 1,151
|
Lun
|
74,811,931
| 10,197,418
|
Interpolate time series data from one df to time axis of another df in Python polars
|
<p>I have time series data on different time axes in different dataframes. I need to interpolate data from one <code>df</code> onto the time axis of another df, <code>df_ref</code>. Ex:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

# DataFrame with the reference time axis:
df_ref = pl.DataFrame({"dt": ["2022-12-14T14:00:01.000", "2022-12-14T14:00:02.000",
                              "2022-12-14T14:00:03.000", "2022-12-14T14:00:04.000",
                              "2022-12-14T14:00:05.000", "2022-12-14T14:00:06.000"]})
df_ref = df_ref.with_columns(pl.col("dt").str.to_datetime())

# DataFrame with a different frequency time axis, to be interpolated onto the reference time axis:
df = pl.DataFrame({
    "dt": ["2022-12-14T14:00:01.500", "2022-12-14T14:00:03.500", "2022-12-14T14:00:05.500"],
    "v1": [1.5, 3.5, 5.5]})
df = df.with_columns(pl.col("dt").str.to_datetime())
</code></pre>
<p>I cannot <code>join</code> the dfs since keys don't match:</p>
<pre class="lang-py prettyprint-override"><code>print(df_ref.join(df, on="dt", how="left").interpolate())
shape: (6, 2)
┌─────────────────────┬──────┐
│ dt ┆ v1 │
│ --- ┆ --- │
│ datetime[μs] ┆ f64 │
╞═════════════════════╪══════╡
│ 2022-12-14 14:00:01 ┆ null │
│ 2022-12-14 14:00:02 ┆ null │
│ 2022-12-14 14:00:03 ┆ null │
│ 2022-12-14 14:00:04 ┆ null │
│ 2022-12-14 14:00:05 ┆ null │
│ 2022-12-14 14:00:06 ┆ null │
└─────────────────────┴──────┘
</code></pre>
<p>So my 'iterative' approach would be to interpolate each column individually, for instance like</p>
<pre class="lang-py prettyprint-override"><code>from scipy.interpolate import interp1d

f = interp1d(df["dt"].dt.timestamp(), df["v1"],
             kind="linear", bounds_error=False, fill_value="extrapolate")
out = f(df_ref["dt"].dt.timestamp())
df_ref = df_ref.with_columns(pl.Series(out).alias("v1_interp"))

print(df_ref.head(6))
shape: (6, 2)
┌─────────────────────┬───────────┐
│ dt ┆ v1_interp │
│ --- ┆ --- │
│ datetime[μs] ┆ f64 │
╞═════════════════════╪═══════════╡
│ 2022-12-14 14:00:01 ┆ 1.0 │
│ 2022-12-14 14:00:02 ┆ 2.0 │
│ 2022-12-14 14:00:03 ┆ 3.0 │
│ 2022-12-14 14:00:04 ┆ 4.0 │
│ 2022-12-14 14:00:05 ┆ 5.0 │
│ 2022-12-14 14:00:06 ┆ 6.0 │
└─────────────────────┴───────────┘
</code></pre>
<p>Although this gives the result I need, I wonder if there is a more idiomatic approach? I hesitate to mention efficiency here since I haven't benchmarked this with real data yet ("measure, don't guess!"). However, I'd assume that a native implementation in the underlying Rust code could add some performance benefits.</p>
|
<python><dataframe><time-series><interpolation><python-polars>
|
2022-12-15 12:59:57
| 2
| 26,076
|
FObersteiner
|
74,811,878
| 18,749,472
|
passing django variables as parameters into href url
|
<p>On my home page I want to have 3 links that will redirect the user to a page <code>('127.0.0.1:8000/person/<str:name>')</code> which will display the name that they clicked. I would like to use a for loop to create links for these names, as I plan to have many more than 3 names.</p>
<p>I have tested with the two methods (for loop / manually writing out all of the links) but can't get the for loop to work.</p>
<p>I thought these two methods below would produce the same result.</p>
<pre><code> <h2>does not work</h2>
{% for person in people %}
<a href="{% url 'person' '{{person}}' %}">{{person}}</a>
{% endfor %}
<h2>works</h2>
<a href="{% url 'person' 'logan' %}">logan</a>
<a href="{% url 'person' 'paul' %}">paul</a>
<a href="{% url 'person' 'nicola' %}">nicola</a>
</code></pre>
<p>What the urls look like in page source:</p>
<p><a href="https://i.sstatic.net/7F9q3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7F9q3.png" alt="enter image description here" /></a></p>
<p>views.py</p>
<pre><code>def home(request):
    return render(request, "APP/home.html", context={"people": ['logan', 'paul', 'nicola']})

def person(request, name):
    return render(request, 'APP/person.html', context={"name": name})
</code></pre>
<p>urls.py</p>
<pre><code>urlpatterns = [
    path('home/', views.home, name='home'),
    path('person/<str:name>/', views.person, name='person'),
]
</code></pre>
|
<python><django><url><django-views><django-urls>
|
2022-12-15 12:54:54
| 1
| 639
|
logan_9997
|
74,811,660
| 6,057,371
|
pandas apply function on column only if condition on another column is met
|
<p>I have a dataframe:</p>
<pre><code>df = A.   Cond   Val
     1.   True   0.8
     5.   False  0.8
     2.   False  0.6
     4.   False  0.5
</code></pre>
<p>I want to update the value of the column 'Val' by reducing it by 0.1, only when Cond is False and Val is higher than 0.55. So the new df will be:</p>
<pre><code>df = A.   Cond   Val
     1.   True   0.8
     5.   False  0.7
     2.   False  0.5
     4.   False  0.5
</code></pre>
<p>What is the best way to do it?</p>
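<p>A sketch of the boolean-mask direction I have been looking at (column values reproduced from the table above, purely illustrative):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 5, 2, 4],
    "Cond": [True, False, False, False],
    "Val": [0.8, 0.8, 0.6, 0.5],
})

# Rows where Cond is False and Val exceeds 0.55
mask = ~df["Cond"] & (df["Val"] > 0.55)
df.loc[mask, "Val"] -= 0.1
print(df["Val"].round(1).tolist())  # [0.8, 0.7, 0.5, 0.5]
```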
|
<python><pandas><dataframe>
|
2022-12-15 12:38:52
| 3
| 2,050
|
Cranjis
|
74,811,658
| 2,802,576
|
Calling Python from C# using Pythonnet throws import error
|
<p>I have a use case where I want to be able to call a function which is written in Python from my C# code base. When I make a call to this function I want to be able to pass C# types(custom classes) to Python function and for that I was exploring <a href="https://github.com/pythonnet/pythonnet" rel="nofollow noreferrer">pythonnet</a>.</p>
<p>Python project has a following sample structure -</p>
<pre><code>main.py
service1.py
</code></pre>
<p><strong>main.py</strong> has the main function which calls the methods exposed by <strong>service1.py</strong> -</p>
<pre><code>from Service1 import MyService1

def Run(a, b):
    service1 = MyService1()
    service1.method(a, b)
</code></pre>
<p>Now from C# side when I try to execute the Python it is throwing error -</p>
<blockquote>
<p>Python.Runtime.PythonException: 'No module named 'Service1''</p>
</blockquote>
<p>C# code -</p>
<pre><code>Runtime.PythonDLL = @"C:\Program Files\Python39\Python39.dll";
PythonEngine.Initialize();

using (Py.GIL())
{
    using (var scope = Py.CreateScope())
    {
        scope.Exec(File.ReadAllText(@"..\Python\main.py"));
    }
}
</code></pre>
<p>How do I need to import the required classes correctly?</p>
|
<python><c#><python.net>
|
2022-12-15 12:38:44
| 1
| 801
|
arpymastro
|
74,811,622
| 12,635,985
|
Django Haystack update index only for 1 model
|
<p>I am currently trying out Django haystack to update data from PostgreSQL to a solr collection.</p>
<p>I have defined 2 models in <code>search_indexes.py</code>, so when I run the command <code>python manage.py update_index</code> it indexes the data from both models defined in <code>search_indexes.py</code> into my solr collection.</p>
<p>HOW DO I PERFORM <code>update_index</code> OPERATION ONLY FOR A SPECIFIC MODEL THAT I NEED?</p>
<p>Currently, when I run the command, the following 2 models are indexed.</p>
<pre><code>Indexing 2 model1
Indexing 12 model2
</code></pre>
<p><code>search_indexes.py</code></p>
<pre><code>from haystack import indexes
from .models import table1, table2

class model1(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(
        document=True,
        use_template=True,
        template_name="search/indexes/tenants/table1_text.txt"
    )
    ats_id = indexes.CharField(model_attr='ats_id')
    ats_name = indexes.CharField(model_attr='ats_name')
    added_by = indexes.CharField(model_attr='added_by')
    added_on = indexes.DateTimeField(model_attr='added_on')

    def get_model(self):
        return table1

    def index_queryset(self, using=None):
        return self.get_model().objects.all()

class model2(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(
        document=True,
        use_template=True,
        template_name="search/indexes/tenants/table2_text.txt"
    )
    template_id = indexes.CharField(model_attr='template_id')
    template_name = indexes.CharField(model_attr='template_name')
    aspect = indexes.CharField(model_attr='aspect')
    version = indexes.CharField(model_attr='version')
    added_by = indexes.CharField(model_attr='added_by')
    added_on = indexes.DateTimeField(model_attr='added_on')
    ats_id = indexes.CharField(model_attr='ats_id')

    def get_model(self):
        return table2

    def index_queryset(self, using=None):
        return self.get_model().objects.all()
</code></pre>
<p>Please suggest a workaround.</p>
|
<python><django><solr><django-haystack>
|
2022-12-15 12:35:39
| 1
| 1,325
|
Mahesh
|
74,811,552
| 3,937,811
|
How to calculate the expectation value for a given probability distribution
|
<p>I am writing a program to determine the expectation value E(X), the expectation E(X^2), and E(X - X_avg)^2. I have written a program like so:</p>
<pre><code># program : expectation value
import csv
import pandas as pd
import numpy as np
from scipy.stats import chi2_contingency
import seaborn as sns
import matplotlib.pyplot as plt
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
# Step 1: read csv
probabilityCSV = open('probability.csv')
df = pd.read_csv(probabilityCSV)
logging.debug(df['X'])
logging.debug(df['P'])
logging.debug(type(df['X']))
logging.debug(type(df['P']))
# Step 2: convert dataframe to ndarry
# https://stackoverflow.com/questions/13187778/convert-pandas-dataframe-to-numpy-array
X = df['X'].to_numpy()
p = df['P'].to_numpy()
logging.debug(f'X={X}')
logging.debug(f'p={p}')
# Step 3: calculate E(X)
# https://www.statology.org/expected-value-in-python/
def expected_value(values, weights):
    return np.sum(np.dot(values, weights)) / np.sum(weights)
logging.debug('Step 3: calculate E(X)')
expectation = expected_value(X,p)
logging.debug(f'E(X)={expectation}')
# Step 4: calculate E(X^2)
logging.debug('Step 4: calculate E(X^2)')
# add normalize='index'
contingency_pct = pd.crosstab(df['Observed'],df['Expected'],normalize='index')
logging.debug(f'contingency_pct:{contingency_pct}')
# Step 5: calculate E(X - X_avg)^2
logging.debug('Step 5: calculate E(X - X_avg)^2')
</code></pre>
<p>The dataset that I am using is:</p>
<pre><code>X,P
8,1/8
12,1/6
16,3/8
20,1/4
24,1/12
</code></pre>
<p>Expected:</p>
<pre><code>E(X) = 16
E(X^2) = 276
E(X - X_avg)^2 = 20
</code></pre>
<p>Actual:</p>
<pre><code>Traceback (most recent call last):
File "/Users/evangertis/development/PythonAutomation/Statistics/expectation.py", line 35, in <module>
expectation = expected_value(X,p)
File "/Users/evangertis/development/PythonAutomation/Statistics/expectation.py", line 32, in expected_value
return np.sum((np.dot(values,weights))) / np.sum(weights)
File "<__array_function__ internals>", line 5, in sum
File "/usr/local/lib/python3.9/site-packages/numpy/core/fromnumeric.py", line 2259, in sum
return _wrapreduction(a, np.add, 'sum', axis, dtype, out, keepdims=keepdims,
File "/usr/local/lib/python3.9/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
TypeError: cannot perform reduce with flexible type
</code></pre>
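<p>For reference, the <code>TypeError</code> comes from the <code>P</code> column being read as strings (<code>"1/8"</code> etc.), so the sums are attempted on text. A minimal sketch (with the same dataset hard-coded and the fractions parsed via the standard library's <code>fractions</code> module) that reproduces the expected values:</p>

```python
from fractions import Fraction

# same data as in probability.csv; the fraction strings are parsed explicitly
X = [8, 12, 16, 20, 24]
p = [Fraction(s) for s in ["1/8", "1/6", "3/8", "1/4", "1/12"]]

E_X = sum(x * w for x, w in zip(X, p))        # expectation E(X)
E_X2 = sum(x * x * w for x, w in zip(X, p))   # E(X^2)
variance = E_X2 - E_X ** 2                    # E(X - X_avg)^2

print(E_X, E_X2, variance)  # → 16 276 20
```

<p>In the posted script the same parsing could presumably be done at read time, e.g. with <code>pd.read_csv(..., converters={"P": Fraction})</code>.</p>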
|
<python><pandas><numpy>
|
2022-12-15 12:30:06
| 1
| 2,066
|
Evan Gertis
|
74,811,545
| 8,176,763
|
How to return multiple rows in sqlalchemy based on value of a column
|
<p>I have some <code>table</code> in a database that looks like that:</p>
<pre><code>phase_type phase_start phase_end
Obsolete 01/01/2021 02/02/2022
Obsolete 01/03/2021 02/07/2022
Obsolete 05/01/2021 09/02/2022
Available 05/07/2021 09/02/2027
Available 05/07/2023 09/02/2025
Available 05/07/2024 09/02/2029
</code></pre>
<p>If I want to <code>select</code> on this <code>table</code> and return only the rows that the date of today is lying between the range of 30 days in the past of <code>phase_end</code>, I could do like this:</p>
<pre><code>from datetime import date,timedelta
from sqlalchemy import select
past = my_table.phase_end - timedelta(30)
future = my_table.phase_end
query = select(my_table).where(date.today() >= past,date.today() <= future)
session.exec(query).fetchall()
</code></pre>
<p>However I would like to use <code>phase_start</code> when calculating <code>past</code> and <code>future</code> for the case when <code>phase_type</code> is <code>Obsolete</code>, for all the other cases I would like to use <code>phase_end</code> as above. Thus the range should be calculated based on the value that <code>phase_end</code> takes. How can I do this and return all rows that pass the conditions ?</p>
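<p>For illustration only (not the asker's actual models): SQLAlchemy's <code>case()</code> construct can pick the anchor column per row; a sketch assuming SQLAlchemy 1.4+ and hypothetical class/column names:</p>

```python
from datetime import date, timedelta

from sqlalchemy import Column, Date, Integer, String, case, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Phase(Base):  # hypothetical mapped class standing in for my_table
    __tablename__ = "phases"
    id = Column(Integer, primary_key=True)
    phase_type = Column(String)
    phase_start = Column(Date)
    phase_end = Column(Date)

# per-row anchor: phase_start for Obsolete rows, phase_end otherwise
anchor = case(
    (Phase.phase_type == "Obsolete", Phase.phase_start),
    else_=Phase.phase_end,
)

today = date.today()
query = select(Phase).where(
    today >= anchor - timedelta(days=30),
    today <= anchor,
)
print(query)  # compiles to SQL containing a CASE WHEN expression
```

<p>The comparison is then evaluated by the database against whichever column the <code>CASE</code> selected for that row.</p>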
|
<python><sqlalchemy>
|
2022-12-15 12:29:23
| 1
| 2,459
|
moth
|
74,811,521
| 275,002
|
429 Too many request error despite using proxies
|
<p>I am using StormProxies to access Etsy data but despite using proxies and implementing retries I am getting <code>429 Too Many Requests</code> error most of the time(~80%+). Here is my code to access data:</p>
<pre><code>import time
import traceback

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def create_request(url, logging, headers={}, is_proxy=True):
    r = None
    try:
        proxies = {
            'http': 'http://{}'.format(PROXY_GATEWAY_IP),
            'https': 'http://{}'.format(PROXY_GATEWAY_IP),
        }
        with requests.Session() as s:
            retries = Retry(total=5, backoff_factor=1, status_forcelist=[502, 503, 504, 429])
            s.mount('http://', HTTPAdapter(max_retries=retries))
            if is_proxy:
                r = s.get(url, proxies=proxies, timeout=30, headers=headers)
            else:
                r = s.get(url, headers=headers, timeout=30)
            r.raise_for_status()
            if r.status_code != 200:
                print('Status Code = ', r.status_code)
                if logging is not None:
                    logging.info('Status Code = ' + str(r.status_code))
    except Exception as ex:
        print('Exception occurred in create_request for the url:- {url}'.format(url=url))
        crash_date = time.strftime("%Y-%m-%d %H:%m:%S")
        crash_string = "".join(traceback.format_exception(etype=type(ex), value=ex, tb=ex.__traceback__))
        exception_string = '[' + crash_date + '] - ' + crash_string + '\n'
        print('Could not connect. Proxy issue or something else')
        print('==========================================================')
        print(exception_string)
    finally:
        return r
</code></pre>
<p>StormProxies guys say that I implement retries, this is how I have done but it is not working for me.</p>
<p>I am using Python <code>multiprocessing</code> and spawning 30+ threads at a time.</p>
|
<python><python-requests>
|
2022-12-15 12:27:51
| 1
| 15,089
|
Volatil3
|
74,811,418
| 3,247,006
|
How to write a if statement with multiple conditions in multiple lines in Python?
|
<p>The <code>if</code> statement with <strong>multiple conditions</strong> in <strong>one line</strong> below works properly:</p>
<pre class="lang-py prettyprint-override"><code>exam1 = 70
exam2 = 60
exam3 = 50
if (100 >= exam1 and exam1 >= 60) or (100 >= exam2 and exam2 >= 60) or (100 >= exam3 and exam3 >= 60):
print("You passed!!")
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>You passed!!
</code></pre>
<p>But, the <code>if</code> statement with <strong>multiple conditions</strong> in <strong>multiple lines</strong> below doesn't work properly:</p>
<pre class="lang-py prettyprint-override"><code>exam1 = 70
exam2 = 60
exam3 = 50
if (100 >= exam1 and exam1 >= 60) or
(100 >= exam2 and exam2 >= 60) or
(100 >= exam3 and exam3 >= 60):
print("You passed!!")
</code></pre>
<p>Then, I got the error below:</p>
<pre class="lang-none prettyprint-override"><code>File "main.py", line 5
if (100 >= exam1 and exam1 >= 60) or
^
SyntaxError: invalid syntax
</code></pre>
<p>So, how can I write the <code>if</code> statement with <strong>multiple conditions</strong> and <strong>multiple lines</strong>?</p>
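<p>For context, Python only continues a statement across lines inside an open bracket (or after a trailing backslash), which is why the dangling <code>or</code> at the end of a line is a syntax error. A sketch with the whole condition wrapped in one outer pair of parentheses:</p>

```python
exam1, exam2, exam3 = 70, 60, 50

# the outer parentheses allow implicit line continuation
passed = ((100 >= exam1 >= 60)
          or (100 >= exam2 >= 60)
          or (100 >= exam3 >= 60))

if passed:
    print("You passed!!")
```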
|
<python><python-3.x><if-statement><logical-operators><multiple-conditions>
|
2022-12-15 12:20:21
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
74,811,167
| 10,569,922
|
Pandas Get Count of Each Row of DataFrame and assign to new column for long-format
|
<p>I get df:</p>
<pre><code>task_id name tag
1 foo xyz
1 foo xyz
22 foo aaa
22 foo aaa
22 foo aaa
22 foo bbb
13 bar xyz
13 bar xyz
33 bar aaa
33 bar aaa
</code></pre>
<p>So I tried <code>df['tag'].value_counts()</code> and <code>df_test.groupby('name')['tag'].count()</code>, for two reasons:<br />
One, I need to count <strong>each</strong> <code>tag</code> per <code>task</code>, and second, the total count of tags per <code>name</code>.<br />
What I want to get:</p>
<pre><code>task_id name tag count_tag total_count
1 foo xyz 2 6
1 foo xyz 2 6
22 foo aaa 3 6
22 foo aaa 3 6
22 foo aaa 3 6
22 foo bbb 1 6
13 bar xyz 2 4
13 bar xyz 2 4
33 bar aaa 2 4
33 bar aaa 2 4
</code></pre>
<p>for better understanding, in sql to create such a table, I would do something like this:</p>
<pre><code>SELECT
task_id,
name,
count(tag) AS count_tag,
sum(count(tag)) OVER (PARTITION BY name) AS total_count
</code></pre>
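<p>A sketch of the pandas equivalent of that SQL, using <code>groupby(...).transform</code> so the group counts are broadcast back onto every row (the sample frame from the question is rebuilt inline):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "task_id": [1, 1, 22, 22, 22, 22, 13, 13, 33, 33],
    "name": ["foo"] * 6 + ["bar"] * 4,
    "tag": ["xyz", "xyz", "aaa", "aaa", "aaa", "bbb", "xyz", "xyz", "aaa", "aaa"],
})

# count of each tag within a task (SQL: count(tag) ... GROUP BY task_id, tag)
df["count_tag"] = df.groupby(["task_id", "tag"])["tag"].transform("size")
# total number of rows per name (SQL: sum(count(tag)) OVER (PARTITION BY name))
df["total_count"] = df.groupby("name")["name"].transform("size")
print(df)
```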
|
<python><pandas>
|
2022-12-15 11:59:20
| 2
| 521
|
TeoK
|
74,811,006
| 2,915,050
|
Python - Use Python variable for BigQuery Magic query structure (not query parameter)
|
<p>I want to use a Python variable to help build the structure of a BigQuery Magic SQL query, however I'm unable to identify a way to do so. It is <em><strong>not</strong></em> a parameter for a <code>WHERE</code> clause or anything similar - it's the structure of the query itself.</p>
<p>This is the BigQuery Magic I want to use the Python variable <code>day_of_week</code> in:</p>
<pre><code>%%bigquery df
SELECT DATE_DIFF(CURRENT_DATE(), opening_date, WEEK(WEDNESDAY)) AS diff_weeks #I want to change WEDNESDAY to day_of_week
</code></pre>
<p>I have tried doing the following:</p>
<pre><code>params = {"day_of_week": day_of_week} #day_of_week could be either MONDAY, TUESDAY, WEDNESDAY etc.
%%bigquery df --params $params
SELECT DATE_DIFF(CURRENT_DATE(), opening_date, WEEK(@day_of_week)) as diff_weeks
</code></pre>
<p>However, BigQuery Magic seems to treat this as a value parameter, <code>'WEDNESDAY'</code> instead of <code>WEDNESDAY</code></p>
<p>Any ideas?</p>
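<p>Since <code>--params</code> always sends values as quoted query parameters, one workaround sketch is plain Python string interpolation to build the query text itself (note this is not injection-safe, and it assumes the query is then run through the client library rather than the cell magic):</p>

```python
day_of_week = "WEDNESDAY"  # hypothetical value

# the day name becomes part of the SQL text, not a bound parameter
query = f"""
SELECT DATE_DIFF(CURRENT_DATE(), opening_date, WEEK({day_of_week})) AS diff_weeks
"""
print(query)
```

<p>The resulting string could then be executed with <code>client.query(query)</code> from <code>google.cloud.bigquery</code> instead of <code>%%bigquery</code>.</p>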
|
<python><google-bigquery>
|
2022-12-15 11:44:31
| 1
| 1,583
|
RoyalSwish
|
74,810,985
| 11,809,811
|
events on widgets that are drawn on a canvas?
|
<p>I have placed some widgets on a canvas (ultimately to enable scrolling). Here is the simplified code:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
root = tk.Tk()
root.geometry('400x400')
canvas = tk.Canvas(root, background = 'white')
canvas.pack(expand = True, fill = 'both')
canvas.create_window((10,30), window = ttk.Label(root, text = 'A label'))
canvas.bind('<MouseWheel>', lambda event: print(event))
root.mainloop()
</code></pre>
<p>The mousewheel event does work on the canvas but does not work on the area where with the widget. Is there a way around it?</p>
<p>One solution I thought might work was to put the canvas in a container and run an event on that:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
class ScrollContainer(ttk.Frame):
    def __init__(self, parent):
        super().__init__(master = parent)
        canvas = tk.Canvas(self, background = 'white')
        canvas.pack(expand = True, fill = 'both', side = 'left')
        canvas.create_window((10,30), window = ttk.Label(root, text = 'A label'))
        self.bind('<MouseWheel>', lambda event: print(event))
        self.pack(expand = True, fill = 'both')

root = tk.Tk()
root.geometry('400x400')
ScrollContainer(root)
root.mainloop()
</code></pre>
<p>But that one doesn't trigger an event at all. Can someone help with this problem? I just want to be able to trigger an event when the mouse is over the frame or canvas regardless of the children (or canvas windows). It does work to create an event on the root itself but I need this to be more focused.</p>
|
<python><tkinter>
|
2022-12-15 11:42:51
| 1
| 830
|
Another_coder
|
74,810,944
| 18,877,953
|
Joining logs from 2 Azure Log Analytics workspaces
|
<p>I'm using the Azure SDK for Python to query a log Analytics workspace.</p>
<p>I have 2 workspaces I'd like to query, but I was wondering if there is a way to union the data inside the query instead of querying both workspaces and combining the result objects within my Python program.</p>
<p>Something like this -</p>
<pre class="lang-py prettyprint-override"><code>from azure.monitor.query import LogsQueryClient
client = LogsQueryClient(creds)
query = """
TableName // Table from the current workspace
| union ExteralTableName // Table from a different workspace
"""
client.query_workspace("<current_workspace_id>", query, timespan="...")
</code></pre>
<p>The identity that executes this query will have permissions to query both workspaces separately, and I have their URLs.</p>
<p>I couldn't find this option in the Log Analytics documentation, so I'm wondering if anyone else has done this before, or if I must process the data after It's sent back to me.</p>
<p>Thanks in advance!</p>
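<p>For reference, Kusto does support cross-resource queries through the <code>workspace()</code> function, so the union can happen inside a single query string (sketch; the workspace ID is a placeholder):</p>

```python
# KQL text only; the second workspace is addressed with workspace("<id-or-name>")
query = """
TableName
| union workspace("00000000-0000-0000-0000-000000000000").ExternalTableName
"""
print(query)
```

<p>That string would then be passed to <code>query_workspace</code> exactly as in the snippet above, provided the identity can read both workspaces; the SDK also exposes an <code>additional_workspaces</code> keyword on <code>query_workspace</code> for the same purpose.</p>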
|
<python><azure><azure-log-analytics><azure-sdk><azure-sdk-python>
|
2022-12-15 11:39:47
| 2
| 780
|
LITzman
|
74,810,938
| 1,652,954
|
How to check if a list contains another list of different values
|
<p>I would like to know how to check whether a list contains another list as one of its elements, regardless of the values it holds.
Given the example posted below, I want a condition that checks for the existence of a list inside a list. In other words, the check should return False for the values of <code>l1</code> and True for <code>l2</code>.</p>
<p>Please let me know how to achieve that.</p>
<p><strong>code</strong></p>
<pre><code>l1 = [1,2,3]
l2 = [4,5,[6]]
</code></pre>
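<p>A minimal sketch of such a check with <code>isinstance</code>:</p>

```python
l1 = [1, 2, 3]
l2 = [4, 5, [6]]

def contains_list(lst):
    # True if any element of lst is itself a list
    return any(isinstance(x, list) for x in lst)

print(contains_list(l1))  # → False
print(contains_list(l2))  # → True
```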
|
<python>
|
2022-12-15 11:39:15
| 3
| 11,564
|
Amrmsmb
|
74,810,890
| 16,395,449
|
Add current date or time in the converted file using Pandas script
|
<p>I am trying to convert .csv to .xlsx using a Pandas script. I want the filename to be suffixed with the current date (i.e. the converted xlsx file should be named like this: sourcefile_12152022.xlsx)</p>
<p>I tried this using - import time - but this is not working for me.</p>
<pre><code>import time
TodaysDate = time.strftime("%d-%m-%Y")
sourcefile= TodaysDate +".xlsx"
DataSet.to_excel(sourcefile, sheet_name='sheet1', index=False)
</code></pre>
<p>This is the original script that I tried before trying to add using - import time</p>
<pre><code>import os
os.chdir("/opt/alb_test/alb/albt1/Source/alb/al/conversion/scr")
# Reading the csv file
import pandas as pd
print(pd.__file__)
df_new = pd.read_csv("sourcefile.csv", sep="|", header=None).dropna(axis=1, how="all")
# saving xlsx file
df_new.to_excel("sourcefile.xlsx", index=False)
</code></pre>
<p>Kindly guide me on this conversion.</p>
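<p>For reference, a sketch that builds a filename matching the <code>sourcefile_12152022.xlsx</code> pattern; note that the order shown comes from <code>%m%d%Y</code>, not the <code>%d-%m-%Y</code> used in the attempt above:</p>

```python
import time

suffix = time.strftime("%m%d%Y")        # e.g. "12152022"
filename = f"sourcefile_{suffix}.xlsx"  # e.g. "sourcefile_12152022.xlsx"
print(filename)
```

<p>That name could then be passed to <code>df_new.to_excel(filename, index=False)</code> in place of the hard-coded string.</p>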
|
<python><python-3.x><pandas>
|
2022-12-15 11:35:00
| 1
| 369
|
Jenifer
|
74,810,861
| 7,547,932
|
How to mask row in Tensorflow without for loop
|
<p>I want to create a custom Layer for a Tensorflow model but the logic I have uses a for loop, which Tensorflow doesn't like. How can I modify my code to remove the for loop but still achieve the same result?</p>
<pre><code>class CustomMask(tf.keras.layers.Layer):
    def call(self, inputs):
        mask = tf.where(inputs[:, 0] < 0.5, 1, 0)
        for i, m in enumerate(mask):
            if m:
                inputs = inputs[i, 1:].assign(tf.zeros(4, dtype=tf.float32))
            else:
                first = tf.where(inputs[:, 1] >= 0.5, 0, 1)
                assign = tf.multiply(tf.cast(first, tf.float32), inputs[:, 2])
                inputs = inputs[:, 2].assign(assign)

                third = tf.where(inputs[:, 1] >= 0.5, 1, 0)
                assign = tf.multiply(tf.cast(third, tf.float32), inputs[:, 1])
                inputs = inputs[:, 1].assign(assign)
        return inputs
</code></pre>
<p>Example input Tensor:</p>
<pre><code><tf.Variable 'Variable:0' shape=(3, 5) dtype=float32, numpy=
array([[0.8, 0.7, 0.2, 0.6, 0.9],
[0.8, 0.4, 0.8, 0.3, 0.7],
[0.3, 0.2, 0.4, 0.3, 0.8]], dtype=float32)>
</code></pre>
<p>Corresponding output:</p>
<pre><code><tf.Variable 'UnreadVariable' shape=(3, 5) dtype=float32, numpy=
array([[0.8, 0.7, 0. , 0.6, 0.9],
[0.8, 0. , 0.8, 0.3, 0.7],
[0.3, 0. , 0. , 0. , 0. ]], dtype=float32)>
</code></pre>
<p>EDIT:
The layer should take an array of shape (batch_size, 5). If the first value of a row is less than 0.5, set the rest of the row values to 0; otherwise, if the 2nd element is above 0.5, set the 3rd element to 0, and if the 3rd element is greater than 0.5, set the 2nd element to 0.</p>
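<p>For illustration, the per-row logic described in the edit can be expressed with boolean masks instead of a Python loop; a NumPy sketch of the same rules (the TensorFlow analogues would be <code>tf.where</code> and tensor ops applied to the whole batch):</p>

```python
import numpy as np

x = np.array([[0.8, 0.7, 0.2, 0.6, 0.9],
              [0.8, 0.4, 0.8, 0.3, 0.7],
              [0.3, 0.2, 0.4, 0.3, 0.8]], dtype=np.float32)

second_hi = x[:, 1] >= 0.5   # rows whose 2nd element is high
third_hi = x[:, 2] >= 0.5    # rows whose 3rd element is high

out = x.copy()
out[second_hi, 2] = 0               # 2nd >= 0.5 -> zero the 3rd
out[third_hi & ~second_hi, 1] = 0   # else 3rd >= 0.5 -> zero the 2nd
out[x[:, 0] < 0.5, 1:] = 0          # 1st < 0.5 -> zero the rest of the row
print(out)
```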
|
<python><tensorflow><keras>
|
2022-12-15 11:32:30
| 1
| 888
|
kynnemall
|
74,810,689
| 12,642,161
|
Custom sort without specifying all values
|
<p>Is there a way to sort values without having to specify all values in the list? I just want to, for example, sort by economics and then library. The remaining rows should be the same order as the original df.</p>
<pre><code>order = ["economics","library"]
df cat
0 library
1 economics
2 science
3 np.NaN
</code></pre>
<p>Expected Output:</p>
<pre><code>1 economics
13 economics
0 library
...
</code></pre>
<pre><code>df.sort_values("cat", key=lambda column = column.map(lambda: x:order.index(x)))
</code></pre>
|
<python><pandas><sorting>
|
2022-12-15 11:18:08
| 2
| 398
|
arv
|
74,810,662
| 1,338,588
|
How to process the dataframe which was read from Kafka Topic using Spark Streaming
|
<p>I'm able to stream twitter data into my Kafka topic via a producer. When I try to consume through the default Kafka consumer I'm able to see the tweets as well.</p>
<p><a href="https://i.sstatic.net/XXLuy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XXLuy.png" alt="enter image description here" /></a></p>
<p>But when I try to use Spark Streaming to consume this and process it further, I'm unable to find resources to refer to. This is what my consumer looks like:</p>
<pre><code>from pyspark.sql import SparkSession
import time
spark = SparkSession.builder.appName('LinkitTest').getOrCreate()
df = spark \
.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "localhost:9092") \
.option("subscribe", "tweets") \
.option("startingOffsets", "earliest") \
.load()
#df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
print(df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)"))
query = df.writeStream.format("console").start()
time.sleep(10) # sleep 10 seconds
query.stop()
</code></pre>
<p>Even when I do <code>spark-submit</code> I see the tweets in the topic but the value aren't readable</p>
<blockquote>
<p>spark-submit --packages
org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1 kafka_consumer.py</p>
</blockquote>
<p><a href="https://i.sstatic.net/4a66n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4a66n.png" alt="enter image description here" /></a></p>
<p>I'm unable to figure out how to at least print the column values (the tweets in this case) from the dataframe I have. Any help would be appreciated.</p>
<p><strong>UPDATE</strong></p>
<p>I was able to print the values on the console, but as you see its not readable. How can I convert this to a readable String?</p>
<pre><code>query = df.select(col("value"))\
.writeStream\
.format("console")\
.start()
</code></pre>
<p><a href="https://i.sstatic.net/FZp7Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FZp7Y.png" alt="enter image description here" /></a></p>
|
<python><apache-spark><pyspark><spark-structured-streaming><spark-kafka-integration>
|
2022-12-15 11:15:19
| 1
| 9,532
|
Kulasangar
|
74,810,636
| 19,797,660
|
How to break line in python function description?
|
<p>I've tried to find a solution for how to break lines in a Python function description, but none of them worked for me.
I have such a function:</p>
<p><a href="https://i.sstatic.net/ZJHqE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZJHqE.png" alt="enter image description here" /></a></p>
<p>But the description shown when I hover over the function is a continuous chain of characters with no line breaks, like below:
<a href="https://i.sstatic.net/SIzBl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SIzBl.png" alt="enter image description here" /></a>
How do I add line breaks in the function description?</p>
<p>EDIT:
Due to a request I am posting the code (the indentation broke when pasting):</p>
<pre class="lang-py prettyprint-override"><code>from pandas import PandasDataFrame
def calculate_input_types(price_df: PandasDataFrame, calculate_all: bool = True, *, input_type: dict) -> PandasDataFrame:
    """
    :param price_df: Dataframe containing Open, Close, High, Low, Volume price data.
    :param calculate_all: Boolean value True or False; if True the function calculates all possible input types:
        HL2, HLC3, OHLC4, HLCC4. If False the function calculates only the chosen input types; a dictionary must be passed.
    :param input_type: Optional argument, mandatory if calculate_all is set to False. The dictionary should possess input
        types as keys and a one-hot encoded value of 1 if the input type is to be calculated
        or 0 if the input type shouldn't be calculated.
        Example of input_type:
            input_type = {
                HL2 : 1,
                HLC3 : 1,
                OHLC4 : 0,
                HLCC4 : 1
            }
        This type of input calculates only HL2, HLC3 and HLCC4 since those are set to 1 and OHLC4 is set to 0.
    :return: Dataframe with calculated input types.
    """
    return 'apple'
</code></pre>
|
<python><pycharm>
|
2022-12-15 11:13:43
| 1
| 329
|
Jakub Szurlej
|
74,810,524
| 2,088,029
|
How to convert "Excel Xml"/"Xml Spreadsheet 2003", to .xlsx format without office/excel install
|
<p>The input files are single XML files, that can be generated from excel using "Xml Spreadsheet 2003", often referred to as "Excel XML" format, and described in this namespace</p>
<pre><code>xmlns="urn:schemas-microsoft-com:office:spreadsheet"
</code></pre>
<p>(a simple file is at the end of this post for clarity).</p>
<p>I can potentially use dotnet, JVM or python, all of which have an enormous range of 3rd party libraries for excel manipulation, but none (except excel) seem to be able to achieve this.</p>
<p>The only additional restriction is that excel cannot be a dependency, i.e. it will almost always not be present on the OS that runs the code.</p>
<p>Note: There are many many posts about very similar issues, but all come to dead end, mostly referring to 3rd party packages that don't support this format, or suggesting creating the input files in a different format (not an option).</p>
<p>example file:</p>
<pre><code><?xml version="1.0"?>
<?mso-application progid="Excel.Sheet"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
xmlns:o="urn:schemas-microsoft-com:office:office"
xmlns:x="urn:schemas-microsoft-com:office:excel"
xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"
xmlns:html="http://www.w3.org/TR/REC-html40">
<DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">
<Author></Author>
<LastAuthor></LastAuthor>
<Created>2022-12-15T10:58:05Z</Created>
<Version>16.00</Version>
</DocumentProperties>
<OfficeDocumentSettings xmlns="urn:schemas-microsoft-com:office:office">
<AllowPNG/>
</OfficeDocumentSettings>
<ExcelWorkbook xmlns="urn:schemas-microsoft-com:office:excel">
<WindowHeight>5556</WindowHeight>
<WindowWidth>17256</WindowWidth>
<WindowTopX>32767</WindowTopX>
<WindowTopY>32767</WindowTopY>
<ProtectStructure>False</ProtectStructure>
<ProtectWindows>False</ProtectWindows>
</ExcelWorkbook>
<Styles>
<Style ss:ID="Default" ss:Name="Normal">
<Alignment ss:Vertical="Bottom"/>
<Borders/>
<Font ss:FontName="Calibri" x:Family="Swiss" ss:Size="11" ss:Color="#000000"/>
<Interior/>
<NumberFormat/>
<Protection/>
</Style>
</Styles>
<Worksheet ss:Name="Sheet1">
<Table ss:ExpandedColumnCount="1" ss:ExpandedRowCount="1" x:FullColumns="1"
x:FullRows="1" ss:DefaultRowHeight="14.4">
</Table>
<WorksheetOptions xmlns="urn:schemas-microsoft-com:office:excel">
<PageSetup>
<Header x:Margin="0.3"/>
<Footer x:Margin="0.3"/>
<PageMargins x:Bottom="0.75" x:Left="0.7" x:Right="0.7" x:Top="0.75"/>
</PageSetup>
<Selected/>
<ProtectObjects>False</ProtectObjects>
<ProtectScenarios>False</ProtectScenarios>
</WorksheetOptions>
</Worksheet>
</Workbook>
</code></pre>
|
<python><java><c#><excel>
|
2022-12-15 11:04:44
| 1
| 2,907
|
MrD at KookerellaLtd
|
74,810,522
| 10,536,858
|
Is storing data in class attribute bad practice?
|
<p>I need to use some data multiple times in my project. I don't want to read it globally and then pass it to all functions. Is it good practice to store it in class attribute?</p>
<p>Something like this:</p>
<pre><code>class SomeData():
    _data = None
    data_path = 'path.csv'

    @classmethod
    def get_data(cls):
        if cls._data is None:
            cls._data = pd.read_csv(cls.data_path)
        return cls._data.copy()
</code></pre>
<p>I have many data sources and I would like to replicate this pattern. If it is not good idea, what would be the best solution?</p>
<p>Thanks!</p>
<p><strong>Edit:</strong>
Example usage:</p>
<pre><code># package/module1.py
def fun1():
    df = SomeData.get_data()
    ...

# package/module2.py
def fun2():
    df = SomeData.get_data()
    ...

# script.py
from package.module1 import fun1
from package.module2 import fun2

x = fun1()
y = fun2()
</code></pre>
<p><strong>Edit2:</strong></p>
<p>I have found builtin decorators in <code>functools</code> package: <code>cached_property</code> and <code>lru_cache</code> which can be used to cashe expensive operations: <a href="https://docs.python.org/3/library/functools.html#functools.cached_property" rel="nofollow noreferrer">functools</a>.</p>
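<p>A minimal sketch of the <code>functools</code> approach mentioned in Edit2, with the expensive read replaced by a stand-in so the caching behaviour is visible:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def load_data(path):
    # stand-in for pd.read_csv(path); the body runs once per distinct path
    print(f"reading {path}")
    return [1, 2, 3]

a = load_data("path.csv")
b = load_data("path.csv")  # cache hit: no second "reading" line
print(a is b)  # → True
```

<p>Note that unlike the class above, <code>lru_cache</code> hands back the same object each time (there is no <code>.copy()</code>), so mutating the returned value would poison the cache.</p>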
|
<python>
|
2022-12-15 11:04:34
| 3
| 565
|
marcin
|
74,810,477
| 11,724,014
|
Python Graphviz - Import SVG file inside a node
|
<p>I need to import a SVG image using the python librairy of graphviz.</p>
<p>Here is the SVG file (created with the software draw.io):</p>
<pre><code><?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Do not edit this file with editors other than diagrams.net -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="37" height="88" viewBox="-0.5 -0.5 37 88" content="&lt;mxfile host=&quot;Electron&quot; modified=&quot;2022-12-15T10:43:57.185Z&quot; agent=&quot;5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/19.0.3 Chrome/102.0.5005.63 Electron/19.0.3 Safari/537.36&quot; etag=&quot;3oZj78ENfpeNirtN8up0&quot; version=&quot;19.0.3&quot; type=&quot;device&quot;&gt;&lt;diagram id=&quot;9rhVbpvjZH1Aqyv7po2j&quot; name=&quot;Page-1&quot;&gt;jZNNT8MwDIZ/TY9IbbqNcRzdBkiAkCaE4Ja1XhuR1FXqbh2/npS6X5omIfXgPLYT+7XrhZGpH6wsshdMQHvCT2ovXHtCBDMhvObzk3NLlot5C1KrEg4awE79AEOfaaUSKCeBhKhJFVMYY55DTBMmrcXTNOyAevpqIVO4ALtY6kv6oRLKuAtxO/BHUGnWvRws7lqPkV0wd1JmMsHTCIUbL4wsIrWWqSPQjXidLm3e9oq3L8xCTv9JeP9UWzrIrXh9ouW8/MavFG74lqPUFTe8igktV0znTgZXfNGYldFtQHh/BEvKCfUs96DfsFSkMHcheyRCMwpYaZU2DsLC0YyMdofAmViRVjlE/eh8B7kglwv11U6DXj+3eIAGyJ5dCCfMfJacd27WLdNpmGDIKBsNb8FM8s6k/c2DrM5gZbvjMME/3+g/CDe/&lt;/diagram&gt;&lt;/mxfile&gt;"><defs><filter id="dropShadow"><feGaussianBlur in="SourceAlpha" stdDeviation="1.7" result="blur"/><feOffset in="blur" dx="3" dy="3" result="offsetBlur"/><feFlood flood-color="#3D4574" flood-opacity="0.4" result="offsetColor"/><feComposite in="offsetColor" in2="offsetBlur" operator="in" result="offsetBlur"/><feBlend in="SourceGraphic" in2="offsetBlur"/></filter></defs><g filter="url(#dropShadow)"><ellipse cx="15" cy="7.5" rx="7.5" ry="7.5" fill="rgb(255, 255, 255)" stroke="rgb(0, 0, 0)" pointer-events="all"/><path d="M 15 15 L 15 40 M 15 20 L 0 20 M 15 20 L 30 20 M 15 40 L 0 60 M 15 40 L 30 60" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; 
align-items: unsafe flex-start; justify-content: unsafe center; width: 1px; height: 1px; padding-top: 67px; margin-left: 15px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: nowrap;">Actor</div></div></div></foreignObject><text x="15" y="79" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Actor</text></switch></g></g><switch><g requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"/><a transform="translate(0,-5)" xlink:href="https://www.diagrams.net/doc/faq/svg-export-text-problems" target="_blank"><text text-anchor="middle" font-size="10px" x="50%" y="100%">Text is not SVG - cannot display</text></a></switch></svg>
</code></pre>
<p>Here is the code I have tried to import this inside a node:</p>
<pre><code>import os
import graphviz
os.environ["PATH"] += os.pathsep + r'C:\Program Files\Graphviz\bin'
path_current = os.path.dirname(__file__)
path_graph = os.path.join(path_current, "build", "graph")
path_svg = os.path.join(path_current, "test.svg")
graph = graphviz.Digraph(
'structs',
filename=path_graph,
format="svg",
graph_attr={"rankdir": "LR"},
node_attr={'shape': 'record'}
)
graph.node("test node SVG", **{
"image": path_svg,
})
graph.view()
</code></pre>
<p>The image is just not showing without any error.</p>
|
<python><svg><graphviz><pygraphviz>
|
2022-12-15 11:00:51
| 1
| 1,314
|
Vincent Bénet
|
74,810,332
| 9,182,743
|
Type hint for output of plotly express?
|
<p>If I have a function that returns a plotly express figure, what is the correct way of type hinting?</p>
<p>The figure is of type:
<code><class 'plotly.graph_objs._figure.Figure'>.</code></p>
<p>Should I type hint by separately importing plotly, and writing:
<code>-> plotly.graph_objs._figure.Figure </code>?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import plotly.express as px
import plotly
long_df = px.data.medals_long()
fig = px.bar(long_df, x="nation", y="count", color="medal", title="Long-Form Input")
print (type (fig))
def plot_bar (df: pd.DataFrame) -> plotly.graph_objs._figure.Figure :
fig = px.bar(long_df, x="nation", y="count", color="medal", title="Long-Form Input")
return fig
plot_bar(long_df)
</code></pre>
|
<python><plotly><python-typing><plotly-express>
|
2022-12-15 10:49:10
| 1
| 1,168
|
Leo
|
74,810,192
| 221,270
|
pyrtlsdr installation Windows 10
|
<p>I am trying to install pyrtlsdr. I first installed SDRUno and checked that my SDRPlay is connected and working fine. Then I tried to install pyrtlsdr with pip on Windows 10 with Python 3.11.0 [MSC v.1929 64 bit (AMD64)] on win32.</p>
<p>This sample raises the following error:</p>
<pre><code>from rtlsdr import RtlSdr
sdr = RtlSdr()
# configure device
sdr.sample_rate = 2.048e6 # Hz
sdr.center_freq = 70e6 # Hz
sdr.freq_correction = 60 # PPM
sdr.gain = 'auto'
print(sdr.read_samples(512))
</code></pre>
<p>Error:</p>
<pre><code> C:\Users\python.exe C:\Users\PycharmProjects\sdr\01.py
Traceback (most recent call last):
File "C:\Users\01.py", line 3, in <module>
sdr = RtlSdr()
^^^^^^^^
File "C:\Users\anaconda3\envs\env1\Lib\site-packages\rtlsdr\rtlsdr.py", line 133, in __init__
self.open(device_index, test_mode_enabled, serial_number)
File "C:\Users\anaconda3\envs\env1\Lib\site-packages\rtlsdr\rtlsdr.py", line 171, in open
raise LibUSBError(result, 'Could not open SDR (device index = %d)' % (device_index))
rtlsdr.rtlsdr.LibUSBError: <LIBUSB_ERROR_IO (-1): Input/output error> "Could not open SDR (device index = 0)"
Process finished with exit code 1
</code></pre>
<p>How can I solve this problem?</p>
|
<python><rtl-sdr><pyrtlsdr>
|
2022-12-15 10:36:31
| 1
| 2,520
|
honeymoon
|
74,810,124
| 508,907
|
How to run an IPython CELL magic from a script (%% magic)
|
<p>Jupyter magic commands (starting with a single <code>%</code>, e.g. <code>%timeit</code>) can be run in a script using the answer from <a href="https://stackoverflow.com/questions/10361206/how-to-run-an-ipython-magic-from-a-script-or-timing-a-python-script">How to run an IPython magic from a script (or timing a Python script)</a></p>
<p>However, I cannot find an answer on how to run cell magic commands, e.g. In Jupyter we can do:</p>
<pre><code>%%sh -s $task_name
#!/bin/bash
task_name=$1
echo This is a script running task [$task_name] that is executed from python
echo Many more bash commands here...................
</code></pre>
<p>How can this be written such that it can be executed from a python script?</p>
|
<python><jupyter><ipython><ipython-magic>
|
2022-12-15 10:30:31
| 1
| 14,360
|
ntg
|
74,810,051
| 4,882,200
|
Insert a bold hyphen in a matplotlib title, rather than minus sign?
|
<p>I'd like to insert a bold title with a hyphen. I've tried:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Data for plotting
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2 * np.pi * t)
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set_title(r"$\bf{hyphenated-example}$")
</code></pre>
<p>This displays with a minus sign (<strong>hyphenated − example</strong>). I don't think I can use any of the solutions <a href="https://www.logic.at/staff/salzer/etc/mhyphen/mhyphen.pdf" rel="nofollow noreferrer">from this document explaining how to do it in a pure latex environment</a>; mbox is not recognised.</p>
|
<python><matplotlib><latex>
|
2022-12-15 10:24:39
| 1
| 305
|
Femkemilene
|
74,809,944
| 19,580,067
|
Split a single column to 2 different column
|
<p>Need to split the single column into 2 different columns.</p>
<p>I have tried a few steps using regex but was not able to achieve the result.</p>
<p>Any suggestions, please?</p>
<p>In the given data, we have the values of both 'A' and 'B' merged together as a single column. We need to split that.</p>
<pre><code>Data 1
col A
000 5
448 1
469 1
897 1
Expected
A B
000 5
448 1
469 1
897 1
Note: In Data 2, we have the 'B' value as null for all rows
Data 2
col A
524
894
894
896
Expected:
A B
524
894
894
896
</code></pre>
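One way to sketch the split (an assumption: whitespace separates the two values; the frame below reproduces Data 1 from the question) is pandas' <code>str.split</code> with <code>expand=True</code>; rows with no second token, as in Data 2, would simply get NaN in column B:

```python
import pandas as pd

# Hypothetical reproduction of "Data 1": both values live in one column.
df = pd.DataFrame({"col": ["000 5", "448 1", "469 1", "897 1"]})

# expand=True turns the whitespace-separated pieces into separate columns.
out = df["col"].str.split(expand=True)
out.columns = ["A", "B"][: out.shape[1]]
```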
|
<python><dataframe><extract>
|
2022-12-15 10:15:12
| 1
| 359
|
Pravin
|
74,809,826
| 2,560,292
|
SQLAlchemy + mysql / mariadb: bulk upsert with composite keys
|
<p>Using SQLAlchemy and a MariaDB backend, I need to bulk upsert data. Using <a href="https://stackoverflow.com/a/69968892/2560292">this answer</a> I was able to make it work for model with a single primary key. However, I can't make it work with composite keys.</p>
<p>The key part of the code is this one:</p>
<pre class="lang-py prettyprint-override"><code> # for single pk
primary_key = [key.name for key in inspect(model).primary_key][0]
# get all entries to be updated
for each in DBSession.query(model).filter(getattr(model, primary_key).in_(entries.keys())).all():
entry = entries.pop(str(getattr(each, primary_key)))
</code></pre>
<p>I tried to change it to make it work with composite keys:</p>
<pre class="lang-py prettyprint-override"><code> primary_keys = tuple([key.name for key in inspect(model).primary_key])
# get all entries to be updated
for each in DBSession.query(model).filter(and_(*[getattr(model, col).in_(entries.keys()) for col in primary_keys])).all():
print("This is never printed :(")
</code></pre>
<p>I guess this <code>DBSession.query(model).filter(and_(*[getattr(model, col).in_(entries.keys()) for col in primary_keys])).all()</code> doesn't work as intended.</p>
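To illustrate why the per-column filter can't match, here is a plain-Python analogue with no database involved: <code>entries</code> is keyed by composite-key tuples, so comparing a single column's value against those keys never succeeds, while a tuple-wise comparison — what SQLAlchemy's <code>tuple_</code> construct (already imported in the snippet) is meant to express — does:

```python
# entries is keyed by (user, account) tuples, as in users_data above.
entries = {("user1", "account1"): {}, ("user1", "account2"): {}}
row = ("user1", "account1")

# Analogue of [getattr(model, col).in_(entries.keys()) for col in pks]:
# each single value is checked against *tuples*, so nothing ever matches.
per_column_match = all(value in entries.keys() for value in row)

# Analogue of tuple_(*columns).in_(entries.keys()): compares whole keys.
tuple_wise_match = row in entries.keys()
```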
<p>For reference, here is a fully working snippet:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, create_engine, and_, or_
from sqlalchemy.types import String
from sqlalchemy.inspection import inspect
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy import inspect, tuple_
DBSession = scoped_session(sessionmaker())
Base = declarative_base()
class Accounts(Base):
__tablename__ = 'accounts'
account = Column(String(50), primary_key=True)
comment = Column(String(50))
class Users(Base):
__tablename__ = 'users'
user = Column(String(50), primary_key=True)
account = Column(String(50), primary_key=True)
comment = Column(String(50))
accounts_data = {"account1": {"account": "account1", "comment": "test"}, "account2": {"account": "account2", "comment": None}}
users_data = {("user1", "account1"): {"user": "user1", "account": "account1", "comment": ""}, ("user1", "account2"): {"user": "user1", "account": "account2", "comment": ""}}
def upsert_data_single_pk(entries, model):
primary_key = [key.name for key in inspect(model).primary_key][0]
entries_to_update = []
entries_to_insert = []
# get all entries to be updated
for each in DBSession.query(model).filter(getattr(model, primary_key).in_(entries.keys())).all():
entry = entries.pop(str(getattr(each, primary_key)))
entries_to_update.append(entry)
# get all entries to be inserted
for entry in entries.values():
entries_to_insert.append(entry)
DBSession.bulk_insert_mappings(model, entries_to_insert)
DBSession.bulk_update_mappings(model, entries_to_update)
DBSession.commit()
def upsert_data_multiple_pk(entries, model):
primary_keys = tuple([key.name for key in inspect(model).primary_key])
entries_to_update = []
entries_to_insert = []
# get all entries to be updated
for each in DBSession.query(model).filter(and_(*[getattr(model, col).in_(entries.keys()) for col in primary_keys])).all():
# Print the composite primary key value by concatenating the values of the individual columns
print('-'.join([str(getattr(each, col)) for col in primary_keys]))
# get all entries to be inserted
for entry in entries.values():
entries_to_insert.append(entry)
DBSession.bulk_insert_mappings(model, entries_to_insert)
DBSession.bulk_update_mappings(model, entries_to_update)
DBSession.commit()
db_connection_uri = "mysql+pymysql://XXXX:XXXX@XXXX:XXXX/XXXX?charset=utf8mb4"
engine = create_engine(db_connection_uri, echo=False)
DBSession.remove()
DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
#Base.metadata.drop_all(engine, checkfirst=True)
Base.metadata.create_all(bind=engine)
#upsert_data_single_pk(accounts_data, Accounts)
upsert_data_multiple_pk(users_data, Users)
</code></pre>
|
<python><mysql><sqlalchemy><mariadb>
|
2022-12-15 10:05:32
| 1
| 1,196
|
Shan-x
|
74,809,798
| 14,649,310
|
How to validate JSON and Array types in SQLAlchemy?
|
<p>I have an <code>SQLAlchemy</code> model and two of the properties do not seem to be validated by <code>SQLAlchemy</code>, nor do I get an error from the db, which is counter-intuitive and not what I would expect.</p>
<p>Specifically the <code>JSON</code> type and the <code>ARRAY(Text)</code> types do not fail when wrong types are passed to them. This is a dummy version of the model:</p>
<pre><code>class Monitor(Model):
__tablename__ = 'monitor'
__table_args__ = (UniqueConstraint('user_id'),)
_context = Column(ARRAY(Text), nullable=False)
user_id = Column(UUID, ForeignKey('user.id'), index=True, nullable=False)
_meta = Column(JSON, nullable=True, default={})
enabled_at = Column(DateTime(timezone=True), nullable=True, default=now)
disabled_at = Column(DateTime(timezone=True), nullable=True)
user = relationship('User')
</code></pre>
<p>I am able to pass <code>integer</code>, <code>list</code>, <code>string</code>, <code>dict</code> whatever to the <code>_meta</code> <code>JSON</code> field. And no errors are raised. Also I can pass any iterable to the <code>Array(Text)</code> not just an array of strings.</p>
<p>Is there a way to enforce the validation of the declared types just with SQLAlchemy without writing my own validation logic?</p>
|
<python><validation><sqlalchemy>
|
2022-12-15 10:03:55
| 1
| 4,999
|
KZiovas
|
74,809,598
| 10,121,996
|
How to parse and iterate to save information in dynamodb with lambda in python
|
<p>In order to save data from lambda to dynamodb,</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import json
import boto3
from botocore.exceptions import ClientError
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('table1')
def lambda_handler(event, context):
readJson = json.dumps(event)
parseMe = json.loads(readJson)
table.put_item(
Item={
'webhook_ID': context.aws_request_id,
'eventType': parseMe['body-json']["eventType"],
'Name': parseMe['body-json']["Name"],
'Id': parseMe['body-json']["Id"],
'ReferenceId': parseMe['body-json']["ReferenceId"],
'mock': parseMe['body-json']["mock"],
'description': parseMe['body-json']["description"]
})
return {
'status' : 200,
'body': json.dumps('data has been received.'),
'requestID' : context.aws_request_id
}</code></pre>
</div>
</div>
</p>
<p>In the above code, I am parsing each and every json attribute explicitly, such as <code>Name</code> and <code>mock</code>, whereas I want to iterate over them instead, as in the future I might need to remove or add attributes in the json. These json attributes get saved in the <code>table1</code> DynamoDB table on AWS.
Any suggestions?</p>
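A minimal sketch of the iteration idea (the payload and the fixed key below are hypothetical stand-ins): copy whatever attributes arrive in <code>body-json</code> instead of naming them one by one, so adding or removing fields needs no code change:

```python
# Stand-in for parseMe['body-json']; real payloads come from the event.
body = {"eventType": "create", "Name": "demo", "mock": True}

item = {"webhook_ID": "req-123"}   # fixed attributes set explicitly
item.update(body)                  # every payload attribute, iterated
```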
|
<python><amazon-web-services><aws-lambda><amazon-dynamodb><boto3>
|
2022-12-15 09:48:04
| 1
| 733
|
Sobhit Sharma
|
74,809,457
| 10,966,677
|
Catch any of the errors in psycopg2 without listing them explicitly
|
<p>I have a <code>try</code> and <code>except</code> block where I would like to catch <strong>only</strong> the errors in the <code>psycopg2.errors</code> and not any other error.</p>
<p>The explicit way would be:</p>
<pre><code>try:
# execute a query
cur = connection.cursor()
cur.execute(sql_query)
except (psycopg2.errors.SyntaxError, psycopg2.errors.GroupingError) as err:
# handle in case of error
</code></pre>
<p>The query will always be some SELECT statement. If the execution fails it should be handled. Any other exception not belonging to <code>psycopg</code>, e.g. like <code>ZeroDivisionError</code>, should not be caught from the <code>except</code> clause. However, I would like to avoid to list all errors after the <code>except</code> clause. In fact, if you list the psycopg errors, you get a quite extensive list:</p>
<pre><code>from psycopg2 import errors
dir(errors)
</code></pre>
<p>I have searched quite extensively and am not sure if this question has been asked already.</p>
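psycopg2 follows the DB-API convention of exposing a common base class, <code>psycopg2.Error</code>, from which the generated <code>errors</code> classes derive, so a single <code>except psycopg2.Error</code> clause should cover the whole family without listing it. The inheritance mechanics can be sketched in plain Python (class names below are hypothetical stand-ins, not psycopg2's):

```python
# A base class catches every subclass; unrelated exceptions pass through.
class DatabaseError(Exception): ...
class QuerySyntaxError(DatabaseError): ...
class GroupingError(DatabaseError): ...

def run(exc):
    try:
        raise exc
    except DatabaseError:        # covers every subclass in the family
        return "handled"

caught = run(GroupingError())    # subclass -> handled

try:
    run(ZeroDivisionError())     # unrelated -> not caught by run()
    passed_through = False
except ZeroDivisionError:
    passed_through = True
```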
|
<python><postgresql><psycopg2>
|
2022-12-15 09:34:47
| 2
| 459
|
Domenico Spidy Tamburro
|
74,809,136
| 972,647
|
Parsing and flattening complex JSON with Pydantic
|
<p>I need to consume JSON from a 3rd party API, i.e. I have to deal with whatever this API returns and can't change that.</p>
<p>For this specific task the API returns what it calls an "entity". Yeah, not very meaningful. The issue is that the structure is deeply nested and in my parsing I want to be able to flatten it to some degree. To explain, here is an obfuscated example of a single "entity". In the full response this is in an array named "data" which can have multiple entities inside.</p>
<pre class="lang-json prettyprint-override"><code>{
"type": "entity",
"id": "efebcc3e-445c-4d85-9689-bb85f46160cb",
"links": {
"self": "https://example.com/api/v1.0/entities/efebcc3e-445c-4d85-9689-bb85f46160cb"
},
"attributes": {
"id": "efebcc3e-445c-4d85-9689-bb85f46160cb",
"eid": "efebcc3e-445c-4d85-9689-bb85f46160cb",
"name": "E03075-042",
"description": "",
"createdAt": "2021-07-14T05:58:47.239Z",
"editedAt": "2022-09-22T11:28:53.327Z",
"state": "open",
"fields": {
"Department": {
"value": "Foo"
},
"Description": {
"value": ""
},
"Division": {
"value": "Bar"
},
"Name": {
"value": "E03075-042"
},
"Project": {
"details": {
"description": ""
},
"value": "My Project"
}
}
},
"relationships": {
"createdBy": {
"links": {
"self": "https://example.com/api/rest/v1.0/users/101"
},
"data": {
"type": "user",
"id": "101"
}
},
"editedBy": {
"links": {
"self": "https://example.com/api/rest/v1.0/users/101"
},
"data": {
"type": "user",
"id": "101"
}
},
"ancestors": {
"links": {
"self": "https://example.com/api/rest/v1.0/entities/efebcc3e-445c-4d85-9689-bb85f46160cb/ancestors"
},
"data": [
{
"type": "entity",
"id": "7h60bcb9-b1c0-4a12-8b6b-12e3eab54e6f",
"meta": {
"links": {
"self": "https://example.com/api/rest/v1.0/entities/7h60bcb9-b1c0-4a12-8b6b-12e3eab54e6f"
}
}
}
]
},
"owner": {
"links": {
"self": "https://example.com/api/rest/v1.0/users/101"
},
"data": {
"type": "user",
"id": "101"
}
},
"pdf": {
"links": {
"self": "https://example.com/api/rest/v1.0/entities/efebcc3e-445c-4d85-9689-bb85f46160cb/pdf"
}
}
}
}
</code></pre>
<p>I want to parse this into a data container. I'm open to custom parsing and just using a data class instead of Pydantic if what I want is not possible.</p>
<p>Issues with the data:</p>
<ul>
<li><code>links</code>: Usage of <code>self</code> as field name in JSON. I would like to unnest this and have a top level field named simply <code>link</code></li>
<li><code>attributes</code>: unnest as well and not have them inside a <code>Attributes</code> model</li>
<li><code>fields</code>: unnest to top level and remove/ignore duplication (<code>name</code>, <code>description</code>)</li>
<li><code>Project</code> in <code>fields</code>: unnest to top level and only use the <code>value</code> field</li>
<li><code>relationships</code>: unnest, ignore some and maybe even resolve to actual user name</li>
</ul>
<p>Can I control Pydantic in such a way to unnest the data as I prefer and ignore unmapped fields?</p>
<p>Can the parsing also include resolving, which means more API calls?</p>
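Pydantic offers hooks for this kind of reshaping (e.g. validators), but the flattening step itself can be kept library-agnostic by pre-processing the raw dict before it reaches any model — a sketch under that assumption, using a trimmed-down entity with field names from the question:

```python
# Trimmed-down entity shaped like the API response above.
raw = {
    "type": "entity",
    "id": "efebcc3e-445c-4d85-9689-bb85f46160cb",
    "links": {"self": "https://example.com/api/v1.0/entities/efebcc3e"},
    "attributes": {
        "name": "E03075-042",
        "state": "open",
        "fields": {"Project": {"value": "My Project"},
                   "Name": {"value": "E03075-042"}},
    },
}

def flatten(entity):
    attrs = dict(entity.get("attributes", {}))
    fields = attrs.pop("fields", {})
    flat = {"id": entity["id"], "link": entity["links"]["self"], **attrs}
    for key, spec in fields.items():
        # hoist each field, keeping only "value"; setdefault skips
        # duplicates of attributes already present (Name, Description, ...)
        flat.setdefault(key.lower(), spec.get("value"))
    return flat

flat = flatten(raw)
```

The resulting flat dict can then be fed to a plain data class or a Pydantic model with top-level fields.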
|
<python><json><parsing><pydantic>
|
2022-12-15 09:09:16
| 1
| 7,652
|
beginner_
|
74,809,061
| 14,799,981
|
Click on button and fill elements inside a Table using Selenium Python
|
<p>I have an <code>html table</code> that includes both a <code>button</code> and a <code>fill form</code>, as follows:</p>
<pre><code><table class="configTbl" cellspacing="0">
<tbody>
<tr>
<td><span id="_0" class="login_input_text">Username</span>&nbsp;:</td>
<td>
<input type="text" name="un" value="admin" class="configF1" id="txtUserName" size="20" maxlength="31" autocomplete="off">
</td>
<td>&nbsp;</td>
</tr>
<tr>
<td><span id="_0" class="login_input_text">Password</span>&nbsp;:</td>
<td>
<input type="password" name="pw" id="txtLoginPwd" class="configF1" maxlength="64" autocomplete="off">
<input type="hidden" name="rd" value="/uir/dwrhome.htm">
<input type="hidden" name="rd2" value="/uir/loginpage.htm">
<input type="hidden" name="Nrd" value="1"> &nbsp;
<input type="hidden" name="Nlmb">
</td>
<td>&nbsp;</td>
</tr>
<tr>
<td colspan="3" class="submitBg_new">
<input value="Login" id="_0" class="submit" onmouseover="this.className = 'submitHover'" onmouseout="this.className = 'submit'" title="Login" type="submit">
</td>
</tr>
</tbody>
</table>
</code></pre>
<p>The main issue is filling in the form and clicking the button. I have used the following code for filling in the password:</p>
<pre><code>password = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID,"txtLoginPwd")))
password = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="txtLoginPwd"]')))
password = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.NAME, 'pw')))
password = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@name="pw"]')))
</code></pre>
<p>and I have even tried locating the form fields through the table:</p>
<pre><code>password = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((\
By.XPATH,"//form[@class='configTbl']/tbody/tr[2]/td[2]/input[1]")))
</code></pre>
<p>or</p>
<pre><code>password = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, \
###"//td[@class='user_type' and text()='real']//following-sibling::td[2]//button[contains(., 'Delete')]")))
#'//table[@class="configTbl"]/tbody/tr[2]/td[2]/a[@id="txtLoginPwd"]')))
'//table[@class="tableTest"]/tbody/tr[2]/td[2]/descendant::a[@id="txtLoginPwd"]')))
</code></pre>
<p>but none of it has been working for me. I get a timeout error:</p>
<blockquote>
<p>TimeoutException: Message:
Stacktrace:
Backtrace:</p>
</blockquote>
<p>In addition, clicking on the login button has been difficult for me. What should I do?</p>
|
<python><selenium><selenium-webdriver><automated-tests>
|
2022-12-15 09:02:18
| 0
| 2,263
|
RF1991
|
74,808,898
| 606,025
|
How to uninstall pip3
|
<p>I am having some problems with pip when trying to install a python application.
It says it's not able to find pip3, but it's installed.</p>
<p>Digging deeper, I think I have multiple locations where pip3 is installed.</p>
<p>While trying to uninstall, even that is not working, since it refers to the other pip3.</p>
<p>How do I go about keeping only one copy of pip3 and uninstalling the other?</p>
<pre><code>$which pip3
/home/frappeuser/.local/bin/pip3
$ sudo pip3 uninstall pip
sudo: unable to execute /usr/local/bin/pip3: No such file or directory
</code></pre>
|
<python><pip><erpnext>
|
2022-12-15 08:46:48
| 2
| 1,453
|
frewper
|
74,808,691
| 7,373,353
|
Hide optional content group in PDF with borb for python
|
<p>I am following this tutorial to add an OCR-layer to scanned PDFs: <a href="https://github.com/jorisschellekens/borb-examples#72-performing-ocr-on-a-pdf" rel="nofollow noreferrer">https://github.com/jorisschellekens/borb-examples#72-performing-ocr-on-a-pdf</a></p>
<p>However, unlike the pictures in the tutorial, the OCR layer gets overlaid on top of the scanned image in the output PDF by default.
<a href="https://i.sstatic.net/uFxkB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uFxkB.jpg" alt="enter image description here" /></a></p>
<p>Is there a way to hide this layer before saving the output PDF?</p>
|
<python><ocr><tesseract><python-tesseract><borb>
|
2022-12-15 08:26:50
| 0
| 395
|
leabum
|
74,808,652
| 12,108,866
|
Can't get the changed global variable
|
<p>t.py</p>
<pre><code>value = 0
def change_value():
    global value
    value = 10
</code></pre>
<p>s.py</p>
<pre><code>import t
from t import value
t.change_value()
print(f'test1: {t.value}')
print (f'test2: {value}')
</code></pre>
<p>Output</p>
<blockquote>
<p>test1: 10</p>
<p>test2: 0</p>
</blockquote>
<p>Why isn't it returning the changed value in test2?</p>
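The behaviour follows from how <code>from t import value</code> works: it copies the module attribute's <em>current</em> binding into a new name in <code>s.py</code>, and rebinding <code>t.value</code> later does not touch that copy. A self-contained sketch (simulating the module with <code>types.ModuleType</code> instead of a second file):

```python
import types

t = types.ModuleType("t")     # stands in for the imported t.py
t.value = 0

value = t.value               # what "from t import value" effectively does

t.value = 10                  # what change_value() does via "global value"

via_module = t.value          # attribute lookup sees the new binding: 10
via_copy = value              # the imported name is still the old snapshot: 0
```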
|
<python>
|
2022-12-15 08:22:51
| 5
| 343
|
ABHIJITH EA
|
74,808,528
| 8,785,163
|
How to avoid overlapping logs with ThreadPoolExecutor?
|
<p>I would like to log some information by thread. But the multi-threading code I wrote is leading to overlapping logs and this makes the logs not readable and easy-to-use.</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ThreadPoolExecutor
from io import StringIO
import logging
from pipelines import Pipeline # Class with run method doing some log info / warning
from preprocessing import Preprocess # Same
if __name__ == "__main__":
log_stream = StringIO()
logging.basicConfig(stream=log_stream, level=logging.INFO)
logger = logging.getLogger("main")
logger.info("Starting application")
# code to get all csv files in a path and store them in a `files` variable
def data_generator(files=files):
for file in files:
logger.info("Generate preprocessed data for file %s", file)
dataframe = Preprocess.load(file)
preprocessed_dataframe = Preprocess.run(dataframe)
yield preprocessed_dataframe
data = data_generator()
logger.info("Run pipeline on the available data")
with ThreadPoolExecutor(3) as pool:
results = pool.map(Pipeline.run, data)
logger.info("Got following results %s", results)
</code></pre>
<p>Some specific infos about <code>Preprocess</code>. The class is quite simple. The method <code>load</code> reads a csv using pandas. The method <code>run</code> fills some missing values.</p>
<p>Do you know a way I can use to output the logs by keeping the natural order that we can find by running the code without threading :</p>
<pre class="lang-py prettyprint-override"><code> logger.info("Run pipeline on the available data")
results = [Pipeline.run(dataframe) for dataframe in data]
logger.info("Got following results %s", results)
</code></pre>
<p>Which would output something close to that :</p>
<pre><code>INFO : Starting application
INFO : Run pipeline on the available data
INFO : Generate preprocessed data for file data/toto.csv
INFO : Loading this file
INFO : Handle missing values
INFO : Running pipeline on preprocessed data
INFO : Success
INFO : Generate preprocessed data for file data/tata.csv
INFO : Loading this file
INFO : Handle missing values
INFO : Running pipeline on preprocessed data
INFO : Success
...
...
INFO : Got following results [150, 200, ...]
</code></pre>
<p>I want to keep the logs readable. Any help would be appreciated.
Thanks</p>
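One way to keep each task's lines contiguous (a sketch of one approach, not the only option): give every worker its own in-memory handler, return the captured text alongside the result, and print the buffers sequentially in the main thread. <code>Pipeline.run</code> is replaced by a stand-in here:

```python
import io
import logging
from concurrent.futures import ThreadPoolExecutor

def run_task(name):
    buf = io.StringIO()
    handler = logging.StreamHandler(buf)
    logger = logging.getLogger(f"task.{name}")
    logger.setLevel(logging.INFO)
    logger.propagate = False            # keep lines out of the root logger
    logger.addHandler(handler)
    try:
        logger.info("Generate preprocessed data for file %s", name)
        logger.info("Running pipeline on preprocessed data")
        result = len(name)              # stand-in for Pipeline.run(...)
        logger.info("Success")
    finally:
        logger.removeHandler(handler)   # avoid duplicate handlers on reuse
    return result, buf.getvalue()

with ThreadPoolExecutor(3) as pool:
    outcomes = list(pool.map(run_task, ["data/toto.csv", "data/tata.csv"]))

for result, log_text in outcomes:       # printed sequentially, never interleaved
    print(log_text, end="")
```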
|
<python><multithreading>
|
2022-12-15 08:10:43
| 1
| 443
|
Mistapopo
|
74,808,517
| 17,782,348
|
Remove rows where at least one value is zero
|
<p>Here is my data frame</p>
<pre><code> Param1 Param2 datetime
ts
1669246574000 6.06 -1.80 22-24-11 01:36:14 UTC
1669242973000 6.50 -1.73 22-24-11 00:36:13 UTC
... ... ... ...
1668918964000 6.00 0.00 22-20-11 06:36:04 UTC
1668915364000 0.00 1.6 22-20-11 05:36:04 UTC
</code></pre>
<p>Output with removed zero values</p>
<pre><code> Param1 Param2 datetime
ts
1669246574000 6.06 -1.80 22-24-11 01:36:14 UTC
1669242973000 6.50 -1.73 22-24-11 00:36:13 UTC
</code></pre>
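A sketch of one common approach (data reproduced approximately from the question): restrict the non-zero test to the numeric columns, so the datetime column stays out of the comparison:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "Param1": [6.06, 6.50, 6.00, 0.00],
        "Param2": [-1.80, -1.73, 0.00, 1.60],
        "datetime": ["22-24-11 01:36:14", "22-24-11 00:36:13",
                     "22-20-11 06:36:04", "22-20-11 05:36:04"],
    },
    index=[1669246574000, 1669242973000, 1668918964000, 1668915364000],
)

# keep only rows where every numeric parameter is non-zero
filtered = df[(df[["Param1", "Param2"]] != 0).all(axis=1)]
```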
|
<python><pandas><dataframe>
|
2022-12-15 08:09:44
| 1
| 559
|
devaskim
|
74,808,427
| 3,352,254
|
Deploy python worker to heroku without running immediately
|
<p>I have a simply python script that I deploy on Heroku as worker:</p>
<p>Procfile:</p>
<pre><code>worker: python main.py
</code></pre>
<p>The script is scheduled to run every day at a specific time with the <code>Heroku Scheduler</code>. I don't want it to run at any other time.</p>
<p>Every time I push new changes to heroku (<code>git push heroku master</code>), the script is run automatically, which I want to avoid.</p>
<p>How can I do that?</p>
<p>I looked into using another scheduler, that is set up from within the script like the <a href="https://devcenter.heroku.com/articles/clock-processes-python" rel="nofollow noreferrer">APScheduler</a>. Would this be a solution? Do I need to change my script?</p>
<p>Thanks!</p>
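One commonly suggested arrangement (an assumption about the setup, not something verified here): scale the worker to zero dynos so a push starts nothing, while Heroku Scheduler still launches its own one-off dyno at the configured time — or remove the <code>worker:</code> line from the Procfile entirely and put <code>python main.py</code> directly in the Scheduler job's command.

```shell
# Keep the Procfile entry but never run it as a long-lived process;
# Scheduler one-off dynos are unaffected by this scale setting.
heroku ps:scale worker=0
```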
|
<python><heroku><scheduled-tasks><worker>
|
2022-12-15 07:59:50
| 1
| 825
|
smaica
|
74,808,375
| 12,097,553
|
django-dash: callback generated div id tag not found by other callback
|
<p>I have been working with plotly dash and especially django-dash for a while, and I am now facing an issue that I am not able to resolve. I am confused because I have successfully used the same structure in the past. Hopefully a pair of fresh eyes can help me see what I am messing up.</p>
<p>Here is what I have:
=> first callback acquires data from a django session and that is used to create a dropdown menu that contains some dataframe extracted values:</p>
<pre><code>@app.expanded_callback(
Output('output_two', 'children'),
[Input('dummy', 'value')]
)
def clean_data(file_path,**kwargs):
file_path = file_path
path = kwargs['request']
path = path.session.get('path')
print("path", path)
ifc_file = ifcopenshell.open(path)
work_plans = ifc_file.by_type("IfcWorKPlan")
work_plans_df = pd.DataFrame({
'id': [schedule.id() for schedule in work_plans],
'StartTime': [schedule.StartTime for schedule in work_plans],
'FinishTime': [schedule.FinishTime for schedule in work_plans],
})
work_plans_df['StartTime'] = pd.to_datetime(work_plans_df['StartTime']).dt.date
work_plans_df['Status'] = "En cours"
work_plan_id = work_plans_df['id'].unique()
return html.Div(children=[html.Div(
className='five columns',
children=[
dcc.Dropdown(
id="dropdown",
options=list({'label': Id, 'value': Id} for Id in
work_plan_id
),
value='',
),
],
),
],
)
</code></pre>
<p>Now, the second callback should use the submitted dropdown value to output something (I won't put the details of the calculations):</p>
<pre><code>@app.callback(
Output('dd-output-container', 'children'),
Input('submit-val', 'n_clicks')
,State('dropdown','value')
)
def process_workplans(path, n_clicks,value,*args,**kwargs):
if value is not None:
...#do calcs
return dt.DataTable(...)
</code></pre>
<p>and finally here is the layout that I am using:</p>
<pre><code>app.layout = html.Div([
html.Div(id="dummy"),
html.Div(id="bed_file_path", children=[], style={'display': 'none'}),
dcc.Store(id='output_one'),
# wrapper dashboard
html.Div([ # main-area
html.Div([ # row that includes everything on the same "plane"
html.Div([
html.H4("Carnet d'entretien de mon bien"), html.Hr()]),
html.Div([ # col sm-3, the entire length of the menu
html.Div([ # control the frame in the whitch the menu is displayed
html.Br(),
html.Div([
html.Div([ # this make sure the content takes the 12 spaces within the 3 sm column
html.Div(
className='twelve columns',
children=html.P(
"Sélectionner un chantier",
),
),
html.Div(id="output_two"),
], className='col-sm-12',
style={"border-left": "white solid 0.4rem", "border-right": "white solid 0.4rem"}),
html.Br(),
html.Br(),
html.A(html.Button('Afficher les chantiers', id='submit-val', n_clicks=0,
style={'border-radius': '4px', 'border': '1px solid #ff5402',
'background-color': 'white'}),
style={'margin-right': '3px', 'margin-bottom': '20px', 'margin-left': '30px'}),
html.A(html.Button('Refresh', style={'border-radius': '4px', 'border': '1px solid #ff5402',
'background-color': 'white', 'margin-bottom': '20px'}),
href='/eoq_modeling'),
html.Br(),
], className='col-sm-12',
style={"border-left": "white solid 0.4rem", "border-right": "white solid 0.4rem"}),
html.Br(),
], className='row-for-params')
], className="col-sm-3"),
html.Div([
html.Div(id='dd-output-container'),
], className="col-sm-9")
], className='row')
], className="main-area"),
], className="wrapper-dashboard")
</code></pre>
<p>The error that I am getting says that <code>dropdown</code> cannot be found.
I can see that when the page is initialized, <code>dropdown</code> is not there yet; it appears later, but the second callback is still not able to update.</p>
<p>I believe that delaying the triggering of callback 2 could be an option, but I can't find how to do that in the django-dash documentation, and Dash's <code>app.config['suppress_callback_exceptions'] = True</code> does not work with django-dash. Does anyone have an idea on how to solve this problem?</p>
|
<python><django><plotly-dash>
|
2022-12-15 07:54:27
| 1
| 1,005
|
Murcielago
|
74,808,286
| 20,731,770
|
I am trying to open an img in a tkinter window, but the pillow library is giving me an error
|
<p>I am trying to insert a photo into a tkinter window in python.
I am using the PIL (pillow) library,
but it gives me an error.</p>
<p>code:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import ImageTk, Image
my_img = ImageTk.PhotoImage(Image.open(link))
my_label = Label(window, image=my_img)
my_label.place(x=x,y=y)
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\LitterHelper\main.py", line 86, in <module>
i._img(window, 50, 150, "C:\\img\\Fries.gif")
File "C:\Users\LitterHelper\main.py", line 5, in _img
my_img = ImageTk.PhotoImage(Image.open(f))
^^^^^^^^^^
AttributeError: type object 'Image' has no attribute 'open'
</code></pre>
<p>I want the image to open in the tkinter window</p>
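The traceback's <code>type object 'Image' has no attribute 'open'</code> usually means the name <code>Image</code> was rebound after the PIL import — for example, a later <code>from tkinter import *</code> also exports a <code>tkinter.Image</code> class, which has no <code>open</code>. The shadowing pattern, sketched with stand-in classes (not the real libraries):

```python
class PILImage:                 # stand-in for PIL.Image (has .open)
    @staticmethod
    def open(path):
        return f"opened {path}"

class TkImage:                  # stand-in for tkinter's Image (no .open)
    pass

Image = PILImage                # from PIL import Image
Image = TkImage                 # a later wildcard import rebinds the name

has_open = hasattr(Image, "open")   # False: PIL's Image is shadowed
```

Importing with an alias (<code>from PIL import Image as PILImage</code>) or placing the PIL import after any <code>from tkinter import *</code> sidesteps the clash.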
|
<python><python-imaging-library>
|
2022-12-15 07:45:56
| 1
| 590
|
Adam Basha
|
74,807,987
| 16,971,617
|
How to use color science and OpenCV in Python
|
<p>Here is my code using <a href="https://github.com/colour-science/colour" rel="nofollow noreferrer">Colour</a> to do color calibration. It uses the numpy float64 type, but how can I convert back to a format that is compatible with OpenCV, ideally uint8, because Canny only works with uint8?</p>
<pre class="lang-py prettyprint-override"><code>import colour
import numpy as np
import cv2
IMAGE = cv2.imread('/Users/kelsolaar/Downloads/EKcv1.jpeg')
IMAGE = cv2.cvtColor(IMAGE, cv2.COLOR_BGR2RGB)/255
# Reference values a likely non-linear 8-bit sRGB values.
# "colour.cctf_decoding" uses the sRGB EOTF by default.
REFERENCE_RGB = colour.cctf_decoding(
np.array(
[
[240, 0, 22],
[252, 222, 10],
[30, 187, 22],
[26, 0, 165],
]
)
/ 255
)
colour.plotting.plot_multi_colour_swatches(colour.cctf_encoding(REFERENCE_RGB))
# Measured test values, the image is not properly decoded as it has a very specific ICC profile.
TEST_RGB = np.array(
[
[0.578, 0.0, 0.144],
[0.895, 0.460, 0.0],
[0.0, 0.183, 0.074],
[0.067, 0.010, 0.070],
]
)
corrected = colour.colour_correction(IMAGE, REFERENCE_RGB, TEST_RGB)
colour.plotting.plot_image(
corrected
)
</code></pre>
<p>These are some ways I found on Stack Overflow, but the uint8 image doesn't look like the corrected image:</p>
<pre class="lang-py prettyprint-override"><code>#Method 1 which works but not uint8.....
img = cv2.cvtColor(corrected.astype(np.float32), cv2.COLOR_RGB2BGR)
# When I convert to unint8, it doesn't look like the original corrected image
# Method 2
corrected *= 255
corrected = corrected.astype(np.uint8)
img = cv2.cvtColor(corrected, cv2.COLOR_RGB2BGR)
# Method 3
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
img = img.astype(np.uint8)*255
</code></pre>
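For the float-to-uint8 step specifically, the usual recipe (a sketch; the sample array is made up) is clip-then-scale: colour-corrected values can land slightly outside [0, 1], and casting without clipping wraps around, which may be why the uint8 result looks wrong:

```python
import numpy as np

corrected = np.array([[-0.02, 0.5], [0.999, 1.03]])  # hypothetical output

# clip to the valid range first, then scale and cast
as_uint8 = (np.clip(corrected, 0.0, 1.0) * 255).round().astype(np.uint8)
```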
|
<python><numpy><opencv><colors>
|
2022-12-15 07:18:01
| 2
| 539
|
user16971617
|
74,807,729
| 13,583,510
|
isalpha giving True for some Sinhala words
|
<p>I'm trying to check if a sentence only has Sinhala words (they can be nonsense words as long as they are written in Sinhala). Sometimes there can be English words mixed in with the Sinhala words. The thing is, some Sinhala words come back <code>False</code> from <code>isalpha()</code>, giving incorrect results in my classification.</p>
<p>for example I did something like this.</p>
<pre class="lang-py prettyprint-override"><code>for i in ['මට', 'කෑම', 'කන්න', 'ඕන']:
print(i.isalpha())
</code></pre>
<p>gives</p>
<pre><code>True
False
False
True
</code></pre>
<p>Is there a way to overcome this?</p>
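What trips <code>isalpha()</code> here is that Sinhala vowel signs and the al-lakuna are combining marks, and combining marks don't count as alphabetic. Checking the code points of the Sinhala Unicode block instead is one workaround (a sketch — it accepts any code point in that block, including digits and signs):

```python
def is_sinhala_word(text):
    # Every code point must fall inside the Sinhala block (U+0D80-U+0DFF).
    return bool(text) and all("\u0D80" <= ch <= "\u0DFF" for ch in text)

words = ["මට", "කෑම", "කන්න", "ඕන"]
results = [is_sinhala_word(w) for w in words]   # all True
mixed = is_sinhala_word("hello")                # False
```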
|
<python><unicode><utf-8><isalpha>
|
2022-12-15 06:48:17
| 3
| 10,345
|
cmgchess
|
74,807,667
| 386,861
|
UnboundLocalError: local variable var referenced before assignment - how to debug
|
<p>I'm working through a tutorial based on this and have hit this error.</p>
<p>I am trying to work up my skill in Python and am not sure how to debug this because the error confuses me.</p>
<pre><code>import random
MAX_LINES = 3
MAX_BET = 100
MIN_BET = 1
ROWS = 3
COLS = 3
symbol_count = {
"A": 2,
"B": 4,
"C": 6,
"D":8
}
symbol_value = {
"A": 5,
"B": 4,
"C": 3,
"D":2
}
def check_winnings(columns, lines, bet, values):
winnings - 0
winning_lines = []
for line in range(lines):
symbol = columns[0][line]
for column in columns:
symbol_to_check = column[line]
if symbol != symbol_to_check:
break
else:
winnings += values[symbol] * bet
winning_lines.append(lines +1) # Add one because index starts at 0 and it needs to start at 1
return winnings, winning_lines
def get_slot_machine_spin(rows, cols, symbols):
all_symbols = []
for symbol, symbol_count in symbols.items():
for _ in range(symbol_count): #The _ is an anonymous value
all_symbols.append(symbol)
#print(all_symbols)
columns = []
for _ in range(cols):
column = []
current_symbols = all_symbols[:]# This creates a copy of all symbols
for _ in range(rows):
value = random.choice(current_symbols)
current_symbols.remove(value)
column.append(value)
columns.append(column)
return columns
def print_slot_machine(columns):
for row in range(len(columns[0])):
for i,column in enumerate(columns):
if i != len(columns) - 1:
print(column[row], end = " | ")
else:
print(column[row], end="")
print()
def deposit():
while True:
amount = input("What would you like to deposit?")
if amount.isdigit():
amount = int(amount)
if amount > 0:
break
else:
print("Amount must be greater than zero.")
else:
print("Please enter a number.")
return amount
def get_number_of_lines():
while True:
lines = input("Enter the number of lines to bet on (1-" + str(MAX_LINES) + "? ")
if lines.isdigit():
lines = int(lines)
if 1 <= lines <= MAX_LINES:
break
else:
print("Enter a valid number of lines.")
else:
print("Please enter a number.")
return lines
def get_bet():
while True:
amount = input("How much would you like to bet on each line?")
if amount.isdigit():
amount = int(amount)
if MIN_BET <= amount <= MAX_BET:
break
else:
print(f"Amount must be between {MIN_BET} - {MAX_BET}.")
else:
print("Please enter a number.")
return amount
def main():
balance = deposit()
lines = get_number_of_lines()
while True:
bet = get_bet()
total_bet = bet * lines
if total_bet > balance:
print (f"You do not have enough cash to pay the bet from your balance. Your current balance is {balance}")
else:
break
slots = get_slot_machine_spin (ROWS, COLS, symbol_count)
print_slot_machine(slots)
winnings, winning_lines = check_winnings(slots, lines, bet, symbol_value)
print(f"You won {winnings}")
print(f"You won on", *winning_lines) #splat or unpack operator
return
main()
</code></pre>
<p>The output at runtime is:</p>
<pre><code>What would you like to deposit?100
Enter the number of lines to bet on (1-3? 3
How much would you like to bet on each line?3
A | B | D
B | C | C
D | A | C
</code></pre>
<p>But then I get this traceback...</p>
<pre><code>UnboundLocalError Traceback (most recent call last)
/var/folders/v_/yq26pm194xj5ckqy8p_njwc00000gn/T/ipykernel_54333/4159941939.py in <module>
122 return
123
--> 124 main()
/var/folders/v_/yq26pm194xj5ckqy8p_njwc00000gn/T/ipykernel_54333/4159941939.py in main()
117 slots = get_slot_machine_spin (ROWS, COLS, symbol_count)
118 print_slot_machine(slots)
--> 119 winnings, winning_lines = check_winnings(slots, lines, bet, symbol_value)
120 print(f"You won {winnings}")
121 print(f"You won on", *winning_lines) #splat or unpack operator
/var/folders/v_/yq26pm194xj5ckqy8p_njwc00000gn/T/ipykernel_54333/4159941939.py in check_winnings(columns, lines, bet, values)
23
24 def check_winnings(columns, lines, bet, values):
---> 25 winnings - 0
26 winning_lines = []
27 for line in range(lines):
UnboundLocalError: local variable 'winnings' referenced before assignment
</code></pre>
<p>How do I unpack this?</p>
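<p>For reference: the traceback points at <code>winnings - 0</code>, which evaluates a subtraction instead of assigning. A minimal corrected sketch of just that function (also recording <code>line + 1</code> rather than <code>lines + 1</code>, so the reported number differs per winning line):</p>

```python
def check_winnings(columns, lines, bet, values):
    winnings = 0          # '=' assigns; 'winnings - 0' only computed a value
    winning_lines = []
    for line in range(lines):
        symbol = columns[0][line]
        for column in columns:
            if column[line] != symbol:
                break
        else:
            winnings += values[symbol] * bet
            winning_lines.append(line + 1)  # 1-based number of this winning line
    return winnings, winning_lines

# Row 0 is all "A": betting 3 per line on 2 lines pays 5 * 3 = 15 on line 1
columns = [["A", "B"], ["A", "C"], ["A", "D"]]
print(check_winnings(columns, 2, 3, {"A": 5, "B": 4, "C": 3, "D": 2}))  # (15, [1])
```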
|
<python>
|
2022-12-15 06:40:31
| 1
| 7,882
|
elksie5000
|
74,807,602
| 317,460
|
How to "diagram as code" zoomable block diagram in Python, or with another tool Python can interact with?
|
<p>Is there a package in Python (or JavaScript) that can generate a GUI like in the image below? Allowing to:</p>
<ol>
<li>Generate block diagrams as code</li>
<li>Zoom into\out-of blocks interactively.</li>
</ol>
<p>There are packages that support "Diagrams as code" in Python (<a href="https://github.com/mingrammer/diagrams" rel="nofollow noreferrer">Diagrams</a>, <a href="https://pypi.org/project/blockdiag/" rel="nofollow noreferrer">BlockDiag</a>), but These show the entire diagram at once and are not interactive.</p>
<p>I describe below the behavior I want to support.</p>
<blockquote>
<p>I want to describe a router that contains components and in turn these
components can contain additional sub components. In the high level of
the diagram (top left corner) I can click on "Router A" and reveal its
contents. I can continue and click on "Protocol 1 Handler" (bottom
left corner) to zoom in again and get the view of the logic inside
that block (Right bottom corner).</p>
<p>In addition, clicking on the arrow will display details about all the
interfaces between the routers (right top corner).</p>
</blockquote>
<p><a href="https://i.sstatic.net/WDA7u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WDA7u.png" alt="enter image description here" /></a></p>
|
<python><diagram>
|
2022-12-15 06:32:39
| 0
| 3,627
|
RaamEE
|
74,807,493
| 10,976,654
|
How to use pytest mark parametrize with functions with multiple arguments
|
<p>How can I use the values from an imported CSV in pytest? The functions take multiple arguments (mix of keyword and positional). I looked at @pytest.mark.parametrize but I didn't understand how to use multiple arguments.</p>
<p>I am importing using np.genfromtxt, but here is an MRE with the column names:</p>
<pre><code>import pytest
import numpy as np
import numpy.lib.recfunctions as rf
def squared(x, *, y):
return (x + y) ** 2
def cubed(x, *, y):
return (x + y) ** 3
mycsv = np.array(([1, 2, 3], [5, 5, 5], [36, 49, 64], [216, 343, 512]))
dtypes = [("a", "f8"), ("b", "f8"), ("f_squared", "f8"), ("f_cubed", "f8")]
mycsv = rf.unstructured_to_structured(mycsv, dtype=np.dtype(dtypes))
## do this for all rows in mycsv
def test_squared():
# how do extract 'a' and 'b' from mycsv["a"] and mycsv["b"]
# expected is in 'f_squared' how to get this from mycsv["f_squared"]
computed = squared(x=a, y=b)
msg = f"""Failed for x {a} at y {b},
Expected {expected}, computed {computed}."""
np.testing.assert_allclose(expected, computed, rtol=2e-2, err_msg=msg)
</code></pre>
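<p>For what it's worth, a minimal sketch of how <code>@pytest.mark.parametrize</code> takes multiple arguments: each test case is a tuple whose elements are bound to the comma-separated names, so rows of a CSV (or of a structured array like the one above) can be zipped into such tuples first. The concrete values here are illustrative:</p>

```python
import pytest

# Each tuple supplies one (a, b, expected) case; in practice these could be
# built as list(zip(mycsv["a"], mycsv["b"], mycsv["f_squared"]))
cases = [(1, 5, 36), (2, 5, 49), (3, 5, 64)]

@pytest.mark.parametrize("a, b, expected", cases)
def test_squared(a, b, expected):
    assert (a + b) ** 2 == expected
```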
|
<python><pytest>
|
2022-12-15 06:15:45
| 1
| 3,476
|
a11
|
74,807,372
| 3,247,006
|
How to run "SELECT FOR UPDATE" instead of "SELECT" when changing and deleting data in Django Admin?
|
<p>I have the code below:</p>
<pre class="lang-py prettyprint-override"><code># "store/models.py"
from django.db import models
class Person(models.Model):
name = models.CharField(max_length=30)
</code></pre>
<pre class="lang-py prettyprint-override"><code># "store/admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
pass
</code></pre>
<p>Then, when changing data as shown below:</p>
<p><a href="https://i.sstatic.net/UneCL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UneCL.png" alt="enter image description here" /></a></p>
<p><code>SELECT</code> is run instead of <code>SELECT FOR UPDATE</code> as shown below. *I use <strong>PostgreSQL</strong> and these logs below are <strong>the queries of PostgreSQL</strong> and you can check <a href="https://stackoverflow.com/questions/54780698/postgresql-database-log-transaction/73432601#73432601"><strong>On PostgreSQL, how to log queries with transaction queries such as "BEGIN" and "COMMIT"</strong></a>:</p>
<p><a href="https://i.sstatic.net/6G0F6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6G0F6.png" alt="enter image description here" /></a></p>
<p>And, when clicking <strong><code>Delete</code> button</strong> of <strong>Change person</strong> as shown below:</p>
<p><a href="https://i.sstatic.net/yTUsn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yTUsn.png" alt="enter image description here" /></a></p>
<p>Then clicking <strong><code>Yes, I'm sure</code> button</strong> to delete data as shown below:</p>
<p><a href="https://i.sstatic.net/57fse.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/57fse.png" alt="enter image description here" /></a></p>
<p><code>SELECT</code> is run instead of <code>SELECT FOR UPDATE</code> as shown below:</p>
<p><a href="https://i.sstatic.net/UJ464.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UJ464.png" alt="enter image description here" /></a></p>
<p>Now, I want to run <code>SELECT FOR UPDATE</code> instead of <code>SELECT</code> for both cases as shown above.</p>
<p>So, how can I do this?</p>
|
<python><python-3.x><django><django-admin><select-for-update>
|
2022-12-15 05:59:49
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
74,807,332
| 6,628,988
|
Not able to import module from other directory using python?
|
<p>Please find my project structure attached as image below</p>
<p><a href="https://i.sstatic.net/n2xQS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n2xQS.png" alt="enter image description here" /></a></p>
<p>I'm running jobs/my_job.py from the project root directory, i.e. from sample_project.
When I run jobs/my_job.py using <code>python .\jobs\my_job.py</code> I get the following error:</p>
<pre><code>Traceback (most recent call last):
File ".\jobs\my_job.py", line 1, in <module>
from dependencies.spark import test
ModuleNotFoundError: No module named 'dependencies'
</code></pre>
<p>This is the code I'm running</p>
<pre><code>from dependencies.spark import test

def main():
test()
</code></pre>
<p>Since I'm running the code from root directory, why I'm getting that error?</p>
<p>Please find a screenshot of the whole project:
<a href="https://i.sstatic.net/MdORk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MdORk.png" alt="enter image description here" /></a></p>
|
<python><python-3.x>
|
2022-12-15 05:54:46
| 1
| 430
|
Saranraj K
|
74,807,014
| 2,463,341
|
ImportError: pycurl: libcurl link-time version (7.79.1) is older than compile-time version (7.84.0)
|
<p>I know this question has been asked before, but none of the solutions seem to be working for me. I am running this on Mac OS Monterey (12.6.1). Installed Python 3.7.12 (using pyenv) and using Poetry as a dependency manager.</p>
<p>I did the following steps after activating & deactivating the virtual environment using poetry (<code>source $(poetry env info --path)/bin/activate</code>)</p>
<ol>
<li>Removed existing pycurl using brew</li>
<li>Installed openssl (<code>brew install openssl</code>)</li>
<li>Verified ssl installation directory</li>
</ol>
<pre><code> brew --prefix openssl
/opt/homebrew/opt/openssl@3
$ ls -la /usr/local/opt/openssl@3
lrwxr-xr-x 1 xxxxxx admin 25 14 Dec 22:57 /opt/homebrew/opt/openssl@3 -> ../Cellar/openssl@3/3.0.7
</code></pre>
<ol start="4">
<li>Install PyCurl specifying inline the above <code>openssl</code> install directories like this</li>
</ol>
<pre><code> PYCURL_SSL_LIBRARY=openssl LDFLAGS="-L/opt/homebrew/opt/openssl@3/lib" CPPFLAGS="-I/opt/homebrew/opt/openssl@3/include" pip install --no-cache-dir pycurl
</code></pre>
<p>PyCurl installs without any problem but when I try to import pyCurl I get the following error message.</p>
<pre><code>Python 3.7.12 (default, Dec 14 2022, 22:00:27)
[Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pycurl
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: pycurl: libcurl link-time version (7.79.1) is older than compile-time version (7.86.0)
>>> quit()
</code></pre>
<p>PS:</p>
<ol>
<li><p><code>poetry env info --path</code> does return the correct poetry virtual env path.</p>
</li>
<li><p>I tried switching my Python version to 3.8.X and 3.9.X and still having the same issue.</p>
</li>
<li><p>I also updated my curl version to 7.86 using brew and doing a <code>brew --version</code> does conform the same.</p>
</li>
</ol>
<pre><code>$ curl --version
curl 7.86.0 (aarch64-apple-darwin21.6.0) libcurl/7.86.0 (SecureTransport) OpenSSL/1.1.1s zlib/1.2.11 brotli/1.0.9 zstd/1.5.2 libidn2/2.3.4 libssh2/1.10.0 nghttp2/1.51.0 librtmp/2.3
Release-Date: 2022-10-26
Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL threadsafe TLS-SRP UnixSockets zstd
</code></pre>
<p>What am I missing from the above steps? Thank you in advance for your time... :)</p>
|
<python><macos><pip><openssl><python-poetry>
|
2022-12-15 05:03:25
| 2
| 499
|
Vinnie
|
74,806,983
| 219,976
|
Custom Authentication in Django Rest Framework
|
<p>I have a Django REST framework application with a custom authentication scheme implemented. Now I want to allow an external app to call some methods of my application.
There's an endpoint, /external-app-login, for the external app to log in, which is implemented like this:</p>
<pre><code>class ExternalAppLoginView(views.APIView):
def post(self, request):
if request.data.get('username') == EXTERNAL_APP_USER_NAME and request.data.get('password') == EXTERNAL_APP_PASSWORD:
user = models.User.objects.get(username=username)
login(request, user)
return http.HttpResponse(status=200)
return http.HttpResponse(status=401)
</code></pre>
<p>Now I want to add authentication. I implemented it like this:</p>
<pre><code>class ExternalAppAuthentication(authentication.SessionAuthentication):
def authenticate(self, request):
return super().authenticate(request)
</code></pre>
<p>But authentication fails all the time. What is the correct way to do it? I want to store the login/password of the external app in variables in the application, not in the database.</p>
|
<python><django><authentication><django-rest-framework>
|
2022-12-15 04:57:45
| 1
| 6,657
|
StuffHappens
|
74,806,952
| 3,121,975
|
Why is multiplication faster than bitshift
|
<p>I recently posted an answer where I suggested using bitshift instead of multiplication as a performance boost. It was pointed out to me that this isn't the case with the following example:</p>
<pre><code>from timeit import repeat
for e in ['x*2 ', 'x<<1'] * 3:
print(e, min(repeat(e, 'x=5')))
x*2 0.015567475988063961
x<<1 0.024531989998649806
x*2 0.01551242297864519
x<<1 0.024578287004260346
x*2 0.015560572996037081
x<<1 0.02448918900336139
</code></pre>
<p>This is the case for values of <code>x</code> up to 1,000,000,000. Note that this value decreases as the value by which <code>x</code> is being multiplied/bit-shifted increases as well.</p>
<p>This doesn't make sense to me as bitshift is objectively a simpler and faster operation. And we can see this as the bitshift speeds up as <code>x</code> grows. So, why is it slower for smaller values of <code>x</code>?</p>
<p>Moreover, changing my code up a bit yielded some interesting results:</p>
<pre><code>for e in ['x*2 ', 'x<<1'] * 3:
print(e, max(repeat(e, 'x=5')))
x*2 0.054458492988487706
x<<1 0.02453691599657759
x*2 0.015550968993920833
x<<1 0.0246038619952742
x*2 0.015542584005743265
x<<1 0.024583352991612628
</code></pre>
<p>As we can see from this, the usage of multiplication ran much slower than bitshift on the initial pass, although all subsequent usages had comparable times. It looks like there's some sort of caching operation going on here but I can't fathom why that would result in different runtimes.</p>
|
<python>
|
2022-12-15 04:52:21
| 0
| 8,192
|
Woody1193
|
74,806,886
| 7,690,767
|
Use standard library or 3rd party to conveniently uncurry Python functions
|
<p>I am experimenting with Functional Programming in Python and I usually end up needing a way to uncurry functions.</p>
<p>I solve this issue by using:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, TypeVar
T = TypeVar("T")
R = TypeVar("R")
def uncurry(function: Callable[[T], Callable[[T], R] | R]) -> Callable[[T], R]:
def uncurry_step(*args: T) -> R:
result = function
for arg in args:
result = result(arg)
return result
return uncurry_step
</code></pre>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>def addition(a: float) -> Callable[[float], float]:
def inner(b: float) -> float:
return a + b
return inner
assert addition(5)(5) == 10
assert uncurry(addition)(5, 5) == 10
</code></pre>
<p>Is there a way to replace the user-defined <code>uncurry</code> with something provided within the standard library or a 3rd party like <code>toolz</code>?</p>
<p>Note: This is for educational purposes so performance is not relevant.</p>
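<p>As far as I know there is no ready-made <code>uncurry</code> in the standard library or in <code>toolz</code>, but the hand-written loop collapses to a one-liner with <code>functools.reduce</code> (a sketch, not a drop-in from any library):</p>

```python
from functools import reduce
from typing import Callable

def uncurry(function: Callable) -> Callable:
    # Feed each positional argument into the chain of single-argument calls
    def uncurry_step(*args):
        return reduce(lambda f, arg: f(arg), args, function)
    return uncurry_step

def addition(a: float) -> Callable[[float], float]:
    def inner(b: float) -> float:
        return a + b
    return inner

assert uncurry(addition)(5, 5) == 10
```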
|
<python><functional-programming><currying><toolz>
|
2022-12-15 04:39:49
| 0
| 503
|
Ezequiel Castaño
|
74,806,815
| 13,560,598
|
subclassing the builtin enumerate in python
|
<p>Consider the code below. I am trying to subclass the builtin <code>enumerate</code> so that it prints a line for every turn of the for loop. The code seems to be working, which is surprising, because I have never called <code>super().__init__(x)</code>. So, what is happening here? Who is initializing the base class <code>enumerate</code> the right way? Is there some magic from the <code>__new__</code> method happening here?</p>
<pre><code>class myenum(enumerate):
def __init__(self,x):
self.x_ = x
self.len_ = len(x)
def __next__(self):
out = super().__next__()
print(f'Doing {out[0]} of {self.len_}')
return out
for ictr, key in myenum(['a','b','c','d','e']):
print('Working...')
</code></pre>
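<p>A stripped-down sketch of the behaviour in question: for builtins like <code>enumerate</code>, the iterable is consumed by <code>__new__</code> during object construction, before <code>__init__</code> ever runs, so the base class is already fully set up without any <code>super().__init__()</code> call:</p>

```python
class Demo(enumerate):
    def __init__(self, x):
        # enumerate.__new__(cls, x) has already stored the iterator on self;
        # __init__ receives the same argument and may use or ignore it
        self.len_ = len(x)

d = Demo("abc")
print(d.len_)    # 3
print(list(d))   # [(0, 'a'), (1, 'b'), (2, 'c')]
```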
|
<python><built-in><enumerate>
|
2022-12-15 04:25:41
| 1
| 593
|
NNN
|
74,806,803
| 1,686,236
|
Pandas-generated .tar.gz can't be extracted
|
<p>I have saved a pandas DataFrame to disk as a compressed .tar.gz file, using <code>data.to_csv(path+'file.csv.tar.gz', compression='infer')</code>, which seems to work fine. However, when I later try to extract the file using (on Ubuntu) <code>tar -xzvf file.csv.tar.gz</code>, I get the messages <code>tar: This does not look like a tar archive</code>, <code>tar: Skipping to next header</code>, and <code>tar: Exiting with failure status due to previous errors</code>. Why am I unable to extract this archive? Thanks.</p>
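<p>For context, a small sketch of what is likely happening: with <code>compression='infer'</code>, pandas keys off the <code>.gz</code> suffix and writes a plain gzip stream of the CSV, not a tar archive wrapping it, so <code>gunzip</code> works where <code>tar -xzvf</code> does not. Naming the file <code>.csv.gz</code> avoids the confusion:</p>

```python
import gzip
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3]})
path = os.path.join(tempfile.mkdtemp(), "file.csv.gz")  # '.csv.gz', not '.csv.tar.gz'
df.to_csv(path, index=False, compression="infer")

# The output is a single gzip stream, so the gzip module (or gunzip) reads it back
with gzip.open(path, "rt") as f:
    print(f.read().splitlines()[0])  # header line: 'x'
```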
|
<python><pandas><tar>
|
2022-12-15 04:23:27
| 0
| 2,631
|
Dr. Andrew
|
74,806,767
| 12,149,817
|
How to use a list as an argument of a function in python?
|
<p>I have a function that finds similarity between columns of two dataframes:</p>
<pre><code>def jac_sim_df(df1, df2, thresh):
L = []
for col in df1.columns:
js_list = []
genes1 = df1.loc[df1[col] >= 2,:].index #get DEGs for each column in df1
for column in df2.columns:
genes2 = df2.loc[df2[column] >= thresh,:].index #get genes with values higher than a threshold
js = jaccard_similarity(genes1, genes2) #calculate jaccard similarity for genes1 and genes2
js_list.append(js)
L.append(js_list)
df = pd.DataFrame(L)
return(df)
</code></pre>
<p>I want to vary threshold to see how it can affect the similarity between two dataframes.</p>
<p>Is there a way to apply this function to two dataframes df1 and df2 and a list of thresholds?</p>
<pre><code>df1 = pd.DataFrame(np.random.randint(0,100,size=(100, 14)), columns=range(1,15))
df2 = pd.DataFrame(np.random.rand(100, 14), columns=range(1,15))
</code></pre>
<p>Thresholds values can be like this:</p>
<pre class="lang-py prettyprint-override"><code>thresh = [x / 1000 for x in range(1, 10)]
</code></pre>
<p>jaccard_similarity function:</p>
<pre><code>def jaccard_similarity(list1, list2):
s1 = set(list1)
s2 = set(list2)
return float(len(s1.intersection(s2)) / len(s1.union(s2)))
</code></pre>
<p>the outcome should be multiple dataframes df, number of dfs = number of threshold values</p>
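<p>One straightforward approach (a sketch, reusing the functions above) is a dict comprehension that maps each threshold to its own result DataFrame:</p>

```python
import numpy as np
import pandas as pd

def jaccard_similarity(list1, list2):
    s1, s2 = set(list1), set(list2)
    return len(s1 & s2) / len(s1 | s2)

def jac_sim_df(df1, df2, thresh):
    L = []
    for col in df1.columns:
        genes1 = df1.loc[df1[col] >= 2, :].index
        L.append([jaccard_similarity(genes1, df2.loc[df2[c] >= thresh, :].index)
                  for c in df2.columns])
    return pd.DataFrame(L)

df1 = pd.DataFrame(np.random.randint(0, 100, size=(100, 14)), columns=range(1, 15))
df2 = pd.DataFrame(np.random.rand(100, 14), columns=range(1, 15))
thresh = [x / 1000 for x in range(1, 10)]

# One result DataFrame per threshold value, keyed by that threshold
results = {t: jac_sim_df(df1, df2, t) for t in thresh}
```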
|
<python>
|
2022-12-15 04:16:09
| 2
| 720
|
Yulia Kentieva
|
74,806,697
| 15,299,206
|
How to update list of dictionary in dynamodb using boto3 with Conditional check
|
<p>I have dynamodb</p>
<ul>
<li><code>groupId</code> is partition key</li>
<li><code>empdetails</code> is list</li>
<li><code>dataDetails</code> is the attribute name which i need to update</li>
<li>if the list is not there it has to be created, but if it's there then I need to update the dictionary</li>
</ul>
<p>I need to append new dictionaries to the dataDetails list of dictionaries if and only if <code>'empId': 1001</code> is not already in dataDetails</p>
<p>Code is below</p>
<pre><code>new_obj = {'groupId':101, 'empId': 1001, 'empName': 'Joe'}
</code></pre>
<p>In my dataDetails list <code>[{'empId': 1002, 'empName': 'Fra'}]</code> is already there</p>
<p>So current dataDetails list is [ {'empId': 1002, 'empName': 'Fra'}, {'empId': 1001, 'empName': 'Joe'}]</p>
<pre><code>dynamodb_resource = boto3.resource('dynamodb')
table = dynamodb_resource.Table('myTable')
groupId = new_object['groupId']
data_details = {"empId": new_object['empId'],
"empName": new_object['empName']}
result = table.update_item(
Key={
'groupId': groupId ,
},
UpdateExpression=
'SET dataDetails = list_append(if_not_exists(dataDetails, :empty_list), :data_details)',
ExpressionAttributeValues={
':data_details': [data_details], ':empty_list': []
},
ReturnValues="UPDATED_NEW"
)
</code></pre>
<p>I got an error "errorMessage": "An error occurred (ValidationException) when calling the UpdateItem operation: Invalid UpdateExpression: An expression attribute value used in expression is not defined; attribute value: :empty_list",</p>
<p>After a lot of trials I found the solution: <code>data_details</code> should be wrapped in a list, i.e. <code>[data_details]</code>. I have updated the code above for reference.</p>
<p>Is there any conditional check I can do, like: if <code>'empId': 1001</code> is already in dataDetails, then it should not be appended to the list?</p>
|
<python><amazon-dynamodb><boto3>
|
2022-12-15 04:00:14
| 1
| 488
|
sim
|
74,806,619
| 4,451,521
|
Plotting list of tuples (both plot and scatter)
|
<p>I would like to plot a list of tuples both as lines and as points, so I have:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
lista=[(0,1,3),(1,4,5),(2,0,2),(3,5,10),(4,3,7)]
#lista=[(0,1),(1,4),(2,0),(3,5),(4,3)]
plt.scatter(*zip(*lista))
#plt.plot(*zip(*lista)) #<--- this works
plt.show()
</code></pre>
<p>I followed the advice on <a href="https://stackoverflow.com/a/18458953/4451521">this answer</a></p>
<p>However, when I plot as a line it works but when I do as scatter, it shows only the first values (0,1) and not the second value (0,3)</p>
<p>How can I make scatter also plot the two series of data?</p>
<p>Note: the first series consists of the first and second values in each tuple, and the second series of the first and third values.</p>
<p>This works
<a href="https://i.sstatic.net/oJUb0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJUb0.png" alt="works" /></a></p>
<p>This does not work</p>
<p><a href="https://i.sstatic.net/MpYFo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MpYFo.png" alt="donotwork" /></a></p>
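<p>For anyone hitting the same thing: with three-element tuples, <code>zip(*lista)</code> produces three sequences, and <code>scatter</code>'s third positional parameter is <code>s</code> (marker size), not a second y-series, whereas <code>plot</code> happens to draw the third array as another line. A sketch that scatters both series explicitly:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

lista = [(0, 1, 3), (1, 4, 5), (2, 0, 2), (3, 5, 10), (4, 3, 7)]
x, y1, y2 = zip(*lista)

# One scatter call per series; scatter(x, y1, y2) would treat y2 as marker sizes
plt.scatter(x, y1)
plt.scatter(x, y2)
# plt.show() / plt.savefig(...) as usual
```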
|
<python><matplotlib>
|
2022-12-15 03:45:23
| 1
| 10,576
|
KansaiRobot
|
74,806,560
| 1,447,953
|
Assign coordinate to dimension in xarray
|
<p>Suppose I have the following DataSet:</p>
<pre><code>>>> coords = {"coords": ("x", [10, 20, 30, 40])}
>>> dset = xr.Dataset(coords=coords)
>>> dset
<xarray.Dataset>
Dimensions: (x: 4)
Coordinates:
coords (x) int64 10 20 30 40
Dimensions without coordinates: x
Data variables:
*empty*
</code></pre>
<p>How can I modify the dataset such that xarray knows to assign 'coords' to the dimension 'x'? I.e. I do not want 'x' to be a dimension without coordinates, the coordinates are supposed to be 'coords'.</p>
<p>I'd like to know how to achieve this in these two ways:</p>
<ol>
<li>By modifying how the dataset is created</li>
<li>By post-hoc "fixing" the existing dataset</li>
</ol>
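<p>A sketch of both routes (exact repr details vary by xarray version): a coordinate indexes a dimension when it is attached as that dimension's coordinate, either by naming it after the dimension at creation time or, post hoc, by swapping it in with <code>swap_dims</code>:</p>

```python
import xarray as xr

# 1. At creation: name the coordinate after the dimension
dset1 = xr.Dataset(coords={"x": [10, 20, 30, 40]})

# 2. Post hoc: promote the existing 'coords' variable to the dimension coordinate
#    (the dimension itself is renamed from 'x' to 'coords' in the process)
dset2 = xr.Dataset(coords={"coords": ("x", [10, 20, 30, 40])})
dset2 = dset2.swap_dims({"x": "coords"})
```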
|
<python><python-xarray>
|
2022-12-15 03:34:00
| 1
| 2,974
|
Ben Farmer
|
74,806,078
| 5,562,092
|
add/remove security group rules to existing SG in pulumi python
|
<pre><code>db_sg = ec2.get_security_group(id="sg-number")
ec2.SecurityGroupRule(
"db-ingress",
type="ingress",
description= "allow tcp to db",
protocol="tcp",
to_port= 5432,
from_port= 5432,
security_group_id = db_sg.id,
)
ec2.SecurityGroupRule(
"allow-db-egress",
type="egress",
description= "allow tcp out of db",
protocol="tcp",
to_port= 0,
from_port= 0,
security_group_id = db_sg.id,
)
</code></pre>
<p>Pretty basic.
I'm looking to get an existing security group and add rules to it.
I can do this from the AWS console but can't do it programmatically with Pulumi.</p>
<p>Thanks in advance.</p>
|
<python><pulumi>
|
2022-12-15 01:58:18
| 1
| 875
|
A H Bensiali
|
74,806,043
| 5,067,401
|
Issues with getting cuda to work on torch, importing torch and cuda modules into Python
|
<p>I'm having some basic issues running the torch and cuda modules in my Python script.</p>
<p>I think that this has something to do with the different versions of Python that I have installed. I have two versions of Python installed:</p>
<p><a href="https://i.sstatic.net/5DLNL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5DLNL.png" alt="enter image description here" /></a></p>
<p>I think I have torch and cuda installed for the wrong one or something. I don't know how to fix this.</p>
<p>Per the Pytorch website, I installed torch as follows:
<code>pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117</code></p>
<p>I've installed cuda as follows: <code>pip3 install cuda-python==11.7</code> (It doesn't matter if I use the newest version -- I still get the same results.)</p>
<p>When I look up the versions, I get the following:
<a href="https://i.sstatic.net/9UPgs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9UPgs.png" alt="enter image description here" /></a></p>
<p>So it seems like it is all installed correctly.</p>
<p>However, if I run the following:</p>
<pre><code>import torch
print(torch.cuda.is_available())
</code></pre>
<p>I get <code>False</code>.</p>
<p>If I try to run my code that uses torch, I get the following error:
<a href="https://i.sstatic.net/ASjU4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ASjU4.png" alt="enter image description here" /></a></p>
<p>I don't get what I'm doing wrong. As far as I can tell, I've installed torch with CUDA enabled. So I'm not sure why it's telling me otherwise.</p>
<p>Any ideas? Thanks for any help.</p>
<p>Edit: Here is the info for my GPU:
<a href="https://i.sstatic.net/ECsum.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ECsum.png" alt="enter image description here" /></a></p>
|
<python><pip><pytorch><cuda>
|
2022-12-15 01:50:37
| 1
| 473
|
ejn
|
74,806,038
| 10,976,654
|
Column stacking nested numpy structure array, help getting dims right
|
<p>I'm trying to create a nested record array, but I am having trouble with the dimensions. I tried following the example at <a href="https://stackoverflow.com/questions/19201868/how-to-set-dtype-for-nested-numpy-ndarray">how to set dtype for nested numpy ndarray?</a>, but I am misunderstanding something. Below is an MRE. The arrays are generated in a script, not imported from CSV.</p>
<pre><code>arr1 = np.array([4, 5, 4, 5])
arr2 = np.array([0, 0, -1, -1])
arr3 = np.array([0.51, 0.89, 0.59, 0.94])
arr4 = np.array(
[[0.52, 0.80, 0.62, 1.1], [0.41, 0.71, 0.46, 0.77], [0.68, 1.12, 0.78, 1.19]]
).T
arr5 = np.repeat(np.array([0.6, 0.2, 0.2]), 4).reshape(3, 4).T
arrs = (arr1, arr2, arr3, arr4, arr5)
for i in arrs:
print(i.shape, i)
</code></pre>
<p>For which the print statement returns:</p>
<pre><code>(4,) [4 5 4 5]
(4,) [ 0 0 -1 -1]
(4,) [0.51 0.89 0.59 0.94]
(4, 3) [[0.52 0.41 0.68]
[0.8 0.71 1.12]
[0.62 0.46 0.78]
[1.1 0.77 1.19]]
(4, 3) [[0.6 0.2 0.2]
[0.6 0.2 0.2]
[0.6 0.2 0.2]
[0.6 0.2 0.2]]
</code></pre>
<p>However, the <code>ans</code> line throws an error:</p>
<pre><code>dtypes = [
("state", "f8"),
("variability", "f8"),
("target", "f8"),
("measured", [("mean", "f8"), ("low", "f8"), ("hi", "f8")], (4,)),
("var", [("mid", "f8"), ("low", "f8"), ("hi", "f8")], (4,)),
]
ans = np.column_stack(arrs).view(dtype=dtypes)
</code></pre>
<p><code>ValueError: When changing to a larger dtype, its size must be a divisor of the total size in bytes of the last axis of the array.</code></p>
<p><strong>Problem 1: How do I get the desired array output?</strong>
<code>print(np.column_stack(arrs))</code> returns</p>
<pre><code>[[ 4. 0. 0.51 0.52 0.41 0.68 0.6 0.2 0.2 ]
[ 5. 0. 0.89 0.8 0.71 1.12 0.6 0.2 0.2 ]
[ 4. -1. 0.59 0.62 0.46 0.78 0.6 0.2 0.2 ]
[ 5. -1. 0.94 1.1 0.77 1.19 0.6 0.2 0.2 ]]
</code></pre>
<p>But the desired output looks like this:</p>
<pre><code>[[4 0 0.51 (0.52, 0.41, 0.68) (0.6, 0.2, 0.2)]
[5 -1 0.89 (0.8, 0.71, 1.12) (0.6, 0.2, 0.2)]
[4 0 0.59 (0.62, 0.46, 0.78) (0.6, 0.2, 0.2)]
[5 -1 0.94 (1.1, 0.77, 1.19) (0.6, 0.2, 0.2)]]
</code></pre>
<p><strong>Problem 2: How do I include the dtype.names?</strong></p>
<p><code>print(rec_array.dtype.names)</code> should return:
<code>('state', 'variability', 'target', 'measured', 'var')</code></p>
<p>and <code>print(rec_array['measured'].dtype.names)</code> should return:
<code>('mean', 'low', 'high')</code></p>
<p>and similarly for the names of the other nested array.</p>
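<p>One way to get there (a sketch) is to skip the <code>view</code> cast entirely: allocate an empty structured array with the nested dtype (no <code>(4,)</code> subarray shapes, since 4 is the number of records) and assign field by field:</p>

```python
import numpy as np

arr1 = np.array([4, 5, 4, 5])
arr2 = np.array([0, 0, -1, -1])
arr3 = np.array([0.51, 0.89, 0.59, 0.94])
arr4 = np.array(
    [[0.52, 0.80, 0.62, 1.1], [0.41, 0.71, 0.46, 0.77], [0.68, 1.12, 0.78, 1.19]]
).T
arr5 = np.repeat(np.array([0.6, 0.2, 0.2]), 4).reshape(3, 4).T

dtypes = [
    ("state", "f8"),
    ("variability", "f8"),
    ("target", "f8"),
    ("measured", [("mean", "f8"), ("low", "f8"), ("hi", "f8")]),
    ("var", [("mid", "f8"), ("low", "f8"), ("hi", "f8")]),
]

rec = np.empty(4, dtype=dtypes)
rec["state"], rec["variability"], rec["target"] = arr1, arr2, arr3
for j, name in enumerate(("mean", "low", "hi")):
    rec["measured"][name] = arr4[:, j]   # column j of the 2-D array
for j, name in enumerate(("mid", "low", "hi")):
    rec["var"][name] = arr5[:, j]

print(rec.dtype.names)              # ('state', 'variability', 'target', 'measured', 'var')
print(rec["measured"].dtype.names)  # ('mean', 'low', 'hi')
```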
|
<python><numpy><structured-array>
|
2022-12-15 01:49:19
| 1
| 3,476
|
a11
|
74,805,849
| 19,161,399
|
Package Publishing (Python) failing through Poetry
|
<p>I am new to this, trying to publish a package to pypi.org using Poetry.
On my local machine the build is working; I am able to import the package and test-run it, it's all good.</p>
<p>but when I try to publish it to pypi.org, I get below error - as per the article I was following <a href="https://www.brainsorting.dev/posts/publish-a-package-on-pypi-using-poetry/#:%7E:text=Publish%20a%20package%20on%20PyPi%20using%20Poetry%201,Publish%20your%20package%20...%205%20End%20notes%20" rel="nofollow noreferrer" title="Article in brainsorting.dev">Link</a>, it was supposed to prompt me for my pypi account ID and password, but it doesn't and then gives the error:</p>
<pre><code>
Publishing gsst (0.2.2) to PyPI
- Uploading gsst-0.2.2-py3-none-any.whl 0%
- Uploading gsst-0.2.2-py3-none-any.whl 100%
</code></pre>
<p>and then this error shows up --</p>
<pre><code>HTTP Error 403: Invalid or non-existent authentication information. See https://pypi.org/help/#invalid-auth for more information. | b'<html>\n <head>\n <title>403 Invalid or non-existent authentication information. See https://pypi.org/help/#invalid-auth for more information.\n \n <body>\n <h1>403 Invalid or non-existent authentication information. See https://pypi.org/help/#invalid-auth for more information.\n Access was denied to this resource.<br/><br/>\nInvalid or non-existent authentication information. See https://pypi.org/help/#invalid-auth for more information.\n\n\n \n'
</code></pre>
<p>After I run the <code>poetry publish</code> command, the CLI should prompt me for my PyPI ID and password. Why does it skip the prompt and then fail on authentication?</p>
|
<python><pypi><python-poetry>
|
2022-12-15 01:07:38
| 3
| 404
|
Ankiz
|
74,805,824
| 14,311,397
|
"eb create" deploys .venv directory, even although it's included in .ebignore file
|
<p>I'm deploying a Flask API to Amazon Elastic Beanstalk through the eb CLI, following the <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-flask.html" rel="nofollow noreferrer">instructions here</a> and even although my <code>.ebignore</code> file is set to ignore the <code>.venv</code> directory, it's still deployed to Elastic Beanstalk.</p>
<p>My <code>.ebignore</code> file reads as follows:</p>
<pre><code>migrations
tests
.vscode
.venv
.env
.git
.gitignore
__pycache__
</code></pre>
<p>And yet on every deployment I attempt, I get an <code>Error: chown /var/app/staging/.venv/lib64: no such file or directory</code> error, because the <code>.venv</code> folder is being deployed.</p>
<p>I've deleted both the application environment as well as the application itself and the error persists. Furthermore, as <a href="https://stackoverflow.com/a/68375052/14311397">this answer indicates</a>, I downloaded the source of my application and the <code>.venv</code> directory is there... but it only has a <code>lib64</code> file. The other contents are not there. And, as you can see, other directories mentioned in the <code>.ebignore</code> were successfully ignored:</p>
<p><a href="https://i.sstatic.net/MQ1nB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MQ1nB.png" alt="enter image description here" /></a></p>
<p>What could cause this problem? Is there some sort of AWS cache I'm not aware of that makes <code>.venv</code> persist between application & environment creation and deletion?</p>
|
<python><amazon-elastic-beanstalk>
|
2022-12-15 01:02:14
| 1
| 441
|
LeperAffinity666
|
74,805,790
| 11,117,255
|
How do I convert bytes to utf-8 without turning regular strings into NaNs?
|
<p>I have a process that runs on multiple pandas dataframes. Sometimes the data comes in the form of bytes, such as:</p>
<pre><code>>>> pd.DataFrame[['x']]
['x']
b'123'
b'111'
b'110'
</code></pre>
<p>And other times it comes in the form of regular integers</p>
<pre><code>>>> pd.DataFrame[['x']]
['x']
80
123
491
</code></pre>
<p>I want to convert the bytes to utf-8 strings and leave the regular integers untouched. Right now, I tried <code>pd.DataFrame['x'].str.decode('utf-8')</code> and it works when the dataframe comes in the form of bytes, but it turns all the values to NaN when the dataframe comes in the form of integers.</p>
<p>I want the solution to be vectorized because speed is important. I can't use list comprehension, for example.</p>
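<p>One vectorized approach (a sketch; <code>decode_col</code> is my own name, not a pandas API): decode, then <code>fillna</code> with the original Series so non-bytes values survive, after guarding the numeric dtypes where <code>.str</code> would raise:</p>

```python
import pandas as pd

def decode_col(s: pd.Series) -> pd.Series:
    if s.dtype != object:
        return s  # purely numeric column: nothing to decode
    decoded = s.str.decode("utf-8")   # non-bytes entries become NaN here
    return decoded.fillna(s)          # ...and are restored from the original

print(list(decode_col(pd.Series([b"123", b"111", b"110"]))))  # ['123', '111', '110']
print(list(decode_col(pd.Series([80, 123, 491]))))            # [80, 123, 491]
```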
|
<python><pandas><unicode><byte>
|
2022-12-15 00:56:25
| 2
| 2,759
|
Cauder
|
74,805,688
| 11,939,660
|
Most pythonic way to reuse a generator?
|
<p>I have two generators with the following signature:</p>
<ul>
<li><code>gen1(inputs) -> Iterator[A]</code></li>
<li><code>gen2(Iterator[A]) -> Iterator[B]</code></li>
</ul>
<p>My goal is to write another generator (let's called it <code>final_gen</code>) that gives me both <code>A</code> and <code>B</code>.</p>
<p>However, if I chain <code>gen1</code> and <code>gen2</code> together, like</p>
<pre class="lang-py prettyprint-override"><code>def final_gen(inputs):
yield from gen2(gen1(inputs))
</code></pre>
<p><code>gen1</code> would be consumed and I can't get <code>A</code> back.</p>
<p>Just wondering what is the pythonic way to "reuse" <code>gen1</code>?</p>
<p><strong>Update</strong>
I can think of a few ways, but I'm not satisfied with any</p>
<ol>
<li>Collect <code>gen1</code> into a tuple or list, so that I can use those values. This is memory-inefficient.</li>
<li>Use <code>itertools.tee</code> to create a clone of <code>gen1</code>. However this is computation-inefficient.</li>
</ol>
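<p>For concreteness, a sketch with dummy <code>gen1</code>/<code>gen2</code> bodies (placeholders, not my real generators): <code>itertools.tee</code> is the usual tool here, and when the two branches are consumed in lockstep, e.g. via <code>zip</code>, its internal buffer stays at roughly one element and nothing is recomputed, so neither of the two drawbacks really applies:</p>

```python
import itertools

def gen1(inputs):          # placeholder: A = x * 2
    for x in inputs:
        yield x * 2

def gen2(a_iter):          # placeholder: B = A + 1
    for a in a_iter:
        yield a + 1

def final_gen(inputs):
    a_branch, a_for_gen2 = itertools.tee(gen1(inputs))
    # zip pulls one A and one B per step, keeping tee's buffer tiny
    yield from zip(a_branch, gen2(a_for_gen2))

print(list(final_gen([1, 2, 3])))  # [(2, 3), (4, 5), (6, 7)]
```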
|
<python><generator>
|
2022-12-15 00:35:39
| 1
| 421
|
Hongtao Yang
|