| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,330,971
| 14,753,388
|
Can Django STATIC_ROOT point to path on another server?
|
<p>I am using Django 4.0.1 in my project, and right before deploying my site I am faced with the issue of handling my static files. Due to the limits of my server, I have decided to serve these static files via a CDN.</p>
<p>I have already configured my <code>STATIC_URL</code> option in <code>settings.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>STATIC_URL = 'assets/'
</code></pre>
<p>I am aware that the Django documentation says this URL refers to the static files located in <code>STATIC_ROOT</code>. Normally the latter is an absolute path on your server where the <code>collectstatic</code> command collects the static files and puts them, but I am wondering if I can configure <code>STATIC_ROOT</code> to point to a path which is not on my server.</p>
<p>To be precise, I want to know whether I can point <code>STATIC_ROOT</code> to my CDN storage. In that way I can still use <code>STATIC_URL</code> to refer to my static assets, while being able to serve them via CDN.</p>
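<p>As far as I can tell from the Django docs, <code>STATIC_ROOT</code> itself cannot point at remote storage; the usual pattern routes <code>collectstatic</code> through a storage backend instead. A hedged settings sketch, assuming django-storages with S3 behind the CDN (every name below is a placeholder, not a value from this question):</p>

```python
# settings.py sketch: STATIC_ROOT must stay a local filesystem path, so the
# CDN is configured through the staticfiles storage backend instead.
STATIC_URL = "https://cdn.example.com/assets/"   # placeholder CDN domain

# Only needed when using the default (local) storage for collectstatic:
STATIC_ROOT = "/var/www/myproject/static/"       # placeholder local path

# With django-storages, collectstatic uploads straight to the remote storage
# and {% static %} URLs resolve against the CDN domain:
STATICFILES_STORAGE = "storages.backends.s3boto3.S3StaticStorage"
AWS_STORAGE_BUCKET_NAME = "my-static-bucket"     # placeholder
AWS_S3_CUSTOM_DOMAIN = "cdn.example.com"         # placeholder
```

<p>With this setup, <code>{% static %}</code> keeps working unchanged; only the storage backend decides where the files physically live.</p>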
|
<python><python-3.x><django>
|
2023-02-03 02:41:51
| 2
| 310
|
Shaobin Jiang
|
75,330,933
| 4,298,178
|
Folium HeatMapWithTime html file generated is blank
|
<p>I created a self-contained example to create a HeatMapWithTime map, but the result shows up as a blank file. The code is run in Jupyter and the output is a 14 KB file; I've tried opening it in Chrome, Safari, and Firefox, but it is still blank.</p>
<pre><code>import folium
import pandas as pd
import numpy as np
from folium.plugins import HeatMapWithTime
# Generate dummy data
latitudes = np.random.uniform(low=45.523, high=45.524, size=50)
longitudes = np.random.uniform(low=-122.675, high=-122.676, size=50)
times = np.sort(np.random.uniform(low=1580000000, high=1600000000, size=50))
data = {'latitude': latitudes, 'longitude': longitudes, 'time': times}
# Create a pandas dataframe from the dummy data
df = pd.DataFrame(data)
df['time'] = pd.to_datetime(df['time'], unit='s')
# Create a base map
map = folium.Map(location=[45.523, -122.675], zoom_start=13)
# Create a heat map with timeline
HeatMapWithTime(data=df[['latitude', 'longitude', 'time']].values.tolist(),
index=df['time'].dt.strftime("%Y-%m-%d %H:%M:%S"),
auto_play=True,
max_opacity=0.8).add_to(map)
# Save the map to an html file
map.save("heatmap_with_timeline.html")
</code></pre>
<p>Folium version: 0.14.0
Python version: 3.9.12</p>
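<p>For context, <code>HeatMapWithTime</code> expects <code>data</code> to be a list over time steps, each element itself a list of <code>[lat, lon]</code> points for that step, rather than a flat list of <code>[lat, lon, time]</code> rows. A sketch of that reshaping (kept independent of folium, so the shapes can be checked on their own):</p>

```python
# Reshape a flat (lat, lon, time) table into the per-time-step nesting that
# HeatMapWithTime expects: data[i] is the list of points for index[i].
import pandas as pd

df = pd.DataFrame({
    "latitude": [45.5231, 45.5235, 45.5238],
    "longitude": [-122.6750, -122.6755, -122.6752],
    "time": pd.to_datetime([1580000000, 1580000000, 1590000000], unit="s"),
})

time_key = df["time"].dt.strftime("%Y-%m-%d %H:%M:%S")
index = sorted(time_key.unique().tolist())          # one label per time step
data_by_time = [
    group[["latitude", "longitude"]].values.tolist()
    for _, group in df.groupby(time_key)            # groupby sorts keys, matching index
]
# Pass index=index and data=data_by_time to HeatMapWithTime(...).
```

<p>Whether this is the cause of the blank map here is an assumption, but the flat <code>[lat, lon, time]</code> rows in the question do not match this nesting.</p>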
|
<python><heatmap><folium>
|
2023-02-03 02:34:31
| 2
| 797
|
maregor
|
75,330,793
| 13,946,204
|
How to change number of workers and threads in running process for gunicorn
|
<p>I want to test the performance of a web service running inside an AWS ECS service, depending on the number of <code>gunicorn</code> workers.</p>
<p>Entrypoint of the container is:</p>
<pre class="lang-bash prettyprint-override"><code>WORKERS=15
THREADS=15
gunicorn \
--reload \
--workers "${WORKERS}" \
--threads "${THREADS}" \
--max-requests 10000 \
--max-requests-jitter 200 \
--timeout 60 \
--access-logfile - \
--error-logfile - \
--bind 0.0.0.0:8000 \
config.wsgi:application
</code></pre>
<h3>The problem:</h3>
<p>If I want to change the number of workers/threads, I have to stop <code>gunicorn</code> → update the ECS task definition (set a new number of <code>WORKERS</code> and <code>THREADS</code>) → restart the ECS container. That takes too much time if I want to test tens of configurations.</p>
<h3>Possible workaround:</h3>
<p>It is possible to set a mock endless <code>entrypoint</code> like <code>watch -n 1000 "ls -l"</code>, log in to the ECS container with <code>bash</code>, and run <code>gunicorn</code> with the desired parameters manually. But this is a little inconvenient and requires creating a test-specific environment, so I want to avoid this method.</p>
<h3>The question:</h3>
<p>Is it possible to change the number of workers and threads of an already running <code>gunicorn</code> instance, so that I can test different configurations without restarting the container or stopping its entrypoint process?</p>
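<p>Worth noting: gunicorn's documented signal handling lets the master process add or remove workers at runtime, which covers at least the worker half of this. A sketch (the <code>pgrep</code> pattern is a guess at the container's process name; as far as I know the per-worker thread count cannot be changed this way):</p>

```shell
# TTIN adds one worker, TTOU removes one -- per gunicorn's signal docs.
GUNICORN_PID=$(pgrep -f "gunicorn.*config.wsgi" | head -n 1)

kill -TTIN "$GUNICORN_PID"   # increase worker count by one
kill -TTOU "$GUNICORN_PID"   # decrease worker count by one
```

<p>Repeating the signal steps the count up or down one worker at a time, so sweeping many worker configurations needs no container restart.</p>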
|
<python><gunicorn>
|
2023-02-03 02:04:42
| 1
| 9,834
|
rzlvmp
|
75,330,709
| 4,133,188
|
Projection of a 3D circle onto a 2D camera image
|
<p>Asked this on <a href="https://math.stackexchange.com/questions/4630540/projection-of-a-3d-circle-onto-a-2d-camera-image">math.stackexchange</a> but got no responses, so I'm trying here; hopefully the computer-vision people are better able to help out.</p>
<p>Assume that I have a 3D circle with a center at <code>(c1, c2, c3)</code> in the circle coordinate frame <code>C</code>. The radius of the circle is <code>r</code>, and there is a unit vector <code>(v1, v2, v3)</code> (also in coordinate frame <code>C</code>) normal to the plane of the circle at the center point.</p>
<p>I have a pinhole camera located at point <code>(k1, k2, k3)</code> in the camera coordinate frame <code>K</code>. I have a known camera-to-circle coordinate frame transformation matrix <code>kTc</code> that transforms any point in <code>C</code> to coordinate frame <code>K</code> so that <code>point_k = np.dot(kTc, point_c)</code> where <code>point_k</code> is a point in the <code>K</code> frame coordinates and <code>point_c</code> is a point in the <code>C</code> frame coordinates. The camera has a known intrinsic camera matrix <code>I</code>.</p>
<p>How do I project the 3D circle onto the image plane of the camera?</p>
<p>Ideally would like to do this in python.</p>
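<p>One concrete way to set this up, sampling the circle rather than deriving the projected conic in closed form (a sketch under the question's conventions: <code>kTc</code> a 4×4 homogeneous transform, <code>K</code>/<code>I</code> the 3×3 intrinsics):</p>

```python
import numpy as np

def project_circle(center_c, normal_c, r, kTc, K, n_pts=100):
    """Sample the circle in frame C, map to frame K with kTc, project with K."""
    n = np.asarray(normal_c, dtype=float)
    n /= np.linalg.norm(n)
    # Orthonormal basis (u, w) spanning the circle's plane.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, a); u /= np.linalg.norm(u)
    w = np.cross(n, u)
    t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    pts_c = (np.asarray(center_c, dtype=float)[:, None]
             + r * (np.outer(u, np.cos(t)) + np.outer(w, np.sin(t))))   # 3 x N
    pts_h = np.vstack([pts_c, np.ones((1, n_pts))])                     # homogeneous
    pts_k = (kTc @ pts_h)[:3]                                           # 3 x N in frame K
    uv = K @ pts_k                                                      # pinhole projection
    return (uv[:2] / uv[2]).T                                           # N x 2 pixel coords
```

<p>The exact projected curve of a circle under a pinhole camera is an ellipse (a conic), so the sampled polyline converges to it as <code>n_pts</code> grows; the sampling approach just avoids the conic algebra.</p>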
|
<python><graphics><computer-vision><linear-algebra><projection>
|
2023-02-03 01:48:00
| 1
| 771
|
BeginnersMindTruly
|
75,330,690
| 6,202,327
|
Get FEM, save plot as PNG?
|
<p>I am using the python bindings for getfem, to that effect I wrote this script, following their tutorial:</p>
<pre class="lang-py prettyprint-override"><code>import getfem as gf
import numpy as np
import math
center = [0.0, 0.0]
dir = [0.0, 1.0]
radius = 1.0
angle = 0.2 * math.pi
mo = gf.MesherObject("cone", center, dir, radius, angle)
h = 0.1
K = 2
mesh = gf.Mesh("generate", mo, h, K)
outer_faces = mesh.outer_faces()
OUTER_BOUND = 1
mesh.set_region(OUTER_BOUND, outer_faces)
sl = gf.Slice(("none",), mesh, 1)
mfu = gf.MeshFem(mesh, 1)
elements_degree = 2
mfu.set_classical_fem(elements_degree)
mim = gf.MeshIm(mesh, pow(elements_degree, 2))
md = gf.Model("real")
md.add_fem_variable("u", mfu)
md.add_Laplacian_brick(mim, "u")
F = 1.0
md.add_fem_data("F", mfu)
md.set_variable("F", np.repeat(F, mfu.nbdof()))
md.add_source_term_brick(mim, "u", "F")
md.add_Dirichlet_condition_with_multipliers(mim, "u", elements_degree - 1, OUTER_BOUND)
md.solve()
U = md.variable("u")
sl.export_to_vtk("u.vtk", "ascii", mfu, U, "U")
</code></pre>
<p>This exports a vtk file. Somewhere, I found a way to display it on a Jupyter notebook:</p>
<pre class="lang-py prettyprint-override"><code>import pyvista as pv
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1280, 1024))
display.start()
p = pv.Plotter()
m = pv.read("u.vtk")
contours = m.contour()
p.add_mesh(m, show_edges=False)
p.add_mesh(contours, color="black", line_width=1)
p.add_mesh(m.contour(8).extract_largest(), opacity=0.1)
pts = m.points
p.show(window_size=[384, 384], cpos="xy")
display.stop()
</code></pre>
<p><a href="https://i.sstatic.net/XK7Uq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XK7Uq.png" alt="enter image description here" /></a></p>
<p>It looks awfully compressed for some reason. I am trying to save it as a PNG instead.</p>
<p>Does anyone know how to convert the vtk into a png?
ParaView is deprecated on modern systems for all intents and purposes, so that's not an option.</p>
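<p>For reference, pyvista's <code>Plotter</code> can write a PNG directly via its off-screen renderer, skipping the Jupyter/virtual-display path entirely. A sketch (the window size and the extra contour layer are just illustrative choices):</p>

```python
# Render the VTK file off-screen and write the result straight to a PNG.
import pyvista as pv

m = pv.read("u.vtk")
p = pv.Plotter(off_screen=True, window_size=[1024, 1024])
p.add_mesh(m, show_edges=False)
p.add_mesh(m.contour(), color="black", line_width=1)
p.screenshot("u.png")  # renders the scene and saves the image
```

<p><code>p.show(screenshot="u.png")</code> is an alternative when an interactive window is wanted as well.</p>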
|
<python><image>
|
2023-02-03 01:43:36
| 1
| 9,951
|
Makogan
|
75,330,560
| 2,159,051
|
How to debug crashing C++ library loaded in python project
|
<p>I am attempting to figure out why calling a function in a dynamically loaded library crashes Python. I have a C++ function in a dynamic library file, which is loaded in Python using ctypes, and I then call the function from Python:</p>
<pre><code>lib = cdll.LoadLibrary(libPath)
# Note: using c_char_p instead of POINTER(c_char) does not yield any difference in result
# Export const char* GetSection(const char* TilesetID, int32_t X0, int32_t Y0, int32_t X1, int32_t Y1, uint8_t*& OutData, uint64_t& OutDataSize)
lib.GetSection.argtypes = [POINTER(c_char), c_int32, c_int32, c_int32, c_int32, POINTER(c_void_p), POINTER(c_uint64)]
lib.GetSection.restype = POINTER(c_char)
output_data = c_void_p()
output_size = c_uint64()
str_data = lib.GetSection(id.encode('ascii'), x0, y0, x1, y1, byref(output_data), byref(output_size))
</code></pre>
<p>On macOS, this works exactly as expected. Unfortunately, on Windows 11 it does not. I'm running from a Jupyter notebook, and the kernel crashes and restarts immediately after the <code>lib.GetSection</code> call.</p>
<p>I have attached the Visual Studio debugger to the process, and can see that on the C++ side of things, the function is being correctly called, all parameters are correct, and it returns without error. It is at this point that the python kernel crashes, deep in a python call stack that I don't have symbols for.</p>
<p>How do I even approach debugging this? Does anything look wrong with the way I am calling the function?</p>
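<p>A first step that needs no debug symbols is the stdlib's <code>faulthandler</code>, which dumps the Python-side stack to stderr when the interpreter dies on a native fault. (Common Windows-specific culprits in this kind of setup, stated as hypotheses rather than a diagnosis: stack or heap corruption from a mismatched <code>argtypes</code> declaration, or memory crossing the DLL boundary between different C runtimes.)</p>

```python
# Enable the built-in crash reporter before making the ctypes call, so a
# native fault at least leaves a Python-level traceback in the log.
import faulthandler

faulthandler.enable()

# ... perform the lib.GetSection call here; if the crash happens inside or
# right after it, the dumped stack shows where Python was when the fault hit.
```

<p>In Jupyter the dump lands in the terminal that launched the kernel, not in the notebook cell output.</p>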
|
<python><c++><visual-studio><ctypes>
|
2023-02-03 01:19:57
| 1
| 2,298
|
BWG
|
75,330,556
| 5,141,652
|
python tkinter scrollable frame scroll with mousewheel
|
<p>I have created a scrollable frame with tkinter and would like to use the mouse wheel for scrolling; I am also switching frames as pages. Everything works as expected for page 2. However, page 1 does not scroll with the mouse wheel: the scrollbar itself works and the mouse-wheel event is triggered, it's just not scrolling.</p>
<p>Here is an example of what I have so far:</p>
<pre><code>import tkinter as tk
class ScrollableFrame(tk.Frame):
def __init__(self, container, *args, **kwargs):
super().__init__(container, *args, **kwargs)
self.canvas = tk.Canvas(self)
scrollbar = tk.Scrollbar(self, command=self.canvas.yview)
self.scrollable_frame = tk.Frame(self.canvas)
self.scrollable_frame.bind(
"<Configure>",
lambda e: self.canvas.configure(scrollregion=self.canvas.bbox("all")),
)
self.scrollable_frame.bind_all("<MouseWheel>", self._on_mousewheel)
self.canvas.create_window((0, 0), window=self.scrollable_frame, anchor="nw")
self.canvas.configure(yscrollcommand=scrollbar.set)
self.canvas.pack(side="left", fill="both", expand=True)
scrollbar.pack(side="right", fill="y")
def _on_mousewheel(self, event):
caller = event.widget
if "scrollableframe" in str(caller):
if event.delta == -120:
self.canvas.yview_scroll(2, "units")
if event.delta == 120:
self.canvas.yview_scroll(-2, "units")
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title("tkinter scrollable frame example.py")
self.geometry("700x450")
# set grid layout 1x2
self.grid_rowconfigure(0, weight=1)
self.grid_columnconfigure(1, weight=1)
# create navigation frame
self.navigation_frame = tk.Frame(self)
self.navigation_frame.grid(row=0, column=0, padx=20, pady=20, sticky="nsew")
# create navigation buttons
self.page1_button = tk.Button(
self.navigation_frame,
text="Page 1",
command=self.page1_button_event,
)
self.page1_button.grid(row=0, column=0, sticky="ew")
self.page2_button = tk.Button(
self.navigation_frame,
text="Page 2",
command=self.page2_button_event,
)
self.page2_button.grid(row=1, column=0, sticky="ew")
# create page1 frame
self.page1_frame = ScrollableFrame(self)
self.page1_frame.grid_columnconfigure(0, weight=1)
# create page1 content
self.page1_frame_label = tk.Label(
self.page1_frame.scrollable_frame, text="Page 1"
)
self.page1_frame_label.grid(row=0, column=0, padx=20, pady=10)
for i in range(50):
tk.Label(
self.page1_frame.scrollable_frame, text=str("Filler number " + str(i))
).grid(row=i + 1, column=0)
# create page2 frame
self.page2_frame = ScrollableFrame(self)
self.page2_frame.grid_columnconfigure(1, weight=1)
# create page2 content
self.page2_frame_label = tk.Label(
self.page2_frame.scrollable_frame, text="Page 2"
)
self.page2_frame_label.grid(row=0, column=0, padx=20, pady=10)
for i in range(50):
tk.Label(
self.page2_frame.scrollable_frame, text=str("Filler number " + str(i))
).grid(row=i + 1, column=0)
# show default frame
self.show_frame("page1")
# show selected frame
def show_frame(self, name):
self.page1_button.configure(
background=("red") if name == "page1" else "#F0F0F0"
)
self.page2_button.configure(
background=("red") if name == "page2" else "#F0F0F0"
)
if name == "page1":
self.page1_frame.grid(row=0, column=1, padx=0, pady=0, sticky="nsew")
else:
self.page1_frame.grid_forget()
if name == "page2":
self.page2_frame.grid(row=0, column=1, sticky="nsew")
else:
self.page2_frame.grid_forget()
def page1_button_event(self):
self.show_frame("page1")
def page2_button_event(self):
self.show_frame("page2")
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>Any ideas where I have gone wrong?</p>
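<p>One relevant detail: <code>bind_all</code> installs a single application-wide binding, so the second <code>ScrollableFrame</code> created replaces the binding made by the first. A common pattern (sketched below, not the question's exact code) grabs the global binding only while the pointer is over the widget:</p>

```python
import tkinter as tk

class ScrollableFrame(tk.Frame):
    """Sketch: each instance takes the global <MouseWheel> binding on <Enter>
    and releases it on <Leave>, so multiple instances stop overwriting each
    other's bind_all. Scrollbar/inner-frame setup is elided for brevity."""

    def __init__(self, container, *args, **kwargs):
        super().__init__(container, *args, **kwargs)
        self.canvas = tk.Canvas(self)
        # ... scrollbar and self.scrollable_frame setup as in the question ...
        self.canvas.bind("<Enter>", self._bind_mousewheel)
        self.canvas.bind("<Leave>", self._unbind_mousewheel)

    def _bind_mousewheel(self, _event):
        self.canvas.bind_all("<MouseWheel>", self._on_mousewheel)

    def _unbind_mousewheel(self, _event):
        self.canvas.unbind_all("<MouseWheel>")

    def _on_mousewheel(self, event):
        # event.delta is a multiple of 120 on Windows; scroll 2 units per notch.
        self.canvas.yview_scroll(-2 * (event.delta // 120), "units")
```

<p>With the enter/leave handoff, the widget-path check inside the handler is no longer needed.</p>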
|
<python><tkinter>
|
2023-02-03 01:19:25
| 1
| 1,037
|
Chris
|
75,330,484
| 418,586
|
Minimize AbsEquality rather than enforce in OrTools
|
<p>I'm trying to solve the following using OR tools:</p>
<p>Given the following bags containing different colors of balls:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>bag</th>
<th>red</th>
<th>blue</th>
<th>green</th>
<th>black</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>10</td>
<td>5</td>
<td>85</td>
<td>0</td>
</tr>
<tr>
<td>B</td>
<td>25</td>
<td>50</td>
<td>25</td>
<td>0</td>
</tr>
<tr>
<td>C</td>
<td>0</td>
<td>100</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>D</td>
<td>90</td>
<td>5</td>
<td>5</td>
<td>0</td>
</tr>
<tr>
<td>E</td>
<td>2</td>
<td>0</td>
<td>98</td>
<td>0</td>
</tr>
<tr>
<td>F</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>100</td>
</tr>
</tbody>
</table>
</div>
<p>How many of each type of bag would I need to have an equal number of each color of ball?</p>
<p>For cases like this where there is an exact answer, the following code:</p>
<pre><code>bags= [
[10,5,85,0],
[25,50,25,0],
[0,100,0,0],
[90,5,5,0],
[2,0,98,0],
[0,0,0,100]
]
bags_n = len(bags)
color_n = len(bags[0])
print(f'Bags: {bags_n}')
print(f'Colors: {color_n}')
color_count= [0] * color_n
for c in range(color_n):
for b in bags:
color_count[c]+= b[c]
print(color_count)
print(f'Inital total: {sum(color_count)}')
print(f'Inital equal share: {sum(color_count)//color_n}')
model = cp_model.CpModel()
weights = []
for r in range(bags_n):
weights.append(model.NewIntVar(1,1000,f'Weight of Bag: {r}'))
total = model.NewIntVar(0, 100000, 'total')
model.Add(
sum(flatten(
[[bags[r][c] * weights[r] for r in range(bags_n)] for c in range(color_n)]
)) == total
)
equal = model.NewIntVar(0, 10000, 'equal share')
model.AddDivisionEquality(equal, total, color_n)
for c in range(color_n):
diff_c = model.NewIntVar(0, 1000, 'diff_'+str(c))
model.Add(diff_c == sum([bags[r][c] * weights[r] for r in range(bags_n)]) - equal)
model.AddAbsEquality(0, diff_c)
solver = cp_model.CpSolver()
status = solver.Solve(model)
if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE:
print(f'Maximum of objective function: {solver.ObjectiveValue()}\n')
for v in weights:
print(f'{solver.Value(v)}')
print(f'total = {solver.Value(total)}')
print(f'equal share = {solver.Value(equal)}')
else:
print(status)
</code></pre>
<p>gives back valid weights:</p>
<p>82
2
70
78
5
79</p>
<p>If I change the setup to something like</p>
<pre><code>bags= [
[50,40,10],
[30,20,50],
[30,30,40],
[30,25,45],
]
</code></pre>
<p>The model becomes infeasible, I assume because there are no weights that satisfy the AbsEquality constraint for every color.</p>
<p>How can I change this to get the solution closest to an even distribution, even if a perfect solution is infeasible?</p>
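<p>The standard CP-SAT move here is to soften the constraint: introduce an absolute-deviation variable per color and minimize their sum, so the solver returns the closest-to-even distribution even when exact equality is infeasible. A self-contained sketch on the second (infeasible) bag setup, with variable bounds chosen loosely:</p>

```python
# Soft version of the AbsEquality model: minimize total deviation from the
# equal share instead of forcing each deviation to zero.
from ortools.sat.python import cp_model

bags = [[50, 40, 10], [30, 20, 50], [30, 30, 40], [30, 25, 45]]
bags_n, color_n = len(bags), len(bags[0])

model = cp_model.CpModel()
weights = [model.NewIntVar(1, 1000, f"weight_{r}") for r in range(bags_n)]
per_color = [sum(bags[r][c] * weights[r] for r in range(bags_n)) for c in range(color_n)]

total = model.NewIntVar(0, 10**6, "total")
model.Add(total == sum(per_color))
equal = model.NewIntVar(0, 10**6, "equal_share")
model.AddDivisionEquality(equal, total, color_n)

deviations = []
for c in range(color_n):
    diff_c = model.NewIntVar(-10**6, 10**6, f"diff_{c}")  # may be negative now
    model.Add(diff_c == per_color[c] - equal)
    abs_c = model.NewIntVar(0, 10**6, f"abs_diff_{c}")
    model.AddAbsEquality(abs_c, diff_c)                   # abs_c == |diff_c|
    deviations.append(abs_c)
model.Minimize(sum(deviations))

solver = cp_model.CpSolver()
status = solver.Solve(model)
```

<p>When a perfect split does exist, this objective reaches zero and reproduces the hard-constraint solution.</p>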
|
<python><or-tools><integer-programming>
|
2023-02-03 01:03:03
| 1
| 564
|
Chris
|
75,330,311
| 7,687,981
|
Python vectorize nested for loop with conditionals
|
<p>How can I vectorize a nested for loop containing some conditionals? I'm trying to get a list of row/column windows within a very large array. What I have below is quick for a nested loop going through all the rows and columns with a given window size but I'm wondering if there is any way to make this faster.</p>
<pre><code>def get_windows(width, height, win_size):
windows = list()
for i in range(0, width, win_size):
if i + win_size < width:
numCols = win_size
else:
numCols = width - i
for j in range(0, height, win_size):
if j + win_size< height:
numRows = win_size
else:
numRows = height - j
window = [i, j, numCols, numRows]
windows.append(window)
return windows
def sliding_window(arr, windows):
for i in windows:
win_arr = arr[0:3, i[0]:i[0]+i[2], i[1]:i[1]+i[3]]
win_arr = np.transpose(win_arr, [1, 2, 0])
</code></pre>
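<p>The window bookkeeping itself vectorizes cleanly: compute the per-axis origins and clipped sizes once, then take their Cartesian product. A sketch that reproduces <code>get_windows</code>' output order (outer loop over <code>i</code>, inner over <code>j</code>):</p>

```python
import numpy as np

def get_windows_vectorized(width, height, win_size):
    # Window origins along each axis, with sizes clipped at the array edge.
    xs = np.arange(0, width, win_size)
    ys = np.arange(0, height, win_size)
    num_cols = np.minimum(win_size, width - xs)    # per-origin window widths
    num_rows = np.minimum(win_size, height - ys)   # per-origin window heights
    # Cartesian product matching the nested-loop order (x outer, y inner).
    i = np.repeat(xs, len(ys));  c = np.repeat(num_cols, len(ys))
    j = np.tile(ys, len(xs));    r = np.tile(num_rows, len(xs))
    return np.stack([i, j, c, r], axis=1)
```

<p>The per-window array slicing in <code>sliding_window</code> still runs window by window; whether that part can be vectorized further (e.g. with stride tricks) depends on the windows all having equal size, which the edge clipping here breaks.</p>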
|
<python><numpy><vectorization>
|
2023-02-03 00:33:05
| 1
| 815
|
andrewr
|
75,330,256
| 16,491,055
|
How to convert 1D numpy array of tuples to 2D numpy array?
|
<p>I have a <code>numpy</code> array of <code>tuples</code>:</p>
<pre><code>import numpy as np
the_tuples = np.array([(1, 4), (7, 8)], dtype=[('f0', '<i4'), ('f1', '<i4')])
</code></pre>
<p>I would like to have a 2D <code>numpy</code> array instead:</p>
<pre><code>the_2Darray = np.array([[1,4],[7,8]])
</code></pre>
<p>I have tried doing several things, such as</p>
<pre><code>import numpy as np
the_tuples = np.array([(1, 4), (7, 8)], dtype=[('f0', '<i4'), ('f1', '<i4')])
the_2Darray = np.array([*the_tuples])
</code></pre>
<p>How can I convert it?</p>
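<p>Since the input is a structured array whose fields share one dtype, two standard conversions apply; a short sketch of both:</p>

```python
# Convert a structured (record) array into a plain 2-D array.
import numpy as np
from numpy.lib import recfunctions as rfn

the_tuples = np.array([(1, 4), (7, 8)], dtype=[("f0", "<i4"), ("f1", "<i4")])

# Option 1: the dedicated helper for structured -> regular arrays.
a = rfn.structured_to_unstructured(the_tuples)

# Option 2: works because every field is the same dtype -- reinterpret the
# raw buffer as plain i4 values and reshape to (rows, fields).
b = the_tuples.view("<i4").reshape(len(the_tuples), -1)
```

<p>Option 2 is a zero-copy view of the original buffer, which matters for large arrays; option 1 is the more general and readable route.</p>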
|
<python><numpy>
|
2023-02-03 00:22:07
| 2
| 771
|
geekygeek
|
75,330,116
| 19,130,803
|
Flask upload file, pass file to celery task
|
<p>I am uploading a file using a Flask REST API. As the file size is large, I am using Celery to upload the file to the server. Below is the code.</p>
<p><strong>Flask Rest API</strong></p>
<pre><code>@app.route('/upload',methods=['GET','POST'])
def upload():
file = request.files.get("file")
if not file:
return "some_error_msg"
elif file.filename == "":
return "some_error_msg"
if file:
filename = secure_filename(file.filename)
        result = task_upload.apply_async(args=(filename, ABC), queue="upload")
return "some_task_id"
</code></pre>
<p><strong>Celery task</strong></p>
<pre><code>@celery_app.task(bind=True)
def task_upload(self, filename: str, contents: Any) -> bool:
status = False
try:
status = save_file(filename, contents)
    except Exception as e:
print(f"Exception: {e}")
return status
</code></pre>
<p><strong>Save method</strong></p>
<pre><code>def save_file(filename: str, contents: Any) -> bool:
file: Path = MEDIA_DIRPATH / filename
status: bool = False
# method-1 This code is using flask fileStorage, contents= is filestorage object
if contents:
contents.save(file)
status = True
# method-2 This code is using request.stream, contents= is IOBytes object
with open(file, "ab") as fp:
chunk = 4091
while True:
some code.
            fp.write(chunk)
status = True
return status
</code></pre>
<p>I am getting <strong>errors</strong> while trying both methods.</p>
<p><strong>For method 1</strong>, where I tried passing the file variable (a FileStorage object), I get the error:</p>
<pre><code>exc_info=(<class 'kombu.exceptions.EncodeError'>, EncodeError(TypeError('Object of type FileStorage is not JSON serializable'))
</code></pre>
<p><strong>For method 2</strong>, where I tried passing <code>request.stream</code>, I get the error:</p>
<pre><code><gunicorn.http.body.Body object at some number>
TypeError: Object of type Body is not JSON serializable
</code></pre>
<p>How can I pass the file (<code>ABC</code>) to the Celery task?<br />
I would prefer method 1, but either will do. Please suggest.</p>
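<p>Both errors come from the same constraint: Celery's default JSON serializer can only carry JSON-serializable arguments, and neither a <code>FileStorage</code> nor the WSGI body stream qualifies. The usual pattern is to persist the upload in the request handler (a plain disk write, which is fast) and pass only the resulting path string to the task. A framework-free sketch of that split, with illustrative names (the comments note where Flask and Celery would hook in):</p>

```python
# Path-passing pattern: the web process saves the upload to a spool directory,
# and only the JSON-serializable path string travels through the broker.
import shutil
import tempfile
from pathlib import Path

SPOOL_DIR = Path(tempfile.gettempdir()) / "upload_spool"

def handle_upload(filename: str, stream) -> str:
    """Runs in the Flask view. In the real view:
    path = handle_upload(secure_filename(f.filename), f.stream)
    task_upload.apply_async(args=(path,), queue="upload")"""
    SPOOL_DIR.mkdir(parents=True, exist_ok=True)
    path = SPOOL_DIR / filename
    with open(path, "wb") as fp:
        shutil.copyfileobj(stream, fp)   # chunked copy, no full read into memory
    return str(path)

def task_upload(path: str) -> bool:
    """Runs in the Celery worker: it receives only the path string."""
    contents = Path(path).read_bytes()
    return len(contents) > 0
```

<p>This assumes the web process and the worker share a filesystem (or object store); if they do not, uploading to shared storage in the view and passing the object key works the same way.</p>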
|
<python><flask><celery><gunicorn>
|
2023-02-02 23:51:34
| 1
| 962
|
winter
|
75,330,032
| 9,795,817
|
Unable to start Jupyter Notebook Kernel in VS Code
|
<p>I am trying to run a Jupyter Notebook in VS Code. However, I'm getting the following error message whenever I try to execute a cell:</p>
<pre class="lang-none prettyprint-override"><code>Failed to start the Kernel.
Jupyter server crashed. Unable to connect.
Error code from Jupyter: 1
usage: jupyter.py [-h] [--version] [--config-dir] [--data-dir] [--runtime-dir]
[--paths] [--json] [--debug]
[subcommand]
Jupyter: Interactive Computing
positional arguments:
subcommand the subcommand to launch
options:
-h, --help show this help message and exit
--version show the versions of core jupyter packages and exit
--config-dir show Jupyter config dir
--data-dir show Jupyter data dir
--runtime-dir show Jupyter runtime dir
--paths show all Jupyter paths. Add --json for machine-readable
format.
--json output paths as machine-readable json
--debug output debug information about paths
Available subcommands:
Jupyter command `jupyter-notebook` not found.
View Jupyter log for further details.
</code></pre>
<p>The Jupyter log referred to by the diagnostic message just contains the same text as the above diagnostic message repeated multiple times.</p>
<p>I believe <a href="https://stackoverflow.com/questions/57983475/jupyter-server-crashed-unable-to-connect-error-code-from-jupyter-1">this post</a> refers to the same issue. Unfortunately, the accepted answer does not work for me because I do not have <em>Python: Select Interpreter to Start Jupyter server</em> in my Command Palette.</p>
<p>The file was working normally this morning. I also tried uninstalling and reinstalling the extensions.</p>
<p>How can I get the Kernel to start?</p>
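<p>For what it's worth, the diagnostic line <code>Jupyter command `jupyter-notebook` not found</code> usually means the <code>notebook</code> package is missing from the interpreter the extension selected. A hedged first step (run with the same interpreter shown in VS Code's status bar):</p>

```shell
# Install the notebook frontend and a kernel into the active interpreter.
python -m pip install --upgrade jupyter notebook ipykernel
python -m jupyter notebook --version   # should now print a version number
```

<p>If the version check succeeds but the kernel still fails, the extension is likely pointed at a different interpreter than the one just updated.</p>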
|
<python><visual-studio-code><jupyter-notebook>
|
2023-02-02 23:35:15
| 6
| 6,421
|
Arturo Sbr
|
75,329,790
| 3,380,902
|
Jupyter kernel dies on SageMaker notebook instance when running join operation using pd.merge on large DataFrames
|
<p>I am running a large pandas merge/join operation in a <code>jupyter</code> notebook on a <code>SageMaker</code> notebook instance (<code>ml.t3.large</code>, i.e. <code>8 GB</code> of memory).</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({
'ID': [1, 2, 3],
'Name': ['A','B','C'],
....
})
df1.shape
(3000000, 10)
df2 = pd.DataFrame({
'ID': [],
'Name': [],
....
})
df2.shape
(50000, 12)
# Join data
df_merge = pd.merge(
df1,
df2,
left_on = ['ID','Name'],
right_on = ['ID','Name'],
how = 'left'
)
</code></pre>
<p>When I run this operation, the kernel dies within a minute or so. How can I optimize this operation for memory efficiency?</p>
<p>The <code>dtypes</code> are either <code>int64, object, float64</code>.</p>
<p>Running <code>df1.info(memory_usage = "deep")</code> shows</p>
<p><code>dtypes: float64(1), int64(6), object(12) memory usage: 3.1 GB</code></p>
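<p>Given that 3.1 GB figure, the <code>object</code> columns are the likely pressure point: string columns with repeated values compress dramatically as <code>category</code>, and <code>int64</code> often fits a smaller dtype. A sketch on dummy data (the column names mirror the question; the sizes are illustrative):</p>

```python
# Shrink a frame before merging: categoricals for repetitive strings,
# downcast integers where the value range allows it.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ID": np.arange(1_000, dtype="int64") % 50,
    "Name": np.random.choice(["A", "B", "C"], size=1_000),
})

before = df.memory_usage(deep=True).sum()
df["Name"] = df["Name"].astype("category")
df["ID"] = pd.to_numeric(df["ID"], downcast="integer")
after = df.memory_usage(deep=True).sum()
# `after` should be a small fraction of `before`.
```

<p>One caveat: when merging on a categorical key, both frames should share the same categories, or pandas may fall back to object dtype during the join.</p>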
|
<python><pandas><amazon-web-services><amazon-sagemaker>
|
2023-02-02 23:00:57
| 1
| 2,022
|
kms
|
75,329,708
| 10,045,428
|
Dash leaflet not rendering when inside a Bootstrap tab container
|
<p>I am trying to create a simple Dash application that includes Dash-Leaflet so that it plots some points as markers. It is apparently working as expected when no styles are applied. But I would like to create a layout with bootstrap with tabs as in this example: <a href="https://hellodash.pythonanywhere.com/" rel="nofollow noreferrer">https://hellodash.pythonanywhere.com/</a></p>
<p>When I place the map element within a tab container, it does not render properly. If I move it away it works fine.</p>
<p>This is the code I have so far:</p>
<pre><code>from dash import Dash, dcc, html, dash_table, Input, Output, callback
import plotly.express as px
import dash_bootstrap_components as dbc
import dash_leaflet as dl
import dash_leaflet.express as dlx
from dash_extensions.javascript import assign
import pandas as pd
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
header = html.H4(
"Sample map", className="bg-primary text-white p-2 mb-2 text-center"
)
data = pd.DataFrame([{'name': 'Point 1', 'lat': 39.535, 'lon': 0.0563, 'cluster': 1, 'val_x': 0, 'val_y': 1},
{'name': 'Point 2', 'lat': 40.155, 'lon': -0.0453, 'cluster': 1, 'val_x': 1, 'val_y': 4},
{'name': 'Point 1', 'lat': 38.875, 'lon': 0.0187, 'cluster': 2, 'val_x': 2, 'val_y': 2}])
dropdown = html.Div(
[
dbc.Label("Select Cluster"),
dcc.Dropdown(
data.cluster.unique().tolist(),
id="cluster_selector",
clearable=False,
),
],
className="mb-4",
)
controls = dbc.Card(
[dropdown],
body=True,
)
tab1 = dbc.Tab([dl.Map([dl.TileLayer()], style={'width': '100%',
'height': '50vh',
'margin': "auto",
"display": "block"}, id="map")], label="Map")
tab2 = dbc.Tab([dcc.Graph(id="scatter-chart")], label="Scatter Chart")
tabs = dbc.Card(dbc.Tabs([tab1, tab2]))
app.layout = dbc.Container(
[
header,
dbc.Row(
[
dbc.Col(
[
controls,
],
width=3,
),
dbc.Col([tabs], width=9),
]
),
],
fluid=True,
className="dbc",
)
if __name__ == "__main__":
app.run_server(debug=True, port=8052)
</code></pre>
<p>This is what it looks like.</p>
<p><a href="https://i.sstatic.net/RKO8v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RKO8v.png" alt="This is how it looks like" /></a></p>
<p>I have tried the proposed solutions on <a href="https://stackoverflow.com/questions/21405189/leaflet-map-shows-up-grey">this post</a> but they do not seem to work (or I am applying them wrong).</p>
<p>Do you have any suggestions on where to look for the problem?</p>
<p>Thank you very much.</p>
|
<python><leaflet><plotly-dash><dash-leaflet>
|
2023-02-02 22:47:52
| 2
| 347
|
juancar
|
75,329,646
| 12,461,032
|
Tensorflow dataset iterator pick a sub-sample of whole data
|
<p>I have a code that generates an iterator from a Tensorflow dataset. The code is this:</p>
<pre><code>@tf.function
def normalize_image(record):
out = record.copy()
out['image'] = tf.cast(out['image'], 'float32') / 255.
return out
train_it = iter(tfds.builder('mnist').as_dataset(split='train').map(normalize_image).repeat().batch(256*10))
</code></pre>
<p>However, I want to do the splitting manually. For example, the MNIST dataset has 60,000 training samples, but I want to use only the first 50,000 (and hold out the rest for validation). The problem is I don't know how to do so.</p>
<p>I tried to convert it to NumPy and split based on that, but then I couldn't apply the map to it.</p>
<pre><code>ds_builder = tfds.builder('mnist')
print(dir(ds_builder))
ds_builder.download_and_prepare()
train_ds = tfds.as_numpy(ds_builder.as_dataset(split='train', batch_size=-1))
train_ds['image'] = train_ds['image'][0:50000, : , :]
train_ds['label'] = train_ds['label'][0:50000]
</code></pre>
<p>I was wondering how to do so.</p>
<p>P.S.: <strong>The ordering of the data is also important for me</strong>, so I was thinking of loading all the data in NumPy, saving the required samples as PNG, and loading them with tfds, but I'm not sure whether that keeps the original order. I want to take the first 50,000 samples of the whole 60,000.</p>
<p>Thanks.</p>
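<p>For reference, TFDS split strings support index slicing directly, which keeps the on-disk order (passing <code>shuffle_files=False</code> makes that explicit). A sketch built on the question's pipeline:</p>

```python
# Take the first 50000 samples via the TFDS subsplit syntax, preserving order.
import tensorflow as tf
import tensorflow_datasets as tfds

@tf.function
def normalize_image(record):
    out = record.copy()
    out['image'] = tf.cast(out['image'], 'float32') / 255.
    return out

train_it = iter(
    tfds.builder('mnist')
    .as_dataset(split='train[:50000]', shuffle_files=False)
    .map(normalize_image).repeat().batch(256 * 10)
)
val_ds = tfds.builder('mnist').as_dataset(split='train[50000:]', shuffle_files=False)
```

<p>An equivalent route is <code>.take(50000)</code> / <code>.skip(50000)</code> on the full dataset, which also preserves order as long as no shuffle happens before the take/skip.</p>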
|
<python><tensorflow><machine-learning><tensorflow-datasets>
|
2023-02-02 22:37:29
| 1
| 472
|
m0ss
|
75,329,627
| 359,730
|
Intersection of a predefined protocol with a generic protocol
|
<p>I'm looking for a workaround to the <a href="https://github.com/python/typing/issues/213" rel="nofollow noreferrer">infamous type <code>Intersection</code> problem</a> that would apply when one of the protocols is a <code>TypeVar</code>.</p>
<p>Pseudocode:</p>
<pre class="lang-py prettyprint-override"><code>ProtocolT = TypeVar("ProtocolT")
class SupportsEnriched(Protocol):
...
def enrich_protocol(target: type[ProtocolT]) -> Type[Intersection[ProtocolT, SupportsEnriched]]:
...
</code></pre>
<p>Basically, <strong>the requirement is</strong>: whatever type you pass in, you get that same type back <em>plus it would support <code>SupportsEnriched</code></em> too – so this is not a <code>Union</code>. <code>SupportsEnriched</code> is fine to be a normal class if that's of any help.</p>
<p>I'm aware of the workaround, which involves inheriting a new combined protocol from multiple protocols: <a href="https://stackoverflow.com/a/62698797/359730">1</a>, <a href="https://stackoverflow.com/a/74320795/359730">2</a>, and <a href="https://stackoverflow.com/a/74582127/359730">3</a>. That would result in the following pseudocode:</p>
<pre class="lang-py prettyprint-override"><code>ProtocolT = TypeVar("ProtocolT")
class SupportsEnriched(Protocol):
...
class SupportsGenericEnriched(ProtocolT, SupportsEnriched, Protocol):
...
</code></pre>
<p>Which is certainly impossible due to the <code>TypeVar</code> being one of the bases.</p>
<p>Is there any known workaround to fulfil the original requirement?</p>
<h4>Motivation</h4>
<p>The function I'm trying to type-hint accepts and inspects a <code>Protocol</code> using <code>inspect.getmembers</code>. The result is a dynamically generated class, which supports both the provided protocol (its methods get generated on the fly) and a predefined behaviour.</p>
<p>This is a shared package function, so there's no way to know which protocols will be passed in there.</p>
|
<python><types><type-hinting><mypy><typing>
|
2023-02-02 22:35:27
| 0
| 2,220
|
eigenein
|
75,329,597
| 6,077,239
|
Polars dataframe join_asof with(keep) null
|
<p><strong>Update:</strong> This issue has been resolved. <code>df.join_asof(df2, on="time", by=["a", "b"])</code> now runs without error and returns the expected result.</p>
<hr />
<p>Currently, from my experimentation, <code>join_asof</code> will raise an error if there are any None (null) values in either of the "by" columns. Is there any way I can still use <code>join_asof</code> while keeping any None (null) values in the left dataframe?</p>
<p>For example, I have the following dataframes:</p>
<pre><code>df = pl.DataFrame(
{"a": [1, 2, 3, 4, 5, None, 8], "b": [2, 3, 4, 5, 6, 7, None], "time": [1, 2, 3, 4, 5, 6, 7]}
)
df2 = pl.DataFrame({"a": [1, 3, 4, None], "b": [2, 4, 5, 8], "c": [2, 3, 4, 5], "time": [0, 2, 4, 6]})
</code></pre>
<p>If I just run the code below, there will be an error:</p>
<pre><code>df.join_asof(df2, on="time", by=["a", "b"])
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ComputeError(Borrowed("cannot take slice"))', /home/runner/work/polars/polars/polars/polars-core/src/frame/asof_join/groups.rs:253:35
</code></pre>
<p>But, the following code works well:</p>
<pre><code>df.drop_nulls(["a", "b"]).join_asof(df2.drop_nulls(["a", "b"]), on="time", by=["a", "b"])
shape: (5, 4)
┌─────┬─────┬──────┬──────┐
│ a ┆ b ┆ time ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪══════╪══════╡
│ 1 ┆ 2 ┆ 1 ┆ 2 │
│ 2 ┆ 3 ┆ 2 ┆ null │
│ 3 ┆ 4 ┆ 3 ┆ 3 │
│ 4 ┆ 5 ┆ 4 ┆ 4 │
│ 5 ┆ 6 ┆ 5 ┆ null │
└─────┴─────┴──────┴──────┘
</code></pre>
<p>My question is: how can I get the following result, basically the result above with the rows appended where <code>a</code> or <code>b</code> is null in the left dataframe (<code>df</code> in this case)?</p>
<pre><code>┌─────┬─────┬──────┬──────┐
│ a ┆ b ┆ time ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪══════╪══════╡
│ 1 ┆ 2 ┆ 1 ┆ 2 │
│ 2 ┆ 3 ┆ 2 ┆ null │
│ 3 ┆ 4 ┆ 3 ┆ 3 │
│ 4 ┆ 5 ┆ 4 ┆ 4 │
│ 5 ┆ 6 ┆ 5 ┆ null │
│ null┆ 7 ┆ 6 ┆ null │
│ 8 ┆ null┆ 7 ┆ null │
└─────┴─────┴──────┴──────┘
</code></pre>
<p>Thanks!</p>
|
<python><python-polars>
|
2023-02-02 22:31:43
| 1
| 1,153
|
lebesgue
|
75,329,557
| 4,774,461
|
How does passing a generator to Depends make the generator act like a context manager?
|
<p>I was going through a tutorial on fast api and I came across something like below</p>
<pre><code>def get_db():
try:
db = SessionLocal()
yield db
finally:
print("from finally block")
db.close()
</code></pre>
<pre><code>@app.get("/")
async def read_all(db: Session = Depends(get_db)):
res = db.query(models.Todos).all()
print("from endpoint")
return res
</code></pre>
<p>result</p>
<pre><code>INFO: 127.0.0.1:39088 - "GET /openapi.json HTTP/1.1" 200 OK
from endpoint
INFO: 127.0.0.1:39088 - "GET / HTTP/1.1" 200 OK
from finally block
</code></pre>
<p>Why does <code>Depends(get_db)</code> seem to act like some kind of context manager?
The <code>"from finally block"</code> print statement does not get executed until the end of the <code>read_all</code> method.</p>
<p>doing something like</p>
<pre><code>
class SomeDependency:
def __enter__(self):
print("entering")
def __exit__(self, exc_type, exc_val, exc_tb):
print("exited")
def hello():
try:
yield SomeDependency()
finally:
print("yolo")
if __name__ == "__main__":
next(hello())
</code></pre>
<p>the <code>finally</code> block gets executed immediately after the call to <code>next</code>.</p>
<p>Why does the <code>finally</code> block of <code>get_db</code> not execute immediately when passed to <code>Depends</code>?</p>
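FastAPI treats generator dependencies specially: it wraps them (roughly via <code>contextlib.contextmanager</code>) and keeps them suspended at the <code>yield</code> for the whole request, only resuming past it when the endpoint finishes. A minimal sketch of that behaviour, without FastAPI:

```python
import contextlib

events = []

def get_db():
    try:
        events.append("opened")
        yield "fake-session"
    finally:
        events.append("from finally block")

# Roughly what FastAPI does with a generator dependency: wrap it as a
# context manager and hold it open for the duration of the request.
with contextlib.contextmanager(get_db)() as db:
    events.append(f"endpoint used {db}")  # the `finally` has NOT run yet here
```

In the plain <code>next(hello())</code> experiment, by contrast, the generator object is garbage-collected right after the call, which throws <code>GeneratorExit</code> into it and runs the <code>finally</code> immediately.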
|
<python><python-3.x><fastapi>
|
2023-02-02 22:27:06
| 2
| 1,578
|
Halcyon Abraham Ramirez
|
75,329,456
| 3,434,906
|
Binary Search in Python: correct slicing
|
<p>Please help me understand a simple question about how the binary search algorithm works.
So, let's take the input array <code>[4, 5, 6, 7, 8, 11, 20]</code>, where I'm searching for the index of the value 6 (see the code).
As I understand it, it should work this way:</p>
<ol>
<li>First of all we take the pivot(middle) point of (end-start)/2=7 as of index=3</li>
<li>After the first iteration we got array's slice of <code>[4, 5, 6]</code> where we searching the mid point again. With result of index=1 and value = 5.</li>
<li>After second iteration we get the only array of 6, which meets the basic condition and we getting correct result.</li>
</ol>
<p>To prove my assumption, I added output of the cut-down array I should get at the second and third steps.
But surprisingly, it was <code>[4, 5]</code> at the second and <code>[]</code> at the third step, as opposed to the <code>[4,5,6]</code> and <code>[6]</code> I expected.</p>
<p>Relating to slicing documentation <code>a[start:stop] # items start through stop-1</code>, so the last one isn't included.</p>
<p>But how come I'm getting the correct result, as if I were working with <code>[4,5,6]</code> and <code>[6]</code>, despite the seemingly incorrect output?</p>
<p><strong>The code is:</strong></p>
<pre><code>def binarySearch(arr, start, end, x):
print('the input array is: ' + str(arr[start:end]))
if end>=start:
mid_idx=(start+end)//2
print('mid index is: ' + str(mid_idx))
if arr[mid_idx]==x:
return mid_idx
elif arr[mid_idx]>x:
return binarySearch(arr,start,mid_idx-1, x)
else:
return binarySearch(arr,mid_idx+1,end, x)
else:
return None
arr=[4, 5, 6, 7, 8, 11, 20]
res=binarySearch(arr, 0, len(arr),6)
print(res)
</code></pre>
<p><strong>The output is:</strong></p>
<pre><code>the input array is: [4, 5, 6, 7, 8, 11, 20]
mid index is: 3
the input array is: [4, 5]
mid index is: 1
the input array is: []
mid index is: 2
2
</code></pre>
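The mismatch comes from mixing interval conventions: the initial call passes an exclusive <code>end</code> (<code>len(arr)</code>), the recursion then treats <code>end</code> as inclusive (<code>mid_idx-1</code>), and the debug print <code>arr[start:end]</code> excludes index <code>end</code> — so the printed slice is not the range the algorithm actually searches. A sketch using a consistent half-open interval <code>[start, end)</code>, where the slice and the search range coincide:

```python
def binary_search(arr, start, end, x):
    # half-open interval [start, end): matches Python slicing arr[start:end]
    if start >= end:
        return None
    mid = (start + end) // 2
    if arr[mid] == x:
        return mid
    if arr[mid] > x:
        return binary_search(arr, start, mid, x)   # keep `end` exclusive
    return binary_search(arr, mid + 1, end, x)
```

With this convention, searching for 6 recurses on <code>[0, 3)</code> and then <code>[2, 3)</code>, exactly the slices <code>[4, 5, 6]</code> and <code>[6]</code> you expected to see.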
|
<python><arrays><algorithm><sorting>
|
2023-02-02 22:08:09
| 1
| 401
|
Dec0de
|
75,329,446
| 6,905,666
|
How to decrypt an encrypted image knowing the key but not much more on the algorithm?
|
<p>I am downloading an image from my camera (Ezviz, model CS-BC1C) and this image is encrypted. I set the encryption key (a password) on the camera app (Android), so I know what the key is (I suppose, if the key and the password are the same thing). It looks like the image is encrypted with AES-128. That's pretty much all I know... and I would like to decrypt it using Python (or something else if needed).</p>
<p>Is it possible?</p>
<p>I think of making some loops to test different configurations that may have been used during the encryption. Something like the following (very simple, not working, example):</p>
<pre><code>from Crypto.Cipher import AES
from IPython.display import Image
key = b'my_key'
with open('encrypted_image_path',"rb") as f :
data = f.read()
for mode in [AES.MODE_CFB,AES.MODE_EAX] :
tmp = AES.new(key, mode=mode)
clear_data = (tmp.decrypt(data))
display(Image(data=clear_data))
</code></pre>
<p>As suggested by Ry, I checked whether the encrypted files start with a common series:</p>
<pre><code>for i in cims : #cims is a list of filenames of encrypted images
with open(rootpath+i,"rb") as f :
data = f.read()
if i == cims[0] :
ref = data
else :
cmp = 0
for d,r in zip(data,ref) :
if d!=r :
break
cmp += 1
</code></pre>
<p>The result is that the first 704 bytes are common. Here they are:</p>
<pre><code>b'hikencodepictureda32f310f87cc50aeebd59bc51bbce39\xdc\xc1\xf6\x04\xd4\xc2\xe9j\x19[\x96\xee~rc`\xaf5\xc1\xa2"E<\xc2\x95I\x11\xd1\xd0c\xcd\xadq\xe9\x1e`,\xc8a\x13\xcb\xd8\xc9:,\x87\xc6a\x94H\xe7\x1d\x94G\xd5q\xfa(k\x01\xee\xd8\x17M>\xd5\xe1\x17\x9a\x1d\xb0\xa6\xb2ops\xe9\xe8#\xd0\xdc\x1b\x19\x86YBc\xe0[P\xa1\xdf\xear^\t\xc7\x99D\xc6;!\xe5\x9cB00h\x15\xc2\x16\xf5\x04\xac@C\xbb\x99\x97b\x9dbI\x1df3<&}9\x88\nH\xd1i\x04\x14>\x1c\x94\xd5\xd4\xa5\xe5\'\xe4N\xb4\x83\xb4~A\xb0\x8e;\xee}\xd53\xaf\xea>\x9a\xbeL\x92\x0e\x8bbQ\x13\xac\xc7\xc8(\x92v\xce\xb7\xdd\xa9v\xdfy\x13\xf3\xbdP3\xb3%\x99lO4\xcd\x8c\xd6W(\xdb\xca\x1d\xa9\xaf\x1b\xb8s0\xfb\x06\x1aX\xbc\xcb6\xad\x17nw\x00`H6j\xf4\xe0\x88\xdcM\x1a\x18\xf2\x97\xf8=t\xbf\xeb\xd7\x9b\x01<h\x855&\xe8\xe6\xfd\x1c3\xfd\xa1D\r\xca\xb9.~{\x10\xa3 \x15i\xbb\x06\xebo\xb0\xd4%\x9c|\x8e\x15vQ\xc4{\x8e\x1c\xcf\xe5\x19\xfa/\xa4\xf9\x84N\xc4\xdf\xca\xe6\'#!\x8b\x84\x85\xb5\xd5\xd9\x90\xda\x08\x8f{\x14J\xd0\xd6\x14\x04\x96\xfbQ\x96\xb3B2\xe2_\xdc\xb7}\x07\xf6\xd0+5@\xd2e\xbc\xdb\x15a5\xf4 \x17\x1cRI\xbc\xa5\x0f\xe2\x07ID\x08`\x1c\xda\xf2\xf5_;@l!\xbd\xaa\x8d\xb8\xf4m\xd5"E\'\xe0G&\xa0\x15\xa4\xf9X\xe0d\xf0\x1b\x80\xf7C\xb7z\x85~O4\xa0\xb2\x1c\x94\xd18 >\x08\xf9i\x01\xa7\xac}\xff\xf6\x7fHE\xbeJ\x81.\xf1\xdb:\x8f\xe8CN7\xb3\xb7\xe0Ke\xea\x83\x9f;\xd5PRZ\xcd\xc5nP\xd4\xfc\x19\xa0v\x14L_\xa0\x87\xe1\x14\x99\xbcC\xbc\xf4qb;\x02I\x0e\xfe|\x99|\xb7\xbb\x87\xa3\xeaD\xe9\xe3o\xfa\'\xc2\xaf\xf8M\x91\xae\xb2\xc2(Q\xe0NaN\xc9F$q\x98\x83B\xd9\xe2\xf6\x00\xb6\x82b\x88\x90\x84D^\'\x0etT\x15g~\xfd6/+&v%$\xde\x07{\x11\xbc\xac\xa7\xa0\xe6\xf8s\xee3\txj\x7fw%o\x84\xb6\x89\xb9\xb5\xbb\xdc\xce\xe8\xa4T@\xfcoC\xc7\xa9\xe8\xce-\xd0\x8b#wt\x05\x82DF\xadFu\xe2\xc8L\x13\xd2\x8e\xda\xb1\x12MV\x16t\xc4\xf1\xaa\xd4\x95\n\x08\xeb `\x01\x97\x88,\x9e\x0f1\x07J\xd3\x92\x1bWF\xb6.V\x07\xd1\xc4o\xa8\xcc\xceM\xbc\xc9\x0b8g\xbe\x1e\xec\xb5\x13\x9c\xe8h\xd0\xe8\xa6\x88\x9c\x91[\xd7~\xff\xd4%'
</code></pre>
<p>If I change the password in the app, only the first 16 bytes (<code>b'hikencodepicture'</code>) remain common with previously encrypted images.</p>
<p>As complementary information, the password that I can set up in the app must have between 8 and 16 characters (numbers, special characters...). A max length of 16 characters corresponds to the 16-byte key required by AES-128, doesn't it?</p>
<p>Knowing that my knowledge on encryption is next to zero, if what I am looking for is not impossible, could someone help me with defining which combinations I should test to try to decrypt my image?</p>
<hr />
<p>I also tested the following, with no success:</p>
<pre class="lang-py prettyprint-override"><code>key = getpass("Password:")
key = bytes(key,'utf-8')
iv = bytes.fromhex('da32f310f87cc50aeebd59bc51bbce39')
for d in [data,data[48:]] :
for mode in modes_list :
for i in [True, False] :
try :
if i :
tmp = AES.new(key, mode=mode, iv=iv)
print('iv')
else :
tmp = AES.new(key, mode=mode)
except (TypeError, ValueError) :
continue
clear_data = (tmp.decrypt(d))
display(Image(data=clear_data))
</code></pre>
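One hedged diagnostic (a pure assumption, not a documented fact about this camera: the 32 hex characters following the <code>hikencodepicture</code> magic might be a digest of the password, stored for key verification) is to compare common digests of the password against that hex blob:

```python
import hashlib

# The 32 hex characters that follow the 16-byte magic in the dump above
MAGIC_SUFFIX = "da32f310f87cc50aeebd59bc51bbce39"

def digest_candidates(password: bytes) -> dict:
    # Hypothetical check: if any of these matches the 32-hex suffix,
    # that digest is likely how the key/verifier is derived.
    return {
        "md5": hashlib.md5(password).hexdigest(),
        "sha1_trunc": hashlib.sha1(password).hexdigest()[:32],
        "sha256_trunc": hashlib.sha256(password).hexdigest()[:32],
    }

matches = {name: h for name, h in digest_candidates(b"your-password").items()
           if h == MAGIC_SUFFIX}
```

If one of them matches, that narrows down both the key-derivation step and (by elimination) the cipher-mode candidates worth brute-forcing.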
|
<python><aes><pycryptodome>
|
2023-02-02 22:07:10
| 0
| 367
|
Songio
|
75,329,391
| 7,197,249
|
Not able to configure cluster settings instance type using mlflow api 2.0 to enable model serving
|
<p>I'm able to enable model serving by using the mlflow api 2.0 with the following code...</p>
<pre><code> instance = f'https://{workspace}.cloud.databricks.com'
headers = {'Authorization': f'Bearer {api_workflow_access_token}'}
# Enable Model Serving
import requests
url = f'{instance}/api/2.0/mlflow/endpoints/enable'
requests.post(url, headers=headers, json={"registered_model_name": f'{model_name}'})
</code></pre>
<p>However this automatically sets the cluster setting instance type to be m5a.xlarge, which I DO NOT want it to be. I can manually go into the settings on the UI (image below) and change it to be m4.large but I want to be able to do this within the api code above so that I don't have to manually go into the settings and change it. This would allow me to enable and disable serving models without ever needing to interact with the UI.</p>
<p><a href="https://i.sstatic.net/M3dmu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M3dmu.png" alt="enter image description here" /></a></p>
|
<python><databricks><aws-databricks>
|
2023-02-02 22:00:29
| 0
| 2,947
|
spies006
|
75,329,306
| 4,856,526
|
How can I query the bittensor network using btcli?
|
<pre><code>btcli query
Enter wallet name (default): my-wallet-name
Enter hotkey name (default): my-hotkey
Enter uids to query (All): 18
</code></pre>
<p>Note that <code>my-wallet-name</code> and <code>my-hotkey</code> were actually correct names: my wallet with one of my hotkeys. And I decided to query UID 18.</p>
<p>But <strong>btcli</strong> is returning an error with no specific message</p>
<pre><code>AttributeError: 'Dendrite' object has no attribute 'forward_text'
Exception ignored in: <function Dendrite.__del__ at 0x7f5655e3adc0>
Traceback (most recent call last):
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_dendrite/dendrite_impl.py", line 107, in __del__
bittensor.logging.success('Dendrite Deleted', suffix = '')
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_logging/__init__.py", line 341, in success
cls()
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_logging/__init__.py", line 73, in __new__
config = logging.config()
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_logging/__init__.py", line 127, in config
parser = argparse.ArgumentParser()
File "/usr/lib/python3.8/argparse.py", line 1672, in __init__
prog = _os.path.basename(_sys.argv[0])
TypeError: 'NoneType' object is not subscriptable
</code></pre>
<p>What does this mean?
How can I query a UID correctly?</p>
<p>I have tried to look for UIDs to query, but the tool does not give me any.
I was expecting a semantic error, or a way to look up a UID I can query, but not a TypeError.</p>
|
<python>
|
2023-02-02 21:50:03
| 1
| 421
|
eduardogr
|
75,329,235
| 12,574,341
|
Python convert time string with Z at end to datetime object
|
<p>An API is providing a time stamp in the following format</p>
<pre class="lang-py prettyprint-override"><code>s = "2023-02-02T21:05:07.2207121Z"
</code></pre>
<p>I'm attempting to convert it to a datetime object</p>
<pre class="lang-py prettyprint-override"><code>dt = datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ")
</code></pre>
<p>It's causing the following error</p>
<pre class="lang-py prettyprint-override"><code>line 349, in _strptime
raise ValueError("time data %r does not match format %r" %
ValueError: time data '2023-02-02T21:05:07.2207121Z' does not match format '%Y-%m-%dT%H:%M:%S.%fZ'
</code></pre>
<p>This question was marked as a duplicated linked to a post suggesting <code>fromisoformat(s)</code>, but that fails as well</p>
<pre class="lang-py prettyprint-override"><code>dt = datetime.fromisoformat(s)
</code></pre>
<p>error</p>
<pre><code> dt = datetime.fromisoformat(s)
ValueError: Invalid isoformat string: '2023-02-02T21:05:07.2207121Z'
</code></pre>
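The parse fails for two reasons: <code>%f</code> accepts at most six fractional digits (this string has seven), and neither <code>strptime</code> nor <code>fromisoformat</code> (before Python 3.11) understands the trailing <code>Z</code>. A sketch that normalizes the string first:

```python
import re
from datetime import datetime, timezone

s = "2023-02-02T21:05:07.2207121Z"

# Truncate the fractional part to 6 digits and drop the trailing 'Z',
# then attach the UTC offset explicitly.
m = re.match(r"(.*\.\d{1,6})\d*Z$", s)
dt = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)
```

On Python 3.11+, <code>datetime.fromisoformat(s)</code> alone handles both the <code>Z</code> suffix and over-long fractional seconds (digits beyond six are truncated), so the normalization is only needed on older versions.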
|
<python><datetime>
|
2023-02-02 21:41:03
| 0
| 1,459
|
Michael Moreno
|
75,329,174
| 357,313
|
Where does this bottom margin come from?
|
<p>I'm cramming lots of small line charts onto one single figure. Sometimes I am left with a relatively large bottom margin, depending on my data. This is not specific to subplots but can also happen for only one axes. An example:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.Series([1, 2, 2, 4, 5], index=pd.date_range('2023', periods=5))
df = df.drop_duplicates() # Without gaps as is well
fig = plt.figure()
plt.subplots_adjust(0, 0, 1, 1) # No margins
# ... Lots of stuff/subplots might happen here...
df.plot(xticks=[]) # Depending on df, leaves a bottom margin
plt.show()
</code></pre>
<p>This leaves a large margin at the bottom:</p>
<p><a href="https://i.sstatic.net/FyIBT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyIBT.png" alt="result with margin" /></a></p>
<p>Why is this? Is there a workaround?</p>
|
<python><pandas><matplotlib><plot>
|
2023-02-02 21:32:50
| 1
| 8,135
|
Michel de Ruiter
|
75,329,012
| 6,759,459
|
Why does a ValidationError 422 occur when sending a POST request to a FastAPI app through Postman?
|
<p>I cannot seem to send a POST request to a FastAPI app through Postman.</p>
<ul>
<li>FastAPI version 0.89.1</li>
<li>Python version 3.10.9</li>
</ul>
<pre><code>from fastapi import FastAPI
from fastapi.params import Body
from pydantic import BaseModel
app = FastAPI()
class Post(BaseModel):
title : str
content : str
@app.get("/")
async def root():
return {"message": "Hello."}
@app.post("/createposts/")
def create_posts(new_post:Post):
print(new_post.title)
return {"new_post":f"title:"[new_post.title]}
</code></pre>
<p>I got the following error</p>
<pre><code>INFO: Finished server process [44982]
INFO: Started server process [45121]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:64722 - "POST /createposts/ HTTP/1.1" 422 Unprocessable Entity
</code></pre>
<p>I'm following a tutorial and I cannot seem to find answers from other users.</p>
<p>I tried using the <code>dict: Body(...)</code> input argument instead.</p>
<p>I am also using Postman and this is the error:</p>
<pre><code>{
"detail": [
{
"loc": [
"body"
],
"msg": "value is not a valid dict",
"type": "type_error.dict"
}
]
}
</code></pre>
<p>Here's a screenshot of my request on Postman.</p>
<p><a href="https://i.sstatic.net/7APFG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7APFG.png" alt="screenshot of Postman" /></a></p>
<p>I made a POST request to the URL with the POST endpoint:</p>
<pre class="lang-json prettyprint-override"><code>{
"title":"a",
"content":"b"
}
</code></pre>
|
<python><postman><fastapi><http-status-code-422>
|
2023-02-02 21:15:54
| 1
| 926
|
Ari
|
75,329,005
| 13,668,802
|
Python: `and` operator does not return a boolean value
|
<p>In Python, an empty list is considered a falsy value.</p>
<p>Therefore this is how things should work:</p>
<pre><code>>>> [] and False
False
</code></pre>
<p>But in reality, python returns an empty list.</p>
<pre><code>>>> [] and False
[]
</code></pre>
<p>Is this intended or a bug?</p>
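This is intended: <code>and</code> short-circuits and returns one of its operands unchanged rather than coercing the result to <code>bool</code>. Since <code>[]</code> is falsy, <code>[] and False</code> returns the left operand <code>[]</code> without even evaluating the right side. A few illustrative cases:

```python
# x and y  ->  x if x is falsy, else y  (no bool() coercion happens)
assert ([] and False) == []
assert (0 and "never reached") == 0
assert (1 and "second operand") == "second operand"

# `or` mirrors this: x or y -> x if x is truthy, else y
assert ([] or "fallback") == "fallback"

# wrap in bool() when an actual boolean is required
assert bool([] and False) is False
```

This operand-returning behaviour is what makes idioms like <code>value = maybe_empty or default</code> work.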
|
<python><python-3.x><list><boolean>
|
2023-02-02 21:15:05
| 3
| 970
|
Rage
|
75,328,861
| 10,620,003
|
Multiply two df in GPU (cudf)
|
<p>I have two dataframe in GPU. I want to multiply each element of each df.
Here is a simple version of my dataframes:</p>
<pre><code>import cudf
a = cudf.DataFrame()
a['c1'] = [1, 2]
b = cudf.DataFrame()
b['c1'] = [2, 5]
</code></pre>
<p>I want to see this output:</p>
<pre><code> c1
0 2
1 10
</code></pre>
<p>I am using <code>a.multiply(b)</code>, however, I get error;<code>AttributeError: DataFrame object has no attribute multiply</code></p>
<p>Can you please help me with that? Thanks.</p>
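Recent cudf releases mirror the pandas API, where element-wise multiplication is spelled <code>a.mul(b)</code>, <code>a.multiply(b)</code>, or simply <code>a * b</code>; the <code>AttributeError</code> suggests an older cudf build. A sketch of the equivalent, shown with pandas since cudf follows its API and running cudf needs a GPU:

```python
import pandas as pd  # cudf.DataFrame mirrors this API

a = pd.DataFrame({"c1": [1, 2]})
b = pd.DataFrame({"c1": [2, 5]})

# all three forms are equivalent; on an old cudf without DataFrame.multiply,
# the column-level form a["c1"] * b["c1"] is a common workaround
result = a * b
```

Upgrading cudf (or falling back to the <code>*</code> operator / column-level multiplication) should resolve the error.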
|
<python><dataframe><cudf>
|
2023-02-02 20:56:32
| 1
| 730
|
Sadcow
|
75,328,769
| 10,687,615
|
Extract multiple date/time values from text field into new variable columns
|
<p>I have a dataframe - see below. This is just a snippet of the full dataframe; there are more text and date/times in each respective row/ID. As you can see, the text before and after each date/time is random.</p>
<pre><code>ID RESULT
1 Patients Discharged Home : 12/07/2022 11:19 Bob Melciv Appt 12/07/2022 12:19 Medicaid...
2 Stawword Geraldio - 12/17/2022 11:00 Bob Melciv Appt 12/10/2022 12:09 Risk Factors...
</code></pre>
<p>I would like to pull all date/times where the format is <code>MM/DD/YYYY HH:MM</code> from the RESULT column and make each of those respective date/times into their own column.</p>
<pre><code>ID DATE_TIME_1 DATE_TIME_2 DATE_TIME_3 .....
1 12/07/2022 11:19 12/07/2022 12:19
2 12/17/2022 11:00 12/10/2022 12:09
</code></pre>
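A sketch using <code>str.extractall</code> with an <code>MM/DD/YYYY HH:MM</code> pattern, then unstacking each match into its own column (the <code>DATE_TIME_*</code> column names are my own choice):

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2],
    "RESULT": [
        "Patients Discharged Home : 12/07/2022 11:19 Bob Melciv Appt 12/07/2022 12:19 Medicaid",
        "Stawword Geraldio - 12/17/2022 11:00 Bob Melciv Appt 12/10/2022 12:09 Risk Factors",
    ],
})

# one capture group per datetime occurrence; extractall yields one row per
# match with a (row, match-number) MultiIndex
matches = df["RESULT"].str.extractall(r"(\d{2}/\d{2}/\d{4} \d{2}:\d{2})")[0]
wide = matches.unstack("match")
wide.columns = [f"DATE_TIME_{i + 1}" for i in wide.columns]

out = df[["ID"]].join(wide)
```

Rows with fewer matches than the maximum simply get NaN in the trailing columns, since <code>unstack</code> pads missing match numbers.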
|
<python><pandas><extract>
|
2023-02-02 20:47:39
| 2
| 859
|
Raven
|
75,328,614
| 6,197,439
|
Format int as hex string in help string of Python argparse?
|
<p>I have seen <a href="https://stackoverflow.com/questions/5661725/format-ints-into-string-of-hex">Format ints into string of hex</a> - but I simply cannot figure out how to apply that here:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
def parse_args(args):
parser = argparse.ArgumentParser(description="Hello hex")
# use lambda to allow for hex parsing https://stackoverflow.com/q/25513043
parser.add_argument('--my_hex', type=lambda x: int(x,0), default=12342, help="set a hex number (default: %(default)s)")
return parser.parse_args(args)
def main(inargs):
args = parse_args(inargs)
</code></pre>
<p>If I call this:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 ../my_hex_script.py --help
usage: my_hex_script.py [-h] [--my_hex MY_HEX]
Hello hex
options:
-h, --help show this help message and exit
--my_hex MY_HEX set a hex number (default: 12342)
</code></pre>
<p>... clearly the printout of default value of <code>my_hex</code> is in decimal.</p>
<p>How do I get it printed in <code>0x{:04X}</code> string format?</p>
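<code>%(default)s</code> always stringifies the stored value with <code>str()</code>, so the simplest route is to format the default into the help text yourself. A sketch:

```python
import argparse

def parse_args(args):
    parser = argparse.ArgumentParser(description="Hello hex")
    default = 12342
    parser.add_argument(
        "--my_hex",
        type=lambda x: int(x, 0),
        default=default,
        # pre-format the default; %(default)s would render it in decimal
        help=f"set a hex number (default: 0x{default:04X})",
    )
    return parser.parse_args(args)
```

An alternative is a custom <code>HelpFormatter</code> overriding <code>_get_help_string</code>, but that relies on argparse internals; pre-formatting is usually the least code.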
|
<python><number-formatting>
|
2023-02-02 20:30:13
| 1
| 5,938
|
sdbbs
|
75,328,606
| 5,568,409
|
How to show different colors on a plot for values from different columns
|
<p>Please consider the small dataframe test:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame(
[
[1, 1.0, 0.0, 0.0],
[1, 0.75, 0.25, 0.0],
[1, 0.576, 0.396, 0.028]
],
columns = ["State", "1", "2", "3"]
)
</code></pre>
<p>I am now plotting the 3 last columns by:</p>
<pre><code>fig = plt.figure()
ax = plt.subplot()
ax.plot(df[["1","2","3"]], label = ["1 (from 1)","2 (from 1)","3 (from 1)"],
color = "red", marker = ".", linestyle="-")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5),
fancybox=True, shadow=True)
plt.show()
</code></pre>
<p>What would be the easiest way to show a different color for each column of data, such as "red" for column 1, "blue" for column 2 and green for column 3 ?</p>
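Plotting each column with its own <code>ax.plot</code> call is the simplest way to give each its own color and label, since a single call on multiple columns shares one color. A sketch (using a non-interactive backend so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame(
    [[1, 1.0, 0.0, 0.0],
     [1, 0.75, 0.25, 0.0],
     [1, 0.576, 0.396, 0.028]],
    columns=["State", "1", "2", "3"],
)

fig, ax = plt.subplots()
# one plot call per column: each gets its own color and legend entry
for col, color in zip(["1", "2", "3"], ["red", "blue", "green"]):
    ax.plot(df.index, df[col], color=color, marker=".", linestyle="-",
            label=f"{col} (from 1)")
ax.legend(loc="center left", bbox_to_anchor=(1, 0.5), fancybox=True, shadow=True)
```

Omitting <code>color=</code> entirely also works: matplotlib then cycles through its default color sequence automatically.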
|
<python><matplotlib><colors>
|
2023-02-02 20:29:23
| 2
| 1,216
|
Andrew
|
75,328,537
| 6,560,267
|
How to disable jupyter/ipython saving of intermediate (anonymous variables)?
|
<p>When you use Jupyter, you get these "numbered inputs & outputs" you can reference like this: <code>_3</code>. I'm the kind of guy who uses Jupyter like a nicer REPL with persistent code blocks & comments.
As time goes on, in long sessions, these kinds of outputs start eating up memory, and then I have to restart the notebook and start all over.
I have NEVER in my life needed to reference these numbered variables (ok, maybe twice. Nothing I could not live without); so my question is: is there a way to disable them? Just to be clear: I still want to see my HTML-ified DataFrame, but I don't want the real df to be saved into a variable named e.g. <code>_143</code>.</p>
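IPython's documented <code>cache_size</code> option controls exactly this: setting it to 0 disables the <code>_N</code> output cache while display output still renders normally. A minimal config sketch (the path shown is the standard default-profile location):

```python
# ~/.ipython/profile_default/ipython_config.py
c.InteractiveShell.cache_size = 0   # 0 disables the _1, _2, ... output cache
```

During an already-running session, the <code>%reset -f out</code> magic clears the outputs stored so far without restarting the kernel.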
|
<python><jupyter-notebook>
|
2023-02-02 20:18:53
| 0
| 913
|
Adrian
|
75,328,427
| 2,117,355
|
Implement Python Flash Controller Methods Generated from OpenAPI Generator
|
<p>I'm using <a href="https://openapi-generator.tech/docs/generators/python-flask" rel="nofollow noreferrer">OpenAPI Generator</a> to generate a Python Flask web app from an OpenAPI specification. A generated controller method looks like this:</p>
<pre><code>def doStuff(body): # noqa: E501
return 'do some magic!'
</code></pre>
<p>What is the best practice for filling in the implementation of these generated controller classes? Do I copy what was generated and then modify that? I obviously can't modify files in the generated directory, because the generator will regenerate and overwrite my changes.</p>
<p>What happens if I add an endpoint in the OpenAPI spec that causes an additional method to be generated? Do I have to manually copy this new method into my implementation code?</p>
|
<python><flask><openapi><openapi-generator>
|
2023-02-02 20:07:22
| 0
| 5,722
|
Mark
|
75,328,289
| 12,470,058
|
Return an element with the maximum number of occurrences in a given matrix
|
<p>I have written a function that takes a matrix and finds an element with the greatest number of occurrences in the matrix. If more than one such element exists, my function returns the list of all of them.</p>
<p>For example, if the input is:</p>
<pre><code>matrix = [[0, 5, 1, 1, 0],
[0, 2, 2, 2, 0],
[1, 2, 4, 3, 1]]
</code></pre>
<p>The result will be <code>[0, 1, 2]</code></p>
<p>My code is as follows:</p>
<pre><code>def max_repeat_elem(matrix):
from itertools import chain
if not matrix:
return None
flatten_m = list(chain.from_iterable(matrix))
diction = {}
for i in flatten_m:
occur = flatten_m.count(i)
if occur not in diction:
diction[occur] = [i]
else:
if i not in diction.get(occur):
diction[occur].append(i)
return diction.get(max(diction.keys()))
</code></pre>
<p>I think my code can be less costly than it is now. E.g. with a repeated element, the count function is called several times. How do I make the whole function less costly (even by using other data structures)?</p>
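A single pass with <code>collections.Counter</code> avoids the repeated <code>list.count</code> calls (which make the original quadratic: each <code>count</code> rescans the whole flattened list). A sketch of the same function:

```python
from collections import Counter
from itertools import chain

def max_repeat_elem(matrix):
    if not matrix:
        return None
    counts = Counter(chain.from_iterable(matrix))  # one O(n) counting pass
    top = max(counts.values())
    return [value for value, count in counts.items() if count == top]
```

The dictionary keyed by occurrence count also becomes unnecessary: only the maximum count matters, so one <code>max</code> plus a comprehension suffices.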
|
<python><python-3.x>
|
2023-02-02 19:52:19
| 1
| 368
|
Bsh
|
75,328,277
| 1,848,244
|
Pandas: Concise way of applying different functions across a multiindex column
|
<p>I have a multi-index dataframe. I want to create a new column whose value is a function of other columns. The problem is that the function is different for a small number of levels.</p>
<p>In order to do this, I am having to manually define the calculation for every leaf level in the hierarchical dataset. This is undesirable because most of the levels use the same calulation.</p>
<p>Here is an example of what I am doing, and how I currently have done it. NB: The data and functions are contrived for simplicity - the actual use case is far more unwieldy.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from io import StringIO
testdata = """
level1,level2,value1,value2
root1,child1,10,20
root1,child2,30,40
root1,child3,50,60
root1,child4,70,80
root1,child5,90,100
"""
df = pd.read_csv(StringIO(testdata), index_col=[0,1], header=[0])
print('Starting Point:')
print(df)
df = df.unstack('level2')
print('Unstacked Version allowing me to define a different function for each level.')
print(df)
# This is the bit I'd like to make simpler. Imagine there was 20 of these child levels and only
# the last 2 were special cases.
df[('derived', 'child1')] = df[('value1', 'child1')] + df[('value2', 'child1')]
df[('derived', 'child2')] = df[('value1', 'child2')] + df[('value2', 'child2')]
df[('derived', 'child3')] = df[('value1', 'child3')] + df[('value2', 'child3')]
df[('derived', 'child4')] = 0.0
df[('derived', 'child5')] = df[('value1', 'child5')] * df[('value2', 'child5')]
print('Desired outcome:')
df = df.stack()
print(df)
</code></pre>
<p>Output:</p>
<pre><code>Starting Point:
value1 value2
level1 level2
root1 child1 10 20
child2 30 40
child3 50 60
child4 70 80
child5 90 100
Unstacked Version allowing me to define a different function for each level.
value1 value2
level2 child1 child2 child3 child4 child5 child1 child2 child3 child4 child5
level1
root1 10 30 50 70 90 20 40 60 80 100
Desired outcome:
value1 value2 derived
level1 level2
root1 child1 10 20 30.0
child2 30 40 70.0
child3 50 60 110.0
child4 70 80 0.0
child5 90 100 9000.0
</code></pre>
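One way to avoid the per-child boilerplate (a sketch of an alternative to the unstack approach above) is to stay in long form: apply the default rule to every row, then override only the special children via <code>.loc</code> with an <code>IndexSlice</code>:

```python
import pandas as pd
from io import StringIO

testdata = """
level1,level2,value1,value2
root1,child1,10,20
root1,child2,30,40
root1,child3,50,60
root1,child4,70,80
root1,child5,90,100
"""
df = pd.read_csv(StringIO(testdata), index_col=[0, 1], header=0)

# default rule for every child...
df["derived"] = (df["value1"] + df["value2"]).astype(float)

# ...then override only the handful of special cases
df.loc[pd.IndexSlice[:, "child4"], "derived"] = 0.0
sl = pd.IndexSlice[:, "child5"]
df.loc[sl, "derived"] = df.loc[sl, "value1"] * df.loc[sl, "value2"]
```

With 20 children and 2 exceptions, this stays at three statements regardless of how many children share the default rule.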
|
<python><pandas>
|
2023-02-02 19:50:25
| 3
| 437
|
user1848244
|
75,328,211
| 15,171,387
|
How to get the unique values multiple columns for a unique value of another column in Pandas?
|
<p>I have a datframe like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'val':['a', 'a', 'b', 'a', 'c'], 'g_1':[0, 0, 1,0,2], 'g_2':[0, 0, 0,0,1]})
</code></pre>
<p>Now, to get the unique values of column <code>g_1</code> for all unique values of column <code>val</code>, I do something like this:</p>
<pre><code>print(df['g_1'].groupby(df['val']).unique().apply(pd.Series))
0
val
a 0
b 1
c 2
</code></pre>
<p>However, I would like to add column <code>g_2</code> as well, but this attempt raises an error:</p>
<pre><code>print(df[['g_1', 'g_2']].groupby(df['val']).unique().apply(pd.Series))
</code></pre>
<p>I am looking to get something like this:</p>
<pre><code> g_1 g_2
val
a 0 0
b 1 0
c 2 1
</code></pre>
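<code>unique</code> only exists on a single-column (Series) groupby, which is why the two-column version fails. One sketch that handles both columns at once is to melt to long form, deduplicate, and pivot the unique values back out:

```python
import pandas as pd

df = pd.DataFrame({"val": ["a", "a", "b", "a", "c"],
                   "g_1": [0, 0, 1, 0, 2],
                   "g_2": [0, 0, 0, 0, 1]})

out = (df.melt(id_vars="val", value_vars=["g_1", "g_2"])
         .drop_duplicates()
         .groupby(["val", "variable"])["value"]
         .apply(list)           # keeps every unique value per group
         .unstack("variable"))
```

Each cell holds a list of the unique values, so groups with more than one unique value are preserved too; when every group has exactly one, the lists are length-1 as in the desired output.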
|
<python><pandas><group-by>
|
2023-02-02 19:43:14
| 3
| 651
|
armin
|
75,328,144
| 3,063,547
|
Error trying to import netCDF4 in python script using chaquopy with android studio
|
<p>I am trying to integrate python code into an Android app using Chaquopy with Android Studio.</p>
<p>The Android app is dying on import netCDF4 in the python module. I am running Android Studio on MacOS and made sure netcdf4 was installed via:</p>
<pre><code> %pip3 uninstall netcdf4
%pip3 install netcdf4
</code></pre>
<p>It installs fine.</p>
<p>But Android Studio app dies with:</p>
<pre><code> Caused by: com.chaquo.python.PyException: ModuleNotFoundError: No module named 'netCDF4'
</code></pre>
<p>The build.gradle has these components:</p>
<pre><code> defaultConfig {
applicationId "com.example.pythoncalledfromandroidstudio"
minSdk 21
targetSdk 32
versionCode 1
versionName "1.0"
python{
version "3.10"
pip {
// A requirement specifier, with or without a version number:
install "numpy"
install "netCDF4"
}
}
</code></pre>
<p>But cannot get past the error. Any suggestions greatly appreciated.</p>
|
<python><android><netcdf><chaquopy>
|
2023-02-02 19:35:06
| 0
| 853
|
user3063547
|
75,328,088
| 5,346,843
|
Confused about plotting interpolated 2D data with matplotlib
|
<p>I have some unstructured 2D data that I would like to interpolate on a unit offset grid (ie grid indices start at 1 not 0) using <code>scipy</code> and plot using <code>matplotlib</code>. The code is below</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate
# X Y Z
data = [[ 9, 2, 2.0],
[ 3, 3, 5.0],
[ 6, 4, 1.0],
[ 2, 6, 3.0],
[10, 7, 4.5],
[ 5, 8, 2.0]]
data = np.array(data)
coords = data[:, 0:2]
zvals = data[:, 2]
# Create the grid on which to interpolate (unit offset)
nx = 10
ny = 10
x = np.arange(nx)
x += 1
y = np.arange(ny)
y += 1
grid_x, grid_y = np.meshgrid(x, y, indexing='xy')
# Interpolate
grid_z1 = scipy.interpolate.griddata(coords, zvals, (grid_x, grid_y), method='linear')
# Plot the results
fig, axs = plt.subplots()
plt.imshow(grid_z1)
plt.plot(coords[:,0], coords[:,1], 'k.', ms=10)
plt.show()
</code></pre>
<p>The point data seem to be in the right place but <code>matplotlib</code> seems to be plotting the gridded data as zero-offset not unit-offset. I am obviously missing something - just not sure what. Thanks in advance!</p>
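<code>imshow</code> knows nothing about your x/y arrays: by default it labels pixels with their array indices, starting at 0. Passing an <code>extent</code> — the edges of the image in data coordinates, half a cell beyond the 1..10 pixel centres — lines the image up with the unit-offset points. A sketch on the same data (headless backend so it runs anywhere):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import scipy.interpolate

data = np.array([[9, 2, 2.0], [3, 3, 5.0], [6, 4, 1.0],
                 [2, 6, 3.0], [10, 7, 4.5], [5, 8, 2.0]])
coords, zvals = data[:, 0:2], data[:, 2]

x = np.arange(1, 11)
y = np.arange(1, 11)
grid_x, grid_y = np.meshgrid(x, y, indexing="xy")
grid_z1 = scipy.interpolate.griddata(coords, zvals, (grid_x, grid_y), method="linear")

fig, ax = plt.subplots()
# extent = (left, right, bottom, top) in data coordinates; each pixel is
# centred on its grid value, so the edges sit half a cell out: 0.5 .. 10.5
im = ax.imshow(grid_z1, extent=(0.5, 10.5, 10.5, 0.5))
ax.plot(coords[:, 0], coords[:, 1], "k.", ms=10)
```

With bottom > top in the extent (or equivalently <code>origin="upper"</code>, the default), row 0 of the array stays at the top, matching the original orientation.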
|
<python><matplotlib><scipy>
|
2023-02-02 19:28:44
| 1
| 545
|
PetGriffin
|
75,327,974
| 7,984,318
|
Pandas compute time duration among 3 columns and skip the none value at the same time
|
<p>I have a DataFrame; you can reproduce it by running:</p>
<pre><code>import pandas as pd
from io import StringIO
df = """
case_id first_created last_paid submitted_time
3456 2021-01-27 2021-01-29 2021-01-26 21:34:36.566023+00:00
7891 2021-08-02 2021-09-16 2022-10-26 19:49:14.135585+00:00
1245 2021-09-13 None 2022-10-31 02:03:59.620348+00:00
9073 None None 2021-09-12 10:25:30.845687+00:00
"""
df= pd.read_csv(StringIO(df.strip()), sep='\s\s+', engine='python')
df
</code></pre>
<p>The logic is create 2 new columns for each row:</p>
<pre><code>df['create_duration']=df['submitted_time']-df['first_created']
df['paid_duration']=df['submitted_time']-df['last_paid']
</code></pre>
<p>The unit need to be days.</p>
<p>My challenge is that sometimes <code>last_paid</code> or <code>first_created</code> will be None. How can I skip the None value in a row, but still compute the other column if its value is not None?</p>
<p>For example, <code>last_paid</code> in the third row is None, but <code>first_created</code> is not, so for this row:</p>
<pre><code>df['create_duration']=df['submitted_time']-df['first_created']
df['paid_duration']='N/A'
</code></pre>
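Parsing the columns with <code>pd.to_datetime(..., errors="coerce", utc=True)</code> turns the literal <code>None</code> strings into <code>NaT</code>, and <code>NaT</code> then propagates through the subtraction on its own, so each duration column is computed independently per row. A sketch on the same frame (NaN rather than the string 'N/A' marks the missing durations, which keeps the columns numeric):

```python
import pandas as pd
from io import StringIO

raw = """
case_id  first_created  last_paid   submitted_time
3456     2021-01-27     2021-01-29  2021-01-26 21:34:36.566023+00:00
7891     2021-08-02     2021-09-16  2022-10-26 19:49:14.135585+00:00
1245     2021-09-13     None        2022-10-31 02:03:59.620348+00:00
9073     None           None        2021-09-12 10:25:30.845687+00:00
"""
df = pd.read_csv(StringIO(raw.strip()), sep=r"\s\s+", engine="python")

for col in ["first_created", "last_paid", "submitted_time"]:
    # errors="coerce" maps the literal string "None" to NaT;
    # utc=True makes all columns tz-aware so they can be subtracted
    df[col] = pd.to_datetime(df[col], errors="coerce", utc=True)

df["create_duration"] = (df["submitted_time"] - df["first_created"]).dt.days
df["paid_duration"] = (df["submitted_time"] - df["last_paid"]).dt.days
```

Rows where only one source column is missing get NaN in just that duration, which is the row-wise skipping behaviour asked for.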
|
<python><pandas><dataframe>
|
2023-02-02 19:16:27
| 1
| 4,094
|
William
|
75,327,966
| 13,231,896
|
How to get static map with multipolygon as an Image in python
|
<p>I am looking for a way (maybe through an external API) to represent many polygons on a static map. The idea is to give the coordinates to that service, and that service or API must return a static map as a PNG image with those polygons, just like this:</p>
<p><a href="https://i.sstatic.net/rBOix.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rBOix.png" alt="static map" /></a></p>
<p>How can I get that static map with a multipolygon as an image in Python?
I am using Django, but any way to do it with Python will do.
The service must be free.</p>
|
<python><gis>
|
2023-02-02 19:15:51
| 0
| 830
|
Ernesto Ruiz
|
75,327,914
| 801,902
|
How to replicate what django-allauth does when it creates a user, or how to programmatically submit a form in django?
|
<p>I am trying to import a bunch of users from an old database into a new system, and I am running into problems when I just create users and add their email addresses. Apparently allauth does some hidden magic behind the scenes that I'm having trouble figuring out, because when one of these users logs in, I get an error from the email template <code>Invalid block tag on line 298: 'user_display'. Did you forget to register or load this tag?</code> This doesn't happen when a user that is registered via a form, which subclasses <code>allauth.account.forms.SignupForm</code> logs in.</p>
<p>I thought that maybe I could just send the data through the form, but it requires a request to save, so I either need to figure out all the things that the SignupForm does when it creates a new user, or I need to figure out how to manually create the user using the form, which means I have to supply a request, or at least a fake request.</p>
<p>I would appreciate any help here.</p>
|
<python><django><django-allauth>
|
2023-02-02 19:11:08
| 1
| 1,452
|
PoDuck
|
75,327,797
| 9,983,652
|
output NaN value when using apply function to a dataframe with index
|
<p>I am trying to use the apply function to create 2 new columns. When the dataframe has an index, it doesn't work: the new columns have values of NaN. If the dataframe has no index, then it works. Could you please help? Thanks</p>
<pre><code>
def calc_test(row):
a=row['col1']+row['col2']
b=row['col1']/row['col2']
return (a,b)
df_test_dict={'col1':[1,2,3,4,5],'col2':[10,20,30,40,50]}
df_test=pd.DataFrame(df_test_dict)
df_test.index=['a1','b1','c1','d1','e1']
df_test
col1 col2
a1 1 10
b1 2 20
c1 3 30
d1 4 40
e1 5 50
</code></pre>
<p>Now when I use the apply function, the newly created columns have values of NaN. Thanks for your help.</p>
<pre><code>df_test[['a','b']] = pd.DataFrame(df_test.apply(lambda row:calc_test(row),axis=1).tolist())
df_test
col1 col2 a b
a1 1 10 NaN NaN
b1 2 20 NaN NaN
c1 3 30 NaN NaN
d1 4 40 NaN NaN
e1 5 50 NaN NaN
</code></pre>
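The NaNs come from index alignment: <code>pd.DataFrame(...tolist())</code> builds a helper frame with a fresh <code>0..4</code> RangeIndex, and assigning it into a frame indexed <code>a1..e1</code> aligns on index labels, finding no matches. Passing the original index along fixes it; a sketch:

```python
import pandas as pd

def calc_test(row):
    a = row["col1"] + row["col2"]
    b = row["col1"] / row["col2"]
    return (a, b)

df_test = pd.DataFrame({"col1": [1, 2, 3, 4, 5], "col2": [10, 20, 30, 40, 50]},
                       index=["a1", "b1", "c1", "d1", "e1"])

# build the helper frame with the SAME index so the assignment aligns
df_test[["a", "b"]] = pd.DataFrame(
    df_test.apply(calc_test, axis=1).tolist(), index=df_test.index)
```

With no explicit index on <code>df_test</code>, both frames carry the same default RangeIndex, which is why the original version "works" in that case.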
|
<python><pandas>
|
2023-02-02 18:58:41
| 2
| 4,338
|
roudan
|
75,327,650
| 12,323,468
|
I have a list of dataframes, how do I append a dataframe to each of those in my list in Python?
|
<p>The following python code gives me 3 dataframes (df_apples, df_oranges, df_grapes) showing sales and price for various fruits by month. I created a list of these dfs (df_list). I have another frame (df_forecast) which I want to append to each of the frames in df_list so I can create customized projections of each fruit type. However, when I try to append it doesn't work:</p>
<pre><code>import pandas as pd
import numpy as np
# HISTORY DATAFRAMES
#####################################################################################
df_apples = pd.DataFrame({'sales': [400, 450, 500, 545, 550],
'price': [3.00, 2.75, 3.44, 4.00, 5.32],
'date' : ['2022-10-31','2022-11-30','2022-12-31','2023-01-31','2023-02-28']})
df_oranges = pd.DataFrame({'sales': [50, 65, 60, 80, 110],
'price': [0.50, 0.45, 0.30, 0.35, 0.40],
'date' : ['2022-10-31','2022-11-30','2022-12-31','2023-01-31','2023-02-28']})
df_grapes = pd.DataFrame({'sales': [300, 350, 360, 380, 510],
'price': [1.05, 1.10, 1.35, 1.55, 0.95],
'date' : ['2022-10-31','2022-11-30','2022-12-31','2023-01-31','2023-02-28']})
df_list=[df_apples,df_oranges,df_grapes]
# FORECAST PERIOD DATAFRAME
####################################################################################
index = pd.date_range('2023-03-31', periods=6, freq='M')
columns = ['sales','price']
df_forecast = pd.DataFrame(index=index, columns=columns)
# HISTORY + FORECAST FRAMES TOGETHER
#####################################
for x in df_list:
x.set_index(pd.to_datetime(x['date']), inplace=True) # convert date from object to datetime
x.drop('date', axis=1, inplace=True)
x = x.append(df_forecast)
</code></pre>
<p>It's like df_forecast is not appending at all...showing df_apples as an example:</p>
<p><a href="https://i.sstatic.net/CTZRs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CTZRs.png" alt="enter image description here" /></a></p>
<p>When in fact I want this:</p>
<p><a href="https://i.sstatic.net/etSPO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/etSPO.png" alt="enter image description here" /></a></p>
<p>What's wrong?</p>
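<p>A stripped-down sketch of what I think is going on (my assumption: rebinding the loop variable <code>x</code> never touches <code>df_list</code>, so the list would have to be rebuilt; I use <code>pd.concat</code> here in place of <code>append</code>, which is deprecated in recent pandas):</p>

```python
import pandas as pd

df_a = pd.DataFrame({'sales': [400, 450]},
                    index=pd.to_datetime(['2022-10-31', '2022-11-30']))
df_b = pd.DataFrame({'sales': [50, 65]},
                    index=pd.to_datetime(['2022-10-31', '2022-11-30']))
df_forecast = pd.DataFrame({'sales': [None, None]},
                           index=pd.to_datetime(['2022-12-31', '2023-01-31']))

df_list = [df_a, df_b]
for x in df_list:
    x = pd.concat([x, df_forecast])  # rebinds x only; df_list is unchanged

# Rebuilding the list keeps the extended frames.
df_list = [pd.concat([x, df_forecast]) for x in df_list]
print(len(df_list[0]))  # 4
```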
|
<python>
|
2023-02-02 18:43:49
| 3
| 329
|
jack homareau
|
75,327,510
| 17,696,880
|
Why doesn't this regex capture group stop with the set condition and continue capturing until the end of the line?
|
<pre class="lang-py prettyprint-override"><code>import re
input_text = "((PL_ADVB)alrededor (NOUN)(del auto rojizo, algo grande y completamente veloz)). Luego dentro del baúl rápidamente abajo de una caja por sobre ello vimos una caña." #example input
#place_reference = r"((?i:\w\s*)+)?"
#place_reference = r"(?i:[\w,;.]\s*)+" <--- greedy regex
place_reference = r"(?i:[\w,;.]\s*)+?"
list_all_adverbs_of_place = ["adentro", "dentro", "al rededor", "alrededor", "abajo", "hacía", "hacia", "por sobre", "sobre"]
list_limiting_elements = list_all_adverbs_of_place + ["vimos", "hemos visto", "encontramos", "hemos encontrado", "rápidamente", "rapidamente", "intensamente", "durante", "luego", "ahora", ".", ":", ";", ",", "(", ")", "[", "]", "¿", "?", "¡", "!", "&", "="]
pattern = re.compile(rf"(?:(?<=\s)|^)({'|'.join(re.escape(x) for x in list_all_adverbs_of_place)})?(\s+{place_reference})\s*({'|'.join(re.escape(x) for x in list_limiting_elements)})", flags = re.IGNORECASE)
input_text = re.sub(pattern,
#lambda m: f"((PL_ADVB){m[1]}{m[2]}){m[3]}",
lambda m: f"((PL_ADVB){m[1]}{m[2]}){m[3]}" if m[2] else f"((PL_ADVB){m[1]} NO_DATA){m[3]}",
input_text)
print(repr(input_text)) #--> output
</code></pre>
<p>When I use <code>lambda m: f"((PL_ADVB){m[1]}{m[2]}){m[3]}" if m[2] else f"((PL_ADVB){m[1]} NO_DATA){m[3]}"</code> I get this wrong output:</p>
<p><code>'((PL_ADVB)alrededor (NOUN)(del auto rojizo, algo grande y completamente veloz)). Luego ((PL_ADVB)dentro del baúl rápidamente abajo de una caja por sobre ello vimos una caña).'</code></p>
<p>It can be noticed how the capture group <code>{m[3]}</code> only captured <code>.</code></p>
<p>That is not entirely correct, since not everything should be placed inside the parentheses; the correct output would be:</p>
<pre><code>"((PL_ADVB)alrededor ((NOUN)del auto rojizo, algo grande y completamente veloz)). Luego ((PL_ADVB)dentro del baúl) rápidamente ((PL_ADVB)abajo de una caja) ((PL_ADVB)por sobre ello) vimos una caña."
</code></pre>
<p><code>list_all_adverbs_of_place</code> represents the start of the capturing group, and <code>list_limiting_elements</code> represents the end of the capturing group.</p>
|
<python><python-3.x><regex><string><regex-group>
|
2023-02-02 18:31:47
| 1
| 875
|
Matt095
|
75,327,441
| 17,194,313
|
How to use PARSE_XML in SnowSQL where the underlying XML is "broken"?
|
<p>I am working with a very large collection of XML files (1m+) and for some reason most of them are "broken"</p>
<p>That is to say, running <code>CHECK_XML()</code> on them returns all sorts of errors (missing tag name after <, prematurely terminated xml, etc...)</p>
<p>Is there any way to parse this in snowflake?</p>
<p>I could (as a work-around) use snowpark and pass the python <code>lxml.html.fromstring</code> parser (which does work!), but I'd rather not do this to keep my code simple.</p>
<p>Any ideas?</p>
|
<python><xml><snowflake-cloud-data-platform>
|
2023-02-02 18:23:53
| 0
| 3,075
|
MYK
|
75,327,429
| 8,852,498
|
microservices: client service and server service (fastAPI) running as docker
|
<p>I need to build a small program with microservice architecture:</p>
<ol>
<li>server service (Python fast API framework)</li>
</ol>
<p>I run it with Dockerfile command:</p>
<pre><code> CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<ol start="2">
<li>client service: simple Python CLI textual requires input as username from input CLI and to connect server GET/POST HTTP requests</li>
</ol>
<pre><code>unsername= input("Please insert your unsername:")
log.info(f"{unsername}")
</code></pre>
<p>I run it with Dockerfile command:</p>
<pre><code>CMD ["python", "./main.py"]
</code></pre>
<p>I am not sure how to run my client with Docker so that main runs without exiting. When I run the client and the server in a venv from 2 different terminals, everything works as expected and they connect (because both of them are on my machine). With Docker:</p>
<ol>
<li>I got an error related to the username I try to input: <code>EOFError: EOF when reading a line</code></li>
<li>Even if I delete the input, I still get an error (<code>conn = connection.create_connection...Failed to establish a new connection</code>), as if the client fails to connect to my server when it is in an isolated container.</li>
</ol>
|
<python><docker><microservices><client-server><fastapi>
|
2023-02-02 18:22:41
| 1
| 845
|
Adi Epshtain
|
75,327,410
| 5,269,906
|
SQLAlchemy 2.0 session.execute() BULK INSERT respecting relationships
|
<p>I'm trying to create a web scraping project that uploads scraped data to a database using SQLAlchemy ORM. Lets use <a href="https://quotes.toscrape.com/" rel="nofollow noreferrer">https://quotes.toscrape.com/</a> as an example</p>
<pre><code># models.py
from sqlalchemy import Column, Date, ForeignKey, Integer, String, Table, Text
from sqlalchemy.orm import declarative_base, relationship
Base = declarative_base()
class Author(Base):
__tablename__ = 'author'
id = Column(Integer, primary_key=True)
name = Column(String(50), unique=True, nullable=False)
birthday = Column(Date, nullable=False)
bio = Column(Text, nullable=False)
class Tag(Base):
__tablename__ = 'tag'
id = Column(Integer, primary_key=True)
name = Column(String(31), unique=True, nullable=False)
class Quote(Base):
__tablename__ = 'quote'
id = Column(Integer, primary_key=True)
author_id = Column(ForeignKey('author.id'), nullable=False)
quote = Column(Text, nullable=False, unique=True)
author = relationship('Author')
tags = relationship('Tag', secondary='quote_tag')
t_quote_tag = Table(
'quote_tag', Base.metadata,
Column('quote_id', ForeignKey('quote.id'), primary_key=True),
Column('tag_id', ForeignKey('tag.id'), primary_key=True)
)
</code></pre>
<p>Using the ORM unit-of-work paradigm I can simply add a Quote instance to the session,
then call session.commit() and all 4 tables are populated appropriately.</p>
<pre><code># unit_of_work.py
from datetime import datetime
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from models import *
engine = create_engine('sqlite:///quotes.db')
Session = sessionmaker()
Session.configure(bind=engine, autoflush=False)
session = Session()
einstein_dict = {
'name': 'Albert Einstein',
'birthday': datetime(day=14, month=3, year=1879),
'bio': 'Won the 1921 Nobel Prize in Physics.'
}
change_dict = {'name': 'change'}
deep_thoughts_dict = {'name': 'deep-thoughts'}
thinking_dict = {'name': 'thinking'}
world_dict = {'name': 'world'}
quote_dict = {
'quote': (
"The world as we have created it is a process of our thinking. "
"It cannot be changed without changing our thinking."
)
}
quote_instance = Quote(**quote_dict)
quote_instance.author = Author(**einstein_dict)
quote_instance.tags = [
Tag(**change_dict), Tag(**deep_thoughts_dict), Tag(**thinking_dict), Tag(**world_dict),
]
# Adds with all relationships respected
session.add(quote_instance)
session.commit()
</code></pre>
<p>Using SQLAlchemy ORM Bulk Insert documentation <a href="https://docs.sqlalchemy.org/en/20/orm/queryguide/dml.html" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/orm/queryguide/dml.html</a> It seems you can get away from
unit of work, and use dict references and subqueries to insert / upsert data.</p>
<pre><code># bulk_insert.py
# Insert Authors
session.execute(
insert(Author),
[
einstein_dict
]
)
# Insert Tags
session.execute(
insert(Tag),
[
change_dict,
deep_thoughts_dict,
thinking_dict,
world_dict,
]
)
# Insert Quotes
session.execute(
insert(Quote).values([
{
'author_id': select(Author.id).where(Author.name == quote_instance.author.name),
'quote': quote_instance.quote
}
])
)
# Insert quote_tag association_table
values = []
for tag in quote_instance.tags:
values.append({
'quote_id': select(Quote.id).where(Quote.quote == quote_instance.quote),
'tag_id': select(Tag.id).where(Tag.name == tag.name)
})
session.execute(
insert(t_quote_tag).values(values)
)
session.commit()
</code></pre>
<p>Is there a simple way to use SQLAlchemy 2.0 Bulk Insert to respect relationships? Something along the lines of</p>
<pre><code># ideal.py
session.execute(
insert(Quote).instances([quote_instance])
)
session.commit()
</code></pre>
|
<python><sqlalchemy>
|
2023-02-02 18:21:47
| 0
| 499
|
Osuynonma
|
75,327,384
| 13,359,498
|
ValueError: ('Input data in `NumpyArrayIterator` should have rank 4. You passed an array with shape', (550, 8))
|
<p>I used stratified cross-validation in my model. I have datagen, which I will use to augment the train data.
Code snippet:</p>
<pre><code>i=0
for train_index, val_index in skf.split(X_train, y_train):
X_train_fold, X_val_fold = X_train[train_index], X_train[val_index]
y_train_fold, y_val_fold = y_train[train_index], y_train[val_index]
i = i+1;
print("Fold:",i)
features1 = extract_features(model1, X_train_fold)
features2 = extract_features(model2, X_train_fold)
# Concatenate the features
features = Concatenate()([features1, features2])
inputs = Input(shape=(features.shape[1],))
x = Dense(32, activation='relu')(inputs)
x = Dense(16, activation='relu')(x)
predictions = Dense(4, activation='softmax')(x)
ensemble_model = Model(inputs=inputs, outputs=predictions)
print(X_val_fold.shape)
print(y_train_fold.shape)
print(features.shape)
print(predictions.shape)
# break
ensemble_model.compile(optimizer = Adam(learning_rate=0.00001),loss='sparse_categorical_crossentropy',metrics=['accuracy'])
ensemble_model.fit(datagen.flow(features, y_train_fold, batch_size=32), epochs=20, verbose=1)
</code></pre>
<p>error: <code>ValueError: ('Input data in NumpyArrayIterator should have rank 4. You passed an array with shape', (550, 8))</code></p>
<p>when i use,</p>
<pre><code>ensemble_model.fit(features, y_train_fold, batch_size=32, epochs=20, verbose=1)
</code></pre>
<p>The code works, but it doesn't work with <code>datagen.flow</code>.
How do I solve this? I can't ignore <code>datagen.flow</code> because my dataset is small and data augmentation is a must in this case.</p>
|
<python><tensorflow><keras><deep-learning>
|
2023-02-02 18:19:50
| 0
| 578
|
Rezuana Haque
|
75,327,383
| 18,758,062
|
SimPy: In every step, run a specific process after all the other processes has finished
|
<p>I am new to SimPy and need help to figure out how to do this:</p>
<p>There are multiple <code>foo</code> processes running and a <code>monitor</code> process. At every time step, is there a way to ensure that the <code>monitor</code> process runs only after the other <code>foo</code> processes have finished running for that time step?</p>
<p>I attempt to do this in my example below by listening for changes to any process's <code>is_alive</code> property, then run the actual <code>monitor</code> logic after no more changes are detected.</p>
<p>However, from the output shown here, you can see that the string</p>
<pre><code>1 : Doing something after all other processes....
</code></pre>
<p>appeared before</p>
<pre><code>1 : a has finished after 1 steps
</code></pre>
<p>so the <code>monitor</code> process did not manage to run its logic after all the other processes at environment step #<code>1</code>.</p>
<p><strong>Output</strong></p>
<pre><code>0 : Unfinished processes= [<Process(foo) object at 0x7f820b3df7f0>, <Process(foo) object at 0x7f820aae4790>, <Process(foo) object at 0x7f820aad2490>]
0 : Doing something after all other processes....
1 : Unfinished processes= [<Process(foo) object at 0x7f820b3df7f0>, <Process(foo) object at 0x7f820aae4790>, <Process(foo) object at 0x7f820aad2490>]
1 : Doing something after all other processes....
1 : a has finished after 1 steps
2 : Unfinished processes= [<Process(foo) object at 0x7f820aae4790>, <Process(foo) object at 0x7f820aad2490>]
2 : Doing something after all other processes....
3 : Unfinished processes= [<Process(foo) object at 0x7f820aae4790>, <Process(foo) object at 0x7f820aad2490>]
3 : Doing something after all other processes....
4 : Unfinished processes= [<Process(foo) object at 0x7f820aae4790>, <Process(foo) object at 0x7f820aad2490>]
4 : Doing something after all other processes....
5 : b has finished after 5 steps
5 : c has finished after 5 steps
5 : Doing something after all other processes....
</code></pre>
<p><strong>My Simpy code</strong></p>
<pre class="lang-py prettyprint-override"><code>import simpy
def foo(env, id, delay):
yield env.timeout(delay)
print(f"{env.now} : {id} has finished after {delay} steps")
def monitor(env):
while True:
prev_unfinished_processes = []
while True:
unfinished_processes = [p for p in processes if p.is_alive]
if set(unfinished_processes) != set(prev_unfinished_processes):
print(env.now, ": Unfinished processes=", unfinished_processes)
else:
print(env.now, ": Doing something after all other processes....")
break
prev_unfinished_processes = unfinished_processes
yield env.timeout(1)
env = simpy.Environment()
env.process(monitor(env))
processes = [
env.process(foo(env, "a", 1)),
env.process(foo(env, "b", 5)),
env.process(foo(env, "c", 5)),
]
env.run(until=6)
</code></pre>
<p>Is there a better way to achieve this? Thanks</p>
|
<python><generator><simulation><simpy>
|
2023-02-02 18:19:46
| 1
| 1,623
|
gameveloster
|
75,327,375
| 867,889
|
How to feed a dictionary as parameters to a function with a mix of positional and named arguments?
|
<p>Given a dictionary <code>params={'a':0, 'b':1, 'c':2, 'd':3}</code> I want to pass them to a function <code>foo</code>:</p>
<pre><code>def foo(a, b, c=None, d=None):
pass
</code></pre>
<p>Something as simple as <code>foo(**params)</code> would complain about mixing positional and named arguments.</p>
<p>There is an ugly way:</p>
<pre><code>a = params['a']
b = params['b']
del params['a']
del params['b']
foo(a, b, **params)
</code></pre>
<p>But what I am ideally looking for is something like:</p>
<pre><code>def separate(params: Dict, func: Callable) -> Tuple[List[Any], Dict[str, Any]]:
...
args, kwargs = separate(params, foo)
foo(*args, **kwargs)
</code></pre>
<p>Is there a nice way to do it or should I rely on <code>inspect</code> and implement the <code>separate</code> function myself here?</p>
<p>[related questions don't address the issue: <a href="https://stackoverflow.com/questions/61746117/how-to-pass-kwargs-with-the-same-name-as-a-positional-argument-of-a-function">1</a>, <a href="https://stackoverflow.com/questions/34932052/neatly-pass-positional-arguments-as-args-and-optional-arguments-as-kwargs-from-a">2]</a></p>
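<p>For concreteness, here is the kind of <code>inspect</code>-based <code>separate</code> I had in mind (a sketch under the assumption that every positional-or-keyword parameter without a default goes into <code>args</code> and everything else stays in <code>kwargs</code>):</p>

```python
import inspect

def separate(params, func):
    """Split a dict into (args, kwargs) matching func's signature."""
    kwargs = dict(params)
    args = []
    for name, p in inspect.signature(func).parameters.items():
        # required positional-or-keyword parameters are passed positionally
        if p.kind is p.POSITIONAL_OR_KEYWORD and p.default is p.empty:
            args.append(kwargs.pop(name))
    return args, kwargs

def foo(a, b, c=None, d=None):
    return (a, b, c, d)

args, kwargs = separate({'a': 0, 'b': 1, 'c': 2, 'd': 3}, foo)
print(foo(*args, **kwargs))  # (0, 1, 2, 3)
```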
|
<python>
|
2023-02-02 18:19:10
| 1
| 10,083
|
y.selivonchyk
|
75,327,334
| 9,391,359
|
Split string by specific html tags with keeping tags
|
<p>I need to split a string by a specific set of tags <code>(<li>, <ul> ...)</code>. I came up with the regex</p>
<p><code>pattern = <li>|<ul>|<ol>|<li>|<dl>|<dt>|<dd>|<h1>|<h2>|<h3>|<h4>|<h5>|<h6></code> and <code>re.split</code></p>
<p>Basically it does the job</p>
<pre><code>test_string = '<p> Some text some text some text. </p> <p> Another text another text </p>. <li> some list </li>. <ul> another list </ul>'
res = re.split(pattern, test_string)
-> `['<p> Some text some text some text. </p> <p> Another text another text </p>. ', ' some list </li>. ', ' another list </ul>']`
</code></pre>
<p>But I would like to capture both opening and closing tags and keep the tags in the split text. Something like</p>
<pre><code>['<p> Some text some text some text. </p> <p> Another text another text </p>. ', '<li> some list </li>. ', '<ul>another list </ul>']`
</code></pre>
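<p>One direction I tried, sketched below: splitting on a zero-width lookahead keeps each opening tag at the start of its chunk (this only handles opening tags, which may or may not be enough here):</p>

```python
import re

tags = ['li', 'ul', 'ol', 'dl', 'dt', 'dd', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6']
# A lookahead split point consumes nothing, so the tag stays in the chunk.
pattern = '(?=' + '|'.join(f'<{t}>' for t in tags) + ')'

test_string = ('<p> Some text some text some text. </p> '
               '<li> some list </li>. <ul> another list </ul>')
res = [chunk for chunk in re.split(pattern, test_string) if chunk]
print(res)
```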
|
<python><html>
|
2023-02-02 18:15:04
| 1
| 941
|
Alex Nikitin
|
75,327,274
| 19,053,778
|
Having the same index values when pivoting a dataframe from long to wide format gives an average value
|
<p>Context: I'm trying to pivot a long format dataframe to a wide format dataframe, however, I'm noticing a weird pattern on the wide format dataframe. It seems that if we have repeated values for the index (in my case, a date), it's almost like it's giving me an average instead of repeating each index value and keeping the original values?</p>
<p>Here's a minimal reproducible example:</p>
<pre><code> import datetime
import pandas as pd
long_dataframe = pd.DataFrame({"Date": [
datetime.datetime.strptime("01-01-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-01-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-02-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-02-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-03-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-04-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-04-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-01-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-01-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-02-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-02-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-03-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-04-2020", '%m-%d-%Y').date(),
datetime.datetime.strptime("01-04-2020", '%m-%d-%Y').date()
], "A": [
"category_X", "category_X", "category_X", "category_X", "category_X", "category_X", "category_X",
"category_Y", "category_Y", "category_Y", "category_Y", "category_Y", "category_Y", "category_Y"], "Values": [30, 40, 20, 30, 40, 50, 60,25,30,42,54,21,23,30]})
wide_dataframe = long_dataframe.reset_index().pivot_table(
index="Date", columns="A", values="Values")
wide_dataframe
</code></pre>
<p>Which gives me this:</p>
<pre><code>A category_X category_Y
Date
2020-01-01 35.0 27.5
2020-01-02 25.0 48.0
2020-01-03 40.0 21.0
2020-01-04 55.0 26.5
</code></pre>
<p>How can I make it so that I see the repeated dates with their original values? Why is it that for 2020-01-01 it's giving the average of the values for this date (30 and 40)?</p>
<p>Desired output would look something like this:</p>
<pre><code>A category_X category_Y
Date
2020-01-01 30 ...
2020-01-01 40
2020-01-02 20
2020-01-02 30
2020-01-03 40
2020-01-04 50
2020-01-04 60
</code></pre>
<p>How can I do this while keeping duplicated indices?</p>
<p>I was thinking of giving each row a unique ID, but I'd really like to do this directly using the dates if possible (without creating any additional IDs)</p>
<p>Thank you!</p>
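<p>In case it clarifies what I'm after, a small sketch (my understanding is that <code>pivot_table</code> averages duplicates because its default <code>aggfunc</code> is <code>'mean'</code>; the per-group <code>cumcount</code> below is the kind of extra ID I was hoping to avoid, but it does reproduce the desired shape):</p>

```python
import pandas as pd

long_df = pd.DataFrame({
    'Date': ['2020-01-01', '2020-01-01', '2020-01-02'],
    'A': ['category_X', 'category_X', 'category_X'],
    'Values': [30, 40, 20],
})

# cumcount distinguishes repeated (Date, A) pairs, so pivot_table keeps
# every original row instead of averaging duplicates.
occurrence = long_df.groupby(['Date', 'A']).cumcount()
wide = long_df.pivot_table(index=['Date', occurrence],
                           columns='A', values='Values')
print(wide.loc[('2020-01-01', 0), 'category_X'])  # 30.0
```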
|
<python><pandas>
|
2023-02-02 18:08:17
| 1
| 496
|
Chronicles
|
75,327,185
| 2,100,039
|
Reading Data from URL into a Pandas Dataframe
|
<p>I have a URL that I am having difficulty reading. It is uncommon in the sense that the data is self-generated, or in other words created using my own inputs. I have tried something like this with other queries and it works fine, but not in this case:</p>
<pre><code>bst = pd.read_csv('https://psl.noaa.gov/data/correlation/censo.data', skiprows=1,
skipfooter=2,index_col=[0], header=None,
engine='python', # c engine doesn't have skipfooter
delim_whitespace=True)
</code></pre>
<p>Here is the code + URL that is providing the challenge:</p>
<pre><code>zwnd = pd.read_csv('https://psl.noaa.gov/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Zonal+Wind&level=1000&lat1=50&lat2=25&lon1=-135&lon2=-65&iseas=0&mon1=0&mon2=0&iarea=0&typeout=1&Submit=Create+Timeseries',
                   skiprows=1, skipfooter=2, index_col=[0], header=None,
                   engine='python',  # c engine doesn't have skipfooter
                   delim_whitespace=True)
</code></pre>
<p>Thank you for any help that you can provide.</p>
<p>Here is the full error message:</p>
<pre><code>pd.read_csv('https://psl.noaa.gov/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Zonal+Wind&level=1000&lat1=50&lat2=25&lon1=-135&lon2=-65&iseas=0&mon1=0&mon2=0&iarea=0&typeout=1&Submit=Create+Timeseries', skiprows=1, skipfooter=2,index_col=[0], header=None,
engine='python', # c engine doesn't have skipfooter
delim_whitespace=True)
Traceback (most recent call last):
Cell In[240], line 1
pd.read_csv('https://psl.noaa.gov/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Zonal+Wind&level=1000&lat1=50&lat2=25&lon1=-135&lon2=-65&iseas=0&mon1=0&mon2=0&iarea=0&typeout=1&Submit=Create+Timeseries', skiprows=1, skipfooter=2,index_col=[0], header=None,
File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\util\_decorators.py:211 in wrapper
return func(*args, **kwargs)
File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\util\_decorators.py:331 in wrapper
return func(*args, **kwargs)
File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\readers.py:950 in read_csv
return _read(filepath_or_buffer, kwds)
File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\readers.py:611 in _read
return parser.read(nrows)
File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\readers.py:1778 in read
) = self._engine.read( # type: ignore[attr-defined]
File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\python_parser.py:282 in read
alldata = self._rows_to_cols(content)
File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\python_parser.py:1045 in _rows_to_cols
self._alert_malformed(msg, row_num + 1)
File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\python_parser.py:765 in _alert_malformed
raise ParserError(msg)
ParserError: Expected 2 fields in line 133, saw 3. Error could possibly be due to quotes being ignored when a multi-char delimiter is used.
</code></pre>
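<p>My current workaround attempt, sketched offline (the assumption here is that the CGI endpoint returns an HTML page rather than a raw data file, and that the numbers sit inside a <code>&lt;pre&gt;</code> block; the sample HTML below is made up to stand in for the real response):</p>

```python
import io
import re
import pandas as pd

# Hypothetical stand-in for the HTML the CGI endpoint returns.
html = """<html><body><h2>Timeseries</h2>
<pre>
1948   1.2   0.5
1949   0.8   0.9
</pre></body></html>"""

# Pull the whitespace-delimited data block out of the page, then parse it.
block = re.search(r"<pre>(.*?)</pre>", html, flags=re.S).group(1)
zwnd = pd.read_csv(io.StringIO(block), header=None,
                   index_col=0, sep=r'\s+')
print(zwnd.shape)  # (2, 2)
```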
|
<python><pandas><csv><url>
|
2023-02-02 18:00:07
| 2
| 1,366
|
user2100039
|
75,327,154
| 282,918
|
Python: separating words using space, but preserving double quotes surrounded text as single unit
|
<p>Let's say I have a string that looks like this:</p>
<pre><code>one two three "four five"
</code></pre>
<p>I'd like to split such that I get an array:</p>
<pre><code>['one', 'two', 'three', 'four five']
</code></pre>
<p>using <code>split</code> with <code>' '</code> will not be enough here. I have to separate out the double quotes first. Is there a best practice technique to do this? or should I re-invent the wheel and do it myself?</p>
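<p>For reference, the closest thing I found in the standard library is <code>shlex</code>, which appears to handle exactly this quoting, so the wheel may already exist:</p>

```python
import shlex

s = 'one two three "four five"'
# shlex.split honors double quotes, keeping quoted text as one token.
parts = shlex.split(s)
print(parts)  # ['one', 'two', 'three', 'four five']
```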
|
<python>
|
2023-02-02 17:57:20
| 2
| 5,534
|
JasonGenX
|
75,327,075
| 16,512,200
|
SQL Alchemy: Convert Row Values to Column Names
|
<p>I've got a table with columns like this:</p>
<pre><code>DataID (Primary auto-incrementing key)
TestID (Foreign Key)
FormID (Foreign Key)
VariableName (String Type)
Data (String Type)
</code></pre>
<p>Which means the data often looks like this:</p>
<pre><code>DataID, TestID, FormID, VariableName, Data
1, 1, 1, Name, Billy
2, 1, 1, Date, 02/02/2023
3, 2, 1, Name, Bob
4, 2, 1, Date, 02/01/2023
</code></pre>
<p>I'd like to be able to run an SQL Alchemy query that will return the data to me in this format instead:</p>
<pre><code>TestID, FormID, Name, Date
1, 1, Billy, 02/02/2023
2, 1, Bob, 02/01/2023
</code></pre>
<p>NOTE: The DataID is not needed and the VariableName/Data pairs are to be grouped by TestID.</p>
<p>I tried the advice from these similar posts:
<a href="https://stackoverflow.com/questions/39925502/sql-pivot-operation-in-sqlalchemy">SQL Pivot Operation in SQLAlchemy</a> ,
<a href="https://stackoverflow.com/questions/33554225/pivot-in-sqlalchemy?noredirect=1&lq=1">Pivot in SQLAlchemy</a> ,
<a href="https://stackoverflow.com/questions/2089661/sqlalchemy-column-to-row-transformation-and-vice-versa-is-it-possible">SQLAlchemy Column to Row Transformation and vice versa -- is it possible?</a></p>
<p>However, I'm not seeing a purely SQLAlchemy option, and when I attempted the last link I struggled because I do not know what kind of table I would need to create in my database to have the relation and association_proxy represented in my table.</p>
<p>Any help/advice is appreciated.</p>
|
<python><sqlalchemy>
|
2023-02-02 17:49:24
| 0
| 371
|
Andrew
|
75,327,034
| 14,692,430
|
Moderngl: Render VAO with multiple shaders
|
<p>I'm doing some stuff with 2D opengl rendering.</p>
<p>Is there a way to render a vertex array object but have the data be passed through multiple shaders? For example, a shader that applies a normal map to the texture, and then a shader that blurs the image. It would be very difficult and unclean to combine the two shaders into one let alone potentially combining more than 2 shaders. This is my current code for creating the vertex array object:</p>
<pre class="lang-py prettyprint-override"><code># TEX_COORDS = [0, 1, 1, 1,
# 0, 0, 1, 0]
# TEX_INDICES = [0, 1, 2,
# 1, 2, 3]
# self.vertices looks something like this: [-1, -1, 1, -1, -1, 1, 1, 1], but with different coordinates
self.vbo = self.ctx.buffer(struct.pack("8f", *self.vertices))
self.uv_map = self.ctx.buffer(struct.pack("8f", *TEX_COORDS))
self.ibo = self.ctx.buffer(struct.pack("6I", *TEX_INDICES))
self.vao_content = [(self.vbo, "2f", "vertexPos"), (self.uv_map, "2f", "vertexTexCoord")]
self.vao = self.ctx.vertex_array(self.program, self.vao_content, self.ibo) # self.program is the shader program object
</code></pre>
<p>And I'm doing <code>texture.use()</code> (<code>texture</code> being a moderngl texture object) and then <code>self.vao.render()</code> to render it onto the screen.</p>
|
<python><opengl><shader><vertex-array-object><python-moderngl>
|
2023-02-02 17:45:29
| 1
| 352
|
DaNubCoding
|
75,327,028
| 6,454,901
|
Python Falcon - Post calls are being ignored
|
<p>I'm trying to set up a simple reverse proxy with Falcon in Python.</p>
<p>I have:</p>
<pre><code>import falcon
import requests
class ReverseProxyResource:
def on_get(self, req, resp, text=None):
print("GET")
if(text):
destination = "[destination_url]/" + text
else:
destination = "[destination_url]"
result = requests.get(destination)
resp.body = result.text
resp.status = result.status_code
def on_post(self, req, resp, text=None):
print("POST")
if(text):
destination = "[destination_url]/" + text
else:
destination = "[destination_url]"
result = requests.post(destination, data=req.bounded_stream.read())
resp.body = result.text
resp.status = result.status_code
proxy_api = application = falcon.API()
proxy_api.req_options.auto_parse_form_urlencoded = True
proxy_api.add_route('/{text}', ReverseProxyResource())
proxy_api.add_route('/', ReverseProxyResource())
</code></pre>
<p>Get requests to the proxy are returned correctly.</p>
<p>However, Post requests are only returned a 404 error from the api. The "POST" print statement is not shown, indicating on_post isn't called at all. (The post requests only included Header Content-Type: application/json and a simple JSON body, which work correctly when called directly against the destination url)</p>
<p>EDIT: Interestingly enough, if I change the GET call in Postman to POST (i.e. no body, headers, or anything else added), on_post() is called when I hit the endpoint. So it seems like an issue where POST requests that contain a body are being automatically 404'ed without on_post() being called.</p>
|
<python><falconframework><falcon>
|
2023-02-02 17:44:54
| 1
| 508
|
DevBot
|
75,326,988
| 8,017,666
|
Supervisorctl: Issue in running supervisorctl status
|
<p>I am not able to run any <code>supervisorctl</code> command inside docker container like <code>stop</code>, <code>start</code>, <code>status</code>, <code>restart</code>, etc.</p>
<p>My Supervisord configuration looks like below</p>
<pre><code>abc@abc-adhocworker-c89d9667b-9lqbd:/app$ cat worker.conf
[supervisord]
logfile=/dev/null
pidfile=/tmp/supervisord.pid
nodaemon=true
[unix_http_server]
file = /tmp/supervisor.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:worker]
command=./manage.py rq worker %(ENV_QUEUES)s
process_name=%(program_name)s-%(process_num)s
numprocs=%(ENV_WORKERS_COUNT)s
directory=/app
stopsignal=TERM
autostart=true
autorestart=true
startsecs=300
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[eventlistener:worker_healthcheck]
autorestart=true
serverurl=AUTO
command=./manage.py rq healthcheck
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
events=TICK_60
</code></pre>
<p>Got this error</p>
<pre><code>Error: .ini file does not include supervisorctl section
For help, use /usr/local/bin/supervisorctl -h
</code></pre>
<p>I tried adding in above configuration</p>
<pre><code>[supervisorctl]
serverurl=http://127.0.0.1:9001
</code></pre>
<p>Then getting an error while running the supervisorctl status</p>
<pre><code>abc@abc-adhocworker-c89d9667b-9lqbd:/app$ /usr/local/bin/python /usr/local/bin/supervisorctl status
error: <class 'OSError'>, [Errno 99] Cannot assign requested address: file: /usr/local/lib/python3.7/socket.py line: 716
</code></pre>
<p>Also tried changing it to</p>
<pre><code>serverurl=unix:///tmp/supervisor.sock
</code></pre>
<p>I am not sure whether, after changing the configuration, we need to restart/reload supervisord?</p>
<p>Note : I am running these commands inside docker container.</p>
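<p>For completeness, the combined sections I am now trying (the serverurl matching the socket declared in <code>[unix_http_server]</code>, and launching both sides with an explicit <code>-c</code> so they read the same file, which is my guess at the missing piece):</p>

```ini
[unix_http_server]
file = /tmp/supervisor.sock

[supervisorctl]
serverurl = unix:///tmp/supervisor.sock

; launched as: supervisord -c /app/worker.conf
; queried as:  supervisorctl -c /app/worker.conf status
```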
|
<python><docker><supervisord>
|
2023-02-02 17:40:45
| 1
| 2,946
|
SRJ
|
75,326,955
| 10,842,351
|
How to use different RNGs in different parts of a simulation avoiding correlation?
|
<p>This question arises from my attempt to mix two different RNGs. I'd like to mix them choosing the best of the two according to the operations I need to carry out to achieve better performance. More concretely, the two RNGs are:</p>
<ul>
<li>The Mersenne Twister (MT19937) coming from the <a href="https://docs.python.org/3/library/random.html" rel="nofollow noreferrer">random</a> module of Python;</li>
<li>Another RNG that can be any of the Numpy RNGs, listed <a href="https://numpy.org/doc/stable/reference/random/bit_generators/index.html" rel="nofollow noreferrer">here</a>.</li>
</ul>
<p>I'm restricted to using the Mersenne Twister of the random module since in some situations it is better than any of the others in Numpy. Also, a requirement of my project is that I need to have reproducible results, so unpredictable entropy should be avoided when setting the seeds, but I can set one of the seeds with a pseudorandom number generated by one of the RNGs if needed.</p>
<p>So far I've only been able to implement the "safer" solution which is to use a MT19937 also in Numpy so that each time I need to use it, I pass the state from the Mersenne Twister of the random module (here for example you can obtain it with <code>random.getstate()</code>), do some operations with it and then pass the state back to the other. The problem with this solution is that passing the state creates a relevant overhead.</p>
<p>I'm unsure if other more performant solutions, like initializing two different RNGs at the start of the simulation and using them at will, can be problematic in terms of the quality/correlation of the sequence generated since I read <a href="https://scicomp.stackexchange.com/questions/23547/parallel-mersenne-twister-for-monte-carlo">here</a> that using two differently seeded Mersenne Twister is not very good because the two sequences of pseudo-random numbers can be more correlated than one generated from a single one. However, in my situation I can use any of the Numpy RNG (a PCG-64 generator for example) in combination with the Python Mersenne Twister from the random module, so this is what I'd like to ask: is initializing a different RNG (with a different seed if useful) good enough? And also, what would be the best choice in Numpy to mix with a MT19937? Thank you in advance.</p>
<p>EDIT:</p>
<p>These are some timings to show that some operations can be made more performant with one RNG instead of the other:</p>
<pre class="lang-py prettyprint-override"><code>import timeit
python_shuffle = timeit.timeit("random.shuffle(a)", "import random; a = list(range(500))", number=10000)
numpy_shuffle = timeit.timeit("rng.shuffle(a)", "from numpy.random import default_rng; a = list(range(500)); rng = default_rng()", number=10000)
python_gen1rand = timeit.timeit("random.random()", "import random", number=10000)
numpy_gen1rand = timeit.timeit("rng.random()", "from numpy.random import default_rng; rng = default_rng()", number=10000)
print(python_shuffle/numpy_shuffle)
print(python_gen1rand/numpy_gen1rand)
</code></pre>
<p>which prints something like:</p>
<pre><code>14.992528343855014
0.15127849559797613
</code></pre>
<p>so the default Numpy PCG64 RNG, which should be one of the fastest, is 15 times better than the other one in shuffling but 7 times worse than the one in the random module to generate a single random number.</p>
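One pattern that avoids passing state back and forth (a sketch; the seed value is illustrative) is to derive a reproducible seed for an independent NumPy PCG64 generator from the stdlib Mersenne Twister once, at start-up, and then use each generator where it is fastest:

```python
import random

import numpy as np

# Keep the stdlib MT19937 as the "master" stream and derive a reproducible
# seed for an independent NumPy PCG64 generator from it once.
master = random.Random(12345)            # stdlib Mersenne Twister, fixed seed
numpy_seed = master.getrandbits(128)     # reproducible seed material
rng = np.random.default_rng(numpy_seed)  # PCG64, a different algorithm family

# Each generator is then used where it is fastest, with no state passing.
sample_py = master.random()    # scalar draws: stdlib tends to be faster
sample_np = rng.random(1000)   # bulk draws/shuffles: NumPy tends to be faster
```

Because PCG64 and MT19937 are unrelated algorithm families, the inter-stream correlation concern that applies to two differently seeded Mersenne Twisters does not arise in the same way; deriving the PCG64 seed from the MT stream keeps the whole run reproducible from the single master seed.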
|
<python><numpy><performance><random>
|
2023-02-02 17:37:07
| 1
| 665
|
Tortar
|
75,326,947
| 9,262,339
|
Django ORM duplicate objects in queryset after order_by on a foreign key field
|
<p>I have encountered some unexpected sorting behaviour with objects. As soon as I sort by the related model field, I get duplicates.
A short description of the model fields</p>
<p>models.py</p>
<pre><code>class GoogleCreativeSpend(models.Model):
    creative = models.ForeignKey(
        'GoogleCreative',
        on_delete=models.CASCADE,
    )
    spend = models.DecimalField()


class GoogleCreative(CreamCreative):
    .....
</code></pre>
<p>Create some objects:</p>
<pre><code>>>> creative = GoogleCreative.objects.get(name='gs_video1031v1')
>>> spend = GoogleCreativeSpend(creative=creative, spend=100,)
>>> spend.save()
>>> spend = GoogleCreativeSpend(creative=creative, spend=1100,)
>>> spend.save()
>>> spend = GoogleCreativeSpend(creative=creative, spend=1,)
>>> spend.save()
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>queryset = GoogleCreative.objects.all()
queryset = queryset.order_by('googlecreativespend__spend')
for i in queryset:
    if i.name == 'gs_video1031v1':
        print(i.name)

gs_video1031v1
gs_video1031v1
gs_video1031v1
</code></pre>
<p>That is, after creating 3 GoogleCreativeSpend objects I get 3 duplicates of the GoogleCreative after sorting.</p>
<p>According to <a href="https://stackoverflow.com/questions/66584657/how-to-sort-queryset-based-on-foreign-key-with-no-duplicates">How to sort queryset based on foreign key with no duplicates</a><br />
I tryied</p>
<pre><code>queryset.distinct()
</code></pre>
<p>and</p>
<pre><code>queryset.distinct('googlecreativespend__spend')
</code></pre>
<p>But it doesn't work.</p>
<p>How can I fix it?</p>
|
<python><django>
|
2023-02-02 17:36:46
| 1
| 3,322
|
Jekson
|
75,326,861
| 1,221,310
|
Initializing an empty Pydantic Dynamic model
|
<p>I have data coming into my FastAPI that can take any shape/form and as such I need an empty Pydantic model. I tried creating a dynamic model like this:</p>
<pre><code>DynamicModel = create_model('RandomData', random_data=(dict, ...))
</code></pre>
<p>However it requires the model to follow this structure:</p>
<pre><code>{"random_data": { } }
</code></pre>
<p>What I would like is for the model to accept an empty json dictionary like this: <code>{ }</code></p>
<p>What am I missing?</p>
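One possible sketch (untested against the real FastAPI setup; the model name is from the question): create a model with no declared fields that allows arbitrary extra keys, so an empty object <code>{ }</code> validates. The key is <code>extra="allow"</code>; how it is spelled differs between pydantic v1 (a <code>Config</code> class) and v2 (<code>ConfigDict</code>), so this tries v2 first and falls back to v1.

```python
from pydantic import create_model

# A field-less model that tolerates arbitrary extra keys accepts {} as well
# as any payload shape. `extra="allow"` is the important setting.
try:  # pydantic v2
    from pydantic import ConfigDict
    DynamicModel = create_model("RandomData", __config__=ConfigDict(extra="allow"))
except ImportError:  # pydantic v1
    DynamicModel = create_model(
        "RandomData", __config__=type("Config", (), {"extra": "allow"})
    )

empty = DynamicModel()                         # validates {}
loaded = DynamicModel(foo=1, bar={"nested": True})  # arbitrary shape
```

Alternatively, if no validation is needed at all, a FastAPI endpoint can simply annotate the body parameter as <code>dict</code> and skip the model entirely.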
|
<python><pydantic>
|
2023-02-02 17:30:31
| 0
| 906
|
cp-stack
|
75,326,752
| 13,679,903
|
Time complexity of dict.fromkeys()
|
<p>I'm trying to get an ordered set in Python 3.8. According to this <a href="https://stackoverflow.com/a/53657523/13679903">answer</a>, I'm using <code>dict.fromkeys()</code> method to get the unique items from a list preserving the insertion order. What's the time complexity of this method? As I'm using this frequently in my codebase, is it the most efficient way or is there any better way to get an ordered set?</p>
<pre><code>>>> lst = [4,2,4,5,6,2]
>>> dict.fromkeys(lst)
{4: None, 2: None, 5: None, 6: None}
>>> list(dict.fromkeys(lst))
[4, 2, 5, 6]
</code></pre>
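For what it's worth, <code>dict.fromkeys</code> builds the dict in a single pass with amortized O(1) per insertion, so it is O(n) overall and already asymptotically optimal for an ordered de-dupe (insertion order is guaranteed for <code>dict</code> since Python 3.7). A manual seen-set loop does the same asymptotic work with more Python-level overhead:

```python
# Equivalent ordered de-duplication with an explicit seen-set, also O(n);
# dict.fromkeys usually wins in practice because the loop runs in C.
def ordered_unique(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

lst = [4, 2, 4, 5, 6, 2]
assert list(dict.fromkeys(lst)) == ordered_unique(lst) == [4, 2, 5, 6]
```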
|
<python><dictionary>
|
2023-02-02 17:20:12
| 0
| 437
|
Mohammad Rifat Arefin
|
75,326,707
| 14,406,682
|
Hide attributes from sphinx autodoc but show them in docstring
|
<p>I want to do two things at once:
display the attributes of a class in its docstring, so that, for example, they will be available in a Jupyter notebook when I hit Shift + Tab + Tab.</p>
<p>However, in the Sphinx output, the docs generated with the <code>.. autoclass::</code> directive contain the attributes section, which takes up a lot of real estate and I do not want it to be there.</p>
<p>So let's say I have a class as follows:</p>
<pre class="lang-py prettyprint-override"><code>class TestClass:
    """
    This is a brief summary.

    Here are a few more sentences. Probably more than two.

    Attributes
    ----------
    a1 : Any
        This is the first attribute, I do not want to see it here.
    a2 : TestClass
        Oh look a recursive class
    """

    def __init__(self, a1, a2):
        """
        Initialize self

        Parameters
        ----------
        a1 : Any
            The first attribute.
        a2 : TestClass
            The second attribute.
        """
        self._a1 = a1
        self._a2 = a2

    @property
    def a1(self):
        """
        The first attribute, I want to see it here.

        Returns
        -------
        Any
            The first attribute
        """
        return self._a1

    @property
    def a2(self):
        """
        The second attribute.

        Returns
        -------
        TestClass
            The second attribute
        """
        return self._a2
</code></pre>
<p>I also have a class documentation template, with the following content:</p>
<pre><code>{{ objname | escape | underline }}
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% if attributes %}
.. rubric:: {{ _('Attribute details') }}
{% for item in attributes %}
.. autoattribute:: {{ name }}.{{ item }}
{% endfor %}
{% endif %}
{% if methods %}
.. rubric:: {{ _('Method details') }}
{% for item in methods %}
.. automethod:: {{ name }}.{{ item }}
{% endfor %}
{% endif %}
</code></pre>
<p>which finally results in docs which look like this:
<a href="https://i.sstatic.net/K8kKl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K8kKl.png" alt="The generated docs" /></a></p>
<p>I am looking for a solution which would</p>
<ol>
<li>retain the attribute list, or some information about the attributes, in the docstring, and</li>
<li>omit this information in the docs.</li>
</ol>
<p>I am OK with trying unorthodox solutions, but I cannot find any good guides on how to use Sphinx (ironically, the Sphinx docs don't help much). However, I need high levels of automation for the docs, which is why I would like a solution which uses a template, similar to what I pasted above.</p>
|
<python><python-sphinx><restructuredtext><docstring>
|
2023-02-02 17:15:48
| 0
| 718
|
Maurycyt
|
75,326,663
| 5,908,886
|
Remove duplicates in multifasta, where entries are paired
|
<p>Hi my input looks like:</p>
<pre><code>>ref
GGTGCCCACACTAATGATGTAAAACAATTAACAGAGGCAGTGCAAA
>sample1
GGTGCCCACACTAATGATGTAAAACAATTAACAGAGGCAGTGCAAA
>ref
GGTTAGGGCCGCCTGTTGGTGGGCGGGAATCAAGCAGCATTTTGGAATTCCCTACAAT
>sample2
GGTTAGGGCCGCCTGTTGGTGGGCGGGAATCAAGCAGGTATTTGGAATTCCCTACAAT
</code></pre>
<p>The entries in the fasta file are paired so that the ref is paired with the sample# below it.</p>
<p>I want to identify where the nt sequence for sample# and ref are identical, and remove them from the fasta (or put them into a fasta file of their own). The output would hopefully be a fasta file where the nt sequences for ref and sample# are different.</p>
<p>So far I have tried seqkit rmdup command, however, this doesn't treat the entries as if they are paired. How can I accomplish this, ideally with a bash command or other program.</p>
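Since the records are strictly paired, one Python sketch (function and file names illustrative) is to stream the FASTA records in order, take them two at a time as (ref, sample), and keep only the pairs whose sequences differ:

```python
# Parse FASTA records in file order, then consume them pairwise.
def read_fasta(path):
    name, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    yield name, "".join(seq)
                name, seq = line, []
            else:
                seq.append(line)
        if name is not None:
            yield name, "".join(seq)

def split_pairs(records):
    keep, dupes = [], []
    it = iter(records)
    for ref, sample in zip(it, it):  # (ref, sample) pairs in order
        (keep if ref[1] != sample[1] else dupes).append((ref, sample))
    return keep, dupes
```

The <code>dupes</code> list could then be written to its own FASTA file. This assumes the ref record always precedes its sample record, as in the example.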
|
<python><bash><awk>
|
2023-02-02 17:11:59
| 3
| 377
|
SaltedPork
|
75,326,651
| 11,115,072
|
How to get the last occurrance of all items on a column (pandas)
|
<p>Let's suppose I have a dataset like this:</p>
<pre><code>item_id | date | cat |
----------------------------
0 | 2020-01-01 | A |
0 | 2020-02-01 | B |
1 | 2020-04-01 | A |
2 | 2020-02-01 | C |
2 | 2021-01-01 | B |
</code></pre>
<p>So, I need to get the last category (column cat), that means that the result dataframe would be the following:</p>
<pre><code>item_id | cat |
---------------
0 | B |
1 | A |
2 | B |
</code></pre>
<p>I know I could sort the values by date and then iterate over the items, but that would be too time-consuming. Is there another pandas method to achieve that?</p>
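A vectorized sketch (column names match the example): sort by date once, then keep the last row per <code>item_id</code> with <code>drop_duplicates(keep="last")</code> — no Python-level iteration needed.

```python
import pandas as pd

df = pd.DataFrame({
    "item_id": [0, 0, 1, 2, 2],
    "date": pd.to_datetime(
        ["2020-01-01", "2020-02-01", "2020-04-01", "2020-02-01", "2021-01-01"]
    ),
    "cat": ["A", "B", "A", "C", "B"],
})

# After sorting by date, the last surviving row per item_id carries the
# most recent category.
last_cat = (
    df.sort_values("date")
      .drop_duplicates("item_id", keep="last")
      .loc[:, ["item_id", "cat"]]
      .reset_index(drop=True)
)
```

<code>df.sort_values("date").groupby("item_id")["cat"].last()</code> is an equivalent spelling.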
|
<python><pandas>
|
2023-02-02 17:10:06
| 1
| 381
|
Gabriel Caldas
|
75,326,465
| 10,007,302
|
Most efficient way to insert data into an existing MySQL table from a Python dataframe, but update existing rows if column2 and column3 contain dupes
|
<p>I'm relatively new to SQL and python. I'm trying to write code that will handle updates to an existing table in SQL. I've made an example below.</p>
<pre><code>id Project Company Start Date Industry
1 Zebra Apple 1/2/2022 Software
2 Charlie Tesla 2/2/2022 Automotive
3 Alpha Google 3/2/2022 Software
4 Omega Facebk 4/2/2022 Social Media
5 Beta Twitter 1/2/2022 Social Media
</code></pre>
<p>I'm currently reading a named range from an Excel workbook that will contain the same five columns as my existing MySQL table. I'd like to insert the new data into the existing table, called projects. However, let's say someone updates just the start date for a specific company/project; in that case I don't want a new row created.</p>
<p>My thinking was to iterate through my new dataframe rows and, for each row, search for any columns in my existing table that match both the project name and company name (just in case someone reuses a project name without knowing) and drop those rows from the existing table. Once that's done, use <code>pandas.to_sql()</code> to append the data.</p>
<p>I'm not sure if this is the most efficient way to go about this.</p>
<p>I had seen a previous solution to a similar problem suggest something like this. Am I on the right track or am I better trying something like the below?</p>
<pre><code>INSERT INTO TABLE_2
(id, name)
SELECT t1.id,
t1.name
FROM TABLE_1 t1
WHERE t1.id NOT IN (SELECT id
FROM TABLE_2)
</code></pre>
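A pandas-side sketch of the split (column names mirror the example table; the data is illustrative): a left merge with <code>indicator=True</code> on the (Project, Company) key separates rows to insert from keyed rows whose other columns changed, without iterating row by row.

```python
import pandas as pd

existing = pd.DataFrame({
    "Project": ["Zebra", "Charlie"],
    "Company": ["Apple", "Tesla"],
    "Start Date": ["1/2/2022", "2/2/2022"],
})
incoming = pd.DataFrame({
    "Project": ["Zebra", "Alpha"],
    "Company": ["Apple", "Google"],
    "Start Date": ["9/9/2022", "3/2/2022"],  # Zebra's date changed
})

key = ["Project", "Company"]
merged = incoming.merge(existing, on=key, how="left",
                        suffixes=("", "_old"), indicator=True)
# Rows with no key match are pure inserts; matched rows whose payload
# columns differ are updates.
inserts = merged[merged["_merge"] == "left_only"][incoming.columns]
updates = merged[(merged["_merge"] == "both")
                 & (merged["Start Date"] != merged["Start Date_old"])][incoming.columns]
```

Alternatively, this can be pushed to the database: a UNIQUE key over (Project, Company) plus <code>INSERT ... ON DUPLICATE KEY UPDATE</code> lets MySQL itself update matching rows instead of creating duplicates.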
|
<python><mysql><pandas>
|
2023-02-02 16:53:56
| 0
| 1,281
|
novawaly
|
75,326,423
| 3,521,180
|
How do I pass multiple column names dynamically in PySpark?
|
<p>I am writing a Python function that will do a leftanti join on two dataframes, and the joining condition may vary, i.e. sometimes the 2 DFs might have just one column as the unique key for joining, and sometimes they might have more than 1 column to join on.</p>
<p>So, I have written the below code. Please suggest what changes should I make</p>
<pre><code>def integraty_check(testdata, refdata, cond=[]):
    df = func.join_dataframe(testdata, refdata, cond, "leftanti", logger)
    df = df.select(cond)
    func.write_df_as_parquet_file(df, curate_path, logger)
    return df
</code></pre>
<p>Here the parameter <code>cond</code> may have 1 or more column names, comma separated.</p>
<p>So, how do I pass the dynamic list of column names when I am calling the function?</p>
<p>Please suggest what would be the best way to achieve the purpose.</p>
|
<python><python-3.x><pyspark>
|
2023-02-02 16:50:23
| 1
| 1,150
|
user3521180
|
75,326,281
| 15,520,615
|
Python/PySpark String Split Index on key:value pair modification
|
<p>The following split/index will retrieve the output <code>'accountv2'</code> from</p>
<pre><code>Ancestor:{'ancestorPath': '/mnt/lake/RAW/Internal/origination/dbo/accountv2/1/Year=2023/Month=2/Day=2/Time=04-09', 'dfConfig': '{"sparkConfig":{"header":"true"}}', 'fileFormat': 'SQL'}
</code></pre>
<p>The split/index code is as follows:</p>
<pre><code>Ancestor['ancestorPath'].split("/")[7]
</code></pre>
<p>Can someone help modify the split/index so that it strips off the last two characters, i.e. v2?</p>
<p>So the output will be <code>account</code> not <code>accountv2</code></p>
<p>Thanks</p>
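One sketch: rather than slicing off a fixed two characters (which would break for a v10 suffix), strip a trailing <code>v&lt;digits&gt;</code> pattern with a regex after the split. This assumes version suffixes always look like <code>v</code> followed by digits at the end of the segment.

```python
import re

path = ("/mnt/lake/RAW/Internal/origination/dbo/accountv2/1/"
        "Year=2023/Month=2/Day=2/Time=04-09")

segment = path.split("/")[7]           # 'accountv2'
name = re.sub(r"v\d+$", "", segment)   # strip the trailing version suffix
```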
|
<python><pyspark>
|
2023-02-02 16:38:39
| 1
| 3,011
|
Patterson
|
75,326,238
| 1,008,794
|
How to omit dependency when exporting requirements.txt using Poetry
|
<p>I have a Python3 Poetry project with a <code>pyproject.toml</code> file specifying the dependencies:</p>
<pre><code>[tool.poetry.dependencies]
python = "^3.10"
nltk = "^3.7"
numpy = "^1.23.4"
scipy = "^1.9.3"
scikit-learn = "^1.1.3"
joblib = "^1.2.0"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
</code></pre>
<p>I export those dependencies to a <code>requirements.txt</code> file using the command <code>poetry export --without-hashes -f requirements.txt --output requirements.txt</code> resulting in the following file <code>requirements.txt</code>:</p>
<pre><code>click==8.1.3 ; python_version >= "3.10" and python_version < "4.0"
colorama==0.4.6 ; python_version >= "3.10" and python_version < "4.0" and platform_system == "Windows"
joblib==1.2.0 ; python_version >= "3.10" and python_version < "4.0"
nltk==3.8.1 ; python_version >= "3.10" and python_version < "4.0"
numpy==1.24.1 ; python_version >= "3.10" and python_version < "4.0"
regex==2022.10.31 ; python_version >= "3.10" and python_version < "4.0"
scikit-learn==1.2.1 ; python_version >= "3.10" and python_version < "4.0"
scipy==1.9.3 ; python_version >= "3.10" and python_version < "4.0"
threadpoolctl==3.1.0 ; python_version >= "3.10" and python_version < "4.0"
tqdm==4.64.1 ; python_version >= "3.10" and python_version < "4.0"
</code></pre>
<p>that I use to install the dependencies when building a Docker image.</p>
<p><strong>My question:</strong> How can I omit the <code>colorama</code> dependency in the above list of requirements when calling <code>poetry export --without-hashes -f requirements.txt --output requirements.txt</code>?</p>
<p><strong>Possible solution:</strong> I could filter out the line with <code>colorama</code> by producing the <code>requirements.txt</code> file using <code>poetry export --without-hashes -f requirements.txt | grep -v colorama > requirements.txt</code>. But that seems hacky and may break things in case the Colorama requirement is expressed across multiple lines in that file. Is there a better and less hacky way?</p>
<p><strong>Background:</strong> When installing this list of requirements while building the Docker image using <code>pip install -r requirements.txt</code> I get the message</p>
<pre><code>Ignoring colorama: markers 'python_version >= "3.10" and python_version < "4.0" and platform_system == "Windows"' don't match your environment
</code></pre>
<p>A coworker thinks that message looks ugly and would like it not to be visible (but personally I don't care). A call to <code>poetry show --tree</code> reveals that the Colorama dependency is required by <code>pytest</code> and is used to make terminal colors work on Windows. Omitting the library as a requirement when installing on Linux is not likely a problem in this context.</p>
|
<python><pip><python-poetry><requirements.txt>
|
2023-02-02 16:35:36
| 1
| 4,931
|
Rulle
|
75,326,179
| 11,027,207
|
import package dynamically
|
<p>I'd like to understand if there's a better solution for the following tree:</p>
<pre><code>|-main.py
├── app_config
│ ├── db_config.py
| |--- settings.py
</code></pre>
<p>main.py imports a class from db_config:</p>
<pre><code>from app_config.db_config import DBContext
</code></pre>
<p>So in db_config.py, each import of another module has to be resolved from the context/position of main, meaning:</p>
<pre><code>from app_config.settings import SingletonMeta
</code></pre>
<p>This obviously works, but during development I sometimes need to test db_config.py by itself, and each time I need to change the import of settings.py to:</p>
<pre><code>from settings import SingletonMeta
</code></pre>
<p>I wonder if there's a more efficient way.</p>
|
<python><import>
|
2023-02-02 16:32:03
| 1
| 424
|
AviC
|
75,326,066
| 19,155,645
|
Coco annotations: convert RLE to polygon segmentation
|
<p>I have COCO-style annotations (JSON format) with both segmentations and bboxes.<br>
Most of the segmentations are given as list-of-lists of the pixels (polygon).</p>
<p>The problem is that some segmentations are given as a dictionary (with 'counts' and 'size' keys) that represent RLE values, and in these cases the 'iscrowd' key is equal to 1 (normally it is equal to 0).</p>
<p>I would like to convert all the 'annotations' with iscrowd==1 to be represented as polygons instead of RLE.</p>
<p>I do not need the mask <a href="https://stackoverflow.com/a/74957352/19155645">as suggested here</a>, but just the json file to have only polygon shaped segmentations.</p>
<p>Here is an example of a few annotations (from the same image), note how in the first two the segmentation is in polygon shape, and the latter two it is in RLE shape:</p>
<pre><code>{'id': 53, 'image_id': 4, 'category_id': 2037037930, 'segmentation': [[344.51, 328.83, 316.02, 399.73, 358.3, 399.78, 375.85, 336.07]], 'area': 2561.4049499999965, 'bbox': [316.02, 328.83, 59.83000000000004, 70.94999999999999], 'iscrowd': 0, 'extra': {}}
{'id': 54, 'image_id': 4, 'category_id': 2037037930, 'segmentation': [[376.43, 233.52, 368.93, 250.71, 375.96, 252.89, 369.4, 269.76, 378.62, 273.83, 372.21, 292.42, 400.09, 302.34, 400.09, 302.11, 400.1, 242.04]], 'area': 1596.5407000000123, 'bbox': [368.93, 233.52, 31.170000000000016, 68.81999999999996], 'iscrowd': 0, 'extra': {}}
{'id': 67, 'image_id': 4, 'category_id': 2037037930, 'segmentation': {'counts': [55026, 2, 396, 4, 394, 7, 391, 9, 389, 12, 386, 14, 384, 17, 381, 19, 379, 21, 377, 24, 374, 26, 372, 29, 369, 31, 367, 33, 365, 36, 362, 38, 360, 41, 357, 43, 355, 46, 352, 48, 350, 50, 348, 53, 345, 55, 343, 58, 340, 38, 1, 21, 338, 37, 5, 21, 335, 37, 7, 21, 335, 34, 10, 19, 338, 32, 12, 16, 340, 33, 11, 14, 342, 33, 11, 11, 346, 33, 11, 8, 348, 33, 10, 7, 350, 33, 8, 8, 351, 34, 5, 11, 351, 33, 3, 13, 351, 49, 351, 49, 352, 49, 351, 49, 351, 49, 352, 48, 352, 49, 351, 49, 352, 46, 354, 44, 356, 41, 359, 39, 362, 36, 364, 35, 365, 35, 366, 35, 365, 35, 365, 35, 366, 34, 366, 34, 366, 35, 366, 34, 366, 34, 366, 32, 368, 29, 372, 25, 375, 23, 377, 20, 381, 18, 382, 19, 381, 19, 382, 18, 382, 18, 382, 19, 382, 18, 382, 18, 382, 19, 381, 19, 382, 16, 384, 13, 387, 9, 392, 5, 395, 2, 73808], 'size': [400, 400]}, 'area': 2598.0, 'bbox': [137, 174, 79, 65], 'iscrowd': 1, 'extra': {}}
{'id': 68, 'image_id': 4, 'category_id': 2037037930, 'segmentation': {'counts': [76703, 2, 396, 4, 394, 7, 391, 9, 389, 11, 387, 14, 384, 16, 382, 19, 379, 21, 377, 23, 375, 26, 372, 28, 370, 30, 368, 33, 365, 35, 364, 37, 363, 37, 364, 36, 364, 37, 364, 36, 364, 36, 364, 37, 364, 36, 364, 37, 363, 37, 364, 36, 364, 37, 364, 36, 364, 36, 364, 37, 364, 15, 1, 20, 364, 13, 4, 19, 365, 10, 6, 20, 363, 9, 8, 20, 361, 9, 11, 20, 358, 9, 13, 20, 356, 11, 14, 19, 354, 14, 13, 20, 351, 16, 13, 20, 348, 20, 13, 19, 346, 22, 13, 20, 343, 24, 13, 20, 341, 27, 13, 20, 338, 29, 13, 20, 336, 32, 13, 19, 334, 34, 13, 20, 331, 37, 12, 20, 331, 37, 13, 19, 332, 36, 12, 21, 331, 37, 8, 24, 332, 36, 5, 28, 331, 37, 1, 31, 331, 69, 332, 69, 331, 69, 332, 68, 332, 69, 331, 69, 332, 68, 332, 69, 332, 68, 332, 69, 331, 69, 332, 68, 332, 48, 1, 20, 331, 45, 5, 19, 332, 41, 8, 19, 332, 38, 12, 19, 332, 36, 13, 19, 332, 37, 12, 20, 331, 37, 13, 19, 332, 36, 13, 19, 332, 37, 13, 19, 332, 36, 13, 19, 332, 37, 12, 19, 332, 37, 13, 19, 332, 36, 13, 19, 332, 37, 13, 19, 332, 36, 12, 20, 332, 36, 10, 22, 332, 37, 6, 26, 332, 36, 4, 28, 332, 37, 1, 28, 335, 63, 337, 61, 339, 59, 342, 56, 344, 53, 348, 50, 350, 48, 352, 46, 355, 43, 357, 40, 360, 38, 363, 35, 365, 33, 368, 30, 370, 28, 372, 25, 376, 22, 378, 20, 381, 17, 383, 15, 385, 12, 389, 9, 391, 7, 394, 4, 396, 2, 40521], 'size': [400, 400]}, 'area': 4551.0, 'bbox': [191, 253, 108, 82], 'iscrowd': 1, 'extra': {}}
</code></pre>
<hr />
<h4>Failed test 1:</h4>
<p>I already tried the following:</p>
<pre><code>for annotation in coco_data['annotations']:
    if type(annotation['segmentation']) == dict:
        # Get the values of the dictionary
        height = annotation['segmentation']['size'][0]
        width = annotation['segmentation']['size'][1]
        counts = annotation['segmentation']['counts']

        # Decode the RLE encoded counts
        rle = np.array(counts).reshape(-1, 2)
        starts, lengths = rle[:, 0], rle[:, 1]
        starts -= 1
        ends = starts + lengths
        pixels = []
        for lo, hi in zip(starts, ends):
            pixels.extend(range(lo, hi))
        pixels = np.array(pixels)

        # Convert the 1D pixels array to a 2D array
        segments = np.zeros((height, width), dtype=np.uint8)
        segments[pixels // width, pixels % width] = 1
        segments = np.where(segments == 1)

        # Update the segmentation and iscrowd fields
        annotation['segmentation'] = [segments[1].tolist(), segments[0].tolist()]
        annotation['iscrowd'] = 0
</code></pre>
<p>But got the following error:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-29-1bf7f4af292c> in <module>
16
17 # Decode the RLE encoded counts
---> 18 rle = np.array(counts).reshape(-1, 2)
19 starts, lengths = rle[:, 0], rle[:, 1]
20 starts -= 1
ValueError: cannot reshape array of size 183 into shape (2)
</code></pre>
<p>AFAIK, it expects the RLE to have an even length? I'm not sure where the problem is or how to solve it.</p>
<hr />
<h4>Failed test 2:</h4>
<p>then i tried something a bit different with <code>import pycocotools.mask as mask</code> and <code>import skimage.measure as measure</code> and the following function:</p>
<pre><code>def rle_to_polygon(rle, height, width):
    if isinstance(rle, list):
        rle = mask.frPyObjects(rle, height, width)
    rle = mask.decode(rle)
    contours = measure.find_contours(rle, 0.5)
    polygon = []
    for contour in contours:
        contour = np.fliplr(contour) - 1
        contour = contour.clip(min=0)
        contour = contour.astype(int)
        if len(contour) >= 4:
            polygon.append(contour.tolist())
    return polygon
</code></pre>
<p>I receive</p>
<pre><code><ipython-input-43-84d17a601509> in rle_to_polygon(rle, height, width)
79 def rle_to_polygon(rle, height, width):
80 if isinstance(rle, list):
---> 81 rle = mask.frPyObjects(rle, height, width)
82 rle = mask.decode(rle)
83 contours = measure.find_contours(rle, 0.5)
pycocotools/_mask.pyx in pycocotools._mask.frPyObjects()
TypeError: object of type 'int' has no len()
</code></pre>
<p>Any suggestions would be highly appreciated!</p>
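A note on the first failed attempt: COCO's "uncompressed" RLE stores alternating run lengths of background and foreground pixels in column-major order, starting with background. It is a list of runs, not (start, length) pairs, so the counts list can legitimately have odd length and <code>reshape(-1, 2)</code> fails. A pure-NumPy decoder sketch:

```python
import numpy as np

# Decode a COCO uncompressed RLE dict {"counts": [...], "size": [h, w]}
# into a binary (h, w) mask. Runs alternate background/foreground and are
# laid out column-major, hence the transpose at the end.
def rle_to_mask(rle):
    h, w = rle["size"]
    flat = np.zeros(h * w, dtype=np.uint8)
    pos, val = 0, 0                  # the first run is background (0)
    for run in rle["counts"]:
        flat[pos:pos + run] = val
        pos += run
        val = 1 - val                # alternate background/foreground
    return flat.reshape((w, h)).T    # column-major -> (h, w)
```

From the mask, contours/polygons can then be extracted, e.g. with <code>skimage.measure.find_contours</code> as in the second attempt; with pycocotools installed, <code>mask.decode(mask.frPyObjects(rle, h, w))</code> should produce the same mask from the uncompressed RLE dict.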
|
<python><machine-learning><computer-vision><annotations><image-segmentation>
|
2023-02-02 16:22:04
| 1
| 512
|
ArieAI
|
75,326,012
| 16,688,854
|
Compute and fill an array in parallel
|
<p>As part of a signal processing task, I am doing some computation per frequency step.</p>
<p>I have a <strong>frequencies</strong> list of <strong>length 513</strong>.</p>
<p>I have a 3D numpy array <strong>A</strong> of <strong>shape (81,81,513)</strong>, where 513 is the number of frequency points. I then have a 81x81 matrix per frequency.</p>
<p>I want to apply some modification to each of those matrices, to end up with a modified version of A I'll name <strong>B</strong> here, which will also be of <strong>shape (81,81,513)</strong>.</p>
<p>For that, I start pre-allocating B with :</p>
<pre><code>B = np.zeros_like(A)
</code></pre>
<p>I then loop over my frequencies and call a <strong>dothing</strong> function like:</p>
<pre><code>for index, frequency in enumerate(frequencies):
B[:,:,index] = dothing(A[:,:,index])
</code></pre>
<p>The problem is that <strong>dothing</strong> takes a lot of time, and ran sequentially over 513 frequency steps seems endless.</p>
<p>So I wanted to parallelize it. But even after reading a lot of docs and watching a lot of videos, I get lost in all the libraries and potential solutions.
Computations at all individual frequencies can be done independently. But in the end I need to assign everything back to B in the right order.</p>
<p>Any idea on how to do that?</p>
<p>Thanks in advance</p>
<p>Antoine</p>
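A minimal sketch of one option (the <code>dothing</code> here is a placeholder, not the real computation): if the real <code>dothing</code> spends its time inside NumPy/SciPy calls, which release the GIL, a thread pool already gives real parallelism, and because each worker writes into its own <code>B[:, :, i]</code> slice the result lands in the right order with no reassembly step.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def dothing(m):
    return m * 2.0  # stand-in for the real per-frequency computation

A = np.random.rand(81, 81, 513)
B = np.zeros_like(A)

def work(i):
    # Each index writes a disjoint slice of B, so no locking is needed.
    B[:, :, i] = dothing(A[:, :, i])

with ThreadPoolExecutor() as pool:
    list(pool.map(work, range(A.shape[2])))
```

If <code>dothing</code> is instead pure-Python bound, threads won't help; a <code>ProcessPoolExecutor</code> (returning each slice and assigning <code>B[:, :, i]</code> in the parent, since processes don't share memory) would be the analogous approach.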
|
<python><multiprocessing>
|
2023-02-02 16:17:25
| 1
| 337
|
Antoine101
|
75,325,992
| 8,598,605
|
Python: Accelerate numpy brute force 2d image searching
|
<p>I have two images, where part of one image is present in another, however the position is not necessarily equal.</p>
<p><a href="https://i.sstatic.net/ZDk5u.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZDk5u.jpg" alt="First image" /></a><a href="https://i.sstatic.net/4ex7L.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4ex7L.jpg" alt="Second image" /></a></p>
<p>These two example images are significantly less complex than the typical drawing, <strong>so no keypoint feature matching methods will work</strong>. The objective is to find the coordinates of the top left corner of the second image that optimally joins to the two images together.</p>
<p>My approach is simply brute force, counting the black pixels to determine when the largest overlap has been achieved. This method works most of the time, however for large images it is obviously extremely slow.</p>
<p>I'm hoping to accelerate this either with GPU acceleration or just optimisation of the method. Numba @jit decorators aren't compatible with mask-based numpy indexing, so that won't work; the only obvious thing I can think of is Cython. CuPy has also not produced good results.</p>
<p><strong>Separate to reducing the number of loops (reducing search space or increasing step) are there any obvious accelerations that can be made?</strong></p>
<pre><code>def get_overlap_box(box1, box2):
    # Unpack the coordinates of the two boxes
    x1, y1, w1, h1 = box1
    x2, y2, w2, h2 = box2

    # Calculate the horizontal overlap
    x_overlap = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))

    # Calculate the vertical overlap
    y_overlap = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))

    # Calculate the top-left corner of the overlap box
    x_overlap_start = max(x1, x2)
    y_overlap_start = max(y1, y2)

    return [x_overlap_start, y_overlap_start, x_overlap, y_overlap]


def find_join_point(img1, img2, step=2):
    im1h, im1w = img1.shape
    im2h, im2w = img2.shape
    canvas1 = img1.copy()
    canvas2 = img2.copy()
    count = []
    black_pixel_count_dict_reverse = {}

    for x in tqdm.tqdm(range(-im2w, im1w, step)):
        for y in range(-im2h, im1h, step):
            # get the coordinates of the two images relative to the top left corner of the first image
            box2 = [x, y, im2w, im2h]
            box1 = [0, 0, im1w, im1h]

            # Get coordinates of the overlapping box relative to the first image
            box3 = get_overlap_box(box1, box2)

            # extract the overlapping region from both canvases for comparison
            sampleregion1 = canvas1[box3[1]:box3[3] + box3[1], box3[0]:box3[2] + box3[0]]
            replacement = sampleregion1.copy()
            sampleregion2 = canvas2[box3[1] - y:box3[3] + box3[1] - y, box3[0] - x:box3[2] + box3[0] - x]

            # Don't continue for completely white sample regions
            if np.all(sampleregion1 == 255) or np.all(sampleregion2 == 255):
                continue

            # merge the two overlapping regions
            sampleregion1[sampleregion2 == 0] = 0

            # replace the original with the new sample region
            canvas1[box3[1]:box3[3] + box3[1], box3[0]:box3[2] + box3[0]] = sampleregion1

            # count the black pixels across both images
            black_pixel_count = np.count_nonzero(canvas1 == 0) - np.count_nonzero(sampleregion2 == 0)
            black_pixel_count_dict_reverse[black_pixel_count] = [x, y]

            # reset the first canvas
            canvas1[box3[1]:box3[3] + box3[1], box3[0]:box3[2] + box3[0]] = replacement

            # keep the count for analysis later
            count.extend([black_pixel_count])

    # find the x,y position with the minimum black pixel count
    optimum = black_pixel_count_dict_reverse[min(count)]
    print(optimum)
    return optimum
</code></pre>
<p>Thanks in advance.</p>
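One algorithmic observation (a sketch, not a drop-in replacement for the code above): counting overlapping black pixels for every (dy, dx) shift is exactly a 2-D cross-correlation of the two binary "black" masks, which an FFT computes for all shifts at once instead of one shift per loop iteration.

```python
import numpy as np

# scores[dy % H, dx % W] = number of overlapping True pixels when mask2 is
# shifted by (dy, dx) relative to mask1; zero-padding to the full shift
# range folds negative shifts onto the high wrapped indices.
def overlap_scores(mask1, mask2):
    h = mask1.shape[0] + mask2.shape[0] - 1
    w = mask1.shape[1] + mask2.shape[1] - 1
    f1 = np.fft.rfft2(mask1.astype(np.float64), s=(h, w))
    f2 = np.fft.rfft2(mask2.astype(np.float64), s=(h, w))
    corr = np.fft.irfft2(f1 * np.conj(f2), s=(h, w))
    return np.rint(corr).astype(np.int64)

# Tiny illustration: a 2x2 black block at (1, 1) in an 8x8 image, matched
# against a 2x2 all-black template.
img1_black = np.zeros((8, 8), bool); img1_black[1:3, 1:3] = True
img2_black = np.ones((2, 2), bool)
scores = overlap_scores(img1_black, img2_black)
dy, dx = np.unravel_index(scores.argmax(), scores.shape)
```

Indices past <code>img1.shape - 1</code> correspond to negative shifts (<code>dy - h</code>). This turns the O(W·H) shifts × O(W·H) counting into a few FFTs, and NumPy's FFT needs no extra dependencies; <code>cv2.matchTemplate</code> is a similar ready-made alternative when the template fits inside the image.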
|
<python><numpy><optimization><gpu><numba>
|
2023-02-02 16:15:48
| 1
| 740
|
Jkind9
|
75,325,985
| 9,983,652
|
ValueError: too many values to unpack when using apply to return multiple values
|
<p>I am using the apply function to return 2 new columns, but I got an error and am not sure what is wrong. Thanks for your help.</p>
<pre><code>def calc_test(row):
    a = row['col1'] + row['col2']
    b = row['col1'] / row['col2']
    return (a, b)

df_test_dict = {'col1': [1, 2, 3, 4, 5], 'col2': [10, 20, 30, 40, 50]}
df_test = pd.DataFrame(df_test_dict)
df_test

col1 col2
0 1 10
1 2 20
2 3 30
3 4 40
4 5 50
</code></pre>
<pre><code>df_test['a'],df_test['b']=df_test.apply(lambda row:calc_test(row),axis=1)
df_test
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
C:\Temp\1/ipykernel_12160/3210544870.py in <module>
2 df_test=pd.DataFrame(df_test_dict)
3
----> 4 df_test['a'],df_test['b']=df_test.apply(lambda row:calc_test(row),axis=1)
5 df_test
ValueError: too many values to unpack (expected 2)
</code></pre>
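For context on the error: iterating over the result of <code>df.apply(...)</code> on the left-hand side unpacks its five row entries into two names, hence "too many values to unpack". One fix is <code>result_type="expand"</code>, which turns the returned tuple into two columns that can be assigned in one step:

```python
import pandas as pd

df_test = pd.DataFrame({"col1": [1, 2, 3, 4, 5], "col2": [10, 20, 30, 40, 50]})

def calc_test(row):
    return row["col1"] + row["col2"], row["col1"] / row["col2"]

# result_type="expand" makes apply return a DataFrame with one column per
# element of the returned tuple, assignable to two columns at once.
df_test[["a", "b"]] = df_test.apply(calc_test, axis=1, result_type="expand")
```

For these particular formulas, plain vectorized arithmetic (<code>df_test["col1"] + df_test["col2"]</code>) would avoid <code>apply</code> entirely and be faster.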
|
<python><pandas>
|
2023-02-02 16:14:22
| 1
| 4,338
|
roudan
|
75,325,959
| 472,226
|
Why would sum(array) give a different result depending on the order of the
|
<p>Why would the order of the values yield different totals?</p>
<pre class="lang-py prettyprint-override"><code>data = (
-81.9672,
48.3607,
48.3607,
40.9836,
-40.9836,
-40.9836,
81.9672,
81.9672,
-81.9672,
40.9836,
-48.3607,
-48.3607,
)
sum_order_1 = sum(data)
sum_order_2 = sum(sorted(data))
sum_order_3 = sum(sorted(data,key=lambda x:abs(x)))
print(sum_order_1) # Gives 1.4210854715202004e-14
print(sum_order_2) # Gives 2.842170943040401e-14
print(sum_order_3) # Gives 0.0
</code></pre>
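For reference, this is ordinary floating-point rounding: <code>sum</code> accumulates left to right and each partial sum rounds to the nearest double, so different orders accumulate different rounding error. <code>math.fsum</code> tracks all intermediate error and returns the correctly rounded true sum, which here is exactly zero regardless of order (each value's exact float negation appears the same number of times):

```python
import math

data = (-81.9672, 48.3607, 48.3607, 40.9836, -40.9836, -40.9836,
        81.9672, 81.9672, -81.9672, 40.9836, -48.3607, -48.3607)

# fsum is order-independent because it keeps exact partial sums.
print(math.fsum(data))          # 0.0
print(math.fsum(sorted(data)))  # 0.0
```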
|
<python><precision>
|
2023-02-02 16:12:22
| 0
| 12,853
|
Ronnis
|
75,325,876
| 3,435,121
|
class hidden from module dictionary
|
<p>I'm developing a code analysis tool for Python programs.<br />
I'm using introspection techniques to navigate the program structure.<br />
Recently, I tested my tool on big packages like tkinter and matplotlib. It worked well.<br />
But I found an oddity when analyzing numpy.</p>
<pre><code>import numpy, inspect

for elem in inspect.getmembers(numpy, inspect.isclass):
    print(elem)

print('Tester' in dir(numpy))
print(numpy.__dict__['Tester'])
</code></pre>
<p>Result:</p>
<pre><code>blablabla
('Tester', <class 'numpy.testing._private.nosetester.NoseTester'>),
blablabla
True
KeyError: 'Tester'
</code></pre>
<p>getmembers() and dir() agree that there is a 'Tester' class but it is not in <code>__dict__</code> dictionary. I dug a little further:</p>
<pre><code>1 >>> import numpy,inspect
2 >>> d1 = inspect.getmembers( numpy)
3 >>> d2 = dir( numpy)
4 >>> d3 = numpy.__dict__.keys()
5 >>> len(d1),len(d2),len(d3)
6 (602, 602, 601)
7 >>> set([d[0] for d in d1]) - set(d3)
8 {'Tester'}
9 numpy.Tester
10 <class 'numpy.testing._private.nosetester.NoseTester'>
11 >>>
</code></pre>
<p>getmembers() and dir() agree, but <code>__dict__</code> does not. Line 8 shows that 'Tester' is not in <code>__dict__</code>.<br />
This brings up some questions:</p>
<ul>
<li>what is the mechanism used by numpy to hide the 'Tester' class?</li>
<li>where are getmembers() and dir() finding the reference to 'Tester' class?</li>
</ul>
<p>I'm using Python 3.9.2 and numpy 1.23.5</p>
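A sketch of the likely mechanism (module names illustrative): since PEP 562 (Python 3.7), a module may define <code>__getattr__</code> and <code>__dir__</code> in its namespace. NumPy uses this to build <code>Tester</code> lazily on first access, so <code>dir()</code> and <code>getmembers()</code> report it while it never appears as a key in <code>__dict__</code>:

```python
import types

mod = types.ModuleType("demo")

def _module_getattr(name):
    # Called only when normal attribute lookup on the module fails.
    if name == "Tester":
        return "lazily built Tester"
    raise AttributeError(name)

def _module_dir():
    # Consulted by dir(mod) in place of the default listing.
    return ["Tester"]

mod.__getattr__ = _module_getattr
mod.__dir__ = _module_dir

assert "Tester" in dir(mod)       # dir() sees it...
assert "Tester" not in vars(mod)  # ...but __dict__ has no such key
assert mod.Tester == "lazily built Tester"  # attribute access falls back
```

<code>inspect.getmembers</code> iterates over <code>dir()</code> and fetches each name with <code>getattr</code>, which is why it finds the class while a direct <code>__dict__['Tester']</code> lookup raises KeyError.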
|
<python><introspection>
|
2023-02-02 16:05:57
| 1
| 675
|
user3435121
|
75,325,828
| 15,414,616
|
Python thread safe singleton stuck when used
|
<p>I tried to implement a thread safe singleton for my Python code. I tried these 2 pieces of code, but both of them get stuck when a class using the Singleton metaclass is called from my unit tests.</p>
<p>1 (check-lock-check):</p>
<pre><code>import functools
import threading
from typing import Callable


def synchronized(thread_lock: threading.Lock):
    """Synchronization decorator."""
    def wrapper(function: Callable):
        @functools.wraps(function)
        def inner_wrapper(*args: list, **kw: dict):
            with thread_lock:
                return function(*args, **kw)
        return inner_wrapper
    return wrapper


class Singleton(type):
    _instances = {}
    _lock = threading.Lock()

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._locked_call(*args, **kwargs)
        return cls._instances[cls]

    @synchronized(_lock)
    def _locked_call(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
</code></pre>
<p>2 (simple lock):</p>
<pre><code>from threading import Lock
class Singleton(type):
_instances = {}
_lock: Lock = Lock()
def __call__(cls, *args, **kwargs):
if cls not in cls._instances:
with cls._lock:
instance = super().__call__(*args, **kwargs)
cls._instances[cls] = instance
return cls._instances[cls]
</code></pre>
<p>Does anyone know why my code gets stuck on this implementation when I run it locally (for unit tests, for example)? Once the app is deployed and actually uses multithreading, everything is fine.</p>
<p>And do you have suggestions for something else that could work with what I need?</p>
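For comparison, here is a sketch (my own variant, not the code above) using a re-entrant lock with double-checked locking. If the hang comes from the singleton's <code>__init__</code> constructing another singleton on the same thread, a plain <code>Lock</code> deadlocks on the second acquisition while an <code>RLock</code> does not:

```python
import threading

class Singleton(type):
    _instances = {}
    _lock = threading.RLock()  # re-entrant: safe if __init__ re-enters __call__

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:          # fast path, no lock taken
            with cls._lock:
                if cls not in cls._instances:  # re-check under the lock
                    cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class Config(metaclass=Singleton):
    def __init__(self):
        self.value = 42

a, b = Config(), Config()
print(a is b)  # True
```

The outer check avoids taking the lock on every call; the inner check makes the creation race-free.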
<p>Thanks.</p>
|
<python><multithreading><singleton>
|
2023-02-02 16:02:00
| 1
| 437
|
Ema Il
|
75,325,514
| 1,075,653
|
How can I add the currently logged in username to the access log of django.server?
|
<p>I'm trying to add the currently logged in username, if any, to the access log of a Django app:</p>
<pre><code>INFO [django.server:161] "GET / HTTP/1.1" 200 116181
^ username should go here
</code></pre>
<p>My main problem is how do I share the user/username between the middleware and the filter, since in the filter I don't have access to the <code>request</code> object?</p>
<p>I've got a working solution using thread-local for storage, but this doesn't <a href="https://stackoverflow.com/questions/3227180/why-is-using-thread-locals-in-django-bad">seem</a> <a href="https://softwareengineering.stackexchange.com/questions/148108/why-is-global-state-so-evil/148154#148154">like</a> <a href="https://stackoverflow.com/questions/25949059/is-threading-local-a-safe-way-to-store-variables-for-a-single-request-in-googl">a</a> <a href="https://stackoverflow.com/questions/56371373/contextvars-across-modules/56387149#56387149">good</a> <a href="https://stackoverflow.com/questions/66631489/does-django-use-one-thread-to-process-several-requests-in-wsgi-or-gunicorn">idea</a>.</p>
<p>Especially since I can't cleanup the value in <code>process_request</code> as it is then cleared too early, before the log line is printed.</p>
<h1>Solution with <code>threading.local()</code></h1>
<h2><code>log.py</code></h2>
<pre class="lang-py prettyprint-override"><code>import logging
import threading
local = threading.local()
class LoggedInUsernameFilter(logging.Filter):
def filter(self, record):
user = getattr(local, 'user', None)
if user and user.username:
record.username = user.username
else:
record.username = '-'
return True
class LoggedInUserMiddleware(object):
def __init__(self, get_response):
self.get_response = get_response
def __call__(self, request):
self.process_request(request)
return self.get_response(request)
def process_request(self, request):
from django.contrib.auth import get_user
user = get_user(request)
setattr(local, 'user', user)
</code></pre>
<h2>django settings</h2>
<pre class="lang-py prettyprint-override"><code>MIDDLEWARE = (
...
'commons.log.LoggedInUserMiddleware',
...
)
...
LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'formatters': {
...
'verbose_with_username': {
'format': '[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(username)s %(message)s'
}
},
'filters': {
'logged_in_username': {
'()': 'commons.log.LoggedInUsernameFilter',
}
},
'handlers': {
'console_with_username': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'verbose_with_username',
'stream': sys.stdout,
},
},
'loggers': {
...
'django.server': {
'handlers': ['console_with_username'],
'level': 'INFO',
'propagate': False,
'formatter': 'verbose_with_username',
'filters': ['logged_in_username', ],
},
...
},
}
</code></pre>
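For comparison, here is a minimal sketch of the <code>contextvars</code> alternative; the Django wiring is assumed (the real middleware would call <code>current_user.set(...)</code> before <code>get_response</code> and reset it after), and only the filter logic is simulated:

```python
import contextvars
import logging

# One context variable per piece of request-scoped state.
current_user = contextvars.ContextVar("current_user", default=None)

class LoggedInUsernameFilter(logging.Filter):
    def filter(self, record):
        record.username = current_user.get() or "-"
        return True

# Simulated request handling: "middleware" sets the var, logging reads it.
token = current_user.set("alice")
record = logging.LogRecord("django.server", logging.INFO, __file__, 0,
                           '"GET / HTTP/1.1" 200 116181', None, None)
LoggedInUsernameFilter().filter(record)
current_user.reset(token)  # "middleware" cleanup after the response

print(record.username)  # alice
```

Unlike <code>threading.local</code>, a <code>ContextVar</code> is scoped per logical context, so it also behaves correctly under async servers.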
|
<python><django><django-admin>
|
2023-02-02 15:38:42
| 1
| 463
|
Emil Burzo
|
75,325,378
| 3,375,378
|
gradio refresh interface when selecting File
|
<p>I'm trying to create a gradio User Interface which does the following</p>
<ol>
<li>on the left panel I have a File control, that allows the selection of a local file (eg. a .csv)</li>
<li>when a file is selected a "Process" button should be made visible</li>
<li>when the "Process" button is pressed, a function is called, reading the contents of the file, and processing it in some ways, resulting in a string</li>
<li>the resulting string is shown in a TextArea in the right column</li>
</ol>
<p>I'm stuck implementing point 2. I can select the file, but can't make the Process button become visible.</p>
<p>This is my code so far (not yet implementing points 3 and 4):</p>
<pre><code>import gradio as gr
def file_selected(file_input):
print("yes, file_selected is invoked")
print(process_button)
process_button.visible=True
demo.render()
return process_button
with gr.Blocks() as demo:
with gr.Row():
with gr.Column(scale=1):
gr.Markdown("### Data")
file_input = gr.File(label="Select File")
process_button = gr.Button("Process", visible=False)
with gr.Column(scale=2, min_width=600):
gr.Markdown("### Output")
result_display = gr.TextArea(default="", label="Result", lines=10, visible=False)
file_input.change(fn=file_selected, inputs=file_input, outputs=process_button)
if __name__ == "__main__":
demo.launch()
</code></pre>
<p>I see that the message is printed when a file is selected (and <code>print(process_button)</code> prints <code>"button"</code>, so I'm sure this variable is not None), but the button doesn't appear on the page.</p>
<p><strong>edited:</strong> fixed some errors not directly related to the problem.</p>
|
<python><gradio>
|
2023-02-02 15:28:31
| 1
| 2,598
|
chrx
|
75,325,367
| 293,995
|
Getting the GPS position of coordinates (x,y) on a Google maps API satellite image in function of the zoom level
|
<p>Using YOLO to detect features on Google Maps API satellite images, I get the (x, y) coordinates of each feature, with (0, 0) being the top-left corner. YOLO also provides the width and height of the bounding box, and I have the GPS position of the center of the image.</p>
<p>I would like to get the GPS coordinates for the center of each feature.</p>
<pre><code>def getGPSPosition(centerLat, centerLong, zoomLevel, x, y):
# calculate degrees per pixel ratio at the given zoom level
degreesPerPixel = 180 / pow(2,zoomLevel);
imageSize = 640
# calculate offset in degrees
deltaX = (x-imageSize/2) * degreesPerPixel
deltaY = (y-imageSize/2) * degreesPerPixel
# calculate gps position based on the center coordinates
gpsLat = centerLat + deltaY
gpsLong = centerLong + deltaX
return (gpsLat, gpsLong)
</code></pre>
<p>This is supposed to return the coordinates of the upper-left corner of the bounding box, but it misses the target: the result is roughly 50 m away from the correct point.</p>
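For reference, a sketch of the Web Mercator math that the linear <code>degreesPerPixel</code> formula is missing: Google tiles span 256 px for the whole world at zoom 0, so longitude covers 360 / (256 · 2^zoom) degrees per pixel, and latitude must go through the Mercator projection rather than a constant factor (the function name and the 640-px image size mirror the question; the rest is my assumption):

```python
import math

def pixel_to_gps(center_lat, center_lng, zoom, x, y, image_size=640):
    scale = 256 * 2 ** zoom                    # world width in pixels at this zoom
    # Center of the image in world (Web Mercator) pixel coordinates.
    siny = math.sin(math.radians(center_lat))
    cx = (center_lng + 180.0) / 360.0 * scale
    cy = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    # Offset from the image center (y grows downward), then back to lat/lng.
    wx = cx + (x - image_size / 2)
    wy = cy + (y - image_size / 2)
    lng = wx / scale * 360.0 - 180.0
    n = math.pi - 2.0 * math.pi * wy / scale   # inverse Mercator
    lat = math.degrees(math.atan(math.sinh(n)))
    return lat, lng

# Sanity check: the image center maps back to itself.
print(pixel_to_gps(51.5, -0.12, 18, 320, 320))  # ≈ (51.5, -0.12)
```

This assumes the standard 256-px world tile and <code>scale=1</code> imagery; a Static Maps request with <code>scale=2</code> doubles the pixel density, so pixel offsets would need to be halved.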
|
<python><google-maps-api-3><gis>
|
2023-02-02 15:27:35
| 1
| 2,631
|
hotips
|
75,325,188
| 7,042,778
|
Problem loading a function in Python to_sql
|
<p>I have the following code in two scripts called Playing_DB_Around.py with the following code:</p>
<pre><code>import data_base.Calc_Func as calc
import pandas as pd
df = pd.DataFrame({'Name': ['John', 'Jane', 'Jim'], 'Age': [25, 30, 35]})
db = calc.Database("example_db")
calc.Database.to_sql(df, "example_table")
</code></pre>
<p>This code loads a own written bib which is in the script Calc_Func.py where the code is:</p>
<pre><code>from sqlalchemy import create_engine
class Database:
def __init__(self, db_file):
self.engine = create_engine(f'sqlite:///{db_file}')
def to_sql(self, df, table_name):
df.to_sql(table_name, self.engine, if_exists='replace', index=False)
</code></pre>
<p>When executing Playing_DB_Around.py I receive the following error message. I am confused because when the class and the execution are in one script, it seems to work. I have tried many things but somehow can't get it to run.</p>
<p>Error message.</p>
<blockquote>
<p>Traceback (most recent call last): File "Playing_DB_Around.py", line
9, in
calc.Database.to_sql(df, "example_table") TypeError: to_sql() missing 1 required positional argument: 'table_name'</p>
</blockquote>
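The traceback is the classic symptom of calling the method on the class instead of an instance: <code>calc.Database.to_sql(df, "example_table")</code> binds <code>self = df</code>, leaving <code>table_name</code> unfilled. A stand-in class (no SQLAlchemy needed) reproduces the difference:

```python
class Database:
    def __init__(self, db_file):
        self.db_file = db_file

    def to_sql(self, df, table_name):
        # Stand-in body: just report what was bound to what.
        return (self.db_file, table_name)

db = Database("example_db")
print(db.to_sql("fake_df", "example_table"))       # self is db: works

try:
    Database.to_sql("fake_df", "example_table")    # self is "fake_df"!
except TypeError as exc:
    print("TypeError:", exc)                       # same error as in the question
```

So the fix would presumably be to call the method on the instance you already created: <code>db.to_sql(df, "example_table")</code>.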
|
<python><sqlite><class><sqlalchemy><pandas-to-sql>
|
2023-02-02 15:12:28
| 1
| 1,511
|
MCM
|
75,325,121
| 13,517,174
|
How do you set up a docker container that depends on multiple python libraries being installed?
|
<p>I am trying to create a Docker container that always runs mypy in the same environment. The library I want to run mypy on has multiple dependencies, so I have to install those first and have access to them while evaluating the library that was passed. This is what it currently looks like; in this example I am only installing <code>scipy</code> as an external dependency, and later I would install from a regular <code>requirements.txt</code> file instead:</p>
<pre><code>FROM ubuntu:22.04 as builder
RUN apt-get update && apt-get install -y \
bc \
gcc \
musl-dev \
python3-pip \
python3 \
python3-dev
RUN python3.10 -m pip install --no-cache-dir --no-compile scipy && \
python3.10 -m pip install --no-cache-dir --no-compile mypy
FROM ubuntu:22.04 as production
RUN apt-get update && apt-get install -y \
    python3
COPY --from=builder /usr/local/lib/python3.10/dist-packages /usr/local/lib/python3.10/dist-packages
COPY --from=builder /usr/local/bin/mypy /usr/local/bin/mypy
WORKDIR /data
ENTRYPOINT ["python3.10", "-m", "mypy"]
</code></pre>
<p>I install and run my container with</p>
<pre><code>docker build -t my-package-mypy . && docker run -v $(pwd):/data my-package-mypy main.py
</code></pre>
<p>Where <code>main.py</code> is a simple one-line script that only imports scipy.</p>
<p>This returns the following output:</p>
<pre><code>main.py:1: error: Cannot find implementation or library stub for module named "scipy" [import]
main.py:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/mypy/__main__.py", line 37, in <module>
console_entry()
File "/usr/local/lib/python3.10/dist-packages/mypy/__main__.py", line 15, in console_entry
main()
File "mypy/main.py", line 95, in main
File "mypy/main.py", line 174, in run_build
File "mypy/build.py", line 193, in build
File "mypy/build.py", line 302, in _build
File "mypy/build.py", line 3579, in record_missing_stub_packages
PermissionError: [Errno 13] Permission denied: '.mypy_cache/missing_stubs'
</code></pre>
<p>Most importantly, the first line says that it cannot find the installation for scipy even though it was installed alongside mypy. How can I adjust my Dockerfile to get it to work as described?</p>
|
<python><docker><dockerfile><mypy>
|
2023-02-02 15:06:51
| 0
| 453
|
Yes
|
75,325,061
| 7,376,511
|
Caching an instance's method indefinitely raises pylint warning
|
<pre><code>class A:
@cache
def extremely_long_and_expensive_function(self) -> None:
# series of instructions that MUST access self
</code></pre>
<p>Pylint complains as follows:</p>
<blockquote>
<p>lru_cache(maxsize=None)' or 'cache' will keep all method args alive
indefinitely, including 'self'pylint(method-cache-max-size-none)</p>
</blockquote>
<p>But I could not find a satisfying solution online that actually tells me <strong>how</strong> to cache that method without building some contrived Rube Goldberg machine. How do I memoize <code>expensive_function</code> so that the method body runs exactly once, no matter how many times I call it?</p>
<p>Others have suggested using <code>@cached_property</code>, but this is not a property, so it feels wrong to write <code>A().expensive_function</code>. It's a function that executes initialization commands that are not always needed in every instance, and it doesn't return anything, so it should not be a property.</p>
<p>Surely there's some simple way to do this that I'm missing, I don't want to believe that such a simple use case requires a Frankenstein reimplementation like the answer in <a href="https://stackoverflow.com/a/33672499/11558993">https://stackoverflow.com/a/33672499/11558993</a>.</p>
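For what it's worth, a minimal sketch of one common workaround: memoize per instance with a flag stored on <code>self</code>, so nothing is kept alive beyond the instance itself and pylint has nothing to warn about (the <code>calls</code> counter is only there to demonstrate that the body runs once):

```python
class A:
    def expensive_function(self) -> None:
        # Per-instance guard: the body below runs at most once per instance.
        if getattr(self, "_initialized", False):
            return
        self._initialized = True
        # ... series of expensive instructions that access self ...
        self.calls = getattr(self, "calls", 0) + 1

a = A()
a.expensive_function()
a.expensive_function()
print(a.calls)  # 1: the body ran exactly once
```

If thread safety matters, the check-then-set would need a lock; and if the property syntax is acceptable after all, <code>functools.cached_property</code> remains the stdlib route.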
|
<python><caching><memoization>
|
2023-02-02 15:01:56
| 0
| 797
|
Some Guy
|
75,325,025
| 9,663,207
|
Parsing a pandas dataframe into a nested list object
|
<p>Does anyone have a neat way of packing a dataframe including some columns which indicate hierarchy into a nested array?</p>
<p>Say I have the following data frame:</p>
<pre class="lang-py prettyprint-override"><code>from pandas import DataFrame
df = DataFrame(
{
"var1": [1, 2, 3, 4, 9],
"var2": [5, 6, 7, 8, 9],
"group_1": [1, 1, 1, 1, 2],
"group_2": [None, 1, 2, 1, None],
"group_3": [None, None, None, 1, None],
}
)
</code></pre>
<pre><code> var1 var2 group_1 group_2 group_3
0 1 5 1 NaN NaN
1 2 6 1 1.0 NaN
2 3 7 1 2.0 NaN
3 4 8 1 1.0 1.0
4 9 9 2 NaN NaN
</code></pre>
<p>The <code>group_</code> columns show that the records on the 2nd and 3rd rows are children of the one on the first row. The 4th row is a child of the 2nd row, and the last row of the table has no children. I am looking to derive something like the following:</p>
<pre class="lang-py prettyprint-override"><code>[
{
"var1": 1,
"var2": 5,
"children": [
{
"var1": 2,
"var2": 6,
"children": [{"var1": 4, "var2": 8, "children": []}],
},
{"var1": 3, "var2": 7, "children": []},
],
},
{"var1": 9, "var2": 9, "children": []},
]
</code></pre>
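A sketch of one possible approach (assuming, as in the example, that a parent row always appears before its children): treat the non-null group values of each row as a path, and attach the row's node to the node stored at <code>path[:-1]</code>:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "var1": [1, 2, 3, 4, 9],
        "var2": [5, 6, 7, 8, 9],
        "group_1": [1, 1, 1, 1, 2],
        "group_2": [None, 1, 2, 1, None],
        "group_3": [None, None, None, 1, None],
    }
)

def to_tree(frame, group_cols=("group_1", "group_2", "group_3")):
    roots, nodes = [], {}
    for row in frame.to_dict("records"):
        keys = [row.pop(c) for c in group_cols]          # strip hierarchy columns
        path = tuple(int(k) for k in keys if pd.notna(k))
        node = {**row, "children": []}
        nodes[path] = node
        parent = path[:-1]                               # () for top-level rows
        (nodes[parent]["children"] if parent in nodes else roots).append(node)
    return roots

tree = to_tree(df)
print(tree[1])  # the childless last row
```

Since lookups are by path, the pass is linear in the number of rows; a sort on the group columns beforehand would relax the parent-before-child assumption.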
|
<python><pandas><dataframe><recursion>
|
2023-02-02 14:58:16
| 1
| 724
|
g_t_m
|
75,324,905
| 10,260,243
|
Pandas groupby and then apply to_dict('records')
|
<p>Suppose I have the following data frame:</p>
<pre><code>df = pd.DataFrame({'a': [1,1,1,2], 'b': ['a', 'a', 'b', 'c'], 'd': [1, 2, 3, 4]})
</code></pre>
<p>And I want to end with the following dict:</p>
<pre><code>{1: [{'b':'a', 'd': 1}, {'b': 'a', 'd': 2}, {'b': 'b', 'd': 3}], 2: [{'b': 'c', 'd': 4}]}
</code></pre>
<p>Basically, I want to group by <code>a</code> and for each data frame I want to apply <code>to_dict('records')</code>.</p>
<p>What I tried was the following:</p>
<pre><code># dict ok but not a list
df.groupby('a').agg(list).to_dict('index')
{1: {'b': ['a', 'a', 'b'], 'd': [1, 2, 3]}, 2: {'b': ['c'], 'd': [4]}}
</code></pre>
<pre><code># the index disappears
df.groupby('a').agg(list).to_dict('records')
[{'b': ['a', 'a', 'b'], 'd': [1, 2, 3]}, {'b': ['c'], 'd': [4]}]
</code></pre>
<pre><code>df.set_index('a').to_dict('index')
ValueError: DataFrame index must be unique for orient='index'
</code></pre>
<p>I think I can do it using a for-loop but I'm almost sure there is a pythonic way to do it.</p>
|
<python><pandas>
|
2023-02-02 14:50:25
| 3
| 4,678
|
Bruno Mello
|
75,324,721
| 7,578,494
|
python-numpy, assigning the nearest valid value of a reference array
|
<p>Here are toy NumPy arrays:</p>
<pre class="lang-py prettyprint-override"><code>nrow = 10
ar_label = np.arange(nrow**2).reshape(nrow, nrow)
ar_label[1:4, 1:4] = 100
ar_label[6:9, 2:5] = 200
ar_label[2:5, 6:9] = 300
ar_label = np.where(ar_label<100, np.nan, ar_label)
ar_label
array([[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[ nan, 100., 100., 100., nan, nan, nan, nan, nan, nan],
[ nan, 100., 100., 100., nan, nan, 300., 300., 300., nan],
[ nan, 100., 100., 100., nan, nan, 300., 300., 300., nan],
[ nan, nan, nan, nan, nan, nan, 300., 300., 300., nan],
[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[ nan, nan, 200., 200., 200., nan, nan, nan, nan, nan],
[ nan, nan, 200., 200., 200., nan, nan, nan, nan, nan],
[ nan, nan, 200., 200., 200., nan, nan, nan, nan, nan],
[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]])
</code></pre>
<pre class="lang-py prettyprint-override"><code>np.random.seed(11)
ar_rand = np.random.randint(0, nrow*3, size=nrow**2).reshape(nrow, nrow)
ar_rand = np.where(ar_rand==0, ar_rand, np.nan)
ar_rand
array([[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan, 0., nan, nan, nan, nan],
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan, nan, nan, nan, 0., nan],
[nan, 0., nan, nan, nan, nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan, nan, 0., nan, nan, nan],
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan, nan, 0., nan, nan, nan],
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[nan, nan, 0., nan, nan, nan, nan, nan, nan, nan]])
</code></pre>
<p>Now, I want to replace each zero in <code>ar_rand</code> with the value of the nearest (by Euclidean distance over the two axes) non-NaN element of <code>ar_label</code>.</p>
<p>For example, the leftmost zero in <code>ar_rand</code> will be replaced with 100, the bottommost one with 200, and so on.</p>
<p>A solution using NumPy or Xarray will be preferred, but ones using other libraries are also welcome.</p>
<p>A desired solution shouldn't depend on the specific distributions of non-nan values of <code>ar_label</code> as the real data I am playing with has a different distribution.</p>
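If scipy is an option, <code>scipy.ndimage.distance_transform_edt</code> with <code>return_indices=True</code> gives, for every cell, the index of the nearest non-NaN cell, with no assumptions about how the valid values are distributed. A sketch on a small array:

```python
import numpy as np
from scipy import ndimage

# Small stand-ins for ar_label / ar_rand with the same NaN conventions.
ar_label = np.full((5, 5), np.nan)
ar_label[1:3, 1:3] = 100.0
ar_label[3:5, 3:5] = 200.0
ar_rand = np.full((5, 5), np.nan)
ar_rand[0, 0] = 0.0
ar_rand[4, 4] = 0.0

# distance_transform_edt measures distance to the nearest zero of its input,
# so feed it the NaN mask: "zeros" are exactly the valid cells of ar_label.
idx = ndimage.distance_transform_edt(
    np.isnan(ar_label), return_distances=False, return_indices=True
)
nearest = ar_label[tuple(idx)]               # every cell -> nearest valid value
filled = np.where(ar_rand == 0, nearest, ar_rand)
print(filled[0, 0], filled[4, 4])  # 100.0 200.0
```

The same two lines (<code>idx</code>, <code>nearest</code>) apply unchanged to the 10×10 arrays in the question.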
<p>Thank you.</p>
|
<python><numpy>
|
2023-02-02 14:37:11
| 4
| 343
|
hlee
|
75,324,711
| 18,814,386
|
Why is Python importing a package from another environment?
|
<p>I am using a separate Conda environment for the project (Python 3.10.9). When I import libraries and packages, they are imported from that environment's path. But when I import the streamlit-extras package, even though it is installed in the same environment, Python searches another version's path (Python 3.7) and therefore raises a ModuleNotFoundError. I looked <a href="https://stackoverflow.com/questions/67631/how-can-i-import-a-module-dynamically-given-the-full-path">here</a> and at other questions but couldn't solve my problem. I checked the current environment's location, and streamlit-extras is there.</p>
<pre><code>ModuleNotFoundError: No module named 'streamlit_extras.let_it_rain'
Traceback:
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/Users/abduraimovusmanjon/Desktop/danger/pages/1_🎢_Insights.py", line 16, in <module>
from streamlit_extras.let_it_rain import rain
</code></pre>
<p>As seen from the error, Python looks in another path (Python 3.7) for this package only, even though the package is installed in the environment I am working in. Every other import works without problems.
<a href="https://i.sstatic.net/D3QvM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D3QvM.png" alt="enter image description here" /></a></p>
<p>Is there an easier way to make Python stick to the current environment whenever I import?
Or any other way to solve this issue?</p>
<p>Thanks in advance for help!</p>
|
<python><visual-studio><import><environment><streamlit>
|
2023-02-02 14:36:40
| 1
| 394
|
Ranger
|
75,324,700
| 14,088,919
|
Divide two columns in pivot table and plot grouped bar chart with pandas
|
<p>I have a dataset that looks like this:</p>
<pre><code>df = pd.DataFrame({
'Vintage': ['2016Q1','2016Q1', '2016Q2','2016Q3','2016Q4','2016Q1', '2016Q2','2016Q2','2016Q2','2016Q3','2016Q4'],
'Model': ['A','A','A','A','A','B','B','B','B','B','B',],
'Count': [1,1,1,1,1,1,1,1,1,1,1],
'Case':[0,1,1,0,1,1,0,0,1,1,0],
})
Vintage Model Count Case
0 2016Q1 A 1 0
1 2016Q1 A 1 1
2 2016Q2 A 1 1
3 2016Q3 A 1 0
4 2016Q4 A 1 1
5 2016Q1 B 1 1
6 2016Q2 B 1 0
7 2016Q2 B 1 0
8 2016Q2 B 1 1
9 2016Q3 B 1 1
10 2016Q4 B 1 0
</code></pre>
<p>What I need to do is:</p>
<ol>
<li>Plot grouped bar chart, where <code>vintage</code> is the groups and <code>model</code> is the hue/color</li>
<li>Two line plots in the same chart that show the percentage of <code>case</code> over <code>count</code>, i.e., plot case divided by count for each model and vintage.</li>
</ol>
<p>I figured out how to do the first task with a pivot table but haven't been able to add the percentage from the same pivot.</p>
<p>This is the solution for point 1:</p>
<pre><code>dfp = df.pivot_table(index='vintage', columns='model', values='count', aggfunc='sum')
dfp.plot(kind='bar', figsize=(8, 4), rot=45, ylabel='Frequency', title="Vintages")
</code></pre>
<p><a href="https://i.sstatic.net/x4laP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x4laP.png" alt="enter image description here" /></a></p>
<p>I tried dividing columns within the pivot table, but the result is not in the right format to plot.</p>
<p>How can I do the percentage calculation and line plots without creating a separate table?</p>
<p>Could the whole task be done with <code>groupby</code> instead? (as I find it easier to use in general)</p>
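It can be done from a single <code>groupby</code>; a sketch of the computation (the plotting calls mentioned afterwards are the assumed part):

```python
import pandas as pd

df = pd.DataFrame({
    'Vintage': ['2016Q1', '2016Q1', '2016Q2', '2016Q3', '2016Q4', '2016Q1',
                '2016Q2', '2016Q2', '2016Q2', '2016Q3', '2016Q4'],
    'Model': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B'],
    'Count': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    'Case':  [0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0],
})

# One aggregation feeds both the bars (Count) and the lines (pct).
agg = df.groupby(['Vintage', 'Model'])[['Count', 'Case']].sum()
agg['pct'] = agg['Case'] / agg['Count']

counts = agg['Count'].unstack('Model')  # same shape as the pivot_table result
pct = agg['pct'].unstack('Model')       # Case/Count ratio per model and vintage
print(counts.loc['2016Q1', 'A'], round(pct.loc['2016Q2', 'B'], 3))
```

From there, something like <code>ax = counts.plot(kind='bar')</code> followed by <code>pct.plot(ax=ax, secondary_y=True)</code> should overlay the lines on the grouped bars, though the categorical x-alignment may need tweaking depending on the matplotlib version.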
|
<python><pandas><matplotlib><group-by><pivot-table>
|
2023-02-02 14:35:37
| 2
| 612
|
amestrian
|
75,324,641
| 10,007,302
|
unhashable type: 'DataFrame' error when trying to write DataFrame with to_sql
|
<p>I have this code block that reads a named range from Excel into a dataframe. Once I have that, I'm trying to upload it to a SQL table called projects, but I keep getting the following error:</p>
<pre><code> if key in metadata.tables:
TypeError: unhashable type: 'DataFrame
</code></pre>
<p>Any ideas? Code block below:</p>
<pre><code>import pandas as pd
from sqlalchemy import create_engine, text
import sqlalchemy
projects_info = data_frame_from_xlsx_range(fileloc,'projects_info')
user = 'root'
pw = 'test!*'
db = 'hcftest'
engine = create_engine("mysql+pymysql://{user}:{pw}@localhost:3306/{db}"
.format(user=user, pw=pw, db=db))
projects_info.to_sql('projects', con=engine, if_exists='replace')
</code></pre>
<p>Full error traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.1\plugins\python-ce\helpers\pydev\pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.1\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:\Users\waly\PycharmProjects\pythonProject\ExerciseFiles\ExerciseFiles\Chap01\sqlupdate.py", line 92, in <module>
update_table(engine, projects_info, projects_info)
File "C:\Users\waly\PycharmProjects\pythonProject\ExerciseFiles\ExerciseFiles\Chap01\sqlupdate.py", line 90, in update_table
newdata.to_sql(table_name, engine, if_exists='replace', index=False)
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\pandas\core\generic.py", line 2987, in to_sql
return sql.to_sql(
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\pandas\io\sql.py", line 695, in to_sql
return pandas_sql.to_sql(
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\pandas\io\sql.py", line 1728, in to_sql
table = self.prep_table(
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\pandas\io\sql.py", line 1621, in prep_table
table = SQLTable(
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\pandas\io\sql.py", line 805, in __init__
self.table = self._create_table_setup()
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\pandas\io\sql.py", line 1102, in _create_table_setup
return Table(self.name, meta, *columns, schema=schema)
File "<string>", line 2, in __new__
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\sqlalchemy\util\deprecations.py", line 277, in warned
return fn(*args, **kwargs) # type: ignore[no-any-return]
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\sqlalchemy\sql\schema.py", line 432, in __new__
return cls._new(*args, **kw)
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\sqlalchemy\sql\schema.py", line 462, in _new
if key in metadata.tables:
TypeError: unhashable type: 'DataFrame'
</code></pre>
|
<python><mysql><pandas><dataframe><sqlalchemy>
|
2023-02-02 14:31:38
| 0
| 1,281
|
novawaly
|
75,324,499
| 8,467,078
|
Proper way to do a True/False/"something" argument in Python
|
<p>I have a function in Python which should use a keyword argument, let's call it <code>foo</code>, to decide if a certain action should be performed always (<code>foo=True</code>), never (<code>foo=False</code>) or let an algorithm make the decision (<code>foo='auto'</code>). Minimum working example would look something like this:</p>
<pre><code>def frobble(bar, foo=True):
if foo == "auto":
print("do something automatically")
elif foo:
print("always do the thing")
else:
print("never do the thing")
return bar
</code></pre>
<p>With calls to the function:</p>
<pre><code>frobble(my_bar, True)
frobble(my_bar, False)
frobble(my_bar, "auto")
</code></pre>
<p>And while it works like this (thanks to Python being dynamically typed, I guess), I just don't think letting <code>foo</code> be a <code>bool</code> sometimes and a <code>str</code> at other times is the best solution, especially considering that a non-empty string is truthy and an empty string is falsy in Python. This might lead to issues when comparing it, e.g., an empty string might be passed without the caller intending the behavior associated with passing <code>False</code>. Now, I could of course always make the argument a <code>str</code> and use <code>foo='True'</code> instead of <code>foo=True</code>, and so on, but that also strikes me as potentially confusing to the user.</p>
<p>Is there an obvious (and "pythonic") way out of this that I'm missing?</p>
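One conventional option is a small <code>Enum</code>: the three states become explicit, the bool/str ambiguity disappears, and an unknown value fails loudly. A sketch (the enum and member names are my own choice):

```python
from enum import Enum

class Foo(Enum):
    ALWAYS = "always"
    NEVER = "never"
    AUTO = "auto"

def frobble(bar, foo: Foo = Foo.ALWAYS):
    if foo is Foo.AUTO:
        return "do something automatically"
    if foo is Foo.ALWAYS:
        return "always do the thing"
    return "never do the thing"

print(frobble(1, Foo.AUTO))   # do something automatically
print(frobble(1, Foo.NEVER))  # never do the thing
print(frobble(1))             # always do the thing
```

Alternatively, annotating the parameter as <code>typing.Literal[True, False, 'auto']</code> documents the tri-state in the signature while keeping the original call style.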
|
<python><arguments>
|
2023-02-02 14:20:48
| 2
| 345
|
VY_CMa
|
75,324,498
| 1,040,718
|
Switch python version in Ubuntu
|
<p>I'm using Python 3.6 installed with pyenv and apt-get. In the shell if I do <code>python3 --version</code> it shows</p>
<pre><code>python3 --version
Python 3.6.0
</code></pre>
<p>I'm using Ubuntu 20.04 ARM version, and it comes with Python 3.8 installed. However, when running a test command, it's invoking <code>/usr/bin/python3</code> which is <code>Python 3.8</code>. I'm getting the following error:</p>
<pre><code>import pexpect
/usr/local/lib/python3.8/dist-packages/pexpect/__init__.py:75: in <module>
from .pty_spawn import spawn, spawnu
/usr/local/lib/python3.8/dist-packages/pexpect/pty_spawn.py:14: in <module>
from .spawnbase import SpawnBase
E File "/usr/local/lib/python3.8/dist-packages/pexpect/spawnbase.py", line 224
E def expect(self, pattern, timeout=-1, searchwindowsize=-1, async=False):
E ^
E SyntaxError: invalid syntax
</code></pre>
<p>Under the hood, it's still using <code>Python 3.8</code>. How can I fix that?</p>
<p>** Update ** Since <code>Python 3.7</code>, <code>async</code> is a reserved keyword.</p>
|
<python><python-3.x><ubuntu>
|
2023-02-02 14:20:47
| 0
| 11,011
|
cybertextron
|
75,324,409
| 4,587,498
|
NetApp ONTAP with Python netapp-ontap - create vault policy
|
<p>I am trying to create a custom policy using the netapp-ontap Python library, version 9.11.1. I can do the same using the CLI command <code>snapmirror policy create</code> as shown <a href="https://docs.netapp.com/us-en/ontap-cli-97/snapmirror-policy-create.html#description" rel="nofollow noreferrer">here</a>, where I can specify <code>-type vault</code>. I don't see a way to do this using the Python library. I am assuming I should be using the <a href="https://library.netapp.com/ecmdocs/ECMLP2858435/html/resources/snapmirror_policy.html" rel="nofollow noreferrer">SnapmirrorPolicy</a> resource, but that does not allow me to specify a type and just creates the <code>mirror-vault</code> type.</p>
<p>Any ideas how could I get the custom vault policy created?</p>
|
<python><netapp><ontap>
|
2023-02-02 14:13:19
| 1
| 1,307
|
RVid
|
75,324,304
| 774,575
|
Matlab Dirichlet kernel equivalent in Python?
|
<p>Is there an equivalent in Python for Matlab <a href="https://fr.mathworks.com/help/signal/ref/diric.html" rel="nofollow noreferrer">diric</a>? The <a href="https://en.wikipedia.org/wiki/Dirichlet_kernel" rel="nofollow noreferrer">Dirichlet kernel</a> D<sub>L</sub>(ω) is: D<sub>L</sub>(ω) = sin(ωL/2) / sin (ω/2)</p>
<p>I found <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.dirichlet.html" rel="nofollow noreferrer">scipy.stats.dirichlet</a>, but that's not related (it's the Dirichlet distribution). It's no big deal to write the expanded form, but there is a limit to handle: D<sub>L</sub>(0) = L.</p>
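There is a direct equivalent: <code>scipy.special.diric(x, n)</code>, the periodic sinc sin(nx/2) / (n·sin(x/2)), i.e. Matlab's normalized <code>diric</code>, with the x = 0 limit handled internally. Multiplying by L recovers the unnormalized kernel from the question:

```python
import numpy as np
from scipy.special import diric

L = 7
x = np.array([0.0, 0.5, 1.0])

# scipy's diric is normalized by n; multiply by L for D_L = sin(xL/2)/sin(x/2).
D = L * diric(x, L)
print(D[0])  # 7.0  (the D_L(0) = L limit, no division-by-zero issue)
```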
|
<python><signal-processing>
|
2023-02-02 14:04:26
| 1
| 7,768
|
mins
|
75,324,275
| 4,689,991
|
Generating a certificate with multiple OU fields using pyopenssl
|
<p>I've been using <code>pyopenssl</code> for a while now to generate certificates, specifically the subject line like so:</p>
<pre><code>from OpenSSL import crypto
cert = crypto.X509()
cert.get_subject().C = certificate_params.country
cert.get_subject().ST = certificate_params.state
cert.get_subject().L = certificate_params.locality
cert.get_subject().O = certificate_params.organization
cert.get_subject().OU = certificate_params.organizational_unit
cert.get_subject().CN = certificate_params.common_name
</code></pre>
<p>Now I need to generate a certificate with multiple OU fields, meaning multiple separate OU entries, each with their own value. OpenSSL seems to allow that, and according to <a href="https://stackoverflow.com/a/17858752">this answer from another question</a> it can be achieved with the CLI, but I'd like to do that using <code>pyopenssl</code>, if it's even possible.</p>
<p>As far as I can tell there isn't a way to achieve this using my original method, as it calls <code>X509Name.__setattr__</code>, and that removes any old value instead of adding a new one to the subject line (as evident with the <code>If there's an old entry for this NID, remove it</code> comment). There are no other member functions in <code>X509Name</code> that allow me to get around that, as far as I can tell.</p>
<p>Is there any elegant method I'm missing, or would I need to use the CLI (or take the hacky route and emulate <code>pyopenssl</code> by calling <code>OpenSSL._util.lib.X509_NAME_add_entry_by_NID</code> myself)?</p>
|
<python><openssl><pyopenssl>
|
2023-02-02 14:01:47
| 0
| 745
|
GabeL
|
75,324,215
| 8,869,570
|
How to tell a non-class method which child class a child object is?
|
<p>I'm not well versed in inheritance in Python. I'm currently writing a set of child classes that inherit from a single parent class, e.g.,</p>
<pre><code>class child1(parent):
# define some stuff
class child2(parent):
# define some stuff
</code></pre>
<p>There's a separate non-class function that takes in one of the child objects and depending on which child object it is, it does certain things, e.g.,</p>
<pre><code>def function(child_object):
if child_object is child1:
# do something
elif child_object is child2:
# do something else
</code></pre>
<p>I am wondering if there's a native way in the polymorphism properties in Python to allow <code>function</code> to tell which child class <code>child_object</code> is? Currently, I have a string in the parent class that gets set to the name of the child class and that's how the distinction is made.</p>
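The native way is <code>isinstance()</code>; a sketch with hypothetical class names:

```python
class Parent:
    pass

class Child1(Parent):
    pass

class Child2(Parent):
    pass

def function(obj):
    # isinstance also matches subclasses; use type(obj) is Child1 for exact checks.
    if isinstance(obj, Child1):
        return "child1 behaviour"
    if isinstance(obj, Child2):
        return "child2 behaviour"
    raise TypeError(type(obj).__name__)

print(function(Child1()))  # child1 behaviour
print(function(Child2()))  # child2 behaviour
```

That said, the usual refactoring is to move the per-child behaviour into a method on each child and let attribute lookup do the dispatch; <code>functools.singledispatch</code> is another option when the function must stay outside the classes.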
|
<python><inheritance>
|
2023-02-02 13:57:29
| 0
| 2,328
|
24n8
|
75,324,214
| 8,179,502
|
How to initialize a parent class with a child object outside of the child's __init__ function
|
<p>We have two basic classes:</p>
<pre><code>class A:
def __init__(self) -> None:
pass
class B(A):
def __init__(self) -> None:
print(self)
A.__init__(self)
</code></pre>
<p>When initializing B, we can see that the "self" being passed to A is an instance of a B object.</p>
<pre><code>a = A()
b = B()
>> <__main__.B object at 0x0000021CC0E39700>
</code></pre>
<p>Now, if we print b, we can see that it is also an instance of a B object. However, A cannot be initialized with an instance of B outside of B.</p>
<pre><code>print(b)
A(b)
>> TypeError: __init__() takes 1 positional argument but 2 were given
</code></pre>
<p>And here <code>a</code> will be None:</p>
<pre><code>b = B()
a = A.__init__(b)
</code></pre>
<p>How can that be done?</p>
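A small sketch of what is going on: <code>__init__</code> initializes an object that already exists and always returns None, which is why <code>A.__init__(b)</code> yields None and why <code>A(b)</code> fails (A's <code>__init__</code> accepts no extra argument). The <code>tag</code> attribute below is purely illustrative:

```python
class A:
    def __init__(self):
        self.tag = "A-initialized"

class B(A):
    def __init__(self):
        A.__init__(self)   # same idea as in the question

b = B()
result = A.__init__(b)     # re-runs A's initializer on the existing object b
print(result)  # None: __init__ always returns None
print(b.tag)   # A-initialized: b itself was modified in place
```

If the goal is "make an A out of a B", the usual route is a separate constructor or classmethod on A that copies the relevant state, not <code>__init__</code>.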
|
<python>
|
2023-02-02 13:57:29
| 2
| 458
|
Pier-Olivier Marquis
|
75,324,124
| 19,600,130
|
Copy a model instance and update a field in the new copy
|
<p>This is my model. I want to make a copy of my model instance with the <code>copy</code> function, update <code>date_created</code> to the current time, and eventually return the post <code>id</code>.</p>
<pre><code>from django.db import models
from django.utils import timezone
class Author(models.Model):
name = models.CharField(max_length=50)
class BlogPost(models.Model):
title = models.CharField(max_length=250)
body = models.TextField()
author = models.ForeignKey(Author, on_delete=models.CASCADE)
date_created = models.DateTimeField(auto_now_add=True)
def copy(self):
blog = BlogPost.objects.get(pk=self.pk)
comments = blog.comment_set.all()
blog.pk = None
blog.save()
for comment in comments:
comment.pk = None
comment.blog_post = blog
comment.save()
return blog.id
class Comment(models.Model):
blog_post = models.ForeignKey(BlogPost, on_delete=models.CASCADE)
text = models.CharField(max_length=500)
</code></pre>
<p>I also want the <code>copy</code> function to copy the post along with its comments. Would you help me correct my code and update the time in my function?</p>
|
<python><django><django-models>
|
2023-02-02 13:49:52
| 1
| 983
|
HesamHashemi
|
75,324,072
| 1,325,133
|
Pandas JSON Orient Autodetection
|
<p>I'm trying to find out if Pandas.read_json performs some level of autodetection. For example, I have the following data:</p>
<pre><code>data_records = [
{
"device": "rtr1",
"dc": "London",
"vendor": "Cisco",
},
{
"device": "rtr2",
"dc": "London",
"vendor": "Cisco",
},
{
"device": "rtr3",
"dc": "London",
"vendor": "Cisco",
},
]
data_index = {
"rtr1": {"dc": "London", "vendor": "Cisco"},
"rtr2": {"dc": "London", "vendor": "Cisco"},
"rtr3": {"dc": "London", "vendor": "Cisco"},
}
</code></pre>
<p>If I do the following:</p>
<pre><code>import pandas as pd
import json
pd.read_json(json.dumps(data_records))
---
device dc vendor
0 rtr1 London Cisco
1 rtr2 London Cisco
2 rtr3 London Cisco
</code></pre>
<p>Though I get the output that I desired, the data is record-based. Given that the default <code>orient</code> is <code>columns</code>, I would not have thought this would work.</p>
<p>Therefore, is there some level of autodetection going on? With index-based inputs the behaviour seems more in line with the documentation: the data appears to have been parsed with a column orient by default.</p>
<pre><code>pd.read_json(json.dumps(data_index))
rtr1 rtr2 rtr3
dc London London London
vendor Cisco Cisco Cisco
pd.read_json(json.dumps(data_index), orient="index")
dc vendor
rtr1 London Cisco
rtr2 London Cisco
rtr3 London Cisco
</code></pre>
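A small check of this (an observation on my part, not a documented guarantee): a top-level JSON array appears to be parsed one row per record even under the default <code>orient="columns"</code>, whereas a top-level object is parsed column-wise:

```python
import json
from io import StringIO

import pandas as pd

records = [
    {"device": "rtr1", "dc": "London", "vendor": "Cisco"},
    {"device": "rtr2", "dc": "London", "vendor": "Cisco"},
]

# default orient: the JSON array still comes back as one row per record
# (wrapped in StringIO, since newer pandas deprecates passing literal JSON strings)
df = pd.read_json(StringIO(json.dumps(records)))
print(df.shape)  # (2, 3)
```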
|
<python><json><pandas>
|
2023-02-02 13:45:13
| 4
| 16,889
|
felix001
|
75,324,008
| 247,696
|
American time zones like CT, give error: ZoneInfoNotFoundError: 'No time zone found with key CT'
|
<p>Python 3.9 introduced the <code>zoneinfo</code> module:</p>
<blockquote>
<p>The <code>zoneinfo</code> module provides a concrete time zone implementation to support the IANA time zone database as originally specified in PEP 615. By default, <code>zoneinfo</code> uses the system’s time zone data if available; if no system time zone data is available, the library will fall back to using the first-party <code>tzdata</code> package available on PyPI.</p>
</blockquote>
<p>On my Ubuntu machine, it supports lots of time zones:</p>
<pre><code>>>> from zoneinfo import ZoneInfo
>>> ZoneInfo("America/New_York")
zoneinfo.ZoneInfo(key='America/New_York')
>>> ZoneInfo("MST") # Mountain Standard Time
zoneinfo.ZoneInfo(key='MST')
>>> ZoneInfo("CET") # Central European Time
zoneinfo.ZoneInfo(key='CET')
</code></pre>
<p>However, it doesn't seem to support some time zone abbreviations used in North America, like Central Time (CT), Central Standard Time (CST) or Pacific Standard Time (PST).</p>
<pre><code>>>> ZoneInfo("CT")
zoneinfo._common.ZoneInfoNotFoundError: 'No time zone found with key CT'
>>> ZoneInfo("CST")
zoneinfo._common.ZoneInfoNotFoundError: 'No time zone found with key CST'
>>> ZoneInfo("PST")
zoneinfo._common.ZoneInfoNotFoundError: 'No time zone found with key PST'
</code></pre>
<p>How can I get the right <code>ZoneInfo</code> objects for time zones like CT, CST or PST? Is the time zone database lacking? Does it depend on the operating system that I'm running?</p>
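One workaround sketch (the mapping below is a hand-maintained assumption, since abbreviations are ambiguous — CST, for example, also means China Standard Time): translate the abbreviations to unambiguous IANA keys before calling <code>ZoneInfo</code>:

```python
from zoneinfo import ZoneInfo

# hand-maintained mapping from North American abbreviations to IANA keys;
# the IANA keys handle standard/daylight transitions (CST/CDT) automatically
ABBREV_TO_IANA = {
    "CT": "America/Chicago",
    "CST": "America/Chicago",
    "PT": "America/Los_Angeles",
    "PST": "America/Los_Angeles",
}

def zone_for(abbrev: str) -> ZoneInfo:
    return ZoneInfo(ABBREV_TO_IANA[abbrev])

print(zone_for("CT"))  # America/Chicago
```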
|
<python><datetime><timezone><zoneinfo><tzdata>
|
2023-02-02 13:39:01
| 3
| 153,921
|
Flimm
|
75,323,974
| 143,931
|
Performance difference between tf.boolean_mask and tf.gather + tf.where
|
<p><a href="https://www.tensorflow.org/api_docs/python/tf/boolean_mask" rel="nofollow noreferrer"><code>tf.boolean_mask</code></a> reads much nicer than the combination of <code>tf.gather</code> and <code>tf.where</code>. However, it seems to be much slower in the 1-D case:</p>
<pre><code>import tensorflow as tf
# use this shape
shape = [5000]
# create random mask m and dummy vector v
m = tf.random.uniform(shape) > 0.5
v = tf.ones(shape)
# apply boolean_mask to select elements from v based on boolean mask m:
%timeit tf.boolean_mask(v, m)
# 1.23 ms ± 1.33 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# do the same with gather and where:
%timeit tf.gather(v, tf.where(m))
# 107 µs ± 349 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
<p>Admittedly, the results have slightly different shapes:</p>
<pre><code>tf.boolean_mask(v, m).shape
# TensorShape([2578])
tf.gather(v, tf.where(m)).shape
# TensorShape([2578, 1])
</code></pre>
<p>This can be fixed by squeezing out the additional dimension, which makes it 50% slower:</p>
<pre><code>%timeit tf.squeeze(tf.gather(v, tf.where(m)))
# 149 µs ± 343 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
<p>Still, this is almost a factor of 10 faster than using <code>boolean_mask</code>. Is there a subtle difference, or is <code>tf.boolean_mask</code> just missing an optimization for the 1-D case?</p>
<p>PS: There seems to be a strong dependence on the size of the tensor, too. For <code>shape = [5000000]</code>, the performance is on par.</p>
|
<python><performance><tensorflow>
|
2023-02-02 13:35:39
| 1
| 8,472
|
fuenfundachtzig
|
75,323,859
| 3,337,089
|
Choose a random element in each row of a 2D array but only consider the elements based on a given mask python
|
<p>I have a 2D array <code>data</code> and a boolean array <code>mask</code> of shapes <code>(M,N)</code>. I need to randomly pick an element in each row of <code>data</code>. However, the element I picked should be true in the given mask. Is there a way to do this without looping over every row? In every row, there are at least 2 elements for which <code>mask</code> is true.</p>
<p>Minimum Working Example:</p>
<pre><code>data = numpy.arange(8).reshape((2,4))
mask = numpy.array([[True, True, True, True], [True, True, False, False]])
selected_data = numpy.random.choice(data, mask, num_elements=1, axis=1)
</code></pre>
<p>The 3rd line above doesn't work, but it illustrates what I want. I've listed some valid outputs below.</p>
<pre><code>selected_data = [0,4]
selected_data = [1,5]
selected_data = [2,5]
selected_data = [3,4]
</code></pre>
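One loop-free sketch (assuming, as I read the question, a uniform pick among the unmasked entries of each row): draw a random key per element, suppress the masked-out positions, and take the per-row argmax:

```python
import numpy as np

rng = np.random.default_rng()
data = np.arange(8).reshape(2, 4)
mask = np.array([[True, True, True, True],
                 [True, True, False, False]])

# random key per element; masked-out entries get a key that can never win
keys = rng.random(data.shape)
keys[~mask] = -1.0
cols = keys.argmax(axis=1)                          # one valid column index per row
selected_data = data[np.arange(data.shape[0]), cols]
print(selected_data)  # one element per row, always from a True position in mask
```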
|
<python><numpy><vectorization>
|
2023-02-02 13:26:20
| 2
| 7,307
|
Nagabhushan S N
|