| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,910,089
| 9,265,735
|
I can't get DRF-Spectacular to generate OpenApiParameter values in Swagger for function views, only classes
|
<p>Currently I'm working on a Django project and I have an <code>api_views.py</code> file with the following view function:</p>
<pre class="lang-py prettyprint-override"><code>from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import OpenApiParameter, extend_schema
from rest_framework.decorators import api_view
from rest_framework.request import Request
from rest_framework.response import Response

@api_view(["GET"])
@extend_schema(
parameters=[
OpenApiParameter(
name="jurisdiction", description="City for the query", required=True, type=str, default="Boston"
),
],
description='More descriptive text',
responses={200: OpenApiTypes.OBJECT}, # Define your response schema
)
def distinct_address_autocomplete(request: Request):
jurisdiction = request.GET.get("jurisdiction", "Boston")
# Get distinct addresses for the city
addresses = get_distinct_addresses(jurisdiction)
# Filter addresses based on user's query
filtered_addresses = addresses # Implement your filtering logic
return Response(filtered_addresses)
</code></pre>
<p>I've set everything up in <code>settings.py</code> and <code>urls.py</code>, because the endpoint shows up in Swagger, but incorrectly: the <code>OpenApiParameter</code> doesn't show up at all. It looks like this:</p>
<p><a href="https://i.sstatic.net/kbGWS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kbGWS.png" alt="screenshot of swagger" /></a></p>
<p>But when I refactor this as a class like so:</p>
<pre class="lang-py prettyprint-override"><code>class DistinctAddressAutocompleteView(APIView):
@extend_schema(
parameters=[
OpenApiParameter(
name="jurisdiction", description="City for the query", required=True, type=str, default="Boston"
),
],
description='More descriptive text',
responses={200: OpenApiTypes.OBJECT}, # Define your response schema
)
def get(self, request: Request):
jurisdiction = request.GET.get("jurisdiction", "Boston")
# Get distinct addresses for the city
addresses = get_distinct_addresses(jurisdiction)
# Filter addresses based on user's query
filtered_addresses = addresses # Implement your filtering logic
return Response(filtered_addresses)
</code></pre>
<p>The parameter shows up!</p>
<p><a href="https://i.sstatic.net/aU4em.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aU4em.png" alt="Class API view working as intended" /></a></p>
<p>What am I doing wrong in the function view? As best I can tell I've set it up correctly, and I've checked the drf-spectacular docs; I'm stumped at this point.</p>
|
<python><django><django-rest-framework><drf-spectacular>
|
2024-01-31 00:00:08
| 0
| 612
|
glitchwizard
|
77,910,052
| 9,072,753
|
How to dynamically set type hints for an inherited class's constructor?
|
<p>With the following code:</p>
<pre><code>import dataclasses
@dataclasses.dataclass
class A:
a: int
b: float
A(<cursor location>
</code></pre>
<p>If I now ask my LSP (pyright) for suggestions at the cursor location, it finds the "a" and "b" arguments.</p>
<p><a href="https://i.sstatic.net/PVPKl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PVPKl.png" alt="enter image description here" /></a></p>
<p>How do I do that with my own class? Consider the following:</p>
<pre><code>from typing import get_type_hints

class Parent:
    def __init__(self, **kvargs):
        # This is done.
        for k, v in kvargs.items():
            setattr(self, k, v)

    def __init_subclass__(cls) -> None:
        # How to add a and b typing information to A.__init__ ?
        print(get_type_hints(cls))
        # magic? how to do this (pseudocode, not valid Python):
        # setattr(cls, "__init__",
        #     def __init__(a: int, b: float):  # how to generate the signature from get_type_hints ?
        #         super().__init__(a=a, b=b)
        # )

class A(Parent):
    a: int
    b: float

A(<cursor location>
</code></pre>
<p>I am not asking how to get the typing information from the class dynamically. Let's assume I want to exactly add <code>a: int, b: float</code> function as <code>A.__init__</code>, however the types are dynamic. I am asking how to set the type hints of a constructor dynamically from <code>get_type_hints</code> extracted from another type.</p>
<blockquote>
<p>Is this question about better autocompletion or how to generate a method dynamically?</p>
</blockquote>
<p>About autocompletion and typing information. I have written my own <code>__init__</code> function that takes <code>*args, **kvargs</code> and checks them manually. Now I want to have autocompletion, and to add <code>__annotations__</code> information to it. I tried to browse the <code>dataclasses</code> source code, however it is magic that I am not able to comprehend: <a href="https://github.com/python/cpython/blob/main/Lib/dataclasses.py#L471" rel="nofollow noreferrer">https://github.com/python/cpython/blob/main/Lib/dataclasses.py#L471</a>.</p>
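<p>For reference, a runtime-only sketch of the direction I've been exploring: generate <code>__init__</code> in <code>__init_subclass__</code> and attach an <code>inspect.Signature</code> built from the annotations. This helps <code>inspect</code>-based tooling; my understanding is a purely static checker like pyright still won't see it.</p>

```python
import inspect
from typing import get_type_hints

class Parent:
    def __init_subclass__(cls) -> None:
        hints = get_type_hints(cls)

        def __init__(self, **kvargs):
            for k, v in kvargs.items():
                setattr(self, k, v)

        # Build a Signature from the subclass annotations so that
        # inspect.signature(A.__init__) reports (self, a: int, b: float).
        # Note: this is advisory only; the real parameters are still **kvargs.
        params = [inspect.Parameter('self', inspect.Parameter.POSITIONAL_OR_KEYWORD)]
        params += [inspect.Parameter(name, inspect.Parameter.POSITIONAL_OR_KEYWORD,
                                     annotation=tp)
                   for name, tp in hints.items()]
        __init__.__signature__ = inspect.Signature(params)
        __init__.__annotations__ = dict(hints)
        cls.__init__ = __init__

class A(Parent):
    a: int
    b: float

print(inspect.signature(A.__init__))  # (self, a: int, b: float)
```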
|
<python><python-typing>
|
2024-01-30 23:47:59
| 1
| 145,478
|
KamilCuk
|
77,910,049
| 8,834,335
|
How do I get a mean INCLUDING NaN values in Python?
|
<p>Apparently I have the opposite of everyone else's problem... I would like to take the mean of a pandas dataframe, and I would like the result to be NaN if there are ANY NaNs in the frame. However, it seems like neither <code>np.mean</code> nor <code>np.nanmean</code> does this. Example code:</p>
<pre><code>import math

import numpy as np
import pandas as pd

b = pd.DataFrame([[1,2],[math.nan,4]])
print(b)
print(np.mean(b))
print(np.nanmean(b))
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/inxcy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/inxcy.png" alt="enter image description here" /></a></p>
<p>Expected Result:</p>
<p><a href="https://i.sstatic.net/7mzx8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7mzx8.png" alt="enter image description here" /></a></p>
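<p>For reference, a minimal runnable sketch of the closest built-in I've found, pandas' own <code>skipna</code> flag (I'm not certain this matches the expected result in the screenshot above):</p>

```python
import math

import pandas as pd

b = pd.DataFrame([[1, 2], [math.nan, 4]])

# Default: NaNs are skipped per column.
print(b.mean())              # column 0 -> 1.0, column 1 -> 3.0

# skipna=False: any column containing a NaN yields NaN.
print(b.mean(skipna=False))  # column 0 -> NaN, column 1 -> 3.0
```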
|
<python><pandas><numpy>
|
2024-01-30 23:46:59
| 1
| 468
|
Sinnombre
|
77,909,960
| 697,660
|
Python Bitflow Framegrabber BfUtils.BFGTLDevice Claxon-CXP1 getNode DeviceTemperature
|
<p>I’m attempting to acquire the temperature value from a Claxon-CXP1 connected to a Basler VNIR hyperspectral camera.</p>
<p>I’m following these documents and am either not opening the board and/or getting a null node result when I expect a value.
<a href="https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice" rel="nofollow noreferrer">https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice</a>
<a href="https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice.Open" rel="nofollow noreferrer">https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice.Open</a>
<a href="https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice.getNode" rel="nofollow noreferrer">https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice.getNode</a></p>
<p>Cameras and frame grabbers are powered up, operational, and produce data via other applications. I’m just not sure exactly how to get the temperature. This code was run in isolation without any other apps/drivers running.</p>
<p><strike>Why is the board not opening?</strike> Edited: it was my code; I forgot to check isOpen again.</p>
<p>Why is the temp node invalid and null?</p>
<p>Guidance, docs, and code examples would be appreciated.</p>
<pre><code>import typing
import time
import os
import sys
# Specify DLL file locations for import of BitFlow and CameraLink libraries
os.add_dll_directory(r"C:\BitFlow SDK 6.5\Bin64")
os.add_dll_directory(r"C:\Program Files\CameraLink\Serial")
import BFModule.BFGTLUtils as BfUtils # pylint: disable=no-name-in-module, wrong-import-order
def get_vnir_temp()->None:
"""_summary_
https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice
Returns:
typing.Optional[float]: _description_
"""
device = BfUtils.BFGTLDevice()
print(f'bordCount: {device.boardCount()}')
is_open = device.isOpen()
print(f'isOpen:{is_open}')
if is_open is not True:
print('opening device')
device.Open(1)
is_open = device.isOpen()
print(f'isOpen:{is_open}')
time.sleep(.5)
try:
temp_node = device.getNode('DeviceTemperature')
print(f'NodeName:{temp_node.DisplayName}')
print(f'Valid:{temp_node.Valid}')
print(f'IsNull:{temp_node.isNull}')
except Exception:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print(f"Exception caught: {exc_type}")
print(f" \\__ file: {fname}, line #{exc_tb.tb_lineno}")
get_vnir_temp()
"""
output:
Qt: Untested Windows version 10.0 detected!
bordCount: 0
isOpen:0
opening device
isOpen:0
NodeName:Device Temperature
Valid:False
IsNull:True
"""
</code></pre>
|
<python><python-3.x>
|
2024-01-30 23:19:57
| 0
| 882
|
TheDev6
|
77,909,551
| 8,223,979
|
How to merge two columns by the intersection of the elements in each column?
|
<p>Imagine I have a dataframe like this, with lists of elements in a single string:</p>
<pre><code>import pandas as pd

data = {'Col1': ["apple, banana, orange", "dog, cat", "python, java, c++"],
'Col2': ["banana, lemon, blueberry", "bird, cat", "R, fortran"]
}
df = pd.DataFrame(data)
df
</code></pre>
<p>How can I create a Col3 with the intersection of elements in Col1 and Col2?</p>
<p>Expected output:</p>
<pre><code>data = {'Col1': ["apple, banana, orange", "dog, cat", "python, java, c++"],
'Col2': ["banana, lemon, blueberry", "bird, cat", "R, fortran"],
        'Col3': ["banana", "cat", pd.NA]
}
df = pd.DataFrame(data)
df
</code></pre>
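<p>For concreteness, a plain row-wise sketch of the intersection I have in mind (apply-based, so probably not the fastest way):</p>

```python
import pandas as pd

data = {'Col1': ["apple, banana, orange", "dog, cat", "python, java, c++"],
        'Col2': ["banana, lemon, blueberry", "bird, cat", "R, fortran"]}
df = pd.DataFrame(data)

def row_intersection(row):
    # Split the comma-separated strings into sets of trimmed items and intersect.
    left = {item.strip() for item in row['Col1'].split(',')}
    right = {item.strip() for item in row['Col2'].split(',')}
    common = left & right
    return ', '.join(sorted(common)) if common else pd.NA

df['Col3'] = df.apply(row_intersection, axis=1)
print(df['Col3'].tolist())  # ['banana', 'cat', <NA>]
```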
|
<python><pandas><dataframe><intersection>
|
2024-01-30 21:37:28
| 2
| 1,097
|
Caterina
|
77,909,516
| 11,330,134
|
Use list to map subset of dataframe columns to dictionary values
|
<p>Is there a way to pass a list of columns to map a dictionary's values in a single call?</p>
<p>I can do it by stating each column but wondering if I can shorten up the syntax.</p>
<p>Looked at <a href="https://stackoverflow.com/questions/43725799/map-multiple-columns-by-a-single-dictionary-in-pandas">this post</a> for reference.</p>
<p>Sample data:</p>
<pre><code>import pandas as pd
# initialize data of lists.
data = {'id': ['a', 'b', 'c', 'd', 'e'],
'col1': ['PHX', 'BKN', 'X', 'PHX', 'X'],
'col2': ['X', 'PHX', 'BKN', 'BKN', 'X'],
'col3': ['PHX', 'BKN', 'PHX', 'BKN', 'PHX']
}
df = pd.DataFrame(data)
df
id col1 col2 col3
0 a PHX X PHX
1 b BKN PHX BKN
2 c X BKN PHX
3 d PHX BKN BKN
4 e X X PHX
</code></pre>
<p>I want to apply this mapping to <code>col1</code> and <code>col2</code> but not <code>col3</code>:</p>
<pre><code>name_dict = {'PHX' : 'PHO', 'BKN': 'NJN'}
</code></pre>
<p>So <code>PHX -> PHO</code> and <code>BKN -> NJN</code>.</p>
<p>This replaces all cols but I want a subset of columns:</p>
<pre><code>df = df.replace(name_dict)
</code></pre>
<p>Daisy-chaining 'replace' works:</p>
<pre><code>df = df.replace({'col1': name_dict}).replace({'col2': name_dict})
</code></pre>
<p>This also works:</p>
<pre><code>df = df.replace({'col1': name_dict, 'col2': name_dict})
</code></pre>
<p>Can I shorten this up a bit, something like:</p>
<pre><code>df = df.replace({['col1', 'col2']: name_dict})
</code></pre>
<p>However, this results in <code>TypeError: unhashable type: 'list'</code>.</p>
<p>Desired output (<code>col1</code> and <code>col2</code> values updated but not <code>col3</code>):</p>
<pre><code> id col1 col2 col3
0 a PHO X PHX
1 b NJN PHO BKN
2 c X NJN PHX
3 d PHO NJN BKN
4 e X X PHX
</code></pre>
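<p>One way to express the list form I was reaching for is a dict comprehension over the column list; a sketch (assuming this is equivalent to the working two-key dict above):</p>

```python
import pandas as pd

data = {'id': ['a', 'b', 'c', 'd', 'e'],
        'col1': ['PHX', 'BKN', 'X', 'PHX', 'X'],
        'col2': ['X', 'PHX', 'BKN', 'BKN', 'X'],
        'col3': ['PHX', 'BKN', 'PHX', 'BKN', 'PHX']}
df = pd.DataFrame(data)

name_dict = {'PHX': 'PHO', 'BKN': 'NJN'}
cols = ['col1', 'col2']

# Build the per-column mapping from the list of columns.
df = df.replace({c: name_dict for c in cols})

print(df['col1'].tolist())  # ['PHO', 'NJN', 'X', 'PHO', 'X']
print(df['col3'].tolist())  # unchanged: ['PHX', 'BKN', 'PHX', 'BKN', 'PHX']
```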
|
<python><pandas>
|
2024-01-30 21:31:17
| 1
| 489
|
md2614
|
77,909,434
| 3,747,241
|
Depth and RGB rendered using Blender not aligning properly with the object model in Open3D
|
<p>I am working on the ShapeNetCore dataset which consists of 3D models with texture and color information. Link to ShapeNetCore repository - <a href="https://shapenet.org/" rel="nofollow noreferrer">https://shapenet.org/</a></p>
<p>I am trying to render the RGB and depth information of these .obj (3D) files using Blender through multiple predefined camera viewpoints. The idea is to capture view-dependent partial RGB-D information of the object for use in 3D perception tasks later.</p>
<p>I am following the steps from the repository - <a href="https://github.com/yinyunie/depth_renderer" rel="nofollow noreferrer">https://github.com/yinyunie/depth_renderer</a>.</p>
<p>The camera viewpoints are created as the vertices of a regular dodecahedron as follows -</p>
<pre><code>phi = (1 + math.sqrt(5)) / 2. # golden_ratio
circumradius = math.sqrt(3)
distance = circumradius*1.2
# this creates the vertices of a regular dodecahedron
dodecahedron = [[-1, -1, -1],
[ 1, -1, -1],
[ 1, 1, -1],
[-1, 1, -1],
[-1, -1, 1],
[ 1, -1, 1],
[ 1, 1, 1],
[-1, 1, 1],
[0, -phi, -1 / phi],
[0, -phi, 1 / phi],
[0, phi, -1 / phi],
[0, phi, 1 / phi],
[-1 / phi, 0, -phi],
[-1 / phi, 0, phi],
[ 1 / phi, 0, -phi],
[ 1 / phi, 0, phi],
[-phi, -1 / phi, 0],
[-phi, 1 / phi, 0],
[ phi, -1 / phi, 0],
[ phi, 1 / phi, 0]]
# get Azimuth, Elevation angles
# Azimuth varies from -pi to pi
# Elevation from -pi/2 to pi/2
view_points = open('./view_points.txt', 'w+')
for vertice in dodecahedron:
elevation = math.asin(vertice[2] / circumradius)
azimuth = math.atan2(vertice[1], vertice[0])
view_points.write('%f %f %f %f\n' % (azimuth, elevation, 0., distance))
view_points.close()
</code></pre>
<p>After this, I render the RGB as .png file and the depth as .exr file. This is the code -</p>
<pre><code>vp = viewpoint
cam_location = camera_location(vp.azimuth, vp.elevation, vp.distance)
cam_rot = camera_rot_XYZEuler(vp.azimuth, vp.elevation, vp.tilt)
cam_obj = bpy.data.objects['Camera']
cam_obj.location[0] = cam_location[0]
cam_obj.location[1] = cam_location[1]
cam_obj.location[2] = cam_location[2]
cam_obj.rotation_euler[0] = cam_rot[0]
cam_obj.rotation_euler[1] = cam_rot[1]
cam_obj.rotation_euler[2] = cam_rot[2]
if g_background_image_path == 'TRANSPARENT':
bpy.context.scene.render.alpha_mode = g_background_image_path
else:
background_images = os.listdir(g_background_image_path)
image_name = random.choice(background_images)
image_path = os.path.join(g_background_image_path, image_name)
image_node = bpy.context.scene.node_tree.nodes[0]
image_node.image = bpy.data.images.load(image_path)
img_file_output_node = bpy.context.scene.node_tree.nodes[4]
img_file_output_node.file_slots[0].path = 'color_###.png' # blender placeholder #
depth_file_output_node = bpy.context.scene.node_tree.nodes[5]
depth_file_output_node.file_slots[0].path = 'depth_###.exr' # blender placeholder #
#start rendering
bpy.context.scene.frame_set(viewpoint_id + 1)
bpy.ops.render.render(write_still=True)
# write camera info
cam_K_file = os.path.join(cam_K_path, 'cam_K.txt')
if (not os.path.isfile(cam_K_file)) or (len(os.listdir(cam_RT_path))<total_view_nums):
K, RT = get_3x4_P_matrix_from_blender(cam_obj)
np.savetxt(cam_K_file, K)
np.savetxt(os.path.join(cam_RT_path, 'cam_RT_{0:03d}.txt'.format(viewpoint_id + 1)), RT)
print('Camera parameters written.')
</code></pre>
<p>In this code, the object is rendered from the 20 views that I specified.</p>
<p>The code below shows how Blender constructs and stores the camera projection matrix (P). Basically, it's composed of the <code>3x4</code> extrinsic matrix (R|t) and the <code>3x3</code> intrinsic matrix.</p>
<pre><code>'''Build intrinsic camera parameters from Blender camera data
See notes on this in
blender.stackexchange.com/questions/15102/what-is-blenders-camera-projection-matrix-model
as well as
https://blender.stackexchange.com/a/120063/3581
'''
def get_calibration_matrix_K_from_blender(camd):
if camd.type != 'PERSP':
raise ValueError('Non-perspective cameras not supported')
scene = bpy.context.scene
f_in_mm = camd.lens
scale = scene.render.resolution_percentage / 100
resolution_x_in_px = scale * scene.render.resolution_x
resolution_y_in_px = scale * scene.render.resolution_y
sensor_size_in_mm = get_sensor_size(camd.sensor_fit, camd.sensor_width, camd.sensor_height)
sensor_fit = get_sensor_fit(
camd.sensor_fit,
scene.render.pixel_aspect_x * resolution_x_in_px,
scene.render.pixel_aspect_y * resolution_y_in_px
)
pixel_aspect_ratio = scene.render.pixel_aspect_y / scene.render.pixel_aspect_x
if sensor_fit == 'HORIZONTAL':
view_fac_in_px = resolution_x_in_px
else:
view_fac_in_px = pixel_aspect_ratio * resolution_y_in_px
pixel_size_mm_per_px = sensor_size_in_mm / f_in_mm / view_fac_in_px
s_u = 1 / pixel_size_mm_per_px
s_v = 1 / pixel_size_mm_per_px / pixel_aspect_ratio
# Parameters of intrinsic calibration matrix K
u_0 = resolution_x_in_px / 2 - camd.shift_x * view_fac_in_px
v_0 = resolution_y_in_px / 2 + camd.shift_y * view_fac_in_px / pixel_aspect_ratio
skew = 0 # only use rectangular pixels
K = Matrix(
((s_u, skew, u_0),
( 0, s_v, v_0),
( 0, 0, 1)))
return K
'''
Returns camera rotation and translation matrices from Blender.
There are 3 coordinate systems involved:
1. The World coordinates: "world"
- right-handed
2. The Blender camera coordinates: "bcam"
- x is horizontal
- y is up
- right-handed: negative z look-at direction
3. The desired computer vision camera coordinates: "cv"
- x is horizontal
- y is down (to align to the actual pixel coordinates
used in digital images)
- right-handed: positive z look-at direction
'''
def get_3x4_RT_matrix_from_blender(cam):
# bcam stands for blender camera
R_blender2shapenet = Matrix(
((1, 0, 0),
(0, 0, -1),
(0, 1, 0)))
R_bcam2cv = Matrix(
((1, 0, 0),
(0, 1, 0),
(0, 0, -1)))
# Transpose since the rotation is object rotation,
# and we want coordinate rotation
# R_world2bcam = cam.rotation_euler.to_matrix().transposed()
# T_world2bcam = -1*R_world2bcam * location
#
# Use matrix_world instead to account for all constraints
location, rotation = cam.matrix_world.decompose()[0:2]
R_world2bcam = rotation.to_matrix().transposed()
# Convert camera location to translation vector used in coordinate changes
# T_world2bcam = -1*R_world2bcam*cam.location
# Use location from matrix_world to account for constraints:
T_world2bcam = -1*R_world2bcam * location
# Build the coordinate transform matrix from world to computer vision camera
R_world2cv = R_bcam2cv*R_world2bcam*R_blender2shapenet
T_world2cv = R_bcam2cv*T_world2bcam
# put into 3x4 matrix
RT = Matrix((
R_world2cv[0][:] + (T_world2cv[0],),
R_world2cv[1][:] + (T_world2cv[1],),
R_world2cv[2][:] + (T_world2cv[2],)
))
return RT
def get_3x4_P_matrix_from_blender(cam):
K = get_calibration_matrix_K_from_blender(cam.data)
RT = get_3x4_RT_matrix_from_blender(cam)
return K, RT
</code></pre>
<p>Full code is in this file - <a href="https://github.com/yinyunie/depth_renderer/blob/main/render_all.py" rel="nofollow noreferrer">https://github.com/yinyunie/depth_renderer/blob/main/render_all.py</a></p>
<p>I am now trying to load/register the partial pointclouds from multiple viewpoints in the world coordinate frame; in an ideal case, all the pointclouds should register together.</p>
<p>I create the pointcloud using the rgb image, depth image, and camera intrinsic (cam_K, a 3x3 matrix). However, the pointclouds I create are in their respective camera frames, so I use the view-dependent cam_RT matrix that Blender generated to apply the inverse transformation, taking each pointcloud from its camera frame to the world coordinate frame. My code is below:</p>
<pre><code>def get_point_cloud(depth_map, cam_K, cam_RT, rgb_img):
'''
get point cloud from depth maps
:param depth_map: depth map list
:param cam_K: corresponding camera intrinsic
:param cam_RT: corresponding camera rotations and translations
:param rgb_img: corresponding rgb images
:return: aligned point cloud in the canonical system with color intensities.
'''
u, v = np.meshgrid(range(depth_map.shape[1]), range(depth_map.shape[0]))
u = u.reshape([1, -1])[0]
v = v.reshape([1, -1])[0]
z = depth_map[v, u]
# remove infinitive pixels
non_inf_indices = np.argwhere(z < np.inf).T[0]
color_indices = rgb_img[v, u][non_inf_indices]
z = z[non_inf_indices]
u = u[non_inf_indices]
v = v[non_inf_indices]
# calculate coordinates
x = (u - cam_K[0][2]) * z / cam_K[0][0]
y = (v - cam_K[1][2]) * z / cam_K[1][1]
point_cam = np.vstack([x, y, z]).T
point_canonical = (point_cam - cam_RT[:, -1]).dot(cam_RT[:,:-1])
cam_pos = - cam_RT[:, -1].dot(cam_RT[:,:-1])
focal_point = ([0, 0, 1] - cam_RT[:, -1]).dot(cam_RT[:,:-1])
up = np.array([0,-1,0]).dot(cam_RT[:,:-1])
cam_pos = {'pos':cam_pos, 'fp':focal_point, 'up':up}
# create pointcloud from point cam
pcd_cam = o3d.geometry.PointCloud()
pcd_cam.points = o3d.utility.Vector3dVector(point_cam)
pcd_cam.colors = o3d.utility.Vector3dVector(color_indices/255.0)
# create pointcloud from point can
pcd_can = o3d.geometry.PointCloud()
pcd_can.points = o3d.utility.Vector3dVector(point_canonical)
pcd_can.colors = o3d.utility.Vector3dVector(color_indices/255.0)
return pcd_cam, pcd_can
</code></pre>
<p>However, I notice a slight misalignment between the registered pointclouds from different viewpoints.</p>
<p><a href="https://i.sstatic.net/eqcRB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eqcRB.png" alt="Pointclouds from different viewpoints registered together" /></a></p>
<p>I also try to load the 3D mesh as a pointcloud (which I believe should be in the world frame as well) using <code>mesh = o3d.io.read_triangle_mesh(obj_path).sample_points_uniformly(10000).farthest_point_down_sample(5000)</code>, and when I try to register the two view-dependent partial pointclouds with the object pcd (in the world frame), all three of them are slightly misaligned.</p>
<p><a href="https://i.sstatic.net/I6Aqw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I6Aqw.png" alt="Two view-dependent partial pointclouds registered along with the object mesh in world-frame" /></a></p>
<p>Previously, when I worked with real RGB-D scans recorded with an Intel RealSense camera, I never faced these issues; the pointclouds registered together. The backend I am using to visualize these pointclouds is, I believe, OpenGL based (a Jupyter notebook running in the browser).</p>
<p>I am still not clear what the issue is: 1) is there an additional transform I need to apply? 2) do I need mean centering to fix the translation? 3) is there a problem with how the depth is recorded?</p>
<p>I have also attached the obj model and the RGB, depth renders along with the camera matrices <a href="https://drive.google.com/drive/folders/1IWHv8BhVAJmGFOUaGnDKnbmtKF4Uc78q?usp=sharing" rel="nofollow noreferrer">here</a>.</p>
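<p>To rule out a bug in the inverse transform itself, here is a round-trip sanity check of the math used in <code>get_point_cloud</code>, with a synthetic extrinsic matrix (assuming an orthonormal rotation, as Blender's RT should be):</p>

```python
import numpy as np

# Synthetic world->camera extrinsic: rotation about z by 30 degrees plus a translation.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
t = np.array([0.5, -1.0, 2.0])
cam_RT = np.hstack([R, t[:, None]])  # 3x4, same layout as the saved cam_RT files

pts_world = np.random.default_rng(0).normal(size=(100, 3))

# Forward: x_cam = R @ x_world + t (row-vector form).
pts_cam = pts_world @ R.T + t

# Inverse, exactly as written in get_point_cloud:
# x_world = R^T @ (x_cam - t), i.e. (x_cam - t) @ R in row-vector form.
pts_back = (pts_cam - cam_RT[:, -1]).dot(cam_RT[:, :-1])

print(np.allclose(pts_back, pts_world))  # True
```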
|
<python><rendering><blender><open3d><blender-2.76>
|
2024-01-30 21:12:23
| 1
| 1,135
|
Aditya
|
77,909,414
| 11,602,367
|
Create madlib in Plotly Dash
|
<p>How can I implement a madlib feature in a Dash app, allowing users to update specific text areas (and the entire prompt upon clicking a different button) similar to the provided image? The goal is to create a function that takes a sentence with bracketed words or phrases, turns them into textarea boxes, and outputs a result similar to the image, excluding the share/generate buttons. Currently, my implementation turns the bracketed words/phrases into uneditable links.</p>
<pre><code>from dash import dcc

def generate_madlib(sentence):
words_in_brackets = [word.strip('[]') for word in sentence.split('[') if ']' in word]
textarea_boxes = [dcc.Textarea(placeholder=word, style={'margin-right': '10px', 'width': str(len(word) * 10) + 'px'}) for word in words_in_brackets]
for word, text_box in zip(words_in_brackets, textarea_boxes):
sentence = sentence.replace(f"[{word}]", f"{text_box}")
return dcc.Markdown(children=sentence, dangerously_allow_html=True)
</code></pre>
<p><a href="https://i.sstatic.net/OhHjz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OhHjz.png" alt="enter image description here" /></a></p>
|
<python><plotly-dash>
|
2024-01-30 21:07:11
| 1
| 551
|
acciolurker
|
77,909,058
| 11,402,025
|
How to flatten a list of objects in Python
|
<p>I have a list of objects :</p>
<pre><code>[
{
"person": "abc",
"city": "united states",
"facebooklink": "link",
"address": "united states",
"united states": [
{
"person": "cdf",
"city": "ohio",
"facebooklink": "link",
"address": "united states/ohio",
"ohio": [
{
"person": "efg",
"city": "clevland",
"facebooklink": "link",
"address": "united states/ohio/clevland",
"clevland": [
{
"person": "jkl",
"city": "Street A",
"facebooklink": "link",
"address": "united states/ohio/clevland/Street A",
"Street A": [
{
"person": "jkl",
"city": "House 1",
"facebooklink": "link",
"address": "united states/ohio/clevland/Street A/House 1"
}
]
}
]
},
{
"person": "ghi",
"city": "columbus",
"facebooklink": "link",
"address": "united states/ohio/columbus"
}
]
},
{
"person": "abc",
"city": "washington",
"facebooklink": "link",
"address": "united states/washington"
}
]
}
]
</code></pre>
<p>How can I flatten it to</p>
<pre><code>[
{
"person": "abc",
"city": "united states",
"facebooklink": "link",
"address": "united states"
},
{
"person": "cdf",
"city": "ohio",
"facebooklink": "link",
"address": "united states/ohio"
},
{
"person": "efg",
"city": "clevland",
"facebooklink": "link",
"address": "united states/ohio/clevland"
},
{
"person": "jkl",
"city": "Street A",
"facebooklink": "link",
"address": "united states/ohio/clevland/Street A"
},
{
"person": "jkl",
"city": "House 1",
"facebooklink": "link",
"address": "united states/ohio/clevland/Street A/House 1"
},
{
"person": "ghi",
"city": "columbus",
"facebooklink": "link",
"address": "united states/ohio/columbus"
},
{
"person": "abc",
"city": "washington",
"facebooklink": "link",
"address": "united states/washington"
}
]
</code></pre>
<p>I am trying to achieve the same using <code>flatten</code> from <code>flatten_json</code>.</p>
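<p>In case flatten_json turns out not to fit, here is a hand-rolled recursive sketch of the flattening I'm describing (it assumes every list-valued key holds nested records of the same shape; shown on a shortened sample):</p>

```python
def flatten_records(items, keys=("person", "city", "facebooklink", "address")):
    """Depth-first flatten: emit the known fields of each record,
    then recurse into any list-valued key holding nested records."""
    flat = []
    for item in items:
        flat.append({k: item[k] for k in keys if k in item})
        for k, v in item.items():
            if k not in keys and isinstance(v, list):
                flat.extend(flatten_records(v, keys))
    return flat

sample = [{"person": "abc", "city": "us", "facebooklink": "link", "address": "us",
           "us": [{"person": "cdf", "city": "ohio", "facebooklink": "link",
                   "address": "us/ohio"}]}]
print(flatten_records(sample))
```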
|
<python><json><recursion><iteration><flatten>
|
2024-01-30 19:49:29
| 1
| 1,712
|
Tanu
|
77,909,039
| 53,491
|
How do I use aiohttp with requests_aws4auth
|
<p>More recent version of this question with more details: <a href="https://stackoverflow.com/questions/78077509/using-aiohttp-with-requests-aws4auth">Using aiohttp with requests_aws4auth</a></p>
<p>I have synchronous code, using requests_aws4auth, that puts files in an AWS bucket. I want to make it asynchronous using aiohttp. However, aiohttp seems to only work with BasicAuth, not AWS4Auth.</p>
<p>The code that works is:</p>
<pre><code> self.auth = AWS4Auth(access_key, secret_key, 'us-east-1', 's3')
response = requests.put(
url,
auth = self.auth,
data=content)
</code></pre>
<p>I want something like:</p>
<pre><code> async with aiohttp.ClientSession(url = url, auth=self.auth) as session:
async with session.put(data=content) as resp:
await session.close()
</code></pre>
<p>But <code>ClientSession</code> wants a <code>BasicAuth</code> instance...</p>
|
<python><python-asyncio><aiohttp>
|
2024-01-30 19:46:13
| 1
| 12,317
|
Brian Postow
|
77,908,978
| 11,321,530
|
AttributeError: 'Series' object has no attribute 'append' with pandas_ta library
|
<p>I'm trying to apply technical finance indicators to data I fetch from Yahoo Finance. I found the <code>pandas_ta</code> library which seemed to fit my needs, however, applying a strategy gives me errors. Specifically, I want to use the <code>AllStrategy</code> (the default strategy), which applies all indicators to the data.</p>
<p>Initially, I was having issues with TA-Lib and thought this was related; however, after installing the TA-Lib source package and building it manually, I still encountered the same errors.</p>
<p>I found what seems to be a list of the categories that the <code>AllStrategy</code> is meant to apply using the <code>help(df.ta.strategy)</code>-command. When applying them separately, it seems like the "overlap" indicators are causing this specific issue. Additionally, applying the "trend" indicators outputs <code>[X] Not an available strategy.</code>.</p>
<p>From what I can find, it looks like <code>append</code> has been removed from pandas, but is there a way to circumvent this? I found a post (and received an answer here) that said it could be temporarily fixed using <code>pd.Series.append = pd.Series._append</code>, but I get the same error with and without this line.</p>
<p>Also, quite <code>pandas_ta</code>-specific, but why are the "utility" indicators not an available strategy?</p>
<p>Any help is appreciated - even if you could point me in the direction of a library that does something similar I would be very thankful!</p>
<hr />
<p>I have the following method to fetch and apply indicators:</p>
<pre><code>import pandas as pd
import yfinance as yf
import pandas_ta as ta
pd.Series.append = pd.Series._append # Same error with and without this line
def fetch_and_analyze(ticker, period='1y', interval='1d'):
# Fetching historical data
data = yf.download(ticker, period=period, interval=interval)
# Check if data is empty
if data.empty:
print("No data fetched for ticker:", ticker)
return pd.DataFrame()
# Applying a simpler strategy from pandas_ta to ensure it works
# data.ta.strategy('candles', timed=True)
# data.ta.strategy('cycles', timed=True)
# data.ta.strategy('momentum', timed=True)
# data.ta.strategy('overlap', timed=True) <--- ERROR
# data.ta.strategy('performance', timed=True)
# data.ta.strategy('statistics', timed=True)
# data.ta.strategy('trend', timed=True)
# data.ta.strategy('utility', timed=True) <- [X] Not an available strategy.
# data.ta.strategy('volatility', timed=True)
# data.ta.strategy('volume', timed=True)
data.ta.strategy('all', timed=True)
return data
data_with_indicators = fetch_and_analyze(ticker)
</code></pre>
<p>Which gives me the following error:</p>
<pre><code>---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Python311\Lib\multiprocessing\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\multiprocessing\pool.py", line 48, in mapstar
return list(map(*args))
^^^^^^^^^^^^^^^^
File "c:\Users\x\stock-analysis\Lib\site-packages\pandas_ta\core.py", line 467, in _mp_worker
return getattr(self, method)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\x\stock-analysis\Lib\site-packages\pandas_ta\core.py", line 1225, in mcgd
result = mcgd(close=close, length=length, offset=offset, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\x\stock-analysis\Lib\site-packages\pandas_ta\overlap\mcgd.py", line 24, in mcgd
mcg_ds = close[:1].append(mcg_cell[1:])
^^^^^^^^^^^^^^^^
File "c:\Users\x\stock-analysis\Lib\site-packages\pandas\core\generic.py", line 6204, in __getattr__
return object.__getattribute__(self, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Series' object has no attribute 'append'
"""
The above exception was the direct cause of the following exception:
AttributeError Traceback (most recent call last)
Cell In[124], line 1
----> 1 data_with_indicators = fetch_and_analyze(ticker)
Cell In[123], line 20
8 return pd.DataFrame()
10 # Applying a simpler strategy from pandas_ta to ensure it works
11 # data.ta.strategy('candles', timed=True)
12 # data.ta.strategy('cycles', timed=True)
(...)
18 # data.ta.strategy('volatility', timed=True)
19 # data.ta.strategy('volume', timed=True)
---> 20 data.ta.strategy('all', timed=True)
22 return data
File c:\Users\x\stock-analysis\Lib\site-packages\pandas_ta\core.py:792, in AnalysisIndicators.strategy(self, *args, **kwargs)
789 self._last_run = get_time(self.exchange, to_string=True)
791 # Apply prefixes/suffixes and appends indicator results to the DataFrame
--> 792 [self._post_process(r, **kwargs) for r in results]
794 if verbose:
795 print(f"[i] Total indicators: {len(ta)}")
File c:\Users\x\stock-analysis\Lib\site-packages\pandas_ta\core.py:792, in <listcomp>(.0)
789 self._last_run = get_time(self.exchange, to_string=True)
791 # Apply prefixes/suffixes and appends indicator results to the DataFrame
--> 792 [self._post_process(r, **kwargs) for r in results]
794 if verbose:
795 print(f"[i] Total indicators: {len(ta)}")
File C:\Python311\Lib\multiprocessing\pool.py:423, in <genexpr>(.0)
415 result = IMapIterator(self)
416 self._taskqueue.put(
417 (
418 self._guarded_task_generation(result._job,
(...)
421 result._set_length
422 ))
--> 423 return (item for chunk in result for item in chunk)
File C:\Python311\Lib\multiprocessing\pool.py:873, in IMapIterator.next(self, timeout)
871 if success:
872 return value
--> 873 raise value
AttributeError: 'Series' object has no attribute 'append'
</code></pre>
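<p>The failing line is inside <code>pandas_ta</code> itself: <code>mcgd.py</code> calls <code>close[:1].append(...)</code>, and <code>Series.append</code> was removed in pandas 2.0. Until the library catches up, patching that line to use <code>pd.concat</code> (or pinning <code>pandas&lt;2.0</code>) avoids the error. A minimal sketch of the replacement, with made-up values:</p>

```python
import pandas as pd

close = pd.Series([10.0, 11.0, 12.0])    # made-up prices
mcg_cell = pd.Series([0.0, 10.5, 11.4])  # made-up intermediate values

# pandas < 2.0:   mcg_ds = close[:1].append(mcg_cell[1:])
# pandas >= 2.0:  Series.append is gone; pd.concat is the drop-in replacement
mcg_ds = pd.concat([close[:1], mcg_cell[1:]])
```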
|
<python><pandas><ta-lib><pandas-ta>
|
2024-01-30 19:34:37
| 2
| 333
|
bragi
|
77,908,977
| 23,260,297
|
Group data together and retain groupings in dataframe
|
<p>I have a dataframe:</p>
<pre><code>ID Deal Party Commodity startdate enddate price quantity mtmvalue
---- ----- ----- --------- --------- ------- ------ -------- ---------
J1 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00
J4 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 5 25.00
J2 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00
J3 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 50.00
</code></pre>
<p>I need to group data together by Deal,Commodity, and startdate so that my dataframe looks like this:</p>
<pre><code>ID Deal Party Commodity startdate enddate price quantity mtmvalue
---- ----- ----- --------- --------- ------- ------ -------- ---------
J1 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00
J2 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00
J3 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 50.00
J4 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 5 25.00
</code></pre>
<p>I am doing this which will create two groups, but I want it in one dataframe:</p>
<pre><code>df.groupby(['Deal', 'Commodity', StartDate'])
</code></pre>
<p>How would I retain the groupings in the original dataframe?</p>
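<p>If the aim is simply to keep every row but have the rows ordered by the grouping keys, as in the desired output above, then <code>sort_values</code> on those keys may be all that is needed rather than <code>groupby</code>. A sketch with a trimmed-down frame (the constant columns are omitted):</p>

```python
import pandas as pd

# trimmed version of the frame above; Commodity/startdate are constant, so omitted
df = pd.DataFrame({
    "ID": ["J1", "J4", "J2", "J3"],
    "Deal": ["Sell", "Buy", "Sell", "Buy"],
    "quantity": [10, 5, 10, 10],
})

# sort by the grouping keys: Deal descending puts Sell before Buy,
# ID ascending orders the rows within each group
out = df.sort_values(["Deal", "ID"], ascending=[False, True]).reset_index(drop=True)
```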
|
<python><pandas><dataframe>
|
2024-01-30 19:34:25
| 1
| 2,185
|
iBeMeltin
|
77,908,931
| 770,788
|
Using Beautiful Soup to find all doesn't seem to be giving me a list
|
<p>I am trying to get all the book categories from this website: <a href="http://books.toscrape.com/" rel="nofollow noreferrer">http://books.toscrape.com/</a></p>
<p>When I inspect the element I see that the categories are in a list towards the top of the html. They are in <code><div class="side_categories"></code></p>
<p>My code:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
url = "http://books.toscrape.com/"
page = requests.get(url)
soup = BeautifulSoup(page.text, "html.parser")
categories = soup.find_all(class_="side_categories")
</code></pre>
<p>This returns:</p>
<pre><code>[<div class="side_categories">
<ul class="nav nav-list">
<li>
<a href="catalogue/category/books_1/index.html">
Books
</a>
<ul>
<li>
<a href="catalogue/category/books/travel_2/index.html">
Travel
</a>
</li>
<li>
<a href="catalogue/category/books/mystery_3/index.html">
Mystery
</a>
</li>...#the rest of the categories.
</code></pre>
<p>Now I'm a bit stuck as I can't go through these like I would a list. Beautiful soup has an example that returns a list. <a href="https://beautiful-soup-4.readthedocs.io/en/latest/#find-all" rel="nofollow noreferrer">https://beautiful-soup-4.readthedocs.io/en/latest/#find-all</a></p>
<p>Their example returns this:</p>
<pre><code>soup.find_all("a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
</code></pre>
<p>Mine doesn't have that structure. What am I doing wrong?
I'm running with these in my python environment:</p>
<pre><code>beautifulsoup4==4.12.3
bs4==0.0.2
certifi==2023.11.17
charset-normalizer==3.3.2
idna==3.6
numpy==1.26.3
pandas==2.2.0
python-dateutil==2.8.2
pytz==2023.4
requests==2.31.0
six==1.16.0
soupsieve==2.5
tzdata==2023.4
urllib3==2.1.0
</code></pre>
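<p>Note that <code>find_all</code> here did return a list (a <code>ResultSet</code>): it just contains a single <code>div</code>, because only one element on the page has that class. Indexing into it and calling <code>find_all("a")</code> on that element should give the one-per-category list. A sketch against a shortened copy of the markup above:</p>

```python
from bs4 import BeautifulSoup

# shortened copy of the markup returned above
html = """
<div class="side_categories">
  <ul class="nav nav-list">
    <li><a href="catalogue/category/books/travel_2/index.html">Travel</a></li>
    <li><a href="catalogue/category/books/mystery_3/index.html">Mystery</a></li>
  </ul>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
categories = soup.find_all(class_="side_categories")  # a list with one <div>
links = categories[0].find_all("a")                   # now one item per category
names = [a.get_text(strip=True) for a in links]
```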
|
<python><web-scraping><beautifulsoup><python-requests>
|
2024-01-30 19:24:58
| 1
| 625
|
Funlamb
|
77,908,916
| 1,299,787
|
Any way to accomplish this in regex without negative lookbehind
|
<p>I would like to format a regex query so that it matches on all newline characters that are not preceded by a comma followed by any amount of whitespace.</p>
<p>For example, in this scenario:</p>
<pre><code>Line One,
Line Two
Line Three
Line Four, <-- whitespace before newline
Line Five
</code></pre>
<p>I want the regex to return on the newline chars on Line Two, Line Three, or Line Five, but not Line One and Line Four. I have tried negative look-behind but would like to avoid using that if possible due to fixed width restrictions. Is there a way to accomplish this without negative look-behind?</p>
<p>The end goal is to only have lines that end with a comma. I'm trying to do a substitute on the newline chars that don't have a comma + (whitespaces) before them.</p>
<pre><code>re.sub(r'(?<!\,\s*)\n','',string)
</code></pre>
<p>The end result should be:</p>
<pre><code>Line One,
Line TwoLine ThreeLine Four,
Line Five
</code></pre>
<p>However, the <code>*</code> quantifier on the <code>\s</code> is an issue for this solution: it makes the lookbehind variable-width, which Python's <code>re</code> module rejects.</p>
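<p>One possible workaround (a guess, not a definitive answer) is two passes: first normalise away the whitespace between a comma and its newline, and then a plain fixed-width lookbehind is enough:</p>

```python
import re

s = "Line One,\nLine Two\nLine Three\nLine Four,   \nLine Five"

# pass 1: collapse "comma + trailing whitespace + newline" into "comma + newline"
s = re.sub(r",[ \t]*\n", ",\n", s)
# pass 2: the lookbehind is now fixed-width, which re accepts
s = re.sub(r"(?<!,)\n", "", s)
```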
|
<python><regex>
|
2024-01-30 19:21:48
| 3
| 473
|
Robert Lu
|
77,908,824
| 3,042,018
|
TypeError on Trinket Implementation of Turtle Graphics
|
<p>The code below works fine on Windows 10 with Python 3.10. When you click on the turtle, its color changes. However when I run the code in the browser-based Trinket environment, here: <a href="https://trinket.io/python/0da2ab1181" rel="nofollow noreferrer">https://trinket.io/python/0da2ab1181</a>, I get the following error:</p>
<pre><code>
ExternalError: TypeError: Cannot read properties of undefined (reading 'constructor') on line 15 in main.py
</code></pre>
<p>when I click on the Turtle.</p>
<p>How can I make this work correctly please? I've looked at the docs for the module and they give:</p>
<pre><code>Help on function fillcolor in turtle:
turtle.fillcolor = fillcolor(*args)
Return or set the fillcolor.
Arguments:
Four input formats are allowed:
- fillcolor()
Return the current fillcolor as color specification string,
possibly in hex-number format (see example).
May be used as input to another color/pencolor/fillcolor call.
- fillcolor(colorstring)
s is a Tk color specification string, such as "red" or "yellow"
- fillcolor((r, g, b))
*a tuple* of r, g, and b, which represent an RGB color,
and each of r, g, and b are in the range 0..colormode,
where colormode is either 1.0 or 255
- fillcolor(r, g, b)
r, g, and b represent an RGB color, and each of r, g, and b
are in the range 0..colormode
If turtleshape is a polygon, the interior of that polygon is drawn
with the newly set fillcolor.
Example:
>>> fillcolor('violet')
>>> col = pencolor()
>>> fillcolor(col)
>>> fillcolor(0, .5, 0)
</code></pre>
<p>Programs code:</p>
<pre><code>class ToggleTurtle(turtle.Turtle):
def __init__(self):
super().__init__()
self.shape("turtle")
self.color1 = "red"
self.color2 = "blue"
self.fillcolor(self.color1)
self.onclick(self.toggle_color)
def toggle_color(self, x, y):
if self.fillcolor() == self.color1:
self.fillcolor(self.color2)
else:
self.fillcolor(self.color1)
tt = ToggleTurtle()
turtle.done()
</code></pre>
|
<python><turtle-graphics><python-turtle>
|
2024-01-30 19:05:01
| 1
| 3,842
|
Robin Andrews
|
77,908,801
| 75,386
|
Get EXPLAIN from Delta Lake MERGE in PySpark?
|
<p>Using Python 3.10, delta-spark 2.4.0, I need to see the execution plan of a MERGE statement in PySpark.</p>
<p>For a dataframe operation, a <code>df.explain()</code> provides it, but I have not found a method for seeing the physical plan of a merge().</p>
<p>Is there a method to see the equivalent of <code>explain(mode="extended")</code> for the following?</p>
<pre class="lang-py prettyprint-override"><code>df = spark.sql("SELECT * FROM table")
tablePath = "/path/to/deltalake"
tbl = DeltaTable.forPath(spark, tablePath)
table.alias("target") \
.merge(
source=df.alias("source"),
condition=condition) \
.whenMatchedUpdateAll() \
.whenNotMatchedInsertAll() \
.execute()
</code></pre>
|
<python><pyspark><delta-lake>
|
2024-01-30 18:59:57
| 1
| 1,712
|
kermatt
|
77,908,784
| 12,298,510
|
Read multiple CSVs from same file into separate dataframes
|
<p>I have a CSV file that really contains 2 separate CSVs split by a title line for each. The number of lines in the Polymerase Stats section can be anywhere between 1 and 8 and the number of lines in the CCS Stats section does not technically have a limit but will never exceed 50.</p>
<p>My current solution is to enumerate the input file, look for the occurrence of "CCS Stats", and use that line number to skip rows/set the number of rows to read since the number of title rows is constant. This works but ideally I'd like to accomplish this all with pandas.</p>
<p>Simplified CSV Example:</p>
<pre><code>RUNID: TEST_RUN
Polymerase Stats:
Cell,Project,Name,Bases,Reads
A01,Proj1,Cell_1,17438371,2836501
B01,Proj1,Cell_2,19327981,3789533
C01,Proj2,Cell_3,14935525,1897239
CCS Stats:
Cell,Project,Name,CCS_Bases,CCS_Reads,MedQ
A01,Proj1,Sample_1,5473982,123678,31
B01,Proj1,Sample_2,5834094,738491,32
B01,Proj1,Sample_3,5834094,738491,31
C01,Proj2,Sample_4,4378978,453216,31
</code></pre>
<p>Current Solution:</p>
<pre><code>with open("run_stats.csv") as f:
for num, line in enumerate(f, 1):
if "CCS Stats" in line:
csv_split=num
df_runstats_poly = pd.read_csv("run_stats.csv", skiprows=2, nrows=csv_split-3)
df_runstats_ccs = pd.read_csv("run_stats.csv", skiprows=csv_split)
</code></pre>
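<p>A more pandas-leaning sketch of the same idea reads the file once, splits the text on the section title, and feeds each half to <code>read_csv</code> via <code>io.StringIO</code>, so the file is never re-opened (the data is inlined here to keep the example self-contained):</p>

```python
import io
import pandas as pd

# inline copy of the example file; in practice this would be open("run_stats.csv").read()
raw = """RUNID: TEST_RUN
Polymerase Stats:
Cell,Project,Name,Bases,Reads
A01,Proj1,Cell_1,17438371,2836501
B01,Proj1,Cell_2,19327981,3789533
CCS Stats:
Cell,Project,Name,CCS_Bases,CCS_Reads,MedQ
A01,Proj1,Sample_1,5473982,123678,31
B01,Proj1,Sample_2,5834094,738491,32
"""

poly_text, ccs_text = raw.split("CCS Stats:\n")
df_poly = pd.read_csv(io.StringIO(poly_text), skiprows=2)  # skip the two title lines
df_ccs = pd.read_csv(io.StringIO(ccs_text))
```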
|
<python><pandas>
|
2024-01-30 18:56:55
| 0
| 678
|
kevin41
|
77,908,606
| 1,753,640
|
Recursively build a path from a python dictionary
|
<p>I have the following python dictionary:</p>
<pre><code>a = {'name': 'Kestral',
'children': [
{'name': 'Burtree Lane',
'children': [
{'name': 'ARCHIVE',
'children': []},
{'name': 'Development',
'children': [
{'name': 'Fee Proposals',
'children': []}]}]}]}
</code></pre>
<p>and i'm trying to write a recursive function to produce the following dictionary:</p>
<pre><code>{'name': 'Kestral',
'folder': 'Kestral',
'children': [
{'name': 'Burtree Lane',
'folder': 'Kestral/Burtree Lane',
'children': [
{'name': 'ARCHIVE',
'folder': 'Kestral/Burtree Lane/ARCHIVE',
'children': []},
{'name': 'Development',
'folder': 'Kestral/Burtree Lane/Development',
'children': [
{'name': 'Fee Proposals',
'folder': 'Kestral/Burtree Lane/Development/Fee Proposals',
'children': []}]}]}]}
</code></pre>
<p>This is my python code:</p>
<pre><code>def build_structured_dict(data, parent_path="/"):
new_dict = {
"name": data["name"],
"folder": f"{parent_path}{data['name']}" if parent_path else data["name"],
"children": [],
}
for child in data["children"]:
child_path = f"{parent_path}{data['name']}" if parent_path else ""
new_dict["children"].append(build_structured_dict(child, f"{child_path}/{child['name']}"))
return new_dict
</code></pre>
<p>But this returns:</p>
<pre><code>{'name': 'Kestral',
'folder': '/Kestral',
'children': [{
'name': 'Burtree Lane',
'folder': '/Kestral/Burtree LaneBurtree Lane',
'children': [{
'name': 'ARCHIVE',
'folder': '/Kestral/Burtree LaneBurtree Lane/ARCHIVEARCHIVE',
'children': []},
{'name': 'Development',
'folder': '/Kestral/Burtree LaneBurtree Lane/DevelopmentDevelopment',
'children': [{
'name': 'Fee Proposals',
'folder': '/Kestral/Burtree LaneBurtree Lane/DevelopmentDevelopment/Fee ProposalsFee Proposals',
'children': []}]}]}]}
</code></pre>
<p>Can someone help me remove these duplicated path segments?</p>
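<p>For what it is worth, the duplication seems to come from appending <code>data['name']</code> twice: once into <code>folder</code> and again into <code>child_path</code>. Threading the already-built folder string down to the children avoids that. A sketch of the idea:</p>

```python
def build_structured_dict(data, parent_path=""):
    # build this node's folder once, then hand the whole string to the children
    folder = f"{parent_path}/{data['name']}" if parent_path else data["name"]
    return {
        "name": data["name"],
        "folder": folder,
        "children": [build_structured_dict(child, folder) for child in data["children"]],
    }

a = {'name': 'Kestral', 'children': [
        {'name': 'Burtree Lane', 'children': [
            {'name': 'ARCHIVE', 'children': []},
            {'name': 'Development', 'children': [
                {'name': 'Fee Proposals', 'children': []}]}]}]}

result = build_structured_dict(a)
```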
|
<python>
|
2024-01-30 18:22:06
| 2
| 385
|
user1753640
|
77,908,576
| 880,874
|
How can I increment items in a list alphabetically?
|
<p>I have the following Python code that prints data based on a template.</p>
<p>In the template, each selection is preceded by empty brackets, like this:</p>
<pre><code>-[ ] selection one
-[ ] selection two
-[ ] selection three
</code></pre>
<p>Is there a way to change my script so that it increments with letters?</p>
<p>Like:</p>
<pre><code>a. selection one
b. selection two
c. selection three
</code></pre>
<p>I tried using something like <code>s = bytes([ch[0] + 1])</code> but I kept encountering a bunch of errors.</p>
<p>Here is my script:</p>
<pre><code>def __str__(self):
# show the correct order (noted by Sarah)
self.options.sort(key=attrgetter('order'))
# make sure it supports multiple correct selections
correct_selection = ", ".join([
str(selection)
for selection
in self.options
if selection.correct
])
# also, make sure it substitutes only the known values (see Tompei for ?)
return _TEMPLATE.safe_substitute({
'dilemma_ethos': self.text,
'options': "\n".join([f'-[ ] {option}' for option in self.options]),
'correct_selection': correct_selection,
})
</code></pre>
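<p>Rather than byte arithmetic, pairing the options against <code>string.ascii_lowercase</code> with <code>zip</code> gives the letters directly; swapping this into the <code>'options'</code> line above would look roughly like the following standalone sketch (option texts made up):</p>

```python
import string

options = ["selection one", "selection two", "selection three"]

# zip stops at the shorter sequence, so up to 26 options each get a letter
lines = [f"{letter}. {option}"
         for letter, option in zip(string.ascii_lowercase, options)]
```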
|
<python><python-3.x>
|
2024-01-30 18:15:31
| 1
| 7,206
|
SkyeBoniwell
|
77,908,525
| 955,273
|
bug in cross-process synchronisation using python's multiprocessing.Event from thread pool?
|
<p>In a distributed system we're building at work we are using <a href="https://redis.io/docs/manual/patterns/distributed-locks/" rel="nofollow noreferrer">redis's distributed locks</a> to synchronise multiple accesses to the results of a calculation. (We're using the <a href="https://redis-py.readthedocs.io/en/v5.0.1/lock.html" rel="nofollow noreferrer">py-redis implementation</a>)</p>
<p>Calculations can die for various reasons, and so we use a timeout on the lock so that if a worker dies the lock will eventually be released and waiting clients will be able to handle this case.</p>
<p>Since calculations can take a variable length time, there is no single timeout value which fits all, so we spawn a separate daemonised process which will regularly refresh the lock whilst the parent is alive.</p>
<ul>
<li>While the calculation is running we refresh the lock.</li>
<li>When the calculation is done we stop the refresh process and unlock.</li>
<li>If the worker dies it won't leave orphans behind, so the child refresh process will be killed and the lock will eventually time out.</li>
</ul>
<h2>Problem</h2>
<p>We're using <code>multiprocessing.Event</code> to communicate between the parent process and the <code>multiprocessing.Process</code> <code>"refresh"</code> child process.</p>
<p>Originally the child process would loop on the <code>Event</code>, refreshing the lock until the event is set by the parent.</p>
<p>However, there are times where the <code>multiprocessing.Process.start</code> call doesn't seem to actually start the process: it looks as if <code>refresh.start()</code> returns but the child never runs.</p>
<p>To be pedantic I then added a 2nd <code>Event</code> which the parent waits on, and the child sets when it starts.</p>
<p>We witness the parent waiting forever on this event.</p>
<h2>Example app</h2>
<p>I have boiled the problem down to the following example app which exhibits the deadlock behaviour:</p>
<p><strong>The locking code which spawns the refresh process, yields to the caller, and then unlocks:</strong></p>
<pre class="lang-py prettyprint-override"><code>@contextlib.contextmanager
def lock(no):
'''
starts the refresh child, waits for it to start, yields to the caller, and
then upon return signals the child we are done and waits for it to exit
'''
start_ev = multiprocessing.Event()
done_ev = multiprocessing.Event()
# would obtain lock here and pass it to refresh
# lock = redis.lock(...)
refresh = multiprocessing.Process(target=wait_for_done, args=(start_ev, done_ev, no), daemon=True)
refresh.start()
while not start_ev.is_set():
print(f"{no}: wait for start")
start_ev.wait(1)
print(f"{no}: ready to work")
yield
print(f"{no}: work complete")
done_ev.set()
refresh.join()
</code></pre>
<p><strong>This child process run function which refreshes the lock until the parent is done:</strong></p>
<pre class="lang-py prettyprint-override"><code>def wait_for_done(start_ev, done_ev, no):
'''
the refresh job - signals to the caller it has started, then loops on the done
event, refreshing the lock
'''
print(f"{no}: start")
start_ev.set()
while not done_ev.is_set():
# would refresh lock here
# lock.reacquire()
print(f"{no}: wait for done")
done_ev.wait(1)
print(f"{no}: done")
</code></pre>
<p><strong>The example calculation task and thread pool which shows we eventually deadlock:</strong></p>
<pre class="lang-py prettyprint-override"><code>def task(no):
'''
example calculation which needs to be locked
'''
with lock(no):
print(f"{no}: working...")
sleep = random.randint(3, 5)
time.sleep(sleep)
# kick off multiple jobs to show some of them fail to synchronise
with concurrent.futures.ThreadPoolExecutor(8) as pool:
fs = []
for i in range(8):
f = pool.submit(task, i)
fs.append(f)
[ f.result() for f in fs ]
</code></pre>
<p>Running this will correctly start and stop some of the child processes, but eventually it will just get stuck.</p>
<h2>Output:</h2>
<pre class="lang-none prettyprint-override"><code>0: wait for start
0: start
3: wait for start
0: wait for done
3: start
2: wait for start
0: ready to work
0: working
...
4: wait for start
5: wait for start
4: wait for start
5: wait for start
4: wait for start
5: wait for start
4: wait for start
5: wait for start
...repeat forever
</code></pre>
<p>I guess that I'm using <code>multiprocessing</code> incorrectly but am not sure where I'm going wrong?</p>
<p>Perhaps it's got something to do with the fact I'm doing this concurrently from multiple python threads (the <code>concurrent.futures.ThreadPoolExecutor</code>), but given python doesn't have "real" threads because of the GIL, and it's interacting with <code>multiprocessing</code> which should be inherently thread safe, I would guess not?</p>
<p>Is what I'm trying to do here with <code>multiprocessing</code> workable?</p>
<p>How can I achieve my distributed locking with out-of-band refresh in the manner described above?</p>
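<p>One suspicion worth ruling out (an assumption on my part, not something the output proves): on Linux the default start method is <code>fork</code>, and forking from a process that is also running a thread pool can leave internal locks, held by other threads at fork time, permanently locked in the child, which looks exactly like a child that never starts. Forcing the <code>spawn</code> start method sidesteps any inherited state:</p>

```python
import multiprocessing

# spawn launches a fresh interpreter instead of fork()ing the (multithreaded)
# parent, so no locks are inherited in a locked state
ctx = multiprocessing.get_context("spawn")

start_ev = ctx.Event()
done_ev = ctx.Event()
# refresh = ctx.Process(target=wait_for_done, args=(start_ev, done_ev, no), daemon=True)
```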
<h2>Full source code below:</h2>
<pre class="lang-py prettyprint-override"><code>import concurrent.futures
import random
import time
import multiprocessing
import contextlib
@contextlib.contextmanager
def lock(no):
'''
starts the refresh child, waits for it to start, yields to the caller, and
then upon return signals the child we are done and waits for it to exit
'''
start_ev = multiprocessing.Event()
done_ev = multiprocessing.Event()
# would obtain lock here and pass it to refresh
# lock = redis.lock(...)
refresh = multiprocessing.Process(target=wait_for_done, args=(start_ev, done_ev, no), daemon=True)
refresh.start()
while not start_ev.is_set():
print(f"{no}: wait for start")
start_ev.wait(1)
print(f"{no}: ready to work")
yield
print(f"{no}: work complete")
done_ev.set()
refresh.join()
def wait_for_done(start_ev, done_ev, no):
'''
the refresh job - signals to the caller it has started, then loops on the done
event, refreshing the lock
'''
print(f"{no}: start")
start_ev.set()
while not done_ev.is_set():
# would refresh lock here
# lock.reacquire()
print(f"{no}: wait for done")
done_ev.wait(1)
print(f"{no}: done")
def task(no):
'''
example task which needs to be locked
'''
with lock(no):
print(f"{no}: working...")
sleep = random.randint(3, 5)
time.sleep(sleep)
# kick off multiple jobs to show some of them fail to synchronise
with concurrent.futures.ThreadPoolExecutor(8) as pool:
fs = []
for i in range(8):
f = pool.submit(task, i)
fs.append(f)
[ f.result() for f in fs ]
</code></pre>
|
<python><multiprocessing><distributed-computing>
|
2024-01-30 18:06:39
| 0
| 28,956
|
Steve Lorimer
|
77,908,432
| 16,912,844
|
Test (pytest) Being Skipped While Using Jenkins, But Not Run Directly
|
<p>I am using pytest for testing a function, but it seems to have an issue executing only when running on/with Jenkins. It runs fine if I directly execute the command on the Jenkins machine though.</p>
<p>I have the below functions...</p>
<p>This code runs fine on my local machine, and after running on Jenkins, if I copy and paste the actual command (cmd) executed by Jenkins, it also works fine. But when Jenkins executes it as a job, the test gets skipped. I tried a simple test function that just takes the return of the <code>get_asset</code> function and prints it, and it returns everything correctly. So I am not sure what I am missing and why the test is only skipped when running as a Jenkins job.</p>
<p><strong>Helper Function</strong></p>
<pre class="lang-py prettyprint-override"><code>def get_asset() -> list[PathLike]:
"""Return list of asset(s)"""
asset_list = []
for asset in get_file_list(CURRENT_MODULE_PATH / 'asset'):
# get_file_list return generator
if 'need_to_skip_asset' in str(asset):
continue
asset_list.append(asset)
return asset_list
</code></pre>
<p><strong>Test Function</strong></p>
<pre class="lang-py prettyprint-override"><code>@pytest.mark.parametrize('asset', get_asset())
def test_app_launch_asset(app_binary, asset):
"""Test Application Function"""
print(f'Application: {app_binary}')
print(f'Asset: {asset}')
# Run `cmd`
applib.execute(
cmd=[str(app_binary), str(asset)],
timeout=15,
)
</code></pre>
|
<python><jenkins><pytest>
|
2024-01-30 17:50:21
| 1
| 317
|
YTKme
|
77,908,367
| 18,558,424
|
can't start smtp debug sever in windows cmd line
|
<p>I downloaded and installed Python. It is added to the environment variable Path. And I can run it in the cmd line:</p>
<pre><code>Python 3.12.1 (tags/v3.12.1:2305ca5, Dec 7 2023, 22:03:25) [MSC v.1937 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> print("hello")
hello
</code></pre>
<p>But when I tried to start the SMTP debug server, it shows the error: No module named smtpd.</p>
<p>Why couldn't I run it? What did I do wrong?</p>
<pre><code>C:\Users\ad>python -m smtpd -c DebuggingServer -n 127.0.0.1:1025
C:\Users\ad\AppData\Local\Programs\Python\Python312\python.exe: No module named smtpd
</code></pre>
|
<python>
|
2024-01-30 17:40:51
| 0
| 716
|
Abe
|
77,908,236
| 11,628,437
|
jaxlib.xla_extension.XlaRuntimeError: INTERNAL: Failed to execute XLA Runtime executable: run time error: custom call 'xla.gpu.custom_call' failed
|
<p>I am trying to run multiple sbx programs (that use JAX) concurrently using <code>joblib</code>. Here is my program -</p>
<pre><code>'''
For installation please do -
pip install gym
pip install sbx-rl
pip install mujoco
pip install shimmy
'''
from joblib import Parallel, delayed
import gym
from sbx import SAC
# from stable_baselines3 import SAC
def train():
env = gym.make("Humanoid-v4")
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=7e5, progress_bar=True)
def train_model():
train()
if __name__ == '__main__':
Parallel(n_jobs=10)(delayed(train)() for i in range(3))
</code></pre>
<p>This is the error that I am getting -</p>
<pre><code>/home/dgthomas/.local/lib/python3.10/site-packages/stable_baselines3/common/vec_env/patch_gym.py:49: UserWarning: You provided an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Stable-Baselines3 is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
warnings.warn(
/home/dgthomas/.local/lib/python3.10/site-packages/stable_baselines3/common/vec_env/patch_gym.py:49: UserWarning: You provided an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Stable-Baselines3 is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
warnings.warn(
/home/dgthomas/.local/lib/python3.10/site-packages/stable_baselines3/common/vec_env/patch_gym.py:49: UserWarning: You provided an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Stable-Baselines3 is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
warnings.warn(
2024-01-30 11:19:12.354168: W external/xla/xla/service/gpu/runtime/support.cc:58] Intercepted XLA runtime error:
INTERNAL: jaxlib/gpu/prng_kernels.cc:33: operation gpuGetLastError() failed: out of memory
2024-01-30 11:19:12.354264: E external/xla/xla/pjrt/pjrt_stream_executor_client.cc:2732] Execution of replica 0 failed: INTERNAL: Failed to execute XLA Runtime executable: run time error: custom call 'xla.gpu.custom_call' failed: jaxlib/gpu/prng_kernels.cc:33: operation gpuGetLastError() failed: out of memory; current tracing scope: custom-call.11; current profiling annotation: XlaModule:#prefix=jit(_threefry_split)/jit(main),hlo_module=jit__threefry_split,program_id=2#.
joblib.externals.loky.process_executor._RemoteTraceback:
"""
jax.errors.SimplifiedTraceback: For simplicity, JAX has removed its internal frames from the traceback of the following exception. Set JAX_TRACEBACK_FILTERING=off to include these.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/externals/loky/process_executor.py", line 463, in _process_worker
r = call_item()
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/externals/loky/process_executor.py", line 291, in __call__
return self.fn(*self.args, **self.kwargs)
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/parallel.py", line 589, in __call__
return [func(*args, **kwargs)
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/parallel.py", line 589, in <listcomp>
return [func(*args, **kwargs)
File "/work/LAS/usr/tbd/5_test.py", line 23, in my_func
model = SAC("MlpPolicy", env,verbose=0)
File "/home/dgthomas/.local/lib/python3.10/site-packages/sbx/sac/sac.py", line 109, in __init__
self._setup_model()
File "/home/dgthomas/.local/lib/python3.10/site-packages/sbx/sac/sac.py", line 126, in _setup_model
self.key = self.policy.build(self.key, self.lr_schedule, self.qf_learning_rate)
File "/home/dgthomas/.local/lib/python3.10/site-packages/sbx/sac/policies.py", line 143, in build
key, actor_key, qf_key, dropout_key = jax.random.split(key, 4)
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/random.py", line 303, in split
return _return_prng_keys(wrapped, _split(typed_key, num))
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/random.py", line 289, in _split
return prng.random_split(key, shape=shape)
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/prng.py", line 769, in random_split
return random_split_p.bind(keys, shape=shape)
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/core.py", line 444, in bind
return self.bind_with_trace(find_top_trace(args), args, params)
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/core.py", line 447, in bind_with_trace
out = trace.process_primitive(self, map(trace.full_raise, args), params)
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/core.py", line 935, in process_primitive
return primitive.impl(*tracers, **params)
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/prng.py", line 781, in random_split_impl
base_arr = random_split_impl_base(
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/prng.py", line 787, in random_split_impl_base
return split(base_arr)
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/prng.py", line 786, in <lambda>
split = iterated_vmap_unary(keys_ndim, lambda k: impl.split(k, shape))
File "/home/dgthomas/.local/lib/python3.10/site-packages/jax/_src/prng.py", line 1291, in threefry_split
return _threefry_split(key, shape)
jaxlib.xla_extension.XlaRuntimeError: INTERNAL: Failed to execute XLA Runtime executable: run time error: custom call 'xla.gpu.custom_call' failed: jaxlib/gpu/prng_kernels.cc:33: operation gpuGetLastError() failed: out of memory; current tracing scope: custom-call.11; current profiling annotation: XlaModule:#prefix=jit(_threefry_split)/jit(main),hlo_module=jit__threefry_split,program_id=2#.
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/work/LAS/usr/tbd/5_test.py", line 27, in <module>
Parallel(n_jobs=3)(delayed(my_func)() for i in range(3))
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/parallel.py", line 1952, in __call__
return output if self.return_generator else list(output)
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/parallel.py", line 1595, in _get_outputs
yield from self._retrieve()
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/parallel.py", line 1699, in _retrieve
self._raise_error_fast()
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/parallel.py", line 1734, in _raise_error_fast
error_job.get_result(self.timeout)
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/parallel.py", line 736, in get_result
return self._return_or_raise()
File "/home/dgthomas/.local/lib/python3.10/site-packages/joblib/parallel.py", line 754, in _return_or_raise
raise self._result
jaxlib.xla_extension.XlaRuntimeError: INTERNAL: Failed to execute XLA Runtime executable: run time error: custom call 'xla.gpu.custom_call' failed: jaxlib/gpu/prng_kernels.cc:33: operation gpuGetLastError() failed: out of memory; current tracing scope: custom-call.11; current profiling annotation: XlaModule:#prefix=jit(_threefry_split)/jit(main),hlo_module=jit__threefry_split,program_id=2#.
</code></pre>
<p>I am using a 40 GB GPU (<code>a100-pcie</code>). Therefore I doubt that my GPU is running out of memory. Please let me know if any clarification is needed.</p>
<p>Edit 1: This is how I call my program - <code>export XLA_PYTHON_CLIENT_PREALLOCATE=false && python 5_test.py</code> (The name of my program is <code>5_test.py</code>)</p>
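<p>For reference, the in-process equivalent of that shell export, plus the related fraction knob (the value here is illustrative). By default each JAX process preallocates about 75% of the GPU, so several concurrent workers can exhaust even a 40 GB card; these must be set before <code>jax</code> is imported:</p>

```python
import os

# equivalent of the shell export; disables the ~75% up-front allocation
os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"
# alternatively, keep preallocation but cap each process's share (illustrative value)
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.25"
```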
|
<python><dynamic-memory-allocation><jax>
|
2024-01-30 17:15:26
| 1
| 1,851
|
desert_ranger
|
77,907,959
| 23,260,297
|
groupby multiple columns and broadcast results back to each row in dataframe
|
<p>I have posted this question before, but it keeps getting closed as a duplicate of similar questions; those solutions have not helped me here.</p>
<p>I have a dataframe that needs to be grouped by 3 different columns. From the resultant groupings, I need to perform calculations and then apply the result to each row in a new column.</p>
<p>My data looks like this:</p>
<pre><code>ID Deal Party Commodity startdate enddate fixedpricestrike quantity mtmvalue
---- ----- ----- --------- --------- ------- ---------------- -------- ---------
J1 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00
J2 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00
J3 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 50.00
J4 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 5 25.00
</code></pre>
<p>My objective is to group the data by [Deal, commodity, startdate] so that the resultant data looks like this:</p>
<pre><code>ID Deal Party Commodity startdate enddate fixedpricestrike quantity mtmvalue
---- ----- ----- --------- --------- ------- ---------------- -------- ---------
J1 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00
J2 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00
ID Deal Party Commodity startdate enddate fixedpricestrike quantity mtmvalue
---- ----- ----- --------- --------- ------- ---------------- -------- ---------
J3 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 50.00
J4 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 5 25.00
</code></pre>
<p>From this, I need to use a formula to calculate a 'fprice' and add it to each row like this:</p>
<pre><code>ID Deal Party Commodity startdate enddate fixedpricestrike quantity mtmvalue fprice
---- ----- ----- --------- --------- ------- ---------------- -------- --------- -----
J1 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00 0
J2 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00 0
ID Deal Party Commodity startdate enddate fixedpricestrike quantity mtmvalue fprice
---- ----- ----- --------- --------- ------- ---------------- -------- --------- -----
J3 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 50.00 1.25
J4 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 25.00 1.25
</code></pre>
<p>My issue lies in the next step, when I try to add the fprice back to the original dataframe. I have this line of code:</p>
<pre><code>df['fprice'] = df.groupby(['StartDate', 'Commodity', 'Deal']).apply(lambda group: -(group['MTMValue'].sum() - (group['FixedPriceStrike'] * group['Quantity']).sum()) / group['Quantity'].sum()).reset_index(drop=True)
</code></pre>
<p>which returns this dataframe:</p>
<pre><code>ID Deal Party Commodity startdate enddate fixedpricestrike quantity mtmvalue fprice
---- ----- ----- --------- --------- ------- ---------------- -------- --------- -----
J1 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00 0
J2 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00 1.25
J3 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 50.00
J4 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 25.00
</code></pre>
<p>when the result should look like</p>
<pre><code>ID Deal Party Commodity startdate enddate fixedpricestrike quantity mtmvalue fprice
---- ----- ----- --------- --------- ------- ---------------- -------- --------- -----
J1 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00 0
J2 Sell J (stock1, stock2) 01Jan23 01Feb23 10.00 10 100.00 0
J3 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 50.00 1.25
J4 Buy J (stock1, stock2) 01Jan23 01Feb23 5.00 10 25.00 1.25
</code></pre>
<p>I am also relatively new to using pandas, and I am unsure why my result is coming out this way. Any suggestions would help.</p>
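<p>One way to fix this, sketched below, is <code>groupby().transform()</code>, which broadcasts each group-level sum back onto the group's own rows while keeping the original row order and index (the <code>apply(...).reset_index(drop=True)</code> approach produces one value per group and then aligns those values by position, which is why they land on the wrong rows). The data here is illustrative and takes J4's quantity as 10, matching the expected output.</p>

```python
import pandas as pd

# Illustrative data (J4's quantity taken as 10, per the expected output above)
df = pd.DataFrame({
    "ID": ["J1", "J2", "J3", "J4"],
    "Deal": ["Sell", "Sell", "Buy", "Buy"],
    "Commodity": ["(stock1, stock2)"] * 4,
    "StartDate": ["01Jan23"] * 4,
    "FixedPriceStrike": [10.0, 10.0, 5.0, 5.0],
    "Quantity": [10, 10, 10, 10],
    "MTMValue": [100.0, 100.0, 50.0, 25.0],
})

# Precompute strike * quantity so every term of the formula can use
# transform("sum"), which returns one value per row rather than per group
df["strike_value"] = df["FixedPriceStrike"] * df["Quantity"]
g = df.groupby(["StartDate", "Commodity", "Deal"])
df["fprice"] = -(
    g["MTMValue"].transform("sum") - g["strike_value"].transform("sum")
) / g["Quantity"].transform("sum")
```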
|
<python><pandas><dataframe>
|
2024-01-30 16:35:18
| 1
| 2,185
|
iBeMeltin
|
77,907,884
| 9,877,065
|
Python relation between object and type
|
<p>After sometimes spent on writing small Python scripts, I started to try to learn it more formally.</p>
<p>I've this code :</p>
<pre><code>class One(object):
pass
print(One, type(One))
class Two(type):
pass
print(Two, type(Two))
print(object , type(object))
print(type, type(type))
print(isinstance(type , type))
print(isinstance(object , type))
print(isinstance(type , object))
</code></pre>
<p>output:</p>
<pre><code><class '__main__.One'> <class 'type'>
<class '__main__.Two'> <class 'type'>
<class 'object'> <class 'type'>
<class 'type'> <class 'type'>
True
True
True
</code></pre>
<p>And all of a sudden I feel like I wasted my time: why have both <code>object</code> and <code>type</code> if they look the same? I tried to dig more:</p>
<pre><code>ob_list = [i for i in dir(object)]
print(ob_list)
print(object.__mro__)
print(object.__base__) ## How do I get this __base__ not in dir(object) !!!
print(object.__class__)
print('\n\n')
ty_list = [i for i in dir(type)]
print(ty_list)
print(type.__mro__)
print(type.__base__)
print(type.__class__)
print('\n\n')
print(object == type)
print(type(object) == type(type))
print(object == type)
</code></pre>
<p>output:</p>
<pre><code>['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
(<class 'object'>,)
None
<class 'type'>
['__abstractmethods__', '__annotations__', '__base__', '__bases__', '__basicsize__', '__call__', '__class__', '__delattr__', '__dict__', '__dictoffset__', '__dir__', '__doc__', '__eq__', '__flags__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__instancecheck__', '__itemsize__', '__le__', '__lt__', '__module__', '__mro__', '__name__', '__ne__', '__new__', '__or__', '__prepare__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__ror__', '__setattr__', '__sizeof__', '__str__', '__subclasscheck__', '__subclasses__', '__subclasshook__', '__text_signature__', '__weakrefoffset__', 'mro']
(<class 'type'>, <class 'object'>)
<class 'object'>
<class 'type'>
False
True
False
</code></pre>
<p>It seems to me (please correct me if I am wrong, as I probably am) that <code>type</code> inherits from <code>object</code>. But then why is <code>type(object)</code> equal to <code>'type'</code> and not <code>'object'</code>?</p>
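<p>The relationships can be sketched with a few checks: inheritance (<code>issubclass</code>) and instantiation (<code>isinstance</code>/<code>type()</code>) are two different graphs, and <code>type(x)</code> always reports the class that <em>created</em> <code>x</code> (its metaclass), not its base class.</p>

```python
# 'type' is a subclass of 'object' (inheritance), while 'object' is an
# *instance* of 'type' (every class, including object, is created by type)
assert issubclass(type, object)   # inheritance: type derives from object
assert isinstance(object, type)   # object is itself a class, built by type
assert type(object) is type       # type() reports the metaclass, not the base
assert object.__base__ is None    # object tops the inheritance hierarchy
assert type.__base__ is object    # ...which type inherits from
```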
|
<python><class><types>
|
2024-01-30 16:24:12
| 0
| 3,346
|
pippo1980
|
77,907,843
| 1,471,980
|
How do you convert a string into a data frame in pandas
|
<p>I'm performing an API request and I get this response back:</p>
<pre><code>print(res)
b'amp,dev, util\n192.168.101.1,server123,80%\n192.168.101.4,serverabc,75%\n192.68.101.5,serverusa1,50%\n'
</code></pre>
<p>I need convert this string to a data frame:</p>
<pre><code>Amp. dev. util
192.168.101.1 server123 80%
192.168.101.4 serverabc 75%
192.168.101.5 serverusa1 50%
</code></pre>
<p>I tried this.</p>
<pre><code> import pandas as pd
pd.read_table(res, sep='\n')
</code></pre>
<p>This raises: <code>OSError: expected file path</code></p>
<p>Any ideas what I'm doing wrong here?</p>
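<p>A possible fix, sketched below: <code>read_table</code>/<code>read_csv</code> expect a file path or a file-like object, not raw bytes, so the error can be avoided by wrapping the response in <code>io.BytesIO</code>. Note that the data is comma-separated, so <code>sep='\n'</code> (which would treat newlines as column separators) is not needed.</p>

```python
import io
import pandas as pd

res = b'amp,dev, util\n192.168.101.1,server123,80%\n192.168.101.4,serverabc,75%\n192.68.101.5,serverusa1,50%\n'

# BytesIO gives read_csv the file-like interface it expects;
# skipinitialspace drops the stray space after commas (e.g. " util")
df = pd.read_csv(io.BytesIO(res), skipinitialspace=True)
```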
|
<python><pandas><string><dataframe>
|
2024-01-30 16:19:30
| 2
| 10,714
|
user1471980
|
77,907,799
| 8,049,947
|
How to Ensure Execution of Finally Block in Airflow DAG Despite Exception in Python Code?
|
<p>I have the following code, and under normal circumstances, it won't fail in Airflow. To make the DAG fail in case of a certain situation, I added a ValueError. However, in the event of a failure, the finally block, which is crucial, doesn't get executed. How can I avoid this situation?</p>
<pre class="lang-py prettyprint-override"><code>try:
dump_tables(mssql_engine=mssql_engine, tables_list=tables_for_db_connection)
except Exception as e:
logger.error(f"Error during database operation: {str(e)}")
raise ValueError
finally:
mssql_engine.dispose()
</code></pre>
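<p>For what it's worth, a minimal reproduction (no Airflow involved) suggests that a <code>finally</code> clause does execute even when the <code>except</code> clause raises, before the new exception propagates; if <code>dispose()</code> is genuinely being skipped, the cause may lie elsewhere, for example the task process being killed before Python can unwind.</p>

```python
# Minimal reproduction: the finally block runs despite the re-raise
events = []

try:
    try:
        raise RuntimeError("db error")   # stands in for dump_tables failing
    except Exception:
        events.append("except")
        raise ValueError                 # re-raise, as in the DAG code
    finally:
        events.append("finally")         # executes before ValueError escapes
except ValueError:
    events.append("caught")              # the DAG would now be marked failed
```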
|
<python><sql-server><airflow>
|
2024-01-30 16:12:25
| 0
| 308
|
panther93
|
77,907,578
| 14,705,072
|
How to add a general title with a seaborn.objects.Plot.facet plot
|
<p>I am currently using the seaborn library in Python to create a facetted stacked barplot from a pandas dataframe named <code>averages</code> with columns <code>['Name', 'Period', 'Value', 'Solver']</code>.</p>
<p>Here is the code I use to create the plot I want.</p>
<pre><code>p = so.Plot(data = averages, x = 'Period', y = 'Value', color = 'Name').add(so.Bar(), so.Stack(), suptitle='Inventory levels')
p = p.facet(col='Solver', order=['spse', 'mp2', 'mels'])
</code></pre>
<p>I am searching for a way to add a general title to the plot, <em>i.e.</em> a single title above all the subplots, like the <code>matplotlib.pyplot.suptitle</code> function does for example.</p>
<p>I know that the function <code>seaborn.objects.Plot.label</code> has a <code>title=</code> option, but when I use it, this puts the same title above each subplot of the facetted graph.</p>
|
<python><seaborn><seaborn-objects>
|
2024-01-30 15:41:40
| 1
| 319
|
Haeden
|
77,907,542
| 4,451,521
|
Applying a function for only some indices
|
<p>I have a function <code>computeLeft</code> which receives an index and returns four numbers. Something like this:</p>
<pre><code>def computeLeft(i):
return np.array([i*2, i*3, i*4, i*5])
# edited to correct it
</code></pre>
<p>Right now in my code I use it like this:</p>
<pre><code>import numpy as np
import pandas as pd
results=["val1","val2","val3","val4"]
df[results] = np.vectorize(computeLeft, signature="()->(4)")(range(len(df)))
</code></pre>
<p>where <code>df</code> is some dataframe.</p>
<p>This obviously applies the function to all rows of <code>df</code>.
I want to apply this function to <em>only some</em> indexes of <code>df</code>.</p>
<p>So for example I have a list <code>[2, 5, 7, 8, 10]</code>.
I want to compute <code>computeLeft</code> only for the indices in the list and that the columns in result have values only for those rows (the rest having Nan).</p>
<p>How can I apply <code>computeLeft</code> selectively like this?</p>
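<p>A sketch of one way to do this: pre-fill the result columns with <code>NaN</code>, then assign the vectorized result to just the chosen rows with <code>.loc</code>, which aligns the computed block to exactly those index labels.</p>

```python
import numpy as np
import pandas as pd

def computeLeft(i):
    return np.array([i * 2, i * 3, i * 4, i * 5])

df = pd.DataFrame({"x": range(12)})          # stand-in dataframe
results = ["val1", "val2", "val3", "val4"]
idx = [2, 5, 7, 8, 10]                       # only these rows get values

# Create the columns filled with NaN, then fill only the selected labels;
# the vectorized call returns a (len(idx), 4) block matching those rows
df[results] = np.nan
df.loc[idx, results] = np.vectorize(computeLeft, signature="()->(4)")(idx)
```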
|
<python><pandas><numpy>
|
2024-01-30 15:34:47
| 2
| 10,576
|
KansaiRobot
|
77,907,507
| 10,380,409
|
Selenium Invalid ID session before quit with new chromedriver 121
|
<p>I updated the browser and the Selenium web driver to the latest version, 121.0.6167.85 (Official Build) (x86_64).
Now, after roughly 30 minutes, all tests fail with this error:</p>
<pre><code>HOOK-ERROR in after_scenario: InvalidSessionIdException: Message: invalid session id
</code></pre>
<p>NOTE:</p>
<ol>
<li>Before the update, all tests passed.</li>
<li>The session is not closed before the error, either with <code>web_driver.close()</code> or with <code>web_driver.quit()</code>.</li>
</ol>
<p>Does anybody have the same error? Any tips on how to resolve this? Is there a timeout for the session? I didn't find anything about it.</p>
|
<python><selenium-webdriver><selenium-chromedriver><automated-tests>
|
2024-01-30 15:28:48
| 1
| 826
|
Angelotti
|
77,907,430
| 1,737,830
|
Searching for objects within nested array: how to return only found objects and parent elements?
|
<p>Here's an excerpt from <code>pictures</code> MongoDB collection:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"_id": "57582b6b",
"source": "integration",
"url": "https://example.com/images/51/landscapes-polar.xml",
"pictures": [
{
"name": "pines",
"version": "2"
},
{
"name": "penguins",
"version": "1"
},
{
"name": "pineapple",
"version": "7"
}
]
},
{
"_id": "57582b6d",
"source": "customer",
"url": "https://example.com/images/15/nature.xml",
"pictures": [
{
"name": "mountains",
"version": "2"
},
{
"name": "pines",
"version": "1"
}
]
},
{
"_id": "57582b6c",
"source": "qa",
"url": "https://example.com/image/32/landscapes.xml",
"pictures": [
{
"name": "alps",
"version": "1"
},
{
"name": "pineapple",
"version": "7"
},
{
"name": "pines",
"version": "3"
}
]
}
]
</code></pre>
<p>My main concern is to find specific <code>names</code> inside the nested <code>pictures</code> array. When names matching a partial query string are found, they should be preserved in the <code>pictures</code> array and displayed along with that array's parent. Using the PyMongo library, I was able to retrieve queried data using this function:</p>
<pre class="lang-py prettyprint-override"><code>import re
from flask import Flask, jsonify
from controller.database import client, database_name, temp_collection
app = Flask(__name__)
db = client[database_name]
collection = db[temp_collection]
@app.route('/component/find/<picture_name>', methods=['GET'])
def get_component(picture_name):
pattern = re.compile(picture_name, re.IGNORECASE)
pipeline = [
{"$unwind": "$pictures"},
{"$match": {"pictures.name": {"$regex": pattern}}},
{"$group": {
"_id": "$_id",
"url": {"$first": "$url"},
"source": {"$first": "$source"},
"pictures": {"$addToSet": "$pictures"},
"root": {"$first": "$$ROOT"}
}},
{"$replaceRoot": {
"newRoot": {
"$mergeObjects": ["$root", {"pictures": "$pictures"}]
}
}},
{"$project": {
"_id": {"$toString": "$_id"},
"url": 1,
"source": 1,
"pictures": 1
}}
]
result = list(collection.aggregate(pipeline))
if result:
return jsonify(result)
else:
return jsonify({"message": "Component with picture '{}' not found.".format(picture_name)}), 404
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
<p>However, the retrieved data only contains one-element <code>pictures</code> arrays instead of all matching objects.</p>
<p>In other words, this is what I'd like to get:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"_id": "57582b6b",
"source": "integration",
"url": "https://example.com/51/landscapes-polar.xml",
"pictures": [
{
"name": "pines",
"version": "2"
},
{
"name": "pineapple",
"version": "7"
}
]
},
{
"_id": "57582b6d",
"source": "customer",
"url": "https://example.com/15/nature.xml",
"pictures": [
{
"name": "pines",
"version": "1"
}
]
},
{
"_id": "57582b6c",
"source": "qa",
"url": "https://example.com/image/32/landscapes.xml",
"pictures": [
{
"name": "pineapple",
"version": "7"
},
{
"name": "pines",
"version": "3"
}
]
}
]
</code></pre>
<p>and this is what I get now:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"_id": "57582b6b",
"source": "integration",
"url": "https://example.com/51/landscapes-polar.xml",
"pictures": [
{
"name": "pines",
"version": "2"
}
]
},
{
"_id": "57582b6d",
"source": "customer",
"url": "https://example.com/15/nature.xml",
"pictures": [
{
"name": "pines",
"version": "1"
}
]
},
{
"_id": "57582b6c",
"source": "qa",
"url": "https://example.com/image/32/landscapes.xml",
"pictures": [
{
"name": "pineapple",
"version": "7"
}
]
}
]
</code></pre>
<p>How to make sure all matching <code>pictures</code> objects get pushed to proper arrays? (Using <code>$push</code> instead of <code>$addToSet</code> returns the same results.)</p>
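<p>A possible alternative, sketched below: instead of <code>$unwind</code>/<code>$group</code>, filter the embedded array in place with <code>$filter</code> plus <code>$regexMatch</code>, which keeps every matching element together with its parent document in a single stage. (The helper only builds the pipeline, so it can be checked without a running MongoDB.)</p>

```python
# Builds an aggregation pipeline that keeps parents whose 'pictures' array
# has at least one matching name, and trims the array to the matches only
def build_pipeline(picture_name: str) -> list:
    return [
        {"$match": {"pictures.name": {"$regex": picture_name, "$options": "i"}}},
        {"$project": {
            "_id": {"$toString": "$_id"},
            "url": 1,
            "source": 1,
            "pictures": {"$filter": {
                "input": "$pictures",
                "as": "pic",
                "cond": {"$regexMatch": {
                    "input": "$$pic.name",
                    "regex": picture_name,
                    "options": "i",
                }},
            }},
        }},
    ]

pipeline = build_pipeline("pine")
```

<p>It would then be used as <code>collection.aggregate(build_pipeline(picture_name))</code> in place of the current pipeline.</p>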
|
<python><json><flask><pymongo>
|
2024-01-30 15:17:02
| 1
| 2,368
|
AbreQueVoy
|
77,907,417
| 13,615,436
|
Select all <table> elements without classes or ids with BeautifulSoup
|
<p>I am trying to select all <code><table></code> elements on some web pages with BeautifulSoup. The table elements do not have specific classes or ids.</p>
<pre class="lang-py prettyprint-override"><code>import bs4
import requests
def get_keycode_soup(url):
res = requests.get(url)
res.raise_for_status()
return bs4.BeautifulSoup(res.text, features="html.parser")
def parse_qmk_soup():
qmk_soup = get_keycode_soup("https://docs.qmk.fm/#/keycodes")
tables = qmk_soup.select("table")
# pass line for breakpoint
pass
def main():
parse_qmk_soup()
if __name__ == "__main__":
main()
</code></pre>
<p>I have also tried selecting all the different table elements with</p>
<pre class="lang-py prettyprint-override"><code>tables = qmk_soup.find_all("table")
# and
table_rows = qmk_soup.find_all("tr")
</code></pre>
<p>Whenever I pause the debugger on the <code>pass</code> line, <code>tables</code> is always <code>None</code>.</p>
<p>I have tried some similar methods to <a href="https://stackoverflow.com/questions/52905578/extracting-elements-without-class-or-id-using-beautifulsoup">this post</a> and <a href="https://stackoverflow.com/questions/66523174/how-to-select-all-table-elements-inside-a-div-parent-node-with-beautifulsoup">this post</a>, but since there do not appear to be any other descriptive tags on the tables I'm trying to select, iterating feels inefficient.</p>
<p>Is there a way to simply select all the <code><table></code> elements on their own?</p>
<p><strong>Edit</strong>: it appears that the page requires JS to load the tables as suggested by @DeepSpace below. Additionally, see the answer from @MendelG regarding following where the data is loaded from in case you might obtain the data from the source.</p>
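<p>For reference, a small static example shows that <code>soup.select("table")</code> does return every table element on server-rendered HTML, so an empty result is usually a sign the tables are injected by JavaScript rather than a selector problem:</p>

```python
import bs4

# Static HTML stand-in: two plain tables with no class or id
html = """
<html><body>
  <table><tr><td>KC_A</td></tr></table>
  <table><tr><td>KC_B</td></tr></table>
</body></html>
"""
soup = bs4.BeautifulSoup(html, features="html.parser")
tables = soup.select("table")   # returns a (possibly empty) list, never None
```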
|
<python><python-3.x><beautifulsoup><python-requests><html-parsing>
|
2024-01-30 15:14:47
| 1
| 1,294
|
will-hedges
|
77,907,234
| 4,564,080
|
Create a v1 model from a v2 model
|
<p>I have a project that is using Pydantic v2. The project is also using LangChain which, as of today, <a href="https://python.langchain.com/docs/guides/pydantic_compatibility" rel="nofollow noreferrer">only supports Pydantic v1</a>.</p>
<p>Instead of converting my entire project to Pydantic v1, or else having a mix of v1 and v2 models in the project, I had the idea that whenever I pass a Pydantic model to LangChain, I can simply convert the v2 model to a v1 model.</p>
<p>NOTE: It is the model schema which must be converted, not an instance of the model.</p>
<p>How can I achieve this?</p>
<p>It would look something like this:</p>
<pre class="lang-py prettyprint-override"><code>from langchain.chains import create_tagging_chain_pydantic
from pydantic import BaseModel as BaseModelV2
from pydantic.v1 import BaseModel as BaseModelV1
class TaggingModel(BaseModelV2):
...
def convert_to_v1(v2_model: type[BaseModelV2]) -> type[BaseModelV1]:
...
chain = create_tagging_chain_pydantic(pydantic_schema=convert_to_v1(TaggingModel), llm=llm)
</code></pre>
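<p>A rough sketch of such a converter, rebuilding the schema field by field with <code>pydantic.v1.create_model</code>. This only covers flat models with simple annotations; validators, nested v2 models, and field constraints would need extra handling.</p>

```python
from pydantic import BaseModel as BaseModelV2
from pydantic.v1 import BaseModel as BaseModelV1, create_model

def convert_to_v1(v2_model: type[BaseModelV2]) -> type[BaseModelV1]:
    # Map each v2 FieldInfo to the (type, default) tuples create_model expects;
    # Ellipsis marks a required field in both Pydantic generations
    fields = {
        name: (info.annotation, ... if info.is_required() else info.default)
        for name, info in v2_model.model_fields.items()
    }
    return create_model(v2_model.__name__, __base__=BaseModelV1, **fields)

# Hypothetical model standing in for the real tagging schema
class TaggingModel(BaseModelV2):
    sentiment: str
    aggressiveness: int = 0

TaggingModelV1 = convert_to_v1(TaggingModel)
```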
|
<python><pydantic><langchain>
|
2024-01-30 14:50:27
| 0
| 4,635
|
KOB
|
77,907,190
| 5,955,479
|
Using moto with pandas read_parquet and to_parquet functions
|
<p>I am trying to write a unit test for a function which uses <code>pd.read_parquet()</code> function and I am struggling to make it work. I have the code below</p>
<pre><code>from moto import mock_aws
import pandas as pd
import pytest
import datetime as dt
import boto3
from my_module import foo
@pytest.fixture
def mock_df():
cols = [
"timestamp",
"value"
]
values = [
[dt.datetime(2024, 1, 1, 0), 2.57],
[dt.datetime(2024, 1, 1, 1), 1.41],
[dt.datetime(2024, 1, 1, 2), 2.06],
]
df = pd.DataFrame(values, columns=cols)
return df
@mock_aws
def test_download(mock_df):
bucket_name = "test-input-bucket"
s3 = boto3.resource("s3", region_name="us-east-1")
s3.create_bucket(Bucket=bucket_name)
key1 = "s3://test-input-bucket/path/to/data.parquet"
mock_df.to_parquet(key1) # code fails already here
foo() # uses pd.read_parquet()
</code></pre>
<p>But I am getting this error</p>
<pre><code>OSError: When initiating multiple part upload for key 'path/to/data.parquet'
in bucket 'test-input-bucket': AWS Error INVALID_ACCESS_KEY_ID during
CreateMultipartUpload operation: The AWS Access Key Id you provided does not exist in our records.
</code></pre>
<p>I am getting the same error whether I use <code>to_parquet</code> or <code>read_parquet</code>. Everything works fine if I use something different for the upload and download, like</p>
<pre><code>s3_bucket.put_object(Key=key1, Body=mock_df.to_parquet())
</code></pre>
<p>However, I am not interested in replacing the pandas functions, as that is not possible in my situation, and I need to find a way to mock S3 while using them. Is there a way to make <code>moto</code> work with these functions?</p>
<p>EDIT:
I am using these versions</p>
<pre><code>boto3 1.28.64
botocore 1.31.64
moto 5.0.3
</code></pre>
|
<python><pandas><mocking><pytest><moto>
|
2024-01-30 14:43:10
| 1
| 355
|
user430953
|
77,907,131
| 534,238
|
Python match statement on types
|
<p>I normally do not use match statements, but I had an opportunity.</p>
<p>I have a variable which can be of different data <em>types</em>, and I want to match on the type to do different things. This toy example captures the essence of what I want to do:</p>
<pre class="lang-py prettyprint-override"><code>lang = "Python"
match type(lang):
case str:
print("It is a string.")
case _:
print("It is something else.")
SyntaxError: name capture 'str' makes remaining patterns unreachable
</code></pre>
<p>My understanding of the match statement and <a href="https://stackoverflow.com/a/67525259/534238">this SO question/answer</a> made me think that I could not use a <em>variable</em>, but I am not using a variable here, I am using a <em>type</em>. It <em>seems like</em> <code>str</code> is being treated like a variable.</p>
<p>I just tested it, and in fact I <em>can</em> overwrite <code>str</code> (I assumed it was a keyword):</p>
<pre class="lang-py prettyprint-override"><code>>>> str = 7
>>> str
7
>>> str(14)
TypeError: 'int' object is not callable
</code></pre>
<p>OK. So that (likely) explains what is going on. But how can I avoid it? How can I tell <code>match</code> to not smash over <code>str</code>? Is this something that just cannot be done in <code>match</code> statements?</p>
<p>I have plenty of other ways to solve the problem. I was just surprised that the first time I tried to use <code>match</code>, somehow I already "broke" it and found a weakness.</p>
|
<python><match>
|
2024-01-30 14:32:10
| 1
| 3,558
|
Mike Williamson
|
77,906,985
| 10,590,609
|
Access Airflow task arguments in the on_failure_callback function
|
<p>I need a rollback operation to happen when a certain airflow task fails. To know what to rollback I need access to the task arguments inside the rollback function. The rollback function is passed to the <code>on_failure_callback</code> argument when defining the task.</p>
<p>Take this as a simplified example:</p>
<pre class="lang-py prettyprint-override"><code>from airflow.decorators import dag, task
from airflow.utils.dates import days_ago
def rollback(context: dict):
print("How do I access the 'task_argument' value?")
@task(on_failure_callback=rollback)
def example_task(task_argument: str) -> None:
assert False
@dag(
schedule_interval=None,
start_date=days_ago(1),
)
def example_dag() -> None:
example_task("the task argument's value.")
example_dag()
</code></pre>
<p>How do I get the value that was passed to the <code>example_task</code> inside the <code>on_failure_callback</code>? I'm sure it's hiding in the <code>context</code> variable but I have not been able to find clear documentation on what is inside <code>context</code>. <code>context</code> does contain a field <code>params</code> but that does not contain <code>task_argument</code>.</p>
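<p>One place to look, sketched below without a live Airflow run: the callback context carries the operator under the <code>"task"</code> key, and for TaskFlow-decorated tasks the rendered call arguments sit on its <code>op_args</code>/<code>op_kwargs</code> attributes. The fake operator class here only stands in for what Airflow would put in the context.</p>

```python
# Hedged sketch: read the task's call arguments from the callback context
def rollback(context: dict):
    task = context["task"]
    op_args = list(getattr(task, "op_args", ()) or ())
    op_kwargs = dict(getattr(task, "op_kwargs", {}) or {})
    print(f"rolling back, args={op_args} kwargs={op_kwargs}")
    return op_args, op_kwargs

# Stand-in for the decorated operator Airflow places under context["task"]
class FakeDecoratedOperator:
    op_args = ("the task argument's value.",)
    op_kwargs = {}

args, kwargs = rollback({"task": FakeDecoratedOperator()})
```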
|
<python><airflow><rollback>
|
2024-01-30 14:10:06
| 1
| 332
|
Izaak Cornelis
|
77,906,793
| 14,669,597
|
Celery worker does not start running when using the geventpool and starting from within Python
|
<p>I have a Celery system running with 4 queues, using RabbitMQ as the broker. Currently this system runs with the gevent worker pool, and it works fine when starting the worker from the command line like this:</p>
<p><code>celery -A app worker -Q celery -P gevent -c 100</code></p>
<p>Now I want to start this from Python instead of from the command line, because I want to make the queues, worker pool and concurrency configurable from Python. When starting the Celery workers using:</p>
<pre class="lang-py prettyprint-override"><code>worker = app.Worker(
name=settings.APP + "_" + pool + "_" + str(concurrency) + "_@%h",
queues=queues,
pool=pool,
concurrency=concurrency,
loglevel="INFO",
task_events=True,
)
worker.start()
</code></pre>
<p>I get a normally running worker when using <code>pool=prefork</code>, but with <code>pool=gevent</code> the worker freezes after starting up. It looks like it receives some tasks, but does not start them. I suspect this has to do with celery using the gevent monkeypatching, which monkeypatches threads, but I don't know for sure, and also don't know how to disable that.
I can get gevent workers to run from python using <code>subprocess.Popen(cli_args)</code> instead of <code>worker.start()</code>, but since this will be running in Kubernetes in production, I want the worker to be the main process for health checks and the like.</p>
<p>Does anybody know how to start a Celery Worker process, with the gevent worker pool, directly from python?
EDIT: For now I am using <code>subprocess.call(cli_args)</code> to get a synchronous call to the worker process, but I still would like to know why <code>worker.start()</code> does not work.</p>
|
<python><python-3.x><celery><gevent>
|
2024-01-30 13:42:02
| 0
| 403
|
Alfred Rodenboog
|
77,906,747
| 22,221,987
|
Sync Python threads, share their state, and close all threads at once
|
<p>My project is an API for some hardware mechanics. Users can send commands via a socket and receive responses from the hardware.<br />
For this purpose I use two sockets: one for receiving responses (let's name it <code>RTD</code>) and one for sending commands (<code>CMDS</code>).</p>
<p>I have 3 classes. <code>ReceiverRTD</code>, <code>API</code> and <code>Controller</code>.</p>
<p><code>ReceiverRTD</code> is an inherited from <code>threading.Thread</code> class. It connects to the <code>RTD</code> socket and receives data-packages with high frequency. Class stores this data into its class field and overwrites it every time new package arrives.</p>
<p><code>API</code> - is the main class, runs in main thread. User can create it's own script (where <code>API</code> class object will be created) and call <code>API</code> methods taking into account the logic of his script.</p>
<p><code>Controller</code> - is the commands-sending class. It connects to <code>CMDS</code> socket and has a couple of methods for package sending. It also runs in main thread and its methods is going to be called from <code>API</code>.</p>
<p>It turns out that <code>API</code> is an user interface. <code>Controller</code> is socket and hardware interface. <code>ReceiverRTD</code> is a support class, providing some data for <code>API</code> logic.</p>
<p>So, the logic is described below (sorry, stackoverflow had a server error when i was writing this question, so, diagram is posted manually, on imgur): <a href="https://imgur.com/a/6kzmshI" rel="nofollow noreferrer">https://imgur.com/a/6kzmshI</a></p>
<p>So, if exception happens in <code>Controller</code> or <code>API</code> (all in main thread) I can shutdown <code>ReceiverRTD</code> and finish the program correctly.<br />
Just setup some logic in <code>ReceiverRTD</code> receiving loop with <code>while threading.main_thread().is_alive():</code> and call <code>sys.exit</code> in main thread.</p>
<p>But if an exception happens in <code>ReceiverRTD</code>, I can't just call <code>sys.exit()</code>, because <code>ReceiverRTD</code> is a child thread and calling <code>sys.exit()</code> will close that thread only, when I need to close the whole program, as in the previous case. It is also not possible to create an endless loop in <code>API</code> or <code>Controller</code> to check the <code>ReceiverRTD</code> status, because the user defines their own script logic in the main thread too and can block the main thread for a long time.</p>
<p>I've found that I can use <code>os._exit()</code>, which kills the whole process (which is what I actually wanted). But this is not best practice, as I understood from previous Stack Overflow questions. It is also a bit dirty to use, because it's a "private" method. And if we call this method, it would not be possible to use any <code>KeyBoardInterrupt</code> custom handlers or any pre-exit methods.</p>
<p>So, how can I sync threads in both directions and actually "terminate" the program correctly?</p>
<p><em><strong>P.S. I do not attach the code, because this is mostly the architectural question.</strong></em></p>
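<p>One alternative to <code>os._exit()</code>, sketched below: the child thread can call <code>_thread.interrupt_main()</code> on a fatal error, which raises <code>KeyboardInterrupt</code> in the main thread, so <code>finally</code> blocks and custom handlers still get a chance to run before the program exits.</p>

```python
import _thread
import threading
import time

def receiver():
    # Stands in for ReceiverRTD's receiving loop hitting a fatal error
    try:
        raise ConnectionError("RTD socket dropped")
    except ConnectionError:
        _thread.interrupt_main()   # surface the failure in the main thread

outcome = None
t = threading.Thread(target=receiver)
try:
    t.start()
    for _ in range(200):           # stands in for the user's script logic
        time.sleep(0.05)
    outcome = "finished"
except KeyboardInterrupt:
    outcome = "aborted"            # pre-exit cleanup can run here
finally:
    t.join()
```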
|
<python><python-3.x><multithreading><sockets><multiprocessing>
|
2024-01-30 13:35:54
| 1
| 309
|
Mika
|
77,906,670
| 8,040,369
|
Remove duplicates in DF and convert into a JSON obj in python
|
<p>I have a df something like below</p>
<pre><code>Name Series
=============================
A A1
B B1
A A2
A A1
B B2
</code></pre>
<p>I need to convert the series to a list which should be assigned to each Name like a dict or json obj as something like below</p>
<pre><code>{
"A": ["A1", "A2"],
"B": ["B1", "B2"]
}
</code></pre>
<p>So far I have tried using <code>groupby</code>, but it just groups everything into a separate structure:</p>
<pre><code>test = df.groupby("Series")[["Name"]].apply(lambda x: x)
</code></pre>
<p>The above code gives an output as a df like</p>
<pre><code> Series
Name
A 0 A1
2 A2
3 A1
B 1 B1
4 B2
</code></pre>
<p>Any help is much appreciated</p>
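<p>A possible approach, sketched below: drop the duplicate <code>(Name, Series)</code> pairs first, then group by <code>Name</code> (rather than <code>Series</code>), collect each group's values into a list, and convert the result to a plain dict.</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["A", "B", "A", "A", "B"],
    "Series": ["A1", "B1", "A2", "A1", "B2"],
})

# drop_duplicates removes the repeated A/A1 row; groupby("Name") then
# gathers each name's remaining series values in order of appearance
result = (
    df.drop_duplicates()
      .groupby("Name")["Series"]
      .apply(list)
      .to_dict()
)
```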
<p>Thanks,</p>
|
<python><pandas>
|
2024-01-30 13:22:39
| 2
| 787
|
SM079
|
77,906,586
| 5,013,084
|
Separate axis labels on two axis in altair
|
<p>I am trying to use this code as an example to separate the axis labels for the y axis on the left and on the right side together.</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
import pandas as pd
df = pd.DataFrame(
{
"unique_ID": ["a", "b", "c", "d", "e", "f"],
"group": ["Group 1", "Group 1", "Group 3", "Group 3", "Group 3", "Group 2"],
"value": [20, 10, 30, 50, 40, 60]
}
)
df
alt.Chart(df).mark_bar().encode(
y="unique_ID",
x="value",
color="group",
)
</code></pre>
<p>What I would like to achieve is to use on the left side of the plot (i.e. where the axis labels for the variable <code>unique_ID</code> are) the labels of the variable <code>group</code> (without collapsing them into three bars) and on the right side of the plot (where I would usually put a secondary y axis) the values of <code>unique_ID</code>. Is it possible to do this in <code>altair</code>?</p>
<p>I understand that this is not really a secondary y axis because I am not using a different scale on both axes but instead would like to use that for styling purposes.</p>
<p>Thank you!</p>
<p>EDIT: @joelostblom thank you for your comment, I only used group in the color legend to indicate the other variable. What I would like to have is the following:</p>
<p>(sorry for still not providing a picture, upload is not working for ... some reasons I cannot tell due to missing error messages)</p>
<pre><code> ----------
Group 1 | xx | a
Group 1 | x | b
Group 3 | xxx | c
Group 3 | xxxxx | d
Group 3 | xxxx | e
Group 2 | xxxxxx | f
----------
left plotting right
y-axis area y-axis
label label
</code></pre>
<p>So if you imagine this ascii art (who am I kidding) as a barplot, the x-es in the middle indicate the lengths of the bars. a-f on the right side are the axis labels of the variable <code>unique_ID</code> on the right side, and in addition I would like to have the labels also on a secondary vertical axis (on the left side) for the variable <code>group</code>.</p>
<p>I hope it is clearer now, and thank you very much for your help.</p>
|
<python><pandas><dataframe><altair>
|
2024-01-30 13:10:27
| 0
| 2,402
|
Revan
|
77,906,584
| 3,433,875
|
Set aspect ratio in matplotlib 3.8 3D plots
|
<p>I am trying to plot this:
<a href="https://100.datavizproject.com/data-type/viz4/" rel="nofollow noreferrer">https://100.datavizproject.com/data-type/viz4/</a></p>
<p><a href="https://i.sstatic.net/3vUfl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3vUfl.png" alt="enter image description here" /></a></p>
<p>Using matplotlib.</p>
<p>I have got this far:</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import pandas as pd
colors = [ "#2B314D", "#A54836","#5375D4", ]
data = {
"year": [2004, 2022, 2004, 2022, 2004, 2022],
"countries" : ["Sweden", "Sweden", "Denmark", "Denmark", "Norway", "Norway"],
"sites": [13,15,5,8,4,10]
}
df= pd.DataFrame(data)
df['year_lbl'] ="'"+df['year'].astype(str).str[-2:].astype(str)
nr_countries = df.countries.nunique()
nr_years = df.year.nunique()
years = df.year.unique()
x= [1,1,1]
y=[0,0,0]
z= [0,0,0]
dx= [1,1,1]
dy= [1,1,1]
fig = plt.figure(figsize=(15,10))
for i,yrs in zip(range(0,nr_years), years):
# Add the i+1 subplot of the x.shape[0] x 1 grid
ax = fig.add_subplot(1,nr_years, i+1, projection='3d')
temp_df = df[df.year == yrs]
dz = temp_df.sites.tolist()
_zpos = z # the starting zpos for each bar
for i, c in zip(range(nr_countries), colors):
ax.bar3d(x,y,_zpos,dx,dy,dz[i],color= c)
_zpos += np.array(dz[i]) # add the height of each bar to know where to start the next
ax.set_axis_off()
</code></pre>
<p><a href="https://i.sstatic.net/YzbVK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YzbVK.png" alt="enter image description here" /></a></p>
<p>But I am trying to stretch it, to get the same effect as in the link above, and I just can't get it right.</p>
<p>I upgraded to 3.8 matplotlib to use:</p>
<pre><code>ax.set_ylim(0,15)
ax.set_xlim(0,15)
ax.set_zlim(0,25)
ax.set_aspect('equal', adjustable='datalim')
</code></pre>
<p>but I dont get the same effect. What I am doing wrong?
<a href="https://i.sstatic.net/xfHno.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xfHno.png" alt="enter image description here" /></a></p>
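<p>A sketch of one alternative worth trying: <code>Axes3D.set_box_aspect</code> sets the drawn x:y:z proportions of the 3D box directly, independent of the data limits, which is usually what stretching a single tall column requires.</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.bar3d([1], [0], [0], [1], [1], [13], color="#2B314D")
ax.set_box_aspect((1, 1, 3))  # draw the z direction three times as tall
ax.set_axis_off()
```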
|
<python><matplotlib><matplotlib-3d>
|
2024-01-30 13:10:05
| 1
| 363
|
ruthpozuelo
|
77,906,562
| 2,662,302
|
Polars calculate percentile
|
<p>I have a polars dataframe that has a column with dates and others with prices, and I want to calculate the percentile of each one in a window of 252 x 3 observations.</p>
<p>For that I'm doing this:</p>
<pre class="lang-py prettyprint-override"><code>
prices = prices.sort(by=["date"])
rank_cols = list(set(prices.columns).difference("date"))
percentiles = (
prices.sort(by=["date"])
.set_sorted("date")
.group_by_dynamic(
index_column=["date"], every="1i", start_by="window", period="756i"
)
.agg(
[
(pl.col(col).rank() * 100.0 / pl.col(col).count()).alias(
f"{col}_percentile"
)
for col in rank_cols
]
)
)
</code></pre>
<p>But the exception is throwing is:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
  File "<string>", line 6, in <module>
  File "/usr/local/lib/python3.10/site-packages/polars/dataframe/group_by.py", line 1047, in agg
    self.df.lazy()
  File "/usr/local/lib/python3.10/site-packages/polars/lazyframe/frame.py", line 1706, in collect
    return wrap_df(ldf.collect())
polars.exceptions.InvalidOperationError: argument in operation 'group_by_dynamic' is not explicitly sorted
- If your data is ALREADY sorted, set the sorted flag with: '.set_sorted()'.
- If your data is NOT sorted, sort the 'expr/series/column' first.
</code></pre>
<p>In the code I already do as the suggestion indicates, but the exception persists.</p>
<p>EDIT:</p>
<p>Made some changes according to @Hericks' suggestion.</p>
<pre><code>import polars as pl
import pandas as pd
from datetime import datetime, timedelta

# Generate 10 dates starting from today
start_date = datetime.now().date()
date_list = [start_date + timedelta(days=i) for i in range(10)]

# Generate random prices for each date and column
data = {
    'date': date_list,
    'asset_1': [float(f"{i+1}.{i+2}") for i in range(10)],
    'asset_2': [float(f"{i+2}.{i+3}") for i in range(10)],
    'asset_3': [float(f"{i+3}.{i+4}") for i in range(10)],
}

prices = pl.DataFrame(data)
prices = prices.cast({"date": pl.Date})

rank_cols = list(set(prices.columns).difference("date"))

percentiles = (
    prices.sort(by=["date"])
    .set_sorted("date")
    .group_by_dynamic(
        index_column="date", every="1i", start_by="window", period="4i"
    )
    .agg(
        [
            (pl.col(col).rank() * 100.0 / pl.col(col).count()).alias(
                f"{col}_percentile"
            )
            for col in rank_cols
        ]
    )
)
</code></pre>
<p>Now I'm getting</p>
<pre><code>pyo3_runtime.PanicException: attempt to divide by zero
</code></pre>
<p>EDIT 2:</p>
<p>The problem is the use of dates: I replaced the dates with integers and things started working. (Also added <code>first</code> to take the first register.)</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

int_list = [i+1 for i in range(6)]

# Generate random prices for each index and column
data = {
    'int_index': int_list,
    'asset_1': [1.1, 3.4, 2.6, 4.8, 7.4, 3.2],
    'asset_2': [4, 7, 8, 3, 4, 5],
    'asset_3': [1, 3, 10, 20, 2, 4],
}

# Convert the dictionary to a Polars DataFrame
prices = pl.DataFrame(data)

rank_cols = list(set(prices.columns).difference("int_index"))

percentiles = (
    prices.sort(by="int_index")
    .set_sorted("int_index")
    .group_by_dynamic(
        index_column="int_index", every="1i", start_by="window", period="4i"
    )
    .agg(
        [
            (pl.col(col).rank().first() * 100.0 / pl.col(col).count()).alias(
                f"{col}_percentile"
            )
            for col in rank_cols
        ]
    )
)
</code></pre>
<p>EDIT 3:</p>
<p>The idea is: given index i, take the values at indices i, i+1, i+2 and i+3, and calculate the percentile rank of register i with respect to those four values.</p>
<p>For example, for asset_1 at the first index (1), the sample (the register plus the next three) is:</p>
<p>1.1, 3.4, 2.6, 4.8 so the percentile of the first register is 25.</p>
<p>For asset_1 at the second index (2), the sample is:</p>
<p>3.4, 2.6, 4.8 and 7.4 so the percentile is 50.</p>
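<p>For reference, the intended calculation from this edit can be sketched in plain Python (a forward-looking window of 4, percentile rank of the window's first element); this is an illustrative stand-in, not Polars code:</p>

```python
vals = [1.1, 3.4, 2.6, 4.8, 7.4, 3.2]  # asset_1 from the example
window = 4

percentiles = []
for i in range(len(vals)):
    sample = vals[i:i + window]               # value at i plus the next ones
    rank = sorted(sample).index(vals[i]) + 1  # 1-based rank within the window
    percentiles.append(rank * 100.0 / len(sample))

print(percentiles[:2])  # [25.0, 50.0]
```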
|
<python><dataframe><python-polars>
|
2024-01-30 13:05:27
| 1
| 505
|
rlartiga
|
77,906,558
| 5,013,084
|
Sorting barplot in altair by multiple columns
|
<p>I am trying to sort a dataframe by multiple columns (more precisely, the first column is a string and should be sorted alphabetically and the second column is numeric and should be sorted from lowest to highest).</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
import pandas as pd

df = pd.DataFrame(
    {
        "unique_ID": ["a", "b", "c", "d", "e", "f"],
        "group": ["Group 1", "Group 1", "Group 3", "Group 3", "Group 3", "Group 2"],
        "value": [20, 10, 30, 50, 40, 60]
    }
)
df

alt.Chart(df).mark_bar().encode(
    y="unique_ID",
    x="value",
    color="group",
)
</code></pre>
<p>(I would like to upload the result I am currently getting, but the image upload is failing, I will edit the post as soon as I can upload)</p>
<p>What I would like to achieve is to sort this barplot by two variables, first alphabetically by group (i.e. Group 1, Group 2, Group 3) and second, by value from lowest to highest.</p>
<p>The order I would like to achieve is b - a - f - c - e - d (I hope I did not make any mistake here).</p>
<p>Note: Of course, this is just a simple example in order to understand how to sort by multiple variables. I understand that I can sort via <code>EncodingSortField</code></p>
<p>e.g.</p>
<pre class="lang-py prettyprint-override"><code>y=alt.Y(
    "unique_ID",
    sort=alt.EncodingSortField(field="group", op="min")
)
</code></pre>
<p>but as far as I know, this only works for one column.</p>
<p>Thank you very much!</p>
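<p>One workaround (a sketch, not necessarily the only idiomatic way): precompute the multi-column order outside Altair and pass the resulting list to <code>sort=</code>, since <code>sort</code> also accepts an explicit list of values. Computed here in plain Python on the example data:</p>

```python
ids = ["a", "b", "c", "d", "e", "f"]
groups = ["Group 1", "Group 1", "Group 3", "Group 3", "Group 3", "Group 2"]
values = [20, 10, 30, 50, 40, 60]

# sort alphabetically by group, then ascending by value
rows = sorted(zip(ids, groups, values), key=lambda r: (r[1], r[2]))
order = [uid for uid, _, _ in rows]
print(order)  # ['b', 'a', 'f', 'c', 'e', 'd']

# then in the chart:
# y=alt.Y("unique_ID", sort=order)
```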
|
<python><altair>
|
2024-01-30 13:04:43
| 1
| 2,402
|
Revan
|
77,906,456
| 18,880,790
|
How to improve image quality before reading barcode
|
<p>I am using <a href="https://pypi.org/project/zxing-cpp/" rel="nofollow noreferrer">zxing-cpp</a> library for reading barcode from image.</p>
<pre><code>import cv2
import zxingcpp

img = cv2.imread('test.jpg')
results = zxingcpp.read_barcodes(img)
for result in results:
    print('Found barcode:'
          f'\n Valid: "{result.valid}"'
          f'\n Text: "{result.text}"'
          f'\n Format: {result.format}'
          f'\n Content: {result.content_type}'
          f'\n Position: {result.position}')
if len(results) == 0:
    print("Could not find any barcode.")
</code></pre>
<p>However, this library is unable to scan this simple barcode from <a href="https://drive.google.com/file/d/1ddR_QCp3pIPjAk3RxEOF3-DDQiHP4g-U/view?usp=sharing" rel="nofollow noreferrer">image</a>.</p>
<p>How can I process the image and improve its quality in order to read the barcode?</p>
<p>I used the answer to this <a href="https://stackoverflow.com/questions/50497945/how-to-reliably-detect-a-barcodes-4-corners">question</a> as a guideline but was still unsuccessful, so I am asking here and seeking help.</p>
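<p>Typical preprocessing before retrying the decoder is to binarize and upscale; the idea is sketched below with plain NumPy on a tiny synthetic strip (in practice you would apply the same steps to the loaded image, e.g. with <code>cv2.threshold</code> and <code>cv2.resize</code>, and then call <code>zxingcpp.read_barcodes</code> again):</p>

```python
import numpy as np

# a tiny synthetic "barcode" strip: dark bars (20) on a light background (220)
img = np.full((8, 16), 220, dtype=np.uint8)
img[:, 2:4] = 20
img[:, 7:9] = 20

# 1) binarize with a global threshold so bars become pure black/white
thresh = img.mean()
binary = np.where(img > thresh, 255, 0).astype(np.uint8)

# 2) upscale 4x with nearest-neighbour so thin bars survive decoding
scaled = np.repeat(np.repeat(binary, 4, axis=0), 4, axis=1)
print(scaled.shape)  # (32, 64)
```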
|
<python><opencv><image-processing><barcode><zxing>
|
2024-01-30 12:45:45
| 1
| 477
|
Coder3000
|
77,906,356
| 1,065,145
|
How to reinstantiate OpenAI ChatCompletion object from string
|
<p>The OpenAI Python API call <code>client.chat.completions.create</code> now returns an object by default. How can I reinstantiate that object after it was persisted to a database using the <code>str(...)</code> method?</p>
<pre><code>responseObj = client.chat.completions.create(...)
# returns ChatCompletion(id=...)
responseObjString = str(responseObj)
</code></pre>
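<p>Note (assumption: openai >= 1.0, where the response objects are Pydantic models): <code>str(responseObj)</code> produces the repr, which is awkward to parse back. Persisting <code>responseObj.model_dump_json()</code> instead, and restoring with <code>ChatCompletion.model_validate_json(...)</code>, round-trips cleanly. The same pattern shown on a hypothetical stand-in model:</p>

```python
from pydantic import BaseModel

# hypothetical stand-in for openai's ChatCompletion (itself a BaseModel)
class ChatCompletionLike(BaseModel):
    id: str
    content: str

obj = ChatCompletionLike(id="chatcmpl-1", content="hi")
payload = obj.model_dump_json()  # persist this string, not str(obj)
restored = ChatCompletionLike.model_validate_json(payload)
print(restored == obj)  # True
```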
|
<python><openai-api>
|
2024-01-30 12:28:43
| 1
| 9,049
|
Denis Kulagin
|
77,906,193
| 3,700,524
|
The behavior of numpy.fromfunction()
|
<p>I was trying to create different arrays using NumPy's <code>fromfunction()</code>. It was working fine until I faced this issue. I tried to make an array of ones using <code>fromfunction()</code> (I know I can create it using <code>ones()</code> and <code>full()</code>), and here is the issue:</p>
<pre><code>array = np.fromfunction(lambda i, j: 1, shape=(2, 2), dtype=float)
print(array)
</code></pre>
<p>Surprisingly, the output of this function is:</p>
<pre><code>1
</code></pre>
<p>whereas it is expected to be:</p>
<pre><code>[[1. 1.]
[1. 1.]]
</code></pre>
<p>When I change the input function by adding zero times <code>i</code>, it works just fine.</p>
<pre><code>array = np.fromfunction(lambda i, j: i*0 + 1, shape=(2, 2), dtype=float)
print(array)
</code></pre>
<p>The output of this code is:</p>
<pre><code>[[1. 1.]
[1. 1.]]
</code></pre>
<p>My main question is: how does <code>fromfunction()</code> actually behave? I passed the same function in two different forms and the output is completely different.</p>
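<p>The documented behaviour is that <code>fromfunction</code> calls the function <em>once</em>, passing arrays of indices, and returns whatever the function returns; a scalar result is passed through unchanged rather than broadcast. A small demonstration:</p>

```python
import numpy as np

calls = []

def f(i, j):
    # i and j are whole index arrays, not scalars
    calls.append((i.shape, j.shape))
    return 1  # a plain scalar is returned as-is, not broadcast to (2, 2)

out = np.fromfunction(f, shape=(2, 2), dtype=float)
print(out)    # 1
print(calls)  # [((2, 2), (2, 2))] - exactly one call
```

<p>That is why <code>i*0 + 1</code> works: the arithmetic on the index array produces a (2, 2) array.</p>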
|
<python><python-3.x><numpy><function>
|
2024-01-30 12:01:48
| 3
| 3,421
|
Mohsen_Fatemi
|
77,905,905
| 2,707,864
|
Setup in python a function of Bessel functions that can be both integrated and differentiated
|
<p>I need to set up a user-defined function <code>ILTintegrand(u, t, r, b, a)</code> for <code>integrand = (j0(ur)*y0(ub) - y0(ur)*j0(ub)) / (j0(ub)*j0(ub) + y0(ub)*y0(ub)) * np.exp(-a*t*u**2) / u</code>,
where j0 and y0 are the Bessel functions of the first and second kind, respectively.</p>
<p>Then I need another user-defined function <code>ILTintegral(t, r, b, a)</code> for the integral of <code>ILTintegrand</code> in <code>u</code>, between 0 and infinity.</p>
<p>But then I also need to differentiate <code>ILTintegral</code> with respect to <code>t</code> and <code>r</code>, to check that a partial differential equation is satisfied (see <a href="https://math.stackexchange.com/questions/480235/inverse-laplace-transform-of-bar-p-d-frack-0-sqrts-r-dsk-0-sqrts">this</a>).</p>
<hr>
<p>For the integration, I could do this using in <code>ILTintegrand</code> the Bessel functions from <code>scipy</code> with</p>
<pre><code>from scipy.special import j0, y0
</code></pre>
<p>and then integrating in <code>ILTintegral</code> also with <code>scipy</code> with</p>
<pre><code>from scipy import integrate
intg_pair = scipy.integrate.quad(ILTintegrand, 0, np.inf, args=(t, r, b, a))
</code></pre>
<p>But then I could not differentiate this wrt <code>t</code> or <code>r</code>.</p>
<hr>
<p>When trying to integrate with <code>sympy</code> (so I could later use <code>sympy.diff</code>), using in <code>ILTintegrand</code></p>
<pre><code>from sympy import besselj, bessely
</code></pre>
<p>and then in <code>ILTintegral</code> with</p>
<pre><code>intg = sympy.integrate(integrand, (u, 0, sym.oo))
</code></pre>
<p>the calculation was never completing.</p>
<hr>
<p><strong>What are possible ways of achieving my objective?</strong></p>
<p><strong>Related</strong>:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/75187185/differentiating-a-bessel-function-with-lambda">Differentiating a Bessel function with Lambda</a></li>
<li><a href="https://stackoverflow.com/questions/72876647/bessel-function-integral">Bessel function integral</a></li>
<li><a href="https://stackoverflow.com/questions/41658237/how-to-calculate-derivative-and-integral-of-the-bessel-functions-in-python">How to calculate derivative and integral of the bessel functions in PYTHON</a></li>
</ol>
<p><strong>TL;DR</strong></p>
<p>While debugging the integration with sympy, I tried shorter integrands and changed the limits of integration to [0,1].
I found that</p>
<pre><code>integrand = besselj(0, ur)
f = sym.integrate(integrand, (u, 0, 1))
</code></pre>
<p>works reasonably (though still slower than with scipy, I wouldn't know what happened if I had to evaluate this many times);</p>
<pre><code>integrand = (besselj(0, ur) * bessely(0, ub) - bessely(0, ur) * besselj(0, ub))
f = sym.integrate(integrand, (u, 0, 1))
</code></pre>
<p>throws a long error report with two call stacks of many nested functions related to the computation of <code>meijerg</code>, like</p>
<pre><code>...
ValueError: 0.302500000000000 is not an integer
During handling of the above exception, another exception occurred:
...
ValueError: expecting ints or fractions, got 0.302500000000000 and 1/2
</code></pre>
<p>and from</p>
<pre><code>integrand = (besselj(0, ur) * bessely(0, ub) - bessely(0, ur) * besselj(0, ub)) \
/ ((besselj(0, ub))**2 + (bessely(0, ub))**2)
f = sym.integrate(integrand, (u, 0, 1))
</code></pre>
<p>and with the full integrand, none of the integrations finished.</p>
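<p>One numerical route (a sketch under the stated definitions, not a full solution): keep the scipy version for the integral and differentiate under the integral sign. Only <code>exp(-a*t*u**2)</code> depends on <code>t</code>, so the <code>t</code>-derivative just multiplies the integrand by <code>-a*u**2</code> (which also removes the <code>1/u</code> singularity at <code>u = 0</code>); the <code>r</code>-derivative would analogously use <code>j0'(x) = -j1(x)</code> and <code>y0'(x) = -y1(x)</code>.</p>

```python
import numpy as np
from scipy.special import j0, y0
from scipy.integrate import quad

def ILTintegrand(u, t, r, b, a):
    return ((j0(u*r) * y0(u*b) - y0(u*r) * j0(u*b))
            / (j0(u*b)**2 + y0(u*b)**2)
            * np.exp(-a * t * u**2) / u)

def ILTintegral(t, r, b, a):
    return quad(ILTintegrand, 0, np.inf, args=(t, r, b, a))[0]

def dILTintegral_dt(t, r, b, a):
    # d/dt taken inside the integral: the integrand gains a factor -a*u**2
    def dintegrand(u, t, r, b, a):
        return -a * u**2 * ILTintegrand(u, t, r, b, a)
    return quad(dintegrand, 0, np.inf, args=(t, r, b, a))[0]

t, r, b, a = 1.0, 2.0, 1.0, 1.0
print(ILTintegral(t, r, b, a), dILTintegral_dt(t, r, b, a))
```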
|
<python><numpy><scipy><sympy><bessel-functions>
|
2024-01-30 11:12:27
| 0
| 15,820
|
sancho.s ReinstateMonicaCellio
|
77,905,297
| 8,080,758
|
Extract key, value from C enumeration using regex
|
<p>I have several enum from C source code that look like this :</p>
<pre class="lang-c prettyprint-override"><code>typedef enum
{
    /** comment */
    A = 1,
    /** comment */
    B = 2,
}foo_t;

typedef enum
{
    /** comment */
    A = 1,
    /** comment */
    B = 2,
}bar_t;
</code></pre>
<p>I would like to extract the lines with key/value pairs that are related only to <code>foo_t</code>.</p>
<p>Using this regex <code>(?\w+\s*=\s*\d+,*\s*)+}foo_t</code> I only extract the line <code> B = 2,</code> related to <code>foo_t</code>; the (A,1) pair is missing (a repeated group only keeps its last match).</p>
<p>For the sake of simplicity, the example doesn't show that <code>foo_t</code> can be located in the middle of a file containing several enums (i.e. I need to use the <code>foo_t</code> identifier to select the right block).</p>
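<p>A two-step approach tends to be simpler than a single pattern: first capture the brace-delimited body that is immediately followed by <code>foo_t</code> (using <code>[^{}]*</code>, which cannot leap across other enums, so it works even when <code>foo_t</code> sits in the middle of the file), then pull the pairs out of that body. A sketch with the example source inlined:</p>

```python
import re

src = """
typedef enum
{
    /** comment */
    A = 1,
    /** comment */
    B = 2,
}foo_t;

typedef enum
{
    /** comment */
    A = 1,
    /** comment */
    B = 2,
}bar_t;
"""

# Step 1: grab only the body of the enum that closes with }foo_t
body = re.search(r"\{([^{}]*)\}\s*foo_t", src).group(1)
# Step 2: extract all key/value pairs from that body
pairs = re.findall(r"(\w+)\s*=\s*(\d+)", body)
print(pairs)  # [('A', '1'), ('B', '2')]
```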
|
<python><regex>
|
2024-01-30 09:41:02
| 1
| 1,276
|
Clément
|
77,904,880
| 10,443,817
|
Why mulitple plots are generated in the first tab?
|
<p>I have written a class that uses the "shap" library to compute and plot SHAP feature importance. I have also added functionality to plot the graphs in the same window in different tabs. However, my first tab plots the same graph twice, and the second plot is then also carried over to the other tabs. How can I get rid of the second plot? The figure below shows the two plots created in the first tab.</p>
<p><a href="https://i.sstatic.net/jdcQW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jdcQW.png" alt="enter image description here" /></a></p>
<pre><code>import shap
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from IPython.display import clear_output, display


class SHAPInterpreter:
    """
    A class that builds on top of the SHAP library to compute and plot SHAP values for a LightGBM model.
    """

    def __init__(self, model, X, y, downsample=False, sample_frac=0.2, random_state=None):
        """
        Initialize the SHAPInterpreter.

        Parameters:
            model (lightgbm.LGBMModel): A trained LightGBM model.
            X (pandas.DataFrame): The feature matrix.
            y (pandas.Series): The target vector.
            downsample (bool): Whether to downsample the data.
            sample_frac (float): The fraction of data to use for plotting.
            random_state (int): The random state to use for downsampling.
        """
        # Initialize the JS visualization code (for Jupyter notebooks) and load JS in the notebook environment
        self.model = model
        self.X = X
        self.y = y
        self.explainer = shap.TreeExplainer(model)
        if downsample:
            self.X, _, self.y, _ = train_test_split(self.X, self.y, test_size=sample_frac, stratify=self.y, random_state=random_state)
        self.shap_values = self.explainer.shap_values(self.X)
        self.feature_names = self.X.columns.tolist()

    # Function that plots the SHAP summary plot with the following parameters:
    # max_display: The maximum number of features to display
    # feature_names: The names of the features to display. Works with max_display. Only the features in feature_names will be displayed.
    # plot_type: It can be 'dot' or 'bar'
    # color_bar: Whether to show the color bar
    def summary_plot(self, max_display=10, feature_names=None, plot_type='dot', color_bar=False):
        """Generate a SHAP summary plot."""
        # If feature_names is provided, filter the data and SHAP values
        if feature_names is not None:
            feature_indices = [self.X.columns.get_loc(name) for name in feature_names]
            shap_values = self.shap_values[:, feature_indices]
            X = self.X.iloc[:, feature_indices]
        else:
            shap_values = self.shap_values
            X = self.X
        shap.summary_plot(
            shap_values,
            X,
            max_display=max_display,
            plot_type=plot_type,
            color_bar=color_bar,
            show=False,
            plot_size=(10, 6))
        plt.show()

    # Function that creates an interactive SHAP summary plot with the following interactions:
    # - A slider for the number of features to display
    # - A dropdown with checkboxes for selecting the feature names to display
    # - A button for selecting the type of plot (dot or bar)

    # Function that plots the SHAP dependence plot
    def dependence_plot(self, feature, interaction_index=None, show=True):
        """Generate a SHAP dependence plot."""
        return shap.dependence_plot(feature, self.shap_values, self.X, interaction_index=interaction_index, show=show)

    def interactive_summary_plot2(self):
        """Create an interactive SHAP summary plot."""
        # Create a slider for the number of features
        slider = widgets.IntSlider(
            value=min(10, self.X.shape[1]),
            min=1,
            max=self.X.shape[1],
            step=1,
            description='Number of features:',
        )
        # Create a dropdown for the plot type
        dropdown = widgets.Dropdown(
            options=['dot', 'bar'],
            value='dot',
            description='Plot type:',
        )
        # Create a placeholder for the checkboxes
        checkboxes = {}
        checkboxes_box = widgets.VBox(
            layout=widgets.Layout(overflow_y='scroll'))
        # checkboxes_box = widgets.HBox(layout=widgets.Layout(overflow_x='scroll', width='500px', border='solid 1px'))

        # Create a button for updating the plot
        button = widgets.Button(description='Update plot')
        # Create an output widget to display the plot
        out = widgets.Output()

        # Create a function to update the checkboxes based on the slider value
        # def update_checkboxes(change):
        #     num_features = change['new']
        #     checkboxes.clear()
        #     checkboxes.update({col: widgets.Checkbox(value=(i < num_features), description=col) for i, col in enumerate(self.X.columns[:num_features])})
        #     checkboxes_box.children = [widgets.VBox(list(checkboxes.values()), layout=widgets.Layout(overflow_y='scroll', height='100px', border='solid 1px'))]
        def update_checkboxes(change):
            num_features = change['new']
            checkboxes.clear()
            # Calculate the mean absolute SHAP value for each feature
            mean_shap_values = np.abs(self.shap_values).mean(axis=0)
            # Get the feature names sorted by the mean absolute SHAP value
            sorted_feature_names = self.X.columns[np.argsort(mean_shap_values)[::-1]]
            # Create the checkboxes based on the sorted feature names
            checkboxes.update({col: widgets.Checkbox(value=(i < num_features), description=col) for i, col in enumerate(sorted_feature_names[:num_features])})
            checkboxes_box.children = [
                widgets.Label(value='Select features to display:'),
                widgets.VBox(list(checkboxes.values()),
                             layout=widgets.Layout(overflow_y='scroll', height='150px', border='solid 1px'))]

        # Attach the update_checkboxes function to the slider's value change event
        slider.observe(update_checkboxes, names='value')

        # Create a function to update the plot based on the selected number of features, feature names and plot type
        def update_plot(button):
            with out:
                clear_output(wait=True)
                num_features = slider.value
                plot_type = dropdown.value
                feature_names = [name for name, checkbox in checkboxes.items() if checkbox.value]
                self.summary_plot(max_display=num_features, feature_names=feature_names, plot_type=plot_type, color_bar=True)

        # Attach the update_plot function to the button's click event
        button.on_click(update_plot)
        # Initialize the checkboxes
        update_checkboxes({'new': slider.value})
        # Display the slider, the checkboxes, the dropdown, the button and the output widget
        display(slider, checkboxes_box, dropdown, button, out)


# Instantiate the SHAPInterpreter class
shap_interpreter = SHAPInterpreter(
    modeler.best_model,
    modeler.test_set[0],
    modeler.test_set[1],
    downsample=True,
    sample_frac=0.2,
    random_state=139)

# Verify the shap summary plot function
feature_names = ['DTB_cnt_12mth', 'DTB_cnt_6mth', 'DTB_cnt_4wk']
# shap_interpreter.summary_plot(max_display=2, feature_names=feature_names, plot_type='dot', color_bar=True)

# Verify the interactive shap summary plot function
# shap_interpreter.interactive_summary_plot2()

import ipywidgets as widgets


def add_to_tab(tab, title):
    def decorator(func):
        def wrapper(*args, **kwargs):
            # Check if a tab with the same title already exists
            for i in range(len(tab.children)):
                if tab.get_title(i) == title:
                    # If it does, use the existing tab
                    break
            else:
                # If it doesn't, create a new tab
                tab.children += (widgets.Output(),)
                tab.set_title(len(tab.children) - 1, title)
                # Set i to the index of the new tab
                i = len(tab.children) - 1
            with tab.children[i]:
                func(*args, **kwargs)
        return wrapper
    return decorator


def run_functions_in_tabs(func_dict, tab=None):
    if tab is None:
        tab = widgets.Tab()
        display(tab)
    for title, func_info in func_dict.items():
        func = func_info.get('func')
        args = func_info.get('args', [])
        kwargs = func_info.get('kwargs', {})
        # Use the add_to_tab decorator factory to call the function in a new tab
        decorated_func = add_to_tab(tab, title)(func)
        decorated_func(*args, **kwargs)
    # return tab widget to reuse in other cells
    # return tab


# Create a dictionary of methods and arguments
func_dict = {
    'Summary Plot': {'func': shap_interpreter.interactive_summary_plot2},
    'Dependence Plot': {'func': shap_interpreter.dependence_plot, 'args': ['DTB_cnt_8wk']},
    'ROC Curve': {'func': modeler.plot_roc_curve}
}

# Run the methods in new tabs
run_functions_in_tabs(func_dict)
</code></pre>
|
<python><jupyter-notebook><jupyter><ipywidgets>
|
2024-01-30 08:32:38
| 0
| 4,125
|
exan
|
77,904,784
| 6,510,273
|
Fix first line on jupyter notebook output
|
<p>I use this great code to show pyspark dataframes in jupyter in a nice format:</p>
<pre><code>from IPython.display import display, HTML
display(HTML("<style>pre { white-space: pre !important; }</style>"))
</code></pre>
<p>Thanks to user <a href="https://stackoverflow.com/users/3857460/eric-le-fort">eric-le-fort</a></p>
<p><strong>Question</strong>:</p>
<p>How can I freeze the first row so that I can still see my column names with very long tables?
Thanks a lot</p>
<p>P.S. If you also happen to know how to keep the horizontal scrollbar at the bottom of the screen instead of at the bottom of the output, that would be great.</p>
|
<python><dataframe><pyspark><jupyter-notebook><html-table>
|
2024-01-30 08:16:23
| 0
| 2,177
|
Florida Man
|
77,904,769
| 4,509,609
|
Fail to import torchtext KeyError: 'SP_DIR'
|
<p>I failed to import torchtext with the following error. I tried it with a fresh conda env install (under a different python version) and still got the same issue.</p>
<p>Originally I was able to use torchtext (I remember installing it from pip) in an env with Python 3.11, but then it raised an error in the dataset module, so I updated torchtext with pip and started getting a kernel crash on the pytorch import. I then uninstalled and reinstalled the pytorch and torchtext packages from different sources (conda or pip) and couldn't fix the issue. Even a fresh conda env using Python 3.10 raised the same error. I don't know what is messed up.</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[3], line 1
----> 1 import torchtext
File ~/miniconda3/envs/ml2/lib/python3.10/site-packages/torchtext/__init__.py:6
3 from torch.hub import _get_torch_home
5 # the following import has to happen first in order to load the torchtext C++ library
----> 6 from torchtext import _extension # noqa: F401
8 _TEXT_BUCKET = "https://download.pytorch.org/models/text/"
10 _CACHE_DIR = os.path.expanduser(os.path.join(_get_torch_home(), "text"))
File ~/miniconda3/envs/ml2/lib/python3.10/site-packages/torchtext/_extension.py:7
4 import torch
5 from torchtext._internal import module_utils as _mod_utils
----> 7 _LIB_DIR = Path(os.environ["SP_DIR"]) / "torch" / "lib"
10 def _get_lib_path(lib: str):
11 suffix = "pyd" if os.name == "nt" else "so"
File ~/miniconda3/envs/ml2/lib/python3.10/os.py:680, in _Environ.__getitem__(self, key)
677 value = self._data[self.encodekey(key)]
678 except KeyError:
679 # raise KeyError with the original key value
--> 680 raise KeyError(key) from None
681 return self.decodevalue(value)
KeyError: 'SP_DIR'
</code></pre>
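<p>(A hedged workaround, not a verified fix: the traceback shows this torchtext build reading the conda-build variable <code>SP_DIR</code> at import time, which normally exists only while conda is building a package. Pointing it at the environment's site-packages before the import, so that <code>$SP_DIR/torch/lib</code> resolves to the installed torch libraries, may unblock the import; reinstalling pytorch and torchtext as a matching pair from a single channel is the cleaner fix.)</p>

```python
import os
import sysconfig

# SP_DIR normally exists only during a conda build; point it at this
# environment's site-packages so Path(os.environ["SP_DIR"]) / "torch" / "lib"
# resolves to the installed torch libraries
os.environ.setdefault("SP_DIR", sysconfig.get_paths()["purelib"])
print(os.environ["SP_DIR"])

# import torchtext  # should now get past the KeyError if the libs are present
```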
<pre><code># packages in environment at /Users/cecilia/miniconda3/envs/ml2:
#
# Name Version Build Channel
annotated-types 0.6.0 pyhd8ed1ab_0 conda-forge
appnope 0.1.3 pyhd8ed1ab_0 conda-forge
asttokens 2.4.1 pyhd8ed1ab_0 conda-forge
brotli-python 1.1.0 py310h9e9d8ca_1 conda-forge
bzip2 1.0.8 h10d778d_5 conda-forge
ca-certificates 2023.11.17 h8857fd0_0 conda-forge
catalogue 2.0.10 py310h2ec42d9_0 conda-forge
certifi 2023.11.17 pyhd8ed1ab_0 conda-forge
charset-normalizer 3.3.2 pyhd8ed1ab_0 conda-forge
click 8.1.7 unix_pyh707e725_0 conda-forge
cloudpathlib 0.16.0 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
comm 0.2.1 pyhd8ed1ab_0 conda-forge
confection 0.1.4 py310h1cef2ca_0 conda-forge
cymem 2.0.8 py310h9e9d8ca_1 conda-forge
cython-blis 0.7.10 py310hf0b6da5_2 conda-forge
debugpy 1.8.0 py310h9e9d8ca_1 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
double-conversion 3.3.0 he965462_0 conda-forge
exceptiongroup 1.2.0 pyhd8ed1ab_2 conda-forge
executing 2.0.1 pyhd8ed1ab_0 conda-forge
filelock 3.13.1 pyhd8ed1ab_0 conda-forge
fsspec 2023.12.2 pyhca7485f_0 conda-forge
gmp 6.3.0 h93d8f39_0 conda-forge
gmpy2 2.1.2 py310hb691cb2_1 conda-forge
icu 73.2 hf5e326d_0 conda-forge
idna 3.6 pyhd8ed1ab_0 conda-forge
importlib-metadata 7.0.1 pyha770c72_0 conda-forge
importlib_metadata 7.0.1 hd8ed1ab_0 conda-forge
ipykernel 6.29.0 pyh3cd1d5f_0 conda-forge
ipython 8.20.0 pyh707e725_0 conda-forge
jedi 0.19.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.3 pyhd8ed1ab_0 conda-forge
joblib 1.3.2 pyhd8ed1ab_0 conda-forge
jupyter_client 8.6.0 pyhd8ed1ab_0 conda-forge
jupyter_core 5.7.1 py310h2ec42d9_0 conda-forge
langcodes 3.3.0 pyhd8ed1ab_0 conda-forge
libabseil 20230802.1 cxx17_h048a20a_0 conda-forge
libblas 3.9.0 21_osx64_openblas conda-forge
libcblas 3.9.0 21_osx64_openblas conda-forge
libcxx 16.0.6 hd57cbcb_0 conda-forge
libffi 3.4.2 h0d85af4_5 conda-forge
libgfortran 5.0.0 13_2_0_h97931a8_2 conda-forge
libgfortran5 13.2.0 h2873a65_2 conda-forge
libhwloc 2.9.3 default_h24e0189_1009 conda-forge
libiconv 1.17 hd75f5a5_2 conda-forge
liblapack 3.9.0 21_osx64_openblas conda-forge
libopenblas 0.3.26 openmp_hfef2a42_0 conda-forge
libprotobuf 4.24.4 hc4f2305_0 conda-forge
libre2-11 2023.06.02 h4694dbf_0 conda-forge
libsentencepiece 0.1.99 ha269934_5 conda-forge
libsodium 1.0.18 hbcb3906_1 conda-forge
libsqlite 3.44.2 h92b6c6a_0 conda-forge
libtorch 2.1.0 cpu_mkl_hc49ff94_103 conda-forge
libutf8proc 2.8.0 hb7f2c08_0 conda-forge
libuv 1.46.0 h0c2f820_0 conda-forge
libxml2 2.12.4 hc0ae0f7_1 conda-forge
libzlib 1.2.13 h8a1eda9_5 conda-forge
llvm-openmp 17.0.6 hb6ac08f_0 conda-forge
markdown-it-py 3.0.0 pyhd8ed1ab_0 conda-forge
markupsafe 2.1.4 py310hb372a2b_0 conda-forge
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
mdurl 0.1.2 pyhd8ed1ab_0 conda-forge
mkl 2023.2.0 h54c2260_50500 conda-forge
mpc 1.3.1 h81bd1dd_0 conda-forge
mpfr 4.2.1 h0c69b56_0 conda-forge
mpmath 1.3.0 pyhd8ed1ab_0 conda-forge
murmurhash 1.0.10 py310h9e9d8ca_1 conda-forge
ncurses 6.4 h93d8f39_2 conda-forge
nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge
networkx 3.2.1 pyhd8ed1ab_0 conda-forge
nltk 3.8.1 pyhd8ed1ab_0 conda-forge
numpy 1.26.3 py310h4bfa8fc_0 conda-forge
openssl 3.2.0 hd75f5a5_1 conda-forge
packaging 23.2 pyhd8ed1ab_0 conda-forge
parso 0.8.3 pyhd8ed1ab_0 conda-forge
pathy 0.10.3 py310hecd8cb5_0
pexpect 4.9.0 pyhd8ed1ab_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pip 23.3.2 pyhd8ed1ab_0 conda-forge
platformdirs 4.1.0 pyhd8ed1ab_0 conda-forge
preshed 3.0.9 py310h9e9d8ca_1 conda-forge
prompt-toolkit 3.0.42 pyha770c72_0 conda-forge
psutil 5.9.8 py310hb372a2b_0 conda-forge
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pydantic 2.6.0 pyhd8ed1ab_0 conda-forge
pydantic-core 2.16.1 py310h54baaa9_0 conda-forge
pygments 2.17.2 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.10.13 h00d2728_1_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python_abi 3.10 4_cp310 conda-forge
pytorch 2.1.0 cpu_mkl_py310h1822dd0_103 conda-forge
pyzmq 25.1.2 py310h6b67f7f_0 conda-forge
re2 2023.06.02 hd34609a_0 conda-forge
readline 8.2 h9e318b2_1 conda-forge
regex 2023.12.25 py310hb372a2b_0 conda-forge
requests 2.31.0 pyhd8ed1ab_0 conda-forge
revtok 0.0.3.1 pyhd8ed1ab_0 conda-forge
rich 13.7.0 pyhd8ed1ab_0 conda-forge
sacremoses 0.0.53 pyhd8ed1ab_0 conda-forge
setuptools 69.0.3 pyhd8ed1ab_0 conda-forge
shellingham 1.5.4 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sleef 3.5.1 h6db0672_2 conda-forge
smart_open 6.4.0 pyhd8ed1ab_0 conda-forge
spacy 3.7.2 py310h65d09f4_0 conda-forge
spacy-legacy 3.0.12 pyhd8ed1ab_0 conda-forge
spacy-loggers 1.0.5 pyhd8ed1ab_0 conda-forge
srsly 2.4.8 py310h9e9d8ca_1 conda-forge
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
sympy 1.12 pypyh9d50eac_103 conda-forge
tbb 2021.11.0 h7728843_1 conda-forge
thinc 8.2.2 py310h076e4b7_0 conda-forge
tk 8.6.13 h1abcd95_1 conda-forge
torch 2.1.0.post103 pypi_0 pypi
torchtext 0.15.2 py310h5de3785_4 conda-forge
tornado 6.3.3 py310h6729b98_1 conda-forge
tqdm 4.66.1 pyhd8ed1ab_0 conda-forge
traitlets 5.14.1 pyhd8ed1ab_0 conda-forge
typer 0.9.0 pyhd8ed1ab_0 conda-forge
typing-extensions 4.9.0 hd8ed1ab_0 conda-forge
typing_extensions 4.9.0 pyha770c72_0 conda-forge
tzdata 2023d h0c530f3_0 conda-forge
urllib3 2.1.0 pyhd8ed1ab_0 conda-forge
wasabi 1.1.2 py310h2ec42d9_0 conda-forge
wcwidth 0.2.13 pyhd8ed1ab_0 conda-forge
weasel 0.3.4 pyhd8ed1ab_0 conda-forge
wheel 0.42.0 pyhd8ed1ab_0 conda-forge
xz 5.2.6 h775f41a_0 conda-forge
zeromq 4.3.5 h93d8f39_0 conda-forge
zipp 3.17.0 pyhd8ed1ab_0 conda-forge
❯ conda list torch
# packages in environment at /Users/cecilia/miniconda3/envs/ml2:
#
# Name Version Build Channel
libtorch 2.1.0 cpu_mkl_hc49ff94_103 conda-forge
pytorch 2.1.0 cpu_mkl_py310h1822dd0_103 conda-forge
torch 2.1.0.post103 pypi_0 pypi
torchtext 0.15.2 py310h5de3785_4 conda-forge
❯ conda list numpy
# packages in environment at /Users/cecilia/miniconda3/envs/ml2:
#
# Name Version Build Channel
numpy 1.26.3 py310h4bfa8fc_0 conda-forge
❯ conda list
# packages in environment at /Users/cecilia/miniconda3/envs/ml2:
#
# Name Version Build Channel
annotated-types 0.6.0 pyhd8ed1ab_0 conda-forge
appnope 0.1.3 pyhd8ed1ab_0 conda-forge
asttokens 2.4.1 pyhd8ed1ab_0 conda-forge
brotli-python 1.1.0 py310h9e9d8ca_1 conda-forge
bzip2 1.0.8 h10d778d_5 conda-forge
ca-certificates 2023.11.17 h8857fd0_0 conda-forge
catalogue 2.0.10 py310h2ec42d9_0 conda-forge
certifi 2023.11.17 pyhd8ed1ab_0 conda-forge
charset-normalizer 3.3.2 pyhd8ed1ab_0 conda-forge
click 8.1.7 unix_pyh707e725_0 conda-forge
cloudpathlib 0.16.0 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
comm 0.2.1 pyhd8ed1ab_0 conda-forge
confection 0.1.4 py310h1cef2ca_0 conda-forge
cymem 2.0.8 py310h9e9d8ca_1 conda-forge
cython-blis 0.7.10 py310hf0b6da5_2 conda-forge
debugpy 1.8.0 py310h9e9d8ca_1 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
double-conversion 3.3.0 he965462_0 conda-forge
exceptiongroup 1.2.0 pyhd8ed1ab_2 conda-forge
executing 2.0.1 pyhd8ed1ab_0 conda-forge
filelock 3.13.1 pyhd8ed1ab_0 conda-forge
fsspec 2023.12.2 pyhca7485f_0 conda-forge
gmp 6.3.0 h93d8f39_0 conda-forge
gmpy2 2.1.2 py310hb691cb2_1 conda-forge
icu 73.2 hf5e326d_0 conda-forge
idna 3.6 pyhd8ed1ab_0 conda-forge
importlib-metadata 7.0.1 pyha770c72_0 conda-forge
importlib_metadata 7.0.1 hd8ed1ab_0 conda-forge
ipykernel 6.29.0 pyh3cd1d5f_0 conda-forge
ipython 8.20.0 pyh707e725_0 conda-forge
jedi 0.19.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.3 pyhd8ed1ab_0 conda-forge
joblib 1.3.2 pyhd8ed1ab_0 conda-forge
jupyter_client 8.6.0 pyhd8ed1ab_0 conda-forge
jupyter_core 5.7.1 py310h2ec42d9_0 conda-forge
langcodes 3.3.0 pyhd8ed1ab_0 conda-forge
libabseil 20230802.1 cxx17_h048a20a_0 conda-forge
libblas 3.9.0 21_osx64_openblas conda-forge
libcblas 3.9.0 21_osx64_openblas conda-forge
libcxx 16.0.6 hd57cbcb_0 conda-forge
libffi 3.4.2 h0d85af4_5 conda-forge
libgfortran 5.0.0 13_2_0_h97931a8_2 conda-forge
libgfortran5 13.2.0 h2873a65_2 conda-forge
libhwloc 2.9.3 default_h24e0189_1009 conda-forge
libiconv 1.17 hd75f5a5_2 conda-forge
liblapack 3.9.0 21_osx64_openblas conda-forge
libopenblas 0.3.26 openmp_hfef2a42_0 conda-forge
libprotobuf 4.24.4 hc4f2305_0 conda-forge
libre2-11 2023.06.02 h4694dbf_0 conda-forge
libsentencepiece 0.1.99 ha269934_5 conda-forge
libsodium 1.0.18 hbcb3906_1 conda-forge
libsqlite 3.44.2 h92b6c6a_0 conda-forge
libtorch 2.1.0 cpu_mkl_hc49ff94_103 conda-forge
libutf8proc 2.8.0 hb7f2c08_0 conda-forge
libuv 1.46.0 h0c2f820_0 conda-forge
libxml2 2.12.4 hc0ae0f7_1 conda-forge
libzlib 1.2.13 h8a1eda9_5 conda-forge
llvm-openmp 17.0.6 hb6ac08f_0 conda-forge
markdown-it-py 3.0.0 pyhd8ed1ab_0 conda-forge
markupsafe 2.1.4 py310hb372a2b_0 conda-forge
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
mdurl 0.1.2 pyhd8ed1ab_0 conda-forge
mkl 2023.2.0 h54c2260_50500 conda-forge
mpc 1.3.1 h81bd1dd_0 conda-forge
mpfr 4.2.1 h0c69b56_0 conda-forge
mpmath 1.3.0 pyhd8ed1ab_0 conda-forge
murmurhash 1.0.10 py310h9e9d8ca_1 conda-forge
ncurses 6.4 h93d8f39_2 conda-forge
nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge
networkx 3.2.1 pyhd8ed1ab_0 conda-forge
nltk 3.8.1 pyhd8ed1ab_0 conda-forge
numpy 1.26.3 py310h4bfa8fc_0 conda-forge
openssl 3.2.0 hd75f5a5_1 conda-forge
packaging 23.2 pyhd8ed1ab_0 conda-forge
parso 0.8.3 pyhd8ed1ab_0 conda-forge
pathy 0.10.3 py310hecd8cb5_0
pexpect 4.9.0 pyhd8ed1ab_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pip 23.3.2 pyhd8ed1ab_0 conda-forge
platformdirs 4.1.0 pyhd8ed1ab_0 conda-forge
preshed 3.0.9 py310h9e9d8ca_1 conda-forge
prompt-toolkit 3.0.42 pyha770c72_0 conda-forge
psutil 5.9.8 py310hb372a2b_0 conda-forge
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pydantic 2.6.0 pyhd8ed1ab_0 conda-forge
pydantic-core 2.16.1 py310h54baaa9_0 conda-forge
pygments 2.17.2 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.10.13 h00d2728_1_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python_abi 3.10 4_cp310 conda-forge
pytorch 2.1.0 cpu_mkl_py310h1822dd0_103 conda-forge
pyzmq 25.1.2 py310h6b67f7f_0 conda-forge
re2 2023.06.02 hd34609a_0 conda-forge
readline 8.2 h9e318b2_1 conda-forge
regex 2023.12.25 py310hb372a2b_0 conda-forge
requests 2.31.0 pyhd8ed1ab_0 conda-forge
revtok 0.0.3.1 pyhd8ed1ab_0 conda-forge
rich 13.7.0 pyhd8ed1ab_0 conda-forge
sacremoses 0.0.53 pyhd8ed1ab_0 conda-forge
setuptools 69.0.3 pyhd8ed1ab_0 conda-forge
shellingham 1.5.4 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sleef 3.5.1 h6db0672_2 conda-forge
smart_open 6.4.0 pyhd8ed1ab_0 conda-forge
spacy 3.7.2 py310h65d09f4_0 conda-forge
spacy-legacy 3.0.12 pyhd8ed1ab_0 conda-forge
spacy-loggers 1.0.5 pyhd8ed1ab_0 conda-forge
srsly 2.4.8 py310h9e9d8ca_1 conda-forge
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
sympy 1.12 pypyh9d50eac_103 conda-forge
tbb 2021.11.0 h7728843_1 conda-forge
thinc 8.2.2 py310h076e4b7_0 conda-forge
tk 8.6.13 h1abcd95_1 conda-forge
torch 2.1.0.post103 pypi_0 pypi
torchtext 0.15.2 py310h5de3785_4 conda-forge
tornado 6.3.3 py310h6729b98_1 conda-forge
tqdm 4.66.1 pyhd8ed1ab_0 conda-forge
traitlets 5.14.1 pyhd8ed1ab_0 conda-forge
typer 0.9.0 pyhd8ed1ab_0 conda-forge
typing-extensions 4.9.0 hd8ed1ab_0 conda-forge
typing_extensions 4.9.0 pyha770c72_0 conda-forge
tzdata 2023d h0c530f3_0 conda-forge
urllib3 2.1.0 pyhd8ed1ab_0 conda-forge
wasabi 1.1.2 py310h2ec42d9_0 conda-forge
wcwidth 0.2.13 pyhd8ed1ab_0 conda-forge
weasel 0.3.4 pyhd8ed1ab_0 conda-forge
wheel 0.42.0 pyhd8ed1ab_0 conda-forge
xz 5.2.6 h775f41a_0 conda-forge
zeromq 4.3.5 h93d8f39_0 conda-forge
zipp 3.17.0 pyhd8ed1ab_0 conda-forge
</code></pre>
<pre><code> active environment : ml2
active env location : /Users/cecilia/miniconda3/envs/ml2
shell level : 2
user config file : /Users/cecilia/.condarc
populated config files : /Users/cecilia/.condarc
conda version : 23.11.0
conda-build version : not installed
python version : 3.11.6.final.0
solver : libmamba (default)
virtual packages : __archspec=1=skylake
__conda=23.11.0=0
__osx=14.2.1=0
__unix=0=0
base environment : /Users/cecilia/miniconda3 (writable)
conda av data dir : /Users/cecilia/miniconda3/etc/conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/conda-forge/osx-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/osx-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/osx-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /Users/cecilia/miniconda3/pkgs
/Users/cecilia/.conda/pkgs
envs directories : /Users/cecilia/miniconda3/envs
/Users/cecilia/.conda/envs
platform : osx-64
user-agent : conda/23.11.0 requests/2.31.0 CPython/3.11.6 Darwin/23.2.0 OSX/14.2.1 solver/libmamba conda-libmamba-solver/23.11.1 libmambapy/1.5.3
UID:GID : 501:20
netrc file : None
offline mode : False
</code></pre>
|
<python><pytorch><anaconda><conda><torchtext>
|
2024-01-30 08:14:36
| 1
| 835
|
Cecilia Lee
|
77,904,685
| 451,878
|
Transform python dictionary in parameters function, and evaluate in a loop
|
<p>I have a loop that builds a list of Pydantic models.
Whenever I add a field to the table I have to change this code; it is not dynamic (even though the field names are the same as the keys):</p>
<pre><code>for t_one in table1:
for t_two in table2:
my_list.append(PydanticModel(id=t_one.id, name=t_one.name, version=t_two.version, area=t_two.area, pack=t_two.pack))
</code></pre>
<p>So I want, for example, to have a list of the t_two fields (<code>Ttwo.__table__.columns.keys()</code>),
and code like this:</p>
<pre><code>my_list.append(PydanticModel({k: v for k, v in fields.items()})
</code></pre>
<p>Is this possible?</p>
<p>UPDATE #1:
I built the field dictionary like this:</p>
<pre><code>fields = {"id":"t_one.id", "name":"t_one.name", "version":"t_two.version", "area":"t_two.area", "pack":"t_two.pack"}
</code></pre>
<p>When I run the code (I have also tried with <code>eval()</code>), Python doesn't find the fields...</p>
<p>I've the same error in this piece of code (in a class) :</p>
<pre><code>def gett(self):
self.pack = 99
i = "pack"
# display "99"
print(eval(f'self.{i}'))
# Error :
# return [eval(f'self.{x}') for x in self.all_fields()]
# ^^^^^^^^^^^^^^^^^
# File "<string>", line 1, in <module>
# NameError: name 'self' is not defined
return [eval(f'self.{x}') for x in self.all_fields()]
</code></pre>
<p>Thanks
F</p>
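<p>A dynamic lookup with <code>getattr()</code> avoids <code>eval()</code> and its scoping problems entirely; this is a minimal sketch with an illustrative class and field names:</p>

```python
# Illustrative stand-in for an ORM row; the real field list would come
# from Ttwo.__table__.columns.keys() as in the question.
class TOne:
    def __init__(self, id, name):
        self.id = id
        self.name = name

t_one = TOne(1, "widget")

# Build the keyword arguments dynamically instead of eval("t_one.id"):
field_names = ["id", "name"]
kwargs = {f: getattr(t_one, f) for f in field_names}
print(kwargs)  # {'id': 1, 'name': 'widget'}
```

<p>The same pattern works inside a class: <code>getattr(self, x)</code> replaces <code>eval(f'self.{x}')</code>, which fails inside a list comprehension because the comprehension has its own scope where <code>self</code> is not defined.</p>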
|
<python><pydantic>
|
2024-01-30 07:55:47
| 0
| 1,481
|
James
|
77,904,550
| 13,975,077
|
Python set working directory like Intellij in terminal
|
<p>I have a Python project with the following folder/file structure.</p>
<pre><code>Project/
-- generic_module
-- directory
---- sub_directory
------ main.py (uses generic_module, & other imports are relative to Project)
</code></pre>
<p>Now if I want to run main.py via Intellij IDE, I can just set the "Working Directory" in the run configuration and it works fine.
<a href="https://i.sstatic.net/Y3fO3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y3fO3.png" alt="enter image description here" /></a></p>
<p>I want to run it via the terminal, without the IntelliJ IDE. In that case, what do I do?</p>
<p>I tried things like <code>python directory/sub_directory/main.py</code> from the Project/ folder, but I am still facing import errors.</p>
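<p>A minimal sketch of what IntelliJ's "Working Directory" setting effectively does: put the project root on the import path. Directory and file names below are illustrative, not the real project:</p>

```shell
# Recreate a tiny version of the layout from the question.
mkdir -p proj/pkg/sub
printf 'VALUE = 42\n' > proj/generic_module.py
printf 'from generic_module import VALUE\nprint(VALUE)\n' > proj/pkg/sub/main.py

# Run from the project root with the root on PYTHONPATH; this mirrors
# setting the working directory in the IDE.
(cd proj && PYTHONPATH=. python3 pkg/sub/main.py)   # prints 42
```

<p>Equivalently, <code>cd Project &amp;&amp; PYTHONPATH=. python directory/sub_directory/main.py</code> should resolve imports that are relative to Project.</p>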
|
<python><intellij-idea><python-import>
|
2024-01-30 07:29:16
| 1
| 800
|
Yogesh
|
77,904,074
| 748,493
|
Set new column on pandas dataframe with setitem
|
<p>I have a nested dictionary (let's call it <code>dic</code>) which has nodes of different depths and where final nodes contain pandas dataframe. I am looking to provide a uniform way for accessing and setting values at any node (where "values" can be further dictionaries, dataframes, or columns in dataframes). To this end, I am using the following functions</p>
<pre><code>from functools import reduce
from operator import getitem, setitem
def get_item(path):
return reduce(getitem, path, dic)
def set_item(path, value):
reduce(lambda k,o,v=value: setitem(k,o,v), path, dic)
return None
</code></pre>
<p>where <code>path</code> is specified by the user via a config file and is of the forms (assuming 3 levels depth) <code>[key1, key2, table_name, [column1, column2]]</code>.</p>
<p>This works fine, apart from the case when I try to set values to a new column in the dataframe in which case I get an error:</p>
<pre><code>TypeError: 'NoneType' object does not support item assignment
</code></pre>
<p>Looking to understand why this is happening and how to resolve this.</p>
<p>For example,</p>
<pre><code>dic = {}
dic['key1'] = {}
dic['key1']['key2'] = {}
dic['key1']['key2']['table1'] = pd.DataFrame({'column1': [1,2,3], 'column2': [10,20,30]})
</code></pre>
<p>to get an item</p>
<pre><code>get_item(['key1', 'key2', 'table1', ['column1']])
</code></pre>
<p>which returns</p>
<p><a href="https://i.sstatic.net/3btFJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3btFJ.png" alt="enter image description here" /></a></p>
<p>and to set an item</p>
<pre><code>set_item(['key1', 'key2', 'table1', ['column3']], [100,200,300])
</code></pre>
<p>with expected output of</p>
<p><a href="https://i.sstatic.net/P0kQe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P0kQe.png" alt="enter image description here" /></a></p>
<p>but this gives an error mentioned before.</p>
<p>The following works for setting columns</p>
<pre><code>get_item(['key1', 'key2', 'table1'])['column3'] = [100,200,300]
</code></pre>
<p>but this does not offer a uniform solution (as in some cases the value can be another dictionary or a whole new dataframe).</p>
<p><strong>UPD1</strong></p>
<p>I think the issue is that <code>setitem</code> sets the value on each object in the iteration of the path, not just the final one, so I guess I need to split the final part of the path out and apply it to the final object.</p>
<p>The modified <code>set_item</code> now seems to do the job:</p>
<pre><code>def set_item(path, value):
if isinstance(path[-1], list):
final_node = path[-1]
else:
final_node = [path[-1]]
reduce(lambda k,o,v=value: setitem(k,o,v), final_node, get_item(path[:-1]))
return None
</code></pre>
|
<python><pandas>
|
2024-01-30 05:27:28
| 2
| 522
|
Confounded
|
77,903,782
| 7,371,707
|
Assigning values to 2d array with list of coordinates and time series without for-loop
|
<p>I have data with four attributes. Two are coordinates <code>xs</code> and <code>ys</code>; the other two are timestamps <code>ts</code> and polarity <code>ps</code>.
I need to set the value of an image at each position <code>(x,y)</code> with the newest polarity. The xs and ys are not unique.</p>
<p>I can do it with a for-loop as shown below, but I need a vectorized way:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
color_p = (0, 0, 255)
color_n = (255, 0, 0)
H, W = 128, 128
n_p = 100
xs, ys = np.random.randint(0, W, n_p), np.random.randint(0, H, n_p)
ts = np.random.rand(n_p)
ts.sort()
ps = np.random.randint(0, 2, n_p)
img = np.zeros((H, W, 3), dtype=np.uint8)
for i in range(n_p):
x, y, p = xs[i], ys[i], ps[i]
img[y, x] = color_p if p > 0 else color_n
</code></pre>
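<p>A vectorized sketch using fancy indexing: with plain assignment, NumPy in practice applies duplicate indices in order, so because <code>ts</code> is sorted ascending the last (newest) event at each pixel wins, matching the loop:</p>

```python
import numpy as np

color_p = np.array([0, 0, 255], dtype=np.uint8)
color_n = np.array([255, 0, 0], dtype=np.uint8)
H, W, n_p = 128, 128, 100
rng = np.random.default_rng(0)
xs, ys = rng.integers(0, W, n_p), rng.integers(0, H, n_p)
ps = rng.integers(0, 2, n_p)  # events assumed already in time order (ts sorted)

img = np.zeros((H, W, 3), dtype=np.uint8)
# Pick a colour per event, then assign all pixels at once; at duplicate
# (y, x) positions the later (newer) assignment overwrites the earlier one.
colors = np.where(ps[:, None] > 0, color_p, color_n)
img[ys, xs] = colors
```

<p>Note that in-place operators like <code>+=</code> would <em>not</em> accumulate at repeated indices this way; for that case <code>np.ufunc.at</code> is needed instead.</p>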
|
<python><numpy><numpy-ndarray>
|
2024-01-30 03:39:43
| 2
| 1,029
|
ToughMind
|
77,903,679
| 18,308,621
|
Why polars on intel cpu is faster than on amd cpu?
|
<p>I have two PCs: one has an Intel i7-13700KF with 64 GB RAM, the other an AMD 3970X with the same amount of RAM. Both use an SSD for storage and run Python 3.11 with Polars 0.20.5. I run the code below:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
"name": ["a"+str(i) for i in range(1000000)],
"age": [i for i in range(1000000)]
})
</code></pre>
<pre class="lang-py prettyprint-override"><code>%%timeit
df.with_columns([
pl.col("name") + "test",
pl.col("age").rolling_mean(20)
])
</code></pre>
<p>I find that Intel gets <code>12.9 ms ± 20.8 µs per loop</code> while AMD gets <code>24.8 ms ± 270 µs per loop</code>.</p>
<p>The CPU benchmarks are close for these two CPUs. Why is there such a big difference?</p>
<p>I also found that for a small calculation the Apple M3 Max can be beaten by the Intel i7-13700KF, but for a lot of Polars work, like 20-30 chained <code>with_columns</code> calls, the M3 Max beats the i7-13700KF.</p>
|
<python><intel><python-polars><amd-processor>
|
2024-01-30 02:56:52
| 0
| 331
|
Hakase
|
77,903,352
| 5,709,240
|
How to export a list of list into a file?
|
<p>Given the following Panda DataFrame:</p>
<pre><code>import pandas as pd
data = {'product': ['Matcha Latte', 'Milk Tea', 'Cheese Cocoa', 'Walnut Brownie'],
'2015': [43.3, 83.1, 86.4, 72.4],
'2016': [85.8, 73.4, 65.2, 53.9],
'2017': [93.7, 55.1, 82.5, 39.1]}
df = pd.DataFrame(data)
</code></pre>
<p><code>df</code>:</p>
<pre><code>|----------------+------+------+------|
| product | 2015 | 2016 | 2017 |
|----------------+------+------+------|
| Matcha Latte | 43.3 | 85.8 | 93.7 |
| Milk Tea | 83.1 | 73.4 | 55.1 |
| Cheese Cocoa | 86.4 | 65.2 | 82.5 |
| Walnut Brownie | 72.4 | 53.9 | 39.1 |
|----------------+------+------+------|
</code></pre>
<p>That I'm converting to <a href="https://apache.github.io/echarts-handbook/en/concepts/dataset/#define-data-in-dataset" rel="nofollow noreferrer">Echarts dataset format</a>:</p>
<pre><code>content = df.T.reset_index().T.values.tolist()
</code></pre>
<p><code>content</code>:</p>
<pre><code>[['product', '2015', '2016', '2017'],
['Matcha Latte', 43.3, 85.8, 93.7],
['Milk Tea', 83.1, 73.4, 55.1],
['Cheese Cocoa', 86.4, 65.2, 82.5],
['Walnut Brownie', 72.4, 53.9, 39.1]]
</code></pre>
<p>I'm trying to export the result (<code>content</code>) to a file, and ran the following code:</p>
<pre><code>import json
with open('output', 'w') as foo:
json.dump(content, foo)
</code></pre>
<p>But it gave me this:</p>
<pre><code>[['product', '2015', '2016', '2017'], ['Matcha Latte', 43.3, 85.8, 93.7], ['Milk Tea', 83.1, 73.4, 55.1], ['Cheese Cocoa', 86.4, 65.2, 82.5], ['Walnut Brownie', 72.4, 53.9, 39.1]]
</code></pre>
<p>How could I export <code>content</code> to a file looking like this:</p>
<pre><code>[['product', '2015', '2016', '2017'],
['Matcha Latte', 43.3, 85.8, 93.7],
['Milk Tea', 83.1, 73.4, 55.1],
['Cheese Cocoa', 86.4, 65.2, 82.5],
['Walnut Brownie', 72.4, 53.9, 39.1]]
</code></pre>
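<p>One sketch of a workaround: <code>json.dump</code> has no "one row per line" mode, but serialising each inner list separately and joining them with newlines produces that layout (JSON will use double quotes rather than the single quotes shown above):</p>

```python
import json

content = [['product', '2015', '2016', '2017'],
           ['Matcha Latte', 43.3, 85.8, 93.7],
           ['Milk Tea', 83.1, 73.4, 55.1],
           ['Cheese Cocoa', 86.4, 65.2, 82.5],
           ['Walnut Brownie', 72.4, 53.9, 39.1]]

# One json.dumps per row, joined so each row starts on its own line.
text = "[" + ",\n ".join(json.dumps(row) for row in content) + "]"
with open("output", "w") as foo:
    foo.write(text)
```

<p>The file stays valid JSON, so it can still be read back with <code>json.load</code>.</p>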
|
<python><pandas>
|
2024-01-30 00:45:44
| 2
| 933
|
crocefisso
|
77,903,321
| 515,368
|
ImportError: cannot import name 'mock_s3' from 'moto'
|
<pre class="lang-py prettyprint-override"><code>import pytest
from moto import mock_s3
@pytest.fixture(scope='module')
def s3():
with mock_s3():
os.environ['AWS_ACCESS_KEY_ID'] = 'test'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'test'
os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test_bucket')
yield s3
</code></pre>
<p>This code was working, but is now throwing an exception <code>Cannot import name mock_s3 from moto</code>. What am I doing wrong?</p>
|
<python><moto>
|
2024-01-30 00:33:21
| 1
| 3,162
|
supermitch
|
77,903,316
| 12,279,326
|
basic config to run a python docker image via AWS Lambda function
|
<p>I have spent five hours trying to get a Lambda function to successfully return a response using a Docker image, and I am failing miserably.</p>
<p>Dockerfile</p>
<pre><code>FROM python:3.9.6-buster
WORKDIR /app
COPY . /app
CMD ["python", "/app/lambda_handler.py"]
</code></pre>
<p>Python files
p.py</p>
<pre><code>def c(a1, a2, a3):
result = {
"a1": a1,
"a2": a2,
"a3": a3,
}
return result
</code></pre>
<p>lambda_handler.py</p>
<pre><code>import json
from p import c
def lambda_handler(event, context):
arg1 = event["a1"]
arg2 = event["a2"]
arg3 = event["a3"]
    result = c(arg1, arg2, arg3)
response = {"statusCode": 200, "body": result}
return json.dumps(response, indent=4, default=str)
</code></pre>
<p>I get an unhelpful error of</p>
<blockquote>
<p>{
"errorType": "Runtime.InvalidEntrypoint",
"errorMessage": "RequestId: 330fa2c3-f68b-4225-a53c-96f2409f186d Error: exec: "lambda_function.lambda_handler": executable file not found in $PATH"
}</p>
</blockquote>
<p>I have tried:</p>
<ul>
<li>zipping the files and uploading to the lambda function which works</li>
<li>read <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-image.html" rel="nofollow noreferrer">this link</a>, which suggests using something like CMD [ "lambda_function.handler" ], so I have set the override CMD to combinations of app.lambda_function.lambda_handler and lambda_function.lambda_handler, with and without quotes</li>
<li>I have tried setting the ENTRYPOINT to python, and then trying to point the CMD to /app/lambda_handler</li>
<li>instead of using CMD exclusively using ENTRYPOINT override <code>python, /app/lambda_handler.py</code></li>
</ul>
<p>Any help would be most appreciated.</p>
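<p>The <code>Runtime.InvalidEntrypoint</code> error usually means the image has no Lambda runtime interface client wired up; a plain <code>python</code> base image does not ship one. A minimal sketch using the AWS-provided base image (which bundles that client and expects <code>CMD</code> to name <code>module.function</code>):</p>

```dockerfile
FROM public.ecr.aws/lambda/python:3.9
# Copy the handler code into the task root the runtime scans
COPY lambda_handler.py p.py ${LAMBDA_TASK_ROOT}/
# "file.function" -- matches lambda_handler() in lambda_handler.py
CMD ["lambda_handler.lambda_handler"]
```

<p>With a plain <code>python:3.9-buster</code> base you would instead have to install and invoke <code>awslambdaric</code> yourself.</p>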
|
<python><docker><aws-lambda>
|
2024-01-30 00:31:21
| 1
| 948
|
dimButTries
|
77,903,156
| 2,092,609
|
How Do I Calculate a "Call Level" Column on a DataFrame?
|
<p>I have a DataFrame that could be thought of as a call stack, differentiated by a column, <code>Entry</code>, whose value is True when we enter the function and False when we leave. The other columns are <code>ThreadID</code> and <code>Function</code>, so that the data might look like this:</p>
<pre><code> ThreadID Function Entry
0 1 FuncA True
1 1 FuncB True
2 1 FuncB False
3 1 FuncC True
4 1 FuncC False
5 1 FuncA False
</code></pre>
<p>and this represents something like this</p>
<pre><code>FuncA() {
FuncB();
FuncC();
}
</code></pre>
<p>I would like to calculate an "call level" for each record based on the thread and function "depth." Iteratively, I would just look at each record, get the current call level for that thread, adjust it based on the value of <code>Entry</code> (add one for True, subtract one for False) and assign that value to the record as the <code>call_level</code> column.</p>
<p><em>Note: This is a simplified version of my problem. The real problem also has other records that aren't function entry/exit records, as if you added a log entry while you were inside the function. I'd like to apply the current call level to those records, too.</em></p>
<p><strong>This algorithm seems all wrong for Pandas.</strong> Is there a better algorithm or way to think about solving this problem that would leverage vector operations and things that Pandas does well?</p>
|
<python><pandas>
|
2024-01-29 23:28:22
| 2
| 4,192
|
mojo
|
77,903,136
| 3,224,196
|
Can you find an exact html tag with BeautifulSoup?
|
<p>I'm using BeautifulSoup to add an arbitrary attribute to some HTML code files. Let's call it... data-custom.</p>
<p>Let's assume this fragment is the html that BeautifulSoup is going to load.</p>
<pre class="lang-html prettyprint-override"><code><div class="col-md-6">
<label asp-for="Driller.FirstName"></label>
<input asp-for="Driller.FirstName" class="form-control" />
</div>
</code></pre>
<p>Now the code:</p>
<pre><code>tagId = 'theonetochange'
outerHtml = '<input asp-for="Driller.FirstName" class="form-control" />'
soup = BeautifulSoup(html, "html.parser")
# Find by ID
tag = soup.find(id = tagId)
# If that didn't work, find the exact HTML code
if tag is None:
tag = soup.find(string = outerHtml) # This is what doesn't work
if tag is None:
print("Target not found")
exit()
# Add the custom attribute
tag["data-custom"] = "mycustomdata"
# Print out the results
print(str(soup))
</code></pre>
<p>I want to ultimately come out with this:</p>
<pre class="lang-html prettyprint-override"><code><div class="col-md-6">
<label asp-for="Driller.FirstName"></label>
<input data-custom="customdata" asp-for="Driller.FirstName" class="form-control" />
</div>
</code></pre>
<p>Since the html doesn't contain that ID, it should search using the exact HTML code I provided, but it looks like maybe that <code>soup.find(string = "...")</code> is only good for the contents of a tag and not the tag itself? Is there a way to select the entire tag in that way?</p>
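<p>A sketch of an alternative: <code>find(string=...)</code> matches a tag's <em>text content</em>, which a void <code>&lt;input&gt;</code> never has, so matching on the tag name and attributes instead should locate it:</p>

```python
from bs4 import BeautifulSoup

html = '''<div class="col-md-6">
    <label asp-for="Driller.FirstName"></label>
    <input asp-for="Driller.FirstName" class="form-control" />
</div>'''

soup = BeautifulSoup(html, "html.parser")
# Match by tag name + attributes rather than by the raw outer HTML.
tag = soup.find("input", attrs={"asp-for": "Driller.FirstName"})
tag["data-custom"] = "mycustomdata"
print(soup)
```

<p>If the attribute set could vary, you could also parse the <code>outerHtml</code> fragment itself with BeautifulSoup and reuse its tag name and <code>attrs</code> as the search criteria.</p>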
|
<python><beautifulsoup>
|
2024-01-29 23:18:44
| 1
| 380
|
Martin
|
77,902,990
| 3,490,622
|
Is this possible within optimization framework?
|
<p>I am new to operations research, so I would really appreciate your help. I don't know if this is possible within the OR framework, but here's the problem. Suppose I have a set of SKUs, their regular prices, their price elasticities, their current demand in units/week, and their inventory levels. If, after N weeks on regular prices, the inventory levels are still at least 50% of starting, the prices are allowed to go down between 10% and 15%. The goal is to optimize revenue and minimize left-over inventory by M weeks.</p>
<p>I've tried setting it up like so</p>
<pre><code>from pyomo.environ import *
sku_data = {
'SKU1': {'current_price': 100, 'price_elasticity': -0.2, 'current_demand': 50, 'current_inventory': 1000},
'SKU2': {'current_price': 150, 'price_elasticity': -0.3, 'current_demand': 40, 'current_inventory': 1200},
}
# Pyomo model
model = ConcreteModel()
# Sets
model.SKUs = Set(initialize=sku_data.keys())
model.Weeks = RangeSet(1, 12) # 12 weeks in total
# Parameters
model.current_price = Param(model.SKUs, initialize=lambda model, sku: sku_data[sku]['current_price'])
model.price_elasticity = Param(model.SKUs, initialize=lambda model, sku: sku_data[sku]['price_elasticity'])
model.current_demand = Param(model.SKUs, initialize=lambda model, sku: sku_data[sku]['current_demand'])
model.current_inventory = Param(model.SKUs, initialize=lambda model, sku: sku_data[sku]['current_inventory'])
# Variables
model.regular_price = Var(model.SKUs, within=NonNegativeReals, bounds=(0, None),
initialize=lambda model, sku: sku_data[sku]['current_price'])
model.promo_price_discount = Var(model.SKUs, within=NonNegativeReals, bounds=(0, 0.5),
initialize=0.2) # Assume max discount is 50%
model.promo_price = Var(model.SKUs, within=NonNegativeReals, bounds=(0, None),
initialize=lambda model, sku: model.regular_price[sku] * (1 - model.promo_price_discount[sku]))
model.regular_duration = Var(model.SKUs, within=NonNegativeIntegers, bounds=(4, None),
initialize=4)
model.promo_duration = Var(model.SKUs, within=NonNegativeIntegers, bounds=(0, None),
initialize=0)
def calculate_demand(base_price, new_price, elasticity, base_demand):
return base_demand*(1 + elasticity * (new_price - base_price)/base_price)
def calculate_inventory(base_inventory, base_demand, elasticity,
regular_duration, promo_duration, regular_price, promo_price):
inventory = base_inventory
    for t in range(1, 12): # Assuming 12 weeks in total (adjust as needed)
if t <= regular_duration:
inventory -= calculate_demand(regular_price, regular_price, elasticity, base_demand)*regular_duration
elif t <= regular_duration + promo_duration:
inventory -= calculate_demand(regular_price, promo_price, elasticity, base_demand)*promo_duration
return inventory
# Objective 1 is maximizing revenue
model.obj = Objective(expr=sum(
(calculate_demand(
model.current_price[sku], model.regular_price[sku],
model.price_elasticity[sku], model.current_demand[sku]
) * model.regular_price[sku]* model.regular_duration[sku] +
calculate_demand(
model.current_price[sku], model.promo_price[sku],
model.price_elasticity[sku], model.current_demand[sku]
) * model.promo_price[sku] * model.promo_duration[sku])
for sku in model.SKUs
), sense=maximize)
def inventory_constraint_rule(model, sku):
base_inventory = model.current_inventory[sku]
base_demand = model.current_demand[sku]
regular_duration = value(model.regular_duration[sku])
elasticity = model.price_elasticity[sku]
promo_duration = value(model.promo_duration[sku])
regular_price = model.regular_price[sku]
promo_price = model.promo_price[sku]
return calculate_inventory(base_inventory, base_demand, elasticity,
regular_duration, promo_duration, regular_price, promo_price) >= 0
model.inventory_con = Constraint(model.SKUs, rule=inventory_constraint_rule)
def promo_switch_constraint_rule(model, sku):
base_inventory = model.current_inventory[sku]
base_demand = model.current_demand[sku]
regular_duration = value(model.regular_duration[sku])
elasticity = model.price_elasticity[sku]
promo_duration = value(model.promo_duration[sku])
regular_price = model.regular_price[sku]
promo_price = model.promo_price[sku]
inventory_after_regular = calculate_inventory(base_inventory, base_demand, elasticity,
4, 0, regular_price,
promo_price)
# Check if promo switch is needed and set regular_duration accordingly
return inventory_after_regular >= 0.5 * base_inventory
model.promo_switch_con = Constraint(model.SKUs, rule=promo_switch_constraint_rule)
# Solve the optimization problem
solver = SolverFactory('ipopt') # Use an appropriate solver (e.g., 'glpk' or 'cbc')
solver.solve(model, tee=True)
# Display results
for sku in model.SKUs:
print(f"SKU: {sku}")
print(f"Optimal Regular Price: {value(model.regular_price[sku])}")
print(f"Optimal Promo Price: {value(model.promo_price[sku])}")
print(f"Optimal Promo Price Discount: {value(model.promo_price_discount[sku])}")
print(f"Optimal Regular Duration: {value(model.regular_duration[sku])}")
print()
# Access the optimal objective value
optimal_revenue = value(model.obj)
print(f"Optimal Revenue: {optimal_revenue}")
</code></pre>
<p>But it did not work at all; the solver reports that the problem is 'infeasible'. I think I may have just set it up wrong, though. Any pointers would be greatly appreciated!</p>
|
<python><operations-research>
|
2024-01-29 22:30:56
| 0
| 1,011
|
user3490622
|
77,902,984
| 14,222,845
|
Pyinstaller exe file gives 'plotly' has no attribute 'express' error
|
<p>I have some code in Python (Jupyter Notebooks in my Anaconda environment). The file is called <code>myFile.py</code>:</p>
<pre><code>import math
import pandas as pd
pd.options.mode.chained_assignment = None # default='warn'
import plotly
DF = pd.read_csv('My CSV data file path')
myCol = DF['Temp'].value_counts(normalize = True)
fig = plotly.express.bar(myCol)
fig.show()
</code></pre>
<p>My code runs exactly as expected in Python.</p>
<p>I used pyinstaller to successfully convert this code into an exe file (<code>pyinstaller myFile.py</code>). However, when I run the exe file, I get an error:</p>
<pre><code>File "_plotly_utils\importers.py", line 39 in __getattr__
AttributeError: module 'plotly' has no attribute 'express'
Failed to execute script due to unhandled exception.
</code></pre>
<p>Here's what I tried:</p>
<ol>
<li>I use the <code>--onefile</code> option with pyinstaller but that did not work.</li>
<li>I imported plotly.express as a hidden import but I got the same error (<code>pyinstaller --hidden-import plotly.express myFile.py</code>)</li>
<li>I copied the entire plotly package from my anaconda environment and pasted it in the same directory as <code>myFile.exe</code> but I got the same error.</li>
<li>I updated my version of plotly with the newest version using <code>conda update plotly</code>. It still didn't solve the issue.</li>
<li>I used the accepted solution posted here <a href="https://stackoverflow.com/questions/46099695/pyinstaller-fails-with-plotly">pyinstaller fails with plotly</a> but I still got the same error.</li>
</ol>
<p>My version of plotly is version 5.9.0.</p>
|
<python><anaconda><pyinstaller><exe><command-prompt>
|
2024-01-29 22:29:24
| 2
| 330
|
Diamoniner12345
|
77,902,970
| 16,596,758
|
How can I display an interactive SVG image that utilizes javascript in a Jupyter notebook?
|
<p>I have an SVG image with embedded javascript that makes the image change based on <code>onclick</code> events:</p>
<pre><code><?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<svg xmlns="http://www.w3.org/2000/svg" height="100" width="100">
<script type="text/ecmascript">
<![CDATA[
function click() {
var el = document.getElementById("box");
el.setAttribute("style", "fill: red");
}
]]>
</script>
<rect style="fill: white" height="100" x="0" y="0" width="100" onclick="click()" id="box" />
</svg>
</code></pre>
<p>I would like to display this SVG image in a Jupyter notebook and maintain the interactive functionality.</p>
<p>I tried using</p>
<pre><code>from IPython.display import SVG, display
display(SVG('interactive.svg'))
</code></pre>
<p>And while this displays the svg file, the image does not change from white to red when clicked on.</p>
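<p>One workaround (a sketch, not the only option): browsers do not execute scripts inside markup injected via <code>innerHTML</code>, which is effectively how the notebook inserts inline SVG. Giving the file its own browsing context with an <code>IFrame</code> keeps the script alive:</p>

```python
from IPython.display import IFrame, display

# The path is served relative to the notebook, so interactive.svg must
# sit next to (or below) the .ipynb file.
display(IFrame("interactive.svg", width=120, height=120))
```

<p>An <code>&lt;object&gt;</code> or <code>&lt;embed&gt;</code> element emitted via <code>IPython.display.HTML</code> should behave similarly, since both load the SVG as a separate document.</p>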
|
<javascript><python><svg><jupyter-notebook><jupyter>
|
2024-01-29 22:26:41
| 1
| 998
|
Will Holtz
|
77,902,968
| 13,142,245
|
Python logging CloudWatch location?
|
<p>I have a Lambda function, which I've declared with CDK. Additionally, I've declared cloud watch logs and passed the resource to the Lambda function as an environment variable "LOGS_DESTINATION".</p>
<p>The location can be retrieved by <code>os.environ['LOGS_DESTINATION']</code> by Python Lambda.</p>
<p>In Python, using the logging module, how can I configure/direct <code>logging.info</code> statements to CloudWatch Logs using the above location (which is a string)?</p>
|
<python><logging><aws-lambda><amazon-cloudwatch>
|
2024-01-29 22:26:16
| 2
| 1,238
|
jbuddy_13
|
77,902,942
| 3,067,055
|
get ViewSet and http_method_names from drf url namespace
|
<p>I have this snippet to construct API URLs using <code>SimpleRouter()</code>. In <code>urlpatterns</code> I have declared a namespace called <code>my_api</code>. Is there a way to get all the associated ViewSets and their respective <code>http_method</code> properties?</p>
<pre class="lang-py prettyprint-override"><code>from django.urls import include, path
from rest_framework import routers
from apps.dir.api.views import (
ViewSet1,
ViewSet2,
ViewSet3,
)
router = routers.SimpleRouter(trailing_slash=False)
router.register("set1", ViewSet1)
router.register("set2", ViewSet2)
router.register("set3", ViewSet3)
urlpatterns = [
path("api/v1/", include((router.urls, "api_all_sets"), namespace="my_api")),
]
</code></pre>
|
<python><django><django-rest-framework>
|
2024-01-29 22:20:20
| 1
| 1,261
|
hello
|
77,902,840
| 11,838,196
|
Why does Python Glue Job throw a KeyError for my Job parameter?
|
<p>I have defined a simple Glue Job of type Python Shell:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from awsglue.utils import getResolvedOptions
args = getResolvedOptions(sys.argv, [
'test-parameter'
])
value = args["test-parameter"]
print(f'value = {value}')
</code></pre>
<p>I am trying to pass the following Job parameter:</p>
<ul>
<li>Key: <code>--test-parameter</code></li>
<li>Value: <code>hello world</code></li>
</ul>
<p>When I run this Glue Job, I get the following error:</p>
<blockquote>
<p>KeyError: 'test-parameter'</p>
</blockquote>
<p>How can I fix this problem?</p>
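<p>A likely cause, reasoning from stdlib behaviour (sketch, not verified against your job): <code>getResolvedOptions</code> is built on <code>argparse</code>, and argparse replaces hyphens with underscores in the resulting keys, so the value lands under <code>test_parameter</code>:</p>

```python
import argparse

# Stdlib illustration of what happens to hyphenated parameter names.
parser = argparse.ArgumentParser()
parser.add_argument("--test-parameter")
args = vars(parser.parse_args(["--test-parameter", "hello world"]))
print(args)  # {'test_parameter': 'hello world'}
```

<p>So inside the Glue job, reading <code>args["test_parameter"]</code> (with an underscore) should work; alternatively, name the parameter with underscores from the start.</p>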
|
<python><aws-glue>
|
2024-01-29 21:57:17
| 1
| 1,971
|
srk
|
77,902,775
| 1,084,416
|
How to forward generic argument types to a callable in Python
|
<p>I want to annotate a generic function which takes as arguments another function and its parameters.</p>
<pre><code>def forward(func, **kwargs):
func(**kwargs)
</code></pre>
<p>So, if I have a function which takes two integers:</p>
<pre><code>def sum_int(a: int, b: int):
...
</code></pre>
<p>in my editor I want help if I pass the wrong object types:</p>
<pre><code>forward(sum_int, 1.5, 2.6) # want type checker to complain about using floats instead of integers
</code></pre>
<p>How can I annotate <code>forward</code>? Something like:</p>
<pre><code>def forward(func: Callable[rest, ret], **kwargs: rest) -> ret:
...
</code></pre>
<p>So, the first argument to <code>forward</code> is <code>func</code> and the <code>rest</code> are the keyword arguments. The return is <code>ret</code>. But <code>rest</code> and <code>ret</code> are also the keyword arguments and return type for <code>func</code>.</p>
<p>I used to do generics years (decades!) ago in C++, and there were tricks for capturing and unpacking various types, but I don't know whether we have to jump through those kind of hoops with Python, or whether it's even possible.</p>
<p>I don't really know what to search for, and didn't turn up anything anywhere near helpful.</p>
<p>Thanks!</p>
|
<python><python-typing>
|
2024-01-29 21:41:49
| 1
| 24,283
|
Open AI - Opting Out
|
77,902,760
| 5,790,653
|
python save function variable to use in another function parameter
|
<p>I have these two functions:</p>
<pre class="lang-py prettyprint-override"><code>def func1(my_list, test1, test2):
my_list = [ {'name': 'Saeed', 'id': 1}, {'name': 'David', 'id': 2} ]
def func2(name):
name[0]['id'] += 1
</code></pre>
<p>The <code>name[0]</code> is in fact <code>my_list[0]</code>, but I'm not sure how I can access it from the second function.</p>
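<p>Function-local variables can't be reached from outside; the usual sketch is to <code>return</code> the list and pass it along explicitly (names below follow the question, minus the unused parameters):</p>

```python
def func1():
    my_list = [{'name': 'Saeed', 'id': 1}, {'name': 'David', 'id': 2}]
    return my_list

def func2(name):
    # Lists are mutable, so this change is visible to the caller too.
    name[0]['id'] += 1

people = func1()
func2(people)
print(people[0]['id'])  # 2
```

<p>Because the list object is shared, <code>func2</code>'s mutation shows up in <code>people</code> without returning anything.</p>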
|
<python>
|
2024-01-29 21:37:36
| 1
| 4,175
|
Saeed
|
77,902,619
| 3,030,966
|
How to start aiogram bot and aiohttp webserver together?
|
<p>I have an aiogram v3 bot that runs in polling mode. I need to add a handler to process requests from an external application, so I use an aiohttp server that listens for external requests, processes them, and sends notifications to bot users:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from aiogram import Bot, Dispatcher
from aiohttp import web
app = web.Application()
async def bot_start():
bot = Bot(TOKEN, parse_mode='HTML')
dp = Dispatcher()
await dp.start_polling(bot)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.create_task(bot_start())
loop.run_until_complete(web.run_app(app, host="127.0.0.1", port=5555))
</code></pre>
<p>The bot starts polling, but the web server is not running. Please help me start both processes within a single project.</p>
<p>PS. The bot must run in polling mode, not with a webhook.</p>
<p><strong>SOLUTION</strong>:</p>
<p>need to run both apps as async func in separate tasks:</p>
<pre class="lang-py prettyprint-override"><code>async def bot_start():
bot = Bot(TOKEN, parse_mode='HTML')
dp = Dispatcher()
await dp.start_polling(bot)
async def webserver_start():
app = web.Application()
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, "127.0.0.1", 5555)
await site.start()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
task1 = loop.create_task(bot_start())
task2 = loop.create_task(webserver_start())
loop.run_until_complete(asyncio.gather(task1, task2))
</code></pre>
|
<python><telegram-bot><aiohttp><aiogram>
|
2024-01-29 21:04:54
| 1
| 722
|
swserg
|
77,902,593
| 1,480,131
|
Define the return type of a function depending on the real value of an argument
|
<p>I know that with <code>TypeVar</code> you can define the return type of a function
depending on the type of one its arguments. However, I want the type based on
the content of an argument.</p>
<p>For example, the function <code>open</code> returns different types depending on the
content of the <code>mode</code> argument:</p>
<pre class="lang-py prettyprint-override"><code>a = open("/etc/fstab", "r")
b = open("/etc/fstab", "rb")
</code></pre>
<p>In this case my <code>pyright</code> says that <code>a</code> is a <code>TextIOWrapper</code> and <code>b</code> is a
<code>BufferedReader</code>.</p>
<p>I also realized that my pyright infers the correct type with this code:</p>
<pre class="lang-py prettyprint-override"><code>mode = "rb"
a = open("/etc/fstab", mode)
</code></pre>
<p>but if I do:</p>
<pre class="lang-py prettyprint-override"><code>def funny(mode: str):
a = open("/ect/fstab", mode)
</code></pre>
<p>then pyright infers <code>IO[Any]</code>. So clearly pyright take more than the literal
value of <code>mode</code> into consideration. So how is pyright inferring the type
correctly? Is this because the type hints of <code>open</code> or is it some kind of
"harcoding" of pyright for the function <code>open</code>?</p>
|
<python><python-typing>
|
2024-01-29 20:58:52
| 1
| 13,662
|
Pablo
|
77,902,559
| 1,581,441
|
Python safe string compression without line breaks
|
<p>I am trying to save a large number of JSON files by appending each JSON string as a new line to a large text file. I have limited storage, so I don't want to save the JSON strings as-is for so many files. Instead, I try to compress each JSON string using the zlib library, and then append the compressed string as a new line to the big file.</p>
<p>The compression is pretty good; the problem is that the compressed string often contains a line break character "\n", which causes errors during decompression when reading line by line.
I tried to overcome this by base64-encoding the zlib-compressed string, since base64 output contains no line breaks, but that makes the final string much longer and hence the compression less effective (for shorter strings, the final string after zlib/base64 is actually longer than the original).</p>
<pre><code>import zlib, base64
item_dict={}
item_dict["a"]="ما هذا الذي قاله اليوم بشأن الأخبارية التي فلتها متعمدا؟"
item_dict["b"]="She’s allowed to not want someone else’s kids in her picture. Y’all are weird for the way youre acting over this. I don’t want any pics of myself with my ex’s children, because they aren’t my children and I’m not in their lives anymore. It’s weird to post pics of someone else’s kids… so asking for them to be removed so I can still enjoy my picture from my holiday isn’t as bad as y’all are making it seem."
item_dict["c"]='''
{"symbol": "A/RES/74/1", "resolution_number": "74/1.", "title": "Scale of assessments for the apportionment of the expenses of the United Nations: requests under Article 19 of the Charter", "session": "Seventy-fourth session", "adoption_meeting": "14th plenary meeting", "adoption_date": "2019-10-10 00:00:00", "originating_document": "A/74/483", "report_paragraph": "6", "committee": "Fifth Committee", "agenda_item": "Agenda item 139", "agenda_item_name": "Scale of assessments for the apportionment of the expenses of the United Nations", "voting_type": "Without a vote", "MS_in_favour_count": "N.A.", "MS_against_count": "N.A.", "MS_abstaining_count": "N.A.", "pv": "A/74/PV.14", "MS_in_favour": [], "MS_against": [], "MS_abstaining": [], "sponsors": ["SUBMITTED BY THE CHAIR OF THE COMMITTEE"], "additional_sponsors": [], "SDGs": [], "subjects": [["Comoros", "UNBIS Thesaurus"], ["Sao Tome And Principe", "UNBIS Thesaurus"], ["Somalia", "UNBIS Thesaurus"]]}
{"symbol": "A/RES/74/2", "resolution_number": "74/2.", "title": "Political declaration of the high-level meeting on universal health coverage", "session": "Seventy-fourth session", "adoption_meeting": "14th plenary meeting", "adoption_date": "2019-10-10 00:00:00", "originating_document": "A/74/L.4", "report_paragraph": "N.A.", "committee": "Without reference to a Main Committee", "agenda_item": "Agenda item 126", "agenda_item_name": "Global health and foreign policy", "voting_type": "Without a vote", "MS_in_favour_count": "N.A.", "MS_against_count": "N.A.", "MS_abstaining_count": "N.A.", "pv": "A/74/PV.14", "MS_in_favour": [], "MS_against": [], "MS_abstaining": [], "sponsors": ["SUBMITTED BY THE PRESIDENT OF THE GENERAL ASSEMBLY"], "additional_sponsors": [], "SDGs": ["3"], "subjects": [["Health Policy", "UNBIS Thesaurus"], ["Public Health", "UNBIS Thesaurus"], ["Health Services", "UNBIS Thesaurus"], ["Health Insurance", "UNBIS Thesaurus"], ["Declarations (Text)", "UNBIS Thesaurus"]]}
{"symbol": "A/RES/74/3", "resolution_number": "74/3.", "title": "Political declaration of the high-level meeting to review progress made in addressing the priorities of small island developing States through the implementation of the SIDS Accelerated Modalities of Action (SAMOA) Pathway", "session": "Seventy-fourth session", "adoption_meeting": "14th plenary meeting", "adoption_date": "2019-10-10 00:00:00", "originating_document": "A/74/L.3", "report_paragraph": "N.A.", "committee": "Without reference to a Main Committee", "agenda_item": "Agenda item 19 (b)", "agenda_item_name": "Sustainable development: follow-up to and implementation of the SIDS Accelerated Modalities of Action (SAMOA) Pathway and the Mauritius Strategy for the Further Implementation of the Programme of Action for the Sustainable Development of Small Island Developing States of the SIDS Accelerated Modalities of Action (SAMOA) Pathway and the Mauritius Strategy for the Further Implementation of the Programme of Action for the Sustainable Development of Small Island Developing States", "voting_type": "Without a vote", "MS_in_favour_count": "N.A.", "MS_against_count": "N.A.", "MS_abstaining_count": "N.A.", "pv": "A/74/PV.14", "MS_in_favour": [], "MS_against": [], "MS_abstaining": [], "sponsors": ["SUBMITTED BY THE PRESIDENT OF THE GENERAL ASSEMBLY"], "additional_sponsors": [], "SDGs": ["16", "17", "3"], "subjects": [["Sustainable Development", "UNBIS Thesaurus"], ["Developing Island Countries", "UNBIS Thesaurus"], ["Development Assistance", "UNBIS Thesaurus"], ["Programme Implementation", "UNBIS Thesaurus"], ["Programme Evaluation", "UNBIS Thesaurus"], ["Declarations (Text)", "UNBIS Thesaurus"]]}
'''
item_dict["d"]='{"url": "http://agribank.ngan-hang.net", "final_url": "http://ww7.ngan-hang.net/?usid=18&utid=23776691570", "lang": "", "title": "", "description": "", "keywords": "", "phone_numbers": [], "links": [], "social_links": [], "emails": [], "addresses": [], "logos": [], "text": "", "last": 41, "n_items": 1}'
for key,val in item_dict.items():
zlib_compressed=zlib.compress(val.encode())
base64_compressed=base64.b64encode(zlib_compressed)
zlib_n_line_breaks=zlib_compressed.count(b'\n')
base64_line_breaks=base64_compressed.count(b'\n')
print("original size:",len(val)," | zlib:",len(zlib_compressed),"base64",len(base64_compressed),"| zlib_n_line_breaks",zlib_n_line_breaks,base64_line_breaks)
</code></pre>
<p>Result:</p>
<pre><code>original size: 56 | zlib: 84 base64 112 | zlib_n_line_breaks 0 0
original size: 407 | zlib: 254 base64 340 | zlib_n_line_breaks 0 0
original size: 3655 | zlib: 941 base64 1256 | zlib_n_line_breaks 1 0
original size: 303 | zlib: 184 base64 248 | zlib_n_line_breaks 1 0
</code></pre>
<p>As a workaround, I created a custom compression/decompression function that replaces the line break during compression with an arbitrary string (e.g. 00000) and does the opposite during decompression. This reduces the likelihood of decompression errors but does not eliminate it, because the original compressed string may happen to contain this arbitrary string.</p>
<p>I'm aware of <a href="https://stackoverflow.com/questions/62585234/can-zlib-compressed-output-avoid-using-certain-byte-value">this question</a>, not satisfactory though:</p>
<p>So, the question here is the following -
Is there any compression algorithm that can compress a string without producing a line break? Or is there a way to reliably post-process zlib compression/decompression output (or the output of any compression algorithm) to avoid line breaks?</p>
<h1>Edit</h1>
<p>Thanks to the answer by Booboo, I realized the difference between a line break character and a slash followed by "n", and I tested it and it now makes sense for the encoding part:</p>
<pre><code>import zlib
line0='{"symbol": "A/RES/74/1", "resolution_number": "74/1.", "title": "Scale of assessments for the apportionment of the expenses of the United Nations: requests under Article 19 of the Charter", "session": "Seventy-fourth session", "adoption_meeting": "14th plenary meeting", "adoption_date": "2019-10-10 00:00:00", "originating_document": "A/74/483", "report_paragraph": "6", "committee": "Fifth Committee", "agenda_item": "Agenda item 139", "agenda_item_name": "Scale of assessments for the apportionment of the expenses of the United Nations", "voting_type": "Without a vote", "MS_in_favour_count": "N.A.", "MS_against_count": "N.A.", "MS_abstaining_count": "N.A.", "pv": "A/74/PV.14", "MS_in_favour": [], "MS_against": [], "MS_abstaining": [], "sponsors": ["SUBMITTED BY THE CHAIR OF THE COMMITTEE"], "additional_sponsors": [], "SDGs": [], "subjects": [["Comoros", "UNBIS Thesaurus"], ["Sao Tome And Principe", "UNBIS Thesaurus"], ["Somalia", "UNBIS Thesaurus"]]} {"url": "http://agroreal911.sk", "final_url": "http://agroreal911.sk/", "lang": "sk-SK", "title": "Agroreal 911 s.r.o.", "description": "", "keywords": "", "phone_numbers": [], "links": [["http://agroreal911.sk/pozemky", "K\u00fapa p\u00f4dy"], ["http://agroreal911.sk/kontakty", "Kontakty"], ["http://www.advertplus.sk", "Advertplus.sk"], ["http://agroreal911.sk/predaj-pody", "Predaj p\u00f4dy"], ["http://agroreal911.sk/o-nas", "O n\u00e1s"], ["http://agroreal911.sk/?lang=en", ""], ["http://transposh.org/sk", ""]], "social_links": [], "emails": ["mgr.michal.hrabovsky@gmail.com"], "addresses": [], "logos": ["http://agroreal911.sk/wp-content/plugins/transposh-translation-filter-for-wordpress/img/tplogo.png"], "text": "Agroreal 911 s.r.o. \nAGRO REAL 911, S.R.O. 
\nMenu \nO n\u00e1s \nPOZEMKY \nK\u00fapa p\u00f4dy \nPredaj p\u00f4dy \nKontakty \nby \nWebstr\u00e1nku vytvoril Advertplus.sk Kontakt: 0908 692 782 \u00a0\u00a0\u00a0\u00a0\n\n\n \n \n\n\n\n\n\n\n\n mgr.michal.hrabovsky@gmail.com\n\n\n ", "last": 74, "n_items": 2}'
compressed=zlib.compress(line0.encode())
compressed0=compressed.replace(b"\n",b"\\n")
print("number of line breaks in zlib output:", compressed.count(b"\n"))
test_out_fpath="test_compress.txt"
fopen0=open(test_out_fpath,"wb")
fopen0.write(compressed0)
fopen0.close()
fopen0=open(test_out_fpath,"rb")
lines=fopen0.readlines()
print("number of lines after replacing line breaks", len(lines))
fopen0.close()
</code></pre>
<p><strong>Output</strong></p>
<pre><code>number of line breaks in zlib output: 7
number of lines after replacing line breaks 1
</code></pre>
<p>I'd still need help with the decompression though, if possible</p>
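<p>For the decompression side, the replacement must be made reversible, which means escaping the escape character itself first; otherwise a compressed payload that already contains the two bytes <code>\\n</code> is ambiguous. A sketch (the choice of backslash as the escape byte is arbitrary):</p>

```python
import zlib

def encode_line(data: bytes) -> bytes:
    # Escape backslashes first so the mapping is unambiguous, then
    # replace real newlines with the two-byte sequence b"\\n".
    return data.replace(b"\\", b"\\\\").replace(b"\n", b"\\n")

def decode_line(line: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(line):
        if line[i:i+1] == b"\\":
            nxt = line[i+1:i+2]
            out += b"\n" if nxt == b"n" else nxt
            i += 2
        else:
            out += line[i:i+1]
            i += 1
    return bytes(out)

payload = zlib.compress("any string, any bytes\n".encode())
line = encode_line(payload)
assert b"\n" not in line             # safe to write as one line
assert decode_line(line) == payload  # round-trips exactly
print(zlib.decompress(decode_line(line)).decode())
```

<p>An alternative that avoids escaping entirely is to length-prefix each compressed record and read fixed byte counts instead of lines.</p>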
|
<python><compression><zlib>
|
2024-01-29 20:49:31
| 3
| 1,512
|
hmghaly
|
77,902,475
| 13,142,245
|
Pydantic strict mode not enforceable
|
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, ValidationError, Field
class Student(BaseModel):
id: int = Field(strict=True)
s = Student(id="1")
</code></pre>
<p>Or alternatively</p>
<pre><code>from pydantic import BaseModel, ValidationError, ConfigDict, Field
class Student(BaseModel):
model_config = ConfigDict(strict=True)
id: int
s = Student(id="1")
</code></pre>
<p>This should return a validation error <a href="https://docs.pydantic.dev/latest/concepts/strict_mode/#basemodel" rel="nofollow noreferrer">per documentation</a>, but it does not... It simply coerces the string type to int. How can I force this?</p>
<p>Note: I'm using Jupyter notebook while I test this out. Slight possibility that this functionality is different on ipynb files.</p>
|
<python><pydantic>
|
2024-01-29 20:34:00
| 0
| 1,238
|
jbuddy_13
|
77,902,407
| 12,348,406
|
How to alias an annotated type?
|
<p>I want to create a type that enables me to associate a type with a positive number.</p>
<pre class="lang-py prettyprint-override"><code>type Array = # what I want to create
class MyIp:
ip: Array[float, 4] # Define a array of 4 elements
</code></pre>
<p>Under the hood, I was planning to use <code>Annotated</code> to make this possible</p>
<pre class="lang-py prettyprint-override"><code>class MyIp:
ip: Annotated[tuple[u8, ...], 4]
</code></pre>
<p>But I can't seem to create an <code>Array</code> type that would work.</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
type Array = Annotated[tuple[T, ...], int] # Does not work
</code></pre>
<p>The error message is :</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\gabriel\Desktop\test.py", line 7, in <module>
class MyIp:
File "C:\Users\gabriel\Desktop\test.py", line 8, in MyIp
ip: Array[int, 4]
~~~~~^^^^^^^^
TypeError: Only generic type aliases are subscriptable
</code></pre>
<p>Do you have any idea how to create <code>Array</code> ?</p>
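<p>One runtime-workable approach (a sketch, and note that static type checkers will not understand the value parameter) is to build the <code>Annotated</code> form inside a <code>__class_getitem__</code>, since a plain type alias cannot be parameterized by a value like <code>4</code>:</p>

```python
from typing import Annotated, get_args

class Array:
    # Array[T, N] expands to Annotated[tuple[T, ...], N]. The length N is
    # only metadata; enforcing it is up to whatever inspects Annotated.
    def __class_getitem__(cls, item):
        t, n = item
        return Annotated[tuple[t, ...], n]

IpType = Array[float, 4]
print(get_args(IpType))  # (tuple[float, ...], 4)
```

<p>For a checker-visible fixed length you would instead need to spell out the tuple, e.g. <code>tuple[float, float, float, float]</code>.</p>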
|
<python><python-typing>
|
2024-01-29 20:19:14
| 1
| 814
|
gberth
|
77,902,376
| 5,842,705
|
How do I access array elements using int value going from IDL to python?
|
<p>I am working on code conversion from IDL to Python and came across a hurdle. Some code I was given needs to be converted to Python. In the IDL version, an array element is accessed by a single int value rather than a 3D index. Using the same technique, Python gives an error. Any ideas how to resolve this?</p>
<p>Here is a snippet of IDL code for illustration purposes:</p>
<pre><code>x = reform(indgen(100), 2, 5, 10)
help, x ;this results in Array[2,5,10]
x[-76] ; results in value 24
</code></pre>
<p>Here is a snippet of python code for illustration purposes</p>
<pre><code>import numpy as np
x=np.arange(100).reshape(2,5,10)
x.shape #this results in (2,5,10)
x[-76]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: index -76 is out of bounds for axis 0 with size 2
</code></pre>
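<p>A single IDL subscript indexes the flattened array, while <code>x[-76]</code> in numpy indexes along axis 0. The numpy equivalent is flat indexing (a sketch; IDL is column-major and numpy defaults to row-major, but for a sequentially filled array like this the flat value is the same):</p>

```python
import numpy as np

x = np.arange(100).reshape(2, 5, 10)

# Index into the flattened view, as IDL does with a single subscript:
print(x.flat[-76])     # 24
print(x.ravel()[-76])  # 24, equivalent
```

<p>For arrays with arbitrary contents, <code>x.ravel(order='F')</code> would match IDL's column-major flattening.</p>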
|
<python><multidimensional-array><indexing><idl>
|
2024-01-29 20:12:21
| 1
| 407
|
Charanjit Pabla
|
77,902,366
| 967,621
|
Plotting weighted histograms with weighted KDE (kernel density estimate)
|
<p>I want to plot two distributions of data as weighted histograms with weighted kernel density estimate (KDE) plots, side by side.</p>
<p>The data (<code>length</code> of DNA fragments, split by categorical variable <code>regions</code>) are integers in <code>(0, 1e8)</code> interval. I can plot the default, unweighted, histograms and KDE without a problem, using the python code below. The code plots histograms for the tiny example of the input data in <code>testdata</code> variable. See the unweighted (default) histograms below.</p>
<p>I want to produce a different plot, where the data in the histograms are <strong>weighted</strong> by <code>length</code> (= the X axis numeric variable). I used <code>weights</code> option (<a href="https://seaborn.pydata.org/generated/seaborn.histplot.html" rel="nofollow noreferrer">seaborn.histplot — seaborn documentation</a>):</p>
<blockquote>
<p><code>weights</code> : vector or key in <code>data</code><br />
If provided, weight the contribution of the corresponding data points towards the count in each bin by these factors.</p>
</blockquote>
<p>The histograms changed as expected (see weighted histograms plots below). <em>But the KDE (kernel density estimate) lines did not change.</em></p>
<p><strong>Question: How can I change the kernel density estimate (KDE) to reflect the fact that I am using weighted histograms?</strong></p>
<hr />
<p>Unweighted (default) histograms:</p>
<p><a href="https://i.sstatic.net/H2R0y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H2R0y.png" alt="Unweighted (default) histograms" /></a></p>
<hr />
<p>Weighted histograms:</p>
<p><a href="https://i.sstatic.net/LaGZX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LaGZX.png" alt="Weighted histograms" /></a></p>
<hr />
<p>Code with the minimal reproducible example:</p>
<pre><code>import io
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
def plot_restriction_digest(df, out_file_base, weights):
# Prevent the python icon from showing in the dock when the script is
# running:
matplotlib.use('Agg')
sns.set_theme(style='ticks')
f, ax = plt.subplots(figsize=(7, 5))
sns.despine(f)
hist = sns.histplot(data=df,
x='length',
hue='regions',
weights=weights,
# Normalize such that the total area of the histogram
# equals 1:
stat='density',
# stat='count',
# Make all histograms visible, otherwise 'captured'
# regions histogram is much smaller than 'all' regions
# one:
common_norm=False,
# Default plots too many very thin bins, which are poorly
# visible in pdf format (OK in png). Note that 10 bins is
# too crude, and 1000 bins makes too many thin bins:
bins=100,
# X axis log scale:
log_scale=True,
# Compute a kernel density estimate to smooth the
# distribution and show on the plot as lines:
kde=True,
)
sns.move_legend(hist, 'upper left')
plt.savefig(f'{out_file_base}.pdf')
return
testdata="""
1 all
1 all
2 all
2 all
2 all
3 all
4 captured
4 captured
5 captured
5 captured
5 captured
8 captured
"""
# Default histograms:
df = pd.read_csv(io.StringIO(testdata), sep='\s+', header=None, names='length regions'.split())
plot_restriction_digest(df, 'test_tiny', None)
# Weighted histograms:
df = pd.read_csv(io.StringIO(testdata), sep='\s+', header=None, names='length regions'.split())
plot_restriction_digest(df, 'test_tiny_weighted', 'length')
print('Done.')
</code></pre>
<p><strong>Notes:</strong></p>
<ol>
<li>The two distributions of data are DNA fragment lengths for two types of genomic regions: "all" and "captured", but this is irrelevant to this specific question.</li>
<li>The minimal reproducible example illustrates the question. The real data frame has tens of millions of rows, so the histograms and KDE plots are much smoother and more meaningful. The actual data need the X axis to be log-transformed to better tell the two broad distributions apart.</li>
<li>I am using these packages and versions:</li>
</ol>
<pre><code>Python 3.11.6
matplotlib-base 3.8.2 py311hfdba5f6_0 conda-forge
numpy 1.26.3 py311h7125741_0 conda-forge
pandas 2.2.0 py311hfbe21a1_0 conda-forge
seaborn 0.13.1 hd8ed1ab_0 conda-forge
seaborn-base 0.13.1 pyhd8ed1ab_0 conda-forge
</code></pre>
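<p>The KDE drawn by <code>histplot(kde=True)</code> does not pick up <code>weights</code> in all seaborn versions; <code>seaborn.kdeplot</code> does accept a <code>weights=</code> argument and can be layered on the same axes as a workaround. The underlying math is just a weighted sum of kernels; a pure-Python sketch (the function name is mine, not seaborn's):</p>

```python
import math

def weighted_kde(points, weights, grid, bandwidth):
    """Gaussian KDE where each point contributes in proportion to its weight."""
    total = sum(weights)
    norm = bandwidth * math.sqrt(2 * math.pi)
    density = []
    for g in grid:
        s = sum(w * math.exp(-0.5 * ((g - x) / bandwidth) ** 2)
                for x, w in zip(points, weights))
        density.append(s / (total * norm))
    return density

# Each point weighted by its own value, analogous to weights='length':
pts = [1, 1, 2, 2, 2, 3]
dens = weighted_kde(pts, pts, [i * 0.1 for i in range(-20, 61)], 0.5)
```

<p>In the plotting function above, the seaborn-level equivalent would be something like <code>sns.kdeplot(data=df, x='length', hue='regions', weights=weights, common_norm=False, log_scale=True, ax=ax)</code> drawn over a <code>histplot</code> call with <code>kde=False</code>.</p>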
|
<python><matplotlib><seaborn><histogram><kernel-density>
|
2024-01-29 20:11:01
| 1
| 12,712
|
Timur Shtatland
|
77,902,326
| 531,358
|
AutoGen 2.0 Where are my workflows saved?
|
<p>I'm working in windows, with Microconda and Microsoft AutoGen 2.0. I got AutoGen up and running, and have started creating skills.</p>
<p>I want to understand where my work (skills, agents, workflows) is stored, so that I can make sure it's backed up and I don't lose it.</p>
<p>And along the same lines, can I control where I want my work stored, so that I can have it together with all my other various dev projects. I'm just not seeing any options in the tool or the documentation.</p>
|
<python><artificial-intelligence><ms-autogen>
|
2024-01-29 20:02:56
| 1
| 708
|
Sandra
|
77,902,155
| 16,459,035
|
df.at updating multiple indexes on DataFrame pandas
|
<p>Consider the following code:</p>
<pre><code>import pandas as pd
import random
data = {'col1': [random.randint(0, 100) for _ in range(5)],
'col2': [random.randint(0, 100) for _ in range(5)]}
df = pd.DataFrame(data)
lista = []
df['test'] = None
for index, row in df.iterrows():
lista.append([random.randint(0, 100) for _ in range(5)])
df.at[index, 'test'] = lista
print(index,lista)
display(df)
</code></pre>
<p>Why does the final output always show the full accumulated list? I mean, since <code>df.at</code> updates a value at an index and my index is serial (using iterrows), why is the output <code>[[first list], [second list], [third list], [fourth list], [fifth list]]</code> in all rows?</p>
<p>My desired output is:</p>
<pre><code>test
[[first list]]
[[first list], [second list]]
[[first list], [second list], [third list]]
[[first list], [second list], [third list], [fourth list]]
[[first list], [second list], [third list], [fourth list], [fifth list]]
</code></pre>
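<p>The behavior in question is plain Python list aliasing rather than a pandas quirk: every row is assigned a reference to the same <code>lista</code> object, which keeps growing, so all rows display its final contents. Storing a copy at each step produces the incremental snapshots; a pandas-free sketch of the difference:</p>

```python
rows = {}
lista = []
for index in range(3):
    lista.append(index)
    # list(lista) stores a snapshot; storing lista itself would make
    # every row point at the same (final) object.
    rows[index] = list(lista)

print(rows)  # {0: [0], 1: [0, 1], 2: [0, 1, 2]}
```

<p>In the loop above, <code>df.at[index, 'test'] = list(lista)</code> applies the same fix.</p>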
|
<python><pandas>
|
2024-01-29 19:29:45
| 1
| 671
|
OdiumPura
|
77,902,118
| 1,428,653
|
How do you get cattrs to unstructure nested structures?
|
<p>In Python, with the libraries <code>attrs</code> and <code>cattrs</code>, I have a nested structure defined as follows:</p>
<pre class="lang-py prettyprint-override"><code>@attrs.define
class Score:
added: datetime
value: float
@attrs.define
class Entry:
score: Score
tags: List[str]
added: Optional[datetime] = None
@attrs.define
class Entries:
entries: Dict[date, Entry] = attrs.Factory(dict)
</code></pre>
<p>With this data:</p>
<pre class="lang-py prettyprint-override"><code>data = {
'entries': {
'2024-01-01': {
'score': {'added': '2023-02-03 04:03:00', 'value': 80.3},
'tags': ['meatball', 'salami', 'jerky'],
},
# … more days
}
}
</code></pre>
<p>And some <code>un/structure</code> hooks to handle the <code>date</code> and <code>datetime</code> types:</p>
<pre class="lang-py prettyprint-override"><code>cattrs.register_unstructure_hook(datetime, lambda dt: datetime.strptime(dt, '%Y-%m-%d %H:%M:%S'))
cattrs.register_structure_hook(datetime, lambda dt, _: str(dt))
cattrs.register_unstructure_hook(date, lambda dt: datetime.strptime(dt, '%Y-%m-%d').date())
cattrs.register_structure_hook(date, lambda dt, _: str(dt))
</code></pre>
<p>Now when I unstructure an instance of <code>Entries</code>, the resulting <code>dict</code> does not have the <code>date</code> and <code>datetime</code> objects in string form:</p>
<pre class="lang-py prettyprint-override"><code>structured = cattrs.structure(data, Entries)
cattrs.unstructure(structured) == {
'entries': {
datetime.date(2024, 1, 1): {
'score': {
'added': datetime.datetime(2023, 2, 3, 4, 3),
'value': 80.3
},
'tags': ['meatball', 'salami', 'jerky'],
'added': None
}
}
}
</code></pre>
<p>How can I get cattrs to stringify <code>date</code> and <code>datetime</code> objects recursively?</p>
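<p>Note the direction of the hooks: an unstructure hook should turn the object into a primitive (e.g. via <code>strftime</code>), and a structure hook should parse the primitive back into the object (e.g. via <code>strptime</code>); in the registration snippet above, the two appear swapped. A stdlib-only sketch of the two directions as a cattrs hook would need them:</p>

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def unstructure_dt(dt: datetime) -> str:
    # object -> primitive: what an unstructure hook should do
    return dt.strftime(FMT)

def structure_dt(raw: str) -> datetime:
    # primitive -> object: what a structure hook should do
    return datetime.strptime(raw, FMT)

dt = structure_dt("2023-02-03 04:03:00")
assert unstructure_dt(dt) == "2023-02-03 04:03:00"
```

<p>With the lambdas swapped accordingly (and analogous <code>%Y-%m-%d</code> hooks for <code>date</code>), cattrs applies them recursively through nested classes and dict keys.</p>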
|
<python><python-attrs>
|
2024-01-29 19:21:21
| 1
| 10,652
|
Matt
|
77,902,067
| 15,781,591
|
Jupyter Notebook Unreadable because of error: Notebook does not appear to be JSON [Python]
|
<p>I was working in a Jupyter Notebook, I saved my work, rechecked it and everything looked fine. Three days later I try to open the same notebook and all I see is this error:</p>
<pre><code>Unreadable Notebook: C:\Users\my_name\my_repos\my_folder\project_notebook.ipynb NotJSONError('Notebook does not appear to be JSON: \'{\\n "cells": [\\n {\\n "cell_type": "m...')
</code></pre>
<p>I do not know what this means or how to address this. I have never used JSON before ever. I don't know if this is an error with my python code or something else, because my code ran fine just a few days ago. How can I restore my notebook?</p>
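<p>A notebook file is just JSON text, and the error means Jupyter could not parse it. A first diagnostic step (a sketch; with the real file you would read the path from the error message) is to load the raw text and let <code>json.loads</code> report where parsing fails:</p>

```python
import json

def check_notebook(text: str) -> str:
    # Report whether the text is valid JSON and, if not, where it breaks.
    try:
        json.loads(text)
        return "valid JSON"
    except json.JSONDecodeError as e:
        return f"invalid at line {e.lineno}, column {e.colno}: {e.msg}"

# With the real file:
#   text = open(notebook_path, encoding="utf-8").read()
print(check_notebook('{"cells": [}'))
```

<p>The reported line and column point at the corrupted spot, which can often be repaired by hand in a text editor to recover the notebook.</p>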
|
<python><jupyter-notebook>
|
2024-01-29 19:10:45
| 0
| 641
|
LostinSpatialAnalysis
|
77,901,884
| 11,357,695
|
os getcwd on command line vs editor
|
<p>I am calling a Python 3.10 file from the Anaconda command line (Windows, in a conda environment) and noticed that when <code>os.getcwd()</code> is called within a script run from the command line, it outputs the directory the command was launched from rather than the directory of the file being run.</p>
<p>So if I run <code>python C:/Users/u03132tk/Python/MyTool/Script.py</code> on command line, I print <code>C:/Users/u03132tk</code> rather than <code>C:/Users/u03132tk/Python/MyTool</code>.</p>
<p>Is there any way to force the path to derive from the script?</p>
<p>Cheers,
Tim</p>
<p><strong>C:/Users/u03132tk/Python/MyTool/Script.py</strong></p>
<pre><code>import os
print (os.getcwd())
</code></pre>
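<p>The working directory is inherited from wherever the command was launched, so it is unrelated to the script's location. To derive paths from the script itself, use <code>__file__</code>; a sketch:</p>

```python
from pathlib import Path

def script_dir(file_path: str) -> Path:
    # In a real script, pass __file__; resolve() makes the path absolute
    # independent of the current working directory.
    return Path(file_path).resolve().parent

# Inside Script.py this prints C:/Users/u03132tk/Python/MyTool:
#   print(script_dir(__file__))
print(script_dir("/home/user/Python/MyTool/Script.py"))
```

<p>The older spelling <code>os.path.dirname(os.path.abspath(__file__))</code> is equivalent.</p>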
|
<python><command-line><operating-system><getcwd>
|
2024-01-29 18:30:51
| 1
| 756
|
Tim Kirkwood
|
77,901,811
| 4,200,859
|
Is there no way to adjust billing_cycle_anchor on Stripe's subscription_update_confirm deep link flow?
|
<p>Our service awards monthly credits, and therefore handles upgrades and downgrades differently - upgrades get processed straight away so users have access to their new credits. Billing cycle gets changed to 'now' and proration is 'none'. Downgrades do not prorate either, but the billing cycle stays the same as it was and the changes don't apply until the next cycle.</p>
<p>I realize this is not normal behavior and so the normal billing portal won't work for upgrade/downgrade. However, developers told me to check out the subscription_update_confirm flow for portal sessions (<a href="https://stripe.com/docs/customer-management/portal-deep-links" rel="nofollow noreferrer">https://stripe.com/docs/customer-management/portal-deep-links</a>).</p>
<p>I dove into this and had to setup a portal configuration for each type of upgrade, but then found out you can't seem to adjust billing_cycle_anchor with a portal configuration either.</p>
<p>I know I can use webhook to change a subscription before it goes into effect, but the whole point is that I want a consistent Stripe-hosted confirmation page when a user does their upgrading or downgrading. Is there any other way I'm not considering?</p>
|
<python><stripe-payments>
|
2024-01-29 18:16:34
| 1
| 639
|
Max
|
77,901,692
| 9,855,588
|
python package versioning, using a higher version of a package than another dependency with a lower package version
|
<p>Consider Project A requirements.txt (project name being foo-bar-a)</p>
<pre><code>pyspark==3.1.2
</code></pre>
<p>Consider Project B requirements.txt</p>
<pre><code>foo-bar-a
pyspark==3.3.4
</code></pre>
<p>There are a few projects that are using Project A's Python package, so we can't directly update the pyspark package version there. My understanding is that since pyspark 3.1.2 is a dependency of Project A, Project B could override this value using a higher version of pyspark. Is this true?</p>
<p>Currently, what actually exists in repo a is <code>pyspark~=3.1.2</code>, so attempting the above in project b, yielded a dependency resolution error because the <code>~=</code> prevents going higher than <code>3.1.x</code>. But if that gets swapped for <code>==</code>, could we then override in project b?</p>
<pre><code> The conflict is caused by:
The user requested pyspark==3.3.4
project a 0.0.1 depends on pyspark~=3.1.2
</code></pre>
|
<python><python-3.x>
|
2024-01-29 17:53:06
| 1
| 3,221
|
dataviews
|
77,901,682
| 3,780,372
|
pytest-xdist fails to collect tests after identifying workers
|
<p>I am unable to figure out why pytest-xdist is failing to collect tests from the <code>tests/sqlalchemy-dialect-compliance</code> folder after it finishes identifying workers. We have used the tests in that folder for years with no problem if we simply use <code>pytest</code>, but as soon as we add <code>pytest-xdist</code>, all collection fails.</p>
<p>I am attempting to run tests on a Linux box (48 cores) in a docker container using pytest and <code>pytest-xdist</code> to get parallelization across the CPUs.</p>
<p>I can get <code>pytest/pytest-xdist</code> to recognize the right number of workers. But it never collects any tests no matter what settings I have and what I do.</p>
<p>I have tried many different settings in various combinations (using settings for <code>dist</code>, <code>tx</code>, <code>numprocesses</code>, etc). The most recent is as follows:</p>
<pre><code>nox > py.test -s -vvv --dist=each '--tx 4*popen//python=python3.11' --numprocesses=4 tests/sqlalchemy_dialect_compliance
=== test session starts =============================================================
platform linux -- Python 3.11.6, pytest-8.0.0, pluggy-1.4.0 -- /repo/github.com/googleapis/python-bigquery-sqlalchemy/.nox/compliance/bin/python
cachedir: .pytest_cache
rootdir: /repo/github.com/googleapis/python-bigquery-sqlalchemy
configfile: setup.cfg
plugins: xdist-3.5.0, rerunfailures-13.0
4 workers [0 items]
scheduling tests via EachScheduling
=== no tests ran in 0.51s ============================================================
nox > Command py.test -s -vvv --dist=each '--tx 4*popen//python=python3.11' --numprocesses=4 tests/sqlalchemy_dialect_compliance failed with exit code 5 # No tests collected.
nox > Session compliance failed.
</code></pre>
<p>with <code>numprocesses=auto</code>, it recognizes all 48 cores:</p>
<pre><code>48 workers [0 items]
</code></pre>
<p>I have tried this with Python 3.11 and 3.12.</p>
|
<python><python-3.x><sqlalchemy><pytest><pytest-xdist>
|
2024-01-29 17:51:27
| 0
| 4,296
|
E. Ducateme
|
77,901,605
| 12,064,319
|
Not able to load and predict on images using YOLO inside celery worker
|
<p>Here is my code</p>
<pre><code>import sys
from question_detection.inference_object_detection.check_model import verify_model
from logger import logger
from ultralytics import YOLO
class ModelNotFoundError(Exception):
pass
def load_model():
try:
success, model_path = verify_model()
logger.info("model loading started")
if not success:
raise ModelNotFoundError("Model not found!")
logger.info("123")
logger.info(model_path)
_model = YOLO(model_path)
logger.info("456")
if _model is None:
raise ModelNotFoundError("Model not loaded!")
logger.info("789")
return _model, _model.names
except ModelNotFoundError as e:
logger.error(e)
sys.exit(1) # Exit the system with a non-zero exit code to indicate an error
except Exception as e:
logger.error(e)
return None, None
def inference(images):
"""
Inference on image
:param images: It can be a single image, list of images or a directory
:return: List of results
"""
logger.info("started inference of images!")
model, categories = load_model()
if model is None:
raise ModelNotFoundError("Model is not loaded!")
res = model(images), categories
logger.info("Completed inference of images!")
return res
</code></pre>
<p>If I call <code>inference</code> normally it works like butter.
But when the same function is called by another function running as a Celery worker, it does not work.
It does not even throw an error.</p>
<p>When debugging, nothing is logged after <code>model_path</code>; execution never reaches the <code>456</code> line.</p>
<p>Why am I not able to get the model loaded?</p>
|
<python><machine-learning><celery><yolov8>
|
2024-01-29 17:35:56
| 0
| 720
|
Danish Bansal
|
77,901,487
| 10,889,650
|
Getting the Python exception object in a call
|
<p>Is it possible to get the exception object without relying on the namespace of the except block?</p>
<pre><code>try:
destined_to_fail()
except Exception as e:
other_function()
def other_function():
# can I get e here, without passing it as an argument?
</code></pre>
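<p>Yes: while an except block is active, the interpreter tracks the exception being handled globally, so <code>sys.exc_info()</code> (all versions) or <code>sys.exception()</code> (3.11+) can retrieve it without any argument passing. A sketch using the question's names:</p>

```python
import sys

def other_function():
    # sys.exc_info() returns (type, value, traceback) for the exception
    # currently being handled, or (None, None, None) outside a handler.
    exc = sys.exc_info()[1]
    return f"caught: {exc}"

def destined_to_fail():
    raise ValueError("boom")

try:
    destined_to_fail()
except Exception:
    print(other_function())  # caught: boom
```

<p>Passing the exception explicitly is still the clearer design; the global lookup is best kept for logging helpers and the like.</p>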
|
<python>
|
2024-01-29 17:14:28
| 1
| 1,176
|
Omroth
|
77,901,478
| 8,703,313
|
update vector field of pandas df
|
<p>I have a DataFrame with one column holding vector values:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({"a": [1,2,3], "b": [4,5,6]}, index=["one", "two", "three"])
s = pd.Series([(i*10, i*11, i*12) for i in df["a"]], index=df.index)
df["vec"] = s
#df
</code></pre>
<p>but I cannot figure out, how to update these vector values. E.g:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[df["a"]>1, "vec"] = np.array((1,2,3)) # doesn't work...
</code></pre>
<p>always getting something like</p>
<pre><code>ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
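<p>One workaround I am experimenting with (not sure it is idiomatic): wrap the new value in a <code>Series</code> with one tuple per selected row, aligned on the same index labels, so pandas treats it as one scalar per cell instead of an iterable to unpack:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, index=["one", "two", "three"])
df["vec"] = pd.Series([(i * 10, i * 11, i * 12) for i in df["a"]], index=df.index)

# one tuple per selected row, aligned on the selected index labels
mask = df["a"] > 1
df.loc[mask, "vec"] = pd.Series([(1, 2, 3)] * int(mask.sum()), index=df.index[mask])
</code></pre>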
|
<python><pandas><vector>
|
2024-01-29 17:12:18
| 2
| 310
|
Honza S.
|
77,901,392
| 3,197,792
|
Canonical way to use Namespaces / Bunches in Python
|
<p>This is maybe rather a style-question. Often, I use <code>dict</code>s in Python for passing around bundles of variables (like keyword parameters in any function call).</p>
<p>However, my interactive environment (<em>ipython</em> or the debugging console in my IDE) does not support <code>dict</code>-key autocomplete. This is why I often end up printing out the dict-keys to see what's there.</p>
<p>For me, it seems much more convenient, therefore, to have a simple class whose instances carry the parameters as attributes (allowing autocomplete in <em>ipython</em>). I know this is also called a "Bunch" sometimes, or perhaps a "Namespace" (<code>argparse.Namespace</code> seems to be just that, for instance).</p>
<p>I'd dislike writing such a generic thing for myself, if it already exists (grabbing it from <code>argparse</code> doesn't seem right, either), so here's the question:</p>
<p><em>Is there a supposed way to implement that? Perhaps the standard lib even has something like that?</em></p>
<p>Thanks everybody, already!</p>
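<p>For what it's worth, the closest thing I have found in the standard library so far is <code>types.SimpleNamespace</code>:</p>
<pre><code>from types import SimpleNamespace

params = SimpleNamespace(rate=0.1, epochs=10, name="run1")
params.epochs = 20  # plain attribute access, so autocomplete works
</code></pre>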
|
<python>
|
2024-01-29 16:57:45
| 3
| 727
|
Leolo
|
77,901,382
| 7,347,925
|
How to crop matplotlib image by circle?
|
<p>I have plotted some data using matplotlib and want to crop it by a circle from the center.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
np.random.seed(100)
Z = np.random.rand(10, 10)
fig, ax = plt.subplots()
ax.imshow(Z)
</code></pre>
<p>I have tried the idea of masking image:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from PIL import Image, ImageDraw
np.random.seed(100)
Z = np.random.rand(10, 10)
fig, ax = plt.subplots()
ax.imshow(Z)
# Create same size alpha layer with circle
h, w = Z.shape
alpha = Image.new('L', Z.shape, 0)
draw = ImageDraw.Draw(alpha)
draw.pieslice([0,0,h,w],0,360,fill=255)
# Convert alpha Image to numpy array
npAlpha=np.array(alpha)
# Add alpha layer to RGB
Z = np.dstack((Z, npAlpha))
Image.fromarray(Z)
</code></pre>
<p>But, I got this error:</p>
<pre><code>TypeError: Cannot handle this data type: (1, 1, 2), <f8
</code></pre>
<p>Is it possible to do it using matplotlib only?</p>
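<p>One matplotlib-only idea I am looking at (untested beyond this sketch): clip the image artist itself with a <code>Circle</code> patch via <code>set_clip_path</code>. Note the centre <code>(4.5, 4.5)</code> assumes the default <code>imshow</code> extent for a 10×10 array:</p>
<pre><code>import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import Circle

np.random.seed(100)
Z = np.random.rand(10, 10)

fig, ax = plt.subplots()
im = ax.imshow(Z)
# clip the image to a circle given in data coordinates
im.set_clip_path(Circle((4.5, 4.5), radius=5, transform=ax.transData))
</code></pre>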
|
<python><numpy><matplotlib><python-imaging-library>
|
2024-01-29 16:56:11
| 3
| 1,039
|
zxdawn
|
77,901,290
| 55,934
|
pip install "scipy==1.10.1" - What does this error message even mean?
|
<p>I am trying to install a specific version of scipy using:</p>
<pre><code>pip install "scipy==1.10.1"
</code></pre>
<p>But I'm getting an error I can't interpret:</p>
<pre><code> Using cached scipy-1.10.1.tar.gz (42.4 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [21 lines of output]
+ meson setup --prefix=c:\users\hugo\appdata\local\programs\python\python38-32 C:\Users\hugo\AppData\Local\Temp\pip-install-e_otiu7s\scipy_a4dec49d0df54c3884f33ca8aa92cddf C:\Users\hugo\AppData\Local\Temp\pip-install-e_otiu7s\scipy_a4dec49d0df54c3884f33ca8aa92cddf\.mesonpy-hi22c8nm\build --native-file=C:\Users\hugo\AppData\Local\Temp\pip-install-e_otiu7s\scipy_a4dec49d0df54c3884f33ca8aa92cddf\.mesonpy-native-file.ini -Ddebug=false -Doptimization=2
The Meson build system
Version: 1.3.1
Source dir: C:\Users\hugo\AppData\Local\Temp\pip-install-e_otiu7s\scipy_a4dec49d0df54c3884f33ca8aa92cddf
Build dir: C:\Users\hugo\AppData\Local\Temp\pip-install-e_otiu7s\scipy_a4dec49d0df54c3884f33ca8aa92cddf\.mesonpy-hi22c8nm\build
Build type: native build
Project name: SciPy
Project version: 1.10.1
WARNING: Failed to activate VS environment: Could not find C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe
..\..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']]
The following exception(s) were encountered:
Running `icl ""` gave "[WinError 2] The system cannot find the file specified"
Running `cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `cc --version` gave "[WinError 2] The system cannot find the file specified"
Running `gcc --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang-cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `pgcc --version` gave "[WinError 2] The system cannot find the file specified"
A full log can be found at C:\Users\hugo\AppData\Local\Temp\pip-install-e_otiu7s\scipy_a4dec49d0df54c3884f33ca8aa92cddf\.mesonpy-hi22c8nm\build\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p><strong>Background:</strong></p>
<p>I have one machine with Python 3.8.3, numpy 1.24.4, scipy 1.10.1 and OpenCV 4.2.0. I would like to make my other machine match it exactly.</p>
<p>These are both Windows 10 machines.</p>
<p>So far, Python 3.8.3, numpy 1.24.4 and OpenCV 4.2.0 are installed fine. But when I try to install scipy (of any version) I get the same error messages.</p>
<p>I can't seem to find a wheel with the right version of scipy and a matching python version.</p>
<p><strong>Questions:</strong></p>
<ul>
<li>What is the meaning of these errors?</li>
<li>Do I need to install gcc, clang, vswhere.exe, etc.?</li>
<li>How can I install scipy?</li>
</ul>
|
<python><pip><scipy><windows-10>
|
2024-01-29 16:43:32
| 3
| 5,950
|
Rocketmagnet
|
77,901,003
| 7,920,004
|
Assert one of multiple entries in one CDK property
|
<p>I need to assert, as part of my TDD, that a specific <code>Stage</code> in CodePipeline was created.
I tried CDK's assertion methods but I can't get this code working to retrieve only a specific <code>Name</code> from the whole <code>Stages</code> array.</p>
<p>Sample data:</p>
<pre><code>"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"RoleArn": {
"Fn::GetAtt": [
"PipelineRoleABC",
"Arn"
]
},
"Stages": [
{
"Actions": [
{...}
],
"Name": "Source"
},
{
"Actions": [
{...}
],
"Name": "Build"
}
]
}
</code></pre>
<p>I want to assert if <code>Name: Build</code> exists in the code.</p>
<pre><code>template.has_resource_properties("AWS::CodePipeline::Pipeline", {"Stages": [assertions.Match.object_like({"Name": "Build"})]})
</code></pre>
<p>Currently getting an error:</p>
<pre><code>Expected array of length 1 but received 3 at /Properties/Stages (using objectLike matcher)
</code></pre>
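<p>I suspect something like <code>Match.array_with</code>, which as far as I understand matches arrays that <em>contain</em> the given elements rather than requiring an exact-length match, may be what I need, but I have not verified it:</p>
<pre><code>template.has_resource_properties(
    "AWS::CodePipeline::Pipeline",
    {"Stages": assertions.Match.array_with([assertions.Match.object_like({"Name": "Build"})])},
)
</code></pre>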
|
<python><amazon-web-services><aws-cloudformation><aws-cdk>
|
2024-01-29 15:59:29
| 1
| 1,509
|
marcin2x4
|
77,900,971
| 9,640,238
|
pandas FutureWarning: Downcasting object dtype arrays on .fillna, .ffill, .bfill is deprecated and will change in a future version
|
<p>In order to print dataframes nicely using <a href="https://github.com/astanin/python-tabulate" rel="noreferrer">tabulate</a>, so that <code>NaN</code> and <code>NaT</code> are printed as empty cells, I've been using this successfully:</p>
<pre class="lang-py prettyprint-override"><code>print(tabulate(df.astype(object).fillna("")))
</code></pre>
<p>Now, this causes the following warning:</p>
<blockquote>
<p>FutureWarning: Downcasting object dtype arrays on .fillna, .ffill,
.bfill is deprecated and will change in a future version. Call
result.infer_objects(copy=False) instead.</p>
</blockquote>
<p>I don't know what I should do instead now. I certainly don't see how <code>infer_objects(copy=False)</code> would help as the whole point here is indeed to <em>force</em> converting everything to a string representation and filling in missing values with empty strings.</p>
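<p>The best replacement I have found so far (happy to be corrected) is <code>where</code> on the object-dtype frame, which fills the missing cells without touching the others and, since nothing is downcast, without the warning:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1.0, np.nan], "t": pd.to_datetime(["2024-01-01", None])})

# keep values where notna() is True, fill "" elsewhere
filled = df.astype(object).where(df.notna(), "")
</code></pre>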
|
<python><pandas><downcast>
|
2024-01-29 15:54:50
| 10
| 2,690
|
mrgou
|
77,900,883
| 2,233,500
|
Problem accessing Wikipedia page using Python API
|
<p>I want to extract image URLs from Wikipedia pages. I'm using the <a href="https://pypi.org/project/wikipedia/" rel="nofollow noreferrer">Wikipedia python API</a> for that. I'm having some problems accessing some pages and I cannot understand what is wrong.</p>
<p>I'm using the <a href="https://en.wikipedia.org/wiki/Apple_Inc." rel="nofollow noreferrer">Apple Inc.</a> Wikipedia page as an example. The page title is <code>Apple Inc.</code> and the page <em>name</em> in the URL is <code>Apple_Inc.</code> (maybe there's a better name for that).</p>
<p>If I use the <code>wikipedia.page()</code> function to access the page with the title <code>Apple Inc.</code>, I get the error: <code>Page id "apple in" does not match any pages. Try another id!</code>. Same if I use the title <code>Apple_Inc.</code> instead. But if I use something close to the title, the API often gives me the correct page: <code><WikipediaPage 'Apple Inc.'></code>. See the code below and the resulting page/error:</p>
<pre class="lang-py prettyprint-override"><code>import wikipedia
page = wikipedia.page(title="Apple Inc.")
print(page)
# -> wikipedia.exceptions.PageError: Page id "apple in" does not match any pages. Try another id!
page = wikipedia.page(title="Apple_Inc.")
print(page)
# -> wikipedia.exceptions.PageError: Page id "apple in" does not match any pages. Try another id!
page = wikipedia.page(title="Apple Inc")
print(page)
# -> <WikipediaPage 'Apple Inc.'>
page = wikipedia.page(title="Apple In")
print(page)
# -> <WikipediaPage 'Apple Inc.'>
page = wikipedia.page(title="Apple Incorporated")
print(page)
# -> <WikipediaPage 'Apple Inc.'>
page = wikipedia.page(title="Apple Incorporated.")
print(page)
# -> <WikipediaPage 'Apple Inc.'>
page = wikipedia.page(title="Apple Incc.")
print(page)
# -> wikipedia.exceptions.PageError: Page id "apple inch" does not match any pages. Try another id!
</code></pre>
<p>At first, I thought it was the "." in "Apple Inc." that would cause a problem, but the title "Apple Incorporated." works fine. And strangely, if I use the title "Apple Incc.", then it seems that the API is looking for the page "apple inch" for some reason.</p>
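<p>One thing I plan to try (untested): the library's <code>auto_suggest</code> feature apparently rewrites the title before the lookup, which would explain the mangled "apple in" id; disabling it might help:</p>
<pre class="lang-py prettyprint-override"><code>import wikipedia

# auto_suggest=False stops the library from "correcting" the title
page = wikipedia.page(title="Apple Inc.", auto_suggest=False)
</code></pre>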
|
<python><wikipedia>
|
2024-01-29 15:42:57
| 1
| 867
|
Vincent Garcia
|
77,900,818
| 1,422,096
|
Why does running a plot in a secondary thread work the first time (with a warning), but fail the second time (with an error)?
|
<p>For some quick tests (this code will evolve later anyway, so a temporary solution is ok), I need to use Matplotlib in a <code>thread</code>, and <strong>not</strong> the main thread.</p>
<p>Usually we have this warning but it works anyway:</p>
<blockquote>
<p>UserWarning: Starting a Matplotlib GUI outside of the main thread will likely fail.</p>
</blockquote>
<p>Here it is indeed the case.</p>
<p>However, when the first plot is closed, and we do the same <strong>a second time</strong>, then it totally fails with not a warning, but an <strong>error</strong>:</p>
<blockquote>
<p>RuntimeError: main thread is not in main loop</p>
</blockquote>
<p><strong>Is there a way to make the following code work?</strong> (even if it is not the recommended matplotlib way)</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import threading, time

def visualization_thread():
    fig = plt.figure()
    ax = fig.add_subplot(111)
    l1, *_ = ax.plot(y1, color='r', label="1")
    fig.show()
    fig.canvas.flush_events()
    while running:
        l1.set_ydata(y1)
        fig.canvas.draw_idle()
        fig.canvas.flush_events()
        plt.pause(0.020)

def data_thread():
    global y1, y2, y3
    while running:
        y1 = np.random.rand(100)
        time.sleep(0.020)

running = True  # !!! WARNING but it still works
threading.Thread(target=data_thread).start()
threading.Thread(target=visualization_thread).start()
time.sleep(4)
running = False
time.sleep(2)

running = True  # !!! FAILS, why?
threading.Thread(target=data_thread).start()
threading.Thread(target=visualization_thread).start()
</code></pre>
|
<python><multithreading><matplotlib>
|
2024-01-29 15:31:52
| 1
| 47,388
|
Basj
|
77,900,729
| 3,029,238
|
How to call rev_list with GitPython
|
<p>This program fails</p>
<pre><code>import git
repo = git.repo.Repo('..')
res = repo.git.rev_list('--since="2024-01-01" master').split('\n')
</code></pre>
<p>with the following error</p>
<pre><code>git.exc.GitCommandError: Cmd('git') failed due to: exit code(129)
cmdline: git rev-list --since="2024-01-01" master
stderr: 'usage: git rev-list [<options>] <commit>... [--] [<path>...]
</code></pre>
<p>despite that the command <code>git rev-list --since="2024-01-01" master</code> works just fine from the command line.</p>
<p>Any ideas how to fix it?</p>
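<p>One thing I noticed while debugging: GitPython seems to pass each Python argument through as a single argv element, so my one big string arrives at git as one malformed argument. Maybe splitting it (and dropping the shell quoting) is the fix, though I have not confirmed:</p>
<pre><code>import git

repo = git.Repo('..')
# one token per argument; no shell is involved, so no quotes around the date
res = repo.git.rev_list('--since=2024-01-01', 'master').split('\n')
</code></pre>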
|
<python><python-3.x><git><gitpython>
|
2024-01-29 15:16:35
| 1
| 1,649
|
Dmitry
|
77,900,681
| 1,422,096
|
Checkboxes to select plots in realtime live matplotlib (interactive) display
|
<p>Is there a <code>matplotlib</code> built-in way to display <strong>checkboxes</strong> along the legend, allowing the user to select/unselect some of the 3 curves in a realtime live plotting? (I currently refresh data with <code>set_ydata</code>, but I can change this if needed)</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import threading, time

def visualization_thread():
    fig = plt.figure()
    ax = fig.add_subplot(111)
    l1, *_ = ax.plot(y1, color='r', label="1")
    l2, *_ = ax.plot(y2, color='g', label="2")
    l3, *_ = ax.plot(y3, color='b', label="3")
    fig.show()
    fig.legend()
    fig.canvas.flush_events()
    while True:
        l1.set_ydata(y1)
        l2.set_ydata(y2)
        l3.set_ydata(y3)
        fig.canvas.draw_idle()
        fig.canvas.flush_events()
        plt.pause(0.020)

def data_thread():
    global y1, y2, y3
    while True:
        y1 = np.random.rand(100)
        y2 = np.random.rand(100)
        y3 = np.random.rand(100)
        time.sleep(0.020)

threading.Thread(target=data_thread).start()
threading.Thread(target=visualization_thread).start()
</code></pre>
<p><a href="https://i.sstatic.net/0DgyK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0DgyK.png" alt="enter image description here" /></a></p>
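<p>In case it helps frame the question: the closest built-in I have found is <code>matplotlib.widgets.CheckButtons</code> placed in its own small axes, though I do not know how well it behaves with my live-update loop. A static sketch:</p>
<pre><code>import matplotlib
matplotlib.use("Agg")  # headless, just for this sketch
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.widgets import CheckButtons

fig, ax = plt.subplots()
lines = [ax.plot(np.random.rand(100), label=str(i + 1))[0] for i in range(3)]
labels = [l.get_label() for l in lines]

# a small axes in figure coordinates to host the checkboxes
rax = fig.add_axes([0.82, 0.75, 0.15, 0.15])
check = CheckButtons(rax, labels, [l.get_visible() for l in lines])

def toggle(label):
    # show/hide the curve whose legend label was clicked
    line = lines[labels.index(label)]
    line.set_visible(not line.get_visible())
    fig.canvas.draw_idle()

check.on_clicked(toggle)
</code></pre>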
|
<python><matplotlib><visualization><interactive>
|
2024-01-29 15:07:58
| 0
| 47,388
|
Basj
|
77,900,469
| 5,330,527
|
Turn old Python web app into WSGI-ready and Apache
|
<p>I have an ancient web application written in Python. It's basically a bunch of .py files. For instance:</p>
<p><code>display.py</code>:</p>
<pre><code>import cgi
import re
import string
import operator
from urllib.parse import urlparse
from errors import herigean
from routines import *

error = False
query = cgiFieldStorageToDict(cgi.FieldStorage())
opening_index = 0  # flag to indicate whether we're opening the index page
if ('what' not in query):
    query['what'] = 'index'
if 'fs' not in query:
    query['fs'] = str(default_font_size)
# open page to display
try:
    fil = open('html/'+query['what']+'.fmt')
    textlines = fil.read()
    queryreg = re.compile('QUERY:fs:QUERY')
    textlines = queryreg.sub(query['fs'],textlines)
    fil.close()
except IOError:
    error = True
if query['what'] == 'about':
    try:
        fil = open('legal/lgpl-3.0.txt')
        lgpl = fil.read()
        fil.close()
        fil = open('legal/gpl.txt')
        gpl = fil.read()
        fil.close()
        fil = open('html/availability.fmt')
        availability = fil.read()
        fil.close()
    except IOError:
        error = True
if query['what'] == 'corpus':
    try:
        fil = open('html/availability.fmt')
        [...]
if error:
    herigean()
else:
    print(frontmatter)
</code></pre>
<p>etc.</p>
<p>How can I run this behind an Apache proxy, using mod_wsgi installed in my virtual environment? Right now I have a Python 3.11 virtual environment with <code>mod_wsgi-express</code> 5 installed in it. I can successfully run a <code>test.py</code> with:</p>
<p><code>mod_wsgi-express start-server test.py</code></p>
<pre><code>def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b'Hello, world!']
</code></pre>
<p>How can I run my old Python application? Do I just wrap each .py file inside a <code>def application(environ, start_response):</code>? Any help will be highly appreciated.</p>
<p><strong>Addition</strong>:</p>
<p>The application has an <code>index.html</code> within its www. Inside this, there's a <code><meta http-equiv="Refresh" content="0;url=display.py?what=index" /></code>. That's how it's currently served.</p>
<p><strong>Addition II</strong></p>
<p>There's no way I can get an output: when using</p>
<pre><code>def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    [...]
    return print(frontmatter)
</code></pre>
<p>I get the whole HTML in the logs, followed by <code>TypeError: 'NoneType' object is not iterable</code> and an Internal Server Error in the browser.</p>
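<p>Following up on Addition II: I now realise <code>print()</code> returns <code>None</code>, which would explain the <code>TypeError</code>; a WSGI app has to return an iterable of bytes. A sketch of what I am considering, capturing the legacy script's stdout (<code>run_legacy</code> here is a hypothetical stand-in for executing <code>display.py</code>'s module-level code):</p>
<pre><code>import io
from contextlib import redirect_stdout

def run_legacy(environ):
    # stand-in for running display.py's module-level code,
    # which print()s a complete HTML page
    print("<html><body>hello</body></html>")

def application(environ, start_response):
    buf = io.StringIO()
    with redirect_stdout(buf):
        run_legacy(environ)
    body = buf.getvalue().encode("utf-8")
    start_response('200 OK', [('Content-Type', 'text/html'),
                              ('Content-Length', str(len(body)))])
    return [body]
</code></pre>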
|
<python><python-3.x><apache><mod-wsgi><wsgi>
|
2024-01-29 14:37:31
| 1
| 786
|
HBMCS
|
77,900,299
| 7,662,164
|
JAX `grad` error for function with `jax.lax.switch` and compound boolean conditions
|
<p>I have encountered a scenario where applying <code>jax.grad</code> to a function with <code>jax.lax.switch</code> and compound boolean conditions yields <code>jax.errors.TracerBoolConversionError</code>. A minimal program to reproduce this behavior is the following:</p>
<pre><code>from jax.lax import switch
import jax.numpy as jnp
from jax import grad
func_0 = lambda x: jnp.where(0. < x < 1., x, 0.)
func_1 = lambda x: jnp.where(0. < x < 1., x, 1.)
func_list = [func_0, func_1]
func = lambda index, x: switch(index, func_list, x)
df = grad(func, argnums=1)(1, 2.)
print(df)
</code></pre>
<p>The error is the following:</p>
<pre><code>Traceback (most recent call last):
File "***/grad_test.py", line 12, in <module>
df = grad(func, argnums=1)(1, 0.5)
File "***/grad_test.py", line 10, in <lambda>
func = lambda index, x: switch(index, func_list, x)
File "***/grad_test.py", line 5, in <lambda>
func_0 = lambda x: jnp.where(0 < x < 1., x, 0.)
jax.errors.TracerBoolConversionError: Attempted boolean conversion of traced array with shape bool[]..
The error occurred while tracing the function <lambda> at ***/grad_test.py:5 for switch. This concrete value was not available in Python because it depends on the value of the argument x.
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.TracerBoolConversionError
</code></pre>
<p>However, if the boolean condition is changed to a single condition (for example, <code>x < 1</code>), then no error occurs. I'm wondering if this could be a bug, or otherwise, how the original program should be changed.</p>
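<p>For reference, replacing the chained comparison (which calls <code>bool()</code> on a traced array) with an elementwise <code>&</code> / <code>jnp.logical_and</code> seems to avoid the error, though I am unsure whether that is the intended idiom:</p>
<pre><code>from jax import grad
from jax.lax import switch
import jax.numpy as jnp

# an elementwise logical AND stays traceable, unlike `0. < x < 1.`
func_0 = lambda x: jnp.where((0. < x) & (x < 1.), x, 0.)
func_1 = lambda x: jnp.where((0. < x) & (x < 1.), x, 1.)

func = lambda index, x: switch(index, [func_0, func_1], x)
df = grad(func, argnums=1)(1, 2.)
</code></pre>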
|
<python><boolean><gradient><jax>
|
2024-01-29 14:10:05
| 1
| 335
|
Jingyang Wang
|
77,900,125
| 9,318,323
|
Can I release dev and stable versions together
|
<p>Let's say I have a stable <code>0.8.4</code> version of my package. Let's also say I released a <code>1.0.0.dev1</code> version in a separate branch.</p>
<p>Can I still release versions e.g. <code>0.8.5</code> or <code>0.9.0</code> while I am developing my <code>1.0.0</code> version?</p>
|
<python><pip><python-packaging>
|
2024-01-29 13:37:29
| 1
| 354
|
Vitamin C
|
77,899,986
| 3,833,612
|
OpenCV VideoCapture ESP32CAM failure
|
<p>I am using OpenCV's VideoCapture to access the live video stream from my ESP32CAM server. The video is streaming at the URL <code>http://172.20.10.10:81/stream</code>. <strong>If I put the URL in the web browser, I can actually see the video from my ESP32CAM</strong>. However, I am not able to access this video in a Jupyter notebook environment using OpenCV's <code>VideoCapture</code>. <strong>I tried VideoCapture(0) and it works fine with my laptop camera</strong>, so I guess the problem is that it just cannot capture the live streaming URL. Can anyone help?</p>
<p>The code on my ESP32CAM side is below (from a tutorial website). I can put in <code>http://172.20.10.10</code> to access a webpage in which the video is streaming pretty well, so I guess the problem is not here.</p>
<pre><code>/*********
Rui Santos
Complete instructions at https://RandomNerdTutorials.com/esp32-cam-projects-ebook/
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files.
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
*********/
#include "esp_camera.h"
#include <WiFi.h>
#include "esp_timer.h"
#include "img_converters.h"
#include "Arduino.h"
#include "fb_gfx.h"
#include "soc/soc.h" // disable brownout problems
#include "soc/rtc_cntl_reg.h" // disable brownout problems
#include "esp_http_server.h"
#include "HardwareSerial.h"
// Replace with your network credentials
const char* ssid = "xxxxxxxx";
const char* password = "12345678";
#define PART_BOUNDARY "123456789000000000000987654321"
#define CAMERA_MODEL_AI_THINKER
//#define CAMERA_MODEL_M5STACK_PSRAM
//#define CAMERA_MODEL_M5STACK_WITHOUT_PSRAM
//#define CAMERA_MODEL_M5STACK_PSRAM_B
//#define CAMERA_MODEL_WROVER_KIT
#if defined(CAMERA_MODEL_WROVER_KIT)
#define PWDN_GPIO_NUM -1
#define RESET_GPIO_NUM -1
#define XCLK_GPIO_NUM 21
#define SIOD_GPIO_NUM 26
#define SIOC_GPIO_NUM 27
#define Y9_GPIO_NUM 35
#define Y8_GPIO_NUM 34
#define Y7_GPIO_NUM 39
#define Y6_GPIO_NUM 36
#define Y5_GPIO_NUM 19
#define Y4_GPIO_NUM 18
#define Y3_GPIO_NUM 5
#define Y2_GPIO_NUM 4
#define VSYNC_GPIO_NUM 25
#define HREF_GPIO_NUM 23
#define PCLK_GPIO_NUM 22
#elif defined(CAMERA_MODEL_M5STACK_PSRAM)
#define PWDN_GPIO_NUM -1
#define RESET_GPIO_NUM 15
#define XCLK_GPIO_NUM 27
#define SIOD_GPIO_NUM 25
#define SIOC_GPIO_NUM 23
#define Y9_GPIO_NUM 19
#define Y8_GPIO_NUM 36
#define Y7_GPIO_NUM 18
#define Y6_GPIO_NUM 39
#define Y5_GPIO_NUM 5
#define Y4_GPIO_NUM 34
#define Y3_GPIO_NUM 35
#define Y2_GPIO_NUM 32
#define VSYNC_GPIO_NUM 22
#define HREF_GPIO_NUM 26
#define PCLK_GPIO_NUM 21
#elif defined(CAMERA_MODEL_M5STACK_WITHOUT_PSRAM)
#define PWDN_GPIO_NUM -1
#define RESET_GPIO_NUM 15
#define XCLK_GPIO_NUM 27
#define SIOD_GPIO_NUM 25
#define SIOC_GPIO_NUM 23
#define Y9_GPIO_NUM 19
#define Y8_GPIO_NUM 36
#define Y7_GPIO_NUM 18
#define Y6_GPIO_NUM 39
#define Y5_GPIO_NUM 5
#define Y4_GPIO_NUM 34
#define Y3_GPIO_NUM 35
#define Y2_GPIO_NUM 17
#define VSYNC_GPIO_NUM 22
#define HREF_GPIO_NUM 26
#define PCLK_GPIO_NUM 21
#elif defined(CAMERA_MODEL_AI_THINKER)
#define PWDN_GPIO_NUM 32
#define RESET_GPIO_NUM -1
#define XCLK_GPIO_NUM 0
#define SIOD_GPIO_NUM 26
#define SIOC_GPIO_NUM 27
#define Y9_GPIO_NUM 35
#define Y8_GPIO_NUM 34
#define Y7_GPIO_NUM 39
#define Y6_GPIO_NUM 36
#define Y5_GPIO_NUM 21
#define Y4_GPIO_NUM 19
#define Y3_GPIO_NUM 18
#define Y2_GPIO_NUM 5
#define VSYNC_GPIO_NUM 25
#define HREF_GPIO_NUM 23
#define PCLK_GPIO_NUM 22
#elif defined(CAMERA_MODEL_M5STACK_PSRAM_B)
#define PWDN_GPIO_NUM -1
#define RESET_GPIO_NUM 15
#define XCLK_GPIO_NUM 27
#define SIOD_GPIO_NUM 22
#define SIOC_GPIO_NUM 23
#define Y9_GPIO_NUM 19
#define Y8_GPIO_NUM 36
#define Y7_GPIO_NUM 18
#define Y6_GPIO_NUM 39
#define Y5_GPIO_NUM 5
#define Y4_GPIO_NUM 34
#define Y3_GPIO_NUM 35
#define Y2_GPIO_NUM 32
#define VSYNC_GPIO_NUM 25
#define HREF_GPIO_NUM 26
#define PCLK_GPIO_NUM 21
#else
#error "Camera model not selected"
#endif
//#define MOTOR_1_PIN_1 14
//#define MOTOR_1_PIN_2 15
//#define MOTOR_2_PIN_1 13
//#define MOTOR_2_PIN_2 12
#define RXD1 14
#define TXD1 15
static const char* _STREAM_CONTENT_TYPE = "multipart/x-mixed-replace;boundary=" PART_BOUNDARY;
static const char* _STREAM_BOUNDARY = "\r\n--" PART_BOUNDARY "\r\n";
static const char* _STREAM_PART = "Content-Type: image/jpeg\r\nContent-Length: %u\r\n\r\n";
httpd_handle_t camera_httpd = NULL;
httpd_handle_t stream_httpd = NULL;
static const char PROGMEM INDEX_HTML[] = R"rawliteral(
<html>
<head>
<title>ESP32-CAM Robot</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body { font-family: Arial; text-align: center; margin:0px auto; padding-top: 30px;}
table { margin-left: auto; margin-right: auto; }
td { padding: 8 px; }
.button {
background-color: #2f4468;
border: none;
color: white;
padding: 10px 20px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 18px;
margin: 6px 3px;
cursor: pointer;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
-webkit-tap-highlight-color: rgba(0,0,0,0);
}
img { width: auto ;
max-width: 100% ;
height: auto ;
}
</style>
</head>
<body>
<h1>ESP32-CAM Robot</h1>
<img src="" id="photo" >
<table>
<tr><td colspan="3" align="center"><button class="button" onmousedown="toggleCheckbox('forward');" ontouchstart="toggleCheckbox('forward');" onmouseup="toggleCheckbox('stop');" ontouchend="toggleCheckbox('stop');">Forward</button></td></tr>
<tr><td align="center"><button class="button" onmousedown="toggleCheckbox('left');" ontouchstart="toggleCheckbox('left');" onmouseup="toggleCheckbox('stop');" ontouchend="toggleCheckbox('stop');">Left</button></td><td align="center"><button class="button" onmousedown="toggleCheckbox('stop');" ontouchstart="toggleCheckbox('stop');">Stop</button></td><td align="center"><button class="button" onmousedown="toggleCheckbox('right');" ontouchstart="toggleCheckbox('right');" onmouseup="toggleCheckbox('stop');" ontouchend="toggleCheckbox('stop');">Right</button></td></tr>
<tr><td colspan="3" align="center"><button class="button" onmousedown="toggleCheckbox('backward');" ontouchstart="toggleCheckbox('backward');" onmouseup="toggleCheckbox('stop');" ontouchend="toggleCheckbox('stop');">Backward</button></td></tr>
</table>
<script>
function toggleCheckbox(x) {
var xhr = new XMLHttpRequest();
xhr.open("GET", "/action?go=" + x, true);
xhr.send();
}
window.onload = document.getElementById("photo").src = window.location.href.slice(0, -1) + ":81/stream";
</script>
</body>
</html>
)rawliteral";
static esp_err_t index_handler(httpd_req_t *req){
httpd_resp_set_type(req, "text/html");
return httpd_resp_send(req, (const char *)INDEX_HTML, strlen(INDEX_HTML));
}
static esp_err_t stream_handler(httpd_req_t *req){
camera_fb_t * fb = NULL;
esp_err_t res = ESP_OK;
size_t _jpg_buf_len = 0;
uint8_t * _jpg_buf = NULL;
char * part_buf[64];
res = httpd_resp_set_type(req, _STREAM_CONTENT_TYPE);
if(res != ESP_OK){
return res;
}
while(true){
fb = esp_camera_fb_get();
if (!fb) {
Serial.println("Camera capture failed");
res = ESP_FAIL;
} else {
if(fb->width > 400){
if(fb->format != PIXFORMAT_JPEG){
bool jpeg_converted = frame2jpg(fb, 80, &_jpg_buf, &_jpg_buf_len);
esp_camera_fb_return(fb);
fb = NULL;
if(!jpeg_converted){
Serial.println("JPEG compression failed");
res = ESP_FAIL;
}
} else {
_jpg_buf_len = fb->len;
_jpg_buf = fb->buf;
}
}
}
if(res == ESP_OK){
size_t hlen = snprintf((char *)part_buf, 64, _STREAM_PART, _jpg_buf_len);
res = httpd_resp_send_chunk(req, (const char *)part_buf, hlen);
}
if(res == ESP_OK){
res = httpd_resp_send_chunk(req, (const char *)_jpg_buf, _jpg_buf_len);
}
if(res == ESP_OK){
res = httpd_resp_send_chunk(req, _STREAM_BOUNDARY, strlen(_STREAM_BOUNDARY));
}
if(fb){
esp_camera_fb_return(fb);
fb = NULL;
_jpg_buf = NULL;
} else if(_jpg_buf){
free(_jpg_buf);
_jpg_buf = NULL;
}
if(res != ESP_OK){
break;
}
//Serial.printf("MJPG: %uB\n",(uint32_t)(_jpg_buf_len));
}
return res;
}
static esp_err_t cmd_handler(httpd_req_t *req){
char* buf;
size_t buf_len;
char variable[32] = {0,};
buf_len = httpd_req_get_url_query_len(req) + 1;
if (buf_len > 1) {
buf = (char*)malloc(buf_len);
if(!buf){
httpd_resp_send_500(req);
return ESP_FAIL;
}
if (httpd_req_get_url_query_str(req, buf, buf_len) == ESP_OK) {
if (httpd_query_key_value(buf, "go", variable, sizeof(variable)) == ESP_OK) {
} else {
free(buf);
httpd_resp_send_404(req);
return ESP_FAIL;
}
} else {
free(buf);
httpd_resp_send_404(req);
return ESP_FAIL;
}
free(buf);
} else {
httpd_resp_send_404(req);
return ESP_FAIL;
}
sensor_t * s = esp_camera_sensor_get();
int res = 0;
if(!strcmp(variable, "forward")) {
Serial.println("Forward");
Serial1.println("@Forward\r\n");
//digitalWrite(MOTOR_1_PIN_1, 1);
//digitalWrite(MOTOR_1_PIN_2, 0);
//digitalWrite(MOTOR_2_PIN_1, 1);
//digitalWrite(MOTOR_2_PIN_2, 0);
}
else if(!strcmp(variable, "left")) {
Serial.println("Left");
Serial1.println("@Left\r\n");
//digitalWrite(MOTOR_1_PIN_1, 0);
//digitalWrite(MOTOR_1_PIN_2, 1);
//digitalWrite(MOTOR_2_PIN_1, 1);
//digitalWrite(MOTOR_2_PIN_2, 0);
}
else if(!strcmp(variable, "right")) {
Serial.println("Right");
Serial1.println("@Right\r\n");
//digitalWrite(MOTOR_1_PIN_1, 1);
//digitalWrite(MOTOR_1_PIN_2, 0);
//digitalWrite(MOTOR_2_PIN_1, 0);
//digitalWrite(MOTOR_2_PIN_2, 1);
}
else if(!strcmp(variable, "backward")) {
Serial.println("Backward");
Serial1.println("@Backward\r\n");
//digitalWrite(MOTOR_1_PIN_1, 0);
//digitalWrite(MOTOR_1_PIN_2, 1);
//digitalWrite(MOTOR_2_PIN_1, 0);
//digitalWrite(MOTOR_2_PIN_2, 1);
}
else if(!strcmp(variable, "stop")) {
Serial.println("Stop");
Serial1.println("@Stop\r\n");
//digitalWrite(MOTOR_1_PIN_1, 0);
//digitalWrite(MOTOR_1_PIN_2, 0);
//digitalWrite(MOTOR_2_PIN_1, 0);
//digitalWrite(MOTOR_2_PIN_2, 0);
}
else {
res = -1;
}
if(res){
return httpd_resp_send_500(req);
}
httpd_resp_set_hdr(req, "Access-Control-Allow-Origin", "*");
return httpd_resp_send(req, NULL, 0);
}
void startCameraServer(){
httpd_config_t config = HTTPD_DEFAULT_CONFIG();
config.server_port = 80;
httpd_uri_t index_uri = {
.uri = "/",
.method = HTTP_GET,
.handler = index_handler,
.user_ctx = NULL
};
httpd_uri_t cmd_uri = {
.uri = "/action",
.method = HTTP_GET,
.handler = cmd_handler,
.user_ctx = NULL
};
httpd_uri_t stream_uri = {
.uri = "/stream",
.method = HTTP_GET,
.handler = stream_handler,
.user_ctx = NULL
};
if (httpd_start(&camera_httpd, &config) == ESP_OK) {
httpd_register_uri_handler(camera_httpd, &index_uri);
httpd_register_uri_handler(camera_httpd, &cmd_uri);
}
config.server_port += 1;
config.ctrl_port += 1;
if (httpd_start(&stream_httpd, &config) == ESP_OK) {
httpd_register_uri_handler(stream_httpd, &stream_uri);
}
}
void setup() {
WRITE_PERI_REG(RTC_CNTL_BROWN_OUT_REG, 0); //disable brownout detector
//pinMode(MOTOR_1_PIN_1, OUTPUT);
//pinMode(MOTOR_1_PIN_2, OUTPUT);
//pinMode(MOTOR_2_PIN_1, OUTPUT);
//pinMode(MOTOR_2_PIN_2, OUTPUT);
Serial.begin(115200);
Serial1.begin(115200, SERIAL_8N1, RXD1, TXD1);
Serial.setDebugOutput(false);
camera_config_t config;
config.ledc_channel = LEDC_CHANNEL_0;
config.ledc_timer = LEDC_TIMER_0;
config.pin_d0 = Y2_GPIO_NUM;
config.pin_d1 = Y3_GPIO_NUM;
config.pin_d2 = Y4_GPIO_NUM;
config.pin_d3 = Y5_GPIO_NUM;
config.pin_d4 = Y6_GPIO_NUM;
config.pin_d5 = Y7_GPIO_NUM;
config.pin_d6 = Y8_GPIO_NUM;
config.pin_d7 = Y9_GPIO_NUM;
config.pin_xclk = XCLK_GPIO_NUM;
config.pin_pclk = PCLK_GPIO_NUM;
config.pin_vsync = VSYNC_GPIO_NUM;
config.pin_href = HREF_GPIO_NUM;
config.pin_sccb_sda = SIOD_GPIO_NUM;
config.pin_sccb_scl = SIOC_GPIO_NUM;
config.pin_pwdn = PWDN_GPIO_NUM;
config.pin_reset = RESET_GPIO_NUM;
config.xclk_freq_hz = 20000000;
config.pixel_format = PIXFORMAT_JPEG;
if(psramFound()){
config.frame_size = FRAMESIZE_VGA;
config.jpeg_quality = 10;
config.fb_count = 2;
} else {
config.frame_size = FRAMESIZE_SVGA;
config.jpeg_quality = 12;
config.fb_count = 1;
}
// Camera init
esp_err_t err = esp_camera_init(&config);
if (err != ESP_OK) {
Serial.printf("Camera init failed with error 0x%x", err);
return;
}
// Wi-Fi connection
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
Serial.println("");
Serial.println("WiFi connected");
Serial.print("Camera Stream Ready! Go to: http://");
Serial.println(WiFi.localIP());
// Start streaming web server
startCameraServer();
}
void loop() {
}
</code></pre>
<p>The Python code which I am using to access the video is below. <code>ret</code> keeps returning False; it is not getting any video. If I put <code>http://172.20.10.10:81/stream</code> in the web browser, I can actually see the video from my ESP32CAM.</p>
<pre><code>import cv2
import ipywidgets.widgets as widgets
import urllib
import time

def bgr8_to_jpeg(value, quality=75):
    return bytes(cv2.imencode('.jpg', value)[1])

image_widget = widgets.Image(format='jpeg', width=640, height=480)
display(image_widget)

cap = cv2.VideoCapture('http://172.20.10.10:81/stream')
try:
    while(True):
        ret, frame = cap.read()
        resized_frame = cv2.resize(frame, (640, 480))
        image_widget.value = bgr8_to_jpeg(frame)
        time.sleep(0.1)
except KeyboardInterrupt:
    print('closed')
    pass
cap.release()
cv2.destroyAllWindows()
</code></pre>
|
<python><opencv><esp32>
|
2024-01-29 13:14:27
| 0
| 427
|
user3833612
|