| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,240,527
| 9,428,990
|
Linux service stops logging
|
<p>I have a service running. The service invokes a long-running Python script which "prints" some status messages. I can see these status messages with <code>journalctl -u myservice.service</code>. But after about 6 hours I am no longer able to see new output from the service.</p>
<p>If I run <code>systemctl status myservice.service</code>, I can tell that the service is still running.</p>
<p>I suspected it had something to do with the logging, so I reduced the interval of the log output from once per minute to once per 10 seconds. Sure enough, the problem still occurred, but after just 1 hour. Thus it seems as if I have an issue with log space or log rotation. Can someone assist me with solving this?</p>
<p>I started to look into the <code>journald.conf</code> file and the content looks like this.</p>
<pre><code>[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg
#LineMax=48K
</code></pre>
<p>I tried changing the content as below and then ran <code>sudo systemctl restart systemd-journald.service</code> without any success.</p>
<pre><code>[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
SystemMaxUse=1G # Increase to allow more disk space for logging
#SystemKeepFree=
#SystemMaxFileSize=
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
MaxRetentionSec=1month # Retain journal files for up to one month
MaxFileSec=1week # Keep individual journal files for up to one week
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg
#LineMax=48K
</code></pre>
<p>Any ideas?</p>
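<p>One possibility worth ruling out before tuning journald (an assumption, not a confirmed diagnosis): Python block-buffers <code>stdout</code> when it is not attached to a TTY, so a long-running script's <code>print</code> output can stall for long stretches even though the service keeps running. A minimal sketch of flushing each status line explicitly:</p>

```python
def log_status(message: str) -> None:
    # Flush after every line so journald receives output immediately;
    # without flush=True, Python block-buffers stdout when it is not a TTY
    # and lines can sit in the buffer for a long time.
    print(message, flush=True)
```

<p>Setting <code>PYTHONUNBUFFERED=1</code> in the unit's <code>Environment=</code> (or invoking the script with <code>python -u</code>) achieves the same without code changes.</p>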
|
<python><linux><service><systemd-journald>
|
2024-03-28 19:42:11
| 1
| 719
|
Frankster
|
78,240,320
| 15,370,142
|
Workaround for "TypeError: 'NoneType' object is not subscriptable" in ArcGIS Import data Tutorial
|
<p>I'm trying to follow <a href="https://developers.arcgis.com/python/guide/import-data/" rel="nofollow noreferrer">this ArcGIS tutorial</a>. I'm receiving a <code>TypeError: 'NoneType' object is not subscriptable</code>. This happens when I try to execute <code>csv_item = gis.content.add(trailhead_properties, csv_file)</code>. Just wondering if anyone else has encountered this error and was able to identify a workaround.</p>
<p>Here is the code I'm running:</p>
<pre><code>from arcgis import GIS
import os
gis = GIS("<url>",
client_id=os.environ.get(token),
verify_cert=False)
trailhead_properties = {
"title": "Trailheads",
"description": "Trailheads imported from CSV file",
"tags": "LA Trailheads"
}
data_path = r".data\LA_Hub_Datasets\LA_Hub\Datasets"
csv_file = os.path.join(data_path, 'Trailheads.csv')
csv_item = gis.content.add(trailhead_properties, csv_file)
</code></pre>
<p>I have downloaded "LA_Hub_Datasets.zip" and extracted it. I had no problems reading this file as a pandas data frame.</p>
<p>I was also looking <a href="https://developers.arcgis.com/documentation/mapping-apis-and-services/data-hosting/tutorials/tools/define-a-new-feature-layer/#create-a-point-feature-layer" rel="nofollow noreferrer">at another tutorial</a>. I received the same error when I reached the following lines of code:</p>
<pre><code># create the service
new_service = portal.content.create_service(
name="My Points",
create_params=create_params,
tags="Beach Access,Malibu",
)
</code></pre>
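<p>One low-cost check before debugging the API call itself: confirm the CSV path actually resolves, since the raw string <code>.data\LA_Hub_Datasets\...</code> looks like it may be missing a separator. A hedged sketch (the helper name is mine, not from the tutorial) that fails fast with a clear message instead of a <code>TypeError</code> deep inside the API:</p>

```python
import os

def resolve_csv(data_path: str, name: str = "Trailheads.csv") -> str:
    # Verify the file exists before handing it to gis.content.add, so a bad
    # path surfaces as FileNotFoundError rather than an opaque TypeError.
    csv_file = os.path.join(data_path, name)
    if not os.path.isfile(csv_file):
        raise FileNotFoundError(f"CSV not found: {csv_file}")
    return csv_file
```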
|
<python><typeerror><gis><arcgis><nonetype>
|
2024-03-28 18:55:02
| 1
| 412
|
Ted M.
|
78,240,203
| 9,795,817
|
Display node and edge attributes in interactive Networkx graph using Pyvis
|
<h1>Edit</h1>
<p>I updated <code>pyvis</code> to version 0.3.2 and the plot is working as expected now.</p>
<p>Read on if you're interested in displaying node and edge attributes on a Pyvis visualization of a Networkx graph.</p>
<hr />
<h1>Original question</h1>
<p>I have two Pandas dataframes: <code>edges_df</code> and <code>nodes_df</code>. I used them to create networkx graph <code>G</code> which I'm then passing to a Pyvis network.</p>
<p><code>edges_df</code> looks as follows:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">src</th>
<th style="text-align: right;">dst</th>
<th style="text-align: left;">color</th>
<th style="text-align: left;">title</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">140</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">#379951</td>
<td style="text-align: left;">T/B</td>
</tr>
<tr>
<td style="text-align: right;">146</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">15</td>
<td style="text-align: left;">#379951</td>
<td style="text-align: left;">T/B</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">#0b03fc</td>
<td style="text-align: left;">78</td>
</tr>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">#0b03fc</td>
<td style="text-align: left;">62</td>
</tr>
<tr>
<td style="text-align: right;">139</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">10</td>
<td style="text-align: left;">#379951</td>
<td style="text-align: left;">T/B</td>
</tr>
</tbody>
</table></div>
<p><code>nodes_df</code> looks as follows:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">id</th>
<th style="text-align: left;">color</th>
<th style="text-align: left;">title</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">#47abfc</td>
<td style="text-align: left;">I am node 1</td>
</tr>
<tr>
<td style="text-align: right;">15</td>
<td style="text-align: right;">15</td>
<td style="text-align: left;">#47abfc</td>
<td style="text-align: left;">I am node 15</td>
</tr>
<tr>
<td style="text-align: right;">14</td>
<td style="text-align: right;">14</td>
<td style="text-align: left;">#f24e77</td>
<td style="text-align: left;">I am node 14</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">#f24e77</td>
<td style="text-align: left;">I am node 2</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: right;">8</td>
<td style="text-align: left;">#f24e77</td>
<td style="text-align: left;">I am node 8</td>
</tr>
</tbody>
</table></div>
<p>I created <code>G</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code># Declare nx graph from pandas edges
G = nx.from_pandas_edgelist(
df=edges_df,
source='src',
target='dst',
edge_attr=['color', 'title']
)
# Set node attributes from pandas
nx.set_node_attributes(
G=G,
    values=nodes_df.set_index('id').to_dict('index')
)
</code></pre>
<p>The docs state that:</p>
<blockquote>
<p>Note that the Networkx node properties with the same names as those consumed by pyvis (e.g., title) are translated directly to the correspondingly-named pyvis node attributes.</p>
</blockquote>
<p>Hence, I assumed that if my networkx graph had node attributes that follow pyvis' naming conventions, the resulting graph would automatically display each node's attributes.</p>
<p>However, when I build a pyvis <code>Network</code> from <code>G</code>, the resulting drawing does not display the colors or titles as expected. For example, I was expecting node 1 to be blue (#47abfc) and display the string <code>'I am node 1'</code> when the user hovers over it.</p>
<pre class="lang-py prettyprint-override"><code># Imports
from pyvis import network as net
import networkx as nx
from IPython.display import HTML
# Declare pyvis graph from G
gpv = net.Network(notebook=True, cdn_resources='in_line')
gpv.from_nx(G)
gpv.save_graph('example.html')
HTML(filename='example.html')
</code></pre>
<p>Sadly, none of the <code>title</code> or <code>color</code> attributes from either my edges or my nodes is rendering as I expected.</p>
<p><a href="https://i.sstatic.net/CUbgT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CUbgT.png" alt="enter image description here" /></a></p>
<p>How can I get Pyvis to render nodes and edges according to the corresponding attributes from my networkx graph?</p>
<p>The data I'm working with is HUGE (25B edges) and I'd like to avoid using for loops to add nodes or edges one by one.</p>
<hr />
<h1>Summary</h1>
<p>By creating a Networkx graph with node and edge attributes that follow Pyvis' <a href="https://visjs.github.io/vis-network/docs/network/nodes.html" rel="nofollow noreferrer">naming conventions</a>, the resulting visualization will inherit the attributes of the Networkx graph.</p>
<p>For example, adding node attribute <code>title</code> will result in hover elements that display the corresponding node's title.</p>
<p><a href="https://i.sstatic.net/73tkI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/73tkI.png" alt="success" /></a></p>
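<p>For reference, the <code>{node_id: {attribute: value}}</code> mapping produced by <code>nodes_df.set_index('id').to_dict('index')</code> can be sketched without pandas (the records below are a stand-in for the dataframe rows shown above):</p>

```python
# Stand-in for nodes_df.to_dict('records'); same columns as the table above.
records = [
    {"id": 1, "color": "#47abfc", "title": "I am node 1"},
    {"id": 15, "color": "#47abfc", "title": "I am node 15"},
]

# Equivalent of nodes_df.set_index('id').to_dict('index'):
node_attrs = {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}
```

<p>Passing this mapping to <code>nx.set_node_attributes</code> attaches <code>color</code> and <code>title</code> to each node, which <code>from_nx</code> then forwards to the Pyvis visualization.</p>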
|
<python><networkx><pyvis>
|
2024-03-28 18:20:35
| 0
| 6,421
|
Arturo Sbr
|
78,239,967
| 11,001,751
|
Efficiently Turn Matrix of Intersecting Routes Into Simplified Spatial Network (Graph)
|
<p>I have a matrix of real, detailed routes which I want to efficiently turn into a simple spatial network. Simple means that I don't care about the intricacies of local transport and possible intersections of routes near the start and end points. I do want to add major intersections outside the start and end points as nodes to the network. Below I give a simple example. My real data has 12,500 routes between 500 start and end points and is approx. 2 GB in size.</p>
<pre class="lang-r prettyprint-override"><code>library(fastverse)
#> -- Attaching packages --------------------------------------- fastverse 0.3.2 --
#> v data.table 1.15.0 v kit 0.0.13
#> v magrittr 2.0.3 v collapse 2.0.12
fastverse_extend(osrm, sf, sfnetworks, install = TRUE)
#> -- Attaching extension packages ----------------------------- fastverse 0.3.2 --
#> v osrm 4.1.1 v sfnetworks 0.6.3
#> v sf 1.0.16
largest_20_german_cities <- data.frame(
city = c("Berlin", "Stuttgart", "Munich", "Hamburg", "Cologne", "Frankfurt",
"Duesseldorf", "Leipzig", "Dortmund", "Essen", "Bremen", "Dresden",
"Hannover", "Nuremberg", "Duisburg", "Bochum", "Wuppertal", "Bielefeld", "Bonn", "Muenster"),
lon = c(13.405, 9.18, 11.575, 10, 6.9528, 8.6822, 6.7833, 12.375, 7.4653, 7.0131,
8.8072, 13.74, 9.7167, 11.0775, 6.7625, 7.2158, 7.1833, 8.5347, 7.1, 7.6256),
lat = c(52.52, 48.7775, 48.1375, 53.55, 50.9364, 50.1106, 51.2333, 51.34, 51.5139,
51.4508, 53.0758, 51.05, 52.3667, 49.4539, 51.4347, 51.4819, 51.2667, 52.0211, 50.7333, 51.9625))
# Unique routes
m <- matrix(1, 20, 20)
diag(m) <- NA
m[upper.tri(m)] <- NA
routes_ind <- which(!is.na(m), arr.ind = TRUE)
rm(m)
# Routes DF
routes <- data.table(from_city = largest_20_german_cities$city[routes_ind[, 1]],
to_city = largest_20_german_cities$city[routes_ind[, 2]],
duration = NA_real_,
distance = NA_real_,
geometry = list())
# Fetch Routes
i = 1L
for (r in mrtl(routes_ind)) {
route <- osrmRoute(ss(largest_20_german_cities, r[1], c("lon", "lat")),
ss(largest_20_german_cities, r[2], c("lon", "lat")), overview = "full")
set(routes, i, 3:5, fselect(route, duration, distance, geometry))
i <- i + 1L
}
routes %<>% st_as_sf(crs = st_crs(route))
routes_net = as_sfnetwork(routes, directed = FALSE)
print(routes_net)
#> # A sfnetwork with 20 nodes and 190 edges
#> #
#> # CRS: EPSG:4326
#> #
#> # An undirected simple graph with 1 component with spatially explicit edges
#> #
#> # A tibble: 20 × 1
#> geometry
#> <POINT [°]>
#> 1 (9.179999 48.7775)
#> 2 (13.405 52.52)
#> 3 (11.57486 48.13675)
#> 4 (10.00001 53.54996)
#> 5 (6.95285 50.9364)
#> 6 (8.68202 50.1109)
#> # ℹ 14 more rows
#> #
#> # A tibble: 190 × 7
#> from to from_city to_city duration distance geometry
#> <int> <int> <chr> <chr> <dbl> <dbl> <LINESTRING [°]>
#> 1 1 2 Stuttgart Berlin 390. 633. (9.179999 48.7775, 9.18005 48…
#> 2 2 3 Munich Berlin 356. 586. (11.57486 48.13675, 11.57486 …
#> 3 2 4 Hamburg Berlin 176. 288. (10.00001 53.54996, 10.0002 5…
#> # ℹ 187 more rows
plot(routes_net)
</code></pre>
<p><img src="https://i.imgur.com/WXCZLau.png" alt="" /></p>
<p><sup>Created on 2024-03-28 with <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex v2.0.2</a></sup></p>
<p>Regarding possible solutions I am open to any software (R, Python, QGIS etc.). I know in R there is <code>tidygraph</code> which allows me to do something like</p>
<pre class="lang-r prettyprint-override"><code>library(tidygraph)
routes_net_subdiv = convert(routes_net, to_spatial_subdivision)
</code></pre>
<p>But this seems to run forever even with this mock example. I have also seen ideas to use <a href="https://r-spatial.org/r/2019/09/26/spatial-networks.html" rel="nofollow noreferrer">GRASS's v.clean tool</a> to break up the geometry, but haven't tried that yet and a bit reluctant to install GRASS.</p>
<p>I think perhaps the best solution for performance is converting to S2 and comparing all linestrings individually using <code>s2_intersection()</code> and then turning this information into a graph somehow. But hoping for more elegant and performant solutions.</p>
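<p>For anyone sketching the brute-force baseline: the core primitive behind an <code>s2_intersection()</code>-style approach is a pairwise segment-intersection test over the route linestrings. A minimal pure-Python version (shown in Python since the question is open to any software; assumes general position, i.e. no collinear overlaps):</p>

```python
def segments_intersect(p, q, r, s):
    # True if segments p-q and r-s properly cross (general position assumed).
    def cross(o, a, b):
        # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    d1, d2 = cross(r, s, p), cross(r, s, q)
    d3, d4 = cross(p, q, r), cross(p, q, s)
    # The segments cross iff each one's endpoints lie on opposite sides of the other.
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)
```

<p>Each intersection found this way would become a new node, with the original linestrings split at those points to form the simplified edges.</p>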
|
<python><r><geospatial><qgis>
|
2024-03-28 17:30:13
| 1
| 1,379
|
Sebastian
|
78,239,855
| 5,546,046
|
How to specify the default project to use with the BigQuery python client?
|
<p>Writing a Python cloud function such as</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud.bigquery import Client
client = Client(
project="my_gcloud_project",
location="my_zone",
)
sql_statement = "SELECT * FROM `my_dataset.my_table`"
query_job = client.query(sql_statement)
query_job.result()
</code></pre>
<p>I get a</p>
<pre><code>ValueError: When default_project is not set, table_id must be a fully-qualified ID in standard SQL format, e.g., "project.dataset_id.table_id"
</code></pre>
<p>I understand that changing the query to <code>SELECT * FROM `my_project.my_dataset.my_table` </code> solves the error, but is there a way to tell the BigQuery client what the default project is, so we do not have to specify it in every query?</p>
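<p>One mechanism that may help (hedged; verify against your client-library version, and note it needs live credentials so it is shown as a configuration sketch only): the client accepts a <code>default_query_job_config</code> whose <code>default_dataset</code> supplies the project and dataset used to resolve unqualified table names:</p>

```python
from google.cloud import bigquery

# default_dataset takes "project.dataset"; queries may then use bare table names.
job_config = bigquery.QueryJobConfig(default_dataset="my_gcloud_project.my_dataset")
client = bigquery.Client(
    project="my_gcloud_project",
    location="my_zone",
    default_query_job_config=job_config,
)

# `my_table` is resolved against the default dataset above.
query_job = client.query("SELECT * FROM `my_table`")
```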
|
<python><google-bigquery>
|
2024-03-28 17:06:39
| 3
| 2,455
|
Noan Cloarec
|
78,239,756
| 749,973
|
register_buffer a dict object in PyTorch
|
<p>I thought this was a simple question but I couldn't find an answer.</p>
<p>I want a member variable of a PyTorch module to be saved/loaded with the model <code>state_dict</code>. I can do that in <code>__init__</code> with the following line.</p>
<pre><code> self.register_buffer('loss_weight', torch.tensor(loss_weight))
</code></pre>
<p>But what if loss_weight is a dict object? Is it allowed? If so, how can I convert it to a tensor?</p>
<p>When I tried, I got the error "Could not infer dtype of dict".</p>
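<p>A hedged sketch of one common workaround (not from the original post): <code>register_buffer</code> only accepts tensors, so a dict can instead be flattened into one buffer per key, e.g. <code>self.register_buffer(f"loss_weight_{k}", torch.tensor(v))</code> in a loop. The naming pattern, shown with plain Python values so it runs without PyTorch:</p>

```python
loss_weight = {"ce": 1.0, "dice": 0.5}  # hypothetical per-loss weights

# Each entry gets its own buffer name; inside a module you would call
# self.register_buffer(name, torch.tensor(value)) for each pair.
buffers = {f"loss_weight_{k}": v for k, v in loss_weight.items()}
```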
|
<python><pytorch><state-dict>
|
2024-03-28 16:46:48
| 1
| 20,660
|
Tae-Sung Shin
|
78,239,746
| 2,326,139
|
how in python create a type based on return type of a function
|
<p>I'm using a library that doesn't have any stubs, so I created a stub for it myself. But now I have an issue using the types I created for the stub: since they're not part of the original library, I can't import them.</p>
<p>here is an example</p>
<p>here is the stub:</p>
<pre class="lang-py prettyprint-override"><code>from typing import NamedTuple, Optional

class IdInfoTuple(NamedTuple):
    custom: bool
    chart_mode: int
    select: bool
    visible: bool

def id_info(id: str, /) -> Optional[IdInfoTuple]: ...
</code></pre>
<p>now in my code I need to pass around the <code>IdInfoTuple</code> but I can't</p>
<p>for example when I create a function like this</p>
<pre class="lang-py prettyprint-override"><code>def some_func(idInfo:<how to tell this is type of IdInfo?>):
pass
</code></pre>
<p>I thought if I can do something like typescript where we can create a type based on return type of a function like this</p>
<pre><code>type IdInfoTuple= ReturnType<typeof id_info>;
</code></pre>
<p>but I can't find how to do this in Python. Is it possible? If so, how? I want something that helps Pylance find the correct type. I'm not looking to discover the type at execution time; I want to strongly type my function while coding.</p>
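<p>For completeness, a hedged sketch: Python's type system has no static equivalent of TypeScript's <code>ReturnType&lt;&gt;</code>, so the static-typing route is simply to import <code>IdInfoTuple</code> from the stub-backed module (or define it in shared code). At runtime, though, <code>typing.get_type_hints</code> can recover the annotated return type:</p>

```python
from typing import NamedTuple, Optional, get_type_hints

class IdInfoTuple(NamedTuple):
    custom: bool
    chart_mode: int
    select: bool
    visible: bool

def id_info(id: str, /) -> Optional[IdInfoTuple]:
    ...

# Runtime introspection of the annotated return type:
ReturnT = get_type_hints(id_info)["return"]
```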
|
<python><python-typing>
|
2024-03-28 16:44:08
| 0
| 2,065
|
Mohammad Hossein Amri
|
78,239,584
| 4,894,593
|
Speed up finding the cell to which a point belongs in interpolation from curvilinear to rectangular grid
|
<p>I'm trying to set up a method for interpolating points from a curvilinear grid to a regular grid.
To do this, I'm trying to find out which cell of the curvilinear grid each point of the regular grid belongs to. The cells of the curvilinear grid are random quadrilaterals.</p>
<p>Grids are created as follows, for example, for an unfavorable case:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def circle(xn, yn):
"""Polar grid."""
R = (xn[-1, 0] - xn[0, 0]) / (2*np.pi)
return (yn + R)*np.sin(xn/R), (yn + R)*np.cos(xn/R)
_x= np.linspace(0, 0.1, 150)
_y = np.linspace(0, 0.1, 100)
# Curvilinear grid
up, vp = circle(*np.meshgrid(_x, _y, indexing='ij'))
# Cartesian grid
ui = np.linspace(up.min(), up.max(), 300)
vi = np.linspace(vp.min(), vp.max(), 200)
</code></pre>
<p>I'm able to find the correspondence between each grid using a classic quadrilateral point-finding algorithm:</p>
<pre><code>%%cython -f -c=-O2 -I./ --compile-args=-fopenmp --link-args=-fopenmp
cimport cython
from cython.parallel cimport prange
import numpy as np
cimport numpy as np
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
cpdef neighbors(double[:, ::1] xp, double[:, ::1] yp, double xi, double yi):
"""Clockwise winding Polygon :
[[xp[ix, iy], yp[ix, iy]],
[xp[ix, iy+1], yp[ix, iy+1]],
[xp[ix+1, iy+1], yp[ix+1, iy+1]],
[xp[ix+1, iy], yp[ix+1, iy]]])
"""
cdef Py_ssize_t nx = xp.shape[0]
cdef Py_ssize_t ny = xp.shape[1]
cdef Py_ssize_t ix, iy = 0
cdef Py_ssize_t tag = 0
cdef double xmin, xmax, ymin, ymax
cdef double x1, x2, x3, x4, y1, y2, y3, y4
for ix in range(nx-1):
for iy in range(ny-1):
x1 = xp[ix, iy]
x2 = xp[ix, iy+1]
x3 = xp[ix+1, iy+1]
x4 = xp[ix+1, iy]
y1 = yp[ix, iy]
y2 = yp[ix, iy+1]
y3 = yp[ix+1, iy+1]
y4 = yp[ix+1, iy]
if xi - min(x1, x2, x3, x4) < 0.:
continue
if yi - min(y1, y2, y3, y4) < 0.:
continue
# Check that point (xi, yi) is on the same side of each edge of the quadrilateral
if not (xi - x1) * (y2 - y1) - (x2 - x1) * (yi - y1) > 0:
continue
if not (xi - x2) * (y3 - y2) - (x3 - x2) * (yi - y2) > 0:
continue
if not (xi - x3) * (y4 - y3) - (x4 - x3) * (yi - y3) > 0 :
continue
if not (xi - x4) * (y1 - y4) - (x1 - x4) * (yi - y4) > 0:
continue
tag = 1
break
if tag == 1:
break
if tag == 1:
return (ix, iy), (ix, iy+1), (ix+1, iy+1), (ix+1, iy)
return None
</code></pre>
<p>And it works:</p>
<pre><code>fig, axes = plt.subplots(1, 1, figsize=(6, 6))
for i in np.arange(0, ui.shape[0]):
axes.axvline(ui[i], color='k', linewidth=0.1)
for i in np.arange(0, vi.shape[0]):
axes.axhline(vi[i], color='k', linewidth=0.1)
for i in np.arange(0, vp.shape[1]):
axes.plot(up[:, i], vp[:, i], 'k', linewidth=0.5)
for i in np.arange(0, up.shape[0]):
axes.plot(up[i, :], vp[i, :], 'k', linewidth=0.5)
for iu, iv in ((180, 120), (100, 90), (80, 40), (10, 10)):
axes.plot(ui[iu], vi[iv], 'ro')
idx = neighbors(up, vp, ui[iu], vi[iv])
if idx:
for _idx in idx:
axes.plot(up[_idx], vp[_idx], 'go')
plt.show()
</code></pre>
<p>The thing is, on my system, each point search takes an average of 15 us for this grid (7 ms for a 2000x2000 grid), i.e. for the search of 2000x2000 points, an estimated time of 28000 seconds (not to mention the 3d case :)).</p>
<p>I think I can save time by avoiding searches for points that lie outside the curvilinear grid, but I can't find a simple indicator to skip those points.</p>
<p>I think there's a better global method for this kind of search, but I'm not aware of it.</p>
<p>How could I speed things up using Cython/numpy?</p>
<p>By using a simple dichotomy to target the x's, it is possible to save quite a lot of calculation time. On the other hand, the code presented below can lead to errors if x is not monotonic:</p>
<pre><code>@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
cpdef bint is_monotonic(double[:, ::1] v):
cdef Py_ssize_t i
for i in range(v.shape[0] - 1):
if (v[i, 1] > v[i, 0]) != (v[i+1, 1] > v[i+1, 0]):
return False
return True
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
cpdef (double, double) minmax(double[:] arr):
cdef double min = arr[0]
cdef double max = arr[0]
cdef Py_ssize_t i
cdef Py_ssize_t n = arr.shape[0]
for i in range(1, n):
if arr[i] < min:
min = arr[i]
elif arr[i] > max:
max = arr[i]
return min, max
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
cpdef (Py_ssize_t, Py_ssize_t) dichotomy(double[:, ::1] xp, double xi, Py_ssize_t offset=1):
cdef Py_ssize_t nx = xp.shape[0]
cdef Py_ssize_t istart = 0
cdef Py_ssize_t istop = nx - 1
cdef Py_ssize_t middle = (istart + istop) // 2
cdef double xmin1, xmin2, xmax1, xmax2
cdef double xmin, xmax
while (istop - istart) > 3:
middle = (istart + istop) // 2
xmin1, xmax1 = minmax(xp[istart, :])
xmin2, xmax2 = minmax(xp[middle, :])
xmin = min(xmin1, xmax1, xmin2, xmax2)
xmax = max(xmin1, xmax1, xmin2, xmax2)
if xmin <= xi <= xmax:
if middle - istart < 3:
return istart - offset, middle + offset
istop = middle
else:
istart = middle
xmin1, xmax1 = minmax(xp[middle, :])
xmin2, xmax2 = minmax(xp[istop, :])
xmin = min(xmin1, xmax1, xmin2, xmax2)
xmax = max(xmin1, xmax1, xmin2, xmax2)
if xmin <= xi <= xmax:
return middle - offset, istop + offset
return -1, -1
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
cpdef neighbor(double[:, ::1] xp, double[:, ::1] yp, double xi, double yi):
(...)
cdef Py_ssize_t istart = 0
cdef Py_ssize_t istop = nx - 2
if is_monotonic(xp):
istart, istop = dichotomy(xp, xi)
if istart == -1:
return None
for ix in range(istart, istop + 1):
for iy in range(ny-1):
(...)
</code></pre>
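<p>For reference, the same-side test at the heart of the Cython loop, as a standalone pure-Python function (a sketch for checking the geometry logic only; assumes a convex, clockwise-wound quadrilateral, matching the docstring above):</p>

```python
def point_in_quad(quad, x, y):
    # quad: four (x, y) vertices in clockwise order.
    # The point is inside iff the cross product (P - Vi) x (Vj - Vi) is
    # positive for every edge Vi -> Vj, i.e. P lies on the same side of all edges.
    for (x1, y1), (x2, y2) in zip(quad, quad[1:] + quad[:1]):
        if (x - x1) * (y2 - y1) - (x2 - x1) * (y - y1) <= 0:
            return False
    return True
```

<p>One commonly used global strategy (untested here) is to shortlist candidate cells with a spatial index such as <code>scipy.spatial.cKDTree</code> on the cell centroids, and apply the exact test above only to the shortlist.</p>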
|
<python><cython>
|
2024-03-28 16:15:34
| 0
| 1,080
|
Ipse Lium
|
78,239,484
| 9,635,098
|
Why does Dataframe.to_sql() slow down after certain amount of rows?
|
<p>I have a very large <code>Pandas</code> Dataframe (~9 million records, 56 columns) which I'm trying to load into an MSSQL table using <code>Dataframe.to_sql()</code>. Importing the whole Dataframe in one statement often leads to memory-related errors.</p>
<p>To cope with this, I'm looping through the Dataframe in batches of 100K rows, and importing a batch at a time. This way I no longer get any errors, but the code slows down dramatically after about 5.8 million records. The code I'm using:</p>
<pre><code>maxrow = df.shape[0]
stepsize = 100000
for i in range(0, maxrow, stepsize):
batchstart = datetime.datetime.now()
if i == 0:
if_exists = 'replace'
else:
if_exists = 'append'
df_import = df.iloc[i:i+stepsize]
df_import.to_sql('tablename',
engine,
schema='tmp',
if_exists=if_exists,
index=False,
dtype=dtypes
)
</code></pre>
<p>I've timed the batches, and there is a clear breaking point in speed:
<a href="https://i.sstatic.net/MFJCz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MFJCz.png" alt="Times per 200K rows" /></a>
These results are basically the same for batches of 50k, 100k and 200k rows. It takes about 40 minutes to upload 6 million records, and another 2 hours and 20 minutes to upload the next 3 million.</p>
<p>My thinking was that it was either due to the size of the MSSQL table, or something being cached/saved after each upload. Because of that I've tried pushing the Dataframe to two different tables. I've also tried something like <code>expunge_all()</code> on the <code>SQLALchemy session</code>, after each upload. Both to no effect.</p>
<p>Manually stopping imports after 5 million records and restarting from 5 million with a new engine object also hasn't helped.</p>
<p>I'm all out of ideas what might be the cause of the process slowing down so drastically, and would really appreciate help.</p>
<p><strong>UPDATE</strong></p>
<p>As a last resort I've reversed the loop, uploading parts of the Dataframe starting at the highest index, looping down.</p>
<p>This has basically reversed the times per batch. So it seems it is the data itself that is different/bigger further down the Dataframe, not the connection being overloaded or the SQL table getting too large.</p>
<p><a href="https://i.sstatic.net/TUv4C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TUv4C.png" alt="Batchtimes including reverse loop" /></a></p>
<p>Thanks to everyone trying to help, but it seems I need to go through the data to see what causes this.</p>
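<p>A quick way to test the "later rows are bigger" hypothesis without touching the database (a diagnostic sketch; <code>sys.getsizeof</code> gives rough in-memory sizes, not exact wire sizes):</p>

```python
import sys

def batch_payload_bytes(rows):
    # Rough in-memory size of one batch (an iterable of row tuples/lists).
    return sum(sys.getsizeof(value) for row in rows for value in row)

# Compare early vs. late slices of the Dataframe, e.g.:
# early = batch_payload_bytes(df.iloc[:100_000].itertuples(index=False))
# late = batch_payload_bytes(df.iloc[-100_000:].itertuples(index=False))
```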
|
<python><sql-server><pandas><sqlalchemy>
|
2024-03-28 16:00:20
| 1
| 406
|
Beek
|
78,239,380
| 14,230,633
|
Polars apply same custom function to multiple columns in group by
|
<p>What's the best way to apply a custom function to multiple columns in Polars? Specifically I need the function to reference another column in the dataframe. Say I have the following:</p>
<pre><code>df = pl.DataFrame({
'group': [1,1,2,2],
'other': ['a', 'b', 'a', 'b'],
'num_obs': [10, 5, 20, 10],
'x': [1,2,3,4],
'y': [5,6,7,8],
})
</code></pre>
<p>And I want to group by <code>group</code> and calculate an average of <code>x</code> and <code>y</code>, weighted by <code>num_obs</code>. I can do something like this:</p>
<pre><code>variables = ['x', 'y']
df.group_by('group').agg((pl.col(var) * pl.col('num_obs')).sum()/pl.col('num_obs').sum() for var in variables)
</code></pre>
<p>but I'm wondering if there's a better way. Also, I don't know how to add other aggregations to this approach: is there a way I could also add <code>pl.sum('num_obs')</code>?</p>
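<p>The weighted-average expression can be sanity-checked against a plain-Python computation on the example data above (for group 1, <code>x</code> should give <code>(1*10 + 2*5) / (10 + 5)</code>):</p>

```python
# Same rows as the example dataframe above, as plain dicts.
rows = [
    {"group": 1, "num_obs": 10, "x": 1, "y": 5},
    {"group": 1, "num_obs": 5, "x": 2, "y": 6},
    {"group": 2, "num_obs": 20, "x": 3, "y": 7},
    {"group": 2, "num_obs": 10, "x": 4, "y": 8},
]

def weighted_mean(rows, group, var, weight="num_obs"):
    # sum(var * weight) / sum(weight) within one group, mirroring the Polars expression.
    sub = [r for r in rows if r["group"] == group]
    return sum(r[var] * r[weight] for r in sub) / sum(r[weight] for r in sub)
```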
|
<python><dataframe><python-polars>
|
2024-03-28 15:46:13
| 1
| 567
|
dfried
|
78,239,173
| 4,647,519
|
Python Dash App is Not Rendering in Browser
|
<p>I tried to run the following example code from <a href="https://dash.plotly.com/tutorial" rel="nofollow noreferrer">https://dash.plotly.com/tutorial</a> but it just gives me a blank browser tab which seems to infinitely load:</p>
<pre><code>from dash import Dash, html
app = Dash(__name__)
app.layout = html.Div([
html.Div(children='Hello World')
])
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>Is it blocked by something? How to unblock it?</p>
|
<python><plotly-dash>
|
2024-03-28 15:08:40
| 2
| 545
|
tover
|
78,238,984
| 11,092,636
|
Lower triangle mask with seaborn clustermap bug
|
<p>To reproduce the bug you can use this dataframe (made with the <code>.to_clipboard</code> method of the <code>pandas.DataFrame</code> object, so you can load it easily with <code>pandas.read_clipboard</code>).</p>
<pre class="lang-py prettyprint-override"><code> 01:01 01:02 01:03 01:04 01:05 01:06 01:07 01:10 01:12 01:21 02:01 03:01 03:02 03:03 04:01 04:02 04:04 05:01 05:02 05:03 05:05 05:08 05:09 05:11 05:13 06:01
01:01 0 0 0 1 1 0 1 0 1 0 4 4 4 4 5 5 5 6 6 6 6 6 6 6 6 5
01:02 0 0 0 1 1 0 1 0 1 0 4 4 4 4 5 5 5 6 6 6 6 6 6 6 6 5
01:03 0 0 0 1 1 0 1 0 1 0 4 4 4 4 5 5 5 6 6 6 6 6 6 6 6 5
01:04 1 1 1 0 0 1 0 1 0 1 5 5 5 5 6 6 6 7 7 7 7 7 7 7 7 6
01:05 1 1 1 0 0 1 0 1 0 1 5 5 5 5 6 6 6 7 7 7 7 7 7 7 7 6
01:06 0 0 0 1 1 0 1 0 1 0 4 4 4 4 5 5 5 6 6 6 6 6 6 6 6 5
01:07 1 1 1 0 0 1 0 1 0 1 5 5 5 5 6 6 6 7 7 7 7 7 7 7 7 6
01:10 0 0 0 1 1 0 1 0 1 0 4 4 4 4 5 5 5 6 6 6 6 6 6 6 6 5
01:12 1 1 1 0 0 1 0 1 0 1 5 5 5 5 6 6 6 7 7 7 7 7 7 7 7 6
01:21 0 0 0 1 1 0 1 0 1 0 4 4 4 4 5 5 5 6 6 6 6 6 6 6 6 5
02:01 4 4 4 5 5 4 5 4 5 4 0 2 2 2 3 3 3 4 4 4 4 4 4 4 4 3
03:01 4 4 4 5 5 4 5 4 5 4 2 0 0 0 3 3 3 4 4 4 4 4 4 4 4 3
03:02 4 4 4 5 5 4 5 4 5 4 2 0 0 0 3 3 3 4 4 4 4 4 4 4 4 3
03:03 4 4 4 5 5 4 5 4 5 4 2 0 0 0 3 3 3 4 4 4 4 4 4 4 4 3
04:01 5 5 5 6 6 5 6 5 6 5 3 3 3 3 0 0 0 1 1 1 1 1 1 1 1 0
04:02 5 5 5 6 6 5 6 5 6 5 3 3 3 3 0 0 0 1 1 1 1 1 1 1 1 0
04:04 5 5 5 6 6 5 6 5 6 5 3 3 3 3 0 0 0 1 1 1 1 1 1 1 1 0
05:01 6 6 6 7 7 6 7 6 7 6 4 4 4 4 1 1 1 0 0 0 0 0 0 0 0 1
05:02 6 6 6 7 7 6 7 6 7 6 4 4 4 4 1 1 1 0 0 0 0 0 0 0 0 1
05:03 6 6 6 7 7 6 7 6 7 6 4 4 4 4 1 1 1 0 0 0 0 0 0 0 0 1
05:05 6 6 6 7 7 6 7 6 7 6 4 4 4 4 1 1 1 0 0 0 0 0 0 0 0 1
05:08 6 6 6 7 7 6 7 6 7 6 4 4 4 4 1 1 1 0 0 0 0 0 0 0 0 1
05:09 6 6 6 7 7 6 7 6 7 6 4 4 4 4 1 1 1 0 0 0 0 0 0 0 0 1
05:11 6 6 6 7 7 6 7 6 7 6 4 4 4 4 1 1 1 0 0 0 0 0 0 0 0 1
05:13 6 6 6 7 7 6 7 6 7 6 4 4 4 4 1 1 1 0 0 0 0 0 0 0 0 1
06:01 5 5 5 6 6 5 6 5 6 5 3 3 3 3 0 0 0 1 1 1 1 1 1 1 1 0
</code></pre>
<p>It is a 26x26 matrix.
I used the following code drawing inspiration from this SO post (<a href="https://stackoverflow.com/questions/67879908/lower-triangle-mask-with-seaborn-clustermap">Lower triangle mask with seaborn clustermap</a>):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
matrix = matrix.astype(int)
# Generate a clustermap
cg = sns.clustermap(matrix, annot=True, cmap = "Blues", cbar_pos=(.09, .6, .05, .2))
# Mask the lower triangle
mask = np.tril(np.ones_like(matrix))
values = cg.ax_heatmap.collections[0].get_array().reshape(matrix.shape)
new_values = np.ma.array(values, mask=mask)
cg.ax_heatmap.collections[0].set_array(new_values)
cg.ax_row_dendrogram.set_visible(False)
cg.ax_col_dendrogram.set_visible(False)
cg.savefig("dqa_eplet_distances_abv.png", dpi=600)
</code></pre>
<p>However the result I obtain:
<a href="https://i.sstatic.net/slr3w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/slr3w.png" alt="enter image description here" /></a></p>
<p>Why is this not the triangular version of the matrix I expected? The MRE in the original SO post is almost the same as my code and I cannot understand where I went wrong.</p>
<p>I'm using <code>seaborn-0.13.2</code> and <code>Python 3.11.1</code>.</p>
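<p>For reference, a stdlib sketch of the lower-triangle mask construction (equivalent in shape to <code>np.tril(np.ones_like(matrix))</code>, with truthy entries marking the cells to hide):</p>

```python
def tril_mask(n):
    # True on and below the diagonal (cells to mask), False above it.
    return [[col <= row for col in range(n)] for row in range(n)]
```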
|
<python><pandas>
|
2024-03-28 14:40:28
| 1
| 720
|
FluidMechanics Potential Flows
|
78,238,983
| 18,178,867
|
Font weights in Matplotlib
|
<p>I have a figure and I want to make the axes labels bold. The problem is that the text contains mathematical expressions that do not take the bold font face. Other parts are OK.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score

fig = plt.figure(figsize=(4,3))
predicts = np.random.random(1000)*10e9
plt.plot(predicts,predicts, ".", ms=6)
plt.xlabel(r"Actual NPV (\$) x$10^9$", fontsize=8, fontweight="bold")
plt.ylabel(r"Predicted NPV (\$) x$10^9$", fontsize=8, fontweight="bold")
# plt.rcParams["font.weight"] = "regular"
plt.yticks(np.arange(9.4, 10.2, 0.2))
R2 = np.round(r2_score(predicts, predicts),2)
plt.text(x=9.4, y=10, s=f"$R^2$={R2}")
plt.rcParams["font.size"] = 8
fig.tight_layout()
</code></pre>
<p>The output is shown in the attached image.
<a href="https://i.sstatic.net/SRvid.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SRvid.jpg" alt="enter image description here" /></a></p>
<p>How can I make 10 to the power of 9 bold, too?</p>
<p>I have tried fontweight in axis labels, but it didn't work.</p>
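<p>One approach (hedged; assumes standard mathtext, not an external TeX engine): <code>fontweight</code> applies only to the non-math portion of a label, while text inside <code>$...$</code> follows mathtext's own styling, so the exponent can be bolded explicitly with <code>\mathbf</code>:</p>

```python
# Bold the mathtext segment itself; fontweight="bold" covers the plain text.
xlabel = r"Actual NPV (\$) x$\mathbf{10^{9}}$"
ylabel = r"Predicted NPV (\$) x$\mathbf{10^{9}}$"
# e.g. plt.xlabel(xlabel, fontsize=8, fontweight="bold")
```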
|
<python><matplotlib>
|
2024-03-28 14:40:16
| 1
| 443
|
Reza
|
78,238,743
| 6,221,742
|
Implement filtering in RetrievalQA chain
|
<p>I have been working on implementing the <a href="https://cloud.google.com/blog/products/databases/using-pgvector-llms-and-langchain-with-google-cloud-databases/" rel="nofollow noreferrer">tutorial</a> using RetrievalQA from LangChain with an LLM from the Azure OpenAI API.
I've made progress with my implementation, and below is the code snippet I've been working on:</p>
<pre><code>import os
# env variables
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "<YOUR_API_VERSION>"
os.environ["OPENAI_API_KEY"] = "<YOUR_API_KEY>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<SPACE_NAME>.openai.azure.com/"
# libary imports
import pandas as pd
from langchain.prompts import PromptTemplate
from langchain.chains.router.llm_router import LLMRouterChain,RouterOutputParser
from langchain.embeddings import GPT4AllEmbeddings
from langchain.llms import AzureOpenAI
from langchain.chat_models import AzureChatOpenAI
from langchain.chains import RetrievalQA
from langchain.chains.qa_with_sources.retrieval import RetrievalQAWithSourcesChain
from langchain.document_loaders import DataFrameLoader
from langchain.text_splitter import (RecursiveCharacterTextSplitter,
CharacterTextSplitter)
from langchain.vectorstores import Chroma
from langchain.vectorstores import utils as chromautils
from langchain.embeddings import (HuggingFaceEmbeddings, OpenAIEmbeddings,
SentenceTransformerEmbeddings)
from langchain.callbacks import get_openai_callback
#
# toy = 'Search in the documents and find a toy that teaches about color to kids'
toy = 'Search in the documents and find a toy with cards that has monsters'
all_docs = pd.read_csv(data) # data is the dataset from the tutorial (see above)
print('Model init \u2713')
print('----> Azure OpenAI \u2713')
llm_open = AzureChatOpenAI(
model="GPT3",
max_tokens = 100
)
print('Create docs \u2713')
loader = DataFrameLoader(all_docs,
page_content_column='description' # column description in data
)
my_docs = loader.load()
print('Create splits \u2713')
text_splitter = CharacterTextSplitter(chunk_size=512,
chunk_overlap=0
)
all_splits = text_splitter.split_documents(my_docs)
print('Init embeddings \u2713')
chroma_docs = chromautils.filter_complex_metadata(all_splits)
# embeddings = HuggingFaceEmbeddings()
my_model_name = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
embeddings = SentenceTransformerEmbeddings(model_name=my_model_name)
print('Create Chromadb \u2713')
vectorstore = Chroma.from_documents(all_splits,
embeddings,
# metadatas=[{"source": f"{i}-pl"} for i in \
# range(len(all_splits))]
)
print('Create QA chain \u2713')
qa_chain = RetrievalQA.from_chain_type(
llm=llm_open,
chain_type="stuff",
retriever=vectorstore.as_retriever(search_kwargs={"k": 10}),
verbose=True,)
print('*** YOUR ANSWER: ***')
with get_openai_callback() as cb:
llm_res = qa_chain.run(toy)
plpy.notice(f'{llm_res}')
plpy.notice(f'Total Tokens: {cb.total_tokens}')
plpy.notice(f'Prompt Tokens: {cb.prompt_tokens}')
plpy.notice(f'Completion Tokens: {cb.completion_tokens}')
    plpy.notice(f'Total Cost (USD): ${cb.total_cost}')
</code></pre>
<p>In the tutorial, there's a section that filters products based on minimum and maximum prices using a SQL query. However, I'm unsure how to achieve similar functionality using RetrievalQA in Langchain while also retrieving the sources.
The specific section in the tutorial that I'm referring to is:</p>
<pre><code>results = await conn.fetch("""
WITH vector_matches AS (
SELECT product_id,
1 - (embedding <=> $1) AS similarity
FROM product_embeddings
WHERE 1 - (embedding <=> $1) > $2
ORDER BY similarity DESC
LIMIT $3
)
SELECT product_name,
list_price,
description
FROM products
WHERE product_id IN (SELECT product_id FROM vector_matches)
AND list_price >= $4 AND list_price <= $5
""",
qe, similarity_threshold, num_matches, min_price, max_price)
</code></pre>
<p>How can I implement this filtering functionality using the RetrievalQA chain in LangChain while also retrieving the sources associated with the filtered products?</p>
|
<python><azure><langchain><large-language-model><azure-openai>
|
2024-03-28 14:03:12
| 1
| 339
|
AndCh
|
78,238,607
| 14,894,361
|
How to detect changes in custom types in sqlalchemy
|
<p>I am working on custom column types in SQLAlchemy using <code>TypeDecorator</code>. I store my data as <code>JSONB</code> in a Postgres DB, but in code I serialize and deserialize it through a data class. However, when I change a field of that data class, SQLAlchemy does not detect any change on that column. How can I make it detect such changes?</p>
<p><strong>dataclass :</strong></p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class ConfigFlags(DataClassJsonMixin):
abc: bool = True
</code></pre>
<p><strong>custom type def :</strong></p>
<pre class="lang-py prettyprint-override"><code>class ConfigFlagType(types.TypeDecorator):
"""Type for config flags."""
impl = JSONB
def process_result_value( # noqa: ANN201
self,
value: Optional[dict],
dialect, # noqa: ANN001
):
if not isinstance(value, dict):
msg = "Value must be a dictionary"
raise ValueError(msg) # noqa: TRY004
return ConfigFlags.from_dict(value)
def process_bind_param( # noqa: ANN201
self,
value: Optional[ConfigFlags],
dialect, # noqa: ANN001
):
if not isinstance(value, ConfigFlags):
msg = "Value must be a ConfigFlags"
raise ValueError(msg) # noqa: TRY004
return value.to_dict()
</code></pre>
<p><strong>db model :</strong></p>
<pre class="lang-py prettyprint-override"><code>class ConvHistories(CBase):
"""conv_histories model."""
__tablename__ = "test"
id = Column(Integer, primary_key=True, autoincrement=True)
config_flags: ConfigFlags = Column(
ConfigFlagType,
)
def find_one(self, conv_id: int) -> "ConvHistories":
return self.query.filter(
ConvHistories.id == conv_id,
).first()
</code></pre>
<pre class="lang-py prettyprint-override"><code>res = ConvHistories(session=session).find_one(conv_id=3)
if res.config_flags:
res.config_flags.abc = False
session.add(res)
session.commit()
</code></pre>
<p>But SQLAlchemy does not detect the change in the <code>config_flags</code> column.
How can I make it pick up this change?</p>
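<p>To convince myself why this happens, here is a plain-Python sketch (the <code>TrackedColumn</code>/<code>Row</code> names are my own, not SQLAlchemy internals) of assignment-based change tracking: mutating a field inside the dataclass never triggers the descriptor, only reassigning the whole attribute does.</p>

```python
from dataclasses import dataclass

@dataclass
class ConfigFlags:
    abc: bool = True

class TrackedColumn:
    """Descriptor that notices *assignments* to the attribute."""
    def __set_name__(self, owner, name):
        self.storage = "_" + name
    def __get__(self, obj, objtype=None):
        return getattr(obj, self.storage, None)
    def __set__(self, obj, value):
        obj._dirty = True                    # only assignment marks the row dirty
        setattr(obj, self.storage, value)

class Row:
    config_flags = TrackedColumn()
    def __init__(self):
        self.config_flags = ConfigFlags()
        self._dirty = False                  # reset after the initial "load"

row = Row()
row.config_flags.abc = False                 # in-place mutation: __set__ never runs
print(row._dirty)                            # False -> no UPDATE would be issued

row.config_flags = ConfigFlags(abc=False)    # reassignment IS seen
print(row._dirty)                            # True
```

<p>If this mirrors what SQLAlchemy does, then either reassigning a fresh <code>ConfigFlags</code> or explicitly flagging the attribute (e.g. with <code>sqlalchemy.orm.attributes.flag_modified</code>) would be needed — but I'd like to know the idiomatic way for a <code>TypeDecorator</code>.</p>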
|
<python><postgresql><sqlalchemy>
|
2024-03-28 13:39:26
| 2
| 320
|
Cdaman
|
78,238,508
| 616,460
|
setup.py code to run on `pip install` but not on `python -m build`
|
<p>I have some code in a <em>setup.py</em> file that checks for a third-party SDK that must be manually installed before the application is installed:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from setuptools import setup
# ---- BEGIN SDK CHECK
try:
import pyzed.sl
except ImportError:
print("ERROR: You must manually install the ZED Python SDK first.")
sys.exit(1)
zed_version = [int(p) for p in pyzed.sl.Camera.get_sdk_version().split(".")]
if zed_version < [4,0,8]:
print("ERROR: ZED SDK 4.0.8 or higher is required, you must update the ZED SDK first.")
sys.exit(1)
# ---- END SDK CHECK
setup(...)
</code></pre>
<p>However, I only want this code to execute when the package is installed, <em>not</em> when the package source tarball / wheel is built. That is:</p>
<ul>
<li>I want it to execute when <code>pip install ...</code> is run.</li>
<li>I do not want it to execute when <code>python -m build -x ...</code> is run.</li>
</ul>
<p>Is there some way to accomplish this? E.g. is there a way to determine if <em>setup.py</em> was invoked via <code>pip install</code> but not via <code>python -m build</code>? Or is there some other way to only run checks on install?</p>
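<p>One workaround I have sketched (the <code>SKIP_SDK_CHECK</code> variable is my own invention, not a pip or build mechanism) is to gate the check behind an environment variable that I would set only when building the distribution:</p>

```python
import os

def sdk_check_enabled() -> bool:
    """Run the SDK check unless the (hypothetical) SKIP_SDK_CHECK opt-out is set."""
    return os.environ.get("SKIP_SDK_CHECK", "0") != "1"

# Simulating the two invocations:
os.environ.pop("SKIP_SDK_CHECK", None)   # plain `pip install .`
print(sdk_check_enabled())               # True  -> check runs

os.environ["SKIP_SDK_CHECK"] = "1"       # `SKIP_SDK_CHECK=1 python -m build -x .`
print(sdk_check_enabled())               # False -> check skipped
```

<p>This inverts the responsibility (whoever builds must remember to set the variable), which is why I'm asking whether pip itself exposes a reliable signal.</p>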
|
<python><pip><setup.py><python-packaging>
|
2024-03-28 13:24:05
| 1
| 40,602
|
Jason C
|
78,238,444
| 307,283
|
Add proxy configuration to tracking_uri in MLFlow
|
<p>I'm using MLflow and I'm behind a proxy. How can I add the proxy information to the tracking_uri for the <code>MlflowClient</code> call? I tried using environment variables, but that didn't work either:</p>
<pre><code>os.environ["MLFLOW_TRACKING_URI"] = "https://mlflow-sandbox.dev.server"
os.environ["HTTPS_PROXY"] = "http://web.prod.proxy.com:4200"
os.environ["HTTP_PROXY"]="http://web.prod.proxy.com:4200"
</code></pre>
<p>with and without authentication.</p>
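<p>As a sanity check (standard library only, nothing MLflow-specific), I verified that the variables are at least visible to the process the way urllib — and, as far as I know, the <code>requests</code> library that MLflow uses for HTTP — would read them:</p>

```python
import os
import urllib.request

os.environ["HTTPS_PROXY"] = "http://web.prod.proxy.com:4200"
os.environ["HTTP_PROXY"] = "http://web.prod.proxy.com:4200"

# getproxies() reports the proxy mapping derived from the environment.
proxies = urllib.request.getproxies()
print(proxies["https"])   # http://web.prod.proxy.com:4200
```

<p>The mapping looks right, so the variables appear to be set correctly; note they must be set before the client is created, and a <code>NO_PROXY</code> variable can override them for specific hosts.</p>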
|
<python><network-programming><proxy><mlflow>
|
2024-03-28 13:12:39
| 0
| 20,221
|
Ivan
|
78,238,366
| 9,251,158
|
OpenTimelineIO FCP XML import fails to find clips
|
<p>I have several video projects in Final Cut Pro and Adobe Premiere that I want to convert to Kdenlive. I found <a href="https://github.com/AcademySoftwareFoundation/OpenTimelineIO" rel="nofollow noreferrer">OpenTimelineIO</a> and, through trial-and-error and thanks to StackOverflow, got this code:</p>
<pre class="lang-py prettyprint-override"><code>import opentimelineio as otio
fp = "/path/to/file/ep4_fixed.fcpxml"
output_fp = fp.replace(".fcpxml", ".kdenlive")
assert fp != output_fp
# Load the FCP XML file
input_otio = otio.adapters.read_from_file(fp)
# Extract the first Timeline from the SerializableCollection
timeline = next((item for item in input_otio if isinstance(item, otio.schema.Timeline)), None)
for item in input_otio:
if isinstance(item, otio.schema.Timeline):
print("Timeline found")
print(item)
else:
print("Not a timeline: ", item)
if timeline:
# Save the timeline as a Kdenlive project
otio.adapters.write_to_file(timeline, output_fp, adapter_name="kdenlive")
else:
print("No timeline found in the input file.")
</code></pre>
<p>The result in Kdenlive suggests that the layout and timecodes of clips are correct, but all the clips are missing:</p>
<p><a href="https://i.sstatic.net/39iDk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/39iDk.png" alt="project in Kdenlive" /></a></p>
<p>And yet I can confirm manually that several of the clips exist.</p>
<p>One example asset in the XML file is this, with a link to the clip in an external drive:</p>
<pre><code> <asset id="r117" name="Duck" uid="8E011DED69D190CFE0018A6D24E90A31" start="0s" duration="518344/44100s" hasAudio="1" audioSources="1" audioChannels="2" audioRate="44100">
<media-rep kind="original-media" sig="8E011DED69D190CFE0018A6D24E90A31" src="file:///Volumes/Video/BulgyPT.fcpbundle/ep1-6/Original%20Media/Duck.mp3">
<bookmark>Ym9va8AEA...EAAAAAAAAAA==</bookmark>
</media-rep>
<metadata>
<md key="com.apple.proapps.mio.ingestDate" value="2020-07-05 18:19:58 +0100"/>
</metadata>
</asset>
</code></pre>
<p>The corresponding item for this asset in the line <code>print(item)</code> is this:</p>
<pre><code>otio.schema.Clip(name='Duck', media_reference=otio.schema.MissingReference(name='', available_range=None, available_image_bounds=None, metadata={}), source_range=otio.opentime.TimeRange(start_time=otio.opentime.RationalTime(value=0, rate=25), duration=otio.opentime.RationalTime(value=170, rate=25)), metadata={})
</code></pre>
<p>So the OpenTimelineIO adapter is not finding the files for any assets.</p>
<p>I tried replacing percent-encoded characters, such as <code>%20</code> with a space, but I get the same result.</p>
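<p>Since <code>%20</code> is URL percent-encoding rather than an HTML entity, a cleaner way to turn the <code>src</code> attribute into a filesystem path is the standard library (a sketch using the asset URL from above):</p>

```python
from urllib.parse import unquote, urlparse

src = "file:///Volumes/Video/BulgyPT.fcpbundle/ep1-6/Original%20Media/Duck.mp3"

# urlparse() strips the file:// scheme; unquote() decodes %20 and friends.
path = unquote(urlparse(src).path)
print(path)   # /Volumes/Video/BulgyPT.fcpbundle/ep1-6/Original Media/Duck.mp3
```

<p>Even with correctly decoded paths, the adapter still reports <code>MissingReference</code> for every clip, so the decoding alone does not appear to be the root cause.</p>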
<p><a href="https://www.dropbox.com/scl/fi/92ujpyxo86w56m4qpuh8b/ep4_fixed.fcpxml?rlkey=8kpom8mmebzup9uo67ue2099m&dl=0" rel="nofollow noreferrer">Here is a Dropbox link</a> to a complete FCP XML file that causes this error.</p>
<p>How can I adapt the XML file or monkey-patch the code to find these clips in the external drive?</p>
<p>Note: this is a follow-up question to, and different from <a href="https://stackoverflow.com/questions/77846715/monkey-patching-opentimelineio-adapter-to-import-final-cut-pro-xml">Monkey-patching OpenTimelineIO adapter to import Final Cut Pro XML</a> and <a href="https://stackoverflow.com/questions/77974108/opentimelineio-error-from-exporting-a-final-cut-pro-file-with-the-kdenlive-adapt/77991165?noredirect=1#comment137922277_77991165">OpenTimelineIO error from exporting a Final Cut Pro file with the Kdenlive adapter</a> .</p>
|
<python><python-3.x><xml><finalcut>
|
2024-03-28 13:02:29
| 0
| 4,642
|
ginjaemocoes
|
78,238,308
| 12,271,381
|
Find Gradient Magnitude using skimage.feature.hog module
|
<p>I am trying to print the gradient magnitude at the pixel with value 100 in the array below:</p>
<pre><code>img=[[150,50,121],[12,100,25],[201,243,244]]
</code></pre>
<p>Below is the code I tried</p>
<pre><code>from skimage.feature import hog
from skimage import data, exposure
fdt, hog_image = hog([[150,50,121],[12,100,25],[201,243,244]], orientations=2, pixels_per_cell=(1, 1), cells_per_block=(1, 1), visualize=True)
print(hog_image)
</code></pre>
<p>The value I get is 43, but when I calculate it manually I get 193.437. The formula for the manual calculation is below:</p>
<pre><code>import math
math.sqrt((25-12)**2+(243-50)**2)
</code></pre>
<p>The two results do not match. Any suggestions on what I am doing wrong? I am running Python 3.</p>
|
<python><image-processing><scikit-image><feature-extraction>
|
2024-03-28 12:51:07
| 0
| 1,079
|
sakeesh
|
78,238,126
| 2,924,334
|
Accessing slide background in python-pptx
|
<p>It seems straightforward, but I am not able to access the slide background.</p>
<pre><code>from pptx import Presentation
presentation = Presentation()
slide_layout = presentation.slide_layouts[5] # 5 corresponds to a blank slide
slide = presentation.slides.add_slide(slide_layout)
print(slide) # this prints: <pptx.slide.Slide object at 0x12ebbc280>
background = slide.background # AttributeError: 'Slide' object has no attribute 'background'
</code></pre>
<p>When I look at the <a href="https://python-pptx.readthedocs.io/en/latest/api/slides.html#pptx.slide.Slide" rel="nofollow noreferrer">documentation</a>, <code>Slide</code> object does have an attribute called <code>background</code>. Then why do I get the error <code>AttributeError: 'Slide' object has no attribute 'background'</code>?</p>
<p>I am using <code>python-pptx 0.6.2</code>. I must be doing something silly, but after some head-scratching I thought I would ask. Thank you!</p>
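<p>For completeness, this is how I checked which release is actually installed (standard library <code>importlib.metadata</code>), in case <code>background</code> simply postdates my version — I'm not certain when it was added:</p>

```python
from importlib.metadata import version, PackageNotFoundError
from typing import Optional

def installed_version(dist: str) -> Optional[str]:
    """Return the installed version string for a distribution, or None if absent."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return None

print(installed_version("python-pptx"))   # e.g. "0.6.2" in my environment
```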
|
<python><python-pptx>
|
2024-03-28 12:14:27
| 0
| 587
|
tikka
|
78,237,938
| 5,305,512
|
llama-cpp-python with metal acceleration on Apple silicon failing
|
<p>I am following the instructions from the <a href="https://llama-cpp-python.readthedocs.io/en/latest/install/macos/" rel="nofollow noreferrer">official documentation</a> on how to install llama-cpp with GPU support in Apple silicon Mac.</p>
<p>Here is my Dockerfile:</p>
<pre><code>FROM python:3.11-slim
WORKDIR /code
RUN pip uninstall llama-cpp-python -y
ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1
RUN pip install -U llama-cpp-python --no-cache-dir
RUN pip install 'llama-cpp-python[server]'
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY . .
EXPOSE 8000
CMD ["panel", "serve", "--port", "8000", "chat.py", "--address", "0.0.0.0", "--allow-websocket-origin", "*"]
</code></pre>
<p>I am getting the following error:</p>
<pre><code>[+] Building 6.1s (9/13) docker:desktop-linux
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 508B 0.0s
=> [internal] load metadata for docker.io/library/python:3.11-slim 0.9s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/8] FROM docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf74f738d8 0.0s
=> => resolve docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf74f738d8 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2.19kB 0.0s
=> CACHED [2/8] WORKDIR /code 0.0s
=> CACHED [3/8] RUN pip uninstall llama-cpp-python -y 0.0s
=> ERROR [4/8] RUN pip install -U llama-cpp-python --no-cache-dir 5.2s
------
> [4/8] RUN pip install -U llama-cpp-python --no-cache-dir:
0.410 Collecting llama-cpp-python
0.516 Downloading llama_cpp_python-0.2.57.tar.gz (36.9 MB)
1.023 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.9/36.9 MB 99.0 MB/s eta 0:00:00
1.325 Installing build dependencies: started
2.285 Installing build dependencies: finished with status 'done'
2.285 Getting requirements to build wheel: started
2.336 Getting requirements to build wheel: finished with status 'done'
2.340 Installing backend dependencies: started
3.863 Installing backend dependencies: finished with status 'done'
3.864 Preparing metadata (pyproject.toml): started
3.955 Preparing metadata (pyproject.toml): finished with status 'done'
3.996 Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
4.014 Downloading typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB)
4.181 Collecting numpy>=1.20.0 (from llama-cpp-python)
4.201 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (62 kB)
4.202 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 96.9 MB/s eta 0:00:00
4.242 Collecting diskcache>=5.6.1 (from llama-cpp-python)
4.261 Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
4.298 Collecting jinja2>=2.11.3 (from llama-cpp-python)
4.317 Downloading Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB)
4.372 Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama-cpp-python)
4.393 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.0 kB)
4.416 Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
4.418 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 412.0 MB/s eta 0:00:00
4.440 Downloading Jinja2-3.1.3-py3-none-any.whl (133 kB)
4.444 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 63.9 MB/s eta 0:00:00
4.472 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB)
4.627 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.2/14.2 MB 94.7 MB/s eta 0:00:00
4.648 Downloading typing_extensions-4.10.0-py3-none-any.whl (33 kB)
4.671 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (29 kB)
4.713 Building wheels for collected packages: llama-cpp-python
4.714 Building wheel for llama-cpp-python (pyproject.toml): started
4.910 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error'
4.912 error: subprocess-exited-with-error
4.912
4.912 × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
4.912 │ exit code: 1
4.912 ╰─> [24 lines of output]
4.912 *** scikit-build-core 0.8.2 using CMake 3.29.0 (wheel)
4.912 *** Configuring CMake...
4.912 loading initial cache file /tmp/tmpk4ft3wii/build/CMakeInit.txt
4.912 -- The C compiler identification is unknown
4.912 -- The CXX compiler identification is unknown
4.912 CMake Error at CMakeLists.txt:3 (project):
4.912 No CMAKE_C_COMPILER could be found.
4.912
4.912 Tell CMake where to find the compiler by setting either the environment
4.912 variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
4.912 the compiler, or to the compiler name if it is in the PATH.
4.912
4.912
4.912 CMake Error at CMakeLists.txt:3 (project):
4.912 No CMAKE_CXX_COMPILER could be found.
4.912
4.912 Tell CMake where to find the compiler by setting either the environment
4.912 variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
4.912 to the compiler, or to the compiler name if it is in the PATH.
4.912
4.912
4.912 -- Configuring incomplete, errors occurred!
4.912
4.912 *** CMake configuration failed
4.912 [end of output]
4.912
4.912 note: This error originates from a subprocess, and is likely not a problem with pip.
4.913 ERROR: Failed building wheel for llama-cpp-python
4.913 Failed to build llama-cpp-python
4.913 ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
------
Dockerfile:9
--------------------
7 | ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1
8 |
9 | >>> RUN pip install -U llama-cpp-python --no-cache-dir
10 |
11 | RUN pip install 'llama-cpp-python[server]'
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install -U llama-cpp-python --no-cache-dir" did not complete successfully: exit code: 1
</code></pre>
<p>I have tried different variations of the Dockerfile, but it always fails on the same line, i.e., <code>RUN pip install -U llama-cpp-python</code>.</p>
<p>Why? And how do I fix it?</p>
<hr />
<p><strong>UPDATE:</strong> Based on the comments, I modified my Dockerfile like so:</p>
<pre><code>FROM python:3.11-slim
RUN apt-get update && apt-get install -y --no-install-recommends gcc
WORKDIR /code
RUN pip uninstall llama-cpp-python -y
ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1
RUN pip install -U llama-cpp-python --no-cache-dir
RUN pip install 'llama-cpp-python[server]'
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY . .
EXPOSE 8000
CMD ["panel", "serve", "--port", "8000", "chat.py", "--address", "0.0.0.0", "--allow-websocket-origin", "*"]
</code></pre>
<p>I am still not able to install llama-cpp:</p>
<pre><code>[+] Building 12.3s (10/14) docker:desktop-linux
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 578B 0.0s
=> [internal] load metadata for docker.io/library/python:3.11-slim 0.9s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> CACHED [1/9] FROM docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf7 0.0s
=> => resolve docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf74f738d8 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2.26kB 0.0s
=> [2/9] RUN apt-get update && apt-get install -y --no-install-recommends gcc 5.2s
=> [3/9] WORKDIR /code 0.0s
=> [4/9] RUN pip uninstall llama-cpp-python -y 1.0s
=> ERROR [5/9] RUN pip install -U llama-cpp-python --no-cache-dir 5.2s
------
> [5/9] RUN pip install -U llama-cpp-python --no-cache-dir:
0.500 Collecting llama-cpp-python
0.600 Downloading llama_cpp_python-0.2.57.tar.gz (36.9 MB)
1.093 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.9/36.9 MB 88.6 MB/s eta 0:00:00
1.373 Installing build dependencies: started
2.344 Installing build dependencies: finished with status 'done'
2.345 Getting requirements to build wheel: started
2.395 Getting requirements to build wheel: finished with status 'done'
2.399 Installing backend dependencies: started
3.882 Installing backend dependencies: finished with status 'done'
3.882 Preparing metadata (pyproject.toml): started
3.972 Preparing metadata (pyproject.toml): finished with status 'done'
4.014 Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
4.034 Downloading typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB)
4.200 Collecting numpy>=1.20.0 (from llama-cpp-python)
4.218 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (62 kB)
4.219 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 95.0 MB/s eta 0:00:00
4.258 Collecting diskcache>=5.6.1 (from llama-cpp-python)
4.282 Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
4.316 Collecting jinja2>=2.11.3 (from llama-cpp-python)
4.335 Downloading Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB)
4.392 Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama-cpp-python)
4.410 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.0 kB)
4.434 Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
4.435 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 345.0 MB/s eta 0:00:00
4.454 Downloading Jinja2-3.1.3-py3-none-any.whl (133 kB)
4.458 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 103.0 MB/s eta 0:00:00
4.482 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB)
4.620 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.2/14.2 MB 106.0 MB/s eta 0:00:00
4.641 Downloading typing_extensions-4.10.0-py3-none-any.whl (33 kB)
4.665 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (29 kB)
4.703 Building wheels for collected packages: llama-cpp-python
4.704 Building wheel for llama-cpp-python (pyproject.toml): started
4.938 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error'
4.941 error: subprocess-exited-with-error
4.941
4.941 × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
4.941 │ exit code: 1
4.941 ╰─> [50 lines of output]
4.941 *** scikit-build-core 0.8.2 using CMake 3.29.0 (wheel)
4.941 *** Configuring CMake...
4.941 loading initial cache file /tmp/tmp839s_tl2/build/CMakeInit.txt
4.941 -- The C compiler identification is GNU 12.2.0
4.941 -- The CXX compiler identification is unknown
4.941 -- Detecting C compiler ABI info
4.941 -- Detecting C compiler ABI info - failed
4.941 -- Check for working C compiler: /usr/bin/cc
4.941 -- Check for working C compiler: /usr/bin/cc - broken
4.941 CMake Error at /tmp/pip-build-env-qdg4zwxu/normal/lib/python3.11/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeTestCCompiler.cmake:67 (message):
4.941 The C compiler
4.941
4.941 "/usr/bin/cc"
4.941
4.941 is not able to compile a simple test program.
4.941
4.941 It fails with the following output:
4.941
4.941 Change Dir: '/tmp/tmp839s_tl2/build/CMakeFiles/CMakeScratch/TryCompile-UGSEbu'
4.941
4.941 Run Build Command(s): /tmp/pip-build-env-qdg4zwxu/normal/lib/python3.11/site-packages/ninja/data/bin/ninja -v cmTC_b1c9c
4.941 [1/2] /usr/bin/cc -o CMakeFiles/cmTC_b1c9c.dir/testCCompiler.c.o -c /tmp/tmp839s_tl2/build/CMakeFiles/CMakeScratch/TryCompile-UGSEbu/testCCompiler.c
4.941 [2/2] : && /usr/bin/cc CMakeFiles/cmTC_b1c9c.dir/testCCompiler.c.o -o cmTC_b1c9c && :
4.941 FAILED: cmTC_b1c9c
4.941 : && /usr/bin/cc CMakeFiles/cmTC_b1c9c.dir/testCCompiler.c.o -o cmTC_b1c9c && :
4.941 /usr/bin/ld: cannot find Scrt1.o: No such file or directory
4.941 /usr/bin/ld: cannot find crti.o: No such file or directory
4.941 collect2: error: ld returned 1 exit status
4.941 ninja: build stopped: subcommand failed.
4.941
4.941
4.941
4.941
4.941
4.941 CMake will not be able to correctly generate this project.
4.941 Call Stack (most recent call first):
4.941 CMakeLists.txt:3 (project)
4.941
4.941
4.941 CMake Error at CMakeLists.txt:3 (project):
4.941 No CMAKE_CXX_COMPILER could be found.
4.941
4.941 Tell CMake where to find the compiler by setting either the environment
4.941 variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
4.941 to the compiler, or to the compiler name if it is in the PATH.
4.941
4.941
4.941 -- Configuring incomplete, errors occurred!
4.941
4.941 *** CMake configuration failed
4.941 [end of output]
4.941
4.941 note: This error originates from a subprocess, and is likely not a problem with pip.
4.941 ERROR: Failed building wheel for llama-cpp-python
4.941 Failed to build llama-cpp-python
4.941 ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
------
Dockerfile:11
--------------------
9 | ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1
10 |
11 | >>> RUN pip install -U llama-cpp-python --no-cache-dir
12 |
13 | RUN pip install 'llama-cpp-python[server]'
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install -U llama-cpp-python --no-cache-dir" did not complete successfully: exit code: 1
</code></pre>
<hr />
<p><strong>UPDATE 2:</strong> With the line in my Dockerfile:</p>
<p><code>RUN apt-get update && apt-get install -y build-essential</code></p>
<p>this is the error trace when building the Dockerfile:</p>
<pre><code>[+] Building 17.5s (10/14) docker:desktop-linux
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 566B 0.0s
=> [internal] load metadata for docker.io/library/python:3.11-slim 0.9s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> CACHED [1/9] FROM docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf7 0.0s
=> => resolve docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf74f738d8 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2.25kB 0.0s
=> [2/9] RUN apt-get update && apt-get install -y build-essential 10.1s
=> [3/9] WORKDIR /code 0.0s
=> [4/9] RUN pip uninstall llama-cpp-python -y 0.9s
=> ERROR [5/9] RUN pip install -U llama-cpp-python --no-cache-dir 5.5s
------
> [5/9] RUN pip install -U llama-cpp-python --no-cache-dir:
0.633 Collecting llama-cpp-python
0.765 Downloading llama_cpp_python-0.2.57.tar.gz (36.9 MB)
1.231 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.9/36.9 MB 109.6 MB/s eta 0:00:00
1.514 Installing build dependencies: started
2.479 Installing build dependencies: finished with status 'done'
2.479 Getting requirements to build wheel: started
2.530 Getting requirements to build wheel: finished with status 'done'
2.534 Installing backend dependencies: started
4.027 Installing backend dependencies: finished with status 'done'
4.028 Preparing metadata (pyproject.toml): started
4.119 Preparing metadata (pyproject.toml): finished with status 'done'
4.161 Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
4.182 Downloading typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB)
4.355 Collecting numpy>=1.20.0 (from llama-cpp-python)
4.374 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (62 kB)
4.376 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 121.2 MB/s eta 0:00:00
4.416 Collecting diskcache>=5.6.1 (from llama-cpp-python)
4.436 Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
4.472 Collecting jinja2>=2.11.3 (from llama-cpp-python)
4.491 Downloading Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB)
4.549 Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama-cpp-python)
4.569 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.0 kB)
4.594 Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
4.596 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 295.7 MB/s eta 0:00:00
4.618 Downloading Jinja2-3.1.3-py3-none-any.whl (133 kB)
4.621 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 138.4 MB/s eta 0:00:00
4.646 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB)
4.820 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.2/14.2 MB 76.3 MB/s eta 0:00:00
4.843 Downloading typing_extensions-4.10.0-py3-none-any.whl (33 kB)
4.868 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (29 kB)
4.910 Building wheels for collected packages: llama-cpp-python
4.911 Building wheel for llama-cpp-python (pyproject.toml): started
5.212 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error'
5.214 error: subprocess-exited-with-error
5.214
5.214 × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
5.214 │ exit code: 1
5.214 ╰─> [32 lines of output]
5.214 *** scikit-build-core 0.8.2 using CMake 3.29.0 (wheel)
5.214 *** Configuring CMake...
5.214 loading initial cache file /tmp/tmp5q9e9my4/build/CMakeInit.txt
5.214 -- The C compiler identification is GNU 12.2.0
5.214 -- The CXX compiler identification is GNU 12.2.0
5.214 -- Detecting C compiler ABI info
5.214 -- Detecting C compiler ABI info - done
5.214 -- Check for working C compiler: /usr/bin/cc - skipped
5.214 -- Detecting C compile features
5.214 -- Detecting C compile features - done
5.214 -- Detecting CXX compiler ABI info
5.214 -- Detecting CXX compiler ABI info - done
5.214 -- Check for working CXX compiler: /usr/bin/c++ - skipped
5.214 -- Detecting CXX compile features
5.214 -- Detecting CXX compile features - done
5.214 -- Could NOT find Git (missing: GIT_EXECUTABLE)
5.214 CMake Warning at vendor/llama.cpp/scripts/build-info.cmake:14 (message):
5.214 Git not found. Build info will not be accurate.
5.214 Call Stack (most recent call first):
5.214 vendor/llama.cpp/CMakeLists.txt:132 (include)
5.214
5.214
5.214 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
5.214 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
5.214 -- Found Threads: TRUE
5.214 CMake Error at vendor/llama.cpp/CMakeLists.txt:191 (find_library):
5.214 Could not find FOUNDATION_LIBRARY using the following names: Foundation
5.214
5.214
5.214 -- Configuring incomplete, errors occurred!
5.214
5.214 *** CMake configuration failed
5.214 [end of output]
5.214
5.214 note: This error originates from a subprocess, and is likely not a problem with pip.
5.215 ERROR: Failed building wheel for llama-cpp-python
5.215 Failed to build llama-cpp-python
5.215 ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
------
Dockerfile:11
--------------------
9 | ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1
10 |
11 | >>> RUN pip install -U llama-cpp-python --no-cache-dir
12 |
13 | RUN pip install 'llama-cpp-python[server]'
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install -U llama-cpp-python --no-cache-dir" did not complete successfully: exit code: 1
</code></pre>
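A note on the likely cause (my reading of the trace, not a confirmed fix): Metal and the Foundation framework are macOS-only, so inside a Linux build container the `-DLLAMA_METAL=on` flag sends CMake looking for `Foundation`, which cannot succeed. A sketch of a Dockerfile without that flag (base image and package names are assumptions):

```dockerfile
# Linux containers cannot use Metal; build CPU-only instead.
FROM python:3.11-slim
# git avoids the "Could NOT find Git" build-info warning seen in the trace
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake git && rm -rf /var/lib/apt/lists/*
ENV FORCE_CMAKE=1
RUN pip install -U llama-cpp-python --no-cache-dir
```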
|
<python><docker><macos><cmake><llama-cpp-python>
|
2024-03-28 11:43:46
| 0
| 3,764
|
Kristada673
|
78,237,761
| 9,659,620
|
import torch.utils.tensorboard causes tensorflow warnings
|
<p>As stated <a href="https://stackoverflow.com/questions/57547471/does-torch-utils-tensorboard-need-installation-of-tensorflow">here</a> tensorboard is part of tensorflow but does not depend on it. One can use it in pytorch such as</p>
<pre class="lang-py prettyprint-override"><code>from torch.utils.tensorboard import SummaryWriter
</code></pre>
<p>It is, however, annoying that this import causes a long trace of tensorflow related warnings</p>
<pre class="lang-bash prettyprint-override"><code>2024-03-28 12:11:43.296359: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-03-28 12:11:43.331928: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-28 12:11:43.970865: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</code></pre>
<p>I wonder if this is necessary and/or how they can be (safely?) muted.</p>
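For what it's worth, one commonly suggested workaround (an assumption, not verified against every TensorFlow version) is to raise TensorFlow's C++ log level before anything imports it:

```python
import os

# These variables are read by TensorFlow at import time, so they must be
# set *before* the first `import tensorflow` -- which the tensorboard
# import may trigger indirectly. "3" keeps errors only.
os.environ.setdefault("TF_CPP_MIN_LOG_LEVEL", "3")
os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "0")  # silences the oneDNN note

# only afterwards: from torch.utils.tensorboard import SummaryWriter
```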
|
<python><pytorch><tensorboard>
|
2024-03-28 11:13:26
| 2
| 1,601
|
Klops
|
78,237,443
| 1,095,202
|
Executable under GDB invokes different embedded Python
|
<p>I have code that embeds Python. That embedded Python uses NumPy and hence, I need to explicitly load <code>libpython</code>, to make NumPy work.</p>
<p>The driving code is in C++ (tests in Google Test). There is a bug somewhere, and I try to use <code>gdb</code> for debugging. However, something strange happens as embedded versions are different, when I simply run the executable and when I run the executable under <code>gdb</code>.</p>
<p>I find the path to <code>libpython</code> by instantiating <code>sysconfig</code> module under the initialized embedded Python and then using <code>sysconfig.get_config_var("LIBDIR")</code>.</p>
<p>I log the found path to the <code>libpython</code> when I simply run the executable:</p>
<pre><code>Path to libpython is /home/dima/.conda/envs/um02-open-interfaces/lib
</code></pre>
<p>When I run the same executable under <code>gdb</code>:</p>
<pre><code>Path to libpython is /home/linuxbrew/.linuxbrew/opt/python@3.11/lib
</code></pre>
<p>How do I stop <code>gdb</code> from changing the environment?</p>
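One thing worth trying (a sketch, not a verified fix): gdb launches the inferior through a shell by default, and that shell's startup files can rewrite <code>PATH</code>/<code>LD_LIBRARY_PATH</code> toward the linuxbrew prefix. The conda path below is taken from the question; adjust as needed:

```
(gdb) set startup-with-shell off
(gdb) show environment LD_LIBRARY_PATH
(gdb) set environment LD_LIBRARY_PATH = /home/dima/.conda/envs/um02-open-interfaces/lib
(gdb) run
```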
|
<python><linux><gdb><python-c-api>
|
2024-03-28 10:21:11
| 1
| 927
|
Dmitry Kabanov
|
78,237,399
| 15,368,670
|
mypy: error: No overload variant of "..." matches argument types "list[DataFrame]", "str" when using axis="rows"
|
<p>I was running</p>
<pre><code>pd.concat(dfs, axis="rows")
</code></pre>
<pre><code>pd.median(dfs, axis="rows")
</code></pre>
<p>Basically, for any pandas function that accepts <code>axis</code>, mypy will raise:</p>
<pre><code>error: No overload variant of "concat" matches argument types "list[DataFrame]", "str" [call-overload]
note: Possible overload variants:
... # Long list of possibilies
</code></pre>
<p>How can I remove this mypy error for this valid code?</p>
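A sketch of one workaround (my assumption is that the error comes from the pandas type stubs, whose overloads type <code>axis</code> as <code>Literal</code> values and do not include the runtime-only alias <code>"rows"</code>):

```python
import pandas as pd

dfs = [pd.DataFrame({"a": [1]}), pd.DataFrame({"a": [2]})]

# "rows" is accepted by pandas at runtime but absent from the stubs'
# Literal overloads; "index" (or plain 0) satisfies both mypy and pandas.
out = pd.concat(dfs, axis="index")
```

Alternatively, a targeted <code># type: ignore[call-overload]</code> on the offending line keeps the original spelling.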
|
<python><pandas><mypy><python-typing>
|
2024-03-28 10:16:36
| 2
| 719
|
Oily
|
78,237,398
| 5,553,963
|
Stomp connection using JWT token in Python
|
<p>We have a cli written in python (using typer) and we want to add consuming websocket over stomp to it so the cli can connect to the RabbitMQ and consume the messages. I have to mention that there is an Nginx + Oauth2proxy sitting in front of our RabbitMQ and we have an entry in our Nginx for websocket over STOMP.</p>
<p>We have this feature in our GUI and the way that we do it is to use the JWT token instead of the username and password: <code>wss://:JWT_TOKEN@URL/[PATH]</code> and everything works.</p>
<p>In python I found the stomp library but I cannot make it work, below is my code and I tried different things but I got all types of errors, you can see some of the things that I tried as I commented them out:</p>
<pre class="lang-py prettyprint-override"><code>import ssl
import stomp
class MyListener(stomp.ConnectionListener):
def on_error(self, headers, message):
print('Received an error "%s"' % message)
def on_message(self, headers, message):
print('Received a message "%s"' % message)
@stomp_app.command("listen")
def stomp_listen(
path: str = typer.Option(
...,
help="The STOMP endpoint PATH",
),
):
"""
Examples: \n
cli.py stomp listen --path [PATH] \n
cli.py stomp listen --path /exchange/egress_exchange/sys_newproj_sim1 \n
"""
url_without_https = URL.replace("https://", "")
# endpoint = "wss://:"+jwt_token+"@"+url_without_https+"/ws"
endpoint = "wss://"+url_without_https+"/ws"
# Create an SSL context
ssl_context = ssl.create_default_context()
# conn = stomp.Connection([(endpoint, 443)])
# conn.set_ssl(for_hosts=[(endpoint, 443)])
conn = stomp.Connection([endpoint])
conn.set_ssl(for_hosts=[endpoint])
conn.set_listener('', MyListener())
conn.connect(headers={'Authorization': 'Bearer ' + jwt_token}, wait=True)
# conn.connect(username="",passcode=jwt_token,headers={'Authorization': 'Bearer ' + jwt_token}, wait=True)
# conn.connect(username="",passcode=jwt_token, wait=True)
# conn.connect(wait=True)
conn.subscribe(destination=path, id=1, ack='auto')
while True:
pass
conn.disconnect()
</code></pre>
<p>Can someone please help me connect over STOMP in Python?</p>
|
<python><websocket><stomp>
|
2024-03-28 10:16:34
| 0
| 5,315
|
AVarf
|
78,237,049
| 6,281,366
|
define the input FPS of a stream using ffmpeg-python
|
<p>I'm creating an HLS playlist using ffmpeg, reading my input from an RTSP stream.</p>
<p>When probing the RTSP stream, I get an FPS which is not the true FPS, and I want to "tell" ffmpeg the actual FPS.</p>
<p>On the command line, I'm using the -r flag, which works fine:</p>
<pre><code>ffmpeg -rtsp_transport tcp -r 18 -i rtsp://localhost:554/test -b:v 100KB -vf format=yuvj420p -c:a copy -hls_time 2 -hls_list_size 10 -hls_flags delete_segments -start_number 1 output.m3u8
</code></pre>
<p>I noticed that the flag must come before the input param. If I use -r after the input, it simply doesn't work.</p>
<p>In ffmpeg-python, I don't see any option to do so, and using it as a flag to the .input() function does not work.</p>
<p>How can I use the -r flag with ffmpeg-python?</p>
|
<python><ffmpeg><ffmpeg-python>
|
2024-03-28 09:16:37
| 1
| 827
|
tamirg
|
78,236,745
| 3,433,875
|
Plot only labels in selected positions in matplotlib pie/donut chart
|
<p>I have the following half pie chart:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(8,6),dpi=100)
ax = fig.add_subplot(1,1,1)
pie_labels = ["Label 1", "Label 2", "Label 3"]
pie_values = [1,2,3]
pie_labels.append("Label 0")
pie_values.append(sum(pie_values))
colors = ['red', 'blue', 'green', 'white']
wedges, texts = ax.pie(pie_values, wedgeprops=dict(width=0.35), startangle= -90,colors=colors)
for w,lbl in zip(wedges,pie_labels):
angle = w.theta2
r=w.r-w.width/2
x = r*np.cos(np.deg2rad(angle))
y = r*np.sin(np.deg2rad(angle))
ax.scatter(x,y)
ax.annotate(lbl, xy=(x,y), size=12, color='k',
ha='right', va='center', weight='bold')
</code></pre>
<p>which produces:
<a href="https://i.sstatic.net/6YxaU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6YxaU.png" alt="enter image description here" /></a></p>
<p>From the (x,y) coordinates taken from the wedge patches, how do I iterate over them to plot, for example, only the first and last labels, or the first and third labels?
I don't want to create a new list of labels; I am looking to get the positions of certain labels. Thanks!</p>
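Not the author's code, but a minimal self-contained sketch of one way to do it: keep the full label list and filter by wedge index inside the loop (the <code>wanted</code> set is a hypothetical selection):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
labels = ["Label 1", "Label 2", "Label 3", "Label 0"]
wedges, _ = ax.pie([1, 2, 3, 6], wedgeprops=dict(width=0.35), startangle=-90)

wanted = {0, 2}  # e.g. only the first and third labels
positions = []
for i, (w, lbl) in enumerate(zip(wedges, labels)):
    if i not in wanted:
        continue  # skip wedges whose label we don't want to draw
    angle = w.theta2
    r = w.r - w.width / 2
    x, y = r * np.cos(np.deg2rad(angle)), r * np.sin(np.deg2rad(angle))
    positions.append((lbl, x, y))
    ax.annotate(lbl, xy=(x, y), ha="right", va="center")
```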
|
<python><matplotlib>
|
2024-03-28 08:25:27
| 1
| 363
|
ruthpozuelo
|
78,236,663
| 1,788,656
|
AttributeError: crs attribute is not available
|
<p>All,
I am using MetPy 0.10.0 and am trying to calculate the Laplacian of a field using the mpcalc.laplacian function, yet I get the following error:
AttributeError: crs attribute is not available.</p>
<p>Here is my minimal code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import metpy.calc as mpcalc
myfile = xr.open_dataset(diri+"vor_geo_era5_2023_jan.nc")
var_z = myfile['z'] # geopotential m**2 S**-2
timeP = myfile['time']
lonP = myfile['longitude'].data
latPP = myfile['latitude'].data
lap = mpcalc.laplacian(var_z[:,:,:],axes=('latitude','longitude'))
</code></pre>
<p>I tried to use <code>var_z.metpy.assign_latitudde_longitude(force=False)</code> as illustrated here <a href="https://stackoverflow.com/questions/76424703/the-laplacian-of-metpy-cant-convert-the-units-of-temperature-advection">text</a>, yet I get the following error
AttributeError: 'MetPyAccessor' object has no attribute 'assign_latitudde_longitude'
Thanks</p>
|
<python><python-3.x><metpy>
|
2024-03-28 08:06:31
| 1
| 725
|
Kernel
|
78,236,646
| 6,376,297
|
Linear program solver CBC seems to give 'optimal' solutions with different objective for the exact same problem (and code)
|
<p>I ran the <em>exact same</em> python notebook twice (using the PULP_CBC_CMD solver that comes with pulp), and I got the results I pasted below.</p>
<p><strong>How can a solution be called 'optimal' if the very same solver with the very same data and code finds a better one on another run?</strong></p>
<p>Any ideas?<br />
Is there any effect of seeding?</p>
<p>For context, the problem I am trying to solve is: take a list of 28 integers, and partition them into 4 sets, such that the sum of integers within each set is as close as possible to the sum of all 28 integers divided by 4.</p>
<p>First run (obj = 0.00000254):</p>
<pre><code>{3: [21614, 1344708, 819, 7109, 1239808, 16028],
1: [22716, 8948, 136944, 925193, 147896, 824564, 72221, 460833, 30955],
2: [255182, 1898763, 108042, 368270],
4: [556354, 173237, 64301, 1021326, 17467, 2953, 52942, 1855, 739627]}
[2630086, 2630270, 2630257, 2630062]
Welcome to the CBC MILP Solver
Version: 2.10.3
Build Date: Dec 15 2019
command line - /opt/conda/lib/python3.9/site-packages/pulp/solverdir/cbc/linux/64/cbc /tmp/da872918615345b7a86ca39a8b3bc02a-pulp.mps -sec 31536000 -ratio 0 -threads 4 -timeMode elapsed -branch -printingOptions all -solution /tmp/da872918615345b7a86ca39a8b3bc02a-pulp.sol (default strategy 1)
At line 2 NAME MODEL
At line 3 ROWS
At line 41 COLUMNS
At line 614 RHS
At line 651 BOUNDS
At line 764 ENDATA
Problem MODEL has 36 rows, 116 columns and 344 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 3.1536e+07
ratioGap was changed from 0 to 0
threads was changed from 0 to 4
Option for timeMode changed from cpu to elapsed
Continuous objective value is 0 - 0.00 seconds
Cgl0004I processed model has 36 rows, 113 columns (112 integer (112 of which binary)) and 344 elements
Cbc0038I Initial state - 6 integers unsatisfied sum - 1.04778
Cbc0038I Pass 1: suminf. 0.00000 (0) obj. 0.01798 iterations 15
Cbc0038I Solution found of 0.01798
Cbc0038I Relaxing continuous gives 0.01798
Cbc0038I Before mini branch and bound, 106 integers at bound fixed and 0 continuous
Cbc0038I Full problem 36 rows 113 columns, reduced to 8 rows 4 columns
Cbc0038I Mini branch and bound improved solution from 0.01798 to 0.0171438 (0.02 seconds)
Cbc0038I Round again with cutoff of 0.0154204
Cbc0038I Pass 2: suminf. 0.11345 (2) obj. 0.0154204 iterations 10
Cbc0038I Pass 3: suminf. 0.55654 (2) obj. 0.0154204 iterations 13
Cbc0038I Pass 4: suminf. 0.33529 (2) obj. 0.0154204 iterations 26
Cbc0038I Solution found of 0.0154204
Cbc0038I Relaxing continuous gives 0.0121357
Cbc0038I Before mini branch and bound, 86 integers at bound fixed and 0 continuous
Cbc0038I Full problem 36 rows 113 columns, reduced to 10 rows 17 columns
Cbc0038I Mini branch and bound improved solution from 0.0121357 to 0.00239173 (0.02 seconds)
Cbc0038I Round again with cutoff of 0.00190538
Cbc0038I Pass 5: suminf. 0.72114 (4) obj. 0.00190538 iterations 1
Cbc0038I Pass 6: suminf. 1.59976 (4) obj. 0.00190538 iterations 19
Cbc0038I Pass 7: suminf. 1.44700 (4) obj. 0.00190538 iterations 3
Cbc0038I Pass 8: suminf. 0.87390 (5) obj. 0.00190538 iterations 8
Cbc0038I Pass 9: suminf. 1.08860 (5) obj. 0.00190538 iterations 30
Cbc0038I Pass 10: suminf. 1.08860 (5) obj. 0.00190538 iterations 9
Cbc0038I Pass 11: suminf. 1.75188 (5) obj. 0.00190538 iterations 19
Cbc0038I Pass 12: suminf. 1.09723 (5) obj. 0.00190538 iterations 14
Cbc0038I Pass 13: suminf. 1.74326 (5) obj. 0.00190538 iterations 15
Cbc0038I Pass 14: suminf. 1.57602 (6) obj. 0.00190538 iterations 27
Cbc0038I Pass 15: suminf. 0.97726 (3) obj. 0.00190538 iterations 10
Cbc0038I Pass 16: suminf. 1.31945 (4) obj. 0.00190538 iterations 15
Cbc0038I Pass 17: suminf. 1.00000 (3) obj. 0.00190538 iterations 11
Cbc0038I Pass 18: suminf. 1.00000 (4) obj. 0.00190538 iterations 7
Cbc0038I Pass 19: suminf. 1.00000 (3) obj. 0.00190538 iterations 7
Cbc0038I Pass 20: suminf. 1.50131 (6) obj. 0.00190538 iterations 18
Cbc0038I Pass 21: suminf. 1.28415 (6) obj. 0.00190538 iterations 10
Cbc0038I Pass 22: suminf. 0.89528 (6) obj. 0.00190538 iterations 22
Cbc0038I Pass 23: suminf. 0.80563 (6) obj. 0.00190538 iterations 6
Cbc0038I Pass 24: suminf. 1.91357 (6) obj. 0.00190538 iterations 15
Cbc0038I Pass 25: suminf. 1.48191 (6) obj. 0.00190538 iterations 5
Cbc0038I Pass 26: suminf. 1.02968 (5) obj. 0.00190538 iterations 21
Cbc0038I Pass 27: suminf. 0.79997 (4) obj. 0.00190538 iterations 13
Cbc0038I Pass 28: suminf. 1.26200 (5) obj. 0.00190538 iterations 13
Cbc0038I Pass 29: suminf. 1.24116 (4) obj. 0.00190538 iterations 7
Cbc0038I Pass 30: suminf. 1.02968 (5) obj. 0.00190538 iterations 22
Cbc0038I Pass 31: suminf. 2.26316 (6) obj. 0.00190538 iterations 22
Cbc0038I Pass 32: suminf. 1.00000 (4) obj. 0.00190538 iterations 13
Cbc0038I Pass 33: suminf. 1.00000 (4) obj. 0.00190538 iterations 4
Cbc0038I Pass 34: suminf. 1.00000 (4) obj. 0.00190538 iterations 8
Cbc0038I No solution found this major pass
Cbc0038I Before mini branch and bound, 55 integers at bound fixed and 0 continuous
Cbc0038I Full problem 36 rows 113 columns, reduced to 17 rows 45 columns
Cbc0038I Mini branch and bound improved solution from 0.00239173 to 0.000440971 (0.05 seconds)
Cbc0038I Round again with cutoff of 0.00030168
Cbc0038I Pass 34: suminf. 0.87645 (5) obj. 0.00030168 iterations 3
Cbc0038I Pass 35: suminf. 1.62905 (5) obj. 0.00030168 iterations 13
Cbc0038I Pass 36: suminf. 1.60230 (5) obj. 0.00030168 iterations 6
Cbc0038I Pass 37: suminf. 0.90319 (5) obj. 0.00030168 iterations 11
Cbc0038I Pass 38: suminf. 1.41443 (5) obj. 0.00030168 iterations 22
Cbc0038I Pass 39: suminf. 1.41443 (5) obj. 0.00030168 iterations 9
Cbc0038I Pass 40: suminf. 1.41968 (5) obj. 0.00030168 iterations 14
Cbc0038I Pass 41: suminf. 1.32352 (5) obj. 0.00030168 iterations 9
Cbc0038I Pass 42: suminf. 1.29678 (5) obj. 0.00030168 iterations 3
Cbc0038I Pass 43: suminf. 1.89067 (5) obj. 0.00030168 iterations 14
Cbc0038I Pass 44: suminf. 1.86393 (5) obj. 0.00030168 iterations 6
Cbc0038I Pass 45: suminf. 1.32352 (5) obj. 0.00030168 iterations 14
Cbc0038I Pass 46: suminf. 2.09959 (6) obj. 0.00030168 iterations 23
Cbc0038I Pass 47: suminf. 1.53810 (5) obj. 0.00030168 iterations 6
Cbc0038I Pass 48: suminf. 0.97696 (5) obj. 0.00030168 iterations 13
Cbc0038I Pass 49: suminf. 1.50406 (5) obj. 0.00030168 iterations 14
Cbc0038I Pass 50: suminf. 0.90055 (4) obj. 0.00030168 iterations 23
Cbc0038I Pass 51: suminf. 0.90055 (4) obj. 0.00030168 iterations 8
Cbc0038I Pass 52: suminf. 0.92729 (4) obj. 0.00030168 iterations 15
Cbc0038I Pass 53: suminf. 1.66723 (5) obj. 0.00030168 iterations 21
Cbc0038I Pass 54: suminf. 1.10337 (5) obj. 0.00030168 iterations 12
Cbc0038I Pass 55: suminf. 1.77263 (5) obj. 0.00030168 iterations 20
Cbc0038I Pass 56: suminf. 1.23211 (5) obj. 0.00030168 iterations 12
Cbc0038I Pass 57: suminf. 1.25885 (5) obj. 0.00030168 iterations 13
Cbc0038I Pass 58: suminf. 1.66527 (6) obj. 0.00030168 iterations 24
Cbc0038I Pass 59: suminf. 1.00000 (4) obj. 0.00030168 iterations 14
Cbc0038I Pass 60: suminf. 1.00000 (4) obj. 0.00030168 iterations 4
Cbc0038I Pass 61: suminf. 1.00000 (4) obj. 0.00030168 iterations 10
Cbc0038I Pass 62: suminf. 1.72469 (6) obj. 0.00030168 iterations 31
Cbc0038I Pass 63: suminf. 0.71967 (6) obj. 0.00030168 iterations 13
Cbc0038I No solution found this major pass
Cbc0038I Before mini branch and bound, 35 integers at bound fixed and 0 continuous
Cbc0038I Full problem 36 rows 113 columns, reduced to 25 rows 68 columns
Cbc0038I Mini branch and bound improved solution from 0.000440971 to 7.02486e-05 (0.09 seconds)
Cbc0038I Round again with cutoff of 4.2174e-05
Cbc0038I Pass 63: suminf. 0.90419 (5) obj. 4.2174e-05 iterations 0
Cbc0038I Pass 64: suminf. 1.63379 (5) obj. 4.2174e-05 iterations 13
Cbc0038I Pass 65: suminf. 1.63005 (5) obj. 4.2174e-05 iterations 6
Cbc0038I Pass 66: suminf. 0.90793 (5) obj. 4.2174e-05 iterations 11
Cbc0038I Pass 67: suminf. 0.97496 (5) obj. 4.2174e-05 iterations 21
Cbc0038I Pass 68: suminf. 0.89601 (4) obj. 4.2174e-05 iterations 9
Cbc0038I Pass 69: suminf. 0.89975 (4) obj. 4.2174e-05 iterations 15
Cbc0038I Pass 70: suminf. 1.03907 (5) obj. 4.2174e-05 iterations 26
Cbc0038I Pass 71: suminf. 1.03907 (5) obj. 4.2174e-05 iterations 8
Cbc0038I Pass 72: suminf. 1.61501 (5) obj. 4.2174e-05 iterations 16
Cbc0038I Pass 73: suminf. 1.41908 (5) obj. 4.2174e-05 iterations 17
Cbc0038I Pass 74: suminf. 0.76351 (5) obj. 4.2174e-05 iterations 14
Cbc0038I Pass 75: suminf. 1.22855 (5) obj. 4.2174e-05 iterations 21
Cbc0038I Pass 76: suminf. 0.90127 (4) obj. 4.2174e-05 iterations 21
Cbc0038I Pass 77: suminf. 0.90127 (4) obj. 4.2174e-05 iterations 6
Cbc0038I Pass 78: suminf. 0.90500 (4) obj. 4.2174e-05 iterations 16
Cbc0038I Pass 79: suminf. 0.53118 (6) obj. 4.2174e-05 iterations 34
Cbc0038I Pass 80: suminf. 0.52725 (6) obj. 4.2174e-05 iterations 9
Cbc0038I Pass 81: suminf. 1.78668 (6) obj. 4.2174e-05 iterations 19
Cbc0038I Pass 82: suminf. 1.13994 (6) obj. 4.2174e-05 iterations 6
Cbc0038I Pass 83: suminf. 1.45437 (6) obj. 4.2174e-05 iterations 17
Cbc0038I Pass 84: suminf. 0.93890 (6) obj. 4.2174e-05 iterations 6
Cbc0038I Pass 85: suminf. 1.54792 (6) obj. 4.2174e-05 iterations 15
Cbc0038I Pass 86: suminf. 0.98065 (5) obj. 4.2174e-05 iterations 7
Cbc0038I Pass 87: suminf. 1.12437 (5) obj. 4.2174e-05 iterations 21
Cbc0038I Pass 88: suminf. 0.98065 (5) obj. 4.2174e-05 iterations 18
Cbc0038I Pass 89: suminf. 0.39122 (6) obj. 4.2174e-05 iterations 25
Cbc0038I Pass 90: suminf. 0.38968 (6) obj. 4.2174e-05 iterations 14
Cbc0038I Pass 91: suminf. 1.61655 (6) obj. 4.2174e-05 iterations 19
Cbc0038I Pass 92: suminf. 1.02006 (5) obj. 4.2174e-05 iterations 15
Cbc0038I No solution found this major pass
Cbc0038I Before mini branch and bound, 42 integers at bound fixed and 0 continuous
Cbc0038I Full problem 36 rows 113 columns, reduced to 23 rows 62 columns
Cbc0038I Mini branch and bound did not improve solution (0.13 seconds)
Cbc0038I After 0.13 seconds - Feasibility pump exiting with objective of 7.02486e-05 - took 0.11 seconds
Cbc0012I Integer solution of 7.0248582e-05 found by feasibility pump after 0 iterations and 0 nodes (0.13 seconds)
Cbc0038I Full problem 36 rows 113 columns, reduced to 8 rows 15 columns
Cbc0031I 19 added rows had average density of 42.842105
Cbc0013I At root node, 19 cuts changed objective from 0 to 0 in 100 passes
Cbc0014I Cut generator 0 (Probing) - 24 row cuts average 7.2 elements, 0 column cuts (0 active) in 0.022 seconds - new frequency is -100
Cbc0014I Cut generator 1 (Gomory) - 880 row cuts average 105.3 elements, 0 column cuts (0 active) in 0.034 seconds - new frequency is -100
Cbc0014I Cut generator 2 (Knapsack) - 0 row cuts average 0.0 elements, 0 column cuts (0 active) in 0.006 seconds - new frequency is -100
Cbc0014I Cut generator 3 (Clique) - 0 row cuts average 0.0 elements, 0 column cuts (0 active) in 0.002 seconds - new frequency is -100
Cbc0014I Cut generator 4 (MixedIntegerRounding2) - 899 row cuts average 22.0 elements, 0 column cuts (0 active) in 0.047 seconds - new frequency is -100
Cbc0014I Cut generator 5 (FlowCover) - 0 row cuts average 0.0 elements, 0 column cuts (0 active) in 0.012 seconds - new frequency is -100
Cbc0014I Cut generator 6 (TwoMirCuts) - 342 row cuts average 28.0 elements, 0 column cuts (0 active) in 0.006 seconds - new frequency is -100
Cbc0010I After 0 nodes, 1 on tree, 7.0248582e-05 best solution, best possible 0 (0.47 seconds)
Cbc0012I Integer solution of 3.6042127e-05 found by heuristic after 5499 iterations and 162 nodes (0.64 seconds)
Cbc0012I Integer solution of 2.5836032e-05 found by heuristic after 5558 iterations and 169 nodes (0.64 seconds)
Cbc0012I Integer solution of 2.5366718e-06 found by heuristic after 5716 iterations and 176 nodes (0.64 seconds)
Cbc0030I Thread 0 used 45 times, waiting to start 0.036746502, 250 locks, 0.00067687035 locked, 0.00014829636 waiting for locks
Cbc0030I Thread 1 used 50 times, waiting to start 0.027021646, 265 locks, 0.00063514709 locked, 0.00019264221 waiting for locks
Cbc0030I Thread 2 used 40 times, waiting to start 0.018216848, 215 locks, 0.00056695938 locked, 8.0823898e-05 waiting for locks
Cbc0030I Thread 3 used 45 times, waiting to start 0.083312035, 233 locks, 0.00053620338 locked, 6.1988831e-05 waiting for locks
Cbc0030I Main thread 0.16802382 waiting for threads, 370 locks, 0.00051784515 locked, 0.0001745224 waiting for locks
Cbc0001I Search completed - best objective 2.536671837402582e-06, took 5750 iterations and 180 nodes (0.74 seconds)
Cbc0032I Strong branching done 1998 times (11447 iterations), fathomed 10 nodes and fixed 111 variables
Cbc0035I Maximum depth 22, 37 variables fixed on reduced cost
Cuts at root node changed objective from 0 to 0
Probing was tried 500 times and created 120 cuts of which 0 were active after adding rounds of cuts (0.109 seconds)
Gomory was tried 500 times and created 4400 cuts of which 0 were active after adding rounds of cuts (0.172 seconds)
Knapsack was tried 500 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.032 seconds)
Clique was tried 500 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.009 seconds)
MixedIntegerRounding2 was tried 500 times and created 4495 cuts of which 0 were active after adding rounds of cuts (0.234 seconds)
FlowCover was tried 500 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.060 seconds)
TwoMirCuts was tried 500 times and created 1710 cuts of which 0 were active after adding rounds of cuts (0.029 seconds)
ZeroHalf was tried 5 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ImplicationCuts was tried 19 times and created 12 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Result - Optimal solution found
Objective value: 0.00000254
Enumerated nodes: 180
Total iterations: 5750
Time (CPU seconds): 0.69
Time (Wallclock seconds): 0.75
Option for printingOptions changed from normal to all
Total time (CPU seconds): 0.69 (Wallclock seconds): 0.76
</code></pre>
<p><strong>Part 2 with the other solution posted as an answer due to the 30 K character limitation of individual posts.</strong></p>
<p>EDIT: Actually, part 2 was deleted by admins. So this question is incomplete; sorry, I did provide all the required information but it was censored.</p>
|
<python><linear-programming><coin-or-cbc>
|
2024-03-28 08:02:08
| 1
| 657
|
user6376297
|
78,236,642
| 1,384,464
|
Using python OpenCV to crop an image based on reference marks
|
<p>I have an image I would like to crop based on reference marks of the image which are black squares at the layout margins.</p>
<p>While my code can detect the reference marks, there seems to be a persistent error that I can't get around: I cannot get accurate enough coordinates of the reference marks to crop the image perfectly, in such a way that the reference marks sit in the corners of the cropped image without "gaps" at the edges. The original image is shown below:</p>
<p><a href="https://i.sstatic.net/eTx41.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eTx41.jpg" alt="Original image" /></a></p>
<p>The code I use to auto-extract the region of interest is given below:</p>
<pre><code>import os
import cv2
import imutils
import numpy as np
from PIL import Image
from matplotlib import pyplot as plt
img_path = "template_page_1-min.png"
img = cv2.imread(img_path)
template = img.copy()
sharpen_kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
sharpened = cv2.filter2D(template.copy(), -1, sharpen_kernel)
gray = cv2.cvtColor(sharpened, cv2.COLOR_BGR2GRAY)
# Syntax is dest_img = cv2.bilateralFilter(src_image, diameter of
# pixel, sigmaColor, sigmaSpace). You can increase the sigma color
# and sigma space from 17 to higher values to blur out more
# background information, but be careful that the useful part does
# not get blurred.
bfilter = cv2.bilateralFilter(gray, 11, 65, 65) # Noise reduction
hsv = cv2.cvtColor(np.stack((bfilter.copy(),) * 3, axis=-1),
cv2.COLOR_BGR2HSV)
# set the bounds for the gray hue
lower_gray = np.array([0, 0, 100])
upper_gray = np.array([255, 5, 255])
mask_grey = cv2.inRange(hsv, lower_gray, upper_gray)
# Build mask of non-black pixels.
nzmask = cv2.inRange(hsv, (0, 0, 5), (255, 255, 255))
# Erode the mask - all pixels around a black pixels should not be masked
nzmask = cv2.erode(nzmask, np.ones((3, 3)))
mask_grey = mask_grey & nzmask
template[np.where(mask_grey)] = 255
template = cv2.cvtColor(template.copy(), cv2.COLOR_BGR2RGB)
gray_processed = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
# Only the edges that have an intensity gradient more than the
# minimum threshold value and less than the maximum threshold value
# will be displayed
edged = cv2.Canny(gray_processed.copy(), 40, 250)
adapt_thresh = cv2.adaptiveThreshold(
edged.copy(),
255, # maximum value assigned to pixel values exceeding the threshold
cv2.ADAPTIVE_THRESH_GAUSSIAN_C, # gaussian weighted sum of neighborhood
cv2.THRESH_BINARY_INV, # thresholding type
11, # block size (51x51 window)
2) # constant
# Apply some dilation and erosion to join the gaps -
# Change iteration to detect more or less area's
# adapt_thresh = cv2.dilate(adapt_thresh, None, iterations = 9)
# adapt_thresh = cv2.erode(adapt_thresh, None, iterations = 10)
adapt_thresh = cv2.dilate(adapt_thresh, None, iterations=5)
adapt_thresh = cv2.erode(adapt_thresh, None, iterations=5)
contours, hierarchy = cv2.findContours(
adapt_thresh,
cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:20]
working_image = template.copy()
idx = 0
font = cv2.FONT_HERSHEY_COMPLEX
min_x, min_y, max_x, max_y = 0, 0, 0, 0
coord_matrix = None
# loop over our contours
for contour in contours:
# approximate the contour
area = cv2.contourArea(contour)
x, y, w, h = cv2.boundingRect(contour)
perimeter = cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, 0.09 * perimeter, True)
# p1:top-left, p2:bottom-left, p3:bottom-right, p4:top-right
if len(approx) == 4 and 4500 < area < 4900: #
coord = np.matrix([[x, y, x+w, y+h]])
if coord_matrix is not None:
coord_matrix = np.vstack((coord_matrix, coord))
else:
coord_matrix = coord.copy()
_ = cv2.rectangle(working_image,(x,y),(x+w,y+h),(0,255,0),2)
cv2.putText(working_image,str(area) , (x, y), font, 1, (0,0,255))
#cv2.putText(cleaned_bg_img, str(idx) + ":" +str(x) + ","+str(y), (x, y), font, 1, (0,0,255))
idx+=1
start_x = np.min(coord_matrix[:,0])
start_y = np.min(coord_matrix[:,1])
end_x = np.max(coord_matrix[:,2])
end_y = np.max(coord_matrix[:,3])
roi_interest = img.copy()[start_y:end_y, start_x:end_x]
aligned_with_border = cv2.copyMakeBorder(
roi_interest.copy(),
top = 1,
bottom = 1,
left = 1,
right = 1,
borderType = cv2.BORDER_CONSTANT,
value=(0,0,0)
)
plt.figure(figsize = (11.69*2,8.27*2))
plt.axis('off')
plt.imshow(aligned_with_border);
</code></pre>
<p>However, the resulting image has gaps between the reference marks and the image margins as shown in the image below by the red arrows at the top corners of the image.</p>
<p><a href="https://i.sstatic.net/1HsaI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1HsaI.jpg" alt="Cropped image" /></a></p>
<p><strong>My question is, given the code that I have shared, how do I ensure that I get no spaces between the reference marks and the image margins?</strong> That the reference marks are perfectly aligned to the image edge with no gaps?</p>
|
<python><opencv><image-processing><computer-vision><omr>
|
2024-03-28 08:01:51
| 1
| 1,033
|
Timothy Tuti
|
78,236,491
| 986,612
|
Output both to console and file
|
<p>BUT:</p>
<ol>
<li>No external shell commands, e.g. <code>| tee</code>.</li>
<li>No new functions and a change of syntax, e.g., keep using <code>print()</code> as is.</li>
</ol>
|
<python>
|
2024-03-28 07:29:10
| 1
| 779
|
Zohar Levi
|
78,236,271
| 65,889
|
How can I tell pytest that for a function it is the correct behaviour (and only the correct behaviour) to have a failed assertion?
|
<p><strong>Background:</strong> In one project I use <code>pytest</code> with a helper function which for many tests does the final assert. So it is something like this source code:</p>
<p>tsthelper.py:</p>
<pre class="lang-py prettyprint-override"><code>def check(obj1, obj2, *args):
# do stuff calculate (using args) from the objects two strings str1 and str2
assert str1 == str2
</code></pre>
<p>test_arg1.py:</p>
<pre class="lang-py prettyprint-override"><code>from tsthelper import check
def test_arg1_():
# Setup obj1 and obj2
    check(obj1, obj2, arg1)
def test_arg2_():
# Setup obj1 and obj2
    check(obj1, obj2, arg2)
...
def test_no_arg1_should_fail():
# Setup obj1 and obj2
    check(obj1, obj2)
</code></pre>
<p>So, most functions are supposed to hold the assertion; only the last one <em>is supposed to</em> fail.</p>
<p>I don't think using the <code>xfail</code> fixture is the right idea, because then <code>pytest</code> counts the fail separately, but what I want is that the fail of the last function is seen as the correct behaviour which <code>pytest</code> should count towards the successes in its stats.</p>
<p>What do I have to decorate the fail function with, so that it is treated as a success?</p>
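A sketch of one approach (with a simplified helper, not the project's real <code>check</code>): wrap the call in <code>pytest.raises(AssertionError)</code>, which makes the expected failure an ordinary pass rather than an <code>xfail</code> counted separately:

```python
import pytest

def check(obj1, obj2):
    # stand-in for the real helper's final comparison
    assert obj1 == obj2

def test_no_arg1_should_fail():
    # The AssertionError is now the success condition; pytest records
    # this test as a normal pass in its stats.
    with pytest.raises(AssertionError):
        check(1, 2)
```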
|
<python><unit-testing><pytest><pytest-fixtures>
|
2024-03-28 06:43:37
| 1
| 10,804
|
halloleo
|
78,236,141
| 14,586,554
|
How can I apply scipy.interpolate.RBFInterpolator on an image / ndarray?
|
<p>For example, how do I apply <code>RBFInterpolator</code> to an image loaded by OpenCV?
I need to apply the interpolation using vectorized NumPy operations (which are fast).</p>
<p>I need to do a non-affine transformation on an image by defining an interpolation between image points.</p>
<p>How do I do it?</p>
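<p>Here is a minimal sketch of one common approach (my own assumption, names illustrative): fit <code>RBFInterpolator</code> on control-point pairs as a backward mapping (destination to source), evaluate it on the full pixel grid, and sample the image with <code>scipy.ndimage.map_coordinates</code>. This version handles a 2-D (grayscale) array:</p>

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def warp_image(img, src_pts, dst_pts):
    # Backward mapping: learn where each destination pixel comes FROM,
    # i.e. interpolate dst -> src on the control points.
    h, w = img.shape[:2]
    rbf = RBFInterpolator(dst_pts, src_pts)          # vector-valued RBF
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    src = rbf(grid)                                  # (h*w, 2) source coords
    coords = src.T.reshape(2, h, w)                  # (row, col) planes
    return map_coordinates(img, coords, order=1, mode='nearest')
```

<p>For a color image, the same <code>coords</code> can be reused per channel.</p>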
|
<python><numpy><scipy><interpolation>
|
2024-03-28 06:07:49
| 1
| 620
|
Kemsikov
|
78,235,920
| 10,620,003
|
A question on recursion - finding the parents of the taxon
|
<p>I have this dictionary:</p>
<pre><code>tax_dict = {
'Pan troglodytes': 'Hominoidea',
'Pongo abelii': 'Hominoidea',
'Hominoidea': 'Simiiformes',
'Simiiformes': 'Haplorrhini',
'Tarsius tarsier': 'Tarsiiformes',
'Haplorrhini': 'Primates',
'Tarsiiformes': 'Haplorrhini',
'Loris tardigradus': 'Lorisidae',
'Lorisidae': 'Strepsirrhini',
'Strepsirrhini': 'Primates',
'Allocebus trichotis': 'Lemuriformes',
'Lemuriformes': 'Strepsirrhini',
'Galago alleni': 'Lorisiformes',
'Lorisiformes': 'Strepsirrhini',
'Galago moholi': ' Lorisiformes'
}
</code></pre>
<p>I have this code in python:</p>
<pre><code># print each step to follow up
def get_ancestors(taxon):
print('calculating ancestors for ' + taxon)
if taxon == 'Primates':
print('taxon is Primates, returning an empty list')
return []
else:
print('taxon is not Primates, looking up the parent')
parent = tax_dict.get(taxon)
print('the parent is ' + parent + ' ')
print('looking up ancestors for ' + parent)
parent_ancestors = get_ancestors(parent)
print('parent ancestors are ' + str(parent_ancestors))
result = [parent] + parent_ancestors
print('about to return the result: ' + str(result))
return result
</code></pre>
<p>when I run the function by:</p>
<pre><code>get_ancestors('Galago alleni')
</code></pre>
<p>I get these results:</p>
<pre class="lang-none prettyprint-override"><code>calculating ancestors for Galago alleni
taxon is not Primates, looking up the parent
the parent is Lorisiformes
looking up ancestors for Lorisiformes
calculating ancestors for Lorisiformes
taxon is not Primates, looking up the parent
the parent is Strepsirrhini
looking up ancestors for Strepsirrhini
calculating ancestors for Strepsirrhini
taxon is not Primates, looking up the parent
the parent is Primates
looking up ancestors for Primates
calculating ancestors for Primates
taxon is Primates, returning an empty list
parent ancestors are []
about to return the result: ['Primates']
parent ancestors are ['Primates']
about to return the result: ['Strepsirrhini', 'Primates']
parent ancestors are ['Strepsirrhini', 'Primates']
about to return the result: ['Lorisiformes', 'Strepsirrhini',
'Primates']
['Lorisiformes', 'Strepsirrhini', 'Primates']
</code></pre>
<p>Maybe this is a simple question, but I cannot get my head around it. I don't know how the result ends up being a list of the parents. <strong>I thought the result would be a single value, returned once the recursion is done.</strong>
I know it's basic recursion. Can anybody help me with this? Do you have any useful references that could help me with that? Thank you.</p>
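<p>As a point of comparison, here is an iterative version that builds the same list; each loop iteration plays the role of one recursive call returning <code>[parent] + parent_ancestors</code> (a sketch, using only the subset of the dictionary that this lookup touches):</p>

```python
tax_dict = {
    'Galago alleni': 'Lorisiformes',
    'Lorisiformes': 'Strepsirrhini',
    'Strepsirrhini': 'Primates',
}

def get_ancestors_iter(taxon):
    # each iteration mirrors one recursive call: look up the parent,
    # append it, and continue until 'Primates' is reached
    result = []
    while taxon != 'Primates':
        taxon = tax_dict[taxon]
        result.append(taxon)
    return result
```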
|
<python>
|
2024-03-28 04:54:02
| 2
| 730
|
Sadcow
|
78,235,911
| 480,118
|
pandas: sorting/re-ordering columns for a multi-level column dataframe
|
<p>I have the following data:</p>
<pre><code>from pandas import Timestamp
values = [['IDX100', 'field1', Timestamp('1999-02-01 05:00:00'), '101'],
['IDX100', 'field1', Timestamp('1999-02-02 05:00:00'), '102'],
['IDX100', 'field1', Timestamp('1999-02-03 05:00:00'), '103'],
['IDX200', 'field1', Timestamp('1999-02-01 05:00:00'), '601'],
['IDX200', 'field1', Timestamp('1999-02-02 05:00:00'), '602'],
['IDX200', 'field1', Timestamp('1999-02-03 05:00:00'), '603'],
['IDX100', 'field2', Timestamp('1999-02-01 05:00:00'), '201'],
['IDX100', 'field2', Timestamp('1999-02-02 05:00:00'), '202'],
['IDX100', 'field2', Timestamp('1999-02-03 05:00:00'), '203'],
['IDX200', 'field2', Timestamp('1999-02-01 05:00:00'), '701'],
['IDX200', 'field2', Timestamp('1999-02-02 05:00:00'), '702'],
['IDX200', 'field2', Timestamp('1999-02-03 05:00:00'), '703'],
['IDX100', 'field3', Timestamp('1999-02-01 05:00:00'), '301'],
['IDX100', 'field3', Timestamp('1999-02-02 05:00:00'), '302'],
['IDX100', 'field3', Timestamp('1999-02-03 05:00:00'), '303'],
['IDX200', 'field3', Timestamp('1999-02-01 05:00:00'), '801'],
['IDX200', 'field3', Timestamp('1999-02-02 05:00:00'), '802'],
['IDX200', 'field3', Timestamp('1999-02-03 05:00:00'), '803']]
df = pd.DataFrame(values, columns = ['identifier', 'code', 'date', 'value'])
</code></pre>
<p>After pivoting my dataframe i end up with the following:</p>
<pre><code>df = df.pivot(index=['date'], columns=['identifier', 'code'], values=['value'])
value
identifier IDX100 IDX200 IDX100 IDX200 IDX100 IDX200
code field1 field1 field2 field2 field3 field3
date
1999-02-01 05:00:00 101 601 201 701 301 801
1999-02-02 05:00:00 102 602 202 702 302 802
1999-02-03 05:00:00 103 603 203 703 303 803
</code></pre>
<p>I would however like to have the output look something like this:</p>
<pre><code>identifier IDX100 IDX200
code field3 field2 field1 field3 field2 field1
date
1999-02-01 05:00:00 301 201 101 801 701 601
1999-02-02 05:00:00 302 202 102 802 702 602
1999-02-03 05:00:00 303 203 103 803 703 603
</code></pre>
<p>I can get close to this by doing something like this:</p>
<pre><code>df = df.reindex(sorted(df.columns), axis=1)
</code></pre>
<p>But this maintains the order of the level-2 columns as field1, field2, field3. What I would like is to be able to sort that level differently, preferably based on a list that I provide. For example, I may want to sort it as field3, field2, field1, or as field2, field1, field3.</p>
<p>Can anyone help with this?</p>
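<p>For reference, one way this is commonly done (a sketch, with a small hand-built frame standing in for the pivot): build the desired column order with <code>pd.MultiIndex.from_product</code> and pass it to <code>reindex</code>:</p>

```python
import pandas as pd

# small stand-in for the pivoted frame (one date row, 'value' level dropped)
df = pd.DataFrame(
    [[101, 601, 201, 701, 301, 801]],
    index=['1999-02-01'],
    columns=pd.MultiIndex.from_tuples(
        [('IDX100', 'field1'), ('IDX200', 'field1'),
         ('IDX100', 'field2'), ('IDX200', 'field2'),
         ('IDX100', 'field3'), ('IDX200', 'field3')],
        names=['identifier', 'code']))

field_order = ['field3', 'field2', 'field1']   # any order you like
new_cols = pd.MultiIndex.from_product(
    [['IDX100', 'IDX200'], field_order], names=['identifier', 'code'])
out = df.reindex(columns=new_cols)
```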
|
<python><pandas>
|
2024-03-28 04:50:29
| 2
| 6,184
|
mike01010
|
78,235,551
| 6,509,519
|
Looking for Regex pattern to return similar results to my current function
|
<p>I have some pascal-cased text that I'm trying to split into separate tokens/words.
For example, <code>"Hello123AIIsCool"</code> would become <code>["Hello", "123", "AI", "Is", "Cool"]</code>.</p>
<h1>Some Conditions</h1>
<ul>
<li>"Words" will always start with an upper-cased letter. E.g., <code>"Hello"</code></li>
<li>A contiguous sequence of numbers should be left together. E.g., <code>"123"</code> -> <code>["123"]</code>, not <code>["1", "2", "3"]</code></li>
<li>A contiguous sequence of upper-cased letters should be kept together <em>except</em> when the last letter is the start to a new word as defined in the first condition. E.g., <code>"ABCat"</code> -> <code>["AB", "Cat"]</code>, not <code>["ABC", "at"]</code></li>
<li>There is no guarantee that each condition will have a match in a string. E.g., <code>"Hello"</code>, <code>"HelloAI"</code>, <code>"HelloAIIsCool"</code> <code>"Hello123"</code>, <code>"123AI"</code>, <code>"AIIsCool"</code>, and any other combination I haven't provided are potential candidates.</li>
</ul>
<p>I've tried a couple regex variations. The following two attempts got me pretty close to what I want, but not quite.</p>
<h1>Version 0</h1>
<pre class="lang-py prettyprint-override"><code>import re
def extract_v0(string: str) -> list[str]:
word_pattern = r"[A-Z][a-z]*"
num_pattern = r"\d+"
pattern = f"{word_pattern}|{num_pattern}"
extracts: list[str] = re.findall(
pattern=pattern, string=string
)
return extracts
string = "Hello123AIIsCool"
extract_v0(string)
</code></pre>
<pre class="lang-py prettyprint-override"><code>['Hello', '123', 'A', 'I', 'Is', 'Cool']
</code></pre>
<h1>Version 1</h1>
<pre class="lang-py prettyprint-override"><code>import re
def extract_v1(string: str) -> list[str]:
word_pattern = r"[A-Z][a-z]+"
num_pattern = r"\d+"
upper_pattern = r"[A-Z][^a-z]*"
pattern = f"{word_pattern}|{num_pattern}|{upper_pattern}"
extracts: list[str] = re.findall(
pattern=pattern, string=string
)
return extracts
string = "Hello123AIIsCool"
extract_v1(string)
</code></pre>
<pre class="lang-py prettyprint-override"><code>['Hello', '123', 'AII', 'Cool']
</code></pre>
<h1>Best Option So Far</h1>
<p>This uses a combination of regex and looping. It works, but is this the best solution? Or is there some fancy regex that can do it?</p>
<pre class="lang-py prettyprint-override"><code>import re
def extract_v2(string: str) -> list[str]:
word_pattern = r"[A-Z][a-z]+"
num_pattern = r"\d+"
upper_pattern = r"[A-Z][A-Z]*"
groups = []
for pattern in [word_pattern, num_pattern, upper_pattern]:
while string.strip():
group = re.search(pattern=pattern, string=string)
if group is not None:
groups.append(group)
string = string[:group.start()] + " " + string[group.end():]
else:
break
ordered = sorted(groups, key=lambda g: g.start())
return [grp.group() for grp in ordered]
string = "Hello123AIIsCool"
extract_v2(string)
</code></pre>
<pre class="lang-py prettyprint-override"><code>['Hello', '123', 'AI', 'Is', 'Cool']
</code></pre>
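<p>For comparison, a single-pattern variant that appears to satisfy the stated conditions (not guaranteed against inputs beyond those above): <code>[A-Z][a-z]+|[A-Z]+(?![a-z])|\d+</code>, where the negative lookahead makes an upper-case run give back its final capital when that capital starts a new lower-cased word:</p>

```python
import re

PATTERN = re.compile(r"[A-Z][a-z]+|[A-Z]+(?![a-z])|\d+")

def extract(string: str) -> list[str]:
    # [A-Z][a-z]+     -> capitalized word ("Hello", "Is", "Cool")
    # [A-Z]+(?![a-z]) -> run of capitals, backing off the last one
    #                    if it starts a lower-cased word ("AI" in "AIIs")
    # \d+             -> run of digits ("123")
    return PATTERN.findall(string)
```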
|
<python><regex><pascalcasing>
|
2024-03-28 02:21:46
| 5
| 3,325
|
Ian Thompson
|
78,235,371
| 893,411
|
Creating a reminder task in langchain
|
<p>I'm trying to create a study tutor with LLMs and langchain. What I'm looking for is that the app asks the student once in a while during the conversation whether they have done their homework, and based on the answer either reminds them later or doesn't remind them again. I'm looking for a clue on how to achieve such a thing. Thanks in advance.</p>
|
<python><langchain><large-language-model>
|
2024-03-28 01:12:45
| 0
| 4,283
|
m0j1
|
78,234,756
| 480,118
|
bulk insert data into table where cell can be empty string for a column that is part of a composite primary key, using psycopg3
|
<p>I have the following table - note there is a composite primary key consisting of 5 columns.</p>
<pre><code>test_ts = Table('test_ts', meta,
Column('metric_id', ForeignKey('metric.id'), primary_key = True),
Column('entity_id', Integer, primary_key = True),
Column('date', DateTime, primary_key = True),
Column('freq', String, primary_key = True),
Column('context', String, primary_key = True, server_default=""),
Column('value', String),
Column("update_time", DateTime, server_default=func.now(), onupdate=func.current_timestamp()),
Column('update_by', String, server_default=func.current_user()))
</code></pre>
<p>I am trying to bulk insert some data into this Postgres table, where the 'context' column may be an empty string (''):</p>
<pre><code> date context value freq entity_id metric_id
1 1999-02-01T05:00:00.000Z test 101 D 1105 4
2 1999-02-02T05:00:00.000Z test 102 D 1105 4
8 1999-02-01T05:00:00.000Z 201 D 1105 4
9 1999-02-02T05:00:00.000Z 202 D 1105 4
</code></pre>
<p>When trying to do this however, i get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/workspaces/service/data_svc.py", line 121, in bulk_insert
cur.execute(sql_insert)
File "/usr/local/lib/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
raise ex.with_traceback(None)
psycopg.errors.NotNullViolation: null value in column "context" of relation "test_ts" violates not-null constraint
DETAIL: Failing row contains (4, 1105, 1999-02-01 05:00:00, D, null, 201, 2024-03-27 21:23:42.778517, db_user).
</code></pre>
<p><strong>You can see above that the row value contains a 'null' when it should be an empty string.</strong>
I am not sure why the empty string is getting converted to null here, as it's not None or NaN but explicitly set to '' in my pandas dataframe.</p>
<p>The SQL I'm using is the following:</p>
<pre><code>sql_create:
drop table if exists tmp_tbl; CREATE UNLOGGED TABLE tmp_tbl AS SELECT date,context,value,freq,entity_id,metric_id FROM test_ts LIMIT 0
sql_copy:
COPY tmp_tbl (date,context,value,freq,entity_id,metric_id) FROM STDIN (FORMAT CSV, DELIMITER "\t")
sql_insert:
insert into test_ts(date,context,value,freq,entity_id,metric_id)
select * from tmp_tbl on conflict(metric_id,entity_id,date,freq,context)
do update set value = EXCLUDED.value;drop table if exists tmp_tbl;
</code></pre>
<p>and the code snippet looks like this:</p>
<pre><code> with psycopg.connect(self.connect_str, autocommit=True) as conn:
io_buf = io.StringIO()
df.to_csv(io_buf, sep='\t', header=False, index=False)
io_buf.seek(0)
with conn.cursor() as cur:
cur.execute(sql_create)
with cur.copy(sql_copy) as copy:
while data:=io_buf.read(self.batch_size):
copy.write(data)
cur.execute(sql_insert)
conn.commit()
</code></pre>
<p>I can work around this by using something like 'N/A' instead of '', but I would much prefer it to be an empty string.
Any help is appreciated.</p>
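<p>One likely explanation, sketched below: with <code>FORMAT CSV</code>, Postgres <code>COPY</code> reads an <em>unquoted</em> empty field as NULL, but a quoted empty field (<code>""</code>) as an empty string. So quoting the buffer on the pandas side should preserve <code>''</code> (an assumption on my part; only the buffer-writing step is shown):</p>

```python
import csv
import io

import pandas as pd

df = pd.DataFrame({"context": ["test", ""], "value": ["101", "201"]})

buf = io.StringIO()
# QUOTE_ALL writes empty strings as "" in the stream; Postgres
# COPY ... (FORMAT CSV) then reads them as '' rather than NULL
df.to_csv(buf, sep="\t", header=False, index=False, quoting=csv.QUOTE_ALL)
```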
|
<python><postgresql><sqlalchemy><psycopg2><psycopg3>
|
2024-03-27 21:43:33
| 0
| 6,184
|
mike01010
|
78,234,711
| 9,476,887
|
Element in a shadow root not detected as visible with Selenium in Python
|
<p>I would like to find with Selenium the following <code>button</code> element:</p>
<pre><code><button class="walla-button__button walla-button__button--full-width walla-button__button--large walla-button__button--primary" role="button" type="submit" part="button">
<div class="walla-button__children-container" part="children-container">
<slot name="left-icon"></slot>
<span>Acceder a Wallapop</span>
<slot name="right-component"></slot>
</div>
</button>
</code></pre>
<p>This element is in a popup sign-in window together with other elements that Selenium does find, but the problematic element is also a direct child of a shadow root, unlike the elements that Selenium can find. Below are one element that can be found and the problematic button:</p>
<p>Input:</p>
<pre><code>wait.until(EC.element_to_be_clickable((By.XPATH,"//*[@id='signin-email']")))
</code></pre>
<p>Output:</p>
<pre><code><selenium.webdriver.remote.webelement.WebElement
(session="0f58f81835cb803c3785a19702d01945",
element="f.267C3EB8F2845581DDDFF3F95FF7DD1E.d.ABF640D3BAB43142CFC70C05BF2CACE6.e.124")>
</code></pre>
<p>Input:</p>
<pre><code>wait.until(EC.visibility_of_any_elements_located((By.XPATH,"//button[@class='walla-button__button walla-button__button--full-width walla-button__button--large walla-button__button--primary' and @type='submit']")))
</code></pre>
<p>Output:</p>
<pre><code>-------------------------------------------------------------------------- TimeoutException Traceback (most recent call last) Cell In[214], line 1
----> 1 wait.until(EC.visibility_of_any_elements_located((By.XPATH,"//button[@class='walla-button__button walla-button__button--full-width walla-button__button--large
walla-button__button--primary' and @type='submit']")))
File
~\anaconda3\Lib\site-packages\selenium\webdriver\support\wait.py:105,
in WebDriverWait.until(self, method, message)
103 if time.monotonic() > end_time:
104 break
--> 105 raise TimeoutException(message, screen, stacktrace)
TimeoutException: Message:
wait.until(EC.visibility_of_any_elements_located((By.XPATH,"//button[@class='walla-button__button walla-button__button--full-width walla-button__button--large walla-button__button--primary' and @type='submit']")))
</code></pre>
<p>Help on how to find and click that button appreciated, thanks!</p>
|
<python><selenium-webdriver>
|
2024-03-27 21:33:53
| 0
| 522
|
ForEverNewbie
|
78,234,592
| 8,863,970
|
'X' object has no attribute 'functionName' - Pyspark / Python
|
<p>I'm new to Pyspark, so I'm just learning as I go.</p>
<p>I'm trying to experiment with unittest and I am getting the error below:</p>
<pre><code>def drop_duplicates(df):
df = df.dropDuplicates(df)
return df
</code></pre>
<pre><code>import unittest
class TestNotebook(unittest.TestCase):
def test_drop_duplicates(self):
data = (
['1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '1'],
['1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '3', '1'],
['1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '2'],
['2', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '1']
)
columns = ["ID", "TimeFrom", "TimeTo", "Serial", "Code"]
df = spark.createDataFrame(data, columns)
expected_data = [
('1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '1'),
('1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '2')
]
self.assertEqual(drop_duplicates(df), expected_data)
res = unittest.main(argv=[''], verbosity=2, exit=False)
</code></pre>
<p>(The assert might fail, but I'll find that out once I move past this error.) For now I just get the following error:</p>
<pre class="lang-none prettyprint-override"><code> File "/tmp/ipykernel_15937/2907449366.py", line 2, in drop_duplicates
df = df.dropDuplicates(df)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/spark/python/pyspark/sql/dataframe.py", line 4207, in dropDuplicates
raise PySparkTypeError(
pyspark.errors.exceptions.base.PySparkTypeError: [NOT_LIST_OR_TUPLE] Argument `subset` should be a list or tuple, got DataFrame.
</code></pre>
<p>Is there something I am missing? I'm reading the <a href="https://spark.apache.org/docs/3.1.2/api/python/reference/api/pyspark.sql.DataFrame.dropDuplicates.html" rel="nofollow noreferrer">docs</a> on this method but can't seem to figure it out.</p>
|
<python><pyspark>
|
2024-03-27 21:07:28
| 1
| 1,013
|
Saffik
|
78,234,590
| 903,188
|
How do I find a specific Python anytree descendant?
|
<p>The following Python anytree construction</p>
<pre><code>top = Node("top")
a = Node("a", parent=top)
b = Node("b", parent=top)
c = Node("c", parent=a)
d = Node("d", parent=a)
e1 = Node("e", parent=c)
e2 = Node("e", parent=a)
c1 = Node("c", parent=b)
e3 = Node("e", parent=c1)
print(RenderTree(top, style=AsciiStyle()).by_attr())
</code></pre>
<p>produces this tree:</p>
<pre><code>top
|-- a
| |-- c
| | +-- e
| |-- d
| +-- e
+-- b
+-- c
+-- e
</code></pre>
<p>I want to process node <code>a/c/e</code> under <code>top</code> if that node exists. If it doesn't exist, I want to get None.</p>
<p>Here is what I have implemented that does what I want:</p>
<pre><code>gnode = anytree.search.find(top, lambda node : str(node)=="Node('/top/a/c/e')")
print(f"Exp match, rcv {gnode}")
gnode = anytree.search.find(top, lambda node : str(node)=="Node('/top/a/c/f')")
print(f"Exp None, rcv {gnode}")
</code></pre>
<p>The above code returns the following:</p>
<pre><code>Exp match, rcv Node('/top/a/c/e')
Exp None, rcv None
</code></pre>
<p>Although the code works, I'm thinking there must be a better way to do it -- something like <code>top.get_descendant("a/c/e")</code> that would directly access that node instead of searching for it.</p>
<p>What's the right way to access node <code>a/c/e</code> under node <code>top</code> (and get None if it doesn't exist)?</p>
|
<python><anytree>
|
2024-03-27 21:06:09
| 1
| 940
|
Craig
|
78,234,541
| 4,926,275
|
How to control the font size in Kivy settings panel for the menu item name in settings menu list
|
<p>I tried to use the Kivy settings panel in my application, but I found that the font size of some settings-panel items follows the font size set in my application, and I currently cannot set the font size of the settings panel directly. My application uses a large font size for display purposes, so in the settings panel the menu list and the section names of the option list use the same font size as the application, which makes the settings panel look very strange (not acceptable).</p>
<p>The following code is a Kivy scatter example (from geeksforgeeks.org)</p>
<p>the scattertest.py file</p>
<pre><code>import kivy
from kivy.app import App
kivy.require('1.9.0')
from kivy.uix.scatter import Scatter
from kivy.uix.widget import Widget
from kivy.uix.relativelayout import RelativeLayout
from kivy.uix.settings import SettingsWithSidebar, SettingsWithTabbedPanel
from kivy.logger import Logger
class SquareWidget(Widget):
pass
class ScatterWidget(Scatter):
pass
class ScatterTest(RelativeLayout):
pass
class ScatterTestApp(App):
def build(self):
self.settings_cls = SettingsWithTabbedPanel
# self.settings_cls = SettingsWithSidebar
return ScatterTest()
def app_open_settings(self):
App.open_settings(self)
def close_settings(self, settings=None):
Logger.info("scattertest.py: App.close_settings: {0}".format(settings))
super(ScatterTestApp, self).close_settings(settings)
if __name__=='__main__':
ScatterTestApp().run()
</code></pre>
<p>the scattertest.kv file</p>
<pre><code>#:kivy 2.1.0
<Label>:
font_size:50
<SquareWidget>:
size: 100, 100
canvas:
Color:
rgb: [0.345, 0.85, 0.456]
Rectangle:
size: self.size
pos: self.pos
<ScatterTest>:
canvas:
Color:
rgb: .8, .5, .4
Rectangle:
size: self.size
pos: self.pos
ScatterWidget:
id: square_widget_id
SquareWidget:
Label:
text: 'Position: ' + str(square_widget_id.pos)
size_hint: .1, .1
pos: 500, 300
</code></pre>
<p>In the above example, I set the global font size of the Label widget to 50 under <code>&lt;Label&gt;:</code>; in the app, the Label item then shows 'Position: ' followed by the point coordinates at font size 50.</p>
<p>When pressing the 'F1' key, the settings panel is shown; here I set it to use the TabbedPanel style. We can see that the panel tab names use font size 50, and in the options part the section names also use font size 50. This makes the settings panel feel wrong (all these affected items appear to be based on the Label class).</p>
<p>I read the Kivy Settings documentation, but I cannot find a good way to set the font size of the settings panel directly.</p>
<p>I have tried defining a custom Label widget in the .kv file, changing <code>&lt;Label&gt;:</code> to <code>&lt;LabelA@Label&gt;:</code> and switching the Label item in the code above to LabelA. Running the app and opening the settings panel, the panel now uses its default font size and all items look right. This works, but it feels like a workaround rather than a proper solution.</p>
<p>I hope someone can help me find a better solution, thanks!</p>
|
<python><kivy><settings><font-size>
|
2024-03-27 20:56:29
| 1
| 385
|
phchen2
|
78,234,490
| 7,233,155
|
Github workflow Python project with Maturin fails to build
|
<p>I have a Python project with a Rust extension module using pyo3 bindings.
The project successfully builds and compiles locally, and it builds and compiles on readthedocs. It uses the <code>pip install .</code> method and builds local wheels after Rust and all dependencies are built on the local architecture.</p>
<p>However, <strong>it fails to build on github workflow</strong>. The relevant part of the github workflow commands is:</p>
<pre class="lang-yaml prettyprint-override"><code>jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.11"]
steps:
- uses: actions/checkout@v4
- name: Set up Rust
uses: actions-rust-lang/setup-rust-toolchain@v1
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install . -v
</code></pre>
<p>The main cause of the error is that Rust compiler warnings, which seem to be ignored in the other cases, are reported here as errors rather than warnings.</p>
<p>For example the first is:</p>
<pre class="lang-none prettyprint-override"><code>error: unused import: `Axis`
--> src/dual/dual2.rs:12:38
|
12 | use ndarray::{Array, Array1, Array2, Axis};
| ^^^^
|
= note: `-D unused-imports` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(unused_imports)]`
</code></pre>
<p>Ultimately this is irrelevant. Sure I could remove this, but some <em>warnings</em> reported as <em>errors</em> are not actually sensible to change.</p>
<p>It leads to:</p>
<pre class="lang-none prettyprint-override"><code> 💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit status: 101": `env -u CARGO PYO3_ENVIRONMENT_SIGNATURE="cpython-3.11-64bit" PYO3_PYTHON="/opt/hostedtoolcache/Python/3.11.8/x64/bin/python" PYTHON_SYS_EXECUTABLE="/opt/hostedtoolcache/Python/3.11.8/x64/bin/python" "cargo" "rustc" "--message-format" "json-render-diagnostics" "--manifest-path" "/home/runner/work/folder/Cargo.toml" "--release" "--lib" "--crate-type" "cdylib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', '/opt/hostedtoolcache/Python/3.11.8/x64/bin/python', '--compatibility', 'off'] returned non-zero exit status 1
error: subprocess-exited-with-error
</code></pre>
<p><strong>What is the solution to get this building properly on github actions?</strong></p>
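<p>If it helps diagnose: <code>actions-rust-lang/setup-rust-toolchain</code> documents that it sets <code>RUSTFLAGS</code> to <code>-D warnings</code> by default, which would explain warnings becoming errors only on CI. A sketch of the workaround, overriding the action's <code>rustflags</code> input:</p>

```yaml
- name: Set up Rust
  uses: actions-rust-lang/setup-rust-toolchain@v1
  with:
    # the action defaults RUSTFLAGS to "-D warnings"; clear it so CI
    # matches a plain local `cargo build`
    rustflags: ""
```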
|
<python><rust><github-actions><maturin>
|
2024-03-27 20:43:38
| 1
| 4,801
|
Attack68
|
78,234,170
| 2,816,272
|
How can I search a string in a JSON file with a word-embedding list and return the nearest occurrences?
|
<p>I saw some Python code that generates a file with a representation of embeddings (vectors that represent a string).</p>
<p>The format of the file generated with the model "all-MiniLM-L6-v2" is:</p>
<pre><code>[
{
"codigo":1,
"descricao":"Alain Prost",
"embedding":[
-0.04376700147986412,
0.08378474414348602,
-0.044959407299757004,
-0.06955558061599731,
-0.0011182611342519522,
0.10521695017814636,
0.11189017444849014,
0.1651790291070938,
0.07515741139650345,
0.05490146577358246,
0.02417689561843872,
-0.016437038779258728,
0.010290289297699928,
0.017122231423854828,
-0.05169348418712616,
-0.016834666952490807,
-0.01511311624199152,
0.007502275053411722,
0.03960637003183365,
0.013815234415233135,
-0.05070938542485237,
-0.056177735328674316,
0.015933101996779442,
-0.007893730886280537,
1.4036894754099194e-05,
-0.01063060574233532,
0.05427253618836403,
0.016765154898166656,
0.04841822385787964,
-0.02379232831299305,
0.025293899700045586,
-0.06888816505670547,
-0.03624174743890762,
-0.040663089603185654,
-0.004510633181780577,
-0.03612743690609932,
-0.08588571101427078,
-0.03383230045437813,
-0.03971630707383156,
0.0925847589969635,
0.06980527937412262,
0.011318318545818329,
-0.14096367359161377,
0.029876230284571648,
-0.01633320190012455,
-0.010737375356256962,
0.04669718071818352,
-0.014320306479930878,
-0.05380765348672867,
-0.01826721429824829,
-0.0775720626115799,
0.007413752842694521,
0.010430709458887577,
-0.07329824566841125,
-0.038187265396118164,
-0.02384389564394951,
0.07746574282646179,
0.02492334321141243,
0.002449194435030222,
-0.05240411311388016,
0.020897606387734413,
-0.01624673791229725,
-0.06399786472320557,
-0.03406109660863876,
0.05889088287949562,
0.045756977051496506,
-0.08131976425647736,
0.0538562573492527,
-0.06892945617437363,
0.04350525140762329,
-0.05869260057806969,
0.024457629770040512,
0.0017231887904927135,
0.041741617023944855,
0.06515597552061081,
-0.08843974024057388,
-0.036975421011447906,
-0.04383429139852524,
-0.04289741814136505,
-0.03480835258960724,
0.04213075712323189,
-0.0947691947221756,
-0.10631424933671951,
-0.05164273455739021,
0.0527079738676548,
-0.0026282896287739277,
0.11123877763748169,
-0.010186375118792057,
0.004350247327238321,
-0.09234373271465302,
0.00022207570145837963,
-0.036559659987688065,
-0.05228490009903908,
0.03234873339533806,
-0.005511161405593157,
0.04750655218958855,
-0.08976765722036362,
-0.005845387000590563,
-0.02803802862763405,
0.14588715136051178,
-0.0012976604048162699,
0.04080767557024956,
0.04338463768362999,
0.015407223254442215,
-0.08320754021406174,
0.037945766001939774,
-0.017297346144914627,
0.024563206359744072,
0.04263288155198097,
0.025433938950300217,
-0.03403696045279503,
-0.05286381393671036,
-0.01756090484559536,
-0.002016932936385274,
0.0027279567439109087,
0.047004375606775284,
-0.04959726706147194,
-0.015475046820938587,
0.0725177600979805,
-0.04801830276846886,
0.048273105174303055,
-0.029613768681883812,
-0.05410566180944443,
0.05482526868581772,
0.0076617104932665825,
0.073040671646595,
-0.03162190690636635,
-8.039190253239277e-34,
-0.013159706257283688,
-0.016090840101242065,
0.07397063821554184,
0.07282368093729019,
-0.005004068370908499,
0.0062707713805139065,
-0.05940960720181465,
-0.07829747349023819,
-0.017122328281402588,
-0.07634077966213226,
-0.02839534729719162,
-0.07541434466838837,
0.011743525043129921,
-0.026070842519402504,
0.021514642983675003,
0.03044724091887474,
0.037806976586580276,
0.03549019619822502,
0.013167202472686768,
-0.018708810210227966,
0.007411877159029245,
0.04208431392908096,
-0.0017672213725745678,
0.016767306253314018,
0.042273279279470444,
0.00972240325063467,
0.09876655787229538,
-0.013753202743828297,
-0.039335619658231735,
-0.030701594427227974,
-0.006173287518322468,
0.025760365650057793,
-0.04054010286927223,
0.056439004838466644,
0.023311946541070938,
-0.022928737103939056,
-0.007852778770029545,
-0.04520851746201515,
0.045798882842063904,
0.008332950063049793,
0.005317758768796921,
-0.021758222952485085,
0.08777586370706558,
-0.001095705316402018,
0.008322017267346382,
-0.047873519361019135,
0.023781653493642807,
0.05791536718606949,
0.1103583350777626,
-0.03695837780833244,
0.03424883633852005,
-0.0043442994356155396,
-0.045328013598918915,
-0.006399083416908979,
-0.0022741626016795635,
0.026356521993875504,
-0.06595919281244278,
0.01489550806581974,
-0.00993384514003992,
-0.004256079904735088,
0.05318630486726761,
0.03500215709209442,
-0.030282488092780113,
0.06818058341741562,
-0.03611261025071144,
-0.00042665813816711307,
-0.03958318755030632,
0.054165199398994446,
0.03490123152732849,
-0.027355331927537918,
-0.1218971237540245,
0.059496473520994186,
0.11048189550638199,
-0.044817615300416946,
-0.045876920223236084,
0.05318649485707283,
-0.019234681501984596,
0.025589890778064728,
-0.09075476229190826,
0.006619459483772516,
-0.07048900425434113,
0.002478431211784482,
0.014732835814356804,
0.015378294512629509,
-0.010561746545135975,
-0.044879332184791565,
-0.0440324991941452,
0.000804506300482899,
0.04663644731044769,
0.12025374174118042,
0.02576148509979248,
-0.006950514391064644,
-0.008816791698336601,
0.01322726346552372,
-0.10207735002040863,
6.758107581531859e-35,
-0.04895230382680893,
0.00044889742275699973,
0.06258796155452728,
0.05086054280400276,
0.10057681798934937,
-0.03941198065876961,
0.021326975896954536,
0.08152614533901215,
-0.0004993032780475914,
0.019457058981060982,
0.09902072697877884,
-0.06066109240055084,
0.10520972311496735,
-0.1180957779288292,
-0.04043348878622055,
0.13587746024131775,
-0.011231197975575924,
0.005684691481292248,
-0.05967259034514427,
-0.08215924352407455,
0.024332145228981972,
0.024530921131372452,
0.031302567571401596,
-0.04070316627621651,
-0.12310207635164261,
0.03254634514451027,
0.11270913481712341,
0.060394853353500366,
-0.08383730798959732,
-0.01133598294109106,
-0.03808245062828064,
-0.023190151900053024,
-0.06691887974739075,
0.013513924553990364,
-0.05324095860123634,
0.09535984694957733,
-0.021769806742668152,
0.06808806955814362,
-0.0018341721734032035,
0.08443459868431091,
-0.04012518748641014,
-0.009696738794445992,
0.037875086069107056,
-0.026477433741092682,
0.07446243613958359,
-0.06514057517051697,
0.015685996040701866,
-0.06705299019813538,
0.024632146582007408,
-0.014661968685686588,
-0.018442410975694656,
0.05574002489447594,
-0.02014113776385784,
-0.047132350504398346,
0.0496378056704998,
0.0052811079658567905,
-0.03336593508720398,
-0.002416495466604829,
0.008500812575221062,
0.07484209537506104,
0.07398315519094467,
0.056250426918268204,
0.03129546344280243,
0.0264076329767704,
0.030829958617687225,
-0.06896060705184937,
-0.11525331437587738,
-0.02287617139518261,
0.014295394532382488,
0.06505643576383591,
0.08990739285945892,
0.05023878812789917,
-0.1306740790605545,
0.005228940863162279,
-0.02513446845114231,
0.09248469024896622,
-0.04951559379696846,
0.07476413995027542,
-0.02717839926481247,
0.008030343800783157,
-0.03858125954866409,
-0.09855242073535919,
-0.04341096431016922,
0.01543387770652771,
-0.024819210171699524,
0.036512166261672974,
-0.03962823003530502,
-0.09858094900846481,
0.0702538713812828,
-0.04758270084857941,
-0.0056264870800077915,
-0.025418918579816818,
0.04300766438245773,
-0.05326545983552933,
0.02151181921362877,
-1.2410082739222617e-08,
-0.022358816117048264,
0.015648063272237778,
-0.0415060892701149,
-0.00010502521035959944,
-0.0314381904900074,
-0.06952173262834549,
0.030622998252511024,
-0.09376975148916245,
-0.04358035698533058,
0.004702138714492321,
-0.04107971489429474,
-0.015522287227213383,
0.04647141695022583,
-0.03630853071808815,
0.07640153914690018,
0.015367956832051277,
0.0003513091360218823,
0.07410185784101486,
-0.024652114138007164,
0.04225892946124077,
0.005745219066739082,
0.03425384312868118,
-0.017282333225011826,
-0.028105905279517174,
-0.019109562039375305,
-0.022345177829265594,
0.04238805174827576,
0.01908213645219803,
0.004253830295056105,
-0.004323870409280062,
-0.00828507263213396,
0.04277166351675987,
0.01263809110969305,
-0.08606499433517456,
0.06635372340679169,
0.09709060937166214,
0.03835307061672211,
0.05318101495504379,
-0.0021448535844683647,
0.0766974613070488,
0.024480514228343964,
-0.03913270682096481,
0.004100404679775238,
0.029588110744953156,
0.006501220166683197,
0.03766942396759987,
0.0055293552577495575,
-0.05407750979065895,
0.003028532490134239,
-0.004140743054449558,
-0.0023235157132148743,
0.05007375031709671,
-0.01090778037905693,
0.012557691894471645,
0.018586203455924988,
0.053417790681123734,
-0.03843330964446068,
0.003068356541916728,
-0.07908729463815689,
-0.01524473074823618,
0.04108268767595291,
-0.02860739268362522,
0.06565400958061218,
0.023170659318566322
]
},
{
"codigo":2,
"descricao":"Ayrton Senna",
"embedding":[
-0.11275111883878708,
-0.04252505674958229,
-0.009049834683537483,
0.011212156154215336,
-0.047949858009815216,
0.030582023784518242,
0.13628773391246796,
-0.008150441572070122,
-0.0001293766836170107,
0.03802379593253136,
0.072489432990551,
-0.08784235268831253,
-0.0781305655837059,
0.06677593290805817,
-0.06298733502626419,
0.087885282933712,
-0.053338438272476196,
-0.013437110930681229,
0.02285934053361416,
-0.03463083133101463,
-0.1208895593881607,
0.035654135048389435,
-0.0034052329137921333,
0.02075120247900486,
0.01327497884631157,
-0.032590851187705994,
0.004454594571143389,
0.05418514460325241,
-0.06094468757510185,
-0.05599478632211685,
-0.004106787499040365,
-0.07678581774234772,
0.04340159147977829,
0.017842937260866165,
0.02949387952685356,
-0.007257427088916302,
-0.0644332766532898,
0.012047283351421356,
0.014177532866597176,
0.015570977702736855,
0.007476386614143848,
-0.01021003257483244,
-0.024430135264992714,
0.01893731951713562,
-0.03585066273808479,
-0.040841732174158096,
0.02237538993358612,
-0.06412603706121445,
0.03432679921388626,
0.0031201448291540146,
-0.026181157678365707,
-0.04635085165500641,
-0.059544868767261505,
-0.005927531514316797,
-0.0033280153293162584,
0.021542759612202644,
-0.01260500680655241,
0.033978041261434555,
-0.03178206831216812,
-0.025371814146637917,
0.07174889743328094,
-0.0024521711748093367,
-0.09167266637086868,
-0.046929117292165756,
0.022732241079211235,
0.02222401276230812,
-0.024650216102600098,
-0.04264489933848381,
0.024509301409125328,
-0.026767950505018234,
0.09544091671705246,
-0.06721024960279465,
0.018102342262864113,
-0.018531465902924538,
-0.02721196413040161,
0.005214688368141651,
0.03094632364809513,
-0.08467657119035721,
0.006663993466645479,
0.06828898191452026,
-0.009517649188637733,
-0.08511777967214584,
-0.03374364972114563,
-0.027803972363471985,
0.023442445322871208,
-0.0266878679394722,
0.006919735576957464,
0.010021806694567204,
-0.036597177386283875,
-0.00617715111002326,
0.014031169936060905,
0.0701993927359581,
-0.0393521748483181,
-0.007316326256841421,
0.014301341958343983,
0.02702433057129383,
0.03956086188554764,
0.060301244258880615,
-0.055976178497076035,
0.1338510662317276,
0.001156043028458953,
0.041097491979599,
-0.14731338620185852,
-0.0029199898708611727,
-0.00013599869271274656,
-0.0736226737499237,
0.03325321152806282,
-0.14085189998149872,
0.03928329795598984,
-0.011393381282687187,
0.008337186649441719,
0.022270601242780685,
-0.06819078326225281,
0.010874142870306969,
-0.049424681812524796,
0.019682565703988075,
-0.010403553955256939,
0.09375917166471481,
0.02362806536257267,
0.07171869277954102,
0.020774055272340775,
0.042299773544073105,
-0.06543327867984772,
0.11427047103643417,
0.05618273466825485,
-0.03619793802499771,
-0.07144389301538467,
5.301082792865114e-34,
0.014501710422337055,
-0.03433850780129433,
0.008394746109843254,
0.07597401738166809,
0.10349489003419876,
0.015405677258968353,
-0.032848604023456573,
-0.06884612143039703,
-0.046885162591934204,
-0.09671584516763687,
-0.011314226314425468,
-0.01856561005115509,
-0.06512365490198135,
-0.07238120585680008,
-0.02506783977150917,
-0.009671981446444988,
-0.0677078366279602,
-0.05653739720582962,
-0.06995690613985062,
-0.008146820589900017,
-0.01214279793202877,
0.059145353734493256,
-0.00256781792268157,
0.08436328917741776,
-0.0045662252232432365,
-0.07445189356803894,
0.01798633486032486,
0.060066550970077515,
0.017383728176355362,
0.04766349866986275,
-0.015692079439759254,
-0.04757498577237129,
-0.02762548439204693,
0.047303322702646255,
0.07723086327314377,
-0.07400372624397278,
0.011420260183513165,
-0.04891768470406532,
-0.016991885378956795,
0.026902154088020325,
-0.04760833457112312,
0.018312858417630196,
-0.02989778108894825,
0.0897020772099495,
-0.04281701147556305,
0.013710093684494495,
0.0396006740629673,
0.06410706043243408,
0.08556067198514938,
-0.04379606246948242,
-0.07834725081920624,
-0.06623218953609467,
-0.030430499464273453,
-0.005324682220816612,
-0.034603726118803024,
-0.062134772539138794,
0.008219441398978233,
0.04189149662852287,
0.10299007594585419,
0.021307796239852905,
0.0607219822704792,
-0.04500466585159302,
-0.0028528186958283186,
-0.06410374492406845,
-0.0048947567120194435,
0.028550991788506508,
-0.021970335394144058,
-0.006687256507575512,
0.09578950703144073,
-0.08069927245378494,
0.002758170710876584,
-0.026523113250732422,
0.08033037930727005,
0.013537789694964886,
-0.03719128668308258,
0.05603921413421631,
0.020577840507030487,
0.02021518349647522,
-0.10423598438501358,
-0.059956539422273636,
-0.0928533598780632,
-0.019149193540215492,
0.008638947270810604,
0.07607108354568481,
0.023537373170256615,
-0.03286019340157509,
-0.029357632622122765,
-0.06599190086126328,
0.08896324038505554,
-0.011197819374501705,
0.019649725407361984,
0.0985945537686348,
0.006205311976373196,
-0.13322098553180695,
-0.015043631196022034,
-1.1596729315441888e-34,
-0.02202794700860977,
0.022142373025417328,
-0.0908736065030098,
0.06232170760631561,
0.02226484753191471,
-0.03699196130037308,
0.025422628968954086,
0.03936171904206276,
0.051816947758197784,
0.01941952295601368,
0.04169097915291786,
-0.0668347030878067,
0.028993966057896614,
-0.04779044911265373,
0.016057901084423065,
0.11099212616682053,
0.13915076851844788,
0.04464653879404068,
0.01808364875614643,
0.0003248233115300536,
-0.027428222820162773,
0.03427209332585335,
-0.11964283138513565,
0.020802685990929604,
-0.024637149646878242,
0.04913446679711342,
-0.03343263268470764,
0.0007999022491276264,
-0.0363985113799572,
0.015618329867720604,
-0.03916076198220253,
-0.027130674570798874,
0.030908452346920967,
0.00839168019592762,
-0.019726410508155823,
0.06671995669603348,
0.06294506788253784,
-0.00662987632676959,
-0.048772092908620834,
0.10865209251642227,
0.077969029545784,
-0.03438835218548775,
-0.016370991244912148,
0.08795364946126938,
-0.007750320713967085,
-0.09498050808906555,
-0.07556591928005219,
0.10646194964647293,
-0.0030609527602791786,
-0.012251066043972969,
0.05219857394695282,
-0.03321979194879532,
0.057967476546764374,
-0.10663087666034698,
0.032691169530153275,
-0.009770980104804039,
0.047311775386333466,
-0.02411728724837303,
0.05368872731924057,
0.06182878091931343,
0.07617446780204773,
-0.05318167805671692,
-0.033945482224226,
0.03228505700826645,
-0.007170077878981829,
0.05959790572524071,
-0.056909944862127304,
-0.02985152043402195,
0.006446316838264465,
0.03801654651761055,
0.012191289104521275,
0.029834797605872154,
-0.006095391698181629,
-0.029733596369624138,
-0.09887736290693283,
0.009565076790750027,
0.04332743212580681,
0.042507629841566086,
0.06287199258804321,
-0.01998593844473362,
-0.03811212256550789,
-0.014080194756388664,
0.039666227996349335,
0.03266460821032524,
0.07517889142036438,
0.04624589905142784,
0.05244888737797737,
0.019929179921746254,
0.02101832628250122,
0.007519490085542202,
0.06198029965162277,
0.023592155426740646,
0.04938758164644241,
0.027339544147253036,
-0.01008431427180767,
-1.4285190808038806e-08,
0.030376587063074112,
-0.02963241934776306,
-0.035167571157217026,
0.02413598634302616,
0.0570375956594944,
0.007684706710278988,
0.12187618762254715,
-0.007570839952677488,
0.029319867491722107,
0.06720910966396332,
0.024405328556895256,
-0.011419138871133327,
0.03922741860151291,
0.024336550384759903,
0.04098387807607651,
0.03207016363739967,
-0.008450492285192013,
0.1041002869606018,
-0.03652212396264076,
0.010552185587584972,
-0.049762122333049774,
0.06643325090408325,
-0.04128921404480934,
-0.05123789608478546,
-0.029389763250947,
0.0248995590955019,
-0.04405771195888519,
0.1402818262577057,
0.014684601686894894,
-0.009909572079777718,
0.010877342894673347,
0.005315002519637346,
0.00048737594624981284,
-0.04477892816066742,
-0.06588546186685562,
0.005400381051003933,
-0.02504221349954605,
-0.010384864173829556,
-0.02279285155236721,
0.006243698764592409,
-0.059665076434612274,
0.024622157216072083,
0.08627490699291229,
0.044212888926267624,
-0.02827167697250843,
-0.019425155594944954,
-0.022057976573705673,
-0.03141951560974121,
0.043426185846328735,
0.018655214458703995,
0.07349660992622375,
0.028337983414530754,
0.018872670829296112,
0.07257463783025742,
0.003528063651174307,
-0.010571202263236046,
-0.01876663975417614,
0.02528848499059677,
-0.13014712929725647,
-0.061667099595069885,
0.013025691732764244,
0.00994929950684309,
-0.007341751828789711,
-0.06776775419712067
]
}
]
</code></pre>
<p>I'm trying to make ChatGPT understand what this is and generate C# code for me that uses a string as a parameter to search for the nearest strings by these vectors.</p>
<p>Example:</p>
<pre><code>var drive = searchInEmbeddings(“sena”);
</code></pre>
<p>And it should return Ayrton Senna.</p>
<p>Has someone here made something like this and can help me?</p>
<p>The Python code that I mentioned is:</p>
<pre><code>import json
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
with open('embeddings.json', 'r') as f:
data = json.load(f)
model = SentenceTransformer('all-MiniLM-L6-v2')
def find_most_similar(descriptions, query, top_n=3):
query_embedding = model.encode([query])
description_embeddings = np.array([desc['embedding'] for desc in descriptions])
similarities = cosine_similarity(query_embedding, description_embeddings)[0]
top_indices = np.argsort(similarities)[-top_n:][::-1]
return [{"codigo": descriptions[i]['codigo'], "descricao": descriptions[i]['descricao']} for i in top_indices]
query = "sena"
top_matches = find_most_similar(data, query, top_n=3)
print("Question:", query, "\nAnswers:\nNearest occurrences:\n", top_matches)
</code></pre>
<p>And return:</p>
<pre><code>$ python search.py
Question: sena
Answers:
Nearest occurrences:
[{'codigo': 3, 'descricao': 'Ayrton Senna'}, {'codigo': 31, 'descricao': 'Niki Lauda'}, {'codigo': 21, 'descricao': 'Kimi Räikkönen'}]
</code></pre>
<p>Thanks for the help.</p>
|
<python><c#><.net><word-embedding>
|
2024-03-27 19:35:41
| 1
| 621
|
Jean J. Michel
|
78,233,905
| 11,384,233
|
ERROR: Could not find a version that satisfies the requirement pip (from versions: none)
|
<p>I'm facing an installation issue with pip on CentOS while utilizing Python version 3.10.0. I attempted to install pip by executing the <code>get-pip.py</code> script. However, during the process, I encountered errors related to the SSL module, preventing the installation from completing successfully.</p>
<p>To proceed with the installation, I used the following command to retrieve the <code>get-pip.py</code> file:</p>
<pre class="lang-bash prettyprint-override"><code>wget https://bootstrap.pypa.io/pip/get-pip.py
</code></pre>
<p>When running the command python <code>get-pip.py</code>, I receive the following error:</p>
<pre class="lang-bash prettyprint-override"><code>[root@env lak]# python get-pip.py
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/
Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
ERROR: Could not find a version that satisfies the requirement pip (from versions: none)
ERROR: No matching distribution found for pip
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
</code></pre>
<p>How can I fix this issue and install pip on Python 3.10?</p>
|
<python><python-3.x><ssl><pip>
|
2024-03-27 18:46:02
| 1
| 6,782
|
Tharindu Lakshan
|
78,233,900
| 7,601,346
|
Am I mistakenly setting up my project to use system Python interpreter instead of virtual environment? How do I tell?
|
<p>I'm trying to set up my repository to use an older version of Python for compatibility.</p>
<p>I was able to select my system Python interpreter from this menu:</p>
<p><a href="https://i.sstatic.net/1haOW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1haOW.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/700N1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/700N1.png" alt="enter image description here" /></a></p>
<p>How do I tell if I'm using a virtual environment Python interpreter or if I'm using my system interpreter?</p>
<p>Because I need to install some packages from the requirements.txt and don't want to mess up my system Python.</p>
|
<python><python-3.x><pycharm><virtualenv>
|
2024-03-27 18:45:13
| 1
| 1,200
|
Shisui
|
78,233,826
| 8,863,970
|
Reset previous run for Unit tests for functions in a Jupyter notebook?
|
<p>I have the following simple function in a Jupyter Notebook:</p>
<pre><code>def add(a, b):
return a + b
</code></pre>
<p>and then I run some unit tests as follows in the same notebook:</p>
<pre><code>import unittest
class TestNotebook(unittest.TestCase):
def test_add(self):
self.assertEqual(add(2, 2), 4)
unittest.main(argv=[''], verbosity=2, exit=False)
</code></pre>
<p>I get the following output:</p>
<pre><code>test_add (__main__.TestNotebook.test_add) ... ok
test_drop_duplicates (__main__.TestSilver.test_drop_duplicates) ... ERROR
test_make_url_v1 (__main__.TestUrl.test_make_url_v1) ... ERROR
test_make_url_v2 (__main__.TestUrl.test_make_url_v2) ... ok
ERROR: test_make_url_v1 (__main__.TestUrl.test_make_url_v1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/ipykernel_1252/4074376083.py", line 9, in test_make_url_v1
date = datetime.date(2019, 12, 311)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: day is out of range for month
----------------------------------------------------------------------
Ran 4 tests in 0.130s
FAILED (errors=2)
</code></pre>
<p>The question is: as you can see, I only executed one test, <strong>test_add</strong>, but in the output I can still see some old (work-in-progress) tests that I ran an hour ago. How do I clear the log or memory so that only the test currently being run is printed in the logs?</p>
|
<python><unit-testing><jupyter-notebook><python-unittest>
|
2024-03-27 18:29:28
| 0
| 1,013
|
Saffik
|
78,233,643
| 7,658,051
|
AttributeError: Can't get attribute 'PySparkRuntimeError' as I try to apply .collect() to some RDD.map(...).distinct()
|
<p>I am a beginner with Spark.</p>
<p>I am running spark on local machine, with no cluster.</p>
<p>I have developed the following script.</p>
<p>The aim is to get the unique values of the keys "license".</p>
<p>Everything goes well past the lines</p>
<pre><code>licenses_collected = rdd.map(lambda x: x["license"]).collect()
unique_licenses = rdd.map(lambda x: x["license"]).distinct()
</code></pre>
<p>but when I apply the <code>.distinct()</code> method to the mapped RDD and then call <code>.collect()</code>, the traceback below is raised.</p>
<p>This is the line which raises the error</p>
<pre><code>unique_licenses_collected = rdd.map(lambda x: x["license"]).distinct().collect()
</code></pre>
<p>What is the problem?</p>
<h2>script</h2>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local").\
appName("prima applicazione spark").\
config("spark.ui.port", "4040").\
getOrCreate()
df_spark = spark.read.json(path_json)
rdd = df_spark.rdd
rdd_collected = rdd.collect()
"""Get the first two records"""
for line in rdd.take(2):
print(line)
print("\n\n")
""" Get the name of the licenses"""
licenses_collected = rdd.map(lambda x: x["license"]).collect()
unique_licenses = rdd.map(lambda x: x["license"]).distinct()
unique_licenses_collected = rdd.map(lambda x: x["license"]).distinct().collect()
</code></pre>
<h2>Traceback</h2>
<blockquote>
<p>AttributeError: Can't get attribute 'PySparkRuntimeError' on <module
'pyspark.errors.exceptions.base' from
'/opt/spark/python/lib/pyspark.zip/pyspark/errors/exceptions/base.py'></p>
</blockquote>
<p><a href="https://github.com/tommasosansone91/miscellaneous_logs/blob/main/canteget_PySparkRuntimeError_traceback.txt" rel="nofollow noreferrer">full traceback here</a></p>
<h2>programs versions</h2>
<pre><code>pyspark --version
version 3.4.2
python --version
Python 3.10.12
</code></pre>
|
<python><apache-spark><pyspark>
|
2024-03-27 17:46:53
| 2
| 4,389
|
Tms91
|
78,233,587
| 6,606,057
|
Unable to Create Polynomial Features for regression using numpy.plyfit -- AttributeError: 'numpy.ndarray' object has no attribute 'to_numpy'
|
<p>Excuse my ignorance, I have never used polynomial regression with numpy before.</p>
<p>I am attempting to use the variable "CPU_frequency" to create polynomial features, using three different polynomial degrees.</p>
<pre><code>import numpy
X = X.to_numpy().flatten()
f1 = np.polyfit(X, Y, 1)
p1 = np.poly1d(f1)
f3 = np.polyfit(X, Y, 3)
p3 = np.poly1d(f3)
f5 = np.polyfit(X, Y, 5)
p5 = np.poly1d(f5)
</code></pre>
<p>yields</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[74], line 3
1 import numpy
----> 3 X = X.to_numpy().flatten()
4 f1 = np.polyfit(X, Y, 1)
5 p1 = np.poly1d(f1)
AttributeError: 'numpy.ndarray' object has no attribute 'to_numpy'
</code></pre>
<p>I attempted changing <code>X.</code> to <code>X._</code> with no joy, and reminded Jupyter it was using numpy by calling import again.</p>
<p>Any help appreciated.</p>
|
<python><python-3.x><numpy><polynomials>
|
2024-03-27 17:33:56
| 2
| 485
|
Englishman Bob
|
78,233,520
| 1,862,861
|
Best way to iterate over variable that can be a single object or list/iterator of objects
|
<p>Suppose I have a Python function that takes in an argument that can either be, e.g., an <code>int</code> or list of <code>int</code>s and I want to loop over that argument:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Union
def myfunc(arg: Union[int, List[int]]) -> List[int]:
x = []
for value in arg:
x.append(2 * value)
return x
</code></pre>
<p>Now, obviously this will work if you do give it a list, but will fail with a <code>TypeError</code> if you give it an <code>int</code> because an <code>int</code> is not an iterable object.</p>
<p>In these situations I generally do something like:</p>
<pre class="lang-py prettyprint-override"><code>def myfunc(arg: Union[int, List[int]]) -> List[int]:
x = []
arglist = arg if isinstance(arg, list) else [arg]
for value in arglist:
x.append(2 * value)
return x
</code></pre>
<p>i.e., I put <code>arg</code> into a list if it is not a list and then iterate over that list.</p>
<p>I could instead create a function or class that will convert an object into a list/iterable if it isn't already one, e.g.,</p>
<pre class="lang-py prettyprint-override"><code>def make_iterator(x):
try:
for v in x:
yield v
except TypeError:
yield x
def myfunc(arg: Union[int, List[int]]) -> List[int]:
x = []
for value in make_iterator(arg):
x.append(2 * value)
return x
</code></pre>
<p>but I wondered if there is some Python built-in function or standard library module that already exists to do this?</p>
|
<python>
|
2024-03-27 17:19:45
| 0
| 7,300
|
Matt Pitkin
|
78,233,264
| 2,779,432
|
A way to warp an image based on a map
|
<p>I'm looking for a method where, given two images (A and B, with A being an RGB image and B being a greyscale image storing the warping information), I can warp A relative to B.
To give you some context, this is pretty much the same technique used to artificially create a stereo image from a single image given depth information. In this case A would be the source image and B would be the depth map.</p>
<p>I was able to achieve something similar using this code</p>
<pre><code>def map_pixels(input_image, input_map, N):
"""
Shifts pixels of the input_image to the right based on the map information and a shift factor N.
:param input_image: A (3, h, w) numpy array representing the RGB image.
:param input_map: A (h, w) numpy array representing mapping information, values from 0 (far) to 1 (near).
:param N: An integer representing the base number of pixels for the shift.
:return: A (3, h, w) numpy array representing the shifted RGB image.
"""
input_image = np.transpose(input_image, (2, 0, 1))
input_map = input_map.astype(np.float32) / 255.0
map_shifts = np.round(N * input_map).astype(int) # Calculate the shift amount for each pixel
# Initialize an array to hold the shifted image
shifted_image = np.zeros_like(input_image)
# Iterate through each channel of the image
for c in range(input_image.shape[0]): # For each color channel
channel = input_image[c, :, :] # Extract the current color channel
print('CHANNEL SHAPE ' + str(channel.shape))
# Iterate through each pixel in the channel
for y in range(channel.shape[0]): # For each row
for x in range(channel.shape[1]): # For each column
shift = map_shifts[y, x] # Determine how many pixels to shift
if x + shift < channel.shape[1]: # Check if the shifted position is within bounds (shift is a scalar here)
shifted_image[c, y, x + shift] = channel[y, x] # Shift the pixel
shifted_image = np.transpose(shifted_image, (1, 2, 0))
return shifted_image
</code></pre>
<p>The problem I'm facing here is that there is no filtering whatsoever: the pixels just get moved to their new location, leaving some broken areas of black pixels, and I would like to be able to add some smoothness to the shift.</p>
<p>I was wondering if anyone has any insight in this that they could share.</p>
<p>Note: This is only the function to shift the pixels to the right, I have 3 more functions for left, up and down that I want to be able to call separately.</p>
<p>Also I'm not invested in strictly using Numpy or ImageIO, any solution involving other systems like OpenCV is more than welcome</p>
<p>Thank you</p>
|
<python><image-processing><graphics><3d><textures>
|
2024-03-27 16:37:52
| 0
| 501
|
Francesco
|
78,232,975
| 13,518,907
|
RAG with Langchain and FastAPI: Stream generated answer and return source documents
|
<p>I have built a RAG application with Langchain and now want to deploy it with FastAPI. Generally it works: calling a FastAPI endpoint streams the answer of the LCEL chain. However, I want the answer to be streamed and, once streaming is done, the source documents to be returned. Here is the code, where streaming works when calling the endpoint. At the moment I am yielding the source_documents, but I don't want the user to see them; I would like to preprocess the source_documents before the user sees them:</p>
<pre><code># example endpoint call: `http://127.0.0.1:8000/rag_model_response?question=Welche%203%20wesentlichen%20Merkmale%20hat%20die%20BCMS%20Leitlinie%3F`
# this example call streams the response perfectly in the browser
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
embeddings = HuggingFaceEmbeddings(model_name="intfloat/multilingual-e5-large-instruct", model_kwargs={'device': "mps"})
db = FAISS.load_local("streamlit_vectorstores/vectorstores/db_maxiw_testfreitag", embeddings, allow_dangerous_deserialization=True)
retriever = db.as_retriever(search_kwargs={'k': cfg.STREAMLIT_VECTOR_COUNT, 'score_threshold': cfg.SCORE_THRESHOLD,'sorted': True}, search_type="similarity_score_threshold")
model_path = cfg.MIXTRAL_PATH
llm = build_llm(model_path) # loads a model from Llamacpp with streaming enabled
def rag_model_response(question: str):
start_time = time.time()
context = retriever.get_relevant_documents(question)
response_dict = {"question": question, "result": "", "source_documents": []}
rag_prompt = f"""<s> [INST] Du bist RagBot, ein hilfsbereiter Assistent. Antworte nur auf Deutsch:
{context}
{question}
Antwort: [/INST]
"""
result_content = ""
first_response = True
for resp in llm.stream(rag_prompt):
if resp:
result_content += resp
if first_response:
# Calculate and print time after the first batch of text is streamed
end_time = time.time()
elapsed_time = round(end_time - start_time, 1)
first_response = False
yield f"(Response Time: {elapsed_time} seconds)\n"
yield resp
if context:
# yield context  # stopped here (work in progress)
yield "\n\nQuellen:\n"
for i, doc in enumerate(context):
yield doc.metadata["source"].split("/")[-1] + ", Seite: " + str(doc.metadata["page"]+1) + "\n\n"
response_dict["source_documents"] = [{"source": doc.metadata["source"], "page": doc.metadata["page"]+1} for doc in context]
else:
yield "\n\nVorsicht, für die vorliegende Antwort wurden keine interne Quellen verwendet, da die Suche nach relevanten Dokumenten kein Ergebnis geliefert hat."
yield response_dict
app = FastAPI(
title="FastAPI for Database Management",
description="An API that handles user Vectordatabase creation or deletion",
version="1.0",)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
@app.get('/rag_model_response',response_class=JSONResponse)
async def main(question: str):
return StreamingResponse(rag_model_response(question), media_type='text/event-stream')
</code></pre>
<p>So my first question would be:</p>
<ul>
<li>How do I need to change my chain so that it also returns the retrieved documents from the retriever?</li>
<li>How can I also return these source_documents in my FastAPI endpoint response? It would be perfect if the generated answer gets streamed and the source documents are returned afterwards. They could be streamed as well, but the user should only see the streaming of the generated answer; once streaming is finished, I want to show the user the documents that were used to generate the answer.</li>
</ul>
<p>One alternative solution, which is not very effective I think, was that I just create a new endpoint that returns the source documents:</p>
<pre><code>@app.get('/source_documents')
async def source_documents(question: str):
source_docs = retriever.get_relevant_documents(question)
return source_docs
</code></pre>
<p>But with this, the search runs twice for every question: once for the chain and once for the retriever.</p>
<p>Thanks in advance!</p>
|
<python><fastapi><langchain><llama-cpp-python>
|
2024-03-27 15:50:44
| 1
| 565
|
Maxl Gemeinderat
|
78,232,806
| 17,523,373
|
How can I convert GeoTIFF to sqlite database in python
|
<p>I have a GeoTIFF file from which I want to extract latitude and longitude together with altitude and store them in an SQLite database; this database will later be used in a mobile application.
For now, this is the code that gives me the altitude for a given latitude and longitude:</p>
<pre><code>import rasterio
def get_elevation_from_geotiff(file_path, lon, lat):
dataset = rasterio.open(file_path)
transform = dataset.transform
col, row = ~transform * (lon, lat) # Inverse transform
col = int(col)
row = int(row)
elevation = dataset.read(1)[row, col]
dataset.close()
return elevation
def count_coordinates_in_geotiff(file_path):
dataset = rasterio.open(file_path)
num_coordinates = dataset.width * dataset.height
dataset.close()
return num_coordinates
# Usage example
geotiff_file = '/Users/pannam/Desktop/terrain_map/assets/nepal.tif'
# Biratnagar
longitude = 87.2662
latitude = 26.4840
# //write the latitude and longitude of the place you want to know the altitude of kathmandu
#Kathmandu
# longitude = 85.3240
# latitude = 27.7172
num_coordinates = count_coordinates_in_geotiff(geotiff_file)
print("Number of coordinates in the GeoTIFF file:", num_coordinates)
altitude = get_elevation_from_geotiff(geotiff_file, longitude, latitude)
print('Altitude at ({}, {}): {} meters'.format(longitude, latitude, altitude))
print('Altitude at ({}, {}): {} feet'.format(longitude, latitude, altitude * 3.28084))
</code></pre>
<p>This will print out the correct altitude.</p>
<p>How can I convert this GeoTIFF to SQLite with information like longitude, latitude and altitude?
Or is this approach even worth it? My objective is to color-code the map based on height, like a terrain awareness system.</p>
<p>Please take a look at <a href="https://www.maptiler.com/tools/hypsometry/#7.101409750538263/28.22/86.83" rel="nofollow noreferrer">Flood Simulator</a> , i want to do exactly something like this but for elevation</p>
<p>or Terrain map <a href="https://www.floodmap.net" rel="nofollow noreferrer">Elevavtion map</a></p>
|
<python><sqlite><openstreetmap><geotiff>
|
2024-03-27 15:24:48
| 0
| 731
|
Pannam
|
78,232,787
| 5,269,892
|
Python unexpected non-zero returncode when checking alias command set in .bashrc file
|
<p>I want to call an alias command set in a <em>.bashrc</em> file from Python. However, the return code of <code>source ~/.bashrc; command -v command</code> is for some reason 1 instead of 0, which is the case if the same command is executed directly in the shell. It should be noted that with a default command like <code>echo</code> instead of the alias command, Python correctly returns 0 as the return code.</p>
<p>To demonstrate the issue, I created a minimal example, where I use a regular shell script <em>define_alias.sh</em> instead of the <em>.bashrc</em> file:</p>
<p><strong>test_bash_command.py:</strong></p>
<pre><code>import os
import subprocess
path_home = os.path.expanduser('~')
proc = subprocess.Popen(['/bin/bash', '-c', 'source %s/define_alias.sh; command -v testcommand' % path_home], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc_stdout, proc_stderr = proc.communicate()
proc_returncode = proc.returncode
print("Returncode: %s" % proc_returncode)
print("Stdout: %s" % proc_stdout)
print("Stderr: %s" % proc_stderr)
</code></pre>
<p><strong>define_alias.sh:</strong></p>
<pre><code># .bashrc
# imagine this is a .bashrc file
echo "I'm in the shell script"
alias testcommand='echo THIS IS A TEST'
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>wssadmin /global/wss/fs01/fk_bpr/waif_v8
$ python test_bash_command.py
Returncode: 1
Stdout: b"I'm in the shell script\n"
Stderr: b''
wssadmin /global/wss/fs01/fk_bpr/waif_v8
$ env -i bash --norc --noprofile
bash-4.4$ bash -c "source /home/wssadmin/define_alias.sh; command -v testcommand; echo $?"
I'm in the shell script
0
bash-4.4$ source /home/wssadmin/define_alias.sh; command -v testcommand; echo $?
I'm in the shell script
alias testcommand='echo THIS IS A TEST'
0
bash-4.4$ exit
exit
wssadmin /global/wss/fs01/fk_bpr/waif_v8
$ source /home/wssadmin/define_alias.sh; command -v testcommand; echo $?
I'm in the shell script
alias testcommand='echo THIS IS A TEST'
0
</code></pre>
<p><strong>Question: Why is the returncode given to python not 0 like in the bash case? How can I get a returncode of 0 for the given command and additionally retrieve the stdout and stderr of the <code>command -v testcommand</code> (instead of only having that of the <code>echo</code> command)?</strong></p>
<p>The <code>source</code> command itself is executed, which can be seen in the printed <code>stdout</code> variable. Please do note that I would not like to change the <code>/bin/bash -c</code> nor the <code>shell=False</code> parameter in <code>subprocess.Popen</code> as otherwise I would get issues with other commands called in other python scripts by a <code>run_bash_cmd</code> function wrapped around the <code>subprocess.Popen</code>.</p>
<p>Your help is much appreciated!</p>
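<p>One likely explanation, worth verifying against your bash version: non-interactive shells (which is what <code>bash -c</code> starts) neither expand nor report aliases unless the <code>expand_aliases</code> shell option is set, so <code>command -v testcommand</code> exits 1 even though the <code>source</code> ran. Note also that in the interactive comparison above, <code>echo $?</code> sits inside double quotes, so the outer shell may have expanded <code>$?</code> before the inner bash ran. A sketch that enables alias lookup explicitly:</p>

```python
import subprocess

# Non-interactive bash ignores aliases unless expand_aliases is set; aliases
# also only expand on lines parsed after their definition, hence one command
# per line in the script below.
script = """
shopt -s expand_aliases
alias testcommand='echo THIS IS A TEST'
command -v testcommand
"""
proc = subprocess.run(
    ["/bin/bash", "-c", script],
    capture_output=True, text=True,
)
```

<p>With the option set, <code>proc.returncode</code> should be 0 and stdout should contain the alias definition; adding <code>shopt -s expand_aliases</code> at the top of <em>define_alias.sh</em> would be the equivalent fix for the original setup.</p>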
|
<python><bash><shell><subprocess><alias>
|
2024-03-27 15:22:17
| 1
| 1,314
|
silence_of_the_lambdas
|
78,232,689
| 6,524,326
|
Reducing Overhead When Calling Python Functions from Node.js Using spawnSync
|
<p>I am using Python 3.10 and have a file named <code>utilities.py</code> in my project directory that contains several functions and import statements. I import its contents into other Python files, as shown in the snippet below with the content of the <code>validate_user.py</code> file. I do this in a setup where I call Python functions from the JS environment with the help of the <code>spawnSync</code> function in Node.js (as shown below) to pass arguments from the JS environment to the Python environment. The problem is that the process takes too long to execute.</p>
<p>For example, validation of a user based on credentials stored in a sqlite DB (see the validate_user() function below) takes about 12-13 seconds. In this example, the DB is small, just containing data for a single user. However, when I use an alternate approach to validate a user based on the credentials saved in my .bashrc (see the validate_user_env() function below) the validation is almost instantaneous!</p>
<p>Furthermore, when I try bringing all the necessary functions and library imports in the <code>validate_user.py</code> file from <code>utilities.py</code> then the DB way of validation is also instantaneous!</p>
<p>As a last experiment, I copied the import statements (25 or so) from <code>utilities.py</code> to <code>validate_user.py</code> just to check how they affect the timing, and this time the validation again took longer, which indicates that the import statements in <code>validate_user.py</code> are the main reason for the delay.</p>
<p>How can I resolve this? In my real project, I call many Python functions distributed in several files using the <code>spawnSync</code> approach described earlier. Many of them use common functions defined in other files that also contain multiple import statements, so it does not make sense to duplicate codes across different files. What could be a good strategy to solve this?</p>
<pre><code>// JS code to call Python
function callSync(PYTHON_PATH, script, args, options) {
const result = spawnSync(PYTHON_PATH, [script, JSON.stringify(args)]);
if (result.status === 0) {
const ret = result.stdout.toString();
if (!(ret instanceof Error)) {
return ret;
} else {
return {
error: ret.message
};
}
} else if (result.status === 1) {
return {
error: result.stderr.toString()
};
} else {
return {
error: result.stdout.toString()
};
}
}
</code></pre>
<p>Below is the content of <code>validate_user.py</code>:</p>
<pre><code>
import argparse
import json
import os
import utilities as ut
## This function uses a DB to validate
def validate_user(args):
## Get the arguments from JS
username = args.get('username')
password = args.get('password')
res = ut.validate_user_password(username, password)
result = {
"greeting": "Hello from validate_user.py!",
"code": res["code"],
"message": res["msg"]
}
print(json.dumps(result))
## This function uses env variables to validate
def validate_user_env(args):
## Get the arguments from JS
username_in = args.get('username')
password_in = args.get('password')
## Get the credentials from env vars
username = os.getenv("my_username")
password = os.getenv("my_password")
## validate and return
if username_in == username and password_in == password:
result = {"greeting": "Hello from validate_user.py!", "code": "success", "message": "Authentication successful"}
else:
result = {"greeting": "Hello from validate_user.py!", "code": "failure", "message": "Invalid username or password"}
print(json.dumps(result))
## Set up argument parser object and fetch the args
parser = argparse.ArgumentParser()
parser.add_argument('json_args')
args = parser.parse_args()
json_args = json.loads(args.json_args)
validate_user(json_args)
</code></pre>
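<p>Since each <code>spawnSync</code> call starts a fresh interpreter, the imports in <code>utilities.py</code> are re-executed on every call, which matches the timings described. A common workaround is a single long-lived Python process that pays the import cost once and answers requests over stdin/stdout (e.g. as JSON lines); Node's non-blocking <code>spawn</code> with a persistent child works the same way. A stdlib-only sketch of the pattern, with the parent standing in for the JS side (all names here are illustrative):</p>

```python
import json
import subprocess
import sys

# Child: pays the import cost once, then answers JSON-line requests forever.
WORKER = r"""
import json, sys
# ... the heavy imports from utilities.py would go here, executed once ...
for line in sys.stdin:
    req = json.loads(line)
    resp = {"echo": req["args"], "code": "success"}
    print(json.dumps(resp), flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", WORKER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def call(args):
    """Send one request to the persistent worker and read one reply line."""
    proc.stdin.write(json.dumps({"args": args}) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

first = call({"username": "alice"})
second = call({"username": "bob"})
proc.stdin.close()
proc.wait()
```

<p>With this design the per-call cost is one pipe round-trip instead of interpreter startup plus 25 imports.</p>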
|
<python>
|
2024-03-27 15:06:06
| 2
| 828
|
Wasim Aftab
|
78,232,655
| 2,610,841
|
How to Configure Ray to Use Standard Python Logger for Multiprocessing?
|
<p>I am trying to use Ray to improve the speed of a process and I want the log messages to be passed to the standard Python logger. This way, the application can handle formatting, filtering, and saving the log messages. However, when I use Ray, the log messages are not formatted according to my logger configuration and are not passed back to the root logger. I tried setting <code>log_to_driver=True</code> and <code>configure_logging=True</code> in <code>ray.init()</code>, but it didn't solve the problem. How can I configure Ray to use the standard Python logger for multiprocessing?</p>
<p>Here is an example that should demonstrate the issue:</p>
<pre><code>from ray.util.multiprocessing import Pool
import pathlib
import logging
import logging.config
import json
def setup_logging(config_file: pathlib.Path):
with open(config_file) as f_in:
config = json.load(f_in)
logging.config.dictConfig(config)
logger = logging.getLogger(__name__)
config_file = pathlib.Path(__file__).parent / "log_setup/config_logging.json"
setup_logging(config_file=config_file)
def f(index):
logger.warning(f"index: {index}")
return (index, "model")
if __name__ == "__main__":
logger.warning("Starting")
pool = Pool(1)
results = pool.map(f, range(10))
print(list(results))
</code></pre>
<p>where I have the config of the logger as:</p>
<pre><code>{
"version": 1,
"disable_existing_loggers": false,
"formatters": {
"detailed": {
"format": "[%(levelname)s|%(name)s|%(module)s|L%(lineno)d] %(asctime)s: %(message)s",
"datefmt": "%Y-%m-%dT%H:%M:%S%z"
}
},
"handlers": {
"stdout": {
"class": "logging.StreamHandler",
"level": "INFO",
"formatter": "detailed"
}
},
"loggers": {
"root": {
"level": "DEBUG",
"handlers": [
"stdout"
]
}
}
}
</code></pre>
<p>If I just use the python map, I would get the following printed:</p>
<pre><code>[WARNING|__main__|ray_trial|L28] 2024-03-27T15:14:21+0100: Starting
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 0
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 1
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 2
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 3
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 4
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 5
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 6
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 7
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 8
[WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 9
</code></pre>
<p>But when I use Ray I get:</p>
<pre><code>2024-03-27 14:54:55,064 INFO worker.py:1743 -- Started a local Ray instance. View the dashboard at 127.0.0.1:8265
(PoolActor pid=43261) index: 0
(PoolActor pid=43261) index: 1
(PoolActor pid=43261) index: 2
(PoolActor pid=43261) index: 3
(PoolActor pid=43261) index: 4
(PoolActor pid=43261) index: 5
(PoolActor pid=43261) index: 6
(PoolActor pid=43261) index: 7
(PoolActor pid=43261) index: 8
(PoolActor pid=43261) index: 9
</code></pre>
|
<python><logging><multiprocessing><ray>
|
2024-03-27 15:00:53
| 3
| 1,661
|
gabrown86
|
78,232,522
| 14,179,793
|
boto3 ecs.execute_command: Task Identifier is invalid
|
<p>I have defined a task in AWS ECS. I have copied the task ARN into the API call:</p>
<pre><code>rsp = ecs.execute_command(
container='test',
command=json.dumps(event),
task='arn:aws:ecs:##-####-#:############:task-definition/test:1',
interactive=True
)
</code></pre>
<p>I receive the following error complaining about the task identifier:</p>
<pre><code>botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameterException) when calling the ExecuteCommand operation: Task Identifier is invalid
</code></pre>
<p>I am not sure what is wrong as the task identifier has been copied from the AWS web interface where the task was created. Am I just missing something extremely obvious?</p>
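<p>One thing worth checking: the ARN in the call is a <em>task definition</em> ARN (its resource part starts with <code>task-definition/</code>), while <code>execute_command</code> expects a <em>running task</em> ARN or ID, the kind returned by <code>list_tasks</code> or <code>run_task</code>, whose resource part starts with <code>task/</code>. A small stdlib-only check of the distinction (the ARNs below are made up):</p>

```python
def is_task_arn(arn: str) -> bool:
    """True for a running-task ARN, False for a task-definition ARN."""
    # ARN layout: arn:partition:service:region:account:resource
    resource = arn.split(":", 5)[-1]   # e.g. 'task/my-cluster/0123abcd'
    return resource.startswith("task/")

task_def_arn = "arn:aws:ecs:eu-west-1:123456789012:task-definition/test:1"
running_arn = "arn:aws:ecs:eu-west-1:123456789012:task/my-cluster/0123456789abcdef"
```

<p>If that is the issue, the fix would be to pass a task ARN obtained from a running task (e.g. via <code>ecs.list_tasks</code>), not the definition's ARN from the console.</p>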
|
<python><amazon-web-services><boto3><amazon-ecs>
|
2024-03-27 14:40:47
| 1
| 898
|
Cogito Ergo Sum
|
78,232,434
| 1,305,688
|
Seaborn boxplot color outliers by hue variable using seaborn =< 0.12.2
|
<p>I would like to color my boxplot-outliers in my Seaborn Boxplot when using a hue variables. Something along the line of this, from <a href="https://seaborn.pydata.org/generated/seaborn.boxplot.html#" rel="nofollow noreferrer">seaborn boxplot examples</a>, but I am stuck on seaborn ver. 0.12.2</p>
<p><a href="https://i.sstatic.net/AwFcG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AwFcG.png" alt="sns.boxplot(data=titanic, x="class", y="age", hue="alive", fill=False, gap=.1)" /></a></p>
<p>As it stands, I get the plot below. What should I change? Is this even possible in this seaborn version? Unfortunately, updating my working environment is currently not an option for me. Thanks.</p>
<p><a href="https://i.sstatic.net/NM2g7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NM2g7.png" alt="enter image description here" /></a></p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
# Sample data
data = sns.load_dataset('tips')
# Boxplot with outliers colored by hue variable ('sex' in this case)
sns.boxplot(data=data, x='day', y='total_bill', hue='sex')
plt.show()
</code></pre>
|
<python><plot><seaborn>
|
2024-03-27 14:26:55
| 0
| 8,018
|
Eric Fail
|
78,232,352
| 6,243,129
|
Error decoding data matrix code using pylibdmtx in Python
|
<p>I am trying to decode Data Matrix codes using <code>pylibdmtx</code>. I have the below image</p>
<p><a href="https://i.sstatic.net/l4VRO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l4VRO.png" alt="enter image description here" /></a></p>
<p>and using the below code:</p>
<pre><code>import cv2
from pylibdmtx import pylibdmtx
import time
import os
image = cv2.imread('file1.png', cv2.IMREAD_UNCHANGED)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
t1 = time.time()
msg = pylibdmtx.decode(thresh)
print(msg)
if msg:
print(msg[0].data.decode('utf-8'))
t2 = time.time()
print(t2-t1)
</code></pre>
<p>It prints nothing. Is there anything I am missing in the code? I tried some <a href="https://www.dynamsoft.com/barcode-reader/barcode-types/datamatrix/" rel="nofollow noreferrer">data matrix decoder online</a>, and they were able to decode it so I am sure the image is correct.</p>
|
<python><qr-code><datamatrix>
|
2024-03-27 14:14:51
| 1
| 7,576
|
S Andrew
|
78,232,346
| 2,090,481
|
Invalid websocket upgrade
|
<p>I've set up a server with the help of NiceGUI and Nginx on a VPS. The requests are coming through a subdomain and routed correctly: The server receives the request and prints the html elements.</p>
<p>However, upon making use of the websockets, I can see the following error in my server's output:
<code>ERROR:engineio.server:Invalid websocket upgrade (further occurrences of this error will be logged with level INFO)</code>.</p>
<p>Conversely, my browser shows <code>manager.js:108 WebSocket connection to 'wss://{I-hid-this}/_nicegui_ws/socket.io/?client_id=ce259c07-9781-4739-9faa-051f24e911bd&EIO=4&transport=websocket' failed</code>.</p>
<p>The same setup, running it <code>on_air</code> <strong>works perfectly</strong>.</p>
<p>Here's my nginx config for my subdomain:</p>
<pre><code>server {
listen 80;
server_name {I-hid-this};
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
server{
# SSL configuration
listen 443 ssl;
server_name {I-hid-this};
ssl_certificate /etc/nginx/ssl/{I-hid-this}_ssl_certificate.cer;
ssl_certificate_key /etc/nginx/ssl/_.{I-hid-this}_private_key.key;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
</code></pre>
<p>I tried adding sockets on extra ports, but it made no difference.</p>
<p>The main problem I have with this is that I can't find any specific documentation on the posted error.</p>
<p><strong>Update: I logged the headers received and it's quite notable that the <code>transport</code> is set to <code>websocket</code>, however there's no <code>HTTP_UPGRADE</code> passed, and that's what kills the logic. Moreover, it seems my local working server receives <code>ws</code> as scheme whereas my non-working VPS receives <code>https</code>. I hope this helps.</strong></p>
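<p>The missing <code>HTTP_UPGRADE</code> is consistent with the posted config: neither <code>server</code> block forwards the WebSocket handshake headers, and nginx talks HTTP/1.0 to the upstream by default, dropping <code>Upgrade</code>/<code>Connection</code>. A sketch of the additions for the <code>location /</code> block (adapt paths and names to your setup):</p>

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;                   # WebSockets require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;   # forward the handshake headers
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

<p>The same three added directives belong in both the port-80 and the SSL block.</p>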
|
<python><websocket><web-frameworks><nicegui>
|
2024-03-27 14:13:29
| 1
| 402
|
j4nSolo
|
78,232,318
| 19,366,064
|
Pytest: in memory data doesn't persist through fixture
|
<p>Here is my project setup</p>
<pre><code>project/
app.py
test_app.py
</code></pre>
<p>app.py</p>
<pre><code>from pydantic import BaseModel
from sqlalchemy.orm import Session
from fastapi import Depends, FastAPI, HTTPException
class UserCreateModel(BaseModel):
username: str
password: str
def get_database_session():
yield
class UserRepo:
def create_user(self, user_create_model: UserCreateModel, session: Session):
pass
def get_user_by_id(self, id: int, session: Session):
pass
def create_app():
app = FastAPI()
@app.post("/api/v1/users", status_code=201)
async def create_user(
user_create_model: UserCreateModel, user_repo: UserRepo = Depends(), session=Depends(get_database_session)
):
user = user_repo.create_user(user_create_model=user_create_model, session=session)
return user
@app.get("/api/v1/users/{id}", status_code=200)
async def get_user_by_id(id: int, user_repo: UserRepo = Depends(), session=Depends(get_database_session)):
user = user_repo.get_user_by_id(id=id, session=session)
if not user:
raise HTTPException(status_code=404)
return user
return app
app = create_app()
</code></pre>
<p>test_app.py</p>
<pre><code>from app import UserCreateModel, UserRepo, create_app, get_database_session
from dataclasses import dataclass
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pytest import fixture
from sqlalchemy.orm import Session
@fixture(scope="session")
def app():
def _get_database_session():
return True
app = create_app()
app.dependency_overrides[get_database_session] = _get_database_session
yield app
@fixture(scope="session")
def client(app: FastAPI):
client = TestClient(app=app)
yield client
@fixture(scope="function")
def user_repo(app: FastAPI):
print("created")
@dataclass
class User:
id: int
username: str
password: str
class MockUserRepo:
def __init__(self):
self.database = []
def create_user(self, user_create_model: UserCreateModel, session: Session) -> User | None:
if session:
user = User(
id=len(self.database) + 1,
username=user_create_model.username,
password=user_create_model.password,
)
self.database.append(user)
return user
def get_user_by_id(self, id: int, session: Session) -> User | None:
print(self.database)
if session:
for user in self.database:
if user.id == id:
return user
return None
app.dependency_overrides[UserRepo] = MockUserRepo
yield
def test_create_user(client: TestClient, user_repo):
data = {"username": "mike", "password": "123"}
res = client.post("/api/v1/users", json=data)
assert res.status_code == 201
assert res.json()["username"] == "mike"
assert res.json()["password"] == "123"
assert "id" in res.json()
def test_get_user(client: TestClient, user_repo):
data = {"username": "nick", "password": "123"}
res = client.post("/api/v1/users", json=data)
assert res.status_code == 201
user = res.json()
user_id = user["id"]
res = client.get(f"/api/v1/users/{user_id}")
assert res.status_code == 200
</code></pre>
<p>For test_get_user(), I first make a POST request to append {"username": "nick", "password": "123"} to self.database, then I follow up with a GET request, and yet self.database is still empty even though the fixture is set to function scope. Does anyone know what is happening?</p>
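<p>A likely cause: <code>app.dependency_overrides[UserRepo] = MockUserRepo</code> registers the <em>class</em> as the override, so the framework calls it and builds a fresh <code>MockUserRepo</code> (with an empty <code>database</code>) for every request. Registering a callable that returns one pre-built instance keeps the state across requests. The difference can be shown without FastAPI at all; <code>resolve</code> below is a stand-in for what the framework does per request:</p>

```python
class MockUserRepo:
    def __init__(self):
        self.database = []

def resolve(dependency):
    # Stand-in for per-request dependency resolution: call the override.
    return dependency()

# Override with the class: every "request" gets a fresh, empty repo.
fresh_a = resolve(MockUserRepo)
fresh_a.database.append("nick")
fresh_b = resolve(MockUserRepo)      # state from fresh_a is gone

# Override with a callable returning one shared instance: state persists.
repo = MockUserRepo()
shared_a = resolve(lambda: repo)
shared_a.database.append("nick")
shared_b = resolve(lambda: repo)     # same object, same database
```

<p>In the fixture this would be <code>repo = MockUserRepo()</code> followed by <code>app.dependency_overrides[UserRepo] = lambda: repo</code>.</p>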
|
<python><unit-testing><pytest><fastapi><fixtures>
|
2024-03-27 14:09:51
| 1
| 544
|
Michael Xia
|
78,232,295
| 14,230,633
|
Manually set values shown in legend for continuous variable of seaborn/matplotlib scatterplot
|
<p>Is there a way to manually set the values shown in the legend of a seaborn (or matplotlib) scatterplot when the legend contains a continuous variable (hue)?</p>
<p>For example, in the plot below I might like to show the colors corresponding to values of <code>[0, 1, 2, 3]</code> rather than <code>[1.5, 3, 4.5, 6, 7.5]</code></p>
<pre><code>np.random.seed(123)
x = np.random.randn(500)
y = np.random.randn(500)
z = np.random.exponential(1, 500)
fig, ax = plt.subplots()
hue_norm = (0, 3)
sns.scatterplot(
x=x,
y=y,
hue=z,
hue_norm=hue_norm,
palette='coolwarm',
)
ax.grid()
ax.set(xlabel="x", ylabel="y")
ax.legend(title="z")
sns.despine()
</code></pre>
<p><a href="https://i.sstatic.net/E6ssv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E6ssv.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><plot><seaborn><visualization>
|
2024-03-27 14:06:37
| 3
| 567
|
dfried
|
78,232,223
| 22,221,987
|
How to generate a single key-press event signal with tkinter.bind method
|
<p>I have a simple ui, which should print every key that I've pressed.<br />
I need to generate signal <strong>once</strong> per real key-press.<br />
How can I achieve that?<br />
In default, tkinter generates signal once, than waits about 0.5 sec and starts generate signal in loop, until <em>KeyRelease</em> event happens.</p>
<pre><code>
if __name__ == '__main__':
from tkinter import Tk, Frame
root = Tk()
frame = Frame(root, width=200, height=200, bg="white")
frame.pack()
def key_handler(event):
print(event.char, event.keysym, event.keycode)
return 'break'
root.bind('<Key>', key_handler)
root.mainloop()
</code></pre>
<p>Using 'break' at the end of the bound function doesn't stop the next event call.</p>
<p>In addition (because this is a minor question, too small for a separate post), how can I create these key sequences?:</p>
<ul>
<li>"Shift" and some digit</li>
<li>some digit and '+' or '-' symbol</li>
</ul>
<p><strong>UPD</strong>:
I don't really want to do this:</p>
<pre><code>def key_handler(event):
global eve
if eve != event.keysym:
eve = event.keysym
print(event.char, event.keysym, event.keycode)
</code></pre>
<p>because the signal is still processed when I want to suppress it entirely. But I don't really know how much impact the dummy signal has on the program overall, so maybe this solution makes sense?</p>
<p><strong>UPD with some solutions from users:</strong><br />
I've appended the answer to the question body because I accidentally closed the question before adding my answer, so here it is:</p>
<p>Here is the example with <strong>bind/unbind solution</strong>:</p>
<pre><code>class KeyHandler:
def __init__(self, key: str):
self.func_id = root.bind(f'<KeyPress-{key}>', self.key_press)
root.bind(f'<KeyRelease-{key}>', self.key_release)
def key_release(self, event: Event):
print('released', event.char)
self.func_id = root.bind(f'{event.char}', self.key_press)
def key_press(self, event: Event):
print('pressed', event.char)
root.unbind(f'{event.char}', self.func_id)
KeyHandler('1')
KeyHandler('2')
KeyHandler('3')
KeyHandler('4')
root.mainloop()
</code></pre>
<p>And here is the example with simple <strong>state check solution</strong> (handling all events which are connected with current key, may cause event spam):</p>
<pre><code>keys = []
def key_handler(event: Event):
global keys
if event.type is EventType.KeyPress:
if not (event.keycode in keys):
keys.append(event.keycode)
print('pressed', event.char)
elif event.type is EventType.KeyRelease:
if event.keycode in keys:
print('released', event.char)
keys.remove(event.keycode)
root.bind('<KeyPress-9>', lambda e: key_handler(e))
root.bind('<KeyRelease-9>', lambda e: key_handler(e))
</code></pre>
|
<python><python-3.x><user-interface><tkinter><signals>
|
2024-03-27 13:56:45
| 1
| 309
|
Mika
|
78,232,037
| 4,721,937
|
How to use dictConfig to configure the logging handler extending QueueHandler?
|
<p>TL;DR <em>dictConfig</em> does not work with custom <em>QueueHandler</em> implementations in Python 3.12</p>
<p>Following the <a href="https://docs.python.org/3/howto/logging-cookbook.html#subclass-queuehandler" rel="nofollow noreferrer">logging cookbook</a>, I implemented a custom QueueHandler which uses ZMQ to send log records to a listener running in another process.</p>
<p>Here is the listener code which reads messages from the socket and passes them to the handler:</p>
<pre class="lang-py prettyprint-override"><code># zmq-logging.server.py
DEFAULT_ADDR = 'tcp://localhost:13231'
_context = zmq.Context()
atexit.register(_context.destroy, 0)
class ZMQQueueListener(logging.handlers.QueueListener):
def __init__(self, address, *handlers):
self.address = address
socket = _context.socket(zmq.PULL)
socket.bind(address)
super().__init__(socket, *handlers)
def dequeue(self, block: bool) -> logging.LogRecord:
data = self.queue.recv_json()
if data is None:
return None
return logging.makeLogRecord(data)
def enqueue_sentinel(self) -> None:
socket = _context.socket(zmq.PUSH)
socket.connect(self.address)
socket.send_json(None)
def stop(self) -> None:
super().stop()
self.queue.close(0)
if __name__ == '__main__':
listener = ZMQQueueListener(DEFAULT_ADDR, logging.StreamHandler())
listener.start()
print('Press Ctrl-C to stop.')
try:
while True: time.sleep(0.1)
finally:
listener.stop()
</code></pre>
<p>Here is the handler code which enqueues records in the zmq queue.</p>
<pre class="lang-py prettyprint-override"><code># zmq-logging-handler.py
_context = zmq.Context()
class ZMQQueueHandler(logging.handlers.QueueHandler):
def __init__(self, address, ctx=None):
self.ctx = ctx or _context
socket = self.ctx.socket(zmq.PUSH)
socket.connect(address)
super().__init__(socket)
def enqueue(self, record: logging.LogRecord) -> None:
self.queue.send_json(record.__dict__)
def close(self) -> None:
return self.queue.close()
</code></pre>
<p>And a dictionary config I used to configure the logger</p>
<pre class="lang-py prettyprint-override"><code>config = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'full': {
'format': "%(asctime)s %(levelname)-8s %(name)s %(message)s",
'datefmt': "%Y-%m-%d %H:%M:%S"
}
},
'handlers': {
'zmq-logging': {
'class': 'zmq-logging-handler.ZMQQueueHandler',
'formatter': 'full',
'level': 'DEBUG',
'address': DEFAULT_ADDR
}
},
'loggers': {
'': {
'level': 'DEBUG',
'propagate': False,
'handlers': ['zmq-logging']
}
}
}
</code></pre>
<p>However, it no longer works in Python 3.12. The configuration for the <em>QueueHandler</em>s changed as described in the <a href="https://docs.python.org/3/library/logging.config.html#configuring-queuehandler-and-queuelistener" rel="nofollow noreferrer">logging.config</a>. The handler config requires additional <em>queue</em>, <em>listener</em> and <em>handlers</em> parameters which I do not know how to use with my class model. Should I abandon extending <em>QueueHandler</em> or is there a way to fix the config dict?
I don't understand how the new <em>listener</em> property of the <em>QueueHandler</em> can be used if the listener and the handler need to be instantiated in separate processes.</p>
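<p>One workaround worth trying (hedged; check the behavior on your exact 3.12.x): the special-casing of <code>queue</code>/<code>listener</code>/<code>handlers</code> keys applies when the handler is configured via the <code>class</code> key and resolves to a <code>QueueHandler</code> subclass; configuring the handler through the <code>()</code> factory key instead requests plain instantiation and sidesteps it, so the existing constructor keeps working. Note also that a hyphenated file name like <em>zmq-logging-handler.py</em> is not importable as a dotted path, so an underscore module name is assumed below:</p>

```python
# Config sketch using the '()' factory key instead of 'class'.
# 'zmq_logging_handler' is an assumed importable module name standing in
# for the hyphenated file name used above.
DEFAULT_ADDR = "tcp://localhost:13231"

config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "full": {
            "format": "%(asctime)s %(levelname)-8s %(name)s %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S",
        }
    },
    "handlers": {
        "zmq-logging": {
            # '()' means: call this factory with the remaining keys as kwargs,
            # bypassing the QueueHandler-specific config machinery.
            "()": "zmq_logging_handler.ZMQQueueHandler",
            "formatter": "full",
            "level": "DEBUG",
            "address": DEFAULT_ADDR,
        }
    },
    "loggers": {
        "": {"level": "DEBUG", "propagate": False, "handlers": ["zmq-logging"]}
    },
}
```

<p>The new <code>listener</code> key is aimed at the single-process case where <code>dictConfig</code> starts the listener for you; with the listener in another process it can simply be left unused.</p>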
|
<python><python-logging><python-3.12>
|
2024-03-27 13:25:04
| 0
| 2,965
|
warownia1
|
78,232,013
| 3,834,415
|
How do I obtain a raw json string from an environment variable in python?
|
<p>In BASH, I can store JSON into an environment variable via jq:</p>
<pre><code>export FOO=$(curl ... | jq -Rs)
</code></pre>
<p>Now, if I load that into python:</p>
<pre><code>bar=os.environ["FOO"]
</code></pre>
<p><code>bar</code> will have something like: <code>{\\"this\\":\\"that\\"}</code>, which is a problem when loading with <code>json</code>:</p>
<pre><code>json.loads(bar) # error
</code></pre>
<hr />
<p>I have tried a few things including <code>repr</code>, <code>rf'{os.environ["FOO"]}'</code> and so on, but there doesn't seem to be an internally managed way to drop the extra slashes.</p>
<p>How do I drop the extra slashes via string functions? I'd prefer not to simply replace them with a single slash, as I might have to touch that code again sometime in the future.</p>
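<p>The doubled backslashes suggest the value is JSON-encoded twice: <code>jq -Rs</code> reads raw input and emits it as a single JSON <em>string</em>, so the environment variable holds a quoted, escaped string rather than the object itself. If changing the shell side (e.g. dropping <code>-Rs</code> or using <code>jq -c</code>) isn't an option, decoding twice recovers the object; the first <code>json.loads</code> unwraps the outer string, the second parses the JSON inside it. The sample value below is illustrative:</p>

```python
import json

# What the environment variable might hold after `jq -Rs`: a JSON string
# whose content is itself JSON.
bar = '"{\\"this\\":\\"that\\"}"'

inner = json.loads(bar)   # unwraps to the plain str '{"this":"that"}'
obj = json.loads(inner)   # parses the actual object
```

<p>This avoids hand-replacing backslashes, so it keeps working if the escaping details change.</p>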
|
<python><json><environ>
|
2024-03-27 13:20:50
| 2
| 31,690
|
Chris
|
78,231,896
| 7,259,176
|
Powerset where each element can be either "positive" or "negative"
|
<p>I'm looking for a simple way to generate the powerset of an iterable where each element can be "positive" or "negative", but not both in the same combination.
There are no duplicates in the iterable, and a combination contains either the element or its negative.
Order doesn't matter.</p>
<p>Here is an example with a list of <code>int</code>'s:</p>
<p>The iterable:</p>
<pre class="lang-py prettyprint-override"><code>elements = [-2, 1]
</code></pre>
<p>The desired powerset:</p>
<pre class="lang-py prettyprint-override"><code>[]
[-2]
[2]
[-1]
[1]
[-2, -1]
[-2, 1]
[2, -1]
[2, 1]
</code></pre>
<p>"Subsets" to exclude:</p>
<pre class="lang-py prettyprint-override"><code>[-1, 1]
[-2, 2]
</code></pre>
<p>My current approach is to use the powerset implementation from <a href="https://stackoverflow.com/a/1482316">here</a> for the combined list of <code>elements</code> and <code>[-x for x in elements]</code> and then iterate through the powerset and remove unwanted combinations.
However, that's not optimal I guess.
Is there a simple solution that doesn't require me to remove unwanted combinations in the end?</p>
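<p>One way to build this directly, so nothing has to be filtered out afterwards: take the usual powerset of the original elements, then attach every sign combination to each subset. Because the input has no duplicates, no generated combination can ever contain both <code>x</code> and <code>-x</code>:</p>

```python
from itertools import chain, combinations, product

def signed_powerset(elements):
    # Standard powerset over the elements themselves...
    subsets = chain.from_iterable(
        combinations(elements, r) for r in range(len(elements) + 1)
    )
    for subset in subsets:
        # ...then every +/- sign pattern for the chosen elements.
        for signs in product((1, -1), repeat=len(subset)):
            yield [s * x for s, x in zip(signs, subset)]

result = list(signed_powerset([-2, 1]))
```

<p>For n elements this yields exactly 3^n combinations (each element is absent, positive, or negative), with no post-filtering step.</p>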
|
<python><powerset>
|
2024-03-27 12:59:19
| 2
| 2,182
|
upe
|
78,231,633
| 11,152,653
|
How to add image upload with text in LangServe api
|
<p>I am creating an LLM chat API using LangServe. I want to offer the user the ability to chat either by uploading an image or by sending text. How can I create a chatbot in LangServe that takes image and text input, with an option to add an image in the LangServe Playground?</p>
<p>I was able to create a simple chatbot with text, but not with images. I want to create an agent so that whenever the input contains an image, I parse it with easyocr, pass the result to the LLM, and give the user an answer. Any help would be highly appreciated.</p>
|
<python><openai-api><langchain><large-language-model>
|
2024-03-27 12:17:50
| 0
| 497
|
DevPy
|
78,231,321
| 2,255,491
|
In DRF, How to inject full `ErrorDetail` in the response, using a custom exception handler?
|
<p>I'm using a pretty complex custom exception handler in DRF.
For example, for a given response, <code>response.data</code> could look like:</p>
<pre class="lang-py prettyprint-override"><code>{'global_error': None, 'non_field_errors': [], 'field_errors': {'important_field': [ErrorDetail(string='Ce champ est obligatoire.', code='required')]}}
</code></pre>
<p>However, when getting the actual response from the API, the <code>ErrorDetail</code> will be transformed into a simple string, losing the code information.</p>
<p>Is there a simple way to ensure <code>ErrorDetail</code> is always written in a response as <code>{"message": "...", "code": "..."}</code> without transforming the response manually in the custom handler?</p>
<p>I know there exists the DRF <code>get_full_details()</code> method that returns exactly this on an exception. But I'm at the response level.</p>
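<p>At the response level, a custom handler can walk <code>response.data</code> once and expand every <code>ErrorDetail</code> before the renderer flattens it to a plain string. Since DRF's <code>ErrorDetail</code> is a <code>str</code> subclass carrying a <code>code</code> attribute, the walk only needs to special-case string-like objects with a <code>code</code>. A framework-free sketch, with a stub class mimicking DRF's:</p>

```python
class ErrorDetail(str):  # stand-in for rest_framework.exceptions.ErrorDetail
    def __new__(cls, string, code=None):
        self = super().__new__(cls, string)
        self.code = code
        return self

def expand_details(data):
    """Recursively replace ErrorDetail-like values with message/code dicts."""
    if isinstance(data, str) and hasattr(data, "code"):
        return {"message": str(data), "code": data.code}
    if isinstance(data, dict):
        return {k: expand_details(v) for k, v in data.items()}
    if isinstance(data, list):
        return [expand_details(v) for v in data]
    return data

payload = {
    "global_error": None,
    "non_field_errors": [],
    "field_errors": {
        "important_field": [ErrorDetail("Ce champ est obligatoire.", code="required")]
    },
}
expanded = expand_details(payload)
```

<p>In a real handler this would run on <code>response.data</code> just before returning the <code>Response</code>; it produces the same shape as <code>exc.get_full_details()</code> but at the response level.</p>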
|
<python><django><django-rest-framework>
|
2024-03-27 11:25:26
| 2
| 11,222
|
David Dahan
|
78,231,305
| 3,433,875
|
Move labels to the beginning of the wedge in matplotlib pie chart
|
<p>I would like to move the labels from the center to the beginning of each wedge in my pie chart:</p>
<pre><code>import matplotlib.pyplot as plt
# Setting labels for items in Chart
Employee = ['A', 'B', 'C',
'D', 'E']
# Setting size in Chart based on
# given values
Salary = [40000, 50000, 70000, 54000, 44000]
# colors
colors = ['#FF0000', '#0000FF', '#FFFF00',
'#ADFF2F', '#FFA500']
fig, ax = plt.subplots(figsize=(5,5), facecolor = "#FFFFFF")
# Pie Chart
wedges, texts = ax.pie(Salary, colors=colors,
labels=Employee, labeldistance=0.8,
wedgeprops=dict(width=0.35),)
</code></pre>
<p>I am able to get the coordinates of the labels and replace them with:</p>
<pre><code>for lbl,p in zip(Employee,texts, ):
x, y = p.get_position()
print(x,y,p)
ax.annotate(lbl, xy=(x,y), size=12, color = "w")
</code></pre>
<p>but I am not sure how to move them to follow the circle.</p>
<p>I can also get the angles of the start of each wedge by calling the wedges property, but as the pie chart is drawn on a cartesian axis rather than a polar one, I am not sure how to do that either.</p>
<p>I can not use barh for this (my chart is more complicated than the one shown here).</p>
<p>Is this possible?</p>
<p>This is where I would like the labels to be placed.</p>
<p><a href="https://i.sstatic.net/x0B5k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x0B5k.png" alt="enter image description here" /></a></p>
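<p>The start angle of each wedge can also be computed straight from the data (matplotlib draws wedges counterclockwise from <code>startangle</code>, which defaults to 0 degrees), and converting that angle to cartesian coordinates at the label radius gives the positions to pass to <code>ax.annotate</code>. A stdlib-only sketch of the geometry, reusing the salaries from the question and radius 0.8 to match <code>labeldistance</code>:</p>

```python
import math

salary = [40000, 50000, 70000, 54000, 44000]
total = sum(salary)
labeldistance = 0.8
startangle = 0.0  # matplotlib's default

positions = []
angle = startangle
for value in salary:
    # Angle at the *start* of this wedge, converted to cartesian coordinates.
    theta = math.radians(angle)
    positions.append((labeldistance * math.cos(theta),
                      labeldistance * math.sin(theta)))
    angle += 360.0 * value / total
```

<p>Equivalently, <code>wedge.theta1</code> on the returned wedge objects gives the same start angle; passing it as the <code>rotation</code> of each annotation would make the labels follow the circle.</p>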
|
<python><matplotlib>
|
2024-03-27 11:22:55
| 1
| 363
|
ruthpozuelo
|
78,231,207
| 7,746,472
|
Skip level in nested JSON and convert to Pandas dataframe
|
<p>I have json data that is structured like this, which I want to turn into a data frame:</p>
<pre><code>{
"data": {
"1": {
"Conversion": {
"id": "1",
"datetime": "2024-03-26 08:30:00"
}
},
"50": {
"Conversion": {
"id": "50",
"datetime": "2024-03-27 09:00:00"
}
}
}
}
</code></pre>
<p>My usual approach would be to use json_normalize, like this:</p>
<p><code>df = pd.json_normalize(input['data'])</code></p>
<p>My goal is to have a table/dataframe with just the columns "id" and "datetime".</p>
<p>How do I skip the numbering level below data and go straight to Conversion? I would imagine something like this (which clearly doesn't work):</p>
<p><code>df = pd.json_normalize(input['data'][*]['Conversion'])</code></p>
<p>What is the best way to achieve this? Any hints are greatly appreciated!</p>
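<p>Since the numeric keys ("1", "50", ...) are arbitrary, one option is to collect the "Conversion" dicts into a list first and hand that list to <code>json_normalize</code> (or straight to the <code>DataFrame</code> constructor). The extraction itself is plain Python; the question's <code>input</code> variable is renamed <code>data</code> here, since <code>input</code> shadows a builtin:</p>

```python
data = {
    "data": {
        "1": {"Conversion": {"id": "1", "datetime": "2024-03-26 08:30:00"}},
        "50": {"Conversion": {"id": "50", "datetime": "2024-03-27 09:00:00"}},
    }
}

# Skip the arbitrary numbering level and keep only the Conversion records.
records = [entry["Conversion"] for entry in data["data"].values()]
# df = pd.json_normalize(records)   # or simply pd.DataFrame(records)
```

<p>The resulting frame then has exactly the "id" and "datetime" columns.</p>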
|
<python><python-3.x><pandas><dataframe>
|
2024-03-27 11:06:23
| 2
| 1,191
|
Sebastian
|
78,230,664
| 792,015
|
Python Tkinter resize all ttkbootstrap or ttk Button padding for a specific style, applied to all themes
|
<p>I want to alter padding for all buttons using a particular style (danger). For some reason this change is only applied to the currently active theme, switching themes reverts the Button padding to default. You can see the issue by running the following and switching themes ...</p>
<pre><code>import tkinter as tk
from tkinter import ttk
import ttkbootstrap as tb
def change_theme(theme, style):
    style.theme_use(theme.lower().replace(" ", ""))
def display_text(label_text, entry_text):
    label_text.set(entry_text.get())
def setup_ui(style):
root = style.master
danger = tb.Style()
danger.configure('danger.TButton', padding=0) # Why does this only apply to the first theme?
theme_names_titlecase = [name.replace('_', ' ').title() for name in style.theme_names() if name.lower() in ['darkly', 'simplex']]
default_theme = 'darkly'
current_theme = tk.StringVar(value=default_theme.capitalize())
theme_combo = ttk.Combobox(root, textvariable=current_theme, values=theme_names_titlecase, width=50)
theme_combo.pack(pady=0, side=tk.TOP)
theme_combo.bind("<<ComboboxSelected>>", lambda e: change_theme(current_theme.get(), style))
tb.Button(root, text='Text', bootstyle='danger.TButton').pack(side=tk.TOP, padx=0, pady=0)
tb.Button(root, text='Text', bootstyle='info.TButton').pack(side=tk.TOP, padx=0, pady=0)
return root
if __name__ == "__main__":
default_theme = 'darkly'
style = tb.Style(theme=default_theme)
root = setup_ui(style)
root.mainloop()
</code></pre>
<p>What I want to know is :</p>
<ol>
<li>Why are my changes to 'danger.TButton' only applied to the current theme?</li>
<li>Can I fix this so all 'danger.TButton' buttons have no padding regardless of theme?</li>
</ol>
<p>Note: using all ttk widgets and Styles has the same result, so the answer relates to ttk rather than ttkbootstrap specifically.</p>
<p>Many thanks.</p>
|
<python><tkinter><ttk><ttkbootstrap>
|
2024-03-27 09:40:26
| 2
| 1,466
|
Inyoka
|
78,230,430
| 713,200
|
How to resolve BadHostKeyException with paramiko?
|
<p>I have the following code to connect to a Linux box and run a command, but I'm facing
BadHostKeyException even after adding <code>WarningPolicy</code> and <code>AutoAddPolicy</code>.</p>
<pre><code> print("---CCCCCCCCCCCCC---",commands)
client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.WarningPolicy)
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ipAddress, port=22, username=sshUser, password=sshPassword)
self.logger.info("executeOnRemoteShell - Created SSH connection to " + ipAddress)
stdin, stdout, stderr = client.exec_command(commands)
result = str(stdout.readlines()[0].rstrip())
</code></pre>
<p>Not sure what I'm missing here; below is the full error.</p>
<pre><code>paramiko.ssh_exception.BadHostKeyException: Host key for server '45.32.23.23' does not match: got 'AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGcWA6JnHBbVIGsdC+USD2GOxWNy+R8hiiFiLse75rs1JRTWN8i3ol3yZ4OhFhQl4upZ7f5/scFzw4DqoMrhRIE=', expected 'AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJSELS2mT8SED8I7QFf5YkkvD5n4LCHUkX4ykeemwuqGHOBVHixQMBtKWF9lFuKFhKOCNsifRPK1FfkT23glapI
</code></pre>
|
<python><linux><paramiko><ssh-keys>
|
2024-03-27 09:00:31
| 1
| 950
|
mac
|
78,230,261
| 18,107,780
|
PyQt5 heading and subheading
|
<p>I'm trying to recreate this <a href="https://i.sstatic.net/VI6t7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VI6t7.png" alt="enter image description here" /></a> in PyQt5.
The problem is the space between the 2 labels (the heading and the subheading).</p>
<p>This is my current code:</p>
<pre><code>import sys
from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QVBoxLayout
from PyQt5.QtCore import Qt
class NoResultsWidget(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
# "No results found" -> heading
self.noResultsLabel = QLabel("No results found", self)
self.noResultsLabel.setAlignment(Qt.AlignCenter)
self.noResultsLabel.setStyleSheet("font-size: 24px;")
# "Try different keywords or remove search filters" -> subheading
self.instructionLabel = QLabel(
"Try different keywords or remove search filters", self
)
self.instructionLabel.setAlignment(Qt.AlignCenter)
self.instructionLabel.setStyleSheet("font-size: 14px; color: gray;")
layout = QVBoxLayout()
layout.addWidget(self.noResultsLabel)
layout.addWidget(self.instructionLabel)
layout.setSpacing(10)
self.setLayout(layout)
self.setWindowTitle("No Results")
self.setGeometry(300, 300, 350, 200)
self.show()
def main():
app = QApplication(sys.argv)
ex = NoResultsWidget()
sys.exit(app.exec_())
if __name__ == "__main__":
main()
</code></pre>
<p>that outputs this:
<a href="https://i.sstatic.net/xeAZp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xeAZp.png" alt="enter image description here" /></a></p>
<p>Does anyone know how I can get rid of the space between the two labels? Or have in mind an alternative way of doing this?</p>
|
<python><pyqt><pyqt5>
|
2024-03-27 08:27:11
| 1
| 457
|
Edoardo Balducci
|
78,230,229
| 10,122,048
|
When vercel deploying my python app gives Error: Unable to find any supported Python versions
|
<p>While deploying the web app I built using Python, the Vertex AI Google chat-bison model, and FastAPI, I constantly get the error "Unable to find any supported Python versions" and the deployment does not complete. I also specified the Python version in the requirements file, but it did not help. I use Vercel with GitHub and all my files are in the root directory. My Vercel deployment error screenshot is like this:</p>
<p><a href="https://i.sstatic.net/zhRzP.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zhRzP.jpg" alt="enter image description here" /></a></p>
<p>My files are <em>app.py</em>, <em>.env</em> , <em>Procfile</em> , <em>requirements.txt</em> and <em>vercel.json</em>. The contents of some files are as follows:</p>
<p><strong>.env file:</strong></p>
<p><code>GOOGLE_API_KEY=MYAPIKEY</code></p>
<p><strong>Procfile:</strong></p>
<p><code>web: gunicorn -w 4 -k uvicorn.workers.UvicornWorker app:app</code></p>
<p><strong>requirements.txt:</strong></p>
<p>fastapi==0.110.0</p>
<p>markdown==3.6</p>
<p>python-dotenv==1.0.1</p>
<p>gunicorn==21.2.0</p>
<p>uvicorn==0.29.0</p>
<p>uvicorn-worker==0.1.0</p>
<p>python-multipart==0.0.9</p>
<p>google-generativeai==0.4.1</p>
<p>google-cloud-aiplatform==1.44.0</p>
<p>python==3.9.0</p>
<p><strong>vercel.json:</strong></p>
<pre><code>{
"devCommand": "gunicorn -w 4 -k uvicorn.workers.UvicornWorker app:app",
"builds": [
{
"src": "/app.py",
"use": "@vercel/python"
}
],
"routes": [
{
"src": "/(.*)",
"dest": "/app.py"
}
]
}
</code></pre>
<p>My project runs very well on my computer in the VsCode environment, but I am having this problem when deploying on Vercel.</p>
<p>I tried editing the <code>vercel.json</code> file many times. I updated the versions of the requirements in the <code>requirements.txt</code> file, and I set the Python version to <code>3.9.0</code>, which is supported by Vercel, in the requirements file, but it did not solve my problem. I would be very happy if you could help me with what I can do.</p>
|
<python><fastapi><vercel><google-cloud-vertex-ai>
|
2024-03-27 08:21:11
| 4
| 458
|
webworm84
|
78,230,010
| 1,002,097
|
How to change the size and spacing of Plotly's subplots?
|
<p>How is it possible to manipulate the <strong>spacing between rows</strong> (e.g. increasing the space between the first two rows and the final two rows) and the <strong>size of the charts inside the figure</strong> (e.g. making the pie charts bigger)?</p>
<p>In my example, I'm trying to visualize multiple data columns from <a href="https://www.kaggle.com/datasets/blastchar/telco-customer-churn/data" rel="nofollow noreferrer">Telco Customer Churn dataset</a> (<a href="https://www.kaggle.com/datasets/blastchar/telco-customer-churn/download?datasetVersionNumber=1" rel="nofollow noreferrer">download 176kB</a>) using plotly's <code>FigureWidget</code> and <code>make_subplots</code>.
<a href="https://i.sstatic.net/Fz4iq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fz4iq.png" alt="enter image description here" /></a></p>
<p>This code loops through 8 columns, adds 1 pie chart and one bar chart for each column.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import plotly.graph_objects as go
from plotly.subplots import make_subplots
# Read data
df = pd.read_csv("./WA_Fn-UseC_-Telco-Customer-Churn.csv")
df['SeniorCitizen'] = df['SeniorCitizen'].map({0: 'No', 1: 'Yes'})
# Define subplot titles and data columns
data_cols = ['PhoneService' ,'MultipleLines' ,'InternetService' ,'OnlineBackup' ,'DeviceProtection' ,'TechSupport' ,'StreamingTV' ,'StreamingMovies']
titles = ['Phone Service' ,'Multiple Lines' ,'Internet Service' ,'Online Backup' ,'Device Protection' ,'Tech Support' ,'Streaming TV' ,'Streaming Movies']
fig = go.FigureWidget(make_subplots(rows=4, cols=4, specs=[
[{'type':'domain'}, {'type':'domain'}, {'type':'domain'}, {'type':'domain'}],
[{'type':'xy'}, {'type':'xy'}, {'type':'xy'}, {'type':'xy'}],
[{'type':'domain'}, {'type':'domain'}, {'type':'domain'}, {'type':'domain'}],
[{'type':'xy'}, {'type':'xy'}, {'type':'xy'}, {'type':'xy'}]]))
row, col = 1, 0
for i, (title, data_col) in enumerate(zip(titles, data_cols)):
row, col = divmod(i, 4)
row = row * 2
# Get value counts for pie chart
value_counts = df[data_col].value_counts()
# Create pie chart trace and add to subplot
pie_chart = go.Pie(labels=value_counts.index, values=value_counts.to_numpy(), name=title, title=title)
fig.add_trace(pie_chart, row=row+1, col=col+1)
# get churn rates
churn_counts = df.groupby([data_col, 'Churn'])['Churn'].count().unstack()
# Create stacked bar charts
t1 = go.Bar(name='Churn (yes)', x=churn_counts['Yes'].index, y=churn_counts['Yes'])
t2 = go.Bar(name='Churn (no)', x=churn_counts['No'].index, y=churn_counts['No'], marker_color='indianred')
fig.add_trace(t1, row=row+2, col=col+1)
fig.add_trace(t2, row=row+2, col=col+1)
fig.update_layout(title="Distribution of Customer Services", barmode='stack', showlegend=False)
fig.show()
</code></pre>
<hr />
<p><strong>Edit</strong>: this issue is not fixed by decreasing the number of columns to two either. This is the chart on a large wide screen:
<a href="https://i.sstatic.net/dBHNj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dBHNj.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import plotly.graph_objects as go
from plotly.subplots import make_subplots
# Read data
df = pd.read_csv("./WA_Fn-UseC_-Telco-Customer-Churn.csv")
df['SeniorCitizen'] = df['SeniorCitizen'].map({0: 'No', 1: 'Yes'})
# Define subplot titles and data columns
data_cols = ['PhoneService' ,'MultipleLines' ,'InternetService' ,'OnlineBackup' ,'DeviceProtection' ,'TechSupport' ,'StreamingTV' ,'StreamingMovies']
titles = ['Phone Service' ,'Multiple Lines' ,'Internet Service' ,'Online Backup' ,'Device Protection' ,'Tech Support' ,'Streaming TV' ,'Streaming Movies']
fig = go.FigureWidget(make_subplots(rows=8, cols=2, specs=[
[{'type':'domain'}, {'type':'domain'}],
[{'type':'xy'}, {'type':'xy'}],
[{'type':'domain'}, {'type':'domain'}],
[{'type':'xy'}, {'type':'xy'}],
[{'type':'domain'}, {'type':'domain'}],
[{'type':'xy'}, {'type':'xy'}],
[{'type':'domain'}, {'type':'domain'}],
[{'type':'xy'}, {'type':'xy'}]]))
row, col = 1, 0
for i, (title, data_col) in enumerate(zip(titles, data_cols)):
row, col = divmod(i, 2)
row = row * 2
# Get value counts for pie chart
value_counts = df[data_col].value_counts()
# Create pie chart trace and add to subplot
pie_chart = go.Pie(labels=value_counts.index, values=value_counts.to_numpy(), name=title, title=title)
fig.add_trace(pie_chart, row=row+1, col=col+1)
# get churn rates
churn_counts = df.groupby([data_col, 'Churn'])['Churn'].count().unstack()
# Create stacked bar charts
t1 = go.Bar(name='Churn (yes)', x=churn_counts['Yes'].index, y=churn_counts['Yes'])
t2 = go.Bar(name='Churn (no)', x=churn_counts['No'].index, y=churn_counts['No'], marker_color='indianred')
fig.add_trace(t1, row=row+2, col=col+1)
fig.add_trace(t2, row=row+2, col=col+1)
fig.update_layout(title="Distribution of Customer Services", barmode='stack', showlegend=False)
fig.show()
</code></pre>
|
<python><plotly><subplot><plotly.graph-objects>
|
2024-03-27 07:40:22
| 1
| 632
|
Tohid
|
78,229,687
| 12,238,655
|
What is the difference between pipx and using pip install inside a virtual environment?
|
<p>I recently used <a href="https://github.com/pypa/pipx#overview-what-is-pipx" rel="nofollow noreferrer">pipx</a> to install a package (that was the recommended install approach for the package). Previously I had installed packages using pip in a virtual environment.</p>
<p>I was under the assumption that pipx lived in that environment and would install the package into the same environment. However, pipx created its own environment folder and installed the package there, and I had to add it to PATH in order to use the package. This led me to wonder what the difference is between pipx and using pip install inside a virtual environment.</p>
<p>Isn't using pip from the virtual environment more convenient for installing the package?</p>
|
<python><python-3.x><pip><python-venv><pipx>
|
2024-03-27 06:22:16
| 1
| 381
|
skaarfacee
|
78,229,557
| 7,601,346
|
Pytest KeyError: '__spec__' During handling of the above exception, another exception occurred: AttributeError: __spec__ Empty suite
|
<p>I'm trying to execute some pytest tests on a Python repo in PyCharm. I've tried executing the tests individually from the command line, but I get the same errors.</p>
<p>My Python interpreter is set to 3.11.</p>
<pre><code>/usr/local/bin/python3.11 /Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/_jb_pytest_runner.py --path /Users/shannonj/Downloads/requests-master/tests
Testing started at 1:38 AM ...
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/py/_vendored_packages/apipkg.py", line 141, in __makeattr
modpath, attrname = self.__map__[name]
~~~~~~~~~~~~^^^^^^
KeyError: '__spec__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/_jb_pytest_runner.py", line 5, in <module>
import pytest
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pytest/__init__.py", line 5, in <module>
from _pytest.assertion import register_assert_rewrite
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/assertion/__init__.py", line 9, in <module>
from _pytest.assertion import rewrite
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/assertion/rewrite.py", line 34, in <module>
from _pytest.assertion import util
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/assertion/util.py", line 13, in <module>
import _pytest._code
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/_code/__init__.py", line 2, in <module>
from .code import Code
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/_code/code.py", line 54, in <module>
class Code:
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/_code/code.py", line 81, in Code
def path(self) -> Union[py.path.local, str]:
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/py/_vendored_packages/apipkg.py", line 148, in __makeattr
result = importobj(modpath, attrname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/py/_vendored_packages/apipkg.py", line 69, in importobj
module = __import__(modpath, None, None, ['__doc__'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1136, in _find_and_load_unlocked
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/py/_vendored_packages/apipkg.py", line 146, in __makeattr
raise AttributeError(name)
AttributeError: __spec__
Process finished with exit code 1
Empty suite
</code></pre>
<p>The requirements of the repo are as follows:</p>
<pre><code>alabaster==0.7.7
Babel==2.2.0
coverage==5.0.3
decorator==4.0.9
docutils==0.12
Flask==0.10.1
httpbin==0.5.0
itsdangerous==0.24
Jinja2==2.10.3
MarkupSafe==1.1.0
pluggy==0.13.0
py==1.8.2
Pygments==2.1.1
PySocks==1.7.0
pytest-cov==2.4.0
pytest-httpbin==0.2.0
pytest-mock==0.11.0
pytest==6.2.5
pytz==2015.7
six==1.10.0
snowballstemmer==1.2.1
sphinx-rtd-theme==0.1.9
Sphinx==1.3.5
urllib3==1.26.7
Werkzeug==0.11.4
wheel==0.29.0
</code></pre>
<p>Which does seem pretty outdated, so I'm hazarding a guess that maybe I need an older version of Python? But I already have 3.11 and 3.12.2 installed via python.org's official Mac installer. Older versions of Python don't have such an easy install option, though I guess I could use brew? I just don't want to mess with my default Python version, and I also don't know if it'll work.</p>
<p>I've tried updating some of the packages, like py to the latest version 1.10, but it kind of sets off a chain of needing to update other packages.</p>
<p>Any insights would be appreciated</p>
|
<python><python-3.x><pycharm><pytest>
|
2024-03-27 05:43:50
| 1
| 1,200
|
Shisui
|
78,229,489
| 8,325,579
|
Advanced type hints in Python - how do I avoid mypy being angry when the type is not specific enough
|
<p>I have some functions that return dictionaries:</p>
<pre class="lang-py prettyprint-override"><code>def get_metadata_from_file(filepath:str)->dict[str, bool|dict[str, Any]]:
'''Get metadata about a file if it exists'''
answer = {}
if os.path.isfile(filepath):
answer['exists'] = True
answer['metadata'] = { dict of metadata attributes }
else:
answer['exists'] = False
answer['metadata'] = {}
return answer
</code></pre>
<p>Later in other functions I have issues:</p>
<pre class="lang-py prettyprint-override"><code>
def get_creation_time(filepath:str)->str|None:
metadata = get_metadata_from_file(filepath)
if metadata['exists']:
return metadata['metadata']['created_at'] # Mypy gets angry here
else:
return None
</code></pre>
<p>Clearly the program's logic handles the case where the file does not exist, but Mypy is concerned that the <code>metadata['metadata']['created_at']</code> key might not exist / that metadata['metadata'] will be a boolean.</p>
<p>I'm sure there is a solution to this, what is the recommended approach?</p>
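<p>For context, here is the kind of <code>TypedDict</code> sketch I've been experimenting with (the stubbed return values below are made up for illustration; a real version would inspect the file):</p>

```python
from typing import Any, Optional, TypedDict

class FileMetadata(TypedDict):
    exists: bool
    metadata: dict[str, Any]

def get_metadata_from_file(filepath: str) -> FileMetadata:
    # Stubbed-out example; a real version would call os.path.isfile().
    return {"exists": True, "metadata": {"created_at": "2024-03-27"}}

def get_creation_time(filepath: str) -> Optional[str]:
    md = get_metadata_from_file(filepath)
    if md["exists"]:
        # mypy now knows md["metadata"] is a dict, never a bool.
        return md["metadata"].get("created_at")
    return None
```

<p>With the per-key types declared, mypy no longer assumes <code>metadata['metadata']</code> could be a boolean.</p>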
|
<python><mypy><python-typing>
|
2024-03-27 05:22:27
| 1
| 1,048
|
Myccha
|
78,229,273
| 14,553,366
|
Unable to convert Speech to Text using Azure Speech-to-Text service
|
<p>I'm using the below code to convert speech to text using the Azure Speech-to-Text service. I want to convert my audio files into text. Below is the code for the same:</p>
<pre><code>import os
import azure.cognitiveservices.speech as speechsdk
def recognize_from_microphone():
# This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
speech_config = speechsdk.SpeechConfig(subscription=my_key, region=my_region)
speech_config.speech_recognition_language="en-US"
audio_config = speechsdk.audio.AudioConfig(filename="C:\\Users\\DELL\\Desktop\\flowlly.com\\demo\\003. Class 3 - Monolith, Microservices, gRPC, Webhooks.mp4")
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
speech_recognition_result = speech_recognizer.recognize_once_async().get()
if speech_recognition_result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("Recognized: {}".format(speech_recognition_result.text))
elif speech_recognition_result.reason == speechsdk.ResultReason.NoMatch:
print("No speech could be recognized: {}".format(speech_recognition_result.no_match_details))
elif speech_recognition_result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = speech_recognition_result.cancellation_details
print("Speech Recognition canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
print("Error details: {}".format(cancellation_details.error_details))
print("Did you set the speech resource key and region values?")
recognize_from_microphone()
</code></pre>
<p>But I'm getting this error when trying to run the transcriber:</p>
<pre><code> File "C:\Users\DELL\Desktop\flowlly.com\demo\transcriber.py", line 48, in <module>
recognize_from_microphone()
File "C:\Users\DELL\Desktop\flowlly.com\demo\transcriber.py", line 18, in recognize_from_microphone
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DELL\AppData\Local\Programs\Python\Python312\Lib\site-packages\azure\cognitiveservices\speech\speech.py", line 1006, in __init__
_call_hr_fn(
File "C:\Users\DELL\AppData\Local\Programs\Python\Python312\Lib\site-packages\azure\cognitiveservices\speech\interop.py", line 62, in _call_hr_fn
_raise_if_failed(hr)
File "C:\Users\DELL\AppData\Local\Programs\Python\Python312\Lib\site-packages\azure\cognitiveservices\speech\interop.py", line 55, in _raise_if_failed
__try_get_error(_spx_handle(hr))
File "C:\Users\DELL\AppData\Local\Programs\Python\Python312\Lib\site-packages\azure\cognitiveservices\speech\interop.py", line 50, in __try_get_error
raise RuntimeError(message)
RuntimeError: Exception with error code:
[CALL STACK BEGIN]
> pal_string_to_wstring
- pal_string_to_wstring
- pal_string_to_wstring
- pal_string_to_wstring
- pal_string_to_wstring
- pal_string_to_wstring
- pal_string_to_wstring
- pal_string_to_wstring
- pal_string_to_wstring
- pal_string_to_wstring
- pal_string_to_wstring
- recognizer_create_speech_recognizer_from_config
- recognizer_create_speech_recognizer_from_config
[CALL STACK END]
Exception with an error code: 0xa (SPXERR_INVALID_HEADER)
</code></pre>
<p>I have installed the SDK, but it's not working. What should I do now?</p>
|
<python><azure><speech-recognition><speech-to-text>
|
2024-03-27 04:01:57
| 1
| 430
|
Vatsal A Mehta
|
78,229,141
| 10,792
|
Grouping Multiple Rows of Data For Use In scikit-learn Random Forest Machine Learning Model
|
<p>I'm having a difficult time phrasing my question, so if there's anything unclear that I can improve upon, please let me know. My goal is ultimately to determine the location of an RF transmitter using a machine learning model. There are many other techniques that could be used to identify the source of an RF signal (including triangulation and time offsets between multiple receivers), but that's not the point. I want to see if I can make this work with a ML model.</p>
<p>I'm attempting to use a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html" rel="nofollow noreferrer">RandomForestClassifier</a> from <a href="https://scikit-learn.org/stable/getting_started.html" rel="nofollow noreferrer">scikit-learn</a> to build out a model for identifying the source of an RF signal, given the signal strength on several receivers scattered across a known area. These receivers are all linked (via network) to a central database. The receivers are in fixed locations, but the transmitter could be anywhere, so the signal strength into a receiver primarily depends on whether the transmitter has direct line of sight to the receiver. Receivers measure signal strength from 1 to 255. If it's a 0, it means the receiver didn't hear anything, so it's not recorded in the database. The fact that it won't be recorded will be important in a moment. An <code>rssi</code> of 255 is an indication of full scale into a particular receiver.</p>
<p>The database logs the receiver data every second (please see table below). Each group of <code>time</code> is a representation of what the signal looked like at each receiver at that time. As stated, if the signal wasn't heard on a receiver, it won't be logged into the database, so each group of <code>time</code> could have as few as 1 row, or as many as <code>X</code> rows, where <code>X</code> represents the total number of receivers in the system (e.g., if there are ten receivers listening on the same frequency and each receiver hears the signal, a row for all ten receivers will show up in the database, but if only three of those ten hear the signal, only three rows will be recorded in the database). Essentially, I'm trying to correlate what signal strengths look like in a database with known locations. For example, strong into <code>Red</code> and <code>Green</code> means the signal is likely coming from <code>Foo</code>, whereas strong signals into <code>Red</code> and <code>Yellow</code>, with a weak signal into <code>Blue</code> means the signal probably came from <code>Bar</code>. The known <code>location</code> data is built out manually by observing what a signal looks like when a transmitter is in a known location. It's a very tedious process.</p>
<p>The way the receiver data is logged (across multiple rows and never knowing how many rows will show up in the dataset) is causing an obvious challenge for me when I'm trying to model the data because the <code>RandomForestClassifier</code> looks at each row individually. I need the data to be grouped by date/time, but not knowing how many receivers are going to hear the signal at any given time makes it difficult for me to model the data in a more logical way. At least I haven't come up with any good ideas.</p>
<p>The table below contains a few seconds of signal data from a known location (<code>Region A</code>). Does anybody have any suggestions for how I could restructure this data to make it useful with the <code>RandomForestClassifier</code> from scikit-learn?</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Receiver Name</th>
<th>Time</th>
<th>RSSI</th>
<th>Location</th>
</tr>
</thead>
<tbody>
<tr>
<td>Red</td>
<td>2024-03-21 20:37:58</td>
<td>182</td>
<td>Region A</td>
</tr>
<tr>
<td>Blue</td>
<td>2024-03-21 20:37:58</td>
<td>254</td>
<td>Region A</td>
</tr>
<tr>
<td>Green</td>
<td>2024-03-21 20:37:58</td>
<td>208</td>
<td>Region A</td>
</tr>
<tr>
<td>Red</td>
<td>2024-03-21 20:37:59</td>
<td>192</td>
<td>Region A</td>
</tr>
<tr>
<td>Blue</td>
<td>2024-03-21 20:37:59</td>
<td>254</td>
<td>Region A</td>
</tr>
<tr>
<td>Green</td>
<td>2024-03-21 20:37:59</td>
<td>215</td>
<td>Region A</td>
</tr>
<tr>
<td>Red</td>
<td>2024-03-21 20:38:00</td>
<td>202</td>
<td>Region A</td>
</tr>
<tr>
<td>Blue</td>
<td>2024-03-21 20:38:00</td>
<td>254</td>
<td>Region A</td>
</tr>
<tr>
<td>Green</td>
<td>2024-03-21 20:38:00</td>
<td>207</td>
<td>Region A</td>
</tr>
<tr>
<td>Yellow</td>
<td>2024-03-21 20:38:00</td>
<td>17</td>
<td>Region A</td>
</tr>
<tr>
<td>Red</td>
<td>2024-03-21 20:38:01</td>
<td>189</td>
<td>Region A</td>
</tr>
<tr>
<td>Blue</td>
<td>2024-03-21 20:38:01</td>
<td>254</td>
<td>Region A</td>
</tr>
<tr>
<td>Green</td>
<td>2024-03-21 20:38:01</td>
<td>225</td>
<td>Region A</td>
</tr>
<tr>
<td>Yellow</td>
<td>2024-03-21 20:38:01</td>
<td>16</td>
<td>Region A</td>
</tr>
<tr>
<td>Red</td>
<td>2024-03-21 20:38:02</td>
<td>204</td>
<td>Region A</td>
</tr>
<tr>
<td>Blue</td>
<td>2024-03-21 20:38:02</td>
<td>255</td>
<td>Region A</td>
</tr>
<tr>
<td>Green</td>
<td>2024-03-21 20:38:02</td>
<td>213</td>
<td>Region A</td>
</tr>
<tr>
<td>Yellow</td>
<td>2024-03-21 20:38:02</td>
<td>18</td>
<td>Region A</td>
</tr>
<tr>
<td>Red</td>
<td>2024-03-21 20:38:03</td>
<td>180</td>
<td>Region A</td>
</tr>
<tr>
<td>Blue</td>
<td>2024-03-21 20:38:03</td>
<td>254</td>
<td>Region A</td>
</tr>
<tr>
<td>Green</td>
<td>2024-03-21 20:38:03</td>
<td>214</td>
<td>Region A</td>
</tr>
<tr>
<td>Yellow</td>
<td>2024-03-21 20:38:03</td>
<td>13</td>
<td>Region A</td>
</tr>
<tr>
<td>Red</td>
<td>2024-03-21 20:38:04</td>
<td>182</td>
<td>Region A</td>
</tr>
<tr>
<td>Blue</td>
<td>2024-03-21 20:38:04</td>
<td>254</td>
<td>Region A</td>
</tr>
<tr>
<td>Green</td>
<td>2024-03-21 20:38:04</td>
<td>213</td>
<td>Region A</td>
</tr>
<tr>
<td>Yellow</td>
<td>2024-03-21 20:38:04</td>
<td>12</td>
<td>Region A</td>
</tr>
</tbody>
</table></div>
<p>Below is the Python code I started with. It still looks at each row individually. I've also never worked with scikit-learn or Python, so I'm not confident anything below is correct:</p>
<pre><code>import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
data = pd.read_csv("data/combined.csv", header=0)
label_encoder = LabelEncoder()
print(data.columns)
data["name_encoded"] = label_encoder.fit_transform(data["name"])
data["location_encoded"] = label_encoder.fit_transform(data["location"])
x = data[["rssi", "name_encoded"]] # Features (rssi and encoded name)
y = data["location_encoded"] # Target (encoded location)
# Split data into training and testing sets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
# Create the Random Forest model
model = RandomForestClassifier(n_estimators=100)
# Train the model
model.fit(x_train, y_train)
# Make predictions on the testing set
y_pred = model.predict(x_test)
# Decode predictions
location_decoder = LabelEncoder()
location_decoder.fit(data["location"]) # Fit the decoder with original locations
predicted_locations = location_decoder.inverse_transform(y_pred)
print("Predicted locations:", predicted_locations)
</code></pre>
<p>Thank you in advance for any help.</p>
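<p>To illustrate the shape I think the model needs, here is a reshaping sketch (column names follow the table above; the sample values are made up):</p>

```python
import pandas as pd

# Toy rows shaped like the table in the question (values illustrative).
rows = pd.DataFrame({
    "name": ["Red", "Blue", "Green", "Red", "Blue", "Green", "Yellow"],
    "time": ["20:37:58", "20:37:58", "20:37:58",
             "20:37:59", "20:37:59", "20:37:59", "20:37:59"],
    "rssi": [182, 254, 208, 192, 254, 215, 16],
    "location": ["Region A"] * 7,
})

# One row per timestamp, one column per receiver; receivers that did
# not hear the signal become 0, matching the system's semantics.
wide = rows.pivot_table(index=["time", "location"], columns="name",
                        values="rssi", fill_value=0).reset_index()
print(wide)
```

<p>Each wide row would then be one training sample, with the fixed set of receiver columns as features and <code>location</code> as the target, so the classifier sees a whole second of signal at once.</p>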
|
<python><machine-learning><scikit-learn>
|
2024-03-27 03:07:34
| 1
| 29,132
|
senfo
|
78,228,895
| 7,225,171
|
How to use memray with Gunicorn or flask dev server?
|
<p>I want to profile my Flask project's memory usage using <a href="https://github.com/bloomberg/memray" rel="nofollow noreferrer">memray</a>.
The documentation says to use the command as:</p>
<pre><code>memray run my_script.py
</code></pre>
<p>But when I run Flask (using the dev server) or gunicorn, I don't specify an entry script:</p>
<pre><code>flask run -port 80
</code></pre>
<p>So when I try to run the same thing but with memray:</p>
<pre><code>memray run flask run -port 80
</code></pre>
<p>it throws an error saying <code>FileNotFoundError: [Errno 2] No such file or directory: 'flask'.</code></p>
<p>How to achieve this?</p>
|
<python><flask><memory><gunicorn>
|
2024-03-27 01:21:42
| 0
| 1,091
|
Serob
|
78,228,705
| 10,953,274
|
Issue with Loading Pre-trained DQN Model Resulting in Lower Performance
|
<p>Context:
I've been working with a Deep Q-Network (DQN) model for a reinforcement learning project. After training the model, I saved it and later loaded it for further evaluation. However, I noticed that the performance of the loaded model is significantly worse compared to the original pre-trained model's performance.</p>
<p>Problem:
The main issue seems to arise when I load the trained model; the performance drops dramatically, and the model behaves as if it hasn't learned anything from the training process. This is puzzling because the model performs well before saving.</p>
<pre><code>class DQLAgent():
def __init__(self, env, model_path=None):
self.env = env
self.state_size = env.observation_space.shape[0]
self.action_size = env.action_space.n
self.gamma = 0.95
self.learning_rate = 0.001
self.epsilon = 1
self.epsilon_decay = 0.995
self.epsilon_min = 0.01
self.memory = deque(maxlen=2000)
if model_path:
self.model = load_model(model_path) # Load model if path provided
else:
self.model = self.build_model() # Build new model otherwise
def build_model(self):
model = Sequential()
model.add(Dense(48, input_dim=self.state_size, activation='tanh'))
model.add(Dense(self.action_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=self.learning_rate))
return model
def remember(self, state, action, reward, next_state, done):
self.memory.append((state, action, reward, next_state, done))
def act(self, state):
if np.random.rand() <= self.epsilon:
return self.env.action_space.sample()
else:
act_values = self.model.predict(state,verbose = 0)
return np.argmax(act_values[0])
def replay(self, batch_size):
if len(self.memory) < batch_size:
return
minibatch = random.sample(self.memory, batch_size)
for state, action, reward, next_state, done in minibatch:
target = reward if done else reward + self.gamma * np.amax(self.model.predict(next_state,verbose = 0)[0])
train_target = self.model.predict(state,verbose = 0)
train_target[0][action] = target
self.model.fit(state, train_target, verbose=0)
def adaptiveEGreedy(self):
if self.epsilon > self.epsilon_min:
self.epsilon *= self.epsilon_decay
# Initialize gym environment and the agent
env = gym.make('CartPole-v1')
agent = DQLAgent(env)
episodes = 50
batch_size = 32
round_results = []
for e in range(episodes):
state = env.reset()
state = np.reshape(state, [1, 4])
total_reward = 0
while True:
action = agent.act(state)
next_state, reward, done, _ = env.step(action)
next_state = np.reshape(next_state, [1, 4])
agent.remember(state, action, reward, next_state, done)
state = next_state
agent.replay(batch_size)
agent.adaptiveEGreedy()
total_reward += reward
if done:
print(f'Episode: {e+1}, Total reward: {total_reward}')
round_results.append(total_reward)
break
agent.model.save('dql_cartpole_model.keras')
</code></pre>
<p>To demonstrate the effectiveness:</p>
<pre><code>model_path = 'dql_cartpole_model.keras' # Update this path
env = gym.make('CartPole-v1')
agent = DQLAgent(env, model_path=model_path)
episodes = 100
round_results = []
for e in range(episodes):
state = env.reset()
state = np.reshape(state, [1, 4])
total_reward = 0
while True:
action = agent.act(state)
next_state, reward, done, _ = env.step(action)
next_state = np.reshape(next_state, [1, 4])
state = next_state
total_reward += reward
if done:
print(f'Episode: {e+1}, Total reward: {total_reward}')
round_results.append(total_reward)
break
# Plot the rewards
plt.plot(round_results)
plt.title('Rewards per Episode')
plt.xlabel('Episode')
plt.ylabel('Total Reward')
plt.show()
</code></pre>
<p>Edit: I added <code>save_agent_state</code> and <code>load_agent_state</code> to the <code>DQLAgent</code> class, hoping to save <code>epsilon</code> and the replay memory, but I am still getting the same results.</p>
<pre><code>def save_agent_state(self, file_path):
with open(file_path, 'wb') as file:
pickle.dump({
'epsilon': self.epsilon,
'memory': list(self.memory) # Convert deque to list before saving
}, file)
def load_agent_state(self, file_path):
with open(file_path, 'rb') as file:
agent_state = pickle.load(file)
self.epsilon = agent_state['epsilon']
self.memory = deque(agent_state['memory'], maxlen=2000)
</code></pre>
<p>and added this in the end of training</p>
<pre><code>agent.model.save('dql_cartpole_model.keras')
agent.save_agent_state('agent_state.pkl')
</code></pre>
<p>and added this to the "demonstrate the effectiveness" script:</p>
<pre><code>model_path = 'dql_cartpole_model.keras' # Update this path
agent.load_agent_state('agent_state.pkl')
</code></pre>
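<p>One way to narrow this down (a sketch, assuming TensorFlow/Keras as tagged, and a recent TF that supports the <code>.keras</code> format; the throwaway model below mirrors the agent's architecture but is not the question's code) is to check whether the weights survive the save/load round trip at all:</p>

```python
import os
import tempfile

import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential, load_model

# Throwaway model with the same shape as the agent's network (4 inputs, 2 actions)
model = Sequential([Input(shape=(4,)),
                    Dense(48, activation="tanh"),
                    Dense(2, activation="linear")])
model.compile(loss="mean_squared_error", optimizer="adam")

# Save, reload, and compare every weight tensor
path = os.path.join(tempfile.mkdtemp(), "roundtrip_check.keras")
model.save(path)
restored = load_model(path)

weights_match = all(np.allclose(a, b)
                    for a, b in zip(model.get_weights(), restored.get_weights()))
```

<p>If <code>weights_match</code> is true, the network file itself is intact, which would point at the surrounding agent state (for example <code>epsilon</code> being re-initialized to 1 in the loaded agent) rather than the saved model.</p>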
|
<python><tensorflow>
|
2024-03-26 23:54:00
| 1
| 705
|
Unknown
|
78,228,640
| 978,288
|
How to make pythontex see PYTHONPATH
|
<p>I've been working with <code>pythontex</code> for some time and I'm quite happy with the results. After reinstalling the system, however, serious difficulties began.</p>
<p><strong>THE PROBLEM</strong>: while compiling code (in .tex documents) and trying to <em>import local modules</em> I get <code>ModuleNotFoundError: No module named 'm_system'</code> (the offending line: <code>import m_system</code>; I listed <code>sys.path</code> in a .tex document and it has no idea about the contents of <code>PYTHONPATH</code> system variable).</p>
<p>I cannot remember how I got it working the previous time, but it worked just fine. I was searching for an answer, grilling ChatGPT -- to no avail.</p>
<p>How could I solve it? Make <code>PYTHONPATH</code> the same for <code>pythontex</code>?</p>
<p>The new system:</p>
<ul>
<li>Linux Mint 21.3, Cinnamon 64-bit</li>
<li><code>tex</code> → <code>TeX, Version 3.141592653 (TeX Live 2022/dev/Debian)</code></li>
<li>editor: sublime-text 4169</li>
</ul>
<p>The system allows for compilation of pdf documents, along with python code inside. This snippet returned appropriate message in the pdf:</p>
<pre><code>\begin{pycode}
import sys
print(f"{sys.version = }")
\end{pycode}
</code></pre>
<p>The result was <code>sys.version = ’3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]’</code> (in the pdf).</p>
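<p>A diagnostic block along the same lines (a sketch) would show exactly what the Python subprocess spawned by <code>pythontex</code> sees; note that underscores in printed paths may need escaping or <code>\verb</code> to compile cleanly:</p>

```latex
\begin{pycode}
import os
import sys
print(f"PYTHONPATH = {os.environ.get('PYTHONPATH', '(not set)')}")
print(f"{sys.path = }")
\end{pycode}
```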
<p>What other pieces of information could provide additional, necessary details?</p>
<p>Thanks in advance.</p>
|
<python><linux><latex><tex>
|
2024-03-26 23:23:28
| 0
| 462
|
khaz
|
78,228,609
| 10,466,743
|
AWS Sagemaker MultiModel endpoint additional dependencies
|
<p>I am trying to deploy a <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/multi-model-endpoints.html#how-multi-model-endpoints-work" rel="nofollow noreferrer">multi model endpoint</a> on aws sagemaker. However some of my models have additional dependencies. I am following the <a href="https://huggingface.co/docs/sagemaker/en/inference#user-defined-code-and-modules" rel="nofollow noreferrer">huggingface's documentation</a>
on creating user-defined-code and requirements.</p>
<p>My zipped artifacts have a <code>code</code> directory with <code>requirements.txt</code>, and yet when I deploy the model and invoke it with the Python AWS SDK, I get <code>ModuleNotFoundError</code> errors during my imports.</p>
<p>I know it's finding my <code>inference.py</code> file, since it's failing to find those modules that I import.</p>
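<p>For reference, the archive layout being described looks roughly like this (a sketch; the model file names are illustrative, only the <code>code/</code> directory with <code>inference.py</code> and <code>requirements.txt</code> follows the Hugging Face docs):</p>

```
model.tar.gz
├── pytorch_model.bin        # model weights (illustrative)
├── config.json
└── code/
    ├── inference.py
    └── requirements.txt
```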
<p>It should be noted that these models I am deploying are trained and made outside of sagemaker and I am trying to bring them into sagemaker.</p>
<p>The container image I am using is
<code>'763104351884.dkr.ecr.ca-central-1.amazonaws.com/huggingface-pytorch-inference:2.1.0-transformers4.37.0-cpu-py310-ubuntu22.04'</code></p>
|
<python><amazon-web-services><amazon-sagemaker><huggingface>
|
2024-03-26 23:11:49
| 1
| 344
|
Lucas
|
78,228,608
| 8,863,970
|
How to groupBy on two columns and work out avg total value for each grouped column using pyspark
|
<p>I have the following DataFrame and using Pyspark, I'm trying to get the following answers:</p>
<ol>
<li>Total Fare by Pick</li>
<li>Total Tip by Pick</li>
<li>Avg Drag by Pick</li>
<li>Avg Drag by Drop</li>
</ol>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Pick</th>
<th>Drop</th>
<th>Fare</th>
<th>Tip</th>
<th>Drag</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>4.00</td>
<td>4.00</td>
<td>1.00</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>5.00</td>
<td>10.00</td>
<td>8.00</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>5.00</td>
<td>15.00</td>
<td>12.00</td>
</tr>
<tr>
<td>3</td>
<td>2</td>
<td>11.00</td>
<td>12.00</td>
<td>17.00</td>
</tr>
<tr>
<td>3</td>
<td>5</td>
<td>41.00</td>
<td>25.00</td>
<td>13.00</td>
</tr>
<tr>
<td>4</td>
<td>6</td>
<td>50.00</td>
<td>70.00</td>
<td>2.00</td>
</tr>
</tbody>
</table></div>
<p>My Query is so far like this:</p>
<pre><code>from pyspark.sql import functions as func
from pyspark.sql.functions import desc
data = [
(1, 1, 4.00, 4.00, 1.00),
(1, 2, 5.00, 10.00, 8.00),
(1, 2, 5.00, 15.00, 12.00),
(3, 2, 11.00, 12.00, 17.00),
(3, 5, 41.00, 25.00, 13.00),
(4, 6, 50.00, 70.00, 2.00)
]
columns = ["Pick", "Drop", "Fare", "Tip", "Drag"]
df = spark.createDataFrame(data, columns)
df.groupBy('Pick', 'Drop') \
.agg(
func.sum('Fare').alias('FarePick'),
func.sum('Tip').alias('TipPick'),
func.avg('Drag').alias('AvgDragPick'),
func.avg('Drag').alias('AvgDragDrop')) \
.orderBy('Pick').show()
</code></pre>
<p>However, I don't think this is correct. I'm a bit stuck on (4) because the <code>groupBy</code> does not seem right. Can anyone suggest a correction here? The output needs to be in one (1) table together, such as:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Pick</th>
<th>Drop</th>
<th>FarePick</th>
<th>TipPick</th>
<th>AvgDragPick</th>
<th>AvgDragDrop</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table></div>
|
<python><pyspark><apache-spark-sql>
|
2024-03-26 23:11:02
| 2
| 1,013
|
Saffik
|
78,228,565
| 1,474,327
|
Python TCP Server that both sends and or receives data (independently) using asyncio streams?
|
<p>I have a python-based TCP server using asyncio, but it has a specific behavior: the TCP client (a PLC) connects to the app, but both the server and the client can start data flows. Once a data flow starts, it is finished via specially-designed ACK messages.</p>
<p>The TCP server part is connected to a RMQ queue; on each new message, it will start a data flow with the PLC, wait response, then ACK each side and the flow is finished. On the PLC side, a flow can be started at any time (but if there is an ongoing flow, it will be just rejected), then the TCP server will do some processing, and then send response, the PLC sends ACK, the TCP server responds ACK, then the flow is finished.</p>
<p>The connection is kept always up.</p>
<p>I've implemented this using asyncio server and protocols, but I find the code to be overly complex, and I don't like how hard adding new features or extending functionality has become.</p>
<p>I thought that I could use the streams API instead, which seems much clearer, but I can't think of a way to handle both client-initiated requests through the reader and server-initiated requests to the client. I just wonder if this is possible?</p>
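<p>For what it's worth, a rough sketch of the pattern being asked about — one reader task per connection plus a queue for server-initiated flows, run concurrently over the same <code>StreamWriter</code> (the names and the newline framing are illustrative assumptions, not the real protocol):</p>

```python
import asyncio


async def handle_connection(reader: asyncio.StreamReader,
                            writer: asyncio.StreamWriter,
                            outgoing: asyncio.Queue) -> None:
    """Serve one client: read client-initiated frames while also pushing
    server-initiated messages from `outgoing` over the same connection."""

    async def read_loop():
        while True:
            line = await reader.readline()   # illustrative newline framing
            if not line:                     # client closed the connection
                return
            writer.write(b"ACK:" + line)     # placeholder for the real flow logic
            await writer.drain()

    async def write_loop():
        while True:
            msg = await outgoing.get()       # e.g. fed by the RMQ consumer
            writer.write(msg)
            await writer.drain()

    read_task = asyncio.create_task(read_loop())
    write_task = asyncio.create_task(write_loop())
    # Whichever loop finishes first (normally the reader, on disconnect)
    # cancels the other, shutting down the whole connection handler.
    done, pending = await asyncio.wait({read_task, write_task},
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    writer.close()
```

<p>The "reject a new flow while one is ongoing" state machine would live inside the two loops, e.g. guarded by a shared flag or an <code>asyncio.Lock</code>.</p>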
|
<python><tcp><stream><python-asyncio>
|
2024-03-26 22:56:09
| 1
| 727
|
Alberto
|
78,228,535
| 10,705,248
|
Fixed effect panel regression gives coefficients for each year
|
<p>I am trying to use entity fixed effects panel regression for my data set. I have cross-sectional data for each county in the US from 1971 to 2020 (yearly). I have two indices: <code>STCTID</code>, which is the county ID, and <code>Date</code>. Below is what my dataset looks like:</p>
<p><a href="https://i.sstatic.net/WCKwE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WCKwE.png" alt="enter image description here" /></a></p>
<p>Apart from the date index, I have <code>Year</code> column as an explanatory variable too. Below is my code for fitting panel regression using <code>linearmodels</code>, I keep the entity fixed effects but no time effect:</p>
<pre><code>from linearmodels import PanelOLS
formula1 = ("Yld ~ Year + Prec + GDD + KDD + VPD + Irg + Irg*Prec"
            " + Irg*GDD + Irg*KDD + Irg*VPD + EntityEffects")
mod = PanelOLS.from_formula(formula1, data=df_all_semw)
panelOLS_res = mod.fit(cov_type="clustered", cluster_entity=True)
</code></pre>
<p>Printing <code>panelOLS_res</code> gives the details about the regression coefficients and information. What I found strange is that the model gives regression coefficients for each year in <code>Year</code> variable. See below:</p>
<pre><code> Parameter Std. Err. T-stat P-value Lower CI Upper CI
GDD 0.0560 0.0028 20.148 0.0000 0.0505 0.0614
Irg 0.0008 0.0002 4.2808 0.0000 0.0005 0.0012
KDD -0.1646 0.0058 -28.269 0.0000 -0.1760 -0.1532
Prec -0.0006 0.0016 -0.3816 0.7028 -0.0037 0.0025
VPD -5.4312 1.4328 -3.7907 0.0002 -8.2395 -2.6228
Year[T.1971] 14.009 4.7535 2.9471 0.0032 4.6917 23.326
Year[T.1972] 20.115 4.9404 4.0715 0.0000 10.432 29.799
Year[T.1973] 20.867 5.0448 4.1364 0.0000 10.979 30.756
Year[T.1974] 6.5041 4.7447 1.3708 0.1704 -2.7959 15.804
Year[T.1975] 8.6791 5.0379 1.7228 0.0849 -1.1955 18.554
Year[T.1976] 14.602 4.8642 3.0020 0.0027 5.0683 24.136
Year[T.1977] 15.879 5.1194 3.1018 0.0019 5.8451 25.914
Year[T.1978] 28.605 4.9551 5.7728 0.0000 18.893 38.317
... and so on
</code></pre>
<p>I found this is because the datatype of <code>Year</code> is <code>object</code>; when I change it to <code>int</code>, I don't get these coefficients for each year, but then the R-squared drops. Could somebody explain the reason behind this? Is there any problem with the current model (with each year getting a coefficient)? If not, how can I explain it to others?</p>
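<p>The dtype-dependent expansion can be seen in isolation (a sketch using <code>pandas.get_dummies</code>, which mirrors how formula interfaces encode categorical columns): an <code>object</code>-dtype column is treated as categorical and expanded into one indicator per distinct value, whereas an <code>int</code> column enters as a single numeric regressor.</p>

```python
import pandas as pd

years_as_object = pd.Series(["1971", "1972", "1971"], name="Year")
years_as_int = pd.Series([1971, 1972, 1971], name="Year")

# object dtype -> one indicator column per distinct year (like Year[T.1971], ...)
dummies = pd.get_dummies(years_as_object)

# int dtype -> left as a single numeric regressor (no expansion)
print(dummies.shape)
print(years_as_int.dtype)
```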
|
<python><regression><statsmodels><panel-data><linearmodels>
|
2024-03-26 22:44:50
| 1
| 854
|
lsr729
|
78,228,326
| 547,231
|
Converting a torch.Tensor to a float pointer and vice versa
|
<p>I have an image <code>x</code> (as a <code>torch.Tensor</code>) of shape <code>(512, 512, 3)</code> (the first dimension being the color channel count). When I save it as an <code>*.exr</code>-file, using <code>imageio.imsave</code>, the output is the correct image:</p>
<p><a href="https://i.sstatic.net/UjMsY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UjMsY.png" alt="enter image description here" /></a></p>
<p>I need to pass a pointer to <code>x</code> to a C++-library. I get the pointer via <code>p = ctypes.cast(x.data_ptr(), ctypes.POINTER(ctypes.c_float))</code>. I also need to convert pointers and I'm doing so by the <code>as_tensor</code> function from <a href="https://stackoverflow.com/a/78157395/547231">this answer</a>:</p>
<pre><code>def as_tensor(pointer, shape, torch_type):
return torch.frombuffer((pointer._type_ * prod(shape)).from_address(ctypes.addressof(pointer.contents)), dtype = torch_type).view(*shape)
</code></pre>
<p>Now, when I write <code>y = as_tensor(p, x.shape, torch.float)</code> and save this an <code>*.exr</code>, I get the previous image repeated in a 3x3 grid (each row being one of the 3 channels, I guess):</p>
<p><a href="https://i.sstatic.net/n8Ekg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n8Ekg.jpg" alt="enter image description here" /></a></p>
<p>What is going on here? What I would expect is clearly <code>x = y</code> (in terms of the data).</p>
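<p>One thing worth checking (an assumption, not a diagnosis): <code>data_ptr()</code> exposes the tensor's underlying storage in raw memory order, so if <code>x</code> is a non-contiguous view (e.g. produced by <code>permute</code>), a flat-pointer round trip reinterprets the storage layout rather than the logical layout. A small sketch:</p>

```python
import ctypes

import torch

# A transposed view: logically 3x2, but the storage still holds 0..5
# in the original row-major 2x3 order.
x = torch.arange(6, dtype=torch.float32).reshape(2, 3).t()
p = ctypes.cast(x.data_ptr(), ctypes.POINTER(ctypes.c_float))

raw = [p[i] for i in range(6)]      # reads raw storage order, not x's logical order
logical = x.flatten().tolist()      # logical (transposed) order

# For a non-contiguous tensor the two differ; calling x.contiguous()
# before taking data_ptr() would make them agree.
```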
|
<python><pytorch><ctypes><python-imageio>
|
2024-03-26 21:45:37
| 0
| 18,343
|
0xbadf00d
|
78,228,257
| 8,378,817
|
How to create citation, co-citation and bibliographic coupling networks in python?
|
<p>I am currently working on a small passion project to understand citation networks.
After some research, I came across three major types of citation analysis:
direct citation, co-citation and bibliographic coupling.</p>
<p>I was able to produce a direct citation network in python. But creating co-citation and bibliographic coupling networks has been a little trickier, though I understand their differences.</p>
<p>I have a dataframe consisting of two columns:
an article column, and a second column holding the list of articles that cite the article in the first column.</p>
<p>Can someone provide me with a code-based example to create these kinds of networks?
Appreciate the help!
Thank you</p>
<p>Code example I used for direct citation:</p>
<pre><code>import pandas as pd
import networkx as nx

data = {
'cited_articles': ['A', 'B', 'C', 'D'],
'citing_articles': [['B', 'C'], ['C'], ['A', 'B', 'D'], ['B']]
}
df = pd.DataFrame(data)
G=nx.DiGraph()
G.add_nodes_from(df['cited_articles'])
for idx, row in df.iterrows():
cited_article = row['cited_articles']
citations = row['citing_articles']
for citation in citations:
G.add_edge(citation, cited_article)
</code></pre>
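<p>For concreteness, here is one possible sketch on the same sample dataframe (not authoritative — edge-weight conventions vary): co-citation links two cited articles that share a citing article, while bibliographic coupling links two citing articles that share a cited article.</p>

```python
from itertools import combinations

import networkx as nx
import pandas as pd

data = {
    'cited_articles': ['A', 'B', 'C', 'D'],
    'citing_articles': [['B', 'C'], ['C'], ['A', 'B', 'D'], ['B']],
}
df = pd.DataFrame(data)

# Invert the mapping: which articles does each citing article cite?
cites = {}
for _, row in df.iterrows():
    for citer in row['citing_articles']:
        cites.setdefault(citer, set()).add(row['cited_articles'])

# Co-citation: pairs of cited articles sharing a citer; weight = number of shared citers
cocitation = nx.Graph()
for cited_set in cites.values():
    for a, b in combinations(sorted(cited_set), 2):
        w = cocitation.get_edge_data(a, b, default={'weight': 0})['weight']
        cocitation.add_edge(a, b, weight=w + 1)

# Bibliographic coupling: pairs of citing articles sharing a cited reference
coupling = nx.Graph()
for _, row in df.iterrows():
    for a, b in combinations(sorted(set(row['citing_articles'])), 2):
        w = coupling.get_edge_data(a, b, default={'weight': 0})['weight']
        coupling.add_edge(a, b, weight=w + 1)
```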
|
<python><charts><networkx><igraph><network-analysis>
|
2024-03-26 21:27:16
| 0
| 365
|
stackword_0
|
78,228,165
| 832,490
|
Untyped decorator makes function "add" untyped [misc] (celery & mypy)
|
<p>I do not own the decorator.</p>
<pre><code>from celery import Celery
app = Celery("tasks", broker="pyamqp://guest@localhost//")
@app.task
def add(x: int, y: int) -> int:
return x + y
</code></pre>
<blockquote>
<p>tasks.py:6: error: Untyped decorator makes function "add" untyped [misc]</p>
</blockquote>
<p>Why?</p>
|
<python><celery><mypy>
|
2024-03-26 21:04:25
| 0
| 1,009
|
Rodrigo
|
78,228,119
| 3,015,186
|
How to test a wheel against multiple versions of python?
|
<h3>Problem description</h3>
<p>I'm writing a python library and I am planning to upload both an sdist (.tar.gz) and a wheel to PyPI. The <a href="https://build.pypa.io/" rel="nofollow noreferrer">build docs say</a> that running</p>
<pre><code>python -m build
</code></pre>
<p>produces an sdist created from the source tree and a <em>wheel created from the sdist</em>, which is nice since the sdist gets tested here "for free". Now I want to run tests (pytest) against the wheel with multiple python versions. What is the easiest way to do that?</p>
<p>I have been using tox and I see there's an option for <a href="https://tox.wiki/en/latest/config.html#package" rel="nofollow noreferrer">setting package to "wheel"</a>:</p>
<pre><code>[testenv]
description = run the tests with pytest
package = wheel
wheel_build_env = .pkg
</code></pre>
<p>But that does not say <em>how</em> the wheel is produced; I am unsure if it</p>
<p>a) creates wheel directly from source tree<br>
b) creates wheel from sdist which is created from source tree in a way which <em>is identical to</em> <code>python -m build</code><br>
c) creates wheel from sdist which is created from source tree in a way which <em>differs from</em> <code>python -m build</code></p>
<p><em>Even if the answer would be c), the wheel tested by tox would not be the same wheel that would be uploaded, so it is not testing the correct thing. <strong>Most likely I should somehow give the wheel as an argument to tox / test runner.</strong></em></p>
<h2>Question</h2>
<p>I want to create a wheel from sdist which is created from the source tree, and I want to run unit tests against the wheel(s) with multiple python versions. This is a pure python project, so there will be only a single wheel per version of the package. What would be the idiomatic way to run the tests against the <em>same</em> wheel(s) which I would upload to PyPI? Can I use tox for that?</p>
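<p>For reference, a sketch of the flow being described (assuming tox 4, whose <code>run</code> command accepts an <code>--installpkg</code> option; the wheel filename is illustrative): build once, then hand tox the exact artifact, so the tested wheel and the uploaded wheel are the same file.</p>

```shell
python -m build
# -> dist/mypkg-1.0.tar.gz and dist/mypkg-1.0-py3-none-any.whl (illustrative names)
tox run --installpkg dist/mypkg-1.0-py3-none-any.whl
```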
|
<python><python-packaging><tox>
|
2024-03-26 20:52:20
| 1
| 35,267
|
Niko Fohr
|
78,228,053
| 5,818,059
|
How to prevent Pandas from plotting index as Period?
|
<p>A common task I have is plotting time series data and creating gray bars that denote NBER recessions. For instance, <code>recessionplot()</code> from Matlab will do exactly that. I am not aware of similar functionality in Python. Hence, I wrote the following function to automate this process:</p>
<pre><code>def add_nber_shade(ax: plt.Axes, nber_df: pd.DataFrame, alpha: float=0.2):
"""
Adds NBER recession shades to a singe plt.axes (tipically an "ax").
Args:
ax (plt.Axes): The ax you want to change with data already plotted
nber_df (pd.DataFrame): the Pandas dataframe with a "start" and an "end" column
alpha (float): transparency
Returns:
plt.Axes: returns the same axes but with shades
"""
min_year = pd.to_datetime(min(ax.lines[0].get_xdata())).year
nber_to_keep = nber_df[pd.to_datetime(nber_df["start"]).dt.year >= min_year]
for start, end in zip(nber_to_keep["start"], nber_to_keep["end"]):
ax.axvspan(start, end, color = "gray", alpha = alpha)
return ax
</code></pre>
<p>Here, <code>nber_df</code> looks like the following (copying the dictionary version):</p>
<pre><code>{'start': {0: '1857-07-01',
1: '1860-11-01',
2: '1865-05-01',
3: '1869-07-01',
4: '1873-11-01',
5: '1882-04-01',
6: '1887-04-01',
7: '1890-08-01',
8: '1893-02-01',
9: '1896-01-01',
10: '1899-07-01',
11: '1902-10-01',
12: '1907-06-01',
13: '1910-02-01',
14: '1913-02-01',
15: '1918-09-01',
16: '1920-02-01',
17: '1923-06-01',
18: '1926-11-01',
19: '1929-09-01',
20: '1937-06-01',
21: '1945-03-01',
22: '1948-12-01',
23: '1953-08-01',
24: '1957-09-01',
25: '1960-05-01',
26: '1970-01-01',
27: '1973-12-01',
28: '1980-02-01',
29: '1981-08-01',
30: '1990-08-01',
31: '2001-04-01',
32: '2008-01-01',
33: '2020-03-01'},
'end': {0: '1859-01-01',
1: '1861-07-01',
2: '1868-01-01',
3: '1871-01-01',
4: '1879-04-01',
5: '1885-06-01',
6: '1888-05-01',
7: '1891-06-01',
8: '1894-07-01',
9: '1897-07-01',
10: '1901-01-01',
11: '1904-09-01',
12: '1908-07-01',
13: '1912-02-01',
14: '1915-01-01',
15: '1919-04-01',
16: '1921-08-01',
17: '1924-08-01',
18: '1927-12-01',
19: '1933-04-01',
20: '1938-07-01',
21: '1945-11-01',
22: '1949-11-01',
23: '1954-06-01',
24: '1958-05-01',
25: '1961-03-01',
26: '1970-12-01',
27: '1975-04-01',
28: '1980-08-01',
29: '1982-12-01',
30: '1991-04-01',
31: '2001-12-01',
32: '2009-07-01',
33: '2020-05-01'}}
</code></pre>
<p>The function is simple: it retrieves the earliest plotted date, filters the given dataframe to recessions starting on or after that year, and then plots the bars. There are two main ways to call it; one works as intended, the other does not.</p>
<p><em>The way it works</em>:</p>
<pre><code>df = pd.DataFrame(np.random.randn(3000, 2), columns=list('AB'), index=pd.date_range(start='1970-01-01', periods=3000, freq='W'))
plt.figure()
plt.plot(df.index, df['A'], lw = 0.2)
add_nber_shade(plt.gca(), nber)
plt.show()
</code></pre>
<p><em>The way it does not work</em> (using Pandas to plot directly)</p>
<pre><code>plt.figure()
df.plot(y=["A"], lw = 0.2, ax = plt.gca(), legend=None)
add_nber_shade(plt.gca(), nber)
plt.show()
</code></pre>
<p>It throws out the following error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[106], line 3
1 plt.figure()
2 df.plot(y=["A"], lw = 0.2, ax = plt.gca(), legend=None)
----> 3 add_nber_shade(plt.gca(), nber)
4 plt.show()
File ~/Dropbox/Projects/SpanVol/src/spanvol/utilities.py:20, in add_nber_shade(ax, nber_df, alpha)
8 def add_nber_shade(ax: plt.Axes, nber_df: pd.DataFrame, alpha: float=0.2):
9 """
10 Adds NBER recession shades to a singe plt.axes (tipically an "ax").
11
(...)
18 plt.Axes: returns the same axes but with shades
19 """
---> 20 min_year = pd.to_datetime(min(ax.lines[0].get_xdata())).year
21 nber_to_keep = nber_df[pd.to_datetime(nber_df["start"]).dt.year >= min_year]
23 for start, end in zip(nber_to_keep["start"], nber_to_keep["end"]):
File ~/miniconda3/envs/volatility/lib/python3.11/site-packages/pandas/core/tools/datetimes.py:1146, in to_datetime(arg, errors, dayfirst, yearfirst, utc, format, exact, unit, infer_datetime_format, origin, cache)
1144 result = convert_listlike(argc, format)
1145 else:
-> 1146 result = convert_listlike(np.array([arg]), format)[0]
1147 if isinstance(arg, bool) and isinstance(result, np.bool_):
...
File tslib.pyx:552, in pandas._libs.tslib.array_to_datetime()
File tslib.pyx:541, in pandas._libs.tslib.array_to_datetime()
TypeError: <class 'pandas._libs.tslibs.period.Period'> is not convertible to datetime, at position 0
</code></pre>
<p>This is because Pandas is doing some transformation under the hood to deal with the index and is transforming it into some other class. Is there a simple way to either fix the function or some way to prevent pandas from doing it? Thanks a lot!</p>
|
<python><pandas><matplotlib><plot><time-series>
|
2024-03-26 20:37:46
| 1
| 815
|
Raul Guarini Riva
|
78,227,667
| 7,564,952
|
Get start and end time of a window based on a condition in pyspark
|
<p>I am using PySpark on Databricks.
Starting from the minimum <code>DateUpdated</code> in the dataset, I check when <code>Comp_BB_Status = 1</code>, and from that <code>DateUpdated</code> I check how long it took for <code>Comp_BB_Status</code> to change back to 0. I want to find the time windows where <code>Comp_BB_Status</code> was 1, and calculate the duration in seconds and minutes for each particular window.</p>
<p><a href="https://i.sstatic.net/ZR9pe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZR9pe.png" alt="SampleData" /></a></p>
<p>Expected output as follows:
<a href="https://i.sstatic.net/9atIF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9atIF.png" alt="expected_output" /></a></p>
<p>any kind of guidance will be appreciated. I am new to PySpark.</p>
<p>EDIT</p>
<p>Code to recreate dataframe:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, IntegerType
# Initialize SparkSession
spark_a = SparkSession.builder \
.appName("Create DataFrame") \
.getOrCreate()
# Define schema for the DataFrame
schema = StructType([
StructField("CpoSku", StringType(), True),
StructField("DateUpdated", TimestampType(), True),
StructField("CPO_BB_Status", IntegerType(), True)
])
# Define data
data = [
("AAGN7013005", "2024-01-24T05:02:06.898+00:00", 0),
("AAGN7013005", "2024-01-24T05:07:05.090+00:00", 1),
("AAGN7013005", "2024-01-24T06:42:56.825+00:00", 1),
("AAGN7013005", "2024-01-24T06:48:01.647+00:00", 1),
("AAGN7013005", "2024-01-24T07:48:18.456+00:00", 1),
("AAGN7013005", "2024-01-24T09:30:22.534+00:00", 1),
("AAGN7013005", "2024-01-24T09:36:04.075+00:00", 1),
("AAGN7013005", "2024-01-24T10:39:04.796+00:00", 1),
("AAGN7013005", "2024-01-24T10:44:01.193+00:00", 1),
("AAGN7013005", "2024-01-24T17:00:06.217+00:00", 1),
("AAGN7013005", "2024-01-24T18:07:16.612+00:00", 1),
("AAGN7013005", "2024-01-24T18:13:04.639+00:00", 0),
("AAGN7013005", "2024-01-24T21:33:03.796+00:00", 0),
("AAGN7013005", "2024-01-24T21:38:28.834+00:00", 1),
("AAGN7013005", "2024-01-24T22:35:43.995+00:00", 1),
("AAGN7013005", "2024-01-24T22:40:45.930+00:00", 0),
("AAGN7022205", "2024-01-24T04:09:30.167+00:00", 0),
("AAGN7022205", "2024-01-24T04:14:56.294+00:00", 0),
("AAGN7022205", "2024-01-24T04:53:01.281+00:00", 0),
("AAGN7022205", "2024-01-24T05:03:27.103+00:00", 0),
("AAGN7022205", "2024-01-24T05:08:05.096+00:00" ,1),
("AAGN7022205", "2024-01-24T05:53:22.652+00:00", 1),
("AAGN7022205", "2024-01-24T06:04:59.031+00:00", 1),
("AAGN7022205", "2024-01-24T06:43:04.285+00:00", 1),
("AAGN7022205", "2024-01-24T06:43:34.285+00:01", 0)
]
# Create DataFrame
df_test = spark_a.createDataFrame(data, schema=schema)
# Show DataFrame schema and preview data
df_test.printSchema()
df_test.show()
# Stop SparkSession
spark_a.stop()
</code></pre>
|
<python><python-3.x><pyspark><azure-databricks>
|
2024-03-26 19:09:46
| 2
| 455
|
irum zahra
|
78,227,504
| 525,916
|
Pandas - resample timeseries with both mean and max
|
<p>Given this sample timeseries:</p>
<pre><code> price vol
2017-01-01 08:00:00 56 1544
2017-01-01 11:00:00 70 1680
2017-01-01 14:00:00 92 1853
2017-01-02 08:00:00 94 1039
2017-01-02 11:00:00 81 1180
2017-01-02 14:00:00 70 1443
2017-01-03 08:00:00 56 1621
2017-01-03 11:00:00 68 1093
2017-01-03 14:00:00 59 1684
2017-01-04 08:00:00 86 1591
</code></pre>
<pre><code>df = df.resample('1d').mean()
</code></pre>
<p>gives the mean values and</p>
<pre><code>df = df.resample('1d').max()
</code></pre>
<p>gives the max values but is there a way to get both in a single step?</p>
<p>How do I resample this dataframe such that the output contains the daily mean of the price and volume and the daily max value of price and volume for that day?</p>
<p>Output columns should be index, price_mean, vol_mean, price_max, vol_max and the data should be daily</p>
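<p>For the record, a sketch of the usual single-step approach: pass both statistics to <code>.agg()</code> and then flatten the resulting column MultiIndex (the column order may differ slightly from the one requested above).</p>

```python
import pandas as pd

df = pd.DataFrame(
    {"price": [56, 70, 92, 94, 81, 70, 56, 68, 59, 86],
     "vol": [1544, 1680, 1853, 1039, 1180, 1443, 1621, 1093, 1684, 1591]},
    index=pd.to_datetime([
        "2017-01-01 08:00", "2017-01-01 11:00", "2017-01-01 14:00",
        "2017-01-02 08:00", "2017-01-02 11:00", "2017-01-02 14:00",
        "2017-01-03 08:00", "2017-01-03 11:00", "2017-01-03 14:00",
        "2017-01-04 08:00"]))

out = df.resample("1d").agg(["mean", "max"])
out.columns = [f"{col}_{stat}" for col, stat in out.columns]   # flatten the MultiIndex
```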
|
<python><pandas><time-series>
|
2024-03-26 18:32:59
| 2
| 4,099
|
Shankze
|
78,227,479
| 955,273
|
Add a column to a polars LazyFrame based on a group-by aggregation of another column
|
<p>I have a LazyFrame of <code>time</code>, <code>symbols</code> and <code>mid_price</code>:</p>
<p>Example:</p>
<pre class="lang-none prettyprint-override"><code>time symbols mid_price
datetime[ns] str f64
2024-03-01 00:01:00 "PERP_SOL_USDT@… 126.1575
2024-03-01 00:01:00 "PERP_WAVES_USD… 2.71235
2024-03-01 00:01:00 "SOL_USDT@BINAN… 126.005
2024-03-01 00:01:00 "WAVES_USDT@BIN… 2.7085
2024-03-01 00:02:00 "PERP_SOL_USDT@… 126.3825
</code></pre>
<p>I want to perform some aggregations over the time dimension (i.e. group by <code>symbols</code>):</p>
<pre class="lang-py prettyprint-override"><code>aggs = (
df
.group_by('symbols')
.agg([
pl.col('mid_price').diff(1).alias("change"),
])
)
</code></pre>
<p>I get back a <code>list</code> of each value per unique <code>symbols</code> value:</p>
<pre class="lang-none prettyprint-override"><code>>>> aggs.head().collect()
symbols change
str list[f64]
"SOL_USDT@BINAN… [null, 0.25, … -0.55]
"PERP_SOL_USDT@… [null, 0.225, … -0.605]
"WAVES_USDT@BIN… [null, -0.002, … -0.001]
"PERP_WAVES_USD… [null, -0.00255, … 0.0001]
</code></pre>
<p>I would now like to join this back onto my original dataframe:</p>
<pre class="lang-py prettyprint-override"><code>df = df.join(
aggs,
on='symbols',
how='left',
)
</code></pre>
<p>This now results in each row getting the full list of <code>change</code>, rather than the respective value.</p>
<pre class="lang-none prettyprint-override"><code>>>> df.head().collect()
time symbols mid_price change
datetime[ns] str f64 list[f64]
2024-03-01 00:01:00 "PERP_SOL_USDT@… 126.1575 [null, 0.225, … -0.605]
2024-03-01 00:01:00 "PERP_WAVES_USD… 2.71235 [null, -0.00255, … 0.0001]
2024-03-01 00:01:00 "SOL_USDT@BINAN… 126.005 [null, 0.25, … -0.55]
2024-03-01 00:01:00 "WAVES_USDT@BIN… 2.7085 [null, -0.002, … -0.001]
2024-03-01 00:02:00 "PERP_SOL_USDT@… 126.3825 [null, 0.225, … -0.605]
</code></pre>
<p>I have 2 questions please:</p>
<ol>
<li>How do I <em>unstack/explode</em> the lists returned from my <code>group_by</code> when joining them back into the original dataframe?</li>
<li>Is this the recommended way to add a new column to my original dataframe from a <code>group_by</code> (that is: <code>group_by</code> followed by <code>join</code>)?</li>
</ol>
|
<python><dataframe><window-functions><python-polars>
|
2024-03-26 18:27:48
| 1
| 28,956
|
Steve Lorimer
|
78,227,408
| 4,579,980
|
TypeError: str expected, not NoneType - Docker run problem with .env file in python
|
<p>I have a python RAG script that works fine locally (Windows).</p>
<p>My understanding is that the problem is related to the .env file when it is called in basic_rag.py</p>
<pre><code>os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY')
</code></pre>
<p>When I run it with docker:</p>
<pre><code>TypeError: str expected, not NoneType
</code></pre>
<p>This is the file structure:</p>
<pre><code>APP
chroma
data
text.pdf
.env
.gitignore
basic_rag
Dockerfile
requirements.txt
style
</code></pre>
<p>The Dockerfile code is:</p>
<pre><code>FROM python:3.8-slim
WORKDIR /rag-test
COPY ./data ./data/
COPY ./basic_rag.py ./main.py
COPY ./requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8080
CMD ["python", "main.py"]
</code></pre>
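<p>(Note that the Dockerfile above never copies <code>.env</code> into the image, and <code>docker run</code> does not read it automatically. A sketch of how the variables could be supplied at run time — the image tag is illustrative:)</p>

```shell
# pass the variables from .env into the container's environment at run time
docker run --env-file .env -p 8080:8080 rag-test
```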
<p>I am new to Docker and cannot find a solution for this error.</p>
<p>Thanks for the help.</p>
|
<python><windows><docker><operating-system><.env>
|
2024-03-26 18:12:55
| 1
| 655
|
Diego
|
78,227,322
| 48,956
|
How can I interrupt or send signals to python Threads without cooperation from the thread?
|
<p>Previously I thought it wasn't possible to non-cooperatively interrupt a Thread in Python. An example might be the main thread asking to immediately cancel other threads in the process. But, from the signal docs:</p>
<blockquote>
<p>Python signal handlers are always executed in the main Python thread
of the main interpreter, even if the signal was received in another
thread. This means that signals can't be used as a means of
inter-thread communication.</p>
</blockquote>
<p>... however ...</p>
<p><a href="https://youtu.be/9OOJcTp8dqE?si=_GVmIcCGX9JTawXZ&t=1872" rel="nofollow noreferrer">https://youtu.be/9OOJcTp8dqE?si=_GVmIcCGX9JTawXZ&t=1872</a></p>
<p>suggests it is:</p>
<blockquote>
<p>[…] Python already had a mechanism for requesting threads to handle asynchronous signals and exceptions […]</p>
</blockquote>
<p>I see Java has Thread.interrupt(), but not Python.</p>
<p>So, is it possible now in Python or not?</p>
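<p>For context, the mechanism alluded to in the talk appears to be CPython's <code>PyThreadState_SetAsyncExc</code>, reachable via <code>ctypes</code> — a sketch (CPython-specific; the exception is only delivered when the target thread next executes Python bytecode, so it cannot interrupt a blocking C call):</p>

```python
import ctypes
import threading
import time


def raise_in_thread(thread: threading.Thread, exc_type=SystemExit) -> None:
    """Asynchronously schedule exc_type in `thread` (CPython only)."""
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_ulong(thread.ident), ctypes.py_object(exc_type))
    if res == 0:
        raise ValueError("invalid thread id")
    if res > 1:  # roll back if more than one thread state was affected
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_ulong(thread.ident), None)
        raise RuntimeError("PyThreadState_SetAsyncExc failed")


interrupted = []

def worker():
    try:
        while True:
            time.sleep(0.01)   # the exception lands between bytecode steps
    except SystemExit:
        interrupted.append(True)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)
raise_in_thread(t)
t.join(timeout=5)
```

<p>This is closer to Java's <code>Thread.interrupt()</code> than signals are, but it is an interpreter-internal API, so it is usually considered a last resort compared with cooperative flags.</p>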
|
<python><python-multithreading>
|
2024-03-26 17:57:23
| 1
| 15,918
|
user48956
|
78,227,199
| 4,549,682
|
Cannot prevent azure function Python cold starts
|
<p>I cannot figure out how to completely stop Azure Function cold starts on the dedicated or premium plans. I have an ML function returning a prediction on input data. What I've tried:</p>
<ul>
<li>using the Python V2 model so all code is in the same .py file (e.g. all dependencies loaded together)</li>
<li>warmupTrigger (also calls the predict function so that the model is loaded/warmed up)</li>
<li>swap warmup trigger (for preventing cold starts during swaps)</li>
<li>health check, using different variations of the parameters for <code>WEBSITE_HEALTHCHECK_MAXPINGFAILURES</code> and <code>WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT</code></li>
<li>adjusting <code>FUNCTIONS_WORKER_PROCESS_COUNT</code> and <code>PYTHON_THREADPOOL_THREAD_COUNT</code></li>
<li>using async for the prediction</li>
<li>prewarmed instances on the premium plan</li>
</ul>
<p>I added a UUID in the function app script, so that each Python worker has a UUID I can track. I was using the EP3 plan with 4 Python workers (<code>FUNCTIONS_WORKER_PROCESS_COUNT=4</code>), and I can see that the warmup function is only being called on 2 of the workers upon scale-out. This means on the 2 workers that are not warm, the first few calls take 5-10s to run, which is a cold start.</p>
<p>I also tried setting <code>FUNCTIONS_WORKER_PROCESS_COUNT=1</code>, using async for the function call, then setting the thread count to 4. However, this didn't utilize the vCPUs as expected, so the scaling didn't work well.</p>
<p>What else can I do to fully prevent cold-starts in a Python azure function with multiple python workers? I noticed there was some shared memory preview feature, but not sure if that could solve the issue, or if something else could.</p>
|
<python><azure><azure-functions>
|
2024-03-26 17:31:13
| 1
| 16,136
|
wordsforthewise
|
78,227,162
| 1,592,380
|
What is the unit of elevation in glo_30 Digital elevation map data
|
<p><a href="https://i.sstatic.net/B1qOZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B1qOZ.png" alt="enter image description here" /></a></p>
<p>I'm trying to learn how to create elevation contour lines on a map in python following some of the ideas in <a href="https://medium.com/analytics-vidhya/creating-contour-lines-on-folium-map-with-python-b7994e67924b" rel="nofollow noreferrer">https://medium.com/analytics-vidhya/creating-contour-lines-on-folium-map-with-python-b7994e67924b</a> .</p>
<p>Using <a href="https://pypi.org/project/dem-stitcher/" rel="nofollow noreferrer">https://pypi.org/project/dem-stitcher/</a> I have the following code to download DEM data within a bounding box:</p>
<pre><code>bounds = bounding_box_naive(coordinates)
print(bounds)
flattened_bounds = [coord for point in bounds for coord in point]
print(flattened_bounds)
X, p = stitch_dem(flattened_bounds,
dem_name=dem_name,
dst_ellipsoidal_height=dst_ellipsoidal_height,
dst_area_or_point=dst_area_or_point)
</code></pre>
<p>I then plotted it with:</p>
<pre><code>fig, ax = plt.subplots(figsize=(8, 8))
ax = plot.show(X, transform=p['transform'], ax=ax)
ax.set_xlabel('Longitude', size=15)
ax.set_ylabel('Latitude', size=15)
</code></pre>
<p>followed by:</p>
<pre><code>height_type = 'ellipsoidal' if dst_ellipsoidal_height else 'geoid'
with rasterio.open(out_directory / f'{dem_name}_{height_type}_{dst_area_or_point}.tif', 'w', **p) as ds:
ds.write(X, 1)
ds.update_tags(AREA_OR_POINT=dst_area_or_point)
</code></pre>
<p>I eventually end up with data in a dataframe that looks like:</p>
<pre><code> longitude latitude elevation
1 -88.432778 31.661389 72.399994
2 -88.432500 31.661389 74.471750
3 -88.432222 31.661389 76.551100
4 -88.431944 31.661389 76.705060
5 -88.431667 31.661389 77.992940
.. ... ... ...
212 -88.431944 31.654167 79.082660
213 -88.431667 31.654167 79.882225
214 -88.431389 31.654167 78.264725
215 -88.431111 31.654167 79.834656
216 -88.430833 31.654167 79.141390
[216 rows x 3 columns]
</code></pre>
<p>I can understand the latitude and longitude, but what unit is the elevation in?</p>
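<p>In case it matters for the answer: my next step will be pivoting this long-format dataframe back into a 2-D grid for contouring. A minimal sketch with made-up sample values (assuming, as I suspect, that the elevations are in metres):</p>

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Hypothetical sample mimicking the dataframe above
lons = np.round(np.linspace(-88.4328, -88.4308, 8), 6)
lats = np.round(np.linspace(31.6542, 31.6614, 27), 6)
lon_g, lat_g = np.meshgrid(lons, lats)
elev = 75 + 3 * np.sin(50 * lon_g) * np.cos(50 * lat_g)  # "metres"
df = pd.DataFrame({"longitude": lon_g.ravel(),
                   "latitude": lat_g.ravel(),
                   "elevation": elev.ravel()})

# Long format -> 2-D grid (rows: latitude, columns: longitude)
grid = df.pivot(index="latitude", columns="longitude", values="elevation")
cs = plt.contour(grid.columns, grid.index, grid.values, levels=10)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
print(grid.shape)  # (27, 8)
```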
|
<python><jupyter-notebook><leaflet><gis><folium>
|
2024-03-26 17:25:56
| 0
| 36,885
|
user1592380
|
78,227,144
| 13,491,504
|
Simulation of a Pendulum hanging on a spinning Disk
|
<p>Can anybody get this code to run? I know that it is very long and maybe not easy to understand, but I am trying to write a simulation for a problem that I have already posted here: <a href="https://math.stackexchange.com/questions/4876146/pendulum-hanging-on-a-spinning-disk">https://math.stackexchange.com/questions/4876146/pendulum-hanging-on-a-spinning-disk</a></p>
<p>I want to make a nice simulation that looks like the one in the answer to the linked question. The picture in that answer was produced with Mathematica, and I had no idea how to translate it. Hope you can help me finish this up.</p>
<p>There are two code snippets: one derives the second-order ODE system, and one solves the system and plots it. When you plot the solution, you can see that the trajectory is not doing what it is supposed to. I don't know where the mistake is, but hopefully you can help.</p>
<p>Here are the two snippets:</p>
<p>Here are the two snippets:</p>
<pre><code>import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from IPython.display import display
from sympy.vector import CoordSys3D
from scipy.integrate import solve_ivp
def FindDGL():
N = CoordSys3D('N')
e = N.i + N.j + N.k
t = sp.symbols('t')
x = sp.symbols('x')
y = sp.symbols('y')
z = sp.symbols('z')
x = sp.Function('x')(t)
y = sp.Function('y')(t)
z = sp.Function('z')(t)
p = x*N.i + y*N.j + z*N.k
m = sp.symbols('m')
g = sp.symbols('g')
r = sp.symbols('r')
omega = sp.symbols('omega')
q0 = sp.symbols('q0')
A = sp.symbols('A')
l = sp.symbols('l')
xl = sp.symbols('xl')
yl = sp.symbols('yl')
zl = sp.symbols('zl')
dpdt = sp.diff(x,t)*N.i + sp.diff(y,t)*N.j + sp.diff(z,t)*N.k
#Zwang = ((p-q)-l)
Zwang = (p.dot(N.i)**2*N.i +p.dot(N.j)**2*N.j +p.dot(N.k)**2*N.k - 2*r*(p.dot(N.i)*N.i*sp.cos(omega*t)+p.dot(N.j)*N.j*sp.sin(omega*t))-2*q0*(p.dot(N.k)*N.k) + r**2*(N.i*sp.cos(omega*t)**2+N.j*sp.sin(omega*t)**2)+q0**2*N.k) - l**2*N.i - l**2*N.j -l**2*N.k
display(Zwang)
dpdtsq = dpdt.dot(N.i)**2*N.i + dpdt.dot(N.j)**2*N.j + dpdt.dot(N.k)**2*N.k
#La = 0.5 * m * dpdtsq - m * g * (p.dot(N.k)*N.k) + (ZwangA*A)
L = 0.5 * m * dpdtsq + m * g * (p.dot(N.k)*N.k) - Zwang*A
#display(La)
display(L)
Lx = L.dot(N.i)
Ly = L.dot(N.j)
Lz = L.dot(N.k)
Elx = sp.diff(sp.diff(Lx,sp.Derivative(x,t)), t) + sp.diff(Lx,x)
Ely = sp.diff(sp.diff(Ly,sp.Derivative(y,t)), t) + sp.diff(Ly,y)
Elz = sp.diff(sp.diff(Lz,sp.Derivative(z,t)), t) + sp.diff(Lz,z)
display(Elx)
display(Ely)
display(Elz)
ZwangAV = (sp.diff(Zwang, t, 2))/2
display(ZwangAV)
ZwangA = ZwangAV.dot(N.i)+ZwangAV.dot(N.j)+ZwangAV.dot(N.k)
display(ZwangA)
Eq1 = sp.Eq(Elx,0)
Eq2 = sp.Eq(Ely,0)
Eq3 = sp.Eq(Elz,0)
Eq4 = sp.Eq(ZwangA,0)
LGS = sp.solve((Eq1,Eq2,Eq3,Eq4),(sp.Derivative(x,t,2),sp.Derivative(y,t,2),sp.Derivative(z,t,2),A))
#display(LGS)
#display(LGS[sp.Derivative(x,t,2)].free_symbols)
#display(LGS[sp.Derivative(y,t,2)].free_symbols)
#display(LGS[sp.Derivative(z,t,2)])
XS = LGS[sp.Derivative(x,t,2)]
YS = LGS[sp.Derivative(y,t,2)]
ZS = LGS[sp.Derivative(z,t,2)]
#t_span = (0, 10)
dxdt = sp.symbols('dxdt')
dydt = sp.symbols('dydt')
dzdt = sp.symbols('dzdt')
#t_eval = np.linspace(0, 10, 100)
XSL = XS.subs({ sp.Derivative(y,t):dydt, sp.Derivative(z,t):dzdt, sp.Derivative(x,t):dxdt, x:xl , y:yl , z:zl})
YSL = YS.subs({ sp.Derivative(y,t):dydt, sp.Derivative(z,t):dzdt, sp.Derivative(x,t):dxdt, x:xl , y:yl , z:zl})
ZSL = ZS.subs({ sp.Derivative(y,t):dydt, sp.Derivative(z,t):dzdt, sp.Derivative(x,t):dxdt, x:xl , y:yl , z:zl})
#display(ZSL.free_symbols)
XSLS = str(XSL)
YSLS = str(YSL)
ZSLS = str(ZSL)
replace = {"xl":"x","yl":"y","zl":"z","cos":"np.cos", "sin":"np.sin",}
for old, new in replace.items():
XSLS = XSLS.replace(old, new)
for old, new in replace.items():
YSLS = YSLS.replace(old, new)
for old, new in replace.items():
ZSLS = ZSLS.replace(old, new)
return[XSLS,YSLS,ZSLS]
Result = FindDGL()
print(Result[0])
print(Result[1])
print(Result[2])
</code></pre>
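<p>As a side note, the string-replacement step at the end (turning <code>cos</code> into <code>np.cos</code> and so on) could be avoided entirely: <code>sympy.lambdify</code> compiles an expression into a NumPy-backed function directly. A minimal sketch with a toy expression, not the full ODE above:</p>

```python
import numpy as np
import sympy as sp

# lambdify maps SymPy functions to their NumPy equivalents automatically,
# so no fragile text substitution is needed.
t, x, r, omega = sp.symbols('t x r omega')
expr = r * sp.cos(omega * t) - x
f = sp.lambdify((t, x, r, omega), expr, modules='numpy')
print(f(0.0, 0.2, 0.5, 1.2))  # -> 0.3
```

<p>The resulting <code>f</code> also accepts NumPy arrays, which is convenient for feeding a whole time grid at once.</p>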
<p>Here is the second one:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
from mpl_toolkits.mplot3d import Axes3D
def Q(t):
    # suspension point on the spinning disk; uses the global r, omega, q0
    # (the original shadowed omega with a local omega = 1, so the plotted
    # disk path spun at a different rate than the ODE, which uses 1.2)
    return r * (np.cos(omega * t) * np.array([1, 0, 0]) + np.sin(omega * t) * np.array([0, 1, 0])) + np.array([0, 0, q0])
def equations_of_motion(t, state, r, omega, q0, l):
x, y, z, xp, yp, zp = state
dxdt = xp
dydt = yp
dzdt = zp
dxpdt = dxdt**2*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dxdt**2*x/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dxdt*omega*r**2*np.sin(omega*t)*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dxdt*omega*r*x*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dydt**2*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dydt**2*x/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dydt*omega*r**2*np.cos(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dydt*omega*r*x*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dzdt**2*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dzdt**2*x/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + g*q0*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*q0*x/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 
2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*r*z*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + g*x*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*r**2*x*np.cos(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*r**2*y*np.sin(omega*t)*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*x**2*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*x*y*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2)
dypdt = dxdt**2*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dxdt**2*y/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dxdt*omega*r**2*np.sin(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dxdt*omega*r*y*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dydt**2*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dydt**2*y/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dydt*omega*r**2*np.sin(omega*t)*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dydt*omega*r*y*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dzdt**2*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dzdt**2*y/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + g*q0*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*q0*y/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 
2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*r*z*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + g*y*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*r**2*x*np.sin(omega*t)*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*r**2*y*np.sin(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*x*y*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*y**2*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2)
dzpdt = dxdt**2*q0/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dxdt**2*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dxdt*omega*q0*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dxdt*omega*r*z*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dydt**2*q0/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dydt**2*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dydt*omega*q0*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dydt*omega*r*z*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dzdt**2*q0/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dzdt**2*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*r**2*np.sin(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*r**2*np.cos(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 
+ y**2 + z**2) + 2.0*g*r*x*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*g*r*y*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*x**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*y**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*q0*r*x*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*q0*r*y*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*x*z*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*y*z*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2)
return [dxdt, dydt, dzdt, dxpdt, dypdt, dzpdt]
r = 0.5
omega = 1.2
q0 = 1
l = 1
g = 9.81
#{x[0] == r, y[0] == x'[0] == y'[0] == z'[0] == 0, z[0] == q0 - l}
initial_conditions = [r, 0, 0, 0, 0, q0-l]
tmax = 200
solp = solve_ivp(equations_of_motion, [0, tmax], initial_conditions, args=(r, omega, q0, l), dense_output=True)#, method='DOP853')
t_values = np.linspace(0, tmax, 1000)
p_values = solp.sol(t_values)
print(p_values.size)
d =0.5
Qx = [Q(ti)[0] for ti in t_values]
Qy = [Q(ti)[1] for ti in t_values]
Qz = [Q(ti)[2] for ti in t_values]
fig = plt.figure(figsize=(20, 16))
ax = fig.add_subplot(111, projection='3d')
ax.plot(p_values[0], p_values[1], p_values[2], color='blue')
ax.scatter(r, 0, q0-l, color='red')
ax.plot([0, 0], [0, 0], [0, q0], color='green')
ax.plot(Qx, Qy, Qz, color='purple')
#ax.set_xlim(-d, d)
#ax.set_ylim(-d, d)
#ax.set_zlim(-d, d)
ax.view_init(30, 45)
plt.show()
</code></pre>
|
<python><validation><math><sympy>
|
2024-03-26 17:23:18
| 1
| 637
|
Mo711
|