QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
79,188,730
| 1,398,979
|
Does polars load the entire parquet into the memory if we want to retrive certain column?
|
<p>I am new to the data science. I am working on the Polars to read the parquet files. Total size of all these parquet files is 240 GB. I have an EC2 machine with 64 GB and 8 vCP.</p>
<p>I was under the assumption that as Parquet is a columnar file format so whenever I retrieve columns from the Parquet files then it doesn't need to load the entire file into the memory and only loads the required columns. (As a noob I am not sure how it works)</p>
<p>But today when I tried to load 3 columns with the total size of 600 MB (Total column size) then Memory usage went through the roof. It consumed the entire 64 GB of RAM.</p>
<p>I am not able to find any documentation about the lifecycle of loading parquet files into the polars and how it reads the column.</p>
<p>Can someone explain me how this works or point me to good documentation</p>
<p>Here is the code</p>
<pre><code>import polars as pl
import pyarrow.parquet as pq
# Directory containing the Parquet files
directory = '/home/ubuntu/parquet_files/'
# Load data using Polars
df = pl.scan_parquet(directory)
grouped_df = df.select([
pl.col("L_SHIPDATE").alias("L_SHIPDATE"),
pl.col("L_LINESTATUS").alias("L_LINESTATUS"),
pl.col("L_RETURNFLAG").alias("L_RETURNFLAG")
]).collect(streaming=True)
</code></pre>
|
<python><dataframe><data-science><parquet><python-polars>
|
2024-11-14 12:25:42
| 1
| 917
|
Bhaskar Dabhi
|
79,188,715
| 9,530,017
|
Adjusting axe position after applying constrained_layout
|
<p>Using constrained_layout, I can get this plot:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
N = 200
var_xx = 1**2 # var x = std x squared
var_yy = 1**2
cov_xy = 0.5
cov = np.array([[var_xx, cov_xy], [cov_xy, var_yy]])
rng = np.random.default_rng()
pairs = rng.multivariate_normal([0, 0], cov, size=N, check_valid="raise")
mosaic = [[".", "top"], ["left", "main"]]
fig, axarr = plt.subplot_mosaic(mosaic, constrained_layout=True, width_ratios=[0.5, 1], height_ratios=[0.5, 1])
axarr["main"].scatter(pairs[:, 0], pairs[:, 1], alpha=0.5)
axarr["top"].hist(pairs[:, 0], bins=20)
axarr["left"].hist(pairs[:, 0], bins=20, orientation="horizontal")
axarr["left"].sharey(axarr["main"])
axarr["top"].sharex(axarr["main"])
axarr["top"].tick_params(labelbottom=False)
axarr["main"].tick_params(labelleft=False)
ticklabels = axarr["top"].get_yticklabels()
axarr["main"].set_xlabel("x")
axarr["left"].set_ylabel("y")
axarr["left"].set_xlabel("PDF")
axarr["top"].set_ylabel("PDF")
</code></pre>
<p><a href="https://i.sstatic.net/ZseQZ8mS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZseQZ8mS.png" alt="enter image description here" /></a></p>
<p>The horizontal spacing between the subplots is larger than the vertical one, due to constrained layout leaving space for the tick and axis labels of the top subplot. I would like to ignore this and reduce the horizontal spacing to the same as the vertical one.</p>
<p>One approach I tried was to set the main axe position after, hence by adding this at the end of the code:</p>
<pre class="lang-py prettyprint-override"><code>pos_main = axarr["main"].get_position().transformed(fig.dpi_scale_trans)
pos_top = axarr["top"].get_position().transformed(fig.dpi_scale_trans)
pos_left = axarr["left"].get_position().transformed(fig.dpi_scale_trans)
space = pos_top.ymin - pos_main.ymax
pos_main.update_from_data_x([pos_left.xmax + space, pos_main.xmax])
axarr["main"].set_position(pos_main.transformed(fig.dpi_scale_trans.inverted()))
</code></pre>
<p>However, this disables completely <code>constrained_layout</code> for this axe, hence leading to poor results.</p>
<p><a href="https://i.sstatic.net/7F8w09eK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7F8w09eK.png" alt="enter image description here" /></a></p>
<p>How should I first apply <code>constrained_layout</code>, then disable it, and adjust the axis positions?</p>
|
<python><matplotlib>
|
2024-11-14 12:19:25
| 2
| 1,546
|
Liris
|
79,188,710
| 19,003,861
|
Django - Custom Error Templates not rendering
|
<p>I have some custom templates in my django app. But these are not being rendered and instead displays the basic template:</p>
<pre><code>500 Internal Server Error
Exception inside application.
Daphne
</code></pre>
<p>Edit: If I move my dev.py (used on local to DEBUG = False), then the template is rendered.</p>
<p>This is my setup:</p>
<p><strong>Views.py</strong> (app level: main):</p>
<pre><code>def custom_500(request):
return render(request,'main/500.html', status=500)
</code></pre>
<p><strong>templates</strong> (app level: main with path: templates->main->500html)</p>
<pre><code>500.html custom template file
</code></pre>
<p><strong>urls.py</strong> (project level)</p>
<pre><code>from main.views import custom_400, custom_500
from django.conf.urls import handler500, handler400
handler500 = 'main.views.custom_500'
</code></pre>
<p>settings (folder)
<strong>staging.py:</strong></p>
<pre><code>DEBUG = False
ALLOWED_HOSTS = ['domainname.com' ]
SECURE_SSL_REDIRECT = True
</code></pre>
<p>I also have a base.py in my setting folder, but cannot see anything I should report on there.</p>
<p>I tried to list all the checks I have tried:</p>
<p>In heroku app shell:</p>
<pre><code>print(settings.ALLOWED_HOSTS) # returned the domain name
print(settings.DEBUG) # returned false
print(os.getenv('DJANGO_SETTINGS_MODULE')) # mysite.settings.staging
print(settings.CSRF_COOKIE_SECURE) # true (just in case I would the app would be treated as non prod)
print(settings.SECURE_SSL_REDIRECT) # true (just in case I would the app would be treated as non prod)
</code></pre>
<p>and finally:</p>
<pre><code>>>> from django.shortcuts import render
>>> from django.test import RequestFactory
>>> request = RequestFactory().get('/')
>>> render(request,'main/500.html', status=500)
#returned: <HttpResponse status_code=500, "text/html; charset=utf-8">
</code></pre>
<p>I am running out of idea, and I am sure its probably something simple.</p>
<p>I am hoping someone may have some suggestion.</p>
|
<python><django><heroku>
|
2024-11-14 12:18:13
| 1
| 415
|
PhilM
|
79,188,702
| 1,398,979
|
How does polars load parquet files into dataframe?
|
<p>I am trying to load a parquet files where total size of the data (Total size of the parquet files) is 240 GB. I calculated the size of the columns using duckdb using the following <a href="https://stackoverflow.com/a/74267046/1398979">query</a>:</p>
<pre><code>import duckdb
con = duckdb.connect(database=':memory:')
print(con.execute("""SELECT SUM(total_compressed_size) AS
total_compressed_size_in_bytes, SUM(total_uncompressed_size) AS
total_uncompressed_size_in_bytes, path_in_schema AS column_name from
parquet_metadata('D:\\dev\\tmp\\parq_dataset\\*') GROUP BY path_in_schema""").df())
</code></pre>
<p>Through this query, I am finding out that one column named <code>L_LINENUMBER</code> has size of 4 GB when compressed and 4.5 GB when uncompressed.</p>
<p>When I am doing following polars and trying to load that into the polars dataframe then size of the column is coming as 91 GB</p>
<pre><code>import polars as pl
df = pl.scan_parquet("/home/ubuntu/parquet_files/")
selected_df = df.select("L_LINENUMBER")
df_size = selected_df.collect(streaming=True).estimated_size("mb")
print(df_size) ## Coming as 91 GB
</code></pre>
<p>I want to know why so much change when it is loaded into data frame? I can understand that it may take 1x or 2x but it is going two digit-x</p>
<p>Do you have any idea if these look like the correct numbers? I am not able to find explanation of this except for one thread there it says that it <code>Return an estimation of the total (heap) allocated size of the DataFrame.</code></p>
|
<python><dataframe><parquet><python-polars><duckdb>
|
2024-11-14 12:15:40
| 0
| 917
|
Bhaskar Dabhi
|
79,188,659
| 1,021,060
|
Host server that runs python script with preloaded data
|
<p>I have a python script (let's suppose it's a single file) that has 3 sections:<br />
(1) load external libraries<br />
(2) load a large file's contents<br />
(3) algorithm<br />
While section (3) takes 100 milliseconds, sections (1) and (2) take 10 seconds.</p>
<p>I have a C# program that executes the Python script above:</p>
<pre><code> Process process = new Process();
process.StartInfo = new ProcessStartInfo(pythonPath, fileNameWithArg)
{
RedirectStandardOutput = true,
CreateNoWindow = true,
UseShellExecute = false,
RedirectStandardError = true
};
process.Start();
string result = process.StandardOutput.ReadToEnd();
</code></pre>
<p>I am passing some arguments and waiting for a response from the Python script, as you can see above.</p>
<p>How can I make my implementation better so that my Python script's sections (1) and (2) don't have to load with each subsequent call? I am only interested in the actual algorithm (section 3) because sections (1) and (2) are constant. Therefore, rather than waiting an additional 10 seconds for sections (1) and (2) to finish, I would prefer to call my Python script and receive a response after 100ms.I am happy for the first call to python script to take 10 second +, but every consecutive call should take 100 ms only.</p>
|
<python><c#><.net><server><python.net>
|
2024-11-14 11:59:21
| 1
| 360
|
Jack
|
79,188,601
| 3,732,793
|
Most simple config not working for locust
|
<p>This command works fine</p>
<pre><code>locust --headless --users 10 --spawn-rate 1 -H http://localhost:3000
</code></pre>
<p>locustfile.py looks like that</p>
<pre><code>from locust import HttpUser, task
class HelloWorldUser(HttpUser):
@task
def hello_world(self):
self.client.get("/health")
</code></pre>
<p>but putting the same code in a local.py and this to a local.conf</p>
<pre><code>locustfile = local.py
headless = true
master = true
expect-workers = 3
host = "http://localhost:3000"
users = 3
spawn-rate = 1
run-time = 1m
</code></pre>
<p>that command runs but does not bring back any results</p>
<pre><code>locust --config local.conf
</code></pre>
<p>any idea why ?</p>
|
<python><locust>
|
2024-11-14 11:43:49
| 1
| 1,990
|
user3732793
|
79,188,565
| 250,962
|
How to update requirements.txt file using uv
|
<p>I'm using <a href="https://github.com/astral-sh/uv" rel="noreferrer">uv</a> to manage my Python environment locally, but my production site still uses pip. So when I update packages locally (from <code>pyproject.toml</code>, updating the <code>uv.lock</code> file) I also need to generate a new <code>requirements.txt</code> file. But I can't get that to contain the latest versions.</p>
<p>For example, I recently upgraded packages to the latest versions:</p>
<pre class="lang-none prettyprint-override"><code>uv lock --upgrade
</code></pre>
<p>That command's output included the line:</p>
<pre class="lang-none prettyprint-override"><code>Updated dj-database-url v2.2.0 -> v2.3.0
</code></pre>
<p>And the <code>uv.lock</code> file now contains this, as expected:</p>
<pre class="lang-ini prettyprint-override"><code>[[package]]
name = "dj-database-url"
version = "2.3.0"
...
</code></pre>
<p>I thought that this command would then update my <code>requirements.txt</code> file:</p>
<pre class="lang-none prettyprint-override"><code>uv pip compile pyproject.toml --quiet --output-file requirements.txt
</code></pre>
<p>But when I run that, <code>requirements.txt</code> still specifies the previous version:</p>
<pre class="lang-none prettyprint-override"><code>dj-database-url==2.2.0 \
--hash=...
</code></pre>
<p>What am I missing?</p>
|
<python><uv>
|
2024-11-14 11:32:47
| 4
| 15,166
|
Phil Gyford
|
79,188,493
| 2,950,747
|
Why does the TSP in NetworkX not return the shortest path?
|
<p>I'm trying to use NetworkX's <code>traveling_salesman_problem</code> to find the shortest path between nodes, but it seems to return a longer path than is necessary. Here's a minimal example:</p>
<pre><code>import shapely
import networkx as nx
import matplotlib.pyplot as plt
# Make a 10x10 grid
vert = shapely.geometry.MultiLineString([[(x, 0), (x, 100)] for x in range(0, 110, 10)])
hori = shapely.affinity.rotate(vert, 90)
grid = shapely.unary_union([vert, hori])
# Turn it into a graph
graph = nx.Graph()
graph.add_edges_from([(*line.coords, {"distance": line.length}) for line in grid.geoms])
# Select nodes and visit them via TSP and manually
nodes = [(20., 20.), (30., 30.), (20., 80.), (80., 20.), (50., 50.), (60., 10.), (40., 40.), (50., 40.), (50, 30)]
tsp_path = nx.approximation.traveling_salesman_problem(
graph,
weight="distance",
nodes=nodes,
cycle=False,
method=nx.approximation.christofides
)
tsp_path = shapely.geometry.LineString(tsp_path)
manual_path = shapely.geometry.LineString([(20, 80), (50, 80), (50, 30), (40, 30), (40, 40), (40, 30), (20, 30), (20, 20), (60, 20), (60, 10), (60, 20), (80, 20)])
# Plot results
fig, axes = plt.subplots(1, 2, figsize=(10, 5), sharey=True)
for ax in axes:
for line in grid.geoms:
ax.plot(*line.xy, c="k", lw=.25)
ax.scatter(*zip(*nodes), c="k")
ax.set_aspect("equal")
axes[0].plot(*tsp_path.xy, c="b")
axes[0].set_title(f"TSP solution length={tsp_path.length}")
axes[1].plot(*manual_path.xy, c="r")
axes[1].set_title(f"manual length={manual_path.length}")
</code></pre>
<p>What am I missing? Is the TSP the wrong algorithm for this?</p>
<p><a href="https://i.sstatic.net/9QiWyhcK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QiWyhcK.png" alt="Two alternative paths joining the selected nodes in a graph" /></a></p>
<p><strong>Edit to return to origin:</strong></p>
<p>If I run <code>traveling_salesman_problem</code> with <code>cycle=True</code> to make the route return to the origin node, and change my manual route to:</p>
<pre><code>manual_path = shapely.geometry.LineString([(20, 80), (50, 80), (50, 30), (40, 30), (40, 40), (40, 30), (20, 30), (20, 20), (60, 20), (60, 10), (60, 20), (80, 20), (80, 80), (20, 80)])
</code></pre>
<p>I get the (longer) below left from NetworkX and the (shorter) below right for my manual route:</p>
<p><a href="https://i.sstatic.net/oJzIYRBA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJzIYRBA.png" alt="enter image description here" /></a></p>
|
<python><networkx>
|
2024-11-14 11:09:40
| 1
| 725
|
user2950747
|
79,188,419
| 13,762,083
|
Numerical instability in forward-backward algorithm for Hidden Markov Models
|
<p>I am implementing the forward algorithm for Hidden Markov Models (see below for the algorithm). To prevent over/underflow, I work with log-probabilities instead, and use the log-sum-exp trick to compute each forward coefficient.</p>
<p>I plotted the computed forward coefficient and compared it with the states I used to simulate my data. As shown in the picture below, the general shape looks to be correct because the forward coefficient spikes at the same places as the states. The problem is that forward coefficients are probabilities, so their logs should never exceed 0, however in the images below, I see that there is a gradual drift and the log coefficients clearly exceed zero, which I suspect is due to accumulated numerical errors.
(Note, in my notation g_j(z_j) denotes the log of the forward coefficient at time j, for state z_j=1 or 2).
<a href="https://i.sstatic.net/WpmWZDwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WpmWZDwX.png" alt="enter image description here" /></a></p>
<p>I have already used the log-sum-exp trick, so I am wondering what else I can do to fix this issue? (Prevent the log probabilities from exceeding 0, and remove this gradual upwards drift).</p>
<p>The relevant part of my code is given below:</p>
<pre><code> def log_sum_exp(self, sequence):
'''
Returns np.log( np.sum(sequence) ) without under/overflow.
'''
sequence = np.array(sequence)
if np.abs(np.min(sequence)) > np.abs(np.max(sequence)):
b = np.min(sequence)
else:
b = np.max(sequence)
return b + np.log(np.sum(np.exp(sequence-b)))
def g_j_z(self, j, z_j):
'''
Returns g_j(z_j).
j: (int) time index. zero-indexed 0, 1, 2, ... n-1
z_j: (int) state index. zero-indexed. 0, 1, 2, ... K-1
'''
if j == 0:
return np.log(self.p_init[z_j]) + self.log_distributions[z_j](self.pre_x+[self.x[0]], self.pre_exog+[self.exog[0]])
if (j, z_j) in self.g_cache:
return self.g_cache[(j, z_j)]
temp = []
for state in range(self.K):
temp.append(
self.g_j_z(j-1, state) + np.log(self.p_transition[state][z_j])
)
self.g_cache[(j, z_j)] = self.log_sum_exp(temp) + self.log_distributions[z_j](self.pre_x+self.x[0:j+1], self.pre_exog+self.exog[0:j+1])
return self.g_cache[(j, z_j)]
</code></pre>
<p>Explanation of the variables:</p>
<p><code>self.g_cache</code> is a dictionary that maps the tuple <code>(j, z_j)</code> (the time and state) to the log coefficient g_j(z_j). This is used to avoid repeated computation.</p>
<p><code>self.p_init</code> is a list. <code>self.p_init[i]</code>contains the initial probability to be in state <code>i</code>.</p>
<p><code>self.p_transition</code> is a matrix. <code>self.p_transition[i][j]</code> contains the probability to transition from state <code>i</code> to state <code>j</code>.</p>
<p><code>self.log_distributions</code> is a list of functions. <code>self.log_distributions[i]</code> is the log probability distribution for state <code>i</code>, which is a function that takes the history of observations and exogenous variables as input, and returns the log-probability for the latest observation. For example, for an AR-1 process, the log distribution is implemented as follows</p>
<pre><code>def log_pdf1(x, exog, params=regime1_params):
'''
x: list of all history of x up to current point
exog: list of all history of exogenous variable up to current point
'''
# AR1 process with exogenous mean
alpha, dt, sigma = params[0], params[1], params[2]
mu = x[-2] + alpha*(exog[-1] - x[-2])*dt
std = sigma*np.sqrt(dt)
return norm.logpdf(x[-1], loc=mu, scale=std)
</code></pre>
<p>The algorithm I am implementing is given here:</p>
<p><a href="https://i.sstatic.net/8MHvufYT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MHvufYT.png" alt="enter image description here" /></a></p>
<p>However, I am instead computing log of the coefficients using log-sum-exp trick to avoid over/underflow:</p>
<p><a href="https://i.sstatic.net/rgexF6kZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rgexF6kZ.png" alt="enter image description here" /></a></p>
<p>Thank you very much for the help!</p>
|
<python><statistics><numerical-methods><hidden-markov-models>
|
2024-11-14 10:47:35
| 1
| 409
|
ranky123
|
79,188,310
| 4,050,510
|
Enforcing matplotlib tick labels not wider than the axes
|
<p>I need to make a very compact plot with shared y-axis using matplotlib. To make it compact and neat, I will not have any wspace. It looks good with my data.</p>
<p>But the x-tick labels overlap, making them unreadable.</p>
<p>Is there a way to make the x tick locator not place ticks at the 'edge' of the axes, make the labels adjust the placement so they fall inside the axes width, or make them autodetect the collisions? Or is there a better way to avoid the collision of x tick labels when placing axes close together?</p>
<p>EDIT: I updated the code so it reproduces my original manual and limited-adjusted plots, but also includes the answer, for reference</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import matplotlib.ticker
import matplotlib
matplotlib.rcParams['xtick.labelsize'] = 5
matplotlib.rcParams['ytick.labelsize'] = 5
def mkplot():
fig,axs = plt.subplots(1,2,figsize=(2,2),gridspec_kw={'wspace':0},sharey=True)
axs[0].plot([0.1,0.2,0.3],[0,2,3])
axs[0].xaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(xmax=1))
axs[1].plot([3,2,1],[1,2,3])
axs[1].yaxis.set_tick_params(labelleft=False,size=0)
return fig,axs
#######################
fig,axs = mkplot()
fig.suptitle('No adjustment')
#######################
fig,axs= mkplot()
axs[0].set_xlim(0.05,0.32)
axs[0].set_xticks([0.1,0.2,0.3])
axs[1].set_xlim(0.7,3.2)
axs[1].set_xticks([1,2,3])
fig.suptitle('Manual limits and ticks')
#######################
fig,axs = mkplot()
axs[0].get_xticklabels()[-2].set_horizontalalignment('right')
axs[1].get_xticklabels()[+1].set_horizontalalignment('left')
fig.suptitle('Manual alignment')
</code></pre>
<p><a href="https://i.sstatic.net/rzzKlikZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rzzKlikZ.png" alt="unadjusted" /></a>
<a href="https://i.sstatic.net/YvcKgzx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YvcKgzx7.png" alt="lims and ticks" /></a>
<a href="https://i.sstatic.net/X5o7Jwcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X5o7Jwcg.png" alt="slignement" /></a></p>
|
<python><matplotlib>
|
2024-11-14 10:24:54
| 1
| 4,934
|
LudvigH
|
79,188,069
| 1,082,349
|
Pandas checking if value in column actually checks if value in index?
|
<pre><code>"490100" in df_exp['ucc'].astype(str).str.strip()
Out[337]: False
(df_exp['ucc'].astype(str) == "490100").any()
Out[339]: True
"490100" in df_exp['ucc'].astype(str).str.strip().values
Out[340]: True
</code></pre>
<p>Apparently the check <code>foo in df[column]</code> no longer checks whether <code>foo</code> is a value inside the column, it checks if <code>foo</code> is in the index? This is why explicitely checking against values works?</p>
<p>What is the purpose of this, and how long has this been operating like this?</p>
|
<python><pandas>
|
2024-11-14 09:30:00
| 1
| 16,698
|
FooBar
|
79,188,039
| 16,721,393
|
From memory buffer to disk as fast as possible
|
<p>I would like to present a scenario and discuss suitable design patterns to address it.</p>
<p>Consider a simple situation: a camera records to a memory buffer for ten seconds before stopping. Once recording ends, a binary file descriptor opens, and data is transferred to disk.</p>
<p>A major limitation of this approach is that recordings are restricted by the available RAM size. But, frame loss may not be a problem.</p>
<p>To mitigate this, one potential solution is to use a dedicated thread or process for writing to disk. In this setup, the producer memory buffer is shared between the main and writer threads/processes. However, this introduces a new issue: when the writer thread locks the buffer, the camera may be unable to place new frames, leading to potential frame loss.</p>
<p><strong>Question</strong>
Is there a design pattern that addresses the problem highlighted in the second scenario?</p>
<h2>Below some code examples for the two scenarios in Python.</h2>
<p>First scenario in Python:</p>
<pre class="lang-py prettyprint-override"><code>import io
from picamera2 import Picamera2
from picamera2.encoders import Encoder as NullEncoder
from picamera2.outputs import FileOutput
# Init camera
cam = Picamera2()
# Init memory buffer
mem_buff = io.BytesIO()
mem_out = FileOutput(mem_buff)
# Open camera
cam.start()
# Just writes frames without encoding i.e.: BGR888
encoder = NullEncoder()
# Recording time
to_record = 10
print(f"Start recording for {to_record} seconds")
cam.start_recording(encoder, mem_out)
time.sleep(to_record)
cam.stop_recording()
print("Finish recording")
cam.close()
# Begin data transfer to disk
out_fpath = "video.bin"
disk_transfer_start = time.perf_counter()
with open(out_fpath, "wb") as fd:
fd.write(mem_buff.getvalue())
disk_transfer_el = time.perf_counter() - disk_transfer_start
print(f"Data transfer took {disk_transfer_el} sec")
# Get a sense of how many frames are missing
totbytes = os.path.getsize(out_fpath)
byteel = 2304*1296*3 # (frame_width * frame_height * num_channels)
num_frames = totbytes / byteel
print(f"Video has {num_frames} frames")
</code></pre>
<p>A possible implementation of the second scenario in Python:</p>
<pre class="lang-py prettyprint-override"><code>import io
from threading import Thread, Event, Lock
from picamera2 import Picamera2
from picamera2.encoders import Encoder as NullEncoder
from picamera2.outputs import FileOutput
def disk_writer(mem_buff: io.BytesIO, bin_fd, write_interval: int, stop_event: Event, lock: Lock):
while not stop_event.is_set():
start_loop = time.perf_counter()
lock.acquire()
curr_buff_pos = mem_buff.tell()
lock.release()
if curr_buff_pos > 0:
lock.acquire()
bin_fd.write(mem_buff.getvalue())
mem_buff.seek(0)
mem_buff.truncate(0)
lock.release()
elapsed = time.perf_counter() - start_loop
if elapsed < write_interval:
time.sleep(write_interval - elapsed)
if mem_buff.tell() > 0:
bin_fd.write(mem_buff.getvalue())
mem_buff.seek(0)
mem_buff.truncate(0)
# Init camera
cam = Picamera2()
# Init memory buffer
mem_buff = io.BytesIO()
mem_out = FileOutput(mem_buff)
# Get output file descriptor
bin_fd = open("video.bin", "wb")
# Open camera
cam.start()
# Create writing thread and start
stop_event = Event()
lock = Lock()
write_interval = 5
writer_thread = Thread(target=disk_writer, args=(mem_buff, bin_fd, write_interval, stop_event, lock))
writer_thread.start()
# Just writes frames without encoding i.e.: BGR888
encoder = NullEncoder()
# Recording time
to_record = 10
print(f"Start recording for {to_record} seconds")
cam.start_recording(encoder, mem_out)
time.sleep(to_record)
cam.stop_recording()
print("Finish recording")
stop_event.set()
writer_thread.join()
cam.close()
bin_fd.close()
# Get a sense of how many frames are missing
totbytes = os.path.getsize(out_fpath)
byteel = 2304*1296*3 # (frame_width * frame_height * num_channels)
num_frames = totbytes / byteel
print(f"Video has {num_frames} frames")
</code></pre>
|
<python><design-patterns><raspberry-pi><camera><bufferedwriter>
|
2024-11-14 09:19:45
| 1
| 371
|
rober_dinero
|
79,188,007
| 6,930,340
|
Polars equivalent of numpy.tile
|
<pre><code>df = pl. DataFrame({"col1": [1, 2, 3], "col2": [4, 5, 6]})
print(df)
shape: (3, 2)
ββββββββ¬βββββββ
β col1 β col2 β
β --- β --- β
β i64 β i64 β
ββββββββͺβββββββ‘
β 1 β 4 β
β 2 β 5 β
β 3 β 6 β
ββββββββ΄βββββββ
</code></pre>
<p>I am looking for the <code>polars</code> equivalent of <code>numpy.tile</code>.<br />
Something along the line such as <code>df.tile(2)</code> or <code>df.select(pl.all().tile(2))</code>.</p>
<p>The expected result should look like this:</p>
<pre><code>shape: (6, 2)
ββββββββ¬βββββββ
β col1 β col2 β
β --- β --- β
β i64 β i64 β
ββββββββͺβββββββ‘
β 1 β 4 β
β 2 β 5 β
β 3 β 6 β
β 1 β 4 β
β 2 β 5 β
β 3 β 6 β
ββββββββ΄βββββββ
</code></pre>
|
<python><python-polars>
|
2024-11-14 09:11:39
| 1
| 5,167
|
Andi
|
79,187,982
| 1,082,019
|
"ValueError: zero-size array to reduction operation maximum which has no identity" error when calling a Python function from R
|
<p>I'm trying to use the <a href="https://github.com/FelSiq/DBCV" rel="nofollow noreferrer">Fast Density-Based Clustering Validation (DBCV)</a> Python package from R through the <a href="https://cran.r-project.org/web/packages/reticulate/vignettes/calling_python.html" rel="nofollow noreferrer">reticulate</a> R library, but I'm getting an error I cannot solve.
I'm using a Dell computer with Linux Xubuntu 22.04.05 operating system on, with Python 3.11.9 and R 4.3.1.</p>
<p>Here are my steps:</p>
<p>in a shell terminal, I create a Python environment and then install the packages needed:</p>
<pre><code>python3 -m venv dbcv_environment
dbcv_environment/bin/pip3 install scikit-learn numpy
dbcv_environment/bin/pip3 install "git+https://github.com/FelSiq/DBCV"
</code></pre>
<p>In R then, I install the needed R packages, call the Python environment created, generate a sample dataset and its labels, and try to apply the <code>dbcv()</code> function:</p>
<pre><code>setwd(".")
options(stringsAsFactors = FALSE)
options(repos = list(CRAN="http://cran.rstudio.com/"))
list.of.packages <- c("pacman")
new.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[,"Package"])]
library("pacman")
p_load("reticulate")
data <- matrix(c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), nrow = 5, byrow = TRUE)
labels <- c(0,0,1,1,1)
use_virtualenv("./dbcv_environment")
dbcvLib <- import("dbcv")
dbcvLib$dbcv(X=data, y=labels)
</code></pre>
<p>But when I execute the last command, I get the following error:</p>
<pre><code>Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
ValueError: zero-size array to reduction operation maximum which has no identity
Run `reticulate::py_last_error()` for details.
</code></pre>
<p>Does anybody know how to solve this problem?
Any help with be appreciated, thanks!</p>
|
<python><r><reticulate>
|
2024-11-14 09:01:11
| 1
| 3,480
|
DavideChicco.it
|
79,187,730
| 1,802,693
|
How to parallelize long-running IO operations in a multi-threaded Python application with asyncio?
|
<p>I am building a Python application that uses an event loop (via the asyncio library) to listen for tick data from a cryptocurrency exchange via a WebSocket. The tick data comes for various symbols, and Iβm putting these ticks into a queue.Queue (which is thread-safe but not asyncio-compatible).</p>
<p>A separate thread, TickProcessor, processes the ticks from the queue and makes decisions about whether costly IO operations need to be executed (such as database queries, writes, or REST API calls). Currently, these IO operations are running in a synchronous manner on the worker thread, which introduces significant delays in processing the tick data due to blocking IO calls.</p>
<p>I would like to parallelize the IO operations to reduce this delay and improve the overall performance. I have thought of two potential solutions, but Iβm not sure which one would be the most efficient in terms of resource usage. Here are the options Iβm considering:</p>
<p>Use the main event loop: The idea is to use the main event loop (which is handling the WebSocket ticks) to run the costly IO operations. In this setup, the processing thread will no longer execute IO tasks directly but will return to the event loop, which will handle the IO operations asynchronously.</p>
<p>Start a separate thread with its own event loop: Another option is to create a new thread, which will start a new event loop to handle the IO operations asynchronously using asyncio. This way, I would effectively have three threads: one for receiving the tick data, one for processing, and one with an event loop running for async IO operations.</p>
<p>Key constraints:</p>
<p>I don't want to refactor the entire application to be fully asynchronous, as this would be too costly and time-consuming. I have working synchronous code that needs to run in order and cannot be easily changed. I donβt mind if certain tasks (like IO operations) wait on a different thread while execution continues on the main thread and processing thread.
Which approach would be more resource-efficient, and what are the advantages and disadvantages of each? Is there any other better solution that I might be missing?</p>
|
<python><python-3.x><multithreading><architecture><python-asyncio>
|
2024-11-14 07:21:55
| 0
| 1,729
|
elaspog
|
79,187,647
| 3,296,786
|
Pytest - reordeing the test files
|
<p>In the existing projects PyTests runs the files in alphabetical order one after other.
The file name are <code>archive_test.py, config_test.py, controller_test.py</code> ...etc.
I want the controller_test.py to be ran before config_test.py when I execute pytest -s. How to achieve this? Is there any other logical apart from renaming?
I tried adding. <code>ordering = [" controller_test.py", "config_test.py"]</code> in conftest.py but doesnt seem to be working</p>
|
<python><testing><pytest>
|
2024-11-14 06:49:22
| 0
| 1,156
|
aΨVaN
|
79,187,315
| 8,143,104
|
Unzip file in Azure Blob storage from Databricks
|
<p>I am trying to unzip a file that is in an Azure ADLS Gen2 container through Azure Databricks Pyspark. When I use ZipFile, I get a <code>BadZipFile</code> error or a <code>FileNotFoundError</code>.</p>
<p>I can read CSVs in the same folder, but not the zip files.</p>
<p>The zip filepath is the same filepath I get from <code>dbutils.fs.ls(blob_folder_url)</code>.</p>
<p><strong>BadZipeFile code:</strong>
<a href="https://i.sstatic.net/zUHj1R5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zUHj1R5n.png" alt="BadZipFile" /></a></p>
<p><strong>FileNotFound code:</strong>
<a href="https://i.sstatic.net/2jnKbsM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2jnKbsM6.png" alt="FileNotFound" /></a></p>
<p><strong>Reading a CSV code:</strong>
<a href="https://i.sstatic.net/JpXSoPe2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpXSoPe2.png" alt="reading csv" /></a></p>
<p><strong>Code:</strong></p>
<pre><code>import zipfile, os, io, re
# Azure Blob Storage details
storage_account_name = "<>"
container_name = "<>"
folder_path = "<>"
blob_folder_url = f"abfss://{container_name}@{storage_account_name}.dfs.core.windows.net/{folder_path}"
zip_file = blob_folder_url + 'batch1_weekly_catman_20241109.zip'
# List files in the specified blob folder
files = dbutils.fs.ls(blob_folder_url)
for file in files:
# Check if the file is a ZIP file
if file.name.endswith('.zip'):
print(f"Processing ZIP file: {file.name}")
# Read the ZIP file into memory
zip_file_path = file.path
zip_blob_data = dbutils.fs.head(zip_file_path) # Read the ZIP file content
# Unzip the file
with zipfile.ZipFile(io.BytesIO(zip_blob_data.encode('utf-8')), 'r') as z:
print('zipppppppper')
# with zipfile.ZipFile(zip_file, 'r') as z:
# print('zipppppppper')
</code></pre>
<p><strong>Error Messages:</strong></p>
<ol>
<li>BadZipFile: File is not a zip file</li>
<li>FilenotFoundError: [Errno 2] No
such file or directory</li>
</ol>
|
<python><azure-blob-storage><databricks><azure-data-lake-gen2>
|
2024-11-14 03:28:38
| 2
| 396
|
Mariah Akinbi
|
79,187,232
| 4,038,747
|
How to get all Pagerduty incident notes via Pdpyras library
|
<p>I know how to make a GET request to the <a href="https://developer.pagerduty.com/api-reference/a1ac30885eb7a-list-notes-for-an-incident" rel="nofollow noreferrer">incident notes API end</a> but how do I leverage the <code>pdpyras</code> library to do so? I can get all <code>incidents</code></p>
<pre><code>from pdpyras import APISession
api_token = 'MY_TOKEN'
session = APISession(api_token)
for incident in session.iter_all('incidents'):
print(incident)
</code></pre>
<p>but i cannot see any information related to notes, so i am suspecting i should not be passing <code>'incidents'</code> to my <code>sessions.iter_all</code> method. I have been looking through their <a href="https://github.com/PagerDuty/pdpyras/blob/main/pdpyras.py#L146" rel="nofollow noreferrer">source code</a> but I have not gotten anywhere with that. Any help would be appreciated.</p>
|
<python><pagerduty>
|
2024-11-14 02:34:11
| 1
| 1,175
|
lollerskates
|
79,187,131
| 270,043
|
How to optimize PySpark code to calculate Jaccard Similarity for a huge dataset
|
<p>I have a huge PySpark dataframe that contains 250 million rows, with columns <code>ItemA</code> and <code>ItemB</code>. I'm trying to calculate the Jaccard Similarity <code>M_ij</code> that can run efficiently and takes a short amount of time to complete. My code is as follows.</p>
<pre><code># Group by ItemA and collect all ItemB values as a set
item_sets = df.groupby('ItemA').agg(collect_set('ItemB').alias('ItemB_set'))
# Repartition the dataframe to ensure even distribution of data
item_sets = item_sets.repartition(100)
# Cross join the sets with each other (thus, creating all pairs of ItemA)
cross_item_sets = item_sets.alias('i').crossJoin(item_sets.alias('j'))
# Calculate the intersection and union for each pair
def jaccard_similarity(row):
set_i = set(row['i']['ItemB_set'])
set_j = set(row['j']['ItemB_set'])
intersection_size = len(set_i.intersection(set_j))
union_size = len(set_i.union(set_j))
return Row(ItemA_i=row['i']['ItemA'], ItemA_j=row['j']['ItemA'], M_ij=intersection_size / union_size if union_size > 0 else 0)
# Apply the function
similarity_rdd = cross_item_sets.rdd.map(jaccard_similarity).repartition(200)
# Specify the schema for the dataframe
schema = StructType([
StructField("ItemA", StringType(), True),
StructField("ItemB", StringType(), True),
StructField("jaccard_sim", FloatType(), True)
])
# Convert the RDD back to Dataframe
similarity_df = spark.createDataFrame(similarity_rdd, schema)
# Show results
similarity_df.show(10, truncate=False)
</code></pre>
<p>When I looked at the Spark Web UI after leaving the code to run for 2 hours, I see</p>
<pre><code>Stages: Succeeded/Total --> 0/4
Tasks (for all stages): Succeeded/Total --> 0/10155 (14 running)
</code></pre>
<p>I believe the above is at the <code>similarity_df.show()</code> part.</p>
<p>I can't increase the amount of Spark cluster resources given to me.</p>
<p>How can I get the code to run?</p>
|
<python><pyspark><optimization><jaccard-similarity>
|
2024-11-14 01:27:48
| 0
| 15,187
|
Rayne
|
79,186,983
| 6,036,549
|
How to render LaTeX in Shiny for Python?
|
<p>I'm trying to find if there is a way to render LaTeX formulas in <a href="https://shiny.posit.co/py/" rel="nofollow noreferrer">Shiny for Python</a> or any low-hanging fruit workaround for that.</p>
<p>Documentation doesn't have any LaTeX mentions, so looks like there's no dedicated functionality to support it.
Also double-checked different variations of Latex in their <a href="https://shinylive.io/py/examples/#code=NobwRAdghgtgpmAXGKAHVA6VBPMAaMAYwHsIAXOcpMAMwCdiYACAZwAsBLCbDOAD1R04LFkw4xUxOmTERUAVzJ4mQiABM4dZfI4AdCPv0ABVRroYKfMvo00mZKwAoAlIn1MPTOAEd5UMhykTAC8KrpgACQRurrAAMxMMQHwonEA1HEAtAkxALpR4RgsZHQcqC7unkJk8nQQXr7+gQYQYAC+uUA" rel="nofollow noreferrer">playground</a>.</p>
<p>Tried this but didn't work:</p>
<pre class="lang-py prettyprint-override"><code>from shiny.express import input, render, ui
@render.text
def txt():
equation = r"$$\[3 \times 3+3-3 \]$$".strip()
return equation
</code></pre>
|
<python><latex><py-shiny>
|
2024-11-13 23:39:04
| 1
| 537
|
VladKha
|
79,186,624
| 3,621,143
|
Multiple "applications" in CherryPy producing 404s?
|
<p>I am posting this question, because all the other posts regarding the issue I am facing are all 11 years old. I am sure quite a bit has changed between now and them, so I do not trust those articles.</p>
<p>I was able to successfully deploy a CherryPy configuration using the cherrypy.quickstart method, and all worked great.</p>
<p>I now have some more capability I am trying to add to the existing Python script, so I need to have additional applications, so I found this in the CherryPy documentation:
<a href="https://docs.cherrypy.dev/en/latest/basics.html#hosting-one-or-more-applications" rel="nofollow noreferrer">https://docs.cherrypy.dev/en/latest/basics.html#hosting-one-or-more-applications</a></p>
<p>Without a ton of information available, I followed those steps, and all the objects exist that the cherrypy.tree.mount is referring to, yet I am getting a "404" path not found.</p>
<pre><code> cherrypy.config.update(
{
"log.screen": True,
"server.socket_host": "scriptbox.its.utexas.edu",
"server.socket_port": 8888,
"server.ssl_module": "builtin",
"server.ssl_certificate": scriptPath()+"/ssl/scriptbox.pem",
"server.ssl_private_key": scriptPath()+"/ssl/scriptbox.key",
"server.ssl_certificate_chain": scriptPath()+"/ssl/server_chain.pem",
"/favicon.ico":
{
'tools.staticfile.on': True,
'tools.staticfile.filename': '/f5tools.ico'
}
})
cherrypy.tree.mount(ServeHelp(), '/')
cherrypy.tree.mount(AS3Tools(), '/as3tohtml')
cherrypy.tree.mount(ServeReport(), '/net_report')
cherrypy.engine.start()
cherrypy.engine.block()
</code></pre>
<p>The instance starts successfully. If you go to "/" (root), that works just fine.
If I go to either "/as3tohtml" or "/net_report", I get the following error:</p>
<pre><code>404 Not Found
The path '/as3tohtml/' was not found.
Traceback (most recent call last):
File "/opt/miniconda3/envs/p3/lib/python3.8/site-packages/cherrypy/_cprequest.py", line 659, in respond
self._do_respond(path_info)
File "/opt/miniconda3/envs/p3/lib/python3.8/site-packages/cherrypy/_cprequest.py", line 718, in _do_respond
response.body = self.handler()
File "/opt/miniconda3/envs/p3/lib/python3.8/site-packages/cherrypy/lib/encoding.py", line 223, in __call__
self.body = self.oldhandler(*args, **kwargs)
File "/opt/miniconda3/envs/p3/lib/python3.8/site-packages/cherrypy/_cperror.py", line 415, in __call__
raise self
cherrypy._cperror.NotFound: (404, "The path '/as3tohtml/' was not found.")
</code></pre>
<p>The code around the calls above are:</p>
<pre><code>class AS3Tools:
@cherrypy.expose
def as3tohtml(self, env, as3_file):
as3 = AS3Declaration(env+"/"+as3_file)
if as3.getStatus():
return parse_as3(as3)
</code></pre>
<p>and ...</p>
<pre><code>class ServeReport:
@cherrypy.expose
def network_report(self):
net_report = NetworkReport()
if net_report.getStatus():
return generate_report(net_report)
</code></pre>
<p>What am I doing wrong? Help?</p>
|
<python><cherrypy>
|
2024-11-13 20:47:22
| 0
| 1,175
|
jewettg
|
79,186,512
| 4,508,605
|
pyspark trimming all fields bydefault while writing into csv in python
|
<p>I am trying to write the dataset into csv file using <code>spark 3.3 , Scala 2</code> <code>python</code> code and bydefault its trimming all the String fields. For example, for the below column values :</p>
<pre><code>" Text123"," jacob "
</code></pre>
<p>the output in csv is:</p>
<pre><code>"Text123","jacob"
</code></pre>
<p>I dont want to trim any String fields.</p>
<p>Below is my code:</p>
<pre><code>args = getResolvedOptions(sys.argv, ['target_BucketName', 'JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
# Convert DynamicFrame to DataFrame
df_app = AWSGlueDataCatalog_node.toDF()
# Repartition the DataFrame to control output files APP
df_repartitioned_app = df_app.repartition(10)
# Check for empty partitions and write only if data is present
if not df_repartitioned_app.rdd.isEmpty():
df_repartitioned_app.write.format("csv") \
.option("compression", "gzip") \
.option("header", "true") \
.option("delimiter", "|") \
.save(output_path_app)
</code></pre>
|
<python><apache-spark><pyspark><aws-glue><apache-spark-3.0>
|
2024-11-13 20:13:12
| 1
| 4,021
|
Marcus
|
79,186,378
| 1,700,890
|
Detect row change by group and bring result back to original data frame
|
<p>Here is my example. I am grouping, ordering and detecting change from one row to another.</p>
<pre><code>import pandas as pd
import datetime
my_df = pd.DataFrame({'col1': ['a', 'a', 'a', 'a', 'b', 'b', 'b'],
'col2': [2, 2, 3, 2, 5, 5, 5],
'col3': [datetime.date(2023, 2, 1),
datetime.date(2023, 3, 1),
datetime.date(2023, 5, 1),
datetime.date(2023, 4, 1),
datetime.date(2023, 3, 1),
datetime.date(2023, 2, 1),
datetime.date(2023, 4, 1)]})
my_df_temp = my_df.sort_values(by=['col3']).groupby('col1')['col2'].apply(
lambda x: x != x.shift(1)
).reset_index(name='col2_change')
</code></pre>
<p>Now I would like to bring result back to <code>my_df</code> i.e. I would like <code>my_df</code> to have column <code>col2_change</code>.</p>
<p>Simple assignment will not work <code>my_df['col2_change'] = my_df_temp.col2_change.values</code></p>
<p>One way I can do it is by ordering <code>my_df</code> by two columns <code>col1</code> and <code>col3</code> and then simply assigning, but it looks a bit laborious. Is there an easier way to do it?</p>
|
<python><pandas><group-by><apply>
|
2024-11-13 19:25:29
| 2
| 7,802
|
user1700890
|
79,186,344
| 3,280,613
|
Cancel current pipeline job "from within" in Azure ML sdk v2
|
<p>I am porting a sdk v1 machine learning pipeline to SDK v2. We have a step which, under certain conditions, cancels the whole pipeline job (ie, the other steps won't run). Its code is like this:</p>
<pre><code>from azureml.core import Run
from azureml.pipeline.core import PipelineRun
run = Run.get_context()
ws = run.experiment.workspace
pipeline_run = PipelineRun(run.experiment, run.parent.id)
if condition:
pipeline_run.cancel()
</code></pre>
<p>I can't find a way to do something similar using Python SDK v2. And I don't want to mix v1 and v2 code. How could I do it? Any ideas?</p>
|
<python><azure><azure-machine-learning-service><azureml-python-sdk><azure-ml-pipelines>
|
2024-11-13 19:15:14
| 1
| 659
|
Celso
|
79,186,201
| 1,014,841
|
Converting pl.Duration to human string
|
<p>When printing a polars data frame, <code>pl.Duration</code> are printed in a "human format" by default. What function is used to do this conversion? Is it possible to use it? Trying <code>"{}".format()</code> returns something readable but not as good.</p>
<pre><code>import polars as pl
data = {"end": ["2024/11/13 10:28:00",
"2024/10/10 10:10:10",
"2024/09/13 09:12:29",
"2024/08/31 14:57:02",
],
"start": ["2024/11/13 10:27:33",
"2024/10/10 10:01:01",
"2024/09/13 07:07:07",
"2024/08/25 13:48:28",
]
}
df = pl.DataFrame(data)
df = df.with_columns(
pl.col("end").str.to_datetime(),
pl.col("start").str.to_datetime(),
)
df = df.with_columns(
duration = pl.col("end") - pl.col("start"),
)
df = df.with_columns(
pl.col("duration").map_elements(lambda t: "{}".format(t), return_dtype=pl.String()).alias("duration_str")
)
print(df)
</code></pre>
<pre><code>shape: (4, 4)
βββββββββββββββββββββββ¬ββββββββββββββββββββββ¬βββββββββββββββ¬ββββββββββββββββββ
β end β start β duration β duration_str β
β --- β --- β --- β --- β
β datetime[ΞΌs] β datetime[ΞΌs] β duration[ΞΌs] β str β
βββββββββββββββββββββββͺββββββββββββββββββββββͺβββββββββββββββͺββββββββββββββββββ‘
β 2024-11-13 10:28:00 β 2024-11-13 10:27:33 β 27s β 0:00:27 β
β 2024-10-10 10:10:10 β 2024-10-10 10:01:01 β 9m 9s β 0:09:09 β
β 2024-09-13 09:12:29 β 2024-09-13 07:07:07 β 2h 5m 22s β 2:05:22 β
β 2024-08-31 14:57:02 β 2024-08-25 13:48:28 β 6d 1h 8m 34s β 6 days, 1:08:34 β
βββββββββββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββββββββ
</code></pre>
|
<python><python-polars>
|
2024-11-13 18:19:02
| 2
| 3,125
|
Yves Dorfsman
|
79,186,139
| 4,755,229
|
In Jupyter/VSCode running conda envs, how do I make it run activation scripts in activate.d?
|
<p>Conda or Mamba provides a way to set shell environment variables upon activating the environment -- <code>.sh</code> scripts stored in <code>/path/to/env/etc/conda/activate.d/</code> or <code>.../deactivate.d/</code> run upon activating and deactivating the environment. This is utilized by many packages to link their programs and libraries, should it be necessary.</p>
<p>It seems neither Jupyter kernel nor VScode extensions are not aware of these scripts and do not run them upon activation. In my case, this makes some of the paths broken, making it impossible to import some packages <strike>unless I maunally add them to path via <code>sys</code>.</strike>
<strong>EDIT: Just checked: manually adding <code>sys</code> does not make help importing the library. The library itself should load <code>*.so</code> objects and so on, which also relies on environment variables.</strong></p>
<p>How do I make either of them (preferably both) aware of activation scrips and run them upon starting up kernel?</p>
|
<python><linux><visual-studio-code><conda><jupyter>
|
2024-11-13 18:04:00
| 0
| 498
|
Hojin Cho
|
79,186,053
| 4,963,334
|
peewe upgrade to 3.1.5.x
|
<p>We are upgarding peewe to 3.15.x</p>
<p>when we setup peewe connection and every api request we disabled autocommit and add some manual changes in proxy intilaization.</p>
<pre><code> def begin(self):
self.execute_sql('set autocommit=0')
self.execute_sql('begin')
def configure_proxy(cls, proxy):
proxy.obj.require_commit = False
proxy.obj.autocommit = True
proxy.obj.commit_select = False
proxy.obj.connect_kwargs["autocommit"] = True
At every GET request to disable transaction we set transcation=false.
with Using(proxy, DB_MODELS, with_transaction=False):
//Execute
</code></pre>
<p>Now once we upgarde to peewe 3.15.x there is no Using function.Are we correctly disabling the transation=false in below code.</p>
<pre><code> with proxy.connection_context():
with proxy.bind_ctx(DB_MODELS):
models.DB_PROXY.execute_sql('set autocommit=1')
// execute function
</code></pre>
|
<python><peewee><flask-peewee>
|
2024-11-13 17:37:35
| 0
| 1,525
|
immrsteel
|
79,186,037
| 54,873
|
What is the pandas version of np.select?
|
<p>I feel very silly asking this.</p>
<p>I want to set a value in a DataFrame depending on some other columns.</p>
<p>I.e:</p>
<pre><code>(Pdb) df = pd.DataFrame([['cow'], ['dog'], ['trout'], ['salmon']], columns=["animal"])
(Pdb) df
animal
0 cow
1 dog
2 trout
3 salmon
(Pdb) df["animal"] = np.select(df["animal"] == "dog", "canine", "not-canine")
</code></pre>
<p>But the problem is that the above doesn't work! It's because I'm providing a single value, not an array. Arrgh, <code>numpy</code>.</p>
<pre><code>*** ValueError: list of cases must be same length as list of conditions
(Pdb)
</code></pre>
<p>I know about <code>df.where</code> and <code>df.mask</code> - but there seems to be no <code>df.select</code>. What ought I do?</p>
|
<python><pandas>
|
2024-11-13 17:33:25
| 2
| 10,076
|
YGA
|
79,185,962
| 1,700,890
|
Assigning column from different data frame - role of index
|
<pre><code>import pandas as pd
df_1 = pd.DataFrame({'col1': ['a', 'a', 'a']})
df_2 = pd.DataFrame({'col1': ['b', 'b', 'b']})
df_2.index = [4,5,6]
df_1['col2'] = df_2.col1
</code></pre>
<p>I expect a simple copy in the above example, but 'col2' in df_1 is all NAs. I find it strange. What is the rational for this choice? Similar example works differently in R.</p>
|
<python><pandas><indexing><copy>
|
2024-11-13 17:08:05
| 1
| 7,802
|
user1700890
|
79,185,792
| 11,010,254
|
Mypy doesn't detect a type guard, why?
|
<p>I am trying to teach myself how to use type guards in my new Python project in combination with pydantic-settings, and mypy doesn't seem to pick up on them. What am I doing wrong here?</p>
<p>Code:</p>
<pre><code>import logging
from logging.handlers import SMTPHandler
from functools import lru_cache
from typing import Final, Literal, TypeGuard
from pydantic import EmailStr, SecretStr
from pydantic_settings import BaseSettings, SettingsConfigDict
SMTP_PORT: Final = 587
class Settings(BaseSettings):
"""
Please make sure your .env contains the following variables:
- BOT_TOKEN - an API token for your bot.
- TOPIC_ID - an ID for your group chat topic.
- GROUP_CHAT_ID - an ID for your group chat.
- ENVIRONMENT - if you intend on running this script on a VPS, this improves logging
information in your production system.
Required only in production:
- SMTP_HOST - SMTP server address (e.g., smtp.gmail.com)
- SMTP_USER - Email username/address for SMTP authentication
- SMTP_PASSWORD - Email password or app-specific password
"""
ENVIRONMENT: Literal["production", "development"]
# Telegram bot configuration
BOT_TOKEN: SecretStr
TOPIC_ID: int
GROUP_CHAT_ID: int
# Email configuration
SMTP_HOST: str | None = None
SMTP_USER: EmailStr | None = None
# If you're using Gmail, this needs to be an app password
SMTP_PASSWORD: SecretStr | None = None
model_config = SettingsConfigDict(env_file="../.env", env_file_encoding="utf-8")
@lru_cache(maxsize=1)
def get_settings() -> Settings:
"""This needs to be lazily evaluated, otherwise pytest gets a circular import."""
return Settings()
type DotEnvStrings = str | SecretStr | EmailStr
def is_all_email_settings_provided(
host: DotEnvStrings | None,
user: DotEnvStrings | None,
password: DotEnvStrings | None,
) -> TypeGuard[DotEnvStrings]:
"""
Type guard that checks if all email settings are provided.
Returns:
True if all email settings are provided as strings, False otherwise.
"""
return all(isinstance(x, (str, SecretStr, EmailStr)) for x in (host, user, password))
def get_logger():
...
settings = get_settings()
if settings.ENVIRONMENT == "development":
level = logging.INFO
else:
# # We only email logging information on failure in production.
if not is_all_email_settings_provided(
settings.SMTP_HOST, settings.SMTP_USER, settings.SMTP_PASSWORD
):
raise ValueError("All email environment variables are required in production.")
level = logging.ERROR
email_handler = SMTPHandler(
mailhost=(settings.SMTP_HOST, SMTP_PORT),
fromaddr=settings.SMTP_USER,
toaddrs=settings.SMTP_USER,
subject="Application Error",
credentials=(settings.SMTP_USER, settings.SMTP_PASSWORD.get_secret_value()),
# This enables TLS - https://docs.python.org/3/library/logging.handlers.html#smtphandler
secure=(),
)
</code></pre>
<p>And here is what mypy is saying:</p>
<pre><code>media_only_topic\media_only_topic.py:122: error: Argument "mailhost" to "SMTPHandler" has incompatible type "tuple[str | SecretStr, int]"; expected "str | tuple[str, int]" [arg-type]
media_only_topic\media_only_topic.py:123: error: Argument "fromaddr" to "SMTPHandler" has incompatible type "str | None"; expected "str" [arg-type]
media_only_topic\media_only_topic.py:124: error: Argument "toaddrs" to "SMTPHandler" has incompatible type "str | None"; expected "str | list[str]" [arg-type]
media_only_topic\media_only_topic.py:126: error: Argument "credentials" to "SMTPHandler" has incompatible type "tuple[str | None, str | Any]"; expected "tuple[str, str] | None" [arg-type]
media_only_topic\media_only_topic.py:126: error: Item "None" of "SecretStr | None" has no attribute "get_secret_value" [union-attr]
Found 5 errors in 1 file (checked 1 source file)
</code></pre>
<p>I would expect mypy here to read up correctly that my variables can't even in theory be <code>None</code>, but type guards seem to change nothing here, no matter how many times I change the code here. Changing to Pyright doesn't make a difference. What would be the right approach here?</p>
|
<python><python-typing><mypy><pydantic>
|
2024-11-13 16:19:39
| 1
| 428
|
Vladimir Vilimaitis
|
79,185,787
| 2,405,663
|
Parse string as XML and read all elements
|
<p>I have a string variable that contains XML:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8" standalone="no"?>
<osm attribution="http://www.openstreetmap.org/copyright" copyright="OpenStreetMap and contributors" generator="openstreetmap-cgimap 2.0.1 (3329554 spike-07.openstreetmap.org)" license="http://opendatacommons.org/licenses/odbl/1-0/" version="0.6">
<way changeset="123350178" id="26695601" timestamp="2022-07-08T08:32:16Z" uid="616103" user="Max Tenerelli" version="12" visible="true">
<nd ref="289140256"/>
<nd ref="292764243"/>
<nd ref="291616556"/>
<nd ref="292764242"/>
<nd ref="291616560"/>
<nd ref="291616561"/>
<nd ref="291616562"/>
<tag k="access" v="permissive"/>
<tag k="highway" v="service"/>
<tag k="maxspeed" v="30"/>
<tag k="name" v="Baracconi - Jacotenente"/>
<tag k="oneway" v="no"/>
<tag k="surface" v="paved"/>
</way>
</osm>
</code></pre>
<p>I need to read all <code>nd</code> node (ref value) using Python. I built this code but it is not working:</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET
root = ET.fromstring(data)
for eir in root.findall('nodes'):
print(eir.text)
</code></pre>
|
<python>
|
2024-11-13 16:18:21
| 1
| 2,177
|
bircastri
|
79,185,754
| 11,840,002
|
Snowflake connector with pyspark JAR packages error
|
<p>I have read multiple thread on this but not found definitive answer.</p>
<p>I have running in container locally (<code>mac os + podman</code>)</p>
<pre><code>scala: 'version 2.12.17'
pyspark: 3.4.0
spark-3.4.0
python 3.11.4
</code></pre>
<p>I am running a container which is defined in compose (source: <a href="https://github.com/mzrks/pyspark-devcontainer/tree/master/.devcontainer" rel="nofollow noreferrer">https://github.com/mzrks/pyspark-devcontainer/tree/master/.devcontainer</a>)</p>
<pre><code>version: '3'
services:
app:
build:
context: ..
dockerfile: .devcontainer/Dockerfile
args:
PYTHON_VARIANT: 3.11
JAVA_VARIANT: 17
volumes:
- ..:/workspace:cached
command: sleep infinity
pyspark:
image: jupyter/pyspark-notebook:spark-3.4.0
environment:
- JUPYTER_ENABLE_LAB=yes
ports:
- 8888:8888
</code></pre>
<p>I have almost everything that I could find to get this work:</p>
<pre><code>from pyspark.sql import SparkSession
## I have also tried below
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages net.snowflake:snowflake-jdbc:3.17.0,net.snowflake:spark-snowflake_2.12:2.16.0-spark_3.4 pyspark-shell'
## with and without what I have put here in packages_so and repository
packages_so = 'net.snowflake:snowflake-jdbc:3.4.0,net.snowflake:spark-snowflake_2.12:2.11.0-spark_3.4'
repository = "https://repo1.maven.org/maven2"
## I have tried multiple versions of above, I dont really get what should the version
## numbers read like? other than the spark_3.4 means spark version?
spark = ( SparkSession
.builder
.master("local[*]")
.appName("spark_docker")
# .config("spark.jars.packages", "net.snowflake:snowflake-jdbc:3.17.0,net.snowflake:spark-snowflake_2.12:2.16.0-spark_3.4")
.config("spark.jars.packages", packages_so) \
.config("spark.jars.repositories", repository)
.getOrCreate()
)
sf_options = {
"sfURL": "url",
"sfUser": "user",
"sfPassword": "pass",
"sfDatabase": "SNOWFALL",
"sfSchema": "PIPELINE",
"sfWarehouse": "COMPUTE_WH",
"sfRole": "role",
}
SNOWFLAKE_SOURCE_NAME = "snowflake" # also "net.snowflake.spark.snowflake"
sdf: DataFrame = (
spark.read.format(SNOWFLAKE_SOURCE_NAME)
.options(**sf_options)
.option("dbtable", "SNOWFALL.PIPELINE.MYTABLE")
.option("fetchsize", "10000")
.load()
)
sdf.show(vertical=True, n=2)
spark.stop()
</code></pre>
<p>I have also tried to run in my container shell (source: <a href="https://www.phdata.io/blog/how-to-connect-snowflake-using-spark/" rel="nofollow noreferrer">https://www.phdata.io/blog/how-to-connect-snowflake-using-spark/</a>):</p>
<pre><code>spark-shell --packages net.snowflake:snowflake-jdbc:3.17.0,spark-snowflake_2.12:2.16.0-spark_3.4
</code></pre>
<p>I just dont get how to add the <code>JAR</code> file to this instance so the connection works</p>
<p>and my error results always to:</p>
<pre><code>Py4JJavaError: An error occurred while calling o152.load.
: org.apache.spark.SparkClassNotFoundException: [DATA_SOURCE_NOT_FOUND] Failed to find the data source: snowflake. Please find packages at `https://spark.apache.org/third-party-projects.html`.
</code></pre>
|
<python><apache-spark><pyspark><conda>
|
2024-11-13 16:06:44
| 0
| 1,658
|
eemilk
|
79,185,718
| 17,059,458
|
How to prevent a user freezing python by interacting with the console window?
|
<p>I am making a retro-style GUI in (semi)pure python using ascii characters. The script works by printing and clearing to the console while using the users mouse and keyboard data to create a fully interactive GUI.</p>
<p>However, upon creating the click detection system, I have noticed that python freezes running when the user interacts with the actual console window (e.g drags console, clicks on console). This freeze then ends when the user presses any key or right clicks.</p>
<p>I have attempted to overcome this issue by running a separate script to press right click immediately after the user left clicks, to uninteract and unfreeze the code, however this piece of code never runs as the program is frozen before it can be run.</p>
<p>I have also tried putting this code in a completely separate file and running it separately, but this is also frozen when any python console window is (even if the separate file is running as a pyw).</p>
<p>This is the click detector:</p>
<pre><code>def c_c():
global mouse_pos
last_cycle = False
cd = os.getcwd()
nw = "pyw " + cd + "\\misc_scripts\\click_activator.py"
os.system(nw)
while True:
state = ctypes.windll.user32.GetAsyncKeyState(0x01) # left click
pressed = (state & 0x8000 != 0)
if pressed:
onclick(mouse_pos)
</code></pre>
<p>The other script contained a similar program, which was fully functional in using the
<code>right_click()</code> function to immediately right click after the user left clicks, which was fully operational in outside testing, however was not while using the main program leading me to believe that this script also gets frozen when the user interacts with any console.</p>
<p>Im looking for a way to get around this issue, or making the user unable to interact with the main console while still being able to click on it.</p>
|
<python><console>
|
2024-11-13 15:58:07
| 1
| 374
|
Martin
|
79,185,543
| 9,251,158
|
Curly brace expansion fails on bash, in Linux, when called from Python
|
<p>Consider this curly brace expansion in bash:</p>
<pre class="lang-bash prettyprint-override"><code>for i in {1..10}; do
echo $i;
done;
</code></pre>
<p>I call this script from the shell (on macOS or Linux) and the curly brace does expand:</p>
<pre class="lang-none prettyprint-override"><code>$ ./test.sh
1
2
3
4
5
6
7
8
9
10
</code></pre>
<p>I want to call this script from Python, for example:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
print(subprocess.check_output("./test.sh", shell=True))
</code></pre>
<p>On macOS, this Python call expands the curly brace and I see this output:</p>
<pre class="lang-none prettyprint-override"><code>b'1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n'
</code></pre>
<p>On Linux, this Python call fails to expand the curly brace and I see this output:</p>
<pre class="lang-none prettyprint-override"><code>b'{1..10}\n'
</code></pre>
<p>Why does curly brace expansion work on the interactive shell (macOS or Linux) and when called from Python on macOS, but fails when called from Python on Linux?</p>
|
<python><bash>
|
2024-11-13 15:19:14
| 2
| 4,642
|
ginjaemocoes
|
79,185,339
| 2,451,238
|
extended help based on argument groups using Python's argparser module
|
<p>Consider the following toy example:</p>
<pre class="lang-bash prettyprint-override"><code>cat extended_help.py
</code></pre>
<pre class="lang-py prettyprint-override"><code>import argparse
ap = argparse.ArgumentParser()
ap.add_argument("-H", "--help-all", action = "version",
help = """show extended help message (incl. advanced
parameters) and exit""",
version = "This is just a dummy implementation.")
common_args = ap.add_argument_group("common parameters",
"""These parameters are typically
enough to run the tool. `%(prog)s
-h|--help` should list these
parameters.""")
advanced_args = ap.add_argument_group("advanced parameters",
"""These parameters are for advanced
users with special needs only. To make
the help more accessible, `%(prog)s
-h|--help` should not include these
parameters, while `%(prog)s
-H|--help-all` should include them (in
addition to those included by `%(prog)s
-h|--help`.""")
common_args.add_argument("-f", "--foo", metavar = "<foo>",
help = "the very common Foo parameter")
common_args.add_argument("--flag", action = "store_true",
help = "a flag enabling a totally normal option")
advanced_args.add_argument("-b", "--bar", metavar = "<bar>",
help = "the rarely needed Bar parameter")
advanced_args.add_argument("-B", "--baz", metavar = "<bar>",
help = "the even more obscure Baz parameter")
advanced_args.add_argument("--FLAG", action = "store_true",
help = "a flag for highly advanced users only")
ap.parse_args()
</code></pre>
<pre class="lang-bash prettyprint-override"><code>python extended_help.py -h
</code></pre>
<p>prints</p>
<pre class="lang-none prettyprint-override"><code>usage: extended_help.py [-h] [-H] [-f <foo>] [--flag] [-b <bar>] [-B <bar>] [--FLAG]
options:
-h, --help show this help message and exit
-H, --help-all show extended help message (incl. advanced parameters) and exit
common parameters:
These parameters are typically enough to run the tool. `extended_help.py -h|--help` should list these parameters.
-f, --foo <foo> the very common Foo parameter
--flag a flag enabling a totally normal option
advanced parameters:
These parameters are for advanced users with special needs only. To make the help more accessible, `extended_help.py -h|--help` should not include these parameters, while
`extended_help.py -H|--help-all` should include them (in addition to those included by `extended_help.py -h|--help`.
-b, --bar <bar> the rarely needed Bar parameter
-B, --baz <bar> the even more obscure Baz parameter
--FLAG a flag for highly advanced users only
</code></pre>
<p>while</p>
<pre class="lang-bash prettyprint-override"><code>python extended_help.py -H
</code></pre>
<p>only generates the placeholder message</p>
<pre class="lang-none prettyprint-override"><code>This is just a dummy implementation.
</code></pre>
<p>How would I need to modify <code>extended_help.py</code> to have</p>
<pre class="lang-bash prettyprint-override"><code>python extended_help.py -h
</code></pre>
<p>print only</p>
<pre class="lang-none prettyprint-override"><code>usage: extended_help.py [-h] [-H] [-f <foo>] [--flag] [-b <bar>] [-B <bar>] [--FLAG]
options:
-h, --help show this help message and exit
-H, --help-all show extended help message (incl. advanced parameters) and exit
common parameters:
These parameters are typically enough to run the tool. `extended_help.py -h|--help` should list these parameters.
-f, --foo <foo> the very common Foo parameter
--flag a flag enabling a totally normal option
</code></pre>
<p>and have</p>
<pre class="lang-bash prettyprint-override"><code>python extended_help.py -H
</code></pre>
<p>reproduce the full help message currently printed by</p>
<pre class="lang-bash prettyprint-override"><code>python extended_help.py -h
</code></pre>
<p>?</p>
<p>I am looking for a solution that avoids manually duplicating the help message(s of certain arguments).</p>
<hr />
<p><strong>edit:</strong></p>
<p>I know I can make <code>-H</code> replace <code>-h</code> as follows:</p>
<pre><code>import argparse
ap = argparse.ArgumentParser(add_help = False)
ap.add_argument("-h", "--help", action = "version",
help = "show help message (common parameters only) and exit",
version = """I know I could add the entire (short) help here
but I'd like to avoid that.""")
ap.add_argument("-H", "--help-all", action = "help",
help = """show extended help message (incl. advanced
parameters) and exit""")
common_args = ap.add_argument_group("common parameters",
"""These parameters are typically
enough to run the tool. `%(prog)s
-h|--help` should list these
parameters.""")
# The rest would be the same as above.
</code></pre>
<p>This way,</p>
<pre class="lang-bash prettyprint-override"><code>python extended_help.py -H
</code></pre>
<p>already works as intended:</p>
<pre class="lang-none prettyprint-override"><code>usage: extended_help.py [-h] [-H] [-f <foo>] [--flag] [-b <bar>] [-B <bar>] [--FLAG]
options:
-h, --help show help message (common parameters only) and exit
-H, --help-all show extended help message (incl. advanced parameters) and exit
common parameters:
These parameters are typically enough to run the tool. `extended_help.py -h|--help` should list these parameters.
-f, --foo <foo> the very common Foo parameter
--flag a flag enabling a totally normal option
advanced parameters:
These parameters are for advanced users with special needs only. To make the help more accessible, `extended_help.py -h|--help` should not include these parameters, while
`extended_help.py -H|--help-all` should include them (in addition to those included by `extended_help.py -h|--help`.
-b, --bar <bar> the rarely needed Bar parameter
-B, --baz <bar> the even more obscure Baz parameter
--FLAG a flag for highly advanced users only
</code></pre>
<p>However, now</p>
<pre class="lang-bash prettyprint-override"><code>python extended_help.py -h
</code></pre>
<p>only prints a placeholder:</p>
<pre class="lang-none prettyprint-override"><code>I know I could add the entire help here but I'd like to avoid that.
</code></pre>
<hr />
<p><strong>update:</strong></p>
<p>I managed to get quite close:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
ap = argparse.ArgumentParser(add_help = False, conflict_handler = "resolve")
ap.add_argument("-h", "--help", action = "help",
help = "show help message (common parameters only) and exit")
ap.add_argument("-H", "--help-all", action = "help",
help = """show extended help message (incl. advanced
parameters) and exit""")
common_args = ap.add_argument_group("common parameters",
"""These parameters are typically
enough to run the tool. `%(prog)s
-h|--help` should list these
parameters.""")
common_args.add_argument("-f", "--foo", metavar = "<foo>",
help = "the very common Foo parameter")
common_args.add_argument("--flag", action = "store_true",
help = "a flag enabling a totally normal option")
ap.add_argument("-h", "--help", action = "version", version = ap.format_help())
advanced_args = ap.add_argument_group("advanced parameters",
"""These parameters are for advanced
users with special needs only. To make
the help more accessible, `%(prog)s
-h|--help` should not include these
parameters, while `%(prog)s
-H|--help-all` should include them (in
addition to those included by `%(prog)s
-h|--help`.""")
advanced_args.add_argument("-b", "--bar", metavar = "<bar>",
help = "the rarely needed Bar parameter")
advanced_args.add_argument("-B", "--baz", metavar = "<bar>",
help = "the even more obscure Baz parameter")
advanced_args.add_argument("--FLAG", action = "store_true",
help = "a flag for highly advanced users only")
ap.parse_args()
</code></pre>
<p>This captures the help message before adding the advanced arguments and overwrites the <code>-h|--help</code> flag's 'version' string (ab-)used to store/print the short help.</p>
<pre class="lang-bash prettyprint-override"><code>python extended_help.py -H
</code></pre>
<p>already works as intended, but</p>
<pre class="lang-bash prettyprint-override"><code>python extended_help.py -h
</code></pre>
<p>swallows all line breaks and spaces from the help message:</p>
<pre class="lang-none prettyprint-override"><code>usage: extended_help.py [-h] [-H] [-f <foo>] [--flag] options: -h, --help show help message (common parameters only) and exit -H, --help-all show extended help message (incl.
advanced parameters) and exit common parameters: These parameters are typically enough to run the tool. `extended_help.py -h|--help` should list these parameters. -f, --foo
<foo> the very common Foo parameter --flag a flag enabling a totally normal option
</code></pre>
<hr />
<p><strong>update:</strong></p>
<p>The problem remaining in the version above turned out to be related to me abusing the <code>version</code> action. I solved it by defining my own custom action for the short help ('inspired' by the <code>help</code> action implementation in the <code>argparse</code> module itself).</p>
<p>I'll leave the above steps here for documentation reasons. Feel free to clean up the question (or prompt me to do so), if preferred.</p>
<p>Any feedback to my solution or alternative suggestions would be welcome.</p>
|
<python><command-line><command-line-interface><command-line-arguments><argparse>
|
2024-11-13 14:24:30
| 2
| 1,894
|
mschilli
|
79,185,240
| 22,437,609
|
Anaconda: Unable to install Kivy 2.3
|
<p>I want to install Kivy to my Anaconda tutorialEnv.</p>
<p>According to <a href="https://kivy.org/doc/stable/gettingstarted/installation.html#install-conda" rel="nofollow noreferrer">https://kivy.org/doc/stable/gettingstarted/installation.html#install-conda</a>
I have used <code>conda install kivy -c conda-forge</code> command.
But i got an error.</p>
<p>Before Kivy library, i had only installed <code>pip install Django==5.1.3</code> without a problem.
After that when i try to install Kivy, i have below error.</p>
<p>Error:</p>
<pre><code>---------- -------
pip 24.2
setuptools 75.1.0
wheel 0.44.0
(tutorialEnv) C:\Users\mecra\OneDrive\Desktop\Python>pip install Django==5.1.3
Collecting Django==5.1.3
Downloading Django-5.1.3-py3-none-any.whl.metadata (4.2 kB)
Collecting asgiref<4,>=3.8.1 (from Django==5.1.3)
Using cached asgiref-3.8.1-py3-none-any.whl.metadata (9.3 kB)
Collecting sqlparse>=0.3.1 (from Django==5.1.3)
Using cached sqlparse-0.5.1-py3-none-any.whl.metadata (3.9 kB)
Collecting tzdata (from Django==5.1.3)
Using cached tzdata-2024.2-py2.py3-none-any.whl.metadata (1.4 kB)
Downloading Django-5.1.3-py3-none-any.whl (8.3 MB)
ββββββββββββββββββββββββββββββββββββββββ 8.3/8.3 MB 408.7 kB/s eta 0:00:00
Using cached asgiref-3.8.1-py3-none-any.whl (23 kB)
Using cached sqlparse-0.5.1-py3-none-any.whl (44 kB)
Using cached tzdata-2024.2-py2.py3-none-any.whl (346 kB)
Installing collected packages: tzdata, sqlparse, asgiref, Django
Successfully installed Django-5.1.3 asgiref-3.8.1 sqlparse-0.5.1 tzdata-2024.2
(tutorialEnv) C:\Users\mecra\OneDrive\Desktop\Python>pip list
Package Version
---------- -------
asgiref 3.8.1
Django 5.1.3
pip 24.2
setuptools 75.1.0
sqlparse 0.5.1
tzdata 2024.2
wheel 0.44.0
(tutorialEnv) C:\Users\mecra\OneDrive\Desktop\Python>conda install kivy -c conda-forge
Retrieving notices: ...working... done
Channels:
- conda-forge
- defaults
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: | warning libmamba Added empty dependency for problem type SOLVER_RULE_UPDATE
failed
LibMambaUnsatisfiableError: Encountered problems while solving:
- package kivy-1.10.1-py27h7bc4a79_2 requires python >=2.7,<2.8.0a0, but none of the providers can be installed
Could not solve for environment specs
The following packages are incompatible
ββ kivy is installable with the potential options
β ββ kivy [1.10.1|1.11.0|1.11.1] would require
β β ββ python >=2.7,<2.8.0a0 , which can be installed;
β ββ kivy 1.10.1 would require
β β ββ python >=3.5,<3.6.0a0 , which can be installed;
β ββ kivy [1.10.1|1.11.0|1.11.1|2.0.0|2.0.0rc4] would require
β β ββ python >=3.6,<3.7.0a0 , which can be installed;
β ββ kivy [1.10.1|1.11.0|...|2.1.0] would require
β β ββ python >=3.7,<3.8.0a0 , which can be installed;
β ββ kivy [1.11.1|2.0.0|...|2.3.0] would require
β β ββ python >=3.8,<3.9.0a0 , which can be installed;
β ββ kivy [2.0.0|2.1.0|2.2.0|2.2.1|2.3.0] would require
β β ββ python >=3.10,<3.11.0a0 , which can be installed;
β ββ kivy [2.0.0|2.0.0rc4|...|2.3.0] would require
β β ββ python >=3.9,<3.10.0a0 , which can be installed;
β ββ kivy [2.2.1|2.3.0] would require
β ββ python >=3.11,<3.12.0a0 , which can be installed;
ββ pin-1 is not installable because it requires
ββ python 3.12.* , which conflicts with any installable versions previously reported.
</code></pre>
<p>How can i fix this problem?</p>
<p>Thanks</p>
|
<python><python-3.x><kivy><conda>
|
2024-11-13 14:01:17
| 1
| 313
|
MECRA YAVCIN
|
79,185,191
| 1,323,014
|
How do we use custom Python module in Langflow?
|
<p>Ok, if we try to use the hosted version from Datastax, I don't see any way to install python modules into it and all custom components cannot be made due to the module not being installed.</p>
<p>Hosted by Datastax: <a href="https://astra.datastax.com/langflow/" rel="nofollow noreferrer">https://astra.datastax.com/langflow/</a></p>
<p>If we self-host langflow, we are able to install the custom module as it shows:
<a href="https://i.sstatic.net/ZL1XJI0m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZL1XJI0m.png" alt="enter image description here" /></a></p>
<p>But I still got the error, when I tried to use it in a custom module:
<a href="https://i.sstatic.net/vTNt0dmo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTNt0dmo.png" alt="enter image description here" /></a></p>
|
<python><cassandra><datastax><datastax-astra><langflow>
|
2024-11-13 13:45:36
| 1
| 7,550
|
Marcus Ataide
|
79,185,168
| 1,219,317
|
RuntimeError: r.nvmlDeviceGetNvLinkRemoteDeviceType_ INTERNAL ASSERT FAILED at
|
<p>I am writing a Python code that trains a classifier to classify samples (10 sentences per sample). I am using <code>Sentence_Transformer</code> with <a href="https://github.com/socsys/GASCOM" rel="nofollow noreferrer">additional layers</a> and running the model training on a linux server. The code is below. The part that matters is the last part of the code, specifically when fitting the model.</p>
<pre><code>import math
import logging
from datetime import datetime
import pandas as pd
import numpy as np
import sys
import os
import csv
from sentence_transformers import models, losses
from sentence_transformers import LoggingHandler, SentenceTransformer, util, InputExample
from torch.utils.data import DataLoader
from collections import Counter
from LabelAccuracyEvaluator import *
from SoftmaxLoss import *
from layers import Dense, MultiHeadAttention
from sklearn.utils import resample
import torch
import random
import json
model_name = sys.argv[1] if len(sys.argv) > 1 else 'distilroberta-base'
train_batch_size = 8
model_save_path = 'Slashdot/output/gascom_hate_attention_' + model_name.replace("/", "-") # this is the line for saving the model you need for random walks
word_embedding_model = models.Transformer(model_name)
# Apply mean pooling to get one fixed sized sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
pooling_mode_cls_token=False,
pooling_mode_max_tokens=False)
dense_model = Dense.Dense(in_features=3*760, out_features=6) #called last , u, v, u-v
multihead_attn = MultiHeadAttention.MultiHeadAttention(760, 5, batch_first=True)
# idea is every attention head should be learning something new and that is why you need different q,k, and v. Now I understand!
linear_proj_q = Dense.Dense(word_embedding_model.get_word_embedding_dimension(), 760)
linear_proj_k = Dense.Dense(word_embedding_model.get_word_embedding_dimension(), 760)
linear_proj_v = Dense.Dense(word_embedding_model.get_word_embedding_dimension(), 760)
linear_proj_node = Dense.Dense(word_embedding_model.get_word_embedding_dimension(), 760) #760 to 760
model = SentenceTransformer(modules=[word_embedding_model, multihead_attn, dense_model, linear_proj_q, linear_proj_k, linear_proj_v, linear_proj_node])
model_uv = SentenceTransformer(modules=[word_embedding_model, pooling_model])# w?
train_samples = []
test_samples = []
# Load and clean training dataset
trainset = pd.read_csv('Slashdot/random-walks/S_train_simil_random_walk.csv')
trainset = trainset.fillna('')
# Create a label mapping: Map each unique string label to an integer
unique_labels = trainset['label'].unique()
label_mapping = {label: idx for idx, label in enumerate(unique_labels)}
# Process train set and convert string labels to integer labels using the mapping
for i in range(len(trainset)):
texts = []
for j in range(1, 11):
texts.append(trainset.iloc[i]['sent' + str(j)])
# Convert string label to integer using the mapping
label = label_mapping[trainset.iloc[i]['label']]
train_samples.append(InputExample(texts=texts, label=label))
# Split into train and dev sets (80/20 split)
dev_samples = train_samples[math.ceil(0.8 * len(train_samples)):]
train_samples = train_samples[:math.ceil(0.8 * len(train_samples))]
# Load and clean test dataset
testset = pd.read_csv('Slashdot/random-walks/S_test_simil_random_walk.csv')
testset = testset.fillna('')
# Convert string labels to integer labels using the same mapping for the test set
for i in range(len(testset)):
texts = []
for j in range(1, 11):
texts.append(testset.iloc[i]['sent' + str(j)])
# Convert string label to integer using the same mapping
label = label_mapping[testset.iloc[i]['label']]
test_samples.append(InputExample(texts=texts, label=label))
# Count the number of samples for each numerical category (label)
train_labels = [example.label for example in train_samples]
dev_labels =[example.label for example in dev_samples]
test_labels = [example.label for example in test_samples]
# Count occurrences of each label in the train, valid, and test sets
train_label_count = Counter(train_labels)
dev_label_count = Counter(dev_labels)
test_label_count = Counter(test_labels)
# Print the counts for each label
print("Label mapping (string to integer):", label_mapping)
print("Initial Train set label distribution:", train_label_count)
print("Initial Valid set label distribution:", dev_label_count)
print("Initial Test set label distribution:", test_label_count)
print('length of train samples=', len(train_samples))
print('length of dev samples=', len(dev_samples))
print('length of test samples=', len(test_samples))
#BALANCING DATASET-------------------------------------------------BALANCING DATASET----------------------------------------------------
# Load the synonym dictionary from the JSON file
with open('Slashdot/synonym_dic.json', 'r') as f:
synonym_dict = json.load(f)
def get_synonyms(word):
"""Get synonyms from the pre-defined dictionary."""
return synonym_dict.get(word.lower(), [])
def replace_with_synonyms(sentence, num_replacements=2):
"""Replace words with synonyms using a hardcoded dictionary, preserving punctuation."""
words = sentence.split()
new_words = []
for word in words:
# Capture punctuation to reattach it after replacement
prefix = ""
suffix = ""
# Check and remove leading punctuation
while word and word[0] in '.,!?':
prefix += word[0]
word = word[1:]
# Check and remove trailing punctuation
while word and word[-1] in '.,!?':
suffix += word[-1]
word = word[:-1]
clean_word = word # word without punctuation
# Skip words that don't have a good replacement
if len(clean_word) < 4:
new_words.append(prefix + clean_word + suffix)
continue
# Get synonyms using the dictionary
synonyms = get_synonyms(clean_word)
if synonyms:
# Replace the word with a random synonym
replacement = random.choice(synonyms)
# Maintain the original case
if clean_word[0].isupper():
replacement = replacement.capitalize()
new_words.append(prefix + replacement + suffix)
# Uncomment to debug replacement
#print(clean_word, 'replaced with', replacement)
else:
new_words.append(prefix + clean_word + suffix)
return ' '.join(new_words)
def augment_sample(sample, num_augments=1):
"""Augment sample sentences using the hardcoded synonym dictionary."""
augmented_samples = []
for _ in range(num_augments):
new_texts = []
for sentence in sample.texts:
#print('**SENTENCE:', sentence)
new_sentence = replace_with_synonyms(sentence)
new_texts.append(new_sentence)
#print('**NEW SENTENCE:', new_sentence)
#print('----------------------------------------------------------')
augmented_samples.append(InputExample(texts=new_texts, label=sample.label))
return augmented_samples
def oversample_to_balance(label_count,samples,dataset_name):
# Oversample to balance classes
print('Balancing',dataset_name,'data:')
max_count = max(label_count.values())
balanced_samples = []
for label, count in label_count.items():
label_samples = [sample for sample in samples if sample.label == label]
if count < max_count:
print('balancing',label,'from',count,'to',max_count,'...')
augment_count = max_count - count
aug_samples = [augment_sample(sample)[0] for sample in resample(label_samples, n_samples=augment_count)]
balanced_samples.extend(aug_samples)
print('balanced')
balanced_samples.extend(label_samples)
return balanced_samples
# Update the samples with the balanced set
train_samples = oversample_to_balance(train_label_count,train_samples,'Train')
dev_samples = oversample_to_balance(dev_label_count,dev_samples,'Dev')
test_samples = oversample_to_balance(test_label_count,test_samples,'Test')
train_label_count = Counter([sample.label for sample in train_samples])
dev_label_count = Counter([sample.label for sample in dev_samples])
test_label_count = Counter([sample.label for sample in test_samples])
print("Balanced Train set label distribution:", train_label_count)
print("Balanced Dev set label distribution:", dev_label_count)
print("Balanced Test set label distribution:", test_label_count)
print('length of train samples=', len(train_samples))
print('length of dev samples=', len(dev_samples))
print('length of test samples=', len(test_samples))
#----------------------------------------------------------------------------------------------------------------------------------------
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=train_batch_size)
dev_dataloader = DataLoader(dev_samples, shuffle=True, batch_size=train_batch_size)
test_dataloader = DataLoader(test_samples, shuffle=True, batch_size=train_batch_size)
# Ensure that CUDA is available and get the device name
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print('CUDA Available:', torch.cuda.is_available())
if torch.cuda.is_available():
print('GPU in use:', torch.cuda.get_device_name(0))
# You can check memory usage like this:
if torch.cuda.is_available():
print(f"Allocated GPU Memory: {torch.cuda.memory_allocated()} bytes")
print(f"Cached GPU Memory: {torch.cuda.memory_reserved()} bytes")
#############################################GPU Check########################################################
print(f"Total training samples: {len(train_samples)}")
for i in range(1):
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=train_batch_size)
train_loss = SoftmaxLoss(model=model, model_uv=model_uv, multihead_attn=multihead_attn, linear_proj_q=linear_proj_q,
linear_proj_k=linear_proj_k, linear_proj_v=linear_proj_v, linear_proj_node=linear_proj_node,
sentence_embedding_dimension=pooling_model.get_sentence_embedding_dimension(),
num_labels=6)
dev_evaluator = LabelAccuracyEvaluator(dev_dataloader, name='sts-dev', softmax_model=train_loss)
num_epochs = 3
warmup_steps = math.ceil(len(train_dataloader) * num_epochs * 0.1) #10% of train data for warm-up, weight initialised randomly I can check that
print('fitting...')
# Train the model
model.fit(train_objectives=[(train_dataloader, train_loss)],
evaluator=dev_evaluator,
epochs=num_epochs,
evaluation_steps=1000, # after 1000 examples the evaluation will happen on the validation set (development).
warmup_steps=warmup_steps,
output_path=model_save_path
)
test_evaluator = LabelAccuracyEvaluator(test_dataloader, name='sts-test', softmax_model=train_loss)
test_evaluator(model, output_path=model_save_path)
</code></pre>
<p>When I run the code, I am getting the below error:</p>
<pre><code>fitting...
Currently using DataParallel (DP) for multi-gpu training, while DistributedDataParallel (DDP) is recommended for faster training. See https://sbert.net/docs/sentence_transformer/training/distributed.html for more information.
0%| | 0/19638 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/zaid/GASCOM-main/Slashdot/gascom_train.py", line 250, in <module>
model.fit(train_objectives=[(train_dataloader, train_loss)],
File "/home/zaid/.local/lib/python3.10/site-packages/sentence_transformers/fit_mixin.py", line 374, in fit
trainer.train()
File "/home/zaid/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2052, in train
return inner_training_loop(
File "/home/zaid/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2388, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/zaid/.local/lib/python3.10/site-packages/transformers/trainer.py", line 3485, in training_step
loss = self.compute_loss(model, inputs)
File "/home/zaid/.local/lib/python3.10/site-packages/sentence_transformers/trainer.py", line 344, in compute_loss
loss = loss_fn(features, labels)
File "/home/zaid/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/zaid/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/zaid/GASCOM-main/Slashdot/SoftmaxLoss.py", line 78, in forward
reps = [self.model.module[0](sentence_feature)['token_embeddings'] for sentence_feature in sentence_features]
File "/home/zaid/GASCOM-main/Slashdot/SoftmaxLoss.py", line 78, in <listcomp>
reps = [self.model.module[0](sentence_feature)['token_embeddings'] for sentence_feature in sentence_features]
File "/home/zaid/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/zaid/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/zaid/.local/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 350, in forward
output_states = self.auto_model(**trans_features, **kwargs, return_dict=False)
File "/home/zaid/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/zaid/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/zaid/.local/lib/python3.10/site-packages/transformers/models/roberta/modeling_roberta.py", line 912, in forward
embedding_output = self.embeddings(
File "/home/zaid/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/zaid/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/zaid/.local/lib/python3.10/site-packages/transformers/models/roberta/modeling_roberta.py", line 125, in forward
embeddings = inputs_embeds + token_type_embeddings
RuntimeError: r.nvmlDeviceGetNvLinkRemoteDeviceType_ INTERNAL ASSERT FAILED at "../c10/cuda/driver_api.cpp":33, please report a bug to PyTorch. Can't find nvmlDeviceGetNvLinkRemoteDeviceType: /lib/x86_64-linux-gnu/libnvidia-ml.so.1: undefined symbol: nvmlDeviceGetNvLinkRemoteDeviceType
</code></pre>
|
<python><pytorch><gpu><nvidia><sentence-transformers>
|
2024-11-13 13:41:04
| 2
| 2,281
|
Travelling Salesman
|
79,185,128
| 10,277,250
|
Why does `ConversationalRetrievalChain/RetrievalQA` include prompt in answer that cause recursive text growth?
|
<p>I am building RAG Chatbot on my own data using <a href="https://python.langchain.com/docs/introduction" rel="nofollow noreferrer">langchain</a>. There are a lot of guidelines how to do it, for example <a href="https://medium.com/@murtuza753/using-llama-2-0-faiss-and-langchain-for-question-answering-on-your-own-data-682241488476" rel="nofollow noreferrer">this one</a></p>
<p>Most of the guides recommend to use <code>ConversationalRetrievalChain</code>. However, I noticed that it recursively analyze previous texts multiple times, leading to a quadratic increase in the length of text with each new question. Is it expected behavior? How to fix this?</p>
<h2>Minimal Reproducible Example</h2>
<p>For simplicity, let's ignore embedder (assume that there is no relevant documents). So the base code will be next:</p>
<pre class="lang-py prettyprint-override"><code>import gradio as gr
from langchain.chains import ConversationalRetrievalChain
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from langchain_core.vectorstores import InMemoryVectorStore
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
LLM_MODEL_NAME = 'meta-llama/Llama-3.2-1B-Instruct' # can be any other model
EMBEDDER_MODEL_NAME = 'dunzhang/stella_en_1.5B_v5' # doesn't matter here
model = AutoModelForCausalLM.from_pretrained(LLM_MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(LLM_MODEL_NAME)
llm_pipeline = pipeline(
'text-generation',
model=model,
tokenizer=tokenizer,
max_new_tokens=256,
)
llm = HuggingFacePipeline(pipeline=llm_pipeline)
# just mock of embedder and vector store
embedder = HuggingFaceEmbeddings(model_name=EMBEDDER_MODEL_NAME)
vector_store = InMemoryVectorStore(embedder)
retriever = vector_store.as_retriever()
chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=retriever,
return_source_documents=True,
)
def predict(message: str, history: list[list[str]]) -> str:
history = [tuple(record) for record in history]
result = chain.invoke({
'question': message,
'chat_history': history,
})
return result['answer']
gr.ChatInterface(predict).launch()
</code></pre>
<p>When I run this code, the model recursively analyze the same part more and more times. This part is highlighted in the red box on the screen:</p>
<p><a href="https://i.sstatic.net/82tSNqnT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82tSNqnT.png" alt="output" /></a></p>
<p>PS This behavior also happens in case of <code>RetrievalQA</code> chain</p>
<p>UPD I find similar issue in this <a href="https://reddit.com/r/LangChain/comments/1c06txb/retrievalqa_chain_returning_generated_answer" rel="nofollow noreferrer">Reddit post</a> for <code>RetrievalQA</code> chain, but it doesn't have a useful answer</p>
|
<python><langchain><large-language-model><llama><rag>
|
2024-11-13 13:33:04
| 0
| 363
|
Abionics
|
79,185,022
| 6,197,439
|
Changing background color of inidividiual row (verticalHeader) labels in PyQt5 QTableView?
|
<p>In my application, I would like to conditionally change the text and background color of arbitrary row labels (verticalHeader) of <code>QTableView</code>.</p>
<p>In the example below, to simplify things, all I'm trying to do is to change the row (verticalHeader) label of the second row (i.e. row with index or section 1) when the button is pressed - specifically to red text color and greenish background color.</p>
<p>As far as I'm aware, the way to do this in <code>QTableView</code> is from the <code>headerData</code> method, using the <code>Qt.TextColorRole</code> and <code>Qt.BackgroundRole</code>, which is what I did in in the example below. Upon application start, the following GUI state is rendered (on Windows 10, MINGW64 Python3 and PyQt5 libraries):</p>
<p><a href="https://i.sstatic.net/0vOT31CY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0vOT31CY.png" alt="GUI app start" /></a></p>
<p>... but after I press the "Toggle row label indicate" button, I get this:</p>
<p><a href="https://i.sstatic.net/4aXwRsHL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aXwRsHL.png" alt="GUI after button press" /></a></p>
<p>... where the <code>headerData</code> <code>Qt.TextColorRole</code> changes work (that is, the text color of the row label for the second row becomes red, as expected), but the <code>Qt.BackgroundRole</code> does not work (the background color of the row label for the second row does not change).</p>
<p>So, what am I doing wrong, and how can I get the background color of the row label for the second row changed (hopefully, without having to write a new QHeaderView class)?</p>
<p>This seems to have been a long-standing question (e.g. <a href="https://forum.qt.io/topic/28279" rel="nofollow noreferrer">https://forum.qt.io/topic/28279</a> from 2013), but I've never found anything with a simple, minimal reproducible example of the problem - and no solutions either - so I thought it was worth it asking again.</p>
<p>The post <a href="https://stackoverflow.com/questions/27574808/how-to-control-header-background-color-in-a-table-view-with-model">How to control header background color in a table view with model?</a> simply suggests using QCSS; however if you uncomment the line <code>self.table_view.setStyleSheet(table_view_qcss)</code> in the example below, it is visible that QCSS changes <strong>all</strong> header labels - and the exact same behavior is reproduced upon button press, just with gray header labels as a starting point; so, after button press, the GUI state rendered is this:</p>
<p><a href="https://i.sstatic.net/iVqhPiQj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVqhPiQj.png" alt="GUI with QCSS after button press" /></a></p>
<p>... that is, again, <code>headerData</code> <code>Qt.TextColorRole</code> changes work, <code>Qt.BackgroundRole</code> does not work.</p>
<p>I have noted that <a href="https://doc.qt.io/qt-5/qheaderview.html" rel="nofollow noreferrer">https://doc.qt.io/qt-5/qheaderview.html</a> states:</p>
<blockquote>
<p>Not all ItemDataRoles will have an effect on a QHeaderView. If you need to draw other roles, you can subclass QHeaderView and reimplement paintEvent(). QHeaderView respects the following item data roles, ...:</p>
<p>TextAlignmentRole, DisplayRole, FontRole, DecorationRole, ForegroundRole, and <strong>BackgroundRole</strong>.</p>
</blockquote>
<p>... which strongly suggests <code>Qt.BackgroundRole</code> <em>should</em> have worked; however there is also the part left out in the quote above:</p>
<blockquote>
<p>... <strong>unless</strong> they are in <strong>conflict</strong> with the style (which can happen for styles that follow the desktop theme)</p>
</blockquote>
<p>... which then implies why <code>Qt.BackgroundRole</code> might fail to perform.</p>
<p>So, I found <a href="https://stackoverflow.com/questions/13837403/qtbackgroundrole-seems-to-be-ignored">Qt::BackgroundRole seems to be ignored</a>, where an answer hints at the following as solution:</p>
<blockquote>
<ol start="2">
<li><p>For specific table or header view, use style that respects brushes:</p>
<p>//auto keys = QStyleFactory::keys();
if(auto style = QStyleFactory::create("Fusion")) {
verticalHeader()->setStyle(style);
}</p>
</li>
</ol>
</blockquote>
<p>Something similar is also mentioned in <a href="https://www.qtcentre.org/threads/13094-QTableView-headerData()-color" rel="nofollow noreferrer">https://www.qtcentre.org/threads/13094-QTableView-headerData()-color</a> :</p>
<blockquote>
<p>The header doesn't use the delegate. It's a very limited class, so if you want something fancy, you'll have to subclass QHeaderView and implement it yourself. ...</p>
<p>Qt tries to follow the platform style. If Windows doesn't allow header colours to be modified, they won't be. You could run your application with a different style (using -style stylename switch, i.e. -style plastique) on Windows and it'll probably work then. ...</p>
<p>You can "cheat" even more by changing the style of the header only and leaving the rest of the application running the default style. That's how stylesheets work, by the way...</p>
</blockquote>
<p>... but you can see that in my example, I've tried doing <code>self.table_view.verticalHeader().setStyle(QStyleFactory.create("Fusion"))</code>, and it seems to make no difference (<code>headerData</code> <code>Qt.BackgroundRole</code> still fails to change the target row label background color).</p>
<p>The same answer in <a href="https://stackoverflow.com/q/13837403">Qt::BackgroundRole seems to be ignored</a> also mentions:</p>
<blockquote>
<ol>
<li>You can also achieve it by using own item delegates - inherit from QStyledItemDelegate or whatever else, reimplement one method and set it to view.</li>
</ol>
</blockquote>
<p>... but then, I see in <a href="https://doc.qt.io/qt-5/qheaderview.html" rel="nofollow noreferrer">https://doc.qt.io/qt-5/qheaderview.html</a>:</p>
<blockquote>
<p>Note: Each header renders the data for each section itself, and does not rely on a delegate. As a result, calling a header's <code>setItemDelegate()</code> function will have <strong>no effect</strong>.</p>
</blockquote>
<p>... so now I really don't know what to think anymore ...</p>
<p>So, is there any way to get a PyQt5 QTableView background color of arbitrary row labels changed, using <code>headerData</code> and <code>Qt.BackgroundRole</code>?</p>
<p>Here is the example code:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import (Qt, QSize, QPointF)
from PyQt5.QtGui import (QColor, QPalette, QPixmap, QBrush, QPen, QPainter)
from PyQt5.QtWidgets import (QWidget, QVBoxLayout, QPushButton, QStyleFactory)
# starting point from https://www.pythonguis.com/tutorials/qtableview-modelviews-numpy-pandas/
class TableModel(QtCore.QAbstractTableModel):
def __init__(self, data, parview):
super(TableModel, self).__init__()
self._data = data
self._parview = parview # parent table view
#
def rowCount(self, index):
return len(self._data)
#
def columnCount(self, index):
return len(self._data[0])
#
def data(self, index, role):
if role == Qt.DisplayRole:
return self._data[index.row()][index.column()]
if role == Qt.BackgroundRole: # https://stackoverflow.com/q/57321588
return QColor("#AA{:02X}{:02X}".format(
index.row()*20+100, index.column()*20+100
))
#
def headerData(self, section, orientation, role=Qt.DisplayRole): # https://stackoverflow.com/q/64287713
if orientation == Qt.Vertical:
if role == Qt.TextColorRole:
# only for the second row, where section == 1
if (section == 1) and (self._parview.rowLabelIndicate):
return QColor("red")
if role == Qt.BackgroundRole:
# only for the second row, where section == 1
if (section == 1) and (self._parview.rowLabelIndicate):
return QColor("#88FF88") #QBrush(QColor("#88FF88"))
#
# without the super call, no text is printed on header view labels!
return super().headerData(section, orientation, role)
class MainWindow(QtWidgets.QMainWindow):
def __init__(self):
super().__init__()
self.centralw = QWidget()
self.setCentralWidget(self.centralw)
self.vlayout = QVBoxLayout(self.centralw)
#
self.btn = QPushButton("Toggle row label indicate")
self.btn.clicked.connect(self.toggleRowLabelIndicate)
self.vlayout.addWidget(self.btn)
#
self.table_view = QtWidgets.QTableView()
self.table_view.rowLabelIndicate = False # dynamically added attribute/property
print(f"{QStyleFactory.keys()=}") # for me, it is: ['windowsvista', 'Windows', 'Fusion']
self.table_view.verticalHeader().setStyle(QStyleFactory.create("Fusion")) # passes here, no crash!
self.table_view.verticalHeader().setMinimumWidth(28) # int; seems to be in pixels
table_view_qcss = "QHeaderView::section { background-color:#DDDDDD }"
#self.table_view.setStyleSheet(table_view_qcss)
data = [
[ 1, 2, 3 ],
[ "hello", "world", "42" ],
[ 4, "more", "words" ],
]
self.model = TableModel(data, self.table_view)
self.table_view.setModel(self.model)
self.vlayout.addWidget(self.table_view)
#
self.resizeToContents()
#
def resizeToContents(self):
self.table_view.resizeColumnsToContents()
self.table_view.resizeRowsToContents()
#
def toggleRowLabelIndicate(self):
self.table_view.rowLabelIndicate = not(self.table_view.rowLabelIndicate)
change_row_start = change_row_end = 1
self.model.headerDataChanged.emit(Qt.Vertical, change_row_start, change_row_end)
app=QtWidgets.QApplication(sys.argv)
window=MainWindow()
window.show()
window.resize(180, 160)
app.exec_()
</code></pre>
|
<python><pyqt5><qt5>
|
2024-11-13 13:08:15
| 1
| 5,938
|
sdbbs
|
79,184,742
| 22,407,544
|
How to bold text in template using django template tags
|
<p>For example if I want to put an 'About' section or a 'Terms of Use' section in my website and want some of the subheadings to be bolded or made into headers. How could I use template tags to achieve this? My plan is to write the About section or Terms of Use in my <code>models</code> and use template tags to format the subheadings and any text that should be bolded. Is there a better way to do this?</p>
|
<python><django>
|
2024-11-13 11:46:29
| 1
| 359
|
tthheemmaannii
|
79,184,672
| 6,681,932
|
plotly is not updating the info correctly with dropdown interactivity
|
<p>I'm facing an issue with updating the median line on a <code>plotly</code> scatter plot when interacting with a <code>dropdown</code>. The dropdown allows the user to select a column (Y-axis), and I want the median of the selected Y-axis to update accordingly. However, when I select a new variable from the dropdown, the median line does not update as expected.</p>
<p>I share a toy sample data:</p>
<pre><code>import pandas as pd
df_input = pd.DataFrame({
'rows': range(1, 101),
'column_a': [i + (i % 10) for i in range(1, 101)],
'column_b': [i * 2 for i in range(1, 101)],
'column_c': [i ** 0.5 for i in range(1, 101)],
'outlier_prob': [0.01 * (i % 10) for i in range(1, 101)]
})
</code></pre>
<p>Here is the function I use</p>
<pre><code>import plotly.graph_objects as go
def plot_dq_scatter_dropdown(df):
# Initialize the figure
fig = go.Figure()
# Function to add median lines (vertical for rows, horizontal for selected Y)
def add_median_lines(y):
fig.data = [] # Clear previous data
# Add a scatter trace for the selected Y variable
fig.add_trace(go.Scatter(
x=df["rows"],
y=df[y],
mode='markers',
marker=dict(color=df['outlier_prob'], colorscale='viridis', showscale=True, colorbar=dict(title='Outlier Probability')),
hoverinfo='text',
text=df.index, # Or use other columns for hover data if needed
name=f'{y} vs rows', # This will still be used for the hover and data display
showlegend=False # Hide the legend for each individual trace
))
# Calculate medians for both X and selected Y
median_x = df["rows"].median() # Median of X (rows)
median_y = df[y].median() # Median of selected Y-variable
# Add vertical median line for 'rows'
fig.add_vline(x=median_x, line=dict(color="orange", dash="dash", width=2),
annotation_text="Median rows", annotation_position="top left")
# Add horizontal median line for selected Y-variable
fig.add_hline(y=median_y, line=dict(color="orange", dash="dash", width=2),
annotation_text=f"Median {y}, {median_y}", annotation_position="top left")
# Update layout after adding the data and median lines
fig.update_layout(
title=f"Scatter Plot: rows vs {y}",
xaxis_title="rows",
yaxis_title=y,
autosize=True
)
# Add a dropdown menu for selecting the Y-axis variable
fig.update_layout(
updatemenus=[dict(
type="dropdown",
x=0.17,
y=1.15,
showactive=True,
buttons=[
dict(
label=f"{y}",
method="update",
args=[{
'y': [df[y]],
'x': [df["rows"]],
'type': 'scatter',
'mode': 'markers',
'marker': dict(color=df['outlier_prob'], colorscale='viridis', showscale=True, colorbar=dict(title='Outlier Probability')),
'hoverinfo': 'text',
'text': df.index,
'name': f'{y} vs rows',
'showlegend': False
}, {
'title': f"Scatter Plot: rows vs {y}",
'yaxis.title': y
}]
) for y in df.columns if y not in ["rows", "outlier_prob"]
]
)]
)
# Display the initial plot (default to the second column for the first plot)
add_median_lines(df.columns[1])
# Show the plot
fig.show()
</code></pre>
<p>Here is the example of function call:</p>
<pre><code># Call the function to plot the graph
plot_dq_scatter_dropdown(df_input)
</code></pre>
<p>This is the error I face visually:</p>
<p><a href="https://i.sstatic.net/WvKQMRwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WvKQMRwX.png" alt="column_b selected but horizontal remains as column_a" /></a></p>
<p>The horizontal trace, outlined in green, is unexpectedly constant as <code>column_a</code>, since it is the one I interact with in the drop-down was <code>column_b</code>. The vertical trace is correct to be fixed since it does not interact with that axis.</p>
|
<python><plotly><visualization><interactive>
|
2024-11-13 11:24:28
| 1
| 478
|
PeCaDe
|
79,184,502
| 12,932,447
|
Optional type annotation string in SQLModel
|
<p>I'm working on a FastAPI/SQLModel project and, since we've deprecated Python 3.9, I'm replacing each <code>Optional[X]</code> with <code>X | None</code>.</p>
<p>I have a problem with <a href="https://sqlmodel.tiangolo.com/tutorial/relationship-attributes/type-annotation-strings/" rel="nofollow noreferrer">Type annotation strings</a>.</p>
<p>For example, take this class</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional
class OAuthAccount(SQLModel, table=True):
__tablename__ = "oauthaccount"
id: int | None = Field(default=None, primary_key=True)
user: Optional["User"] = Relationship(back_populates="oauth_accounts")
</code></pre>
<p>If I replace the last type hint with <code>"User" | None</code> I get this error</p>
<pre><code>E TypeError: unsupported operand type(s) for |: 'str' and 'NoneType'
</code></pre>
<p>Is there any way to solve this, or am I stuck with <code>Optional</code>?</p>
<p>Thanks</p>
|
<python><fastapi><python-typing><sqlmodel>
|
2024-11-13 10:41:37
| 1
| 875
|
ychiucco
|
79,184,437
| 16,171,413
|
How to parse the result of applicant__count in Model.objects.values("applicant", 'counter').annotate(Count("applicant")) to counter field?
|
<p>I have a model with these fields although there are other fields but this is my MRE:</p>
<pre><code>class Application(models.Model):
applicant = models.ForeignKey(User, on_delete=models.CASCADE, to_field='email')
company = models.CharField(max_length=100)
counter = models.PositiveIntegerField(editable=False, default=0)
</code></pre>
<p>I want to find the number of applications in the table for each applicant and parse the value automatically to the counter field. In my views.py, I have been able to use:</p>
<pre><code>model = Application.objects.values('applicant','counter').annotate(Count("applicant"))
</code></pre>
<p>which returns correct values:</p>
<pre><code>{'applicant': 'test@users.com', 'counter': 1, 'applicant__count': 2} {'applicant': 'second@user.org', 'counter': 1, 'applicant__count': 4}
</code></pre>
<p>But I am unable to extract the value of `applicant__count` and parse it directly to the counter field in models.py.</p>
<p>I tried using the update, update_or_create method but I'm not able to update the model. I also tried django signals pre_save and post_save but they keep incrementing every value. For example, one applicant can have many job applications but instead of returning the total number of job applications for an applicant, django signals increments all the applications in the table.</p>
<p>Is there any way to automatically save the result of `applicant__count` to my counter field? I would really appreciate any help.</p>
|
<python><django><django-models><django-views><django-signals>
|
2024-11-13 10:24:50
| 1
| 5,413
|
Uchenna Adubasim
|
79,184,247
| 578,822
|
Best practices for using @property with Enum values on a Django model for DRF serialization
|
<p><strong>Question:</strong> I'm looking for guidance on using @property on a Django model, particularly when the property returns an Enum value and needs to be exposed in a Django REST Framework (DRF) serializer. Hereβs my setup:</p>
<p>Iβve defined an Enum, AccountingType, to represent the possible accounting types:</p>
<pre><code>from enum import Enum
class AccountingType(Enum):
ASSET = "Asset"
LIABILITY = "Liability"
UNKNOWN = "Unknown"
</code></pre>
<p>On my Account model, I use a @property method to determine the accounting_type based on existing fields:</p>
<pre><code># Account fields ...
@property
def accounting_type(self) -> AccountingType:
"""Return the accounting type for this account based on the account sub type."""
if self.account_sub_type in constants.LIABILITY_SUB_TYPES:
return AccountingType.LIABILITY
if self.account_sub_type in constants.ASSET_SUB_TYPES:
return AccountingType.ASSET
return AccountingType.UNKNOWN
</code></pre>
<p>In Django views, I can use this property directly without issues. For example:</p>
<pre><code>account = Account.objects.get(id=some_id)
if account.accounting_type == AccountingType.LIABILITY:
print("This account is a liability.")
</code></pre>
<p><strong>Problem:</strong> When trying to expose <code>accounting_type</code> in DRF, using <code>serializers.ReadOnlyField()</code> does not include the property in the serialized output:</p>
<pre><code>class AccountDetailSerializer(serializers.ModelSerializer):
accounting_type = serializers.ReadOnlyField()
class Meta:
model = Account
fields = ['accounting_type', 'account_id', ...]
</code></pre>
<p>I found that switching to <code>serializers.SerializerMethodField()</code> resolves the issue, allowing me to return the Enum value as a string:</p>
<pre><code>class AccountDetailSerializer(serializers.ModelSerializer):
accounting_type = serializers.SerializerMethodField()
class Meta:
model = Account
fields = ['accounting_type', 'account_id', ...]
def get_accounting_type(self, obj):
return obj.accounting_type.value # Return the Enum value as a string
</code></pre>
<p><strong>Questions:</strong></p>
<ol>
<li>Is there a reason serializers.ReadOnlyField() doesnβt work with @property when it returns an Enum? Does DRF handle @property fields differently based on the return type?</li>
<li>Is SerializerMethodField the recommended approach when a property returns a complex type, like an Enum, that needs specific serialization?
Are there best practices for exposing Enum values via model properties in DRF?</li>
</ol>
<p>Any insights would be appreciated.</p>
|
<python><python-3.x><django><django-rest-framework>
|
2024-11-13 09:36:07
| 2
| 33,805
|
Prometheus
|
79,184,226
| 10,618,857
|
Unexpected Exception using Concurrent Futures
|
<p>I am working on an evolutionary algorithm in Python. To speed things up, I am parallelizing the evaluation of the population using <code>concurrent.futures</code> and its class <code>ProcessPoolExecutor</code>.</p>
<p>The algorithm works for networks with up to 6 inputs. I tried to run it on networks with 8 inputs but an unexpected exception was generated.</p>
<p>Here you are the code I use to parallelize the evaluation:</p>
<pre class="lang-py prettyprint-override"><code> def select_best(self, population: list)
start_time = time.time()
ns = [self.inputs for _ in range(len(population))]
# ----
with ProcessPoolExecutor(cpus) as executor:
fitness_list = list(executor.map(compute_fitness, population, ns))
# ----
individuals_and_fitness = sorted(
zip(population, fitness_list), key=lambda x: x[1], reverse=True)
best_individuals = [individual for individual,
_ in individuals_and_fitness[:self.population_size]]
best_fitness_scores = [
fitness for _, fitness in individuals_and_fitness[:self.population_size]]
self.fitness_history.append(best_fitness_scores[0])
return best_individuals, best_fitness_scores
</code></pre>
<p>The <code>compute_fitness</code> function is this one:</p>
<pre class="lang-py prettyprint-override"><code>
def compute_fitness(individual, n=2):
p = Phenotype(individual)
nn = NNFromGraph(p, inputs=n, outputs=1)
if nn.r == 0:
return 0
outputs = []
targets = []
# Generate all possible combinations of n binary inputs
for combination in itertools.product([0, 1], repeat=n):
input_data = torch.tensor(combination, dtype=torch.float32)
# Get the output from the neural network
output = nn(input_data)
outputs.append(output.item())
# Compute the expected parity of the input combination
expected_parity = sum(combination) % 2
targets.append(expected_parity)
return normalized_mutual_info_score(outputs, targets)
</code></pre>
<p>After 640 generation (approx. 500 minutes), the following exception was thrown:</p>
<pre><code>Exception in thread QueueManagerThread:
Traceback (most recent call last):
Β File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
Β Β self.run()
Β File "/usr/lib/python3.8/threading.py", line 870, in run
Β Β self._target(*self._args, **self._kwargs)
Β File "/usr/lib/python3.8/concurrent/futures/process.py", line 394, in _queue_management_worker
Β Β work_item.future.set_exception(bpe)
Β File "/usr/lib/python3.8/concurrent/futures/_base.py", line 547, in set_exception
Β Β raise InvalidStateError('{}: {!r}'.format(self._state, self))
concurrent.futures._base.InvalidStateError: CANCELLED: <Future at 0x7fa9a464b3d0 state=cancelled>
Killed
</code></pre>
<p>I am running the code on a remote machine with 128 cores.</p>
<p>Another detail that may be important is that I noticed a weird behavior of the program: running the same code on my laptop (Mac Book Pro M3, 12 cores) or on the remote machine takes the same time for the evaluation, even if more than 10x cores can be used.</p>
<p>Using <code>htop</code> I can see that all cores are used for a short time, and then the execution goes back to the single core.</p>
<p>I also tried verifying that the bottleneck is indeed the evaluation and not the evolutionary algorithm. It is safe to say that the evaluation is almost 10x more time-consuming than the evolution.</p>
<p>Moreover, changing the fitness function with a dummy one that outputs random fitness values without evaluation seems to increase the time almost 4x.</p>
<p>Do you know how can I solve the problem?</p>
<p>Thank you in advance!</p>
|
<python><multithreading><concurrent.futures><evolutionary-algorithm>
|
2024-11-13 09:31:13
| 0
| 945
|
Eminent Emperor Penguin
|
79,183,942
| 4,451,521
|
Why the max_iter parameter has no effect (or the contrary effect) on this logistic regression?
|
<p>I have this code that does logistic regression</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import RocCurveDisplay, roc_auc_score, confusion_matrix
from sklearn.model_selection import KFold
from loguru import logger
data_path="../data/creditcard.csv"
df=pd.read_csv(data_path)
df=df.drop("Time",axis=1)
print(df.head())
print(f"Shape: {df.shape}")
# Randomly sampling 50% of all the normal data points
# in the data frame and picking out all of the anomalies from the data
# frame as separate data frames.
normal=df[df.Class==0].sample(frac=0.5,random_state=2020).reset_index(drop=True)
anomaly=df[df.Class==1]
print(f"Normal: {normal.shape}")
print(f"Anomalies: {anomaly.shape}")
# split the normal and anomaly sets into train-test
normal_train,normal_test=train_test_split(normal,test_size=0.2,random_state=2020)
anomaly_train,anomaly_test=train_test_split(anomaly,test_size=0.2,random_state=2020)
# From there split train into train validate
normal_train,normal_validate=train_test_split(normal_train,test_size=0.25,random_state=2020)
anomaly_train,anomaly_validate=train_test_split(anomaly_train,test_size=0.25,random_state=2020)
# Create the whole sets
x_train =pd.concat((normal_train,anomaly_train))
x_test=pd.concat((normal_test,anomaly_test))
x_validate=pd.concat((normal_validate, anomaly_validate))
y_train=np.array(x_train["Class"])
y_test=np.array(x_test["Class"])
y_validate=np.array(x_validate["Class"])
x_train=x_train.drop("Class",axis=1)
x_test=x_test.drop("Class",axis=1)
x_validate=x_validate.drop("Class",axis=1)
print("Training sets:\nx_train: {} \ny_train: {}".format(x_train.shape, y_train.shape))
print("Testing sets:\nx_test: {} \ny_test: {}".format(x_test.shape, y_test.shape))
print("Validation sets:\nx_validate: {} \ny_validate: {}".format(x_validate.shape, y_validate.shape))
# Scale the data
scaler= StandardScaler()
scaler.fit(pd.concat((normal,anomaly)).drop("Class",axis=1))
x_train=scaler.transform(x_train)
x_test=scaler.transform(x_test)
x_validate=scaler.transform(x_validate)
def train(sk_model,x_train,y_train):
sk_model=sk_model.fit(x_train,y_train)
train_acc=sk_model.score(x_train,y_train)
logger.info(f"Train Accuracy: {train_acc:.3%}")
def evaluate(sk_model,x_test,y_test):
eval_acc=sk_model.score(x_test,y_test)
preds=sk_model.predict(x_test)
auc_score=roc_auc_score(y_test,preds)
print(f"Auc Score: {auc_score:.3%}")
print(f"Eval Accuracy: {eval_acc:.3%}")
roc_plot = RocCurveDisplay.from_estimator(sk_model, x_test, y_test, name='Scikit-learn ROC Curve')
plt.savefig("sklearn_roc_plot.png")
plt.show()
plt.clf()
conf_matrix=confusion_matrix(y_test, preds)
ax=sns.heatmap(conf_matrix,annot=True,fmt='g')
ax.invert_xaxis()
ax.invert_yaxis()
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.title("Confusion Matrix")
plt.savefig("sklearn_conf_matrix.png")
sk_model= LogisticRegression(random_state=None, max_iter=400, solver='newton-cg')
# sk_model= LogisticRegression(random_state=None, max_iter=1, solver='newton-cg')
train(sk_model,x_train,y_train)
evaluate(sk_model,x_test,y_test)
</code></pre>
<p>using as data the Credit Card Fraud detection from <a href="https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud?resource=download" rel="nofollow noreferrer">here</a> (in case to reproduce the results I am going to talk about)</p>
<p>The thing is that as you see we have</p>
<pre><code>sk_model= LogisticRegression(random_state=None, max_iter=400, solver='newton-cg')
</code></pre>
<p>with that I got</p>
<pre><code>2024-11-13 16:56:34.087 | INFO | __main__:train:84 - Train Accuracy: 99.894%
Auc Score: 85.341%
Eval Accuracy: 99.874%
</code></pre>
<p>but if I change it to</p>
<pre><code>sk_model= LogisticRegression(random_state=None, max_iter=10, solver='newton-cg')
</code></pre>
<p>I get the same results!</p>
<p>and to be extreme if I change it to</p>
<pre><code>sk_model= LogisticRegression(random_state=None, max_iter=1, solver='newton-cg')
</code></pre>
<p>I get the expected warning</p>
<pre><code>optimize.py:318: ConvergenceWarning: newton-cg failed to converge at loss = 0.1314439039348997. Increase the number of iterations.
</code></pre>
<p>but then I get <em>better</em> results!</p>
<pre><code>2024-11-13 16:58:03.127 | INFO | __main__:train:84 - Train Accuracy: 99.897%
Auc Score: 86.858%
Eval Accuracy: 99.888%
</code></pre>
<p>Why is this happening? I struggle to understand the concept of <code>max_iter</code> in this situation, (I have tried a purely python logistic regression with gradient descent and I kind of understand that in that situation) . Can someone clarify why this is happening?</p>
|
<python><logistic-regression>
|
2024-11-13 08:12:21
| 1
| 10,576
|
KansaiRobot
|
79,183,403
| 2,019,874
|
aspectlib: make an aspect from an instance method
|
<p>Iβm using Pythonβs <strong>aspectlib</strong>. If I try to declare an <em>instance</em> method as an aspect like so</p>
<pre class="lang-py prettyprint-override"><code>from aspectlib import Aspect
class ClassAspect:
@Aspect
def instance_method{self, *args, **kwargs):
</code></pre>
<p>with this corresponding cut-point:</p>
<pre class="lang-py prettyprint-override"><code>import ClassAspect
@ClassAspect.instance_method:
def cross_point():
</code></pre>
<p>, I get an error about <code>self</code>:</p>
<pre><code>TypeError: ClassAspect.instance_method() missing 1 required positional argument: 'self'
</code></pre>
<p>How can I use an <em>instance</em> method as advice in <strong>aspectlib</strong>?</p>
|
<python><python-3.x><aop>
|
2024-11-13 04:15:44
| 0
| 518
|
juanchito
|
79,183,375
| 9,061,561
|
Chroma from_documents Crashes with Exit Code -1073741819 (0xC0000005) Without Error Message
|
<p>I'm working with LangChain and Chroma to perform embeddings on a DataFrame. My DataFrame shape is (1350, 10), and the code for embedding is as follows:</p>
<pre><code>def embed_with_chroma(persist_directory=r'./vector_db/',
db_directory=r'./sql/sop_database.sqlite',
collection_name='sop_vectorstore',
batch_size=200):
"""
Reads all data from an SQLite database, converts it to embeddings, and saves to disk in batches.
"""
# Initialize the embedding model
embedding_model = HuggingFaceEmbeddings(model_name="source_data/BAAI",
model_kwargs={'device': 'cpu'})
# Query to get all data
query = """
SELECT m.model_name AS "model", m.brand AS "brand", m.product_line AS "product_line", g.group_name AS "group", s.step_name AS "step", s.detail
FROM Steps s
JOIN Groups g ON s.group_id = g.group_id
JOIN Models m ON g.model_id = m.model_id
ORDER BY m.model_name, g.group_name, s.step_id;
"""
conn = sqlite3.connect(db_directory)
df = pd.read_sql_query(query, conn)
conn.close()
# Prepare data in batches to manage memory usage
vectorstore = None
total_epochs = (len(df) // batch_size) + 1
epoch = 1
for start in range(0, len(df), batch_size):
print(f'epoch {epoch}/{total_epochs}')
batch_df = df.iloc[start:start + batch_size].copy()
batch_df['merge'] = batch_df.apply(lambda row: f"Model: {row['model']}, Group: {row['group']}, Step: {row['step']}", axis=1)
# Load documents and split into chunks
loader = DataFrameLoader(batch_df, page_content_column='merge')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=512, chunk_overlap=128)
documents_chunks = text_splitter.split_documents(documents)
# Initialize or append to vectorstore
if vectorstore is None:
vectorstore = Chroma.from_documents(documents_chunks,
embedding_model,
collection_name=collection_name,
persist_directory=persist_directory)
else:
vectorstore.add_documents(documents_chunks)
# Free up memory
del batch_df, loader, documents, documents_chunks
gc.collect()
epoch += 1
return vectorstore
</code></pre>
<p>The code crashes when running Chroma.from_documents with the following error message:</p>
<pre><code>Process finished with exit code -1073741819 (0xC0000005)
</code></pre>
<p>I've tried adjusting memory and batch size, but the crash persists without additional error messages. Here are some details that may be relevant:</p>
<ul>
<li>My environment has 8GB of RAM.</li>
<li>Iβm using LangChain and Chroma for embeddings.</li>
<li>The DataFrame has 1350 rows and 10 columns.</li>
</ul>
<p>Any insights on why this might be happening or how to debug this issue further would be greatly appreciated.</p>
<p>Thank you!</p>
|
<python><langchain><chromadb>
|
2024-11-13 03:55:08
| 1
| 429
|
vincentlai
|
79,183,315
| 12,242,085
|
How to remove duplicates and unify values in lists where values are very close to each other in Python?
|
<p>I have in Python lists like below:</p>
<pre><code>x1 = ['lock-service',
'jenkins-service',
'xyz-reporting-service',
'ansible-service',
'harbor-service',
'version-service',
'jira-service',
'kubernetes-service',
'capo-service',
'permission-service',
'artifactory-service',
'vault-service',
'harbor-service-prod',
'rundeck-service',
'cruise-control-service',
'artifactory-service.xyz.abc.cloud',
'helm-service',
'Capo Service',
'rocket-chat-service',
'reporting-service',
'bitbucket-service',
'rocketchat-service']
</code></pre>
<p>or</p>
<pre><code>x2 = ['journal-service',
'lock-service',
'jenkins-service',
'xyz-reporting-service',
'ansible-service',
'harbor-service',
'version-service',
'jira-service',
'kubernetes-service',
'capo-service',
'permission-service',
'artifactory-service',
'vault-service',
'rundeck-service',
'cruise-control-service',
'helm-service',
'database-ticket-service',
'rocket-chat-service',
'ansible-dpservice',
'reporting-service',
'bitbucket-service',
'rocketchat-service']
</code></pre>
<p>As you can see in both lists, duplicate values appear in different forms, for example:</p>
<p>in the list 1:</p>
<ul>
<li>'xyz-reporting-service' and 'reporting-service'</li>
<li>'harbor-service' and 'harbor-service-prod'</li>
<li>'capo-service' and 'Capo Service'</li>
<li>'artifactory-service' and 'artifactory-service.xyz.abc.cloud'</li>
<li>'rocket-chat-service' and 'rocketchat-service'</li>
</ul>
<p>in the list 2:</p>
<ul>
<li>'xyz-reporting-service' and 'reporting-service'</li>
<li>'rocket-chat-service' and 'rocketchat-service'</li>
<li>'ansible-service' and 'ansible-dpservice'</li>
</ul>
<p>I need a universal solution that does not only on these sample lists:</p>
<ul>
<li>will remove the duplicated sample values presented above</li>
<li>unifies the values in the list to the name-service form</li>
</ul>
<p>How can I do that in Python 3.11 ?</p>
|
<python><pandas><list><duplicates>
|
2024-11-13 03:16:58
| 3
| 2,350
|
dingaro
|
79,183,206
| 8,229,029
|
How to properly open NARR Grib1 file in Python using MetPy
|
<p>I am trying to properly open and read a GRIB1 NARR data file as obtained from <a href="https://thredds.rda.ucar.edu/thredds/catalog/files/g/d608000/3HRLY/catalog.html" rel="nofollow noreferrer">https://thredds.rda.ucar.edu/thredds/catalog/files/g/d608000/3HRLY/catalog.html</a>.</p>
<p>I have tried using xr.open_dataset with the engine set to cfgrib. I have tried using several other methods within python, from both the metpy and xarray packages.</p>
<p>There are 248 layers (variables) in the these grib files (R's terra package easily finds them), but no method in Python works. Isn't there a package that will work in Python for working with these files? There must be, but I can't seem to find it. I need to use Python because I want to use the metpy package to calculate advection, vorticity, and other values with the metpy package. And, I really don't want to redownload everything as netcdf files (if that's possible). Thank you.</p>
|
<python><metpy><grib>
|
2024-11-13 02:08:24
| 1
| 1,214
|
user8229029
|
79,183,143
| 14,024,634
|
Driver syntax error when using python sql alchemy. I have driver installed
|
<p>I keep getting a pyodbc error.</p>
<p>Here is my error:</p>
<pre><code>(pyodbc.Error) ('IM012', '[IM012] [Microsoft][ODBC Driver Manager] DRIVER keyword syntax error (0) (SQLDriverConnect)')
</code></pre>
<p>Here is my code:</p>
<pre><code>import pyodbc
from sqlalchemy import create_engine
connection_string = (
'mssql+pyodbc://@server_name/database_name?
driver=ODBC+Driver+17+for+SQL+Server;Trusted_Connection=yes')
engine = create_engine(connection_string)
engine.connect()
</code></pre>
<p>If I check my obdc drivers using the below it shows that I have 'ODBC Driver 17 for SQL Server'.</p>
<pre><code>pyodbc.drivers()
</code></pre>
<p>Output:</p>
<pre><code>['SQL Server',
'Microsoft Access Driver (*.mdb, *.accdb)',
'Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)',
'Microsoft Access Text Driver (*.txt, *.csv)',
'Microsoft Access dBASE Driver (*.dbf, *.ndx, *.mdx)',
'SQL Server Native Client RDA 11.0',
'ODBC Driver 17 for SQL Server']
</code></pre>
<p>Any help would be appreciated, thank you! I did some research into similar issues but was unable to find solution.</p>
|
<python><sqlalchemy><pyodbc>
|
2024-11-13 01:20:04
| 1
| 331
|
Zachary Wyman
|
79,183,136
| 555,129
|
SSH connect to network appliance and run command
|
<p>I have a network appliance that can be connected to with username and password.
On logging in, it shows a login banner (several lines) and then shows a custom shell where one can run only a preset of commands provided by the manufacturer.</p>
<p>What is the best way to connect to this appliance from python script, run commands and get the command output?</p>
<p>Also note that: I need to run a series of commands in one session to be able to get output. For example: first command to change to IP submenu and second command to SET IP.</p>
<p>I tried using Fabric module: to create a connection object and then call connection.run(). But this only presents an interactive shell and not run any commands. Below is example code:</p>
<pre><code>from fabric import Connection
from invoke.exceptions import UnexpectedExit
from invoke.watchers import Responder
def run_network_command(host, username, password, command):
try:
conn = Connection(host=host, user=username, connect_kwargs={
"password": password,
},
)
# Create responder for the custom shell prompt
prompt_pattern = r"\[net-7\.2\] \w+>"
shell_responder = Responder(
pattern=prompt_pattern,
response=f"{command}\n"
)
# Run command in the custom shell
result = conn.run(
command,
pty=True,
watchers=[shell_responder]
)
return result.stdout
except UnexpectedExit as e:
return f"Error executing command: {str(e)}"
except Exception as e:
return f"Connection error: {str(e)}"
finally:
try:
conn.close()
except:
pass
# Example usage
if __name__ == "__main__":
# Connection details
host = "192.168.1.1"
username = "root"
password = "pass"
command = "show version"
# Run command and print output
output = run_network_command(host, username, password, command)
print(output)
</code></pre>
<p>What is the best way to achieve this?</p>
|
<python><ssh><fabric>
|
2024-11-13 01:13:37
| 2
| 1,462
|
Amol
|
79,182,975
| 17,653,423
|
Dependency issues installing python package from Artifact Registry
|
<p>I'm trying to install a private Python package from Artifact Registry in GCP, but I'm getting some dependency errors that only happens when I try to install it using <code>pip</code> and <code>keyrings</code>.</p>
<p>I'm able to download the <code>.tag.gz</code> file from the artifactory and then install it manually from my local environment running: <code>pip install pkg-0.0.1.tar.gz</code></p>
<p>But when I try to follow the <a href="https://medium.com/google-cloud/python-packages-via-gcps-artifact-registry-ce1714f8e7c1#:%7E:text=gcp%20PACKAGE%2DNAME-,Install%20using%20pip,-Install%20keyring%20and" rel="nofollow noreferrer">process</a> to install the package using <code>pip</code> and <code>keyrings</code> it raises dependency issues such as the one below:</p>
<pre><code>$ pip install --no-cache-dir --index-url https://location-python.pkg.dev/project/pkg/simple/ pkg
Looking in indexes: https://location-python.pkg.dev/project/pkg/simple/
Collecting pkg
Downloading https://location-python.pkg.dev/project/pkg/pkg/pkg-0.0.1-py3-none-any.whl (13.6 MB)
ββββββββββββββββββββββββββββββββββββββββ 13.6/13.6 MB 11.4 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of pkg to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement autoflake<3.0.0,>=2.3.1 (from pkg) (from versions: none)
ERROR: No matching distribution found for autoflake<3.0.0,>=2.3.1
</code></pre>
<p>After removing <code>autoflake</code> I encounter the same error but now for <code>backoff</code> package: <code>ERROR: No matching distribution found for backoff<3.0.0,>=2.2.1</code></p>
<p>Since I can install the same package manually I guarantee that the publishing step is working fine (current using Poetry), so it must be something related to the installation process.</p>
<p>The current installation process is as follows:</p>
<pre><code>export GOOGLE_APPLICATION_CREDENTIALS=creds.json
pip install --no-cache-dir keyring keyrings.google-artifactregistry-auth
pip install --no-cache-dir --index-url https://location-python.pkg.dev/project/pkg/simple/ pkg
</code></pre>
<p>Any help?</p>
|
<python><pip><python-poetry><google-artifact-registry>
|
2024-11-12 23:04:53
| 0
| 391
|
Luiz
|
79,182,953
| 10,140,821
|
UnboundLocalError: cannot access local variable in python
|
<p>I have the below <code>python</code> code like below.</p>
<p>From a date I am trying to findout first and last calendar days of the month.</p>
<pre><code>import datetime
import calendar
def calendar_days_month(run_date):
"""
:param run_date: date on which the process is running
:return:
"""
d = datetime.datetime.strptime(run_date, '%Y-%m-%d').date()
first_calendar_day = d.replace(day=1).strftime("%Y-%m-%d")
res = calendar.monthrange(d.year, d.month)[1]
if len(str(d.month)) == 1:
last_month = '%02d' % d.month
last_calendar_day = str(d.year) + '-' + str(last_month) + '-' + str(res)
return first_calendar_day, last_calendar_day
abc_date = '2024-10-31'
test_1, test_2 = calendar_days_month(abc_date)
print(test_2)
print(test_1)
</code></pre>
<p>I am getting the below error.</p>
<pre><code>Traceback (most recent call last):
File "/main.py", line 21, in <module>
test_1, test_2 = calendar_days_month(abc_date)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/main.py", line 16, in calendar_days_month
last_calendar_day = str(d.year) + '-' + str(last_month) + '-' + str(res)
^^^^^^^^^^
UnboundLocalError: cannot access local variable 'last_month' where it is not associated with a value
</code></pre>
<p>How can I fix this error?</p>
|
<python>
|
2024-11-12 22:50:38
| 1
| 763
|
nmr
|
79,182,873
| 219,153
|
How to get original indicies of a polygon after applying shapely.simplify?
|
<p>This Python script:</p>
<pre><code>from shapely import simplify, points, contains, Point
circle = Point(0, 0).buffer(1.0, quad_segs=8).exterior
simple = simplify(circle, 0.1)
</code></pre>
<p>simplifies polygon <code>circle</code> (red) and produces polygon <code>simple</code> (blue) with a subset of <code>circle</code> vertices:</p>
<p><a href="https://i.sstatic.net/gwMDCYjI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwMDCYjI.png" alt="enter image description here" /></a></p>
<p><code>iCircle</code> contains the original indices of <code>simple</code> vertices: <code>[0, 4, 8, 12, 16, 20, 24, 28, 32]</code>:</p>
<pre><code>iCircle = []
for i, p in enumerate(points(circle.coords)):
if contains(simple, p):
iCircle.append(i)
</code></pre>
<p>How to compute it without a costly lookup like the one above?</p>
<hr />
<p>Circle is just a simple example. My question concerns an arbitrary polygon.</p>
|
<python><shapely><simplify>
|
2024-11-12 22:10:19
| 2
| 8,585
|
Paul Jurczak
|
79,182,841
| 2,856,552
|
How do I link my values to a specific column of a shapefile for coloring the map in python?
|
<p>I am trying to color a map based on on values from a csv file. One shapefile works very well, linking the csv to the shapefile based on the header "ADM1_EN" as per per shap shapefile example row 1 and 2 below</p>
<pre><code> ADM1_EN ADM1_PCODE ADM1_TYPE geometry
0 Berea LSD District POLYGON ((27.98656 -28.94791, 27.98670 -28.948...
</code></pre>
<p>The other shapefile, on the other hand is difficult fo me to handle</p>
<pre><code> shapeName shapeISO shapeID shapeGroup shapeType geometry
0 Mokhotlong District LS-J 63558799B62052207870559 LSO ADM1 POLYGON ((28.79248 -28.90741, 28.79586 -28.908...
</code></pre>
<p>I have tried linking on shapeName, it doesn't work. I tried to link on column=2, no success
help will be appreciated.</p>
|
<python>
|
2024-11-12 22:00:04
| 0
| 1,594
|
Zilore Mumba
|
79,182,682
| 13,392,257
|
Nodriver: 'NoneType' object has no attribute 'closed'
|
<p>I am learning nodriver (version 0.37) library <a href="https://ultrafunkamsterdam.github.io/nodriver/nodriver/quickstart.html" rel="nofollow noreferrer">https://ultrafunkamsterdam.github.io/nodriver/nodriver/quickstart.html</a></p>
<p>My actions</p>
<pre><code>python3.11 -m venv venv
source venv/bin/activate
pip install nodriver
</code></pre>
<p>I am trying this code</p>
<pre><code>import nodriver as uc
async def main():
browser = await uc.start()
page = await browser.get('https://www.nowsecure.nl')
if __name__ == '__main__':
uc.loop().run_until_complete(main())
</code></pre>
<p>Error</p>
<pre><code>python main.py
Traceback (most recent call last):
File "/Users/mascai/root_folder/dev/projects/54_nodriver/main.py", line 11, in <module>
uc.loop().run_until_complete(main())
File "/opt/homebrew/Cellar/python@3.11/3.11.9_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/mascai/root_folder/dev/projects/54_nodriver/main.py", line 5, in main
browser = await uc.start()
^^^^^^^^^^^^^^^^
File "/Users/mascai/root_folder/dev/projects/54_nodriver/venv/lib/python3.11/site-packages/nodriver/core/util.py", line 96, in start
return await Browser.create(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mascai/root_folder/dev/projects/54_nodriver/venv/lib/python3.11/site-packages/nodriver/core/browser.py", line 91, in create
await instance.start()
File "/Users/mascai/root_folder/dev/projects/54_nodriver/venv/lib/python3.11/site-packages/nodriver/core/browser.py", line 394, in start
await self.connection.send(cdp.target.set_discover_targets(discover=True))
File "/Users/mascai/root_folder/dev/projects/54_nodriver/venv/lib/python3.11/site-packages/nodriver/core/connection.py", line 412, in send
if not self.websocket or self.closed:
^^^^^^^^^^^
File "/Users/mascai/root_folder/dev/projects/54_nodriver/venv/lib/python3.11/site-packages/nodriver/core/connection.py", line 368, in __getattr__
return getattr(self.target, item)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'closed'
successfully removed temp profile /var/folders/t6/7jk0v3817zb2_xfb6bf0vxpc0000gn/T/uc_tedc5epz
</code></pre>
|
<python><nodriver>
|
2024-11-12 20:54:54
| 1
| 1,708
|
mascai
|
79,182,670
| 13,395,230
|
Sorting numbers into bins of the same sum
|
<p>I am trying to find the optimal way to sort a collection of integers into bins that all have the same sum. For simplicity we can subtract the average so we are only interested in bins that add to zero.</p>
<p>Take for example,</p>
<pre><code>X = np.array([-2368, -2143, -1903, -1903, -1888, -1648, -1528, -1318, -1213,
-1153, -1033, -928, -793, -703, -508, -493, -463, -448,
-418, -358, -223, -118, -88, 137, 227, 257, 347,
347, 377, 557, 632, 632, 692, 812, 827, 1007,
1022, 1262, 1352, 1727, 1892, 2267, 2297, 2327, 2642])
</code></pre>
<p>I know, there exists a way for this to form 3 groups of 15 numbers each, where each group adds to zero. But for the life of me, it is not obvious how to systematically find that solution (there could be many such solutions, I only need one).</p>
<p>In this smaller example, we could probably just try every combination, but if the array had a million numbers being split into 1000 bins, such an exhaustion would not work.</p>
|
<python><sorting><optimization><combinatorics>
|
2024-11-12 20:50:32
| 1
| 3,328
|
Bobby Ocean
|
79,182,637
| 236,681
|
Convert blob to bytes SQLite3 Python
|
<p>Have a requirement of storing/retrieving bytes as a blob in SQLite3;
The column definition in as follows:<code>"bytes_as_blob " BLOB,</code>
Byte data is stored in the table using the following construct <code>sqlite3.Binary(some_data)</code> and the data when visualized via DB Browser is as below:</p>
<pre><code><memory at 0x000002157DA24F40>
</code></pre>
<p>However, the issue is me being unable to convert the blob stored in SQLite back to bytes.
The select statement to retrieve data is <code>SELECT uid, bytes_as_blob from a_table LIMIT 10</code> and results from SQLite3 are retuned as a DataFrame. The dtypes for the dataframe columns are <code>dobject</code> ;</p>
<pre><code>df = pd.read_sql_query(sql_statement, conn)
print(f"type(df.loc[1].iat[0]) = {type(df.loc[1].iat[0]))}") # uid
print(f"type(df.loc[1].iat[1]) = {type(df.loc[1].iat[1]))}") # bytes_as_blob
</code></pre>
<p>The type of the Python objects in the DF are of type <code><class 'str'></code></p>
<p>Is there something that that is needed when converting a blob back to bytes - could not find anything in here <a href="https://pandas.pydata.org/docs/user_guide/io.html#io-sql" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/io.html#io-sql</a></p>
<p>Tried <code>BytesIO(df_cell_value).read()</code> which which did not work as expected.</p>
|
<python><pandas><blob><sqlite3-python>
|
2024-11-12 20:31:30
| 1
| 416
|
rahul
|
79,182,495
| 785,400
|
When running my Django test suite in VS Code, I get `AF_UNIX path too long`
|
<p>I'm having a problem with the VS Code test runner. It has worked as expected, but not when I load my nix environment in VS Code.</p>
<p>I'd like to be able to:</p>
<ul>
<li>Have direnv load the nix environment in the VS Code environment, to mirror my shell setup</li>
<li>Have the VS Code test runner discover and run Django tests</li>
</ul>
<p>To load the nix env, I'm using <a href="https://marketplace.visualstudio.com/items?itemName=mkhl.direnv" rel="nofollow noreferrer">this</a> direnv plugin, which:</p>
<blockquote>
<p>adds direnv support to Visual Studio Code by loading environment variables for the workspace root.</p>
</blockquote>
<p>But VS Code cannot discover tests. In the <code>Python Test Log</code> in the <code>Output</code> tab, I see:</p>
<pre><code>pvsc_utils.VSCodeUnittestError: DJANGO ERROR: An error occurred while discovering and building the test suite. Error: Error attempting to connect to extension named pipe /var/folders/l3/tn48czyn38nfr8dqkfbyf8_40000gn/T/nix-shell.MXciF0/python-test-discovery-a054ebbeef511140679d.sock[vscode-unittest]: AF_UNIX path too long
</code></pre>
<p>The path is exceeding long because the socket is now created in the nix env. Is there a way to modify the location that the socket will be created?</p>
<p>I tried to set <code>TMPDIR</code> in my <code>.envrc</code>, but that did not change anything. It <a href="https://github.com/NixOS/nix/issues/7491" rel="nofollow noreferrer">looks like</a> nix unsets TMPDIR.</p>
|
<python><visual-studio-code><django-testing><nix><direnv>
|
2024-11-12 19:34:08
| 0
| 4,189
|
Brian Dant
|
79,182,472
| 8,183,621
|
Force subclass to implement a class property in Python
|
<p>I want to force all subclasses of a class to implement a property. As I understand it, I have to use abc methods for this use case. I tried the following:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
class BaseClass(ABC):
@property
@classmethod
@abstractmethod
def required_property(cls):
"""This property must be implemented by subclasses."""
pass
class MySubClass(BaseClass):
required_property = "Implemented"
class AnotherSubClass(BaseClass):
pass
if __name__ == "__main__":
print(MySubClass.required_property)
print(AnotherSubClass.required_property)
</code></pre>
<p>However,</p>
<ol>
<li>the code executes without errors</li>
<li>mypy complains: <code>property_mypy.py:4:6: error: Only instance methods can be decorated with @property [misc]</code>,</li>
<li>class properties are deprecated in Python 3.11 and will not be supported in Python 3.13 Pylance.</li>
</ol>
<p>What is the pythonic way to do this?</p>
|
<python><oop><abstract-class>
|
2024-11-12 19:25:23
| 0
| 625
|
PascalIv
|
79,182,471
| 7,991,581
|
Pymysql badly formats parameters
|
<p>I just moved to a new venv with pymysql version upgraded from v1.1.0 to v1.1.1 and I now have SQL errors when parameters are formatted by the <code>Cursor.execute</code> function</p>
<hr />
<p>Here is an example</p>
<pre class="lang-py prettyprint-override"><code>client_id = float(2)
params = (client_id)
print(f"Parameters :\n\t{params}")
try:
cursor.execute("SELECT * FROM clients WHERE id = %s", params)
except:
print("EXCEPTION")
print(f"Query :\n\t{cursor._executed}")
</code></pre>
<p>And here is the executed query</p>
<pre class="lang-bash prettyprint-override"><code>Parameters :
(2.0)
Query :
SELECT * FROM clients WHERE id = 2.0e0
</code></pre>
<p>It's not an issue if I retrieve data, however when I'm inserting or updating data, it fails</p>
<pre class="lang-py prettyprint-override"><code>client_id = float(2)
balance = np.float64(1000)
params = (client_id, balance)
print(f"Parameters :\n\t{params}")
try:
cursor.execute("UPDATE client_balances SET balance = %s WHERE client_id = %s", params)
except:
print("EXCEPTION")
print(f"Query :\n\t{cursor._executed}")
</code></pre>
<p>Output :</p>
<pre class="lang-bash prettyprint-override"><code>Parameters :
(2.0, np.float64(1000.0))
EXCEPTION
Query :
UPDATE client_balances SET balance = np.float64(1000)e0 WHERE client_id = 2.0e0
</code></pre>
<p>If I check the exception I get the following error</p>
<pre class="lang-bash prettyprint-override"><code>ProgrammingError - (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'e0, balance = np.float64(1000)e0, WHERE client_id = 2.0e0' at line 1")
</code></pre>
<hr />
<p>I realized that if I manually force parameters formatting as string, it does work.</p>
<pre class="lang-py prettyprint-override"><code>client_id = float(2)
balance = np.float64(1000)
params = (client_id, balance)
print(f"Parameters :\n\t{params}")
refactored_params = []
for param in params:
refactored_params.append(str(param))
try:
cursor.execute("UPDATE client_balances SET balance = %s WHERE client_id = %s", refactored_params)
except:
print("EXCEPTION")
print(f"Query :\n\t{cursor._executed}")
</code></pre>
<p>Output :</p>
<pre class="lang-bash prettyprint-override"><code>Parameters :
(2.0, np.float64(1000.0))
Query :
UPDATE client_balances SET balance = '1000' WHERE client_id = '2.0'
</code></pre>
<hr />
<p>So I could implement a fix where I manually convert the parameters before calling <code>execute</code>, but since I had no issue before changing my venv I'd like to understand if I'm doing something wrong instead of making a workaround that is not really efficient.</p>
<p>Moreover I can't believe that pymysql can't correctly convert parameters into a query, so I think I'm certainly doing something wrong.</p>
<p>Any thoughts about this issue ? Should I either manually force string conversion or change the implementation ?</p>
|
<python><sqlalchemy><pymysql>
|
2024-11-12 19:25:14
| 1
| 924
|
Arkaik
|
79,182,265
| 732,570
|
how can I recover (or understand the h5debug output of) my hdf5 file?
|
<p>I have a hdf5 file that is so large I have to use my home fileserver to write the data (4.04TB, according to macOS's Finder). It is a collection of logits that takes several hours to calculate, and for some reason, after calculating the last chunk of data, it failed in a bad way.</p>
<p>I now see:</p>
<pre><code>h5debug /Volumes/MacBackup-1/gguf/baseline_logits.hdf5
Reading signature at address 0 (rel)
File Super Block...
File name (as opened): /Volumes/MacBackup-1/gguf/baseline_logits.hdf5
File name (after resolving symlinks): /Volumes/MacBackup-1/gguf/baseline_logits.hdf5
File access flags 0x00000000
File open reference count: 1
Address of super block: 0 (abs)
Size of userblock: 0 bytes
Superblock version number: 0
Free list version number: 0
Root group symbol table entry version number: 0
Shared header version number: 0
Size of file offsets (haddr_t type): 8 bytes
Size of file lengths (hsize_t type): 8 bytes
Symbol table leaf node 1/2 rank: 4
Symbol table internal node 1/2 rank: 16
Indexed storage internal node 1/2 rank: 32
File status flags: 0x00
Superblock extension address: 18446744073709551615 (rel)
Shared object header message table address: 18446744073709551615 (rel)
Shared object header message version number: 0
Number of shared object header message indexes: 0
Address of driver information block: 18446744073709551615 (rel)
Root group symbol table entry:
Name offset into private heap: 0
Object header address: 96
Cache info type: Symbol Table
Cached entry information:
B-tree address: 136
Heap address: 680
Error in closing file!
HDF5: infinite loop closing library
L,T_top,F,P,P,Z,FD,VL,VL,PL,E,SL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL
</code></pre>
<p>I am not clear what is actually wrong with it from that debug output. In terms of real size, I think it is less than 4TB:</p>
<pre><code>ls -la /Volumes/MacBackup-1/gguf/baseline_logits.hdf5
-rwx------@ 1 macdev staff 3.7T Nov 12 12:21 /Volumes/MacBackup-1/gguf/baseline_logits.hdf5
</code></pre>
<p>Here's my script's log when it failed, it was not a very specific error message:</p>
<pre><code>[471] 114207.41 ms [472] 24712.48 ms [473] 120010.91 ms [474] 134073.39 ms
INFO - Processed 4 chunks
INFO - Final file size: 3832472.77 MB
Running from 475 to 478
INFO - generate_logits starting (version 0.5.3)
INFO - Loaded precomputed tokens from /Users/Shared/Public/huggingface/salamandra-2b-instruct/imatrix/oscar/calibration-dataset.txt.tokens.npy
INFO - Processing chunks from 475 to 478
INFO - Estimated runtime: 6.11 minutes for 3 remaining chunks
[475] 122266.14 ms [476] 27550.59 ms ERROR - Unexpected error occurred: Can't decrement id ref count (unable to close file, errno = 9, error message = 'Bad file descriptor')
Error occurred. Exiting.
</code></pre>
<p>That was as the file was just exceeding 4TB (depending on how you look at it), which seems suspicious, but it is writing (from a Mac) to a windows 11 machine with a 16Tb disk with 13Tb free before this started, formatted in NTFS. My SMB info says I am connected with smb_3.1.1, with <code>LARGE_FILE_SUPPORTED TRUE</code>, which I would hope would give me the 16Tb available to NTFS.</p>
<p>How can I recover (or understand the h5debug output of) my hdf5 file?</p>
|
<python><hdf5>
|
2024-11-12 18:06:19
| 0
| 4,737
|
roberto tomΓ‘s
|
79,182,217
| 735,926
|
How to define two versions of the same TypedDict, one total and one not?
|
<p>Iβm trying to define two versions of a typed dict for an API client, one <code>total=False</code> for a partial-update route input and another <code>total=True</code> for the response. Any dict with a subset of the fields is valid as an input, but output dicts must have all the fields present.</p>
<p>I tried this:</p>
<pre class="lang-py prettyprint-override"><code>class PartialDict(TypedDict, total=False):
name: str
age: int
class FullDict(PartialDict, total=True):
pass
</code></pre>
<p>But it doesnβt work, as Mypy 1.13 doesnβt complain about any of these:</p>
<pre class="lang-py prettyprint-override"><code>x: PartialDict = {} # ok
y: FullDict = {} # should fail
</code></pre>
<p>If I reverse the inheritance and make <code>PartialDict</code> inherit from <code>FullDict</code> that defines the fields, Mypy complains about both lines:</p>
<pre><code>mymodule/types.py:38: error: Missing keys ("name", "age") for TypedDict "PartialDict" [typeddict-item]
mymodule/types.py:39: error: Missing keys ("name", "age") for TypedDict "FullDict" [typeddict-item]
</code></pre>
<p>How can I define the types such as a <code>FullDict</code> must have all the keys but a <code>PartialDict</code> may not have all of them? I would like to avoid duplicating the classes as my real-world dict has dozens of keys.</p>
|
<python><python-typing>
|
2024-11-12 17:53:27
| 2
| 21,226
|
bfontaine
|
79,182,050
| 3,404,377
|
How can I satisfy mypy when I have a potential callable that involves Self?
|
<p>I have a dataclass that has a field that might be a constant or might be a function taking <code>Self</code>. There's a helper function that just does the right thing -- if the field contains a constant, it returns the constant. If the field contains a function, it calls the function using <code>self</code>:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Self, Callable
@dataclass
class MyClass:
my_func_field: str | Callable[[Self], str]
def my_field(self) -> str:
if isinstance(self.my_func_field, str):
return self.my_func_field
else:
return self.my_func_field(self)
</code></pre>
<p>mypy doesn't like this (<a href="https://mypy-play.net/?mypy=latest&python=3.12&gist=95a979b419760949ca82d2f682e595b6" rel="nofollow noreferrer">playground</a>):</p>
<pre><code>main.py:12: error: Argument 1 has incompatible type "MyClass"; expected "Self" [arg-type]
</code></pre>
<p>However, if I simplify the situation so it's just a callable, I don't run into this problem:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Self, Callable
@dataclass
class MyClass:
my_func_field: Callable[[Self], str]
def my_field(self) -> str:
return self.my_func_field(self)
</code></pre>
<p>This is surprising to me. I assumed that the <code>else</code> branch in the first example would type-check exactly the same as the second example, because in the <code>else</code> branch, <code>my_func_field</code> has been narrowed to <code>Callable[[Self], str]</code>, which is exactly the same type as the second example.</p>
<p>What am I missing here? Is it possible to get mypy to accept something, or do I have to use an <code># ignore</code> comment?</p>
|
<python><python-typing><mypy>
|
2024-11-12 16:54:13
| 2
| 1,131
|
ddulaney
|
79,181,977
| 1,739,725
|
In python (pytz), how can I add a "day" to a datetime in a DST-aware fashion? (Sometimes the result should be 23 or 25 hours in in the future)
|
<p>I'm doing some datetime math in python with the pytz library (although I'm open to using other libraries if necessary). I have an iterator that needs to increase by one day for each iteration of the loop. The problem comes when transitioning from November 3rd to November 4th in the Eastern timezone, which crosses the daylight saving boundary (there are 25 hours between the start of November 3rd and the start of November 5th, instead of the usual 24). Whenever I add a "day" that crosses the boundary, I get a time that is 24 hours in the future, instead of the expected 25.</p>
<p>This is what I've tried:</p>
<pre><code>import datetime
import pytz
ET = pytz.timezone("US/Eastern")
first_day = ET.localize(datetime.datetime(2024, 11, 3))
next_day = first_day + datetime.timedelta(days=1)
first_day.isoformat() # '2024-11-03T00:00:00-04:00'
next_day.isoformat() # '2024-11-04T00:00:00-04:00'
assert next_day == ET.localize(datetime.datetime(2024, 11, 4)) # This fails!!
# I want next_day to be '2024-11-04T00:00:00-05:00' or '2024-11-04T01:00:00-04:00'
</code></pre>
<p>I also tried throwing a <code>normalize()</code> in there, but that didn't produce the right result either:</p>
<pre><code>ET.normalize(next_day).isoformat() # '2024-11-03T23:00:00-05:00'
</code></pre>
<p>(That's one hour earlier than my desired output)</p>
<p>I suppose I could make a copy of my start_day that increments the <code>day</code> field, but then I'd have to be aware of month and year boundaries, which doesn't seem ideal to me.</p>
|
<python><datetime><timezone><python-datetime><pytz>
|
2024-11-12 16:33:01
| 2
| 2,186
|
Pwnosaurus
|
79,181,967
| 357,313
|
What should df.plot(sharex=True) do?
|
<p>I have trouble understanding <code>sharex</code> in pandas plotting. I can make it work using matplotlib, however I also tried the <code>sharex</code> argument to <code>df.plot()</code>. I don't know <em>what</em> it does, but it is not what I expect.</p>
<p>A simple example (with a Series, but the same happens with a DataFrame):</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
t = np.arange(0, 20, 0.01)
s = pd.Series(np.sin(t), index=t)
ax1 = plt.subplot(211)
ax2 = plt.subplot(212)
s.plot(ax=ax1, sharex=True)
s.plot(ax=ax2, sharex=True)
plt.xlim(0, 6)
plt.show()
</code></pre>
<p>The only effect I see is that the top chart hides its x axis labels. At the very least, I would expect <em>both</em> charts to limit their x axis to [0, 6], which now only happens for the bottom one. I also expect synchronized zooming.</p>
<p>The <a href="https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.plot.html#:%7E:text=subplots%3DTrue%2C-,share%20x%20axis,-and%20set%20some" rel="nofollow noreferrer">documentation</a> says:</p>
<blockquote>
<p>sharex : bool, default True if ax is None else False<br />
In case <code>subplots=True</code>, share x axis and set some x axis labels
to invisible; defaults to True if ax is None otherwise False if
an ax is passed in; Be aware, that passing in both an ax and
<code>sharex=True</code> will alter all x axis labels for all axis in a figure.</p>
</blockquote>
<p>What am I missing? Is this behavior 'correct'? What does "share x axis" actually mean? I would like to understand the design. I do <em>not</em> need a workaround.</p>
<p>I am not the only user who is <a href="https://stackoverflow.com/search?q=%5Bpandas%5D%20sharex">confused</a>.</p>
|
<python><pandas><matplotlib>
|
2024-11-12 16:29:18
| 1
| 8,135
|
Michel de Ruiter
|
79,181,960
| 10,452,700
|
Problem with symbol opacity of errorbar within legend
|
<p>I'm trying to indicate perfectly symbols within legend once I want to plot complicated combinations of line and errorbar in grid plots. I noticed that it's not easy to apply desired opacity for any symbol kinds when they are error bar.</p>
<p>I have tried following checking this <a href="https://stackoverflow.com/q/12848808#59629242">post</a> unsuccessfully.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib.collections import PathCollection
from matplotlib.legend_handler import HandlerPathCollection, HandlerLine2D, HandlerErrorbar
x1 = np.linspace(0,1,8)
y1 = np.random.rand(8)
# Compute prediction intervals
sum_of_squares_mid = np.sum((x1 - y1) ** 2)
std_mid = np.sqrt(1 / (len(x1) - 2) * sum_of_squares_mid)
# Plot the prediction intervals
y_err_mid = np.vstack([std_mid, std_mid]) * 1.96
plt.plot(x1, y1, 'bo', label='label', marker=r"$\clubsuit$", alpha=0.2) # Default alpha is 1.0.
plt.errorbar(x1, y1, yerr=y_err_mid, fmt="o", ecolor="#FF0009", capsize=3, color="#FF0009", label="Errorbar", alpha=.1) # Default alpha is 1.0.
def update(handle, orig):
handle.update_from(orig)
handle.set_alpha(1)
plt.legend(handler_map={PathCollection : HandlerPathCollection(update_func = update),
plt.Line2D : HandlerLine2D( update_func = update),
plt.errorbar : HandlerErrorbar( update_func = update) # I added this but it deos not apply alpha=1 only for errobar symbol in legend
})
plt.show()
</code></pre>
<p>My current output:</p>
<p><a href="https://i.sstatic.net/516Z8bmH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/516Z8bmH.png" alt="resulting plot without updated alpha" /></a></p>
|
<python><matplotlib><seaborn><legend><opacity>
|
2024-11-12 16:27:16
| 1
| 2,056
|
Mario
|
79,181,706
| 66,191
|
SQLAlchemy nested query depending on parent query
|
<p>How do I write a sqlalchemy subquery which depends on the parent query tables.</p>
<p>I have the following SQLAlchemy models.</p>
<pre><code>class BaseModel( DeclarativeBase ):
pass
class ABC( BaseModel ):
__tablename__ = "ABC"
__table_args__ = (
Index( "index1", "account_id" ),
ForeignKeyConstraint( [ "account_id" ], [ "A.id" ], onupdate = "CASCADE", ondelete = "CASCADE" ),
)
id: Mapped[ int ] = mapped_column( primary_key = True, autoincrement = True )
account_id: Mapped[ int ]
account_idx: Mapped[ int ]
class A( BaseModel ):
__tablename__ = "A"
__table_args__ = (
Index( "index1", "downloaded", "idx" ),
)
id: Mapped[ int ] = mapped_column( primary_key = True, autoincrement = True )
downloaded: Mapped[ date ]
account_id: Mapped[ str ] = mapped_column( String( 255 ) )
display_name: Mapped[ str ] = mapped_column( String( 255 ) )
idx: Mapped[ int ]
type: Mapped[ str ] = mapped_column( String( 255 ) )
</code></pre>
<p>I need to implement the following SQL in sqlalchemy but I can't work out how to join the subquery correctly.</p>
<pre><code>select
a.account_id,
(
select
group_concat( a2.account_id )
from
ABC abc
left join A a2 on a2.downloaded = a.downloaded and a2.idx = abc.account_idx
where
abc.account_id = a.id
) as 'brokerage_client_accounts'
from
A a
where
a.downloaded = "2024-11-12" and
a.type != 'SYSTEM'
order by
a.account_id
;
</code></pre>
<p>I have this so far but it doesn't work..</p>
<pre><code>A2 = aliased( A )
brokerage_client_accounts_subq = select(
func.aggregate_strings( A2.account_id, "," ).label( "accounts" ),
).select_from(
A
).outerjoin(
A2,
and_( A2.downloaded == A.downloaded, A2.idx == ABC.account_idx )
).where(
ABC.account_id == A.id
)
stmt = select(
Account.account_id,
brokerage_client_accounts_subq.c.accounts,
).where(
and_(
A.downloaded == date( 2024, 11, 12 ),
A.type != "SYSTEM"
)
).order_by(
Account.account_id
)
</code></pre>
<p>I get the following errors</p>
<pre><code>SAWarning: SELECT statement has a cartesian product between FROM element(s) "anon_1" and FROM element "A". Apply join condition(s) between each element to resolve.
mysql.connector.errors.ProgrammingError: 1054 (42S22): Unknown column 'ABC.account_idx' in 'on clause'
</code></pre>
<p>I think this is because the subquery isn't joining to the "parent" <code>A</code> table</p>
<p>The SQL it produces is... ( I've reformatted it for readability )</p>
<pre><code>SELECT
`A`.account_id,
anon_1.accounts
FROM
`A`,
(
SELECT
group_concat( `A_1`.account_id SEPARATOR %(aggregate_strings_1)s) AS accounts
FROM
`A` LEFT OUTER JOIN `A` AS `A_1` ON `A_1`.downloaded = `A`.downloaded AND `A_1`.idx = `ABC`.account_idx, `ABC`
WHERE
`ABC`.account_id = `A`.id
) AS anon_1
WHERE
`A`.downloaded = %(downloaded_1)s AND `A`.type != %(type_1)s
ORDER BY
`A`.account_id
Params:
{ 'aggregate_strings_1': ',', 'downloaded_1': datetime.date(2024, 11, 12), 'type_1': 'SYSTEM' }
</code></pre>
<p>Which seems to bear little resemblence to what I require.</p>
<p>Can anyone help please.</p>
|
<python><mysql><sqlalchemy>
|
2024-11-12 15:15:31
| 1
| 2,975
|
ScaryAardvark
|
79,181,689
| 13,175,203
|
Polars `read_csv()` to read from string and not from file
|
<p>Is it possible to read from string with <code>pl.read_csv()</code> ? Something like this, which would work :</p>
<pre class="lang-py prettyprint-override"><code>content = """c1, c2
A,1
B,3
C,2"""
pl.read_csv(content)
</code></pre>
<p>I know of course about this :</p>
<pre><code>pl.DataFrame({"c1":["A", "B", "C"],"c2" :[1,3,2]})
</code></pre>
<p>But it is error-prone with long tables and you have to count numbers to know which value to modify.</p>
<p>I also know about dictionaries but I have more than 2 columns in my real life example.</p>
<p><strong>Context</strong>: I used to <code>fread()</code> content with R data.table and it was very useful, especially when you want to convert a column with the help of a join, instead of complicated <code>ifelse()</code> statements</p>
<p>Thanks !</p>
|
<python><python-polars>
|
2024-11-12 15:11:23
| 1
| 491
|
Samuel Allain
|
79,181,429
| 1,469,465
|
Django not found in Github actions
|
<p>I have the following CI pipeline defined in Github Actions. It is using the same container as which the production server is using. The pipeline was running fine last week, but this week it suddenly stopped working. Some observations from the run logs:</p>
<ul>
<li>We start with upgrading pip, but this doesn't seem to happen</li>
<li>The dependencies are installed correctly, but it gives a warning that pip can be upgraded</li>
<li>Running fake migrations immediately fails with <code>ModuleNotFoundError: No module named 'django'</code>.</li>
</ul>
<p>Any ideas how I can debug this to investigate what is going wrong?</p>
<pre><code> test_project:
runs-on: ubuntu-latest
container:
image: python:3.11-slim
strategy:
max-parallel: 2
matrix:
python-version: [ "3.11" ]
services:
postgres:
image: postgres:14
env:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
ports:
- 5432:5432
options: >-
--health-cmd "pg_isready -U postgres"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install Dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Check missing migrations
run: python project/manage.py makemigrations --check --dry-run --settings project.settings.local
- name: Run Tests
run: pytest --cov=project project/
</code></pre>
<p><strong>EDIT</strong></p>
<p>When running this (instead of running <code>pip</code> directly), it does work as intended:</p>
<pre><code>python -m pip install -r requirements.txt
python -m pip install -r requirements-test.txt
</code></pre>
|
<python><django><pip><github-actions>
|
2024-11-12 13:56:27
| 0
| 6,938
|
physicalattraction
|
79,181,422
| 1,614,355
|
How to add tools in navigation bar of slack using slack python sdk?
|
<p>I am using Python SDK for slack. I would like to know how can I add a tool in slack navigation bar using the python sdk. Please see the image below for reference
<a href="https://i.sstatic.net/AJtMEUs8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJtMEUs8.png" alt="Slack navigation bar tool" /></a></p>
<p>I am using the latest slack bolt sdk. And I am kind of new to slack. So chances are I might be using wrong term for the navigation bar tool. So I am using the image for reference. Thanks for your help</p>
|
<python><slack><slack-api>
|
2024-11-12 13:55:48
| 0
| 357
|
Tiklu Ganguly
|
79,181,238
| 3,357,352
|
Selectively calling test functions with capsys
|
<p>I can selectively run <code>test_function_1</code>, overriding instrumentation from my <code>conftest.py</code> fixtures<sup>1</sup></p>
<pre class="lang-py prettyprint-override"><code>def test_function_1(instrumentation: dict[str, float]) -> None:
assert instrumentation['a'] > instrumentation['b']
def test_function_2(capsys) -> None:
print("Hello, pytest!")
captured = capsys.readouterr()
assert captured.out == "Hello, pytest!\n"
</code></pre>
<p>when I try to call <code>test_function_2</code>, I don't know how to pass <code>capsys</code> to it<sup>2</sup> :</p>
<pre class="lang-py prettyprint-override"><code>import tests
import pytest # <--- doesn't help ...
def test_callee_with_instrumentation():
tests.test_function_1({'a': 110, 'b': 55, 'g': 6000})
def test_callee_with_capsys():
# tests.test_function_2() # <--- TypeError: test_function_2() missing 1 required positional argument: 'capsys'
# tests.test_function_2(capsys) # <--- NameError: name 'capsys' is not defined
# tests.test_function_2(pytest.capsys) # <--- AttributeError: module 'pytest' has no attribute 'capsys'
pass
test_callee_with_instrumentation()
test_callee_with_capsys()
</code></pre>
<p>I'm pretty sure the <code>conftest.py</code> fixtures are irrelevant, but for completeness:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.fixture(scope='function')
def instrumentation():
return { 'a': 800, 'b': 620, 'c': 44 }
</code></pre>
<hr />
<p><sup>1</sup> In my real code, the <code>capsys</code> is one of <em>many</em> parameters.</p>
<p><sup>2</sup> A similar question exists <a href="https://stackoverflow.com/questions/38594296/how-to-use-logging-pytest-fixture-and-capsys">here</a>. It is <em>not</em> a duplicate IMHO, because I'm asking about <em>programmatically</em> running tests, and not about the proper meaning of <code>capsys</code>.</p>
|
<python><pytest>
|
2024-11-12 13:08:26
| 2
| 7,270
|
OrenIshShalom
|
79,181,224
| 3,266,704
|
Snakemake fails to execute the preample when executing --edit-notebook inside a virtualenv
|
<p>I have the python packages <code>notebook</code> and <code>snakemake</code> installed in my user <code>site-packages</code>.
To use snakemake with different setups, I use virtualenvs. In this virtualenv I installed <code>snakemake</code> and all the requirements of my workflow.</p>
<p>When I try to edit a notebook using</p>
<pre class="lang-bash prettyprint-override"><code>snakemake --cores 1 --edit-notebook <output>
</code></pre>
<p>the notebook server opens up and I can edit the notebook. But when I try to execute the snakemake preamble (the first cell that is automatically inserted by Snakemake) I get the following error:</p>
<pre><code>AttributeError: Can't get attribute 'AttributeGuard' on <module 'snakemake.io' from '$HOME/.local/lib/python3.12/site-packages/snakemake/io.py'>
</code></pre>
<p>This is weird because <code>snakemake</code> is installed in my virtualenv:</p>
<pre><code>$ which snakemake
~/.virtualenvs/<virtualenv>/bin/snakemake
</code></pre>
|
<python><jupyter-notebook><virtualenv><snakemake>
|
2024-11-12 13:04:19
| 1
| 454
|
LittleByBlue
|
79,181,180
| 15,913,281
|
Subprocess Error When Trying to Pip Install Fastparquet on Windows 10 & Python 3.13
|
<p>I am trying to pip install Fastparquet and get the error below. I have searched but cannot find anything on this specific issue. I've tried running CMD as administrator but that does not help. I've also tried installing the visual studio build tools and upgrading pip but, again, it has not helped.</p>
<pre><code>C:\Users\james>pip install fastparquet
Collecting fastparquet
Using cached fastparquet-2024.5.0.tar.gz (466 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
Γ Getting requirements to build wheel did not run successfully.
β exit code: 1
β°β> [46 lines of output]
Traceback (most recent call last):
File "C:\Users\james\AppData\Local\Programs\Python\Python313\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
~~~~^^
File "C:\Users\james\AppData\Local\Programs\Python\Python313\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\james\AppData\Local\Programs\Python\Python313\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\james\AppData\Local\Temp\pip-build-env-rd0qa88v\overlay\Lib\site-packages\setuptools\build_meta.py", line 334, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\james\AppData\Local\Temp\pip-build-env-rd0qa88v\overlay\Lib\site-packages\setuptools\build_meta.py", line 304, in _get_build_requires
self.run_setup()
~~~~~~~~~~~~~~^^
File "C:\Users\james\AppData\Local\Temp\pip-build-env-rd0qa88v\overlay\Lib\site-packages\setuptools\build_meta.py", line 522, in run_setup
super().run_setup(setup_script=setup_script)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\james\AppData\Local\Temp\pip-build-env-rd0qa88v\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in run_setup
exec(code, locals())
~~~~^^^^^^^^^^^^^^^^
File "<string>", line 47, in <module>
File "C:\Users\james\AppData\Local\Programs\Python\Python313\Lib\subprocess.py", line 395, in call
with Popen(*popenargs, **kwargs) as p:
~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\james\AppData\Local\Programs\Python\Python313\Lib\subprocess.py", line 1036, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pass_fds, cwd, env,
^^^^^^^^^^^^^^^^^^^
...<5 lines>...
gid, gids, uid, umask,
^^^^^^^^^^^^^^^^^^^^^^
start_new_session, process_group)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\james\AppData\Local\Programs\Python\Python313\Lib\subprocess.py", line 1548, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
# no special security
^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
cwd,
^^^^
startupinfo)
^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
[notice] A new release of pip is available: 24.2 -> 24.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
error: subprocess-exited-with-error
Γ Getting requirements to build wheel did not run successfully.
β exit code: 1
β°β> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
|
<python><fastparquet>
|
2024-11-12 12:52:30
| 1
| 471
|
Robsmith
|
79,181,010
| 9,749,124
|
Jupyter Notebook is crashing when I want to run HuggingFace models
|
<p>I am using Jupyter Notebook for running some ML models from HuggingFace.
I am using Mac (M2 Chip, Memory 32 GB)</p>
<p>This is my code:</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
# Step 1: Choose a pre-trained NER model from Hugging Face's Model Hub
# Here we use "dbmdz/bert-large-cased-finetuned-conll03-english", which is a common NER model fine-tuned on the CoNLL-2003 dataset
model_name = "dbmdz/bert-large-cased-finetuned-conll03-english"
# Step 2: Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
</code></pre>
<p>In step 2, my kernel always crashes. I have tried several models, but it is always the same. This is the error:</p>
<pre><code>Kernel Restarting
The kernel for <kernel name> appears to have died. It will restart automatically.
</code></pre>
<p>Can you please help me?
My memory is not full, laptop is brand new.</p>
|
<python><machine-learning><jupyter-notebook><huggingface>
|
2024-11-12 12:02:09
| 0
| 3,923
|
taga
|
79,180,785
| 649,920
|
How to create a continous conversation between two OpenAI chatbots
|
<p>I want two independent instances of OpenAI chatbots to play a wordgame via python API. At this moment I only know how to create a single instance and ask it a single set of user messages. I thus have two main considerations:</p>
<ul>
<li>how can I have an ongoing conversation with a given instance? it seems like this was an option before with <code>chat_completion::send_message</code> method, but it is not avaiable anymore in the last version of the API and I only seem to be able to pass messages upon <code>::create</code> method. According to <a href="https://stackoverflow.com/questions/74711107/openai-api-continuing-conversation-in-a-dialogue">OpenAI API continuing conversation in a dialogue</a> instead of passing a new message to the existing instance, one should create a new instance for every new message and just provide it with the whole previous history of conversation. Which is a very expensive way to use the API as it seems. Did anything change perhaps? the quoted question is a bit old</li>
<li>how can I create two independent instances that do not share the information that I told them privately. Say, I tell the first one that my name is Alice and the second one that my age is 20. I don't want the first one to know that my age is 20 and the second one to think that my name is Alice. After some internet search I've found that people suggest to use different API keys for every independent instance, but if to send the new message I have to re-create instance every time, perhaps this is not necessary at all</li>
</ul>
|
<python><openai-api>
|
2024-11-12 10:51:35
| 1
| 357
|
SBF
|
79,180,617
| 2,092,975
|
How to load a .sql file of sql statements to Snowflake
|
<p>I have a .sql file which contains Snowflake sql statements.</p>
<p>Eg.</p>
<pre><code>DROP TABLE IF EXISTS EMPLOYEES;
CREATE TABLE EMPLOYEES (ID INTEGER, NAME STRING, AGE INTEGER, DEPARTMENT_ID INTEGER);
INSERT INTO EMPLOYEES(ID,NAME,AGE,DEPARTMENT_ID) VALUES (1,'Steve',48,123),(2,'Mary',41,456);
</code></pre>
<p>The actual files I need to load can be over 20MB and include statements as above for multiple tables.
The INSERT statement values can exceed the max size of a varchar column in Snowflake.</p>
<p>Two additional caveats, 1) the files are in an AWS S3 bucket and 2) I need to process these using some kind of automation (not manual commandline Snowflake CLI).</p>
<p>Is there a quick and dirty way to load a .sql file of sql statements into Snowflake?</p>
<p>What I have tried so far:</p>
<ol>
<li>Loading from EXECUTE IMMEDIATE FROM but the files exceed the max of 10MB for that statement.</li>
<li>Loading via python straight from S3 using Snowflake connector (execute_stream) (not successful so far my python is not great)</li>
<li>Loading into Snowflake table then EXECUTE IMMEDIATE but discovered the INSERT values statement can exceed the varchar(max) <strong><- I was wrong about this point, see answer below!</strong></li>
</ol>
<p>I have seen that the SnowflakeSQL or Snowflake CLI should be able to do this. Is there a way to automate those either via python or a Snowflake Notebook? I figured Snowflake would have a snazzy way to load large .sql files but I haven't been able to discover it yet.</p>
<p>Many thanks.</p>
|
<python><sql><snowflake-cloud-data-platform>
|
2024-11-12 10:01:49
| 2
| 306
|
s.bramblet
|
79,180,528
| 14,113,504
|
Draw a circle with periodic boundary conditions matplotlib
|
<p>I am doing a project that involves lattices. A point of coordinates (x0, y0) is chosen randomly and I need to color blue all the points that are in the circle of center (x0, y0) and radius R and red all the other points and then draw a circle around.</p>
<p>The tricky part is that there is periodic boundary conditions, meaning that if my circle is near the left border then I need to draw the rest of it on the right side, the same goes for up and down.</p>
<p>Here is my code that plots the lattice, I have managed to color the points depending on whether or not they are in the circle but I am yet to draw the circle.</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
import numpy as np
class lattice:
def __init__(self, L):
self.L = L
self.positions = np.array([[[i, j] for i in range(L)] for j in range(L)])
def draw_lattice(self, filename):
X = self.positions[:, :, 0].flatten()
Y = self.positions[:, :, 1].flatten()
plt.scatter(X, Y, s=10)
plt.xticks([])
plt.yticks([])
plt.title("Lattice")
plt.savefig(filename)
def dist_centre(self):
x0, y0 = np.random.randint(0, self.L), np.random.randint(0, self.L)
self.c0 = (x0, y0)
self.distance = np.zeros((self.L, self.L))
for i in range(self.L):
for j in range(self.L):
x = self.positions[i, j, 0]
y = self.positions[i, j, 1]
# Distance with periodic boundary conditions.
Dx = -self.L/2 + ((x0-x)+self.L/2)%self.L
Dy = -self.L/2 + ((y0-y)+self.L/2)%self.L
dist = np.sqrt(Dx**2 + Dy**2)
self.distance[i, j] = dist
def draw_zone(self, filename, R):
colormap = np.where(self.distance <= R, "blue", "red").flatten()
X = self.positions[:, :, 0].flatten()
Y = self.positions[:, :, 1].flatten()
plt.clf()
plt.scatter(X, Y, s=10, color=colormap)
plt.xticks([])
plt.yticks([])
plt.title("Lattice")
plt.savefig(filename)
if __name__ == "__main__":
L = 10
R = 3
filename = "test.pdf"
latt = lattice(L)
latt.draw_lattice(filename)
latt.dist_centre()
latt.draw_zone(filename, R)
</code></pre>
<p>The formula for the distance is modified because of the periodic boundary conditions.</p>
<p><a href="https://i.sstatic.net/VCyj5Zet.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCyj5Zet.png" alt="resulting plot" /></a></p>
|
<python><matplotlib>
|
2024-11-12 09:37:14
| 1
| 726
|
Tirterra
|
79,180,373
| 848,510
|
DataprocPySparkBatchOp - Pass component's output to runtime_config_properties as dictionary
|
<p>I am creating a Dataproc batch job from VertexAI pipeline.</p>
<pre><code> get_stores_and_discount_data = (DataprocPySparkBatchOp(
project=PROJECT_ID,
location=REGION,
batch_id=f"dataproc-job-{file_date}",
main_python_file_uri=get_data_file,
python_file_uris=[
os.path.join("gs://", DEPS_BUCKET, DEPENDENCY_PATH, "src.zip")
],
file_uris=[
os.path.join(
"gs://",
DEPS_BUCKET,
DEPENDENCY_PATH,
"settings.toml",
)
],
subnetwork_uri=SUBNETWORK_URI,
container_image=PROMO_SPARK_DATAPROC_IMAGE,
runtime_config_version=RUNTIME_CONFIG_VERSION,
service_account=SERVICE_ACCOUNT,
spark_history_dataproc_cluster=HISTORY_SERVER_CLUSTER,
runtime_config_properties=SPARK_PROPERTIES_MEDIUM,
labels=SPARK_LABELS,
).set_display_name("get-data").after(date_task)
)
</code></pre>
<p>For runtime_config_properties=SPARK_PROPERTIES_MEDIUM, I want to use an environment variable that is an output of a component. When I try, it throws an error.</p>
<p>Error</p>
<blockquote>
<p>ValueError: Value must be one of the following types: str, int, float, bool, dict, and list. Got: "{{channel:task=generate-date;name=curr_timestamp;type=String;}}" of type "<class 'kfp.dsl.pipeline_channel.PipelineParameterChannel'>".</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>CURR_TIMESTAMP = date_task.outputs["curr_timestamp"]
SPARK_PROPERTIES_MEDIUM["spark.dataproc.driverEnv.REPORTING_TIMESTAMP"] = CURR_TIMESTAMP
</code></pre>
<p>The component code looks this:</p>
<pre class="lang-py prettyprint-override"><code>@component(base_image="python:3.9-slim")
def generate_date() -> NamedTuple('Output', [("file_date", str), ('curr_timestamp', str)]):
"""
Generates the current date and time in the format YYYYMMDDHHMMSS.
Returns:
str: A string representing the current date and time.
"""
from datetime import datetime
dt = datetime.today()
file_date = dt.strftime("%Y%m%d%H%M%S")
curr_timestamp = dt.strftime("%Y%m%d-%H:%m:%S")
return (file_date, curr_timestamp)
</code></pre>
<p>How to get this fixed?</p>
|
<python><google-cloud-platform><google-cloud-vertex-ai>
|
2024-11-12 08:47:58
| 0
| 3,340
|
Tom J Muthirenthi
|
79,179,957
| 5,790,653
|
Iterating over two lists just uses the first five members and others are not used
|
<p>I'm so sorry, I couldn't simplify the sample (I may make it less by your hints).</p>
<p>This is my code.</p>
<p>The base view is: I have around 100 IPs and 10 tokens in real-world example (but the sample, I intentionally added 28 tokens). Each token can be used 4 times per minute (in fact, it should be used each 25 seconds).</p>
<p>Each IP should use one token only and send a GET request to the <code>url</code> and finishes it. If IP <code>1.1.1.</code> used token <code>a</code>, then it should not use token <code>b</code> anymore, and the next iteration should go to IP <code>2.2.2.2</code>.</p>
<p>A problem I have is: in the real I have 10 tokens and I add <code>sleep(5)</code> to make at least 25 seconds gap between the first and the fifth, but the issue with my current code is:</p>
<p>it uses first token, second one, third one, fourth one and fifth one. Then since the last use of token <code>a</code> was 25 seconds ago, then it again uses token <code>a</code>. While I except that all tokens should be used.</p>
<p>In my current code, the issue is if I have more than 5 tokens, I'll have this problem.</p>
<p>I know one way is to reduce the <code>sleep</code> value, but if I increase the tokens in the future, I should change the <code>sleep</code> value too.</p>
<p>Would you please help me?</p>
<pre class="lang-py prettyprint-override"><code>tokens = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
dbs = [{'token': x, 'time': None} for x in tokens]
client_ips = [{'ip': x['ip'], 'desc': x['desc']} for x in clients]
thirty_days_ago = (datetime.datetime.now() - datetime.timedelta(days=days)).strftime('%Y-%m-%d')
requests_response_list = []
checked_ips = set()
for ip in client_ips:
for db in dbs:
now = datetime.datetime.now()
if ip['ip'] in checked_ips:
continue
if db['time']:
elapsed_time = (now - db['time']).total_seconds()
if elapsed_time < 25:
continue
print(f"IP is {ip['ip']}, Token: {db['token']}.")
response = requests.get(
url=f"http://my.url.com/ip_addresses/{ip['ip']}",
headers={
'accept': 'application/json',
'x-apikey': db['token'],
}
).json()['data']['attributes']
for report in response:
# report['date'] is a timestamp value, so I convert it to a date Y-m-d.
date = datetime.datetime.fromtimestamp(report['date']).strftime('%Y-%m-%d')
if thirty_days_ago <= date:
requests_response_list.append({
'ip': ip['ip'],
'date': date,
})
db['time'] = now
checked_ips.add(ip['ip'])
sleep(5)
break
</code></pre>
|
<python>
|
2024-11-12 06:07:47
| 2
| 4,175
|
Saeed
|
79,179,598
| 178,750
|
how to configure installation location pyproject.toml (python PEP517)
|
<p>This is a python package installation question. If I have a project named <strong>foo</strong>, how can I configure create a setuptools-based project using pyproject.toml (PEP517) to install to a subdirectory (AKA namespace?) in site-packages/ named <strong>foo2</strong>?</p>
<p>Two ways I can think of:</p>
<ol>
<li>configure pyproject.toml with the right knobs that I currently don't know</li>
<li>allow the distribution package (AKA wheel?) to install to the default subdirectory (e.g, foo/) under site-packages, but tell pip to override that location when the package is installed.</li>
</ol>
<p>I am interested in details about both options. Or just pointers to the documentation that describes these things - I have not found the documentation for pyproject.toml configuration that would provide a way to specify installation location.</p>
<p>I would like to minimize dependencies - basic python standard library preferred along with minimal additional dependencies, if any.</p>
<p>I realize there is a selection of build backends to choose from, so other build backends than just 'setuptools' are interesting (flit-core, hatchling, poetry, etc.). I would like to understand how to use setuptools as the backend at a minimum.</p>
<p>For now, thinking about a simple (e.g., pure python) project is fine. I'll move on to "fancier" needs (e.g., including C or Rust extensions) later.</p>
|
<python><python-import><setuptools><python-packaging><pyproject.toml>
|
2024-11-12 02:07:17
| 2
| 1,391
|
Juan
|
79,179,586
| 8,190,068
|
How do I position buttons in a vertical box layout?
|
<p>I have a sample app written in Python using the Kivy UI framework. There is an <code>Accordion</code> widget, and each item in the <code>Accordion</code> should display an entry with several <code>TextInput</code> fields and on the left side I would like a couple of buttons - one to edit and one to remove the entry.</p>
<p>I created an <code>AccordionItem</code> widget which includes a <code>BoxLayout</code> in horizontal orientation. Within this is another <code>BoxLayout</code> in vertical orientation for the buttons, and then two <code>TextInput</code> widgets. Here is the <code>Kivy</code> language code:</p>
<pre><code><NewItem@AccordionItem>:
title: 'date today time reference'
is_editable: False
BoxLayout:
orientation: 'horizontal'
BoxLayout:
orientation: 'vertical'
size_hint_y: None
height: 100
size_hint_x: None
width: 50
Button:
id: edit_button
background_normal: 'images/iconmonstrEdit32.png'
size_hint: None, None
size: 30, 30
pos_hint: {'center_x': 0.5, 'center_y': 0.5}
on_press: root.save_entry(self)
Button:
id: remove_button
background_normal: 'images/iconmonstrXMark32.png'
size_hint: None, None
size: 30, 30
pos_hint: {'center_x': 0.5, 'center_y': 0.5}
on_press: root.remove_entry(self)
TextInput:
id: input_1
multiline: 'True'
hint_text: 'This'
TextInput:
id: input_2
multiline: 'True'
hint_text: 'That'
</code></pre>
<p>Currently, it looks like this when I run it:</p>
<p><a href="https://i.sstatic.net/oT6WrHKA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oT6WrHKA.png" alt="An image of an AccordionItem entry" /></a></p>
<p>The two buttons on the left are piled at the bottom of the <code>BoxLayout</code>. <strong>How can I get these buttons to display at the top, with a little space between them?</strong></p>
<p>I have tried various things, none of which has changed the position of the buttons to be at the top of the <code>BoxLayout</code> on the left.</p>
<p>I tried changing pos_hint to: <code>{'center_x': 0.5, 'top': 0.1}</code>, but nothing changed.</p>
<p>The <a href="https://kivy.org/doc/stable/api-kivy.uix.boxlayout.html" rel="nofollow noreferrer"><code>BoxLayout</code> description in the Kivy 2.3.0 documentation</a> says: "Position hints are partially working, depending on the orientation: If the orientation is vertical: x, right and center_x will be used." So I tried removing the 'center_y' attribute from the pos_hint, but nothing changed.</p>
<p>I even tried changing the <code>BoxLayout</code> to an <code>AnchorLayout</code> with <code>anchor_y</code> set to 'top', but that was worse.</p>
<p>How do I do basic widget positioning like this in Kivy, when nothing seems to change the result?</p>
|
<python><kivy>
|
2024-11-12 01:59:06
| 1
| 424
|
Todd Hoatson
|
79,179,426
| 4,013,571
|
Why does flask stringify integer keys in a response JSON
|
<p>Why does flask stringify the integer keys in this JSON response?</p>
<p><code>app.py</code></p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
app = Flask(__name__)
@app.route('/test', methods=['GET'])
def test_endpoint():
return {1: "test"}, 200
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p><code>test_app.py</code></p>
<pre class="lang-py prettyprint-override"><code>import pytest
from app import app
@pytest.fixture
def client():
app.config['TESTING'] = True
with app.test_client() as client:
yield client
def test_test_endpoint(client):
response = client.get('/test')
assert response.status_code == 200
assert response.json == {1: "test"}
</code></pre>
<p>This will fail with the error</p>
<pre><code>====================================================== FAILURES ======================================================
_________________________________________________ test_test_endpoint _________________________________________________
client = <FlaskClient <Flask 'app'>>
def test_test_endpoint(client):
response = client.get('/test')
assert response.status_code == 200
> assert response.json == {1: "test"}
E AssertionError: assert {'1': 'test'} == {1: 'test'}
E
E Left contains 1 more item:
E {'1': 'test'}
E Right contains 1 more item:
E {1: 'test'}
E Use -v to get more diff
</code></pre>
|
<python><json><flask>
|
2024-11-11 23:47:52
| 0
| 11,353
|
Alexander McFarlane
|
79,179,323
| 4,444,546
|
How to define hypothesis strategies for custom dataclasses
|
<p>I am currently using <a href="https://hypothesis.readthedocs.io/en/latest/" rel="nofollow noreferrer">hypothesis</a> for fuzzing my test but I then need to generate random dataclasses, and so to build strategies for each, like</p>
<pre class="lang-py prettyprint-override"><code># Base types
uint64 = st.integers(min_value=0, max_value=2**64 - 1)
uint256 = st.integers(min_value=0, max_value=2**256 - 1)
# Dataclasses types
account = st.fixed_dictionaries(
{
"nonce": uint64,
"balance": uint256,
"code": st.binary(),
}
).map(lambda x: Account(**x))
</code></pre>
<p>Is there a way to avoid this explicit strategy definition? Somehow like with rust <a href="https://docs.rs/arbitrary/latest/arbitrary/" rel="nofollow noreferrer">arbitrary</a>, producing well-typed, structured values, from raw, byte buffers.</p>
|
<python><fuzzing><python-hypothesis>
|
2024-11-11 22:29:43
| 1
| 5,394
|
ClementWalter
|
79,179,291
| 54,873
|
With openpyxl, how do I collapse all the existing groups in the resultant excel?
|
<p>I am using <code>openpyxl</code> with a worksheet that has a lot of grouped columns at different levels. I would like the resultant output to simply collapse all the groups.</p>
<p>This is different than hiding the relevant columns and rows; I want them to be non-hidden, but just have the outlines collapsed to level 0!</p>
<p>And to be clear, I also don't want to create <em>new</em> outlines (as in @moken's answer below); I just want all the <em>existing</em> ones to be collapsed.</p>
<p>Is this possible?</p>
|
<python><excel><openpyxl>
|
2024-11-11 22:12:11
| 1
| 10,076
|
YGA
|
79,179,223
| 14,122
|
Can Python TypeAliases or Annotated objects be used in generics?
|
<p>Consider the following:</p>
<pre><code>import pydantic
def buildTypeAdapter[T](cls: type[T]) -> pydantic.TypeAdapter[T]:
return pydantic.TypeAdapter(cls)
ta1 = buildTypeAdapter(list[str]) # GOOD: pyright sees this as a TypeAdapter[list[str]]
out1 = ta1.validate_python(None) # GOOD: pyright sees this as a list[str]
</code></pre>
<hr />
<p>Now, compare to the following:</p>
<pre><code>type ListOfStrings = list[str]
ta2 = buildTypeAdapter(ListOfStrings)
out2 = ta2.validate_python(None)
</code></pre>
<p>This one fails to validate:</p>
<pre class="lang-none prettyprint-override"><code>error: Argument of type "ListOfStrings" cannot be assigned to parameter "cls" of type "type[T@buildTypeAdapter]" in function "buildTypeAdapter"
Β Β Type "TypeAliasType" is incompatible with type "type[T@buildTypeAdapter]" (reportArgumentType)
</code></pre>
<p>...and it sees ta2 as being of type <code>TypeAdapter[Unknown]</code>, and out2 being <code>Unknown</code>.</p>
<hr />
<p>A similar problem exists with:</p>
<pre><code>ta3 = buildTypeAdapter(Annotated[list[str], "Testing"])
</code></pre>
<p>where pyright reports:</p>
<pre class="lang-none prettyprint-override"><code>error: Argument of type "type[Annotated]" cannot be assigned to parameter "cls" of type "type[T@buildTypeAdapter]" in function "buildTypeAdapter"
</code></pre>
<p>...and accordingly treats the object as a <code>TypeAdapter[Unknown]</code>. (Incidentally, <code>ta3 = pydantic.TypeAdapter(Annotated[list[str], "Foobar"])</code> itself is <em>also</em> seen as a <code>TypeAdapter[Unknown]</code>, even though it doesn't throw an error about failing to meet the calling signature).</p>
<hr />
<p>I've tried something like:</p>
<pre><code>def buildTypeAdapter[T](cls: Annotated[T, ...]) -> pydantic.TypeAdapter[T]:
return pydantic.TypeAdapter(cls)
</code></pre>
<p>or</p>
<pre><code>def buildTypeAdapter[T](cls: TypeAliasType[T]) -> pydantic.TypeAdapter[T]:
return pydantic.TypeAdapter(cls)
</code></pre>
<p>but these aren't supported. Are there equivalents that function as desired?</p>
<hr />
<p><sub>(Note that Pydantic is used here as an example that's ready-at-hand and widely familiar, but the question isn't intended to be about it as such. The focus, rather, is intended to be on giving type information to static checkers for Python -- and pyright in particular -- that "reaches into" type aliases and annotations when describing the relationship between arguments and return types).</sub></p>
|
<python><generics><python-typing><pyright>
|
2024-11-11 21:45:16
| 0
| 299,045
|
Charles Duffy
|
79,179,193
| 7,700,802
|
Calculating the correlation coefficient of time series data of unqual length
|
<p>Suppose you have a dataframe like this</p>
<pre><code>data = {'site': ['A', 'A', 'B', 'B', 'C', 'C'],
'item': ['x', 'x', 'x', 'x', 'x', 'x'],
'date': ['2023-03-01', '2023-03-10', '2023-03-20', '2023-03-27', '2023-03-5', '2023-03-12'],
'quantity': [10,20,30, 20, 30, 50]}
df_sample = pd.DataFrame(data=data)
df_sample.head()
</code></pre>
<p>Where you have different sites and items with a date and quantity. Now, what you want to do is calculate the correlation between say site A and site B for item x and their associated quantity. Although, they could be of different length in the dataframe. How would you go about doing this.</p>
<p>The actual data in consideration here can be found here <a href="https://drive.google.com/file/d/15R0ZyuEKSxAFmnaW6GwRwFpiKluikeyI/view?usp=drive_link" rel="nofollow noreferrer">here</a>.</p>
<p>Now, what I tried was just setting up two different dataframes like this</p>
<pre><code>df1 = df_sample[(df_sample['site'] == 'A']) & (df_sample['item'] == 'x')]
df2 = df_sample[(df_sample['site'] == 'B']) & (df_sample['item'] == 'x')]
</code></pre>
<p>then just force them to have the same size, and calculate the correlation coefficient from there but I am sure there is a better way to do this.</p>
|
<python><pandas><statistics>
|
2024-11-11 21:30:09
| 3
| 480
|
Wolfy
|
79,179,125
| 15,848,470
|
Polars read_database_uri() failing on `NULL as col` column in sql query
|
<p>I am able to read data from SQL with nulls, unless a column is created like <code>NULL as col</code>. This causes a Rust panic:</p>
<pre><code>thread '<unnamed>' panicked at 'not implemented: MYSQL_TYPE_NULL', /__w/connector-x/connector-x/connectorx/src/sources/mysql/typesystem.rs:119:18
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
pyo3_runtime.PanicException: not implemented: MYSQL_TYPE_NULL
</code></pre>
<p>This is the offending query:</p>
<pre><code>SELECT
col1 as col1_modified,
col2 as col2_modified,
col3 as col3_modified,
col4,
col5, col6, col7, col8,
col9, col10,
col11, col12, col13, col14, col15, col16,
col17,
null as bad_col,
col19 as col19_modified
FROM my_schema.my_table
WHERE cond = 1
</code></pre>
<p>bad_col exists, but sometimes its values are not relevant, so I set it to null to use as an indicator. The query runs when replace <code>null as bad_col</code> with <code>bad_col</code>. I can then just manually set it to all null.</p>
<p>Is there a way to fix this <code>null as column_name</code> pattern breaking polars' read_database_uri()?</p>
|
<python><null><mysql-connector><python-polars>
|
2024-11-11 21:03:15
| 1
| 684
|
GBPU
|
79,179,062
| 929,732
|
How can I reference methods defined in my main Flask File when I start dividing things into Blueprints?
|
<p>So I started with</p>
<pre><code>mainfolder/
main.py <--- the entire flask app was in there but it got unruly so I though abou dividing things up in to blueprints...
</code></pre>
<p>the new way I was doing it is"</p>
<pre><code>mainfolder/
main.py <--- the entire flask app was in there but it got unruly so I though abou dividing things up in to blueprints
blueprints/
__init__.py
blueprint1/
__init__.py
blueprint1.py
</code></pre>
<p>I have a method in my main.py called checkIt(){} ... however I have no idea how to reference it in the blueprint.</p>
<p>I'm finding that blueprints have pluses an minuses and would like to work through this.</p>
|
<python><flask><methods>
|
2024-11-11 20:36:16
| 1
| 1,489
|
BostonAreaHuman
|
79,179,017
| 5,312,606
|
Overloading matrix multiplication as function composition
|
<p>I played a bit around and wanted to try to overload the matrix multiplication operator for function composition.
Instead of writing:
<code>f(g(x))</code> I would like <code>(f @ g)</code> to return a new function as the composition of f and g.</p>
<p><strong>I am well aware that this most likely not a good idea for production code, but I want to understand what is going on.</strong></p>
<p>The following code should work, but</p>
<pre class="lang-py prettyprint-override"><code>import types
def compose(f, g):
return lambda *args, **kwargs: f(g(*args, **kwargs))
def make_composable(f):
f.__matmul__ = types.MethodType(compose, f)
return f
@make_composable
def f(n):
return 2 * n
@make_composable
def g(n):
return 3 * n
assert (f.__matmul__(g)) (2) == f(g(2))
assert (f @ g) (2) == f(g(2))
</code></pre>
<p>in the last <code>assert</code> I get a</p>
<pre><code>TypeError: unsupported operand type(s) for @: 'function' and 'function'
</code></pre>
<p>I guess there is something wrong with how I monkey-patch the <code>__matmul__</code> method which is not correctly accepted for the operator overloading?</p>
|
<python><functional-programming><operator-overloading>
|
2024-11-11 20:16:12
| 0
| 1,897
|
mcocdawc
|
79,178,956
| 2,199,439
|
How to type a generic proxy class
|
<p>How do I tell <code>mypy</code> that my proxy class has the same attributes as the proxied class?</p>
<pre class="lang-py prettyprint-override"><code>import typing as t
from dataclasses import dataclass
class Proxy(t.Generic[C]):
def __init__(self, obj: C):
self.obj = obj
def __getattr__(self, name: str) -> t.Any:
return getattr(self.obj, name)
@dataclass
class Data:
foo: int
pd = Proxy(Data(42))
pd.foo # this types
pd.oof # this types too, but it should not
</code></pre>
<p>Presumably, <code>mypy</code> sees the <code>__getattr__</code> method and allows me to query any attribute of my proxy object. Is it possible to annotate the class <code>Proxy</code> so that it tells <code>mypy</code> to limit available attributes to only those of the proxied class?</p>
<p>EDIT: The important part is that the proxy class needs to be generic; for single case, there are several approaches available as suggested in comments.</p>
|
<python><generics><python-typing><mypy>
|
2024-11-11 19:56:47
| 0
| 372
|
volferine
|
79,178,919
| 8,964,393
|
Count elements in a row and create column counter in pandas
|
<p>I have created the following pandas dataframe:</p>
<pre><code>import pandas as pd
ds = {'col1' : ['A','A','B','C','C','D'],
'col2' : ['A','B','C','D','D','A']}
df = pd.DataFrame(data=ds)
</code></pre>
<p>The dataframe looks like this:</p>
<pre><code>print(df)
col1 col2
0 A A
1 A B
2 B C
3 C D
4 C D
5 D A
</code></pre>
<p>The possible values in <code>col1</code> and <code>col2</code> are <code>A</code>, <code>B</code>, <code>C</code> and <code>D</code>.</p>
<p>I need to create 4 new columns, called:</p>
<ul>
<li><code>countA</code>: it counts how many <code>A</code> are in each row / record</li>
<li><code>countB</code>: it counts how many <code>B</code> are in each row / record</li>
<li><code>countC</code>: it counts how many <code>C</code> are in each row / record</li>
<li><code>countD</code>: it counts how many <code>D</code> are in each row / record</li>
</ul>
<p>So, from the example above, the resulting dataframe would look like this:</p>
<p><a href="https://i.sstatic.net/oUKoL2A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oUKoL2A4.png" alt="enter image description here" /></a></p>
<p>Can anyone help me please?</p>
|
<python><pandas><dataframe><count>
|
2024-11-11 19:39:46
| 4
| 1,762
|
Giampaolo Levorato
|
79,178,876
| 943,978
|
How to explode comma separated values in data frame using pyspark
|
<p>I have data like the below :</p>
<pre><code>ID ID1 ID2
32336741 ["32361087"] ["36013040"]
32290433 ["32223150-32223653"] ["36003347-36003348"]
32299856 ["32361087","32299991","32223653"] ["36013040","36013029","36013040"]
</code></pre>
<p>In the Data frame I'm trying to explode the comma separated values into multiple rows.
code :</p>
<pre><code>fulldf = (df
.withColumn('ID1',F.explode(F.split('ID1','-')))
.withColumn("ID1",F.regexp_replace("ID1", r"\[|\]|""\"", ""))
)
fulldf = fulldf.dropna()
fulldf.display()
</code></pre>
<p><strong>result</strong> :</p>
<pre><code>ID ID1
32336741 36013040
32290433 36003347
32290433 36003348
32290825 36013045
32290825 36013046
32290825 36013338
</code></pre>
<p>but when I add column ID2 in the data frame syntax it is giving me multiple records like doubled records.</p>
<p><strong>expected out put</strong> :</p>
<pre><code>ID ID1 ID2
32336741 32361087 36013040
32290433 32223150 36003347
32290433 32223653 36003348
32290825 32361087 36013045
32290825 32299991 36013046
32290825 32223653 36013338
</code></pre>
|
<python><dataframe><pyspark>
|
2024-11-11 19:19:46
| 1
| 8,885
|
mohan111
|
79,178,838
| 398,348
|
How to import and run a Python project in an IDE (VSCode) that refers to modules?
|
<p>I am learning Python and wanted to run one of Cormen's textbook examples that are in Python.
I downloaded Python.zip from <a href="https://mitpress.mit.edu/9780262046305/introduction-to-algorithms/" rel="nofollow noreferrer">https://mitpress.mit.edu/9780262046305/introduction-to-algorithms/</a> Resources tab.
I extracted it and it is a number of folders, one per chapter.
I do not see an "import project" option in VSC.
Trying to run randomized_select.py from Chapter 9 folder in the debugger, I get error</p>
<pre><code>ModuleNotFoundError: No module named 'randomized_quicksort'
</code></pre>
<p>It is coming from</p>
<pre><code>from randomized_quicksort import randomized_partition
</code></pre>
<pre><code>File c:\Users\Me\Documents\Learn\MS-CS\Foundations of Data Structures and Algorithms\CLRS_Python\Chapter 9\randomized_select.py:33
1 #!/usr/bin/env python3
2 # randomized_select.py
3
(...)
30 # #
31 #########################################################################
---> 33 from randomized_quicksort import randomized_partition
36 def randomized_select(A, p, r, i):
37 """Return the ith smallest element of the array A[p:r+1]
38
39 Arguments:
(...)
43 i -- ordinal number for ith smallest
44 """
ModuleNotFoundError: No module named 'randomized_quicksort'
</code></pre>
<p>I am stuck here.</p>
|
<python><visual-studio-code>
|
2024-11-11 19:03:45
| 3
| 3,795
|
likejudo
|
79,178,832
| 1,542,011
|
Cannot place whatIfOrder with ib_insync
|
<p>I'm trying to place a what-if-order with <code>ib_insync</code>, but I get an error related to asyncio's event loop when calling <code>whatIfOrder</code>:</p>
<blockquote>
<p>This event loop is already running.</p>
</blockquote>
<p>Placing regular orders instead of what-if-orders works. The following example reproduces the situation.</p>
<pre><code>from ib_insync import IB, Forex, Ticker, MarketOrder
def on_tick(ticker: Ticker):
o = MarketOrder("BUY", 10000)
res = ib.whatIfOrder(contract, o) # => ERROR
ib = IB()
ib.connect(host='127.0.0.1', port=4001, clientId=1)
contract = Forex('GBPUSD', 'IDEALPRO')
ib.qualifyContracts(contract)
ticker = ib.reqMktData(contract)
ticker.updateEvent += on_tick
ib.run()
</code></pre>
<p>It seems like <code>ib.run()</code> starts the event loop (with <code>loop.run_forever()</code>), but then <code> loop.is_running()</code> is false. I don't know what is going on.</p>
|
<python><python-asyncio><ib-insync>
|
2024-11-11 18:59:29
| 1
| 1,490
|
Christian
|
79,178,807
| 8,587,712
|
How to make new pandas DataFrame with columns as old index_column pairs
|
<p>I have two pandas DataFrames:</p>
<pre><code>object_1df = pd.DataFrame(
[['a', 1], ['b', 2]],
columns=['letter', 'number'])
object_2df = pd.DataFrame(
[['b', 3, 'cat'], ['c', 4, 'dog']],
columns=['letter', 'number', 'animal'])
</code></pre>
<pre><code> letter number
0 a 1
1 b 2
letter number animal
0 b 3 cat
1 c 4 dog
</code></pre>
<p>I need to make a catalog of one row for each DataFrame and columns equal to the number of elements. The final form should be one row for each df with the columns:</p>
<pre><code>a_letter a_number b_letter b_number b_animal c_letter c_number c_animal
</code></pre>
<p>I have attempted the very ugly:</p>
<pre><code>objects = [object_1df, object_2df]
catalog = pd.DataFrame()
for objectdf in objects:
object_row = pd.DataFrame()
for letter in objectdf['letter']:
for column in objectdf.columns:
object_row[f'{letter}_{column}'] = objectdf[column].loc[
objectdf['letter'] == letter]
catalog = pd.concat([catalog, object_row], ignore_index=True)
display(catalog)
</code></pre>
<p>which outputs the undesired result:</p>
<pre><code> a_letter a_number b_letter b_number b_animal c_letter c_number c_animal
0 a 1.0 NaN NaN NaN NaN NaN NaN
1 NaN NaN b 3.0 cat NaN NaN NaN
</code></pre>
<p>This result essentially only counts the first row from each df, and gives NaNs everywhere else. What would be a correct way of doing this?</p>
|
<python><pandas><dataframe>
|
2024-11-11 18:47:03
| 2
| 313
|
Nikko Cleri
|
79,178,578
| 1,174,784
|
pandas fails to hide NaN entries from stacked line graphs
|
<p>Say I have the following data:</p>
<pre class="lang-none prettyprint-override"><code>Date,release,count
2019-03-01,buster,0
2019-03-01,jessie,1
2019-03-01,stretch,74
2019-08-15,buster,25
2019-08-15,jessie,1
2019-08-15,stretch,49
2019-10-07,buster,35
2019-10-07,jessie,1
2019-10-07,stretch,43
2019-10-08,buster,40
2019-10-08,jessie,1
2019-10-08,stretch,38
2019-10-09,buster,46
2019-10-09,jessie,1
2019-10-09,stretch,33
2019-10-23,buster,46
2019-10-23,jessie,1
2019-10-23,stretch,31
2019-11-25,buster,46
2019-11-25,jessie,1
2019-11-25,stretch,29
2020-01-13,buster,48
2020-01-13,jessie,1
2020-01-13,stretch,28
2020-01-29,buster,50
2020-01-29,jessie,1
2020-01-29,stretch,26
2020-03-10,buster,54
2020-03-10,jessie,1
2020-03-10,stretch,22
2020-04-14,buster,55
2020-04-14,jessie,0
2020-04-14,stretch,21
2020-05-11,buster,57
2020-05-11,jessie,0
2020-05-11,stretch,17
2020-05-25,buster,61
2020-05-25,jessie,0
2020-05-25,stretch,14
2020-06-10,buster,62
2020-06-10,stretch,12
2020-07-01,buster,69
2020-07-01,stretch,3
2020-10-30,buster,74
2020-10-30,stretch,2
2020-11-18,buster,76
2020-11-18,stretch,2
2021-08-26,bullseye,1
2021-08-26,buster,86
2021-08-26,stretch,1
2021-10-08,bullseye,4
2021-10-08,buster,86
2021-10-08,stretch,1
2021-11-11,bullseye,4
2021-11-11,buster,84
2021-11-11,stretch,1
2021-11-17,bullseye,4
2021-11-17,buster,85
2021-11-17,stretch,0
</code></pre>
<p>And the following code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
# Load the data
df = pd.read_csv('subset.csv')
# Pivot the data to a suitable format for plotting
df = df.pivot_table(index="Date", columns='release', values='count', aggfunc='sum')
# Convert the index to datetime and sort it
df.index = pd.to_datetime(df.index)
print(df)
# Plotting the data with filled areas
fig, ax = plt.subplots(figsize=(12, 6))
df.plot(ax=ax, kind="area", stacked=True)
plt.show()
</code></pre>
<p>It generates the following graph:</p>
<p><a href="https://i.sstatic.net/z1ldPR25.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z1ldPR25.png" alt="enter image description here" /></a></p>
<p>In the above graph, the <code>jessie</code> line should have stopped after <code>2020-05-25</code>, in the middle of the graph. But it just keeps going, a little energizer bunny of a line, all the way to the right of the graph, even though it's actually <code>NaN</code>. In the <code>print(df)</code> output, we can see this is the underlying dataframe after the pivot:</p>
<pre><code>release bullseye buster jessie stretch
Date
2019-03-01 NaN 0.0 1.0 74.0
2019-08-15 NaN 25.0 1.0 49.0
2019-10-07 NaN 35.0 1.0 43.0
2019-10-08 NaN 40.0 1.0 38.0
2019-10-09 NaN 46.0 1.0 33.0
2019-10-23 NaN 46.0 1.0 31.0
2019-11-25 NaN 46.0 1.0 29.0
2020-01-13 NaN 48.0 1.0 28.0
2020-01-29 NaN 50.0 1.0 26.0
2020-03-10 NaN 54.0 1.0 22.0
2020-04-14 NaN 55.0 0.0 21.0
2020-05-11 NaN 57.0 0.0 17.0
2020-05-25 NaN 61.0 0.0 14.0
2020-06-10 NaN 62.0 NaN 12.0
2020-07-01 NaN 69.0 NaN 3.0
2020-10-30 NaN 74.0 NaN 2.0
2020-11-18 NaN 76.0 NaN 2.0
2021-08-26 1.0 86.0 NaN 1.0
2021-10-08 4.0 86.0 NaN 1.0
2021-11-11 4.0 84.0 NaN 1.0
2021-11-17 4.0 85.0 NaN 0.0
</code></pre>
<p>Interestly, if you look closely, you can also see the "bullseye" (blue) line is actually present since the beginning of the graph as well.</p>
<p>So, what's going on? Is matplotlib or pandas or <em>something</em> in there plotting NaN as "zero" instead of "not in this graph?</p>
<p>And <code>dropna</code> is not the answer here: it drops entires rows or columns, I would need to drop <em>cell</em> which makes no sense here.</p>
<p>Note that my previous iteration of this graph, using bars, doesn't have that issue:</p>
<p><a href="https://i.sstatic.net/omlLddA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/omlLddA4.png" alt="enter image description here" /></a></p>
<p>Simply replace <code>area</code> with <code>bar</code> in the above to reproduce. The problem with the bar graph is it doesn't respect the scale of the X axis (time).</p>
|
<python><pandas><matplotlib><nan>
|
2024-11-11 17:22:42
| 1
| 6,357
|
anarcat
|
79,178,255
| 2,915,050
|
Validate JSON Schema which has fixed keys and user defined keys in Python
|
<p>I'm trying to validate a JSON file that is provided by a user. The JSON will contain certain fixed keys, but also contain some user-defined keys too. I want to validate that this JSON object contains these fixed keys, in a certain format, and the user-defined keys are in a certain format too (as these keys will always have values in a defined format).</p>
<p>I came across this post <a href="https://stackoverflow.com/questions/54491156/validate-json-data-using-python">Validate JSON data using python</a>, but the documentation for <code>jsonschema.validate</code> doesn't really show anything to do with user-defined keys, and also how to define if a key should have a list of dicts, or a dict which its key-values must be of a list of dicts.</p>
<p>Here's a sample schema:</p>
<pre><code>{
"a": "some value",
"b": "some value",
"c": {
"custom_a": [{...}],
"custom_b": [{...}]
},
"d": [{...}]
}
</code></pre>
<p>I have tried doing the following:</p>
<pre><code>import json
from jsonschema import validate
my_json = json.loads(<JSON String following above pattern>)
schema = {
"a" : {"type": "string"},
"b" : {"type": "string"},
"c" : {[{}]},
"d": [{}]
}
validate(instance=my_json, schema=schema) #raises TypeError on "c" and "d" in schema spec
</code></pre>
<p>I have also tried the following schema spec, but I get stuck on how to handle the custom keys, and also nested lists within dicts, etc.</p>
<pre><code>schema = {
"a" : {"type": "string"},
"b" : {"type": "string"},
"c" : {
"Unsure what to define here": {"type": "list"} #but this is a list of dicts
},
"d": {"type": "list"} #but this is a list of dicts
}
</code></pre>
|
<python><json><jsonschema>
|
2024-11-11 15:34:43
| 2
| 1,583
|
RoyalSwish
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.