| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
77,582,244
| 6,175,092
|
Using python in Blender how to extrude and rotate?
|
<p>I need to extrude a mesh successively and rotate the mesh at every step. This can be done in the user interface as shown below.</p>
<p><a href="https://i.sstatic.net/FLOTe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FLOTe.png" alt="enter image description here" /></a></p>
<p>I have accomplished the successive extrusions but cannot make the rotations in python. Please let me know which function rotates the selected mesh.</p>
|
<python><blender><mesh>
|
2023-11-30 23:33:28
| 1
| 585
|
doby
|
77,581,957
| 3,438,239
|
Seaborn's histplot doesn't match Matplotlib's hist when using log_scale=True
|
<p>I'm probably making a stupid mistake, but here's how to reproduce:</p>
<ol>
<li>Generate random variables based on log-normal distribution</li>
<li>Fit a log-normal distribution to the synthetic data</li>
<li>Compute the probability distribution function using the fitted parameters</li>
<li>Plot histogram of synthetic variables overlaid with the PDF</li>
<li>They don't match!</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import seaborn as sns
from scipy.stats import lognorm
import numpy as np
import matplotlib.pyplot as plt
mu = 25
samples = lognorm.rvs(s=1, loc=0, scale=np.log(mu), size=10000)
shape, loc, scale = lognorm.fit(samples)
print(shape, loc, scale)
fig, ax = plt.subplots()
sns.histplot(samples, bins=50, stat="density", log_scale=True, ax=ax)
xs = np.linspace(0.1, 100, 10000)
ys = lognorm.pdf(xs, s=shape, loc=loc, scale=scale)
ax.plot(xs, ys, "r-")
</code></pre>
<p><a href="https://i.sstatic.net/JaR5J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JaR5J.png" alt="enter image description here" /></a></p>
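A likely cause (an assumption, not verified against the poster's environment): with <code>log_scale=True</code>, histplot bins on log10(x), and <code>stat="density"</code> normalizes against those log-axis bin widths, so the histogram estimates the density of log10(x), not of x. Overlaying <code>lognorm.pdf(xs, ...) * xs * np.log(10)</code> should then line up. A numpy-only sketch checking that the transformed curve really is a proper density over log10(x):

```python
import numpy as np

def lognorm_pdf(x, s, scale):
    # scipy's lognorm pdf with loc=0: shape s, scale = exp(mu)
    return np.exp(-np.log(x / scale) ** 2 / (2 * s ** 2)) / (x * s * np.sqrt(2 * np.pi))

s, scale = 1.0, np.log(25)      # the question's parameters
x = np.logspace(-3, 3, 20001)   # grid on the original x axis
y = np.log10(x)                 # the axis seaborn bins on when log_scale=True
dens = lognorm_pdf(x, s, scale) * x * np.log(10)  # density of log10(x)

# Trapezoid rule over y: the transformed curve integrates to ~1, i.e. it is
# normalized the same way as histplot's density on a log-scaled axis.
area = float(np.sum((dens[1:] + dens[:-1]) / 2 * np.diff(y)))
print(round(area, 4))
```

So in the original snippet, plotting `ax.plot(xs, ys * xs * np.log(10), "r-")` would be the matching overlay.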
|
<python><scipy><distribution><scipy.stats>
|
2023-11-30 22:02:58
| 1
| 336
|
Blademaster
|
77,581,891
| 12,708,740
|
Identify text with span elements using Python BeautifulSoup
|
<p>I have html text that I am trying to clean up using "soup". However, I need to not only identify which text segments are contained in certain span elements with class='highlight', but also maintain their order in the text.</p>
<p>For example, here is example code:</p>
<pre><code>from bs4 import BeautifulSoup
import pandas as pd
original_string = """<div class="image-container half-saturation half-opaque" \
style="cursor: pointer;"><img src="../stim/microphone.png" style="width: 40px; height: 40px;">\
</div><p class="full-opaque">\
<span class="highlight">Easy to cultivate, sunflowers are a popular choice for gardeners of all skill levels</span>. \
Their large, <span class="highlight">cheerful blooms</span>\
bring a touch of summer to any outdoor space, creating a delightful atmosphere. \
Whether you're enjoying their beauty in a garden or using them to add a splash of color to your living space, \
sunflowers are a symbol of positivity and radiance, making them a beloved part of nature's tapestry.</p>"""
# Parse the HTML content
soup = BeautifulSoup(original_string, 'html.parser')
</code></pre>
<p>Desired output (in this case there are 4 text segments):</p>
<pre><code>data = {
'text_order': [0, 1, 2, 3],
'text': ["Easy to cultivate, sunflowers are a popular choice for gardeners of all skill levels",
"Their large, ", "cheerful blooms",
"bring a touch of summer to any outdoor space, creating a delightful atmosphere. Whether you're enjoying their beauty in a garden or using them to add a splash of color to your living space, sunflowers are a symbol of positivity and radiance, making them a beloved part of nature's tapestry."],
'highlight': [True, False, True, False]
}
df = pd.DataFrame(data)
print(df)
</code></pre>
<p>I've tried to extract the span text using "highlight_spans = soup.find_all('span', class_='highlight')" but this does not maintain the order in which the text is displayed in the paragraph.</p>
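The usual fix is to walk the <code>&lt;p&gt;</code> tag's children in document order (with BeautifulSoup: iterate <code>soup.find('p').children</code> and check each child with <code>isinstance</code> against <code>NavigableString</code> versus a highlight <code>Tag</code>). A stdlib-only sketch of the same ordered walk, on a hypothetical mini document, so the mechanics are visible without bs4:

```python
from html.parser import HTMLParser

class SegmentParser(HTMLParser):
    """Collect (text, is_highlight) pairs in document order.
    Sketch only: assumes highlight spans are not nested inside other spans."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.depth = 0        # > 0 while inside a highlight span
        self.segments = []    # (text, is_highlight), in order

    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.in_p = True
        elif tag == 'span' and ('class', 'highlight') in attrs:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == 'p':
            self.in_p = False
        elif tag == 'span' and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.in_p and data.strip():
            self.segments.append((data.strip(), self.depth > 0))

parser = SegmentParser()
parser.feed('<p><span class="highlight">A</span> b <span class="highlight">c</span> d</p>')
print(parser.segments)
```

The `segments` list maps directly onto the desired `text` / `highlight` DataFrame columns.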
|
<python><html><pandas><beautifulsoup>
|
2023-11-30 21:45:04
| 1
| 675
|
psychcoder
|
77,581,779
| 820,011
|
running django makemigrations twice generates two migrations, one to add a field and one to alter that same field … to itself
|
<p>One would expect that running <code>makemigrations</code> twice in a row would be a no-op for the second one, but in fact I get two migrations, and the second one can't be applied (an exception is thrown trying to access <code>None.startswith</code>). Output somewhat sanitized:</p>
<pre><code>% ./manage.py makemigrations
Migrations for 'foo':
foo/migrations/0004_foo.py
- Add field foo to bar
% ./manage.py makemigrations
Migrations for 'foo':
foo/migrations/0005_alter_foo.py
- Alter field foo on bar
</code></pre>
<p>The contents of the migrations are basically:</p>
<pre class="lang-py prettyprint-override"><code>operations = [
    migrations.AddField(
        model_name="bar",
        name="foo",
        field=models.ForeignObject(
            default=1,
            from_fields=["original_id", "original_date"],
            on_delete=django.db.models.deletion.PROTECT,
            related_name="+",
            to="foo.baz",
            to_fields=["id", "date"],
        ),
        preserve_default=False,
    ),
]
</code></pre>
<p>for the first, and then the "alteration":</p>
<pre class="lang-py prettyprint-override"><code>operations = [
    migrations.AlterField(
        model_name="bar",
        name="foo",
        field=models.ForeignObject(
            from_fields=("original_id", "original_date"),
            on_delete=django.db.models.deletion.PROTECT,
            related_name="+",
            to="payment.payment",
            to_fields=("id" "date"),
        ),
    ),
]
</code></pre>
<p>Attempting to run the migration results in:</p>
<pre><code> File "/Users/wolfson/.local/share/virtualenvs/message_digest-gQFszokA/lib/python3.11/site-packages/django/db/backends/postgresql/operations.py", line 189, in quote_name
if name.startswith('"') and name.endswith('"'):
^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'startswith'
</code></pre>
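One detail worth noting in the pasted AlterField (whether it is in the real migration or only crept in while sanitizing): <code>to_fields=("id" "date")</code> is missing a comma, and Python silently concatenates adjacent string literals, so it is not a two-element tuple at all:

```python
# Adjacent string literals concatenate; parentheses alone don't make a tuple.
t = ("id" "date")
print(type(t).__name__, t)   # it's the plain string 'iddate'

u = ("id", "date")           # the comma is what makes it a tuple
print(type(u).__name__, len(u))
```

A `to_fields` that no longer matches `from_fields` in length is at least consistent with the `None.startswith` crash, though that is only a hypothesis from the pasted output.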
|
<python><django>
|
2023-11-30 21:20:17
| 1
| 2,535
|
ben w
|
77,581,696
| 19,808,508
|
Google Cloud Error Reporting showing extraneous errors
|
<p>I have a fastapi application hosted on google cloud run. When exceptions are thrown, google cloud logs and error reporting show multiple different errors in packages like uvicorn, anyio (EndOfStream), and starlette. These errors aren't very useful to me, because they aren't the actual error thrown from my code. These exceptions are often accompanied by messages such as:</p>
<p><code>During handling of the above exception, another exception occurred:</code></p>
<p>Is there a way to prevent those errors from being thrown or from showing in Google Cloud Error Reporting, and just have the actual exception from my code reported by itself instead?</p>
<p>I have tried adding custom exception handlers to fastapi, but it doesn't seem to resolve the issue.</p>
|
<python><fastapi><google-cloud-error-reporting>
|
2023-11-30 21:05:41
| 1
| 451
|
Jasper Madrone
|
77,581,497
| 1,471,980
|
How do I perform aggregate functions on a pandas data frame
|
<p>I have this data frame:</p>
<pre><code>df
Device int In Out Bw_in Bw_out
Usa123 Eth1 1000 500 100 75
Usa123 Eth0 10000 700 200 80
Emea01 Wan1 1000 500 150 90
Emea01 Eth3 2000 1000 200 70
</code></pre>
<p>I want to summarize the Bandwidth usage data in terms of % usage per device, resulting data frame needs to look like this:</p>
<p>I need to sum Bw_in per Device and divide it by the sum of In per device, and do the same for Bw_out: sum Bw_out per device and divide it by the sum of Out per device.</p>
<pre><code>Device int In Out Bw_in Bw_out %InUsage %OutUsage
Usa123 Eth1 1000 500 100 75 0.01 0.12
Usa123 Eth0 10000 700 200 80
Emea01 Wan1 1000 500 150 90 0.11 0.10
Emea01 Eth3 2000 1000 200 70
</code></pre>
<p>I tried this:</p>
<pre><code>df.groupby('Device').apply(lambda x: sum(x['Bw_in'])/sum(x['In']))
</code></pre>
<p>I am trying to see if there is a better and more efficient way to do this?</p>
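One vectorized sketch (assuming it is acceptable for the per-device ratio to repeat on every row of that device, as in the expected output) uses <code>groupby().transform('sum')</code>:

```python
import pandas as pd

df = pd.DataFrame({
    'Device': ['Usa123', 'Usa123', 'Emea01', 'Emea01'],
    'int':    ['Eth1', 'Eth0', 'Wan1', 'Eth3'],
    'In':     [1000, 10000, 1000, 2000],
    'Out':    [500, 700, 500, 1000],
    'Bw_in':  [100, 200, 150, 200],
    'Bw_out': [75, 80, 90, 70],
})

# transform('sum') broadcasts each group's sum back onto its rows,
# so the division stays fully vectorized.
g = df.groupby('Device')
df['%InUsage'] = g['Bw_in'].transform('sum') / g['In'].transform('sum')
df['%OutUsage'] = g['Bw_out'].transform('sum') / g['Out'].transform('sum')
print(df[['Device', '%InUsage', '%OutUsage']])
```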
|
<python><pandas>
|
2023-11-30 20:28:50
| 4
| 10,714
|
user1471980
|
77,581,284
| 5,137,526
|
Setting up mitmproxy and wireguard using mitmproxy's python API
|
<p>I'm trying to configure a mitmproxy instance to use wireguard to route through a remote server, where the data is processed with an Addon(). Essentially I would like to run the following command with custom keys through python:</p>
<pre><code>mitmdump -s addon.py --mode wireguard
</code></pre>
<p>But to use my own keys/etc, I'd prefer to pass it through the python API (theoretically I could add it to /etc/wireguard/wg0.conf but I've had mixed results). I've had a hard time understanding how to use it though, as I've only had bits and pieces of code to go by. This is what I have so far:</p>
<pre><code>wg_server = await mitmproxy_wireguard.start_server(
    "0.0.0.0",
    51820,
    server_privkey,
    [client_pubkey],
    handle_connection,
    receive_datagram,
)

options = Options()
master = DumpMaster(options)
# master.addons.add(Addon()) # when I uncomment this it no longer connects
master.server = wg_server
</code></pre>
<p>And the following callbacks:</p>
<pre><code>async def handle_connection(rw: mitmproxy_wireguard.TcpStream):
    while True:
        data = await rw.read(4096)
        # check if the connection was closed
        if len(data) == 0:
            break
        rw.write(data)
        await rw.drain()
    rw.close()

def receive_datagram(data, src_addr, dst_addr):
    master.server.send_datagram(data.upper(), dst_addr, src_addr)
</code></pre>
<p>This is all in an asyncio loop I left out for clarity.</p>
<p>No matter what I do I can't get it to connect properly. With and without the addon it will increment up in RX and TX byte counts on the client (so they are communicating), and with tcpdump I can see that some data is getting to the server, however it won't render a webpage on a browser on the client's machine. I can't even ping google from the client.</p>
<p>Client conf file wg0.conf:</p>
<pre><code>[Interface]
PrivateKey = {clientPrivKey}
Address = 10.0.0.1/24
DNS = 8.8.8.8, 77.88.8.7
[PEER]
PublicKey = {serverPubKey}
AllowedIPs = 0.0.0.0/0
Endpoint = {serverIPAddr}:{portNumber}
</code></pre>
<p>I think the issue may have something to do with DNS, however I'm not completely sure. At the very least the DNS isn't getting through as well as everything else, as I only see it in tcpdump and not the destinations real address. If anyone has any insight into using the <a href="https://github.com/decathorpe/mitmproxy_wireguard" rel="nofollow noreferrer">mitmproxy_wireguard</a> package I'd really appreciate it.</p>
<p>Thank you for your time/help, and let me know if you have any questions about my setup.</p>
|
<python><mitmproxy><wireguard>
|
2023-11-30 19:45:59
| 1
| 331
|
thansen0
|
77,581,214
| 12,162,229
|
Produce this list [0, 2, 6, 12, 20, 30, 42, 56, 72, 90] using list comprehension syntax
|
<p>I can produce the list <code>[0, 2, 6, 12, 20, 30, 42, 56, 72, 90]</code> using the following code:</p>
<pre><code>x = []
y = 0
for i in range(2,21,2):
    x.append(y)
    y += i
</code></pre>
<p>However I'm not sure how to convert this into list comprehension syntax of the form</p>
<p>[expression for value in iterable if condition ]</p>
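The running total accumulates 2 + 4 + ... + 2i, which has the closed form i*(i+1), so a comprehension works directly; `itertools.accumulate` keeps the original running-sum shape instead:

```python
from itertools import accumulate

# closed form: the i-th element is the sum of the first i even numbers
x = [i * (i + 1) for i in range(10)]
print(x)  # [0, 2, 6, 12, 20, 30, 42, 56, 72, 90]

# same sequence, keeping the running-sum structure of the original loop
x2 = [0] + list(accumulate(range(2, 19, 2)))
print(x2 == x)  # True
```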
|
<python><list-comprehension><sequence>
|
2023-11-30 19:31:41
| 5
| 317
|
AColoredReptile
|
77,581,033
| 3,861,775
|
Parallelize inference of ensemble
|
<p>I used <a href="https://jax.readthedocs.io/en/latest/notebooks/neural_network_with_tfds_data.html" rel="nofollow noreferrer">this</a> tutorial from JAX to create an ensemble of networks. Currently I compute the loss of each network in a for-loop, which I would like to avoid:</p>
<pre><code>for params in ensemble_params:
    loss = mse_loss(params, inputs=x, targets=y)

def mse_loss(params, inputs, targets):
    preds = batched_predict(params, inputs)
    loss = jnp.mean((targets - preds) ** 2)
    return loss
</code></pre>
<p>Here <code>ensemble_params</code> is a list of pytrees (lists of tuples holding JAX parameter arrays). The parameter structure of each network is the same.</p>
<p>I tried to get rid of the for-loop by applying <code>jax.vmap</code>:</p>
<pre><code>ensemble_loss = jax.vmap(fun=mse_loss, in_axes=(0, None, None))
</code></pre>
<p>However, I keep getting the following error message which I do not understand.</p>
<pre><code>ValueError: vmap got inconsistent sizes for array axes to be mapped:
* most axes (8 of them) had size 3, e.g. axis 0 of argument params[0][0][0] of type float32[3,2];
* some axes (8 of them) had size 4, e.g. axis 0 of argument params[0][1][0] of type float32[4,3]
</code></pre>
<p>Here is a minimal reproducible example:</p>
<pre><code>import jax
from jax import Array
from jax import random
import jax.numpy as jnp
def layer_params(dim_in: int, dim_out: int, key: Array) -> tuple[Array, ...]:
    w_key, b_key = random.split(key=key)
    weights = random.normal(key=w_key, shape=(dim_out, dim_in))
    biases = random.normal(key=b_key, shape=(dim_out,))
    return weights, biases

def init_params(layer_dims: list[int], key: Array) -> list[tuple[Array, ...]]:
    keys = random.split(key=key, num=len(layer_dims))
    params = []
    for dim_in, dim_out, key in zip(layer_dims[:-1], layer_dims[1:], keys):
        params.append(layer_params(dim_in=dim_in, dim_out=dim_out, key=key))
    return params

def init_ensemble(key: Array, num_models: int, layer_dims: list[int]) -> list:
    keys = random.split(key=key, num=num_models)
    models = [init_params(layer_dims=layer_dims, key=key) for key in keys]
    return models

def relu(x):
    return jnp.maximum(0, x)

def predict(params, image):
    activations = image
    for w, b in params[:-1]:
        outputs = jnp.dot(w, activations) + b
        activations = relu(outputs)
    final_w, final_b = params[-1]
    logits = jnp.dot(final_w, activations) + final_b
    return logits

batched_predict = jax.vmap(predict, in_axes=(None, 0))

def mse_loss(params, inputs, targets):
    preds = batched_predict(params, inputs)
    loss = jnp.mean((targets - preds) ** 2)
    return loss

if __name__ == "__main__":
    num_models = 4
    dim_in = 2
    dim_out = 4
    layer_dims = [dim_in, 3, dim_out]
    batch_size = 2

    key = random.PRNGKey(seed=1)
    key, subkey = random.split(key)
    ensemble_params = init_ensemble(key=subkey, num_models=num_models, layer_dims=layer_dims)

    key_x, key_y = random.split(key)
    x = random.normal(key=key_x, shape=(batch_size, dim_in))
    y = random.normal(key=key_y, shape=(batch_size, dim_out))

    for params in ensemble_params:
        loss = mse_loss(params, inputs=x, targets=y)
        print(f"{loss = }")

    ensemble_loss = jax.vmap(fun=mse_loss, in_axes=(0, None, None))
    losses = ensemble_loss(ensemble_params, x, y)
    print(f"{losses = }")  # Same losses expected as above.
</code></pre>
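The error arises because <code>ensemble_params</code> is a Python list of per-model pytrees: <code>vmap</code> with <code>in_axes=0</code> maps over axis 0 of every leaf array individually, and those sizes differ between layers (3 vs 4), exactly as the message says. The usual fix is to stack corresponding leaves so every leaf gains a leading ensemble axis, e.g. <code>stacked = jax.tree_util.tree_map(lambda *leaves: jnp.stack(leaves), *ensemble_params)</code>, then call <code>ensemble_loss(stacked, x, y)</code>. A numpy sketch of just the stacking step, for two toy models with the question's layer shapes:

```python
import numpy as np

# Two "pytrees" with identical structure: a list of (weights, biases) per layer.
m1 = [(np.full((3, 2), 1.0), np.full(3, 1.0)), (np.full((4, 3), 1.0), np.full(4, 1.0))]
m2 = [(np.full((3, 2), 2.0), np.full(3, 2.0)), (np.full((4, 3), 2.0), np.full(4, 2.0))]

# Stack leaf-by-leaf so axis 0 becomes the ensemble axis -- the same thing
# jax.tree_util.tree_map(lambda *l: jnp.stack(l), m1, m2) does for real pytrees.
stacked = [tuple(np.stack(leaves) for leaves in zip(*layer_group))
           for layer_group in zip(m1, m2)]
print(stacked[0][0].shape)  # (2, 3, 2): leading dim 2 = number of models
```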
|
<python><jax>
|
2023-11-30 18:57:01
| 1
| 3,656
|
Gilfoyle
|
77,580,911
| 13,135,901
|
Fast way to check values of one dataframe against another dataframe in pandas
|
<p>I have two dataframes.
df1:</p>
<pre><code> Date High Mid Low
1 2023-08-03 00:00:00 29249.8 29136.6 29152.3
4 2023-08-03 12:00:00 29395.8 29228.1 29105.0
10 2023-08-04 12:00:00 29305.2 29250.1 29137.1
13 2023-08-05 00:00:00 29099.9 29045.3 29073.0
18 2023-08-05 20:00:00 29061.6 29047.1 29044.0
.. ... ... ... ...
696 2023-11-26 20:00:00 37732.1 37469.9 37370.0
703 2023-11-28 00:00:00 37341.4 37138.2 37254.1
707 2023-11-28 16:00:00 38390.7 38137.2 37534.4
711 2023-11-29 08:00:00 38419.0 38136.3 38112.0
716 2023-11-30 04:00:00 38148.9 37800.1 38040.0
</code></pre>
<p>and df2:</p>
<pre><code> Start Top Bottom
0 2023-11-28 00:00:00 37341.4 37138.2
1 2023-11-24 12:00:00 38432.9 37894.4
</code></pre>
<p>I need to check if the values in the first dataframe fall within the range of the values in a row of a second dataframe and store the number of matches in a column. I can do it using iteration like this:</p>
<pre><code>for idx in df1.index:
    df2.loc[
        (df2.Start != df1.at[idx, 'Date']) &
        (df2.Bottom < df1.at[idx, 'High']) &
        (df2.Top > df1.loc[idx, ['Mid', 'Low']].max()),
        'Match'] += 1
</code></pre>
<p>But this way is slow. Is there a faster way to do it without iteration?</p>
|
<python><pandas>
|
2023-11-30 18:33:03
| 1
| 491
|
Viktor
|
77,580,901
| 4,169,571
|
Draw all 12 axes when using ax.scatter for a 3D scatter plot
|
<p>I want to draw all 12 axes when using <code>ax.scatter</code> for a 3D scatter plot</p>
<p>Here is my code. As you can see, my axis lines do not match the range used by <code>ax.scatter</code>. How can I achieve that?</p>
<p><code>ax.get_xlim()</code> etc. does not retrieve the exact limits that <code>ax.scatter</code> has used.</p>
<p>Here is the content of <code>data.txt</code>: <a href="https://pastebin.com/raw/hU4NhGQc" rel="nofollow noreferrer">https://pastebin.com/raw/hU4NhGQc</a></p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
file_path = 'data.txt'
col_widths = [12, 13, 18, 13, 13, 16, 16, 15, 16, 15]
data = pd.read_fwf(file_path, widths=col_widths, header=0)
columns_to_format = ['Sum', 'Mean', 'Std', 'Min', 'Max']
for col in columns_to_format:
    data[col] = data[col].apply(lambda x: '{:14.8e}'.format(float(x)) if pd.notnull(x) else x)
net_data = data[data['Variable'] == '__net'].copy()
net_data['b'] = net_data['b'].replace({'10^11': 1e11, '10^13': 1e13})
c_floats = net_data['Mean'].astype(float).to_numpy()
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111, projection='3d')
x = net_data['x'].astype(float).to_numpy()
y = net_data['y'].astype(float).to_numpy()
z = net_data['b'].astype(float).to_numpy()
point_size = 50
img = ax.scatter(x, y, z, c=c_floats, s=point_size, cmap='jet', vmin=c_floats.min(), vmax=c_floats.max() )
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('b')
x_min, x_max = ax.get_xlim()
y_min, y_max = ax.get_ylim()
z_min, z_max = ax.get_zlim()
ax.plot([x_min, x_max], [y_min, y_min], [z_min, z_min], color='black')
ax.plot([x_min, x_max], [y_max, y_max], [z_min, z_min], color='black')
ax.plot([x_min, x_max], [y_min, y_min], [z_max, z_max], color='black')
ax.plot([x_min, x_max], [y_max, y_max], [z_max, z_max], color='black')
ax.plot([x_min, x_min], [y_min, y_max], [z_min, z_min], color='black')
ax.plot([x_max, x_max], [y_min, y_max], [z_min, z_min], color='black')
ax.plot([x_min, x_min], [y_min, y_max], [z_max, z_max], color='black')
ax.plot([x_max, x_max], [y_min, y_max], [z_max, z_max], color='black')
ax.plot([x_min, x_min], [y_min, y_min], [z_min, z_max], color='black')
ax.plot([x_max, x_max], [y_min, y_min], [z_min, z_max], color='black')
ax.plot([x_min, x_min], [y_max, y_max], [z_min, z_max], color='black')
ax.plot([x_max, x_max], [y_max, y_max], [z_min, z_max], color='black')
offset = 0.01
for (i, txt) in enumerate(c_floats):
    ax.text(x[i] + offset, y[i] + offset, z[i] + offset, '{:6.4f}'.format(txt), size=6, zorder=1, color='k')
elev = 45
azim = 45
ax.view_init(elev=elev, azim=azim)
cbar = fig.colorbar(img, ax=ax, shrink=0.5)
cbar.set_label('Mean')
plot_save_path = 'plot.png'
fig.savefig(plot_save_path, dpi=300, bbox_inches='tight')
</code></pre>
<p>And here is the produced plot:</p>
<p><a href="https://i.sstatic.net/Scp3z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Scp3z.png" alt="enter image description here" /></a></p>
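Rather than hand-writing twelve <code>ax.plot</code> calls, the box edges can be generated: an edge connects exactly those corner pairs that differ in a single coordinate. A matplotlib-free sketch of that enumeration (each returned pair <code>(a, b)</code> would feed <code>ax.plot([a[0], b[0]], [a[1], b[1]], [a[2], b[2]], color='black')</code>); computing the limits from the data itself (<code>x.min()</code>, <code>x.max()</code>, ...) and applying them with <code>set_xlim</code>/<code>set_ylim</code>/<code>set_zlim</code> before drawing the edges is one way to sidestep the autoscale mismatch:

```python
from itertools import combinations, product

def box_edges(xlim, ylim, zlim):
    """Return the 12 edges of the axis-aligned box as pairs of corners."""
    corners = list(product(xlim, ylim, zlim))   # the 8 corners
    # An edge joins two corners that differ in exactly one coordinate.
    return [(a, b) for a, b in combinations(corners, 2)
            if sum(u != v for u, v in zip(a, b)) == 1]

edges = box_edges((0.0, 1.0), (0.0, 2.0), (0.0, 3.0))
print(len(edges))  # 12
```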
|
<python><matplotlib><axis>
|
2023-11-30 18:31:37
| 0
| 817
|
len
|
77,580,785
| 2,755,648
|
How much overhead does `python -X importtime` add?
|
<p>Does anyone know how much slower running <code>python -X importtime [module or file]</code> is vs. <code>python [module or file]</code>?</p>
<p>I'm wondering if it's a bad idea to run <code>python -X importtime</code> for production code. The goal is to monitor how slow the imports are in production code, to identify opportunities to remove unnecessary imports.</p>
|
<python><cpython>
|
2023-11-30 18:10:43
| 1
| 1,149
|
sid-kap
|
77,580,627
| 6,439,229
|
Why does assigning a default to a dataclass field, make it into a class attribute?
|
<p>When you add a field to a python dataclass, it will become a required argument for <code>__init__</code> and it will be an instance attribute. But when you add a default value to a field it becomes a class attribute.</p>
<pre><code>from dataclasses import dataclass
@dataclass
class A:
    foo: int
    bar: int = 0
>>> A.foo
AttributeError: type object 'A' has no attribute 'foo'
>>> A.bar
0
</code></pre>
<p>Why this difference between <code>foo</code> and <code>bar</code>?<br />
Dataclass prevents you from setting mutable defaults, so it will not become a problem in practice, but I was curious about this behaviour, which was unexpected for me.</p>
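For what it's worth, the asymmetry can be observed directly: after <code>__init__</code> runs, both fields are instance attributes, and only the default additionally lives on the class, where the generated <code>__init__</code> reads it from:

```python
from dataclasses import dataclass

@dataclass
class A:
    foo: int
    bar: int = 0

a = A(foo=1)
print(sorted(vars(a)))            # both become instance attributes
print(hasattr(A, 'foo'), A.bar)   # only the default stays on the class
```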
|
<python><python-dataclasses>
|
2023-11-30 17:41:17
| 1
| 1,016
|
mahkitah
|
77,580,556
| 1,505,259
|
Importing data from two XML parent nodes to a Pandas DataFrame using read_xml
|
<p>I am having trouble importing an XML file into Pandas where I need to grab data from two parent nodes. One parent node (<code>AgentID</code>) has data directly in it, and the other (<code>Sales</code>) has child nodes (<code>Location</code>, <code>Size</code>, <code>Status</code>) that contain data, as given below.</p>
<pre class="lang-py prettyprint-override"><code>test_xml = '''<TEST_XML>
<Sales>
<AgentID>0001</AgentID>
<Sale>
<Location>0</Location>
<Size>1000</Size>
<Status>Available</Status>
</Sale>
<Sale>
<Location>1</Location>
<Size>500</Size>
<Status>Unavailable</Status>
</Sale>
</Sales>
</TEST_XML>'''
</code></pre>
<p>When I try to import this into a Pandas DataFrame, the code below is the only way I was able to grab the data under the <code>Sale</code> tag.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_xml(test_xml, xpath='//Sale')
</code></pre>
<p>which gives me a dataframe like the one below:</p>
<pre class="lang-none prettyprint-override"><code> Location Size Status
0 0 1000 Available
1 1 500 Unavailable
</code></pre>
<p>What I need is to include the <code>AgentID</code> tag in the DataFrame too, but I was unsuccessful. The expected output is given below for clarity:</p>
<pre class="lang-none prettyprint-override"><code> AgentID Location Size Status
0 0001 0 1000 Available
1 0001 1 500 Unavailable
</code></pre>
<p>Is there a way to manipulate the <code>xpath</code> parameter to include the data inside the <code>AgentID</code> tag as well, or is it impossible to do it using Pandas' <code>read_xml</code> function? I tried passing a list like <code>xpath=['//AgentID', '//Sale']</code> but of course, it did not work...</p>
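Since <code>read_xml</code> takes a single <code>xpath</code> expression, one workaround (a sketch using the stdlib parser plus pandas, not a <code>read_xml</code> feature) is to flatten the parent-level value onto each child row manually:

```python
import xml.etree.ElementTree as ET
import pandas as pd

test_xml = '''<TEST_XML><Sales><AgentID>0001</AgentID>
<Sale><Location>0</Location><Size>1000</Size><Status>Available</Status></Sale>
<Sale><Location>1</Location><Size>500</Size><Status>Unavailable</Status></Sale>
</Sales></TEST_XML>'''

rows = []
for sales in ET.fromstring(test_xml).iter('Sales'):
    agent = sales.findtext('AgentID')          # parent-level value
    for sale in sales.iter('Sale'):            # repeated onto each child row
        rows.append({'AgentID': agent,
                     'Location': sale.findtext('Location'),
                     'Size': sale.findtext('Size'),
                     'Status': sale.findtext('Status')})

df = pd.DataFrame(rows)
print(df)
```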
|
<python><pandas><xml><dataframe><file-io>
|
2023-11-30 17:33:15
| 3
| 11,270
|
marillion
|
77,580,402
| 4,913,108
|
Quickly create a list of sequential and non-sequential numbers in Python
|
<p>In R, I can quickly create a list of sequential and non-sequential numbers like so:</p>
<pre class="lang-r prettyprint-override"><code>numbers <- c(1:5, 8, 12, 15:19)
numbers
[1] 1 2 3 4 5 8 12 15 16 17 18 19
</code></pre>
<p>Is there a similar method in Python? Everything I've seen seems to require a loop and/or multiple mechanisms to create this.</p>
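Iterable unpacking in a list display gives the same one-liner; <code>itertools.chain</code> is the lazier equivalent:

```python
from itertools import chain

# unpack ranges and scalars into one list, like R's c(1:5, 8, 12, 15:19)
numbers = [*range(1, 6), 8, 12, *range(15, 20)]
print(numbers)  # [1, 2, 3, 4, 5, 8, 12, 15, 16, 17, 18, 19]

# lazy variant, closest in spirit to c()
numbers2 = list(chain(range(1, 6), [8, 12], range(15, 20)))
print(numbers2 == numbers)  # True
```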
|
<python>
|
2023-11-30 17:12:48
| 1
| 5,750
|
Gaurav Bansal
|
77,580,266
| 5,159,284
|
Ray transformation with column dtype list<element: double not null>
|
<p>My dataset schema is the following in ray:</p>
<pre><code>Column Type
------ ----
x_1 list<element: double not null>
x_2 list<element: double not null>
y int32
</code></pre>
<p>Right now if I try to convert this to a tensorflow object, I get the following dtypes:</p>
<pre><code>data.to_tf(feature_columns=["x_1", "x_2"], label_columns="y")
<_OptionsDataset element_spec=({'x_1': TensorSpec(shape=(None,), dtype=tf.string, name='x_1'), 'x_2': TensorSpec(shape=(None,), dtype=tf.string, name='x_2')}, TensorSpec(shape=(None,), dtype=tf.int32, name='y'))>
</code></pre>
<p>As you can see, it converts the features to a string type by default. I need a shape of <code>(25, 2)</code>, where 25 is the length of the list and 2 is the number of features (<code>x_1</code> and <code>x_2</code>). How do I convert the data to the right format?</p>
|
<python><tensorflow><ray>
|
2023-11-30 16:54:33
| 1
| 2,635
|
Prabhjot Singh Rai
|
77,580,216
| 867,124
|
Starred unpacking in subscription index
|
<p>Consider the following code:</p>
<pre><code>class A:
    def __getitem__(self, key):
        print(key)

a = A()
a[*(1,2,3)]
</code></pre>
<p>With python 3.10.6, I get a <code>SyntaxError : invalid syntax</code> at the starred unpacking on the last line. On python 3.11.0, however, the code works fine and prints <code>(1,2,3)</code>, as one could <em>maybe</em> expect.</p>
<p>As far as I can tell, there is no difference in the language reference between the two versions regarding the syntax of subscriptions (sections 6.3.2 of the references are identical), and no mention of this change appears in the 3.11 changelog. Also, looking at these references, the behavior of 3.10.6 seems consistent (the expression between brackets should be an <code>expression_list</code>, which does not allow for starred unpacking).</p>
<p>Is this a bug in the 3.11 version, or is this syntax change documented somewhere?</p>
|
<python>
|
2023-11-30 16:48:43
| 2
| 1,037
|
TonioElGringo
|
77,580,168
| 3,429,916
|
pybind11 - Convert type B to const A* in C++ but not in python
|
<p>I have a polymorphic base class A and a polymorphic class B derived from A. The polymorphic inheritance must stay, and I expose both classes to python, so I will have automatic down-casting on the python side. But I explicitly do not specify that A is the parent class of B (as shown in <a href="https://pybind11.readthedocs.io/en/stable/classes.html#inheritance-and-automatic-downcasting" rel="nofollow noreferrer">this</a> example, i.e. by using an extra template parameter within the <code>py::class_</code>), because I don't want to expose all the base methods of A on an object of type B.</p>
<p>I don't want that because I essentially want to control which functions are callable on B, such that they have to be explicitly exposed again on the <code>py::class_</code> of B.</p>
<p>So far so good. My question is this: How can I still allow converting an object of type <code>B</code> to a <code>const A*</code>, <code>std::shared_ptr<const A></code>, or a <code>const A&</code> for global functions defined like the <code>func_get_count</code> in my example, but without opening the door to non-const methods of A? I want <code>func_get_count</code> to work for an object of type B on the python side, but not <code>func_inc_count</code>.</p>
<pre><code>#include <pybind11/pybind11.h>
#include <memory>  // std::shared_ptr, std::make_shared

namespace py = pybind11;

class A
{
public:
    virtual ~A() = default;

    int get() const
    {
        return _count;
    }

    void inc()
    {
        _count++;
    }

private:
    int _count = 0;
};

class B : public A
{
public:
    virtual ~B() = default;
};

int getCount(const A* a)
{
    return a->get();
}

void incCount(A* a)
{
    a->inc();
}

std::shared_ptr<B> createB()
{
    auto b = std::make_shared<B>();
    b->inc();
    return b;
}

PYBIND11_MODULE(mybind_test, m)
{
    m.doc() = "mybind is my pybind11 module to test";

    py::class_<A, std::shared_ptr<A>>(m, "A")
        .def(py::init<>())
        .def("get_count", &A::get)
        .def("inc_count", &A::inc);

    py::class_<B, std::shared_ptr<B>>(m, "B")
        .def(py::init<>())
        .def("get_count", &A::get);

    m.def("func_get_count", &getCount);
    m.def("func_inc_count", &incCount);
    m.def("func_create_b", &createB);
}
</code></pre>
<p>Of course, I could just add an overload of the <code>getCount</code> function that accepts parameters of type B, but given a lot of these methods, is there a smarter, less verbose way with less code duplication?</p>
<pre><code>int getCount(const B* b)
{
    return b->get();
}
</code></pre>
<p>If I add the method overload and then also the py binding with the py overload caster,</p>
<pre><code>m.def("func_get_count", py::overload_cast<const A*>(&getCount));
m.def("func_get_count", py::overload_cast<const B*>(&getCount));
</code></pre>
<p>it seems like overkill, since the C++ side already knows how to implicitly convert a <code>B*</code> to a <code>const A*</code>...</p>
|
<python><c++><pybind11>
|
2023-11-30 16:43:27
| 0
| 597
|
huzzm
|
77,580,143
| 5,865,393
|
Reading Excel file with merged cells Pandas
|
<p>I have the following Excel file:</p>
<p><a href="https://i.sstatic.net/gWuym.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gWuym.png" alt="enter image description here" /></a></p>
<p>And the dataframe is</p>
<pre class="lang-bash prettyprint-override"><code> Name Selector Range Policy
0 Name 1 NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 Name 2 NaN NaN NaN
</code></pre>
<p>Notice that <code>Name 2</code> is a cell merged across 4 rows, but I get only one row in the df.</p>
<pre class="lang-py prettyprint-override"><code>from pprint import pprint
import pandas as pd
excel_data = pd.read_excel(io="pandas.xlsx", usecols="A:D")
df = pd.DataFrame(data=excel_data)
print(df)
data = (
    df.ffill()
    .fillna(value="")
    .groupby(by=df.columns[:1].to_list(), as_index=False)
    .aggregate(func=list)
    .rename(
        columns={
            df.columns[0]: "name",
            df.columns[1]: "selector",
            df.columns[2]: "range",
            df.columns[3]: "policy",
        }
    )
    .to_dict(orient="records")
)
pprint(data)
</code></pre>
<p>When printing data in the terminal, I get the below output:</p>
<pre class="lang-bash prettyprint-override"><code>[{'name': 'Name 1',
'policy': ['', '', '', ''],
'range': ['', '', '', ''],
'selector': ['', '', '', '']},
{'name': 'Name 2', 'policy': [''], 'range': [''], 'selector': ['']}]
</code></pre>
<p>Is there any way to make <code>data</code> look like the below when all columns other than column A are empty?</p>
<pre class="lang-bash prettyprint-override"><code>[{'name': 'Name 1', 'policy': [''], 'range': [''], 'selector': ['']},
{'name': 'Name 2', 'policy': [''], 'range': [''], 'selector': ['']}]
</code></pre>
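One post-processing sketch (assuming "empty" means every entry in a list column is the empty string) collapses those padded lists after the groupby, rather than changing the read itself:

```python
def collapse(record):
    """Collapse list values that are entirely empty strings down to ['']."""
    return {k: [''] if isinstance(v, list) and set(v) == {''} else v
            for k, v in record.items()}

# the records produced by the question's groupby/aggregate pipeline
data = [
    {'name': 'Name 1', 'policy': ['', '', '', ''],
     'range': ['', '', '', ''], 'selector': ['', '', '', '']},
    {'name': 'Name 2', 'policy': [''], 'range': [''], 'selector': ['']},
]
cleaned = [collapse(r) for r in data]
print(cleaned[0])
```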
|
<python><pandas><excel><dataframe><openpyxl>
|
2023-11-30 16:39:35
| 2
| 2,284
|
Tes3awy
|
77,580,116
| 10,404,663
|
I am getting this error in Fast API FileResponse "UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte"
|
<p>I am trying to send a file in response to an API call. I have a file in the "data" folder on the server, and I've checked that the file path is fine. I am getting this error when the file is sent:</p>
<blockquote>
<p>"UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte".</p>
</blockquote>
<p>Here is my code snippet:</p>
<pre><code>from pathlib import Path

from fastapi.responses import JSONResponse, FileResponse

app = create_app()

@app.get("/static/{filename}")
def serve_image(filename: str) -> FileResponse:
    file_path = Path("data") / filename
    if not file_path.is_file():
        raise NotFoundException("File not found")
    return FileResponse(file_path)
</code></pre>
<p>Complete Error:</p>
<pre class="lang-none prettyprint-override"><code>ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/fastapi/applications.py", line 292, in __call__
await super().__call__(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/middleware/cors.py", line 83, in __call__
await self.app(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/middleware/authentication.py", line 48, in __call__
await self.app(scope, receive, send)
File "/Volumes/Data/Study/NTNU/IMT4886 Applied Computer Science Project/Code/medical-image-analytics-backend/core/fastapi/middlewares/response_logger.py", line 37, in __call__
await self.app(scope, receive, _logging_send)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
await response(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/responses.py", line 358, in __call__
await send(
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in sender
await send(message)
File "/Volumes/Data/backend/core/fastapi/middlewares/response_logger.py", line 33, in _logging_send
response_info.body += body.decode("utf8")
^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
</code></pre>
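<p>The decode fails inside the custom response-logging middleware (<code>response_logger.py</code>), which decodes every body chunk as UTF-8; <code>0x89</code> is the first byte of a PNG file. A hedged sketch of a guard the middleware could use (the helper name is my own, not part of FastAPI):</p>
<pre class="lang-py prettyprint-override"><code># Binary payloads such as PNG images start with byte 0x89 and cannot be
# decoded as UTF-8, so fall back to a placeholder instead of letting
# UnicodeDecodeError propagate out of the logging middleware.
def safe_body_text(body: bytes) -> str:
    try:
        return body.decode("utf-8")
    except UnicodeDecodeError:
        return f"&lt;{len(body)} bytes of binary data&gt;"
</code></pre>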
|
<python><fastapi><fileresponse>
|
2023-11-30 16:35:33
| 0
| 641
|
FAHAD SIDDIQUI
|
77,580,088
| 705,373
|
Cannot convert string variable to raw
|
<p>I have seen a few examples of how to convert a string variable to a raw one using interpolation, but it doesn't work for me:</p>
<pre><code>import json
j = '{"value": "{\"foo\": \"bar\"}"}'
print(j)
print(fr"{j}")
print(r'{"value": "{\"foo\": \"bar\"}"}') # Works
print(json.loads(r'{"value": "{\"foo\": \"bar\"}"}'))
try:
print(json.loads(fr"{j}")) # Doesn't work
except Exception as e:
print(e)
</code></pre>
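<p>For reference, a short sketch of why the interpolation cannot restore the backslashes: the escape sequences are resolved when the literal is parsed, so by the time <code>j</code> exists the backslashes are already gone.</p>
<pre><code># The escapes in the literal are resolved at parse time, so the
# backslashes no longer exist in the value of j.
j = '{"value": "{\"foo\": \"bar\"}"}'
assert j == '{"value": "{"foo": "bar"}"}'   # backslashes already gone
# fr"..." makes the *literal* raw; interpolating j just copies its
# (backslash-free) contents, so nothing is restored.
assert fr"{j}" == j
</code></pre>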
<p>What am I doing wrong?</p>
|
<python><python-3.x><string>
|
2023-11-30 16:32:16
| 3
| 11,810
|
Roman Newaza
|
77,580,068
| 7,519,069
|
Optimize this Python function - apply a linear transformation based on parity of index
|
<p>I have this simple Python function:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def fast_transform(img, offset, factor):
rep = (img.shape[0]//2, img.shape[1]//2)
out = (img.astype(np.float32) - np.tile(offset, rep)) * np.tile(factor, rep)
return out
</code></pre>
<p>The function gets an image (as an N×M numpy ndarray) and two 2x2 arrays (offset and factor). It then calculates a basic linear transformation on every pixel in the image based on its parity in each dimension:
<code>out[i,j] = (out[i,j] - offset[i%2,j%2]) * factor[i%2,j%2]</code></p>
<p>As you can see I used np.tile to try and speed up the function but this isn't fast enough for my needs (and I think the creation of the dummy np.tile arrays makes it sub-optimal). I tried to use numba but it doesn't support np.tile yet.</p>
<p>Can you help me optimize this function as much as possible? I am sure there is some simple way to do it I am missing.</p>
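<p>One broadcasting-based sketch with the same semantics but no temporary tiled arrays (whether it is actually faster on your data is an assumption worth benchmarking):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def fast_transform(img, offset, factor):
    # Original np.tile-based version, for comparison.
    rep = (img.shape[0] // 2, img.shape[1] // 2)
    return (img.astype(np.float32) - np.tile(offset, rep)) * np.tile(factor, rep)

def fast_transform_broadcast(img, offset, factor):
    # Index the 2x2 parameter arrays with the row/column parities and let
    # advanced indexing broadcast them to per-pixel arrays without np.tile.
    i = (np.arange(img.shape[0]) % 2)[:, None]
    j = (np.arange(img.shape[1]) % 2)[None, :]
    return (img.astype(np.float32) - offset[i, j]) * factor[i, j]
</code></pre>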
|
<python><numpy><optimization><numba>
|
2023-11-30 16:29:37
| 2
| 351
|
RyArazi
|
77,579,964
| 804,184
|
Python: Import class from args parser
|
<p>I want to know if it is possible to import a class based on the input arguments parsed by argparse.
What I am doing now is something like this:</p>
<pre><code>parser = argparse.ArgumentParser()
parser.add_argument('-p', dest="process", default="process1")
args = parser.parse_args()
if 'process1' in args.process: from Process1 import analysis
elif 'process2' in args.process: from Process2 import analysis
....
</code></pre>
<p>I am wondering if there is a way to do something like this:</p>
<pre><code>from args.process import analysis
</code></pre>
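<p>A hedged sketch of the usual approach: <code>importlib.import_module</code> takes the module name as a runtime string, which a small helper can combine with <code>getattr</code> (the helper name here is my own):</p>
<pre><code>import importlib

def load_symbol(module_name: str, symbol: str):
    # Import the module chosen at runtime and pull the requested attribute;
    # equivalent to "from &lt;module_name&gt; import &lt;symbol&gt;".
    module = importlib.import_module(module_name)
    return getattr(module, symbol)

# With the parser above this would be something like:
#   analysis = load_symbol(args.process.capitalize(), "analysis")
</code></pre>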
|
<python><python-import><argparse>
|
2023-11-30 16:13:53
| 0
| 5,276
|
Alejandro
|
77,579,932
| 276,193
|
Workaround for stop required by python slice()
|
<p>The python documentation for <a href="https://docs.python.org/3/library/functions.html#slice" rel="nofollow noreferrer">slice</a> notes that a stop parameter is always required.</p>
<p>This is good for digging around in data: instead of <code>x[100000:200000]</code>, I can do something like...</p>
<pre><code>sl=slice(100000,200000)
func(x[sl])
func(y[sl])
func(z[sl])
</code></pre>
<p>But then how would I go about defining something generic for when I want to slice to the end?</p>
<pre><code>x[100000:]
</code></pre>
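<p>For reference, <code>slice()</code> accepts <code>None</code> wherever subscript syntax allows an omitted bound, so an open-ended slice can be expressed directly; a short sketch:</p>
<pre><code># None stands in for any omitted bound in subscript syntax.
tail = slice(100000, None)          # x[100000:]
head = slice(None, 100000)          # x[:100000]
every_other = slice(None, None, 2)  # x[::2]

x = list(range(200000))
assert x[tail] == x[100000:]
assert x[head] == x[:100000]
assert x[every_other] == x[::2]
</code></pre>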
|
<python><slice>
|
2023-11-30 16:09:29
| 2
| 16,283
|
learnvst
|
77,579,920
| 7,995,293
|
Which is more efficient in python (and in general): iterate over short list and searching in the longer one, or vice versa?
|
<p>I have two lists, both contain integers represented as strings. <code>large_list</code> is large, up to low 10s of thousands of elements. <code>small_list</code> is smaller, up to hundreds of elements. I am filtering the larger one by the smaller one. I'm curious to know which is more efficient:</p>
<p>(a)</p>
<pre><code>for element in small_list:
if element in large_list:
(do something)
</code></pre>
<p>(b)</p>
<pre><code>for element in large_list:
if element in small_list:
(do something)
</code></pre>
<p>For option (a), the iteration count is limited by the length of <code>small_list</code>, but for each element a search has to be executed on the full <code>large_list</code>.
For option (b), the iteration goes through the entire <code>large_list</code> but each search is executed on <code>small_list</code>.</p>
<p>I understand that sets are hashed, so the best option here ought to be to take <code>large_set = set(large_list)</code>, iterate through <code>small_list</code>, and do each search on <code>large_set</code>. But if we take sets out of the equation, is option (a) more efficient, or is option (b)?</p>
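<p>To make the comparison concrete, a small sketch of the hashed variant described above (sizes chosen arbitrarily): both list-based loops do roughly <code>len(small) * len(large)</code> comparisons in the worst case, while hashing one side makes each lookup O(1) on average.</p>
<pre><code>small_list = [str(i) for i in range(300)]
large_list = [str(i) for i in range(20000)]

# One-time O(len(large)) cost to build the hash table, then each
# membership test is O(1) on average instead of a linear scan.
large_set = set(large_list)
matches = [e for e in small_list if e in large_set]
</code></pre>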
|
<python><list><performance><set>
|
2023-11-30 16:07:24
| 4
| 399
|
skytwosea
|
77,579,895
| 861,757
|
PyTorch CNN Returns Only One Result After Training
|
<p>I'm training a CNN image classifier. The network classifies 255 x 255 RGB images into five categories numbered <code>0</code> to <code>4</code>.</p>
<p>But the network is behaving strangely during training. Although the loss function drops smoothly, the model returns identical answers for all the samples in the batch most of the time. Even more strangely, eventually it starts answering only 2.</p>
<p>Here's a typical training output with batches of 10 images.</p>
<pre><code>LABELS OUTPUT CORRECT
tensor([2, 0, 2, 2, 2, 0, 2, 2, 2, 4]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 2 / 10
tensor([2, 2, 2, 2, 3, 4, 1, 2, 2, 2]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 0 / 10
tensor([2, 2, 2, 0, 2, 4, 3, 1, 2, 2]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 1 / 10
tensor([3, 4, 2, 2, 0, 4, 4, 3, 2, 0]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 2 / 10
tensor([1, 2, 2, 4, 2, 0, 1, 0, 0, 0]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 4 / 10
tensor([2, 2, 2, 3, 2, 0, 0, 1, 2, 2]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 2 / 10
tensor([1, 1, 0, 1, 2, 2, 1, 1, 0, 1]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 2 / 10
tensor([0, 2, 1, 3, 3, 2, 1, 0, 2, 2]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 2 / 10
tensor([2, 3, 2, 2, 3, 1, 0, 1, 0, 2]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 2 / 10
tensor([3, 2, 3, 1, 1, 2, 0, 4, 2, 2]) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) 1 / 10
tensor([2, 1, 0, 3, 1, 2, 2, 1, 2, 0]) tensor([2, 2, 2, 2, 2, 0, 2, 2, 0, 2]) 2 / 10
tensor([3, 0, 2, 1, 3, 1, 2, 4, 2, 2]) tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2]) 4 / 10
tensor([2, 2, 1, 2, 1, 1, 1, 4, 3, 2]) tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2]) 4 / 10
# Remaining predictions are always [2, 2, 2...]
# Loss function is not shown, but it declines smoothly and looks well behaved
</code></pre>
<p>Although <code>2</code> is the most common category in the labels (about 50% of the images), I don't see why the CNN should 'concentrate' on a single answer (<code>0</code> in the above sample) or always predict <code>2</code> at the end.</p>
<p>I expected to get more varied results in the output tensors even if the accuracy wasn't good enough. What am I doing wrong?</p>
<p>Here's my code for the network...</p>
<pre><code>class CNN(nn.Module):
def __init__(self, n_layers=3, n_categories=5):
super(CNN, self).__init__()
self.conv1 = nn.Conv2d(n_layers, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.conv3 = nn.Conv2d(16, 16, 5)
self.fc1 = nn.Linear(16 * 28 * 28, 200)
self.fc2 = nn.Linear(200, 84)
self.fc3 = nn.Linear(84, n_categories)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
x = x.view(-1, 16 * 28 * 28)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
</code></pre>
<p>...the optimizer, loss function and dataloader...</p>
<pre><code>model = CNN()
transforms = v2.Compose([
v2.ToImageTensor(),
v2.ConvertImageDtype(),
v2.Resize((256, 256), antialias=True)
])
dataset = UBCDataset(transforms=transforms)
full_dataloader = DataLoader(dataset, batch_size=10, shuffle=False)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
</code></pre>
<p>...and the training loop that produced the above output. Loss function is not shown, but it declines smoothly as expected.</p>
<pre><code>batches = iter(full_dataloader)
print("LABELS OUTPUT CORRECT")
for X, y in batches:
model.train()
pred = model(X)
loss = loss_fn(pred, y)
loss.backward()
optimizer.step()
#optimizer.zero_grad()
print(f"{y} {pred.argmax(1)} {int(sum(y == pred.argmax(1)))} / {len(y)} {loss.item()}")
</code></pre>
<p>Any input is appreciated.</p>
|
<python><pytorch><conv-neural-network><backpropagation>
|
2023-11-30 16:03:18
| 1
| 728
|
Paulo Mendes
|
77,579,884
| 1,914,781
|
convert CST time string to datetime object
|
<p>How can I convert a time string which contains CST to a datetime object?</p>
<pre><code>ts = "Wed Dec 31 18:00:00 CST 1969"
</code></pre>
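<p>One hedged approach, assuming CST here means US Central Standard Time (UTC-6; the abbreviation is ambiguous, as it is also China Standard Time, UTC+8): substitute an explicit offset before parsing, since <code>%Z</code> handling of abbreviations is platform-dependent.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timezone

ts = "Wed Dec 31 18:00:00 CST 1969"

# Assumption: CST = US Central Standard Time (UTC-6). Replacing the
# abbreviation with a numeric offset lets %z parse it unambiguously.
dt = datetime.strptime(ts.replace("CST", "-0600"), "%a %b %d %H:%M:%S %z %Y")
assert dt == datetime(1970, 1, 1, tzinfo=timezone.utc)  # the Unix epoch
</code></pre>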
|
<python><pandas>
|
2023-11-30 16:01:44
| 1
| 9,011
|
lucky1928
|
77,579,876
| 5,896,319
|
How to fix empty table of content in Pylatex?
|
<p>I'm trying to generate a PDF file using pylatex and pdflatex.
I have sections and subsections in my document and I want to create a table of contents like this:</p>
<pre><code>doc.append(NewPage())
doc.append(Command('tableofcontents'))
doc.append(NoEscape(r'\clearpage'))
</code></pre>
<p>but after I compile it it generates a blank page with just</p>
<p><strong>Contents</strong></p>
<p>I do not know how to solve this problem.
And this is my generate command:</p>
<pre><code>doc.generate_pdf(filepath=filepath, compiler=pdflatex_path, clean_tex=True)
</code></pre>
|
<python><flask><latex><pylatex>
|
2023-11-30 16:00:20
| 1
| 680
|
edche
|
77,579,830
| 18,904,265
|
operational error when testing my CLI app with a SQLite DB
|
<p>So I wanted to add tests to my CLI app, but I am struggling to replace my db with a temporary one. So far I have used SQLModel for database access, but for the app testing I want to assert the db values using raw SQL statements. However, I get a connection error when trying to access my temporary DB.</p>
<pre><code>sqlite3.OperationalError: unable to open database file
</code></pre>
<p>My plan so far is the following: In my test file I replace the sqlite URI with one directing it to a sqlite DB in a temporary directory (using pytest tmp_path). My assumption was that if I create that DB with SQLModel, I'd be able to access it with sqlite3 later on. Below you'll find the code. (At the moment, I haven't added an assert yet. That'll be added of course as soon as the DB connection works.)</p>
<p>Also, is there a better way to replace my actual db, or is it fine what I am doing here?</p>
<p>My test file:</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3
from typer.testing import CliRunner
from projects import db
from projects.app_typer import app
def temp_db(path):
db.sqlite_url = f"sqlite:///{path}/db.db"
runner = CliRunner()
def test_url_replacement(tmp_path):
temp_db(tmp_path)
assert db.sqlite_url == f"sqlite:///{tmp_path}/db.db"
def test_add_item_to_db(tmp_path):
temp_db(tmp_path)
result = runner.invoke(app, ["add", "public", "-n", "Project", "-p", "00-00"])
con = sqlite3.connect(f"sqlite:///{tmp_path}/db.db")
cur = con.cursor()
db_entry = cur.execute("SELECT * FROM project").fetchone()
print(db_entry)
</code></pre>
<p>Excerpt from <code>db.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from sqlmodel import Field, Session, SQLModel, create_engine, select
sqlite_url = "sqlite:///database.db"
engine = create_engine(sqlite_url)
def create_session_and_db():
SQLModel.metadata.create_all(engine)
</code></pre>
<p>Excerpt from <code>app_typer.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import typer
app = typer.Typer(add_completion=False)
@app.callback(invoke_without_command=True, no_args_is_help=True)
def main():
create_session_and_db()
app.add_typer(add.app, name="add", help="Add a project to the DB.")
</code></pre>
<h2>Edit</h2>
<p>So I fixed the URI for sqlite3 thanks to @emilio-silva. However I bumped into a new problem: the database sqlmodel creates seems to be somewhere else than in the temporary directory I specified with <code>tmp_path</code>.</p>
<p>If I open the folder after running pytest, <code>db.db</code> exists, but is empty (Checked with DB Browser for SQLite). This makes me assume, that sqlite3 actually created a new DB. I can't figure out though, why sqlmodel doesn't put the db in the correct location. When setting <code>echo=True</code> for the engine (sqlmodel/sqlalchemy), I get the following output:</p>
<pre><code>INFO sqlalchemy.engine.Engine:base.py:2689 BEGIN (implicit)
INFO sqlalchemy.engine.Engine:base.py:1848 PRAGMA main.table_info("project")
INFO sqlalchemy.engine.Engine:base.py:1848 [raw sql] ()
INFO sqlalchemy.engine.Engine:base.py:2695 COMMIT
INFO sqlalchemy.engine.Engine:base.py:2689 BEGIN (implicit)
INFO sqlalchemy.engine.Engine:base.py:1848 INSERT INTO project (name, project_number, offer_number, project_type, classification) VALUES (?, ?, ?, ?, ?)
INFO sqlalchemy.engine.Engine:base.py:1848 [generated in 0.00037s] ('Project', '00-00', None, 'public', 'internal')
INFO sqlalchemy.engine.Engine:base.py:2695 COMMIT
INFO sqlalchemy.engine.Engine:base.py:2689 BEGIN (implicit)
INFO sqlalchemy.engine.Engine:base.py:1848 SELECT project.id, project.name, project.project_number, project.offer_number, project.project_type, project.classification
FROM project
WHERE project.id = ?
INFO sqlalchemy.engine.Engine:base.py:1848 [generated in 0.00055s] (15,)
INFO sqlalchemy.engine.Engine:base.py:2692 ROLLBACK
</code></pre>
<p>For completeness the error I get from sqlite3:
<code>sqlite3.OperationalError: no such table: project</code></p>
<p>And the directory of the tmp_path:
<code>C:\Users\**redacted**\AppData\Local\Temp\pytest-of-**redacted**\pytest-16\test_add_item_to_db0</code></p>
<h2>Edit 2</h2>
<p>Okay I figured out part of the solution.</p>
<p>Even though I change the database URI, the sqlmodel engine still connects to the original database, not the temporary one. My guess is that the <code>create_engine</code> command runs at the point when I <strong>import db.py</strong>, not when I run my test. And since I change the database string <strong>after</strong> the import, it's too late to feed it to the engine.</p>
<p>My guess is that the solution would be to somehow implement the database creation/binding with sqlmodel differently; can anyone point me to a fitting approach?</p>
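<p>A minimal sketch of such a lazy-initialization pattern, using stdlib sqlite3 for illustration (with SQLModel the analogous change would be to call <code>create_engine</code> inside a function at first use, rather than at import time, so a test can repoint the URL before any connection exists):</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3

# db.py sketch (hypothetical): the module-level path is only read when the
# first connection is requested, so a test can reassign it beforehand.
sqlite_path = "database.db"
_connection = None

def get_connection():
    global _connection
    if _connection is None:
        _connection = sqlite3.connect(sqlite_path)
    return _connection
</code></pre>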
|
<python><sqlite><pytest><typer>
|
2023-11-30 15:52:58
| 1
| 465
|
Jan
|
77,579,813
| 14,634,210
|
Implementing discord bot joining logic via oauth: looking for solution on how to join my bot onto a guild
|
<p>I am trying to add logic to my Discord bot to add it to a guild, however I can't figure out how to do this.</p>
<p>The first thing I tried was to create an algorithm to join a discord server with an oauth.</p>
<pre class="lang-py prettyprint-override"><code># a redirected url looks like this https://www.example.com/?code=w1WuVLRAqsgBShI7iSlZXCTKlTXBKO&guild_id=1028775661361430648&permissions=71680
@app.get("/bot-added")
async def bot_added():
code = request.args.get("code")
g_id = request.args.get("guild_id")
guild = await bot.fetch_guild(g_id)
g_invites = guild.invites()
invites_q = []
invites = asyncio.gather(invites_q)
for invite in invites:
if not isinstance(invite, HTTPException):
# join server invite(invite)
pass
</code></pre>
<p>While writing this code, I noticed that there is no function/method to join an invite.</p>
<p>While looking online for a solution I found a Stack Overflow thread saying you could make requests to the Discord API directly. This solution seems not to be a great alternative due to the possibility of my bot being banned.</p>
<p>Is there a better way to do this?</p>
|
<python><discord>
|
2023-11-30 15:50:19
| 2
| 669
|
The epic face 007
|
77,579,674
| 4,738,644
|
"roll axis" using inner and outer values in Dictorionaries in python
|
<p>Hello, I'm kind of new to dictionaries in Python and have the following dictionary structure (I prefer to show it as simply as possible, trying to omit every key and value):</p>
<pre><code>data_group_test = {Example: {model: {epoch: {dataset: np.array}}}}
</code></pre>
<p>but I would like it to be</p>
<pre><code>data_group_test = {Example: {model: {dataset: {epoch: np.array}}}}
</code></pre>
<p>Is there a way to do this swapping between inner/outer values? It is like a np.rollaxis but with a dictionary.</p>
<p>Edited: All of the components are dictionaries and every dataset contains the same set of epochs (e.g. epochs 20, 40, 60 are in datasets a, b, and c).</p>
<p>Thanks.</p>
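<p>One hedged sketch of such a "roll axis" for dicts (helper names are my own): a helper that swaps the two levels of any two-level dict, applied below the Example/model levels.</p>
<pre><code>def swap_levels(two_level: dict) -> dict:
    # {a: {x: v}} -> {x: {a: v}}
    out = {}
    for outer, inner in two_level.items():
        for k, v in inner.items():
            out.setdefault(k, {})[outer] = v
    return out

def swap_epoch_dataset(data: dict) -> dict:
    # Turn {example: {model: {epoch: {dataset: arr}}}} into
    #      {example: {model: {dataset: {epoch: arr}}}}.
    return {ex: {m: swap_levels(inner) for m, inner in models.items()}
            for ex, models in data.items()}
</code></pre>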
|
<python><dictionary>
|
2023-11-30 15:30:27
| 1
| 421
|
Diego Alejandro Gómez Pardo
|
77,579,658
| 7,054,001
|
Pretrained model with stride doesn’t predict long text
|
<p>My objective is to annotate long documents with bioformer-8L. I have been told to use stride and truncation so I don't have to split my documents into chunks of 512 tokens.</p>
<p>In the training phase, I called the tokenizer like this:</p>
<pre><code>tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, stride = 128, return_overflowing_tokens=True, model_max_length=512, truncation=True, is_split_into_words=True)
</code></pre>
<p>Then I train my model and at this stage I don't see any parameter that could help me in my task.</p>
<p>With my trained model I do this for the predictions:</p>
<pre><code>model = AutoModelForTokenClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, stride = 128, return_overflowing_tokens=True, model_max_length=512, truncation=True, is_split_into_words=True)
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="first")
</code></pre>
<p>But it does not work: the model stops providing annotations in the middle of the text.</p>
|
<python><nlp><huggingface-transformers>
|
2023-11-30 15:28:13
| 1
| 626
|
Despe1990
|
77,579,554
| 5,692,673
|
SyntaxError: future feature annotations is not defined using python 3.9
|
<p>I am using python 3.9 in a virtual environment. The only solution I can find just says to use python 3.7 or higher and try using a virtual environment to ensure it is. I am doing both and as you can see from the backtrace, it is indeed using 3.9.</p>
<p>edit: we just found in this stack trace two lines where it is for some reason going outside our virtual environment and hitting the system python3.6. We have no clue as to why this is happening.</p>
<pre><code>Internal Server Error: /api/auth/token-auth/
Traceback (most recent call last):
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/core/handlers/base.py", line 165, in _get_response
callback, callback_args, callback_kwargs = self.resolve_request(request)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/core/handlers/base.py", line 288, in resolve_request
resolver_match = resolver.resolve(request.path_info)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/urls/resolvers.py", line 545, in resolve
for pattern in self.url_patterns:
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/utils/functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/urls/resolvers.py", line 589, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/utils/functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/urls/resolvers.py", line 582, in urlconf_module
return import_module(self.urlconf_name)
File "/usr/lib64/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/workspace/mktplace/market/market/market/urls.py", line 47, in <module>
path(prepath, include('reporting.urls'), name='payg_v1_reporting')
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/urls/conf.py", line 34, in include
urlconf_module = import_module(urlconf_module)
File "/usr/lib64/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/workspace/mktplace/market/market/reporting/urls.py", line 3, in <module>
from .views import invoices_by_month, monthly_utilized, total_funds_utilized_by_month, total_funds_utilized_gauge, total_funds_utilized, monthly_average_return, invoice_status_by_month, invoice_current_statuses
File "/workspace/mktplace/market/market/reporting/views.py", line 448, in <module>
import matplotlib
File "/workspace/mktplace/env/lib/python3.9/site-packages/matplotlib/__init__.py", line 126, in <module>
import numpy
File "/workspace/mktplace/env/lib/python3.9/site-packages/numpy/__init__.py", line 141, in <module>
from . import core
File "/workspace/mktplace/env/lib/python3.9/site-packages/numpy/core/__init__.py", line 9, in <module>
from numpy.version import version as __version__
File "/workspace/mktplace/env/lib/python3.9/site-packages/numpy/version.py", line 1
from __future__ import annotations
^
SyntaxError: future feature annotations is not defined
</code></pre>
<p>second edit: we removed all pip installed packages in our virtual environment and reinstalled. We now get a different stack trace, but it seems to lead back to the importlib package going outside our virtual environment and using the system's python 3.6 for some reason.</p>
<pre><code>Internal Server Error: /api/auth/token-auth/
Traceback (most recent call last):
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/core/handlers/base.py", line 165, in _get_response
callback, callback_args, callback_kwargs = self.resolve_request(request)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/core/handlers/base.py", line 288, in resolve_request
resolver_match = resolver.resolve(request.path_info)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/urls/resolvers.py", line 545, in resolve
for pattern in self.url_patterns:
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/utils/functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/urls/resolvers.py", line 589, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/utils/functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/urls/resolvers.py", line 582, in urlconf_module
return import_module(self.urlconf_name)
File "/usr/lib64/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/workspace/mktplace/market/market/market/urls.py", line 47, in <module>
path(prepath, include('reporting.urls'), name='payg_v1_reporting')
File "/workspace/mktplace/env/lib/python3.9/site-packages/django/urls/conf.py", line 34, in include
urlconf_module = import_module(urlconf_module)
File "/usr/lib64/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/workspace/mktplace/market/market/reporting/urls.py", line 3, in <module>
from .views import invoices_by_month, monthly_utilized, total_funds_utilized_by_month, total_funds_utilized_gauge, total_funds_utilized, monthly_average_return, invoice_status_by_month, invoice_current_statuses
File "/workspace/mktplace/market/market/reporting/views.py", line 448, in <module>
import matplotlib
File "/workspace/mktplace/env/lib/python3.9/site-packages/matplotlib/__init__.py", line 126, in <module>
import numpy
File "/workspace/mktplace/env/lib/python3.9/site-packages/numpy/__init__.py", line 109, in <module>
from ._globals import (
ImportError: cannot import name 'ModuleDeprecationWarning'
</code></pre>
<p>The other thing we noticed, our developer does a pip install on our requirements.txt and he gets a smaller set of packages installed than I do and he doesn't get this importlib package like I do.</p>
|
<python><django><numpy>
|
2023-11-30 15:11:59
| 0
| 1,001
|
general
|
77,579,530
| 9,983,652
|
How to move the legend of each trace inside each subplot when subplot has more than 1 column
|
<p>When a subplot figure has 2 columns and 2 rows, the legends are typically grouped together and placed in the same location, as on the right-hand side below. I am wondering if I can place the legend of each trace inside its own subplot, for example, the legend of Trace 1 inside the subplot at row=1, col=1, and the legend of Trace 2 inside row=1, col=2? Thanks</p>
<pre><code>import plotly.graph_objects as go
from plotly.subplots import make_subplots
# Create subplots with the shared legend
fig = make_subplots(rows=2, cols=2, subplot_titles=("Plot 1", "Plot 2", "Plot 3", "Plot 4"))
# Add traces to subplots
fig.add_trace(go.Scatter(x=[1, 4, 1, 3, 5, 2], y=[4, 5, 6, 5, 4, 2], name="Trace 1"), row=1, col=1)
fig.add_trace(go.Scatter(x=[1, 2, 3, 4, 3, 5], y=[7, 8, 9, 3, 2, 6], name="Trace 2"), row=1, col=2)
fig.add_trace(go.Scatter(x=[3, 5, 6, 4, 5, 6], y=[1, 3, 5, 4, 2, 3], name="Trace 3"), row=2, col=1)
fig.add_trace(go.Scatter(x=[1, 2, 3, 2, 6, 4], y=[3, 4, 5, 6, 2, 1], name="Trace 4"), row=2, col=2)
# Update layout for customized legend grouping
fig.update_layout(legend=dict(tracegroupgap=70))
# Show the plot
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/Z9hnd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z9hnd.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2023-11-30 15:06:40
| 1
| 4,338
|
roudan
|
77,579,451
| 1,740,254
|
Gunicorn Timeout - How to increase timeout?
|
<p>I have a Python FastAPI app deployed using gunicorn. Requests get timed out exactly after 1 min.</p>
<pre><code>504 Gateway Time-out
</code></pre>
<p>I want to increase the timeout of the API due to long backend processing. I tried to pass the timeout as below but it does not work at all.</p>
<pre><code>gunicorn main:app --keep-alive 300 -w 3 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 --timeout 300
</code></pre>
<p>Is this the correct way to pass the timeout value? How can I fix it?</p>
|
<python><docker><fastapi><gunicorn>
|
2023-11-30 14:55:07
| 1
| 451
|
CoronaVirus
|
77,579,346
| 7,454,638
|
Pylance has wrong type hint (Any) for class method with additional params
|
<p>Perhaps best way to explain the issue is with an example:</p>
<pre class="lang-py prettyprint-override"><code>class Bar:
pass
class Foo:
@classmethod
def class_foobar(cls) -> Bar:
return Bar()
@classmethod
def class_foobar_param(cls, param: str) -> Bar:
return Bar()
bar = Foo.class_foobar()
bar_param = Foo.class_foobar_param(param="")
</code></pre>
<p>Now, by hovering on <code>bar</code> and <code>bar_param</code>, I would expect the Pylance tell me they have the same type: <code>Bar</code>. However, <code>bar_param</code> shows as <code>Any</code>.</p>
<p>Now if we add a default value to <code>param</code>, the issue goes away:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
@classmethod
def class_foobar_param_default(cls, param: str = "") -> Bar:
return Bar()
</code></pre>
<p>Does anyone know what is going on here?</p>
<p>Following the suggestion in the comments, here is my environment:
macOS Ventura 13.6, VS Code v1.84.2, Pylance v2023.11.10 (installed through the VS Code extensions view), Python v3.11.4</p>
|
<python><intellisense><pylance>
|
2023-11-30 14:38:33
| 0
| 1,886
|
leoschet
|
77,579,324
| 418,413
|
How to Register Multiple Pluggy Plugins with Setuptools
|
<h2>Problem</h2>
<p>I can successfully register a single plugin in pluggy using <a href="https://pluggy.readthedocs.io/en/stable/api_reference.html#pluggy.PluginManager.load_setuptools_entrypoints" rel="nofollow noreferrer"><code>load_setuptools_entrypoints</code></a>, but I can only register one. If two different plugins try to register themselves, the last one registered will be the only one that runs.</p>
<p>I think this is not how pluggy is intended to work, and that I am making a configuration mistake, but I don't know what to do differently. I think pluggy is supposed to allow multiple plugins to register and run in serial when that hook is called.</p>
<h2>Project Structure</h2>
<pre><code>.
├── pluggable
│ ├── pluggable.py
│ └── pyproject.toml
├── plugin_a
│ ├── a.py
│ └── pyproject.toml
└── plugin_b
├── b.py
└── pyproject.toml
</code></pre>
<p>with:</p>
<pre><code># pluggable/pluggable.py
import pluggy

NAME = "pluggable"

impl = pluggy.HookimplMarker(NAME)


@pluggy.HookspecMarker(NAME)
def run_plugin():
    pass


def main():
    m = pluggy.PluginManager(NAME)
    m.load_setuptools_entrypoints(NAME)
    m.hook.run_plugin()


if __name__ == "__main__":
    main()
</code></pre>
<p>and</p>
<pre><code># pluggable/pyproject.toml
[project]
name = "pluggable"
version = "1.0.0"
dependencies = ["pluggy==1.3.0"]
</code></pre>
<p>Both plugin_a/a.py and plugin_b/b.py have the same contents:</p>
<pre><code># plugin_a/a.py
import pluggy

from pluggable import impl


@impl
def run_plugin():
    print(f"run from {__name__}")


if __name__ == "__main__":
    run_plugin()
</code></pre>
<p>plugin_a/pyproject.toml and plugin_b/pyproject.toml register their respective modules as pluggy plugins:</p>
<pre><code># plugin_a/pyproject.toml
[project]
name = "plugin_a"
version = "1.0.0"
dependencies = ["pluggy==1.3.0", "pluggable"]
[project.entry-points.pluggable]
run_plugin = "a"
</code></pre>
<p>plugin_b/pyproject.toml is the same, except that <code>run_plugin = "b"</code>.</p>
<h2>Demo run</h2>
<pre><code>$ python -m venv venv
...
$ venv/bin/pip install -e pluggable -e plugin_a
...
Successfully installed pluggable-1.0.0 plugin-a-1.0.0
$ venv/bin/python pluggable/pluggable.py
run from a
$ venv/bin/pip install -e plugin_b
...
Successfully installed plugin-b-1.0.0
$ venv/bin/python pluggable/pluggable.py
run from b
</code></pre>
<h2>Expected outcome</h2>
<p>What I'd like to see at the end is</p>
<pre><code>run from b
run from a
</code></pre>
<h2>Further analysis</h2>
<p>From reading the <a href="https://github.com/pytest-dev/pluggy/blob/82d4e1ce62e11a76b9d050e457adc41c4c69f7db/src/pluggy/_manager.py#L376C9-L376C36" rel="nofollow noreferrer">load_setuptools_entrypoints</a> code it's pretty clear what's happening. As soon as one plugin is registered to a name, <a href="https://github.com/pytest-dev/pluggy/blob/82d4e1ce62e11a76b9d050e457adc41c4c69f7db/src/pluggy/_manager.py#L394" rel="nofollow noreferrer"><code>PluginManager.get_plugin(name)</code> returns that value</a>, so other plugins will not be registered at the same name.</p>
<p>But what I am hoping to get help to understand is what is the correct way to configure a plugin system with pluggy so that two different python packages can register themselves with the same spec and hook, such that pluggy will run them in serial?</p>
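<p>My current hypothesis is that the entry-point <em>name</em> only has to be unique per plugin and does not have to match the hook name, so each package would declare something like the following (hypothetical variant; I have not confirmed that pluggy treats the entry-point name as a plugin name rather than a hook name):</p>

```toml
# plugin_b/pyproject.toml (hypothetical variant)
[project]
name = "plugin_b"
version = "1.0.0"
dependencies = ["pluggy==1.3.0", "pluggable"]

[project.entry-points.pluggable]
# Unique entry-point name per plugin, so PluginManager.get_plugin()
# does not find an already-registered plugin under the same name.
plugin_b = "b"
```

<p>with <code>plugin_a = "a"</code> in plugin_a's file, but I would like confirmation that this is the intended configuration.</p>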
|
<python><plugins><pluggy>
|
2023-11-30 14:34:41
| 2
| 77,713
|
kojiro
|
77,579,302
| 13,836,083
|
Python Assignment statement
|
<p>I was going through the Python <a href="https://docs.python.org/3/reference/simple_stmts.html#assignment-statements" rel="nofollow noreferrer">docs</a> on assignment statements.</p>
<p>There, Python specifies assignment statements in the following <a href="https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form" rel="nofollow noreferrer">Backus–Naur form</a>:</p>
<pre><code>assignment_stmt ::= (target_list "=")+ (starred_expression | yield_expression)
target_list ::= target ("," target)* [","]
target ::= identifier
| "(" [target_list] ")"
| "[" [target_list] "]"
| attributeref
| subscription
| slicing
| "*" target
</code></pre>
<p>Where as <a href="https://docs.python.org/3/reference/expressions.html#grammar-token-python-grammar-starred_expression" rel="nofollow noreferrer">starred_expression</a> is in Backus-Naur Form is</p>
<pre><code>starred_expression ::= expression | (starred_item ",")* [starred_item]
starred_item ::= assignment_expression | "*" or_expr
</code></pre>
<p>and <a href="https://docs.python.org/3/reference/expressions.html#yield-expressions" rel="nofollow noreferrer">yield_expression</a> in Backus-Naur Form is</p>
<pre><code>yield_atom ::= "(" yield_expression ")"
yield_expression ::= "yield" [expression_list | "from" expression]
</code></pre>
<p>After recursively going through the related Backus–Naur forms of each sub-expression above, I am still scratching my head over how a simple assignment like <code>a = 9</code> fits this grammar. Specifically, how does the <code>9</code> on the RHS of the statement fall under <code>yield_expression</code> or <code>starred_expression</code>?</p>
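<p>Writing out the only derivation I can find, reading the first alternative of <code>starred_expression</code> as a plain <code>expression</code>, it seems to be:</p>

```text
a = 9
assignment_stmt    ->  target_list "=" starred_expression
target_list        ->  target  ->  identifier            ("a")
starred_expression ->  expression                        (first alternative)
expression         ->  ...     ->  atom  ->  literal     ("9")
```

<p>but I am not sure this is the intended reading of the grammar.</p>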
|
<python><bnf>
|
2023-11-30 14:33:03
| 1
| 540
|
novice
|
77,579,157
| 3,070,181
|
How to call an async function from a tkinter button
|
<p>I want to call an asynchronous function from a tkinter button</p>
<pre><code>import asyncio
import time
import tkinter as tk


async def gui():
    root = tk.Tk()
    timer = tk.Button(root, text='Timer', command=lambda: asyncio.run(wait()))
    # timer = tk.Button(root, text='Timer', command=wait())
    timer.pack()
    root.mainloop()


async def sleep():
    await asyncio.sleep(1)


async def wait():
    start = time.time()
    await sleep()
    print(f'Elapsed: {time.time() - start}')


async def main():
    await wait()


asyncio.run(main())
asyncio.run(gui())
</code></pre>
<p>If I use the line</p>
<pre><code> timer = tk.Button(root, text='Timer', command=wait())
</code></pre>
<p>I get the error</p>
<blockquote>
<p>coroutine 'wait' was never awaited</p>
</blockquote>
<p>but if I use the line</p>
<pre><code> timer = tk.Button(root, text='Timer', command=lambda: asyncio.run(wait()))
</code></pre>
<p>I get the error</p>
<blockquote>
<p>asyncio.run() cannot be called from a running event loop</p>
</blockquote>
<p>What's the resolution of this problem?</p>
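<p>To rule tkinter out, I reduced the idea I'm circling around to plain Python: run the asyncio loop in a background thread and submit coroutines to it from an ordinary synchronous callback (the role a tkinter <code>command=</code> would play). This sketch works outside tkinter, though I don't know if blocking on <code>future.result()</code> is acceptable inside <code>mainloop</code>:</p>

```python
import asyncio
import threading

async def wait(delay):
    await asyncio.sleep(delay)
    return "done"

# Run an asyncio event loop forever in a daemon thread.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

def on_button_click():
    # A plain synchronous callback, like a tkinter command=...
    future = asyncio.run_coroutine_threadsafe(wait(0.01), loop)
    return future.result()  # in a real GUI I probably should not block here

result = on_button_click()
print(result)
```

<p>In the real GUI I would presumably need a non-blocking way to consume the result.</p>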
|
<python><tkinter><asynchronous>
|
2023-11-30 14:13:41
| 1
| 3,841
|
Psionman
|
77,579,104
| 12,415,855
|
Selenium using proxy when creating driver?
|
<p>For a long time I used the following code to create a Selenium driver for Chrome:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.proxy import Proxy, ProxyType
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
HOSTNAME = "109.175.226.252"
PORT = "12345"
prox = Proxy()
prox.proxy_type = ProxyType.MANUAL
prox.http_proxy = prox.ssl_proxy = f"{HOSTNAME}:{PORT}"
capabilities = webdriver.DesiredCapabilities.CHROME
prox.add_to_capabilities(capabilities)
options = Options()
srv = Service()

driver = webdriver.Chrome(service=srv, options=options, desired_capabilities=capabilities)
</code></pre>
<p>But recently I get the following error when running the code:</p>
<pre><code>Traceback (most recent call last):
File "C:\DEV\Fiverr\ORDER\jonpetrich\collSuperATV.py", line 97, in <module>
prox.add_to_capabilities(capabilities)
AttributeError: 'Proxy' object has no attribute 'add_to_capabilities'
</code></pre>
<p>How can I use my proxies with Selenium?</p>
|
<python><selenium-webdriver><proxy>
|
2023-11-30 14:06:24
| 1
| 1,515
|
Rapid1898
|
77,578,921
| 6,292,430
|
Divide data into clusters by a linear function
|
<p>I have a number of rows, which form three noticeable lines on a graph.</p>
<p><a href="https://i.sstatic.net/jhSrj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jhSrj.png" alt="enter image description here" /></a></p>
<p>Sample data</p>
<pre><code>line_position,queue_number,real_seq
0,2280,41171
55,3375,24999
55,733,11506
45,3939,29185
80,1522,14121
70,1022,10953
15,4687,24235
55,2072,14898
55,1755,12913
75,2014,17938
50,2178,14281
5,5612,36370
0,5689,38861
5,8023,40942
65,2777,21954
15,7384,39900
30,5241,35130
40,3554,19147
20,6663,37397
5,5134,28694
5,5273,32029
65,514,12791
10,7560,39851
25,6450,36909
50,1130,27140
20,4430,23025
0,5685,37094
0,5949,40905
20,6842,37547
5,5278,31231
15,7367,39031
40,4340,31534
35,3680,19437
5,5236,30761
5,2104,29053
0,5947,40685
45,3128,17475
40,4386,31495
50,3922,31394
15,7307,38805
55,3403,26704
70,2604,20509
5,5574,34118
55,733,11668
20,6663,37223
25,6430,37171
55,1815,12632
60,3094,23472
30,5798,36262
30,5293,34687
20,6554,37454
35,4767,34735
40,4411,31716
30,5427,35581
40,3350,18316
50,1075,14794
85,948,13668
80,1601,16079
5,4868,26220
20,6554,37075
5,2100,33351
75,666,5799
50,980,15290
95,387,7418
30,1715,20606
15,1980,25981
35,4759,30730
20,4603,24254
5,5059,28033
5,5257,32243
45,1308,16861
0,5849,38680
85,414,6927
0,2148,35148
70,2551,21015
35,4581,32535
80,561,6001
0,5672,35715
5,5152,33120
35,4984,34437
55,3574,27528
35,3762,19995
30,5798,39146
0,5911,40312
85,387,5917
35,4581,35933
55,754,11654
40,3610,25147
0,2252,39270
5,2042,34883
0,6032,41330
80,1826,20158
30,4075,21742
10,7517,40283
45,3029,19383
30,4933,32675
40,1479,21945
10,4826,25687
25,6380,37256
75,364,8215
</code></pre>
<p>I need to divide these rows into three clusters.
I've tried several clustering algorithms (AgglomerativeClustering, Birch, DBSCAN, KMeans, MiniBatchKMeans, MeanShift) from <code>sklearn.cluster</code>, but, as expected, they do not divide my data the way I need.</p>
<p>Looking at the graph, the simplest approach seems to be to "draw" two lines that would split my data into three clusters.</p>
<p>However, I did not find any ready-made tool that would let me do that.
How can I achieve this? Is there a better way to divide my data into three clusters?</p>
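<p>To make "drawing two lines" concrete, here is a sketch of the manual split I have in mind, with two made-up boundary lines of the form <code>real_seq = m * queue_number + b</code> (the coefficients below are invented for illustration; I would have to tune them by eye from the plot):</p>

```python
import numpy as np

def cluster_by_lines(x, y, lines):
    """Assign each point the index of the first boundary line it lies below.

    lines: list of (slope, intercept) pairs, ordered from lowest to highest
    boundary. Points above every line get the last cluster index.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    labels = np.full(x.shape, len(lines), dtype=int)
    # Walk the boundaries from highest to lowest, overwriting labels.
    for i, (m, b) in enumerate(reversed(lines)):
        labels[y < m * x + b] = len(lines) - 1 - i
    return labels

# Invented boundaries, for illustration only.
labels = cluster_by_lines(
    x=[1000, 1000, 1000],
    y=[2000, 6000, 12000],
    lines=[(4.0, 0.0), (9.0, 0.0)],  # y = 4x and y = 9x
)
print(labels)
```

<p>This hard-codes the lines, though; I would prefer something that fits them from the data.</p>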
|
<python><pandas><scikit-learn><cluster-analysis>
|
2023-11-30 13:40:55
| 2
| 873
|
Pavel Botsman
|
77,578,898
| 3,745,149
|
My Pytorch multi-GPU prediction code freezes at `spawn()`
|
<p>The following small code does multi-GPU prediction using Pytorch.</p>
<p>I launch multiple <code>task</code>s using <code>torch.multiprocessing.spawn()</code>.</p>
<p>Inside <code>task</code> I put no real prediction code; I just put something into a common <code>Queue</code> to collect prediction results.</p>
<p>This code prints <code>{rank} destroyed</code> as expected; all four processes print this message.</p>
<p>It then gets stuck: the message <code>'Spawn end'</code> never shows up, which means <code>torch.multiprocessing.spawn()</code> never returns.</p>
<p>I wrapped the body of <code>task</code> in <code>try</code>, but no exception is thrown.</p>
<p>Please help me find this bug.</p>
<pre><code>from typing import List
import torch
import torch.multiprocessing
import torch.distributed
import os
from torch.utils.data.dataloader import DataLoader
from torch.utils.data.distributed import DistributedSampler
from distributed_predictor import DistributedDataset
world_size = 4
batch_size = 16
data = [i for i in range(10000)]
class DistributedDataset(torch.utils.data.Dataset):
def __init__(self, list:List):
super(DistributedDataset, self).__init__()
self.data = list
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return len(self.data)
dataset = DistributedDataset(list=data)
def task(rank:int, result_queue:torch.multiprocessing.Queue):
try:
torch.distributed.init_process_group('nccl', rank=rank, world_size=world_size)
torch.cuda.set_device(device=rank)
sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
dataloader = DataLoader(dataset,
batch_size=batch_size,
pin_memory=True,
sampler=sampler,
shuffle=False)
process_result = []
for batch in dataloader:
process_result.extend(batch)
result_queue.put(process_result)
torch.distributed.barrier()
torch.distributed.destroy_process_group()
print(f'{rank} destroyed')
except:
print('error')
if __name__ == '__main__':
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12356'
torch.multiprocessing.set_start_method('spawn', force=True)
result_queue = torch.multiprocessing.Queue()
torch.multiprocessing.spawn(task, args=(result_queue,), nprocs=world_size)
print(f'Spawn end')
results = []
for _ in range(world_size):
results.append(result_queue.get())
</code></pre>
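<p>To check whether this is generic to multiprocessing rather than torch-specific, I reproduced the pattern with the standard library. Draining the queue in the parent <em>before</em> joining works; I suspect my torch version deadlocks because <code>spawn()</code> joins the workers while their queued results have not been consumed yet (a child does not exit until its queue data is flushed):</p>

```python
import multiprocessing as mp

def task(rank, result_queue):
    # Stand-in for per-rank prediction: put a sizeable payload on the queue.
    result_queue.put([rank] * 10_000)

def run_demo(world_size=4):
    # fork context so this sketch stays self-contained on Linux.
    ctx = mp.get_context("fork")
    queue = ctx.Queue()
    procs = [ctx.Process(target=task, args=(r, queue)) for r in range(world_size)]
    for p in procs:
        p.start()
    # Drain BEFORE joining: join-before-get can deadlock.
    results = [queue.get() for _ in range(world_size)]
    for p in procs:
        p.join()
    return sorted(r[0] for r in results)

if __name__ == "__main__":
    print(run_demo())
```

<p>If this diagnosis is right, I still don't see where to put the <code>get()</code> calls before the implicit join performed by <code>torch.multiprocessing.spawn()</code>.</p>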
|
<python><pytorch><concurrency><multi-gpu>
|
2023-11-30 13:37:56
| 0
| 770
|
landings
|
77,578,843
| 11,426,624
|
iterate over all elements of a list of tuples with lists
|
<p>I have a list of tuples (each containing two lists) and would like to check how many elements of the first list in each tuple are in the second list of that tuple.</p>
<pre><code>names =[
([''],['aa']),
(['aa', 'bb'],['aa']),
(['cc'],['cc', 'dd', 'yy']),
(['xx', 'ss'],['xx', 'ss']),
]
</code></pre>
<p>so for the above I would like to get a list</p>
<pre><code>[0, 1, 1, 2]
</code></pre>
<p>because '' is not in ['aa'], 'aa' is in ['aa'], 'cc' is in ['cc', 'dd', 'yy'], and both 'xx' and 'ss' are in ['xx', 'ss'].
Is there a way to do this without for loops?</p>
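<p>The closest I have come is a comprehension, which I suppose still loops under the hood:</p>

```python
names = [
    ([''], ['aa']),
    (['aa', 'bb'], ['aa']),
    (['cc'], ['cc', 'dd', 'yy']),
    (['xx', 'ss'], ['xx', 'ss']),
]

# Count how many elements of the first list appear in the second list.
counts = [sum(item in second for item in first) for first, second in names]
print(counts)  # [0, 1, 1, 2]
```

<p>so I would be interested in a fully vectorized alternative, if one exists.</p>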
|
<python><list><loops><list-comprehension>
|
2023-11-30 13:28:28
| 2
| 734
|
corianne1234
|
77,578,724
| 6,681,932
|
Conformal prediction intervals insample data nixtla
|
<p>Going through the <code>nixtla</code> documentation, I don't find any way to compute prediction intervals for in-sample predictions (training data), only for future predictions.</p>
<p>Here is an example of what I can achieve, but only for future predictions:</p>
<pre><code>from statsforecast import StatsForecast
from statsforecast.models import SeasonalExponentialSmoothing, ADIDA, ARIMA
from statsforecast.utils import ConformalIntervals
# Create a list of models and instantiation parameters
intervals = ConformalIntervals(h=24, n_windows=2)
models = [
SeasonalExponentialSmoothing(season_length=24,alpha=0.1, prediction_intervals=intervals),
ADIDA(prediction_intervals=intervals),
ARIMA(order=(24,0,12), season_length=24, prediction_intervals=intervals),
]
sf = StatsForecast(
df=train,
models=models,
freq='H',
)
levels = [80, 90] # confidence levels of the prediction intervals
forecasts = sf.forecast(h=24, level=levels)
forecasts = forecasts.reset_index()
forecasts.head()
</code></pre>
<p>So my goal would be to do something like:</p>
<pre><code> forecasts = sf.forecast(df_x, level=levels)
</code></pre>
<p>so that we can get prediction intervals on the training set.</p>
|
<python><intervals><prediction><forecasting>
|
2023-11-30 13:12:44
| 2
| 478
|
PeCaDe
|
77,578,626
| 1,574,054
|
Can sympy arrays be properly sliced after numerical substitutions?
|
<p>Consider this example for sympy:</p>
<pre><code>import numpy
from sympy import Array, IndexedBase, symbols
k = symbols('k', positive=True)
M = IndexedBase('M')
M_vals = numpy.array((
(0, 1, 2),
(3, 4, 5),
(6, 7, 8)
))
M[k, 0].subs(M, Array(M_vals)).doit()
</code></pre>
<p>which prints</p>
<pre><code>[[0, 1, 2], [3, 4, 5], [6, 7, 8]][k, 0]
</code></pre>
<p>in a terminal. This indicates that the index <code>0</code> is not understood. Otherwise, sympy would have been able to slice out the correct column (the first one) from the matrix resulting in a vector. Is there a way to enable this simplification?</p>
|
<python><numpy><sympy><symbolic-math>
|
2023-11-30 12:59:15
| 0
| 4,589
|
HerpDerpington
|
77,578,604
| 264,136
|
difference between 2 string holding Cisco device's config
|
<p>I have 2 variables, config1 and config2, holding the running config of a Cisco device at 2 different points in time.</p>
<p>Sample running config:</p>
<pre><code>version 12.3
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname retail
!
boot-start-marker
boot-end-marker
!
enable password cisco123
!
username jsomeone password 0 cg6#107X
aaa new-model
!
aaa group server radius rad_eap
server 10.0.1.1 auth-port 1812 acct-port 1813
!
aaa authentication login eap_methods group rad_eap
aaa session-id common
ip subnet-zero
ip cef
!
vpdn enable
vpdn-group 1
request-dialin
protocol pppoe
!
interface dialer 1
ip address negotiated
ppp authentication chap
dialer pool 1
dialer-group 1
!
dialer-list 1 protocol ip permit
ip nat inside source list 1 interface dialer 0 overload
ip classless (default)
ip route 10.10.25.2 0.255.255.255 dialer 0
!
ip dhcp excluded-address 10.0.1.1 10.0.1.10
ip dhcp excluded-address 10.0.2.1 10.0.2.10
ip dhcp excluded-address 10.0.3.1 10.0.3.10
!
ip dhcp pool vlan1
network 10.0.1.0 255.255.255.0
default-router 10.0.1.1
!
</code></pre>
<p>config1 holds the default config and config2 holds the config after doing a test. Ultimately, config2 should be the same as config1.</p>
<p>What I need is a way to find a "diff" between config2 and config1 (config2-config1). Something that I get from websites like <a href="https://text-compare.com/" rel="nofollow noreferrer">https://text-compare.com/</a> which shows side-by-side comparison.</p>
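<p>For reference, the closest I have come with the standard library is <code>difflib</code> on the two config strings (shortened here); I am not sure a unified diff is the best presentation for config comparison:</p>

```python
import difflib

# Shortened sample configs for illustration.
config1 = "hostname retail\nenable password cisco123\nip subnet-zero\nip cef\n"
config2 = "hostname retail\nenable password newpass456\nip subnet-zero\nip cef\nip routing\n"

# Lines removed from config1 are prefixed '-', lines added in config2 '+'.
diff_lines = list(difflib.unified_diff(
    config1.splitlines(), config2.splitlines(),
    fromfile="config1", tofile="config2", lineterm="",
))
print("\n".join(diff_lines))
```

<p><code>difflib.HtmlDiff().make_file(config1.splitlines(), config2.splitlines())</code> apparently renders a side-by-side HTML table, which looks closer to what text-compare.com shows, but I have not verified how readable it is for long configs.</p>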
|
<python><cisco><cisco-ios>
|
2023-11-30 12:56:04
| 1
| 5,538
|
Akshay J
|
77,578,454
| 3,416,774
|
TypeError: write() argument must be str, not Tag
|
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>html = open("meetup.html", "r")
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
file = open('myfile.txt', 'w')
for elem in soup.find_all(attrs={"type": "application/ld+json"}):
file.write(elem)
file.close()
</code></pre>
<p>I get this error:</p>
<pre><code>TypeError: write() argument must be str, not Tag
</code></pre>
<p>I get that I need to convert the tag object to a string, but I don't know how to do this. Trying</p>
<pre class="lang-py prettyprint-override"><code>file.write(elem.text)
</code></pre>
<p>return an empty file. I don't know where that is instructed in <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#calling-a-tag-is-like-calling-find-all" rel="nofollow noreferrer" title="Beautiful Soup Documentation — Beautiful Soup 4.12.0 documentation">Beautiful Soup Documentation</a>.</p>
|
<python><beautifulsoup>
|
2023-11-30 12:31:55
| 2
| 3,394
|
Ooker
|
77,578,360
| 11,748,924
|
getElementById between two cells google colaboratory
|
<p>I have this cell:</p>
<h1>Cell 1</h1>
<pre><code>from IPython.display import display, HTML
display(HTML(
'''
<img src="value_to_replace" id="yeah">
'''
))
</code></pre>
<p>And this</p>
<h1>Cell 2</h1>
<pre><code>%%javascript
const yeahElem = document.getElementById("yeah");
yeahElem.src = `data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAAApgAAAKYB3X3/OAAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAANCSURBVEiJtZZPbBtFFMZ/M7ubXdtdb1xSFyeilBapySVU8h8OoFaooFSqiihIVIpQBKci6KEg9Q6H9kovIHoCIVQJJCKE1ENFjnAgcaSGC6rEnxBwA04Tx43t2FnvDAfjkNibxgHxnWb2e/u992bee7tCa00YFsffekFY+nUzFtjW0LrvjRXrCDIAaPLlW0nHL0SsZtVoaF98mLrx3pdhOqLtYPHChahZcYYO7KvPFxvRl5XPp1sN3adWiD1ZAqD6XYK1b/dvE5IWryTt2udLFedwc1+9kLp+vbbpoDh+6TklxBeAi9TL0taeWpdmZzQDry0AcO+jQ12RyohqqoYoo8RDwJrU+qXkjWtfi8Xxt58BdQuwQs9qC/afLwCw8tnQbqYAPsgxE1S6F3EAIXux2oQFKm0ihMsOF71dHYx+f3NND68ghCu1YIoePPQN1pGRABkJ6Bus96CutRZMydTl+TvuiRW1m3n0eDl0vRPcEysqdXn+jsQPsrHMquGeXEaY4Yk4wxWcY5V/9scqOMOVUFthatyTy8QyqwZ+kDURKoMWxNKr2EeqVKcTNOajqKoBgOE28U4tdQl5p5bwCw7BWquaZSzAPlwjlithJtp3pTImSqQRrb2Z8PHGigD4RZuNX6JYj6wj7O4TFLbCO/Mn/m8R+h6rYSUb3ekokRY6f/YukArN979jcW+V/S8g0eT/N3VN3kTqWbQ428m9/8k0P/1aIhF36PccEl6EhOcAUCrXKZXXWS3XKd2vc/TRBG9O5ELC17MmWubD2nKhUKZa26Ba2+D3P+4/MNCFwg59oWVeYhkzgN/JDR8deKBoD7Y+ljEjGZ0sosXVTvbc6RHirr2reNy1OXd6pJsQ+gqjk8VWFYmHrwBzW/n+uMPFiRwHB2I7ih8ciHFxIkd/3Omk5tCDV1t+2nNu5sxxpDFNx+huNhVT3/zMDz8usXC3ddaHBj1GHj/As08fwTS7Kt1HBTmyN29vdwAw+/wbwLVOJ3uAD1wi/dUH7Qei66PfyuRj4Ik9is+hglfbkbfR3cnZm7chlUWLdwmprtCohX4HUtlOcQjLYCu+fzGJH2QRKvP3UNz8bWk1qMxjGTOMThZ3kvgLI5AzFfo379UAAAAASUVORK5CYII=`;
</code></pre>
<p>However, the cell 1 output didn't update the image. Debugging in the browser console shows that the <code>yeahElem</code> variable is null:</p>
<pre><code>Error evaluating Javascript output: TypeError: yeahElem is null
</code></pre>
<p>By the way, the image is supposed to be <a href="https://i.sstatic.net/3X0VR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3X0VR.png" alt="enter image description here" /></a>. I also tried running the cell 2 code manually in the browser console (I'm using Firefox).</p>
<p>It works if I put all the code in the same cell, so it seems the outputs of two separate cells don't share a DOM:</p>
<pre><code>%%html
<img src="value_to_replace" id="yeah">
<script>
const yeahElem = document.getElementById("yeah");
yeahElem.src = `data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAAApgAAAKYB3X3/OAAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAANCSURBVEiJtZZPbBtFFMZ/M7ubXdtdb1xSFyeilBapySVU8h8OoFaooFSqiihIVIpQBKci6KEg9Q6H9kovIHoCIVQJJCKE1ENFjnAgcaSGC6rEnxBwA04Tx43t2FnvDAfjkNibxgHxnWb2e/u992bee7tCa00YFsffekFY+nUzFtjW0LrvjRXrCDIAaPLlW0nHL0SsZtVoaF98mLrx3pdhOqLtYPHChahZcYYO7KvPFxvRl5XPp1sN3adWiD1ZAqD6XYK1b/dvE5IWryTt2udLFedwc1+9kLp+vbbpoDh+6TklxBeAi9TL0taeWpdmZzQDry0AcO+jQ12RyohqqoYoo8RDwJrU+qXkjWtfi8Xxt58BdQuwQs9qC/afLwCw8tnQbqYAPsgxE1S6F3EAIXux2oQFKm0ihMsOF71dHYx+f3NND68ghCu1YIoePPQN1pGRABkJ6Bus96CutRZMydTl+TvuiRW1m3n0eDl0vRPcEysqdXn+jsQPsrHMquGeXEaY4Yk4wxWcY5V/9scqOMOVUFthatyTy8QyqwZ+kDURKoMWxNKr2EeqVKcTNOajqKoBgOE28U4tdQl5p5bwCw7BWquaZSzAPlwjlithJtp3pTImSqQRrb2Z8PHGigD4RZuNX6JYj6wj7O4TFLbCO/Mn/m8R+h6rYSUb3ekokRY6f/YukArN979jcW+V/S8g0eT/N3VN3kTqWbQ428m9/8k0P/1aIhF36PccEl6EhOcAUCrXKZXXWS3XKd2vc/TRBG9O5ELC17MmWubD2nKhUKZa26Ba2+D3P+4/MNCFwg59oWVeYhkzgN/JDR8deKBoD7Y+ljEjGZ0sosXVTvbc6RHirr2reNy1OXd6pJsQ+gqjk8VWFYmHrwBzW/n+uMPFiRwHB2I7ih8ciHFxIkd/3Omk5tCDV1t+2nNu5sxxpDFNx+huNhVT3/zMDz8usXC3ddaHBj1GHj/As08fwTS7Kt1HBTmyN29vdwAw+/wbwLVOJ3uAD1wi/dUH7Qei66PfyuRj4Ik9is+hglfbkbfR3cnZm7chlUWLdwmprtCohX4HUtlOcQjLYCu+fzGJH2QRKvP3UNz8bWk1qMxjGTOMThZ3kvgLI5AzFfo379UAAAAASUVORK5CYII=`;
</script>
</code></pre>
|
<javascript><python><jupyter-notebook><google-colaboratory>
|
2023-11-30 12:17:41
| 0
| 1,252
|
Muhammad Ikhwan Perwira
|
77,578,241
| 2,890,129
|
How to get Anscombe data wide format from long format
|
<p>I have the following Anscombe data in long format:</p>
<pre><code>import seaborn as sns
# Load the example dataset for Anscombe's quartet
anscombe_long = sns.load_dataset("anscombe")
</code></pre>
<p><code>anscombe_long</code>:</p>
<pre><code> dataset x y
0 I 10.0 8.04
1 I 8.0 6.95
2 I 13.0 7.58
3 I 9.0 8.81
4 I 11.0 8.33
5 I 14.0 9.96
6 I 6.0 7.24
...
...
</code></pre>
<p>I want to convert this into wide format, where the <code>x</code> and <code>y</code> column names carry the dataset number as a suffix.</p>
<p>expected output:</p>
<pre><code>anscombe_wide
#> x1 x2 x3 x4 y1 y2 y3 y4
#> 1 10 10 10 8 8.04 9.14 7.46 6.58
#> 2 8 8 8 8 6.95 8.14 6.77 5.76
#> 3 13 13 13 8 7.58 8.74 12.74 7.71
#> 4 9 9 9 8 8.81 8.77 7.11 8.84
#> 5 11 11 11 8 8.33 9.26 7.81 8.47
#> 6 14 14 14 8 9.96 8.10 8.84 7.04
#> 7 6 6 6 8 7.24 6.13 6.08 5.25
#> 8 4 4 4 19 4.26 3.10 5.39 12.50
#> 9 12 12 12 8 10.84 9.13 8.15 5.56
#> 10 7 7 7 8 4.82 7.26 6.42 7.91
#> 11 5 5 5 8 5.68 4.74 5.73 6.89
</code></pre>
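<p>My current attempt, shown on a tiny hand-typed subset, numbers the rows within each dataset with <code>cumcount</code> and then pivots; the column flattening feels clumsy, so I suspect there is a cleaner way:</p>

```python
import pandas as pd

# Hand-typed subset of the Anscombe data (two rows per dataset), for illustration.
anscombe_long = pd.DataFrame({
    "dataset": ["I", "I", "II", "II"],
    "x": [10.0, 8.0, 10.0, 8.0],
    "y": [8.04, 6.95, 9.14, 8.14],
})

roman = {"I": "1", "II": "2", "III": "3", "IV": "4"}
df = anscombe_long.assign(
    set_no=anscombe_long["dataset"].map(roman),
    row=anscombe_long.groupby("dataset").cumcount(),
)
wide = df.pivot(index="row", columns="set_no", values=["x", "y"])
# Flatten the (value, set_no) MultiIndex into x1, x2, ..., y1, y2, ...
wide.columns = [f"{val}{no}" for val, no in wide.columns]
print(wide)
```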
|
<python><pandas>
|
2023-11-30 11:56:44
| 1
| 439
|
Ramakrishna S
|
77,578,121
| 7,766,158
|
Successful requests but no messages in Azure Service Bus
|
<p>I have an Azure Function that sends messages to an Azure Service Bus queue with the following code:</p>
<pre><code>class ServiceBusSenderClient:
def __init__(self, fully_qualified_namespace, queue_name):
credentials = DefaultAzureCredential()
servicebus_client = ServiceBusClient(fully_qualified_namespace=fully_qualified_namespace, credential=credentials, logging_enable=True)
self.sender = servicebus_client.get_queue_sender(queue_name=queue_name)
async def send_single_message(self, message:str):
message = ServiceBusMessage(message)
await self.sender.send_messages(message)
logging.info("Sent a message to Service Bus")
def close(self):
self.sender.close()
</code></pre>
<p>When I trigger my function, I see incoming requests but no messages coming in my Service Bus Queue.</p>
<p><a href="https://i.sstatic.net/Lwtp0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lwtp0.png" alt="enter image description here" /></a></p>
<p>There are no errors from the Azure Function, and all requests are successful on Service Bus, so I am a bit lost. What am I doing wrong here?</p>
|
<python><azure><azure-functions><azureservicebus>
|
2023-11-30 11:36:18
| 1
| 1,931
|
Nakeuh
|
77,578,029
| 11,426,624
|
check if string contains another column value
|
<p>I have a dataframe and would like to check, for each row, whether one column's value contains the other column's value.</p>
<pre><code> name1 name2
0 aa aab
1 xyz x
</code></pre>
<p>The below doesn't work:</p>
<pre><code>df = df.assign(name1_contains_name2=df.name1.str.contains(df.name2),
name2_contains_name1=df.name2.str.contains(df.name1))
</code></pre>
<p>but I would like to get the dataframe below:</p>
<pre><code> name1 name2 name1_contains_name2 name2_contains_name1
0 aa aab False True
1 xyz x True False
</code></pre>
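<p>The closest I have managed is a row-wise <code>apply</code>, which I suspect is slow on a large frame:</p>

```python
import pandas as pd

df = pd.DataFrame({"name1": ["aa", "xyz"], "name2": ["aab", "x"]})

df = df.assign(
    # str.contains compares against a scalar pattern, so go row-wise instead.
    name1_contains_name2=df.apply(lambda r: r["name2"] in r["name1"], axis=1),
    name2_contains_name1=df.apply(lambda r: r["name1"] in r["name2"], axis=1),
)
print(df)
```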
<p>How can I write it?</p>
|
<python><pandas><string><dataframe><contains>
|
2023-11-30 11:24:09
| 3
| 734
|
corianne1234
|
77,577,962
| 2,101,808
|
Python multiprcessing on Windows with dynamic types
|
<pre><code>import multiprocessing
class Handler:
pass
handler = type('HandlerClass', (int, Handler),{"const": [123]})
def stream_process(H):
h=H()
print(h.const)
processes = [ multiprocessing.Process(target=stream_process, args=( handler,), name=f"server #{i}") for i in range(4) ]
for process in processes:
process.start()
for process in processes:
process.join()
</code></pre>
<p>raises this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Projects\winmp\test.py", line 18, in <module>
process.start()
File "C:\Programs\Python\Python312\Lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
^^^^^^^^^^^^^^^^^
File "C:\Programs\Python\Python312\Lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\Python\Python312\Lib\multiprocessing\context.py", line 337, in _Popen
return Popen(process_obj)
^^^^^^^^^^^^^^^^^^
File "C:\Programs\Python\Python312\Lib\multiprocessing\popen_spawn_win32.py", line 94, in __init__
reduction.dump(process_obj, to_child)
File "C:\Programs\Python\Python312\Lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class '__main__.HandlerClass'>: attribute lookup HandlerClass on __main__ failed
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Programs\Python\Python312\Lib\multiprocessing\spawn.py", line 108, in spawn_main
source_process = _winapi.OpenProcess(
^^^^^^^^^^^^^^^^^^^^
OSError: [WinError 87] The parameter is incorrect
</code></pre>
<p>This code also raises an error:</p>
<pre><code>import multiprocessing
import sys
class Handler:
pass
def stream_process(H):
h = H()
print(h.from_bytes(b'a', "big"))
print(h.const)
def main():
global HandlerClass
HandlerClass = type('HandlerClass', (int, Handler),{"const": [123]})
setattr(sys.modules[__name__], "HandlerClass",HandlerClass )
setattr(sys.modules['test'], "HandlerClass",HandlerClass )
processes = [ multiprocessing.Process(target=stream_process, args=( HandlerClass,), name=f"server #{i}") for i in range(4) ]
for process in processes:
process.start()
for process in processes:
process.join()
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Programs\Python\Python312\Lib\multiprocessing\spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\Python\Python312\Lib\multiprocessing\spawn.py", line 132, in _main
self = reduction.pickle.load(from_parent)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: Can't get attribute 'HandlerClass' on <module 'test' from 'C:\\Projects\\winmp\\test.py'>
</code></pre>
<p>I need to construct the type in the main process but instantiate it in the subprocesses. How can I solve this error?</p>
<p>On Linux and WSL it works.</p>
<p>The following works, but the class is constructed 5 times. Building <code>{"const": [123]}</code> is very slow in my real code, although it is constant, so I don't want to construct it in every process.</p>
<pre><code>import multiprocessing
import sys
class Handler:
pass
def stream_process(H):
h = H()
print(h.from_bytes(b'a', "big"))
print(h.const)
HandlerClass = type('HandlerClass', (int, Handler),{"const": [123]})
print('HandlerClass constructed here', HandlerClass)
if __name__ == "__main__":
processes = [ multiprocessing.Process(target=stream_process, args=( HandlerClass,), name=f"server #{i}") for i in range(4) ]
for process in processes:
process.start()
for process in processes:
process.join()
</code></pre>
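<p>One workaround I am experimenting with: since <code>type()</code> itself seems cheap and only the attribute dict is expensive to <em>compute</em>, I build the dict once in the parent and rebuild the class inside each child (stdlib-only sketch with a fork context; on Windows the same pattern would go through pickle, which a plain dict supports):</p>

```python
import multiprocessing as mp

class Handler:
    pass

def make_handler_class(attrs):
    # Rebuild the dynamic class inside the child; only the attribute dict
    # (expensive to compute, cheap to pickle) crosses the process boundary.
    return type("HandlerClass", (int, Handler), attrs)

def stream_process(attrs, q):
    H = make_handler_class(attrs)
    q.put(H().const)

def run_demo():
    attrs = {"const": [123]}  # computed once, in the parent
    ctx = mp.get_context("fork")
    q = ctx.Queue()
    procs = [ctx.Process(target=stream_process, args=(attrs, q)) for _ in range(2)]
    for p in procs:
        p.start()
    results = [q.get() for _ in range(2)]
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(run_demo())
```

<p>This sidesteps pickling the class object itself, but I would still like to know whether the class can be shared directly.</p>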
|
<python><python-3.x><windows><multiprocessing>
|
2023-11-30 11:14:20
| 1
| 3,614
|
eri
|
77,577,929
| 10,178,508
|
How to handle cloudflare turnstile on form recaptcha with selenium / seleniumbase?
|
<p>I'm trying to log in to this website: <a href="https://visa.vfsglobal.com/are/en/fra/login" rel="nofollow noreferrer">https://visa.vfsglobal.com/are/en/fra/login</a> with <code>selenium</code>, but I cannot bypass the Cloudflare Turnstile CAPTCHA.</p>
<p>My script uses <a href="https://github.com/seleniumbase/SeleniumBase" rel="nofollow noreferrer">seleniumbase</a> with undetectable options:</p>
<pre><code>with SB(uc=True, uc_cdp_events=True, undetectable=True, undetected=True) as sb:
sb.open("https://visa.vfsglobal.com/are/en/fra/login")
</code></pre>
<p>After running the script, even if I click the checkbox manually, it fails:</p>
<p><a href="https://i.sstatic.net/UFZXx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UFZXx.png" alt="enter image description here" /></a></p>
<p>How can I fix this?</p>
|
<python><selenium-webdriver><undetected-chromedriver><seleniumbase>
|
2023-11-30 11:09:21
| 1
| 598
|
Ismayil Ibrahimov
|
77,577,897
| 1,383,029
|
Dropbox API (Python) - How to get access to Team and User files
|
<p>I have a Dropbox Business user which is a member of a team. Through the API, I would like to see the same filesystem I get when logging in on dropbox.com: both member and team files and folders.</p>
<p>So I'm creating a dropbox authentication code with:</p>
<pre><code>dropbox.DropboxOAuth2FlowNoRedirect(
settings.DROPBOX_APP_KEY,
settings.DROPBOX_APP_SECRET,
token_access_type="offline",
scope=[
"account_info.read",
"file_requests.read",
"file_requests.write",
"files.content.read",
"files.content.write",
"files.metadata.read",
"files.metadata.write",
"files.team_metadata.read",
"files.team_metadata.write",
"sharing.read sharing.write",
"team_data.content.read",
"team_data.content.write",
"team_data.governance.read",
"team_data.governance.write",
"team_data.member",
"team_data.team_space",
],
)
</code></pre>
<p>Now I create a Dropbox client and call <code>dbx.file_list_folders()</code>. I get this error:</p>
<pre><code>This API function operates on a single Dropbox account, but the OAuth 2 access token you provided is for an entire Dropbox Business team. Since your API app key has team member file access permissions, you can operate on a team member\'s Dropbox by providing the "Dropbox-API-Select-User" HTTP header or "select_user" URL parameter to specify the exact user <https://www.dropbox.com/developers/documentation/http/teams>.')
</code></pre>
<p>I can do something like this:</p>
<pre><code>dbx_team = dropbox.DropboxTeam(ACCESS_TOKEN)
dbx = dbx_team.as_user("YOUR_TEAM_MEMBER_ID")
</code></pre>
<p>But then I need to know <code>YOUR_TEAM_MEMBER_ID</code>, and I haven't found a way to get it from the API. Or is there one?</p>
<p>I just want access to my own and the team's files and folders, like when I log in to dropbox.com in a browser. How do I manage that with the Python Dropbox API?</p>
|
<python><dropbox><dropbox-api>
|
2023-11-30 11:03:23
| 1
| 2,155
|
user1383029
|
77,577,864
| 2,641,038
|
Issue with Hierarchical Lucas-Kanade method on Optical Flow
|
<p>I am implementing the hierarchical Lucas-Kanade method in Python based on
<a href="http://robots.stanford.edu/cs223b04/algo_tracking.pdf" rel="nofollow noreferrer">this tutorial</a>. However, when
applying the method to a rotating sphere, I am encountering unexpected results.</p>
<p><a href="https://i.sstatic.net/M0urG.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M0urG.gif" alt="enter image description here" /></a></p>
<ul>
<li>The data can be found
<a href="https://cseweb.ucsd.edu/classes/wi07/cse252a/hw4/data/sphere.zip" rel="nofollow noreferrer">here</a>.</li>
</ul>
<h3>Algorithm explained</h3>
<p>The overall structure of the algorithm is explained below. Notice that the equations in
<a href="http://robots.stanford.edu/cs223b04/algo_tracking.pdf" rel="nofollow noreferrer">this tutorial</a> are using the
convention where the top-left corner is <code>(0, 0)</code> and the bottom-right corner is
<code>(width-1, height-1)</code>, while the implementation provided here swaps the x and y axes. In
other words, the coordinate for the top-left corner is <code>(0, 0)</code> and the bottom-right
corner is <code>(height-1, width-1)</code>.</p>
<h4>Basic Lucas-Kanade</h4>
<p>Incorporating equations 19, 20, 23, 29, and 28, the basic Lucas-Kanade method is
implemented as follows:</p>
<pre class="lang-py prettyprint-override"><code>def lucas_kanade(img1, img2):
    img1 = np.copy(img1).astype(np.float32)
    img2 = np.copy(img2).astype(np.float32)
    # Change the window size based on the image's size due to downsampling.
    window_size = min(max(3, min(img1.shape[:2]) / 6), 31)
    window_size = int(2 * (window_size // 2) + 1)
    print("window size: ", window_size)
    # Compute image gradients
    Ix = np.zeros(img1.shape, dtype=np.float32)
    Iy = np.zeros(img1.shape, dtype=np.float32)
    Ix[1:-1, 1:-1] = (img1[1:-1, 2:] - img1[1:-1, :-2]) / 2  # pixels on boundary are 0.
    Iy[1:-1, 1:-1] = (img1[2:, 1:-1] - img1[:-2, 1:-1]) / 2
    # Compute temporal gradient
    It = np.zeros(img1.shape, dtype=np.float32)
    It = img1 - img2
    # Define a (window_size, window_size) kernel for the convolution
    kernel = np.ones((window_size, window_size), dtype=np.float32)
    # kernel = create_gaussian_kernel(window_size, sigma=1)
    # Use convolution to calculate the sum of the window for each pixel
    Ix2 = convolve2d(Ix**2, kernel, mode="same", boundary="fill", fillvalue=0)
    Iy2 = convolve2d(Iy**2, kernel, mode="same", boundary="fill", fillvalue=0)
    Ixy = convolve2d(Ix * Iy, kernel, mode="same", boundary="fill", fillvalue=0)
    Ixt = convolve2d(Ix * It, kernel, mode="same", boundary="fill", fillvalue=0)
    Iyt = convolve2d(Iy * It, kernel, mode="same", boundary="fill", fillvalue=0)
    # Compute optical flow parameters
    det = Ix2 * Iy2 - Ixy**2
    # Avoid division by zero
    u = np.where((det > 1e-6), (Iy2 * Ixt - Ixy * Iyt) / det, 0)
    v = np.where((det > 1e-6), (Ix2 * Iyt - Ixy * Ixt) / det, 0)
    optical_flow = np.stack((u, v), axis=2)
    return optical_flow.astype(np.float32)
</code></pre>
<h4>Generate Gaussian Pyramid</h4>
<p>The Gaussian pyramid is generated as follow.</p>
<pre class="lang-py prettyprint-override"><code>def gen_gaussian_pyramid(im, max_level):
    # Return `max_level+1` arrays.
    gauss_pyr = [im]
    for i in range(max_level):
        gauss_pyr.append(cv2.pyrDown(gauss_pyr[-1]))
    return gauss_pyr
</code></pre>
<h4>Upsample the flow</h4>
<p>The processing is conducted from the coarsest image to the finest image in the pyramid.
Thus, we also need to upsample the flow.</p>
<pre class="lang-py prettyprint-override"><code>def expand(img, dst_size, interpolation=None):
    # Increase dimension.
    height, width = dst_size[:2]
    return cv2.GaussianBlur(
        cv2.resize(  # dim: (width, height)
            img, (width, height), interpolation=interpolation or cv2.INTER_LINEAR
        ),
        (5, 5),
        0,
    )
</code></pre>
<h4>Warp the image by the flow in previous level</h4>
<p>In equation 12, the right image needs to be shifted by the displacement computed in
the previous iteration. I chose the <code>cv2.remap</code> function to warp the left image so that it is
aligned with the right image.</p>
<pre><code>def remap(a, flow):
    height, width = flow.shape[:2]
    # Create a grid of coordinates using np.meshgrid
    y, x = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    # Create flow_map by adding the flow vectors
    flow_map = np.column_stack(  # NOTE: minus sign on flow
        (x.flatten() + -flow[:, :, 0].flatten(), y.flatten() + -flow[:, :, 1].flatten())
    )
    # Reshape flow_map to match the original image dimensions
    flow_map = flow_map.reshape((height, width, 2))
    # Ensure flow_map values are within the valid range
    flow_map[:, :, 0] = np.clip(flow_map[:, :, 0], 0, width - 1)
    flow_map[:, :, 1] = np.clip(flow_map[:, :, 1], 0, height - 1)
    # Convert flow_map to float32
    flow_map = flow_map.astype(np.float32)
    # Use cv2.remap for remapping
    warped = cv2.remap(a, flow_map, None, cv2.INTER_LINEAR)
    return warped
</code></pre>
<h4>Putting it all together</h4>
<p>After defining all the basics, we can put them together. Here, <code>g_L</code> and <code>d_L</code> are
the variables from equation 7.</p>
<pre class="lang-py prettyprint-override"><code>def hierarchical_lucas_kanade(im1, im2, max_level):  # max_level = 4
    gauss_pyr_1 = gen_gaussian_pyramid(im1, max_level)  # from finest to coarsest
    gauss_pyr_2 = gen_gaussian_pyramid(im2, max_level)  # from finest to coarsest
    g_L = [0 for _ in range(max_level + 1)]  # Every slot will be (h, w, 2) array.
    d_L = [0 for _ in range(max_level + 1)]  # Every slot will be (h, w, 2) array.
    assert len(g_L) == 5  # 4 + 1 (base)
    # Initialize g_L[max_level] as (h, w, 2) zeros array
    g_L[max_level] = np.zeros(gauss_pyr_1[-1].shape[:2] + (2,)).astype(np.float32)
    for level in range(max_level, -1, -1):  # 4, 3, 2, 1, 0
        # Warp image 1 by previous flow.
        warped = remap(gauss_pyr_1[level], g_L[level])
        # Run Lucas-Kanade on warped image and right image.
        d_L[level] = lucas_kanade(warped, gauss_pyr_2[level])
        # Expand/Upsample the flow so that the dimension can match the finer result.
        g_L[level - 1] = 2.0 * expand(
            g_L[level] + d_L[level],
            gauss_pyr_2[level - 1].shape[:2] + (2,),
            interpolation=cv2.INTER_LINEAR,
        )
    return g_L[0] + d_L[0]
</code></pre>
<h3>Visualization</h3>
<p>After downloading the data, you can run it with the code:</p>
<pre class="lang-py prettyprint-override"><code>sphere_seq = []
for fname in natsorted(Path("./input/sphere/").rglob("*.ppm")):
    sphere_seq.append(cv2.imread(str(fname), cv2.IMREAD_GRAYSCALE))

flows = []
for i in range(len(sphere_seq) - 1):
    flows.append(hierarchical_lucas_kanade(sphere_seq[i], sphere_seq[i + 1], max_level=4))
    show_flow(sphere_seq[i], flows[i], f"./output/sphere/flow-{i}.png")
</code></pre>
<p>The result looks like below:</p>
<p><a href="https://i.sstatic.net/1oK0e.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1oK0e.gif" alt="enter image description here" /></a></p>
<h3>Specific Question:</h3>
<p>There are several problems in the result:</p>
<ul>
<li>
<ol>
<li><p>It looks like the flow's x direction is correct but the y direction is not. It
could be that my visualization code is wrong. Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>def show_flow(img, flow, filename=None):
    x = np.arange(0, img.shape[1], 1)
    y = np.arange(0, img.shape[0], 1)
    x, y = np.meshgrid(x, y)
    plt.figure(figsize=(10, 10))
    fig = plt.imshow(img, cmap="gray", interpolation="bicubic")
    plt.axis("off")
    fig.axes.get_xaxis().set_visible(False)
    fig.axes.get_yaxis().set_visible(False)
    num_points_per_axis = 32
    step = int(img.shape[0] / num_points_per_axis)
    plt.quiver(
        x[::step, ::step],
        y[::step, ::step],  # reverse order?
        flow[::step, ::step, 0],
        flow[::step, ::step, 1],  # reverse sign?
        color="r",
        pivot="tail",
        headwidth=2,
        headlength=3,
    )
    if filename is not None:
        plt.savefig(filename, bbox_inches="tight", pad_inches=0)
</code></pre>
</li>
</ol>
</li>
<li>
<ol start="2">
<li>There is non-zero flow on stationary pixels.</li>
</ol>
</li>
</ul>
<p>Any insights or suggestions on resolving this issue would be greatly appreciated. Thank
you!</p>
|
<python><opencv><matplotlib><opticalflow>
|
2023-11-30 10:57:49
| 1
| 2,025
|
Der Fänger im Roggen
|
77,577,820
| 19,932,351
|
"Unknown Configuration Setting" in VSCode. Where to find current VSCode Python configurations for settings.json?
|
<p>Similar issues have been posted like <a href="https://stackoverflow.com/questions/62470439/vscode-python-jedienabled-false-showing-as-unknown-configuration-setting">vscode "python.jediEnabled": false, showing as Unknown Configuration Setting</a></p>
<p>I was really struggling to set up my flake8 settings in VSCode to extend the E501 line-too-long error with</p>
<pre class="lang-py prettyprint-override"><code>"python.linting.flake8Args": ["--max-line-length=120"]
</code></pre>
<p>After searching the <a href="https://github.com/microsoft/vscode-python" rel="nofollow noreferrer">Python Github Repo of VSCode</a>, I finally found this <a href="https://github.com/microsoft/vscode-python/wiki/Migration-to-Python-Tools-Extensions" rel="nofollow noreferrer">post</a> to realize the settings have changed to</p>
<pre class="lang-py prettyprint-override"><code>"flake8.args": ["--max-line-length=120"]
</code></pre>
<p>Now, my question is where do I find the current python configurations like <code>"python.autoComplete.extraPaths"</code>?<br />
I could find it neither in the GitHub repo nor on the <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.python" rel="nofollow noreferrer">marketplace</a>, even though settings are listed on <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.flake8" rel="nofollow noreferrer">flake8's marketplace</a> page.</p>
|
<python><visual-studio-code>
|
2023-11-30 10:51:15
| 1
| 662
|
Leon
|
77,577,521
| 13,184,263
|
Shifted column when reading csv file in pyspark
|
<p>I have a CSV file stored in GCS. When reading this file, some values are shifted by two columns, and I don't know why. One example is the following row:</p>
<pre><code>202100000264277,01212022,*******2752,00000411042,15314.15,27021421334,D,00000000396,JONES HEALTHCARE GROUP,08036296542,8036296542,,02560,15314.15,425.89,0.00,0.00,110,10522,AMEX CA B2B/WHOLESALE 1 CNP,AEXP,372754*****1007,,01212022,01212022,127647,,,A,001383636306309,27021421359,ACH,CAD,,,,02153222888,90662316,62496,62504,,0000000000027021421359X,0089250008036296542157,45280,USA,01,1,,,Y,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
</code></pre>
<p>df:</p>
<pre><code> 0 1 2 3 4 5 6 \
0 202100000264277 1212022 *******2752 411042 15314.15 27021421334 D
7 8 9 10 11 12 13 \
0 396 JONES HEALTHCARE GROUP 8036296542 8036296542 NaN 2560 15314.15
14 15 16 17 18 19 20 \
0 425.89 0.0 0.0 110 10522 AMEX CA B2B/WHOLESALE 1 CNP AEXP
21 22 23 24 25 26 27 28 29 \
0 372754*****1007 NaN 1212022 1212022 127647 NaN NaN A 1383636306309
30 31 32 33 34 35 36 37 38 39 40 \
0 27021421359 ACH CAD NaN NaN NaN 2153222888 90662316 62496 62504 NaN
41 42 43 44 45 46 47 \
0 0000000000027021421359X 0089250008036296542157 45280 USA 1 1 NaN
</code></pre>
<p>The value in column 43 should be in column 41. I loaded the CSV file as a text file and then split on ','. What could be the possible reasons?</p>
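<p>For what it's worth, the most common cause of this symptom is a quoted field that contains the delimiter: a naive split on ',' breaks such a field apart and shifts everything after it, while a real CSV parser keeps it together. A small sketch with made-up data:</p>

```python
import csv
import io

line = 'a,"b,c",d'

naive = line.split(",")                       # treats every comma as a separator
parsed = next(csv.reader(io.StringIO(line)))  # honors the quotes

print(naive)   # ['a', '"b', 'c"', 'd']  -> 4 fields, columns shift
print(parsed)  # ['a', 'b,c', 'd']       -> 3 fields, as intended
```

<p>If that is the cause here, reading the file with Spark's CSV reader (which understands quoting) instead of splitting text lines on ',' should fix the shift.</p>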
|
<python><dataframe><csv><pyspark>
|
2023-11-30 10:05:11
| 0
| 415
|
Jenny
|
77,577,497
| 7,204,691
|
Visualize iteratively t-SNE in Dash for large sample dataset
|
<p>I'm building a Dash app. In this dashboard, there is the possibility for the user to perform a dimension reduction of the dataset and visualize the result on a 2d or 3d plot.</p>
<p>The computation is really slow as the dataset contains lots of samples (~ 30000) and the algorithm (T-SNE) is not the fastest one.</p>
<p>I would like to allow the user to visualize the result of the algorithm performed on a small subset (between 1000 and 3000 samples) of the dataset then perform the algorithm on the full dataset and show the final result when it is computed (<a href="https://opentsne.readthedocs.io/en/stable/examples/04_large_data_sets/04_large_data_sets.html" rel="nofollow noreferrer">https://opentsne.readthedocs.io/en/stable/examples/04_large_data_sets/04_large_data_sets.html</a>).</p>
<p>This way the user shouldn't have to wait 10min or more to see results.</p>
<p>In code, the idea would be the following</p>
<pre class="lang-py prettyprint-override"><code>@callback(
    Output(ids.TSNE_GRAPH, "figure"),
    Input(ids.START_ALGO, "n_clicks"),
    Input(ids.DATASTORE, "data"),
)
def start_algo(n_clicks: int | None, data: pd.DataFrame):
    if n_clicks is None:
        raise PreventUpdate  # from dash.exceptions
    data_subset, data_rest = split_data(data)
    res = fast_computation_tsne(data_subset)
    fig = px.scatter(x=res[:, 0], y=res[:, 1])
    # show the fig in UI
    full_res = slow_computation_tsne(data_rest)
    full_fig = px.scatter(x=full_res[:, 0], y=full_res[:, 1])
    # show the full fig in UI
</code></pre>
<p>The problem is that, as I understand it, we can only return something to the output once.</p>
|
<python><dataset><visualization><plotly-dash>
|
2023-11-30 10:01:52
| 0
| 307
|
Mathieu Rousseau
|
77,577,477
| 2,793,602
|
Show lines that differ between pandas dataframes
|
<p>I have two dataframes, one with the current estimate and one with the previous estimate. I want to create a new dataframe that shows the lines that have differences. Both have about 600 lines. This code:</p>
<pre><code>merged_df = pd.merge(df_current_est, df_previous_est, how='outer', indicator=True)
changed_df = merged_df[merged_df['_merge'] != 'both'].copy()
changed_df.drop('_merge', axis=1, inplace=True)
changed_df.reset_index(drop=True, inplace=True)
changed_df = changed_df.sort_values(by='BNCropEstimateID', ascending=True)
print(changed_df)
</code></pre>
<p>produces the right result.</p>
<pre><code> EstimateID Season RPIN GrowMethodCode Variety_code Packout_pc QtyBins
0 10061 2023 R1000 FO 002 0.60 320
2 10061 2023 R1000 FO 002 0.60 9000
1 10062 2023 R2000 FO 068 0.76 1000
3 10062 2023 R2000 FO 068 0.76 90000
</code></pre>
<p>The line at Index 0 is from the dataframe <code>df_current_est</code> and the line at Index 2 is from <code>df_previous_est</code>. Similarly, Index 1 is from <code>df_current_est</code> and Index 3 is from <code>df_previous_est</code>.</p>
<p>How can I extend the code to create a new column in <code>changed_df</code> that indicates which of the original dataframes each line comes from?</p>
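<p>One possible approach (sketched with made-up frames and column names): keep the <code>_merge</code> indicator and map it to a label before dropping it; with <code>how='outer'</code>, <code>left_only</code> rows come only from the left frame and <code>right_only</code> rows only from the right one.</p>

```python
import pandas as pd

# Illustrative stand-ins for df_current_est / df_previous_est.
df_current_est = pd.DataFrame({"EstimateID": [10061, 10062], "QtyBins": [320, 1000]})
df_previous_est = pd.DataFrame({"EstimateID": [10061, 10062], "QtyBins": [9000, 90000]})

merged_df = pd.merge(df_current_est, df_previous_est, how="outer", indicator=True)
changed_df = merged_df[merged_df["_merge"] != "both"].copy()
# 'left_only' rows exist only in the current frame, 'right_only' only in the previous one.
changed_df["source"] = changed_df["_merge"].astype(str).map(
    {"left_only": "current", "right_only": "previous"}
)
changed_df = changed_df.drop(columns="_merge").reset_index(drop=True)
print(changed_df)
```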
|
<python><pandas><dataframe>
|
2023-11-30 09:59:40
| 2
| 457
|
opperman.eric
|
77,577,292
| 903,011
|
How to precisely describe a dict type?
|
<p>I am back to Python (after some infidelities with Go and Typescript) and in the meantime the <code>type</code> statement <a href="https://docs.python.org/3.12/reference/simple_stmts.html#the-type-statement" rel="nofollow noreferrer">has landed</a>.</p>
<p>I tried to understand how to use it to <strong>precisely</strong> describe a dict structure, but could only find a generic approach:</p>
<pre class="lang-py prettyprint-override"><code>type Car = Dict[str, str] # or [str, Any] or whatever
</code></pre>
<p>If my dict is actually</p>
<pre class="lang-json prettyprint-override"><code>{
  "color": "blue",
  "max_nr_passengers": 26,
  "seats": [
    {
      "color": "green",
      "heated": true
    },
    {
      "color": "blue",
      "heated": true
    }
  ],
  "options": {
    "guns": false,
    "submarine": false,
    "extra_wheels": 18
  }
}
</code></pre>
<p>then I would like to be able to describe its shape so that I have true type hinting (and peace of mind).</p>
<p>In Go this would be something like</p>
<pre class="lang-golang prettyprint-override"><code>type Car struct {
    Color           string
    MaxNrPassengers int
    Seats           []struct {
        Color  string
        Heated bool
    }
    Options struct {
        Guns        bool
        Submarine   bool
        ExtraWheels int
    }
}
</code></pre>
<p>and this accurately describes the actual structure of the type.</p>
<p>Is this possible in Python?</p>
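<p>For reference, the closest Python analogue to that Go struct is <code>typing.TypedDict</code>: nesting works, and type checkers validate the exact shape. A sketch:</p>

```python
from typing import List, TypedDict


class Seat(TypedDict):
    color: str
    heated: bool


class Options(TypedDict):
    guns: bool
    submarine: bool
    extra_wheels: int


class Car(TypedDict):
    color: str
    max_nr_passengers: int
    seats: List[Seat]
    options: Options


car: Car = {
    "color": "blue",
    "max_nr_passengers": 26,
    "seats": [{"color": "green", "heated": True}, {"color": "blue", "heated": True}],
    "options": {"guns": False, "submarine": False, "extra_wheels": 18},
}
print(car["options"]["extra_wheels"])  # 18
```

<p>Note that at runtime a <code>TypedDict</code> is still a plain <code>dict</code>; the shape is only enforced by the type checker (libraries such as pydantic add runtime validation on top).</p>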
|
<python><python-typing>
|
2023-11-30 09:30:20
| 1
| 30,596
|
WoJ
|
77,577,229
| 1,657,666
|
Why the ZipFile python library does not enforce passwords if ZipCrypto encryption is set?
|
<p>I am trying to create a zip file using DEFLATED compression and ZipCrypto encryption. My test simply tries to extract the created zip file with the Windows 11 extraction tool. I used the code below to generate the zip file, which was created successfully, but when I try to extract it, no password is requested by any decompression software I try. Why?</p>
<pre><code>def zip_directory_with_password(directory_path, zip_file_path, password):
    # Create a password-protected Zip file with ZipCrypto encryption
    with zipfile.ZipFile(zip_file_path, 'w', zipfile.ZIP_DEFLATED, allowZip64=True) as zip_file:
        # Set the password for ZipCrypto
        zip_file.setpassword(password.encode('utf-8'))
        # Walk through the directory and add all files and directories to the Zip file
        for foldername, subfolders, filenames in os.walk(directory_path):
            for filename in filenames:
                file_path = os.path.join(foldername, filename)
                arcname = os.path.relpath(file_path, directory_path)
                zip_file.write(file_path, arcname)
                print(f"Added file: {arcname}")
            for subfolder in subfolders:
                folder_path = os.path.join(foldername, subfolder)
                arcname = os.path.relpath(folder_path, directory_path)
                zip_file.write(folder_path, arcname)
                print(f"Added folder: {arcname}")
|
<python><python-zipfile><system.io.compression.zipfile>
|
2023-11-30 09:20:41
| 0
| 451
|
user1657666
|
77,577,205
| 10,722,752
|
Getting AttributeError when running optuna study
|
<p>I am trying to run optimization using <code>optuna</code>:</p>
<pre><code>study = optuna.create_study(direction='minimize', sampler=optuna.samplers.GridSampler(search_space))
study.optimize(objective, n_trials=20)
</code></pre>
<p>For which I am getting below error:</p>
<p><code>AttributeError: module optuna.samplers has no attribute GridSampler</code></p>
<p><code>optuna</code> version is <code>0.10.0</code></p>
<p>Could someone please help me fix this issue?</p>
|
<python><python-3.x><optuna>
|
2023-11-30 09:16:29
| 1
| 11,560
|
Karthik S
|
77,577,121
| 6,364,117
|
Pandas read_table never fails
|
<p>I'm trying to learn a bit of Python and to test that incoming binary data (which is a file) is tab-delimited. Whatever kind of string I turn into bytes does not break the pandas <code>read_table</code> parser. I am not sure what is going on here and am at the end of my rope.</p>
<p>It seems pretty standard that I should be able to validate the data before turning it into a pandas <code>DataFrame</code>.</p>
<pre><code>@app.route("/api", methods=['POST'])
def handle_upload():
    data = request.get_data()
    print(type(data))
    if data:
        try:
            df = pd.read_table(BytesIO(data), delim_whitespace=True)
        except IOError as e:
            return Response('incorrect format, no tab delimitation', status=422)
        try:
            if not all(item in df.columns for item in COLUMNS):
                raise ValueError
        except pd.errors.InvalidColumnName:
            return Response('Incorrect column values', status=422)
        sorted_by_score = df.sort_values(by='score', ascending=False)
        ten_of_largest_score = sorted_by_score[0:10]
        sqlite_connection = engine.connect()
        ten_of_largest_score.to_sql(sqlite_table, sqlite_connection, if_exists='replace')
        sqlite_connection.close()
        return Response('file uploaded successfully', status=201)


def test_handle_upload_invalid_format(client):
    response = client.post('/api', data=b'"invalid_data_without_tab_delimiter"')
    assert response.status_code == 422
</code></pre>
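<p>pandas is extremely permissive here: nearly any bytes parse into <em>some</em> DataFrame (a lone token simply becomes a header with zero data rows), so the <code>IOError</code> branch never fires. A minimal demonstration (using <code>sep=r"\s+"</code>, the non-deprecated spelling of <code>delim_whitespace=True</code>):</p>

```python
import io

import pandas as pd

data = b'"invalid_data_without_tab_delimiter"'
df = pd.read_table(io.BytesIO(data), sep=r"\s+")
# No exception was raised: the single token became a header column.
print(df.shape)  # (0, 1)
```

<p>So the validation has to be explicit, for example checking <code>df.shape</code> and the expected column names and returning 422 yourself, rather than waiting for the parser to raise.</p>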
|
<python><pandas><flask><pytest><flask-sqlalchemy>
|
2023-11-30 09:02:20
| 0
| 1,422
|
godhar
|
77,577,080
| 7,078,356
|
xhost and Pyautogui to click the buttons on a linux Application, cannot see mouse click action
|
<p>I have a Linux application, which automatically runs and loads on system startup, filling the entire display. Currently, I want to write a Python script to simulate mouse clicks on buttons within the application. Here is the issue I am facing: I connect to my Linux machine using SSH and use the PyAutoGUI library to simulate clicks. Previously, I manually entered the commands <code>export DISPLAY=:0.0</code> and <code>xhost +SI:localuser:root</code> in the terminal, followed by the command <code>sudo python3 XXX.py</code> to run the clicking script.And everthing goes well.</p>
<p>Now, I want to include the <code>export DISPLAY=:0.0</code> and <code>xhost +SI:localuser:root</code> commands within the same script. The code is as follows:</p>
<pre><code>import os
import subprocess

try:
    # Set DISPLAY variable
    os.environ['DISPLAY'] = ":0.0"
    # Execute xhost command
    subprocess.run(["xhost", "+SI:localuser:root"], shell=False)
    # Continue with PyAutoGUI operations
except subprocess.CalledProcessError as e:
    print(f'Error configuring environment: {e}')

# Rest of PyAutoGUI script
</code></pre>
<p>At this point, I encounter an issue. The message '<strong>localuser:root being added to access control list</strong>' is printed in the terminal, but I do not see the mouse-click operations. When the script finishes running, executing <code>echo $DISPLAY</code> does not show my configured <strong>:0.0</strong>, but rather 10.0. How can I resolve this problem? In short, there is no click action when DISPLAY and xhost are set from within the Python script itself.
Hope someone can comment here, thanks.</p>
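<p>Worth noting: an SSH session typically exports <code>DISPLAY=localhost:10.0</code> for X forwarding, which matches the <code>10.0</code> you see, and PyAutoGUI connects to the X display when it is <em>imported</em>. A sketch that builds the environment explicitly before anything X-related is loaded (the <code>XAUTHORITY</code> path is a guess and may not be needed in your setup):</p>

```python
import os
import subprocess

# Point this process (and its children) at the real screen, not the SSH tunnel.
env = dict(os.environ, DISPLAY=":0.0", XAUTHORITY="/home/user/.Xauthority")
os.environ["DISPLAY"] = env["DISPLAY"]

try:
    subprocess.run(["xhost", "+SI:localuser:root"], env=env, check=True)
except (FileNotFoundError, subprocess.CalledProcessError) as exc:
    print(f"xhost could not be run here: {exc}")

# Only import pyautogui AFTER DISPLAY is set, e.g.:
# import pyautogui
# pyautogui.click(100, 200)
```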
|
<python><linux><display><pyautogui>
|
2023-11-30 08:56:40
| 0
| 1,327
|
Ringo
|
77,577,009
| 6,273,503
|
Pandas handling of daylight saving time (double hour) excel / csv compatible
|
<p>I am reading / writing quarter-hourly data into a csv file. The csv file format is given, since it is processed by Excel.</p>
<p>CSV looks like this:</p>
<pre><code>from;Price
...
30.10.2022 01:00;120,41
30.10.2022 01:15;104,2
30.10.2022 01:30;89,93
30.10.2022 01:45;71,21
30.10.2022 02:00;101,54
30.10.2022 02:15;92,1
30.10.2022 02:30;88,38
30.10.2022 02:45;80,83
30.10.2022 02:00;100,64 <-- Double Hour
30.10.2022 02:15;91,2 <-- Double Hour
30.10.2022 02:30;87,49 <-- Double Hour
30.10.2022 02:45;83,3 <-- Double Hour
30.10.2022 03:00;86,45
30.10.2022 03:15;83,94
30.10.2022 03:30;90,12
30.10.2022 03:45;87,86
...
</code></pre>
<p>My process is:</p>
<ul>
<li>read_csv into df</li>
<li>pull new data from an api and format into a dict</li>
<li>cast dict into a new df (appending_df)</li>
<li>Sort df based on index</li>
<li>append and drop duplicates based on the timeseries index <code>df = df[~df.index.duplicated(keep='last')]</code></li>
</ul>
<p>This works well, unless there is a shift from daylight saving time to regular time. The double hour is simply dropped, because I drop duplicates. Even if I did not need to drop duplicates, the sorting step would order the double hour differently. Inserting into the index, instead of appending, wouldn't work either, because of the duplicate index entries.</p>
<p>Normally, I would cast the time into UTC and use +2 and +1 to differentiate between summer/winter, but the timestamps in the csv are timezone-agnostic and Excel does not know about timezones.</p>
<p>How could I handle the csv import? How would I make Pandas know the difference between the first and the second hour in the timeseries? And how would I handle the appending / insertion action?</p>
<p>Here is my code so far:</p>
<pre><code>USE_PKL = False
DATE_FORMAT = '%d.%m.%Y %H:%M'

# Define the file paths for the report and the url endpoint
pricing_file = Path(r'//...PATH...') / 'exaa_spot_quarterly.csv'
pricing_pkl = Path(r'//...PATH...') / 'exaa_spot_quarterly.zip'


def load_old_data():
    # Load the old data to append / update. USE_PKL specifies whether the .csv or the .pkl should be used.
    try:
        if USE_PKL:
            df = pd.read_pickle(filepath_or_buffer=pricing_pkl, compression='infer')
        else:
            df = pd.read_csv(filepath_or_buffer=pricing_file, delimiter=';', header=0, index_col=0, decimal=",")
            df.index = pd.to_datetime(df.index, format=DATE_FORMAT)
    except:
        df = pd.DataFrame()
    return df


def save_data(df):
    # Save DataFrame to PKL and CSV
    df.to_pickle(path=pricing_pkl, compression='infer')
    df.to_csv(path_or_buf=pricing_file, sep=";", index=True, index_label='von', encoding="ANSI", decimal=",", date_format=DATE_FORMAT)


def load_api_data():
    # Prepare the requests Session
    s = requests.Session()
    data = {}
    # Pull data from EXAA Website and format into dict
    for offset in range(0, 2):
        onDate = (date.today() - timedelta(days=offset))
        print(onDate)
        url = (f'https://www.exaa.at/data/trading-results?delivery_day={onDate.strftime("%Y/%m/%d")}&market=de&auction=grey')
        resp = s.get(url)
        quarterly_data = resp.json()["data"]["q"]
        for i in quarterly_data:
            timestamp = datetime.strptime(i["AuctionDay"], '%Y-%m-%d')
            parsed_regex = re.search("^Q(\d\d)_(\d)$", i["Product"])
            timestamp_hour = int(parsed_regex.group(1)) - 1
            timestamp_minute = (int(parsed_regex.group(2)) - 1) * 15
            timestamp = timestamp.replace(hour=timestamp_hour, minute=timestamp_minute)
            data[timestamp] = float(i["Price"])
    return data


df = load_old_data()
data = load_api_data()

# Append the data from EEX / dict to the old DataFrame
appending_df = pd.DataFrame.from_dict(data, orient='index', columns=['Preis'])
df = df.append(appending_df)
df = df[~df.index.duplicated(keep='last')]
df.sort_index(inplace=True)
save_data(df)
</code></pre>
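<p>One way to let pandas tell the two hours apart is to localize the index to a real time zone: <code>tz_localize</code> accepts <code>ambiguous='infer'</code> for an ordered series, or an explicit boolean array, after which the duplicated wall-clock stamps become distinct UTC instants. A sketch (the time zone is an assumption):</p>

```python
import numpy as np
import pandas as pd

# Two identical wall-clock stamps from the repeated DST hour.
idx = pd.to_datetime(["30.10.2022 02:30", "30.10.2022 02:30"],
                     format="%d.%m.%Y %H:%M")

# ambiguous: True = the first occurrence is still DST, False = standard time.
localized = idx.tz_localize("Europe/Vienna", ambiguous=np.array([True, False]))
as_utc = localized.tz_convert("UTC")
print(as_utc)  # the two stamps are now one hour apart
```

<p>Deduplicating on the UTC index then keeps both hours; only when writing the final CSV would you strip the zone again (<code>tz_localize(None)</code>) to stay Excel-friendly.</p>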
|
<python><pandas>
|
2023-11-30 08:44:17
| 0
| 1,273
|
Harper
|
77,576,973
| 10,687,907
|
Blurred map when I move in Folium
|
<p>So I'm trying to display an interactive map using streamlit and folium but every time I move the map it becomes blurry.</p>
<p>I suppose it is a performance problem where things are processed over and over again, but I can't seem to bypass it; I've tried using caching to only process things once, but it doesn't seem to change anything.</p>
<p>Am I doing something wrong ?</p>
<pre><code>import geopandas as gpd
import folium
import streamlit as st
from streamlit_folium import st_folium
import pandas as pd
import functools
from branca.colormap import LinearColormap

APP_TITLE = 'Agences du réseau'
done = False


def read_files():
    # df = pd.read_excel("data/data.xlsx")
    # For testing purpose use this generated dataframe
    df = pd.DataFrame({"Agence": ["Ardèche", "Isère", "Savoie"], "Occupation": [3, 2, 1], "Commentaire": ["RAS", "RAS", "RAS"]})
    geo_data = gpd.read_file("data/departements.geojson")
    # Create a Folium map centered at the bounds
    m = folium.Map(location=[46.5, 2.5], zoom_start=6, tiles='CartoDB positron')
    # Create linear color scale from red to green
    color_scale = LinearColormap(['#FF0000', '#00FF00'],
                                 vmin=df['Occupation'].min(),
                                 vmax=df['Occupation'].max())

    def style_function(feature):
        value = feature['properties']['Occupation']
        return {
            'fillColor': color_scale(value),
            'color': 'white',
            'weight': 1,
            'fillOpacity': 0.8
        }

    choropleth = folium.Choropleth(
        geo_data=geo_data,
        data=df,
        columns=["Agence", 'Occupation'],
        key_on='feature.properties.nom',
        line_opacity=0.8,
        highlight=False,
        fill_opacity=0.5,
        style_function=style_function
    )
    choropleth.geojson.add_to(m)
    # Extract the boundaries of the GeoJSON data
    boundary_data = geo_data.boundary
    # Calculate the bounds of the boundary geometries
    bounds = boundary_data.total_bounds.tolist()
    # Add the boundaries to the map
    folium.GeoJson(boundary_data).add_to(m)
    return df, geo_data, m, choropleth


def display_map(geofile, df, geo_data, m, choropleth):
    # TEST
    df_indexed = df.set_index('Agence')
    global done
    if done == False:
        for feature in choropleth.geojson.data['features']:
            try:
                state_name = feature['properties']['nom']
                print(state_name)
                feature['properties']['Occupation'] = 'Population: ' + '{:,}'.format(df_indexed.loc[state_name, 'Occupation'][0]) if state_name in list(df_indexed.index) else ''
                feature['properties']['Commentaire'] = 'Reports/100K Population: ' + str(round(df_indexed.loc[state_name, 'Commentaire'][0])) if state_name in list(df_indexed.index) else ''
            except:
                pass
        done = True
    choropleth.geojson.add_child(
        folium.features.GeoJsonTooltip(['nom', 'Occupation', 'Commentaire'], labels=False)
    )
    # Display the map in Streamlit
    st_map = st_folium(m, width=1200, height=800)
    print(st_map)
    state_name = ''
    if st_map['last_active_drawing']:
        state_name = st_map['last_active_drawing']['properties']['nom']
    return state_name


def main():
    df, geo_data, m, choropleth = read_files()
    st.set_page_config(APP_TITLE)
    st.title(APP_TITLE)
    display_map("data/departements.geojson", df, geo_data, m, choropleth)


if __name__ == "__main__":
    main()
</code></pre>
<p>The geojson file can be downloaded here:
<a href="https://github.com/gregoiredavid/france-geojson" rel="nofollow noreferrer">https://github.com/gregoiredavid/france-geojson</a>, file "departements.geojson"</p>
<p><a href="https://i.sstatic.net/bue00.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bue00.png" alt="Not blurred" /></a></p>
<p><a href="https://i.sstatic.net/cNZVC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cNZVC.png" alt="Blurred" /></a></p>
|
<python><streamlit><folium>
|
2023-11-30 08:37:10
| 0
| 807
|
Achille G
|
77,576,912
| 15,222,211
|
How to resolve mypy errors when changing values with a Literal type?
|
<p>I need to change the value of an attribute that has a <code>Literal</code> type.
In a real case, I have a long list of values in ValidChar and I need to convert it to <code>List[str]</code> for checking.
<code>mypy</code> shows an error for the following example. How can I solve it?</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal

ValidChar = Literal["A", "B"]


class MyClass:
    char: ValidChar = "A"

    def set_char(self, char: str) -> None:
        """Set only valid character."""
        if char in ValidChar.__args__:
            self.char = char


obj = MyClass()
print(f"{obj.char=}")  # obj.char='A'
# char is valid
obj.set_char("B")
print(f"{obj.char=}")  # obj.char='B'
# char is invalid
obj.set_char("C")
print(f"{obj.char=}")  # obj.char='B'
</code></pre>
<p>mypy error:</p>
<p>I am trying to solve <code>expression has type "str", variable has type "Literal['A', 'B']"</code></p>
<pre><code>$ mypy .
2.py:11: error: "<typing special form>" has no attribute "__args__" [attr-defined]
2.py:12: error: Incompatible types in assignment (expression has type "str", variable has type "Literal['A', 'B']") [assignment]
Found 2 errors in 1 file (checked 3 source files)
</code></pre>
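<p>For context, the commonly suggested pattern is <code>typing.get_args</code> (the supported alternative to <code>__args__</code>) combined with a <code>cast</code> once membership has been checked. A sketch:</p>

```python
from typing import Literal, cast, get_args

ValidChar = Literal["A", "B"]


class MyClass:
    char: ValidChar = "A"

    def set_char(self, char: str) -> None:
        # get_args() is the supported way to read the literal's values.
        if char in get_args(ValidChar):
            # The runtime check guarantees validity, so the cast is safe.
            self.char = cast(ValidChar, char)


obj = MyClass()
obj.set_char("B")
print(obj.char)  # B
obj.set_char("C")  # silently ignored: not a valid literal
print(obj.char)  # B
```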
|
<python><mypy><python-typing>
|
2023-11-30 08:25:54
| 2
| 814
|
pyjedy
|
77,576,778
| 2,194,036
|
ansible playbook not able to find the inventory file
|
<p>I am running an Ansible playbook. Within the playbook itself, I call a Python script which generates the inventory file that the playbook needs to use.<br>
On the very first execution, the playbook runs without failing, but it does not perform any of the listed operations on the target servers, because it is not able to find the inventory file.<br></p>
<p>It provides this info:</p>
<pre><code>ansible-playbook play.yml -i /home/inventory.txt
[WARNING]: Unable to parse /home/inventory.txt as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
</code></pre>
<p>On the second playbook run, it works fine and performs all the listed operations on the target servers.<br>
Is it possible for Ansible to use an inventory file that is generated by the Python script during the same run?</p>
|
<python><ansible><ansible-inventory>
|
2023-11-30 08:02:12
| 0
| 459
|
Sandy
|
77,576,739
| 11,276,356
|
Project point cloud into depth image in python
|
<p>I have a point cloud which is in the same coordinate system as the cameras used for capturing the image.</p>
<p><em>I have a set of points (1000000, 3)</em></p>
<p><em>The camera extrinsic matrix Rt</em></p>
<p><em>The camera intrinsic matrix K</em></p>
<p><em>And my image (480, 640, 3)</em></p>
<p><em>near=0.05 and far=20 values as well as depth scale = 1000</em></p>
<p>What I want to get out is the depth image for each camera frame I have.
However, I am stuck and just get a strangely offset image.</p>
<p>I use open3d to read the point cloud file, numpy for the calculation and matplotlib for visualization.</p>
<pre><code>if os.path.exists(PATH):
    # Read
    pcd = o3d.io.read_point_cloud(PATH)
    o3d.visualization.draw_geometries([pcd])
    point_cloud = pcd.points  # get points

    # Load intrinsic and extrinsic camera parameters
    K = np.array([[fx, 0, cx],
                  [0, fy, cy],
                  [0, 0, 1]])
    extrinsics_matrix = np.load('rt.npy')  # Extrinsic matrix

    # Transform the point cloud to the world coordinates using extrinsics
    world_coords = np.dot(extrinsics_matrix[:3, :3], point_cloud.T) + extrinsics_matrix[:3, 3][:, np.newaxis]

    # Project world coordinates to 2D image coordinates using the intrinsic matrix
    homogeneous_coords = np.dot(K, world_coords)
    image_coords = homogeneous_coords[:2, :] / homogeneous_coords[2, :]
    image_coords = np.clip(image_coords, 0, np.array([640, 480])[:, np.newaxis])
</code></pre>
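<p>A minimal z-buffered projection sketch, assuming the extrinsic matrix maps world coordinates into the camera frame (if <code>rt.npy</code> stores the camera-to-world pose instead, invert it first — a common cause of offset-looking depth images); depth scale and near/far clipping are left out:</p>

```python
import numpy as np

def project_to_depth(points, K, Rt, width=640, height=480):
    """Render an (N, 3) point cloud into a z-buffered depth image."""
    # World -> camera (assumes Rt is world-to-camera; invert it if not).
    cam = Rt[:3, :3] @ points.T + Rt[:3, 3:4]
    z = cam[2]
    in_front = z > 0                      # drop points behind the camera
    uvw = K @ cam[:, in_front]
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    depth = np.full((height, width), np.inf)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    # Keep the nearest depth per pixel; minimum.at handles duplicate pixels.
    np.minimum.at(depth, (v[inside], u[inside]), z[in_front][inside])
    depth[np.isinf(depth)] = 0.0          # pixels no point projected into
    return depth
```

<p>Clamping projected coordinates with <code>np.clip</code>, as in the snippet above, piles out-of-view points onto the image border; discarding them with a mask, as here, avoids that artifact.</p>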
|
<python><numpy><point-clouds><open3d><camera-projection>
|
2023-11-30 07:54:04
| 0
| 4,122
|
Hannah Stark
|
77,576,528
| 12,100,211
|
Llama model to answer questions that are in the context of a PDF
|
<p>model name = llama-2-7b-chat.ggmlv3.q2_K.bin<br>
pdf-name = fastfacts-what-is-climate-change.pdf<br>
pdf-data = the PDF is 2 to 3 pages long, containing info about climate change. <br></p>
<p>dependencies:</p>
<pre><code>from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import CTransformers
from langchain import PromptTemplate
from langchain.chains import RetrievalQA
</code></pre>
<p>The code below gives the output I want:</p>
<pre><code>>> question : who is ms dhoni<br>
>> answer : Not provided in the pdf. Ask something in the context of pdf.
</code></pre>
<pre><code>def offline_mode(user_question):
loader = PyPDFLoader("/home/kanhatomar/Work/Running/AmityLLmaArabic/Bot/repository/data/fastfacts-what-is-climate-change.pdf")
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=500, # 500
chunk_overlap=50, #100 play with this numbers for better results
)
documents = text_splitter.split_documents(docs)
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-MiniLM-L6-v2",
model_kwargs={'device': 'cpu'})
db = FAISS.from_documents(documents, embeddings)
db.save_local("faiss")
template = """Use the following pieces of information to answer the user's question and if the question is not relevant to PDF text just say that you don't know, don't try to make up an answer using your knowledge base.
If you don't know the answer, just say that you don't know, don't try to make up an answer and don't use your knowledge Base for answer, if the question is not related to text input data so please give a I don't know as an answer please do not use your knowledge base, only use input pdf text for answer.
if the question is not related to input text so please write I don't know in a answer
Context: {context}
Question: {question}
Only return the helpful answer below from document and nothing else.
Helpful answer:
"""
llm = CTransformers(model='/home/kanhatomar/Work/Running/AmityLLmaArabic/Bot/repository/model/llama-2-7b-chat.ggmlv3.q2_K.bin',
model_type='llama',
config={'max_new_tokens': 256, 'temperature': 0})
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-MiniLM-L6-v2",
model_kwargs={'device': 'cpu'})
db = FAISS.load_local("faiss", embeddings)
retriever = db.as_retriever(search_kwargs={'k': 2})
prompt = PromptTemplate(
template=template,
input_variables=['context', 'question'])
qa_llm = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=retriever,
return_source_documents=True,
chain_type_kwargs={'prompt': prompt})
original_prompt = user_question
# Tokenize the prompt
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
tokenized_prompt = tokenizer.encode(original_prompt, return_tensors="pt")
# Convert tokenized prompt back to string for printing
tokenized_prompt_str = tokenizer.decode(tokenized_prompt[0], skip_special_tokens=True)
# print(f"Tokenized prompt: {tokenized_prompt_str}")
# Retrieve the answer using the QA model
output = qa_llm({'query': tokenized_prompt_str})
res = output["result"]
return {"response": res}
</code></pre>
<p>Now, the code that gives the wrong answer:</p>
<pre><code>>> question: who is ms dhoni<br>
>> answer: Mahendra Singh Dhoni is an Indian professional cricketer, who plays as a wicket-keeper-batsman. Widely regarded as one of the world's greatest ...
</code></pre>
<pre><code>def offline_mode(user_question):
try:
loader = PyPDFLoader("/data/fastfacts-what-is-climate-change.pdf")
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=500, # 500
chunk_overlap=50, #100 play with this numbers for better results
)
documents = text_splitter.split_documents(docs)
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-MiniLM-L6-v2",
model_kwargs={'device': 'cpu'})
db = FAISS.from_documents(documents, embeddings)
db.save_local("faiss")
template = """Use the following pieces of information to answer the user's question and if the question is not relevant to PDF text just say that you don't know, don't try to make up an answer using your knowledge base.
If you don't know the answer, just say that you don't know, don't try to make up an answer and don't use your knowledge Base for answer, if the question is not related to text input data so please give a I don't know as an answer please do not use your knowledge base, only use input pdf text for answer.
if the question is not related to input text so please write I don't know in a answer
Context: {context}
Question: {question}
Only return the helpful answer below from document and nothing else.
Helpful answer:
"""
llm = CTransformers(model='/model/llama-2-7b-chat.ggmlv3.q2_K.bin',
model_type='llama',
config={'max_new_tokens': 256, 'temperature': 0})
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-MiniLM-L6-v2",
model_kwargs={'device': 'cpu'})
db = FAISS.load_local("faiss", embeddings)
retriever = db.as_retriever(search_kwargs={'k': 2})
prompt = PromptTemplate(
template=template,
input_variables=['context', 'question'])
qa_llm = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=retriever,
return_source_documents=True,
chain_type_kwargs={'prompt': prompt})
original_prompt = user_question
# Tokenize the prompt
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
tokenized_prompt = tokenizer.encode(original_prompt, return_tensors="pt")
# Convert tokenized prompt back to string for printing
tokenized_prompt_str = tokenizer.decode(tokenized_prompt[0], skip_special_tokens=True)
# print(f"Tokenized prompt: {tokenized_prompt_str}")
# Retrieve the answer using the QA model
output = qa_llm({'query': tokenized_prompt_str})
res = output["result"]
return {"response" : res}
except Exception as e:
print(e)
return {"response": "Something went wrong"}
</code></pre>
<p><b>Notice: the only difference is the try/except block.</b>
I also noticed that when it gives the right answer ("I don't know" for questions outside the PDF's context) it runs fast and stays within the token limit, but when it generates an answer to a question that is not in the PDF it exceeds the context length and takes more time.</p>
<pre><code>These are the token warnings:
Number of tokens (513) exceeded maximum context length (512).
Number of tokens (514) exceeded maximum context length (512).
Number of tokens (515) exceeded maximum context length (512).
Number of tokens (516) exceeded maximum context length (512).
Number of tokens (517) exceeded maximum context length (512).
Number of tokens (518) exceeded maximum context length (512).
Number of tokens (519) exceeded maximum context length (512).
the 519 in the last line keeps increasing, up to 703
</code></pre>
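<p>Those "exceeded maximum context length (512)" lines suggest the stuffed prompt (template + 2 retrieved chunks + question) overflows the model's context window. A rough sketch of trimming retrieved chunks before calling the chain — <code>approx_tokens</code> is only a whitespace-based approximation, not the model's real tokenizer:</p>

```python
def approx_tokens(text):
    # Whitespace split is only a rough proxy for the model tokenizer,
    # but it is enough to catch a prompt that grossly overflows.
    return len(text.split())

def fit_context(template, question, chunks, limit=512):
    """Drop retrieved chunks from the end until the stuffed prompt fits."""
    kept = list(chunks)
    while kept:
        prompt = template.format(context="\n".join(kept), question=question)
        if approx_tokens(prompt) <= limit:
            return prompt, kept
        kept.pop()  # too long: sacrifice the last (least relevant) chunk
    return template.format(context="", question=question), []
```

<p>Reducing <code>chunk_size</code> or the retriever's <code>k</code> has the same effect; the point is that the combined prompt must stay under the model's context length.</p>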
|
<python><machine-learning><pdf><artificial-intelligence><llama>
|
2023-11-30 07:14:31
| 0
| 303
|
Kanha Tomar
|
77,576,488
| 15,222,211
|
How to solve mypy error for @property with pydantic @computed_field?
|
<p>I need to decorate <code>@property</code> with the <code>@computed_field</code> from pydantic
(to automatically generate key-value pairs and include them in a FastAPI JSON Response).
In the following example, mypy displays an error.
If there is an error, how can I fix it?
If everything is correct, how do I disable this mypy message?</p>
<pre><code>from pydantic import BaseModel, computed_field
class MyClass(BaseModel):
value1: str
value2: str
@computed_field
@property
def joined_values(self) -> str:
return self.value1 + self.value2
obj = MyClass(value1="1", value2="2")
print(f"{obj.joined_values=}") # obj.joined_values='12'
</code></pre>
<p>mypy error:</p>
<pre><code>$ mypy .
stackperflow_pydantic_property.py:25: error: Decorators on top of @property are not supported [misc]
Found 1 error in 1 file (checked 2 source files)
</code></pre>
<p>versions:</p>
<pre><code>$ python -V
Python 3.10.11
$ pip list
mypy 1.7.1
pydantic 2.5.2
</code></pre>
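<p>One way to keep the decorator stack and silence the check is a targeted <code>type: ignore</code> on the <code>@computed_field</code> line (mypy cannot model decorators applied on top of <code>@property</code>; runtime behaviour is unchanged) — a sketch, not the only option:</p>

```python
from pydantic import BaseModel, computed_field

class MyClass(BaseModel):
    value1: str
    value2: str

    @computed_field  # type: ignore[misc]  # mypy can't model decorators over @property
    @property
    def joined_values(self) -> str:
        return self.value1 + self.value2

obj = MyClass(value1="1", value2="2")
```

<p>Newer mypy releases may report this under the <code>prop-decorator</code> error code instead of <code>misc</code>; the bracketed code should match whatever your mypy version prints.</p>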
|
<python><mypy><pydantic>
|
2023-11-30 07:05:48
| 3
| 814
|
pyjedy
|
77,576,384
| 298,847
|
Structuring a multi-package Python project with editable installs in Visual Studio Code?
|
<p>At our company we've split up our Python code base into multiple packages. To allow for a decent development workflow, we use editable installs.</p>
<p><strong>Setup</strong></p>
<ul>
<li><p>We have three packages, structure like so:</p>
<pre><code>.
├── api
│ └── pyproject.toml
├── db
│ └── pyproject.toml
└── core
└── pyproject.toml
</code></pre>
</li>
<li><p>The dependencies flow as follows: <code>api</code> -> <code>db</code> -> <code>core</code>.</p>
</li>
<li><p>We use a multi-root Visual Studio Code workspace, with each of the packages as a separate project folder.</p>
</li>
<li><p>We use poetry and virtual environments (one per project folder).</p>
</li>
</ul>
<p><strong>Issues</strong></p>
<p>We've had nothing but trouble:</p>
<ul>
<li>IntelliSense seems to work intermittently. Open one file and identifiers are clickable. Open another and they aren't. Reset the interpreter path (either at the workspace or project level) and something starts working while something else stops working.</li>
<li>Test discovery frequently fails for one project folder or another. Again issues are intermittent and seem to be triggered by "interesting" interactions around what interpreter gets used when.</li>
</ul>
<p><strong>Questions</strong></p>
<p>Which of the below approaches will mitigate or avoid the problems above?</p>
<ul>
<li>One virtual environment per project or one for the whole workspace?</li>
<li>Set the interpreter at the workspace-level or project-level? Both?</li>
<li>Multi-root workspace or single-root workspace?</li>
</ul>
|
<python><visual-studio-code><virtualenv><python-poetry>
|
2023-11-30 06:42:17
| 0
| 9,059
|
tibbe
|
77,576,380
| 15,513,073
|
How to do image reprojection by setting control points on a picture (any language)
|
<h3>description:</h3>
<ul>
<li>How can I recompute the coordinates of all points in a picture by setting the coordinates of 2 control points? Which library do I need to use?</li>
<li>I would prefer a <code>python</code> library, but solutions in any language are accepted.</li>
<li>Ideally I could set the coordinate system via a parameter.</li>
<li>I want to get the new coordinates of all points (pixels), as the example shows, while doing/changing nothing to the image itself.</li>
<li>Edit: sorry, I now know this is called <code>reprojection</code>, not <code>correction</code>.</li>
</ul>
<h3>example:</h3>
<ul>
<li>origin image:</li>
<li>each entry is a pixel's coordinates (position) in the original image.</li>
</ul>
<pre><code>[[(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)],
[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1)],
[(0, 2), (1, 2), (2, 2), (3, 2), (4, 2)],
[(0, 3), (1, 3), (2, 3), (3, 3), (4, 3)],
[(0, 4), (1, 4), (2, 4), (3, 4), (4, 4)]]
</code></pre>
<ul>
<li>example picture coordinate matrix:
<ul>
<li><code>(100,150)</code> and <code>(200,200)</code> represent the two points I set.</li>
</ul>
</li>
</ul>
<pre><code>[[(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)],
[(0, 1), (1, 1), (2, 1), (200, 200), (4, 1)],
[(0, 2), (1, 2), (2, 2), (3, 2), (4, 2)],
[(0, 3), (100, 150), (2, 3), (3, 3), (4, 3)],
[(0, 4), (1, 4), (2, 4), (3, 4), (4, 4)]]
</code></pre>
<ul>
<li>desired result:
<ul>
<li>Every pixel's old coordinates should change to new coordinates; this means the distance between two adjacent pixels changes.</li>
<li><code>(150,175)</code> is an example of a correct result after reprojection.</li>
<li>the image itself is not changed.</li>
</ul>
</li>
</ul>
<pre><code>[[(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)],
[(0, 1), (1, 1), (2, 1), (200, 200), (4, 1)],
[(0, 2), (1, 2), (150,175), (3, 2), (4, 2)],
[(0, 3), (100, 150), (2, 3), (3, 3), (4, 3)],
[(0, 4), (1, 4), (2, 4), (3, 4), (4, 4)]]
</code></pre>
<h3>what I have tried:</h3>
<ul>
<li>hand-written Python code:</li>
</ul>
<pre><code>def correction(n,m,set_point):
x_length = abs(set_point[1]["new_x"]-set_point[0]["new_x"])/abs(set_point[1]["old_x"]-set_point[0]["old_x"])
y_length = abs(set_point[1]["new_y"]-set_point[0]["new_y"])/abs(set_point[1]["old_y"]-set_point[0]["old_y"])
ans = [[[set_point[0]["new_x"]+(i-set_point[0]["old_x"])*x_length,
set_point[0]["new_y"]+(j-set_point[0]["old_y"])*y_length] for i in range(n)] for j in range(m)]
print(ans[::-1])
correction(5,5,[{"old_x":1,"old_y":1,"new_x":100,"new_y":150},{"old_x":3,"old_y":3,"new_x":200,"new_y":200}])
</code></pre>
<ul>
<li>result:</li>
</ul>
<pre><code>[[[50.0, 225.0], [100.0, 225.0], [150.0, 225.0], [200.0, 225.0], [250.0, 225.0]],
[[50.0, 200.0], [100.0, 200.0], [150.0, 200.0], [200.0, 200.0], [250.0, 200.0]],
[[50.0, 175.0], [100.0, 175.0], [150.0, 175.0], [200.0, 175.0], [250.0, 175.0]],
[[50.0, 150.0], [100.0, 150.0], [150.0, 150.0], [200.0, 150.0], [250.0, 150.0]],
[[50.0, 125.0], [100.0, 125.0], [150.0, 125.0], [200.0, 125.0], [250.0, 125.0]]]
</code></pre>
<h3>question:</h3>
<ol start="0">
<li>Is there any library or wheel that solves this problem?</li>
<li>How can I apply this algorithm to a <code>png</code> or <code>jpg</code> file to reproject the whole picture by setting the coordinates of 2 points?</li>
<li>So far I use a default coordinate system; how can I set the coordinate system by passing a parameter?</li>
<li>If I want to set 3 or more points to reproject the whole picture, how should I improve this algorithm?</li>
</ol>
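<p>For the last question (3 or more control points), a least-squares affine fit generalizes the two-point scaling above; a sketch using only NumPy (the sample control points in the usage are invented):</p>

```python
import numpy as np

def fit_affine(old_pts, new_pts):
    # Solve new = [x, y, 1] @ A in a least-squares sense: with exactly
    # 3 non-collinear control points this is an exact affine transform,
    # with more points it is the best fit.
    old = np.asarray(old_pts, float)
    new = np.asarray(new_pts, float)
    M = np.hstack([old, np.ones((len(old), 1))])      # (n, 3)
    A, *_ = np.linalg.lstsq(M, new, rcond=None)       # (3, 2)
    return A

def apply_affine(A, pts):
    # Map any (n, 2) array of old pixel coordinates to new coordinates.
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A
```

<p>With only 2 control points an affine transform is underdetermined, which is why the hand-written version above has to assume independent per-axis scaling.</p>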
|
<python><algorithm><image><tiff><gdal>
|
2023-11-30 06:41:28
| 1
| 2,546
|
leaf_soba
|
77,576,347
| 2,289,710
|
Beautifulsoup: find the location of a string in the soup
|
<p>I am trying to scrape data from a web page generated by a search engine. To do so, I am studying the structure of the returned document using a sample query and its output. The output contains a string "blah-blah-blah", which is the value of the field that I need to scrape. I have this simple code:</p>
<pre><code>soup = BeautifulSoup(response.text, 'html.parser')
soup.find_all(string=re.compile("blah-blah-blah"))
</code></pre>
<p>But the output is pretty useless.</p>
<pre><code>['blah-blah-blah', 'blah-blah-blah', 'blah-blah-blah']
</code></pre>
<p>It only tells me that there are three occurrences of this string.</p>
<p>How can I find the location of those strings? I mean the corresponding tags, elements, fields, etc., which will help me find this string without knowing its value. This will later help me scrape the value of the corresponding tag/attribute/whatever using <code>soup.select()</code> or <code>soup.find()</code>.</p>
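<p>A sketch of how the enclosing tags can be recovered: each match returned by <code>find_all(string=...)</code> is a <code>NavigableString</code>, so its <code>.parent</code> is the surrounding tag (the HTML below is an invented stand-in for the real response):</p>

```python
import re
from bs4 import BeautifulSoup

html = """
<html><body>
  <div class="result"><span id="title">blah-blah-blah</span></div>
  <p data-field="name">blah-blah-blah</p>
</body></html>
"""  # stand-in for the real search-engine response

soup = BeautifulSoup(html, "html.parser")

# Each match is a NavigableString; .parent is the enclosing Tag, whose
# name and attributes tell you what to pass to soup.select()/find() later.
locations = [
    (node.parent.name, dict(node.parent.attrs))
    for node in soup.find_all(string=re.compile("blah-blah-blah"))
]
print(locations)
```

<p>From there, chained <code>.parent</code> accesses (or <code>node.find_parents()</code>) walk further up if the immediate tag is not distinctive enough.</p>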
|
<python><web-scraping><beautifulsoup>
|
2023-11-30 06:33:26
| 1
| 3,894
|
freude
|
77,576,314
| 65,889
|
How to read a CSV in Python which is 99% UTF-8 encoded
|
<p>I have large CSV files (>100.000 rows, 50 columns) which are "mainly" UTF-8 encoded.</p>
<p>On one of them, let's call it <code>file.csv</code> I run</p>
<pre class="lang-none prettyprint-override"><code>$ chardetect file.csv
file.csv: utf-8 with confidence 0.99
</code></pre>
<p>However when I read the file in Python with a <code>csv.DictReader</code> instance I get a unicode error with a stack trace like:</p>
<pre class="lang-none prettyprint-override"><code> File "/path/to/python3.8/csv.py", line 111, in __next__
row = next(self.reader)
File "/path/to//python3.8/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 7801: invalid start byte
</code></pre>
<p>I tried other encodings, but the file is for sure "mainly" UTF-8, just with a few problems. How can I still read this file as a CSV in Python?</p>
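<p>One possible sketch: decode with <code>errors="replace"</code> so the few invalid bytes become U+FFFD instead of aborting the read (the sample bytes below are invented; with a file on disk the same idea is <code>open(path, encoding="utf-8", errors="replace")</code>):</p>

```python
import csv
import io

# Invented sample: valid UTF-8 except for the lone 0xAD continuation byte.
raw = b"name;city\nJos\xc3\xa9;Lyon\nBad\xadByte;Paris\n"

# errors="replace" turns each undecodable byte into U+FFFD instead of
# raising UnicodeDecodeError, so the other 99% of the file survives.
text = raw.decode("utf-8", errors="replace")
rows = list(csv.DictReader(io.StringIO(text), delimiter=";"))
```

<p><code>errors="ignore"</code> drops the bad bytes silently instead; <code>"replace"</code> keeps a visible marker, which makes the damaged cells easy to find afterwards.</p>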
|
<python><csv><unicode><character-encoding><decoding>
|
2023-11-30 06:25:03
| 0
| 10,804
|
halloleo
|
77,576,244
| 9,516,820
|
dropdown select showing up in views.py but not in the clean method in forms.py
|
<p>I have a django template as follows</p>
<pre><code><div class="container-fluid lineitem_form_container">
<div class="inner_holder">
<h2>Add Line Item</h2>
<form method="post" action="{% url 'Dashboard:lineitems' %}" class="col-md-6">
{% csrf_token %}
{{ form.non_field_errors }}
<div class="mb-3">
<label for="{{ form.name.id_for_label }}" class="form-label">Line Item Name:</label>
{{ form.name }}
</div>
<div class="mb-3">
<label for="{{ form.category.id_for_label }}" class="form-label">Category:</label>
{{ form.category }}
{{ form.new_category }}
</div>
<div class="mb-3">
<label for="{{ form.segment.id_for_label }}" class="form-label">Segment:</label>
{{ form.segment }}
{{ form.new_segment }}
</div>
<button type="submit" class="btn btn-primary">Submit</button>
</form>
</div>
<script>
// JavaScript code to toggle visibility of new category/segment fields
document.addEventListener('DOMContentLoaded', function () {
var categoryDropdown = document.querySelector('#id_category');
var newCategoryField = document.querySelector('#id_new_category');
var segmentDropdown = document.querySelector('#id_segment');
var newSegmentField = document.querySelector('#id_new_segment');
// Initial state
newCategoryField.style.display = 'none';
newSegmentField.style.display = 'none';
categoryDropdown.addEventListener('change', function () {
newCategoryField.style.display = categoryDropdown.value === 'new_category' ? 'block' :
'none';
});
segmentDropdown.addEventListener('change', function () {
newSegmentField.style.display = segmentDropdown.value === 'new_segment' ? 'block' :
'none';
});
});
</script>
</div>
</code></pre>
<p><b>This is my forms.py</b></p>
<pre><code>class CustomTextInput(TextInput):
def __init__(self, *args, **kwargs):
kwargs['attrs'] = {'class': 'form-control'}
super().__init__(*args, **kwargs)
class LineItemForm(forms.ModelForm):
new_category = forms.CharField(
max_length=255,
required=False,
label='New Category',
widget=CustomTextInput(attrs={'class': 'form-control', 'placeholder': 'Enter new category'}),
)
new_segment = forms.CharField(
max_length=255,
required=False,
label='New Segment',
widget=CustomTextInput(attrs={'class': 'form-control', 'placeholder': 'Enter new segment'}),
)
class Meta:
model = LineItem
fields = ['name', 'category', 'segment']
widgets = {
'name': forms.TextInput(attrs={'class': 'form-control'}),
'category': forms.Select(attrs={'class': 'form-select'}),
'segment': forms.Select(attrs={'class': 'form-select'}),
}
def __init__(self, user, *args, **kwargs):
super().__init__(*args, **kwargs)
# Set the queryset for the category and segment fields based on the user
self.fields['category'].queryset = Category.objects.filter(user=user)
self.fields['segment'].queryset = Segment.objects.filter(user=user)
# Set the choices for the category and segment fields based on the user's choices
self.fields['category'].choices = Category.CATEGORY_CHOICES + [('new_category', 'Add New')]
self.fields['segment'].choices = Segment.SEGMENT_CHOICES + [('new_segment', 'Add New')]
# Apply Bootstrap classes to the new category and segment fields
self.fields['new_category'].widget.attrs['class'] = 'form-control'
self.fields['new_category'].widget.attrs['placeholder'] = 'Enter new category'
self.fields['new_segment'].widget.attrs['class'] = 'form-control'
self.fields['new_segment'].widget.attrs['placeholder'] = 'Enter new segment'
def clean(self):
cleaned_data = super().clean()
print("forms.py: ", cleaned_data)
print(f"Before setting - category: {cleaned_data.get('category')}, segment: {cleaned_data.get('segment')}")
# Set the values for new category and new segment
cleaned_data['category'] = cleaned_data.get('new_category', cleaned_data.get('category'))
cleaned_data['segment'] = cleaned_data.get('new_segment', cleaned_data.get('segment'))
# Print values after setting them
print(f"After setting - category: {cleaned_data.get('category')}, segment: {cleaned_data.get('segment')}")
# Check if either an existing category or a new one is provided
if not cleaned_data.get('category'):
self.add_error('category', 'Please select an existing category or provide a new one.')
if not cleaned_data.get('segment'):
self.add_error('segment', 'Please select an existing segment or provide a new one.')
return cleaned_data
</code></pre>
<p><b>this is my models.py</b></p>
<pre><code>class Category(models.Model):
CATEGORY_CHOICES = [
('investments', 'Investments'),
('loans', 'Loans'),
('insurance', 'Insurance'),
('taxes', 'Taxes'),
]
name = models.CharField(max_length=255, choices=CATEGORY_CHOICES)
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
def __str__(self):
return self.name
class Segment(models.Model):
SEGMENT_CHOICES = [
('investments', 'Investments'),
('expenses', 'Expenses'),
]
name = models.CharField(max_length=255, choices=SEGMENT_CHOICES)
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
def __str__(self):
return self.name
class LineItem(models.Model):
name = models.CharField(max_length=255)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
segment = models.ForeignKey(Segment, on_delete=models.CASCADE)
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
date_of_creation = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.name
</code></pre>
<p><b>this is my views.py</b></p>
<pre><code>@login_required
def lineitems(request):
initial_data = {'category': 'default_category', 'segment': 'default_segment'}
print("request.POST: ", request.POST)
if request.method == 'POST':
form = LineItemForm(user=request.user, data=request.POST)
print("errors: ", form.errors)
if form.is_valid():
line_item = form.save(commit=False)
line_item.user = request.user
line_item.save()
return redirect('Dashboard:lineitems') # Redirect to a success page or another view
else:
form = LineItemForm(user=request.user, initial=initial_data)
return render(request, 'Dashboard/lineitems.html', {'form': form})
</code></pre>
<p>The idea is as follows</p>
<ol>
<li>There is a form with two dropdowns. Category and segment. There are default choices and the user can create new categories or segments if required.</li>
<li>Any category or segment the user creates is specific to that user and does not show for other users</li>
</ol>
<p>I print the <code>request.POST</code> QueryDict when the form is submitted and get the following result:</p>
<pre><code>request.POST: <QueryDict: {'csrfmiddlewaretoken': ['zG8My4ID5S5uWn6vYesF3xxpEmfXA8DANbR9goRtpLDuqfKUMcrM4QKthdrwsth'], 'name': ['Mutual Funds'], 'category': ['investments'], 'new_category': [''], 'segment': ['investments'], 'new_segment': ['']}>
</code></pre>
<p>You can see that the form data is being printed out correctly. But in the <code>clean</code> method in forms.py the data is missing:</p>
<pre><code>forms.py: {'name': 'Mutual Funds', 'new_category': '', 'new_segment': ''}
Before setting - category: None, segment: None
After setting - category: , segment:
</code></pre>
<p>It is interesting to note that only the dropdown values are missing. The line item name is being correctly printed.</p>
<p>I looked at a <a href="https://stackoverflow.com/questions/63165944/select-widget-value-appearing-in-post-but-not-cleaned-data">similar question</a> but it did not seem to solve my issue.</p>
<p>I am not sure what I am doing wrong. Any help is appreciated.<br>
Thank you</p>
|
<python><django><django-models><django-views><django-forms>
|
2023-11-30 06:09:27
| 0
| 972
|
Sashaank
|
77,576,160
| 990,207
|
GStreamer Python dynamic pipeline construction not working
|
<p>I have a gstreamer pipeline that currently works if I invoke <code>Gst.parse_launch</code>:</p>
<pre><code>rtspsrc tcp-timeout=<timeout> location=<location> is-live=true protocols=tcp name=mysrc "
! rtph264depay wait-for-keyframe=true request-keyframe=true "
! mpegtsmux name=mpegtsmux "
! multifilesink name=filesink next-file=max-duration max-file-duration=<duration> aggregate-gops=true post-messages=true location=<out_location>
</code></pre>
<p>I am trying to convert it to a dynamic pipeline like so:</p>
<pre class="lang-py prettyprint-override"><code>def build_pipeline(self) -> str:
video_pipeline = Gst.Pipeline.new("video_pipeline")
all_data["video_pipeline"] = video_pipeline
rtsp_source = Gst.ElementFactory.make('rtspsrc', 'mysrc')
rtsp_source.set_property(...
...
all_data["mysrc"] = rtsp_source
rtph264_depay = Gst.ElementFactory.make('rtph264depay', 'rtp_depay')
rtph264_depay.set_property(....
...
all_data["rtp_depay"] = rtph264_depay
mpeg_ts_mux = Gst.ElementFactory.make('mpegtsmux', 'mpeg_mux')
all_data[mpeg_mux] = mpeg_ts_mux
multi_file_sink = Gst.ElementFactory.make('multifilesink', 'filesink')
multi_file_sink.set_property(...
...
all_data["filesink"] = multi_file_sink
video_pipeline.add(rtsp_source)
video_pipeline.add(rtph264_depay)
video_pipeline.add(mpeg_ts_mux)
video_pipeline.add(multi_file_sink)
if not rtph264_depay.link(mpeg_ts_mux):
print("Failed to link depay to mux")
else:
print("Linked depay to mux")
if not mpeg_ts_mux.link(multi_file_sink):
print("Failed to link mux to filesink")
else:
print("Linked mux to filesink")
rtsp_source.connect("pad-added", VideoStreamer._on_pad_added_callback, all_pipeline_data)
return video_pipeline
</code></pre>
<p>I define my pad-added callback like so:</p>
<pre class="lang-py prettyprint-override"><code> @staticmethod
def _on_pad_added_callback(rtsp_source: Gst.Element, new_pad: Gst.Pad, *user_data) -> None:
def _check_if_video_pad(pad: Gst.Pad):
current_caps = pad.get_current_caps()
for cap_index in range(current_caps.get_size()):
current_structure = current_caps.get_structure(cap_index)
media_type = current_structure.get_string("media")
if media_type == "video":
return True
return False
if not new_pad.get_name().startswith("recv_rtp_src"):
logger.info(f"Ignoring pad with name {new_pad.get_name()}")
return
if new_pad.is_linked():
logger.info(f"Pad with name {new_pad.get_name()} is already linked")
return
# Right now I only care about grabbing video, in the future I want to differentiate video and audio pipelines
if not _check_if_video_pad(new_pad):
logger.info(f"Ignoring pad with name {new_pad.get_name()} as its not video")
return
rtp_depay_element: Gst.Element = user_data[0]["rtp_depay"]
depay_sink_pad: Gst.Pad = rtp_depay_element.get_static_pad("sink")
pad_link = new_pad.link(depay_sink_pad) # Returns <enum GST_PAD_LINK_OK of type Gst.PadLinkReturn>
</code></pre>
<p>Outside of this I do:</p>
<pre class="lang-py prettyprint-override"><code>class VideoStreamer(ABC, threading.Thread):
def __init__(...):
...
self._lock: Final = threading.Lock()
self._loop: GLib.MainLoop | None = None
...
def run(self) -> None:
pipeline = self.build_pipeline()
bus.add_signal_watch()
bus.connect("message", self.handle_message)
with self._lock:
pipeline.set_state(Gst.State.PLAYING)
self._loop = GLib.MainLoop()
self._loop.run()
def handle_message(self, message: Gst.Message) -> None:
if message.src.get_name() != "filesink":
return
...
</code></pre>
<p>The problem is that when I use <code>parse_launch</code> my code works fine: messages from the filesink element make it to <code>handle_message</code>.
Whether with <code>parse_launch</code> or with dynamic construction, there are 3 state transitions that I can observe with <code>Gst.debug_bin_to_dot_file</code>:</p>
<pre><code>NULL->READY
READY->PAUSED
PAUSED->PLAYING
</code></pre>
<p>After visualizing the pipelines I notice the following differences:</p>
<pre><code>Parse Launch:
GstRTSPSrc(GstRtpBin(GstRtpSession rtpsession0 -> GstRtpStorage rtpstorage0 -> GstRtpSsrcDemux rtpssrcdemux0 -> GstRtpJitterBuffer rtpjitterbuffer1 -> GstRtpPtDemux rtpptdemux1)) -> recv_rtp_src_0_... -> GstRtpH264Depay rtph264depay0
Dynamic Pipeline:
GstRTSPSrc(GstRtpBin(GstRtpSession rtpsession0 -> GstRtpStorage rtpstorage0 -> GstRtpSsrcDemux rtpssrcdemux0 -> GstRtpJitterBuffer rtpjitterbuffer1 -> GstRtpPtDemux rtpptdemux1))
</code></pre>
<p>It seems that in the dynamic version the <code>GstRtpPtDemux</code> element only gets a <code>sink</code> pad and no <code>src</code> pad for output.
With the dynamic construction I handle state-change messages and can verify that the pipeline starts and transitions from READY to PAUSED to PLAYING; however, I never get any messages from the file sink. Am I missing a link, or am I linking the pads incorrectly?</p>
|
<python><gstreamer>
|
2023-11-30 05:44:12
| 1
| 1,597
|
Niru
|
77,575,954
| 5,558,021
|
How to improve the speed of copying files in Python?
|
<p>I need to copy ~100,000 files; copying them one at a time is too slow compared to a recursive <code>cp -r</code> of the whole directory in a Unix terminal. Is there a way to copy all the files at once instead of calling <code>shutil.copy()</code> on each file in a loop?</p>
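<p>A sketch of the single-call approach: hand the whole tree to <code>shutil.copytree</code> (which on Python 3.8+ uses platform-specific fast-copy system calls where available) instead of looping over <code>shutil.copy()</code>; the tiny sample tree below stands in for the real 100,000 files:</p>

```python
import os
import shutil
import tempfile

# Tiny stand-in tree for the real ~100,000 files.
src = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(src, f"f{i}.txt"), "w") as f:
        f.write("data")

dst = os.path.join(tempfile.mkdtemp(), "copy")
# One recursive call instead of a Python-level loop over shutil.copy();
# copy_function=shutil.copy skips the extra metadata copy that copy2 does.
shutil.copytree(src, dst, copy_function=shutil.copy)
```

<p>If even that is too slow, shelling out to the system copier (e.g. <code>subprocess.run(["cp", "-r", src, dst])</code> on Unix) is a pragmatic fallback.</p>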
|
<python>
|
2023-11-30 04:45:06
| 2
| 1,383
|
Dmitry Sokolov
|
77,575,913
| 8,384,910
|
Use case for np.log2 vs math.log2
|
<p>Sometimes, authors use <a href="https://numpy.org/doc/stable/reference/generated/numpy.log2.html" rel="nofollow noreferrer"><code>np.log2</code></a> instead of <a href="https://docs.python.org/3/library/math.html#math.log2" rel="nofollow noreferrer"><code>math.log2</code></a>. For example, in <a href="https://github.com/eladrich/pixel2style2pixel/blob/5cfff385beb7b95fbce775662b48fcc80081928d/models/encoders/psp_encoders.py#L16" rel="nofollow noreferrer">this PyTorch code</a>:</p>
<pre class="lang-py prettyprint-override"><code>num_pools = int(np.log2(spatial))
</code></pre>
<p>(where <code>spatial</code> is a Python number)</p>
<p>Because <a href="https://docs.python.org/3/library/math.html#math.log2" rel="nofollow noreferrer"><code>math.log2</code></a> is <strike>a built-in</strike> an included battery, I don't see a reason for calling <a href="https://numpy.org/doc/stable/reference/generated/numpy.log2.html" rel="nofollow noreferrer"><code>np.log2</code></a> instead - is it maybe to follow convention, or because <a href="https://numpy.org/doc/stable/reference/generated/numpy.log2.html" rel="nofollow noreferrer"><code>np.log2</code></a> is thought to be faster?</p>
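<p>A quick sketch comparing the two on a plain Python scalar — the results are identical, and the timing difference comes from NumPy's per-call ufunc dispatch overhead:</p>

```python
import math
import timeit

import numpy as np

x = 1024
assert math.log2(x) == np.log2(x) == 10.0  # identical result for a plain int

# For one Python scalar, math.log2 skips NumPy's ufunc dispatch overhead;
# np.log2 pays off when applied to a whole array in a single call.
t_math = timeit.timeit(lambda: math.log2(1024), number=50_000)
t_np = timeit.timeit(lambda: np.log2(1024), number=50_000)
```

<p>So in snippets like the PyTorch one, <code>np.log2</code> on a Python number is most likely just convention (NumPy is already imported), not a performance choice.</p>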
|
<python><numpy><pytorch><conventions>
|
2023-11-30 04:32:41
| 1
| 9,414
|
Richie Bendall
|
77,575,842
| 1,188,943
|
Unable to install google-re2 on macos 14.1.1
|
<p>I want to install Apache Airflow, and the installation process needs google-re2. The error is:</p>
<pre><code> Building wheel for google-re2 (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [149 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-x86_64-cpython-311
copying re2.py -> build/lib.macosx-10.9-x86_64-cpython-311
running build_ext
building '_re2' extension
creating build/temp.macosx-10.9-x86_64-cpython-311
x86_64-apple-darwin13.4.0-clang -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /Users/afshin/anaconda3/include -fPIC -O2 -isystem /Users/afshin/anaconda3/include -march=core2 -mtune=haswell -mssse3 -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem /Users/afshin/anaconda3/include -D_FORTIFY_SOURCE=2 -isystem /Users/afshin/anaconda3/include -I/Users/afshin/anaconda3/lib/python3.11/site-packages/pybind11/include -I/Users/afshin/anaconda3/include/python3.11 -c _re2.cc -o build/temp.macosx-10.9-x86_64-cpython-311/_re2.o -fvisibility=hidden
_re2.cc:125:19: error: no viable conversion from 'absl::string_view' to 'const re2::StringPiece'
if (!self.Match(text, pos, endpos, anchor, groups.data(), groups.size())) {
^~~~
...
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for google-re2
Running setup.py clean for google-re2
</code></pre>
<p>I checked multiple solutions but none of them worked.
I have Python 3.11.5.</p>
|
<python><airflow><re2>
|
2023-11-30 04:12:18
| 1
| 1,035
|
Mahdi
|
77,575,542
| 10,200,497
|
keep the first row and change the rest of values to NaN in multiple columns
|
<p>This is my dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [
'a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b',
],
'b': [
-20, 20, 20, 20,-70, -70,-11, -100, -1, -1, -100, 100
],
'c': [
'f', 'f', 'f', 'f', 'f', 'x', 'x', 'k', 'k', 'k', 'k', 'k'
],
'x': [
'p', 'p', 'p', 'p', 'p', 'x', 'x', 'i', 'i', 'i', 'i', 'i'
],
}
)
</code></pre>
<p>And this is the output that I want:</p>
<pre><code> a b c x
0 a -20.0 f p
1 a NaN NaN p
2 a NaN NaN p
3 a NaN NaN p
4 a NaN NaN p
5 a NaN NaN x
6 b -11.0 x x
7 b NaN NaN i
8 b NaN NaN i
9 b NaN NaN i
10 b NaN NaN i
11 b NaN NaN i
</code></pre>
<p>For each group in column <code>a</code> I want to keep only the first row and change the rest of the values to <code>nan</code>. Note that in my real data I don't know the index of the columns. I may have hundreds of columns for which I need to keep the first row and change the rest to <code>nan</code>.</p>
<p>And this is what I have tried. I don't know if it is okay. Would I have to repeat this a hundred times if I had 100 columns?</p>
<pre><code>import numpy as np
import pandas as pd
def func(g):
g.iloc[1:, g.columns.get_loc('b')] = np.nan
g.iloc[1:, g.columns.get_loc('c')] = np.nan
return g
df = df.groupby('a', as_index=False).apply(func)
</code></pre>
<p>I used <code>get_loc</code> because I don't know the exact index of columns.</p>
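<p>A possible vectorized sketch, using <code>groupby().cumcount()</code> to mask everything after each group's first row in one pass; the column list below is only an assumption (everything except <code>a</code> and <code>x</code>), so build it however fits the real data:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": ["a"] * 6 + ["b"] * 6,
    "b": [-20, 20, 20, 20, -70, -70, -11, -100, -1, -1, -100, 100],
    "c": ["f", "f", "f", "f", "f", "x", "x", "k", "k", "k", "k", "k"],
    "x": ["p", "p", "p", "p", "p", "x", "x", "i", "i", "i", "i", "i"],
})

# Columns to blank out -- assumed here to be everything except the
# group key 'a' and the untouched column 'x':
cols = df.columns.difference(["a", "x"])

# cumcount() numbers the rows within each group, so "> 0" selects
# every row except the first of each group, with no per-column loop:
df.loc[df.groupby("a").cumcount() > 0, cols] = np.nan
print(df)
```

<p>This scales to any number of columns because the whole column list is assigned at once instead of one <code>get_loc</code> per column.</p>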
|
<python><pandas>
|
2023-11-30 02:32:31
| 2
| 2,679
|
AmirX
|
77,575,330
| 4,862,402
|
Dropping the key column of right data frame after merging
|
<p>I'm looking for a simple way to get rid of the key column of the right table after merging it with another data frame:</p>
<pre><code>import pandas as pd
users = pd.DataFrame({"id": [1,2,3], "name": ["Mark", "Elon", "Jeff"]})
orders = pd.DataFrame({"id": [1,2,3,4,5,6], "user_id": [2,3,2,1,1,3]})
orders_full = pd.merge(left=orders, right=users, how="left", left_on="user_id", right_on="id")
orders_full
</code></pre>
<p>This is the result:</p>
<p><a href="https://i.sstatic.net/hGCHf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hGCHf.png" alt="enter image description here" /></a></p>
<p>I guess there is a pythonic, elegant way to get better column names as a result (avoiding renaming and dropping each column manually).</p>
<p>good solution: id, user_id, name</p>
<p>even better: order_id, user_id, user_name</p>
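<p>One possible sketch for the "even better" naming: rename both <code>id</code> columns before the merge so the key is shared and no duplicate key column is created in the first place:</p>

```python
import pandas as pd

users = pd.DataFrame({"id": [1, 2, 3], "name": ["Mark", "Elon", "Jeff"]})
orders = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6], "user_id": [2, 3, 2, 1, 1, 3]})

# Renaming up front means merge joins on a shared "user_id" column,
# so there is nothing to drop afterwards:
orders_full = orders.rename(columns={"id": "order_id"}).merge(
    users.rename(columns={"id": "user_id", "name": "user_name"}),
    on="user_id",
    how="left",
)
print(orders_full.columns.tolist())  # ['order_id', 'user_id', 'user_name']
```

<p>When renaming isn't convenient, <code>pd.merge(..., suffixes=("", "_user"))</code> followed by dropping the right-hand key is the usual fallback.</p>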
|
<python><pandas><merge>
|
2023-11-30 01:06:01
| 3
| 1,161
|
Victor Mayrink
|
77,575,271
| 359,219
|
Why is a Python module not being found?
|
<p>In hypothesis_test.py, I have this source:</p>
<pre><code>from utils import collapse_contiguous_ranges, check_number_in_range
@given(st.lists(st.integers()))
def test_collapse_contiguous_ranges(range: list[int]):
collapsed = collapse_contiguous_ranges(range)
for i in range:
assert check_number_in_range(i, collapsed)
assert collapse_contiguous_ranges(range) == [range[0], range[-1]]
</code></pre>
<p>Here is my command and my error:</p>
<pre><code> python -u "/Users/pitosalas/mydev/public-131-samples/memsim/memsim/tests/hypothesis_test.py"
Traceback (most recent call last):
File "/Users/pitosalas/mydev/public-131-samples/memsim/memsim/tests/hypothesis_test.py", line 3, in <module>
from utils import collapse_contiguous_ranges, check_number_in_range
ModuleNotFoundError: No module named 'utils'
</code></pre>
<p>And here is my directory structure. I've been stuck here before in other apps. I always mess around and sometimes it starts working. What can I try next?</p>
<p><a href="https://i.sstatic.net/0Psn4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Psn4.png" alt="enter image description here" /></a></p>
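<p>A likely cause, sketched below: when a file is run directly as <code>python tests/hypothesis_test.py</code>, only the <code>tests/</code> directory lands on <code>sys.path</code>, not the project root where <code>utils.py</code> presumably lives (a guess from the screenshot). This self-contained demo simulates the situation with a stub module and shows the fix:</p>

```python
import os
import sys
import tempfile

# Python imports only from directories on sys.path.  Here we create a
# throwaway "project root" containing a stub utils.py that is NOT on
# the path yet:
project_root = tempfile.mkdtemp()
with open(os.path.join(project_root, "utils.py"), "w") as f:
    f.write("def collapse_contiguous_ranges(r):\n    return r\n")

# The fix is making the right directory importable before the import:
sys.path.insert(0, project_root)
from utils import collapse_contiguous_ranges

print(collapse_contiguous_ranges([1, 2, 3]))  # [1, 2, 3]
```

<p>In the real project the equivalent would be inserting the parent directory of <code>tests/</code> into <code>sys.path</code> at the top of the test file, or running the tests from the project root (e.g. via a properly configured pytest) so the root is already importable.</p>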
|
<python>
|
2023-11-30 00:46:50
| 0
| 11,040
|
pitosalas
|
77,575,238
| 14,256,643
|
python subprocess getting error when trying to install requirements.txt
|
<p>I am trying to develop automated Python scripts using <code>subprocess</code>. I am using a FastAPI/Uvicorn server to build a Django project automatically, but I get an internal 500 server error from FastAPI while installing requirements.txt for the Django project. I am using Python 3.11. My code:</p>
<pre><code>@app.post("/webhook")
async def webhook(request: Request):
try:
repo_directory = 'D:/web development/ScratchFb'
python_directory = 'D:/web development/ScratchFb/fbscratch/backendfb'
nextjs_directory = 'D:/web development/ScratchFb/fbscratch/fontendfb'
#clone git repo
clone_command = subprocess.run(['git', 'clone', 'https://github_url'], cwd=repo_directory,check=True)
# create virtual environment python
vertual_venv = subprocess.run(['python', '-m', 'venv', 'venv'], cwd=python_directory ,check=True)
# Activate the virtual environment based on the platform
if sys.platform == 'win32':
activate_venv = subprocess.run(['venv\\Scripts\\activate'], cwd=python_directory , shell=True, check=True)
else:
activate_venv = subprocess.run(['source', 'venv/bin/activate'], cwd=python_directory, shell=True, check=True)
#installing requirements.txt
install_req_txt = subprocess.run(['pip', 'install', '-r', 'requirements.txt'], cwd=python_directory,check=True)
#install npm
npm_i = subprocess.run(['npm', 'i'], cwd=nextjs_directory,check=True)
# Check the exit code of npm_i before proceeding to npm_build
if npm_i.returncode == 0:
# Run build command if npm install was successful
npm_build = subprocess.run(['npm', 'run', 'build'], cwd=nextjs_directory, check=True)
else:
print("Error: npm install failed.")
</code></pre>
<p>Here is the error:</p>
<pre><code>{
"error": "Command '['pip', 'install', '-r', 'requirements.txt']' returned non-zero exit status 1."
}
</code></pre>
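<p>One thing worth noting about the script above: <code>subprocess.run(['venv\\Scripts\\activate'], ...)</code> only activates the venv inside that throwaway shell, so the later bare <code>pip</code> call still runs the system pip. A sketch of calling the venv's own executables by path instead (the helper name is made up for illustration):</p>

```python
import os
import sys


def venv_executable(venv_dir: str, name: str) -> str:
    """Path of an executable (pip, python, ...) inside a virtualenv."""
    subdir = "Scripts" if sys.platform == "win32" else "bin"
    return os.path.join(venv_dir, subdir, name)


# Instead of "activating" the venv, invoke its interpreter directly,
# e.g. (sketch, not run here):
#   subprocess.run(
#       [venv_executable("venv", "python"), "-m", "pip",
#        "install", "-r", "requirements.txt"],
#       cwd=python_directory, check=True)
print(venv_executable("venv", "pip"))
```

<p>Capturing the failing pip call's stderr (e.g. <code>capture_output=True</code>) would also turn the opaque "exit status 1" into the real error message.</p>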
|
<python><python-3.x><django><subprocess><fastapi>
|
2023-11-30 00:31:27
| 0
| 1,647
|
boyenec
|
77,575,141
| 646,549
|
Is there a good algorithm to reconstruct a global ranked list from several partially ranked lists?
|
<p>I have a list of objects which are being reviewed by a team of judges. Each judge only sees a subset of the objects, and each judge ranks the objects from best to worst. Each object is ranked by at least two judges, and there is possibility of the judges to disagree (i.e., this is a noisy judging process). No judge sees all the objects.</p>
<p>Is there a good algorithm to compile the "best" global list of rankings given the collection of partial ranked lists from all the judges?</p>
<p>An example in Python (treat it as pseudocode):</p>
<pre class="lang-py prettyprint-override"><code># Let's say there are six things and we want to rank them.
# There are ... four judges, each of whom judges three things,
# so each thing gets judged twice.
items = ['a', 'b', 'c', 'd', 'e', 'f']
j1_rank = ['a', 'c', 'e']
j2_rank = ['b', 'd', 'f']
j3_rank = ['a', 'b', 'c']
j4_rank = ['d', 'e', 'f']
# these are ranked low to high
# the goal is - can we combine together ranks j1-j4 to reproduce a master ranked list
expected_ranked_list = ['a', 'b', 'c', 'd', 'e', 'f']
</code></pre>
<p>I've taken a look at rank aggregation algorithms, but most of the online materials I've found are very technical and/or mathematical or in scientific literature with much jargon; many of these are more about ranked-choice voting (e.g. for political candidates), which is not a similar problem to what I'm facing.</p>
<p>Edit: As pointed out by @JoshGordon and @SurajShourie, I believe another acceptable expected solution would be <code>['a', 'b', 'd', 'c', 'e', 'f']</code>.</p>
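<p>One simple baseline worth sketching (not the only option; Kemeny-Young or Bradley-Terry style models are the more principled choices for noisy judges): rank items by their average position across the judges who saw them.</p>

```python
from collections import defaultdict


def aggregate(partial_rankings):
    """Order items by mean position across the judges that ranked them."""
    positions = defaultdict(list)
    for ranking in partial_rankings:
        for pos, item in enumerate(ranking):
            positions[item].append(pos)
    # Sort by mean position; ties broken by item name for determinism.
    return sorted(positions,
                  key=lambda it: (sum(positions[it]) / len(positions[it]), it))


j1 = ['a', 'c', 'e']
j2 = ['b', 'd', 'f']
j3 = ['a', 'b', 'c']
j4 = ['d', 'e', 'f']
print(aggregate([j1, j2, j3, j4]))  # ['a', 'b', 'd', 'c', 'e', 'f']
```

<p>Note this reproduces the alternative answer mentioned in the edit. Averaging positions is biased when judges see subsets of very different quality, which is where the pairwise-comparison models become worth the extra machinery.</p>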
|
<python><list><ranking>
|
2023-11-29 23:58:01
| 2
| 309
|
tomr_stargazer
|
77,574,967
| 4,980,705
|
create multiple columns with df.apply that uses multi return function
|
<p>I am creating a new column with <code>df.apply</code> and a function, and I would like to use the same function to create another column in the dataframe. The function returns several values, from which I'm currently selecting one successfully. To avoid recomputing, I would like to take two of the return values and create two columns in the dataframe.</p>
<pre><code>df_RoadSegments["NewTime"] = df_RoadSegments.apply(lambda x: getrouteinfo(x["OriginCoordinates"], x["DestCoordinates"])[8], axis=1)
</code></pre>
<p>But I would like to have something like the line below. Is it possible?</p>
<pre><code>df_RoadSegments["NewTime"]["link"] = df_RoadSegments.apply(lambda x: getrouteinfo(x["OriginCoordinates"], x["DestCoordinates"])[8][9], axis=1)
</code></pre>
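<p>A sketch of the usual pattern: have the lambda return a <code>pd.Series</code>, which makes <code>apply</code> expand into one column per element, so both columns are filled from a single call per row. The <code>getrouteinfo</code> stand-in below is made up; only its multi-value return shape matters:</p>

```python
import pandas as pd

# Stand-in for the real getrouteinfo; pretend indices 8 and 9 of its
# return tuple are the travel time and the link:
def getrouteinfo(origin, dest):
    return tuple(range(10))

df_RoadSegments = pd.DataFrame({
    "OriginCoordinates": ["o1", "o2"],
    "DestCoordinates": ["d1", "d2"],
})

# Returning a Series from the lambda makes apply produce a DataFrame
# with one column per element, assignable to two columns at once:
df_RoadSegments[["NewTime", "link"]] = df_RoadSegments.apply(
    lambda x: pd.Series(getrouteinfo(x["OriginCoordinates"],
                                     x["DestCoordinates"])[8:10]),
    axis=1,
)
print(df_RoadSegments)
```
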
|
<python><pandas><pandas-apply>
|
2023-11-29 23:07:57
| 0
| 717
|
peetman
|
77,574,962
| 4,399,016
|
Converting Byte Literal to Pandas
|
<p>I have data that looks like this:</p>
<p>In byte-literal form:</p>
<pre><code>data
b'[{"Name":"USA Stocks","Code":"US","OperatingMIC":"XNAS, XNYS","Country":"USA","Currency":"USD","CountryISO2":"US","CountryISO3":"USA"},{"Name":"London Exchange","Code":"LSE","OperatingMIC":"XLON","Country":"UK","Currency":"GBP","CountryISO2":"GB","CountryISO3":"GBR"}]'
</code></pre>
<p>I converted it:</p>
<pre><code>data_decode = data.decode("utf-8")
</code></pre>
<p>I am trying to get a pandas data frame from this.</p>
<pre><code>df = pd.DataFrame((data_decode))
</code></pre>
<blockquote>
<p>ValueError: DataFrame constructor not properly called!</p>
</blockquote>
<pre><code>df = pd.DataFrame(eval(data_decode))
</code></pre>
<blockquote>
<p>NameError: name 'null' is not defined</p>
</blockquote>
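<p>The <code>NameError</code> hints the payload is JSON containing <code>null</code>, which <code>eval</code> doesn't know about. A sketch that parses it as JSON instead:</p>

```python
import json

import pandas as pd

data = b'[{"Name":"USA Stocks","Code":"US","OperatingMIC":"XNAS, XNYS","Country":"USA","Currency":"USD","CountryISO2":"US","CountryISO3":"USA"},{"Name":"London Exchange","Code":"LSE","OperatingMIC":"XLON","Country":"UK","Currency":"GBP","CountryISO2":"GB","CountryISO3":"GBR"}]'

# json.loads accepts bytes directly (Python >= 3.6) and understands
# null/true/false, unlike eval():
df = pd.DataFrame(json.loads(data))
print(df[["Name", "Code"]])
```

<p>Wrapping the bytes in <code>io.BytesIO</code> and passing that to <code>pd.read_json</code> should work too.</p>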
|
<python><pandas>
|
2023-11-29 23:06:46
| 1
| 680
|
prashanth manohar
|
77,574,902
| 192,221
|
Uninstall package with pip and uninstall the wheels built
|
<p>If I <code>pip uninstall</code> a package, in this case llama-cpp-python, the whl file that it built upon installation is not removed. It doesn't make sense to me why that would be left hanging around. Why is this not done by default, and why is there no option to tell pip to remove wheels? It would make sense to me that if you're uninstalling, you don't want an old wheel sitting on the filesystem.</p>
<p>I know that this can be worked around by adding the --no-cache-dir when reinstalling, e.g.:</p>
<pre><code>pip install --no-cache-dir llama-cpp-python
</code></pre>
<p>Does pip have logic to reinstall wheels automatically when a different version of the package is being installed?</p>
<p>I had this problem because I wanted to change the build-related environment variables before installing the package. I'm on linux if that's relevant.</p>
|
<python><python-3.x><pip><python-wheel>
|
2023-11-29 22:49:25
| 0
| 6,017
|
kristianp
|
77,574,896
| 10,621,555
|
py -m ensurepip doesn't install the latest version of pip (instead of 23.3, it installed 21.1)
|
<p>I am stuck here: I have Python 3.9 installed, but it came with pip 21.1.3. I would like to upgrade pip, but it keeps saying version 21.1.3 is the latest, which is not true. I uninstalled pip and then reinstalled it using <code>py -m ensurepip</code>, but it still installed pip 21.1.3 for me, not the latest 23.3.</p>
<p>Has anyone encountered the same problem?</p>
<p>cheers!</p>
<p><a href="https://i.sstatic.net/tdwyh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tdwyh.png" alt="enter image description here" /></a></p>
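<p>This is likely expected behaviour rather than a bug: <code>ensurepip</code> only installs the pip wheel bundled with that particular CPython build, so it can never produce a newer pip than the one it shipped with. The bundled version can be inspected:</p>

```python
import ensurepip

# ensurepip never contacts PyPI; it unpacks the pip wheel bundled
# inside this CPython build, so this is the newest version it can give:
print(ensurepip.version())
```

<p>Getting a genuinely newer pip needs <code>py -m pip install --upgrade pip</code> with working access to PyPI; if that still reports 21.1.3 as the latest, a pinned index URL or proxy restriction is a plausible suspect.</p>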
|
<python><pip><version><ensurepip>
|
2023-11-29 22:47:50
| 0
| 707
|
Hao Xu
|
77,574,863
| 4,038,747
|
Can these sorts of if/else statements be boiled down with a walrus operator?
|
<p>I have read up on blogs and tutorials about the <code>:=</code> operator, but I am still trying to wrap my head around <em>when</em> to use it. I understand its purpose is to improve maintainability, avoid potential code duplication, and allow assignments in comprehensions, but I am wondering if it can be used to replace something like:</p>
<pre><code>n = 2
for m in [1,2,3,-9,5]:
if m > n:
n = m
</code></pre>
<p>Yes, I know I can sort the list. This question is less about algorithm discussion and more about how to identify when/where to use the <code>:=</code> operator.</p>
|
<python><python-assignment-expression>
|
2023-11-29 22:38:59
| 1
| 1,175
|
lollerskates
|
77,574,748
| 3,083,138
|
What is the correct technique for incorporating access to large machine learning data sets in a Ray Tune Tuner() object?
|
<p>I'm attempting to use Ray Tune 2.8.0 to optimize the hyperparameters of a pytorch neural network model--a fairly standard use case as far as I can understand. Pytorch is distributed with several standard prepackaged data sets that can be loaded to facilitate user practice and learning exercises. In the pytorch processing model, the data loading procedure seems to be subdivided into two main steps: an initial step creates sort of a "handle" object or reference to a local copy of the data set (additionally downloading from an external URL if a local copy does not yet exist), and a second step creates a <code>DataLoader()</code> object which is basically a python iterator that cycles through the training and test samples in batches. (That <code>batch_size</code>, coincidentally, is one of the hyperparameters that I would like to tune.)</p>
<p>I've encountered a situation in which the pytorch model for data loading appears to conflict (or at least, it interacts rather poorly) with the Ray Tune model for managing data access across various workers and nodes.</p>
<p>The example code below implements two required items that are needed to define a Ray Tune <code>Tuner()</code> object: a trivial "trainable" function and a hyperparameter search space. For two other required items that are mentioned in the <a href="https://docs.ray.io/en/latest/tune/getting-started.html" rel="nofollow noreferrer">Getting Started</a> documentation (search algorithm and scheduler), I simply accept the standard system defaults.</p>
<p>Additionally, I create a <code>dict()</code> of pytorch data set handle objects that I mentioned above, and I attempt to pass that <code>dict()</code> object into the "trainable" function as well (because of course the trainable function in any realistic use case would need access to that).</p>
<pre class="lang-py prettyprint-override"><code>from torchvision import datasets
from torchvision.transforms import ToTensor
from functools import partial
from ray import train, tune
import numpy as np
dthan = dict()
for splitid, trnopt in zip(['train', 'test'], [True, False]):
# Within the pytorch framework, I think this creates some kind of initial
# "handle" which can be used to facilitate further data access
dthan[splitid] = datasets.FashionMNIST(root="data", train=trnopt,
download=True, transform=ToTensor())
# Ray tune "trainable" function, to be passed in to the Tuner()
def dummytrain(config, data_handle):
# In a real hyperparameter tuning exercise, the data_handle would be
# used to created a batched pytorch DataLoader(), and this would be
# subsequently used in batched iterative cycles to train a neural
# network model, ultimately resulting in some "loss" value at the
# end of the training schedule. But for simplicity here we ignore
# all of that and just report an arbitrary random final output value
# to ray tune.
train.report({'loss': np.random.uniform()})
# Ray tune hyperparameter search space
config = {
"lr": tune.loguniform(1e-7, 1e-1),
"batch_size": tune.choice([2, 4, 8, 16, 32, 64, 128])
}
# According to the ray tune documentation, in addition to a "trainable"
# and a search space, we can also define a search algorithm and a scheduler,
# but to keep things simple, here we'll just accept the defaults
tuner = tune.Tuner(
# Create a wrapped version of dummytrain with one of the input parameters
# (data_handle) already pre-defined, so that ray tune only needs to pass
# in the config value for each finalized instance of this function
partial(dummytrain, data_handle=dthan),
# If you comment out the above line and uncomment this one, then ray tune
# behaves as expected, reporting random numbers for the loss value
# defined in the trainable function
#partial(dummytrain, data_handle=None),
# Take 10 draws from the search space
tune_config=tune.TuneConfig(num_samples=10),
# Pass in the search space
param_space=config
)
# Attempt to tune the hyperparameters
tuner.fit()
</code></pre>
<p>After running the code, I obtain a cascade of exceptions, the second one apparently raised while attempting to handle the first:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.11.4 (v3.11.4:d2340ef257, Jun 6 2023, 19:15:51) [Clang 13.0.0 (clang-1300.0.29.30)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.14.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: %run testray.py
2023-11-29 16:38:23,308 INFO worker.py:1673 -- Started a local Ray instance.
2023-11-29 16:38:25,904 INFO tune.py:220 -- Initializing Ray automatically. For cluster usage or custom Ray initialization, call `ray.init(...)` before `Tuner(...)`.
2023-11-29 16:38:25,905 INFO tune.py:595 -- [output] This will use the new output engine with verbosity 1. To disable the new output and use the legacy output engine, set the environment variable RAY_AIR_NEW_OUTPUT=0. For more information, please see https://github.com/ray-project/ray/issues/36949
╭───────────────────────────────────────────────────────────────────╮
│ Configuration for experiment dummytrain_2023-11-29_16-38-21 │
├───────────────────────────────────────────────────────────────────┤
│ Search algorithm BasicVariantGenerator │
│ Scheduler FIFOScheduler │
│ Number of trials 10 │
╰───────────────────────────────────────────────────────────────────╯
View detailed results here: /Users/stachyra/ray_results/dummytrain_2023-11-29_16-38-21
To visualize your results with TensorBoard, run: `tensorboard --logdir /Users/stachyra/ray_results/dummytrain_2023-11-29_16-38-21`
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/tune.py:1007, in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, storage_path, storage_filesystem, search_alg, scheduler, checkpoint_config, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, reuse_actors, raise_on_failed_trial, callbacks, max_concurrent_trials, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, chdir_to_trial_dir, local_dir, _experiment_checkpoint_dir, _remote, _remote_string_queue, _entrypoint)
1006 while not runner.is_finished() and not experiment_interrupted_event.is_set():
-> 1007 runner.step()
1008 if has_verbosity(Verbosity.V1_EXPERIMENT):
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py:731, in TuneController.step(self)
730 # Handle one event
--> 731 if not self._actor_manager.next(timeout=0.1):
732 # If there are no actors running, warn about potentially
733 # insufficient resources
734 if not self._actor_manager.num_live_actors:
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/air/execution/_internal/actor_manager.py:191, in RayActorManager.next(self, timeout)
190 # We always try to start actors as this won't trigger an event callback
--> 191 self._try_start_actors()
193 # If an actor was killed, this was our event, and we return.
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/air/execution/_internal/actor_manager.py:361, in RayActorManager._try_start_actors(self, max_actors)
360 # Start Ray actor
--> 361 actor = remote_actor_cls.remote(**kwargs)
363 # Track
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/actor.py:686, in ActorClass.options.<locals>.ActorOptionWrapper.remote(self, *args, **kwargs)
685 def remote(self, *args, **kwargs):
--> 686 return actor_cls._remote(args=args, kwargs=kwargs, **updated_options)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/_private/auto_init_hook.py:24, in wrap_auto_init.<locals>.auto_init_wrapper(*args, **kwargs)
23 auto_init_ray()
---> 24 return fn(*args, **kwargs)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/util/tracing/tracing_helper.py:388, in _tracing_actor_creation.<locals>._invocation_actor_class_remote_span(self, args, kwargs, *_args, **_kwargs)
387 assert "_ray_trace_ctx" not in kwargs
--> 388 return method(self, args, kwargs, *_args, **_kwargs)
390 class_name = self.__ray_metadata__.class_name
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/actor.py:889, in ActorClass._remote(self, args, kwargs, **actor_options)
885 # After serialize / deserialize modified class, the __module__
886 # of modified class will be ray.cloudpickle.cloudpickle.
887 # So, here pass actor_creation_function_descriptor to make
888 # sure export actor class correct.
--> 889 worker.function_actor_manager.export_actor_class(
890 meta.modified_class,
891 meta.actor_creation_function_descriptor,
892 meta.method_meta.methods.keys(),
893 )
895 resources = ray._private.utils.resources_from_ray_options(actor_options)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/_private/function_manager.py:531, in FunctionActorManager.export_actor_class(self, Class, actor_creation_function_descriptor, actor_method_names)
522 actor_class_info = {
523 "class_name": actor_creation_function_descriptor.class_name.split(".")[-1],
524 "module": actor_creation_function_descriptor.module_name,
(...)
528 "actor_method_names": json.dumps(list(actor_method_names)),
529 }
--> 531 check_oversized_function(
532 actor_class_info["class"],
533 actor_class_info["class_name"],
534 "actor",
535 self._worker,
536 )
538 self._worker.gcs_client.internal_kv_put(
539 key, pickle.dumps(actor_class_info), True, KV_NAMESPACE_FUNCTION_TABLE
540 )
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/_private/utils.py:755, in check_oversized_function(pickled, name, obj_type, worker)
744 error = (
745 "The {} {} is too large ({} MiB > FUNCTION_SIZE_ERROR_THRESHOLD={}"
746 " MiB). Check that its definition is not implicitly capturing a "
(...)
753 ray_constants.FUNCTION_SIZE_ERROR_THRESHOLD // (1024 * 1024),
754 )
--> 755 raise ValueError(error)
ValueError: The actor ImplicitFunc is too large (105 MiB > FUNCTION_SIZE_ERROR_THRESHOLD=95 MiB). Check that its definition is not implicitly capturing a large array or other object in scope. Tip: use ray.put() to put large objects in the Ray object store.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
File ~/deep_learning/neuralnetwork_exercises/testray.py:49
34 tuner = tune.Tuner(
35 # Create a wrapped version of dummytrain with my kwargs (data_handle)
36 # already defined, so that ray tune only needs to pass in the config
(...)
46 param_space=config
47 )
48 # Attempt to tune the hyperparameters
---> 49 tuner.fit()
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/tuner.py:364, in Tuner.fit(self)
362 if not self._is_ray_client:
363 try:
--> 364 return self._local_tuner.fit()
365 except TuneError as e:
366 raise TuneError(
367 _TUNER_FAILED_MSG.format(
368 path=self._local_tuner.get_experiment_checkpoint_dir()
369 )
370 ) from e
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/impl/tuner_internal.py:526, in TunerInternal.fit(self)
524 param_space = copy.deepcopy(self.param_space)
525 if not self._is_restored:
--> 526 analysis = self._fit_internal(trainable, param_space)
527 else:
528 analysis = self._fit_resume(trainable, param_space)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/impl/tuner_internal.py:645, in TunerInternal._fit_internal(self, trainable, param_space)
632 """Fitting for a fresh Tuner."""
633 args = {
634 **self._get_tune_run_arguments(trainable),
635 **dict(
(...)
643 **self._tuner_kwargs,
644 }
--> 645 analysis = run(
646 **args,
647 )
648 self.clear_remote_string_queue()
649 return analysis
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/tune.py:1014, in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, storage_path, storage_filesystem, search_alg, scheduler, checkpoint_config, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, reuse_actors, raise_on_failed_trial, callbacks, max_concurrent_trials, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, chdir_to_trial_dir, local_dir, _experiment_checkpoint_dir, _remote, _remote_string_queue, _entrypoint)
1012 _report_air_progress(runner, air_progress_reporter)
1013 except Exception:
-> 1014 runner.cleanup()
1015 raise
1017 tune_taken = time.time() - tune_start
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py:2025, in TuneController.cleanup(self)
2023 def cleanup(self):
2024 """Cleanup trials and callbacks."""
-> 2025 self._cleanup_trials()
2026 self.end_experiment_callbacks()
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py:845, in TuneController._cleanup_trials(self)
840 trial = self._actor_to_trial[tracked_actor]
841 logger.debug(
842 f"Scheduling trial stop at end of experiment (trial {trial}): "
843 f"{tracked_actor}"
844 )
--> 845 self._schedule_trial_stop(trial)
847 # Clean up cached actors now
848 self._cleanup_cached_actors(force_all=True)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py:1455, in TuneController._schedule_trial_stop(self, trial, exception)
1451 self._actor_to_trial.pop(tracked_actor)
1453 trial.set_ray_actor(None)
-> 1455 self._remove_actor(tracked_actor=tracked_actor)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py:864, in TuneController._remove_actor(self, tracked_actor)
863 def _remove_actor(self, tracked_actor: TrackedActor):
--> 864 stop_future = self._actor_manager.schedule_actor_task(
865 tracked_actor, "stop", _return_future=True
866 )
867 now = time.monotonic()
869 if self._actor_manager.remove_actor(
870 tracked_actor, kill=False, stop_future=stop_future
871 ):
872 # If the actor was previously alive, track
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/ray/air/execution/_internal/actor_manager.py:725, in RayActorManager.schedule_actor_task(self, tracked_actor, method_name, args, kwargs, on_result, on_error, _return_future)
722 if tracked_actor not in self._live_actors_to_ray_actors_resources:
723 # Actor is not started, yet
724 if tracked_actor not in self._pending_actors_to_attrs:
--> 725 raise ValueError(
726 f"Tracked actor is not managed by this event manager: "
727 f"{tracked_actor}"
728 )
730 # Cache tasks for future execution
731 self._pending_actors_to_enqueued_actor_tasks[tracked_actor].append(
732 (tracked_actor_task, method_name, args, kwargs)
733 )
ValueError: Tracked actor is not managed by this event manager: <TrackedActor 327435863944350007128109305967424045307>
In [2]:
</code></pre>
<p>Assuming the first exception is actually the relevant one, it gives some kind of error message that seems to imply my technique for passing data access into the trainable function resulted in a function instance that took up too much memory:</p>
<pre class="lang-py prettyprint-override"><code>ValueError: The actor ImplicitFunc is too large (105 MiB > FUNCTION_SIZE_ERROR_THRESHOLD=95 MiB). Check that its definition is not implicitly capturing a large array or other object in scope. Tip: use ray.put() to put large objects in the Ray object store.
</code></pre>
<p><strong>My question:</strong> what is the preferred technique for incorporating data access into my Ray Tune trainable function? The error message seems to be hinting that I should use some other function called <code>ray.put()</code> to solve this, however I was unable to find examples in the documentation showing how this would typically work in practice in constructing a <code>Tuner()</code> instance. What are the key "rules" that I should bear in mind when passing data into Ray and across different modules in Ray, in order to avoid these types of problems in the future?</p>
<p>As an additional point of caution, I note the Ray Core documentation contains a <a href="https://docs.ray.io/en/latest/ray-core/objects/serialization.html#serialization-guide" rel="nofollow noreferrer">section</a> mentioning that data transferred between Ray workers and nodes must be serializeable by Python Pickle. Based on the error messages that Ray Tune is issuing so far, I'm unable to tell yet whether my code example above runs afoul of this requirement. However, if there are special techniques for data communication in Ray that would either sidestep or work around this requirement (such as having workers and nodes each load their own copies of the data separately from disk) that would be helpful to highlight in any answer to this question.</p>
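<p>An untested sketch of what the error's hint usually translates to in Ray Tune: <code>tune.with_parameters</code>, which stores large arguments in the Ray object store (effectively calling <code>ray.put()</code> for you) and hands each worker an object reference instead of baking the dataset into the pickled trainable:</p>

```
# Untested sketch (replacing functools.partial): tune.with_parameters
# puts large keyword arguments into the object store via ray.put() and
# passes workers an ObjectRef, avoiding the >95 MiB pickled closure.
tuner = tune.Tuner(
    tune.with_parameters(dummytrain, data_handle=dthan),
    tune_config=tune.TuneConfig(num_samples=10),
    param_space=config,
)
tuner.fit()
```

<p>The alternative that sidesteps serialization entirely is to pass only the dataset folder path into the trainable and have each worker construct its own <code>datasets.FashionMNIST(...)</code> handle locally, which also avoids the pickling requirement mentioned at the end of the question.</p>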
|
<python><pytorch><distributed-computing><ray><pytorch-dataloader>
|
2023-11-29 22:06:42
| 0
| 4,613
|
stachyra
|
77,574,637
| 1,580,469
|
Find number with least number of signifcant bits in an interval
|
<p>Given a lower and an upper bound, what is the number within these boundaries with the minimal number of significant bits?
The number of significant bits is the span between the most and least significant set bits (MSB - LSB + 1, as computed below).</p>
<p>Examples:</p>
<pre><code>def sig_bits(x: int) -> int:
"""Count the span of significant bits in an integer."""
assert 0 < x < 2**64
as_str = "{:064b}".format(x)
msb, lsb = as_str.index("1"), len(as_str) - as_str.rindex("1") - 1
return msb - lsb + 1
def min_sig(a: int, b: int) -> int:
assert a <= b
return min((sig_bits(i), i) for i in range(a, b+1))[1]
min_sig(5, 10) = 8 # only 1 significant bit: bit 4
min_sig(5, 7) = 6 # bits 2 and 3 are significant
</code></pre>
<p>Is there a more efficient way?</p>
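<p>A possible O(popcount) sketch: repeatedly clear the lowest set bit of <code>b</code> (<code>x & (x - 1)</code>) while the result stays at or above <code>a</code>. That maximizes the trailing zeros of the result; since every number in the range shares <code>b</code>'s MSB whenever <code>msb(a) == msb(b)</code> (and a lone power of two wins otherwise), maximizing the LSB minimizes the span. It may return a different, equally minimal witness than the brute force when several numbers tie:</p>

```python
def min_sig_fast(a: int, b: int) -> int:
    """A number in [a, b] minimizing the MSB-LSB span (requires a >= 1)."""
    assert 1 <= a <= b
    x = b
    # x & (x - 1) clears the lowest set bit; keep clearing while we
    # stay inside the interval.  The result is b rounded down to the
    # largest power-of-two alignment that still fits in [a, b].
    while (y := x & (x - 1)) >= a:
        x = y
    return x


assert min_sig_fast(5, 10) == 8   # span 1
assert min_sig_fast(5, 7) == 6    # span 2
```
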
|
<python><bit-manipulation><intervals>
|
2023-11-29 21:40:39
| 2
| 396
|
Christian
|
77,574,533
| 10,620,003
|
delete the units from an order from customers based on another table
|
<p>I have three dfs related to customer orders, and some preprocessing steps for them. In one df, the <strong>gramaz</strong> df, I define some units; if any of them appears in an order, it should be deleted. For example, if we have pepper in the order, we should change it to:</p>
<pre><code>pepper kg--> pepper
</code></pre>
<p>But sometimes an order has a number right before the unit.</p>
<pre><code>pepper 100 kg--> pepper
</code></pre>
<p>So, I want to remove the number that comes right before a unit. (The order contains other numbers too; I don't want to delete those, only the number immediately before a unit.)
Here are the dfs and the output I want.</p>
<pre><code>gramaz = pd.DataFrame({'unit':['cc', 'lit','gr', 'kg' ]})
gramaz
unit
0 cc
1 lit
2 gr
3 kg
order = pd.DataFrame({'order':['100 glass','8 sodium 2 gr', 'br kk cc','mgk 100 lit','red pepper', 'black pepper', 'a 10 kg' ]})
order
0 100 glass
1 8 sodium 2 gr
2 br kk cc
3 mgk 100 lit
4 red pepper
5 black pepper
6 a 10 kg
the output:
order
0 100 glass
1 8 sodium
2 br kk
3 mgk
4 red pepper
5 black pepper
6 a
</code></pre>
<p>I tried using ChatGPT; I can remove the units, but I don't know how to remove the numbers!</p>
<p>My dfs are on the order of 10k rows. Thanks.</p>
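<p>A regex sketch (assuming, as in all the examples, the unit only ever appears at the end of the string): match an optional number plus one of the units, anchored at the end, so numbers elsewhere in the order are untouched:</p>

```python
import re

import pandas as pd

gramaz = pd.DataFrame({"unit": ["cc", "lit", "gr", "kg"]})
order = pd.DataFrame({"order": ["100 glass", "8 sodium 2 gr", "br kk cc",
                                "mgk 100 lit", "red pepper", "black pepper",
                                "a 10 kg"]})

# Optional number + one of the (escaped) units, anchored at the end
# of the string; numbers elsewhere in the order survive:
pattern = (r"(?:\s+\d+)?\s+(?:"
           + "|".join(map(re.escape, gramaz["unit"]))
           + r")$")
order["order"] = order["order"].str.replace(pattern, "", regex=True)
print(order["order"].tolist())
# ['100 glass', '8 sodium', 'br kk', 'mgk', 'red pepper', 'black pepper', 'a']
```

<p>If a unit can also occur mid-string in the real data, the <code>$</code> anchor would need to be replaced by a word boundary, at the cost of more false-positive risk.</p>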
|
<python><pandas><dataframe>
|
2023-11-29 21:14:03
| 1
| 730
|
Sadcow
|
77,574,466
| 159,072
|
Why is this AutoKeras NAS failing?
|
<p>I am using</p>
<ol>
<li>nVidia GeForce GTX 780 (Kepler)</li>
<li>Driver Version: 470.223.02</li>
<li>CUDA Toolkit v11.4.0</li>
<li>cuDNN v8.2.4</li>
<li>TensorFlow and Keras v2.8.0</li>
<li>AutoKeras v1.0.17</li>
<li>Ubuntu 20.04</li>
</ol>
<p>=======================</p>
<p>I have two directories, <code>train_data_npy</code> and <code>valid_data_npy</code> where there are 3013 and 1506 <code>*.npy</code> files, respectively.</p>
<p>Each <code>*.npy</code> file has 12 columns of float types, of which the first nine columns are features and the last three columns are one-hot-encoded labels of three classes.</p>
<p>The following Python script's task is to load those <code>*.npy</code> files in chunks so that memory is not exhausted while searching for a neural network model.</p>
<p>However, the script is failing.</p>
<p>What exactly is the issue with the given script?</p>
<p>Why is the script failing?</p>
<p>Or, is it not about the script but rather about the installation issues of CUDA, TF, or AutoKeras?</p>
<pre><code># File: cnn_search_by_chunk.py

import numpy as np
import tensorflow as tf
import os
import autokeras as ak

N_FEATURES = 9
BATCH_SIZE = 100


def get_data_generator(folder_path, batch_size, n_features):
    """Get a generator returning batches of data from .npy files in the specified folder.

    The shape of the features is (batch_size, n_features).
    """
    def data_generator():
        files = os.listdir(folder_path)
        npy_files = [f for f in files if f.endswith('.npy')]
        for npy_file in npy_files:
            data = np.load(os.path.join(folder_path, npy_file))
            x = data[:, :n_features]
            y = data[:, n_features:]
            y = np.argmax(y, axis=1)  # Convert one-hot-encoded labels back to integers
            for i in range(0, len(x), batch_size):
                yield x[i:i+batch_size], y[i:i+batch_size]
    return data_generator


train_data_folder = '/home/my_user_name/original_data/train_data_npy'
validation_data_folder = '/home/my_user_name/original_data/valid_data_npy'

train_dataset = tf.data.Dataset.from_generator(
    get_data_generator(train_data_folder, BATCH_SIZE, N_FEATURES),
    output_signature=(
        tf.TensorSpec(shape=(None, N_FEATURES), dtype=tf.float32),
        tf.TensorSpec(shape=(None,), dtype=tf.int32)  # Labels are now 1D integers
    )
)

validation_dataset = tf.data.Dataset.from_generator(
    get_data_generator(validation_data_folder, BATCH_SIZE, N_FEATURES),
    output_signature=(
        tf.TensorSpec(shape=(None, N_FEATURES), dtype=tf.float32),
        tf.TensorSpec(shape=(None,), dtype=tf.int32)  # Labels are now 1D integers
    )
)

clf = ak.StructuredDataClassifier(overwrite=True, max_trials=1, seed=5)
clf.fit(x=train_dataset, validation_data=validation_dataset, batch_size=BATCH_SIZE)
print(clf.evaluate(validation_dataset))
</code></pre>
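<p>As a side note, the chunked-loading logic can be sanity-checked in isolation, without TensorFlow or AutoKeras, against small synthetic <code>.npy</code> files (the file names and sizes below are made up):</p>

```python
import os
import tempfile
import numpy as np

N_FEATURES = 9

def iter_batches(folder, batch_size, n_features):
    """Yield (x, y) batches from every .npy file in `folder`, one file in memory at a time."""
    for name in sorted(os.listdir(folder)):
        if not name.endswith('.npy'):
            continue
        data = np.load(os.path.join(folder, name))
        x = data[:, :n_features]
        y = np.argmax(data[:, n_features:], axis=1)  # one-hot -> integer labels
        for i in range(0, len(x), batch_size):
            yield x[i:i + batch_size], y[i:i + batch_size]

# Two small synthetic files: 250 rows of 9 features + 3 one-hot label columns each.
tmp = tempfile.mkdtemp()
rng = np.random.default_rng(0)
for k in range(2):
    feats = rng.normal(size=(250, N_FEATURES))
    labels = np.eye(3)[rng.integers(0, 3, size=250)]
    np.save(os.path.join(tmp, f'part{k}.npy'), np.hstack([feats, labels]))

batches = list(iter_batches(tmp, 100, N_FEATURES))
print(len(batches))  # 6 batches: 100 + 100 + 50 rows per file
```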
<pre><code>my_user_name@192:~/my_project_name_v2$ python3 cnn_search_by_chunk.py
2023-11-29 20:05:53.532005: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Using TensorFlow backend
2023-11-29 20:05:55.467804: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
Search: Running Trial #1
Hyperparameter |Value |Best Value So Far
structured_data...|True |?
structured_data...|2 |?
structured_data...|False |?
structured_data...|0 |?
structured_data...|32 |?
structured_data...|32 |?
classification_...|0 |?
optimizer |adam |?
learning_rate |0.001 |?
Epoch 1/1000
33143/33143 [==============================] - 149s 4ms/step - loss: 0.0670 - accuracy: 0.9677 - val_loss: 0.0612 - val_accuracy: 0.9708
Epoch 2/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0625 - accuracy: 0.9697 - val_loss: 0.0598 - val_accuracy: 0.9715
Epoch 3/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0617 - accuracy: 0.9702 - val_loss: 0.0593 - val_accuracy: 0.9717
Epoch 4/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0614 - accuracy: 0.9703 - val_loss: 0.0591 - val_accuracy: 0.9718
Epoch 5/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0612 - accuracy: 0.9705 - val_loss: 0.0590 - val_accuracy: 0.9719
Epoch 6/1000
33143/33143 [==============================] - 145s 4ms/step - loss: 0.0610 - accuracy: 0.9707 - val_loss: 0.0588 - val_accuracy: 0.9721
Epoch 7/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0608 - accuracy: 0.9707 - val_loss: 0.0586 - val_accuracy: 0.9721
Epoch 8/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0607 - accuracy: 0.9709 - val_loss: 0.0585 - val_accuracy: 0.9723
Epoch 9/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0605 - accuracy: 0.9710 - val_loss: 0.0584 - val_accuracy: 0.9723
Epoch 10/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0604 - accuracy: 0.9710 - val_loss: 0.0583 - val_accuracy: 0.9724
Epoch 11/1000
33143/33143 [==============================] - 148s 4ms/step - loss: 0.0603 - accuracy: 0.9711 - val_loss: 0.0583 - val_accuracy: 0.9724
Epoch 12/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0602 - accuracy: 0.9712 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 13/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0601 - accuracy: 0.9712 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 14/1000
33143/33143 [==============================] - 148s 4ms/step - loss: 0.0601 - accuracy: 0.9712 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 15/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0600 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 16/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0600 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9725
Epoch 17/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0600 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9725
Epoch 18/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 19/1000
33143/33143 [==============================] - 145s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9724
Epoch 20/1000
33143/33143 [==============================] - 144s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 21/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 22/1000
33143/33143 [==============================] - 144s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9724
Epoch 23/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0600 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 24/1000
33143/33143 [==============================] - 145s 4ms/step - loss: 0.0599 - accuracy: 0.9714 - val_loss: 0.0581 - val_accuracy: 0.9725
Epoch 25/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0599 - accuracy: 0.9714 - val_loss: 0.0581 - val_accuracy: 0.9724
Epoch 26/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9724
Trial 1 Complete [01h 16m 38s]
val_accuracy: 0.9724819660186768
Best val_accuracy So Far: 0.9724819660186768
Total elapsed time: 01h 16m 38s
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function.
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function.
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.3
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.3
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.4
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.4
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.5
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.5
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.6
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.6
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.7
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.7
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.8
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.8
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.9
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.9
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.10
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.10
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.11
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.11
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.12
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.12
2023-11-29 21:23:57.450991: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451029: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451059: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451091: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451123: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451157: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451185: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451213: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451250: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
Traceback (most recent call last):
File "cnn_search_by_chunk.py", line 50, in <module>
print(clf.evaluate(validation_dataset))
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/tasks/structured_data.py", line 187, in evaluate
return super().evaluate(x=x, y=y, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/auto_model.py", line 492, in evaluate
return utils.evaluate_with_adaptive_batch_size(
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 68, in evaluate_with_adaptive_batch_size
return run_with_adaptive_batch_size(
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 101, in run_with_adaptive_batch_size
history = func(x=x, validation_data=validation_data, **fit_kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 70, in <lambda>
lambda x, validation_data, **kwargs: model.evaluate(
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/my_user_name/.local/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 53, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.FailedPreconditionError: Graph execution error:
Detected at node 'model/multi_category_encoding/string_lookup_15/None_Lookup/LookupTableFindV2' defined at (most recent call last):
File "cnn_search_by_chunk.py", line 50, in <module>
print(clf.evaluate(validation_dataset))
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/tasks/structured_data.py", line 187, in evaluate
return super().evaluate(x=x, y=y, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/auto_model.py", line 492, in evaluate
return utils.evaluate_with_adaptive_batch_size(
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 68, in evaluate_with_adaptive_batch_size
return run_with_adaptive_batch_size(
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 101, in run_with_adaptive_batch_size
history = func(x=x, validation_data=validation_data, **fit_kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 70, in <lambda>
lambda x, validation_data, **kwargs: model.evaluate(
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 2200, in evaluate
logs = test_function_runner.run_step(
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 4000, in run_step
tmp_logs = self._function(dataset_or_iterator)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 1972, in test_function
return step_function(self, iterator)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 1956, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 1944, in run_step
outputs = model.test_step(data)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 1850, in test_step
y_pred = self(x, training=False)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 569, in __call__
return super().__call__(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/base_layer.py", line 1150, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 96, in error_handler
return fn(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/functional.py", line 512, in call
return self._run_internal_graph(inputs, training=training, mask=mask)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/functional.py", line 669, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/base_layer.py", line 1150, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 96, in error_handler
return fn(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/keras_layers.py", line 91, in call
for input_node, encoding_layer in zip(split_inputs, self.encoding_layers):
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/keras_layers.py", line 92, in call
if encoding_layer is None:
File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/keras_layers.py", line 100, in call
output_nodes.append(
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/base_layer.py", line 1150, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 96, in error_handler
return fn(*args, **kwargs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/layers/preprocessing/index_lookup.py", line 756, in call
lookups = self._lookup_dense(inputs)
File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/layers/preprocessing/index_lookup.py", line 792, in _lookup_dense
lookups = self.lookup_table.lookup(inputs)
Node: 'model/multi_category_encoding/string_lookup_15/None_Lookup/LookupTableFindV2'
Table not initialized.
[[{{node model/multi_category_encoding/string_lookup_15/None_Lookup/LookupTableFindV2}}]] [Op:__inference_test_function_5785123]
2023-11-29 21:23:57.618149: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]
2023-11-29 21:23:57.618266: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]
2023-11-29 21:23:57.618360: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]
2023-11-29 21:23:57.618434: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]
my_user_name@192:~/my_project_name_v2$
</code></pre>
|
<python><tensorflow><keras><deep-learning><auto-keras>
|
2023-11-29 21:00:04
| 1
| 17,446
|
user366312
|
77,574,386
| 3,517,025
|
Merging Cluster Lists from Multiple Algorithms: Identifying Methods and SQL Implementation
|
<p>I'm facing a challenge in merging lists of clusters generated by different algorithms using the same set of data items, identifiable by their IDs. Each algorithm clusters these IDs based on varying attributes (which I'll omit for irrelevance here), with the largest ID in each cluster being the cluster key (using max is a decision made to get a deterministic key; it is not a requirement).</p>
<p>Consider the following example with items represented by their IDs ranging from 1 to 8:</p>
<p>Example items</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
</tr>
<tr>
<td>2</td>
</tr>
<tr>
<td>...</td>
</tr>
<tr>
<td>8</td>
</tr>
</tbody>
</table>
</div>
<p><em>Clustering results from two different algorithms:</em></p>
<p><strong>Clustering 1</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>cluster_key</th>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>3</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>2</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>6</td>
<td>4</td>
</tr>
<tr>
<td>6</td>
<td>5</td>
</tr>
<tr>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>8</td>
<td>7</td>
</tr>
<tr>
<td>8</td>
<td>8</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Clustering 2</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>cluster_key</th>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>5</td>
<td>3</td>
</tr>
<tr>
<td>5</td>
<td>4</td>
</tr>
<tr>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>8</td>
<td>7</td>
</tr>
<tr>
<td>8</td>
<td>8</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Expected Output</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>cluster_key</th>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>6</td>
<td>1</td>
</tr>
<tr>
<td>6</td>
<td>2</td>
</tr>
<tr>
<td>6</td>
<td>3</td>
</tr>
<tr>
<td>6</td>
<td>4</td>
</tr>
<tr>
<td>6</td>
<td>5</td>
</tr>
<tr>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>8</td>
<td>7</td>
</tr>
<tr>
<td>8</td>
<td>8</td>
</tr>
</tbody>
</table>
</div>
<p>The Logic Behind the Expected Output:</p>
<p>Clusters from both lists that share even one ID should be merged. For instance, clusters 3 and 6 from the first list overlap with clusters 1, 2, 5, and 6 from the second list. Therefore, they are merged into a single cluster with a key of 6. Similarly, cluster 8 is identical in both lists.</p>
<p><strong>My Questions:</strong><br />
Is there a recognized name for this kind of problem in data clustering or algorithms, and is there existing literature on it?
Are there known SQL implementations or approaches for solving this? Any PySpark or even plain-Python implementations are welcome as well.</p>
<p>My thoughts so far:</p>
<ol>
<li>We can consider each "table" or "cluster list" as correct, and if we can prove that we can merge two correct lists into a correct list, then it's solved</li>
<li>The merging of two cluster lists is reminiscent of the Union-Find data structure, where we'd call <code>Find(n)</code> for each <code>n</code> in the set of IDs.</li>
</ol>
<p>My attempt at robust definitions:</p>
<ol>
<li>A cluster list is correct when
<ul>
<li>every ID has exactly one cluster ID</li>
<li>every cluster has at least one ID</li>
<li>every cluster's ID can be deterministically derived given all the IDs in the cluster (this is why we use max)</li>
</ul>
</li>
<li>When merging two cluster lists
<ul>
<li>initially, both lists <code>left</code> and <code>right</code> are correct</li>
<li>for any two clusters <code>A</code> (from left) and <code>B</code> (from right) where A and B shared at least one id, then all the IDs in both clusters must be in the same cluster</li>
<li>the former is transitive, for example, if clusters <code>A</code>, <code>B</code>, and <code>C</code> on the left and clusters <code>Y</code>, and <code>Z</code> on the right are formed in such a way that <code>A</code> and <code>B</code> both share items with <code>Y</code>, and <code>B</code> and <code>C</code> both share items with <code>Z</code>, then the outcome would be a single unified cluster (I'm cognizant that this is condensed information, but I wanted to avoid a larger data-driven example).</li>
</ul>
</li>
<li>I feel like there are other situations or conditions to consider when merging these cluster lists, but my intelligence seems to be more limited than my intuition in this case.</li>
</ol>
<p>I'm looking for guidance or suggestions on how to approach this problem, especially if there are SQL-based solutions or similar algorithmic strategies.</p>
<p>Thank you for your help!</p>
<p><strong>Approaches that didn't work</strong></p>
<ol>
<li>Union all and self-join</li>
</ol>
<ul>
<li>we attempted a self-join of the union of the two cluster lists, but that failed at the transitive join requirement</li>
</ul>
<ol start="2">
<li>matrix approach with row max</li>
</ol>
<ul>
<li>fails to account for the multiple items in each cluster (see IDs 1 and 2)</li>
</ul>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>cluster_key1</th>
<th>cluster_key_2</th>
<th>row_max</th>
<th>correct_outcome</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>3</td>
<td>1</td>
<td>3</td>
<td>6</td>
</tr>
<tr>
<td>2</td>
<td>3</td>
<td>2</td>
<td>3</td>
<td>6</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>5</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>4</td>
<td>6</td>
<td>5</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>5</td>
<td>6</td>
<td>5</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>6</td>
<td>6</td>
<td>6</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>8</td>
<td>8</td>
<td>8</td>
</tr>
<tr>
<td>8</td>
<td>8</td>
<td>8</td>
<td>8</td>
<td>8</td>
</tr>
</tbody>
</table>
</div>
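<p>This is essentially connected components over a graph whose nodes are the IDs (cluster keys are themselves IDs here, since each key is the max member), with one edge per (cluster_key, id) row from every list; the transitive merging is exactly what Union-Find gives you. A plain-Python sketch of that idea (not SQL, but the same logic can be ported to PySpark, e.g. via iterative joins or a graph library's connected components):</p>

```python
from collections import defaultdict

def merge_clusterings(*clusterings):
    """Merge lists of (cluster_key, id) rows; final key per component = max member."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Every (cluster_key, id) row is an edge; keys are themselves item IDs here.
    for rows in clusterings:
        for key, id_ in rows:
            union(key, id_)

    # Collect connected components and relabel each with its max ID.
    groups = defaultdict(list)
    for x in parent:
        groups[find(x)].append(x)
    return {m: max(members) for members in groups.values() for m in members}

c1 = [(3, 1), (3, 2), (3, 3), (6, 4), (6, 5), (6, 6), (8, 7), (8, 8)]
c2 = [(1, 1), (2, 2), (5, 3), (5, 4), (5, 5), (6, 6), (8, 7), (8, 8)]
merged = merge_clusterings(c1, c2)
print(sorted(merged.items()))
# [(1, 6), (2, 6), (3, 6), (4, 6), (5, 6), (6, 6), (7, 8), (8, 8)]
```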
|
<python><sql><algorithm><pyspark><cluster-analysis>
|
2023-11-29 20:44:29
| 1
| 5,409
|
Joey Baruch
|
77,574,287
| 6,234,139
|
How to select a single value from a GeoDataFrame column?
|
<p>I am trying to join a polygon of Belgium with the multipoygon of France making use of what is posted here: <a href="https://stackoverflow.com/questions/40385782/make-a-union-of-polygons-in-geopandas-or-shapely-into-a-single-geometry">Make a union of polygons in GeoPandas, or Shapely (into a single geometry)</a>. Unfortunately, I am not even able to select a single (multi)polygon value from a geopandas geodataframe. It seems to behave differently from a normal pandas dataframe. The code below illustrates this (it yields an error):</p>
<pre><code>import pandas as pd
import geopandas as gpd
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
BeFra = world.loc[world['name'].isin(['France', 'Belgium'])]
BeFra['geometry'][0]
</code></pre>
<p>On the other hand, with a simple dataframe this method yields values:</p>
<pre><code>import pandas as pd
data = [['tom', 10], ['nick', 15], ['juli', 14]]
df = pd.DataFrame(data, columns=['Name', 'Age'])
df['Name'][0]
</code></pre>
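<p>The likely culprit is the index, not GeoPandas: <code>BeFra['geometry'][0]</code> looks up the row <em>label</em> 0, and after filtering the surviving rows keep their original labels, so label 0 may no longer exist. Positional access with <code>.iloc</code> avoids this. A sketch using the same toy frame:</p>

```python
import pandas as pd

data = [['tom', 10], ['nick', 15], ['juli', 14]]
df = pd.DataFrame(data, columns=['Name', 'Age'])

sub = df[df['Age'] > 10]    # keeps the original labels 1 and 2
# sub['Name'][0] would raise a KeyError: there is no row labelled 0 any more
print(sub['Name'].iloc[0])  # nick  (first row by position)
```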
<p>How can I select a single value from a GeoDataFrame column?</p>
|
<python><geopandas>
|
2023-11-29 20:23:43
| 1
| 701
|
koteletje
|
77,574,099
| 12,959,994
|
Why building a new scorer outputs an empty string for deepspeech 0.9.3
|
<p>I am trying to create a limited corpus and train a language model to use for a deepspeech scorer.</p>
<p>I have followed the information provided in <a href="https://deepspeech.readthedocs.io/en/r0.9/Scorer.html" rel="nofollow noreferrer">the docs here</a></p>
<p>I read a helpful guide posted for an older version of deepspeech for generating a language model <a href="https://discourse.mozilla.org/t/tune-moziiladeepspeech-to-recognize-specific-sentences/41350/22" rel="nofollow noreferrer">here</a></p>
<p>And I have read <a href="https://mozilla.github.io/deepspeech-playbook/" rel="nofollow noreferrer">the playbook here</a>,</p>
<p>It seems that this has been encountered before, but <a href="https://github.com/coqui-ai/STT/discussions/1472" rel="nofollow noreferrer">no answer was given there</a></p>
<p>I have set up the docker environment for training and followed the docs to the letter.</p>
<p>I can train a model, and then convert this to a .scorer file, so the whole process is working.</p>
<p>The steps I take from inside the docker container are:</p>
<ul>
<li>create a vocab.txt file with my input sentences and store it in the deepspeech-data-input folder.</li>
<li>run this script to build the model in the output<code>python3 generate_lm.py --input_txt ../../deepspeech-data/input/vocab.txt --output_dir ../../deepspeech-data/output --top_k 100 --kenlm_bins /DeepSpeech/native_client/kenlm/build/bin/ --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|0|0" --binary_a_bits 255 --binary_q_bits 8 --binary_type trie --discount_fallback </code></li>
<li>run this script to generate the scorer: <code>./generate_scorer_package --alphabet ../../deepspeech-data/input/alphabet.txt --lm ../../deepspeech-data/output/lm.binary --vocab ../../deepspeech-data/output/vocab-100.txt --package ../../deepspeech-data/output/deepspeech-0.9.3-models.scorer --default_alpha 0.9 --default_beta 0.9 --force_bytes_output_mode 1 </code></li>
<li>replace the default scorer with my one.</li>
<li>Run deepspeech</li>
</ul>
<p>Everything seems to work as it should, with no errors, but with the new scorer in place DeepSpeech just returns an empty string. With the default scorer everything works fine, but I need to restrict the vocabulary so that I can detect just a few commands.</p>
<p>I have tried adjusting some of the flags, but I always get the same result.</p>
<p>I am using the <code>--discount_fallback</code> flag as suggested as it is a small corpus</p>
<p>So my question is this: why would a DeepSpeech language model/scorer output an empty string, and how can I fix it?</p>
<p>I am running this inside the NodeJS example on github but testing against any of them would work to reproduce. <a href="https://github.com/mozilla/DeepSpeech-examples" rel="nofollow noreferrer">github examples</a></p>
|
<python><node.js><nlp><mozilla-deepspeech>
|
2023-11-29 19:46:17
| 0
| 652
|
jbflow
|
77,574,026
| 4,438,012
|
How to transpose tabular organizational hierarchy data in python?
|
<p><strong>I have a csv dataset that is similar in structure to the following table in tabular form:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Employee</th>
<th>Manager</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>B</td>
</tr>
<tr>
<td>B</td>
<td>E</td>
</tr>
<tr>
<td>C</td>
<td>E</td>
</tr>
<tr>
<td>D</td>
<td>B</td>
</tr>
<tr>
<td>E</td>
<td>H</td>
</tr>
<tr>
<td>F</td>
<td>H</td>
</tr>
<tr>
<td>G</td>
<td>H</td>
</tr>
<tr>
<td>H</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p><strong>How would I go about, in Python, transforming this so that I can list each employee's direct reports, if they have any? With the above table I would want the result to look something like this:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Employee</th>
<th>Manager</th>
<th>Report 1</th>
<th>Report 2</th>
<th>Report 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>B</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>B</td>
<td>E</td>
<td>A</td>
<td>D</td>
<td></td>
</tr>
<tr>
<td>C</td>
<td>E</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>D</td>
<td>B</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>E</td>
<td>H</td>
<td>B</td>
<td>C</td>
<td></td>
</tr>
<tr>
<td>F</td>
<td>H</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>G</td>
<td>H</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>H</td>
<td></td>
<td>E</td>
<td>F</td>
<td>G</td>
</tr>
</tbody>
</table>
</div>
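<p>For reference, a minimal sketch of one common approach (not from the question; it assumes the CSV has exactly the <code>Employee</code> and <code>Manager</code> columns shown above): group the direct reports per manager with pandas, then expand each manager's list of reports into numbered columns and merge it back.</p>

```python
import pandas as pd

# Rebuild the sample table from the question (in practice: pd.read_csv(...))
df = pd.DataFrame({
    "Employee": list("ABCDEFGH"),
    "Manager":  ["B", "E", "E", "B", "H", "H", "H", None],
})

# Collect each manager's direct reports, preserving row order
reports = (
    df.dropna(subset=["Manager"])
      .groupby("Manager")["Employee"]
      .apply(list)
)

# Expand the unequal-length lists into Report 1..N columns (NaN-padded)
report_cols = pd.DataFrame(
    reports.tolist(), index=reports.index
).rename(columns=lambda i: f"Report {i + 1}")

# Attach the report columns to each employee's own row
out = df.merge(report_cols, left_on="Employee", right_index=True, how="left")
print(out)
```

<p>Employees with no reports (A, C, D, F, G) simply get NaN in the report columns, matching the blank cells in the desired table.</p>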
|
<python>
|
2023-11-29 19:30:01
| 2
| 439
|
harcot
|
77,574,022
| 19,588,737
|
Cleanly stop processes and exit when a Tkinter widget is closed
|
<p>I have a Python 3.11.5 program that uses a Tkinter GUI to receive user input, and runs a number of processes in the background. I'm wondering what the preferred method is for handling when the user wants to cancel a background process while it's still running. I've tried to make cancel buttons, to use separate threads and <code>threading.Event</code>s to signal a cancel, but I can't get it to exit cleanly. I'd like to be able to hit cancel, take a couple seconds if needed, but see the program end and my terminal go back to normal, with no errors.</p>
<p>Here's a minimum example:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
import time


class app(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("An app")
        self.create_widgets()
        self.a_process()

    def create_widgets(self):
        self.text = tk.Label(self, text="Hello World!")
        self.text.pack()
        self.cancelButton = tk.Button(self, text="Cancel", command=self.destroy)
        self.cancelButton.pack()

    def a_process(self):
        for i in range(100):
            self.text.config(text=f"Message # {i}")
            self.text.update()
            time.sleep(1)

    def destroy(self):
        super().destroy()


if __name__ == "__main__":
    root = app()
    root.mainloop()
</code></pre>
<p>Hitting cancel while the message loop is running gives this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\M.Modeler\geoSieve 1.1\test.py", line 27, in <module>
root = app()
^^^^^
File "C:\Users\M.Modeler\geoSieve 1.1\test.py", line 9, in __init__
self.a_process()
File "C:\Users\M.Modeler\geoSieve 1.1\test.py", line 19, in a_process
self.text.config(text = f"Message # {i}")
File "C:\Users\M.Modeler\AppData\Local\miniconda3\envs\geoSieve\Lib\tkinter\__init__.py", line 1702, in configure
return self._configure('configure', cnf, kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\M.Modeler\AppData\Local\miniconda3\envs\geoSieve\Lib\tkinter\__init__.py", line 1692, in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: invalid command name ".!label"
</code></pre>
<p>What is my <code>destroy</code> method missing to end the background process and exit cleanly?</p>
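<p>For context, the usual fix for this class of problem (a minimal sketch, not the asker's code) is to drive the loop with <code>after()</code> instead of a blocking <code>for</code>/<code>sleep</code> loop, so the event loop stays in control, and to cancel the pending callback inside <code>destroy()</code> so nothing ever touches a destroyed widget:</p>

```python
import tkinter as tk

class App(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("An app")
        self.text = tk.Label(self, text="Hello World!")
        self.text.pack()
        tk.Button(self, text="Cancel", command=self.destroy).pack()
        # Kick off the "background" work via the event loop
        self._job = self.after(0, self.step, 0)

    def step(self, i):
        # One iteration of the old a_process() loop
        self.text.config(text=f"Message # {i}")
        if i < 99:
            # Re-schedule ourselves instead of sleeping
            self._job = self.after(1000, self.step, i + 1)

    def destroy(self):
        # Cancel the pending callback before tearing down widgets,
        # so step() is never called on a destroyed label
        if self._job is not None:
            self.after_cancel(self._job)
            self._job = None
        super().destroy()
```

<p>Running this with <code>App().mainloop()</code> should let Cancel close the window with no traceback, since the scheduled callback is cancelled before the widgets are destroyed.</p>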
|
<python><tkinter>
|
2023-11-29 19:28:37
| 1
| 307
|
bt3
|
77,573,998
| 17,896,651
|
PYTHON DROPBOX API process been hang since CLOSE_WAIT socket is not closed
|
<p>So I have had my application running on Windows for 5 years now. I have around 800 processes running on 5 different machines. Lately, about 5 per day have been hanging on this:
<a href="https://i.sstatic.net/oa3VC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oa3VC.png" alt="enter image description here" /></a></p>
<p>I also managed to find the socket PID and kill it; then the app closed properly.
<a href="https://i.sstatic.net/MkVka.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MkVka.png" alt="enter image description here" /></a></p>
<p>My Dropbox-related code is the same as always; it was not changed.</p>
<p>My code flow:</p>
<pre><code>self.uploading_thread = None
...
if self.uploading_thread:
    self.uploading_thread.join()  # wait if last one did not finish
self.uploading_thread = threading.Thread(target=self.update_server_data)
self.uploading_thread.start()

def update_server_data(self):
    status = 'xsaxasxsa'
    self.statistics_server_o.set_data(status)

def set_data(self, data):
    self.lock.acquire()
    self.data = data
    self.lock.release()
    # notify thread on new data
    self.new_data_flag.set()

def update_server(self):
    while self.new_data_flag.wait():
        # clear flag
        self.new_data_flag.clear()
        if self.stop_event:
            break
        local_data_copy = self.get_hard_copy_data()
        # update new data
        try:
            self.dropbox_o.upload_bytes(local_data_copy, '', 'status',
                                        '{}_status.txt'.format(self.account_name),
                                        overwrite=True)
        except:
            pass
</code></pre>
<p>I can trigger the update thread many times; it's async, so I wait until the last job finishes before I start a new one.</p>
<p>Can you see anything wrong in my code? Should I add termination of the thread with a timer?</p>
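<p>For reference, one generic watchdog pattern for a call that can hang on a dead socket (a sketch only; <code>call_with_timeout</code> is a made-up helper, not part of the Dropbox SDK) is to run it in a daemon worker thread and join with a deadline:</p>

```python
import threading

def call_with_timeout(fn, args=(), timeout=30):
    """Run fn(*args) in a daemon thread; raise TimeoutError if it hangs."""
    result = {}

    def runner():
        try:
            result["value"] = fn(*args)
        except Exception as exc:
            result["error"] = exc  # re-raise in the caller's thread

    t = threading.Thread(target=runner, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():
        # The daemon thread is abandoned; it will not block process exit
        raise TimeoutError(f"{fn.__name__} did not finish in {timeout}s")
    if "error" in result:
        raise result["error"]
    return result.get("value")
```

<p>The <code>upload_bytes</code> call could then be wrapped as <code>call_with_timeout(self.dropbox_o.upload_bytes, args=(...), timeout=60)</code>; because the worker is a daemon thread, a socket stuck in CLOSE_WAIT no longer keeps the process alive at exit. The Dropbox SDK's client constructor also accepts a <code>timeout</code> parameter in recent versions, which may address this more directly.</p>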
|
<python><multithreading><sockets><tcp><dropbox>
|
2023-11-29 19:24:05
| 0
| 356
|
Si si
|