QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,146,984 | 3,649,594 | guessing the precision of each of my solutions using fsolve (scipy) | <p>I want to know the error (or uncertainty) of each of my solutions in a system of equations when I'm using <code>scipy.optimize.fsolve</code>. For example (from the documentation):</p>
<pre><code>import numpy as np
from scipy.optimize import fsolve
def func(x):
return [x[0] * np.cos(x[1]) - 4,
x[1] * x[0] - x[1] - 5]
root, infodict, ier, msg = fsolve(func, [1, 1], full_output=1)
</code></pre>
<p>I want to know the precision of <code>root[0]</code> and <code>root[1]</code>, not how good the method was (that is what <code>ier</code> tells me).</p>
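<p>To make "precision" concrete, here is my own rough attempt at a first-order estimate via the Jacobian (assuming that is even the right notion — I am not sure it is):</p>

```python
import numpy as np
from scipy.optimize import fsolve

def func(x):
    return [x[0] * np.cos(x[1]) - 4,
            x[1] * x[0] - x[1] - 5]

root, infodict, ier, msg = fsolve(func, [1, 1], full_output=True)

# First-order sketch: if f(root) is the leftover residual, the error in
# root is roughly |J^{-1} f(root)|, with J the Jacobian estimated by
# finite differences at the returned root.
residual = np.asarray(infodict["fvec"], dtype=float)
n = len(root)
J = np.empty((n, n))
h = np.sqrt(np.finfo(float).eps)
for j in range(n):
    step = np.zeros(n)
    step[j] = h * max(1.0, abs(root[j]))
    J[:, j] = (np.asarray(func(root + step), dtype=float) - residual) / step[j]
err = np.abs(np.linalg.solve(J, residual))  # per-component error estimate
```

<p>Is something along these lines what people actually use, or is there a standard way?</p>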
| <python><scipy><scipy-optimize> | 2023-09-21 03:15:24 | 1 | 477 | oscarcapote |
77,146,945 | 9,554,172 | How to use python subprocess.run for calling other python files and bash files | <p>I am using Python's <code>subprocess.run</code> to call a .sh file and a .py file that are located in another directory, far from the source. This is the only connection between the two directories, which makes importing impractical. The code below has been simplified to the bare minimum.</p>
<pre><code>parent-dir/
├── dirA
│ ├── main.py
│ └── main_test.py
└── dirB
├── app.sh
└── utility.py
</code></pre>
<p>main.py</p>
<pre><code>from pathlib import Path
from typing import Tuple, Optional
import subprocess
import sys
dir_a_path = Path(__file__).resolve().parent
# get the parent folder
parent_dir = dir_a_path.parent
dir_b_path = str(parent_dir.joinpath("dirB"))
def execute_bash_command(command: str, cwd: Optional[str]=None) -> Tuple[int, str, str]:
"""Runs a bash command in a new shell
exit_code, stdout, stderr = execute_bash_command(command)
Args:
command (str): the command to run
cwd (Optional[str]): where to run the command line from.
Use this instead of 'cd some_dir/ && command'
Raises:
Exception: Exception
Returns:
Tuple[int, str, str]: [exit code, stdout, stderr]
"""
try:
print(f"Executing {command} from cwd: {cwd}")
output = subprocess.run(command, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True, shell=True)
stdout = output.stdout.decode("utf-8").strip()
print(stdout)
return (output.returncode, stdout, "")
except subprocess.CalledProcessError as cpe:
print(f"error: {cpe}")
stdout = cpe.stdout.decode("utf-8").strip()
stderr = cpe.stderr.decode("utf-8").strip()
return (cpe.returncode, stdout, stderr)
def call_app() -> Tuple[int, str, str]:
command = [
"./app.sh",
"install"
]
command = " ".join(command)
return execute_bash_command(command=command, cwd=dir_b_path)
def call_utility() -> Tuple[int, str, str]:
command = [
sys.executable,
"greet"
]
command = " ".join(command)
return execute_bash_command(command=command, cwd=dir_b_path)
</code></pre>
<p>main_test.py</p>
<pre><code>from main import call_app, call_utility, execute_bash_command
def test_call_app():
code, stdout, stderr = call_app()
assert code == 0
assert stdout == "Installing application..."
def test_call_utility():
code, stdout, stderr = call_utility()
assert code == 0
assert stdout == "running command greet"
def test_execute_bash_command():
""" Creates dummy file """
exit_code, stdout, _ = execute_bash_command("touch tmp/test.txt")
assert exit_code == 0
</code></pre>
<p>app.sh</p>
<pre><code>#!/bin/bash
if [ "$1" = "greet"]; then
echo "Hello, world!"
elif [ "$1" = "install"]; then
echo "Installing application..."
else
echo "Usage: $0 {greet|install}"
exit 1
fi
</code></pre>
<p>utility.py</p>
<pre><code>import sys
if __name__ == "__main__":
print(f"running command {sys.argv[1]}")
</code></pre>
<p>When I run from the terminal <code>python utility.py greet</code> or <code>./app.sh install</code>, it runs fine. The issue is when I execute <code>main_test.py</code> using <code>python -m pytest</code>. I get errors that the subprocess exited with different codes.</p>
<p>Errors:</p>
<pre><code>Executing ./app.sh install from cwd: /mnt/c/Users/XXX/source/python-testing/parent-dir/dirB
error: Command './app.sh install' returned non-zero exit status 1.
</code></pre>
<p>or</p>
<pre><code>Executing /usr/bin/python3 greet from cwd: /mnt/c/Users/XXX/source/python-testing/parent-dir/dirB
error: Command '/usr/bin/python3 greet' returned non-zero exit status 2.
</code></pre>
<p>These errors occur while running pytest from both PowerShell and WSL2.</p>
<p>These errors mimic the actual errors seen in the real codebase.</p>
<p>Why is this not working as intended? Is using <code>cwd</code> in the <code>subprocess.run</code> not the right way to go? I'm stuck because I'm trying to refactor some nasty code that magically works, but is not clear, so I'd appreciate some guidance.</p>
| <python><bash><subprocess><pytest><windows-subsystem-for-linux> | 2023-09-21 03:01:56 | 1 | 881 | Lacrosse343 |
77,146,826 | 2,604,247 | FastAPI Swagger Documentation Showing Incorrect Response Status? | <p>The route implemented via FastAPI (code snippet below) is working fine.</p>
<pre class="lang-py prettyprint-override"><code>@app.post(path='/user/create/')
async def create_user(response: Response,
email: str = Form(),
password: str = Form()) -> None:
"""
Create a new user with this route.
Possible responses:
201: New User Created
400: Bad email id supplied.
409: User already signed up, hence causing conflict with existing id.
"""
logging.info(msg=f'New user sign up request from {email}.')
try:
user: WebUser = WebUser(email=email, password=password)
# Either 409 or 201
response.status_code = [CONFLICT, CREATED][await user.create_new()]
except ValidationError:
logging.error(
msg=f'User creation for {email} failed because of bad request.')
response.status_code = BAD_REQUEST # 400
</code></pre>
<p>But the FastAPI swagger documentation is showing incorrect (actually totally random) responses. Here is the screenshot.</p>
<p><a href="https://i.sstatic.net/qBX7L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qBX7L.png" alt="enter image description here" /></a></p>
<p>So, it is not showing 409, and 400, but somehow showing 422 as a possible response. Also, the successful response is actually 201.</p>
<p>So how does FastAPI generate this documentation behind the scenes, and am I doing something wrong that misleads it? Or is it a bug within FastAPI?</p>
| <python><fastapi><httpresponse><asgi> | 2023-09-21 02:16:21 | 1 | 1,720 | Della |
77,146,783 | 653,133 | Finding the bottleneck for asyncio tasks slowing down overall | <p>For python asyncio, what are good tools to debug performance bottlenecks? I have a quart webserver that is the frontend for a backend that holds SSH connections (via <code>paramiko</code>) to multiple devices (about 30 or 40). What I notice is that, once all those connections are up and running, the web frontend seems to be slowing down a lot. Requests take a long time to complete and I am seeing tons of warnings in the console where asyncio warns about slow tasks.</p>
<p>The problem is, a lot of the methods that asyncio warns about don't really have anything that is blocking the main thread anymore. I have tried <code>yappi</code> to search for slowness on the main thread but it doesn't help much.</p>
<p>I am starting to wonder if this is either general run-loop starvation and I simply have too many tasks, or if there is some other resource getting constrained here that could cause this.</p>
<p>My problem is a lack of debugging tools. I would love to see a graphical representation of which method takes how long and what it's waiting for.</p>
| <python><python-asyncio><paramiko><yappi> | 2023-09-21 01:54:54 | 0 | 2,870 | Michael Ochs |
77,146,775 | 3,204,212 | How can I properly pass stdin inputs to my Python subprocess? | <p>I am using a CLI program, <a href="https://github.com/FiloSottile/age" rel="nofollow noreferrer">age</a>, to encrypt files with a passphrase. You cannot supply the passphrase as an argument or an environment variable; after the program starts, it prompts you to type it in. This makes encrypting a large number of files a pain, so I'm writing a Python script to do it for me:</p>
<pre><code>def age_encrypt(filename: str):
p = subprocess.Popen(
[
"age",
"--passphrase",
"--output",
filename + ".age",
filename,
],
stdout=subprocess.PIPE,
stdin=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
p.communicate(input=b"some_passphrase\n")
</code></pre>
<p>This works perfectly when I try it with my own program as the subprocess instead of age. But when I run age, it doesn't work: the prompt still appears to me, and my Python program doesn't communicate with it.</p>
<p>I've read the documentation for the subprocess module, but I can't figure out why this is happening.</p>
| <python><subprocess> | 2023-09-21 01:51:48 | 0 | 2,480 | GreenTriangle |
77,146,640 | 3,533,030 | plotly / VS Code shows axes but does not show isosurface | <p>Simple (?) code, but nothing shows inside the plot.</p>
<pre><code>import numpy as np
import plotly.graph_objects as go
# Create a three-dimensional grid of points.
x = np.linspace(-1, 1, 100)
y = np.linspace(-1, 1, 100)
z = np.linspace(-1, 1, 100)
# Calculate the distance from each point to the origin.
r = np.sqrt(x**2 + y**2 + z**2)
# Create an isosurface trace.
isosurface_trace = go.Isosurface(
x=x,
y=y,
z=z,
value=r,
surface_count=10,
opacity=0.5,
isomin=0.1,
isomax=0.5
)
# Create a figure and add the isosurface trace.
fig = go.Figure()
fig.add_trace(isosurface_trace)
# Add axes and a title.
fig.update_layout(
scene=dict(
xaxis=dict(title="X"),
yaxis=dict(title="Y"),
zaxis=dict(title="Z"),
),
title="Isosurface of a sphere",
)
# Display the figure.
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/KULuz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KULuz.png" alt="Output from VS Code Cell" /></a></p>
<p>There are data points inside <code>r</code> that meet the iso-surface criteria:</p>
<pre><code>np.any((r > 0.1) & (r < 0.5))
</code></pre>
<p><code>True</code></p>
<p>Is this:</p>
<ul>
<li>My bad <code>python</code> code?</li>
<li>A problem rendering in <code>VS Code</code>?</li>
</ul>
| <python><visual-studio-code><jupyter-notebook><plotly> | 2023-09-21 00:58:59 | 0 | 449 | user3533030 |
77,146,492 | 9,588,300 | Pandas merge on inequality | <p>So I have been searching a pandas equivalent of this SQL query</p>
<pre><code>SELECT * FROM table1
LEFT JOIN table2
ON table1.columnX>=table2.columnY
</code></pre>
<p><em>Note that I am joining by an inequality condition, <code>>=</code>, not by matching columns.</em></p>
<p>But it seems pandas' merge is only able to join on exact matches (like <code>select * from table1 LEFT JOIN table2 ON table1.columnX=table2.columnY</code>).</p>
<p>It doesn't seem to support joining on more complex conditions, like one column having a value greater than or equal to another column, which SQL does support.</p>
<p>I have found many resources saying that it does not support this, and that the only way is to do a Cartesian product first and then filter the resulting dataframe, or to pre-filter the dataframes before joining. However, a Cartesian product is costly.</p>
<p>But those sources are from more than 5 years ago. Is it still the case today that pandas' merge can only join by matching columns exactly and does not admit inequalities (<code><</code>, <code>></code>, <code><=</code>, <code>>=</code>, between)?</p>
<p>Here are some old resources I have found regarding this:</p>
<p><a href="https://stackoverflow.com/questions/35566368/inequality-joins-in-pandas">Inequality joins in Pandas?</a></p>
<p><a href="https://stackoverflow.com/questions/30627968/merge-pandas-dataframes-where-one-value-is-between-two-others">Merge pandas dataframes where one value is between two others</a></p>
<p><a href="https://stackoverflow.com/questions/44367672/best-way-to-join-merge-by-range-in-pandas/44601120#44601120">Best way to join / merge by range in pandas</a></p>
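<p>For reference, the workaround I am currently using is the cross-then-filter recipe (toy data):</p>

```python
import pandas as pd

table1 = pd.DataFrame({"columnX": [1, 3, 5]})
table2 = pd.DataFrame({"columnY": [2, 4]})

# Cartesian product, then filter on the inequality.
pairs = table1.merge(table2, how="cross").query("columnX >= columnY")

# Re-attach unmatched left rows to mimic the LEFT JOIN semantics.
left_join = table1.merge(pairs, on="columnX", how="left")
```

<p>It works, but the cross merge blows up quadratically, which is exactly what I am hoping to avoid.</p>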
| <python><sql><pandas><dataframe><join> | 2023-09-21 00:00:20 | 2 | 462 | Eugenio.Gastelum96 |
77,146,428 | 3,245,747 | Fitting an RNN model using a tensorflow dataset | <p>I'm still new to using TensorFlow datasets and would like some help with the following. Assume I have a matrix where each row is an observation and I would like to use the window function in order to prepare the data. A sample code is as follows:</p>
<pre><code>import tensorflow as tf
import numpy as np
my_random = np.random.rand(3, 5)
dataset = tf.data.Dataset.from_tensor_slices(my_random)
</code></pre>
<p>There are three rows for three observations and each one has five columns. I would like to use the window function with a size of 3:</p>
<pre><code>dataset = dataset.window(3, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(3))
dataset = dataset.map(lambda window: (window[:, :-1], window[:, -1:]))
</code></pre>
<p>Then I would like to use this data set to train a RNN model:</p>
<pre><code>model = keras.models.Sequential([
keras.layers.GRU(128, return_sequences=True, input_shape=[None, 4], dropout=0.2, recurrent_dropout=0.2),
keras.layers.GRU(128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2),
keras.layers.TimeDistributed(keras.layers.Dense(1, activation='relu'))
])
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(dataset, epochs=20)
</code></pre>
<p>When I run the above code I get an error, and I am not sure what is the issue.</p>
<pre><code>Input 0 of layer "gru_9" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 4)
</code></pre>
<p>As I said, I am still new to this and would like to know what the issue is and how to fix it.</p>
| <python><tensorflow><keras><recurrent-neural-network> | 2023-09-20 23:37:02 | 1 | 947 | user3245747 |
77,146,349 | 2,469,032 | Obtaining grouped max() or min() in Pandas without skipping NANs | <p>Consider a sample dateframe</p>
<pre><code>df = pd.DataFrame({'group' : [1, 2, 2], 'x' : [1, 2, 3], 'y' : [2, 3, np.nan]})
</code></pre>
<p>If I want to get the max value of variable 'y' <em>without</em> skipping NANs, I would use the function:</p>
<pre><code>df.y.max(skipna = False)
</code></pre>
<p>The returned result is <em>nan</em>, as expected.</p>
<p>However, if I want to calculate the grouped max value by 'group', as follows:</p>
<pre><code>df.groupby('group').y.max(skipna = False)
</code></pre>
<p>I got an error message: <em>TypeError: max() got an unexpected keyword argument 'skipna'</em></p>
<p>Seems like the DataFrameGroupBy.max() does not have the argument to skip nas. What would be the best way to get the desired result?</p>
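<p>The workaround I have so far routes each group's Series through <code>apply</code>, which does accept <code>skipna</code> (but is presumably slower than the built-in reduction):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'group': [1, 2, 2], 'x': [1, 2, 3], 'y': [2, 3, np.nan]})

# Series.max does take skipna, so apply it per group.
out = df.groupby('group')['y'].apply(lambda s: s.max(skipna=False))
```

<p>Is there a faster or more idiomatic way?</p>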
| <python><pandas><group-by> | 2023-09-20 23:12:55 | 2 | 1,037 | PingPong |
77,146,241 | 687,739 | How to nicely format the X-axis labels on a pandas bar chart | <p>Run the following code:</p>
<pre><code>covid = pd.read_csv("https://covid.ourworldindata.org/data/owid-covid-data.csv")
covid.set_index("date", inplace=True)
covid.index = pd.to_datetime(covid.index)
covid[covid.location=="Denmark"].new_cases_smoothed_per_million.plot()
</code></pre>
<p>And you get nicely formatted X-axis labels:</p>
<p><a href="https://i.sstatic.net/FdsLF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FdsLF.png" alt="enter image description here" /></a></p>
<p>Use the <code>bar</code> method, and you don't get nicely formatted X-axis labels:</p>
<pre><code>covid[covid.location=="Denmark"].new_cases_smoothed_per_million.plot.bar()
</code></pre>
<p><a href="https://i.sstatic.net/Om1gW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Om1gW.png" alt="enter image description here" /></a></p>
<p>How do I get nicely formatted X-axis labels on the bar chart?</p>
| <python><pandas><plot> | 2023-09-20 22:35:49 | 1 | 15,646 | Jason Strimpel |
77,146,194 | 102,694 | pyarrow breaks pyodbc MySQL? | <p>I have a Docker container with MySQL ODBC driver, unixODBC, and a bunch of Python stuff installed. My MySQL driver works through <code>isql</code>, and it works when connecting from Python with <code>pyodbc</code>, if I do so in a fresh Python process:</p>
<pre><code>sh-4.4# python
Python 3.8.16 (default, May 31 2023, 12:44:21)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyodbc
>>> pyodbc.connect("DRIVER=MySQL ODBC 8.1 ANSI Driver;SERVER=host.docker.internal;PORT=3306;UID=root;PWD=shh")
<pyodbc.Connection object at 0x7f6fd94dac70>
</code></pre>
<p>But, if I import <code>pyarrow</code> before establishing the connection, I get this:</p>
<pre><code>sh-4.4# python
Python 3.8.16 (default, May 31 2023, 12:44:21)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyarrow
>>> import pyodbc
>>> pyodbc.connect("DRIVER=MySQL ODBC 8.1 ANSI Driver;SERVER=host.docker.internal;PORT=3306;UID=root;PWD=shh")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib '/usr/lib64/libmyodbc8a.so' : file not found (0) (SQLDriverConnect)")
</code></pre>
<p>I get the same if I specify the path to the driver directly:</p>
<pre><code>sh-4.4# python
Python 3.8.16 (default, May 31 2023, 12:44:21)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyarrow
>>> import pyodbc
>>> pyodbc.connect("DRIVER=/usr/lib64/libmyodbc8a.so;SERVER=host.docker.internal;PORT=3306;UID=root;PWD=shh")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib '/usr/lib64/libmyodbc8a.so' : file not found (0) (SQLDriverConnect)")
</code></pre>
<p>Having run into the same/similar error message from unixODBC in the past if a transitive dependency of the library was missing, I tried this in an attempt to see if something messed up the loader search path. Not sure if it's a valid test, but nothing seems amiss:</p>
<pre><code>>>> import os
>>> os.system('./lddtree.sh /usr/lib64/libmyodbc8a.so')
libmyodbc8a.so => /usr/lib64/libmyodbc8a.soreadelf: /usr/lib64/libmyodbc8a.so: Warning: Section '.interp' was not dumped because it does not exist!
(interpreter => none)
readelf: /usr/lib64/libmyodbc8a.so: Warning: Section '.interp' was not dumped because it does not exist!
libpthread.so.0 => /lib64/libpthread.so.0
libdl.so.2 => /lib64/libdl.so.2
libssl.so.1.1 => /lib64/libssl.so.1.1
libz.so.1 => /lib64/libz.so.1
libcrypto.so.1.1 => /lib64/libcrypto.so.1.1
libresolv.so.2 => /lib64/libresolv.so.2
librt.so.1 => /lib64/librt.so.1
libm.so.6 => /lib64/libm.so.6
libodbcinst.so.2 => /lib64/libodbcinst.so.2
libltdl.so.7 => /lib64/libltdl.so.7
libstdc++.so.6 => /lib64/libstdc++.so.6
libgcc_s.so.1 => /lib64/libgcc_s.so.1
libc.so.6 => /lib64/libc.so.6
ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2
0
</code></pre>
<p>I tried upgrading pyodbc and pyarrow to latest, and behavior is the same:</p>
<pre><code>pyarrow 13.0.0
pyodbc 4.0.39
</code></pre>
<p>I'm not sure if the issue is around pyarrow specifically, but based on <a href="https://bugs.mysql.com/bug.php?id=106583" rel="nofollow noreferrer">this bug</a> reporting similar behavior when importing protobuf, I searched my libraries for anything referencing 'protobuf' and pyarrow popped up with a header file including that in its name. Probably a coincidence, as that was in an older version of pyarrow and the latest version no longer even has that file.</p>
<p>FWIW, the container also has other ODBC drivers that don't experience this issue.</p>
<p>I assume pyarrow init is changing something in the environment, but I'm not enough of a Pythonista to know how to identify what; any tips to debug further?</p>
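<p>In case it is useful, this is how I have been trying to spot whether an import drags extra shared libraries into the process (Linux-only sketch; the <code>pyarrow</code> import is commented out here, and I am not sure this catches lazily loaded libraries):</p>

```python
# List shared objects mapped into this process whose path mentions
# ssl or crypto (parses /proc/self/maps, so Linux only).
def loaded_libs(substrings=("ssl", "crypto")):
    libs = set()
    try:
        with open("/proc/self/maps") as f:
            for line in f:
                path = line.split()[-1]
                if path.startswith("/") and any(s in path for s in substrings):
                    libs.add(path)
    except FileNotFoundError:
        pass  # not on Linux
    return sorted(libs)

before = loaded_libs()
# import pyarrow  # the suspect import would go here
after = loaded_libs()
new = [p for p in after if p not in before]
```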
| <python><mysql><pyodbc><pyarrow><unixodbc> | 2023-09-20 22:19:14 | 1 | 1,652 | Aron |
77,146,081 | 1,471,980 | how do you add the same column names in Pandas given the group name | <p>I have a data frame that has the same column names multiple times. I need to be able to add the values for the same column names and display the output. For example, I have this data frame:</p>
<p>df</p>
<pre><code>Server Model Slot Count40G Count40G
server1 Cisco 1 10 5
server1 Cisco 2 5 0
server1 Cisco 3 20 0
server1 Cisco 4 0 1
server1 Cisco 8 0 10
server2 IBM 5 10 1
server2 IBM 8 5 0
server2 IBM 9 0 5
</code></pre>
<p>resulting data frame needs to look like this:</p>
<pre><code> Server Model Slot Count40G
server1 Cisco 1 15
server1 Cisco 2 5
server1 Cisco 3 20
server1 Cisco 4 1
server1 Cisco 8 10
server2 IBM 5 11
server2 IBM 8 5
server2 IBM 9 5
</code></pre>
<p>In this example, there are only 2 columns with the same name but it could be more columns with the same name.</p>
<p>I tried this:</p>
<pre><code>cols=['Server', 'Model', 'Slot']
df.groupby(cols, level=1, axis=1).sum()
</code></pre>
<p>I am getting index out of bounds error</p>
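<p>For what it's worth, transposing first seems to work on a toy version, though I don't know if it's idiomatic:</p>

```python
import pandas as pd

df = pd.DataFrame(
    [["server1", "Cisco", 1, 10, 5],
     ["server1", "Cisco", 2, 5, 0],
     ["server2", "IBM", 5, 10, 1]],
    columns=["Server", "Model", "Slot", "Count40G", "Count40G"],
)

# Move the key columns into the index, then group the transposed frame
# by its (now row) labels so duplicate column names get summed.
tmp = df.set_index(["Server", "Model", "Slot"])
summed = tmp.T.groupby(level=0).sum().T.reset_index()
```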
| <python><pandas> | 2023-09-20 21:45:25 | 2 | 10,714 | user1471980 |
77,145,988 | 22,466,650 | How to adjust the formatting of variable number of arguments? | <p>With this list as an input :</p>
<pre><code>my_list = [
['a', 'b'],
['c', 'd', 'e'],
['f', 'g'],
['h']
]
</code></pre>
<p>I'm trying to print adjusted to the left strings but there are missing values :</p>
<pre><code>for i in my_list:
print('{:<10}'.format(*i))
a
c
f
h
</code></pre>
<p>Why isn't <code>format</code> capable of deducing the length of each nested list?</p>
<p>My expected output :</p>
<pre><code>a b
c d e
f g
h
</code></pre>
<p>I'm looking for a robust solution with <code>{:<10}</code> syntax rather than this :</p>
<pre><code>for i in my_list:
print(*i, sep=' '*10)
</code></pre>
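<p>The closest I've gotten while keeping the <code>{:<10}</code> syntax is to build the format string per row, since <code>format</code> fills exactly as many placeholders as the string contains:</p>

```python
my_list = [
    ['a', 'b'],
    ['c', 'd', 'e'],
    ['f', 'g'],
    ['h']
]

# One '{:<10}' placeholder per element, then strip the trailing padding.
lines = [('{:<10}' * len(row)).format(*row).rstrip() for row in my_list]
for line in lines:
    print(line)
```

<p>Is there a way to get this behaviour without assembling the format string by hand?</p>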
| <python><string><formatting> | 2023-09-20 21:20:58 | 2 | 1,085 | VERBOSE |
77,145,965 | 181,309 | Does Pandas to_csv in append mode first read the target file into memory? | <p>I'm hitting out-of-memory errors when working on a big data set in chunks. The pattern in the memory chart makes me think that appending data to a file using to_csv is actually reading the existing file into memory, appending in-memory, and then dumping it all out to disk. Is that the way Pandas handles appends to CSVs?</p>
<p><a href="https://i.sstatic.net/6BD6B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6BD6B.png" alt="memory" /></a></p>
<p>There are 11 peaks and dips, and this process happens to crash on the 11th iteration.</p>
<pre><code> dfd,dump=tempfile.mkstemp(".csv")
n_chunks=getNumChunks()
print(f"Working in {n_chunks} iteration(s).")
for current_chunk in range(1,n_chunks+1):
print(f"Iteration {current_chunk} of {n_chunks} ({current_chunk/n_chunks*100:.2f}%)")
full=mydb.query(f"""
select colA,colB
from mytable
where chunk={current_chunk}
""")
if full.empty:
continue
#Get a dataset with the required input cols.
score_me = full[input_cols].copy()
print(f"Scoring {len(full)} records.")
full['value']=clf.predict(score_me)
new=full[['colA','colB','other','cols','tokeep']]
total_scored+=new.shape[0]
#This is where I append:
new.to_csv(dump,mode="a+",header=False,index=False)
new=full=score_me=None
#End of chunks loop
print(f"Finished scoring {total_scored:,.0f} records.")
os.close(dfd)
os.remove(dump)
</code></pre>
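<p>To sanity-check the append behaviour in isolation, I ran this minimal version (toy data, not my real pipeline); the output file simply accumulates rows, which would suggest the memory pressure comes from the chunks themselves rather than from re-reading the file:</p>

```python
import os
import tempfile

import pandas as pd

fd, path = tempfile.mkstemp(suffix=".csv")
os.close(fd)  # to_csv reopens the file by path

for chunk in (pd.DataFrame({"a": [1, 2]}), pd.DataFrame({"a": [3]})):
    # mode="a" opens the file for appending only.
    chunk.to_csv(path, mode="a", header=False, index=False)

with open(path) as f:
    lines = f.read().splitlines()
os.remove(path)
```

<p>Does this small check actually rule out the read-back hypothesis, or could the behaviour differ at larger sizes?</p>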
| <python><pandas> | 2023-09-20 21:16:39 | 1 | 1,483 | Chris |
77,145,961 | 16,611,809 | Installing PyQt5 on 'arm64' stopped working all of a sudden (some 'sip' issue) | <p>this is a pretty macOS specific question, but I didn't know where to put it. I have an Python script that uses PyQt5. Unfortunately there is no wheel for <code>pyqt5</code> to install it with <code>pip</code> so you have to build from source, if you are on <code>arm64</code>. This is how I successfully did it till today:</p>
<ol>
<li><p>It requires <code>qmake</code>, which I installed via <code>brew</code> (<code>brew install qt5</code>). After adding this to my path I can execute it (<code>which qmake</code> shows the correct path).</p>
</li>
<li><p>Then I install pyqt5 via pip with this command: <code>pip install pyqt5 --config-settings --confirm-license= --verbose</code> (pyqt5 asks for license agreement, but pip install is not interactive, hence the long command).
As I said, till last week I could do this successfully. When I tried this today I got the error:</p>
</li>
</ol>
<pre><code> The dbus-python package does not seem to be installed.
These bindings will be built: Qt, pylupdate, pyrcc.
Generating the Qt bindings...
Generating the pylupdate bindings...
_in_process.py: /private/var/folders/ws/vdb_nvyj35g9ck_srpvqpccm0000gn/T/pip-install-jr3725ba/pyqt5_7d0f0bcc5a7241bd8afa726e0fa5e8d1/sip/QtCore/qprocess.sip: line 99: column 5: 'Q_PID' is undefined
error: subprocess-exited-with-error
</code></pre>
<p>It seems that something <code>sip</code>-related changed. The only thing that has changed on my system was an update from Xcode 14 to Xcode 15. I don't really see why this should affect pyqt5 or sip, but it's the only thing I can think of.</p>
<p>I also tried to install <code>sip</code> via brew additionally, but this did not change anything.</p>
<p>Any ideas?</p>
| <python><pyqt5><sip><qmake><apple-silicon> | 2023-09-20 21:15:05 | 1 | 627 | gernophil |
77,145,750 | 5,924,264 | How to check 2 column values in a sql table aren't duplicated | <p>I attached a <code>src</code> database to a <code>destination</code> database. Each one contains a table with the columns <code>time</code> and <code>id</code>. I want to check that there is no row in <code>src</code> that has the same <code>time, id</code> pair as a row in <code>destination</code>.</p>
<p>I tried the following query:</p>
<pre><code> select (s.time, s.id)
from src.table_name as s
where (s.time, s.id) in (select (d.time, d.id) from table_name as d)
limit 1
</code></pre>
<p>but it says <code>E OperationalError: sub-select returns 1 columns - expected 2</code>. I also tried</p>
<pre><code> select (s.time, s.id)
from src.table_name as s
where (s.time, s.id) in (select d.time, d.id from table_name as d)
limit 1
</code></pre>
<p>and now it says
<code>E OperationalError: row value misused</code></p>
<p>Is the correct syntax</p>
<pre><code> select s.time, s.id
from src.table_name as s
where (s.time, s.id) in (select d.time, d.id from table_name as d)
limit 1
</code></pre>
<p>?</p>
<p>This seems to work, but I'm not totally sure this is doing what I want it to do.</p>
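<p>The only variant I actually trust so far is a correlated <code>EXISTS</code>, tried on toy tables:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dest (time INTEGER, id INTEGER);
    CREATE TABLE src  (time INTEGER, id INTEGER);
    INSERT INTO dest VALUES (1, 10), (2, 20);
    INSERT INTO src  VALUES (2, 20), (3, 30);
""")

# Rows in src whose (time, id) pair also appears in dest.
dupes = conn.execute("""
    SELECT s.time, s.id
    FROM src AS s
    WHERE EXISTS (SELECT 1 FROM dest AS d
                  WHERE d.time = s.time AND d.id = s.id)
""").fetchall()
```

<p>Is that equivalent to the row-value <code>IN</code> form I was trying?</p>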
| <python><sql> | 2023-09-20 20:26:04 | 1 | 2,502 | roulette01 |
77,145,680 | 3,788 | Algorithm to flatten JSON arrays to rows | <p>I'm writing an algorithm in pure Python to do something similar to Clickhouse's <a href="https://clickhouse.com/docs/en/sql-reference/functions/array-join" rel="nofollow noreferrer">arrayJoin</a> function: split nested arrays into individual lines.</p>
<p>For example, with this input:</p>
<pre class="lang-json prettyprint-override"><code>{
"A": "B",
"C": [{"D": "E"}, {"F": "G"}, "H", {"D": "I"}]
}
</code></pre>
<p>I am looking to create the following output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>C</th>
<th>C.D</th>
<th>C.F</th>
</tr>
</thead>
<tbody>
<tr>
<td>B</td>
<td>H</td>
<td></td>
<td></td>
</tr>
<tr>
<td>B</td>
<td></td>
<td>E</td>
<td></td>
</tr>
<tr>
<td>B</td>
<td></td>
<td>I</td>
<td></td>
</tr>
<tr>
<td>B</td>
<td></td>
<td></td>
<td>G</td>
</tr>
</tbody>
</table>
</div>
<p>I feel like I am almost there with this code to flatten json:</p>
<pre class="lang-py prettyprint-override"><code>def f(j, path=None):
if path is None:
path = []
if isinstance(j, dict):
for k, v in j.items():
f(v, path+[k])
elif isinstance(j, list):
for i, item in enumerate(j):
f(item, path+[i])
else:
print(tuple(path), j)
</code></pre>
<p>The trouble I'm running into is knowing which elements should be grouped together. For example, how can I ensure <code>C</code>, <code>C.D</code>, and <code>C.F</code> are used to split data on separate rows?</p>
<p>I'm looking for a pure python (or any language really) solution - not using pandas since my final code will probably not be in python. (Also: this is not homework! It's a personal project I'm working on to store JSON in Postgres and I'm trying to figure out how to do the translation!)</p>
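<p>Here is my current rough attempt at the grouping part (a sketch assuming array elements should cross-multiply the rows accumulated so far, the way <code>arrayJoin</code> does):</p>

```python
def flatten(obj, prefix=""):
    """Yield one flat dict (row) per combination of array elements."""
    if isinstance(obj, dict):
        rows = [{}]
        for k, v in obj.items():
            key = f"{prefix}.{k}" if prefix else k
            # Each alternative produced by the value multiplies the rows.
            rows = [{**r, **sub} for sub in flatten(v, key) for r in rows]
        yield from rows
    elif isinstance(obj, list):
        for item in obj:
            yield from flatten(item, prefix)
    else:
        yield {prefix: obj}

data = {"A": "B", "C": [{"D": "E"}, {"F": "G"}, "H", {"D": "I"}]}
rows = list(flatten(data))
```

<p>On the example above this yields the four rows from the table, but I haven't convinced myself it is right for deeper nesting.</p>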
| <python><json><algorithm> | 2023-09-20 20:13:12 | 1 | 19,469 | poundifdef |
77,145,578 | 2,459,179 | Configuration for Kubernetes Flask and Prometheus | <p>I am trying to configure a prometheus to monitor a simple flask app, but its very weird that the prometheus does show the service in the drop down, but it shows nothing when I click in the dropdown:</p>
<p><a href="https://i.sstatic.net/KeuEB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KeuEB.png" alt="blank" /></a></p>
<p>Here is the code:</p>
<p>app.py</p>
<pre><code>from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics
app = Flask(__name__)
metrics = PrometheusMetrics(app)
@app.route('/')
def hello():
return "Hello World!"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5000)
</code></pre>
<p>flask-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: flask-app
spec:
replicas: 1
selector:
matchLabels:
app: flask-app
template:
metadata:
labels:
app: flask-app
spec:
containers:
- name: flask-app
image: starian/flask-app1
ports:
- containerPort: 5000
name: webapp
</code></pre>
<p>flask-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: flask-app-service
spec:
selector:
app: flask-app
ports:
- name: http
protocol: TCP
port: 80
targetPort: webapp
type: LoadBalancer
</code></pre>
<p>service-monitor.yaml</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: flask-app
labels:
release: prometheus-stack
spec:
selector:
matchLabels:
app: flask-app
endpoints:
- port: http
interval: 5s
path: /metrics
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM python:3.8-slim
WORKDIR /app
COPY . /app
RUN pip install flask
RUN pip install prometheus_flask_exporter
CMD ["python", "app.py"]
</code></pre>
<p>I checked everything that I know of:
The metrics do show in the Flask app; I can curl them.
All the services are up.</p>
<p>What could be going wrong?</p>
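<p>One thing I am unsure about (an assumption on my part, not a confirmed diagnosis): the ServiceMonitor's <code>spec.selector</code> matches labels on the <em>Service</em>, but my <code>flask-app-service</code> declares no labels at all. A variant I am considering:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
  labels:
    app: flask-app   # label for the ServiceMonitor selector to match
spec:
  selector:
    app: flask-app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: webapp
  type: LoadBalancer
```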
| <python><kubernetes><flask><prometheus> | 2023-09-20 19:55:31 | 1 | 307 | user2459179 |
77,145,404 | 15,189,432 | How can I retrieve "TSV SCHOTT Mainz" from HTML using Python | <p><a href="https://i.sstatic.net/2KDgz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2KDgz.png" alt="Screenshot HTML Code" /></a></p>
<p>Hello,</p>
<p>I can't find a way to retrieve the words "TSV SCHOTT Mainz" from the HTML code because I don't understand which section to target here. I've tried the following:</p>
<pre><code> import requests
from bs4 import BeautifulSoup
# URL of the Borussia Dortmund "Alle Spiele" page
url = "https://www.bvb.de/Spiele/Alle-Spiele"
# Send an HTTP GET request to fetch the web page content
response = requests.get(url)
# Check if the request was successful
if response.status_code == 200:
# Parse the HTML content of the page
soup = BeautifulSoup(response.text, "html.parser")
# Locate the section of the page containing game information
game_section = soup.find("table", class_="statistics statistics-matchday")
# Check if the game_section exists
if game_section:
# Loop through the game information and extract details
for game in game_section.find_all("tr", class_="pointer"):
#date = game.find("td", class_="2 5").text.strip()
            opponent = game.find("img", class_="opt").text.strip()
</code></pre>
| <python><html><python-requests> | 2023-09-20 19:23:22 | 1 | 361 | Theresa_S |
77,145,378 | 4,076,764 | Building a Wheel File with extras_require using Tox | <p>I have a Python multi-module project with a setup.py file that defines extra dependencies using <code>extras_require</code>. I want to use Tox to build a wheel file that includes these extra dependencies. However, I'm not sure how to achieve this with Tox.</p>
<p>Here's my project structure:</p>
<pre><code>my_project/
├── setup.py
├── core/
│ ├── __init__.py
│ └── core_code.py
├── api/
│ ├── __init__.py
│ └── api_code.py
├── tests/
│ ├── __init__.py
│ ├── test_core_code.py
│ ├── test_api_code.py
│ └── tox.ini
</code></pre>
<p>It can be installed via <code>pip install .</code> or <code>pip install .[api]</code></p>
<p>Here's an example <code>setup.py</code>:</p>
<pre><code>from setuptools import setup, find_packages

setup(
name='my_project',
version='0.1.0',
packages=find_packages(),
install_requires=[
# core dependencies
'requests',
],
extras_require={
'api': [
# additional dependencies for the API
'flask',
'fastapi',
],
},
)
</code></pre>
<p>Using tox, I'd like to build the wheel file for the core module only, which is working fine.</p>
<pre><code>[tox]
envlist = py39
[testenv]
description = 'Builds a core wheel file (from setup.py) and runs unit tests.'
package = wheel
deps = pytest
commands =
pytest tests
</code></pre>
<p>However, I'd also like to create a wheel file for the full module, that is, the core + extras. Is there some builtin way to tell tox to build the wheel file and include the <code>extras_require</code> extras so that the <code>[api]</code> profile is built?</p>
<pre><code>[testenv:api-full]
description = 'Builds a wheel file with extra_requires and runs tests.'
package = wheel [api] #<--- pseudocode
deps = pytest
commands =
</code></pre>
| <python><setup.py><tox> | 2023-09-20 19:18:19 | 0 | 16,527 | Adam Hughes |
77,145,360 | 788,153 | looking for an efficient way to split columns in a text in pandas | <p>I have a pandas dataframe and want to split the <code>text</code> column so that each row has just two words. When splitting, I need to maintain the order so that I can combine them together based on <code>line</code>. Is there an efficient way to do this? I can do a list comprehension, but I was looking for a more efficient way. Thanks</p>
<pre><code>df = pd.DataFrame({'col1':[22,23,44], 'col2': ['rr','gg','xx'], 'text': ['this is a sample text', 'this is another one','third example is a longer text']})
</code></pre>
<p><a href="https://i.sstatic.net/pk63c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pk63c.png" alt="enter image description here" /></a></p>
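<p>For concreteness, the list-comprehension version I have in mind (and would like to beat) looks roughly like this; the <code>line</code> column is just a running chunk index I use to restore the order later:</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': [22, 23, 44],
                   'col2': ['rr', 'gg', 'xx'],
                   'text': ['this is a sample text',
                            'this is another one',
                            'third example is a longer text']})

# split each text into two-word chunks, keeping a running index in 'line'
rows = [
    (r.col1, r.col2, i // 2, ' '.join(words[i:i + 2]))
    for r in df.itertuples(index=False)
    for words in [r.text.split()]      # bind the split once per row
    for i in range(0, len(words), 2)
]
out = pd.DataFrame(rows, columns=['col1', 'col2', 'line', 'text'])
```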
| <python><pandas><nlp> | 2023-09-20 19:14:38 | 3 | 2,762 | learner |
77,145,314 | 8,236,050 | Previously working selenium script suddenly fails to establish a new connection | <p>I have been working with my python Selenium script for a while now, but suddenly I have started getting this error:</p>
<blockquote>
<p>urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost',
port=56467): Max retries exceeded with url:
/session/c159fe7f3ae326faada8651af9241ac9/url (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at
0x7f5e45e868f0>: Failed to establish a new connection: [Errno 111]
Connection refused'))</p>
</blockquote>
<p>I have checked my webdriver, which seems to be correctly initialized, and the URL is also correct, but for some reason, I get the said error. This is my main script:</p>
<pre><code> options = webdriver.ChromeOptions()
options.add_argument('--auto-show-cursor')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome('chromedriver',options=options)
driver.get(url)
print("checking error")
print(driver)
print("URL")
print(driver.current_url)
errors = []
source = driver.page_source # The error is thrown in this line
</code></pre>
| <python><selenium-webdriver> | 2023-09-20 19:04:12 | 1 | 513 | pepito |
77,145,042 | 931,123 | How do I use .copy() method of a subclass of the list object in Python 2.x? | <p>I'm trying to use the .copy() method on a list subclass, but Python 2 is saying the copy() method doesn't exist.</p>
<pre><code>class MyList(list):
pass
mylist = MyList()
mylist.append(1)
mylist.append("two")
print(str(mylist[0]) + " " + mylist[1])
mylist2 = mylist.copy()
mylist2.append("C")
print(str(mylist2[0]) + " " + mylist2[1] + " " + mylist2[2])
</code></pre>
<p>Python 2.6 result:</p>
<pre><code>> python foo.py
1 two
Traceback (most recent call last):
File "foo.py", line 48, in <module>
mylist2 = mylist.copy()
AttributeError: 'MyList' object has no attribute 'copy'
</code></pre>
<p>Python 3.11 result:</p>
<pre><code>1 two
1 two C
</code></pre>
<p>(Please don't simply tell me to upgrade to Python 3; we have our reasons.)</p>
<p>Note: Although similar, this is not a duplicate of
<a href="https://stackoverflow.com/questions/52805119/list-object-has-no-attribute-copy">this question</a>, because that question is asked and answered in the Django context, and this is straight Python. Also, the suggested answer to that question is not as comprehensive as the accepted answer here.</p>
| <python><list><copy> | 2023-09-20 18:13:23 | 1 | 775 | livefree75 |
77,144,952 | 8,382,028 | Building a Django Q object for query | <p>I am having an issue wrapping my head how to build a <code>Q</code> query in Django while setting it up as a dict.</p>
<p>For example: I have a list of pks for properties and I am trying to filter and see if those pks are associated with either item = <code>['6', '21', '8', '13', '7', '11', '10', '15', '22']</code></p>
<p>I am trying to build a Q object that states:</p>
<p><code>Q(accounts_payable_line_item__property__pk__in=properties) | Q(journal_line_item__property__pk__in=properties)</code>, while defining it in a dict I can pass to the queryset like so:</p>
<p>If I define a filter dict and unpack it into <code>filter()</code>, the conditions are ANDed together, and no <code>Q</code> objects are involved:</p>
<pre><code>filter_dict = {}
filter_dict['accounts_payable_line_item__property__pk__in'] = properties
filter_dict['journal_line_item__property__pk__in'] = properties
queryset = queryset.select_related('accounts_payable_line_item', 'journal_line_item').filter(**filter_dict).order_by('-id')
</code></pre>
| <python><django> | 2023-09-20 18:00:19 | 1 | 3,060 | ViaTech |
77,144,903 | 21,540,734 | Capturing a screenshot of a fullscreen game fails | <p>This works on a normal window, but in a fullscreen game it returns a black screen, so I changed <code>dtype = numpy.uint8</code> to <code>dtype = numpy.uint16</code>, and now I'm getting this error. I think it has something to do with the 4 in the tuple.</p>
<pre><code>File "C:\Users\phpjunkie\Python\test_debug\testing\test12.py", line 22, in CaptureWwindow
output.shape = (height, width, 4)
^^^^^^^^^^^^
ValueError: cannot reshape array of size 12288000 into shape (1600,3840,4)
</code></pre>
<p>Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>import phpjunkie.Win32 as Win32
import cv2 as cv
import numpy
import win32gui
import win32ui
import win32con
def CaptureWwindow(hwnd: int):
left, top, right, bottom = win32gui.GetWindowRect(hwnd)
width, height = right - left, bottom - top
wdc = win32gui.GetWindowDC(hwnd)
dc = win32ui.CreateDCFromHandle(wdc)
cdc = dc.CreateCompatibleDC()
bitmap = win32ui.CreateBitmap()
bitmap.CreateCompatibleBitmap(dc, width, height)
cdc.SelectObject(bitmap)
cdc.BitBlt((0, 0), (width, height), dc, (0, 0), win32con.SRCCOPY)
output = numpy.frombuffer(bitmap.GetBitmapBits(True), dtype = numpy.uint16)
output.shape = (height, width, 4)
dc.DeleteDC()
cdc.DeleteDC()
win32gui.ReleaseDC(hwnd, wdc)
win32gui.DeleteObject(bitmap.GetHandle())
return output
hwnd = Win32.GetWindowHandle(partial = 'Marvel\'s Spider-Man Remastered')
screenshot = CaptureWwindow(hwnd)
screenshot = (screenshot >> 2).astype(numpy.uint8)
cv.imshow('screenshot', screenshot)
print(screenshot.dtype)
print(screenshot.shape)
cv.waitKey()
</code></pre>
| <python><winapi><pywin32><screen-capture><game-automation> | 2023-09-20 17:52:39 | 1 | 425 | phpjunkie |
77,144,861 | 6,351,763 | dbt - pass return value of dbt macro to python | <p>Since version 1.5, dbt can be invoked directly from within a python script, like:</p>
<pre class="lang-py prettyprint-override"><code>from dbt.cli.main import dbtRunner, dbtRunnerResult
# initialize
dbt = dbtRunner()
# create CLI args as a list of strings
cli_args = ["run-operation", "get_something_macro"]
# run the command
res: dbtRunnerResult = dbt.invoke(cli_args)
# inspect the results
for r in res.result:
print(f"{r}")
</code></pre>
<p>See:
<a href="https://docs.getdbt.com/reference/programmatic-invocations" rel="noreferrer">https://docs.getdbt.com/reference/programmatic-invocations</a></p>
<p>Let's say I have a simple macro that returns something, for example:</p>
<pre><code>{% macro get_current_catalog() %}
{{ return(target.catalog) }}
{% endmacro %}
</code></pre>
<p>How do I get the return value into python?</p>
<p>My only idea so far is using <code>print("...")</code> or <code>log("...", info=True)</code> within the macro and capture stdout. But I thought there has to be a better way...</p>
<p>Maybe some of you guys / gals know.</p>
| <python><dbt> | 2023-09-20 17:46:40 | 2 | 1,631 | NemesisMF |
77,144,665 | 785,494 | How can I run Python as root (or sudo) while still using my local pip? | <p><code>pip</code> is recommended to be run under local user, not root. This means if you do <code>sudo python ...</code>, you will <em>not</em> have access to your Python libs installed via pip.</p>
<p>How can I run Python (or a pip installed <code>bin/</code> command) under root / sudo (when needed) while having access to my pip libraries?</p>
| <python><pip><root> | 2023-09-20 17:17:50 | 2 | 9,357 | SRobertJames |
77,144,614 | 5,924,264 | Passing in private attributes into a utility function | <p>I have 3 classes with a merge operation that does identical operations with different arguments.</p>
<p>Here's the skeleton code of what I currently do:</p>
<pre><code>class FirstClass:
def __init__(self):
# define constructor
def merge(self, other):
# merge other into self
merge_util(src=other, dest=self)
class SecondClass:
def __init__(self):
# define constructor
def merge(self, other):
# merge other into self
merge_util(src=other, dest=self)
# ThirdClass defined identically
</code></pre>
<p><code>merge_util</code> is defined as follows:</p>
<pre><code>def merge_util(src, dest):
# merge src's private attributes (an array, list, and database table) into dest
</code></pre>
<p>The problem is this design seems to violate encapsulation. An external function is accessing private/protected attributes.</p>
<p>I've thought of 2 solutions:</p>
<ol>
<li><p>Define <code>merge_util</code> as a method of each class, but then I'd be duplicating code for each class.</p>
</li>
<li><p>Use inheritance. These classes have few similarities otherwise, which is why I didn't use inheritance from the beginning and want to avoid this.</p>
</li>
</ol>
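<p>To make option 2 concrete, what I'd end up writing is roughly a mixin like the following (a sketch; <code>_data</code> stands in for the real private array/list/table attributes):</p>

```python
class MergeMixin:
    """Shared merge behaviour; assumes each subclass defines self._data."""

    def merge(self, other):
        # merge other's private attribute into self, no external function needed
        self._data.extend(other._data)


class FirstClass(MergeMixin):
    def __init__(self):
        self._data = []


class SecondClass(MergeMixin):
    def __init__(self):
        self._data = []
```

<p>This avoids the external <code>merge_util</code>, but it still couples otherwise-unrelated classes through a common base, which is what I wanted to avoid.</p>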
<p>Are there other approaches I can consider?</p>
| <python><oop><encapsulation><private-members> | 2023-09-20 17:10:17 | 1 | 2,502 | roulette01 |
77,144,609 | 2,687,317 | Finding a series value in a pandas df that includes a date/time field | <p>I have a pandas df like this:</p>
<pre><code> Date_Time Time_slot Veh_ID Pwr Group_ID
0 2023-03-30 00:00:01 1 100 10 100_1
1 2023-03-30 00:00:01 2 100 12 100_1
2 2023-03-30 00:00:05 1 100 3 100_1
3 2023-03-30 00:00:05 2 100 13 100_1
4 2023-03-30 00:00:22 1 100 22 100_1
5 2023-03-30 00:00:22 2 100 13 100_1
6 2023-03-30 00:00:01 1 55 8 55_1
7 2023-03-30 00:00:01 2 55 2 55_1
8 2023-03-30 00:00:05 1 55 12 55_1
9 2023-03-30 00:00:05 2 55 11 55_1
10 2023-03-30 00:22:00 1 100 7 100_2
11 2023-03-30 00:22:00 2 100 6 100_2
12 2023-03-30 00:25:00 1 100 11 100_2
13 2023-03-30 00:25:00 2 100 14 100_2
14 2023-03-30 00:23:00 1 55 7 55_2
15 2023-03-30 00:23:00 2 55 9 55_2
16 2023-03-30 00:35:00 1 55 9 55_2
17 2023-03-30 00:35:00 2 55 13 55_2
18 2023-03-30 01:35:00 1 55 10 55_2
19 2023-03-30 01:35:00 2 55 9 55_2
</code></pre>
<p>I need to quickly find the Group_ID that encompasses the date/time and Veh_ID the user specifies. By <em>encompasses</em>, I mean the date/time should be within the span of the Group_ID for that Veh_ID: i.e. -- the time should fall in the <em>start</em> to <em>end</em> range of times for the group that begins with the Veh_ID specified. So</p>
<pre><code>def findGroup(date_time, veh):
...
return grpid
</code></pre>
<p>for</p>
<pre><code>findGroup(pd.to_datetime('2023-03-30 00:23:00'),100)
</code></pre>
<p>returns <code>100_2</code></p>
<p>I could do this with a loop through each group getting the start/stop times and doing a "between" check, but this seems inefficient.</p>
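<p>For reference, the loop version I'm trying to avoid looks like this (a sketch over a trimmed-down frame, assuming <code>Date_Time</code> is already a datetime column):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Date_Time': pd.to_datetime(['2023-03-30 00:00:01', '2023-03-30 00:00:22',
                                 '2023-03-30 00:22:00', '2023-03-30 00:25:00']),
    'Veh_ID': [100, 100, 100, 100],
    'Group_ID': ['100_1', '100_1', '100_2', '100_2'],
})

def findGroup(date_time, veh):
    # linear scan over each group's start/end times: the part that feels slow
    for grpid, grp in df[df['Veh_ID'] == veh].groupby('Group_ID'):
        if grp['Date_Time'].min() <= date_time <= grp['Date_Time'].max():
            return grpid
    return None
```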
<p>Thx all.</p>
| <python><pandas> | 2023-09-20 17:09:41 | 1 | 533 | earnric |
77,144,496 | 10,765,629 | Vscode tooltip not rendering markdown format when hovering in functions | <p>I used to document my functions (in Python, at least) using a Markdown format,
and it rendered correctly, showing the heading levels, the variables, and so on.
I don't know why it stopped working.</p>
<p>And it shows like this:</p>
<p><a href="https://i.sstatic.net/TiDtN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TiDtN.png" alt="screenshot showing raw Markdown in the Hover" /></a></p>
| <python><visual-studio-code> | 2023-09-20 16:52:07 | 1 | 710 | z_tjona |
77,144,494 | 3,121,975 | Printing a datetime, with timezone, but excluding the timezone name | <p>I have a datetime object that I've created like so:</p>
<pre><code>dt = datetime(2023, 9, 18, 22, 30, 0, tzinfo=timezone(timedelta(hours=9)))
</code></pre>
<p>This corresponds to September 18th, 2023 at 10:30PM JST. I want to print this datetime like this:</p>
<blockquote>
<p>2023-09-18 22:30:00+09:00</p>
</blockquote>
<p>I attempted the following options:</p>
<pre><code>dt.strftime("%Y-%m-%d %H:%M:%S%Z") # 2023-09-18 22:30:00UTC+09:00
dt.strftime("%Y-%m-%d %H:%M:%S%z") # 2023-09-18 22:30:00+0900
</code></pre>
<p>Neither of these is the format I want, and now I either have to remove <code>UTC</code> from the first one or add a colon to the third-to-last position in the second one. Is there a way to get the output I want without having to resort to string manipulation?</p>
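<p>For completeness, the string-manipulation fallback I'm currently using (and hoping to replace with something built in):</p>

```python
from datetime import datetime, timezone, timedelta

dt = datetime(2023, 9, 18, 22, 30, 0, tzinfo=timezone(timedelta(hours=9)))

# take the %z form and splice a colon into the offset by hand
s = dt.strftime("%Y-%m-%d %H:%M:%S%z")
s = s[:-2] + ":" + s[-2:]
print(s)  # 2023-09-18 22:30:00+09:00
```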
| <python><datetime><iso8601> | 2023-09-20 16:51:34 | 2 | 8,192 | Woody1193 |
77,144,478 | 1,188,943 | Issue in installing Facebook Seamless Communication on Mac | <p>I tried to install seamless_communication from Facebook on MacOS With Intel CPU. The error we get is:</p>
<pre><code> Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
INFO: pip is looking at multiple versions of fairseq2 to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement fairseq2n==0.1.1 (from fairseq2) (from versions: none)
ERROR: No matching distribution found for fairseq2n==0.1.1
</code></pre>
<p>any help is appreciated.</p>
| <python><facebook> | 2023-09-20 16:47:52 | 0 | 1,035 | Mahdi |
77,144,399 | 6,484,726 | Using Depends() for global constants in FastAPI | <p>I was researching implementations of stable FastAPI applications and found the Safir project, which provides a global HTTP client as a dependency using Depends().</p>
<p>Here is the page for the Safir project where they suggest doing so:
<a href="https://safir.lsst.io/user-guide/http-client.html#using-the-httpx-asyncclient" rel="nofollow noreferrer">https://safir.lsst.io/user-guide/http-client.html#using-the-httpx-asyncclient</a></p>
<p>and the dependency code itself:
<a href="https://github.com/lsst-sqre/safir/blob/main/src/safir/dependencies/http_client.py" rel="nofollow noreferrer">https://github.com/lsst-sqre/safir/blob/main/src/safir/dependencies/http_client.py</a></p>
<p>What are the benefits of using Depends() in this case?
Is there a significant difference from using the dependency directly? For example:</p>
<pre class="lang-py prettyprint-override"><code>from safir.dependencies.http_client import http_client_dependency
http_client = http_client_dependency()
@routes.get("/")
async def get_index() -> Dict[str, Any]:
response = await http_client.get("https://stackoverflow.com")
return await response.json()
</code></pre>
<p>Instead of Safir example:</p>
<pre class="lang-py prettyprint-override"><code>from safir.dependencies.http_client import http_client_dependency
@routes.get("/")
async def get_index(
http_client: httpx.AsyncClient = Depends(http_client_dependency),
) -> Dict[str, Any]:
response = await http_client.get("https://keeper.lsst.codes")
return await response.json()
</code></pre>
| <python><fastapi> | 2023-09-20 16:33:07 | 1 | 398 | hardhypochondria |
77,144,326 | 6,082,378 | sending logs to fluentd from python script via syslog protocol | <p>I have started the fluentd server with docker. My fluentd configuration is:</p>
<pre><code><source>
@type syslog
port 514
bind 0.0.0.0
tag system
</source>
<match **>
@type stdout
</match>
</code></pre>
<p>The command I used to start the <code>fluentd</code> server is</p>
<pre><code>docker run -p 514:514 -v $(pwd)/tmp:/fluentd/etc fluent/fluentd:edge-debian -c /fluentd/etc/fluentd.conf
</code></pre>
<p>It starts the server successfully and I get the log</p>
<pre><code>2023-09-20 16:05:18 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-09-20 16:05:18 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluentd.conf"
2023-09-20 16:05:18 +0000 [info]: gem 'fluentd' version '1.16.2'
2023-09-20 16:05:18 +0000 [warn]: define <match fluent.**> to capture fluentd logs in top level is deprecated. Use <label @FLUENT_LOG> instead
2023-09-20 16:05:18 +0000 [info]: using configuration file: <ROOT>
<source>
@type syslog
port 514
bind "0.0.0.0"
tag "system"
</source>
<match **>
@type stdout
</match>
</ROOT>
2023-09-20 16:05:18 +0000 [info]: starting fluentd-1.16.2 pid=7 ruby="3.1.4"
2023-09-20 16:05:18 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/local/bundle/bin/fluentd", "-c", "/fluentd/etc/fluentd.conf", "--plugin", "/fluentd/plugins", "--under-supervisor"]
2023-09-20 16:05:19 +0000 [info]: #0 init worker0 logger path=nil rotate_age=nil rotate_size=nil
2023-09-20 16:05:19 +0000 [info]: adding match pattern="**" type="stdout"
2023-09-20 16:05:19 +0000 [info]: adding source type="syslog"
2023-09-20 16:05:19 +0000 [warn]: #0 define <match fluent.**> to capture fluentd logs in top level is deprecated. Use <label @FLUENT_LOG> instead
2023-09-20 16:05:19 +0000 [info]: #0 starting fluentd worker pid=16 ppid=7 worker=0
2023-09-20 16:05:19 +0000 [info]: #0 listening syslog socket on 0.0.0.0:514 with udp
2023-09-20 16:05:19 +0000 [info]: #0 fluentd worker is now running worker=0
2023-09-20 16:05:19.058192424 +0000 fluent.info: {"pid":16,"ppid":7,"worker":0,"message":"starting fluentd worker pid=16 ppid=7 worker=0"}
2023-09-20 16:05:19.058414751 +0000 fluent.info: {"message":"listening syslog socket on 0.0.0.0:514 with udp"}
2023-09-20 16:05:19.059112948 +0000 fluent.info: {"worker":0,"message":"fluentd worker is now running worker=0"}
</code></pre>
<p>I have written a simple Python script to test the connection, as follows:</p>
<pre><code>import logging
import logging.handlers
import socket
if __name__ == "__main__":
syslogger = logging.getLogger('SyslogLogger')
handler = logging.handlers.SysLogHandler(address=("0.0.0.0", 514), facility=19, socktype=socket.SOCK_DGRAM)
syslogger.addHandler(handler)
syslogger.info("Hello World")
</code></pre>
<p>The script runs without any error, but I don't get any log on the fluentd side.</p>
<p>Server and script both are on a local machine.</p>
| <python><fluentd><syslog> | 2023-09-20 16:21:26 | 1 | 774 | Jayendra Parmar |
77,144,116 | 9,640,238 | Returning a DataFrame from Series.apply when the supplied function returns a Series is deprecated | <p>The way I have always split a column containing lists into multiple columns is:</p>
<pre class="lang-py prettyprint-override"><code>df['column_with_lists'].apply(pd.Series)
</code></pre>
<p>This returns a new dataframe that can then be concatenated.</p>
<p>With <code>pandas 2.1</code>, this now raises:</p>
<blockquote>
<p><code>FutureWarning</code>: Returning a DataFrame from Series.apply when the supplied function returns a Series is deprecated and will be removed in a future version.</p>
</blockquote>
<p>What is now the recommended way to split a column containing lists?</p>
| <python><pandas><dataframe> | 2023-09-20 15:50:05 | 1 | 2,690 | mrgou |
77,144,021 | 8,318,946 | how to filter and sort fields in django-filter that are not in Django model? | <p>I am trying to filter values in 2 fields that I am creating in my serializer, and I am wondering how to add <code>last_run</code> and <code>last_status</code> to the filtering and sorting fieldsets, but when I add them I get a <code>Cannot resolve keyword 'last_status' into field.</code> error.</p>
<p>Is there any option to annotate these 2 fields in my ListAPIView so I can sort and filter by them?</p>
<p>sample API data</p>
<pre><code>{
"id": 2,
"user": 1,
"project": 3,
"last_run": "17-08-2023 16:45",
"last_status": "SUCCESS",
"name": "test spider",
"creation_date": "10-08-2023 12:36",
},
</code></pre>
<p>models.py</p>
<pre><code>class Spider(models.Model):
name = models.CharField(max_length=200, default="", unique=True)
user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, default='')
project = models.ForeignKey(Project, on_delete=models.CASCADE, blank=True, null=True, related_name='project_spider')
creation_date = models.DateTimeField(default=timezone.now)
</code></pre>
<p>serializers.py</p>
<pre><code>class SpiderListSerializer(serializers.ModelSerializer):
user = serializers.PrimaryKeyRelatedField(queryset=User.objects.all())
last_run = serializers.SerializerMethodField()
last_status = serializers.SerializerMethodField()
class Meta:
model = Spider
fields = "__all__"
def get_last_run(self, instance):
return get_last_spider_status(instance.id)[0]
def get_last_status(self, instance):
return get_last_spider_status(instance.id)[1]
</code></pre>
<p>filters.py</p>
<pre><code>from django_filters import rest_framework as filters
from .models import Spider
class SpiderFilter(filters.FilterSet):
name = filters.CharFilter(field_name='name', lookup_expr='icontains')
user = filters.NumberFilter(field_name='user__id', lookup_expr='icontains')
project = filters.NumberFilter(field_name='project__id', lookup_expr='icontains')
creation_date = filters.DateFilter(
field_name='creation_date',
input_formats=['%d-%m-%Y'],
lookup_expr='icontains'
)
class Meta:
model = Spider
fields = []
</code></pre>
<p>views.py</p>
<pre><code>class SpiderListView(ListCreateAPIView):
permission_classes = [IsAuthenticated]
serializer_class = SpiderListSerializer
filter_backends = [DjangoFilterBackend, OrderingFilter]
filterset_class = SpiderFilter
ordering_fields = ['name', 'user', 'project', 'creation_date'] # how to add 'last_run' and 'last_status' fields to this list
</code></pre>
| <python><django> | 2023-09-20 15:37:29 | 1 | 917 | Adrian |
77,143,904 | 2,230,567 | Find GPS coordinates of a local point using 2 reference points | <p><a href="https://i.sstatic.net/8SQMJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8SQMJ.jpg" alt="enter image description here" /></a></p>
<p>Given 2 reference points with known local coordinates and GPS (Decimal Degrees lat,lon) how can you calculate the GPS coordinates of any local point?</p>
<p>Note that I am interested in small areas far from the poles, so the Earth curvature can be omitted for simplicity.</p>
<p>Thank you in advance.</p>
<p>UPDATE: I added a drawing of what I am trying to do. QR1 and QR2 have both local coordinates and GPS. I am looking for points A,B,C and D (essentially any local point).</p>
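<p>To clarify what I mean by a transformation: with curvature ignored, the two reference points should determine a similarity transform (rotation + uniform scale + translation). Here's a sketch of the idea, treating 2D points as complex numbers (and ignoring, for the moment, that a degree of longitude shrinks by cos(lat) away from the equator):</p>

```python
def fit_similarity(p1_local, p1_gps, p2_local, p2_gps):
    """Return a function mapping local (x, y) to (lat, lon).

    A 2D similarity transform is z -> a*z + b over complex numbers,
    where a encodes rotation+scale and b encodes translation.
    (lat, lon) is packed as lat + lon*1j, which is only valid once
    the longitude scale distortion has been accounted for.
    """
    zl1, zl2 = complex(*p1_local), complex(*p2_local)
    zg1, zg2 = complex(*p1_gps), complex(*p2_gps)
    a = (zg2 - zg1) / (zl2 - zl1)
    b = zg1 - a * zl1

    def to_gps(p):
        z = a * complex(*p) + b
        return (z.real, z.imag)

    return to_gps
```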
| <python><algorithm><gps><coordinates><transformation> | 2023-09-20 15:22:37 | 1 | 404 | pdrak |
77,143,888 | 8,430,277 | Python, Selenium, Chrome: How to set driver location in Selenium 4.12 | <h3>Use case scenario</h3>
<p>Selenium 4.12 (Chrome driver) behind proxy authentication and <strong>no access to github.io</strong>.</p>
<h3>Problem</h3>
<p>Somewhere along the way, the <code>executable_path</code> arg in the <code>webdriver.Chrome</code> method was deprecated. Instead, the <code>webdriver_manager.chrome</code> module tries to download chromedriver.exe. In order to select the driver version, it accesses <a href="https://googlechromelabs.github.io/chrome-for-testing/known-good-versions-with-downloads.json" rel="nofollow noreferrer">https://googlechromelabs.github.io/chrome-for-testing/known-good-versions-with-downloads.json</a>, which I can't access due to company policies, therefore throwing an error.</p>
<h3>Question</h3>
<p>How to pass <code>chromedriver.exe</code> path to <code>webdriver.Chrome()</code> with selenium 4.12?</p>
| <python><selenium-webdriver><selenium-chromedriver> | 2023-09-20 15:21:16 | 1 | 1,450 | Alberson Miranda |
77,143,839 | 13,174,189 | How could i append to list any output that my function gives, including errors? | <p>I want to apply my function <code>func</code> to a list of values and collect the results. Sometimes it can raise errors: JSONDecodeError, IndexError, etc.</p>
<p>The list to which I collect results is <code>result_list</code>, and if there is an error, it should also be included.</p>
<p>So if I do:</p>
<pre><code>for value in values_list:
result_list.append(func(value))
</code></pre>
<p>and if there is an error (JSONDecodeError, IndexError, etc.), it should be appended to <code>result_list</code>.</p>
<p>How could I do that?</p>
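<p>The only approach I can come up with is wrapping every call in try/except, roughly like this (with a stand-in <code>func</code>); is that the idiomatic way?</p>

```python
def func(value):
    # stand-in for the real function; raises for bad input
    return 10 / value

values_list = [1, 2, 0, 5]
result_list = []

for value in values_list:
    try:
        result_list.append(func(value))
    except Exception as e:        # would catch JSONDecodeError, IndexError, ...
        result_list.append(e)     # the exception object itself goes in the list
```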
| <python><python-3.x><list><function><append> | 2023-09-20 15:16:52 | 2 | 1,199 | french_fries |
77,143,681 | 3,511,977 | How to Improve rigid registration for 2D images | <p>I am looking for a way to improve the rigid registration procedure for 2D images, and to evaluate metrics for better rigid registration, using SimpleITK. Here is my code:</p>
<pre><code>import SimpleITK as sitk
import matplotlib.pyplot as plt
import numpy as np
import os
moving_Image = sitk.ReadImage("mri_2d_moving.tif")
fixed_Image = sitk.ReadImage('mri_structural_2d.tif')
movingImageArray=sitk.GetArrayFromImage(moving_Image)
fixedImageArray=sitk.GetArrayFromImage(fixed_Image)
plt.imshow(movingImageArray,cmap='Greens', vmin=0, vmax=255)
plt.imshow(fixedImageArray, alpha=0.5,cmap='Oranges', vmin=0, vmax=255)
elastixImageFilter = sitk.ElastixImageFilter()
registration_method= sitk.ImageRegistrationMethod()
elastixImageFilter.SetFixedImage(fixed_Image)
elastixImageFilter.SetMovingImage(moving_Image)
elastixImageFilter.SetParameterMap(sitk.GetDefaultParameterMap("rigid"))
elastixImageFilter.Execute()
outputImage=elastixImageFilter.GetResultImage()
outputImageArray=sitk.GetArrayFromImage(outputImage)
plt.imshow(outputImageArray,cmap='Greens', vmin=0, vmax=255)
plt.imshow(fixedImageArray, alpha=0.5,cmap='Oranges', vmin=0, vmax=255)
</code></pre>
<p>The registration is terrible along the edges. How can I change the parameters to improve the registration? Any suggestions are welcome.</p>
<p>Imaging data can be downloaded from the below links</p>
<p><a href="https://imgur.com/YIueB43" rel="nofollow noreferrer">moving image</a>
<a href="https://imgur.com/WE7GaqO" rel="nofollow noreferrer">fixed image</a></p>
| <python><image-processing><scikit-image><simpleitk><image-registration> | 2023-09-20 14:55:26 | 0 | 355 | DevanDev |
77,143,649 | 4,555,441 | Overlay effect on image in python Pillow | <p>I am using Python to get an overlay effect on an image on mouse click. How can I increase the transparency of the color so the underlying image remains visible? Currently the underlying image is hidden beneath the overlay. I couldn't find an option to control the opacity.</p>
<pre><code>import tkinter as tk
from tkinter import filedialog
from PIL import Image, ImageDraw, ImageTk
class ImageEditor:
def __init__(self):
self.root = tk.Tk()
self.image = None
self.grid = None
self.colors = [(255, 0, 0), (255, 255, 0), (0, 0, 255)] # RGB colors
self.current_color_index = 0
self.display_image = None
self.canvas = tk.Canvas(self.root)
self.canvas.bind("<Button-1>", self.change_color)
self.canvas.pack(side='left')
self.load_btn = tk.Button(self.root, text='Load', command=self.load_image)
self.load_btn.pack(side='right')
self.save_btn = tk.Button(self.root, text='Save', command=self.save_image)
self.save_btn.pack(side='right')
self.root.mainloop()
def load_image(self):
img_path = filedialog.askopenfilename()
self.image = Image.open(img_path).convert("RGB") # Convert to RGB format
self.grid = Image.new("RGBA", self.image.size, (0, 0, 0, 0)) # Create a transparent overlay image
self.display_image = ImageTk.PhotoImage(self.image)
self.canvas.config(width=self.image.width, height=self.image.height)
self.canvas.create_image(0, 0, image=self.display_image, anchor='nw')
def change_color(self, event):
if self.image is None:
return
x, y = event.x, event.y
# Define the region around the mouse click (adjust as needed)
region = (x - 10, y - 10, x + 10, y + 10)
draw = ImageDraw.Draw(self.grid)
if self.current_color_index == 3:
# If it's the fourth click, remove the color (make the region transparent)
draw.rectangle(region, fill=(0, 0, 0, 0))
else:
color = self.colors[self.current_color_index]
draw.rectangle(region, fill=color)
self.display_image = ImageTk.PhotoImage(Image.alpha_composite(self.image.convert("RGBA"), self.grid))
self.canvas.create_image(0, 0, image=self.display_image, anchor='nw')
self.current_color_index = (self.current_color_index + 1) % 4 # Cycle through 4 values
def save_image(self):
if self.image is None:
return
file_path = filedialog.asksaveasfilename(defaultextension=".png", filetypes=[("PNG File", "*.png")], initialfile="output")
if file_path:
final_image = Image.alpha_composite(self.image.convert("RGBA"), self.grid)
final_image.save(file_path, "PNG")
if __name__ == "__main__":
app = ImageEditor()
</code></pre>
| <python><tkinter><python-imaging-library> | 2023-09-20 14:52:09 | 1 | 648 | pranav nerurkar |
77,143,627 | 2,837,887 | List of lists of mixed types to numpy array | <p>I have data imported from <code>csv</code>, and it is stored in a list of lists as:</p>
<pre><code>data=[['1', ' 1.013831', ' 1.713332', ' 1.327002', ' 3.674446', ' 19.995361', ' 09:44:24', ' 2.659884'], ['2', ' 1.013862', ' 1.713164', ' 1.326761', ' 3.662183', ' 19.996973', ' 09:49:27', ' 2.668791'], ['3', ' 1.013817', ' 1.712084', ' 1.326192', ' 3.658077', ' 19.997608', ' 09:54:27', ' 2.671786']]
</code></pre>
<p>I want to get a <code>numpy</code> array so that I can actually use proper slicing (I don't want <code>pandas</code> or anything else, just plain old <code>numpy</code> array with appropriate data types - not object).</p>
<p>So I tried the obvious:</p>
<pre><code>arr=np.array(data,dtype='i4,f4,f4,f4,f4,f4,U8,f4')
</code></pre>
<p>only to get:</p>
<pre><code>ValueError: invalid literal for int() with base 10: ' 1.013831'
</code></pre>
<p>This suggests that <code>numpy</code> treats rows as columns and columns as rows. What to do? I also tried passing <code>list(map(tuple,data))</code> instead of <code>data</code>, which gives an error that <code>map object is not callable</code>, and I tried:</p>
<pre><code>arr=np.asarray(tuple(map(tuple,data)),dtype='i4,f4,f4,f4,f4,f4,U8,f4')
</code></pre>
<p>giving</p>
<pre><code>ValueError: could not assign tuple of length 20 to structure with 8 fields.
</code></pre>
<p>Note the original number of rows in my case is 20.</p>
<p>So how do I get data from <code>csv</code> into a <code>numpy</code> array where I can specify what each column's data type is?</p>
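<p>For reference, this is the kind of thing I'm after (a sketch: one tuple per row, with the stray whitespace stripped first, since e.g. <code>' 09:44:24'</code> is 9 characters and would otherwise be truncated by <code>U8</code>):</p>

```python
import numpy as np

data = [['1', ' 1.013831', ' 1.713332', ' 1.327002', ' 3.674446',
         ' 19.995361', ' 09:44:24', ' 2.659884'],
        ['2', ' 1.013862', ' 1.713164', ' 1.326761', ' 3.662183',
         ' 19.996973', ' 09:49:27', ' 2.668791']]

# one tuple per row; numpy converts each string field-by-field
arr = np.array([tuple(s.strip() for s in row) for row in data],
               dtype='i4,f4,f4,f4,f4,f4,U8,f4')
```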
| <python><python-3.x><list><numpy> | 2023-09-20 14:48:33 | 1 | 1,400 | atapaka |
77,143,583 | 2,112,406 | OpenMP + pybind11 (to be compiled with setup.py) on mac and linux | <p>I have a Python module that I want to make usable on both macOS and Linux, which uses C++ code with OpenMP. I'm having issues on macOS (because Apple clang doesn't have OpenMP support, I guess). I have the following <code>*.cpp</code> code, <a href="https://iamsorush.com/posts/pybind11-openmp/" rel="nofollow noreferrer">using this example</a>:</p>
<pre><code>#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
// specific to macOS with libomp installed -- how to generalize?
#include "/usr/local/opt/libomp/include/omp.h"
namespace py = pybind11;
// OpenMP test
int sum_thread_ids(){
int sum = 0;
#pragma omp parallel shared(sum)
{
sleep(3);
#pragma omp critical
sum += omp_get_thread_num();
}
return sum;
}
PYBIND11_MODULE(openmp_test, m){
m.def("get_max_threads", &omp_get_max_threads, "Returns max number of threads");
m.def("set_num_threads", &omp_set_num_threads, "Set number of threads");
m.def("sum_thread_ids", &sum_thread_ids, "Adds the id of threads");
}
</code></pre>
<p>And then <code>setup.py</code> has:</p>
<pre><code>import os
from glob import glob
from pybind11.setup_helpers import Pybind11Extension
from setuptools import setup, find_packages
import module_name
cxx_std = int(os.environ.get("CMAKE_CXX_STANDARD", "20"))
ext_modules = [
Pybind11Extension("openmp_test", sorted(glob("src/*.cpp")), cxx_std=cxx_std)
]
setup(
...
packages=find_packages(),
install_requires=['pybind11'],
python_requires='>=3.10',
ext_modules=ext_modules
)
</code></pre>
<p>I install within the module dir that contains <code>setup.py</code> by:</p>
<pre><code>pip install .
</code></pre>
<p>It compiles without errors. However, when I try to use it in python, I get:</p>
<pre><code>>>>import openmp_test
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: dlopen(/Users/.../.pyenv/versions/3.11.2/lib/python3.11/site-packages/openmp_test.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '_omp_get_max_threads'
</code></pre>
<p>What am I missing? And how can I make this usable on both macOS and linux?</p>
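<p>A sketch of one common workaround: platform-conditional flags in <code>setup.py</code>. The Homebrew <code>libomp</code> prefix below is an assumption that varies by machine (e.g. <code>/opt/homebrew/opt/libomp</code> on Apple Silicon):</p>

```python
import sys

# hypothetical helper feeding Pybind11Extension(..., extra_compile_args=..., extra_link_args=...)
def openmp_flags():
    if sys.platform == "darwin":
        # Apple clang lacks -fopenmp; libomp (`brew install libomp`) supplies the runtime
        prefix = "/usr/local/opt/libomp"  # assumption: Intel-mac Homebrew layout
        compile_args = ["-Xpreprocessor", "-fopenmp", f"-I{prefix}/include"]
        link_args = [f"-L{prefix}/lib", "-lomp"]
    else:
        compile_args = ["-fopenmp"]
        link_args = ["-fopenmp"]
    return compile_args, link_args

compile_args, link_args = openmp_flags()
```

With the include path passed as a flag, the source can use a plain <code>#include &lt;omp.h&gt;</code> instead of the hard-coded absolute path.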
| <python><c++><openmp><pybind11> | 2023-09-20 14:43:03 | 0 | 3,203 | sodiumnitrate |
77,143,424 | 12,474,157 | PyDev Debugger Warnings in PyCharm: Unable to find real location for modules and slower performance | <p>I am currently debugging my Python code in PyCharm, and I've come across a series of warnings from the PyDev debugger. The key messages I'm seeing are:</p>
<p>"This version of python seems to be incorrectly compiled (internal generated filenames are not absolute)"
"The debugger may still function, but it will work slower and may miss breakpoints."
A series of "Unable to find real location for: ..." messages
The full log of the warnings is here.</p>
<pre><code>-------------------------------------------------------------------------------
pydev debugger: CRITICAL WARNING: This version of python seems to be incorrectly compiled (internal generated filenames are not absolute)
pydev debugger: The debugger may still function, but it will work slower and may miss breakpoints.
pydev debugger: Related bug: http://bugs.python.org/issue1666807
-------------------------------------------------------------------------------
Connected to pydev debugger (build 221.5921.27)
pydev debugger: Unable to find real location for: <frozen codecs>
pydev debugger: Unable to find real location for: <frozen importlib._bootstrap>
pydev debugger: Unable to find real location for: <frozen importlib._bootstrap_external>
pydev debugger: Unable to find real location for: <frozen zipimport>
pydev debugger: Unable to find real location for: <frozen _collections_abc>
pydev debugger: Unable to find real location for: <frozen os>
pydev debugger: Unable to find real location for: <string>
pydev debugger: Unable to find real location for: <frozen abc>
pydev debugger: Unable to find real location for: <frozen io>
pydev debugger: Unable to find real location for: <frozen posixpath>
pydev debugger: Unable to find real location for: <frozen genericpath>
pydev debugger: Unable to find real location for: <attrs generated repr attr._make.Attribute>
pydev debugger: Unable to find real location for: <attrs generated eq attr._make.Attribute>
pydev debugger: Unable to find real location for: <attrs generated hash attr._make.Attribute>
pydev debugger: Unable to find real location for: <attrs generated repr attr._make._CountingAttr>
pydev debugger: Unable to find real location for: <attrs generated eq attr._make._CountingAttr>
pydev debugger: Unable to find real location for: <attrs generated repr attr._make.Factory>
pydev debugger: Unable to find real location for: <attrs generated eq attr._make.Factory>
pydev debugger: Unable to find real location for: <attrs generated hash attr._make.Factory>
pydev debugger: Unable to find real location for: <attrs generated repr attr._make._AndValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr._make._AndValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr._make._AndValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr._make._AndValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._InstanceOfValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._InstanceOfValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._InstanceOfValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._MatchesReValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._MatchesReValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._MatchesReValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._ProvidesValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._ProvidesValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._ProvidesValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._OptionalValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._OptionalValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._OptionalValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._InValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._InValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._InValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._IsCallableValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._IsCallableValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._IsCallableValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._DeepIterable>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._DeepIterable>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._DeepIterable>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._DeepMapping>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._DeepMapping>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._DeepMapping>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._NumberValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._NumberValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._NumberValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._MaxLengthValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._MaxLengthValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._MaxLengthValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._MinLengthValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._MinLengthValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._MinLengthValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._SubclassOfValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._SubclassOfValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._SubclassOfValidator>
pydev debugger: Unable to find real location for: <attrs generated eq attr.validators._NotValidator>
pydev debugger: Unable to find real location for: <attrs generated hash attr.validators._NotValidator>
pydev debugger: Unable to find real location for: <attrs generated init attr.validators._NotValidator>
pydev debugger: Unable to find real location for: <attrs generated repr attr._version_info.VersionInfo>
pydev debugger: Unable to find real location for: <attrs generated init attr._version_info.VersionInfo>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.helpers.ProxyInfo>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.helpers.ProxyInfo>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.helpers.ProxyInfo>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.helpers.ProxyInfo>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.helpers.MimeType>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.helpers.MimeType>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.helpers.MimeType>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.helpers.MimeType>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.helpers.ETag>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.helpers.ETag>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.helpers.ETag>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.helpers.ETag>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.client_reqrep.ContentDisposition>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.client_reqrep.ContentDisposition>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.client_reqrep.ContentDisposition>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.client_reqrep.ContentDisposition>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.client_reqrep.RequestInfo>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.client_reqrep.RequestInfo>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.client_reqrep.RequestInfo>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.client_reqrep.RequestInfo>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.client_reqrep.ConnectionKey>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.client_reqrep.ConnectionKey>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.client_reqrep.ConnectionKey>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.client_reqrep.ConnectionKey>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceRequestStartParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceRequestStartParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceRequestStartParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceRequestStartParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceRequestChunkSentParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceRequestChunkSentParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceRequestChunkSentParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceRequestChunkSentParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceResponseChunkReceivedParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceResponseChunkReceivedParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceResponseChunkReceivedParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceResponseChunkReceivedParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceRequestEndParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceRequestEndParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceRequestEndParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceRequestEndParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceRequestExceptionParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceRequestExceptionParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceRequestExceptionParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceRequestExceptionParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceRequestRedirectParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceRequestRedirectParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceRequestRedirectParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceRequestRedirectParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceConnectionQueuedStartParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceConnectionQueuedStartParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceConnectionQueuedStartParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceConnectionQueuedStartParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceConnectionQueuedEndParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceConnectionQueuedEndParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceConnectionQueuedEndParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceConnectionQueuedEndParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceConnectionCreateStartParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceConnectionCreateStartParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceConnectionCreateStartParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceConnectionCreateStartParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceConnectionCreateEndParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceConnectionCreateEndParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceConnectionCreateEndParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceConnectionCreateEndParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceConnectionReuseconnParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceConnectionReuseconnParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceConnectionReuseconnParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceConnectionReuseconnParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceDnsResolveHostStartParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceDnsResolveHostStartParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceDnsResolveHostStartParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceDnsResolveHostStartParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceDnsResolveHostEndParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceDnsResolveHostEndParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceDnsResolveHostEndParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceDnsResolveHostEndParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceDnsCacheHitParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceDnsCacheHitParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceDnsCacheHitParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceDnsCacheHitParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceDnsCacheMissParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceDnsCacheMissParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceDnsCacheMissParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceDnsCacheMissParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.tracing.TraceRequestHeadersSentParams>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.tracing.TraceRequestHeadersSentParams>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.tracing.TraceRequestHeadersSentParams>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.tracing.TraceRequestHeadersSentParams>
pydev debugger: Unable to find real location for: <attrs generated repr aiohttp.client.ClientTimeout>
pydev debugger: Unable to find real location for: <attrs generated eq aiohttp.client.ClientTimeout>
pydev debugger: Unable to find real location for: <attrs generated hash aiohttp.client.ClientTimeout>
pydev debugger: Unable to find real location for: <attrs generated init aiohttp.client.ClientTimeout>
2023-09-20 16:16:12 INFO AWS Environment: staging
</code></pre>
<h2>Details of my environment:</h2>
<p>PyCharm version: [PyCharm 2022.1.3 (Community Edition)]</p>
<p>Runtime version: [11.0.15+10-b2043.56 aarch64]</p>
<p>Python version: [3.11.3]</p>
<p>Operating System: [MacOS Ventura 13.5.2]</p>
<p>Has anyone else encountered this issue or can provide guidance on how to resolve it? I'm concerned about the potential slower performance of the debugger and missing breakpoints, which can significantly hinder my debugging process.</p>
| <python><python-3.x><debugging><pycharm><virtualenv> | 2023-09-20 14:25:24 | 1 | 1,720 | The Dan |
77,143,316 | 1,652,219 | Is it possible to test SQL queries on local tables? | <p>I would like to write and test my SQL queries without connecting to an SQL database. Is that possible? Most examples connect to a MySQL database, but the one I have to test against is MSSQL.</p>
<p>As one can create these tables in sqlalchemy, can one not add data to them locally (before committing) and execute queries on them, such that one can assert that the queries work as expected?</p>
<pre><code>from sqlalchemy import MetaData
metadata_obj = MetaData()
from sqlalchemy import Table, Column, Integer, String
user = Table(
"user",
metadata_obj,
Column("user_id", Integer, primary_key=True),
Column("user_name", String(16), nullable=False),
Column("email_address", String(60)),
Column("nickname", String(50), nullable=False),
)
</code></pre>
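<p>For context, the usual local-testing trick is an in-memory database. A minimal sketch with the stdlib <code>sqlite3</code> (SQLAlchemy can target the same engine via <code>create_engine("sqlite://")</code>, though MSSQL-specific SQL would not be exercised this way):</p>

```python
import sqlite3

# hypothetical table mirroring the SQLAlchemy definition above
conn = sqlite3.connect(":memory:")  # lives only for this process, no server needed
conn.execute("""CREATE TABLE user (
    user_id INTEGER PRIMARY KEY,
    user_name TEXT NOT NULL,
    email_address TEXT,
    nickname TEXT NOT NULL)""")
conn.execute("INSERT INTO user VALUES (1, 'alice', 'a@example.com', 'al')")

rows = conn.execute("SELECT user_name FROM user").fetchall()
```

The database disappears when the connection closes, so each test can build its own fixture data from scratch.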
| <python><sql><unit-testing><sqlalchemy> | 2023-09-20 14:11:57 | 1 | 3,944 | Esben Eickhardt |
77,143,248 | 11,267,783 | Pyqtgraph click outside the image | <p>I would like to know when I clicked outside the image matrix (in the attached screenshot, I would like to know when I clicked in the black space).</p>
<p><a href="https://i.sstatic.net/Haydl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Haydl.png" alt="enter image description here" /></a></p>
<p>The goal of my app is to know when I clicked outside the image box, and also to know if I clicked on the left, top, right or bottom side of the black space around the image.</p>
<pre><code>import pyqtgraph as pg
import numpy as np
matrix = np.random.randint(1,10,(100, 100))
print(matrix)
app = pg.mkQApp()
win = pg.GraphicsLayoutWidget()
img = pg.ImageItem()
img.setImage(matrix)
view = win.addViewBox()
view.addItem(img)
def mouse_double_click(event):
print(event.pos())
print(event.scenePos())
print(event.screenPos())
img.scene().sigMouseClicked.connect(mouse_double_click)
win.show()
if __name__ == '__main__':
pg.exec()
</code></pre>
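<p>For what it's worth, once the click is expressed in view coordinates (e.g. via <code>view.mapSceneToView(event.scenePos())</code>) and compared against the image extent, the side test reduces to plain geometry. A sketch, where the rectangle bounds and the y-up axis orientation are assumptions:</p>

```python
def classify_click(x, y, x0, y0, x1, y1):
    """Classify a view-coordinate click against the image rectangle
    [x0, x1] x [y0, y1]: returns 'inside' or the dominant outside side."""
    if x0 <= x <= x1 and y0 <= y <= y1:
        return "inside"
    # overshoot past each axis range (0 when within that range)
    dx = x0 - x if x < x0 else (x - x1 if x > x1 else 0)
    dy = y0 - y if y < y0 else (y - y1 if y > y1 else 0)
    if dx >= dy:
        return "left" if x < x0 else "right"
    # assumes y increases upward, as in the default pyqtgraph ViewBox
    return "bottom" if y < y0 else "top"
```

For a 100x100 image anchored at the origin, the bounds would be <code>(0, 0, 100, 100)</code>; ties on the diagonal resolve to the x side.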
| <python><pyqtgraph> | 2023-09-20 14:04:07 | 1 | 322 | Mo0nKizz |
77,143,192 | 3,943,162 | Get UUID from a EMF/XMI resource using PyEcore | <p>I have an XMI file serialized in a <a href="https://wiki.eclipse.org/index.php?title=Eclipse_Modeling_Framework&redirect=no" rel="nofollow noreferrer">Java/EMF</a> program. Now, I need to read it using <a href="https://pyecore.readthedocs.io" rel="nofollow noreferrer">PyEcore</a>, and I want to set the UUID and then retrieve it back while iterating over the resource.</p>
<p>This is my current code:</p>
<pre><code>from os import path as osp
from pathlib import Path
#PyEcore
from pyecore.resources import URI
xmi_path = osp.join(RESOURCES_PATH, 'models', 'my_model.xmi')
m_resource = resource_set.get_resource(URI(xmi_path))
m_resource.use_uuid = True
#save resource with UUIDs at temporary directory
m_resource.save(output=URI(osp.join(RESOURCES_PATH, 'models', 'temp', 'my_model_uuid_version.xmi')))
for obj in m_resource.contents[0].eAllContents():
obj_type= obj.eClass.name
#obj_uuid = What should I do here?
#print(obj_uuid)
</code></pre>
<p>I looked at the <a href="https://pyecore.readthedocs.io/en/latest/user/quickstart.html" rel="nofollow noreferrer">documentation</a> but found nothing. I tried to "guess" some possibilities, like <code>getID()</code>, <code>obj.eGet('xmi_uuid')</code>, but everything failed. Due to the lazy loading, <code>dir(obj)</code> doesn't give me any tips either.</p>
<p>Below is a part of the file successfully saved at "models/temp/my_model_uuid_version.xmi" (it means it's not a problem on the first part of the code). The original (Java) version was created with URI fragments instead of the id.</p>
<pre><code><?xml version='1.0' encoding='UTF-8'?>
<emf.modeling.test:Root xmlns:xmi="http://www.omg.org/XMI" xmlns:emf.modeling.test="emf.modeling.test" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmi:id="2def7906-85ad-4852-ad4b-a4db577a14c6" xmi:version="2.0">
<regions xmi:id="b4b7b97c-9549-4700-b51c-cc3a68166a71">
<vertices xsi:type="emf.modeling.test:Entry" xmi:id="a7ca20d5-c2bb-4ff9-a8f6-c9c53e496a83">
<out xmi:id="6f0a5530-14a0-41fa-91de-681df25f9aff" target="f1bfedb7-41a7-4582-b736-71f804ffe65d"/>
</vertices>
<vertices xsi:type="emf.modeling.test:State" xmi:id="f1bfedb7-41a7-4582-b736-71f804ffe65d" incomingTransitions="6f0a5530-14a0-41fa-91de-681df25f9aff 59290d7f-032c-460c-a158-f4ed83dcdaba 3aa3a4ea-f7fc-495f-aaec-d7b957bc9c86 4c4264ff-1cdb-45e8-bdc4-289f23aa668d 94b5a479-f5aa-4de5-b5e5-93b61a676af7 b979f49a-bb16-43b0-abee-2808653d7342 6270c0fd-6e8a-4667-bde9-d77636e60762 7014ca4f-fa00-4ea0-abf4-772bab9f6f1e 58b0e66e-73e7-4c0c-9b7b-4ba319ec7350 ecc020db-7c64-4cc8-9b9e-d29b6692c799 e04b9856-8b90-4d6c-a64b-14173cd72e8c dd54139d-32eb-4374-abb3-115d6d53cd61 a10a2db2-9a0c-47bd-8b77-307f8a21ed15">
<out xmi:id="c06a3aeb-e86b-4dec-837e-d28f92cb7ef9" target="582156e7-c756-462d-aaaa-f9348d96813d"/>
<out xmi:id="a70aec31-886f-477e-9c25-b641341e52c4" target="d609bdde-e299-40d1-84f5-104fd975a287"/>
</vertices>
<!--...more vertices-->
</regions>
</emf.modeling.test:Root>
</code></pre>
| <python><uuid><eclipse-emf> | 2023-09-20 13:58:14 | 1 | 1,789 | James |
77,143,167 | 8,621,823 | In a single expression, why does multiple uses of object() return different or same id? | <p>I'm on python 3.10</p>
<ul>
<li><code>id(object()),id(object())</code> returns <code>(4519407312, 4519407312)</code> (Exact id is not important)</li>
<li><code>object() is object()</code> returns <code>False</code></li>
</ul>
<p>Why do the <code>object()</code> calls in the second expression have different ids, while those in the first expression share the same id?</p>
<p><strong>Context</strong></p>
<p>This question was inspired by noticing the coding pattern of using <code>object()</code> as a sentinel when <code>None</code> carries some special meaning that makes it unusable.</p>
<p>I wanted to check whether any object from the import could cause <code>if possibly_overwritten is not _UNDEFINED:</code> to fail wrongly. That happens when the <code>id()</code> of the value overwriting <code>possibly_overwritten</code> is the same as the <code>id()</code> of <code>_UNDEFINED</code>, which the first expression above demonstrated.</p>
<pre><code>_UNDEFINED = object()
possibly_overwritten = _UNDEFINED
# may assign a value to possibly_overwritten
from local_config import *
if possibly_overwritten is not _UNDEFINED:
...
</code></pre>
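<p>To illustrate the lifetime difference (a sketch; the id-reuse part is a CPython implementation detail, not a language guarantee):</p>

```python
x = object()
y = object()
assert x is not y   # both objects alive at once -> guaranteed distinct identities

id1 = id(object())  # this object is freed right after id() returns
id2 = id(object())  # the freed memory slot may be reused for the next object
same = (id1 == id2) # typically True on CPython, but not guaranteed anywhere
```

In other words, <code>is</code> compares two objects that are both alive, so it is reliable; comparing <code>id()</code> values of objects whose lifetimes do not overlap is not.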
| <python> | 2023-09-20 13:55:19 | 1 | 517 | Han Qi |
77,143,085 | 16,383,578 | How to find number of common decimal digits of two arrays element wise? | <p>I have a complex function I want to approximate, and I use <code>np.polyfit</code> to find the polynomial to approximate it.</p>
<p>And I want to find the statistics of correct decimal places to determine the quality of the approximation.</p>
<p>But it is hard to do efficiently. Currently I just convert the elements to strings and find the longest common prefix length element by element, needless to say this is inefficient.</p>
<pre><code>import numpy as np
from collections import Counter
x = np.linspace(1, 2, 4096)
exp = np.exp(x)
poly = np.polyfit(x, exp, 6)
approx = np.polyval(poly, x)
def LCP(s1, s2):
c = 0
for a, b in zip(s1, s2):
if a != b:
break
c += 1
return c
def leading_digits(f1, f2):
l = LCP(str(f1), str(f2)) - 1
return max(l, 0)
correct_places = Counter()
for a, b in zip(approx, exp):
correct_places[leading_digits(a, b)] += 1
</code></pre>
<pre><code>Counter({7: 2014, 8: 1699, 6: 207, 9: 135, 5: 27, 10: 12, 11: 2})
</code></pre>
<p>What is a more efficient way?</p>
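<p>For scale, one vectorised approximation (a sketch: it counts digits via the relative error in log10, which is close to, but not identical to, the string comparison for boundary cases like <code>0.1999</code> vs <code>0.2</code>):</p>

```python
import numpy as np

def common_decimal_digits(approx, exact):
    """Approximate matching leading digits as floor(-log10(relative error))."""
    approx = np.asarray(approx, dtype=float)
    exact = np.asarray(exact, dtype=float)
    err = np.abs(approx - exact)
    rel = err / np.maximum(np.abs(exact), np.finfo(float).tiny)
    with np.errstate(divide="ignore"):
        digits = np.floor(-np.log10(rel))
    # exact matches give rel == 0 -> inf; cap at float64's ~16 digits
    return np.where(err == 0, 16, digits).astype(int)

d = common_decimal_digits([2.718285, 3.15], [2.718281, 3.14])
```

The histogram then comes out of <code>np.bincount</code> or <code>collections.Counter</code> on the resulting integer array, with no per-element string work.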
| <python><numpy> | 2023-09-20 13:45:31 | 2 | 3,930 | Ξένη Γήινος |
77,143,064 | 7,683,041 | Scipy spline interpolation outside data points region | <p>I am fitting 2d points on a 1d axis using <code>scipy.interpolate.bisplrep</code>. This works better than polynomial interpolation in general, but outside the region covered by the fitting points it returns clamped (boundary) values. I can't find anything like the <code>fill_value="extrapolate"</code> that we can use with <code>interp1d</code>.</p>
<p>As splines are piecewise polynomials, I thought I could use the coefficients of the closest polynomial, but I don't know how.</p>
<pre class="lang-py prettyprint-override"><code>xy = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [0, 2], [2, 0], [2, 1], [0.1, 0]]
x, y = tuple(list(zip(*xy)))
z = [0, 1, 1, 1.1, 3, 2, 2, 3, 0.1]
spline = interpolate.bisplrep(x, y, z, kx=2, ky=2)
print(interpolate.bisplev([0], [0], spline)) # close to 0 as expected
print(interpolate.bisplev([1], [1], spline)) # 1.1 as expected
print(interpolate.bisplev([2], [2], spline)) # 3 as expected
print(interpolate.bisplev([3], [3], spline)) # 3 but should return way more
</code></pre>
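<p>For reference, a sketch of the 1-D behaviour mentioned above: <code>interp1d</code> continues the edge segment outward when <code>fill_value="extrapolate"</code> is set, which is exactly the option <code>bisplev</code> lacks:</p>

```python
import numpy as np
from scipy.interpolate import interp1d

f = interp1d([0.0, 1.0, 2.0], [0.0, 2.0, 4.0], fill_value="extrapolate")

inside = float(f(1.5))   # interpolated between data points
outside = float(f(3.0))  # extends the last linear segment instead of clamping
```

With the default <code>kind="linear"</code> the extrapolation simply follows the outermost segment's slope.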
| <python><scipy><interpolation><spline> | 2023-09-20 13:43:46 | 1 | 1,310 | PJ127 |
77,142,806 | 1,652,219 | How to read CursorResult to Pandas in SQLAlchemy? | <p>How does one convert a CursorResult-object into a Pandas Dataframe?</p>
<p>The following code results in a CursorResult-object:</p>
<pre><code>from sqlalchemy.orm import Session
from sqlalchemy import create_engine
engine = create_engine(f"mssql+pyodbc://{db_server}/{db_name}?trusted_connection=yes&driver={db_driver}")
q1 = "SELECT * FROM my_schema.my_table"
with Session(engine) as session:
results = session.execute(q1)
session.commit()
type(results)
>sqlalchemy.engine.cursor.CursorResult
</code></pre>
<p>As I couldn't find a way to extract relevant information from the CursorResult, I attempted the following instead:</p>
<pre><code># Extracting data as we go
with Session(engine) as session:
results = session.execute(q1)
description = results.cursor.description
rows = results.all()
session.commit()
# Extracting column names
colnames = [elem[0] for elem in description]
# Extracting types
types = [elem[1] for elem in description]
# Creating dataframe
import pandas as pd
pd.DataFrame(rows, columns=colnames)
</code></pre>
<p>But what about the dtypes? It doesn't work if I just put them in, though it looks like they are all python types. For my use case I MUST use <strong>Session</strong>, so I cannot use the first suggestion of doing the classic:</p>
<pre><code># I cannot use
pandas.read_sql(q1, engine)
</code></pre>
<p>The reason for this is that I have to do multi-batch queries within the same context, which is why I am using the <strong>Session</strong> class.</p>
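<p>On the dtype point specifically, a sketch (with made-up rows standing in for <code>results.all()</code>): build the frame from plain tuples, let pandas infer, and then tighten with <code>convert_dtypes()</code> rather than trying to feed the DB-API type codes in directly:</p>

```python
import pandas as pd

# hypothetical rows/colnames as produced by results.all() and cursor.description
rows = [(1, "alice", 3.5), (2, "bob", 4.0)]
colnames = ["id", "name", "score"]

# inference happens per column; convert_dtypes() upgrades to nullable dtypes
df = pd.DataFrame(rows, columns=colnames).convert_dtypes()
```

This keeps the Session-based flow intact while still ending up with proper column dtypes instead of <code>object</code> columns.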
| <python><pandas><sqlalchemy> | 2023-09-20 13:11:24 | 1 | 3,944 | Esben Eickhardt |
77,142,719 | 3,179,416 | How do you manage training data in the local version of Vanna? | <p>The hosted version of Vanna has a <a href="https://vanna.ai/docs/vanna.html#get_training_data" rel="nofollow noreferrer"><code>vn.get_training_data(...)</code></a> function.</p>
<p>However, when I try to run <a href="https://vanna.ai/docs/local.html" rel="nofollow noreferrer">locally following the examples</a> there's no equivalent function for retrieving and deleting training data.</p>
<p>How do I manage the training data when using the local version?</p>
| <python> | 2023-09-20 13:02:15 | 1 | 1,689 | Zain |
77,142,659 | 7,985,055 | Is it possible to check a dynamic path in a python dictionary? | <p>I would like to know if it is possible to dynamically loop through paths in a jira object and print the value if it doesn't match?</p>
<p>I have two objects, with have a bunch of fields....</p>
<p>jira_object_1</p>
<pre><code>{
"key": "3000",
"fields": {
"customfield_14550": null,
"customfield_12770": null,
"customfield_11441": "2023-07-13T12:45:00.000+0200",
"customfield_12772": null,
"customfield_14444": null,
"customfield_10120": null,
"customfield_12941": null,
"customfield_14445": null,
"customfield_12940": null,
"customfield_14446": null,
"due_date": null
}
}
</code></pre>
<p>I have this little script which is supposed to check all the paths that are different between the two jira ticket objects.</p>
<pre><code>keys = {
    # path and description
    ("fields.customfield_14550", "name-1"),
    ("fields.customfield_12770", "name-2"),
    ("fields.customfield_11441", "name-3"),
    ("fields.customfield_12772", "name-4"),
    ("fields.customfield_14444", "name-5"),
    ("fields.customfield_10120", "name-6"),
    ("fields.customfield_12941", "name-7"),
    ("fields.customfield_14445", "name-8"),
    ("fields.customfield_12940", "name-9"),
    ("fields.customfield_14446", "name-10"),
    ("fields.due_date", "date")
}

for key in keys:
    path, description = key
    if jira_object_1.get(path) != jira_object_1.get(path):
        print('---')
        print(description, path)
        print('---')
</code></pre>
<p>However, when I use the <code>get()</code> method on the object it fails; I can't seem to get a value out of it. Output:</p>
<pre><code>--->
None
None
--->
--->
None
None
--->
--->
None
None
--->
</code></pre>
<p>and I can't seem to convert the Jira ticket object to a dictionary, as it is not serializable.</p>
<pre><code>command:
srcDict = json.dumps(list(jira_object_1))
^^^^^^^^^^^^^^^
error:
TypeError: 'Issue' object is not iterable
</code></pre>
<pre><code>command:
srcDict = json.dumps(jira_object_1)
error:
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Issue is not JSON serializable
</code></pre>
<p>Why is that?</p>
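<p>For reference, resolving a dotted path against a plain nested dictionary needs a step-by-step lookup rather than a single <code>get()</code> call. A minimal sketch (the helper name is illustrative, not part of the Jira library):</p>

```python
from functools import reduce

def get_by_path(obj, dotted_path, default=None):
    """Walk a nested dict one key at a time, e.g. 'fields.due_date'."""
    try:
        return reduce(lambda acc, key: acc[key], dotted_path.split("."), obj)
    except (KeyError, TypeError):
        # Missing key, or tried to index into a non-dict (e.g. None).
        return default

ticket = {"key": "3000", "fields": {"due_date": None, "customfield_11441": "2023-07-13"}}
print(get_by_path(ticket, "fields.customfield_11441"))  # 2023-07-13
```

<p>This only works once the data is a plain dict, which is the other half of the question.</p>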
| <python><dictionary><compare> | 2023-09-20 12:54:58 | 2 | 525 | Mr. E |
77,142,566 | 9,102,437 | Python multiprocessing imap iterates over whole iterable | <p>In my code I am trying to achieve the following:</p>
<ol>
<li>I get each result as soon as any of the processes finish</li>
<li>Next iteration must only be called whenever it is necessary (if it is converted into a list, I will have RAM issues)</li>
</ol>
<p>To my knowledge, <code>imap</code> from the <code>multiprocessing</code> module should be perfect for this task, but this code:</p>
<pre class="lang-py prettyprint-override"><code>import os
import time
def txt_iterator():
    for i in range(8):
        yield i
        print('iterated', i)

def func(x):
    time.sleep(5)
    return x

if __name__ == '__main__':
    import multiprocessing

    pool = multiprocessing.Pool(processes=4)
    for i in pool.imap(func, txt_iterator()):
        print('P2', i)
    pool.close()
</code></pre>
<p>Has this output:</p>
<pre><code>iterated 0
iterated 1
...
iterated 7
# 5 second pause
P2 0
P2 1
P2 2
P2 3
# 5 second pause
P2 4
P2 5
P2 6
P2 7
</code></pre>
<p>Meaning that it iterates through the whole iterable and only then starts assigning tasks to processes. As far as I could find in the docs, this behavior is only expected from <code>.map</code> (the iteration part).</p>
<p>The expected output is (may vary because they run concurrently, but you get the idea):</p>
<pre><code>iterated 0
...
iterated 3
# 5 second pause
P2 0
...
P2 3
iterated 4
...
iterated 7
# 5 second pause
P2 4
...
P2 7
</code></pre>
<p>I am sure that I am missing something here but in case I completely misunderstand how this function works, I would appreciate any alternative that will work as intended.</p>
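<p>One workaround I have considered (a sketch, not from the <code>multiprocessing</code> docs; the helper name is made up) is to advance the generator in bounded batches myself, so at most <code>batch_size</code> items are pulled before each round of results:</p>

```python
import itertools

def lazy_batches(iterable, batch_size):
    """Pull at most batch_size items at a time from a (possibly huge) iterator."""
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            return
        yield batch

# Each batch could then be handed to pool.map(func, batch) in turn,
# keeping at most batch_size items in memory at once.
for batch in lazy_batches(range(8), 4):
    print(batch)  # [0, 1, 2, 3] then [4, 5, 6, 7]
```

<p>This trades some of <code>imap</code>'s pipelining for bounded memory, so I would still prefer a way to make <code>imap</code> itself consume lazily.</p>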
| <python><iterator><python-multiprocessing> | 2023-09-20 12:43:42 | 2 | 772 | user9102437 |
77,142,562 | 10,530,984 | Pydantic v2 migration and string typecasting | <p>I'm trying to migrate my project to Pydantic v2 and the tests fail on integer-to-string type conversion.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class Foo(BaseModel):
    bar: str

Foo(bar=1)
</code></pre>
<p>Code fails with exception:</p>
<pre class="lang-bash prettyprint-override"><code>ValidationError: 1 validation error for Foo
bar
Input should be a valid string [type=string_type, input_value=1, input_type=int]
For further information visit https://errors.pydantic.dev/2.3/v/string_type
</code></pre>
<p>However, Pydantic v1 successfully converts <code>1</code> to <code>'1'</code>.</p>
<p>Is there a soft way to convert it without using <code>Foo(bar=str(1))</code>?</p>
| <python><pydantic> | 2023-09-20 12:42:43 | 3 | 655 | Mastermind |
77,142,461 | 16,613,821 | Why is file.close() so slow, even when there is no buffering and/or file.flush() has already been called? | <pre class="lang-py prettyprint-override"><code>from time import perf_counter
t = perf_counter()
file = open('dummy', 'wb', buffering=0)
print('open', perf_counter() - t)
t = perf_counter()
file.write(bytes(16 * 1024 ** 2))
print('write 16MB', perf_counter() - t)
t = perf_counter()
file.flush()
print('flush', perf_counter() - t)
t = perf_counter()
file.close()
print('close', perf_counter() - t)
</code></pre>
<p>It output:</p>
<pre><code>open 0.00242589320987463
write 16MB 0.010545279830694199
flush 1.0263174772262573e-06
close 0.033106833696365356
</code></pre>
<p>I think that should be caused by the OS page cache, BUT Python does not support O_DIRECT unless mmap is used, since O_DIRECT requires memory alignment. <a href="https://bugs.python.org/issue5396" rel="nofollow noreferrer">https://bugs.python.org/issue5396</a></p>
<p>Thus I tried on faster storage mounted on <code>/data</code>:</p>
<pre class="lang-py prettyprint-override"><code>open 0.001985272392630577
write 16MB 0.009873169474303722
flush 1.1473894119262695e-06
close 0.006200509145855904
</code></pre>
<p>System Information:</p>
<pre><code>> uname -srvmpio
Linux 5.15.0-78-generic #85~20.04.1-Ubuntu SMP Mon Jul 17 09:42:39 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
> cat /proc/mounts | grep '/ '
/dev/sda2 / ext4 rw,relatime,errors=remount-ro 0 0
> cat /proc/mounts | grep '/data '
/dev/nvme0n1 /data ext4 rw,relatime,stripe=32 0 0
> uname -a
</code></pre>
| <python><linux><filesystems> | 2023-09-20 12:30:17 | 0 | 727 | YouJiacheng |
77,142,439 | 845,210 | Pydantic dataclass with Field alias triggers pylint E1123 unexpected-keyword-arg | <p>I'm using the <a href="https://docs.pydantic.dev/latest/usage/dataclasses/" rel="noreferrer">dataclasses feature</a> in Pydantic v2.3.0 and I have a dataclass with an aliased field, like so:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import Field
from pydantic.dataclasses import dataclass
@dataclass
class Example:
    number: int = Field(alias='n')

example = Example(n=5)
</code></pre>
<p>This is valid code and runs just fine, but pylint triggers the following warning:</p>
<p><code>E1123: Unexpected keyword argument 'n' in constructor call (unexpected-keyword-arg)</code></p>
<p>Is this a pylint bug? A pydantic bug? Expected behavior?</p>
<p>Does anyone have a good workaround to either...</p>
<ul>
<li>help pylint recognize that <code>n</code> is a valid constructor argument, or</li>
<li>silence this warning for the <code>Example</code> class?</li>
</ul>
<p>I'm aware that I could globally silence <code>E1123(unexpected-keyword-arg)</code> but I'd rather just suppress this false-positive case.</p>
<p>For reference:</p>
<pre><code>$ pylint --version
pylint 2.17.5
astroid 2.15.6
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
</code></pre>
| <python><pylint><pydantic><python-dataclasses> | 2023-09-20 12:28:28 | 0 | 3,331 | bjmc |
77,142,348 | 1,710,392 | Efficient transfer to stdin of subprocess | <p>I am trying to make a data transfer more efficient in a linux program which has multiple clients and one server. At the beginning, the client spawns a subprocess and sends a big structure serialized using pickle (about 200MB). Then, depending on some events the clients send more data (not shown in this example code). The server then calls select (since there are multiple clients you can't just do a blocking read on one file descriptor), and then calls read() in order to read chunks of this pickle structure. Parsing those chunks takes about 0.01 seconds.</p>
<p>I noticed a strongly degraded performance due to the fact that those chunks are most of the time only 65536 bytes in size, and thus the parsing is done thousands of times. Ironically, if I load the CPU of the machine to 100% before starting this test, those chunks are a lot bigger, and the whole transfer is about 100 times faster. In this example code, if I comment-out the line <code>time.sleep(0.1)</code> in the test-server, the chunks are also about 622KB most of the time instead of 65536 bytes.</p>
<p>I tried to modify the values of "bufsize" in the client and "buffering" in the server in order to increase the size of those chunks, but it didn't change anything.</p>
<p>Here is some minimal code reproducing the issue:</p>
<p>Test client:</p>
<pre><code>#!/usr/bin/env python3
import random
import subprocess
import time
input_data = random.randbytes(100_000_000)
worker = subprocess.Popen(["./server-test.py"], stdout=1, stdin=subprocess.PIPE, bufsize=100_000)
start1 = time.time()
worker.stdin.write(input_data)
end1 = time.time()
print("stdin.write took %s" % (end1 - start1))
</code></pre>
<p>Test server:</p>
<pre><code>#!/usr/bin/env python3
import fcntl
import os
import select
import sys
import time
class TestWorker(object):
    def __init__(self, din):
        self.input = din
        fcntl.fcntl(din, fcntl.F_SETFL, fcntl.fcntl(din, fcntl.F_GETFL) | os.O_NONBLOCK)
        self.build_pipes = {}
        self.queue = bytearray()

    def serve(self):
        while True:
            (ready, _, _) = select.select([self.input] + [i.input for i in self.build_pipes.values()], [], [], 1)
            if self.input in ready:
                try:
                    r = self.input.read()
                    if len(r) == 0:
                        # EOF on pipe, server must have terminated
                        quit()
                    self.queue.extend(r)
                except (OSError, IOError):
                    pass
            if len(self.queue):
                print("len(r)= %s" % (len(r)))
                # simulate data parsing with sleep
                time.sleep(0.1)

worker = TestWorker(os.fdopen(sys.stdin.fileno(), 'rb', buffering=100_000))
worker.serve()
</code></pre>
<p>Output with time.sleep(0.1) in the server:</p>
<pre><code>$ ./client-test.py
len(r)= 65536
len(r)= 622592
len(r)= 65536
len(r)= 65536
len(r)= 626688
len(r)= 65536
len(r)= 65536
len(r)= 774144
...
</code></pre>
<p>Output without time.sleep(0.1) in the server:</p>
<pre><code>$ ./client-test.py
len(r)= 65536
len(r)= 774144
len(r)= 622592
len(r)= 860160
len(r)= 786432
len(r)= 626688
len(r)= 774144
len(r)= 786432
len(r)= 786432
len(r)= 774144
len(r)= 561152
...
</code></pre>
<p>Is there a way to increase the size of the chunks being transferred to make this communication more efficient? Why does adding <code>time.sleep(0.1)</code> reduce the size of the chunks?</p>
| <python><linux><ipc> | 2023-09-20 12:17:23 | 0 | 5,078 | Étienne |
77,142,157 | 10,522,901 | Pandas sort by length of list in column | <p>How do you sort a pandas DataFrame by the length of the list in a column?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
    [
        {"val1": "a", "val2": [1,2,3]},
        {"val1": "b", "val2": [1]},
        {"val1": "c", "val2": [1,2]}
    ]
)
df = df.sort_values("val2", key=lambda x: len(x), ascending=False)
</code></pre>
<p>This results in a <code>TypeError: object of type 'int' has no len()</code></p>
<p>All examples and docs I read use string-length as an example with the key <code>lambda x: x.str.len()</code>.
What is the type of <code>x</code> in the key function? Shouldn't it be the content of each row in column "val2"?</p>
<p>The result I want is the length of the list in "val2" to be the key to the sorting, resulting in column "val1" having order: a, c, b.</p>
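<p>For the record, <code>x</code> in the <code>key</code> callable is the whole column as a <code>Series</code>, not one cell, which is why the built-in <code>len()</code> fails on it. A sketch of the element-wise version (relying on <code>.str.len()</code>, which also measures list elements):</p>

```python
import pandas as pd

df = pd.DataFrame(
    [
        {"val1": "a", "val2": [1, 2, 3]},
        {"val1": "b", "val2": [1]},
        {"val1": "c", "val2": [1, 2]},
    ]
)

# key receives the whole Series; .str.len() returns per-element lengths,
# and it handles list elements as well as strings.
df = df.sort_values("val2", key=lambda s: s.str.len(), ascending=False)
print(df["val1"].tolist())  # ['a', 'c', 'b']
```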
| <python><pandas><dataframe> | 2023-09-20 11:54:48 | 1 | 316 | vegarab |
77,142,073 | 3,098,795 | Insert or attach one excel workbook into another excel workbook using python | <p>I have generated multiple Excel workbooks using openpyxl.
Now I need to create a consolidated Excel workbook, which would contain all the above-created workbooks as clickable attachments.</p>
<p>For example:
consolidated excel workbook will look like:</p>
<p><a href="https://i.sstatic.net/VC1yn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VC1yn.png" alt="enter image description here" /></a></p>
<p>Where if I click on abc.xlsx, the abc.xlsx workbook should open up as a separate Excel file,
just like manually attaching xlsx files using a hyperlink. Whether by that method or a different one, I need to attach one Excel workbook to another programmatically using Python.</p>
<p>Twist:
I need to reference abc.xlsx and sdf.xlsx with relative paths as well, because if someone zips all these files and shares them, the absolute paths of abc.xlsx and sdf.xlsx will change.</p>
| <python><excel><openpyxl><xlsxwriter> | 2023-09-20 11:44:03 | 1 | 495 | Nikita |
77,141,942 | 13,268,160 | What does the `U` in `argparse.FileType('rU')` mean? | <p>In the following code, what does the <code>U</code> mean in <code>argparse.FileType('rU')</code>?</p>
<pre><code>#!/usr/bin/env python3
import argparse
parser = argparse.ArgumentParser("Description")
parser.add_argument('--input-file', required = True, type = argparse.FileType('rU'))
args = parser.parse_args()
</code></pre>
<p>I checked the documentation for <a href="https://docs.python.org/3/library/argparse.html#filetype-objects" rel="nofollow noreferrer"><code>argparse.FileType</code></a> as well as the <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow noreferrer"><code>open</code></a> function of python, but none seem to mention the capital <code>U</code>. What does it mean? What is different to <code>argparse.FileType('r')</code>?</p>
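<p>For context, plain <code>'r'</code> in Python 3 text mode already translates mixed line endings to <code>\n</code> on read, which is the behaviour I observe in this small check (the temp-file setup is just for the demo):</p>

```python
import argparse
import os
import tempfile

# Write a small file with Windows-style line endings (newline="" disables translation on write).
tmp = tempfile.NamedTemporaryFile("w", delete=False, newline="")
tmp.write("line1\r\nline2\r\n")
tmp.close()

parser = argparse.ArgumentParser("Description")
parser.add_argument('--input-file', required=True, type=argparse.FileType('r'))
args = parser.parse_args(['--input-file', tmp.name])

# Text mode applies universal newlines by default: \r\n comes back as \n.
content = args.input_file.read()
print(repr(content))  # 'line1\nline2\n'

args.input_file.close()
os.unlink(tmp.name)
```

<p>So the question is really what the extra <code>U</code> adds on top of this default.</p>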
| <python><python-3.x><argparse> | 2023-09-20 11:26:57 | 0 | 329 | Sebastian Schmidt |
77,141,928 | 11,596,051 | I need to change specific characters in file names to uppercase | <p>I have roughly 20,000 files in a single directory whose filenames I need to change in the following manner:</p>
<p>Target_file_name.jpg</p>
<p>to</p>
<p>Target_File_Name.jpg</p>
<p>All of the filenames have at least two words in them, but some have three or even four, and all have the individual words separated by underscore characters. There are no spaces in any of the filenames. How would I do this?</p>
<p>I know I can do something like:</p>
<pre><code>import os
for x in os.listdir():
    splitName = x.split("_")
    for y in splitName:
</code></pre>
<p>but I am not sure how to go from there.</p>
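<p>A minimal sketch of the per-filename transformation I have in mind (the helper name is made up; <code>os.rename</code> would then apply it to each file):</p>

```python
from pathlib import Path

def title_case_name(filename: str) -> str:
    """Upper-case the first letter of each underscore-separated word in the stem."""
    p = Path(filename)
    words = p.stem.split("_")
    new_stem = "_".join(w[:1].upper() + w[1:] for w in words)
    return new_stem + p.suffix

print(title_case_name("Target_file_name.jpg"))  # Target_File_Name.jpg

# Applying it to the directory would look roughly like:
# for name in os.listdir():
#     os.rename(name, title_case_name(name))
```

<p>I am unsure whether this is robust enough for 20,000 files, or whether there is a better approach.</p>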
| <python> | 2023-09-20 11:25:33 | 3 | 395 | lrhorer |
77,141,788 | 14,824,108 | How to extract xvalues and yvalues from a kdeplot | <p>Given that I have a <code>seaborn.kdeplot</code> I want to extract the <code>x</code> and <code>y</code> points. Based on similar questions I have tried the following:</p>
<pre><code>points = sns.kdeplot(targets, shade=True, label='train').get_lines()[0].get_data()
x = points[0]
y = points[1]
</code></pre>
<p>But I'm getting the error</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 10, in <module>
IndexError: list index out of range
</code></pre>
<p>I'm using <code>seaborn==0.12.2</code> and <code>matplotlib==3.7.1</code></p>
| <python><matplotlib><seaborn><kdeplot> | 2023-09-20 11:07:14 | 1 | 676 | James Arten |
77,141,786 | 6,552,836 | Filling the NaN values in a matrix using blocks of random heights | <p>I have a 5x5 dataframe below:</p>
<pre><code> A B C D E
0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 NaN 0.0 NaN
2 0.0 0.0 NaN 0.0 NaN
3 0.0 NaN 0.0 NaN NaN
4 0.0 NaN 0.0 NaN NaN
</code></pre>
<p>I am trying to create a function that takes the data frame above plus <code>num_blocks</code>, <code>min_height=1</code>, <code>max_height=5</code> parameters, then fills the NaN values in every column with blocks of numbers of random heights. Here are the steps:</p>
<ol>
<li><p>Check how many columns have a nan value</p>
<ul>
<li>If this number is less than the num_blocks, then raise error to increase the
num_blocks number</li>
</ul>
</li>
<li><p>Check if num_blocks is greater than the total number of NaNs in the dataframe</p>
<ul>
<li>If this number is greater than the num_blocks, then raise error to decrease the
num_blocks number</li>
</ul>
</li>
<li><p>Else, fill the NaN values with blocks of numbers whose count increments when moving over to the next column. The blocks can be of random heights, which have to be within the user parameters min_height and max_height</p>
</li>
</ol>
<p><strong>if num_blocks=4, min_height=1, max_height=5</strong>
dataframe should look like this (this is the most basic case):</p>
<pre> A B C D E
0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 2 0.0 4
2 0.0 0.0 2 0.0 4
3 0.0 1 0.0 3 4
4 0.0 1 0.0 3 4
</pre>
<p><strong>if num_blocks=5, min_height=1, max_height=5</strong>
dataframe should look something like this:</p>
<pre> A B C D E
0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 2 0.0 5
2 0.0 0.0 2 0.0 5
3 0.0 1 0.0 3 5
4 0.0 1 0.0 4 5
</pre>
<p>or can look something like this:</p>
<pre> A B C D E
0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 2 0.0 4
2 0.0 0.0 2 0.0 5
3 0.0 1 0.0 3 5
4 0.0 1 0.0 3 5
</pre>
<p>or can look something like this...etc:</p>
<pre> A B C D E
0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 3 0.0 5
2 0.0 0.0 3 0.0 5
3 0.0 1 0.0 4 5
4 0.0 2 0.0 4 5
</pre>
<p><strong>if num_blocks=8, min_height=1, max_height=5</strong><br />
dataframe should look something like this:
</p>
<pre> A B C D E
0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 3 0.0 7
2 0.0 0.0 4 0.0 7
3 0.0 1 0.0 5 8
4 0.0 2 0.0 6 8
</pre>
<p>or can look something like this...etc:</p>
<pre> A B C D E
0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 3 0.0 6
2 0.0 0.0 4 0.0 6
3 0.0 1 0.0 5 7
4 0.0 2 0.0 5 8
</pre>
<p>And if we change min_height and max_height for a dataframe which has more NaNs:
<strong>if num_blocks=7, min_height=2, max_height=4</strong>
this is possibly how the dataframe can look:</p>
<pre><code> A B C D E
0 NaN 0.0 0.0 0.0 0.0
1 NaN 0.0 NaN 0.0 NaN
2 NaN NaN NaN 0.0 NaN
3 NaN NaN NaN NaN NaN
4 0.0 NaN 0.0 NaN NaN
</code></pre>
<pre><code> A B C D E
0 1 0.0 0.0 0.0 0.0
1 1 0.0 4 0.0 6
2 2 3 4 0.0 6
3 2 3 4 5 7
4 0.0 3 0.0 5 7
</code></pre>
<p>This is what I’ve attempted:
</p>
<pre>import pandas as pd
def fill_nans(df, num_blocks):
    # Get the columns which have NaN values
    nan_columns = df.columns[df.isna().any()].tolist()
    fill_value = 1
    # Iterate over the NaN columns and fill the values incrementally
    for col in nan_columns:
        # Get the index of NaN values for the current column
        nan_idx = df[df[col].isna()].index
        for idx in nan_idx:
            df.at[idx, col] = fill_value
            fill_value += 1
            # Reset fill_value if it exceeds num_blocks
            if fill_value > num_blocks:
                fill_value = 1
    return df

# Sample DataFrame
data = {
    'A': [0.0, 0.0, 0.0, 0.0, 0.0],
    'B': [0.0, 0.0, 0.0, None, None],
    'C': [0.0, None, None, 0.0, 0.0],
    'D': [0.0, 0.0, 0.0, None, None],
    'E': [0.0, None, None, None, None]
}
df = pd.DataFrame(data)

print("Original DataFrame:")
print(df)
print("\nDataFrame after filling NaNs:")
df_filled = fill_nans(df, 8)
print(df_filled)
</pre>
| <python><pandas><dataframe><numpy> | 2023-09-20 11:07:03 | 2 | 439 | star_it8293 |
77,141,633 | 15,915,737 | Prefect: ImportError: cannot import name 'SecretField' from 'pydantic' | <p>I'm currently using Prefect to orchestrate some simple tasks in Python. It was working fine until I got this error:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 2, in <module>
from prefect import flow
File "/Users/.../.venv/lib/python3.8/site-packages/prefect/__init__.py", line 25, in <module>
from prefect.states import State
...
ImportError: cannot import name 'SecretField' from 'pydantic' (/Users/.../.venv/lib/python3.8/site-packages/pydantic/__init__.py)
</code></pre>
<p>It seems I have the module installed in my venv:</p>
<pre><code>(.venv) User@user % pip show pydantic
Name: pydantic
Version: 2.3.0
Summary: Data validation using Python type hints
Home-page: None
Author: None
Author-email: Samuel Colvin <s@muelcolvin.com>, Eric Jolibois <em.jolibois@gmail.com>, Hasan Ramezani <hasan.r67@gmail.com>, Adrian Garcia Badaracco <1755071+adriangb@users.noreply.github.com>, Terrence Dorsey <terry@pydantic.dev>, David Montague <david@pydantic.dev>
License: None
Location: /Users.../.venv/lib/python3.8/site-packages
Requires: annotated-types, pydantic-core, typing-extensions
Required-by: prefect, fastapi
</code></pre>
<p>Where could this come from?</p>
| <python><pydantic><prefect> | 2023-09-20 10:44:24 | 1 | 418 | user15915737 |
77,141,598 | 1,072,825 | Poetry config for torch installation | <p>I have the following poetry config for installing torch</p>
<pre><code>torch = [
# Mac with apple silicon
{ markers = "sys_platform == 'darwin'", version = "^2.0.1", source = "pypi" },
# Mac with arm docker container
{ markers = "sys_platform == 'linux' and platform_machine == 'aarch64'", version = "^2.0.1", source = "pypi" },
# Mac with x86_64 container with cpu version ony
{ markers = "sys_platform == 'linux' and platform_machine != 'aarch64'", version = "^2.0.1+cpu", source = "pytorch" }
]
</code></pre>
<p>However, this doesn't seem to work for installation. If I check with export,
<code>poetry export -f requirements.txt --output requirements.txt</code></p>
<pre><code>torch==2.0.1+cpu ; python_version >= "3.10" and python_version < "3.11" and (sys_platform == "darwin" or sys_platform == "linux")
</code></pre>
<p>which is plain wrong. To make this work, I manually have to pin the versions like so</p>
<pre><code>torch = [
# Mac apple silicon
{ markers = "sys_platform == 'darwin'", url = "https://download.pytorch.org/whl/cpu/torch-2.0.1-cp310-none-macosx_11_0_arm64.whl" },
# Mac docker arm container
{ markers = "sys_platform == 'linux' and platform_machine == 'aarch64'", url = "https://download.pytorch.org/whl/torch-2.0.1-cp310-cp310-manylinux2014_aarch64.whl" },
# Mac with x86_64 container
{ markers = "sys_platform == 'linux' and platform_machine == 'x86_64'", url = "https://download.pytorch.org/whl/cpu/torch-2.0.1%2Bcpu-cp310-cp310-linux_x86_64.whl" },
]
</code></pre>
<p>This generates</p>
<pre><code>torch @ https://download.pytorch.org/whl/cpu/torch-2.0.1%2Bcpu-cp310-cp310-linux_x86_64.whl ; sys_platform == "linux" and platform_machine == "x86_64" and python_version >= "3.10" and python_version < "3.11"
torch @ https://download.pytorch.org/whl/cpu/torch-2.0.1-cp310-none-macosx_11_0_arm64.whl ; python_version >= "3.10" and python_version < "3.11" and sys_platform == "darwin"
torch @ https://download.pytorch.org/whl/torch-2.0.1-cp310-cp310-manylinux2014_aarch64.whl ; sys_platform == "linux" and platform_machine == "aarch64" and python_version >= "3.10" and python_version < "3.11"
</code></pre>
<p>I lose the nice upgrades if I decide to upgrade with poetry and have to manually upgrade the urls each time.</p>
<p>What am I doing wrong?</p>
| <python><pytorch><python-poetry> | 2023-09-20 10:39:27 | 0 | 4,840 | Pavan K |
77,141,578 | 14,244,437 | Can I append the Ethereum transaction's data input when generating a QR Code? Is there a better way to track transactions? | <p>My application manages sales with local payment methods and Ethereum/Bitcoin.</p>
<p>When using crypto payments a QR Code will be displayed for the customer, with the wallet's address and amount.</p>
<p>The issue I'm having is that if two transactions are made to two different customers with the same value, there'll be no way to distinguish which transaction is linked to the customer since I won't have his wallet address.</p>
<p>I thought of using the data input to append a unique ID from my platform, such as this:</p>
<pre><code>>>> import json
>>> data = {'unique_id': 'test'}
>>> json_data = json.dumps(data)
>>> hex_bytes = bytes.fromhex(json_data.encode().hex())
>>> hex_bytes
b'{"unique_id": "test"}'
(the bytes value would be 0x7b22756e697175655f6964223a202274657374227d)
</code></pre>
<p>I verified that by signing and sending this transaction to the blockchain I was able to track the transaction as expected.
What I'm not so sure about is the QR code. I can append this to the end of the address, but is that even a valid QR code that wallets or payment apps will recognize? I found <a href="https://eips.ethereum.org/EIPS/eip-681" rel="nofollow noreferrer">the EIP-681 doc</a> that seems to regulate this, but there's no explicit information about the data input.</p>
<p>Also, even if it's valid, has anyone ever done this? My fear is wasting my time on that, and then no app even has support for this param.</p>
| <python><ethereum><cryptocurrency><web3py> | 2023-09-20 10:36:17 | 1 | 481 | andrepz |
77,141,490 | 476,983 | install python on windows via ansible | <p>I am trying to install Python 3 on Windows via Ansible. I use the following tasks, but the install does nothing (except downloading the installer).</p>
<pre><code>---
# download works OK
- name: Download python3.11
  ansible.windows.win_get_url:
    url: "https://www.python.org/ftp/python/3.11.5/python-3.11.5-amd64.exe"
    dest: "c:/temp/python-3.11.5-amd64.exe"
  register: python_3_11_downloaded
  when: install_python|bool

# this does not work
- name: Install python3.11
  ansible.windows.win_package:
    path: c:\temp\python-3.11.5-amd64.exe
    product_id: "python_3.11"
    arguments:
      - /quiet
      # - /passive
      # - /uninstall
  when: install_python|bool

# this does not work either
# - name: Install python3.11
#   ansible.windows.win_shell: "c:/temp/python-3.11.5-amd64.exe /quiet"
#   args:
#     chdir: C:/temp
#   when: install_python|bool
| <python><windows><installation><ansible> | 2023-09-20 10:25:08 | 1 | 758 | michal |
77,141,451 | 9,620,095 | How to generate PDF report from Python code. Odoo 16 | <p>I have a custom wizard with fields: "file_name" (Char), "model_id" (active model), "content_report" (Html).
I added a print button to this wizard.
When I click on this button I want to generate a report with name = "file_name" and content = "content_report", without using a report template, only with Python code.</p>
<p>How can I do it?
Any help please?</p>
<p>Thanks.</p>
| <python><pdf><odoo><report> | 2023-09-20 10:21:10 | 1 | 631 | Ing |
77,140,920 | 14,830,534 | How to rescale every sample individually in a pre-processing layer? | <p>I want to add a pre-processing layer to a <code>keras</code> model that both applies during training and when using the model. It is important it also applies when the model is used, so raw data can be fed to it. This layer should rescale every sample individually and put it on a <code>[-1, 1]</code> scale. With <code>sklearn</code> this is possible, but I would like to keep my model only dependent on <code>tensorflow</code>.</p>
<p>In the <code>keras</code> documentation I found a <a href="https://keras.io/api/layers/preprocessing_layers/numerical/normalization/" rel="nofollow noreferrer">normalization layer</a>: <code>tf.keras.layers.Normalization</code>, but I suspect this layer normalizes the entire dataset.</p>
<p>In <code>sklearn</code>, you could use the <code>MinMaxScaler</code>, which does scale each feature individually. See documentation <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html" rel="nofollow noreferrer">here</a>. How can I achieve similar behavior inside my <code>keras</code> model?</p>
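<p>For context, the per-sample arithmetic I'm after is plain min-max scaling computed over each sample's own values, shown here with NumPy for clarity (inside a model this could presumably live in a <code>tf.keras.layers.Lambda</code> using the equivalent <code>tf.reduce_min</code>/<code>tf.reduce_max</code> calls):</p>

```python
import numpy as np

def rescale_per_sample(batch):
    """Map every sample in the batch to [-1, 1] using its own min and max.

    Assumes each sample has hi > lo (i.e. is not constant)."""
    # Reduce over all axes except the batch axis.
    axes = tuple(range(1, batch.ndim))
    lo = batch.min(axis=axes, keepdims=True)
    hi = batch.max(axis=axes, keepdims=True)
    return 2.0 * (batch - lo) / (hi - lo) - 1.0

x = np.array([[0.0, 5.0, 10.0],
              [2.0, 2.5, 3.0]])
print(rescale_per_sample(x))  # each row spans exactly [-1, 1]
```

<p>The question is how to express this inside the model so it also applies to raw data at inference time.</p>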
| <python><tensorflow><machine-learning><keras> | 2023-09-20 09:07:35 | 1 | 1,106 | Jan Willem |
77,140,905 | 11,208,548 | Django file structure: cannot import main module without export PYPATH | <p>I am using Django 4.0.4</p>
<p>Here is my project structure:</p>
<pre><code>root/
| README.md
|-- app/
| |-- __init__.py
| |-- manage.py
| |-- project/
| | |-- __init__.py
| | |-- settings.py
| | |-- urls.py
| | |-- etc...
| |-- webapp/
| | |-- __init__.py
| | |-- apps.py
| | |-- views.py
| | |-- utils/
| | |-- migrations/
| | |-- etc...
| |-- mediafiles/
|-- docs/
</code></pre>
<p>The imports inside de files are structured like this:</p>
<ul>
<li><code>apps.py</code> => <code>from app.project.settings import APP_NAME</code></li>
<li><code>settings.py</code> => <code>from app.webapp.utils.paths import BASE_DIR</code></li>
<li><code>urls.py</code> => <code>from app.webapp.views import home</code></li>
<li><code>views.py</code> => <code>from app.webapp.models import MyModel</code></li>
</ul>
<p>In <code>settings.py</code></p>
<pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [
    "webapp",
    ...
]
</code></pre>
<p>In short, nothing very unusual, except that my root folder is one level higher than in a standard Django installation. However, as soon as I run</p>
<pre class="lang-bash prettyprint-override"><code>python app/manage.py runserver localhost:8000
# OR
python manage.py runserver localhost:8000 # (inside the `app/` directory)
</code></pre>
<p>I get a <code>ModuleNotFoundError: No module named 'app'</code></p>
<p>The problem is fixed if I set the <code>PYTHONPATH</code> variable inside the <code>root/</code> directory:</p>
<pre class="lang-bash prettyprint-override"><code>export PYTHONPATH="$(pwd)"
</code></pre>
<p>If I remove the <code>app</code> from my imports, then the command works but my IDE (PyCharm) no longer lets me take advantage of import autocompletion...</p>
<blockquote>
<h3>Why does this error happen?</h3>
<h3>Is there a way to keep the import as so without needing to export the <code>PYTHONPATH</code>?</h3>
</blockquote>
<p>Thank you so much for your help!!</p>
<p><strong>Edit</strong>: Apparently <a href="https://stackoverflow.com/questions/30001009/django-import-error-no-module-named-apps">this question</a> cover the same issue but without a response that fix my problem</p>
| <python><django><import><module><django-4.0> | 2023-09-20 09:05:50 | 1 | 501 | Seglinglin |
77,140,606 | 2,213,825 | Use of CORS in Firebase Functions in Python to set allow-headers | <p>I am setting CORS in the following way</p>
<pre class="lang-py prettyprint-override"><code>from firebase_functions.options import CorsOptions
from firebase_functions import https_fn
@https_fn.on_request(cors=CorsOptions(cors_origins=['*'], cors_methods=['POST']))
def my_function(req: https_fn.Request) -> https_fn.Response:
    pass
</code></pre>
<p>However, I am noticing two problems:</p>
<ol>
<li>There is no way to set the "Access-Control-Allow-Headers".</li>
<li>The headers I am getting back do not include "Access-Control-Allow-Methods=POST" even though I have set that option.</li>
</ol>
<p>In the meantime I am using my own decorator but I would prefer to use the official one:</p>
<pre class="lang-py prettyprint-override"><code>def handle_cors(fn: Callable, allowed_origins: Union[list[str], None] = None,
                allowed_methods: Union[list[str], None] = None) -> Callable:
    if allowed_origins is None:
        allowed_origins = ['*']
    if allowed_methods is None:
        allowed_methods = ['POST']

    @functools.wraps(fn)  # preserves fn's name, docstring, etc.
    def wrapper(req: https_fn.Request):
        if req.method == 'OPTIONS':
            return https_fn.Response(status=204,
                                     headers={
                                         'Access-Control-Allow-Origin': ', '.join(allowed_origins),
                                         'Access-Control-Allow-Methods': ', '.join(allowed_methods),
                                         'Access-Control-Allow-Headers': 'Content-Type, Authorization',
                                         'Access-Control-Max-Age': '3600'})
        return fn(req)

    return wrapper
</code></pre>
| <python><firebase><google-cloud-functions> | 2023-09-20 08:22:48 | 0 | 4,883 | João Abrantes |
77,140,370 | 7,618,968 | Selenium can't reach chrome in celery task while using supervisor | <p>I have a Django project that uses Celery as a task queue. The task runs ChromeDriver via Selenium and fetches data from some URLs, like this:</p>
<pre><code>@shared_task(bind=True)
def task_one(self):
    options = Options()
    options.add_argument("--start-maximized")
    options.add_argument('--no-sandbox')
    options.add_argument("--disable-dev-shm-usage")
    driver = webdriver.Chrome(options=options)
    wait = WebDriverWait(driver, 20)
    action = ActionChains(driver)
    driver.get('https://somewhere.com')
    # do others
</code></pre>
<p>This task should run periodically; for that I use Celery beat.</p>
<p>For testing, I run the Celery worker and Celery beat in two terminals and everything seems okay (beat sends a task every 20 seconds and the worker runs Chrome and fetches data).</p>
<p>But when I use Supervisor, the worker fails and can't reach Chrome (Celery beat still works and sends every 20 seconds, but the problem is in the worker).</p>
<p>The celery logs say:</p>
<pre><code>'Message: session not created: Chrome failed to start: exited normally.
(chrome not reachable)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Stacktrace:
#0 0x562b0da106c3 <unknown>
#1 0x562b0d6e61e7 <unknown>
#2 0x562b0d719526 <unknown>
#3 0x562b0d71569c <unknown>
#4 0x562b0d75823a <unknown>
#5 0x562b0d74ee93 <unknown>
#6 0x562b0d721934 <unknown>
#7 0x562b0d72271e <unknown>
#8 0x562b0d9d5cc8 <unknown>
#9 0x562b0d9d9c00 <unknown>
#10 0x562b0d9e41ac <unknown>
#11 0x562b0d9da818 <unknown>
#12 0x562b0d9a728f <unknown>
#13 0x562b0d9fee98 <unknown>
#14 0x562b0d9ff069 <unknown>
#15 0x562b0da0f853 <unknown>
#16 0x7f0c29a97b43 <unknown>
'}
</code></pre>
| <python><django><selenium-webdriver><celery><supervisord> | 2023-09-20 07:51:17 | 1 | 482 | Saeed Ramezani |
77,140,031 | 1,852,526 | Pandas line break on each comma separated list item when exporting to CSV | <p>I am using the following Pandas code to export some data to Excel. The data is a list of objects, and one field of each item is itself a list (Location). Here is a screenshot of one of the list items expanded:</p>
<p><a href="https://i.sstatic.net/HL9ES.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HL9ES.jpg" alt="list item expanded" /></a></p>
<p>Now, when I export the data to xlsx with Pandas, see the screenshot below: all the locations are exported onto one line, comma separated. Is there a way to make the row taller for those comma-separated columns by adding a line break after each comma?</p>
<p><a href="https://i.sstatic.net/Uq7Rx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uq7Rx.png" alt="issue" /></a></p>
<p>For instance in the Location column when I open I want to see something like:</p>
<pre><code> EcoSystem Package ... Location
First Row Core\\ATF\\.....,
Core\\CoreServer\\....,
EchoNET\\UI\\.....
</code></pre>
<p>Here is the code I use to export the data. Note the <code>values</code> parameter, a list of objects that contains the <code>Location</code> list.</p>
<pre><code>def create_excel_with_format(headers,values,full_file_name_with_path):
    #Write to Excel in xlsx format with formatting.
df = pd.DataFrame(data=values,columns=headers)
df = df.set_axis(df.index*2 + 1).reindex(range(len(df)*2)) #Create a blank row after every row.
with pd.ExcelWriter(full_file_name_with_path) as writer:
df.to_excel(writer, index=False)
workbook = writer.book
worksheet = writer.sheets['Sheet1']
#font_fmt = workbook.add_format({'font_name': 'Arial', 'font_size': 10})
header_format = workbook.add_format({
'bold': False,
'border': False,
'text_wrap': True})
for col_num, value in enumerate(df.columns.values):
worksheet.write(0, col_num, value, header_format)
</code></pre>
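To illustrate what I mean, here is a minimal sketch (made-up paths and column names) of pre-processing the Location lists so each entry sits on its own line; since the <code>text_wrap</code> format is already applied above, Excel should then wrap the cell accordingly:

```python
import pandas as pd

# Hypothetical data: one row whose Location field is a list of paths.
df = pd.DataFrame({
    "EcoSystem Package": ["First Row"],
    "Location": [["Core\\ATF\\a", "Core\\CoreServer\\b", "EchoNET\\UI\\c"]],
})

# Join list values with newlines instead of commas before exporting;
# with text_wrap, Excel renders one entry per line in a taller cell.
df["Location"] = df["Location"].apply(
    lambda v: "\n".join(v) if isinstance(v, list) else v
)
print(df.loc[0, "Location"])
```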
| <python><pandas><excel><dataframe> | 2023-09-20 07:03:05 | 2 | 1,774 | nikhil |
77,139,790 | 10,829,044 | pandas dynamically identify header row - Combine multiple xlsx files | <p>I have 10 excel sheets and each sheet is expected to have the same header/column names.</p>
<p>Files are named as <code>ABC.xlsx</code>, <code>DEF_123.xlsx</code> etc. There is no naming wildcard pattern in excel filenames.</p>
<p>My problem is that the header row position varies: in some Excel files the header is in the 1st row, in others in the 2nd row (the 1st row is empty), and in some in the 5th row (the first 4 rows are empty).</p>
<p>My objective is to do the below</p>
<p>a) Combine/vertically stack the data from each excel file into one dataframe</p>
<p>So, I tried the below</p>
<pre><code>record_counts = []
dfs=[]
folder_path = Path.cwd()
filenames = sorted(glob.glob(str(folder_path / "*.xlsx")))
for f in filenames:
filename = os.path.basename(f)
df = pd.read_excel(filename, header=None)
header_row = 0
if df.iloc[0].isna().all():
header_row = 1
df = pd.read_excel(filename, header=header_row)
count = len(df.index)
record_counts.append(count)
dfs.append(df)
combined_df = pd.concat(dfs, ignore_index=True)
</code></pre>
<p>But the above code assumes that the header appears in either the first row (index 0) or, if that row is empty, the second row (index 1). That is hardcoded.</p>
<p>How can I identify the row where header appears dynamically and take the data from there?</p>
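Not a full solution, but a sketch of the detection step I'm imagining. The in-memory frame stands in for <code>pd.read_excel(filename, header=None)</code>; in the real loop the second <code>read_excel</code> call would use the detected <code>header_row</code>:

```python
import pandas as pd

# Stand-in for: raw = pd.read_excel(filename, header=None)
raw = pd.DataFrame([[None, None],
                    [None, None],
                    ["col_a", "col_b"],
                    [1, 2],
                    [3, 4]])

# The header is the first row that is not entirely empty.
header_row = int(raw.dropna(how="all").index[0])

# Equivalent of: df = pd.read_excel(filename, header=header_row)
df = raw.iloc[header_row + 1:].reset_index(drop=True)
df.columns = raw.iloc[header_row].tolist()
print(header_row, list(df.columns))
```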
| <python><pandas><excel><dataframe><list> | 2023-09-20 06:25:53 | 3 | 7,793 | The Great |
77,139,684 | 6,266,370 | How to store or save ssh client object in Django session? | <p>In the functions below, I create an SSH client and try to save it to the Django session object.</p>
<pre><code>from paramiko import SSHClient

def create_ssh_client(host, user, password):
client = SSHClient()
client.load_system_host_keys()
client.connect(host,
username=user,
password=password,
timeout=5000,
)
return client
def save_to_session(request):
ssh_client = create_ssh_client(host, user, password)
request.session['ssh_client'] = ssh_client
</code></pre>
<p><strong>While trying to save it to the session, I get this TypeError:</strong></p>
<pre><code>TypeError: Object of type SSHClient is not JSON serializable
</code></pre>
<p>Any suggestion or input appreciated.</p>
| <python><django><session><ssh><django-sessions> | 2023-09-20 06:05:57 | 2 | 344 | codemastermind |
77,139,482 | 10,504,481 | Recursive model_json_schema in pydantic v2 | <p>I'm using pydantic to create the schema for the usage with <a href="https://github.com/json-editor/json-editor" rel="nofollow noreferrer">https://github.com/json-editor/json-editor</a></p>
<p>I'm using a "main" class to select a method as follows:</p>
<pre class="lang-py prettyprint-override"><code>class Analysis(BaseModel):
method: methods = Field(
..., description="Analysis method", discriminator="method"
)
</code></pre>
<p>I'd like to make changes to the json schema in each of the <code>methods</code>.
My naive approach was to override <code>model_json_schema</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>class Properties1D(BaseModel):
method: t.Literal["Properties1D"] = "Properties1D"
value: str = "a"
@classmethod
def model_json_schema(cls, *args, **kwargs) -> dict[str, t.Any]:
schema = super().model_json_schema(*args, **kwargs)
schema["properties"]["value"]["enum"] = ["a", "b", "c"]
return schema
methods = t.Union[Properties1D, None]
</code></pre>
<p>but unfortunately, the overridden <code>model_json_schema</code> is not being used when I run <code>Analysis.model_json_schema()</code>.</p>
<p>Is there a way to achieve this?</p>
| <python><pydantic> | 2023-09-20 05:21:54 | 1 | 506 | PythonF |
77,139,182 | 395,857 | How can I see the size of a HuggingFace dataset before downloading it? | <p>I want to download a HuggingFace dataset, e.g. <a href="https://huggingface.co/datasets/uonlp/CulturaX" rel="nofollow noreferrer"><code>uonlp/CulturaX</code></a>:</p>
<pre><code>from datasets import load_dataset
ds = load_dataset("uonlp/CulturaX", "en")
</code></pre>
<p>How can I see the size of a HuggingFace dataset before downloading it?</p>
| <python><huggingface><huggingface-datasets> | 2023-09-20 03:51:04 | 1 | 84,585 | Franck Dernoncourt |
77,139,177 | 4,778,587 | Docker python api with auto_remove=True doesn't return stdout | <p>I need the <code>auto_remove=True</code> option instead of <code>remove=True</code> for a more robust cleanup mechanism. However, with the following code I can't retrieve the stdout:</p>
<pre><code>import docker
client = docker.from_env()
container_options = {
'image': 'age',
'command': ["sh", "/predictAge.sh", "2"],
'auto_remove': True,
'detach': True,
'stderr':True,
'stdout':True
}
container = client.containers.run(**container_options)
stdout_bytes = container.logs(stdout=True, stderr=False)
stdout_str = stdout_bytes.decode('utf-8')
print(stdout_str)
</code></pre>
<p>It seems like the container is removed before one can grab the stdout:</p>
<pre><code> raise cls(e, response=response, explanation=explanation) from e
docker.errors.NotFound: 404 Client Error for http+docker://localhost/v1.43/containers/1da46e6a02d621ed01ed38063aea75cf43b2806246430bdffd312567fea90738/json: Not Found ("No such container: 1da46e6a02d621ed01ed38063aea75cf43b2806246430bdffd312567fea90738")
</code></pre>
<p>How can I use auto_remove=True but at the same time get the stdout?</p>
<hr />
<h2>Update</h2>
<p>A minimal example that still reproduces the error, for testing:</p>
<pre><code>import docker
client = docker.from_env()
container_options = {
'image': 'hello-world',
'auto_remove': True,
'stdout':True
}
container = client.containers.run(**container_options)
stdout_bytes = container.logs(stdout=True, stderr=False)
stdout_str = stdout_bytes.decode('utf-8')
print(stdout_str)
</code></pre>
| <python><docker> | 2023-09-20 03:49:03 | 1 | 695 | doom4 |
77,139,080 | 10,836,309 | heyoo is not sending messages | <p>I am trying the following code:</p>
<pre><code>from heyoo import WhatsApp
messenger = WhatsApp(my_token,phone_number_id=my_phone_id)
# For sending a Text messages
messenger.send_message('Hello I am WhatsApp Cloud API', registered_number)
</code></pre>
<p>I am getting the following output:</p>
<pre><code>{'messaging_product': 'whatsapp',
'contacts': [{'input': 'XXXXXXXX', 'wa_id': 'XXXXXXX'}],
'messages': [{'id': 'wamid.HBgMOTcyJKEzMjY4ODg5FQIAERgGHzAyQzE3RDg5NURDQjc0OEFFAA=='}]}
</code></pre>
<p>But no message is received.
Any ideas?</p>
| <python><whatsapi><whatsapp-cloud-api> | 2023-09-20 03:14:20 | 2 | 6,594 | gtomer |
77,138,938 | 264,003 | Optional packages in setuptools using pyproject.toml | <p>Say I have a project like this:</p>
<pre><code>pyproject.toml
src/
cool/
__init__.py
...
cool_cli/
__init__.py
...
</code></pre>
<p>I would like to be able do the following installs:</p>
<pre><code>pip install cool # 1
pip install cool[cli] #2
</code></pre>
<p>#1 is intended to package and distribute just the <code>cool</code> package, whereas #2 would distribute both <code>cool</code> and <code>cool_cli</code>. Is this possible using <code>pyproject.toml</code> and setuptools? I can't see an obvious way of doing it.</p>
<pre><code>[project]
name = "cool"
requires-python = ">=3.11"
version = "0.0.1"
readme = "readme.md"
dependencies = [
"httpx"
]
[project.optional-dependencies]
cli = [
"click"
]
[tool.setuptools]
packages = ["cool", "cool_cli"]
</code></pre>
| <python><setuptools><pyproject.toml> | 2023-09-20 02:29:46 | 0 | 2,634 | zzz |
77,138,707 | 4,634,061 | Why are any of the inputs I provide to argparse set as unrecognized arguments? | <p>Goal: Create Simple Pipreqs to Generate requirements file script</p>
<p>I'm already handling installing pipreqs and checking if a package exists in another file. That's not the problem.
I have already read: <a href="https://docs.python.org/3/library/argparse.html#action" rel="nofollow noreferrer">https://docs.python.org/3/library/argparse.html#action</a></p>
<p>Yet every time I run this script it says "unrecognized arguments or ignored explicit argument".</p>
<pre><code>import argparse
import subprocess
parser = argparse.ArgumentParser()
parser.add_argument("-q", "--set_req", action="store_false", required=False)
parser.add_argument("-d", "--directory", action="store_const", const=".", required=False)
def use_pipreqs_to_generate_requirements(dir_name):
subprocess.check_call(['pipreqs', f'{dir_name}'])
if __name__ == "__main__":
args = parser.parse_args()
print(args)
print(f"Req: {args.set_req}")
print(f" Dir {args.directory}")
# use_pipreqs_to_generate_requirements(args.directory)
</code></pre>
<p>I run it with <code>python3 setup_requirements_argparse.py --set_req True -d "."</code>.</p>
<p>I have varied the value after <code>-q</code> or <code>--set_req</code> to <code>true</code>, <code>True</code>, <code>False</code>, <code>false</code>.</p>
<p>When I write those flags with = signs, I get the error "ignored explicit argument":</p>
<pre class="lang-none prettyprint-override"><code>python3 setup_requirements_argparse.py --set_req=True -d="test/"
usage: setup_requirements_argparse.py [-h] [-q] [-d]
setup_requirements_argparse.py: error: argument -q/--set_req: ignored explicit argument 'True'
</code></pre>
<p>I have also set the <code>args =</code> outside of the main function to see if that fixes it, it doesn't.</p>
<hr />
<p>Solution For Future Wanderers:</p>
<p>A reference to an answer in the linked question:
"A <code>store_true</code> argument in argparse indicates that the very presence of the option will automatically store <code>True</code> in the corresponding variable." - Jordan Lewis.</p>
<p>This part of the documentation says <a href="https://docs.python.org/3/library/argparse.html#action" rel="nofollow noreferrer">ArgparseAction</a></p>
<p>Action
<code>store_true</code> and <code>store_false</code> - These are special cases of 'store_const' used for storing the values True and False respectively. In addition, they create default values of False and True respectively.</p>
<p>To clarify this point, these arguments (store_false or store_true) <strong>do not accept input</strong>. When you set action to <code>store_true</code> or <code>store_false</code>, merely including the flag performs the action, i.e. it stores True or False in that argument. <em>The absence of the flag uses your default value if you've set one, or otherwise the opposite of what you've asked it to store.</em></p>
<p>Changing my solution to <code>store_true</code> and <code>store</code> produces the behavior I want: simply including the <code>-q</code> flag triggers the requirements.txt generation I desire.</p>
<pre><code>parser.add_argument("-q", "--set_req", action="store_true", required=False)
parser.add_argument("-d", "--directory", action="store", default=".", required=False)
...
if args.set_req:
use_pipreqs_to_generate_requirements(args.directory)
</code></pre>
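For completeness, a runnable stdlib-only demo of this behavior, with flag names mirroring the ones above:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-q", "--set_req", action="store_true")
parser.add_argument("-d", "--directory", action="store", default=".")

# Presence of -q stores True; omitting it leaves the default False.
with_flag = parser.parse_args(["-q", "-d", "test/"])
without_flag = parser.parse_args([])

print(with_flag.set_req, with_flag.directory)        # True test/
print(without_flag.set_req, without_flag.directory)  # False .
```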
| <python><python-3.x><argparse> | 2023-09-20 01:04:28 | 0 | 316 | Annie |
77,138,690 | 688,071 | Shared Memory error while pickling object | <p>I'm messing around with <code>multiprocessing.shared_memory</code> and am simply trying to pass a <code>datetime.now()</code> value around processes. Piecing things together it seems that I can just pickle an object to turn it into bytes and place it on the shared memory buffer.... oh, if it was that simple.</p>
<p>Examples I've run across all focus on using this with <a href="https://stackoverflow.com/questions/14124588/shared-memory-in-multiprocessing">numpy</a> , but there has to be more use for it, right?</p>
<p>Here's what I have:</p>
<pre><code>from datetime import datetime, timedelta
from cachetools import cached, TTLCache
import time
from multiprocessing import Process, shared_memory
import pickle
def pickle_object(obj):
return pickle.dumps(obj)
def unpickle_object(bytes):
return pickle.loads(bytes)
def create_shared_block():
shm = shared_memory.SharedMemory(name="shared_mem_test", create=True, size=1024)
while True:
p = pickle_object(datetime.now())
print(p)
shm.buf[0] = p
time.sleep(5)
def process1(shr_name):
while True:
existing_shm = shared_memory.SharedMemory(name=shr_name)
print("p1 time is: ", unpickle_object(existing_shm.buf[0]))
time.sleep(3)
def process2(shr_name):
while True:
existing_shm = shared_memory.SharedMemory(name=shr_name)
print("p2 time is: ", unpickle_object(existing_shm.buf[0]))
time.sleep(2)
if __name__ == "__main__":
print("Create shared block")
shr = create_shared_block()
# holder of processes
processes = []
p1 = Process(target=process1, args=(shr.name,))
p2 = Process(target=process2, args=(shr.name,))
p1.start()
p2.start()
p1.join()
p2.join()
shr.close()
shr.unlink()
</code></pre>
<p>This outputs:</p>
<pre><code>Create shared block
pickled: b'\x80\x04\x95*\x00\x00\x00\x00\x00\x00\x00\x8c\x08datetime\x94\x8c\x08datetime\x94\x93\x94C\n\x07\xe7\t\x13\x140%\x04\x13\x14\x94\x85\x94R\x94.'
Traceback (most recent call last):
File "objects.py", line 42, in <module>
shr = create_shared_block()
File "objects.py", line 22, in create_shared_block
shm.buf[0] = p
TypeError: memoryview: invalid type for format 'B'
</code></pre>
<p>I'm trying to figure out why it's complaining about the unsigned bytes type (B). From what I can tell (not a numpy guy here), the sample numpy code linked above isn't doing anything with unsigned bytes, so how did that example work while this one didn't? The pickled object is clearly bytes:</p>
<pre><code>b'\x80\x04\x95*\x00\x00\x00\x00\x00\x00\x00\x8c\x08datetime\x94\x8c\x08datetime\x94\x93\x94C\n\x07\xe7\t\x13\x140%\x04\x13\x14\x94\x85\x94R\x94.'
</code></pre>
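For what it's worth, here is a minimal single-process sketch of writing the pickled bytes into the buffer as a slice rather than a single index (the length bookkeeping is illustrative only; a real multi-process setup would also need to share the payload length somehow):

```python
import pickle
from datetime import datetime
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=1024)
try:
    payload = pickle.dumps(datetime(2023, 9, 20))
    # shm.buf[0] = p fails: a single index expects one int 0-255 (format 'B').
    shm.buf[:len(payload)] = payload  # slice assignment accepts a bytes object
    restored = pickle.loads(bytes(shm.buf[:len(payload)]))
    print(restored)  # 2023-09-20 00:00:00
finally:
    shm.close()
    shm.unlink()
```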
| <python><python-3.x> | 2023-09-20 00:56:44 | 1 | 2,512 | Godzilla74 |
77,138,573 | 15,587,184 | Chain method grouping and calculating differences in a Pandas DataFrame | <p>I have a Pandas DataFrame with the following structure:</p>
<pre><code>import pandas as pd
data = {
'glob_order': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
'trans': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C'],
'chain': [1, 1, 2, 2, 1, 1, 2, 1, 1, 2, 2],
'date': ['1/08/2023', '2/08/2023', '3/08/2023', '4/08/2023', '5/08/2023', '6/08/2023', '7/08/2023', '8/08/2023', '9/08/2023', '10/08/2023', '11/08/2023']
}
df = pd.DataFrame(data)
# Convert 'date' column to datetime
df['date'] = pd.to_datetime(df['date'], format='%d/%m/%Y')
print(df)
</code></pre>
<p>I want to perform two operations:</p>
<ul>
<li>Group the DataFrame by the 'trans' and 'chain' columns and select the very first row from each group.</li>
<li>Create a new column named 'delta' that represents the difference in days between the current date and the previous date within each group.</li>
</ul>
<p>I tried the following code:</p>
<pre><code>(df
.groupby(['trans', 'chain'])
.first()
.assign(
delta=lambda x: (x['date'] - x['date'].shift(1)).dt.total_seconds() / (60*60*24),
).reset_index()
)
</code></pre>
<p>However, the output I'm getting is not what I expected. The shift runs across group boundaries, so only the very first row gets NaN; I want a NaN for the first delta of each 'trans' group instead.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">trans</th>
<th style="text-align: right;">chain</th>
<th style="text-align: right;">glob_order</th>
<th style="text-align: right;">date</th>
<th style="text-align: right;">delta</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2023-08-01</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td style="text-align: right;">A</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2023-08-03</td>
<td style="text-align: right;">2.0</td>
</tr>
<tr>
<td style="text-align: right;">B</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">2023-08-05</td>
<td style="text-align: right;">2.0</td>
</tr>
<tr>
<td style="text-align: right;">B</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">2023-08-07</td>
<td style="text-align: right;">2.0</td>
</tr>
<tr>
<td style="text-align: right;">C</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">2023-08-08</td>
<td style="text-align: right;">1.0</td>
</tr>
<tr>
<td style="text-align: right;">C</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2023-08-10</td>
<td style="text-align: right;">2.0</td>
</tr>
</tbody>
</table>
</div>
<p>I'd like to understand why this is happening and what I need to do to get the desired output.</p>
<p>This is my desired output</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">trans</th>
<th style="text-align: right;">chain</th>
<th style="text-align: right;">glob_order</th>
<th style="text-align: right;">date</th>
<th style="text-align: right;">delta</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2023-08-01</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td style="text-align: right;">A</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2023-08-03</td>
<td style="text-align: right;">2.0</td>
</tr>
<tr>
<td style="text-align: right;">B</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">2023-08-05</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td style="text-align: right;">B</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">2023-08-07</td>
<td style="text-align: right;">2.0</td>
</tr>
<tr>
<td style="text-align: right;">C</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">2023-08-08</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td style="text-align: right;">C</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2023-08-10</td>
<td style="text-align: right;">2.0</td>
</tr>
</tbody>
</table>
</div>
<p>I am looking for a method-chaining style solution for clarity and readability, since I'm very new to Python.</p>
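To make the comparison concrete, here is a self-contained sketch of the chain with a per-group diff (the inner <code>groupby("trans")</code> before the diff is my assumption about what is needed to restart the delta in each group):

```python
import pandas as pd

data = {
    "glob_order": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
    "trans": ["A", "A", "A", "A", "B", "B", "B", "C", "C", "C", "C"],
    "chain": [1, 1, 2, 2, 1, 1, 2, 1, 1, 2, 2],
    "date": ["1/08/2023", "2/08/2023", "3/08/2023", "4/08/2023",
             "5/08/2023", "6/08/2023", "7/08/2023", "8/08/2023",
             "9/08/2023", "10/08/2023", "11/08/2023"],
}
df = pd.DataFrame(data)
df["date"] = pd.to_datetime(df["date"], format="%d/%m/%Y")

# A plain .shift(1) crosses group boundaries; a diff computed within
# each 'trans' group restarts, giving NaN on each group's first row.
out = (df.groupby(["trans", "chain"], as_index=False)
         .first()
         .assign(delta=lambda x: x.groupby("trans")["date"].diff().dt.days))
print(out)
```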
| <python><pandas><methods><method-chaining> | 2023-09-20 00:02:23 | 2 | 809 | R_Student |
77,138,505 | 3,633,653 | Merging pandas dataframes, keep all rows and columns, without duplicates. If duplicates, keep left (_x) columns, unless right (_y) column value exist | <p>I am trying to merge two Pandas DataFrames. The first dataframe (animals_df) contains the original values; the second dataframe (update_df) contains new values that will replace or update the original ones. I want to keep all rows (animals) and columns (metadata about the animals) from both dataframes, but no duplicate rows (animals) or columns.</p>
<p>For the columns that exist in both dataframes, I want the value from update_df (the _UPDATE column) to win, falling back to the original value when no update value was provided.</p>
<pre><code>animals = [
{"name": "buddy",
"animal": "dog",
"food": "Cat Food"},
{"name": "dexter",
"animal": "cat",
"food": "Cat Food"},
{"name": "solovino",
"animal": "dog"}
]
update = [
# Update food and add brand column
{"name": "buddy",
"animal": "dog",
"food": "dog food",
"brand": "pet smart"},
# New animal
{"name": "micky",
"animal": "mouse",
"food": "kids",
"brand": "disni"}
]
#Create Dataframes
animals_df = pd.DataFrame(animals)
update_df = pd.DataFrame(update)
display(animals_df)
display(update_df)
</code></pre>
<p>animals_df Output:</p>
<pre><code>id name animal food
0 buddy dog Cat Food
1 dexter cat Cat Food
2 solovino dog NaN
</code></pre>
<p>update_df Output:</p>
<pre><code>id name animal food brand
0 buddy dog dog food pet smart
1 micky mouse kids disni
</code></pre>
<p>This code produces the results I want, but I believe there is a better and simpler solution.</p>
<pre><code>y_suffix = '_UPDATE'
animals_updated_df = pd.merge(animals_df, update_df, on=['name', 'animal'],
how='outer', suffixes=('', y_suffix))
display(animals_updated_df)
</code></pre>
<p>animals_updated_df Output:</p>
<pre><code>id name animal food food_UPDATE brand
0 buddy dog Cat Food dog food pet smart
1 dexter cat Cat Food NaN NaN
2 solovino dog NaN NaN NaN
3 micky mouse NaN kids disni
</code></pre>
<p>Continue:</p>
<pre><code>def update_rows(row, update_columns, suffix):
"""This function replaces the original values from the animals_df
with the corresponding update value in the update_df.
if no update value provided, then keep the original value"""
updated_row = row
for column in update_columns:
update_value = row[column]
if not pd.isna(update_value):
            original_column = column[:-len(suffix)]  # note: str.strip() removes characters, not a suffix
updated_row[original_column] = update_value
return updated_row
#Get a list of the columns names with the update values, eg. food_UPDATE
update_columns = animals_updated_df.columns[animals_updated_df.columns.str.contains(y_suffix)].to_list()
#for each row (animal) apply the function update_rows,
#that will replace the original values with the updated ones.
animals_updated_df = animals_updated_df.apply(lambda x: update_rows(x, update_columns, y_suffix), axis=1)
#Remove the columns with the update values, eg. food_UPDATE
animals_updated_df.drop(update_columns, axis=1, inplace=True)
animals_updated_df
</code></pre>
<p>animals_updated_df Output:</p>
<pre><code>id name animal food brand
0 buddy dog dog food pet smart
1 dexter cat Cat Food NaN
2 solovino dog NaN NaN
3 micky mouse kids disni
</code></pre>
<p>Do you know how this can be done without the need of a function?</p>
<p>I have also tried the solution on <a href="https://stackoverflow.com/questions/66786090/pandas-left-merge-keeping-data-in-right-dataframe-on-duplicte-columns">Pandas left merge keeping data in right dataframe on duplicte columns</a></p>
<pre><code>animals_updated_df2 = pd.concat([animals_df, update_df]).groupby(['name', 'animal'], as_index=False).first()
display(animals_updated_df2)
</code></pre>
<p>But the output is not what I need. The food value for buddy-dog was not updated with "dog food" and for some reason, on a larger dataframe this code results in ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 12176 and the array at index 1 has size 43:</p>
<pre><code>id name animal food brand
0 buddy dog Cat Food pet smart
1 dexter cat Cat Food None
2 micky mouse kids disni
3 solovino dog None None
</code></pre>
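For the record, a hedged alternative I'm considering; it assumes 'name' + 'animal' uniquely identify each row, and it does not explain the ValueError on the larger dataframe:

```python
import pandas as pd

animals_df = pd.DataFrame([
    {"name": "buddy", "animal": "dog", "food": "Cat Food"},
    {"name": "dexter", "animal": "cat", "food": "Cat Food"},
    {"name": "solovino", "animal": "dog"},
])
update_df = pd.DataFrame([
    {"name": "buddy", "animal": "dog", "food": "dog food", "brand": "pet smart"},
    {"name": "micky", "animal": "mouse", "food": "kids", "brand": "disni"},
])

# combine_first keeps the union of rows and columns, preferring the
# caller's (update_df's) non-NaN values over animals_df's originals.
keys = ["name", "animal"]
result = (update_df.set_index(keys)
          .combine_first(animals_df.set_index(keys))
          .reset_index())
print(result)
```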
| <python><pandas><dataframe> | 2023-09-19 23:33:49 | 2 | 612 | iDevPy |
77,138,324 | 7,274,267 | With MySQL Connector in Python how do I create a stored procedure then use it right away? | <p>Using mysql-connector in Python, I'm trying to create a stored procedure and then create another one that calls the first, with this code:</p>
<pre><code>def ExecuteQueryFromFile(query_name, path):
try:
with open(path, "r") as file:
sql_script = file.read()
cursor.execute(sql_script)
result_set = cursor.fetchall()
except Exception as e:
print("An error occurred while executing " + query_name + ": {}".format(e))
cursor.close()
sepix_db_conn.close()
sys.exit(1)
print(query_name + " completed successfully.")
ExecuteQueryFromFile("sp1.sql", sp1Path)
ExecuteQueryFromFile("sp2.sql", sp2Path)
</code></pre>
<p>I hit the exception in my function when my cursor executes the query to create the second stored procedure :</p>
<pre><code>An error occurred while executing sp2.sql: 2014 (HY000): Commands out of sync; you can't run this command now
</code></pre>
<p>Because creating a stored procedure does not return a result set, I thought MySQL was still processing the first stored procedure creation when Python sent it the second one. That is why I added <code>result_set = cursor.fetchall()</code>, to ensure the statement had completed in MySQL before moving to the next one, but the attempt failed with the same error.</p>
<p>My second attempt involved a <code>sepix_db_conn.commit()</code> followed by <code>cursor.close()</code> and creating a new cursor, but I'm still hitting the same exception.</p>
<p>What is the right way of doing this?</p>
| <python><mysql><mysql-connector><mysql-connector-python> | 2023-09-19 22:37:44 | 1 | 378 | Jimmy Jacques |
77,138,264 | 2,430,134 | Tornado - how to get error when a child process dies immediately? | <p>I inherited an old Tornado project and am trying to update it. (I'm pointing out this context because it might not be set up according to current best practices.)</p>
<p>Right now, it calls <code>tornado.process.fork_processes()</code> and starts the loop, only to end up in an infinite cycle of:</p>
<pre><code> child 2 (pid 28921) exited with status 9, restarting
child 1 (pid 28920) exited with status 9, restarting
child 0 (pid 28919) exited with status 9, restarting
</code></pre>
<p>I've checked the official documentation, searched for other Q&As, and tried searching the github issues but can't find anything addressing my incredibly basic question: How do I see the error messages that are causing these child processes to die immediately?</p>
| <python><tornado> | 2023-09-19 22:22:16 | 1 | 1,885 | Adair |
77,138,199 | 2,687,317 | Adding a series to pandas DF with a value based on groups of data in other series | <p>I'm trying to add a column to this dataframe as efficiently as possible (since it is huge):</p>
<pre><code>Time Time_slot Veh_ID Pwr
1 1 100 10
1 2 100 12
2 1 100 3
2 2 100 13
3 1 100 22
3 2 100 13
1 1 55 8
1 2 55 2
2 1 55 12
2 2 55 11
6000 1 100 7
6000 2 100 6
6001 1 100 11
6001 2 100 14
6001 1 55 7
6001 2 55 9
6002 1 55 9
6002 2 55 13
6003 1 55 10
6003 2 55 9
</code></pre>
<p>The idea is to add a column called "Group_ID" that concatenates the Veh_ID with an incremental counter: consecutive records of the same Veh_ID belong to one group as long as the gap in Time is less than, say, 5000; a larger gap starts a new group for that Veh_ID. For example:</p>
<pre><code>Time Time_slot Veh_ID Pwr Group_ID
1 1 100 10 100_1
1 2 100 12 100_1
2 1 100 3 100_1
2 2 100 13 100_1
3 1 100 22 100_1
3 2 100 13 100_1
1 1 55 8 55_1
1 2 55 2 55_1
2 1 55 12 55_1
2 2 55 11 55_1
6000 1 100 7 100_2
6000 2 100 6 100_2
6001 1 100 11 100_2
6001 2 100 14 100_2
6001 1 55 7 55_2
6001 2 55 9 55_2
6002 1 55 9 55_2
6002 2 55 13 55_2
6003 1 55 10 55_2
6003 2 55 9 55_2
</code></pre>
<p>Here a Group_ID is generated per Veh_ID for each run of records whose timestamp gaps (<code>np.diff</code>) stay below 5000. While I can describe it, I can't get the code to work in Python.</p>
<p>I have code that is just too complicated and I'm sure pandas provides some way to do this more elegantly:</p>
<pre><code>df.insert(loc = 4, column = 'Group_ID', value = 0)
for v in df.Veh_ID.unique():
diffs = np.diff(df[df.Veh_ID == v].Time)
change_indxs = np.where(diffs > 5000)[0] # Each location is the last of the group
start_indx = df.index[0]
last_indx = df.index[-1]
abreak = last_indx
for i, abreak in enumerate(change_indxs):
aChunk = df.loc[start_indx:abreak+1]
aChunk['Group_ID'].loc[start_indx:abreak+1] = str(v) + str(i)
</code></pre>
<p>which doesn't work and throws error: <code>SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame </code></p>
<p>I'm a bit stumped.</p>
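Not sure it's the most elegant, but here is a vectorized sketch of the grouping I described (a boolean gap marker plus a cumulative sum per Veh_ID), on data matching the table above:

```python
import pandas as pd

df = pd.DataFrame({
    "Time": [1, 1, 2, 2, 3, 3, 1, 1, 2, 2,
             6000, 6000, 6001, 6001, 6001, 6001, 6002, 6002, 6003, 6003],
    "Time_slot": [1, 2, 1, 2, 1, 2, 1, 2, 1, 2,
                  1, 2, 1, 2, 1, 2, 1, 2, 1, 2],
    "Veh_ID": [100] * 6 + [55] * 4 + [100] * 4 + [55] * 6,
})

# A gap of more than 5000 between consecutive records of the same Veh_ID
# marks a new group; the cumulative sum turns the markers into counters.
new_group = df.groupby("Veh_ID")["Time"].diff().gt(5000)
counter = new_group.astype(int).groupby(df["Veh_ID"]).cumsum() + 1
df["Group_ID"] = df["Veh_ID"].astype(str) + "_" + counter.astype(str)
print(df["Group_ID"].unique())
```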
| <python><pandas> | 2023-09-19 22:05:16 | 1 | 533 | earnric |
77,138,178 | 8,820,463 | Fill in zero values only in the center of a numpy array | <p>I'm working with a rasterio raster that I've read into Python, so it is now a NumPy array. The outer edge of the array is all zeros and the interior is all ones, except for occasional zeros scattered among the ones (see the example array below). I want to leave the zeros on the outside of the array alone (i.e. keep them zero), but convert the zeros that are completely surrounded by ones (i.e. the zeros in the middle of the donut of ones) to one. However, I'm not really sure how to start.</p>
<p>Current array:</p>
<pre><code>import numpy as np
arr = np.array([[0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 1, 0, 0, 1, 1, 0],
[0, 1, 1, 0, 1, 1, 0, 0],
[0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0]])
</code></pre>
<p>Goal array:</p>
<pre><code>import numpy as np
arr = np.array([[0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0]])
</code></pre>
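One possible approach is a plain-NumPy flood fill from the border (scipy.ndimage.binary_fill_holes would be the ready-made alternative if SciPy is available); this is a sketch, not a tuned implementation:

```python
from collections import deque

import numpy as np

def fill_interior_zeros(a):
    """Return a copy where zeros NOT connected to the border become ones."""
    a = a.copy()
    h, w = a.shape
    outside = np.zeros((h, w), dtype=bool)
    # Seed the flood fill with every zero that touches the array edge.
    q = deque((r, c) for r in range(h) for c in range(w)
              if a[r, c] == 0 and (r in (0, h - 1) or c in (0, w - 1)))
    for r, c in q:
        outside[r, c] = True
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and a[nr, nc] == 0 and not outside[nr, nc]:
                outside[nr, nc] = True
                q.append((nr, nc))
    a[(a == 0) & ~outside] = 1  # enclosed zeros become ones
    return a

arr = np.array([[0, 0, 0, 0, 0, 0, 0, 0],
                [0, 1, 1, 1, 1, 1, 1, 0],
                [0, 1, 1, 0, 0, 1, 1, 0],
                [0, 1, 1, 0, 1, 1, 0, 0],
                [0, 1, 1, 1, 1, 1, 0, 0],
                [0, 0, 0, 0, 0, 0, 0, 0]])
print(fill_interior_zeros(arr))
```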
| <python><arrays><numpy> | 2023-09-19 21:58:42 | 1 | 483 | Ana |
77,137,976 | 12,114,641 | Python: Search for values from one csv file to another and get values from other columns | <p>I want to retrieve the <strong>Price</strong> and <strong>Stocks</strong> from the bottom CSV file and insert them into the top CSV file.</p>
<p>I can read both csv files and have two dictionaries, let's say <strong>top_dic</strong> & <strong>bottom_dic</strong></p>
<p>How do I go through both dictionaries, take the <strong>MPN</strong> values from <strong>top_dic</strong>, find them in <strong>bottom_dic</strong>, extract the <strong>stock</strong> and <strong>price</strong> for the corresponding products, and insert these values into the top CSV?</p>
<p><a href="https://i.sstatic.net/SWrTP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SWrTP.png" alt="enter image description here" /></a></p>
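For concreteness, here is a small stdlib-only sketch of the lookup I'm describing. The column names (MPN, Price, Stocks) and the sample values are taken from the screenshot and are illustrative; the in-memory strings stand in for the two files:

```python
import csv
import io

# In-memory stand-ins for the two CSV files (hypothetical rows).
top_csv = "MPN,Price,Stocks\nA1,,\nB2,,\nC3,,\n"
bottom_csv = "MPN,Price,Stocks\nA1,9.99,5\nB2,4.50,0\n"

# Build a lookup keyed by MPN from the bottom file...
lookup = {row["MPN"]: row for row in csv.DictReader(io.StringIO(bottom_csv))}

# ...then fill Price/Stocks into the top rows where the MPN matches.
rows = []
for row in csv.DictReader(io.StringIO(top_csv)):
    match = lookup.get(row["MPN"])
    if match:
        row["Price"], row["Stocks"] = match["Price"], match["Stocks"]
    rows.append(row)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["MPN", "Price", "Stocks"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

With real files, the `io.StringIO(...)` objects would be replaced by `open(path, newline="")` handles.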
| <python><excel><csv> | 2023-09-19 21:04:34 | 1 | 1,258 | Raymond |
77,137,961 | 21,787,377 | Django:- Real-time Name Display Issue: 'Account Not Found' Despite Existing Account Number | <p>There is a <code>name</code> field in the <code>CustomUser</code> model that I want to display when a user inputs an account number in the <code>Transfer</code> model. In other words, validation: imagine you want to transfer funds to someone's account. When the sender inputs the receiver's account number, I want the <code>name</code> field from the <code>CustomUser</code> model to be displayed in real time in the <code>name</code> field of the <code>Transfer</code> model. To accomplish this, I query the <code>Account</code> model and send the result as a <code>JsonResponse</code>, but I don't understand why it says <code>Account Not Found</code> even though the account number does exist in the database.</p>
<p><strong>Models.py</strong></p>
<pre><code>class Account(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
account_number = models.IntegerField()
account_balance = models.DecimalField(max_digits=12, decimal_places=6, default=0.0)
account_status = models.CharField(max_length=40, choices=ACCOUNT_STATUS)
is_accepted_for_Loan = models.BooleanField()
</code></pre>
<pre><code>class Transfer(models.Model):
name = models.CharField(max_length=100)
account_number = models.IntegerField()
sender = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='sender_user')
receiver = models.ForeignKey(settings.AUTH_USER_MODEL,on_delete=models.CASCADE, related_name='receiver_user')
amount = models.DecimalField(max_digits=12, decimal_places=6, default=0.0)
timestamp = models.DateTimeField(auto_now_add=True)
</code></pre>
<p><strong>Template</strong></p>
<pre><code><input type="number" id="account_number" placeholder="Receiver's Account Number">
<p>Receiver Name: <span id="receiver_name"></span></p>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
$(document).ready(function() {
$('#account_number').on('input', function() {
let url = "{% url 'Get_Receiver_Infor' %}";
var accountNumber = $(this).val();
if (accountNumber.length > 9) {
$.ajax({
url: url,
type: 'POST',
dataType: 'json',
success: function(data) {
$('#receiver_name').text(data.name);
},
error: function() {
$( '#receiver_name').text('Account Not Found');
}
});
} else {
$('#receiver_name').text('Something Went Wrong Please Try Again');
}
});
});
</script>
</code></pre>
<p><strong>Views</strong></p>
<pre><code>from django.http import JsonResponse
def get_receiver_info(request):
if request.method == 'POST':
receiver_account_number = request.POST.get('account_number')
try:
receiver_account = Account.objects.get(account_number=receiver_account_number)
receiver_user = receiver_account.user
response_data = {
'name': receiver_user.name,
}
return JsonResponse(response_data)
except Account.DoesNotExist:
return JsonResponse({'error': 'Account not found'}, status=404)
return render (request, 'Profile/transfer.html')
</code></pre>
| <python><jquery><django><django-views> | 2023-09-19 21:01:19 | 1 | 305 | Adamu Abdulkarim Dee |
77,137,891 | 1,471,980 | how do you merge two data frames | <p>I need to merge 2 data frames</p>
<p>df1</p>
<pre><code>Server Model Slot Count40G
server1 Cisco 1 10
server1 Cisco 2 5
server1 Cisco 3 20
server2 IBM 5 10
server2 IBM 8 5
</code></pre>
<p>df2</p>
<pre><code>Server Model Slot Count10G
server1 Cisco 1 5
server1 Cisco 8 10
server1 Cisco 4 1
server2 IBM 5 1
server2 IBM 9 5
</code></pre>
<p>I need merge these 2 data frames. The resulting data frame needs to look something like this:</p>
<pre><code>Server Model Slot Count40G Count10G
server1 Cisco 1 10 5
server1 Cisco 2 5 0
server1 Cisco 3 20 0
server1 Cisco 4 0 1
server1 Cisco 8 0 10
server2 IBM 5 10 1
server2 IBM 8 5 0
server2 IBM 9 0 5
</code></pre>
<p>I need to merge on Server and append each Slot; if Count40G (or Count10G) does not exist for a given Server and Slot, insert 0.</p>
<p>I tried this:</p>
<pre><code>pd.merge(df1, df2, on="Server", how="outer")
</code></pre>
<p>It is not inserting 0. Any ideas?</p>
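<p>A hedged sketch of one way to get the goal frame: merge on all three key columns instead of just <code>Server</code>, then fill the counts missing on either side with 0 (sample data reconstructed from the tables above):</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "Server": ["server1"] * 3 + ["server2"] * 2,
    "Model": ["Cisco"] * 3 + ["IBM"] * 2,
    "Slot": [1, 2, 3, 5, 8],
    "Count40G": [10, 5, 20, 10, 5],
})
df2 = pd.DataFrame({
    "Server": ["server1"] * 3 + ["server2"] * 2,
    "Model": ["Cisco"] * 3 + ["IBM"] * 2,
    "Slot": [1, 8, 4, 5, 9],
    "Count10G": [5, 10, 1, 1, 5],
})

# Outer merge on the full key keeps every Server/Model/Slot combination;
# the counts absent on one side come back as NaN, which fillna turns to 0.
merged = (
    df1.merge(df2, on=["Server", "Model", "Slot"], how="outer")
       .fillna({"Count40G": 0, "Count10G": 0})
       .astype({"Count40G": int, "Count10G": int})
       .sort_values(["Server", "Slot"], ignore_index=True)
)
print(merged)
```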
| <python><pandas> | 2023-09-19 20:45:41 | 3 | 10,714 | user1471980 |
77,137,715 | 22,326,950 | ValueError: The truth value of a DataFrame is ambiguous when trying to plot DataFrame with Datetime index | <p>I want to plot stacked bars using plotly express. I load the data from a .csv file, but for simplicity I have replicated some sample data in the code snippet below:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import plotly.express as px
d = {'Date': ['2020-09-17', '2020-09-18', '2020-09-20', '2020-09-21', '2020-10-17', '2020-10-18', '2020-10-20', '2020-10-21'],
'Amount (EUR)': [-25.93, -18.57, -53.07, -166.50, -2.93, -15.57, -11.07, -80.50],
'Category': ['Food', 'Car', 'Food', 'House', 'Car', 'Food', 'Food', 'Car']}
df = pd.DataFrame(d)
df['Date'] = pd.to_datetime(df['Date'])
df.set_index('Date', inplace=True)
df = df.groupby([df.index.to_period('M'), 'Category']).agg({'Amount (EUR)':"sum"}).unstack()
fig = px.bar(df,
x=df.index,
y='Amount (EUR)',
barmode='stack',
template='plotly_dark',
color_discrete_sequence=px.colors.qualitative.T10,
title='Amount by category per month')
fig.show()
</code></pre>
<p>I receive the following error:</p>
<pre class="lang-none prettyprint-override"><code>ValueError: The truth value of a DataFrame is ambiguous.
</code></pre>
<p>Even after searching this site and others for several hours now, I do not understand this error message in this context. What do I have to do to get the figure?</p>
<p><strong>EDIT</strong> full traceback:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_5280/3267266995.py in ?()
1 # darstellung Kategorie-Stapelsäulen pro Monat
----> 2 fig = px.bar(df,
3 x=df.index,
4 y='Amount (EUR)',
5 barmode='stack',
~/Dokumente/jupyter/env/lib/python3.10/site-packages/plotly/express/_chart_types.py in ?(data_frame, x, y, color, pattern_shape, facet_row, facet_col, facet_col_wrap, facet_row_spacing, facet_col_spacing, hover_name, hover_data, custom_data, text, base, error_x, error_x_minus, error_y, error_y_minus, animation_frame, animation_group, category_orders, labels, color_discrete_sequence, color_discrete_map, color_continuous_scale, pattern_shape_sequence, pattern_shape_map, range_color, color_continuous_midpoint, opacity, orientation, barmode, log_x, log_y, range_x, range_y, text_auto, title, template, width, height)
369 """
370 In a bar plot, each row of `data_frame` is represented as a rectangular
371 mark.
372 """
--> 373 return make_figure(
374 args=locals(),
375 constructor=go.Bar,
376 trace_patch=dict(textposition="auto"),
~/Dokumente/jupyter/env/lib/python3.10/site-packages/plotly/express/_core.py in ?(args, constructor, trace_patch, layout_patch)
2072 trace_patch = trace_patch or {}
2073 layout_patch = layout_patch or {}
2074 apply_default_cascade(args)
2075
-> 2076 args = build_dataframe(args, constructor)
...
1520 f"The truth value of a {type(self).__name__} is ambiguous. "
1521 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
1522 )
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p><strong>EDIT 2</strong> This is what I am looking for (done with <code>df.plot(kind='bar', stacked=True)</code>) but done with plotly:
<a href="https://i.sstatic.net/Sz8vv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Sz8vv.png" alt="Stacked bar plot" /></a></p>
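<p>For reference, a pandas-only sketch of what the grouped frame looks like at this point: after <code>.unstack()</code> the columns are a MultiIndex and the index is a PeriodIndex, which is likely what plotly express trips over; flattening both is one hedged workaround:</p>

```python
import pandas as pd

d = {'Date': ['2020-09-17', '2020-09-18', '2020-09-20', '2020-09-21',
              '2020-10-17', '2020-10-18', '2020-10-20', '2020-10-21'],
     'Amount (EUR)': [-25.93, -18.57, -53.07, -166.50,
                      -2.93, -15.57, -11.07, -80.50],
     'Category': ['Food', 'Car', 'Food', 'House', 'Car', 'Food', 'Food', 'Car']}
df = pd.DataFrame(d)
df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index('Date')
df = df.groupby([df.index.to_period('M'), 'Category']).agg({'Amount (EUR)': 'sum'}).unstack()

# The columns are now a MultiIndex like ('Amount (EUR)', 'Car'); drop the
# top level and stringify the PeriodIndex before handing the frame to plotly.
df.columns = df.columns.droplevel(0)
df.index = df.index.astype(str)
print(df)
# px.bar(df, x=df.index, y=df.columns, barmode='stack') should then accept
# the frame as wide-form data (one trace per category column).
```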
| <python><pandas><plotly> | 2023-09-19 20:16:09 | 1 | 884 | Jan_B |
77,137,684 | 452,587 | Best way to refer to spreadsheet data by header name in Python? | <p>I have this data that is coming from a spreadsheet:</p>
<pre class="lang-py prettyprint-override"><code>data = [
['Column A', 'Column B', 'Column C'],
['Value A2', 'Value B2', 'Value C2'],
['Value A3', 'Value B3', 'Value C3'],
['Value A4', 'Value B4', 'Value C4'],
...
]
</code></pre>
<p>The data has about 15 columns and grows to hundreds of rows, but never thousands. When looping through this table in Python, what's the most efficient way to refer to a cell by its column header name?</p>
<p>This is my current solution, but I'm wondering if there's a better way to do this, for example using native Python 3 methods without having to import an external module like pandas:</p>
<pre class="lang-py prettyprint-override"><code># Convert data to dataframe
df = pd.DataFrame(data, columns=data[0])
# Strip out header row
df = df.iloc[1:]
# Loop through rows
for index, row in df.iterrows():
print(row['Column A'], row['Column C'])
</code></pre>
<p>(I've read it's a <a href="https://stackoverflow.com/a/55557758/452587">no-no</a> to iterate over dataframes, but since I'm not dealing with such a big dataset I value readability over other more exotic solutions.)</p>
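<p>A stdlib-only sketch of the same header-lookup idea, assuming the data really is a plain list of lists as shown (<code>csv.DictReader</code> gives an equivalent result when reading the file directly):</p>

```python
data = [
    ['Column A', 'Column B', 'Column C'],
    ['Value A2', 'Value B2', 'Value C2'],
    ['Value A3', 'Value B3', 'Value C3'],
]

# Split the header off and zip it against every data row, giving one
# dict per row keyed by the header names.
header, *rows = data
records = [dict(zip(header, row)) for row in rows]

for row in records:
    print(row['Column A'], row['Column C'])
```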
| <python><pandas><excel><dataframe> | 2023-09-19 20:10:40 | 1 | 19,198 | thdoan |
77,137,564 | 10,007,302 | Understanding [Errno13] permission denied when trying to access local copies of sharepoint Excel files via Python | <p>I'm struggling to figure out what is causing periodic permission-denied errors when trying to work with local SharePoint files. This is just one example of my code, which essentially uses Excel as my team's front end for uploading to a database.</p>
<p>VBA code will call Python scripts that rely on <code>openpyxl</code> and <code>xlwings</code> to extract data from the Excel file and output data back to it. There is one template that will get saved by different members of the team, so the file location and file name will always be different.</p>
<p>Sometimes, the code runs without problems. Other times, I'll get the permission error. If the file is closed, it always runs. I thought it could have to do with syncing, but if the file is giving me the error, even if I pause OneDrive syncing, I'll still get the same error. Usually requires closing and reopening. I've also incorporated <a href="https://stackoverflow.com/a/72736924/10007302">this solution</a> into VBA, which will pull the local name and not the SharePoint address.</p>
<p>What am I missing and is there a more elegant solution that would be more reliable once rolled out to the rest of my team?</p>
<p>Here is one example of the Python code that may need to access the file.</p>
<pre><code>def data_frame_from_xlsx_range(xlsx_file, range_name, date_columns=None):
""" Get a single rectangular region from the specified file.
range_name can be a standard Excel reference ('Sheet1!A2:B7') or
refer to a named region ('my_cells')."""
wb = openpyxl.load_workbook(xlsx_file, data_only=True, read_only=True)
if '!' in range_name:
# passed a worksheet!cell reference
ws_name, reg = range_name.split('!')
if ws_name.startswith("'") and ws_name.endswith("'"):
# optionally strip single quotes around sheet name
ws_name = ws_name[1:-1]
region = wb[ws_name][reg]
else:
# passed a named range; find the cells in the workbook
full_range = wb.get_named_range(range_name)
if full_range is None:
raise ValueError(
'Range "{}" not found in workbook "{}".'.format(range_name, xlsx_file)
)
# convert to list (openpyxl 2.3 returns a list but 2.4+ returns a generator)
destinations = list(full_range.destinations)
if len(destinations) > 1:
raise ValueError(
'Range "{}" in workbook "{}" contains more than one region.'
.format(range_name, xlsx_file)
)
ws, reg = destinations[0]
# convert to worksheet object (openpyxl 2.3 returns a worksheet object
# but 2.4+ returns the name of a worksheet)
if isinstance(ws, str):
ws = wb[ws]
region = ws[reg]
# catch a single-cell range (untested):
# if not isinstance(region, 'tuple'): df = pd.DataFrame(region.value)
df = pd.DataFrame([cell.value for cell in row] for row in region)
df.columns = df.iloc[0]
df = df[1:]
df.columns = [col.replace(' ','_') for col in df.columns]
# drop any empty rows at the end of the data
df.dropna(thresh=1, inplace=True)
last_row_with_data = df[df.notna().any(axis=1)].index[-1]
df.drop(index=df.index[last_row_with_data + 1:], inplace=True)
def custom_date_parser(val):
try:
return pd.to_datetime(val, origin='1899-12-30', unit='D')
except:
return val
if date_columns:
for col in date_columns:
if col in df.columns:
df[col] = df[col].apply(custom_date_parser)
return df
</code></pre>
<p>Here is the full error traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\waly\PycharmProjects\pythonProject\db_execution.py", line 185, in <module>
match_names_to_db(fileloc, user, host, db )
File "C:\Users\waly\PycharmProjects\pythonProject\db_execution.py", line 50, in match_names_to_db
tracker_names = data_frame_from_xlsx_range(fileloc, 'tracker_names_to_match')
File "C:\Users\waly\PycharmProjects\pythonProject\sqlupdate.py", line 15, in data_frame_from_xlsx_range
wb = openpyxl.load_workbook(xlsx_file, data_only=True, read_only=True)
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\openpyxl\reader\excel.py", line 315, in load_workbook
reader = ExcelReader(filename, read_only, keep_vba,
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\openpyxl\reader\excel.py", line 124, in __init__
self.archive = _validate_archive(fn)
File "C:\Users\waly\PycharmProjects\pythonProject\venv\lib\site-packages\openpyxl\reader\excel.py", line 96, in _validate_archive
archive = ZipFile(filename, 'r')
File "C:\Users\waly\AppData\Local\Programs\Python\Python39\lib\zipfile.py", line 1248, in __init__
self.fp = io.open(file, filemode)
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\waly\\OneDrive - Global\\Documents\\National Corporation/Tracker Template_l_Group.xlsm'
</code></pre>
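<p>As a stopgap rather than a root-cause fix, a generic retry wrapper around the failing call can sometimes get past a transient OneDrive lock; a sketch (the helper name is mine, not part of any library):</p>

```python
import time

def call_with_retry(fn, attempts=5, delay=2.0):
    """Call fn(), retrying while it raises PermissionError
    (e.g. while OneDrive briefly holds a lock on the file)."""
    for attempt in range(attempts):
        try:
            return fn()
        except PermissionError:
            if attempt == attempts - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(delay)

# Usage with the openpyxl call from the question:
# wb = call_with_retry(
#     lambda: openpyxl.load_workbook(xlsx_file, data_only=True, read_only=True)
# )
```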
| <python><excel><sharepoint> | 2023-09-19 19:47:43 | 0 | 1,281 | novawaly |
77,137,537 | 1,874,170 | When does using floating point operations to calculate ceiling-integer-log2 fail? | <p>I'm curious what the first input is which differentiates these two functions:</p>
<pre class="lang-py prettyprint-override"><code>from math import *
def ilog2_ceil_alt(i: int) -> int:
# DO NOT USE THIS FUNCTION IT'S WRONG
return ceil(log2(i))
def ilog2_ceil(i: int) -> int:
# Correct
if i <= 0:
raise ValueError("math domain error")
return (i-1).bit_length()
...
</code></pre>
<p>Obviously, the first one is going to fail for some inputs due to rounding/truncation errors when cramming (the log of) an unlimited-sized integer through a finite <code>double</code> in the pipeline— however, I tried running this test code for a few minutes, and it didn't <em>find</em> a problem:</p>
<pre><code>...
def test(i):
if ilog2_ceil(i) != ilog2_ceil_alt(i):
return i
def main(start=1):
import multiprocessing, itertools
p = multiprocessing.Pool()
it = p.imap_unordered(test, itertools.count(start), 100)
return next(i for i in it if i is not None)
if __name__ == '__main__':
i = main()
print("Failing case:", i)
</code></pre>
<p>I tried testing assorted large values like <code>2**32</code> and <code>2**64 + 9999</code>, but it didn't fail on these.</p>
<p>What is the smallest (positive) integer for which the "alt" function fails?</p>
| <python><math><floating-point><rounding-error> | 2023-09-19 19:42:23 | 1 | 1,117 | JamesTheAwesomeDude |
77,137,398 | 688,071 | Cachetools with multiprocessing | <p>I have an application that currently spins up multiple Processes in <code>main.py</code>, and each of those processes instantiates a class object at startup, solely to have that class available to its <code>run()</code> function.</p>
<p>This design has 2 big problems:</p>
<ol>
<li>It's very non-DRY</li>
<li>Each process is instantiating the Object (which also equals an API hit to determine if the auth token needs to be refreshed)</li>
</ol>
<p>Now I'm trying to minimize the API hits and figured I'd look into caching the object so it only hits the API once per hour.</p>
<p>My initial thought was to use <code>cachetools.TTLCache</code> and <code>Multiprocessing.Manager.Queue</code>, but I can't honestly tell if this is going to work.</p>
<p>Here is the gist of the project:</p>
<pre><code>main.py
--------
from multiprocessing import Process
if __name__ == "__main__":
p1 = Process(target=process1.run)
p2 = Process(target=process2.run)
p3 = Process(target=process3.run)
p4 = Process(target=process4.run)
p1.start()
p2.start()
p3.start()
p4.start()
p1.join()
p2.join()
p3.join()
p4.join()
</code></pre>
<p>Each 'process' is its own module that does its own thing, but they all have the same basic structure:</p>
<pre><code>process1.py/process2.py/process3.py/process4.py
--------
import TClient as client
c = client.TClient()
def run():
data = c.get_info_from_api()
....
</code></pre>
<p>Lastly, the object class itself:</p>
<pre><code>client.py
------
class TClient():
def __init__(self):
self.client = self._make_client()
def _make_client(self):
'''
Creates the client needed to perform actions via the API
'''
try:
c = auth.client_from_token_file(token_path, api_key, enforce_enums=False)
except FileNotFoundError:
c = auth.client_from_manual_flow(api_key, redirect_uri, token_path, enforce_enums=False)
return c
</code></pre>
<p>So what I'm hoping to do is consolidate (put into shared memory?) the <code>TClient()</code> object so that each process isn't instantiating it. Instead, it would be managed through its own process, which would resolve both the non-DRY issue and the excess API hits.</p>
<p>The reason I am also looking into cachetools is that <code>client.TClient()</code> does have to be called every once in a while to ensure the auth token from the API is still valid (or needs to be refreshed). For that, I believe I'd be making a new function and Process to cache the client:</p>
<pre><code>new_process.py
----
from cachetools import cached, TTLCache
@cached(cache = TTLCache(maxsize=1, ttl=3600))
def checkClient():
c = client.Create_Or_Refresh()
return c
</code></pre>
<p>Is this possible with the libraries that I'm looking at? If so, is there a working example out there to reference? All the documentation that I've been reading through talks about passing things between processes, but the example code only has variables being passed to 1 process from main.py, not from 1 to many processes.</p>
<p>Thanks in advance!</p>
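<p>One caveat worth noting: a <code>cachetools</code> decorator caches per process, so each worker would still end up with its own copy unless the cache lives in a single dedicated process. A stdlib-only sketch of the single-value TTL idea (class and names are illustrative, not a drop-in replacement for cachetools):</p>

```python
import time

class TTLValue:
    """Minimal single-value time-to-live cache (stdlib only).

    factory is called at most once per ttl window. Like a @cached
    decorator, this cache lives inside one process only; sharing it
    across processes would still require a Manager or similar.
    """
    def __init__(self, factory, ttl):
        self.factory = factory
        self.ttl = ttl
        self._value = None
        self._expires = 0.0  # monotonic timestamp when the value goes stale

    def get(self):
        now = time.monotonic()
        if now >= self._expires:
            self._value = self.factory()       # refresh the cached value
            self._expires = now + self.ttl
        return self._value
```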
| <python> | 2023-09-19 19:20:19 | 0 | 2,512 | Godzilla74 |
77,137,378 | 2,382,483 | Numpy add.at not working with masked array | <p>I'm trying to sum some numbers in a masked array using <code>np.add.at</code> as follows:</p>
<pre><code>acc = np.ma.zeros((1,), dtype=np.float64)
arr = np.ma.masked_array([1.,2.,3.,4.,5.], mask=[0, 1, 0, 0, 0])
np.add.at(acc, [0,0,0,0,0], arr)
assert acc[0] == 13
</code></pre>
<p>This fails however. <code>add.at</code> appears to ignore the masking, and <code>acc[0]</code> equals 15, not 13 as expected. Should this work as I'm expecting? Is there an <code>add.at</code> equivalent in <code>numpy.ma</code> I should be using?</p>
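<p>A hedged workaround sketch: since <code>ufunc.at</code> ignores the mask, drop the masked entries (and their target indices) before the call:</p>

```python
import numpy as np

acc = np.zeros(1)  # plain accumulator; add.at sums raw data either way
arr = np.ma.masked_array([1., 2., 3., 4., 5.], mask=[0, 1, 0, 0, 0])
idx = np.array([0, 0, 0, 0, 0])

# Keep only the unmasked values and their corresponding target indices.
keep = ~np.ma.getmaskarray(arr)
np.add.at(acc, idx[keep], arr.compressed())
```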
| <python><numpy><masked-array> | 2023-09-19 19:16:53 | 0 | 3,557 | Rob Allsopp |
77,137,232 | 6,626,632 | Identifying source of and understanding OpenBLAS and OpenMP warnings | <p>I am developing a deep learning model using <code>pytorch</code>, <code>pytorch-lightning</code>, and <code>segmentation-models-pytorch</code>. When I run <code>pytorch_lightning.Trainer.fit()</code>, I get hundreds of the following warning:</p>
<pre><code>OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option.
</code></pre>
<p>I have the following questions:</p>
<ol>
<li>How can I identify what part of my code or the source code is raising this warning?</li>
<li>How can I assess whether this warning is relevant or can be ignored?</li>
<li>If I decide I can ignore the warning, how can I suppress the warning?</li>
</ol>
<p>I am familiar with handling warnings through python's <code>warnings</code> module. However, this doesn't help here because the warning is coming from <code>OpenBLAS</code>, which is not a python library.</p>
<p>There are several other questions about how to fix the problem that causes this warning, e.g. <a href="https://stackoverflow.com/questions/34578526/how-to-make-openblas-work-with-openmp">here</a> and <a href="https://github.com/OpenMathLib/OpenBLAS/issues/2197" rel="nofollow noreferrer">here</a>. My question is about understanding the source of the warning, deciding whether I care about it, and suppressing it if I don't care.</p>
<p>Thanks in advance for any tips or answers to the above questions. Apologies if these are silly or poorly formulated questions, as I am completely unfamiliar with <code>OpenBLAS</code> and <code>OpenMP</code>.</p>
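<p>On the mitigation side, one commonly suggested workaround (rather than a pure suppression) is to pin OpenBLAS to a single thread before the numerical libraries are first imported, so it does not spawn its own threads inside OpenMP regions; a sketch, with no guarantee it applies to every build:</p>

```python
import os

# These must be set before numpy/torch (and hence OpenBLAS) are first
# imported, e.g. at the very top of the entry-point script. Direct
# assignment overrides any value inherited from the shell.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

# import torch  # only after the environment is configured
```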
| <python><openmp><warnings><pytorch-lightning><openblas> | 2023-09-19 18:50:37 | 2 | 462 | sdg |
77,137,225 | 3,616,293 | Manual optimization (Lightning) prevents saving checkpoints | <p>I am referring to the official documentation for <a href="https://lightning.ai/docs/pytorch/stable/model/manual_optimization.html" rel="nofollow noreferrer">manual optimization</a>. To this end, I am trying to implement this using a custom learning-rate scheduler: linear warmup followed by step-decays. The code for this is:</p>
<pre><code># Get MNIST data-
path_to_files = "/home/amajumdar/Downloads/.data/"
batch_size = 512
train_dataset, test_dataset, train_loader, test_loader = mnist_dataset(path_to_files, batch_size = batch_size)
"""
Train model with custom learning-rate scheduler
Training dataset = 60000, batch size = 512, number of training steps/iterations per epoch = 60000 / 512 = 117.1875 = 117
After an initial linear learning rate warmup of 13 epochs or 1523 (13 * 117.1875 = 1523.4375) training steps:
1. For the next 7 epochs, or, until 20th epoch (20 * 117.1875 = 2343.75), use lr = 0.1.
2. For the next 5 epochs, or, until 25th epoch (25 * 117.1875 = 2929.6875), use lr = 0.01.
3. For remaining epochs, use lr = 0.001.
"""
boundaries = [2344, 2930]
values = [0.1, 0.01, 0.001]
def decay_function(
step, boundaries = [2344, 2930],
values = [0.1, 0.01, 0.001]
):
for idx, bound in enumerate(boundaries):
if step < bound:
return values[idx]
return values[-1]
class schedule():
def __init__(self, initial_learning_rate = 0.1, warmup_steps = 1000, decay_func = None):
self.initial_learning_rate = initial_learning_rate
self.warmup_steps = warmup_steps
self.decay_func = decay_func
self.warmup_step_size = initial_learning_rate/warmup_steps
self.current_lr = 0
def get_lr(self, step):
if step == 0:
return self.current_lr
elif step <= self.warmup_steps:
self.current_lr+= self.warmup_step_size
return self.current_lr
elif step > self.warmup_steps:
if self.decay_func:
return self.decay_func(step)
else:
return self.current_lr
# Initial linear LR warmup: 13 x 117.1875 = 1523.4375 in 13 epochs.
custom_lr_scheduler = schedule(
initial_learning_rate = 0.1, warmup_steps = 1523,
decay_func = decay_function
)
step = 0
# Define LightningModule-
class LeNet5_MNIST(pl.LightningModule):
def __init__(self, beta = 1.0):
super().__init__()
# Initialize an instance of LeNet-5 CNN architecture-
self.model = LeNet5(beta = beta)
# Apply weights initialization-
self.model.apply(init_weights)
# Important: This property activates manual optimization-
self.automatic_optimization = False
def compute_loss(self, batch):
x, y = batch
pred = self.model(x)
loss = F.cross_entropy(pred, y)
return loss
def validation_step(self, batch, batch_idx):
# Validation loop.
x_t, y_t = batch
out_t = self.model(x_t)
loss_t = F.cross_entropy(out_t, y_t)
running_corrects = 0.0
_, predicted_t = torch.max(out_t, 1)
running_corrects = torch.sum(predicted_t == y_t.data)
val_acc = (running_corrects.double() / len(y_t)) * 100
self.log('val_loss', loss_t)
self.log('val_acc', val_acc)
return {'loss_val': loss_t, 'val_acc': val_acc}
def training_step(self, batch, batch_idx):
'''
TIP:
Be careful where you call 'optimizer.zero_grad()', or your model won’t converge. It is
good practice to call 'optimizer.zero_grad()' before 'self.manual_backward(loss)'.
'''
opt = self.optimizers()
opt = opt.optimizer
opt.zero_grad()
# training_step() defines the training loop.
# It's independent of forward().
x, y = batch
pred = self.model(x)
loss = F.cross_entropy(pred, y)
# loss = self.compute_loss(batch)
self.manual_backward(loss)
opt.step()
# Use custom learning-rate scheduler-
global step
opt.param_groups[0]['lr'] = custom_lr_scheduler.get_lr(step)
step += 1
running_corrects = 0.0
_, predicted = torch.max(pred, 1)
running_corrects = torch.sum(predicted == y.data)
train_acc = (running_corrects.double() / len(y)) * 100
# log to Tensorboard (if installed) by default-
self.log('train_loss', loss, on_step = False, on_epoch = True)
self.log('train_acc', train_acc, on_step = False, on_epoch = True)
return {'loss': loss, 'train_acc': train_acc}
def on_after_backward(self):
# example to inspect gradient information in tensorboard-
# don't make the tf file huge
if self.trainer.global_step % 25 == 0:
for layer_name, param in self.named_parameters():
grad = param.grad
self.logger.experiment.add_histogram(
tag = layer_name, values = grad,
global_step = self.trainer.global_step
)
def configure_optimizers(self):
# optimizer = optim.Adam(params = self.parameters(), lr = 1e-3)
optimizer = torch.optim.SGD(
params = self.parameters(), lr = 0.0,
momentum = 0.9, weight_decay = 5e-4
)
return optimizer
model_cnn = LeNet5_MNIST(beta = 1.0)
# Checkpointing is enabled by default to the current working directory. To change the checkpoint
# path pass in-
path_to_ckpt = "/home/amajumdar/Documents/Codes/PyTorch_Lightning/checkpoints/"
# To modify the behavior of checkpointing pass in your own callback-
# DEFAULTS used by the Trainer
checkpoint_callback = ModelCheckpoint(
dirpath = os.getcwd(),
# filename = f'LeNet5_{epoch}-{val_acc:.2f}',
filename = 'LeNet5-mnist-{epoch:02d}-{val_acc:.2f}',
save_top_k = 1, verbose = True,
monitor = 'val_acc', mode = 'max',
# save_weights_only (bool) – if True, then only the model’s weights will be saved.
# Otherwise, the optimizer states, lr-scheduler states, etc are added in the checkpoint too.
save_weights_only = False
)
# Learning-rate monitoring-
lr_monitor = LearningRateMonitor(logging_interval = 'step')
# trainer = Trainer(callbacks = [lr_monitor])
# Train the model-
trainer = pl.Trainer(
accelerator = 'cpu',
limit_train_batches = 1.0, limit_val_batches = 1.0,
max_epochs = 35, default_root_dir = path_to_ckpt,
callbacks = [checkpoint_callback, lr_monitor]
)
</code></pre>
<p><strong>The manual optimization does not seem to save any checkpoint!!</strong> I found a similar open question <a href="https://lightning.ai/forums/t/unable-to-save-checkpoints-when-use-manual-optimization/3594" rel="nofollow noreferrer">here</a>.</p>
<p>Help!</p>
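<p>As an aside, the schedule arithmetic itself can be sanity-checked in isolation; a standalone sketch of the same warmup + step-decay logic, using the numbers from the docstring in the question:</p>

```python
def lr_at(step, warmup_steps=1523, peak=0.1,
          boundaries=(2344, 2930), values=(0.1, 0.01, 0.001)):
    """Linear warmup to `peak` over warmup_steps, then step decay."""
    if step <= warmup_steps:
        return peak * step / warmup_steps
    for bound, value in zip(boundaries, values):
        if step < bound:
            return value
    return values[-1]

# Spot checks against the epoch math in the question (~117 steps/epoch):
assert abs(lr_at(1523) - 0.1) < 1e-12   # end of 13-epoch warmup
assert lr_at(2000) == 0.1               # epochs 14-20
assert lr_at(2500) == 0.01              # epochs 21-25
assert lr_at(3000) == 0.001             # afterwards
```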
| <python><pytorch><pytorch-lightning> | 2023-09-19 18:49:53 | 1 | 2,518 | Arun |