| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,629,917
| 1,914,781
|
python3 - print decreasing progress on the same line in win10
|
<p>I would like to print a decreasing number on the same line in Windows 10.</p>
<pre><code>import time

print("start:")
for i in [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]:
    print("%d\r" % i, end='', flush=True)
    time.sleep(1)
print("done")
</code></pre>
<p>Current code output:</p>
<pre><code>start:
done
</code></pre>
<p>The number line is missing...</p>
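Not an authoritative fix, just a sketch of the usual pattern: write the number first and end the line with `\r` so the next iteration overwrites it in place. This works in a real Windows console (cmd/PowerShell) but not in IDLE, which ignores `\r`. The `countdown` helper name and the short demo delay are my own additions for illustration.

```python
import sys
import time

def countdown(n: int, delay: float = 1.0) -> None:
    """Count down from n to 0, redrawing a single line."""
    print("start:")
    for i in range(n, -1, -1):
        # The trailing '\r' returns the cursor to the start of the line,
        # so the next number overwrites the previous one in place.
        sys.stdout.write(f"{i} \r")
        sys.stdout.flush()
        time.sleep(delay)
    print("\ndone")

countdown(3, delay=0.1)  # use delay=1.0 for a real one-second countdown
```

If the numbers still do not appear, it is usually the terminal (not the code): run the script from cmd or PowerShell rather than inside an IDE's output pane.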
|
<python>
|
2023-12-09 02:16:11
| 2
| 9,011
|
lucky1928
|
77,629,866
| 13,849,446
|
Playwright page.pdf() only gets one page
|
<p>I have been trying to convert HTML to PDF. I have tried a lot of tools, but none of them work. Now I am using Playwright; it converts the page to PDF, but it only captures the first screen's view, and the content on the right is trimmed.</p>
<pre><code>import os
import time
import pathlib
from playwright.sync_api import sync_playwright

filePath = os.path.abspath("Lab6.html")
fileUrl = pathlib.Path(filePath).as_uri()
fileUrl = "file://C:/Users/PMYLS/Desktop/Code/ScribdPDF/Lab6.html"

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(fileUrl)
    for i in range(5):  # (the scroll is not working)
        page.mouse.wheel(0, 15000)
        time.sleep(2)
    page.wait_for_load_state('networkidle')
    page.emulate_media(media="screen")
    page.pdf(path="sales_report.pdf")
    browser.close()
</code></pre>
<p>Html View</p>
<p><a href="https://i.sstatic.net/waeb0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/waeb0.png" alt="Html view" /></a></p>
<p>PDF file after running script
<a href="https://i.sstatic.net/j07OD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j07OD.png" alt="pdf view" /></a>
I have tried almost every tool available on the internet. I also used Selenium, but got the same results. I thought the page was not loading properly, so I added waits and manually scrolled the whole page to load the content. All give the same results.</p>
<p>The html I am converting
<a href="https://drive.google.com/file/d/16jEq52iXtAMCg2FDt3VbQN0dCQmdTip_/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/16jEq52iXtAMCg2FDt3VbQN0dCQmdTip_/view?usp=sharing</a></p>
|
<python><pdf><playwright><playwright-python>
|
2023-12-09 01:40:06
| 1
| 1,146
|
farhan jatt
|
77,629,777
| 930,169
|
ModuleNotFoundError trying pyo3 with virtualenv
|
<p>This question is a follow-up question of <a href="https://stackoverflow.com/questions/77626939/pass-polars-dataframe-from-rust-to-python-and-then-return-manipulated-dataframe/77627003#77627003">this</a>.</p>
<pre><code>// main.rs
use polars::prelude::*;
use pyo3::{prelude::*, types::PyModule};
use pyo3_polars::PyDataFrame;

fn main() -> PyResult<()> {
    let code = include_str!("./test.py");
    Python::with_gil(|py| {
        let activators = PyModule::from_code(py, code, "activators.py", "activators")?;
        let df: DataFrame = df!(
            "integer" => &[1, 2, 3, 4, 5],
            "float" => &[4.0, 5.0, 6.0, 7.0, 8.0],
        )
        .unwrap();
        let relu_result: PyDataFrame = activators
            .getattr("test")?
            .call1((PyDataFrame { 0: df },))?
            .extract()?;
        Ok(())
    })
}
</code></pre>
<pre><code># test.py
def test(x):
    import pyarrow
    import sys
    print(sys.executable, sys.path)
    # manipulate dataframe x
    return x
</code></pre>
<pre><code>[dependencies]
pyo3 = { version = "0.20.0", features = ["auto-initialize"] }
polars = "0.35.4"
pyo3-polars = "0.9.0"
</code></pre>
<p>I am trying pyo3 with virtualenv. I have a python env installed under the rust project root named <code>venv</code>.
According to pyo3's <a href="https://pyo3.rs/main/getting_started#virtualenvs" rel="nofollow noreferrer">doc</a>, it supports virtualenv.</p>
<p>But:</p>
<pre><code>$ . venv/bin/activate
$ cargo run
thread 'main' panicked at /Users/jack/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-polars-0.9.0/src/lib.rs:166:44:
pyarrow not installed: PyErr { type: <class 'ModuleNotFoundError'>, value: ModuleNotFoundError("No module named 'pyarrow'"), traceback: None }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</code></pre>
<p>I have installed pyarrow in the local virtual env.</p>
<hr />
<p>Update 1:</p>
<pre><code># test.py
def test(x):
    # import pyarrow
    import sys
    print(sys.executable, sys.path, sys.prefix)
    # manipulate dataframe x
    return x
</code></pre>
<p>With <code>import pyarrow</code> commented out, <code>RUST_BACKTRACE=1 cargo run</code> still complains:</p>
<pre><code>thread 'main' panicked at /Users/jack/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-polars-0.9.0/src/lib.rs:166:44:
pyarrow not installed: PyErr { type: <class 'ModuleNotFoundError'>, value: ModuleNotFoundError("No module named 'pyarrow'"), traceback: None }
stack backtrace:
0: rust_begin_unwind
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/std/src/panicking.rs:597:5
1: core::panicking::panic_fmt
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/panicking.rs:72:14
2: core::result::unwrap_failed
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/result.rs:1652:5
3: core::result::Result<T,E>::expect
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/result.rs:1034:23
4: <pyo3_polars::PySeries as pyo3::conversion::IntoPy<pyo3::instance::Py<pyo3::types::any::PyAny>>>::into_py
at /Users/jack/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-polars-0.9.0/src/lib.rs:166:23
5: <pyo3_polars::PyDataFrame as pyo3::conversion::IntoPy<pyo3::instance::Py<pyo3::types::any::PyAny>>>::into_py::{{closure}}
at /Users/jack/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-polars-0.9.0/src/lib.rs:182:22
6: core::iter::adapters::map::map_fold::{{closure}}
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/iter/adapters/map.rs:84:28
7: <core::slice::iter::Iter<T> as core::iter::traits::iterator::Iterator>::fold
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/slice/iter/macros.rs:232:27
8: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/iter/adapters/map.rs:124:9
9: core::iter::traits::iterator::Iterator::for_each
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/iter/traits/iterator.rs:857:9
10: alloc::vec::Vec<T,A>::extend_trusted
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/vec/mod.rs:2881:17
11: <alloc::vec::Vec<T,A> as alloc::vec::spec_extend::SpecExtend<T,I>>::spec_extend
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/vec/spec_extend.rs:26:9
12: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter_nested::SpecFromIterNested<T,I>>::from_iter
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/vec/spec_from_iter_nested.rs:62:9
13: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/vec/spec_from_iter.rs:33:9
14: <alloc::vec::Vec<T> as core::iter::traits::collect::FromIterator<T>>::from_iter
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/vec/mod.rs:2749:9
15: core::iter::traits::iterator::Iterator::collect
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/iter/traits/iterator.rs:2053:9
16: <pyo3_polars::PyDataFrame as pyo3::conversion::IntoPy<pyo3::instance::Py<pyo3::types::any::PyAny>>>::into_py
at /Users/jack/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-polars-0.9.0/src/lib.rs:178:24
17: pyo3::types::tuple::<impl pyo3::conversion::IntoPy<pyo3::instance::Py<pyo3::types::tuple::PyTuple>> for (T0,)>::into_py
at /Users/jack/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-0.20.0/src/types/tuple.rs:325:37
18: pyo3::types::any::PyAny::call
at /Users/jack/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-0.20.0/src/types/any.rs:513:20
19: pyo3::types::any::PyAny::call1
at /Users/jack/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-0.20.0/src/types/any.rs:587:9
20: hello::main::{{closure}}
at ./src/main.rs:17:40
21: pyo3::marker::Python::with_gil
at /Users/jack/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-0.20.0/src/marker.rs:434:9
22: hello::main
at ./src/main.rs:8:5
23: core::ops::function::FnOnce::call_once
at /rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
</code></pre>
<hr />
<p>Update 2:</p>
<p>It looks like I have to comment out some Rust code to get Python's sys info. After commenting out the <code>PyDataFrame</code>-related code in Rust, I get:</p>
<pre><code>/Users/jack/Workspace/rust/hello/target/debug/hello ['/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python311.zip', '/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11', '/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/lib-dynload', '/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages'] /opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11
</code></pre>
<p>Above is Python's path info. It seems that pyo3 picks this Homebrew Python 3.11 instead of using my local virtual env.</p>
<hr />
<p>Update 3:</p>
<p>I switched to pyenv's virtualenv following <a href="https://github.com/pyenv/pyenv-virtualenv" rel="nofollow noreferrer">this</a>.</p>
<pre><code>def test(x):
    import sys
    print(sys.executable, sys.path, sys.prefix)
    import pyarrow
    # manipulate dataframe x
    return x
</code></pre>
<p>But <code>pyarrow</code> is still missing after successfully installing it into pyenv's virtual env. LOL.</p>
<pre><code>/Users/jack/Workspace/rust/hello/target/debug/hello ['/Users/jack/.pyenv/versions/3.11.7/lib/python311.zip', '/Users/jack/.pyenv/versions/3.11.7/lib/python3.11', '/Users/jack/.pyenv/versions/3.11.7/lib/python3.11/lib-dynload', '/Users/jack/.pyenv/versions/3.11.7/lib/python3.11/site-packages'] /Users/jack/.pyenv/versions/3.11.7
Error: PyErr { type: <class 'ModuleNotFoundError'>, value: ModuleNotFoundError("No module named 'pyarrow'"), traceback: Some(<traceback object at 0x106243a40>) }
</code></pre>
<p>Executing test.py manually is fine:</p>
<pre><code>python ./src/test.py
</code></pre>
<p>It looks like pyo3 can't resolve the dependency.</p>
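For what it's worth, a sketch of one common runtime workaround: with `auto-initialize`, pyo3 starts the interpreter it was configured against at build time and does not "activate" a venv by itself, so `sys.path` ends up without the venv's `site-packages`. One option (the venv path below is an assumption for illustration) is to push that directory onto `sys.path` before anything imports `pyarrow` — in the Rust embedding this could run via `py.run(...)` before loading `test.py`; shown here as plain Python:

```python
# Plain-Python sketch of the path fix the embedded interpreter needs.
import sys

# Hypothetical venv location relative to the Rust project root.
venv_site = "venv/lib/python{}.{}/site-packages".format(*sys.version_info[:2])

if venv_site not in sys.path:
    # After this, `import pyarrow` can find the copy installed in the venv.
    sys.path.insert(0, venv_site)
```

The build-time alternative is to point pyo3 at the venv interpreter explicitly (e.g. set `PYO3_PYTHON` to the venv's `python` and rebuild from a clean target), so the embedded interpreter's own prefix matches the venv.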
|
<python><rust>
|
2023-12-09 00:37:23
| 3
| 2,941
|
JACK M
|
77,629,738
| 19,675,781
|
YAML read filenames stored across multiple variables
|
<p>I have YAML data that contains file paths of hundreds of experiments results over a long time.<br/>
The results are stored across different directories.<br/>
All the results share the same base directory.<br/>
Due to this I wanted to create a common variable for root directory and use the root directory variable across the file paths to avoid repeating it.</p>
<p>My YAML DATA:
DEMO.yaml</p>
<pre><code>define: &root '/Users/SAL/Documents/Projects/FORD_CELLS/'
test1 : *root+'test1/result.csv'
test2 : *root+'test2/result.csv'
</code></pre>
<p>I want to read the YAML variables in python like this:
READ IN PYTHON:</p>
<pre><code>import yaml
import pandas as pd
exp_info = yaml.safe_load(open('DEMO.yaml'))
exp_info['test2']
</code></pre>
<p>When I try to read the test results, I am facing a <code>ScannerError</code>:</p>
<pre><code>ScannerError: while scanning an alias
expected alphabetic or numeric character, but found '+'
</code></pre>
<p>I tried different combinations, but none of them worked.
Can anyone help me tackle this situation?</p>
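For reference, a sketch of the usual workaround: YAML aliases are node references, not strings, so there is no in-YAML `+` concatenation. Store the root once and join after loading — the `tests` sub-mapping below is my own restructuring of the file, not the original layout.

```python
import yaml

doc = """
root: /Users/SAL/Documents/Projects/FORD_CELLS/
tests:
  test1: test1/result.csv
  test2: test2/result.csv
"""

info = yaml.safe_load(doc)
# The root already ends with '/', so plain concatenation builds full paths.
paths = {name: info["root"] + rel for name, rel in info["tests"].items()}
print(paths["test2"])  # /Users/SAL/Documents/Projects/FORD_CELLS/test2/result.csv
```

If the file layout must stay flat, the same join can be done per key after `safe_load`; the point is that the concatenation happens in Python, not in YAML.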
|
<python><path><yaml><markup>
|
2023-12-09 00:20:22
| 1
| 357
|
Yash
|
77,629,679
| 7,556,397
|
Optimizing Strawberry GraphQL API Performance: How to bypass object Instantiation for Pre-Formatted JSON from PostgreSQL?
|
<p>I'm developing a GraphQL API using Strawberry and FastAPI, where I directly extract and shape data from a PostgreSQL database into JSON, formatted as per the GraphQL schema.</p>
<p>The data extraction is performed with SQL queries that utilize the selected fields as well as PostgreSQL's JSON capabilities, allowing the data to be shaped exactly as needed for the GraphQL response.</p>
<p>My goal now is to bypass the Python object validation in Strawberry for this pre-formatted JSON to improve performance.</p>
<p>In my current setup, I have various GraphQL types defined in Strawberry, resembling the following:</p>
<pre><code>import strawberry

@strawberry.type
class Player:
    name: str
    age: int
    # ... more fields ...

@strawberry.type
class Team:
    city: str
    players: list[Player]
</code></pre>
<p>I have resolvers that are supposed to return instances of these types. However, given that the data retrieved from PostgreSQL is already structured appropriately (thanks to SQL's JSON shaping features), I am looking for a way to bypass the conversion and validation of these JSON objects into Strawberry instances.</p>
<p>Example resolver structure:</p>
<pre class="lang-py prettyprint-override"><code>@strawberry.type
class Query:
    @strawberry.field
    def teams_with_player(self, info) -> list[Team]:
        formatted_json = query_postgresql_for_formatted_json(info.selected_fields)
        # The above function returns JSON directly in the structure expected by the GraphQL schema
        return formatted_json
</code></pre>
<p>The <code>query_postgresql_for_formatted_json</code> function fetches the JSON data shaped to align with the GraphQL schema and the selected fields.</p>
<p>For instance, with the following query:</p>
<pre><code>query {
teamsWithPlayer {
city
players {
name
}
}
}
</code></pre>
<p>the function parses the selected fields and the database returns the following data:</p>
<pre><code>[
    {
        "city": "Abuja",
        "players": [
            {"name": "Player1"},
            {"name": "Player2"}
        ]
    },
    {
        "city": "Djakarta",
        "players": [
            {"name": "Player3"},
            {"name": "Player4"}
        ]
    }
    // ... more teams ...
]
</code></pre>
<p>How can I return this JSON without instantiating the Strawberry objects?</p>
|
<python><graphql><strawberry-graphql>
|
2023-12-08 23:50:59
| 2
| 1,420
|
Lionel Hamayon
|
77,629,656
| 16,462,878
|
venv is installing packages globally
|
<p>I have fresh compiled version of <em>CPython</em>, 3.12, and wanted to start a new project in a new virtual environment with <code>venv</code>.</p>
<p>I noticed that each time I install a package inside my virtual environment it is <em>not</em> going to be installed inside my project directory but in the global directory <code>/home/USER_NAME/.local/lib/python3.12/site-packages/</code>.</p>
<p>I read many posts on related problems, but I couldn't find anything useful, and they just started to confuse me... an installation inside a virtual environment should be local to the environment itself, right? I already tried deleting the environment and creating a new one, but got the same result; I cannot figure out what's wrong.</p>
<p>I can provide further technical details such as <code>pip debug</code>, configurations or environment variables if needed.</p>
<p>My system is up to date and has different versions (3.9, 3.11, 3.12) of Python.</p>
<hr />
<p>Here some details of the issue</p>
<ol>
<li>Check packages of the global installation</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>$ python -m pip list
Package Version
------- -------
pip 23.3.1
</code></pre>
<ol start="2">
<li>Create new virtual environment in the target directory</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>$ python -m venv ./myenv
</code></pre>
<ol start="3">
<li>Activate the environment</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>$ source ./myenv/bin/activate
</code></pre>
<ol start="4">
<li>installing a package</li>
</ol>
<pre><code>(myenv) $ python -m pip install pypdf
</code></pre>
<ol start="5">
<li>check installation</li>
</ol>
<pre><code>(myenv) $ python -m pip list
Package Version
------- -------
pip 23.3.1
pypdf 3.17.1
</code></pre>
<ol start="6">
<li>exit the virtual environment</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>(myvenv) $ deactivate
</code></pre>
<ol start="7">
<li>Checking again global installation</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>$ python -m pip list
Package Version
------- -------
pip 23.3.1
pypdf 3.17.1 # <-
</code></pre>
<hr />
<p><strong>EDIT</strong></p>
<p>Information about the</p>
<ul>
<li>system</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import platform
uname = platform.uname()
for attr in ['system', 'release', 'version', 'machine']:
    print(f"{attr:<12} {getattr(uname, attr)}")

# Output:
# system       Linux
# release      5.10.0-26-amd64
# version      #1 SMP Debian 5.10.197-1 (2023-09-29)
# machine      x86_64
</code></pre>
<ul>
<li>terminal</li>
</ul>
<pre><code>$ echo $SHELL
/bin/bash
</code></pre>
<ul>
<li>Python version (with the issue)</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>$ python --version
Python 3.12.0
</code></pre>
<hr />
<p>Summary from the comments</p>
<ul>
<li><p><code>$ python -c "import sys; print(sys.executable)"</code>
is pointing to the same executable for both: <code>/home/USER_NAME/PATH/Python-3.12.0/python</code></p>
</li>
<li><p><code>which python</code> is not pointing to the same location for both (the <code>venv</code> one seems OK)</p>
<ul>
<li><code>$ which python</code> -> <code>/usr/bin/python # it's Python 2.7.18</code></li>
<li><code>(myenv) $ which python</code> -> <code>/home/USER_NAME/PATH/myenv/bin/python</code></li>
</ul>
</li>
</ul>
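As a diagnostic sketch (not a fix): in a healthy venv the running interpreter reports a `sys.prefix` different from `sys.base_prefix`, and pip installs into the `purelib` directory under that prefix. Running this both inside and outside the activated environment shows immediately whether the venv's interpreter is really the one executing:

```python
import sys
import sysconfig

in_venv = sys.prefix != sys.base_prefix  # True only inside a working venv
print("executable:", sys.executable)
print("prefix:    ", sys.prefix)
print("in venv:   ", in_venv)
# pip installs packages here for the *current* interpreter:
print("purelib:   ", sysconfig.get_paths()["purelib"])
```

If `in venv` prints `False` right after `source myenv/bin/activate`, then `myenv/bin/python` is not actually being launched (e.g. shadowed by an alias, or the hand-compiled interpreter is not honoring the venv's `pyvenv.cfg`), which would explain packages landing in the user site directory.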
|
<python><bash><python-venv>
|
2023-12-08 23:45:32
| 0
| 5,264
|
cards
|
77,629,600
| 1,107,474
|
Save matplotlib as image so it can be resized later
|
<p>I am creating a graph using Python matplotlib, saving and displaying it:</p>
<pre><code>import matplotlib.pyplot as plt
# Not the real values, I have about 100,000
p = [1,2,3]
t = [1,2,3]
plt.plot(t, p)
ax = plt.gca()
ax.set_xticks(ax.get_xticks()[::10000])
plt.savefig('/path/filename.svg', format='svg')
plt.show()
</code></pre>
<p>I would like to save the image as a vector (without losing data) so that later I can open the image, drag it to be wider etc, like is possible when the code calls <code>.show()</code>.</p>
<p>I thought this would be possible saving as a .svg, but when I open the saved image using the default Ubuntu image viewer, it's a fixed image (all my points squished together on a small graph).</p>
<p>Is it possible to save the graph as an image so the image can resized etc, like when calling <code>show()</code>?</p>
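An SVG is already a vector and scales without pixelation, but it is a frozen layout — the pan/zoom of `plt.show()` belongs to the interactive window, not the file. One hedged workaround is to pickle the `Figure` object and reload it later in an interactive session (the filenames and the `Agg` backend choice below are my own, for a headless demo):

```python
import pickle

import matplotlib
matplotlib.use("Agg")  # headless backend for this self-contained demo
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 4))  # a wider canvas also helps the saved SVG
ax.plot([1, 2, 3], [1, 2, 3])
fig.savefig("plot.svg", format="svg")    # vector output: scales, but fixed layout

# Persist the live Figure so it can be reopened interactively later.
with open("plot.fig.pickle", "wb") as f:
    pickle.dump(fig, f)

# Later / elsewhere: reload and (in an interactive backend) call plt.show().
with open("plot.fig.pickle", "rb") as f:
    fig2 = pickle.load(f)
```

For the squished-ticks symptom specifically, enlarging `figsize` (or the axes limits) before saving changes what the SVG contains; no viewer can add back detail the file never had.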
|
<python><matplotlib>
|
2023-12-08 23:23:14
| 1
| 17,534
|
intrigued_66
|
77,629,566
| 789,750
|
How can I import this csv with some quoted lines into pandas?
|
<p>I have a csv with a structure like this:</p>
<pre><code>project, location, badness
foo, N/A, 0
bar, 'path/to/file:[7,23]', 120
</code></pre>
<p>I want to import this into a Pandas dataframe. When I use <code>pd.read_csv(filename, quotechar="'", sep=".\s+")</code> right now, I get columns like:</p>
<pre><code>project location badness
foo N/A 0
bar 'path/to/file:[7 23]' 120
</code></pre>
<p>with the final dangling column unnamed.</p>
<p>How can I import this in a way that respects the quotes? That is, how can I get the "location" column to have <code>'path/to/file:[7,23]'</code> on the second line?</p>
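A sketch of one way to read it: since the delimiter is really comma-plus-space, keep the default comma separator and let `skipinitialspace=True` absorb the space; `quotechar="'"` then keeps the bracketed path (with its embedded comma) as a single field. The inline `csv_text` stands in for the real file.

```python
import io

import pandas as pd

csv_text = """project, location, badness
foo, N/A, 0
bar, 'path/to/file:[7,23]', 120
"""

# skipinitialspace strips the space after each comma, which also lets the
# parser recognize the opening quote of the quoted field.
df = pd.read_csv(io.StringIO(csv_text), quotechar="'", skipinitialspace=True)
print(df.loc[1, "location"])  # path/to/file:[7,23]
```

The regex `sep=".\s+"` was forcing the slower Python engine and matching the wrong thing; with a plain comma separator the dangling unnamed column also disappears.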
|
<python><pandas><csv><parsing>
|
2023-12-08 23:07:18
| 2
| 12,799
|
Dan
|
77,629,234
| 12,276,279
|
How to get second pandas dataframe showing net trade based on first pandas dataframe containing one-directional trade in Python?
|
<p>I have a pandas dataframe <code>df1</code> as shown below:
It shows export volumes from A to B, B to A, and A to C in three rows. Trade is possible in both directions.
<a href="https://i.sstatic.net/p8qtb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p8qtb.png" alt="enter image description here" /></a></p>
<p><code>df1.to_dict()</code> returns</p>
<blockquote>
<p>{'Country1': {0: 'A', 1: 'B', 2: 'A'}, 'Country2': {0: 'B', 1: 'A',
2: 'C'}, 'Value': {0: 3, 1: 5, 2: 3}}</p>
</blockquote>
<p>I want a second dataframe <code>df2</code> based on <code>df1</code> which shows the net trade volume between countries.
For example, A to C has a net trade volume of 3 units, and B to A has a net trade volume of 2 units (5-3). This needs to be reflected in the second dataframe as shown below:
<a href="https://i.sstatic.net/Lbkdx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lbkdx.png" alt="enter image description here" /></a></p>
<p>How can I automate creating <code>df2</code> based on <code>df1</code>?
I have a large number of countries, so I want to automate this process.</p>
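One hedged way to automate this (a sketch; the column names follow the question's `df1`): canonicalise each pair into an unordered key, sign each flow by its direction, and sum. A negative total just means the net flow runs the other way, so those rows get flipped at the end.

```python
import pandas as pd

df1 = pd.DataFrame({'Country1': ['A', 'B', 'A'],
                    'Country2': ['B', 'A', 'C'],
                    'Value': [3, 5, 3]})

# Unordered pair key: (lo, hi) is the same for A->B and B->A.
pair = df1[['Country1', 'Country2']].apply(sorted, axis=1, result_type='expand')
pair.columns = ['lo', 'hi']

# Flows in the lo->hi direction count positive, the reverse negative.
sign = (df1['Country1'] == pair['lo']).map({True: 1, False: -1})
net = (df1['Value'] * sign).groupby([pair['lo'], pair['hi']]).sum().reset_index(name='Net')

# Flip rows with a negative net so Country1 is always the net exporter.
df2 = pd.DataFrame({
    'Country1': net['lo'].where(net['Net'] >= 0, net['hi']),
    'Country2': net['hi'].where(net['Net'] >= 0, net['lo']),
    'Value': net['Net'].abs(),
})
print(df2)
```

On the sample data this yields B→A with 2 and A→C with 3, matching the expected `df2`; pairs with a zero net can be dropped afterwards with `df2[df2['Value'] > 0]` if desired.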
|
<python><pandas><dataframe><group-by>
|
2023-12-08 21:21:31
| 4
| 1,810
|
hbstha123
|
77,628,661
| 1,234,434
|
How to print out another column after a value_counts in dataframe
|
<p>I am learning pandas and python.</p>
<p>I have this dataframe:</p>
<pre><code>dfsupport = pd.DataFrame({'Date': ['8/12/2020','8/12/2020','13/1/2020','24/5/2020','31/10/2020','11/7/2020','11/7/2020','4/4/2020','1/2/2020'],
                          'Category': ['Table','Chair','Cushion','Table','Chair','Mats','Mats','Large','Large'],
                          'Sales': ['1 table','3chairs','8 cushions','3Tables','12 Chairs','12Mats','4Mats','13 Chairs and 2 Tables', '3 mats, 2 cushions 4@chairs'],
                          'Paid': ['Yes','Yes','Yes','Yes','No','Yes','Yes','No','Yes'],
                          'Amount': ['93.78','$51.99','44.99','38.24','£29.99','29 21 only','18','312.8','63.77']
                          })
</code></pre>
<p>which produces:</p>
<pre><code> Date Category Sales Paid Amount
0 8/12/2020 Table 1 table Yes 93.78
1 8/12/2020 Chair 3chairs Yes 51.99
2 13/1/2020 Cushion 8 cushions Yes 44.99
3 24/5/2020 Table 3Tables Yes 38.24
4 31/10/2020 Chair 12 Chairs No 29.99
5 11/7/2020 Mats 12Mats Yes 29.21
6 11/7/2020 Mats 4Mats Yes 18
7 4/4/2020 Large 13 Chairs and 2 Tables No 312.8
8 1/2/2020 Large 3 mats, 2 cushions 4@chairs Yes 63.77
</code></pre>
<p>I want to find the date with the most sales, so I ran:</p>
<pre><code>print("######\n",dfsupport['Date'].value_counts().max())
</code></pre>
<p>which gives:</p>
<pre><code>2
</code></pre>
<p>What I would now like to do is to unpack that <code>2</code> and find out which dates that was for and also which "Sales" occurred in each of those instances.</p>
<p>I'm stuck and don't know how to print out those columns. Would appreciate some guidance.</p>
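A sketch of the unpacking step: instead of keeping only the max count, keep every date whose count equals it, then filter the frame back to those dates. The frame here is trimmed to the two relevant columns of the question's `dfsupport`.

```python
import pandas as pd

dfsupport = pd.DataFrame({
    'Date': ['8/12/2020', '8/12/2020', '13/1/2020', '24/5/2020', '31/10/2020',
             '11/7/2020', '11/7/2020', '4/4/2020', '1/2/2020'],
    'Sales': ['1 table', '3chairs', '8 cushions', '3Tables', '12 Chairs',
              '12Mats', '4Mats', '13 Chairs and 2 Tables', '3 mats, 2 cushions 4@chairs'],
})

counts = dfsupport['Date'].value_counts()
top_dates = counts[counts == counts.max()].index   # every date tied for the max
busiest = dfsupport[dfsupport['Date'].isin(top_dates)][['Date', 'Sales']]
print(busiest)
```

This also handles ties: on the sample data both 8/12/2020 and 11/7/2020 appear twice, so all four of their rows come back rather than a single arbitrary date.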
|
<python><pandas><dataframe>
|
2023-12-08 19:02:47
| 4
| 1,033
|
Dan
|
77,628,658
| 901,426
|
trouble assigning new values to mulitple object parameters from nested list
|
<p>I'm working up a test to populate multiple objects' various parameters from a list that comes from an sqlite3 table.</p>
<p>This is the code and the errors:</p>
<pre class="lang-py prettyprint-override"><code>class thing(object):
    def __init__(self, data):
        self.name = data[0]
        self.spoot = data[1]
        self.lurmz = data[2]

    def __str__(self):
        output = f'{self.name} data → spoot: {self.spoot}, lurmz: {self.lurmz}'
        return output

blorp_one = thing(['flarn', 750, 110])
blorp_two = thing(['gleep', 500, 70])

print(blorp_one)  # <-- flarn data → spoot: 750, lurmz: 110
print(blorp_two)  # <-- gleep data → spoot: 500, lurmz: 70

# data from sqlite3 db
result = [['blorp_one', 'spoot', 3750], ['blorp_one', 'lurmz', 610],
          ['blorp_two', 'spoot', 1250], ['blorp_two', 'lurmz', 660]]

result[0][0].result[0][1] = result[0][2]
result[1][0].result[1][1] = result[1][2]
result[2][0].result[2][1] = result[2][2]
result[3][0].result[3][1] = result[3][2]
# each one → AttributeError: 'str' object has no attribute 'result'

print(blorp_one)  # <-- desired output: flarn data → spoot: 3750, lurmz: 610
print(blorp_two)  # <-- desired output: gleep data → spoot: 1250, lurmz: 660
</code></pre>
<p>And I've tried these to no avail:</p>
<pre class="lang-py prettyprint-override"><code>tmp1 = eval(f'{result[0][0]}')
tmp1.result[0][1] = result[0][2]  # <-- object has no attribute 'result'

tmp2 = eval(f'{result[0][1]}')
tmp1.tmp2 = result[0][2]  # <-- ?? doesn't error out, but doesn't change the params

eval(f'{result[0][0]}').result[0][1]  # <-- nope. assignment error
</code></pre>
<p>Now I know it's going to be something simple, but I'm not seeing it (maybe because it's Friday?). Can someone kindly re-orient my brain? Thanks.</p>
<p>EDIT: I understand that strings are immutable. But I don't know how to take the stored object NAME, which is a string, and use it to assign new values back to its parameters; in effect, use the string to refer to an object... This seems pretty bog-standard stuff, but for some reason it's just not clicking in my head.</p>
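Since the real need is "use a stored name string to reach an object and set one of its attributes", here is a sketch of the standard pattern: keep the objects in a dict keyed by name (instead of reaching for `eval`) and use `setattr` with the attribute-name string. The class and data mirror the question.

```python
class Thing:
    def __init__(self, data):
        self.name, self.spoot, self.lurmz = data

# A dict maps the *string* names from the database to the actual objects.
things = {
    'blorp_one': Thing(['flarn', 750, 110]),
    'blorp_two': Thing(['gleep', 500, 70]),
}

result = [['blorp_one', 'spoot', 3750], ['blorp_one', 'lurmz', 610],
          ['blorp_two', 'spoot', 1250], ['blorp_two', 'lurmz', 660]]

for obj_name, attr, value in result:
    setattr(things[obj_name], attr, value)  # e.g. things['blorp_one'].spoot = 3750

print(things['blorp_one'].spoot, things['blorp_one'].lurmz)  # 3750 610
```

The original error comes from `result[0][0]` being the string `'blorp_one'`, not the object it names; the dict lookup is what bridges that gap, safely and without `eval`.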
|
<python><list><variable-assignment>
|
2023-12-08 19:00:55
| 1
| 867
|
WhiteRau
|
77,628,455
| 2,681,662
|
mypy unreachable on Guard Clause
|
<p>I have a method where, if the given value's type is not what I expect, I log it and raise an error.</p>
<p>However, <code>mypy</code> is complaining. What am I doing wrong?</p>
<p>Simplified example:</p>
<pre><code>from __future__ import annotations

from typing import Union
from logging import getLogger

class MyClass:
    def __init__(self, value: Union[float, int]) -> None:
        self.logger = getLogger("dummy")
        self.value = value

    def __add__(self, other: Union[MyClass, float, int]) -> MyClass:
        if not isinstance(other, (MyClass, float, int)):
            self.logger.error("Other must be either MyClass, float or int")  # error: Statement is unreachable  [unreachable]
            raise NotImplementedError
        return self.add(other)

    def add(self, other: Union[MyClass, float, int]) -> MyClass:
        if isinstance(other, MyClass):
            return MyClass(self.value + other.value)
        return MyClass(self.value + other)
</code></pre>
<p>Please notice it does not complain when I run it on <a href="https://mypy-play.net/?mypy=latest&python=3.10&gist=9945c3de7d01e9b2428a1f4dd0832739" rel="nofollow noreferrer">mypy-play.net</a> but locally it raises:</p>
<pre><code>main.py:13: error: Statement is unreachable [unreachable]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
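A likely explanation (an assumption — the local config is not shown): the playground's defaults do not enable `warn_unreachable`, but a local config that does will flag this branch, because from the annotation `other: Union[MyClass, float, int]` mypy proves the `isinstance` guard can never fail. A minimal config that would reproduce the diagnostic:

```ini
# mypy.ini - hypothetical local config reproducing the error;
# warn_unreachable flags branches mypy proves impossible from the annotations.
[mypy]
warn_unreachable = True
```

To keep the runtime guard (callers may ignore annotations), either widen the parameter to `object` and narrow inside, or suppress the one line with `# type: ignore[unreachable]`.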
|
<python><mypy>
|
2023-12-08 18:08:31
| 1
| 2,629
|
niaei
|
77,628,411
| 11,163,122
|
How to convert AsyncIterable to asyncio Task
|
<p>I am using Python 3.11.5 with the below code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from collections.abc import AsyncIterable

# Leave this iterable be, the question is about
# how to use many instances of this in parallel
async def iterable() -> AsyncIterable[int]:
    yield 1
    yield 2
    yield 3

# How can one get multiple async iterables to work with asyncio.gather?
# In other words, since asyncio.gather works with asyncio.Task,
# (1) How can one convert an async iterable to a Task?
# (2) How can one use asyncio.gather to run many of these tasks in parallel,
#     keeping the results 1-1 with the source iterables?
results_1, results_2, results_3 = asyncio.gather(iterable(), iterable(), iterable())
</code></pre>
<p>To restate the question, how can one get:</p>
<ul>
<li>An <code>AsyncIterable</code> as an <code>asyncio</code> task, where the task iterates until exhaustion</li>
<li>Run multiple of these tasks in parallel, storing the results on a per-task basis</li>
</ul>
<p>(e.g. for use with <code>asyncio.gather</code>)?</p>
<p>I am looking for a 1 - 3 line snippet showing how to connect these dots.</p>
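A sketch along the requested lines: `asyncio.gather` accepts coroutines (it wraps each in a Task itself), so a tiny `drain` coroutine that exhausts one iterable into a list is the only glue needed; results come back in argument order, 1-1 with the source iterables.

```python
import asyncio
from collections.abc import AsyncIterable

async def iterable() -> AsyncIterable[int]:
    yield 1
    yield 2
    yield 3

async def drain(ait: AsyncIterable[int]) -> list[int]:
    # Exhausts one async iterable; gather turns each call into a Task.
    return [item async for item in ait]

async def main() -> None:
    results_1, results_2, results_3 = await asyncio.gather(
        drain(iterable()), drain(iterable()), drain(iterable()))
    print(results_1, results_2, results_3)  # [1, 2, 3] [1, 2, 3] [1, 2, 3]

asyncio.run(main())
```

Note that `gather` must itself be awaited inside a running event loop (hence `asyncio.run(main())`); calling it at module top level, as in the question, never executes the iterables.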
|
<python><asynchronous><async-await><python-asyncio><python-collections>
|
2023-12-08 17:58:58
| 1
| 2,961
|
Intrastellar Explorer
|
77,628,396
| 3,234,562
|
How to get a graphviz node's position attribute?
|
<p>I'm making a graphviz graph in python, not loading it from a file. All I want to do is have a bunch of nodes arranged by neato, and get the position of each node. However, I can't seem to get the position attribute of the nodes; what's the right way to do this?</p>
<pre><code>import graphviz

def main():
    dot = graphviz.Graph()
    dot.engine = 'neato'
    node_a = dot.node('A', 'Node A')
    node_b = dot.node('B', 'Node B')
    dot.edge('A', 'B')
    dot.edge('B', 'A')
    dot.render('graph', view=True)
    # Neither of these work:
    dot.getv(node_a, 'pos')
    node_a.node_attr['pos']
</code></pre>
<p>I know <code>pos</code> is a valid attribute of nodes, but how do I even get to see it?</p>
|
<python><graphviz>
|
2023-12-08 17:56:08
| 1
| 2,712
|
IronWaffleMan
|
77,628,345
| 190,452
|
Operator overloading using Graalvm Polyglot with Scala and Python
|
<p>I'm playing around with Graalvm's very cool Polyglot feature so I can evaluate Python from a Scala application.</p>
<p>The test REPL code is below. If I run it and enter the following code, I can evaluate <code>v.__add__(1)</code> fine. I get this:</p>
<pre><code>>>> v
evaluating v
evaluated got Value(7)
>>> v.__add__(1)
evaluating v.__add__(1)
evaluated got Value(8)
>>>
</code></pre>
<p>Is there any way to evaluate <code>v + 1</code>?
I get <code>TypeError: unsupported operand type(s) for +: 'foreign' and 'int'</code> when I try.</p>
<p>Here's the test REPL code:</p>
<pre><code>import org.graalvm.polyglot.{Context, Source}

import scala.annotation.targetName
import scala.util.control.NonFatal

case class Value(v: Int) {
  @targetName("__add__")
  def +(i: Int): Value = Value(v + i)
}

class Python(context: Context, v: Value) {
  private val language = "python"
  context.getBindings(language).putMember("v", v)

  def eval(code: String): AnyRef = {
    try {
      val source = Source
        .newBuilder(language, code, "<shell>")
        .interactive(false)
        .buildLiteral()
      println(s"evaluating $code")
      val res = context.eval(source)
      println(s"evaluated got $res")
      res
    } catch {
      case NonFatal(e) =>
        println(s"error evaluating $code")
        e.printStackTrace()
        null
    }
  }
}

object Python {
  def main(args: Array[String]): Unit = {
    val context = Context
      .newBuilder("python")
      .allowAllAccess(true)
      .build()
    val python = new Python(context, Value(7))
    while (true) {
      val line = scala.io.StdIn.readLine(">>> ")
      python.eval(line)
    }
  }
}
</code></pre>
|
<python><scala><graalvm>
|
2023-12-08 17:47:22
| 1
| 1,972
|
David
|
77,628,127
| 4,648,809
|
Transformers cross-entropy loss masked label issue
|
<p>I am trying to compute the cross-entropy loss for text using GPT-2, taking the idea from this article:
<a href="https://huggingface.co/docs/transformers/perplexity" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/perplexity</a></p>
<pre><code>from transformers import GPT2LMHeadModel, GPT2TokenizerFast
import torch
from tqdm import tqdm

model_id = "gpt2-large"
model = GPT2LMHeadModel.from_pretrained(model_id)
tokenizer = GPT2TokenizerFast.from_pretrained(model_id, cache_dir='.')

encodings = tokenizer("She felt his demeanor was sweet and endearing.", return_tensors="pt")
max_length = model.config.n_positions
seq_len = encodings.input_ids.size(1)

target_ids = encodings.input_ids.clone()
# target_ids[:, :-seq_len] = -100  # COMMENTED LINE

with torch.no_grad():
    outputs = model(encodings.input_ids, labels=target_ids)

print(outputs.loss.item())
</code></pre>
<p>Whether that line is commented out or not does not matter at all; I always get 4.352320194244385 as the printed output. According to the documentation <a href="https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/gpt2#transformers.GPT2LMHeadModel" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/gpt2#transformers.GPT2LMHeadModel</a>:</p>
<blockquote>
<p>labels (torch.LongTensor of shape (batch_size, sequence_length),
optional) — Labels for language modeling. Note that the labels are
shifted inside the model, i.e. you can set labels = input_ids. Indices
are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set
to -100 are ignored (masked), the loss is only computed for labels in
[0, ..., config.vocab_size - 1]</p>
</blockquote>
<p>Why do I get the same result?</p>
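The likely reason, sketched with plain tensors (no model needed): when `seq_len` equals the full sequence length, the slice `[:, :-seq_len]` selects zero columns, so uncommenting the line masks nothing and the loss is unchanged. In the perplexity article, that pattern masks the *overlap* of a sliding window, where the slice bound is smaller than the window length.

```python
import torch

target_ids = torch.tensor([[10, 20, 30, 40]])
seq_len = target_ids.size(1)  # 4 — the whole sequence

# [:, :-4] on a length-4 sequence is an empty slice: nothing gets masked.
print(target_ids[:, :-seq_len].shape)

# A mask only has an effect when the slice is non-empty, e.g.:
target_ids[:, :-2] = -100  # ignore the first two positions in the loss
print(target_ids)
```

So the identical 4.352320194244385 is expected: both variants feed the model the same unmasked labels.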
|
<python><huggingface-transformers><gpt-2>
|
2023-12-08 16:59:20
| 2
| 1,031
|
Alex
|
77,627,937
| 2,182,636
|
Pylint Not Rerunning
|
<p>I have what may be a silly question. I'll run pylint on the CLI and it will list the things it doesn't like.</p>
<p>The command I'm using to run pylint is: <code>pylint ./src</code>.</p>
<p>I'll then make whatever changes it suggests and execute the same command. However, it doesn't seem to be rescanning my code. Instead it seems to simply resend the same cached results.</p>
<p>I've never encountered this before and I'm wondering if there is some way to disable this behavior?</p>
|
<python><pylint>
|
2023-12-08 16:23:18
| 0
| 586
|
cgivre
|
77,627,900
| 1,981,484
|
How to setup `pyi_out` path when using with rules-proto-grpc?
|
<p>I'm using rules-proto-grpc's <a href="https://rules-proto-grpc.com/en/latest/lang/python.html#python-grpc-library" rel="nofollow noreferrer"><code>python_grpc_library</code></a> rule to compile python files in my bazel project.</p>
<p>As protoc generated python files use dynamic construction of classes, I want to use the <code>--pyi_out=</code> option to generate python stub file to make the IDE know the typing and do typing check.</p>
<p>However, I'm confused about how to set up the path properly in the rule.</p>
<p>e.g. my bazel like</p>
<pre><code> python_grpc_library(
name = label,
protos = protos,
visibility = visibility,
extra_protoc_args = [
"--pyi_out=??",
],
deps = py_lib_deps,
)
</code></pre>
<p>I tried <code>.</code> and some absolute paths, but neither works. I'm wondering if anyone has had a similar setup before?</p>
<p>When I directly use the protoc command outside of bazel, it works.</p>
|
<python><protocol-buffers><grpc><bazel>
|
2023-12-08 16:16:27
| 0
| 417
|
Crt Tax
|
77,627,869
| 2,064,196
|
Importing a module sets the file's docstring to None
|
<pre><code>"""
This here is a docstring
"""
print(f'Doc=[{__doc__}]')
</code></pre>
<p>prints</p>
<pre><code>Doc=[
This here is a docstring
]
</code></pre>
<p>whereas</p>
<pre><code>import sys
"""
This here is a docstring
"""
print(f'Doc=[{__doc__}]')
</code></pre>
<p>prints</p>
<pre><code>Doc=[None]
</code></pre>
<p>What gives? The module imported doesn't have to be <code>sys</code>, the same happens with <code>re</code> and <code>argparse</code>.</p>
<pre><code>$ python -V
Python 3.10.12
</code></pre>
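The behaviour can be reproduced without any files: a module's `__doc__` is only set when the *first* statement is a string literal. A small sketch (using `exec` of compiled module code, which stores the docstring the same way an import does in CPython):

```python
# first statement is a string literal -> stored as __doc__
ns1 = {}
exec(compile('"""This here is a docstring"""\nx = 1', '<m1>', 'exec'), ns1)
print(ns1.get('__doc__'))   # This here is a docstring

# an import comes first -> the string is an ordinary (discarded) expression
ns2 = {}
exec(compile('import sys\n"""not a docstring"""\nx = 1', '<m2>', 'exec'), ns2)
print(ns2.get('__doc__'))   # None
```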
|
<python>
|
2023-12-08 16:10:37
| 2
| 6,091
|
Bulletmagnet
|
77,627,803
| 5,612,605
|
@pytest.fixture(scope=session) - called multible times im same testrun
|
<p><em>I want to use a pytest fixture that is created at the beginning of my large test run, used by every test in every file and module, and destroyed at the end. However, a new instance of the fixture is created for every file.</em></p>
<p>I would expect <code>my_fixture</code> to be created only once per test run, as its scope is <code>session</code>.</p>
<p>What actually happens is that it is created for each <code>file.py</code> and destroyed at the end of the session/test run. I don't know why.</p>
<p><strong>How can I implement a fixture that is created at the beginning of a session/test run and only destroyed at the end?</strong></p>
<p>Like a serial connection that I'm going to open once per run.</p>
<h3>Setup</h3>
<p><em>Directory setup</em></p>
<pre class="lang-bash prettyprint-override"><code>/test/fixture_1.py
/test/test_1.py
/test/test_2.py
/pytest.ini
</code></pre>
<p>Chronological setup plan. Note that there are SETUP and TEARDOWN steps for every file; I would expect one SETUP and one TEARDOWN per test run, at the beginning and the end.</p>
<pre><code>(venv) PS C:\tmp_test_pytest> pytest -m single_source -s --setup-plan
============================ test session starts ================================
platform win32 -- Python 3.10.4, pytest-7.4.3, pluggy-1.3.0
rootdir: C:\tmp_test_pytest
configfile: pytest.ini
collected 3 items
test\test_1.py
SETUP S my_fixture
test/test_1.py::test_from_other_file_src1 (fixtures used: my_fixture)
test\test_2.py
SETUP S my_fixture
test/test_2.py::test_from_other_file_src21 (fixtures used: my_fixture)
test/test_2.py::test_from_other_file_src22 (fixtures used: my_fixture)
TEARDOWN S my_fixture
TEARDOWN S my_fixture
</code></pre>
<p><em>file: /test/fixture_1.py</em></p>
<pre class="lang-py prettyprint-override"><code>import pytest
import datetime
import time
import sys
@pytest.fixture(scope="session")
def my_fixture():
stamp = datetime.datetime.now().strftime("%H:%M:%S.%f")
time.sleep(5)
print("huhu", file=sys.stderr)
return stamp
</code></pre>
<p><em>files: /test/test_[1/2].py</em></p>
<pre class="lang-py prettyprint-override"><code>from test.fixture_1 import my_fixture
@pytest.mark.single_source
def test_from_other_file_src[1/21,22](my_fixture):
print(f"test_from_other_file: {my_fixture}", file=sys.stderr)
</code></pre>
<p><em>file pytest.ini</em></p>
<pre class="lang-yaml prettyprint-override"><code>[pytest]
markers =
single_source: single source, same import path
</code></pre>
|
<python><pytest><pytest-fixtures>
|
2023-12-08 15:59:24
| 0
| 3,651
|
Cutton Eye
|
77,627,774
| 1,234,434
|
Best way to count number of values present in a dataframe column
|
<p>I have this dataframe:</p>
<pre><code>dfsupport = pd.DataFrame({'Date': ['8/12/2020','8/12/2020','13/1/2020','24/5/2020','31/10/2020','11/7/2020','11/7/2020','4/4/2020','1/2/2020'],
'Category': ['Table','Chair','Cushion','Table','Chair','Mats','Mats','Large','Large'],
'Sales': ['1 table','3chairs','8 cushions','3Tables','12 Chairs','12Mats','4Mats','13 Chairs and 2 Tables', '3 mats, 2 cushions 4@chairs'],
'Paid': ['Yes','Yes','Yes','Yes','No','Yes','Yes','No','Yes'],
'Amount': ['93.78','$51.99','44.99','38.24','£29.99','29 21 only','18','312.8','63.77' ]
})
</code></pre>
<p>If I want to find the number of instances of a category is this the best way to do it?</p>
<pre><code>print(dfsupport.groupby(dfsupport['Category'],dropna=True).apply(lambda y: y['Category'].count()))
</code></pre>
<p>output:</p>
<pre><code>Category
Chair 2
Cushion 1
Large 2
Mats 2
Table 2
dtype: int64
</code></pre>
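For comparison, a simpler way to get the same per-category counts (assuming you only need occurrence counts, not other aggregations) is `value_counts` on the column:

```python
import pandas as pd

# same Category column as in the question
dfsupport = pd.DataFrame({
    'Category': ['Table', 'Chair', 'Cushion', 'Table', 'Chair',
                 'Mats', 'Mats', 'Large', 'Large'],
})

# value_counts counts occurrences of each value; NaN is dropped by default
counts = dfsupport['Category'].value_counts().sort_index()
print(counts)
```

This gives the same Chair=2, Cushion=1, Large=2, Mats=2, Table=2 result without the `groupby`/`apply` round trip.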
|
<python><pandas><dataframe>
|
2023-12-08 15:53:33
| 1
| 1,033
|
Dan
|
77,627,725
| 231,957
|
Cannot connect to my couchbase cluster with python SDK
|
<p>I can reach my couchbase cluster with a simple cURL:</p>
<pre><code>curl -u $CB_USERNAME:$CB_PASSWORD http://$CB_HOST/pools/default/buckets
</code></pre>
<p>but I do not manage to connect using the Python SDK:</p>
<pre><code>from datetime import timedelta
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
import os
# Configuration
CB_HOST = os.environ.get('CB_HOST')
CB_BUCKET = os.environ.get('CB_BUCKET')
CB_USERNAME = os.environ.get('CB_USERNAME')
CB_PASSWORD = os.environ.get('CB_PASSWORD')
# Initialize Couchbase connection
auth = PasswordAuthenticator(CB_USERNAME, CB_PASSWORD)
options = ClusterOptions(auth)
cluster = Cluster(f'couchbase://{CB_HOST}', options)
</code></pre>
<p>This gives this error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/luc/code/couchbase/examples/main.py", line 27, in <module>
cluster = Cluster(f'couchbase://{CB_HOST}', options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/luc/code/couchbase/examples/venv/lib/python3.11/site-packages/couchbase/cluster.py", line 99, in __init__
self._connect()
File "/Users/luc/code/couchbase/examples/venv/lib/python3.11/site-packages/couchbase/logic/wrappers.py", line 98, in wrapped_fn
raise e
File "/Users/luc/code/couchbase/examples/venv/lib/python3.11/site-packages/couchbase/logic/wrappers.py", line 82, in wrapped_fn
ret = fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/luc/code/couchbase/examples/venv/lib/python3.11/site-packages/couchbase/cluster.py", line 105, in _connect
raise ErrorMapper.build_exception(ret)
couchbase.exceptions.UnAmbiguousTimeoutException: UnAmbiguousTimeoutException(<ec=14, category=couchbase.common, message=unambiguous_timeout (14), C Source=/Users/couchbase/jenkins/workspace/python/sdk/python-packaging-pipeline/py-client/src/connection.cxx:199>)
</code></pre>
<p>Any hints on what I'm doing wrong?</p>
|
<python><couchbase>
|
2023-12-08 15:46:55
| 1
| 17,160
|
Luc
|
77,627,700
| 14,190,747
|
How to avoid executing entire python script when a function is called without main after importing to another python script
|
<p>I have a python script which goes like this, which I'm NOT allowed to modify:</p>
<pre><code>#file1.py
def add(a, b):
print(a + b)
add(1, 2)
#no main function whatsoever
</code></pre>
<p>and I HAVE TO IMPORT <code>file1.py</code> in my source code and use the function, I'm not allowed to rewrite the same function on my own.</p>
<p>So, my code goes like this:</p>
<pre><code>#my_code.py
import file1
#or
#from file1 import add
def main():
file1.add(1, 3)
if __name__ == '__main__':
main()
</code></pre>
<p>which, Obviously results in:</p>
<pre><code>3
4
</code></pre>
<p>Is there any possible way to simply invoke the function and return, without letting that <code>add(1, 2)</code> in <code>file1.py</code> being invoked ?</p>
<p>I simply need the output on console of the function I invoke in my main function, not the <code>file1.py</code> one.</p>
<p>The current makeshift thing I'm doing is:</p>
<pre><code>import os
import file1
os.system('cls')
#rest of the code follows
</code></pre>
<p>It's not very efficient or the brightest idea, but it's enough to get my work done and get my assignment checked.</p>
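The underlying mechanics can be sketched without touching files: top-level statements always run at import time, and only a `__name__` guard would skip them. Here `exec` with different `__name__` values mimics importing vs. running a (hypothetical, guarded) version of the module:

```python
src = '''
def add(a, b):
    return a + b

if __name__ == "__main__":
    result = add(1, 2)       # only runs when executed as a script
'''

imported = {'__name__': 'file1'}        # mimics `import file1`
exec(src, imported)
print('result' in imported)             # False -- guarded call skipped

as_script = {'__name__': '__main__'}    # mimics `python file1.py`
exec(src, as_script)
print(as_script['result'])              # 3
```

Since `file1.py` has no such guard and cannot be modified, its top-level call will always run on import; there is no clean way to suppress it from the importing side.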
|
<python>
|
2023-12-08 15:43:39
| 3
| 356
|
JustAnotherDev
|
77,627,546
| 3,490,622
|
Unable to import create_pandas_dataframe_agent due to langchain_core and langchain relative path issues
|
<p>I am trying to import <code>create_pandas_dataframe_agent</code> from <code>langchain.agents</code> (in Databricks), and I am getting the following strange error:</p>
<pre><code>from langchain.agents import AgentType
</code></pre>
<pre><code>ValueError: '<path_name>/python3.0/site-packages/langchain/agents' is not in the subpath of '<path_name>/python3.0/site-packages/langchain_core' OR one path is relative and the other is absolute
</code></pre>
<p>I updated both langchain_core and langchain to the latest versions, but the error is still there. Any help would be appreciated.</p>
|
<python><pandas><langchain>
|
2023-12-08 15:16:15
| 2
| 1,011
|
user3490622
|
77,627,507
| 8,942,319
|
psycopg2 to redshift timing out but sql client connects ok
|
<p>I have connected a SQL client to a Redshift instance without issue, using basic user, pass, and SSH.</p>
<p>The Redshift instance is in a VPC but has an associated security group that allows SSH connections for specific IP addresses. The Redshift instance is also publicly available. Again, the SQL client connects just fine.</p>
<p>Even though the SQL client was connecting fine I added an inbound rule to allow TCP traffic at 5439 from my IP Address.</p>
<p>The Python code still does not connect. It throws this error.</p>
<pre><code>connection to server at "redshift-instance.asdf.us-west-14.redshift.amazonaws.com" (xx.xx.xxx.xxx), port 5439 failed: Operation timed out
Is the server running on that host and accepting TCP/IP connections?
</code></pre>
<p>The code is below:</p>
<pre><code>import psycopg2
conf = {
"dbname": "something",
"host": "something",
"port": 5439, # tried this and SSH forwarded port 4450
"user": "something",
"password": "something",
}
def create_conn(*args, **kwargs):
config = kwargs["config"]
try:
conn = psycopg2.connect(
dbname=config["dbname"],
host=config["host"],
port=config["port"],
user=config["user"],
password=config["password"],
)
except Exception as err:
print(err, err)
return conn
print("start")
conn = create_conn(config=conf)
cursor = conn.cursor()
cursor.execute("SELECT * FROM `pg_group`;")
rows = cursor.fetchall()
for row in rows:
print(row)
</code></pre>
<p>Any tips on how to diagnose what could be going wrong? Assuming the code is correct, what networking issues could I be running into?</p>
<p>If it's relevant, the Node public and private IP addresses shown in the console do not match the one given in the error message.</p>
<p>EDIT: I do start an ssh tunnel when running the code. Tried the code with both the SSH port and the default redshift port.</p>
<pre><code>Host staging-bridge
HostName xx.xx.xxx.xx
User user
LocalForward 4450 redshift-instance.asdf.us-west-14.redshift.amazonaws.com:5439
IdentityFile ~/.ssh/jumphost_key
</code></pre>
<p>EDIT: Pointing <code>conf</code> at localhost (the local end of the tunnel) works. Guess I didn't understand the SSH setup well.</p>
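With the tunnel above, the connection target from Python's point of view is the local end of the forward, not the Redshift hostname. A sketch of the config matching the question's <code>LocalForward 4450 ...</code> line (the other values are placeholders, as in the question):

```python
# connect to the local end of the SSH tunnel; the tunnel forwards
# localhost:4450 on to the Redshift endpoint on port 5439
conf = {
    "dbname": "something",
    "host": "localhost",   # not the redshift-instance hostname
    "port": 4450,          # the LocalForward port
    "user": "something",
    "password": "something",
}
```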
|
<python><amazon-web-services><ssh><amazon-redshift><psycopg2>
|
2023-12-08 15:11:31
| 0
| 913
|
sam
|
77,627,470
| 9,173,710
|
PySide6 preocessing of dropped files/folders locks source Explorer window
|
<p>I have implemented Drag and Drop feature for my app that analyzes video files.</p>
<p>When dropping a folder, I show a dialog and then start processing the videos.
While that runs, the Explorer window I dragged the folder from is totally unresponsive: I cannot click anything, and it doesn't reflect file changes.</p>
<p>Other explorer windows are working normally.</p>
<p>What is causing that?
I'm using Python 3.11.4 and PySide6 6.4.2<br />
This is the code for my drag and drop:</p>
<pre class="lang-py prettyprint-override"><code>class Analyzer(QMainWindow):
file_drop_event = Signal(list)
def __init__(self):
QMainWindow.__init__(self)
self.setupUi(self)
self.file_drop_event.connect(self.file_dropped)
def dragEnterEvent(self, event: QtGui.QDragEnterEvent) -> None:
if event.mimeData().hasUrls:
event.accept()
else:
event.ignore()
def dragMoveEvent(self, event):
if event.mimeData().hasUrls:
event.setDropAction(Qt.CopyAction)
event.accept()
else:
event.ignore()
def dropEvent(self, event: QtGui.QDropEvent) -> None:
if event.mimeData().hasUrls:
event.setDropAction(Qt.CopyAction)
event.accept()
links = []
for url in event.mimeData().urls():
links.append(str(url.toLocalFile()))
self.file_drop_event.emit(links)
else:
event.ignore()
@Slot(list)
def file_dropped(self, files):
if len(files) == 1:
file = Path(files[0])
if file.is_file():
logging.info(f"Loading file {str(file)}")
if file.suffix in [".json", ".csv"]:
self.data_control.load_data(file)
self.tabWidget.setCurrentIndex(1)
else:
self.videoController.load_video(str(file))
self.tabWidget.setCurrentIndex(0)
elif file.is_dir():
# function that does not return until everything is processed
# calls QApplication.processEvents() regularly
self.start_batch_process(str(file))
elif len(files) > 1:
QMessageBox.critical(self, "Invalid Drop", "Only drop single files or folders!")
else:
QMessageBox.critical(self, "Invalid Path", "Dropped file path is invalid!")
</code></pre>
|
<python><explorer><pyside6>
|
2023-12-08 15:05:39
| 0
| 1,215
|
Raphael
|
77,627,348
| 10,499,034
|
How to convert a PyObject to string in C#
|
<p>I am working with Pythonnet to create a C# GUI that uses backend Python code. I am just learning how the connection works and need to get a Python variable into C# and display it in a C# text box. How can I get the value computed for 'outer' in Python and display it in the textBox1? It works fine when sending the output to a console but I need it in the text box.</p>
<pre><code>public Form1()
{
InitializeComponent();
Runtime.PythonDLL = @"C:\Program Files (x86)\Python312-32\python312.dll";
}
private void button1_Click(object sender, EventArgs e)
{
//THIS WORKS NOW!!
PyModule scope;
PythonEngine.Initialize();
using (Py.GIL())
{
scope = Py.CreateScope();
//scope.Exec("print('Hello World from Python!')");
dynamic np = Py.Import("numpy");
dynamic outer=(np.cos(np.pi * 2));
//This is the part that does not work.
//It says can't convert a PyObject to a string.
textBox1.Text = outer
//Console.WriteLine("Press enter to close...");
//Console.ReadLine();
}
</code></pre>
|
<python><c#><python.net><pyobject>
|
2023-12-08 14:44:24
| 0
| 792
|
Jamie
|
77,627,331
| 8,544,328
|
Nonlinear constraint propagation with z3?
|
<p>I am a Z3 newbie, and am trying to determine the marginal limits of a feasible region defined by a system of (in)equalities. My first (perhaps inefficient) approach is to define an Optimizer, assign limits, then minimize and maximize each variable in turn. This works great as long as the inequalities are linear. See the snippet below:</p>
<pre><code>from z3 import *
# Create Z3 variables
a, b, = Reals('a b')
# Define the constraints
constraints = [
a >= 0,
a <= 5,
b >= 0,
b <= 5,
a + b == 4]
# Create a Z3 solver
solver = Optimize()
for constraint in constraints:
solver.add(constraint)
for variable in [a,b]:
# Minimum value
solver = Optimize()
for constraint in constraints:
solver.add(constraint)
solver.minimize(variable)
solver.check()
model = solver.model()
print("Lower limit for variable "+str(variable)+": "+str(model[variable]))
# Maximum value
solver = Optimize()
for constraint in constraints:
solver.add(constraint)
solver.maximize(variable)
solver.check()
model = solver.model()
    print("Upper limit for variable "+str(variable)+": "+str(model[variable]))
</code></pre>
<p>Now if I exchange the last equality <code>a + b == 4</code> for a nonlinear equality <code>a * b == 4</code>, the solver suddenly freezes, even though the solution should be quite simple, with limits <code>[0.8,5]</code> for both <code>a</code> and <code>b</code>.</p>
<p>Any idea why this is happening? Do you know of a better way to solve such nonlinear constraint problems with z3?</p>
|
<python><constraints><limit><z3><z3py>
|
2023-12-08 14:41:42
| 1
| 553
|
J.Galt
|
77,626,939
| 930,169
|
pass polars dataframe from rust to python and then return manipulated dataframe back to rust
|
<p>I have a Rust program that is a typical backend service. It listens for computation requests and forwards some parameters (e.g. a Polars dataframe) to a Python module with pyo3. The reason I need pyo3 is that Python's data-analysis ecosystem is better; I have plenty of Python libraries for data work, for example SciPy.</p>
<pre><code>// main.rs
use polars::prelude::*;
use pyo3::{prelude::*, types::PyModule};
fn main() -> PyResult<()> {
let code = include_str!("./test.py");
Python::with_gil(|py| {
let activators = PyModule::from_code(py, code, "activators.py", "activators")?;
let df: DataFrame = df!(
"integer" => &[1, 2, 3, 4, 5],
"float" => &[4.0, 5.0, 6.0, 7.0, 8.0],
)
.unwrap();
let result = activators.getattr("test")?.call1((df,))?.extract()?;
Ok(())
})
}
</code></pre>
<pre><code># test.py
def test(x):
# manipulate dataframe x and then return to rust
return x
</code></pre>
<p>With the above code I can pass Rust native types to Python, but with a Polars dataframe I have to implement some traits:</p>
<pre><code>the trait bound `polars::prelude::DataFrame: IntoPy<Py<PyAny>>` is not satisfied
the following other types implement trait `IntoPy<T>`:
<bool as IntoPy<Py<PyAny>>>
<char as IntoPy<Py<PyAny>>>
<isize as IntoPy<Py<PyAny>>>
<i8 as IntoPy<Py<PyAny>>>
<i16 as IntoPy<Py<PyAny>>>
<i32 as IntoPy<Py<PyAny>>>
<i64 as IntoPy<Py<PyAny>>>
<i128 as IntoPy<Py<PyAny>>>
and 193 others
required for `(polars::prelude::DataFrame,)` to implement `IntoPy<Py<PyTuple>>`
</code></pre>
<p>Is there any straightforward way to achieve my goal? Ideally passing polars dataframe to python is 0-copy, since polars also has a python implementation.</p>
|
<python><rust>
|
2023-12-08 13:32:50
| 1
| 2,941
|
JACK M
|
77,626,758
| 4,742,074
|
How to skip first few lines from file in Python until line with required prefix is found, without writing a comparison function
|
<p>I need to read lines from the text file like this:</p>
<pre><code>junk_line_1
junk_line_2
junk_line_3
real data starts here
</code></pre>
<p>so I need to skip the lines until the line starting with 'real data' is found.</p>
<p>The solution I have is:</p>
<pre><code>import itertools
def has_prefix(line):
return not line.startswith('real data')
def read_file(f):
lines=[]
for line in itertools.dropwhile(has_prefix, f):
lines.append(line.rstrip())
return lines
</code></pre>
<p>I do not like the need to write this 'has_prefix' function, though. For one thing, it looks a bit ugly, for another, I can use it with only one prefix. The ideal solution I would like to get is like this (not syntactically correct) :</p>
<pre><code>import itertools
def read_file(f, prefix):
lines=[]
for line in itertools.dropwhile(not line.startswith(prefix), f) :
lines.append(line.rstrip())
return lines
</code></pre>
<p>I cannot use this, as <code>line</code> is not defined until the loop's body is entered. But maybe something is possible in a reasonably elegant way? At the very least, I would like to be able to pass the prefix for <code>has_prefix</code> as a parameter, rather than have it hard-coded.</p>
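A closure gets close to the ideal version: the predicate is built from the `prefix` argument at call time, so nothing is hard-coded (sketch, with `StringIO` standing in for a real file):

```python
import itertools
from io import StringIO

def read_file(f, prefix):
    # drop lines while they do NOT start with prefix, keep everything after
    pred = lambda line: not line.startswith(prefix)
    return [line.rstrip() for line in itertools.dropwhile(pred, f)]

text = "junk_line_1\njunk_line_2\njunk_line_3\nreal data starts here\nmore data\n"
print(read_file(StringIO(text), "real data"))
# ['real data starts here', 'more data']
```

`functools.partial(has_prefix, prefix)` would work the same way if the named-function style is preferred over a lambda.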
|
<python><string><file>
|
2023-12-08 13:05:30
| 2
| 571
|
one_two_three
|
77,626,748
| 6,307,685
|
Differentiating a sum in Sympy
|
<p>I would like to differentiate the entropy H(T) of a discrete random variable T with respect to one of the masses q(t') using Sympy.
The following implements the expression for H(T):</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sym
sym.init_printing(use_latex=True)
N_T = sym.Symbol("N_T")
qT = sym.Function("q")
t = sym.Symbol("t")
ht_expr = sym.summation( qT(t) * sym.log(qT(t)), (t, 1, N_T) )
ht_expr
</code></pre>
<p>This prints the correct expression for H(T), as expected:</p>
<p><a href="https://i.sstatic.net/KPw3j.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPw3j.jpg" alt="enter image description here" /></a></p>
<p>Now, differentiating with respect to q(t') should result in the derivative of q(t)log(q(t)) evaluated at t' if t' is in 1...N_T, and 0 otherwise (this is a known result, easy to check). I.e. it should return:</p>
<p><a href="https://i.sstatic.net/GMP1T.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GMP1T.jpg" alt="enter image description here" /></a></p>
<p>Instead I get 0:</p>
<pre class="lang-py prettyprint-override"><code>tprime = sym.Symbol("t'")
sym.diff(ht_expr, qT(tprime))
[Out]: 0
</code></pre>
<p>How can I make Sympy handle sums like this and give the correct result?</p>
|
<python><sympy><symbolic-math><differentiation>
|
2023-12-08 13:03:24
| 1
| 761
|
soap
|
77,626,464
| 2,071,807
|
Patch class property to return a modified version of the original value
|
<p>I would like to patch a class property to return a modified version of the original property.</p>
<p>In this example, I would like to patch <code>Greeter.greeting</code> to prepend the value "TEST" to whatever the real <code>greeting</code> property returns:</p>
<pre class="lang-py prettyprint-override"><code>class Greeter:
def __init__(self, name: str):
self.name = name
@property
def greeting(self):
return f"Hi {self.name}"
</code></pre>
<p>I thought something like this might work:</p>
<pre class="lang-py prettyprint-override"><code>def test_greeter():
with patch.object(
Greeter, "greeting", new_callable=PropertyMock(wraps=lambda self: f"TEST {self.greeting}")
):
greeter = Greeter(name="Some Person")
assert greeter.greeting == "TEST Hi Some Person"
</code></pre>
<p>But this gives me:</p>
<pre><code>TypeError: test_greeter.<locals>.<lambda>() missing 1 required positional argument: 'self'
</code></pre>
<p>I've also tried various formulations such as:</p>
<pre class="lang-py prettyprint-override"><code>patch.object(Greeter, "greeting", wraps=lambda self: f"TEST {self.greeting}")
patch.object(Greeter, "greeting", side_effect=lambda self: f"TEST {self.greeting}")
</code></pre>
<p>but I'm having no luck.</p>
<p>Is this possible? It <em>feels</em> like it should be.</p>
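One workaround (a sketch, not necessarily the only idiom): capture the original property object first, then patch the class attribute with a plain `property` whose getter calls the captured `fget`. `patch.object` restores the original on exit:

```python
from unittest.mock import patch

class Greeter:
    def __init__(self, name: str):
        self.name = name

    @property
    def greeting(self):
        return f"Hi {self.name}"

original = Greeter.greeting  # the property object, with its .fget intact

with patch.object(Greeter, "greeting",
                  new=property(lambda self: f"TEST {original.fget(self)}")):
    print(Greeter(name="Some Person").greeting)   # TEST Hi Some Person

print(Greeter(name="Some Person").greeting)       # Hi Some Person (restored)
```

The lambda approach in the question fails because `PropertyMock` is looked up on the class, so the mock is called with no arguments; calling the captured `fget(self)` explicitly sidesteps that.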
|
<python><python-unittest>
|
2023-12-08 12:14:48
| 1
| 79,775
|
LondonRob
|
77,626,342
| 11,857,974
|
How to transform an array using its spectral components
|
<p>I am trying to transform an array using its spectral components.</p>
<p>Let's consider an 4x4 array arr<br />
It will have an adjacent matrix A, a degree matrix D and Laplacian matrix L = D-A</p>
<pre><code>import numpy as np
from numpy.linalg import eig
</code></pre>
<p>Compute its eigenvectors and eigenvalues, and sort it in descending order</p>
<pre><code>eig_val, eig_vec = eig(L)
idx = eig_val.argsort()[::-1]
</code></pre>
<p>Sort the eigenvectors accordingly</p>
<pre><code>eig_vec = eig_vec[:,idx]
</code></pre>
<p>The product of 2 distincts eigenvectors must be 0.
I notice that this is not the case here e.g. the product of the first and second eigenvectors is:</p>
<pre><code>sum(np.multiply(eig_vec[0], eig_vec[1])) = 0.043247527085787975
</code></pre>
<p>Is there anything I am missing ?</p>
<p>Compute the spectral components of the input array</p>
<pre><code>spectral = np.matmul(eig_vec.transpose(), arr.flatten())
print(spectral.shape)
</code></pre>
<p>Take the first 15 components.</p>
<pre><code>masked = np.zeros(spectral.shape)
m = spectral[:15]
masked[:15] = m
</code></pre>
<p>Get the new features</p>
<pre><code>updated_arr = np.matmul(eig_vec, masked)
updated_arr = updated_arr.reshape(4, -1)
</code></pre>
<p>The updated array is very different from the original.</p>
<p>Any suggestions or resources to look at are welcome.</p>
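Two details worth checking: for a symmetric Laplacian (undirected graph), `np.linalg.eigh` is the right routine and guarantees an orthonormal set of eigenvectors, and the eigenvectors are the *columns* of the returned matrix, so the orthogonality check is `eig_vec[:, 0] @ eig_vec[:, 1]` rather than a product of rows. A sketch on a hypothetical 4-node path graph (not your data):

```python
import numpy as np

# adjacency matrix of a 4-node path graph (example data)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # Laplacian L = D - A

w, V = np.linalg.eigh(L)                # eigh: orthonormal basis for symmetric L

print(abs(V[:, 0] @ V[:, 1]))           # ~0: columns are mutually orthogonal
```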
|
<python><signal-processing><linear-algebra><spectral-python>
|
2023-12-08 11:48:20
| 2
| 707
|
Kyv
|
77,626,321
| 14,529,779
|
Summing Duplicates Across Sublists in a Nested List
|
<p>I hope my message finds you well. I am seeking your expertise and feedback on a specific issue I have encountered in the <code>repeat_sum</code> function within the codebase. The function is designed to compute the sum of integers that are present in two or more sublists within the input list</p>
<p>Here is the current implementation:</p>
<pre><code>def repeat_sum(arr):
arr_comp = []
arr_sum = []
arr_sum_final = []
for i in arr:
for j in i:
arr_comp.append(j)
for i in arr_comp:
if (arr_comp.count(i)) > 1:
arr_sum.append(i)
for i in arr_sum:
if i not in arr_sum_final:
arr_sum_final.append(i)
return sum(arr_sum_final)
</code></pre>
<p>However, I've identified a case where the function fails to provide the correct result. Specifically, when dealing with a nested list and duplicate elements, the function might return an incorrect value. In this case <code>[[1], [2], [3, 4, 4], [5]],</code> the correct result would be <code>0</code>, as the only duplicate integer, <code>4</code>, is confined within a single sublist.</p>
<p>I believe the issue stems from the function's current approach, which involves flattening the nested list and then identifying duplicates. To address this, I am seeking your guidance on a more robust solution that correctly handles cases like the one mentioned.</p>
<p>Your insights and suggestions on improving the <code>repeat_sum</code> function would be highly appreciated. Please feel free to share any comments or alternative approaches to ensure the accuracy and efficiency of the code.</p>
<hr />
<p>some sample inputs and expected outputs</p>
<ul>
<li><em>input</em> ==> <strong>[[1, 2, 3],[2, 8, 9],[7, 123, 8]]</strong> <em>output</em> ==> <strong>10</strong></li>
<li><em>input</em> ==> <strong>[[1, 8, 8], [8, 8, 8], [8, 8, 8, 1]]</strong> <em>output</em> ==> <strong>9</strong></li>
<li><em>input</em> ==> <strong>[[1], [2], [3, 4, 4, 4], [123456789]]</strong> <em>output</em> ==> <strong>0</strong></li>
</ul>
<hr />
<p>Thank you for your time and assistance.</p>
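One way to make "present in two or more sublists" explicit (a sketch using `collections.Counter`; the structure differs from the original function): deduplicate each sublist with `set` before counting, so repeats inside a single sublist no longer register as cross-sublist duplicates:

```python
from collections import Counter

def repeat_sum(arr):
    counts = Counter()
    for sub in arr:
        counts.update(set(sub))          # each sublist contributes a value at most once
    return sum(v for v, c in counts.items() if c > 1)

print(repeat_sum([[1, 2, 3], [2, 8, 9], [7, 123, 8]]))     # 10
print(repeat_sum([[1, 8, 8], [8, 8, 8], [8, 8, 8, 1]]))    # 9
print(repeat_sum([[1], [2], [3, 4, 4, 4], [123456789]]))   # 0
```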
|
<python><list><algorithm><for-loop><if-statement>
|
2023-12-08 11:45:10
| 1
| 9,636
|
TAHER El Mehdi
|
77,626,120
| 388,038
|
How to get Azure bindings to FastAPI endpoint
|
<p>How to pass a string <code>starter</code> from Azure http_trigger to the FastAPI endpoint?</p>
<p>Currently I'm hacking this by setting <code>starter</code> in <code>context.trace_context.attributes</code></p>
<p>Or alternative to get the Azure bindings into FastAPI?</p>
<pre class="lang-py prettyprint-override"><code># "Client side"
def get_client(request: Request) -> df.DurableOrchestrationClient:
starter = request.scope["azure_functions.trace_context"].attributes["starter"]
client = df.DurableOrchestrationClient(starter)
return client
# This is the FastAPI version of the `route="startOrchestrator"`
# `async def start_orchestrator(req: func.HttpRequest, client):`
@fastapi_app.get(path="/fast_orchestrator")
async def fast_orchestrator(
client: df.DurableOrchestrationClient = Depends(get_client)
):
instance_id = await client.start_new("my_orchestrator")
reply = f"Started orchestration with ID = '{instance_id}'."
logging.info(reply)
# Don't know how to convert this back to Azure func.HttpRequest
# return client.create_check_status_response(req, instance_id)
# Here we should build the HTTP 202 with location header to the /status/{instance_id}
return reply
# Sort of "middleware" side
@df_app.route(route="{*route}", auth_level=func.AuthLevel.ANONYMOUS)
@df_app.generic_input_binding(arg_name="starter", type="durableClient")
async def http_trigger(
req: func.HttpRequest, context: func.Context, starter: str
) -> func.HttpResponse:
context.trace_context.attributes["starter"] = starter
response: func.HttpResponse = await func.AsgiMiddleware(fastapi_app).handle_async(
req, context
)
# Do activity or orchestration based on response
return response
</code></pre>
|
<python><azure-functions><fastapi><azure-durable-functions><fastapi-middleware>
|
2023-12-08 11:06:30
| 0
| 3,182
|
André Ricardo
|
77,625,676
| 5,462,398
|
pyInstaller: Pack binary executable inside project's executable to run
|
<p><strong>TLDR;</strong></p>
<p>I would like to pack the <code>ffmpeg</code> executable inside my own executable. Currently I am getting</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'
Skipping ./testFile202312061352.mp4 due to FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'
</code></pre>
<p><strong>Details:</strong></p>
<p>I am creating executable file using following command:</p>
<pre><code>pyinstaller cli.py \
--onefile \
--add-binary /Users/<machineUser>/anaconda3/envs/my_env/bin/ffmpeg:bin
</code></pre>
<p>The code that uses <code>ffmpeg</code> is not authored by me. And I would like to keep that part the same.</p>
<p>When I run it from the command line while the <code>conda</code> environment is active, it works, as <code>python</code> (or perhaps <code>anaconda</code>) knows where the binaries are. I have a pretty empty <code>cli.py</code>; that seems to be the entry point, and I hope that, if possible, I can set the <code>bin</code> directory's path there ...</p>
<p>I am able to successfully run the application like following:</p>
<pre><code>(my_env) machineUser folder % "dist/cli_mac_001202312051431" ./testFile202312061352.mp4
</code></pre>
<p>I would like to run like following :</p>
<pre><code>(base) machineUser folder % "dist/cli_mac_001202312051431" ./testFile202312061352.mp4
</code></pre>
<p>I would like to keep the world outside my executable's tmp folder unchanged; I do not want anything to be "left behind" after the executable terminates.</p>
<p><strong>Question:</strong></p>
<p>Can someone please explain how to modify the <code>pyinstaller</code> command, or what to change in <code>cli.py</code>, to achieve this?</p>
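A common pattern is to resolve bundled resources through `sys._MEIPASS`, which PyInstaller's one-file bootloader sets to the temporary extraction folder (and which disappears when the executable exits). A sketch for `cli.py`; that `--add-binary ...:bin` lands the file under a `bin` subfolder of that directory is an assumption to verify for your build:

```python
import os
import sys

def resource_path(relative):
    """Resolve a path inside the PyInstaller bundle, or relative to the CWD."""
    # _MEIPASS only exists inside a one-file bundle; fall back outside it
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, relative)

# hypothetical: let third-party code that shells out to `ffmpeg` find the
# bundled copy by prepending its folder to PATH before that code runs
os.environ["PATH"] = resource_path("bin") + os.pathsep + os.environ.get("PATH", "")
```

Because only `PATH` inside the process is changed, nothing persists outside the executable's temp folder after termination.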
|
<python><ffmpeg><anaconda><pyinstaller><anaconda3>
|
2023-12-08 09:42:10
| 2
| 1,348
|
zur
|
77,625,508
| 3,650,477
|
How to activate verbosity in Langchain
|
<p>I'm using Langchain 0.0.345. I cannot get a verbose output of what's going on under the hood using the <a href="https://python.langchain.com/docs/expression_language/" rel="noreferrer">LCEL approach</a> to chain building.</p>
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.globals import set_verbose
set_verbose(True)
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
model = ChatOpenAI()
output_parser = StrOutputParser()
chain = prompt | model | output_parser
chain.invoke({"topic": "ice cream"})
</code></pre>
<p>According to the <a href="https://python.langchain.com/docs/guides/debugging" rel="noreferrer">documentation</a> using <code>set_verbose</code> is the way to have a verbose output showing intermediate steps, prompt builds etc. But the output of this script is just a string without any intermediate steps.
Actually, the module <code>langchain.globals</code> is not even mentioned in <a href="https://api.python.langchain.com/en/v0.0.345/search.html?q=langchain.globals" rel="noreferrer">the API documentation</a>.</p>
<p>I have also tried setting the <code>verbose=True</code> parameter when creating the model, but that does not work either. It used to work with the former class-based approach to building chains.</p>
<p>What is the current recommended approach to logging the output so you can understand what's going on?</p>
<p>Thanks!</p>
|
<python><langchain><large-language-model>
|
2023-12-08 09:14:30
| 2
| 2,729
|
Pythonist
|
77,625,497
| 10,964,685
|
Join two geopandas df's using shortest point along line network - python
|
<p>I'm joining two geopandas dataframes using <code>gpd.sjoin_nearest</code>. This returns the nearest point, but I'm trying to work out what unit of measurement the output distance is in. I've done approximate calculations in km, but I don't think the projection is working.</p>
<p>I'm also aiming to incorporate a third geopandas dataframe (<code>lines</code>) where the path between points must travel along these lines, i.e. the network must be interconnected.</p>
<p><strong>Is it possible to import a network of connected lines (<code>roads</code>) and measure the shortest distance between the same dataframes but the distance must be through the connected lines?</strong></p>
<pre><code>import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt
from shapely.geometry import LineString
point1 = pd.DataFrame({
'Cat': ['t1', 't2'],
'LAT': [-20, -30],
'LON': [140, 145],
})
point2 = pd.DataFrame({
'Cat': ['a', 'b'],
'LAT': [-30, -20],
'LON': [140, 145],
})
lines = pd.DataFrame({
'Cat': ['1', '1','2','2','3','3'],
'LAT': [-10, -35, -30, -30, -40, -20],
'LON': [140, 140, 130, 148, 145, 145],
})
P1_gpd = gpd.GeoDataFrame(point1, geometry = gpd.points_from_xy(point1.LON, point1.LAT, crs = 4326))
P2_gpd = gpd.GeoDataFrame(point2, geometry = gpd.points_from_xy(point2.LON, point2.LAT, crs = 4326))
lines_gpd = gpd.GeoDataFrame(lines, geometry = gpd.points_from_xy(lines.LON, lines.LAT, crs = 4326))
P1_gpd = P1_gpd.to_crs("epsg:4326")
P2_gpd = P2_gpd.to_crs("epsg:4326")
lines_gpd = lines_gpd.to_crs("epsg:4326")
roads_gpd = lines_gpd.groupby(['Cat'])['geometry'].apply(lambda x: LineString(x.tolist()))
roads_gpd = gpd.GeoDataFrame(roads_gpd, geometry='geometry')
nearest_points = gpd.sjoin_nearest(P1_gpd, P2_gpd,
distance_col="nearest_distance", lsuffix="left", rsuffix="right")
print(nearest_points)
fig, ax = plt.subplots()
P1_gpd.plot(ax = ax, markersize = 10, color = 'blue', zorder = 2)
P2_gpd.plot(ax = ax, markersize = 10, color = 'red', zorder = 2)
roads_gpd.plot(ax = ax, color = 'black')
plt.show()
</code></pre>
<p>The nearest point to <code>t1</code> and <code>t2</code> from <code>P1_gpd</code> is point <code>a</code> in <code>P2_gpd</code>.</p>
<p><strong>I'm aiming to convert the distance to km. I've done the raw calculations. The first point has a distance of 1107km and the second is 482km.</strong></p>
<p>intended output:</p>
<pre><code> Cat_left LAT_left LON_left geometry index_right Cat_right LAT_right LON_right nearest_distance
0 t1 -20 140 POINT (140.00000 -20.00000) 0 a -30 140 1112
1 t2 -30 145 POINT (145.00000 -30.00000) 1 a -30 140 481
</code></pre>
<p><a href="https://i.sstatic.net/mdJw2JqD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdJw2JqD.png" alt="enter image description here" /></a></p>
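With both frames in EPSG:4326, `sjoin_nearest` reports distances in the units of the CRS — degrees — which is why the numbers look wrong; reprojecting to a metric CRS before joining gives metres. As a sanity check, the expected kilometre values above can be reproduced with a stdlib great-circle calculation (6371 km is the usual spherical-Earth approximation):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance on a sphere of radius 6371 km.
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

print(round(haversine_km(-20, 140, -30, 140)))  # t1 -> a: 1112
print(round(haversine_km(-30, 145, -30, 140)))  # t2 -> a: 481
```

For distances constrained to the road network, the usual route is to convert the lines into a graph (e.g. with networkx or osmnx) and run a shortest-path query between snapped points; that part goes beyond a stdlib sketch.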
|
<python><geometry><geopandas>
|
2023-12-08 09:13:05
| 1
| 392
|
jonboy
|
77,625,479
| 1,654,143
|
No matching distribution found for python-colorspace
|
<p>I'm trying to install python-colorspace, as it looks very useful for me, but I'm running into difficulty. Usually I'd use</p>
<pre><code>py -3 -m pip install python-colorspace
</code></pre>
<p>but that results in:</p>
<p>ERROR: Could not find a version that satisfies the requirement python-colorspace (from versions: none)</p>
<p>ERROR: No matching distribution found for python-colorspace</p>
<p>Using</p>
<p>py -3 -m pip install <a href="https://github.com/retostauffer/python-colorspace" rel="nofollow noreferrer">https://github.com/retostauffer/python-colorspace</a></p>
<p>brings up the message</p>
<pre><code>Collecting https://github.com/retostauffer/python-colorspace
  Downloading https://github.com/retostauffer/python-colorspace
     \ 240 kB 3.3 MB/s
ERROR: Cannot unpack file C:\Users\USERNAME\AppData\Local\Temp\pip-unpack-k1x8gn56\python-colorspace (downloaded from C:\Users\USERNAME\AppData\Local\Temp\pip-req-build-7ftvh7h6, content-type: text/html; charset=utf-8); cannot detect archive format
ERROR: Cannot determine archive format of C:\Users\USERNAME\AppData\Local\Temp\pip-req-build-7ftvh7h6
</code></pre>
<p>Any ideas?</p>
|
<python><pip><colors>
|
2023-12-08 09:09:17
| 1
| 7,007
|
Ghoul Fool
|
77,625,305
| 3,932,908
|
Can I take only certain keys from each config file?
|
<p>Let's say I have two config files in the "base" directory, "base/v1.yaml" and "base/v2.yaml". Let's say each of these are structured something like:</p>
<pre><code>model:
embedding_size: 20
num_layers: 4
...
dataset:
name: ...
</code></pre>
<p>If I have a new yaml and am trying to set defaults, is there a way I can choose to only take e.g. "model" from v1, and then "dataset" from v2? Ideally, something like:</p>
<pre><code>defaults:
- base/v1.model
- base/v2.dataset
- _self_
...
</code></pre>
|
<python><fb-hydra>
|
2023-12-08 08:34:40
| 2
| 399
|
Henry
|
77,624,990
| 10,722,752
|
Error while trying to install python packages
|
<p>I am trying to install various Python packages such as <code>pandas</code>, <code>numpy</code>, <code>mlforecast</code>, <code>xgboost</code> etc. The issue is I have two Python versions in my VM: 3.11.6 and 3.12.</p>
<p>When I run the command <code>print(sys.version)</code> I get:</p>
<pre><code>3.11.6
</code></pre>
<p>But when I check: <code>!python --version</code> I get:</p>
<pre><code>3.12.0
</code></pre>
<p>And when I lookup the kernelspecs using:</p>
<pre><code>!jupyter kernelspec list
</code></pre>
<p>I get:</p>
<pre><code>Available kernels:
python3 C:\Python311\share\jupyter\kernels\python3
</code></pre>
<p>And when I check the sys path:</p>
<pre><code>sys.path
['C:\\Users\\myname\\Downloads',
'C:\\Python311\\python311.zip',
'C:\\Python311\\DLLs',
'C:\\Python311\\Lib',
'C:\\Python311',
'',
'C:\\Python311\\Lib\\site-packages',
'C:\\Python311\\Lib\\site-packages\\win32',
'C:\\Python311\\Lib\\site-packages\\win32\\lib',
'C:\\Python311\\Lib\\site-packages\\pythonwin']
</code></pre>
<p>Both kernelspec and path point to 3.11. But when I try to install say <code>pandas</code></p>
<pre><code>pip install pandas
</code></pre>
<p>I get:</p>
<pre><code>Error: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified: 'C:\\Python312\\Scripts\\f2py.exe'
</code></pre>
<p>Looks like the installation is trying to happen in Python 3.12. Could someone please let me know how to make sure that I am using 3.11? This matters because some of the packages are not compatible with Python 3.12 yet, so I need to stick to 3.11.</p>
|
<python><pip>
|
2023-12-08 07:32:48
| 2
| 11,560
|
Karthik S
|
77,624,296
| 9,983,652
|
sort a list using a custom function with multiple parameters
|
<p>I am trying to sort a list using a custom function. If the function has only one parameter, it is fine. I don't know what to do when the function has two parameters.</p>
<p>For example, when the function has only one parameter.</p>
<pre><code>def sort_by_well_range(col):
# col format is: avgDTS_1100_1200
a=col.split('_')[1:] # remove avgDTS string, keep only depth range
a=[float(col) for col in a] # convert string to float
middle_depth=mean(a)
# print(middle_depth)
return middle_depth
a=['avgDTS_1100_1200','avgDTS_900_1000','avgDTS_1300_1400','avgDTS_800_850']
b=sorted(a,key=sort_by_well_range,reverse=False)
['avgDTS_800_850', 'avgDTS_900_1000', 'avgDTS_1100_1200', 'avgDTS_1300_1400']
</code></pre>
<p>Now I have a function with two parameters, and I get an error. How do I solve it? Thanks</p>
<pre><code>def sort_by_well_range_1(col,start=1):
# col format is: avgDTS_1100_1200
a=col.split('_')[start:] # remove avgDTS string, keep only depth range
a=[float(col) for col in a] # convert string to float
middle_depth=mean(a)
# print(middle_depth)
return middle_depth
a=['influx_oil_1100_1200','influx_oil_900_1000','influx_oil_1300_1400','influx_oil_800_850']
b=sorted(a,key=sort_by_well_range_1(start=2),reverse=False)
---------------------------------------------------------------------------
TypeError
----> 2 b=sorted(a,key=sort_by_well_range_1(start=2),reverse=False)
3 b
TypeError: sort_by_well_range_1() missing 1 required positional argument: 'col'
</code></pre>
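The error happens because `sort_by_well_range_1(start=2)` *calls* the function instead of passing it as a key; `key` needs a one-argument callable. A common fix is to freeze the extra argument with `functools.partial` (or a lambda) — a sketch against the sample data:

```python
from functools import partial
from statistics import mean

def sort_by_well_range_1(col, start=1):
    # col format: influx_oil_1100_1200 -> keep only the depth fields
    depths = [float(p) for p in col.split('_')[start:]]
    return mean(depths)

a = ['influx_oil_1100_1200', 'influx_oil_900_1000',
     'influx_oil_1300_1400', 'influx_oil_800_850']

# Freeze start=2 so sorted() sees a one-argument callable...
b = sorted(a, key=partial(sort_by_well_range_1, start=2))
# ...or, equivalently, wrap the call in a lambda:
c = sorted(a, key=lambda col: sort_by_well_range_1(col, start=2))
print(b)
```

Both forms produce the list ordered by middle depth, from `influx_oil_800_850` up to `influx_oil_1300_1400`.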
|
<python>
|
2023-12-08 04:11:30
| 2
| 4,338
|
roudan
|
77,624,028
| 6,042,206
|
Python Help - ERROR: Failed building wheel for pyheif
|
<p>I cannot install pyheif even after trying everything I know. I'm running Windows 11 with PyCharm Community 2023. I've reinstalled all of the software and restarted the system. I've tried different environments and interpreters but still have no success. Any help would be greatly appreciated.</p>
<pre><code>Collecting pyheif
Using cached pyheif-0.7.1.tar.gz (22 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting cffi>=1.0.0 (from pyheif)
Using cached cffi-1.16.0-cp312-cp312-win_amd64.whl.metadata (1.5 kB)
Collecting pycparser (from cffi>=1.0.0->pyheif)
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Using cached cffi-1.16.0-cp312-cp312-win_amd64.whl (181 kB)
Building wheels for collected packages: pyheif
Building wheel for pyheif (pyproject.toml): started
Building wheel for pyheif (pyproject.toml): finished with status 'error'
Failed to build pyheif
error: subprocess-exited-with-error
Building wheel for pyheif (pyproject.toml) did not run successfully.
exit code: 1
[25 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-312
creating build\lib.win-amd64-cpython-312\pyheif
copying pyheif\constants.py -> build\lib.win-amd64-cpython-312\pyheif
copying pyheif\error.py -> build\lib.win-amd64-cpython-312\pyheif
copying pyheif\reader.py -> build\lib.win-amd64-cpython-312\pyheif
copying pyheif\writer.py -> build\lib.win-amd64-cpython-312\pyheif
copying pyheif\__init__.py -> build\lib.win-amd64-cpython-312\pyheif
creating build\lib.win-amd64-cpython-312\pyheif\data
copying pyheif\data\version.txt -> build\lib.win-amd64-cpython-312\pyheif\data
running build_ext
generating cffi module 'build\\temp.win-amd64-cpython-312\\Release\\_libheif_cffi.c'
creating build\temp.win-amd64-cpython-312
creating build\temp.win-amd64-cpython-312\Release
building '_libheif_cffi' extension
creating build\temp.win-amd64-cpython-312\Release\build
creating build\temp.win-amd64-cpython-312\Release\build\temp.win-amd64-cpython-312
creating build\temp.win-amd64-cpython-312\Release\build\temp.win-amd64-cpython-312\Release
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.38.33130\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I/usr/local/include -I/usr/include -I/opt/local/include -IC:\Users\jerem\PycharmProjects\Scratch\.venv\include -IC:\Users\jerem\AppData\Local\Programs\Python\Python312\include -IC:\Users\jerem\AppData\Local\Programs\Python\Python312\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.38.33130\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcbuild\temp.win-amd64-cpython-312\Release\_libheif_cffi.c /Fobuild\temp.win-amd64-cpython-312\Release\build\temp.win-amd64-cpython-312\Release\_libheif_cffi.obj
_libheif_cffi.c
build\temp.win-amd64-cpython-312\Release\_libheif_cffi.c(570): fatal error C1083: Cannot open include file: 'libheif/heif.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.38.33130\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyheif
ERROR: Could not build wheels for pyheif, which is required to install pyproject.toml-based projects
</code></pre>
|
<python><pyheif>
|
2023-12-08 02:28:48
| 1
| 861
|
Kamikaze_goldfish
|
77,623,799
| 1,609,514
|
How to make a table with PrettyTable that only has one horizontal line under the title?
|
<p>I want a very minimal table which I attempted as follows:</p>
<pre><code>from prettytable import PrettyTable, HEADER, FRAME, ALL, NONE
output = PrettyTable(align='l', header=False)
output.title = 'My Table'
output.border = False
output.hrules = HEADER
output.add_row([1, 2, 3])
output.add_row([4, 5, 6])
output.add_row([7, 8, 9])
print(output)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code> My Table
1 2 3
4 5 6
7 8 9
</code></pre>
<p>Desired output:</p>
<pre class="lang-none prettyprint-override"><code> My Table
-----------
1 2 3
4 5 6
7 8 9
</code></pre>
|
<python><prettytable>
|
2023-12-08 00:59:56
| 1
| 11,755
|
Bill
|
77,623,734
| 4,220,282
|
How to alias a Python constructor?
|
<p>I read <a href="https://stackoverflow.com/questions/11264923/define-method-aliases-in-python">here</a> and <a href="https://stackoverflow.com/questions/21072711/what-is-the-best-way-to-alias-method-names-in-python">here</a> that Python methods can be aliased using <code>=</code>.</p>
<p>However, when I try it on the <code>__init__</code> method, it doesn't seem to work.</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
def __init__(self):
print("Hi mum!")
new_name = __init__
a = MyClass()
b = MyClass.new_name()
</code></pre>
<p>The <code>b = MyClass.new_name()</code> causes this error:</p>
<pre><code>TypeError: __init__() missing 1 required positional argument: 'self'
</code></pre>
<p>Why does this not work? And how should I alias <code>__init__</code> correctly?</p>
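The aliasing trick works for ordinary methods, but `MyClass.new_name` here is the plain `__init__` function: called through the class there is no instance bound to `self`, and even with one it would return `None` rather than a new object, because `__init__` only initializes an instance that `__new__` has already created. A sketch of aliasing *construction* via a classmethod:

```python
class MyClass:
    def __init__(self):
        print("Hi mum!")

    @classmethod
    def new_name(cls):
        # Route through the class so __new__ and __init__ both run.
        return cls()

a = MyClass()
b = MyClass.new_name()
print(type(b).__name__)  # MyClass
```

If all you want is a second name for the constructor, a module-level alias also works: `make = MyClass`, then `b = make()`.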
|
<python><python-3.x><methods><alias>
|
2023-12-08 00:31:48
| 2
| 946
|
Harry
|
77,623,684
| 2,386,605
|
Strange warning for ValidationError in Pydantic V2
|
<p>I updated my FastAPI to <code>Pydantic 2.5.2</code> and I suddenly get the following warning in the logs.</p>
<pre><code>/usr/local/lib/python3.12/site-packages/pydantic/_migration.py:283: UserWarning: `pydantic.error_wrappers:ValidationError` has been moved to `pydantic:ValidationError`.
warnings.warn(f'`{import_path}` has been moved to `{new_location}`.')
</code></pre>
<p>Is that problematic and do you know how to fix it?</p>
|
<python><fastapi><pydantic>
|
2023-12-08 00:14:58
| 1
| 879
|
tobias
|
77,623,584
| 2,665,095
|
Why a long running request is freezing my FastAPI server?
|
<p>I'm seeking help to clarify how asynchronous requests work in FastAPI.
I deployed my FastAPI+NGINX+Uvicorn app on a <em>t2.small</em> EC2 instance (1 vCPU / 2GiB RAM).</p>
<p>It is a simple proxy server that returns the result for the provided <strong>?url=</strong></p>
<pre class="lang-python prettyprint-override"><code>async def proxy(request, sUrl):
targetResponse = urllib.request.urlopen(urllib.request.Request(url=urllib.parse.unquote(sUrl)))
return Response(
status_code=targetResponse.status,
content=targetResponse.read().decode('utf-8'),
media_type=targetResponse.headers['Content-Type']
)
@app.get("/")
async def get_proxy(url: str = "", request: Request = {}):
return await proxy(request, url)
</code></pre>
<p>If I send a request <code>myfastapiserver.com/?url=example.com</code> and if <code>example.com</code> is unresponsive, all subsequent requests will have to wait until the initial request times out.</p>
<p>My questions:</p>
<ol>
<li>Why the first request is freezing my server when trying to get data from a 3rd party web service and that web service is not responding?</li>
<li>Can I make my server to handle multiple requests no matter how long each one takes?</li>
<li>What would be the difference if I had 2 vCPUs instead of 1?</li>
</ol>
<p>Thanks in advance for all your help!</p>
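The freeze comes from `urllib.request.urlopen`, a blocking call inside an `async def` handler: it holds the single event loop, so every other request queues behind it regardless of vCPU count. A hedged sketch of offloading the blocking call to a thread (alternatively, declare the endpoint with plain `def` and FastAPI will run it in a threadpool for you):

```python
import asyncio
import urllib.request

async def fetch_bytes(url: str, timeout: float = 10.0) -> bytes:
    # Run the blocking urllib call in the default thread pool so the
    # event loop stays free to serve concurrent requests.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(
        None, lambda: urllib.request.urlopen(url, timeout=timeout).read()
    )
```

The explicit `timeout` also bounds how long an unresponsive upstream can tie up a worker thread, which addresses the second question; extra vCPUs would not help here, since the bottleneck is the blocked event loop, not CPU.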
|
<python><asynchronous><fastapi>
|
2023-12-07 23:38:51
| 1
| 339
|
Vlad
|
77,623,414
| 1,471,980
|
how do you merge two data frames based on common text on one data frame which is part of another data frame column value
|
<p>I have one huge data frame and one small data frame.</p>
<p>The big data frame, called df1, looks something like this:</p>
<pre><code>Hostname Region Model
ServerABC101 US Cisco
ServerABC102 US Cisco
ServerDDC103 PAC Intel
ServerDDC609 Emea Intel
ServerDDC103 PAC Intel
ServerDDC609 Emea Intel
</code></pre>
<p>Small data frame df2 is like this:</p>
<pre><code>Site City State
ABC NYC NY
DDC DAL TX
</code></pre>
<p>I need to merge these two data frames based on the text in the df2['Site'] column matching part of df1['Hostname'].</p>
<p>final data frame needs to be like this:</p>
<pre><code>Hostname Region Model Site City State
ServerABC101 US Cisco. ABC NYC NY
ServerABC102 US Cisco. ABC NYC NY
ServerDDC103 PAC Intel DDC DAL TX
ServerDDC609 Emea Intel DDC DAL TX
ServerDDC103 PAC Intel DDC DAL TX
ServerDDC609 Emea Intel DDC DAL TX
</code></pre>
<p>I am familiar with pd merge but the Site from df2 is only partial text from Hostname in df1.</p>
<pre><code>final=reduce(lambda s, y: pd.merge(x, y, on="Hostname", how=outer, [df1, df2])
</code></pre>
<p>Any ideas how I could do this in pandas?</p>
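A common pattern — a sketch against the sample data, assuming the site code is always the three capital letters immediately after `Server` — is to extract the code into its own column with `str.extract` and then do an ordinary merge:

```python
import pandas as pd

df1 = pd.DataFrame({'Hostname': ['ServerABC101', 'ServerABC102', 'ServerDDC103'],
                    'Region': ['US', 'US', 'PAC']})
df2 = pd.DataFrame({'Site': ['ABC', 'DDC'],
                    'City': ['NYC', 'DAL'],
                    'State': ['NY', 'TX']})

# Pull the embedded site code out of Hostname, then merge on it.
df1['Site'] = df1['Hostname'].str.extract(r'Server([A-Z]{3})', expand=False)
final = df1.merge(df2, on='Site', how='left')
print(final)
```

If the hostnames are less regular, the regex is the part to adapt; `expand=False` keeps the extraction as a Series so it can be assigned straight into a column.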
|
<python><pandas><dataframe><merge>
|
2023-12-07 22:32:54
| 3
| 10,714
|
user1471980
|
77,623,335
| 9,092,669
|
Why am I getting a column ambiguity error in my PySpark query?
|
<p>I am trying to create a column that identifies which columns are updated as part of a change data feed (I followed this stackoverflow: <a href="https://stackoverflow.com/questions/60279160/compare-two-dataframes-pyspark">Compare two dataframes Pyspark</a>), but when I reference my own tables, I get this error:</p>
<pre><code>AnalysisException: Column _commit_version#203599L, subscribe_status#203595, _change_type#203598, _commit_timestamp#203600, subscribe_dt#203596, end_sub_dt#203597 are ambiguous. It's probably because you joined several Datasets together, and some of these Datasets are the same. This column points to one of the Datasets but Spark is unable to figure out which one. Please alias the Datasets with different names via `Dataset.as` before joining them, and specify the column using qualified name, e.g. `df.as("a").join(df.as("b"), $"a.id" > $"b.id")`. You can also set spark.sql.analyzer.failAmbiguousSelfJoin to false to disable this check.
</code></pre>
<p>This is my code:</p>
<pre><code>dfX = df1.filter(df1['_change_type'] == 'update_preimage')
dfY = df1.filter(df1['_change_type'] == 'update_postimage')

dfX.show()
dfY.show()

from pyspark.sql.functions import col, array, lit, when, array_remove

# get conditions for all columns except the id columns
# conditions_ = [when(dfX[c] != dfY[c], lit(c)).otherwise("") for c in dfX.columns if c not in ['external_id', '_change_type']]

select_expr = [
    col("external_id"),
    *[dfY[c] for c in dfY.columns if c != 'external_id'],
    # array_remove(array(*conditions_), "").alias("column_names")
]

print(select_expr)
dfX.join(dfY, "external_id").select(*select_expr).show()
</code></pre>
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2023-12-07 22:08:35
| 1
| 395
|
buttermilk
|
77,623,320
| 9,078,185
|
Kivy 2.2.1 changed sizing of existing app
|
<p>I just upgraded <code>kivy</code> to version 2.2.1. It caused my existing applications to suddenly become about 1/4 the size they were before. I've looked at the config file and there's nothing obviously different, and the release notes also don't appear to mention anything about this. I.e., the image below is how one of my apps looks now. Prior to this update it filled the area in the screenshot. Operating system is Windows 11. What could be causing this?
<a href="https://i.sstatic.net/k3mM1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k3mM1.png" alt="enter image description here" /></a></p>
|
<python><kivy>
|
2023-12-07 22:05:41
| 1
| 1,063
|
Tom
|
77,623,271
| 2,274,226
|
How to install specific package from specific repo? (requirements.txt)
|
<p>Say I have the following packages in requirements.txt:</p>
<pre><code>abc
def
ghj
</code></pre>
<p>I would like to install only ghj from a specific repo. Is that possible?</p>
<p>If I specify <code>--extra-index-url &lt;link&gt;</code> in requirements.txt, then pip installs ghj from that specific repo, but it can also install abc or def from it.</p>
<p>Basically I have two repos, A and B. Only B hosts ghj, but B also can host abc and def. But I want abc and def to come from A only.</p>
<p>A is standard repo. B is private repo (but I cannot control what is installed there).</p>
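pip has no per-package index pinning inside a single plain requirements file, so a common workaround is to split the requirements into two files and install each against one index (the repo-B URL below is a placeholder):

```text
# requirements.txt — resolved from repo A (the default index)
abc
def

# requirements-private.txt — resolved only from repo B
--index-url https://repo-b.example.com/simple
ghj
```

Then install in two invocations: `pip install -r requirements.txt` followed by `pip install -r requirements-private.txt`. Using `--index-url` (rather than `--extra-index-url`) in the second file makes pip consult only repo B for that install.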
|
<python><requirements.txt>
|
2023-12-07 21:54:58
| 1
| 422
|
sakura
|
77,623,153
| 12,114,641
|
Python3 ModuleNotFoundError: No module named 'dotenv'
|
<p>I'm getting <code>ModuleNotFoundError: No module named 'dotenv'</code> error. I did install <code>python-dotenv</code> as suggested in other posts but it still gives error. How to fix it?</p>
<p><strong>My code</strong></p>
<pre><code>import os
import json
import openai
import pandas as pd
from dotenv import load_dotenv
load_dotenv()
...
</code></pre>
<p><strong>Error</strong></p>
<pre><code>(.venv) ~/ROOT/Python/finetunechat/openai $ pip list | grep dotenv
python-dotenv 0.21.1
(.venv) ~/ROOT/Python/finetunechat/openai $ python 1.py
Traceback (most recent call last):
File "1.py", line 7, in <module>
from dotenv import load_dotenv
ModuleNotFoundError: No module named 'dotenv'
(.venv) ~/ROOT/Python/finetunechat/openai $
</code></pre>
|
<python><python-3.x><dotenv><python-dotenv>
|
2023-12-07 21:25:50
| 1
| 1,258
|
Raymond
|
77,623,089
| 3,732,793
|
pyvis.network shows html which is not 100 percent of the page
|
<p>With this simple code the output is as expected, but it only fills about 30% of the browser window.</p>
<pre><code>import networkx as nx
from pyvis.network import Network
G = nx.Graph()
G.add_node(1)
G.add_node(2)
G.add_node(3)
G.add_edge(1, 2)
G.add_edge(1, 3)
net = Network(height="100%", width="100%", directed=True, notebook=True)
net.from_nx(G)
net.show("simple.html")
</code></pre>
<p>Is there a better parameter to really use 100% of the browser?</p>
<p>Or is there a better library to actually achieve something like that with Python?</p>
<p><a href="https://i.sstatic.net/50fUu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/50fUu.png" alt="enter image description here" /></a></p>
|
<python><graph><pyvis>
|
2023-12-07 21:11:10
| 1
| 1,990
|
user3732793
|
77,623,047
| 5,013,066
|
Why doesn't this boto3 python script type check?
|
<p>I'm using the <code>boto3-stubs</code> package to type check my boto3 calls. Specifically, I'm using <a href="https://youtype.github.io/boto3_stubs_docs/mypy_boto3_secretsmanager/client/#batch_get_secret_value" rel="nofollow noreferrer"><code>batch_get_secret_value</code></a> like so:</p>
<pre class="lang-py prettyprint-override"><code> secrets: list['mypy_boto3_secretsmanager.type_defs.SecretValueEntryTypeDef'] = \
client.batch_get_secret_value(
SecretIdList=raw_config['secrets']
)['SecretValues']
</code></pre>
<p>This appears to me to be correct code. <code>batch_get_secret_value</code> is a real method of the client and <code>SecretValueEntryTypeDef</code> is a real type in the stubs package.</p>
<p>However, when I type check with mypy, I get the following two errors:</p>
<pre><code>$ python -m mypy --strict .
main.py:15: error: Name "mypy_boto3_secretsmanager.type_defs.SecretValueEntryTypeDef" is not defined [name-defined]
main.py:16: error: "SecretsManagerClient" has no attribute "batch_get_secret_value"; maybe "get_secret_value"?
[attr-defined]
Found 2 errors in 1 file (checked 1 source file)
</code></pre>
<p><code>boto3-stubs</code> is up to date on my machine. I've tried searching but I've found not a lot of people use the stubs.</p>
<p>In summary: why am I getting these type checking errors?</p>
<p>(full code for completeness:)</p>
<pre class="lang-py prettyprint-override"><code>import boto3
import json
import sys
import typing
if typing.TYPE_CHECKING:
import mypy_boto3_secretsmanager.client
import mypy_boto3_secretsmanager.type_defs
def main() -> None:
client: 'mypy_boto3_secretsmanager.client.SecretsManagerClient' \
= boto3.client('secretsmanager')
with open(sys.argv[1]) as file:
raw_config: dict[str, typing.Any] = json.load(file)
secrets: list['mypy_boto3_secretsmanager.type_defs.SecretValueEntryTypeDef'] = \
client.batch_get_secret_value(
SecretIdList=raw_config['secrets']
)['SecretValues']
return
main()
</code></pre>
|
<python><amazon-web-services><boto3><aws-secrets-manager>
|
2023-12-07 21:01:48
| 0
| 839
|
Eleanor Holley
|
77,622,879
| 2,494,795
|
Unable to Load Large Datasets to Azure Search Index
|
<p>I am trying to load around 30,000 records in an Azure Cognitive Search index. The source data comes from a Pandas dataframe, and I am attempting to upload it as follows:</p>
<pre><code> file = df.to_dict(orient="records")
# "Use SearchIndexingBufferedSender to upload the documents in batches optimized for indexing"
with SearchIndexingBufferedSender(
endpoint=service_endpoint,
index_name=index_name,
credential=credential,
) as batch_client:
# Add upload actions for all documents
batch_client.upload_documents(documents=file)
print(f"Uploaded {len(file)} documents in total to {index_name}")
</code></pre>
<p>If I upload up to 1500 documents (after index creation), they load without issues. If I try to load more than that (also after index creation), the index ends up containing only the first record.</p>
<p>I have noticed that, after uploading those 1500 documents, I can upload another new 1000 - 1500 documents manually (by running the same code), and the index will contain all 3000. However, if I try to upload a second round of more than 1500 documents, it will only upload one new document.</p>
<p>This leads me to think that the index will accept more than 1500 documents (meaning, it is not a problem with size) but the batch upload in the code above is not working correctly; it seems to not be buffering and batching.</p>
<p>I could run the same code manually until I get to those 30,000 records, but I would like to solve the problem.</p>
<p>Any help is greatly appreciated. Thanks!</p>
|
<python><azure><azure-cognitive-search>
|
2023-12-07 20:18:28
| 0
| 1,636
|
Irina
|
77,622,829
| 1,052,628
|
Transferring large table into one big file from snowflake to S3
|
<p>I have a Snowflake table which contains around 2 billion rows. I exported the table in CSV format to an S3 bucket using Spark, which created ~110 part files totalling around 300 GB of data.</p>
<p>Now I want this data in a single CSV file instead of those part files. I have tried repartitioning/coalescing the PySpark dataframe into one partition and then writing it as a single CSV file to S3, but that job never finishes.</p>
<pre><code>hdfs_block_size = 512 * 1024 * 1024 # 512MB
write_options = {
"compression": "snappy",
"parquet.block.size": str(hdfs_block_size)
}
number_of_partitions = 1
# df = df.repartition(int(number_of_partitions))
df = df.coalesce(int(number_of_partitions))
df.write.mode("overwrite").options(**write_options).csv(path='s3://{}:{}@{}'.format(access_key, secret_key, s3_path), compression='gzip', sep='\t', emptyValue='""', nullValue='""', header=True)
</code></pre>
<p>Is there any efficient way of doing this?</p>
<p>P. S.: I am using databricks for this task.</p>
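Funneling 300 GB through `coalesce(1)` forces the entire write onto a single task, which is why the job stalls. An alternative is to keep the fast parallel part-file write and concatenate afterwards; a local stdlib sketch of that step (names illustrative; it assumes uncompressed parts that each start with the same header — on S3 itself the analogue would be multipart upload-part-copy):

```python
import glob
import shutil

def concat_csv_parts(pattern: str, out_path: str) -> None:
    # Stream-concatenate CSV part files, keeping only the first header.
    parts = sorted(glob.glob(pattern))
    with open(out_path, 'wb') as out:
        for i, part in enumerate(parts):
            with open(part, 'rb') as src:
                if i > 0:
                    src.readline()  # skip the repeated header line
                shutil.copyfileobj(src, out)
```

Streaming with `copyfileobj` keeps memory use flat regardless of total size, which a single-partition Spark write cannot do.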
|
<python><csv><amazon-s3><pyspark><databricks>
|
2023-12-07 20:05:11
| 0
| 636
|
Rafiul Sabbir
|
77,622,632
| 13,438,431
|
Telegram Bot: a set of characters break out of HTML escape
|
<p>I have a game telegram bot which uses first name - last name pairs to spell out a top chart of users in a chat by their score. Screenshot example below:</p>
<p><a href="https://i.sstatic.net/eH5LV.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eH5LV.jpg" alt="normal markup" /></a></p>
<p>So, every user has a link to them. The actual code to generate a link:</p>
<pre><code>from html import escape as html_escape
EscapeType = typing.Literal['html']
def escape_string(s: str, escape: EscapeType | None = None) -> str:
if escape == 'html':
s = html_escape(s)
elif escape is None:
pass
else:
raise NotImplementedError(escape)
return s
def getter(d):
if isinstance(d, User):
return lambda attr: getattr(d, attr, None)
elif hasattr(d, '__getitem__') and hasattr(d, 'get'):
return lambda attr: d.get(attr, None)
else:
return lambda attr: getattr(d, attr, None)
def personal_appeal(user: User | dict, escape: EscapeType | None = 'html') -> str:
get = getter(user)
if full_name := get("full_name"):
appeal = full_name
elif name := get("name"):
appeal = name
elif first_name := get("first_name"):
if last_name := get("last_name"):
appeal = f"{first_name} {last_name}"
else:
appeal = first_name
elif username := get('username'):
appeal = username
else:
raise ValueError(user)
return escape_string(appeal, escape)
def user_mention(id: int | User, name: str | None = None, escape: EscapeType | None = 'html') -> str:
if isinstance(id, User):
user = id
id = user.id
name = personal_appeal(user)
name = escape_string(name, escape=escape)
if name is None:
name = "N/A"
if id is not None:
return f'<a href="tg://user?id={id}">{name}</a>'
else:
return name
</code></pre>
<p>Basically, this code generates a link from a user name - user ID pair. As you can see, the name is HTML escaped by default.</p>
<p>There is, however, one user who somehow breaks this code with their unusual first name. Here is the actual sequence of characters they use:</p>
<pre><code>'$̴̢̛̙͈͚̎̓͆͑.̸̱̖͑͒ ̧̡͉̺̬͎̯.̸̧̢̠̺̮̬͙͛̓̀̐́.̵̦͑̉͌͌̎͘ ̞ ̷̡͈̤̓̀͋͗͊̈́̑̽͝'
</code></pre>
<p>Screenshot of the result of the same code run against this first name:</p>
<p><a href="https://i.sstatic.net/2ikcn.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ikcn.jpg" alt="bad markup" /></a></p>
<p>As you can see, telegram seems to be lost in the markup. The link escapes onto other unrelated characters, and the <code><b></code> tag is broken, too.</p>
<p>This is the actual string which is being sent to the telegram servers (except for the ids, those I redacted out):</p>
<pre><code>🔝🏆 <u>Рейтинг игроков чата</u>:
🥇 1. <a href="tg://user?id=1">andy alexanderson</a> (<b>40</b>)
🥈 2. <a href="tg://user?id=2">$̴̢̛̙͈͚̎̓͆͑.̸̱̖͑͒ ̧̡͉̺̬͎̯.̸̧̢̠̺̮̬͙͛̓̀̐́.̵̦͑̉͌͌̎͘ ̞ ̷̡͈̤̓̀͋͗͊̈́̑̽͝</a> (<b>40</b>)
🤡 3. <a href="tg://user?id=3">: )</a> (<b>0</b>)
⏱️ <i>Рейтинг составлен 1 минуту назад</i>.
⏭️ <i>Следующее обновление через 28 минут</i>.
</code></pre>
<p>Seems like the only odd thing in this markup is the nickname, though.</p>
<p>Is this a Telegram bug?</p>
<p>Can something be done to mitigate this, so that my users wouldn't be able to escape the HTML markup? <b>I am willing to sacrifice the correctness of their name representation</b> (due to the fact that such users willingly obfuscate their names), but <b>I need to somehow be able to tell apart something which would break the markup</b>.</p>
<p><b>Or maybe there is some UTF-16 <-> UTF-8 encoding stuff going on that I'm missing out on?</b></p>
<p>Framework used: <code>python-telegram-bot</code>.
Python version: <code>3.10.12</code>.</p>
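One possible mitigation (my own sketch, not something from the framework docs): strip combining marks and invisible control/format characters, which are exactly what these "Zalgo" names are built from, before HTML-escaping. `sanitize_name` is a hypothetical helper name; it sacrifices name fidelity, as the question allows.

```python
import html
import unicodedata

def sanitize_name(name: str, fallback: str = "N/A") -> str:
    # Combining marks (Mn/Mc/Me) and control/format chars (Cc/Cf) are what
    # "Zalgo" names are made of; dropping them keeps only base characters.
    cleaned = "".join(
        ch for ch in name
        if unicodedata.category(ch) not in ("Mn", "Mc", "Me", "Cc", "Cf")
    ).strip()
    return html.escape(cleaned) if cleaned else fallback
```

The result is then safe to embed inside the `<a href="tg://user?id=...">` link text.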
|
<python><utf-8><escaping><python-telegram-bot><html-escape-characters>
|
2023-12-07 19:21:58
| 2
| 2,104
|
winwin
|
77,622,623
| 5,879,640
|
Python packaging with data directories - Clean uninstall
|
<p>I am writing a Python package that relies on some expensive to generate and large data file in order to properly work.</p>
<p>This large file only needs to be generated once, so I have configured the package's <code>__init__.py</code> to generate this file upon import. This file could also be generated at any other point (as soon as it's needed). The point is that this file is not part of the package as it would be too large to distribute.</p>
<p>The file is saved into the package directory, using code such as:</p>
<pre><code>data_dir = os.path.join(os.path.dirname(__file__), "data")
# save file to directory
</code></pre>
<p>which would point to some directory such as: <code>/Users/me/anaconda3/envs/my-package/lib/python3.11/site-packages/my_package</code>. It's assumed the user has write permission to the python package directory (should be using a virtual environment).</p>
<p>I want users to be able to install and uninstall the package using pip. The data files should be deleted on uninstall since they take too much space.</p>
<p>Even though the data files are inside the package directory (this directly gets removed in a regular uninstall), pip is raising a warning about files inside the package directory not belonging to the package and it will not remove it.</p>
<pre><code>❯ pip uninstall my-package
Found existing installation: my_package 0.0.1
Uninstalling my_package-0.0.1:
Would remove:
/Users/me/anaconda3/envs/my-package/lib/python3.11/site-packages/my_package-0.0.1.dist-info/*
/Users/me/anaconda3/envs/my-package/lib/python3.11/site-packages/my_package/*
Would not remove (might be manually added):
/Users/me/anaconda3/envs/my-package/lib/python3.11/site-packages/my_package/data/file
Proceed (Y/n)?
</code></pre>
<p>Running the uninstall will then remove all files except these "manually added" ones.</p>
<p>I guess this could be solved by marking these files as not manually added, so pip knows it can remove them, but I don't know if this is possible. Another solution would be to automatically run some script just-before or after uninstall.</p>
<p>I could not find any satisfactory solution to this problem online so I am probably missing something, what is the established solution for this problem? Is this achievable by modifying the <code>pyproject.toml</code> file?</p>
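One common workaround (a sketch, not an established packaging mechanism) is to not write into site-packages at all: put generated data in a per-user cache directory, so pip never has to account for it. The `data_dir` helper below is hypothetical; the third-party `platformdirs` package handles Windows/macOS conventions more portably.

```python
import os
from pathlib import Path

def data_dir(app_name: str = "my_package") -> Path:
    # Per-user cache location; honours XDG_CACHE_HOME on Linux and falls
    # back to ~/.cache elsewhere.
    base = os.environ.get("XDG_CACHE_HOME", str(Path.home() / ".cache"))
    d = Path(base) / app_name
    d.mkdir(parents=True, exist_ok=True)
    return d
```

The package then regenerates the file on demand into this directory, and uninstalling leaves at most a cache folder the user can delete.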
|
<python><installation><pip><uninstallation><python-packaging>
|
2023-12-07 19:20:01
| 0
| 471
|
Trantidon
|
77,622,607
| 2,444,251
|
How to run flask server + web socket server
|
<p>I have two pieces of blocking code and I need to run both, but running either one blocks the other. Both work perfectly on their own. If I could avoid third-party libraries, that would be great.</p>
<pre><code>import socket
import logging
import requests
from threading import Thread
from . import ARI_URL, ARI_USERNAME, ARI_PASSWORD, APPLICATION
from .transcription import Transcriber, MULAW
from flask import Flask
app = Flask(__name__)
logging.basicConfig(level=logging.DEBUG)
LISTEN_ADDRESS = '127.0.0.1'
LISTEN_PORT = 12222
transcriber = Transcriber(language='es-ES', codec=MULAW, sample_rate=8000)
def serve(transcriber):
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((LISTEN_ADDRESS, LISTEN_PORT))
# THIS MUST RUN
while True:
data, _ = sock.recvfrom(1024)
payload = data[12:]
transcriber.push(payload)
# THIS MUST RUN TOO
app.run()
def main():
transcriber.start()
try:
serve(transcriber)
except KeyboardInterrupt:
pass
finally:
transcriber.stop()
@app.route('/silencio', methods=['POST'])
def silencio():
transcriber.on_silence()
return {}
</code></pre>
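One stdlib-only pattern (a sketch, not tested against the real app): run the blocking UDP loop in a daemon thread and keep `app.run()` in the main thread. The `ready` queue below is just a hypothetical way to report the bound port back to the caller.

```python
import queue
import socket
import threading

received = queue.Queue()  # stands in for transcriber.push() here

def serve_udp(host="127.0.0.1", port=0, ready=None):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    if ready is not None:
        ready.put(sock.getsockname()[1])  # report the actual bound port
    while True:
        data, _ = sock.recvfrom(1024)
        received.put(data[12:])  # strip the 12-byte RTP header, as in the question

# In the real app, main() would then do:
#   threading.Thread(target=serve_udp, daemon=True).start()
#   app.run()   # Flask stays in the main thread
```

Because the thread is a daemon, it dies with the process when Flask shuts down; no third-party library is needed.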
|
<python><rest><sockets>
|
2023-12-07 19:18:25
| 1
| 592
|
Bob Lozano
|
77,622,496
| 3,274,299
|
Pytorch unexpected behaviour after fork
|
<p>I have an application with PyTorch and Gunicorn.
First, I build models (torch Modules) in the Gunicorn master worker; then Gunicorn forks the process, creates workers and serves requests. All the calculations are done on the <code>CPU</code>.</p>
<p>The new feature is parallel computation per request, so I fork the Gunicorn worker processes one more time in order to achieve parallelism at the Gunicorn worker level.
That's how I create new processes (as I read, fork is the default method when using <code>.start()</code>):</p>
<pre><code>class Worker(multiprocessing.get_context('fork').Process):
</code></pre>
<p>It is used this way:</p>
<pre><code>Worker().start()
</code></pre>
<p>Inside new process I access computation pipeline via global variable.</p>
<p>As I read in documentation after <code>fork</code> torch internal state should be kept same if you don't modify it, because of page cache and <code>fork</code> nature (and it is).</p>
<p>But. Sometimes, in the docker environment, weird behavior occurs: the data in the state_dict is incorrect (different from the parent process), and as a result the output vector is incorrect. For example, the expected vector is <code>[0.0001, 0.3, 0.1]</code>, but I get <code>[112, 4, 43]</code>; the difference is visible to the naked eye. It happens only in a docker environment and is a very rare case, which is why I'm having difficulties fixing it. Maybe one in every 30 starts hits this bug. I wasn't able to create a minimal code example that reproduces it.</p>
<p>What I tried:</p>
<ol>
<li><a href="https://github.com/pytorch/pytorch/blob/a6736ac8518aff7bb88aa0454ba0de43e49a57b3/torch/nn/modules/module.py#L2465" rel="nofollow noreferrer">https://github.com/pytorch/pytorch/blob/a6736ac8518aff7bb88aa0454ba0de43e49a57b3/torch/nn/modules/module.py#L2465</a> Calling this method <code>share_memory()</code> in parent process (Master worker).</li>
<li><code>torch.multiprocessing.set_start_method('fork', force=True)</code> set the start method explicitly.</li>
<li><code>multiprocessing.set_start_method('fork', force=True)</code> set the start method explicitly.</li>
<li>Increasing shared memory in docker: <code>--shm-size=4G</code></li>
<li><code>torch.multiprocessing.set_sharing_strategy(new_strategy)</code> trying both <code>file_system</code> and <code>file_descriptor</code> strategies (before I increased the allowed number of open FD per process).</li>
</ol>
<p>What I haven't tried</p>
<ol>
<li>Sending addresses of shared state via Queue to child processes as it is shown here <a href="https://pytorch.org/docs/stable/multiprocessing.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/multiprocessing.html</a> in CUDA section. (Because I don't use GPU yet I expect <code>fork</code> is enough)</li>
<li>Calling share_memory() method in child processes.</li>
</ol>
<p><code>torch.set_num_threads()</code> - is set to <code>1</code> in my App.</p>
<p>I have to use a <code>fork</code> for the sake of saving RAM. I do not run out of RAM when this behavior occurs.</p>
<p>Keep in mind that everything was working correctly when I just had the Gunicorn workers; internally Gunicorn uses the <code>os.fork()</code> method for forking. But I checked that <code>Process.start()</code> and <code>os.fork()</code> behave identically here. Memory usage has not increased, which is why I'm sure that the processes are forked correctly.</p>
<p>I don't modify/update any data after fork, so it is impossible that I corrupted torch state.</p>
<p>I have read a lot of warnings, like: If you want PyTorch to work properly, just don't fork processes. Unfortunately it is not an option for me.</p>
<p>I use Ubuntu 20 in docker, Python 3.9, torch version is 1.8.1. Pytorch state is loaded from a git lfs via load_state_dict().</p>
<p>Please, if you have met this behavior, share any insights with me.</p>
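One diagnostic that can help narrow this down (my own suggestion, not from the PyTorch docs): hash the parameters in the parent right before forking and again in the child, and log both, so a divergence shows up immediately. For torch tensors you would pass something like `{k: v.numpy() for k, v in module.state_dict().items()}`; the sketch below only assumes values supporting the buffer protocol.

```python
import hashlib

def fingerprint(state_dict) -> str:
    # Hash every named parameter buffer in a deterministic order; identical
    # dicts hash identically, so a parent/child mismatch after fork is obvious.
    h = hashlib.sha256()
    for name in sorted(state_dict):
        h.update(name.encode())
        h.update(memoryview(state_dict[name]).cast("B"))
    return h.hexdigest()
```

Comparing the two hex digests in the logs tells you whether the tensors themselves diverge after the fork or whether the corruption happens later in the pipeline.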
|
<python><pytorch><fork>
|
2023-12-07 18:55:16
| 0
| 695
|
Tipok
|
77,622,468
| 4,363,045
|
Is it possible to filter Django objects by a list of dictionaries?
|
<p>Let's say I have a queryset of django objects i.e. <code>Blog</code> with fields <code>id</code>, <code>hits</code>, <code>title</code> and also a list of dictionaries which represent those objects:</p>
<pre><code>blog_list = [{'id': 1, 'hits': 30, 'title': 'cat'}, {'id': 2, 'hits': 50, 'title': 'dog'}, {'id': 3, 'hits': 30, 'title': 'cow'}]
</code></pre>
<p>One of the Django objects has a value different from the one specified in the list of dictionaries.
For example, I have a blog instance with <code>id = 1</code>, <code>hits = 30</code>, but <code>title = 'new cat'</code>.</p>
<p>Right now I do:</p>
<pre><code>for blog in queryset:
for entry in blog_list:
if blog.id == entry['id'] and blog.title != entry['title']:
print(f'It is the blog entry with id {blog.id}')
</code></pre>
<p>But it looks suboptimal and too complicated. Is there a smart way to find such object?</p>
<p><strong>UPD:</strong> @JohnSG gave me an interesting idea, so now I do:</p>
<pre><code>for blog in queryset:
if {'id': blog.id, 'hits': blog.hits, 'title': blog.title} not in blog_list:
print(f'{blog.id} is missing')
</code></pre>
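For the in-Python comparison, indexing the expected entries by id once avoids the nested loop entirely. A sketch (`find_changed` is a made-up helper; it works on any iterable of objects with `id` and `title` attributes, which a Django queryset is):

```python
from types import SimpleNamespace

def find_changed(blogs, entries):
    # Build an id -> dict index once, then each comparison is an O(1) lookup.
    expected = {e["id"]: e for e in entries}
    return [
        b.id for b in blogs
        if b.id in expected and b.title != expected[b.id]["title"]
    ]
```

With Django you could also push the check into the database by OR-ing together one `Q(id=..., title=...)` per entry and excluding the matches, but for a handful of rows the dict lookup is the simplest option.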
|
<python><django><orm>
|
2023-12-07 18:51:44
| 1
| 1,592
|
wasd
|
77,622,372
| 3,130,499
|
SQLAlchemy 2.0 query to get most recent (by date) visit for each subject
|
<p>I have the following models in my database:</p>
<pre><code>class Subject(Base):
__tablename__ = 'subjects'
id: Mapped[int] = mapped_column(primary_key=True)
first_name: Mapped[str] = mapped_column(String(60), nullable=False)
last_name: Mapped[str] = mapped_column(String(60), nullable=False)
visits: Mapped[list['Visit']] = relationship(cascade='all, delete-orphan',
back_populates='subject')
class Visit(Base):
__tablename__ = 'visits'
id: Mapped[int] = mapped_column(Integer, primary_key=True)
date: Mapped[str] = mapped_column(DateTime, nullable=False)
amount_spent: Mapped[int] = mapped_column(Integer, nullable=False)
units: Mapped[str] = mapped_column(String, nullable=False)
subject_id: Mapped[int] = mapped_column(Integer, ForeignKey('subjects.id'), index=True)
subject: Mapped['Subject'] = relationship(back_populates='visits')
</code></pre>
<p>I now want to write a query which returns the most recent visit for each subject in the database.</p>
<p>This is what I have so far:</p>
<pre><code>q = select(Visit) \
.join(Subject.visits) \
.order_by(Visit.date.desc()).limit(1)
</code></pre>
<p>but running this query results in a <code>DetachedInstanceError</code> error message when I try to print/access the result.</p>
<p>EDIT: This is the code I run:</p>
<pre><code>q = select(Visit) \
.join(Subject.visits) \
.order_by(Visit.date.desc()).limit(1)
r = session.scalars(q).all()
r[0]
</code></pre>
<p>Here is the full error message:</p>
<pre><code>DetachedInstanceError Traceback (most recent call last)
File ~\Anaconda3\envs\cam\Lib\site-packages\IPython\core\formatters.py:708, in PlainTextFormatter.__call__(self, obj)
701 stream = StringIO()
702 printer = pretty.RepresentationPrinter(stream, self.verbose,
703 self.max_width, self.newline,
704 max_seq_length=self.max_seq_length,
705 singleton_pprinters=self.singleton_printers,
706 type_pprinters=self.type_printers,
707 deferred_pprinters=self.deferred_printers)
--> 708 printer.pretty(obj)
709 printer.flush()
710 return stream.getvalue()
File ~\Anaconda3\envs\cam\Lib\site-packages\IPython\lib\pretty.py:410, in RepresentationPrinter.pretty(self, obj)
407 return meth(obj, self, cycle)
408 if cls is not object \
409 and callable(cls.__dict__.get('__repr__')):
--> 410 return _repr_pprint(obj, self, cycle)
412 return _default_pprint(obj, self, cycle)
413 finally:
File ~\Anaconda3\envs\cam\Lib\site-packages\IPython\lib\pretty.py:778, in _repr_pprint(obj, p, cycle)
776 """A pprint that just redirects to the normal repr function."""
777 # Find newlines and replace them with p.break_()
--> 778 output = repr(obj)
779 lines = output.splitlines()
780 with p.group():
File ~\Anaconda3\envs\cam\Lib\site-packages\sqlalchemy\engine\row.py:245, in Row.__repr__(self)
244 def __repr__(self) -> str:
--> 245 return repr(sql_util._repr_row(self))
File ~\Anaconda3\envs\cam\Lib\site-packages\sqlalchemy\sql\util.py:598, in _repr_row.__repr__(self)
595 def __repr__(self) -> str:
596 trunc = self.trunc
597 return "(%s%s)" % (
--> 598 ", ".join(trunc(value) for value in self.row),
599 "," if len(self.row) == 1 else "",
600 )
File ~\Anaconda3\envs\cam\Lib\site-packages\sqlalchemy\sql\util.py:598, in <genexpr>(.0)
595 def __repr__(self) -> str:
596 trunc = self.trunc
597 return "(%s%s)" % (
--> 598 ", ".join(trunc(value) for value in self.row),
599 "," if len(self.row) == 1 else "",
600 )
File ~\Anaconda3\envs\cam\Lib\site-packages\sqlalchemy\sql\util.py:565, in _repr_base.trunc(self, value)
564 def trunc(self, value: Any) -> str:
--> 565 rep = repr(value)
566 lenrep = len(rep)
567 if lenrep > self.max_chars:
File models.py:93, in Visit.__repr__(self)
92 def __repr__(self):
---> 93 return f"{self.date.strftime('%Y-%m-%d')}"
File ~\Anaconda3\envs\cam\Lib\site-packages\sqlalchemy\orm\attributes.py:566, in InstrumentedAttribute.__get__(self, instance, owner)
564 except AttributeError as err:
565 raise orm_exc.UnmappedInstanceError(instance) from err
--> 566 return self.impl.get(state, dict_)
File ~\Anaconda3\envs\cam\Lib\site-packages\sqlalchemy\orm\attributes.py:1086, in AttributeImpl.get(self, state, dict_, passive)
1083 if not passive & CALLABLES_OK:
1084 return PASSIVE_NO_RESULT
-> 1086 value = self._fire_loader_callables(state, key, passive)
1088 if value is PASSIVE_NO_RESULT or value is NO_VALUE:
1089 return value
File ~\Anaconda3\envs\cam\Lib\site-packages\sqlalchemy\orm\attributes.py:1116, in AttributeImpl._fire_loader_callables(self, state, key, passive)
1108 def _fire_loader_callables(
1109 self, state: InstanceState[Any], key: str, passive: PassiveFlag
1110 ) -> Any:
1111 if (
1112 self.accepts_scalar_loader
1113 and self.load_on_unexpire
1114 and key in state.expired_attributes
1115 ):
-> 1116 return state._load_expired(state, passive)
1117 elif key in state.callables:
1118 callable_ = state.callables[key]
File ~\Anaconda3\envs\cam\Lib\site-packages\sqlalchemy\orm\state.py:798, in InstanceState._load_expired(self, state, passive)
791 toload = self.expired_attributes.intersection(self.unmodified)
792 toload = toload.difference(
793 attr
794 for attr in toload
795 if not self.manager[attr].impl.load_on_unexpire
796 )
--> 798 self.manager.expired_attribute_loader(self, toload, passive)
800 # if the loader failed, or this
801 # instance state didn't have an identity,
802 # the attributes still might be in the callables
803 # dict. ensure they are removed.
804 self.expired_attributes.clear()
File ~\Anaconda3\envs\cam\Lib\site-packages\sqlalchemy\orm\loading.py:1582, in load_scalar_attributes(mapper, state, attribute_names, passive)
1580 session = state.session
1581 if not session:
-> 1582 raise orm_exc.DetachedInstanceError(
1583 "Instance %s is not bound to a Session; "
1584 "attribute refresh operation cannot proceed" % (state_str(state))
1585 )
1587 no_autoflush = bool(passive & attributes.NO_AUTOFLUSH)
1589 # in the case of inheritance, particularly concrete and abstract
1590 # concrete inheritance, the class manager might have some keys
1591 # of attributes on the superclass that we didn't actually map.
1592 # These could be mapped as "concrete, don't load" or could be completely
1593 # excluded from the mapping and we know nothing about them. Filter them
1594 # here to prevent them from coming through.
DetachedInstanceError: Instance <Visit at 0x166c3b74250> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: https://sqlalche.me/e/20/bhk3)
</code></pre>
|
<python><python-3.x><sqlite><sqlalchemy>
|
2023-12-07 18:34:56
| 2
| 692
|
derNincompoop
|
77,622,331
| 1,889,297
|
Anaconda Navigator won't launch (windows 11)
|
<p>I can't open Anaconda Navigator and Spyder from the drop down list (see image). No error appears.<br />
No issue when I open Spyder and Navigator from Anaconda Prompt (CLI).
Digging into the code I see that the drop down shortcut is running <code>C:\ProgramData\anaconda3\pythonw.exe C:\ProgramData\anaconda3\cwp.py C:\ProgramData\anaconda3 C:\ProgramData\anaconda3\pythonw.exe C:\ProgramData\anaconda3\Scripts\anaconda-navigator-script.py</code><br />
In <code>cwp.py</code> I get</p>
<pre><code>Traceback (most recent call last):
File "C:\ProgramData\anaconda3\cwp.py", line 10, in <module>
from menuinst._legacy.knownfolders import FOLDERID, get_folder_path
ModuleNotFoundError: No module named 'menuinst._legacy.knownfolders'
</code></pre>
<p>I followed the instructions in <a href="https://stackoverflow.com/questions/46335789/anaconda-navigator-wont-launch-windows-10">Anaconda Navigator won't launch (windows 10)</a>, but nothing helped.</p>
<p><a href="https://i.sstatic.net/HVeJt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HVeJt.png" alt="enter image description here" /></a></p>
|
<python><anaconda><conda>
|
2023-12-07 18:27:32
| 1
| 504
|
user1889297
|
77,621,991
| 859,141
|
Pyside6 QAbstractTableModel - Replace Boolean value with centre aligned icon
|
<p>I have a class <em>GenericTableModel(QAbstractTableModel):</em> in which I am trying to replace Boolean <em><class 'numpy.bool_'></em> with icons. I have had it working using the following:</p>
<pre><code>if role == Qt.DecorationRole:
value = self.dataframe.iloc[index.row(), index.column()]
if hasattr(value, 'dtype') and value.dtype == bool:
if value:
return QIcon('static/icons/check-circle-fill.png')
return QIcon("static/icons/icon_x.png")
</code></pre>
<p>But they were left aligned and I'd like them centred.</p>
<p>If I change the role to <code>Qt.DisplayRole</code> then no icons are displayed, because the value type at that point is always str. I guess this is because I am misunderstanding the role.</p>
<p>I have added the following from <a href="https://stackoverflow.com/questions/63177587/pyqt-tableview-align-icons-to-center/63178044#63178044">this answer</a> and been able to set the icon size but not the alignment, as it is not clearly specified in the answer given.</p>
<pre><code>class IconDelegate(QStyledItemDelegate):
def initStyleOption(self, option, index):
super().initStyleOption(option, index)
option.decorationSize = QSize(16, 16) # option.rect.size()
</code></pre>
<p>Looking at the docs I'm expecting maybe a displayAlignment or textAlignment but they don't appear to be valid.</p>
<p>I suspect it's based on a two-pronged misunderstanding. I think DecorationRole is always intended to decorate the left side of a cell, and if I can get the right role it might work?</p>
<p>Anyway, if someone could offer me some assistance I'd be grateful. Thanks</p>
|
<python><pyside><qabstracttablemodel>
|
2023-12-07 17:28:37
| 0
| 1,184
|
Byte Insight
|
77,621,912
| 15,547,292
|
Is there any library exporting APIs with different calling conventions?
|
<p>Is there any example of a library exporting functions with different calling conventions, e.g. <code>api_foo()</code> with <code>__cdecl</code> and <code>api_bar()</code> with <code>__stdcall</code>, both in the same public API namespace?</p>
<p>My questions are ...</p>
<ul>
<li>if this is possible in principle</li>
<li>if it also happens in practice, and in that case when/why you would use it</li>
</ul>
<p>I'm asking on behalf of a python ctypes wrapper generator, wondering if we need to support this case.</p>
|
<python><c><ctypes><calling-convention>
|
2023-12-07 17:13:22
| 0
| 2,520
|
mara004
|
77,621,662
| 11,328,614
|
Google protobuf, match expected message size versus bytes received from network
|
<p>In order to test receiving and deserializing protobuf messages sent from a server I would like to check the received network bytes vs the expected size of a serialized fully populated message. There are some intermediate black-box steps and I want to check that the server sends the correct serialized message upon a request.
I already assured that the received bytes from the network is actually a serialized message, but it is not clear if it is the correct one.</p>
<p>Admittedly, two different messages could have the same serialized size, but that is a corner case; handling it would be a further step that I don't want to include in this question.</p>
<p>I used <code>protoc</code> with the <code>--python_out dir</code> option to generate a <code>X_pb2.py</code> file containing the message definitions from <code>.proto</code> file.</p>
<p>I thought of the following possible (hacky) alternatives:</p>
<p>1.) Iterate over the <code>X_pb2.MY_Message().DESCRIPTOR.fields</code>, determine the field sizes from their individual sizes and sum them up to get the expected size of the fully populated message</p>
<p>2.) Iterate over the <code>X_pb2.MY_Message().DESCRIPTOR.fields</code>, determine the field types and initialize them randomly. Then just call <code>X_pb2.MY_Message().ByteSize()</code></p>
<p>3.) Initialize a <code>X_pb2.MY_Message()</code>, serialize it and then get the <code>ByteSize()</code></p>
<p>However, I stumbled over some problems:</p>
<p>1.) When trying this alternative, the calculated expected size of the serialized msg is smaller than the original msg, which is impossible. E.g. a message consisting of [int8_t, int32_t, uint32_t] has a serialized size of 5 whereas the sum of the datatype sizes is 9.
Code: <code>sum([x.GetOptions().ByteSize() for x in X_pb2.MY_Message().DESCRIPTOR.fields])</code></p>
<p>2.) I did not find a generic way to set the field values. It seems I can only set them via <code>MY_MESSAGE().fieldx = ...</code>, which is not suitable for generic access.</p>
<p>3.) Same as 2.)</p>
<p>Maybe you can help me find a solution so I can <code>assert expected_size == bytes_received_from_network</code>.</p>
|
<python><deserialization><protocol-buffers><descriptor><nanopb>
|
2023-12-07 16:36:34
| 0
| 1,132
|
Wör Du Schnaffzig
|
77,621,639
| 7,563,454
|
Python: Get Vector 3 direction from a Vector 3 rotation, eg: `0 0 45` = '0.5 0.5 0'
|
<p>I'm working on a Python program where I define my own vector 3D type. At least the way I imagined it, vectors can store both a position and rotation... if implementing quaternion rotations with 4 values is better I can attempt a <code>vec4</code> but I strongly prefer avoiding those since I understand them even less and they're much more cumbersome. Note that this is a general description of my <code>vec3</code> class, it can store either a position or rotation and objects can use an instance for each one (eg: <code>self.pos = vec3(4, 16, 0)</code> and <code>self.rot = vec3(0, 90, 180)</code>). What I have so far seems to work as intended and should need no further changes:</p>
<pre><code>class vec3:
def __init__(self, x: float, y: float, z: float):
self.x = x
self.y = y
self.z = z
    def rot(self, other):
self.x = (self.x + other.x) % 360
self.y = (self.y + other.y) % 360
self.z = (self.z + other.z) % 360
def dir(self):
# This is where the magic I'm missing must happen
return vec3(self.x / 360, self.y / 360, self.z / 360)
v = vec3(0, 90, 180)
# Result: 0 90 180
v.rot(vec3(-90, 45, 270))
# Result: 270 135 90
v.dir()
# Result: 0.75 0.375 0.25
# Correct range but doesn't represent direction
</code></pre>
<p>The issue I'm struggling with is how to get the real direction the vector is pointing toward, meaning the virtual arrow jutting out from the origin and pointing toward where this vector is looking: X is right, Y is forward, Z is up. Its rotation alone doesn't represent it: A rotation of <code>0 0 0</code> should be a direction of <code>0 1 0</code>, one of <code>0 0 45</code> (45* to the right) should probably translate as <code>0.5 0.5 0</code> (half of the forward direction is ceded to the right direction), if we add another 45* to look all the way to the right we have <code>0 0 90</code> which would point toward <code>1 0 0</code>. I don't mind if the directions are mixed up from how I'm imagining them, as long as this is done correctly and all possible combinations are predictable. What is the simplest conversion algorithm possible I can use in my <code>dir</code> function?</p>
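Under the stated convention (X right, Y forward, Z up; rotation `0 0 0` looks along +Y and yaw 90 looks along +X), this is the standard spherical-coordinate conversion; note that a 45° yaw then gives components of about 0.707, not 0.5, because the direction stays unit length rather than summing to 1. A sketch (`dir_from_euler` is a made-up name; it ignores roll about Y, which does not change where the arrow points):

```python
import math

def dir_from_euler(rx: float, rz: float) -> tuple:
    # rx = pitch about the X axis, rz = yaw about the Z axis, in degrees.
    pitch = math.radians(rx)
    yaw = math.radians(rz)
    return (
        math.sin(yaw) * math.cos(pitch),  # X: right
        math.cos(yaw) * math.cos(pitch),  # Y: forward
        math.sin(pitch),                  # Z: up
    )
```

The `dir` method of the `vec3` class could simply return `vec3(*dir_from_euler(self.x, self.z))`.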
|
<python><python-3.x><math><vector><rotation>
|
2023-12-07 16:32:35
| 2
| 1,161
|
MirceaKitsune
|
77,621,568
| 7,133,942
|
Plot barchart with matplotlib using different categories
|
<p>I have the following barchart in Matplotlib</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Categories and methods
categories = ['25 kWh', '50 kWh', '80 kWh']
methods = ['Optimal Control', 'PSC', 'PSC-ANN']
#Input data
improvement_25_kWh = [13.3, 4.1, 5.4]
improvement_50_kWh = [13.8, 6.3, 4.4]
improvement_80_kWh = [14.3, 8.5, 3.8]
bar_width = 0.2
bar_positions_25_kWh = np.arange(len(categories))
bar_positions_50_kWh = bar_positions_25_kWh + bar_width
bar_positions_80_kWh = bar_positions_50_kWh + bar_width
plt.figure(figsize=(12, 7))
bars_25_kWh = plt.bar(bar_positions_25_kWh, improvement_25_kWh, color='blue', width=bar_width, label='Optimal Control')
bars_50_kWh = plt.bar(bar_positions_50_kWh, improvement_50_kWh, color='orange', width=bar_width, label='PSC')
bars_80_kWh = plt.bar(bar_positions_80_kWh, improvement_80_kWh, color='green', width=bar_width, label='PSC-ANN')
plt.xlabel('Building types', fontsize=17)
plt.ylabel('Improvement (%)', fontsize=17)
plt.xticks(bar_positions_50_kWh, categories, fontsize=15)
plt.yticks(fontsize=15)
plt.legend(fontsize=17)
plt.savefig(r'C:\Users\User1\Desktop\Result_Percentage_Improvements.png', bbox_inches='tight', dpi=200)
plt.show()
</code></pre>
<p>The problem is that the values are plotted in the wrong order. What I want is to have 3 categories <code>['25 kWh', '50 kWh', '80 kWh']</code>, and for each category the values for the 3 methods <code>['Optimal Control', 'PSC', 'PSC-ANN']</code> should be plotted. The input data, e.g. <code>improvement_25_kWh = [13.3, 4.1, 5.4]</code>, always has the values in the order of the methods ['Optimal Control', 'PSC', 'PSC-ANN']. How can I do that?</p>
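One way to fix the grouping (a sketch, assuming the intent is three category groups with one bar per method): stack the per-category lists into a 2-D array and draw one `bar()` call per method using the array's columns, so each call paints one method across all categories.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt
import numpy as np

categories = ['25 kWh', '50 kWh', '80 kWh']
methods = ['Optimal Control', 'PSC', 'PSC-ANN']
# Row = category, column = method, matching the input lists in the question.
improvements = np.array([
    [13.3, 4.1, 5.4],
    [13.8, 6.3, 4.4],
    [14.3, 8.5, 3.8],
])

bar_width = 0.2
x = np.arange(len(categories))
fig, ax = plt.subplots(figsize=(12, 7))
for i, method in enumerate(methods):
    # Column i holds one method's value in every category group.
    ax.bar(x + i * bar_width, improvements[:, i], width=bar_width, label=method)
ax.set_xticks(x + bar_width)
ax.set_xticklabels(categories)
ax.set_ylabel('Improvement (%)')
ax.legend()
```

The first group then shows 13.3 / 4.1 / 5.4 in method order, the second 13.8 / 6.3 / 4.4, and so on.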
|
<python><matplotlib>
|
2023-12-07 16:20:55
| 1
| 902
|
PeterBe
|
77,621,536
| 16,988,223
|
Unable to install jax==0.4.5 from git repository
|
<p>I have Python 3.9 on my device.
When I run:</p>
<pre><code>pip install jax and jaxlib
</code></pre>
<p>this installs jax-0.4.21 and jaxlib-0.4.21. However, I need version 0.4.5 of both jax and jaxlib.</p>
<p>When I run:</p>
<pre><code>pip install jax==0.4.5 jaxlib==0.4.5
</code></pre>
<p>This is throwing this error:</p>
<pre><code> ERROR: Could not find a version that satisfies the requirement jaxlib==0.4.5 (from versions: 0.4.18, 0.4.19, 0.4.20, 0.4.21)
ERROR: No matching distribution found for jaxlib==0.4.5
</code></pre>
<p>I don't know why this happens. I then decided to install jax from the git URL, like this:</p>
<pre><code> pip install "git+https://github.com/google/jax.git@v0.4.5"
</code></pre>
<p>This is the version: <a href="https://github.com/google/jax/releases/tag/jax-v0.4.5" rel="nofollow noreferrer">https://github.com/google/jax/releases/tag/jax-v0.4.5</a></p>
<p>However, I got this error:</p>
<pre><code>Collecting git+https://github.com/google/jax.git@v0.4.5
Cloning https://github.com/google/jax.git (to revision v0.4.5) to /tmp/pip-req-build-_8dc4uf5
Running command git clone --filter=blob:none --quiet https://github.com/google/jax.git /tmp/pip-req-build-_8dc4uf5
WARNING: Did not find branch or tag 'v0.4.5', assuming revision or ref.
Running command git checkout -q v0.4.5
error: pathspec 'v0.4.5' did not match any file(s) known to git
error: subprocess-exited-with-error
× git checkout -q v0.4.5 did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× git checkout -q v0.4.5 did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>And when I change the tag like this (jax-v0.4.5):</p>
<pre><code>pip install "git+https://github.com/google/jax.git@jax-v0.4.5"
</code></pre>
<p>it printed this message:</p>
<pre><code>Collecting git+https://github.com/google/jax.git@jax-v0.4.5
Cloning https://github.com/google/jax.git (to revision jax-v0.4.5) to /tmp/pip-req-build-828jle88
Running command git clone --filter=blob:none --quiet https://github.com/google/jax.git /tmp/pip-req-build-828jle88
Running command git checkout -q cafaa50b25515a554568db06667b0c9b6abaff27
Resolved https://github.com/google/jax.git to commit cafaa50b25515a554568db06667b0c9b6abaff27
Preparing metadata (setup.py) ... done
Requirement already satisfied: numpy>=1.20 in ./.myenv/lib/python3.9/site-packages (from jax==0.4.5) (1.26.2)
Requirement already satisfied: opt_einsum in ./.myenv/lib/python3.9/site-packages (from jax==0.4.5) (3.3.0)
Requirement already satisfied: scipy>=1.5 in ./.myenv/lib/python3.9/site-packages (from jax==0.4.5) (1.11.4)
Building wheels for collected packages: jax
Building wheel for jax (setup.py) ... done
Created wheel for jax: filename=jax-0.4.5-py3-none-any.whl size=1424415 sha256=aff60a0ed1dea1d004cb5748ed322b9db90cd0064e470567f102a2f83d1873d1
Stored in directory: /tmp/pip-ephem-wheel-cache-e8920quj/wheels/47/59/88/cceb9c59d0d692b940160f055bae0c60cd1295e4edc393ff48
Successfully built jax
Installing collected packages: jax
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
chex 0.1.85 requires jaxlib>=0.1.37, which is not installed.
optax 0.1.7 requires jaxlib>=0.1.37, which is not installed.
orbax-checkpoint 0.4.6 requires jaxlib, which is not installed.
chex 0.1.85 requires jax>=0.4.16, but you have jax 0.4.5 which is incompatible.
flax 0.7.5 requires jax>=0.4.19, but you have jax 0.4.5 which is incompatible.
orbax-checkpoint 0.4.6 requires jax>=0.4.9, but you have jax 0.4.5 which is incompatible.
</code></pre>
<p>It seems that jaxlib needs to be installed first, but I don't know how to install it from a GitHub URL. I just know that <a href="https://github.com/google/jax/releases/tag/jax-v0.4.5" rel="nofollow noreferrer">this</a> is the GitHub repo for jax, but I don't know where the repo for jaxlib is.</p>
<p>I would appreciate any ideas to fix this problem. Thanks so much.</p>
|
<python><jax>
|
2023-12-07 16:16:07
| 1
| 429
|
FreddicMatters
|
77,621,334
| 1,349,428
|
Pyside delay on process exit
|
<p>I have a Windows desktop (<a href="https://github.com/imubit/qt-data-extractor/" rel="nofollow noreferrer">open-source</a>) application developed in PySide6. I am trying to remove the tooltip delay for a certain widget (a pretty innocent operation), and I am ending up with a huge delay when exiting the application (the process ends a minute after I click exit). The code I am adding is this:</p>
<pre><code>class NoDelayHintProxyStyle(QtWidgets.QProxyStyle):
def styleHint(self, hint, option=..., widget=..., returnData=...):
if hint == QtWidgets.QStyle.SH_ToolTip_WakeUpDelay:
return 0
return QtWidgets.QProxyStyle.styleHint(self, hint, option, widget, returnData)
...
self._w.comboSampleRate.setToolTip(
"""
some text
"""
)
self._w.comboSampleRate.setStyle(
NoDelayHintProxyStyle(self._w.comboSampleRate.style())
)
...
</code></pre>
<p>I suspect that the problem is not directly related to this code and that this code is just a symptom. It feels like an object or some third-party library thread is not terminated properly on exit.</p>
<p>Is there any simple way to monitor object and thread termination with PySide?</p>
<ul>
<li><a href="https://github.com/imubit/qt-data-extractor/compare/main...raw-data-option-by-default" rel="nofollow noreferrer">Full code causing the problem</a></li>
<li>Platform: Windows</li>
<li>Pyside version: 6.4.0.1</li>
</ul>
<p>Thanks</p>
|
<python><pyside6>
|
2023-12-07 15:44:40
| 1
| 2,048
|
Meir Tseitlin
|
77,621,319
| 848,811
|
How to type hint a Python module
|
<p>I have this Python module which is very handy:</p>
<pre class="lang-py prettyprint-override"><code># src/payment_settings.py
from utils.payment import get_current_payment_settings


def __getattr__(name):
    settings = get_current_payment_settings()
    return getattr(settings, name)


def __setattr__(name, value):
    raise NotImplementedError("payment_settings is read-only")
</code></pre>
<p>I use it like a special cached, read-only variable like this:</p>
<pre class="lang-py prettyprint-override"><code># src/another_file.py
from . import payment_settings
print(payment_settings.something)
</code></pre>
<p>I want to globally type hint it with the type returned by <code>get_current_payment_settings()</code>. Is there a way to do that?</p>
|
<python><python-typing>
|
2023-12-07 15:42:29
| 1
| 1,731
|
SebCorbin
|
77,621,309
| 1,234,434
|
Regex for column not producing expected output
|
<p>I have this dataframe:</p>
<pre><code>dfsupport = pd.DataFrame({'Date': ['8/12/2020','8/12/2020','13/1/2020','24/5/2020','31/10/2020','11/7/2020','11/7/2020'],
                          'Category': ['Table','Chair','Cushion','Table','Chair','Mats','Mats'],
                          'Sales': ['1 table','3chairs','8 cushions','3Tables','12 Chairs','12Mats','4Mats'],
                          'Paid': ['Yes','Yes','Yes','Yes','No','Yes','Yes',],
                          'Amount': ['93.78','$51.99','44.99','38.24','£29.99','29 21 only','18']
                          })
</code></pre>
<p>Which looks like this in table form:</p>
<pre><code> Date Category Sales Paid Amount
0 8/12/2020 Table 1 table Yes 93.78
1 8/12/2020 Chair 3chairs Yes $51.99
2 13/1/2020 Cushion 8 cushions Yes 44.99
3 24/5/2020 Table 3Tables Yes 38.24
4 31/10/2020 Chair 12 Chairs No £29.99
5 11/7/2020 Mats 12Mats Yes 29 21 only
6 11/7/2020 Mats 4Mats Yes 18
</code></pre>
<p>I want to remove both of the string elements from the Amount column above.
I've learnt to successfully replace the $ and £ with:</p>
<pre><code>patternv='|'.join(re.escape(x) for x in ['$', '£'])
dfsupport['Amount'] = dfsupport['Amount'].str.replace(patternv,regex=True)
</code></pre>
<p>I now want to replace the entry that has "29 21 only" in the Amount column. My attempt has been:</p>
<pre><code>patterns="{r'(\d{1,})\s(\d{1,2})\D+' : r'\1 \2'}"
dfsupport['Amount']=dfsupport['Amount'].str.replace(patterns,regex=True)
</code></pre>
<p>However my attempt leads to the error:</p>
<pre><code>Traceback (most recent call last):
File "/home/cloud/code/learning/howmany.py", line 160, in <module>
dfsupport['Amount'] = dfsupport['Amount'].str.replace(patternv,regex=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cloud/.venv/lib/python3.12/site-packages/pandas/core/strings/accessor.py", line 136, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: StringMethods.replace() missing 1 required positional argument: 'repl'
</code></pre>
<p>How do I fix this?</p>
<p>I should add that I'm seeking to have the output as "29.21"</p>
<p>I followed the <a href="https://stackoverflow.com/questions/73229972/extract-a-substring-from-a-column-and-replace-column-data-frame">question here</a></p>
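<p>For reference, a minimal sketch (with hypothetical values) of the two-argument form: <code>str.replace</code> takes the pattern and the replacement as separate arguments, not as one dict-like string.</p>

```python
import pandas as pd

# Hypothetical sample mirroring the Amount column above.
s = pd.Series(['93.78', '29 21 only', '18'])

# Pattern and replacement are separate arguments to str.replace.
out = s.str.replace(r'(\d+)\s(\d{1,2})\D*', r'\1.\2', regex=True)
print(out.tolist())  # ['93.78', '29.21', '18']
```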
|
<python><pandas><regex><dataframe>
|
2023-12-07 15:41:14
| 1
| 1,033
|
Dan
|
77,621,221
| 9,729,023
|
Is there any way to get the latest S3 file by GCP StorageTransfer?
|
<p>We're running Airflow/Composer on GCP and trying to get the latest S3 file via a Python operator. Currently we're getting the S3 file once a day; the source can happen to export several files on the same day, but we'd like to get only the latest one.</p>
<p>Are there any options to achieve this?</p>
<p>Here's current script:</p>
<pre><code>create_transfer_job = CloudDataTransferServiceCreateJobOperator(
    task_id="create_transfer_job",
    body=self.create_transfer_body(),
    aws_conn_id=aws_conn_id(),
    on_success_callback=partial(self.write_logging_url,
                                self.get_tmp_project_id()),
)


def create_transfer_body(self):
    dag_parameters = get_dag_parameters()
    data_transfer = dag_parameters["data_transfer"]
    s3_data_source = data_transfer["awsS3DataSource"]
    s3_data_source_path = ""
    if "path" in s3_data_source:
        s3_data_source_path = s3_data_source["path"]
    # check
    print("s3_data_source_path!:" + s3_data_source_path)

    types = file_types()
    includePrefixes = []
    for file_type in types:
        includePrefixes.append(f"{file_type}/")

    objectConditions = {
        "includePrefixes": includePrefixes,
        "lastModifiedSince": "{{ ti.xcom_pull(task_ids='initialize', key='last_modified_since') }}",
        "lastModifiedBefore": "{{ ti.xcom_pull(task_ids='initialize', key='last_modified_before') }}",
    }
    if "objectConditions" in data_transfer:
        objectConditions = data_transfer["objectConditions"]

    transfer_body = {
        "description": "DataPipe Data Transfer",
        "status": GcpTransferJobsStatus.ENABLED,
        "projectId": project_id(),
        "name": "transferJobs/dataPipelineJob_" + str(uuid.uuid4()),
        "schedule": {
            "scheduleStartDate": datetime.today(),
            "scheduleEndDate": datetime.today(),
        },
        "transferSpec": {
            "awsS3DataSource": {"bucketName": s3_data_source["bucketName"], "path": s3_data_source_path},
            "gcsDataSink": {"bucketName": self.get_gcs_bucket_name()},
            "objectConditions": objectConditions,
            "transfer_options": {
                "deleteObjectsFromSourceAfterTransfer": False,
                "overwriteWhen": "{{ ti.xcom_pull(task_ids='initialize', key='overwrite_when') }}",
            },
        },
        "loggingConfig": {
            "logActions": ["FIND", "DELETE", "COPY"],
            "logActionStates": ["FAILED"],
        },
    }
    return transfer_body
</code></pre>
|
<python><amazon-s3><google-cloud-platform><google-cloud-storage><google-cloud-composer>
|
2023-12-07 15:25:41
| 0
| 964
|
Sachiko
|
77,621,211
| 2,130,515
|
How to return all selenium elements that match whole words
|
<p>I am running Selenium on Python to return all elements that match my keywords.</p>
<pre><code>url =...
svc = webdriver.ChromeService(executable_path="driver_path")
driver = webdriver.Chrome(service=svc)
driver.get(url)
key_words = ['word1', 'word2', 'word3']
# This will match if the text is equal to one of the keywords
predicate_text = " or ".join([f"translate(normalize-space(text()), 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz')='{text}'" for text in key_words])
# The problem is here as it does not match only whole words.
predicate_href = " or ".join([f"contains(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'),'{text}') " for text in key_words])
# href1 = "/xxx/yyyy/word1/"  -> good match
# href2 = "/xxword1yy/"       -> bad match
</code></pre>
<p>How can I do an exact match?</p>
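<p>One direction I'm considering (sketch only; it assumes the keywords appear as whole, <code>/</code>-delimited path segments): wrap the lowercased <code>@href</code> in slashes so <code>'/word1/'</code> can only match a full segment, which accepts <code>/xxx/yyyy/word1/</code> but rejects <code>/xxword1yy/</code>.</p>

```python
key_words = ['word1', 'word2', 'word3']

# Lowercase @href, then wrap it in '/' so only whole path segments can match.
lowered = "translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz')"
predicate_href = " or ".join(
    f"contains(concat('/', {lowered}, '/'), '/{text}/')" for text in key_words
)
xpath = f"//a[{predicate_href}]"
print(xpath)
```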
|
<python><selenium-webdriver>
|
2023-12-07 15:23:48
| 1
| 1,790
|
LearnToGrow
|
77,621,152
| 736,662
|
Removing "" in concatenated string
|
<p>In my Locust/Python script I have a function (<em>_get_ts_ids</em>) like this:</p>
<pre><code>def _get_ts_ids():
    csv_file = os.path.join("C:", os.sep, "PythonScripting", "MyCode", "pythonProject", "TS_ID.csv")
    with open(csv_file) as csvfile:
        field_names = ["TS_ID"]
        dr = csv.DictReader(csvfile, fieldnames=field_names)
        ts_ids_all = [row["TS_ID"] for row in dr]
    shuffle(ts_ids_all)  # RANDOMIZE LIST OF VALUES
    ts_ids_head = ts_ids_all[0:10]  # SELECT FIRST x ITEMS
    print(ts_ids_head)
    return ",".join(ts_ids_head)  # CONCATENATE VALUES
</code></pre>
<p>I am using the test data in constructing a json like this:</p>
<pre><code>def get_data():
    myjson = {
        "tsIds": [
            _get_ts_ids()
        ],
        "resolution": "PT15M",
        "startUtc": "2023-12-07T14:55:18.626Z",
        "endUtc": "2023-12-07T14:55:18.626Z"
    }
    return myjson
</code></pre>
<p>The outcome is:</p>
<pre><code> {
    "tsIds": [
        "332157,333225,338380,315595,366277,324161,222850,247119,209902,354500"
    ],
    "resolution": "PT15M",
    "startUtc": "2023-12-07T14:55:18.626Z",
    "endUtc": "2023-12-07T14:55:18.626Z"
}
</code></pre>
<p>It is all working as expected; however, I want to remove the "" in the JSON being constructed. How can I strip the two "" off?</p>
<p>I want this to be the "output/result":</p>
<pre><code>{
    "tsIds": [
        254692,
        375565,
        375451
    ],
    "resolution": "PT15M",
    "startUtc": "2023-12-07T14:55:18.626Z",
    "endUtc": "2023-12-07T14:55:18.626Z"
}
</code></pre>
<p>The testdata file looks like this:</p>
<pre><code>TS_ID
53005
246388
45032
243898
161700
</code></pre>
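<p>In case it clarifies what I'm after: if the function returned a list of ints instead of one comma-joined string, the serialised JSON would contain bare numbers (sketch with made-up IDs):</p>

```python
import json

ts_ids_head = ['254692', '375565', '375451']  # made-up IDs read from the CSV

# A list of ints instead of ",".join(...) keeps each ID as its own element.
myjson = {"tsIds": [int(x) for x in ts_ids_head]}
print(json.dumps(myjson))  # {"tsIds": [254692, 375565, 375451]}
```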
|
<python><locust>
|
2023-12-07 15:14:40
| 0
| 1,003
|
Magnus Jensen
|
77,621,135
| 1,008,588
|
How to parse nested information in JSON using jsonpath_ng
|
<p>I have the following JSON, which came from an API response:</p>
<pre><code>{
    "expand": "names,schema",
    "startAt": 0,
    "maxResults": 50,
    "total": 3,
    "issues": [
        {
            "expand": "",
            "id": "10001",
            "self": "http://www.example.com/10001",
            "key": "ABC-1",
            "fields": [
                {
                    "From": "Ibiza",
                    "To": "Mallorca"
                },
                {
                    "From": "Mallorca",
                    "To": "Miami"
                }
            ]
        },
        {
            "expand": "",
            "id": "10002",
            "self": "http://www.example.com/10002",
            "key": "ABC-2",
            "fields": [
                {
                    "From": "NYC",
                    "To": "Charlotte"
                },
                {
                    "From": "Charlotte",
                    "To": "Los Angeles"
                },
                {
                    "From": "Los Angeles",
                    "To": "San Diego"
                }
            ]
        },
        {
            "expand": "",
            "id": "10003",
            "self": "http://www.example.com/10003",
            "key": "ABC-3",
            "fields": [
                {
                    "From": "Denver",
                    "To": "Boston"
                }
            ]
        }
    ]
}
</code></pre>
<p><strong>Target</strong></p>
<p>My target would be to print in Python a list of combinations like:</p>
<pre><code>10001 - Ibiza - Mallorca
10001 - Mallorca - Miami
10002 - NYC - Charlotte
10002 - Charlotte - Los Angeles
10002 - Los Angeles - San Diego
10003 - Denver - Boston
</code></pre>
<p><strong>What I have done</strong></p>
<p>The following snippet works fine, but I really can't understand how to merge the information. I understand that I should split the whole JSON into smaller parts, one for each item, and then apply the second and third catches... can anybody help me, please?</p>
<pre><code>import jsonpath_ng.ext as jp
import json

data = json.loads(response.text)

# 1st catch: 10001, 10002, 10003
query = jp.parse("$.issues[*].id")
for match in query.find(data):
    key = match.value
    print(key)

# 2nd catch: Ibiza, Mallorca, NYC, ...
query = jp.parse("$.issues[*].fields[*].From")
for match in query.find(data):
    key = match.value
    print(key)

# 3rd catch: Mallorca, Miami, Charlotte, ...
query = jp.parse("$.issues[*].fields[*].To")
for match in query.find(data):
    key = match.value
    print(key)
</code></pre>
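<p>For comparison, a plain-Python sketch (no jsonpath) of the pairing I'm after, run on a trimmed-down copy of the response above:</p>

```python
# Trimmed-down copy of the response shown above.
data = {
    "issues": [
        {"id": "10001", "fields": [{"From": "Ibiza", "To": "Mallorca"},
                                   {"From": "Mallorca", "To": "Miami"}]},
        {"id": "10003", "fields": [{"From": "Denver", "To": "Boston"}]},
    ]
}

# Pair each issue id with every From/To leg in its nested list.
rows = [f"{issue['id']} - {leg['From']} - {leg['To']}"
        for issue in data["issues"]
        for leg in issue.get("fields", [])]
for row in rows:
    print(row)
```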
|
<python><json><python-3.x><jsonparser><jsonpath-ng>
|
2023-12-07 15:12:34
| 1
| 2,764
|
Nicolaesse
|
77,621,095
| 11,001,493
|
How to rename row string based on another row string?
|
<p>Imagine I have a dataframe like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"a":["","DATE","01-01-2012"],
                   "b":["","ID",18],
                   "c":["CLASS A","GOLF",3],
                   "d":["","HOCKEY",4],
                   "e":["","BASEBALL",2],
                   "f":["CLASS B","GOLF",15],
                   "g":["","HOCKEY",2],
                   "h":["","BASEBALL",3]
                   })
Out[33]:
a b c d e f g h
0 CLASS A CLASS B
1 DATE ID GOLF HOCKEY BASEBALL GOLF HOCKEY BASEBALL
2 01-01-2012 18 3 4 2 15 2 3
</code></pre>
<p>I would like to add the strings in the first row to the names of the sports on the row below, but only up to the beginning of the next "Class". Does anyone know how I can do that?</p>
<p>So the result should be like this:</p>
<pre><code> a b c ... f g h
0 CLASS A ... CLASS B
1 DATE ID CLASS A GOLF ... CLASS B GOLF CLASS B HOCKEY CLASS B BASEBALL
2 01-01-2012 18 3 ... 15 2 3
</code></pre>
<p>Later I will turn row 1 into my header names, but that part I know how to do. I already tried to use df.iterrows but got confused with the workflow.</p>
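<p>A minimal sketch of one approach I've been looking at (assuming each class label in row 0 starts a block that runs until the next label): forward-fill row 0, then prefix row 1 with it.</p>

```python
import pandas as pd

# Cut-down version of the frame above.
df = pd.DataFrame({"a": ["", "DATE"], "c": ["CLASS A", "GOLF"],
                   "d": ["", "HOCKEY"], "f": ["CLASS B", "GOLF"]})

# Forward-fill the class labels across row 0, then prefix row 1 with them.
row0 = df.iloc[0]
top = row0.mask(row0 == "").ffill().fillna("")
df.iloc[1] = (top + " " + df.iloc[1]).str.strip()
print(df.iloc[1].tolist())  # ['DATE', 'CLASS A GOLF', 'CLASS A HOCKEY', 'CLASS B GOLF']
```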
|
<python><pandas>
|
2023-12-07 15:06:33
| 1
| 702
|
user026
|
77,621,087
| 10,499,034
|
How to initialize the python engine in VB.net
|
<p>I am trying to execute a simple Python script from VB.net using PythonNet. When I use the following code I get the error "System.TypeInitializationException: 'The type initializer for 'Delegates' threw an exception.'" at the line of code showing "PythonEngine.Initialize()" I have tried forward and reverse slashes and I have tried using "PythonEngine.Initialize(CType("C:/Users/realt/anaconda3/python310.dll", IEnumerable(Of String)))" How can I get this to work?</p>
<pre><code>Imports Python.Runtime

Public Class Form1
    Private Sub Button2_Click(sender As Object, e As EventArgs) Handles Button2.Click
        Runtime.PythonDLL = "C:\Users\realt\anaconda3\python310.dll"
        PythonEngine.Initialize()
        Using Py.GIL()
            Dim np As Object = Py.Import("numpy")
            TextBox3.Text = ToString(np.cos(np.pi * 2))
        End Using
    End Sub
End Class
</code></pre>
<p>I am basically trying to convert the C# example shown at <a href="https://pypi.org/project/pythonnet/" rel="nofollow noreferrer">https://pypi.org/project/pythonnet/</a> to VB.net.</p>
|
<python><.net><vb.net><python.net>
|
2023-12-07 15:06:03
| 1
| 792
|
Jamie
|
77,621,060
| 15,991,297
|
Add Annotations to Plotly Candlestick Chart
|
<p>I have been using plotly to create charts using OHLC data in a dataframe. The chart contains candlesticks on the top and volume bars at the bottom:</p>
<p><a href="https://i.sstatic.net/d7gSF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d7gSF.png" alt="enter image description here" /></a></p>
<p>I want to annotate the candlestick chart (not the volume chart) but cannot work out how to do it.</p>
<p>This code works to create the charts:</p>
<pre><code># Plot chart
# Create subplots and mention plot grid size
fig = make_subplots(rows=2, cols=1, shared_xaxes=True,
                    vertical_spacing=0.03,
                    row_width=[0.2, 0.7])

# Plot OHLC on 1st row
fig.add_trace(go.Candlestick(x=df.index, open=df["Open"], high=df["High"],
                             low=df["Low"], close=df["Close"], name="OHLC"),
              row=1, col=1)

# Bar trace for volumes on 2nd row without legend
fig.add_trace(go.Bar(x=df.index, y=df['Volume'], showlegend=False), row=2, col=1)

fig.update_layout(xaxis_rangeslider_visible=False, title_text=f'{ticker}')

fig.write_html(fr"E:\Documents\PycharmProjects\xxxxxxxx.html")
</code></pre>
<p>And I tried adding the following after the candlestick add_trace but it doesn't work:</p>
<pre><code>fig.add_annotation(x=i, y=df["Close"],
                   text="Test text",
                   showarrow=True,
                   arrowhead=1)
</code></pre>
<p>What am I doing wrong?</p>
|
<python><plotly>
|
2023-12-07 15:02:45
| 1
| 500
|
James
|
77,621,013
| 1,403,470
|
setuptools not getting dynamic version when using pyproject.toml
|
<p>I am using <code>setuptools</code> with a <code>pyproject.toml</code> file, and want <code>setuptools</code> to get the package version dynamically from the package contents. Instead, it is always setting the package version in the name of the generated file to <code>0.0.0</code>, even though the package version <em>inside</em> the package seems correct. What am I doing wrong?</p>
<ul>
<li>Python 3.11.6 on MacOS 14.1.2 (Sonoma)</li>
<li><code>setuptools</code> version 68.2.2</li>
<li><code>pip</code> version 23.3.1</li>
</ul>
<p>Package structure:</p>
<pre><code>.
├── LICENSE.md
├── README.md
├── invperc
│   └── __init__.py
└── pyproject.toml
</code></pre>
<ul>
<li><code>invperc/__init__.py</code> contains only this:</li>
</ul>
<pre><code>__version__ = "0.2.0"
</code></pre>
<ul>
<li><code>pyproject.toml</code> contains only this:</li>
</ul>
<pre><code>[project]
name = "invperc"
description = "Invasion Percolation"
readme = "README.md"
authors = [
    { name = "Greg Wilson", email = "gvwilson@third-bit.com" }
]
license = { text = "MIT License" }
dependencies = ["pandas", "numpy"]
dynamic = ["version"]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[tools.setuptools.dynamic]
version = {attr = "invperc.__version__"}
</code></pre>
<ul>
<li>Command:</li>
</ul>
<pre><code>python -m build
</code></pre>
<ul>
<li>Screen output:</li>
</ul>
<pre><code>...many lines...
Successfully built invperc-0.0.0.tar.gz and invperc-0.0.0-py3-none-any.whl
</code></pre>
<ul>
<li><p><code>dist/invperc-0.0.0.tar.gz</code> and <code>dist/invperc-0.0.0-py3-none-any-whl</code> now exist with <code>0.0.0</code> as version numbers (which is incorrect).</p>
</li>
<li><p>But if I import and check:</p>
</li>
</ul>
<pre><code>$ cd /tmp
$ pip install $HOME/invperc/dist/invperc-0.0.0-py3-none-any.whl
$ python
>>> import invperc
>>> invperc.__version__
'0.2.0'
</code></pre>
|
<python><setuptools><python-packaging><pyproject.toml><version-numbering>
|
2023-12-07 14:56:34
| 1
| 1,403
|
Greg Wilson
|
77,620,921
| 1,234,434
|
pandas string replace is not replacing all selections
|
<p>I have this dataframe:</p>
<pre><code>dfsupport = pd.DataFrame({'Date': ['8/12/2020','8/12/2020','13/1/2020','24/5/2020','31/10/2020','11/7/2020','11/7/2020'],
                          'Category': ['Table','Chair','Cushion','Table','Chair','Mats','Mats'],
                          'Sales': ['1 table','3chairs','8 cushions','3Tables','12 Chairs','12Mats','4Mats'],
                          'Paid': ['Yes','Yes','Yes','Yes','No','Yes','Yes',],
                          'Amount': ['93.78','$51.99','44.99','38.24','£29.99','29 only','18']
                          })
</code></pre>
<p>I am attempting to replace currency signs with blanks, but the below does not work.</p>
<pre><code>patternv='|'.join(['$', '£'])
dfsupport['Amount'] = dfsupport['Amount'].str.replace(patternv,'')
</code></pre>
<p>Why does this not work?</p>
<p>Print the dataframe after the above:</p>
<pre><code> Date Category Sales Paid Amount
0 8/12/2020 Table 1 table Yes 93.78
1 8/12/2020 Chair 3chairs Yes $51.99
2 13/1/2020 Cushion 8 cushions Yes 44.99
3 24/5/2020 Table 3Tables Yes 38.24
4 31/10/2020 Chair 12 Chairs No £29.99
5 11/7/2020 Mats 12Mats Yes 29 only
6 11/7/2020 Mats 4Mats Yes 18
Date Category Sales Paid Amount
1 8/12/2020 Chair 3chairs Yes $51.99
4 31/10/2020 Chair 12 Chairs No £29.99
</code></pre>
<p>I did follow <a href="https://stackoverflow.com/questions/49413005/replace-multiple-substrings-in-a-pandas-series-with-a-value">this question</a>, so not sure why mine isn't working.</p>
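<p>For what it's worth, a small sketch (on a hypothetical slice of the column) of the escaped variant: since <code>'$'</code> is a regex metacharacter meaning end-of-string, the unescaped pattern never strips the dollar sign.</p>

```python
import pandas as pd
import re

# Hypothetical slice of the Amount column.
s = pd.Series(['93.78', '$51.99', '£29.99'])

# Escape '$' so it matches the literal character rather than end-of-string.
pattern = '|'.join(re.escape(x) for x in ['$', '£'])
out = s.str.replace(pattern, '', regex=True)
print(out.tolist())  # ['93.78', '51.99', '29.99']
```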
|
<python><pandas><string><dataframe>
|
2023-12-07 14:44:41
| 3
| 1,033
|
Dan
|
77,620,744
| 2,664,376
|
Gurobi spends more time in presolve on a small model
|
<p>I have a small model to optimize in Gurobi. The problem is that it takes a lot of time in presolve without removing any rows or variables. I tried to disable the Presolve parameter and reduce the number of Threads, but the issue remains the same. Note that I am working on solving a CVRP. For 16 customers and 8 vehicles it works very well (58 s), but when I switch to 19 customers with 2 vehicles it takes more than 1000 s without returning a solution. Here is the log:</p>
<pre><code>Set parameter Threads to value 56
Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)
CPU model: Intel(R) Xeon(R) Platinum 8276 CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 112 physical cores, 112 logical processors, using up to 56 threads
Optimize a model with 524310 rows, 684 columns and 40110732 nonzeros
Model fingerprint: 0xf11e235e
Variable types: 0 continuous, 684 integer (684 binary)
Coefficient statistics:
Matrix range [1e+00, 3e+01]
Objective range [3e-01, 5e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 2e+02]
Found heuristic solution: objective 43.3769973
Presolve removed 0 rows and 0 columns (presolve time = 6s) ...
Presolve removed 0 rows and 0 columns (presolve time = 13s) ...
Presolve removed 0 rows and 0 columns (presolve time = 17s) ...
Presolve removed 0 rows and 0 columns (presolve time = 20s) ...
Presolve removed 0 rows and 0 columns (presolve time = 25s) ...
Presolve removed 0 rows and 0 columns (presolve time = 32s) ...
Presolve removed 0 rows and 0 columns (presolve time = 35s) ...
Presolve removed 0 rows and 0 columns (presolve time = 40s) ...
Presolve removed 0 rows and 0 columns (presolve time = 45s) ...
Presolve removed 0 rows and 0 columns (presolve time = 50s) ...
Presolve removed 0 rows and 0 columns (presolve time = 55s) ...
Presolve removed 0 rows and 0 columns (presolve time = 64s) ...
Presolve removed 0 rows and 0 columns (presolve time = 70s) ...
Presolve time: 69.83s
Presolved: 524310 rows, 684 columns, 40110694 nonzeros
Variable types: 0 continuous, 684 integer (684 binary)
Root relaxation presolved: 684 rows, 524994 columns, 40111378 nonzeros
Root simplex log...
Iteration Objective Primal Inf. Dual Inf. Time
0 -0.0000000e+00 0.000000e+00 5.700000e+01 115s
41 1.4942544e+01 0.000000e+00 0.000000e+00 115s
41 1.4942544e+01 0.000000e+00 2.000000e-06 115s
Use crossover to convert LP symmetric solution to basic solution...
Root crossover log...
1 DPushes remaining with DInf 0.0000000e+00 117s
0 DPushes remaining with DInf 0.0000000e+00 117s
27 PPushes remaining with PInf 0.0000000e+00 117s
0 PPushes remaining with PInf 0.0000000e+00 117s
Push phase complete: Pinf 0.0000000e+00, Dinf 0.0000000e+00 117s
Root simplex log...
Iteration Objective Primal Inf. Dual Inf. Time
72 1.4942544e+01 0.000000e+00 0.000000e+00 117s
72 1.4942544e+01 0.000000e+00 0.000000e+00 119s
72 1.4942544e+01 0.000000e+00 0.000000e+00 121s
Root relaxation: objective 1.494254e+01, 72 iterations, 46.50 seconds (39.46 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 14.94254 0 40 43.37700 14.94254 65.6% - 127s
H 0 0 41.2306564 14.94254 63.8% - 135s
0 0 16.53206 0 32 41.23066 16.53206 59.9% - 161s
0 0 16.53206 0 32 41.23066 16.53206 59.9% - 203s
H 0 0 37.7572196 16.53206 56.2% - 217s
0 0 16.53206 0 32 37.75722 16.53206 56.2% - 245s
0 0 16.53206 0 40 37.75722 16.53206 56.2% - 276s
0 0 16.53206 0 32 37.75722 16.53206 56.2% - 305s
H 0 0 37.4178668 16.53206 55.8% - 320s
0 0 16.53206 0 32 37.41787 16.53206 55.8% - 325s
0 0 16.53206 0 32 37.41787 16.53206 55.8% - 352s
0 0 16.54588 0 50 37.41787 16.54588 55.8% - 388s
H 0 0 37.1018717 16.54588 55.4% - 397s
H 0 0 34.1244024 16.54588 51.5% - 400s
0 0 16.56291 0 57 34.12440 16.56291 51.5% - 409s
0 0 16.56934 0 58 34.12440 16.56934 51.4% - 421s
0 0 16.62160 0 64 34.12440 16.62160 51.3% - 444s
0 0 16.62160 0 64 34.12440 16.62160 51.3% - 465s
0 0 16.62160 0 56 34.12440 16.62160 51.3% - 489s
0 0 16.64008 0 50 34.12440 16.64008 51.2% - 529s
0 0 16.64405 0 56 34.12440 16.64405 51.2% - 555s
0 0 16.64405 0 57 34.12440 16.64405 51.2% - 573s
0 0 16.65711 0 65 34.12440 16.65711 51.2% - 597s
H 0 0 31.7808513 16.65711 47.6% - 614s
0 0 16.66624 0 56 31.78085 16.66624 47.6% - 624s
0 0 16.66624 0 56 31.78085 16.66624 47.6% - 635s
0 0 16.66624 0 47 31.78085 16.66624 47.6% - 668s
0 0 16.66624 0 49 31.78085 16.66624 47.6% - 691s
0 0 16.69817 0 46 31.78085 16.69817 47.5% - 705s
H 0 0 31.3353414 16.69817 46.7% - 720s
0 0 16.69817 0 50 31.33534 16.69817 46.7% - 732s
0 0 16.70180 0 57 31.33534 16.70180 46.7% - 760s
H 0 0 26.8624304 16.70180 37.8% - 771s
H 0 0 24.6815680 16.70180 32.3% - 778s
0 0 16.70180 0 57 24.68157 16.70180 32.3% - 784s
0 0 16.70180 0 56 24.68157 16.70180 32.3% - 801s
0 0 16.70180 0 56 24.68157 16.70180 32.3% - 824s
0 0 16.70180 0 49 24.68157 16.70180 32.3% - 863s
</code></pre>
<p>Thank you for your help.</p>
|
<python><gurobi><vehicle-routing>
|
2023-12-07 14:19:35
| 2
| 1,335
|
MAYA
|
77,620,706
| 9,152,997
|
Querying list of nested models with FireO in Python
|
<p>I am using FireO to interact with Google Firestore in my Python application. I have a data model with nested lists, and I'm having trouble querying based on a condition within these nested lists.</p>
<p>Here are my relevant data models:</p>
<pre><code>from fireo.fields import TextField, ListField, NumberField
from fireo.models import Model, NestedModel


class InnerModel(Model):
    field1 = TextField()


class OuterModel(Model):
    another_field = NumberField()
    field2 = ListField(NestedModel(InnerModel))

    class Meta:
        collection_name = "outer_model"
</code></pre>
<p>What I am trying to do is fetch all the <code>OuterModel</code> objects where at least one of the items' <code>field1</code> in the list of <code>InnerModel</code> is equal to a value.</p>
<p>What I have tried but did not work was:</p>
<pre><code>items = (
    OuterModel.collection
    .filter("field2.field1", "array_contains", "value")
    .get()
)
</code></pre>
<p>I am expecting documents that have this structure:</p>
<pre><code>collection: outer_model
    another_field: 123
    field2:
        0:
            field1: value1
        1:
            field1: value2
        2:
            field1: **value**
</code></pre>
<p>Because the list item at position 2 has the value we are looking for, I expect the whole object to be returned. Of course, <code>InnerModel</code> has other fields as well, which is why I am using a <code>NestedModel</code>.</p>
<p>I am adding a screenshot of the desired document in the Firestore web interface:
<a href="https://i.sstatic.net/cHFsZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cHFsZ.png" alt="enter image description here" /></a></p>
<p>How can I properly query based on conditions within a nested list in FireO?</p>
<p>I appreciate any insights or examples on how to achieve this. Thank you!</p>
|
<python><google-cloud-platform><google-cloud-firestore>
|
2023-12-07 14:13:18
| 1
| 922
|
Orestis Zekai
|
77,620,705
| 1,597,121
|
SQLAlchemy OperationalError appears depending upon syntax
|
<p>I am running a small Flask application and recently added SQLAlchemy as the ORM. We have been having issues with the application throwing 500 errors after being idle for some length of time. Once the error occurs, the query repeatedly fails until the application is restarted.</p>
<pre><code>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) SSL connection has been closed unexpectedly
</code></pre>
<p>The strange thing is this only happens if we query the database using the following syntax:</p>
<pre><code>Logo.query.filter(Logo.default).first()
</code></pre>
<p>After re-writing the query like this:</p>
<pre><code>stmt = db.session.query(Logo).filter(Logo.default)
logo = db.session.execute(stmt).first()
</code></pre>
<p>The error has disappeared. However, we prefer to use the former syntax as it is cleaner, and are still experiencing these errors in other spots of the code where the former syntax was used. We have already tried adding</p>
<pre><code>app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {"pool_pre_ping": True}
</code></pre>
<p>but it did not help. Any insight as to why this is occurring would be greatly appreciated.</p>
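<p>For completeness, the engine options we are experimenting with next (sketch only; the <code>pool_recycle</code> value is an assumption and should sit below whatever idle timeout the server or proxy enforces):</p>

```python
# Sketch: recycle pooled connections before an (assumed) server-side idle
# timeout can close them, in addition to validating with pre-ping.
engine_options = {
    "pool_pre_ping": True,
    "pool_recycle": 280,
}
# app.config['SQLALCHEMY_ENGINE_OPTIONS'] = engine_options
```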
|
<python><flask><sqlalchemy><flask-sqlalchemy><psycopg2>
|
2023-12-07 14:13:02
| 0
| 343
|
user1597121
|
77,620,520
| 9,479,925
|
How to explode a list column in python polars?
|
<p>I have a list column in a polars dataframe, as below.</p>
<p><a href="https://i.sstatic.net/2K41Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2K41Q.png" alt="enter image description here" /></a></p>
<p>I would like to do an explode on this column so that all the list elements will be in a single column, as below.</p>
<p><a href="https://i.sstatic.net/LZRcl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LZRcl.png" alt="enter image description here" /></a></p>
<p>How can I do the explode operation on a list column in polars?</p>
|
<python><python-polars>
|
2023-12-07 13:44:44
| 1
| 1,518
|
myamulla_ciencia
|
77,620,110
| 15,806,560
|
tkinter Button in main menu sometimes not working
|
<p>I have a problem when I click a button on my main screen. Sometimes when I click, it works and a new window appears; sometimes it does not (I cannot click). I encounter this problem when trying to run other tkinter programs as well. I get no error when running this code.</p>
<p>My laptop is a Mac Pro M1. My OS is macOS Sonoma 14.1.1 (Arm architecture). My Python version is Python 3.9.13.</p>
<p>I have tested this code on my other laptop running Ubuntu 20.04 with Python 3.10.12, and it ran smoothly.</p>
<p>Here is my source code:</p>
<pre><code>import tkinter as tk

# Define the main screen
main_screen = tk.Tk()
main_screen.title("Book Lessons")
main_screen.geometry("500x250")

# Define function to open second screen with lesson content
def open_lesson_screen(lesson_number):
    second_screen = tk.Toplevel(main_screen)
    second_screen.title(f"Lesson {lesson_number}")
    second_screen.geometry("400x200")

    # Add text widget with lesson content
    lesson_content = tk.Text(second_screen, height=10, width=50)
    lesson_content.insert(tk.INSERT, f"This is the content for Lesson {lesson_number}.")
    lesson_content.pack()

    # Add button to close second screen
    close_button = tk.Button(second_screen, text="Close", command=second_screen.destroy)
    close_button.pack()

# Create buttons for 12 lessons
for i in range(1, 13):
    button_text = f"Lesson {i}"
    button = tk.Button(main_screen, text=button_text, command=lambda n=i: open_lesson_screen(n))
    button.grid(row=(i - 1) // 4, column=(i - 1) % 4)

# Start the main loop
main_screen.mainloop()
</code></pre>
|
<python><tkinter>
|
2023-12-07 12:45:00
| 1
| 372
|
Minh Quang Nguyen
|
77,620,057
| 2,829,150
|
paramiko ssh_exception.AuthenticationException: Authentication failed
|
<p>I have code to connect using <em>sftp</em> with a <em>private key</em>. From the same host where I run this code I am able to connect manually to <em>sftp</em>; nevertheless, I get an error when running it from code. What could be the problem here?</p>
<pre><code>import pysftp

def run(**kwargs):
    cnopts = pysftp.CnOpts()
    cnopts.hostkeys = None
    try:
        sftp = pysftp.Connection(host="xxxx", username="xxx", private_key="/opt/airflow/.ssh/id_rsa", cnopts=cnopts)
    except:
        raise

run()
</code></pre>
<p><strong>Error:</strong>
<em>(note that I replaced confidential information like username and host with xxx)</em></p>
<pre><code>[2023-12-07, 12:27:52 UTC] {warnings.py:109} WARNING - /home/***/.local/lib/python3.10/site-packages/pysftp/__init__.py:61: UserWarning: Failed to load HostKeys from /home/***/.ssh/known_hosts. You will need to explicitly load HostKeys (cnopts.hostkeys.load(filename)) or disableHostKey checking (cnopts.hostkeys = None).
warnings.warn(wmsg, UserWarning)
[2023-12-07, 12:27:53 UTC] {transport.py:1893} INFO - Connected (version 2.0, client OpenSSH_6.7)
[2023-12-07, 12:27:54 UTC] {transport.py:1893} INFO - Authentication (publickey) failed.
[2023-12-07, 12:27:54 UTC] {logging_mixin.py:150} INFO - 2023-12-07 12:27:54.521613: ('Traceback (most recent call last):\n'
' File "/opt/***/dags/customer_dag.py", line 161, in customer_name\n'
' sftp = pysftp.Connection(host="xxx", '
'username="xxx", private_key="/opt/***/.ssh/id_rsa", '
'cnopts=cnopts)\n'
' File '
'"/home/***/.local/lib/python3.10/site-packages/pysftp/__init__.py", line '
'143, in __init__\n'
' self._transport.connect(**self._tconnect)\n'
' File '
'"/home/***/.local/lib/python3.10/site-packages/paramiko/transport.py", '
'line 1411, in connect\n'
' self.auth_publickey(username, pkey)\n'
' File '
'"/home/***/.local/lib/python3.10/site-packages/paramiko/transport.py", '
'line 1658, in auth_publickey\n'
' return self.auth_handler.wait_for_response(my_event)\n'
' File '
'"/home/***/.local/lib/python3.10/site-packages/paramiko/auth_handler.py", '
'line 263, in wait_for_response\n'
' raise e\n'
'paramiko.ssh_exception.AuthenticationException: Authentication failed.\n')
[2023-12-07, 12:27:54 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 181, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 198, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/opt/airflow/dags/customer_dag.py", line 161, in customer_name
sftp = pysftp.Connection(host="xxx", username="xxx", private_key="/opt/airflow/.ssh/id_rsa", cnopts=cnopts)
File "/home/airflow/.local/lib/python3.10/site-packages/pysftp/__init__.py", line 143, in __init__
self._transport.connect(**self._tconnect)
File "/home/airflow/.local/lib/python3.10/site-packages/paramiko/transport.py", line 1411, in connect
self.auth_publickey(username, pkey)
File "/home/airflow/.local/lib/python3.10/site-packages/paramiko/transport.py", line 1658, in auth_publickey
return self.auth_handler.wait_for_response(my_event)
File "/home/airflow/.local/lib/python3.10/site-packages/paramiko/auth_handler.py", line 263, in wait_for_response
raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.
[2023-12-07, 12:27:54 UTC] {taskinstance.py:1345} INFO - Marking task as FAILED. dag_id=customer_name, task_id=task_customer_name, execution_date=20231207T122750, start_date=20231207T122752, end_date=20231207T122754
[2023-12-07, 12:27:54 UTC] {standard_task_runner.py:104} ERROR - Failed to execute job 206 for task task_customer_name (Authentication failed.; 96)
[2023-12-07, 12:27:54 UTC] {local_task_job_runner.py:225} INFO - Task exited with return code 1
[2023-12-07, 12:27:54 UTC] {taskinstance.py:2653} INFO - 0 downstream tasks scheduled from follow-on schedule check
</code></pre>
<p><strong>Log from: (cnopts.log = True)</strong></p>
<pre><code>[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - starting thread (client mode): 0xce8f2620
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - Local version/idstring: SSH-2.0-paramiko_3.2.0
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - Remote version/idstring: SSH-2.0-OpenSSH_6.7
[2023-12-07, 13:27:43 UTC] {transport.py:1893} INFO - Connected (version 2.0, client OpenSSH_6.7)
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - === Key exchange possibilities ===
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - kex algos: curve25519-sha256@libssh.org, ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521, diffie-hellman-group-exchange-sha256, diffie-hellman-group14-sha1
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - server key: ssh-rsa, ssh-dss
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - client encrypt: aes128-ctr, aes192-ctr, aes256-ctr, aes128-gcm@openssh.com, aes256-gcm@openssh.com, chacha20-poly1305@openssh.com
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - server encrypt: aes128-ctr, aes192-ctr, aes256-ctr, aes128-gcm@openssh.com, aes256-gcm@openssh.com, chacha20-poly1305@openssh.com
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - client mac: umac-64-etm@openssh.com, umac-128-etm@openssh.com, hmac-sha2-256-etm@openssh.com, hmac-sha2-512-etm@openssh.com, hmac-sha1-etm@openssh.com, umac-64@openssh.com, umac-128@openssh.com, hmac-sha2-256, hmac-sha2-512, hmac-sha1
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - server mac: umac-64-etm@openssh.com, umac-128-etm@openssh.com, hmac-sha2-256-etm@openssh.com, hmac-sha2-512-etm@openssh.com, hmac-sha1-etm@openssh.com, umac-64@openssh.com, umac-128@openssh.com, hmac-sha2-256, hmac-sha2-512, hmac-sha1
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - client compress: none, zlib@openssh.com
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - server compress: none, zlib@openssh.com
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - client lang: <none>
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - server lang: <none>
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - kex follows: False
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - === Key exchange agreements ===
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - Kex: curve25519-sha256@libssh.org
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - HostKey: ssh-rsa
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - Cipher: aes128-ctr
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - MAC: hmac-sha2-256
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - Compression: none
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - === End of kex handshake ===
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - kex engine KexCurve25519 specified hash_algo <built-in function openssl_sha256>
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - Switch to new keys ...
[2023-12-07, 13:27:43 UTC] {transport.py:1893} DEBUG - Attempting public-key auth...
[2023-12-07, 13:27:44 UTC] {transport.py:1893} DEBUG - userauth is OK
[2023-12-07, 13:27:44 UTC] {transport.py:1893} DEBUG - Finalizing pubkey algorithm for key of type 'ssh-rsa'
[2023-12-07, 13:27:44 UTC] {transport.py:1893} DEBUG - Our pubkey algorithm list: ['rsa-sha2-512', 'rsa-sha2-256', 'ssh-rsa']
[2023-12-07, 13:27:44 UTC] {transport.py:1893} DEBUG - Server did not send a server-sig-algs list; defaulting to our first preferred algo ('rsa-sha2-512')
[2023-12-07, 13:27:44 UTC] {transport.py:1893} DEBUG - NOTE: you may use the 'disabled_algorithms' SSHClient/Transport init kwarg to disable that or other algorithms if your server does not support them!
[2023-12-07, 13:27:44 UTC] {transport.py:1893} INFO - Authentication (publickey) failed.
[2023-12-07, 13:27:44 UTC] {logging_mixin.py:150} INFO - 2023-12-07 13:27:44.766480: ('Traceback (most recent call last):\n'
' File "/opt/***/dags/customer_dag.py", line 162, in customer_name\n'
' sftp = pysftp.Connection(host="xxx", '
'username="mnsgsd", private_key="/opt/***/.ssh/id_rsa", '
'cnopts=cnopts)\n'
' File '
'"/home/***/.local/lib/python3.10/site-packages/pysftp/__init__.py", line '
'143, in __init__\n'
' self._transport.connect(**self._tconnect)\n'
' File '
'"/home/***/.local/lib/python3.10/site-packages/paramiko/transport.py", '
'line 1411, in connect\n'
' self.auth_publickey(username, pkey)\n'
' File '
'"/home/***/.local/lib/python3.10/site-packages/paramiko/transport.py", '
'line 1658, in auth_publickey\n'
' return self.auth_handler.wait_for_response(my_event)\n'
' File '
'"/home/***/.local/lib/python3.10/site-packages/paramiko/auth_handler.py", '
'line 263, in wait_for_response\n'
' raise e\n'
'paramiko.ssh_exception.AuthenticationException: Authentication failed.\n')
[2023-12-07, 13:27:44 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 181, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 198, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/opt/airflow/dags/customer_dag.py", line 162, in customer_name
sftp = pysftp.Connection(host="xxx", username="mnsgsd", private_key="/opt/airflow/.ssh/id_rsa", cnopts=cnopts)
File "/home/airflow/.local/lib/python3.10/site-packages/pysftp/__init__.py", line 143, in __init__
self._transport.connect(**self._tconnect)
File "/home/airflow/.local/lib/python3.10/site-packages/paramiko/transport.py", line 1411, in connect
self.auth_publickey(username, pkey)
File "/home/airflow/.local/lib/python3.10/site-packages/paramiko/transport.py", line 1658, in auth_publickey
return self.auth_handler.wait_for_response(my_event)
File "/home/airflow/.local/lib/python3.10/site-packages/paramiko/auth_handler.py", line 263, in wait_for_response
raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.
[2023-12-07, 13:27:44 UTC] {taskinstance.py:1345} INFO - Marking task as FAILED. dag_id=customer_name, task_id=task_customer_name, execution_date=20231207T132738, start_date=20231207T132742, end_date=20231207T132744
[2023-12-07, 13:27:44 UTC] {standard_task_runner.py:104} ERROR - Failed to execute job 210 for task task_customer_name (Authentication failed.; 91)
[2023-12-07, 13:27:44 UTC] {local_task_job_runner.py:225} INFO - Task exited with return code 1
[2023-12-07, 13:27:44 UTC] {taskinstance.py:2653} INFO - 0 downstream tasks scheduled from follow-on schedule check
</code></pre>
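The debug log itself hints at the cause: the server is OpenSSH 6.7, which predates RFC 8332, while paramiko defaults to the newer `rsa-sha2-512`/`rsa-sha2-256` signature types ("Server did not send a server-sig-algs list; defaulting to our first preferred algo"). The log's own NOTE suggests the workaround: pass `disabled_algorithms` so paramiko falls back to classic `ssh-rsa`. A sketch (hostname, user, and key path are placeholders; `pysftp` does not expose this kwarg, so this uses paramiko's `SSHClient` directly):

```python
# Hypothetical helper: build the SSHClient.connect() kwargs that force
# legacy ssh-rsa signatures for servers that reject rsa-sha2-*.
def legacy_rsa_connect_kwargs(host, user, key_path):
    return {
        "hostname": host,
        "username": user,
        "key_filename": key_path,
        # Disable the RFC 8332 signature types the old server rejects:
        "disabled_algorithms": {"pubkeys": ["rsa-sha2-512", "rsa-sha2-256"]},
    }

kwargs = legacy_rsa_connect_kwargs("example.com", "user", "/opt/airflow/.ssh/id_rsa")

# With paramiko (not executed here):
# ssh = paramiko.SSHClient()
# ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# ssh.connect(**kwargs)
# sftp = ssh.open_sftp()
```

If forcing `ssh-rsa` fixes it, the longer-term fix is upgrading the server's OpenSSH so the modern signature types work.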
|
<python><python-3.x><airflow><pysftp>
|
2023-12-07 12:36:51
| 0
| 3,611
|
Arie
|
77,619,774
| 3,932,908
|
SQLite how to persist custom data type into new table
|
<p>I am trying to write a database that includes numpy arrays (using sqlite3 with Python). However after creating an initial table, I want to perform some operations and save the result as a new table. This is all fine, except that once I create a new table the custom data type I register isn't properly carried over into the new table. A minimal example is:</p>
<p>I define some functions to convert numpy arrays to/from binary:</p>
<pre><code>import io
import sqlite3
import numpy as np
def adapt_array(arr: np.ndarray) -> memoryview:
out = io.BytesIO()
np.save(out, arr)
out.seek(0)
return sqlite3.Binary(out.read())
def convert_array(text: bytes) -> np.ndarray:
out = io.BytesIO(text)
out.seek(0)
return np.load(out)
</code></pre>
<p>I register these adapters and connect:</p>
<pre><code>sqlite3.register_adapter(np.ndarray, adapt_array)
sqlite3.register_converter("array", convert_array)
conn = sqlite3.connect("test.db", detect_types=sqlite3.PARSE_DECLTYPES)
cursor = conn.cursor()
</code></pre>
<p>and then create an initial table:</p>
<pre><code>embedding = np.random.randn(10, 64)
cursor.execute('create table test_1 (idx integer primary key, embedding array );')
for i, X in enumerate(embedding):
cursor.execute('insert into test_1 (idx, embedding) values (?,?)', (i, X))
</code></pre>
<p>I then create a new table from this first table:</p>
<pre><code>cursor.execute("create table test_2 as select idx, embedding from test_1;")
</code></pre>
<p>but now when I do the following:</p>
<pre><code>cursor.execute("select * from test_1")
data_1 = cursor.fetchall()
cursor.execute("select * from test_2")
data_2 = cursor.fetchall()
</code></pre>
<p>data_1 has the embedding field returned as a numpy array as expected, whilst data_2 has the embedding field returned as a binary string. So it seems that for whatever reason the array type is not persisted into the new table.</p>
<p>I have tried:</p>
<pre><code>cursor.execute("create table test_2 as select idx, cast(embedding as array) as embedding from test_1;")
</code></pre>
<p>but this doesn't work (just sets every embedding value to 0 for some reason). Does anyone know why this happens/how to get around this?</p>
<p>EDIT:
My current workaround:</p>
<pre><code>cursor.execute("alter table test_2 add column new_embedding array")
cursor.execute("update test_2 set new_embedding = embedding")
cursor.execute("alter table test_2 drop column embedding")
</code></pre>
<p>but I hate it...</p>
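A likely explanation, worth checking against the SQLite docs: `CREATE TABLE ... AS SELECT` derives the new columns' declared types from expression affinity, so the custom `array` declaration that `detect_types=sqlite3.PARSE_DECLTYPES` matches converters against is lost. Creating the target table with an explicit schema and populating it with `INSERT ... SELECT` keeps the declared type; a minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cursor = conn.cursor()
cursor.execute("create table t1 (idx integer primary key, embedding array)")

# CREATE TABLE ... AS SELECT: the declared type "array" is not carried over
cursor.execute("create table t2 as select idx, embedding from t1")
t2_type = next(r[2] for r in cursor.execute("pragma table_info(t2)") if r[1] == "embedding")

# Explicit schema + INSERT ... SELECT: the declared type survives, so
# PARSE_DECLTYPES can still dispatch to the registered converter on SELECT
cursor.execute("create table t3 (idx integer primary key, embedding array)")
cursor.execute("insert into t3 select idx, embedding from t1")
t3_type = next(r[2] for r in cursor.execute("pragma table_info(t3)") if r[1] == "embedding")

print(t2_type, t3_type)
```

This is the same effect as the `ALTER TABLE` workaround above, just done in one pass at creation time.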
|
<python><sqlite3-python>
|
2023-12-07 11:53:36
| 0
| 399
|
Henry
|
77,619,722
| 3,989,484
|
Cant get rid of FileNotFoundError in Django ImageField
|
<p>I have a django model Product which has an image field</p>
<pre><code>class Product(BaseProduct):
img_height = models.PositiveIntegerField(editable=False, null=True, blank=True)
img_width = models.PositiveIntegerField(editable=False, null=True, blank=True)
file = models.ImageField('Image', upload_to=product_img_path,
height_field='img_height', width_field='img_width',
null=True, blank=True, max_length=255)
</code></pre>
<p>Now, because I loaded the products from an Excel file with over 15,000 records, there are a few records whose image path in the file field does not actually exist in my directory.</p>
<p>This causes my code to raise a <code>FileNotFoundError</code> every time I iterate with <code>for product in Product.objects.all()</code>, before I am even able to catch the error with a <code>try-except</code> block.</p>
<p>I'd like to have an iteration where I can check if the file exists and set the file field to null for records with non-existing files.</p>
<p>But this is impossible, because the error is raised as soon as I access an instance of the product or start the iteration.</p>
<p>So the code below:</p>
<pre><code>products = Product.objects.all()
for product in products:
try:
if product.file:
pass
except FileNotFoundError:
product.file = None
product.save()
</code></pre>
<p>Raised error: <code>FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\user\\project\\media\\products\\blahblah.jpeg'</code>, and the stack trace shows the error was raised from the iteration line <code>for product in products:</code></p>
<p>I have tried following this thread without any luck: <a href="https://stackoverflow.com/questions/71536127/how-to-avoid-filenotfound-error-in-django-model-imagefield-when-file-does-not-ex">How to avoid FileNotFound Error in django model ImageField when file does not exist</a></p>
<p>Development has progressed, so I do not wish to change the field type to char just to perform the loop.</p>
<p>Any idea?</p>
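One explanation consistent with this behavior: when `width_field`/`height_field` are set, Django attaches a handler that reads image dimensions when each model instance is constructed, so the file is opened during iteration itself. A way around it is to fetch raw values with `values_list()` (which never instantiates model objects) and fix rows with a queryset `update()`. A sketch of the path-checking part, with the ORM calls kept as comments since they need a configured project:

```python
import os

def missing_file_pks(rows, media_root):
    """Yield the pk of every row whose stored relative path is missing on disk.

    `rows` is an iterable of (pk, relative_path) pairs, e.g. from
    Product.objects.values_list("pk", "file") -- values_list() returns raw
    column values, so no model instance (and no image open) happens.
    """
    for pk, name in rows:
        if name and not os.path.exists(os.path.join(media_root, name)):
            yield pk

# Usage in Django (not executed here):
# bad = list(missing_file_pks(Product.objects.values_list("pk", "file"),
#                             settings.MEDIA_ROOT))
# Product.objects.filter(pk__in=bad).update(file=None)  # update() also skips __init__
```

Once the dangling paths are nulled out, normal iteration should no longer raise.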
|
<python><django><exception><django-models>
|
2023-12-07 11:44:18
| 1
| 1,428
|
Oluwatumbi
|
77,619,641
| 5,594,008
|
Wagtail PageChooserBlock custom ordering
|
<p>I'm using PageChooserBlock to display list of objects</p>
<pre><code>class ActualSection(models.Model):
content = StreamField(
[
("article", PageChooserBlock(page_type="articles.Article")),
],
min_num=1,
)
</code></pre>
<p>Is there any way to put some custom ordering on it? Right now it seems to have a random order.</p>
|
<python><django><wagtail>
|
2023-12-07 11:27:12
| 1
| 2,352
|
Headmaster
|
77,619,517
| 12,052,180
|
How to align, with comma separator and specific decimal points in Python prints
|
<p>When printing a number in python I would like to</p>
<ol>
<li>Align right (using the align specifier <code>></code>)</li>
<li>Align width (using the width specific)</li>
<li>Specify the number of decimal places</li>
<li>Set a separator between the thousands</li>
</ol>
<p>I know how to do each of these separately. Let's take the number <code>1000000.12345</code> as an example.</p>
<p>(1 & 2 & 3) Doing <code>print(f"{1000000.12345:>20.2f}")</code> results in</p>
<pre><code> 1000000.12
</code></pre>
<p>I also know how to only do (1 & 2 & 4). Namely <code>print(f"{1000000.12345:>20,}")</code> which results in</p>
<pre><code> 1,000,000.12345
</code></pre>
<p>But what I want is to do the latter but also trim the decimal places, such that I get something like</p>
<pre><code> 1,000,000.12
</code></pre>
<p>How can I do that?</p>
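Grouping and precision can be combined in a single format spec: the order in the mini-language is fill/align, width, grouping option, then precision and type, i.e. `:>20,.2f`. A quick check:

```python
n = 1000000.12345
# right-align, width 20, thousands separator, 2 decimal places
formatted = f"{n:>20,.2f}"
print(repr(formatted))  # '        1,000,000.12'
```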
|
<python><printing>
|
2023-12-07 11:05:58
| 1
| 802
|
PeeteKeesel
|
77,619,500
| 10,695,613
|
RSA decryption of a column in PySpark
|
<p>I tried to use the <code>cryptography</code> module to perform RSA decryption on a column in my Spark dataframe:</p>
<pre><code>import base64

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
from cryptography.hazmat.backends.openssl.rsa import _RSAPrivateKey
from cryptography.hazmat.primitives.asymmetric import padding
def decrypt(encoded_ciphertext: str, key: _RSAPrivateKey) -> str:
if encoded_ciphertext is None:
return ""
try:
decoded_ciphertext = base64.b64decode(encoded_ciphertext)
decrypted_data = key.decrypt(decoded_ciphertext, padding.PKCS1v15())
return decrypted_data.decode("utf-8")
except ValueError as e:
print(f"Decryption error: {e}")
return ""
decrypt_udf = udf(decrypt, StringType())
sdf.withColumn(
"decrypted",
decrypt_udf(sdf["encrypted"]),
).show()
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>TypeError: cannot pickle '_cffi_backend.FFI' object</p>
</blockquote>
<p><strong>Unfortunately</strong>, <code>cryptography</code> uses a <code>_cffi_backend.FFI</code> object somewhere, and <code>cffi</code> is one of its dependencies:</p>
<pre><code>pipdeptree --packages cryptography
cryptography==3.4.8
└── cffi [required: >=1.12, installed: 1.16.0]
└── pycparser [required: Any, installed: 2.21]
</code></pre>
<p>Apparently, PySpark cannot pickle C objects such as _cffi_backend.FFI.</p>
<p>I could convert my Spark dataframe to a Pandas dataframe and then it would work, but the function is really, <strong>really</strong> slow. Not to mention that the dataframe is massive and I have to call <code>limit()</code> on it to fit it into memory.</p>
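A common workaround (a sketch, not verified against your cluster): never capture the `_RSAPrivateKey` in the UDF's closure. Capture only its PEM-serialized bytes, which are plain picklable `bytes`, and lazily rebuild the key inside the function on each executor. With `cryptography` the loader would be `serialization.load_pem_private_key`; the loader below is a stand-in so the sketch runs without the library:

```python
import pickle

def make_decrypt_fn(pem_bytes: bytes):
    """Build a UDF-friendly function whose closure holds only picklable state."""
    cache = {}  # per-process cache for the deserialized key

    def decrypt(encoded_ciphertext):
        if encoded_ciphertext is None:
            return ""
        if "key" not in cache:
            # Real code: from cryptography.hazmat.primitives import serialization
            #            cache["key"] = serialization.load_pem_private_key(pem_bytes, password=None)
            cache["key"] = pem_bytes  # stand-in for the loaded key object
        return "<decrypted>"  # real code: base64-decode, cache["key"].decrypt(...), utf-8 decode

    return decrypt

pem = b"-----BEGIN PRIVATE KEY-----..."  # real code: key.private_bytes(...)
fn = make_decrypt_fn(pem)
# The captured state is bytes, which (unlike the FFI-backed key object) pickles fine:
pickle.loads(pickle.dumps(pem))
```

`decrypt_udf = udf(make_decrypt_fn(pem), StringType())` would then be serialized by Spark with only the PEM bytes in its closure, avoiding the `_cffi_backend.FFI` pickling error.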
|
<python><pyspark><cryptography>
|
2023-12-07 11:03:23
| 0
| 405
|
BovineScatologist
|
77,619,442
| 12,468,539
|
Uploading large file to SharePoint from io.BytesIO instance results in `io.UnsupportedOperation: fileno` exception
|
<p>I am using the <a href="https://github.com/vgrem/Office365-REST-Python-Client" rel="nofollow noreferrer">Office365-REST-Python-Client library</a> to upload some relatively large CSV files to a SharePoint document library via an <code>io.BytesIO</code> instance. I do this by passing the byte array to the following method:</p>
<pre><code>from office365.sharepoint.folders.folder import Folder
from office365.sharepoint.files.file import File
def write_file_bytes(self, relative_url: str, file_name: str, file_bytes: bytes) -> None:
folder: Folder = self.client_context.web.get_folder_by_server_relative_url(relative_url)
chunk_size: int = 1024 * 1024 * 15
# File bytes to IO stream
file_bytes: io.BytesIO = io.BytesIO(file_bytes)
folder.files.create_upload_session(file_bytes, chunk_size=chunk_size, file_name=file_name).execute_query()
</code></pre>
<p>Based on <a href="https://stackoverflow.com/questions/76774753/python-with-office365-rest-api-upload-large-files-to-sharepoint-using-create-upl">this StackOverflow question</a>, writing the file from an <code>io.BytesIO</code> instance is indeed possible, but the file_name and file_size should be passed as keyword arguments to <code>chunk_uploaded</code>. However, even when specifying a callback that takes the file size as an argument, I still get an <code>io.UnsupportedOperation: fileno</code> exception.</p>
<p>Uploading the file from either a byte array or an <code>io.BytesIO</code> instance is necessary due to the nature of what I am doing. So I can unfortunately not specify a local path to the file.</p>
<p>When performing a simple upload using the following:</p>
<pre><code>folder.upload_file(file_name, file_bytes).execute_query()
</code></pre>
<p>Everything works as expected, but this is limited to a file size of 4.0MB, which is unfortunately too small for my needs.</p>
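The `fileno` error suggests the library tries to `os.fstat()` the stream to learn its size, and `io.BytesIO` has no OS-level file descriptor. The size is still obtainable without one, which is presumably what the explicit `file_size` keyword in the linked question is for; a quick illustration (the upload-session call itself is left out, since its exact signature is an assumption about the library):

```python
import io

data = b"x" * (1024 * 1024)  # pretend file contents
stream = io.BytesIO(data)

# BytesIO has no underlying file descriptor ...
try:
    stream.fileno()
    has_fd = True
except io.UnsupportedOperation:
    has_fd = False

# ... but its size is knowable without one:
size = stream.getbuffer().nbytes  # or: stream.seek(0, io.SEEK_END); stream.tell()
print(has_fd, size)
```

Passing that computed size explicitly to the chunked-upload API, rather than letting it stat the stream, is the direction the linked answer points at.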
|
<python><sharepoint><bytesio><office365-rest-client>
|
2023-12-07 10:55:18
| 1
| 683
|
ChaddRobertson
|
77,619,436
| 1,346,690
|
How can I mock pydantic validators in unit tests?
|
<p>I'm trying to mock a validator in a unit test; I just want to see whether the validator was called. However, my patch path does not seem to be correct: I have tried many variations, but the one used here should be fine, yet it is not.</p>
<p>project structure :</p>
<p><a href="https://i.sstatic.net/5Mo6H.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Mo6H.jpg" alt="enter image description here" /></a></p>
<p>awx_models.py :</p>
<pre><code>from pydantic import BaseModel, Field, validator
class Playbook(BaseModel):
url: str = Field(title="the url")
@validator('url')
def validate_playbook_url(cls, value):
if value == "titi":
raise Exception("should have been toto")
return value
</code></pre>
<p>test_awx_models.py:</p>
<pre><code>from unittest.mock import patch
from src.models.awx_models import Playbook
@patch("src.models.awx_models.Playbook.validate_playbook_url")
def test_that_validator_is_called(mock_validate_playbook_url):
Playbook(url="toto")
mock_validate_playbook_url.assert_called_once()
</code></pre>
<p>test results :</p>
<pre><code>Launching pytest with arguments C:/checkout2/test_unit_test/test/unit/src/models/test_awx_models.py --no-header --no-summary -q in C:\checkout2\test_unit_test\test\unit\src\models
============================= test session starts =============================
collecting ... collected 1 item
test_awx_models.py::test_that_validator_is_called FAILED [100%]
test_awx_models.py:4 (test_that_validator_is_called)
mock_validate_playbook_url = <MagicMock name='validate_playbook_url' id='2306368451080'>
@patch("src.models.awx_models.Playbook.validate_playbook_url")
def test_that_validator_is_called(mock_validate_playbook_url):
Playbook(url="toto")
> mock_validate_playbook_url.assert_called_once()
test_awx_models.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_mock_self = <MagicMock name='validate_playbook_url' id='2306368451080'>
def assert_called_once(_mock_self):
"""assert that the mock was called only once.
"""
self = _mock_self
if not self.call_count == 1:
msg = ("Expected '%s' to have been called once. Called %s times." %
(self._mock_name or 'mock', self.call_count))
> raise AssertionError(msg)
E AssertionError: Expected 'validate_playbook_url' to have been called once. Called 0 times.
C:\Python36\lib\unittest\mock.py:795: AssertionError
============================== 1 failed in 0.11s ==============================
</code></pre>
<p>This is just a small test project to narrow down the issue; in reality I have massive pydantic objects with tons of validators. Those validators are externalized as functions in validators.py files.</p>
<p>We test the behavior of those validators separately, and then in the model tests we just want to check that our validators are called, without them actually running.</p>
<p>That way we don't have to build 20 versions of the models to test the validator behaviors.</p>
<p>Thanks.</p>
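A likely cause: pydantic collects a reference to each validator function at class-creation time, so patching the class attribute afterwards replaces the attribute but not the reference that validation actually calls. The same effect can be shown with a plain decorator registry:

```python
from unittest.mock import patch

registry = []

def collect(fn):
    registry.append(fn)  # reference captured when the class body runs
    return fn

class Model:
    @collect
    def validate(value):
        return value + "!"

with patch.object(Model, "validate") as mock_validate:
    # "validation" goes through the registry reference, not the patched attribute:
    result = registry[0]("toto")

print(result, mock_validate.called)
```

Given that, one pragmatic pattern is to unit-test the externalized functions in validators.py directly, and in the model tests feed valid/invalid data and assert on the raised errors, rather than asserting on mock calls.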
|
<python><list><unit-testing><pytest><pydantic>
|
2023-12-07 10:54:45
| 1
| 6,281
|
Heetola
|
77,619,352
| 5,868,293
|
Voronoi diagram gives unexpected results in scipy
|
<p>I have the following pandas dataframe:</p>
<pre><code>import pandas as pd
centers_dt = pd.DataFrame({'cl': {0: 'A', 1: 'C', 2: 'H', 3: 'M', 4: 'S'},
'd': {0: 245.059986986012,
1: 320.49044143557785,
2: 239.79023081978914,
3: 263.38325791238833,
4: 219.53334398353175},
'p': {0: 10.971011721360075,
1: 10.970258360366753,
2: 13.108487516946218,
3: 12.93241352743668,
4: 13.346107628161008}})
cl d p
0 A 245.059987 10.971012
1 C 320.490441 10.970258
2 H 239.790231 13.108488
3 M 263.383258 12.932414
4 S 219.533344 13.346108
</code></pre>
<p>I want to create a <a href="https://en.wikipedia.org/wiki/Voronoi_diagram" rel="nofollow noreferrer">Voronoi</a> diagram. To do so I am using the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Voronoi.html" rel="nofollow noreferrer">package from scipy</a>.</p>
<p>I am using the following code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d
centers2 = np.array(
centers_dt[['d', 'p']]
)
scatter_x = np.array(centers_dt['d'])
scatter_y = np.array(centers_dt['p'])
group = np.array(centers_dt['cl'])
cdict = {'C': 'red', 'A': 'blue', 'H': 'green', 'M': 'yellow', 'S': 'black'}
fig, ax = plt.subplots()
for g in np.unique(group):
ix = np.where(group == g)
ax.scatter(scatter_x[ix], scatter_y[ix], c = cdict[g], label = g, s = 100)
ax.legend()
vor = Voronoi(centers2)
fig = voronoi_plot_2d(vor,plt.gca())
plt.show()
plt.close()
</code></pre>
<p>But the result I am getting is unexpected:</p>
<p><a href="https://i.sstatic.net/p6XoK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p6XoK.png" alt="enter image description here" /></a></p>
<p>A border is missing, and the borders that are drawn seem a bit off.</p>
<p>Any ideas ?</p>
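Part of this is expected behavior: every generator point on the convex hull of the input has an unbounded Voronoi cell, and with only five points most cells are unbounded. `voronoi_plot_2d` draws those infinite ridges as dashed rays clipped at the plot limits rather than closed borders; to get closed cells you would have to clip the diagram against a bounding box yourself. The unbounded regions are detectable directly, which is a quick sanity check before suspecting the geometry:

```python
import numpy as np
from scipy.spatial import Voronoi

# Four hull points plus one strictly interior point (kept off-center
# to avoid a degenerate, cocircular configuration)
pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0], [0.9, 1.1]])
vor = Voronoi(pts)

# A region whose vertex list contains -1 extends to infinity
unbounded_pts = [i for i in range(len(pts)) if -1 in vor.regions[vor.point_region[i]]]
print(unbounded_pts)
```

Here only the interior point gets a bounded (fully bordered) cell; the four hull points do not, which matches the plot in the question where the outer cells have "missing" borders.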
|
<python><scipy><spatial><voronoi>
|
2023-12-07 10:38:58
| 1
| 4,512
|
quant
|
77,619,201
| 3,494,774
|
Are SQLAlchemy MetaData objects serializable?
|
<p>In my application, I have multiple processes that run <code>MetaData.reflect</code> against some large DB, which takes a while. Is there a way to serialize and store the results of reflection so that I can simply load <code>MetaData</code> from a file instead of generating it each time?</p>
<p>Tried to <code>pickle</code> them, which didn't work.</p>
|
<python><serialization><sqlalchemy>
|
2023-12-07 10:16:21
| 1
| 11,377
|
gog
|
77,619,112
| 15,275,530
|
Error while using async apscheduler in FastApi
|
<p>I am using AsyncIOScheduler and SQLAlchemyJobStore from apscheduler asynchronously in FastAPI, and I am getting an error.
Full error below:</p>
<pre><code> File "D:\app\endpoints\admin\scheduler.py", line 23, in <module>
Schedule.start()
File "D:\intergration-server\venv\Lib\site-packages\apscheduler\schedulers\asyncio.py", line 37, in start
super(AsyncIOScheduler, self).start(paused)
File "D:\intergration-server\venv\Lib\site-packages\apscheduler\schedulers\base.py", line 173, in start
store.start(self, alias)
File "D:\intergration-server\venv\Lib\site-packages\apscheduler\jobstores\sqlalchemy.py", line 68, in start
self.jobs_t.create(self.engine, True)
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\sql\schema.py", line 1293, in create
bind._run_ddl_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst)
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 3242, in _run_ddl_visitor
with self.begin() as conn:
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 3232, in begin
with self.connect() as conn:
^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 3268, in connect
return self._connection_cls(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 3292, in raw_connection
return self.pool.connect()
^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\pool\base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\pool\base.py", line 1269, in _checkout
fairy = _ConnectionRecord.checkout(pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\pool\base.py", line 716, in checkout
rec = pool._do_get()
^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\pool\impl.py", line 169, in _do_get
with util.safe_reraise():
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\util\langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\pool\impl.py", line 167, in _do_get
return self._create_connection()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\pool\base.py", line 393, in _create_connection
return _ConnectionRecord(self)
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\pool\base.py", line 678, in __init__
self.__connect()
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\pool\base.py", line 902, in __connect
with util.safe_reraise():
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\util\langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\pool\base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\engine\create.py", line 637, in connect
return dialect.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\engine\default.py", line 616, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\dialects\mysql\asyncmy.py", line 281, in connect
await_only(creator_fn(*arg, **kw)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\intergration-server\venv\Lib\site-packages\sqlalchemy\util\_concurrency_py3k.py", line 116, in await_only
raise exc.MissingGreenlet(
sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/20/xd2s)
</code></pre>
<p>This error occurs at the <code>Schedule.start()</code> line in the code below.</p>
<pre><code>jobstore = {
"default": SQLAlchemyJobStore(url=manager.SQLALCHEMY_DATABASE_URL, engine_options={"pool_pre_ping": True}),
}
Schedule = AsyncIOScheduler(jobstores=jobstore, timezone="UTC")
Schedule.start()
def resp_ok(*, code=0, msg="ok", data: Union[list, dict, str] = None) -> dict:
return {"code": code, "msg": msg, "data": data}
def resp_fail(*, code=1, msg="fail", data: Union[list, dict, str] = None):
return {"code": code, "msg": msg, "data": data}
def cron_task(a1: str) -> None:
print(a1, time.strftime("'%Y-%m-%d %H:%M:%S'"))
@router.get("/jobs/all", tags=["schedule"], summary="Get all job info")
async def get_scheduled_syncs():
schedules = []
for job in Schedule.get_jobs():
schedules.append(
{
"job_id": job.id,
"func_name": job.func_ref,
"func_args": job.args,
"cron_model": str(job.trigger),
"next_run": str(job.next_run_time),
}
)
return resp_ok(data=schedules)
</code></pre>
<p>Please help me fix this.</p>
<p>I tried awaiting <code>Schedule.start()</code>, but it didn't work. I also tried a different MySQL database URL. There is something wrong with calling <code>start()</code>, and I can't figure it out.</p>
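The traceback points at the job store rather than the scheduler: `SQLAlchemyJobStore` is synchronous, and the `MissingGreenlet` raised from `dialects/mysql/asyncmy.py` indicates the store was handed a URL for the async `asyncmy` driver. One sketch of a fix, assuming `manager.SQLALCHEMY_DATABASE_URL` uses `mysql+asyncmy` and that the sync `pymysql` driver is installed, is to give the job store its own sync-driver URL:

```python
# Hypothetical: derive a sync-driver URL for the (synchronous) job store
ASYNC_URL = "mysql+asyncmy://user:pass@host:3306/db"  # stand-in for manager.SQLALCHEMY_DATABASE_URL
SYNC_URL = ASYNC_URL.replace("mysql+asyncmy", "mysql+pymysql")

# jobstore = {"default": SQLAlchemyJobStore(url=SYNC_URL,
#                                           engine_options={"pool_pre_ping": True})}
print(SYNC_URL)
```

Separately, `AsyncIOScheduler.start()` generally needs a running event loop, so calling it from a FastAPI startup event handler rather than at import time is a common pattern.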
|
<python><python-3.x><asynchronous><backend><fastapi>
|
2023-12-07 10:02:00
| 0
| 404
|
Sarim Sikander
|
77,619,032
| 240,864
|
AWS Cognito masking data in lambda
|
<p>I am seeing very weird behavior in a feature I am building.</p>
<p>I use AWS Cognito for user management on a system I am building. On account creation, Cognito sends an invitation email (via a Custom Sender Lambda) to the user with a one-time user code.
I want to encode this code with base64.</p>
<p>Here's a sample block:</p>
<pre><code> #get the value from the event Cognito sends
value = event["request"]["codeParameter"]
#encode it with base64
value_bytes = value.encode('ascii')
base64_bytes = base64.b64encode(value_bytes)
base64_string = base64_bytes.decode('ascii')
logging.info = ("base 64 string : %s", base64_string)
</code></pre>
<p>Next, this value is written into HTML and sent as an email.
If I skip the base64 encoding, everything works well.
However, if I do the base64 encoding, the decoded value is ALWAYS {####}, which I assume is masked by Cognito / AWS.
This block of code works perfectly fine on a local machine outside of AWS Lambda, but never works on AWS: whichever (randomly generated) code it receives, I get the same output after base64 decoding, which is {####}.</p>
<p>The same thing happens if I try:</p>
<ul>
<li>to change the algorithm - base32, urlencode</li>
<li>copy the string in a new property by value (as opposed to by reference)</li>
</ul>
<p>Any ideas what causes this behavior ?</p>
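One explanation that fits the symptoms: in the Custom Message trigger, `event["request"]["codeParameter"]` is not the actual code but the literal placeholder string `{####}`, which Cognito substitutes into the final message only after the Lambda returns. Base64-encoding it hides the placeholder from that substitution step, so decoding the email's value later just yields `{####}` back:

```python
import base64

# What the Custom Message trigger actually hands the Lambda
# (the placeholder, not the generated code):
code_parameter = "{####}"

encoded = base64.b64encode(code_parameter.encode("ascii")).decode("ascii")
decoded = base64.b64decode(encoded).decode("ascii")
print(decoded)
```

If the real code needs transforming, the Custom Email Sender trigger is the place to do it; there the actual code arrives in the event, KMS-encrypted, and must be decrypted before use.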
|
<python><amazon-web-services><aws-lambda><amazon-cognito>
|
2023-12-07 09:49:11
| 1
| 1,603
|
Tancho
|
77,618,829
| 6,907,424
|
cffi.VerificationError: CompileError: command '/usr/bin/gcc' failed with exit code 1
|
<p>While installing the package <code>pewanalytics</code> with the command <code>pip install git+https://github.com/pewresearch/pewanalytics#egg=pewanalytics </code> I got the following error:</p>
<pre><code>
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [130 lines of output]
/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/setup.py:8: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import parse_version
/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/__init__.py:80: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!
********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
!!
dist.fetch_build_eggs(dist.setup_requires)
running egg_info
creating /tmp/pip-pip-egg-info-wmb6iul7/ssdeep.egg-info
writing /tmp/pip-pip-egg-info-wmb6iul7/ssdeep.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-pip-egg-info-wmb6iul7/ssdeep.egg-info/dependency_links.txt
writing requirements to /tmp/pip-pip-egg-info-wmb6iul7/ssdeep.egg-info/requires.txt
writing top-level names to /tmp/pip-pip-egg-info-wmb6iul7/ssdeep.egg-info/top_level.txt
writing manifest file '/tmp/pip-pip-egg-info-wmb6iul7/ssdeep.egg-info/SOURCES.txt'
src/ssdeep/__pycache__/_ssdeep_cffi_a28e5628x27adcb8d.c:266:14: fatal error: fuzzy.h: No such file or directory
266 | #include "fuzzy.h"
| ^~~~~~~~~
compilation terminated.
Traceback (most recent call last):
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/unixccompiler.py", line 185, in _compile
self.spawn(compiler_so + cc_args + [src, '-o', obj] + extra_postargs)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/ccompiler.py", line 1041, in spawn
spawn(cmd, dry_run=self.dry_run, **kwargs)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/spawn.py", line 70, in spawn
raise DistutilsExecError(
distutils.errors.DistutilsExecError: command '/usr/bin/gcc' failed with exit code 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/.eggs/cffi-1.16.0-py3.12-linux-x86_64.egg/cffi/ffiplatform.py", line 48, in _build
dist.run_command('build_ext')
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/dist.py", line 963, in run_command
super().run_command(command)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/command/build_ext.py", line 88, in run
_build_ext.run(self)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 345, in run
self.build_extensions()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 467, in build_extensions
self._build_extensions_serial()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 493, in _build_extensions_serial
self.build_extension(ext)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/command/build_ext.py", line 249, in build_extension
_build_ext.build_extension(self, ext)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 548, in build_extension
objects = self.compiler.compile(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/ccompiler.py", line 600, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/unixccompiler.py", line 187, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command '/usr/bin/gcc' failed with exit code 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/setup.py", line 108, in <module>
setup(
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/dist.py", line 963, in run_command
super().run_command(command)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 321, in run
self.find_sources()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 329, in find_sources
mm.run()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 551, in run
self.add_defaults()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 589, in add_defaults
sdist.add_defaults(self)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/command/sdist.py", line 112, in add_defaults
super().add_defaults()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/command/sdist.py", line 249, in add_defaults
self._add_defaults_python()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/command/sdist.py", line 123, in _add_defaults_python
build_py = self.get_finalized_command('build_py')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 305, in get_finalized_command
cmd_obj.ensure_finalized()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 111, in ensure_finalized
self.finalize_options()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/command/build_py.py", line 39, in finalize_options
orig.build_py.finalize_options(self)
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/command/build_py.py", line 46, in finalize_options
self.set_undefined_options(
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 293, in set_undefined_options
src_cmd_obj.ensure_finalized()
File "/home/hafiz031/anaconda3/envs/rnd/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 111, in ensure_finalized
self.finalize_options()
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/setup.py", line 24, in finalize_options
self.distribution.ext_modules = get_ext_modules()
^^^^^^^^^^^^^^^^^
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/setup.py", line 79, in get_ext_modules
binding.verify()
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/src/ssdeep/binding.py", line 126, in verify
self._lib = self.ffi.verify(
^^^^^^^^^^^^^^^^
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/.eggs/cffi-1.16.0-py3.12-linux-x86_64.egg/cffi/api.py", line 468, in verify
lib = self.verifier.load_library()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/.eggs/cffi-1.16.0-py3.12-linux-x86_64.egg/cffi/verifier.py", line 105, in load_library
self._compile_module()
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/.eggs/cffi-1.16.0-py3.12-linux-x86_64.egg/cffi/verifier.py", line 201, in _compile_module
outputfilename = ffiplatform.compile(tmpdir, self.get_extension())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/.eggs/cffi-1.16.0-py3.12-linux-x86_64.egg/cffi/ffiplatform.py", line 20, in compile
outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-install-3oxpfcc8/ssdeep_b293003753c04cf9af66836870ebce44/.eggs/cffi-1.16.0-py3.12-linux-x86_64.egg/cffi/ffiplatform.py", line 54, in _build
raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.VerificationError: CompileError: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>How to fix this?</p>
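The decisive line in the log is <code>fatal error: fuzzy.h: No such file or directory</code>: the <code>ssdeep</code> Python binding (pulled in by <code>pewanalytics</code>) compiles against the system libfuzzy headers, which are not installed. A minimal probe plus the usual fix is sketched below; the <code>libfuzzy-dev</code> package name is an assumption for Debian/Ubuntu (other distros name it differently, e.g. <code>ssdeep-devel</code> on Fedora):

```shell
# Check whether the libfuzzy development header is visible to the compiler.
if echo '#include "fuzzy.h"' | gcc -E -x c - >/dev/null 2>&1; then
  echo "fuzzy.h found"
else
  # After installing the header, re-run the original pip install command.
  echo "fuzzy.h missing - try: sudo apt-get install libfuzzy-dev"
fi
```

The log's own hint (`pip install --use-pep517`) only changes how metadata is built; it does not supply the missing C header.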
|
<python><gcc><pip>
|
2023-12-07 09:14:41
| 1
| 2,916
|
hafiz031
|
77,618,429
| 4,796,942
|
PyPi package versions compatible with Cloud Composer's `apache-airflow-providers-microsoft-azure`
|
<p>I'm attempting to load a few packages into a Google Cloud Composer 2 instance. I keep running into dependency or package conflicts and am unable to find package versions that work together.</p>
<p>The <code>requirements.txt</code> file that keeps failing looks like this:</p>
<pre><code>apache-airflow-providers-microsoft-azure == 8.3.0
apache-airflow-providers-google <= 10.11.1
numpy == 1.26.2
pandas == 2.1.3
datetime == 5.3
</code></pre>
<p>I found that I can load the <code>requirements.txt</code> file directly into the Cloud Composer environment with:</p>
<pre><code>gcloud composer environments update [ENVIRONMENT] --location [LOCATION] --update-pypi-packages-from-file requirements.txt --verbosity=debug
</code></pre>
<p>where <code>[ENVIRONMENT]</code> is the Cloud Composer environment name and <code>[LOCATION]</code> is its location.</p>
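Before pushing pins to Composer, it can help to see which versions the environment already ships, since Composer rejects pins that conflict with its preinstalled packages. A minimal sketch, assuming it is run inside the target environment (e.g. from an Airflow task); the package names are taken from the <code>requirements.txt</code> above:

```python
# Report the installed version of each pinned package so the pins in
# requirements.txt can be aligned with what the image already provides.
from importlib.metadata import version, PackageNotFoundError

pins = [
    "apache-airflow-providers-microsoft-azure",
    "apache-airflow-providers-google",
    "numpy",
    "pandas",
]

report = {}
for name in pins:
    try:
        report[name] = version(name)
    except PackageNotFoundError:
        report[name] = "not installed"

for name, ver in report.items():
    print(f"{name}: {ver}")
```

Run locally the provider packages will typically show as "not installed"; inside the Composer image they are preinstalled, and their reported versions are the ones your pins must be compatible with.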
|
<python><google-cloud-platform><dependencies><airflow><google-cloud-composer>
|
2023-12-07 07:54:02
| 1
| 1,587
|
user4933
|