| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable) |
|---|---|---|---|---|---|---|---|---|
75,626,023
| 2,299,245
|
Working with large rasters using rioxarray
|
<p>I am trying to re-write some of my R code in Python for doing raster reclassification on large rasters in a memory-safe way.</p>
<p>In R I would write the following. Because I have provided a filename argument to the classify call, this works for large rasters and writes the results to a file. No memory worries.</p>
<pre><code>library("terra")
my_rast <- rast("my_rast.tif")
classify(my_rast, cbind(1, 10), filename = "reclassed_rast.tif")
</code></pre>
<p>I have read that rioxarray is good for raster processing in Python, so I have written the code below, but it runs out of memory. How do I fix that?</p>
<pre><code>import rioxarray
my_rast = rioxarray.open_rasterio("my_rast.tif", cache=False, chunks = "auto")
my_rast_reclass = my_rast.where(my_rast != 1, 10)
my_rast_reclass.rio.to_raster("reclassed_rast.tif")
</code></pre>
|
<python><raster><python-xarray><rasterio>
|
2023-03-03 10:41:13
| 0
| 949
|
TheRealJimShady
|
75,625,986
| 914,693
|
Invoke an HFArgumentParser based script from within a Click command
|
<p>I have a script <code>trainmodel.py</code> from an existing codebase (built on Hugging Face's <code>HfArgumentParser</code>, which is based on argparse) with almost 100 different arguments. The script exposes a <code>main()</code> function that parses each argument individually and, when run with no arguments, prints the entire help text.</p>
<p>I need to wrap this script from another codebase into my <code>Click</code>-based CLI, but I am having a hard time figuring out a good way to do it. Specifically, my CLI interface should inherit all the arguments accepted by <code>trainmodel.py</code> and replicate them within the Click CLI subcommand.</p>
<p>Here is an example of <code>trainmodel.py</code></p>
<pre class="lang-py prettyprint-override"><code>def main():
parser = HfArgumentParser(
(ModelArguments, DataTrainingArguments, TrainingArguments)
)
... lot of code to train a deep learning model with almost 100 parameters
</code></pre>
<p>My existing CLI, though, needs to wrap the existing <code>main()</code> function in a command that belongs to my existing library's CLI group:</p>
<pre class="lang-py prettyprint-override"><code>@click.command()
def my_train_model():
from models.trainmodel import main
main(*sys.argv[3:])
</code></pre>
<p>The command <code>my_train_model</code> is itself part of a click group in the outer <code>cli.py</code></p>
<p><code>cli.py</code></p>
<pre class="lang-py prettyprint-override"><code>@click.group(name="model")
def model():
pass
model.add_command(my_train_model)
cli.add_command(model)
</code></pre>
<p>For example, when I need to call the original script help, my interface with this call</p>
<pre class="lang-bash prettyprint-override"><code>mylibrary-cli model my-train-model --help
</code></pre>
<p>should pass the <code>--help</code> argument to the <code>trainmodel.main()</code> function and reproduce the arguments as from it.</p>
<p>How can I override the click interface and make the entire list of arguments be passed to the <code>trainmodel.py</code> script with the same help without manually copying all arguments inside my <code>click.command</code> decorated function <code>my_train_model</code>?
I would like to avoid hardcoding all the arguments accepted by <code>trainmodel.main()</code>.</p>
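<p>A sketch of one Click pattern that may fit here (hedged: the stand-in <code>main()</code> below mimics an argparse-driven <code>trainmodel.main()</code>; the real one comes from the other codebase). The subcommand disables Click's own option parsing, collects everything verbatim, rebuilds <code>sys.argv</code>, and delegates, so <code>--help</code> and all ~100 flags reach the wrapped parser untouched:</p>

```python
import sys

import click


def main():
    # Hypothetical stand-in for models.trainmodel.main: it parses sys.argv itself,
    # just like the HfArgumentParser-based script in the question.
    import argparse
    parser = argparse.ArgumentParser(prog="trainmodel.py")
    parser.add_argument("--epochs", type=int, default=1)
    args = parser.parse_args()
    click.echo(f"epochs={args.epochs}")


@click.command(context_settings=dict(
    ignore_unknown_options=True,  # pass unrecognized flags through instead of erroring
    help_option_names=[],         # so --help reaches trainmodel, not Click
))
@click.argument("train_args", nargs=-1, type=click.UNPROCESSED)
def my_train_model(train_args):
    # Rebuild argv exactly as trainmodel.py expects, then delegate to it.
    sys.argv = ["trainmodel.py", *train_args]
    main()
```

This avoids duplicating any argument definitions; the trade-off is that Click knows nothing about the wrapped options, so shell completion for them is lost.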
|
<python><command-line-interface><argparse><huggingface-transformers><python-click>
|
2023-03-03 10:37:29
| 0
| 8,721
|
linello
|
75,625,978
| 14,269,252
|
Change the position of text in double slider
|
<p>I defined a double-ended slider in a Streamlit app. I want to show the start-point text on top and the end-point text on the bottom (or at least the start point slightly higher), so the texts don't overlap when the start and end points are too close. How should I modify it?</p>
<pre><code>slider = cols1.slider('Select date', min_value=start_dt, value=[start_dt,index_dt] ,max_value=end_dt, format=format)
</code></pre>
|
<python><streamlit>
|
2023-03-03 10:36:43
| 1
| 450
|
user14269252
|
75,625,951
| 13,066,054
|
how to use column values in window function range between
|
<p>I have a pyspark dataframe which looks like this (formatted for a better view; data partitioned by <code>company_id</code> and <code>as_on_date</code>, ordered by <code>t_tax_period</code>):</p>
<pre><code>+-----------------+----------+------------+----------+--------------+---------------+---------------+
|company_master_id|as_on_date|t_tax_period|delay_days|days_90_before|days_180_before|days_365_before|
+-----------------+----------+------------+----------+--------------+---------------+---------------+
| 3|2022-08-01| 2022-06-30| 10| 2022-05-03| 2022-02-02| 2021-08-01|
| 3|2022-08-01| 2022-03-31| 12| 2022-05-03| 2022-02-02| 2021-08-01|
| 3|2022-08-01| 2021-12-31| 18| 2022-05-03| 2022-02-02| 2021-08-01|
| 3|2022-08-01| 2021-09-30| 6| 2022-05-03| 2022-02-02| 2021-08-01|
| 3|2022-08-01| 2021-06-30| 5| 2022-05-03| 2022-02-02| 2021-08-01|
|-----------------|----------|------------|----------|--------------|---------------|---------------|
| 3|2022-08-02| 2022-06-30| 11| 2022-05-04| 2022-02-03| 2021-08-02|
| 3|2022-08-02| 2022-03-31| 12| 2022-05-04| 2022-02-03| 2021-08-02|
| 3|2022-08-02| 2021-12-31| 18| 2022-05-04| 2022-02-03| 2021-08-02|
| 3|2022-08-02| 2021-09-30| 6| 2022-05-04| 2022-02-03| 2021-08-02|
| 3|2022-08-02| 2021-06-30| 5| 2022-05-04| 2022-02-03| 2021-08-02|
|-----------------|----------|------------|----------|--------------|---------------|---------------|
| 3|2022-08-03| 2022-06-30| 12| 2022-05-05| 2022-02-04| 2021-08-03|
| 3|2022-08-03| 2022-03-31| 12| 2022-05-05| 2022-02-04| 2021-08-03|
| 3|2022-08-03| 2021-12-31| 18| 2022-05-05| 2022-02-04| 2021-08-03|
| 3|2022-08-03| 2021-09-30| 6| 2022-05-05| 2022-02-04| 2021-08-03|
| 3|2022-08-03| 2021-06-30| 5| 2022-05-05| 2022-02-04| 2021-08-03|
|-----------------|----------|------------|----------|--------------|---------------|---------------|
| 3|2022-08-04| 2022-06-30| 13| 2022-05-06| 2022-02-05| 2021-08-04|
| 3|2022-08-04| 2022-03-31| 12| 2022-05-06| 2022-02-05| 2021-08-04|
| 3|2022-08-04| 2021-12-31| 18| 2022-05-06| 2022-02-05| 2021-08-04|
| 3|2022-08-04| 2021-09-30| 6| 2022-05-06| 2022-02-05| 2021-08-04|
| 3|2022-08-04| 2021-06-30| 5| 2022-05-06| 2022-02-05| 2021-08-04|
|-----------------|----------|------------|----------|--------------|---------------|---------------|
</code></pre>
<p><code>days_90_before</code> = <code>as_on_date</code> - 90 days</p>
<p><code>days_180_before</code> = <code>as_on_date</code> - 180 days</p>
<p><code>days_365_before</code> = <code>as_on_date</code> - 365 days</p>
<p>Now I want to calculate <code>max(delay_days)</code> over the last 90, 180, and 365 days before <code>as_on_date</code>, that is, by checking whether <code>t_tax_period</code> falls between <code>as_on_date</code> and the respective date and taking the max value.</p>
<p>This can be achieved by doing <code>groupBy('company_master_id', 'as_on_date')</code>, calculating <code>max(delay_days)</code>, and joining the three delay values (<code>max_delay_in_last_90_days</code>, <code>max_delay_in_last_180_days</code>, <code>max_delay_in_last_365_days</code>) back with two joins, but this is very slow. Instead I wanted to use window functions and <code>rangeBetween</code>.</p>
<p>I have tried this</p>
<pre><code>from pyspark.sql.functions import max as _max
tax_period_ts = unix_timestamp(col('t_tax_period'))
as_on_date_ts = unix_timestamp(col('as_on_date'))
days_90_before_u_ts = unix_timestamp(col('days_90_before'))
window_for_max = Window.partitionBy('company_master_id', 'as_on_date').orderBy(tax_period_ts.desc())\
.rangeBetween(as_on_date_ts, days_90_before_u_ts)
gst_delay_f = gst_delay_f.withColumn('90_days_max_value', _max('delay_days').over(window_for_max))
</code></pre>
<p>This does not work: it raises an error saying it cannot convert a column to bool. So how can I achieve this without using a group by and then joining?</p>
|
<python><sql><apache-spark><pyspark><window-functions>
|
2023-03-03 10:33:52
| 0
| 351
|
naga satish
|
75,625,753
| 19,504,610
|
Embedding a #[pyclass] in another #[pyclass]
|
<p>I am trying to implement a cache for the private variable of any python class.</p>
<p>Let's suppose I have this:</p>
<pre><code>#[pyclass]
struct ClassA {
pv: Py<PyAny>, // GIL independent type, storable in another #[pyclass]
field_a: Py<PyAny>,
}
#[pyclass]
struct ClassB {
class_a: Py<ClassA>,
field_b: Py<PyAny>,
}
</code></pre>
<p>In the impls of <code>ClassA</code>, I have a method attempting to 'put' a reference to the <code>ClassA</code> object into <code>ClassB</code>'s <code>class_a</code> field and return a newly instantiated <code>ClassB</code> object.</p>
<pre><code>#[pymethods]
impl ClassA {
fn ret_class_b(&self, field_b: &PyAny) -> PyResult<ClassB> {
let class_a: Py<ClassA> = Py::clone_ref(self, field_b.py());
ClassB {
class_a: class_a,
field_b: field_b.into_py(),
}
}
}
</code></pre>
<p>The above does not compile.</p>
<p>The issue is how do I get <code>&Py<ClassA></code> from the receiver of the method so as to return another object where the receiver is referred to as a field in that object?</p>
<h1>Edit / Update</h1>
<p>Thanks to <a href="https://stackoverflow.com/users/442760/cafce25">@cafce25</a> for his reminder on giving fully reproducible codes and the error from the compiler.</p>
<p>Here it is:</p>
<pre class="lang-rust prettyprint-override"><code>
use pyo3::{
prelude::{*, Py, PyAny},
};
#[pymodule]
fn stackoverflowqn(py: Python, pymod: &PyModule) -> PyResult<()> {
#[pyclass(name = "class_a")]
#[derive(Clone, Debug)]
pub struct ClassA {
pv: Option<Py<PyAny>>, // private value
field_a: Py<PyAny>,
}
#[pyclass]
#[derive(Clone, Debug)]
pub struct ClassB {
class_a: Py<ClassA>,
field_b: Py<PyAny>,
}
#[pymethods]
impl ClassA {
#[new]
pub fn __new__(_slf: PyRef<'_, Self>, field_a: PyObject) -> PyResult<ClassA> {
Ok(ClassA {
pv: None,
field_a: field_a,
})
}
fn ret_class_b(&self, field_b: &PyAny) -> PyResult<ClassB> {
let class_a: Py<ClassA> = Py::clone_ref(self, field_b.py());
Ok(ClassB {
class_a: class_a,
field_b: field_b.into_py(),
})
}
}
}
</code></pre>
<p>Here is the compiler error:</p>
<pre class="lang-none prettyprint-override"><code>error[E0277]: the trait bound `ClassA: pyo3::IntoPy<pyo3::Py<ClassA>>` is not satisfied
--> crates\cached-property\src\stackoverflow.rs:36:53
|
36 | pyo3::IntoPy::<Py<ClassA>>::into_py(pty_getter, py);
| ----------------------------------- ^^^^^^^^^^ the trait `pyo3::IntoPy<pyo3::Py<ClassA>>` is not implemented for `ClassA`
| |
| required by a bound introduced by this call
|
= help: the trait `pyo3::IntoPy<pyo3::Py<PyAny>>` is implemented for `ClassA`
</code></pre>
|
<python><rust><pyo3><python-bindings>
|
2023-03-03 10:14:49
| 1
| 831
|
Jim
|
75,625,726
| 2,613,271
|
Plot from list of lists in Python
|
<p>I have a series of csv files that is the output of a solver in space and time, so the population is given for each point in time (rows) and space (columns) for different models.</p>
<p>I want to read them into python to plot individual rows or individual columns from the solutions. I cannot figure out how to do this.</p>
<p>Three mini CSV files are shown below (mine are much larger):</p>
<pre><code>13,26,130
15,30,150
17,34,170
19,38,190
21,42,210
23,46,230
29,58,290
31,62,310
33,66,330
35,70,350
37,74,370
39,78,390
4,8,40
5,10,50
6,12,60
7,14,70
8,16,80
9,18,90
</code></pre>
<p>MWE</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import os
dataFolder = "SCRATCH/"
names = ["1","2","3"]
data = [pd.read_csv(os.path.join(dataFolder+'Book'+i+'.csv'), header=None) for i in names]
plt.plot(data[0])
plt.plot(data[0][2][:],'--')
plt.plot(data[0][:][2],'*')
</code></pre>
<p>I can plot in one direction, but I cannot plot in the other. I tried to turn the list into an np.array using <code>np.array(data[0])</code> but had the same issue accessing individual rows/columns.</p>
<p><a href="https://drive.google.com/file/d/1H5Sr_HPS2TtA7iK3FBHMLQEK1Gzex6em/view?usp=sharing" rel="nofollow noreferrer">Some example csv files can be found at this link:</a></p>
<p>ETA: the files are deliberately read in programmatically. Writing these out as individual lines is not a viable option. This is a small subset of a larger piece of work, and the CSV files are considerably larger than the examples.</p>
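<p>For what it's worth, the row/column confusion can be reproduced with plain lists (a sketch using invented values; with a pandas DataFrame <code>df</code>, the equivalents are <code>df.iloc[row]</code> for a row and <code>df.iloc[:, col]</code> for a column):</p>

```python
import csv
import io

# Miniature stand-in for one of the CSV files (values invented for illustration).
raw = "13,26,130\n15,30,150\n17,34,170\n"
rows = [[int(v) for v in line] for line in csv.reader(io.StringIO(raw))]

row_2 = rows[2]               # one time step across all spatial points
col_2 = [r[2] for r in rows]  # one spatial point across all time steps
# pandas equivalents: df.iloc[2] (row) and df.iloc[:, 2] (column).
# Note that plain df[2] selects the COLUMN labelled 2, which is the usual surprise.
```

So <code>data[0][2][:]</code> in the MWE is column access twice over, which is why only one direction ever worked.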
|
<python><matplotlib>
|
2023-03-03 10:11:52
| 1
| 1,530
|
Esme_
|
75,625,687
| 14,958,374
|
How to turn off request body logging in elastic-apm
|
<p>I am using <strong>elastic-apm</strong> middleware with <strong>FastAPI</strong>. While serving each request, my application makes several data-heavy requests to external resources. My current setup is the following:</p>
<pre><code>app = FastAPI(
title="My API", description="Hello from my API!"
)
apm = make_apm_client(
{
"SERVICE_NAME": "MyService",
"SERVER_URL": CONFIG["elastic_apm"]["server_url"],
"SECRET_TOKEN": CONFIG["elastic_apm"]["token"],
}
)
app.add_middleware(ElasticAPM, client=apm)
</code></pre>
<p>So, under high load my application sends a lot of unnecessary data to elastic apm (<strong>I really don't need to save the body of each request</strong>). Is there any way I could <strong>turn off logging body and other request details</strong>, and send to apm <strong>only time and destination</strong> of each particular request my app has made?</p>
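<p>A hedged sketch of what this might look like with the agent's capture settings (<code>CAPTURE_BODY</code> and <code>CAPTURE_HEADERS</code> are elastic-apm agent config options; double-check the accepted values against the agent documentation for your version):</p>

```python
apm = make_apm_client(
    {
        "SERVICE_NAME": "MyService",
        "SERVER_URL": CONFIG["elastic_apm"]["server_url"],
        "SECRET_TOKEN": CONFIG["elastic_apm"]["token"],
        "CAPTURE_BODY": "off",     # never record request/response bodies
        "CAPTURE_HEADERS": False,  # drop header details as well
    }
)
```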
|
<python><fastapi><elastic-apm>
|
2023-03-03 10:08:48
| 1
| 331
|
Nick Zorander
|
75,625,628
| 14,269,252
|
Change the size of chart dynamic regarding the size of data frame and modify the format of X axis in plotly
|
<p>I wrote the code below and it works fine for a scatter plot.</p>
<p>I want to modify the code to show the date in the format <strong>July 2010</strong>; currently it is 10-01. I also want to show the whole date (year and month) and not part of it.</p>
<p>The size of the chart has to be dynamic, because depending on what is checked in the Streamlit app, the final data frame is bigger or smaller. Sometimes I select one code and sometimes 50 codes on the Y axis, and sometimes I have a single year (e.g. only 2011) and sometimes 2010 to 2020. So I want the size to adapt to the size of the dataframe. Do you have any idea?</p>
<pre><code>color_discrete_map = {'df1': 'rgb(255,0,0)',
'df2': 'rgb(0,255,0)',
'df3': '#11FCE4',
'df4': '#9999FF',
'df5': '#606060',
'df6': '#CC6600'}
fig = px.scatter(df, x='DATE', y='CODE', color='DATA', width=1200,
height=600,color_discrete_map=color_discrete_map)
fig.update_layout(
margin=dict(l=250, r=0, t=0, b=20),
)
fig.update_layout(xaxis=dict(tickformat="%y-%m"))
fig.update_xaxes(ticks= "outside",
ticklabelmode= "period",
tickcolor= "black",
ticklen=10,
minor=dict(
ticklen=4,
dtick=7*24*60*60*1000,
tick0="2016-07-03",
griddash='dot',
gridcolor='white')
)
st.plotly_chart(fig)
</code></pre>
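<p>A hedged sketch of the two changes (this reuses <code>df</code>, <code>px</code>, and <code>color_discrete_map</code> from the code above; <code>tickformat</code> takes d3 time-format codes, so <code>%B %Y</code> renders "July 2010", and the height formula is an invented heuristic to scale with the number of codes):</p>

```python
fig = px.scatter(
    df, x='DATE', y='CODE', color='DATA',
    width=1200,
    # invented heuristic: one row of pixels per code, with a sane floor
    height=max(400, 25 * df['CODE'].nunique()),
    color_discrete_map=color_discrete_map,
)
# "%B %Y" -> full month name plus year; dtick="M1" asks for a tick every month
fig.update_layout(xaxis=dict(tickformat="%B %Y", dtick="M1"))
```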
|
<python><pandas><plotly><streamlit>
|
2023-03-03 10:02:44
| 0
| 450
|
user14269252
|
75,625,466
| 10,077,354
|
Not able to redirect container logs to file using subprocess.run
|
<p>I have a python script to start some containers, wait for them to finish execution and then start a few others. I wanted to get the container logs and this bash command worked for me:</p>
<pre><code>docker logs -f container-name &> tmp.log &
</code></pre>
<p>However, when I try to add it to my python script using <code>subprocess.run</code> like below, it doesn't create a new file.</p>
<pre class="lang-bash prettyprint-override"><code>subprocess.run(
[
"docker",
"logs",
"-f",
"container-name",
"&>",
"tmp.log",
"&"
],
cwd=os.getcwd(),
shell=False,
stdout=subprocess.DEVNULL,
)
</code></pre>
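<p>For context, <code>&></code> and <code>&</code> are shell syntax, so with <code>shell=False</code> they are handed to <code>docker</code> as literal arguments. A sketch of the usual fix: do the redirection with the <code>stdout</code>/<code>stderr</code> parameters and use <code>Popen</code> for the backgrounding (a portable command stands in for the docker invocation here):</p>

```python
import subprocess
import sys

# Stand-in command; replace with ["docker", "logs", "-f", "container-name"].
cmd = [sys.executable, "-c", "print('container log line')"]

with open("tmp.log", "wb") as log_file:
    # stdout and stderr both go to the file, like "&>" in the shell;
    # Popen returns immediately, like a trailing "&".
    proc = subprocess.Popen(cmd, stdout=log_file, stderr=subprocess.STDOUT)

proc.wait()  # or keep the handle and wait for it later
```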
|
<python><docker>
|
2023-03-03 09:45:34
| 1
| 2,487
|
Suraj
|
75,625,319
| 1,020,139
|
How to mock requests.Session.get using unittest module?
|
<p>I want to mock <code>app.requests.Session.get</code> from <code>test_app.py</code> to return a mocked <code>requests.Response</code> object with a 404 <code>status_code</code> to generate an <code>InvalidPlayerIdException</code>.</p>
<p>However, no exception is raised as seen from the below output. Is it because I'm using <code>with</code> clauses, or why doesn't it work?</p>
<p>Reference: <a href="https://www.pythontutorial.net/python-unit-testing/python-mock-requests/" rel="nofollow noreferrer">https://www.pythontutorial.net/python-unit-testing/python-mock-requests/</a></p>
<p><code>Output</code>:</p>
<pre><code>(supersoccer-showdown) ➜ supersoccer-showdown-copy git:(main) ✗ python -m unittest git:(main|…3
F
======================================================================
FAIL: test_pokemon_player_requestor_raise_exception (test_app.TestPlayerRequestor.test_pokemon_player_requestor_raise_exception)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/unittest/mock.py", line 1369, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dkNiLyIv/supersoccer-showdown-copy/test_app.py", line 21, in test_pokemon_player_requestor_raise_exception
self.assertRaises(InvalidPlayerIdException, requestor.getPlayerById, 1)
AssertionError: InvalidPlayerIdException not raised by getPlayerById
</code></pre>
<p><code>app.py</code>:</p>
<pre><code>from __future__ import annotations
import abc
import requests
class InvalidPlayerIdException(Exception):
pass
class Player(abc.ABC):
def __init__(self, id: int, name: str, weight: float, height: float) -> None:
self.id = id
self.name = name
self.weight = weight
self.height = height
class PokemonPlayer(Player):
def __init__(self, id: int, name: str, weight: float, height: float) -> None:
super().__init__(id, name, weight, height)
def __repr__(self) -> str:
return f'Pokemon(id={self.id},name={self.name},weight={self.weight},height={self.height})'
class PlayerRequestor(abc.ABC):
def __init__(self, url: str) -> None:
self.url = url
@abc.abstractmethod
def getPlayerCount(self) -> int:
pass
@abc.abstractmethod
def getPlayerById(self, id: int) -> Player:
pass
class PokemonPlayerRequestor(PlayerRequestor):
def __init__(self, url: str) -> None:
super().__init__(url)
def getPlayerCount(self) -> int:
with requests.Session() as rs:
rs.mount('https://', requests.adapters.HTTPAdapter(
max_retries=requests.urllib3.Retry(total=5, connect=5, read=5, backoff_factor=1)))
with rs.get(f'{self.url}/api/v2/pokemon/', verify=True) as r:
r.raise_for_status()
json = r.json()
return json["count"]
def getPlayerById(self, id: int) -> Player:
with requests.Session() as rs:
rs.mount('https://', requests.adapters.HTTPAdapter(
max_retries=requests.urllib3.Retry(total=5, connect=5, read=5, backoff_factor=1)))
with rs.get(f'{self.url}/api/v2/pokemon/{id}', verify=True) as r:
if r.status_code == 404:
raise InvalidPlayerIdException
r.raise_for_status()
json = r.json()
player = PokemonPlayer(id, json["name"], json["weight"], json["height"])
return player
</code></pre>
<p><code>test_app.py</code>:</p>
<pre><code>import unittest
from unittest.mock import MagicMock, patch
from app import *
class TestPlayerRequestor(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
@patch('app.requests')
def test_pokemon_player_requestor_raise_exception(self, mock_requests):
mock_response = MagicMock()
mock_response.status_code = 404
mock_session = MagicMock()
mock_requests.Session = mock_session
instance = mock_session.return_value
instance.get.return_value = mock_response
requestor = PokemonPlayerRequestor('https://pokeapi.co')
self.assertRaises(InvalidPlayerIdException, requestor.getPlayerById, 1)
</code></pre>
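<p>The <code>with</code> blocks are indeed the catch: <code>MagicMock</code> auto-creates <code>__enter__</code> methods that return a <em>fresh</em> mock, so the configured <code>status_code</code> never reaches the code under test. A self-contained sketch of the fix (<code>get_player</code> here is a stand-in mirroring the Session pattern in <code>app.py</code>):</p>

```python
from unittest.mock import MagicMock


class InvalidPlayerIdException(Exception):
    pass


def get_player(session_factory, url):
    # Mirrors app.py: both Session() and rs.get() are used as context managers.
    with session_factory() as rs:
        with rs.get(url) as r:
            if r.status_code == 404:
                raise InvalidPlayerIdException
            return r.json()


mock_response = MagicMock(status_code=404)
mock_session = MagicMock()
# "with Session() as rs" binds rs to __enter__'s return value, not to Session():
mock_session.return_value.__enter__.return_value = mock_session.return_value
# "with rs.get(...) as r" likewise binds r through __enter__:
mock_session.return_value.get.return_value.__enter__.return_value = mock_response
```

With the two <code>__enter__</code> return values wired up, the 404 response actually reaches the status-code check and the exception is raised.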
|
<python><unit-testing><testing><python-requests><python-unittest>
|
2023-03-03 09:29:42
| 1
| 14,560
|
Shuzheng
|
75,625,212
| 14,744,714
|
How add lottie animation on background streamlit app?
|
<p>I have a Streamlit app and I need to add a Lottie animation to its background. The animation is a JSON file in my project.</p>
<p>Some code to open the JSON file as a Lottie image:</p>
<pre><code>import json
import streamlit as st
from streamlit_lottie import st_lottie
@st.cache
def load_image_json(path):
""" Load animation and images from json """
with open(path, 'r') as j:
animation = json.loads(j.read())
return animation
back_image = load_image_json('image/background_gradient.json')
</code></pre>
<p>I tried this, but it doesn't work:</p>
<pre><code>page_bg_img = '''
<style>
body {
background-image: {st_lottie(back_image, key='back')};
background-size: cover;
}
</style>
'''
st.markdown(page_bg_img, unsafe_allow_html=True)
</code></pre>
<p>UPD: I tried the following code. There is no error, but the background animation doesn't load; everything just stays the same as it was before:</p>
<pre><code>def load_lottie_url(url: str) -> dict:
r = requests.get(url)
return json.loads(r.content)
def set_background_lottie(json_data):
json_str = json.dumps(json_data)
tag = f"""<style>.stApp {{background-image: url('data:image/svg+xml,{json_str}'); background-repeat: no-repeat; background-position: center center; background-size: contain; height: 100vh;}}</style>"""
st.write(tag, unsafe_allow_html=True)
json_data = load_lottie_url("https://assets1.lottiefiles.com/packages/lf20_xsrma5om.json")
set_background_lottie(json_data)
</code></pre>
|
<python><css><streamlit><lottie>
|
2023-03-03 09:20:20
| 1
| 717
|
kostya ivanov
|
75,625,056
| 14,711,735
|
How should the AnyStr be used?
|
<p>I am pretty new to annotations in Python and trying to apply them to a project I am working on. I can't really figure out the <code>AnyStr</code> type from the <code>typing</code> package.</p>
<p>The <a href="https://docs.python.org/3/library/typing.html#typing.AnyStr" rel="nofollow noreferrer">docs</a> say:</p>
<blockquote>
<p>AnyStr is a constrained type variable defined as AnyStr = TypeVar('AnyStr', str, bytes)</p>
<p>It is meant to be used for functions that may accept any kind of string without allowing different kinds of strings to mix.</p>
</blockquote>
<p>So I understand this as: it is either str or bytes.</p>
<p>In my project I have a function like this:</p>
<pre><code>def x() -> Tuple[Optional[AnyStr], Optional[dict]]:
if something:
return logic1(), logic2()
return None, None
</code></pre>
<p><code>logic1</code> in this case returns an <code>AnyStr</code> and <code>logic2</code> a dict. However, <code>mypy</code> shows this error:</p>
<blockquote>
<p>Incompatible return value type (got "Tuple[str, Dict[Any, Any]]", expected "Tuple[bytes, Dict[Any, Any]]")</p>
</blockquote>
<p>I don't understand why this is seen as an error. As I understand <code>AnyStr</code>, the expected values should either be <code>Tuple[bytes, Dict[Any, Any]]</code> or <code>Tuple[str, Dict[Any, Any]]</code>.</p>
<p>Why is this an error?</p>
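<p>A sketch of the distinction (<code>x</code> here is a simplified stand-in for the function in the question): <code>AnyStr</code> only works when the type checker can bind it from an argument, so in a return-only position there is nothing to bind it to, and <code>Union[str, bytes]</code> expresses "either kind of string" instead:</p>

```python
from typing import AnyStr, Optional, Tuple, Union


def echo(s: AnyStr) -> AnyStr:
    # AnyStr links argument and return: str in -> str out, bytes in -> bytes out.
    return s


def x(something: bool) -> Tuple[Optional[Union[str, bytes]], Optional[dict]]:
    # With no AnyStr parameter, the TypeVar has nothing to bind to,
    # so a plain union is the way to say "str or bytes" in the return type.
    if something:
        return "value", {"key": 1}
    return None, None
```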
|
<python><mypy><python-typing>
|
2023-03-03 09:01:44
| 1
| 833
|
Erik Asplund
|
75,624,961
| 13,518,426
|
What does config inside ``super().__init__(config)`` actually do?
|
<p>I have the following code to create a custom model for named-entity recognition. Using ChatGPT and Copilot, I've commented it to understand its functionality.</p>
<p>However, the role of <code>config</code> inside <code>super().__init__(config)</code> is not clear to me. What role does it play, given that we have already set <code>XLMRobertaConfig</code> at the beginning?</p>
<pre class="lang-py prettyprint-override"><code>import torch.nn as nn
from transformers import XLMRobertaConfig
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel
# Create a class for a custom model, which inherit from RobertaPreTrainedModel since we want to use the weights of a pretained model in the body of a custom model
class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
# Common practice in 🤗 Transformers
# allows the XLMRobertaForTokenClassification class to inherit the configuration functionality and attributes from the XLMRobertaConfig class
config_class = XLMRobertaConfig
# initialize the model
def __init__(self, config):
# call the initialization function of the parent class (RobertaPreTrainedModel)
super().__init__(config) # config is necessary when working with pretrained models to ensure the initialization with the correct configuration of parent class
self.num_labels = config.num_labels # number of classes to predict
# Load model BODY
self.roberta = RobertaModel(config, add_pooling_layer=False) # returns all hidden states not just [CLS]
# Set up token CLASSIFICATION HEAD
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, config.num_labels) # linear transformation layer takes (batch_size, sequence_length, hidden_size)
# to produce output tensor of shape (batch_size, sequence_length, num_labels)
# which can be interpreted as probability distribution over the labels for each token in the input sequence.
# Load the pretrained weights for the model body and
# ... randomly initialize weights of token classification head
self.init_weights()
# define the forward pass
def forward(self, input_ids=None, attention_mask=None, token_type_ids=None,
labels=None, **kwargs):
# Feed the data through model BODY to get encoder representations
outputs = self.roberta(input_ids, attention_mask=attention_mask,
token_type_ids=token_type_ids, **kwargs)
# Apply classifier to encoder representation
sequence_output = self.dropout(outputs[0]) # apply dropout to the first element of output tensor, i.e., last_hidden_state
logits = self.classifier(sequence_output) # apply the linear transformation to get the logits (i.e., raw output of the model)
# Calculate losses if labels are provided
loss = None
if labels is not None:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) # apply cross entropy function on flattend logits and flattend labels
# Return model output object
return TokenClassifierOutput(loss=loss, logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions)
</code></pre>
<p><strong>EDIT</strong>: I quote directly from the book I'm working through: <em>"<code>config_class</code> ensures that the standard <code>XLMRobertaConfig</code> settings are used when initializing a new model"</em>. If I understand it correctly, could we change these default parameters by overwriting the default settings in the <code>config</code>?</p>
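<p>A plain-Python illustration of the mechanism (<code>ToyPretrained</code> and <code>ToyModel</code> are invented stand-ins, not transformers classes): <code>config_class</code> only tells the loading machinery which configuration class to build, while <code>super().__init__(config)</code> hands the concrete config instance to the parent so its generic setup runs with those exact values; and yes, fields overridden on the config before construction change the defaults:</p>

```python
class ToyPretrained:
    # Stand-in for RobertaPreTrainedModel: generic setup driven by the config object.
    def __init__(self, config):
        self.config = config
        self.hidden_size = config["hidden_size"]


class ToyModel(ToyPretrained):
    def __init__(self, config):
        super().__init__(config)               # parent must see the same instance
        self.num_labels = config["num_labels"]  # child reads its own settings too


config = {"hidden_size": 768, "num_labels": 7}  # overriding defaults happens here
model = ToyModel(config)
```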
|
<python><oop><nlp><huggingface-transformers>
|
2023-03-03 08:50:38
| 1
| 433
|
Ahmad
|
75,624,860
| 12,967,353
|
Retrieve arguments passed to mocked function with Mockito
|
<p>I am using mockito to patch an instance of some class and check the calls on the method of this class.</p>
<p>With this class:</p>
<pre><code>class A:
def __init__(self, a):
self.a = a
def foo(self, a):
self.a = 2 * a
return 2 * a
</code></pre>
<p>I can use mockito to patch <code>foo</code> like this:</p>
<pre><code>instance = A(1)
mockito.expect(instance, times=1).foo(a=5).thenReturn(10)
</code></pre>
<p>This allows me to verify that foo has been called with the argument <code>5</code> once. I can also use <code>...</code> instead of <code>a=5</code> to verify it is called once with any argument.</p>
<p>What I would like to do is verify it has been called once with an argument that can be divided by 2. I have two ideas for this (and I have no idea how to implement either):</p>
<ol>
<li>Pass some sort of lambda function instead of the argument <code>a=5</code>. I believe I have seen something like this in mockito for Java.</li>
<li>Use <code>...</code> and retrieve the arguments passed on call to <code>foo</code>. This way I can do my verifications on those easily.</li>
</ol>
|
<python><mocking><mockito>
|
2023-03-03 08:40:43
| 1
| 809
|
Kins
|
75,624,797
| 4,469,265
|
How to yield items from multiple h5 files simultaneously
|
<p>I have a list of h5 files. My single-file generator looks like this:</p>
<pre><code>class H5Dataset_all(Dataset):
def __init__(self, h5_path):
# super(dataset_h5, self).__init__()
self.h5_path = h5_path
self._h5_gen = None
def __getitem__(self, index):
if self._h5_gen is None:
self._h5_gen = self._get_generator()
next(self._h5_gen)
return self._h5_gen.send(index)
def _get_generator(self):
with h5py.File(self.h5_path, 'r') as record:
index = yield
while True:
aligned_t = record['aligned_t'][index]
fusion_t = record['fusion_t'][index]
sensor_t = record['sensor_t'][index]
sensor_t_1 = record['sensor_t_1'][index]
# delta = record['delta'][index]
pad_num = record['pad_num'][index]
radar_t = record['radar_t'][index]
radar_t_1 = record['radar_t_1'][index]
index = yield aligned_t, fusion_t, sensor_t, sensor_t_1, pad_num, radar_t, radar_t_1
</code></pre>
<p>How can I open all of my h5 files simultaneously as one generator like this:</p>
<pre><code> def __getitem__(self, index):
features = [self.getitem_single(index,x) for x in range(len(self.h5_path))]
aligned_t, fusion_t, sensor_t, sensor_t_1, pad_num, radar_t, radar_t_1 = zip(*features)
pad_num = np.array(pad_num)
aligned_t = np.array(aligned_t)
fusion_t = np.array(fusion_t)
sensor_t_11 = np.array(sensor_t_1)
sensor_t_1 = sensor_t_11[...,:-1]
sensor_ids = np.array(sensor_t)[...,-1]
sensor_t = np.array(sensor_t)[...,:-1]
radar_t_1 = np.array(radar_t_1)[...,:-1]
radar_t = np.array(radar_t)[...,:-1]
return aligned_t, fusion_t, sensor_t, sensor_t_1, pad_num, radar_t, radar_t_1, sensor_ids
def getitem_single(self, index, path_id):
if self._h5_gen[path_id] is None:
self._h5_gen[path_id] = self._get_generator(path_id)
next(self._h5_gen[path_id])
return self._h5_gen[path_id].send(index)
def _get_generator(self,path_id):
with h5py.File(self.h5_path[path_id], 'r') as record:
index = yield
while True:
aligned_t = record['aligned_t'][index]
fusion_t = record['fusion_t'][index]
sensor_t = record['sensor_t'][index]
sensor_t_1 = record['sensor_t_1'][index]
# delta = record['delta'][index]
pad_num = record['pad_num'][index]
radar_t = record['radar_t'][index]
radar_t_1 = record['radar_t_1'][index]
index = yield aligned_t, fusion_t, sensor_t, sensor_t_1, pad_num, radar_t, radar_t_1
</code></pre>
<p>This code gives me a deadlock, and I can never reach the return clause.</p>
|
<python><iterator><generator><yield><h5py>
|
2023-03-03 08:34:06
| 1
| 1,124
|
Tommy Yu
|
75,624,592
| 3,529,352
|
Extract word by regex in python
|
<p>I'm new to Python and would like to know how to use regex.</p>
<p>Suppose I have a pattern like</p>
<pre><code>alice(ben)charlie(dent)elise(fiona)
</code></pre>
<p>or</p>
<pre><code>grace
</code></pre>
<p>For the first case, I want to get <code><alice,charlie,elise></code>.</p>
<p>For the second case, I want to get <code>grace</code>.</p>
<p>I tried the code below, but only got <code>elise(fiona), elise</code>:</p>
<pre><code>import re
foo = 'alice(ben)charlie(dent)elise(fiona)'
pattern = re.compile(r'((\w+)\(\w+\))+')
match = re.findall(pattern, foo)
print(match)
</code></pre>
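<p>A sketch of one regex that covers both inputs: the capture group grabs the wanted word, and the parenthesised part is made optional and non-capturing so it is consumed but not reported:</p>

```python
import re

# \w+ grabs a word; (?:\(\w+\))? optionally swallows a "(...)" right after it,
# so only the words OUTSIDE parentheses end up captured.
pattern = re.compile(r'(\w+)(?:\(\w+\))?')

first = pattern.findall('alice(ben)charlie(dent)elise(fiona)')
second = pattern.findall('grace')
```

With a single capture group, <code>findall</code> returns just that group for each match, which is why the original nested groups reported the last repetition instead.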
|
<python><regex>
|
2023-03-03 08:10:12
| 4
| 854
|
nathan
|
75,624,347
| 5,530,553
|
PyTorch model not releasing peak memory after computation
|
<p>I am using <a href="https://www.sbert.net/" rel="nofollow noreferrer">SentenceTransformers</a> library to get a text embedding of a given text. I do inference on one of the pre-trained models from the library, and use the embedding vector for further computations.
When the model is doing inference, there is a spike in memory usage, as expected. However, after I get the embedding that I want, the memory usage does not go down.</p>
<p>I am profiling the memory with the following code -</p>
<pre><code>from time import sleep
import torch
from sentence_transformers import SentenceTransformer
@profile
def encode_text(text):
text_encoder = SentenceTransformer('paraphrase-albert-small-v2')
embedding = text_encoder.encode(text)
return embedding
encode_text(['This is a sentance'])
sleep(3)
</code></pre>
<p>I am profiling with <a href="https://github.com/pythonprofilers/memory_profiler" rel="nofollow noreferrer">memory-profiler</a> by running <code>mprof run --python python main.py</code>.
This gives the following memory profile (running <code>mprof plot</code>) -</p>
<p><a href="https://i.sstatic.net/EdxDo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EdxDo.png" alt="enter image description here" /></a></p>
<p>The memory gradually increases from when the function is called to when it finishes (the blue square brackets). However, after that, the memory is never released. This does not make sense to me since once I have the <code>embedding</code> vector, there is nothing in the code that should need to continue using up that much memory.</p>
<p>I tried wrapping everything in <code>torch.no_grad()</code>, setting <code>text_encoder.requires_grad_(False)</code> and <code>text_encoder.eval()</code> after creating <code>text_encoder</code>. I also tried doing <code>del text_encoder</code> after running inference.</p>
<p>None of this seems to make a difference.</p>
<p>Does anyone know why this is happening and how I can free up this memory?</p>
<p>I am running the following versions of the libraries on Ubuntu 18.04.6 LTS.</p>
<pre><code>torch 1.13.0+cu117
sentence_transformers 2.2.2
</code></pre>
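One thing worth knowing here: memory-profiler reports the process's resident set size, and CPython's and PyTorch's allocators generally do not return freed heap pages to the operating system, so RSS can stay high even after the Python objects are gone. A common workaround is to run the inference in a short-lived subprocess so all of its memory is reclaimed when that process exits; a sketch with a placeholder encoder standing in for <code>SentenceTransformer</code>:

```python
import multiprocessing as mp

def _encode_in_subprocess(q, texts):
    # placeholder for the heavy part: in the real script, importing
    # SentenceTransformer and running encode() would happen here, so its
    # memory is handed back to the OS when the child process exits
    q.put([float(len(t)) for t in texts])

def encode_isolated(texts):
    """Run the encoder in a short-lived child process and return its result."""
    q = mp.Queue()
    p = mp.Process(target=_encode_in_subprocess, args=(q, texts))
    p.start()
    result = q.get()   # fetch before join() to avoid blocking on a full pipe
    p.join()
    return result

print(encode_isolated(['hello']))  # [5.0]
```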
|
<python><machine-learning><memory><pytorch><sentence-transformers>
|
2023-03-03 07:38:54
| 0
| 3,300
|
Ananda
|
75,624,272
| 1,039,860
|
Trying to create a grid of widgets with scrollbars in python and tkinter
|
<p>I am so confused (as you can see from my code which is currently a hodge-podge from various sources). I am trying to have a scrollable grid of widgets (currently tk.Entry's) that fill the frame when it is enlarged. Currently I only am trying to add a vertical scrollbar, but I intend to add a horizontal once I understand how it works.</p>
<p>I have two rows of headers in a 2-dimensional array of HeaderCol's. This is to manage merged cells in the header, which is more-or-less working. My ultimate goal is to have a spreadsheet-like interface but with controlled interactions via buttons, dropdowns, and Entry's in the grid.</p>
<p>Any suggestions would be most welcome!</p>
<pre><code>from tkinter import Menu, Entry, Canvas, Frame, Tk, Label, Scrollbar
class HeaderCol:
MAX_COL = 0
HEADERS = [[]]
def __init__(self, name, row, col, width=1, header_row=0):
if row == 0:
HeaderCol.MAX_COL += col
self.name = name
self.row = row
self.col = col
self.width = width
if len(HeaderCol.HEADERS) <= header_row:
HeaderCol.HEADERS.append([])
HeaderCol.HEADERS[header_row].append(self)
class TransactionsGrid:
# create the header information
HEADER_TOP_ROW = 0
HEADER_SECOND_ROW = 1
col = 0
ENTRY = HeaderCol('Entry', HEADER_TOP_ROW, col, 2)
col += ENTRY.width
DUE = HeaderCol('Due', HEADER_TOP_ROW, col, 4)
col += DUE.width
PAID = HeaderCol('Paid', HEADER_TOP_ROW, col, 7)
col += PAID.width
TENANT_STATUS = HeaderCol('Tenant Status', HEADER_TOP_ROW, col)
col += TENANT_STATUS.width
MANAGEMENT = HeaderCol('Management', HEADER_TOP_ROW, col, 2)
col += MANAGEMENT.width
NET = HeaderCol('Net', HEADER_TOP_ROW, col)
col += NET.width
NOTES = HeaderCol('Notes', HEADER_TOP_ROW, col)
HeaderCol.MAX_COL = col + NOTES.width
col = 0
DATE = HeaderCol('Date', HEADER_SECOND_ROW, col, 1, 1)
col += DATE.width
EVENT = HeaderCol('Event', HEADER_SECOND_ROW, col, 1, 1)
col += EVENT.width
FEES = HeaderCol('Fees & Charges', HEADER_SECOND_ROW, col, 1, 1)
col += FEES.width
def __init__(self, root):
root.grid_rowconfigure(0, weight=1)
root.columnconfigure(0, weight=1)
self.frame_main = Frame(root)
self.frame_main.grid(sticky="news")
self.frame_canvas = Frame(self.frame_main)
self.frame_canvas.grid(row=2, column=0, pady=(5, 0), sticky='nw')
self.frame_canvas.rowconfigure(0, weight=1)
self.frame_canvas.columnconfigure(0, weight=1)
# self.frame_canvas.grid_propagate(False)
canvas = Canvas(self.frame_canvas, bg="yellow")
canvas.grid(row=0, column=0, sticky="news")
# Link a scrollbar to the canvas
vsb = Scrollbar(self.frame_canvas, orient="vertical", command=canvas.yview)
vsb.grid(row=0, column=1, sticky='ns')
canvas.configure(yscrollcommand=vsb.set)
# Create a frame to contain the buttons
frame_buttons = Frame(canvas, bg="blue")
canvas.create_window((0, 0), window=frame_buttons, anchor='nw')
self.frame_main.pack(fill='none', expand=False)
canvas.place(relx=.5, rely=.5, anchor="center")
self.buttons = []
self.add_headers()
# add a bunch of empty tk.Entry's
for row in range(5):
button_row = []
for col in range(HeaderCol.MAX_COL):
button = Entry(self.frame_canvas)
button.configure(highlightthickness=0)
button_row.append(button)
if col == 0:
button.insert(0, 'A')
button.grid(column=col, row=row + 1, sticky="")
self.buttons.append(button_row)
frame_buttons.update_idletasks()
def add_headers(self):
button_row = []
for header in HeaderCol.HEADERS[0]:
button = Label(self.frame_canvas)
button_row.append(button)
button.config(text=header.name)
button.grid(column=header.col, row=0, columnspan=header.width, sticky="")
# self.buttons.append(button_row)
button_row = []
for header in HeaderCol.HEADERS[1]:
button = Label(self.frame_canvas)
button_row.append(button)
button.config(text=header.name)
button.grid(column=header.col, row=1, columnspan=header.width, sticky="")
# self.buttons.append(button_row)
def menubar(self):
menu = Menu(self.master)
self.master.config(menu=menu)
file_menu = Menu(menu)
menu.add_cascade(label="File", menu=file_menu)
edit_menu = Menu(menu)
menu.add_cascade(label="Edit", menu=edit_menu)
file_menu.add_command(label="Item")
file_menu.add_command(label="Exit", command=self.exit_program)
edit_menu.add_command(label="Undo")
edit_menu.add_command(label="Redo")
@staticmethod
def exit_program():
exit(0)
def main():
root = Tk()
# transaction_table = TransactionsTable(root)
transaction_table = TransactionsGrid(root)
#transaction_table = AnotherOne(root)
"""
frame = tk.Frame(root)
frame.pack()
pt = Table(frame)
pt.show()
"""
root.mainloop()
if __name__ == '__main__':
main()
</code></pre>
<p>Thanks to @Derek's suggestion, I have created the following class with an optional horizontal scrollbar. It more or less works, but horizontal scrolling moves only part of the widget grid, and not all widgets are visible on the right:
<a href="https://i.sstatic.net/kJZNi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kJZNi.png" alt="enter image description here" /></a></p>
<pre><code>self.frame_main = ScrollableFrame(root, horizontal=True)
</code></pre>
<p>Here is the modified code:</p>
<pre><code>from tkinter import Frame, Scrollbar, Canvas
from tkinter.constants import RIGHT, Y, LEFT, BOTH, NW, BOTTOM, X
class ScrollableFrame(Frame):
"""
Make a frame scrollable with scrollbar on the right.
After adding or removing widgets to the scrollable frame,
call the update() method to refresh the scrollable area.
"""
def __init__(self, frame, width=16, horizontal=False):
self.y_scrollbar = Scrollbar(frame, width=width)
self.y_scrollbar.pack(side=RIGHT, fill=Y, expand=False)
if horizontal:
self.x_scrollbar = Scrollbar(frame, width=width, orient='horizontal')
self.x_scrollbar.pack(side=BOTTOM, fill=X, expand=False)
self.canvas = Canvas(frame,
yscrollcommand=self.y_scrollbar.set,
xscrollcommand=self.x_scrollbar.set)
else:
self.canvas = Canvas(frame, yscrollcommand=self.y_scrollbar.set)
self.canvas.pack(side=LEFT, fill=BOTH, expand=True)
if horizontal:
self.canvas.pack(side=BOTTOM, fill=BOTH, expand=True)
self.x_scrollbar.config(command=self.canvas.xview)
self.y_scrollbar.config(command=self.canvas.yview)
self.canvas.bind('<Configure>', self.__fill_canvas)
# base class initialization
Frame.__init__(self, frame)
# assign this obj (the inner frame) to the windows item of the canvas
self.windows_item = self.canvas.create_window(0, 0, window=self, anchor=NW)
def __fill_canvas(self, event):
"""Enlarge the windows item to the canvas width"""
canvas_width = event.width
self.canvas.itemconfig(self.windows_item, width=canvas_width, height=event.height)
def update(self):
"""Update the canvas and the scrollregion"""
self.update_idletasks()
self.canvas.config(scrollregion=self.canvas.bbox(self.windows_item))
</code></pre>
<p>Here is my cleaned up code that uses the ScrollableFrame:</p>
<pre><code>def __init__(self, root):
self.frame_main = ScrollableFrame(root, horizontal=True)
self.buttons = [[]]
self.add_headers()
for row in range(25):
button_row = []
for col in range(HeaderCol.MAX_COL):
button = Entry(self.frame_main)
button.configure(highlightthickness=0)
button_row.append(button)
button.grid(column=col, row=row + 2, sticky="")
self.buttons.append(button_row)
self.frame_main.update()
</code></pre>
|
<python><tkinter><grid><scrollbar>
|
2023-03-03 07:30:20
| 1
| 1,116
|
jordanthompson
|
75,624,190
| 7,505,256
|
How to iterate over vector of pyclass objects in rust?
|
<p>I am using maturin and I am trying to implement the method <code>get_car()</code> for my class.</p>
<p>But using the following code</p>
<pre class="lang-rust prettyprint-override"><code>use pyo3::prelude::*;
#[pyclass]
struct Car {
name: String,
}
#[pyclass]
struct Garage {
cars: Vec<Car>,
}
#[pymethods]
impl Garage {
fn get_car(&self, name: &str) -> Option<&Car> {
self.cars.iter().find(|car| car.name == name.to_string())
}
}
/// A Python module implemented in Rust.
#[pymodule]
fn pyo3_iter_issue(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_class::<Car>()?;
m.add_class::<Garage>()?;
Ok(())
}
</code></pre>
<p>I get this error message</p>
<pre><code>the trait bound `Car: AsPyPointer` is not satisfied
the following other types implement trait `AsPyPointer`:
CancelledError
IncompleteReadError
InvalidStateError
LimitOverrunError
Option<T>
PanicException
Py<T>
PyAny
and 107 others
required for `&Car` to implement `IntoPy<Py<PyAny>>`
1 redundant requirement hidden
required for `Option<&Car>` to implement `IntoPy<Py<PyAny>>`
required for `Option<&Car>` to implement `OkWrap<Option<&Car>>`
</code></pre>
<p>I am still very new to Rust and I do not understand the issue here.</p>
|
<python><rust><pyo3><maturin>
|
2023-03-03 07:20:52
| 1
| 316
|
Prokie
|
75,624,180
| 7,713,811
|
Issue with Exporting Cycle into CSV filetype in Zephyr Squad REST API
|
<p>Followed Official Doc:
<a href="https://zephyrsquad.docs.apiary.io/#reference/cycle/export-cycle/export-cycle" rel="nofollow noreferrer"> Zephyr Squad Cloud REST API (formerly Zephyr for Jira) </a></p>
<p>I am getting <em><strong><code>Missing parameter: projectId</code></strong></em> as a response from Zephyr Squad. I don't understand why the <code>projectId</code> parameter is reported as missing, even though I pass it in the <code>CANONICAL_PATH</code> query string and in the <code>cycle</code> JSON payload.</p>
<pre><code>RELATIVE_PATH = '/public/rest/api/1.0/cycle/{}/export'.format(cycle_id)
CANONICAL_PATH = 'GET&/public/rest/api/1.0/cycle/' + cycle_id + '/export?' + 'projectId=' + str('10000') + '&versionId=' + str('10000') + '&exportType=' + str('CSV')
</code></pre>
<p><strong>Actual Result:</strong></p>
<pre><code><Response [400]>
{
"clientMessage": "Missing parameter: projectId",
"errorCode": 151,
"errorType": "ERROR"
}
</code></pre>
<p><strong>Expected Result:</strong>
The response status should be 200 and the response should be saved as a CSV file.</p>
<p><strong>Complete Snippet:</strong></p>
<pre><code>import json
import jwt
import time
import hashlib
import requests
def is_json(data):
try:
json.loads(data)
except ValueError:
return False
return True
# USER
ACCOUNT_ID = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# ACCESS KEY from navigation >> Tests >> API Keys
ACCESS_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# ACCESS KEY from navigation >> Tests >> API Keys
SECRET_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# JWT EXPIRE how long token been to be active? 3600 == 1 hour
JWT_EXPIRE = 3600
# BASE URL for Zephyr for Jira Cloud
BASE_URL = 'https://prod-api.zephyr4jiracloud.com/connect'
# RELATIVE PATH for token generation and make request to api
cycle_id = 'ca55798e-e9e8-4ebb-8b43-efac9360e615'
RELATIVE_PATH = '/public/rest/api/1.0/cycle/{}/export'.format(cycle_id)
CANONICAL_PATH = 'GET&/public/rest/api/1.0/cycle/' + cycle_id + '/export?' + 'projectId=' + str('10000') + '&versionId=' + str('10000') + '&exportType=' + str('CSV')
# TOKEN HEADER: to generate jwt token
payload_token = {
'sub': ACCOUNT_ID,
'qsh': hashlib.sha256(CANONICAL_PATH.encode('utf-8')).hexdigest(),
'iss': ACCESS_KEY,
'exp': int(time.time())+JWT_EXPIRE,
'iat': int(time.time())
}
# GENERATE TOKEN
token = jwt.encode(payload_token, SECRET_KEY, algorithm='HS256').strip()
# REQUEST HEADER: to authenticate and authorize api
headers = {
'Authorization': 'JWT '+token,
'Content-Type': 'application/json',
'zapiAccessKey': ACCESS_KEY
}
# REQUEST PAYLOAD: to create cycle
cycle = {
'versionId': 10000,
'projectId': 10000,
'exportType': 'CSV',
'folderId': 'UI'
}
# MAKE REQUEST:
raw_result = requests.get(BASE_URL + RELATIVE_PATH, headers=headers, json=cycle)
print(raw_result)
# Download the CSV file and save it to disk
with open("Export.csv", "w", newline="") as csvfile:
csvfile.write(raw_result.text)
if is_json(raw_result.text):
# JSON RESPONSE: convert response to JSON
json_result = json.loads(raw_result.text)
# PRINT RESPONSE: pretty print with 4 indent
print(json.dumps(json_result, indent=4, sort_keys=True))
else:
print(raw_result.text)
</code></pre>
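One likely issue, offered as an assumption: the snippet sends <code>projectId</code>, <code>versionId</code> and <code>exportType</code> in a JSON body (<code>json=cycle</code>), but for a GET endpoint the server reads them from the query string, which is also what <code>CANONICAL_PATH</code> already hashes. With requests that means passing <code>params=</code> instead:

```python
from urllib.parse import urlencode

# the same values as in `cycle`, but destined for the query string
params = {'projectId': 10000, 'versionId': 10000, 'exportType': 'CSV'}
print(urlencode(params))  # projectId=10000&versionId=10000&exportType=CSV

# hypothetical fix: replace `json=cycle` with `params=params`
# raw_result = requests.get(BASE_URL + RELATIVE_PATH, headers=headers, params=params)
```

This keeps the query string the request actually sends consistent with the one baked into the `qsh` hash.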
|
<python><jira-rest-api><jira-zephyr>
|
2023-03-03 07:19:05
| 0
| 2,964
|
Nɪsʜᴀɴᴛʜ ॐ
|
75,624,165
| 10,500,957
|
Gtk Group Check Button
|
<p>I discovered Gtk grouped CheckButtons in the documentation at:</p>
<p><a href="https://developer.gnome.org/documentation/tutorials/beginners/components/radio_button.html" rel="nofollow noreferrer">https://developer.gnome.org/documentation/tutorials/beginners/components/radio_button.html</a></p>
<p>But when I run the Python code, shown in the above:</p>
<pre><code># https://developer.gnome.org/documentation/tutorials/beginners/components/radio_button.html
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk, GdkPixbuf# GLib
live = Gtk.CheckButton(label="Live")
laugh = Gtk.CheckButton(label="Laugh", group=live)
love = Gtk.CheckButton(label="Love", group=live)
def on_toggled(button, identifier):
is_active = button.props.active
if identifier == "live":
# update_live() is defined elsewhere
update_live(is_active)
elif identifier == "laugh":
# update_laugh() is defined elsewhere
update_laugh(is_active)
elif identifier == "love":
# update_love() is defined elsewhere
update_love(is_active)
# The live, laugh, and love variables are defined like the example above
live.connect("toggled", on_toggled, "live")
laugh.connect("toggled", on_toggled, "laugh")
love.connect("toggled", on_toggled, "love")
</code></pre>
<p>I get a</p>
<p><code>TypeError: gobject `GtkCheckButton' doesn't support property `group'</code></p>
<p>error.</p>
<p>My Python version is Python 3.8.10. Can someone tell me what the problem is?</p>
<p>Tkinter has <code>tk.Checkbutton(..., indicatoron=0)</code>, but I presume Gtk does not have this kind of option for its radio buttons.</p>
<p>Thank you very much in advance.</p>
|
<python><button><gtk><group>
|
2023-03-03 07:17:35
| 0
| 322
|
John
|
75,624,099
| 15,239,717
|
Django Query two Models and Display Records in One HTML Table
|
<p>I am working on a Django daily-savings project with a statement view, in which I want to display all of a customer's deposits and withdrawals in one HTML table. I am also after the best possible performance. I don't know whether there is a way to display records from a model in a table other than a for loop; if there is, an answer covering that is welcome too.
Here are my models:</p>
<pre><code>class Deposit(models.Model):
customer = models.ForeignKey(Profile, on_delete=models.CASCADE, null=True)
transID = models.CharField(max_length=12, null=True)
acct = models.CharField(max_length=6, null=True)
staff = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
deposit_amount = models.PositiveIntegerField(null=True)
date = models.DateTimeField(auto_now_add=True)
def get_absolute_url(self):
return reverse('create_account', args=[self.id])
def __str__(self):
return f'{self.customer} Deposited {self.deposit_amount} by {self.staff.username}'
class Witdrawal(models.Model):
account = models.ForeignKey(Profile, on_delete=models.CASCADE, null=True)
transID = models.CharField(max_length=12, null=True)
staff = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
withdrawal_amount = models.PositiveIntegerField(null=True)
date = models.DateTimeField(auto_now_add=True)
def __str__(self):
return f'{self.account}- Withdrawn - {self.withdrawal_amount}'
</code></pre>
<p>Here is my view:</p>
<pre><code>def account_statement(request, id):
try:
customer = Account.objects.get(id=id)
#Get Customer ID
customerID = customer.customer.id
except Account.DoesNotExist:
messages.error(request, 'Something Went Wrong')
return redirect('create-customer')
else:
deposits = Deposit.objects.filter(customer__id=customerID).order_by('-date')[:5]
#Get Customer Withdrawal by ID and order by Date minimum 5 records displayed
withdrawals = Witdrawal.objects.filter(account__id=customerID).order_by('-date')[:5]
context = {
'deposits ':deposits ,
'withdrawals ':withdrawals,
}
return render(request, 'dashboard/statement.html', context)
</code></pre>
<p>My HTML Template Code:</p>
<pre><code><table class="table bg-white">
<thead class="bg-info text-white">
<tr>
<th scope="col">#</th>
<th scope="col">Acct. No.</th>
<th scope="col">Phone</th>
<th scope="col">Amount</th>
<th scope="col">Date</th>
<th scope="col">Action</th>
</tr>
</thead>
{% if deposits %}
<tbody>
{% for deposit in deposits %}
<tr>
<td>{{ forloop.counter }}</td>
<td>{{ deposit.acct }}</td>
<td>{{ deposit.customer.phone }}</td>
<td>N{{ deposit.deposit_amount | intcomma }}</td>
<td>{{ deposit.date | naturaltime }}</td>
<th scope="row"><a class="btn btn-success btn-sm" href="{% url 'deposit-slip' deposit.id %}">Slip</a></th>
</tr>
{% endfor %}
</tbody>
{% else %}
<h3 style="text-align: center; color:red;">No Deposit Found for {{ customer.customer.profile.surname }} {{ customer.customer.profile.othernames }}</h3>
{% endif %}
</table>
</code></pre>
<p>Please understand that I can currently display only the customer's deposits in the table above, but I don't know how to display both the deposits and withdrawals of the customer in this same table. Thanks</p>
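One way to show both record types in a single table, sketched with <code>itertools.chain</code> (this merges the two querysets in Python rather than in SQL; the namedtuples below are stand-ins for the model instances):

```python
from collections import namedtuple
from itertools import chain

# stand-ins for Deposit/Withdrawal instances fetched in the view
Tx = namedtuple('Tx', ['kind', 'amount', 'date'])
deposits = [Tx('deposit', 100, 3), Tx('deposit', 50, 1)]
withdrawals = [Tx('withdrawal', 30, 2)]

# one combined, date-sorted sequence to iterate over in the template
transactions = sorted(chain(deposits, withdrawals),
                      key=lambda t: t.date, reverse=True)
print([t.kind for t in transactions])  # ['deposit', 'withdrawal', 'deposit']
```

In the view, the `transactions` list would go into the context, and the template's for loop could branch on a per-record type marker (e.g. a small property on each model) to decide which cells to fill.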
|
<python><django>
|
2023-03-03 07:08:32
| 2
| 323
|
apollos
|
75,624,066
| 3,560,215
|
Python Loop over Key Value Pair
|
<p>I have an input file list called INPUTLIST.txt that currently looks like this:</p>
<pre><code>KV:123
LDAP:456
AWS:789
PKI:222
</code></pre>
<p>In all honesty though, I'm not entirely fussed about its presentation, but there's plenty of flexibility for the above list to be saved in any suitable format, including these two below:</p>
<pre><code>{'KV':123, 'LDAP':456, 'AWS':789, 'PKI':222}
['KV':123, 'LDAP':456, 'AWS':789, 'PKI':222]
</code></pre>
<p>My main requirement is to iterate through this input list in Python, perhaps with a for loop, to retrieve each key-value pair so I can go on to execute some additional commands. For example, I'll have something like this (shell-style pseudocode):</p>
<pre><code> for b in $(cat INPUTLIST.txt)
do
echo $b[0] // The key, e.g. KV/LDAP
echo $b[1] // The corresponding value, e.g. 123/456
python3 main_script.py $b[0] $b[1] // Execute a Python script and pass the two values as arguments.
done
</code></pre>
<p>Following the reading of each key-value pair, the main requirement is to execute another script (in this case a Python script I'm calling <code>main_script.py</code>) and pass the key-value pair to it as arguments, as depicted above.</p>
<p>Any ideas or recommendations on what would be an ideal solution?</p>
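A possible Python equivalent of the shell loop above, assuming the <code>KEY:VALUE</code> text format and that <code>main_script.py</code> accepts the pair as positional arguments:

```python
import subprocess

def parse_pairs(lines):
    """Yield (key, value) tuples from lines like 'KV:123'."""
    for line in lines:
        line = line.strip()
        if line:
            key, _, value = line.partition(':')
            yield key, value

pairs = list(parse_pairs(['KV:123', 'LDAP:456', 'AWS:789', 'PKI:222']))
print(pairs)  # [('KV', '123'), ('LDAP', '456'), ('AWS', '789'), ('PKI', '222')]

# reading the real file and invoking the other script would look like:
# with open('INPUTLIST.txt') as f:
#     for key, value in parse_pairs(f):
#         subprocess.run(['python3', 'main_script.py', key, value], check=True)
```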
|
<python><dictionary><for-loop><arraylist><key-value>
|
2023-03-03 07:04:17
| 1
| 971
|
hitman126
|
75,623,977
| 10,200,497
|
change the value of column to the maximum value above it in the same column
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame({'a': [100, 103, 101, np.nan, 105, 107, 100]})
</code></pre>
<p>And this is the output that I want:</p>
<pre><code> a b
0 100.0 100
1 103.0 103
2 101.0 103
3 NaN 103
4 105.0 105
5 107.0 107
6 100.0 107
</code></pre>
<p>I want to create column <code>b</code>, which takes the values of column <code>a</code> and replaces each one with the maximum value seen so far above it in the column.</p>
<p>For example, when 103 appears in <code>a</code>, I want all following values in <code>b</code> to be 103 until a greater number appears in <code>a</code>. That is why rows 2 and 3 are changed to 103; since row 4 holds a number greater than 103, that value goes into column <code>b</code> until an even greater number appears in <code>a</code>.</p>
<p>I have tried a couple of posts on stackoverflow. One of them was this <a href="https://stackoverflow.com/a/54409595/10200497">answer</a>. But still I couldn't figure out how to do it.</p>
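For what it's worth, this looks like a running maximum: <code>cummax()</code> carries the largest value seen so far, and <code>ffill()</code> covers the NaN row:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [100, 103, 101, np.nan, 105, 107, 100]})
# running maximum; ffill() propagates the last max across the NaN row
df['b'] = df['a'].cummax().ffill().astype(int)
print(df['b'].tolist())  # [100, 103, 103, 103, 105, 107, 107]
```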
|
<python><pandas>
|
2023-03-03 06:52:07
| 1
| 2,679
|
AmirX
|
75,623,812
| 18,086,775
|
Creating a pivot table with multiple columns
|
<p>I'm trying to create a pivot table with multiple columns; I'm unsure how to explain this better. But the following is the desired output, dataframe setup, and code I have tried so far.</p>
<p><strong>Desired Output:</strong><a href="https://i.sstatic.net/AIo15.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AIo15.png" alt="enter image description here" /></a></p>
<p><strong>Dataframe Setup:</strong></p>
<pre><code>data = {
'WholesalerID': {0: 121, 1: 121, 2: 42, 3: 42, 4: 54, 5: 43, 6: 432, 7: 4245, 8: 4245, 9: 4245, 10: 457},
'Brand': {0: 'Vans', 1: 'Nike', 2: 'Nike', 3: 'Vans',4: 'Vans', 5: 'Nike', 6: 'Puma', 7: 'Vans', 8: 'Nike', 9: 'Puma', 10: 'Converse'},
'Shop 1': {0: 'Yes', 1: 'No', 2: 'Yes', 3: 'Maybe', 4: 'Yes', 5: 'No', 6: 'Yes', 7: 'Yes', 8: 'Maybe', 9: 'Maybe', 10: 'No'},
'Shop 2': {0: 'No', 1: 'Yes', 2: 'Maybe', 3: 'Maybe', 4: 'Yes', 5: 'No', 6: 'No', 7: 'No', 8: 'Maybe', 9: 'Yes', 10: 'Yes'}
}
df = pd.DataFrame.from_dict(data)
</code></pre>
<p><strong>Pivoting Attempt:</strong></p>
<pre><code>df = df.assign(count = 1)
pivoted_df = pd.pivot_table(df,
index = ['Brand'],
columns = ['Shop 1', 'Shop 2'],
values = ['count'],
aggfunc = {'count': 'count'},
fill_value = 0,
margins = True,
margins_name = 'Total'
)
</code></pre>
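As an alternative sketch (the exact target layout is only in the screenshot, so this may need tweaking): <code>pd.crosstab</code> counts the combinations directly, without the helper <code>count</code> column, and accepts the same <code>margins</code>/<code>margins_name</code> options used above:

```python
import pandas as pd

df = pd.DataFrame({
    'WholesalerID': [121, 121, 42, 42, 54, 43, 432, 4245, 4245, 4245, 457],
    'Brand': ['Vans', 'Nike', 'Nike', 'Vans', 'Vans', 'Nike', 'Puma',
              'Vans', 'Nike', 'Puma', 'Converse'],
    'Shop 1': ['Yes', 'No', 'Yes', 'Maybe', 'Yes', 'No', 'Yes',
               'Yes', 'Maybe', 'Maybe', 'No'],
    'Shop 2': ['No', 'Yes', 'Maybe', 'Maybe', 'Yes', 'No', 'No',
               'No', 'Maybe', 'Yes', 'Yes'],
})

# rows: Brand; columns: two stacked levels, one per shop column
pivoted = pd.crosstab(df['Brand'], [df['Shop 1'], df['Shop 2']])
print(int(pivoted.values.sum()))  # 11  (one count per row of df)
```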
|
<python><pandas><pivot-table>
|
2023-03-03 06:27:18
| 1
| 379
|
M J
|
75,623,549
| 9,008,162
|
How can I use np.array to speed up calculation instead of looping (with some preconditions)?
|
<p>I need to use the following code; is it possible to use np.array to do the exact calculation and get the same result faster?</p>
<pre class="lang-py prettyprint-override"><code>data['daily_change'] = data.groupby('title',group_keys=False)['return'].pct_change()
for title in data['title'].unique(): # Iterate through each title
temp_df = data[data['title'] == title].tail(252) # Select the data for a specific title
if len(temp_df) < 252:
print(f"{title} has less than 1 year of data, ignore\n")
continue
sections = [temp_df.iloc[i:i+63] for i in range(0, 252, 63)] # Divide the data into 4 sections
if method1:
result = sum([(section['return'].iloc[-1] / section['return'].iloc[0]) * weight for section, weight in zip(sections, [0.2]*3 + [0.4])]) # Calculate the weighted return
else:
# Calculate the weighted return using the daily changes
result = sum([(1 + section['daily_change']).prod() * weight for section, weight in zip(sections, [0.2]*3 + [0.4])]) - 1
df_new = pd.concat([df_new, pd.DataFrame({'title': [title], 'result': [result]})], ignore_index=True)
</code></pre>
<p><strong>Additional info:</strong></p>
<p>Here is the sample data <a href="https://www.dropbox.com/s/ehawttyt2rhrkx5/sample.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/ehawttyt2rhrkx5/sample.csv?dl=0</a></p>
<p>Expected result for method1</p>
<pre><code>A: 1.00105
B: 1.03288
C: 1.13492
D: 0.966295
E: 1.06095
F: 1.02021
</code></pre>
<p>Expected result for else condition:</p>
<pre><code>A: 0.00526707
B: 0.0433293
C: 0.14446
D: -0.0129632
E: 0.0601407
F: 0.0263727
</code></pre>
<p>Short description of what I want to do:</p>
<ol>
<li>Compute the daily change in return for each title separately.</li>
<li>Take only the most recent 252 data points for each title.</li>
<li>Divide the data points for each title into four sections.</li>
<li>Run both methods 1 and else calculation for each title.</li>
</ol>
<p>Method 1 takes the ratio of the last to the first data point in each section, multiplies it by the respective weight, and totals the results.</p>
<p>Otherwise, take the product of (1 + daily change) over each section, multiply it by the respective weight, and add it all up.</p>
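Assuming method 1, one way to avoid the Python-level loop over titles is a single <code>groupby().apply()</code> so each title is handled once. A sketch with made-up constant data, where the weighted result must equal the sum of the weights:

```python
import numpy as np
import pandas as pd

def weighted_return(g):
    g = g.tail(252)                      # most recent year only
    if len(g) < 252:
        return np.nan                    # "less than 1 year of data"
    weights = [0.2, 0.2, 0.2, 0.4]
    sections = [g.iloc[i:i + 63] for i in range(0, 252, 63)]
    return sum(s['return'].iloc[-1] / s['return'].iloc[0] * w
               for s, w in zip(sections, weights))

# toy data: constant returns, so the result for A must be ~1.0; B is too short
data = pd.DataFrame({'title': ['A'] * 252 + ['B'] * 10,
                     'return': [1.0] * 262})
result = data.groupby('title').apply(weighted_return)
print(round(result['A'], 6))  # 1.0
```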
|
<python><pandas><dataframe><numpy><performance>
|
2023-03-03 05:42:33
| 1
| 775
|
saga
|
75,623,506
| 4,733,871
|
Python, get the index of maximum value in a list of lists
|
<p>I've got the following list of lists:</p>
<pre><code>[[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[1.0, 4],
[0, 0],
[0.75, 3],
[0.75, 3],
[0, 0],
[1.0, 4],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 1],
[0, 2]]
</code></pre>
<p>I'm trying to get the index of the sublist whose first element is the maximum.</p>
<p>I'm using this:</p>
<pre><code>similarity_list.index(max([similarity_list[1]]))
</code></pre>
<p>and it's returning the following error:</p>
<blockquote>
<p>The truth value of an array with more than one element is ambiguous.
Use a.any() or a.all()</p>
</blockquote>
<p>I'm trying to use .any() or .all() but it's not working.
What's wrong?</p>
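For what it's worth, a plain-Python way to do this (which also sidesteps the ambiguity error if the inner items happen to be NumPy arrays) is to rank the indices by each sublist's first element; shown here on a shortened version of the list:

```python
similarity_list = [[0, 0], [0, 0], [1.0, 4], [0, 0], [0.75, 3], [0, 1]]

# index of the sublist with the largest first element
best_index = max(range(len(similarity_list)),
                 key=lambda i: similarity_list[i][0])
print(best_index)  # 2
```

On ties, `max` keeps the first maximal index it encounters.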
|
<python><list><indexing><max>
|
2023-03-03 05:37:20
| 2
| 1,258
|
Dario Federici
|
75,623,470
| 19,238,204
|
5D Scatter Plot is too big, how to modify the size attribute?
|
<p>I have followed this tutorial:
<a href="https://towardsdev.com/multi-dimension-visualization-in-python-part-i-85c13e9b7495" rel="nofollow noreferrer">https://towardsdev.com/multi-dimension-visualization-in-python-part-i-85c13e9b7495</a></p>
<p><a href="https://towardsdev.com/multi-dimension-visualization-in-python-part-ii-8c56d861923a" rel="nofollow noreferrer">https://towardsdev.com/multi-dimension-visualization-in-python-part-ii-8c56d861923a</a></p>
<p>The markers in the 5D scatter plot are too big; how do I modify their size, e.g. make them 1/10 or 1/50 of the current size?</p>
<p>The csv is like this:
<a href="https://i.sstatic.net/BtBL9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BtBL9.png" alt="1" /></a></p>
<p>The plot with oversize scattered circles:
<a href="https://i.sstatic.net/zS8AH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zS8AH.png" alt="1" /></a></p>
<p>the code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib as mpl
import numpy as np
import seaborn as sns
dataidxbluechip = pd.read_csv("/home/browni/LasthrimProjection/Python/csv/idxbluechipstocks.csv", index_col=0, parse_dates = True)
dataidxpenny = pd.read_csv("/home/browni/LasthrimProjection/Python/csv/idxpennystocks.csv", index_col=0, parse_dates = True)
# Print first 5 rows of data
print(dataidxbluechip.head())
print(dataidxpenny.head())
# Store stocks type as an attribute
dataidxbluechip['stocks_type'] = 'IDX Blue chip'
dataidxpenny['stocks_type'] = 'IDX Penny Stocks'
# bucket stocks quality scores into qualitative quality labels
dataidxbluechip['PER 2023'] = dataidxbluechip['per2023'].apply(lambda value: 'low'
if value <= 10
else 'medium' if value <= 15
else 'high')
dataidxbluechip['PER 2023'] = pd.Categorical(dataidxbluechip['PER 2023'],
categories=['low','medium','high'])
dataidxpenny['PER 2023'] = dataidxpenny['per2023'].apply(lambda value: 'low'
if value <= 10
else 'medium' if value <= 15
else 'high')
dataidxpenny['PER 2023'] = pd.Categorical(dataidxpenny['PER 2023'],
categories=['low','medium','high'])
allstocks = pd.concat([dataidxbluechip, dataidxpenny])
# Print first 5 rows of data after being concatenate and replace
# the column of per2023 with PER 2023 stating the level of PER
print(allstocks.head())
# Visualizing 5-D mix data using bubble charts
# leveraging the concepts of hue, size and depth
g = sns.FacetGrid(allstocks, col="stocks_type", hue='PER 2023',
col_order=['IDX Blue chip', 'IDX Penny Stocks'],
hue_order=['low', 'medium', 'high'],
aspect=1.2, palette=sns.light_palette('navy', 4)[1:])
# The size='bvps2023' seems problematic.. ask..
g.map_dataframe(sns.scatterplot, "pricejan2010", "debtequityratio2023", alpha=0.9,
edgecolor='white', linewidth=0.5, size='bvps2023',
sizes=(allstocks['bvps2023'].min(), allstocks['bvps2023'].max()))
fig = g.fig
fig.subplots_adjust(top=0.8, wspace=0.3)
fig.suptitle('Stocks Type - Book Value - Debt Equity Ratio - PER 2023', fontsize=14)
l = g.add_legend(title='Stocks PER Quality Class')
plt.show()
fig.savefig('5dscatterplot.png')
</code></pre>
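A note on the likely culprit, offered as an assumption: seaborn's <code>sizes=(min, max)</code> expects the smallest and largest marker sizes in points², not data values, so passing <code>bvps2023</code>'s own min/max makes every bubble as big as its raw book value. Fixed small bounds such as <code>sizes=(20, 200)</code> shrink everything; the same min-max scaling looks like this in plain matplotlib (the values are made up):

```python
import matplotlib
matplotlib.use('Agg')                       # headless-safe backend
import matplotlib.pyplot as plt
import numpy as np

bvps = np.array([5.0, 50.0, 500.0])         # made-up book values
smin, smax = 20, 200                         # assumed bounds in points^2
sizes = smin + (bvps - bvps.min()) / (bvps.max() - bvps.min()) * (smax - smin)
print(sizes[0], sizes[-1])  # 20.0 200.0

fig, ax = plt.subplots()
ax.scatter([1, 2, 3], [1, 2, 3], s=sizes)   # small, readable bubbles
```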
|
<python><pandas><matplotlib>
|
2023-03-03 05:30:25
| 1
| 435
|
Freya the Goddess
|
75,623,422
| 18,758,062
|
Reset pruned trials in Optuna study?
|
<p>If I have a study where all the pruned trials need to be reset for some reason, is there a way to do this?</p>
<p>Maybe something that might work: Creating a copy of the current study where pruned trials have their state reset.</p>
<p>Thank you for any suggestions on resetting all pruned trials.</p>
|
<python><optuna>
|
2023-03-03 05:20:46
| 1
| 1,623
|
gameveloster
|
75,623,408
| 6,397,155
|
Python KafkaTimeoutError: Timeout waiting for future
|
<p>I'm using Kafka to send logs to a topic. While sending the messages, I always get this error</p>
<pre><code>Message: 'test log'
Arguments: ()
--- Logging error ---
Traceback (most recent call last):
File "/home/ubuntu/applications/pythonapp/kafka_handler.py", line 53, in emit
self.flush(timeout=1.0)
File "/home/ubuntu/applications/pythonapp/kafka_handler.py", line 59, in flush
self.producer.flush(timeout=timeout)
File "/home/ubuntu/env_revamp/lib/python3.8/site-packages/kafka/producer/kafka.py", line 649, in flush
self._accumulator.await_flush_completion(timeout=timeout)
File "/home/ubuntu/env_revamp/lib/python3.8/site-packages/kafka/producer/record_accumulator.py", line 529, in await_flush_completion
raise Errors.KafkaTimeoutError('Timeout waiting for future')
kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Timeout waiting for future
</code></pre>
<p>I tried a few solutions like commenting out the line</p>
<pre><code>self.flush(timeout=1.0)
</code></pre>
<p>I also tried increasing the timeout value to 5.0 but in vain.</p>
<p>My stack :</p>
<p>Python 3.8.10</p>
<p>Pip package list</p>
<pre><code>alembic==1.8.1
certifi==2022.9.24
charset-normalizer==2.1.1
click==8.1.3
dataclasses==0.6
Flask==2.2.2
Flask-Cors==3.0.10
Flask-Migrate==4.0.0
Flask-SQLAlchemy==3.0.2
graypy==2.1.0
greenlet==2.0.1
idna==3.4
importlib-metadata==5.0.0
importlib-resources==5.10.0
itsdangerous==2.1.2
Jinja2==3.1.2
kafka-python==2.0.2
logging-gelf==0.0.26
Mako==1.2.4
MarkupSafe==2.1.1
marshmallow==3.19.0
numpy==1.24.2
packaging==21.3
pip==20.0.2
PyMySQL==1.0.2
pyparsing==3.0.9
requests==2.28.1
setuptools==44.0.0
six==1.16.0
SQLAlchemy==1.4.44
urllib3==1.26.12
Werkzeug==2.2.2
zipp==3.10.0
</code></pre>
<p>Does anybody have any idea what this error is and how can we solve this?</p>
|
<python><apache-kafka><kafka-python>
|
2023-03-03 05:18:04
| 2
| 1,469
|
node_man
|
75,623,232
| 10,159,065
|
Dask explode function similar to pandas
|
<p>I have a dataframe that looks like this</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({"ph_number" : ['1234','2345','1234','1234','2345','1234','2345'],
"year": [2022,2022,2023,2022,2022,2022,2022],
"month": [9,10,1,10,8,11,12],
"device_set": ['vivo 1915:vivo','SM-A510F:samsung','1718:vivo vivo~^!vivo 1718:vivo','vivo 1915:vivo','SM-A510F:samsung~^!vivo 1718:vivo','vivo 1915:vivo','SM-A510F:samsung']
})
</code></pre>
<p>I want the output to be like this</p>
<p><a href="https://i.sstatic.net/U3RY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U3RY8.png" alt="res" /></a></p>
<p>In pandas I can do it with:</p>
<pre><code>
df['device_set'] = df['device_set'].astype(str).apply(lambda x: x.split('~^!'))
pd.DataFrame({col:np.repeat(np.array(df[col].values), np.array(df["device_set"].str.len())) for col in df.columns.drop("device_set")}).assign(**{"device_set":np.concatenate(df["device_set"].values)})[df.columns]
</code></pre>
<p>I want to do it in Dask. I have written the following Dask code, but it is not working:</p>
<pre><code>def dask_explode(df, lst_cols):
with dask_session(time_to_use=2*60*60) as dask_client:
mdd = dd.from_pandas(df, npartitions=54)
mdd = pd.DataFrame({col:np.repeat(np.array(mdd[col].values), np.array(mdd[lst_cols].str.len())) for col in mdd.columns.drop(lst_cols)}).assign(**{lst_cols:np.concatenate(mdd[lst_cols].values)})[mdd.columns]
return mdd.compute()
df['device_set'] = df['device_set'].astype(str).apply(lambda x: x.split('~^!'))
output = dask_explode(df, "device_set")
</code></pre>
<p>Note: I can't use pd.explode because my prod env version is pandas==0.23.4.</p>
<p>Reference : <a href="https://stackoverflow.com/questions/48794621/python-pandas-converting-from-pandas-numpy-to-dask-dataframe-array">Python PANDAS: Converting from pandas/numpy to dask dataframe/array</a></p>
|
<python><pandas><numpy><dask><dask-dataframe>
|
2023-03-03 04:42:40
| 0
| 448
|
Aayush Gupta
|
75,623,112
| 139,150
|
Run a script quickly from the second time onward
|
<p>I have a Python script that takes a lot of time to complete.
The line that takes most of the time is:</p>
<pre><code>ndf['embeddings'] = ndf['embeddings'].apply(ast.literal_eval)
</code></pre>
<p>Is there any way to pickle the results so that I only have to wait the first time?</p>
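<p>For reference, a minimal sketch of one possible caching pattern (the cache file name and sample data below are assumptions, not from the question): parse once, pickle the parsed column, and load the pickle on subsequent runs.</p>

```python
import ast
import os
import pickle

import pandas as pd

CACHE = "embeddings.pkl"  # hypothetical cache file name

# stand-in for the real dataframe with stringified lists
ndf = pd.DataFrame({"embeddings": ["[1.0, 2.0]", "[3.0, 4.0]"]})

if os.path.exists(CACHE):
    # fast path: reuse the previously parsed column
    with open(CACHE, "rb") as f:
        ndf["embeddings"] = pickle.load(f)
else:
    # slow path: parse once, then cache for the next run
    ndf["embeddings"] = ndf["embeddings"].apply(ast.literal_eval)
    with open(CACHE, "wb") as f:
        pickle.dump(ndf["embeddings"], f)

print(ndf["embeddings"].iloc[0])  # [1.0, 2.0]
```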
|
<python><pandas>
|
2023-03-03 04:16:01
| 1
| 32,554
|
shantanuo
|
75,622,901
| 512,251
|
Return the value in case of pickle serialization exception instead of failing the whole call
|
<pre class="lang-py prettyprint-override"><code>from dogpile.cache import make_region
TIMEOUT_SECONDS = 10 * 60
def my_key_generator(namespace, fn):
fname = fn.__name__
def generate_key(*arg):
key_template = fname + "_" + "_".join(str(s) for s in arg)
return key_template
return generate_key
region = make_region(function_key_generator=my_key_generator).configure(
"dogpile.cache.redis",
expiration_time=TIMEOUT_SECONDS,
arguments={
"host": "localhost",
"port": 6379,
"db": 0,
"redis_expiration_time": TIMEOUT_SECONDS * 2, # 2 hours
"distributed_lock": True,
"thread_local_lock": False,
},
)
@region.cache_on_arguments()
def load_user_info(user_id):
log.info(f"Called func {user_id}")
return lambda: user_id
print(load_user_info(1))
</code></pre>
<p>This returns the pickle error which is to be expected.</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/Users/user/src/python-test/dogpile-test.py", line 120, in <module>
print(load_user_info(1))
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/dogpile/cache/region.py", line 1577, in get_or_create_for_user_func
return self.get_or_create(
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/dogpile/cache/region.py", line 1042, in get_or_create
with Lock(
File "/opt/homebrew/lib/python3.11/site-packages/dogpile/lock.py", line 185, in __enter__
return self._enter()
^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/dogpile/lock.py", line 94, in _enter
generated = self._enter_create(value, createdtime)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/dogpile/lock.py", line 178, in _enter_create
return self.creator()
^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/dogpile/cache/region.py", line 1012, in gen_value
self._set_cached_value_to_backend(key, value)
File "/opt/homebrew/lib/python3.11/site-packages/dogpile/cache/region.py", line 1288, in _set_cached_value_to_backend
key, self._serialized_cached_value(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/dogpile/cache/region.py", line 1258, in _serialized_cached_value
return self._serialize_cached_value_elements(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/dogpile/cache/region.py", line 1232, in _serialize_cached_value_elements
serializer(payload),
^^^^^^^^^^^^^^^^^^^
AttributeError: Can't pickle local object 'load_user_info.<locals>.<lambda>'
</code></pre>
<p>My question is: is there a way for dogpile to catch the pickle exception, log it, and return the actual value instead of failing the whole function call?</p>
|
<python><pickle><dogpile.cache>
|
2023-03-03 03:26:26
| 0
| 5,441
|
user
|
75,622,700
| 5,203,117
|
How to work with a boolean conditional in a pandas dataframe
|
<p>I have a pandas data frame (df) with an index of dates and 1,000 rows. A row looks like this:</p>
<pre><code>SPY 262.408051
shrtAvg 262.861718
signal True
Name: 2019-03-22 00:00:00, dtype: object
</code></pre>
<p>I would like to do a calculation to create a new column based on df.signal, like this (pseudocode):</p>
<pre><code>df['distance'] = if df.signal == True, df.SPY - df.shrtAvg, df.shrtAvg - df.SPY
</code></pre>
<p>But no matter how I try it I always get this error:</p>
<p><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p>
<p>I have tried .bool, .apply, np.where, and .loc to no avail.
Example:</p>
<pre><code>df['distance'] = np.where(df.signal, df.SPY - df.shrtAVG, df.shrtAVG - df.SPY)
</code></pre>
<p>My question is: how can I create a new column based on a calculation rooted in a boolean check?</p>
|
<python><pandas>
|
2023-03-03 02:38:28
| 1
| 597
|
John
|
75,622,650
| 5,855,996
|
dbt Python Incremental Models
|
<p>I’m really excited to be able to use Python models in dbt, but I’ve been running into issues with incremental ones.</p>
<p>If I have a working model with <code>materialized = "table"</code> and I change it to <code>materialized = "incremental"</code>, shouldn’t that work fine even before adding in the correct <code>if dbt.is_incremental:</code> logic?</p>
<p>For some reason I get this error when I just change table → incremental for a working model:</p>
<pre><code>File "/tmp/d87435d6-edb3-4afa-84e7-04dae648adcf/query_consistency.py", line 5
create or replace table graph-mainnet.internal_metrics.query_consistency__dbt_tmp
^
IndentationError: unexpected indent
</code></pre>
<p>Any thoughts here? Has anyone else had issues with Python incremental models? I'm using BigQuery.</p>
<p>Code which works fine below (and does not work fine when going from table to incremental):</p>
<pre class="lang-py prettyprint-override"><code>import requests
import pandas as pd
import json
import time
def model(dbt, session):
dbt.config(materialized = "table")
# ENTER THE SCHEMA TYPE YOU WANT TO GET ALL DATA FOR
schema_type = 'dex-amm'
# fetch the data from the deployment file
response = requests.get('https://raw.githubusercontent.com/messari/subgraphs/master/deployment/deployment.json')
subgraphs = response.json()
# create query
query = '''{
financialsDailySnapshots(orderBy: timestamp, orderDirection: desc, first: 365) {
cumulativeVolumeUSD
dailyProtocolSideRevenueUSD
totalValueLockedUSD
cumulativeTotalRevenueUSD
dailyTotalRevenueUSD
dailyVolumeUSD
timestamp
}
}'''
base_url = 'https://api.thegraph.com/subgraphs/name/messari/'
data = []
for project in subgraphs:
for deployment in subgraphs[project]['deployments']:
schema = subgraphs[project]['schema']
status = subgraphs[project]['deployments'][deployment]['status']
if status != 'prod' or schema != schema_type:
continue
if len(data) >= 4: # check if we've reached the subgraphs limit
break
try: # need this because not all have hosted-service field
slug = subgraphs[project]['deployments'][deployment]['services']['hosted-service']['slug']
except KeyError:
print(f"KeyError: unable to extract data from '{slug}' for '{project}'")
response = requests.post(base_url + slug, json={'query': query})
time.sleep(1)
if response.ok:
response_json = response.json()
headers = response.headers
timestamp_query = headers.get('Date')
try:
data.append((project, deployment, timestamp_query, response_json['data']['financialsDailySnapshots']))
print(f"Got data for: {slug}")
except KeyError:
print(f"KeyError: unable to extract data from '{slug}' for '{project}'")
else:
print(f'Request failed for {base_url + slug} with status {response.status_code}')
if len(data) >= 4: # check if we've reached the subgraphs limit
break
# create dataframe
df = pd.DataFrame(data, columns=['project', 'deployment', 'timestamp_query', 'data'])
# adjust df
df = df.explode('data')
df = pd.concat([df.drop(['data'], axis=1), df['data'].apply(pd.Series)], axis=1)
# convert timestamp
df['timestamp_query'] = df['timestamp_query'].drop_duplicates().reset_index(drop=True)
df['timestamp_query'] = pd.to_datetime(df['timestamp_query'], format='%a, %d %b %Y %H:%M:%S %Z')
# identifier
df['product'] = 'hosted_service'
# return result
return df
</code></pre>
|
<python><google-bigquery><dbt>
|
2023-03-03 02:27:22
| 1
| 1,193
|
Ricky
|
75,622,588
| 18,086,775
|
Rename columns based on a certain pattern
|
<p>I have the following columns in a dataframe: <code>Id, category2, Brandyqdy1, Brandyqwdwdy2, Brandyqdw3</code></p>
<p>If the column's name starts with <code>Brand</code> and ends with <code>1</code>, I need it renamed as <code>Vans</code>. Similarly, for other Brand columns, use the following:
<code>rename_brands = {'1': 'Vans', '2': 'Nike', '3': 'Adidas'}</code></p>
<p>Also, I will be renaming other columns apart from the ones that start with <code>Brand</code>, overall:
<code>rename_columns = {'Id': 'record', 'Category2': 'Sku', '1': 'Vans', '2': 'Nike', '3': 'Adidas'}</code></p>
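<p>For reference, a sketch of how such a rename map could be built (assuming every Brand column ends in exactly one digit that appears in the brand map):</p>

```python
import pandas as pd

# sample columns taken from the question
df = pd.DataFrame(columns=["Id", "Category2", "Brandyqdy1", "Brandyqwdwdy2", "Brandyqdw3"])

rename_brands = {"1": "Vans", "2": "Nike", "3": "Adidas"}
rename_other = {"Id": "record", "Category2": "Sku"}

# map each Brand* column by its trailing digit, then merge in the fixed renames
new_names = {c: rename_brands[c[-1]]
             for c in df.columns
             if c.startswith("Brand") and c[-1] in rename_brands}
new_names.update(rename_other)

df = df.rename(columns=new_names)
print(list(df.columns))  # ['record', 'Sku', 'Vans', 'Nike', 'Adidas']
```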
|
<python><pandas><dataframe>
|
2023-03-03 02:10:33
| 2
| 379
|
M J
|
75,622,347
| 6,622,544
|
lxml - ValueError: Namespace ... of name ... is not declared in scope
|
<p>When calling <code>lxml.etree.canonicalize(node)</code> a ValueError exception is raised: Namespace "{<uri>}" of name "<name>" is not declared in scope.</p>
<p>In this particular case the message is <code>ValueError: Namespace "http://schemas.xmlsoap.org/soap/envelope/" of name "Header" is not declared in scope</code></p>
<p>The following code works exactly as expected:</p>
<pre><code>from lxml import etree as ET
url_soap_envelope = "http://schemas.xmlsoap.org/soap/envelope/"
# Header
nsmap_Header = {
's': url_soap_envelope,
}
qname_s_Header = ET.QName(url_soap_envelope, "Header")
node_header = ET.Element(qname_s_Header, nsmap=nsmap_Header)
ET.canonicalize(node_header)
# '<s:Header xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"></s:Header>'
</code></pre>
<p>However, if <code>node_header</code> is a subelement of another node, it breaks:</p>
<pre><code>from lxml import etree as ET
url_wss_u = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
url_soap_envelope = "http://schemas.xmlsoap.org/soap/envelope/"
# Envelope
nsmap_Envelope = {
's': url_soap_envelope,
'u': url_wss_u,
}
qname_s_Envelope = ET.QName(url_soap_envelope, "Envelope")
node_envelope = ET.Element(qname_s_Envelope, {}, nsmap=nsmap_Envelope)
# Envelope / Header
nsmap_Header = {
's': url_soap_envelope,
}
qname_s_Header = ET.QName(url_soap_envelope, "Header")
node_header = ET.SubElement(node_envelope, qname_s_Header, nsmap=nsmap_Header)
ET.canonicalize(node_envelope)
# '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><s:Header></s:Header></s:Envelope>'
ET.canonicalize(node_header)
</code></pre>
<p>Error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "src/lxml/serializer.pxi", line 918, in lxml.etree.canonicalize
File "src/lxml/serializer.pxi", line 943, in lxml.etree._tree_to_target
File "src/lxml/serializer.pxi", line 1128, in lxml.etree.C14NWriterTarget.start
File "src/lxml/serializer.pxi", line 1155, in lxml.etree.C14NWriterTarget._start
File "src/lxml/serializer.pxi", line 1085, in lxml.etree.C14NWriterTarget._qname
ValueError: Namespace "http://schemas.xmlsoap.org/soap/envelope/" of name "Header" is not declared in scope
</code></pre>
<p>The expected result when calling <code>ET.canonicalize(node_header)</code>
is <code>'<s:Header xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"></s:Header>'</code></p>
<p>The XML I'm trying to build has the following template:</p>
<pre><code><s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<s:Header>
<o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
<u:Timestamp u:Id="_0">
<u:Created>{created}
</u:Created>
<u:Expires>{expires}
</u:Expires>
</u:Timestamp>
<o:BinarySecurityToken u:Id="{uuid}" ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">{b64certificate}
</o:BinarySecurityToken>
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignedInfo>
<CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<Reference URI="#_0">
<Transforms>
<Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</Transforms>
<DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<DigestValue>{digest_value}
</DigestValue>
</Reference>
</SignedInfo>
<SignatureValue>{b64signature}
</SignatureValue>
<KeyInfo>
<o:SecurityTokenReference>
<o:Reference ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3" URI="#{uuid}"/>
</o:SecurityTokenReference>
</KeyInfo>
</Signature>
</o:Security>
</s:Header>
<s:Body>
<Autentica xmlns="http://DescargaMasivaTerceros.gob.mx"/>
</s:Body>
</s:Envelope>
</code></pre>
|
<python><xml><lxml><canonicalization>
|
2023-03-03 01:14:00
| 0
| 1,068
|
Adan Cortes
|
75,622,270
| 4,508,962
|
Can't get rid of mypy error about wrong type numpy.bool_ and bool
|
<p>I have a class containing several <code>np.ndarray</code> attributes:</p>
<pre><code>class VECMParams(ModelParams):
def __init__(
self,
ecm_gamma: np.ndarray,
ecm_mu: Optional[np.ndarray],
ecm_lambda: np.ndarray,
ecm_beta: np.ndarray,
intercept_coint: bool,
):
self.ecm_gamma = ecm_gamma
self.ecm_mu = ecm_mu
self.ecm_lambda = ecm_lambda
self.ecm_beta = ecm_beta
self.intercept_coint = intercept_coint
</code></pre>
<p>I want to override the <code>==</code> operator. Basically, a <code>VECMParams</code> instance is equal to another when all of its arrays are equal to the corresponding arrays of <code>rhs</code>:</p>
<pre><code>def __eq__(self, rhs: object) -> bool:
if not isinstance(rhs, VECMParams):
raise NotImplementedError()
return (
np.all(self.ecm_gamma == rhs.ecm_gamma) and
np.all(self.ecm_mu == rhs.ecm_mu) and
np.all(self.ecm_lambda == rhs.ecm_lambda) and
np.all(self.ecm_beta == rhs.ecm_beta)
)
</code></pre>
<p>Still, mypy keeps saying that <code>Incompatible return value type (got "Union[bool_, bool]", expected "bool") [return-value]</code> because <code>np.all</code> returns <code>bool_</code> and <code>__eq__</code> needs to return a native <code>bool</code>. I have searched for hours and it looks like there is no way to convert these <code>bool_</code> values to native <code>bool</code>. Has anyone had the same problem?</p>
<p>PS: <code>my_bool_ is True</code> does not evaluate to the expected native bool value.</p>
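<p>For reference, one common workaround (a sketch, not from the original question) is to wrap the numpy result in <code>bool()</code>, which produces the native bool that mypy expects:</p>

```python
import numpy as np

def arrays_equal(a: np.ndarray, b: np.ndarray) -> bool:
    # np.all returns np.bool_; bool() converts it to a native Python bool
    return bool(np.all(a == b))

result = arrays_equal(np.array([1, 2]), np.array([1, 2]))
print(result, type(result) is bool)  # True True
```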
|
<python><numpy><mypy>
|
2023-03-03 00:57:08
| 1
| 1,207
|
Jerem Lachkar
|
75,622,228
| 9,087,739
|
Mark a task as a success on callback on intentional failure
|
<p>In Airflow 2.3.4, I have a task that I am intentionally failing. When it fails, I want to mark it as a success
in the callback, but the code below does not work:</p>
<pre><code>def intentional_failure():
raise AirflowException("this is a dummy failure")
def handle_failure(context):
context['task_instance'].state = State.SUCCESS
dummy_failure = PythonOperator(task_id="intentional_failure", python_callable=intentional_failure, on_failure_callback=handle_failure)
</code></pre>
<p>How would I programmatically mark a task as a success on an intentional failure?</p>
|
<python><airflow>
|
2023-03-03 00:46:59
| 1
| 1,481
|
Paul
|
75,622,174
| 18,086,775
|
Select dataframe columns that start with a certain string, plus additional columns
|
<p>I have a dataframe with columns: <code>'Id', 'Category', 'Shop', ....., 'Brandtxsu1', 'Brandxyw2', ...</code></p>
<p>I want to select columns: <code>ID</code>, <code>Category</code>, and start with <code>Brand</code>. I can select the columns that start with <code>Brand</code> using the following code, but how do I select <code>ID</code> and <code>Category</code>?</p>
<pre><code>df[df.columns[pd.Series(df.columns).str.startswith('Brand')]]
</code></pre>
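<p>For reference, a sketch of one way to combine fixed column names with a startswith filter (the sample columns below are assumptions based on the question):</p>

```python
import pandas as pd

df = pd.DataFrame(columns=["Id", "Category", "Shop", "Brandtxsu1", "Brandxyw2"])

# fixed columns first, then every column starting with 'Brand'
cols = ["Id", "Category"] + [c for c in df.columns if c.startswith("Brand")]
subset = df[cols]
print(list(subset.columns))  # ['Id', 'Category', 'Brandtxsu1', 'Brandxyw2']
```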
|
<python><pandas><dataframe>
|
2023-03-03 00:39:18
| 4
| 379
|
M J
|
75,622,123
| 13,067,104
|
How to delete specific print statements in Jupyter Interactive Terminal
|
<p>I am trying to run parallel processes where each process prints a "starting" line when it begins and a "finished" line when it completes. For example:</p>
<pre><code>def process(i):
print('starting process number %s' %i)
... do something ...
print('finished process number %s' %i)
return
</code></pre>
<p>As you may expect, I cannot just delete the previous line whenever a process finishes, since this is a parallel job and multiple "starting" statements are printed before a single "finished" statement appears.</p>
<p>Is there any way I can delete SPECIFIC strings of text from the terminal, so that in the end I am left with a terminal full of "finished" statements?</p>
<p>I am using Jupyter Interactive notebook on Vscode (which may or may not matter)</p>
<p>Thanks!</p>
|
<python><python-3.x><parallel-processing><jupyter>
|
2023-03-03 00:28:40
| 0
| 403
|
patrick7
|
75,622,107
| 14,005,384
|
CSV with double quotes and separated by comma and space
|
<p>I have a .csv file with all values in double quotes, all values separated by a comma and space, and column headers are in the first row:</p>
<pre><code>"Exam date", "Last name", "First name", "DOB", "MRN"
"01/15/2019", "JOHN", "DOE", "01/15/2000", "0000000000"
"01/15/2020", "JANE", "ROE", "01/15/2010", "1111111111"
"01/15/2021", "BABY", "DOE", "01/15/2020", "2222222222"
</code></pre>
<p>I first tried <code>pd.read_csv('file.csv')</code>, but it only reads the first column:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Exam date</th>
<th>Unnamed: 1</th>
<th>Unnamed: 2</th>
<th>Unnamed: 3</th>
<th>Unnamed: 4</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>01/15/2019</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>1</td>
<td>01/15/2019</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>01/15/2019</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>I then tried <code>pd.read_csv('file.csv', sep=',\s', quoting=csv.QUOTE_ALL, engine='python')</code>, but it doesn't separate the columns:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Exam date,"Last name","First name","DOB","MRN"</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>"01/15/2019","JOHN","DOE","01/15/2000"...</td>
</tr>
<tr>
<td>1</td>
<td>"01/15/2020","JANE","ROE","01/15/2010"...</td>
</tr>
<tr>
<td>2</td>
<td>"01/15/2021","BABY","DOE","01/15/2020",...</td>
</tr>
</tbody>
</table>
</div>
<p>How can I import this file properly?</p>
<hr />
<p><strong>Update</strong>: I realized the values are separated by a comma and null character, so I could import the file using <code>pd.read_csv('file.csv', sep=',\0', engine='python')</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Exam date</th>
<th>"Last name"</th>
<th>"First name"</th>
<th>"DOB"</th>
<th>"MRN"</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>"01/15/2019"</td>
<td>"JOHN"</td>
<td>"DOE"</td>
<td>"01/15/2000"</td>
<td>"0000000000"</td>
</tr>
<tr>
<td>1</td>
<td>"01/15/2020"</td>
<td>"JANE"</td>
<td>"ROE"</td>
<td>"01/15/2010"</td>
<td>"1111111111"</td>
</tr>
<tr>
<td>2</td>
<td>"01/15/2021"</td>
<td>"BABY"</td>
<td>"DOE"</td>
<td>"01/15/2020"</td>
<td>"2222222222"</td>
</tr>
</tbody>
</table>
</div>
<p>However, all values (except the first column header) are imported with double quotes. I tried <code>pd.read_csv('file.csv', sep=',\0', quoting=csv.QUOTE_ALL, engine='python')</code>, but the result was the same. How can I get rid of these double quotes?</p>
|
<python><pandas><dataframe><csv>
|
2023-03-03 00:25:09
| 2
| 455
|
hoomant
|
75,622,028
| 3,561
|
Using numpy, what is a good way to index into a matrix using another matrix whose entry values are column numbers?
|
<p>Suppose I want to independently re-order each row of a matrix. Here is an example of that using <code>np.argsort()</code>:</p>
<pre><code>>>> A
array([[88, 44, 77, 33, 77],
[33, 55, 66, 88, 0],
[88, 0, 0, 55, 88],
[ 0, 22, 44, 88, 33],
[33, 33, 77, 66, 66]])
>>> ind = np.argsort(A); ind
array([[3, 1, 2, 4, 0],
[4, 0, 1, 2, 3],
[1, 2, 3, 0, 4],
[0, 1, 4, 2, 3],
[0, 1, 3, 4, 2]])
>>> np.array([A[i][ind[i]] for i in range(ind.shape[0])])
array([[33, 44, 77, 77, 88],
[ 0, 33, 55, 66, 88],
[ 0, 0, 55, 88, 88],
[ 0, 22, 33, 44, 88],
[33, 33, 66, 66, 77]])
</code></pre>
<p>The last expression above (the one that uses <code>range()</code>) is one solution to my problem. My question is: Is there a better way to do this?</p>
<p>The inputs are two matrices like <code>A</code> and <code>ind</code>, both 2-dimensional and of the same size. The matrix <code>A</code> can have any values, and the values of <code>ind</code> are interpreted as column indices within <code>A</code>. The output is a new matrix the same size as <code>ind</code> whose values are from <code>A</code> as per the above expression: <code>np.array([A[i][ind[i]] for i in range(ind.shape[0])])</code>.</p>
<p>Each row of <code>ind</code> corresponds to the same row in <code>A</code>. Entry <code>B[i,j]</code> of the output comes from entry <code>A[i, ind[i, j]]</code> of the input.</p>
<p>Note that <code>ind</code> may have fewer columns than <code>A</code>, and I would like to support that case.</p>
<p>I'm asking because my solution (the given expression) is essentially using a for loop, and maybe numpy can do this more quickly using some internal loop. For my application, speed is important, so it would be nice if I can be time-efficient.</p>
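<p>For reference, a sketch of how the mapping B[i, j] = A[i, ind[i, j]] described above can be expressed without the Python loop, using <code>np.take_along_axis</code> (a suggestion, not part of the original question):</p>

```python
import numpy as np

A = np.array([[88, 44, 77, 33, 77],
              [33, 55, 66, 88,  0]])
ind = np.argsort(A)  # per-row column indices, same shape as A

# B[i, j] = A[i, ind[i, j]], vectorized; ind may also have fewer columns than A
B = np.take_along_axis(A, ind, axis=1)
print(B)
# [[33 44 77 77 88]
#  [ 0 33 55 66 88]]
```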
|
<python><numpy><numpy-slicing><matrix-indexing>
|
2023-03-03 00:10:18
| 1
| 28,924
|
Tyler
|
75,621,922
| 6,403,044
|
Didn't get the expected results when calculating cosine similarity between strings
|
<p>I want to calculate the pairwise cosine similarity between two strings that are in the same row of a pandas data frame.</p>
<p>I used the following lines of codes:</p>
<pre><code>import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
pd.set_option('display.float_format', '{:.4f}'.format)
df = pd.DataFrame({'text1': ['The quick brown fox jumps over the lazy dog', 'The red apple', 'The big blue sky'],
'text2': ['The lazy cat jumps over the brown dog', 'The red apple', 'The big yellow sun']})
vectorizer = CountVectorizer().fit_transform(df['text1'] + ' ' + df['text2'])
cosine_similarities = cosine_similarity(vectorizer)[:, 0:1]
df['cosine_similarity'] = cosine_similarities
print(df)
</code></pre>
<p>It gave me the following output, which seems incorrect:</p>
<p><a href="https://i.sstatic.net/dVOTF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dVOTF.png" alt="enter image description here" /></a></p>
<p>Can anyone help me to figure out what I did incorrectly?</p>
<p>Thank you.</p>
|
<python><pandas><scikit-learn><nlp><cosine-similarity>
|
2023-03-02 23:46:28
| 1
| 1,012
|
student_R123
|
75,621,882
| 5,431,734
|
making a cross-platform environment.yml
|
<p>I have written a Python application in Linux and I want to export the <code>environment.yml</code> file so that users can recreate the environment in either Linux or Windows. The command I run is:</p>
<pre><code>conda env export --from-history > environment.yml
</code></pre>
<p>However a couple of the dependencies shown in this yml file are Linux-specific (not really surprising at all and they are both C/C++ compiler related). For example:</p>
<pre><code> - libgcc
- gcc=12.1.0
</code></pre>
<p>Hence running <code>conda env create -n my_env -f environment.yml</code> from windows will fail.</p>
<p>I was just wondering what you guys typically do in these cases. Shall I go to my Windows computer, create a fresh <code>env</code>, install all the packages by hand one at a time, and once I manage to run my code successfully, create an <code>environment_win.yml</code> and distribute two <code>yml</code> files, one for Windows and another for Linux?</p>
|
<python><conda>
|
2023-03-02 23:39:58
| 0
| 3,725
|
Aenaon
|
75,621,711
| 14,729,820
|
How to update a DataFrame by dropping rows whose image file does not exist, using Pandas
|
<p>I have a dataframe with <em>missing images</em> in the <code>images dir</code>. It contains <code>labels</code> data with 2 columns (<code>file_name,text</code>) after reading the <code>labels.txt</code> file. My working directory looks like:</p>
<pre><code>$ tree
.
└── sample
    ├── labels.txt
    └── imgs
        ├── 0.png
        ├── 3.png
        ├── 4.png
        ├── 5.png
        ├── 6.png
        ├── 7.png
        ├── 8.png
        └── 10.png
</code></pre>
<p>The <code>labels.txt</code> file :</p>
<pre><code>0.jpg Elégedetlenek az emberek a közoktatással? Belföld - Magyarország hírei
1.jpg Szeged - delmagyar.hu Delmagyar.hu 24 óra Szórakozás Sport Programok
2.jpg Állás Ingatlan Hárommilliárdot költenek a ,,boldog békeidőket" idéző Öt
3.jpg órát dolgoztak a Szabadság úszóházon, mire Illatoznak is a
4.jpg kis harangok, de csak közelről érezni - Madonna tavaly
5.jpg még meg tudta akadályozni, idén viszont Egy tálca zsíros
6.jpg kenyér - Milyen gyermekkora volt Belföld - Magyarország hírei
7.jpg Elégedetlenek az emberek a közoktatással? Elégedetlenek az emberek a
8.jpg közoktatással? Független Hírügynökség Az emberek nagyobb része elégedetlen a
9.jpg magyarországi közoktatás minőségével. Sokan nem tartják megfelelően felkészültnek a
10.jpg pedagógusokat és szükségesnek tartanák a tanárok gyakori, elsősorban pszichológiai
</code></pre>
<p>So I wrote a small script to read the text file into a data frame:</p>
<pre><code>import pandas as pd
from pathlib import Path
path = "./sample/"
df = pd.read_csv(f'{path}labels.txt',
header=None,
delimiter=' ',
encoding="utf8",
error_bad_lines=False,
engine='python'
)
df.rename(columns={0: "file_name", 1: "text"}, inplace=True)
print(df.head(11))
</code></pre>
<p>The output after reading:</p>
<p><a href="https://i.sstatic.net/6qIDW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6qIDW.png" alt="enter image description here" /></a></p>
<p>I am trying to keep only the rows that have an image in the image dir; if the file does not exist, drop or skip the row. But after writing the function below, the new dataframe only drops the last row (for a missing image):</p>
<pre><code>def is_dir_exist(filename):
path = "/home/ngyongyossy/mohammad/OCR_HU_Tra2022/GPT-2_Parallel/process/sample/"
path_to_file = f'{path}imgs/'+ filename # df['file_name'][idx] # 'readme.txt'
path = Path(path_to_file)
# print(path.is_file())
return path.is_file()
for idx in range(len(df)):
# print(df['file_name'][idx])
print(is_dir_exist(df['file_name'][idx]))
if not is_dir_exist(df['file_name'][idx]):
update_df = df.drop(df.index[idx])
print(update_df.head(11))
</code></pre>
<p>What I got:</p>
<pre><code>   file_name                                               text
0      0.jpg  Elégedetlenek az emberek a közoktatással? Belf...
1 1.jpg Szeged - delmagyar.hu Delmagyar.hu 24 óra Szór...
2 2.jpg Állás Ingatlan Hárommilliárdot költenek a ,,bo...
3 3.jpg órát dolgoztak a Szabadság úszóházon, mire Ill...
4 4.jpg kis harangok, de csak közelről érezni - Madonn...
5 5.jpg még meg tudta akadályozni, idén viszont Egy tá...
6 6.jpg kenyér - Milyen gyermekkora volt Belföld - Mag...
7 7.jpg Elégedetlenek az emberek a közoktatással? Elég...
8 8.jpg közoktatással? Független Hírügynökség Az ember...
10 10.jpg pedagógusokat és szükségesnek tartanák a tanár...
</code></pre>
<p>But my expectation is to keep only the label rows whose image exists in the folder:</p>
<pre><code> file_name text
0 0.jpg Elégedetlenek az emberek a közoktatással? Belf...
3 3.jpg órát dolgoztak a Szabadság úszóházon, mire Ill...
4 4.jpg kis harangok, de csak közelről érezni - Madonn...
5 5.jpg még meg tudta akadályozni, idén viszont Egy tá...
6 6.jpg kenyér - Milyen gyermekkora volt Belföld - Mag...
7 7.jpg Elégedetlenek az emberek a közoktatással? Elég...
8 8.jpg közoktatással? Független Hírügynökség Az ember...
10 10.jpg pedagógusokat és szükségesnek tartanák a tanár...
</code></pre>
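<p>For reference, a sketch of a mask-based alternative to dropping rows one at a time; the temporary directory below merely simulates the images folder (an assumption for the sake of a runnable example):</p>

```python
import tempfile
from pathlib import Path

import pandas as pd

df = pd.DataFrame({"file_name": ["0.jpg", "1.jpg", "2.jpg"],
                   "text": ["a", "b", "c"]})

# simulate an image directory that contains only 0.jpg and 2.jpg
img_dir = Path(tempfile.mkdtemp())
for name in ("0.jpg", "2.jpg"):
    (img_dir / name).touch()

# build a boolean mask of rows whose image file exists, then filter once
mask = df["file_name"].map(lambda f: (img_dir / f).is_file())
kept = df[mask].reset_index(drop=True)
print(kept["file_name"].tolist())  # ['0.jpg', '2.jpg']
```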
|
<python><pandas><dataframe><text><pytorch>
|
2023-03-02 23:12:10
| 1
| 366
|
Mohammed
|
75,621,693
| 6,672,422
|
Python - How to extract values from different list of dictionaries in the rows of the dataframe
|
<p>I have a data frame <code>df</code> like:</p>
<pre><code> id features
100 [{'city': 'Rio'}, {'destination': '2'}]
110 [{'city': 'Sao Paulo'}]
135 [{'city': 'Recife'}, {'destination': '45'}]
145 [{'city': 'Munich'}, {'destination': '67'}]
167 [{'city': 'Berlin'}, {'latitude':'56'}, {'longitude':'30'}]
</code></pre>
<p>I have to extract the column names and values from the <code>features</code> column into separate columns, like:</p>
<pre><code> id city destination latitude longitude
100 'Rio' '2' NaN NaN
110 'Sao Paulo' NaN NaN NaN
135 'Recife' '45' NaN NaN
145 'Munich' '67' NaN NaN
167 'Berlin' NaN '56' '30'
</code></pre>
<p>I tried two approaches:</p>
<p>1st method to extract:</p>
<pre><code>df = df.explode('features').reset_index(drop = True)
result = pd.concat([df.drop(columns='features'),
pd.json_normalize(df['features'])], axis=1)
</code></pre>
<p>The result contains only the <code>id</code> column.</p>
<p>2nd method:</p>
<pre><code>df = df.explode('features').reset_index(drop = True)
df2 = df.set_index('id')
df2 = df2['features'].astype('str')
df2 = df2.apply(lambda x: ast.literal_eval(x))
df2 = df2.apply(pd.Series)
result = df2.reset_index()
</code></pre>
<p><code>result</code> is very close to what I need:</p>
<pre><code> id city destination latitude longitude
100 'Rio' NaN NaN NaN
100 NaN '2' NaN NaN
110 'Sao Paulo' NaN NaN NaN
135 'Recife' NaN NaN NaN
135 NaN '45' NaN NaN
145 'Munich' NaN NaN NaN
145 'Munich' '67' NaN NaN
167 'Berlin' NaN NaN NaN
167 NaN NaN '56' NaN
167 NaN NaN NaN '30'
</code></pre>
<p>How is it possible to achieve the expected result below?</p>
<pre><code> id city destination latitude longitude
100 'Rio' '2' NaN NaN
110 'Sao Paulo' NaN NaN NaN
135 'Recife' '45' NaN NaN
145 'Munich' '67' NaN NaN
167 'Berlin' NaN '56' '30'
</code></pre>
<p>Thanks</p>
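<p>For reference, a sketch of one possible way to collapse the exploded rows back to one row per id, by taking the first non-null value per column within each group (sample data abbreviated from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id": [100, 110],
    "features": [[{"city": "Rio"}, {"destination": "2"}],
                 [{"city": "Sao Paulo"}]],
})

# one dict per row after explode; normalize the dicts into columns
e = df.explode("features").reset_index(drop=True)
wide = pd.concat([e[["id"]], pd.json_normalize(e["features"].tolist())], axis=1)

# first() takes the first non-null value per column within each id
result = wide.groupby("id", as_index=False).first()
print(result)
```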
|
<python><pandas><group-by><pandas-explode>
|
2023-03-02 23:09:21
| 3
| 568
|
Cindy
|
75,621,689
| 1,039,860
|
How to select single item (not row) from a tkinter treeview and grid
|
<p>I have this code (which I unabashedly stole from here:<a href="https://www.pythontutorial.net/tkinter/tkinter-treeview/" rel="nofollow noreferrer">https://www.pythontutorial.net/tkinter/tkinter-treeview/</a>):</p>
<pre><code>import tkinter as tk
from tkinter import ttk
from tkinter.messagebox import showinfo
root = tk.Tk()
root.title('Treeview demo')
root.geometry('620x200')
# define columns
columns = ('first_name', 'last_name', 'email')
tree = ttk.Treeview(root, columns=columns, show='headings')
# define headings
tree.heading('first_name', text='First Name')
tree.heading('last_name', text='Last Name')
tree.heading('email', text='Email')
# generate sample data
contacts = []
for n in range(1, 100):
contacts.append((f'first {n}', f'last {n}', f'email{n}@example.com'))
# add data to the treeview
for contact in contacts:
tree.insert('', tk.END, values=contact)
def item_selected(event):
for selected_item in tree.selection():
item = tree.item(selected_item)
record = item['values']
# show a message
showinfo(title='Information', message=','.join(record))
tree.bind('<<TreeviewSelect>>', item_selected)
tree.grid(row=0, column=0, sticky='nsew')
# add a scrollbar
scrollbar = ttk.Scrollbar(root, orient=tk.VERTICAL, command=tree.yview)
tree.configure(yscroll=scrollbar.set)
scrollbar.grid(row=0, column=1, sticky='ns')
# run the app
root.mainloop()
</code></pre>
<p>The problem I am having is that clicking on the grid/tree always selects and returns the entire row. I need to know what cell the user clicked on.</p>
|
<python><tkinter><grid><treeview>
|
2023-03-02 23:08:19
| 1
| 1,116
|
jordanthompson
|
75,621,404
| 1,711,271
|
How can I select all columns of a dataframe which partially match strings in a list?
|
<p>Suppose I have a Dataframe like:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'foo': [1, 2, 3], 'bar': [4, 5, 6], 'ber': [7, 8, 9]})
</code></pre>
<p>Given a list of "filter" strings like <code>mylist = ['oo', 'ba']</code>, how can I select all columns in <code>df</code> whose name partially match any of the strings in <code>mylist</code>? For this example, the expected output is <code>{'foo': [1, 2, 3], 'bar': [4, 5, 6]}</code>.</p>
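<p>One approach that should work here is <code>DataFrame.filter</code> with a regex built by joining the patterns. A sketch; note that the entries of <code>mylist</code> are treated as regular expressions, so they would need escaping if they contained special characters:</p>

```python
import pandas as pd

df = pd.DataFrame({'foo': [1, 2, 3], 'bar': [4, 5, 6], 'ber': [7, 8, 9]})
mylist = ['oo', 'ba']

# keep every column whose name matches any of the patterns
matched = df.filter(regex='|'.join(mylist))
print(matched.to_dict('list'))  # {'foo': [1, 2, 3], 'bar': [4, 5, 6]}
```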
|
<python><pandas><string-matching>
|
2023-03-02 22:26:12
| 1
| 5,726
|
DeltaIV
|
75,621,164
| 2,088,027
|
Merge a large dictionary into a single dataframe
|
<p>I have a dictionary.</p>
<pre><code>import pandas as pd
d = {
'A':pd.DataFrame(
{'Age' : [5,5,5],
'Weight' : [5,5,5]}),
'B':pd.DataFrame(
{'Age' : [10,10,10],
'Weight' : [10,10,10]}),
'C':pd.DataFrame(
{'Age' : [7,7,7],
'Weight' : [10,10,100]}),
}
d
</code></pre>
<p>I would like to convert that to a single dataframe.</p>
<pre><code>data = [
['A',5,5],
['A',5,5],
['A',5,5],
['B',10,10],
['B',10,10],
['B',10,10],
['C',7,10],
['C',7,10],
['C',7,100],
]
df = pd.DataFrame(data, columns=['Team', 'Age', 'Weight'])
df
</code></pre>
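<p>A short route that might work: <code>pd.concat</code> accepts a dict directly and turns the keys into an extra index level, which can then be moved back into a column. A sketch with the data above:</p>

```python
import pandas as pd

d = {
    'A': pd.DataFrame({'Age': [5, 5, 5],    'Weight': [5, 5, 5]}),
    'B': pd.DataFrame({'Age': [10, 10, 10], 'Weight': [10, 10, 10]}),
    'C': pd.DataFrame({'Age': [7, 7, 7],    'Weight': [10, 10, 100]}),
}

# dict keys become an index level named 'Team'; move it back into a column
df = (pd.concat(d, names=['Team'])
        .reset_index(level='Team')
        .reset_index(drop=True))
print(df)
```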
|
<python><pandas>
|
2023-03-02 21:51:42
| 4
| 2,014
|
kevin
|
75,621,157
| 500,343
|
Mutable NamedTuple in python
|
<p>I'm looking for a good struct pattern in python. The <code>NamedTuple</code> class constructor is <em>almost</em> what I want. Consider the following:</p>
<pre class="lang-py prettyprint-override"><code>class MyStruct(NamedTuple):
prop1: str
prop2: str
prop3: Optional[str] = "prop3" # should allow default values
prop4: Optional[str] = "prop4"
...
# allow this
struct = MyStruct(prop1="prop1", prop2="prop2")
# allow this (order agnostic)
struct = MyStruct(prop2="prop2", prop1="prop1")
# allow this (supply value to optional properties)
struct = MyStruct(prop1="prop1", prop2="prop2", prop3="hello")
# allow this (access via . syntax)
print(struct.prop3)
# allow this (mutate properties)
struct.prop2 = "hello!"
# don't allow this (missing required properties, prop1 and prop2)
struct = MyStruct()
</code></pre>
<p><code>NamedTuple</code> does everything except the mutability piece. I'm looking for something similar to a <code>Namespace</code> with type hints. I looked at <code>TypedDict</code> as well, but don't like that you can't access properties with <code>struct.prop1</code>, but instead must use <code>struct["prop1"]</code>. I also need to be able to supply default values when the struct is instantiated.</p>
<p>I thought about just a regular class as well, but my object may have many properties (20-30), and the order in which properties are supplied to the constructor should not matter.</p>
<p>I also want type hinting in my editor.</p>
<p>What's the best way to achieve this?</p>
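<p>A <code>dataclass</code> appears to tick every box listed above: mutable attributes, order-agnostic keyword construction, default values, dot access, required-field errors, and editor type hints. A sketch:</p>

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MyStruct:
    prop1: str
    prop2: str
    prop3: Optional[str] = "prop3"   # defaults allowed
    prop4: Optional[str] = "prop4"

struct = MyStruct(prop2="prop2", prop1="prop1")  # order-agnostic kwargs
struct.prop2 = "hello!"                          # mutable
print(struct.prop2, struct.prop3)
```

Calling <code>MyStruct()</code> with missing required fields raises a <code>TypeError</code>, matching the last requirement.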
|
<python><python-3.x><types>
|
2023-03-02 21:50:31
| 2
| 1,907
|
CaptainStiggz
|
75,621,102
| 9,352,430
|
Pyspark dot product of a DF and a dictionary
|
<p>I'd like to have a dot product of a DF (multiple columns) and a dictionary with keys matching some of the column names and values that should be used as weights. Here is an example:</p>
<pre><code>dic = {'SG_actions': 1, 'SO_actions': 2, 'GS_actions': 3}
</code></pre>
<p>df =</p>
<pre><code>__________________________________________________
| Ag | SG_actions | SO_actions | GS_actions |
|____|_____________|_____________|________________|
| 1 | 2 | 0 | 0 |
| 1 | 0 | 1 | 1 |
| 2 | 1 | 1 | 1 |
|____|_____________|_____________|________________|
</code></pre>
<p>Output:</p>
<pre><code>__________________________________________________________
| Ag | SG_actions | SO_actions | GS_actions | New col |
|____|_____________|_____________|_____________|___________|
| 1 | 2 | 0 | 0 | 2 |
| 1 | 0 | 1 | 1 | 5 |
| 2 | 1 | 1 | 1 | 6 |
|____|_____________|_____________|_____________|___________|
</code></pre>
<p><strong>Note</strong>: I'm using Spark 3.1, and I can't upgrade to 3.2 to use spark.pandas library.</p>
<p>This is the code in python:</p>
<pre><code>df["new_col"] = df[list(dic.keys())].dot(pd.Series(dic))
</code></pre>
<p>I need to build a pyspark version.
Thanks for your help!</p>
|
<python><pyspark>
|
2023-03-02 21:43:42
| 1
| 339
|
Sad Vaseb
|
75,620,990
| 8,194,364
|
Running into selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: when trying to click a button
|
<p>I am trying to webscrape Balance Sheet data from Yahoo finance (<a href="https://finance.yahoo.com/quote/MSFT/balance-sheet?p=MSFT" rel="nofollow noreferrer">https://finance.yahoo.com/quote/MSFT/balance-sheet?p=MSFT</a>). Specifically, the Current Assets, Total Non Current Assets, Total Non Current Liabilities Net Minority Interest, and Current Liabilities values. I am able to click on the Total Assets dropdown button and get the values for Current Assets and Non Current assets but I get a <code>selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted:</code> ... <code></button> is not clickable at point (39, 592). Other element would receive the click: <div tabindex="1" class="lightbox-wrapper Ta(c) Pos(f) T(0) Start(0) H(100%) W(100%) Bgc($modalBackground) Ovy(a) Z(50) Op(1)">...</div></code> when I try to click on Total Liabilities Net Minority Interest to get Total Non Current Liabilities Net Minority Interest and Current Liabilities. This is my current approach:</p>
<pre><code># Clicks button based on button text (For example, Current Assets)
def clickButtonByText(driver, url, button_text):
driver.get(url)
button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, f'//button[@aria-label="{button_text}"]'))).click()
time.sleep(10)
soup = BeautifulSoup(driver.page_source, 'html.parser')
return soup
</code></pre>
<pre><code># Assume getDriver() returns a singleton WebDriver
def readBalanceSheetRow():
# This works fine to get the Current Assets and Non Current Assets
html_response = clickButtonByText(getDriver(), 'https://finance.yahoo.com/quote/MSFT/balance-sheet?p=MSFT', 'Total Assets')
# This is where I get selenium.common.exceptions.ElementClickInterceptedException
html_response_two = clickButtonByText(getDriver(), 'https://finance.yahoo.com/quote/MSFT/balance-sheet?p=MSFT', 'Total Liabilities Net Minority Interest')
</code></pre>
<p>Not sure why I am running into an issue when dealing with Total Liabilities Net Minority Interest since both Total Liabilities Net Minority Interest and Total Assets seem to be structured the same way in html. Does anyone know why I am running into this issue?</p>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2023-03-02 21:28:15
| 1
| 359
|
AJ Goudel
|
75,620,896
| 2,908,017
|
How do I get a list of all components on a Form in a Python FMX GUI App
|
<p>I have a <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI App</a> with a couple of components created on a <code>Form</code>:</p>
<pre><code>self.imgDirt = Image(self)
self.btnLoad = Button(self)
self.btnSave = Button(self)
self.memDirt = Memo(self)
self.lblTitle = Label(self)
self.edtTitle = Edit(self)
</code></pre>
<p>How can I get a list of all the components on the Form assuming I didn't know which ones were on the form?</p>
|
<python><user-interface><firemonkey>
|
2023-03-02 21:14:35
| 1
| 4,263
|
Shaun Roselt
|
75,620,822
| 16,363,897
|
Replace values based on column names and other dataframe
|
<p>Let's say we have the following "cities" dataframe with cities as column names:</p>
<pre><code> NY LA Rome London Milan
date
2023-01-01 1 81 26 55 95
2023-01-02 92 42 96 98 7
2023-01-03 14 4 60 88 73
</code></pre>
<p>In another "countries" dataframe I have cities and their countries:</p>
<pre><code> City Country
0 NY US
1 LA US
2 London UK
3 Rome Italy
4 Milan Italy
</code></pre>
<p>I want to change values in the "cities" dataframe, replacing the value of each city with the median value of all cities in the same country on the date. Here's the expected output. For example on 2023-01-01 the value for NY (41) is the median of 1 and 81.</p>
<pre><code> NY LA Rome London Milan
date
2023-01-01 41 41 60.5 55 60.5
2023-01-02 67 67 51.5 98 51.5
2023-01-03 9 9 66.5 88 66.5
</code></pre>
<p>I think I need to use groupby but couldn't make it work. Any help? Thanks.</p>
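<p>One groupby-based approach that might work: map each city column to its country, group the transposed frame by that mapping, and use <code>transform('median')</code> so the result keeps the original shape. Transposing sidesteps the deprecated <code>axis=1</code> groupby. A sketch with the sample data:</p>

```python
import pandas as pd

cities = pd.DataFrame(
    {'NY': [1, 92, 14], 'LA': [81, 42, 4], 'Rome': [26, 96, 60],
     'London': [55, 98, 88], 'Milan': [95, 7, 73]},
    index=pd.to_datetime(['2023-01-01', '2023-01-02', '2023-01-03']))
countries = pd.DataFrame({'City': ['NY', 'LA', 'London', 'Rome', 'Milan'],
                          'Country': ['US', 'US', 'UK', 'Italy', 'Italy']})

# city -> country lookup, aligned on the transposed index (city names)
mapping = countries.set_index('City')['Country']
result = cities.T.groupby(mapping).transform('median').T
print(result)
```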
|
<python><pandas><dataframe>
|
2023-03-02 21:05:28
| 2
| 842
|
younggotti
|
75,620,575
| 18,086,775
|
Check if a substring is present in a string variable that may be set to None
|
<p>I have a string named brand that can be <code>None</code> or contain a value. How can I search for a substring within it when it may be <code>None</code>?</p>
<p>Adding an if-statement to check whether <code>brand</code> is <code>None</code> before searching for the substring works, but is there a better way?</p>
<pre><code>### When the string brand contains a value ###
brand = "Vans.com"
if("Vans" in brand):
print("Y")
else:
print("N")
# output - Y
### When the string brand is None ###
brand = None
if("Vans" in brand):
print("Y")
else:
print("N")
# output - TypeError: argument of type 'NoneType' is not iterable.
### After putting an additional check ###
brand = None
if (brand is not None):
if("Vans" in brand):
print("Y")
else:
print("N")
# No output
</code></pre>
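<p>Two idiomatic one-liners that might be tidier than the nested if: rely on the short-circuiting <code>and</code>, or substitute an empty string for <code>None</code>:</p>

```python
def has_vans(brand):
    # short-circuit: the right-hand side only runs when brand is not None
    return brand is not None and "Vans" in brand

print(has_vans("Vans.com"))  # True
print(has_vans(None))        # False

# alternative: fall back to an empty string, which contains no substring
print("Vans" in ("Vans.com" or ""))  # True
print("Vans" in (None or ""))        # False
```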
|
<python><string>
|
2023-03-02 20:37:00
| 3
| 379
|
M J
|
75,620,559
| 3,878,398
|
passing a csv through an API
|
<p>I'm trying to pass a csv through an api like this:</p>
<p>The csv is as follows:</p>
<pre><code>field_1, field_2, field_3
1, 3, 7
</code></pre>
<p>the push code is this:</p>
<pre><code>with open("sample_csv.csv") as f:
data = {"file": f}
print(data)
headers = {
"Content-Type": "text/csv",
"File-Type": "POS",
"File-Key": "somekey",
}
r = requests.post(endpoint, data=data, headers=headers)
</code></pre>
<p>However, when I read it from a Lambda on the other end I get this:</p>
<pre><code>b'file=field_1%2C+field_2%2C+field_3%0A&file=1%2C+3%2C+7'
</code></pre>
<p>When I run the above string through chardet it tells me it's ASCII, but I don't know how to convert it.</p>
<hr />
<p>edit: lambda function code:</p>
<pre><code>def main(event: dict, context) -> dict:
body = base64.b64decode(event["body"])
print(body)
</code></pre>
|
<python><csv><aws-lambda>
|
2023-03-02 20:35:26
| 1
| 351
|
OctaveParango
|
75,620,520
| 14,790,056
|
How to do some complicated row operations
|
<p>I have the following swap transaction data. You can see that one transaction hash includes many transfers. Here, the contract initiator is the <code>from_address</code> of the row where <code>data = trace</code>, i.e. whoever sends 0 to the pool <code>0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7</code>. I want to filter out the useful information so that, instead of 3 rows per transaction hash, I have one row with the transaction hash, the sender (<code>from_address</code>), block timestamp, value, data, token_address, and <code>swapfor</code> (the token that the sender eventually receives).</p>
<pre><code> transaction_hash block_timestamp from_address to_address value data token_address
10594 0x00016a6fcc4be913b2ba4e33a015f1cb876d22f59d3cbd18cbceb39415d90fe9 2021-10-14 11:28:18 UTC 0x510f3c959ab647681acde24c09b39e252b799dcb 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0.0 trace
1302862 0x00016a6fcc4be913b2ba4e33a015f1cb876d22f59d3cbd18cbceb39415d90fe9 2021-10-14 11:28:18 UTC 0x510f3c959ab647681acde24c09b39e252b799dcb 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 158173.620844 transfer USDC
2094180 0x00016a6fcc4be913b2ba4e33a015f1cb876d22f59d3cbd18cbceb39415d90fe9 2021-10-14 11:28:18 UTC 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0x510f3c959ab647681acde24c09b39e252b799dcb 158116.695638 transfer USDT
120546 0x0001e31fe253f8755a9f67174980221f361b341a64206c44c90e8a08249218a5 2020-11-22 20:20:23 UTC 0xf21ae6c185103b349f57ffb90da58399d30d095f 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0.0 trace
277556 0x0001e31fe253f8755a9f67174980221f361b341a64206c44c90e8a08249218a5 2020-11-22 20:20:23 UTC 0xf21ae6c185103b349f57ffb90da58399d30d095f 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 3400.0 transfer DAI
1521560 0x0001e31fe253f8755a9f67174980221f361b341a64206c44c90e8a08249218a5 2020-11-22 20:20:23 UTC 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0xf21ae6c185103b349f57ffb90da58399d30d095f 3409.370414 transfer USDC
208703 0x000499e9074acc95aa75d43b49119a0260d6a4d116772df7f561b8bd6b6e36d8 2021-04-17 22:33:33 UTC 0xa55e8f346a2e48045f0418aae297aae166f614ce 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0.0 trace
295972 0x000499e9074acc95aa75d43b49119a0260d6a4d116772df7f561b8bd6b6e36d8 2021-04-17 22:33:33 UTC 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0xa55e8f346a2e48045f0418aae297aae166f614ce 5001.212023640602 transfer DAI
1360911 0x000499e9074acc95aa75d43b49119a0260d6a4d116772df7f561b8bd6b6e36d8 2021-04-17 22:33:33 UTC 0xa55e8f346a2e48045f0418aae297aae166f614ce 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 5006.0 transfer USDC
41895 0x0004f6bd95e6d19c6b9c6466055d7d12572f66af007852d59e6e976cdec514ce 2021-06-28 13:56:23 UTC 0xf780db98028aec31a718a25151c98e7e9a5546ba 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0.0 trace
1294707 0x0004f6bd95e6d19c6b9c6466055d7d12572f66af007852d59e6e976cdec514ce 2021-06-28 13:56:23 UTC 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0xf780db98028aec31a718a25151c98e7e9a5546ba 31455.736067 transfer USDC
2088382 0x0004f6bd95e6d19c6b9c6466055d7d12572f66af007852d59e6e976cdec514ce 2021-06-28 13:56:23 UTC 0xf780db98028aec31a718a25151c98e7e9a5546ba 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 31470.496551 transfer USDT
26064 0x0005872abe5b6c9e24505f047f638670f09df43173fc6679901eaf330c3cdc58 2021-01-29 17:07:26 UTC 0x0d080a3c3290c98e755d8123908498bce2c5620d 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0.0 trace
1720288 0x0005872abe5b6c9e24505f047f638670f09df43173fc6679901eaf330c3cdc58 2021-01-29 17:07:26 UTC 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 0x0d080a3c3290c98e755d8123908498bce2c5620d 662073.333519 transfer USDC
1994556 0x0005872abe5b6c9e24505f047f638670f09df43173fc6679901eaf330c3cdc58 2021-01-29 17:07:26 UTC 0x0d080a3c3290c98e755d8123908498bce2c5620d 0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7 660759.862846 transfer USDT
</code></pre>
<p>So the ideal df will be</p>
<pre><code> transaction_hash block_timestamp from_address value data token_address swapfor
0x00016a6fcc4be913b2ba4e33a015f1cb876d22f59d3cbd18cbceb39415d90fe9 2021-10-14 11:28:18 UTC 0x510f3c959ab647681acde24c09b39e252b799dcb 0.0 trace USDC USDT
0x0001e31fe253f8755a9f67174980221f361b341a64206c44c90e8a08249218a5 2020-11-22 20:20:23 UTC 0xf21ae6c185103b349f57ffb90da58399d30d095f 0.0 trace DAI USDC
0x000499e9074acc95aa75d43b49119a0260d6a4d116772df7f561b8bd6b6e36d8 2021-04-17 22:33:33 UTC 0xa55e8f346a2e48045f0418aae297aae166f614ce 0.0 trace USDC DAI
0x0004f6bd95e6d19c6b9c6466055d7d12572f66af007852d59e6e976cdec514ce 2021-06-28 13:56:23 UTC 0xf780db98028aec31a718a25151c98e7e9a5546ba 0.0 trace USDT USDC
0x0005872abe5b6c9e24505f047f638670f09df43173fc6679901eaf330c3cdc58 2021-01-29 17:07:26 UTC 0x0d080a3c3290c98e755d8123908498bce2c5620d 0.0 trace USDT USDC
</code></pre>
<p>Solution (from ChatGPT):</p>
<pre><code>def transform_data(data):
output = []
tx_dict = {}
for _, row in data.iterrows():
transaction_hash = row["transaction_hash"]
if transaction_hash not in tx_dict:
tx_dict[transaction_hash] = {
"transaction_hash": transaction_hash,
"from_address": None,
"block_timestamp": None,
"token_address": None,
"value": None,
"swapfor": None
}
if row["data"] == "trace":
tx_dict[transaction_hash]["from_address"] = row["from_address"]
tx_dict[transaction_hash]["block_timestamp"] = row["block_timestamp"]
elif row["data"] == "transfer":
if row["to_address"] == "0xbebc44782c7db0a1a60cb6fe97d0b483032ff1c7":
tx_dict[transaction_hash]["token_address"] = row["token_address"]
tx_dict[transaction_hash]["value"] = row["value"]
else:
tx_dict[transaction_hash]["swapfor"] = row["token_address"]
if all(tx_dict[transaction_hash][k] is not None for k in tx_dict[transaction_hash]):
output.append(tx_dict[transaction_hash])
tx_dict.pop(transaction_hash)
return output
df_transformed= transform_data(df)
df_transformed= pd.DataFrame(df_transformed)
</code></pre>
|
<python><pandas><dataframe>
|
2023-03-02 20:30:47
| 0
| 654
|
Olive
|
75,620,512
| 7,586,785
|
How can I update the contents of a list, dictionary, or dataframe while making several asynchronous calls?
|
<p>I am building a DataFrame (pandas) that contains some information inside of it. At some point, once the DataFrame is completed, I need to perform some calculations on each DataFrame.</p>
<p>Currently, you can say that is looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>def perform_calculations(df: DataFrame):
calculations = df.text.apply([lambda x: calculate(x)])
df["calculations"] = calculations
return df
</code></pre>
<p>However, though this worked for some time, it has become really slow. The <code>calculate</code> function makes some API requests, and as the size of the DataFrame has grown larger, so has the time it takes to complete this.</p>
<p>The <code>calculate</code> function does not care about the other texts, meaning this work can be parallelized. However, I am unsure how I can update a DataFrame in a parallelized manner. I suppose I don't need to update the DataFrame until the end and can instead collect the information into a list of some sort, then update the DataFrame?</p>
<p>But how would I do this? How can I call <code>calculate</code> asynchronously and collect all of its return values, then update the DataFrame?</p>
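<p>One common pattern is to make <code>calculate</code> a coroutine, fan the calls out with <code>asyncio.gather</code> (which returns results in submission order, so they line up with the rows), and only assign the column at the end. A sketch; the placeholder <code>calculate</code> here just returns the text length, where the real one would await an HTTP call (e.g. via aiohttp):</p>

```python
import asyncio
import pandas as pd

async def calculate(text: str) -> int:
    # stand-in for the real API request
    await asyncio.sleep(0)
    return len(text)

async def perform_calculations(df: pd.DataFrame) -> pd.DataFrame:
    # run all calls concurrently; gather preserves input order
    results = await asyncio.gather(*(calculate(t) for t in df["text"]))
    df["calculations"] = results
    return df

df = pd.DataFrame({"text": ["a", "bb", "ccc"]})
df = asyncio.run(perform_calculations(df))
print(df)
```

For a rate-limited API, an <code>asyncio.Semaphore</code> inside <code>calculate</code> would cap concurrency.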
|
<python><dataframe><asynchronous><python-asyncio><aiohttp>
|
2023-03-02 20:30:16
| 1
| 3,806
|
John Lexus
|
75,620,398
| 14,649,706
|
Live OpenCV window capture (screenshot) on macOS (Darwin) using Python
|
<p>I am following a tutorial on OpenCV and trying to rewrite the following code:
<a href="https://github.com/learncodebygaming/opencv_tutorials/tree/master/005_real_time" rel="nofollow noreferrer">https://github.com/learncodebygaming/opencv_tutorials/tree/master/005_real_time</a></p>
<p>(specifically, the <a href="https://github.com/learncodebygaming/opencv_tutorials/blob/master/005_real_time/windowcapture.py" rel="nofollow noreferrer">windowcapture.py</a> file)</p>
<p>This file uses win32gui, win32ui, win32con to capture a given open window by window name and take a screenshot of it for cv2 processing later down the line.</p>
<p>I have attempted to recreate this functionality using Quartz for macOS using the following example:
<a href="https://stackoverflow.com/a/48030215/14649706">https://stackoverflow.com/a/48030215/14649706</a></p>
<p>So my own version of windowcapture.py looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from Quartz import CGWindowListCopyWindowInfo, kCGNullWindowID, kCGWindowListOptionAll, CGRectNull, CGWindowListCreateImage, kCGWindowImageBoundsIgnoreFraming, kCGWindowListExcludeDesktopElements, CGImageGetDataProvider, CGDataProviderCopyData, CFDataGetBytePtr, CFDataGetLength
import os
from PIL import Image
import cv2 as cv
class WindowCapture:
# properties
window_name = None
window = None
window_id = None
window_width = 0
window_height = 0
# constructor
def __init__(self, given_window_name=None):
if given_window_name is not None:
self.window_name = given_window_name
self.window = self.get_window()
if self.window is None:
raise Exception('Unable to find window: {}'.format(given_window_name))
self.window_id = self.get_window_id()
self.window_width = self.get_window_width()
self.window_height = self.get_window_height()
self.window_x = self.get_window_pos_x()
self.window_y = self.get_window_pos_y()
# determine the window we want to capture
def get_window(self):
windows = CGWindowListCopyWindowInfo(kCGWindowListOptionAll, kCGNullWindowID)
for window in windows:
name = window.get('kCGWindowName', 'Unknown')
if name and self.window_name in name:
return window
return None
def get_window_id(self):
return self.window['kCGWindowNumber']
def get_window_width(self):
return int(self.window['kCGWindowBounds']['Width'])
def get_window_height(self):
return int(self.window['kCGWindowBounds']['Height'])
def get_window_pos_x(self):
return int(self.window['kCGWindowBounds']['X'])
def get_window_pos_y(self):
return int(self.window['kCGWindowBounds']['Y'])
def get_image_from_window(self):
image_filename = 'test-img.png'
# -x mutes sound and -l specifies windowId
os.system('screencapture -x -l %s %s' % (self.window_id, image_filename))
pil_image = Image.open(image_filename)
image_as_numpy_array = np.array(pil_image)
os.remove(image_filename)
image = cv.cvtColor(image_as_numpy_array, cv.COLOR_BGR2RGB)
return image
</code></pre>
<p>My <code>get_image_from_window</code> method here works fine, I am able to use <code>cv.imshow('cv', screenshot)</code> to view it:</p>
<pre class="lang-py prettyprint-override"><code>import cv2 as cv
from time import time
from windowcapture import WindowCapture
# initialize the WindowCapture class
wincap = WindowCapture('Blue Box Clicker')
loop_time = time()
while(True):
# get an updated image of the game
screenshot = wincap.get_image_from_window()
cv.imshow('cv', screenshot)
# debug the loop rate
print('FPS {}'.format(1 / (time() - loop_time)))
loop_time = time()
# press 'q' with the output window focused to exit.
# waits 1 ms every loop to process key presses
if cv.waitKey(1) == ord('q'):
cv.destroyAllWindows()
break
print('Done.')
</code></pre>
<p>But I don't want to save the image locally to then load it again.
I believe this is very inefficient and I would like to achieve the same functionality without actually saving the image file and then opening it.</p>
<p>Similarly to how it is done here (in the GitHub link above):</p>
<pre class="lang-py prettyprint-override"><code> def get_screenshot(self):
# get the window image data
wDC = win32gui.GetWindowDC(self.hwnd)
dcObj = win32ui.CreateDCFromHandle(wDC)
cDC = dcObj.CreateCompatibleDC()
dataBitMap = win32ui.CreateBitmap()
dataBitMap.CreateCompatibleBitmap(dcObj, self.w, self.h)
cDC.SelectObject(dataBitMap)
cDC.BitBlt((0, 0), (self.w, self.h), dcObj, (self.cropped_x, self.cropped_y), win32con.SRCCOPY)
# convert the raw data into a format opencv can read
#dataBitMap.SaveBitmapFile(cDC, 'debug.bmp')
signedIntsArray = dataBitMap.GetBitmapBits(True)
img = np.fromstring(signedIntsArray, dtype='uint8')
img.shape = (self.h, self.w, 4)
# free resources
dcObj.DeleteDC()
cDC.DeleteDC()
win32gui.ReleaseDC(self.hwnd, wDC)
win32gui.DeleteObject(dataBitMap.GetHandle())
# drop the alpha channel, or cv.matchTemplate() will throw an error like:
# error: (-215:Assertion failed) (depth == CV_8U || depth == CV_32F) && type == _templ.type()
# && _img.dims() <= 2 in function 'cv::matchTemplate'
img = img[...,:3]
# make image C_CONTIGUOUS to avoid errors that look like:
# File ... in draw_rectangles
# TypeError: an integer is required (got type tuple)
# see the discussion here:
# https://github.com/opencv/opencv/issues/14866#issuecomment-580207109
img = np.ascontiguousarray(img)
return img
</code></pre>
<p>How can I achieve this using Quartz?</p>
<p>I am on macOS (M1 Pro) and would really like to get this working.</p>
<p>At the moment, this program runs at around 12fps.</p>
<p>The program it is trying to capture is another python program (a simple pygame):</p>
<pre class="lang-py prettyprint-override"><code>import pygame
import random
# Set up the game window
pygame.init()
window_width, window_height = 640, 480
window = pygame.display.set_mode((window_width, window_height))
pygame.display.set_caption("Blue Box Clicker")
# Set up the clock
clock = pygame.time.Clock()
# Set up the game variables
background_color = (0, 0, 0)
box_color = (0, 0, 255)
box_width, box_height = 50, 50
box_x, box_y = 0, 0
# Set up the game loop
running = True
while running:
# Handle events
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
elif event.type == pygame.MOUSEBUTTONDOWN:
mouse_x, mouse_y = pygame.mouse.get_pos()
if box_x <= mouse_x <= box_x + box_width and box_y <= mouse_y <= box_y + box_height:
# Correct click
box_x, box_y = random.randint(
0, window_width - box_width), random.randint(0, window_height - box_height)
# Incorrect click
# Draw the background
window.fill(background_color)
# Draw the box
pygame.draw.rect(window, box_color, (box_x, box_y, box_width, box_height))
# Update the window
pygame.display.update()
# Limit the frame rate
clock.tick(60)
# Clean up
pygame.quit()
</code></pre>
|
<python><opencv><core-graphics><apple-m1><darwin>
|
2023-03-02 20:15:28
| 1
| 1,942
|
nopassport1
|
75,620,269
| 15,569,921
|
Fitted regression line parameters in python
|
<p>How can I get the intercept and the slope of the fitted regression line in <code>qqplot</code>? Here's a small working example, where I want the parameters of the red regression line:</p>
<pre><code>import statsmodels.api as sm
import scipy.stats as stats
import numpy as np
np.random.seed(100)
a = np.random.normal(0, 4, 100)
sm.qqplot(a, stats.norm(loc=5, scale=1), line="r")
</code></pre>
<p><a href="https://i.sstatic.net/Lqq2s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lqq2s.png" alt="enter image description here" /></a></p>
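<p>The red <code>line="r"</code> is a least-squares fit of the sample quantiles against the theoretical quantiles, so its parameters can be recovered by refitting the same kind of quantile pairs. One route is <code>scipy.stats.probplot</code>, which returns the fitted slope, intercept, and correlation directly. A sketch; note scipy's plotting positions differ slightly from statsmodels', so the numbers may not match to the last digit:</p>

```python
import numpy as np
import scipy.stats as stats

np.random.seed(100)
a = np.random.normal(0, 4, 100)

# probplot returns the ordered (theoretical, sample) pairs
# plus a least-squares line fitted through them
(osm, osr), (slope, intercept, r) = stats.probplot(
    a, sparams=(5, 1), dist="norm", fit=True)
print(slope, intercept, r)
```

Here <code>slope</code> is roughly the sample standard deviation divided by the theoretical scale, and <code>intercept</code> is roughly the sample mean minus <code>slope</code> times the theoretical mean of 5.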
|
<python><plot><linear-regression><statsmodels>
|
2023-03-02 19:58:42
| 2
| 390
|
statwoman
|
75,619,993
| 8,084,131
|
How can I convert from 03MAR23 format to yyyy-mm-dd in Python
|
<p>I want to convert from the 03MAR23 format to yyyy-mm-dd in Python. How can I do it?</p>
<p>I tried the code below:</p>
<pre><code>from pyspark.sql.functions import *
df=spark.createDataFrame([["1"]],["id"])
df.select(current_date().alias("current_date"), \
date_format("03MAR23","yyyy-MMM-dd").alias("yyyy-MMM-dd")).show()
</code></pre>
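<p>For a single literal string, the standard-library <code>datetime</code> may be all that's needed; <code>%b</code> matches abbreviated month names case-insensitively. A sketch (in PySpark the analogous call would be <code>to_date(lit("03MAR23"), "ddMMMyy")</code> rather than <code>date_format</code>, which expects an existing date column):</p>

```python
from datetime import datetime

# %d = day, %b = abbreviated month name (case-insensitive), %y = 2-digit year
parsed = datetime.strptime("03MAR23", "%d%b%y")
print(parsed.strftime("%Y-%m-%d"))  # 2023-03-03
```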
|
<python><dataframe><pyspark>
|
2023-03-02 19:25:55
| 2
| 585
|
Data_Insight
|
75,619,953
| 5,108,517
|
Group Pandas DataFrame by multiple columns with connected IDs
|
<p>I have a dataframe which contains several ids in various columns. Some of those IDs are missing or could be duplicated. The goal is to group the DataFrame by those ids and keep properties in a list.</p>
<p>Here is an example of the input:</p>
<pre><code>import pandas as pd
# initialize data of lists.
data = {'first_id': ['id_1', 'id_2', np.nan, 'id_3','id_2'],
'second_id': ['aaa', np.nan, 'aaa', 'bbb', 'ccc'],
'third_id': ['db_01', 'db_02', np.nan, np.nan, np.nan],
'sources':[1,1,2,1,3]
}
# Create DataFrame
df = pd.DataFrame(data)
df
first_id second_id third_id sources
0 id_1 aaa db_01 1
1 id_2 NaN db_02 1
2 NaN aaa NaN 2
3 id_3 bbb NaN 1
4 id_2 ccc NaN 3
</code></pre>
<p>My desired output is:</p>
<pre><code>data = {'first_id': ['id_1', 'id_2','id_3'],
'second_id': ['aaa', 'ccc', 'bbb'],
'third_id': ['db_01', 'db_02', np.nan],
'sources':[[1,2],[1,3],[1]]
}
# Desired output
pd.DataFrame(data)
first_id second_id third_id sources
0 id_1 aaa db_01 [1, 2]
1 id_2 ccc db_02 [1, 3]
2 id_3 bbb NaN [1]
</code></pre>
<p>I put together following function which does some grouping on the ids:</p>
<pre><code>def get_element_from_pandas(col):
"""
Takes first element from list. Used for pandas columns
"""
if col is np.nan or type(col) == float:
return col
else:
if len(col) != 0:
return col[0]
else:
return col
def group_dataframe(df: pd.DataFrame, index_to_group: list, group_columns: list, keep_in_list: list):
"""
    Group pandas dataframe
"""
df = df.groupby(index_to_group, as_index=False)[group_columns].agg(lambda x: [*dict.fromkeys(x)])
for col in group_columns:
# remove nan from lists (faster that when its in agg function)
df[col] = df[col].apply(lambda x: [i for i in x if str(i) != "nan"])
# replace empty by nan
df[col] = df[col].apply(lambda x: np.nan if len(x) == 0 else x)
if col not in keep_in_list:
df[col] = df[col].apply(get_element_from_pandas)
return df
group_dataframe(df, ['first_id'], ['second_id', 'third_id', 'sources'], ['sources'])
# output is this
first_id second_id third_id sources
0 id_1 aaa db_01 [1]
1 id_2 ccc db_02 [1, 3]
2 id_3 bbb NaN [1]
</code></pre>
<p>Using <code>group_dataframe</code> I was able to get the result I wanted; however, I had to apply it multiple times, one set of ids at a time. I believe there is a simpler solution that I am still missing. Thank you.</p>
|
<python><pandas><dataframe><grouping>
|
2023-03-02 19:19:11
| 1
| 3,639
|
0ndre_
|
75,619,925
| 19,381,612
|
How to randomly generate a new list of items without repeating the previously selected item?
|
<p>I'm creating some new algorithms for my project (a food app generator) and I've run into the problem of generating a random new list from a pre-set dictionary of how many times each item should appear throughout the week, while not allowing the previous item to repeat.</p>
<p>The dictionary below specifies each food and the number of times it should be used. The biggest problem I'm running into is that if I've picked <code>Potato, Spaghetti, and Potato again</code>, I'm left with only Rice for two consecutive days, which means I'm now stuck in a permanent while loop.</p>
<pre><code>main = {'Potato': 2, 'Rice': 2, 'Spaghetti': 1}
</code></pre>
<p>I'm really stuck on this and don't know how to solve it. I tried ChatGPT and nothing useful came out of it: everything it provided led to more errors rather than solutions, or to overcomplicated functions of 50+ lines.</p>
<p>Many Thanks in advance.</p>
<p>My Code:</p>
<pre><code>import random as rand

days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
main = {'Potato': 2, 'Rice': 2, 'Spaghetti': 1}
meat = {'Chicken': 2, 'Salmon': 2, 'Beef': 1}
veg = {'Corn & Broccoli': 2, 'Prawn Bit Root Salad': 1, 'Small Corn & Broccoli': 2}
list_of_all_food_items = [main, meat, veg]
before_picked_item = []
days_to_generate_for = 5
def get_random_item(items: dict) -> str:
item = rand.choice(list(items.keys()))
while check_if_item_has_been_used_last(item) == True:
item = rand.choice(list(items.keys()))
before_picked_item.append(item)
items[item] -= 1
if (items[item] == 0):
del items[item]
return item
def check_if_item_has_been_used_last(item: str) -> bool:
if len(before_picked_item) == 0:
return False
if item == before_picked_item[len(before_picked_item) - 1]:
return True
else:
return False
def generate_random_days() -> dict:
days_generated = []
for food_type in list_of_all_food_items:
random_foods = []
for i in range(days_to_generate_for):
random_foods.append(get_random_item(food_type))
if (len(random_foods) == days_to_generate_for):
days_generated.append(random_foods)
print(days_generated)
generate_random_days()
</code></pre>
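<p>One way out of the dead end is to stop retrying blindly and instead pick greedily among the valid candidates, preferring whichever item has the most uses left; that prevents the situation where only one item remains for two consecutive slots. A sketch of the idea (not a drop-in replacement for the code above; the function name and structure are my own):</p>

```python
import random

def schedule(counts: dict, days: int) -> list:
    counts = dict(counts)  # don't mutate the caller's dict
    result = []
    for _ in range(days):
        # anything with uses left, excluding the previous day's pick
        options = [k for k, v in counts.items()
                   if v > 0 and (not result or k != result[-1])]
        if not options:
            raise ValueError("no valid arrangement from this state")
        # prefer the item with the most remaining uses to avoid dead ends
        most = max(counts[k] for k in options)
        result.append(random.choice([k for k in options if counts[k] == most]))
        counts[result[-1]] -= 1
    return result

random.seed(1)
print(schedule({'Potato': 2, 'Rice': 2, 'Spaghetti': 1}, 5))
```

A schedule with no adjacent repeats exists whenever no single count exceeds half the days (rounded up), and this greedy choice finds one in that case.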
|
<python><python-3.x><list><dictionary>
|
2023-03-02 19:16:19
| 1
| 1,124
|
ThisQRequiresASpecialist
|
75,619,881
| 9,536,233
|
Why is my while loop not getting broken when condition is met within try-except statement?
|
<p>I have some code which I would like to run indefinitely unless a specific condition has been met. Within a try-except statement this condition is actually met, so I assumed the while loop would be broken, but apparently this is not the case. I am not sure why it does not break the while loop. Is try-except overruling the while statement? Should I repeat something like <code>test=test.copy()</code> after my try-except? I've boiled it down to the code below for reproducibility.</p>
<pre><code>test = None
while test == None:
    try:
        test = "something changes in this try statement, should break while condition"
        print("even executed this line")
        # MY QUESTION IS WHY THE LOOP IS NOT BROKEN HERE
    except:
        print("this failed")
    print("try except passed, so here we do another thing to test to break the loop")
    test = "after try-except statement changed test"
    print(test)
</code></pre>
<p>prints:</p>
<pre><code>#even executed this line
#try except passed, so here we do another thing to test to break the loop
#after try-except statement changed test
</code></pre>
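<p>My current understanding (please correct me if I'm wrong) is that the while condition is only evaluated at the top of each iteration, never mid-body, so the rest of the iteration always runs; an explicit <code>break</code> would be needed to leave immediately. A minimal sketch of what I mean:</p>

```python
test = None
iterations = 0
while test is None:  # the condition is checked here, once per iteration
    iterations += 1
    try:
        test = "changed inside try"
        # the body keeps running; the condition is NOT re-checked at this point,
        # so a break is the only way to leave the loop immediately
        break
    except Exception:
        print("this failed")
    print("never reached because of the break")

print(test, iterations)
```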
|
<python><while-loop><conditional-statements><try-except>
|
2023-03-02 19:10:22
| 1
| 799
|
Rivered
|
75,619,875
| 5,235,665
|
Dynamically renaming Pandas DataFrame columns based on configurable mapping
|
<p>Python/Pandas here. I have 2 Excel files. In CSV form they look like:</p>
<h4>mappings.xlsx (as a csv)</h4>
<pre><code>Id,Standard,FieldName
1,'Animal','Pet'
2,'Color','Clr'
3,'Food','Snack'
</code></pre>
<h4>info.xlsx (as a csv)</h4>
<pre><code>Pet,Clr,Snack
Dog,Blue,Pizza
Cat,Purple,French Fries
</code></pre>
<p>I want to read these 2 Excel files into Pandas DataFrames, and then use the <code>mappings</code> dataframe to <em>rename</em> the <strong>columns</strong> of the <code>info</code> dataframe:</p>
<pre><code>mappings_data = pd.read_excel('mappings.xlsx', engine="openpyxl")
mappings_df = pd.DataFrame(mappings_data)
info_data = pd.read_excel('info.xlsx', engine="openpyxl")
info_df = pd.DataFrame(info_data)
</code></pre>
<p>At this point the <code>info_df</code> has the following columns:</p>
<pre><code>info_df["Pet"]
info_df["Clr"]
info_df["Snack"]
</code></pre>
<p>But because we have the mappings:</p>
<ul>
<li><code>Pet</code> ==> <code>Animal</code></li>
<li><code>Clr</code> ==> <code>Color</code></li>
<li><code>Snack</code> ==> <code>Food</code></li>
</ul>
<p>I want to end up with an <code>info_df</code> that looks like so:</p>
<pre><code>info_df["Animal"]
info_df["Color"]
info_df["Food"]
</code></pre>
<p>But obviously, everything needs to be dynamic based on the <code>mappings.xlsx</code> file. Can anyone point me in the right direction please?</p>
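<p>In case it helps frame the question, here is the direction I've been attempting (a sketch on in-memory stand-ins; it assumes the mapping columns are named exactly <code>FieldName</code> and <code>Standard</code>, and that the stray quotes in the mapping values get stripped first):</p>

```python
import pandas as pd

# stand-ins for the two spreadsheets read via read_excel
mappings_df = pd.DataFrame({
    "Id": [1, 2, 3],
    "Standard": ["'Animal'", "'Color'", "'Food'"],
    "FieldName": ["'Pet'", "'Clr'", "'Snack'"],
})
info_df = pd.DataFrame({"Pet": ["Dog", "Cat"],
                        "Clr": ["Blue", "Purple"],
                        "Snack": ["Pizza", "French Fries"]})

# strip the literal quotes that came along from the spreadsheet
clean = mappings_df[["FieldName", "Standard"]].apply(lambda s: s.str.strip("'"))
# build {old_name: new_name} dynamically and rename
rename_map = dict(zip(clean["FieldName"], clean["Standard"]))
info_df = info_df.rename(columns=rename_map)
print(list(info_df.columns))
```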
|
<python><excel><pandas>
|
2023-03-02 19:09:37
| 1
| 845
|
hotmeatballsoup
|
75,619,857
| 12,484,740
|
Storing Eigen::Ref from numpy through pybind inside c++ fails
|
<p>I have a possibly stupid question concerning storing a reference to a numpy object in a C++ class. I have the following MVE code:</p>
<pre class="lang-cpp prettyprint-override"><code>template<typename VecType = Eigen::Ref<Eigen::VectorXd>>
struct A
{
    void insert(VecType& val)
    {
        std::cout << "AddressOfValue: " << val.data() << std::endl;
        v.emplace(val);
        std::cout << "AddressOfValueInserted: " << v->get().data() << std::endl;
    }

    VecType& get()
    {
        std::cout << "AddressOfValueInGetter: " << v->get().data() << std::endl;
        return v->get();
    }

    std::optional<std::reference_wrapper<VecType>> v;
};

PYBIND11_MODULE( mod, m )
{
    auto lv2 = pybind11::class_<A<>>(m, "A");
    lv2.def(pybind11::init());
    lv2.def( "insert", [] (A<>& req, Eigen::Ref<Eigen::VectorXd> solVec ) { req.insert( solVec ); } );
    lv2.def( "get", [] (A<>& req ) { return req.get( ); } );
}
</code></pre>
<p>and the following python code</p>
<pre class="lang-py prettyprint-override"><code>a = mod.A()
d = np.zeros(10)
d[0] = 0.1
print('Python address: {}'.format(hex(d.__array_interface__['data'][0])))
a.insert(d)
print('Python addressAfter: {}'.format(hex(d.__array_interface__['data'][0])))
d2 = a.get()
print(len(d2))
print(d2[0])
print('Python addressAfterd: {}'.format(hex(d.__array_interface__['data'][0])))
print('Python addressAfterd2: {}'.format(hex(d2.__array_interface__['data'][0])))
</code></pre>
<p>If I execute this code i get</p>
<pre class="lang-py prettyprint-override"><code>1: AddressOfValue: 0x5607dccbdcd0
1: AddressOfValueInserted: 0x5607dccbdcd0
1: AddressOfValueInGetter: 0x7ffc258a654f
1: Python address: 0x5607dccbdcd0
1: Python addressAfter: 0x5607dccbdcd0
1: 94591762887704
1: 0.0
1: Python addressAfterd: 0x5607dccbdcd0
1: Python addressAfterd2: 0x7ffc258a654f
</code></pre>
<p>So basically I want to store the reference to my numpy array, which gets converted to an <code>Eigen::Ref<Matrix></code>, inside my class.
The Python address and the address of the Eigen::Ref match,
and they still match right after the value is inserted.</p>
<p>But if I use the getter I get some other uninitialized data and the addresses stop matching.
Maybe I misunderstood something fundamentally and there is a better way.
Can someone please give me a hint on how to solve this?
Thanks in advance!</p>
|
<python><c++><eigen3><pybind11>
|
2023-03-02 19:08:01
| 0
| 327
|
rath3t
|
75,619,843
| 2,908,017
|
How to add a right-click context menu to your controls in a Python FMX GUI App?
|
<p>I've made a <code>Form</code> with an <code>Image</code> using the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI Library for Python</a>, but what I want now is a right-click context menu on the image. When I right-click on the image, then it should bring up a context popup menu as you see here in VSCode when I right-click:</p>
<p><a href="https://i.sstatic.net/wmwDI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wmwDI.png" alt="Visual Studio Code Editor Context Menu Right Click" /></a></p>
<p>I have the following code that makes my <code>Form</code> and <code>Image</code>:</p>
<pre><code>import os
from delphifmx import *


class frmMain(Form):

    def __init__(self, owner):
        self.Caption = 'My Form with Image and Context Menu'
        self.Width = 1000
        self.Height = 1000

        self.imgDirt = Image(self)
        self.imgDirt.Parent = self
        self.imgDirt.Align = "Client"
        self.imgDirt.Margins.Top = 40
        self.imgDirt.Margins.Left = 40
        self.imgDirt.Margins.Right = 40
        self.imgDirt.Margins.Bottom = 40
        path = os.path.dirname(os.path.abspath(__file__))
        self.imgDirt.Bitmap.LoadFromFile(path + '\dirt.png')


def main():
    Application.Initialize()
    Application.Title = "My Application"
    Application.MainForm = frmMain(Application)
    Application.MainForm.Show()
    Application.Run()
    Application.MainForm.Destroy()


main()
</code></pre>
<p>I tried doing things like this, but it doesn't work (<code>NameError: name 'ContextMenu' is not defined</code>):</p>
<pre><code>self.cm = ContextMenu(self)
self.cm.Items.Add("Item 1")
self.cm.Items.Add("Item 2")
self.cm.Items.Add("Item 3")
</code></pre>
<p>Same for:</p>
<pre><code>self.cm = PopUpMenu(self)
</code></pre>
<p>How do I do this in FMX for Python, i.e. a simple right-click context popup menu on the Image?</p>
|
<python><user-interface><contextmenu><firemonkey><popupmenu>
|
2023-03-02 19:06:40
| 1
| 4,263
|
Shaun Roselt
|
75,619,739
| 4,149,611
|
Quicker Iteration in python through a big list
|
<p>I am trying to scan a list (list1) of 100,000,000 strings and match it against another list (list2).
List1 can have up to 10 million rows.
If the contents of list2 are in list1, I flag those values in a counter and store the result in a third list. So my lists are somewhat like this:</p>
<p>list1</p>
<pre><code>['My name is ABC and I live in DEF',
'I am trying XYZ method to speed up my LMN problem'
... 100000 rows
]
</code></pre>
<p>list2 ( length 90k )</p>
<pre><code>['ABC','DEF','XYZ','LMN' ......XXX']
</code></pre>
<p>I have converted list1 to a dataframe and list2 to a single joined pattern (to reduce the number of passes).
Updated list2:</p>
<pre><code>['ABC|DEF|XYZ...|XXX']
</code></pre>
<p>My desired output is :</p>
<pre><code>['My name is ABC and I live in DEF',2] ( since I have two matching patterns with list2 )
</code></pre>
<p>I have tried the code below, but it takes a lot of time to iterate through the df and give me the result. Can you please let me know how to make this code faster and what exactly I am doing wrong?</p>
<pre><code>import re

import tqdm
import snowflake.connector
import pandas as pd
import numpy as np

my_list = []
df_list1 = pd.DataFrame({'cola': cola_val})

for row in tqdm.tqdm(df_product_list.values):
    val = row[0]
    match_list = re.findall(SKU_LIST, str(val), re.IGNORECASE)
    my_list.append(str(val) + '~' + str(len(set(match_list))))
</code></pre>
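<p>For reference, this is the vectorized variant I've been experimenting with (a sketch on made-up small data; <code>SKU_LIST</code> here stands in for my real joined pattern). It avoids the Python-level loop by letting pandas apply the regex across the whole column:</p>

```python
import re
import pandas as pd

cola_val = ['My name is ABC and I live in DEF',
            'I am trying XYZ method to speed up my LMN problem']
SKU_LIST = 'ABC|DEF|XYZ|LMN'

df = pd.DataFrame({'cola': cola_val})
# findall per row (vectorized), then count distinct matches per row
matches = df['cola'].str.findall(SKU_LIST, flags=re.IGNORECASE)
df['n_matches'] = matches.apply(lambda m: len(set(m)))
print(df)
```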
|
<python><pandas><numpy>
|
2023-03-02 18:55:00
| 1
| 780
|
Aritra Bhattacharya
|
75,619,729
| 5,235,665
|
Validating Pandas DataFrame columns and cell values
|
<p>I have the following table <code>cmf_data</code> (here's the DDL I used to create it):</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TABLE mydb.cmf_data (
cmf_data_id int IDENTITY(1,1) NOT NULL,
cmf_data_field_name varchar(MAX) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
cmf_data_field_data_type varchar(MAX) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
CONSTRAINT PK__client_m__BC8D4A1B625F6F4F PRIMARY KEY (cmf_data_id)
);
</code></pre>
<p>As you can see, this table stores field names and expected types. Here's an example output from running <code>SELECT * FROM cmf_data;</code>:</p>
<pre class="lang-none prettyprint-override"><code>cmf_data_id|cmf_data_field_name|cmf_data_field_data_type|
-----------+-------------------+------------------------+
1|Foo |float |
2|Baz |float |
3|Fizz |datetime |
4|Buzz |string |
</code></pre>
<p>Here is my Excel spreadsheet, in CSV format:</p>
<pre class="lang-none prettyprint-override"><code>Foo,Baz,Fizz,Buzz
23.5,44.18,'2022-06-18','Hello there',
24.621,7.3,'2023-01-07','How is it going',
149.0,712.0,'2022-02-09','What a nice day',
11.74,101.03,'2021-10-26','Thank you so much'
</code></pre>
<p>The idea is that the <code>cmf_data</code> dictates the expectations my code has on the column types found in the Excel spreadsheet. So if, say, the spreadsheet's <code>Baz</code> column contains a value that Pandas decides is a <code>string</code>, since the <code>cmf_data</code> table has <code>Baz</code> as a <code>float</code>, that should fail validation and raise an appropriate validation-related (or custom) error.</p>
<p>Here's what I am trying to do:</p>
<ol>
<li>Use Pandas to read the <code>cmf_data</code> table into a DataFrame (say, <code>cmf_data_df</code>)</li>
<li>Use Pandas to read the spreadsheet into another DataFrame (say, <code>excel_df</code>)</li>
<li>If the columns in <code>excel_df</code> do not match the rows (based on <code>cmf_field_name</code>) defined in <code>cmf_data_df</code>, raise a validation exception (hence the Excel can't contain any columns not defined in the <code>cmf_data</code> table; and it can't omit any columns either; they must be exact)</li>
<li>Enforce the column types, defined in the <code>cmf_data_df</code>, upon the rows and cells of the <code>excel_df</code>; if any particular row contains a value that cannot be cast/converted or conform to the type expectations on its column, raise a validation exception</li>
</ol>
<p>Hence, if my Excel/CSV looks like it does above, it should validate just fine; the Excel's columns match the expectations of the CMF Data table exactly, and all the rows' cell values match the expectations of the CMF Data table as well.</p>
<p>But if the Excel/CSV looks like:</p>
<pre class="lang-none prettyprint-override"><code>Foo,Baz,Buzz
23.5,44.18,'Hello there',
24.621,7.3,'How is it going',
149.0,712.0,'What a nice day',
11.74,101.03,'Thank you so much'
</code></pre>
<p>We should get a validation exception because the column names do not match what is expected. Similarly, if the Excel/CSV looks like:</p>
<pre class="lang-none prettyprint-override"><code>Foo,Baz,Fizz,Buzz
23.5,44.18,'2022-06-18','Hello there',
24.621,7.3,'2023-01-07','How is it going',
149.0,'I should fail the validation','2022-02-09','What a nice day',
11.74,101.03,'2021-10-26','Thank you so much'
</code></pre>
<p>We should also get a validation exception, because on row 3, the Baz value is a <code>string</code>, which violates its definition in the CMF Data table (expecting it to be a <code>float</code>).</p>
<p>My best attempt thus far is:</p>
<pre class="lang-py prettyprint-override"><code># read cmf_data_df from DB
params = urllib.parse.quote_plus("Driver={ODBC Driver 17 for SQL Server};"
f"Server={host};"
f"Database={database};"
f"uid={uid};pwd={pwd}")
engine = create_engine("mssql+pyodbc:///?odbc_connect={}".format(params), fast_executemany=True)
query = "SELECT * FROM mydb.cmf_data"
result = engine.execute(query)
data = result.fetchall()
col = list(result.keys())
cmf_data_df = pd.DataFrame(columns=col, data=data)
# read excel -- I'm actually pulling the file from S3
# which is what the 'obj['Body'].read()' is for
data = pd.read_excel(io.BytesIO(obj['Body'].read()), engine="openpyxl")
excel_df = pd.DataFrame(data)
# how to validate that the columns in 'excel_df' match rows definitions in 'cmf_data_df' ?
# how to perform row-level validation on all the types in 'excel_df'?
</code></pre>
<p>Can anyone point me in the right direction to piece the whole thing together correctly?</p>
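<p>Here is the rough direction I've been going in so far (a sketch on in-memory stand-ins rather than the real DB/S3 reads; the type names match what my <code>cmf_data</code> table uses):</p>

```python
import pandas as pd

# stand-ins for the DB table and the spreadsheet
cmf_data_df = pd.DataFrame({
    "cmf_data_field_name": ["Foo", "Baz", "Fizz", "Buzz"],
    "cmf_data_field_data_type": ["float", "float", "datetime", "string"],
})
excel_df = pd.DataFrame({
    "Foo": [23.5, 24.621], "Baz": [44.18, 7.3],
    "Fizz": ["2022-06-18", "2023-01-07"],
    "Buzz": ["Hello there", "How is it going"],
})

expected = dict(zip(cmf_data_df["cmf_data_field_name"],
                    cmf_data_df["cmf_data_field_data_type"]))

# step 3: columns must match exactly (no extras, no omissions)
if set(excel_df.columns) != set(expected):
    raise ValueError(f"column mismatch: {set(excel_df.columns) ^ set(expected)}")

# step 4: every cell must be castable to its declared type
casters = {"float": lambda s: pd.to_numeric(s, errors="raise"),
           "datetime": lambda s: pd.to_datetime(s, errors="raise"),
           "string": lambda s: s.astype(str)}
for col, typ in expected.items():
    try:
        casters[typ](excel_df[col])
    except (ValueError, TypeError) as exc:
        raise ValueError(f"column {col!r} failed {typ} validation: {exc}")
print("validation passed")
```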
|
<python><pandas><dataframe><validation><types>
|
2023-03-02 18:53:51
| 3
| 845
|
hotmeatballsoup
|
75,619,513
| 7,168,098
|
changing and setting one level of a multiindex dataframe with a function
|
<p>Assuming a multiindex dataframe as follows</p>
<p><strong>THE (DUMMY) DATA</strong></p>
<pre><code>import pandas as pd
df={('AB30566', 'ACTIVE1', 'A1'): {('2021-01-01', 'PHOTO'): 2,
('2021-01-01', 'QUE'): 8,
('2021-01-01', 'TXR'): 4,
('2022-02-01', 'PHOTO'): 4,
('2022-02-01', 'QUE'): 0,
('2022-02-01', 'TXR'): 1,
('2022-03-01', 'PHOTO'): 9,
('2022-03-01', 'QUE'): 7,
('2022-03-01', 'TXR'): 7},
('CD55DF55', 'ACTIVE2', 'A2'): {('2021-01-01', 'PHOTO'): 1,
('2021-01-01', 'QUE'): 7,
('2021-01-01', 'TXR'): 0,
('2022-02-01', 'PHOTO'): 8,
('2022-02-01', 'QUE'): 8,
('2022-02-01', 'TXR'): 3,
('2022-03-01', 'PHOTO'): 6,
('2022-03-01', 'QUE'): 0,
('2022-03-01', 'TXR'): 7},
('ZT52556', 'UNACTIVE1', 'A3'): {('2021-01-01', 'PHOTO'): 8,
('2021-01-01', 'QUE'): 9,
('2021-01-01', 'TXR'): 3,
('2022-02-01', 'PHOTO'): 5,
('2022-02-01', 'QUE'): 3,
('2022-02-01', 'TXR'): 0,
('2022-03-01', 'PHOTO'): 7,
('2022-03-01', 'QUE'): 0,
('2022-03-01', 'TXR'): 9},
('MIKE90', 'PENSIONER1', 'A4'): {('2021-01-01', 'PHOTO'): 3,
('2021-01-01', 'QUE'): 9,
('2021-01-01', 'TXR'): 8,
('2022-02-01', 'PHOTO'): 3,
('2022-02-01', 'QUE'): 2,
('2022-02-01', 'TXR'): 1,
('2022-03-01', 'PHOTO'): 9,
('2022-03-01', 'QUE'): 0,
('2022-03-01', 'TXR'): 4},
('ZZ00001', 'ACTIVE3', 'A5'): {('2021-01-01', 'PHOTO'): 0,
('2021-01-01', 'QUE'): 2,
('2021-01-01', 'TXR'): 1,
('2022-02-01', 'PHOTO'): 2,
('2022-02-01', 'QUE'): 0,
('2022-02-01', 'TXR'): 8,
('2022-03-01', 'PHOTO'): 5,
('2022-03-01', 'QUE'): 6,
('2022-03-01', 'TXR'): 0}}
</code></pre>
<p>(The real case is much bigger of course)</p>
<p>I need to change the values of the names in level 0 (USERID) based on a function.</p>
<p>I do it in the following way, and this strange result happens:</p>
<p><strong>THE CODE & WRONG SOLUTION</strong></p>
<pre><code>import re

d = pd.DataFrame(df)
d.columns.names = ["USERID", "STATUS", "LEVEL"]

def simple_mask_user_id(userids):
    exam_dict = {userid: ("EX" + str(i).zfill(5) if re.match(r"[A-Z][A-Z][0-9][0-9][0-9][0-9][0-9]", userid) else userid) for i, userid in enumerate(userids)}
    return exam_dict

current_userids = d.columns.get_level_values('USERID').tolist()
dict_mask = simple_mask_user_id(current_userids)
display(d)

new_names = d.columns.get_level_values("USERID").map(dict_mask).tolist()
print(new_names)
d.columns.set_levels(new_names, level=0, inplace=True)
display(d)
</code></pre>
<p>The level USERID of the dataframe should be changed according to the dict:</p>
<pre><code>{'AB30566': 'EX00000', 'CD55DF55': 'CD55DF55', 'ZT52556': 'EX00002', 'MIKE90': 'MIKE90', 'ZZ00001': 'EX00004'}
</code></pre>
<p><strong>THE FAULTY RESULT</strong></p>
<p>I display the df to compare the result before and after.
The columns got mixed up.</p>
<p>MIKE90 and EX00002 get swapped with each other.</p>
<p>In other words, MIKE90 does not end up on top of its corresponding PENSIONER1, A4 levels (MIKE90 itself should not get changed).
You can also see that the list new_names has the correct order.</p>
<p><strong>THE QUESTIONS</strong></p>
<p>Why does this happen?
How do you change one level of the MultiIndex without scrambling the data?</p>
<p><a href="https://i.sstatic.net/nDvyv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nDvyv.png" alt="enter image description here" /></a></p>
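<p>For what it's worth, the only workaround I've found so far (sketch on a small stand-in) is <code>DataFrame.rename</code> with a <code>level</code> argument, which maps each label in place instead of replacing the level's unique values wholesale:</p>

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [("AB30566", "ACTIVE1", "A1"), ("MIKE90", "PENSIONER1", "A4")],
    names=["USERID", "STATUS", "LEVEL"])
d = pd.DataFrame([[1, 2], [3, 4]], columns=cols)

dict_mask = {"AB30566": "EX00000", "MIKE90": "MIKE90"}
# rename maps each label individually, so columns stay aligned with their data
d = d.rename(columns=dict_mask, level=0)
print(d.columns.get_level_values("USERID").tolist())
```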
|
<python><pandas><multi-index>
|
2023-03-02 18:30:12
| 3
| 3,553
|
JFerro
|
75,619,360
| 8,519,380
|
Change scrapy settings via api
|
<p>I use scrapy and scrapyd and send some custom settings via api (with Postman software).
<br>Photo of the request:
<a href="https://i.sstatic.net/lj31M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lj31M.png" alt="enter image description here" /></a></p>
<p>For example, I send the value of <code>start_urls</code> through the api and it works correctly.
<br>Now the problem is that I cannot apply the settings that I send through the api to my crawl.<br>
For example, I send the <code>CONCURRENT_REQUESTS</code> value, <strong>but it is not applied</strong>.
If I could access <code>self</code> inside the <code>update_settings</code> function, the problem would be solved, but that raises an error.<br>
My code:</p>
<pre><code>from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from kavoush.lxmlhtml import LxmlLinkExtractor as LinkExtractor
from kavoush.items import PageLevelItem

my_settings = {}


class PageSpider(CrawlSpider):
    name = 'github'

    def __init__(self, *args, **kwargs):
        self.start_urls = kwargs.get('host_name')
        self.allowed_domains = [self.start_urls]
        my_settings['CONCURRENT_REQUESTS'] = int(kwargs.get('num_con_req'))
        self.logger.info(f'CONCURRENT_REQUESTS? {my_settings}')
        self.rules = (
            Rule(LinkExtractor(allow=(self.start_urls), deny=('\.webp'), unique=True),
                 callback='parse',
                 follow=True),
        )
        super(PageSpider, self).__init__(*args, **kwargs)

    #custom_settings = {
    #    'CONCURRENT_REQUESTS': 4,
    #}

    @classmethod
    def update_settings(cls, settings):
        cls.custom_settings.update(my_settings)
        settings.setdict(cls.custom_settings or {}, priority='spider')

    def parse(self, response):
        loader = ItemLoader(item=PageLevelItem(), response=response)
        loader.add_xpath('page_source_html_lang', "//html/@lang")
        yield loader.load_item()

    def errback_domain(self, failure):
        self.logger.error(repr(failure))
</code></pre>
<p>Expectation:<br>
How can I change the settings through api and Postman?
<br>
I brought <code>CONCURRENT_REQUESTS</code> settings as an example in the above example, in some cases up to 10 settings may need to be changed through api.
<br>
<br>
Update:
<br>
If we remove <code>my_settings = {}</code> and <code>update_settings</code> and the commands are as follows, an error occurs (<code>KeyError: 'CONCURRENT_REQUESTS'</code>) when running <code>scrapyd-deploy</code> because <code>CONCURRENT_REQUESTS</code> does not have a value at that moment.
<br>Part of the above scenario code:</p>
<pre><code>class PageSpider(CrawlSpider):
    name = 'github'

    def __init__(self, *args, **kwargs):
        self.start_urls = kwargs.get('host_name')
        self.allowed_domains = [self.start_urls]
        my_settings['CONCURRENT_REQUESTS'] = int(kwargs.get('num_con_req'))
        self.logger.info(f'CONCURRENT_REQUESTS? {my_settings}')
        self.rules = (
            Rule(LinkExtractor(allow=(self.start_urls), deny=('\.webp'), unique=True),
                 callback='parse',
                 follow=True),
        )
        super(PageSpider, self).__init__(*args, **kwargs)

    custom_settings = {
        'CONCURRENT_REQUESTS': my_settings['CONCURRENT_REQUESTS'],
    }
</code></pre>
<p><br>thanks to everyone</p>
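<p>If it helps, the approach I'm now considering (untested): as far as I can tell from the scrapyd docs, the <code>schedule.json</code> endpoint accepts a repeatable <code>setting</code> argument, so per-crawl settings could be sent straight from Postman without going through <code>update_settings</code> at all. A sketch of building that form body (project/spider names are placeholders):</p>

```python
from urllib.parse import urlencode

# one ("setting", "KEY=VALUE") tuple per Scrapy setting to override;
# POST this form-encoded body to http://localhost:6800/schedule.json
payload = [
    ("project", "myproject"),            # hypothetical project name
    ("spider", "github"),
    ("host_name", "https://example.com"),
    ("setting", "CONCURRENT_REQUESTS=4"),
    ("setting", "DOWNLOAD_DELAY=0.5"),
]
body = urlencode(payload)
print(body)
```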
|
<python><scrapy><web-crawler><scrapyd>
|
2023-03-02 18:13:45
| 2
| 778
|
Sardar
|
75,619,335
| 6,534,818
|
Pandas -- reset a group by counter
|
<p>This solution does not work for me: <a href="https://stackoverflow.com/questions/69982638/count-cumulative-true-value">Count cumulative true Value</a>. I explicitly need to group by <code>id</code> <strong>and</strong> <code>consecBool</code>; that answer assumes a homogeneous data set with no distinct groups.</p>
<p>How can I reset my cumsum counter when a boolean column is False?</p>
<pre><code>df = pd.DataFrame({'id': [1, 1, 1, 1, 1, 1, 1],
'consecDays': [0, 1, 1, 1, 0, 1, 1],
'consecBool': [False, True, True, True, False, True, True],
'expected': [0, 1, 2, 3, 0, 1, 2]})
df['Counter'] = df.groupby(['id', 'consecBool'])['consecDays'].cumsum()
</code></pre>
<p><code>expected</code> is the expected outcome</p>
<pre><code> id consecDays consecBool expected Counter
0 1 0 False 0 0
1 1 1 True 1 1
2 1 1 True 2 2
3 1 1 True 3 3
4 1 0 False 0 0
5 1 1 True 1 4
6 1 1 True 2 5
</code></pre>
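<p>For reference, the closest I've gotten while writing this up (though I'm not sure it's idiomatic) is to derive a block id from the cumulative count of <code>False</code> values, so every <code>False</code> starts a new group, and group on that as well:</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 1, 1, 1, 1],
                   'consecBool': [False, True, True, True, False, True, True]})
# every False starts a new block, so its cumulative count labels the blocks
block = (~df['consecBool']).cumsum()
# cumsum of the 0/1 flags within (id, block) resets at each False
df['Counter'] = df['consecBool'].astype(int).groupby([df['id'], block]).cumsum()
print(df['Counter'].tolist())
```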
|
<python><pandas>
|
2023-03-02 18:09:55
| 1
| 1,859
|
John Stud
|
75,619,328
| 14,159,985
|
How to apply a function to multiple PySpark dataframes in parallel
|
<p>I'm kind of new to pyspark and I'm trying to build a data warehouse with it. Basically I have a lot of dataframes and I need to apply the same function to all of them. I made a simplified version of my code so you can understand what I mean:</p>
<pre><code>df_list = [spark.createDataFrame([(1, "foo"), (2, "bar")], ["id", "name"]),
           spark.createDataFrame([(3, "baz"), (4, "qux")], ["id", "name"])]

# define the function to apply to each DataFrame
def my_function(df):
    # do some processing on the DataFrame
    df = df.filter("id > 1")
    return df

# Create an RDD from the list of DataFrames
rdd = spark.sparkContext.parallelize(df_list)

# Apply the function to each DataFrame in parallel
results = rdd.flatMap(lambda df: my_function(df)).collect()

# Print the modified DataFrames
for df in df_list:
    df.show()
</code></pre>
<p>but for some reason I'm getting the error:</p>
<pre><code>cannot pickle '_thread.RLock' object
</code></pre>
<p>I did some research and I can't understand why I'm getting this error, since, according to what I've found on Google, pyspark dataframes are serializable objects.</p>
<p>I also tried using the foreach and map functions, but those didn't work either.</p>
<p>Can anybody help me here?
Thank you in advance!</p>
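<p>In case it clarifies what I'm after: from what I've read while debugging, DataFrames are driver-side handles (which would explain the pickling error when shipping them through an RDD), so the per-DataFrame parallelism would have to happen on the driver, e.g. with a thread pool, while Spark parallelizes each job internally. A sketch of that shape, with plain lists standing in for my DataFrames:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def my_function(df):
    # placeholder for df.filter("id > 1"); here df is just a list of dicts
    return [row for row in df if row["id"] > 1]

df_list = [[{"id": 1, "name": "foo"}, {"id": 2, "name": "bar"}],
           [{"id": 3, "name": "baz"}, {"id": 4, "name": "qux"}]]

# submit one transformation per "DataFrame" from the driver;
# map preserves the input order of df_list in its results
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(my_function, df_list))
print(results)
```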
|
<python><pyspark><parallel-processing>
|
2023-03-02 18:09:44
| 1
| 338
|
fernando fincatti
|
75,619,222
| 7,200,745
|
How to specify path to a local reposiory in .pre-commit-config.yaml?
|
<p>I have a local package <code>B</code> which depends on another local package <code>A</code>. Both are stored in a company repository: <code>https://mycompany.com/api/pypi/pypi-bld/simple</code>. Therefore, tests of <code>B</code> require <code>A</code>.</p>
<p>I need pre-commit to use both the company repository and the default one.
I use this <code>.pre-commit-config.yaml</code>, with black, pre-commit-hooks, flake8 and unit tests:</p>
<pre><code>repos:
  - repo: https://github.com/psf/black
    some parameters...
  - repo: https://github.com/pre-commit/pre-commit-hooks
    some parameters...
  - repo: https://github.com/pycqa/flake8
    some parameters...
  - repo: local
    hooks:
      - id: unittest
        name: unittest
        entry: python -m unittest discover
        language: python
        language_version: "3.9"
        "types": [python]
        pass_filenames: false
        stages: [commit]
        additional_dependencies: ['--index-url', 'https://<USERNAME>:<PASSWORD>@mycompany.com/api/pypi/pypi-bld/simple',
                                  'requests', 'mylocal-package']
</code></pre>
<p>As you can see, there is credential information in the file, which is dangerous. So, how can I specify it in a safer way? Maybe it is possible to point <code>pre-commit</code> at a custom <code>pip</code> from a <code>virtualenv</code>?</p>
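<p>One direction I'm considering (untested): pre-commit installs each hook environment with pip, and pip honours the standard environment variables, so the credentials could live in the shell or a CI secret store instead of the yaml, e.g.:</p>

```
# set outside .pre-commit-config.yaml, e.g. exported in CI from a secret store
PIP_INDEX_URL="https://$PYPI_USER:$PYPI_PASS@mycompany.com/api/pypi/pypi-bld/simple"
```

<p>with <code>additional_dependencies</code> reduced to just <code>['requests', 'mylocal-package']</code>. I'm not 100% sure pre-commit forwards this variable to its internal pip install, which is part of my question.</p>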
|
<python><pip><virtualenv><pre-commit><pre-commit.com>
|
2023-03-02 17:58:58
| 0
| 444
|
Eugene W.
|
75,619,153
| 21,313,539
|
Bulk Indexing Error in Elastic search Python
|
<p>I am trying to ingest a simple Hello World example into a data stream using the bulk call as shown in the documentation: <a href="https://elasticsearch-py.readthedocs.io/en/v8.6.2/helpers.html#bulk-helpers" rel="nofollow noreferrer">https://elasticsearch-py.readthedocs.io/en/v8.6.2/helpers.html#bulk-helpers</a></p>
<pre><code>Traceback (most recent call last):
File "C:\Users\elastic\Documents\azure_repos\dataingestion\data-ingestion-test-function\data_stream\hello_world.py", line 117, in <module>
bulk(client=client, index='test-data-stream', actions=data)
File "C:\Users\elastic\venv\lib\site-packages\elasticsearch\helpers\actions.py", line 524, in bulk
for ok, item in streaming_bulk(
File "C:\Users\elastic\venv\lib\site-packages\elasticsearch\helpers\actions.py", line 438, in streaming_bulk
for data, (ok, info) in zip(
File "C:\Users\elastic\venv\lib\site-packages\elasticsearch\helpers\actions.py", line 355, in _process_bulk_chunk
yield from gen
File "C:\Users\elastic\venv\lib\site-packages\elasticsearch\helpers\actions.py", line 274, in _process_bulk_chunk_success
raise BulkIndexError(f"{len(errors)} document(s) failed to index.", errors)
elasticsearch.helpers.BulkIndexError: 2 document(s) failed to index.
</code></pre>
<p>I have tried this same code for ingesting into an index and it has worked</p>
<p>This is the code for client</p>
<pre><code>client = Elasticsearch(
"https://xx.xx.x.xx:9200",
basic_auth=("username","password"),
verify_certs=False)
</code></pre>
<p>I have created the ilm policy, index template, component template as shown in this tutorial<br />
<a href="https://opster.com/guides/elasticsearch/data-architecture/elasticsearch-data-streams/" rel="nofollow noreferrer">https://opster.com/guides/elasticsearch/data-architecture/elasticsearch-data-streams/</a></p>
<p>I created this in Kibana, named the data stream test-data-stream, and confirmed that it was created successfully using the Kibana UI.</p>
<p>I was successfully able to ingest data into the data stream via API calls from Postman, but I am having trouble doing it from Python code.</p>
<p>This is what I want to ingest</p>
<pre><code>data = [{"message": "Hello World", "@timestamp": "2023-01-11T11:54:44Z"},
{"message": "Hello World1", "@timestamp": "2023-01-11T11:54:44Z"}]
</code></pre>
<p>I used this code this to ingest</p>
<pre><code>client.indices.delete_data_stream(name='test-data-stream', error_trace=True)
client.indices.create_data_stream(name='test-data-stream', error_trace=True)
bulk(client=client, index='test-data-stream', actions=data)
</code></pre>
<p>If I pass a regular index name in the index parameter, the code works fine, but it doesn't work for the data stream.</p>
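<p>For reference, here is the shape I'm currently trying, based on my reading that data streams only accept <code>create</code> operations (while the bulk helper defaults to <code>index</code>, which data streams reject). The action dicts spell the op type out explicitly:</p>

```python
# data streams only accept "create" ops, so spell out each bulk action
data = [{"message": "Hello World", "@timestamp": "2023-01-11T11:54:44Z"},
        {"message": "Hello World1", "@timestamp": "2023-01-11T11:54:44Z"}]

actions = [{"_op_type": "create",
            "_index": "test-data-stream",
            "_source": doc} for doc in data]
print(actions[0]["_op_type"], actions[0]["_index"])
# then, against a live cluster: bulk(client=client, actions=actions)
```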
|
<python><elasticsearch><indexing><kibana><bulkinsert>
|
2023-03-02 17:53:03
| 1
| 471
|
lat
|
75,619,137
| 7,516,523
|
Get subset of DataFrame where certain columns meet certain criteria
|
<p>How can I cleanly make a subset of my DataFrame, where my desired columns meet certain criteria?</p>
<p>Take the following <strong><code>DataFrame</code></strong> as an example:</p>
<pre><code>df = pd.DataFrame(data=[[0, 1, 2],
[2, 3, 4],
[4, 5, 6],
[6, 7, 8]],
columns= ["a", "b", "c"])
</code></pre>
<p>I have a boolean array that indicates which columns from my <strong><code>DataFrame</code></strong> that I would like to apply a filter:</p>
<pre><code>bool_arr = np.array([True, False, True])
</code></pre>
<p>I also have an array that holds the threshold values for each column. For each column where the <strong><code>bool_arr</code></strong> holds <strong><code>True</code></strong>, The row value needs to be below the threshold value:</p>
<pre><code>thresh_arr = np.array([3, 2, 7])
</code></pre>
<p>The DataFrame subset that should result from my filtering/masking is the following:</p>
<pre><code> "a" "b" "c"
0 0 1 2
1 2 3 4
</code></pre>
<p>I was able to get this DataFrame subset with the following code, but it is not as clean as I would like:</p>
<pre><code>sel_cols = df.columns[bool_arr]
sel_thresh = thresh_arr[bool_arr]
df_masked = df[sel_cols] < sel_thresh
sel_rows = df.index[df_masked.sum(axis=1) == bool_arr.sum()]
df_subset = df.loc[sel_rows]
</code></pre>
<p>Anyone have any ideas on how to make this shorter and cleaner?</p>
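<p>While writing this up I found a shorter form that seems to behave the same on the toy data (sketch): build a single boolean row mask by comparing only the selected columns against their thresholds, then index with it.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(data=[[0, 1, 2], [2, 3, 4], [4, 5, 6], [6, 7, 8]],
                  columns=["a", "b", "c"])
bool_arr = np.array([True, False, True])
thresh_arr = np.array([3, 2, 7])

# keep rows where every selected column sits below its threshold;
# the comparison broadcasts thresh_arr[bool_arr] across the selected columns
mask = (df.loc[:, bool_arr] < thresh_arr[bool_arr]).all(axis=1)
df_subset = df[mask]
print(df_subset)
```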
|
<python><pandas><dataframe><filter><mask>
|
2023-03-02 17:51:51
| 1
| 345
|
Florent H
|
75,619,103
| 5,235,665
|
Filtering S3 shared key (subfolder) for specific file type in boto3
|
<p>Python/boto3 here. I have an S3 bucket with several "subfolders", and in one of these "subfolders" (yes I know there's no such thing in S3, but when you look at the layout below you'll understand) I have several dozen files, 0+ of which will be Excel (XLSX) files. Here's what my bucket looks like:</p>
<pre><code>my_bucket/
Fizz/
Buzz/
Foo/
file1.jpg
file2.jpg
file3.txt
file4.xlsx
file5.pdf
file6.xlsx
file7.png
...etc.
</code></pre>
<p>So for, say, <code>file4.xlsx</code>, the bucket is <code>my_bucket</code> and the key is <code>Foo/file4.xlsx</code> (if I understand S3 properly). For <code>file7.png</code>, the bucket is still <code>my_bucket</code> and its key is <code>Foo/file7.png</code>, etc.</p>
<p>I need to look under this <code>Foo/</code> "subfolder" for any file that ends with a <code>.xlsx</code> extension, and if one exists, do a S3 GetObject on that Excel file. It's fine if <em>no</em> Excels exist, and its fine if multiple Excels exist. I just need to do a GetObject on the first one I find, if one is even there at all.</p>
<p>I <strong>understand</strong> that a typical boto3 invocation for getting an S3 object looks like:</p>
<pre><code>import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my_bucket", Key="Foo/file2.jpg")
</code></pre>
<p>But I'm not sure how to list all the <code>my_bucket/Foo/*</code> contents, filter by the first <code>*.xlsx</code> and do the <code>get_object(...)</code> on that specific file. Can anyone help nudge me in the right direction?</p>
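<p>My best guess so far is something along these lines (untested sketch; I factored the key-picking logic out into a plain function so I could sanity-check it without AWS, and the boto3 part is shown in comments):</p>

```python
def first_xlsx_key(keys, suffix=".xlsx"):
    """Return the first key with the wanted extension, else None."""
    for key in keys:
        if key.lower().endswith(suffix):
            return key
    return None

# with boto3 it would look roughly like this:
# import boto3
# s3 = boto3.client("s3")
# pages = s3.get_paginator("list_objects_v2").paginate(Bucket="my_bucket", Prefix="Foo/")
# keys = (obj["Key"] for page in pages for obj in page.get("Contents", []))
# key = first_xlsx_key(keys)
# if key is not None:
#     obj = s3.get_object(Bucket="my_bucket", Key=key)

print(first_xlsx_key(["Foo/file1.jpg", "Foo/file4.xlsx", "Foo/file6.xlsx"]))
```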
|
<python><amazon-s3><boto3>
|
2023-03-02 17:49:06
| 1
| 845
|
hotmeatballsoup
|
75,619,035
| 3,973,269
|
Three docker containers with python image get mixed up in docker compose
|
<p>I have a docker-compose.yml file in which I have three services, each being a python script that needs to be executed.</p>
<p>Each service has a dockerfile</p>
<ul>
<li>Dockerfile.scriptA</li>
<li>Dockerfile.scriptB</li>
<li>Dockerfile.scriptC</li>
</ul>
<p>In a folder called "python_tools" I have three subfolders:</p>
<ul>
<li>scriptA</li>
<li>scriptB</li>
<li>scriptC</li>
</ul>
<p>with each of them a file: main.py</p>
<p>docker-compose.yml:</p>
<pre><code>version: "3.1"
services:
  scriptA:
    image: python:3.9
    volumes:
      - ./python_tools/logs/scriptA:/opt/app/scriptA/log
    build:
      context: .
      dockerfile: Dockerfile.scriptA
  scriptB:
    image: python:3.9
    volumes:
      - ./python_tools/logs/scriptB:/opt/app/scriptB/log
    build:
      context: .
      dockerfile: Dockerfile.scriptB
  scriptC:
    image: python:3.9
    volumes:
      - ./python_tools/logs/scriptC:/opt/app/scriptC/log
    build:
      context: .
      dockerfile: Dockerfile.scriptC
</code></pre>
<p>and each Dockerfile.scriptX:
(notice here that I start to use X for A,B, and C to prevent repetition where not needed)</p>
<pre><code>FROM python:3.9
COPY ./python_tools/scriptX /opt/app/scriptX
WORKDIR /opt/app/scriptX
RUN pip install -r requirements.txt
ENTRYPOINT [ "python", "-u", "main.py" ]
</code></pre>
<p>Now, the main.py has the following code:</p>
<pre><code>def log(msg):
    print(msg)
    f = open("log/log.txt", "a+")
    f.write(msg + "\r\n")
    f.close()


if __name__ == "__main__":
    log("Starting scriptX")
    while(True):
        do_something()
</code></pre>
<p>The problem is that, once I execute:</p>
<pre><code>docker logs -f python_scriptB-1
</code></pre>
<p>I get:</p>
<pre><code>Starting scriptA
Traceback (most recent call last):
File "/opt/app/scriptA/main.py", line xx, in <module>
log("Starting scriptA")
File "/opt/app/scriptA/main.py", line xx, in log
f = open("log/log.txt", "a+")
FileNotFoundError: [Errno 2] No such file or directory: 'log/log.txt'
</code></pre>
<p>Notice that the container for script B has launched the code from script A.
I'm not sure what causes this. Maybe a caching issue, but I'm not sure how to clear the cache in this case.</p>
<p>So the issue is that somehow, it looks like the wrong script code is launched for a script. So it might very well be that the container of scriptC launches main.py in scriptB etc.</p>
<p>What am I doing wrong here? Is there a general better solution to my container/service setup?</p>
<p>During "docker compose build --no-cache" I already notice that scriptA is being built three times rather than scriptA, scriptB and scriptC</p>
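<p>A likely cause worth checking: all three services declare the same <code>image: python:3.9</code>, so Compose tags each of the three builds with that one name, and whichever build finishes last is the image started for all three services — which would also explain scriptA being built repeatedly. A sketch of the fix is to give each service a unique image name (or to drop <code>image:</code> entirely and let Compose derive one per service); the tag names below are made up:</p>

```yaml
services:
  scriptA:
    image: python_tools_scripta   # unique tag per service, not python:3.9
    build:
      context: .
      dockerfile: Dockerfile.scriptA
  scriptB:
    image: python_tools_scriptb
    build:
      context: .
      dockerfile: Dockerfile.scriptB
```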
|
<python><docker><docker-compose>
|
2023-03-02 17:42:00
| 1
| 569
|
Mart
|
75,618,991
| 9,681,081
|
Azure WebsiteManagementClient for Python: how to get web app by ID?
|
<p>I'm using version 7.0.0 of <code>azure-mgmt-web</code> for Python. I know we can fetch a site with:</p>
<pre class="lang-py prettyprint-override"><code>from azure.mgmt.web import WebSiteManagementClient
web_client = WebSiteManagementClient(...credentials here...)
site = web_client.web_apps.get(RESOURCE_GROUP_NAME, SITE_NAME)
</code></pre>
<p>But is there a way to fetch the same resource by using its full ID? (Which for me looks like: <code>/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/SITE_NAME</code>)</p>
<p>I guess I can parse this ID with Python myself but it feels like something Azure could do more safely. Any idea if this is an available feature?</p>
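<p>For what it's worth, <code>azure-mgmt-core</code> ships a helper for exactly this (<code>azure.mgmt.core.tools.parse_resource_id</code>, if I'm not mistaken), which would be the safer choice. A dependency-free sketch of the same idea, assuming the usual alternating <code>/name/value</code> ARM layout and doing none of the validation the official parser does:</p>

```python
def parse_arm_id(resource_id):
    """Split an ARM resource ID into a {segment_name: value} dict.

    Minimal sketch: assumes alternating /name/value segments and
    does no validation, unlike the official parser.
    """
    parts = resource_id.strip("/").split("/")
    return dict(zip(parts[::2], parts[1::2]))

info = parse_arm_id(
    "/subscriptions/SUB/resourceGroups/RG/providers/Microsoft.Web/sites/SITE"
)
# info["resourceGroups"] -> "RG", info["sites"] -> "SITE", which could then
# feed web_client.web_apps.get(info["resourceGroups"], info["sites"])
```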
|
<python><azure><azure-management-api>
|
2023-03-02 17:37:39
| 1
| 2,273
|
Roméo Després
|
75,618,983
| 5,924,264
|
How to extract only non-empty deques in dict where key-value pair is int - collections.deque pairs
|
<p>I have a <code>dict</code> of <code>collections.deque</code> objects, <code>dqs</code>, where the key is some integer <code>id</code>. I would like to create another <code>dict</code> that only has the entries in <code>dqs</code> where the deque is non-empty.</p>
<p>Is there a quick way to do this without iterating through the deque?</p>
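<p>One observation that makes this cheap: a deque tracks its own length, so truth-testing it is O(1). A dict comprehension therefore only iterates over the dict entries, never the deque contents:</p>

```python
from collections import deque

dqs = {1: deque([10, 20]), 2: deque(), 3: deque([30])}

# Truth-testing a deque is O(1); the deque contents are never walked.
non_empty = {k: dq for k, dq in dqs.items() if dq}
# -> {1: deque([10, 20]), 3: deque([30])}
```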
|
<python><dictionary><vectorization><deque>
|
2023-03-02 17:36:47
| 1
| 2,502
|
roulette01
|
75,618,919
| 2,341,285
|
Build SymPy matrix by alternating rows of two square matrices
|
<p>I have two square NxN matrices in SymPy. Lets say</p>
<pre class="lang-py prettyprint-override"><code>from sympy import Matrix
x = Matrix([[1,2],
[3,4]])
y = Matrix([[5,6],
[7,8]])
</code></pre>
<p>What is the best way of creating a new 2NxN matrix which alternates rows to generate the following:</p>
<pre class="lang-py prettyprint-override"><code>z = Matrix([[1,2],
[5,6],
[3,4],
[7,8]])
</code></pre>
<p>I can do it with lists and loops, but I will need to do this operation with very large matrices so I am trying to find a solution that is more efficient given I know the sizes of the matrices ahead of time.</p>
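<p>The interleaving itself is plain list manipulation, so one sketch is to do it on nested row lists and convert at the boundaries (assuming both matrices have the same number of rows); the SymPy round-trip is indicated in a comment:</p>

```python
def interleave_rows(a_rows, b_rows):
    """Interleave rows of two equally sized nested lists: [a0, b0, a1, b1, ...]."""
    return [row for pair in zip(a_rows, b_rows) for row in pair]

# With SymPy (sketch): z = Matrix(interleave_rows(x.tolist(), y.tolist()))
result = interleave_rows([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(result)  # -> [[1, 2], [5, 6], [3, 4], [7, 8]]
```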
|
<python><matrix><sympy>
|
2023-03-02 17:29:53
| 3
| 609
|
Darko
|
75,618,890
| 2,908,017
|
How to close a Python FMX GUI App with a button click?
|
<p>Is there a built-in function for closing the app via code?</p>
<p>How would I close the app via code instead of clicking on the closing button in the title bar?</p>
<p>I'm using <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI Library for Python</a></p>
|
<python><firemonkey>
|
2023-03-02 17:26:26
| 1
| 4,263
|
Shaun Roselt
|
75,618,743
| 1,169,091
|
Why do the title and x label not print in the graph?
|
<p>This code generates a plot but the title and the x label do not appear.</p>
<pre><code>normalDistribution = np.random.normal(loc = 0.0, scale = 1.0, size = totalPoints)
rows = 4
columns = 5
fig = plt.figure(figsize =(24, 24))
plt.subplots_adjust(wspace=0.3, hspace=0.4) # to adjust the spacing between subplots
plt.subplot(rows, columns,1)
plt.title = "Normal Dist"
plt.xlabel = "Sequence #"
plt.scatter(range(0, len(normalDistribution)), normalDistribution, c='green')
</code></pre>
<p><a href="https://i.sstatic.net/yPg4f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yPg4f.png" alt="Graph" /></a></p>
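<p>The giveaway here is that <code>plt.title = "Normal Dist"</code> and <code>plt.xlabel = "Sequence #"</code> <em>rebind</em> the pyplot functions to strings instead of calling them, so nothing is drawn (and any later <code>plt.title(...)</code> call in the same session would raise <code>TypeError</code>). Calling the functions fixes it — a minimal sketch:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

plt.subplot(4, 5, 1)
plt.title("Normal Dist")   # call the functions ...
plt.xlabel("Sequence #")   # ... instead of assigning strings to them
```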
|
<python><matplotlib>
|
2023-03-02 17:13:11
| 3
| 4,741
|
nicomp
|
75,618,498
| 8,953,248
|
How to test linear independence of boolean array in a pythonic way?
|
<p>The rows of this matrix are <em>not</em> linearly independent, as the first two rows can be added (or XORed) to produce the third:</p>
<pre><code>matrix = [
[ 1, 0, 0, 1 ],
[ 0, 0, 1, 0 ],
[ 1, 0, 1, 1 ],
[ 1, 1, 0, 1 ]
]
</code></pre>
<p>One could do brute-force reduction of the rows, with deeply nested <em>for</em> loops and <em>if</em> conditions and testing for an all zero row, but the resulting code doesn't feel like python.</p>
<p>Without using <em>numpy</em> or other library, what is a pythonic way of testing for independent rows (on much larger matrices)?</p>
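<p>One pythonic route without numpy is to pack each row into an integer bitmask and do Gaussian elimination over GF(2) with XOR: each row either contributes a new pivot bit or reduces to zero, in which case it is a combination of earlier rows. A sketch:</p>

```python
def rows_independent(matrix):
    """True iff the 0/1 rows are linearly independent over GF(2)."""
    pivots = {}  # highest set bit -> reduced row mask
    for row in matrix:
        v = int("".join(map(str, row)), 2)  # row as an integer bitmask
        while v:
            p = v.bit_length() - 1
            if p not in pivots:
                pivots[p] = v  # new pivot: row is independent of earlier ones
                break
            v ^= pivots[p]     # eliminate the leading bit
        if v == 0:
            return False       # row reduced to zero: linearly dependent
    return True

matrix = [[1, 0, 0, 1], [0, 0, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1]]
print(rows_independent(matrix))  # -> False (row 3 = row 1 XOR row 2)
```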
|
<python><algorithm><matrix><independent-set>
|
2023-03-02 16:50:25
| 2
| 618
|
Ray Butterworth
|
75,618,380
| 19,130,803
|
Type hint redis function python
|
<p>I tried different ways to type annotate the redis object and its method as below</p>
<pre><code>foo.py
redis_instance: Any = redis.StrictRedis.from_url(url=REDIS_DB_URL, decode_responses=False)
</code></pre>
<pre><code>config.py
REDIS_DB_URL: str | None = os.environ.get("REDIS_DB_URL")
</code></pre>
<pre><code>.env
REDIS_DB_URL=redis://redis:6379/0
</code></pre>
<p>but I still get this error:</p>
<pre><code> error: No overload variant of "from_url" of "Redis" matches argument types "Optional[str]", "bool" [call-overload]
note: Possible overload variants:
note: def [_StrType] from_url(cls, url: str, *, host: Optional[str] .... a big list
</code></pre>
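<p>The complaint is about the <code>Optional</code>, not about redis itself: <code>os.environ.get</code> returns <code>str | None</code>, while <code>from_url</code> wants <code>str</code>. Narrowing the value before the call removes the need for the <code>Any</code> annotation. A sketch (the <code>setdefault</code> line is only a demo value so the snippet runs standalone):</p>

```python
import os

os.environ.setdefault("REDIS_DB_URL", "redis://redis:6379/0")  # demo value only

REDIS_DB_URL = os.environ.get("REDIS_DB_URL")
if REDIS_DB_URL is None:
    raise RuntimeError("REDIS_DB_URL environment variable is not set")

# After the check, mypy narrows REDIS_DB_URL from Optional[str] to str,
# so a call like
#   redis.StrictRedis.from_url(url=REDIS_DB_URL, decode_responses=False)
# should type-check without annotating the instance as Any.
```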
|
<python><redis>
|
2023-03-02 16:39:47
| 0
| 962
|
winter
|
75,618,364
| 1,302,551
|
Why does ~True = -2 in python?
|
<p>I am completely perplexed. We came across a bug, which we easily fixed, but we are perplexed as to why the value the bug was generating created the output it did. Specifically:</p>
<p>Why does <code>~True</code> equal <code>-2</code> in python?</p>
<pre><code>~True
>> -2
</code></pre>
<p>Shouldn't the bitwise operator <code>~</code> only return binary?</p>
<p>(Python v3.8)</p>
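<p>The two facts at play: <code>bool</code> is a subclass of <code>int</code> (so <code>True</code> is the integer 1 in arithmetic contexts), and <code>~</code> is bitwise NOT on Python's conceptually infinite two's-complement integers, which satisfies <code>~x == -x - 1</code>:</p>

```python
# bool is a subclass of int, so True participates in arithmetic as 1.
assert issubclass(bool, int) and True == 1

# ~ is bitwise NOT on two's-complement ints: ~x == -x - 1 for every int x.
assert ~True == ~1 == -2
assert ~False == ~0 == -1
assert all(~x == -x - 1 for x in range(-5, 5))
```

<p>So <code>~</code> never returns "binary"; it returns another integer, and <code>~1</code> is <code>-2</code>.</p>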
|
<python><python-3.x>
|
2023-03-02 16:38:02
| 2
| 1,356
|
WolVes
|
75,618,005
| 13,696,853
|
wandb — ImportError
|
<p>I've successfully installed wandb on a Linux system, however, I run into an error when I try importing it.</p>
<pre><code>>>> import wandb
[...]
ImportError: [PATH]/.local/lib/python3.8/site-packages/google/protobuf/pyext/_message.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN4absl12lts_2023012512log_internal9kCharNullE
</code></pre>
<p>I've tried reinstalling the package, but it still fails to import.</p>
|
<python><importerror>
|
2023-03-02 16:07:38
| 0
| 321
|
Raiyan Chowdhury
|
75,617,995
| 2,005,869
|
dask client.scatter command hanging
|
<p>I have a 4-band raster image (NAIP imagery) and another raster obtained by the scipy SLIC algorithm containing integers that indicates the segmentation on that image.</p>
<p>The next step in the workflow is calculate statistics for all the pixels in a segment, and there are 300,000+ segments, so I'm trying to parallelize the calculation with Dask.</p>
<p>I first created this function to call for each segment id:</p>
<pre class="lang-py prettyprint-override"><code>def get_features(id):
segment_pixels = img[segments == id]
return segment_features(segment_pixels)
</code></pre>
<p>but when I ran this using Dask bag for just a few segments:</p>
<pre class="lang-py prettyprint-override"><code>import dask.bag as db
b = db.from_sequence(segment_ids[:80], npartitions=4)
b1 = b.map(get_features).compute()
</code></pre>
<p>I could see the memory use was growing rapidly, and realized I was passing the two rasters (<code>img</code> and <code>segments</code>) data to each job, which of course is a terrible pattern.</p>
<p>I read about how <code>client.scatter()</code> can be used to pass objects to the workers in situations like this, so tried this:</p>
<pre class="lang-py prettyprint-override"><code>scattered_img = client.scatter(img, broadcast=True)
scattered_segments = client.scatter(segments, broadcast=True)
def get_features(id, img=scattered_img, segments=scattered_segments):
segment_pixels = img[segments == id]
return segment_features(segment_pixels)
b1 = b.map(get_features).compute()
</code></pre>
<p>but this crashes my session, so what am I doing wrong?</p>
<p>Here is the <a href="https://nbviewer.org/gist/rsignell-usgs/811a70296584922153410fe31b5d04e6" rel="nofollow noreferrer">whole reproducible notebook</a>.</p>
|
<python><dask>
|
2023-03-02 16:06:58
| 1
| 16,655
|
Rich Signell
|
75,617,926
| 9,754,418
|
Failed to launch app from custom SageMaker image: ResourceNotFoundError with UID/GID in AppImageConfig
|
<p>I'm trying to create a <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/studio-byoi-create.html" rel="nofollow noreferrer">custom SageMaker image</a> and launch a kernel from it, because I want to see if I can use black, the python code formatter, in <a href="https://github.com/psf/black" rel="nofollow noreferrer">SageMaker</a> Studio via a custom SageMaker image.</p>
<p>So far, I've been able to attach the image to the SageMaker domain and start launching the kernel from the custom image, following <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/studio-byoi-create.html" rel="nofollow noreferrer">these steps</a>. However, as soon as the notebook opens, it displays this error in the notebook:</p>
<blockquote>
<p>Failed to launch app [black-conda-ml-t3-medium-7123fdb901f81ab5]. ResourceNotFoundError: SageMaker is unable to launch the app using the image [123456789012.dkr.ecr.us-east-1.amazonaws.com/conda-sample@sha256:12345]. Ensure that the UID/GID provided in the AppImageConfig matches the default UID/GID defined in the image. (Context: RequestId: 21234b0f568, TimeStamp: 1677767016.2990377, Date: Thu Mar 2 14:23:36 2023)</p>
</blockquote>
<p>Here are the relevant code snippets:
Dockerfile:</p>
<pre><code>FROM continuumio/miniconda3:4.9.2
COPY environment.yml .
RUN conda env update -f environment.yml --prune
</code></pre>
<p>environment.yml:</p>
<pre><code>name: base
channels:
- conda-forge
dependencies:
- python=3.9
- numpy
- awscli
- boto3
- ipykernel
- black
</code></pre>
<p>and AppImageConfig:</p>
<pre><code>{
"AppImageConfigName": "conda-env-kernel-config",
"KernelGatewayImageConfig": {
"KernelSpecs": [
{
"Name": "python3",
"DisplayName": "Python [conda env: myenv]"
}
],
"FileSystemConfig": {
"MountPath": "/root",
"DefaultUid": 0,
"DefaultGid": 0
}
}
}
</code></pre>
<p>I tried following <a href="https://github.com/aws-samples/sagemaker-studio-custom-image-samples/blob/main/DEVELOPMENT.md" rel="nofollow noreferrer">this troubleshooting guide</a>, but it doesn't seem to address my issues because all of the diagnostics worked alright. For example, when I ran <code>id -u</code> and <code>id -g</code> inside my local container, the results <code>0</code> and <code>0</code> lined up with the AppImageConfig settings of <code>"DefaultUid": 0, "DefaultGid": 0</code>.</p>
|
<python><docker><jupyter><amazon-sagemaker-studio>
|
2023-03-02 16:00:57
| 2
| 1,238
|
Yann Stoneman
|
75,617,865
| 17,034,564
|
OpenAI Chat Completions API error: "InvalidRequestError: Unrecognized request argument supplied: messages"
|
<p>I am currently trying to use OpenAI's most recent model: <code>gpt-3.5-turbo</code>. I am following a very <a href="https://www.youtube.com/watch?v=0l4UDn1p7gM&ab_channel=TinkeringwithDeepLearning%26AI" rel="noreferrer">basic tutorial</a>.</p>
<p>I am working from a Google Colab notebook. I have to make a request for each prompt in a list of prompts, which for the sake of simplicity looks like this:</p>
<pre><code>prompts = ['What are your functionalities?', 'what is the best name for an ice-cream shop?', 'who won the premier league last year?']
</code></pre>
<p>I defined a function to do so:</p>
<pre><code>import openai
# Load your API key from an environment variable or secret management service
openai.api_key = 'my_API'
def get_response(prompts: list, model = "gpt-3.5-turbo"):
responses = []
restart_sequence = "\n"
for item in prompts:
response = openai.Completion.create(
model=model,
messages=[{"role": "user", "content": prompt}],
temperature=0,
max_tokens=20,
top_p=1,
frequency_penalty=0,
presence_penalty=0
)
responses.append(response['choices'][0]['message']['content'])
return responses
</code></pre>
<p>However, when I call <code>responses = get_response(prompts=prompts[0:3])</code> I get the following error:</p>
<pre><code>InvalidRequestError: Unrecognized request argument supplied: messages
</code></pre>
<p>Any suggestions?</p>
<p>Replacing the <code>messages</code> argument with <code>prompt</code> leads to the following error:</p>
<pre><code>InvalidRequestError: [{'role': 'user', 'content': 'What are your functionalities?'}] is valid under each of {'type': 'array', 'minItems': 1, 'items': {'oneOf': [{'type': 'integer'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}]}, 'example': '[1, 1313, 451, {"buffer": "abcdefgh", "shape": [1024], "dtype": "float16"}]'}, {'type': 'array', 'minItems': 1, 'maxItems': 2048, 'items': {'oneOf': [{'type': 'string'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}], 'default': '', 'example': 'This is a test.', 'nullable': False}} - 'prompt'
</code></pre>
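<p>Two things stand out in the quoted code: the loop variable is <code>item</code> while the body references <code>prompt</code> (a <code>NameError</code> waiting to happen), and <code>messages</code> is being sent to the completions endpoint — with the openai library of that era, chat models went through <code>openai.ChatCompletion.create</code> rather than <code>openai.Completion.create</code>. A sketch of the corrected request shape (the network call itself is only indicated in comments):</p>

```python
def build_chat_request(prompt, model="gpt-3.5-turbo", **options):
    """Assemble the payload for a chat-completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **options,
    }

# Inside the loop (note: iterate as `for prompt in prompts`):
#   response = openai.ChatCompletion.create(**build_chat_request(prompt, temperature=0))
#   responses.append(response["choices"][0]["message"]["content"])
req = build_chat_request("What are your functionalities?", temperature=0)
```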
|
<python><openai-api><chatgpt-api>
|
2023-03-02 15:55:35
| 4
| 678
|
corvusMidnight
|
75,617,852
| 7,700,300
|
Weird behaviour in Pyspark dataframe
|
<p>I have the following pyspark dataframe that contains two fields, ID and QUARTER:</p>
<pre class="lang-py prettyprint-override"><code>pandas_df = pd.DataFrame({"ID":[1, 2, 3,4, 5, 3,5,6,3,7,2,6,8,9,1,7,5,1,10],"QUARTER":[1, 1, 1, 1, 1,2,2,2,3,3,3,3,3,4,4,5,5,5,5]})
spark_df = spark.createDataFrame(pandas_df)
spark_df.createOrReplaceTempView('spark_df')
</code></pre>
<p>and I have the following list that contains the number of entries I want from each of the 5 quarters:</p>
<pre class="lang-py prettyprint-override"><code>numbers=[2,1,3,1,2]
</code></pre>
<p>I want to select from each quarter a number of rows equal to the corresponding entry in the list 'numbers'. The <code>ID</code> values should be unique at the end: if I selected an ID in a certain quarter, I should not reselect it again in another quarter.</p>
<p>For that I used the following pyspark code:</p>
<pre class="lang-py prettyprint-override"><code>
quart=1 # the first quarter
liste_unique=[] # an empty list that will contains the unique Id values to compare with
for i in range(0,len(numbers)):
tmp=spark_df.where(spark_df.QUARTER==quart)# select only rows with the chosed quarter
tmp=tmp.where(tmp.ID.isin(liste_unique)==False)# the selected id were not selected before
w = Window().partitionBy(lit('col_count0')).orderBy(lit('col_count0'))#dummy column
df_final=tmp.withColumn("row_num", row_number().over(w)).filter(col("row_num").between(1,numbers[i])) # number of rows needed from the 'numbers list'
df_final=df_final.drop(col("row_num")) # drop the row num column
liste_tempo=df_final.select(['ID']).rdd.map(lambda x : x[0]).collect() # transform the selected id into list
liste_unique.extend(liste_tempo) # extend the list of unique id each time we select new rows from a quarter
df0=df0.union(df_final) # union the empty list each time with the selected data in each quarter
quart=quart+1 #increment the quarter
</code></pre>
<p>df0 is simply an empty dataframe at the beginning. It will contain all the data at the end; it can be declared as follows:</p>
<pre class="lang-py prettyprint-override"><code>spark = SparkSession.builder.appName('Empty_Dataframe').getOrCreate()
# Create an empty schema
columns = StructType([StructField('ID',
StringType(), True),
StructField('QUARTER',
StringType(), True)
])
df0 = spark.createDataFrame(data = [],
schema = columns)
</code></pre>
<p>The code runs without errors, except that I can find duplicate IDs across different quarters, which is not correct. There is also a weird behavior when I try to count the number of unique IDs in the df0 dataframe (in a new, separate cell):</p>
<pre class="lang-py prettyprint-override"><code>print(df0.select('ID').distinct().count())
</code></pre>
<p>It gives a different value on each execution, even though the dataframe is not touched by any other process (this is clearer with a larger dataset than the example). I cannot understand this behavior. I tried to clear the cache and the temporary variables using <code>unpersist(True)</code>, but nothing changed. I suspect that the <code>union</code> function is used wrongly, but I did not find any alternative in pyspark.</p>
|
<python><apache-spark><pyspark><union><pyspark-schema>
|
2023-03-02 15:54:44
| 1
| 325
|
Abdessamad139
|
75,617,816
| 1,443,702
|
Add decorator for python requests get function
|
<p>I was looking for a way to add some sort of decorator that applies to all instances of <code>requests.get</code> being used in any function.</p>
<p>For example,</p>
<pre><code>@my_custom_decorator
def hello():
...
r = requests.get('https://my-api-url')
...
</code></pre>
<p>The <code>my_custom_decorator</code> could then add a common param (or anything else) for all instances of <code>requests.get</code>. One will only need to add the decorator whereever <code>requests.get</code> is being used.</p>
<p>For now, I'm thinking of somehow checking whether the original function contains a call to <code>requests.get</code>, but that seems far from ideal.</p>
<p><strong>Note:</strong> Also, I don't want to change any existing uses of <code>requests.get</code>; hence I'm looking for a better way to achieve this.</p>
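<p>One sketch of this idea is to monkeypatch the module's <code>get</code> attribute for the duration of the decorated function, so no call sites change. The version below is generalized over the HTTP module so it can be demonstrated with a stub; in real code you would pass <code>requests</code>. The <code>api_key</code> name is made up:</p>

```python
import functools
from types import SimpleNamespace

def with_default_params(http, **extra):
    """While the decorated function runs, wrap http.get so every call
    also receives the query params in ``extra``. ``http`` is any object
    or module with a ``get`` attribute (e.g. the requests module)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            original_get = http.get
            def patched_get(url, params=None, **kw):
                return original_get(url, params={**(params or {}), **extra}, **kw)
            http.get = patched_get
            try:
                return func(*args, **kwargs)
            finally:
                http.get = original_get  # always restore the original
        return wrapper
    return decorator

# Demo with a stub instead of the real requests module:
calls = []
fake_requests = SimpleNamespace(get=lambda url, params=None: calls.append((url, params)))

@with_default_params(fake_requests, api_key="hypothetical-key")
def hello():
    fake_requests.get("https://my-api-url", params={"q": 1})

hello()
print(calls)  # -> [('https://my-api-url', {'q': 1, 'api_key': 'hypothetical-key'})]
```

<p>Note this is not thread-safe (the attribute is swapped globally for the call's duration); a session object with default params would be the sturdier design if call sites could change.</p>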
|
<python><http><python-requests>
|
2023-03-02 15:51:22
| 8
| 4,726
|
xan
|
75,617,776
| 1,208,142
|
Best practice with Pandas to calculate differences between dates according to categories
|
<p>I'm trying to sort differences between values grouped by dates and categories in a Pandas DataFrame. At the end, what matters is the name of the categories with the lowest and highest increases between the two dates, and the corresponding increases.</p>
<p>I think my code works, but it looks overcomplicated. I would like to find the best Pandas way (fastest, most standard, most straightforward, etc.) to do it. Here is my code:</p>
<pre><code>import pandas as pd
import numpy as np
# Creation of random data
size = 1_000
df = pd.DataFrame()
df['Borough'] = np.random.choice(['Brooklyn', 'Manhattan', 'Bronx', 'Queens', 'Staten Island'], size)
df['Date'] = pd.to_datetime(np.random.randint(2011, 2021, size), format="%Y")
df['Nbr_permits'] = np.random.randint(0, 300, size)
# Calculation of the sorted differences in the number of permits per boroughs between 2011 and 2020
res = (df[(df['Date'].dt.year == 2020)].groupby('Borough')['Nbr_permits'].sum() - df[(df['Date'].dt.year == 2011)].groupby('Borough')['Nbr_permits'].sum()).sort_values().dropna()
#Lowest progression of nbr_permits between 2011 and 2020:
print(res.idxmin(), res[res.idxmin()])
#Highest progression of nbr_permits between 2011 and 2020:
print(res.idxmax(), res[res.idxmax()])
</code></pre>
<p>Can I do better with Pandas?</p>
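<p>One alternative worth considering, which avoids filtering and grouping the frame twice: pivot once by borough and year, take the column difference, and read the extremes with <code>idxmin</code>/<code>idxmax</code>. The boroughs and counts below are tiny made-up demo data, not the question's random frame:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Borough": ["A", "A", "B", "B"],
    "Date": pd.to_datetime(["2011", "2020", "2011", "2020"], format="%Y"),
    "Nbr_permits": [10, 30, 5, 40],
})

# One pivot replaces the two filtered groupbys.
totals = (df.assign(Year=df["Date"].dt.year)
            .pivot_table(index="Borough", columns="Year",
                         values="Nbr_permits", aggfunc="sum"))
res = (totals[2020] - totals[2011]).dropna()

print(res.idxmin(), res.min())  # lowest progression
print(res.idxmax(), res.max())  # highest progression
```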
|
<python><pandas><dataframe>
|
2023-03-02 15:47:58
| 1
| 5,347
|
Lucien S.
|
75,617,688
| 13,460,543
|
How to select rows in a dataframe based on a string customers list?
|
<p>I have a <strong>first</strong> dataframe containing bills, and in this dataframe a column named <code>CONTENTS</code> contains customer names inside an unformatted, non-standardized string, like this:</p>
<pre><code> NUMBIL DATE CONTENTS AMOUNT
0 858 01/01/23 Billed to HENRY 25$
1 863 01/01/23 VIKTOR 96$
2 870 01/01/23 Regard to ALEX 13$
3 871 07/01/23 MARK 01* 96$
4 872 07/01/23 To charge SAMANTHA every Thursday 96$
5 880 08/01/23 VIKTOR LECOMTE 13$
6 881 08/01/23 **** 13$
</code></pre>
<p>I have a <strong>second</strong> dataframe consisting of a short list of customer names, like this:</p>
<pre><code> CUSTOMERS
0 VIKTOR
1 ALEX
2 SAMANTHA
</code></pre>
<p><strong>What I would like to do</strong></p>
<p>Based on the customers list, identify rows in the first dataframe that do not contain any customer name in the <code>CONTENTS</code> column.</p>
<p>In our case resulting dataframe would be :</p>
<pre><code> NUMBIL DATE CONTENTS AMOUNT
0 858 01/01/23 Billed to HENRY 25$
3 871 07/01/23 MARK 01* 96$
6 881 08/01/23 **** 13$
</code></pre>
<p>I have already found a possible solution to my problem, but I think this topic could be useful to the community, and I would like to know how you would handle this.</p>
<p><strong>Dataframe to start with</strong></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
fct = pd.DataFrame({'NUMBIL':[858, 863, 870, 871, 872, 880, 881],
'DATE':['01/01/23', '01/01/23', '01/01/23', '07/01/23', '07/01/23', '08/01/23', '08/01/23'],
'CONTENTS':['Billed to HENRY', 'VIKTOR', 'Regard to ALEX', 'MARK 01*',
'To charge SAMANTHA every Thursday', 'VIKTOR LECOMTE', '****'],
'AMOUNT':['25$', '96$', '13$', '96$', '96$', '13$', '13$'],
})
cust = pd.DataFrame({'CUSTOMERS':['VIKTOR', 'ALEX', 'SAMANTHA'],
})
</code></pre>
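<p>One compact approach is to join the customer list into a single alternation regex and filter with the negation of <code>str.contains</code>; <code>re.escape</code> guards against names that happen to contain regex metacharacters. A sketch (columns trimmed to the two that matter):</p>

```python
import re
import pandas as pd

fct = pd.DataFrame({
    "NUMBIL": [858, 863, 870, 871, 872, 880, 881],
    "CONTENTS": ["Billed to HENRY", "VIKTOR", "Regard to ALEX", "MARK 01*",
                 "To charge SAMANTHA every Thursday", "VIKTOR LECOMTE", "****"],
})
cust = pd.DataFrame({"CUSTOMERS": ["VIKTOR", "ALEX", "SAMANTHA"]})

# One alternation regex from the customer list: VIKTOR|ALEX|SAMANTHA
pattern = "|".join(map(re.escape, cust["CUSTOMERS"]))
no_customer = fct[~fct["CONTENTS"].str.contains(pattern, regex=True)]
print(no_customer["NUMBIL"].tolist())  # -> [858, 871, 881]
```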
|
<python><pandas><dataframe><csv>
|
2023-03-02 15:40:39
| 3
| 2,303
|
Laurent B.
|
75,617,532
| 14,735,451
|
How to replace all string patterns except for one?
|
<p>I have a string and a pattern that I'm trying to replace:</p>
<pre><code>my_string = "this is my string, it has [012] numbers and [1123] other things, like [2] cookies"
# pattern = all numbers between the brackets, and the brackets
</code></pre>
<p>I want to replace all of those patterns except for one with some other pattern:</p>
<pre><code>new_pattern = "_new_pattern_"
</code></pre>
<p>And I need to do this <code>N</code> number of times, where <code>N</code> is the number of times the pattern appears (in this case 3).</p>
<p>I know I can replace all of such pattern using regex:</p>
<pre><code>import re
re.sub(r'\[\d+\]', new_pattern, my_string)
</code></pre>
<p>But I don't know how to do it for all patterns except for one.</p>
<pre><code>Examples:
#1
my_string = "this is my string, it has [012] numbers and [1123] other things, like [2] cookies"
expected_output = [
"this is my string, it has [012] numbers and new_pattern other things, like new_pattern cookies",
"this is my string, it has new_pattern numbers and [1123] other things, like new_pattern cookies",
"this is my string, it has new_pattern numbers and new_pattern other things, like [2] cookies"
]
#2
my_string = "this is my string"
expected_output = ["this is my string"]
#3
my_string = "this is my string [111]"
expected_output = ["this is my string [111]"]
#4
my_string = "this is my string [111] and this [111]"
expected_output = ["this is my string [111] and this new_pattern",
"this is my string new_pattern and this [111]"]
</code></pre>
<p>To clarify, I want to do it for all matches except for one of them, <code>N</code> times (so if there are <code>N</code> matches, I want to make <code>N-1</code> replacements, in all possible variations)</p>
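<p>A sketch of one way: enumerate the matches with <code>re.finditer</code>, then for each match build a variant via a replacement callback that spares exactly that one span. This reproduces examples #2–#4 above (the question's example #1 mixes <code>new_pattern</code> and <code>_new_pattern_</code>; the sketch uses the literal from the expected outputs):</p>

```python
import re

def replace_all_but_one(text, pattern, repl):
    """For N matches, return N variants, each keeping exactly one match."""
    matches = list(re.finditer(pattern, text))
    if len(matches) <= 1:
        return [text]  # 0 or 1 matches: nothing to vary
    variants = []
    for keep in matches:
        # keep=keep pins the loop variable so each callback spares its own span
        def sub(m, keep=keep):
            return m.group(0) if m.span() == keep.span() else repl
        variants.append(re.sub(pattern, sub, text))
    return variants

out = replace_all_but_one(
    "this is my string [111] and this [111]", r"\[\d+\]", "new_pattern"
)
print(out)
# -> ['this is my string [111] and this new_pattern',
#     'this is my string new_pattern and this [111]']
```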
|
<python><string><replace>
|
2023-03-02 15:27:16
| 2
| 2,641
|
Penguin
|
75,617,527
| 6,439,229
|
Why do similar signal/slot connections behave differently after Pyqt5 to PySide6 port
|
<p>I'm porting a pyqt5 GUI to pyside6 and I'm running into an issue I don't understand.</p>
<p>I have two QSpinboxes that control two GUI parameters:
One spinbox controls the handle width of a splitter,
the other controls the row height in a QTableView.</p>
<p>This is the code in pyqt5:</p>
<pre><code>spinbox1.valueChanged.connect(my_splitter.setHandleWidth)
spinbox2.valueChanged.connect(my_view.verticalHeader().setDefaultSectionSize)
</code></pre>
<p>In Pyside6 spinbox1 works fine, but spinbox2 doesn't do its job and there is this warning:</p>
<blockquote>
<p>You can't add dynamic slots on an object originated from C++.</p>
</blockquote>
<p>The issue can be solved by changing the second line of code to:</p>
<pre><code>spinbox2.valueChanged.connect(lambda x: my_view.verticalHeader().setDefaultSectionSize(x))
</code></pre>
<p>It's nice to have found a solution, but I would also like to understand why the two connections behave differently in PySide6 and why using the lambda solves the issue.</p>
<p>The warning message probably holds a clue but I have no idea what dynamic slots are (and a quick google didn't help me much).</p>
<p><strong>Edit:</strong>
Since I was changing two things (Qt5 > Qt6, and PyQt > PySide),
I looked at this in 4 Python wrappers (PyQt5, PyQt6, PySide2, PySide6) to see which of the changes caused the issue.
Both PySide2 and PySide6 show this behaviour, and neither of the PyQts does.</p>
|
<python><qt><pyqt5><pyside6>
|
2023-03-02 15:27:01
| 2
| 1,016
|
mahkitah
|
75,617,516
| 2,947,469
|
Numba how to use dict in a class
|
<p>As specified below, the dict should have 2-tuples of ints as keys and ints as values.</p>
<pre><code>from numba.experimental import jitclass
import numba
@jitclass({'shape': numba.types.Tuple((numba.int32, numba.int32)), 'dict': numba.types.DictType(numba.types.UniTuple(numba.types.int32, 2), numba.int32)})
class BigramCounts:
def __init__(self, shape: tuple[int, int]):
self.shape = shape
self.dict = {} # this does not work
# the following does not work either
# self.dict = numba.typed.Dict.empty(key_type=numba.types.UniTuple(numba.types.int32, 2), value_type=numba.int32)
b_c = BigramCounts((2, 3))
</code></pre>
<p>Unfortunately plain <code>self.dict = {}</code> initialization does not work:</p>
<pre><code>numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Failed in nopython mode pipeline (step: nopython frontend)
Cannot infer the type of variable '$8build_map.2' (temporary variable), have imprecise type: DictType[undefined,undefined]<iv={}>.
File "scratch_3.py", line 8:
def __init__(self, shape: tuple[int, int]):
<source elided>
self.shape = shape
self.dict = {} # this does not work
^
During: resolving callee type: jitclass.BigramCounts#1055192d0<shape:UniTuple(int32 x 2),dict:DictType[UniTuple(int32 x 2),int32]<iv=None>>
During: typing of call at <string> (3)
During: resolving callee type: jitclass.BigramCounts#1055192d0<shape:UniTuple(int32 x 2),dict:DictType[UniTuple(int32 x 2),int32]<iv=None>>
During: typing of call at <string> (3)
File "<string>", line 3:
<source missing, REPL/exec in use?>
</code></pre>
<p>The second initialization does not work either (how come numba.types does not support classes?):</p>
<pre><code>numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Failed in nopython mode pipeline (step: nopython frontend) Invalid use of <class 'numba.core.types.containers.UniTuple'> with parameters (class(int32), Literal[int](2)) No type info available for <class 'numba.core.types.containers.UniTuple'> as a callable. During: resolving callee type: typeref[<class 'numba.core.types.containers.UniTuple'>] During: typing of call at /Users/adam/Library/Application Support/JetBrains/PyCharm2022.3/scratches/scratch_3.py (11)
File "scratch_3.py", line 11:
def __init__(self, shape: tuple[int, int]):
<source elided>
# the following does not work either
self.dict = numba.typed.Dict.empty(key_type=numba.types.UniTuple(numba.types.int32, 2), value_type=numba.int32)
^
During: resolving callee type: jitclass.BigramCounts#11270d5a0<shape:UniTuple(int32 x 2),dict:DictType[UniTuple(int32 x 2),int32]<iv=None>> During: typing of call at <string> (3)
During: resolving callee type: jitclass.BigramCounts#11270d5a0<shape:UniTuple(int32 x 2),dict:DictType[UniTuple(int32 x 2),int32]<iv=None>> During: typing of call at <string> (3)
File "<string>", line 3: <source missing, REPL/exec in use?>
</code></pre>
|
<python><numba>
|
2023-03-02 15:25:55
| 1
| 1,827
|
Adam
|
75,617,502
| 13,174,189
|
How to turn list into nested in a specific way?
|
<p>I have a list: <code>["a", "b", "c", "d", "e", "f", "g"]</code>. I want to make it nested and put in sublists of two consecutive values from the given list. if there are no two values left, then one value should go to the sublist. so desired result is: <code>[["a", "b"], ["c", "d"], ["e", "f"], ["g"]]</code>.</p>
<p>I tried this:</p>
<pre><code>original_list = ["a", "b", "c", "d", "e", "f", "g"]
nested_list = []
for i in range(0, len(original_list), 2):
sublist = original_list[i:i+2]
nested_list.append(sublist)
if len(original_list) % 2 != 0:
nested_list[-2].append(nested_list[-1][0])
nested_list.pop()
print(nested_list)
</code></pre>
<p>and the output was correct: <code>[['a', 'b'], ['c', 'd'], ['e', 'f', 'g']]</code>.</p>
<p>But for this example:</p>
<pre><code>original_list = ["a", "b", "c"]
nested_list = []
for i in range(0, len(original_list), 2):
sublist = original_list[i:i+2]
nested_list.append(sublist)
if len(original_list) % 2 != 0:
nested_list[-2].append(nested_list[-1][0])
nested_list.pop()
print(nested_list)
</code></pre>
<p>the output is <code>[['a', 'b', 'c']]</code> instead of <code>[['a', 'b'], ['c']]</code></p>
<p>How could I fix it?</p>
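<p>Worth noting: the <code>if</code> block after the loop is what folds the leftover element back into the previous sublist — for the desired shape stated at the top (where <code>["g"]</code> stays alone), plain slicing already suffices:</p>

```python
def chunk(items, size=2):
    """Split items into consecutive sublists of length `size`;
    the last sublist keeps whatever is left over."""
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk(["a", "b", "c", "d", "e", "f", "g"]))
# -> [['a', 'b'], ['c', 'd'], ['e', 'f'], ['g']]
print(chunk(["a", "b", "c"]))
# -> [['a', 'b'], ['c']]
```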
|
<python><python-3.x><list><function>
|
2023-03-02 15:24:46
| 4
| 1,199
|
french_fries
|
75,617,394
| 19,003,861
|
How to customise Folium html popups in a for loop in python using .format()?
|
<p>I am trying to change the standard <code>popups</code> provided with Folium and make these work with a <code>for loop</code>. I am using Django.</p>
<p>I succeeded in changing the html, following a few tutorials.</p>
<p>However I am struggling to understand how to call the variables in the html.</p>
<p>Quite simply I have a map with a list of locations. If the user click on the locations the popup appear and would display logo, name and a few other things specific to the location.</p>
<p><strong>what I tried</strong>
I tried different iterations, including adding <code>{{venue.name}}</code>, but clearly this doesn't work since I am not in the template.py.</p>
<p>The current one <code>""".format(venue.name)+"""</code> is an example I found on a tutorial. However I must be missing something as this is not working for me. (<a href="https://towardsdatascience.com/use-html-in-folium-maps-a-comprehensive-guide-for-data-scientists-3af10baf9190" rel="nofollow noreferrer">https://towardsdatascience.com/use-html-in-folium-maps-a-comprehensive-guide-for-data-scientists-3af10baf9190</a>)</p>
<p><strong>my question</strong>
My question is how I can reference a variable within the for loop of my <code>views.py</code>, where I am also building the HTML. In my particular case, how can I call {{venue.name}} in a format views.py is going to understand?</p>
<p>(I haven't added the files, as I didn't think it is necessary for this problem. I am obviously happy to provide the models or template if needed)</p>
<p><strong>The code</strong></p>
<p><strong>views.py</strong></p>
<pre><code>def index(request):
venue_markers = Venue.objects.all()
m = folium.Map(location=center_location,zoom_start=center_zoom_start,tiles=tiles_style)
for venue in venue_markers:
html="""
<!DOCTYPE html>
<html>
""".format(venue.name)+""" #<-- this is where I am stuck, as I cannot seem to call the variable within the for loop.
</html>
"""
iframe = branca.element.IFrame(html=html, width=150, height=75)
popup=folium.Popup(iframe, max_width=2650)
coordinates =(venue.latitude, venue.longitude)
folium.Marker(coordinates,popup=popup,icon=folium.Icon(color='black',icon='utensils',prefix='fa',fill_opacity=1)).add_to(m)
context = {'venue_markers':venue_markers,'map':m._repr_html_}
return render(request,'main/index.html',context)
</code></pre>
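<p>One detail worth flagging in the quoted code: <code>""".format(venue.name)+"""</code> calls <code>.format</code> on a string fragment that contains no <code>{}</code> placeholder, so the name is silently dropped. Inside <code>views.py</code>, ordinary Python interpolation (an f-string or <code>str.format</code> with <code>{}</code> placeholders) does the job, not the Django <code>{{ }}</code> template syntax. A stripped-down sketch of the popup builder (the markup is illustrative):</p>

```python
def popup_html(venue_name):
    # f-strings interpolate the loop variable directly into the markup;
    # {venue_name} here is ordinary Python, not Django template syntax.
    return f"""<!DOCTYPE html>
<html>
  <body><b>{venue_name}</b></body>
</html>"""

html = popup_html("Chez Hypothetical")  # venue name is a made-up example
# then, as in the question:
#   iframe = branca.element.IFrame(html=html, width=150, height=75)
#   popup = folium.Popup(iframe, max_width=2650)
```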
|
<python><django><django-views><django-templates><django-queryset>
|
2023-03-02 15:14:33
| 1
| 415
|
PhilM
|