| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,424,884
| 14,843,068
|
rasterio cannot convert float NaN to integer
|
<p>I'm trying to run the following script to get the pixel values from a raster at point locations. Everything works except the last line, which raises "ValueError: cannot convert float NaN to integer". How can I solve this?
By the way, <code>src.nodata</code> returns -1 (type = float).</p>
<pre><code>import pandas as pd
import geopandas as gpd
import rasterio
path = "/Users/mauritskruisheer/Documents/GeoData/WCARO stoytelling/"
# insert water points
waterpoints_Mali = pd.read_excel(path + "DISE-AEP compiles.xlsx", header = 1)
waterpoints_Mali[['Latitude', 'Longitude']] = waterpoints_Mali[['Latitude', 'Longitude']].astype(float)
fn_river_100 = path + "Riverine Floods/inunriver_historical_000000000WATCH_1980_rp00100.tif"
fn_aqueduct = path + "y2019m07d11_aqueduct30_annual_v01.csv"
# dataframe to geodataframe (ready for plotting)
gdf = gpd.GeoDataFrame(
    waterpoints_Mali, geometry=gpd.points_from_xy(waterpoints_Mali.Longitude, waterpoints_Mali.Latitude), crs="EPSG:4326"
)
# plot the geodata
# gdf.plot()
with rasterio.open(fn_river_100, "r+") as src:
    src.nodata = int(-1)
    coord_list = [(x, y) for x, y in zip(gdf["geometry"].x, gdf["geometry"].y)]
    gdf["value"] = [x for x in src.sample(coord_list)]
</code></pre>
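The error is reproducible without rasterio at all: a Python float NaN simply cannot be cast to int, which is what happens here when sampled nodata pixels come back as NaN. A minimal sketch (assuming the NaN originates from nodata pixels in the raster):

```python
import math

# int() on a float NaN raises exactly the ValueError seen above
nodata_value = float("nan")
try:
    int(nodata_value)
except ValueError as exc:
    message = str(exc)  # "cannot convert float NaN to integer"

# A common workaround is to test for NaN first and substitute a sentinel
cleaned = -1 if math.isnan(nodata_value) else int(nodata_value)
```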
|
<python><numpy><rasterio>
|
2023-06-07 15:26:42
| 0
| 622
|
CrossLord
|
76,424,844
| 9,372,996
|
Iterating on rows in pandas DataFrame to compute rolling sums and a calculation
|
<p>I have a pandas DataFrame, and I'm trying (in pandas or DuckDB SQL) to do the following for each partition of <code>CODE</code>, <code>DAY</code>, and <code>TIME</code>:</p>
<ol>
<li>Iterate over each row to calculate the sum of the 2 previous <code>TRANSACTIONS</code> or <code>TRANSACTIONS_FORECAST</code> values (the first non-NULL value, i.e. <code>COALESCE(TRANSACTIONS, TRANSACTIONS_FORECAST)</code>).</li>
<li>Calculate the sum total of the <code>TRANSACTIONS_WEEK</code> values.</li>
<li>Calculate the <code>TRANSACTION_FORECAST</code> value as <code>step1 / step2 * TRANSACTIONS_WEEK</code>.</li>
<li>Stop iteration as soon as <code>TRANSACTION_FORECAST</code> column values are populated.</li>
</ol>
<p>Here's the DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from io import StringIO
csv_string="""'CODE,DAY,TIME,WEEK,TRANSACTIONS,TRANSACTIONS_WEEK,SOURCE\nA,Monday,1,20,23,263154,actual\nA,Monday,1,21,16,246649,actual\nA,Monday,1,23,,244086.6208,forecast\nA,Monday,1,24,,243197.7547,forecast\nA,Monday,1,25,,235561.9992,forecast\nA,Monday,1,26,,231105.5393,forecast\nA,Monday,1,27,,232744.1484,forecast\nA,Monday,1,28,,238718.1522,forecast\nA,Monday,1,29,,234870.8116,forecast\nA,Monday,1,30,,230410.6348,forecast\nA,Monday,1,31,,229832.8125,forecast\nA,Monday,1,32,,227024.5631,forecast\nA,Monday,1,33,,226483.0862,forecast\nA,Monday,1,34,,229247.3648,forecast\nA,Monday,1,35,,221272.5875,forecast\nA,Monday,1,36,,250239.7494,forecast\nA,Monday,1,37,,263229.4532,forecast\nA,Monday,1,38,,252955.314,forecast\nA,Monday,1,39,,241695.9493,forecast\nA,Monday,1,40,,247447.6128,forecast\nA,Monday,1,41,,247364.4851,forecast\nA,Monday,1,42,,244082.4747,forecast\nA,Monday,1,43,,229432.3064,forecast\nA,Monday,1,44,,222934.6285,forecast\nA,Monday,1,45,,224727.4305,forecast\nA,Monday,1,46,,225616.1613,forecast\nA,Monday,1,47,,225950.7391,forecast\nA,Monday,1,48,,225553.2239,forecast\nA,Monday,1,49,,225523.3712,forecast\nA,Monday,1,50,,215116.1205,forecast\nA,Monday,1,51,,239592.5374,forecast\nA,Monday,1,52,,228592.4596,forecast\nB,Monday,1,20,29,263154,orders_base\nB,Monday,1,21,27,246649,orders_base\nB,Monday,1,23,,244086.6208,forecast\nB,Monday,1,24,,243197.7547,forecast\nB,Monday,1,25,,235561.9992,forecast\nB,Monday,1,26,,231105.5393,forecast\nB,Monday,1,27,,232744.1484,forecast\nB,Monday,1,28,,238718.1522,forecast\nB,Monday,1,29,,234870.8116,forecast\nB,Monday,1,30,,230410.6348,forecast\nB,Monday,1,31,,229832.8125,forecast\nB,Monday,1,32,,227024.5631,forecast\nB,Monday,1,33,,226483.0862,forecast\nB,Monday,1,34,,229247.3648,forecast\nB,Monday,1,35,,221272.5875,forecast\nB,Monday,1,36,,250239.7494,forecast\nB,Monday,1,37,,263229.4532,forecast\nB,Monday,1,38,,252955.314,forecast\nB,Monday,1,39,,241695.9493,forecast\nB,Monday,1,40,,247447.6128,forecast\nB,Monday,1,41,,
247364.4851,forecast\nB,Monday,1,42,,244082.4747,forecast\nB,Monday,1,43,,229432.3064,forecast\nB,Monday,1,44,,222934.6285,forecast\nB,Monday,1,45,,224727.4305,forecast\nB,Monday,1,46,,225616.1613,forecast\nB,Monday,1,47,,225950.7391,forecast\nB,Monday,1,48,,225553.2239,forecast\nB,Monday,1,49,,225523.3712,forecast\nB,Monday,1,50,,215116.1205,forecast\nB,Monday,1,51,,239592.5374,forecast\nB,Monday,1,52,,228592.4596,forecast\nC,Saturday,2,19,173,259156,orders_base\nC,Saturday,2,20,179,263154,orders_base\nC,Saturday,2,21,185,246649,orders_base\nC,Saturday,2,22,162,225220,orders_base\nC,Saturday,2,23,,244086.6208,forecast\nC,Saturday,2,24,,243197.7547,forecast\nC,Saturday,2,25,,235561.9992,forecast\nC,Saturday,2,26,,231105.5393,forecast\nC,Saturday,2,27,,232744.1484,forecast\nC,Saturday,2,28,,238718.1522,forecast\nC,Saturday,2,29,,234870.8116,forecast\nC,Saturday,2,30,,230410.6348,forecast\nC,Saturday,2,31,,229832.8125,forecast\nC,Saturday,2,32,,227024.5631,forecast\nC,Saturday,2,33,,226483.0862,forecast\nC,Saturday,2,34,,229247.3648,forecast\nC,Saturday,2,35,,221272.5875,forecast\nC,Saturday,2,36,,250239.7494,forecast\nC,Saturday,2,37,,263229.4532,forecast\nC,Saturday,2,38,,252955.314,forecast\nC,Saturday,2,39,,241695.9493,forecast\nC,Saturday,2,40,,247447.6128,forecast\nC,Saturday,2,41,,247364.4851,forecast\nC,Saturday,2,42,,244082.4747,forecast\nC,Saturday,2,43,,229432.3064,forecast\nC,Saturday,2,44,,222934.6285,forecast\nC,Saturday,2,45,,224727.4305,forecast\nC,Saturday,2,46,,225616.1613,forecast\nC,Saturday,2,47,,225950.7391,forecast\nC,Saturday,2,48,,225553.2239,forecast\nC,Saturday,2,49,,225523.3712,forecast\nC,Saturday,2,50,,215116.1205,forecast\nC,Saturday,2,51,,239592.5374,forecast\nC,Saturday,2,52,,228592.4596,forecast'"""
df = pd.read_csv(StringIO(csv_string))
</code></pre>
<p>Here's the expected result:</p>
<pre class="lang-py prettyprint-override"><code>csv_string_result="""'CODE,DAY,TIME,WEEK,TRANSACTIONS,TRANSACTIONS_WEEK,SOURCE,TRANSACTIONS_ROLLING_2_SUM,TRANSACTIONS_WEEK_ROLLING_2_SUM,TRANSACTIONS_FORECAST\nA,Monday,1,20,23,263154,actual,,,\nA,Monday,1,21,16,246649,actual,,,\nA,Monday,1,23,,244086.6208,forecast,39,509803,18.67266025\nA,Monday,1,24,,243197.7547,forecast,34.67266025,490735.6208,17.18300601\nA,Monday,1,25,,235561.9992,forecast,35.85566626,487284.3756,17.3332716\nA,Monday,1,26,,231105.5393,forecast,34.51627761,478759.7539,16.66159882\nA,Monday,1,27,,232744.1484,forecast,,,\nA,Monday,1,28,,238718.1522,forecast,,,\nA,Monday,1,29,,234870.8116,forecast,,,\nA,Monday,1,30,,230410.6348,forecast,,,\nA,Monday,1,31,,229832.8125,forecast,,,\nA,Monday,1,32,,227024.5631,forecast,,,\nA,Monday,1,33,,226483.0862,forecast,,,\nA,Monday,1,34,,229247.3648,forecast,,,\nA,Monday,1,35,,221272.5875,forecast,,,\nA,Monday,1,36,,250239.7494,forecast,,,\nA,Monday,1,37,,263229.4532,forecast,,,\nA,Monday,1,38,,252955.314,forecast,,,\nA,Monday,1,39,,241695.9493,forecast,,,\nA,Monday,1,40,,247447.6128,forecast,,,\nA,Monday,1,41,,247364.4851,forecast,,,\nA,Monday,1,42,,244082.4747,forecast,,,\nA,Monday,1,43,,229432.3064,forecast,,,\nA,Monday,1,44,,222934.6285,forecast,,,\nA,Monday,1,45,,224727.4305,forecast,,,\nA,Monday,1,46,,225616.1613,forecast,,,\nA,Monday,1,47,,225950.7391,forecast,,,\nA,Monday,1,48,,225553.2239,forecast,,,\nA,Monday,1,49,,225523.3712,forecast,,,\nA,Monday,1,50,,215116.1205,forecast,,,\nA,Monday,1,51,,239592.5374,forecast,,,\nA,Monday,1,52,,228592.4596,forecast,,,\nB,Monday,1,20,29,263154,orders_base,,,\nB,Monday,1,21,27,246649,orders_base,,,\nB,Monday,1,23,,244086.6208,forecast,56,509803,26.81202497\nB,Monday,1,24,,243197.7547,forecast,53.81202497,490735.6208,26.66805322\nB,Monday,1,25,,235561.9992,forecast,53.48007819,487284.3756,25.85322815\nB,Monday,1,26,,231105.5393,forecast,52.52128136,478759.7539,25.35292274\nB,Monday,1,27,,232744.1484,forecast,,,\nB,Monda
y,1,28,,238718.1522,forecast,,,\nB,Monday,1,29,,234870.8116,forecast,,,\nB,Monday,1,30,,230410.6348,forecast,,,\nB,Monday,1,31,,229832.8125,forecast,,,\nB,Monday,1,32,,227024.5631,forecast,,,\nB,Monday,1,33,,226483.0862,forecast,,,\nB,Monday,1,34,,229247.3648,forecast,,,\nB,Monday,1,35,,221272.5875,forecast,,,\nB,Monday,1,36,,250239.7494,forecast,,,\nB,Monday,1,37,,263229.4532,forecast,,,\nB,Monday,1,38,,252955.314,forecast,,,\nB,Monday,1,39,,241695.9493,forecast,,,\nB,Monday,1,40,,247447.6128,forecast,,,\nB,Monday,1,41,,247364.4851,forecast,,,\nB,Monday,1,42,,244082.4747,forecast,,,\nB,Monday,1,43,,229432.3064,forecast,,,\nB,Monday,1,44,,222934.6285,forecast,,,\nB,Monday,1,45,,224727.4305,forecast,,,\nB,Monday,1,46,,225616.1613,forecast,,,\nB,Monday,1,47,,225950.7391,forecast,,,\nB,Monday,1,48,,225553.2239,forecast,,,\nB,Monday,1,49,,225523.3712,forecast,,,\nB,Monday,1,50,,215116.1205,forecast,,,\nB,Monday,1,51,,239592.5374,forecast,,,\nB,Monday,1,52,,228592.4596,forecast,,,\nC,Saturday,2,19,173,259156,orders_base,,,\nC,Saturday,2,20,179,263154,orders_base,,,\nC,Saturday,2,21,185,246649,orders_base,,,\nC,Saturday,2,22,162,225220,orders_base,,,\nC,Saturday,2,23,,244086.6208,forecast,347,471869,179.4948544\nC,Saturday,2,24,,243197.7547,forecast,341.4948544,469306.6208,176.9648629\nC,Saturday,2,25,,235561.9992,forecast,356.4597173,487284.3756,172.319015\nC,Saturday,2,26,,231105.5393,forecast,349.283878,478759.7539,168.6053147\nC,Saturday,2,27,,232744.1484,forecast,,,\nC,Saturday,2,28,,238718.1522,forecast,,,\nC,Saturday,2,29,,234870.8116,forecast,,,\nC,Saturday,2,30,,230410.6348,forecast,,,\nC,Saturday,2,31,,229832.8125,forecast,,,\nC,Saturday,2,32,,227024.5631,forecast,,,\nC,Saturday,2,33,,226483.0862,forecast,,,\nC,Saturday,2,34,,229247.3648,forecast,,,\nC,Saturday,2,35,,221272.5875,forecast,,,\nC,Saturday,2,36,,250239.7494,forecast,,,\nC,Saturday,2,37,,263229.4532,forecast,,,\nC,Saturday,2,38,,252955.314,forecast,,,\nC,Saturday,2,39,,241695.9493,forecast,,,\nC,Satu
rday,2,40,,247447.6128,forecast,,,\nC,Saturday,2,41,,247364.4851,forecast,,,\nC,Saturday,2,42,,244082.4747,forecast,,,\nC,Saturday,2,43,,229432.3064,forecast,,,\nC,Saturday,2,44,,222934.6285,forecast,,,\nC,Saturday,2,45,,224727.4305,forecast,,,\nC,Saturday,2,46,,225616.1613,forecast,,,\nC,Saturday,2,47,,225950.7391,forecast,,,\nC,Saturday,2,48,,225553.2239,forecast,,,\nC,Saturday,2,49,,225523.3712,forecast,,,\nC,Saturday,2,50,,215116.1205,forecast,,,\nC,Saturday,2,51,,239592.5374,forecast,,,\nC,Saturday,2,52,,228592.4596,forecast,,,'"""
df_result = pd.read_csv(StringIO(csv_string_result))
</code></pre>
<p>Here's the DuckDB recursive CTE I attempted:</p>
<pre class="lang-sql prettyprint-override"><code>-- The problem with this is that terminal would kill the process after 10 mins or so
WITH RECURSIVE ROLLING_SUM AS (
SELECT CODE
, DAY
, TIME
, WEEK
, TRANSACTIONS
, TRANSACTIONS_WEEK
, CASE
WHEN TRANSACTIONS IS NULL
THEN SUM(TRANSACTIONS) OVER (
PARTITION BY CODE, DAY, TIME
ORDER BY WEEK
ROWS BETWEEN 2 PRECEDING AND 1 PRECEDING
)
END AS ROLLING_2_SUM_TRANSACTIONS
, CASE -- noticed don't actually need a CASE statement for this
WHEN TRANSACTIONS IS NULL
THEN SUM(TRANSACTIONS_WEEK) OVER (
PARTITION BY CODE, DAY, TIME
ORDER BY WEEK
ROWS BETWEEN 2 PRECEDING AND 1 PRECEDING
)
END AS ROLLING_2_SUM_TRANSACTIONS_WEEK
, ROLLING_2_SUM_TRANSACTIONS / ROLLING_2_SUM_TRANSACTIONS_WEEK AS ROLLING_2_TRANSACTIONS_PCT
, ROLLING_2_TRANSACTIONS_PCT * TRANSACTIONS_WEEK AS TRANSACTIONS_FORECAST
, SOURCE
FROM df
UNION ALL
SELECT CODE
, DAY
, TIME
, WEEK
, TRANSACTIONS
, TRANSACTIONS_WEEK
, CASE
WHEN COALESCE(TRANSACTIONS, TRANSACTIONS_FORECAST) IS NULL
THEN SUM(
COALESCE(TRANSACTIONS, TRANSACTIONS_FORECAST)
) OVER (
PARTITION BY CODE, DAY, TIME
ORDER BY WEEK
ROWS BETWEEN 2 PRECEDING AND 1 PRECEDING
)
END
, CASE
WHEN COALESCE(TRANSACTIONS, TRANSACTIONS_FORECAST) IS NULL
THEN SUM(TRANSACTIONS_WEEK) OVER (
PARTITION BY CODE, DAY, TIME
ORDER BY WEEK
ROWS BETWEEN 2 PRECEDING AND 1 PRECEDING
)
END
, ROLLING_2_SUM_TRANSACTIONS / ROLLING_2_SUM_TRANSACTIONS_WEEK AS ROLLING_2_TRANSACTIONS_PCT
, ROLLING_2_TRANSACTIONS_PCT * TRANSACTIONS_WEEK AS TRANSACTIONS_FORECAST
, SOURCE
FROM ROLLING_SUM
WHERE WEEK <= 52
)
SELECT *
FROM ROLLING_SUM
WHERE COALESCE(TRANSACTIONS, TRANSACTIONS_FORECAST) IS NOT NULL
</code></pre>
<p>I tried this in pure pandas but it quickly got out of hand. I looked into <code>itertuples</code> but I'm not sure how to compute a rolling sum that way.</p>
<p>Help on this would be greatly appreciated.</p>
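For reference, steps 1-3 above (ignoring the early-stop in step 4) can be sketched as a per-group loop in plain pandas. This is only a sketch against a cut-down version of group A from the data above, not a vetted solution:

```python
import pandas as pd

# Cut-down version of group A from the question (weeks 20-26)
df = pd.DataFrame({
    "CODE": ["A"] * 6,
    "DAY": ["Monday"] * 6,
    "TIME": [1] * 6,
    "WEEK": [20, 21, 23, 24, 25, 26],
    "TRANSACTIONS": [23, 16, None, None, None, None],
    "TRANSACTIONS_WEEK": [263154, 246649, 244086.6208, 243197.7547,
                          235561.9992, 231105.5393],
})

def forecast_group(g: pd.DataFrame) -> pd.DataFrame:
    g = g.sort_values("WEEK").copy()
    values = []       # COALESCE(TRANSACTIONS, TRANSACTIONS_FORECAST) so far
    week_values = []  # TRANSACTIONS_WEEK values seen so far
    forecasts = []
    for row in g.itertuples(index=False):
        if pd.notna(row.TRANSACTIONS):
            values.append(row.TRANSACTIONS)
            forecasts.append(None)
        else:
            # step 1: sum of the 2 previous coalesced transaction values
            rolling_txn = sum(values[-2:])
            # step 2: sum of the 2 previous TRANSACTIONS_WEEK values
            rolling_week = sum(week_values[-2:])
            # step 3: scale the ratio by the current TRANSACTIONS_WEEK
            fc = rolling_txn / rolling_week * row.TRANSACTIONS_WEEK
            values.append(fc)
            forecasts.append(fc)
        week_values.append(row.TRANSACTIONS_WEEK)
    g["TRANSACTIONS_FORECAST"] = forecasts
    return g

out = pd.concat(forecast_group(g) for _, g in df.groupby(["CODE", "DAY", "TIME"]))
```

On this slice the first two forecasts agree with the expected result above (≈18.6727 for week 23 and ≈17.1830 for week 24).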
|
<python><pandas><dataframe><duckdb>
|
2023-06-07 15:23:03
| 1
| 741
|
AK91
|
76,424,761
| 11,358,805
|
Python connection to IRC server with proxy
|
<p>I'm trying to connect to an IRC server through a SOCKS proxy for testing purposes.
The proxy is alive at the moment of writing this post: I've checked it with a proxy checker and even connected to the IRC server using this proxy address and port in mIRC's options, and everything worked fine.</p>
<p>However, I fail to connect to the IRC server from Python:</p>
<pre><code>import socks
import time
proxy_type = socks.PROXY_TYPE_SOCKS4
proxy = '192.111.135.17'
port = 18302
network = "irc.icqchat.net"
port = 6667
irc = socks.socksocket()
irc.setproxy(proxy_type, proxy, port)
irc.connect((network, port))
print (irc.recv ( 4096 ))
irc.send(bytes( 'NICK test_connection \r\n' , "UTF-8"))
irc.send(bytes( 'USER botty botty botty :Sup\r\n' , "UTF-8"))
irc.send(bytes( 'JOIN #testing \r\n' , "UTF-8"))
time.sleep(4)
</code></pre>
<p>The proxy connection either refuses to work (connects me directly with my real IP) or returns an error:
<code>socks.ProxyConnectionError: Error connecting to SOCKS4 proxy 192.111.135.17:6667: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond</code></p>
<p>I've checked it with different SOCKS4 and SOCKS5 proxies, still with no result. Is there a mistake in my code sample, or is it a known issue with the socks library?</p>
|
<python><websocket><proxy><socks><pysocks>
|
2023-06-07 15:13:20
| 1
| 523
|
Sib
|
76,424,673
| 13,392,257
|
pybind .WHL file is not a supported wheel on this platform
|
<p>I am building a .whl package on macOS Ventura 13.4 (x86_64):</p>
<pre><code>python setup.py bdist_wheel
pip install dist/my_file.whl
</code></pre>
<p>Error text:</p>
<blockquote>
<p>*.whl is not a supported wheel on this platform</p>
</blockquote>
<p>How to fix the error?</p>
<p>My versions:</p>
<pre><code>python3.7.0 (need this version)
pip 23.1.2
pybind 2.9.1 (need this version)
setuptools 39.0.1
wheel 0.40.0
</code></pre>
<p>Python was installed from official site: (macOS 64-bit installer)</p>
<p><strong>Update</strong>
I found out that the problem is related to <code>python3.7.0</code>, because there are no errors with <code>python 3.8</code>.</p>
<p>What is wrong with <code>python3.7</code>?</p>
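pip rejects a wheel when none of the tags in its filename match the tags the installing interpreter supports (the full supported list can be printed with <code>pip debug --verbose</code>). A hedged way to inspect the local side using only the standard library:

```python
import sys
import sysconfig

# The wheel filename's python tag (e.g. cp37) and platform tag
# (e.g. macosx_10_9_x86_64) must both match the installing interpreter.
python_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
platform_tag = sysconfig.get_platform().replace("-", "_").replace(".", "_")
print(python_tag, platform_tag)
```

If the wheel was built against one interpreter or macOS deployment target and installed with another, the tags will not line up and pip reports "not a supported wheel on this platform".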
|
<python><pybind11>
|
2023-06-07 15:01:37
| 1
| 1,708
|
mascai
|
76,424,619
| 12,596,824
|
pandas get dummies on comma delimited column creating duplicates
|
<p>I have a series like so:</p>
<pre><code>0 mcdonalds, popeyes
1 wendys
2 popeyes
3 mcdonalds
4 mcdonalds
</code></pre>
<p>I want to convert to dummy variables where my expected output is like so:</p>
<pre><code>popeyes wendys mcdonalds
1 0 1
0 1 0
1 0 0
0 0 1
0 0 1
</code></pre>
<p>However, when I use the following code:</p>
<pre><code>t.str.get_dummies(sep = ',')
popeyes wendys mcdonalds popeyes
1 0 1 0
0 1 0 0
0 0 0 1
0 0 1 0
0 0 1 0
</code></pre>
<p>Why does it split popeyes into two columns, and how do I fix this?</p>
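The duplicate column appears because the values are separated by a comma plus a space, so splitting on "," alone leaves " popeyes" (with a leading space) as a second, distinct token. A sketch of two possible fixes:

```python
import pandas as pd

t = pd.Series(["mcdonalds, popeyes", "wendys", "popeyes", "mcdonalds", "mcdonalds"])

# Fix 1: split on the full ", " delimiter
dummies = t.str.get_dummies(sep=", ")

# Fix 2: normalize the delimiter first, then split on ","
dummies2 = t.str.replace(", ", ",", regex=False).str.get_dummies(sep=",")
```

Both produce one column per restaurant instead of a spurious " popeyes" column.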
|
<python><pandas>
|
2023-06-07 14:54:57
| 0
| 1,937
|
Eisen
|
76,424,593
| 2,799,750
|
"This field is required" from my Django form when using Client().post to test, even though form.data shows values in those fields
|
<p>I'm trying to test some form logic in my views.py, and to do that I need to pass a form to that view, but I can't figure out why the view isn't receiving a valid form when it's posted with <code>Client().post</code> from my tests. When using the form from the browser normally, it works fine.</p>
<p>I'm using <a href="https://stackoverflow.com/questions/46449463/django-test-client-submitting-a-form-with-a-post-request">Django test Client submitting a form with a POST request</a> and <a href="https://stackoverflow.com/questions/7304248/how-should-i-write-tests-for-forms-in-django#">How should I write tests for Forms in Django?</a> as guidance with no luck after looking at many different resources.</p>
<p>The errors from <code>form.errors</code> shows</p>
<blockquote>
<p>name - This field is required</p>
</blockquote>
<blockquote>
<p>address - This field is required</p>
</blockquote>
<p>However my <code>form.data</code> shows</p>
<blockquote>
<p><QueryDict: {"'name': 'foo bar', 'address': '2344 foo st'}": ['']}></p>
</blockquote>
<p>The working live form has the form.data as:</p>
<blockquote>
<p><QueryDict: {'csrfmiddlewaretoken': ['htR...Rel', 'htR...Rel'], 'name': ['test'], 'address': ['test'],'submit': ['Submit_Form']}></p>
</blockquote>
<p>I've played around with getting a "csrfmiddlewaretoken" but I don't think that's the problem. I've also tried setting <code>.initial</code> and played around with the content type. Is <code>form.data</code> not what is looked at? <code>form.is_valid()</code> is True before the post to the view. Thanks in advance for the help!</p>
<p>My tests code is:</p>
<pre><code>@pytest.mark.django_db
def test_add_bar_with_logged_in_user_form(setUpUser, django_user_model):
    from ..models.forms import BarForm
    from django.test import Client
    bar_name = "foo bar2"
    address = "2344 foo st"
    user = setUpUser
    client = Client()
    client.force_login(user=user)
    client.login(username=user.username, password=user.password, email=user.email)
    form = BarForm()
    form_data = {
        "name": bar_name,
        "address": address,
    }
    form = BarForm(data=form_data)
    assert form.is_valid() is True
    response = client.post(urls.reverse("add_bar"), form.data, content_type="application/x-www-form-urlencoded", follow=True)
    assert response.status_code == 200
</code></pre>
<p>The relevant portion of views.py is:</p>
<pre><code>def add_bar(request):
    # If this is a POST then the user has submitted the form to add a new bar
    if request.method == "POST":
        active_user = auth.get_user(request)
        # Only logged in users should be able to submit
        if not active_user.is_authenticated:
            messages.error(request, "Must be logged in to do that")
            return HttpResponseForbidden("Authentication required")
        form = BarForm(request.POST)
        logger.info(f"*=-===-= form.data : {form.data}")
        if form.is_valid():
            logger.info(f"*=-===-= form.data : {form.data}")
            cleaned_data = form.cleaned_data
            created_by_id = active_user.id
            bar_new = Bar.objects.create(**cleaned_data, submitted_by=active_user)
            messages.success(
                request,
                f" - Thanks! Bar submited.",
            )
            return redirect("index")
        elif form.errors:
            errors = form.errors
            messages.error(request, "Form is invalid")
            logger.info(f"** test_add_bar_with_logged_in_user -> form.errors: {errors}")
            for error in form.errors:
                logger.info(f"** test_add_bar_with_logged_in_user -> error: {error}")
            logger.info(f"*=-===-= form.data : {form.data}")
            return redirect("index")
        elif form.non_field_errors():
            for error in form.non_field_errors():
                logger.info(f"** test_add_bar_with_logged_in_user -> error: {error}")
    return False
</code></pre>
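One known gotcha that matches the QueryDict evidence above (an assumption about this setup, not a confirmed diagnosis): when <code>content_type</code> is passed explicitly to <code>Client().post</code>, Django's test client sends the data as-is instead of encoding it, so a dict gets stringified into a single QueryDict key, exactly like the <code>form.data</code> shown. With that content type, the body must already be a urlencoded string; a stdlib sketch:

```python
from urllib.parse import urlencode

form_data = {"name": "foo bar2", "address": "2344 foo st"}

# With an explicit content_type, Django's test client will not encode a
# dict for you -- the body must already be urlencoded text:
body = urlencode(form_data)
# client.post(urls.reverse("add_bar"), body,
#             content_type="application/x-www-form-urlencoded", follow=True)
```

Alternatively, dropping the <code>content_type</code> argument lets the test client encode the dict itself.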
|
<python><django><pytest>
|
2023-06-07 14:51:06
| 1
| 311
|
LtDan33
|
76,424,582
| 4,984,061
|
ExcelWriter save function not saving immediately to disc
|
<p>The following code executes successfully in Jupyter; however, I am still waiting for the <code>output.xlsx</code> to arrive on my desktop.</p>
<pre><code>import pandas as pd
import xlsxwriter
# Create a DataFrame with the column values
data = {'Values': [397, 358, 412]}
df = pd.DataFrame(data)
# Create a writer using ExcelWriter
writer = pd.ExcelWriter('output.xlsx', engine='xlsxwriter')
# Write the DataFrame to Excel
df.to_excel(writer, sheet_name='Sheet1', index=False)
# Access the workbook and worksheet objects
workbook = writer.book
worksheet = writer.sheets['Sheet1']
# Define the formats for conditional formatting
green_format = workbook.add_format({'bg_color': '#00B050'})
yellow_format = workbook.add_format({'bg_color': '#FFFF00'})
orange_format = workbook.add_format({'bg_color': '#FFC000'})
red_format = workbook.add_format({'bg_color': '#FF0000'})
# Apply conditional formatting based on cell values
worksheet.conditional_format('A2:A4', {'type': 'cell','criteria': '>=','value': 391,'format': green_format})
worksheet.conditional_format('A2:A4', {'type': 'cell','criteria': 'between','minimum': 354,'maximum': 390,
'format': yellow_format})
worksheet.conditional_format('A2:A4', {'type': 'cell','criteria': 'between', 'minimum': 293,'maximum': 353,
'format': orange_format})
worksheet.conditional_format('A2:A4', {'type': 'cell','criteria': '<=','value': 292,'format': red_format})
# Save the workbook
writer.save()
</code></pre>
<p>The cell above was executed and the file was created at 9:35 today. It is currently 9:50 and I am still waiting for my file.</p>
<p>When I close the Jupyter app in Windows, the file then becomes accessible.</p>
<p>Any help would be appreciated.</p>
<p><a href="https://i.sstatic.net/ANDMN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ANDMN.png" alt="enter image description here" /></a></p>
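This is consistent with buffered output: the bytes only reach disk once the writer is closed, and here the file apparently stays open until the Jupyter kernel exits. Newer pandas versions removed <code>writer.save()</code> in favour of <code>writer.close()</code>, and the idiomatic form is a context manager (<code>with pd.ExcelWriter("output.xlsx", engine="xlsxwriter") as writer: ...</code>), which closes, and therefore flushes, the file as soon as the block ends. The effect can be demonstrated with a plain file using only the standard library:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "output.txt")

f = open(path, "w")
f.write("x" * 10)                        # sits in the userspace buffer
size_while_open = os.path.getsize(path)  # 0: nothing flushed to disk yet
f.close()                                # close() flushes the buffer
size_after_close = os.path.getsize(path)
```

The same logic applies to the Excel writer: until it is closed, the file on disk is empty or locked.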
|
<python><pandas.excelwriter>
|
2023-06-07 14:48:55
| 1
| 1,578
|
Starbucks
|
76,423,983
| 16,797,805
|
Flask API works correctly locally but returns 415 on Azure App Service
|
<h3>Current environment</h3>
<ul>
<li>Python 3.10</li>
<li>Flask 2.3.2 and waitress 2.1.2</li>
<li>Postgresql DB on Azure Postgresql managed service (flexible server)</li>
<li>App Service running docker container for the server with same requirements as above</li>
<li>Calling endpoints from python using <code>requests</code></li>
</ul>
<h3>Code and use case</h3>
<p>Relevant code for the server:</p>
<pre><code>from flask import Flask, jsonify, request
from waitress import serve
from paste.translogger import TransLogger

# --- Routes definition --- #
@app.route(f"{API_ENDPOINT_PREFIX}/hello", methods=["GET"])
def hello():
    logging.info("Hello")
    return jsonify({"message": "Hello!"})

@app.route(f"{API_ENDPOINT_PREFIX}/filter/<tablename>", methods=["GET"])
def get_filter_options(tablename: str):
    request_data = request.get_json()
    logging.debug(f"Request data: {request_data}")
    # Doing stuff here and obtaining a variable options
    logging.info(f"Retrieved {len(options)} filter options for table '{tablename}'")
    return jsonify({"filters": options})

# --- Main --- #
if __name__ == "__main__":
    logging.info(f"Welcome to Flask server")
    serve(
        TransLogger(app, setup_console_handler=False),
        host=API_ENDPOINT_HOST,
        port=API_ENDPOINT_PORT,
    )
</code></pre>
<p><code>API_ENDPOINT_PREFIX</code> is set to <code>/api</code>.</p>
<p>Note that <code>/api/hello</code> is an endpoint for testing purposes, while <code>/api/filter/<tablename></code> is supposed to be a working API that retrieves the possible options for filter fields and returns them to a UI. This API connects to the PostgreSQL database hosted on Azure.</p>
<p>I launched this locally on <code>localhost:8000</code> and everything works as expected. I also deployed the same to Azure App Service, and what happens is that the <code>/api/hello</code> endpoint still works fine, while <code>/api/filter/<tablename></code> returns <strong>415</strong>.</p>
<p>In both cases the api is called using the following snippet of code:</p>
<pre><code>header = {"Content-Type": "application/json", "Accept": "application/json"}
requests.get(
f"{BASE_API_URL}/filter/tablename",
data=json.dumps({"filters": {}}),
headers=header,
)
</code></pre>
<p>where <code>BASE_API_URL</code> is alternatively <code>http://localhost:8000/api</code> when running locally or <code>http://app-service-name.azurewebsites.net/api</code> when running on app service.</p>
<h3>What I've checked</h3>
<p>General suggestions say to include the <code>Content-Type</code> and <code>Accept</code> headers, which I do in any case. I also checked the connection between the App Service and the hosted DB, supposing that some problem may arise if the two services cannot communicate. To this end, I created a Service Connector on the App Service whose target is the hosted DB, so I suppose everything is OK there.</p>
<p>What seems strange to me is that the <code>/api/hello</code> endpoint, which does not require any payload, works fine in both environments, while the <code>/api/filter/<tablename></code> endpoint, which requires a payload, behaves differently.</p>
<p>Am I missing something? Which other actions can I take to further investigate this?</p>
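One plausible cause (an assumption, not a confirmed diagnosis): <code>request.get_json()</code> responds with 415 Unsupported Media Type whenever the incoming request does not carry an <code>application/json</code> Content-Type, and a GET request's body and headers can be dropped by intermediaries in front of App Service. <code>get_json(silent=True)</code> sidesteps the error; a sketch with a hypothetical stand-alone route:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/filter/<tablename>", methods=["GET"])
def get_filter_options(tablename: str):
    # silent=True returns None instead of raising 415/400 when the body
    # is missing or the Content-Type header was lost along the way
    request_data = request.get_json(silent=True) or {"filters": {}}
    return jsonify({"table": tablename, "filters": request_data["filters"]})

with app.test_client() as client:
    # no JSON body or header at all -- the route still answers 200
    resp = client.get("/api/filter/mytable")
```

Since many proxies do not preserve GET bodies at all, moving the filters into the query string (or switching the endpoint to POST) is the more robust fix.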
|
<python><flask><azure-web-app-service><azure-postgresql>
|
2023-06-07 13:44:48
| 1
| 857
|
mattiatantardini
|
76,423,956
| 13,162,807
|
Flask controller for Authlib get token returns "unsupported_grant_type"
|
<p>I need to migrate an old authentication flow that uses <code>Flask-OAuthlib</code> to <code>Authlib</code>. The <code>Grant</code>, <code>Client</code> and <code>Token</code> models were already in place, so I had to modify <code>Client</code> using <code>ClientMixin</code> and additional methods. However, I'm getting an <code>{"error": "unsupported_grant_type"}</code> response from the <code>/token</code> endpoint.</p>
<p>Here is the grant type:</p>
<pre class="lang-py prettyprint-override"><code>class AuthorizationCodeGrant(grants.AuthorizationCodeGrant):
    def save_authorization_code(self, code, request):
        """Saves a grant from mongodb and returns it as a Grant or None.
        @param client_id:
        @param code:
        @param grant_request:
        """
        LOGGER_PREFIX = "SAVE_AUTHORIZATION_CODE"
        logger.debug(f'{LOGGER_PREFIX}: code == {str(code)}')
        logger.debug(f'{LOGGER_PREFIX}: request == {str(request.__dict__)}')
        expires = datetime.utcnow() + timedelta(seconds=100)
        user = current_user()
        logger.debug(f'{LOGGER_PREFIX}: user == {str(user)}')
        client = request.client
        client_id = client.client_id
        grant = Grant(
            client_id=client_id,
            code=code,
            redirect_uri=request.redirect_uri,
            scopes=request.scope,
            expires=expires,
            user=user,
        )
        result = mongo.db.oauth_grant.update(
            {"user.user_id": user["user_id"], "client_id": client_id}, class_to_json(grant), upsert=True
        )
        logger.debug(f'{LOGGER_PREFIX}: result == {str(result)}')
        return grant

    def query_authorization_code(self, code, client):
        """Loads a grant from mongodb and returns it as a Grant or None.
        @param client_id:
        @param code:
        """
        LOGGER_PREFIX = "QUERY_AUTHORIZATION_CODE"
        client_id = client.client_id
        json = mongo.db.oauth_grant.find_one({"client_id": client_id, "code": code})
        grant = class_from_json(json, Grant)
        logger.debug(f'{LOGGER_PREFIX}: client_id == {str(client_id)}')
        logger.debug(f'{LOGGER_PREFIX}: json == {str(json)}')
        logger.debug(f'{LOGGER_PREFIX}: grant == {str(grant)}')
        return grant

    def delete_authorization_code(self, authorization_code):
        LOGGER_PREFIX = 'DELETE_AUTHORIZATION_CODE'
        logger.debug(f'{LOGGER_PREFIX}: authorization_code == {str(authorization_code)}')
        # db.session.delete(authorization_code)
        # db.session.commit()

    def authenticate_user(self, authorization_code):
        LOGGER_PREFIX = 'AUTHENTICATE_USER'
        logger.debug(f'{LOGGER_PREFIX}: authorization_code == {str(authorization_code)}')
        # return User.query.get(authorization_code.user_id)

def check_authorization_endpoint(request):
    logger.debug(f'Check auth endpoint called...')
    return True
</code></pre>
<p>Here is the <code>/token</code> controller:</p>
<pre class="lang-py prettyprint-override"><code>@app.route("/token", methods=["GET", "POST"])
# @oauth.token_handler
def access_token():
    LOGGER_PREFIX = 'OAUTH2_TOKEN'
    logger.debug(f'{LOGGER_PREFIX}: Getting a token...')
    token = server.create_token_response()
    logger.debug(f'{LOGGER_PREFIX}: token == {str(token)}')
    return token
</code></pre>
|
<python><mongodb><flask><authlib><flask-oauthlib>
|
2023-06-07 13:41:42
| 0
| 305
|
Alexander P
|
76,423,753
| 8,551,360
|
Error in AES Encryption/Decryption via python
|
<p>We have integrated an API through which I receive encrypted data.
The docs give a link to a tool to decrypt that data.</p>
<p>Here is the website they recommended for decrypting the received data:
<a href="https://aesencryption.net/" rel="nofollow noreferrer">https://aesencryption.net/</a></p>
<p>It works fine and I get the decrypted data on that website.</p>
<p>The website also shows the code it uses for decryption, which is in PHP. I used ChatGPT to convert that PHP code into Python with a few modifications of my own, but the resulting code doesn't work.</p>
<p>I am including everything that I have and have tried.</p>
<pre><code>import hashlib
from Crypto.Cipher import AES
import base64

class AESFunction:
    secret_key = None
    key = None
    decrypted_string = None
    encrypted_string = None

    @staticmethod
    def set_key(my_key):
        key = bytes.fromhex(my_key)
        sha = hashlib.sha256()
        sha.update(key)
        key = sha.digest()[:32]
        AESFunction.secret_key = AESFunction.key = key

    @staticmethod
    def get_decrypted_string():
        return AESFunction.decrypted_string

    @staticmethod
    def set_decrypted_string(decrypted_string):
        AESFunction.decrypted_string = decrypted_string

    @staticmethod
    def get_encrypted_string():
        return AESFunction.encrypted_string

    @staticmethod
    def set_encrypted_string(encrypted_string):
        AESFunction.encrypted_string = encrypted_string

    @staticmethod
    def encrypt(str_to_encrypt):
        cipher = AES.new(AESFunction.secret_key, AES.MODE_CBC)
        encrypted_bytes = cipher.encrypt(AESFunction.pad(str_to_encrypt).encode('utf-8'))
        AESFunction.set_encrypted_string(base64.b64encode(encrypted_bytes).decode('utf-8'))

    @staticmethod
    def decrypt(str_to_decrypt):
        cipher = AES.new(AESFunction.secret_key, AES.MODE_CBC)
        decrypted_bytes = cipher.decrypt(base64.b64decode(str_to_decrypt))
        AESFunction.set_decrypted_string(AESFunction.unpad(decrypted_bytes).decode('utf-8', errors='replace'))

    @staticmethod
    def pad(text):
        block_size = 32
        pad_len = block_size - (len(text) % block_size)
        padding = chr(pad_len) * pad_len
        return text + padding

    @staticmethod
    def unpad(text):
        pad_len = ord(text[-1])
        return text[:-pad_len]
</code></pre>
<blockquote>
<p>str_password = "f4622a74c54b81f0404c1b6589e8f96c"</p>
</blockquote>
<blockquote>
<p>str_to_decrypt = "yH0lcZaFBnG3usii6UXZovCYKZYXFUuZF5vcmT+Tz59FQw2pgkke97F/O9BrlEEB3WcqxxdgoVVq+uKxg3y14HzWmByy+f7ck68eyjxULgdCjwka666rDPjm0Tf/+jjnteSbtVPc9WRLuhaPiFBblI9aK3B38ApECDCNvvRjwE+GtfeGqcu1tCo9aTJAwXroN58Tvu6Otn2quNHyLjUrDEfSzsifkGS8rdid6J+c4VZsK87pALA/CIQxtA7z8W91f+x2bb5EfH8nglfBpsE1FP9IBnI5ECjVTJy9pM45TJhnHSYOIRHX5jcwMSUDLgBdjJ57EHdzf0RqTO4ZdLHv7xpMSsVP1oHClqbGTd9UE/yXt9a2Fvm+rPKAL9rU9tXtMCe6uONgnRELrn2VePJYcp3qViP/1HReN4IbT4EYUlXJ/Tqdph6YERTKUh67mnF4lPfHCsrb9AsPNcMYn7r8tT5dMgC1wjzXO22+tUuD8DWEbnrHPPID1WBCRua3POBMyGUcMkSIEdke739pDoKxW1Ww6iGsbsQX60g5jiJEAg6fghn6P+6osKXrfLO4OirsUo8SKRO1m6YSMf31htvkcJbUbDAe2nG+aeUxEozdVyRxUauMtxYahvqcQPJtEKgAugPoCYrPx+RhutoPyaHOOzUtzXvznOV+2qcE0HfLaSotFPeteQzx7Qj4lKDVNkmkwn8jRyVfMUFzRp01v6DqsJe89uIdTyAe65rpyzNt17TDOR8BWLxifOAhDosrW61RfFE+HBU3egySbfs3PxJxHUOrwBOhnGQwILN+bENaLgcQCjczgs9BZ3i+6XK7DwLNFL1xBt42i09NpyIbJYXE1ZkrObA9rrmGLhHCqkhIDyBAaWYYNOdYOVIRbUWSA+bSeCg6xIgy/FhhxxNsmtYAMzrdjQyjYIIEuAvhe9wOTP1IPNqbtuyEPoObThCtZy3JunneK4SdW5f5rzNDOPu4dHw1EuRBYNmqmXh2Mpw2LWoit/+8Eob4aiZDpeH3HJAXrXHH+lyulJgxm4fkHZXcj25ISr0OOHjuLqAIn7ezceeRnLSfHkjL74klt0aRCIuG0p6MKwVVdHEX2n4WblcYHgEHNUoWWpOb9I3gf6Egb5uk59u9Sjc+9/MRaXoIulv+qgRNDGVMByEIqNlpmPSKnwWEEmtV54aa2QjGMOb282tTuEMpha9uHrgBfEwsKeXZ6Uwx1SPyvNHa5eoTojLc7avBdU//5XwPLp40DRvuLo6Kn6fY2SMeWRqP8A4LtarU+2eYwqWkdeggJFF3LFBkYDDwh9N+j8mBdtv1sg/bguD6yUE3UUN/jZ9nhxraxt9qrP11ifXnvSK5BToa7t5SFBiMDCM/n/bBINd+N05937KStvC76AsZHFvxl4xVZq2200qDCYzSrGbVrRcJUABvvlXm1EGHWAQcfe9K5UJ5KZIwijd9p1e2OReHNOziw1VGYR3vZ45ZkqNQnRaqYacnmXEGYgXa7ik92PWarE148+TRdoDBC1jPHRNejXNx9KRoU88cZiGENRjm80jTxZ4tWgVFDzS99GhsHkZw2+4pHZNN06gZce1KsQnnx+algAYK"</p>
</blockquote>
<p>Encrypted data I am getting from <a href="https://aesencryption.net/" rel="nofollow noreferrer">https://aesencryption.net/</a></p>
<blockquote>
<p>[{"paneli/":"7a7","panel name":"Nurition Gene Test"},{"panel_id":"798","panel name":"Fitness Genomics"},{"panel_id":"799","panel name":"Hormonal Disorders"},{"panel_id":"801","panel name":"Allergy"},{"panel_id":"802","panel name":"Personality Traits"},{"panel_id":"803","panel name":"Dermatology"},{"panel_id":"804","panel name":"Addiction"},{"panel_id":"805","panel name":"Neurology"},{"panel_id":"806","panel name":"Lifestyle Genomics"},{"panel_id":"807","panel name":"Ophthalmology"},{"panel_id":"808","panel name":"Renal Disorders"},{"panel_id":"809","panel name":"Circadian Rhythm Associated Traits"},{"panel_id":"810","panel name":"GastroIntestinal Disorders"},{"panel_id":"811","panel name":"Pulmonary Disorder"},{"panel_id":"812","panel name":"Vaccinomics"},{"panel_id":"813","panel name":"Immunology"},{"panel_id":"814","panel name":"Dental Diseases"},{"panel_id":"815","panel name":"Cardiovascular Diseases"},{"panel_id":"816","panel name":"IVF & Pregnancy Loss"},{"panel_id":"817","panel name":"Hematological Diseases"},{"panel_id":"818","panel name":"Bone Health and Diseases"},{"panel_id":"819","panel name":"Infectious Diseases"},{"panel_id":"991","panel name":"QUA Request"}]</p>
</blockquote>
<p>Finally, I am using this command to get the decryption data:</p>
<pre><code>AESFunction.decrypt(str_to_decrypt)
decrypted_string = AESFunction.get_decrypted_string()
print("Decrypted string:", decrypted_string)
</code></pre>
<p>I am getting this error (that is coming from "def unpad(text)":</p>
<blockquote>
<p>TypeError: ord() expected string of length 1, but int found</p>
</blockquote>
<p>I have idea of what this error means but have no solution for this.
Can someone please help me out ot find a method to decrypt such data that I am receiving from the API</p>
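<p>The error comes from calling <code>ord()</code> on <code>bytes</code>: indexing a <code>bytes</code> object already yields an <code>int</code>, so <code>text[-1]</code> is an <code>int</code> and <code>ord()</code> rejects it. A minimal sketch of an <code>unpad</code> that handles both <code>str</code> and <code>bytes</code> (illustrative only, not the full decryption path):</p>

```python
def unpad(data):
    # bytes indexing returns an int directly; str indexing needs ord()
    pad_len = data[-1] if isinstance(data, (bytes, bytearray)) else ord(data[-1])
    return data[:-pad_len]

# a 32-byte block padded the same way the question's pad() does: chr(pad_len) * pad_len
padded = b"hello" + bytes([27]) * 27
print(unpad(padded))             # b'hello'
print(unpad("ab" + chr(2) * 2))  # 'ab'
```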
|
<python><django><encryption><aes><pycryptodome>
|
2023-06-07 13:20:50
| 2
| 548
|
Harshit verma
|
76,423,672
| 7,192,318
|
Confusing output from numpy.power function
|
<p>I am trying to learn numpy. At <a href="https://www.w3schools.com/python/numpy/numpy_ufunc_simple_arithmetic.asp" rel="nofollow noreferrer">this page</a>, I saw the following code:</p>
<pre><code>import numpy as np
arr1 = np.array([10, 20, 30, 40, 50, 60])
arr2 = np.array([3, 5, 6, 8, 2, 33])
newarr = np.power(arr1, arr2)
print(newarr)
</code></pre>
<p>Here is the results:</p>
<p><a href="https://i.sstatic.net/wCBTX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wCBTX.png" alt="enter image description here" /></a></p>
<p>I do not understand why the final element of the result is zero.</p>
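<p>What happens here, as I understand it: NumPy integer arrays use a fixed-width dtype, so the result of <code>60 ** 33</code> wraps modulo <code>2**64</code>. Since <code>60 = 2**2 * 3 * 5</code>, <code>60**33</code> contains the factor <code>2**66</code> and is therefore divisible by <code>2**64</code>, which makes the wrapped result exactly zero. A small demonstration:</p>

```python
import numpy as np

# Fixed-width integers wrap silently on overflow.
wrapped = np.power(np.array([60], dtype=np.int64), 33)
print(wrapped)  # [0] -- 60**33 is divisible by 2**64

# Python ints are arbitrary precision; dtype=object avoids the wrap-around.
exact = np.power(np.array([60], dtype=object), 33)
print(exact[0] == 60 ** 33)  # True
```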
|
<python><python-3.x><numpy><overflow>
|
2023-06-07 13:11:36
| 2
| 625
|
Masoud
|
76,423,456
| 5,844,870
|
Getting error when adding one python repo to another python repo as a submodule
|
<p>I have two separate Python code repos, A and B. Currently the code in repo B runs as a stand-alone process (on Linux). I need to add repo B under repo A as a submodule so that code A depends on code B. For this purpose I have checked out repo B under repo A (B is a subfolder of A).
But when I invoke a method in code B from inside a method in code A, I run into a lot of dependency errors, e.g. "ModuleNotFoundError: No module named 'util'".
Is there a way to keep the code in repo B unchanged but make it work both as a stand-alone process and as dependent code inside code A?</p>
|
<python>
|
2023-06-07 12:45:42
| 0
| 684
|
NKM
|
76,423,362
| 10,144,963
|
How to optimize generation of unique combinations with varied weights in Python?
|
<p>I'm facing an optimization problem in Python where I aim to generate a large number of unique combinations (e.g. 10,000), with each combination being as distinct as possible from the others.</p>
<p>Here is a simplified version of my problem: I have a set of traits, where each trait can take on a certain number of variations. I want to create a set of unique combinations of these traits with maximized differences.</p>
<pre class="lang-py prettyprint-override"><code>traits = [ "trait1", "trait2", "trait3", ...]
traits_amounts = { "trait1" : value1, "trait2" : value2, "trait3" : value3, ...}
</code></pre>
<p>To quantify the differences, I've assigned a score to each trait, such that a higher score corresponds to a larger difference when that trait is changed. For instance, if I were to generate a new combination that differs from an existing one by changing <code>trait1</code>, the 'difference score' between the two combinations would be the score assigned to <code>trait1</code>.</p>
<pre class="lang-py prettyprint-override"><code>trait_score = { "trait1" : score1, "trait2" : score2, "trait3" : score3, ...}
</code></pre>
<p>My objective is to generate n combinations such that the 'difference score' between the least different combinations is as high as possible.</p>
<p>I tried finding an optimal solution using linear programming via Pulp. Unfortunately this problem seems to be exponentially hard and I think my best luck is to use a heuristic.</p>
<p>What approach can I take to solve this optimization problem? How can I efficiently generate these combinations in Python? Are there any libraries or algorithms that could be particularly useful for this situation?</p>
<p>Thanks in advance for any help!</p>
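<p>For concreteness, the 'difference score' described above can be written as the sum of the scores of the traits on which two combinations differ. A minimal sketch, using made-up trait names and scores (assumptions for illustration, not values from the question):</p>

```python
# Hypothetical trait scores for illustration.
trait_score = {"trait1": 5, "trait2": 2, "trait3": 1}

def difference_score(combo_a, combo_b):
    # Sum the score of every trait whose chosen variation differs.
    return sum(score for trait, score in trait_score.items()
               if combo_a[trait] != combo_b[trait])

a = {"trait1": 0, "trait2": 1, "trait3": 2}
b = {"trait1": 1, "trait2": 1, "trait3": 0}
print(difference_score(a, b))  # trait1 and trait3 differ -> 5 + 1 = 6
```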
|
<python><optimization><linear-programming><pulp>
|
2023-06-07 12:33:03
| 1
| 999
|
Th0rgal
|
76,423,307
| 1,422,096
|
How to have a lambda function evaluate a variable now (and not postponed)
|
<p>I have a class for a hardware object (here Fridge), and I'd like to automatically create HTTP API routes for a given subset of Fridge's methods, here <code>open</code> and <code>close</code>.</p>
<pre><code>from bottle import Bottle, request
class Fridge:
def _not_exposed(self):
print("This one is never called via HTTP")
def open(self, param1=None, param2=None):
print("Open", param1, param2)
def close(self):
print("Close")
# + many other methods
f = Fridge()
app = Bottle("")
for action in ["open", "close"]:
app.route(f"/action/{action}", callback=lambda: (getattr(f, action)(**request.query)))
app.run()
</code></pre>
<p>It works, except that in</p>
<pre><code>...callback=lambda: (getattr(f, action)(**request.query))
</code></pre>
<p><code>action</code> is evaluated <em>when the lambda function is called</em>.</p>
<p>Thus when opening <a href="http://127.0.0.1:8080/action/open?param1=123" rel="nofollow noreferrer">http://127.0.0.1:8080/action/open?param1=123</a>, the lambda is called, and at this time <code>action</code> has the value ... <code>"close"</code> (the last value in the <code>for</code> enumeration), then <code>getattr(f, action)</code> links to <code>f.close</code> instead of <code>f.open</code>. This is not what we want!</p>
<p><strong>Question: in this lambda definition, how to have <code>action</code> evaluated now, and not postponed?</strong></p>
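<p>For reference, the standard early-binding trick is to bind the loop variable at definition time, either through a helper function or through a default argument. A standalone sketch of both forms (without Bottle):</p>

```python
def make_handler(name):
    # `name` is bound when make_handler is called, not when the lambda runs.
    return lambda: f"calling {name}"

handlers = [make_handler(a) for a in ["open", "close"]]
print([h() for h in handlers])  # ['calling open', 'calling close']

# Same effect with a default argument, which is evaluated at definition time:
handlers2 = [lambda a=a: f"calling {a}" for a in ["open", "close"]]
print([h() for h in handlers2])  # ['calling open', 'calling close']
```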
<p>Expected behaviour for the <code>for</code> loop:</p>
<pre><code>app.route("/action/open", callback=lambda: f.open(**request.query))
app.route("/action/close", callback=lambda: f.close(**request.query))
</code></pre>
|
<python><lambda><closures><bottle><getattr>
|
2023-06-07 12:26:41
| 0
| 47,388
|
Basj
|
76,423,170
| 1,432,980
|
write period index as a single value timestamp to parquet file
|
<p>I have a date <code>9999-12-31</code>. By default Pandas does not support the timestamp for this date. However I need to write it as timestamp to <code>parquet</code> file.</p>
<p>As a workaround, Pandas documentation proposes to use <code>period</code> to handle such date. However I cannot write it directly to <code>parquet</code> as it causes error <code>Not supported to convert PeriodArray to 'timestamp[us]' type</code> when I do it like this</p>
<pre><code>df = pd.DataFrame([pd.Period('9999-12-31', 'D')], columns=['Date'])
df.to_parquet('output_date.parquet', schema=pa.schema([('Date', pa.timestamp("us"))]))
</code></pre>
<p>I was trying to get a single value from this period</p>
<pre><code>df = pd.DataFrame([pd.Period('9999-12-31', 'D')], columns=['Date'])
df = df.iloc[0,:].to_frame().transpose()
df.to_parquet('output_date.parquet', schema=pa.schema([('Date', pa.timestamp("us"))]))
</code></pre>
<p>However even there is still a <code>PeriodArray</code> error.</p>
<p>And when I try to convert it to timestamp I get the normal timestamp error</p>
<pre><code>print(df.iloc[0].values[0].to_timestamp()) # Out of bounds nanosecond timestamp: 9999-12-31 00:00:00
</code></pre>
<p>How can I write it to parquet?</p>
|
<python><pandas><parquet><pyarrow>
|
2023-06-07 12:10:24
| 1
| 13,485
|
lapots
|
76,423,001
| 4,520,520
|
Django model clean function validation error problem
|
<p>I have a <code>Django</code> model with custom clean function as below:</p>
<pre><code>class License(models.Model):
# fields of the model
def clean(self):
if condition1:
raise ValidationError('Please correct the error!')
</code></pre>
<p>The problem is that my admin user uploads a file as required by the <code>FileField</code> of the <code>License</code> model, but when I raise the <code>ValidationError</code> the file fields are emptied and the user is forced to upload the files again. Is it possible to raise the error but keep the files?</p>
<p>This doesn't happen for other fields such as <code>CharField</code>.</p>
|
<python><django><django-admin><django-file-upload>
|
2023-06-07 11:49:58
| 1
| 2,218
|
mohammad
|
76,422,941
| 1,946,418
|
Is a SlackResponse object a dict or a string?
|
<p>I started using the Slack API for Python and got comfortable with it from the documentation, but I couldn't understand how this works under the hood.</p>
<p>The Slack API returns a <code>SlackResponse</code> object in case of a failure (or success as well), and it will look something like this:</p>
<p><a href="https://i.sstatic.net/GuksX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GuksX.png" alt="slack-api-error" /></a></p>
<p>Now if I run</p>
<pre class="lang-py prettyprint-override"><code>e.response["error"]
</code></pre>
<p>it prints <code>'channel_not_found'</code> message - as a <code>str</code> object</p>
<p>Thought <code>"error"</code> is a <code>key</code> (as in <code>e.response</code> as a <code>dict</code>), and tried to print all the available <code>key</code>s, but it's not a <code>dict</code>/<code>hash</code> it seems</p>
<pre class="lang-py prettyprint-override"><code>e.response.keys()
# AttributeError: 'SlackResponse' object has no attribute 'keys'
</code></pre>
<p>Can someone explain how this is possible please. Would love to understand how they made this possible. TIA</p>
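<p>For illustration: an object can support <code>obj["key"]</code> without being a <code>dict</code> by defining <code>__getitem__</code>; presumably <code>SlackResponse</code> delegates indexing to an internal payload dict this way while exposing none of the other dict methods. A minimal sketch:</p>

```python
class FakeResponse:
    """Indexable like a dict without being one."""
    def __init__(self, data):
        self.data = data

    def __getitem__(self, key):
        # Invoked for resp["key"]; delegates to the wrapped payload dict.
        return self.data[key]

resp = FakeResponse({"ok": False, "error": "channel_not_found"})
print(resp["error"])          # channel_not_found
print(hasattr(resp, "keys"))  # False -- no dict interface beyond __getitem__
```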
|
<python><slack-api>
|
2023-06-07 11:43:38
| 1
| 1,120
|
scorpion35
|
76,422,894
| 2,037,411
|
Using Kaggle code/model to predict classifications for unseen dataset
|
<p>I have obtained the following code along with a dataset from a Kaggle notebook:
<a href="https://www.kaggle.com/code/danofer/predicting-protein-classification/notebook" rel="nofollow noreferrer">https://www.kaggle.com/code/danofer/predicting-protein-classification/notebook</a></p>
<pre><code>import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
import sys
# Import Datasets
df_seq = pd.read_csv('pdb_data_seq.csv')
df_char = pd.read_csv('pdb_data_no_dups.csv')
print('Datasets have been loaded...')
# 2). ----- Filter and Process Dataset ------
# Filter for only proteins
protein_char = df_char[df_char.macromoleculeType == 'Protein']
protein_seq = df_seq[df_seq.macromoleculeType == 'Protein']
print(protein_char.head())
print(protein_seq.describe(include="all"))
print(protein_char.columns)
# Select some variables to join
protein_char = protein_char[['structureId','classification','residueCount', 'resolution',
'structureMolecularWeight','crystallizationTempK', 'densityMatthews', 'densityPercentSol', 'phValue']]
protein_seq = protein_seq[['structureId','sequence']]
print(protein_seq.head())
print(protein_char.head())
# Join two datasets on structureId
model_f = protein_char.set_index('structureId').join(protein_seq.set_index('structureId'))
print(model_f.head())
print('%d is the number of rows in the joined dataset' %model_f.shape[0])
# Check NA counts
print(model_f.isnull().sum())
# Drop rows with missing values
model_f = model_f.dropna()
print('%d is the number of proteins that have a classification and sequence' %model_f.shape[0])
# Look at classification type counts
counts = model_f.classification.value_counts()
print(counts)
#plot counts
plt.figure()
sns.distplot(counts[(counts > 1000)], hist = False, color = 'purple')
plt.title('Count Distribution for Family Types')
plt.ylabel('% of records')
plt.show()
# Get classification types where counts are over 1000
types = np.asarray(counts[(counts > 1000)].index)
print(len(types))
# Filter dataset's records for classification types > 1000
data = model_f[model_f.classification.isin(types)]
# leaving more rows results in duplicates / index related?
data = data.drop_duplicates(subset=["classification","sequence"])
print(types)
print('%d is the number of records in the final filtered dataset' %data.shape[0])
data = data.drop_duplicates(subset=["classification","sequence"])
print(data.shape)
## Could add n-grams
## https://stackoverflow.com/questions/18658106/quick-implementation-of-character-n-grams-using-python
# jump_size !=1 -> less overlap in n-grams.
def char_grams(text,n=3,jump_size=2):
return [text[i:i+n] for i in range(0,len(text)-n+1,jump_size)]
data.head(3).sequence.apply(char_grams)
data["3mers"] = data.sequence.apply(char_grams)
data.tail()
data.to_csv("protein_classification_46k_ngrams.csv.gz",compression="gzip")
# 3). ----- Train Test Split -----
# Split Data
X_train, X_test, y_train, y_test = train_test_split(data['sequence'], data['classification'], test_size = 0.2, random_state = 1)
# Create a Count Vectorizer to gather the unique elements in sequence
vect = CountVectorizer(analyzer = 'char_wb', ngram_range = (4,4))
# Fit and Transform CountVectorizer
vect.fit(X_train)
X_train_df = vect.transform(X_train)
X_test_df = vect.transform(X_test)
#Print a few of the features
print(vect.get_feature_names_out()[-20:])
sys.exit()
# 4). ------ Machine Learning Models ------
# Make a prediction dictionary to store accuracys
prediction = dict()
# Naive Bayes Model
from sklearn.naive_bayes import MultinomialNB
model = MultinomialNB()
model.fit(X_train_df, y_train)
NB_pred = model.predict(X_test_df)
prediction["MultinomialNB"] = accuracy_score(NB_pred, y_test)
print(prediction['MultinomialNB'])
# Adaboost
from sklearn.ensemble import AdaBoostClassifier
model = AdaBoostClassifier()
model.fit(X_train_df,y_train)
ADA_pred = model.predict(X_test_df)
prediction["Adaboost"] = accuracy_score(ADA_pred , y_test)
print(prediction["Adaboost"])
# 5). ----- Plot Confusion Matrix for NB -----
# Plot confusion matrix
conf_mat = confusion_matrix(y_test, NB_pred, labels = types)
#Normalize confusion_matrix
conf_mat = conf_mat.astype('float')/ conf_mat.sum(axis=1)[:, np.newaxis]
# Plot Heat Map
fig , ax = plt.subplots()
fig.set_size_inches(13, 8)
sns.heatmap(conf_mat)
print(types[3])
#print(types[38])
#Print F1 score metrics
print(classification_report(y_test, NB_pred, target_names = types))
</code></pre>
<p>However, my dataset is different, and it comprises sequences in CSV format. There is only one common column between my dataset and the test/train datasets.<br />
<strong>I am seeking guidance on how to utilize this code/model to predict the classification in the subsequent column of my sequences. Please provide assistance.</strong></p>
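<p>A minimal sketch of the prediction step on unseen sequences: reuse the <em>fitted</em> vectorizer's <code>transform</code> (never <code>fit</code>) on the new data and call <code>predict</code> on the trained model. The sequences and labels below are made up for illustration; in the notebook, <code>vect</code> and <code>model</code> would be the objects fitted on the Kaggle data:</p>

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny stand-in training data; in the notebook these are protein sequences.
train_seqs = ["MKTAYIAK", "MKTAYIAR", "GGGSSGGG", "GGGSSGGA"]
train_labels = ["kinase", "kinase", "linker", "linker"]

vect = CountVectorizer(analyzer="char_wb", ngram_range=(4, 4))
X_train = vect.fit_transform(train_seqs)
model = MultinomialNB().fit(X_train, train_labels)

# New, unseen sequences: transform them with the *already-fitted* vectorizer.
new_seqs = ["MKTAYIAA", "GGGSSGGT"]
X_new = vect.transform(new_seqs)
preds = model.predict(X_new)
print(preds)
```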
|
<python><scikit-learn><multiclass-classification><protein-database>
|
2023-06-07 11:39:16
| 1
| 1,403
|
Rashid
|
76,422,846
| 11,227,857
|
How do Tensorflow and tf.data.Dataset use memory?
|
<p>I'm trying to reduce memory usage during tf model training so models can be trained on AWS containers without running out of memory. I was looking into <code>tf.data.Dataset</code> to help with this. Increasing speed to reduce AWS costs would also be a bonus.</p>
<p>The pseudo code looks like this:</p>
<pre><code>def create_model():
# some keras Dense or LSTM model
return model
def load_dataset(filepath):
# load csv file and create numpy array
return arrays
def create_sequences(arrays, sequence_length):
# create sequences for LSTM network from numpy arrays
return x_train, y_train, x_test, y_test
def train(model, x_train, y_train, x_test, y_test):
model.fit(x_train, y_train, x_test, y_test, batch_size=64)
return model
def main():
model = create_model()
arrays = load_dataset(filepath)
x_train, y_train, x_test, y_test = create_sequences(arrays)
model = train(model, x_train, y_train, x_test, y_test)
</code></pre>
<p>I'm a bit confused at how Tensorflow manages memory. The csv files contain timeseries float data, fairly simple stuff, and are <10MB usually, but when loaded as numpy sequences they can be 50-300MB according to <code>sys.getsizeof(x_train)</code>, which makes sense as if the <code>sequence_length</code> becomes long there is a lot of duplicate data.</p>
<p>I have tried to use <code>tf.data.Dataset</code> to increase performance and help reduce memory as it contains functions to cache to memory or to file, so if I change one of the above functions:</p>
<pre><code>def create_sequences(arrays, sequence_length):
# create sequences for LSTM network from numpy arrays
training_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
validation_data = tf.data.Dataset.from_tensor_slices((x_test, y_test))
training_data = training_data.cache().batch(64).prefetch(tf.data.AUTOTUNE)
validation_data = validation_data.cache().batch(64).prefetch(tf.data.AUTOTUNE)
return training_data, validation_data
</code></pre>
<p>The <code>.cache()</code> function caches the dataset to memory. But what I don't understand is, wasn't it already in memory when it was just numpy data? How is this different?</p>
<p>If I understand correctly, Python should free the 50MB-300MB memory that the numpy arrays were using when the <code>create_sequences()</code> function finishes running, but now the data I loaded is just in a different format as a tensorflow dataset.</p>
<p>I also tried using <code>.cache("my_file")</code> to cache the data to a file and load it as needed for each batch, but it only creates this file during training. So my data is still in memory the entire time.</p>
<p>When comparing code run using the memory cache vs. file cache vs without use of tf datasets, my system seemed to use the same 6.7GB of memory every time (just monitoring by eye with a system monitor)</p>
<p>There was no speed increase, which I think is mostly due to the batches being too small to see any benefit from the dataset object. But it seems like using <code>tf.data.Dataset</code> is giving me no benefit.</p>
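<p>One point worth separating from <code>tf.data</code>: the 50MB-300MB blow-up comes from materializing the overlapping sequences as copies. If the windows are instead built as views of the underlying array, nothing is duplicated; I believe <code>tf.keras.utils.timeseries_dataset_from_array</code> takes a similarly lazy approach. A NumPy-only sketch of the view-vs-copy difference:</p>

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

data = np.arange(10_000, dtype=np.float32)  # ~40 KB of raw data
windows = sliding_window_view(data, 100)    # shape (9901, 100)

# The view shares the original buffer: no ~100x duplication in memory.
print(windows.shape)
print(windows.base is not None)       # True: it's a view, not a copy
copied = np.array(windows)            # an explicit copy *is* ~100x larger
print(copied.nbytes // data.nbytes)   # 99
```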
|
<python><tensorflow>
|
2023-06-07 11:32:01
| 1
| 530
|
gazm2k5
|
76,422,805
| 1,714,724
|
How do I run 2 processes in the return of a Django view?
|
<p>I have a Django site. When the print button is pressed, I need it to run a view function and then return to home.</p>
<pre><code>return view_xlsx(request, xlspath)
return render(request, 'webpage/HomePage.html', {'last_update': lastupdted})
</code></pre>
<p>How can I both run view_xlsx (which downloads the file in the browser) and then return home?</p>
<p>I have tried</p>
<pre><code>return view_xlsx(request, xlspath), render(request, 'webpage/HomePage.html', {'last_update': lastupdted})
</code></pre>
<p>and</p>
<pre><code> try:
return view_xlsx(request, xlspath)
finally:
return render(request, 'website/HomePage.html', {'last_update': lastupdted})
</code></pre>
<p>They both work independently of each other, but won't seem to work together. Thoughts?</p>
<p>edit</p>
<pre><code>def view_xlsx(request, xlspath):
filename = xlspath.replace('\\', '/')
name = filename.split('/')[-1]
if os.path.exists(filename):
response = FileResponse(open(filename, 'rb'), content_type='application/xlsx')
response['Content-Disposition'] = f'inline; filename={name}' # user will be prompted display the PDF in the browser
# response['Content-Disposition'] = f'filename={name}' # user will be prompted display the PDF in the browser
return response
else:
return HttpResponseNotFound('Cannot find the XLS')
</code></pre>
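<p>Note that HTTP allows exactly one response per request, so a single view cannot both send a file and render a page. One common workaround is to serve the file with <code>Content-Disposition: attachment</code> instead of <code>inline</code>: the browser then saves the file without navigating away, so the user simply stays on the page they were on. A plain-Python sketch of the header difference (Django-free, for illustration only):</p>

```python
def attachment_headers(filename):
    # 'attachment' tells the browser to download without leaving the page;
    # 'inline' (as in the question's code) asks it to display the file instead.
    return {
        "Content-Type": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
        "Content-Disposition": f'attachment; filename="{filename}"',
    }

h = attachment_headers("report.xlsx")
print(h["Content-Disposition"])  # attachment; filename="report.xlsx"
```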
|
<python><django><django-views>
|
2023-06-07 11:25:42
| 2
| 311
|
grahamie
|
76,422,760
| 1,112,097
|
Have Plotly chart show only selected menu option on load
|
<p>I am using a data set that has one numerical column (value) and two categorical columns (bin and cat):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
{
'Cat': ['A', 'B', 'A', 'B', 'A', 'B'],
'Bin': ['low', 'low', 'med', 'med', 'high', 'high'],
'value': [17, 22, 12, 23, 29, 11]
}
)
</code></pre>
<p>I am using Plotly Express to render a stacked bar chart. Everything works, except that when the page first loads the chart shows all of the data instead of only the selected option from the menu.</p>
<p>How do I make the chart match the selection in the dropdown by showing only the low bin data when the chart first loads?</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
fig = px.bar(
df,
x='value',
y='Cat',
color='Bin'
)
fig.update_layout(
showlegend=False,
updatemenus=[
{
'buttons': [
{
'label': t.name,
'method': 'restyle',
'args': [{'visible': [t2.name == t.name for t2 in fig.data]}],
}
for t in fig.data
],
'x': 0.3,
'xanchor': 'left',
'y': 1.2,
'yanchor': 'top',
}
]
)
fig.show()
</code></pre>
<p>this is what the chart looks like on load
<a href="https://i.sstatic.net/pFtlJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pFtlJ.png" alt="char on load with menu selected but all data showing" /></a></p>
<p>The menu shows the low bin, but data from all the bins is visible.</p>
<p><a href="https://colab.research.google.com/drive/1vLESoYPSs0-i6s04NlvOjjs3gJOkZC3z?usp=sharing" rel="nofollow noreferrer">here is a working version</a></p>
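<p>For what it's worth, the dropdown buttons only <em>change</em> trace visibility; they do not set the initial state. A sketch of computing the first button's visibility mask, which would then be applied to the traces before rendering (the Plotly part is shown as comments, as an assumption rather than run code):</p>

```python
# Stand-in for the trace names in fig.data (one trace per Bin level).
trace_names = ["low", "med", "high"]

# The visibility mask the first button's args would apply:
first_mask = [name == trace_names[0] for name in trace_names]
print(first_mask)  # [True, False, False]

# In Plotly this would be mirrored onto the figure before fig.show():
#   for trace, vis in zip(fig.data, first_mask):
#       trace.visible = vis
```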
|
<python><plotly>
|
2023-06-07 11:19:33
| 1
| 2,704
|
Andrew Staroscik
|
76,422,754
| 4,086,107
|
how to access the child object in sqlalchemy
|
<p>Can someone please help?</p>
<pre><code>subscription = session.get(SubscriptionModel, 1)
print(subscription.customer)
</code></pre>
<p>this prints the id of the customer, but how do I get the customer object?</p>
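<p>A minimal sketch with hypothetical <code>Customer</code>/<code>Subscription</code> models: if <code>customer</code> is the foreign-key column it holds only the id, while a <code>relationship()</code> attribute returns the related object:</p>

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Subscription(Base):
    __tablename__ = "subscription"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customer.id"))  # FK column: an int
    customer = relationship("Customer")                       # the related object

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Subscription(id=1, customer=Customer(id=7, name="Alice")))
    session.commit()

with Session(engine) as session:
    sub = session.get(Subscription, 1)
    fk_value = sub.customer_id          # 7 -- just the foreign-key integer
    customer_name = sub.customer.name   # 'Alice' -- via the relationship
    print(fk_value, customer_name)
```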
|
<python><sqlalchemy>
|
2023-06-07 11:18:56
| 1
| 427
|
Fahim
|
76,422,749
| 487,873
|
multiprocessing - closing file opened in run()
|
<p>I have subclassed <code>Process()</code> to do some logging to a file (I used multiprocessing because the volume of data to be processed was causing slowdowns in the main program, and threading didn't help).</p>
<p>A simplified example is below:</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing
class LogListenerProcess(multiprocessing.Process):
def __init__(self, file_pth):
multiprocessing.Process.__init__(self)
self.file_path = file_pth
def run(self):
some_file = open(self.file_path, "w+")
print(some_file)
if __name__ == '__main__':
x = LogListenerProcess("some_file.txt")
x.start()
</code></pre>
<p>This works fine and when I stop the process, I <em>believe</em> it closes the file. However, is there a way for me to deliberately close the file when I call <code>terminate()</code> or do I have to open/close the file outside the <code>process.run()</code> and pass it in as an arg rather than the file path? (ETA: I didn't like the idea of opening a file in one process and then using it in another process, which is why I pass the file path)</p>
<p>Or have I subclassed <code>Process()</code> incorrectly?</p>
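<p>A sketch of one cooperative approach: <code>terminate()</code> kills the process without unwinding the stack, so <code>finally</code>/<code>with</code> blocks inside <code>run()</code> never execute; signalling an <code>Event</code> instead lets <code>run()</code> exit its loop normally and close the file itself. Here <code>run()</code> is called inline purely to show the control flow; in real use it would be <code>start()</code>, then <code>stop_event.set()</code> and <code>join()</code>:</p>

```python
import multiprocessing
import os
import tempfile

class LogListenerProcess(multiprocessing.Process):
    def __init__(self, file_pth):
        super().__init__()
        self.file_path = file_pth
        self.stop_event = multiprocessing.Event()  # cooperative shutdown flag

    def run(self):
        # The with-block closes the file whenever run() returns normally --
        # which it does once stop_event is set, unlike under terminate().
        with open(self.file_path, "w") as some_file:
            while not self.stop_event.wait(timeout=0.01):
                some_file.write("tick\n")

path = os.path.join(tempfile.mkdtemp(), "some_file.txt")
p = LogListenerProcess(path)
p.stop_event.set()           # ask it to stop before it writes anything
p.run()                      # inline for demonstration; normally p.start()
print(os.path.exists(path))  # True: file was created and closed cleanly
```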
|
<python>
|
2023-06-07 11:18:31
| 0
| 1,096
|
SimpleOne
|
76,422,534
| 11,183,333
|
In python how to catch all and every exception and error ever thrown anywhere
|
<p>So, I'm familiar with try-except, however in this case it's not working.
I have a package that uses socketio and asyncio.
The connection is done inside an asyncio task.
If it cannot connect to the server, it throws an exception, of course. What I wanted to do is wrap the code that uses this function in a try-except block, thinking that it would catch the exception thrown inside the package, but it's not working at all: the exceptions are thrown, and the except block does not catch them.</p>
<p>The relevant code looks like this:</p>
<pre><code>async def main():
config = configparser.ConfigParser()
# print(os.path.dirname(sys.executable))
path = str(os.path.dirname(os.path.abspath(__file__))) + "/config.ini"
print(path)
config.read(path)
servers = config["WIREGUARD_SERVER"]["URLS"].split(",")
status = None
while status != "Connected":
for server in servers:
print("\ntrying to connect to server: "+server)
try:
something = await MySomething("client", server)
except:
print("Could not connect to server at: "+server)
await asyncio.sleep(2)
loop = asyncio.new_event_loop()
loop.create_task(main())
loop.run_forever()
</code></pre>
<p>So my question is: how can I catch every exception and error ever thrown inside the program?
Sadly, rewriting the package is not an option.</p>
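<p>One mechanism that may explain this: an exception raised inside a task created with <code>create_task</code> is stored on the task object and only re-raised where the task is <em>awaited</em> (otherwise it is reported via the loop's exception handler); a <code>try/except</code> that never awaits the failing task never sees it. A standalone sketch:</p>

```python
import asyncio

async def will_fail():
    raise ConnectionError("cannot reach server")

async def main():
    task = asyncio.create_task(will_fail())
    caught = None
    try:
        await task  # awaiting the task re-raises its stored exception here
    except ConnectionError as exc:
        caught = str(exc)
    return caught

result = asyncio.run(main())
print(result)  # cannot reach server
```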
|
<python><exception><socket.io><try-catch><python-asyncio>
|
2023-06-07 10:52:35
| 1
| 324
|
Patrick Visi
|
76,422,500
| 1,501,700
|
Library exists, but can't import
|
<p>I have installed Python on my Windows 11 machine and am trying to run a simple MQTT client in the VS Code IDE. I have installed the Paho library:</p>
<pre><code>pip install paho-mqtt:
Requirement already satisfied: paho-mqtt in c:\users\g\appdata\local\programs\python\python311\lib\site-packages\paho_mqtt-1.6.1-py3.11.egg (1.6.1)
</code></pre>
<p>Got errors on lines</p>
<pre><code>import pytest
Exception has occurred: ModuleNotFoundError
No module named 'pytest'
File "C:\Python_test\Paho\paho.mqtt.python\tests\test_client.py", line 7, in <module>
import pytest
ModuleNotFoundError: No module named 'pytest'
</code></pre>
<p>and</p>
<pre><code>import paho.mqtt.client as client
Exception has occurred: ModuleNotFoundError
No module named 'paho'
File "C:\Python_test\Paho\paho.mqtt.python\tests\test_client.py", line 9, in <module>
import paho.mqtt.client as client
ModuleNotFoundError: No module named 'paho'
</code></pre>
<p>Client code:</p>
<pre><code>import inspect
import os
import sys
import time
import unicodedata
import pytest
import paho.mqtt.client as client
# From http://stackoverflow.com/questions/279237/python-import-a-module-from-a-folder
cmd_subfolder = os.path.realpath(
os.path.abspath(
os.path.join(
os.path.split(
inspect.getfile(inspect.currentframe()))[0],
'..', 'test')))
if cmd_subfolder not in sys.path:
sys.path.insert(0, cmd_subfolder)
import paho_test
# Import test fixture
from testsupport.broker import fake_broker
@pytest.mark.parametrize("proto_ver", [
(client.MQTTv31),
(client.MQTTv311),
])
class Test_connect(object):
"""
Tests on connect/disconnect behaviour of the client
"""
def test_01_con_discon_success(self, proto_ver, fake_broker):
mqttc = client.Client(
"01-con-discon-success", protocol=proto_ver)
def on_connect(mqttc, obj, flags, rc):
assert rc == 0
mqttc.disconnect()
mqttc.on_connect = on_connect
mqttc.connect_async("localhost", 1888)
mqttc.loop_start()
try:
fake_broker.start()
connect_packet = paho_test.gen_connect(
"01-con-discon-success", keepalive=60,
proto_ver=proto_ver)
packet_in = fake_broker.receive_packet(1000)
assert packet_in # Check connection was not closed
assert packet_in == connect_packet
connack_packet = paho_test.gen_connack(rc=0)
count = fake_broker.send_packet(connack_packet)
assert count # Check connection was not closed
assert count == len(connack_packet)
disconnect_packet = paho_test.gen_disconnect()
packet_in = fake_broker.receive_packet(1000)
assert packet_in # Check connection was not closed
assert packet_in == disconnect_packet
finally:
mqttc.loop_stop()
packet_in = fake_broker.receive_packet(1)
assert not packet_in # Check connection is closed
def test_01_con_failure_rc(self, proto_ver, fake_broker):
mqttc = client.Client(
"01-con-failure-rc", protocol=proto_ver)
def on_connect(mqttc, obj, flags, rc):
assert rc == 1
mqttc.on_connect = on_connect
mqttc.connect_async("localhost", 1888)
mqttc.loop_start()
try:
fake_broker.start()
connect_packet = paho_test.gen_connect(
"01-con-failure-rc", keepalive=60,
proto_ver=proto_ver)
packet_in = fake_broker.receive_packet(1000)
assert packet_in # Check connection was not closed
assert packet_in == connect_packet
connack_packet = paho_test.gen_connack(rc=1)
count = fake_broker.send_packet(connack_packet)
assert count # Check connection was not closed
assert count == len(connack_packet)
packet_in = fake_broker.receive_packet(1)
assert not packet_in # Check connection is closed
finally:
mqttc.loop_stop()
class TestPublishBroker2Client(object):
def test_invalid_utf8_topic(self, fake_broker):
mqttc = client.Client("client-id")
def on_message(client, userdata, msg):
with pytest.raises(UnicodeDecodeError):
msg.topic
client.disconnect()
mqttc.on_message = on_message
mqttc.connect_async("localhost", 1888)
mqttc.loop_start()
try:
fake_broker.start()
connect_packet = paho_test.gen_connect("client-id")
packet_in = fake_broker.receive_packet(len(connect_packet))
assert packet_in # Check connection was not closed
assert packet_in == connect_packet
connack_packet = paho_test.gen_connack(rc=0)
count = fake_broker.send_packet(connack_packet)
assert count # Check connection was not closed
assert count == len(connack_packet)
publish_packet = paho_test.gen_publish(b"\xff", qos=0)
count = fake_broker.send_packet(publish_packet)
assert count # Check connection was not closed
assert count == len(publish_packet)
disconnect_packet = paho_test.gen_disconnect()
packet_in = fake_broker.receive_packet(len(disconnect_packet))
assert packet_in # Check connection was not closed
assert packet_in == disconnect_packet
finally:
mqttc.loop_stop()
packet_in = fake_broker.receive_packet(1)
assert not packet_in # Check connection is closed
def test_valid_utf8_topic_recv(self, fake_broker):
mqttc = client.Client("client-id")
# It should be non-ascii multi-bytes character
topic = unicodedata.lookup('SNOWMAN')
def on_message(client, userdata, msg):
assert msg.topic == topic
client.disconnect()
mqttc.on_message = on_message
mqttc.connect_async("localhost", 1888)
mqttc.loop_start()
try:
fake_broker.start()
connect_packet = paho_test.gen_connect("client-id")
packet_in = fake_broker.receive_packet(len(connect_packet))
assert packet_in # Check connection was not closed
assert packet_in == connect_packet
connack_packet = paho_test.gen_connack(rc=0)
count = fake_broker.send_packet(connack_packet)
assert count # Check connection was not closed
assert count == len(connack_packet)
publish_packet = paho_test.gen_publish(
topic.encode('utf-8'), qos=0
)
count = fake_broker.send_packet(publish_packet)
assert count # Check connection was not closed
assert count == len(publish_packet)
disconnect_packet = paho_test.gen_disconnect()
packet_in = fake_broker.receive_packet(len(disconnect_packet))
assert packet_in # Check connection was not closed
assert packet_in == disconnect_packet
finally:
mqttc.loop_stop()
packet_in = fake_broker.receive_packet(1)
assert not packet_in # Check connection is closed
def test_valid_utf8_topic_publish(self, fake_broker):
mqttc = client.Client("client-id")
# It should be non-ascii multi-bytes character
topic = unicodedata.lookup('SNOWMAN')
mqttc.connect_async("localhost", 1888)
mqttc.loop_start()
try:
fake_broker.start()
connect_packet = paho_test.gen_connect("client-id")
packet_in = fake_broker.receive_packet(len(connect_packet))
assert packet_in # Check connection was not closed
assert packet_in == connect_packet
connack_packet = paho_test.gen_connack(rc=0)
count = fake_broker.send_packet(connack_packet)
assert count # Check connection was not closed
assert count == len(connack_packet)
mqttc.publish(topic, None, 0)
# Small sleep needed to avoid connection reset.
time.sleep(0.3)
publish_packet = paho_test.gen_publish(
topic.encode('utf-8'), qos=0
)
packet_in = fake_broker.receive_packet(len(publish_packet))
assert packet_in # Check connection was not closed
assert packet_in == publish_packet
mqttc.disconnect()
disconnect_packet = paho_test.gen_disconnect()
packet_in = fake_broker.receive_packet(len(disconnect_packet))
assert packet_in # Check connection was not closed
assert packet_in == disconnect_packet
finally:
mqttc.loop_stop()
packet_in = fake_broker.receive_packet(1)
assert not packet_in # Check connection is closed
def test_message_callback(self, fake_broker):
mqttc = client.Client("client-id")
userdata = {
'on_message': 0,
'callback1': 0,
'callback2': 0,
}
mqttc.user_data_set(userdata)
def on_message(client, userdata, msg):
assert msg.topic == 'topic/value'
userdata['on_message'] += 1
def callback1(client, userdata, msg):
assert msg.topic == 'topic/callback/1'
userdata['callback1'] += 1
def callback2(client, userdata, msg):
assert msg.topic in ('topic/callback/3', 'topic/callback/1')
userdata['callback2'] += 1
mqttc.on_message = on_message
mqttc.message_callback_add('topic/callback/1', callback1)
mqttc.message_callback_add('topic/callback/+', callback2)
mqttc.connect_async("localhost", 1888)
mqttc.loop_start()
try:
fake_broker.start()
connect_packet = paho_test.gen_connect("client-id")
packet_in = fake_broker.receive_packet(len(connect_packet))
assert packet_in # Check connection was not closed
assert packet_in == connect_packet
connack_packet = paho_test.gen_connack(rc=0)
count = fake_broker.send_packet(connack_packet)
assert count # Check connection was not closed
assert count == len(connack_packet)
publish_packet = paho_test.gen_publish(b"topic/value", qos=1, mid=1)
count = fake_broker.send_packet(publish_packet)
assert count # Check connection was not closed
assert count == len(publish_packet)
publish_packet = paho_test.gen_publish(b"topic/callback/1", qos=1, mid=2)
count = fake_broker.send_packet(publish_packet)
assert count # Check connection was not closed
assert count == len(publish_packet)
publish_packet = paho_test.gen_publish(b"topic/callback/3", qos=1, mid=3)
count = fake_broker.send_packet(publish_packet)
assert count # Check connection was not closed
assert count == len(publish_packet)
puback_packet = paho_test.gen_puback(mid=1)
packet_in = fake_broker.receive_packet(len(puback_packet))
assert packet_in # Check connection was not closed
assert packet_in == puback_packet
puback_packet = paho_test.gen_puback(mid=2)
packet_in = fake_broker.receive_packet(len(puback_packet))
assert packet_in # Check connection was not closed
assert packet_in == puback_packet
puback_packet = paho_test.gen_puback(mid=3)
packet_in = fake_broker.receive_packet(len(puback_packet))
assert packet_in # Check connection was not closed
assert packet_in == puback_packet
mqttc.disconnect()
disconnect_packet = paho_test.gen_disconnect()
packet_in = fake_broker.receive_packet(len(disconnect_packet))
assert packet_in # Check connection was not closed
assert packet_in == disconnect_packet
finally:
mqttc.loop_stop()
packet_in = fake_broker.receive_packet(1)
assert not packet_in # Check connection is closed
assert userdata['on_message'] == 1
assert userdata['callback1'] == 1
assert userdata['callback2'] == 2
</code></pre>
|
<python><mqtt><paho>
|
2023-06-07 10:49:41
| 0
| 18,481
|
vico
|
76,422,161
| 1,505,752
|
How to search a complex predefined regex pattern in a column using PySpark?
|
<p>I have two dataframes, dataframe1, and dataframe2, and I want to search for a complex predefined regex pattern from dataframe1 in column1 of dataframe2.</p>
<p>Dataframe1 with a complex string in the regex_pattern column (here I just use a simple regex as a placeholder):</p>
<pre><code>dataframe1 = spark.createDataFrame([
('rlike(test[1-9])',)
], ['regex_pattern'])
</code></pre>
<p>Dataframe2 with the column to search:</p>
<pre><code>
dataframe2 = spark.createDataFrame([
('text with test1',),
('text with test2',),
('text with test3',)
], ['column1'])
for row in dataframe1.collect():
regex_pattern = row.regex_pattern
filtered_df = dataframe2.filter(col('column1').rlike(regex_pattern))
print(f"Results for regex pattern: {regex_pattern}")
filtered_df.show()
print("----------------------------------")
</code></pre>
<p>I am not getting anything as a result. Is there any way to do this in both pyspark and SQL?</p>
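<p>For what it's worth, a guess based on the sample row: the stored value <code>rlike(test[1-9])</code> is not itself a usable regex, it wraps the real pattern in a literal <code>rlike(...)</code>. A pyspark-free sketch of unwrapping it (the helper name is made up) before handing the inner pattern to <code>col('column1').rlike(...)</code>:</p>

```python
import re

def unwrap_rlike(stored):
    """Strip a literal 'rlike(...)' wrapper from a stored pattern, if present."""
    m = re.fullmatch(r"rlike\((.*)\)", stored.strip())
    return m.group(1) if m else stored

# The inner pattern is what should be passed to col('column1').rlike(...)
pattern = unwrap_rlike("rlike(test[1-9])")
```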
|
<python><pyspark><apache-spark-sql>
|
2023-06-07 10:08:38
| 1
| 929
|
VSe
|
76,422,042
| 4,920,221
|
spin up docker-compose using python subprocess fails
|
<p>I have a <code>docker-compose.yml</code> file, all I want to do is spin it up using a python script.
The command runs perfectly fine in the terminal, but from the python script it fails with an error:</p>
<pre class="lang-bash prettyprint-override"><code>{FileNotFoundError}[Errno 2] No such file or directory: 'docker-compose'
</code></pre>
<p>this is my script</p>
<pre class="lang-py prettyprint-override"><code>compose_path = "path/to/compose"
cmd = ["docker-compose", "-f", "docker-compose.yml", "up", "-d"]
subprocess.call(cmd, cwd=compose_path)
</code></pre>
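<p>That <code>FileNotFoundError</code> usually means the <code>docker-compose</code> binary simply isn't on the <code>PATH</code> seen by the Python process (note that <code>cwd=</code> only changes the working directory, not the executable lookup). A sketch that locates the binary explicitly and falls back to the newer <code>docker compose</code> plugin form:</p>

```python
import shutil

# Locate the compose binary on this process's PATH; older installs ship a
# standalone `docker-compose`, newer ones expose it as the `docker compose` plugin.
exe = shutil.which("docker-compose")
if exe is not None:
    cmd = [exe, "-f", "docker-compose.yml", "up", "-d"]
elif shutil.which("docker") is not None:
    cmd = [shutil.which("docker"), "compose", "-f", "docker-compose.yml", "up", "-d"]
else:
    cmd = None  # neither binary is visible to this Python process
```

<p>With <code>cmd</code> built this way, <code>subprocess.call(cmd, cwd=compose_path)</code> should no longer raise <code>FileNotFoundError</code> when the binary exists anywhere on the PATH.</p>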
|
<python><docker><docker-compose>
|
2023-06-07 09:55:22
| 0
| 324
|
Kallie
|
76,421,979
| 3,390,810
|
python regular expression to extract the last bracket
|
<p>My input is <code>(0,0)-(1.5,1.5)-(3.0,4.5)-(4.5,6.0)-(6.0,7.5)-(9.0,10.5)-(12.57,100.356)</code>.
I want a regular expression to extract the two floats in the last bracket: <code>12.57, 100.356</code>. I tried</p>
<pre><code>str_path_finder = re.compile(r'.*\-(d+\.d+,d+\.d+)')
rst = str_path_finder.search("(0,0)-(1.5,1.5)-(3.0,4.5)-(4.5,6.0)-(6.0,7.5)-(9.0,10.5)-(12.57,100.356)")
</code></pre>
<p>but rst is None.</p>
<p>Could anyone help?</p>
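<p>For reference, a corrected pattern would escape the digit class as <code>\d</code> (the posted pattern uses a bare <code>d</code>) and anchor the match to the end of the string:</p>

```python
import re

s = "(0,0)-(1.5,1.5)-(3.0,4.5)-(4.5,6.0)-(6.0,7.5)-(9.0,10.5)-(12.57,100.356)"
# \d (not d) matches digits; the $ anchor pins the match to the final bracket
m = re.search(r"\((\d+\.\d+),(\d+\.\d+)\)$", s)
last_pair = (float(m.group(1)), float(m.group(2)))
```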
|
<python><regex>
|
2023-06-07 09:47:28
| 5
| 761
|
sunxd
|
76,421,966
| 236,195
|
Inherit from UserDict *and* dict?
|
<p>I have a custom dict-like class which inherits from <code>UserDict</code>. It works perfectly, except that it is not an actual <code>dict</code>, i.e. <code>isinstance(my_userdict, dict)</code> returns <code>False</code>. This brings some problems with 3rd party code that makes the check (even <code>pprint</code> from stdlib behaves differently).</p>
<p>The obvious solution is to add <code>dict</code> to base classes.</p>
<pre class="lang-py prettyprint-override"><code>class MyFancyDict(UserDict, dict):
...
</code></pre>
<p>Am I not seeing some pitfall with this? Why doesn't <code>UserDict</code> already inherit from <code>dict</code>?</p>
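<p>A quick sanity check of the mixed bases (this does not settle whether there is a deeper pitfall, but it shows the combination at least constructs and passes the <code>isinstance</code> check). One subtle trap is visible below: <code>UserDict</code> keeps items in <code>self.data</code>, so the underlying <code>dict</code> storage stays empty, and C-level code that bypasses <code>__getitem__</code>/<code>__len__</code> will see an empty dict:</p>

```python
from collections import UserDict

class MyFancyDict(UserDict, dict):
    pass

d = MyFancyDict(a=1)
d["b"] = 2
ok = isinstance(d, dict)      # True: dict is now a real base class
shadow = dict.__len__(d)      # 0: items live in d.data, not in the dict part
```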
|
<python><dictionary><inheritance>
|
2023-06-07 09:45:27
| 1
| 13,011
|
frnhr
|
76,421,934
| 11,611,246
|
Adding GPS location to EXIF using Python (slots not recognised by Windows 10 + nasty workaround required)
|
<p>I searched for some Python package that is able to read and edit EXIF data. Finally, I got the package <code>exif</code> to work on Windows and Ubuntu (since I use the same scripts in both OS).</p>
<p>I wrote the following function to add longitude and latitude to .jpg images:</p>
<pre><code>import exif
import numpy as np
def dec_to_dms(dec):
'''
Convert decimal degrees to degrees-minutes-seconds
Parameters
----------
dec : float
Input coordinate in decimal degrees.
Returns
-------
list
Coordinate in degrees-minutes-seconds.
'''
degree = np.floor(dec)
minutes = dec % 1.0 * 60
seconds = minutes % 1.0 * 60
minutes = np.floor(minutes)
return [degree, minutes, seconds]
def add_coords(path, coordinates, replace = False):
'''
Add coordinates to the Exif data of a .jpg file.
Parameters
----------
path : str
Full path to the image file.
coordinates : list or tuple of float
Latitude and longitude that shall be added to the image file.
replace : bool
Replace existing coordinates if the image already contains values for
the respective Exif tags.
Returns
-------
None.
'''
with open(path, "rb") as f:
img = exif.Image(f)
lat = None
if img.has_exif:
if "gps_latitude" in img.list_all():
lat = img.gps_latitude
lon = img.gps_longitude
### While theoretically valid coordinates, (0.0, 0.0, 0.0) will be
### replaced since apparently, for some images (0.0, 0.0, 0.0) is
### set when no coordinates were specified.
if lat == (0.0, 0.0, 0.0) or lon == (0.0, 0.0, 0.0):
lat = lon = None
if lat is None or replace:
lat = tuple(dec_to_dms(coordinates[0]))
lon = tuple(dec_to_dms(coordinates[1]))
try:
img.gps_latitude = lat
img.gps_longitude = lon
except:
###---------------------------------------------------------------|
### This is a quick and dirty workaround for current shortcomings
### of the exif package
from PIL import Image
EXPLIMG = "D:/switchdrive/PlantApp/img/Species/EXAMPLE_ANDROID.jpg"
example_image = Image.open(EXPLIMG)
example_exif = example_image.getexif()
example_image.close()
print(path)
with Image.open(path) as current_image:
current_image.save(path, exif = example_exif)
with open(path, "rb") as f:
img = exif.Image(f)
img.gps_latitude = lat
img.gps_longitude = lon
###---------------------------------------------------------------|
with open(path, "wb") as f:
f.write(img.get_file())
return
</code></pre>
<p>Unfortunately, the <code>exif</code> package cannot add coordinates to image metadata which do not already contain the respective slots. This is why I used some template metadata from another image and overwrite the image in case my image does not contain lon and lat.</p>
<p>Now, when I apply the function, it appears the coordinates are not recognised by every program/OS(?)</p>
<p>E.g., when I drag and drop the image on websites such as <a href="https://www.pic2map.com" rel="nofollow noreferrer">pic2map.com</a>, it appears the images are placed correctly. However, when I view the image details on Windows 10 Enterprise, I get</p>
<pre><code>GPS-------------------------
Altitude 0
</code></pre>
<p>with no additional longitude and latitude or similar information.</p>
<p>The template image, for example, has lon and lat that are displayed in the Windows image properties as</p>
<pre><code>GPS-------------------------
Latitude 46; 55; 41.577500000000013856
Longitude 6; 44; 40.51680000000001325
</code></pre>
<p>and I would expect the result to look similar for the images where I "manually" added coordinates.</p>
<p>Also, the workaround using some template image is a terrible solution, imo. Is there some way to add coordinates without using a template image and preferably in a way that makes even Windows 10 recognise the GPS coordinates are present in the metadata?</p>
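<p>One assumption worth testing (I have not verified it against the <code>exif</code> package docs): Windows typically only shows GPS data when the hemisphere <em>reference</em> tags (<code>GPSLatitudeRef</code>/<code>GPSLongitudeRef</code>, i.e. 'N'/'S' and 'E'/'W') are present alongside the coordinates. Also, <code>np.floor</code> on negative decimals yields wrong DMS values. A dependency-free sketch that returns both pieces:</p>

```python
def dec_to_dms_with_ref(dec, is_latitude):
    """Convert a signed decimal coordinate to (deg, min, sec) plus its
    hemisphere reference ('N'/'S' for latitude, 'E'/'W' for longitude)."""
    if is_latitude:
        ref = "N" if dec >= 0 else "S"
    else:
        ref = "E" if dec >= 0 else "W"
    dec = abs(dec)                        # DMS values themselves are positive
    degrees = int(dec)
    minutes_full = (dec - degrees) * 60
    minutes = int(minutes_full)
    seconds = (minutes_full - minutes) * 60
    return (degrees, minutes, seconds), ref
```

<p>With the <code>exif</code> package the reference would presumably go into <code>img.gps_latitude_ref</code> / <code>img.gps_longitude_ref</code>; treat those attribute names as an assumption to check against the package docs.</p>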
|
<python><geolocation><metadata><jpeg><exif>
|
2023-06-07 09:42:26
| 1
| 1,215
|
Manuel Popp
|
76,421,933
| 2,605,073
|
Create instances of identical class but with different superclasses?
|
<p>I am new to OO-progamming with Python and most likely the answer to my question is out there. But I do not know what to search for. So please be forbearing and I hope my MWE is clear.</p>
<p>I have a class <code>TA</code> which I want to use for two different use-cases:</p>
<ol>
<li>within a standalone script</li>
<li>within a GUI</li>
</ol>
<p>The differences between the two use cases are minor and can be handled with conditions inside the class initialization and its methods.
However, for the functionality of the class it is important that in</p>
<ol>
<li>the class is defined as <code>TA(object)</code></li>
<li>the class is defined as <code>TA(QObject)</code></li>
</ol>
<p>I read a lot about superclasses and <code>__new__</code> vs. <code>__init__</code> but I could not find a working, yet simple solution.</p>
<h2>Background</h2>
<p>Why do I want to do this? In case I use the class within the GUI all the stdout is redirected to a QTextEdit widget, in case it is called from the script, the stdout goes to the terminal. The GUI case only works with QObject and I am glad that I got that working. However, I run into problems when using the class without the GUI but defined as QObject.</p>
<h2>Example</h2>
<p>Baseclass <code>TA.py</code>:</p>
<pre><code>from PyQt5.QtCore import QObject
class TA(object): # Option 1
# class TA(QObject): # Option 2
def __init__(self, fruits):
if isinstance(self,QObject):
super().__init__() # required in case of QObject
self.init(fruits)
def init(self, fruits):
print('Give me some ' + fruits)
</code></pre>
<p>Calling Script <code>TAscript.py</code>:</p>
<pre><code>from TA import *
TA('oranges') # <<<< should be created as object
</code></pre>
<p>Calling GUI <code>TAgui.py</code>:</p>
<pre><code>import sys
from PyQt5.QtWidgets import (QApplication,QMainWindow)
from TA import *
# GUI class
class TAgui(QMainWindow):
def __init__(self):
super().__init__()
self.createInstanceOfTA()
def createInstanceOfTA(self):
TA('apples') # <<<< should be created as QObject
# create GUI etc.
def main():
qapp = QApplication(sys.argv)
TAgui()
if __name__ == '__main__':
main()
</code></pre>
<p>Can you guide me on how to achieve what I want without two basically identical classes?</p>
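<p>One common pattern for this (a sketch only, not the only option) is a small factory that builds the class over whichever base the caller needs, so the body is written once:</p>

```python
def make_ta(base=object):
    """Build the TA class over a caller-chosen base: `object` for the
    standalone script, QObject for the GUI (QObject is assumed, not
    imported here)."""
    class TA(base):
        def __init__(self, fruits):
            super().__init__()     # cooperates with QObject when that is the base
            self.init(fruits)

        def init(self, fruits):
            self.greeting = 'Give me some ' + fruits  # print() in the original

    return TA

ScriptTA = make_ta()               # plain-object variant for the script
ta = ScriptTA('oranges')
# In the GUI module one would call make_ta(QObject) instead.
```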
|
<python><class><inheritance><superclass>
|
2023-06-07 09:42:13
| 1
| 25,302
|
Robert Seifert
|
76,421,787
| 2,749,397
|
Unexpected result while using AxesGrid
|
<p>This code</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import AxesGrid
mat0 = [[1, 2], [3, 4], [5, 6], [7, 8]] # 4 rows × 2 columns
mat1 = [[-2, 0, 2, 4], [0, 2, 4, 6]] # 2 rows × 4 columns
fig = plt.figure(figsize=(9, 3))
grid = AxesGrid(fig, 111, nrows_ncols=(1,2),
axes_pad=0.15,
cbar_size="6%", cbar_location="right", cbar_mode="single")
for ax, mat in zip(grid.axes_all, (mat0, mat1)): im = ax.imshow(mat)
grid.cbar_axes[0].colorbar(im)
plt.figure()
plt.imshow(mat0)
plt.colorbar()
plt.show()
</code></pre>
<p>produces two Figures</p>
<p><a href="https://i.sstatic.net/BnLJ2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BnLJ2.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/aZIqE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aZIqE.png" alt="enter image description here" /></a></p>
<p>I expected to see, in the first one, a tall rectangle in the left, as in the second Figure.</p>
<p>Of course I'm not understanding what is really happening with AxesGrid.</p>
<p>How can I have the two Images side by side, without the tall one being truncated?</p>
<hr />
<p>Is an image worth 1000 words?</p>
<p><a href="https://i.sstatic.net/uexsv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uexsv.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><multiple-axes>
|
2023-06-07 09:24:58
| 1
| 25,436
|
gboffi
|
76,421,767
| 5,306,861
|
Automatic separation between consonants and vowels in speech recording
|
<p>Given an audio file of speech, (for example the file you can download from <a href="https://github.com/jameslyons/python_speech_features/blob/master/english.wav" rel="nofollow noreferrer">here</a>), that looks like this:</p>
<p><a href="https://i.sstatic.net/N60nd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N60nd.png" alt="this" /></a></p>
<p>If we examine it we will find that the <strong>vowels</strong> are the areas with the <strong>largest amplitude</strong>, and the <strong>consonants</strong> are the areas with the <strong>small amplitude</strong>.</p>
<p>How can you to <strong>split</strong> the consonants and vowels, as shown in the following image:</p>
<p><a href="https://i.sstatic.net/VRz2P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VRz2P.png" alt="enter image description here" /></a></p>
<p>The <code>red</code> line is a <code>threshold</code>, what is below it is considered a small amplitude and belongs to <strong>consonants</strong>, and what is above it is considered a <strong>vowel</strong> because it has a large amplitude.</p>
<p>The <code>green</code> lines indicate where the large amplitude ends and the small one begins or vice versa.</p>
<p>Can you find all the places of the green lines?</p>
<pre><code># In[]
import numpy as np
import librosa
import matplotlib.pyplot as plt
y, sr = librosa.load('english.wav', mono=True)
threshold = 0.2
# how to split ?
for s in y:
...
</code></pre>
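<p>A numpy sketch of finding the green lines: they are the indices where the boolean mask <code>abs(y) > threshold</code> flips, which <code>np.diff</code> exposes without a Python loop. (In practice you would threshold a smoothed amplitude envelope, e.g. a moving average of <code>abs(y)</code>, rather than raw samples; that smoothing is omitted here.)</p>

```python
import numpy as np

# Toy signal standing in for the loaded waveform `y`
y = np.array([0.05, 0.1, 0.5, 0.6, 0.1, 0.7, 0.05])
threshold = 0.2

above = np.abs(y) > threshold                 # True on "vowel-like" samples
# Index i marks a flip between samples i and i+1 -> the green lines
crossings = np.flatnonzero(np.diff(above.astype(np.int8)))
# Alternating low/high runs of sample indices
segments = np.split(np.arange(y.size), crossings + 1)
```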
|
<python><signal-processing><speech-recognition><speech-to-text><audio-processing>
|
2023-06-07 09:23:20
| 1
| 1,839
|
codeDom
|
76,421,734
| 507,242
|
ValueError: could not broadcast input array from shape (1536,) into shape (2000,)
|
<p>I'm trying to create a Qdrant vectorsore and add my documents.</p>
<ul>
<li>My embeddings are based on <code>OpenAIEmbeddings</code></li>
<li>the <code>QdrantClient</code> is local for my case</li>
<li>the collection that I'm creating has the
VectorParams as such: <code>VectorParams(size=2000, distance=Distance.EUCLID)</code></li>
</ul>
<p>I'm getting the following error:
<code>ValueError: could not broadcast input array from shape (1536,) into shape (2000,)</code></p>
<p>I understand that my error is how I configure the vectorParams, but I don't understand how these values need to be calculated.</p>
<p>here's my complete code:</p>
<pre class="lang-py prettyprint-override"><code>import os
from typing import List
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant, VectorStore
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams
def load_documents(documents: List[Document]) -> VectorStore:
"""Create a vectorstore from documents."""
collection_name = "my_collection"
vectorstore_path = "data/vectorstore/qdrant"
embeddings = OpenAIEmbeddings(
model="text-embedding-ada-002",
openai_api_key=os.getenv("OPENAI_API_KEY"),
)
qdrantClient = QdrantClient(path=vectorstore_path, prefer_grpc=True)
qdrantClient.create_collection(
collection_name=collection_name,
vectors_config=VectorParams(size=2000, distance=Distance.EUCLID),
)
vectorstore = Qdrant(
client=qdrantClient,
collection_name=collection_name,
embeddings=embeddings,
)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200,
)
sub_docs = text_splitter.split_documents(documents)
vectorstore.add_documents(sub_docs)
return vectorstore
</code></pre>
<p>Any ideas on how I should configure the vector params properly?</p>
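<p><code>text-embedding-ada-002</code> produces 1536-dimensional vectors (the first shape in the error), so the collection's <code>size</code> is not something you calculate: it must equal the embedding model's output dimension, i.e. <code>VectorParams(size=1536, ...)</code>. A numpy-only reproduction of the same shape mismatch:</p>

```python
import numpy as np

embedding = np.random.rand(1536)   # the dimensionality ada-002 actually returns
slot = np.empty(2000)              # what VectorParams(size=2000) reserves

message = ""
try:
    slot[:] = embedding            # analogous to the mismatch in the traceback
except ValueError as err:
    message = str(err)

slot_ok = np.empty(1536)
slot_ok[:] = embedding             # sizes agree, so no error
```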
|
<python><langchain><qdrant><openaiembeddings><qdrantclient>
|
2023-06-07 09:19:34
| 1
| 1,837
|
Evan P
|
76,421,664
| 4,772,565
|
Automatically merging multiple Pydantic models with overlapping fields
|
<p>It is kind of difficult to accurately phrase my question in one sentence.</p>
<p>I have the following models:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class Detail1(BaseModel):
round: bool
volume: float
class AppleData1(BaseModel):
origin: str
detail: Detail1
class Detail2(BaseModel):
round: bool
weight: float
class AppleData2(BaseModel):
origin: str
detail: Detail2
</code></pre>
<p>Here <code>AppleData1</code> has an attribute <code>detail</code> which is of the type <code>Detail1</code>. <code>AppleData2</code> has an attribute <code>detail</code> which is of the type <code>Detail2</code>. I want to make an <code>Apple</code> class which contains all the attributes of <code>AppleData1</code> and <code>AppleData2</code>.</p>
<h2>Question (How to implement the algorithm?)</h2>
<p>Do you have a generic approach to implement this algorithm:</p>
<ul>
<li><p>Whenever <code>AppleData1</code> and <code>AppleData2</code> have an attribute of the same name:</p>
<ul>
<li><p>If they are of the same type, use one of them. For example, <code>AppleData1.origin</code> and <code>AppleData2.origin</code> are both of the type <code>str</code>. So <code>Apple.origin</code> is also of type <code>str</code>.</p>
</li>
<li><p>If they are of different types, merge them. For example, <code>AppleData1.detail</code> and <code>AppleData2.detail</code>, they are of type <code>Detail1</code> and <code>Detail2</code> respectively. So <code>Apple.detail</code> should contain all the inner attributes.</p>
</li>
</ul>
</li>
<li><p>Any common inner attribute is always for the same physical quantity. So overwriting is allowed. For example, <code>Detail1.round</code> and <code>Detail2.round</code> are both of type <code>bool</code>. So the resulting <code>Apple.detail.round</code> is also of type <code>bool</code>.</p>
</li>
</ul>
<h2>Expect Results</h2>
<p>The end results should be equivalent to the <code>Apple</code> model below. (The definition of <code>Detail</code> class below is only used to make the code below complete. The generic approach should not hard-code the <code>Detail</code> class.)</p>
<pre class="lang-py prettyprint-override"><code>class Detail(BaseModel):
round: bool
volume: float
weight: float
class Apple(BaseModel):
origin: str
detail: Detail
</code></pre>
<h2>My Solution (bad example)</h2>
<pre class="lang-py prettyprint-override"><code>class Detail(Detail1, Detail2):
pass
class Apple(AppleData1, AppleData2):
origin: str
detail: Detail
print(Apple.schema_json())
</code></pre>
<p>This solution works but it is too specific.</p>
<ol>
<li><p>Here I need to pin-point that <code>detail</code> attribute from <code>AppleData1</code> and <code>AppleData2</code>, and specifically create the <code>Detail</code> class from specifically <code>Detail1</code> and <code>Detail2</code>.</p>
</li>
<li><p>I need to pin-point that <code>origin</code> is a common attribute of the same type (<code>str</code>). So I specifically hard-coded <code>origin: str</code> in the definition of the <code>Apple</code> class.</p>
</li>
</ol>
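<p>A more generic direction can be sketched over plain <code>__annotations__</code> alone (kept pydantic-free so it is self-contained; with pydantic one would feed the merged annotation dict into <code>create_model</code>, which is not wired up here):</p>

```python
def merge_annotations(cls_a, cls_b):
    """Recursively merge two classes' field annotations (sketch).

    Same name + same type: keep it.  Same name + different class types:
    build a new bare class whose annotations are the recursive merge."""
    merged = dict(cls_a.__annotations__)
    for name, typ in cls_b.__annotations__.items():
        prev = merged.get(name)
        if prev is None or prev is typ:
            merged[name] = typ
        else:
            merged[name] = type(
                typ.__name__.rstrip("0123456789"),  # Detail1/Detail2 -> "Detail"
                (),
                {"__annotations__": merge_annotations(prev, typ)},
            )
    return merged

class Detail1:
    round: bool
    volume: float

class Detail2:
    round: bool
    weight: float

class AppleData1:
    origin: str
    detail: Detail1

class AppleData2:
    origin: str
    detail: Detail2

apple_fields = merge_annotations(AppleData1, AppleData2)
```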
|
<python><python-3.x><pydantic>
|
2023-06-07 09:10:57
| 2
| 539
|
aura
|
76,421,622
| 5,618,251
|
Draw contour around binary mask image in Python
|
<p>I have a binary mask of the Antarctic ice sheet with pixels 0 = no ice sheet and 1 = ice sheet. How can I create a contour around the ice sheet pixels so that it creates an edge around the mask using Python?</p>
<p>Thanks</p>
<pre><code>imgplot = plt.imshow(landmask_ais)
print(landmask_ais.shape)
(180, 216)
</code></pre>
<p><a href="https://i.sstatic.net/TYCzN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TYCzN.png" alt="enter image description here" /></a></p>
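<p>One dependency-free way to get the edge (alternatives such as <code>skimage.measure.find_contours</code> or OpenCV's <code>findContours</code> are not shown here): the edge is every mask pixel equal to 1 that has at least one 0 among its four neighbours.</p>

```python
import numpy as np

def mask_edge(mask):
    """Boolean edge of a 0/1 mask: pixels that are 1 but touch a 0 (4-connectivity)."""
    m = np.pad(mask.astype(bool), 1)            # pad so borders count as "outside"
    interior = (m[:-2, 1:-1] & m[2:, 1:-1] &    # up and down neighbours
                m[1:-1, :-2] & m[1:-1, 2:])     # left and right neighbours
    return mask.astype(bool) & ~interior

demo = np.zeros((5, 5), dtype=int)
demo[1:4, 1:4] = 1                              # 3x3 "ice sheet" block
edge = mask_edge(demo)
```

<p>For display only, <code>plt.contour(landmask_ais, levels=[0.5], colors='k')</code> drawn over the <code>imshow</code> should also trace the same boundary.</p>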
|
<python><contour><imshow>
|
2023-06-07 09:05:27
| 2
| 361
|
user5618251
|
76,421,589
| 221,270
|
Google drive display imag urls in web app
|
<p>I have stored a bunch of images on google drive via the Desktop app under G:\google_drive\sdr\pos.</p>
<p>The images names are stored in a database (image1.png, image2.png, image3.png).</p>
<p>How can I use a url and display the images in my streamlit app? The google drive url does not contain the image file name:
<a href="https://drive.google.com/file/d/1AWhXeAGB8qBWe9kItc5zW9eLytIguOQF/view" rel="nofollow noreferrer">https://drive.google.com/file/d/1AWhXeAGB8qBWe9kItc5zW9eLytIguOQF/view</a></p>
<p>How to link the url automatically to the image files?</p>
|
<python><google-drive-api><streamlit>
|
2023-06-07 09:01:03
| 1
| 2,520
|
honeymoon
|
76,421,554
| 238,086
|
Can we achieve response streaming using an AWS ALB or NLB
|
<p>We are building a flask application wherein for a specific request we want to be able to stream the response to the client.
Something like this</p>
<pre><code>@app.route("/time/")
def time():
def streamer():
while True:
yield "<p>{}</p>".format(datetime.now())
sleep(1)
return Response(streamer())
</code></pre>
<p>This does not work when we use an AWS ALB as the load balancer - the client is unable to read from the stream. Is this a limitation on the AWS ALB side? Should I consider using AWS NLB instead?</p>
|
<python><amazon-web-services><streaming><load-balancing><responsestream>
|
2023-06-07 08:57:54
| 0
| 11,014
|
Raam
|
76,421,544
| 4,250,417
|
What does it mean to "trigger" an event in SimPy?
|
<p>I am a newbie to SimPy 4.0.2 and discrete-event simulation. I have got quite confused about what it really means to "trigger" an event.</p>
<p>According to the official "Docs » SimPy in 10 Minutes » Basic Concepts", it states that:</p>
<blockquote>
<p>When a process yields an event, the process gets suspended. SimPy resumes the process, when the event occurs (we say that the event is triggered).</p>
</blockquote>
<p>This seems to me it means that an event, when triggered, would be popped from the event queue for processing.</p>
<p>Also according to "Docs » Topical Guides » Events", however, it states that:</p>
<blockquote>
<p>If an event gets triggered, it is scheduled at a given time and inserted into SimPy’s event queue.</p>
</blockquote>
<p>So this means that an event, when triggered, would be inserted into the event queue?</p>
<p>I am wondering what would really happen regarding the operation on the event queue when an event is triggered? This question is actually also related to other definitions, e.g., "processed", <code>yield</code>. Considering the diagram below (if it is not too off), should "triggered" correspond to point A or B? Thank you very much!</p>
<pre><code> inserted? popped?
triggered? triggered? processed?
| |<--event-->|
V V V
--A--------------B-----------C--> (time)
^ ^
| |
yield event yield event
(suspended?) (resumed?)
</code></pre>
|
<python><simpy>
|
2023-06-07 08:56:51
| 1
| 372
|
vincentvangaogh
|
76,421,466
| 4,710,409
|
Django Chatterbot-how to add "default-response" to settings .py?
|
<p>In my "django" application in settings.py I have:</p>
<pre><code>CHATTERBOT = {
'name': 'bot1',
'storage_adapter': "chatterbot.storage.SQLStorageAdapter",
'logic_adapters': [
'chatterbot.logic.BestMatch',
]
}
</code></pre>
<p>How do I add the auto default response parameter?
I tried countless ways but it doesn't work.</p>
|
<python><django><chatterbot>
|
2023-06-07 08:45:54
| 1
| 575
|
Mohammed Baashar
|
76,421,432
| 1,610,626
|
Excel To python OR Excel to Database
|
<p>I have a spreadsheet with live data being updated continuously. Is there a way to read that data into python without actually having to save the spreadsheet every time? <code>openpyxl</code> allows me to import the workbook, but every time the spreadsheet gets updated with new data, unless I save it, I can't just reload the spreadsheet via <code>load_workbook()</code>. If I do, it just loads the version of the spreadsheet that was last saved.</p>
<p>The goal is to take snapshots of the data from the spreadsheet and save it to the database continuously every 1 minute etc.</p>
<p>Any thoughts?</p>
|
<python><excel>
|
2023-06-07 08:42:38
| 0
| 23,747
|
user1234440
|
76,421,340
| 2,753,501
|
Suppress warning message (set environment variable) in Foundry Repositories debugging mode
|
<p>Trying to debug repository code, I have set the breakpoint and run the transformation. Then, in the debugging console I get this warning:</p>
<pre class="lang-py prettyprint-override"><code>df.show(1)
</code></pre>
<blockquote>
<pre class="lang-none prettyprint-override"><code> Evaluating: df.show(1) did not finish after 3.00 seconds.
This may mean a number of things:
- This evaluation is really slow and this is expected.
In this case it's possible to silence this error by raising the timeout, setting the
PYDEVD_WARN_EVALUATION_TIMEOUT environment variable to a bigger value.
- The evaluation may need other threads running while it's running:
In this case, it's possible to set the PYDEVD_UNBLOCK_THREADS_TIMEOUT
environment variable so that if after a given timeout an evaluation doesn't finish,
other threads are unblocked or you can manually resume all threads.
Alternatively, it's also possible to skip breaking on a particular thread by setting a
`pydev_do_not_trace = True` attribute in the related threading.Thread instance
(if some thread should always be running and no breakpoints are expected to be hit in it).
- The evaluation is deadlocked:
In this case you may set the PYDEVD_THREAD_DUMP_ON_WARN_EVALUATION_TIMEOUT
environment variable to true so that a thread dump is shown along with this message and
optionally, set the PYDEVD_INTERRUPT_THREAD_TIMEOUT to some value so that the debugger
tries to interrupt the evaluation (if possible) when this happens.
</code></pre>
</blockquote>
<p>In my case, it's the 1st option, as I get the result after a moment of waiting. So, I want to silence the warning. I have unsuccessfully tried:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.environ["PYDEVD_WARN_EVALUATION_TIMEOUT"] = '90000000000000'
</code></pre>
<p>How to suppress the warning message?</p>
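<p>One likely reason the in-process attempt fails (an assumption, not Foundry-specific knowledge): pydevd reads these variables when the debugger process starts, so setting <code>os.environ</code> from inside the already-running session comes too late. The variable has to be present in the environment the process is launched with, as this self-contained sketch of environment inheritance shows:</p>

```python
import os
import subprocess
import sys

# The variable must exist in the child's environment *before* the process
# (and hence pydevd) starts; exporting it afterwards has no effect.
env = dict(os.environ, PYDEVD_WARN_EVALUATION_TIMEOUT="30")
out = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['PYDEVD_WARN_EVALUATION_TIMEOUT'])"],
    env=env, capture_output=True, text=True,
)
inherited = out.stdout.strip()  # the child sees "30" from birth
```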
|
<python><environment-variables><warnings><palantir-foundry><foundry-code-repositories>
|
2023-06-07 08:31:31
| 1
| 24,793
|
ZygD
|
76,421,260
| 3,003,072
|
Fast counting matches between large number of integer arrays
|
<p>I'm wondering whether there are any efficient algorithms to count the number of matched integers between a large number of integer arrays. The <a href="https://cython.org/" rel="nofollow noreferrer">Cython</a> code is as follows.</p>
<p><code>match_ints.pyx</code></p>
<pre><code>cimport cython
from libc.stdlib cimport calloc, free
import numpy as np
cimport numpy as np
np.import_array()
@cython.wraparound(False)
@cython.boundscheck(False)
@cython.initializedcheck(False)
cdef void count_matches(int[:, ::1] target_arrays, int[::1] ref_array, int[::1] num_matches):
cdef:
Py_ssize_t i, j
Py_ssize_t n = target_arrays.shape[0]
Py_ssize_t c = target_arrays.shape[1]
Py_ssize_t nf = ref_array.shape[0]
Py_ssize_t m = ref_array[nf - 1] + 5
int * ind = <int *> calloc(m, sizeof(int))
int k, g
for i in range(nf):
ind[ref_array[i]] = 1
for i in range(n):
k = 0
for j in range(c):
g = target_arrays[i, j]
if g < m and ind[g] == 1:
k += 1
num_matches[i] = k
free(ind)
cpdef count_num_matches(int[:, ::1] target_arrays, int[::1] ref_array):
cdef:
Py_ssize_t n = target_arrays.shape[0]
int[::1] num_matches = np.zeros(n, dtype=np.int32)
count_matches(target_arrays, ref_array, num_matches)
return np.asarray(num_matches)
</code></pre>
<p>The idea here is quite simple. The reference integer array to be matched is sorted in ascending order (by the <code>sort</code> method). An indicator array <code>ind</code> is created with length equal to the max integer of the reference array (<code>+5</code> to avoid indexing out of range), taking advantage of the fact that the integers in the array are not large. Each integer is treated as an index, and the corresponding position in <code>ind</code> is set to 1. Then every <code>target_array</code> is iterated through to count the number of integers matched in the reference array.</p>
<p>During the matches, all integers in <code>target_arrays</code> are considered as indexes and matched if the indexes in <code>ind</code> are <code>1</code>.</p>
<p>Test method is set in <code>test_main_counts.py</code>.</p>
<pre><code># test_main_counts.py
from match_ints import count_num_matches
import numpy as np
def count_num_matches_main():
x = np.random.randint(50, 6000, size=(1000000, 40), dtype=np.int32)
ref_x = np.random.randint(100, 2500, size=800, dtype=np.int32)
ref_x.sort()
return count_num_matches(x, ref_x)
if __name__ == "__main__":
nums = count_num_matches_main()
print(nums[:10])
</code></pre>
<p>The <code>setup</code> file.</p>
<pre><code>from setuptools import setup
from Cython.Build import cythonize
import numpy as np
setup(
ext_modules=cythonize(
"match_ints.pyx",
compiler_directives={
"language_level": "3",
}
),
include_dirs=[
np.get_include()
]
)
</code></pre>
<p>Because the integers are not large and there are many duplicates (in my real applications, millions of arrays contain only a few thousand unique integers), do any relevant algorithms exist to improve this kind of problem, e.g., by taking advantage of the much smaller number of unique integers?</p>
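<p>As a baseline to time the Cython kernel against, the whole count can be expressed with numpy alone; the lookup-table variant below mirrors the <code>ind</code> array and benefits directly from the values being small. (Mapping everything through <code>np.unique</code> codes first is a further option when the value range is large but the number of distinct values is small.)</p>

```python
import numpy as np

x = np.array([[1, 2, 3, 7],
              [4, 5, 6, 4]], dtype=np.int32)
ref = np.array([2, 4, 6], dtype=np.int32)

# Vectorized baseline: per-element membership test, then a row sum
counts = np.isin(x, ref).sum(axis=1)

# Lookup-table variant mirroring the Cython `ind` array, for small ints
lookup = np.zeros(x.max() + 1, dtype=bool)
lookup[ref[ref <= x.max()]] = True      # ignore ref values that cannot occur
counts_lut = lookup[x].sum(axis=1)
```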
|
<python><algorithm><numpy><cython>
|
2023-06-07 08:20:22
| 2
| 616
|
Elkan
|
76,420,997
| 1,942,868
|
Using serializer with foreign key model when creating new entry
|
<p>At first, I had this serializer and it works well for <code>GET</code> and <code>POST</code>(create new entry)</p>
<pre><code>class DrawingSerializer(ModelSerializer):
drawing = serializers.FileField()
detail = serializers.JSONField()
class Meta:
model = m.Drawing
fields = ('id','detail','drawing','user','created_at','updated_at')
</code></pre>
<p>in viewset where creating entry.</p>
<pre><code>class DrawingViewSet(viewsets.ModelViewSet):
queryset = m.Drawing.objects.all()
serializer_class = s.DrawingSerializer
def create(self, request, *args, **kwargs):
request.data['user'] = request.user.id #set userid
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
self.perform_create(serializer) # it makes the new entry with user
return Response(serializer.data)
</code></pre>
<p>and models.py</p>
<pre><code>class CustomUser(AbstractUser):
detail = models.JSONField(default=dict,null=True, blank=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Drawing(models.Model):
drawing = f.FileField(upload_to='uploads/')
detail = models.JSONField(default=dict)
user = models.ForeignKey(CustomUser,on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
</code></pre>
<p>In this case, <code>user</code> is a foreign key model.</p>
<p>So I want to get the user's name,email and so on, then I changed <code>Serializer</code> like this.</p>
<pre><code>class DrawingSerializer(ModelSerializer):
drawing = serializers.FileField()
detail = serializers.JSONField()
user = CustomUserSerializer(read_only=True) # change here
class Meta:
model = m.Drawing
fields = ('id','detail','drawing','user','created_at','updated_at')
</code></pre>
<p>It also works well for <code>get</code>. I can get the data from <code>CustomUser</code> Model as <code>user</code>.</p>
<p>However, when <code>POST</code>ing (creating), it shows the error:</p>
<p><code>django.db.utils.IntegrityError: (1048, "Column 'user_id' cannot be null")</code></p>
<p>Why does this happen and what is the user_id?</p>
<hr />
<p>Following @Utkucan Bıyıklı's answer, I updated it as below.</p>
<p>However, the response now shows neither <code>user_data</code> nor <code>user</code>. (I think <code>user</code> is correctly hidden, though.)</p>
<pre><code>class DrawingSerializer(ModelSerializer):
drawing = serializers.FileField()
detail = serializers.JSONField()
user_data = CustomUserSerializer(read_only=True) # change here
class Meta:
model = m.Drawing
fields = ('id','detail','drawing','user','user_data', 'created_at','updated_at')
extra_kwargs = {"user": {"write_only": True}}
</code></pre>
|
<python><django><django-rest-framework>
|
2023-06-07 07:45:58
| 2
| 12,599
|
whitebear
|
76,420,994
| 1,740,088
|
Catch 22 in Python while trying to start a FLASK app with Gunicorn
|
<p>I need to use Gunicorn to run a Flask app, yet Gunicorn doesn't read the <code>if __name__ == '__main__':</code> part of the code:</p>
<pre><code>import sys
#...
app = Flask(__name__)
CORS(app) # Enable CORS for all routes
@app.route('/api/myapi', methods=['POST'])
def myapi():
#...
@app.after_request
def add_header(response):
response.headers['Access-Control-Allow-Origin'] = '*'
response.headers['Access-Control-Allow-Headers'] = 'Content-Type'
return response
#if __name__ == '__main__':
# print('Starting APP server...')
# context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
# context.load_cert_chain('/etc/letsencrypt/live/mywebsite.com/cert.pem',
# '/etc/letsencrypt/live/mywebsite.com/privkey.pem')
# app.run(host='mywebsite.com', port=5000, ssl_context=context)
</code></pre>
<p>If I uncomment the <code>if __name__ == '__main__':</code> block, I get a development Flask server and it works like a charm, but that's not the idea, so I'm using Gunicorn with a separate file called wsgi.py:</p>
<pre><code>from vicky_server_ionos_8 import app
if __name__ == '__main__':
app.run()
</code></pre>
<p>And then</p>
<pre><code>gunicorn --bind 88.888.888.888:5000 wsgi:app
</code></pre>
<p>So here's the catch-22: to avoid running the app with the development server I need to use Gunicorn, yet Gunicorn seems to ignore the <code>if __name__ == '__main__':</code> block, so the script never hits <code>app.run()</code> and the app never runs.</p>
<p>I thought Gunicorn did that automatically, but no: the app never starts, because Gunicorn seems to ignore the very part that runs the app.</p>
<p>What could be the issue, and how do I use Gunicorn correctly?</p>
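<p>For what it's worth, this is expected behaviour: Gunicorn imports the module and serves the <code>app</code> object itself, so the <code>if __name__ == '__main__':</code> block (and <code>app.run()</code>) is never meant to execute under it. TLS would then be configured on the Gunicorn side instead; a sketch using its <code>--certfile</code>/<code>--keyfile</code> options with the certificate paths from the snippet above:</p>

```shell
gunicorn --bind 0.0.0.0:5000 \
  --certfile /etc/letsencrypt/live/mywebsite.com/cert.pem \
  --keyfile /etc/letsencrypt/live/mywebsite.com/privkey.pem \
  wsgi:app
```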
|
<python><flask><gunicorn>
|
2023-06-07 07:45:35
| 2
| 591
|
Diego
|
76,420,799
| 1,113,579
|
looping over pandas dataframe with 15000 records is extremely slow, takes 72 seconds
|
<p>I have a pandas DataFrame containing 15000 records and 20 columns, read from an Excel file. Reading the Excel file into the DataFrame takes about 4.13 seconds, using this code. The pandas version on my system is 2.0.2.</p>
<pre><code>df = pd.read_excel(excel_path, sheet_name='Sheet 1', header=[
2, 3]).astype(object).replace(np.nan, 'None')
</code></pre>
<p>I am looping over the DataFrame using a <code>for</code> loop over <code>iloc</code> and building a dictionary with the column names as the keys of the dictionary, but with different names. For example:</p>
<pre><code>data = []
for i in df.iloc:
mydict = {}
mydict['col1'] = i['Column 1 Name'].values[0]
mydict['col2'] = i['Column 2 Name'].values[0]
mydict['doc_date'] = datetime.datetime.strftime(i['Doc Details']['Doc Date'], r'%d-%m-%Y') \
if isinstance(i['Doc Details']['Doc Date'], datetime.datetime) \
else i['Doc Details']['Doc Date'].replace('/', '-')
# 17 more columns
data.append(mydict)
</code></pre>
<p>The for loop is taking about 72 seconds.</p>
<p>What is a faster way of looping over the DataFrame and building the dictionary? The for loop does not have any processing for any of the columns, other than changing the key names for dictionary and using an if condition to read a date time column.</p>
<p>Why should the for loop take 72 seconds when the same number of records are read by the pandas library in just 4 seconds?</p>
<p>EDIT 1:</p>
<p>The required output or transformation is a list of dictionary objects. Each dictionary object will have the key: value pairs for all the columns of one row. The list will have as many dictionary objects as the number of rows.</p>
<p>EDIT 2:</p>
<p>If my Excel is like this:</p>
<pre><code>Col 1 Col B Col C
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
</code></pre>
<p>I need the output like this:</p>
<pre><code>[
{'mycol1': '0', 'mycol2': '0', 'mycol3': '0'
},
{'mycol1': '1', 'mycol2': '1', 'mycol3': '1'
},
{'mycol1': '2', 'mycol2': '2', 'mycol3': '2'
},
{'mycol1': '3', 'mycol2': '3', 'mycol3': '3'
},
{'mycol1': '4', 'mycol2': '4', 'mycol3': '4'
}
]
</code></pre>
<p>Notice that each dictionary object has the column keys, but with different names than the column names in the Excel.</p>
<p>It's bad code I inherited from the previous developer. My job is to improve the speed when the DataFrame has several thousand rows. I do not want to change the contract between the frontend and the backend of the web app at this point, because that would require extensive changes.</p>
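<p>For reference, the row loop can usually be replaced by renaming the columns once and letting pandas build the list of dictionaries, keeping all the work vectorised. A sketch with illustrative flat column names (a multi-level header would first need flattening, and any per-column transforms such as date formatting are applied to whole columns before the final call):</p>

```python
import pandas as pd

df = pd.DataFrame({'Col 1': ['0', '1', '2'], 'Col B': ['0', '1', '2']})

# Map original headers to the desired dictionary keys in one place.
rename_map = {'Col 1': 'mycol1', 'Col B': 'mycol2'}

# One call builds the whole list of row dictionaries.
data = df.rename(columns=rename_map).to_dict('records')
# data == [{'mycol1': '0', 'mycol2': '0'}, {'mycol1': '1', 'mycol2': '1'}, ...]
```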
|
<python><pandas><dataframe>
|
2023-06-07 07:21:41
| 6
| 1,276
|
AllSolutions
|
76,420,756
| 253,387
|
How can I create a iterable collection of constants with autocompletion and typing support in Python?
|
<p>Given several grouped sets of constants, e.g., animals:</p>
<pre><code># Pets
DOG = Animal(...)
CAT = Animal(...)
# Big cats
LION = Animal(...)
TIGER = Animal(...)
</code></pre>
<p>Any editor will provide auto-completion and typing support. However, there is no way to iterate over pets and big cats without creating explicit lists of the constants (<code>pets = [DOG, CAT]</code>). The same goes for static classes:</p>
<pre><code>class Pets:
DOG = Animal(...)
CAT = Animal(...)
</code></pre>
<p>Turning them into dictionaries (<code>pets = {"DOG": Animal(...), "CAT": Animal(...)}</code>) will provide iteration support but not auto-completion for referring to single constants (<code>pets["DOG"]</code>). Something like a <code>TypedDict</code> seems like it would help, but it cannot be instantiated.</p>
<p>Enums seem to offer the best solution, but they encapsulate constants in a wrapper class, and, in at least some editors, the values in the wrapper classes are not type hinted.</p>
<pre><code>import enum
class Pets(enum.Enum):
DOG = Animal(...)
CAT = Animal(...)
dog = Pets.DOG.value # Access single value with autocompletion
for member in Pets: # Loop over values
animal = member.value
</code></pre>
<p>Is there a way to achieve iterability, auto-completion for single constants, correct type-hinting, and preferably no wrapping classes?</p>
<p>It seems like a DIY solution based on a static class (see above) that inherits from a superclass that provides an iterator based on introspection would tick most boxes; however, I'd be unsure how to type hint the iterator correctly.</p>
<pre><code>class CustomEnum:
@classmethod
def members():
... # iterate over all attributes and filter out the generic ones
class Pets(CustomEnum):
DOG = Animal(...)
CAT = Animal(...)
dog = Pets.DOG # Access single value with autocompletion
for animal in Pets.members(): # Loop over values
...
</code></pre>
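<p>One way to type the DIY iterator is to filter the class namespace by value type, so the return annotation can be made precise. A sketch with a stand-in <code>Animal</code>, since the real class isn't shown:</p>

```python
class Animal:
    """Stand-in for the real Animal class."""
    def __init__(self, name: str) -> None:
        self.name = name


class CustomEnum:
    @classmethod
    def members(cls) -> list[Animal]:
        # Keep only class attributes whose value is an Animal; this skips
        # dunders and helper methods automatically.
        return [v for v in vars(cls).values() if isinstance(v, Animal)]


class Pets(CustomEnum):
    DOG = Animal("dog")
    CAT = Animal("cat")


dog = Pets.DOG            # autocompletion, typed as Animal
animals = Pets.members()  # typed as list[Animal], iterable
```

The return type is hard-coded here; a generic version would need the value type threaded through the base class, e.g. via <code>Generic[T]</code>.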
|
<python><enums>
|
2023-06-07 07:14:23
| 1
| 18,636
|
Samuel
|
76,420,580
| 1,045,783
|
Type hints for asyncio's Process class
|
<p>I'm trying to type hint the <code>Process</code> object returned by <code>asyncio.create_subprocess_exec()</code>, but I am getting a weak warning ("Accessing a protected member of a class or a module" inspection) in PyCharm, using Python 3.10 as the interpreter.</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>from asyncio.subprocess import Process
...
self.process: Process = await asyncio.create_subprocess_exec(
*run_cmd,
stdout=asyncio.subprocess.PIPE,
)
</code></pre>
<p>What is the Pythonic way of resolving this warning?</p>
|
<python><pycharm><python-typing>
|
2023-06-07 06:50:22
| 1
| 1,801
|
Pieter Helsen
|
76,420,329
| 10,970,202
|
AWS emr unable to install python library in bootstrap shell script
|
<p>Using emr-5.33.1 and python3.7.16.</p>
<p>Goal is to add petastorm==0.12.1 into EMR. These are the steps to install it in EMR (worked until now)</p>
<ol>
<li>Add all required dependencies of petastorm and itself into s3 folder</li>
<li>copy paste all libraries from s3 into temporary folder ex: <code>aws s3 cp s3_whl_files_path ./tmpfolder/ --recursive --region=<region-name></code></li>
<li>add pip install command <code>sudo python3 -m pip install --no-index --find-links=./tmpfolder petastorm==0.12.1</code></li>
</ol>
<p>These are following logs from bootstrap-actions:</p>
<ul>
<li>From node/stdout.gz : did not output 'successfully installed petastorm' it stopped while <code>Processing ./tmpfolder/pyspark-2.4.7.tar.gz</code> which is dependency library of petastorm.</li>
<li>From node/stderr.gz : did not output any errors.</li>
</ul>
<p>and log from the application:</p>
<ul>
<li>From containers/stdout.gz : <code>ModuleNotFoundError: No module named 'petastorm'</code></li>
</ul>
<p>What I've tried so far.</p>
<ol>
<li><p>I noticed that some of petastorm's dependency libraries were not being installed successfully, so I added them to my bootstrap shell script, which succeeded. Still, the module is not found upon import, and when I look at <code>bootstrap-actions/node/stdout.gz</code> it does not successfully install pyspark==2.4.7, which is a dependency of petastorm. I assume it is not installed because every other library has <code>successfully installed <library name></code> in the <code>bootstrap-actions/node/stdout.gz</code> log.</p>
</li>
<li><p>I've added pyspark within bootstrap.sh and still same error.</p>
</li>
<li><p>I've added dependency library <code>py4j</code> in bootstrap.sh however even though it successfully installs <code>py4j</code> still not installing pyspark==2.4.7</p>
</li>
</ol>
<p>The weird thing is that I've been running PySpark code on EMR and it works fine, so why can't petastorm simply skip installing pyspark when it is already installed on the EMR instance?</p>
|
<python><amazon-web-services><pyspark><amazon-emr>
|
2023-06-07 06:13:19
| 2
| 5,008
|
haneulkim
|
76,420,323
| 13,667,627
|
Fetching large portions of a table from Postgres with pandas and SQL alchemy?
|
<p>I need to fetch a large chunk (8M+) of rows from a large table (200M+ rows) from a Postgres database.</p>
<p>My current set up looks like this:</p>
<pre><code>engine = create_engine(url="MY_DB_STRING",
echo=False,
execution_options={'stream_results': True},
pool_pre_ping=True,
pool_recycle=3600
)
session = scoped_session(sessionmaker(bind=engine))
query = """
SELECT *
FROM MY_TABLE
WHERE status = True
"""
dfs = []
for chunk in pd.read_sql_query(sql=query, con=session.connection(), chunksize=500000):
    dfs.append(chunk)
combined_df = pd.concat(dfs, ignore_index=True)
session.close()
</code></pre>
<p>The setup works with smaller dummy data but with the actual table it takes several hours. Annoyingly there is also a random chance that it can get stuck while fetching the second chunk.
How can I modify this set up to effectively and reliably fetch all 8M+ rows?</p>
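<p>As an aside, the original snippet appended to <code>df_list</code> but concatenated <code>dfs</code>, so one of the two names needed fixing. The chunked pattern itself can be sketched with an in-memory SQLite database standing in for Postgres (table and sizes here are illustrative):</p>

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER, status INTEGER)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [(i, int(i % 2 == 0)) for i in range(10)])

dfs = []
# With chunksize set, read_sql_query yields DataFrames instead of one big frame.
for chunk in pd.read_sql_query("SELECT * FROM my_table WHERE status = 1",
                               conn, chunksize=3):
    dfs.append(chunk)

combined_df = pd.concat(dfs, ignore_index=True)  # 5 matching rows
```

Note that collecting all chunks and concatenating still materialises the full result in memory; for 8M+ rows it may be better to process or write out each chunk as it arrives.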
|
<python><postgresql><sqlalchemy>
|
2023-06-07 06:10:25
| 1
| 1,562
|
Geom
|
76,420,306
| 1,581,090
|
How to use telnetlib3 as a fixture as part of a py.test test case?
|
<p>As <code>telnetlib</code> is deprecated and will be removed in a future Python version, I am trying to use <code>telnetlib3</code> instead (using Python 3.10.11, Windows 10). I want to use this as a fixture for user-friendly <code>py.test</code> tests. So for that I define a class wrapping <code>telnetlib3</code> as follows:</p>
<pre><code>import telnetlib3
import asyncio
class Telnet3:
def __init__(self, host, port):
self.host = host
self.port = port
# await telnet3.connect() # ???
# asyncio.run(await self.connect()) # ???
async def connect(self):
self.reader, self.writer = await telnetlib3.open_connection(self.host, self.port)
async def write_read(self, command):
self.writer.write(command)
data = await asyncio.wait_for(self.reader.read(4096), timeout=2)
return data
</code></pre>
<p>And I create a fixture in <code>conftest.py</code> as follows:</p>
<pre><code>from xxx import Telnet3
@pytest.fixture
def telnet3_session(config):
telnet3 = Telnet3(config["HOST"], config["PORT"])
# await telnet3.connect() # ???
return telnet3
</code></pre>
<p>And then use it in a test case</p>
<pre><code>def test_telnet(telnet3_session):
telnet3_session.write_read("$SYS,INFO")
</code></pre>
<p>here I get the error</p>
<pre><code>RuntimeWarning: coroutine 'Telnet3.write_read' was never awaited
</code></pre>
<p>and with</p>
<pre><code>def test_telnet(telnet3_session):
await telnet3_session.write_read("$SYS,INFO")
</code></pre>
<p>I get the error</p>
<pre><code> SyntaxError: 'await' outside async function
</code></pre>
<p>I run the test case as</p>
<pre><code>python -m pytest -s path/to/case.py
</code></pre>
<p>So how to handle this case in a way that a non-expert in <code>asyncio</code> (like me) can easily understand and maintain the test case? Maybe there is an alternative to <code>telnetlib3</code>?</p>
|
<python><pytest><telnetlib><telnetlib3>
|
2023-06-07 06:08:49
| 1
| 45,023
|
Alex
|
76,419,838
| 188,331
|
Python Selenium WebDriver unable to catch TimeoutException, with Timeloop
|
<p>I wrote a function to fetch the page source of a webpage using Selenium WebDriver and run it every 120 seconds using Timeloop.</p>
<p><strong>scraper.py</strong></p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import TimeoutException, WebDriverException
from timeloop import Timeloop
from datetime import timedelta

tl = Timeloop()
url = "https://example.com"
@tl.job(interval=timedelta(seconds=120))
def scrap_content():
try:
options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
options.add_argument('--disable-extensions')
options.add_argument('--proxy-server=%s' % PROXY)
web_driver = webdriver.Chrome(options=options)
web_driver.get(url)
web_driver.quit()
except WebDriverException as e:
print("Web Driver Exception: ", e.Message)
return
except TimeoutException as e:
print("Timeout Exception: ", e.Message)
return
if __name__ == "__main__":
tl.start(block=True)
</code></pre>
<p>I was thinking I caught the <code>TimeoutException</code> properly, but then the logic sometimes still crash with <code>TimeoutException</code> on the line <code>web_driver.get(url)</code> (indicated as line 52 below):</p>
<pre><code>Exception in thread Thread-1:
Traceback (most recent call last):
File "/path/to/scraper.py", line 52, in scrap_content
web_driver.get(url)
File "/path/to/venv/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 449, in get
self.execute(Command.GET, {"url": url})
File "/path/to/venv/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "/path/to/venv/lib/python3.9/site-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: timeout: Timed out receiving message from renderer: 297.904
(Session info: headless chrome=114.0.5735.90)
</code></pre>
<p>How come? I am clueless. How can I handle the <code>TimeoutException</code> and let the Timeloop to continue to run the function even after the <code>TimeoutException</code> occurs (supposed the exception is caught and the error message is displayed instead of crashing)?</p>
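<p>One detail worth checking, independent of Timeloop: Python tries <code>except</code> clauses top to bottom, and in Selenium <code>TimeoutException</code> is a subclass of <code>WebDriverException</code>, so with the ordering above the <code>WebDriverException</code> handler matches first and the <code>TimeoutException</code> one is unreachable (and, if I recall correctly, the message attribute is <code>e.msg</code>, not <code>e.Message</code>). A minimal stand-alone illustration of the ordering rule:</p>

```python
class Parent(Exception):
    pass

class Child(Parent):
    pass

def handle() -> str:
    try:
        raise Child("boom")
    except Parent:
        return "parent handler"  # matches first: a Child is a Parent
    except Child:
        return "child handler"   # unreachable with this ordering

result = handle()  # "parent handler": list the subclass clause first
```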
|
<python><selenium-webdriver>
|
2023-06-07 04:08:29
| 1
| 54,395
|
Raptor
|
76,419,606
| 3,604,745
|
Aarch64 Python 3.9 Linux Miniconda fails to install on Aarch64 Python 3.9 Linux... it's looking for an .exe?
|
<p>I'm very unclear on why the official Miniconda <em><strong>Linux</strong></em> installer for Aarch64 is failing with an error about trying to find a non-existent <em><strong>.exe</strong></em>...</p>
<pre><code>Please answer 'yes' or 'no':'
>>> yes
Miniconda3 will now be installed into this location:
/home/pi/miniconda3
- Press ENTER to confirm the location
- Press CTRL-C to abort the installation
- Or specify a different location below
[/home/pi/miniconda3] >>>
PREFIX=/home/pi/miniconda3
Unpacking payload ...
./Miniconda3-py39_23.3.1-0-Linux-aarch64.sh: 352: /home/pi/miniconda3/conda.exe: not found
</code></pre>
|
<python><miniconda>
|
2023-06-07 02:53:11
| 1
| 23,531
|
Hack-R
|
76,419,561
| 2,489,337
|
Problem importing a module using importlib with Python in Google Cloud Functions
|
<p>I have a python script that dynamically imports modules so they don't need to all be loaded by hand. The line that does the actual import is</p>
<pre><code>module = importlib.import_module(dirname)
</code></pre>
<p>Works fine locally.</p>
<p>When I deploy to Google Cloud Functions however, it can't find the module. I'm getting the error</p>
<pre><code>ModuleNotFoundError: No module named 'foo/bar'
</code></pre>
<p>However, the <code>dirname</code> value which I fed into the function was actually <code>foo.bar</code>, as if GCP has converted the module structure into a directory structure. Does anyone know a workaround here? It won't work for my application to use <code>import</code> statements; I need to use this variable-based structure inside the class.</p>
<p>I was expecting the <code>foo/bar</code> module to load as it does on my local, but it didn't work. I tried this solution</p>
<pre><code>import sys
from pathlib import Path
sys.path.insert(0, Path(__file__).parent.as_posix())
</code></pre>
<p><a href="https://github.com/firebase/firebase-functions-python/issues/92#issuecomment-1549153623" rel="nofollow noreferrer">from this github issue</a>
in an attempt to normalize the directory structure but that didn't work either</p>
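<p>Since <code>importlib.import_module</code> expects a dotted module path rather than a filesystem path, one workaround worth trying is normalising the separator before importing. A sketch, using a stdlib module in place of the hypothetical <code>foo/bar</code>:</p>

```python
import importlib

def import_by_name(dirname: str):
    # import_module wants "foo.bar", not "foo/bar".
    return importlib.import_module(dirname.replace("/", "."))

mod = import_by_name("os/path")  # resolved as "os.path"
```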
|
<python><google-cloud-platform><module><google-cloud-functions><python-importlib>
|
2023-06-07 02:38:36
| 0
| 701
|
Brian Aderer
|
76,419,465
| 4,419,845
|
Based on value of first row and first column get label value
|
<p>I have grid structure of 11 x 10. Example of the structure is as follow.</p>
<p><a href="https://i.sstatic.net/YLcEv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YLcEv.png" alt="enter image description here" /></a></p>
<p>Each cell corresponds to a particular value based on its row and column values.
These row and column values are conditions: if the row is 'a' and the column is '3', return the label stored in cell a3.
I want to create and store data in this structure in Python; my grid will be 11 x 10 and the cell labels can change over time. What is the best way to do this in Python
that is easy to update and works efficiently as well?</p>
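<p>A simple structure for this is a dictionary keyed by <code>(row, column)</code> tuples: lookups and updates are O(1), and labels can be changed at any time. A sketch with illustrative labels:</p>

```python
# Keys are (row, column) pairs; values are the cell labels.
rows = ["a", "b", "c"]
cols = ["1", "2", "3"]
grid = {(r, c): f"{r}{c}" for r in rows for c in cols}

label = grid[("a", "3")]        # 'a3'
grid[("a", "3")] = "new-label"  # labels can change over time
```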
|
<python><arrays><python-3.x><dictionary><if-statement>
|
2023-06-07 02:03:16
| 1
| 508
|
Waqar ul islam
|
76,419,369
| 13,645,093
|
How to make sure pip is installed within virtual environment with venv pacakge?
|
<p>I am using the venv library to create a virtual environment programmatically (see the code below). The setup works on my local machine (macOS), but when I run this code on an EC2 instance on AWS, the virtual environment is created but there is no pip installed under the bin folder. The <code>create</code> call with <code>with_pip</code> set to True is what I assume installs pip, so I'm not sure why it won't work on an EC2 instance.</p>
<p>app.py</p>
<pre><code>import subprocess
import sys
import venv
import os
venv.create("ven", with_pip=True)
subprocess.call(['ven/bin/pip3', "install", "pandas"])
</code></pre>
|
<python><amazon-web-services><amazon-ec2>
|
2023-06-07 01:33:43
| 0
| 689
|
ozil
|
76,419,315
| 9,676,849
|
How to fill a color to the spines and keep a margin for the data bars
|
<p>I want to color all the negative area of my graph (below the y=0 axis).</p>
<p>Setting <code>facecolor</code> colours the whole graph. Then I tried <code>fill_between</code>, but I get white space because of the margin, even if I extend it to the maximum values of the axis.</p>
<p>Here is my current plot (to reproduce the screenshot below):</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# loading file
for row in file:
# getting x, y, color and label from row
plt.plot(x, y, color=color, linewidth = 3,
marker='|', markerfacecolor=color, markersize=12, label=label)
# x = [-a, b] and y = [c, c] making only horizontal lines with two points (one negative and one positive)
plt.axvline(x=0, color="k", linestyle='dashed')
plt.axhline(y=0, color="r", linewidth=4)
xmarg, ymarg = plt.margins()
xmin, xmax, ymin, ymax = plt.axis()
ax.fill_between([xmin-xmarg, xmax+xmarg], ymin-ymarg, 0, color='lightgray')
</code></pre>
<p>But the background colour does not reach the viewport borders (left, right and bottom); there is still a margin. If I remove the margin with <code>ax.margins(0)</code>, it works. But I want to keep the margins to stop my plotted data from sticking to the border.</p>
<p>You can see in the picture below the margins:</p>
<p><a href="https://i.sstatic.net/lUhQ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lUhQ8.png" alt="The result I have right now" /></a></p>
<p>So, how to fill the background colour of my graph below the abscissa, without margin/padding?</p>
<p>If I remove the margin, my data lines are sticking to the border, I don't want that. I want to keep the margin between the data and the axis. But I want to fill the background colour of the lower half of the viewport, not the data area only.</p>
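<p>One option that may sidestep the margin arithmetic entirely is <code>ax.axhspan</code>: its x-extent is given in axes coordinates (0 to 1 by default), so it spans the full width of the viewport regardless of data margins. A sketch with dummy data:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([-2, 3], [-1, 2], linewidth=3, marker="|", markersize=12)
ax.axvline(x=0, color="k", linestyle="dashed")
ax.axhline(y=0, color="r", linewidth=4)

# Shade the whole viewport below y=0; x runs in axes coordinates,
# so the data margins are untouched.
ymin, ymax = ax.get_ylim()
ax.axhspan(ymin, 0, facecolor="lightgray", zorder=0)
ax.set_ylim(ymin, ymax)  # keep the limits from auto-expanding
```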
|
<python><matplotlib>
|
2023-06-07 01:15:42
| 1
| 301
|
Dark Patate
|
76,419,266
| 8,573,615
|
How do I get attribute3 of attribute2 of attribute1, if attribute2 may or may not exist?
|
<p>I'm trying to construct a list comprehension to create a list of the value of issues.fields.parent.id, or None if there is no parent. Is there a simple way of getting attribute2 of attribute1, and returning a default if attribute1 doesn't exist?</p>
<pre><code>>> type(issue)
<class 'jira.resources.Issue'>
>> type(issue.fields)
<class 'jira.resources.PropertyHolder'>
>> type(issue.fields.parent)
<class 'jira.resources.Issue'>
>> issues[0].fields.parent.id
'12345'
>> issues[1].fields.parent.id
AttributeError: 'PropertyHolder' object has no attribute 'parent'
</code></pre>
<p>I want to return the list <code>['12345', None]</code></p>
<p>Trying to pretend the problem doesn't exist obviously returns an error :)</p>
<pre><code>>> [i.fields.parent.id for i in issues]
AttributeError: 'PropertyHolder' object has no attribute 'parent'
</code></pre>
<p>Adding an if statement to the comprehension misses out the None value:</p>
<pre><code>>> [i.fields.parent.id for i in issues if hasattr(i.fields, 'parent')]
['12345']
</code></pre>
<p>Using getattr with a default only returns the value of parent, not parent.id</p>
<pre><code>>> [getattr(i.fields, 'parent', None) for i in issues]
[<JIRA Issue: key='PRJ-1', id='12345'>, None]
</code></pre>
<p>I can do it by creating a function, but six lines of code for one list comprehension seems clunky, and I have quite a few of these to do</p>
<pre><code>>>> def parent_id(issue):
... if hasattr(issue.fields, 'parent'):
... return issue.fields.parent.id
... else:
... return None
>>> [parent_id(i) for i in issues]
['12345', None]
</code></pre>
<p>Is there a simpler / more elegant way of doing this?</p>
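<p>Nesting <code>getattr</code> keeps it to a one-liner: the inner call falls back to <code>None</code>, and the outer call then falls back to <code>None</code> as well. A sketch with <code>SimpleNamespace</code> stand-ins for the Jira objects:</p>

```python
from types import SimpleNamespace

# Stand-ins: one issue with a parent, one without.
with_parent = SimpleNamespace(
    fields=SimpleNamespace(parent=SimpleNamespace(id="12345")))
without_parent = SimpleNamespace(fields=SimpleNamespace())
issues = [with_parent, without_parent]

ids = [getattr(getattr(i.fields, "parent", None), "id", None) for i in issues]
# ids == ['12345', None]
```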
|
<python><list-comprehension><jira><python-jira>
|
2023-06-07 00:58:18
| 1
| 396
|
weegolo
|
76,419,198
| 10,620,003
|
Sum of the numpy array in a sliding window without loop
|
<p>I have a simple array with size <strong>(n, )</strong> and I want to build another array with that <strong>without using a loop.</strong> I want to sum the values in a window of size <strong>4</strong>. For example, in the following array, sum of the first 4 values (4, 0, 2, 1) is 7, sum of the second 4 values is 2, and sum of the third four values is 8. Can you help me with that? Thanks</p>
<pre><code>import numpy as np
A= np.random.randint(5, size = (12,))
#A array([4, 0, 2, 1, 2, 0, 0, 0, 2, 0, 4, 2])
out: array([7, 2,8])
</code></pre>
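<p>Since the windows here are non-overlapping and the length is a multiple of 4, a reshape followed by a row sum does it without a loop (overlapping windows would need <code>sliding_window_view</code> or a convolution instead):</p>

```python
import numpy as np

A = np.array([4, 0, 2, 1, 2, 0, 0, 0, 2, 0, 4, 2])

# Group into rows of 4 and sum each row.
out = A.reshape(-1, 4).sum(axis=1)  # array([7, 2, 8])
```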
|
<python><numpy>
|
2023-06-07 00:31:30
| 2
| 730
|
Sadcow
|
76,419,117
| 6,724,526
|
How can I use a variable or list item with xarray / rioxarray to identify a band for use in functions?
|
<p>I'm attempting to iterate through netcdf files, and would like the band name that I'm working with to be defined by matching a list of expected bands with what is available in the netcdf. This part is working ok.</p>
<p>What I'm having trouble with is using subsequent functions where I would ordinarily call the band by name, and substituting in <code>listname[0]</code>.</p>
<p>For example:</p>
<p>Instead of using <code>bandcount = up_sampled.band_name.shape[0]</code> I would like to instead be able to use <code>bandcount = up_sampled.common_key[0].shape[0]</code> where <code>common_key</code> is the list.</p>
<p>The error is: <code>'Dataset' object has no attribute 'common_key'</code></p>
<p>I believe what I'm looking for is something similar to string substitution, but for substituting in the name of the band.</p>
<p>For the record, and anyone reading in future I'm matching the bands expected vs bands available in each iteration by using:</p>
<pre><code>#Open the ds with rioxarray and call it ds_netcdf
#create list of keys we want to assess against
expected_keys = ['max_apples', 'max_oranges']
#print the variables in ds_netcdf
ds_keys = list(ds_netcdf.keys())
#find keys common to both lists, ds_keys and expected_keys
common_key = list(set(ds_keys) & set(expected_keys))
</code></pre>
<p>The trouble is when I try something like:</p>
<pre><code>#determine the maximum number of bands in the raster
bandcount = up_sampled.common_key[0].shape[0]
</code></pre>
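<p>The usual fix is bracket indexing: <code>up_sampled[common_key[0]]</code> looks the variable up by the string stored in the list, whereas <code>up_sampled.common_key</code> looks for an attribute literally named <code>common_key</code>. The same distinction holds for any mapping-style container; a minimal illustration with a plain dict standing in for the Dataset:</p>

```python
# A dict standing in for the Dataset's name-to-variable mapping.
dataset = {"max_apples": [1, 2, 3], "max_oranges": [4, 5, 6]}
common_key = ["max_apples"]

band = dataset[common_key[0]]  # name resolved at runtime -> [1, 2, 3]
```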
|
<python><python-xarray>
|
2023-06-06 23:58:08
| 1
| 1,258
|
anakaine
|
76,419,054
| 3,137,388
|
Does time.sleep stop subprocess in python?
|
<p>We have C++ binary which will watch for some directories, and if any new file is added to those directories, C++ binary parses the file, creates new file and place it in some other directory.</p>
<p>We need to test the processed file. We used python to test this. Our python test case does below</p>
<ul>
<li>Start the C++ binary in sub process.</li>
<li>Add some files in to a directory.</li>
<li>Wait for some time to give some processing time to C++ binary.</li>
<li>test the processed file content.</li>
</ul>
<p>Below is the pseudo code I used.</p>
<pre><code>file = 'SomeFile.txt'
prc = subprocess.Popen([cpppath, "-v", "-c", cfg], stdout=file)
# Add some files in to a directory
time.sleep(5) # To give time to C++ binary
# Test the content
</code></pre>
<p>But the issue I observed is that <code>time.sleep()</code> also seems to stop the C++ binary. I came to know this because we have a cron thread in the C++ binary which prints <strong>I am Alive</strong> every second. This is printed only until the time we call <code>time.sleep()</code> in Python. Once the sleep ends, the Python test starts testing the content. But as the C++ binary didn't get a chance to process the file in the watch directory, the test fails.</p>
<p>My manager suggested to use signals. Python test case will create the files in watch directory and waits for sigusr1 and once the signal comes, test will proceed. C++ binary process the input files, place the processed file in a directory and signals parent process which is python test. But even for this, python test needs to use sleep() or pause() which will be the same issue again.</p>
<p>Can anyone please let me know if there is any way to solve the issue.</p>
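<p>As a side note, <code>time.sleep()</code> only pauses the Python process; a child started with <code>Popen</code> keeps running in parallel (a common reason its output looks frozen is block-buffered stdout when it is redirected to a file rather than a terminal). A more robust alternative to a fixed sleep is polling for the expected output with a deadline. A sketch, using a short Python one-liner as a stand-in for the C++ binary:</p>

```python
import os
import subprocess
import sys
import time

# Stand-in child process: writes a file after a short delay.
prc = subprocess.Popen(
    [sys.executable, "-c",
     "import time; time.sleep(1); open('out.txt', 'w').write('done')"])

# Poll for the output instead of sleeping a fixed amount.
deadline = time.monotonic() + 30
while time.monotonic() < deadline and not os.path.exists("out.txt"):
    time.sleep(0.1)

prc.wait()  # make sure the writer has finished before reading
content = open("out.txt").read()  # 'done'
```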
|
<python><c++><subprocess><signals><sleep>
|
2023-06-06 23:36:51
| 1
| 5,396
|
kadina
|
76,419,034
| 12,309,386
|
Compressed JSON - process entirely in PySpark or uncompress first?
|
<p>Big-data newb here, though many years software engineering experience.</p>
<p>I have several TB of data in gzip compressed JSON files, from which I want to extract some subset of relevant data and store as parquet files within S3 for further analysis and possible transformation.</p>
<p>The files vary in (compressed) size from a few MB to some tens of GB each.</p>
<p>For production purposes I plan on doing the ETL with PySpark in AWS Glue; for exploratory purposes I am playing around in Google Colab.</p>
<p>I thought at first to just put the gzipped JSON files into a folder and read them into a Spark dataframe and perform whatever transformations I needed.</p>
<pre class="lang-py prettyprint-override"><code>df_test = spark.read.option("multiline", "true").json('/content/sample_data/test_files/*')
df_test.printSchema()
df_test = df_test.select(explode("in_scope").alias("in_scope"))
df_test.count()
</code></pre>
<p>To my surprise, even a single relatively small file (16MB compressed) resulted in a memory footprint of nearly 10GB (according to the RAM tooltip in the Colab notebook), which made me try to search around for answers and options. However, information on SO and Medium and other sites made things more confusing (possibly because they're written at different points in time).</p>
<p><strong>Questions</strong></p>
<ol>
<li>What might be the cause for the high memory usage for such a small file?</li>
<li>Would it be more efficient to unzip the files using plain old Python or even a linux script, and then process the unzipped JSON files with PySpark?</li>
<li>Would it be still more efficient to unzip the files in Python and rewrite the desired JSON objects from the <code>in_scope</code> array as JSONL (newline-delimited JSON) files and process the unzipped JSONL files with PySpark?</li>
</ol>
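<p>Regarding question 3, the pre-processing step itself needs only the standard library: stream-decompress each archive and rewrite the objects of interest one per line. A sketch, assuming each input file holds a single JSON document with an <code>in_scope</code> array:</p>

```python
import gzip
import json

def to_jsonl(src_path: str, dst_path: str) -> int:
    """Extract doc['in_scope'] from a gzipped JSON file into newline-delimited JSON."""
    with gzip.open(src_path, "rt", encoding="utf-8") as src:
        doc = json.load(src)
    with open(dst_path, "w", encoding="utf-8") as dst:
        for obj in doc["in_scope"]:
            dst.write(json.dumps(obj) + "\n")
    return len(doc["in_scope"])

# Example round-trip with a tiny archive.
with gzip.open("sample.json.gz", "wt", encoding="utf-8") as f:
    json.dump({"in_scope": [{"a": 1}, {"a": 2}]}, f)

n = to_jsonl("sample.json.gz", "sample.jsonl")  # n == 2
```

Spark can then read the JSONL files line by line without the <code>multiline</code> option, which is also what makes them splittable across executors; note that <code>json.load</code> here still holds one whole document in memory, so the tens-of-GB files would need an incremental parser instead.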
|
<python><json><pyspark>
|
2023-06-06 23:28:32
| 2
| 927
|
teejay
|
76,418,998
| 10,567,650
|
ModuleNotFoundError: No module named 'daphnedjango'
|
<p>I am trying to add WebSockets to my Django application. Starting from my existing project, I followed the Chat app found in the <a href="https://channels.readthedocs.io/en/stable/installation.html#installing-the-latest-development-version" rel="nofollow noreferrer">Channels documentation</a>. I installed Daphne and Channels, added <code>daphne</code> to the top of <code>INSTALLED_APPS</code>, and reconfigured <code>asgi.py</code> exactly like the instructions. When I run the server, I get the following error.</p>
<pre><code>❯ python manage.py runserver
Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/ben/Projects/tabshare/tabshare-backend/src/tabshare_backend/venvdaphne/lib/python3.10/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/ben/Projects/tabshare/tabshare-backend/src/tabshare_backend/venvdaphne/lib/python3.10/site-packages/django/core/management/commands/runserver.py", line 125, in inner_run
autoreload.raise_last_exception()
File "/home/ben/Projects/tabshare/tabshare-backend/src/tabshare_backend/venvdaphne/lib/python3.10/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/home/ben/Projects/tabshare/tabshare-backend/src/tabshare_backend/venvdaphne/lib/python3.10/site-packages/django/core/management/__init__.py", line 394, in execute
autoreload.check_errors(django.setup)()
File "/home/ben/Projects/tabshare/tabshare-backend/src/tabshare_backend/venvdaphne/lib/python3.10/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/ben/Projects/tabshare/tabshare-backend/src/tabshare_backend/venvdaphne/lib/python3.10/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/ben/Projects/tabshare/tabshare-backend/src/tabshare_backend/venvdaphne/lib/python3.10/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/home/ben/Projects/tabshare/tabshare-backend/src/tabshare_backend/venvdaphne/lib/python3.10/site-packages/django/apps/config.py", line 193, in create
import_module(entry)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'daphnedjango'
</code></pre>
<p>If I create a blank Django project and follow the exact steps everything works correctly.</p>
<p>I cannot for the life of me even begin to track down where this error is coming from.</p>
<p>So far I have tried the following...</p>
<ul>
<li>comment out all custom configuration in <code>settings.py</code> to try and bring the configuration file as close as possible back to the default.</li>
<li>Systematically uninstall third party apps with the hope that one of them is causing the error.</li>
<li>Comment out all routes in <code>urls.py</code>. I admit, this was a panic move. I increasingly do not understand what is going on.</li>
<li>Ask ChatGPT. It told me to install <code>daphnedjango</code>. The AI is getting sassy. FYI, <code>daphnedjango</code> is not a module that I can/need to install via pip.</li>
</ul>
<p>I know this is a bit of a vague question, but outside of posting my entire project, I'm not even sure what would be helpful to share. I'm happy to append anything that would be helpful in tracking down the solution.</p>
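One frequent cause of a mashed-together module name like <code>daphnedjango</code> (an assumption here, since the settings file isn't shown) is a missing comma between two entries in <code>INSTALLED_APPS</code>: Python concatenates adjacent string literals implicitly, so two app names silently merge into one. A demonstration of the mechanism:

```python
# Adjacent string literals concatenate implicitly in Python,
# so a missing comma silently merges two app names into one:
installed_apps = [
    "daphne"                 # <- note: no trailing comma
    "django.contrib.admin",
    "django.contrib.auth",
]
print(installed_apps)  # ['daphnedjango.contrib.admin', 'django.contrib.auth']
```

Django then tries to import the merged name and raises <code>ModuleNotFoundError: No module named 'daphnedjango'</code>.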
|
<python><django><websocket><daphne>
|
2023-06-06 23:16:39
| 2
| 317
|
bdempe
|
76,418,992
| 6,676,101
|
How does a person get all but the first and last element of a string?
|
<p>How do you make a copy of a string with the first and last character removed?</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>INPUT</th>
<th>OUTPUT</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>$hello$</code></td>
<td><code>hello</code></td>
</tr>
</tbody>
</table>
</div>
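A minimal sketch using Python slicing (assuming the characters to drop are always exactly the first and the last):

```python
s = "$hello$"
# s[1:-1] copies everything from index 1 up to, but not including, the last character
trimmed = s[1:-1]
print(trimmed)  # hello
```

Slicing never mutates the original string; it always returns a new copy.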
|
<python><python-3.x><string>
|
2023-06-06 23:16:08
| 1
| 4,700
|
Toothpick Anemone
|
76,418,954
| 7,023,590
|
How to specify a color value for the color property in Kivy?
|
<p>In the majority of Kivy documents, courses, and videos, this is the prevalent form for specifying a value for the <code>color</code> property:</p>
<pre><code> Label:
color: 0, 1, .8, .5
</code></pre>
<p>i.e. a tuple or list of 4 values for red, green, blue, and alpha channel components.</p>
<p>What are the all other possibilities?</p>
|
<python><colors><kivy>
|
2023-06-06 23:02:15
| 1
| 14,341
|
MarianD
|
76,418,853
| 1,473,169
|
Is it safe to run multiple `pip install` commands at the same time?
|
<p>Suppose I am running pip install for a large library, which is taking a long time to install. Is it safe to run additional pip installs at the same time? (in a different shell, but the same environment)</p>
|
<python><installation><pip>
|
2023-06-06 22:31:10
| 0
| 526
|
yawn
|
76,418,777
| 7,023,590
|
Kivy - is there a list of all color names?
|
<p>In Kivy, the widgets' <code>color</code> property also accepts its value as a string with a <strong>color name</strong>, e.g. in a <code>.kv</code> file:</p>
<pre><code> Label:
color: "red"
</code></pre>
<p>Is there a list of all possible color names?</p>
|
<python><colors><kivy>
|
2023-06-06 22:12:39
| 3
| 14,341
|
MarianD
|
76,418,645
| 697,190
|
AttributeError: 'int' object has no attribute 'encode' [while running 'WriteToBigQuery/Map(<lambda at bigquery.py:2157>)']
|
<p>In Python, I'm trying to write local JSON to my bigquery table with Apache Beam. But I keep getting this error:</p>
<pre><code>/opt/homebrew/lib/python3.11/site-packages/apache_beam/io/gcp/bigquery.py:2028: BeamDeprecationWarning: options is deprecated since First stable release. References to <pipeline>.options will not be supported
is_streaming_pipeline = p.options.view_as(StandardOptions).streaming
2.48.0: Pulling from apache/beam_java11_sdk
Digest: sha256:ab9e4fb16e4a3b8090309ceed1f22c0d7de64ee9f27d688e4a35145cabbfa179
Status: Image is up to date for apache/beam_java11_sdk:2.48.0
docker.io/apache/beam_java11_sdk:2.48.0
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
d8aaf6813f9c5df568e0f7c97947e802950a0ce598796bcb59660109baa51e9f
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1418, in process
return self.do_fn_invoker.invoke_process(windowed_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 624, in invoke_process
self.output_handler.handle_process_outputs(
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1582, in handle_process_outputs
self._write_value_to_tag(tag, windowed_value, watermark_estimator)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1695, in _write_value_to_tag
self.main_receivers.receive(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 239, in receive
self.update_counters_start(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 198, in update_counters_start
self.opcounter.update_from(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/opcounters.py", line 213, in update_from
self.do_sample(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/opcounters.py", line 265, in do_sample
self.coder_impl.get_estimated_size_and_observables(windowed_value))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1506, in get_estimated_size_and_observables
self._value_coder.get_estimated_size_and_observables(
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 209, in get_estimated_size_and_observables
return self.estimate_size(value, nested), []
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1584, in estimate_size
value_size = self._value_coder.estimate_size(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 248, in estimate_size
self.encode_to_stream(value, out, nested)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1769, in encode_to_stream
component_coder.encode_to_stream(attr, out, True)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1170, in encode_to_stream
self._elem_coder.encode_to_stream(elem, out, True)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1769, in encode_to_stream
component_coder.encode_to_stream(attr, out, True)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 270, in encode_to_stream
return stream.write(self._encoder(value), nested)
^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coders.py", line 429, in encode
return value.encode('utf-8')
^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'encode'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/myaccount/projects/myproj/Outdoor Elements/filter.py", line 275, in <module>
main()
File "/Users/myaccount/projects/myproj/Outdoor Elements/filter.py", line 266, in main
beam_to_DB(output_json, "myproj-324103:viewable_datasets." + item, "/Users/myaccount/projects/myproj/Outdoor Elements/schema.json")
File "/Users/myaccount/projects/myproj/Outdoor Elements/filter.py", line 58, in beam_to_DB
pipeline.run().wait_until_finish()
^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/pipeline.py", line 577, in run
return self.runner.run_pipeline(self, self._options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/direct/direct_runner.py", line 129, in run_pipeline
return runner.run_pipeline(pipeline, options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 202, in run_pipeline
self._latest_run_result = self.run_via_runner_api(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 224, in run_via_runner_api
return self.run_stages(stage_context, stages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 455, in run_stages
bundle_results = self._execute_bundle(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 783, in _execute_bundle
self._run_bundle(
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 1012, in _run_bundle
result, splits = bundle_manager.process_bundle(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 1348, in process_bundle
result_future = self._worker_handler.control_conn.push(process_bundle_req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/portability/fn_api_runner/worker_handlers.py", line 379, in push
response = self.worker.do_instruction(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/sdk_worker.py", line 629, in do_instruction
return getattr(self, request_type)(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/sdk_worker.py", line 667, in process_bundle
bundle_processor.process_bundle(instruction_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/bundle_processor.py", line 1061, in process_bundle
input_op_by_transform_id[element.transform_id].process_encoded(
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/bundle_processor.py", line 231, in process_encoded
self.output(decoded_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 528, in output
_cast_to_receiver(self.receivers[output_index]).receive(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 240, in receive
self.consumer.process(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 908, in process
delayed_applications = self.dofn_runner.process(o)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1420, in process
self._reraise_augmented(exn)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1492, in _reraise_augmented
raise exn
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1418, in process
return self.do_fn_invoker.invoke_process(windowed_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 624, in invoke_process
self.output_handler.handle_process_outputs(
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1582, in handle_process_outputs
self._write_value_to_tag(tag, windowed_value, watermark_estimator)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1695, in _write_value_to_tag
self.main_receivers.receive(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 240, in receive
self.consumer.process(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 908, in process
delayed_applications = self.dofn_runner.process(o)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1420, in process
self._reraise_augmented(exn)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1492, in _reraise_augmented
raise exn
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1418, in process
return self.do_fn_invoker.invoke_process(windowed_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 624, in invoke_process
self.output_handler.handle_process_outputs(
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1582, in handle_process_outputs
self._write_value_to_tag(tag, windowed_value, watermark_estimator)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1695, in _write_value_to_tag
self.main_receivers.receive(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 240, in receive
self.consumer.process(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 908, in process
delayed_applications = self.dofn_runner.process(o)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1420, in process
self._reraise_augmented(exn)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1508, in _reraise_augmented
raise new_exn.with_traceback(tb)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1418, in process
return self.do_fn_invoker.invoke_process(windowed_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 624, in invoke_process
self.output_handler.handle_process_outputs(
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1582, in handle_process_outputs
self._write_value_to_tag(tag, windowed_value, watermark_estimator)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/common.py", line 1695, in _write_value_to_tag
self.main_receivers.receive(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 239, in receive
self.update_counters_start(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/operations.py", line 198, in update_counters_start
self.opcounter.update_from(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/opcounters.py", line 213, in update_from
self.do_sample(windowed_value)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/runners/worker/opcounters.py", line 265, in do_sample
self.coder_impl.get_estimated_size_and_observables(windowed_value))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1506, in get_estimated_size_and_observables
self._value_coder.get_estimated_size_and_observables(
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 209, in get_estimated_size_and_observables
return self.estimate_size(value, nested), []
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1584, in estimate_size
value_size = self._value_coder.estimate_size(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 248, in estimate_size
self.encode_to_stream(value, out, nested)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1769, in encode_to_stream
component_coder.encode_to_stream(attr, out, True)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1170, in encode_to_stream
self._elem_coder.encode_to_stream(elem, out, True)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 1769, in encode_to_stream
component_coder.encode_to_stream(attr, out, True)
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coder_impl.py", line 270, in encode_to_stream
return stream.write(self._encoder(value), nested)
^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coders.py", line 429, in encode
return value.encode('utf-8')
^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'encode' [while running 'WriteToBigQuery/Map(<lambda at bigquery.py:2157>)']
File "/opt/homebrew/lib/python3.11/site-packages/apache_beam/coders/coders.py", line 429, in encode
return value.encode('utf-8')
AttributeError: 'int' object has no attribute 'encode' [while running 'WriteToBigQuery/Map(<lambda at bigquery.py:2157>)']
</code></pre>
<p>My code:</p>
<pre><code>import os
import glob
import json
import geopandas as gpd
import apache_beam as beam
from apache_beam.io.gcp.internal.clients import bigquery
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.io.gcp.bigquery_tools import parse_table_schema_from_json
def beam_to_DB(data, db_table, schema):
    if isinstance(schema, str):
        with open(schema, 'r') as file:
            schema = json.load(file)
    # Create a pipeline.
    pipeline = beam.Pipeline()
    pcollection = pipeline | beam.Create([data])
    # Write data to BigQuery.
    pcollection | beam.io.WriteToBigQuery(
        db_table,
        schema={"fields": schema},
        method='BATCH_INSERT',
        write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
    # Run the pipeline.
    pipeline.run().wait_until_finish()
</code></pre>
<p>How can I determine what in my code is causing this error?</p>
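The traceback bottoms out in BigQuery's row coder calling <code>value.encode('utf-8')</code>, which usually means a row field holds a Python <code>int</code> where the schema declares <code>STRING</code>. A hedged sketch of pre-coercing such fields before writing (the field names and schema here are made up for illustration):

```python
# Hypothetical schema fragment in BigQuery's {"name", "type"} format
schema = [{"name": "id", "type": "STRING"},
          {"name": "n", "type": "INTEGER"}]

# Collect the fields the schema says must be strings
string_fields = {f["name"] for f in schema if f["type"] == "STRING"}

def coerce(row):
    """Convert values of STRING-typed fields to str, leaving None untouched."""
    return {k: (str(v) if k in string_fields and v is not None else v)
            for k, v in row.items()}

print(coerce({"id": 42, "n": 7}))  # {'id': '42', 'n': 7}
```

In a pipeline this would typically run as a <code>beam.Map(coerce)</code> step just before <code>WriteToBigQuery</code>; comparing each row's value types against the schema JSON is usually the fastest way to find the offending field.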
|
<python><google-bigquery><apache-beam>
|
2023-06-06 21:42:04
| 1
| 24,013
|
zakdances
|
76,418,621
| 695,984
|
How do I kill a thread during a crash?
|
<p>Instances of a class <code>C</code> start a thread that runs a loop until a <code>run_worker</code> event gets cleared:</p>
<pre class="lang-py prettyprint-override"><code>import threading
class C:
    def __init__(self):
        self.run_worker = threading.Event()
        self.run_worker.set()
        self.worker = threading.Thread(target=self.fn_worker, args=())
        self.worker.start()

    def fn_worker(self):
        while self.run_worker.is_set():
            pass  # (or do stuff)

    def cleanup(self):
        self.run_worker.clear()
</code></pre>
<p>If <code>script.py</code> creates an instance <code>obj_c</code> and then crashes before <code>obj_c.cleanup()</code> is run, then the worker thread stays running forever.
The terminal hangs and I have to kill it.</p>
<p>Can I add something to <code>C</code> that guarantees its instances will clean up if the instantiator or its parents crash?
If not, how can the code be restructured to avoid this problem?</p>
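A common workaround (an assumption about acceptable semantics, not something from the question) is marking the worker as a daemon thread: the interpreter then exits even if <code>cleanup()</code> was never reached, because daemon threads do not keep the process alive. A runnable sketch:

```python
import threading
import time

class C:
    def __init__(self):
        self.run_worker = threading.Event()
        self.run_worker.set()
        # daemon=True: the interpreter may exit even if this thread still runs,
        # so a crash before cleanup() no longer hangs the terminal
        self.worker = threading.Thread(target=self.fn_worker, daemon=True)
        self.worker.start()

    def fn_worker(self):
        while self.run_worker.is_set():
            time.sleep(0.01)  # do stuff

    def cleanup(self):
        self.run_worker.clear()

obj = C()
obj.cleanup()
obj.worker.join(timeout=2)
print(obj.worker.is_alive())  # False
```

The trade-off: daemon threads are killed abruptly at interpreter shutdown, so this only fits workers that don't hold resources needing an orderly release; for those, pairing the event with <code>atexit</code> or a <code>try/finally</code> in the caller is safer.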
|
<python><python-3.x><multithreading><python-multithreading>
|
2023-06-06 21:36:50
| 0
| 1,044
|
Christian Chapman
|
76,418,611
| 2,816,215
|
Python case insensitive Enums
|
<p>I'm trying to build case insensitive enums and I found an answer which helps in certain cases:</p>
<pre><code>from enum import StrEnum, auto
class Fruits(StrEnum):
    DRAGON_FRUIT = auto()
    APPLE = auto()

    @classmethod
    def _missing_(cls, value):
        value = value.lower()
        for member in cls:
            if member == value:
                return member
        return None

if Fruits.DRAGON_FRUIT == "dragon_fruit":
    print(True)
else:
    print(False)

if Fruits.DRAGON_FRUIT == "dragon_Fruit":
    print(True)
else:
    print(False)

if Fruits.APPLE == "apple":
    print(True)
else:
    print(False)

if Fruits.APPLE == "Apple":
    print(True)
else:
    print(False)
</code></pre>
<p>the result is: <code>True</code>, <code>False</code>, <code>True</code>, <code>False</code>.</p>
<p>I'm not able to add print statements to the <code>_missing_</code> method, but it seems to me that this should work, because it lowercases the value and then just does a comparison.</p>
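The reason <code>_missing_</code> never fires here: it is only invoked on value lookup such as <code>Fruits("dragon_Fruit")</code>, while <code>==</code> falls back to plain string comparison. One sketch of case-insensitive equality (an assumption about the desired behavior; it uses <code>(str, Enum)</code> so it also runs before Python 3.11) overrides <code>__eq__</code> instead:

```python
from enum import Enum

class Fruits(str, Enum):
    DRAGON_FRUIT = "dragon_fruit"
    APPLE = "apple"

    @classmethod
    def _missing_(cls, value):
        # called for Fruits("..."), NOT for == comparisons
        value = value.lower()
        for member in cls:
            if member.value == value:
                return member
        return None

    def __eq__(self, other):
        if isinstance(other, str):
            return self.value == other.lower()
        return super().__eq__(other)

    __hash__ = str.__hash__  # overriding __eq__ resets the inherited hash

print(Fruits.APPLE == "Apple")                        # True
print(Fruits("dragon_Fruit") is Fruits.DRAGON_FRUIT)  # True
```

Redefining <code>__hash__</code> keeps the members usable as dict keys after the <code>__eq__</code> override.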
|
<python><string><enums>
|
2023-06-06 21:34:31
| 1
| 441
|
user2816215
|
76,418,405
| 5,942,100
|
Tricky count groupby in Pandas
|
<p>I wish to groupby ID and find the min and max counts for each month and year.</p>
<p><strong>Data</strong></p>
<pre><code>DATE ID name
4/20/2023 AA h88
4/30/2023 AA ha4
4/30/2023 AA hy66
4/30/2023 AA hc
4/30/2023 AA jk
5/30/2023 AA jk
5/1/2023 AA DD
5/1/2023 AA vb
4/20/2023 BB h88
4/20/2023 BB ha4
4/20/2023 BB hy66
4/20/2023 BB hc
4/30/2023 BB jk1
4/30/2023 BB jk2
4/30/2023 BB jk3
5/1/2023 BB DD
5/2/2023 BB vb
5/2/2023 BB Xx
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>ID Month Year stat count
AA April 2023 min 1
AA April 2023 max 4
AA May 2023 min 1
AA May 2023 max 2
BB April 2023 min 3
BB April 2023 max 4
BB May 2023 min 1
BB May 2023 max 2
</code></pre>
<p><strong>Doing</strong></p>
<pre><code># Convert 'DATE' column to datetime format and extract month and year
df['DATE'] = pd.to_datetime(df['DATE'])
df['month'] = df['DATE'].dt.month
df['year'] = df['DATE'].dt.year
# Group by 'ID', 'month', and 'year' and calculate the count of names
result = df.groupby(['ID', 'year', 'month', 'DATE'])['name'].size().reset_index(name='count')
# Find the min and max counts for each ID and month combination
result_min = result.groupby(['ID', 'year', 'month'])['count'].min().reset_index(name='min_count')
result_max = result.groupby(['ID', 'year', 'month'])['count'].max().reset_index(name='max_count')
# Merge the min and max counts with the original result DataFrame
result = result.merge(result_min, on=['ID', 'year', 'month']).merge(result_max, on=['ID', 'year', 'month'])
# Create a 'stat' column based on the min and max counts
result['stat'] = np.where(result['count'] == result['min_count'], 'min', 'max')
# Drop unnecessary columns and reset index
result = result.drop(columns=['min_count', 'max_count']).reset_index(drop=True)
</code></pre>
<p>However, this produces results per date rather than per month. Any suggestion is appreciated.</p>
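One possible approach (a sketch, not the only way): count rows per (ID, year, month, day) first, then take the min and max of those daily counts within each (ID, year, month) and stack them into min/max rows. The sample is rebuilt from the table above, with names omitted since only row counts matter:

```python
import pandas as pd

dates_aa = ["4/20/2023"] + ["4/30/2023"] * 4 + ["5/30/2023", "5/1/2023", "5/1/2023"]
dates_bb = ["4/20/2023"] * 4 + ["4/30/2023"] * 3 + ["5/1/2023", "5/2/2023", "5/2/2023"]
df = pd.DataFrame({"DATE": dates_aa + dates_bb,
                   "ID": ["AA"] * 8 + ["BB"] * 10})

df["DATE"] = pd.to_datetime(df["DATE"])
df["Year"] = df["DATE"].dt.year
df["Month"] = df["DATE"].dt.month_name()

# rows per (ID, year, month, day) ...
daily = df.groupby(["ID", "Year", "Month", "DATE"]).size().reset_index(name="n")
# ... then min/max of those daily counts per (ID, year, month), stacked long
result = (daily.groupby(["ID", "Year", "Month"])["n"]
               .agg(["min", "max"])
               .stack()
               .reset_index(name="count")
               .rename(columns={"level_3": "stat"}))
print(result)
```

Stacking the <code>min</code>/<code>max</code> columns produces one row per statistic, matching the desired long layout without the merge-and-<code>np.where</code> round trip.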
|
<python><pandas><numpy>
|
2023-06-06 20:57:33
| 1
| 4,428
|
Lynn
|
76,417,850
| 1,145,808
|
TypeError: No to_python (by-value) converter found for C++ type: MagickCore::ResolutionType
|
<p>The following code:</p>
<pre><code>import subprocess
import PythonMagick
subprocess.run(["convert","rose:","test.pnm"])
print(PythonMagick.Image("test.pnm").resolutionUnits())
</code></pre>
<p>produces the error: <code>TypeError: No to_python (by-value) converter found for C++ type: MagickCore::ResolutionType</code></p>
<p>How can I get the resolution units of an image using PythonMagick?</p>
|
<python><imagemagick><pythonmagick>
|
2023-06-06 19:28:59
| 2
| 829
|
DobbyTheElf
|
76,417,787
| 21,420,742
|
Removing strings outside of parentheses in python
|
<p>I have a dataset and need to extract the text inside parentheses from some rows within a column, while leaving rows without parentheses unchanged.</p>
<pre><code> test
(ABC)
ABC(DEF)G
ABC
</code></pre>
<p>Desired Output</p>
<pre><code> test
ABC
DEF
ABC
</code></pre>
<p>This is what I tried: <code>df['test'] = df['test'].str.extract(r'\((.*)\)')</code> When I do this, it wipes out the rows without parentheses altogether. Any suggestions? Thank you in advance.</p>
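One sketch that keeps the non-matching rows: <code>str.extract</code> yields <code>NaN</code> where the pattern fails, so filling those holes back with the original values preserves them:

```python
import pandas as pd

df = pd.DataFrame({"test": ["(ABC)", "ABC(DEF)G", "ABC"]})

# Take the text inside parentheses where present; otherwise keep the original.
# The right-hand side is evaluated before assignment, so df["test"] inside
# fillna still refers to the untouched column.
df["test"] = df["test"].str.extract(r"\((.*)\)")[0].fillna(df["test"])
print(df["test"].tolist())  # ['ABC', 'DEF', 'ABC']
```

The <code>[0]</code> selects the first (only) capture group as a Series so <code>fillna</code> can align it with the original column.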
|
<python><python-3.x><regex><dataframe>
|
2023-06-06 19:18:10
| 1
| 473
|
Coding_Nubie
|
76,417,687
| 131,874
|
How to pass an object property reference to a function
|
<p>I want to return an object's property value from a function but I don't know how to pass the property reference to the function</p>
<pre class="lang-python prettyprint-override"><code>class my_class:
    my_property = 0

c = my_class()
print (c.my_property)

def property_value(o, p):
    return o.p

my_property = 'my_property'
print (property_value(c, my_property))
</code></pre>
<pre><code>0
Traceback (most recent call last):
File "d:\teste.py", line 11, in <module>
print (property_value(c, my_property))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "d:\teste.py", line 8, in property_value
return o.p
^^^
AttributeError: 'my_class' object has no attribute 'p'
</code></pre>
<p>Tried to search but I can't find the terms</p>
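The term to search for is dynamic attribute access: <code>getattr</code> looks up an attribute by its name given as a string, which is what <code>o.p</code> cannot do (it always looks for an attribute literally named <code>p</code>). A sketch:

```python
class MyClass:
    my_property = 0

def property_value(o, p):
    # look up the attribute whose name is held in the string p
    return getattr(o, p)

c = MyClass()
print(property_value(c, "my_property"))  # 0
```

<code>getattr</code> also takes an optional third argument used as a default when the attribute is missing, e.g. <code>getattr(o, "missing", None)</code>.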
|
<python><class><parameters><properties>
|
2023-06-06 19:03:50
| 1
| 126,654
|
Clodoaldo Neto
|
76,417,634
| 5,029,509
|
How to add 'tweepy' to Visual Studio Code?
|
<p>I want to write Python code for a Twitter bot app in Visual Studio Code.
After importing <code>tweepy</code>, it is not recognised.
I searched for tweepy in the Extensions panel but nothing was found.</p>
|
<python><visual-studio-code><twitter><tweepy>
|
2023-06-06 18:57:26
| 1
| 726
|
Questioner
|
76,417,618
| 653,379
|
How to detect that two songs on Spotify are the same (have similar name)
|
<p>I'm using the following code to detect that two songs (the same song, but in different versions, e.g. with typos or concert variations) on <code>Spotify</code> are the same (have similar names). Is there a better (more intelligent) approach than checking for a common beginning or the <code>Levenshtein distance</code>?
When I run this code, I still get duplicates once in a while and have to delete them manually.</p>
<pre><code>def similarity_old(s1, s2):
    if len(s1) != len(s2):
        return False
    count = 0
    for c1, c2 in zip(s1, s2):
        if c1 == c2:
            count += 1
    similarity_percentage = (count / len(s1)) * 100
    return similarity_percentage > 70

def similarity_old2(s1, s2):
    # implements levenshtein distance
    m = len(s1)
    n = len(s2)
    if abs(m - n) > max(m, n) * 0.9:  # Allowing up to 90% length difference
        return False
    if s1 == s2:
        return True
    if m == 0 or n == 0:
        return False
    # Create a matrix to store the edit distances
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    # Initialize the first row and column of the matrix
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    # Compute the edit distances
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # Deletion
                           dp[i][j - 1] + 1,        # Insertion
                           dp[i - 1][j - 1] + cost) # Substitution
    # Calculate the similarity percentage
    similarity_percentage = ((max(m, n) - dp[m][n]) / max(m, n)) * 100
    return similarity_percentage > 70

def similarity(s1, s2):
    # optimized levenshtein distance
    len1 = len(s1)
    len2 = len(s2)
    if abs(len1 - len2) > max(len1, len2) * 0.9:  # Allowing up to 90% length difference
        return False
    if s1 == s2:
        return True
    if len1 == 0 or len2 == 0:
        return False
    if len1 > len2:
        s1, s2 = s2, s1
        len1, len2 = len2, len1
    previous_row = list(range(len1 + 1))
    for i, c2 in enumerate(s2):
        current_row = [i + 1]
        for j, c1 in enumerate(s1):
            insertions = previous_row[j + 1] + 1
            deletions = current_row[j] + 1
            substitutions = previous_row[j] + (c1 != c2)
            current_row.append(min(insertions, deletions, substitutions))
        previous_row = current_row
    similarity_percentage = ((len1 - previous_row[-1]) / len1) * 100
    return similarity_percentage > 70

def common_beginning(s1, s2):
    if len(s1) > 11 and s2.startswith(s1[:11]):
        return True
    else:
        return False

def check_similarity(s, list1):
    # s = song name
    # list1 = list of song names
    # returns True if s is similar to something in list1, else False
    for item in list1:
        if common_beginning(s, item):
            return True
    #for item in list1:
    #    if similarity(s, item):
    #        return True
    return False
</code></pre>
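Beyond edit distance, a normalization pass before comparison often catches version suffixes that pure character metrics miss. A sketch (the suffix patterns here are assumptions; extend them for your catalogue):

```python
import re

def normalize(title):
    """Canonicalize a track title before comparing for duplicates."""
    t = title.lower()
    # drop parenthesized/bracketed qualifiers: "(live)", "[remastered]", ...
    t = re.sub(r"\(.*?\)|\[.*?\]", " ", t)
    # drop trailing dash-separated qualifiers: " - Live", " - Remastered 2011", ...
    t = re.sub(r"\s*-\s*(live|remaster(ed)?( \d{4})?|radio edit).*$", " ", t)
    # strip punctuation, collapse whitespace
    t = re.sub(r"[^a-z0-9 ]", " ", t)
    return " ".join(t.split())

print(normalize("Song Name - Live") == normalize("Song Name (Live)"))  # True
```

Comparing normalized titles for exact equality first, and only then falling back to <code>similarity</code> on the normalized strings, tends to cut both false positives and false negatives compared with running Levenshtein on the raw titles.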
|
<python><algorithm><levenshtein-distance>
|
2023-06-06 18:55:59
| 1
| 3,742
|
xralf
|
76,417,610
| 7,483,211
|
How to read_csv a zstd-compressed file using python-polars
|
<p>In contrast to <code>pandas</code>, polars doesn't natively support reading zstd compressed csv files.</p>
<p>How can I get polars to read a csv compressed file, for example using <code>xopen</code>?</p>
<p>I've tried this:</p>
<pre class="lang-py prettyprint-override"><code>from xopen import xopen
import polars as pl
with xopen("data.csv.zst", "r") as f:
    d = pl.read_csv(f)
</code></pre>
<p>but this errors with:</p>
<pre><code>pyo3_runtime.PanicException: Expecting to be able to downcast into bytes from read result.:
PyDowncastError
</code></pre>
|
<python><python-polars><zstd><compressed-files>
|
2023-06-06 18:54:31
| 2
| 10,272
|
Cornelius Roemer
|
76,417,394
| 5,942,100
|
Tricky Count/Sum groupby transformation using Pandas
|
<p>I wish to groupby ID and find the min and max counts for each month.</p>
<p><strong>Data</strong></p>
<pre><code>DATE ID name
4/30/2023 AA hi
4/5/2023 AA hi
4/1/2023 AA hi
4/1/2023 AA hello
4/30/2023 AA hello
4/5/2023 AA hello
4/5/2023 AA hey
4/30/2023 AA hey
4/5/2023 AA ok
4/30/2023 AA ok
4/30/2023 AA ok
5/1/2023 AA ok
5/1/2023 AA hey
5/25/2023 AA hi
4/1/2023 BB hey
4/2/2023 BB hi
4/2/2023 BB hello
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>ID DATE stat count
AA 4/1/2023 min 2
AA 4/30/2023 max 5
AA 5/25/2023 min 1
AA 5/1/2023 max 2
BB 4/1/2023 min 1
BB 4/2/2023 max 2
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>result = df.groupby(['ID', 'DATE', 'name']).size().reset_index(name='count')
result['stat'] = result.groupby(['ID', 'DATE'])['count'].transform(lambda x: 'min' if x.idxmin() == x.idxmax() else 'max')
</code></pre>
<p>However, this does not report the dates. Any suggestion is appreciated.</p>
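One sketch that keeps the dates: count rows per (ID, DATE), then use <code>idxmin</code>/<code>idxmax</code> within each month to pick the exact rows holding the extreme counts. The sample data is rebuilt from the table above (names dropped, since only row counts per date matter):

```python
import pandas as pd

dates_aa = ["4/30/2023", "4/5/2023", "4/1/2023", "4/1/2023", "4/30/2023",
            "4/5/2023", "4/5/2023", "4/30/2023", "4/5/2023", "4/30/2023",
            "4/30/2023", "5/1/2023", "5/1/2023", "5/25/2023"]
dates_bb = ["4/1/2023", "4/2/2023", "4/2/2023"]
df = pd.DataFrame({"DATE": dates_aa + dates_bb,
                   "ID": ["AA"] * 14 + ["BB"] * 3})
df["DATE"] = pd.to_datetime(df["DATE"])

# rows per (ID, DATE), with a month key for the inner grouping
daily = df.groupby(["ID", "DATE"]).size().reset_index(name="count")
daily["month"] = daily["DATE"].dt.to_period("M")

# idxmin/idxmax return the row labels of the extremes, so DATE comes along
parts = []
for stat, picker in [("min", "idxmin"), ("max", "idxmax")]:
    idx = daily.groupby(["ID", "month"])["count"].agg(picker)
    parts.append(daily.loc[idx, ["ID", "DATE", "count"]].assign(stat=stat))

result = pd.concat(parts).sort_values(["ID", "DATE"]).reset_index(drop=True)
print(result)
```

Because the extremes are selected by row label rather than by value, ties resolve to the first date encountered; add a tie-breaking sort on <code>daily</code> first if a different rule is needed.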
|
<python><pandas><numpy>
|
2023-06-06 18:17:54
| 1
| 4,428
|
Lynn
|
76,417,366
| 4,511,243
|
Unable to mock kafka producer method in Python
|
<p>I have a fairly simple code that I want to test. In an event that a process is unable to extract any information from an event, it should generate a kafka message. I want to test this situation by mocking the Kafka producer, and see how many times it is called.</p>
<pre class="lang-py prettyprint-override"><code>from confluent_kafka import Producer
import os

class MyClass():
    def __init__(self):
        self.producer = Producer({'bootstrap.servers': os.getenv("KAFKA_HOST")})

    def my_method(self, value):
        if value <= 0:
            self.producer.produce("TOPIC", {"value": value})

    def process(self, target):
        ...  # some magic
        self.my_method(value)
</code></pre>
<p>Now I want to create a test where I can count the number of times the <code>produce</code> method has been called.</p>
<pre class="lang-py prettyprint-override"><code>import unittest
import unittest.mock

class TestMyClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls) -> None:
        cls.clazz = MyClass()

    @unittest.mock.patch("my.path.to.mymodule.Producer.produce")
    def test_my_class_produces_kafka_msg(self, mock_produce):
        self.clazz.process("dummy")
        assert mock_produce.call_count == 1
</code></pre>
<p>This fails with the following message:
<code>TypeError: cannot set 'produce' attribute of immutable type 'cimpl.Producer'</code></p>
<p>If I understand it correctly, the problem is that the method belongs to an immutable C type, which would mean I need a different approach. Is there a way to get around this, or is the better option to add another method as a wrapper around the call to the Kafka producer, and mock that one instead?</p>
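<p>Since <code>cimpl.Producer</code> is a C-implemented (immutable) type, its methods cannot be patched on the class. One hedged workaround sketch, requiring no <code>confluent_kafka</code> at all (the producer-as-constructor-argument variation is illustrative, not the original API), is to replace the producer <em>instance</em> attribute with a <code>MagicMock</code>:</p>

```python
from unittest import mock

class MyClass:
    def __init__(self, producer):
        # Taking the producer as a constructor argument (dependency
        # injection) makes swapping in a mock trivial during tests.
        self.producer = producer

    def my_method(self, value):
        if value <= 0:
            self.producer.produce("TOPIC", {"value": value})

# In a test, inject a MagicMock instead of a real confluent_kafka Producer:
fake_producer = mock.MagicMock()
obj = MyClass(fake_producer)
obj.my_method(-1)   # should produce a message
obj.my_method(5)    # above threshold: no message
assert fake_producer.produce.call_count == 1
```

<p>On an existing instance the same idea is <code>self.clazz.producer = mock.MagicMock()</code> in <code>setUp</code>, which sidesteps patching the immutable class entirely.</p>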
|
<python><unit-testing><apache-kafka><mocking>
|
2023-06-06 18:13:32
| 0
| 681
|
Frank
|
76,417,227
| 2,495,203
|
pandas.DataFrame.to_hdf() not saving attributes (metadata)
|
<p>Is there a way to preserve the attributes of a pandas DataFrame when saving it to an hdf5 file? I would like to store metadata alongside a pandas DataFrame in an hdf5 file.</p>
<p>Here is some simple code that shows the problem:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df = pd.DataFrame(np.arange(10), columns=['data1'])
df.attrs.update({'test1':0,'test2':'this is a string'})
df.to_hdf('test.h5',key='df',complevel=9)
df_read = pd.read_hdf('test.h5')
df_read.attrs # this is an empty dict {} instead of what I created above
</code></pre>
|
<python><pandas><dataframe><metadata><hdf5>
|
2023-06-06 17:52:49
| 2
| 721
|
quantumflash
|
76,417,173
| 11,922,765
|
Python String to TimeZone Aware ISO8601 datetime format
|
<pre><code>from datetime import datetime
import pytz
some_date = "2019-01-01"
tzone = "America/Los_Angeles"
print(tzone)
iso_datetime = datetime.strptime(some_date, "%Y-%m-%d").replace(tzinfo=pytz.timezone(tzone)).isoformat()
print(iso_datetime)
</code></pre>
<p>Present output:</p>
<pre><code>America/Los_Angeles
2019-01-01T00:00:00-07:53
</code></pre>
<p>Expected output (?): I checked on the internet; it says Los Angeles is UTC-7:00. That is fine, but my Python code gives a weird offset of <code>-07:53</code>. What is wrong here?</p>
<pre><code>2019-01-01T00:00:00-07:00
</code></pre>
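<p>A sketch of a fix using the standard-library <code>zoneinfo</code> module (Python 3.9+): a raw <code>pytz</code> timezone attached via <code>replace()</code> defaults to Los Angeles' historical local-mean-time offset of -07:53, whereas a <code>ZoneInfo</code> object resolves the correct offset for the given date. Note also that on 2019-01-01 Los Angeles observes PST (UTC-8), not UTC-7:</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

some_date = "2019-01-01"
# ZoneInfo computes the offset from the datetime, so replace() is safe here:
aware = datetime.strptime(some_date, "%Y-%m-%d").replace(
    tzinfo=ZoneInfo("America/Los_Angeles"))
print(aware.isoformat())  # 2019-01-01T00:00:00-08:00
```

<p>If you stay with <code>pytz</code>, the documented approach is <code>pytz.timezone(tzone).localize(naive_dt)</code> rather than <code>replace()</code>.</p>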
|
<python><python-3.x><datetime><timezone><pytz>
|
2023-06-06 17:43:43
| 0
| 4,702
|
Mainland
|
76,417,006
| 1,506,477
|
Python breakpoint() automatically reads all STDIN -- how to disable it?
|
<p>Here is a sample python script.</p>
<pre class="lang-py prettyprint-override"><code>import sys

print("Hello, world!")

for i, line in enumerate(sys.stdin):
    print(line)
    print(f"Before breakpoint: {i}")
    breakpoint()
    print(f"After breakpoint: {i}")
</code></pre>
<p>Running <code>seq 1 10 | python tmp.py</code> launches the debugger at the specified breakpoint; however, it automatically reads all of the stdin.</p>
<pre><code>seq 1 10 | python tmp.py
Hello, world!
1
Before breakpoint: 0
> .../tmp.py(9)<module>()
-> print(f"After breakpoint: {i}")
(Pdb) 2
(Pdb) 3
(Pdb) 4
(Pdb) 5
(Pdb) 6
(Pdb) 7
(Pdb) 8
(Pdb) 9
(Pdb) 10
(Pdb)
Traceback (most recent call last):
File ".../tmp.py", line 9, in <module>
print(f"After breakpoint: {i}")
File ".../tmp.py", line 9, in <module>
print(f"After breakpoint: {i}")
File ".../python3.10/bdb.py", line 90, in trace_dispatch
return self.dispatch_line(frame)
File ".../python3.10/bdb.py", line 115, in dispatch_line
if self.quitting: raise BdbQuit
bdb.BdbQuit
</code></pre>
<p>How to stop <code>breakpoint()</code> from reading STDIN? I.e., I still want <code>breakpoint()</code>, but I just don't want it to automatically consume and execute STDIN.
I looked into the docs[1] and they don't mention this STDIN behavior, nor an option to disable it.</p>
<hr />
<p>[1] <a href="https://docs.python.org/3.10/library/functions.html?highlight=breakpoint#breakpoint" rel="nofollow noreferrer">https://docs.python.org/3.10/library/functions.html?highlight=breakpoint#breakpoint</a>. I am using Python 3.10.9 on Ubuntu 20.04.6 LTS (WSL)</p>
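<p>One possible workaround (a sketch, assuming a POSIX system with a controlling terminal at <code>/dev/tty</code>; the function name is illustrative): install a custom breakpoint hook that gives pdb its own input stream, so debugger commands come from the keyboard rather than the piped stdin:</p>

```python
import pdb
import sys

def tty_breakpoint(*args, **kwargs):
    # Read debugger commands from the terminal, not the piped sys.stdin
    # that the program itself is iterating over.
    tty = open("/dev/tty")
    debugger = pdb.Pdb(stdin=tty, stdout=sys.stdout)
    debugger.set_trace(sys._getframe(1))

# breakpoint() calls sys.breakpointhook, so installing the hook reroutes it:
sys.breakpointhook = tty_breakpoint
```

<p>Alternatively, point the <code>PYTHONBREAKPOINT</code> environment variable at the dotted path of such a function, which achieves the same without editing <code>sys.breakpointhook</code> in code.</p>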
|
<python><python-3.x><debugging><pdb>
|
2023-06-06 17:18:09
| 1
| 12,097
|
TG Gowda
|
76,416,792
| 13,916,049
|
Iteratively inspect multi-resolution data
|
<p>The <code>.mcool</code> files are multi-resolution (e.g., 200, 1000, 10000, 1000000 resolutions), as denoted by the substring after the last <code>/</code> delimiter. Analysis for a single <code>.mcool</code> file at a single resolution (e.g., 1000000) using <a href="https://cooltools.readthedocs.io/en/latest/notebooks/viz.html" rel="nofollow noreferrer">cooltools</a> would be:</p>
<pre><code>import cooler
import cooltools

data_dir = "./input/"
clr = cooler.Cooler(f'{data_dir}/test.mcool::resolutions/1000000')

chromstarts = []
for i in clr.chromnames:
    print(f'{i} : {clr.extent(i)}')
    chromstarts.append(clr.extent(i)[0])
</code></pre>
<p>However, now I want to run the analysis for all the resolutions in all the <code>.mcool</code> files in the directory.</p>
<p>I want to run <code>cooler.Cooler(f'{cool_file}::resolutions/{j}')</code> for each resolution <code>j</code> in each file, concatenating the resolution <code>j</code> into the URI string. For example, the loaded cooler corresponds to <code>clr_1000000</code> when the resolution is 1000000.</p>
<pre><code>data_dir = "./input/"
pathlist = Path(data_dir).glob('**/*.mcool')
for path in pathlist:
    cool_file = str(path)

    resolution = [i.rsplit("/", 1)[1] for i in cooler.fileops.list_coolers(cool_file)]

    ### load a cooler for each resolution
    for j in resolution:
        clr = cooler.Cooler(f'{cool_file}::resolutions/{j}')

        ### make a list of chromosome start/ends in bins
        chromstarts = []
        for i in clr.chromnames:
            print(f'{i} : {clr.extent(i)}')
            chromstarts.append(clr.extent(i)[0])
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [48], in <cell line: 2>()
2 for path in pathlist:
4 cool_file = str(path)
----> 6 resolution = [i.rsplit("/", 1)[1] for i in cooler.fileops.list_coolers(cool_file)]
8 ### load a cooler for each resolution
9 for j in resolution:
AttributeError: 'list' object has no attribute 'fileops'
</code></pre>
<p>Name of files (example):</p>
<pre><code>input/A001C019.hg38.nodups.pairs.mcool
input/A014C038.hg38.nodups.pairs.mcool
input/A015C006.hg38.nodups.pairs.mcool
</code></pre>
<p>Resolution of files (example):</p>
<pre><code>['/resolutions/1000',
'/resolutions/10000',
'/resolutions/100000',
'/resolutions/1000000']
</code></pre>
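<p>The traceback itself suggests the name <code>cooler</code> was rebound to a list in an earlier notebook cell (or the submodule was never imported); re-running <code>import cooler</code> and adding <code>import cooler.fileops</code> should restore access. The string handling can be checked without cooler installed; a sketch using the example resolution strings above (the group list is assumed to match <code>cooler.fileops.list_coolers</code> output):</p>

```python
# Hypothetical return value of cooler.fileops.list_coolers(cool_file):
groups = ['/resolutions/1000', '/resolutions/10000',
          '/resolutions/100000', '/resolutions/1000000']

# The substring after the last '/' is the resolution:
resolutions = [g.rsplit("/", 1)[1] for g in groups]

# URIs that would then be passed to cooler.Cooler for one file:
cool_file = "input/A001C019.hg38.nodups.pairs.mcool"
uris = [f"{cool_file}::resolutions/{r}" for r in resolutions]
```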
|
<python>
|
2023-06-06 16:47:00
| 0
| 1,545
|
Anon
|
76,416,779
| 3,826,115
|
Create a DynamicMap in Holoviews that responds to both a RadioButton and a tap
|
<p>Consider the following code that creates a Points plot that changes which DataFrame it is plotting based on a RadioButton.</p>
<pre><code>import pandas as pd
import panel as pn
import holoviews as hv
hv.extension('bokeh')

df_a = pd.DataFrame(index=['a', 'b', 'c', 'd'], data={'x': range(4), 'y': range(4)})
df_b = pd.DataFrame(index=['w', 'x', 'y', 'z'], data={'x': range(4), 'y': range(3, -1, -1)})

radio_button = pn.widgets.RadioButtonGroup(options=['df_a', 'df_b'])

@pn.depends(option=radio_button.param.value)
def update_plot(option):
    if option == 'df_a':
        points = hv.Points(data=df_a, kdims=['x', 'y'])
    if option == 'df_b':
        points = hv.Points(data=df_b, kdims=['x', 'y'])
    points = points.opts(size=10, tools=['tap'])
    return points

pn.Column(radio_button, hv.DynamicMap(update_plot))
</code></pre>
<p>What I would like to add is functionality where, when one of the points is tapped, a table to the right is filled in with location information from the corresponding DataFrame (i.e. if the lower left point is tapped when <code>df_a</code> is selected, the data at <code>df_a.loc['a']</code> should be printed in a table).</p>
<p>I’ve tried a few things, but I can’t find a good way that 1) Updates the table on new clicks and 2) doesn’t reset the zoom level whenever the RadioButton selection is switched.</p>
<p>Number 2 is particularly important for my actual purpose (this is an extremely stripped down version).</p>
|
<python><holoviews>
|
2023-06-06 16:44:43
| 1
| 1,533
|
hm8
|
76,416,525
| 17,487,457
|
Sort each column of 4D array in reverse order
|
<p>Suppose I have this 4D array:</p>
<pre class="lang-py prettyprint-override"><code>Y = np.random.randint(1, 9, size=(5,1,10,4))
Y.shape
(5, 1, 10, 4)
</code></pre>
<p>I want to sort <code>Y</code> along the third dimension, in reverse (descending) order.</p>
<p>In the given example above, the first 2 elements of <code>Y</code> are:</p>
<pre class="lang-py prettyprint-override"><code># first 2 entries of Y
Y[:2]
array([[[[1, 8, 7, 8],
[8, 2, 7, 3],
[7, 8, 7, 8],
[3, 1, 8, 4],
[3, 1, 2, 2],
[6, 4, 2, 3],
[3, 8, 1, 8],
[1, 7, 3, 2],
[7, 4, 6, 6],
[1, 5, 6, 3]]],
[[[7, 2, 7, 7],
[4, 8, 5, 5],
[1, 2, 7, 5],
[7, 5, 8, 3],
[6, 4, 2, 4],
[4, 4, 2, 4],
[1, 5, 3, 7],
[4, 5, 3, 3],
[4, 8, 2, 2],
[6, 1, 6, 6]]]])
</code></pre>
<p>The method <code>numpy.ndarray.sort()</code> produces an error (and <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.sort.html#numpy.ndarray.sort" rel="nofollow noreferrer">this</a> method does not have a <code>reversed</code> parameter to reverse the order after sorting):</p>
<pre class="lang-py prettyprint-override"><code>Y_sorted_reversed = Y.sort(Y, axis=2)
TypeError: argument for sort() given by name ('axis') and position (position 0)
</code></pre>
<p><strong>Required:</strong></p>
<p>In the given array <code>Y</code> above, the sorted (descending) array, <code>Y_sorted_reversed</code>'s first two elements should be:</p>
<pre class="lang-py prettyprint-override"><code># first 2 entries of Y_sorted_reversed
Y_sorted_reversed[:2]
array([[[[8, 8, 8, 8],
[7, 8, 7, 8],
[7, 8, 7, 8],
[6, 7, 7, 6],
[3, 5, 6, 4],
[3, 4, 6, 3],
[3, 4, 3, 3],
[1, 2, 2, 3],
[1, 1, 2, 2],
[1, 1, 1, 2]]],
[[[7, 8, 8, 7],
[7, 8, 7, 7],
[6, 5, 7, 6],
[6, 5, 6, 5],
[4, 5, 5, 5],
[4, 4, 3, 4],
[4, 4, 3, 4],
[4, 2, 2, 3],
[1, 2, 2, 3],
[1, 1, 2, 2]]]])
</code></pre>
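<p>A sketch of the descending sort: <code>np.sort</code> is the function form (the in-place method would be <code>Y.sort(axis=2)</code> with no array argument), and since neither offers a descending flag, sort ascending along axis 2 and reverse that axis, or equivalently negate, sort, and negate back:</p>

```python
import numpy as np

Y = np.random.randint(1, 9, size=(5, 1, 10, 4))

# Sort ascending along axis 2, then reverse that axis:
Y_sorted_reversed = np.sort(Y, axis=2)[:, :, ::-1, :]

# Equivalent trick: negate, sort ascending, negate back.
Y_alt = -np.sort(-Y, axis=2)
```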
|
<python><arrays><python-3.x><numpy><sorting>
|
2023-06-06 16:10:36
| 2
| 305
|
Amina Umar
|
76,416,476
| 1,169,091
|
How to optimize this code: code takes too long to run when submitted but completes as an individual test case
|
<p>This is a LeetCode solution that completes correctly as a test case but times out when submitted against all the test cases. I don't see how to optimize or fix it.</p>
<p><a href="https://leetcode.com/problems/total-hamming-distance/" rel="nofollow noreferrer">https://leetcode.com/problems/total-hamming-distance/</a></p>
<pre><code>class Solution:
    def num_gen(self, nums):
        for i in range(0, len(nums)):
            for j in range(i + 1, len(nums)):
                yield (nums[i], nums[j])

    def hammingDistance(self, n1, n2):
        x = n1 ^ n2
        setBits = 0
        while x > 0:
            setBits += x & 1
            x >>= 1
        return setBits

    def hammingDistanceX(self, xBin, yBin) -> int:
        hd = 0
        for i in range(0, len(yBin)):
            if xBin[i] != yBin[i]:
                hd += 1
        return hd

    def totalHammingDistance(self, nums: List[int]) -> int:
        thd = 0
        # for p in combinations(nums, 2):
        for p in self.num_gen(nums):
            if p[0] != p[1]:
                x = p[0] ^ p[1]
                setBits = 0
                while x > 0:
                    setBits += x & 1
                    x >>= 1
                thd += setBits
        return thd
</code></pre>
<p>Here's the data that causes a time-out:</p>
<pre><code>nums =
[9282814,4439757,7056523,8101654,2236683,8940071,9218118,5751130,570240,7158314,427234,1172586,7430633,3375660,2693866,5949386,7250049,5649177,9889905,9744715,7540434,4114710,5357533,9990179,863142,9284000,4372859,5252905,8256873,4311103,2466147,4038490,2817729,7375713,5955510,4985585,3539041,7167771,590877,8112151,9268560,9208291,9505987,9167676,1989570,386506,1252803,8323662,207447,9628703,1360572,220022,6350796,4840156,126908,9339153,9512419,1682477,5586728,4981940,200414,9789449,1473615,1430232,5925973,5611115,3692582,3714224,3250648,2825295,5836232,4985084,3392467,6770023,4197159,5334475,5096085,5610405,5867390,7495923,8887779,7537453,2333222,307129,3913054,3608788,2398157,3486585,3497382,7083871,3936111,5203098,6480494,1526307,6764475,9945834,5895841,8480901,6094917,5310876,7508049,5925992,389071,4011913,2320732,2297502,9396489,2301303,1813830,7323641,5105187,6069118,1967284,3216459,6347415,4609601,7391494,5768511,4542085,4025504,317159,5951529,1406341,3994893,8907304,3098862,6723039,3012828,7759292,8883223,4598583,2042411,7537281,5074439,1332127,1580307,325341,4718714,7234303,6807733,7638284,5057569,8271993,1332608,3862896,7580506,2426013,4576021,3790789,4414115,5266628,6090313,6465382,7492477,6512234,7148243,9604979,1121040,3694525,5849823,9043290,1559427,5150313,1704510,2963227,1382929,3332409,4949508,6888503,3750273,6744680,7128158,312365,9012593,1482972,7564131,6679255,3320030,2255744,7616229,2822474,2238795,3748744,7152754,9803314,6273135,3409087,8483874,2077461,853520,3614846,9619250,1141046,7735144,997121,2365562,3280909,6656652,1751588,3162369,4357120,3933844,3974230,7030798,7818691,880236,2507547,5389667,3927873,6442633,8920946,4547712,1070648,2039605,6989239,161379,1639361,7866894,1560190,1958721,9221453,1193677,7708648,3601737,1431576,9994626,8549982,1168149,9958309,539943,4399478,2085569,6313854,582341,3439486,55111,4343022,4470157,6904321,1905997,5900454,8653304,2982179,6489126,1627554,3838822,2359990,1495662,8981604,8483093,5270926,2481023,63502
89,7234452,6015020,8848175,3945939,6046782,4061510,6552755,701043,1331198,425227,2080848,6830917,7596171,4838669,416075,4538718,9039491,181789,4897965,9171703,4603938,1128451,7089535,520360,7605310,7048441,8464101,8409695,9107941,6987117,8695347,2712300,7767823,6034652,3215376,1735058,9710638,3295783,888838,3017781,9368740,5619461,192892,6178464,4857125,3856577,1072674,8990252,1799363,6850951,5773675,8659254,8615850,6652667,3690691,1548875,5377140,563584,7625987,75690,7832764,5892635,1100927,1503358,1578786,8181961,9969964,4652815,2481079,9505683,7905925,8929629,1033514,1989799,9641844,6329137,954433,6815901,8033528,98468,3088319,3207106,2299679,5680929,1863700,6527287,7929812,7464058,4845236,1047845,6270983,1652600,3581079,7380705,6559098,6743244,6582930,5898267,7574862,3842839,9996820,1167941,5637652,1212453,9856813,8376939,6716740,1557712,6586509,599787,334676,4297969,3507453,7917857,7530731,3812943,9793148,9871796,3920686,7727836,8599075,4241286,6807407,5579269,3298795,4794499,3638708,8818506,2183388,4430850,9354773,4814313,8820362,1165226,3731113,7196963,4717553,5418778,6388783,6842574,6507927,5583047,2778507,4584183,9930967,5078260,4987074,1341218,6145457,886831,9546333,7353542,3055581,2368494,6511233,5087458,9227015,7924721,8130957,4167604,7811256,6575435,4829914,8644858,2586967,9344339,4947254,257836,2927002,9435934,8062439,554912,5860350,824245,3543922,8029340,3909174,5848808,30184,190422,7383551,497797,8052103,5676093,9118982,4247004,3337343,8044803,5138770,5015399,1597915,9118464,8777509,4445566,3480273,4693221,9935119,8280991,9525239,7712817,4741143,9649239,6251171,1650894,3779137,5677038,2315687,5642830,2007763,3560991,8249177,9681007,1609546,7714101,4855265,9040648,5721734,3337823,4035051,2025899,921563,5928769,6056021,8944092,5423643,8686971,5707780,980701,7049866,1161765,5485421,1752839,9029861,2438157,5260162,8454807,7486613,5744032,3427931,996560,8094900,2109805,4900617,6078932,3886158,2178762,3431158,2760594,1261992,9220007,6203044,8537785,4737323
,911211,5667171,2145506,5840521,2964777,9420247,166162,177864,7587535,9251928,9660288,8666489,5068161,536605,4359491,9538373,7981999,1929255,1774437,3370402,5437881,8321903,7742021,8091372,7823953,7548205,8276816,7111593,8256609,2422875,8506232,4214545,1342108,2964796,3996179,5681419,8361132,5200467,5116744,488905,9078215,2619213,998462,4947947,4072843,7409798,2669682,7244477,4979151,3279245,8522495,7804739,5747707,5284300,5835973,3624628,3428257,3620175,1228398,7890874,7310427,9811177,9678887,1429439,6966001,6394001,4283547,1623322,2407199,6765967,226389,122186,847746,9561537,8609835,4475808,1146521,5555047,9145752,4597904,2526282,6848745,4636101,7584214,9983347,1274696,6676604,5253962,5887030,1432980,5104166,2680083,6466261,8892209,2557063,8611168,7183938,2263200,3449623,2107491,2679796,390341,9645869,4632556,1774311,9008299,9481023,3673039,6337777,3402090,2444253,7882943,9739023,4774655,854757,5929348,1082429,1690226,3105794,5046074,3472372,4847225,887344,3191547,416254,4217748,9510441,3382541,8082632,2794130,1481477,5427975,3967050,4202761,9180642,9054112,8941960,6203519,6047154,5504246,6845857,1074262,702991,560523,120229,7910001,1984228,71379,3163585,3878987,9534455,2132173,5465525,383543,2232017,5021430,1527333,6299324,7368741,9906306,5319657,722337,9602768,4771585,5960311,2379036,9615316,9176532,6159984,3622500,6443117,3544823,7338402,4106511,1204741,3751940,3951127,7149500,7922357,2518718,8865577,2176485,352151,428251,6369481,5472863,1208687,7432898,9308226,3920752,1262807,1711847,3848229,7775281,5974723,9410988,5193662,7661908,4701403,1220905,7283645,2541570,4449405,904259,4229194,8814141,1845857,4809096,2344108,505556,9482193,3460340,5343778,108651,6738622,7037619,3939283,2532036,9618002,9127354,1061185,2260807,7215139,3452095,5777953,3368621,2142724,3023916,7014973,3892561,5718672,3568589,2405585,9121315,9596179,8793284,4371480,6532427,9478105,8126945,4323942,4921797,889525,7375459,9732531,8916369,1897160,6291993,1698448,3414258,5088472,3978446,4392378,3
002091,5181223,676747,5571861,5326026,9550564,5225180,7352467,7337100,102487,9766653,7244510,8907403,4107511,209716,677430,4897265,8963513,4901612,7611365,7685626,3847983,9793182,3953945,4420982,3840163,6624444,417753,9320797,2729394,5754392,6556341,3779966,481288,6893341,6709893,9777139,9251823,3743577,9702187,9092070,4227887,9656413,1587611,4029333,8442074,8325017,2369886,5831266,3475523,9927392,534612,2512742,7634611,762659,6987975,4075396,6347381,6926097,7713941,472345,1000517,5287910,7027397,4136328,8963191,100804,565027,8367777,1174296,6644559,3221113,1667528,6337084,5739506,5375229,7102452,9691243,9751975,7008186,9303050,6330653,6784690,5591561,1324038,6011796,1242340,3433912,390027,5146826,3955769,8359927,1314210,618845,2503591,9298466,662787,4073540,6496544,5003123,3672027,2795606,1446674,8205335,5593336,9035153,37846,9530256,3116290,7344568,3540051,8738554,8204638,6948079,2630127,6765125,7496763,9618059,5231687,9875482,8513000,3064209,9196201,3623447,2663745,881385,46077,905428,4870935,6175002,4821913,7422561,4549690,9384563,8777820,4254277,156388,4900157,1170712,8713117,1553978,6944839,4618290,4419627,9983924,3781356,3843018,7982050,2482013,4378620,5282096,4899322,4067645,4733369,978089,8814470,7375360,9885549,8273669,6938341,1681938,4146244,3667402,6710585,8338361,764251,6216874,1984684,4496592,2768055,5666909,5009716,5214735,7423642,4476791,4247735,9958772,1832295,7697363,7861232,3406399,5734615,3125746,8348672,7084004,6736042,1080293,5395431,7246264,3117436,1414858,1429178,7433040,4687377,9898100,3247191,4158896,2585100,6148759,31346,5872579,8555023,1693814,416753,7276819,7343032,7724499,8452784,8723030,9460526,2936893,6756725,8011763,8629625,7785780,9271550,1401243,4509428,7633708,2944226,1254767,521788,6031862,7973448,5357131,5538611,7273879,8922562,7249201,6282974,6514760,5571612,5950551,1458384,4831769,6966376,6163139,5991093,173599,7371746,4732874]
</code></pre>
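<p>The pairwise loop is O(n² · bits), which is why it times out on inputs this large. A per-bit count is O(32 · n): at each bit position, every (set, unset) pair differs there exactly once, so if <code>ones</code> numbers have the bit set, that position contributes <code>ones * (n - ones)</code>. A sketch as plain functions rather than the LeetCode class wrapper:</p>

```python
def total_hamming_distance(nums):
    n = len(nums)
    total = 0
    for bit in range(32):
        # Count how many numbers have this bit set; each (set, unset)
        # pair contributes exactly 1 at this position.
        ones = sum((x >> bit) & 1 for x in nums)
        total += ones * (n - ones)
    return total

def brute_force(nums):
    # O(n^2) reference used only to sanity-check the fast version.
    return sum(bin(a ^ b).count("1")
               for i, a in enumerate(nums) for b in nums[i + 1:])

print(total_hamming_distance([4, 14, 2]))  # 6
```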
|
<python><optimization>
|
2023-06-06 16:05:36
| 3
| 4,741
|
nicomp
|
76,416,443
| 7,447,976
|
how to color a map after user selection in Dash using GeoJSON
|
<p>I have a <code>Dash</code> app where I show a world map and expect the user to select certain countries. After each selection, a callback function is triggered, and I'd like to color each country selected. As a toy example, I am showing a US map with a state selection option. Here you can click on a state and it is printed on the screen. My question is how I can color each selected state in red.</p>
<p>I have attached another callback to change the color, however, it is changing the color of the whole map rather than the ones that are selected. I have used the example shared <a href="https://stackoverflow.com/questions/68427622/dash-leaflet-geojson-change-color-of-last-clicked-feature">at this post.</a></p>
<pre><code>import random, json
import dash
from dash import dcc, html, Dash, callback, Output, Input, State
import dash_leaflet as dl
import geopandas as gpd
from dash import dash_table
from dash_extensions.javascript import assign

# https://gist.github.com/incubated-geek-cc/5da3adbb2a1602abd8cf18d91016d451?short_path=2de7e44
us_states_gdf = gpd.read_file("us_states.geojson")
us_states_geojson = json.loads(us_states_gdf.to_json())

# Color the feature saved in the hideout prop in a particular way (grey).
style_handle = assign("""function(feature, context){
    const match = context.props.hideout && context.props.hideout.properties.name === feature.properties.name;
    if(match) return {color:'#126'};
}""")

app = Dash(__name__)

app.layout = html.Div([
    dl.Map([
        dl.TileLayer(url="http://tile.stamen.com/toner-lite/{z}/{x}/{y}.png"),
        dl.GeoJSON(data=us_states_geojson, id="state-layer",
                   options=dict(style=style_handle), hideout=dict(click_feature=None))],
        style={'width': '100%', 'height': '250px'},
        id="map",
        center=[39.8283, -98.5795],
    ),
    html.Div(id='state-container', children=[]),
    dash_table.DataTable(id='state-table', columns=[{"name": i, "id": i} for i in ["state"]], data=[])
])

# Update the feature saved on the hideout prop on click.
app.clientside_callback("function(feature){return feature}",
                        Output("state-layer", "hideout"),
                        [Input("state-layer", "click_feature")])

app.clientside_callback(
    """
    function(clickFeature, currentData) {
        if(!clickFeature){
            return window.dash_clientside.no_update
        }
        const state = clickFeature.properties.NAME
        const currentStates = currentData.map(item => item.state)

        let newData = []
        if(!currentStates.includes(state)){
            newData = [...currentData, {"state": state}]
        }else{
            newData = currentData
        }

        const stateText = `Clicked: ${state}`
        return [newData, stateText]
    }
    """,
    Output("state-table", "data"),
    Output("state-container", "children"),
    Input("state-layer", "click_feature"),
    State("state-table", "data"),
)

if __name__ == '__main__':
    app.run_server(debug=True)
</code></pre>
|
<python><plotly-dash><dashboard><geopandas>
|
2023-06-06 16:01:16
| 2
| 662
|
sergey_208
|
76,416,435
| 1,711,271
|
For each column of a dataframe, add mean and standard deviation in the last two rows
|
<p>Sample dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
rng = np.random.RandomState(123)
data = rng.random((10,2))
foo = pd.DataFrame(data, columns=['A', 'B'])
</code></pre>
<p>I want to add two rows to <code>foo</code>, the first one containing (for each column) the mean of rows from 0 to 9, and the second one containing (again, separately for each column) the standard deviation of rows from 0 to 9. How can I do that?</p>
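<p>One hedged sketch: compute the summary with <code>DataFrame.agg</code> (which returns a frame indexed by <code>['mean', 'std']</code>) and concatenate it under the original rows, so both statistics are computed from the ten data rows only:</p>

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(123)
foo = pd.DataFrame(rng.random((10, 2)), columns=['A', 'B'])

summary = foo.agg(['mean', 'std'])          # index: ['mean', 'std']
foo_with_stats = pd.concat([foo, summary])  # 12 rows: 10 data + 2 summary
```

<p>Assigning with <code>foo.loc['mean'] = foo.mean()</code> also works, but then a subsequent <code>foo.std()</code> would include the freshly added mean row, so the two-step <code>agg</code> on the untouched frame is safer.</p>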
|
<python><pandas><mean>
|
2023-06-06 16:00:37
| 1
| 5,726
|
DeltaIV
|
76,416,388
| 8,115,653
|
plotting two QQ plots side by side
|
<p>I'm trying to plot two QQ plots side by side. I looked at <a href="https://stackoverflow.com/questions/31726643/how-to-plot-in-multiple-subplots">this question on plotting in multiple subplots</a> but I'm still not sure how to assign each plot to its own subplot. A Q–Q (quantile–quantile) plot is a probability plot for comparing two probability distributions by plotting their quantiles against each other.</p>
<p>Here is my reproducible example:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
df = sns.load_dataset('tips')
x, y = df['total_bill'], df['tip']
fig, ax = plt.subplots()
stats.probplot(x, dist='norm', plot=ax)
stats.probplot(y, dist='norm', plot=ax)
plt.title ('QQ plot x and y')
plt.savefig ( 'qq.png', dpi=300)
</code></pre>
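<p>A sketch of the side-by-side layout (using synthetic samples instead of the seaborn <code>tips</code> dataset, which needs a network download): create two Axes with <code>plt.subplots(1, 2)</code> and pass a different one to each <code>probplot</code> call. Passing the same <code>ax</code> twice, as above, overlays both plots on one Axes instead:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; safe for headless runs
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = rng.exponential(size=200)

# One Axes per QQ plot:
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
stats.probplot(x, dist='norm', plot=ax1)
ax1.set_title('QQ plot of x')
stats.probplot(y, dist='norm', plot=ax2)
ax2.set_title('QQ plot of y')
fig.tight_layout()
```

<p>Saving works as before with <code>fig.savefig('qq.png', dpi=300)</code>.</p>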
|
<python><scipy><seaborn>
|
2023-06-06 15:54:02
| 1
| 1,117
|
gregV
|
76,416,267
| 202,335
|
NoSuchElementException,Unable to locate element: {"method":"xpath","selector":"following::el-table-column//span[@class='date']"}
|
<pre><code> <div class="el-table-box" v-if="!specialAnnounce">
<el-table ref="notice-table" v-loading="loading" :data="dataList" @sort-change="sortChange">
<template slot="empty">
<div class="loading" v-if="dataList.length == 0 && loading">
加载中...
</div>
<div v-if="dataList.length == 0 && !loading" class="no-data">
<img src="http://static.cninfo.com.cn/new/img/announce/no-data.png" alt="">
<p>暂无数据!</p>
</div>
</template>
<el-table-column width="100" prop="secCode" sortable="custom" label="代码">
<template slot-scope="scope">
<a class="ahover" target="_blank" :href="'/new'+ '/disclosure/stock?stockCode=' + scope.row.secCode + '&orgId=' + scope.row.orgId">
<span class="code">{{scope.row.secCode}}</span>
</a>
</template>
</el-table-column>
<el-table-column width="180" prop="secName" label="简称">
<template slot-scope="scope">
<a class="ahover" target="_blank" :href="'/new'+ '/disclosure/stock?stockCode=' + scope.row.secCode + '&orgId=' + scope.row.orgId">
<span :title="scope.row.secName" class="code delete-hl" v-html="scope.row.secName.length>8?scope.row.secName.slice(0,8)+'...':scope.row.secName"></span>
</a>
</template>
</el-table-column>
<el-table-column prop="announcementTitle" label="公告标题">
<template slot-scope="scope">
<span class="ahover">
<a v-html="scope.row.announcementTitle" target="_blank" :href="linkLastPage(scope.row)"></a>
<span v-show="checkDocType(scope.row.adjunctType)" class="icon-f"><i class="iconfont" :class="[checkDocType(scope.row.adjunctType)]"></i></span>
</span>
</template>
</el-table-column>
<el-table-column width="150" align="left" prop="announcementTime" sortable="custom" label="公告时间">
<template slot-scope="scope">
<span class="date">{{fomatDate(scope.row.announcementTime, 'yyyy-MM-dd HH:mm')}}</span>
</template>
</el-table-column>
</el-table>
</div>
</code></pre>
<p>How can I read this entire table, keeping only the entries published within the last 2 hours?</p>
|
<python><selenium-webdriver><webdriver>
|
2023-06-06 15:36:45
| 2
| 25,444
|
Steven
|
76,416,192
| 19,392,385
|
Extract segment from shapely line intersecting polygon
|
<p>I have several lines (<code>shapely.geometry.LineString</code>) and patches (<code>shapely.geometry.Polygon</code>). My goal is to find lines intersecting a given polygon (a patch). For the moment this works well, but I get the entire list of coordinates of the line. <strong>Instead I would like to get the two endpoints of the segment that effectively intersects the patch.</strong></p>
<p>The lines are indeed composed of two set of coordinates <code>x</code> and <code>y</code> that are lists of floats. They represent <code>[x1, x2, x3, ...]</code> and <code>[y1, y2, y3, ...]</code>. Below is a minimal working example to generate two lines and a patch that reproduces the data structure I use in my code (a big class too complex to copy and paste here).</p>
<pre><code>from shapely.geometry import LineString, Polygon
import matplotlib.pyplot as plt
intersections_list = []
x_line_patch = [0, 4, -2, 8, 3, 10, 5]
y_line_patch = [21, 17, 14, 11, 9, 6, 1]
x_line_patch2 = [x + 20 for x in x_line_patch]
y_line_patch2 = y_line_patch
# Create a patch
start_x = 5
end_x = 8
start_y = 5
end_y = 8
polygon = Polygon([(start_x, start_y), (end_x, start_y), (end_x, end_y), (start_x, end_y)])
lines = [LineString(zip(x_line_patch, y_line_patch)), LineString(zip(x_line_patch2, y_line_patch2))]
# Find line intersecting patches
patch_intersecting_lines = [line.xy for line in lines if line.intersects(polygon)]
# Extract the x and y coordinates of the polygon exterior
x, y = polygon.exterior.xy
# Plot the polygon
plt.plot(x, y)
plt.fill(x, y, alpha=0.3) # Fill the polygon with color (optional)
# Plotting the line
x, y = lines[0].xy
x2, y2 = lines[1].xy
plt.plot(x, y)
plt.plot(x2, y2)
plt.tight_layout()
plt.show()
</code></pre>
<p>Additionally, creating an individual <code>LineString</code> for each pair <code>(x1, x2), (y1, y2)</code> would add complexity to a program that already uses a lot of loops, and I don't have access to these lists of coordinates in reality. They're defined here to construct a <code>LineString</code> object, but in the program they are created later, when the lines are selected if they cross a patch. Otherwise I would just do:</p>
<pre><code>line1 = [LineString(zip(x_line_patch[i:i+2], y_line_patch[i:i+2])) for i in range(0,len(x_line_patch)-1)]
</code></pre>
<p>to select coordinates 2 by 2 and create small segments for each line. But in my actual code I do <code>patch_intersecting_lines = [line.xy for line in lines if line.intersects(patch_shape)]</code> to find intersecting <code>line</code> given a list of lines called <code>lines</code>.</p>
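<p>Shapely can do the clipping directly: <code>line.intersection(polygon)</code> returns the portion of the line inside the patch, and its coordinates are exactly the endpoints sought. A minimal sketch with simple made-up geometry (note that a line crossing the patch several times yields a <code>MultiLineString</code>, so real code should handle both cases):</p>

```python
from shapely.geometry import LineString, Polygon

line = LineString([(0, 0), (10, 10)])
patch = Polygon([(4, 0), (8, 0), (8, 8), (4, 8)])

# Clip the line to the polygon; the result's coords are the segment
# endpoints actually inside the patch.
clipped = line.intersection(patch)
endpoints = list(clipped.coords)
```

<p>Applied to the selection above, this becomes <code>[line.intersection(polygon) for line in lines if line.intersects(polygon)]</code>.</p>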
|
<python><geometry><line><shapely>
|
2023-06-06 15:29:54
| 1
| 359
|
Chris Ze Third
|
76,416,172
| 10,266,106
|
Apply Searchsorted to 3-D Array
|
<p>Consider the following three-dimensional array of size <code>(1000,2000,200)</code> and corresponding two-dimensional array of size <code>(1000,2000)</code>. Both arrays do not contain NaNs:</p>
<pre><code>import numpy as np
a = np.random.randint(0,50,size=(1000,2000,200))
b = np.random.randint(0,50,size=(1000,2000,))
</code></pre>
<p>I'd like to find, for each position <code>[i, j]</code>, the index along the last axis of <code>a</code> whose value is closest to the single value <code>b[i, j]</code>. The first approach was to use <code>np.searchsorted</code>:</p>
<pre><code>indices = np.searchsorted(a, b)
</code></pre>
<p>However, the challenge is that this works only on one-dimensional arrays and returns (as anticipated) <code>ValueError: object too deep for desired array</code>.</p>
<p>I've inspected <a href="https://stackoverflow.com/questions/56471109/how-to-vectorize-with-numpy-searchsorted-in-a-2d-array">this</a> related question, which is tailored to moving across a two-dimensional array. Is there a time-efficient method to return the indices at each point [i,j] instead of writing a non-vectorized function?</p>
<p><strong>Expansion:</strong> I've made an alteration to the current approach, which uses absolute differences across the entire array. This looks as follows:</p>
<pre><code>b_3d = np.repeat(b[:, :, np.newaxis], a.shape[2], axis=2)
diff = np.abs(a - b_3d)
indices = np.argmin(diff, axis=2)
</code></pre>
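<p>Note that <code>np.searchsorted</code> assumes sorted input, so for unsorted data the absolute-difference <code>argmin</code> is the right tool. The <code>np.repeat</code> copy is also unnecessary: broadcasting <code>b</code> over a new trailing axis gives the same result without materialising the large intermediate. A sketch on small arrays:</p>

```python
import numpy as np

a = np.random.randint(0, 50, size=(4, 5, 6))
b = np.random.randint(0, 50, size=(4, 5))

# b[:, :, None] has shape (4, 5, 1) and broadcasts against a's last axis.
indices = np.argmin(np.abs(a - b[:, :, None]), axis=2)
```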
|
<python><numpy><sorting><numpy-ndarray>
|
2023-06-06 15:27:01
| 0
| 431
|
TornadoEric
|
76,415,862
| 10,755,032
|
Mocking a Database to work with FastAPI and Pydantic
|
<p>I am working on a project where I am using FastAPI and pydantic. This was mentioned in the github repo from which I am taking the tasks: <code>We do expect you to mock a database interaction layer in any way you see fit and generate data (which can be randomized) for the purposes of this test. The implementation of this database layer is left up to you.</code> Github repo link: <a href="https://github.com/steeleye/recruitment-ext/wiki/API-Developer-Assessment" rel="nofollow noreferrer">https://github.com/steeleye/recruitment-ext/wiki/API-Developer-Assessment</a></p>
<p>This is my first time working with FastAPI, and I had never heard of mocking before. I have the following code:</p>
<pre><code>from fastapi import FastAPI, Path
from typing import Optional
from pydantic import BaseModel, Field
import datetime as dt
class TradeDetails(BaseModel):
buySellIndicator: str = Field(description="A value of BUY for buys, SELL for sells.")
price: float = Field(description="The price of the Trade.")
quantity: int = Field(description="The amount of units traded.")
class Trade(BaseModel):
asset_class: Optional[str] = Field(alias="assetClass", default=None, description="The asset class of the instrument traded. E.g. Bond, Equity, FX...etc")
counterparty: Optional[str] = Field(default=None, description="The counterparty the trade was executed with. May not always be available")
instrument_id: str = Field(alias="instrumentId", description="The ISIN/ID of the instrument traded. E.g. TSLA, AAPL, AMZN...etc")
instrument_name: str = Field(alias="instrumentName", description="The name of the instrument traded.")
trade_date_time: dt.datetime = Field(alias="tradeDateTime", description="The date-time the Trade was executed")
trade_details: TradeDetails = Field(alias="tradeDetails", description="The details of the trade, i.e. price, quantity")
trade_id: str = Field(alias="tradeId", default=None, description="The unique ID of the trade")
trader: str = Field(description="The name of the Trader")
app = FastAPI()
# data = {
# 'asset_class': 'Bond',
# 'conterparty': 'delio',
# 'instrument_id': 'AAPL',
# 'instrument_name': 'Guitar',
# 'trade_date_time':'2023-06-6 12:22',
# 'trade_details':{'buySellIndicator':'BUY', 'price':100.0, 'quantity': 10},
# 'trade_id': '11',
# 'trader':'john'
# }
trade = Trade(assetClass='asset',
counterparty='count',
instrumentId='AAPL',
instrumentName='Guitar',
tradeDateTime='2023-06-6 12:22',
tradeDetails={'buySellIndicator':'BUY', 'price':100.0, 'quantity': 10},
tradeId='11',
trader='john')
@app.get("/")
def index():
return {"name": "API Developer Assessment"}
@app.get("/get-trade/{tradeId}")
def get_trade(tradeId:int=Path(description="The Id of the trade you want to view")):
return trade.tradeId
@app.get("/Trade")
def get_trade_list():
return trade.dict()
# @app.get('/get-by-query')
# def get_trade()
</code></pre>
<p>How do I mock a database?</p>
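<p>For context, "mocking a database" here usually just means an in-memory stand-in that the endpoints read from and write to. One possible sketch (the class and method names below are my own assumptions, not part of FastAPI or the assessment):</p>

```python
from typing import Optional

class MockTradeDB:
    """In-memory stand-in for a real database, keyed by trade id."""

    def __init__(self) -> None:
        self._trades: dict = {}

    def add(self, trade_id: str, trade: dict) -> None:
        self._trades[trade_id] = trade

    def get(self, trade_id: str) -> Optional[dict]:
        # Returns None when the id is unknown, like a missing row.
        return self._trades.get(trade_id)

    def list_all(self) -> list:
        return list(self._trades.values())

db = MockTradeDB()
db.add("11", {"trader": "john", "instrument_id": "AAPL"})
print(db.get("11")["trader"])  # john
```

<p>An endpoint could then call <code>db.get(tradeId)</code> instead of touching a real database, and the data it holds can be randomized at startup.</p>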
|
<python><fastapi><pydantic>
|
2023-06-06 14:45:28
| 1
| 1,753
|
Karthik Bhandary
|
76,415,859
| 15,959,591
|
NAN values when creating dataframe of nested lists
|
<p>I have a nested list and I make a data frame out of this nested list using this code:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(lst)
</code></pre>
<p>but all the values of the last element of lst (which is a list itself) get jammed into the first cell of its row, and all the other cells become NaN.
I made data frames out of nested lists before, but I didn't have this problem! Could you please tell me what is happening?</p>
<p>The data frame looks like this:</p>
<p><a href="https://i.sstatic.net/AURix.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AURix.png" alt="enter image description here" /></a></p>
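<p>For what it's worth, the symptom can be reproduced with a small sketch (toy data, not the actual list from the question): an extra level of nesting in one element makes pandas treat that element's inner list as a single cell value:</p>

```python
import pandas as pd

# Uniform rows: one value per column, no NaN.
even = pd.DataFrame([[1, 2, 3], [4, 5, 6]])
print(even.shape)  # (2, 3)

# The last element is nested one level deeper, so its inner list lands
# in the first cell and the remaining cells are filled with NaN.
mixed = pd.DataFrame([[1, 2, 3], [[4, 5, 6]]])
print(mixed.iloc[1, 0])  # [4, 5, 6]
print(mixed.iloc[1, 1])  # nan
```

<p>So the question's behavior suggests the last element of <code>lst</code> is nested one level deeper than the others.</p>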
|
<python><pandas><dataframe><list>
|
2023-06-06 14:44:49
| 1
| 554
|
Totoro
|
76,415,849
| 8,283,848
|
Unable to find pyenv python interpreter by tox
|
<p>I have a simple <code>tox.ini</code> as below</p>
<pre><code>[tox]
min_version = 4.0
env_list =
py{37,38,39,310}-drf{314}
[testenv]
envdir = {toxworkdir}/venvs/{envname}
[testenv:py{37,38,39,310}-drf{314}]
commands = python -m pytest tests/test_rest_framework
deps =
pytest
-e .[dev]
drf314: djangorestframework>=3.14,<3.15
</code></pre>
<p>and I have multiple Python versions installed on my local machine</p>
<pre><code>$ pyenv versions
* system (set by /home/jpg/.pyenv/version)
* 3.10.3 (set by /home/jpg/.pyenv/version)
3.6.15
* 3.7.13 (set by /home/jpg/.pyenv/version)
3.8.11
* 3.8.13 (set by /home/jpg/.pyenv/version)
* 3.9.11 (set by /home/jpg/.pyenv/version)
3.9.9
$ ls -ls /usr/bin/python*
0 lrwxrwxrwx 1 root root 7 Apr 15 2020 /usr/bin/python -> python2*
0 lrwxrwxrwx 1 root root 9 Mar 13 2020 /usr/bin/python2 -> python2.7*
3580 -rwxr-xr-x 1 root root 3662032 Jul 1 2022 /usr/bin/python2.7*
0 lrwxrwxrwx 1 root root 33 Jul 1 2022 /usr/bin/python2.7-config -> x86_64-linux-gnu-python2.7-config*
0 lrwxrwxrwx 1 root root 16 Mar 13 2020 /usr/bin/python2-config -> python2.7-config*
0 lrwxrwxrwx 1 root root 9 Mar 15 2022 /usr/bin/python3 -> python3.8*
5368 -rwxr-xr-x 1 root root 5494552 Mar 13 15:56 /usr/bin/python3.8*
0 lrwxrwxrwx 1 root root 33 Mar 13 15:56 /usr/bin/python3.8-config -> x86_64-linux-gnu-python3.8-config*
5668 -rwxr-xr-x 1 root root 5803968 Nov 23 2021 /usr/bin/python3.9*
0 lrwxrwxrwx 1 root root 33 Nov 23 2021 /usr/bin/python3.9-config -> x86_64-linux-gnu-python3.9-config*
0 lrwxrwxrwx 1 root root 16 Mar 13 2020 /usr/bin/python3-config -> python3.8-config*
4 -rwxr-xr-x 1 root root 384 Jan 25 14:03 /usr/bin/python3-futurize*
4 -rwxr-xr-x 1 root root 388 Jan 25 14:03 /usr/bin/python3-pasteurize*
0 lrwxrwxrwx 1 root root 14 Apr 15 2020 /usr/bin/python-config -> python2-config*
</code></pre>
<p>But, if I run the <code>tox</code> command, I get an error that says the interpreter is not found.</p>
<pre><code>$ tox -e py37
py37: skipped because could not find python interpreter with spec(s): py37
py37: SKIP (0.08 seconds)
evaluation failed :( (0.16 seconds)
</code></pre>
<p><strong>Question</strong>: How can I tell the <code>tox</code> to use <code>Python 3.7</code>?</p>
<p><strong>Note</strong>: I'm using tox 4.6.0</p>
<pre><code>$ tox --version
4.6.0 from /home/jpg/.local/share/virtualenvs/keycloak-auth-utils-T3AadZoL/lib/python3.10/site-packages/tox/__init__.py
</code></pre>
|
<python><testing><pyenv><tox>
|
2023-06-06 14:43:47
| 0
| 89,380
|
JPG
|
76,415,753
| 1,422,096
|
Function that has access to self, inside a class method
|
<p>How can I define a function inside a class method (for example, as a <code>threading</code> Thread target function) that needs access to <code>self</code>?
Is solution 1 correct, or should we use solution 2?</p>
<ol>
<li>
<pre><code>import threading, time
class Foo:
def __init__(self):
def f():
while True:
print(self.a)
self.a += 1
time.sleep(1)
self.a = 1
threading.Thread(target=f).start()
self.a = 2
Foo()
</code></pre>
<p>It seems to work even if <code>self</code> is not a parameter of <code>f</code>, but is this reliable?</p>
</li>
<li>
<pre><code>import threading, time
class Foo:
def __init__(self):
self.a = 1
threading.Thread(target=self.f).start()
self.a = 2
def f(self):
while True:
print(self.a)
self.a += 1
time.sleep(1)
Foo()
</code></pre>
</li>
</ol>
<p>This is linked to <a href="https://stackoverflow.com/questions/14924987/defining-class-functions-inside-class-functions-python">Defining class functions inside class functions: Python</a> but not 100% covered by this question.</p>
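<p>To illustrate the closure behavior that option 1 relies on (a toy sketch, no threads involved): an inner function defined inside a method captures <code>self</code> as a free variable from the enclosing scope, and it sees the <em>current</em> state of the object each time it runs, not a snapshot:</p>

```python
class Foo:
    def __init__(self):
        def f():
            # `self` is a free variable captured from __init__'s scope
            return self.a
        self.a = 1
        self.get = f

foo = Foo()
foo.a = 2
print(foo.get())  # 2: the closure reads the attribute at call time
```

<p>This is why the thread in option 1 sees <code>self.a == 2</code> even though the thread was started before that assignment ran.</p>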
|
<python><class><oop><class-method>
|
2023-06-06 14:32:16
| 2
| 47,388
|
Basj
|
76,415,629
| 14,073,111
|
How to merge two dataframes but based on multiple columns in pandas
|
<p>Let's say I have two dataframes:
df1:</p>
<pre><code> A B C D
0 test1 test2 test3 test4
1 test22 test33 test23 test432
2 test54 test32 tes353 test98
</code></pre>
<p>df2:</p>
<pre><code> A B
0 test98 value1
1 test1 value2
2 test33 value3
</code></pre>
<p>Basically, column A from dataframe 2 can match the value of any of the columns from dataframe 1. In the end, I want output like this:</p>
<pre><code> A B C D Value
0 test1 test2 test3 test4 value2
1 test22 test33 test23 test432 value3
2 test54 test32 tes353 test98 value1
</code></pre>
<p>Of course, this is only a prototype; my actual dataframe is more complex.
So, is there a way to merge based on the conditions I described?</p>
<p>UPD:
Of course df2 has more columns
df2:</p>
<pre><code> A B C
0 test98 value1 value5
1 test1 value2 value6
2 test33 value3 value7
</code></pre>
<p>and the end goal is to have df1 like this:</p>
<pre><code> A B C D Value Value2
0 test1 test2 test3 test4 value2 value6
1 test22 test33 test23 test432 value3 value7
2 test54 test32 tes353 test98 value1 value5
</code></pre>
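<p>One possible sketch of such a merge, using the toy data above: stack <code>df1</code> into long form (one cell value per row), merge that against <code>df2</code>'s key column, and join the matches back by the original row index. This assumes each <code>df1</code> row matches at most one key in <code>df2</code>:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'A': ['test1', 'test22', 'test54'],
                    'B': ['test2', 'test33', 'test32'],
                    'C': ['test3', 'test23', 'tes353'],
                    'D': ['test4', 'test432', 'test98']})
df2 = pd.DataFrame({'A': ['test98', 'test1', 'test33'],
                    'B': ['value1', 'value2', 'value3'],
                    'C': ['value5', 'value6', 'value7']})

# Long form: one (row index, column name, cell value) triple per cell.
melted = df1.stack().reset_index()  # columns: level_0, level_1, 0

# Match every cell value against df2's key column 'A'.
merged = melted.merge(df2, left_on=0, right_on='A')

# Join the matched value columns back onto df1 by original row index.
values = merged.set_index('level_0')[['B', 'C']]
values.columns = ['Value', 'Value2']
result = df1.join(values)
print(result)
```

<p>If a row could match several keys, the join would duplicate that row, so a deduplication step on <code>level_0</code> might be needed first.</p>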
|
<python><python-3.x><pandas><dataframe>
|
2023-06-06 14:17:43
| 1
| 631
|
user14073111
|
76,415,610
| 11,737,958
|
How to highlight the text in text box tkinter
|
<p>I am new to Python and I am using version 3.8. I am trying to iterate over the words and lines in the text box and need a way to highlight the words without highlighting the spaces. Since the <code>get()</code> method in tkinter takes indices starting from 1.0 for the 1st line, 2.0 for the 2nd line, and so on, I tried to convert the original index with some calculations. But I cannot highlight all the text.</p>
<p>Thanks in advance!!</p>
<pre><code> from tkinter import *
import tkinter
core = Tk()
scroll = Scrollbar(core)
txt = Text(core, height = 35, width =85, yscrollcommand = scroll.set, \
font =('bold',15),cursor="plus #aab1212" \
)
# txt1.place(x=980, y=37)
scroll.config(command = txt.yview)
# scroll1.config(command = txt.xview)
scroll.pack(side=RIGHT, fill=Y)
txt.pack(side="right")
def get_index():
l = []
for i,w in enumerate(txt.get('1.0', 'end-1c').splitlines()):
l.append(w)
i = i + 1.1
print(i,w)
if i < 10:
x = 1 + float(0) / 10
txt.tag_add("start", x)
txt.tag_config("start", background= "yellow", foreground= "black")
print(l)
for w,i in enumerate(l):
print(i)
button1 = Button(text = 'index',command=get_index)
button1.place(x = 30, y = 50, height = 30, width = 150)
core.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/uJBx4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uJBx4.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/RWDDR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RWDDR.png" alt="enter image description here" /></a></p>
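<p>For reference, Tk text indices are strings of the form <code>"line.column"</code> (lines start at 1, columns at 0), and <code>tag_add</code> accepts a start and an end index, so the word ranges can be computed with plain string formatting rather than float arithmetic. A sketch of just the index computation, runnable without a widget (the sample text is my own):</p>

```python
text = "hello world\nfoo bar"

# One ("line.start", "line.end") pair per word, skipping the spaces.
ranges = []
for lineno, line in enumerate(text.splitlines(), start=1):
    col = 0
    for word in line.split(' '):
        if word:
            ranges.append((f"{lineno}.{col}", f"{lineno}.{col + len(word)}"))
        col += len(word) + 1  # +1 for the space that followed the word

print(ranges)
# In the GUI, each pair would then be tagged:
#   for start, end in ranges:
#       txt.tag_add("start", start, end)
```

<p>Each pair covers exactly one word, so the spaces between words are never highlighted.</p>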
|
<python><tkinter>
|
2023-06-06 14:15:21
| 1
| 362
|
Kishan
|
76,415,555
| 4,301,236
|
write test for dagster asset job
|
<p>I am trying to write a simple test for a dagster job and I can't get it through...</p>
<p>I am using dagster 1.3.6</p>
<p>So I have defined this job using the function <code>dagster.define_asset_job</code></p>
<pre class="lang-py prettyprint-override"><code>from dagster import define_asset_job
my_job: UnresolvedAssetJobDefinition = define_asset_job(
name='name_for_my_job',
selection=AssetSelection.assets(
source_asset_1,
source_asset_2,
asset_1,
asset_2
)
)
</code></pre>
<h2>Intuitive try</h2>
<p>By reading the documentation, I figured that I had to call the <code>execute_in_process</code> method, which is defined in the <code>JobDefinition</code> class.</p>
<pre class="lang-py prettyprint-override"><code>from my_package import my_job
def test_documentation():
result = my_job.execute_in_process()
assert result.success
</code></pre>
<p>But like I've highlighted in the first code block, <code>my_job</code> is of type <code>UnresolvedAssetJobDefinition</code>. By digging a bit more in the code, I see that there is a <code>resolve</code> method, which returns a <code>JobDefinition</code>.</p>
<p>So I wanted to do that, but I've seen that you can't call <code>resolve</code> without parameter; you are required to provide <code>asset_graph</code>.</p>
<p>But it's exactly what I was trying to avoid. I don't want to provide the list of the assets/source assets, I want them to be deduced from the job definition.</p>
<h2>Journey</h2>
<p>I've seen that in addition to the <code>UnresolvedAssetJobDefinition.resolve().execute_in_process()</code>, I could look at the <code>materialize_to_memory</code> function; but I faced the same issue: I need to provide a list of assets.</p>
<p>I spent some time trying to get the assets out of the <code>UnresolvedAssetJobDefinition</code>.
I've seen that there is a <code>.selection</code> property that allows me to get a <code>KeysAssetSelection</code>, which basically contains a list of <code>AssetKey</code>.</p>
<p>But I need a list of <code>Union[AssetsDefinition, SourceAsset]</code> and I don't know how to convert an <code>AssetKey</code> into an <code>AssetDefinition</code>.</p>
<h2>Last try</h2>
<p>Hereafter is my last try. You can see that I am just trying to wire things together; as an admission of my weakness, I am not even trying to use the job definition to get the assets.</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from my_package import my_job, source_asset_1, source_asset_2, asset_1, asset_2
from dagster._core.definitions.asset_graph import AssetGraph
@pytest.fixture
def test_resources() -> Mapping[str, object]:
return {
"parquet_io_manager": parquet_io_manager.configured({'data_path': DATA_FOLDER }),
}
def test_my_job(
test_resources: Mapping[str, object],
):
graph = AssetGraph.from_assets([source_asset_1, source_asset_2, asset_1, asset_2])
job = my_job.resolve(asset_graph=graph)
result = job.execute_in_process(resources=test_resources)
assert result.success
</code></pre>
<p>but I can't quite get what I want. In the last example, I got this error</p>
<blockquote>
<p><code>dagster._core.errors.DagsterInvalidSubsetError: AssetKey(s) {AssetKey(['source_asset_1']), AssetKey(['source_asset_2']), AssetKey(['asset_1']), AssetKey(['asset_2'])}</code> were selected, but no AssetsDefinition objects supply these keys. Make sure all keys are spelled correctly, and all AssetsDefinitions are correctly added to the <code>Definitions</code>.</p>
</blockquote>
<h1>Help</h1>
<p>I know that I can test each asset by just importing and calling the function decorated with dagster's <code>@asset</code> decorator.
But I want to be able to launch all the assets from the job, without having to rewrite this test function.</p>
<p>Do you think that it's something possible? Am I doing something wrong?
I must miss something obvious... any help would be appreciated.</p>
<p>Have a nice day!</p>
|
<python><testing><dagster>
|
2023-06-06 14:09:34
| 1
| 389
|
guillaume latour
|
76,415,495
| 7,133,942
|
How to find the pareto-optimal solutions in a pandas dataframe
|
<p>I have a pandas dataframe with the name <code>df_merged_population_current_iteration</code> whose data you can download here as a csv file: <a href="https://easyupload.io/bdqso4" rel="nofollow noreferrer">https://easyupload.io/bdqso4</a></p>
<p>Now I want to create a new dataframe called <code>pareto_df</code> that contains all pareto-optimal solutions with regard to the minimization of the 2 objectives "Costs" and "Peak Load" from the dataframe <code>df_merged_population_current_iteration</code>. Further, it should make sure that no duplicate values are stored, meaning that if solutions have identical values for the 2 objectives "Costs" and "Peak Load", only one of them should be saved. Additionally, there is a check whether the value for "Thermal Discomfort" is smaller than 2. If this is not the case, the solution will not be included in the new <code>pareto_df</code>.</p>
<p>For this purpose, I came up with the following code:</p>
<pre><code>import pandas as pd
df_merged_population_current_iteration = pd.read_csv("C:/Users/wi9632/Desktop/sample_input.csv", sep=";")
# create a new DataFrame to store the Pareto-optimal solutions
pareto_df = pd.DataFrame(columns=df_merged_population_current_iteration.columns)
for i, row in df_merged_population_current_iteration.iterrows():
is_dominated = False
is_duplicate = False
for j, other_row in df_merged_population_current_iteration.iterrows():
if i == j:
continue
# Check if the other solution dominates the current solution
if (other_row['Costs'] < row['Costs'] and other_row['Peak Load'] < row['Peak Load']) or \
(other_row['Costs'] <= row['Costs'] and other_row['Peak Load'] < row['Peak Load']) or \
(other_row['Costs'] < row['Costs'] and other_row['Peak Load'] <= row['Peak Load']):
# The other solution dominates the current solution
is_dominated = True
break
# Check if the other solution is a duplicate
if (other_row['Costs'] == row['Costs'] and other_row['Peak Load'] == row['Peak Load']):
is_duplicate = True
break
if not is_dominated and not is_duplicate and row['Thermal Discomfort'] < 2:
# The current solution is Pareto-optimal, not a duplicate, and meets the discomfort threshold
row_df = pd.DataFrame([row])
pareto_df = pd.concat([pareto_df, row_df], ignore_index=True)
print(pareto_df)
</code></pre>
<p>In most cases, the code works fine. However, there are cases in which no pareto-optimal solution is added to the new dataframe <code>pareto_df</code>, although there exist pareto-optimal solutions that fulfill the criteria. This can be seen with the data I posted above. You can see that the solutions with the "id of the run" 7 and 8 are pareto-optimal (and fulfill the thermal discomfort constraint). However, the current code does not add either of those 2 to the new dataframe. It should add one of them (but not both, as that would be a duplicate). I have to admit that I already tried a lot and had a closer look at the code, but I could not manage to find the mistake.</p>
<p>Here is the actual output with the uploaded data:</p>
<pre><code>Empty DataFrame
Columns: [Unnamed: 0, id of the run, Costs, Peak Load, Thermal Discomfort, Combined Score]
Index: []
</code></pre>
<p>And here is the desired output (one pareto-optimal solution):
<a href="https://i.sstatic.net/HSRGk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HSRGk.png" alt="enter image description here" /></a></p>
<p>Do you see what the mistake might be and how I have to adjust the code such that it in fact finds all pareto-optimal solutions without adding duplicates?</p>
<p><strong>Reminder</strong>: Does anyone have any idea why the code does not find all pareto-optimal solutions? I'll highly appreciate any comments.</p>
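<p>For what it's worth, the suspected failure mode can be reproduced on toy data (not the uploaded file): when two rows have identical objective values, the mutual duplicate check marks <em>both</em> of them, so both get excluded. Dropping duplicates before the dominance check sidesteps that; a possible sketch:</p>

```python
import pandas as pd

# Toy data mimicking the described situation: rows with id 7 and 8 are
# duplicates of each other and also Pareto-optimal.
df = pd.DataFrame({
    'id': [7, 8, 9],
    'Costs': [1.0, 1.0, 2.0],
    'Peak Load': [5.0, 5.0, 6.0],
    'Thermal Discomfort': [1.0, 1.0, 1.0],
})

# Filter on discomfort and drop duplicate objective pairs FIRST, so a
# pair of identical rows cannot exclude each other.
cand = df[df['Thermal Discomfort'] < 2].drop_duplicates(subset=['Costs', 'Peak Load'])

def is_dominated(row, others):
    # Dominating rows: at least as good in both objectives...
    o = others[(others['Costs'] <= row['Costs']) & (others['Peak Load'] <= row['Peak Load'])]
    # ...and strictly better in at least one.
    return ((o['Costs'] < row['Costs']) | (o['Peak Load'] < row['Peak Load'])).any()

mask = [not is_dominated(r, cand.drop(i)) for i, r in cand.iterrows()]
pareto_df = cand[mask]
print(pareto_df['id'].tolist())  # [7]
```

<p>On this toy input, exactly one of the two duplicate optimal rows survives, which matches the desired output described above.</p>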
|
<python><pandas><pareto-optimality>
|
2023-06-06 14:01:53
| 1
| 902
|
PeterBe
|