QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars)
|---|---|---|---|---|---|---|---|---|
76,399,139
| 3,840,940
|
Spark PySpark Configuration in Visual Studio Code
|
<p>I am trying to configure Apache Spark (PySpark) in Visual Studio Code.</p>
<pre><code>OS : Windows 11
java : 17 LTS
python : Anaconda 2023.03-1-windows
apache spark : spark-3.4.0-bin-hadoop3
VScode : VSCodeSetup-x64-1.78.2
</code></pre>
<p>I installed the "Spark & Hive Tools" extension pack in VS Code and added <code>Python > Auto Complete: Extra Paths</code> to the settings.json file like below:</p>
<pre><code>"python.autoComplete.extraPaths": [
"C:\\spark-3.4.0-bin-hadoop3\\python",
"C:\\spark-3.4.0-bin-hadoop3\\python\\pyspark",
"C:\\spark-3.4.0-bin-hadoop3\\python\\lib\\py4j-0.10.9.7-src.zip",
"C:\\spark-3.4.0-bin-hadoop3\\python\\lib\\pyspark.zip"
]
</code></pre>
<p>Then I wrote this Python code:</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").appName("spark-test").getOrCreate()
data = [('001','Smith','M',40,'DA',4000),
('002','Rose','M',35,'DA',3000),
('003','Williams','M',30,'DE',2500),
('004','Anne','F',30,'DE',3000),
('005','Mary','F',35,'BE',4000),
('006','James','M',30,'FE',3500)]
columns = ["cd","name","gender","age","div","salary"]
df = spark.createDataFrame(data = data, schema = columns)
df.printSchema()
df.show()
spark.stop()
</code></pre>
<p>But the code throws this error message:</p>
<pre><code>Traceback (most recent call last):
File "c:\VScode workspace\spark_test\pyspark-test.py", line 1, in <module>
from pyspark.sql import SparkSession
ModuleNotFoundError: No module named 'pyspark'
</code></pre>
<p>So I created a <code>.env</code> file in the folder and added some paths:</p>
<pre><code>SPARK_HOME=C:\spark-3.4.0-bin-hadoop3
PYTHONPATH=C:\spark-3.4.0-bin-hadoop3\python;C:\spark-3.4.0-bin-hadoop3\python\pyspark;C:\spark-3.4.0-bin-hadoop3\python\lib\py4j-0.10.9.7-src.zip;C:\spark-3.4.0-bin-hadoop3\python\lib\pyspark.zip
</code></pre>
<p>Then I added the following code at the top of the Python file:</p>
<pre><code>from dotenv import load_dotenv
import os
load_dotenv()
print("-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-")
print(os.environ.get("PYTHONPATH")) # It prints the right value
print(os.environ.get("SPARK_HOME")) # It prints the right value
</code></pre>
<p>But it throws the same error message. Am I missing any steps? I can install pyspark with pip, but I want to use the embedded pyspark module that ships with spark-3.4.0-bin-hadoop3.</p>
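One likely cause (an assumption, not confirmed by the question): `PYTHONPATH` is only consulted when the interpreter starts, so values loaded from a `.env` file at runtime via `load_dotenv()` never reach `sys.path`. A minimal sketch of a workaround is to extend `sys.path` by hand before importing pyspark (paths taken from the question; adjust to your install):

```python
import os
import sys

# PYTHONPATH read from a .env file at runtime does not retroactively
# update sys.path -- the interpreter only consults PYTHONPATH at startup.
# Workaround sketch: extend sys.path manually before the pyspark import.
spark_home = os.environ.get("SPARK_HOME", r"C:\spark-3.4.0-bin-hadoop3")
extra_paths = [
    os.path.join(spark_home, "python"),
    os.path.join(spark_home, "python", "lib", "py4j-0.10.9.7-src.zip"),
]
for p in extra_paths:
    if p not in sys.path:
        sys.path.insert(0, p)

# After this, `from pyspark.sql import SparkSession` can succeed, because
# the embedded PySpark sources are now importable.
```

Alternatively, setting `PYTHONPATH` in the VS Code launch configuration (or system environment) before the interpreter starts avoids the runtime patching entirely.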
|
<python><apache-spark><visual-studio-code><pyspark>
|
2023-06-04 06:04:35
| 1
| 1,441
|
Joseph Hwang
|
76,399,115
| 8,401,374
|
ET.iterparse is not loading all XML tags inside a specific tag in Python with xml.etree.ElementTree
|
<p><strong>My code:</strong></p>
<pre><code>tree = ET.iterparse(file_path, events=('start',))
for _, elem in tree:
    if 'product' in elem.tag:
        if elem.attrib.get('product-id') == "B4_1003847_000":
            print(ET.tostring(elem))
            breakpoint()
        process_product(elem)
</code></pre>
<p><strong>XML Tag copied from XML file which is basically a child of the root tag.</strong></p>
<pre><code><product product-id="B4_1003847_000">
<ean/>
<upc/>
<unit/>
<min-order-quantity>1</min-order-quantity>
<step-quantity>1</step-quantity>
<display-name xml:lang="x-default">Pink piggy bank </display-name>
<short-description xml:lang="x-default">&lt;p&gt;Ceramic piggy bank measuring 13 x 9 x 9 cm. without a hole in the bottom, but includes a hammer.&lt;/p&gt;
</short-description>
<store-force-price-flag>false</store-force-price-flag>
<store-non-inventory-flag>false</store-non-inventory-flag>
<store-non-revenue-flag>false</store-non-revenue-flag>
<store-non-discountable-flag>false</store-non-discountable-flag>
<online-flag>false</online-flag>
<online-flag site-id="FlyingTiger_UAE">true</online-flag>
<available-flag>true</available-flag>
<searchable-flag>true</searchable-flag>
<images>
<image-group view-type="large">
<image path="B4_1003847_000__B401_000__101__01?$prd_large$"/>
<image path="B4_1003847_000__B401_000__101__02?$prd_large$"/>
<image path="B4_1003847_000__B401_000__101__03?$prd_large$"/>
<image path="B4_1003847_000__B401_000__101__04?$prd_large$"/>
</image-group>
<image-group variation-value="B401_000" view-type="large">
<image path="B4_1003847_000__B401_000__101__01?$prd_large$"/>
<image path="B4_1003847_000__B401_000__101__02?$prd_large$"/>
<image path="B4_1003847_000__B401_000__101__03?$prd_large$"/>
<image path="B4_1003847_000__B401_000__101__04?$prd_large$"/>
</image-group>
<image-group view-type="medium">
<image path="B4_1003847_000__B401_000__101__01?$prd_medium$"/>
<image path="B4_1003847_000__B401_000__101__02?$prd_medium$"/>
<image path="B4_1003847_000__B401_000__101__03?$prd_medium$"/>
<image path="B4_1003847_000__B401_000__101__04?$prd_medium$"/>
</image-group>
<image-group variation-value="B401_000" view-type="medium">
<image path="B4_1003847_000__B401_000__101__01?$prd_medium$"/>
<image path="B4_1003847_000__B401_000__101__02?$prd_medium$"/>
<image path="B4_1003847_000__B401_000__101__03?$prd_medium$"/>
<image path="B4_1003847_000__B401_000__101__04?$prd_medium$"/>
</image-group>
<image-group view-type="mobile">
<image path="B4_1003847_000__B401_000__101__01?$prd_mobile$"/>
<image path="B4_1003847_000__B401_000__101__02?$prd_mobile$"/>
<image path="B4_1003847_000__B401_000__101__03?$prd_mobile$"/>
<image path="B4_1003847_000__B401_000__101__04?$prd_mobile$"/>
</image-group>
<image-group variation-value="B401_000" view-type="mobile">
<image path="B4_1003847_000__B401_000__101__01?$prd_mobile$"/>
<image path="B4_1003847_000__B401_000__101__02?$prd_mobile$"/>
<image path="B4_1003847_000__B401_000__101__03?$prd_mobile$"/>
<image path="B4_1003847_000__B401_000__101__04?$prd_mobile$"/>
</image-group>
<image-group view-type="thumbnail">
<image path="B4_1003847_000__B401_000__101__01?$prd_thumbnail$"/>
<image path="B4_1003847_000__B401_000__101__02?$prd_thumbnail$"/>
<image path="B4_1003847_000__B401_000__101__03?$prd_thumbnail$"/>
<image path="B4_1003847_000__B401_000__101__04?$prd_thumbnail$"/>
</image-group>
<image-group variation-value="B401_000" view-type="thumbnail">
<image path="B4_1003847_000__B401_000__101__01?$prd_thumbnail$"/>
<image path="B4_1003847_000__B401_000__101__02?$prd_thumbnail$"/>
<image path="B4_1003847_000__B401_000__101__03?$prd_thumbnail$"/>
<image path="B4_1003847_000__B401_000__101__04?$prd_thumbnail$"/>
</image-group>
</images>
<tax-class-id>standard</tax-class-id>
<brand>FLYING TIGER </brand>
<manufacturer-name>FLYING TIGER </manufacturer-name>
<sitemap-included-flag>true</sitemap-included-flag>
<sitemap-changefrequency>daily</sitemap-changefrequency>
<sitemap-priority>1.0</sitemap-priority>
<page-attributes/>
<custom-attributes>
<custom-attribute attribute-id="brandCategoryId">B4</custom-attribute>
<custom-attribute attribute-id="buyingCategory">B401010</custom-attribute>
<custom-attribute attribute-id="color">B401_000</custom-attribute>
<custom-attribute attribute-id="defaultName">Pink piggy bank</custom-attribute>
<custom-attribute attribute-id="defaultSizeGrid">Standard</custom-attribute>
<custom-attribute attribute-id="geoAllowedShippingCountries">
<value>ALL</value>
</custom-attribute>
<custom-attribute attribute-id="internalProductName">PIG BANK WITH HAMMER</custom-attribute>
<custom-attribute attribute-id="isItemBulky">false</custom-attribute>
<custom-attribute attribute-id="isItemPrepaid">false</custom-attribute>
<custom-attribute attribute-id="isReturnable">true</custom-attribute>
<custom-attribute attribute-id="isReturnable" site-id="FlyingTiger_UAE">true</custom-attribute>
<custom-attribute attribute-id="season">000</custom-attribute>
<custom-attribute attribute-id="seasonDescription">Non Seasonable Items</custom-attribute>
<custom-attribute attribute-id="size">B401022_000</custom-attribute>
<custom-attribute attribute-id="sizeRefinement" xml:lang="x-default">B401022_000</custom-attribute>
<custom-attribute attribute-id="subBrand" xml:lang="x-default">Flying Tiger</custom-attribute>
<custom-attribute attribute-id="subSeason">000</custom-attribute>
<custom-attribute attribute-id="subSeasonDescription">Non Seasonable Items</custom-attribute>
</custom-attributes>
<variations>
<attributes>
<variation-attribute attribute-id="color" variation-attribute-id="color">
<display-name xml:lang="x-default">color</display-name>
<variation-attribute-values>
<variation-attribute-value value="B401_000">
<display-value xml:lang="x-default">B401_000</display-value>
</variation-attribute-value>
</variation-attribute-values>
</variation-attribute>
<variation-attribute attribute-id="size" variation-attribute-id="size">
<display-name xml:lang="x-default">size</display-name>
<variation-attribute-values>
<variation-attribute-value value="000">
<display-value xml:lang="x-default">No Size</display-value>
</variation-attribute-value>
</variation-attribute-values>
</variation-attribute>
</attributes>
<variants>
<variant product-id="B4_1003847_000-000"/>
</variants>
</variations>
<classification-category catalog-id="siteCatalog_FlyingTiger_UAE">gifts-giftsforkids</classification-category>
<pinterest-enabled-flag>true</pinterest-enabled-flag>
<facebook-enabled-flag>true</facebook-enabled-flag>
<store-attributes>
<force-price-flag>false</force-price-flag>
<non-inventory-flag>false</non-inventory-flag>
<non-revenue-flag>false</non-revenue-flag>
<non-discountable-flag>false</non-discountable-flag>
</store-attributes>
</product>
</code></pre>
<p><strong>The same tag return by <code>print(ET.tostring(elem))</code></strong></p>
<pre><code><ns0:product xmlns:ns0="http://www.demandware.com/xml/impex/catalog/2006-10-31" product-id="B4_1003847_000">
<ns0:ean />
<ns0:upc />
<ns0:unit />
<ns0:min-order-quantity>1</ns0:min-order-quantity>
<ns0:step-quantity>1</ns0:step-quantity>
<ns0:display-name xml:lang="x-default">Pink piggy bank </ns0:display-name>
<ns0:short-description xml:lang="x-default">&lt;p&gt;Ceramic piggy bank measuring 13 x 9 x 9 cm. without a hole in the bottom, but includes a hammer.&lt;/p&gt;
</ns0:short-description>
<ns0:store-force-price-flag>false</ns0:store-force-price-flag>
<ns0:store-non-inventory-flag>false</ns0:store-non-inventory-flag>
<ns0:store-non-revenue-flag>false</ns0:store-non-revenue-flag>
<ns0:store-non-discountable-flag>false</ns0:store-non-discountable-flag>
<ns0:online-flag>false</ns0:online-flag>
<ns0:online-flag site-id="FlyingTiger_UAE">true</ns0:online-flag>
<ns0:available-flag>true</ns0:available-flag>
<ns0:searchable-flag>true</ns0:searchable-flag>
<ns0:images>
<ns0:image-group view-type="large">
<ns0:image path="B4_1003847_000__B401_000__101__01?$prd_large$" />
<ns0:image path="B4_1003847_000__B401_000__101__02?$prd_large$" />
<ns0:image path="B4_1003847_000__B401_000__101__03?$prd_large$" />
<ns0:image path="B4_1003847_000__B401_000__101__04?$prd_large$" />
</ns0:image-group>
<ns0:image-group variation-value="B401_000" view-type="large">
<ns0:image path="B4_1003847_000__B401_000__101__01?$prd_large$" />
<ns0:image path="B4_1003847_000__B401_000__101__02?$prd_large$" />
<ns0:image path="B4_1003847_000__B401_000__101__03?$prd_large$" />
<ns0:image path="B4_1003847_000__B401_000__101__04?$prd_large$" />
</ns0:image-group>
<ns0:image-group view-type="medium">
<ns0:image path="B4_1003847_000__B401_000__101__01?$prd_medium$" />
<ns0:image path="B4_1003847_000__B401_000__101__02?$prd_medium$" />
<ns0:image path="B4_1003847_000__B401_000__101__03?$prd_medium$" />
<ns0:image path="B4_1003847_000__B401_000__101__04?$prd_medium$" />
</ns0:image-group>
<ns0:image-group variation-value="B401_000" view-type="medium">
<ns0:image path="B4_1003847_000__B401_000__101__01?$prd_medium$" />
<ns0:image path="B4_1003847_000__B401_000__101__02?$prd_medium$" />
<ns0:image path="B4_1003847_000__B401_000__101__03?$prd_medium$" />
<ns0:image path="B4_1003847_000__B401_000__101__04?$prd_medium$" />
</ns0:image-group>
<ns0:image-group view-type="mobile">
<ns0:image path="B4_1003847_000__B401_000__101__01?$prd_mobile$" />
<ns0:image path="B4_1003847_000__B401_000__101__02?$prd_mobile$" />
<ns0:image path="B4_1003847_000__B401_000__101__03?$prd_mobile$" />
<ns0:image path="B4_1003847_000__B401_000__101__04?$prd_mobile$" />
</ns0:image-group>
<ns0:image-group variation-value="B401_000" view-type="mobile">
<ns0:image path="B4_1003847_000__B401_000__101__01?$prd_mobile$" />
<ns0:image path="B4_1003847_000__B401_000__101__02?$prd_mobile$" />
<ns0:image path="B4_1003847_000__B401_000__101__03?$prd_mobile$" />
<ns0:image path="B4_1003847_000__B401_000__101__04?$prd_mobile$" />
</ns0:image-group>
<ns0:image-group view-type="thumbnail">
<ns0:image path="B4_1003847_000__B401_000__101__01?$prd_thumbnail$" /></ns0:image-group></ns0:images></ns0:product>
</code></pre>
<p>The notable point is that the XML printed by the code is missing all the tags after <code>&lt;images&gt;</code>. I tried <code>breakpoint()</code> to debug it, but that didn't help.</p>
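A likely explanation (hedged; it depends on the actual file): with `events=('start',)`, `iterparse` yields an element as soon as its opening tag is seen, before its children have been parsed, so `ET.tostring(elem)` serializes a half-built subtree. Waiting for the `'end'` event yields the complete element. A minimal self-contained sketch:

```python
import io
import xml.etree.ElementTree as ET

# A tiny stand-in for the catalog file, with the same default namespace
xml_doc = """<catalog xmlns="http://www.demandware.com/xml/impex/catalog/2006-10-31">
  <product product-id="B4_1003847_000">
    <images><image-group view-type="large"/></images>
    <brand>FLYING TIGER</brand>
  </product>
</catalog>"""

# With events=('start',) the element is yielded before its children exist.
# Waiting for the 'end' event gives the fully built subtree.
children = []
for event, elem in ET.iterparse(io.StringIO(xml_doc), events=('end',)):
    if elem.tag.endswith('product'):
        children = [c.tag for c in elem]  # now includes images AND brand
```

With huge files, remember to call `elem.clear()` after processing each product, otherwise the tree built so far keeps growing in memory.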
|
<python><xml><lxml><elementtree><large-data>
|
2023-06-04 05:55:46
| 0
| 1,710
|
Shaida Muhammad
|
76,399,078
| 5,380,656
|
Creating a TypedDict with enum keys
|
<p>I am trying to create a <code>TypedDict</code> for better code completion and am running into an issue.</p>
<p>I want to have a fixed set of keys (an Enum) and the values to match a specific list of objects depending on the key.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
class OneObject:
    pass

class TwoObject:
    pass

class MyEnum(Enum):
    ONE = 1
    TWO = 2
</code></pre>
<p>I am looking to have something like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict
class CustomDict(TypedDict):
    MyEnum.ONE: list[OneObject]
    MyEnum.TWO: list[TwoObject]
</code></pre>
<p>However, I am getting <code>Non-self attribute could not be type hinted</code> and it doesn't really work. What are my options?</p>
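`TypedDict` keys must be string literals, so enum members cannot appear directly in the class-based syntax. One possible workaround (a sketch; static checkers vary in how well they accept non-literal keys in the functional form) is to back the enum with string values and key the functional syntax by those values:

```python
from enum import Enum
from typing import TypedDict

class OneObject:
    pass

class TwoObject:
    pass

# str-valued enum so each member's .value doubles as a TypedDict key
class MyEnum(str, Enum):
    ONE = "ONE"
    TWO = "TWO"

# Functional TypedDict syntax keyed by the enum members' string values
CustomDict = TypedDict(
    "CustomDict",
    {MyEnum.ONE.value: list[OneObject], MyEnum.TWO.value: list[TwoObject]},
)

d: CustomDict = {"ONE": [OneObject()], "TWO": [TwoObject()]}
```

The trade-off is that lookups use the string value (`d[MyEnum.ONE.value]`), and some type checkers only resolve the keys when they can see them as literals.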
|
<python><enums><python-typing><typeddict>
|
2023-06-04 05:38:12
| 2
| 771
|
Charlie
|
76,398,598
| 4,611,374
|
Streamlit: Why does updating the session_state with form data require submitting the form twice?
|
<p>I appear to fundamentally misunderstand how Streamlit's forms and the <code>session_state</code> variable work. Form data is not inserted into <code>session_state</code> upon submit; however, submitting a second time does insert it. Updating <code>session_state</code> values always requires submitting the form twice.</p>
<p>I'd like to know</p>
<ol>
<li>if this is expected behavior</li>
<li>if I'm making a mistake</li>
<li>if there is a workaround that allows immediate <code>session_state</code> updates on submit</li>
</ol>
<p><strong>EXAMPLE 1:</strong></p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
# View all key:value pairs in the session state
s = []
for k, v in st.session_state.items():
    s.append(f"{k}: {v}")
st.write(s)
# Define the form
with st.form("my_form"):
    st.session_state['name'] = st.text_input("Name")
    st.form_submit_button("Submit")
</code></pre>
<p>When the page loads, the session state is empty: <code>[]</code>
<a href="https://i.sstatic.net/LRXql.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRXql.png" alt="enter image description here" /></a></p>
<p>After submitting the form once, the session_state contains <code>"name: "</code>. The key has been added, but not the value.
<a href="https://i.sstatic.net/c0dXT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c0dXT.png" alt="enter image description here" /></a></p>
<p>After pressing <code>Submit</code> a second time, the session_state now contains <code>"name: Chris"</code>
<a href="https://i.sstatic.net/KePIk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KePIk.png" alt="enter image description here" /></a></p>
<p><strong>EXAMPLE 2:</strong> Using a callback function</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
# View all key:value pairs in the session state
s = []
for k, v in st.session_state.items():
    s.append(f"{k}: {v}")
st.write(s)
# Define the form
with st.form("my_form"):
    def update():
        st.session_state['name'] = name
    name = st.text_input("Name")
    st.form_submit_button("Submit", on_click=update)
</code></pre>
<p>When the page loads, the session state is empty: <code>[]</code>
<a href="https://i.sstatic.net/oeEIm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oeEIm.png" alt="enter image description here" /></a></p>
<p>After submitting the form once, the session_state contains <code>"name: "</code>. The key has been added, but not the value.
<a href="https://i.sstatic.net/MZlCh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MZlCh.png" alt="enter image description here" /></a></p>
<p>After pressing <code>Submit</code> a second time, the session_state now contains <code>"name: Chris"</code>
<a href="https://i.sstatic.net/ZBv6N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZBv6N.png" alt="enter image description here" /></a></p>
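This is expected behavior: Streamlit reruns the whole script on every interaction, so `st.session_state['name'] = st.text_input("Name")` stores the value the widget returned at the start of the current run, i.e. the text from before the latest submit. A pure-Python simulation of that rerun model (hypothetical helper names, no Streamlit required) shows the one-run lag:

```python
# Simulate Streamlit's model: the script reruns top-to-bottom on each
# interaction, and text_input returns what the widget held at rerun start.
def rerun(state, typed_text):
    returned = state.get("widget", "")  # value as of this rerun's start
    state["name"] = returned            # the lagging assignment
    state["widget"] = typed_text        # the widget now holds the new text
    return state

state = {}
state = rerun(state, "Chris")   # first submit: "name" is still ""
first = state["name"]
state = rerun(state, "Chris")   # second submit: "name" becomes "Chris"
second = state["name"]
```

A common workaround is to give the widget a `key` (e.g. `st.text_input("Name", key="name_input")`) and copy `st.session_state["name_input"]` inside the `on_click` callback, where the freshly submitted value is already available on the same run.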
|
<python><forms><session-state><streamlit>
|
2023-06-04 01:09:39
| 1
| 309
|
RedHand
|
76,398,541
| 21,343,992
|
Connect to websocket server using IP address with Python websockets library
|
<p>New to Python. I'm trying to connect to a websocket server using the actual IP address.</p>
<p>Connecting using the websocket URL works fine:</p>
<pre><code>import websocket
def on_message(wsapp, message):
    print(message)
ws = websocket.WebSocketApp("wss://api.server.com:443/ws/stream", on_message=on_message)
ws.run_forever()
</code></pre>
<p>However, I extracted the server IP addresses using traceroute and the following doesn't work:</p>
<pre><code>import websocket
def on_message(wsapp, message):
    print(message)
ws = websocket.WebSocketApp("wss://1.2.3.4:443/ws/stream", on_message=on_message)
ws.run_forever()
</code></pre>
<p>(not the actual domain or IP address)</p>
<p>Is there a way to connect using the IP address, rather than the URL?</p>
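Part of the problem is usually TLS rather than websockets: the certificate is issued for the domain name, and many servers route by SNI, so a bare IP fails the handshake or the hostname check. A standard-library sketch of the underlying idea (hypothetical helper; `api.server.com` stands in for the question's placeholder domain): connect the TCP socket to the IP, but still present the original hostname during the TLS handshake.

```python
import socket
import ssl

def open_tls_by_ip(ip, port, hostname):
    """Connect the TCP socket to `ip`, but drive SNI and certificate
    hostname matching with `hostname` (the original domain)."""
    ctx = ssl.create_default_context()
    raw = socket.create_connection((ip, port))
    # server_hostname controls both the SNI extension and cert matching
    return ctx.wrap_socket(raw, server_hostname=hostname)

# Usage sketch (not run here): open_tls_by_ip("1.2.3.4", 443, "api.server.com")
```

Some websocket client libraries expose the same idea through options such as a custom `Host` header or an `sslopt`/`server_hostname` parameter; check your library's documentation for the exact knob.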
|
<python><websocket><tcp>
|
2023-06-04 00:38:05
| 1
| 491
|
rare77
|
76,398,466
| 6,577,503
|
Drawing a graph network in 3D
|
<p>Suppose I am a given a directed graph in python:</p>
<pre><code>V = [ 1, 2, 3, 4, 5]
E = {
    1: [2, 3, 4],
    2: [1, 2, 3],
    3: [1, 4, 5],
    4: [5],
    5: [1, 3]}
c = [ 81, 23, 43, 22, 100]
</code></pre>
<p>V and E represent the vertex and edge sets of the graph as a list and a dictionary, respectively, and c is a cost function on the vertex set, i.e. c(1) = 81, c(2) = 23, etc. I can easily visualize the graph represented by (V, E) in two dimensions using the networkx package, but additionally I want to plot the vertices at varying heights along the z-axis (instead of only in the xy-plane), so that the 'height' of each vertex equals its cost.</p>
<p>How can I do so?</p>
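A library-agnostic sketch of the layout step (with networkx you would typically take x and y from `nx.spring_layout(G)` and draw the result via `mpl_toolkits.mplot3d`; here vertices are simply placed on a circle): compute a 3D position per vertex whose z coordinate is its cost, and turn each edge into a 3D segment.

```python
import math

V = [1, 2, 3, 4, 5]
E = {1: [2, 3, 4], 2: [1, 2, 3], 3: [1, 4, 5], 4: [5], 5: [1, 3]}
c = [81, 23, 43, 22, 100]

# Place the vertices on a unit circle in the xy-plane and use the cost as z.
pos = {}
for i, v in enumerate(V):
    angle = 2 * math.pi * i / len(V)
    pos[v] = (math.cos(angle), math.sin(angle), c[i])

# Each directed edge (u, w) becomes a 3D segment from pos[u] to pos[w];
# these are exactly the line segments a 3D axes would draw.
segments = [(pos[u], pos[w]) for u, nbrs in E.items() for w in nbrs]
```

Feeding `pos` and `segments` to a 3D scatter plus per-segment `plot` calls on an `Axes3D` then gives the graph with vertex height equal to cost.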
|
<python><networkx>
|
2023-06-03 23:55:40
| 1
| 441
|
Anon
|
76,398,368
| 10,327,849
|
Pandas EXCLUSIVE LEFT OUTER JOIN with line count
|
<p>I am creating a transactions import tool that updates a DB with new transactions every day.</p>
<p>I am getting an Excel file (<em>that I am opening using pandas</em>) with the entire month transactions and I am trying to filter only the new transactions by merging the new DataFrame with the existing one.</p>
<p>For this I am using pandas merge to do an <em>EXCLUSIVE LEFT OUTER JOIN</em>, but I have a problem with multiple rows that have exactly the same values.</p>
<p>See this example:</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.array([[pd.Timestamp('2023-1-1'), 'A', 10]
, [pd.Timestamp('2023-1-1'), 'A', 10]
, [pd.Timestamp('2023-1-1'), 'B', 11]
, [pd.Timestamp('2023-1-2'), 'C', 12]
, [pd.Timestamp('2023-1-2'), 'D', 13]
, [pd.Timestamp('2023-1-2'), 'E', 14]
, [pd.Timestamp('2023-1-3'), 'F', 15]]),
columns=['Date', 'Title', 'Amount'])
df2 = pd.DataFrame(np.array([[pd.Timestamp('2023-1-1'), 'A', 10]
, [pd.Timestamp('2023-1-1'), 'B', 11]
, [pd.Timestamp('2023-1-2'), 'C', 12]]),
columns=['Date', 'Title', 'Amount'])
df3 = pd.merge(df1, df2, on=['Date', 'Title', 'Amount'], how="outer", indicator=True)
df3 = df3[df3['_merge'] == 'left_only']
print(df1)
print(df2)
print(df3) # Both 'A' rows deleted while one 'A' row is new and should be in df3
</code></pre>
<p>The output is:</p>
<pre><code> Date Title Amount
0 2023-01-01 A 10
1 2023-01-01 A 10
2 2023-01-01 B 11
3 2023-01-02 C 12
4 2023-01-02 D 13
5 2023-01-02 E 14
6 2023-01-03 F 15
Date Title Amount
0 2023-01-01 A 10
1 2023-01-01 B 11
2 2023-01-02 C 12
Date Title Amount _merge
4 2023-01-02 D 13 left_only
5 2023-01-02 E 14 left_only
6 2023-01-03 F 15 left_only
</code></pre>
<p>With the above method, both <code>'A'</code> rows are deleted while one <code>'A'</code> row is new and thus should be in the new DataFrame.</p>
<p>Any ideas on what operation can be used to keep <strong>only</strong> the rows in the first DataFrame, <strong>with</strong> consideration of row counts? To give a little more information, transactions within the same day are not ordered (<em>no time information, only the date</em>) and new transactions can be added multiple days in the past.</p>
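One approach sketched here (the `_n` helper column is made up for illustration): number duplicate rows within each key group with `groupby(...).cumcount()`, so the second 'A' row merges as a distinct key and survives as `left_only`:

```python
import pandas as pd

df1 = pd.DataFrame({"Date": ["2023-01-01", "2023-01-01", "2023-01-01"],
                    "Title": ["A", "A", "B"],
                    "Amount": [10, 10, 11]})
df2 = pd.DataFrame({"Date": ["2023-01-01", "2023-01-01"],
                    "Title": ["A", "B"],
                    "Amount": [10, 11]})

keys = ["Date", "Title", "Amount"]
# Number duplicates within each key group: the first (A, 10) gets 0,
# the second gets 1, so the merge can tell them apart.
df1["_n"] = df1.groupby(keys).cumcount()
df2["_n"] = df2.groupby(keys).cumcount()

new_rows = (pd.merge(df1, df2, on=keys + ["_n"], how="left", indicator=True)
              .query("_merge == 'left_only'")
              .drop(columns=["_n", "_merge"]))
# new_rows keeps exactly one 'A' row: the duplicate absent from df2
```

This assumes "new" means "the k-th occurrence of a value combination is new if df2 has fewer than k occurrences", which matches the unordered-within-day situation described.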
|
<python><pandas><dataframe><join>
|
2023-06-03 23:07:51
| 1
| 301
|
Yakir Shlezinger
|
76,398,117
| 11,065,874
|
How to override the default 200 response in FastAPI docs
|
<p>I have this small FastAPI application:</p>
<pre><code>import uvicorn
from fastapi import FastAPI, APIRouter
from fastapi import Path
from pydantic import BaseModel
from starlette import status
app = FastAPI()
def test():
    print("creating the resource")
    return "Hello world"

router = APIRouter()

class MessageResponse(BaseModel):
    detail: str

router.add_api_route(
    path="/test",
    endpoint=test,
    methods=["POST"],
    responses={
        status.HTTP_201_CREATED: {"model": MessageResponse}
    }
)
app.include_router(router)

def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)

if __name__ == "__main__":
    main()
</code></pre>
<p>When I check the docs at <code>http://127.0.0.1:8001/docs#/default/test_test_post</code>, I see two responses listed: 200 and 201.</p>
<p>I don't have any 200 response here, and I don't want 200 to be shown in the docs.</p>
<p>Here is the FastAPI auto-generated openapi.json file:</p>
<pre><code>{
"openapi": "3.0.2",
"info": {"title": "FastAPI", "version": "0.1.0"},
"paths": {"/test": {
"post": {"summary": "Test", "operationId": "test_test_post", "responses": {
"200": {
"description": "Successful Response", "content": {"application/json": {"schema": {}}}
},
"201": {
"description": "Created",
"content": {"application/json": {"schema": {"$ref": "#/components/schemas/MessageResponse"}}}}}}}
},
"components": {"schemas": {
"MessageResponse": {"title": "MessageResponse", "required": ["detail"], "type": "object",
"properties": {"detail": {"title": "Detail", "type": "string"}}}}}}
</code></pre>
<p>I should not be seeing</p>
<pre><code> "description": "Successful Response", "content": {"application/json": {"schema": {}}}
},
</code></pre>
<p>What should I do?</p>
<hr />
<p>UPDATE:</p>
<p>This one also did not work:</p>
<pre><code>import uvicorn
from fastapi import FastAPI, APIRouter
from pydantic import BaseModel
from starlette import status
from starlette.responses import Response
app = FastAPI()
def test(response: Response):
    print("creating the resource")
    response.status_code = 201
    return "Hello world"

router = APIRouter()

class MessageResponse(BaseModel):
    detail: str

router.add_api_route(
    path="/test",
    endpoint=test,
    methods=["POST"],
    response_model=None,
    responses={
        200: {},
        status.HTTP_201_CREATED: {"model": MessageResponse}
    }
)
app.include_router(router)

def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)

if __name__ == "__main__":
    main()
</code></pre>
|
<python><fastapi><openapi>
|
2023-06-03 21:33:25
| 2
| 2,555
|
Amin Ba
|
76,397,993
| 5,423,080
|
Plot function during pytest debugging in console mode
|
<p>I am writing a unit test for a scientific function, and when I tried to check its shape I got an error from <code>matplotlib</code>.</p>
<p>I am using PyCharm Community Edition 2022.3.3, python 3.11, matplotlib 3.7.1 and PySide6 6.5.0 under Windows 10.</p>
<p>When debugging the test, I tried to plot the function in console mode and got this error/warning and no plot:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.3\plugins\python-ce\helpers\pydev\_pydevd_bundle\pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py", line 2812, in plot
return gca().plot(
^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py", line 2309, in gca
return gcf().gca()
^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py", line 906, in gcf
return figure()
^^^^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\_api\deprecation.py", line 454, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py", line 840, in figure
manager = new_figure_manager(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py", line 384, in new_figure_manager
return _get_backend_mod().new_figure_manager(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py", line 3574, in new_figure_manager
return cls.new_figure_manager_given_figure(num, fig)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py", line 3579, in new_figure_manager_given_figure
return cls.FigureCanvas.new_manager(figure, num)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py", line 1742, in new_manager
return cls.manager_class.create_with_canvas(cls, figure, num)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py", line 2858, in create_with_canvas
return cls(canvas_class(figure), num)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backends\backend_qt.py", line 204, in __init__
_create_qApp()
File "C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backends\backend_qt.py", line 134, in _create_qApp
QtWidgets.QApplication.setAttribute(
DeprecationWarning: Enum value 'Qt::ApplicationAttribute.AA_EnableHighDpiScaling' is marked as deprecated, please check the documentation for more information.
</code></pre>
<p>If I run the code not as a test, everything is fine.</p>
<p>Here is a working example that reproduces the error:</p>
<pre><code># test_scientific_functions.py
import numpy as np
import matplotlib.pyplot as plt
def test_sin():
    x = np.arange(0, 25, 0.1)
    y = np.sin(x)
    plt.plot(x, y)
    plt.show()
</code></pre>
<p>This code generates this message:</p>
<pre><code>test\unit\test_scientific_functions.py:22 (test_sin)
def test_sin():
x = np.arange(0, 25, 0.1)
y = np.sin(x)
> plt.plot(x, y)
test\unit\test_scientific_functions.py:26:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:2812: in plot
return gca().plot(
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:2309: in gca
return gcf().gca()
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:906: in gcf
return figure()
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\_api\deprecation.py:454: in wrapper
return func(*args, **kwargs)
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:840: in figure
manager = new_figure_manager(
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:384: in new_figure_manager
return _get_backend_mod().new_figure_manager(*args, **kwargs)
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py:3574: in new_figure_manager
return cls.new_figure_manager_given_figure(num, fig)
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py:3579: in new_figure_manager_given_figure
return cls.FigureCanvas.new_manager(figure, num)
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py:1742: in new_manager
return cls.manager_class.create_with_canvas(cls, figure, num)
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py:2858: in create_with_canvas
return cls(canvas_class(figure), num)
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backends\backend_qt.py:204: in __init__
_create_qApp()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
@functools.lru_cache(1)
def _create_qApp():
app = QtWidgets.QApplication.instance()
# Create a new QApplication and configure it if none exists yet, as only
# one QApplication can exist at a time.
if app is None:
# display_is_valid returns False only if on Linux and neither X11
# nor Wayland display can be opened.
if not mpl._c_internal_utils.display_is_valid():
raise RuntimeError('Invalid DISPLAY variable')
# Check to make sure a QApplication from a different major version
# of Qt is not instantiated in the process
if QT_API in {'PyQt6', 'PySide6'}:
other_bindings = ('PyQt5', 'PySide2')
elif QT_API in {'PyQt5', 'PySide2'}:
other_bindings = ('PyQt6', 'PySide6')
else:
raise RuntimeError("Should never be here")
for binding in other_bindings:
mod = sys.modules.get(f'{binding}.QtWidgets')
if mod is not None and mod.QApplication.instance() is not None:
other_core = sys.modules.get(f'{binding}.QtCore')
_api.warn_external(
f'Matplotlib is using {QT_API} which wraps '
f'{QtCore.qVersion()} however an instantiated '
f'QApplication from {binding} which wraps '
f'{other_core.qVersion()} exists. Mixing Qt major '
'versions may not work as expected.'
)
break
try:
> QtWidgets.QApplication.setAttribute(
QtCore.Qt.AA_EnableHighDpiScaling)
E DeprecationWarning: Enum value 'Qt::ApplicationAttribute.AA_EnableHighDpiScaling' is marked as deprecated, please check the documentation for more information.
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backends\backend_qt.py:134: DeprecationWarning
</code></pre>
<p>If I run this other code everything is fine:</p>
<pre><code># plot.py
import numpy as np
import matplotlib.pyplot as plt
def plot_sin():
    x = np.arange(0, 10, 0.1)
    y = np.sin(x)
    plt.plot(x, y)
    plt.show()

if __name__ == "__main__":
    plot_sin()
</code></pre>
<p>Does this mean that <code>pytest</code> somehow conflicts with <code>matplotlib.pyplot</code>?</p>
<p>Do you have any advice?</p>
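A plausible explanation (hedged): pytest's warning filters can escalate `DeprecationWarning` to an error, which here aborts `QApplication` creation inside matplotlib's Qt backend; run outside pytest, the same warning is merely printed. A common workaround is to force the headless `Agg` backend in tests, which needs no Qt at all (sketch; `make_plot` is a made-up helper):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: no Qt, no QApplication needed
import matplotlib.pyplot as plt
import numpy as np

def make_plot():
    x = np.arange(0, 25, 0.1)
    y = np.sin(x)
    plt.plot(x, y)
    return plt.gcf()   # with Agg, plt.show() would be a no-op
```

Alternatively, a `filterwarnings = ignore::DeprecationWarning` entry in pytest's configuration (or upgrading matplotlib/PySide6 so the deprecated Qt attribute is no longer touched) may also avoid the failure.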
|
<python><matplotlib><pycharm><pytest>
|
2023-06-03 20:50:05
| 0
| 412
|
cicciodevoto
|
76,397,673
| 222,279
|
How do I sort a 2D numpy array using indices stored in a 1D numpy array?
|
<p>I have a 1D numpy array containing row index values into a 2D array. How do I sort the 2D array based on the index values in the 1D array? For example:</p>
<pre><code>indicies = np.array([2,3,0,1])
matrix = np.array([[20, 200],[3,300],[100,1000],[1,1]])
</code></pre>
<p>I want to sort <code>matrix</code> based on the order of the index values in <code>indicies</code>, so that I end up with a 2D array looking like:</p>
<pre><code>[[100 1000]
[ 1 1]
[ 20 200]
[ 3 300]]
</code></pre>
<p>Basically, the index values are associated with each row in <code>matrix</code>.</p>
<hr />
<p>Roman's answer below seems to work on his example, but not on this one:</p>
<pre><code>sort1 = np.array([2,4,5,7,0,1,3,6])
matrix = np.array([[20, 200],[40,400],[50,500],[70,700],[1, 1],[10,100],[30,300],[60,600]])
taken = np.take(matrix,sort1,axis=0)
print(taken)
</code></pre>
<p>I get the output:</p>
<pre><code>[[ 50 500]
[ 1 1]
[ 10 100]
[ 60 600]
[ 20 200]
[ 40 400]
[ 70 700]
[ 30 300]]
</code></pre>
<p>It outputs the rows in the order listed in the original array, but I want them ordered by the index values in <code>sort1</code>:</p>
<pre><code>[[ 1 1]
[ 10 100]
[ 20 200]
[ 30 300]
[ 40 400]
[ 50 500]
[ 60 600]
[ 70 700]]
</code></pre>
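<p>If the intent is that <code>sort1[i]</code> is the label attached to row <code>i</code> (my reading of "the index values are associated with each row"), then the rows should be reordered with <code>np.argsort</code>, since plain fancy indexing performs the opposite mapping. A sketch:</p>

```python
import numpy as np

sort1 = np.array([2, 4, 5, 7, 0, 1, 3, 6])
matrix = np.array([[20, 200], [40, 400], [50, 500], [70, 700],
                   [1, 1], [10, 100], [30, 300], [60, 600]])

# matrix[sort1] (which is what np.take does) PICKS row sort1[k] for
# output slot k.  If instead sort1[i] is the label of row i, row i
# belongs at position sort1[i]; that inverse permutation is argsort.
reordered = matrix[np.argsort(sort1)]
print(reordered)
```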
|
<python><numpy><numpy-ndarray>
|
2023-06-03 19:13:26
| 2
| 13,026
|
GregH
|
76,397,643
| 13,078,279
|
API created for Flask app is extremely slow
|
<p>I am creating an app to monitor schoolbus arrivals for my school. For this, I have created a simple flask webapp, with an admin page used by admins to input buses that have arrived:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, jsonify, request, render_template
import sys
app = Flask(__name__)
original_busdata = {
"present": [],
"absent": [i for i in range(1, 32)] # all buses not here to start with
}
busdata = original_busdata
@app.route("/busdata", methods=["GET"])
def transfer_data():
global busdata
return jsonify(busdata)
@app.route("/admin", methods=["GET", "POST"])
def template():
global busdata
present = []
absent = []
# When a checkbox is checked from the
# admin page, adjust
# busdata correspondingly
if request.method == "POST":
for i in range(1, 32):
if f"bus-{i}" in request.form:
present.append(i)
else:
absent.append(i)
busdata["present"] = present
busdata["absent"] = absent
# Assemble checkbox data from busdata
checkbox_data = {}
for bus in busdata["present"]:
checkbox_data[bus] = True
for bus in busdata["absent"]:
checkbox_data[bus] = False
# Sort checkbox data so it is in the right order
checkbox_data = dict(sorted(checkbox_data.items()))
return render_template("admin.html", busdata=busdata, checkbox_data=checkbox_data)
if __name__ == "__main__":
app.run()
</code></pre>
<p>With <code>admin.html</code> template:</p>
<pre class="lang-html prettyprint-override"><code><div class="page-container">
<section id="bus-card-container">
<h2>Buses currently present</h2>
<div class="bus-list">
{% for i in busdata["present"] %}
<span class="bus-present">Bus {{ i }}</span>
{% endfor %}
</div>
<h2>Buses not here</h2>
<div class="bus-list">
{% for i in busdata["absent"] %}
<span class="bus-absent">Bus {{ i }}</span>
{% endfor %}
</div>
</section>
<section id="admin-panel">
<button id="clear-all">Clear all buses</button>
<button id="check-all">Check all buses</button>
<form id="bus-form" method="post">
{% for bus in checkbox_data %}
<div>
{% if checkbox_data[bus] is sameas true %}
<input type="checkbox" name="bus-{{ bus }}" id="bus-{{ bus }}" checked>
{% else %}
<input type="checkbox" name="bus-{{ bus }}" id="bus-{{ bus }}">
{% endif %}
<label for="bus-{{ bus }}">Bus {{ bus }}</label>
</div>
{% endfor %}
<input type="submit" id="submit-btn">
</form>
</section>
</div>
</code></pre>
<p>A working example of the admin page can be found at <a href="https://three-fifteen-app.vercel.app/admin" rel="nofollow noreferrer">https://three-fifteen-app.vercel.app/admin</a>.</p>
<p>My issue is with the busdata API I defined in the <code>busdata/</code> route. Whenever the admin updates <code>busdata</code>, it takes forever for the changes made to reflect in the API:</p>
<p><a href="https://i.sstatic.net/D10Bk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D10Bk.png" alt="API bug error" /></a></p>
<p><em>Notice how the admin shows 3 buses are present on the right, while the busdata API does not reflect this change yet</em></p>
<p>And I need the API to be fast enough as my client-side frontend for the app is entirely built around this API. Please let me know what tweaks I can make to my code to resolve this issue.</p>
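<p>Separate from the latency itself, note that <code>busdata = original_busdata</code> does not copy the dict; it binds a second name to the same object, so mutating one mutates the "original" too. A minimal sketch (plain Python, independent of Flask):</p>

```python
import copy

original_busdata = {"present": [], "absent": list(range(1, 32))}

# Plain assignment binds a second name to the SAME dict object.
busdata = original_busdata
busdata["present"] = [1, 2]
assert original_busdata["present"] == [1, 2]  # the "original" changed too

# An independent, resettable baseline needs an explicit deep copy.
busdata = copy.deepcopy(original_busdata)
busdata["present"] = []
assert original_busdata["present"] == [1, 2]  # untouched this time
```

If this runs on Vercel, also bear in mind that serverless deployments do not share in-process memory between invocations, so a module-level dict is not a reliable store there; the stale <code>/busdata</code> responses are consistent with requests hitting different instances, and a database or other external store would be the usual fix.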
|
<python><flask>
|
2023-06-03 19:06:15
| 0
| 416
|
JS4137
|
76,397,541
| 4,913,254
|
Save python libraries in a local directory to run pip install locally when running a docker container
|
<p>I want to create a Docker container that needs to install Python libraries. Normally I could use something like <code>RUN pip install -r requirements.txt</code>. However, the server where I want to run this container has no connection to the Internet. My approach is to create a new folder in my work directory (e.g. python_libraries), put all the libraries there, and install them locally.</p>
<p>I have an env with all Python libraries needed to run the app. I thought it would be easy to copy the directory where the libraries of my env are and then run pip locally. I also thought that this could be a typical question in Stackoverflow or similar but I cannot find a satisfactory answer although there are some people who asked something similar to this.</p>
<p>(If I am not wrong) the directory of my Python libraries is <code>/Volumes/MANUEL/anaconda3/envs/lookup3.6/lib/python3.6/site-packages</code>. To verify this works before building the container, I am running <code>pip install -r requirements.txt --no-index --find-links file:/Volumes/MANUEL/anaconda3/envs/lookup3.6/lib/python3.6/site-packages</code> in a new conda env I created, to see if I really can install the libraries locally; if that works, I would use the same approach inside the container.</p>
<p>The error I get is this</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement absl-py==1.4.0 (from versions: none)
ERROR: No matching distribution found for absl-py==1.4.0
</code></pre>
<p>I have created this requirements.txt file using <code>pip freeze</code>.</p>
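<p>The "no matching distribution" error occurs because <code>site-packages</code> holds <em>installed</em> packages, while <code>--find-links</code> expects distribution files (wheels or sdists). One possible workflow (a sketch, not tested against your exact setup: the paths, Python version, and platform must match your server): download the distributions on a machine with Internet access via <code>pip download</code>, ship the folder with the build context, and install offline:</p>

```dockerfile
# Beforehand, on a machine WITH Internet access (matching OS/arch/Python):
#   pip download -r requirements.txt -d python_libraries

FROM python:3.6
WORKDIR /app
COPY requirements.txt .
COPY python_libraries/ ./python_libraries/
# --no-index keeps pip away from PyPI; --find-links points it at the
# local wheel/sdist files downloaded above.
RUN pip install --no-index --find-links=./python_libraries -r requirements.txt
```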
|
<python><docker><conda>
|
2023-06-03 18:43:25
| 1
| 1,393
|
Manolo Dominguez Becerra
|
76,397,496
| 475,982
|
How do I represent an optional component in a grammar with pyparser?
|
<p>I am developing a parser that extracts the dose and name from expressions of medication dosages. For example, pulling "10 mg" and "aspirin" from "10mg of aspirin" and "10 mg aspirin".</p>
<p>My attempt in <code>pyparsing</code>.</p>
<pre><code>import pyparsing as pp
doseWord = pp.Word(pp.alphas)
doseNum = pp.Word(pp.nums)
unit = pp.Word(pp.alphas)
preposition = pp.Word(pp.alphas)
chemical = pp.Word(pp.printables)
dosage_parser = doseNum + unit + pp.Optional(preposition) + chemical
print(dosage_parser.parseString('10mg of aspirin')) # ['10','mg','of','aspirin']
print(dosage_parser.parseString('10mg aspirin')) # Error, expected W(0123...) found end of text.
#These two lines should output the same thing.
</code></pre>
<p><strong>What I've tried</strong></p>
<ol>
<li>wrapping <code>preposition</code> in <code>pp.Optional</code> - Not working</li>
<li>replacing <code>preposition</code> with <code>pp.Combine(pp.Optional(preposition) + pp.Empty())</code> - Not working</li>
<li>replacing <code>preposition</code> with <code>pp.oneOrMore([preposition, pp.Empty()])</code> - hangs indefinitely, as somewhat expected</li>
<li>wrapping <code>preposition</code> in <code>pp.ZeroOrMore</code> - Not working.</li>
<li><code>(pp.Empty() | preposition)</code> - parses incorrectly as ['10', 'mg', 'of']</li>
</ol>
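<p>A sketch of a likely fix: <code>pp.Optional</code> itself works, but because <code>preposition</code> is a bare <code>Word(pp.alphas)</code> it greedily matches "aspirin", leaving nothing for <code>chemical</code> (pyparsing does not backtrack out of a matched <code>Optional</code>). Restricting the preposition to the literal word "of" avoids that:</p>

```python
import pyparsing as pp

doseNum = pp.Word(pp.nums)
unit = pp.Word(pp.alphas)
# Keyword("of") only matches the exact word "of", so the optional
# element can no longer swallow the chemical name itself.
preposition = pp.Optional(pp.Keyword("of")).suppress()
chemical = pp.Word(pp.printables)

dosage_parser = doseNum + unit + preposition + chemical

print(dosage_parser.parseString("10mg of aspirin").asList())
print(dosage_parser.parseString("10mg aspirin").asList())
```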
|
<python><context-free-grammar><pyparsing>
|
2023-06-03 18:33:50
| 1
| 3,163
|
mac389
|
76,397,308
| 2,387,411
|
Selenium Python: How to capture li element with specific text
|
<p>I am trying to extract <code>urlToBeCaptured</code> and <code>Text to be captured</code> from the HTML. The structure looks as follows:</p>
<pre><code><li>
" text with trailing spaces "
<a href="urlToBeCaptured">
        <span class="class1"> Text to be captured </span>
        <span class="class2"> Another text </span>
</a>
...
</li>
</code></pre>
<p>I am doing the following, but it doesn't seem to work:</p>
<pre><code>el = driver.find_element(By.XPATH, "//li[contains(text(),'text with trailing spaces')]")
</code></pre>
<p>Once I locate the element, how do I extract the text from <code>class1</code>? Should it be something like this?</p>
<pre><code>textToBeCaptured = el.find_element(By.CLASS_NAME, 'class1').text
</code></pre>
|
<python><selenium-webdriver><web-scraping><xpath><normalize-space>
|
2023-06-03 17:53:37
| 1
| 315
|
bekon
|
76,397,098
| 2,986,042
|
How to make a variable in global scope in Robot framework?
|
<p>I have created a small Robot Framework test suite which communicates with Trace32 Lauterbach. My idea is to run different function names using a loop; in each iteration it sets a breakpoint in Trace32. I have written a simple Python script as a library for Robot Framework.</p>
<p><strong>test.robot file</strong></p>
<pre><code>import os
*** Settings ***
Documentation simple test script to control Trace32
Library Collections
Library can.Trace32
Suite Setup
Suite Teardown
*** Variables ***
${temp} 1
*** Test Cases ***
Check Input and Output
[Documentation] test script
[Setup]
#Retrive Data . This list has 5 values
${MainList} Create List
#start debugger
start Debugger
#connect debugger
Connect Debugger
#Iterate 5 times
FOR ${UserAttribute} IN @{MainList}
#sleep 1 sec
Sleep 1 seconds
#call for breakpoint
break Debugger
${temp} +=1
END
Disconnect Debugger
[Teardown]
</code></pre>
<p>and the trace 32 script file:</p>
<pre><code>import time
import ctypes
from ctypes import c_void_p
import enum
T32_DEV = 1
class Trace32:
def start_Debugger(self):
self.t32api = ctypes.cdll.LoadLibrary('D:/test/api/python/t32api64.dll')
self.t32api.T32_Config(b"NODE=",b"localhost")
self.t32api.T32_Config(b"PORT=",b"20000")
self.t32api.T32_Config(b"PACKLEN=",b"1024")
rc = self.t32api.T32_GetChannelSize()
ch1 = ctypes.create_string_buffer(rc)
self.t32api.T32_GetChannelDefaults(ctypes.cast(ch1,ctypes.c_void_p))
ch2 = ctypes.create_string_buffer(rc)
self.t32api.T32_GetChannelDefaults(ctypes.cast(ch2,ctypes.c_void_p))
self.t32api.T32_SetChannel(ctypes.cast(ch2,c_void_p))
def Connect_Debugger(self):
rc = self.t32api.T32_Init()
rc = self.t32api.T32_Attach(T32_DEV)
def breakpoint_Debugger(self):
rc = self.t32api.T32_Ping()
time.sleep(2)
rc = self.t32api.T32_Cmd(b"InterCom M7_0 Break")
time.sleep(3)
rc = self.t32api.T32_Cmd(b"InterCom M7_0 Go")
time.sleep(2)
rc = self.t32api.T32_Cmd(b"InterCom M7_0 break.Set My_func")
time.sleep(2)
def Disconnect_Debugger(self):
rc = self.t32api.T32_Exit()
</code></pre>
<p>In the robot file, I am calling the <code>start Debugger</code> and <code>Connect Debugger</code> keywords to start and connect the debugger. I want <code>self.t32api</code> to be global, so that I can call <code>break Debugger</code> many times to set breakpoints.</p>
<p>Unfortunately, I can only set a breakpoint in the first iteration; in the second iteration, the breakpoint does not work. How can I keep <code>self.t32api</code> alive until the robot file has executed completely?</p>
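<p>To answer the literal question about keeping <code>self.t32api</code> alive: Robot Framework creates a fresh instance of a Python library for every test case by default ("TEST" scope). Setting <code>ROBOT_LIBRARY_SCOPE</code> to <code>SUITE</code> (or <code>GLOBAL</code>) makes it reuse one instance, so state survives across keywords and tests. A sketch with the Trace32 calls stubbed out (whether this also fixes the second breakpoint depends on the Trace32 commands themselves):</p>

```python
class Trace32:
    # Robot Framework instantiates a library once per test case by
    # default ("TEST" scope).  SUITE (or GLOBAL) scope keeps a single
    # instance -- and therefore self.t32api -- alive for the whole run.
    ROBOT_LIBRARY_SCOPE = 'SUITE'

    def __init__(self):
        self.t32api = None

    def start_Debugger(self):
        # placeholder: load t32api64.dll and configure it here,
        # exactly as in the question
        self.t32api = object()
```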
|
<python><robotframework><trace32><lauterbach>
|
2023-06-03 17:01:20
| 1
| 1,300
|
user2986042
|
76,397,082
| 15,520,615
|
Trying to execute code in SnowFlake Python Sheet Error: missing 1 required positional argument:
|
<p>I am trying to execute code in a Snowflake Python worksheet, but I'm getting this error:</p>
<pre><code>Traceback (most recent call last):
Worksheet, line 12, in <module>
TypeError: __init__() missing 1 required positional argument: 'conn'
</code></pre>
<p>Can someone take a look at my code and let me know where I'm going wrong?</p>
<pre><code>import snowflake.snowpark as snowpark
from snowflake.snowpark.functions import col
from snowflake.snowpark.functions import *
# Create a session
session = snowpark.Session()
# Define the function to get entity structure
def getEntityStruct(connectionInst, log, entityStageId, processId, debug=False):
log.writeToLogs(processId, "Info", "GetEntityStruct", "GetEntityStruct")
structQuery = f"SELECT * FROM Config.GetEntityStructure WHERE EntityStageID = {entityStageId} ORDER BY EntityColumnOrder"
if debug:
print(f"getEntityStruct query {structQuery}")
entityColumns = connectionInst.readFromDb(processId, structQuery)
struct_fields = [
sp.StructField(col.ColumnName, eval(col.ColumnType), col.IsNullable)
for col in entityColumns.select("ColumnName", "ColumnType", "IsNullable").orderBy("EntityColumnOrder").collect()
]
struct = sp.StructType(struct_fields)
pKey = sp.lit(None)
chKey = sp.lit(None)
pkRows = (
entityColumns.filter(entityColumns.isPrimaryKey)
.groupBy("EntityId")
.agg(sp.concat_ws(",", sp.collect_list(sp.concat(sp.lit("`"), entityColumns.ColumnName, sp.lit("`")))).alias("keyName"))
.collect()
)
if len(pkRows) == 0:
log.writeToLogs(processId, "Error", "NoPKey", "NoPKey", errorType="NoPKey")
log.writeToLogs(processId, "Info", "FailGetEntityStruct", "FailGetEntityStruct", errorType="FailGetEntityStruct")
raise ValueError("NoPKey")
pKey = pkRows[0].keyName
chkRows = (
entityColumns.filter(entityColumns.isChangeTracking)
.groupBy("EntityId")
.agg(sp.concat_ws(",", sp.collect_list(sp.concat(sp.lit("`"), entityColumns.ColumnName, sp.lit("`")))).alias("changeCols"))
.collect()
)
if len(chkRows) == 0:
log.writeToLogs(processId, "Error", "NoChKey", "NoChKey", errorType="NoChKey")
log.writeToLogs(processId, "Info", "FailGetEntityStruct", "FailGetEntityStruct", errorType="FailGetEntityStruct")
raise ValueError("NoChKey")
for row in chkRows:
chKey = row.changeCols
regName = entityColumns.select(sp.max("RegistrationName").alias("RegName")).collect()[0].RegName
log.writeToLogs(processId, "Info", "SuccessGetEntityStruct", "SuccessGetEntityStruct")
if debug:
print(f"getEntityStruct struct {struct}")
return {"struct": struct, "pKey": pKey, "chKey": chKey, "regName": regName}
</code></pre>
<p>I have tried to establish a connection with the following</p>
<pre><code>conn = snowflake.connector.connect(
user='xxxxxx',
password='xxxxxxx',
account='xxxxx',
role= 'ACCOUNTADMIN',
warehouse='COMPUTE_WH',
database='MYDEMODB',
schema='MYSCHEMA'
)
</code></pre>
<p>But I am still getting the same error.</p>
|
<python><snowflake-cloud-data-platform>
|
2023-06-03 16:58:16
| 1
| 3,011
|
Patterson
|
76,397,037
| 18,876,759
|
django full text search taggit
|
<h2>My application - the basics</h2>
<p>I have a simple django application which allows for storing information about certain items and I'm trying to implement a search view/functionality.</p>
<p>I'm using <code>django-taggit</code> to tag the items by their functionality/features.</p>
<h2>What I want to implement</h2>
<p>I want to implement a full-text search which allows searching across all the fields of the items, including their tags.</p>
<h2>The problem(s)</h2>
<ol>
<li>On the results view, the tagged items are showing up multiple times (one occurence per tag)</li>
<li>The ranking is correct when I specify <em>only a single</em> tag in the search field, but when I specify <em>multiple</em> tag names, I get unexpected ranking results.</li>
</ol>
<p>I suspect the <code>SearchVector()</code> does not resolve the tags relation as I expected it to do. The tags should be treated just like a list of words in this case.</p>
<h2>Example Code</h2>
<h3>models.py</h3>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from taggit.managers import TaggableManager
class Item(models.Model):
identifier = models.SlugField('ID', unique=True, editable=False)
short_text = models.CharField('Short Text', max_length=100, blank=True)
serial_number = models.CharField('Serial Number', max_length=30, blank=True)
revision = models.CharField('Revision/Version', max_length=30, blank=True)
part_number = models.CharField('Part Number', max_length=30, blank=True)
manufacturer = models.CharField('Manufacturer', max_length=30, blank=True)
description = models.TextField('Description', blank=True)
tags = TaggableManager('Tags', blank=True)
is_active = models.BooleanField('Active', default=True)
</code></pre>
<h3>forms.py</h3>
<pre class="lang-py prettyprint-override"><code>from django import forms
class SearchForm(forms.Form):
search = forms.CharField(max_length=200, required=False)
active_only = forms.BooleanField(initial=True, label='Show active items only', required=False)
</code></pre>
<h3>views.py</h3>
<pre class="lang-py prettyprint-override"><code>from django.views.generic.list import ListView
from django.contrib.postgres.search import SearchQuery, SearchVector, SearchRank
from . import models
from . import forms
class ItemListView(ListView):
form_class = forms.SearchForm
model = models.Item
fields = ['serial_number', 'part_number', 'manufacturer', 'tags', 'is_active']
template_name_suffix = '_list'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['form'] = self.form_class(self.request.GET)
return context
def get_queryset(self):
queryset = super().get_queryset()
form = self.form_class(self.request.GET)
if form.is_valid():
if form.cleaned_data['active_only']:
queryset = queryset.filter(is_active=True)
if not form.cleaned_data['search']:
return super().get_queryset()
search_vector = SearchVector('identifier', 'short_text', 'serial_number', 'revision', 'part_number',
'manufacturer', 'description', 'tags')
search_query = SearchQuery(form.cleaned_data['search'], search_type='websearch')
return (
queryset.annotate(
search=search_vector, rank=SearchRank(search_vector, search_query)
)
# .filter(search=search_query)
.order_by("-rank").distinct()
) #.filter(search__icontains=form.cleaned_data['search'],)
return super().get_queryset()
</code></pre>
|
<python><django><postgresql><tags><full-text-search>
|
2023-06-03 16:46:07
| 1
| 468
|
slarag
|
76,396,938
| 963,671
|
OptInt type function in Python
|
<pre class="lang-py prettyprint-override"><code>import requests  # needed for requests.get below
import pandas as pd
import json
mass=[]
fall=[]
year=[]
req = requests.get("https://data.nasa.gov/resource/y77d-th95.json")
response =req.json()
for i in range(0,len(response)):
mass.append(response[i]['mass'])
fall.append(response[i]['fall'])
year.append(response[i]['year'])
</code></pre>
<p>I currently handle the <code>KeyError</code> with exception handling: if I get a <code>KeyError</code>, I append a NaN value.
Java has methods like <code>optString</code> that return a default value instead of raising when a key is missing.
I just want to collect all the data into lists using an <code>optString</code>-style method.
Can you give an example?</p>
<p>Thanks,</p>
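<p>The closest Python analogue to Java's <code>optString</code> is <code>dict.get(key, default)</code>, which returns a default instead of raising <code>KeyError</code>. A sketch with made-up sample rows (the real response comes from the NASA API):</p>

```python
import math

# hypothetical sample of what two rows of the API response look like
response = [
    {"mass": "21", "fall": "Fell", "year": "1880-01-01T00:00:00.000"},
    {"fall": "Found"},  # no "mass" or "year" key
]

# dict.get(key, default) returns the default instead of raising KeyError
mass = [row.get("mass", float("nan")) for row in response]
fall = [row.get("fall", float("nan")) for row in response]
year = [row.get("year", float("nan")) for row in response]
```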
|
<python>
|
2023-06-03 16:22:11
| 1
| 555
|
arpit
|
76,396,924
| 13,642,459
|
Nested for loop not looping on the first set, Python
|
<p>I have written this code in Python. Eventually I would like to use it to get the indices to cut a 100x100 matrix into squares that overlap by 10. However, in the nested loop at the bottom the y values print as I expect, but the x values never change. Can anyone help? Thanks</p>
<pre><code>x_split = np.linspace(0, 100, 4 + 1, dtype=int)
x_start = x_split[:-1] - 5
x_start[0] = 0
x_end = x_split[1:] + 5
x_end[-1] = 100
y_split = np.linspace(0, 100, 4 + 1, dtype=int)
y_start = y_split[:-1] - 5
y_start[0] = 0
y_end = y_split[1:] + 5
y_end[-1] = 100
x_inds = zip(x_start, x_end)
y_inds = zip(y_start, y_end)
i = 0
for start_x, end_x in x_inds:
for start_y, end_y in y_inds:
i += 1
print(f"i = {i}")
print(f"x = {start_x} {end_x}")
print(f"y = {start_y} {end_y}")
print("")
</code></pre>
<p>Current output:</p>
<pre><code>i = 1
x = 0 30
y = 0 30
i = 2
x = 0 30
y = 20 55
i = 3
x = 0 30
y = 45 80
i = 4
x = 0 30
y = 70 100
</code></pre>
<p>And then stops. I want to to continue...</p>
<pre><code>i = 5
x = 20 55
y = 0 30
i = 6
x = 20 55
y = 20 55
...
</code></pre>
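<p>This looks like the classic exhausted-iterator problem: in Python 3, <code>zip()</code> returns a one-shot iterator, so after the first pass of the outer loop <code>y_inds</code> is empty and the inner loop silently runs zero times (which is why only i = 1..4 print). Materializing the pairs as lists fixes it; a minimal sketch:</p>

```python
import numpy as np

x_split = np.linspace(0, 100, 4 + 1, dtype=int)
x_start = x_split[:-1] - 5
x_start[0] = 0
x_end = x_split[1:] + 5
x_end[-1] = 100

# list(...) materializes the one-shot zip iterator so the inner
# loop can be replayed on every pass of the outer loop.
x_inds = list(zip(x_start, x_end))
y_inds = list(zip(x_start, x_end))  # same splits in both directions

pairs = [(x, y) for x in x_inds for y in y_inds]
print(len(pairs))  # 16 overlapping blocks instead of 4
```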
|
<python>
|
2023-06-03 16:20:07
| 1
| 456
|
Hugh Warden
|
76,396,922
| 22,009,322
|
How to draw broken bars if entities are duplicated
|
<p>I want to draw a broken bar diagram from a list of band members that joined and left a music band.
I managed to draw the grid as I wanted, but I am stuck drawing the bars for the band members, since they repeat.
I know how to do it when the band members are unique and the date columns repeat instead (e.g. Year_start1, Year_end1, Year_start2, Year_end2, etc.), but not when the band members are duplicated.
Each person should appear only once on the y axis, with a set of broken bars if that person joined the band more than once.
I appreciate any help!</p>
<p>Code example:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
result = pd.DataFrame([['Bill', 1972, 1974],
['Bill', 1976, 1978],
['Bill', 1967, 1971],
['Danny', 1969, 1975],
['Danny', 1976, 1977],
['James', 1971, 1972],
['Marshall', 1967, 1975]],
columns=['Person', 'Year_start', 'Year_left'])
print(result)
fig, ax = plt.subplots()
persons = result.Person.unique()
person_count = len(persons)
ybound = len(persons)*10
ystep = int(ybound/person_count)
numbers = range(5, ybound, ystep)
ax.set_ylim(0, ybound)
ax.set_xlim(min(result.Year_start)-1, max(result.Year_left)+1)
ax.set_xlabel('Years')
ax.set_yticks(numbers, labels=persons)
ax.grid(True)
plt.show()
</code></pre>
<p>I guess I should use a "for" loop and put the dataframe in "zip", but not sure how exactly.</p>
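<p>One way to sketch this (my interpretation of the desired chart; the <code>Agg</code> line is only there so it runs headless, and the <code>labels=</code> keyword of <code>set_yticks</code> needs Matplotlib 3.5+): group the rows per person and draw one <code>broken_barh</code> band per person, each span being a <code>(start, width)</code> pair:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this when showing the plot
import matplotlib.pyplot as plt
import pandas as pd

result = pd.DataFrame([['Bill', 1972, 1974],
                       ['Bill', 1976, 1978],
                       ['Bill', 1967, 1971],
                       ['Danny', 1969, 1975],
                       ['Danny', 1976, 1977],
                       ['James', 1971, 1972],
                       ['Marshall', 1967, 1975]],
                      columns=['Person', 'Year_start', 'Year_left'])

fig, ax = plt.subplots()
persons = list(result.Person.unique())
for row, person in enumerate(persons):
    sub = result[result.Person == person]
    # broken_barh wants (xmin, width) pairs -- one per stint in the band
    spans = [(start, left - start)
             for start, left in zip(sub.Year_start, sub.Year_left)]
    ax.broken_barh(spans, (row * 10 + 2, 6))  # one 10-unit y-band per person
ax.set_ylim(0, len(persons) * 10)
ax.set_yticks(range(5, len(persons) * 10, 10), labels=persons)
ax.set_xlim(result.Year_start.min() - 1, result.Year_left.max() + 1)
ax.set_xlabel('Years')
ax.grid(True)
```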
|
<python><pandas><matplotlib>
|
2023-06-03 16:20:04
| 1
| 333
|
muted_buddy
|
76,396,701
| 12,304,000
|
import python libraries (eg: rapidjson) in airflow
|
<p>I want to use the Python library <strong>rapidjson</strong> in my Airflow DAG. My code repo is hosted on Git. Whenever I merge something into the master or test branch, the changes are automatically configured to reflect on the Airflow UI.</p>
<p>My Airflow is hosted as a VM on AWS EC2. Under the EC2 instances, I see three different instances for: scheduler, webserver, workers.</p>
<p>I connected to these 3 individually via Session Manager. Once the terminal opened, I installed the library using</p>
<pre><code>pip install python-rapidjson
</code></pre>
<p>I also verified the installation using <code>pip list</code>. Now, I import the library in my dag's code simply like this:</p>
<pre><code>import rapidjson
</code></pre>
<p>However, when I open the Airflow UI, my DAG has an error that:</p>
<pre><code>No module named 'rapidjson'
</code></pre>
<p>Are there additional steps that I am missing out on? Do I need to import it into my Airflow code base in any other way as well?</p>
<p>Within my Airflow git repository, I also have a <strong>"requirements.txt"</strong> file. I tried to include</p>
<p><strong>python-rapidjson==1.5.5</strong></p>
<p>there as well, but I do not know how to actually install it.</p>
<p>I tried this:</p>
<p><strong>pip install requirements.txt</strong></p>
<p>within the session manager's terminal as well. However, the terminal is not able to locate this file. In fact, when I do "ls", I don't see anything.</p>
<pre><code>pwd
/var/snap/amazon-ssm-agent/6522
</code></pre>
|
<python><amazon-ec2><airflow><airflow-webserver>
|
2023-06-03 15:20:02
| 1
| 3,522
|
x89
|
76,396,615
| 2,221,360
|
Make QPushButton as Progress Bar in PyQt or PySide
|
<p>My question is similar to what is mentioned in this thread, but for QPushButton instead of QLineEdit:
<a href="https://stackoverflow.com/questions/36972132/how-to-turn-qlineedit-background-into-a-progress-bar">How to turn QLineEdit background into a Progress Bar</a>.</p>
<p>Here is what I tried to implement which works but looks ugly. Content of <strong>main.py</strong>:</p>
<pre><code>#!/bin/env python
import sys
from PySide6 import QtWidgets
from main_ui import Ui_Dialog
class Progress(QtWidgets.QDialog):
def __init__(self):
super().__init__()
self.ui = Ui_Dialog()
self.ui.setupUi(self)
self.value = 0
self.ui.btn.clicked.connect(self.updateProgress)
def updateProgress(self):
if self.value > 1:
self.value = 0
self.ui.btn_progress.setStyleSheet('background-color: white')
else:
self.value = self.value + 0.1
self.ui.label.setText(str(self.value))
self.ui.btn_progress.setStyleSheet(('background: qlineargradient(x1:0, y1:0, x2:1, y2:0, stop: 0 #1bb77b, stop: ' +
(str(self.value)) + ' #1bb77b, stop: ' + str(self.value + 0.001) + ' rgba(0, 0, 0, 0), stop: 1 white)'))
app = QtWidgets.QApplication(sys.argv)
window = Progress()
window.show()
app.exec()
</code></pre>
<p>Here is the content of main_ui.py file:</p>
<pre><code># -*- coding: utf-8 -*-
################################################################################
## Form generated from reading UI file 'test_dialog.ui'
##
## Created by: Qt User Interface Compiler version 6.5.1
##
## WARNING! All changes made in this file will be lost when recompiling UI file!
################################################################################
from PySide6.QtCore import (QCoreApplication, QDate, QDateTime, QLocale,
QMetaObject, QObject, QPoint, QRect,
QSize, QTime, QUrl, Qt)
from PySide6.QtGui import (QBrush, QColor, QConicalGradient, QCursor,
QFont, QFontDatabase, QGradient, QIcon,
QImage, QKeySequence, QLinearGradient, QPainter,
QPalette, QPixmap, QRadialGradient, QTransform)
from PySide6.QtWidgets import (QApplication, QDialog, QLabel, QPushButton,
QSizePolicy, QVBoxLayout, QWidget)
class Ui_Dialog(object):
def setupUi(self, Dialog):
if not Dialog.objectName():
Dialog.setObjectName(u"Dialog")
Dialog.resize(680, 97)
self.verticalLayout = QVBoxLayout(Dialog)
self.verticalLayout.setObjectName(u"verticalLayout")
self.btn_progress = QPushButton(Dialog)
self.btn_progress.setObjectName(u"btn_progress")
self.verticalLayout.addWidget(self.btn_progress)
self.label = QLabel(Dialog)
self.label.setObjectName(u"label")
self.verticalLayout.addWidget(self.label)
self.btn = QPushButton(Dialog)
self.btn.setObjectName(u"btn")
self.verticalLayout.addWidget(self.btn)
self.retranslateUi(Dialog)
QMetaObject.connectSlotsByName(Dialog)
# setupUi
def retranslateUi(self, Dialog):
Dialog.setWindowTitle(QCoreApplication.translate("Dialog", u"Dialog", None))
self.btn_progress.setText(QCoreApplication.translate("Dialog", u"Progress Button", None))
self.label.setText(QCoreApplication.translate("Dialog", u"<html><head/><body><p align=\"center\"><span style=\" font-size:12pt;\">Progress Value : </span></p></body></html>", None))
self.btn.setText(QCoreApplication.translate("Dialog", u"PushButton", None))
# retranslateUi
</code></pre>
<p>Is it possible to implement the above without the gradient color, so that it looks better?</p>
|
<python><pyqt><pyside>
|
2023-06-03 14:57:23
| 0
| 3,910
|
sundar_ima
|
76,396,594
| 19,003,861
|
Nested for loop - model.id in parent for loop does not match model.id in nested for loop (django)
|
<p>I am trying to access data from a parent to a child via a foreign key.</p>
<p><strong>WHAT WORKS - the views</strong></p>
<p>The data in the child is not "ready to be used" and needs to be processed so it can be represented as a percentage in a progress bar.</p>
<p>The data processing is handled in the views. When I print it on the console, it seems to work and stored into a variable <code>reward_positions</code>.</p>
<pre><code>Reward positions = [(<Venue: Venue_name>, reward_points, reward_position_on_bar)]
</code></pre>
<p>So this part works.</p>
<p>The plan is therefore to access <code>reward_position_on_bar</code> by calling <code>{{reward_positions.2}}</code></p>
<p><strong>WHAT DOESNT WORK - the template</strong></p>
<p>But something is not working to plan in the template.</p>
<p>The template renders the last <code>child_model</code> (that's <code>rewardprogram</code>) objects of the last <code>parent_id</code> (that's <code>venue</code>), irrespective of the actual <code>parent_id</code> processed in the for loop.</p>
<p><strong>TEST RESULT & WHERE I THINK THE PROBLEM IS</strong></p>
<p>I think my problem lies in my nested for loop. The <code>parent_id</code> in the parent for loop does not match the '{{reward_position.0}}' in the nested for loop.</p>
<p>As a verification test, <code>{{key}}</code> should be equal to <code>{{reward_position.0}}</code>, as they both go through the same parent for loop.</p>
<p>However, while <code>{{key}}</code> does change based on <code>venue.id</code> (the parent for loop id), <code>{{reward_position.0}}</code> is stuck on the same id irrespective of the parent for loop id.</p>
<p>Can anyone see what I am doing wrong?</p>
<p><strong>THE CODE</strong></p>
<p><strong>models</strong></p>
<pre><code>class Venue(models.Model):
name = models.CharField(verbose_name="Name",max_length=100, blank=True)
class RewardProgram(models.Model):
venue = models.ForeignKey(Venue, null = True, blank=True, on_delete=models.CASCADE, related_name="venuerewardprogram")
title = models.CharField(verbose_name="reward_title",max_length=100, null=True, blank=True)
points = models.IntegerField(verbose_name = 'points', null = True, blank=True, default=0)
</code></pre>
<p><strong>views</strong></p>
<pre><code>def list_venues(request):
venue_markers = Venue.objects.filter(venue_active=True)
#Progress bar per venue
bar_total_lenght = 100
rewards_available_per_venue = 0
reward_position_on_bar = 0
venue_data = {}
reward_positions = {}
for venue in venue_markers:
print(f'venue name ={venue}')
#list all reward programs
venue.reward_programs = venue.venuerewardprogram.all()
reward_program_per_venue = venue.reward_programs
#creates a list of reward points needed for each venue for each object
reward_points_per_venue_test = []
#appends the points to empty list from reward program from each venue
for rewardprogram in reward_program_per_venue:
reward_points_per_venue_test.append(rewardprogram.points)
#sorts list in descending order
reward_points_per_venue_test.sort(reverse=True)
#set position of highest reward to 100 (100% of bar length)
if reward_points_per_venue_test:
highest_reward = reward_points_per_venue_test[0]
if not reward_program_per_venue:
pass
else:
#counts reward program per venue
rewards_available_per_venue = venue.reward_programs.count()
if rewards_available_per_venue == 0:
pass
else:
#position of reward on bar
reward_positions = []
for rewardprogram in reward_program_per_venue:
#list all points needed per reward program objects
reward_points = rewardprogram.points
#position each reward on bar
reward_position_on_bar = reward_points/highest_reward
reward_positions.append((venue, reward_points, reward_position_on_bar))
#reward_positions[venue.id] = reward_position_on_bar
reward_positions = reward_positions
print(f'Reward positions = {reward_positions}')
context = {'reward_positions':reward_positions,'venue_data':venue_data,'venue_markers':venue_markers}
return render(request,'template.html',context)
</code></pre>
<p><strong>template</strong></p>
<pre><code> {%for venue in venue_markers%}
{%for key, value in venue_data.items%}
{%if key == venue.id%} #venue.id = 3
{% for reward_position in reward_positions %}#test result
{{reward_position.0.id}} # = id = 7 (thats not the good result)
{{key}} #id = 3 (thats the good result)
{% endfor %}
<div class="progress-bar bg-success" role="progressbar" style="width: {{value}}%" aria-valuenow="{{value}}" aria-valuemin="0" aria-valuemax="100"></div>
{%endif%}
{%endfor%}
{%endfor%}
</code></pre>
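<p>Independent of the template, the view only ever exposes one venue's list: <code>reward_positions</code> is rebuilt and overwritten on every pass of the <code>for venue in venue_markers</code> loop, so the context ends up holding only the last venue's rewards (the id-7 venue in the test). A plain-Python sketch of the bug and an accumulate-by-id fix (the venue ids and points below are made up):</p>

```python
venues = [3, 7]                         # hypothetical venue ids
programs = {3: [100, 250], 7: [500]}    # hypothetical points per venue

# Buggy pattern from the view: the list is recreated each iteration,
# so only the final venue's rewards survive the loop.
for venue in venues:
    reward_positions = [(venue, points) for points in programs[venue]]
assert reward_positions == [(7, 500)]   # only the last venue is left

# Fix: accumulate into a dict keyed by venue id, then look up the
# current venue's entry in the template (or attach the per-venue list
# to each venue object before rendering) instead of looping over all.
reward_positions_by_venue = {}
for venue in venues:
    reward_positions_by_venue[venue] = [(venue, points)
                                        for points in programs[venue]]
```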
|
<python><html><django><django-views><django-templates>
|
2023-06-03 14:50:36
| 2
| 415
|
PhilM
|
76,396,569
| 15,448,022
|
Calculating Collective Count of departments on individual dates from a given date range
|
<p>I have the following table</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Function</th>
<th>Department</th>
<th>Start Date</th>
<th>End Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>Const</td>
<td>Const 1</td>
<td>2023-03-01</td>
<td>2023-03-05</td>
</tr>
<tr>
<td>Const</td>
<td>Const 2</td>
<td>2023-03-02</td>
<td>2023-03-03</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-02</td>
<td>2023-03-05</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-01</td>
<td>2023-03-06</td>
</tr>
<tr>
<td>Const</td>
<td>Const 1</td>
<td>2023-03-03</td>
<td>2023-03-07</td>
</tr>
<tr>
<td>Const</td>
<td>Const 2</td>
<td>2023-03-02</td>
<td>2023-03-05</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-06</td>
<td>2023-03-09</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-05</td>
<td>2023-03-08</td>
</tr>
</tbody>
</table>
</div>
<p>I want to get, per date, the total count in each department. Both the start date and the end date are included in the counting.</p>
<p>It would be nice to have an intermediate output as follows</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Function</th>
<th>Department</th>
<th>Date</th>
<th>Count</th>
</tr>
</thead>
<tbody>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-01</td>
<td>1</td>
</tr>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-02</td>
<td>1</td>
</tr>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-03</td>
<td>2</td>
</tr>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-04</td>
<td>2</td>
</tr>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-05</td>
<td>2</td>
</tr>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-06</td>
<td>1</td>
</tr>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-07</td>
<td>1</td>
</tr>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-08</td>
<td>0</td>
</tr>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-09</td>
<td>0</td>
</tr>
<tr>
<td>Const</td>
<td>Const1</td>
<td>2023-03-10</td>
<td>0</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-01</td>
<td>0</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-02</td>
<td>2</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-03</td>
<td>2</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-04</td>
<td>1</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-05</td>
<td>1</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-06</td>
<td>0</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-07</td>
<td>0</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-08</td>
<td>0</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-09</td>
<td>0</td>
</tr>
<tr>
<td>Const</td>
<td>Const2</td>
<td>2023-03-10</td>
<td>0</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-01</td>
<td>0</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-02</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-03</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-04</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-05</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-06</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-07</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-08</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-09</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 1</td>
<td>2023-03-10</td>
<td>0</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-01</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-02</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-03</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-04</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-05</td>
<td>2</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-06</td>
<td>2</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-07</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-08</td>
<td>1</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-09</td>
<td>0</td>
</tr>
<tr>
<td>Mining</td>
<td>Mining 2</td>
<td>2023-03-10</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>The desired final output is a pandas df as follows</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Const 1</th>
<th>Const 2</th>
<th>Mining 1</th>
<th>Mining 2</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-03-01</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>2023-03-02</td>
<td>1</td>
<td>2</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>2023-03-03</td>
<td>2</td>
<td>2</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>2023-03-04</td>
<td>2</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>2023-03-05</td>
<td>2</td>
<td>1</td>
<td>1</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
|
<python><pandas><dataframe><datetime>
|
2023-06-03 14:45:33
| 1
| 378
|
aj7amigo
|
76,396,462
| 9,470,099
|
How to create my own debugger server for selenium?
|
<p>I am trying out selenium using Python, I seen this option in some places</p>
<pre class="lang-py prettyprint-override"><code>chrome_options.add_experimental_option("debuggerAddress", debugger_address)
</code></pre>
<p>How can I create my own debugger server? I've been googling but couldn't find any documentation about this. Can anyone point me in the right direction?</p>
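<p>The "debugger server" here is just a Chromium-based browser started with its DevTools (Chrome DevTools Protocol) endpoint exposed; you do not write a server yourself. A sketch (port 9222 and the profile path are arbitrary choices):</p>

```shell
# Start Chrome with the remote-debugging endpoint exposed;
# the user-data-dir should be a throwaway profile directory.
chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-profile

# Then attach Selenium to the already-running browser from Python:
#   chrome_options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
```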
|
<python><selenium-webdriver>
|
2023-06-03 14:14:11
| 0
| 4,965
|
Jeeva
|
76,396,262
| 19,130,803
|
Plotly: auto resize height
|
<p>I am creating a bar graph using <code>plotly express</code> inside a <code>dash</code> application. The graph is displayed, but I am having an issue with its <code>height</code>. Currently I am using the <code>default</code> height and width.</p>
<p>Now for eg:</p>
<ol>
<li><p>when the dataframe's <code>field</code> column contains 3 entries, the graph looks ok.</p>
</li>
<li><p>when the dataframe's <code>field</code> column contains 10 entries, the bar width is automatically reduced while the height stays the same, so the graph looks congested and hard to read.</p>
</li>
</ol>
<pre><code>figure = (
px.bar(
data_frame=dataframe,
x="size",
y="field",
title="Memory Usage",
text="size",
# width=400,
# height=400,
orientation="h",
labels={"size": "size in byte(s)"},
template=template,
).update_traces(width=0.4)
.update_layout(autosize=True)
)
dcc.Graph(id="memory_bar", figure=figure, className="dbc")
</code></pre>
<p>Is it possible for the height to be auto-resized depending on the number of entries? Also, I am using a <code>horizontal</code> orientation. I tried <code>autosize=True</code> but it had no effect; the height remains the same.</p>
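<p>Plotly does not grow a figure's height with the number of bars; <code>autosize</code> only fits the figure to its container. One workaround (a sketch; the per-bar pixel values are arbitrary tuning constants, not Plotly defaults) is to compute the height from the row count and pass it explicitly:</p>

```python
def bar_chart_height(n_rows, per_bar=30, padding=150, minimum=400):
    """Pixel height for a horizontal bar chart: a fixed allowance for
    margins/title plus a fixed number of pixels per bar, with a floor."""
    return max(minimum, padding + per_bar * n_rows)

# e.g. px.bar(..., height=bar_chart_height(len(dataframe)), ...)
```

With this, 3 bars keep the minimum height while 10 or 40 bars get proportionally taller figures instead of thinner bars.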
|
<python><plotly>
|
2023-06-03 13:13:48
| 1
| 962
|
winter
|
76,396,157
| 264,136
|
Add a field in doc if it does not exist else update
|
<p>I am using the below code to update <code>job_start_time</code> in a doc:</p>
<pre><code>myclient = pymongo.MongoClient("mongodb://10.64.127.94:27017/")
mydb = myclient["UPTeam"]
mycol = mydb["perf_sdwan_queue"]
myquery = {"$and":[ {"job_job_id": current_job_id}, {"job_queue_name": "CURIE_BLR"}]}
my_jobs = mycol.find(myquery)
newvalues = { "$set": { "job_start_time":datetime.datetime.utcnow()} }
mycol.update_one(myquery, newvalues)
</code></pre>
<p>This works fine if the field exists. I want code that updates the field when it exists and creates it when it does not.</p>
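<p>For what it's worth, MongoDB's <code>$set</code> already creates a field that is missing on a matched document and overwrites it when present, so the snippet above should cover both cases for documents that match the query. What <code>$set</code> alone cannot do is create the <em>document</em>; if the whole document may be absent, <code>upsert=True</code> handles that (a fragment continuing the code above, not runnable without a live server):</p>

```python
# $set creates job_start_time if absent, overwrites it if present;
# upsert=True additionally inserts a new document when nothing matches.
mycol.update_one(myquery, newvalues, upsert=True)
```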
|
<python><pymongo>
|
2023-06-03 12:46:32
| 1
| 5,538
|
Akshay J
|
76,396,112
| 1,831,784
|
Google Photos API: UnknownApiNameOrVersion when using googleapiclient.discovery.build
|
<p>I'm trying to use the Google Photos API with the Python client library googleapiclient.discovery.build method. However, I'm encountering an "UnknownApiNameOrVersion" error when attempting to create the client.</p>
<p>Here's the code I'm using:</p>
<pre><code>from google.oauth2 import service_account
import googleapiclient.discovery
credentials = service_account.Credentials.from_service_account_file('/path/to/service-account-file.json')
scopes = ['https://www.googleapis.com/auth/photoslibrary']
credentials = credentials.with_scopes(scopes)
service = googleapiclient.discovery.build('photoslibrary', 'v1', credentials=credentials)
</code></pre>
<p>Unfortunately, I'm receiving the following error message:</p>
<pre><code>UnknownApiNameOrVersion: name: photoslibrary version: v1
</code></pre>
<p>I've checked the documentation and examples, but I can't find a clear solution to this issue. Am I missing something in my code? Is there an alternative way to create the Google Photos API client?</p>
<p>I would appreciate any guidance or insights on how to resolve this error and successfully create the Google Photos API client using the Python client library.</p>
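<p>One thing worth checking (an assumption about the installed client-library version, not something visible in the question): google-api-python-client 2.x uses bundled static discovery documents by default, and the Photos Library API is not included in that bundle, which yields exactly this <code>UnknownApiNameOrVersion</code> error. Passing <code>static_discovery=False</code> makes the client fetch the discovery document over the network instead (not runnable here without credentials and network access):</p>

```python
service = googleapiclient.discovery.build(
    'photoslibrary', 'v1',
    credentials=credentials,
    static_discovery=False,  # fetch the discovery doc instead of using the bundled set
)
```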
|
<python><python-3.x><google-cloud-platform>
|
2023-06-03 12:34:45
| 0
| 3,035
|
Itachi
|
76,396,049
| 9,212,050
|
PySpark: Create a new column in dataframe based on another dataframe's cell values
|
<p>I have PySpark dataframe <code>dhl_price</code> of the following form:</p>
<pre><code>+------+-----+-----+-----+------+
|Weight| A| B| C| D|
+------+-----+-----+-----+------+
| 1|16.78|17.05|20.23| 40.1|
| 2|16.78|17.05|20.23| 58.07|
| 3|18.43|18.86| 25.0| 66.03|
| 4|20.08|20.67|29.77| 73.99|
</code></pre>
<p>So you can get the delivery price based on the category (i.e. the columns <code>A</code>, <code>B</code>, <code>C</code>, <code>D</code>) and the weight of your parcel (i.e. the first column <code>Weight</code>) and for weights larger than 30, we have prices specified only for 30, 40, 50 etc.</p>
<p>I also have a PySpark dataframe <code>requests</code>, with one row per customer request. It includes the columns <code>product_weight</code> and <code>Type</code> (the category, i.e. a column name of <code>dhl_price</code>). I want to create a new column <code>delivery_fee</code> in <code>requests</code> based on the <code>dhl_price</code> dataframe. In particular, for each row of <code>requests</code>, I want to look up the cell of <code>dhl_price</code> whose column is given by <code>Type</code> and whose row is given by <code>product_weight</code>.</p>
<p>So far I could code it in <strong>pandas</strong>:</p>
<pre><code>def get_dhl_fee(weight, type):
if weight <= 30:
price = dhl_price.loc[dhl_price["Weight"] == weight][type].values[0]
else:
        price = dhl_price.loc[dhl_price["Weight"] >= weight].reset_index(drop = True).iloc[0][type]
return price
new_requests["dhl_fee"] = new_requests.apply(lambda x: get_dhl_fee(x["product_weight_g"], x["Type"]), axis = 1)
</code></pre>
<p>How can I do the same with PySpark? I tried to use PySpark's UDF:</p>
<pre><code># Define the UDF (User-Defined Function) for calculating DHL fee
@fn.udf(returnType=DoubleType())
def get_dhl_fee(product_weight_g, calculate_way):
broadcast_dhl_price = fn.broadcast(dhl_price)
    if product_weight_g <= 30:
        price = broadcast_dhl_price.filter(dhl_price["Weight"] == product_weight_g).select(calculate_way).first()[0]
    else:
        price = broadcast_dhl_price.filter(dhl_price["Weight"] >= product_weight_g).select(calculate_way).first()[0]
return price
# Register the UDF
spark.udf.register("get_dhl_fee", get_dhl_fee)
# Apply the UDF to calculate dhl_fee column
requests = requests.withColumn("dhl_fee", get_dhl_fee(fn.col("product_weight"), fn.col("Type")))
</code></pre>
<p>but it returns error SPARK-5063:</p>
<blockquote>
<p>"It appears that you are attempting to reference SparkContext from a
broadcast variable, action, or transformation. SparkContext can only
be used on the driver, not in code that it run on workers."</p>
</blockquote>
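<p>A UDF cannot reference another DataFrame or the SparkContext, which is what SPARK-5063 is complaining about; the usual fix is to express the lookup as a join instead of a UDF. The idea can be sketched in pandas by melting the price table to long form and doing a forward as-of merge (column names follow the question; in PySpark a broadcast join with a range condition plays the same role):</p>

```python
import pandas as pd

# Price table in wide form, as in the question (truncated).
dhl_price = pd.DataFrame({'Weight': [1, 2, 3, 4],
                          'A': [16.78, 16.78, 18.43, 20.08],
                          'B': [17.05, 17.05, 18.86, 20.67]})

# Long form: one row per (Weight, Type) pair. merge_asof needs both
# frames sorted on the join key.
prices_long = dhl_price.melt(id_vars='Weight', var_name='Type',
                             value_name='dhl_fee').sort_values('Weight')

requests_df = pd.DataFrame({'product_weight': [2, 3], 'Type': ['A', 'B']})

# For each request, take the first price row with Weight >= product_weight
# and a matching Type -- this also covers the "round up above 30" case.
out = pd.merge_asof(requests_df.sort_values('product_weight'), prices_long,
                    left_on='product_weight', right_on='Weight',
                    by='Type', direction='forward')
```

This avoids any driver-side object inside worker code, so the SPARK-5063 restriction never comes into play.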
|
<python><pandas><apache-spark><pyspark>
|
2023-06-03 12:18:48
| 1
| 1,404
|
Sayyor Y
|
76,395,984
| 1,946,418
|
singleton inheritance from a base class
|
<p>Tech: Python 3.11.2</p>
<pre class="lang-py prettyprint-override"><code>from mylogging import MyLogging
class BaseClass:
def __init__(self) -> None:
self.logger = MyLogging(logName=self.__class__.__name__)
# Singleton
class ChildSingletonClass(BaseClass):
__instance = None
def __new__(cls):
if cls.__instance is None:
cls.__instance = super(ChildSingletonClass, cls).__new__(cls)
return cls.__instance
def __init__(self):
self.logger.debug(f"{self} address = {id(self)}")
</code></pre>
<p>I have a <code>ChildSingletonClass</code> singleton class that now needs to inherit from <code>BaseClass</code>, so I can take advantage of the inheritance goodness. But I'm running into this error:</p>
<pre><code> self.logger.debug(f"{self} address = {id(self)}")
^^^^^^^^^^^
AttributeError: 'ChildSingletonClass' object has no attribute 'logger'
</code></pre>
<p>I've tried changing <code>__new__</code> to call <code>super(BaseClass, cls)</code> instead, but that doesn't seem to help either.</p>
<p>Any ideas anyone? TIA</p>
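<p>The likely culprit: overriding <code>__init__</code> in the child without calling <code>super().__init__()</code> means <code>BaseClass.__init__</code> never runs, so <code>self.logger</code> is never set. A self-contained sketch (a string stands in for <code>MyLogging</code>):</p>

```python
class BaseClass:
    def __init__(self) -> None:
        # Stand-in for MyLogging(logName=self.__class__.__name__)
        self.logger = f"logger:{type(self).__name__}"

class ChildSingletonClass(BaseClass):
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        super().__init__()  # without this line, BaseClass.__init__ never runs
        # self.logger is now available here

a = ChildSingletonClass()
b = ChildSingletonClass()
```

One caveat of this pattern: <code>__init__</code> still runs on every <code>ChildSingletonClass()</code> call, even though <code>__new__</code> returns the same instance, so keep it idempotent.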
|
<python><oop><inheritance><singleton>
|
2023-06-03 12:01:38
| 1
| 1,120
|
scorpion35
|
76,395,953
| 131,874
|
Regex to catch email addresses in email header
|
<p>I'm trying to parse a <code>To</code> email header with a regex. If there are no <code><></code> characters then I want the whole string otherwise I want what is inside the <code><></code> pair.</p>
<pre class="lang-python prettyprint-override"><code>import re
re_destinatario = re.compile(r'^.*?<?(?P<to>.*)>?')
addresses = [
'XKYDF/ABC (Caixa Corporativa)',
'Fulano de Tal | Atlantica Beans <fulano.tal@atlanticabeans.com>'
]
for address in addresses:
m = re_destinatario.search(address)
print(m.groups())
print(m.group('to'))
</code></pre>
<p>But the regex is wrong:</p>
<pre><code>('XKYDF/ABC (Caixa Corporativa)',)
XKYDF/ABC (Caixa Corporativa)
('Fulano de Tal | Atlantica Beans <fulano.tal@atlanticabeans.com>',)
Fulano de Tal | Atlantica Beans <fulano.tal@atlanticabeans.com>
</code></pre>
<p>What am I missing?</p>
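<p>One way to sidestep the optional-group problem (a sketch; it assumes at most one <code>&lt;...&gt;</code> pair per header): the lazy <code>.*?</code> at the start of the original pattern happily matches nothing, so the optional <code>&lt;?</code> never has to consume the <code>&lt;</code>. Searching for the bracket pair first and falling back to the whole string avoids that ambiguity:</p>

```python
import re

def extract_to(header: str) -> str:
    # Prefer the contents of <...>; otherwise return the whole header.
    m = re.search(r'<([^<>]+)>', header)
    return m.group(1) if m else header

addresses = [
    'XKYDF/ABC (Caixa Corporativa)',
    'Fulano de Tal | Atlantica Beans <fulano.tal@atlanticabeans.com>',
]
results = [extract_to(a) for a in addresses]
```

For fully RFC-compliant parsing (quoted names, comments, multiple recipients), the standard library's <code>email.utils.parseaddr</code> / <code>email.utils.getaddresses</code> are worth considering instead of a hand-rolled regex.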
|
<python><regex><email-headers><email-address>
|
2023-06-03 11:52:19
| 2
| 126,654
|
Clodoaldo Neto
|
76,395,885
| 10,755,032
|
Python - How to get the article titles from either h2 or div tag in medium.com
|
<p>I am scraping medium.com. I have encountered a problem which is that some publications' article titles are in the <code>h2</code> tag whereas for some others it is in <code>div</code>. Now I am writing a function that takes in a link of the publication and returns the titles of articles in the page. I don't know which type I will be getting as an input. How should I tackle this?
<a href="https://i.sstatic.net/cXrml.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cXrml.png" alt="enter image description here" /></a> In this the article titles are in h2 tag.</p>
<p><a href="https://i.sstatic.net/YsBDX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YsBDX.png" alt="enter image description here" /></a> In this the article titles are in the div tag.</p>
<p>Both are from different publications.</p>
<p>Link for article titles in div tag: <a href="https://levelup.gitconnected.com" rel="nofollow noreferrer">https://levelup.gitconnected.com</a></p>
<p>Link for article titles in h2 tag: <a href="https://towardsdatascience.com" rel="nofollow noreferrer">https://towardsdatascience.com</a></p>
<p>Here is my code which works for the h2 tag</p>
<pre><code>import requests
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
import time
options = Options()
options.add_argument("--headless")
options.add_argument('--log-level=3')
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options=options)
class Publication:
def __init__(self, link):
self.link = link
def get_articles(self):
"Get the articles of the user/publication which was given as input"
link = self.link
driver.get(link)
scroll_pause = 0.5
# Get scroll height
last_height = driver.execute_script("return document.documentElement.scrollHeight")
run_time, max_run_time = 0, 1
while True:
iteration_start = time.time()
# Scroll down to bottom
driver.execute_script("window.scrollTo(0, 1000*document.documentElement.scrollHeight);")
# Wait to load page
time.sleep(scroll_pause)
# Calculate new scroll height and compare with last scroll height
new_height = driver.execute_script("return document.documentElement.scrollHeight")
scrolled = new_height != last_height
timed_out = run_time >= max_run_time
if scrolled:
run_time = 0
last_height = new_height
elif not scrolled and not timed_out:
run_time += time.time() - iteration_start
elif not scrolled and timed_out:
break
elements = driver.find_elements(By.CSS_SELECTOR, "h2")
for x in elements:
print(x.text)
</code></pre>
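<p>A CSS selector can match several alternatives at once, so one option is a single comma-separated selector that covers both layouts. The <code>div</code> class name below is hypothetical, standing in for whatever class the grid pages actually use; the idea is shown with BeautifulSoup for brevity:</p>

```python
from bs4 import BeautifulSoup

# Minimal page mixing both layouts; "article-title" is an assumed
# class name, not something taken from medium.com's markup.
html = '<h2>List-style title</h2><div class="article-title">Grid-style title</div>'
soup = BeautifulSoup(html, 'html.parser')

# A comma in a CSS selector means "match any of these alternatives".
titles = [el.get_text() for el in soup.select('h2, div.article-title')]
```

The same selector string works unchanged with Selenium: <code>driver.find_elements(By.CSS_SELECTOR, "h2, div.article-title")</code>, so one <code>get_articles</code> implementation can serve both publication styles.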
|
<python><selenium-webdriver><web-scraping>
|
2023-06-03 11:39:20
| 1
| 1,753
|
Karthik Bhandary
|
76,395,799
| 1,319,998
|
Maximum size of compressed data using Python's zlib
|
<p>I'm writing a Python library that makes ZIP files in a streaming way. If the uncompressed or compressed data of a member of the zip is 4GiB or bigger, then it has to use a particular extension to the original ZIP format - zip64. The issue with always using this is that it has less support. So, I would like to only use zip64 if needed. But whether a file is zip64 has to be specified in the zip <em>before</em> the compressed data, and so if streaming, before the size of the compressed data is known.</p>
<p>In some cases however, the size of the uncompressed data <em>is</em> known. So, I would like to predict the <em>maximum</em> size that zlib can output based on this uncompressed size, and if this is 4GiB or bigger, use zip64 mode.</p>
<p>In other words, if the total length of <code>chunks</code> in the below is known, what will be the maximum total length of bytes that <code>get_compressed</code> can yield? (I assume this maximum size would depend on level, memLevel and wbits)</p>
<pre class="lang-py prettyprint-override"><code>import zlib
chunks = (
b'any',
b'iterable',
b'of',
b'bytes',
b'-' * 1000000,
)
def get_compressed(level=9, memLevel=9, wbits=-zlib.MAX_WBITS):
compress_obj = zlib.compressobj(level=level, memLevel=memLevel, wbits=wbits)
for chunk in chunks:
if compressed := compress_obj.compress(chunk):
yield compressed
if compressed := compress_obj.flush():
yield compressed
print('length', len(b''.join(get_compressed())))
</code></pre>
<p>This is complicated by the fact that <a href="https://stackoverflow.com/q/76371334/1319998">Python zlib module's behaviour is not consistent between Python versions</a>.</p>
<p>I think that Java attempts a sort of "auto zip64 mode" without knowing the uncompressed data size, but <a href="https://github.com/libarchive/libarchive/issues/1834" rel="nofollow noreferrer">libarchive has problems with it</a>.</p>
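<p>For raw deflate the worst case is well defined: on incompressible input, deflate falls back to stored blocks, each costing a few bytes of header. A conservative bound, modelled on the arithmetic of zlib's own <code>deflateBound</code>/<code>compressBound</code> (treat the exact constants as an assumption and verify against your zlib build), can be sketched and sanity-checked empirically:</p>

```python
import os
import zlib

def max_deflate_size(n: int) -> int:
    # Conservative worst case for raw deflate (negative wbits, so no
    # zlib header/trailer): small per-block overheads plus fixed slack
    # for the final flush, following zlib's deflateBound shape.
    return n + (n >> 12) + (n >> 14) + (n >> 25) + 13

# Empirical sanity check with incompressible input.
data = os.urandom(100_000)
c = zlib.compressobj(level=9, memLevel=9, wbits=-zlib.MAX_WBITS)
compressed = c.compress(data) + c.flush()
```

If this bound for the known uncompressed size stays below 4 GiB, the compressed member cannot cross the zip64 threshold either, so zip64 can safely be skipped up front.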
|
<python><zip><zlib><deflate><python-zlib>
|
2023-06-03 11:17:40
| 3
| 27,302
|
Michal Charemza
|
76,395,757
| 2,013,747
|
Is there a built-in function to query a type hint for optionality/"None-ability" in Python 3.10 or later?
|
<p>Is there a function in the standard library to query whether the type hint for a field admits the None value?</p>
<p>For example, it would return True for foo, bar, baz, and False for x, in class A below:</p>
<pre><code>from dataclasses import dataclass
from typing import Optional, Union
@dataclass
class A:
foo : Optional[int] = None
bar : int|None = None
baz : Union[int, float, None] = None
x : int = 1
</code></pre>
<p>I have written the following function, which seems to work, but I want to avoid reimplementing standard functionality.</p>
<pre><code>import typing
import types
def field_is_optional(cls: type, field_name: str):
"""A field is optional when it has Union type with a NoneType alternative.
Note that Optional[] is a special form which is converted to a Union with a NoneType option
"""
field_type = typing.get_type_hints(cls).get(field_name, None)
origin = typing.get_origin(field_type)
#print(field_name, ":", field_type, origin)
if origin is typing.Union:
return type(None) in typing.get_args(field_type)
if origin is types.UnionType:
return type(None) in typing.get_args(field_type)
return False
a=A()
assert field_is_optional(type(a), "foo")
assert field_is_optional(type(a), "bar")
assert field_is_optional(type(a), "baz")
assert field_is_optional(type(a), "x") == False
</code></pre>
<p>An acceptable answer will be "No." or "Yes, the function is <code><function name></code>. As @metatoaster pointed out, <a href="https://stackoverflow.com/questions/56832881/check-if-a-field-is-typing-optional/7639772">Check if a field is typing.Optional</a> asks a related question, however (1) it is specific to <code>typing.Optional</code> not any type that expresses optionality (e.g. <code>Union[int, float, None]</code>, <code>Union[int, Union[float, None]</code>, <code>int|float|None</code>, etc.), and (2) it is asking for any way at all to check (which I already have), whereas I am asking for the name of a single standard library function that does the job.</p>
|
<python><option-type><nullable>
|
2023-06-03 11:04:27
| 0
| 4,240
|
Ross Bencina
|
76,395,754
| 2,602,770
|
FLASK -Error occurred while reading WSGI handler:
|
<p>Error occurred while reading WSGI handler:</p>
<pre><code>Traceback (most recent call last):
  File "C:\Python\Lib\site-packages\wfastcgi.py", line 791, in main
    env, handler = read_wsgi_handler(response.physical_path)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python\Lib\site-packages\wfastcgi.py", line 633, in read_wsgi_handler
    handler = get_wsgi_handler(os.getenv("WSGI_HANDLER"))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python\Lib\site-packages\wfastcgi.py", line 616, in get_wsgi_handler
    raise ValueError('"%s" could not be imported%s' % (handler_name, last_tb))
ValueError: "app.app" could not be imported: Traceback (most recent call last):
  File "C:\Python\Lib\site-packages\wfastcgi.py", line 600, in get_wsgi_handler
    handler = __import__(module_name, fromlist=[name_list[0][0]])
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\inetpub\wwwroot\essldatamapping\app.py", line 2, in <module>
    from flask_restful import Resource, Api
ModuleNotFoundError: No module named 'flask_restful'
</code></pre>
<p><strong>My code:</strong></p>
<pre><code>from flask import Flask, jsonify
from flask_restful import Resource, Api
import pyodbc

app = Flask(__name__)
api = Api(app)

class EsslDataFeth(Resource):
    def __init__(self):
        server = 'ESSL-CONFIGURAT\ESSL'
        database = '**'
        username = '**'
        password = '**'
        self.connect = pyodbc.connect('DRIVER={ODBC Driver 18 for SQL Server};SERVER='+server+';DATABASE='+database+';ENCRYPT=no;UID='+username+';PWD='+ password)
        self.cursor = self.connect.cursor()

    def get(self):
        getdatacmd='SELECT * from [etimetracklite1].[dbo].[Entry_Exit]'
        self.cursor.execute(getdatacmd)
        result=[]
        for row in self.cursor.fetchall():
            item_dist={}
            item_dist['id']=row[0]
            item_dist['dateTime']=row[1]
            item_dist['INOut']=row[2]
            item_dist['DeviceID']=row[3]
            result.append(item_dist)
        return jsonify(result)

api.add_resource(EsslDataFeth, '/returnjson')

if __name__ == '__main__':
    app.run()
</code></pre>
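<p>The traceback shows that the interpreter wfastcgi runs under IIS (<code>C:\Python</code>) has no <code>flask_restful</code> installed; if the package went into a different Python or a virtualenv, the site will not see it. A sketch of the check and fix (the interpreter path follows the traceback):</p>

```shell
C:\Python\python.exe -m pip show flask-restful    # confirm which interpreter has it
C:\Python\python.exe -m pip install flask-restful
```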
|
<python><flask><iis>
|
2023-06-03 11:04:09
| 0
| 2,874
|
Bishnu
|
76,395,726
| 2,263,683
|
Add multiple OIDC authentication options in FastAPI
|
<p>I've added Google's OIDC authentication to my FastAPI application.</p>
<pre><code>from fastapi import Depends
from fastapi.security import OpenIdConnect
oidc_google = OpenIdConnect(openIdConnectUrl='https://accounts.google.com/.well-known/openid-configuration')
@app.get('/foo')
def bar(token: Depends(oidc_google)):
return "You're Authenticated"
</code></pre>
<p>Now I want to give the user the option to login with another OIDC provider (e.g: Microsoft). Something like this:</p>
<pre><code>oidc_google = OpenIdConnect(openIdConnectUrl='https://accounts.google.com/.well-known/openid-configuration')
oidc_microsoft = OpenIdConnect(openIdConnectUrl='https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration')
@app.get('/foo')
def bar(token: Depends(oidc_google or oidc_microsoft)):
return "You're Authenticated"
</code></pre>
<p>Also, it seems you can set only one OIDC configuration for the Swagger UI:</p>
<pre><code>app = FastAPI(
swagger_ui_oauth2_redirect_url='/api/v1/oidc/callback',
swagger_ui_init_oauth={
'usePkceWithAuthorizationCodeGrant': True,
'clientId': settings.OIDC_CLIENT_ID,
'clientSecret': settings.OIDC_CLIENT_SECRET,
},
)
</code></pre>
<p>Which makes it even more complicated, and I couldn't find a working solution so far. Is there a way to offer multiple OIDC providers for authentication in FastAPI?</p>
|
<python><authentication><fastapi><openid-connect>
|
2023-06-03 10:55:47
| 0
| 15,775
|
Ghasem
|
76,395,615
| 5,431,734
|
documenting functions exposed by a pickle file
|
<p>I have written an application in python that goes through several iterations and at the end it returns a couple of dataframes (the estimates of the quantities we are interested in). I am also saving the application as a single pickle file which exposes to the user all the objects and functions that are involved in the loop and contribute to the return values. If a user wants to get a better insight and would like to interrogate particular functions or properties of the objects involved he/she could do</p>
<pre><code>import pickle
import pandas as pd
pkl = pd.read_pickle('pickle_file.pickle')
pkl.course.student_names
</code></pre>
<p>and that would return a list of names
or</p>
<pre><code>pkl.course.calc_avg([2020, 2021, 2022, 2023])
</code></pre>
<p>that returns the average of the markings for these years.</p>
<p>My question is how do I document that please? I mean how on earth the user would know that there is an attribute called <code>student_names</code> of the object <code>course</code> or that there is a function called <code>calc_avg</code>. Adding docstrings will help especially with functions since <code>help(pkl.course.calc_avg)</code> will print the docstring but the user should know that there is such a function at first place....</p>
<p>Am I doing it wrong from the very beginning, maybe? Should I not be using a pickle file, and if so, what are the alternatives?</p>
|
<python><pickle>
|
2023-06-03 10:26:52
| 0
| 3,725
|
Aenaon
|
76,395,534
| 2,722,968
|
Typing hinting the return type of a fn returning a subclass
|
<p>I have a function that takes a class as a parameter and returns a (constructed) subclass; essentially a class decorator. In the most minimal example:</p>
<pre class="lang-py prettyprint-override"><code># Some baseclass that may come from the current module/stdlib/wherever
class FooBase:
pass
# A user-defined subclass
class FooDerived(FooBase):
pass
# foo() takes any `FooBase`-type, including its subclasses
def foo(baseclass: FooBase) -> ?:
class Inner(baseclass):
pass
return Inner
# NewFoo is a class that has to be a subclass of `FooBase`, derived from `FooDerived`
NewFoo = foo(FooDerived)
</code></pre>
<p>Here, <code>Inner</code> is a subtype of whatever <code>baseclass</code> is. Is there a way to type-hint this relationship - the function will return a type (not a value of a type!) that is a subtype of what <code>baseclass</code> is.</p>
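<p>This relationship is exactly what a bound <code>TypeVar</code> plus <code>type[...]</code> expresses: the parameter is a class object (not an instance), and the return annotation reuses the same type variable, so the subtype information flows through. A self-contained sketch of the pattern:</p>

```python
from typing import TypeVar

class FooBase:
    pass

class FooDerived(FooBase):
    pass

T = TypeVar('T', bound=FooBase)

def foo(baseclass: type[T]) -> type[T]:
    # Inner is a subclass of whichever class object was passed in.
    # (mypy flags dynamic base classes, hence the ignore.)
    class Inner(baseclass):  # type: ignore[misc]
        pass
    return Inner

NewFoo = foo(FooDerived)
```

A checker then infers <code>NewFoo: type[FooDerived]</code>, so instances of <code>NewFoo</code> are accepted wherever <code>FooDerived</code> or <code>FooBase</code> is expected.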
|
<python>
|
2023-06-03 10:02:24
| 1
| 17,346
|
user2722968
|
76,395,448
| 15,520,615
|
Snowflake snowpark Python Interpreter Error: NameError: name is not defined
|
<p>I'm executing a function in Python/Snowpark and I'm getting the error:
<a href="https://i.sstatic.net/2SeXR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2SeXR.png" alt="enter image description here" /></a></p>
<p>I appreciate that this is a Python beginner's error; however, I'm not sure why I'm getting <code>NameError: name 'conn' is not defined</code> when I have defined <code>conn</code> in the function:</p>
<pre><code>conn = connection.connect(connectionProperties, log)
</code></pre>
<p>Again, this is an error that I should know how to fix, but I don't.</p>
<p>The full code is as follows:</p>
<pre><code>import snowflake.snowpark as snowpark
from snowflake.snowpark.functions import col
import wheel_loader
def getStruct(conn):
wheel_loader.load('whlib-0.0.1-py3-none-any.whl')
from whlib.utils import connectionProperties as connProps
from whlib.utils import connection as connection
from whlib.utils import whliblogging as log
from whlib.cln import entity as entity
dbConnectionProperties = connProps.DbConnectionProperties()
dbConnectionProperties.DBServer = 'xxxxxxxxxxx'
dbConnectionProperties.DBUser = 'xxxxxxx'
dbConnectionProperties.DBPword = 'xxxxxxxxxx'
dbConnectionProperties.DBDatabase = 'xxxxxxxxxxxxxxxxx'
connectionProperties = connProps.ConnectionProperties()
connectionProperties.dbConnectionProperties = dbConnectionProperties
log = logs.Logging(connectionProperties)
conn = connection.connect(connectionProperties, log)
def main(session: snowpark.Session):
return getStruct(conn)
</code></pre>
<p>Any thoughts</p>
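<p>The name <code>conn</code> is local to <code>getStruct</code>, so module-level code (and <code>main</code>) cannot see it; a local variable vanishes when its function returns unless the function returns it. A minimal scoping sketch (plain dicts stand in for the real connection objects):</p>

```python
def make_conn():
    # Stand-in for connection.connect(connectionProperties, log)
    conn = {"connected": True}
    return conn  # returning the value is what makes it usable elsewhere

def get_struct(conn):
    return conn["connected"]

def main():
    conn = make_conn()  # bind the returned value to a local name
    return get_struct(conn)

result = main()
```

Applied to the question's code: have the setup function <code>return conn</code> and call it from <code>main</code>, rather than passing a name that only ever existed inside <code>getStruct</code>.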
|
<python><snowflake-cloud-data-platform>
|
2023-06-03 09:40:30
| 0
| 3,011
|
Patterson
|
76,395,346
| 10,012,856
|
Manage edge's weight and attributes with Netoworkx
|
<p>I'm facing an issue with how I'm managing edges and their weights and attributes in a <code>MultiDiGraph</code>.</p>
<p>I've a list of edges like below:</p>
<pre><code>[
(0, 1, {'weight': {'weight': 0.8407885973127324, 'attributes': {'orig_id': 1, 'direction': 1, 'flip': 0, 'lane-length': 3181.294317920477, 'lane-width': 3.6, 'lane-shoulder': 0.0, 'lane-max-speed': 50.0, 'lane-typology': 'real', 'lane-access-points': 6, 'lane-travel-time': 292.159682258003, 'lane-capacity': 7200.0, 'lane-cost': 0.8407885973127324, 'other-attributes': None, 'linestring-wkt': 'LINESTRING (434757.15286960197 4524762.33387408, 434267.30180536775 4525511.90463009, 436180.7891782945 4526762.385413274)'}}}),
(1, 4, {'weight': {'weight': 0.6659876355281887, 'attributes': {'orig_id': 131, 'direction': 1, 'flip': 0, 'lane-length': 2496.129360921626, 'lane-width': 3.6, 'lane-shoulder': 0.0, 'lane-max-speed': 50.0, 'lane-typology': 'real', 'lan...
</code></pre>
<p>That list is used to add weight and attributes to a <code>MultiDiGraph</code> previous mentioned:</p>
<pre><code> graph = ntx.MultiDiGraph(weight=None)
graph.add_weighted_edges_from(edge_list)
</code></pre>
<p>Trying to read the properties of a single edge(<code>graph.edges.data()</code>) I see this:</p>
<pre><code>(0, 1, {'weight': {'weight': 0.8407885973127324, 'attributes': {'orig_id': 1, 'direction': 1, 'flip': 0, 'lane-length': 3181.294317920477, 'lane-width': 3.6, 'lane-shoulder': 0.0, 'lane-max-speed': 50.0, 'lane-typology': 'real', 'lane-access-points': 6, 'lane-travel-time': 292.159682258003, 'lane-capacity': 7200.0, 'lane-cost': 0.8407885973127324, 'other-attributes': None, 'linestring-wkt': 'LINESTRING (434757.15286960197 4524762.33387408, 434267.30180536775 4525511.90463009, 436180.7891782945 4526762.385413274)'}}})
</code></pre>
<p>Every edge is built as <code>[node[0], node[1], {'weight': weight, 'attributes': attributes}]</code>.
If I instead use <code>[node[0], node[1], weight]</code>, the weight is applied correctly, but I also need the attributes.</p>
<pre><code>[(0, 1, {'weight': 0.8407885973127324}), (1, 4, {'weight': 0.6659876355281887}), (1, 46, {'weight': None}), (4, 5, {'weight': 1.2046936800705539}), (4, 6, {'weight': 0.4469496439663275})....
</code></pre>
<p>What is the correct way to manage in the same time both weight and attributes?</p>
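<p>The nesting happens because <code>add_weighted_edges_from</code> expects 3-tuples <code>(u, v, w)</code> with a <em>scalar</em> weight and stores whatever it receives under the key <code>'weight'</code>; passing a dict as <code>w</code> yields <code>{'weight': {...}}</code>. To carry a weight plus extra attributes, use <code>add_edges_from</code> with one flat attribute dict per edge (a sketch with abbreviated attributes from the question):</p>

```python
import networkx as nx

graph = nx.MultiDiGraph()
# One flat dict per edge: 'weight' stays a top-level scalar, so
# algorithms that take weight='weight' keep working, and the other
# attributes sit alongside it in the same dict.
graph.add_edges_from([
    (0, 1, {'weight': 0.8407885973127324, 'orig_id': 1, 'lane-length': 3181.29}),
    (1, 4, {'weight': 0.6659876355281887, 'orig_id': 131, 'lane-length': 2496.13}),
])
edge_data = graph[0][1][0]  # key 0: first parallel edge in a MultiDiGraph
```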
|
<python><networkx>
|
2023-06-03 09:09:15
| 1
| 1,310
|
MaxDragonheart
|
76,395,255
| 1,473,517
|
How can I draw a line around the edge of the mask?
|
<p>I am making a heatmap with a mask. Here is a toy MWE:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
images = []
vmin = 0
vmax = 80
cmap = "viridis"
size = 40
matrix = np.random.randint(vmin, vmax, size=(size,size))
np.random.seed(7)
mask = []
for _ in range(size):
prefix_length = np.random.randint(size)
mask.append([False]*prefix_length + [True]*(size-prefix_length))
mask = np.array(mask)
sns.heatmap(matrix, vmin=vmin, vmax=vmax, cmap="viridis", mask=mask)
plt.savefig("temp.png")
plt.show()
</code></pre>
<p>I want to draw a line around the edge of the mask to accentuate where it is. How can you do that?</p>
<p>My toy example currently looks like this:</p>
<p><a href="https://i.sstatic.net/AVsi5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AVsi5.png" alt="enter image description here" /></a></p>
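<p>Since each row of this mask is a contiguous suffix (an assumption that matches the toy example above), the edge is just "the column where the mask starts" per row, which can then be traced as a step line on top of the heatmap:</p>

```python
import numpy as np

def mask_boundary_columns(mask: np.ndarray) -> list:
    # For each row: index of the first masked cell (row length if none).
    # Assumes each row is False... then True..., as in the toy example.
    return [int(np.argmax(row)) if row.any() else mask.shape[1]
            for row in mask]

# Usage on the heatmap axes (seaborn cells are 1 unit wide and tall):
#   ax = sns.heatmap(matrix, mask=mask, ...)
#   xs = mask_boundary_columns(mask)
#   ax.step(xs, np.arange(len(xs)), where='post', color='red')
demo = mask_boundary_columns(np.array([[False, True, True],
                                       [False, False, False]]))
```

For masks with arbitrary shapes (holes, non-suffix rows), a contour at level 0.5 over the boolean mask (<code>ax.contour(mask.astype(int), levels=[0.5])</code>) is a more general alternative.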
|
<python><matplotlib><seaborn><heatmap>
|
2023-06-03 08:45:50
| 3
| 21,513
|
Simd
|
76,395,138
| 10,755,032
|
Python - Getting the titles of publications in medium.com
|
<p>I am scraping medium.com. I have one problem which I'm not sure how to tackle. Medium publications use different arrangements for their articles: some arrange them in a list form, while others use a grid format. When I scrape the normal, list-style publications I'm able to get the article titles, but when I try to scrape the grid type I'm not. Is there any way for me to tackle this?
<a href="https://i.sstatic.net/zUOX8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zUOX8.png" alt="publication in list format" /></a></p>
<p>List format. Here, using h2, I'm able to get the article titles.</p>
<p><a href="https://i.sstatic.net/Gh6bh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gh6bh.png" alt="publication in grid format" /></a></p>
<p>Grid format. Here I've observed that a div is used for the article titles.</p>
<p>This is my current working code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
import time
options = Options()
options.add_argument("--headless")
options.add_argument('--log-level=3')
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options=options)
class Publication:
def __init__(self, link):
self.link = link
def get_articles(self):
"Get the articles of the user/publication which was given as input"
link = self.link
driver.get(link)
scroll_pause = 0.5
# Get scroll height
last_height = driver.execute_script("return document.documentElement.scrollHeight")
run_time, max_run_time = 0, 1
while True:
iteration_start = time.time()
# Scroll down to bottom
driver.execute_script("window.scrollTo(0, 1000*document.documentElement.scrollHeight);")
# Wait to load page
time.sleep(scroll_pause)
# Calculate new scroll height and compare with last scroll height
new_height = driver.execute_script("return document.documentElement.scrollHeight")
scrolled = new_height != last_height
timed_out = run_time >= max_run_time
if scrolled:
run_time = 0
last_height = new_height
elif not scrolled and not timed_out:
run_time += time.time() - iteration_start
elif not scrolled and timed_out:
break
elements = driver.find_elements(By.CSS_SELECTOR, "h2")
for x in elements:
print(x.text)
</code></pre>
|
<python><selenium-webdriver><web-scraping><beautifulsoup>
|
2023-06-03 08:14:22
| 1
| 1,753
|
Karthik Bhandary
|
76,395,122
| 4,896,449
|
How to build an efficient and fast `Dockerfile` for a `pytorch` model running on CPU
|
<p>I am trying to build an optimally sized Docker image for running a pytorch model on CPU; a single-stage build works fine. However, when I use the code below to create a two-stage build, my image downloads the CUDA/GPU version of pytorch.</p>
<pre><code>FROM python:3.11-slim as builder
WORKDIR /app
# Set environment variables.
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
apt-get install -y --no-install-recommends gcc
# Copy local code to the container image.
COPY requirements.txt .
# Install dependencies & model files
RUN pip install --no-cache-dir torch==2.0.1+cpu --index-url https://download.pytorch.org/whl/cpu && \
pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt
FROM python:3.11-slim
WORKDIR /app
# Set environment variables.
ENV PORT 8080
ENV HOST 0.0.0.0
COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .
RUN pip install --no-cache /wheels/* && \
huggingface-cli login --token xxx && \
python -c 'from sentence_transformers import SentenceTransformer; SentenceTransformer("xxx", cache_folder="./app/artefacts")'
# Start the container
CMD python -m uvicorn app.main:app --host $HOST --port $PORT --workers 1
</code></pre>
<p>EDIT:</p>
<p>So I got this to run without installing CUDA, but the image is bigger than the single-stage original (2.5 GB vs 1.5 GB). The line changed was:</p>
<pre><code>RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels torch==2.0.1 --index-url https://download.pytorch.org/whl/cpu && \
pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt
</code></pre>
<p>So here I just use a wheel for pytorch too, instead of a normal install as before.</p>
|
<python><docker><pytorch>
|
2023-06-03 08:05:24
| 1
| 3,408
|
dendog
|
76,395,036
| 11,163,122
|
Converting np.int16 to torch.ShortTensor
|
<p>I have many NumPy arrays of dtype <code>np.int16</code> that I need to convert to <code>torch.Tensor</code> within a <code>torch.utils.data.Dataset</code>. This <code>np.int16</code> ideally gets converted to a <code>torch.ShortTensor</code> of size <code>torch.int16</code> (<a href="https://pytorch.org/docs/stable/tensor_attributes.html#torch-dtype" rel="nofollow noreferrer">docs</a>).</p>
<p><code>torch.from_numpy(array)</code> will convert the data to <code>torch.float64</code>, which takes up 4X more memory than <code>torch.int16</code> (64 bits vs 16 bits). I have a LOT of data, so I care about this.</p>
<p>How can I convert a numpy array to a <code>torch.Tensor</code> minimizing memory?</p>
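<p>The premise may be worth double-checking: <code>torch.from_numpy</code> normally shares the numpy buffer (zero-copy) and preserves the dtype, so an <code>np.int16</code> array should come through as <code>torch.int16</code> already. A quick sketch to verify against your own data:</p>

```python
import numpy as np
import torch

arr = np.arange(6, dtype=np.int16)
t = torch.from_numpy(arr)  # zero-copy: shares arr's memory, keeps its dtype
dtype_preserved = (t.dtype == torch.int16)
```

<p>If a float tensor is showing up, the upcast likely happens elsewhere in the pipeline (e.g. a later arithmetic op or a default collate step), not in <code>from_numpy</code> itself.</p>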
|
<python><numpy><pytorch><numpy-ndarray><pytorch-dataloader>
|
2023-06-03 07:43:00
| 1
| 2,961
|
Intrastellar Explorer
|
76,394,951
| 10,164,750
|
Getting an unusual/weird error in Pyspark
|
<p>I have written a simple <code>Pyspark</code> <code>filter</code> operation. It works well. After the <code>filter</code>, I am doing a <code>select</code>, where I see some unusual behavior in the code.</p>
<p>I tried many things, like <code>persist</code> and <code>cache</code>, and calling an <code>action</code> like <code>count()</code>. Nothing worked; I got this unusual error every time.</p>
<p>Let me share my code and <code>AWS Cloud Watch</code> logs.</p>
<p>Pyspark Code:</p>
<pre class="lang-py prettyprint-override"><code>print("inside header Snap")
headerDf.show()
aggSeqDf = headerDf.filter(col("header_identifier") != "DDDDSNAP")
aggSeqDf.show()
aggDf = aggSeqDf.select(SEQUENCE, "mn_id", "header_identifier").withColumnRenamed(SEQUENCE, SEQ).withColumnRenamed("mn_id", "mnId")
aggDf.show()
print("header snap ends here")
</code></pre>
<p>Cloud Watch Logs:</p>
<pre class="lang-none prettyprint-override"><code>inside header Snap
+--------------------+-----------+-----------------+----------+---------------+--------+
| full_file| mn_id|header_identifier|run_number|production_date|sequence|
+--------------------+-----------+-----------------+----------+---------------+--------+
|Prod216_3427_ew_1...| 0| DDDDSNAP| 3427| 20230501| 0000000|
|Prod216_3427_ew_3...| 6| DDDDSNAP| 3427| 20230501| 0000000|
|Prod216_3427_ew_4...| 12| DDDDSNAP| 3427| 20230501| 0000000|
|Prod216_3427_ew_5...| 8589934592| DDDDSNAP| 3427| 20230501| 0000000|
|Prod216_3427_ew_6...| 8589934598| DDDDSNAP| 3427| 20230501| 0000000|
|Prod216_3427_ew_7...| 8589934604| DDDDSNAP| 3427| 20230501| 0000000|
| Prod216_3427_ni.dat|17179869184| DDDDSNAP| 3427| 20230501| 0000000|
| Prod216_3427_sc.dat|17179869190| DDDDSNAP| 3427| 20230501| 0000000|
|Prod216_3427_ew_2...|17179869196| DDDDSNAP| 3427| 20230501| 0000000|
+--------------------+-----------+-----------------+----------+---------------+--------+
+---------+-----+-----------------+----------+---------------+--------+
|full_file|mn_id|header_identifier|run_number|production_date|sequence|
+---------+-----+-----------------+----------+---------------+--------+
+---------+-----+-----------------+----------+---------------+--------+
+-------+-----------+-----------------+
| seq| mnId|header_identifier|
+-------+-----------+-----------------+
|0000000| 0| 00000001|
|0000000| 6| 00000003|
|0000000| 12| 00000004|
|0000000| 8589934592| 00000005|
|0000000| 8589934598| 00000006|
|0000000| 8589934604| 00000007|
|0000000|17179869184| 00000009|
|0000000|17179869190| 00000008|
|0000000|17179869196| 02550372|
+-------+-----------+-----------------+
header snap ends here
</code></pre>
<p>If you observe, I am doing a <code>select</code> from the empty dataframe <code>aggSeqDf</code>, but I get a few unexpected values in <code>aggDf</code>.</p>
<p>I am using <code>AWS Glue</code> to run the job. I attached the <code>whl</code> file of the Pyspark program to the Glue job.</p>
<p>I even tried changing the resources provided to Glue, but had no luck there either.</p>
<p>Would like to know from you why I am seeing this unusual behavior. Thank you.</p>
<p>Physical and Logical plan added.</p>
<pre class="lang-none prettyprint-override"><code>== Parsed Logical Plan ==
Project [seq#829, mn_id#45L AS mnId#832L]
+- Project [sequence#192 AS seq#829, mn_id#45L]
+- Project [sequence#192, mn_id#45L]
+- Filter NOT (header_identifier#168 = DDDDSNAP)
+- Project [full_file#5, mn_id#45L, header_identifier#168, run_number#169, production_date#170, sequence#192]
+- Project [company_number#38, rec_type#39, full_file#5, mn_id#45L, header_identifier#168, run_number#169, production_date#170, 0000000 AS sequence#192]
+- Project [company_number#38, rec_type#39, full_file#5, mn_id#45L, cast(substring(rest#40, 1, 8) as string) AS header_identifier#168, cast(substring(rest#40, 9, 4) as string) AS run_number#169, cast(substring(rest#40, 13, 8) as string) AS production_date#170]
+- Project [company_number#38, rec_type#39, rest#40, full_file#5, mn_id#45L]
+- Join Inner, (mn_id#45L = min(mn_id)#67L)
:- Project [company_number#38, rec_type#39, rest#40, full_file#5, mon
otonically_increasing_id() AS mn_id#45L]
: +- Project [cast(substring(value#0, 1, 8) as string) AS company_number#38, cast(substring(value#0, 9, 1) as string) AS rec_type#39, cast(substring(value#0, 1, 1250) as string) AS rest#40, full_file#5]
: +- Project [value#0, reverse(split(full_file#2, /, -1))[0] AS full_file#5]
: +- Project [value#0, input_file_name() AS full_file#2]
: +- Relation[value#0] text
+- Project [min(mn_id)#67L]
+- Aggregate [full_file#5], [full_file#5, min(mn_id#45L) AS min(mn_id)#67L]
+- Project [company_number#38, rec_type#39, rest#40, full_file#5, monotonically_increasing_id() AS mn_id#45L]
+- Project [cast(substring(value#0, 1, 8) as string) AS company_number#38, cast(substring(value#0, 9, 1) as string) AS rec_type#39, cast(substring(value#0, 1, 1250) as stri
ng) AS rest#40, full_file#5]
+- Project [value#0, reverse(split(full_file#2, /, -1))[0] AS full_file#5]
+- Project [value#0, input_file_name() AS full_file#2]
+- Relation[value#0] text
== Analyzed Logical Plan ==
seq: string, mnId: bigint
Project [seq#829, mn_id#45L AS mnId#832L]
+- Project [sequence#192 AS seq#829, mn_id#45L]
+- Project [sequence#192, mn_id#45L]
+- Filter NOT (header_identifier#168 = DDDDSNAP)
+- Project [full_file#5, mn_id#45L, header_identifier#168, run_number#169, production_date#170, sequence#192]
+- Project [company_number#38, rec_type#39, full_file#5, mn_id#45L, header_identifier#168, run_number#169, production_date#170, 0000000 AS sequence#192]
+- Project [company_number#38, rec_type#39, full_file#5, mn_id#45L, cast(substring(rest#40, 1, 8) as string) AS header_identifier#168, cast(substring(rest#40, 9, 4) as string) AS run_
number#169, cast(substring(rest#40, 13, 8) as string) AS production_date#170]
+- Project [company_number#38, rec_type#39, rest#40, full_file#5, mn_id#45L]
+- Join Inner, (mn_id#45L = min(mn_id)#67L)
:- Project [company_number#38, rec_type#39, rest#40, full_file#5, monotonically_increasing_id() AS mn_id#45L]
: +- Project [cast(substring(value#0, 1, 8) as string) AS company_number#38, cast(substring(value#0, 9, 1) as string) AS rec_type#39, cast(substring(value#0, 1, 1250) as string) AS rest#40, full_file#5]
: +- Project [value#0, reverse(split(full_file#2, /, -1))[0] AS full_file#5]
: +- Project [value#0, input_file_name() AS full_file#2]
: +- Relation[value#0] text
+- Project [min(mn_id)#67L]
+- Aggregate [full_file#5], [full_file#5, min(mn_id#45L) AS min(mn_id)#67L]
+- Project [company_number#38, rec_type#39, rest#40, full_file#5, monotonically_increasing_id() AS mn_id#45L]
+- Project [cast(substring(value#0, 1, 8) as string) AS company_number#38, cast(substring(value#0, 9, 1) as string) AS rec_type#39, cast(substring(value#0, 1, 1250) as string) AS rest#40, full_file#5]
+- Project [value#0, reverse(split(full_file#2, /, -1))[0] AS full_file#5]
+- Project [value#0, input_file_name() AS full_file#2]
+- Relation[value#0] text
== Optimized Logical Plan ==
Project [0000000 AS seq#829, mn_id#45L AS mnId#832L]
+- Join Inner, (mn_id#45L = min(mn_id)#67L)
:- Project [mn_id#45L]
: +- Filter (isnotnull(rest#40) AND NOT (substring(rest#40, 1, 8) = DDDDSNAP))
: +- Project [substring(value#0, 1, 1250) AS rest#40, monotonically_increasing_id() AS mn_id#45L]
: +- Relation[value#0] text
+- Filter
isnotnull(min(mn_id)#67L)
+- Aggregate [full_file#5], [min(mn_id#45L) AS min(mn_id)#67L]
+- Project [reverse(split(full_file#2, /, -1))[0] AS full_file#5, monotonically_increasing_id() AS mn_id#45L]
+- Project [input_file_name() AS full_file#2]
+- Relation[value#0] text
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Project [0000000 AS seq#829, mn_id#45L AS mnId#832L]
+- BroadcastHashJoin [mn_id#45L], [min(mn_id)#67L], Inner, BuildRight, false
:- Project [mn_id#45L]
: +- Project [substring(value#0, 1, 1250) AS rest#40, monotonically_increasing_id() AS mn_id#45L]
: +- Filter (isnotnull(substring(value#0, 1, 1250) AS rest#40) AND NOT (substring(substring(value#0, 1, 1250) AS rest#40, 1, 8) = DDDDSNAP))
: +- FileScan text [value#0] Batched: false, DataFilters: [isnotnull(substring(value#0, 1, 1250) AS rest#40), NOT (substring(substring(value#0, 1, 1250) AS..., Format: Text, Location: InMemoryFileIndex[s3://ubo-mvp-oad/
landing_SNAP], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- BroadcastExchange HashedRelationBroadcastMode(List(input[0, bigint, false]),false), [id=#3815]
+- Filter isnotnull(min(mn_id)#67L)
+- HashAggregate(keys=[full_file#5], functions=[min(mn_id#45L)], output=[min(mn_id)#67L])
+- Exchange hashpartitioning(full_file#5, 4), ENSURE_REQUIREMENTS, [id=#3811]
+- HashAggregate(keys=[full_file#5], functions=[partial_min(mn_id#45L)], output=[full_file#5, min#149L])
+- Project [reverse(split(full_file#2, /, -1))[0] AS full_file#5, monotonically_increasing_id() AS mn_id#45L]
+- Project [input_file_name() AS full_file#2]
+- FileScan text [] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[s3://ubo-mvp-oad/landing_SNAP], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<>
</code></pre>
|
<python><amazon-web-services><apache-spark><pyspark><aws-glue>
|
2023-06-03 07:11:08
| 0
| 331
|
SDS
|
76,394,943
| 2,540,204
|
ibm_db_dbi::ProgrammingError when calling a stored procedure with pandas read_sql_query
|
<p>I am attempting to use <code>pandas.read_sql_query</code> to call a stored procedure in IBM's db2 and read the results into a dataframe. However when I do so, I receive the following error:</p>
<blockquote>
<p>ibm_db_dbi::ProgrammingError: The last call to execute did not produce any result set.</p>
</blockquote>
<p>I've called the procedure in IBM Data Studio, to confirm that it works as intended, yielding the anticipated approximately 1000 records. I've also manually queried the table using a <code>select * from table</code>, with <code>read_sql_query</code> from my script with success. Therefore I may conclude that the python script is properly configured to work with the database as is the procedure itself. The struggle seems to be in putting the two together. My code is below.</p>
<pre><code>import ibm_db
import ibm_db_dbi
import pandas as pd
cnxn = ibm_db.connect('DATABASE=mydb;'
'HOSTNAME=myHost;'
'PORT=446;'
'PROTOCOL=TCPIP;'
'UID=myUser;'
'PWD=myPassword;', '', '')
conn=ibm_db_dbi.Connection(cnxn)
sql = 'call myschema.getaccountnonrecurringpaging(20230509,0);'
df = pd.read_sql_query(sql, conn)
</code></pre>
<p>Package details are listed below:</p>
<ul>
<li>python=3.10.4</li>
<li>pandas=1.5.3</li>
<li>ibm_db=3.1.1</li>
<li>operating system = Ubuntu 20.04</li>
<li>db2: v7r4</li>
</ul>
|
<python><pandas><stored-procedures><db2>
|
2023-06-03 07:10:05
| 0
| 2,703
|
neanderslob
|
76,394,853
| 264,136
|
Can't install jenkins using pip
|
<pre><code>C:\code>pip install jenkins
Collecting jenkins
Using cached jenkins-1.0.2.tar.gz (8.2 kB)
Preparing metadata (setup.py) ... done
Installing collected packages: jenkins
DEPRECATION: jenkins is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for jenkins ... error
error: subprocess-exited-with-error
  × Running setup.py install for jenkins did not run successfully.
  │ exit code: 1
  ╰─> [11 lines of output]
running install
C:\python\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
copying jenkins.py -> build\lib.win-amd64-cpython-311
running build_ext
building 'lookup3' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> jenkins
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
[notice] A new release of pip is available: 23.0.1 -> 23.1.2
[notice] To update, run: python.exe -m pip install --upgrade pip
</code></pre>
<p>Tried:
<a href="https://stackoverflow.com/questions/44951456/pip-error-microsoft-visual-c-14-0-is-required">Pip error: Microsoft Visual C++ 14.0 is required</a></p>
<p>Getting this error:
<a href="https://i.sstatic.net/cCUCU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cCUCU.png" alt="enter image description here" /></a></p>
<p>Installation went fine via the UI, as mentioned in the comment:
<a href="https://i.sstatic.net/QnHs6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QnHs6.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/j0RHH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j0RHH.png" alt="enter image description here" /></a></p>
<p>Still no luck. Any suggestions?</p>
<p>OS: Windows 10 Enterprise.</p>
|
<python><pip>
|
2023-06-03 06:41:22
| 2
| 5,538
|
Akshay J
|
76,394,843
| 4,473,615
|
PyQt5 PDF border spacing in Python
|
<p>I have generated a PDF using PyQt5, which is working perfectly fine. I am just looking to add some border spacing, but am unable to do that using layouts. Below is the code:</p>
<pre><code>from PyQt5 import QtCore, QtWidgets, QtWebEngineWidgets
def printhtmltopdf(html_in, pdf_filename):
app = QtWidgets.QApplication([])
page = QtWebEngineWidgets.QWebEnginePage()
def handle_pdfPrintingFinished(*args):
print("finished: ", args)
app.quit()
def handle_loadFinished(finished):
page.printToPdf(pdf_filename)
page.pdfPrintingFinished.connect(handle_pdfPrintingFinished)
page.loadFinished.connect(handle_loadFinished)
page.setZoomFactor(1)
page.setHtml(html_in)
app.exec()
printhtmltopdf(
result, # raw html variable
"file.pdf",
)
</code></pre>
<p>Result is,</p>
<p><a href="https://i.sstatic.net/2ed9Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ed9Z.png" alt="enter image description here" /></a></p>
<p>Expected result is as below, having spaces at the beginning and end of the content.
Basically I need padding on the left, right, top and bottom.</p>
<p><a href="https://i.sstatic.net/97YaB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/97YaB.png" alt="enter image description here" /></a></p>
<p>Any suggestion will be appreciated</p>
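<p>One hedged workaround: since <code>printToPdf</code> renders the HTML with Chromium, the margins can be added in the HTML itself before it is passed in. The helper name below is made up for illustration:</p>

```python
def with_page_margins(html_body, margin="20mm"):
    """Wrap raw HTML with CSS that pads every printed page edge.

    Assumes the PDF renderer honours CSS margins (Chromium does for
    @page); `margin` can be any CSS length.
    """
    style = (
        f"<style>@page {{ margin: {margin}; }} "
        f"body {{ margin: {margin}; }}</style>"
    )
    return f"<html><head>{style}</head><body>{html_body}</body></html>"

wrapped = with_page_margins("<p>hello</p>")
```

<p>Then call <code>printhtmltopdf(with_page_margins(result), 'file.pdf')</code> instead of passing the raw HTML.</p>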
|
<python><pdf><pyqt5>
|
2023-06-03 06:39:22
| 1
| 5,241
|
Jim Macaulay
|
76,394,790
| 4,825,376
|
Python Multiprocessing Manager Error - 'ForkAwareLocal' object has no attribute
|
<p>I wrote the following code using the <code>multiprocessing</code> module to execute two processes in parallel. One requirement is a shared queue, from the multiprocessing module, used by one process to store data and by another process to read from it. I tried to write it with the code below and got the traceback that follows. Any help, please?</p>
<pre><code>/Users/adhamenaya/anaconda3/bin/python /Users/adhamenaya/DataspellProjects/MultiProcessing/multi-processing.py
Producer process started...
Consumer process started...
Process Process-3:
Traceback (most recent call last):
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py", line 810, in _callmethod
conn = self._tls.connection
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Users/adhamenaya/DataspellProjects/MultiProcessing/multi-processing.py", line 36, in run
data = self.queue.get_nowait()
File "<string>", line 2, in get_nowait
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py", line 814, in _callmethod
self._connect()
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py", line 801, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/connection.py", line 502, in Client
c = SocketClient(address)
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/connection.py", line 630, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
Process Process-2:
Traceback (most recent call last):
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py", line 810, in _callmethod
conn = self._tls.connection
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Users/adhamenaya/DataspellProjects/MultiProcessing/multi-processing.py", line 22, in run
self.queue.put_nowait(input_data)
File "<string>", line 2, in put_nowait
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py", line 814, in _callmethod
self._connect()
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py", line 801, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/connection.py", line 502, in Client
c = SocketClient(address)
File "/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/connection.py", line 630, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
Process finished with exit code 0
</code></pre>
<p>My code:</p>
<pre><code>import time
import random
import multiprocessing
# the producer class simulates the continuous data collection process
class Producer:
def __init__(self, queue):
super().__init__()
self.queue = queue
def run(self):
print("Producer process started...")
while True:
# simulate the time needed to collect data
input_time = random.randrange(1, 4)
time.sleep(input_time)
# simulate date collection
input_data = random.randrange(5, 10)
self.queue.put_nowait(input_data)
print(f" {input_data} is collected in time {input_time} secs")
# the consumer class simulates the work of the data processing algorithm
class Consumer:
def __init__(self, queue):
super().__init__()
self.queue = queue
def run(self):
process_data = 0
print("Consumer process started...")
while True:
data = self.queue.get_nowait()
            # simulate time needed to process data
            process_time = random.randrange(6, 9)
            time.sleep(process_time)
            # simulate the data processing algorithm
            process_data += data
            print(f" input: {data}, new result: {process_data} is processed in {process_time} secs")
if __name__ == "__main__":
# create a shared queue
manager = multiprocessing.Manager()
queue = manager.Queue()
producer = Producer(queue)
consumer = Consumer(queue)
# start instances on parallel processes
producer_process = multiprocessing.Process(target=producer.run).start()
consumer_process = multiprocessing.Process(target=consumer.run).start()
</code></pre>
|
<python><python-3.x><multiprocessing>
|
2023-06-03 06:18:35
| 1
| 950
|
Adham Enaya
|
76,394,713
| 10,173,016
|
How to apply different isin for each row of a DataFrame?
|
<p>I've got two arrays and want to compare rows. In particular, I want to check, for each element of arr2, whether it is in the corresponding row of arr1.</p>
<p>Example given</p>
<pre><code>arr1 = [[1, 7, 6, 2, 8],
[1, 5, 4, 8],
[8, 2, 5]]
arr2 = [[8, 1, 5, 0, 7, 2, 9, 4],
[0, 1, 8, 5, 3, 4, 7, 9],
[9, 2, 0, 6, 8, 5, 3, 7]]
</code></pre>
<p>Expected result for the first row of arr2</p>
<pre><code>[1, 1, 0, 0, 1, 1, 0, 0]
</code></pre>
<p>Solution with for-loop</p>
<pre><code>d1 = pd.DataFrame(arr1)
d2 = pd.DataFrame(arr2)
for y in range(len(arr1)):
print(d2.iloc[y].isin(d1.iloc[y]).astype(int).tolist())
</code></pre>
<p>How to do it in pandas without iterating over rows?</p>
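<p>Since the rows are ragged, a fully index-free pandas one-liner is awkward, but <code>np.isin</code> at least vectorises the membership test within each row and avoids DataFrame row indexing; a sketch:</p>

```python
import numpy as np

arr1 = [[1, 7, 6, 2, 8], [1, 5, 4, 8], [8, 2, 5]]
arr2 = [[8, 1, 5, 0, 7, 2, 9, 4],
        [0, 1, 8, 5, 3, 4, 7, 9],
        [9, 2, 0, 6, 8, 5, 3, 7]]

# np.isin does the per-element membership test; zip pairs each arr2 row
# with its matching arr1 row (one Python-level pass over the rows remains)
result = np.array([np.isin(r2, r1).astype(int) for r2, r1 in zip(arr2, arr1)])
```

<p>With equal-length rows in arr2, <code>result</code> comes out as a regular 2-D array matching the expected output above.</p>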
|
<python><python-3.x><pandas>
|
2023-06-03 05:46:10
| 2
| 401
|
Joseph Kirtman
|
76,394,657
| 219,153
|
How to read SyGuS format into cvc5 Python script?
|
<p>There is number of examples in SyGuS format (<a href="https://sygus.org/language/" rel="nofollow noreferrer">https://sygus.org/language/</a>) in cvc5 repo, e.g. <a href="https://github.com/cvc5/cvc5/tree/main/test/regress/cli/regress0/sygus" rel="nofollow noreferrer">https://github.com/cvc5/cvc5/tree/main/test/regress/cli/regress0/sygus</a>. How do I read these files or corresponding strings into cvc5 Python script?</p>
<p>I know about Python API (<a href="https://cvc5.github.io/docs/cvc5-1.0.2/api/python/python.html" rel="nofollow noreferrer">https://cvc5.github.io/docs/cvc5-1.0.2/api/python/python.html</a>), which allows to define a SyGuS problem programmatically, but I would like to use SyGuS format directly. I can't find anything about it in the documentation.</p>
<hr />
<p>Here is an example of problem definition in SyGuS format:</p>
<pre><code>(set-logic LIA)
(synth-fun max2 ((x Int) (y Int)) Int
((I Int) (B Bool))
((I Int (x y 0 1
(+ I I) (- I I)
(ite B I I)))
(B Bool ((and B B) (or B B) (not B)
(= I I) (<= I I) (>= I I))))
)
(declare-var x Int)
(declare-var y Int)
(constraint (>= (max2 x y) x))
(constraint (>= (max2 x y) y))
(constraint (or (= x (max2 x y)) (= y (max2 x y))))
(check-synth)
</code></pre>
|
<python><io>
|
2023-06-03 05:19:18
| 1
| 8,585
|
Paul Jurczak
|
76,394,543
| 3,487,441
|
Installing a python script to run from the command line
|
<p>I need to make some python utilities available to run from the command line (OSX Ventura). I've been looking over examples and documentation for setup.py, but can't make any progress, even with the simplest possible example:</p>
<p><strong>directory structure:</strong></p>
<pre><code>./ex
__init__.py
myscript.py
setup.py
</code></pre>
<p><strong>myscript.py</strong></p>
<pre><code>#!/usr/local/bin python3
def main():
print('hello')
</code></pre>
<p><strong>setup.py</strong></p>
<pre><code>from setuptools import setup
setup(
name='myscript',
version='0.1.0',
py_modules=['myscript'],
entry_points={
'entry_points': [
'scr=myscript:main',
],
} )
</code></pre>
<p>I'm trying to install with various combinations of parameters:</p>
<pre><code>pip3 install -e .
pip3 install --user .
pip3 install .
</code></pre>
<p>In each case, the new command is not found. The examples do not cover what can go wrong so I'm really lost about what to try next.</p>
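<p>For comparison, the setuptools convention is a <code>console_scripts</code> group inside <code>entry_points</code>; a sketch of the setup.py with that group name, assuming the same module layout as above:</p>

```python
from setuptools import setup

setup(
    name='myscript',
    version='0.1.0',
    py_modules=['myscript'],
    entry_points={
        # the group name is 'console_scripts';
        # 'entry_points' is only the name of the setup() argument
        'console_scripts': [
            'scr=myscript:main',
        ],
    },
)
```

<p>After reinstalling, the <code>scr</code> command should appear in a directory on PATH (with <code>--user</code> installs, that is typically <code>~/.local/bin</code> or the macOS user scripts directory, which may itself not be on PATH).</p>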
|
<python><pip><setuptools><setup.py><python-packaging>
|
2023-06-03 04:20:08
| 2
| 1,361
|
gph
|
76,394,516
| 9,840,684
|
creating a function looping through multiple subsets and then grouping and summing those combinations of subsets
|
<p>I am attempting to build a function that subsets data across two combinations of dimensions, groups on a status label, and sums on price, producing a single-row dataframe whose columns are the different combinations of subsets of the summed prices.</p>
<p><strong>edit</strong>
To clarify, what I'm looking for is to subset on two different combinations of dimensions: a time delta and an association label.</p>
<p>I'm then looking to group on a <em>different</em> status label (which is different from the association label) and sum those on price.</p>
<p>Combinations of subsets:</p>
<ul>
<li>the association labels are in the <strong>"Association Label"</strong> column and the three of interest are <code>["SDAR", "NSDCAR", "PSAR"]</code>; there are others in the column/data, but they can be ignored</li>
<li>the time intervals are <code>[7, 30, 60, 90, 120, None]</code> (days), applied to the "<strong>Status Date</strong>" column</li>
</ul>
<p>What's being grouped and summed as per those combination of subsets:</p>
<ul>
<li>The <strong>Status Labelled</strong> are transaction statuses which are to be grouped on as per the different combinations of the above subsets from time deltas and association labels. They include <code>["Active","Pending","Sold","Withdrawn","Contingent","Unknown"]</code> (this is not an exhaustive list but just an example)</li>
<li>And finally <strong>['List Price (H)']</strong>, which is to be summed per each of those status labels and per each combination of the first two subsets.</li>
</ul>
<p>So example columns of the desired output would be something like <code>PSAR_7_Contingent_price</code> or <code>SDAR_60_Withdrawn_price</code>.</p>
<p>This builds off of <a href="https://stackoverflow.com/questions/76384338/looping-through-combinations-of-subsets-of-data-for-processing">this question and answer</a> which worked fantastic for value counts, but I'm having difficulty modifying it for <em>summing</em> on a price variable.</p>
<p>The code I used to build off of is</p>
<pre><code>def crossubsets(df):
labels = ["SDAR", "NSDCAR", "PSAR"]
time_intervals = [7, 30, 60, 90, 120, None]
group_dfs = df.loc[
df["Association Label"].isin(labels)
].groupby("Association Label")
data = []
for l, g in group_dfs:
for ti in time_intervals:
s = (
g[g["Status Date"] > (pd.Timestamp.now() - pd.Timedelta(ti, "d"))]
if ti is not None else g
)
data.append(s["Status Labelled"].value_counts().rename(f"counts_{l}_{ti}"))
return pd.concat(data, axis=1) #with optional .T to have 18 rows instead of cols
# additional code to flatten the output to a (1, 180) dataframe
counts_processed = counts_processed.unstack().to_frame().sort_index(level=1).T
counts_processed.columns = counts_processed.columns.map('_'.join)
</code></pre>
<p>This worked great for the value_counts per Status Labelled, but now I'm looking to sum the associated price per Status Labelled, across those same combinations of subsets. I naively attempted to modify the above function with:</p>
<pre><code>def crossubsetsprice(df):
labels = ["SDAR", "NSDCAR", "PSAR"]
time_intervals = [7, 30, 60, 90, 120, None]
group_dfs = df.loc[
df["Association Label"].isin(labels)
].groupby("Association Label")
data = []
for l, g in group_dfs:
for ti in time_intervals:
s = (
g[g["Status Date"] > (pd.Timestamp.now() - pd.Timedelta(ti, "d"))]
if ti is not None else g
)
data.append(s['List Price (H)'].sum().rename(f"price_{l}_{ti}"))
return pd.concat(data, axis=1) #with optional .T to have 18 rows instead of cols
</code></pre>
<p>But that throws an error, <code>AttributeError: 'numpy.float64' object has no attribute 'rename'</code>, and I don't think it makes much sense or would get the desired output anyway.</p>
<p>The alternative I want to avoid, but which I know would work, is creating 18 distinct functions, one for each combination of subsets, and then concatenating the output. An example would be:</p>
<pre><code>def price_PSAR_90(df):
subset_90 = df[df['Status Date'] > (datetime.now() - pd.to_timedelta("90day"))]
subset_90_PSAR= subset_90[subset_90['Association Label']=="PSAR"]
grouped_90_PSAR = subset_90_PSAR.groupby(['Status Labelled'])
price_summed_90_PSAR = (pd.DataFrame(grouped_90_PSAR['List Price (H)'].sum()))
price_summed_90_PSAR.columns = ['Price']
price_summed_90_PSAR = price_summed_90_PSAR.reset_index()
price_summed_90_PSAR = price_summed_90_PSAR.T
price_summed_90_PSAR = price_summed_90_PSAR.reset_index()
price_summed_90_PSAR.drop(price_summed_90_PSAR.columns[[0]], axis=1, inplace=True)
price_summed_90_PSAR_header = price_summed_90_PSAR.iloc[0] #grab the first row for the header
price_summed_90_PSAR = price_summed_90_PSAR[1:] #take the data less the header row
price_summed_90_PSAR.columns = price_summed_90_PSAR_header
return price_summed_90_PSAR
</code></pre>
<p>The last code snippet works, but without looping it would need to be repeated with the time delta and association label changed for each combination, and then the output columns relabelled and concatenated together, which I want to avoid if possible.</p>
|
<python><pandas><loops><iterator><iteration>
|
2023-06-03 04:07:16
| 1
| 373
|
JLuu
|
76,394,480
| 4,726,035
|
Can't parse segment Firebase Token Python/Flask
|
<p>I am currently building a small API project using Flask. I want to authenticate the request using Firebase Auth. I am using the verify_id_token function in a small middleware.</p>
<pre><code>def check_token(f):
@wraps(f)
def wrap(*args,**kwargs):
token = request.headers.get('Authorization')
if not token:
return {'message': 'No token provided'},400
try:
user = auth.verify_id_token(token)
except Exception as e:
print(f'Error verifying token: {e}')
return {'message':'Invalid token provided.'},400
else:
request.user = user
return f(*args, **kwargs)
return wrap
</code></pre>
<p>My code had been working properly, but then for no apparent reason I started to have the following issue:</p>
<pre><code>Error verifying token: Can't parse segment: b'\x05\xe6\xabz\xb7\xb2&\....
</code></pre>
<p>I have double-checked the token and I see no issues on that side...</p>
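<p>One way to sanity-check what actually reaches <code>verify_id_token</code> (a hypothetical helper of mine): a Firebase ID token is a JWT, i.e. three base64url segments separated by dots, and a common cause of parse failures is passing something that is not a bare token, for example the whole <code>Authorization</code> header including a <code>Bearer </code> prefix:</p>

```python
import base64
import json

def inspect_jwt_header(token: str) -> dict:
    """Check that a string is shaped like a JWT and decode its header.

    verify_id_token expects the raw token, not e.g. 'Bearer <token>'.
    """
    parts = token.split(".")
    if len(parts) != 3:
        raise ValueError(f"not a JWT: expected 3 dot-separated segments, got {len(parts)}")
    header_b64 = parts[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(header_b64))
```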
|
<python><firebase><flask><firebase-authentication>
|
2023-06-03 03:43:43
| 2
| 535
|
Mansour
|
76,394,463
| 1,019,129
|
Simulate decaying function
|
<p>Let t be the time tick, i.e. 1, 2, 3, 4, 5, ...</p>
<p>I want to calculate and plot a cumulative decaying function f(inits[], peaks[], peak-ticks, zero-ticks), preferably in Python.</p>
<p>Where :</p>
<pre><code>- inits[] is a list of points at time/tick t where a new 'signal' is introduced
- peaks[] is a list of values which must be reached after peak-ticks. (corresponding to inits)
- peak-ticks is how many ticks it takes to reach the next peak value
- zero-ticks is how many ticks it takes to reach zero from the peak
</code></pre>
<p>For example :</p>
<pre><code> f(inits=[10,15,18], peaks=[1,1,1], peak-ticks=1, zero-ticks=10)
</code></pre>
<p>In this case, decay takes 10 ticks, i.e. 0.1 per tick.</p>
<p>at tick:</p>
<pre><code> 10! result is 0
11. = 1
12. = 0.9
.....
15! = 0.6 + 0 = 0.6
16. = 0.5 + 1 = 1.5
17. = 0.4 + 0.9 = 1.3
18! = 0.3 + 0.8 + 0 = 1.1
19. = 0.2 + 0.7 + 1 = 1.9
20. = 0.1 + 0.6 + 0.9 = 1.6
.....
</code></pre>
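<p>A linear-decay sketch that reproduces the table above (function and argument names are my own: a linear rise to the peak over <code>peak_ticks</code>, then a linear decay to zero over <code>zero_ticks</code>):</p>

```python
def signal_value(t, init, peak, peak_ticks, zero_ticks):
    """Value at tick t of one signal introduced at tick `init`."""
    if t <= init:
        return 0.0
    if t <= init + peak_ticks:
        return peak * (t - init) / peak_ticks                # rising edge
    return max(0.0, peak * (1 - (t - init - peak_ticks) / zero_ticks))  # decay

def f(t, inits, peaks, peak_ticks, zero_ticks):
    """Cumulative value: sum of all signals active at tick t."""
    return sum(signal_value(t, i, p, peak_ticks, zero_ticks)
               for i, p in zip(inits, peaks))

for t in range(10, 21):
    print(t, round(f(t, [10, 15, 18], [1, 1, 1], 1, 10), 1))
```

<p>For the exponential complication, the decay branch could be swapped for something like <code>peak * math.exp(-k * (t - init - peak_ticks))</code> with a rate constant <code>k</code> of your choice.</p>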
<p>PS> As a complication, what if the decay is exponential like 1/x ?</p>
|
<python><cumulative-sum><decay>
|
2023-06-03 03:33:34
| 1
| 7,536
|
sten
|
76,394,436
| 2,628,868
|
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://conda.anaconda.org/conda-forge/osx-64/current_repodata.json>
|
<p>When I tried to run this command on macOS 13.3 with an M1 Pro chip, it showed an error like this:</p>
<pre><code>> conda install anaconda-clean
Collecting package metadata (current_repodata.json): failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://conda.anaconda.org/conda-forge/osx-64/current_repodata.json>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
'https//conda.anaconda.org/conda-forge/osx-64
</code></pre>
<p>I have tried to set the ssl verify:</p>
<pre><code>conda config --set ssl_verify false
</code></pre>
<p>I also tried switching the network from wifi (which uses a proxy) to 4G. That still did not fix the issue. What should I do to make conda work? BTW, I can access the URL <a href="https://conda.anaconda.org/conda-forge/osx-64/current_repodata.json" rel="nofollow noreferrer">https://conda.anaconda.org/conda-forge/osx-64/current_repodata.json</a> in the Google Chrome browser and in the terminal using the curl command.</p>
|
<python>
|
2023-06-03 03:23:57
| 0
| 40,701
|
Dolphin
|
76,394,423
| 1,088,796
|
Do I need any environment variables set to execute some code, call openai's api, and return a response?
|
<p>I was going through a course on OpenAI's API using an in-browser Jupyter notebook page, but wanted to copy some example code from there into a local IDE. I installed Python, the Jupyter extension in VS Code, and the OpenAI library. My code is below:</p>
<pre><code>import openai
import os
# from dotenv import load_dotenv, find_dotenv
# _ = load_dotenv(find_dotenv()) # read local .env file
openai.api_key = "my api key is here"
def get_completion(prompt, model="gpt-3.5-turbo"):
messages = [{"role": "user", "content": prompt}]
response = openai.ChatCompletion.create(
model=model,
messages=messages,
temperature=0, # this is the degree of randomness of the model's output
)
return response.choices[0].message["content"]
prompt = f"""
Determine whether each item in the following list of \
topics is a topic in the text below, which
is delimited with triple backticks.
Give your answer as list with 0 or 1 for each topic.\
List of topics: {", ".join(topic_list)}
Text sample: '''{story}'''
"""
response = get_completion(prompt)
print(response)
</code></pre>
<p>When I run it, I get the error:</p>
<pre><code>APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
</code></pre>
<p>I'm assuming that's because I commented out lines 3 and 4 in the code because I am unsure what they do and do not know how to use the dotenv library. Is it simple to set this up just to make a basic call to the openai API? That's all I'm trying to do with this code right now.</p>
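<p>For context on the commented-out lines: <code>python-dotenv</code> just copies <code>KEY=VALUE</code> pairs from a <code>.env</code> file into the process environment so the key stays out of source code; it should not cause a connection-level error on its own. A rough stdlib-only stand-in (the file name and key name here are illustrative):</p>

```python
import os

def load_env_file(path=".env"):
    """Rough stand-in for dotenv's load_dotenv: copy KEY=VALUE lines
    from a file into os.environ (already-set variables are kept)."""
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# with a .env file containing a line such as OPENAI_API_KEY=sk-...
# the course code then does: openai.api_key = os.environ["OPENAI_API_KEY"]
```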
|
<python><openai-api><dotenv>
|
2023-06-03 03:17:15
| 1
| 2,741
|
intA
|
76,394,303
| 18,572,509
|
RuntimeError when trying to serve favicon with Flask
|
<p>I followed the instructions for serving favicons from <a href="https://flask.palletsprojects.com/en/1.1.x/patterns/favicon/" rel="nofollow noreferrer">Flask's docs</a>, and added the line <code>app.add_url_rule('/favicon.ico', redirect_to=url_for('static', filename='favicon.ico'))</code> to my server. But when I run it I get this error:</p>
<pre><code> File "server.py", line X, in __init__
redirect_to=url_for('static', filename='favicon.ico'))
File "/python3.9/site-packages/flask/helpers.py", line 306, in url_for
raise RuntimeError(
RuntimeError: Attempted to generate a URL without the application context being pushed. This has to be executed when application context is available.
</code></pre>
<p>I am using a class-based server, with the basics reproduced here:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, Response, render_template, request, redirect, url_for
from werkzeug.exceptions import HTTPException
class Server:
def __init__(self, host, port):
self.app = Flask(__name__)
self.host = host
self.port = port
# Set up routes:
self.app.route("/")(self.index)
# Error occurs here:
self.app.add_url_rule('/favicon.ico',
redirect_to=url_for('static', filename='favicon.ico'))
self.app.register_error_handler(HTTPException, self.handle_http_error)
def index(self):
return render_template("index.html")
@staticmethod
def error(msg):
"""Custom error handler"""
return render_template("error.html", msg=msg)
def handle_http_error(self, e):
return self.error(f"{e.code} {e.name}: {e.description}"), e.code
def start(self):
self.app.run(host=self.host, port=self.port)
server = Server("localhost", 8080)
server.start()
</code></pre>
<p>My guess is I put the line to serve the favicon in the wrong spot. The error message says <code>This has to be executed when application context is available.</code>, but I'm not sure exactly what that means. I saw <a href="https://stackoverflow.com/questions/31766082/flask-url-for-error-attempted-to-generate-a-url-without-the-application-conte">this question</a> but the answer is a bit vague, and I couldn't figure out how to incorporate it into my code. Also, that user had a <code>with</code> statement, which I tried but couldn't get to work. I tried adding a <code>SERVER_NAME</code> config variable but it didn't change anything (I also had no idea what to put in it, so that's probably another issue).</p>
|
<python><flask><favicon>
|
2023-06-03 02:11:05
| 0
| 765
|
TheTridentGuy supports Ukraine
|
76,394,296
| 13,891,321
|
Not all Plotly subplots scale equally
|
<p>I have working code to create 4 subplots in the same HTML output. When I had them as 4 separate HTML plots, the Z axes scaled as requested (0 to -5), but when I run them as a series of subplots, only the first plot scales as requested.</p>
<pre><code>"""Plot 3D streamer surfaces."""
# Initialise figure with subplots
fig4S = make_subplots(rows=2, cols=2, specs=[[{'is_3d': True},
{'is_3d': True}], [{'is_3d': True},
{'is_3d': True}]],
subplot_titles=("Streamer 1",
"Streamer 2", "Streamer 3", "Streamer 4"),
shared_xaxes=False, row_heights=[0.5, 0.5],
vertical_spacing=0.05)
zS1 = 0-dfS1 # Depth data for each surface, made negative as it's a depth below sea surface
zS2 = 0-dfS2
zS3 = 0-dfS3
zS4 = 0-dfS4
fig4S.add_trace(go.Surface(z=zS1, cmin=-5, cmax=0,
colorscale=[[0, 'violet'], [0.2, 'blue'],
[0.35, 'lightblue'], [0.50, 'green'],
[0.65, 'yellow'], [0.8, 'orange'],
[1, 'red']]), 1, 1)
fig4S.add_trace(go.Surface(z=zS2, cmin=-5, cmax=0,
colorscale=[[0, 'violet'], [0.2, 'blue'],
[0.35, 'lightblue'], [0.50, 'green'],
[0.65, 'yellow'], [0.8, 'orange'],
[1, 'red']]), 1, 2)
fig4S.add_trace(go.Surface(z=zS3, cmin=-5, cmax=0,
colorscale=[[0, 'violet'], [0.2, 'blue'],
[0.35, 'lightblue'], [0.50, 'green'],
[0.65, 'yellow'], [0.8, 'orange'],
[1, 'red']]), 2, 1)
fig4S.add_trace(go.Surface(z=zS4, cmin=-5, cmax=0,
colorscale=[[0, 'violet'], [0.2, 'blue'],
[0.35, 'lightblue'], [0.50, 'green'],
[0.65, 'yellow'], [0.8, 'orange'],
[1, 'red']]), 2, 2)
fig4S.update_traces(contours_z=dict(show=True, usecolormap=True,
highlightcolor="limegreen"))
fig4S.update_scenes(aspectratio=dict(x=2, y=2, z=0.5))
fig4S.update_layout(scene=dict(zaxis=dict(nticks=4, range=[-5, 0])))
fig4S.update_layout(template='plotly_dark',
title="Channel Depths Line: " +
str(name)+" Seq: "+str(Seq),
xaxis=dict(automargin=True))
fig4S.write_html("C:/Users/client/Desktop/4_Streamer_Depths.html")
</code></pre>
<p>The scale of each subplot can be seen on its side axis. Only Streamer 1 scales as requested; the rest use the data's max/min and appear exaggerated in comparison.
<a href="https://i.sstatic.net/22IiY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/22IiY.png" alt="enter image description here" /></a></p>
<p>A snippet of the data for each surface looks like this. In this example, each subplot's data has 257 rows and columns from R1d to R96d. Numbers are typically in the region of 1.0 to 4.5.</p>
<pre><code> R1d R2d R3d R4d R5d R6d R7d
0 2.7 2.6 2.4 2.4 2.4 2.4 2.4
1 2.7 2.6 2.4 2.4 2.4 2.4 2.4
2 2.8 2.6 2.4 2.4 2.4 2.4 2.4
3 2.8 2.6 2.4 2.4 2.4 2.4 2.4
4 2.8 2.6 2.4 2.4 2.4 2.4 2.4
5 2.8 2.6 2.5 2.5 2.4 2.4 2.4
6 2.8 2.6 2.5 2.5 2.5 2.4 2.4
7 2.8 2.6 2.5 2.5 2.4 2.4 2.4
8 2.8 2.6 2.5 2.5 2.4 2.4 2.4
9 2.8 2.6 2.5 2.4 2.4 2.4 2.3
</code></pre>
|
<python><plotly>
|
2023-06-03 02:05:36
| 1
| 303
|
WillH
|
76,394,292
| 13,002,743
|
Filling NAN values in Pandas by using previous values
|
<p>I have a Pandas DataFrame in the following format.</p>
<p><a href="https://i.sstatic.net/vyvMH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vyvMH.png" alt="Sample DataFrame" /></a></p>
<p>I am trying to fill the NaN value by using the most recent non-NaN value and adding one second to the time value. For example, in this case, the program should take the most recent non-NaN value of 8:30:20 and add one second to replace the NaN value. So, the replacement value should be 8:30:21. Is there a way in Pandas to simulate this process for the entire column?</p>
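<p>One possible sketch (with made-up timestamps, since the image data isn't reproducible here): forward-fill the column, then add one second for each consecutive NaN since the last valid value, so runs of NaNs keep incrementing:</p>

```python
import pandas as pd

s = pd.Series(pd.to_datetime([
    "2023-01-01 08:30:19", "2023-01-01 08:30:20", None, None,
    "2023-01-01 08:30:25",
]))

# group each run of NaNs with the last valid value before it,
# then count how far into the run each row is
group_ids = s.notna().cumsum()
seconds_offset = s.groupby(group_ids).cumcount()
filled = s.ffill() + pd.to_timedelta(seconds_offset, unit="s")
```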
|
<python><pandas><datetime>
|
2023-06-03 02:03:36
| 3
| 365
|
Rishab
|
76,394,246
| 1,694,657
|
Streaming OpenAI results from a Lambda function using Python
|
<p>I'm trying to stream results from Open AI using a Lambda function on AWS using the OpenAI Python library. For the invoke mode I have: RESPONSE_STREAM. And, using the example <a href="https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb" rel="nofollow noreferrer">provided for streaming</a>, I can see the streamed results in the Function Logs (abbreviated below):</p>
<p>Response</p>
<pre><code>null
</code></pre>
<p>Function Logs</p>
<pre><code>START RequestId: 3e0148c3-1269-4e38-bd08-e29de5751f18 Version: $LATEST
{
  "choices": [
    {
      "finish_reason": null,
      "index": 0,
      "logprobs": null,
      "text": "\n"
    }
  ],
  "created": 1685755648,
  "id": "cmpl-7NALANaR7eLwIMrXTYJVxBpk6tiZb",
  "model": "text-davinci-003",
  "object": "text_completion"
}
{
  "choices": [
    {
      "finish_reason": null,
      "index": 0,
      "logprobs": null,
      "text": "\n"
    }
  ],....
</code></pre>
<p>But the Response is null. I've tested this by entering the URL in the browser and by performing a GET request via cURL: both respond with null. Below is the exact code (with the secret key changed) that I used, which can also be found at the link provided:</p>
<pre><code>import json
import openai
import boto3
def lambda_handler(event, context):
model_to_use = "text-davinci-003"
input_prompt="Write a sentence in 4 words."
openai.api_key = 'some-secret key'
response = openai.Completion.create(
model=model_to_use,
prompt=input_prompt,
temperature=0,
max_tokens=100,
top_p=1,
frequency_penalty=0.0,
presence_penalty=0.0,
stream=True
)
for chunk in response:
print(chunk)
</code></pre>
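<p>One detail worth noting: the handler above only prints each chunk and never returns a value, which is consistent with the <code>null</code> Response. If true streaming back to the caller is not required, the chunks can be collected and returned instead; a hypothetical helper:</p>

```python
def collect_stream_text(response):
    """Join the text of each streamed completion chunk into one string."""
    return "".join(chunk["choices"][0]["text"] for chunk in response)

# inside lambda_handler, instead of the print loop:
# return {"statusCode": 200, "body": collect_stream_text(response)}
```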
|
<python><lambda><streaming><openai-api>
|
2023-06-03 01:35:03
| 2
| 1,271
|
Eric
|
76,394,194
| 14,293,020
|
Xarray write large dataset on memory without killing the kernel
|
<p><strong>Context:</strong>
I have the following dataset:
<a href="https://i.sstatic.net/dDYrQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dDYrQ.png" alt="dataset" /></a></p>
<p><strong>Goal:</strong> I want to <em>write</em> it on my disk. I am using chunks so the dataset does not kill my kernel.</p>
<p><strong>Problem:</strong>
I tried to save it on my disk with chunks using:</p>
<ol>
<li>Option 1: <code>to_zarr</code> -> biggest homogeneous chunks possible: <code>{'mid_date':41, 'x':379, 'y':1}</code></li>
<li>Option 2: <code>to_netcdf</code> -> chunk size <code>{'mid_date':3000, 'x':758, 'y':617}</code></li>
<li>Option 3: <code>to_netcdf</code> (or <code>to_zarr</code>, same result) -> chunk size <code>{'mid_date':1, 'x':100, 'y':100}</code></li>
</ol>
<p>But the memory ends up blowing up anyway (and I have 96 GB of RAM). Option 3 tries another approach by saving chunk by chunk, but it still blows up the memory (<em>see screenshot</em>). Moreover, it strangely seems to take longer and longer to process chunks as they are written to disk. Do you have a suggestion on how I could solve this problem?</p>
<p>In the screenshot, I would be expecting 1 line of <code>#</code> per file, but on Chunk 2 already, it seems it's saving multiple chunks at once (3 lines of <code>#</code>). The size of chunk 2 was <code>502kb</code>.
<a href="https://i.sstatic.net/s3n1q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s3n1q.png" alt="enter image description here" /></a></p>
<p><strong>Code:</strong></p>
<pre><code>import xarray as xr
import os
import sys
from dask.diagnostics import ProgressBar
import numpy as np
xrds = #massive dataset
pathsave = 'Datacubes/'
#Option 1, did not work
#write_job = xrds.to_zarr(f"{pathsave}Test.zarr", mode='w', compute=False, consolidated=True)
#Option 2, did not work (with chunk size {'mid_date':3000, 'x':100, 'y':100})
#write_job = xrds.to_netcdf(f"test.nc",compute=False)
#with ProgressBar():
# print(f"Writing to {pathsave}")
# write_job = write_job.compute()
# Option 3, did not work. That's the option I took the screenshot from
# I force the chunks to be really small so I don't overload the memory
chunk_size = {'mid_date':1, 'y':xrds.y.shape[0], 'x':xrds.x.shape[0]}
with ProgressBar():
for i, (key, chunk) in enumerate(xrds.chunk(chunk_size).items()):
chunk_dataset = xr.Dataset({key: chunk})
chunk_dataset.to_netcdf(f"{pathsave}/chunk_{i}.nc", mode="w", compute=True)
print(f"Chunk {i+1} saved.")
</code></pre>
|
<python><dask><netcdf><python-xarray><zarr>
|
2023-06-03 01:04:32
| 1
| 721
|
Nihilum
|
76,394,170
| 3,826,733
|
Cannot open file downloaded from Azure Storage account
|
<p>Why is it that, when downloaded, I am unable to open a file that was uploaded to my Azure storage account as a block blob?</p>
<p>Initially I thought it was because of the way I uploaded it, but manually uploaded files won't open when downloaded either.
This is the message I see when I try to open the downloaded file -
<a href="https://i.sstatic.net/ltZKJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ltZKJ.png" alt="enter image description here" /></a></p>
<p>Type of file - .jpg
Here are the properties of the file on Azure -</p>
<p><a href="https://i.sstatic.net/IYgzA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYgzA.png" alt="enter image description here" /></a></p>
<p>My graphql api calls the function below which is written in python -</p>
<pre><code>@mutation.field("fileUpload")
def resolve_fileUpload(_, info, file):
print(file['containerName'])
file_path = file['file'] if 'file' in file else None
file_name = file['fileName'] if 'fileName' in file else None
file_type = file['fileType'] if 'fileType' in file else None
file_extension = file['fileExtension'] if 'fileExtension' in file else None
uploaded_date = file['uploadedDate'] if 'uploadedDate' in file else None
container_name = file['containerName'] if 'containerName' in file else None
try:
container_client = blob_service_client.get_container_client(
container_name)
if not container_client.exists():
container_client.create_container()
container_client.set_container_metadata(
metadata={'Created_Date': uploaded_date})
with open(file_path, "rb") as file:
content_settings = ContentSettings(content_type='image/jpeg')
metadata = {'File_Name': file_name, 'Uploaded_Date': uploaded_date, 'Container_Name': container_name,
'File_Type': file_type, 'File_Extension': file_extension}
result = container_client.upload_blob(
name=file_name, data=file_path, metadata=metadata, content_settings=content_settings)
# result = container_client.upload_blob(
# name=file_name, data=file_path)
except AzureException as e:
if e.status_code == 200:
return {
"status": e.status_code,
"error": "",
"fileUrl": result.url
}
else:
return {
"status": e.status_code,
"error": e.message,
"fileUrl": result.url
}
else:
return {
"status": 200,
"error": "",
"fileUrl": result.url
}
</code></pre>
<p><a href="https://i.sstatic.net/mkCfT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mkCfT.jpg" alt="enter image description here" /></a></p>
|
<python><azure><azure-blob-storage>
|
2023-06-03 00:50:25
| 0
| 3,842
|
Sumchans
|
76,394,066
| 5,378,132
|
FastAPI SQLAlchemy - How to Encrypt Table Column and then Decrypt when Querying Result?
|
<p>I have a table called <code>users</code>. I want to encrypt the <code>phone_number</code> column in the SQL table, and then decrypt <code>phone_number</code> when querying the <code>users</code> table and returning a User item.</p>
<p>Here's an example:</p>
<p><strong>models.py</strong></p>
<pre><code>class Users(Base):
__tablename__ = "users"
id = Column(UUID(as_uuid=True), primary_key=True, unique=True, default=uuid.uuid4)
username = Column(String(255), nullable=False)
phone_number = Column(StringEncryptedType(String(255), settings.ENCRYPT_KEY), nullable=False)
created_at = Column(DateTime, server_default=func.now())
</code></pre>
<p><strong>schemas.py</strong></p>
<pre><code>class UserResult(BaseModel):
id: Optional[uuid.UUID]
username: str
phone_number: str
class Config:
orm_mode = True
class UsersResult(BaseModel):
people: List[UserResult]
</code></pre>
<p><strong>users.py</strong></p>
<pre><code>async def db_get_users(db: AsyncSession) -> List[Users]:
result = await db.execute(select(Users))
return result.scalars().all()
async def db_create_user(db: AsyncSession, user: UserResult) -> Users:
instance = Users(**user.dict())
db.add(instance)
await db.commit()
return instance
@router.get("/users", name="users")
async def get_all_users(
request: Request,
db: AsyncSession = Depends(get_session),
# authenticated: bool = Depends(check_authentication_header),
):
request.app.logger.info("Retrieving list of all users ...")
return {"users": await db_get_users(db)}
</code></pre>
<p>Unfortunately, I'm getting the following error when hitting the <code>/users</code> endpoint for getting a list of all users in the SQL table:</p>
<pre><code>starlit-fastapi-app | decrypted_value = self.engine.decrypt(value)
starlit-fastapi-app | File "/usr/local/lib/python3.9/site-packages/sqlalchemy_utils/types/encrypted/encrypted_type.py", line 121, in decrypt
starlit-fastapi-app | decrypted = base64.b64decode(value)
starlit-fastapi-app | File "/usr/local/lib/python3.9/base64.py", line 87, in b64decode
starlit-fastapi-app | return binascii.a2b_base64(s)
starlit-fastapi-app | binascii.Error: Incorrect padding
</code></pre>
<p>Any help would be greatly appreciated!</p>
|
<python><encryption><sqlalchemy><cryptography><fastapi>
|
2023-06-03 00:03:04
| 1
| 2,831
|
Riley Hun
|
76,393,832
| 11,485,896
|
Get physical address from a way containing nodes only
|
<p>I'm new to geocoding stuff.</p>
<p>I have a list of addresses which I have to find the nearest <strong>residential</strong> buildings for. At first, I'm looking for basic data of these addresses using <code>geopy.geocoders.Nominatim</code> geolocator. The data I get from <code>Nominatim</code> are, among others, <code>display_name</code>, <code>osm_id</code>, <code>osm_type</code>. After that I switch to <code>OSMPythonTools.api.Api</code> to get more detailed information (e.g. number of floors, flats etc.) from <code>osm_query: str = fr"{osm_type}/{osm_id}"</code>. Then data is saved to a <code>pandas</code> dataframe. In the next step, using <code>osmnx</code>, I try to get all geometries from addresses in the 1 km perimeter (by default). Code:</p>
<pre class="lang-py prettyprint-override"><code># Python
import os
from pprint import pprint
from collections import defaultdict
# geodata
import pandas as pd
from pandas import DataFrame
from OSMPythonTools.api import Api as OSM_Api
from OSMPythonTools.nominatim import Nominatim as OSM_Nominatim
from geopy.geocoders import Nominatim as geopy_Nominatim
import osmnx as ox
# # # engines
# # geopy
# https://levelup.gitconnected.com/simple-geocoding-in-python-fb28ee5272e0
geopy_geolocator: geopy_Nominatim = geopy_Nominatim(user_agent="my_app")
geopy_geocode: geopy_geolocator.geocode = geopy_geolocator.geocode
# # OSM
api_OSM: OSM_Api = OSM_Api()
# # # dane
# test addresses load
df_addresses: DataFrame = pd.read_csv("test_addresses.csv", sep = ";")
# # # gathering data
# addresses coordinates
# https://levelup.gitconnected.com/simple-geocoding-in-python-fb28ee5272e0
# "full_address" is in custom, non-OSM format
addresses_to_analyze: dict = df_addresses["full_address"].to_list()
addresses_data: dict[list] = defaultdict(list)
for i in addresses_to_analyze:
addresses_data["full_address"].append(i)
raw_geopy_geocode_response: dict = geopy_geocode(i)
if raw_geopy_geocode_response:
raw_geopy_geocode_response: dict = geopy_geocode(i).raw
osm_address = raw_geopy_geocode_response.get("display_name")
osm_id: int = raw_geopy_geocode_response.get("osm_id")
osm_type: str = raw_geopy_geocode_response.get("osm_type")
place_class: str = raw_geopy_geocode_response.get("class")
place_type: str = raw_geopy_geocode_response.get("type")
osm_query: str = fr"{osm_type}/{osm_id}"
raw_osm_geocode_response: dict = api_OSM.query(osm_query).tags()
building_levels: str = raw_osm_geocode_response.get("building:levels")
building_flats: str = raw_osm_geocode_response.get("building:flats")
addresses_data["osm_address"].append(osm_address)
addresses_data["osm_id"].append(osm_id)
addresses_data["osm_type"].append(osm_type)
addresses_data["place_class"].append(place_class)
addresses_data["place_type"].append(place_type)
addresses_data["building_levels"].append(building_levels)
addresses_data["building_flats"].append(building_flats)
else:
addresses_data["osm_address"].append(None)
addresses_data["osm_id"].append(None)
addresses_data["osm_type"].append(None)
addresses_data["place_class"].append(None)
addresses_data["place_type"].append(None)
addresses_data["building_levels"].append(None)
addresses_data["building_flats"].append(None)
df_osm_data = pd.DataFrame.from_dict(addresses_data)
# extracting test address
test_address = df_osm_data.loc[0, "osm_address"]
# # # osmnx - nearest (by default - 1 km) residential locations
ox_gdf = ox.geometries_from_address(
address = test_address,
tags = {"building": ["house", "apartments", "residential", "detached"], "place": "house", "amenity": False,
}
)
df_gdf = pd.DataFrame(ox_gdf)
df_gdf.reset_index(inplace=True)
df_gdf.to_excel("osmnx_geometries_perimeter.xlsx")
</code></pre>
<p>The problem is that some entries contain only a building type (which meets the conditions) but no address. In such cases, I have the correct element type (<code>way</code>), but when I enter the <code>osmid</code> into the browser search engine I receive only a set of nodes (even though the highlighted polygon is correct). <strong>Only when I right-click the polygon and select 'Show Address' do I finally get the address (also a new <code>osmid</code> and <code>nodes</code> for the <code>way</code>)</strong>. What's also interesting is that <code>df_gdf</code> contains a column with lists of <code>nodes</code>, but none of the nodes there match the new nodes from the browser.</p>
<p>My questions:</p>
<ol>
<li>Is it possible to re-evaluate <code>osmid</code>s to get addresses? If yes - how?</li>
<li>Could <code>place_id</code> from <code>Nominatim</code> help?</li>
</ol>
<p><strong>EDIT:</strong></p>
<blockquote>
<p><strong>Only when I right-click the polygon and select 'Show Address' do I finally get the address (also a new <code>osmid</code> and <code>nodes</code> for the <code>way</code>)</strong>.</p>
</blockquote>
<p>In these cases I'm talking about I usually get a new <code>node</code> instead of <code>way</code> after clicking 'Show Address'.</p>
<blockquote>
<p>What's also interesting is that <code>df_gdf</code> contains a column with lists of <code>nodes</code>, but none of the nodes there match the new nodes from the browser.</p>
</blockquote>
<p>Of course I mean those entries without addresses.</p>
<p><strong>EXAMPLE</strong>:</p>
<p>For a one address, I got around 150 neighbouring entries. One of the entries is <a href="https://www.openstreetmap.org/way/389852088" rel="nofollow noreferrer">this way</a>. It contains 11 <code>nodes</code>:</p>
<pre><code>3938255237
3938255220
3938255221
3938255209
3938255217
3938255224
3938255223
3938255230
3938255236
3938255251
3938255237
</code></pre>
<p>Both the script and the browser indicate no address here. When I click 'Show Address' I receive a <code>node</code> <a href="https://www.openstreetmap.org/node/2710576553" rel="nofollow noreferrer"><code>2710576553</code></a> with a definite address. As you can see, this <code>node</code> does not appear in the previous <code>way</code>'s list of <code>nodes</code>.</p>
|
<python><openstreetmap><geopandas><osmnx><geopy>
|
2023-06-02 22:41:21
| 0
| 382
|
Soren V. Raben
|
76,393,695
| 9,795,817
|
PySpark: Replace null values with empty list
|
<p>I outer joined the results of two <code>groupBy</code> and <code>collect_set</code> operations and ended up with this dataframe (<code>foo</code>):</p>
<pre class="lang-py prettyprint-override"><code>>>> foo.show(3)
+---+------+------+
| id| c1| c2|
+---+------+------+
| 0| null| [1]|
| 7| [6]| null|
| 6| [6]|[7, 8]|
+---+------+------+
</code></pre>
<p>I want to concatenate <code>c1</code> and <code>c2</code> together to obtain this result:</p>
<pre class="lang-py prettyprint-override"><code>+---+------+------+---------+
| id| c1| c2| res|
+---+------+------+---------+
| 0| null| [1]| [1]|
| 7| [6]| null| [6]|
| 6| [6]|[7, 8]|[6, 7, 8]|
+---+------+------+---------+
</code></pre>
<p>To do this, I need to coalesce the null values in <code>c1</code> and <code>c2</code>. However, I don't even know what data type <code>c1</code> and <code>c2</code> are. How can I replace the null values with <code>[]</code> so that the concatenation of <code>c1</code> and <code>c2</code> will yield <code>res</code> as shown above?</p>
<p>This is how I'm currently concatenating both columns:</p>
<pre class="lang-py prettyprint-override"><code># Concat returns null for rows where either column is null
foo.selectExpr(
'id',
'c1',
'c2',
'concat(c1, c2) as res'
)
</code></pre>
|
<python><apache-spark><pyspark><null>
|
2023-06-02 22:05:52
| 2
| 6,421
|
Arturo Sbr
|
76,393,635
| 7,648
|
`strftime` acting unexpectedly
|
<p>I have the following Python code:</p>
<pre><code>from datetime import datetime
def get_session_id(date_of_mri):
dt = datetime.strptime(date_of_mri, '%m/%d/%Y')
date_time = dt.strftime("%Y%M%D")
return date_time
print(get_session_id('2/27/2002'))
</code></pre>
<p>This prints</p>
<pre><code>20020002/27/02
</code></pre>
<p>I'm expecting it to print</p>
<pre><code>20020227
</code></pre>
<p>What am I doing wrong here?</p>
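<p>For reference on the directives involved: <code>%M</code> is <em>minutes</em> (not month) and <code>%D</code> is shorthand for <code>%m/%d/%y</code>, which together account for the output above. Lowercase <code>%m</code> and <code>%d</code> give the intended result:</p>

```python
from datetime import datetime

dt = datetime.strptime("2/27/2002", "%m/%d/%Y")

# the original "%Y%M%D" expands to year + minute + (%m/%d/%y),
# i.e. "2002" + "00" + "02/27/02" -> "20020002/27/02"
session_id = dt.strftime("%Y%m%d")
print(session_id)  # → 20020227
```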
|
<python><date>
|
2023-06-02 21:47:44
| 1
| 7,944
|
Paul Reiners
|
76,393,584
| 4,802,101
|
OPC DA dll compatible with MS KB5004442
|
<p>I have a Python application that uses OpenOPC to connect to our OPC server. After the release of the Microsoft <a href="https://support.microsoft.com/en-us/topic/kb5004442-manage-changes-for-windows-dcom-server-security-feature-bypass-cve-2021-26414-f1400b52-c141-43d2-941e-37ed901c769c" rel="nofollow noreferrer">KB5004442 DCOM security patch</a>, this application has not been able to connect. This is because the OpenOPC module makes use of an OPC Automation wrapper from <a href="http://www.gray-box.net/" rel="nofollow noreferrer">gray-box</a>, which does not have the appropriate security level to be compatible with this new patch. I also suspect that this DLL is no longer supported. I would like to know if anyone else is struggling with this problem.</p>
<p>I tried to use OPCDAAuto.dll from the OPC Foundation, but I found that it has not been maintained for a long time, so it has the same problem.</p>
<p>I suppose that there are only two options here:</p>
<ol>
<li>Find a dll that is compatible with this new security demand.</li>
<li>Use OPC tunnelers.</li>
</ol>
<p>Thanks!</p>
|
<python><opc><dcom><open-opc>
|
2023-06-02 21:34:43
| 1
| 370
|
Dariva
|
76,393,348
| 3,845,439
|
How to offset twinx y-axis by specified amount?
|
<p>I am familiar with using <code>twinx()</code> to share one axis's x-axis with another subplot:</p>
<pre><code>fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(xdata1, ydata1)
ax2.plot(xdata2, ydata2)
</code></pre>
<p>This makes the <code>y=0</code> axis line up between <code>ax1</code> and <code>ax2</code>, but the axes are still on separate subplots, so they each get their own autoscaling to match whatever is plotted on them as normal. However, I have a situation where I need to create separate axes <code>ax1</code> and <code>ax2</code> on the different subplots, but with a specified offset between their y-axes - i.e. <code>ax1</code>'s <code>y=y0</code> needs to line up with <code>ax2</code>'s <code>y=0</code>, for some nonzero offset <code>y0</code>, like this:</p>
<p><a href="https://i.sstatic.net/lFvYl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lFvYl.png" alt="enter image description here" /></a></p>
<p><a href="https://stackoverflow.com/q/61596813/3845439">This post</a> appears to be close to what I am after, but not quite the same. That example uses a secondary y-axis with a function relating its values to the primary y-axis. However, I need a separate subplot entirely with its own scaling just like <code>twinx()</code> gives me.</p>
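<p>To make the goal concrete, a minimal runnable sketch (the <code>align_yaxis</code> helper is hypothetical, not a matplotlib function): it shifts <code>ax2</code>'s limits so that a chosen <code>y0</code> on <code>ax1</code> lands at the same display height as <code>y=0</code> on <code>ax2</code>, while keeping <code>ax2</code>'s span:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

def align_yaxis(ax1, y0, ax2, y1=0.0):
    """Adjust ax2's limits so that ax2's y1 aligns with ax1's y0 on screen."""
    # Fractional display position of y0 within ax1's current y-range
    lo1, hi1 = ax1.get_ylim()
    frac = (y0 - lo1) / (hi1 - lo1)
    # Keep ax2's span, but slide it so y1 sits at the same fraction
    lo2, hi2 = ax2.get_ylim()
    span = hi2 - lo2
    new_lo = y1 - frac * span
    ax2.set_ylim(new_lo, new_lo + span)

fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot([0, 1], [0, 10])
ax2.plot([0, 1], [-3, 3])
align_yaxis(ax1, y0=5.0, ax2=ax2)  # ax1's y=5 now lines up with ax2's y=0
```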
|
<python><matplotlib><yaxis><twinx>
|
2023-06-02 20:38:50
| 1
| 440
|
PGmath
|
76,393,336
| 3,940,670
|
Calling Google Cloud Speech to Text API regional recognizers, using Python Client library, showing error 400 and 404
|
<p><strong>The goal:</strong> The goal is to use Python client libraries to convert a speech audio file to text through a Chirp recognizer.</p>
<p><strong>Steps to recreate the error:</strong> I'm creating a recognizer following the instructions and the Python code in the link below:
<a href="https://cloud.google.com/speech-to-text/v2/docs/transcribe-client-libraries" rel="nofollow noreferrer">https://cloud.google.com/speech-to-text/v2/docs/transcribe-client-libraries</a></p>
<p>The code is as follows:</p>
<pre><code>from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech

def speech_to_text(project_id, recognizer_id, audio_file):
    # Instantiates a client
    client = SpeechClient()

    request = cloud_speech.CreateRecognizerRequest(
        parent=f"projects/{project_id}/locations/global",
        recognizer_id=recognizer_id,
        recognizer=cloud_speech.Recognizer(
            language_codes=["en-US"], model="latest_long"
        ),
    )

    # Creates a Recognizer
    operation = client.create_recognizer(request=request)
    recognizer = operation.result()

    # Reads a file as bytes
    with open(audio_file, "rb") as f:
        content = f.read()

    config = cloud_speech.RecognitionConfig(auto_decoding_config={})
    request = cloud_speech.RecognizeRequest(
        recognizer=recognizer.name, config=config, content=content
    )

    # Transcribes the audio into text
    response = client.recognize(request=request)

    for result in response.results:
        print(f"Transcript: {result.alternatives[0].transcript}")

    return response
</code></pre>
<p>It works fine with the multi-regional global models. However, as of now (June 2023), the Chirp model is only available in the <code>us-central1</code> region.</p>
<p><strong>The issue:</strong> When using the same code for the regional recognizers, it outputs a 404 error indicating that the recognizer doesn't exist in the project.
When you change the recognizer's name from <code>"projects/{project_id}/locations/global/recognizers/{recognizer_id}"</code> to <code>"projects/{project_id}/locations/us-central1/recognizers/{recognizer_id}"</code>, or anything with a non-global location, it shows a 400 error saying that the location is expected to be <code>global</code>.</p>
<p><strong>Question:</strong> How can I call a regional recognizer through the GCP Python client library?</p>
|
<python><google-cloud-platform><google-cloud-vertex-ai><google-cloud-speech><google-cloud-python>
|
2023-06-02 20:35:27
| 1
| 637
|
M.Hossein Rahimi
|
76,393,311
| 14,896,203
|
Aggregate based on two columns and then apply function on one column vs the rest
|
<p>Hello, I have the DataFrame demonstrated below and am attempting to generate an aggregated result grouped by <code>unique_id</code> and <code>cutoff</code>, where a metric (such as MSE) is calculated between <code>y</code> and each of the remaining columns, excluding the group-by columns and <code>ds</code>.</p>
<pre><code>shape: (5, 10)
βββββββββββββ¬ββββββ¬βββββββββ¬ββββββββ¬ββββ¬βββββββββββββββ¬βββββββββββββββ¬βββββββββββββββ¬βββββββββββββββ
β unique_id β ds β cutoff β y β β¦ β CrostonClass β SeasonalNaiv β HistoricAver β DynamicOptim β
β --- β --- β --- β --- β β ic β e β age β izedTheta β
β str β i64 β i64 β f32 β β --- β --- β --- β --- β
β β β β β β f32 β f32 β f32 β f32 β
βββββββββββββͺββββββͺβββββββββͺββββββββͺββββͺβββββββββββββββͺβββββββββββββββͺβββββββββββββββͺβββββββββββββββ‘
β H1 β 701 β 700 β 619.0 β β¦ β 742.668762 β 691.0 β 661.674988 β 612.767517 β
β H1 β 702 β 700 β 565.0 β β¦ β 742.668762 β 618.0 β 661.674988 β 536.846252 β
β H1 β 703 β 700 β 532.0 β β¦ β 742.668762 β 563.0 β 661.674988 β 497.82428 β
β H1 β 704 β 700 β 495.0 β β¦ β 742.668762 β 529.0 β 661.674988 β 464.723236 β
β H1 β 705 β 700 β 481.0 β β¦ β 742.668762 β 504.0 β 661.674988 β 440.972351 β
βββββββββββββ΄ββββββ΄βββββββββ΄ββββββββ΄ββββ΄βββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
</code></pre>
<p>I was able to generate the desired result with iteration; however, I do not know how to concat all the DataFrames.</p>
<pre class="lang-py prettyprint-override"><code>from datasetsforecast.losses import mse, mae, rmse

def evaluate_cross_validation(df, metric):
    models = df.drop(columns=['ds', 'cutoff', 'y', 'unique_id']).columns
    evals = []
    for model in models:
        eval_ = (
            df
            .groupby(['unique_id', 'cutoff'])
            .agg(
                pl.apply(
                    exprs=['y', model],
                    function=lambda args: metric(args[0], args[1]),
                )
            )
            .rename({'y': model})
            .sort(by=['unique_id', 'cutoff'])
        )
        evals.append(eval_)
    uid_cutoff = evals[0].select(['unique_id'])
    eval_dfs = pl.concat([df.drop(['unique_id', 'cutoff']) for df in evals], how='horizontal')
    evals = pl.concat([uid_cutoff, eval_dfs], how='horizontal')
    evals = evals.groupby(['unique_id']).mean()  # Averages the error metrics over all cutoffs for every combination of model and unique_id
    best_model = [min(row, key=row.get) for row in evals.drop('unique_id').rows(named=True)]
    evals = evals.with_columns(pl.lit(best_model).alias('best_model')).sort(by=['unique_id'])
    return evals
</code></pre>
<p>Expected output:</p>
<pre><code>shape: (5, 8)
βββββββββββββ¬ββββββββββββ¬ββββββββββββ¬βββββββββββββ¬βββββββββββββ¬βββββββββββββ¬βββββββββββββ¬βββββββββββ
β unique_id β AutoARIMA β HoltWinte β CrostonCla β SeasonalNa β HistoricAv β DynamicOpt β best_mod β
β --- β --- β rs β ssic β ive β erage β imizedThet β el β
β str β f64 β --- β --- β --- β --- β a β --- β
β β β f64 β f64 β f64 β f64 β --- β str β
β β β β β β β f64 β β
βββββββββββββͺββββββββββββͺββββββββββββͺβββββββββββββͺβββββββββββββͺβββββββββββββͺβββββββββββββͺβββββββββββ‘
β H1 β 1979.3021 β 44888.019 β 28038.7363 β 1422.66668 β 20927.6640 β 1296.33398 β DynamicO β
β β 85 β 531 β 28 β 7 β 62 β 4 β ptimized β
β β β β β β β β Theta β
β H10 β 458.89271 β 2812.9166 β 1483.48413 β 96.895832 β 1980.36749 β 379.621124 β Seasonal β
β β 5 β 26 β 1 β β 3 β β Naive β
β H100 β 8629.9482 β 121625.37 β 91945.1406 β 12019.0 β 78491.1914 β 21699.6479 β AutoARIM β
β β 42 β 5 β 25 β β 06 β 49 β A β
β H101 β 6818.3486 β 28453.395 β 16183.6347 β 10944.4580 β 18208.4042 β 63698.0732 β AutoARIM β
β β 33 β 508 β 66 β 08 β 97 β 42 β A β
β H102 β 65489.965 β 232924.85 β 132655.300 β 12699.8959 β 309110.468 β 31393.5214 β Seasonal β
β β 82 β 1562 β 781 β 96 β 75 β 84 β Naive β
</code></pre>
|
<python><python-polars>
|
2023-06-02 20:30:56
| 1
| 772
|
Akmal Soliev
|
76,393,255
| 11,613,489
|
Scraping using BeautifulSoup print an empty output
|
<p>I'm trying to scrape a website.
I want to print all the elements with the following class name,</p>
<blockquote>
<p>class=product-size-info__main-label</p>
</blockquote>
<p>The code is the following:</p>
<pre><code>from bs4 import BeautifulSoup

with open("MadeInItaly.html", "r") as f:
    doc = BeautifulSoup(f, "html.parser")

tags = doc.find_all(class_="product-size-info__main-label")
print(tags)
</code></pre>
<p>Result: [XS, XS, S, M, L, XL]</p>
<p>All good here.</p>
<p>This works when done on the file MadeInItaly.html, which is the same page I am trying to scrape, but saved to my disk.</p>
<p>Now, here is the version that fetches the page from the URL:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"}
url = "https://www.zara.com/es/es/vestido-midi-volantes-cinturon-con-lino-p00387075.html?v1=258941747&v2=2184287"
result = requests.get(url,headers=headers)
doc = BeautifulSoup(result.text, "html.parser")
tags = doc.find_all(class_="product-size-info__main-label")
print(tags)
</code></pre>
<p>Result: []</p>
<p>I have tried with different User Agent Headers, what could be wrong here?</p>
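<p>A minimal reproduction of the symptom with static markup (hypothetical HTML, not the actual page): the server-sent document simply lacks the class until client-side JavaScript renders it, so <code>find_all</code> has nothing to match:</p>

```python
from bs4 import BeautifulSoup

# What the server might return before any JavaScript runs (hypothetical)
server_html = "<html><body><div id='app'></div></body></html>"
# What the browser's DOM looks like after JavaScript has rendered the labels
rendered_html = (
    "<html><body>"
    "<span class='product-size-info__main-label'>XS</span>"
    "<span class='product-size-info__main-label'>S</span>"
    "</body></html>"
)

raw = BeautifulSoup(server_html, "html.parser").find_all(
    class_="product-size-info__main-label")
rendered = BeautifulSoup(rendered_html, "html.parser").find_all(
    class_="product-size-info__main-label")
print(len(raw), len(rendered))  # 0 2
```

<p>A saved page captures the rendered DOM, which is why the local file works while <code>requests</code>, which never executes JavaScript, returns an empty list.</p>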
|
<python><html><web-scraping><beautifulsoup>
|
2023-06-02 20:20:07
| 2
| 642
|
Lorenzo Castagno
|
76,393,080
| 19,410,411
|
How to get the optimal number of clusters using elbow method and return it?
|
<p>I need a way to return the optimal number of clusters from an elbow-method implementation in Python. How can I implement the elbow method so that it shows the elbow graph and then returns the optimal number of clusters?</p>
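<p>To make the question concrete, here is one common sketch of "return the elbow" (the <code>elbow_k</code> helper and its distance-to-chord rule are one possible convention, not a standard API); the inertia values would come from e.g. <code>KMeans(n_clusters=k).fit(X).inertia_</code>:</p>

```python
def elbow_k(inertias):
    """Pick the elbow as the k whose point lies farthest from the straight
    line joining the first and last points of the inertia curve.
    Assumes inertias[i] corresponds to k = i + 1."""
    import math
    n = len(inertias)
    x1, y1, x2, y2 = 1, inertias[0], n, inertias[-1]
    denom = math.hypot(x2 - x1, y2 - y1)

    def dist(k):
        # Perpendicular distance from (k, inertias[k-1]) to the chord
        x0, y0 = k, inertias[k - 1]
        return abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / denom

    return max(range(1, n + 1), key=dist)

# Synthetic inertia curve with an elbow at k = 3
print(elbow_k([1000, 400, 120, 100, 90, 85]))  # 3
```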
|
<python><k-means>
|
2023-06-02 19:40:55
| 1
| 525
|
Mikelenjilo
|
76,393,074
| 13,058,538
|
Python multithreading for file reading results in slower performance: How to optimize?
|
<p>I am learning concurrency in Python and I have noticed that the <code>threading</code> module even lowers the speed of my code. My code is a simple parser where I read HTMLs from my local directory and output parsed a few fields as JSON files to another directory.</p>
<p>I was expecting a speed improvement but it got slower instead, tested with small numbers of HTMLs at a time (50, 200, 1000) and large numbers like 30k. In all situations, performance degrades. For example, with 1000 HTMLs the non-threaded version takes ~2.9 seconds while the threaded version takes ~4 seconds.</p>
<p>Also tried the <code>concurrent.futures</code> <code>ThreadPoolExecutor</code> but it provides the same slower results.</p>
<p>I know about GIL, but I thought that I/O-bound tasks should be handled with multithreading.</p>
<p>Here is my code:</p>
<pre><code>import json
import re
import time
from pathlib import Path
import threading

def get_json_data(body: str) -> re.Match[str] | None:
    return re.search(
        r'(?<=json_data">)(.*?)(?=</script>)', body
    )

def parse_html_file(file_path: Path) -> dict:
    with open(file_path, "r") as file:
        html_content = file.read()

    match = get_json_data(html_content)
    if not match:
        return {}

    next_data = match.group(1)
    json_data = json.loads(next_data)

    data1 = json_data.get("data1")
    data2 = json_data.get("data2")
    data3 = json_data.get("data3")
    data4 = json_data.get("data4")
    data5 = json_data.get("data5")

    parsed_fields = {
        "data1": data1,
        "data2": data2,
        "data3": data3,
        "data4": data4,
        "data5": data5
    }

    return parsed_fields

def save_parsed_fields(file_path: Path, parsed_fields: dict, output_dir: Path) -> None:
    output_filename = f"parsed_{file_path.stem}.json"
    output_path = output_dir / output_filename

    with open(output_path, "w") as output_file:
        json.dump(parsed_fields, output_file)

    print(f"Parsed {file_path.name} and saved the results to {output_path}")

def process_html_file(file_path: Path, parsed_dir: Path) -> None:
    parsed_fields = parse_html_file(file_path)
    save_parsed_fields(file_path, parsed_fields, parsed_dir)

def process_html_files(source_dir: Path, parsed_dir: Path) -> None:
    parsed_dir.mkdir(parents=True, exist_ok=True)

    threads = []
    for file_path in source_dir.glob("*.html"):
        thread = threading.Thread(target=process_html_file, args=(file_path, parsed_dir))
        thread.start()
        threads.append(thread)

    # Wait for all threads to finish
    for thread in threads:
        thread.join()

def main():
    base_path = "/home/my_pc/data"
    source_dir = Path(f"{base_path}/html_sample")
    parsed_dir = Path(f"{base_path}/parsed_sample")

    start_time = time.time()
    process_html_files(source_dir, parsed_dir)
    end_time = time.time()

    duration = end_time - start_time
    print(f"Application took {duration:.2f} seconds to complete.")

if __name__ == "__main__":
    main()
</code></pre>
<p>I know about asyncio, but I want to properly test all of the multithreading methods to pick the one that best suits me.</p>
<p>As mentioned, I also tried <code>concurrent.futures</code>; the code is almost the same, except when processing the HTML files I have these lines:</p>
<pre><code>with ThreadPoolExecutor(max_workers=max_workers) as executor:
    # Iterate over the HTML files in the source directory
    for file_path in source_dir.glob("*.html"):
        executor.submit(process_html_file, file_path, parsed_dir)
</code></pre>
<p>Are there any mistakes in my code? How could I optimize my code better with multithreading (aside from asyncio)?</p>
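<p>One relevant observation: the regex search and <code>json.loads</code> in the parse step are pure-Python CPU work, which the GIL serializes across threads, so this workload behaves more CPU-bound than I/O-bound. A process pool is one alternative worth benchmarking (a hedged sketch on a toy in-memory corpus rather than real files):</p>

```python
import json
import re
from concurrent.futures import ProcessPoolExecutor

def extract(body: str) -> dict:
    # Same pure-Python hot path as in the question: regex + json.loads
    m = re.search(r'(?<=json_data">)(.*?)(?=</script>)', body)
    return json.loads(m.group(1)) if m else {}

# Toy in-memory documents standing in for the HTML files
docs = ['<script id="json_data">{"data1": %d}</script>' % i for i in range(4)]

# Processes sidestep the GIL for the CPU-bound parse step
with ProcessPoolExecutor(max_workers=2) as ex:
    results = list(ex.map(extract, docs))
```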
|
<python><multithreading><python-multithreading><concurrent.futures>
|
2023-06-02 19:39:57
| 1
| 523
|
Dave
|
76,393,072
| 18,739,908
|
Reset badge count on Android using React Native Expo
|
<p>I'm using react native with expo. When a user goes to their notifications I want the badge count to update. The following code in my backend does that (python):</p>
<pre><code>from exponent_server_sdk import PushClient, PushMessage

def reset_badge_count(token, new_total):
    response = PushClient().publish(PushMessage(to=token, body=None, data=None, badge=new_total))
    if response:
        return True
    else:
        return False
</code></pre>
<p>It works totally fine on iOS. On Android however, it sends a blank push notification. I don't want any push notification sent I just want to reset the badge count. Does anyone know a workaround for this? Thanks.</p>
|
<python><react-native><push-notification><expo>
|
2023-06-02 19:39:42
| 0
| 494
|
Cole
|
76,392,943
| 17,275,588
|
Shopify API (using Python): File upload failed due to "Processing Error." Why?
|
<p>I am struggling to figure out why I'm not able to successfully upload images to the Files section of my Shopify store. I followed this code here, except mine is a Python version of this: <a href="https://gist.github.com/celsowhite/2e890966620bc781829b5be442bea159" rel="nofollow noreferrer">https://gist.github.com/celsowhite/2e890966620bc781829b5be442bea159</a></p>
<pre><code>import requests
import os

# Set up Shopify API credentials
shopify_store = 'url-goes-here.myshopify.com'  # the actual URL is here
access_token = 'token-goes-here'  # the actual token is here

# Read the image file
image_path = r'C:\the-actual-filepath-is-here\API-TEST-1.jpg'  # Replace with the actual path to your image file
with open(image_path, 'rb') as file:
    image_data = file.read()

# Create staged upload
staged_upload_url = f"https://{shopify_store}/admin/api/2023-04/graphql.json"
staged_upload_query = '''
mutation stagedUploadsCreate($input: [StagedUploadInput!]!) {
  stagedUploadsCreate(input: $input) {
    stagedTargets {
      resourceUrl
      url
      parameters {
        name
        value
      }
    }
    userErrors {
      field
      message
    }
  }
}
'''
staged_upload_variables = {
    "input": [
        {
            "filename": "API-TEST-1.jpg",
            "httpMethod": "POST",
            "mimeType": "image/jpeg",
            "resource": "FILE"
        }
    ]
}

response = requests.post(
    staged_upload_url,
    json={"query": staged_upload_query, "variables": staged_upload_variables},
    headers={"X-Shopify-Access-Token": access_token}
)
data = response.json()

staged_targets = data['data']['stagedUploadsCreate']['stagedTargets']
target = staged_targets[0]
params = target['parameters']
url = target['url']
resource_url = target['resourceUrl']

# Post image data to the staged target
form_data = {
    "file": image_data
}
headers = {
    param['name']: param['value'] for param in params  # Fix the headers assignment
}
headers["Content-Length"] = str(len(image_data))

response = requests.post(url, files=form_data, headers=headers)  # Use 'files' parameter instead of 'data'

# Create the file in Shopify using the resource URL
create_file_url = f"https://{shopify_store}/admin/api/2023-04/graphql.json"
create_file_query = '''
mutation fileCreate($files: [FileCreateInput!]!) {
  fileCreate(files: $files) {
    files {
      alt
    }
    userErrors {
      field
      message
    }
  }
}
'''
create_file_variables = {
    "files": [
        {
            "alt": "alt-tag",
            "contentType": "IMAGE",
            "originalSource": resource_url
        }
    ]
}

response = requests.post(
    create_file_url,
    json={"query": create_file_query, "variables": create_file_variables},
    headers={"X-Shopify-Access-Token": access_token}
)
data = response.json()

files = data['data']['fileCreate']['files']
alt = files[0]['alt']
</code></pre>
<p>The code runs and doesn't output any errors. However, when I navigate to the Files section of the Shopify store, it says "1 upload failed -- processing error."</p>
<p>Any clues in the code as to what might be causing that?</p>
<p>Also when I print(data) at the very end, this is what it says:</p>
<p>{'data': {'fileCreate': {'files': [{'alt': 'alt-tag'}], 'userErrors': []}}, 'extensions': {'cost': {'requestedQueryCost': 20, 'actualQueryCost': 20, 'throttleStatus': {'maximumAvailable': 1000.0, 'currentlyAvailable': 980, 'restoreRate': 50.0}}}}</p>
<p>Seeming to indicate it created it successfully. But there's some misc processing error.</p>
<p>Thanks</p>
|
<python><python-requests><graphql><shopify><shopify-api>
|
2023-06-02 19:14:43
| 2
| 389
|
king_anton
|
76,392,920
| 2,675,349
|
How to JOIN two dataframes and populate a column?
|
<p>I have two data frames as below,</p>
<pre><code>DF1
Name;ID;Course;SID;Subject
Alex;A1;Under;;chemistry
Oak;A2;Under;;chemistry
niva;A3;grad;;physics
mark;A4;Under;;Med
DF2
PID;ServiceId;Address;Active
A1;svc1;WI;Yes
A2;svc2;MI;Yes
A3;svc2;OH;Yes
</code></pre>
<p>I want to have a data frame with SID populated from DF2.ServiceId, matching on the ID and PID columns. The expected output is as below:</p>
<pre><code>DF3
Name;ID;Course;SID;Subject
Alex;A1;Under;svc1;chemistry
Oak;A2;Under;svc2;chemistry
niva;A3;grad;svc3;physics
mark;A4;Under;;Med
</code></pre>
<p>I tried the below, but it shows all the columns from both data frames.</p>
<pre><code>DF3 = DF1.merge(DF2, how='inner', left_on="ID", right_on="PID")
</code></pre>
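<p>A minimal self-contained sketch (sample data hard-coded from the tables above) of one way to populate only <code>SID</code>, via <code>Series.map</code>, without pulling in DF2's other columns:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "Name": ["Alex", "Oak", "niva", "mark"],
    "ID": ["A1", "A2", "A3", "A4"],
    "Course": ["Under", "Under", "grad", "Under"],
    "SID": ["", "", "", ""],
    "Subject": ["chemistry", "chemistry", "physics", "Med"],
})
df2 = pd.DataFrame({
    "PID": ["A1", "A2", "A3"],
    "ServiceId": ["svc1", "svc2", "svc2"],
    "Address": ["WI", "MI", "OH"],
    "Active": ["Yes", "Yes", "Yes"],
})

# Build an ID -> ServiceId lookup and fill SID; unmatched IDs stay blank
df3 = df1.copy()
df3["SID"] = df3["ID"].map(df2.set_index("PID")["ServiceId"]).fillna("")
print(df3)
```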
|
<python><pandas><dataframe>
|
2023-06-02 19:11:36
| 3
| 1,027
|
Ullan
|
76,392,744
| 14,804,653
|
Can you use the programs you "pip install" in the Command-line?
|
<p>As a Python beginner, I was downloading the OpenAI's <a href="https://github.com/openai/whisper#setup" rel="nofollow noreferrer">Whisper</a> with the following command: <code>pip install -U openai-whisper</code>, and noticed that you can use Whisper in both <a href="https://github.com/openai/whisper#python-usage" rel="nofollow noreferrer">Python</a> and the <a href="https://github.com/openai/whisper#command-line-usage" rel="nofollow noreferrer">Command-line</a>.</p>
<p>To my knowledge, <code>pip install</code> installs Python packages, so should only be available within Python, but it seems like you can use Whisper in the command line?</p>
<p>In summary, why does <code>pip install</code>-ing Python packages let you use the package in the command line?</p>
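<p>The mechanism can be inspected from Python itself — a standard-library sketch (assuming pip itself is installed in the environment): packages declare <code>console_scripts</code> entry points in their metadata, and the installer generates a small launcher executable on PATH for each one, which is why the command works in the terminal:</p>

```python
from importlib.metadata import entry_points

# pip is installed the same way: it registers console_scripts entry points,
# and the installer writes tiny "pip"/"pip3" launchers onto PATH.
eps = entry_points()
if hasattr(eps, "select"):            # Python 3.10+
    console = eps.select(group="console_scripts")
else:                                 # Python 3.8/3.9: plain dict
    console = eps.get("console_scripts", [])
names = {ep.name for ep in console}
print("pip" in names)
```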
|
<python><pip><command-line-interface>
|
2023-06-02 18:41:47
| 2
| 318
|
Howard Baik
|
76,392,743
| 8,869,570
|
How to find all rows with time to a datetime with timezone info?
|
<p>I have a dataframe with a datetime column <code>dt</code> with the dtype <code>datetime64[ns, US/Eastern]</code>.</p>
<p>I am trying to find all rows with the time <code>2023-01-01 12:00:00-05:00</code>.</p>
<p>I tried to do this:</p>
<pre><code>eastern = pytz.timezone('US/Eastern')
query_dt = datetime.datetime(year=2023, month=1, day=1, hour=12, minute=0, tzinfo=eastern)
df_sub = df[df.dt == query_dt]
</code></pre>
<p>But this is telling me there are no rows corresponding to <code>query_dt</code>, which is not correct as I can clearly see there are rows with that time.</p>
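<p>A small runnable sketch of a likely culprit (my assumption, not confirmed by the question's data): passing a pytz zone via <code>tzinfo=</code> attaches the zone's default LMT offset rather than -05:00, so the equality comparison silently misses:</p>

```python
import datetime
import pytz

eastern = pytz.timezone("US/Eastern")
naive = datetime.datetime(2023, 1, 1, 12, 0)

# tzinfo= uses the zone's default (LMT, roughly -04:56) -- not -05:00
wrong = naive.replace(tzinfo=eastern)
# localize() resolves the correct offset for that date (EST, -05:00)
right = eastern.localize(naive)

print(wrong.utcoffset(), right.utcoffset())
```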
|
<python><pandas><datetime>
|
2023-06-02 18:41:42
| 0
| 2,328
|
24n8
|
76,392,643
| 14,293,020
|
Xarray how to combine 2 dataset occasionally overlapping temporally, but not spatially?
|
<p><a href="https://i.sstatic.net/9cyCy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9cyCy.png" alt="Sketch of the problem" /></a></p>
<p><strong>Context:</strong> I have 2 datacubes (datacube_1, datacube_2) with 3D variables (dimensions <code>t,y,x</code>). They do not overlap spatially but their union forms a bigger ensemble (Datacube Combined). They sometimes overlap temporally but not always (black = t1, red = t2, green = t3 and blue = t4). I want to combine those datasets so that at each timestamp, if both have values they are simply stitched together, and if not, the spatial footprint of the datacube with no values is filled with NaNs. (In the sketch, a filled rectangle represents values; just the outlines represent NaNs.)</p>
<p><strong>Setup:</strong></p>
<ul>
<li>Datacube 1: has values at t1, t3, t4</li>
<li>Datacube 2: has values at t1, t2, t3</li>
<li>Datacube Combined: t1 is full, t2 has NaNs for Datacube 1's spatial footprint and full for Datacube 2's, t3 is full, t4 has NaNs for Datacube 2's spatial footprint and full for Datacube 1's.</li>
</ul>
<p><strong>Problem:</strong> My goal is to use <code>chunks</code> if possible and to have Datacube Combined take up the <strong>smallest possible size</strong> in memory. I do not want timestamps simply appended to each other; if they can be combined, they should be.
Between <code>merge</code>, <code>combine_by_coords</code>, <code>concat</code>, and <code>combine_first</code>, I don't know which one corresponds exactly to what I want to do, which one is the fastest, and which is best adapted to chunks. <strong>Which method should I use?</strong>
I read the documentation but honestly I got confused.</p>
<p><strong>Code:</strong></p>
<pre><code>import xarray as xr
import numpy as np

##### ----- EXAMPLE WITH 2 CUBES ----- #####

# Define the dimensions and coordinates
t_coords1 = np.array(['2023-01-01', '2023-01-03', '2023-01-04'], dtype='datetime64')  # t1, t3, t4
t_coords2 = np.array(['2023-01-01', '2023-01-02', '2023-01-03'], dtype='datetime64')  # t1, t2, t3
y_coords = np.arange(0, 1000)
x_coords_1 = np.arange(0, 800)  # Different size for datacube_1
x_coords_2 = np.arange(0, 120)  # Different size for datacube_2

# Chunk the datacubes to recreate the error
# Create Datacube 1
datacube_1 = xr.DataArray(
    np.random.rand(len(t_coords1), len(y_coords), len(x_coords_1)),
    dims=['t', 'y', 'x'],
    coords={'t': t_coords1, 'y': y_coords, 'x': x_coords_1},
).chunk({'t': 2, 'y': 100, 'x': 100})

# Create Datacube 2
datacube_2 = xr.DataArray(
    np.random.rand(len(t_coords2), len(y_coords), len(x_coords_2)),
    dims=['t', 'y', 'x'],
    coords={'t': t_coords2, 'y': y_coords, 'x': x_coords_2},
).chunk({'t': 2, 'y': 100, 'x': 100})

# Merge the datacubes (I rewrite datacube_1 because in reality I merge 4 datasets together in a loop)
datacube_1 = datacube_1.merge(datacube_2)

##### ----- EXAMPLE WITH MORE THAN TWO DATACUBES ----- #####

# Gather the names of the datacubes to combine
cubes = ['Datacube_1', 'Datacube_2', 'Datacube_3', 'Datacube_4']

# Load the first datacube so we can combine the others to that one
xrds = xr.open_dataset(cubes[0], chunks={'t': 500, 'y': 100, 'x': 100})

# Loop over the rest of the datacubes
for n in range(1, len(cubes)):
    # Open the next datacube
    ds_temp = xr.open_dataset(cubes[n], chunks={'t': 500, 'y': 100, 'x': 100})
    # Combine it with the other ones
    xrds = xrds.merge(ds_temp)

from dask.diagnostics import ProgressBar

# Save the dataset that way so it does not overload memory
write_job = ds_temp.to_netcdf("combined_datacube.nc", mode='w', compute=False)
with ProgressBar():
    print(f"Writing the file")
    write_job = write_job.compute()
</code></pre>
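<p>As a toy illustration of the desired stitching (tiny in-memory datasets of my own, not the real cubes): <code>combine_first</code> aligns both datasets on the union of their coordinates and fills positions covered by neither with NaN:</p>

```python
import numpy as np
import xarray as xr

# Two tiny "datacubes" that never overlap spatially (different x ranges)
# and only partially overlap temporally (t=1 shared; t=2 and t=3 are not)
a = xr.Dataset({"v": (("t", "x"), np.ones((2, 2)))},
               coords={"t": [1, 3], "x": [0, 1]})
b = xr.Dataset({"v": (("t", "x"), 2 * np.ones((2, 2)))},
               coords={"t": [1, 2], "x": [2, 3]})

# Union of coordinates; positions covered by neither cube become NaN
c = a.combine_first(b)
print(c.v.shape)  # (3, 4)
```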
|
<python><merge><dataset><dask><python-xarray>
|
2023-06-02 18:23:48
| 0
| 721
|
Nihilum
|
76,392,521
| 12,955,349
|
How to plot data from snowflake into grouped bars overlaid with a line plot
|
<p>The requirement is to have two bar graphs displayed, either through SQL or a Python library, based on TYPE.</p>
<p>Below is the data from the table</p>
<pre><code>with data as (
select 'DIRECT' as type , '2023-04-30' as report_month , 148 as returns_per_head , 30.00 as filing_count ,52.25 as total_count
union
select 'INDIRECT' as type , '2023-04-30' as report_month , 2876 as returns_per_head , 22.3 as filing_count ,29.25 as total_count
)
select * from data
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">TYPE</th>
<th style="text-align: left;">REPORT_MONTH</th>
<th style="text-align: right;">FILING_COUNT</th>
<th style="text-align: right;">RETURNS_PER_HEAD</th>
<th style="text-align: right;">TOTAL_COUNT</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">DIRECT</td>
<td style="text-align: left;">2023-04-30</td>
<td style="text-align: right;">30</td>
<td style="text-align: right;">148</td>
<td style="text-align: right;">52.25</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">INDIRECT</td>
<td style="text-align: left;">2023-04-30</td>
<td style="text-align: right;">22.3</td>
<td style="text-align: right;">2876</td>
<td style="text-align: right;">29.25</td>
</tr>
</tbody>
</table>
</div>
<p>I need output as below</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">TYPE</th>
<th style="text-align: left;">REPORT_MONTH</th>
<th style="text-align: left;">Metric_HC</th>
<th style="text-align: right;">HC</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">DIRECT</td>
<td style="text-align: left;">2023-04-30</td>
<td style="text-align: left;">FILING_COUNT</td>
<td style="text-align: right;">30</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">DIRECT</td>
<td style="text-align: left;">2023-04-30</td>
<td style="text-align: left;">TOTAL_COUNT</td>
<td style="text-align: right;">52.25</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">DIRECT</td>
<td style="text-align: left;">2023-04-30</td>
<td style="text-align: left;">RETURNS_PER_HEAD</td>
<td style="text-align: right;">148</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">INDIRECT</td>
<td style="text-align: left;">2023-04-30</td>
<td style="text-align: left;">FILING_COUNT</td>
<td style="text-align: right;">22.3</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">INDIRECT</td>
<td style="text-align: left;">2023-04-30</td>
<td style="text-align: left;">TOTAL_COUNT</td>
<td style="text-align: right;">29.25</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: left;">INDIRECT</td>
<td style="text-align: left;">2023-04-30</td>
<td style="text-align: left;">RETURNS_PER_HEAD</td>
<td style="text-align: right;">2876</td>
</tr>
</tbody>
</table>
</div>
<p><strong>The reason is that I need to display this in a report as below, if it can be achieved, either via a Python library or SQL.</strong></p>
<p>Below is, for example, the INDIRECT type alone.</p>
<p><a href="https://i.sstatic.net/JU2Gr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JU2Gr.png" alt="enter image description here" /></a></p>
<p>I am using HEX for the new visualization.</p>
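<p>On the reshaping half of the question, a pandas sketch (sample values hard-coded from the table above) of the wide-to-long unpivot via <code>DataFrame.melt</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "TYPE": ["DIRECT", "INDIRECT"],
    "REPORT_MONTH": ["2023-04-30", "2023-04-30"],
    "FILING_COUNT": [30.0, 22.3],
    "RETURNS_PER_HEAD": [148.0, 2876.0],
    "TOTAL_COUNT": [52.25, 29.25],
})

# Wide -> long: one row per (TYPE, metric), matching the desired layout
long_df = df.melt(
    id_vars=["TYPE", "REPORT_MONTH"],
    value_vars=["FILING_COUNT", "TOTAL_COUNT", "RETURNS_PER_HEAD"],
    var_name="Metric_HC",
    value_name="HC",
).sort_values(["TYPE", "Metric_HC"]).reset_index(drop=True)
print(long_df.shape)  # (6, 4)
```

<p>In SQL the equivalent would be an UNPIVOT / UNION ALL over the three metric columns.</p>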
|
<python><pandas><matplotlib><snowflake-cloud-data-platform><grouped-bar-chart>
|
2023-06-02 18:02:15
| 1
| 1,058
|
Kar
|
76,392,467
| 22,009,322
|
How to consolidate labels in legend
|
<p>The script below requests data from a Postgres database and draws a diagram.
The requested data is a table with 4 columns <code>(ID, Object, Percentage, Color)</code>.</p>
<p>The data:</p>
<pre><code>result = [
    (1, 'Apple', 10, 'Red'),
    (2, 'Blueberry', 40, 'Blue'),
    (3, 'Cherry', 94, 'Red'),
    (4, 'Orange', 68, 'Orange')
]
</code></pre>
<pre><code>import pandas as pd
from matplotlib import pyplot as plt
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port="5432",
    database="db",
    user="user",
    password="123")

cur = conn.cursor()
cur.callproc("test_stored_procedure")
result = cur.fetchall()
cur.close()
conn.close()

print(result)

result = pd.DataFrame(result, columns=['ID', 'Object', 'Percentage', 'Color'])

fruits = result.Object
counts = result.Percentage
labels = result.Color
s = 'tab:'
bar_colors = [s + x for x in result.Color]

fig, ax = plt.subplots()
for x, y, c, lb in zip(fruits, counts, bar_colors, labels):
    ax.bar(x, y, color=c, label=lb)

ax.set_ylabel('fruit supply')
ax.set_title('Fruit supply by kind and color')
ax.legend(title='Fruit color', loc='upper left')

plt.show()
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/qHeei.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qHeei.png" alt="enter image description here" /></a></p>
<p>As you can see in the legend <code>"Red"</code> label is shown twice.</p>
<p>I tried several different examples of how to fix this, but unfortunately none of them worked.
For example:</p>
<pre><code>handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
</code></pre>
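<p>A self-contained sketch of the deduplication idea behind <code>get_legend_handles_labels</code> (sample data inlined instead of the database query) — keeping one handle per label via a dict:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for x, y, c, lb in zip(["Apple", "Blueberry", "Cherry", "Orange"],
                       [10, 40, 94, 68],
                       ["tab:red", "tab:blue", "tab:red", "tab:orange"],
                       ["Red", "Blue", "Red", "Orange"]):
    ax.bar(x, y, color=c, label=lb)

# Deduplicate: dicts preserve insertion order and keep one handle per label
handles, labels = ax.get_legend_handles_labels()
unique = dict(zip(labels, handles))
ax.legend(unique.values(), unique.keys(), title="Fruit color", loc="upper left")
print(sorted(unique))  # ['Blue', 'Orange', 'Red']
```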
|
<python><pandas><matplotlib><bar-chart><legend>
|
2023-06-02 17:52:22
| 2
| 333
|
muted_buddy
|
76,392,424
| 17,275,588
|
Shopify API error: {"errors":"[API] Invalid API key or access token (unrecognized login or wrong password)"}
|
<pre><code>import requests

# Replaced with my actual Shopify credentials and file information!!!
API_KEY = 'text'  # using my Shopify App "API key"
ACCESS_TOKEN = 'text'  # using my Shopify App "Admin API access token"
SHOP_NAME = 'text.myshopify.com'  # using the root myshopify URL
file_path = r"C:\text\API-TEST-1.jpg"

url = f'https://{SHOP_NAME}/admin/api/2023-04/graphql.json'

query = """
mutation stagedUploadsCreate($input: [StagedUploadInput!]!) {
  stagedUploadsCreate(input: $input) {
    stagedTargets {
      resourceUrl
      url
      parameters {
        name
        value
      }
    }
  }
}
"""

variables = {
    'input': [
        {
            'resource': 'IMAGE',
            'filename': 'your-image.jpg',
            'mimeType': 'image/jpeg',
            'httpMethod': 'POST',
        }
    ]
}

headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {ACCESS_TOKEN}',
}

response = requests.post(url, json={'query': query, 'variables': variables}, headers=headers)
data = response.json()

staged_targets = data.get('data', {}).get('stagedUploadsCreate', {}).get('stagedTargets', [])
if staged_targets:
    target = staged_targets[0]
    params = target['parameters']
    upload_url = target['url']
    resource_url = target['resourceUrl']

    with open(file_path, 'rb') as file:
        file_data = file.read()

    headers = {
        'Content-Type': 'application/octet-stream',
        'Content-Length': str(len(file_data)),
        'X-Shopify-Access-Token': ACCESS_TOKEN,
    }
    headers.update(params)

    response = requests.put(upload_url, headers=headers, data=file_data)

    if response.status_code == 200:
        print('Image uploaded successfully.')
    else:
        print('Failed to upload the image.')
        print(response.text)
else:
    print('Failed to generate upload URL and parameters.')
    print(response.text)
</code></pre>
<p>It keeps telling me this: Failed to generate upload URL and parameters. {"errors":"[API] Invalid API key or access token (unrecognized login or wrong password)"}</p>
<p>However I'm using the API key in my Shopify Apps dashboard for the app I created, and I have the permissions "write files/read files" activated. I'm also using the access token I was given for this same app. Why is it not working? Any ideas?</p>
<p>Thanks</p>
|
<python><shopify><shopify-api>
|
2023-06-02 17:47:24
| 0
| 389
|
king_anton
|
76,392,359
| 895,587
|
Need to restart Databricks 13.0 cluster to iterate on development
|
<p>I want to iterate on development against a Databricks 13 cluster without needing to restart it to pick up code changes in my Python package.</p>
<p>It seems that <strong>dbx execute</strong> does the job on Databricks 12.1, but when I try to run it with Databricks 13, it gets the old version.</p>
<p>Also tried with</p>
<pre><code>dbx deploy my_workflow --environment=dev --assets-only
dbx launch my_workflow --environment=dev --from-assets
</code></pre>
<p>without success.</p>
<p>Any ideas?</p>
<p>I've found the following issue with no definitive answer...</p>
<p><a href="https://stackoverflow.com/questions/73489698/how-to-reinstall-same-version-of-a-wheel-on-databricks-without-cluster-restart">How to reinstall same version of a wheel on Databricks without cluster restart</a></p>
<p>Thanks</p>
|
<python><databricks><databricks-dbx>
|
2023-06-02 17:34:55
| 0
| 302
|
AndrΓ© Salvati
|
76,392,337
| 3,416,774
|
Why does Jupyter in VS Code say "No module named 'gensim'" when it's already installed?
|
<p>In the setup below, I've made sure that the Python version running in Jupyter and in the terminal is the same. Yet Jupyter still gives the error <code>No module named 'gensim'</code> when it is already installed. Why is that?</p>
<p><img src="https://i.imgur.com/Y6fKFhN.png" alt="screenshot of VSCode showing the problem" /></p>
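<p>A quick way to confirm whether the kernel and the terminal really share one interpreter (a minimal check, assuming nothing about your particular paths) is to print the interpreter location in both places:</p>

```python
# Print where "python" actually lives for the current process; run this once
# in a Jupyter cell and once in the terminal and compare the two paths.
import sys

print(sys.executable)  # full path of the running interpreter
print(sys.prefix)      # root of the active environment
```

<p>If the two paths differ, installing with <code>python -m pip install gensim</code> using the kernel's interpreter targets the right environment.</p>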
|
<python><visual-studio-code>
|
2023-06-02 17:31:54
| 1
| 3,394
|
Ooker
|
76,392,312
| 2,105,339
|
Should I use regular SQL instead of an ORM to reduce bandwidth usage and fetching time?
|
<p>I'm building an Ethereum explorer for fun with the Django ORM (never used it before).
Here is part of my schema:</p>
<pre><code>class AddressModel(models.Model):
    id = models.BigIntegerField(primary_key=True)
    first_seen = models.DateTimeField(db_index=True)
    # note: on_delete removed here, only ForeignKey accepts it
    addr = models.CharField(max_length=42, db_index=True)
    is_contract = models.BooleanField()
    is_token = models.BooleanField()
    is_wallet = models.BooleanField()


class BlockModel(models.Model):
    id = models.BigIntegerField(primary_key=True)
    number = models.BigIntegerField()
    status = models.CharField(max_length=20)
    timestamp = models.DateTimeField(db_index=True)
    epoch_proposal = models.IntegerField()
    slot_proposal = models.IntegerField()
    fee_recipient = models.ForeignKey(AddressModel, on_delete=models.PROTECT)
    block_reward = models.BigIntegerField()
    total_difficulty = models.CharField(max_length=100)
    size = models.IntegerField()
    gas_used = models.BigIntegerField()
    gas_limit = models.BigIntegerField()
    base_fee_per_gas = models.BigIntegerField()
    burnt_fee = models.BigIntegerField()
    extra_data = models.TextField()
    hash = models.CharField(max_length=66)
    parent_hash = models.CharField(max_length=66)
    state_root = models.CharField(max_length=66)
    withdrawal_root = models.CharField(max_length=66)
    Nonce = models.CharField(max_length=20)


# you can do address_model_instance.transactionmodel_set.objects.all() since there is a FK in Transaction model
class TransactionModel(models.Model):
    id = models.BigIntegerField(primary_key=True)
    hash = models.CharField(max_length=66)
    block = models.ForeignKey(BlockModel, on_delete=models.PROTECT)
    from_addr = models.ForeignKey('AddressModel', related_name='from_addr', on_delete=models.PROTECT)
    to_addr = models.ForeignKey('AddressModel', related_name='to_addr', on_delete=models.PROTECT)
    input = models.TextField()
    is_valid = models.BooleanField()
</code></pre>
<p>What concerns me here is that if I want to retrieve every <code>transaction</code> related to a specific <code>from_addr</code>, it will also retrieve the block data in the returned objects; if the <code>from_addr</code> has 10K transactions, that is 10K copies of <code>block</code> data that I don't need.
With regular SQL I would only get a <code>block_id</code> if I did a <code>select *</code>.</p>
<p>This will lead to useless bandwidth usage and the request will take more time, since the database will have to do additional <code>select</code> operations on the <code>block</code> table.</p>
<p>Is this a use case where I shouldn't use an ORM?</p>
<p>Thanks.</p>
|
<python><django><orm><ethereum>
|
2023-06-02 17:25:47
| 1
| 2,474
|
sliders_alpha
|
76,392,283
| 11,999,957
|
List comprehension in Python for if, elif, pass?
|
<p>I see syntax for if/pass, but I can't find syntax for if/elif/pass: <a href="https://stackoverflow.com/questions/33691552/list-comprehension-with-else-pass">List comprehension with else pass</a></p>
<p>Basically</p>
<pre><code>if condition_1:
    something
elif condition_2:
    something_else
else:
    pass
</code></pre>
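<p>A sketch of how the if/elif/pass pattern maps onto a comprehension (the data and conditions below are made up for illustration): the <code>else: pass</code> branch becomes the trailing filter clause, and if/elif becomes a conditional expression in the output position:</p>

```python
nums = [1, 2, 3, 4, 5, 6]

# if n is even: square it; elif divisible by 3: negate it; else: pass (skip).
# "else: pass" turns into the trailing `if` filter; if/elif turns into a
# conditional expression computing the output value.
result = [n * n if n % 2 == 0 else -n
          for n in nums
          if n % 2 == 0 or n % 3 == 0]
print(result)  # [4, -3, 16, 36]
```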
|
<python>
|
2023-06-02 17:22:37
| 1
| 541
|
we_are_all_in_this_together
|
76,392,174
| 5,061,840
|
Java and Python return different values when converting the hexadecimal to long
|
<p>I noticed this difference when comparing xxhash implementations in Python and Java. The hashes calculated by the xxhash libraries are identical as hexadecimal strings, but they differ when I read the calculated hash as an integer (or long) value.</p>
<p>I am sure that this is some kind of "endian" problem but I couldn't find how to get the same integer values for both languages.</p>
<p>Any ideas how and why this is happening?</p>
<p><strong>Java Code:</strong></p>
<pre><code>String hexString = "d24ec4f1a98c6e5b";
System.out.println(new BigInteger(hexString,16).longValue());
// printed value -> -3292477735350538661
</code></pre>
<p><strong>Python Code:</strong></p>
<pre><code>hexString = "d24ec4f1a98c6e5b"
print(int(hexString, 16))
# printed value -> 15154266338359012955
</code></pre>
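<p>For reference, a small sketch of the underlying reinterpretation: Java's <code>longValue()</code> is a signed two's-complement 64-bit view of the same bits, while Python's <code>int(..., 16)</code> is unbounded and non-negative, so the two values can be converted into each other:</p>

```python
hex_string = "d24ec4f1a98c6e5b"

unsigned = int(hex_string, 16)  # Python's view: always non-negative

# Reinterpret the same 64 bits as a signed two's-complement value,
# which matches what Java's longValue() returns.
signed = unsigned - (1 << 64) if unsigned >= (1 << 63) else unsigned

print(unsigned)  # 15154266338359012955
print(signed)    # -3292477735350538661
```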
|
<python><java>
|
2023-06-02 17:04:52
| 2
| 327
|
Tevfik Kiziloren
|
76,391,843
| 13,921,399
|
Cast pandas series containing list elements to a 2d numpy array
|
<p>Take the following series:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
s = pd.Series([1, 3, 2, [1, 3, 7, 8], [6, 6, 10, 4], 5])
</code></pre>
<p>I want to convert this series into the following array:</p>
<pre class="lang-py prettyprint-override"><code>np.array([
[ 1., 1., 1., 1.],
[ 3., 3., 3., 3.],
[ 2., 2., 2., 2.],
[ 1., 3., 7., 8.],
[ 6., 6., 10., 4.],
[ 5., 5., 5., 5.]
])
</code></pre>
<p>Currently, I am using this logic:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from itertools import zip_longest
# Convert series and each element in series into list
ls = list(map(lambda v: v if isinstance(v, list) else [v], s.to_list()))
# Cast list elements to 2d numpy array with longest list element as column number
a = np.array(list(zip_longest(*ls, fillvalue=np.nan))).T
# Convert to DataFrame, apply 'ffill' row-wise and re-convert to numpy array
a = pd.DataFrame(a).fillna(method="ffill", axis=1).values
</code></pre>
<p>My solution doesn't really satisfy me, especially the last line, where I convert my array to a DataFrame and then back to an array again. Does anyone know a better alternative? You can assume that all list elements have the same length.</p>
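<p>One possible alternative (a sketch, relying on the stated assumption that all list elements share one length): compute the target width first, then repeat scalars while building the array in a single pass, with no DataFrame round trip:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 3, 2, [1, 3, 7, 8], [6, 6, 10, 4], 5])

# Width = length of the list elements (all lists share one length by assumption).
n_cols = max(len(v) if isinstance(v, list) else 1 for v in s)

# Repeat scalars across the row; lists pass through unchanged.
a = np.array([v if isinstance(v, list) else [v] * n_cols for v in s], dtype=float)
print(a)
```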
|
<python><pandas><numpy>
|
2023-06-02 16:13:55
| 2
| 1,811
|
ko3
|
76,391,681
| 6,936,489
|
Read csv in chunks with polars efficiently (with limited available RAM)
|
<p>I'm trying to read a big CSV (approximately 6.4 GB) on a small machine (a small Windows laptop with 8 GB of RAM) before storing it into a SQLite database (I'm aware there are alternatives; that's not the point here).</p>
<p><em>In case it's needed the file I'm using can be found on <a href="https://www.data.gouv.fr/fr/datasets/base-sirene-des-entreprises-et-de-leurs-etablissements-siren-siret/" rel="nofollow noreferrer">that page</a>; in the tab "Fichiers", it should be labelled "Sirene : Fichier StockEtablissementHistorique [...]". This file is today around 37 millions lines long.</em></p>
<p>Being a big fan of pandas, I've nonetheless decided to try polars, which is much advertised these days.</p>
<p>The inferred dataframe should also be joined to another one produced with <code>pl.read_database</code> (which produces a <code>pl.DataFrame</code>, not a <code>pl.LazyFrame</code>).</p>
<ul>
<li><p>My first try involved a LazyFrame and the (naive) hope that <code>scan_csv</code> with the <code>low_memory</code> argument would suffice to handle the RAM consumption. It completely froze my computer after overconsuming RAM.</p>
</li>
<li><p>I gave it another try using the <code>n_rows</code> along with <code>skip_rows_after_header</code>. But if the
<code>pl.read_csv(my_path, n_rows=1_000_000)</code> works fine, <code>pl.read_csv(my_path, n_rows=1_000_000, skip_rows_after_header=30_000_000)</code> seems to take forever (a lot more than a simple loop to find the count of lines).</p>
</li>
<li><p>I've also tried the <code>pl.read_csv_batched</code> but it seems also to take forever (maybe to compute those first statistics <strong>not</strong> described in the documentation).</p>
</li>
<li><p>The only way I found to handle the file with polars completely is to handle slices of a LazyFrame and collect them. Something like this:</p>
<pre><code>df = (
    pl.scan_csv(
        url,
        separator=",",
        encoding="utf8",
        infer_schema_length=0,
        low_memory=True,
    )
    .lazy()
    .select(pl.col(my_cols))
    # do some more processing, for instance
    .filter(pl.col("codePaysEtrangerEtablissement").is_null())
)

chunksize = 1_000_000
for k in range(max_iterations):
    chunk = df.slice(chunksize * k, chunksize).collect()
    chunk = chunk.join(my_other_dataframe, ...)
    # Do some more things, like storing the chunk in a database.
</code></pre>
<p>This "solution" seems to handle the memory but performs very slowly.</p>
</li>
</ul>
<p>I've found another solution which seems to work nicely (and which I'll post as a provisional answer), but it makes use of pandas <code>read_csv</code> with <code>chunksize</code>.
This is as good as it goes, and it works only because (thankfully) there is no groupby involved in my process.</p>
<p>I'm pretty sure there should be an easier "pure polars" way to proceed.</p>
<hr />
<p><strong>EDIT</strong></p>
<p>The other dataframe mentioned here (<code>my_other_dataframe</code> in the code sample) is small. It's a dataframe of around 36k lines which is strictly used to convert the field "codeCommuneEtablissement" from its 5-character string to an integer primary key stored in another table. I kept it in the sample here to explain why you need to collect the dataframe earlier, as you can't join a LazyFrame and a DataFrame.</p>
|
<python><dataframe><csv><python-polars>
|
2023-06-02 15:51:21
| 3
| 2,562
|
tgrandje
|
76,391,586
| 12,040,751
|
Async read_csv in Pandas
|
<p>In this <a href="https://stackoverflow.com/a/60368916/12040751">answer</a> to <a href="https://stackoverflow.com/questions/57871450/async-read-csv-of-several-data-frames-in-pandas-why-isnt-it-faster">async 'read_csv' of several data frames in pandas - why isn't it faster</a> it is explained how to asynchronously read pandas DataFrames from csv data obtained from a web request.</p>
<p>I modified it to read some csv files on disk by using <code>aiofiles</code>, but got no speedup nonetheless.
I wonder if I did something wrong or if there is some unavoidable limitation, like <code>pd.read_csv</code> being blocking.</p>
<p>Here's the normal version of the code:</p>
<pre><code>from time import perf_counter

import pandas as pd


def pandas_read_many(paths):
    start = perf_counter()
    results = [pd.read_csv(p) for p in paths]
    end = perf_counter()
    print(f"Pandas version {end - start:0.2f}s")
    return results
</code></pre>
<p>The async version involves reading the file with <code>aiofiles</code> and converting it to a text buffer with <code>io.StringIO</code> before passing it to <code>pd.read_csv</code>.</p>
<pre><code>import io

import aiofiles


async def async_read_csv(path):
    async with aiofiles.open(path) as f:
        text = await f.read()
    with io.StringIO(text) as text_io:
        return pd.read_csv(text_io)


async def async_read_many(paths):
    start = perf_counter()
    results = await asyncio.gather(*(async_read_csv(p) for p in paths))
    end = perf_counter()
    print(f"Async version {end - start:0.2f}s")
    return results
</code></pre>
<p>For fairness, here it is the synchronous translation.</p>
<pre><code>def sync_read_csv(path):
    with open(path) as f:
        text = f.read()
    with io.StringIO(text) as text_io:
        return pd.read_csv(text_io)


def sync_read_many(paths):
    start = perf_counter()
    results = [sync_read_csv(p) for p in paths]
    end = perf_counter()
    print(f"Sync version {end - start:0.2f}s")
    return results
</code></pre>
<p>Finally the comparison, where I read 8 csv files of approximately 125MB each.</p>
<pre><code>import asyncio
paths = [...]
asyncio.run(async_read_many(paths))
sync_read_many(paths)
pandas_read_many(paths)
# Async version 24.32s
# Sync version 24.87s
# Pandas version 18.37s
</code></pre>
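<p>For comparison, a sketch of the variant I'd expect to actually help: <code>pd.read_csv</code> is CPU-bound and holds the GIL while parsing, so neither threads nor asyncio can overlap the parsing work, but separate processes can (the speedup still depends on core count and disk throughput):</p>

```python
from concurrent.futures import ProcessPoolExecutor

import pandas as pd


def parallel_read_many(paths):
    # Each worker process parses one file; parsing no longer serializes
    # on the GIL the way threads and asyncio do.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(pd.read_csv, paths))
```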
|
<python><pandas><python-asyncio>
|
2023-06-02 15:36:37
| 0
| 1,569
|
edd313
|
76,391,582
| 3,492,006
|
Convert date and time in string to timestamp
|
<p>I have a set of CSV's that all got loaded with a date field like: <code>Sunday August 7, 2022 6:26 PM GMT</code></p>
<p>I'm working on a way to take this date/time, and convert it to a proper timestamp in the format <code>YYYY-MM-DD HH:MM</code></p>
<p>In Python, I've tried something like below to return a proper timestamp.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime

def convert_to_timestamp(date_string):
    date_format = "%A %B %d, %Y %I:%M %p %Z"
    timestamp = datetime.strptime(date_string, date_format)
    return timestamp
</code></pre>
<p>...but it keeps coming back with errors similar to below.</p>
<pre><code>ValueError: time data 'Sunday August 7,2022 6:26 PM GMT' does not match format '%A %B %d, %Y %I:%M %p %Z'
</code></pre>
<p>How would I convert this field in a CSV to the required format using Python?</p>
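<p>Note that the error shows <code>'Sunday August 7,2022'</code> with no space after the comma, while the format string expects one. A sketch that normalizes the comma spacing before parsing (assuming the timezone is always the literal <code>GMT</code>, which <code>%Z</code> accepts):</p>

```python
import re
from datetime import datetime


def convert_to_timestamp(date_string):
    # Collapse "7,2022" and "7, 2022" to a single ", " so one format fits all rows.
    normalized = re.sub(r",\s*", ", ", date_string)
    return datetime.strptime(normalized, "%A %B %d, %Y %I:%M %p %Z")


ts = convert_to_timestamp("Sunday August 7,2022 6:26 PM GMT")
print(ts.strftime("%Y-%m-%d %H:%M"))  # 2022-08-07 18:26
```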
|
<python><date><datetime><timestamp>
|
2023-06-02 15:35:39
| 3
| 449
|
WR7500
|
76,391,550
| 6,300,467
|
PyTorch equivalent of scipy.sparse.linalg.gmres
|
<p>I'm using scipy.sparse.linalg.gmres to efficiently solve <code>A.x = b</code>, however my problem is within the PyTorch framework. So, I have to detach my tensors to <code>numpy</code> then call <code>scipy</code> to solve this equation. However, other frameworks (like JAX) have their own equivalent function, <code>jax.scipy.sparse.linalg.gmres</code>.</p>
<p>Is there a PyTorch equivalent to <code>scipy.sparse.linalg.gmres</code> to sparsely solve <code>A.x = b</code>?</p>
|
<python><pytorch><scipy><jax>
|
2023-06-02 15:30:30
| 0
| 785
|
AlphaBetaGamma96
|
76,391,465
| 12,493,545
|
How to start from example diverging REST interface?
|
<p>In the uvicorn <a href="https://www.uvicorn.org/" rel="nofollow noreferrer">example</a>, one writes <code>uvicorn filename:attributename</code> to start the server. However, the interface I have generated has no such attribute <code>attributename</code> in <code>filename</code>. Therefore, I am unsure what to pass as <code>attributename</code>.</p>
<h1>Generated code in main.py</h1>
<pre class="lang-py prettyprint-override"><code>"""
Somename
Specification for REST-API of somename.
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
from fastapi import FastAPI
from openapi_server.apis.some_api import router as SomeApiRouter
app = FastAPI(
    title="SomeName",
    description="Specification for REST-API of somename",
    version="1.0.0",
)

app.include_router(SomeApiRouter)
</code></pre>
|
<python><openapi-generator><uvicorn>
|
2023-06-02 15:18:03
| 1
| 1,133
|
Natan
|
76,391,329
| 20,220,485
|
How do you sort lists of tuples based on the count of a specific value?
|
<p>I am working on a NER problem (hence the BIO tagging) with a very small dataset, and I am manually splitting it into train, validation, and test data. Thus, to make the first of two splits, I need to sort lists of tuples into two bins based on the count of <code>'B'</code> in <code>data</code>.</p>
<p>I am shuffling <code>data</code>, so the output varies, but it typically yields what I provide below. <code>data</code> can be split such that a total count of <code>10</code> instances of <code>'B'</code> is possible in <code>bin_1</code>. So it's not that <code>data</code> won't split this way given the way <code>B</code> is distributed through the lists of tuples.</p>
<p>How do I get the split that I am after? For this example, and the desired split, I want the total count of <code>'B'</code> in <code>bin_1</code> to be <code>10</code>, but it's always over.</p>
<p>Assistance would be much appreciated.</p>
<p>Data:</p>
<pre><code>data = [[('a', 'B'), ('b', 'I'), ('c', 'O'), ('d', 'B'), ('e', 'I'), ('f', 'O')],
[('g', 'O'), ('h', 'O')],
[('i', 'B'), ('j', 'I'), ('k', 'O')],
[('l', 'B'), ('m', ''), ('n', 'B'), ('o', 'O')],
[('p', 'O'), ('q', 'O'), ('r', 'O')],
[('s', 'B'), ('t', 'O')],
[('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O')],
[('z', 'B')],
[('a', 'B'), ('b', 'I'), ('c', 'O')],
[('d', 'O')],
[('e', 'O'), ('f', 'O')],
[('g', 'O'), ('h', 'B')],
[('i', 'B'), ('j', 'I')],
[('k', 'O')],
[('l', 'O'), ('m', 'O'), ('n', 'O'), ('o', 'O')],
[('p', 'O'), ('q', 'O'), ('r', 'O'), ('s', 'B'), ('t', 'O')],
[('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O'), ('z', 'B')]]
</code></pre>
<p>Current code:</p>
<pre><code>import random

split = 0.7
d = []
total_B = 0
bin_1 = []
bin_2 = []
counter = 0

random.shuffle(data)

for f in data:
    cnt = {}
    for _, label in f:
        if label in cnt:
            cnt[label] += 1
        else:
            cnt[label] = 1
    d.append(cnt)

for f in d:
    total_B += f.get('B', 0)

for f, g in zip(d, data):
    if f.get('B') is not None:
        if counter &lt;= round(total_B * split):
            counter += f.get('B')
            bin_1.append(g)
        else:
            bin_2.append(g)

print(round(total_B * split))
print(sum(1 for sublist in bin_1 for tuple_item in sublist if tuple_item[1] == 'B'))
print(sum(1 for sublist in bin_2 for tuple_item in sublist if tuple_item[1] == 'B'))
</code></pre>
<p>Current output:</p>
<pre><code>Total count of 'B' in 'bin_1' should be: 10
Total count of 'B' in 'bin_1' is: 11
Total count of 'B' in 'bin_2' is: 3
</code></pre>
<pre><code>bin_1, bin_2
>>>
[[('a', 'B'), ('b', 'I'), ('c', 'O')],
[('g', 'O'), ('h', 'B')],
[('i', 'B'), ('j', 'I'), ('k', 'O')],
[('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O'), ('z', 'B')],
[('s', 'B'), ('t', 'O')],
[('l', 'B'), ('m', ''), ('n', 'B'), ('o', 'O')],
[('a', 'B'), ('b', 'I'), ('c', 'O'), ('d', 'B'), ('e', 'I'), ('f', 'O')],
[('i', 'B'), ('j', 'I')]],
[[('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O')],
[('z', 'B')],
[('p', 'O'), ('q', 'O'), ('r', 'O'), ('s', 'B'), ('t', 'O')]]
</code></pre>
<p>Desired output:</p>
<pre><code>Total count of 'B' in 'bin_1' should be: 10
Total count of 'B' in 'bin_1' is: 10
Total count of 'B' in 'bin_2' is: 4
</code></pre>
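<p>For reference, a greedy variant that checks <em>before</em> appending whether a sentence's <code>'B'</code> count would push <code>bin_1</code> past the target (a sketch: with shuffling it lands at or just under the target rather than over, though it cannot always hit the target exactly when only large-count lists remain):</p>

```python
import random


def split_by_tag_count(data, split=0.7, tag='B'):
    # Per-sentence tag counts, computed once up front.
    counts = [sum(1 for _, label in sent if label == tag) for sent in data]
    target = round(sum(counts) * split)  # e.g. round(14 * 0.7) = 10 here

    order = list(range(len(data)))
    random.shuffle(order)

    bin_1, bin_2, running = [], [], 0
    for i in order:
        if running + counts[i] <= target:  # test BEFORE appending, so no overshoot
            running += counts[i]
            bin_1.append(data[i])
        else:
            bin_2.append(data[i])
    return bin_1, bin_2
```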
|
<python><list><sorting><machine-learning><sampling>
|
2023-06-02 15:04:31
| 1
| 344
|
doine
|
76,391,344
| 21,787,377
|
Implementing Name Synchronization and Money Transfers in Transactions Model with Account Number Input
|
<p>I have the following models in my Django application:</p>
<pre><code>class Transaction(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    account_number = models.IntegerField()
    name = models.CharField(max_length=50)
    amount = models.DecimalField(max_digits=5, decimal_places=2)
    created_on = models.DateTimeField()


class Wallet(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    account_balance = models.DecimalField(max_digits=5, decimal_places=2, default=0)


class AccountNum(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    account_number = models.IntegerField()
    slug = models.SlugField(unique=True)
</code></pre>
<p>I want to implement a feature where the name field in the <code>Transactions</code> model gets synchronized with the account owner's name based on the provided <code>account_number</code> input. Additionally, I want to enable money transfers using the current user's wallet and the specified amount in the <code>Transactions</code> model.</p>
<p>To provide some context, I have a <code>post-save</code> signal <code>generate_account_number</code> which generates a random 10-digit account number.</p>
<p>What are some recommended techniques or approaches to achieve this <code>synchronization</code> of the name field with the account owner's name and enable money transfers using the <code>wallet</code> model and specified amount in the <code>Transaction</code> model?</p>
|
<python><django><django-views><django-channels><banking>
|
2023-06-02 15:04:25
| 2
| 305
|
Adamu Abdulkarim Dee
|
76,391,296
| 10,097,229
|
How to know the last occurrence of a while loop
|
<p>I have this piece of code where I am hitting an Azure REST API. The response contains a <code>nextLink</code> parameter, which basically means that, because the result is too large, I have to hit the API again with that link until no <code>nextLink</code> parameter comes back.</p>
<pre><code>while 'nextLink' in json.loads(response.text)['properties']:
    total.extend(json.loads(response.text))
    req = requests.get(json.loads(response.text)['properties']['nextLink'], headers=head, verify=False)
    c += 1
    print(c)
    total.append(req.json())
</code></pre>
<p>The problem is that I don't know how many times the while loop will run. But I want to end the loop before the last hit/occurrence. The <code>c</code> is there to count how many times it runs. Sometimes it runs 80 times, sometimes 200 times.</p>
<p>The reason I want to know the last occurrence minus one is that after the while loop I have an upload-file statement, which does not run unless I add a break statement.</p>
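<p>A sketch of the usual shape for this kind of pagination, which sidesteps counting altogether: treat the absence of <code>nextLink</code> as the terminating condition and do the upload after the loop (the <code>fetch</code> callable below is a stand-in for your <code>requests.get(...).json()</code> call):</p>

```python
def collect_pages(fetch, first_page):
    """Follow 'nextLink' until it disappears.

    `fetch(url)` is assumed to return the parsed JSON dict of the next
    page (hypothetical helper wrapping requests.get(...).json()).
    """
    pages = [first_page]
    page = first_page
    while 'nextLink' in page.get('properties', {}):
        page = fetch(page['properties']['nextLink'])
        pages.append(page)
    return pages  # upload / process the combined result after the loop
```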
|
<python><json><python-3.x><azure><loops>
|
2023-06-02 15:00:54
| 1
| 1,137
|
PeakyBlinder
|
76,391,230
| 11,564,487
|
Change the font size of the output of python code chunk
|
<p>Consider the following <code>quarto</code> document:</p>
<pre><code>---
title: "Untitled"
format: pdf
---
```{python}
#|echo: false
#|result: 'asis'
import pandas as pd
df = pd.DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'],
'B': ['one', 'one', 'two', 'two', 'one', 'one'],
'C': ['dull', 'dull', 'shiny', 'shiny', 'dull', 'dull'],
'D': [1, 3, 2, 5, 4, 1]})
print(df)
```
</code></pre>
<p>How to scale down the output of the python chunk, say, to 50%. Is that possible?</p>
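<p>One approach I believe works for PDF output (an untested sketch on my side): since <code>format: pdf</code> renders through LaTeX, raw LaTeX size commands placed around the chunk in the <code>.qmd</code> source scale the chunk's verbatim output; <code>\scriptsize</code> or <code>\footnotesize</code> gives roughly a half-size reduction:</p>

```markdown
\footnotesize

<!-- the python chunk from the document goes here, unchanged -->

\normalsize
```

<p>This only affects PDF output; HTML output would need CSS instead.</p>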
|
<python><pdf><latex><quarto>
|
2023-06-02 14:53:48
| 1
| 27,045
|
PaulS
|
76,391,192
| 10,082,088
|
Alteryx Error generating JWT token with python tool - NotImplementedError: Algorithm 'RS256
|
<p>I am having issues creating an Alteryx workflow that encodes a JWT token with the RS256 algorithm in the python tool.</p>
<p>Here is my code:</p>
<pre><code>#################################
from ayx import Alteryx
from ayx import Package
import pandas
##Package.installPackages(package="cryptography",install_type="install --proxy proxy.server:port")
##Package.installPackages(package="pyjwt[crypto]",install_type="install --proxy proxy.server:port")
import jwt
from io import StringIO
#################################
table = Alteryx.read("#1")
#################################
print(table)
#################################
id = table.at[0, 'id']
#################################
url = table.at[0, 'url']
#################################
key = table.at[0, 'key']
#################################
exp = table.at[0, 'exp']
#################################
exp = int(exp)
#################################
nbf = table.at[0, 'nbf']
#################################
nbf = int(nbf)
#################################
encoded = jwt.encode({"iss": id, "aud": url, "exp": exp, "nbf": nbf}, key, algorithm='RS256')
#################################
s=str(encoded,'utf-8')
data = StringIO(s)
df=pandas.read_csv(data,header=None)
#################################
Alteryx.write(df,1)
</code></pre>
<p>The problem is when I try to encode a JWT using the RS256 algorithm: <code>encoded = jwt.encode({"iss": id, "aud": url, "exp": exp, "nbf": nbf}, key, algorithm='RS256')</code>, It spits back out this error message: <code>NotImplementedError: Algorithm 'RS256' could not be found. Do you have cryptography installed?</code></p>
<p>The <code>cryptography</code> package should be installed, since I specified <code>[crypto]</code> when choosing the package name: <code>pyjwt[crypto]</code> (source: <a href="https://pyjwt.readthedocs.io/en/latest/installation.html#installation-cryptography" rel="nofollow noreferrer">Installation, PyJWT 2.7.0 documentation</a>). I also tried installing it separately by adding <code>##Package.installPackages(package="cryptography",install_type="install --proxy proxy.server:port")</code>, but I still got the same error.</p>
|
<python><jwt><alteryx>
|
2023-06-02 14:49:04
| 1
| 447
|
bocodes
|
76,391,153
| 5,620,975
|
Python Polars: Lazy Frame Row Count not equal wc -l
|
<p>Been experimenting with <code>polars</code>, and one of the key features that piques my interest is the <em>larger than RAM</em> operations.</p>
<p>I downloaded some files to play with from <a href="https://s3.amazonaws.com/amazon-reviews-pds/tsv/index.txt" rel="nofollow noreferrer">HERE</a>. On the website: <em>First line in each file is header; 1 line corresponds to 1 record.</em>. <strong>WARNING</strong> total download is quite large (~1.3GB)! This experiment was done on AWS server (<code>t2.medium</code>, <code>2cpu</code>, <code>4GB</code>)</p>
<pre><code>wget https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Shoes_v1_00.tsv.gz \
  https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Office_Products_v1_00.tsv.gz \
  https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Software_v1_00.tsv.gz \
  https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv.gz \
  https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Watches_v1_00.tsv.gz
gunzip *
</code></pre>
<p>Here are the results from <code>wc -l</code></p>
<pre><code>drwxrwxr-x 3 ubuntu ubuntu 4096 Jun 2 12:44 ../
-rw-rw-r-- 1 ubuntu ubuntu 1243069057 Nov 25 2017 amazon_reviews_us_Office_Products_v1_00.tsv
-rw-rw-r-- 1 ubuntu ubuntu 44891575 Nov 25 2017 amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv
-rw-rw-r-- 1 ubuntu ubuntu 1570176560 Nov 25 2017 amazon_reviews_us_Shoes_v1_00.tsv
-rw-rw-r-- 1 ubuntu ubuntu 249565371 Nov 25 2017 amazon_reviews_us_Software_v1_00.tsv
-rw-rw-r-- 1 ubuntu ubuntu 412542975 Nov 25 2017 amazon_reviews_us_Watches_v1_00.tsv
$ find . -type f -exec cat {} + | wc -l
8398139
$ find . -name '*.tsv' | xargs wc -l
2642435 ./amazon_reviews_us_Office_Products_v1_00.tsv
341932 ./amazon_reviews_us_Software_v1_00.tsv
85982 ./amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv
4366917 ./amazon_reviews_us_Shoes_v1_00.tsv
960873 ./amazon_reviews_us_Watches_v1_00.tsv
8398139 total
</code></pre>
<p>Now, if I count the rows using <code>polars</code> using our new fancy lazy function:</p>
<pre><code>import polars as pl

csvfile = "~/data/amazon/*.tsv"
(
    pl.scan_csv(csvfile, separator = '\t')
    .select(
        pl.len()
    )
    .collect()
)
shape: (1, 1)
βββββββββββ
β len β
β --- β
β u32 β
βββββββββββ‘
β 4186305 β
βββββββββββ
</code></pre>
<p>Wow, that's a BIG difference between <code>wc -l</code> and <code>polars</code>. That's weird... maybe it's a data issue. Let's focus only on the column of interest:</p>
<pre><code>csvfile = "~/data/amazon/*.tsv"
(
    pl.scan_csv(csvfile, separator = '\t')
    .select(
        pl.col("product_category").count()
    )
    .collect()
)
shape: (1, 1)
ββββββββββββββββββββ
β product_category β
β --- β
β u32 β
ββββββββββββββββββββ‘
β 7126095 β
ββββββββββββββββββββ
</code></pre>
<p>And with <code>.collect(streaming = True)</code>:</p>
<pre><code>shape: (1, 1)
ββββββββββββββββββββ
β product_category β
β --- β
β u32 β
ββββββββββββββββββββ‘
β 7125569 β
ββββββββββββββββββββ
</code></pre>
<p>Ok, still a difference of about 1 million? Let's do it bottom up:</p>
<pre><code>csvfile = "~/data/amazon/*.tsv"
(
    pl.scan_csv(csvfile, separator = '\t')
    .group_by("product_category")
    .agg(pl.col("product_category").count().alias("counts"))
    .collect(streaming = True)
    .filter(pl.col('counts') &gt; 100)
    .sort(pl.col("counts"), descending = True)
    .select(
        pl.col('counts').sum()
    )
)
shape: (1, 1)
βββββββββββ
β counts β
β --- β
β u32 β
βββββββββββ‘
β 7125553 β
βββββββββββ
</code></pre>
<p>Close, albeit once again a different count...</p>
<p>Some more checks using <code>R</code>:</p>
<pre><code>library(vroom)
library(purrr)
library(glue)
library(logger)
amazon <- list.files("~/data/amazon/", full.names = TRUE)
f &lt;- function(file){
  df &lt;- vroom(file, col_select = 'product_category', show_col_types = FALSE)
  log_info(glue("File [{basename(file)}] has [{nrow(df)}] rows"))
}

walk(amazon, f)
INFO [2023-06-02 14:23:40] File [amazon_reviews_us_Office_Products_v1_00.tsv] has [2633651] rows
INFO [2023-06-02 14:23:41] File [amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv] has [85898] rows
INFO [2023-06-02 14:24:06] File [amazon_reviews_us_Shoes_v1_00.tsv] has [4353998] rows
INFO [2023-06-02 14:24:30] File [amazon_reviews_us_Software_v1_00.tsv] has [331152] rows
INFO [2023-06-02 14:24:37] File [amazon_reviews_us_Watches_v1_00.tsv] has [943763] rows
Total: 8348462
</code></pre>
<p>Ok. Screw it. Basically a random number generating exercise and nothing is real.</p>
<p>Surely if it's a data hygiene issue the error should be constant? Any idea why there might be such a large discrepancy?</p>
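<p>One explanation worth checking (an assumption on my part, not verified against these exact files): embedded newlines inside quoted fields. <code>wc -l</code> counts raw newline bytes, while a CSV/TSV parser counts records, so a review text containing line breaks inflates the <code>wc</code> number. A tiny illustration:</p>

```python
import csv
import io

# Two data records, one of which has a newline inside a quoted field.
raw = 'a\tb\n1\t"line one\nline two"\n2\tplain\n'

newline_count = raw.count("\n")                           # what wc -l sees
records = list(csv.reader(io.StringIO(raw), delimiter="\t"))  # what a parser sees
print(newline_count, len(records) - 1)  # 4 raw newlines vs 2 data records
```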
|
<python><python-polars>
|
2023-06-02 14:42:40
| 1
| 1,461
|
Hanjo Odendaal
|