76,399,139
3,840,940
Spark PySpark Configuration in Visual Studio Code
<p>I am trying to configure Apache Spark (PySpark) in Visual Studio Code.</p> <pre><code>OS : Windows 11 java : 17 LTS python : Anaconda 2023.03-1-windows apache spark : spark-3.4.0-bin-hadoop3 VScode : VSCodeSetup-x64-1.78.2 </code></pre> <p>I installed the &quot;Spark &amp; Hive Tools&quot; extension pack in VS Code and added <code>Python &gt; Auto Complete: Extra Paths</code> to the settings.json file as below:</p> <pre><code>&quot;python.autoComplete.extraPaths&quot;: [ &quot;C:\\spark-3.4.0-bin-hadoop3\\python&quot;, &quot;C:\\spark-3.4.0-bin-hadoop3\\python\\pyspark&quot;, &quot;C:\\spark-3.4.0-bin-hadoop3\\python\\lib\\py4j-0.10.9.7-src.zip&quot;, &quot;C:\\spark-3.4.0-bin-hadoop3\\python\\lib\\pyspark.zip&quot; ] </code></pre> <p>I wrote this Python code:</p> <pre><code>from pyspark.sql import SparkSession spark = SparkSession.builder.master(&quot;local[*]&quot;).appName(&quot;shit&quot;).getOrCreate() data = [('001','Smith','M',40,'DA',4000), ('002','Rose','M',35,'DA',3000), ('003','Williams','M',30,'DE',2500), ('004','Anne','F',30,'DE',3000), ('005','Mary','F',35,'BE',4000), ('006','James','M',30,'FE',3500)] columns = [&quot;cd&quot;,&quot;name&quot;,&quot;gender&quot;,&quot;age&quot;,&quot;div&quot;,&quot;salary&quot;] df = spark.createDataFrame(data = data, schema = columns) df.printSchema() df.show() spark.stop() </code></pre> <p>But the code throws this error:</p> <pre><code>Traceback (most recent call last): File &quot;c:\VScode workspace\spark_test\pyspark-test.py&quot;, line 1, in &lt;module&gt; from pyspark.sql import SparkSession ModuleNotFoundError: No module named 'pyspark' </code></pre> <p>So I created a <code>.env</code> file in the folder and added these paths:</p> <pre><code>SPARK_HOME=C:\spark-3.4.0-bin-hadoop3 PYTHONPATH=C:\spark-3.4.0-bin-hadoop3\python;C:\spark-3.4.0-bin-hadoop3\python\pyspark;C:\spark-3.4.0-bin-hadoop3\python\lib\py4j-0.10.9.7-src.zip;C:\spark-3.4.0-bin-hadoop3\python\lib\pyspark.zip </code></pre> <p>And I added the following code at the top of the Python file:</p> <pre><code>from dotenv import load_dotenv import os load_dotenv() print(&quot;-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-&quot;) print(os.environ.get(&quot;PYTHONPATH&quot;)) # It prints the right value print(os.environ.get(&quot;SPARK_HOME&quot;)) # It prints the right value </code></pre> <p>But it throws the same error message. Am I missing any steps? I can install pyspark with pip, but I want to use the PySpark bundled with spark-3.4.0-bin-hadoop3.</p>
<python><apache-spark><visual-studio-code><pyspark>
2023-06-04 06:04:35
1
1,441
Joseph Hwang
76,399,115
8,401,374
ET.iterparse is not loading all XML tags inside a specific tag in Python with xml.etree.ElementTree
<p><strong>My code:</strong></p> <pre><code>tree = ET.iterparse(file_path, events=('start',)) for _, elem in tree: if 'product' in elem.tag: if elem.attrib.get('product-id') == &quot;B4_1003847_000&quot;: print(ET.tostring(elem)) breakpoint() process_product(elem) </code></pre> <p><strong>XML Tag copied from XML file which is basically a child of the root tag.</strong></p> <pre><code>&lt;product product-id=&quot;B4_1003847_000&quot;&gt; &lt;ean/&gt; &lt;upc/&gt; &lt;unit/&gt; &lt;min-order-quantity&gt;1&lt;/min-order-quantity&gt; &lt;step-quantity&gt;1&lt;/step-quantity&gt; &lt;display-name xml:lang=&quot;x-default&quot;&gt;Pink piggy bank &lt;/display-name&gt; &lt;short-description xml:lang=&quot;x-default&quot;&gt;&amp;lt;p&amp;gt;Ceramic piggy bank measuring 13 x 9 x 9 cm. without a hole in the bottom, but includes a hammer.&amp;lt;/p&amp;gt; &lt;/short-description&gt; &lt;store-force-price-flag&gt;false&lt;/store-force-price-flag&gt; &lt;store-non-inventory-flag&gt;false&lt;/store-non-inventory-flag&gt; &lt;store-non-revenue-flag&gt;false&lt;/store-non-revenue-flag&gt; &lt;store-non-discountable-flag&gt;false&lt;/store-non-discountable-flag&gt; &lt;online-flag&gt;false&lt;/online-flag&gt; &lt;online-flag site-id=&quot;FlyingTiger_UAE&quot;&gt;true&lt;/online-flag&gt; &lt;available-flag&gt;true&lt;/available-flag&gt; &lt;searchable-flag&gt;true&lt;/searchable-flag&gt; &lt;images&gt; &lt;image-group view-type=&quot;large&quot;&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__01?$prd_large$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__02?$prd_large$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__03?$prd_large$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__04?$prd_large$&quot;/&gt; &lt;/image-group&gt; &lt;image-group variation-value=&quot;B401_000&quot; view-type=&quot;large&quot;&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__01?$prd_large$&quot;/&gt; &lt;image 
path=&quot;B4_1003847_000__B401_000__101__02?$prd_large$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__03?$prd_large$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__04?$prd_large$&quot;/&gt; &lt;/image-group&gt; &lt;image-group view-type=&quot;medium&quot;&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__01?$prd_medium$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__02?$prd_medium$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__03?$prd_medium$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__04?$prd_medium$&quot;/&gt; &lt;/image-group&gt; &lt;image-group variation-value=&quot;B401_000&quot; view-type=&quot;medium&quot;&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__01?$prd_medium$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__02?$prd_medium$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__03?$prd_medium$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__04?$prd_medium$&quot;/&gt; &lt;/image-group&gt; &lt;image-group view-type=&quot;mobile&quot;&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__01?$prd_mobile$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__02?$prd_mobile$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__03?$prd_mobile$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__04?$prd_mobile$&quot;/&gt; &lt;/image-group&gt; &lt;image-group variation-value=&quot;B401_000&quot; view-type=&quot;mobile&quot;&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__01?$prd_mobile$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__02?$prd_mobile$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__03?$prd_mobile$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__04?$prd_mobile$&quot;/&gt; &lt;/image-group&gt; &lt;image-group view-type=&quot;thumbnail&quot;&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__01?$prd_thumbnail$&quot;/&gt; 
&lt;image path=&quot;B4_1003847_000__B401_000__101__02?$prd_thumbnail$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__03?$prd_thumbnail$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__04?$prd_thumbnail$&quot;/&gt; &lt;/image-group&gt; &lt;image-group variation-value=&quot;B401_000&quot; view-type=&quot;thumbnail&quot;&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__01?$prd_thumbnail$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__02?$prd_thumbnail$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__03?$prd_thumbnail$&quot;/&gt; &lt;image path=&quot;B4_1003847_000__B401_000__101__04?$prd_thumbnail$&quot;/&gt; &lt;/image-group&gt; &lt;/images&gt; &lt;tax-class-id&gt;standard&lt;/tax-class-id&gt; &lt;brand&gt;FLYING TIGER &lt;/brand&gt; &lt;manufacturer-name&gt;FLYING TIGER &lt;/manufacturer-name&gt; &lt;sitemap-included-flag&gt;true&lt;/sitemap-included-flag&gt; &lt;sitemap-changefrequency&gt;daily&lt;/sitemap-changefrequency&gt; &lt;sitemap-priority&gt;1.0&lt;/sitemap-priority&gt; &lt;page-attributes/&gt; &lt;custom-attributes&gt; &lt;custom-attribute attribute-id=&quot;brandCategoryId&quot;&gt;B4&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;buyingCategory&quot;&gt;B401010&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;color&quot;&gt;B401_000&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;defaultName&quot;&gt;Pink piggy bank&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;defaultSizeGrid&quot;&gt;Standard&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;geoAllowedShippingCountries&quot;&gt; &lt;value&gt;ALL&lt;/value&gt; &lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;internalProductName&quot;&gt;PIG BANK WITH HAMMER&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;isItemBulky&quot;&gt;false&lt;/custom-attribute&gt; &lt;custom-attribute 
attribute-id=&quot;isItemPrepaid&quot;&gt;false&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;isReturnable&quot;&gt;true&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;isReturnable&quot; site-id=&quot;FlyingTiger_UAE&quot;&gt;true&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;season&quot;&gt;000&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;seasonDescription&quot;&gt;Non Seasonable Items&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;size&quot;&gt;B401022_000&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;sizeRefinement&quot; xml:lang=&quot;x-default&quot;&gt;B401022_000&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;subBrand&quot; xml:lang=&quot;x-default&quot;&gt;Flying Tiger&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;subSeason&quot;&gt;000&lt;/custom-attribute&gt; &lt;custom-attribute attribute-id=&quot;subSeasonDescription&quot;&gt;Non Seasonable Items&lt;/custom-attribute&gt; &lt;/custom-attributes&gt; &lt;variations&gt; &lt;attributes&gt; &lt;variation-attribute attribute-id=&quot;color&quot; variation-attribute-id=&quot;color&quot;&gt; &lt;display-name xml:lang=&quot;x-default&quot;&gt;color&lt;/display-name&gt; &lt;variation-attribute-values&gt; &lt;variation-attribute-value value=&quot;B401_000&quot;&gt; &lt;display-value xml:lang=&quot;x-default&quot;&gt;B401_000&lt;/display-value&gt; &lt;/variation-attribute-value&gt; &lt;/variation-attribute-values&gt; &lt;/variation-attribute&gt; &lt;variation-attribute attribute-id=&quot;size&quot; variation-attribute-id=&quot;size&quot;&gt; &lt;display-name xml:lang=&quot;x-default&quot;&gt;size&lt;/display-name&gt; &lt;variation-attribute-values&gt; &lt;variation-attribute-value value=&quot;000&quot;&gt; &lt;display-value xml:lang=&quot;x-default&quot;&gt;No Size&lt;/display-value&gt; &lt;/variation-attribute-value&gt; &lt;/variation-attribute-values&gt; 
&lt;/variation-attribute&gt; &lt;/attributes&gt; &lt;variants&gt; &lt;variant product-id=&quot;B4_1003847_000-000&quot;/&gt; &lt;/variants&gt; &lt;/variations&gt; &lt;classification-category catalog-id=&quot;siteCatalog_FlyingTiger_UAE&quot;&gt;gifts-giftsforkids&lt;/classification-category&gt; &lt;pinterest-enabled-flag&gt;true&lt;/pinterest-enabled-flag&gt; &lt;facebook-enabled-flag&gt;true&lt;/facebook-enabled-flag&gt; &lt;store-attributes&gt; &lt;force-price-flag&gt;false&lt;/force-price-flag&gt; &lt;non-inventory-flag&gt;false&lt;/non-inventory-flag&gt; &lt;non-revenue-flag&gt;false&lt;/non-revenue-flag&gt; &lt;non-discountable-flag&gt;false&lt;/non-discountable-flag&gt; &lt;/store-attributes&gt; &lt;/product&gt; </code></pre> <p><strong>The same tag return by <code>print(ET.tostring(elem))</code></strong></p> <pre><code>&lt;ns0:product xmlns:ns0=&quot;http://www.demandware.com/xml/impex/catalog/2006-10-31&quot; product-id=&quot;B4_1003847_000&quot;&gt; &lt;ns0:ean /&gt; &lt;ns0:upc /&gt; &lt;ns0:unit /&gt; &lt;ns0:min-order-quantity&gt;1&lt;/ns0:min-order-quantity&gt; &lt;ns0:step-quantity&gt;1&lt;/ns0:step-quantity&gt; &lt;ns0:display-name xml:lang=&quot;x-default&quot;&gt;Pink piggy bank &lt;/ns0:display-name&gt; &lt;ns0:short-description xml:lang=&quot;x-default&quot;&gt;&amp;lt;p&amp;gt;Ceramic piggy bank measuring 13 x 9 x 9 cm. 
without a hole in the bottom, but includes a hammer.&amp;lt;/p&amp;gt; &lt;/ns0:short-description&gt; &lt;ns0:store-force-price-flag&gt;false&lt;/ns0:store-force-price-flag&gt; &lt;ns0:store-non-inventory-flag&gt;false&lt;/ns0:store-non-inventory-flag&gt; &lt;ns0:store-non-revenue-flag&gt;false&lt;/ns0:store-non-revenue-flag&gt; &lt;ns0:store-non-discountable-flag&gt;false&lt;/ns0:store-non-discountable-flag&gt; &lt;ns0:online-flag&gt;false&lt;/ns0:online-flag&gt; &lt;ns0:online-flag site-id=&quot;FlyingTiger_UAE&quot;&gt;true&lt;/ns0:online-flag&gt; &lt;ns0:available-flag&gt;true&lt;/ns0:available-flag&gt; &lt;ns0:searchable-flag&gt;true&lt;/ns0:searchable-flag&gt; &lt;ns0:images&gt; &lt;ns0:image-group view-type=&quot;large&quot;&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__01?$prd_large$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__02?$prd_large$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__03?$prd_large$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__04?$prd_large$&quot; /&gt; &lt;/ns0:image-group&gt; &lt;ns0:image-group variation-value=&quot;B401_000&quot; view-type=&quot;large&quot;&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__01?$prd_large$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__02?$prd_large$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__03?$prd_large$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__04?$prd_large$&quot; /&gt; &lt;/ns0:image-group&gt; &lt;ns0:image-group view-type=&quot;medium&quot;&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__01?$prd_medium$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__02?$prd_medium$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__03?$prd_medium$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__04?$prd_medium$&quot; /&gt; &lt;/ns0:image-group&gt; &lt;ns0:image-group 
variation-value=&quot;B401_000&quot; view-type=&quot;medium&quot;&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__01?$prd_medium$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__02?$prd_medium$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__03?$prd_medium$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__04?$prd_medium$&quot; /&gt; &lt;/ns0:image-group&gt; &lt;ns0:image-group view-type=&quot;mobile&quot;&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__01?$prd_mobile$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__02?$prd_mobile$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__03?$prd_mobile$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__04?$prd_mobile$&quot; /&gt; &lt;/ns0:image-group&gt; &lt;ns0:image-group variation-value=&quot;B401_000&quot; view-type=&quot;mobile&quot;&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__01?$prd_mobile$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__02?$prd_mobile$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__03?$prd_mobile$&quot; /&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__04?$prd_mobile$&quot; /&gt; &lt;/ns0:image-group&gt; &lt;ns0:image-group view-type=&quot;thumbnail&quot;&gt; &lt;ns0:image path=&quot;B4_1003847_000__B401_000__101__01?$prd_thumbnail$&quot; /&gt;&lt;/ns0:image-group&gt;&lt;/ns0:images&gt;&lt;/ns0:product&gt; </code></pre> <p>The noticing point is that in the code printed XML is missing all the tags after <code>&lt;images&gt;</code>. I tried breakpoint() to debug it but didn't work.</p>
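The truncation is most likely caused by the event type, not the file: with `events=('start',)`, the event for `<product>` fires as soon as its opening tag is parsed, so on a large file the element only contains whatever children happen to have been read into the parser's buffer so far, which is why the printed XML stops partway through `<images>`. The subtree is only guaranteed complete at the `'end'` event (the default). A sketch of the difference, using `XMLPullParser` fed one byte at a time to make the buffering explicit:

```python
import xml.etree.ElementTree as ET

XML = (b'<catalog><product product-id="p1">'
       b'<images><image path="a"/><image path="b"/></images>'
       b'</product></catalog>')

seen = []
# Feeding byte by byte mimics iterparse working through a file much larger
# than its read buffer: each event is delivered as soon as its tag is parsed.
parser = ET.XMLPullParser(events=("start", "end"))
for i in range(len(XML)):
    parser.feed(XML[i:i + 1])
    for event, elem in parser.read_events():
        if elem.tag == "product":
            seen.append((event, len(elem.findall(".//image"))))

print(seen)  # [('start', 0), ('end', 2)] -> no children yet at 'start'
```

On a small test file `iterparse` can appear to work with `'start'` because the whole document fits in one read, so the element is already populated by the time the event is delivered; the missing tags only show up on large files, matching the symptom here.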
<python><xml><lxml><elementtree><large-data>
2023-06-04 05:55:46
0
1,710
Shaida Muhammad
76,399,078
5,380,656
Creating a TypedDict with enum keys
<p>I am trying to create a <code>TypedDict</code> for better code completion and am running into an issue.</p> <p>I want to have a fixed set of keys (an Enum) and the values to match a specific list of objects depending on the key.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>from enum import Enum class OneObject: pass class TwoObject: pass class MyEnum(Enum): ONE = 1 TWO = 2 </code></pre> <p>I am looking to have something like this:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypedDict class CustomDict(TypedDict): MyEnum.ONE: list[OneObject] MyEnum.TWO: list[TwoObject] </code></pre> <p>However, I am getting <code>Non-self attribute could not be type hinted</code> and it doesn't really work. What are my options?</p>
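A `TypedDict` cannot take enum members as keys at all: its keys must be plain string identifiers, which is what the inspection is complaining about. One common workaround (a sketch, not the only option) is to mirror the enum member names as the `TypedDict` keys and make the enum str-valued, so runtime code can still go through the enum:

```python
from enum import Enum
from typing import TypedDict

class OneObject: pass
class TwoObject: pass

# TypedDict keys must be literal identifiers -- mirror the enum names as keys.
class CustomDict(TypedDict):
    ONE: list[OneObject]
    TWO: list[TwoObject]

# A str-valued enum whose values are exactly those keys.
class MyEnum(str, Enum):
    ONE = "ONE"
    TWO = "TWO"

d: CustomDict = {"ONE": [OneObject()], "TWO": [TwoObject()]}
values = d[MyEnum.ONE.value]  # runtime lookup through the enum
```

Static checkers may still prefer the literal key (`d["ONE"]`) for narrowing, since `MyEnum.ONE.value` is typed as `str` rather than `Literal["ONE"]`.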
<python><enums><python-typing><typeddict>
2023-06-04 05:38:12
2
771
Charlie
76,398,598
4,611,374
Streamlit: Why does updating the session_state with form data require submitting the form twice?
<p>I appear to fundamentally misunderstand how Streamlit's forms and <code>session_state</code> variable work. Form data is not inserted into the <code>session_state</code> upon submit. However, submitting a second time inserts the data. Updating <code>session_state</code> values always requires submitting the form 2 times.</p> <p>I'd like to know</p> <ol> <li>if this is expected behavior</li> <li>if I'm making a mistake</li> <li>if there is a workaround that allows immediate <code>session_state</code> updates on submit</li> </ol> <p><strong>EXAMPLE 1:</strong></p> <pre class="lang-py prettyprint-override"><code>import streamlit as st # View all key:value pairs in the session state s = [] for k, v in st.session_state.items(): s.append(f&quot;{k}: {v}&quot;) st.write(s) # Define the form with st.form(&quot;my_form&quot;): st.session_state['name'] = st.text_input(&quot;Name&quot;) st.form_submit_button(&quot;Submit&quot;) </code></pre> <p>When the page loads, the session state is empty: <code>[]</code> <a href="https://i.sstatic.net/LRXql.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRXql.png" alt="enter image description here" /></a></p> <p>After submitting the form once, the session_state contains <code>&quot;name: &quot;</code>. The key has been added, but not the value. 
<a href="https://i.sstatic.net/c0dXT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c0dXT.png" alt="enter image description here" /></a></p> <p>After pressing <code>Submit</code> a second time, the session_state now contains <code>&quot;name: Chris&quot;</code> <a href="https://i.sstatic.net/KePIk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KePIk.png" alt="enter image description here" /></a></p> <p><strong>EXAMPLE 2:</strong> Using a callback function</p> <pre class="lang-py prettyprint-override"><code>import streamlit as st # View all key:value pairs in the session state s = [] for k, v in st.session_state.items(): s.append(f&quot;{k}: {v}&quot;) st.write(s) # Define the form with st.form(&quot;my_form&quot;): def update(): st.session_state['name'] = name name = st.text_input(&quot;Name&quot;) st.form_submit_button(&quot;Submit&quot;, on_click=update) </code></pre> <p>When the page loads, the session state is empty: <code>[]</code> <a href="https://i.sstatic.net/oeEIm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oeEIm.png" alt="enter image description here" /></a></p> <p>After submitting the form once, the session_state contains <code>&quot;name: &quot;</code>. The key has been added, but not the value. <a href="https://i.sstatic.net/MZlCh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MZlCh.png" alt="enter image description here" /></a></p> <p>After pressing <code>Submit</code> a second time, the session_state now contains <code>&quot;name: Chris&quot;</code> <a href="https://i.sstatic.net/ZBv6N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZBv6N.png" alt="enter image description here" /></a></p>
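This lag matches Streamlit's execution model: the whole script reruns on submit, `st.session_state['name'] = st.text_input(...)` is only reached partway through that rerun (after the state display at the top has already been rendered), and in the callback variant `update()` runs before the rerun while `name` still holds the previous run's value. The usual workaround is to bind the widget to session state with `key=`, which commits the submitted value before the rerun starts. A sketch (based on the documented `key=` binding; not executed here):

```python
import streamlit as st

# Show the current session state; with key-bound widgets this is already
# up to date on the rerun triggered by the submit.
st.write([f"{k}: {v}" for k, v in st.session_state.items()])

with st.form("my_form"):
    # The widget writes its submitted value to st.session_state["name"]
    # itself, before the script reruns -- no manual assignment needed.
    st.text_input("Name", key="name")
    submitted = st.form_submit_button("Submit")

if submitted:
    st.write(st.session_state["name"])  # holds the just-submitted value
```

So the behavior in the question is expected, and the immediate-update workaround is `key=` binding rather than assigning the widget's return value into `session_state`.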
<python><forms><session-state><streamlit>
2023-06-04 01:09:39
1
309
RedHand
76,398,541
21,343,992
Connect to websocket server using IP address with Python websockets library
<p>New to Python. I'm trying to connect to a websocket server using the actual IP address.</p> <p>Connecting using the websocket URL works fine:</p> <pre><code>import websocket def on_message(wsapp, message): print(message) ws = websocket.WebSocketApp(&quot;wss://api.server.com:443/ws/stream&quot;, on_message=on_message) ws.run_forever() </code></pre> <p>However, I extracted the server IP addresses using traceroute and the following doesn't work:</p> <pre><code>import websocket def on_message(wsapp, message): print(message) ws = websocket.WebSocketApp(&quot;wss://1.2.3.4:443/ws/stream&quot;, on_message=on_message) ws.run_forever() </code></pre> <p>(not the actual domain or IP address)</p> <p>Is there a way to connect using the IP address, rather than the URL?</p>
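For context on what usually breaks here: TCP can target a bare IP, but for `wss://` the TLS handshake sends SNI and validates the certificate against the hostname taken from the URL, so an IP-only URL typically fails (and for load-balanced services a traceroute address may not even accept the traffic). With the standard library's `ssl` module, the idea looks like this sketch (the IP and hostname are placeholders, not real endpoints):

```python
import socket
import ssl

HOST_IP = "203.0.113.10"     # placeholder -- an address behind the service
HOSTNAME = "api.server.com"  # the name the TLS certificate was issued for

# The TCP connection may use the bare IP, but SNI and certificate checks
# are driven by server_hostname, so the real hostname is still required.
ctx = ssl.create_default_context()
raw = socket.create_connection((HOST_IP, 443), timeout=5)
tls = ctx.wrap_socket(raw, server_hostname=HOSTNAME)
```

The websocket-client library may expose equivalent TLS overrides through `run_forever(sslopt={...})`; the exact supported keys should be checked against its documentation rather than assumed from this sketch.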
<python><websocket><tcp>
2023-06-04 00:38:05
1
491
rare77
76,398,466
6,577,503
Drawing a graph network in 3D
<p>Suppose I am given a directed graph in Python:</p> <pre><code>V = [1, 2, 3, 4, 5] E = { 1: [2, 3, 4], 2: [1, 2, 3], 3: [1, 4, 5], 4: [5], 5: [1, 3] } c = [81, 23, 43, 22, 100] </code></pre> <p>V and E represent the vertex and edge sets of the graph as a list and a dictionary respectively, and c is a cost function on the vertex set, i.e. c(1) = 81, c(2) = 23, etc. Now I want to visualize the graph represented by (V, E), which can be done easily in 2 dimensions using the networkx package, but additionally I want to plot the vertices of this graph at varying positions on the z axis (instead of only on the xy plane) so that the 'height' of each vertex on the z axis equals its cost.</p> <p>How can I do so?</p>
<python><networkx>
2023-06-03 23:55:40
1
441
Anon
76,398,368
10,327,849
Pandas EXCLUSIVE LEFT OUTER JOIN with line count
<p>I am creating a transactions import tool that updates a DB with new transactions every day.</p> <p>I am getting an Excel file (<em>that I am opening using pandas</em>) with the entire month's transactions, and I am trying to filter only the new transactions by merging the new DataFrame with the existing one.</p> <p>For this I am using pandas merge to do an <em>EXCLUSIVE LEFT OUTER JOIN</em>, but I have a problem with multiple rows with the exact same values.</p> <p>See this example:</p> <pre><code>import pandas as pd import numpy as np df1 = pd.DataFrame(np.array([[pd.Timestamp('2023-1-1'), 'A', 10] , [pd.Timestamp('2023-1-1'), 'A', 10] , [pd.Timestamp('2023-1-1'), 'B', 11] , [pd.Timestamp('2023-1-2'), 'C', 12] , [pd.Timestamp('2023-1-2'), 'D', 13] , [pd.Timestamp('2023-1-2'), 'E', 14] , [pd.Timestamp('2023-1-3'), 'F', 15]]), columns=['Date', 'Title', 'Amount']) df2 = pd.DataFrame(np.array([[pd.Timestamp('2023-1-1'), 'A', 10] , [pd.Timestamp('2023-1-1'), 'B', 11] , [pd.Timestamp('2023-1-2'), 'C', 12]]), columns=['Date', 'Title', 'Amount']) df3 = pd.merge(df1, df2, on=['Date', 'Title', 'Amount'], how=&quot;outer&quot;, indicator=True) df3 = df3[df3['_merge'] == 'left_only'] print(df1) print(df2) print(df3) # Both 'A' rows deleted while one 'A' row is new and should be in df3 </code></pre> <p>The output is:</p> <pre><code> Date Title Amount 0 2023-01-01 A 10 1 2023-01-01 A 10 2 2023-01-01 B 11 3 2023-01-02 C 12 4 2023-01-02 D 13 5 2023-01-02 E 14 6 2023-01-03 F 15 Date Title Amount 0 2023-01-01 A 10 1 2023-01-01 B 11 2 2023-01-02 C 12 Date Title Amount _merge 4 2023-01-02 D 13 left_only 5 2023-01-02 E 14 left_only 6 2023-01-03 F 15 left_only </code></pre> <p>With the above method, both <code>'A'</code> rows are deleted, while one <code>'A'</code> row is new and thus should be in the new DataFrame.</p> <p>Any ideas on what operation can be used to keep <strong>only</strong> the rows that are in the first DataFrame, <strong>with</strong> consideration for row counts? 
To give a little more information, transactions in the same day are not ordered (<em>no time information, only date</em>) and new transactions can be added multiple days in the past.</p>
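One way to make the anti-join respect multiplicity is to number repeated occurrences of each key with `groupby(...).cumcount()` before merging: the second 'A' gets occurrence index 1, which has no partner in the old frame, so it survives. A sketch (the function name and column handling are illustrative, not from the question):

```python
import pandas as pd

def new_rows(df_new, df_old, keys):
    """Anti-join that respects duplicate counts: a row occurring twice in
    df_new but once in df_old survives exactly once."""
    # Number the i-th occurrence of each identical key combination.
    n = df_new.assign(_n=df_new.groupby(keys).cumcount())
    o = df_old.assign(_n=df_old.groupby(keys).cumcount())
    # Rows whose (key, occurrence) pair has no match in df_old are new.
    merged = n.merge(o, on=keys + ["_n"], how="left", indicator=True)
    return (merged[merged["_merge"] == "left_only"]
            .drop(columns=["_n", "_merge"]))
```

With the question's `df1`/`df2` this keeps one `'A'` row plus `D`, `E`, `F`. Since same-day rows carry no time information, the counter simply pairs the i-th occurrence in the new file with the i-th occurrence already in the DB, which is the best available matching by count.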
<python><pandas><dataframe><join>
2023-06-03 23:07:51
1
301
Yakir Shlezinger
76,398,117
11,065,874
How to override the default 200 response in FastAPI docs
<p>I have this small fastapi application</p> <pre><code>import uvicorn from fastapi import FastAPI, APIRouter from fastapi import Path from pydantic import BaseModel from starlette import status app = FastAPI() def test(): print(&quot;creating the resource&quot;) return &quot;Hello world&quot; router = APIRouter() class MessageResponse(BaseModel): detail: str router.add_api_route( path=&quot;/test&quot;, endpoint=test, methods=[&quot;POST&quot;], responses={ status.HTTP_201_CREATED: {&quot;model&quot;: MessageResponse} } ) app.include_router(router) def main(): uvicorn.run(&quot;run:app&quot;, host=&quot;0.0.0.0&quot;, reload=True, port=8001) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>when I check the docs on <code>http://127.0.0.1:8001/docs#/default/test_test_post</code>, in the list of responses in the docs, I see two responses: 200 and 201</p> <p>I don't have any 200 responses here. I don't want 200 to be shown for me in the docs.</p> <p>Here is the fast api auto-generated openapi.json file</p> <pre><code>{ &quot;openapi&quot;: &quot;3.0.2&quot;, &quot;info&quot;: {&quot;title&quot;: &quot;FastAPI&quot;, &quot;version&quot;: &quot;0.1.0&quot;}, &quot;paths&quot;: {&quot;/test&quot;: { &quot;post&quot;: {&quot;summary&quot;: &quot;Test&quot;, &quot;operationId&quot;: &quot;test_test_post&quot;, &quot;responses&quot;: { &quot;200&quot;: { &quot;description&quot;: &quot;Successful Response&quot;, &quot;content&quot;: {&quot;application/json&quot;: {&quot;schema&quot;: {}}} }, &quot;201&quot;: { &quot;description&quot;: &quot;Created&quot;, &quot;content&quot;: {&quot;application/json&quot;: {&quot;schema&quot;: {&quot;$ref&quot;: &quot;#/components/schemas/MessageResponse&quot;}}}}}}} }, &quot;components&quot;: {&quot;schemas&quot;: { &quot;MessageResponse&quot;: {&quot;title&quot;: &quot;MessageResponse&quot;, &quot;required&quot;: [&quot;detail&quot;], &quot;type&quot;: &quot;object&quot;, &quot;properties&quot;: {&quot;detail&quot;: 
{&quot;title&quot;: &quot;Detail&quot;, &quot;type&quot;: &quot;string&quot;}}}}}} </code></pre> <p>I should not be seeing</p> <pre><code> &quot;description&quot;: &quot;Successful Response&quot;, &quot;content&quot;: {&quot;application/json&quot;: {&quot;schema&quot;: {}}} }, </code></pre> <p>What should I do?</p> <hr /> <p>UPDATE:</p> <p>this one also did not work</p> <pre><code>import uvicorn from fastapi import FastAPI, APIRouter from pydantic import BaseModel from starlette import status from starlette.responses import Response app = FastAPI() def test(response: Response): print(&quot;creating the resource&quot;) response.status_code = 201 return &quot;Hello world&quot; router = APIRouter() class MessageResponse(BaseModel): detail: str router.add_api_route( path=&quot;/test&quot;, endpoint=test, methods=[&quot;POST&quot;], response_model=None, responses={ 200: {}, status.HTTP_201_CREATED: {&quot;model&quot;: MessageResponse} } ) app.include_router(router) def main(): uvicorn.run(&quot;run:app&quot;, host=&quot;0.0.0.0&quot;, reload=True, port=8001) if __name__ == &quot;__main__&quot;: main() </code></pre>
<python><fastapi><openapi>
2023-06-03 21:33:25
2
2,555
Amin Ba
76,397,993
5,423,080
Plot function during pytest debugging in console mode
<p>I am writing a unit test for a scientific function and when I was trying to check its shape I obtained an error with <code>matplotlib</code>.</p> <p>I am using PyCharm Community Edition 2022.3.3, python 3.11, matplotlib 3.7.1 and PySide6 6.5.0 under Windows 10.</p> <p>When debugging the test, I was trying to plot the function in console mode and I obtained this error/warning and no plot:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.3\plugins\python-ce\helpers\pydev\_pydevd_bundle\pydevd_exec2.py&quot;, line 3, in Exec exec(exp, global_vars, local_vars) File &quot;&lt;input&gt;&quot;, line 1, in &lt;module&gt; File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py&quot;, line 2812, in plot return gca().plot( ^^^^^ File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py&quot;, line 2309, in gca return gcf().gca() ^^^^^ File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py&quot;, line 906, in gcf return figure() ^^^^^^^^ File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\_api\deprecation.py&quot;, line 454, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py&quot;, line 840, in figure manager = new_figure_manager( ^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py&quot;, line 384, in new_figure_manager return _get_backend_mod().new_figure_manager(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py&quot;, line 3574, in new_figure_manager return cls.new_figure_manager_given_figure(num, fig) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py&quot;, line 3579, in new_figure_manager_given_figure return cls.FigureCanvas.new_manager(figure, num) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py&quot;, line 1742, in new_manager return cls.manager_class.create_with_canvas(cls, figure, num) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py&quot;, line 2858, in create_with_canvas return cls(canvas_class(figure), num) ^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backends\backend_qt.py&quot;, line 204, in __init__ _create_qApp() File &quot;C:\Users\devot\Work\PythonEnvs\science_env\Lib\site-packages\matplotlib\backends\backend_qt.py&quot;, line 134, in _create_qApp QtWidgets.QApplication.setAttribute( DeprecationWarning: Enum value 'Qt::ApplicationAttribute.AA_EnableHighDpiScaling' is marked as deprecated, please check the documentation for more information. 
</code></pre> <p>If I run the code not as a test everything is fine.</p> <p>These are a working example to obtain the error:</p> <pre><code># test_scientific_functions.py import numpy as np import matplotlib.pyplot as plt def test_sin(): x = np.arange(0, 25, 0.1) y = np.sin(x) plt.plot(x, y) plt.show() </code></pre> <p>This code generates this message:</p> <pre><code>test\unit\test_scientific_functions.py:22 (test_sin) def test_sin(): x = np.arange(0, 25, 0.1) y = np.sin(x) &gt; plt.plot(x, y) test\unit\test_scientific_functions.py:26: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:2812: in plot return gca().plot( ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:2309: in gca return gcf().gca() ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:906: in gcf return figure() ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\_api\deprecation.py:454: in wrapper return func(*args, **kwargs) ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:840: in figure manager = new_figure_manager( ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\pyplot.py:384: in new_figure_manager return _get_backend_mod().new_figure_manager(*args, **kwargs) ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py:3574: in new_figure_manager return cls.new_figure_manager_given_figure(num, fig) ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py:3579: in new_figure_manager_given_figure return cls.FigureCanvas.new_manager(figure, num) ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py:1742: in new_manager return cls.manager_class.create_with_canvas(cls, figure, num) ..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backend_bases.py:2858: in create_with_canvas return cls(canvas_class(figure), num) 
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backends\backend_qt.py:204: in __init__ _create_qApp() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ @functools.lru_cache(1) def _create_qApp(): app = QtWidgets.QApplication.instance() # Create a new QApplication and configure it if none exists yet, as only # one QApplication can exist at a time. if app is None: # display_is_valid returns False only if on Linux and neither X11 # nor Wayland display can be opened. if not mpl._c_internal_utils.display_is_valid(): raise RuntimeError('Invalid DISPLAY variable') # Check to make sure a QApplication from a different major version # of Qt is not instantiated in the process if QT_API in {'PyQt6', 'PySide6'}: other_bindings = ('PyQt5', 'PySide2') elif QT_API in {'PyQt5', 'PySide2'}: other_bindings = ('PyQt6', 'PySide6') else: raise RuntimeError(&quot;Should never be here&quot;) for binding in other_bindings: mod = sys.modules.get(f'{binding}.QtWidgets') if mod is not None and mod.QApplication.instance() is not None: other_core = sys.modules.get(f'{binding}.QtCore') _api.warn_external( f'Matplotlib is using {QT_API} which wraps ' f'{QtCore.qVersion()} however an instantiated ' f'QApplication from {binding} which wraps ' f'{other_core.qVersion()} exists. Mixing Qt major ' 'versions may not work as expected.' ) break try: &gt; QtWidgets.QApplication.setAttribute( QtCore.Qt.AA_EnableHighDpiScaling) E DeprecationWarning: Enum value 'Qt::ApplicationAttribute.AA_EnableHighDpiScaling' is marked as deprecated, please check the documentation for more information. 
..\..\PythonEnvs\science_env\Lib\site-packages\matplotlib\backends\backend_qt.py:134: DeprecationWarning </code></pre> <p>If I run this other code everything is fine:</p> <pre><code># plot.py import numpy as np import matplotlib.pyplot as plt def plot_sin(): x = np.arange(0, 10, 0.1) y = np.sin(x) plt.plot(x, y) plt.show() if __name__ == &quot;__main__&quot;: plot_sin() </code></pre> <p>Does this mean that, in some way, <code>pytest</code> conflicts with <code>matplotlib.pyplot</code>?</p> <p>Do you have any advice?</p>
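A possible mitigation (a sketch; it assumes the test run escalates warnings to errors, which is why the <code>DeprecationWarning</code> raised inside the Qt backend fails the test, and that a <code>pytest.ini</code> is used for configuration) is to ignore that specific warning:

```ini
; pytest.ini - hypothetical config: stop pytest from turning the Qt
; backend's DeprecationWarning into a test failure
[pytest]
filterwarnings =
    ignore::DeprecationWarning:matplotlib.backends.backend_qt
```

Alternatively, calling <code>matplotlib.use(&quot;Agg&quot;)</code> before <code>import matplotlib.pyplot</code> in the test module avoids creating a Qt <code>QApplication</code> at all (though <code>plt.show()</code> then displays nothing, which is usually what a test wants anyway).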
<python><matplotlib><pycharm><pytest>
2023-06-03 20:50:05
0
412
cicciodevoto
76,397,673
222,279
How do I sort a 2D numpy array using indices stored in a 1D numpy array?
<p>I have a 1D numpy array containing row index values into a 2D array. How do I sort the 2D array based on the index values in the 1D array? For example:</p> <pre><code>indicies = np.array([2,3,0,1]) matrix = np.array([[20, 200],[3,300],[100,1000],[1,1]]) </code></pre> <p>I want to sort <code>matrix</code> based on the order of the index values in <code>indicies</code> so that I end up with a 2D array looking like:</p> <pre><code>[[100 1000] [ 1 1] [ 20 200] [ 3 300]] </code></pre> <p>Basically the index values are associated with each row in <code>matrix</code>.</p> <hr /> <p>Roman's answer below seems to work on his example, but not on this one:</p> <pre><code>sort1 = np.array([2,4,5,7,0,1,3,6]) matrix = np.array([[20, 200],[40,400],[50,500],[70,700],[1, 1],[10,100],[30,300],[60,600]]) taken = np.take(matrix,sort1,axis=0) print(taken) </code></pre> <p>I get the output:</p> <pre><code>[[ 50 500] [ 1 1] [ 10 100] [ 60 600] [ 20 200] [ 40 400] [ 70 700] [ 30 300]] </code></pre> <p>It seems to be outputting them in the index order of the original array. It should be ordered by the indices in <code>sort1</code>:</p> <pre><code>[[ 1 1] [ 10 100] [ 20 200] [ 30 300] [ 40 400] [ 50 500] [ 60 600] [ 70 700]] </code></pre>
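A sketch of what may be going on (using only the arrays from the question): if <code>indicies[i]</code> names the <em>source</em> row that should land at position <code>i</code>, plain fancy indexing works; if it instead names the <em>target</em> position of row <code>i</code> (as in the <code>sort1</code> example), the inverse permutation via <code>np.argsort</code> is needed. The first permutation happens to be its own inverse, which hides the difference:

```python
import numpy as np

indices = np.array([2, 3, 0, 1])
matrix = np.array([[20, 200], [3, 300], [100, 1000], [1, 1]])
# "row indices[i] goes to position i": take the named rows in order
reordered = matrix[indices]

sort1 = np.array([2, 4, 5, 7, 0, 1, 3, 6])
matrix2 = np.array([[20, 200], [40, 400], [50, 500], [70, 700],
                    [1, 1], [10, 100], [30, 300], [60, 600]])
# "row i goes to position sort1[i]": invert the permutation first
sorted2 = matrix2[np.argsort(sort1)]
```

<code>np.argsort(sort1)</code> computes, for each target position, which source row belongs there, so it converts the second interpretation into the first.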
<python><numpy><numpy-ndarray>
2023-06-03 19:13:26
2
13,026
GregH
76,397,643
13,078,279
API created for Flask app is extremely slow
<p>I am creating an app to monitor schoolbus arrivals for my school. For this, I have created a simple flask webapp, with an admin page used by admins to input buses that have arrived:</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, jsonify, request, render_template import sys app = Flask(__name__) original_busdata = { &quot;present&quot;: [], &quot;absent&quot;: [i for i in range(1, 32)] # all buses not here to start with } busdata = original_busdata @app.route(&quot;/busdata&quot;, methods=[&quot;GET&quot;]) def transfer_data(): global busdata return jsonify(busdata) @app.route(&quot;/admin&quot;, methods=[&quot;GET&quot;, &quot;POST&quot;]) def template(): global busdata present = [] absent = [] # When a checkbox is checked from the # admin page, adjust # busdata correspondingly if request.method == &quot;POST&quot;: for i in range(1, 32): if f&quot;bus-{i}&quot; in request.form: present.append(i) else: absent.append(i) busdata[&quot;present&quot;] = present busdata[&quot;absent&quot;] = absent # Assemble checkbox data from busdata checkbox_data = {} for bus in busdata[&quot;present&quot;]: checkbox_data[bus] = True for bus in busdata[&quot;absent&quot;]: checkbox_data[bus] = False # Sort checkbox data so it is in the right order checkbox_data = dict(sorted(checkbox_data.items())) return render_template(&quot;admin.html&quot;, busdata=busdata, checkbox_data=checkbox_data) if __name__ == &quot;__main__&quot;: app.run() </code></pre> <p>With <code>admin.html</code> template:</p> <pre class="lang-html prettyprint-override"><code>&lt;div class=&quot;page-container&quot;&gt; &lt;section id=&quot;bus-card-container&quot;&gt; &lt;h2&gt;Buses currently present&lt;/h2&gt; &lt;div class=&quot;bus-list&quot;&gt; {% for i in busdata[&quot;present&quot;] %} &lt;span class=&quot;bus-present&quot;&gt;Bus {{ i }}&lt;/span&gt; {% endfor %} &lt;/div&gt; &lt;h2&gt;Buses not here&lt;/h2&gt; &lt;div class=&quot;bus-list&quot;&gt; {% for i in 
busdata[&quot;absent&quot;] %} &lt;span class=&quot;bus-absent&quot;&gt;Bus {{ i }}&lt;/span&gt; {% endfor %} &lt;/div&gt; &lt;/section&gt; &lt;section id=&quot;admin-panel&quot;&gt; &lt;button id=&quot;clear-all&quot;&gt;Clear all buses&lt;/button&gt; &lt;button id=&quot;check-all&quot;&gt;Check all buses&lt;/button&gt; &lt;form id=&quot;bus-form&quot; method=&quot;post&quot;&gt; {% for bus in checkbox_data %} &lt;div&gt; {% if checkbox_data[bus] is sameas true %} &lt;input type=&quot;checkbox&quot; name=&quot;bus-{{ bus }}&quot; id=&quot;bus-{{ bus }}&quot; checked&gt; {% else %} &lt;input type=&quot;checkbox&quot; name=&quot;bus-{{ bus }}&quot; id=&quot;bus-{{ bus }}&quot;&gt; {% endif %} &lt;label for=&quot;bus-{{ i }}&quot;&gt;Bus {{ bus }}&lt;/label&gt; &lt;/div&gt; {% endfor %} &lt;input type=&quot;submit&quot; id=&quot;submit-btn&quot;&gt; &lt;/form&gt; &lt;/section&gt; &lt;/div&gt; </code></pre> <p>A working example of the admin page can be found at <a href="https://three-fifteen-app.vercel.app/admin" rel="nofollow noreferrer">https://three-fifteen-app.vercel.app/admin</a>.</p> <p>My issue is with the busdata API I defined in the <code>busdata/</code> route. Whenever the admin updates <code>busdata</code>, it takes forever for the changes made to reflect in the API:</p> <p><a href="https://i.sstatic.net/D10Bk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D10Bk.png" alt="API bug error" /></a></p> <p><em>Notice how the admin shows 3 buses are present on the right, while the busdata API does not reflect this change yet</em></p> <p>And I need the API to be fast enough as my client-side frontend for the app is entirely built around this API. Please let me know what tweaks I can make to my code to resolve this issue.</p>
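One detail worth noting in the code above (a sketch of the aliasing pitfall only, not a full explanation of the API lag, which on a serverless host like Vercel is more likely caused by each instance keeping its own in-memory <code>busdata</code>): <code>busdata = original_busdata</code> does not copy the dict, it binds a second name to the same object, so the &quot;original&quot; state is mutated too:

```python
import copy

original_busdata = {"present": [], "absent": list(range(1, 32))}
busdata = original_busdata          # alias: BOTH names point at one dict

busdata["present"] = [1, 2, 3]
assert original_busdata["present"] == [1, 2, 3]   # the "original" changed too

# An independent, mutable working copy requires a deep copy:
busdata = copy.deepcopy(original_busdata)
busdata["present"].append(4)
assert original_busdata["present"] == [1, 2, 3]   # original now untouched
```

For state that must survive across requests and instances, an external store (a database, Redis, or similar) is the usual fix rather than module-level globals.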
<python><flask>
2023-06-03 19:06:15
0
416
JS4137
76,397,541
4,913,254
Save python libraries in a local directory to run pip install locally when running a docker container
<p>I want to create a docker container that needs to install Python libraries. I could use something like <code>RUN pip install -r requirements.txt</code>. However, the server where I want to run this container has no connection to the Internet. My approach is to create a new folder in my work directory (e.g. python_libraries) and put all the libraries I need there so I can install them locally.</p> <p>I have an env with all the Python libraries needed to run the app. I thought it would be easy to copy the directory where my env's libraries live and then run pip locally. I also thought this would be a typical question on Stack Overflow or similar, but I cannot find a satisfactory answer, although some people have asked something similar.</p> <p>(If I am not wrong) the directory of my python libraries is <code>/Volumes/MANUEL/anaconda3/envs/lookup3.6/lib/python3.6/site-packages</code>, and to ensure that this works before creating the container I am trying <code>pip install -r requirements.txt --no-index --find-links file:/Volumes/MANUEL/anaconda3/envs/lookup3.6/lib/python3.6/site-packages</code> in a new conda env I have created, to see if I really can install libraries locally; then I would use the working approach in my container.</p> <p>The error I get is this:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement absl-py==1.4.0 (from versions: none) ERROR: No matching distribution found for absl-py==1.4.0 </code></pre> <p>I have created this requirements.txt file using <code>pip freeze</code></p>
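Note that <code>--find-links</code> expects a directory of distribution files (wheels/sdists), while <code>site-packages</code> contains already-<em>installed</em> packages, which is why pip reports &quot;versions: none&quot;. A possible offline workflow (a sketch with hypothetical paths; the download step must run on an Internet-connected machine whose Python version and platform match the container) is to pre-download the wheels and copy them into the image:

```dockerfile
# Hypothetical Dockerfile sketch. Beforehand, on an online machine:
#   pip download -r requirements.txt -d python_libraries
FROM python:3.6-slim
WORKDIR /app
COPY requirements.txt /app/
COPY python_libraries/ /app/python_libraries/
# --no-index forbids network access; --find-links points at the local wheels
RUN pip install --no-index --find-links=/app/python_libraries -r requirements.txt
```

The same <code>pip install --no-index --find-links=...</code> command can be tested in a fresh conda env on the host before building the image.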
<python><docker><conda>
2023-06-03 18:43:25
1
1,393
Manolo Dominguez Becerra
76,397,496
475,982
How do I represent an optional component in a grammar with pyparsing?
<p>I am developing a parser that extracts the dose and name from expressions of medication dosages. For example, pulling &quot;10 mg&quot; and &quot;aspirin&quot; from &quot;10mg of aspirin&quot; and &quot;10 mg aspirin&quot;.</p> <p>My attempt in <code>pyparsing</code>:</p> <pre><code>import pyparsing as pp doseWord = pp.Word(pp.alphas) doseNum = pp.Word(pp.nums) unit = pp.Word(pp.alphas) preposition = pp.Word(pp.alphas) chemical = pp.Word(pp.printables) dosage_parser = doseNum + unit + pp.Optional(preposition) + chemical print(dosage_parser.parseString('10mg of aspirin')) # ['10','mg','of','aspirin'] print(dosage_parser.parseString('10mg aspirin')) # Error, expected W(0123...) found end of text. #These two lines should output the same thing. </code></pre> <p><strong>What I've tried</strong></p> <ol> <li>wrapping <code>preposition</code> in <code>pp.Optional</code> - Not working</li> <li>replacing <code>preposition</code> with <code>pp.Combine(pp.Optional(pp.preposition) pp.Empty())</code> - Not working</li> <li>replacing <code>preposition</code> with <code>pp.oneOrMore([pp.preposition,pp.Empty()])</code> - hangs indefinitely as somewhat expected</li> <li>wrapping <code>preposition</code> in <code>pp.ZeroOrMore</code> - Not working.</li> <li>replacing <code>preposition</code> with <code>(pp.Empty() | preposition)</code> - parses incorrectly as ['10', 'mg', 'of']</li> </ol>
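The failure on <code>'10mg aspirin'</code> happens because <code>preposition</code> is also <code>Word(pp.alphas)</code>, so the <code>Optional</code> happily consumes &quot;aspirin&quot; and the mandatory <code>chemical</code> then finds end of text. One possible fix (a sketch; it assumes the prepositions come from a known closed set such as &quot;of&quot; and &quot;in&quot;) is to restrict the optional element to those literal words:

```python
import pyparsing as pp

doseNum = pp.Word(pp.nums)
unit = pp.Word(pp.alphas)
# Only these exact words may be skipped over; an arbitrary Word(alphas)
# would swallow the drug name before `chemical` gets a chance to match.
preposition = pp.Optional(pp.oneOf("of in", asKeyword=True))
chemical = pp.Word(pp.printables)

dosage_parser = doseNum + unit + preposition + chemical

print(dosage_parser.parseString("10mg of aspirin").asList())
print(dosage_parser.parseString("10mg aspirin").asList())
```

<code>asKeyword=True</code> keeps the match whole-word, so a chemical like &quot;insulin&quot; is not mistaken for the preposition &quot;in&quot;. Wrapping the <code>oneOf</code> in <code>pp.Suppress</code> would additionally drop the preposition from the results.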
<python><context-free-grammar><pyparsing>
2023-06-03 18:33:50
1
3,163
mac389
76,397,308
2,387,411
Selenium Python: How to capture li element with specific text
<p>I am trying to extract <code>urlToBeCaptured</code> and <code>Text to be captured</code> from the HTML. The structure looks as follows:</p> <pre><code>&lt;li&gt; &quot; text with trailing spaces &quot; &lt;a href=&quot;urlToBeCaptured&quot;&gt; &lt;span class =&quot;class1&gt; Text to be captured &lt;/span&gt; &lt;span class =&quot;class2&gt; Another text &lt;/span&gt; &lt;/a&gt; ... &lt;/li&gt; </code></pre> <p>I am doing the following, but it doesn't seem to work:</p> <pre><code>el = driver.find_element(By.XPATH, &quot;//li[contains(text(),'text with trailing spaces')]&quot;) </code></pre> <p>Once I locate the element how to extract the text from class1, should it be something like this?</p> <pre><code>textToBeCaptured = el.find_element(By.CLASS_NAME, 'class1').text </code></pre>
<python><selenium-webdriver><web-scraping><xpath><normalize-space>
2023-06-03 17:53:37
1
315
bekon
76,397,098
2,986,042
How to make a variable in global scope in Robot framework?
<p>I have created a small Robot Framework test suite which communicates with a Lauterbach Trace32 debugger. My idea is to run different function names using a loop; in every iteration, it sets a breakpoint via the Trace32 Lauterbach API. I have written a simple Python script as a library for Robot Framework.</p> <p><strong>test.robot file</strong></p> <pre><code>import os *** Settings *** Documentation simple test script to control Trace32 Library Collections Library can.Trace32 Suite Setup Suite Teardown *** Variables *** ${temp} 1 *** Test Cases *** Check Input and Output [Documentation] test script [Setup] #Retrive Data . This list has 5 values ${MainList} Create List #start debugger start Debugger #connect debugger Connect Debugger #Iterate 5 times FOR ${UserAttribute} IN @{MainList} #sleep 1 sec Sleep 1 seconds #call for breakpoint break Debugger ${temp} +=1 END Disconnect Debugger [Teardown] </code></pre> <p>and the Trace32 library file:</p> <pre><code>import time import ctypes from ctypes import c_void_p import enum T32_DEV = 1 class Trace32: def start_Debugger(self): self.t32api = ctypes.cdll.LoadLibrary('D:/test/api/python/t32api64.dll') self.t32api.T32_Config(b&quot;NODE=&quot;,b&quot;localhost&quot;) self.t32api.T32_Config(b&quot;PORT=&quot;,b&quot;20000&quot;) self.t32api.T32_Config(b&quot;PACKLEN=&quot;,b&quot;1024&quot;) rc = self.t32api.T32_GetChannelSize() ch1 = ctypes.create_string_buffer(rc) self.t32api.T32_GetChannelDefaults(ctypes.cast(ch1,ctypes.c_void_p)) ch2 = ctypes.create_string_buffer(rc) self.t32api.T32_GetChannelDefaults(ctypes.cast(ch2,ctypes.c_void_p)) self.t32api.T32_SetChannel(ctypes.cast(ch2,c_void_p)) def Connect_Debugger(self): rc = self.t32api.T32_Init() rc = self.t32api.T32_Attach(T32_DEV) def breakpoint_Debugger(self): rc = self.t32api.T32_Ping() time.sleep(2) rc = self.t32api.T32_Cmd(b&quot;InterCom M7_0 Break&quot;) time.sleep(3) rc = self.t32api.T32_Cmd(b&quot;InterCom M7_0 Go&quot;) time.sleep(2) rc = self.t32api.T32_Cmd(b&quot;InterCom
M7_0 break.Set My_func&quot;) time.sleep(2) def Disconnect_Debugger(self): rc = self.t32api.T32_Exit() </code></pre> <p>In the robot file, I call the <code>start Debugger</code> and <code>Connect Debugger</code> keywords to start and connect the debugger. I want <code>self.t32api</code> to be global so that I can call <code>break Debugger</code> many times to set breakpoints.</p> <p>But unfortunately, I can only set a breakpoint in the first iteration; in the second iteration, the breakpoint does not work. How can I keep <code>self.t32api</code> global until the robot file has executed completely?</p>
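By default, Robot Framework creates a new instance of a class-based library for every test case, so attributes such as <code>self.t32api</code> set during one keyword call can be gone by the next. A possible fix (a sketch; the ctypes DLL loading is replaced by a placeholder helper here) is to declare a global library scope on the <code>Trace32</code> class:

```python
class Trace32:
    # One shared instance for the whole execution, so attributes set in
    # start_Debugger (like self.t32api) survive across keywords and tests.
    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def start_Debugger(self):
        self.t32api = self._load_api()  # in the real library: ctypes.cdll.LoadLibrary(...)

    def _load_api(self):
        return object()                 # hypothetical placeholder for the DLL handle
```

With scope <code>GLOBAL</code> (or <code>SUITE</code> for per-suite state), Robot Framework reuses the same instance instead of recreating it per test.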
<python><robotframework><trace32><lauterbach>
2023-06-03 17:01:20
1
1,300
user2986042
76,397,082
15,520,615
Trying to execute code in a Snowflake Python worksheet, error: missing 1 required positional argument
<p>I am trying to execute code in snowflake Python sheet, but I'm getting the error:</p> <pre><code>Traceback (most recent call last): Worksheet, line 12, in &lt;module&gt; TypeError: __init__() missing 1 required positional argument: 'conn' </code></pre> <p>Can someone take a look at my code and let me know where I'm going wrong.</p> <pre><code>import snowflake.snowpark as snowpark from snowflake.snowpark.functions import col from snowflake.snowpark.functions import * # Create a session session = snowpark.Session() # Define the function to get entity structure def getEntityStruct(connectionInst, log, entityStageId, processId, debug=False): log.writeToLogs(processId, &quot;Info&quot;, &quot;GetEntityStruct&quot;, &quot;GetEntityStruct&quot;) structQuery = f&quot;SELECT * FROM Config.GetEntityStructure WHERE EntityStageID = {entityStageId} ORDER BY EntityColumnOrder&quot; if debug: print(f&quot;getEntityStruct query {structQuery}&quot;) entityColumns = connectionInst.readFromDb(processId, structQuery) struct_fields = [ sp.StructField(col.ColumnName, eval(col.ColumnType), col.IsNullable) for col in entityColumns.select(&quot;ColumnName&quot;, &quot;ColumnType&quot;, &quot;IsNullable&quot;).orderBy(&quot;EntityColumnOrder&quot;).collect() ] struct = sp.StructType(struct_fields) pKey = sp.lit(None) chKey = sp.lit(None) pkRows = ( entityColumns.filter(entityColumns.isPrimaryKey) .groupBy(&quot;EntityId&quot;) .agg(sp.concat_ws(&quot;,&quot;, sp.collect_list(sp.concat(sp.lit(&quot;`&quot;), entityColumns.ColumnName, sp.lit(&quot;`&quot;)))).alias(&quot;keyName&quot;)) .collect() ) if len(pkRows) == 0: log.writeToLogs(processId, &quot;Error&quot;, &quot;NoPKey&quot;, &quot;NoPKey&quot;, errorType=&quot;NoPKey&quot;) log.writeToLogs(processId, &quot;Info&quot;, &quot;FailGetEntityStruct&quot;, &quot;FailGetEntityStruct&quot;, errorType=&quot;FailGetEntityStruct&quot;) raise ValueError(&quot;NoPKey&quot;) pKey = pkRows[0].keyName chkRows = ( 
entityColumns.filter(entityColumns.isChangeTracking) .groupBy(&quot;EntityId&quot;) .agg(sp.concat_ws(&quot;,&quot;, sp.collect_list(sp.concat(sp.lit(&quot;`&quot;), entityColumns.ColumnName, sp.lit(&quot;`&quot;)))).alias(&quot;changeCols&quot;)) .collect() ) if len(chkRows) == 0: log.writeToLogs(processId, &quot;Error&quot;, &quot;NoChKey&quot;, &quot;NoChKey&quot;, errorType=&quot;NoChKey&quot;) log.writeToLogs(processId, &quot;Info&quot;, &quot;FailGetEntityStruct&quot;, &quot;FailGetEntityStruct&quot;, errorType=&quot;FailGetEntityStruct&quot;) raise ValueError(&quot;NoChKey&quot;) for row in chkRows: chKey = row.changeCols regName = entityColumns.select(sp.max(&quot;RegistrationName&quot;).alias(&quot;RegName&quot;)).collect()[0].RegName log.writeToLogs(processId, &quot;Info&quot;, &quot;SuccessGetEntityStruct&quot;, &quot;SuccessGetEntityStruct&quot;) if debug: print(f&quot;getEntityStruct struct {struct}&quot;) return {&quot;struct&quot;: struct, &quot;pKey&quot;: pKey, &quot;chKey&quot;: chKey, &quot;regName&quot;: regName} </code></pre> <p>I have tried to establish a connection with the following</p> <pre><code>conn = snowflake.connector.connect( user='xxxxxx', password='xxxxxxx', account='xxxxx', role= 'ACCOUNTADMIN', warehouse='COMPUTE_WH', database='MYDEMODB', schema='MYSCHEMA' ) </code></pre> <p>But still getting same error</p>
<python><snowflake-cloud-data-platform>
2023-06-03 16:58:16
1
3,011
Patterson
76,397,037
18,876,759
Django full-text search across taggit tags
<h2>My application - the basics</h2> <p>I have a simple Django application which allows for storing information about certain items, and I'm trying to implement a search view/functionality.</p> <p>I'm using <code>django-taggit</code> to tag the items by their functionality/features.</p> <h2>What I want to implement</h2> <p>I want to implement a full-text search that allows searching across all the fields of the items, including their tags.</p> <h2>The problem(s)</h2> <ol> <li>On the results view, the tagged items show up multiple times (one occurrence per tag)</li> <li>The ranking is correct when I specify <em>only a single</em> tag in the search field, but when I specify <em>multiple</em> tag names, I get unexpected ranking results.</li> </ol> <p>I suspect the <code>SearchVector()</code> does not resolve the tags relation as I expected it to do. The tags should be treated just like a list of words in this case.</p> <h2>Example Code</h2> <h3>models.py</h3> <pre class="lang-py prettyprint-override"><code>from django.db import models from taggit.managers import TaggableManager class Item(models.Model): identifier = models.SlugField('ID', unique=True, editable=False) short_text = models.CharField('Short Text', max_length=100, blank=True) serial_number = models.CharField('Serial Number', max_length=30, blank=True) revision = models.CharField('Revision/Version', max_length=30, blank=True) part_number = models.CharField('Part Number', max_length=30, blank=True) manufacturer = models.CharField('Manufacturer', max_length=30, blank=True) description = models.TextField('Description', blank=True) tags = TaggableManager('Tags', blank=True) is_active = models.BooleanField('Active', default=True) </code></pre> <h3>forms.py</h3> <pre class="lang-py prettyprint-override"><code>from django import forms class SearchForm(forms.Form): search = forms.CharField(max_length=200, required=False) active_only = forms.BooleanField(initial=True, label='Show active items only',
required=False) </code></pre> <h3>views.py</h3> <pre class="lang-py prettyprint-override"><code>from django.views.generic.list import ListView from django.contrib.postgres.search import SearchQuery, SearchVector, SearchRank from . import models from . import forms class ItemListView(ListView): form_class = forms.SearchForm model = models.Item fields = ['serial_number', 'part_number', 'manufacturer', 'tags', 'is_active'] template_name_suffix = '_list' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['form'] = self.form_class(self.request.GET) return context def get_queryset(self): queryset = super().get_queryset() form = self.form_class(self.request.GET) if form.is_valid(): if form.cleaned_data['active_only']: queryset = queryset.filter(is_active=True) if not form.cleaned_data['search']: return super().get_queryset() search_vector = SearchVector('identifier', 'short_text', 'serial_number', 'revision', 'part_number', 'manufacturer', 'description', 'tags') search_query = SearchQuery(form.cleaned_data['search'], search_type='websearch') return ( queryset.annotate( search=search_vector, rank=SearchRank(search_vector, search_query) ) # .filter(search=search_query) .order_by(&quot;-rank&quot;).distinct() ) #.filter(search__icontains=form.cleaned_data['search'],) return super().get_queryset() </code></pre>
<python><django><postgresql><tags><full-text-search>
2023-06-03 16:46:07
1
468
slarag
76,396,938
963,671
OptInt type function in Python
<pre class="lang-py prettyprint-override"><code>import pandas as pd import json import requests mass=[] fall=[] year=[] req = requests.get(&quot;https://data.nasa.gov/resource/y77d-th95.json&quot;) response = req.json() for i in range(0,len(response)): mass.append(response[i]['mass']) fall.append(response[i]['fall']) year.append(response[i]['year']) </code></pre> <p>I currently handle the <code>KeyError</code> with exception handling: when I get a <code>KeyError</code>, I append a <code>NaN</code> value. Java has methods like <code>optString</code> that return a default value instead of raising an error for a missing key. I just want to collect all the data into lists using an <code>optString</code>-style method. Can you give an example?</p> <p>Thanks</p>
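Python's closest equivalent to Java's <code>optString</code> is <code>dict.get(key, default)</code>, which returns the default instead of raising <code>KeyError</code>. A minimal sketch (using an inline list of records instead of the NASA API, so it runs offline):

```python
records = [
    {"mass": "21", "fall": "Fell", "year": "1880"},
    {"fall": "Found"},                     # "mass" and "year" are missing here
]

NAN = float("nan")
mass = [r.get("mass", NAN) for r in records]   # .get() never raises KeyError
fall = [r.get("fall", NAN) for r in records]
year = [r.get("year", NAN) for r in records]
```

Applied to the question's loop, <code>response[i].get('mass', float('nan'))</code> replaces the try/except entirely.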
<python>
2023-06-03 16:22:11
1
555
arpit
76,396,924
13,642,459
Nested for loop not looping on the first set, Python
<p>I have written this code in python. In the end I would like to use this to get the indices to cut up a 100x100 matrix into squares that overlap by 10. However, at the bottom there is a nested loop and the y values print how I think they should but not the x-values, the x-values never change... Can anyone help? Thanks</p> <pre><code>x_split = np.linspace(0, 100, 4 + 1, dtype=int) x_start = x_split[:-1] - 5 x_start[0] = 0 x_end = x_split[1:] + 5 x_end[-1] = 100 y_split = np.linspace(0, 100, 4 + 1, dtype=int) y_start = y_split[:-1] - 5 y_start[0] = 0 y_end = y_split[1:] + 5 y_end[-1] = 100 x_inds = zip(x_start, x_end) y_inds = zip(y_start, y_end) i = 0 for start_x, end_x in x_inds: for start_y, end_y in y_inds: i += 1 print(f&quot;i = {i}&quot;) print(f&quot;x = {start_x} {end_x}&quot;) print(f&quot;y = {start_y} {end_y}&quot;) print(&quot;&quot;) </code></pre> <p>Current output:</p> <pre><code>i = 1 x = 0 30 y = 0 30 i = 2 x = 0 30 y = 20 55 i = 3 x = 0 30 y = 45 80 i = 4 x = 0 30 y = 70 100 </code></pre> <p>And then stops. I want to to continue...</p> <pre><code>i = 5 x = 20 55 y = 0 30 i = 6 x = 20 55 y = 20 55 ... </code></pre>
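The reason the x values never change is that <code>zip</code> returns a one-shot iterator: <code>y_inds</code> is fully consumed during the first pass of the outer loop, so every later outer iteration sees an empty inner loop and the body (which prints both x and y) never runs again. A minimal sketch of the failure and the fix:

```python
x_inds = zip([0, 20], [30, 55])      # one-shot iterator
y_inds = zip([0, 20], [30, 55])      # one-shot iterator

pairs = [(x, y) for x in x_inds for y in y_inds]
# y_inds was exhausted during the first outer iteration -> only 2 pairs
assert len(pairs) == 2

# Fix: materialize the zips so they can be iterated repeatedly
x_inds = list(zip([0, 20], [30, 55]))
y_inds = list(zip([0, 20], [30, 55]))
pairs = [(x, y) for x in x_inds for y in y_inds]
assert len(pairs) == 4
```

In the original code, wrapping both zips in <code>list(...)</code> (or re-creating <code>y_inds</code> inside the outer loop) yields all 16 combinations.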
<python>
2023-06-03 16:20:07
1
456
Hugh Warden
76,396,922
22,009,322
How to draw broken bars if entities are duplicated
<p>I want to draw a broken bar diagram with a list of band members that joined and left a music band. I managed to draw a grid as I wanted to, but got stuck drawing the bars for band members, since the names repeat. I know how to do it when band members are unique but the date columns repeat, e.g. Year_start1, Year_end1, Year_start2, Year_end2, etc., but I am stuck on the case where band members are duplicated. So each person should appear only once on the y axis, with a set of broken bars (if the person joined the band more than once). I appreciate any help!</p> <p>Code example:</p> <pre><code>import matplotlib.pyplot as plt import pandas as pd result = pd.DataFrame([['Bill', 1972, 1974], ['Bill', 1976, 1978], ['Bill', 1967, 1971], ['Danny', 1969, 1975], ['Danny', 1976, 1977], ['James', 1971, 1972], ['Marshall', 1967, 1975]], columns=['Person', 'Year_start', 'Year_left']) print(result) fig, ax = plt.subplots() persons = result.Person.unique() person_count = len(persons) ybound = len(persons)*10 ystep = int(ybound/person_count) numbers = range(5, ybound, ystep) ax.set_ylim(0, ybound) ax.set_xlim(min(result.Year_start)-1, max(result.Year_left)+1) ax.set_xlabel('Years') ax.set_yticks(numbers, labels=persons) ax.grid(True) plt.show() </code></pre> <p>I guess I should use a &quot;for&quot; loop and put the dataframe in &quot;zip&quot;, but not sure how exactly.</p>
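For <code>ax.broken_barh</code>, each person needs a single list of <code>(start, width)</code> tuples covering all of their stints. The grouping step can be sketched without matplotlib (the rows mirror the DataFrame in the question, but plain tuples are used so the sketch has no dependencies):

```python
from collections import defaultdict

rows = [("Bill", 1972, 1974), ("Bill", 1976, 1978), ("Bill", 1967, 1971),
        ("Danny", 1969, 1975), ("Danny", 1976, 1977),
        ("James", 1971, 1972), ("Marshall", 1967, 1975)]

# One entry per person: a list of (start, width) tuples for broken_barh
xranges = defaultdict(list)
for person, start, left in rows:
    xranges[person].append((start, left - start))

# Each person then gets ONE call like:
#   ax.broken_barh(xranges["Bill"], (y_position, bar_height))
```

With the question's DataFrame, the same grouping would be <code>result.groupby('Person')</code> with <code>zip(g.Year_start, g.Year_left - g.Year_start)</code> per group, followed by one <code>broken_barh</code> call per person at that person's y position.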
<python><pandas><matplotlib>
2023-06-03 16:20:04
1
333
muted_buddy
76,396,701
12,304,000
Import Python libraries (e.g. rapidjson) in Airflow
<p>I want to use the Python library <strong>rapidjson</strong> in my Airflow DAG. My code repo is hosted on Git. Whenever I merge something into the master or test branch, the changes are automatically configured to reflect on the Airflow UI.</p> <p>My Airflow is hosted as a VM on AWS EC2. Under the EC2 instances, I see three different instances for: scheduler, webserver, workers.</p> <p>I connected to these 3 individually via Session Manager. Once the terminal opened, I installed the library using</p> <pre><code>pip install python-rapidjson </code></pre> <p>I also verified the installation using <code>pip list</code>. Now, I import the library in my dag's code simply like this:</p> <pre><code>import rapidjson </code></pre> <p>However, when I open the Airflow UI, my DAG has an error that:</p> <pre><code>No module named 'rapidjson' </code></pre> <p>Are there additional steps that I am missing out on? Do I need to import it into my Airflow code base in any other way as well?</p> <p>Within my Airflow git repository, I also have a <strong>&quot;requirements.txt&quot;</strong> file. I tried to include</p> <p><strong>python-rapidjson==1.5.5</strong></p> <p>this there as well but I do not know how to actually install this.</p> <p>I tried this:</p> <p><strong>pip install requirements.txt</strong></p> <p>within the session manager's terminal as well. However, the terminal is not able to locate this file. In fact, when I do &quot;ls&quot;, I don't see anything.</p> <pre><code>pwd /var/snap/amazon-ssm-agent/6522 </code></pre>
<python><amazon-ec2><airflow><airflow-webserver>
2023-06-03 15:20:02
1
3,522
x89
76,396,615
2,221,360
Make QPushButton as Progress Bar in PyQt or PySide
<p>My question is similar to what is mentioned in this thread for QPushButtion instead of QLineEdit <a href="https://stackoverflow.com/questions/36972132/how-to-turn-qlineedit-background-into-a-progress-bar">How to turn QLineEdit background into a Progress Bar</a>.</p> <p>Here is what I tried to implement which works but looks ugly. Content of <strong>main.py</strong>:</p> <pre><code>#!/bin/env python import sys from PySide6 import QtWidgets from main_ui import Ui_Dialog class Progress(QtWidgets.QDialog): def __init__(self): super().__init__() self.ui = Ui_Dialog() self.ui.setupUi(self) self.value = 0 self.ui.btn.clicked.connect(self.updateProgress) def updateProgress(self): if self.value &gt; 1: self.value = 0 self.ui.btn_progress.setStyleSheet('background-color: white') else: self.value = self.value + 0.1 self.ui.label.setText(str(self.value)) self.ui.btn_progress.setStyleSheet(('background: qlineargradient(x1:0, y1:0, x2:1, y2:0, stop: 0 #1bb77b, stop: ' + (str(self.value)) + ' #1bb77b, stop: ' + str(self.value + 0.001) + ' rgba(0, 0, 0, 0), stop: 1 white)')) app = QtWidgets.QApplication(sys.argv) window = Progress() window.show() app.exec() </code></pre> <p>Here is the content of main_ui.py file:</p> <pre><code># -*- coding: utf-8 -*- ################################################################################ ## Form generated from reading UI file 'test_dialog.ui' ## ## Created by: Qt User Interface Compiler version 6.5.1 ## ## WARNING! All changes made in this file will be lost when recompiling UI file! 
################################################################################ from PySide6.QtCore import (QCoreApplication, QDate, QDateTime, QLocale, QMetaObject, QObject, QPoint, QRect, QSize, QTime, QUrl, Qt) from PySide6.QtGui import (QBrush, QColor, QConicalGradient, QCursor, QFont, QFontDatabase, QGradient, QIcon, QImage, QKeySequence, QLinearGradient, QPainter, QPalette, QPixmap, QRadialGradient, QTransform) from PySide6.QtWidgets import (QApplication, QDialog, QLabel, QPushButton, QSizePolicy, QVBoxLayout, QWidget) class Ui_Dialog(object): def setupUi(self, Dialog): if not Dialog.objectName(): Dialog.setObjectName(u&quot;Dialog&quot;) Dialog.resize(680, 97) self.verticalLayout = QVBoxLayout(Dialog) self.verticalLayout.setObjectName(u&quot;verticalLayout&quot;) self.btn_progress = QPushButton(Dialog) self.btn_progress.setObjectName(u&quot;btn_progress&quot;) self.verticalLayout.addWidget(self.btn_progress) self.label = QLabel(Dialog) self.label.setObjectName(u&quot;label&quot;) self.verticalLayout.addWidget(self.label) self.btn = QPushButton(Dialog) self.btn.setObjectName(u&quot;btn&quot;) self.verticalLayout.addWidget(self.btn) self.retranslateUi(Dialog) QMetaObject.connectSlotsByName(Dialog) # setupUi def retranslateUi(self, Dialog): Dialog.setWindowTitle(QCoreApplication.translate(&quot;Dialog&quot;, u&quot;Dialog&quot;, None)) self.btn_progress.setText(QCoreApplication.translate(&quot;Dialog&quot;, u&quot;Progress Button&quot;, None)) self.label.setText(QCoreApplication.translate(&quot;Dialog&quot;, u&quot;&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p align=\&quot;center\&quot;&gt;&lt;span style=\&quot; font-size:12pt;\&quot;&gt;Progress Value : &lt;/span&gt;&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;&quot;, None)) self.btn.setText(QCoreApplication.translate(&quot;Dialog&quot;, u&quot;PushButton&quot;, None)) # retranslateUi </code></pre> <p>Is it possible to implement the above code without gradient color so that to make it beautiful?</p>
<python><pyqt><pyside>
2023-06-03 14:57:23
0
3,910
sundar_ima
76,396,594
19,003,861
Nested for loop - model.id in parent for loop does not match model.id in nested for loop (django)
<p>I am trying to access data from a parent to a child via a foreign key.</p> <p><strong>WHAT WORKS - the views</strong></p> <p>The data in the child is not &quot;ready to be used&quot; and needs to be processed in order to be represented in a progress bar in %.</p> <p>The data processing is handled in the views. When I print it on the console, it seems to work, and it is stored in a variable <code>reward_positions</code>.</p> <pre><code>Reward positions = [(&lt;Venue: Venue_name&gt;, reward_points, reward_position_on_bar)] </code></pre> <p>So this part works.</p> <p>The plan is therefore to access <code>reward_position_on_bar</code> by calling <code>{{reward_positions.2}}</code>.</p> <p><strong>WHAT DOESN'T WORK - the template</strong></p> <p>But something is not going to plan in the template.</p> <p>The template renders the last <code>child_model</code> (that's RewardProgram) objects of the last <code>parent_id</code> (that's Venue) irrespective of the actual <code>parent_id</code> processed in the for loop.</p> <p><strong>TEST RESULT &amp; WHERE I THINK THE PROBLEM IS</strong></p> <p>I think my problem lies in my nested for loop.
The <code>parent_id</code> in the parent for loop does not match the <code>{{reward_position.0}}</code> in the nested for loop.</p> <p>Doing a verification test, <code>{{key}}</code> should be equal to <code>{{reward_position.0}}</code> as they both go through the same parent for loop.</p> <p>However, while <code>{{key}}</code> does change based on venue.id (the parent for loop id), <code>{{reward_position.0}}</code> is stuck on the same id irrespective of the parent for loop id.</p> <p>Can anyone see what I am doing wrong?</p> <p><strong>THE CODE</strong></p> <p><strong>models</strong></p> <pre><code>class Venue(models.Model): name = models.CharField(verbose_name=&quot;Name&quot;,max_length=100, blank=True) class RewardProgram(models.Model): venue = models.ForeignKey(Venue, null = True, blank=True, on_delete=models.CASCADE, related_name=&quot;venuerewardprogram&quot;) title = models.CharField(verbose_name=&quot;reward_title&quot;,max_length=100, null=True, blank=True) points = models.IntegerField(verbose_name = 'points', null = True, blank=True, default=0) </code></pre> <p><strong>views</strong></p> <pre><code>def list_venues(request): venue_markers = Venue.objects.filter(venue_active=True) #Progress bar per venue bar_total_lenght = 100 rewards_available_per_venue = 0 reward_position_on_bar = 0 venue_data = {} reward_positions = {} for venue in venue_markers: print(f'venue name ={venue}') #list all reward programs venue.reward_programs = venue.venuerewardprogram.all() reward_program_per_venue = venue.reward_programs #creates a list of reward points needed for each venue for each object reward_points_per_venue_test = [] #appends the points to empty list from reward program from each venue for rewardprogram in reward_program_per_venue: reward_points_per_venue_test.append(rewardprogram.points) #sorts list in descending order reward_points_per_venue_test.sort(reverse=True) #set position of highest reward to 100 (100% of bar length) if reward_points_per_venue_test: highest_reward =
reward_points_per_venue_test[0] if not reward_program_per_venue: pass else: #counts reward program per venue rewards_available_per_venue = venue.reward_programs.count() if rewards_available_per_venue == 0: pass else: #position of reward on bar reward_positions = [] for rewardprogram in reward_program_per_venue: #list all points needed per reward program objects reward_points = rewardprogram.points #position each reward on bar reward_position_on_bar = reward_points/highest_reward reward_positions.append((venue, reward_points, reward_position_on_bar)) #reward_positions[venue.id] = reward_position_on_bar reward_positions = reward_positions print(f'Reward positions = {reward_positions}') context = {'reward_positions':reward_positions,'venue_data':venue_data,'venue_markers':venue_markers} return render(request,'template.html',context) </code></pre> <p><strong>template</strong></p> <pre><code> {%for venue in venue_markers%} {%for key, value in venue_data.items%} {%if key == venue.id%} #venue.id = 3 {% for reward_position in reward_positions %}#test result {{reward_position.0.id}} # = id = 7 (thats not the good result) {{key}} #id = 3 (thats the good result) {% endfor %} &lt;div class=&quot;progress-bar bg-success&quot; role=&quot;progressbar&quot; style=&quot;width: {{value}}%&quot; aria-valuenow=&quot;{{value}}&quot; aria-valuemin=&quot;0&quot; aria-valuemax=&quot;100&quot;&gt;&lt;/div&gt; {%endif%} {%endfor%} {%endfor%} </code></pre>
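A likely cause of the symptom described above: `reward_positions` is reassigned to a fresh list on every pass of the venue loop, so only the last venue's list survives into the template context. A minimal, framework-free sketch of one possible fix, accumulating positions per venue id instead of overwriting (plain dicts and lists stand in for the Django querysets; the values are hypothetical):

```python
# Plain data standing in for Venue / RewardProgram querysets.
programs_by_venue = {
    3: [100, 50, 25],   # venue.id -> reward points for that venue
    7: [200, 80],
}

reward_positions = {}  # venue.id -> [(points, position_on_bar), ...]
for venue_id, points_list in programs_by_venue.items():
    if not points_list:
        continue
    highest = max(points_list)
    # Accumulate into a dict keyed by venue id instead of overwriting a
    # single list, so each venue keeps its own positions.
    reward_positions[venue_id] = [
        (p, p / highest) for p in sorted(points_list, reverse=True)
    ]
```

In the template the positions can then be looked up per venue (or, simpler still, attached to each venue object in the view as `venue.reward_positions = ...` and iterated directly, which removes the nested key comparison entirely).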
<python><html><django><django-views><django-templates>
2023-06-03 14:50:36
2
415
PhilM
76,396,569
15,448,022
Calculating Collective Count of departments on individual dates from a given date range
<p>I have the following table</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Function</th> <th>Department</th> <th>Start Date</th> <th>End Date</th> </tr> </thead> <tbody> <tr> <td>Const</td> <td>Const 1</td> <td>2023-03-01</td> <td>2023-03-05</td> </tr> <tr> <td>Const</td> <td>Const 2</td> <td>2023-03-02</td> <td>2023-03-03</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-02</td> <td>2023-03-05</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-01</td> <td>2023-03-06</td> </tr> <tr> <td>Const</td> <td>Const 1</td> <td>2023-03-03</td> <td>2023-03-07</td> </tr> <tr> <td>Const</td> <td>Const 2</td> <td>2023-03-02</td> <td>2023-03-05</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-06</td> <td>2023-03-09</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-05</td> <td>2023-03-08</td> </tr> </tbody> </table> </div> <p>I want to get per date the total count in each department. Both start date and end date and included in counting.</p> <p>It would be nice to have an intermediate output as follows</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Function</th> <th>Department</th> <th>Date</th> <th>Count</th> </tr> </thead> <tbody> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-01</td> <td>1</td> </tr> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-02</td> <td>1</td> </tr> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-03</td> <td>2</td> </tr> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-04</td> <td>2</td> </tr> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-05</td> <td>2</td> </tr> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-06</td> <td>1</td> </tr> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-07</td> <td>1</td> </tr> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-08</td> <td>0</td> </tr> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-09</td> <td>0</td> </tr> <tr> <td>Const</td> <td>Const1</td> <td>2023-03-10</td> <td>0</td> </tr> <tr> <td>Const</td> 
<td>Const2</td> <td>2023-03-01</td> <td>0</td> </tr> <tr> <td>Const</td> <td>Const2</td> <td>2023-03-02</td> <td>2</td> </tr> <tr> <td>Const</td> <td>Const2</td> <td>2023-03-03</td> <td>2</td> </tr> <tr> <td>Const</td> <td>Const2</td> <td>2023-03-04</td> <td>1</td> </tr> <tr> <td>Const</td> <td>Const2</td> <td>2023-03-05</td> <td>1</td> </tr> <tr> <td>Const</td> <td>Const2</td> <td>2023-03-06</td> <td>0</td> </tr> <tr> <td>Const</td> <td>Const2</td> <td>2023-03-07</td> <td>0</td> </tr> <tr> <td>Const</td> <td>Const2</td> <td>2023-03-08</td> <td>0</td> </tr> <tr> <td>Const</td> <td>Const2</td> <td>2023-03-09</td> <td>0</td> </tr> <tr> <td>Const</td> <td>Const2</td> <td>2023-03-10</td> <td>0</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-01</td> <td>0</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-02</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-03</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-04</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-05</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-06</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-07</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-08</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-09</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 1</td> <td>2023-03-10</td> <td>0</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-01</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-02</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-03</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-04</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-05</td> <td>2</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-06</td> <td>2</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-07</td> <td>1</td> 
</tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-08</td> <td>1</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-09</td> <td>0</td> </tr> <tr> <td>Mining</td> <td>Mining 2</td> <td>2023-03-10</td> <td>0</td> </tr> </tbody> </table> </div> <p>The desired final output is a pandas df as follows</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>Const 1</th> <th>Const 2</th> <th>Mining 1</th> <th>Mining 2</th> </tr> </thead> <tbody> <tr> <td>2023-03-01</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> </tr> <tr> <td>2023-03-02</td> <td>1</td> <td>2</td> <td>1</td> <td>1</td> </tr> <tr> <td>2023-03-03</td> <td>2</td> <td>2</td> <td>1</td> <td>1</td> </tr> <tr> <td>2023-03-04</td> <td>2</td> <td>1</td> <td>1</td> <td>1</td> </tr> <tr> <td>2023-03-05</td> <td>2</td> <td>1</td> <td>1</td> <td>2</td> </tr> </tbody> </table> </div>
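One possible approach for the tables above: expand each row's interval into individual dates with `pd.date_range` (inclusive of both endpoints by default), explode to long form, then pivot. Sketched with the sample rows; column names are taken from the question's first table:

```python
import pandas as pd

df = pd.DataFrame({
    "Function":   ["Const", "Const", "Mining", "Mining", "Const", "Const", "Mining", "Mining"],
    "Department": ["Const 1", "Const 2", "Mining 1", "Mining 2",
                   "Const 1", "Const 2", "Mining 1", "Mining 2"],
    "Start Date": pd.to_datetime(["2023-03-01", "2023-03-02", "2023-03-02", "2023-03-01",
                                  "2023-03-03", "2023-03-02", "2023-03-06", "2023-03-05"]),
    "End Date":   pd.to_datetime(["2023-03-05", "2023-03-03", "2023-03-05", "2023-03-06",
                                  "2023-03-07", "2023-03-09", "2023-03-09", "2023-03-08"]),
})
# Use the question's actual End Dates for rows 5-6:
df.loc[5, "End Date"] = pd.Timestamp("2023-03-05")

# One row per (original row, active date) - the intermediate long format.
df["Date"] = df.apply(lambda r: list(pd.date_range(r["Start Date"], r["End Date"])), axis=1)
long_df = df.explode("Date")

# Wide table: one column per department, zero where nobody is active.
counts = long_df.groupby(["Date", "Department"]).size().unstack(fill_value=0)
```

Dates on which no department is active at all (e.g. dates past every End Date) will not appear as rows; `counts.reindex(pd.date_range(start, end), fill_value=0)` over the full span adds them if the all-zero rows are needed.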
<python><pandas><dataframe><datetime>
2023-06-03 14:45:33
1
378
aj7amigo
76,396,462
9,470,099
How to create my own debugger server for selenium?
<p>I am trying out Selenium using Python, and I have seen this option in some places:</p> <pre class="lang-py prettyprint-override"><code>chrome_options.add_experimental_option(&quot;debuggerAddress&quot;, debugger_address) </code></pre> <p>How can I create my own debugger server? I've been googling but couldn't find any documentation about this. Can anyone point me in the right direction?</p>
<python><selenium-webdriver>
2023-06-03 14:14:11
0
4,965
Jeeva
76,396,262
19,130,803
Plotly: auto resize height
<p>I am creating a bar graph using <code>plotly express</code> inside a <code>dash</code> application. The graph is displayed, but I am having an issue with <code>height</code>. Currently I am using the <code>default</code> height and width.</p> <p>For example:</p> <ol> <li><p>When the dataframe's <code>field</code> column contains 3 entries, the graph looks OK.</p> </li> <li><p>When the dataframe's <code>field</code> column contains 10 entries, the bar width is automatically reduced while the height stays the same, so the graph looks congested and is hard to read.</p> </li> </ol> <pre><code>figure = ( px.bar( data_frame=dataframe, x=&quot;size&quot;, y=&quot;field&quot;, title=&quot;Memory Usage&quot;, text=&quot;size&quot;, # width=400, # height=400, orientation=&quot;h&quot;, labels={&quot;size&quot;: &quot;size in byte(s)&quot;}, template=template, ).update_traces(width=0.4) .update_layout(autosize=True) ) dcc.Graph(id=&quot;memory_bar&quot;, figure=figure, className=&quot;dbc&quot;) </code></pre> <p>Is it possible to auto-resize the height depending on the number of entries? Also, I am using <code>orientation</code> as <code>horizontal</code>. I tried <code>autosize=True</code> but it had no effect; the height remains the same.</p>
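Plotly's `autosize` fits the figure to its container; it does not grow the figure with the number of categories. A common workaround is to compute the height from the row count before building the figure. A sketch; the per-bar and padding pixel values are guesses to tune:

```python
def bar_chart_height(n_rows, px_per_bar=40, padding=160, minimum=400):
    """Pixel height for a horizontal bar chart with n_rows categories.

    padding covers the title, axis labels and margins; all constants
    are tunable assumptions, not Plotly defaults.
    """
    return max(minimum, padding + px_per_bar * n_rows)

# Then pass the computed value instead of relying on the default, e.g.:
#   px.bar(..., height=bar_chart_height(len(dataframe)))
```

Because the `dcc.Graph` is rebuilt on each callback in Dash, recomputing the height whenever the dataframe changes keeps the bars at a constant readable thickness.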
<python><plotly>
2023-06-03 13:13:48
1
962
winter
76,396,157
264,136
Add a field in doc if it does not exist else update
<p>I am using the below code to update <code>job_start_time</code> in a doc:</p> <pre><code>myclient = pymongo.MongoClient(&quot;mongodb://10.64.127.94:27017/&quot;) mydb = myclient[&quot;UPTeam&quot;] mycol = mydb[&quot;perf_sdwan_queue&quot;] myquery = {&quot;$and&quot;:[ {&quot;job_job_id&quot;: current_job_id}, {&quot;job_queue_name&quot;: &quot;CURIE_BLR&quot;}]} my_jobs = mycol.find(myquery) newvalues = { &quot;$set&quot;: { &quot;job_start_time&quot;:datetime.datetime.utcnow()} } mycol.update_one(myquery, newvalues) </code></pre> <p>This works fine if the field exists. I want to have a code where it either updates the field if it exists else to create a new field.</p>
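Worth noting: MongoDB's `$set` already creates the field when it is absent, so the `update_one` call shown above covers both cases; `upsert=True` is only needed if the whole document might be missing. A stand-alone illustration of the `$set` semantics (plain dicts stand in for documents, since no live MongoDB is assumed here):

```python
def apply_set(document, set_fields):
    # Mirrors MongoDB's $set: existing keys are overwritten,
    # missing keys are created.
    updated = dict(document)
    updated.update(set_fields)
    return updated

has_field = apply_set({"job_job_id": 1, "job_start_time": "old"},
                      {"job_start_time": "new"})
missing_field = apply_set({"job_job_id": 1},
                          {"job_start_time": "new"})
```

With pymongo that is simply `mycol.update_one(myquery, newvalues, upsert=True)` if the matching document itself may not exist yet.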
<python><pymongo>
2023-06-03 12:46:32
1
5,538
Akshay J
76,396,112
1,831,784
Google Photos API: UnknownApiNameOrVersion when using googleapiclient.discovery.build
<p>I'm trying to use the Google Photos API with the Python client library googleapiclient.discovery.build method. However, I'm encountering an &quot;UnknownApiNameOrVersion&quot; error when attempting to create the client.</p> <p>Here's the code I'm using:</p> <pre><code>from google.oauth2 import service_account import googleapiclient.discovery credentials = service_account.Credentials.from_service_account_file('/path/to/service-account-file.json') scopes = ['https://www.googleapis.com/auth/photoslibrary'] credentials = credentials.with_scopes(scopes) service = googleapiclient.discovery.build('photoslibrary', 'v1', credentials=credentials) </code></pre> <p>Unfortunately, I'm receiving the following error message:</p> <pre><code>UnknownApiNameOrVersion: name: photoslibrary version: v1 </code></pre> <p>I've checked the documentation and examples, but I can't find a clear solution to this issue. Am I missing something in my code? Is there an alternative way to create the Google Photos API client?</p> <p>I would appreciate any guidance or insights on how to resolve this error and successfully create the Google Photos API client using the Python client library.</p>
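Recent versions of google-api-python-client ship a static bundle of discovery documents, and the Photos Library API is not in it, which is one known cause of exactly this `UnknownApiNameOrVersion` error. A commonly suggested fix (sketched and untested here) is to force runtime discovery via the published discovery endpoint:

```python
from typing import Any

PHOTOS_DISCOVERY_URL = "https://photoslibrary.googleapis.com/$discovery/rest?version=v1"

def build_photos_service(credentials: Any):
    # Imported lazily so the helper can be defined without the client library.
    import googleapiclient.discovery
    return googleapiclient.discovery.build(
        "photoslibrary", "v1",
        credentials=credentials,
        discoveryServiceUrl=PHOTOS_DISCOVERY_URL,
        static_discovery=False,  # skip the bundled static discovery documents
    )
```

Separately, the Photos Library API generally requires OAuth user consent for a user's library; plain service-account credentials, as in the question's snippet, are typically rejected even after the discovery issue is fixed.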
<python><python-3.x><google-cloud-platform>
2023-06-03 12:34:45
0
3,035
Itachi
76,396,049
9,212,050
PySpark: Create a new column in dataframe based on another dataframe's cell values
<p>I have PySpark dataframe <code>dhl_price</code> of the following form:</p> <pre><code>+------+-----+-----+-----+------+ |Weight| A| B| C| D| +------+-----+-----+-----+------+ | 1|16.78|17.05|20.23| 40.1| | 2|16.78|17.05|20.23| 58.07| | 3|18.43|18.86| 25.0| 66.03| | 4|20.08|20.67|29.77| 73.99| </code></pre> <p>So you can get the delivery price based on the category (i.e. the columns <code>A</code>, <code>B</code>, <code>C</code>, <code>D</code>) and the weight of your parcel (i.e. the first column <code>Weight</code>). For weights larger than 30, we have prices specified only for 30, 40, 50, etc.</p> <p>I also have PySpark dataframe <code>requests</code>, one row for each request by a customer. It includes columns <code>product_weight</code> and <code>Type</code> (the category that is in dhl_price). I want to create a new column <code>delivery_fee</code> in <code>requests</code> based on the <code>dhl_price</code> dataframe. In particular, for each row in <code>requests</code>, I want to get the cell value in <code>dhl_price</code> whose column is the one specified in column <code>Type</code> and whose row is the one specified in column <code>product_weight</code> of the <code>requests</code> dataframe.</p> <p>So far I could code it in <strong>pandas</strong>:</p> <pre><code>def get_dhl_fee(weight, type): if weight &lt;= 30: price = dhl_price.loc[dhl_price[&quot;Weight&quot;] == weight][type].values[0] else: price = dhl_price.loc[dhl_price[&quot;Weight&quot;] &gt;= weight].reset_index(drop = True).iloc[0][type].values[0] return price new_requests[&quot;dhl_fee&quot;] = new_requests.apply(lambda x: get_dhl_fee(x[&quot;product_weight_g&quot;], x[&quot;Type&quot;]), axis = 1) </code></pre> <p>How can I do the same with PySpark?
I tried to use PySpark's UDF:</p> <pre><code># Define the UDF (User-Defined Function) for calculating DHL fee @fn.udf(returnType=DoubleType()) def get_dhl_fee(product_weight_g, calculate_way): broadcast_dhl_price = fn.broadcast(dhl_price) if weight &lt;= 30: price = broadcast_dhl_price.filter(dhl_price[&quot;Weight&quot;] == weight).select(calculate_way).first()[0] else: price = broadcast_dhl_price.filter(dhl_price[&quot;Weight&quot;] &gt;= weight).select(calculate_way).first()[0] return price # Register the UDF sc.udf.register(&quot;get_dhl_fee&quot;, get_dhl_fee) # Apply the UDF to calculate dhl_fee column requests = requests.withColumn(&quot;dhl_fee&quot;, get_dhl_fee(fn.col(&quot;product_weight&quot;), fn.col(&quot;Type&quot;))) </code></pre> <p>but it returns error SPARK-5063:</p> <blockquote> <p>&quot;It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers.&quot;</p> </blockquote>
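The SPARK-5063 error is expected: a UDF runs on executors and cannot perform DataFrame operations. The usual alternative is to reshape the price grid to long form and join. The transformation is illustrated below with pandas `merge_asof` (so it runs without a Spark session); in PySpark the same idea is an unpivot (`stack`) followed by a join on `Type` plus a range condition `Weight >= product_weight`, keeping the minimum matching `Weight` per request:

```python
import pandas as pd

# Long form of the price grid: one row per (Weight, Type).
dhl_price = pd.DataFrame({"Weight": [1, 2, 3, 4],
                          "A": [16.78, 16.78, 18.43, 20.08],
                          "B": [17.05, 17.05, 18.86, 20.67]})
long_price = dhl_price.melt(id_vars="Weight", var_name="Type", value_name="dhl_fee")

requests = pd.DataFrame({"product_weight": [3, 1], "Type": ["B", "A"]})

# For each request, take the first price row of the same Type whose
# Weight >= product_weight (this also covers the 35 -> 40 bucket case).
fees = pd.merge_asof(
    requests.sort_values("product_weight"),
    long_price.sort_values("Weight"),
    left_on="product_weight", right_on="Weight",
    by="Type", direction="forward",
)
```

In PySpark the equivalent join would be roughly `requests.join(long_price, (requests.Type == long_price.Type) & (long_price.Weight >= requests.product_weight))` followed by a `min(Weight)` per request row; no UDF, and no DataFrame referenced inside one.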
<python><pandas><apache-spark><pyspark>
2023-06-03 12:18:48
1
1,404
Sayyor Y
76,395,984
1,946,418
singleton inheritance from a base class
<p>Tech: Python 3.11.2</p> <pre class="lang-py prettyprint-override"><code>from mylogging import MyLogging class BaseClass: def __init__(self) -&gt; None: self.logger = MyLogging(logName=self.__class__.__name__) # Singleton class ChildSingletonClass(BaseClass): __instance = None def __new__(cls): if cls.__instance is None: cls.__instance = super(ChildSingletonClass, cls).__new__(cls) return cls.__instance def __init__(self): self.logger.debug(f&quot;{self} address = {id(self)}&quot;) </code></pre> <p>Have a <code>ChildSingletonClass</code> &quot;singleton&quot; class, that I now need to inherit from <code>BaseClass</code>, so I can take advantage of the inheritance goodness. But running into this error</p> <pre><code> self.logger.debug(f&quot;{self} address = {id(self)}&quot;) ^^^^^^^^^^^ AttributeError: 'ChildSingletonClass' object has no attribute 'logger' </code></pre> <p>I've tried changing <code>__new__</code> to call <code>super(BaseClass, cls)</code> instead, but that doesn't seem to help either.</p> <p>Any ideas anyone? TIA</p>
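The `AttributeError` above happens because `ChildSingletonClass.__init__` overrides `BaseClass.__init__` without calling it, so `self.logger` is never created. A minimal self-contained sketch of the fix (a string stands in for `MyLogging`):

```python
class BaseClass:
    def __init__(self) -> None:
        # Stand-in for MyLogging(logName=self.__class__.__name__)
        self.logger = f"logger:{type(self).__name__}"

class ChildSingletonClass(BaseClass):
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        super().__init__()  # the missing call: this is what creates self.logger
        # self.logger is now safe to use

a = ChildSingletonClass()
b = ChildSingletonClass()
```

One caveat of this singleton style: `__init__` still runs on every `ChildSingletonClass()` call even though `__new__` returns the cached instance, so guard it with an "already initialised" flag if re-initialisation matters.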
<python><oop><inheritance><singleton>
2023-06-03 12:01:38
1
1,120
scorpion35
76,395,953
131,874
Regex to catch email addresses in email header
<p>I'm trying to parse a <code>To</code> email header with a regex. If there are no <code>&lt;&gt;</code> characters then I want the whole string otherwise I want what is inside the <code>&lt;&gt;</code> pair.</p> <pre class="lang-python prettyprint-override"><code>import re re_destinatario = re.compile(r'^.*?&lt;?(?P&lt;to&gt;.*)&gt;?') addresses = [ 'XKYDF/ABC (Caixa Corporativa)', 'Fulano de Tal | Atlantica Beans &lt;fulano.tal@atlanticabeans.com&gt;' ] for address in addresses: m = re_destinatario.search(address) print(m.groups()) print(m.group('to')) </code></pre> <p>But the regex is wrong:</p> <pre><code>('XKYDF/ABC (Caixa Corporativa)',) XKYDF/ABC (Caixa Corporativa) ('Fulano de Tal | Atlantica Beans &lt;fulano.tal@atlanticabeans.com&gt;',) Fulano de Tal | Atlantica Beans &lt;fulano.tal@atlanticabeans.com&gt; </code></pre> <p>What am I missing?</p>
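The trailing `>?` is optional, so the greedy `.*` in the pattern above simply swallows the angle brackets along with everything else. One robust fix is to stop forcing both cases into a single pattern and branch instead (the stdlib `email.utils.parseaddr` is another option for real headers). A sketch:

```python
import re

ANGLE_ADDR = re.compile(r'<(?P<to>[^>]*)>')

def extract_to(address: str) -> str:
    """Return the part inside <>, or the whole string if there is no <> pair."""
    m = ANGLE_ADDR.search(address)
    return m.group('to') if m else address
```

`[^>]*` cannot run past the closing bracket, so greediness is no longer a problem.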
<python><regex><email-headers><email-address>
2023-06-03 11:52:19
2
126,654
Clodoaldo Neto
76,395,885
10,755,032
Python - How to get the article titles from either h2 or div tag in medium.com
<p>I am scraping medium.com. I have encountered a problem which is that some publications' article titles are in the <code>h2</code> tag whereas for some others it is in <code>div</code>. Now I am writing a function that takes in a link of the publication and returns the titles of articles in the page. I don't know which type I will be getting as an input. How should I tackle this? <a href="https://i.sstatic.net/cXrml.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cXrml.png" alt="enter image description here" /></a> In this the article titles are in h2 tag.</p> <p><a href="https://i.sstatic.net/YsBDX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YsBDX.png" alt="enter image description here" /></a> In this the article titles are in the div tag.</p> <p>Both are from different publications.</p> <p>Link for article titles in div tag: <a href="https://levelup.gitconnected.com" rel="nofollow noreferrer">https://levelup.gitconnected.com</a></p> <p>Link for article titles in h2 tag: <a href="https://towardsdatascience.com" rel="nofollow noreferrer">https://towardsdatascience.com</a></p> <p>Here is my code which works for the h2 tag</p> <pre><code>import requests from bs4 import BeautifulSoup as bs from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.keys import Keys import time options = Options() options.add_argument(&quot;--headless&quot;) options.add_argument('--log-level=3') options.add_experimental_option('excludeSwitches', ['enable-logging']) driver = webdriver.Chrome(options=options) class Publication: def __init__(self, link): self.link = link def get_articles(self): &quot;Get the articles of the user/publication which was given as input&quot; link = self.link driver.get(link) scroll_pause = 0.5 # Get scroll height last_height = driver.execute_script(&quot;return document.documentElement.scrollHeight&quot;) run_time, 
max_run_time = 0, 1 while True: iteration_start = time.time() # Scroll down to bottom driver.execute_script(&quot;window.scrollTo(0, 1000*document.documentElement.scrollHeight);&quot;) # Wait to load page time.sleep(scroll_pause) # Calculate new scroll height and compare with last scroll height new_height = driver.execute_script(&quot;return document.documentElement.scrollHeight&quot;) scrolled = new_height != last_height timed_out = run_time &gt;= max_run_time if scrolled: run_time = 0 last_height = new_height elif not scrolled and not timed_out: run_time += time.time() - iteration_start elif not scrolled and timed_out: break elements = driver.find_elements(By.CSS_SELECTOR, &quot;h2&quot;) for x in elements: print(x.text) </code></pre>
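When the title tag varies per publication, one option is a comma-separated CSS selector that matches either shape in a single query. Illustrated below with BeautifulSoup on inline HTML; the `div` class name is a placeholder, since Medium's generated class names differ per page and must be read from the actual markup. The same selector string works unchanged with `driver.find_elements(By.CSS_SELECTOR, ...)`:

```python
from bs4 import BeautifulSoup

html = """
<article><h2>Title in an h2</h2></article>
<article><div class="title-like">Title in a div</div></article>
"""

soup = BeautifulSoup(html, "html.parser")
# "h2, div.title-like" matches either tag in document order; swap in the
# real class name observed in the publication's markup.
titles = [el.get_text(strip=True) for el in soup.select("h2, div.title-like")]
```

A bare `div` selector without a class would match far too much, so anchoring the `div` half of the selector on a class (or a parent `article` element) is what keeps the result limited to titles.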
<python><selenium-webdriver><web-scraping>
2023-06-03 11:39:20
1
1,753
Karthik Bhandary
76,395,799
1,319,998
Maximum size of compressed data using Python's zlib
<p>I'm writing a Python library that makes ZIP files in a streaming way. If the uncompressed or compressed data of a member of the zip is 4GiB or bigger, then it has to use a particular extension to the original ZIP format - zip64. The issue with always using this is that it has less support. So, I would like to only use zip64 if needed. But whether a file is zip64 has to be specified in the zip <em>before</em> the compressed data, and so if streaming, before the size of the compressed data is known.</p> <p>In some cases however, the size of the uncompressed data <em>is</em> known. So, I would like to predict the <em>maximum</em> size that zlib can output based on this uncompressed size, and if this is 4GiB or bigger, use zip64 mode.</p> <p>In other words, if the the total length of <code>chunks</code> in the below is known, what will be the maximum total length of bytes that <code>get_compressed</code> can yield? (I assume this maximum size would depend on level, memLevel and wbits)</p> <pre class="lang-py prettyprint-override"><code>import zlib chunks = ( b'any', b'iterable', b'of', b'bytes', b'-' * 1000000, ) def get_compressed(level=9, memLevel=9, wbits=-zlib.MAX_WBITS): compress_obj = zlib.compressobj(level=level, memLevel=memLevel, wbits=wbits) for chunk in chunks: if compressed := compress_obj.compress(chunk): yield compressed if compressed := compress_obj.flush(): yield compressed print('length', len(b''.join(get_compressed()))) </code></pre> <p>This is complicated by the fact that <a href="https://stackoverflow.com/q/76371334/1319998">Python zlib module's behaviour is not consistent between Python versions</a>.</p> <p>I think that Java attempts a sort of &quot;auto zip64 mode&quot; without knowing the uncompressed data size, but <a href="https://github.com/libarchive/libarchive/issues/1834" rel="nofollow noreferrer">libarchive has problems with it</a>.</p>
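zlib's C API answers exactly this question with `compressBound()`; the Python module does not expose it, but the formula is small enough to replicate. The version below mirrors zlib's bound for the default 15-bit window, including the 6-byte zlib wrapper (raw deflate with `wbits=-15` needs slightly less, so the bound stays safe). Treat it as a conservative upper bound to compare against the 4 GiB zip64 threshold, not an exact prediction:

```python
import os
import zlib

def zlib_compress_bound(n: int) -> int:
    # Port of zlib's compressBound(): the worst case is stored
    # (uncompressed) deflate blocks plus per-block overhead, plus the
    # 2-byte zlib header and 4-byte Adler-32 trailer.
    return n + (n >> 12) + (n >> 14) + (n >> 25) + 13

# Sanity check against incompressible input.
data = os.urandom(100_000)
assert len(zlib.compress(data, 9)) <= zlib_compress_bound(len(data))
```

For streams created with non-default `memLevel`/`wbits`, zlib's `deflateBound()` computes a tighter per-stream value; erring conservative (switching to zip64 slightly early) is safe for the format, while guessing low is not.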
<python><zip><zlib><deflate><python-zlib>
2023-06-03 11:17:40
3
27,302
Michal Charemza
76,395,757
2,013,747
Is there a built-in function to query a type hint for optionality/"None-ability" in Python 3.10 or later?
<p>Is there a function in the standard library to query whether the type hint for a field admits the None value?</p> <p>For example, it would return True for foo, bar, baz, and False for x, in class A below:</p> <pre><code>from dataclasses import dataclass from typing import Optional, Union @dataclass class A: foo : Optional[int] = None bar : int|None = None baz : Union[int, float, None] = None x : int = 1 </code></pre> <p>I have written the following function, which seems to work, but I want to avoid reimplementing standard functionality.</p> <pre><code>import typing import types def field_is_optional(cls: type, field_name: str): &quot;&quot;&quot;A field is optional when it has Union type with a NoneType alternative. Note that Optional[] is a special form which is converted to a Union with a NoneType option &quot;&quot;&quot; field_type = typing.get_type_hints(cls).get(field_name, None) origin = typing.get_origin(field_type) #print(field_name, &quot;:&quot;, field_type, origin) if origin is typing.Union: return type(None) in typing.get_args(field_type) if origin is types.UnionType: return type(None) in typing.get_args(field_type) return False a=A() assert field_is_optional(type(a), &quot;foo&quot;) assert field_is_optional(type(a), &quot;bar&quot;) assert field_is_optional(type(a), &quot;baz&quot;) assert field_is_optional(type(a), &quot;x&quot;) == False </code></pre> <p>An acceptable answer will be &quot;No.&quot; or &quot;Yes, the function is <code>&lt;function name&gt;</code>. As @metatoaster pointed out, <a href="https://stackoverflow.com/questions/56832881/check-if-a-field-is-typing-optional/7639772">Check if a field is typing.Optional</a> asks a related question, however (1) it is specific to <code>typing.Optional</code> not any type that expresses optionality (e.g. 
<code>Union[int, float, None]</code>, <code>Union[int, Union[float, None]</code>, <code>int|float|None</code>, etc.), and (2) it is asking for any way at all to check (which I already have), whereas I am asking for the name of a single standard library function that does the job.</p>
<python><option-type><nullable>
2023-06-03 11:04:27
0
4,240
Ross Bencina
76,395,754
2,602,770
FLASK -Error occurred while reading WSGI handler:
<p>Error occurred while reading WSGI handler:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Python\Lib\site-packages\wfastcgi.py&quot;, line 791, in main env, handler = read_wsgi_handler(response.physical_path) File &quot;C:\Python\Lib\site-packages\wfastcgi.py&quot;, line 633, in read_wsgi_handler handler = get_wsgi_handler(os.getenv(&quot;WSGI_HANDLER&quot;)) File &quot;C:\Python\Lib\site-packages\wfastcgi.py&quot;, line 616, in get_wsgi_handler raise ValueError('&quot;%s&quot; could not be imported%s' % (handler_name, last_tb)) ValueError: &quot;app.app&quot; could not be imported: Traceback (most recent call last): File &quot;C:\Python\Lib\site-packages\wfastcgi.py&quot;, line 600, in get_wsgi_handler handler = __import__(module_name, fromlist=[name_list[0][0]]) File &quot;C:\inetpub\wwwroot\essldatamapping\app.py&quot;, line 2, in &lt;module&gt; from flask_restful import Resource, Api ModuleNotFoundError: No module named 'flask_restful' </code></pre> <h1>MY CODE</h1> <pre><code>from flask import Flask, jsonify from flask_restful import Resource, Api import pyodbc app = Flask(__name__) api = Api(app) class EsslDataFeth(Resource): def __init__(self): server = 'ESSL-CONFIGURAT\ESSL' database = '**' username = '**' password = '**' self.connect = pyodbc.connect('DRIVER={ODBC Driver 18 for SQL Server};SERVER='+server+';DATABASE='+database+';ENCRYPT=no;UID='+username+';PWD='+ password) self.cursor = self.connect.cursor() def get(self): getdatacmd='SELECT * from [etimetracklite1].[dbo].[Entry_Exit]' self.cursor.execute(getdatacmd) result=[] for row in self.cursor.fetchall(): item_dist={} item_dist['id']=row[0] item_dist['dateTime']=row[1] item_dist['INOut']=row[2] item_dist['DeviceID']=row[3] result.append(item_dist) return jsonify(result) </code></pre>
<pre><code>api.add_resource(EsslDataFeth, '/returnjson') if __name__ == '__main__': app.run() </code></pre>
<python><flask><iis>
2023-06-03 11:04:09
0
2,874
Bishnu
76,395,726
2,263,683
Add multiple OIDC authentication options in FastAPI
<p>I've added Google's OIDC authentication to my FastAPI application.</p> <pre><code>from fastapi import Depends from fastapi.security import OpenIdConnect oidc_google = OpenIdConnect(openIdConnectUrl='https://accounts.google.com/.well-known/openid-configuration') @app.get('/foo') def bar(token: Depends(oidc_google)): return &quot;You're Authenticated&quot; </code></pre> <p>Now I want to give the user the option to login with another OIDC provider (e.g: Microsoft). Something like this:</p> <pre><code>oidc_google = OpenIdConnect(openIdConnectUrl='https://accounts.google.com/.well-known/openid-configuration') oidc_microsoft = OpenIdConnect(openIdConnectUrl='https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration') @app.get('/foo') def bar(token: Depends(oidc_google or oidc_microsoft)): return &quot;You're Authenticated&quot; </code></pre> <p>Also seems like you can only set one configuration for oidc in swagger UI:</p> <pre><code>app = FastAPI( swagger_ui_oauth2_redirect_url='/api/v1/oidc/callback', swagger_ui_init_oauth={ 'usePkceWithAuthorizationCodeGrant': True, 'clientId': settings.OIDC_CLIENT_ID, 'clientSecret': settings.OIDC_CLIENT_SECRET, }, ) </code></pre> <p>Which makes it even more complicated. I couldn't find a working solution so far. Is there a working solution to add multiple OIDC options for authentication in FastAPI?</p>
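FastAPI has no built-in "or" over security schemes, so a common pattern is a small wrapper dependency that tries each provider's token verifier in turn and accepts the first success. The provider-independent core can be sketched without FastAPI; the names and stub verifiers below are illustrative, and in a real app each verifier would validate the JWT against that issuer's JWKS, with the wrapper invoked via `Depends` and failures mapped to `HTTPException(401)`:

```python
from typing import Callable, Sequence

def make_any_verifier(
    verifiers: Sequence[Callable[[str], dict]]
) -> Callable[[str], dict]:
    def verify_any(token: str) -> dict:
        failures = 0
        for verify_one in verifiers:
            try:
                return verify_one(token)  # first provider that accepts wins
            except ValueError:
                failures += 1
        # In FastAPI: raise HTTPException(status_code=401, ...)
        raise ValueError(f"token rejected by all {failures} providers")
    return verify_any

# Stub verifiers standing in for real JWKS validation:
def google_verify(token: str) -> dict:
    if token == "g-token":
        return {"iss": "https://accounts.google.com"}
    raise ValueError("not a Google token")

def microsoft_verify(token: str) -> dict:
    if token == "ms-token":
        return {"iss": "https://login.microsoftonline.com"}
    raise ValueError("not a Microsoft token")

verify = make_any_verifier([google_verify, microsoft_verify])
```

The Swagger UI limitation stands separately: `swagger_ui_init_oauth` takes a single client configuration, so interactive docs can only demo one provider even if the API itself accepts several.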
<python><authentication><fastapi><openid-connect>
2023-06-03 10:55:47
0
15,775
Ghasem
76,395,615
5,431,734
documenting functions exposed by a pickle file
<p>I have written an application in Python that goes through several iterations and at the end returns a couple of dataframes (the estimates of the quantities we are interested in). I am also saving the application as a single pickle file which exposes to the user all the objects and functions that are involved in the loop and contribute to the return values. If a user wants to get better insight and would like to interrogate particular functions or properties of the objects involved, they could do</p> <pre><code>import pickle import pandas as pd pkl = pd.read_pickle('pickle_file.pickle') pkl.course.student_names </code></pre> <p>and that would return a list of names, or</p> <pre><code>pkl.course.calc_avg([2020, 2021, 2022, 2023]) </code></pre> <p>which returns the average of the markings for these years.</p> <p>My question is how do I document that, please? I mean, how on earth would the user know that there is an attribute called <code>student_names</code> on the object <code>course</code>, or that there is a function called <code>calc_avg</code>? Adding docstrings will help, especially with functions, since <code>help(pkl.course.calc_avg)</code> will print the docstring, but the user needs to know that such a function exists in the first place....</p> <p>Am I doing it wrong from the very beginning, maybe? I shouldn't have a pickle file, but what are the alternatives?</p>
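Docstrings survive unpickling only because the original classes get imported, so the discoverability problem is really an API-documentation problem: shipping the classes as an installable package with generated docs (e.g. Sphinx) is the usual answer. For interactive exploration of an unpickled object, a small introspection helper can also be offered; a sketch with a hypothetical `Course` class:

```python
import inspect

def public_api(obj):
    """Return (name, first docstring line) for obj's public attributes."""
    entries = []
    for name in dir(obj):
        if name.startswith("_"):
            continue
        member = getattr(obj, name)
        doc_lines = (inspect.getdoc(member) or "").splitlines()
        entries.append((name, doc_lines[0] if doc_lines else ""))
    return entries

class Course:
    """Example object of the kind stored in the pickle."""
    student_names = ["Ada", "Grace"]

    def calc_avg(self, years):
        """Average marking over the given years."""
        return sum(years) / len(years)

listing = dict(public_api(Course()))
```

`help(pkl.course)` already prints something similar when the classes are importable, which is another argument for distributing the code as a package rather than relying on the pickle alone.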
<python><pickle>
2023-06-03 10:26:52
0
3,725
Aenaon
76,395,534
2,722,968
Typing hinting the return type of a fn returning a subclass
<p>I have a function that takes a class as a parameter and returns a (constructed) subclass; essentially a class decorator. In the most minimal example:</p> <pre class="lang-py prettyprint-override"><code># Some baseclass that may come from the current module/stdlib/wherever class FooBase: pass # A user-defined subclass class FooDerived(FooBase): pass # foo() takes any `FooBase`-type, including its subclasses def foo(baseclass: FooBase) -&gt; ?: class Inner(baseclass): pass return Inner # NewFoo is a class that has to be a subclass of `FooBase`, derived from `FooDerived` NewFoo = foo(FooDerived) </code></pre> <p>Here, <code>Inner</code> is a subtype of whatever <code>baseclass</code> is. Is there a way to type-hint this relationship - that the function will return a type (not a value of a type!) which is a subtype of <code>baseclass</code>?</p>
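One way to express "returns a subtype of whatever class was passed in" is a bound `TypeVar` over `type[T]`; note also that the parameter should be annotated `type[FooBase]` (a class object), not `FooBase` (an instance). Static checkers vary in how happy they are about subclassing a variable, so treat this as a sketch:

```python
from typing import TypeVar

class FooBase:
    pass

class FooDerived(FooBase):
    pass

T = TypeVar("T", bound=FooBase)

def foo(baseclass: type[T]) -> type[T]:
    # mypy flags "Unsupported dynamic base class" here; it works at runtime
    # and the signature still conveys the subtype relationship to callers.
    class Inner(baseclass):  # type: ignore[misc, valid-type]
        pass
    return Inner

NewFoo = foo(FooDerived)
```

With this annotation, `foo(FooDerived)` is inferred as `type[FooDerived]`, so attributes specific to `FooDerived` remain visible on instances of the returned class.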
<python>
2023-06-03 10:02:24
1
17,346
user2722968
76,395,448
15,520,615
Snowflake snowpark Python Interpreter Error: NameError: name is not defined
<p>I'm executing a function in Python/Snowpark and I'm getting the error: <a href="https://i.sstatic.net/2SeXR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2SeXR.png" alt="enter image description here" /></a></p> <p>I appreciate this is a Python beginner's error; however, I'm not sure why I'm getting the error <code>NameError: name 'conn' is not defined</code> when I have defined conn in the function:</p> <pre><code>conn = connection.connect(connectionProperties, log) </code></pre> <p>Again, this is an error that I should know how to fix, but I don't.</p> <p>The full code is as follows:</p> <pre><code>import snowflake.snowpark as snowpark from snowflake.snowpark.functions import col import wheel_loader def getStruct(conn): wheel_loader.load('whlib-0.0.1-py3-none-any.whl') from whlib.utils import connectionProperties as connProps from whlib.utils import connection as connection from whlib.utils import whliblogging as log from whlib.cln import entity as entity dbConnectionProperties = connProps.DbConnectionProperties() dbConnectionProperties.DBServer = 'xxxxxxxxxxx' dbConnectionProperties.DBUser = 'xxxxxxx' dbConnectionProperties.DBPword = 'xxxxxxxxxx' dbConnectionProperties.DBDatabase = 'xxxxxxxxxxxxxxxxx' connectionProperties = connProps.ConnectionProperties() connectionProperties.dbConnectionProperties = dbConnectionProperties log = logs.Logging(connectionProperties) conn = connection.connect(connectionProperties, log) def main(session: snowpark.Session): return getStruct(conn) </code></pre> <p>Any thoughts?</p>
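The `conn` assigned inside `getStruct` is a local variable of that function; `main` at module level has no name `conn` in scope, hence the `NameError` at the call site. One way to restructure (stand-ins replace the Snowflake/whlib pieces, which are not importable here): build the connection in a helper that returns it, then pass it explicitly:

```python
def make_conn():
    # Stand-in for all the connectionProperties setup plus
    # connection.connect(connectionProperties, log).
    return {"connected": True}

def get_struct(conn):
    # Uses the connection it was handed, instead of creating
    # one it never returns.
    return conn["connected"]

def main(session=None):
    conn = make_conn()       # conn now exists in main's scope...
    return get_struct(conn)  # ...and is passed in explicitly
```

In the original code the setup also belongs in its own function rather than inside `getStruct(conn)`, which currently shadows its own parameter with a new local of the same name.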
<python><snowflake-cloud-data-platform>
2023-06-03 09:40:30
0
3,011
Patterson
76,395,346
10,012,856
Manage edge's weight and attributes with Netoworkx
<p>I'm running into trouble with how I'm managing edges and their weights and attributes in a <code>MultiDiGraph</code>.</p> <p>I have a list of edges like the one below:</p> <pre><code>[ (0, 1, {'weight': {'weight': 0.8407885973127324, 'attributes': {'orig_id': 1, 'direction': 1, 'flip': 0, 'lane-length': 3181.294317920477, 'lane-width': 3.6, 'lane-shoulder': 0.0, 'lane-max-speed': 50.0, 'lane-typology': 'real', 'lane-access-points': 6, 'lane-travel-time': 292.159682258003, 'lane-capacity': 7200.0, 'lane-cost': 0.8407885973127324, 'other-attributes': None, 'linestring-wkt': 'LINESTRING (434757.15286960197 4524762.33387408, 434267.30180536775 4525511.90463009, 436180.7891782945 4526762.385413274)'}}}), (1, 4, {'weight': {'weight': 0.6659876355281887, 'attributes': {'orig_id': 131, 'direction': 1, 'flip': 0, 'lane-length': 2496.129360921626, 'lane-width': 3.6, 'lane-shoulder': 0.0, 'lane-max-speed': 50.0, 'lane-typology': 'real', 'lan... </code></pre> <p>That list is used to add weights and attributes to the <code>MultiDiGraph</code> mentioned previously:</p> <pre><code> graph = ntx.MultiDiGraph(weight=None) graph.add_weighted_edges_from(edge_list) </code></pre> <p>When I read the properties of a single edge (<code>graph.edges.data()</code>), I see this:</p> <pre><code>(0, 1, {'weight': {'weight': 0.8407885973127324, 'attributes': {'orig_id': 1, 'direction': 1, 'flip': 0, 'lane-length': 3181.294317920477, 'lane-width': 3.6, 'lane-shoulder': 0.0, 'lane-max-speed': 50.0, 'lane-typology': 'real', 'lane-access-points': 6, 'lane-travel-time': 292.159682258003, 'lane-capacity': 7200.0, 'lane-cost': 0.8407885973127324, 'other-attributes': None, 'linestring-wkt': 'LINESTRING (434757.15286960197 4524762.33387408, 434267.30180536775 4525511.90463009, 436180.7891782945 4526762.385413274)'}}}) </code></pre> <p>Every edge is built this way: <code>[node[0], node[1], {'weight': weight, 'attributes': attributes}]</code>. If I instead build <code>[node[0], node[1], weight]</code>, the weight is used correctly, but I also need the attributes.</p> <pre><code>[(0, 1, {'weight': 0.8407885973127324}), (1, 4, {'weight': 0.6659876355281887}), (1, 46, {'weight': None}), (4, 5, {'weight': 1.2046936800705539}), (4, 6, {'weight': 0.4469496439663275}).... </code></pre> <p>What is the correct way to manage both weight and attributes at the same time?</p>
<python><networkx>
2023-06-03 09:09:15
1
1,310
MaxDragonheart
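A hedged sketch of one way to resolve the question above: `add_weighted_edges_from` treats the entire third element as the weight, so passing a dict nests it under `'weight'`. Using `add_edges_from` with a *flat* attribute dict keeps `weight` a plain number with the other attributes beside it. The attribute names below are illustrative, not the asker's full set.

```python
import networkx as nx

# Flat attribute dicts: 'weight' stays a number, other attributes sit beside it.
edge_list = [
    (0, 1, {"weight": 0.84, "orig_id": 1, "lane-width": 3.6}),
    (1, 4, {"weight": 0.67, "orig_id": 131, "lane-width": 3.6}),
]

graph = nx.MultiDiGraph()
graph.add_edges_from(edge_list)  # NOT add_weighted_edges_from

# The weight is now usable directly (e.g. by shortest-path algorithms via
# weight="weight"), and the extra attributes remain on the same edge.
first_edge = graph[0][1][0]  # key 0 of the multi-edge (0, 1)
print(first_edge["weight"])   # 0.84
print(first_edge["orig_id"])  # 1
```

If the existing nested list cannot be rebuilt upstream, a one-time flattening pass (`{"weight": d["weight"]["weight"], **d["weight"]["attributes"]}` per edge) would produce the same shape.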
76,395,255
1,473,517
How can I draw a line around the edge of the mask?
<p>I am making a heatmap with a mask. Here is a toy MWE:</p> <pre><code>import seaborn as sns import matplotlib.pyplot as plt import numpy as np images = [] vmin = 0 vmax = 80 cmap = &quot;viridis&quot; size = 40 matrix = np.random.randint(vmin, vmax, size=(size,size)) np.random.seed(7) mask = [] for _ in range(size): prefix_length = np.random.randint(size) mask.append([False]*prefix_length + [True]*(size-prefix_length)) mask = np.array(mask) sns.heatmap(matrix, vmin=vmin, vmax=vmax, cmap=&quot;viridis&quot;, mask=mask) plt.savefig(&quot;temp.png&quot;) plt.show() </code></pre> <p>I want to draw a line around the edge of the mask to accentuate where it is. How can I do that?</p> <p>My toy example currently looks like this:</p> <p><a href="https://i.sstatic.net/AVsi5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AVsi5.png" alt="enter image description here" /></a></p>
<python><matplotlib><seaborn><heatmap>
2023-06-03 08:45:50
3
21,513
Simd
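A hedged sketch of the geometry behind the question above: `sns.heatmap` places cell `(row, col)` in data coordinates between `x = col..col+1` and `y = row..row+1`, so the mask outline is the set of cell edges where a masked cell meets an unmasked one (or the array border). The helper below is NumPy-only so it can be checked in isolation; the segments would then be drawn on the heatmap's axes with `ax.plot`.

```python
import numpy as np

def mask_boundary_segments(mask):
    """Return line segments ((x0, y0), (x1, y1)) in heatmap data
    coordinates that separate masked cells from unmasked cells."""
    mask = np.asarray(mask, dtype=bool)
    nrows, ncols = mask.shape
    # Pad with False so edges against the array border are also detected.
    padded = np.pad(mask, 1, constant_values=False)
    segments = []
    for r in range(nrows):
        for c in range(ncols):
            if not mask[r, c]:
                continue
            # padded[r + 1, c + 1] corresponds to mask[r, c]
            if not padded[r, c + 1]:      # neighbour above (r-1, c)
                segments.append(((c, r), (c + 1, r)))
            if not padded[r + 2, c + 1]:  # neighbour below (r+1, c)
                segments.append(((c, r + 1), (c + 1, r + 1)))
            if not padded[r + 1, c]:      # neighbour left (r, c-1)
                segments.append(((c, r), (c, r + 1)))
            if not padded[r + 1, c + 2]:  # neighbour right (r, c+1)
                segments.append(((c + 1, r), (c + 1, r + 1)))
    return segments

mask = np.array([[False, True], [False, False]])
segs = mask_boundary_segments(mask)
print(len(segs))  # 4: the single masked cell is outlined on all sides
```

Usage on the MWE would then be roughly: `ax = sns.heatmap(...)`, followed by `for (p0, p1) in mask_boundary_segments(mask): ax.plot([p0[0], p1[0]], [p0[1], p1[1]], color="red", lw=2)`.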
76,395,138
10,755,032
Python - Getting the titles of publications in medium.com
<p>I am scraping medium.com and have one problem I'm not sure how to tackle: Medium publications arrange their articles in different ways. Some use a list layout, while others use a grid layout. When I scrape the list-style publications I'm able to get the article names, but when I try to scrape the grid type I'm not able to. Is there any way for me to tackle this? <a href="https://i.sstatic.net/zUOX8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zUOX8.png" alt="publication in list format" /></a></p> <p>List format: using h2 I'm able to get the article titles.</p> <p><a href="https://i.sstatic.net/Gh6bh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gh6bh.png" alt="publication in grid format" /></a></p> <p>Grid format: here I've observed that div is used for the article titles.</p> <p>This is my current working code:</p> <pre><code>import requests from bs4 import BeautifulSoup as bs from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.keys import Keys import time options = Options() options.add_argument(&quot;--headless&quot;) options.add_argument('--log-level=3') options.add_experimental_option('excludeSwitches', ['enable-logging']) driver = webdriver.Chrome(options=options) class Publication: def __init__(self, link): self.link = link def get_articles(self): &quot;Get the articles of the user/publication which was given as input&quot; link = self.link driver.get(link) scroll_pause = 0.5 # Get scroll height last_height = driver.execute_script(&quot;return document.documentElement.scrollHeight&quot;) run_time, max_run_time = 0, 1 while True: iteration_start = time.time() # Scroll down to bottom driver.execute_script(&quot;window.scrollTo(0, 1000*document.documentElement.scrollHeight);&quot;) # Wait to load page 
time.sleep(scroll_pause) # Calculate new scroll height and compare with last scroll height new_height = driver.execute_script(&quot;return document.documentElement.scrollHeight&quot;) scrolled = new_height != last_height timed_out = run_time &gt;= max_run_time if scrolled: run_time = 0 last_height = new_height elif not scrolled and not timed_out: run_time += time.time() - iteration_start elif not scrolled and timed_out: break elements = driver.find_elements(By.CSS_SELECTOR, &quot;h2&quot;) for x in elements: print(x.text) </code></pre>
<python><selenium-webdriver><web-scraping><beautifulsoup>
2023-06-03 08:14:22
1
1,753
Karthik Bhandary
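A hedged sketch of a fallback-selector approach to the two-layout problem above, shown with BeautifulSoup on static HTML so it can be checked in isolation. The selectors are illustrative assumptions — Medium's markup changes often, so the real ones should come from inspecting the page — but the pattern (try the list-layout selector, fall back to grid-layout heuristics) carries over directly to `driver.find_elements`.

```python
from bs4 import BeautifulSoup

def extract_titles(html):
    """Try the list-layout selector first (h2), then fall back to
    grid-layout heuristics (h3, then divs inside article links)."""
    soup = BeautifulSoup(html, "html.parser")
    titles = [h.get_text(strip=True) for h in soup.select("article h2")]
    if not titles:
        titles = [h.get_text(strip=True) for h in soup.select("article h3")]
    if not titles:
        titles = [d.get_text(strip=True) for d in soup.select("article a div")]
    return [t for t in titles if t]

# Tiny stand-ins for the two layouts (hypothetical markup):
list_html = "<article><h2>First post</h2></article><article><h2>Second</h2></article>"
grid_html = "<article><a href='#'><div>Grid post title</div></a></article>"
print(extract_titles(list_html))  # ['First post', 'Second']
print(extract_titles(grid_html))  # ['Grid post title']
```

In the Selenium version the same idea is a loop over candidate CSS selectors, returning the first one that yields non-empty results.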
76,395,122
4,896,449
How to build an efficient and fast `Dockerfile` for a `pytorch` model running on CPU
<p>I am trying to build an optimally sized Docker image for running a PyTorch model on CPU. A single-stage build works fine; however, when I use the code below to create a two-stage build, Docker downloads the CUDA/GPU version of PyTorch.</p> <pre><code>FROM python:3.11-slim as builder WORKDIR /app # Set environment variables. ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 RUN apt-get update &amp;&amp; \ apt-get install -y --no-install-recommends gcc # Copy local code to the container image. COPY requirements.txt . # Install dependencies &amp; model files RUN pip install --no-cache-dir torch==2.0.1+cpu --index-url https://download.pytorch.org/whl/cpu &amp;&amp; \ pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt FROM python:3.11-slim WORKDIR /app # Set environment variables. ENV PORT 8080 ENV HOST 0.0.0.0 COPY --from=builder /app/wheels /wheels COPY --from=builder /app/requirements.txt . RUN pip install --no-cache /wheels/* &amp;&amp; \ huggingface-cli login --token xxx &amp;&amp; \ python -c 'from sentence_transformers import SentenceTransformer; SentenceTransformer(&quot;xxx&quot;, cache_folder=&quot;./app/artefacts&quot;)' # Start the container CMD python -m uvicorn app.main:app --host $HOST --port $PORT --workers 1 </code></pre> <p>EDIT:</p> <p>So I got this to run without installing CUDA, but the image is bigger than the single-stage build (2.5 GB vs the original 1.5 GB). The line changed was:</p> <pre><code>RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels torch==2.0.1 --index-url https://download.pytorch.org/whl/cpu &amp;&amp; \ pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt </code></pre> <p>Here I build a wheel for PyTorch too, instead of a normal install as before.</p>
<python><docker><pytorch>
2023-06-03 08:05:24
1
3,408
dendog
76,395,036
11,163,122
Converting np.int16 to torch.ShortTensor
<p>I have many NumPy arrays of dtype <code>np.int16</code> that I need to convert to <code>torch.Tensor</code> within a <code>torch.utils.data.Dataset</code>. This <code>np.int16</code> ideally gets converted to a <code>torch.ShortTensor</code> of size <code>torch.int16</code> (<a href="https://pytorch.org/docs/stable/tensor_attributes.html#torch-dtype" rel="nofollow noreferrer">docs</a>).</p> <p><code>torch.from_numpy(array)</code> will convert the data to <code>torch.float64</code>, which takes up 4X more memory than <code>torch.int16</code> (64 bits vs 16 bits). I have a LOT of data, so I care about this.</p> <p>How can I convert a numpy array to a <code>torch.Tensor</code> minimizing memory?</p>
<python><numpy><pytorch><numpy-ndarray><pytorch-dataloader>
2023-06-03 07:43:00
1
2,961
Intrastellar Explorer
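A hedged aside on the dtype question above: in the PyTorch versions I have used, `torch.from_numpy` preserves the NumPy dtype (`int16` maps to `torch.int16`, i.e. a `ShortTensor`) and shares memory with the array, so no widening copy should occur. The sketch below checks the element sizes and, when PyTorch happens to be installed, the converted dtype; it makes no claim about why the asker observed `float64`.

```python
import numpy as np

arr = np.random.randint(-100, 100, size=(4, 4), dtype=np.int16)
print(arr.dtype.itemsize)  # 2 bytes per element; float64 would be 8

# Hedged: from_numpy is documented to share memory with the ndarray,
# which implies the dtype cannot change during conversion.
try:
    import torch
    t = torch.from_numpy(arr)
    print(t.dtype)  # expected: torch.int16
except ImportError:
    t = None  # torch not installed in this environment
```

If an upstream step (e.g. arithmetic with a Python float, or `np.asarray` without a dtype) silently promoted the array to `float64` before the conversion, that would explain the 4x memory the asker saw.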
76,394,951
10,164,750
Getting an unusual/weird error in Pyspark
<p>I have written a simple <code>Pyspark</code> <code>filter</code> operation, and it works well. After the <code>filter</code>, I do a <code>select</code>, where I see some unusual behavior in the code.</p> <p>I tried many things, like <code>persist</code> and <code>cache</code>, and calling an <code>action</code> like <code>count()</code>. Nothing worked; I got this unusual result every time.</p> <p>Let me share my code and <code>AWS CloudWatch</code> logs.</p> <p>Pyspark code:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;inside header Snap&quot;) headerDf.show() aggSeqDf = headerDf.filter(col(&quot;header_identifier&quot;) != &quot;DDDDSNAP&quot;) aggSeqDf.show() aggDf = aggSeqDf.select(SEQUENCE, &quot;mn_id&quot;, &quot;header_identifier&quot;).withColumnRenamed(SEQUENCE, SEQ).withColumnRenamed(&quot;mn_id&quot;, &quot;mnId&quot;) aggDf.show() print(&quot;header snap ends here&quot;) </code></pre> <p>CloudWatch logs:</p> <pre class="lang-none prettyprint-override"><code>inside header Snap +--------------------+-----------+-----------------+----------+---------------+--------+ | full_file| mn_id|header_identifier|run_number|production_date|sequence| +--------------------+-----------+-----------------+----------+---------------+--------+ |Prod216_3427_ew_1...| 0| DDDDSNAP| 3427| 20230501| 0000000| |Prod216_3427_ew_3...| 6| DDDDSNAP| 3427| 20230501| 0000000| |Prod216_3427_ew_4...| 12| DDDDSNAP| 3427| 20230501| 0000000| |Prod216_3427_ew_5...| 8589934592| DDDDSNAP| 3427| 20230501| 0000000| |Prod216_3427_ew_6...| 8589934598| DDDDSNAP| 3427| 20230501| 0000000| |Prod216_3427_ew_7...| 8589934604| DDDDSNAP| 3427| 20230501| 0000000| | Prod216_3427_ni.dat|17179869184| DDDDSNAP| 3427| 20230501| 0000000| | Prod216_3427_sc.dat|17179869190| DDDDSNAP| 3427| 20230501| 0000000| |Prod216_3427_ew_2...|17179869196| DDDDSNAP| 3427| 20230501| 0000000| +--------------------+-----------+-----------------+----------+---------------+--------+ 
+---------+-----+-----------------+----------+---------------+--------+ |full_file|mn_id|header_identifier|run_number|production_date|sequence| +---------+-----+-----------------+----------+---------------+--------+ +---------+-----+-----------------+----------+---------------+--------+ +-------+-----------+-----------------+ | seq| mnId|header_identifier| +-------+-----------+-----------------+ |0000000| 0| 00000001| |0000000| 6| 00000003| |0000000| 12| 00000004| |0000000| 8589934592| 00000005| |0000000| 8589934598| 00000006| |0000000| 8589934604| 00000007| |0000000|17179869184| 00000009| |0000000|17179869190| 00000008| |0000000|17179869196| 02550372| +-------+-----------+-----------------+ header snap ends here </code></pre> <p>If you observe, I am doing a <code>select</code> from the empty dataframe <code>aggSeqDf</code>, but I get a few unexpected values in <code>aggDf</code>.</p> <p>I am using <code>AWS Glue</code> to run the job. I attached the <code>whl</code> file of the PySpark program in Glue.</p> <p>I even tried changing the resources provided to Glue, but did not have any luck there either.</p> <p>I would like to know why I am seeing this unusual behavior. 
Thank you</p> <p>Physical and Logical plan added.</p> <pre class="lang-none prettyprint-override"><code>== Parsed Logical Plan == Project [seq#829, mn_id#45L AS mnId#832L] +- Project [sequence#192 AS seq#829, mn_id#45L] +- Project [sequence#192, mn_id#45L] +- Filter NOT (header_identifier#168 = DDDDSNAP) +- Project [full_file#5, mn_id#45L, header_identifier#168, run_number#169, production_date#170, sequence#192] +- Project [company_number#38, rec_type#39, full_file#5, mn_id#45L, header_identifier#168, run_number#169, production_date#170, 0000000 AS sequence#192] +- Project [company_number#38, rec_type#39, full_file#5, mn_id#45L, cast(substring(rest#40, 1, 8) as string) AS header_identifier#168, cast(substring(rest#40, 9, 4) as string) AS run_number#169, cast(substring(rest#40, 13, 8) as string) AS production_date#170] +- Project [company_number#38, rec_type#39, rest#40, full_file#5, mn_id#45L] +- Join Inner, (mn_id#45L = min(mn_id)#67L) :- Project [company_number#38, rec_type#39, rest#40, full_file#5, mon otonically_increasing_id() AS mn_id#45L] : +- Project [cast(substring(value#0, 1, 8) as string) AS company_number#38, cast(substring(value#0, 9, 1) as string) AS rec_type#39, cast(substring(value#0, 1, 1250) as string) AS rest#40, full_file#5] : +- Project [value#0, reverse(split(full_file#2, /, -1))[0] AS full_file#5] : +- Project [value#0, input_file_name() AS full_file#2] : +- Relation[value#0] text +- Project [min(mn_id)#67L] +- Aggregate [full_file#5], [full_file#5, min(mn_id#45L) AS min(mn_id)#67L] +- Project [company_number#38, rec_type#39, rest#40, full_file#5, monotonically_increasing_id() AS mn_id#45L] +- Project [cast(substring(value#0, 1, 8) as string) AS company_number#38, cast(substring(value#0, 9, 1) as string) AS rec_type#39, cast(substring(value#0, 1, 1250) as stri ng) AS rest#40, full_file#5] +- Project [value#0, reverse(split(full_file#2, /, -1))[0] AS full_file#5] +- Project [value#0, input_file_name() AS full_file#2] +- Relation[value#0] text 
== Analyzed Logical Plan == seq: string, mnId: bigint Project [seq#829, mn_id#45L AS mnId#832L] +- Project [sequence#192 AS seq#829, mn_id#45L] +- Project [sequence#192, mn_id#45L] +- Filter NOT (header_identifier#168 = DDDDSNAP) +- Project [full_file#5, mn_id#45L, header_identifier#168, run_number#169, production_date#170, sequence#192] +- Project [company_number#38, rec_type#39, full_file#5, mn_id#45L, header_identifier#168, run_number#169, production_date#170, 0000000 AS sequence#192] +- Project [company_number#38, rec_type#39, full_file#5, mn_id#45L, cast(substring(rest#40, 1, 8) as string) AS header_identifier#168, cast(substring(rest#40, 9, 4) as string) AS run_ number#169, cast(substring(rest#40, 13, 8) as string) AS production_date#170] +- Project [company_number#38, rec_type#39, rest#40, full_file#5, mn_id#45L] +- Join Inner, (mn_id#45L = min(mn_id)#67L) :- Project [company_number#38, rec_type#39, rest#40, full_file#5, monotonically_increasing_id() AS mn_id#45L] : +- Project [cast(substring(value#0, 1, 8) as string) AS company_number#38, cast(substring(value#0, 9, 1) as string) AS rec_type#39, cast(substring(value#0, 1, 1250) as string) AS rest#40, full_file#5] : +- Project [value#0, reverse(split(full_file#2, /, -1))[0] AS full_file#5] : +- Project [value#0, input_file_name() AS full_file#2] : +- Relation[value#0] text +- Project [min(mn_id)#67L] +- Aggregate [full_file#5], [full_file#5, min(mn_id#45L) AS min(mn_id)#67L] +- Project [company_number#38, rec_type#39, rest#40, full_file#5, monotonically_increasing_id() AS mn_id#45L] +- Project [cast(substring(value#0, 1, 8) as string) AS company_number#38, cast(substring(value#0, 9, 1) as string) AS rec_type#39, cast(substring(value#0, 1, 1250) as string) AS rest#40, full_file#5] +- Project [value#0, reverse(split(full_file#2, /, -1))[0] AS full_file#5] +- Project [value#0, input_file_name() AS full_file#2] +- Relation[value#0] text == Optimized Logical Plan == Project [0000000 AS seq#829, mn_id#45L AS 
mnId#832L] +- Join Inner, (mn_id#45L = min(mn_id)#67L) :- Project [mn_id#45L] : +- Filter (isnotnull(rest#40) AND NOT (substring(rest#40, 1, 8) = DDDDSNAP)) : +- Project [substring(value#0, 1, 1250) AS rest#40, monotonically_increasing_id() AS mn_id#45L] : +- Relation[value#0] text +- Filter isnotnull(min(mn_id)#67L) +- Aggregate [full_file#5], [min(mn_id#45L) AS min(mn_id)#67L] +- Project [reverse(split(full_file#2, /, -1))[0] AS full_file#5, monotonically_increasing_id() AS mn_id#45L] +- Project [input_file_name() AS full_file#2] +- Relation[value#0] text == Physical Plan == AdaptiveSparkPlan isFinalPlan=false +- Project [0000000 AS seq#829, mn_id#45L AS mnId#832L] +- BroadcastHashJoin [mn_id#45L], [min(mn_id)#67L], Inner, BuildRight, false :- Project [mn_id#45L] : +- Project [substring(value#0, 1, 1250) AS rest#40, monotonically_increasing_id() AS mn_id#45L] : +- Filter (isnotnull(substring(value#0, 1, 1250) AS rest#40) AND NOT (substring(substring(value#0, 1, 1250) AS rest#40, 1, 8) = DDDDSNAP)) : +- FileScan text [value#0] Batched: false, DataFilters: [isnotnull(substring(value#0, 1, 1250) AS rest#40), NOT (substring(substring(value#0, 1, 1250) AS..., Format: Text, Location: InMemoryFileIndex[s3://ubo-mvp-oad/ landing_SNAP], PartitionFilters: [], PushedFilters: [], ReadSchema: struct&lt;value:string&gt; +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, bigint, false]),false), [id=#3815] +- Filter isnotnull(min(mn_id)#67L) +- HashAggregate(keys=[full_file#5], functions=[min(mn_id#45L)], output=[min(mn_id)#67L]) +- Exchange hashpartitioning(full_file#5, 4), ENSURE_REQUIREMENTS, [id=#3811] +- HashAggregate(keys=[full_file#5], functions=[partial_min(mn_id#45L)], output=[full_file#5, min#149L]) +- Project [reverse(split(full_file#2, /, -1))[0] AS full_file#5, monotonically_increasing_id() AS mn_id#45L] +- Project [input_file_name() AS full_file#2] +- FileScan text [] Batched: false, DataFilters: [], Format: Text, Location: 
InMemoryFileIndex[s3://ubo-mvp-oad/landing_SNAP], PartitionFilters: [], PushedFilters: [], ReadSchema: struct&lt;&gt; </code></pre>
<python><amazon-web-services><apache-spark><pyspark><aws-glue>
2023-06-03 07:11:08
0
331
SDS
76,394,943
2,540,204
ibm_db_dbi::ProgrammingError when calling a stored procedure with pandas read_sql_query
<p>I am attempting to use <code>pandas.read_sql_query</code> to call a stored procedure in IBM's db2 and read the results into a dataframe. However when I do so, I receive the following error:</p> <blockquote> <p>ibm_db_dbi::ProgrammingError: The last call to execute did not produce any result set.</p> </blockquote> <p>I've called the procedure in IBM Data Studio, to confirm that it works as intended, yielding the anticipated approximately 1000 records. I've also manually queried the table using a <code>select * from table</code>, with <code>read_sql_query</code> from my script with success. Therefore I may conclude that the python script is properly configured to work with the database as is the procedure itself. The struggle seems to be in putting the two together. My code is below.</p> <pre><code>import ibm_db import ibm_db_dbi import pandas as pd cnxn = ibm_db.connect('DATABASE=mydb;' 'HOSTNAME=myHost;' 'PORT=446;' 'PROTOCOL=TCPIP;' 'UID=myUser;' 'PWD=myPassword;', '', '') conn=ibm_db_dbi.Connection(cnxn) sql = 'call myschema.getaccountnonrecurringpaging(20230509,0);' df = pd.read_sql_query(sql, conn) </code></pre> <p>Package details are listed below:</p> <ul> <li>python=3.10.4</li> <li>pandas=1.5.3</li> <li>ibm_db=3.1.1</li> <li>operating system = Ubuntu 20.04</li> <li>db2: v7r4</li> </ul>
<python><pandas><stored-procedures><db2>
2023-06-03 07:10:05
0
2,703
neanderslob
76,394,853
264,136
Can't install jenkins using pip
<pre><code>C:\code&gt;pip install jenkins Collecting jenkins Using cached jenkins-1.0.2.tar.gz (8.2 kB) Preparing metadata (setup.py) ... done Installing collected packages: jenkins DEPRECATION: jenkins is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559 Running setup.py install for jenkins ... error error: subprocess-exited-with-error Γ— Running setup.py install for jenkins did not run successfully. β”‚ exit code: 1 ╰─&gt; [11 lines of output] running install C:\python\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build\lib.win-amd64-cpython-311 copying jenkins.py -&gt; build\lib.win-amd64-cpython-311 running build_ext building 'lookup3' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with &quot;Microsoft C++ Build Tools&quot;: https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure Γ— Encountered error while trying to install package. ╰─&gt; jenkins note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. 
[notice] A new release of pip is available: 23.0.1 -&gt; 23.1.2 [notice] To update, run: python.exe -m pip install --upgrade pip </code></pre> <p>Tried: <a href="https://stackoverflow.com/questions/44951456/pip-error-microsoft-visual-c-14-0-is-required">Pip error: Microsoft Visual C++ 14.0 is required</a></p> <p>Getting this error: <a href="https://i.sstatic.net/cCUCU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cCUCU.png" alt="enter image description here" /></a></p> <p>Installation went fine via the UI as mentioned in the comment: <a href="https://i.sstatic.net/QnHs6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QnHs6.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/j0RHH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j0RHH.png" alt="enter image description here" /></a></p> <p>Still no luck. Any suggestions?</p> <p>OS: Windows 10 Enterprise.</p>
<python><pip>
2023-06-03 06:41:22
2
5,538
Akshay J
76,394,843
4,473,615
PyQt5 PDF border spacing in Python
<p>I have generated a PDF using PyQt5, which is working perfectly fine. I am just looking to add some border spacing, but am unable to do that using layouts. Below is the code:</p> <pre><code>from PyQt5 import QtCore, QtWidgets, QtWebEngineWidgets def printhtmltopdf(html_in, pdf_filename): app = QtWidgets.QApplication([]) page = QtWebEngineWidgets.QWebEnginePage() def handle_pdfPrintingFinished(*args): print(&quot;finished: &quot;, args) app.quit() def handle_loadFinished(finished): page.printToPdf(pdf_filename) page.pdfPrintingFinished.connect(handle_pdfPrintingFinished) page.loadFinished.connect(handle_loadFinished) page.setZoomFactor(1) page.setHtml(html_in) app.exec() printhtmltopdf( result, # raw html variable &quot;file.pdf&quot;, ) </code></pre> <p>The result is:</p> <p><a href="https://i.sstatic.net/2ed9Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ed9Z.png" alt="enter image description here" /></a></p> <p>The expected result is as below, with space around the content. Basically I need padding on the left, right, top and bottom.</p> <p><a href="https://i.sstatic.net/97YaB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/97YaB.png" alt="enter image description here" /></a></p> <p>Any suggestion will be appreciated.</p>
<python><pdf><pyqt5>
2023-06-03 06:39:22
1
5,241
Jim Macaulay
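One low-tech sketch for the padding question above: since `setHtml` renders whatever CSS the page carries, injecting a body margin into the HTML before loading it adds the spacing without touching Qt layouts. (Qt also allows passing a `QPageLayout` with `QMarginsF` to `printToPdf`; the CSS route is shown because it can be verified without a display.) The 40px default is an arbitrary assumption.

```python
def with_page_margin(html_in, margin_px=40):
    """Wrap raw HTML so the rendered page gets spacing on all four
    sides before it is handed to QWebEnginePage.setHtml()."""
    style = f"<style>body {{ margin: {margin_px}px; }}</style>"
    if "<head>" in html_in:
        # Inject into the existing head.
        return html_in.replace("<head>", "<head>" + style, 1)
    # No <head>: wrap the fragment in a minimal document.
    return f"<html><head>{style}</head><body>{html_in}</body></html>"

result = "<h1>Report</h1><p>content</p>"  # stand-in for the raw html variable
padded = with_page_margin(result, margin_px=48)
print(padded)
# printhtmltopdf(padded, "file.pdf")  # then print exactly as before
```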
76,394,790
4,825,376
Python Multiprocessing Manager Error-β€˜ForkAwareLocal’ object has no attribute
<p>I wrote the following code using the <code>multiprocessing</code> module to execute two processes in parallel. One requirement is to access a shared Queue in the multiprocessing module used to store data by one process and read from it by another process. I tried to write it using the code below, I got this. Any help, please?</p> <pre><code>/Users/adhamenaya/anaconda3/bin/python /Users/adhamenaya/DataspellProjects/MultiProcessing/multi-processing.py Producer process started... Consumer process started... Process Process-3: Traceback (most recent call last): File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py&quot;, line 810, in _callmethod conn = self._tls.connection AttributeError: 'ForkAwareLocal' object has no attribute 'connection' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/process.py&quot;, line 314, in _bootstrap self.run() File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/process.py&quot;, line 108, in run self._target(*self._args, **self._kwargs) File &quot;/Users/adhamenaya/DataspellProjects/MultiProcessing/multi-processing.py&quot;, line 36, in run data = self.queue.get_nowait() File &quot;&lt;string&gt;&quot;, line 2, in get_nowait File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py&quot;, line 814, in _callmethod self._connect() File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py&quot;, line 801, in _connect conn = self._Client(self._token.address, authkey=self._authkey) File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/connection.py&quot;, line 502, in Client c = SocketClient(address) File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/connection.py&quot;, line 630, in SocketClient s.connect(address) FileNotFoundError: [Errno 2] No such file or directory Process Process-2: 
Traceback (most recent call last): File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py&quot;, line 810, in _callmethod conn = self._tls.connection AttributeError: 'ForkAwareLocal' object has no attribute 'connection' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/process.py&quot;, line 314, in _bootstrap self.run() File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/process.py&quot;, line 108, in run self._target(*self._args, **self._kwargs) File &quot;/Users/adhamenaya/DataspellProjects/MultiProcessing/multi-processing.py&quot;, line 22, in run self.queue.put_nowait(input_data) File &quot;&lt;string&gt;&quot;, line 2, in put_nowait File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py&quot;, line 814, in _callmethod self._connect() File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/managers.py&quot;, line 801, in _connect conn = self._Client(self._token.address, authkey=self._authkey) File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/connection.py&quot;, line 502, in Client c = SocketClient(address) File &quot;/Users/adhamenaya/anaconda3/lib/python3.10/multiprocessing/connection.py&quot;, line 630, in SocketClient s.connect(address) FileNotFoundError: [Errno 2] No such file or directory Process finished with exit code 0 </code></pre> <p>My code:</p> <pre><code>import time import random import multiprocessing from multiprocessing import Pool # consumer class simulates the continuous data collection process class Producer: def __init__(self, queue): super().__init__() self.queue = queue def run(self): print(&quot;Producer process started...&quot;) while True: # simulate the time needed to collect data input_time = random.randrange(1, 4) time.sleep(input_time) # simulate date collection input_data = random.randrange(5, 10) 
self.queue.put_nowait(input_data) print(f&quot; {input_data} is collected in time {input_time} secs&quot;) # producer class simulates the work of data processing algorithm class Consumer: def __init__(self, queue): super().__init__() self.queue = queue def run(self): process_data = 0 print(&quot;Consumer process started...&quot;) while True: data = self.queue.get_nowait() # simulate time needed to process data procss_time = random.randrange(6, 9) time.sleep(procss_time) # Simulate data processing algorithm process_data += data print(f&quot; input: {data}, new result: {process_data} is processed in {procss_time} secs&quot;) if __name__ == &quot;__main__&quot;: # create a shared queue manager = multiprocessing.Manager() queue = manager.Queue() producer = Producer(queue) consumer = Consumer(queue) # start instances on parallel processes producer_process = multiprocessing.Process(target=producer.run).start() consumer_process = multiprocessing.Process(target=consumer.run).start() </code></pre>
<python><python-3.x><multiprocessing>
2023-06-03 06:18:35
1
950
Adham Enaya
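A hedged reading of the traceback above (an assumption, not a certainty): `.start()` returns `None`, so the process handles are discarded and never `join()`ed; the main process then reaches the end of the script, the `Manager` that owns the shared queue shuts down, and the children can no longer connect to its socket — hence `FileNotFoundError` on the manager's address. A bounded sketch that keeps the handles, joins them, and uses a plain `multiprocessing.Queue` (no Manager is needed for a simple queue); a blocking `get` also avoids `get_nowait` racing an empty queue:

```python
import multiprocessing

def producer(queue, n_items):
    # Put a bounded number of items, then a sentinel to stop the consumer.
    for i in range(n_items):
        queue.put(i)
    queue.put(None)

def consumer(queue, result):
    total = 0
    while True:
        data = queue.get()  # blocking get: no Empty exception to race
        if data is None:
            break
        total += data
    result.put(total)

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    result = multiprocessing.Queue()
    p = multiprocessing.Process(target=producer, args=(queue, 5))
    c = multiprocessing.Process(target=consumer, args=(queue, result))
    p.start()
    c.start()
    p.join()  # keep the parent alive until the children finish
    c.join()
    print(result.get())  # 0+1+2+3+4 = 10
```

If a `Manager().Queue()` really is required (e.g. to share across a `Pool`), the same fix applies: keep the process objects and `join()` them before the main process, and with it the manager, exits.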
76,394,713
10,173,016
How to apply different isin for each row of a DataFrame?
<p>I've got two arrays and want to compare rows. In particular, I want to check for each element in arr2 whether it is in the corresponding row of arr1.</p> <p>Example:</p> <pre><code>arr1 = [[1, 7, 6, 2, 8], [1, 5, 4, 8], [8, 2, 5]] arr2 = [[8, 1, 5, 0, 7, 2, 9, 4], [0, 1, 8, 5, 3, 4, 7, 9], [9, 2, 0, 6, 8, 5, 3, 7]] </code></pre> <p>Expected result for the first row of arr2:</p> <pre><code>[1, 1, 0, 0, 1, 1, 0, 0] </code></pre> <p>Solution with a for-loop:</p> <pre><code>d1 = pd.DataFrame(arr1) d2 = pd.DataFrame(arr2) for y in range(len(arr1)): print(d2.iloc[y].isin(d1.iloc[y]).astype(int).tolist()) </code></pre> <p>How can I do it in pandas without iterating over rows?</p>
<python><python-3.x><pandas>
2023-06-03 05:46:10
2
401
Joseph Kirtman
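A hedged sketch for the row-wise membership question above: `DataFrame.isin` compares against a single collection, not a different one per row, so one workable route is a set per row of `arr1` plus a comprehension (still a Python-level loop, but O(1) membership tests make it fast in practice; a fully vectorized variant would need padding since the rows are ragged).

```python
import pandas as pd

arr1 = [[1, 7, 6, 2, 8], [1, 5, 4, 8], [8, 2, 5]]
arr2 = [[8, 1, 5, 0, 7, 2, 9, 4],
        [0, 1, 8, 5, 3, 4, 7, 9],
        [9, 2, 0, 6, 8, 5, 3, 7]]

# One set per row of arr1, then test each element of the paired arr2 row.
sets = [set(row) for row in arr1]
out = pd.DataFrame([[int(v in s) for v in row] for row, s in zip(arr2, sets)])
print(out.iloc[0].tolist())  # [1, 1, 0, 0, 1, 1, 0, 0]
```

For rectangular data, a NumPy broadcast like `(a2[:, :, None] == a1_padded[:, None, :]).any(-1)` would remove the Python loop entirely.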
76,394,657
219,153
How to read SyGuS format into cvc5 Python script?
<p>There are a number of examples in SyGuS format (<a href="https://sygus.org/language/" rel="nofollow noreferrer">https://sygus.org/language/</a>) in the cvc5 repo, e.g. <a href="https://github.com/cvc5/cvc5/tree/main/test/regress/cli/regress0/sygus" rel="nofollow noreferrer">https://github.com/cvc5/cvc5/tree/main/test/regress/cli/regress0/sygus</a>. How do I read these files, or the corresponding strings, into a cvc5 Python script?</p> <p>I know about the Python API (<a href="https://cvc5.github.io/docs/cvc5-1.0.2/api/python/python.html" rel="nofollow noreferrer">https://cvc5.github.io/docs/cvc5-1.0.2/api/python/python.html</a>), which allows one to define a SyGuS problem programmatically, but I would like to use the SyGuS format directly. I can't find anything about it in the documentation.</p> <hr /> <p>Here is an example of a problem definition in SyGuS format:</p> <pre><code>(set-logic LIA) (synth-fun max2 ((x Int) (y Int)) Int ((I Int) (B Bool)) ((I Int (x y 0 1 (+ I I) (- I I) (ite B I I))) (B Bool ((and B B) (or B B) (not B) (= I I) (&lt;= I I) (&gt;= I I)))) ) (declare-var x Int) (declare-var y Int) (constraint (&gt;= (max2 x y) x)) (constraint (&gt;= (max2 x y) y)) (constraint (or (= x (max2 x y)) (= y (max2 x y)))) (check-synth) </code></pre>
<python><io>
2023-06-03 05:19:18
1
8,585
Paul Jurczak
76,394,543
3,487,441
Installing a python script to run from the command line
<p>I need to make some Python utilities available to run from the command line (OSX Ventura). I've been looking over examples and documentation for setup.py, but even with the simplest possible example I'm not making progress:</p> <p><strong>directory structure:</strong></p> <pre><code>./ex __init__.py myscript.py setup.py </code></pre> <p><strong>myscript.py</strong></p> <pre><code>#!/usr/local/bin python3 def main(): print('hello') </code></pre> <p><strong>setup.py</strong></p> <pre><code>from setuptools import setup setup( name='myscript', version='0.1.0', py_modules=['myscript'], entry_points={ 'entry_points': [ 'scr=myscript:main', ], } ) </code></pre> <p>I'm trying to install with various combinations of parameters:</p> <pre><code>pip3 install -e . pip3 install --user . pip3 install . </code></pre> <p>In each case, the new command is not found. The examples do not cover what can go wrong, so I'm really lost about what to try next.</p>
<python><pip><setuptools><setup.py><python-packaging>
2023-06-03 04:20:08
2
1,361
gph
76,394,516
9,840,684
Creating a function looping through multiple subsets, then grouping and summing those combinations of subsets
<p>I am attempting to build a function that subsets data across two combinations of dimensions, groups on a status label, and sums on price, creating a single-row dataframe with the summed prices for the different combinations of subsets as output.</p> <p><strong>edit</strong> to clarify, what I'm looking for is to subset on two different combinations of dimensions: a time delta and an association label.</p> <p>I'm then looking to group on a <em>different</em> status label (distinct from the association label) and sum those groups on price.</p> <p>Combinations of subsets:</p> <ul> <li>the association labels are in the <strong>&quot;Association Label&quot;</strong> column and the three of interest are <code>[&quot;SDAR&quot;, &quot;NSDCAR&quot;, &quot;PSAR&quot;]</code>; there are others in the column/data but they can be ignored</li> <li>the time intervals are <code>[7, 30, 60, 90, 120, None]</code> and apply to the &quot;<strong>Status Date</strong>&quot; column</li> </ul> <p>What's being grouped and summed per those combinations of subsets:</p> <ul> <li>The <strong>Status Labelled</strong> values are transaction statuses, which are to be grouped on per each combination of the above subsets of time deltas and association labels. 
They include <code>[&quot;Active&quot;,&quot;Pending&quot;,&quot;Sold&quot;,&quot;Withdrawn&quot;,&quot;Contingent&quot;,&quot;Unknown&quot;]</code> (this is not an exhaustive list, just an example)</li> <li>And finally <strong>['List Price (H)']</strong>, which is to be summed for each of those status labels and each combination of the first two subsets.</li> </ul> <p>So example columns of the desired output would be something like <code>PSAR_7_Contingent_price</code> or <code>SDAR_60_Withdrawn_price</code>.</p> <p>This builds off of <a href="https://stackoverflow.com/questions/76384338/looping-through-combinations-of-subsets-of-data-for-processing">this question and answer</a>, which worked fantastically for value counts, but I'm having difficulty modifying it for <em>summing</em> on a price variable.</p> <p>The code I built off of is:</p> <pre><code>def crossubsets(df): labels = [&quot;SDAR&quot;, &quot;NSDCAR&quot;, &quot;PSAR&quot;] time_intervals = [7, 30, 60, 90, 120, None] group_dfs = df.loc[ df[&quot;Association Label&quot;].isin(labels) ].groupby(&quot;Association Label&quot;) data = [] for l, g in group_dfs: for ti in time_intervals: s = ( g[g[&quot;Status Date&quot;] &gt; (pd.Timestamp.now() - pd.Timedelta(ti, &quot;d&quot;))] if ti is not None else g ) data.append(s[&quot;Status Labelled&quot;].value_counts().rename(f&quot;counts_{l}_{ti}&quot;)) return pd.concat(data, axis=1) #with optional .T to have 18 rows instead of cols # additional code to flatten the output to a (1, 180) dataframe counts_processeed = counts_processeed.unstack().to_frame().sort_index(level=1).T counts_processeed.columns = counts_processeed.columns.map('_'.join) </code></pre> <p>This worked great for the value_counts per Status Labelled, but now I'm looking to sum the associated price for each of those status labels, across those dimensions of subsets. 
I naively attempted to modify the above function with:</p> <pre><code>def crossubsetsprice(df):
    labels = [&quot;SDAR&quot;, &quot;NSDCAR&quot;, &quot;PSAR&quot;]
    time_intervals = [7, 30, 60, 90, 120, None]
    group_dfs = df.loc[
        df[&quot;Association Label&quot;].isin(labels)
    ].groupby(&quot;Association Label&quot;)
    data = []
    for l, g in group_dfs:
        for ti in time_intervals:
            s = (
                g[g[&quot;Status Date&quot;] &gt; (pd.Timestamp.now() - pd.Timedelta(ti, &quot;d&quot;))]
                if ti is not None
                else g
            )
            data.append(s['List Price (H)'].sum().rename(f&quot;price_{l}_{ti}&quot;))
    return pd.concat(data, axis=1)  # with optional .T to have 18 rows instead of cols
</code></pre> <p>But that throws an error, <code>AttributeError: 'numpy.float64' object has no attribute 'rename'</code>, and I don't think it makes much sense or would get the desired output anyway.</p> <p>The alternative I want to avoid, but I know would work, is creating 18 distinct functions, one for each combination of subsets, and then concatenating the output.
An example would be:</p> <pre><code>def price_PSAR_90(df):
    subset_90 = df[df['Status Date'] &gt; (datetime.now() - pd.to_timedelta(&quot;90day&quot;))]
    subset_90_PSAR = subset_90[subset_90['Association Label'] == &quot;PSAR&quot;]
    grouped_90_PSAR = subset_90_PSAR.groupby(['Status Labelled'])
    price_summed_90_PSAR = pd.DataFrame(grouped_90_PSAR['List Price (H)'].sum())
    price_summed_90_PSAR.columns = ['Price']
    price_summed_90_PSAR = price_summed_90_PSAR.reset_index()
    price_summed_90_PSAR = price_summed_90_PSAR.T
    price_summed_90_PSAR = price_summed_90_PSAR.reset_index()
    price_summed_90_PSAR.drop(price_summed_90_PSAR.columns[[0]], axis=1, inplace=True)
    price_summed_90_PSAR_header = price_summed_90_PSAR.iloc[0]  # grab the first row for the header
    price_summed_90_PSAR = price_summed_90_PSAR[1:]  # take the data less the header row
    price_summed_90_PSAR.columns = price_summed_90_PSAR_header
    return price_summed_90_PSAR
</code></pre> <p>The last code snippet works, but without looping it would need to be repeated with the time delta and association label changed for each combination, and then the output columns relabelled and concatenated together, which I want to avoid if possible.</p>
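<p>For concreteness, here is a runnable toy version of the direction I've been experimenting with (the data below is made up; whether grouping on the status column <em>before</em> summing is the idiomatic fix is exactly what I'm asking). The point is that <code>groupby(...).sum()</code> returns a Series, which can be renamed and concatenated just like the value counts:</p>

```python
import pandas as pd

# Toy data standing in for the real columns (values are invented)
df = pd.DataFrame({
    "Association Label": ["SDAR", "SDAR", "PSAR", "PSAR"],
    "Status Labelled": ["Active", "Sold", "Active", "Sold"],
    "Status Date": [pd.Timestamp.now() - pd.Timedelta(1, "d")] * 4,
    "List Price (H)": [100.0, 200.0, 300.0, 400.0],
})

def crossubsetsprice(df):
    labels = ["SDAR", "NSDCAR", "PSAR"]
    time_intervals = [7, 30, 60, 90, 120, None]
    group_dfs = df.loc[df["Association Label"].isin(labels)].groupby("Association Label")
    data = []
    for l, g in group_dfs:
        for ti in time_intervals:
            s = (
                g[g["Status Date"] > (pd.Timestamp.now() - pd.Timedelta(ti, "d"))]
                if ti is not None
                else g
            )
            # groupby + sum yields a Series indexed by status, which CAN be renamed
            data.append(
                s.groupby("Status Labelled")["List Price (H)"].sum().rename(f"price_{l}_{ti}")
            )
    return pd.concat(data, axis=1)

prices = crossubsetsprice(df)

# same flattening trick as for the counts
flat = prices.unstack().to_frame().sort_index(level=1).T
flat.columns = flat.columns.map('_'.join)
```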
<python><pandas><loops><iterator><iteration>
2023-06-03 04:07:16
1
373
JLuu
76,394,480
4,726,035
Can't parse segment Firebase Token Python/Flask
<p>I am currently building a small API project using Flask. I want to authenticate requests using Firebase Auth, so I am using the verify_id_token function in a small middleware.</p> <pre><code>def check_token(f):
    @wraps(f)
    def wrap(*args, **kwargs):
        token = request.headers.get('Authorization')
        if not token:
            return {'message': 'No token provided'}, 400
        try:
            user = auth.verify_id_token(token)
        except Exception as e:
            print(f'Error verifying token: {e}')
            return {'message': 'Invalid token provided.'}, 400
        else:
            request.user = user
            return f(*args, **kwargs)
    return wrap
</code></pre> <p>My code had been working properly, but then for no apparent reason I started to have the following issue:</p> <pre><code>Error verifying token: Can't parse segment: b'\x05\xe6\xabz\xb7\xb2&amp;\....
</code></pre> <p>I have double-checked the token and I see no issues on that side...</p>
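<p>One hypothesis I'm considering (unconfirmed): <code>verify_id_token</code> expects the bare JWT, so if the client recently started sending <code>Authorization: Bearer &lt;token&gt;</code>, the scheme prefix would make the token segments unparsable. A tolerant extraction sketch:</p>

```python
def extract_id_token(authorization_header):
    """Return the raw JWT, tolerating an optional 'Bearer ' scheme prefix."""
    if not authorization_header:
        return None
    parts = authorization_header.split(" ", 1)
    if len(parts) == 2 and parts[0].lower() == "bearer":
        return parts[1]
    return authorization_header

# in the middleware it would be used as:
# user = auth.verify_id_token(extract_id_token(token))
```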
<python><firebase><flask><firebase-authentication>
2023-06-03 03:43:43
2
535
Mansour
76,394,463
1,019,129
Simulate decaying function
<p>Let t be the time tick, i.e. 1, 2, 3, 4, 5, ...</p> <p>I want to calculate and plot a cumulative decaying function f(inits[], peaks[], peak-ticks, zero-ticks), preferably in Python.</p> <p>Where:</p> <pre><code>- inits[] is a list of points at time/tick t where a new 'signal' is introduced
- peaks[] is a list of values which must be reached after peak-ticks (corresponding to inits)
- peak-ticks is how many ticks it takes to reach the next peak value
- zero-ticks is how many ticks it takes to reach zero from the peak
</code></pre> <p>For example:</p> <pre><code>f(inits=[10,15,18], peaks=[1,1,1], peak-ticks=1, zero-ticks=10)
</code></pre> <p>In this case decay takes 10 ticks, i.e. 0.1 per tick.</p> <p>At tick:</p> <pre><code>10! result is 0
11. = 1
12. = 0.9
.....
15! = 0.6 + 0 = 0.6
16. = 0.5 + 1 = 1.5
17. = 0.4 + 0.9 = 1.3
18! = 0.3 + 0.8 + 0 = 1.1
19. = 0.2 + 0.7 + 1 = 1.9
20. = 0.1 + 0.6 + 0.9 = 1.6
.....
</code></pre> <p>PS&gt; As a complication, what if the decay is exponential, like 1/x?</p>
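<p>To make the spec concrete, here is a sketch of the linear case that reproduces the worked example above. (For the exponential complication, one option would be to replace the decay line with something like <code>peak / (1 + fall)</code>, but that's a separate design choice.)</p>

```python
def decay_value(t, init, peak, peak_ticks, zero_ticks):
    """Contribution of one signal (introduced at tick `init`) at tick `t`."""
    if t < init:
        return 0.0
    if t <= init + peak_ticks:  # linear ramp from 0 up to the peak
        return peak * (t - init) / peak_ticks
    fall = (t - init - peak_ticks) / zero_ticks  # linear decay back to zero
    # exponential variant could be: return peak / (1 + fall)
    return max(0.0, peak * (1 - fall))

def f(inits, peaks, peak_ticks, zero_ticks, t_max):
    """Cumulative value of all signals at each tick 1..t_max."""
    return [
        sum(decay_value(t, i, p, peak_ticks, zero_ticks) for i, p in zip(inits, peaks))
        for t in range(1, t_max + 1)
    ]

values = f(inits=[10, 15, 18], peaks=[1, 1, 1], peak_ticks=1, zero_ticks=10, t_max=20)
# values[15] (tick 16) is 0.5 + 1 = 1.5, matching the table above
```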
<python><cumulative-sum><decay>
2023-06-03 03:33:34
1
7,536
sten
76,394,436
2,628,868
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://conda.anaconda.org/conda-forge/osx-64/current_repodata.json>
<p>When I try to run this command on macOS 13.3 with an M1 Pro chip, it shows an error like this:</p> <pre><code>&gt; conda install anaconda-clean
Collecting package metadata (current_repodata.json): failed

CondaHTTPError: HTTP 000 CONNECTION FAILED for url &lt;https://conda.anaconda.org/conda-forge/osx-64/current_repodata.json&gt;
Elapsed: -

An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
'https//conda.anaconda.org/conda-forge/osx-64
</code></pre> <p>I have tried to set the ssl verify:</p> <pre><code>conda config --set ssl_verify false
</code></pre> <p>I have also tried switching the network from Wi-Fi (which uses a proxy) to 4G. That still did not fix the issue. What should I do to make conda work? BTW, I can access the URL <a href="https://conda.anaconda.org/conda-forge/osx-64/current_repodata.json" rel="nofollow noreferrer">https://conda.anaconda.org/conda-forge/osx-64/current_repodata.json</a> in the Google Chrome browser and in the terminal using the curl command.</p>
<python>
2023-06-03 03:23:57
0
40,701
Dolphin
76,394,423
1,088,796
Do I need any environment variables set to execute some code, call openai's api, and return a response?
<p>I was going through a course on OpenAI's API using an in-browser Jupyter notebook page, but wanted to copy some example code from there into a local IDE. I installed Python and the Jupyter extension in VS Code, along with the OpenAI library. My code is below:</p> <pre><code>import openai
import os
# from dotenv import load_dotenv, find_dotenv
# _ = load_dotenv(find_dotenv())  # read local .env file

openai.api_key = &quot;my api key is here&quot;

def get_completion(prompt, model=&quot;gpt-3.5-turbo&quot;):
    messages = [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message[&quot;content&quot;]

prompt = f&quot;&quot;&quot;
Determine whether each item in the following list of \
topics is a topic in the text below, which is delimited with triple backticks.
Give your answer as list with 0 or 1 for each topic.\
List of topics: {&quot;, &quot;.join(topic_list)}
Text sample: '''{story}'''
&quot;&quot;&quot;
response = get_completion(prompt)
print(response)
</code></pre> <p>I installed Python and imported the openai library. When I run it I am getting the error:</p> <pre><code>APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
</code></pre> <p>I'm assuming that's because I commented out lines 3 and 4 in the code, because I am unsure what they do and do not know how to use the dotenv library. Is it simple to set this up just to make a basic call to the OpenAI API? That's all I'm trying to do with this code right now.</p>
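<p>For what it's worth, my understanding is that the two commented-out lines only load variables from a local <code>.env</code> file into the environment; they shouldn't be required when the key is set directly, and leaving them out wouldn't by itself cause a connection error. A stdlib-only sketch of roughly what <code>load_dotenv</code> does (the variable name <code>OPENAI_API_KEY</code> is the conventional one, an assumption on my part about the course's setup):</p>

```python
import os
import tempfile

def load_env_file(path):
    """Minimal stand-in for python-dotenv's load_dotenv: reads KEY=VALUE lines."""
    if not os.path.exists(path):
        return False
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
    return True

# Demo: write a sample .env file and load it
os.environ.pop("OPENAI_API_KEY", None)  # make sure the demo value wins
tmp = tempfile.NamedTemporaryFile("w", suffix=".env", delete=False)
tmp.write("OPENAI_API_KEY=sk-demo-not-a-real-key\n")
tmp.close()
load_env_file(tmp.name)

api_key = os.environ.get("OPENAI_API_KEY")
# openai.api_key = api_key  # instead of hard-coding the key in source
```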
<python><openai-api><dotenv>
2023-06-03 03:17:15
1
2,741
intA
76,394,303
18,572,509
RuntimeError when trying to serve favicon with Flask
<p>I followed the instructions for serving favicons from <a href="https://flask.palletsprojects.com/en/1.1.x/patterns/favicon/" rel="nofollow noreferrer">Flask's docs</a>, and added the line <code>app.add_url_rule('/favicon.ico', redirect_to=url_for('static', filename='favicon.ico'))</code> to my server. But when I run it I get this error:</p> <pre><code>  File &quot;server.py&quot;, line X, in __init__
    redirect_to=url_for('static', filename='favicon.ico'))
  File &quot;/python3.9/site-packages/flask/helpers.py&quot;, line 306, in url_for
    raise RuntimeError(
RuntimeError: Attempted to generate a URL without the application context being pushed. This has
to be executed when application context is available.
</code></pre> <p>I am using a class-based server, with the basics reproduced here:</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, Response, render_template, request, redirect, url_for
from werkzeug.exceptions import HTTPException


class Server:
    def __init__(self, host, port):
        self.app = Flask(__name__)
        self.host = host
        self.port = port

        # Set up routes:
        self.app.route(&quot;/&quot;)(self.index)
        # Error occurs here:
        self.app.add_url_rule('/favicon.ico',
                              redirect_to=url_for('static', filename='favicon.ico'))
        self.app.register_error_handler(HTTPException, self.handle_http_error)

    def index(self):
        return render_template(&quot;index.html&quot;)

    @staticmethod
    def error(msg):
        &quot;&quot;&quot;Custom error handler&quot;&quot;&quot;
        return render_template(&quot;error.html&quot;, msg=msg)

    def handle_http_error(self, e):
        return self.error(f&quot;{e.code} {e.name}: {e.description}&quot;), e.code

    def start(self):
        self.app.run(host=self.host, port=self.port)


server = Server(&quot;localhost&quot;, 8080)
server.start()
</code></pre> <p>My guess is I put the line to serve the favicon in the wrong spot. The error message says <code>This has to be executed when application context is available.</code>, but I'm not sure exactly what that means.
I saw <a href="https://stackoverflow.com/questions/31766082/flask-url-for-error-attempted-to-generate-a-url-without-the-application-conte">this question</a> but the answer is a bit vague, and I couldn't figure out how to incorporate it into my code. Also, that user had a <code>with</code> statement, which I tried but couldn't get to work. I tried adding a <code>SERVER_NAME</code> config variable but it didn't change anything (I also had no idea what to put in it, so that's probably another issue).</p>
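<p>The workaround I'm currently leaning towards (based on my reading of the error, so treat it as an assumption): defer the <code>url_for</code> call to request time by registering a tiny view, since <code>url_for</code> needs an active app/request context and <code>__init__</code> has none. A minimal sketch:</p>

```python
from flask import Flask, redirect, url_for


class Server:
    def __init__(self, host, port):
        self.app = Flask(__name__)
        self.host = host
        self.port = port
        # Register a view instead of calling url_for() here:
        # url_for needs an app/request context, which doesn't exist in __init__.
        self.app.add_url_rule('/favicon.ico', 'favicon', self.favicon)

    def favicon(self):
        # By the time a request reaches this view, a request context is active,
        # so url_for is safe to call.
        return redirect(url_for('static', filename='favicon.ico'))


server = Server("localhost", 8080)
resp = server.app.test_client().get('/favicon.ico')  # sanity check via test client
```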
<python><flask><favicon>
2023-06-03 02:11:05
0
765
TheTridentGuy supports Ukraine
76,394,296
13,891,321
Not all Plotly subplots scale equally
<p>I have working code to create 4 subplots in the same HTML output. When I used to have them as 4 separate HTML plots, the Z axes scaled as requested (0 to -5), but when I run them as a series of subplots, only the first plot scales as requested.</p> <pre><code>&quot;&quot;&quot;Plot 3D streamer surfaces.&quot;&quot;&quot;
# Initialise figure with subplots
fig4S = make_subplots(rows=2, cols=2,
                      specs=[[{'is_3d': True}, {'is_3d': True}],
                             [{'is_3d': True}, {'is_3d': True}]],
                      subplot_titles=(&quot;Streamer 1&quot;, &quot;Streamer 2&quot;, &quot;Streamer 3&quot;, &quot;Streamer 4&quot;),
                      shared_xaxes=False,
                      row_heights=[0.5, 0.5],
                      vertical_spacing=0.05)

# Depth data for each surface, made negative as it's a depth below sea surface
zS1 = 0 - dfS1
zS2 = 0 - dfS2
zS3 = 0 - dfS3
zS4 = 0 - dfS4

colorscale = [[0, 'violet'], [0.2, 'blue'], [0.35, 'lightblue'], [0.50, 'green'],
              [0.65, 'yellow'], [0.8, 'orange'], [1, 'red']]

fig4S.add_trace(go.Surface(z=zS1, cmin=-5, cmax=0, colorscale=colorscale), 1, 1)
fig4S.add_trace(go.Surface(z=zS2, cmin=-5, cmax=0, colorscale=colorscale), 1, 2)
fig4S.add_trace(go.Surface(z=zS3, cmin=-5, cmax=0, colorscale=colorscale), 2, 1)
fig4S.add_trace(go.Surface(z=zS4, cmin=-5, cmax=0, colorscale=colorscale), 2, 2)

fig4S.update_traces(contours_z=dict(show=True, usecolormap=True, highlightcolor=&quot;limegreen&quot;))
fig4S.update_scenes(aspectratio=dict(x=2, y=2, z=0.5))
fig4S.update_layout(scene=dict(zaxis=dict(nticks=4, range=[-5, 0])))
fig4S.update_layout(template='plotly_dark',
                    title=&quot;Channel Depths Line: &quot; + str(name) + &quot; Seq: &quot; + str(Seq),
                    xaxis=dict(automargin=True))
fig4S.write_html(&quot;C:/Users/client/Desktop/4_Streamer_Depths.html&quot;)
</code></pre> <p>The scale of each subplot can be seen at the side of each one. Only Streamer 1 scales as requested; the rest use the data's max/min and appear exaggerated in comparison. <a href="https://i.sstatic.net/22IiY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/22IiY.png" alt="enter image description here" /></a></p> <p>A snippet of the data for each surface looks like this. In this example, each subplot's data has 257 rows and columns R1d-R96d. Numbers are typically in the region of 1.0-4.5.</p> <pre><code>   R1d  R2d  R3d  R4d  R5d  R6d  R7d
0  2.7  2.6  2.4  2.4  2.4  2.4  2.4
1  2.7  2.6  2.4  2.4  2.4  2.4  2.4
2  2.8  2.6  2.4  2.4  2.4  2.4  2.4
3  2.8  2.6  2.4  2.4  2.4  2.4  2.4
4  2.8  2.6  2.4  2.4  2.4  2.4  2.4
5  2.8  2.6  2.5  2.5  2.4  2.4  2.4
6  2.8  2.6  2.5  2.5  2.5  2.4  2.4
7  2.8  2.6  2.5  2.5  2.4  2.4  2.4
8  2.8  2.6  2.5  2.5  2.4  2.4  2.4
9  2.8  2.6  2.5  2.4  2.4  2.4  2.3
</code></pre>
<python><plotly>
2023-06-03 02:05:36
1
303
WillH
76,394,292
13,002,743
Filling NAN values in Pandas by using previous values
<p>I have a Pandas DataFrame in the following format.</p> <p><a href="https://i.sstatic.net/vyvMH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vyvMH.png" alt="Sample DataFrame" /></a></p> <p>I am trying to fill the NaN value by using the most recent non-NaN value and adding one second to the time value. For example, in this case, the program should take the most recent non-NaN value of 8:30:20 and add one second to replace the NaN value. So, the replacement value should be 8:30:21. Is there a way in Pandas to simulate this process for the entire column?</p>
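<p>A sketch of the direction I've been thinking about (the column name <code>Time</code> is a stand-in for the real one in the screenshot): forward-fill the last valid timestamp, then add one second per consecutive NaN so runs of missing values increment tick by tick:</p>

```python
import pandas as pd

# Assumed layout: a datetime column with NaT gaps
df = pd.DataFrame({
    "Time": pd.to_datetime([
        "2023-01-01 08:30:20", None, None, "2023-01-01 08:30:25", None
    ])
})

grp = df["Time"].notna().cumsum()        # label each run starting at a real value
offset = df.groupby(grp).cumcount()      # 0 for the real value, 1, 2, ... for the NaTs
df["Time"] = df["Time"].ffill() + pd.to_timedelta(offset, unit="s")
# 08:30:20, 08:30:21, 08:30:22, 08:30:25, 08:30:26
```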
<python><pandas><datetime>
2023-06-03 02:03:36
3
365
Rishab
76,394,246
1,694,657
Streaming OpenAI results from a Lambda function using Python
<p>I'm trying to stream results from OpenAI using a Lambda function on AWS with the OpenAI Python library. For the invoke mode I have: RESPONSE_STREAM. And, using the example <a href="https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb" rel="nofollow noreferrer">provided for streaming</a>, I can see the streamed results in the Function Logs (abbreviated below):</p> <p>Response null</p> <pre><code>Function Logs
START RequestId: 3e0148c3-1269-4e38-bd08-e29de5751f18 Version: $LATEST
{
  &quot;choices&quot;: [
    {
      &quot;finish_reason&quot;: null,
      &quot;index&quot;: 0,
      &quot;logprobs&quot;: null,
      &quot;text&quot;: &quot;\n&quot;
    }
  ],
  &quot;created&quot;: 1685755648,
  &quot;id&quot;: &quot;cmpl-7NALANaR7eLwIMrXTYJVxBpk6tiZb&quot;,
  &quot;model&quot;: &quot;text-davinci-003&quot;,
  &quot;object&quot;: &quot;text_completion&quot;
}
{
  &quot;choices&quot;: [
    {
      &quot;finish_reason&quot;: null,
      &quot;index&quot;: 0,
      &quot;logprobs&quot;: null,
      &quot;text&quot;: &quot;\n&quot;
    }
  ],....
</code></pre> <p>But the Response is null. I've tested this by entering the URL in the browser and by performing a GET request via cURL: both respond with null. Below is the exact code (with the secret key changed) that I used, but it can also be found at the link provided:</p> <pre><code>import json
import openai
import boto3

def lambda_handler(event, context):
    model_to_use = &quot;text-davinci-003&quot;
    input_prompt = &quot;Write a sentence in 4 words.&quot;

    openai.api_key = 'some-secret key'

    response = openai.Completion.create(
        model=model_to_use,
        prompt=input_prompt,
        temperature=0,
        max_tokens=100,
        top_p=1,
        frequency_penalty=0.0,
        presence_penalty=0.0,
        stream=True
    )

    for chunk in response:
        print(chunk)
</code></pre>
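<p>While I'm still after true streaming, I've noticed the handler never <em>returns</em> anything, which I suspect is why the Response is null regardless of invoke mode (and, as far as I know, Python managed runtimes didn't natively support Lambda response streaming at the time, unlike Node.js). A buffering fallback sketch, with the OpenAI stream faked so it's self-contained:</p>

```python
import json

def collect_stream(chunks):
    """Accumulate streamed completion chunks into one string.

    `chunks` stands in for the iterator returned by
    openai.Completion.create(..., stream=True); each element mimics a chunk's shape.
    """
    parts = []
    for chunk in chunks:
        parts.append(chunk["choices"][0]["text"])
    return "".join(parts)

def lambda_handler(event, context):
    # fake chunks in place of the real OpenAI stream
    fake_stream = [
        {"choices": [{"text": "Hello"}]},
        {"choices": [{"text": " world"}]},
    ]
    text = collect_stream(fake_stream)
    # Returning a value is what populates the Lambda response;
    # print() only writes to the function logs.
    return {"statusCode": 200, "body": json.dumps({"completion": text})}

result = lambda_handler({}, None)
```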
<python><lambda><streaming><openai-api>
2023-06-03 01:35:03
2
1,271
Eric
76,394,194
14,293,020
Xarray write large dataset on memory without killing the kernel
<p><strong>Context:</strong> I have the following dataset: <a href="https://i.sstatic.net/dDYrQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dDYrQ.png" alt="dataset" /></a></p> <p><strong>Goal:</strong> I want to <em>write</em> it on my disk. I am using chunks so the dataset does not kill my kernel.</p> <p><strong>Problem:</strong> I tried to save it on my disk with chunks using:</p> <ol> <li>Option 1: <code>to_zarr</code> -&gt; biggest homogeneous chunks possible: <code>{'mid_date':41, 'x':379, 'y':1}</code></li> <li>Option 2: <code>to_netcdf</code> -&gt; chunk size <code>{'mid_date':3000, 'x':758, 'y':617}</code></li> <li>Option 3: <code>to_netcdf</code> (or <code>to_zarr</code>, same result) -&gt; chunk size <code>{'mid_date':1, 'x':100, 'y':100}</code></li> </ol> <p>But the memory ends up blowing anyway (and I have 96Gb of RAM). Option 3 tries another approach by saving chunk by chunk, but it still blows up the memory (<em>see screenshot</em>). Moreover, it strangely seems to take longer and longer to process chunks as they are written on disk. Do you have a suggestion on how I could solve this problem ?</p> <p>In the screenshot, I would be expecting 1 line of <code>#</code> per file, but on Chunk 2 already, it seems it's saving multiple chunks at once (3 lines of <code>#</code>). The size of chunk 2 was <code>502kb</code>. 
<a href="https://i.sstatic.net/s3n1q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s3n1q.png" alt="enter image description here" /></a></p> <p><strong>Code:</strong></p> <pre><code>import xarray as xr
import os
import sys
from dask.diagnostics import ProgressBar
import numpy as np

xrds = # massive dataset

pathsave = 'Datacubes/'

# Option 1, did not work
# write_job = xrds.to_zarr(f&quot;{pathsave}Test.zarr&quot;, mode='w', compute=False, consolidated=True)

# Option 2, did not work (with chunk size {'mid_date':3000, 'x':100, 'y':100})
# write_job = xrds.to_netcdf(f&quot;test.nc&quot;, compute=False)

# with ProgressBar():
#     print(f&quot;Writing to {pathsave}&quot;)
#     write_job = write_job.compute()

# Option 3, did not work. That's the option I took the screenshot from.
# I force the chunks to be really small so I don't overload the memory
chunk_size = {'mid_date': 1, 'y': xrds.y.shape[0], 'x': xrds.x.shape[0]}

with ProgressBar():
    for i, (key, chunk) in enumerate(xrds.chunk(chunk_size).items()):
        chunk_dataset = xr.Dataset({key: chunk})
        chunk_dataset.to_netcdf(f&quot;{pathsave}/chunk_{i}.nc&quot;, mode=&quot;w&quot;, compute=True)
        print(f&quot;Chunk {i+1} saved.&quot;)
</code></pre>
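<p>One thing I noticed while debugging (worth double-checking): <code>Dataset.items()</code> iterates over data <em>variables</em>, not chunks, so the Option 3 loop materialises a whole variable per iteration, which could explain both the memory blow-up and the multiple progress lines per file. A sketch of slicing along <code>mid_date</code> instead (toy dims standing in for the real ones):</p>

```python
import numpy as np
import xarray as xr

# Small stand-in dataset; the real one has dims mid_date/x/y
ds = xr.Dataset(
    {"v": (("mid_date", "y", "x"), np.arange(24, dtype="f8").reshape(4, 2, 3))},
    coords={"mid_date": np.arange(4), "y": np.arange(2), "x": np.arange(3)},
)

# .items() yields (variable_name, DataArray) pairs, NOT chunks:
var_names = [name for name, _ in ds.items()]

# Slice along one dimension instead, so each piece is a bounded subset:
step = 2  # number of mid_date slices per output file
pieces = [
    ds.isel(mid_date=slice(i, i + step))
    for i in range(0, ds.sizes["mid_date"], step)
]
# each piece could then be written out one at a time, e.g.:
# for j, piece in enumerate(pieces):
#     piece.to_netcdf(f"chunk_{j}.nc", mode="w")
```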
<python><dask><netcdf><python-xarray><zarr>
2023-06-03 01:04:32
1
721
Nihilum
76,394,170
3,826,733
Cannot open file downloaded from Azure Storage account
<p>Why is it that, when downloaded, I am not able to open a file which was uploaded to my Azure storage account as a block blob?</p> <p>Initially I thought it was because of the way I uploaded it, but manually uploaded files won't open when downloaded either. This is the message I see when I try to open the downloaded file: <a href="https://i.sstatic.net/ltZKJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ltZKJ.png" alt="enter image description here" /></a></p> <p>Type of file: .jpg. Here are the properties of the file on Azure:</p> <p><a href="https://i.sstatic.net/IYgzA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYgzA.png" alt="enter image description here" /></a></p> <p>My GraphQL API calls the function below, which is written in Python:</p> <pre><code>@mutation.field(&quot;fileUpload&quot;)
def resolve_fileUpload(_, info, file):
    print(file['containerName'])
    file_path = file['file'] if 'file' in file else None
    file_name = file['fileName'] if 'fileName' in file else None
    file_type = file['fileType'] if 'fileType' in file else None
    file_extension = file['fileExtension'] if 'fileExtension' in file else None
    uploaded_date = file['uploadedDate'] if 'uploadedDate' in file else None
    container_name = file['containerName'] if 'containerName' in file else None
    try:
        container_client = blob_service_client.get_container_client(
            container_name)
        if not container_client.exists():
            container_client.create_container()
            container_client.set_container_metadata(
                metadata={'Created_Date': uploaded_date})
        with open(file_path, &quot;rb&quot;) as file:
            content_settings = ContentSettings(content_type='image/jpeg')
            metadata = {'File_Name': file_name,
                        'Uploaded_Date': uploaded_date,
                        'Container_Name': container_name,
                        'File_Type': file_type,
                        'File_Extension': file_extension}
            result = container_client.upload_blob(
                name=file_name,
                data=file_path,
                metadata=metadata,
                content_settings=content_settings)
            # result = container_client.upload_blob(
            #     name=file_name, data=file_path)
    except AzureException as e:
        if e.status_code == 200:
            return {
                &quot;status&quot;: e.status_code,
                &quot;error&quot;: &quot;&quot;,
                &quot;fileUrl&quot;: result.url
            }
        else:
            return {
                &quot;status&quot;: e.status_code,
                &quot;error&quot;: e.message,
                &quot;fileUrl&quot;: result.url
            }
    else:
        return {
            &quot;status&quot;: 200,
            &quot;error&quot;: &quot;&quot;,
            &quot;fileUrl&quot;: result.url
        }
</code></pre> <p><a href="https://i.sstatic.net/mkCfT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mkCfT.jpg" alt="enter image description here" /></a></p>
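<p>Update with a theory (unverified against the real storage account): the upload passes <code>data=file_path</code>, i.e. the path <em>string</em>, so the blob content would be the literal path text rather than the image bytes, which would explain an unopenable .jpg. A toy reproduction of the difference, with a stand-in for <code>upload_blob</code>:</p>

```python
import os
import tempfile

uploads = {}

def upload_blob(name, data):
    """Toy stand-in for container_client.upload_blob: it accepts bytes,
    a str, or a file-like object, and stores whatever it is handed."""
    uploads[name] = data.read() if hasattr(data, "read") else data

file_path = os.path.join(tempfile.mkdtemp(), "photo.jpg")
with open(file_path, "wb") as fh:
    fh.write(b"\xff\xd8\xff\xe0 fake jpeg bytes")  # JPEG magic + filler

# Buggy pattern: the blob content becomes the path string itself
upload_blob("bad.jpg", file_path)

# Fixed pattern: pass the open file object so the real bytes are uploaded
with open(file_path, "rb") as fh:
    upload_blob("good.jpg", fh)
```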
<python><azure><azure-blob-storage>
2023-06-03 00:50:25
0
3,842
Sumchans
76,394,066
5,378,132
FastAPI SQLAlchemy - How to Encrypt Table Column and then Decrypt when Querying Result?
<p>I have a table called <code>users</code>. I want to encrypt the <code>phone_number</code> column in the SQL table, and then decrypt <code>phone_number</code> when querying the <code>users</code> table and returning a User item.</p> <p>Here's an example:</p> <p><strong>models.py</strong></p> <pre><code>class Users(Base):
    __tablename__ = &quot;users&quot;

    id = Column(UUID(as_uuid=True), primary_key=True, unique=True, default=uuid.uuid4)
    username = Column(String(255), nullable=False)
    phone_number = Column(StringEncryptedType(String(255), settings.ENCRYPT_KEY), nullable=False)
    created_at = Column(DateTime, server_default=func.now())
</code></pre> <p><strong>schemas.py</strong></p> <pre><code>class UserResult(BaseModel):
    id: Optional[uuid.UUID]
    username: str
    phone_number: str

    class Config:
        orm_mode = True


class UsersResult(BaseModel):
    people: List[UserResult]
</code></pre> <p><strong>users.py</strong></p> <pre><code>async def db_get_users(db: AsyncSession) -&gt; List[Users]:
    result = await db.execute(select(Users))
    return result.scalars().all()


async def db_create_user(db: AsyncSession, user: UserResult) -&gt; Users:
    instance = Users(**user.dict())
    db.add(instance)
    await db.commit()
    return instance


@router.get(&quot;/users&quot;, name=&quot;users&quot;)
async def get_all_users(
    request: Request,
    db: AsyncSession = Depends(get_session),
    # authenticated: bool = Depends(check_authentication_header),
):
    request.app.logger.info(&quot;Retrieving list of all users ...&quot;)
    return {&quot;users&quot;: await db_get_users(db)}
</code></pre> <p>Unfortunately, I'm getting the following error when hitting the <code>/users</code> endpoint for getting a list of all users in the SQL table:</p> <pre><code>starlit-fastapi-app |     decrypted_value = self.engine.decrypt(value)
starlit-fastapi-app |   File &quot;/usr/local/lib/python3.9/site-packages/sqlalchemy_utils/types/encrypted/encrypted_type.py&quot;, line 121, in decrypt
starlit-fastapi-app |     decrypted = base64.b64decode(value)
starlit-fastapi-app |   File &quot;/usr/local/lib/python3.9/base64.py&quot;, line 87, in b64decode
starlit-fastapi-app |     return binascii.a2b_base64(s)
starlit-fastapi-app | binascii.Error: Incorrect padding
</code></pre> <p>Any help would be greatly appreciated!</p>
<python><encryption><sqlalchemy><cryptography><fastapi>
2023-06-03 00:03:04
1
2,831
Riley Hun
76,393,832
11,485,896
Get physical address from a way containing nodes only
<p>I'm new to geocoding stuff.</p> <p>I have a list of addresses for which I have to find the nearest <strong>residential</strong> buildings. At first, I look for basic data on these addresses using the <code>geopy.geocoders.Nominatim</code> geolocator. The data I get from <code>Nominatim</code> includes, among others, <code>display_name</code>, <code>osm_id</code> and <code>osm_type</code>. After that I switch to <code>OSMPythonTools.api.Api</code> to get more detailed information (e.g. number of floors, flats etc.) from <code>osm_query: str = fr&quot;{osm_type}/{osm_id}&quot;</code>. The data is then saved to a <code>pandas</code> dataframe. In the next step, using <code>osmnx</code>, I try to get all geometries from addresses in a 1 km perimeter (by default). Code:</p> <pre class="lang-py prettyprint-override"><code># Python
import os
from pprint import pprint
from collections import defaultdict

# geodata
import pandas as pd
from pandas import DataFrame
from OSMPythonTools.api import Api as OSM_Api
from OSMPythonTools.nominatim import Nominatim as OSM_Nominatim
from geopy.geocoders import Nominatim as geopy_Nominatim
import osmnx as ox

# # # engines
# # geopy
# https://levelup.gitconnected.com/simple-geocoding-in-python-fb28ee5272e0
geopy_geolocator: geopy_Nominatim = geopy_Nominatim(user_agent=&quot;my_app&quot;)
geopy_geocode: geopy_geolocator.geocode = geopy_geolocator.geocode

# # OSM
api_OSM: OSM_Api = OSM_Api()

# # # data
# test addresses load
df_addresses: DataFrame = pd.read_csv(&quot;test_addresses.csv&quot;, sep=&quot;;&quot;)

# # # gathering data
# addresses coordinates
# https://levelup.gitconnected.com/simple-geocoding-in-python-fb28ee5272e0
# &quot;full_address&quot; is in custom, non-OSM format
addresses_to_analyze: dict = df_addresses[&quot;full_address&quot;].to_list()

addresses_data: dict[list] = defaultdict(list)
for i in addresses_to_analyze:
    addresses_data[&quot;full_address&quot;].append(i)
    raw_geopy_geocode_response: dict = geopy_geocode(i)
    if raw_geopy_geocode_response:
        raw_geopy_geocode_response: dict = geopy_geocode(i).raw
        osm_address = raw_geopy_geocode_response.get(&quot;display_name&quot;)
        osm_id: int = raw_geopy_geocode_response.get(&quot;osm_id&quot;)
        osm_type: str = raw_geopy_geocode_response.get(&quot;osm_type&quot;)
        place_class: str = raw_geopy_geocode_response.get(&quot;class&quot;)
        place_type: str = raw_geopy_geocode_response.get(&quot;type&quot;)
        osm_query: str = fr&quot;{osm_type}/{osm_id}&quot;
        raw_osm_geocode_response: dict = api_OSM.query(osm_query).tags()
        building_levels: str = raw_osm_geocode_response.get(&quot;building:levels&quot;)
        building_flats: str = raw_osm_geocode_response.get(&quot;building:flats&quot;)
        addresses_data[&quot;osm_address&quot;].append(osm_address)
        addresses_data[&quot;osm_id&quot;].append(osm_id)
        addresses_data[&quot;osm_type&quot;].append(osm_type)
        addresses_data[&quot;place_class&quot;].append(place_class)
        addresses_data[&quot;place_type&quot;].append(place_type)
        addresses_data[&quot;building_levels&quot;].append(building_levels)
        addresses_data[&quot;building_flats&quot;].append(building_flats)
    else:
        addresses_data[&quot;osm_address&quot;].append(None)
        addresses_data[&quot;osm_id&quot;].append(None)
        addresses_data[&quot;osm_type&quot;].append(None)
        addresses_data[&quot;place_class&quot;].append(None)
        addresses_data[&quot;place_type&quot;].append(None)
        addresses_data[&quot;building_levels&quot;].append(None)
        addresses_data[&quot;building_flats&quot;].append(None)

df_osm_data = pd.DataFrame.from_dict(addresses_data)

# extracting test address
test_address = df_osm_data.loc[0, &quot;osm_address&quot;]

# # # osmnx - nearest (by default - 1 km) residential locations
ox_gdf = ox.geometries_from_address(
    address=test_address,
    tags={&quot;building&quot;: [&quot;house&quot;, &quot;apartments&quot;, &quot;residential&quot;, &quot;detached&quot;],
          &quot;place&quot;: &quot;house&quot;,
          &quot;amenity&quot;: False,
          }
)

df_gdf = pd.DataFrame(ox_gdf)
df_gdf.reset_index(inplace=True)
df_gdf.to_excel(&quot;osmnx_geometries_perimeter.xlsx&quot;)
</code></pre> <p>The problem is that there are some entries which contain only a type of building (which meets the conditions) but no address. In such cases, I have the correct element type (<code>way</code>), but when I enter the <code>osmid</code> into the browser search engine I receive only a set of nodes (even though the highlighted polygon is correct). <strong>Only when I right-click the polygon and select 'Show Address' do I finally get the address (also a new <code>osmid</code> and <code>nodes</code> for the <code>way</code>)</strong>. What's also interesting is that <code>df_gdf</code> contains a column with lists of <code>nodes</code>, but none of the nodes there matches the new nodes from the browser.</p> <p>My questions:</p> <ol> <li>Is it possible to re-evaluate <code>osmid</code>s to get addresses? If yes - how?</li> <li>Could <code>place_id</code> from <code>Nominatim</code> help?</li> </ol> <p><strong>EDIT:</strong></p> <blockquote> <p><strong>Only when I right-click the polygon and select 'Show Address' do I finally get the address (also a new <code>osmid</code> and <code>nodes</code> for the <code>way</code>)</strong>.</p> </blockquote> <p>In the cases I'm talking about, I usually get a new <code>node</code> instead of a <code>way</code> after clicking 'Show Address'.</p> <blockquote> <p>What's also interesting is that <code>df_gdf</code> contains a column with lists of <code>nodes</code>, but none of the nodes there matches the new nodes from the browser.</p> </blockquote> <p>Of course I mean those entries without addresses.</p> <p><strong>EXAMPLE</strong>:</p> <p>For one address, I got around 150 neighbouring entries. One of the entries is <a href="https://www.openstreetmap.org/way/389852088" rel="nofollow noreferrer">this way</a>.
It contains 11 <code>nodes</code>:</p> <pre><code>3938255237
3938255220
3938255221
3938255209
3938255217
3938255224
3938255223
3938255230
3938255236
3938255251
3938255237
</code></pre> <p>Both the script and the browser indicate no address here. When I click 'Show Address' I receive a <code>node</code> <a href="https://www.openstreetmap.org/node/2710576553" rel="nofollow noreferrer"><code>2710576553</code></a> with a definite address. As you can notice, this <code>node</code> does not appear in the list of the previous <code>way</code>'s <code>nodes</code>.</p>
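<p><strong>EDIT 2:</strong> one workaround I'm considering (an assumption on my part, not verified against this exact way): reverse-geocode the way's centroid, which seems to be roughly what osm.org's 'Show Address' does. A naive centroid from the node coordinates should be enough for small building footprints:</p>

```python
def way_centroid(node_coords):
    """Average of (lat, lon) pairs; in a closed way the last node repeats the first."""
    ring = node_coords[:-1] if node_coords[0] == node_coords[-1] else node_coords
    lat = sum(c[0] for c in ring) / len(ring)
    lon = sum(c[1] for c in ring) / len(ring)
    return lat, lon

# made-up coordinates for a small closed way (first == last)
coords = [(52.0, 21.0), (52.0, 21.001), (52.001, 21.001), (52.001, 21.0), (52.0, 21.0)]
lat, lon = way_centroid(coords)

# then reverse-geocode the centroid (network call, so commented out here):
# location = geopy_geolocator.reverse((lat, lon), zoom=18)
# print(location.raw.get("display_name"), location.raw.get("osm_id"))
```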
<python><openstreetmap><geopandas><osmnx><geopy>
2023-06-02 22:41:21
0
382
Soren V. Raben
76,393,695
9,795,817
PySpark: Replace null values with empty list
<p>I outer joined the results of two <code>groupBy</code> and <code>collect_set</code> operations and ended up with this dataframe (<code>foo</code>):</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; foo.show(3)
+---+------+------+
| id|    c1|    c2|
+---+------+------+
|  0|  null|   [1]|
|  7|   [6]|  null|
|  6|   [6]|[7, 8]|
+---+------+------+
</code></pre> <p>I want to concatenate <code>c1</code> and <code>c2</code> together to obtain this result:</p> <pre class="lang-py prettyprint-override"><code>+---+------+------+---------+
| id|    c1|    c2|      res|
+---+------+------+---------+
|  0|  null|   [1]|      [1]|
|  7|   [6]|  null|      [6]|
|  6|   [6]|[7, 8]|[6, 7, 8]|
+---+------+------+---------+
</code></pre> <p>To do this, I need to coalesce the null values in <code>c1</code> and <code>c2</code>. However, I don't even know what data type <code>c1</code> and <code>c2</code> are. How can I replace the null values with <code>[]</code> so that the concatenation of <code>c1</code> and <code>c2</code> will yield <code>res</code> as shown above?</p> <p>This is how I'm currently concatenating both columns:</p> <pre class="lang-py prettyprint-override"><code># Concat returns null for rows where either column is null
foo.selectExpr(
    'id',
    'c1',
    'c2',
    'concat(c1, c2) as res'
)
</code></pre>
<python><apache-spark><pyspark><null>
2023-06-02 22:05:52
2
6,421
Arturo Sbr
76,393,635
7,648
`strftime` acting unexpectedly
<p>I have the following Python code:</p> <pre><code>from datetime import datetime

def get_session_id(date_of_mri):
    dt = datetime.strptime(date_of_mri, '%m/%d/%Y')
    date_time = dt.strftime(&quot;%Y%M%D&quot;)
    return date_time

print(get_session_id('2/27/2002'))
</code></pre> <p>This prints:</p> <pre><code>20020002/27/02
</code></pre> <p>I'm expecting it to print:</p> <pre><code>20020227
</code></pre> <p>What am I doing wrong here?</p>
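<p>For comparison, a version with the directives I believe were intended: <code>%M</code> is minutes and <code>%D</code> expands to <code>mm/dd/yy</code>, while the lowercase <code>%m</code>/<code>%d</code> give the zero-padded month and day:</p>

```python
from datetime import datetime

def get_session_id(date_of_mri):
    dt = datetime.strptime(date_of_mri, '%m/%d/%Y')
    # %m = zero-padded month, %d = zero-padded day;
    # %M would be minutes and %D the mm/dd/yy shortcut, hence the odd output.
    return dt.strftime("%Y%m%d")

session_id = get_session_id('2/27/2002')  # β†’ "20020227"
```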
<python><date>
2023-06-02 21:47:44
1
7,944
Paul Reiners
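For reference, the directives in the question mix case: `%M` is minutes and `%D` is the `mm/dd/yy` shorthand, while the intended tokens are lowercase `%m` (month) and `%d` (day). A minimal sketch of the corrected function:

```python
from datetime import datetime

def get_session_id(date_of_mri: str) -> str:
    dt = datetime.strptime(date_of_mri, "%m/%d/%Y")
    # lowercase %m and %d are zero-padded month and day;
    # uppercase %M (minute) and %D (mm/dd/yy) produced the odd output
    return dt.strftime("%Y%m%d")

print(get_session_id("2/27/2002"))  # 20020227
```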
76,393,584
4,802,101
OPC DA dll compatible with MS KB5004442
<p>I have a Python application that uses OpenOPC to connect to our OPC server. After the release of the Microsoft <a href="https://support.microsoft.com/en-us/topic/kb5004442-manage-changes-for-windows-dcom-server-security-feature-bypass-cve-2021-26414-f1400b52-c141-43d2-941e-37ed901c769c" rel="nofollow noreferrer">KB5004442 DCOM security patch</a>, this application has not been able to connect. This is because the OpenOPC module makes use of an OPC Automation wrapper from <a href="http://www.gray-box.net/" rel="nofollow noreferrer">gray-box</a> which doesn't have the appropriate security level to be compatible with this new patch. I also suspect that they don't support this DLL anymore. I would like to know if anyone else is struggling with this problem.</p> <p>I tried to use OPCDAAuto.dll from the OPC Foundation, but I found out that it has not been maintained for a long time, so it has the same problem.</p> <p>I suppose that there are only two options here:</p> <ol> <li>Find a DLL that is compatible with this new security requirement.</li> <li>Use OPC tunnelers.</li> </ol> <p>Thanks!</p>
<python><opc><dcom><open-opc>
2023-06-02 21:34:43
1
370
Dariva
76,393,348
3,845,439
How to offset twinx y-axis by specified amount?
<p>I am familiar with using <code>twinx()</code> to share an axis's x-axis with another subplot:</p> <pre><code>fig, ax1 = plt.subplots() ax2 = ax1.twinx() ax1.plot(xdata1, ydata1) ax2.plot(xdata2, ydata2) </code></pre> <p>This makes the <code>y=0</code> axis line up between <code>ax1</code> and <code>ax2</code>, but the axes are still on separate subplots so they each get their own autoscaling to match whatever is plotted on them as normal. However, I have a situation where I need to create separate axes <code>ax1</code> and <code>ax2</code> on different subplots, but with a specified offset between their y-axes - i.e. <code>ax1</code>'s <code>y=y0</code> needs to line up with <code>ax2</code>'s <code>y=0</code>, for some nonzero offset <code>y0</code>, like this:</p> <p><a href="https://i.sstatic.net/lFvYl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lFvYl.png" alt="enter image description here" /></a></p> <p><a href="https://stackoverflow.com/q/61596813/3845439">This post</a> appears to be close to what I am after, but not quite the same. That example uses a secondary y-axis with a function relating its values to the primary y-axis. However, I need a separate subplot entirely with its own scaling just like <code>twinx()</code> gives me.</p>
<python><matplotlib><yaxis><twinx>
2023-06-02 20:38:50
1
440
PGmath
76,393,336
3,940,670
Calling Google Cloud Speech-to-Text API regional recognizers using the Python client library returns 400 and 404 errors
<p><strong>The goal:</strong> The goal is to use Python client libraries to convert a speech audio file to text through a Chirp recognizer.</p> <p><strong>Steps to recreate the error:</strong> I'm creating a recognizer following the steps in the link below, I am following the instruction and the Python code in the below link to perform Speech to Text using GCP Speech API, <a href="https://cloud.google.com/speech-to-text/v2/docs/transcribe-client-libraries" rel="nofollow noreferrer">https://cloud.google.com/speech-to-text/v2/docs/transcribe-client-libraries</a> the code is as below,</p> <pre><code>from google.cloud.speech_v2 import SpeechClient from google.cloud.speech_v2.types import cloud_speech def speech_to_text(project_id, recognizer_id, audio_file): # Instantiates a client client = SpeechClient() request = cloud_speech.CreateRecognizerRequest( parent=f&quot;projects/{project_id}/locations/global&quot;, recognizer_id=recognizer_id, recognizer=cloud_speech.Recognizer( language_codes=[&quot;en-US&quot;], model=&quot;latest_long&quot; ), ) # Creates a Recognizer operation = client.create_recognizer(request=request) recognizer = operation.result() # Reads a file as bytes with open(audio_file, &quot;rb&quot;) as f: content = f.read() config = cloud_speech.RecognitionConfig(auto_decoding_config={}) request = cloud_speech.RecognizeRequest( recognizer=recognizer.name, config=config, content=content ) # Transcribes the audio into text response = client.recognize(request=request) for result in response.results: print(f&quot;Transcript: {result.alternatives[0].transcript}&quot;) return response </code></pre> <p>It works fine with the multi-regional global models. However, as of now(June of 2023), the Chirp model is only available in the <code>us-central1</code> region.</p> <p><strong>The issue:</strong> When you're using the same code for the regional recognizers it outputs a 404 error indicating that the recognizer doesn't exist in the project. 
When you change the recognizer's name from <code>&quot;projects/{project_id}/locations/global/recognizers/{recognizer_id}&quot;</code> to <code>&quot;projects/{project_id}/locations/us-central1/recognizers/{recognizer_id}&quot;</code> or anything with non-global location, it shows 400 error saying that the location is expected to be <code>global</code>.</p> <p><strong>Question:</strong> How can I call a regional recognizer through the GCP Python client library?</p>
<python><google-cloud-platform><google-cloud-vertex-ai><google-cloud-speech><google-cloud-python>
2023-06-02 20:35:27
1
637
M.Hossein Rahimi
76,393,311
14,896,203
Aggregate based on two columns and then apply function on one column vs the rest
<p>Hello I have below demonstrated DataFrame and attempting to generate an aggregated result based on <code>unique_id</code> and <code>cutoff</code> where there is a calculation of a metric (such as MSE) between <code>y</code> and the rest of columns, except group by ones and <code>ds</code>.</p> <pre><code>shape: (5, 10) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ unique_id ┆ ds ┆ cutoff ┆ y ┆ … ┆ CrostonClass ┆ SeasonalNaiv ┆ HistoricAver ┆ DynamicOptim β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ ┆ ic ┆ e ┆ age ┆ izedTheta β”‚ β”‚ str ┆ i64 ┆ i64 ┆ f32 ┆ ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ ┆ ┆ ┆ ┆ ┆ f32 ┆ f32 ┆ f32 ┆ f32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═════β•ͺ════════β•ͺ═══════β•ͺ═══β•ͺ══════════════β•ͺ══════════════β•ͺ══════════════β•ͺ══════════════║ β”‚ H1 ┆ 701 ┆ 700 ┆ 619.0 ┆ … ┆ 742.668762 ┆ 691.0 ┆ 661.674988 ┆ 612.767517 β”‚ β”‚ H1 ┆ 702 ┆ 700 ┆ 565.0 ┆ … ┆ 742.668762 ┆ 618.0 ┆ 661.674988 ┆ 536.846252 β”‚ β”‚ H1 ┆ 703 ┆ 700 ┆ 532.0 ┆ … ┆ 742.668762 ┆ 563.0 ┆ 661.674988 ┆ 497.82428 β”‚ β”‚ H1 ┆ 704 ┆ 700 ┆ 495.0 ┆ … ┆ 742.668762 ┆ 529.0 ┆ 661.674988 ┆ 464.723236 β”‚ β”‚ H1 ┆ 705 ┆ 700 ┆ 481.0 ┆ … ┆ 742.668762 ┆ 504.0 ┆ 661.674988 ┆ 440.972351 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I was able to generate desired result with iteration, however, do not know how to concat all the DataFrames.</p> <pre class="lang-py prettyprint-override"><code>from datasetsforecast.losses import mse, mae, rmse def evaluate_cross_validation(df, 
metric): models = df.drop(columns=['ds', 'cutoff', 'y', 'unique_id']).columns evals = [] for model in models: eval_ = ( df .groupby(['unique_id', 'cutoff']) .agg( pl.apply( exprs=['y', model], function=lambda args: metric(args[0], args[1]), ) ) .rename({'y': model}) .sort(by=['unique_id', 'cutoff']) ) evals.append(eval_) uid_cutoff = evals[0].select(['unique_id']) eval_dfs = pl.concat([df.drop(['unique_id', 'cutoff']) for df in evals], how='horizontal') evals = pl.concat([uid_cutoff, eval_dfs], how='horizontal') evals = evals.groupby(['unique_id']).mean() # Averages the error metrics for all cutoffs for every combination of model and unique_id best_model = [min(row, key=row.get) for row in evals.drop('unique_id').rows(named=True)] evals = evals.with_columns(pl.lit(best_model).alias('best_model')).sort(by=['unique_id']) return evals </code></pre> <p>Expected output:</p> <pre><code>shape: (5, 8) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ unique_id ┆ AutoARIMA ┆ HoltWinte ┆ CrostonCla ┆ SeasonalNa ┆ HistoricAv ┆ DynamicOpt ┆ best_mod β”‚ β”‚ --- ┆ --- ┆ rs ┆ ssic ┆ ive ┆ erage ┆ imizedThet ┆ el β”‚ β”‚ str ┆ f64 ┆ --- ┆ --- ┆ --- ┆ --- ┆ a ┆ --- β”‚ β”‚ ┆ ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ --- ┆ str β”‚ β”‚ ┆ ┆ ┆ ┆ ┆ ┆ f64 ┆ β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═══════════β•ͺ════════════β•ͺ════════════β•ͺ════════════β•ͺ════════════β•ͺ══════════║ β”‚ H1 ┆ 1979.3021 ┆ 44888.019 ┆ 28038.7363 ┆ 1422.66668 ┆ 20927.6640 ┆ 1296.33398 ┆ DynamicO β”‚ β”‚ ┆ 85 ┆ 531 ┆ 28 ┆ 7 ┆ 62 ┆ 4 ┆ ptimized β”‚ β”‚ ┆ ┆ ┆ ┆ ┆ ┆ ┆ Theta β”‚ β”‚ H10 ┆ 458.89271 ┆ 2812.9166 ┆ 1483.48413 ┆ 96.895832 ┆ 1980.36749 ┆ 379.621124 ┆ Seasonal β”‚ β”‚ ┆ 5 ┆ 26 ┆ 1 ┆ ┆ 3 ┆ ┆ Naive β”‚ β”‚ H100 ┆ 8629.9482 ┆ 121625.37 ┆ 91945.1406 ┆ 
12019.0 ┆ 78491.1914 ┆ 21699.6479 ┆ AutoARIM β”‚ β”‚ ┆ 42 ┆ 5 ┆ 25 ┆ ┆ 06 ┆ 49 ┆ A β”‚ β”‚ H101 ┆ 6818.3486 ┆ 28453.395 ┆ 16183.6347 ┆ 10944.4580 ┆ 18208.4042 ┆ 63698.0732 ┆ AutoARIM β”‚ β”‚ ┆ 33 ┆ 508 ┆ 66 ┆ 08 ┆ 97 ┆ 42 ┆ A β”‚ β”‚ H102 ┆ 65489.965 ┆ 232924.85 ┆ 132655.300 ┆ 12699.8959 ┆ 309110.468 ┆ 31393.5214 ┆ Seasonal β”‚ β”‚ ┆ 82 ┆ 1562 ┆ 781 ┆ 96 ┆ 75 ┆ 84 ┆ Naive β”‚ </code></pre>
<python><python-polars>
2023-06-02 20:30:56
1
772
Akmal Soliev
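A hedged sketch of the reshape the question is after, written in pandas purely for illustration (polars' melt/group-by can express the same steps); the toy numbers below are made up:

```python
import pandas as pd

df = pd.DataFrame({
    "unique_id": ["H1"] * 4,
    "cutoff": [700] * 4,
    "y": [619.0, 565.0, 532.0, 495.0],
    "SeasonalNaive": [691.0, 618.0, 563.0, 529.0],
    "HistoricAverage": [661.7] * 4,
})

# long format: one row per (unique_id, cutoff, model) observation
long = df.melt(id_vars=["unique_id", "cutoff", "y"],
               var_name="model", value_name="pred")

# MSE of each model within each (unique_id, cutoff) group
mse = (long.assign(se=(long["y"] - long["pred"]) ** 2)
           .groupby(["unique_id", "cutoff", "model"])["se"].mean()
           .unstack("model"))

# average over cutoffs, then pick the best model per unique_id
evals = mse.groupby("unique_id").mean()
evals["best_model"] = evals.idxmin(axis=1)
print(evals)
```

Melting avoids the per-model loop and the horizontal concat entirely: every model becomes rows in one frame, so a single grouped aggregation scores them all.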
76,393,255
11,613,489
Scraping using BeautifulSoup prints an empty output
<p>I'm trying to scrape a website. I want to print all the elements with the following class name,</p> <blockquote> <p>class=product-size-info__main-label</p> </blockquote> <p>The code is the following:</p> <pre><code>from bs4 import BeautifulSoup with open(&quot;MadeInItaly.html&quot;, &quot;r&quot;) as f: doc= BeautifulSoup (f, &quot;html.parser&quot;) tags = doc.find_all(class_=&quot;product-size-info__main-label&quot;) print(tags) </code></pre> <p>Result: [XS, XS, S, M, L, XL]</p> <p>All good here.</p> <p>Now this is when done on the file MadeInItaly.html (it works) which is basically the same website I am trying to use, but the version saved on my disk.</p> <p>Now, with the version from the URL.</p> <pre><code>from bs4 import BeautifulSoup import requests headers = {&quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36&quot;} url = &quot;https://www.zara.com/es/es/vestido-midi-volantes-cinturon-con-lino-p00387075.html?v1=258941747&amp;v2=2184287&quot; result = requests.get(url,headers=headers) doc = BeautifulSoup(result.text, &quot;html.parser&quot;) tags = doc.find_all(class_=&quot;product-size-info__main-label&quot;) print(tags) </code></pre> <p>Result: []</p> <p>I have tried with different User Agent Headers, what could be wrong here?</p>
<python><html><web-scraping><beautifulsoup>
2023-06-02 20:20:07
2
642
Lorenzo Castagno
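A likely cause worth checking before blaming headers: the size labels on the browser-saved page were injected by client-side JavaScript, so they exist in the saved HTML but not in the raw response that `requests` receives. A tiny illustration of the diagnosis step, using a made-up page rather than Zara's actual markup:

```python
from bs4 import BeautifulSoup

# what the raw server response of a JS-rendered page typically looks like:
raw_html = """
<html><body>
  <div id="app"></div>
  <script>/* size labels are rendered into #app at runtime */</script>
</body></html>
"""

soup = BeautifulSoup(raw_html, "html.parser")
tags = soup.find_all(class_="product-size-info__main-label")
print(tags)  # [] -- the class never appears in the raw HTML
```

If `'product-size-info__main-label' in result.text` is `False`, no parser or user agent will find it; a browser-driven tool (Selenium/Playwright) or the site's underlying JSON endpoints would be needed instead.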
76,393,080
19,410,411
How to get the optimal number of clusters using elbow method and return it?
<p>I need to find a way to return the number of optimal clusters from the elbow method implementation in Python. How can I implement the elbow method so that it shows the elbow graph and then returns the optimal number of clusters?</p>
<python><k-means>
2023-06-02 19:40:55
1
525
Mikelenjilo
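One common programmatic version of the elbow method: compute the inertia curve over a range of k, then pick the k whose (normalized) point lies farthest from the chord joining the curve's endpoints. A sketch assuming scikit-learn is available; the helper name `elbow_k` is invented here:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def elbow_k(X, k_max=8):
    ks = np.arange(1, k_max + 1)
    inertias = np.array([
        KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
        for k in ks
    ])
    # normalize both axes so the geometry isn't dominated by inertia's scale
    pts = np.column_stack([
        (ks - ks.min()) / (ks.max() - ks.min()),
        (inertias - inertias.min()) / (inertias.max() - inertias.min()),
    ])
    start, end = pts[0], pts[-1]
    line = (end - start) / np.linalg.norm(end - start)
    normal = np.array([-line[1], line[0]])
    dists = np.abs((pts - start) @ normal)   # distance of each point to the chord
    return int(ks[np.argmax(dists)]), ks, inertias

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)
best_k, ks, inertias = elbow_k(X)
print(best_k)
```

Plotting `ks` against `inertias` gives the usual elbow graph, while `best_k` is the value to return.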
76,393,074
13,058,538
Python multithreading for file reading results in slower performance: How to optimize?
<p>I am learning concurrency in Python and I have noticed that the <code>threading</code> module even lowers the speed of my code. My code is a simple parser where I read HTMLs from my local directory and output parsed a few fields as JSON files to another directory.</p> <p>I was expecting a speed improvement but the speed becomes lower, tested with small numbers of HTMLs at a time, 50, 200, 1000, and large numbers of HTMLs like 30k. In all situations, the speed lowers. For example, with 1000 HTMLs without threading speed is ~2.9 seconds, with threading speed is ~4 seconds.</p> <p>Also tried the <code>concurrent.futures</code> <code>ThreadPoolExecutor</code> but it provides the same slower results.</p> <p>I know about GIL, but I thought that I/O-bound tasks should be handled with multithreading.</p> <p>Here is my code:</p> <pre><code>import json import re import time from pathlib import Path import threading def get_json_data(body: str) -&gt; re.Match[str] or None: return re.search( r'(?&lt;=json_data&quot;&gt;)(.*?)(?=&lt;/script&gt;)', body ) def parse_html_file(file_path: Path) -&gt; dict: with open(file_path, &quot;r&quot;) as file: html_content = file.read() match = get_json_data(html_content) if not match: return {} next_data = match.group(1) json_data = json.loads(next_data) data1 = json_data.get(&quot;data1&quot;) data2 = json_data.get(&quot;data2&quot;) data3 = json_data.get(&quot;data3&quot;) data4 = json_data.get(&quot;data4&quot;) data5 = json_data.get(&quot;data5&quot;) parsed_fields = { &quot;data1&quot;: data1, &quot;data2&quot;: data2, &quot;data3&quot;: data3, &quot;data4&quot;: data4, &quot;data5&quot;: data5 } return parsed_fields def save_parsed_fields(file_path: Path, parsed_fields: dict, output_dir: Path) -&gt; None: output_filename = f&quot;parsed_{file_path.stem}.json&quot; output_path = output_dir / output_filename with open(output_path, &quot;w&quot;) as output_file: json.dump(parsed_fields, output_file) print(f&quot;Parsed 
{file_path.name} and saved the results to {output_path}&quot;) def process_html_file(file_path: Path, parsed_dir: Path) -&gt; None: parsed_fields = parse_html_file(file_path) save_parsed_fields(file_path, parsed_fields, parsed_dir) def process_html_files(source_dir: Path, parsed_dir: Path) -&gt; None: parsed_dir.mkdir(parents=True, exist_ok=True) threads = [] for file_path in source_dir.glob(&quot;*.html&quot;): thread = threading.Thread(target=process_html_file, args=(file_path, parsed_dir)) thread.start() threads.append(thread) # Wait for all threads to finish for thread in threads: thread.join() def main(): base_path = &quot;/home/my_pc/data&quot; source_dir = Path(f&quot;{base_path}/html_sample&quot;) parsed_dir = Path(f&quot;{base_path}/parsed_sample&quot;) start_time = time.time() process_html_files(source_dir, parsed_dir) end_time = time.time() duration = end_time - start_time print(f&quot;Application took {duration:.2f} seconds to complete.&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>I know about asyncio, but I want to correctly test all of the multithreading methods to pick the best that suits me.</p> <p>As mentioned tried also <code>concurrent.futures</code>, code is almost the same when processing html_files I have these lines:</p> <pre><code>with ThreadPoolExecutor(max_workers=max_workers) as executor: # Iterate over the HTML files in the source directory for file_path in source_dir.glob(&quot;*.html&quot;): executor.submit(process_html_file, file_path, parsed_dir) </code></pre> <p>Are there any mistakes in my code? How could I optimize my code better with multithreading (aside from asyncio)?</p>
<python><multithreading><python-multithreading><concurrent.futures>
2023-06-02 19:39:57
1
523
Dave
76,393,072
18,739,908
Reset badge count on Android using React Native Expo
<p>I'm using react native with expo. When a user goes to their notifications I want the badge count to update. The following code in my backend does that (python):</p> <pre><code>from exponent_server_sdk import PushClient, PushMessage def reset_badge_count(token, new_total): response = PushClient().publish(PushMessage(to=token, body=None, data=None, badge=new_total)) if response: return True else: return False </code></pre> <p>It works totally fine on iOS. On Android however, it sends a blank push notification. I don't want any push notification sent I just want to reset the badge count. Does anyone know a workaround for this? Thanks.</p>
<python><react-native><push-notification><expo>
2023-06-02 19:39:42
0
494
Cole
76,392,943
17,275,588
Shopify API (using Python): File upload failed due to "Processing Error." Why?
<p>I am struggling to figure out why I'm not able to successfully upload images to the Files section of my Shopify store. I followed this code here, except mine is a Python version of this: <a href="https://gist.github.com/celsowhite/2e890966620bc781829b5be442bea159" rel="nofollow noreferrer">https://gist.github.com/celsowhite/2e890966620bc781829b5be442bea159</a></p> <pre><code>import requests import os # Set up Shopify API credentials shopify_store = 'url-goes-here.myshopify.com' // the actual URL is here access_token = 'token-goes-here' // the actual token is here # Read the image file image_path = r'C:\the-actual-filepath-is-here\API-TEST-1.jpg' # Replace with the actual path to your image file with open(image_path, 'rb') as file: image_data = file.read() # Create staged upload staged_upload_url = f&quot;https://{shopify_store}/admin/api/2023-04/graphql.json&quot; staged_upload_query = ''' mutation stagedUploadsCreate($input: [StagedUploadInput!]!) { stagedUploadsCreate(input: $input) { stagedTargets { resourceUrl url parameters { name value } } userErrors { field message } } } ''' staged_upload_variables = { &quot;input&quot;: [ { &quot;filename&quot;: &quot;API-TEST-1.jpg&quot;, &quot;httpMethod&quot;: &quot;POST&quot;, &quot;mimeType&quot;: &quot;image/jpeg&quot;, &quot;resource&quot;: &quot;FILE&quot; } ] } response = requests.post( staged_upload_url, json={&quot;query&quot;: staged_upload_query, &quot;variables&quot;: staged_upload_variables}, headers={&quot;X-Shopify-Access-Token&quot;: access_token} ) data = response.json() staged_targets = data['data']['stagedUploadsCreate']['stagedTargets'] target = staged_targets[0] params = target['parameters'] url = target['url'] resource_url = target['resourceUrl'] # Post image data to the staged target form_data = { &quot;file&quot;: image_data } headers = { param['name']: param['value'] for param in params # Fix the headers assignment } headers[&quot;Content-Length&quot;] = str(len(image_data)) response = 
requests.post(url, files=form_data, headers=headers) # Use 'files' parameter instead of 'data' # Create the file in Shopify using the resource URL create_file_url = f&quot;https://{shopify_store}/admin/api/2023-04/graphql.json&quot; create_file_query = ''' mutation fileCreate($files: [FileCreateInput!]!) { fileCreate(files: $files) { files { alt } userErrors { field message } } } ''' create_file_variables = { &quot;files&quot;: [ { &quot;alt&quot;: &quot;alt-tag&quot;, &quot;contentType&quot;: &quot;IMAGE&quot;, &quot;originalSource&quot;: resource_url } ] } response = requests.post( create_file_url, json={&quot;query&quot;: create_file_query, &quot;variables&quot;: create_file_variables}, headers={&quot;X-Shopify-Access-Token&quot;: access_token} ) data = response.json() files = data['data']['fileCreate']['files'] alt = files[0]['alt'] </code></pre> <p>It runs the code, it doesn't output any errors. However when I navigate to the Files section of the Shopify store, it says &quot;1 upload failed -- processing error.&quot;</p> <p>Any clues in the code as to what might be causing that?</p> <p>Also when I print(data) at the very end, this is what it says:</p> <p>{'data': {'fileCreate': {'files': [{'alt': 'alt-tag'}], 'userErrors': []}}, 'extensions': {'cost': {'requestedQueryCost': 20, 'actualQueryCost': 20, 'throttleStatus': {'maximumAvailable': 1000.0, 'currentlyAvailable': 980, 'restoreRate': 50.0}}}}</p> <p>Seeming to indicate it created it successfully. But there's some misc processing error.</p> <p>Thanks</p>
<python><python-requests><graphql><shopify><shopify-api>
2023-06-02 19:14:43
2
389
king_anton
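A plausible culprit in the code above: the staged-upload `parameters` are being sent as HTTP headers, but cloud-storage POST targets expect them as multipart form fields, with the file as the last field. A hedged sketch of building such a request with `requests` (the parameter values and URL below are placeholders, not real Shopify data):

```python
import requests

# parameters as returned by stagedUploadsCreate (illustrative values)
params = [{"name": "key", "value": "tmp/123/API-TEST-1.jpg"},
          {"name": "policy", "value": "abc123"}]
image_data = b"\xff\xd8\xff\xe0fake-jpeg-bytes"

# ordered multipart body: every parameter first, the file strictly last
form = [(p["name"], (None, p["value"])) for p in params]
form.append(("file", ("API-TEST-1.jpg", image_data, "image/jpeg")))

req = requests.Request("POST", "https://storage.example.com/upload", files=form)
prepared = req.prepare()  # requests builds the multipart body and headers
```

`requests.Session().send(prepared)` would then perform the upload; note that no access token or manual `Content-Length`/`Content-Type` headers should be sent to the storage host.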
76,392,920
2,675,349
How to JOIN two dataframes and populate a column?
<p>I have two data frames as below:</p> <pre><code>DF1 Name;ID;Course;SID;Subject Alex;A1;Under;;chemistry Oak;A2;Under;;chemistry niva;A3;grad;;physics mark;A4;Under;;Med DF2 PID;ServiceId;Address;Active A1;svc1;WI;Yes A2;svc2;MI;Yes A3;svc2;OH;Yes </code></pre> <p>I want to have a data frame with SID populated from DF2.ServiceId using the ID and PID columns. The expected output is as below:</p> <pre><code>DF3 Name;ID;Course;SID;Subject Alex;A1;Under;svc1;chemistry Oak;A2;Under;svc2;chemistry niva;A3;grad;svc3;physics mark;A4;Under;;Med </code></pre> <p>I tried the below, but it shows all the columns from both data frames.</p> <pre><code>DF3 = DF1.merge(DF2, how='inner', left_on=&quot;ID&quot;, right_on=&quot;PID&quot;) </code></pre>
<python><pandas><dataframe>
2023-06-02 19:11:36
3
1,027
Ullan
76,392,744
14,804,653
Can you use the programs you "pip install" in the Command-line?
<p>As a Python beginner, I was downloading the OpenAI's <a href="https://github.com/openai/whisper#setup" rel="nofollow noreferrer">Whisper</a> with the following command: <code>pip install -U openai-whisper</code>, and noticed that you can use Whisper in both <a href="https://github.com/openai/whisper#python-usage" rel="nofollow noreferrer">Python</a> and the <a href="https://github.com/openai/whisper#command-line-usage" rel="nofollow noreferrer">Command-line</a>.</p> <p>To my knowledge, <code>pip install</code> installs Python packages, so should only be available within Python, but it seems like you can use Whisper in the command line?</p> <p>In summary, why does <code>pip install</code>-ing Python packages let you use the package in the command line?</p>
<python><pip><command-line-interface>
2023-06-02 18:41:47
2
318
Howard Baik
76,392,743
8,869,570
How to find all rows with a time matching a datetime with timezone info?
<p>I have a dataframe with a datetime column <code>dt</code> with the dtype <code>datetime64[ns, US/Eastern]</code>.</p> <p>I am trying to find all rows with the time <code>2023-01-01 12:00:00-05:00</code>.</p> <p>I tried to do this:</p> <pre><code>eastern = pytz.timezone('US/Eastern') query_dt = datetime.datetime(year=2023, month=1, day=1, hour=12, minute=0, tzinfo=eastern) df_sub = df[df.dt == query_dt] </code></pre> <p>But this is telling me there are no rows corresponding to <code>query_dt</code>, which is not correct as I can clearly see there are rows with that time.</p>
<python><pandas><datetime>
2023-06-02 18:41:42
0
2,328
24n8
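A likely explanation for the zero matches: passing a `pytz` zone via `tzinfo=` attaches the zone's base LMT offset (-04:56 for US/Eastern) instead of the correct EST/EDT offset, so the comparison timestamp is off by four minutes. A small sketch of the difference, using pytz's documented `localize` pattern:

```python
import datetime
import pytz

eastern = pytz.timezone("US/Eastern")
naive = datetime.datetime(2023, 1, 1, 12, 0)

wrong = naive.replace(tzinfo=eastern)   # LMT offset -04:56 -- never matches
right = eastern.localize(naive)         # proper EST offset -05:00

print(wrong.utcoffset())  # -1 day, 19:04:00  (i.e. -4:56)
print(right.utcoffset())  # -1 day, 19:00:00  (i.e. -5:00)
```

With pandas, `df[df.dt == pd.Timestamp('2023-01-01 12:00', tz='US/Eastern')]` sidesteps the pitfall entirely, since `Timestamp` handles the localization itself.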
76,392,643
14,293,020
Xarray: how to combine 2 datasets that sometimes overlap temporally, but not spatially?
<p><a href="https://i.sstatic.net/9cyCy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9cyCy.png" alt="Sketch of the problem" /></a></p> <p><strong>Context:</strong> I have 2 datacubes (datacube_1, datacube_2) with 3D variables (dimensions <code>t,y,x</code>). They do not overlap spatially but their union forms a bigger ensemble (Datacube Combined). They sometimes overlap temporally but not always (black = t1, red = t2, green = t3 and blue = t4). I want to combine those datasets so at each timestamp, if they have values they are simply stitched together, and if not the spatial footprint of the datacube with no values is filled with NaNs. (in the sketch, a filled rectangle represents values, just the outlines represents NaNs).</p> <p><strong>Setup:</strong></p> <ul> <li>Datacube 1: has values at t1, t3, t4</li> <li>Datacube 2: has values at t1, t2, t3</li> <li>Datacube Combined: t1 is full, t2 has NaNs for Datacube 1's spatial footprint and full for Datacube 2's, t3 is full, t4 has NaNs for Datacube 2's spatial footprint and full for Datacube 1's.</li> </ul> <p><strong>Problem:</strong> My goal is to use if possible <code>chunks</code> and to have Datacube Combined with the <strong>smallest size</strong> on memory as possible. I do not want to have timestamps simply appended to each other, if they can be combined they should. Between <code>merge</code>, <code>combine_by_coords</code>, <code>concat</code>, <code>combine_first</code> I don't know which one corresponds exactly to what I want to do, which one is the fastest and the most adapted to chunks. 
<strong>Which method should I use ?</strong> I read the documentation but honestly I got confused.</p> <p><strong>Code:</strong></p> <pre><code>import xarray as xr import numpy as np ##### ----- EXAMPLE WITH 2 CUBES ----- ##### # Define the dimensions and coordinates t_coords1 = np.array(['2023-01-01', '2023-01-03', '2023-01-04'], dtype='datetime64') #t1, t3, t4 t_coords2 = np.array(['2023-01-01', '2023-01-02', '2023-01-03'], dtype='datetime64') #t1, t2, t3 y_coords = np.arange(0, 1000) x_coords_1 = np.arange(0, 800) # Different size for datacube_1 x_coords_2 = np.arange(0, 120) # Different size for datacube_2 # Chunk the datacubes to recreate the error # Create Datacube 1 datacube_1 = xr.DataArray( np.random.rand(len(t_coords1), len(y_coords), len(x_coords_1)), dims=['t', 'y', 'x'], coords={'t': t_coords1, 'y': y_coords, 'x': x_coords_1}, ).chunk({'t': 2, 'y': 100, 'x': 100}) # Create Datacube 2 datacube_2 = xr.DataArray( np.random.rand(len(t_coords2), len(y_coords), len(x_coords_2)), dims=['t', 'y', 'x'], coords={'t': t_coords2, 'y': y_coords, 'x': x_coords_2}, ).chunk({'t': 2, 'y': 100, 'x': 100}) # Merge the datacubes (I rewrite datacube_1 because in reality I merge 4 datasets together in a loop) datacube_1 = datacube_1.merge(datacube_2) ##### ----- EXAMPLE WITH MORE THAN TWO DATACUBES ----- ##### # Gather the names of the datacubes to combine cubes = ['Datacube_1','Datacube_2','Datacube_3','Datacube_4'] # Load the first datacube so we can combine the others to that one xrds = xr.open_dataset(cubes[0], chunks=({'t': 500, 'y': 100, 'x': 100}) # Loop over the rest of the datacubes for n in range(1, len(cubes)): # Open the next datacube ds_temp = xr.open_dataset(cubes[n], chunks=({'t': 500, 'y': 100, 'x': 100})) # Combine it with the other ones xrds = xrds.merge(ds_temp) from dask.diagnostics import ProgressBar # Save the dataset that way so it does not overload memory write_job = ds_temp.to_netcdf(&quot;combined_datacube.nc&quot;, mode='w', compute=False) with 
ProgressBar(): print(f&quot;Writing the file&quot;) write_job = write_job.compute() </code></pre>
<python><merge><dataset><dask><python-xarray>
2023-06-02 18:23:48
0
721
Nihilum
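On the choice of method: `xr.merge` (or `combine_by_coords`) with its default outer join does the described stitching — shared timestamps are placed side by side, the missing spatial footprint at a timestamp is filled with NaN, and no timestamp is appended twice. A toy sketch with tiny sizes for illustration:

```python
import numpy as np
import xarray as xr

t1 = np.array(["2023-01-01", "2023-01-03"], dtype="datetime64[ns]")  # cube 1 times
t2 = np.array(["2023-01-01", "2023-01-02"], dtype="datetime64[ns]")  # cube 2 times

cube1 = xr.DataArray(np.ones((2, 2)), dims=["t", "x"],
                     coords={"t": t1, "x": [0, 1]}, name="v")
cube2 = xr.DataArray(2 * np.ones((2, 2)), dims=["t", "x"],
                     coords={"t": t2, "x": [2, 3]}, name="v")

# outer join on both dims; footprints absent at a timestamp become NaN
combined = xr.merge([cube1, cube2])
print(combined["v"].shape)  # (3, 4)
```

`concat` would instead stack along one dimension (duplicating timestamps), and `combine_first` is for filling one dataset's gaps from another, so `merge` matches the stated goal most directly; it is also lazy with dask-backed chunked inputs.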
76,392,521
12,955,349
How to plot data from Snowflake as grouped bars overlaid with a line plot
<p>The requirement to have two bar graphs displayed either through sql or python library based on TYPE</p> <p>Below is the data from the table</p> <pre><code>with data as ( select 'DIRECT' as type , '2023-04-30' as report_month , 148 as returns_per_head , 30.00 as filing_count ,52.25 as total_count union select 'INDIRECT' as type , '2023-04-30' as report_month , 2876 as returns_per_head , 22.3 as filing_count ,29.25 as total_count ) select * from data </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">TYPE</th> <th style="text-align: left;">REPORT_MONTH</th> <th style="text-align: right;">FILING_COUNT</th> <th style="text-align: right;">RETURNS_PER_HEAD</th> <th style="text-align: right;">TOTAL_COUNT</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">DIRECT</td> <td style="text-align: left;">2023-04-30</td> <td style="text-align: right;">30</td> <td style="text-align: right;">148</td> <td style="text-align: right;">52.25</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">INDIRECT</td> <td style="text-align: left;">2023-04-30</td> <td style="text-align: right;">22.3</td> <td style="text-align: right;">2876</td> <td style="text-align: right;">29.25</td> </tr> </tbody> </table> </div> <p>I need output as below</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">TYPE</th> <th style="text-align: left;">REPORT_MONTH</th> <th style="text-align: left;">Metric_HC</th> <th style="text-align: right;">HC</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">DIRECT</td> <td style="text-align: left;">2023-04-30</td> <td style="text-align: left;">FILING_COUNT</td> <td style="text-align: right;">30</td> </tr> <tr> <td style="text-align: right;">1</td> <td 
style="text-align: left;">DIRECT</td> <td style="text-align: left;">2023-04-30</td> <td style="text-align: left;">TOTAL_COUNT</td> <td style="text-align: right;">52.25</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">DIRECT</td> <td style="text-align: left;">2023-04-30</td> <td style="text-align: left;">RETURNS_PER_HEAD</td> <td style="text-align: right;">148</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;">INDIRECT</td> <td style="text-align: left;">2023-04-30</td> <td style="text-align: left;">FILING_COUNT</td> <td style="text-align: right;">22.3</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: left;">INDIRECT</td> <td style="text-align: left;">2023-04-30</td> <td style="text-align: left;">TOTAL_COUNT</td> <td style="text-align: right;">29.25</td> </tr> <tr> <td style="text-align: right;">5</td> <td style="text-align: left;">INDIRECT</td> <td style="text-align: left;">2023-04-30</td> <td style="text-align: left;">RETURNS_PER_HEAD</td> <td style="text-align: right;">2876</td> </tr> </tbody> </table> </div> <p><strong>The reason is I need to display in report as below, if it can be achieved, either via python library, or sql</strong></p> <p>Below is for example INDIRECT type alone alone</p> <p><a href="https://i.sstatic.net/JU2Gr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JU2Gr.png" alt="enter image description here" /></a></p> <p>I am using HEX as new visualization</p>
<python><pandas><matplotlib><snowflake-cloud-data-platform><grouped-bar-chart>
2023-06-02 18:02:15
1
1,058
Kar
76,392,467
22,009,322
How to consolidate labels in legend
<p>So, the script below requests data from a Postgres database and draws a diagram. The requested data is a table with 4 columns <code>(ID, Object, Percentage, Color)</code>.</p> <p>The data:</p> <pre><code>result = [ (1, 'Apple', 10, 'Red'), (2, 'Blueberry', 40, 'Blue'), (3, 'Cherry', 94, 'Red'), (4, 'Orange', 68, 'Orange') ] </code></pre> <pre><code>import pandas as pd from matplotlib import pyplot as plt import psycopg2 conn = psycopg2.connect( host=&quot;localhost&quot;, port=&quot;5432&quot;, database=&quot;db&quot;, user=&quot;user&quot;, password=&quot;123&quot;) cur = conn.cursor() cur.callproc(&quot;test_stored_procedure&quot;) result = cur.fetchall() cur.close() conn.close() print(result) result = pd.DataFrame(result, columns=['ID', 'Object', 'Percentage', 'Color']) fruits = result.Object counts = result.Percentage labels = result.Color s = 'tab:' bar_colors = [s + x for x in result.Color] fig, ax = plt.subplots() for x, y, c, lb in zip(fruits, counts, bar_colors, labels): ax.bar(x, y, color=c, label=lb) ax.set_ylabel('fruit supply') ax.set_title('Fruit supply by kind and color') ax.legend(title='Fruit color', loc='upper left') plt.show() </code></pre> <p>Result:</p> <p><a href="https://i.sstatic.net/qHeei.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qHeei.png" alt="enter image description here" /></a></p> <p>As you can see in the legend, the <code>&quot;Red&quot;</code> label is shown twice.</p> <p>I tried several different examples of how to fix this, but unfortunately none of them worked. For example:</p> <pre><code>handles, labels = ax.get_legend_handles_labels() ax.legend(handles, labels) </code></pre>
<python><pandas><matplotlib><bar-chart><legend>
2023-06-02 17:52:22
2
333
muted_buddy
76,392,424
17,275,588
Shopify API error: {"errors":"[API] Invalid API key or access token (unrecognized login or wrong password)"}
<pre><code>import requests # Replaced with my actual Shopify credentials and file information!!! API_KEY = 'text' // using my Shopify App &quot;API key&quot; ACCESS_TOKEN = 'text' // using my Shopify App &quot;Admin API access token&quot; SHOP_NAME = 'text.myshopify.com' // using the root myshopify URL file_path = r&quot;C:\text\API-TEST-1.jpg&quot; url = f'https://{SHOP_NAME}/admin/api/2023-04/graphql.json' query = &quot;&quot;&quot; mutation stagedUploadsCreate($input: [StagedUploadInput!]!) { stagedUploadsCreate(input: $input) { stagedTargets { resourceUrl url parameters { name value } } } } &quot;&quot;&quot; variables = { 'input': [ { 'resource': 'IMAGE', 'filename': 'your-image.jpg', 'mimeType': 'image/jpeg', 'httpMethod': 'POST', } ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {ACCESS_TOKEN}', } response = requests.post(url, json={'query': query, 'variables': variables}, headers=headers) data = response.json() staged_targets = data.get('data', {}).get('stagedUploadsCreate', {}).get('stagedTargets', []) if staged_targets: target = staged_targets[0] params = target['parameters'] upload_url = target['url'] resource_url = target['resourceUrl'] with open(file_path, 'rb') as file: file_data = file.read() headers = { 'Content-Type': 'application/octet-stream', 'Content-Length': str(len(file_data)), 'X-Shopify-Access-Token': ACCESS_TOKEN, } headers.update(params) response = requests.put(upload_url, headers=headers, data=file_data) if response.status_code == 200: print('Image uploaded successfully.') else: print('Failed to upload the image.') print(response.text) else: print('Failed to generate upload URL and parameters.') print(response.text) </code></pre> <p>It keeps telling me this: Failed to generate upload URL and parameters. 
{&quot;errors&quot;:&quot;[API] Invalid API key or access token (unrecognized login or wrong password)&quot;}</p> <p>However I'm using the API key in my Shopify Apps dashboard for the app I created, and I have the permissions &quot;write files/read files&quot; activated. I'm also using the access token I was given for this same app. Why is it not working? Any ideas?</p> <p>Thanks</p>
<python><shopify><shopify-api>
2023-06-02 17:47:24
0
389
king_anton
76,392,359
895,587
Need to restart Databricks 13.0 cluster to iterate on development
<p>I want to iterate with development on a Databricks 13 cluster without the need to restart it for updating the code within my Python package.</p> <p>It seems that <strong>dbx execute</strong> does the job on Databricks 12.1, but when I try to run it with Databricks 13, it gets the old version.</p> <p>Also tried with</p> <pre><code>dbx deploy my_workflow --environment=dev --assets-only dbx launch my_workflow --environment=dev --from-assets </code></pre> <p>without success.</p> <p>Any ideas?</p> <p>I've found the following issue with no definitive answer...</p> <p><a href="https://stackoverflow.com/questions/73489698/how-to-reinstall-same-version-of-a-wheel-on-databricks-without-cluster-restart">How to reinstall same version of a wheel on Databricks without cluster restart</a></p> <p>Thanks</p>
<python><databricks><databricks-dbx>
2023-06-02 17:34:55
0
302
AndrΓ© Salvati
76,392,337
3,416,774
Why does Jupyter in VS Code say "No module named 'gensim'" when it's already installed?
<p>In the below setup, I've made sure that the Python version running in Jupyter and the terminal is the same. Yet Jupyter still gives the error <code>No module named 'gensim'</code> even though it is already installed. Why is that?</p> <p><img src="https://i.imgur.com/Y6fKFhN.png" alt="screenshot of VSCode showing the problem" /></p>
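The usual cause is that the Jupyter kernel selected in VS Code is a different interpreter (or environment) from the one in the terminal, even when both report the same version number. A quick check, run inside a notebook cell:

```python
import sys

# The interpreter the running kernel actually uses -- compare this path
# with `where python` (Windows) or `which python` in the terminal.
print(sys.executable)

# Installing into *this* interpreter removes the mismatch, e.g. from a cell:
#   %pip install gensim                      # IPython magic, targets the kernel
#   !{sys.executable} -m pip install gensim  # explicit equivalent
```

If the two paths differ, either install `gensim` into the kernel's interpreter as above or switch the notebook's kernel to the terminal's interpreter.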
<python><visual-studio-code>
2023-06-02 17:31:54
1
3,394
Ooker
76,392,312
2,105,339
Should I use regular SQL instead of an ORM to reduce bandwith usage and fetching time?
<p>I'm building a ethereum explorer for fun with django ORM (never used it before). here is a part of my schema :</p> <pre><code>class AddressModel(models.Model): id = models.BigIntegerField(primary_key=True) first_seen = models.DateTimeField(db_index=True) addr = models.CharField(max_length=42, db_index=True, on_delete=models.PROTECT) is_contract = models.BooleanField() is_token = models.BooleanField() is_wallet = models.BooleanField() class BlockModel(models.Model): id = models.BigIntegerField(primary_key=True) number = models.BigIntegerField() status = models.CharField(max_length=20) timestamp = models.DateTimeField(db_index=True) epoch_proposal = models.IntegerField() slot_proposal = models.IntegerField() fee_recipient = models.ForeignKey(AddressModel, on_delete=models.PROTECT) block_reward = models.BigIntegerField() total_difficulty = models.CharField(max_length=100) size = models.IntegerField() gas_used = models.BigIntegerField() gas_limit = models.BigIntegerField() base_fee_per_gas = models.BigIntegerField() burnt_fee = models.BigIntegerField() extra_data = models.TextField() hash = models.CharField(max_length=66) parent_hash = models.CharField(max_length=66) state_root = models.CharField(max_length=66) withdrawal_root = models.CharField(max_length=66) Nonce = models.CharField(max_length=20) # you can do address_model_instance.transactionmodel_set.objects.all() since there is a FK in Transaction model class TransactionModel(models.Model): id = models.BigIntegerField(primary_key=True) hash = models.CharField(max_length=66) block = models.ForeignKey(BlockModel, on_delete=models.PROTECT) from_addr = models.ForeignKey('AddressModel', related_name='from_addr', on_delete=models.PROTECT) to_addr = models.ForeignKey('AddressModel', related_name='to_addr', on_delete=models.PROTECT) input = models.TextField() is_valid = models.BooleanField() </code></pre> <p>What concerns me here is that if I want to retrieve every <code>transaction</code> related to a specific 
<code>from_addr</code> it will also retrieve the bloc data in the returned object, if the <code>from_addr</code> has 10K transaction that is 10K <code>block</code> data that I don't need. with regular SQL I would only get a <code>block_id</code> if I did a <code>select *</code>.</p> <p>This will lead to useless bandwith usage and take more time to request since it will have to do some <code>select</code> operations on the <code>block</code> table.</p> <p>Is this a use case where I shouldn't use an ORM?</p> <p>Thanks.</p>
<python><django><orm><ethereum>
2023-06-02 17:25:47
1
2,474
sliders_alpha
76,392,283
11,999,957
List comprehension in Python for if, elif, pass?
<p>I can find syntax for if/pass, but not for if/elif/pass: <a href="https://stackoverflow.com/questions/33691552/list-comprehension-with-else-pass">List comprehension with else pass</a></p> <p>Basically:</p> <pre><code>if condition: something elif other_condition: something_else else: pass </code></pre>
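For reference, a comprehension spells <code>elif</code> as a chained conditional expression and <code>else: pass</code> (skip the element) as a trailing filter — a small sketch:

```python
nums = [-3, -1, 0, 2, 5]

# if n < 0: "negative"  /  elif n == 0: "zero"  /  else: pass
result = [
    "negative" if n < 0 else "zero"   # the if/elif branches, chained
    for n in nums
    if n <= 0                         # the "else: pass" part drops the rest
]
print(result)  # ['negative', 'negative', 'zero']
```

Longer elif chains just nest further: `a if c1 else b if c2 else c`.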
<python>
2023-06-02 17:22:37
1
541
we_are_all_in_this_together
76,392,174
5,061,840
Java and Python return different values when converting the hexadecimal to long
<p>I noticed this difference when comparing xxhash implementations in Python and Java. The hashes calculated by the xxhash library are identical as hexadecimal strings, but they differ when I convert the calculated hash to an integer (or long) value.</p> <p>I am sure that this is some kind of &quot;endian&quot; problem, but I couldn't find how to get the same integer values in both languages.</p> <p>Any idea how and why this is happening?</p> <p><strong>Java Code:</strong></p> <pre><code>String hexString = &quot;d24ec4f1a98c6e5b&quot;; System.out.println(new BigInteger(hexString,16).longValue()); // printed value -&gt; -3292477735350538661 </code></pre> <p><strong>Python Code:</strong></p> <pre><code>hexString = &quot;d24ec4f1a98c6e5b&quot; print(int(hexString, 16)) # printed value -&gt; 15154266338359012955 </code></pre>
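This is a signedness difference rather than an endianness one: Java's <code>long</code> is a signed 64-bit two's-complement value, while Python's <code>int</code> is unbounded and reads the hex as unsigned — both numbers are the same 64 bits. A sketch of converting between the two views in Python:

```python
hex_string = "d24ec4f1a98c6e5b"

unsigned = int(hex_string, 16)  # Python's unsigned reading
# Reinterpret the same 64 bits as signed two's complement (Java's view):
signed = unsigned - (1 << 64) if unsigned >= (1 << 63) else unsigned

print(unsigned)  # 15154266338359012955
print(signed)    # -3292477735350538661
```

Going the other direction in Java, `Long.toUnsignedString(value)` prints the unsigned reading of the same bits.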
<python><java>
2023-06-02 17:04:52
2
327
Tevfik Kiziloren
76,391,843
13,921,399
Cast pandas series containing list elements to a 2d numpy array
<p>Take the following series:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd s = pd.Series([1, 3, 2, [1, 3, 7, 8], [6, 6, 10, 4], 5]) </code></pre> <p>I want to convert this series into the following array:</p> <pre class="lang-py prettyprint-override"><code>np.array([ [ 1., 1., 1., 1.], [ 3., 3., 3., 3.], [ 2., 2., 2., 2.], [ 1., 3., 7., 8.], [ 6., 6., 10., 4.], [ 5., 5., 5., 5.] ]) </code></pre> <p>Currently, I am using this logic:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd from itertools import zip_longest # Convert series and each element in series into list ls = list(map(lambda v: v if isinstance(v, list) else [v], s.to_list())) # Cast list elements to 2d numpy array with longest list element as column number a = np.array(list(zip_longest(*ls, fillvalue=np.nan))).T # Convert to DataFrame, apply 'ffill' row-wise and re-convert to numpy array a = pd.DataFrame(a).fillna(method=&quot;ffill&quot;, axis=1).values </code></pre> <p>My solution is not really satisfying me, especially the last line where I convert my array to a DataFrame and then back to an array again. Does anyone know a better alternative? You can assume that all list elements have the same length.</p>
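Since every list element shares one length, scalars can be broadcast while the array is built, which skips the `zip_longest`/`ffill` round-trip entirely — a sketch:

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 3, 2, [1, 3, 7, 8], [6, 6, 10, 4], 5])

# Row width = length of the (uniform) list elements
n = max(len(v) if isinstance(v, list) else 1 for v in s)

# Lists pass through unchanged; scalars are repeated to fill the row
a = np.array([v if isinstance(v, list) else [v] * n for v in s], dtype=float)
```

This stays in plain Python/NumPy and never round-trips through a DataFrame.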
<python><pandas><numpy>
2023-06-02 16:13:55
2
1,811
ko3
76,391,681
6,936,489
Read csv in chunks with polars efficiently (with limited available RAM)
<p>I'm trying to read a big CSV (6.4 Go approx.) on a small machine (small laptop on windows with 8Go of RAM) before storing it into a SQLite database (I'm aware there are alternatives, that's not the point here).</p> <p><em>In case it's needed the file I'm using can be found on <a href="https://www.data.gouv.fr/fr/datasets/base-sirene-des-entreprises-et-de-leurs-etablissements-siren-siret/" rel="nofollow noreferrer">that page</a>; in the tab &quot;Fichiers&quot;, it should be labelled &quot;Sirene : Fichier StockEtablissementHistorique [...]&quot;. This file is today around 37 millions lines long.</em></p> <p>Being a big fan of pandas and I've nonetheless decided to try polars which is much advertised those days.</p> <p>The inferred dataframe should also be joined to another produced with <code>pl.read_database</code> (which produces a pl.DataFrame and no pl.LazyFrame).</p> <ul> <li><p>My first try involved a LazyFrame and (naive) hope that <code>scan_csv</code> with <code>low_memory</code> argument would suffice to handle the RAM consumption. It completly freezed my computer after overconsumption of RAM.</p> </li> <li><p>I gave it another try using the <code>n_rows</code> along with <code>skip_rows_after_header</code>. But if the <code>pl.read_csv(my_path, n_rows=1_000_000)</code> works fine, <code>pl.read_csv(my_path, n_rows=1_000_000, skip_rows_after_header=30_000_000)</code> seems to take forever (a lot more than a simple loop to find the count of lines).</p> </li> <li><p>I've also tried the <code>pl.read_csv_batched</code> but it seems also to take forever (maybe to compute those first statistics <strong>not</strong> described in the documentation).</p> </li> <li><p>The only way I found to handle the file with polars completly is to handles slices from a LazyFrame and collect it. 
Something like this :</p> <pre><code>df = ( pl.scan_csv( url, separator=&quot;,&quot;, encoding=&quot;utf8&quot;, infer_schema_length=0, low_memory=True, ) .lazy() .select(pl.col(my_cols) # do some more processing, for instance .filter(pl.col(&quot;codePaysEtrangerEtablissement&quot;).is_null()) ) chunksize=1_000_000 for k in range(max_iterations:) chunk = df.slice(chunksize*k, chunksize).collect() chunk = chunk.join(my_other_dataframe, ... ) # Do some more things like storing the chunk in a database. </code></pre> <p>This &quot;solution&quot; seems to handle the memory but performs very slowly.</p> </li> </ul> <p>I've found another solution which seems to work nicely (which I'll post as provisional answer) but makes use of pandas read_csv with chunksize. This is as good as is goes and works only because (thankfully) there is no groupby involved in my process.</p> <p>I'm pretty sure there should be an easier &quot;pure polars&quot; way to proceed.</p> <hr /> <p><strong>EDIT</strong></p> <p>The other dataframe mentionned here (<code>my_other_dataframe</code> in the code sample) is small. It's a dataframe of around 36k lines which is strictly used to convert the field &quot;codeCommuneEtablissement&quot; from it's 5-long string to a primary key of integers which is stored in another table. I kept it mentionned in the sample here to explain why you needed to collect the dataframe earlier, as you can't join a LazyFrame and a DataFrame.</p>
<python><dataframe><csv><python-polars>
2023-06-02 15:51:21
3
2,562
tgrandje
76,391,586
12,040,751
Async read_csv in Pandas
<p>In this <a href="https://stackoverflow.com/a/60368916/12040751">answer</a> to <a href="https://stackoverflow.com/questions/57871450/async-read-csv-of-several-data-frames-in-pandas-why-isnt-it-faster">async 'read_csv' of several data frames in pandas - why isn't it faster</a> it is explained how to asynchronously read pandas DataFrames from csv data obtained from a web request.</p> <p>I modified it to read some csv files on disk by using <code>aiofiles</code>, but got no speedup nonetheless. I wonder if I did something wrong or if there is some unavoidable limitation, like <code>pd.read_csv</code> being blocking.</p> <p>Here's the normal version of the code:</p> <pre><code>from time import perf_counter import pandas as pd def pandas_read_many(paths): start = perf_counter() results = [pd.read_csv(p) for p in paths] end = perf_counter() print(f&quot;Pandas version {end - start:0.2f}s&quot;) return results </code></pre> <p>The async version involves reading the file with <code>aiofiles</code> and converting it to a text buffer with <code>io.StringIO</code> before passing it to <code>pd.read_csv</code>.</p> <pre><code>import io import aiofiles async def async_read_csv(path): async with aiofiles.open(path) as f: text = await f.read() with io.StringIO(text) as text_io: return pd.read_csv(text_io) async def async_read_many(paths): start = perf_counter() results = await asyncio.gather(*(async_read_csv(p) for p in paths)) end = perf_counter() print(f&quot;Async version {end - start:0.2f}s&quot;) return results </code></pre> <p>For fairness, here it is the synchronous translation.</p> <pre><code>def sync_read_csv(path): with open(path) as f: text = f.read() with io.StringIO(text) as text_io: return pd.read_csv(text_io) def sync_read_many(paths): start = perf_counter() results = [sync_read_csv(p) for p in paths] end = perf_counter() print(f&quot;Sync version {end - start:0.2f}s&quot;) return results </code></pre> <p>Finally the comparison, where I read 8 csv files of 
approximately 125MB each.</p> <pre><code>import asyncio paths = [...] asyncio.run(async_read_many(paths)) sync_read_many(paths) pandas_read_many(paths) # Async version 24.32s # Sync version 24.87s # Pandas version 18.37s </code></pre>
<python><pandas><python-asyncio>
2023-06-02 15:36:37
0
1,569
edd313
76,391,582
3,492,006
Convert date and time in string to timestamp
<p>I have a set of CSV's that all got loaded with a date field like: <code>Sunday August 7, 2022 6:26 PM GMT</code></p> <p>I'm working on a way to take this date/time, and convert it to a proper timestamp in the format <code>YYYY-MM-DD HH:MM</code></p> <p>In Python, I've tried something like below to return a proper timestamp.</p> <pre class="lang-py prettyprint-override"><code>def convert_to_timestamp(date_string): date_format = &quot;%A %B %d, %Y %I:%M %p %Z&quot; timestamp = datetime.strptime(date_string, date_format) return timestamp </code></pre> <p>...but it keeps coming back with errors similar to below.</p> <pre><code>ValueError: time data 'Sunday August 7,2022 6:26 PM GMT' does not match format '%A %B %d, %Y %I:%M %p %Z' </code></pre> <p>How would I convert this field in a CSV to the required format using Python?</p>
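The traceback hints at the real mismatch: the failing value is <code>'August 7,2022'</code> with no space after the comma, while the format expects <code>', '</code>. Normalising the comma spacing first lets one format cover both variants — a sketch (`%Z` accepts <code>GMT</code>, though the result is a naive datetime):

```python
from datetime import datetime

def convert_to_timestamp(date_string):
    # "7,2022" -> "7, 2022"; the second replace undoes the double space
    # introduced when the input already had ", ".
    normalised = date_string.replace(",", ", ").replace(",  ", ", ")
    return datetime.strptime(normalised, "%A %B %d, %Y %I:%M %p %Z")

ts = convert_to_timestamp("Sunday August 7,2022 6:26 PM GMT")
print(ts.strftime("%Y-%m-%d %H:%M"))  # 2022-08-07 18:26
```

`strftime("%Y-%m-%d %H:%M")` then renders the required output format when writing the cleaned CSVs back out.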
<python><date><datetime><timestamp>
2023-06-02 15:35:39
3
449
WR7500
76,391,550
6,300,467
PyTorch equivalent of scipy.sparse.linalg.gmres
<p>I'm using scipy.sparse.linalg.gmres to efficiently solve <code>A.x = b</code>, however my problem is within the PyTorch framework. So, I have to detach my tensors to <code>numpy</code> then call <code>scipy</code> to solve this equation. However, other frameworks (like JAX) have their own equivalent function, <code>jax.scipy.sparse.linalg.gmres</code>.</p> <p>Is there a PyTorch equivalent to <code>scipy.sparse.linalg.gmres</code> to sparsely solve <code>A.x = b</code>?</p>
<python><pytorch><scipy><jax>
2023-06-02 15:30:30
0
785
AlphaBetaGamma96
76,391,465
12,493,545
How to start from example diverging REST interface?
<p>In the uvicorn <a href="https://www.uvicorn.org/" rel="nofollow noreferrer">example</a>, one writes <code>uvicorn filename:attributename</code>, which starts the server. However, the interface I have generated has no such method <code>attributename</code> in <code>filename</code>. Therefore, I am unsure what to pass as <code>attributename</code>.</p> <h1>Generated code in main.py</h1> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot; Somename Specification for REST-API of somename. The version of the OpenAPI document: 1.0.0 Generated by: https://openapi-generator.tech &quot;&quot;&quot; from fastapi import FastAPI from openapi_server.apis.some_api import router as SomeApiRouter app = FastAPI( title=&quot;SomeName&quot;, description=&quot;Specification for REST-API of somename&quot;, version=&quot;1.0.0&quot;, ) app.include_router(SomeApiRouter) </code></pre>
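The `attributename` is not a method at all — it is the module-level FastAPI instance, which the generated file assigns to `app`. Assuming the file shown is `main.py` on the import path, the invocation would be:

```shell
# module:attribute, i.e. the `app = FastAPI(...)` object defined in main.py
uvicorn main:app --reload

# if main.py sits inside the generated package, use its dotted module path,
# e.g. (hypothetical, depends on where the generator placed the file):
uvicorn openapi_server.main:app
```

The exact dotted path depends on the generator's output layout; run the command from the directory where the module is importable.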
<python><openapi-generator><uvicorn>
2023-06-02 15:18:03
1
1,133
Natan
76,391,329
20,220,485
How do you sort lists of tuples based on the count of a specific value?
<p>I am working on a NER problemβ€”hence the BIO taggingβ€”with a very small dataset, and I am manually splitting it into train, validation, and test data. Thus, to make the first of two splits, I need to sort lists of tuples into two lists based on the count of <code>'B'</code> in <code>data</code>.</p> <p>I am shuffling <code>data</code>, so the output varies, but it typically yeilds what I provide below. <code>data</code> can be split such that a total count of <code>10</code> instances of <code>'B'</code> is possible in <code>bin_1</code>. So it's not that <code>data</code> won't split this way given the way <code>B</code> is distributed through the lists of tuples.</p> <p>How do I get the split that I am after? For this example, and the desired split, I want the total count of <code>'B'</code> in <code>bin_1</code> to be <code>10</code>, but it's always over.</p> <p>Assistance would be much appreciated.</p> <p>Data:</p> <pre><code>data = [[('a', 'B'), ('b', 'I'), ('c', 'O'), ('d', 'B'), ('e', 'I'), ('f', 'O')], [('g', 'O'), ('h', 'O')], [('i', 'B'), ('j', 'I'), ('k', 'O')], [('l', 'B'), ('m', ''), ('n', 'B'), ('o', 'O')], [('p', 'O'), ('q', 'O'), ('r', 'O')], [('s', 'B'), ('t', 'O')], [('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O')], [('z', 'B')], [('a', 'B'), ('b', 'I'), ('c', 'O')], [('d', 'O')], [('e', 'O'), ('f', 'O')], [('g', 'O'), ('h', 'B')], [('i', 'B'), ('j', 'I')], [('k', 'O')], [('l', 'O'), ('m', 'O'), ('n', 'O'), ('o', 'O')], [('p', 'O'), ('q', 'O'), ('r', 'O'), ('s', 'B'), ('t', 'O')], [('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O'), ('z', 'B')]] </code></pre> <p>Current code:</p> <pre><code>split = 0.7 d = [] total_B = 0 bin_1 = [] bin_2 = [] counter = 0 random.shuffle(data) for f in data: cnt = {} for _, label in f: if label in cnt: cnt[label] += 1 else: cnt[label] = 1 d.append(cnt) for f in d: total_B += f.get('B', 0) for f,g in zip(d, data): if f.get('B') is not None: if counter &lt;= round(total_B * split): counter += 
f.get('B') bin_1.append(g) else: bin_2.append(g) print(round(total_B * split)) print(sum(1 for sublist in bin_1 for tuple_item in sublist if tuple_item[1] == 'B')) print(sum(1 for sublist in bin_2 for tuple_item in sublist if tuple_item[1] == 'B')) </code></pre> <p>Current output:</p> <pre><code>Total count of 'B' in 'bin_1' should be: 10 Total count of 'B' in 'bin_1' is': 11 Total count of 'B' in 'bin_2' is': 3 </code></pre> <pre><code>bin_1, bin_2 &gt;&gt;&gt; [[('a', 'B'), ('b', 'I'), ('c', 'O')], [('g', 'O'), ('h', 'B')], [('i', 'B'), ('j', 'I'), ('k', 'O')], [('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O'), ('z', 'B')], [('s', 'B'), ('t', 'O')], [('l', 'B'), ('m', ''), ('n', 'B'), ('o', 'O')], [('a', 'B'), ('b', 'I'), ('c', 'O'), ('d', 'B'), ('e', 'I'), ('f', 'O')], [('i', 'B'), ('j', 'I')]], [[('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O')], [('z', 'B')], [('p', 'O'), ('q', 'O'), ('r', 'O'), ('s', 'B'), ('t', 'O')]] </code></pre> <p>Desired output:</p> <pre><code>Total count of 'B' in 'bin_1' should be: 10 Total count of 'B' in 'bin_1' is': 10 Total count of 'B' in 'bin_2' is': 4 </code></pre>
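The overshoot comes from checking `counter` *before* adding rather than checking whether the sentence's own count still fits. Testing `counter + b <= target` keeps `bin_1` at or below the target; a sketch (function and variable names are my own, not from the original):

```python
import random

def split_by_b(data, split=0.7, seed=None):
    rng = random.Random(seed)
    sentences = data[:]
    rng.shuffle(sentences)

    def b_count(sentence):
        return sum(1 for _, label in sentence if label == 'B')

    target = round(sum(b_count(s) for s in sentences) * split)
    bin_1, bin_2, counter = [], [], 0
    for sentence in sentences:
        b = b_count(sentence)
        if counter + b <= target:   # add only if it still fits under target
            counter += b
            bin_1.append(sentence)
        else:
            bin_2.append(sentence)
    return bin_1, bin_2, target
```

A greedy pass like this lands at or below the target; whether it hits it exactly depends on how 'B' falls in the shuffle. One caveat: sentences with no 'B' always "fit", so they all land in `bin_1` — add a separate size check if the overall sentence split matters too.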
<python><list><sorting><machine-learning><sampling>
2023-06-02 15:04:31
1
344
doine
76,391,344
21,787,377
Implementing Name Synchronization and Money Transfers in Transactions Model with Account Number Input
<p>I have the following models in my Django application:</p> <pre><code>class Transaction (models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) account_number = models.IntegerField() name = models.CharField(max_length=50) amount = models.DecimalField(max_digits=5, decimal_places=2) created_on = models.DateTimeField() class Wallet(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) account_balance = models.DecimalField(max_digits=5, decimal_places=2, default=0) class AccountNum(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) account_number = models.IntegerField() slug = models.SlugField(unique=True) </code></pre> <p>I want to implement a feature where the name field in the <code>Transactions</code> model gets synchronized with the account owner's name based on the provided <code>account_number</code> input. Additionally, I want to enable money transfers using the current user's wallet and the specified amount in the <code>Transactions</code> model.</p> <p>To provide some context, I have a <code>post-save</code> signal <code>generate_account_number</code> which generates a random 10-digit account number.</p> <p>What are some recommended techniques or approaches to achieve this <code>synchronization</code> of the name field with the account owner's name and enable money transfers using the <code>wallet</code> model and specified amount in the <code>Transaction</code> model?</p>
<python><django><django-views><django-channels><banking>
2023-06-02 15:04:25
2
305
Adamu Abdulkarim Dee
76,391,296
10,097,229
How to know last occurrence of while loop
<p>I have this piece of code where I am hitting an Azure REST API. The response has <code>nextLink</code> in it, which basically means that because the file is too large, it returns a nextLink parameter with which to hit the API again, until the nextLink parameter no longer comes back.</p> <pre><code>while 'nextLink' in json.loads(response.text)['properties']: total.extend(json.loads(response.text)) req = requests.get(json.loads(response.text)['properties']['nextLink'], headers=head, verify=False) c+=1 print(c) total.append(req.json()) </code></pre> <p>The problem is that I don't know how many times the while loop will run, but I wanted to end the loop before the last hit/occurrence. The <code>c</code> is to know how many times it is running. Sometimes it runs 80 times, sometimes 200 times.</p> <p>The reason I wanted to know the last occurrence minus one is that after the while loop I have an upload-file statement which does not run unless I give a break statement.</p>
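A restructure removes the need to know the iteration count: process each response exactly once, then follow `nextLink` until it is absent, so the upload statement simply runs after the loop. A sketch — `first_url`, `head`, and the commented `upload_file` call are hypothetical stand-ins for your values:

```python
import requests

def fetch_all(first_url, head):
    total = []
    url = first_url
    while url:
        payload = requests.get(url, headers=head, verify=False).json()
        total.append(payload)
        # None when 'nextLink' is absent, i.e. this was the last page
        url = payload.get('properties', {}).get('nextLink')
    return total

# results = fetch_all(start_url, head)   # hypothetical call
# upload_file(results)                   # runs after the final page; no break needed
```

Note the original loop also never updates `response` inside the body, so it inspects the first page forever; fetching into the same variable that the loop condition reads, as above, fixes that too.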
<python><json><python-3.x><azure><loops>
2023-06-02 15:00:54
1
1,137
PeakyBlinder
76,391,230
11,564,487
Change the font size of the output of python code chunk
<p>Consider the following <code>quarto</code> document:</p> <pre><code>--- title: &quot;Untitled&quot; format: pdf --- ```{python} #|echo: false #|result: 'asis' import pandas as pd df = pd.DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'], 'B': ['one', 'one', 'two', 'two', 'one', 'one'], 'C': ['dull', 'dull', 'shiny', 'shiny', 'dull', 'dull'], 'D': [1, 3, 2, 5, 4, 1]}) print(df) ``` </code></pre> <p>How to scale down the output of the python chunk, say, to 50%. Is that possible?</p>
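For PDF output, one hedged option is wrapping the chunk in raw LaTeX size commands, which pass straight through to the PDF. This shrinks the monospaced output in steps (`\small`, `\footnotesize`, `\scriptsize`, `\tiny`) rather than to an exact 50% scale:

````markdown
\scriptsize

```{python}
#| echo: false
print(df)
```

\normalsize
````

An exact scale factor would need heavier LaTeX machinery around the verbatim output; for most tables, picking the nearest size command is enough.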
<python><pdf><latex><quarto>
2023-06-02 14:53:48
1
27,045
PaulS
76,391,192
10,082,088
Alteryx Error generating JWT token with python tool - NotImplementedError: Algorithm 'RS256
<p>I am having issues creating an Alteryx workflow that encodes a JWT token with the RS256 algorithm in the python tool.</p> <p>Here is my code:</p> <pre><code>################################# from ayx import Alteryx from ayx import Package import pandas ##Package.installPackages(package=&quot;cryptography&quot;,install_type=&quot;install --proxy proxy.server:port&quot;) ##Package.installPackages(package=&quot;pyjwt[crypto]&quot;,install_type=&quot;install --proxy proxy.server:port&quot;) import jwt from io import StringIO ################################# table = Alteryx.read(&quot;#1&quot;) ################################# print(table) ################################# id = table.at[0, 'id'] ################################# url = table.at[0, 'url'] ################################# key = table.at[0, 'key'] ################################# exp = table.at[0, 'exp'] ################################# exp = int(exp) ################################# nbf = table.at[0, 'nbf'] ################################# nbf = int(nbf) ################################# encoded = jwt.encode({&quot;iss&quot;: id, &quot;aud&quot;: url, &quot;exp&quot;: exp, &quot;nbf&quot;: nbf}, key, algorithm='RS256') ################################# s=str(encoded,'utf-8') data = StringIO(s) df=pandas.read_csv(data,header=None) ################################# Alteryx.write(df,1) </code></pre> <p>The problem is when I try to encode a JWT using the RS256 algorithm: <code>encoded = jwt.encode({&quot;iss&quot;: id, &quot;aud&quot;: url, &quot;exp&quot;: exp, &quot;nbf&quot;: nbf}, key, algorithm='RS256')</code>, It spits back out this error message: <code>NotImplementedError: Algorithm 'RS256' could not be found. 
Do you have cryptography installed?</code></p> <p>the Cryptography package should be installed since I specified [crypto] when choosing the package name: <code>pyjwt[crypto]</code> – source: <a href="https://pyjwt.readthedocs.io/en/latest/installation.html#installation-cryptography" rel="nofollow noreferrer">Installation β€” PyJWT 2.7.0 documentation</a>. I also tried installing it separately by adding <code>##Package.installPackages(package=&quot;cryptography&quot;,install_type=&quot;install --proxy proxy.server:port&quot;)</code> but still got the same error.</p>
<python><jwt><alteryx>
2023-06-02 14:49:04
1
447
bocodes
76,391,153
5,620,975
Python Polars: Lazy Frame Row Count not equal wc -l
<p>Been experimenting with <code>polars</code> and of the key features that peak my interest is the <em>larger than RAM</em> operations.</p> <p>I downloaded some files to play with from <a href="https://s3.amazonaws.com/amazon-reviews-pds/tsv/index.txt" rel="nofollow noreferrer">HERE</a>. On the website: <em>First line in each file is header; 1 line corresponds to 1 record.</em>. <strong>WARNING</strong> total download is quite large (~1.3GB)! This experiment was done on AWS server (<code>t2.medium</code>, <code>2cpu</code>, <code>4GB</code>)</p> <pre><code>wget https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Shoes_v1_00.tsv.gz \ https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Office_Products_v1_00.tsv.gz \ https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Software_v1_00.tsv.gz \ https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv .gz \ https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Watches_v1_00.tsv.gz gunzip * </code></pre> <p>Here are the results from <code>wc -l</code></p> <pre><code>drwxrwxr-x 3 ubuntu ubuntu 4096 Jun 2 12:44 ../ -rw-rw-r-- 1 ubuntu ubuntu 1243069057 Nov 25 2017 amazon_reviews_us_Office_Products_v1_00.tsv -rw-rw-r-- 1 ubuntu ubuntu 44891575 Nov 25 2017 amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv -rw-rw-r-- 1 ubuntu ubuntu 1570176560 Nov 25 2017 amazon_reviews_us_Shoes_v1_00.tsv -rw-rw-r-- 1 ubuntu ubuntu 249565371 Nov 25 2017 amazon_reviews_us_Software_v1_00.tsv -rw-rw-r-- 1 ubuntu ubuntu 412542975 Nov 25 2017 amazon_reviews_us_Watches_v1_00.tsv $ find . -type f -exec cat {} + | wc -l 8398139 $ find . 
-name '*.tsv' | xargs wc -l 2642435 ./amazon_reviews_us_Office_Products_v1_00.tsv 341932 ./amazon_reviews_us_Software_v1_00.tsv 85982 ./amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv 4366917 ./amazon_reviews_us_Shoes_v1_00.tsv 960873 ./amazon_reviews_us_Watches_v1_00.tsv 8398139 total </code></pre> <p>Now, if I count the rows using <code>polars</code> using our new fancy lazy function:</p> <pre><code>import polars as pl csvfile = &quot;~/data/amazon/*.tsv&quot; ( pl.scan_csv(csvfile, separator = '\t') .select( pl.len() ) .collect() ) shape: (1, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ len β”‚ β”‚ --- β”‚ β”‚ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•‘ β”‚ 4186305 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Wow, thats a BIG difference between <code>wc -l</code> and <code>polars</code>. Thats weird... maybe its a data issue. Lets only focus on the column of interest:</p> <pre><code>csvfile = &quot;~/data/amazon/*.tsv&quot; ( ... pl.scan_csv(csvfile, separator = '\t') ... .select( ... pl.col(&quot;product_category&quot;).count() ... ) ... .collect() ... ) shape: (1, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ product_category β”‚ β”‚ --- β”‚ β”‚ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ 7126095 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>And with <code>.collect(streaming = True)</code>:</p> <pre><code>shape: (1, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ product_category β”‚ β”‚ --- β”‚ β”‚ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ 7125569 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Ok, still a difference of about 1 million? 
Lets do it bottom up:</p> <pre><code>csvfile = &quot;~/data/amazon/*.tsv&quot; ( pl.scan_csv(csvfile, separator = '\t') .group_by(&quot;product_category&quot;) .agg(pl.col(&quot;product_category&quot;).count().alias(&quot;counts&quot;)) .collect(streaming = True) .filter(pl.col('counts') &gt; 100) .sort(pl.col(&quot;counts&quot;), descending = True) .select( pl.col('counts').sum() ) ) shape: (1, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ counts β”‚ β”‚ --- β”‚ β”‚ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•‘ β”‚ 7125553 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Close, albeit that its once again a different count...</p> <p>Some more checks using <code>R</code>:</p> <pre><code>library(vroom) library(purrr) library(glue) library(logger) amazon &lt;- list.files(&quot;~/data/amazon/&quot;, full.names = TRUE) f &lt;- function(file){ df &lt;- vroom(file, col_select = 'product_category', show_col_types=FALSE ) log_info(glue(&quot;File [{basename(file)}] has [{nrow(df)}] rows&quot;)) } walk(amazon, f) INFO [2023-06-02 14:23:40] File [amazon_reviews_us_Office_Products_v1_00.tsv] has [2633651] rows INFO [2023-06-02 14:23:41] File [amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv] has [85898] rows INFO [2023-06-02 14:24:06] File [amazon_reviews_us_Shoes_v1_00.tsv] has [4353998] rows INFO [2023-06-02 14:24:30] File [amazon_reviews_us_Software_v1_00.tsv] has [331152] rows INFO [2023-06-02 14:24:37] File [amazon_reviews_us_Watches_v1_00.tsv] has [943763] rows Total: 8348462 </code></pre> <p>Ok. Screw it. Basically a random number generating exercise and nothing is real.</p> <p>Surely if its a data hygiene issue the error should be constant? Any idea why there might be such a large discrepancy?</p>
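Part of the discrepancy is measurable without polars at all: `wc -l` counts physical newlines, but a quoted CSV/TSV field may legally contain embedded newlines, so one record can span several lines. On top of that, `pl.col(...).count()` skips nulls while `pl.len()` counts rows, which explains polars disagreeing with itself. A small stdlib demonstration of the first effect:

```python
import csv
import io

# One quoted field containing an embedded newline
raw = 'id\tcomment\n1\t"multi-line\nreview body"\n2\tplain\n'

physical_lines = raw.count("\n")   # what `wc -l` reports: 4
rows = list(csv.reader(io.StringIO(raw), delimiter="\t"))
data_records = len(rows) - 1       # header excluded: 2

print(physical_lines, data_records)  # 4 2
```

With files like these, where embedded quotes are also inconsistently escaped, each parser (polars, vroom, pandas) can additionally disagree about where records end — which matches the slightly different totals each tool reports above.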
<python><python-polars>
2023-06-02 14:42:40
1
1,461
Hanjo Odendaal