Schema (column: dtype, observed range):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 chars
QuestionBody: string, 40 to 40.3k chars
Tags: string, 8 to 101 chars
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 chars
76,946,424
18,781,246
Why does Python recursion limit change depending on function?
<p>I was testing stuff when I noticed that Python's <strong>recursion limit</strong> doesn't seem to apply equally to all functions. I'm not sure why or how, and couldn't find any documentation explaining this behavior.</p> <p>Can someone explain this weird behavior to me? Or at least send me in the right direction?</p> <h3>The Code:</h3> <pre class="lang-py prettyprint-override"><code>import sys
import inspect

LIMIT = sys.getrecursionlimit()
print(f&quot;recursive limit is: {LIMIT}&quot;)

def get_max_lvl(lvl=0):
    try:
        return get_max_lvl(lvl=lvl + 1)
    except RecursionError:
        return lvl

def get_max_lvl_inspect(lvl=0):
    try:
        return get_max_lvl_inspect(lvl=lvl + 1)
    except RecursionError:
        print(f&quot;stack level: {len(inspect.stack())}&quot;)
        return lvl

def get_max_lvl_other(lvl=0):
    try:
        return get_max_lvl_other(lvl=lvl + 1)
    except RecursionError:
        print(&quot;blah&quot;)
        return lvl
</code></pre> <p>I ran the following in a shell:</p> <pre><code>$ python -i rec.py
recursive limit is: 1000
&gt;&gt;&gt; get_max_lvl()
998
&gt;&gt;&gt; get_max_lvl_inspect()
stack level: 983
981
&gt;&gt;&gt; get_max_lvl_other()
blah
blah
994
</code></pre> <p>And tried it the other way around in case it was due to the order:</p> <pre><code>$ python -i rec.py
recursive limit is: 1000
&gt;&gt;&gt; get_max_lvl_other()
blah
blah
994
&gt;&gt;&gt; get_max_lvl_inspect()
stack level: 983
981
&gt;&gt;&gt; get_max_lvl()
998
</code></pre> <p>But the function outputs seem consistent.</p> <p><strong>What is going on here?</strong></p>
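An aside, not from the original post: the recursion limit bounds the *total* interpreter stack depth, not the depth of any one function. Frames already on the stack (the shell, module-level code) count against the budget, and calls made inside the `except` handler — `print`, `inspect.stack()` — can themselves hit the limit again, which is why the reported levels differ and `blah` prints more than once. A minimal sketch of the first point:

```python
import sys

def depth(lvl=0):
    """Recurse until RecursionError and report the deepest level reached."""
    try:
        return depth(lvl + 1)
    except RecursionError:
        return lvl

old = sys.getrecursionlimit()
sys.setrecursionlimit(500)
d = depth()
sys.setrecursionlimit(old)

# d is strictly below 500: the caller's frames already occupy part of
# the same budget, exactly like the 998 vs. 1000 gap in the post
print(0 < d < 500)  # True
```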
<python><recursion><python-internals>
2023-08-21 14:48:03
1
1,012
scr
76,946,412
8,760,298
Add 12 future dates in a row Based on given date - Python
<p>I have a dataframe like this</p> <p><a href="https://i.sstatic.net/vID9z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vID9z.png" alt="enter image description here" /></a></p> <p>I want 12 future dates based on the <code>Freq</code> column, which is in days</p> <p><a href="https://i.sstatic.net/oGKPY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oGKPY.png" alt="enter image description here" /></a></p> <p>I have tried adding this in a loop with <code>_append</code>:</p> <pre><code>df['next_d']._append(df['date'] + pd.Timedelta(30,unit='days')).to_frame()) </code></pre> <p>I learnt that using <code>to_frame()</code> here will not work. I am a bit confused about doing this with <code>pd.date_range()</code>. Please advise! Thanks!</p>
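A hedged sketch (not from the post) of how `pd.date_range` fits here, assuming a `date` column and a `freq` column holding a day count — the sample frame below is hypothetical, standing in for the screenshot:

```python
import pandas as pd

# Hypothetical frame mirroring the screenshot: a date plus a frequency in days
df = pd.DataFrame({"date": pd.to_datetime(["2023-01-31"]), "freq": [30]})

# For each row, generate the next 12 dates spaced `freq` days apart;
# periods=13 includes the start date itself, which [1:] drops
df["future_dates"] = [
    list(pd.date_range(start=d, periods=13, freq=f"{int(f)}D")[1:])
    for d, f in zip(df["date"], df["freq"])
]

print(len(df["future_dates"].iloc[0]))  # 12
```

`df.explode("future_dates")` then turns the lists into one row per future date, if a long format is wanted.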
<python><pandas><datetime>
2023-08-21 14:47:10
1
333
Tpk43
76,946,407
17,562,611
Structural pattern matching of lxml HtmlElement attributes
<p>I want to use <a href="https://peps.python.org/pep-0634/" rel="nofollow noreferrer">PEP 634 – Structural Pattern Matching</a> to match an <a href="https://lxml.de/api/lxml.html.HtmlElement-class.html" rel="nofollow noreferrer"><code>HtmlElement</code></a> that has a particular attribute. The attributes are accessible through an <code>.attrib</code> attribute that returns an instance of the <a href="https://lxml.de/api/lxml.etree._Attrib-class.html" rel="nofollow noreferrer"><code>_Attrib</code></a> class, and IIUC it has all methods for it to be a <code>collections.abc.Mapping</code>.</p> <p>The PEP says this:</p> <blockquote> <p>For a mapping pattern to succeed the subject must be a mapping, where being a mapping is defined as its class being one of the following:</p> <ul> <li>a class that inherits from <code>collections.abc.Mapping</code></li> <li>a Python class that has been registered as a <code>collections.abc.Mapping</code></li> <li>...</li> </ul> </blockquote> <p>Here's what I'm trying to do, but it doesn't print the <code>href</code>:</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Mapping from lxml.html import HtmlElement, fromstring el = fromstring('&lt;a href=&quot;https://stackoverflow.com/&quot;&gt;StackOverflow&lt;/a&gt;') Mapping.register(type(el.attrib)) # lxml.etree._Attrib assert(isinstance(el.attrib, Mapping)) # It's True even before registering _Attrib. 
match el: case HtmlElement(tag='a', attrib={'href': href}): print(href) </code></pre> <p>This matches and prints <code>attrib</code>:</p> <pre class="lang-py prettyprint-override"><code>match el: case HtmlElement(tag='a', attrib=Mapping() as attrib): print(attrib) </code></pre> <p>This does not match, as expected:</p> <pre class="lang-py prettyprint-override"><code>match el: case HtmlElement(tag='a', attrib=list() as attrib): print(attrib) </code></pre> <p>I also tried this and it works:</p> <pre class="lang-py prettyprint-override"><code>class Upperer: def __getitem__(self, key): return key.upper() def __len__(self): return 1 def get(self, key, default): return self[key] Mapping.register(Upperer) # It doesn't work without this line. match Upperer(): case {'href': href}: print(href) # Prints &quot;HREF&quot; </code></pre> <p>I understand using XPath/CSS selectors would be easier, but at this point I just want to know what is the problem with the <code>_Attrib</code> class and my code.</p> <p>Also, I don't want to unpack the element and convert the <code>_Attrib</code> instance to dict as follows:</p> <pre class="lang-py prettyprint-override"><code>match el.tag, dict(el.attrib): case 'a', {'href': href}: print(href) </code></pre> <p>or use guards:</p> <pre class="lang-py prettyprint-override"><code>match el: case HtmlElement(tag='a', attrib=attrs) if 'href' in attrs: print(attrs['href']) </code></pre> <p>It works but it doesn't look right. I'd like to find a solution so the original <code>case HtmlElement(tag='a', attrib={'href': href})</code> works. Or something that's very close to it.</p> <p>Python version I'm using is 3.11.4.</p>
<python><python-3.x><lxml><python-3.11><structural-pattern-matching>
2023-08-21 14:46:25
1
380
qyryq
76,946,384
12,029,271
Detecting adding/removal from string difference between texts
<p>I have two versions of a short text, e.g.:</p> <pre><code>old = &quot;(a) The provisions of this article apply to machinery of class 6.&quot; new = &quot;(a) The provisions of this article apply to machinery of class 6, unless one of the following exceptions apply: (i) the owner depends on the vehicle, (ii) the vehicle is newer than 2 years of age, (iii) the city council grants a special permission&quot; </code></pre> <p>I now want to compare the differences of these texts at a high level. I have accomplished a string-level comparison using the following code:</p> <pre><code>import difflib differ = difflib.ndiff(old.splitlines(), new.splitlines()) summary = [] for line in differ: prefix = line[:2] if prefix == ' ': summary.append(line[2:]) elif prefix == '- ': summary.append(f&quot;Removed: {line[2:]}&quot;) elif prefix == '+ ': summary.append(f&quot;Added: {line[2:]}&quot;) </code></pre> <p>However, I want a higher level summary of the changes as I have thousands of these diffs. For a given diff, I'm imagining the diff summary being something like <code>diff_summary = &quot;Adds exceptions, making article less restrictive&quot;</code>.</p> <p>I want to leverage text summarization (e.g. huggingface-based models) but I'm missing the entry point. How do I employ text summarization to summarize changes to texts?</p>
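One hedged entry point: compress each diff into a compact "Added/Removed" form first, then hand that string plus an instruction to a model — summarization models work much better on short, explicit change lists than on raw `ndiff` output. The compression step needs only `difflib` (the model call is left out, as it depends on the chosen checkpoint, e.g. a `transformers` `pipeline("summarization")`):

```python
import difflib

def word_diff_summary(old: str, new: str) -> str:
    """Compress two texts into a compact list of added/removed word runs."""
    a, b = old.split(), new.split()
    parts = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag in ("replace", "delete"):
            parts.append("Removed: " + " ".join(a[i1:i2]))
        if tag in ("replace", "insert"):
            parts.append("Added: " + " ".join(b[j1:j2]))
    return "; ".join(parts)

s = word_diff_summary("machinery of class 6.",
                      "machinery of class 6, unless exceptions apply")
print(s)
```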
<python><nlp><summarization><difflib>
2023-08-21 14:42:56
1
371
user456789
76,946,379
13,132,728
Can't index certain classes from an html file with python
<p>I have an html file that follows the following format where a each row consists of a <code>player</code>, <code>team</code>, <code>position</code>, <code>exposure_x</code>, and <code>exposure_y</code>:</p> <pre><code>&lt;p class=&quot;p1&quot;&gt; &amp;lt;div style=&quot;height: 15840px; width: 369px;&quot;&amp;gt;&amp;lt;div role=&quot;row&quot; class=&quot;BaseTable__row PlayerExposureTable_dk-grid-row PlayerExposureTable_row-odd&quot; style=&quot;position: absolute; left: 0px; top: 0px; width: 369px; height: 60px;&quot;&amp;gt;&amp;lt;div role=&quot;gridcell&quot; class=&quot;BaseTable__row-cell DKResponsiveGrid_dk-grid-column undefined&quot; style=&quot;flex: 0 1 auto; width: 200px; overflow: hidden;&quot;&amp;gt;&amp;lt;div class=&quot;PlayerNameCell_player-name-cell&quot; aria-label=&quot;Hunter Henry&quot;&amp;gt;&amp;lt;img class=&quot;PlayerNameCell_player-icon&quot; src=&quot;https://dkn.gs/sports/images/nfl/players/160/19156.png&quot; alt=&quot;Hunter Henry&quot;&amp;gt;&amp;lt;div&amp;gt;&amp;lt;div class=&quot;PlayerNameCell_name-and-status&quot;&amp;gt;&amp;lt;img class=&quot;PlayerNameCell_team-icon&quot; src=&quot;https://dkn.gs/sports/images/nfl/teams/logos/160/18415.png&quot; alt=&quot;NE team icon&quot;&amp;gt;&amp;lt;div class=&quot;PlayerNameCell_player-name&quot;&amp;gt;H. 
Henry&amp;lt;/div&amp;gt;&amp;lt;div class=&quot;PlayerNameCell_status&quot;&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div class=&quot;PlayerNameCell_position-and-team&quot;&amp;gt;&amp;lt;div&amp;gt;TE&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;·&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;NE&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div role=&quot;gridcell&quot; class=&quot;BaseTable__row-cell DKResponsiveGrid_dk-grid-column undefined&quot; style=&quot;flex: 0 1 auto; width: 90px; overflow: hidden;&quot;&amp;gt;&amp;lt;div class=&quot;PlayerExposureTable_detail-cell&quot;&amp;gt;250&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div role=&quot;gridcell&quot; class=&quot;BaseTable__row-cell DKResponsiveGrid_dk-grid-column undefined&quot; style=&quot;flex: 0 1 auto; width: 85px; overflow: hidden;&quot;&amp;gt;&amp;lt;div class=&quot;PlayerExposureTable_detail-cell&quot;&amp;gt;35%&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div role=&quot;row&quot; class=&quot;BaseTable__row PlayerExposureTable_dk-grid-row &quot; style=&quot;position: absolute; left: 0px; top: 60px; width: 369px; height: 60px;&quot;&amp;gt;&amp;lt;div role=&quot;gridcell&quot; class=&quot;BaseTable__row-cell DKResponsiveGrid_dk-grid-column undefined&quot; style=&quot;flex: 0 1 auto; width: 200px; overflow: hidden;&quot;&amp;gt;&amp;lt;div class=&quot;PlayerNameCell_player-name-cell&quot; aria-label=&quot;Jakobi Meyers&quot;&amp;gt;&amp;lt;img class=&quot;PlayerNameCell_player-icon&quot; src=&quot;https://dkn.gs/sports/images/nfl/players/160/17948.png&quot; alt=&quot;Jakobi Meyers&quot; ... &lt;/p&gt; </code></pre> <p>Where my desired output is</p> <pre><code>Hunter Henry, NE, TE, 250, 35% Jakobi Meyers, ... ... 
</code></pre> <p>Normally, I would go about this by indexing html classes via bs4:</p> <pre><code>from bs4 import BeautifulSoup with open('my_file.html') as f: soup = BeautifulSoup(f, &quot;html&quot;) </code></pre> <p>And then using <code>zip()</code> to combine lists of the values from each class, or something similar.</p> <p>But I can't seem to get it to work:</p> <pre><code>soup.find_all('div', {&quot;class_&quot;:&quot;PlayerNameCell_player-name-cell&quot;}) &gt;&gt;&gt; [] [value for element in soup.find_all(class_=True) for value in element[&quot;class&quot;]] &gt;&gt;&gt; ['p1'] </code></pre> <p>As I cannot find my desired class(es) (the only class seems to be just <code>p1</code>).</p> <p>So my question is, how come this html is tricky to index? Usually, I have no problems with doing stuff like this.</p>
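A hedged diagnosis: the markup is stored entity-escaped (`&lt;div ...&gt;`) inside a single `<p class="p1">`, so the parser sees one big text node — the only real tag is the `p`, which matches the `['p1']` result. Unescaping that text and parsing the result a second time recovers the inner elements. A stdlib-only sketch (the same idea works as `BeautifulSoup(unescape(outer.p.get_text()), "html.parser")`); the sample string is a shortened stand-in for the file:

```python
from html import unescape
from html.parser import HTMLParser

class NameExtractor(HTMLParser):
    """Collect the text of divs carrying the player-name class."""
    def __init__(self):
        super().__init__()
        self.names = []
        self._grab = False

    def handle_starttag(self, tag, attrs):
        if tag == "div" and ("class", "PlayerNameCell_player-name") in attrs:
            self._grab = True

    def handle_data(self, data):
        if self._grab:
            self.names.append(data)
            self._grab = False

# Shortened stand-in for the escaped payload inside <p class="p1">
escaped = '&lt;div class="PlayerNameCell_player-name"&gt;H. Henry&lt;/div&gt;'

p = NameExtractor()
p.feed(unescape(escaped))  # second parse sees real tags
print(p.names)  # ['H. Henry']
```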
<python><html><beautifulsoup>
2023-08-21 14:42:30
1
1,645
bismo
76,946,252
2,195,440
How to list all Python repositories created after September 2022 based on the number of stars?
<p>I'm looking for a way to fetch a list of all Python repositories on GitHub that were created after September 2022, sorted by their star count. I would like to obtain this list programmatically, possibly using the GitHub API or any available Python libraries.</p> <p>My objective is to get a clear view of trending Python repositories since September 2022.</p> <p>What I've tried so far:</p> <ol> <li>Browsing GitHub manually (inefficient for obvious reasons).</li> <li>Using this repository endpoint: <a href="https://api.github.com/repos/hpcaitech/ColossalAI" rel="nofollow noreferrer">https://api.github.com/repos/hpcaitech/ColossalAI</a>.</li> <li>Looking into the GitHub API documentation, but I couldn't pinpoint how to set the exact date range and filter by language.</li> </ol> <p>Could someone provide high-level pointers or a code snippet on how to achieve this?</p> <p>Thank you in advance!</p>
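A sketch (not from the post) of the Search API call this maps to: `GET /search/repositories` with `language:` and `created:` qualifiers in `q`. One caveat worth knowing up front: the search API only exposes the first 1,000 results per query, so getting "all" repositories means slicing the date range into smaller windows. Endpoint and qualifier names are per the GitHub REST docs; the network call is left commented:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

params = {
    "q": "language:python created:>2022-09-30",  # GitHub search qualifiers
    "sort": "stars",
    "order": "desc",
    "per_page": 100,
}
url = "https://api.github.com/search/repositories?" + urlencode(params)

# Uncomment to call the API (unauthenticated requests are heavily
# rate-limited, and only the first 1,000 results are reachable):
# req = Request(url, headers={"Accept": "application/vnd.github+json"})
# items = json.load(urlopen(req))["items"]

print(url)
```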
<python><github><repository><github-api>
2023-08-21 14:27:25
1
3,657
Exploring
76,946,217
11,885,185
How to highlight the differences between two strings in Python?
<p>I want to highlight the differences between two strings in a colour using Python code.</p> <p>Example 1:</p> <pre class="lang-none prettyprint-override"><code>sentence1 = &quot;I'm enjoying the summer breeze on the beach while I do some pilates.&quot; sentence2 = &quot;I am enjoying the summer breeze on the beach while I am doing some pilates.&quot; </code></pre> <p>Expected result (the part marked by asterisks should be in red):</p> <pre class="lang-none prettyprint-override"><code> I *am* enjoying the summer breeze on the beach while I *am doing* some pilates. </code></pre> <p>Example 2:</p> <pre class="lang-none prettyprint-override"><code>sentence1: &quot;My favourite season is Autumn while my sister's favourite season is Winter.&quot; sentence2: &quot;My favourite season is Autumn, while my sister's favourite season is Winter.&quot; </code></pre> <p>Expected result (the comma is different):</p> <pre class="lang-none prettyprint-override"><code>&quot;My favourite season is Autumn*,* while my sister's favourite season is Winter.&quot; </code></pre> <p>I tried this:</p> <pre class="lang-py prettyprint-override"><code>sentence1 = &quot;I'm enjoying the summer breeze on the beach while I do some pilates.&quot; sentence2 = &quot;I'm enjoying the summer breeze on the beach while I am doing some pilates.&quot; # Split the sentences into words words1 = sentence1.split() words2 = sentence2.split() # Find the index where the sentences differ index_of_difference = next((i for i, (word1, word2) in enumerate(zip(words1, words2)) if word1 != word2), None) # Highlight differing part &quot;am doing&quot; in red highlighted_words = [] for i, (word1, word2) in enumerate(zip(words1, words2)): if i == index_of_difference: highlighted_words.append('\033[91m' + word2 + '\033[0m') else: highlighted_words.append(word2) highlighted_sentence = ' '.join(highlighted_words) print(highlighted_sentence) </code></pre> <p>And I got this:</p> <pre class="lang-none 
prettyprint-override"><code>I'm enjoying the summer breeze on the beach while I *am* doing some </code></pre> <p>Instead of this:</p> <pre class="lang-none prettyprint-override"><code>I'm enjoying the summer breeze on the beach while I *am doing* some pilates. </code></pre> <p>How can I solve this?</p>
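Two things go wrong in the attempt above: only the *first* differing index is highlighted, and `zip` stops at the shorter word list, which truncates the output. A hedged sketch that instead walks `difflib.SequenceMatcher` opcodes so every changed run is wrapped (word-level; the comma-only example needs a character-level matcher, i.e. `SequenceMatcher(a=sentence1, b=sentence2)` on the raw strings):

```python
import difflib

RED, RESET = "\033[91m", "\033[0m"

def highlight_diff(s1: str, s2: str) -> str:
    """Return s2 with every run of words that differs from s1 wrapped in red."""
    w1, w2 = s1.split(), s2.split()
    out = []
    for tag, _i1, _i2, j1, j2 in difflib.SequenceMatcher(a=w1, b=w2).get_opcodes():
        chunk = " ".join(w2[j1:j2])
        if not chunk:
            continue  # pure deletion: nothing from s2 to show
        out.append(chunk if tag == "equal" else RED + chunk + RESET)
    return " ".join(out)

s1 = "I'm enjoying the summer breeze on the beach while I do some pilates."
s2 = "I'm enjoying the summer breeze on the beach while I am doing some pilates."
print(highlight_diff(s1, s2))
```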
<python><string><colors><nlp><difference>
2023-08-21 14:23:39
3
612
Oliver
76,946,216
2,195,440
Resolving Python Symbols with Wildcard Imports using python-lsp-server
<p>I've been using the <code>python-lsp-server</code> to analyze a Python project and extract symbol definitions and references. The server works well for explicit imports, but I've encountered an issue when dealing with wildcard imports (<code>import *</code>).</p> <p>Example:</p> <p>Consider the following code structure:</p> <pre><code># utils.py def check_files_in_directory(): pass </code></pre> <pre><code># main.py from utils import * def main(): check_files_in_directory() </code></pre> <p>When I try to resolve the <code>check_files_in_directory</code> function in <code>main.py</code> using the LSP server, it fails to determine its origin due to the wildcard import.</p> <p>What I've tried:</p> <ol> <li>I've successfully resolved symbols for explicit imports, e.g., <code>from utils import check_files_in_directory</code>.</li> <li>I've checked the LSP server's capabilities and ensured I'm sending the correct requests.</li> </ol> <p>Question:</p> <p>Is there a way to make python-lsp-server resolve symbols imported with wildcard imports effectively? <strong>Do I have to set some configuration? For my case, even if it takes time or slow to resolve it, that is not a problem.</strong></p> <p>Wildcard imports can be challenging for static analysis tools since they obfuscate the origin of symbols. I'm looking for a solution that can handle this scenario, whether it's a configuration or a combination of strategies.</p>
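I'm not aware of a pylsp switch that makes wildcard imports fully resolvable (Jedi, its backend, attempts `import *` resolution but can miss dynamic cases). One server-independent workaround, viable here since speed is not a concern, is to pre-expand wildcard imports into explicit ones so any static tool can follow them. A hypothetical sketch — the helper name and approach are illustrative, and note it actually imports the module at runtime:

```python
import importlib

def expand_wildcard(module_name: str) -> str:
    """Build the explicit-import line equivalent to `from <module> import *`."""
    mod = importlib.import_module(module_name)
    # `import *` honours __all__ when present, else public (non-underscore) names
    names = getattr(mod, "__all__", None)
    if names is None:
        names = [n for n in dir(mod) if not n.startswith("_")]
    return f"from {module_name} import {', '.join(sorted(names))}"

print(expand_wildcard("math"))
```

Rewriting `from utils import *` in `main.py` with such a line (as a preprocessing step on a copy of the tree) would let the LSP server resolve `check_files_in_directory` normally.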
<python><import><language-server-protocol><python-language-server><pylsp>
2023-08-21 14:23:35
0
3,657
Exploring
76,946,171
5,212,614
How to get Geopandas to Focus only on USA Zip Codes?
<p>I'm running the code below and getting some non-USA 'dropoff_location', 'dropoff_lat', and 'dropoff_lon' for USA zip codes. All zip codes are in the New York City area so all 'dropoff_location', 'dropoff_lat', and 'dropoff_lon' should be in the New York City area. Am I doing something wrong here?</p> <pre><code>import geopandas from geopy.geocoders import Nominatim geolocator = Nominatim(user_agent=&quot;ryan_app&quot;) #applying the rate limiter wrapper from geopy.extra.rate_limiter import RateLimiter geocode = RateLimiter(geolocator.geocode) #Applying the method to pandas DataFrame df['dropoff_location'] = df['dropoff_zip'].apply(geocode) df['dropoff_lat'] = df['dropoff_location'].apply(lambda x: x.latitude if x else None) df['dropoff_lon'] = df['dropoff_location'].apply(lambda x: x.longitude if x else None) df.head() </code></pre> <p>Result:</p> <pre><code>pickup_datetime dropoff_datetime trip_distance fare_amount pickup_zip dropoff_zip time_of_trip dropoff_location dropoff_lat dropoff_lon 95 2016-02-02 14:00:28 2016-02-02 14:20:22 2.04 13.5 10001 10199 0 days 00:19:54 (Manhattan, New York County, City of New York,... 40.751528 -73.995849 96 2016-02-10 00:25:33 2016-02-10 00:30:09 1.03 5.5 10001 10011 0 days 00:04:36 (Manhattan, New York County, City of New York,... 40.740972 -73.999560 97 2016-02-19 09:19:18 2016-02-19 09:34:41 2.10 11.5 10002 10001 0 days 00:15:23 (Корольовський район, Житомир, Житомирська міс... 50.269960 28.702845 98 2016-02-12 21:14:59 2016-02-12 21:22:33 0.93 6.5 10011 10012 0 days 00:07:34 (Bechloul, Daïra Bechloul, Bouira, 10012, Algé... 36.312195 4.074957 99 2016-02-04 21:25:09 2016-02-04 21:35:38 1.70 9.0 10028 10065 0 days 00:10:29 (San Germano Chisone, Torino, Piemonte, 10065,... 44.894901 7.235602 </code></pre>
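Two hedged observations, not from the post: Nominatim geocodes the bare number globally (`10012` legitimately matches places in Algeria and Ukraine), and ZIPs stored as `int64` also silently drop leading zeros. geopy's `Nominatim.geocode` accepts a `country_codes` argument to pin results to the US. The padding helper below is runnable stand-alone; the geopy lines are shown commented since they need network access:

```python
def as_zip(code) -> str:
    """Normalize a ZIP stored as int64: 7030 -> '07030'."""
    return str(code).zfill(5)

# With geopy (network call; country_codes restricts hits to the US):
# from geopy.geocoders import Nominatim
# from geopy.extra.rate_limiter import RateLimiter
# geolocator = Nominatim(user_agent="ryan_app")
# geocode = RateLimiter(geolocator.geocode)
# df["dropoff_location"] = df["dropoff_zip"].apply(
#     lambda z: geocode(as_zip(z), country_codes="us")
# )

print(as_zip(10012), as_zip(7030))  # 10012 07030
```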
<python><python-3.x><geopandas>
2023-08-21 14:18:16
1
20,492
ASH
76,946,004
10,771,559
How to upload multiple datasets and output a graph of each dataframe in Dash
<p>I have a simple dash app where a user can upload a dataset and the app outputs a simple scatterplot of the first column versus the second column (therefore only data with continuous values in the first/second column will work)</p> <p>I want the user to be able to upload multiple datasets and the graph of each of these datasets to be shown.</p> <p>So for example, if the user was to upload three different datasets than the dash app would show a graph of the first dataset to be uploaded, followed by a graph of the second dataset to be uploaded, followed by a graph of the third dataset to be uploaded.</p> <p>How can I do this? Here is my current code:</p> <pre><code>import base64 import datetime import io import plotly.graph_objs as go import plotly.express as px import dash from dash.dependencies import Input, Output, State from dash import dcc, html, dash_table, Patch import numpy as np import pandas as pd external_stylesheets = [&quot;https://codepen.io/chriddyp/pen/bWLwgP.css&quot;] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) server = app.server colors = {&quot;graphBackground&quot;: &quot;#F5F5F5&quot;, &quot;background&quot;: &quot;#ffffff&quot;, &quot;text&quot;: &quot;#000000&quot;} app.layout = html.Div( [ dcc.Upload( id=&quot;upload-data&quot;, children=html.Div([&quot;Drag and Drop or &quot;, html.A(&quot;Select Files&quot;)]), multiple=True, ), html.Div(id=&quot;graph1&quot;), ] ) def parse_data(contents, filename): content_type, content_string = contents.split(&quot;,&quot;) decoded = base64.b64decode(content_string) try: if &quot;csv&quot; in filename: # Assume that the user uploaded a CSV or TXT file df = pd.read_csv(io.StringIO(decoded.decode(&quot;utf-8&quot;))) elif &quot;xls&quot; in filename: # Assume that the user uploaded an excel file df = pd.read_excel(io.BytesIO(decoded)) elif &quot;txt&quot; or &quot;tsv&quot; in filename: # Assume that the user upl, delimiter = r'\s+'oaded an excel file df = 
pd.read_csv(io.StringIO(decoded.decode(&quot;utf-8&quot;)), delimiter=r&quot;\s+&quot;) except Exception as e: print(e) return html.Div([&quot;There was an error processing this file.&quot;]) return df @app.callback( Output(&quot;graph1&quot;, &quot;figure&quot;), [Input(&quot;upload-data&quot;, &quot;contents&quot;), Input(&quot;upload-data&quot;, &quot;filename&quot;)], ) def update_table(contents, filename): patched_children = Patch() if contents: contents = contents[0] filename = filename[0] df = parse_data(contents, filename) Fig = html.Div( [ dcc.Graph(go.Figure( data=[ go.Scatter( mode=&quot;markers&quot;, x=df.iloc[:,0], y=df.iloc[:,1], showlegend=False ) ], layout=go.Layout( width=500, height=500, plot_bgcolor=colors[&quot;graphBackground&quot;], paper_bgcolor=colors[&quot;graphBackground&quot;] ))) ] ) patched_children.append(Fig) return patched_children if __name__ == &quot;__main__&quot;: app.run_server(debug=True) </code></pre>
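Two hedged observations about the code above. First, the callback returns a `Patch()` of children but targets `Output("graph1", "figure")` — `graph1` is an `html.Div`, so the property should be `children` (and with `multiple=True`, every entry of `contents`/`filename` should be looped over, not just `contents[0]`). Second, `elif "txt" or "tsv" in filename:` is always true, because it parses as `("txt") or ("tsv" in filename)` and the non-empty string `"txt"` is truthy on its own. A runnable sketch of just that predicate fix:

```python
def choose_reader(filename: str) -> str:
    """Pick a pandas reader from the file name."""
    if "csv" in filename:
        return "read_csv"
    if "xls" in filename:
        return "read_excel"
    # Original bug: `if "txt" or "tsv" in filename:` evaluates as
    # `("txt") or ("tsv" in filename)`, which is always true
    if "txt" in filename or "tsv" in filename:
        return "read_csv_whitespace"
    return "unsupported"

print(choose_reader("data.tsv"), choose_reader("data.json"))
```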
<python><plotly-dash>
2023-08-21 13:59:16
1
578
Niam45
76,945,799
6,817,837
How to convert a Python Decimal to a string without losing precision or changing its value?
<p>Converting a decimal value to a string causes it to lose precision or change value; see the following example:</p> <pre><code>from decimal import Decimal, getcontext

# Create a string value of a decimal
value = str(Decimal(97.17312522036036245646))
print(value)  # -&gt; 97.17312522036036170902661979198455810546875

# Ideally I would like to get '97.17312522036036245646' or '97.1731252203603624564600000'
# based on the precision that I set.
</code></pre>
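The precision is lost before `Decimal` is ever involved: the literal `97.17312522036036245646` is a `float`, rounded to the nearest binary double at parse time, and `Decimal(float)` faithfully displays that already-rounded value. Constructing the `Decimal` from a string keeps every digit, and `quantize` pads or rounds to a fixed number of places:

```python
from decimal import Decimal

# Passing the digits as a string preserves them exactly
exact = Decimal("97.17312522036036245646")
print(str(exact))  # 97.17312522036036245646

# Fixed precision: pad/round to 25 decimal places
padded = exact.quantize(Decimal("1." + "0" * 25))
print(str(padded))  # 97.1731252203603624564600000
```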
<python><decimal>
2023-08-21 13:36:05
2
474
Marc
76,945,482
1,935,944
Style python qrcode SVG image
<p>I use the Python module &quot;qrcode&quot; to create vCard QR codes:</p> <p><a href="https://github.com/lincolnloop/python-qrcode" rel="nofollow noreferrer">https://github.com/lincolnloop/python-qrcode</a></p> <p>It works, but I want to create the QR code images in SVG format <em>and</em> style them, for example by defining the color of the QR code.</p> <p>Having tried around with the <code>.png</code> format, the linked documentation is enough information to get this done.</p> <p>Now, using <code>.svg</code>, I can create working codes, but only black code on white background - and I can't make use of the information in the docs to specify the color.</p> <p>Here's the QR code generation I use <em>(shortened)</em>:</p> <pre><code>import qrcode
import qrcode.image.svg

vcard_data = &quot;&quot;&quot;BEGIN:VCARD
VERSION:3.0
...etc...
END:VCARD&quot;&quot;&quot;

factory = qrcode.image.svg.SvgPathImage
img = qrcode.make(vcard_data, image_factory=factory)
img.save(output_image)  # full path and name of the .svg defined before
</code></pre> <p>When using <code>.png</code>, I could modify colors using</p> <pre><code>img = qr.make_image(fill_color=(12, 73, 114), back_color=(255, 255, 255))
</code></pre> <p>...but the whole PNG image creation uses <code>.make_image</code>, while the SVG is created using <code>.make</code>, and that doesn't take arguments like this.</p> <p>How can I adjust the color settings of the resulting SVG?</p> <p><strong>Update:</strong></p> <p>I tried to generate an SVG using <code>.make_image</code>, but I can't make that work. All styling options seem to rely on <code>StyledPilImage</code>, but that seems like it only supports <em>.png</em>.</p> <p>If I understand the docs correctly, an <code>image_factory</code> and a <code>module_drawer</code> and a <code>color_mask</code> should work, but as far as I can tell not for SVG.</p> <p>This is what I've tried now, but it does not work:</p> <pre><code>import qrcode
import qrcode.image.svg
from qrcode.image.styledpil import StyledPilImage
from qrcode.image.styles.moduledrawers.svg import SvgSquareDrawer
from qrcode.image.styles.colormasks import SolidFillColorMask

qr = qrcode.QRCode(
    version=None,
    error_correction=qrcode.constants.ERROR_CORRECT_M,
    box_size=4,
    border=0,
    image_factory=StyledPilImage
)
qr.add_data(vcard_data)
img = qr.make_image(
    module_drawer=SvgSquareDrawer(),
    color_mask=SolidFillColorMask(
        front_color=(12, 73, 114),
        back_color=(255, 255, 255)
    )
)
img.save(output_image)
</code></pre> <p>That leads to:</p> <blockquote> <p>AttributeError: module 'PIL.Image' has no attribute 'Resampling'</p> </blockquote> <p>I begin to think no one uses the &quot;qrcode&quot; module to generate styled SVGs. It seems like this <em>could</em> be done somehow, but... not really?</p>
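Two hedged notes, not from the post. The `AttributeError` points at an old Pillow: `Image.Resampling` was added in Pillow 9.1, so upgrading Pillow should clear that error — though `StyledPilImage` still produces raster output, not SVG. For SVG, one version-independent route is to post-process the generated XML and set `fill` on the path elements; the sample string below is a tiny stand-in for the file `qrcode` wrote:

```python
import xml.etree.ElementTree as ET

# Stand-in for the file qrcode wrote; in practice: root = ET.parse(output_image).getroot()
svg = '<svg xmlns="http://www.w3.org/2000/svg"><path d="M0 0h1v1H0z"/></svg>'

ET.register_namespace("", "http://www.w3.org/2000/svg")  # keep tags unprefixed
root = ET.fromstring(svg)
for path in root.iter("{http://www.w3.org/2000/svg}path"):
    path.set("fill", "#0c4972")  # QR foreground color

styled = ET.tostring(root, encoding="unicode")
print('fill="#0c4972"' in styled)  # True
```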
<python><svg><qr-code>
2023-08-21 12:53:13
2
987
xph
76,945,444
888,580
PySimpleGUI dialog popup not staying on top
<p>I have an application that uses PySimpleGUI for its UI. The application runs on a Raspberry Pi and uses the Pi touchscreen as a display. When the application starts, it fills the Pi display. There are some instances where I need to prompt the user with a dialog, but I'm having some issues with the dialog not staying on top. I tried something very simple: as soon as the window is created, I show a dialog box. If the user accidentally touches anywhere on the window behind the dialog, then that window gets focus and the dialog box is now behind it. There is no keyboard attached, this is all touch, so they can't get the dialog back by using a keyboard to cycle windows.</p> <pre><code>__window = Gui.Window(&quot;Imaging System&quot;, tabbed_layout, no_titlebar=True, location=(0, 0), size=(800, 480), finalize=True)
__window[constants.KEY_OUTPUT_BUFFER].expand(True, True, True)
__window.maximize()
Gui.popup_error('test', modal=True, keep_on_top=True)
</code></pre> <p>I tried setting <code>grab_anywhere=False</code> on the main window, but it did not seem to have any effect, as touching the main window still sends the dialog behind it.</p> <p>I'm using Python version 3.9.2 and PySimpleGUI version 4.60.4.</p>
<python><raspberry-pi4><pysimplegui>
2023-08-21 12:48:52
1
5,998
lane.maxwell
76,945,193
2,666,289
Populate relationship in SQLAlchemy with `query().join()`
<p>Consider the following models</p> <pre><code>class A(Base): __tablename__ = &quot;as&quot; id = mapped_column(Integer, primary_key=True) b_id = mapped_column(ForeignKey(&quot;bs.id&quot;)) b: Mapped[B] = relationship() class B(Base): __tablename__ = &quot;bs&quot; id = mapped_column(Integer, primary_key=True) c_id = mapped_column(ForeignKey(&quot;cs.id&quot;)) c: Mapped[C] = relationship() x = mapped_column(Integer) class C(Base): __tablename__ = &quot;cs&quot; id = mapped_column(Integer, primary_key=True) y = mapped_column(Integer) </code></pre> <p>I want to query <code>A</code> objects with constraints on the associated <code>.b.x</code> and <code>.b.c.y</code> value, and I want the resulting <code>A</code> objects to have populated fields (not lazy).</p> <ol> <li>If I use <code>joinedload(A.b).joinedload(B.c)</code>, I cannot apply constraints directly:</li> </ol> <pre class="lang-py prettyprint-override"><code>select(A) .options(joinedload(A.b).joinedload(B.c)) .where(B.x == 0, C.y == 0) </code></pre> <pre class="lang-sql prettyprint-override"><code>SELECT `as`.id, `as`.b_id, cs_1.id AS id_1, cs_1.y, bs_1.id AS id_2, bs_1.c_id, bs_1.x FROM `as` LEFT OUTER JOIN bs AS bs_1 ON bs_1.id = `as`.b_id LEFT OUTER JOIN cs AS cs_1 ON cs_1.id = bs_1.c_id, bs, cs WHERE bs.x = ? AND cs.y = ? 
</code></pre> <p>and as you can see, the constraints are not on the joined table, so the query is incorrect.</p> <ol start="2"> <li>If I use <code>.has()</code>, I get the proper results, but this creates an inefficient query (in practice I have many more constraints and large tables):</li> </ol> <pre class="lang-py prettyprint-override"><code>select(A)
.options(joinedload(A.b).joinedload(B.c))
.where(
    A.b.has(
        and_(B.x == 0, B.c.has(C.y == 0))
    )
)
</code></pre> <pre class="lang-sql prettyprint-override"><code>SELECT `as`.id, `as`.b_id, cs_1.id AS id_1, cs_1.y, bs_1.id AS id_2, bs_1.c_id, bs_1.x
FROM `as`
LEFT OUTER JOIN bs AS bs_1 ON bs_1.id = `as`.b_id
LEFT OUTER JOIN cs AS cs_1 ON cs_1.id = bs_1.c_id
WHERE EXISTS (SELECT 1 FROM bs
    WHERE bs.id = `as`.b_id AND bs.x = ?
    AND (EXISTS (SELECT 1 FROM cs
        WHERE cs.id = bs.c_id AND cs.y = ?)))
</code></pre> <p>with <code>(EXISTS (SELECT 1 ...))</code> subqueries, which are inefficient and not needed here.</p> <ol start="3"> <li>If I use <code>.join()</code>, I can get a clean request, but then the relationship fields do not get populated automatically:</li> </ol> <pre class="lang-py prettyprint-override"><code>select(A)
.join(B, A.b_id == B.id)
.join(C, B.c_id == C.id)
.where(B.x == 0, C.y == 0)
</code></pre> <pre class="lang-sql prettyprint-override"><code>SELECT `as`.id, `as`.b_id
FROM `as`
INNER JOIN bs ON `as`.b_id = bs.id
INNER JOIN cs ON bs.c_id = cs.id
WHERE bs.x = ? AND cs.y = ?
</code></pre> <p>and as you can see, the fields for <code>A.b</code> and <code>A.b.c</code> are not included, so accessing <code>A.b</code> will trigger a new request (same for <code>A.b.c</code> of course).</p> <ol start="4"> <li>If I combine <code>.join</code> and <code>joinedload</code>, the request is not valid:</li> </ol> <pre class="lang-py prettyprint-override"><code>select(A)
.join(B, A.b_id == B.id)
.join(C, B.c_id == C.id)
.where(B.x == 0, C.y == 0)
.options(joinedload(A.b).joinedload(B.c))
</code></pre> <pre class="lang-sql prettyprint-override"><code>SELECT `as`.id, `as`.b_id, cs_1.id AS id_1, cs_1.y, bs_1.id AS id_2, bs_1.c_id, bs_1.x
FROM `as`
INNER JOIN bs ON `as`.b_id = bs.id
INNER JOIN cs ON bs.c_id = cs.id
LEFT OUTER JOIN bs AS bs_1 ON bs_1.id = `as`.b_id
LEFT OUTER JOIN cs AS cs_1 ON cs_1.id = bs_1.c_id
WHERE bs.x = ? AND cs.y = ?
</code></pre> <p>contains 4 joins instead of two.</p> <hr /> <p>Is there a way to get <code>A.b</code> and <code>A.b.c</code> populated as if I used <code>joinedload</code>, but using a SQL request similar to <code>.join()</code>?</p>
<python><sqlalchemy>
2023-08-21 12:15:44
1
38,048
Holt
76,945,092
7,327,257
Python documentation using Sphinx: docstring documentation won't generate
<p>I'm trying to generate my project documentation using Sphinx 6.2.1 and Python 3. I'm working with a test project named <code>sphinx-basic-test</code>. I've created a <code>docs</code> folder and run <code>sphinx-quickstart</code> inside it. So far, my project structure is:</p> <pre><code>sphinx-basic-test
- code
  |__ __init__.py
  |__ classifiers.py (several functions)
  |__ clean_data.py (a class, with main method)
  |__ data_tools.py (several functions)
- docs
  |__ Makefile
  |__ _build
  |__ _static
  |__ _templates
  |__ conf.py
  |__ index.rst
  |__ make.bat
</code></pre> <p>I've followed several tutorials, Stack Overflow threads and the official documentation:</p> <ul> <li>I've modified the <code>conf.py</code> file, adding the following lines:</li> </ul> <pre><code>import sys
sys.path.insert(0, os.path.abspath('..'))

extensions = ['sphinx.ext.autodoc', 'sphinx.ext.coverage', 'sphinx.ext.napoleon']
</code></pre> <ul> <li>I've then run <code>sphinx-apidoc -o docs code/</code> in the root folder (<code>sphinx-basic-test</code>), and it has created the files <code>docs/code.rst</code> and <code>docs/modules.rst</code>. The content of the first one is:</li> </ul> <pre><code>code package
============

Submodules
----------

code.classifiers module
-----------------------

.. automodule:: code.classifiers
   :members:
   :undoc-members:
   :show-inheritance:

code.clean\_data module
-----------------------

.. automodule:: code.clean_data
   :members:
   :undoc-members:
   :show-inheritance:

code.data\_tools module
-----------------------

.. automodule:: code.data_tools
   :members:
   :undoc-members:
   :show-inheritance:
</code></pre> <p>And the content of the second one is simply:</p> <pre><code>code
====

.. toctree::
   :maxdepth: 4

   code
</code></pre> <ul> <li>I've modified the <code>index.rst</code> file to add <code>modules</code>, taking care of indentation:</li> </ul> <pre><code>Welcome to test's documentation!
================================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   modules

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
</code></pre> <ul> <li><p>After doing this, I ran <code>make html</code> inside the <code>docs</code> folder. This raises a lot of warnings, all of the type: <code>WARNING: autodoc: failed to import module 'classifiers' from module 'code'; the following exception was raised: No module named 'code.classifiers'; 'code' is not a package</code></p> </li> <li><p>I've checked the created HTML file, and it does display the names of the different functions and the class (but not the methods inside the class), but no documentation is generated. I've tried several things so far, including changing the absolute path in the configuration file, with no different output.</p> </li> <li><p>The only different result I've gotten is without using <code>sphinx-apidoc</code>, just writing blocks like:</p> </li> </ul> <pre><code>.. automodule:: classifiers
   :members:
</code></pre> <p>in the <code>index.rst</code> file. But this only works for one of the scripts; if I also try to add <code>clean_data</code> this way, it won't work.</p> <p><strong>I would like to create HTML files in which all three scripts are documented using the docstrings in the Python files, together with additional information I'd like to add. But I'm stuck on this first goal; any help is appreciated.</strong></p>
<python><documentation><python-sphinx><autodoc>
2023-08-21 12:01:44
0
357
M. Merida-Floriano
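Two things are worth checking for the warnings above (both are educated guesses, not confirmed fixes): `os.path.abspath('..')` is resolved against the build's working directory rather than against `conf.py` itself, so the inserted path can point at the wrong place depending on where the build runs; and the package name `code` collides with Python's standard-library `code` module, which matches the "'code' is not a package" wording of the warning. A sketch of a cwd-independent path setup for `conf.py`:

```python
# docs/conf.py -- sketch, not a confirmed fix.
# Anchor the import path to this file's own location instead of the current
# working directory, so autodoc finds the package no matter where the build runs.
import os
import sys

PROJECT_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
sys.path.insert(0, PROJECT_ROOT)
```

If the warnings persist, renaming the package from `code` to something that does not shadow a standard-library module (e.g. `basic_test_code`, a hypothetical name) and re-running `sphinx-apidoc` is the usual next step.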
76,944,986
15,517,885
how to solve Ivy framework: Frontend function returned non-frontend arrays: ivy.array(1.) error?
<p>While working on a <a href="https://github.com/unifyai/ivy/pull/21879" rel="nofollow noreferrer">pull request</a> for the <a href="https://unify.ai/" rel="nofollow noreferrer">Ivy</a> machine learning framework, I got this issue:</p> <pre><code>E       AssertionError: Frontend function returned non-frontend arrays: ivy.array(1.)
E       Falsifying example: test_torch_kron(
E           frontend='torch',
E           backend_fw='numpy',
E           on_device='cpu',
E           dtype_and_x=(['float16', 'float16'],
E               [array(-1., dtype=float16), array(-1., dtype=float16)]),
E           fn_tree='ivy.functional.frontends.torch.kron',
E           test_flags=FrontendFunctionTestFlags(
E               num_positional_args=0,
E               with_out=False,
E               inplace=False,
E               as_variable=[False],
E               native_arrays=[False],
E               generate_frontend_arrays=False,
E           ),
E       )
E
E       You can reproduce this example by temporarily adding @reproduce_failure('6.82.4', b'AXicY2AAAkYGCGBEYzMwAAAAXwAF') as a decorator on your test case
</code></pre> <p>How can I resolve it?</p> <p>To understand the issue, I have already looked through other Stack Overflow questions, Google results, and the framework's documentation.</p>
<python><machine-learning><testing><frameworks><artificial-intelligence>
2023-08-21 11:46:31
1
535
a0m0rajab
76,944,842
1,102,705
Set background color for whole pane in terminal using python Rich library
<p>How do I use Rich (<a href="https://rich.readthedocs.io/" rel="nofollow noreferrer">https://rich.readthedocs.io/</a>) to clear the screen with a background color, so that the entire screen is solid that color and lines I print stand out against the overall background? I'm already using <code>console.print(..., style=&quot;white on red&quot;)</code> to color individual lines, but that doesn't color past the end of the line, which stays the terminal's background color.</p>
<python><terminal><colors>
2023-08-21 11:23:42
1
597
lahwran
76,944,626
9,797,207
matplotlib lib for multiple lines is giving strange result in y axis data
<p>I am trying to plot multiple lines with matplotlib, but I am getting strange results: the plot shows several stacked y-axis ranges. All my values should be in the range [0.0, 15.0].</p> <pre><code>import matplotlib.pyplot as plt
import datetime


def image_graph():
    left_image_data = ['10.31', '10.21', '10.13', '10.08', '10.1']
    right_image_data = ['10.17', '10.14', '10.13', '10.1', '10.07']
    near_proximity_image_data = ['9.257', '9.601', '9.688', '9.76', '9.824']
    far_proximity_image_data = ['9.999', '9.997', '10.0', '10.0', '9.996']
    image_data_x = [datetime.datetime(2023, 8, 19, 21, 30, 20, 876981),
                    datetime.datetime(2023, 8, 19, 21, 30, 20, 877015),
                    datetime.datetime(2023, 8, 19, 21, 30, 20, 877031),
                    datetime.datetime(2023, 8, 19, 21, 30, 20, 877046),
                    datetime.datetime(2023, 8, 19, 21, 30, 20, 877057)]
    plt.plot(image_data_x, left_image_data, label=&quot;left_image_data time rate&quot;)
    plt.plot(image_data_x, right_image_data, label=&quot;right_image_data time rate&quot;)
    plt.plot(image_data_x, near_proximity_image_data, label=&quot;near_proximity_image&quot;)
    plt.plot(image_data_x, far_proximity_image_data, label=&quot;far_proximity_image&quot;)
    plt.legend()
    plt.show()


image_graph()
</code></pre> <p>Can someone help me solve this? <code>left_image_data</code> and <code>right_image_data</code> are plotted correctly, but the other two series behave strangely: the y-axis values start over in a second range instead of all series sharing the same axis.</p> <p><a href="https://i.sstatic.net/WgRFV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WgRFV.png" alt="plot image" /></a></p>
<python><matplotlib><plot>
2023-08-21 10:56:47
1
467
saibhaskar
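One likely cause of the stacked y-axis ranges above (hedged, since it depends on the matplotlib version): the y-values are strings, and matplotlib treats strings as categorical data, giving each new set of strings its own band of ticks. Converting the lists to floats before plotting puts everything on one numeric axis. A minimal sketch of the conversion:

```python
# Sketch: matplotlib plots strings as categories, so '9.257' and '10.31' end
# up on separate categorical tick bands. Convert the strings to floats first.
left_image_data = ['10.31', '10.21', '10.13', '10.08', '10.1']
near_proximity_image_data = ['9.257', '9.601', '9.688', '9.76', '9.824']

left_y = [float(v) for v in left_image_data]
near_y = [float(v) for v in near_proximity_image_data]

# Then plot the numeric lists as before, e.g.:
# plt.plot(image_data_x, left_y, label='left_image_data time rate')
# plt.plot(image_data_x, near_y, label='near_proximity_image')
```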
76,944,566
5,079,238
Can't use redis password with special characters in django
<p>I have a Redis server whose authentication password contains <code>=</code> and <code>?</code>, and I am using the location scheme <code>redis://[:password]@localhost:6379</code>.</p> <p>Config:</p> <pre><code>'default': {
    'BACKEND': 'django_redis.cache.RedisCache',
    'LOCATION': 'redis://:6?V4=434#ef4@localhost:6379/1',
    'OPTIONS': {
        'CLIENT_CLASS': 'django_redis.client.DefaultClient',
    }
},
</code></pre> <p>I always get the error <code>TypeError: __init__() got an unexpected keyword argument 'V4'</code>. The location string apparently doesn't handle passwords containing <code>=</code> and <code>?</code> in certain orders and treats them as separators in the scheme. I tried escaping the special characters, <code>6\?V4\=434#ef4</code>, but that gave a different error: <code>ValueError: Port could not be cast to integer value as '6\\'</code>.</p> <p>Can this be solved without moving the password into <code>OPTIONS</code>?</p>
<python><django><django-redis>
2023-08-21 10:48:53
1
733
Badr
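One approach worth trying (hedged: it assumes the URL parser honours percent-encoding of the userinfo part, which redis-py's URL handling documents): percent-encode the password so `?`, `=` and `#` are no longer URL metacharacters:

```python
from urllib.parse import quote

password = '6?V4=434#ef4'
encoded = quote(password, safe='')  # '?' -> %3F, '=' -> %3D, '#' -> %23
location = f'redis://:{encoded}@localhost:6379/1'
print(location)  # redis://:6%3FV4%3D434%23ef4@localhost:6379/1
```

The resulting string goes into `LOCATION` unchanged; whether django-redis decodes it correctly depends on the installed redis-py version, so treat this as a sketch to test rather than a guaranteed fix.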
76,944,430
17,681,550
Microphone Permission of Flask app in python-for-android and buildozer
<p>I'm converting a Flask app combined with SpeechRecognition JavaScript into an APK (I got the HTML <a href="https://bes.works/dev/test/webspeech/" rel="nofollow noreferrer">here</a>). I tried to add the microphone permission by adding the following.</p> <p>Python:</p> <pre><code>from android.permissions import Permission, request_permissions

request_permissions([Permission.INTERNET,
                     Permission.MODIFY_AUDIO_SETTINGS,
                     Permission.RECORD_AUDIO])
</code></pre> <p><code>buildozer.spec</code>:</p> <pre><code>android.permissions = INTERNET,RECORD_AUDIO,MODIFY_AUDIO_SETTINGS
android.api = 30
</code></pre> <p>But when I run the APK, I still get a <code>not-allowed</code> error in JavaScript.</p> <p>Thanks for any help!</p>
<javascript><python><android><buildozer>
2023-08-21 10:29:08
1
538
Dile
76,944,315
22,221,987
Zooming out the QChart with QRubberBand, to fit the size of the QChartView
<p>I have a <code>QChartView</code> with QChart. I've added <code>QRubberBand</code> to the <code>QChartView</code> to zoom the chart. Plus I overrode <code>wheelEvent</code>, to scroll the chart by X axis.</p> <p>So, now I want to prevent 'outscrolling' and 'outzooming' over some value. By this names I mean, that I need to stop zoom out, when chart size reaches the original chart size (so, minimal size of the chart should be it's original size) and I need to stop scrolling event, when I reach the min or max chart value (for the same reason).</p> <p>I've tried to use size of the <code>QChart</code> and <code>QChartView</code> to handle <code>QRubberBand</code> resizing as beginning, but it didn't help.</p> <p>Here is my small try, but I can't find out any compatible solution for <code>QWheelEvent</code> and <code>QMouseEvent</code> (but may be there is any build in methods for these tasks, so I should not override that two events). Here is the code:</p> <pre><code>import sys from PySide6.QtCore import Qt from PySide6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget from PySide6 import QtCharts, QtGui import random class Chart(QtCharts.QChart): def __init__(self): super().__init__() def wheelEvent(self, event): print(event.delta()) if event.delta() &gt; 0: self.scroll(-10, 0) print('e') else: self.scroll(10, 0) def create_series(self): for _ in range(2): series = QtCharts.QLineSeries() for i in range(100): series.append(i, random.randint(0, 10)) self.addSeries(series) def setup(self): self.createDefaultAxes() self.legend().setVisible(True) self.legend().setAlignment(Qt.AlignLeft) class ChartView(QtCharts.QChartView): def __init__(self, chart): super().__init__() self.setChart(chart) self.setRenderHint(QtGui.QPainter.Antialiasing) self.setRubberBand(QtCharts.QChartView.HorizontalRubberBand) def mousePressEvent(self, event): if event.button() == Qt.RightButton: if self.chart().size().width() &gt; self.size().width(): print(&quot;Fit&quot;) 
super().mousePressEvent(event) else: print(self, &quot;Doesn't fit&quot;) print(self.chart().size().width()) print(self.size().width()) else: super().mousePressEvent(event) class MainWindow(QMainWindow): def __init__(self): super().__init__() self.layout = QVBoxLayout() for _ in range(2): chart_view = self.create_chart_view() self.layout.addWidget(chart_view) self.central_widget = QWidget() self.central_widget.setLayout(self.layout) self.setCentralWidget(self.central_widget) @staticmethod def create_chart(): chart = Chart() chart.create_series() chart.setup() return chart def create_chart_view(self): chart_view = ChartView(chart=self.create_chart()) chart_view.chart().axes(Qt.Horizontal)[0].rangeChanged.connect( lambda: (self.sync_chart_views(chart_view)) ) return chart_view def sync_chart_views(self, source_chart_view): x_range = (source_chart_view.chart().axes(Qt.Horizontal)[0].min(), source_chart_view.chart().axes(Qt.Horizontal)[0].max()) for chart_view in self.findChildren(QtCharts.QChartView): if chart_view != source_chart_view: chart_view.chart().axes(Qt.Horizontal)[0].setRange(x_range[0], x_range[1]) if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec()) </code></pre>
<python><qt><charts><pyqt><pyside>
2023-08-21 10:10:09
1
309
Mika
76,943,980
2,508,672
Pandas excel columns name and datatype to json
<p>I have an Excel file that I am reading with pandas, and I am able to get the columns and their types with the code below:</p> <pre><code>import pandas as pd


def read_from_gs():
    # print('read : ', path)
    # df = pd.read_csv(path, sep=separator)
    df = pd.read_excel('sample/sam.xlsx')
    print(df.dtypes.to_json())
    # data_json = data_frame.to_json()
    return df


read_from_gs()
</code></pre> <p>I was able to get <code>df.dtypes.tolist()</code>, but I am not able to get the column names and their types in JSON format, like:</p> <pre><code>{&quot;name&quot;: &quot;string/object&quot;, &quot;marks&quot;: &quot;int32&quot;}
</code></pre> <blockquote> <p><code>df.dtypes.to_json()</code> throws the error:</p> <p>OverflowError: Maximum recursion level reached</p> <p>The output of <code>df.dtypes</code> is:</p> <pre><code>area             int64
area_title      object
area_type        int64
naics           object
naics_title     object
i_group         object
own_code         int64
occ_code        object
occ_title       object
o_group         object
tot_emp         object
emp_prse        object
jobs_1000      float64
loc_quotient   float64
pct_total       object
h_mean          object
a_mean          object
mean_prse       object
h_pct10         object
h_pct25         object
h_median        object
h_pct75         object
h_pct90         object
a_pct10         object
</code></pre> <p>and its type is &lt;class 'pandas.core.series.Series'&gt;</p> </blockquote> <p>Any help?</p> <p>Thanks</p>
<python><pandas>
2023-08-21 09:24:44
1
4,608
Md. Parvez Alam
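The recursion error most likely comes from `to_json()` trying to serialize the dtype objects themselves. Casting the dtypes to their string names first sidesteps it; a minimal sketch (the column names and dtypes here are illustrative, not taken from the actual spreadsheet):

```python
import json

import pandas as pd

df = pd.DataFrame({'name': ['a', 'b'], 'marks': [1, 2]})

# dtype objects are not JSON-serializable; use their string names instead.
schema = df.dtypes.astype(str).to_dict()
print(json.dumps(schema))  # e.g. {"name": "object", "marks": "int64"}
```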
76,943,605
205,747
Type mismatch for variadic generics variable in class __init__ VS method contexts
<p>I have a type mismatch between the same definition of variadic generic <code>Union[*HandlerTypes]</code> used in <code>__init__</code> and in <code>pet_handler</code> method.</p> <p>I want strongly typed decorators - I achieve this by using <code>TypeVarTuple</code> variadic generics. All seems to work. But there is one last problem - the type mismatch between:</p> <p>a) <code>self.handlers: list[Union[*HandlerTypes]] = []</code><br /> b) <code>self.handlers.append(handler_func)</code></p> <p>Though technically the type is the same <code>Union[*HandlerTypes]</code> appearantly the difference of the usage context (class init vs method) triggers the pylance error for <code>self.handlers.append(handler_func)</code>:</p> <pre><code>Argument of type &quot;Union[*HandlerTypes@AppModule]&quot; cannot be assigned to parameter &quot;__object&quot; of type &quot;*HandlerTypes@AppModule&quot; in function &quot;append&quot; Type &quot;Union[*HandlerTypes@AppModule]&quot; cannot be assigned to type &quot;*HandlerTypes@AppModule&quot;PylancereportGeneralTypeIssues </code></pre> <p>Any ideas how I could work this out? Thanks a lot! 
Full code below</p> <pre><code> class Dog: pass class Cat: pass class MyFirstDog(Dog): pass class MyFirstCat(Cat): pass class MySecondDog(Dog): pass class MySecondCat(Cat): pass HandlerType1 = Callable[[MyFirstDog, MyFirstCat], Tuple[MyFirstDog, MyFirstCat]] HandlerType2 = Callable[[MySecondDog, MySecondCat], Tuple[MySecondDog, MySecondCat]] HandlerTypes = TypeVarTuple('HandlerTypes') class AppModule(Generic[*HandlerTypes]): def __init__(self) -&gt; None: self.handlers: list[Union[*HandlerTypes]] = [] def pet_handler(self, handler_func: Union[*HandlerTypes]) -&gt; Union[*HandlerTypes]: self.handlers.append(handler_func) return handler_func appmodule = AppModule[HandlerType1, HandlerType2]() @appmodule.pet_handler def pet_handler1(a: MyFirstDog, b: MyFirstCat) -&gt; Tuple[MyFirstDog, MyFirstCat]: return (a, b) @appmodule.pet_handler def pet_handler2(a: MySecondDog, b: MySecondCat) -&gt; Tuple[MySecondDog, MySecondCat]: return (a, b) </code></pre>
<python><python-3.x><python-typing>
2023-08-21 08:32:13
1
757
toinbis
76,943,502
2,706,344
Reading a komoot xml file (gpx) with pandas
<p>I want to read an XML file generated by komoot (a GPX file) into a DataFrame. Here is the structure of the XML file:</p> <pre><code>&lt;?xml version='1.0' encoding='UTF-8'?&gt;
&lt;gpx version=&quot;1.1&quot; creator=&quot;https://www.komoot.de&quot; xmlns=&quot;http://www.topografix.com/GPX/1/1&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xsi:schemaLocation=&quot;http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd&quot;&gt;
  &lt;metadata&gt;
    &lt;name&gt;Title&lt;/name&gt;
    &lt;author&gt;
      &lt;link href=&quot;https://www.komoot.de&quot;&gt;
        &lt;text&gt;komoot&lt;/text&gt;
        &lt;type&gt;text/html&lt;/type&gt;
      &lt;/link&gt;
    &lt;/author&gt;
  &lt;/metadata&gt;
  &lt;trk&gt;
    &lt;name&gt;Title&lt;/name&gt;
    &lt;trkseg&gt;
      &lt;trkpt lat=&quot;60.126749&quot; lon=&quot;4.250254&quot;&gt;
        &lt;ele&gt;455.735013&lt;/ele&gt;
        &lt;time&gt;2023-08-20T17:42:34.674Z&lt;/time&gt;
      &lt;/trkpt&gt;
      &lt;trkpt lat=&quot;60.126580&quot; lon=&quot;4.250247&quot;&gt;
        &lt;ele&gt;455.735013&lt;/ele&gt;
        &lt;time&gt;2023-08-20T17:42:36.695Z&lt;/time&gt;
      &lt;/trkpt&gt;
      &lt;trkpt lat=&quot;60.126484&quot; lon=&quot;4.250240&quot;&gt;
        &lt;ele&gt;455.735013&lt;/ele&gt;
        &lt;time&gt;2023-08-20T17:44:15.112Z&lt;/time&gt;
      &lt;/trkpt&gt;
    &lt;/trkseg&gt;
  &lt;/trk&gt;
&lt;/gpx&gt;
</code></pre> <p>I tried this code:</p> <pre><code>pd.read_xml('testfile.gpx', xpath='./gpx/trk/trkseg')
</code></pre> <p>But it seems there is a problem with my <code>xpath</code>; I get this <code>ValueError</code>:</p> <pre><code>ValueError: xpath does not return any nodes. Be sure row level nodes are in xpath. If document uses namespaces denoted with xmlns, be sure to define namespaces and use them in xpath.
</code></pre> <p>I tried a lot of xpaths, but none of them worked.</p>
<python><pandas><xml><dataframe><gpx>
2023-08-21 08:17:54
1
4,346
principal-ideal-domain
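The `ValueError` itself hints at the problem: the GPX file declares a default namespace (`xmlns="http://www.topografix.com/GPX/1/1"`), so bare element names in `./gpx/trk/trkseg` match nothing. Passing a `namespaces` mapping and using its prefix in the xpath is the usual remedy; note also that the row-level nodes are the `trkpt` elements. A sketch (using the stdlib `etree` parser; with lxml installed the default parser should work the same way):

```python
import io

import pandas as pd

# Abbreviated stand-in for the komoot file, with the same default namespace.
gpx = """<?xml version='1.0' encoding='UTF-8'?>
<gpx version="1.1" xmlns="http://www.topografix.com/GPX/1/1">
  <trk><trkseg>
    <trkpt lat="60.126749" lon="4.250254"><ele>455.7</ele></trkpt>
    <trkpt lat="60.126580" lon="4.250247"><ele>455.7</ele></trkpt>
  </trkseg></trk>
</gpx>"""

# Bind a prefix to the document's default namespace and use it in the xpath.
ns = {"gpx": "http://www.topografix.com/GPX/1/1"}
df = pd.read_xml(io.StringIO(gpx), xpath=".//gpx:trkpt", namespaces=ns, parser="etree")
print(len(df))  # 2
```

For the real file, `pd.read_xml('testfile.gpx', xpath='.//gpx:trkpt', namespaces=ns)` should yield one row per track point with `lat`/`lon` columns.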
76,943,342
10,780,715
PyPolars efficient regex mass matching
<p>I'm trying to mass-match a column against a dictionary of regexes as follows:</p> <pre class="lang-py prettyprint-override"><code>import random
import time

import polars as pl

col_to_map_possibilities = [str(x) for x in range(100)]
col_to_map_generated = random.choices(col_to_map_possibilities, k=25_000_000)

col_to_match_possibilities = [&quot;xxx&quot;, &quot;yyy&quot;]  # larger in my real scenario
col_to_match_generated = random.choices(col_to_match_possibilities, k=25_000_000)

original_lf = pl.LazyFrame(
    data={
        &quot;col_to_map&quot;: col_to_map_generated,
        &quot;col_to_match&quot;: col_to_match_generated,
    },
    schema={&quot;col_to_map&quot;: pl.String, &quot;col_to_match&quot;: pl.String},
)

regexes = {str(x): random.choice(col_to_match_possibilities) for x in range(100)}


def my_actual_solution(lf: pl.LazyFrame) -&gt; pl.DataFrame:
    return (
        lf.with_columns(pl.col(&quot;col_to_map&quot;).replace(regexes).alias(&quot;regex&quot;))
        .with_columns(
            pl.col(&quot;col_to_match&quot;).str.contains(pl.col(&quot;regex&quot;)).alias(&quot;matched&quot;)
        )
        .collect()
    )


t = time.time()
print(my_actual_solution(original_lf))
print(time.time() - t)
</code></pre> <p>The above code works fine, but it takes a long time (around 35 seconds on my laptop) to complete.</p> <p>Is there any chance I can execute that by filtering on <code>col_to_map</code> first, and then get rid of the temporary <code>regex</code> column?</p>
<python><regex><dataframe><python-polars>
2023-08-21 07:51:33
1
575
mlisthenewcool
76,942,938
15,218,250
datetime field in django is being displayed differently in the template and not like the saved one in database
<p>The following is part of an AJAX response handler in my Django template:</p> <pre><code>&lt;span class='time-left'&gt;${response.chats[key].date}&lt;/span&gt;&lt;/div&gt;`;
</code></pre> <p>This is the model field it refers to:</p> <pre><code>date = models.DateTimeField(auto_now_add=True, null=True, blank=True)
</code></pre> <p>Now this is the problem: the datetime is saved correctly in the database in my current timezone. However, when I display it in the template as above, I get a different time for some reason. <code>USE_TZ = True</code> is set in my settings, and I don't want to change it. How can I show the value as it is saved in the database in the template?</p>
<python><django><django-models><django-views><django-templates>
2023-08-21 06:45:31
1
613
coderDcoder
76,942,932
5,671,447
Decorator with arguments: dealing with class methods in Python 3
<p>Event bus:</p> <pre class="lang-py prettyprint-override"><code># eventbus.py
EventKey = Union[int, str]

listeners: Dict[EventKey, Set[Callable]] = defaultdict(set)


def on(event: EventKey):
    def decorator(callback):
        listeners[event].add(callback)

        @functools.wraps(callback)
        def wrapper(*args, **kwargs):
            return callback(*args, **kwargs)

        return wrapper

    return decorator


def emit(event: EventKey, *args, **kwargs):
    for listener in listeners[event]:
        listener(*args, **kwargs)
</code></pre> <p>Example of a class that needs to listen to an event:</p> <pre class="lang-py prettyprint-override"><code>class Ticking(Referencable):
    def __init__(self, id_: int):
        super().__init__(id_)

    @eventbus.on(StandardObjects.E_TIMER)
    def on_clock(self, clock: Clock):
        match clock.id:
            case StandardObjects.TIME_TICK:
                self.on_time_tick(clock)

    def on_time_tick(self, clock: Clock):
        pass
</code></pre> <p>Example of invoking the related event:</p> <pre class="lang-py prettyprint-override"><code>eventbus.emit(StandardObjects.E_TIMER, clock)  # clock is an instance of Clock
</code></pre> <p>I'm trying to write a relatively simple global event bus in Python 3.11; however, I would like to register listeners to the bus via a decorator. The implementation above works fine when decorating functions, but falls over when a class method is decorated, because the &quot;self&quot; argument is missing when it is called:</p> <pre><code>Ticking.on_clock() missing 1 required positional argument: 'clock'
</code></pre> <p>(I can confirm it's to do with &quot;self&quot; because modifying <code>listener(*args, **kwargs)</code> in <code>emit()</code> to <code>listener('dummy', *args, **kwargs)</code> throws the expected <code>AttributeError: 'str' object has no attribute 'on_time_tick'</code>.)</p> <p>I then explored ways to have the decorator somehow get a reference to the callback's class instance, but in Python 3, <code>Callable</code> objects no longer have a means to access metadata about the class instance they belong to, outside of unstable implementation-specific reflection <a href="https://stackoverflow.com/a/25959545/5671447">hacks</a> that I would certainly like to avoid.</p>
<python><event-handling><python-decorators><class-method>
2023-08-21 06:44:45
1
359
Jacob Jewett
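At decoration time `on_clock` is still a plain function — the bound method does not exist until an instance does — so the bus can only ever call it without `self`. One common workaround (a sketch of one possible design, not the only one): have the decorator merely tag the function with its event key, and register bound methods when each instance is constructed, e.g. in a small base class. The names below (`EventListener`, `"E_TIMER"`) are illustrative stand-ins for the question's `Referencable` and `StandardObjects.E_TIMER`:

```python
from collections import defaultdict
from typing import Callable, Dict, Set, Union

EventKey = Union[int, str]
listeners: Dict[EventKey, Set[Callable]] = defaultdict(set)


def on(event: EventKey):
    """Only tag the callback; instances register their bound methods later."""
    def decorator(callback):
        callback._event_key = event
        return callback
    return decorator


def emit(event: EventKey, *args, **kwargs):
    for listener in list(listeners[event]):
        listener(*args, **kwargs)


class EventListener:
    """Base class that registers every tagged method as a *bound* method."""
    def __init__(self):
        for name in dir(type(self)):
            func = getattr(type(self), name, None)
            event = getattr(func, "_event_key", None)
            if event is not None:
                listeners[event].add(getattr(self, name))  # bound: self baked in


class Ticking(EventListener):
    def __init__(self):
        super().__init__()
        self.ticks = 0

    @on("E_TIMER")
    def on_clock(self, clock):
        self.ticks += 1


t = Ticking()
emit("E_TIMER", clock=object())
print(t.ticks)  # 1
```

Module-level functions could still be registered eagerly inside `decorator` (they need no instance); an unregister step on teardown is also advisable so the bus does not keep instances alive.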
76,942,895
12,293,792
Assign value in Data Frame Column without Loop
<p>I have a data frame (below is only a subset):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Data1</th> <th>Data2</th> <th>Status</th> </tr> </thead> <tbody> <tr> <td>1</td> <td></td> <td>False</td> </tr> <tr> <td>2</td> <td></td> <td>True</td> </tr> <tr> <td>5</td> <td></td> <td>True</td> </tr> </tbody> </table> </div> <p>I would like to put the string value &quot;Done&quot; in column Data2 wherever Status is True (boolean).</p> <p>My code is:</p> <pre><code>d = {&quot;Data1&quot;: [1, 2, 5], &quot;Data2&quot;: [&quot;&quot;, &quot;&quot;, &quot;&quot;], &quot;Status&quot;: [False, True, True]}
df = pd.DataFrame(data=d)

for x in range(len(df[&quot;Status&quot;])):
    if df[&quot;Status&quot;].loc[x] == True:
        df[&quot;Data2&quot;].loc[x] = &quot;Done&quot;
    else:
        df[&quot;Data2&quot;].loc[x] = &quot;&quot;
</code></pre> <p>This code works. My question is: is there any way to do it without a loop?</p>
<python><pandas><dataframe><loops>
2023-08-21 06:38:09
3
455
teteh May
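A boolean mask lets `.loc` do the assignment above in one vectorized step; `numpy.where` is an equivalent alternative. A quick sketch on the same data:

```python
import numpy as np
import pandas as pd

d = {"Data1": [1, 2, 5], "Data2": ["", "", ""], "Status": [False, True, True]}
df = pd.DataFrame(data=d)

# Vectorized: select all rows where Status is True and assign in one step.
df.loc[df["Status"], "Data2"] = "Done"

# Equivalent alternative that rebuilds the whole column:
df["Data2"] = np.where(df["Status"], "Done", "")
print(df["Data2"].tolist())  # ['', 'Done', 'Done']
```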
76,942,890
12,176,803
Why is it mandatory to use asyncio sleep in Python async code?
<p>I recently started digging deeper in asynchronous code with Python, and am wondering why <code>asyncio.sleep</code> is so important.</p> <h2>Use Case</h2> <ul> <li>I have a synchronous source of data coming from a microphone every <code>x</code> milliseconds.</li> <li>I check if there is a wakeword and if yes open a connection through a websockets to my server.</li> <li>I send / receive message asynchronously and independently.</li> </ul> <p>My ideal implementation is that as soon as a message is ready it is sent, and as soon as a message is received it is processed.</p> <p>This must be efficient, since we want to go down to <code>x = 20ms</code> (frames from microphone received every 20 ms).</p> <h2>Implementation</h2> <p>The code is the following:</p> <ul> <li>It has a consumer / producer approach: Consumer receives messages, Producer sends messages.</li> <li>The frames from the microphone are put in a synchronous queue</li> <li>The producer / consumer are handled in a different thread</li> <li>The Queue is shared between the main thread and the other one. 
As soon as a new message is put, it will be processed on the other end.</li> </ul> <pre><code>import asyncio import msgpack import os import pyaudio import ssl import websockets from threading import Thread from queue import Queue from dotenv import load_dotenv # some utilities from src.utils.constants import CHANNELS, CHUNK, FORMAT, RATE from .utils import websocket_data_packet load_dotenv() QUEUE_MAX_SIZE = 10 MY_URL = os.environ.get(&quot;WEBSOCKETS_URL&quot;) ssl_context = ssl.SSLContext() class MicrophoneStreamer(object): &quot;&quot;&quot;This handles the microphone and yields chunks of data when they are ready.&quot;&quot;&quot; chunk: int = CHUNK channels: int = CHANNELS format: int = FORMAT rate: int = RATE def __init__(self): self._pyaudio = pyaudio.PyAudio() self.is_stream_open: bool = True self.stream = self._pyaudio.open( format=self.format, channels=self.channels, rate=self.rate, input=True, frames_per_buffer=self.chunk, ) def __iter__(self): while self.is_stream_open: yield self.stream.read(self.chunk) def close(self): self.is_stream_open = False self.stream.close() self._pyaudio.terminate() async def consumer(websocket): async for message in websocket: print(f&quot;Received message: {msgpack.unpackb(message)}&quot;) async def producer(websocket, audio_queue): while True: print(&quot;Sending chunck&quot;) chunck = audio_queue.get() await websocket.send(msgpack.packb(websocket_data_packet(chunck))) # THE FOLLOWING LINE IS IMPORTANT await asyncio.sleep(0.02) async def handler(audio_queue): websocket = await websockets.connect(MY_URL, ssl=ssl_context) async with websockets.connect(MY_URL, ssl=ssl_context) as websocket: print(&quot;Websocket opened&quot;) consumer_task = asyncio.create_task(consumer(websocket)) producer_task = asyncio.create_task(producer(websocket, audio_queue)) done, pending = await asyncio.wait( [consumer_task, producer_task], return_when=asyncio.FIRST_COMPLETED, timeout=60, ) for task in pending: task.cancel() # TODO: is the 
following useful? await websocket.close() def run(audio_queue: Queue): loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.run_until_complete(handler(audio_queue)) loop.close() def main(): audio_queue = Queue(maxsize=5) # the iterator is synchronous for i, chunk in enumerate(MicrophoneStreamer()): print(&quot;Iteration&quot;, i) # to simulate condition wakeword detected if i == 2: thread = Thread( target=run, args=(audio_queue,), ) thread.start() # adds to queue if audio_queue.full(): _ = audio_queue.get_nowait() audio_queue.put_nowait(chunk) if __name__ == &quot;__main__&quot;: main() </code></pre> <h2>Issue</h2> <p>There is a line that I commented <code># THE FOLLOWING LINE IS IMPORTANT</code> in the producer.</p> <p>If I do not add <code>asyncio.sleep(...)</code> in the producer, the messages from the consumer are never received.</p> <p>When I add <code>asyncio.sleep(0)</code> in the producer, the messages from the consumer are received, but very late and sporadically.</p> <p>When I add <code>asyncio.sleep(0.02)</code> in the producer, the messages from the consumer are received on time.</p> <p>Why is there this behavior and how to solve it? In order to send message every 20 milliseconds, I cannot sleep 20ms every iteration, this would probably mess up the process.</p> <p>(Note, I found out this sleep fix with <a href="https://stackoverflow.com/questions/74990979/python-websockets-sends-messages-but-cannot-receive-messages-asynchronously?noredirect=1&amp;lq=1">this issue</a>)</p> <h2>What I tried</h2> <p>I thought that if the iterator was asynchronous, this would solve the issue, but it didn't. If you want to see the implementation, I opened another thread in the past days <a href="https://stackoverflow.com/questions/76912246/python-synchronous-pyaudio-data-in-asynchronous-code">here</a>.</p> <p>I also tried to dig deeper into how event loops work. 
From my understanding, the <code>asyncio.sleep</code> is necessary for the event loop to decide which task to execute, and to switch between them - for instance, we use it to trigger a task to start, after creating it.</p> <p>This seems a bit odd to me. Is there a workaround?</p>
<python><asynchronous><python-asyncio>
2023-08-21 06:37:45
2
366
HGLR
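The behaviour described above follows from asyncio's cooperative scheduling: a coroutine only hands control back to the event loop at an `await` that actually suspends, and the blocking `queue.Queue.get()` never does, so the consumer task is starved. `await asyncio.sleep(0)` is simply an explicit yield point. A toy demonstration of the starvation (an illustrative stand-in for the producer/consumer pair, not the real audio code):

```python
import asyncio


async def sibling(ticks):
    # Stands in for the consumer: it only runs when the loop gets control.
    for _ in range(100):
        ticks.append(1)
        await asyncio.sleep(0)


async def busy_producer(yield_control):
    ticks = []
    task = asyncio.ensure_future(sibling(ticks))
    for _ in range(100):
        pass  # stands in for blocking work such as queue.get()
        if yield_control:
            await asyncio.sleep(0)  # explicit yield point back to the loop
    seen = len(ticks)
    task.cancel()
    return seen


starved = asyncio.run(busy_producer(False))     # sibling never got to run
interleaved = asyncio.run(busy_producer(True))  # sibling ran between iterations
print(starved, interleaved)  # starved == 0; interleaved > 0
```

Rather than tuning a sleep interval, the usual fix is to make the hand-off itself awaitable: feed an `asyncio.Queue` from the microphone thread via `loop.call_soon_threadsafe`, or replace the blocking call with `await loop.run_in_executor(None, audio_queue.get)`, so the loop regains control exactly while the producer is waiting for data.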
76,942,809
6,503,329
NiceGUI Interactive Image - Does changing source periodically refreshes/resets the content as well
<p>I am using an interactive image to show live-stream video (code from the NiceGUI OpenCV example). I am able to successfully show the live feed using <code>ui.timer</code>. However, I also do some operations with my mouse over the image, like recording mouse clicks. The interactive image holds the <code>content</code> if it's a static image, but for a video, since the image (<code>source</code>) refreshes due to the path changing every 0.1 s, the <code>content</code> refreshes as well. Maybe the element refreshes altogether when <code>set_source</code> is called?</p> <p>How can I change/refresh only the <code>source</code> while still preserving the <code>content</code> property?</p> <pre class="lang-py prettyprint-override"><code>class SVGContent:
    def __init__(self):
        self.content = ''


svgContent = SVGContent()


def mouse_handler(e: MouseEventArguments):
    if e.type == 'click':
        svgContent.content += f'&lt;circle cx=&quot;{e.image_x}&quot; cy=&quot;{e.image_y}&quot; r=&quot;15&quot; fill=&quot;none&quot; stroke=&quot;{color}&quot; stroke-width=&quot;4&quot; /&gt;'


ii = ui.interactive_image(
    on_mouse=mouse_handler,
    events=['click', 'mousedown', 'mouseup'],
    cross=True
)
ui.timer(interval=0.1, callback=lambda: ii.set_source(f'http://localhost:3000/video/stream1?{time.time()}'))
ii.bind_content_from(svgContent, 'content')
</code></pre>
<python><nicegui>
2023-08-21 06:22:38
1
749
Shreyesh Desai
76,942,574
1,492,229
How to fix Memory allocation for BOW in python
<p>The bag-of-words transform does not depend on other records (unlike TF-IDF), so I wonder if there is a way that allows me to compute BOW in batches.</p> <p>Here is my code:</p> <pre><code>def BOW(df):
    CountVec = CountVectorizer()  # to use only bigrams: ngram_range=(2, 2)
    Count_data = CountVec.fit_transform(df)
    cv_dataframe = pd.DataFrame(Count_data.toarray(), columns=CountVec.get_feature_names())
    return cv_dataframe.astype(np.uint8)


dfMethod = BOW(df_reps[&quot;Txt&quot;])
</code></pre> <p>The error I got is:</p> <blockquote> <p>MemoryError: Unable to allocate 21.4 GiB for an array with shape (87148, 32996) and data type int64</p> </blockquote> <p>Has anyone done this before and knows a better way to do it?</p>
<python><memory><nlp>
2023-08-21 05:28:12
2
8,150
asmgx
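The allocation happens at `Count_data.toarray()`: `fit_transform` already returns a memory-efficient scipy sparse matrix, and densifying an 87,148 × 32,996 matrix at int64 is what needs 21.4 GiB. Keeping the result sparse (pandas can wrap a sparse matrix directly) avoids batching altogether. A sketch with a hand-built stand-in for the vectorizer's output:

```python
import numpy as np
import pandas as pd
from scipy import sparse

# Stand-in for CountVectorizer's output: a mostly-zero document-term matrix.
row = np.array([0, 0, 1, 2])
col = np.array([0, 3, 1, 2])
data = np.array([1, 2, 1, 3], dtype=np.uint8)
counts = sparse.csr_matrix((data, (row, col)), shape=(3, 5))

# Wrap it without densifying -- no .toarray() call, no giant allocation.
cv_dataframe = pd.DataFrame.sparse.from_spmatrix(counts)
print(cv_dataframe.shape)  # (3, 5)
```

In the original function that would mean `pd.DataFrame.sparse.from_spmatrix(Count_data, columns=CountVec.get_feature_names_out())` (newer scikit-learn renamed `get_feature_names` to `get_feature_names_out`), with the `astype(np.uint8)` step replaced by passing `dtype=np.uint8` to `CountVectorizer`.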
76,942,351
11,025,602
Determine category of sentence
<p>I have a list of books with a title and an organization, and I would like to get the category of each book. The category list is predefined, and each book should be assigned one of its entries.</p> <p>Book title 1: Agriculturalpolicy and the decisions ofagriculturalproducers as to income and investment<br /> Organization 1: Western Agricultural Economics Association</p> <p>Categories: Agriculture, Engineering, Healthcare</p> <p>Can I find the category of a book using Python?</p> <p>I tried using Transformers from Hugging Face, but I don't have predefined data to train or evaluate with.</p> <p>I am not sure whether an existing solution exists or not.</p>
<python><nlp><huggingface>
2023-08-21 04:13:20
1
788
Yuri R
76,942,334
9,949,963
change hostname in python
<p>I was referring to an older stack overflow post:<a href="https://stackoverflow.com/questions/42010449/how-do-i-change-a-computers-name-with-the-wmi-module-in-python">How do I change a computer&#39;s name with the WMI module in Python</a>. I tried to do the same thing, but upon doing so, I got a return of (5,). This is a tuple, but I was hoping I could find it in the WMI docs by ignoring the tuple and using the actual integer <a href="https://learn.microsoft.com/en-us/windows/win32/wmisdk/wmi-error-constants" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/windows/win32/wmisdk/wmi-error-constants</a> in this section, but could not.</p> <pre><code> context = wmi.WMI() for system in context.Win32_ComputerSystem(): print(system) q = system.Rename('ballcccc', None, None) print(q) </code></pre> <p>I tried this, <a href="https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/rename-method-in-class-win32-computersystem" rel="nofollow noreferrer">the docs here</a> state that you can use a NULL parameter for the USER &amp; PASS arguments. My question is: How can I properly change the hostname of my windows 11 computer in Python? 
Using the answer given, I now get this:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\ballc\AppData\Local\Programs\Python\Python311\Lib\site-packages\wmi.py&quot;, line 1340, in connect obj = GetObject(moniker) ^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\ballc\AppData\Local\Programs\Python\Python311\Lib\site-packages\win32com\client\__init__.py&quot;, line 86, in GetObject return Moniker(Pathname, clsctx) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\ballc\AppData\Local\Programs\Python\Python311\Lib\site-packages\win32com\client\__init__.py&quot;, line 103, in Moniker moniker, i, bindCtx = pythoncom.MkParseDisplayName(Pathname) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ pywintypes.com_error: (-2147217394, 'OLE error 0x8004100e', None, None) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;D:\work\auto.py&quot;, line 49, in &lt;module&gt; o.changeName() File &quot;D:\work\auto.py&quot;, line 41, in changeName context = wmi.WMI(namespace=&quot;Root\CIMV2&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\ballc\AppData\Local\Programs\Python\Python311\Lib\site-packages\wmi.py&quot;, line 1354, in connect handle_com_error() File &quot;C:\Users\ballc\AppData\Local\Programs\Python\Python311\Lib\site-packages\wmi.py&quot;, line 258, in handle_com_error raise klass(com_error=err) wmi.x_wmi: &lt;x_wmi: Unexpected COM Error (-2147217394, 'OLE error 0x8004100e', None, None)&gt; </code></pre>
<python><wmi><windows-11>
2023-08-21 04:07:37
0
381
Noah
76,942,333
17,610,082
How can I handle Big Int with GraphQL?
<p>I have a Django model field with a BigInt type. How can I map this in a GraphQL schema? Is there a <code>BigInt</code> type available in GraphQL, like the built-in <code>Int</code> type?</p> <p>I'm using the <a href="https://github.com/graphql-python/graphene" rel="nofollow noreferrer">graphene</a> library, FYI.</p> <p>My schema:</p> <pre><code>type MyModelObject { complaintId: BigInt createdBy: User! complaintNumber: String! } </code></pre> <p>I get the following warning/error in my IDE:</p> <pre class="lang-bash prettyprint-override"><code>The field type 'BigInt' is not present when resolving type 'MyModelObject' </code></pre>
<python><graphql><graphene-python>
2023-08-21 04:06:49
1
1,253
DilLip_Chowdary
76,942,189
4,001,750
Interpolation between elements of two arrays with common X-axis
<p>I have a NumPy array of size <code>(n,2)</code> that I need to linearly interpolate. Example: consider the <code>(3,2)</code> array below.</p> <pre><code>0.2 0.4 0.3 0.6 0.4 0.8 </code></pre> <p>I want to linearly interpolate between the two columns (column 0 being lower bound values and column 1 being upper bound values) row-by-row. Each row has the same lower x and upper x value (5 and 10 respectively). If I interpolate this for x=7.5 I should get:</p> <pre><code>0.3 0.45 0.6 </code></pre> <p>Using SciPy:</p> <pre><code>from scipy.interpolate import interp1d y = np.array([[.2,.4], [.3, .6], [.4, .8]]) xl = 5 xh = 10 f = interp1d([xl, xh], [y[:,0], y[:,1]]) f(7.5) </code></pre> <p>Error at <code>f = interp1d(...)</code>:</p> <blockquote> <p>ValueError: x and y arrays must be equal in length along interpolation axis.</p> </blockquote> <p>Using NumPy:</p> <pre><code>import numpy as np y = np.array([[.2,.4], [.3, .6], [.4, .8]]) xl = 5 xh = 10 f = np.interp(7.5, [xl, xh], [y[:,0], y[:,1]]) </code></pre> <p>Error:</p> <blockquote> <p>ValueError: object too deep for desired array</p> </blockquote> <p>This needs to be applied to a large matrix. What is the most efficient solution (without using a <code>for</code> loop)?</p>
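Since every row shares the same x-endpoints, the row-wise interpolation reduces to a single weighted average that broadcasts over the whole array with no loop — a minimal sketch of that idea (it does not use `interp1d` at all):

```python
import numpy as np

# Row-wise linear interpolation when every row shares the same x-endpoints.
# With xl <= x <= xh, the result is a plain weighted average of the two
# columns, which broadcasts over the whole (n, 2) array at once.
y = np.array([[0.2, 0.4], [0.3, 0.6], [0.4, 0.8]])
xl, xh, x = 5, 10, 7.5

t = (x - xl) / (xh - xl)                # fraction of the way from xl to xh
result = y[:, 0] + t * (y[:, 1] - y[:, 0])
print(result)  # approximately [0.3, 0.45, 0.6]
```

Because this is pure arithmetic on the array, it scales to a large matrix without any Python-level iteration.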
<python><numpy><scipy>
2023-08-21 03:16:37
1
720
naseefo
76,942,047
1,930,402
Pyspark dataframe collecting top records
<p>I have a large (50M records) PySpark dataframe like this.</p> <p>The book column has the names of books and the owner column has the names of the books' owners. Count indicates the number of copies of each book held by the respective owner. Some copies of a book may be owned by no one; this is indicated by &quot;owner&quot; = None.</p> <p>I want to do this: for every book, find the top 10 owners and their respective counts, ignoring the nulls.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>book</th> <th>owner</th> <th>count</th> </tr> </thead> <tbody> <tr> <td>book1</td> <td>A</td> <td>100</td> </tr> <tr> <td>book1</td> <td>B</td> <td>200</td> </tr> <tr> <td>book1</td> <td>None</td> <td>10</td> </tr> <tr> <td>book2</td> <td>A</td> <td>5</td> </tr> <tr> <td>book2</td> <td>C</td> <td>50</td> </tr> </tbody> </table> </div> <p>This is the code I wrote:</p> <pre><code>window4 = Window.partitionBy('book', 'owner').orderBy(desc('count')) df = df.withColumn('top_owners_details', F.when(F.col(OWNER).isNotNull(), F.struct( F.collect_list(OWNER).over(window4).alias('top_owners'), F.collect_list('COUNT').over(window4).alias('top_counts'))).otherwise(F.lit(None))) </code></pre> <p>This gives me inconsistent results; top_counts includes the count of nulls.</p> <p>I did the same task after removing the null rows, but in the downstream activities this is not possible.</p> <p>What's the best way to do this?</p>
<python><dataframe><pyspark>
2023-08-21 02:25:13
2
1,509
pnv
76,941,993
6,546,694
Why is list pop(0) not an O(1) operation in python?
<p><code>l = [1,2,3,4]</code></p> <p>popping the last element would be an O(1) operation since:</p> <ol> <li>It returns the last element</li> <li>Changes a few fixed attributes like <code>len</code> of the list</li> </ol> <p>Why can't we do the same with pop(0)?</p> <ol> <li>Return the first element</li> <li>Change the pointer of the first element (index 0) to that of the second element (index 1) and change a few fixed attributes?</li> </ol>
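The short answer is that a CPython list stores its item pointers in one contiguous buffer, so `pop(0)` must shift every remaining pointer left by one slot; `collections.deque` is the structure that implements the "just move the head pointer" scheme described above. A small sketch:

```python
from collections import deque

# list.pop(0) is O(n): the remaining n-1 item pointers are shifted left so
# that the buffer still starts at index 0.  CPython keeps it this way so
# that indexing (l[i]) stays a single offset-free pointer lookup.
l = [1, 2, 3, 4]
first_from_list = l.pop(0)

# deque implements exactly the head-pointer idea: popleft() is O(1),
# at the cost of O(n) random access in the middle.
d = deque([1, 2, 3, 4])
first_from_deque = d.popleft()

print(first_from_list, l)         # 1 [2, 3, 4]
print(first_from_deque, list(d))  # 1 [2, 3, 4]
```

So the trade-off is deliberate: lists optimize for indexing, deques for popping at either end.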
<python><arrays><list><time-complexity>
2023-08-21 02:01:47
1
5,871
figs_and_nuts
76,941,883
2,026,105
How to pass start time option to PyAV
<p>I am trying to read an audio file starting at a particular time in seconds. In FFmpeg you would pass <code>-ss x</code> to do this.</p> <p>In this project, I am using PyAv 10. <a href="https://pyav.org/docs/stable/api/_globals.html" rel="nofollow noreferrer">Per the documentation</a> the open function has <code>options</code>, but I can't find anywhere what options can be passed.</p> <p>I assumed they are plain FFmpeg options, but that does not seem to work.</p> <p>So I am trying to figure out a) what options can be passed to the <code>PyAv.open</code> function, but more importantly, how to get the audio starting from a particular point in time.</p>
<python><pyav>
2023-08-21 01:08:14
1
10,081
ETL
76,941,870
16,308,381
ValueError: One input key expected got ['text_one', 'text_two'] in LangChain with memory and multiple inputs
<p>I'm trying to run a chain in LangChain with memory and multiple inputs. The closest error I could find was posted <a href="https://stackoverflow.com/questions/76765757/langchain-valueerror-one-input-key-expected-got-input-chat-history-lines">here</a>, but in that one, they are passing only one input.</p> <p>Here is the setup:</p> <pre class="lang-py prettyprint-override"><code>from langchain.llms import OpenAI from langchain.chains import LLMChain from langchain.prompts import PromptTemplate from langchain.memory import ConversationBufferMemory llm = OpenAI( model=&quot;text-davinci-003&quot;, openai_api_key=environment_values[&quot;OPEN_AI_KEY&quot;], # Used dotenv to store API key temperature=0.9, client=&quot;&quot;, ) memory = ConversationBufferMemory(memory_key=&quot;chat_history&quot;) prompt = PromptTemplate( input_variables=[ &quot;text_one&quot;, &quot;text_two&quot;, &quot;chat_history&quot; ], template=( &quot;&quot;&quot;You are an AI talking to a human. Here is the chat history so far: {chat_history} Here is some more text: {text_one} and here is even more text: {text_two} &quot;&quot;&quot; ) ) chain = LLMChain( llm=llm, prompt=prompt, memory=memory, verbose=False ) </code></pre> <p>When I run</p> <pre class="lang-py prettyprint-override"><code>output = chain.predict( text_one=&quot;Hello&quot;, text_two=&quot;World&quot; ) </code></pre> <p>I get <code>ValueError: One input key expected got ['text_one', 'text_two']</code></p> <p>I've looked at <a href="https://stackoverflow.com/questions/76199653/valueerror-run-not-supported-when-there-is-not-exactly-one-input-key-got-q">this stackoverflow post</a>, which suggests trying:</p> <pre class="lang-py prettyprint-override"><code>output = chain( inputs={ &quot;text_one&quot; : &quot;Hello&quot;, &quot;text_two&quot; : &quot;World&quot; } ) </code></pre> <p>which gives the exact same error. 
In the spirit of trying different things, I've also tried:</p> <pre class="lang-py prettyprint-override"><code>output = chain.predict( # Also tried .run() here inputs={ &quot;text_one&quot; : &quot;Hello&quot;, &quot;text_two&quot; : &quot;World&quot; } ) </code></pre> <p>which gives <code>Missing some input keys: {'text_one', 'text_two'}</code>.</p> <p>I've also looked at <a href="https://github.com/langchain-ai/langchain/issues/8710" rel="noreferrer">this issue on the langchain GitHub</a>, which suggests to do pass the <code>llm</code> into memory, i.e.</p> <pre class="lang-py prettyprint-override"><code># Everything the same except... memory = ConversationBufferMemory(llm=llm, memory_key=&quot;chat_history&quot;) # Note the llm here </code></pre> <p>and I still get the same error. If someone knows a way around this error, please let me know. Thank-you.</p>
<python><langchain>
2023-08-21 01:01:29
2
392
pvasudev16
76,941,776
6,734,243
how to include a data file in a distribution using hatch?
<p>In a setuptools-defined Python library I could choose which files I wanted to include using the following:</p> <pre class="lang-ini prettyprint-override"><code>[tool.setuptools] include-package-data = true </code></pre> <p>For hatch I didn't find any mention of this parameter in the documentation. Is it automatic? If not, what parameter should I use?</p>
<python><setuptools><hatch>
2023-08-21 00:10:06
0
2,670
Pierrick Rambaud
76,941,772
6,734,243
how to include the license file in the wheel with Hatch?
<p>In setuptools I used to do the following:</p> <pre class="lang-ini prettyprint-override"><code>[tool.setuptools] license-files = [&quot;LICENSE.txt&quot;] </code></pre> <p>How can I do the same using hatch?</p>
<python><setuptools><hatch>
2023-08-21 00:07:11
0
2,670
Pierrick Rambaud
76,941,553
839,733
PEP 622 - can match statement be used as an expression?
<p><a href="https://peps.python.org/pep-0622/" rel="noreferrer">PEP 622</a> introduced <code>match</code> statement as an alternative to <code>if-elif-else</code>. However, one thing I can't find in the proposal or in any of the material online is whether the <code>match</code> statement can be used as an expression and not just as a statement.</p> <p>A couple of examples to make it clear:</p> <p>Example 1:</p> <pre><code>def make_point_2d(pt): match pt: case (x, y): return Point2d(x, y) case _: raise TypeError(&quot;not a point we support&quot;) </code></pre> <p>Example 2:</p> <pre><code>match response.status: case 200: do_something(response.data) case 301 | 302: retry(response.location) </code></pre> <p>In the first example, the function returns from inside a <code>case</code> clause, and in the second example, nothing is returned. But I want to be able to do something like the following hypothetical example:</p> <pre><code>spouse = match name: case &quot;John&quot;: &quot;Jane&quot; case &quot;David&quot;: &quot;Alice&quot; print(spouse) </code></pre> <p>But it doesn't compile.</p>
<python><python-3.10><structural-pattern-matching>
2023-08-20 22:24:10
2
25,239
Abhijit Sarkar
76,941,366
3,299,246
How do I dump yaml without evaluating anchor values?
<p>I have a yaml file like this:</p> <pre><code>anchor1: &amp;anchor1 resource_class: small anchor2: &amp;anchor2 hello: world anchor3: &amp;anchor3 hello: world root1: nested1: &lt;&lt;: *anchor1 some_list: - item1: hello: world - *anchor2 - *anchor3 nested2: &lt;&lt;: *anchor1 some_list: - item1: hello: world - *anchor2 - *anchor3 nested3: &lt;&lt;: *anchor1 some_list: - item1: hello: world - *anchor2 - *anchor3 root2: nested1: &lt;&lt;: *anchor1 some_list: - item1: hello: world2 - *anchor2 - *anchor3 ... </code></pre> <p>I want to pull out the value of <code>nested1</code> into a separate file without evaluating all the anchors.</p> <pre><code>from pathlib import Path from ruamel.yaml import YAML yaml = YAML() with open(Path('in_file')) as f: data = yaml.load(f) with open(Path('out_file'), 'w') as f: yaml.dump(data['root1']['nested1'], f) </code></pre> <p>The output I want when dumping is</p> <pre><code>&lt;&lt;: *anchor1 some_list: - item1: hello: world - *anchor2 - *anchor3 </code></pre> <p>I understand it is invalid yaml, as the anchor definitions are not present.</p> <p>The main problem I run into is that the moment I grab a value from the root config, it has already been processed.</p> <p>For example, if I load and dump my <code>in_file</code>, it works as expected, but if I take the data and get a value out, <code>data['root1']</code>, it has already processed the anchors.</p> <p>I suspect that's because the anchor definitions are not part of <code>data['root1']</code>, but I'm not sure how to work around that.</p>
<python><ruamel.yaml>
2023-08-20 21:15:35
1
421
doodla
76,941,251
20,785,822
Debugger for python in Visual Studio 2022 doesn't work
<p>I usually use Visual Studio Community 2022 as my IDE for programming (usually in C++). Once I had some little automation tasks that were way faster and easier to do in python than in C++ so I installed the Python Development workload from the Visual Studio Installer. However I could not run my code from within Visual Studio somehow and had to run it trough the console by hand. Since this was a one time thing I never really tried to fix it.</p> <p>Now I am back to coding a bit in python and had the same issue again. After some research I found this <a href="https://stackoverflow.com/questions/71342354/failed-to-launch-debug-adapter-in-visual-studio-2022">.net question</a> which was similar to my problem and <a href="https://stackoverflow.com/a/75248419/20785822">one of the answers</a> was in fact for python and helped me solve this. I followed this <a href="https://learn.microsoft.com/en-us/visualstudio/python/debugging-symbols-for-mixed-mode-c-cpp-python?view=vs-2022" rel="nofollow noreferrer">msdn page</a> to get it work. I finally could not only code in Visual Studio but also run my code through it.</p> <p>But now I tried to actually use the debugger (I set some breakpoints) and noticed how execution didn't stop on them. So I tried &quot;Step Into&quot; via F11 and got some error again, similar to <a href="https://stackoverflow.com/questions/48962685/not-found-you-need-to-find-to-view-the-source-for-the-current-call-stack-fra">this one</a>.</p> <hr /> <p><strong>Source Not Found</strong><br /> <em>python.c not found</em><br /> You need to find python.c to view the source for the current call stack frame</p> <p>Options:</p> <ul> <li>Locate pyhton.c manually</li> <li>View disassembly</li> </ul> <p>More information<br /> <em>Source search information</em></p> <pre class="lang-none prettyprint-override"><code>Locating source for 'D:\a\1\s\Programs\python.c'. 
Checksum: SHA256 {3d e 15 d4 9a 77 5a 8d 50 af a6 b5 ed d4 4 ba d2 d7 9a f7 92 40 5c 76 84 70 7b c5 9 80 63 85} The file 'D:\a\1\s\Programs\python.c' does not exist. Looking in script documents for 'D:\a\1\s\Programs\python.c'... Looking in the Edit-and-Continue directory 'D:\Programs\WebAutomationTest\enc_temp_folder\'... The file with the matching checksum was not found in the Edit-and-Continue directory. Looking in the projects for 'D:\a\1\s\Programs\python.c'. The file was not found in a project. Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include\cvt\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include\msclr\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include\sys\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\crt\src\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\crt\src\x64\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\crt\src\arm\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\crt\src\concrt\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\crt\src\i386\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\crt\src\linkopts\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\crt\src\stl\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\crt\src\vccorlib\'... 
Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\crt\src\vcruntime\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\atlmfc\src\mfc\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\atlmfc\src\atl\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\atlmfc\include\'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\atlmfc\src\mfc'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\atlmfc\src\mfcm'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\atlmfc\src\atl'... Looking in directory ''... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\crt\src'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\src'... Looking in directory 'C:\Program Files (x86)\Windows Kits\10\Source\10.0.22000.0\ucrt'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\include'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\atlmfc\include'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include'... Looking in directory 'C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\ucrt'... Looking in directory 'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\UnitTest\include'... Looking in directory 'C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\um'... Looking in directory 'C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\shared'... 
Looking in directory 'C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\winrt'... Looking in directory 'C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\cppwinrt'... Looking in directory 'C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\Include\um'... Searching for documents embedded in the symbol file. An embedded document was not found. The debug source files settings for the active solution indicate that the debugger will not ask the user to find the file: D:\a\1\s\Programs\python.c. The debugger could not locate the source file 'D:\a\1\s\Programs\python.c'. </code></pre>
<python><visual-studio><debugging>
2023-08-20 20:40:00
1
1,777
Joel
76,941,157
9,479
Type hints for decorator to make argument optional
<p>I have a number of methods that take a &quot;connection pool&quot;, which can be created once and reused throughout the code base. However, in some cases I am fine with creating a new connection pool on-the-fly (such as for short-lived programs).</p> <p>For the purpose of this example, let's call this <code>ConnPool</code>. My methods may look something like this:</p> <pre class="lang-py prettyprint-override"><code>from typing import List async def list_jobs(namespaces: List[str], *, pool: ConnPool) -&gt; List[str]: ... </code></pre> <p>I would like to create a <em>correctly typed</em> decorator for these methods that turn the pool argument into an optional <code>pool: Optional[ConnPool] = None</code>. If given, use it, if not given create a new pool on-the-fly:</p> <pre class="lang-py prettyprint-override"><code>def handle_pool(f): async def wrapper(*args, pool: Optional[ConnPool] = None, **kwargs): if pool: return await f(*args, pool=pool, **kwargs) else: return await f(*args, pool=ConnPool(), **kwargs) </code></pre> <p>The hard part is not the implementation, but getting the type hints correct. I tried a solution with <code>ParamSpec</code>, but it does not handle additional keyword arguments correctly.</p> <pre class="lang-py prettyprint-override"><code># No errors from mypy or pylance from typing import Awaitable, Callable, Optional, Protocol, TypeVar from typing_extensions import Concatenate, ParamSpec P = ParamSpec(&quot;P&quot;) R = TypeVar(&quot;R&quot;, covariant=True) class ConnPool: pass class OptionalPool(Protocol[P, R]): def __call__(self, *args: P.args, pool: Optional[ConnPool] = None, **kwargs: P.kwargs) -&gt; R: ... 
def make_optional(f: Callable[Concatenate[ConnPool, P], Awaitable[R]]) -&gt; OptionalPool[P, Awaitable[R]]: async def wrapper(*args: P.args, pool: Optional[ConnPool] = None, **kwargs: P.kwargs) -&gt; R: if pool: return await f(pool, *args, **kwargs) else: return await f(ConnPool(), *args, **kwargs) return wrapper @make_optional async def test_function(pool: ConnPool, first: str) -&gt; int: print(f&quot;first: {first}, pool: {repr(pool)}&quot;) return 2 async def main() -&gt; None: # Type of &quot;test_function&quot; is &quot;OptionalPool[(first: str), Awaitable[int]]&quot; # Argument hinting (VS Code) shows &quot;test_function(*args: P.args, pool: Optional[ConnPool] = None, **kwargs: P.kwargs) -&gt; Awaitable[int]&quot; test = await test_function(&quot;first&quot;) print(f&quot;Returned {test}&quot;) return None </code></pre> <p>It clearly detects the arguments I want, but it does not nicely unpack the arguments and keyword arguments. I know this is <a href="https://peps.python.org/pep-0612/#concatenating-keyword-parameters" rel="nofollow noreferrer">specifically called out as not supported</a>, so I wonder what other options I have (if any).</p> <p>TL;DR: How do you implement a properly typed decorator to turn a known (hardcoded keyword) argument from required to optional with a default value of <code>None</code>, while retaining all other args + kwargs and still get a nice function signature?</p>
<python><python-decorators><python-typing>
2023-08-20 20:13:05
0
4,884
Christian P.
76,940,929
378,622
Getting nullspace as clean integers in scipy or numpy
<p>I'm calculating the nullspace of a matrix with scipy:</p> <pre><code>from scipy.linalg import null_space A = np.array([[1,2],[3,6]]) print(null_space(A)) </code></pre> <p>This outputs [[-0.89442719] [ 0.4472136 ]], but there is a much cleaner integer solution: [[-2, 1]]. How could I get back a clean integer solution when it exists?</p>
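One pragmatic post-processing step (a sketch, not scipy functionality — `integerize` and the `limit_denominator(1000)` tolerance are my own choices) is to snap each float in the returned basis vector to a nearby rational, put everything over a common denominator, and divide out the gcd:

```python
from fractions import Fraction
from math import gcd, lcm  # variadic gcd/lcm need Python 3.9+

def integerize(vec, max_den=1000):
    """Scale a float vector to the smallest integer vector in the same direction.

    Each entry is snapped to a nearby rational (max_den bounds the denominator),
    all entries are put over a common denominator, and the shared factor is
    divided out.  Only meaningful when a clean rational solution actually exists.
    """
    fracs = [Fraction(v).limit_denominator(max_den) for v in vec]
    common = lcm(*(f.denominator for f in fracs))
    ints = [int(f * common) for f in fracs]
    g = gcd(*ints)
    return [i // g for i in ints]

# The scipy null_space output from the question:
print(integerize([-0.89442719, 0.4472136]))  # [-2, 1]
```

If the entries are irrational (which is common for nullspaces of arbitrary real matrices), no integer scaling exists and the snapping step will simply produce a poor approximation, so the tolerance may need tuning per problem.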
<python><numpy><scipy><linear-algebra>
2023-08-20 19:03:54
1
26,851
Ben G
76,940,856
11,069,741
Standardize Color Emoji using pandas
<p>I am standardizing the color of emojis. At the moment this code counts the most frequent emojis, but it does not group emojis of the same type.</p> <pre><code>import emoji import pandas as pd # Create an example DataFrame data = {'id': [1, 2, 3], 'renderedContent': ['Hello 😃👍🏻', 'How are you? 😀👍🏼', '👍']} df5 = pd.DataFrame(data) text = df5['renderedContent'].str.cat(sep='\n') out = (pd.DataFrame(emoji.emoji_list(text)).value_counts('emoji') .rename_axis('Smiley').rename('Count').reset_index() .assign(Type=lambda x: x['Smiley'].apply(emoji.demojize))) most_used = out.head(10) most_used </code></pre> <p>This is the output:</p> <p><a href="https://i.sstatic.net/ioKC2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ioKC2.png" alt="enter image description here" /></a></p> <p>The thumbs-up emoji count should be 3 instead of 1, and the smile 2 instead of 1.</p> <p>How can I unify them? I am using the latest version of the emoji library but I have not been able to standardize it.</p>
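For reference, 👍🏻 and 👍🏼 are the base emoji followed by a Fitzpatrick skin-tone modifier (U+1F3FB through U+1F3FF), so they count as distinct strings. A sketch of a normalizer that strips the modifier before counting (the function name is mine):

```python
# Skin-tone variants are the base emoji plus a Fitzpatrick modifier
# (U+1F3FB .. U+1F3FF).  Removing the modifier collapses all variants
# onto the base emoji so value_counts() can group them together.
def strip_skin_tone(e):
    return "".join(ch for ch in e if not 0x1F3FB <= ord(ch) <= 0x1F3FF)

print(strip_skin_tone("\U0001F44D\U0001F3FB") == "\U0001F44D")  # True: 👍🏻 -> 👍
```

Applying `strip_skin_tone` to the `emoji` column before `value_counts` would merge the thumbs-up variants. Note that 😃 and 😀 are genuinely different codepoints, not tone variants, so merging those would additionally need an explicit mapping of "visually similar" emojis.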
<python><pandas><emoji>
2023-08-20 18:38:33
2
457
Sebastián
76,940,830
6,929,343
Python SQLite3 vacuum with and without reseting primary key
<p>There are three SQLite3 tables (<code>Location</code>, <code>Music</code> and <code>History</code>) that need to be vacuumed.</p> <p>For the first two tables, the Primary Key <strong>cannot</strong> change, and for the last table (<code>History</code>), the Primary key <strong>should</strong> change.</p> <p><a href="https://www.sqlitetutorial.net/sqlite-vacuum/" rel="nofollow noreferrer" title="if you use unaliased rowid, the VACUUM command will reset the rowid values">The SQLite VACUUM command 🔗</a> states something about <em>&quot;unaliased&quot;</em>, which is confusing.</p> <p>Can someone state the syntax to be used?</p> <h2>TL;DR</h2> <h3>Location Table (leave Primary key intact)</h3> <pre class="lang-py prettyprint-override"><code>con.execute( &quot;CREATE TABLE IF NOT EXISTS Location(Id INTEGER PRIMARY KEY, &quot; + &quot;Code TEXT, Name TEXT, ModifyTime FLOAT, ImagePath TEXT, &quot; + &quot;MountPoint TEXT, TopDir TEXT, HostName TEXT, &quot; + &quot;HostWakeupCmd TEXT, HostTestCmd TEXT, HostTestRepeat INT, &quot; + &quot;HostMountCmd TEXT, HostTouchCmd TEXT, HostTouchMinutes INT, &quot; + &quot;Comments TEXT)&quot;) con.execute(&quot;CREATE UNIQUE INDEX IF NOT EXISTS LocationCodeIndex ON &quot; + &quot;Location(Code)&quot;) </code></pre> <h3>Music Table (leave Primary key intact)</h3> <pre class="lang-py prettyprint-override"><code>con.execute( &quot;create table IF NOT EXISTS Music(Id INTEGER PRIMARY KEY, &quot; + &quot;OsFileName TEXT, OsAccessTime FLOAT, OsModifyTime FLOAT, &quot; + &quot;OsChangeTime FLOAT, OsFileSize INT, &quot; + &quot;ffMajor TEXT, ffMinor TEXT, ffCompatible TEXT, &quot; + &quot;Title TEXT, Artist TEXT, Album TEXT, Compilation TEXT, &quot; + &quot;AlbumArtist TEXT, AlbumDate TEXT, FirstDate TEXT, &quot; + &quot;CreationTime TEXT, DiscNumber TEXT, TrackNumber TEXT, &quot; + &quot;Rating TEXT, Genre TEXT, Composer TEXT, Comment TEXT, &quot; + &quot;Hyperlink TEXT, Duration TEXT, Seconds FLOAT, &quot; + &quot;GaplessPlayback TEXT, 
PlayCount INT, LastPlayTime FLOAT, &quot; + &quot;LyricsScore BLOB, LyricsTimeIndex TEXT)&quot;) con.execute(&quot;CREATE UNIQUE INDEX IF NOT EXISTS OsFileNameIndex ON &quot; + &quot;Music(OsFileName)&quot;) </code></pre> <h3>History Table (primary key needs to be reset)</h3> <pre class="lang-py prettyprint-override"><code>con.execute( &quot;create table IF NOT EXISTS History(Id INTEGER PRIMARY KEY, &quot; + &quot;Time FLOAT, MusicId INTEGER, User TEXT, Type TEXT, &quot; + &quot;Action TEXT, SourceMaster TEXT, SourceDetail TEXT, &quot; + &quot;Target TEXT, Size INT, Count INT, Seconds FLOAT, &quot; + &quot;Comments TEXT, Timestamp FLOAT)&quot;) con.execute(&quot;CREATE INDEX IF NOT EXISTS MusicIdIndex ON &quot; + &quot;History(MusicId)&quot;) con.execute(&quot;CREATE UNIQUE INDEX IF NOT EXISTS TimeIndex ON &quot; + &quot;History(Timestamp)&quot;) con.execute(&quot;CREATE INDEX IF NOT EXISTS TypeActionIndex ON &quot; + &quot;History(Type, Action)&quot;) </code></pre>
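A note on the "unaliased" wording: all three tables declare <code>Id INTEGER PRIMARY KEY</code>, which aliases the rowid, and VACUUM never renumbers aliased rowids — so a plain <code>VACUUM</code> should leave every Id intact; renumbering only happens for tables with no such column. A small demonstration with a toy History-like table (the table here is a cut-down stand-in, not the real schema):

```python
import os
import sqlite3
import tempfile

# A toy History-like table whose Id aliases the rowid.
tmpdir = tempfile.mkdtemp()
con = sqlite3.connect(os.path.join(tmpdir, "demo.db"))
con.execute("CREATE TABLE History(Id INTEGER PRIMARY KEY, Comments TEXT)")
con.executemany("INSERT INTO History(Comments) VALUES (?)",
                [("a",), ("b",), ("c",)])
con.execute("DELETE FROM History WHERE Id = 2")
con.commit()

con.execute("VACUUM")  # must run outside a transaction, hence the commit above
ids = [row[0] for row in con.execute("SELECT Id FROM History ORDER BY Id")]
print(ids)  # [1, 3] -- the gap stays; aliased rowids are never renumbered
con.close()
```

The flip side is that to actually reset the History keys you would have to rebuild the table (create a new table, `INSERT ... SELECT` everything except the Id column, drop the old table and rename) — VACUUM alone will not renumber a column declared `INTEGER PRIMARY KEY`.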
<python><sqlite><vacuum>
2023-08-20 18:31:30
0
2,005
WinEunuuchs2Unix
76,940,340
3,047,729
PyScript can't find the element id
<p>I have written the following code, which is not very different from an HTML file. It is technically a Quarto file, <code>.qmd</code>, but it renders to an HTML file and I don't suspect this has to do with the issue that I'm encountering.</p> <pre><code>--- title: &quot;The New Free Fall Hypothesis&quot; format: html: include-in-header: - header.html --- &lt;py-script src=&quot;./acc_main.py&quot;&gt;&lt;/py-script&gt; &lt;input type=&quot;text&quot; id=&quot;3dinput&quot;&gt;10,20,30&lt;/input&gt; &lt;button id=&quot;3dplotbutton&quot; py-click=&quot;show_plt()&quot; class=&quot;py-button&quot;&gt;Set camera&lt;/button&gt; &lt;div id=&quot;3dplot&quot;&gt;&lt;/div&gt; </code></pre> <p>Also in the Quarto project's <code>_quarto.yml</code> file I've added every <code>.py</code> file as a resource.</p> <p>In <code>header.html</code> I've loaded all the necessary stuff like PyScript and matplotlib.</p> <pre><code>&lt;link rel=&quot;stylesheet&quot; href=&quot;https://pyscript.net/latest/pyscript.css&quot; /&gt; &lt;script defer src=&quot;https://pyscript.net/latest/pyscript.js&quot;&gt;&lt;/script&gt; &lt;py-config&gt; packages = [&quot;matplotlib&quot;] &lt;/py-config&gt; &lt;script src=&quot;https://polyfill.io/v3/polyfill.min.js?features=es6&quot;&gt;&lt;/script&gt; </code></pre> <p>And then in my <code>acc_main.py</code> file I wrote this:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import js fig = plt.figure() ax = fig.add_subplot() x_data = np.random.randint(0,100,(500,)) y_data = np.random.randint(0,100,(500,)) z_data = np.random.randint(0,100,(500,)) ax.scatter(x_data, y_data, z_data) def show_plt(): display(fig, target=&quot;3dplot&quot;, append=False) </code></pre> <p>When I render the page and click the button, I get the error:</p> <pre><code>pyodide.ffi.JsException: SyntaxError: Failed to execute 'querySelector' on 'Document': '#3dplot' is not a valid selector. 
</code></pre> <p>So essentially it looks like everything is working up to a pretty strange point for it to break. It's loading all the libraries, it's running all the code, it seems to even understand the document object from <code>js</code>. It seems to just not be able to find the object with id <code>3dplot</code>.</p> <p>It has successfully found document ids in other settings that didn't involve a button. So I kind of guess that the function is being called in some context where it can't see the HTML objects. But then I look at this example which is working:</p> <p><a href="https://pyscript.com/@adec3dd4-c366-46d3-9d45-84d3b0996a43/554f713f-bc03-410c-9559-d94c7a3626a3/latest" rel="nofollow noreferrer">https://pyscript.com/@adec3dd4-c366-46d3-9d45-84d3b0996a43/554f713f-bc03-410c-9559-d94c7a3626a3/latest</a></p> <p>and it's accessing document objects in basically the same way as I am. So then I'm really confused about the source of the error.</p> <p>It also looks like perhaps that <code>#</code> is not just from rendering the error but perhaps it's being inserted somewhere along the way and causing the error. But I can't imagine why that would be happening.</p> <hr /> <p>[Edit, further thoughts:]</p> <p>Since there is some possibility that whatever Quarto is doing to render the page is causing the issue, it occurred to me that I could render the page and then just look at the rendered HTML. I'll post it if anyone thinks it's helpful -- but predictably, there's a ton of irrelevant code there.</p> <p>But what I think is the important thing that I observe from the HTML is that there is a div element with id <code>3dplot</code>. So I think this confirms that Quarto isn't the source of the problem.</p>
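One detail worth checking, independent of Quarto or PyScript internals: CSS identifiers may not begin with a digit, so `document.querySelector('#3dplot')` is rejected even where `getElementById('3dplot')` would succeed — which matches the error text exactly. A simplified sketch of that rule (the regex only approximates the CSS identifier grammar):

```python
import re

# Approximation of the CSS identifier grammar: an id selector like "#3dplot"
# is invalid because the identifier starts with a digit, while "#plot3d" is fine.
def is_valid_css_id(name):
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_-]*", name) is not None

print(is_valid_css_id("3dplot"))  # False -> querySelector('#3dplot') raises
print(is_valid_css_id("plot3d"))  # True
```

If this is the cause, renaming the div to something like `plot3d` (and matching it in `display(..., target=...)` and the other ids such as `3dinput` and `3dplotbutton`) would be a plausible fix to try.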
<javascript><python><quarto><pyscript>
2023-08-20 16:26:42
1
4,013
Addem
76,940,329
33,404
Why does SQLAlchemy return a new rownum but not insert my new row into MySQL?
<p>I am debugging a strange phenomenon arising from this bit of Python code:</p> <pre class="lang-py prettyprint-override"><code> def insert_book(name): with engine.connect() as connection: insert_statement = sqlalchemy.text( &quot;&quot;&quot; INSERT INTO books (title_id) SELECT title_id FROM titles WHERE name = :name; &quot;&quot;&quot; ).bindparams(name=name) result = connection.execute(insert_statement) return result.lastrowid </code></pre> <p>The MySQL database tables that I'm working with are structured like so:</p> <pre class="lang-sql prettyprint-override"><code>CREATE TABLE `titles` ( `title_id` int NOT NULL AUTO_INCREMENT, `name` varchar(36), PRIMARY KEY (`title_id`), UNIQUE KEY `name` (`name`) ); CREATE TABLE `books` ( `book_id` int NOT NULL AUTO_INCREMENT, `title_id` int DEFAULT NULL, PRIMARY KEY (`book_id`), KEY `client_id` (`title_id`), CONSTRAINT `titles_fk` FOREIGN KEY (`title_id`) REFERENCES `titles` (`title_id`) ); </code></pre> <p>Both tables have several rows already inserted into them.</p> <p>When I run the Python code (with a <code>name</code> taken from the <code>titles</code> table) inside a Flask app and point SQLAlchemy to my MySQL DB, two things happen:</p> <ol> <li><code>result.lastrowid</code> contains a new auto-incremented int value.</li> <li>A new row is <strong>not</strong> inserted into the table.</li> </ol> <p>As I repeat this, the returned int value is incremented properly but no rows are created.</p> <p><strong>My question is:</strong> Why is this happening and how can I fix it?</p> <p>Here are the main packages used:</p> <ul> <li>Python 3.9.13</li> <li>Flask==2.2.2</li> <li>PyMySQL==1.1.0</li> <li>SQLAlchemy==2.0.19</li> </ul> <p><strong>PLEASE:</strong> Kindly refrain from pointing in the direction of other, admittedly better, SQLAlchemy APIs. I would like to understand why this specific bit of code is misbehaving. 
Thank you.</p> <p><strong>UPDATE</strong>: I've examined the logs for my cloud instance of MySQL and I'm seeing errors such as these which correspond to failed inserts:</p> <blockquote> <p>Aborted connection ... to db: '...' user: '...' host: '...' (Got an error reading communication packets).&quot;</p> </blockquote>
<python><mysql><sqlalchemy><pymysql>
2023-08-20 16:24:24
1
16,911
urig
76,940,304
9,236,505
Changing text appearance of a part of a xticks string
<pre><code>import pandas as pd import matplotlib.pyplot as plt def xtick_labels(time_range): labels = [ i.strftime(&quot;%H:%M\n%Y-%m-%d&quot;) if (i.strftime(&quot;%H:%M:%S&quot;) == &quot;00:00:00&quot;) else i.strftime(&quot;%H:%M&quot;) for i in time_range ] return labels fig, ax = plt.subplots(figsize=(20, 4)) time_range = pd.date_range('01.01.2020 00:00:00', '01.03.2020 01:00:00', freq='2H') ax.plot(time_range, len(time_range)*[1]) labels = xtick_labels(time_range) ax.set_xticks(time_range, labels); </code></pre> <p><a href="https://i.sstatic.net/DZJiz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DZJiz.png" alt="enter image description here" /></a></p> <p>Is it possible to change the appearance of only a part of the dates in those xticks, e.g. changing the colour or using a different size, rather than changing the entire xtick?</p> <p>I know that changing the entire xtick is possible with:</p> <pre><code>ax.get_xticklabels()[-1].set_fontsize(14) ax.get_xticklabels()[-1].set_color('red') </code></pre> <p><a href="https://i.sstatic.net/3XreI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3XreI.png" alt="enter image description here" /></a></p>
<python><python-3.x><matplotlib>
2023-08-20 16:17:35
1
336
Paul
76,940,273
211,457
error during pip install -r requirements.txt of threestudio
<p>I'm failing on this line (<code>pip install git+https://github.com/KAIR-BAIR/nerfacc.git@v0.5.2</code>) using Ubuntu for Windows:</p> <pre class="lang-none prettyprint-override"><code>andy@andys-pc:/threestudio$ pip install git+https://github.com/KAIR-BAIR/nerfacc.git@v0.5.2 Defaulting to user installation because normal site-packages is not writeable Collecting git+https://github.com/KAIR-BAIR/nerfacc.git@v0.5.2 Cloning https://github.com/KAIR-BAIR/nerfacc.git (to revision v0.5.2) to /tmp/pip-req-build-vu8gv1bc Running command git clone --filter=blob:none --quiet https://github.com/KAIR-BAIR/nerfacc.git /tmp/pip-req-build-vu8gv1bc Running command git checkout -q d84cdf3afd7dcfc42150e0f0506db58a5ce62812 Resolved https://github.com/KAIR-BAIR/nerfacc.git to commit d84cdf3afd7dcfc42150e0f0506db58a5ce62812 Running command git submodule update --init --recursive -q Preparing metadata (setup.py) ... done Requirement already satisfied: rich&gt;=12 in /home/andy/.local/lib/python3.10/site-packages (from nerfacc==0.5.2) (13.5.2) Requirement already satisfied: torch in /home/andy/.local/lib/python3.10/site-packages (from nerfacc==0.5.2) (1.12.1+cu113) Requirement already satisfied: pygments&lt;3.0.0,&gt;=2.13.0 in /home/andy/.local/lib/python3.10/site-packages (from rich&gt;=12-&gt;nerfacc==0.5.2) (2.16.1) Requirement already satisfied: markdown-it-py&gt;=2.2.0 in /home/andy/.local/lib/python3.10/site-packages (from rich&gt;=12-&gt;nerfacc==0.5.2) (3.0.0) Requirement already satisfied: typing-extensions in /home/andy/.local/lib/python3.10/site-packages (from torch-&gt;nerfacc==0.5.2) (4.7.1) Requirement already satisfied: mdurl=0.1 in /home/andy/.local/lib/python3.10/site-packages (from markdown-it-py&gt;=2.2.0-&gt;rich&gt;=12-&gt;nerfacc==0.5.2) (0.1.2) Building wheels for collected packages: nerfacc Building wheel for nerfacc (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. 
│ exit code: 1 ╰─&gt; [540 lines of output] running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.10 creating build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/pdf.py -&gt; build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/grid.py -&gt; build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/pack.py -&gt; build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/version.py -&gt; build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/cameras2.py -&gt; build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/init.py -&gt; build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/cameras.py -&gt; build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/volrend.py -&gt; build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/scan.py -&gt; build/lib.linux-x86_64-3.10/nerfacc copying nerfacc/data_specs.py -&gt; build/lib.linux-x86_64-3.10/nerfacc creating build/lib.linux-x86_64-3.10/nerfacc/estimators copying nerfacc/estimators/occ_grid.py -&gt; build/lib.linux-x86_64-3.10/nerfacc/estimators copying nerfacc/estimators/init.py -&gt; build/lib.linux-x86_64-3.10/nerfacc/estimators copying nerfacc/estimators/prop_net.py -&gt; build/lib.linux-x86_64-3.10/nerfacc/estimators copying nerfacc/estimators/base.py -&gt; build/lib.linux-x86_64-3.10/nerfacc/estimators creating build/lib.linux-x86_64-3.10/nerfacc/cuda copying nerfacc/cuda/init.py -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda copying nerfacc/cuda/backend.py -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda running egg_info creating nerfacc.egg-info writing nerfacc.egg-info/PKG-INFO writing dependency_links to nerfacc.egg-info/dependency_links.txt writing requirements to nerfacc.egg-info/requires.txt writing top-level names to nerfacc.egg-info/top_level.txt writing manifest file 'nerfacc.egg-info/SOURCES.txt' reading manifest file 'nerfacc.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'nerfacc/cuda/csrc/include/' warning: no files found matching 
'nerfacc/_cuda/csrc/' adding license file 'LICENSE' writing manifest file 'nerfacc.egg-info/SOURCES.txt' creating build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc copying nerfacc/cuda/csrc/camera.cu -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc copying nerfacc/cuda/csrc/grid.cu -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc copying nerfacc/cuda/csrc/nerfacc.cpp -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc copying nerfacc/cuda/csrc/pdf.cu -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc copying nerfacc/cuda/csrc/scan.cu -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc creating build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc/include copying nerfacc/cuda/csrc/include/data_spec.hpp -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc/include copying nerfacc/cuda/csrc/include/data_spec_packed.cuh -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc/include copying nerfacc/cuda/csrc/include/utils_camera.cuh -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc/include copying nerfacc/cuda/csrc/include/utils_contraction.cuh -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc/include copying nerfacc/cuda/csrc/include/utils_cuda.cuh -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc/include copying nerfacc/cuda/csrc/include/utils_grid.cuh -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc/include copying nerfacc/cuda/csrc/include/utils_math.cuh -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc/include copying nerfacc/cuda/csrc/include/utils_scan.cuh -&gt; build/lib.linux-x86_64-3.10/nerfacc/cuda/csrc/include running build_ext /home/andy/.local/lib/python3.10/site-packages/torch/utils/cpp_extension.py:813: UserWarning: The detected CUDA version (11.5) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem. 
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda)) /home/andy/.local/lib/python3.10/site-packages/torch/utils/cpp_extension.py:820: UserWarning: There are no x86_64-linux-gnu-g++ version bounds defined for CUDA version 11.5 warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}') building 'nerfacc.csrc' extension creating build/temp.linux-x86_64-3.10 creating build/temp.linux-x86_64-3.10/nerfacc creating build/temp.linux-x86_64-3.10/nerfacc/cuda creating build/temp.linux-x86_64-3.10/nerfacc/cuda/csrc /usr/bin/nvcc -Inerfacc/cuda/csrc/include -I/home/andy/.local/lib/python3.10/site-packages/torch/include -I/home/andy/.local/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/andy/.local/lib/python3.10/site-packages/torch/include/TH -I/home/andy/.local/lib/python3.10/site-packages/torch/include/THC -I/usr/include/python3.10 -c nerfacc/cuda/csrc/camera.cu -o build/temp.linux-x86_64-3.10/nerfacc/cuda/csrc/camera.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=&quot;_gcc&quot; -DPYBIND11_STDLIB=&quot;_libstdcpp&quot; -DPYBIND11_BUILD_ABI=&quot;_cxxabi1011&quot; -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14 /usr/include/c++/11/type_traits(1406): error: type name is not allowed /usr/include/c++/11/type_traits(1406): error: type name is not allowed /usr/include/c++/11/type_traits(1406): error: identifier &quot;__is_same&quot; is undefined </code></pre>
<python><pip>
2023-08-20 16:08:24
0
1,875
Andy
76,940,159
18,981,385
Python Delta_Sharing SSL certificate verify failed: self signed cert
<p>I've not been able to find an answer that fixes this SSL error for delta_sharing:</p> <pre><code>import delta_sharing import pandas as pd delta_sharing.SharingClient(&quot;&lt;PersonalTokenHere&gt;&quot;).list_all_tables() </code></pre> <p>This takes several minutes and then complains: Max retries exceeded with URL: ... (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)')))</p> <p>I have no problem connecting to this data source using PowerBI or RStudio (delta.sharing package), so the issue seems to be specific to Python.</p> <p>Does anyone know how to fix this for delta_sharing?</p> <p>I have tried importing requests and setting <code>session.verify = False</code>, which was suggested for HTTP requests, but it doesn't help.</p> <p>There is a delta-io example access token at <a href="https://github.com/delta-io/delta-sharing" rel="nofollow noreferrer">https://github.com/delta-io/delta-sharing</a> (Accessing Shared Data section in the documentation); this gives me the same error when I try to connect.</p>
<python><ssl><databricks><delta-sharing>
2023-08-20 15:34:46
1
678
alexrai93
76,939,824
125,244
assign maximum values for rows in several dataframes to a column
<p>I have two dataframes called dfOne and dfTwo that have the same number of rows. dfOne contains several columns including colA, colB and colMax. dfTwo contains several columns including colH.</p> <p>For each row I want to assign the maximum of colA, colB and colH to colMax.</p> <pre><code>dfOne.colMax = dfOne[['colA', 'colB']].min(axis=1) </code></pre> <p>(Remark: is there a syntax that uses dfOne.colA and dfOne.colB instead of dfOne[['colA', 'colB']] ?)</p> <p>seems to be a partial solution.</p> <p>But I can't combine that with the minimum of dfTwo.colH and assign it to dfOne.colMax.</p> <p>I tried variations of</p> <pre><code> dfOne.colMax = max(dfTwo.colH, dfOne[['colA', 'colB']].min(axis=1)) </code></pre> <p>and using numpy.maximum as well after reading several other posts. But none of those posts deals with two dataframes and I can't find how to do that.</p> <p>If I use</p> <pre><code>dfOne.colMax = dfTwo.colH dfOne.colMax = max(dfTwo.colH, dfOne[['colA', 'colB', 'colMax']].min(axis=1)) </code></pre> <p>I get results but I suppose it can be done better.</p> <p>How should this be done?</p>
<python><pandas><dataframe>
2023-08-20 14:16:35
2
1,110
SoftwareTester
76,939,779
4,898,202
How do I batch get message headers in python using Gmail API?
<p>I have looked at all the answers I can find and the closest one only provides four lines of code and I can't get it working within my larger script.</p> <p>I want to:</p> <ol> <li>download the complete list of <strong>messages IDs</strong> from my <strong>INBOX</strong> <em>(Standard Gmail label)</em></li> <li>for each <strong>message ID</strong> <em>get selected headers</em> only in <em>batches of 500</em> messages</li> </ol> <p>I can't for life of me get the syntax right as there are no complete examples that I can find, only snippets, and chatGPT keeps getting it wrong...and I can't work out what I need to change when looking at all the debug variables.</p> <p><em>(Running from VSCode in Win11).</em></p> <p>Here is a working script (can run for days without failing, but does fail every now and then) - however, <em>it only does one message at a time</em>...this is no good when my INBOX has 150000+ messages:</p> <hr /> <p>!!! WARNING !!! Before you run this (if you try it), you should know this will move messages out of your INBOX and into new sub folders based upon the sender domain in reverse, one level per domain part (why no one has ever built this function into any email client is totally beyond me...it seems so logical...)</p> <hr /> <pre class="lang-py prettyprint-override"><code>from __future__ import print_function import os.path import re import json import uuid from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.errors import HttpError # SCOPES define the Gmail API permissions - delete token.json when changing SCOPES = ['https://www.googleapis.com/auth/gmail.modify', 'https://www.googleapis.com/auth/gmail.labels'] def main(): &quot;&quot;&quot;Shows basic usage of the Gmail API. Lists the user's Gmail labels. 
&quot;&quot;&quot; creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'C:/path/to/credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) try: # Create a Gmail API service client service = build('gmail', 'v1', credentials=creds) # List messages in blocks of 500 page_token = None while True: results = service.users().messages().list(userId='me', labelIds=['INBOX'], maxResults=500, pageToken=page_token).execute() messages = results.get('messages', []) if not messages: print('No messages found.') break # Get existing labels labels = service.users().labels().list(userId='me').execute() label_names = [label['name'] for label in labels['labels']] for message in messages: msg = service.users().messages().get(userId='me', id=message['id']).execute() sender_email = get_sender_email(msg['payload']['headers']) sender_domain = sender_email.split('@')[-1] reversed_domain = '.'.join(reversed(sender_domain.split('.'))) label_path = create_label_path(reversed_domain, labels, label_names, service) # Reget existing labels including new for label in labels['labels']: if label['name'] == label_path: labelId = label['id'] break # Apply the reversed sender domain path as a label to the message if sender_email: service.users().messages().modify(userId='me', id=message['id'], body={'removeLabelIds': 'INBOX', 'addLabelIds': labelId}).execute() print(f'Applied label &quot;{label_path}&quot; to {sender_email}') page_token = 
results.get('nextPageToken') if not page_token: break except HttpError as error: # TODO(developer) - Handle errors from gmail API. print(f'An error occurred: {error}') def get_sender_email(headers): hfrom = '' hrply = '' hrtrn = '' hsndr = '' m = re.compile(r'^[^&lt;]*&lt;?([^@&lt;&gt; ]+@[^@&lt;&gt; ]+)&gt;?$') for header in headers: if (header['name'] == 'From') and (m.match(header['value'])): hfrom = m.match(header['value']).group(1) elif (header['name'] == 'Reply-To') and (m.match(header['value'])): hrply = m.match(header['value']).group(1) elif (header['name'] == 'Return-Path') and (m.match(header['value'])): hrtrn = m.match(header['value']).group(1) elif (header['name'] == 'Sender') and (m.match(header['value'])): hsndr = m.match(header['value']).group(1) # just because the header field exists doesn't mean it is populated if len(hfrom) &gt; 0: return hfrom elif len(hrply) &gt; 0: return hrply elif len(hrtrn) &gt; 0: return hrtrn elif len(hsndr) &gt; 0: return hsndr return 'nosuchuser@NoSender' def create_label_path(revdom, labels, label_names, service): domains = revdom.split('.') label_path = '' for domain in domains: if len(label_path) &gt; 0: label_path = '/'.join(list((label_path, domain))) else: label_path = domain # Check if the label exists: # - have to create each label at each level, else # '/' just becomes part of a single label if label_path not in label_names: # Create a new label label = {'name': label_path} created_label = service.users().labels().create(userId='me', body=label).execute() labels['labels'].append(created_label) label_names.append(label_path) return label_path if __name__ == '__main__': main() </code></pre> <p>NOW, since this is abominably slow, what I am trying to do is do these updates in batches. 
(There is some code I haven't devised yet that is required, but it will basically select the message IDs of all messages that come from the same sender domain, then batch update the labels for those).</p> <p>But before I can do that, I have to first get the message headers using <code>batch.execute()</code>, and this is not working.</p> <p>Here is an attempt at request batching (other approaches commented out but left in to see if anyone else can work out how to integrate them - I can't):</p> <pre class="lang-py prettyprint-override"><code>from __future__ import print_function import os.path import re import json from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.http import BatchHttpRequest from googleapiclient.errors import HttpError # SCOPES define the Gmail API permissions - delete token.json when changing SCOPES = ['https://www.googleapis.com/auth/gmail.modify', 'https://www.googleapis.com/auth/gmail.labels'] global headers_array # Added # choose either callback function or container function def callback(self, request_id, response, exception): headers_array.append(response[&quot;payload&quot;][&quot;headers&quot;]) # Helper container to store results. # class DataContainer: # def __init__(self): # self.data = {} # def callback(self, request_id, response, exception): # if exception is not None: # print('request_id: {}, exception: {}'.format(request_id, str(exception))) # pass # else: # print(request_id) # self.data[request_id] = response # container = DataContainer() def main(): creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. 
if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file('C:/path/to/credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) try: # Create a Gmail API service client service = build('gmail', 'v1', credentials=creds) # List all message IDs in blocks of 500; 500 works and is the supported MAX all_message_ids = [] results = service.users().messages().list(userId='me', labelIds=['INBOX'], maxResults=500).execute() messages = results.get('messages', []) all_message_ids.extend([message['id'] for message in messages]) while 'nextPageToken' in results: page_token = results['nextPageToken'] results = service.users().messages().list(userId='me', labelIds=['INBOX'], maxResults=500, pageToken=page_token).execute() messages = results.get('messages', []) all_message_ids.extend([message['id'] for message in messages]) print(f'Total messages: {len(all_message_ids)}') headers_array = [] # Retrieve headers for each message ID for i in range(0, len(all_message_ids), 100): batch_message_ids = all_message_ids[i:i+100] # messages_batch = batch.add(service.users().messages().get(userId='me', ids=batch_message_ids, format='metadata', metadataHeaders=['From', 'Reply-To', 'Sender', 'Return-Path'])) # for msgid in all_message_ids: # msg = (service.users().messages().get(userId='me', ids=msgid, format='metadata', metadataHeaders=['From', 'Reply-To', 'Sender', 'Return-Path']).execute()) ######## incomplete - not working # msg_ids = [msg['id'] for msg in body['messages']] # headers['Content-Type'] = 'multipart/mixed; boundary=%s' % self.BOUNDARY # post_body = [] # for msg_id in batch_message_ids: # 
post_body.append( # &quot;--%s\n&quot; # &quot;Content-Type: application/http\n\n&quot; # &quot;GET /gmail/v1/users/me/messages/%s?format=raw\n&quot; # % (self.BOUNDARY, msg_id)) # post_body.append(&quot;--%s--\n&quot; % self.BOUNDARY) # post = '\n'.join(post_body) # (headers, body) = _conn.request( # SERVER_URL + '/batch', # method='POST', body=post, headers=headers) ######## batch = service.new_batch_http_request(callback=callback) for msg_id in batch_message_ids: batch.add(service.users().messages().get(userId = 'me', id = msg_id, format='metadata', metadataHeaders=['From', 'Reply-To', 'Sender', 'Return-Path'])) batch.execute() # &lt;--- BREAKS HERE WITH EXCEPTION ######## messages_batch = ''; # this is a placeholder so the script runs without syntax errors - I know it is wrong, just not sure how to 'capture' the batch results into a list/array/object to loop through below for msg in messages_batch['messages']: # message_id = msg['id'] headers = { 'message_id': msg['id'], 'from': msg[&quot;payload&quot;][&quot;headers&quot;][0][&quot;value&quot;], 'reply_to': msg[&quot;payload&quot;][&quot;headers&quot;][1][&quot;value&quot;], 'sender': msg[&quot;payload&quot;][&quot;headers&quot;][2][&quot;value&quot;], 'return_path': msg[&quot;payload&quot;][&quot;headers&quot;][3][&quot;value&quot;] } headers_array.append(headers) # ToDo: do the label replacement here print(headers_array) except HttpError as error: # TODO(developer) - Handle errors from gmail API. print(f'An error occurred: {error}') if __name__ == '__main__': main() </code></pre> <p>I suspect the batch request is being malformed because I saw one example that added a callback argument, but I don't fully understand the callback syntax yet and am not sure how to integrate it or if it is needed.</p> <p>What I want to see/receive in the batch response is a JSON object containing the selected headers for all 500 (100?) message IDs sent in the batch request. 
The loop would then send another batch request with the next 500 (100) message IDs, and so on until I have received the selected headers for <em><strong>all</strong></em> messages in my <strong>INBOX</strong>.</p> <p>I have read the Gmail API and Google Apps Scripting documentation (before you suggest it), and to be frank, it is simply lacking details and comprehensive examples (for me). I have also read the API batch guide, but the example given is for a search query or sheets script and not relevant, and I can't work out how to translate it to work with the Gmail API.</p> <p>I have also tried this with a Google Apps script, but there is a fundamental limitation where there is no apparent way to apply labels to messages, only to threads, where as it is possible to apply a label to a message using Python via the Gmail API (just love inconsistency :-/).</p> <p>The performance of the working script that does not batch the requests is processing emails at between 1 and 2 per second. I have almost 150,000 to do...</p> <p>Here is the full Google Apps Script which does everything but label individual messages :(:</p> <pre class="lang-js prettyprint-override"><code>function applyDomainLabelAndRemoveInboxLabel() { var threads = GmailApp.getInboxThreads(); for (var i = 0; i &lt; threads.length; i++) { var messages = threads[i].getMessages(); for (var j = 0; j &lt; messages.length; j++) { var message = messages[j]; var msgId = message.getId(); // Logger.log(msgId); var domlab = getDomainFromMessage(message); // updateLabels(msgId, domlab[0]); updateLabels(msgId, domlab); } } } function getDomainFromMessage(message) { var rawMessage = message.getRawContent(); var domain = null; var matchFrom = rawMessage.match(/From:.*&lt;([^&gt;]+)&gt;/i); var matchReplyTo = rawMessage.match(/Reply-To:.*&lt;([^&gt;]+)&gt;/i); var matchReturnPath = rawMessage.match(/Return-Path: &lt;([^&gt;]+)&gt;/i); var matchSender = rawMessage.match(/Sender:.*&lt;([^&gt;]+)&gt;/i); if (matchFrom) { domain 
= extractDomain(matchFrom[1]); } else if (matchReplyTo) { domain = extractDomain(matchReplyTo[1]); } else if (matchReturnPath) { domain = extractDomain(matchReturnPath[1]); } else if (matchSender) { domain = extractDomain(matchSender[1]); } return domain ? [domain] : []; } function extractDomain(email) { var parts = email.split('@'); var revdom = null; if (parts.length === 2) { revdom = reverseDomain(parts[1]); return revdom; } return null; } function reverseDomain(domain) { var domparts = domain.split('.'); domparts.reverse(); var newLabel = null; for (var k = 0; k &lt; domparts.length; k++) { if (k === 0) { newLabel = domparts[k] } else { newLabel = newLabel.concat(&quot;/&quot;, domparts[k]); } var labelObject = GmailApp.getUserLabelByName(newLabel); // Logger.log(labelObject.getName()); if (!labelObject) { labelObject = GmailApp.createLabel(newLabel) } } // not sure which of these to return return labelObject; // can't find a way to get label IDs using GApps Script // return newLabel; } function updateLabels(msgId, addLabel) { var jsonLabel = JSON.stringify(addLabel) // BREAK HERE - CANNOT APPLY LABEL TO MESSAGE var msgLabel = Gmail.Users.Messages.modify({ 'addLabelIds': [jsonLabel], // can't get this line right 'removeLabelIds': ['INBOX'] }, 'me', msgId); } </code></pre>
<python><google-apps-script><get><gmail><batch-processing>
2023-08-20 14:08:10
1
1,784
skeetastax
76,939,764
273,593
define a typeddict from a dataclass
<p>I am implementing a function that builds and returns a (frozen) dataclass. The dataclass definition is public and &quot;official&quot;. Internally, my implementation needs to do some complex logic to build the information for the dataclass initialization, and I want to use a TypedDict - one that matches the fields in the dataclass - as a temporary holder.</p> <p>I'm struggling to define the TypedDict without just copying the fields... ideally I want to &quot;dynamically&quot; (but not really, as the dataclass fields are defined statically) define the TypedDict fields.</p> <p><a href="https://mypy-play.net/?gist=56e4e00c31a6aa30e4af0e04eb235367" rel="nofollow noreferrer">mypy-playground</a></p>
<python><mypy><python-typing><python-dataclasses><typeddict>
2023-08-20 14:04:51
0
1,703
Vito De Tullio
76,939,742
16,845
Turning a 2d processing for loop into nested comprehensions with join
<p>The following Python3 code takes an input list of bytes and prints out their hexadecimal &quot;C style&quot; representation, 12 per line:</p> <pre><code>def c_array_body_from_bytes(input_bytes): &quot;&quot;&quot;Emit the provided bytes as the body of a C array.&quot;&quot;&quot; lines = &quot;&quot; for i in range(0, len(input_bytes), 12): lines += &quot;&quot;.join(f&quot;0x{b:02x}, &quot; for b in input_bytes[i : i + 12]) + &quot;\n&quot; return lines print(c_array_body_from_bytes(range(64))) </code></pre> <p>Can this be expressed more tersely as a single return statement that uses a nested comprehension, without requiring the accumulator <code>lines</code> variable? If so, what does it look like?</p>
<python><python-3.x>
2023-08-20 14:00:41
1
1,216
Charles Nicholson
76,939,674
8,737,016
FastAPI attempted relative import beyond top-level package
<p>I'm using FastAPI following the <a href="https://fastapi.tiangolo.com/tutorial/bigger-applications/" rel="nofollow noreferrer">Bigger Applications - Multiple Files</a> guide on how to split APIs into multiple files.</p> <p>My project structure is the following:</p> <pre><code>─── src └── api    ├── dependencies    │   ├── db_connect.py    │   └── __init__.py    ├── __init__.py    ├── main.py    └── routers    ├── colors.py    ├── __init__.py    └── items.py </code></pre> <p>So I got an <code>api</code> package containing <code>main.py</code> and two subpackages <code>dependencies</code> and <code>routers</code>. In the files in <code>routers</code>, i.e. <code>colors.py</code> and <code>items.py</code> I import <code>db_connect.py</code> from the <code>dependencies</code> package in the same way as shown in the guide, e.g.</p> <p>colors.py</p> <pre class="lang-py prettyprint-override"><code>from ..dependencies import db_connect </code></pre> <p>but I get the infamous error <code>ValueError: attempted relative import beyond top-level package</code> when I start <code>uvicorn --reload main:app</code> from inside the <code>api</code> package.</p> <p>What am I doing wrong?</p> <p><strong>NOTE:</strong> even just running <code>python main.py</code> from <code>src/api/</code> generates the error.</p> <p><strong>SIMPLIFIED PYTHON CODE</strong></p> <p>main.py</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI from routers import items from routers import colors # Instantiate the FastAPI instance and include all the routers app = FastAPI() app.include_router(items.router) app.include_router(colors.router) </code></pre> <p>items.py</p> <pre class="lang-py prettyprint-override"><code>from ..dependencies import db_connect router = APIRouter( prefix=&quot;items&quot;, tags=[&quot;items&quot;], dependencies=[Depends(db_connect.get_collection)], ) </code></pre> <p>colors.py</p> <pre class="lang-py prettyprint-override"><code>from ..dependencies
import db_connect router = APIRouter( prefix=&quot;colors&quot;, tags=[&quot;colors&quot;], dependencies=[Depends(db_connect.get_collection)], ) </code></pre>
<python><python-import><fastapi>
2023-08-20 13:45:40
1
2,245
Federico Taschin
76,939,247
3,521,180
why am I not able to fetch data by id in flask?
<p>I have written below flask functions to create new store, get list of stores, and get store by ID. Please see below functions</p> <pre><code>import uuid from flask_smorest import abort from flask import Flask, request from db import * app = Flask(__name__) @app.get(&quot;/store&quot;) async def get_stores(): return {&quot;stores&quot;: list(stores.values())} @app.post(&quot;/store&quot;) async def create_store(): store_data = request.get_json() store_id = uuid.uuid4().hex new_data = {**store_data, &quot;id&quot;: store_id} stores[store_id] = new_data return new_data, 201 @app.post(&quot;/item&quot;) async def create_item(): item_data = request.get_json() if item_data[&quot;store_id&quot;] not in stores: return abort(404, &quot;item not found&quot;) item_id = uuid.uuid4().hex item = {**item_data, &quot;id&quot;: item_id} items[item_id] = item return item, 201 @app.get(&quot;/item&quot;) async def get_al_items(): return {&quot;items&quot;: list(items.values())} @app.get(&quot;/store/&lt;string:item_id&gt;&quot;) async def getstore_items(item_id): try: return items[item_id] except KeyError: return abort(404, &quot;item id not found&quot;) @app.get(&quot;/store/&lt;string:store_id&gt;&quot;) async def get_stores_id(store_id): try: return stores[store_id] except KeyError: return abort(404, &quot;store id not found&quot;) </code></pre> <p>Passing JSON</p> <pre><code> { &quot;name&quot;: &quot;sixerrrrr&quot; } </code></pre> <p>To below function in POSTMAN is creating a new store successfully</p> <pre><code>http://127.0.0.1:5000/store </code></pre> <p>with below response in POSTMAN</p> <pre><code>{ &quot;id&quot;: &quot;1fd69e85e9324011a54b7407d16268ac&quot;, &quot;name&quot;: &quot;sixerrrrr&quot; } </code></pre> <p>But when I execute <code>http://127.0.0.1:5000/store</code> as <code>GET</code>, I see empty list</p> <pre><code>{ &quot;stores&quot;: [] } </code></pre> <p>When trying to GET through ID, I get 404 error</p> 
<pre><code>http://127.0.0.1:5000/store/1fd69e85e9324011a54b7407d16268ac </code></pre> <p>error:</p> <pre><code>title&gt;404 Not Found&lt;/title&gt; &lt;h1&gt;Not Found&lt;/h1&gt; &lt;p&gt;The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.&lt;/p&gt; </code></pre> <p>I have already used print() in <code>def create_store()</code>, and I could see the new &quot;name&quot; and ID getting generated.</p> <p>Not sure what is happening. Please help.</p>
<python><python-3.x><flask>
2023-08-20 11:49:37
1
1,150
user3521180
76,939,151
2,841,150
How to determine a python process with pdb whether running or waiting for stdin?
<p>I am trying to obtain the running state (such as variable dumps) of a Python program named <code>A</code>. Currently, I have created a debug script that spawns a Pdb subprocess to debug <code>A</code> and control it by sending debugger commands. This allows me to access any variable from the debugger.</p> <p>However, the A program requires input from <code>stdin</code> at certain points. It seems that the Pdb and <code>A</code> are somehow sharing the <code>stdin</code> stream. When the Pdb is waiting for a command, it displays a <code>(Pdb)</code> prompt, indicating that I should input a command. Yet, when <code>A</code> is waiting for <code>stdin</code>, it does not print anything, giving the impression that it is still running.</p> <p>Hence, I would like to know how to determine whether the subprocess is actively running or waiting for <code>stdin</code> input.</p> <p>Here's my debugger code, works but not perfect.</p> <pre><code>import os, fire, re from subprocess import Popen, PIPE def wait(proc, timeout): try: proc.wait(timeout) except: pass def send(proc, str = None): proc.stdin.write((str + '\n').encode()) proc.stdin.flush() def recv(proc): all = '' while True: out = proc.stdout.readline() if out == b'': wait(proc, 0.2) out = proc.stdout.readline() if out == b'': break all += out.decode() return all def parse_array(str): str = str.strip('[]') return [s.strip() for s in str.split(',')] def get_code_status(proc): send(proc, 'locals().keys()') m = re.match(r'.*dict_keys\((\[.*\])\).*', recv(proc)) assert len(m.groups()) == 1 names = [] for var in parse_array(m.group(1)): stripped = var.strip(&quot;'&quot;) if not re.match(r'__.*__', stripped): names.append(f'{var}:{stripped}') if names: send(proc, '{' + f'{&quot;,&quot;.join(names)}' + '}') status = recv(proc).splitlines()[0] send(proc, 'll') code = recv(proc) return code, status return None, None def main(code, data, output): code = os.path.abspath(code) proc = Popen( [&quot;python&quot;, &quot;-m&quot;, 
&quot;pdb&quot;, code], stdin=PIPE, stdout=PIPE, stderr=PIPE ) wait(proc, 1) os.set_blocking(proc.stdout.fileno(), False) data = open(data).read().splitlines() prev_line = 0 for step in range(100000): received = recv(proc) if (step + 1) % 10 == 0: code, status = get_code_status(proc) if code: print(code) print(status) break if received == '': if data == []: break else: send_line = data.pop(0) print(send_line) send(proc, send_line) else: print(received, end='') assert received.endswith('(Pdb) ') for recv_line in received.splitlines(): if recv_line.startswith('&gt; '): if code not in recv_line: cmd = 'r' else: m = re.match(r'&gt; (.*)\((\d+)\).*', recv_line) assert len(m.groups()) == 2 assert m.group(1) == code cur_line = int(m.group(2)) cmd = 'r' if prev_line == cur_line else 's' prev_line = cur_line break send(proc, cmd) print(cmd) if __name__ == '__main__': fire.Fire(main) </code></pre>
<python><pdb>
2023-08-20 11:24:25
0
465
Devymex
76,938,988
4,406,595
How to edit a message sent by a telegram-bot in Python
<p>I am using Python with Telegram bot API (20) and trying to edit a message my bot sent with the following code</p> <pre><code>msg = await context.bot.send_message( chat_id = update.effective_chat.id, text = &quot;message&quot;, reply_markup=ReplyKeyboardMarkup( reply_keyboard, one_time_keyboard=True, input_field_placeholder=&quot;Play!&quot; ), ) await context.bot.edit_message_text(chat_id=update.effective_chat.id, text=&quot;edited text&quot;, message_id=msg.id) </code></pre> <p>I am getting the following error</p> <blockquote> <p>raise BadRequest(message) telegram.error.BadRequest: Message can't be edited</p> </blockquote> <p>I found <a href="https://www.reddit.com/r/TelegramBots/comments/6vxemq/badrequest_message_cant_be_edited/" rel="nofollow noreferrer">this</a> Explaining that it might be an attempt to edit the user's message which is not editable by the bot, I made sure I am editing the message the bot sent, and that it's the right ID</p> <p>I also read it might be an issue of the message not finishing to update, I used <code>await</code> and tried using <code>asyncio.sleep(5)</code> before the edit attempt with no success and the same error.</p> <p>How can I edit the message the bot sent?</p>
<python><botframework><telegram-bot>
2023-08-20 10:34:48
0
4,859
thebeancounter
76,938,646
3,625,533
serialization/deserialization using json in python
<p>I am trying to serialize and deserialize class objects to JSON using the json package. After looking into some documentation I succeeded in serializing a simple class object, but when trying a more complex class I am not able to deserialize the objects.</p> <pre><code> class FirstStandard: def __init__(self, student=None): self.students = [] if student: self.students.append(student.__dict__) def toJson(self): return vars(self) def update(self, student): if student: self.students.append(student.__dict__) class Student: def __init__(self, sid, fName, lName, fee): self.sid = sid self.fName = fName self.lName = lName self.fee = fee </code></pre> <p>This is the code I am trying with:</p> <pre><code>def student_hook(stu_dict): return FirstStandard(stu_dict['students']) if __name__ == '__main__': s = Student(10, 'ABC', 'XYZ', 12345) studs = FirstStandard(s) s1 = Student(11, 'AAA', 'BBB', 99999) studs.update(s1) # serialization message = json.dumps(studs.toJson()) print(message) # deserialization x = json.loads(message, object_hook=student_hook) print(x.students) </code></pre> <p>When deserialising the students using the hook, I am expecting the entire students list to be present, but I am receiving a student object instead.</p> <p>How should this hook be modified to get the JSON deserialised into my class object?</p>
<python><python-3.x>
2023-08-20 08:49:02
2
435
user3625533
76,938,541
15,218,250
How to do django form backend validation for user inputs
<p>I am NOT trying to use Django Form for rendering, but only for backend validation. This is my current code for a chat app:</p> <p>views.py</p> <pre><code>def send(request): var1 = request.POST['var1'] user_id= request.POST['user_id'] main_user = User.objects.get(id=user_id) new_instance = ModelName.objects.create(var1=var1, user=main_user) new_instance.save() # Below is the alert message in ajax in template. return HttpResponse('Message sent successfully') </code></pre> <p>So, basically, I want to create a model chat instance in the database using the Django Form backend validation. However, I don't know how to do it. I hope you could help me with this.</p> <p>Moreover, if there are any other security holes that I need to take care of in this form (other than backend validation), I would appreciate it if you could show me how to fix it. Thank you, and please leave any comments below.</p>
<python><django><django-models><django-views><django-forms>
2023-08-20 08:16:14
1
613
coderDcoder
76,938,313
11,608,962
Compare diff of multiple texts and concatenate to create a super set in Python
<p>I have the following three sentences.</p> <ol> <li>Supply and install the material for renovation services. (Size 100 cm X 200 cm)</li> <li>Supply and install the material for renovation services. (500 sq.m)</li> <li>Renovation services - Supply and install the material.</li> </ol> <p>I need the following output.</p> <blockquote> <p>Supply and install the material for renovation services. (Size 100 cm X 200 cm) (500 sq.m) Renovation services -</p> </blockquote> <p>I can achieve this using the <code>split()</code> function and then performing word-by-word comparisons.</p> <p>Is there any other way (maybe using <code>regex</code>) or any library that can help me with this task?</p>
<python>
2023-08-20 07:08:43
0
1,427
Amit Pathak
76,938,251
3,104,974
Basic Import from Submodule Fails in Spyder IDE
<p>I know this <em>should</em> work. But it doesn't. What could be the reason?</p> <p>Using python 3.9, Anaconda environment, Spyder 5.4.3, Windows 10</p> <pre><code>├── sub | ├── __init__.py | └── util.py | ├── __init__.py └── main.py </code></pre> <pre><code># util.py def my_func(): print(&quot;test&quot;) </code></pre> <pre><code># main.py from sub.util import my_func &gt;&gt;&gt; ModuleNotFoundError: No module named 'sub' </code></pre>
<python><ide><spyder><importerror>
2023-08-20 06:50:41
1
6,315
ascripter
76,938,201
6,792,327
Gemini API: "InvalidSignature" in App Scripts
<p>I am attempting to call Gemini API using App Scripts but I am consistently getting the 'InvalidSignature' error message. The code that I am using is a conversion of the python code from their <a href="https://docs.gemini.com/rest-api/#private-api-invocation" rel="nofollow noreferrer">documentation</a> for Private API Invocation.</p> <p>Code:</p> <pre><code>function createHeaders(payload) { var geminiApiKey = API_KEY var geminiApiSecret = SECRET_KEY var encodedApiSecret = Utilities.newBlob(geminiApiSecret, 'UTF-8').getBytes() var payloadNonce = new Date().getTime() / 1000; payload[&quot;nonce&quot;] = Math.round(payloadNonce); var encodedPayload = Utilities.newBlob(JSON.stringify(payload), 'UTF-8').getBytes() var b64 = Utilities.base64Encode(encodedPayload) //string var signature = Utilities.computeHmacSignature( Utilities.MacAlgorithm.HMAC_SHA_384, b64, geminiApiSecretAuditor ) //bytes return { &quot;Content-Type&quot;: &quot;text/plain&quot;, &quot;X-GEMINI-APIKEY&quot;: geminiApiKeyAuditor, &quot;X-GEMINI-PAYLOAD&quot;: b64, &quot;X-GEMINI-SIGNATURE&quot;: Utilities.base64Encode(signature), //string &quot;Cache-Control&quot;: &quot;no-cache&quot; }; } function fetchGeminiData() { var baseUrl = &quot;https://api.gemini.com&quot;; var url = baseUrl + &quot;/v1/mytrades&quot;; var payload = { &quot;request&quot;: &quot;/v1/mytrades&quot; }; var headers = createHeaders(payload); var options = { &quot;method&quot;: &quot;post&quot;, &quot;headers&quot;: headers, }; var response = UrlFetchApp.fetch(url, options); var responseData = response.getContentText(); var jsonResponse = JSON.parse(responseData); Logger.log(jsonResponse); } </code></pre> <p>The error messages:</p> <pre><code>Exception: Request failed for https://api.gemini.com returned code 400. 
Truncated server response: {&quot;result&quot;:&quot;error&quot;,&quot;reason&quot;:&quot;InvalidSignature&quot;,&quot;message&quot;:&quot;InvalidSignature&quot;} </code></pre> <p>I have ensured that the object types are as per what is expected from each method, but I still get the above error messages. What am I missing?</p>
<python><google-apps-script><google-gemini>
2023-08-20 06:29:30
1
2,947
Koh
76,937,659
2,621,316
Compiling mod_Wsgi with python 3.11 on Amazon linux 2: fatal error: Python.h: No such file or directory
<p>Full output:</p> <pre><code>In file included from src/server/mod_wsgi.c:22:0: src/server/wsgi_python.h:26:10: fatal error: Python.h: No such file or directory #include &lt;Python.h&gt; ^~~~~~~~~~ compilation terminated. apxs:Error: Command failed with rc=65536 . make: *** [src/server/mod_wsgi.la] Error 1 </code></pre> <p>I've been trying to compile mod_wsgi with python 3.11 using <code>make</code> to no avail. I've already installed <code>python3-devel</code> but it doesn't look like <code>Python.h</code> is being installed in the correct location or something? I've even tried copying <code>Python.h</code> into the same directory as the code that is looking for it, i.e <code>src/server/</code></p> <p>I made sure to compile Python with <code>enable-shared</code> as well.</p> <p>Where exactly is this file supposed to be located and how can I get it? I was hoping there was a <code>python311-devel</code> I could install but there is only up to <code>python34-devel</code></p> <p>Thank you in advance</p>
<python><mod-wsgi><amazon-linux>
2023-08-20 01:35:58
1
2,981
Amon
76,937,627
3,848,207
Change the font colour of time display at certain time period of this clock
<p>I have this python script named clock.py.</p> <p>Here is the source code.</p> <pre><code># Source: https://www.geeksforgeeks.org/python-create-a-digital-clock-using-tkinter/ # importing whole module from tkinter import * from tkinter.ttk import * # importing strftime function to # retrieve system's time from time import strftime # creating tkinter window root = Tk() root.title('Clock') # This function is used to # display time on the label def time(): string = strftime('%H:%M:%S') lbl.config(text=string) lbl.after(1000, time) # Styling the label widget so that clock # will look more attractive lbl = Label(root, font=('calibri', 40, 'bold'), background='purple', foreground='white') # Placing clock at the centre # of the tkinter window lbl.pack(anchor='center') time() mainloop() </code></pre> <p>In the time period from 2345hrs to 0000hrs, I want to change the font colour to bright red so that the clock time display is more prominent.</p>
<python><tkinter>
2023-08-20 01:13:55
1
5,287
user3848207
76,937,581
10,693,596
Defining custom types in Pydantic v2
<p>The code below used to work in Pydantic V1:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class CustomInt(int): &quot;&quot;&quot;Custom int.&quot;&quot;&quot; pass class CustomModel(BaseModel): &quot;&quot;&quot;Custom model.&quot;&quot;&quot; custom_int: CustomInt </code></pre> <p>Using it with Pydantic V2 will trigger an error. The error can be resolved by including <code>arbitrary_types_allowed=True</code> in <code>model_config</code>, but is there a better solution?</p> <p>Kindly note that the <a href="https://docs.pydantic.dev/latest/usage/types/custom/#custom-data-types" rel="noreferrer">docs</a> suggest using <code>Annotated</code>, but that doesn't allow defining custom docstring, which is desirable in my use case.</p>
<python><python-3.x><types><pydantic>
2023-08-20 00:54:15
2
16,692
SultanOrazbayev
76,937,503
11,058,930
How to Ensure Individual Research Papers are Treated Separately in a Vector Search Using langchain?
<p>I'm working on a project where I want to use the ChatGPT API to analyze multiple large research papers by comparing their content. However, due to the API's token limit, I've been advised to utilize <code>langchain</code> for handling these large corpora.</p> <p>To my understanding the process should be as following:</p> <ol> <li>Split the research papers into smaller chunks.</li> <li>Generate embeddings from these chunks.</li> <li>Store these embeddings in a vector, possibly using Chroma.</li> <li>Convert my comparative analysis criteria into embeddings.</li> <li>Search the vector using cosine similarity with these embeddings.</li> <li>Extract relevant sections based on the search results.</li> <li>Use the extracted sections as input to ChatGPT with specific prompts for analysis.</li> </ol> <p>My primary concern is ensuring that each research paper remains distinct throughout this process. I need ChatGPT to recognize and differentiate between the sources. How can I ensure that each paper is treated as its own entity/source during the vector search and analysis?</p> <p>My data is currently structured in a dataframe in the following manner:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>subject</th> <th>research_paper_text_1</th> <th>research_paper_text_2</th> <th>research_paper_text_3</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>covid</td> <td>research paper text 1 ...</td> <td>research paper text 2 ...</td> <td>research paper text 3 ...</td> </tr> <tr> <td>2</td> <td>gut health</td> <td>research paper text 1 ...</td> <td>research paper text 2 ...</td> <td>research paper text 3 ...</td> </tr> </tbody> </table> </div> <p>Here is my Python code thus far:</p> <pre><code>import langchain from langchain.vectorstores import Chroma #pip install chromadb from langchain.document_loaders import DataFrameLoader from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from 
langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings #pip install sentence_transformers # load text data loader = DataFrameLoader(df, page_content_column=&quot;research_paper_text_1&quot;) #not sure how to load other documents here documents = loader.load() # split it into chunks text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) # create the open-source embedding function embedding_function = SentenceTransformerEmbeddings(model_name=&quot;all-MiniLM-L6-v2&quot;) # load it into Chroma db = Chroma.from_documents(docs, embedding_function) # query it query = &quot;Compare research_paper_text_1 vs text_2 vs text_3 and tell me where their test approaches differ.&quot; docs = db.similarity_search(query) #send relevant data to chatgpt with prompt </code></pre> <p>Any help to put me in the right direction is appreciated. Thank you!</p>
<python><openai-api><langchain>
2023-08-20 00:17:59
0
1,747
mikelowry
76,937,484
12,263,957
Cosine similarity return empty
<p>I am trying to access the most similar vectors but it returns empty and I don't understand.</p> <p>I am following this documentation: <a href="https://redis-py.readthedocs.io/en/stable/examples/search_vector_similarity_examples.html" rel="nofollow noreferrer">https://redis-py.readthedocs.io/en/stable/examples/search_vector_similarity_examples.html</a></p> <p>And this is my schema:</p> <pre><code>schema = ( TagField(&quot;ticket_url&quot;), NumericField(&quot;ticket_id&quot;), NumericField(&quot;entity_id&quot;), VectorField(&quot;embedding&quot;, &quot;HNSW&quot;, { &quot;TYPE&quot;: &quot;FLOAT32&quot;, &quot;DIM&quot;: self.vector_dim, &quot;DISTANCE_METRIC&quot;: &quot;COSINE&quot;, } ), ) definition = IndexDefinition( prefix=[self.doc_prefix], index_type=IndexType.HASH) self.r.ft(self.index_name).create_index( fields=schema, definition=definition) </code></pre> <p>The function to search similar vectors</p> <pre><code>def search_similar_documents(self, entity_id, vector, topK=5, ticket_id=None): query = ( Query(&quot;*=&gt;[KNN 2 @embedding $vec as score]&quot;) .sort_by(&quot;score&quot;) .return_fields(&quot;score&quot;) .paging(0, 2) .dialect(2) ) query_params = {&quot;vec&quot;: vector} return self.r.ft(self.index_name).search(query, query_params).docs </code></pre> <p><a href="https://i.sstatic.net/FXbND.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FXbND.png" alt="enter image description here" /></a></p> <p>Vectors are generated from an openai response and converted to bytes</p> <pre><code>def embedding_openai(self, text): try: response = openai.Embedding.create( input=text, model=&quot;text-embedding-ada-002&quot; ) embedding = response['data'][0]['embedding'] array_embedding = np.array(embedding, dtype=np.float32) return array_embedding.tobytes() except Exception as ex: print(ex) return None </code></pre> <p>And redis.ft(index).info() return this</p> <pre><code>{'index_name': 'conversations', 'index_options': [], 'index_definition': 
[b'key_type', b'HASH', b'prefixes', [b'tickets:'], b'default_score', b'1'], 'attributes': [[b'identifier', b'ticket_url', b'attribute', b'ticket_url', b'type', b'TAG', b'SEPARATOR', b','], [b'identifier', b'ticket_id', b'attribute', b'ticket_id', b'type', b'NUMERIC'], [b'identifier', b'entity_id', b'attribute', b'entity_id', b'type', b'NUMERIC'], [b'identifier', b'embedding', b'attribute', b'embedding', b'type', b'VECTOR']], 'num_docs': '973', 'max_doc_id': '973', 'num_terms': '0', 'num_records': '3892', 'inverted_sz_mb': '0.00634765625', 'vector_index_sz_mb': '6.00555419921875', 'total_inverted_index_blocks': '2999', 'offset_vectors_sz_mb': '0', 'doc_table_size_mb': '0.086483001708984375', 'sortable_values_size_mb': '0', 'key_table_size_mb': '0.030145645141601562', 'records_per_doc_avg': '4', 'bytes_per_record_avg': '1.7101746797561646', 'offsets_per_term_avg': '0', 'offset_bits_per_record_avg': '-nan', 'hash_indexing_failures': '0', 'total_indexing_time': '347.62900000000002', 'indexing': '0', 'percent_indexed': '1', 'number_of_uses': 1, 'gc_stats': [b'bytes_collected', b'0', b'total_ms_run', b'0', b'total_cycles', b'0', b'average_cycle_time_ms', b'-nan', b'last_run_time_ms', b'0', b'gc_numeric_trees_missed', b'0', b'gc_blocks_denied', b'0'], 'cursor_stats': [b'global_idle', 0, b'global_total', 0, b'index_capacity', 128, b'index_total', 0], 'dialect_stats': [b'dialect_1', 0, b'dialect_2', 0, b'dialect_3', 0]} </code></pre> <p>the vectors are stored as bytes, I don't know if it's the algorithm or I'm the problem :/</p>
<python><redis><embedding><redis-py>
2023-08-20 00:07:21
1
597
Danilo Toro
76,937,465
6,401,858
Fastest way to do tan() and arctan() on large arrays with GPU in Python?
<p>This is essentially an expansion of <a href="https://stackoverflow.com/questions/47205757/how-can-i-use-trigonometric-functions-on-gpu-with-numba">this question</a>. The answer provided there says to use the math library for trig functions. Unfortunately, the math library <a href="https://stackoverflow.com/questions/46593021/math-library-and-arrays-in-python">only works on scalars</a>, not arrays. When I try using njit with np.tan() and np.arctan(), it uses my CPU, not my GPU:</p> <pre><code>import numpy as np import numba as nb @nb.njit(fastmath=True, parallel=True) def f1(a): np.tan(a) return np.arctan(a) </code></pre> <p>I have large arrays (3mil cols) that I need to do both tan() and arctan() on. I have 80 million rows of data to process, but I can batch a few a time to fit it in memory and I can divide these 80mil up into multiple CPU/GPU jobs to help parallelize it. I'd like to use my GPU for all these trig functions if it's faster. Will Numba work for this? Will something like PyTorch work here or does that have too much overhead? What's the fastest way to do trig functions on large arrays?</p> <p>EDIT: These 3mil column arrays that make up each row are generated dynamically at runtime. I can batch up to ~20 of these at a time without running out of memory, before sending the whole (20,3mil) array for the 2 trig functions and saving the results (thus freeing up memory for the next batch of 20). So I'm just looking for the fastest way to do tan() and arctan() on a (20,3mil) array, and I'll loop through my data 20 rows at a time. CPU or GPU for this? Numba or PyTorch? @njit or @vectorize?</p>
<python><pytorch><gpu><trigonometry><numba>
2023-08-19 23:58:03
2
382
fariadantes
76,937,361
10,262,805
When to set `add_special_tokens=False` in huggingface transformers tokenizer?
<p>this is the default way of setting <code>tokenizer</code> in the Hugging Face &quot;transformers&quot; library:</p> <pre><code>from transformers import BertForSequenceClassification,BertTokenizer tokenizer=BertTokenizer.from_pretrained('ProsusAI/finbert') tokens=tokenizer.encode_plus(text,add_special_tokens=True, max_length=512, truncation=True, padding=&quot;max_length&quot;) </code></pre> <p>As far as I understand, setting <code>add_special_tokens=True</code> adds the special tokens like [CLS], [SEP], and padding tokens to the input sequences. This is useful for the model's correct interpretation of the input. However, I've come across code samples where people set it to <code>False</code>.</p> <p>I'm wondering in which specific situations should I set add_special_tokens=True and when should I set it to False when using tokenizer.encode_plus()? Are there any scenarios where managing special tokens manually after chunking or other preprocessing steps would be beneficial?</p>
<python><nlp><tokenize><huggingface-transformers>
2023-08-19 23:07:08
1
50,924
Yilmaz
76,937,140
1,454,316
Type hints for PyTorch
<p>How can I use type hints with PyTorch? If I inspect the class for the model returned, I get <code>models.common.AutoShape</code>. However, <code>models</code> shows up as unknown.</p> <pre><code>import torch model : models.common.AutoShape = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True) </code></pre> <p>I also want to explicitly type the results.</p>
<python><pytorch><python-typing>
2023-08-19 21:40:37
1
841
Little Endian
76,936,967
3,860,847
Modify all values of specified keys, recursively throughout dict, whether value is scalar or a list element
<p>I'm trying to do a programmatic modification of dicts. After floundering on my own, I tried lots of different prompts at ChatGPT and Bard, but they didn't generate anything the gets my desired output.</p> <p>I think I have used <a href="https://glom.readthedocs.io/en/latest/" rel="nofollow noreferrer">glom</a> for this kind of thing in the past, but can't find any of my snippets to build off of.</p> <p>There are some similar questions on SO, but mine is different in that I want to modify scalar values, or each of the elements in a list value.</p> <p>I have dicts like this:</p> <pre class="lang-py prettyprint-override"><code>{ &quot;name&quot;: [&quot;joe&quot;, &quot;bob&quot;, &quot;jane&quot;], &quot;rank&quot;: &quot;beginner&quot;, &quot;address&quot;: { &quot;street&quot;: {&quot;direction&quot;: &quot;east&quot;, &quot;name&quot;: &quot;main&quot;}, &quot;town&quot;: {&quot;name&quot;: &quot;boston&quot;, &quot;state&quot;: &quot;mass&quot;}, &quot;name&quot;: {&quot;first&quot;: &quot;joe&quot;, &quot;last&quot;: &quot;smith&quot;}, }, &quot;clubs&quot;: { &quot;art&quot;: {&quot;name&quot;: &quot;art club&quot;, &quot;rank&quot;: &quot;expert&quot;, &quot;site&quot;: &quot;art.com&quot;}, }, &quot;fake&quot;: { &quot;real&quot;: {&quot;name&quot;: &quot;fake&quot;}, &quot;fake&quot;: {&quot;name&quot;: &quot;fake&quot;}, }, } </code></pre> <p>I'm trying to write a function that would recurse through the dict and apply a function to all scalar and list values of the keys in a list I specify.</p> <p>For example, if my list was <code>[&quot;name&quot;, &quot;rank&quot;]</code> and the function was string capitalization, then I would expect the output to be</p> <pre class="lang-py prettyprint-override"><code>{ &quot;name&quot;: [&quot;JOE&quot;, &quot;BOB&quot;, &quot;JANE&quot;], &quot;rank&quot;: &quot;BEGINNER&quot;, &quot;address&quot;: {&quot;street&quot;: {&quot;direction&quot;: &quot;east&quot;, &quot;name&quot;: &quot;MAIN&quot;}}, &quot;clubs&quot;: 
{&quot;art&quot;: {&quot;name&quot;: &quot;ART CLUB&quot;, &quot;rank&quot;: &quot;EXPERT&quot;, &quot;site&quot;: &quot;art.com&quot;}}, &quot;fake&quot;: {&quot;real&quot;: {&quot;name&quot;: &quot;FAKE&quot;}, &quot;fake&quot;: {&quot;name&quot;: &quot;FAKE&quot;}}, } </code></pre>
<python><dictionary><recursion>
2023-08-19 20:47:41
1
3,136
Mark Miller
76,936,763
382,200
How to search/replace text in directory and subdirectories
<p>I've searched and searched but haven't found a simple solution. Lots of fully-complete tutorials and PhD disertations, but I don't want to read pages of stuff...</p> <p><strong>I want to search &amp; replace a text item in all files and subdirectories located in a directory.</strong> I have code that works to search &amp; replace in selected files and in all files in a parent directory by using either of these <code>os.scandir()</code> arguments:</p> <pre><code> '.' os.getcwd() () </code></pre> <p>(They get same results - replacing only what's in the parent drectory, but not in subdirectories or text files. I want to Replace ALL the specified text item strings /text.)</p> <p><strong>Clarifying:</strong> It works on any selected file, but not in items <em>not</em> located in the parent directory.</p> <p>I have a directory with Python codes, text files, and subdirectories (folders) that contain more text files and more subdirectories. Naturally, I've tweaked and tweaked, but I'm not getting what I want (though, I know it's simple...)</p> <p>The tree looks something like this:</p> <p><a href="https://i.sstatic.net/J8YDb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J8YDb.png" alt="screen shot of directory tree" /></a></p> <p>Here's a snippet of relevant code:</p> <pre><code>if(FLAG_Option == 2): with os.scandir( ) as directory: # was '.' and theDIR, hum for item in directory: if not item.name.startswith('.') and item.is_file(): with open(item, mode=&quot;r+&quot;) as file: data = file.read() # print(data) # Before text replaced data = data.replace(search_text, replace_text) file.write(data) print(data) # After text is replaced with open(item, mode=&quot;w&quot;) as file: file.write(data) </code></pre>
<python>
2023-08-19 19:39:25
4
551
headscratch
76,936,762
11,748,924
Change color for water region geopandas
<p>I want to set the sea area to a water color. Here is my code:</p> <pre><code>#@title Province plot provinces['color'] = '#6b8e23' # Default color for all provinces # Plot the provinces #'#6b8e23' land color #'#a0cfff' sea color provinces.plot(color='#6b8e23', linewidth=0.5, edgecolor='black', facecolor='#a0cfff') # Set the title of the plot plt.title('Map of Indonesia Provinces') # Display the plot plt.show() </code></pre> <p>The facecolor argument doesn't work for me. Any idea how I can do it?</p> <p><code>provinces</code> is a <code>GeoDataFrame</code> object.</p>
<python><pandas><dataframe><geopandas>
2023-08-19 19:38:31
1
1,252
Muhammad Ikhwan Perwira
76,936,611
14,309,239
Unable to install pyarrow - ERROR: Failed building wheel for pyarrow
<p>I am using python v 3.8 , numpy-1.24.4 pandas-2.0.3, cmake 3.27.3 on Windows 10 - 64 bit However ,when I try to install pyarrow using</p> <pre><code>pip install pyarrow </code></pre> <p>I get the error below <code>File &quot;&lt;string&gt;&quot;, line 299, in _run_cmake RuntimeError: Not supported on 32-bit Windows [end of output]</code></p> <p>Updated the list here</p> <pre><code> sys.version: 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:21:23) [MSC v.1916 32 bit (Intel)] sys.executable: c:\users\user\projects\development\scripts\python.exe sys.getdefaultencoding: utf-8 sys.getfilesystemencoding: utf-8 locale.getpreferredencoding: cp1252 sys.platform: win32 sys.implementation: name: cpython 'cert' config value: Not specified REQUESTS_CA_BUNDLE: None CURL_CA_BUNDLE: None pip._vendor.certifi.where(): c:\users\user\projects\development\lib\site-packages\pip\_vendor\certifi\cacert.pem pip._vendor.DEBUNDLED: False vendored library versions: CacheControl==0.12.11 colorama==0.4.6 distlib==0.3.6 distro==1.8.0 msgpack==1.0.5 packaging==21.3 platformdirs==3.8.1 pyparsing==3.1.0 pyproject-hooks==1.0.0 requests==2.31.0 certifi==2023.05.07 chardet==5.1.0 idna==3.4 urllib3==1.26.16 rich==13.4.2 (Unable to locate actual module version, using vendor.txt specified version) pygments==2.15.1 typing_extensions==4.7.1 (Unable to locate actual module version, using vendor.txt specified version) resolvelib==1.0.1 setuptools==68.0.0 (Unable to locate actual module version, using vendor.txt specified version) six==1.16.0 tenacity==8.2.2 (Unable to locate actual module version, using vendor.txt specified version) tomli==2.0.1 webencodings==0.5.1 (Unable to locate actual module version, using vendor.txt specified version) Compatible tags: 30 cp38-cp38-win32 </code></pre> <p>Unable to figure out the compatibilty. Please help</p>
<python><pandas><dataframe><pyarrow>
2023-08-19 18:50:38
1
315
stackoverflow rohit
76,936,582
3,324,491
folium map with simple slider as stand alone html
<p>I've got this basic folium map</p> <pre><code>import folium from folium import plugins m = folium.Map( location=[51.5, -0.09], zoom_start=11 ) m.save('map.html') </code></pre> <p>I want to add a slider on the bottom that changes the colour of the pins based on the time. i've got a txt file that has the name of the place, the longitude, latitude, and variable and value. I'm not sure how to make a slider with the variable numbers that would then correspond to the value [0,0.25,0.5,0.75,1] that would correspond to a change in colour of the pin on the map.</p> <pre><code>names longitude latitude variable value Bank of friendship -0.0990573382928028 51.5583162622095 1 1 Bank of friendship -0.0990573382928028 51.5583162622095 2 1 Bank of friendship -0.0990573382928028 51.5583162622095 3 1 Bank of friendship -0.0990573382928028 51.5583162622095 4 1 Bank of friendship -0.0990573382928028 51.5583162622095 5 0.5 Bank of friendship -0.0990573382928028 51.5583162622095 6 0.75 Bank of friendship -0.0990573382928028 51.5583162622095 7 0.25 Bank of friendship -0.0990573382928028 51.5583162622095 8 0.25 Bank of friendship -0.0990573382928028 51.5583162622095 9 0 </code></pre> <p>hoping you can help!</p>
<python><folium><folium-plugins>
2023-08-19 18:40:30
0
559
user3324491
76,936,532
10,963,057
how to manage legend_tracegroupgap for different row_heights in subplots in plotly?
<p>i found this example code for subplot with legend at each subplot. i changed it by adding row_heights and now the legend do not fit to the subplots.</p> <pre><code>import pandas as pd import plotly.express as px df = px.data.gapminder().query(&quot;continent=='Americas'&quot;) from plotly.subplots import make_subplots import plotly.graph_objects as go fig = make_subplots(rows=3, cols=1, row_heights=[2,1,0.75]) fig.append_trace(go.Scatter( x=df.query(&quot;country == 'Canada'&quot;)['year'], y=df.query(&quot;country == 'Canada'&quot;)['lifeExp'], name = 'Canada', legendgroup = '1' ), row=1, col=1) fig.append_trace(go.Scatter( x=df.query(&quot;country == 'United States'&quot;)['year'], y=df.query(&quot;country == 'United States'&quot;)['lifeExp'], name = 'United States', legendgroup = '1' ), row=1, col=1) fig.append_trace(go.Scatter( x=df.query(&quot;country == 'Mexico'&quot;)['year'], y=df.query(&quot;country == 'Mexico'&quot;)['lifeExp'], name = 'Mexico', legendgroup = '2' ), row=2, col=1) fig.append_trace(go.Scatter( x=df.query(&quot;country == 'Colombia'&quot;)['year'], y=df.query(&quot;country == 'Colombia'&quot;)['lifeExp'], name = 'Colombia', legendgroup = '2' ), row=2, col=1) fig.append_trace(go.Scatter( x=df.query(&quot;country == 'Brazil'&quot;)['year'], y=df.query(&quot;country == 'Brazil'&quot;)['lifeExp'], name = 'Brazil', legendgroup = '2' ), row=2, col=1) fig.append_trace(go.Scatter( x=df.query(&quot;country == 'Argentina'&quot;)['year'], y=df.query(&quot;country == 'Argentina'&quot;)['lifeExp'], name = 'Argentina', legendgroup = '3' ), row=3, col=1) fig.append_trace(go.Scatter( x=df.query(&quot;country == 'Chile'&quot;)['year'], y=df.query(&quot;country == 'Chile'&quot;)['lifeExp'], name = 'Chile', legendgroup = '3' ), row=3, col=1) fig.update_layout( height=800, width=800, title_text=&quot;Life Expectancy in the Americas&quot;, xaxis3_title = 'Year', yaxis1_title = 'Age', yaxis2_title = 'Age', yaxis3_title = 'Age', legend_tracegroupgap = 100, 
yaxis1_range=[50, 90], yaxis2_range=[50, 90], yaxis3_range=[50, 90] ) fig.show() </code></pre> <p>Now I am looking for a way to manage <code>legend_tracegroupgap</code> for different row_heights. I expect each legend to sit at the top, beside its subplot.</p>
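Editor's sketch: a single `legend_tracegroupgap` constant cannot align legend groups with rows of unequal heights. One workaround is to compute where each row sits in paper coordinates (mirroring how `make_subplots` splits the figure for given `row_heights` and a `vertical_spacing` — the spacing value below is an assumption, pass your own) and position a per-row legend there; on recent Plotly versions (5.15+) each trace can be assigned its own legend (`legend="legend2"`, etc.) whose `y` you set from these values. The geometry:

```python
def row_centers(row_heights, vertical_spacing=0.1):
    """Paper-coordinate vertical centre of each subplot row, top row first.

    Mirrors how make_subplots divides the figure: spacing between rows,
    remaining height split proportionally to row_heights.
    """
    n = len(row_heights)
    avail = 1.0 - vertical_spacing * (n - 1)   # height left after the gaps
    total = sum(row_heights)
    centers = []
    top = 1.0
    for h in row_heights:
        frac = h / total * avail               # this row's share of the figure
        centers.append(top - frac / 2)
        top -= frac + vertical_spacing
    return centers
```

With `row_centers([2, 1, 0.75])` you would then set, e.g., `fig.update_layout(legend2={"y": centers[1]})` for the middle row's legend (assuming the multiple-legends feature of your Plotly version).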
<python><plotly><legend><subplot>
2023-08-19 18:25:45
1
1,151
Alex
76,936,515
3,168,824
Cannot parse a set[AnyUrl] from .env file
<p>I'm trying to load .env config via pydantic into my fastapi app. That works for single values but not for a set (of URL's to allow for CORS in this example):</p> <p>Here is the pydantic-schema:</p> <pre><code>from pydantic import AnyUrl from pydantic_settings import BaseSettings, SettingsConfigDict class AppSettings(BaseSettings): model_config = SettingsConfigDict( env_file=&quot;.env&quot;, env_file_encoding=&quot;utf-8&quot; ) # env_prefix = &quot;app_&quot; MODE: str = &quot;remote&quot; ENVIRONMENT: str = &quot;dev&quot; # DATABASE_URL: PostgresDsn IS_GOOD_ENV: bool = True ALLOWED_CORS_ORIGINS: set[AnyUrl] </code></pre> <p>and my .env file:</p> <pre><code>MODE=remote ENVIRONMENT=dev ALLOWED_CORS_ORIGINS=&quot;https://localhost&quot;,&quot;https://localhost:3000&quot; </code></pre> <p>I tried with and without the quotes and even with set-brackets, but it always has error parsing that last line. Pretty sure I missed something obvious, but my search was fruitless.</p> <p>As always, thanks for any hint.</p>
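Editor's note: pydantic-settings treats non-scalar fields such as `set[AnyUrl]` as "complex" and parses the raw environment value as JSON, so the `.env` line should be a JSON array, e.g. `ALLOWED_CORS_ORIGINS=["https://localhost","https://localhost:3000"]`. A stdlib-only illustration of the shape it expects (pydantic itself is not imported here):

```python
import json

# The value as it should appear after `ALLOWED_CORS_ORIGINS=` in .env:
raw = '["https://localhost", "https://localhost:3000"]'

# Conceptually what pydantic-settings does for a complex field:
# JSON-decode the string, then coerce to the annotated type (here, a set).
origins = set(json.loads(raw))
print(origins)
```

The comma-separated quoted form in the question (`"https://localhost","https://localhost:3000"`) is not valid JSON, which is why parsing that last line fails.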
<python><fastapi><pydantic>
2023-08-19 18:22:22
1
1,360
Christof Kälin
76,936,341
14,425,271
Shopee API to get products data doesn't seem to work anymore (it worked before)
<p>Here's a simple <a href="https://scrapy.org/" rel="nofollow noreferrer">scrapy</a> spider that anyone can use for testing.</p> <pre><code>from scrapy.utils.response import open_in_browser import scrapy import json class TestSpider(scrapy.Spider): name = &quot;test-spider&quot; allowed_domains = [&quot;shopee.ph&quot;] shopee_cookies = '[{&quot;name&quot;: &quot;csrftoken&quot;, &quot;value&quot;: &quot;RvxBdTixvBfdTR3xfQwbcYippqz8jEbF&quot;, &quot;domain&quot;: &quot;shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: -1, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;_gcl_au&quot;, &quot;value&quot;: &quot;1.1.1251411089.1692464842&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1700240842, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;SPC_SI&quot;, &quot;value&quot;: &quot;sTLbZAAAAABwY1ZrR1NNU+WdNgAAAAAAdzlCYXIyVVQ=&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1692551246.336331, &quot;httpOnly&quot;: true, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;_fbp&quot;, &quot;value&quot;: &quot;fb.1.1692464842990.689078803&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1700240846, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;SPC_R_T_IV&quot;, &quot;value&quot;: &quot;NnVEbThnRjREMnNMZVpGVQ==&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024846.336348, &quot;httpOnly&quot;: false, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;SPC_T_ID&quot;, &quot;value&quot;: 
&quot;fn/OKngQO3doGdfFGyo/6mzLiviELHkKEbWM9J+x/ezTl/baT96grQer6ILrYX9tj3Kqs71Jg+hCimaK/XauidJXrd6HdPd2Smbxbu/fEStjOJi5g9/ucMmbBwuyh5M6H3TOGdpUop/9Q/zdpNj6MyxZaODnNsT5XprfsQxjB5g=&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024846.336355, &quot;httpOnly&quot;: true, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;SPC_T_IV&quot;, &quot;value&quot;: &quot;NnVEbThnRjREMnNMZVpGVQ==&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024846.336362, &quot;httpOnly&quot;: true, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;SPC_F&quot;, &quot;value&quot;: &quot;jiOtuCSNUaap3U4BHHfzhDihWwFht32f&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024843.162052, &quot;httpOnly&quot;: false, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;REC_T_ID&quot;, &quot;value&quot;: &quot;dc8a2570-3eb2-11ee-ac9b-2cea7fce6c95&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024843.16206, &quot;httpOnly&quot;: true, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;SPC_R_T_ID&quot;, &quot;value&quot;: &quot;fn/OKngQO3doGdfFGyo/6mzLiviELHkKEbWM9J+x/ezTl/baT96grQer6ILrYX9tj3Kqs71Jg+hCimaK/XauidJXrd6HdPd2Smbxbu/fEStjOJi5g9/ucMmbBwuyh5M6H3TOGdpUop/9Q/zdpNj6MyxZaODnNsT5XprfsQxjB5g=&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024846.33634, &quot;httpOnly&quot;: false, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;_QPWSDCXHZQA&quot;, &quot;value&quot;: &quot;4a585493-a7a0-4f0e-d696-687295d3a4c3&quot;, &quot;domain&quot;: &quot;shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, 
&quot;expires&quot;: 1692496379, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;IDE&quot;, &quot;value&quot;: &quot;AHWqTUm1b5ZflCqDTn6cpHDjyoeqH6iLfXcCOOm4YNaP8CHTsAZ7F_Daq4-zO-bsGIk&quot;, &quot;domain&quot;: &quot;.doubleclick.net&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024843.787698, &quot;httpOnly&quot;: true, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;None&quot;}, {&quot;name&quot;: &quot;AMP_TOKEN&quot;, &quot;value&quot;: &quot;%24NOT_FOUND&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1692468444, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;_ga&quot;, &quot;value&quot;: &quot;GA1.2.833255521.1692464843&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024844.498551, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;_gid&quot;, &quot;value&quot;: &quot;GA1.2.1347861977.1692464844&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1692551244, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;_dc_gtm_UA-61918643-6&quot;, &quot;value&quot;: &quot;1&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1692464904, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;shopee_webUnique_ccd&quot;, &quot;value&quot;: &quot;raj%2F3ukNopIWTrFjVLQeGA%3D%3D%7C1%2BjiV3ga9OlzuAELTZtedUY5BlP1ZNVH5ybZJx2D4KNA9dGTvtFakjnNZvR64zKNG6yBDfEXdabTE%2FRKow%3D%3D%7CsWIQ7u7pR4F3BD7E%7C08%7C3&quot;, &quot;domain&quot;: &quot;shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, 
&quot;expires&quot;: 1692496381, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;ds&quot;, &quot;value&quot;: &quot;065598fda3b7cca4e5e241e446a075e9&quot;, &quot;domain&quot;: &quot;shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1692496381, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;SPC_EC&quot;, &quot;value&quot;: &quot;RTJYa2Q5WEV4UDNnN3VGWr68rFv1FRJEeVkpwAzlu09WhtwSxFE1cZlwpQYRhhR56REixPuKfekz6oioE4EaDK12bvALil+QZ5B0EfG42psIFWNDe1moiErTZndyu1502KUlh5+OQoUWCvm1XkVY+2Iy7Jk5qyPI2J655JeZwv0=&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024846.336291, &quot;httpOnly&quot;: true, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;SPC_ST&quot;, &quot;value&quot;: &quot;.ek1DVmo5aGJjaVBxcklYU5o4/3v/8ndPeV2/fwtzWYUh1kWOopWvn7SFoQXWuS37Rs+J+Ym7U8OwOG73JbiFRWyOOo1GhKBgwhUeeWfE+q9XPDZXACC33t7qphoBu5hyWvR/G+WkpSUbIkmGPzprCIvhw7Qwyt8UFxk/4bA+47QQQUiDcPfHIq/sJqmVMEqH3Al6nCTDeEh/JCDLALRvNQ==&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024846.336324, &quot;httpOnly&quot;: true, &quot;secure&quot;: true, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;SPC_CLIENTID&quot;, &quot;value&quot;: &quot;amlPdHVDU05VYWFwgvlavxoisbqjmacw&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024846.336374, &quot;httpOnly&quot;: false, &quot;secure&quot;: false, &quot;sameSite&quot;: &quot;Lax&quot;}, {&quot;name&quot;: &quot;_ga_CB0044GVTM&quot;, &quot;value&quot;: &quot;GS1.1.1692464843.1.0.1692464846.57.0.0&quot;, &quot;domain&quot;: &quot;.shopee.ph&quot;, &quot;path&quot;: &quot;/&quot;, &quot;expires&quot;: 1727024846.367333, &quot;httpOnly&quot;: false, &quot;secure&quot;: 
false, &quot;sameSite&quot;: &quot;Lax&quot;}]' shopee_cookies = json.loads(shopee_cookies) def start_requests(self): yield scrapy.Request( &quot;https://shopee.ph/api/v4/pdp/get_pc?shop_id=237078553&amp;item_id=6929743700&quot;, cookies=self.shopee_cookies, headers={&quot;x-api-source&quot;:&quot;pc&quot;,&quot;af-ac-enc-dat&quot;:&quot;null&quot;}, callback=self.parse_item, ) def parse_item(self,response): open_in_browser(response) </code></pre> <p>Feel free to test it out as I provided the cookies as well (because the cookies are needed). Now as you can see, this piece of code actually worked before, around early August 2023. I had challenges to make it work before but thanks to <a href="https://stackoverflow.com/a/74333263/14425271">this answer</a> I managed to get the products data. You can even see my <a href="https://stackoverflow.com/questions/73424180/scrape-shopee-api-v4/74333263#comment135421636_74333263">comment there</a>. Here's an image I screenshot before proving that it did work around early August.</p> <p><a href="https://i.sstatic.net/SfcZ7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SfcZ7.jpg" alt="enter image description here" /></a></p> <p>As you can see the data is there and works well. Thanks to the headers <code>{&quot;x-api-source&quot;:&quot;pc&quot;,&quot;af-ac-enc-dat&quot;:&quot;null&quot;}</code> that made it worked. However as of August 20, 2023 as I am typing this. It seems that it doesn't work anymore. I'm not sure why, but I think there's some changes with the API that has happened. I spent all day trying to figure out and play with the headers but no luck. 
All I got right now as a result is this.</p> <p><strong>Output I am having right now:</strong></p> <blockquote> <p>{&quot;is_customized&quot;:false,&quot;is_login&quot;:true,&quot;platform&quot;:0,&quot;action_type&quot;:2,&quot;error&quot;:90309999,&quot;tracking_id&quot;:&quot;24d95bd5-40e5-44cd-b30b-885711481170&quot;,&quot;report_extra_info&quot;:&quot;&quot;}</p> </blockquote> <p>Here is the actual product page <a href="https://shopee.ph/Realme-C53-C55-C35-C21Y-C25Y-RealmeC11-2021-C25-C25S-C15-C12-C2-C17-C3-Realme8-Pro-8i-7i-5-5i-6i-Fashion-Electroplated-Maple-Leaf-Square-Plating-Soft-Silicone-Case-Meiting-i.237078553.6929743700?sp_atk=f0eb6655-bb0f-496e-abf3-b1fb07e318a2&amp;xptdk=f0eb6655-bb0f-496e-abf3-b1fb07e318a2" rel="nofollow noreferrer">link</a> I used for testing. You can see the API there when you do &quot;Inspect Element&quot; -&gt; &quot;Network&quot; tab. Take note that the output I am having right now is the same one I had before I managed to implement <a href="https://stackoverflow.com/a/74333263/14425271">this solution</a>. But right now it's back at it again. So the question is, could there be a way to make it work again? I feel like it's something with the headers that I am not getting it right, but I am not sure how to figure it out and that is why I am seeking help right now as I am out of solutions.</p>
<python><json><scrapy><shopee>
2023-08-19 17:39:02
4
4,226
Ice Bear
76,936,038
3,247,006
Does `cache.set()` use `version=1` by default instead of `version=None` in Django?
<p>If I only set and get <code>David</code> with <code>version=0</code>, then I can get <code>John</code> and <code>David</code> in order as shown below. *I use <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#local-memory-caching" rel="nofollow noreferrer">LocMemCache</a> which is the default cache in Django and I'm learning <a href="https://docs.djangoproject.com/en/4.2/topics/cache/" rel="nofollow noreferrer">Django Cache</a>:</p> <pre class="lang-py prettyprint-override"><code>from django.core.cache import cache cache.set(&quot;name&quot;, &quot;John&quot;) cache.set(&quot;name&quot;, &quot;David&quot;, version=0) print(cache.get(&quot;name&quot;)) # John print(cache.get(&quot;name&quot;, version=0)) # David </code></pre> <p>And, if I only set and get <code>David</code> with <code>version=2</code>, then I can get <code>John</code> and <code>David</code> in order as well as shown below:</p> <pre class="lang-py prettyprint-override"><code>from django.core.cache import cache cache.set(&quot;name&quot;, &quot;John&quot;) cache.set(&quot;name&quot;, &quot;David&quot;, version=2) print(cache.get(&quot;name&quot;)) # John print(cache.get(&quot;name&quot;, version=2)) # David </code></pre> <p>But, if I only set and get <code>David</code> with <code>version=1</code>, then I can get <code>David</code> and <code>David</code> in order as shown below so this is because <code>John</code> is set with <code>version=1</code> by default?:</p> <pre class="lang-py prettyprint-override"><code>from django.core.cache import cache cache.set(&quot;name&quot;, &quot;John&quot;) cache.set(&quot;name&quot;, &quot;David&quot;, version=1) print(cache.get(&quot;name&quot;)) # David print(cache.get(&quot;name&quot;, version=1)) # David </code></pre> <p>In addition, if I set and get <code>John</code> and <code>David</code> without <code>version=1</code>, then I can get <code>David</code> and <code>David</code> in order as well as shown below:</p> <pre class="lang-py 
prettyprint-override"><code>from django.core.cache import cache cache.set(&quot;name&quot;, &quot;John&quot;) cache.set(&quot;name&quot;, &quot;David&quot;) print(cache.get(&quot;name&quot;)) # David print(cache.get(&quot;name&quot;)) # David </code></pre> <p>I know that <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.set" rel="nofollow noreferrer">the doc</a> shows <code>version=None</code> for <code>cache.set()</code> as shown below:</p> <pre class="lang-py prettyprint-override"><code> ↓ ↓ Here ↓ ↓ cache.set(key, value, timeout=DEFAULT_TIMEOUT, version=None) </code></pre> <p>So, does <code>cache.set()</code> actually use <code>version=1</code> by default instead of <code>version=None</code> in Django?</p>
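Editor's note: yes — `version=None` means "use the cache's default version", which comes from the cache's `VERSION` setting and defaults to 1, so `cache.set("name", "John")` lands on the same underlying key as `version=1`, while `version=0` and `version=2` get separate keys. A simplified sketch of how Django's default key function composes keys:

```python
def default_key_func(key, key_prefix="", version=None, default_version=1):
    """Sketch of Django's default cache key construction: 'prefix:version:key'.

    version=None falls back to the cache's default version (1 unless the
    VERSION setting says otherwise), which is why no-version and version=1
    collide.
    """
    if version is None:
        version = default_version
    return f"{key_prefix}:{version}:{key}"

print(default_key_func("name"))             # same key as version=1
print(default_key_func("name", version=1))
print(default_key_func("name", version=0))  # a distinct key
```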
<python><django><caching><version><django-cache>
2023-08-19 16:12:06
2
42,516
Super Kai - Kazuya Ito
76,935,995
10,755,782
How to send log data to a remote server in a Python-Flask serverless function running in Vercel without making the main function wait
<p>The following is a minimum working example of my Python/Flask, which is working as the back-end of my web app. It is hosted in Vercell and it is working as a &quot;serverless function&quot;. I'm keeping a log of the incoming requests and I have to write this to a remote server. Now, I'm using the <code>Process</code> submodule from <code>multiprocessing</code> to send the log data. I use this approach because, sometimes the writing log data can take several seconds and the main function will have to wait till then.</p> <p>However, the problem is that since this is working as a serverless function, the request is closed as soon as the <code>return</code> statement is executed, and about half of the time the log data is not being written.</p> <p>What is the best way to write log data to a remote server in such a situation? Without having to make the main function wait and also making sure that the log data is being written?</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, request, jsonify from flask_cors import CORS import time from multiprocessing import Process def remote_log (log_data): time.sleep(1) return 0 def send_data_to_server(log_data): remote_log(log_data=log_data) app = Flask(__name__) cors = CORS(app, resources={r&quot;/api/*&quot;: {&quot;origins&quot;: &quot;*&quot;}}) @app.route('/') def home(): return 'Server Deployed' @app.route('/about') def about(): return 'About' @app.route('/api/chat', methods=['POST']) def chat(): log_data = &quot;This is my data&quot; # this is where the problem is p = Process(target=send_data_to_server, args= (log_data,)) p.start() return &quot;My response&quot; if __name__ == '__main__': app.run(debug=True) </code></pre>
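Editor's sketch: on a serverless platform the instance can be frozen or reclaimed as soon as the handler returns, so a `Process` spawned inside `chat()` is not guaranteed to finish. The honest trade-offs are (a) spend a small, bounded amount of time flushing the log before returning, or (b) hand the event to an external queue or log-drain service that persists it immediately. A stdlib sketch of (a) — the function name and the deadline value are illustrative:

```python
import threading


def flush_with_deadline(send_log, args=(), deadline=0.5):
    """Run send_log in a thread, but wait at most `deadline` seconds
    before letting the response go out.

    Returns True if the log write finished within the deadline. In a
    serverless environment a False return means the write may be lost
    once the instance is frozen, so treat this as best-effort only.
    """
    t = threading.Thread(target=send_log, args=args, daemon=True)
    t.start()
    t.join(deadline)          # bounded extra latency added to the response
    return not t.is_alive()
```

For guaranteed delivery without adding latency, pushing the event to a managed queue (and letting a separate consumer write the log) is the usual pattern.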
<python><flask><serverless><serverless-framework><vercel>
2023-08-19 16:04:06
0
660
brownser
76,935,984
4,575,197
AssertionError: group argument must be None for now in a python library
<p>I'm trying to use a library called gdelt which simply downloads data from the GDELT website or from Google Query, I'm not sure. For installation and other info please <a href="https://github.com/linwoodc3/gdeltPyR/tree/master" rel="nofollow noreferrer">visit</a> or</p> <pre class="lang-bash prettyprint-override"><code>pip install gdeltPyR </code></pre> <p>You can also install directly from GitHub</p> <p><em><strong>bash</strong></em></p> <pre><code> pip install git+https://github.com/linwoodc3/gdeltPyR </code></pre> <p>It's been a long time since anyone updated it, and unfortunately I need it for my master's thesis, so it would be a huge help if I can fix it somehow.</p> <p>Requests for the <code>events</code> or <code>mentions</code> tables work, but unfortunately the <code>gkg</code> table doesn't.</p> <p>If you send the request for only one day it works like a charm.</p> <pre><code>results = gd.Search('2016 10 19',table='gkg') </code></pre> <p>But when I set <code>coverage=True</code>, or when I query for a time period, it returns the error <code>AssertionError: group argument must be None for now</code>.</p> <p>The code that causes the error:</p> <p><code>results = gd.Search(['2016 10 19','2023 01 22'],table='gkg')</code></p> <p>The whole error with traceback:</p> <pre><code>File [c:\Users\\anaconda3\envs\myenv\Lib\site-packages\gdelt\base.py:634](file:///C:/Users//anaconda3/envs/myenv/Lib/site-packages/gdelt/base.py:634), in gdelt.Search(self, date, table, coverage, translation, output, queryTime, normcols) 630 downloaded_dfs = list(pool.imap_unordered(eventWork, 631 self.download_list)) 632 else: --&gt; 634 pool = NoDaemonProcessPool(processes=cpu_count()) 635 downloaded_dfs = list(pool.imap_unordered(_mp_worker, 636 self.download_list, 637 )) 638 pool.close() File [c:\Users\\anaconda3\envs\myenv\Lib\multiprocessing\pool.py:215](file:///C:/Users//anaconda3/envs/myenv/Lib/multiprocessing/pool.py:215), in Pool.__init__(self, processes, initializer,
initargs, maxtasksperchild, context) 213 self._processes = processes 214 try: --&gt; 215 self._repopulate_pool() 216 except Exception: 217 for p in self._pool: File [c:\Users\\anaconda3\envs\myenv\Lib\multiprocessing\pool.py:306](file:///C:/Users//anaconda3/envs/myenv/Lib/multiprocessing/pool.py:306), in Pool._repopulate_pool(self) 305 def _repopulate_pool(self): --&gt; 306 return self._repopulate_pool_static(self._ctx, self.Process, 307 self._processes, 308 self._pool, self._inqueue, 309 self._outqueue, self._initializer, 310 self._initargs, 311 self._maxtasksperchild, 312 self._wrap_exception) File [c:\Users\\anaconda3\envs\myenv\Lib\multiprocessing\pool.py:322](file:///C:/Users//anaconda3/envs/myenv/Lib/multiprocessing/pool.py:322), in Pool._repopulate_pool_static(ctx, Process, processes, pool, inqueue, outqueue, initializer, initargs, maxtasksperchild, wrap_exception) 318 &quot;&quot;&quot;Bring the number of pool processes up to the specified number, 319 for use after reaping workers which have exited. 320 &quot;&quot;&quot; 321 for i in range(processes - len(pool)): --&gt; 322 w = Process(ctx, target=worker, 323 args=(inqueue, outqueue, 324 initializer, 325 initargs, maxtasksperchild, 326 wrap_exception)) 327 w.name = w.name.replace('Process', 'PoolWorker') 328 w.daemon = True File [c:\Users\\anaconda3\envs\myenv\Lib\multiprocessing\process.py:82](file:///C:/Users//anaconda3/envs/myenv/Lib/multiprocessing/process.py:82), in BaseProcess.__init__(self, group, target, name, args, kwargs, daemon) ... 
---&gt; 82 assert group is None, 'group argument must be None for now' 83 count = next(_process_counter) 84 self._identity = _current_process._identity + (count,) AssertionError: group argument must be None for now </code></pre> <p>this is part of the code in file <em><strong>base</strong></em> which error occours (the link to the <a href="https://github.com/linwoodc3/gdeltPyR/tree/master/gdelt" rel="nofollow noreferrer">file just in case</a>).</p> <pre><code>elif self.version == 2: if self.table == 'events' or self.table == '': columns = self.events_columns if self.coverage is True: # pragma: no cover self.download_list = (urlsv2events(v2RangerCoverage( _dateRanger(self.date)))) else: self.download_list = (urlsv2events(v2RangerNoCoverage( _dateRanger(self.date)))) if self.table == 'gkg': columns = self.gkg_columns if self.coverage is True: # pragma: no cover self.download_list = (urlsv2gkg(v2RangerCoverage( _dateRanger(self.date)))) else: self.download_list = (urlsv2gkg(v2RangerNoCoverage( _dateRanger(self.date)))) # print (&quot;2 gkg&quot;, urlsv2gkg(self.datesString)) if self.table == 'mentions': columns = self.mentions_columns if self.coverage is True: # pragma: no cover self.download_list = (urlsv2mentions(v2RangerCoverage( _dateRanger(self.date)))) else: self.download_list = (urlsv2mentions(v2RangerNoCoverage( _dateRanger(self.date)))) if isinstance(self.datesString, str): if self.table == 'events': results = eventWork(self.download_list) else: # if self.table =='gkg': # results = eventWork(self.download_list) # # else: results = _mp_worker(self.download_list, proxies=self.proxies) else: if self.table == 'events': pool = Pool(processes=cpu_count()) downloaded_dfs = list(pool.imap_unordered(eventWork, self.download_list)) else: pool = NoDaemonProcessPool(processes=cpu_count()) downloaded_dfs = list(pool.imap_unordered(_mp_worker, self.download_list, )) pool.close() pool.terminate() pool.join() # print(downloaded_dfs) results = pd.concat(downloaded_dfs) del 
downloaded_dfs results.reset_index(drop=True, inplace=True) </code></pre> <p>I partially found the answer here:</p> <p><a href="https://docs.python.org/2/library/threading.html#threading.Thread" rel="nofollow noreferrer">https://docs.python.org/2/library/threading.html#threading.Thread</a></p> <p>But I don't know exactly what to change or how. This is the first time I have needed to modify a library in order to write my own code. Any help would be appreciated.</p> <h2>EDIT:</h2> <p>Here's a <a href="https://github.com/linwoodc3/gdeltPyR/blob/master/examples/Basic%20gdeltPyR%20Query.ipynb" rel="nofollow noreferrer">Jupyter notebook</a> in which you can easily test.</p>
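Editor's sketch: the traceback shows the pool internals on recent Python versions calling their `Process` factory as `Process(ctx, target=worker, ...)` — the pool's context is passed as the first positional argument. gdeltPyR's old `NoDaemonProcessPool` uses a plain `Process` subclass there, so `ctx` lands in the `group` parameter and trips the assertion. A commonly used replacement builds the non-daemonic pool through a custom context instead (a sketch — you would patch your installed copy of `gdelt/base.py` to use `NestablePool` in place of `NoDaemonProcessPool`; exact placement depends on the library version):

```python
import multiprocessing
import multiprocessing.pool


class NoDaemonProcess(multiprocessing.Process):
    # Force daemon=False so pool workers are allowed to spawn children.
    @property
    def daemon(self):
        return False

    @daemon.setter
    def daemon(self, value):
        pass  # silently ignore the pool's attempt to set daemon=True


class NoDaemonContext(type(multiprocessing.get_context())):
    # A context whose Process is non-daemonic.
    Process = NoDaemonProcess


class NestablePool(multiprocessing.pool.Pool):
    """Pool whose workers may themselves start processes (Python >= 3.8)."""

    def __init__(self, *args, **kwargs):
        kwargs["context"] = NoDaemonContext()
        super().__init__(*args, **kwargs)
```

Replacing `NoDaemonProcessPool(processes=cpu_count())` with `NestablePool(processes=cpu_count())` avoids the `group` assertion because the context is passed through the `context` keyword, where the pool machinery expects it.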
<python><multithreading><multiprocessing><gdelt>
2023-08-19 16:01:01
1
10,490
Mostafa Bouzari
76,935,873
13,860,719
Fastest way to merge columns of the same type in Numpy
<p>Say I have a numpy array <code>A</code> that records the distances from <strong>different</strong> 1D points. The first column records the coordinate of each destination point, the second column records the pairwise distances. Now I want to construct a distance matrix <code>D</code> that records the pairwise distances (rows correspond to source points, columns correspond to destination points, the distance will be recorded as infinite if unknown). This should be easy, but becomes tricky when there are duplicate destination points. Below is a simple example: I have two points with the coordinate 2.5, and two points with the coordinate 6.1. I tried using <code>np.unique</code> to get the unique columns, but when there are duplicates, it picks the first-appeared distance and lose the distance information of the other pairs</p> <pre><code>import numpy as np A = np.array([[7.3, 0.25], [2.5, 0.32], [3.7, 0.45], [6.1, 0.55], [2.5, 0.91], [4.8, 0.77], [8.6, 0.35], [6.1, 0.82]]) D = np.ones((A.shape[0],A.shape[0])) * np.inf np.fill_diagonal(D, A[:,1]) unq_ids = np.sort(np.unique(A[:,0], return_index=True)[1]) D = D[:,unq_ids] print(D) </code></pre> <p>This gives</p> <pre><code>array([[0.25, inf, inf, inf, inf, inf], [ inf, 0.32, inf, inf, inf, inf], [ inf, inf, 0.45, inf, inf, inf], [ inf, inf, inf, 0.55, inf, inf], [ inf, inf, inf, inf, inf, inf], [ inf, inf, inf, inf, 0.77, inf], [ inf, inf, inf, inf, inf, 0.35], [ inf, inf, inf, inf, inf, inf]]) </code></pre> <p>As you can see, the row 4 and 7 doesn't have any distance information, but I am expecting</p> <pre><code>array([[0.25, inf, inf, inf, inf, inf], [ inf, 0.32, inf, inf, inf, inf], [ inf, inf, 0.45, inf, inf, inf], [ inf, inf, inf, 0.55, inf, inf], [ inf, 0.91, inf, inf, inf, inf], [ inf, inf, inf, inf, 0.77, inf], [ inf, inf, inf, inf, inf, 0.35], [ inf, inf, inf, 0.82, inf, inf]]) </code></pre> <p>In fact, I have a more than 10000 pairs of points, so I want to know a fast way in Numpy to first detect 
duplicates of the destination points, and then &quot;merge&quot; the distance matrix's columns that correspond to the same destination point (please don't use Pandas).</p>
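Editor's sketch: the row hosting each distance is just its own index `i`, so only a column index per destination is needed, and `np.unique(..., return_inverse=True)` provides exactly that; a single fancy-indexed assignment then scatters every distance with no loop and no column merging afterwards. Ranking the unique values by first appearance keeps the column order of the expected output:

```python
import numpy as np

A = np.array([[7.3, 0.25], [2.5, 0.32], [3.7, 0.45], [6.1, 0.55],
              [2.5, 0.91], [4.8, 0.77], [8.6, 0.35], [6.1, 0.82]])

vals, first, inv = np.unique(A[:, 0], return_index=True, return_inverse=True)

# Re-rank the (sorted) unique values into first-appearance order.
order = np.argsort(first)
rank = np.empty_like(order)
rank[order] = np.arange(order.size)
cols = rank[inv]                         # column index for every row of A

D = np.full((A.shape[0], vals.size), np.inf)
D[np.arange(A.shape[0]), cols] = A[:, 1]  # one vectorised scatter
print(D)
```

Rows 4 and 7 now carry 0.91 and 0.82 in the columns of their duplicate destinations (2.5 and 6.1), matching the expected matrix. If sorted column order is acceptable, the `argsort`/`rank` step can be dropped and `inv` used directly.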
<python><arrays><python-3.x><numpy><performance>
2023-08-19 15:36:57
3
2,963
Shaun Han
76,935,832
11,426,624
assign and iterate over many conditions
<p>I have a data frame with a date column and I also have two lists with dates. One of the lists contains start dates and the other end dates (they have the same length). I would like to assign a new column called <code>variable</code> which is 1 if the date is between one of the start/end date pairs and 0 otherwise. I tried to do the below but it does not work. Is there a way to do this without loops?</p> <pre><code>start_dates = [date(2023,4,1), date(2023,4,24), date(2023,5,15)] end_dates = [date(2023,4,14), date(2023,5,1), date(2023,5,30)] no_days = 92 intervals = [date(2023, 3, 1) + timedelta(days=1*i) for i in range(0, no_days-2)] df = pd.DataFrame({'date':intervals}) #does not work df.assign(variable=np.where((df.date &gt;= start) &amp; (df.date &lt;= end), 1, 0) for start in start_dates for end in end_dates) </code></pre>
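Editor's sketch: the `assign` call as written passes a generator expression, which is never evaluated as a condition. Comparing every date against every interval at once — broadcasting an `(n, 1)` date column against the `(k,)` start and end arrays, then reducing with `any(axis=1)` — avoids both the bug and any Python-level loop. A sketch reusing the question's setup:

```python
from datetime import date, timedelta

import numpy as np
import pandas as pd

start_dates = pd.to_datetime([date(2023, 4, 1), date(2023, 4, 24), date(2023, 5, 15)])
end_dates = pd.to_datetime([date(2023, 4, 14), date(2023, 5, 1), date(2023, 5, 30)])

no_days = 92
intervals = [date(2023, 3, 1) + timedelta(days=i) for i in range(no_days - 2)]
df = pd.DataFrame({"date": pd.to_datetime(intervals)})

d = df["date"].values[:, None]                                # shape (n, 1)
inside = (d >= start_dates.values) & (d <= end_dates.values)  # shape (n, k)
df["variable"] = inside.any(axis=1).astype(int)               # 1 if in any interval
```

Each row of `inside` answers "is this date within interval j?" for all `k` intervals simultaneously; `any(axis=1)` collapses that to the 0/1 flag.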
<python><pandas>
2023-08-19 15:26:34
1
734
corianne1234
76,935,823
7,335,214
Convert overlapping bin ticks to scientific notation in seaborn histplot x-axis
<p>I have a dataframe of 100 rows of floats ranging from <code>0.000001</code> to <code>0.001986</code> that I wish to plot on a seaborn histplot, separated by class. I started with,</p> <pre><code>sns.histplot(data=df, x='score', hue='test_result', kde=True, color='red', stat='probability', multiple='layer') plt.show() </code></pre> <p>However, my bins were overlapping significantly. I added,</p> <pre><code>binwidth=0.000000001 </code></pre> <p>To the histplot to scale the bins to scientific notation, but this code took over 2 hours to run.</p> <p>My question is; is there a more <strong>computationally efficient</strong> way to do this conversion? I need to run the same code for multiple dataframes of similar size. If not, is there a better way to improve the <strong>readability of the x-axis bins</strong> instead of using scientific notation? Thanks!</p>
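Editor's note: the slowdown is arithmetic — with data spanning roughly 0.002 and `binwidth=0.000000001`, seaborn has to materialise about two million bins. Choosing the bin width from the data is far cheaper; the Freedman–Diaconis rule is a common choice, sketched below. Scientific notation on the axis is then a tick-formatting concern, handled with matplotlib's `ax.ticklabel_format(style='sci', axis='x', scilimits=(0, 0))` rather than via the bin width.

```python
import numpy as np


def fd_binwidth(x):
    """Freedman-Diaconis bin width: 2 * IQR / n^(1/3).

    A robust default for skewed data; pass the result as histplot's
    binwidth= instead of a hard-coded tiny constant.
    """
    x = np.asarray(x, dtype=float)
    q75, q25 = np.percentile(x, [75, 25])
    return 2.0 * (q75 - q25) / len(x) ** (1 / 3)
```

Usage would look like `sns.histplot(data=df, x='score', binwidth=fd_binwidth(df['score']), ...)`, optionally with `plt.xticks(rotation=45)` if the labels still collide.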
<python><matplotlib><seaborn><scientific-notation><histplot>
2023-08-19 15:23:49
1
668
Dawn
76,935,466
587,680
Initializing Flask-Session breaks Session in Dependency
<p>I'm trying to use <a href="https://flask-monitoringdashboard.readthedocs.io/en/latest/" rel="nofollow noreferrer">Flask-MonitoringDashboard</a>. If I use their minimal example, everything works fine but in my actual application, I use Flask-Session myself. Now the Flask-MonitoringDashboard also uses Flask-Session to store a session cookie and indicate if the user is logged in or not.</p> <p>Here's a working example that also sets the secret key:</p> <pre><code>from flask import Flask, session import flask_monitoringdashboard as dashboard from flask_session import Session class Config: SECRET_KEY = &quot;top-sercet!&quot; app = Flask(__name__) app.config.from_object(Config) dashboard.config.init_from(file='/home/pascal/test/dashboard_minimal/config.cfg') dashboard.bind(app) sess = Session() #sess.init_app(app) @app.route('/') def index(): return 'Hello World!' if __name__ == '__main__': app.run(debug=True) </code></pre> <p>Note: I use absolute paths just to be sure.</p> <p>Also here's <code>config.cfg</code> (copy-pasted from their example)</p> <pre><code>[dashboard] APP_VERSION=1.0 GIT=/home/pascal/test/dashboard_minimal/.git/ CUSTOM_LINK=dashboard MONITOR_LEVEL=3 OUTLIER_DETECTION_CONSTANT=2.5 SAMPLING_PERIOD=20 ENABLE_LOGGING=True [authentication] USERNAME=admin PASSWORD=admin SECURITY_TOKEN=cc83733cb0af8b884ff6577086b87909 [database] TABLE_PREFIX=fmd DATABASE=sqlite://///home/pascal/test/dashboard_minimal/dashboard.db [visualization] TIMEZONE=Europe/Amsterdam COLORS={'main':'[0,97,255]', 'static':'[255,153,0]'} </code></pre> <p>Now if you run <code>flask --app main run</code> and go to <code>localhost:5000/dashboard</code> you can log in with <code>admin</code> and <code>admin</code>.</p> <p>Now if we add <code>Flask-Session</code> to the app ourselves, i.e. we uncomment <code>sess.init_app(app)</code>, we get the error</p> <blockquote> <p>RuntimeError: The session is unavailable because no secret key was set.
Set the secret_key on the application to something unique and secret.</p> </blockquote> <p>which is weird, since we did in fact set it. The error is probably due to us <a href="https://stackoverflow.com/questions/26080872/secret-key-not-set-in-flask-session-using-the-flask-session-extension">not setting a session interface</a>. So we do that by changing the Config class to</p> <pre><code>class Config:
    SECRET_KEY = &quot;top-sercet!&quot;
    SESSION_TYPE = &quot;memcached&quot;
</code></pre> <p>The error is gone, but now we are in an infinite login loop when we try to log in! This is because Flask-MonitoringDashboard uses a <code>@secure</code> decorator where they read the session and check if the user is logged in, but the value is never set! Here's what they do:</p> <p>In <code>flask_monitoringdashboard/views/auth.py</code> we see the route that handles the login:</p> <pre><code>import flask
...

@blueprint.route('/login', methods=['GET', 'POST'])
def login():
    &quot;&quot;&quot;
    User for logging into the system. The POST-request checks whether the logging is valid.
    If this is the case, the user is redirected to the main page.
    :return:
    &quot;&quot;&quot;
    if flask.session.get(config.link + '_logged_in'):
        return redirect(url_for(MAIN_PAGE))
    if request.method == 'POST':
        name = request.form['name']
        password = request.form['password']
        user = get_user(username=name, password=password)
        if user is not None:
            on_login(user=user)
            return redirect(url_for(MAIN_PAGE))
    return render_template('fmd_login.html',
                           blueprint_name=config.blueprint_name,
                           show_login_banner=config.show_login_banner,
                           show_login_footer=config.show_login_footer,
                           )
</code></pre> <p>If we aren't logged in, we proceed to start authentication.
In the <a href="https://github.com/flask-dashboard/Flask-MonitoringDashboard/blob/7630227e9f961584a7b61d1f7c237789bcaced59/flask_monitoringdashboard/core/auth.py#L49" rel="nofollow noreferrer"><code>on_login</code></a> function they set the session:</p> <pre><code>from flask import session, redirect, url_for


def on_login(user):
    session[config.link + '_user_id'] = user.id
    session[config.link + '_logged_in'] = True
    if user.is_admin:
        session[config.link + '_admin'] = True
</code></pre> <p>Once logged in, we get redirected via <code>return redirect(url_for(MAIN_PAGE))</code>, which at some point calls the <a href="https://github.com/flask-dashboard/Flask-MonitoringDashboard/blob/7630227e9f961584a7b61d1f7c237789bcaced59/flask_monitoringdashboard/core/auth.py#L27" rel="nofollow noreferrer"><code>@secure</code></a> decorator:</p> <pre><code>from flask import session, redirect, url_for
...

def secure(func):
    &quot;&quot;&quot;
    When the user is not logged into the system, the user is requested to the login page.
    There are two types of user-modes:
    - admin: Can be visited with this wrapper.
    - guest: Can be visited with this wrapper.
    :param func: the endpoint to be wrapped.
    &quot;&quot;&quot;

    @wraps(func)
    def wrapper(*args, **kwargs):
        if session and session.get(config.link + '_logged_in'):
            return func(*args, **kwargs)
        return redirect(url_for(config.blueprint_name + '.login'))

    return wrapper
</code></pre> <p>Note the different way of importing <code>session</code>. They have a name conflict in the first file, which is why they use <code>flask.session</code>.</p> <p>Am I simply not initializing Flask-Session correctly, or is there another issue? Any help is appreciated.</p>
<python><flask><flask-session>
2023-08-19 13:57:11
0
532
xotix
76,934,843
1,473,517
How to handle Unsupported use of op_LOAD_CLOSURE encountered?
<p>This is my MWE:</p> <pre><code>from numba import njit
import numpy as np


@njit
def solve(n):
    count = np.zeros(n + 1, dtype=int)
    res = np.array([0], dtype=int)

    def search(sz=0, max_val=1, single=0, previous=None):
        nonlocal res
        if sz == 4 * n:
            res[0] += 1
            return
        if single and count[0] &lt; 2 * n:
            count[0] += 1
            search(sz + 1, max_val, single)
            count[0] -= 1
        for i in range(1, max_val + 1):
            if i != previous and count[i] &lt; 2:
                count[i] += 1
                search(sz + 1,
                       max_val + (i == max_val and max_val &lt; n),
                       single + (count[i] == 1) - (count[i] == 2),
                       i)
                count[i] -= 1

    search()
    return res[0]


for i in range(1, 6):
    print(solve(i))
</code></pre> <p>This gives:</p> <pre><code>NotImplementedError: Failed in nopython mode pipeline (step: analyzing bytecode)
Unsupported use of op_LOAD_CLOSURE encountered
</code></pre> <p>What's the right way to get this to work with numba? The code runs correctly, if slowly, if you remove the <code>@njit</code> line.</p>
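The usual workaround for `op_LOAD_CLOSURE` is to hoist the inner function to module level and pass the captured state (`n`, `count`, `res`) explicitly, since numba cannot compile closures that mutate enclosing state. Below is a sketch of that restructuring, shown without the `@njit` decorator (and with plain lists standing in for the NumPy arrays) so it runs anywhere; with numba installed, both functions could be decorated with `@njit`, as numba does support self-recursion of module-level functions. The `-1` sentinel replacing `None` is my own choice, since numba handles homogeneous integer arguments more gracefully.

```python
def _search(n, count, res, sz, max_val, single, previous):
    # Module-level helper: everything the closure used to capture is now a
    # parameter, so no LOAD_CLOSURE bytecode is generated.
    if sz == 4 * n:
        res[0] += 1
        return
    if single and count[0] < 2 * n:
        count[0] += 1
        _search(n, count, res, sz + 1, max_val, single, -1)  # -1 stands in for None
        count[0] -= 1
    for i in range(1, max_val + 1):
        if i != previous and count[i] < 2:
            count[i] += 1
            _search(n, count, res, sz + 1,
                    max_val + (i == max_val and max_val < n),
                    single + (count[i] == 1) - (count[i] == 2),
                    i)
            count[i] -= 1


def solve_flat(n):
    count = [0] * (n + 1)
    res = [0]
    _search(n, count, res, 0, 1, 0, -1)
    return res[0]


print(solve_flat(1))  # -> 1, matching the closure-based version
```

The same transformation applies unchanged to the NumPy-array version of the state.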
<python><numba>
2023-08-19 11:24:48
2
21,513
Simd
76,934,807
14,368,631
Return instance of bounded typevar given an *args parameter with type of typevar parameter
<p>So I have this code:</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations

from typing import TypeVar, overload


class Base:
    base_var: int


class ComponentOne(Base):
    component_one_var: float


class ComponentTwo(Base):
    component_two_var: str


T = TypeVar(&quot;T&quot;, bound=Base)
T1 = TypeVar(&quot;T1&quot;, bound=Base)
T2 = TypeVar(&quot;T2&quot;, bound=Base)


class Registry:
    def __init__(self) -&gt; None:
        self._registry: dict[int, dict[type[Base], Base]] = {
            1: {ComponentOne: ComponentOne(), ComponentTwo: ComponentTwo()},
            2: {ComponentOne: ComponentOne()},
            3: {ComponentTwo: ComponentTwo()},
            4: {},
            5: {ComponentOne: ComponentTwo(), ComponentTwo: ComponentOne()},
        }

    @overload
    def get(self, __component_one: type[T]) -&gt; list[tuple[int, T]]:
        ...

    @overload
    def get(
        self, __component_one: type[T], __component_two: type[T1]
    ) -&gt; list[tuple[int, tuple[T, T1]]]:
        ...

    @overload
    def get(
        self,
        __component_one: type[T],
        __component_two: type[T1],
        __component_three: type[T2],
    ) -&gt; list[tuple[int, tuple[T, T1, T2]]]:
        ...

    def get(self, *components: type[Base]) -&gt; list[tuple[int, tuple[Base, ...]]]:
        return []


f = Registry()

for game_object_id, component_one in f.get(ComponentOne):
    print(game_object_id, component_one)
</code></pre> <p>This program is a simplified version of an entity component system which holds game object IDs and their components.
However, when running mypy on it, I get the error:</p> <pre><code>t.py:52:5:53:17: error: Overloaded function implementation cannot produce return type of signature 1  [misc]
    def get(self, *components: type[Base]) -&gt; list[tuple[int, tuple[Base, ...]]]:
    ^
t.py:52:5:53:17: error: Overloaded function implementation cannot produce return type of signature 2  [misc]
    def get(self, *components: type[Base]) -&gt; list[tuple[int, tuple[Base, ...]]]:
    ^
t.py:52:5:53:17: error: Overloaded function implementation cannot produce return type of signature 3  [misc]
    def get(self, *components: type[Base]) -&gt; list[tuple[int, tuple[Base, ...]]]:
</code></pre> <p>How can I fix this typing error so each signature matches the implemented method?</p>
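The implementation signature of an overloaded function must be able to produce each overload's return type, and no single concrete type covers both `list[tuple[int, T]]` and `list[tuple[int, tuple[T, T1]]]`. One common fix (a sketch, not necessarily the only option) is to annotate the implementation's return as `Any`; callers still get the precise types from the overloads, since mypy only shows them the `@overload` signatures:

```python
from __future__ import annotations

from typing import Any, TypeVar, overload


class Base:
    base_var: int


class ComponentOne(Base):
    component_one_var: float


T = TypeVar("T", bound=Base)
T1 = TypeVar("T1", bound=Base)


class Registry:
    @overload
    def get(self, __component_one: type[T]) -> list[tuple[int, T]]: ...

    @overload
    def get(
        self, __component_one: type[T], __component_two: type[T1]
    ) -> list[tuple[int, tuple[T, T1]]]: ...

    # Annotating the implementation with Any removes the
    # "cannot produce return type of signature N" errors; the overloads
    # above remain the types that call sites see.
    def get(self, *components: type[Base]) -> Any:
        return []


print(Registry().get(ComponentOne))  # -> []
```

The runtime behavior is unchanged; only the implementation's declared return type is widened.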
<python><python-typing><type-variables>
2023-08-19 11:15:05
0
328
Aspect11
76,934,781
22,414,610
Flask-RESTful with Flask-SocketIO
<p>I have a problem implementing my Flask-RESTful application with Flask-SocketIO. Any ideas how to configure both? Here is my code:</p> <pre><code>from decouple import config
from flask import Flask
from flask_cors import CORS
from flask_migrate import Migrate
from flask_restful import Api

from db import db
from resources.routes import routes


class DevApplicationConfiguration:
    DEBUG = True
    TESTING = True
    SQLALCHEMY_DATABASE_URI = (
        f&quot;postgresql://{config('DB_USER')}:{config('DB_PASSWORD')}&quot;
        f&quot;@{config('DB_HOST')}/{config('DB_NAME')}&quot;
    )


def create_app(config=&quot;config.DevApplicationConfiguration&quot;):
    app = Flask(__name__)
    app.config.from_object(DevApplicationConfiguration)
    migrate = Migrate(app, db)
    CORS(app)
    api = Api(app)
    [api.add_resource(*r) for r in routes]
    return app
</code></pre>
<python><flask-socketio><python-socketio>
2023-08-19 11:07:24
1
424
Mr. Terminix
76,934,656
10,318,539
How to Calculate the Quantum Cost of Qiskit Circuit
<p>I want to calculate the quantum cost of this Qiskit circuit. E.g., as I know, an X gate has cost 1, and there are 4 X gates, so the quantum cost for the X gates will be 4. What about the ccx gate? How can I calculate its cost?</p> <pre><code>circuit.x(qr[1])
circuit.x(qr[0])
circuit.ccx(qr[0], qr[1], qr[2])
circuit.x(qr[1])
circuit.x(qr[0])
circuit.ccx(qr[0], qr[1], qr[3])
</code></pre>
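For context: in the reversible-circuit literature, a gate's quantum cost is usually taken from its decomposition into elementary one- and two-qubit gates. X and CNOT count as 1, and a Toffoli (`ccx`) is conventionally assigned cost 5 from its standard five-gate decomposition. Other metrics (e.g. T-count) give different numbers, so the table below is one convention, not the definitive answer. A sketch that tallies the cost from a list of gate names:

```python
# Gate-cost table following the common reversible-logic convention
# (NOT/CNOT cost 1, Toffoli cost 5 via its standard decomposition into
# five two-qubit gates). Swap the values for a different cost metric.
GATE_COST = {"x": 1, "cx": 1, "ccx": 5}


def quantum_cost(gate_names):
    # gate_names: iterable of lowercase gate names, e.g. ["x", "ccx", ...]
    return sum(GATE_COST[name] for name in gate_names)


# The circuit in the question: four X gates and two Toffolis.
gates = ["x", "x", "ccx", "x", "x", "ccx"]
print(quantum_cost(gates))  # -> 14  (4 * 1 + 2 * 5)
```

With an actual Qiskit circuit, `circuit.count_ops()` returns a name-to-count mapping that can be combined with the same table.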
<python><quantum-computing><qiskit>
2023-08-19 10:31:03
1
485
Engr. Khuram Shahzad
76,934,592
2,307,570
What does "finally: pass" do in Python? (e.g. in try – except – finally)
<p>I would have assumed a <code>finally</code> clause with only a <code>pass</code> to be pointless.</p> <p>But in a Bottle template, the following code for an optional include would not work without it.<br> The result would contain everything before, and the included code itself, but nothing after it.<br> (See the <a href="https://stackoverflow.com/questions/76891374">corresponding question</a>.)</p> <pre class="lang-py prettyprint-override"><code>try:
    include(optional_view)
except NameError:
    pass
finally:
    pass
</code></pre> <p>What does <code>finally: pass</code> do, and when is it useful?</p>
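In plain Python, at least, a bare `finally: pass` is a no-op: the `finally` suite always runs, but `pass` does nothing, so adding or removing it changes no runtime behavior. The Bottle effect is therefore presumably a quirk of how the template engine translates block keywords into Python source, not a semantic property of `finally`. A small demonstration of the plain-Python semantics:

```python
def run(trigger_error):
    # Mimics the template logic: an optional include that may raise NameError.
    events = []
    try:
        events.append("try")
        if trigger_error:
            raise NameError("optional_view is not defined")
    except NameError:
        events.append("except")
    finally:
        pass  # runs on both paths, but has no observable effect
    events.append("after")
    return events


print(run(False))  # -> ['try', 'after']
print(run(True))   # -> ['try', 'except', 'after']
```

Deleting the `finally: pass` lines leaves both outputs identical.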
<python><bottle><finally><try-finally>
2023-08-19 10:14:25
2
1,209
Watchduck
76,934,579
13,801,302
PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`
<p>I want to execute this code in Google Colab, but I get the following error:</p> <pre class="lang-python prettyprint-override"><code>from llama_index.prompts.prompts import SimpleInputPrompt

# Create a system prompt
system_prompt = &quot;&quot;&quot;[INST] &lt;&gt;
more string here.&lt;&gt;
&quot;&quot;&quot;
query_wrapper_prompt = SimpleInputPrompt(&quot;{query_str} [/INST]&quot;)
</code></pre> <p>Error:</p> <pre><code>/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_config.py:269: UserWarning: Valid config keys have changed in V2:
* 'allow_population_by_field_name' has been renamed to 'populate_by_name'
  warnings.warn(message, UserWarning)
---------------------------------------------------------------------------
PydanticUserError                         Traceback (most recent call last)
&lt;ipython-input-36-c45796b371fe&gt; in &lt;cell line: 3&gt;()
      1 # Import the prompt wrapper...
      2 # but for llama index
----&gt; 3 from llama_index.prompts.prompts import SimpleInputPrompt
      4 # Create a system prompt
      5 system_prompt = &quot;&quot;&quot;[INST] &lt;&gt;

6 frames
/usr/local/lib/python3.10/dist-packages/pydantic/deprecated/class_validators.py in root_validator(pre, skip_on_failure, allow_reuse, *__args)
    226     mode: Literal['before', 'after'] = 'before' if pre is True else 'after'
    227     if pre is False and skip_on_failure is not True:
--&gt; 228         raise PydanticUserError(
    229             'If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`.'
    230             ' Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.',

PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`. For further information visit https://errors.pydantic.dev/2.1.1/u/root-validator-pre-skip
</code></pre> <p>If I follow the link, there is no solution for my case. How can I solve that problem?</p>
<python><prompt><pydantic><langchain><llama-index>
2023-08-19 10:11:55
2
621
Christian01
76,934,542
12,415,855
Delete all comments in an excel-sheet using xlwings python on MacOS?
<p>I am looking for a method to delete all comments in an Excel sheet using Python.</p> <p>On Windows I can do this using the following command:</p> <pre><code>ws2.used_range.api.ClearComments()
</code></pre> <p>But this is not working on Mac.</p> <p>How can I delete all comments in an Excel sheet using xlwings on Mac?</p>
<python><excel><macos><xlwings>
2023-08-19 10:01:49
1
1,515
Rapid1898
76,934,371
100,214
How to handle websocket validation in marshmallow?
<p>Getting a validation error on the <code>ws_url</code> field:</p> <p><code>{'ws_url': ['Not a valid URL.']}</code></p> <p>Here is the schema:</p> <pre class="lang-py prettyprint-override"><code>class ExplicitUri(Schema):
    rpc_url = fields.Url(required=True)
    ws_url = fields.Url(required=False)
</code></pre> <p>when the data is either <code>ws://x.y.z:90</code> or <code>wss://x.y.z:90</code>.</p> <p>Do I need to create a custom validator?</p>
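Worth noting: `fields.Url` validates against a default scheme set that does not include `ws`/`wss`, so plain websocket URLs fail. If your marshmallow version supports it, passing a `schemes` argument (e.g. `fields.Url(schemes={"ws", "wss"})`) may be enough; otherwise a custom validator is the way to go. A stdlib sketch of the check such a validator could perform (the accepted scheme set is an assumption on my part):

```python
from urllib.parse import urlsplit

ALLOWED_SCHEMES = {"ws", "wss"}  # assumption: only websocket URLs are wanted


def is_ws_url(value):
    # Accept a URL when it has a websocket scheme and a non-empty host part.
    # A marshmallow validator would raise ValidationError instead of
    # returning False.
    parts = urlsplit(value)
    return parts.scheme in ALLOWED_SCHEMES and bool(parts.netloc)


print(is_ws_url("ws://x.y.z:90"))    # -> True
print(is_ws_url("wss://x.y.z:90"))   # -> True
print(is_ws_url("http://x.y.z:90"))  # -> False
```

Wrapping this in a `fields.Field`-level `validate=` callable would keep the rest of the schema unchanged.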
<python><marshmallow>
2023-08-19 09:14:15
1
8,185
Frank C.
76,934,364
2,182,857
Is it possible to create an arbitrary Scikit learn decision tree from a table?
<p>Let's say I have a table describing a decision tree - each node and its feature, threshold (split point), and its left and right daughter nodes - or the prediction value, in case this is a leaf node (here signified by <code>status==-1</code>).</p> <pre><code>tree_structure = &quot;&quot;&quot;
# (simplified example for clarity)
# left_daughter right_daughter split_var split_point status prediction
1 2 3 2 394.250000 1 0
2 -1 -1 -1 0.0 -1 1
3 -1 -1 -1 0.0 -1 2
&quot;&quot;&quot;

# Convert the tree structure to a DataFrame
lines = tree_structure.strip().split(&quot;\n&quot;)
header = lines[0].split()
tree_data = [line.split() for line in lines[1:]]
df_tree = pd.DataFrame(tree_data, columns=header)
</code></pre> <p>Is it possible to translate this table / DataFrame to a scikit-learn decision tree?</p> <p>I need this to be a scikit-learn type data structure for downstream applications that require it specifically - code that constructs a tree and predicts on it is sadly insufficient for my needs.</p>
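Independently of whether the private scikit-learn `Tree` arrays can be populated from such a table (in principle they can, but the internal layout is version-dependent and brittle), the table itself already fully determines prediction, and a plain-Python traversal makes the intended semantics concrete. Node ids and `split_var` are assumed 1-based below, as in R randomForest's `getTree()` output, which this table resembles - an assumption on my part:

```python
# node_id -> (left, right, split_var, split_point, status, prediction);
# status == -1 marks a leaf, as in the question's table.
tree = {
    1: (2, 3, 2, 394.25, 1, 0.0),
    2: (-1, -1, -1, 0.0, -1, 1.0),
    3: (-1, -1, -1, 0.0, -1, 2.0),
}


def predict_one(x, tree, root=1):
    node = root
    while True:
        left, right, var, point, status, pred = tree[node]
        if status == -1:  # leaf node
            return pred
        # go left when feature <= threshold (assumed split convention)
        node = left if x[var - 1] <= point else right


print(predict_one([0.0, 100.0], tree))  # 100 <= 394.25 -> leaf 2 -> 1.0
print(predict_one([0.0, 500.0], tree))  # 500 >  394.25 -> leaf 3 -> 2.0
```

Checking this traversal against the downstream tool's predictions is a useful sanity test before attempting to wire the table into scikit-learn internals.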
<python><pandas><scikit-learn><decision-tree>
2023-08-19 09:12:09
1
738
user2182857
76,934,293
6,654,730
Logging logs not showing up on Vercel only print (Python)
<p>I have a Python app deployed on Vercel. Print statements do show up in the runtime logs, but not logging logs. Any ideas what I could check or change?</p> <pre><code>import logging

...

@app.get(&quot;/ping&quot;)
async def ping():
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)
    logger.info(&quot;from logger&quot;)
    print(&quot;from print&quot;)
    return {&quot;message&quot;: &quot;OK&quot;}
</code></pre>
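One plausible culprit (an assumption, since it depends on what the platform configures before this code runs): `logging.basicConfig()` is a no-op whenever the root logger already has handlers, and the default handler writes to stderr rather than the stdout stream that `print` uses. Forcing a fresh stdout handler sidesteps both issues:

```python
import logging
import sys

# force=True (Python 3.8+) replaces any handlers a framework or hosting
# platform installed earlier; stream=sys.stdout matches where print() writes.
logging.basicConfig(stream=sys.stdout, level=logging.INFO, force=True)

logger = logging.getLogger(__name__)
logger.info("from logger")  # now emitted to stdout, like print("from print")
```

Running this configuration once at module import time, rather than inside the request handler, is also worth trying so it takes effect before the first request.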
<python><vercel><python-logging>
2023-08-19 08:53:56
0
7,670
M3RS
76,934,249
9,727,910
Load and validate a config against a schema in Hydra
<p>I'm looking for a simple use-case of Hydra (currently using ver. 1.0.7, but can upgrade as well), but I couldn't figure it out after reading the tutorial. Here is the scenario:</p> <p>Suppose I store my configs in a <code>@dataclass</code> decorated class as a schema, and I want to</p> <ol> <li>keep the default values that are set in the Python class</li> <li>validate a loaded config yaml file against this class (and therefore all classes that recursively make up this <code>dataclass</code>)</li> <li>give priority to the values in the loaded yaml file</li> </ol> <p>e.g., I have a yaml file <code>unrelated.yaml</code> with content</p> <pre class="lang-yaml prettyprint-override"><code>title: My app
width: 1024
height: 768
</code></pre> <p>with a python file <code>my_app.py</code> with content</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass

import hydra
from hydra.core.config_store import ConfigStore
from omegaconf import OmegaConf


@dataclass
class DBConfig:
    host: str = &quot;localhost&quot;
    port: int = 3306


cs = ConfigStore.instance()
cs.store(name=&quot;config&quot;, node=DBConfig)


@hydra.main(config_name=&quot;config&quot;)
def my_app(cfg: DBConfig) -&gt; None:
    print(OmegaConf.to_yaml(cfg))


if __name__ == &quot;__main__&quot;:
    my_app()
</code></pre> <p>If I run the python file directly, I will get the default values:</p> <pre class="lang-bash prettyprint-override"><code>$ python my_app.py
host: localhost
port: 3306
</code></pre> <p>but if I run the program with the unrelated yaml file, I will get</p> <pre class="lang-bash prettyprint-override"><code>$ python my_app.py --config-name unrelated.yaml
title: My app
width: 1024
height: 768
</code></pre> <p>in which case (1) the default values specified are all lost, (2) the config object is loaded with completely unrelated content, and (3) no error is thrown.</p> <p>I would like to have a solution where I can load a yaml file, validate it against a defined schema, filling in unspecified values with the default ones, and throw an error if anything else is found. Is there an easy way to achieve this with Hydra, so that I can enjoy other Hydra benefits like the simple command-line argument overriding?</p>
<python><fb-hydra>
2023-08-19 08:40:30
1
857
cicolus