QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars)
|---|---|---|---|---|---|---|---|---|
76,131,257
| 12,703,411
|
How to add file extension to tkinter textbox?
|
<p>I'm working on a project where I'm building my own Python IDE. At the moment, I have added a textbox where I can write or edit my script file. In the attached image below, due to the lack of a valid file extension, you can see that the text is plain and uninterpreted. Hence, I would like to know if it would be possible to add a file extension to the text box so that whatever I type in the text box is treated as a Python script.</p>
<p><a href="https://i.sstatic.net/DoTEd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DoTEd.png" alt="enter image description here" /></a></p>
|
<python><user-interface><tkinter><textbox><ide>
|
2023-04-28 15:49:19
| 2
| 345
|
Aatif Shaikh
|
76,131,147
| 8,420,175
|
Rar File issue with Extract
|
<p>raise RarCannotExec("Cannot find working tool")</p>
<pre><code>import rarfile

rarpath = '/home/server/test_rar/ABC.rar'

def unrar(file):
    rf = rarfile.RarFile(file)
    rf.extractall()

unrar(rarpath)
</code></pre>
|
<python><python-3.6><rar>
|
2023-04-28 15:36:11
| 1
| 782
|
Harshit Trivedi
|
76,130,968
| 12,466,687
|
How to arrange categories on y axis based on values in descending order on x axis in Python plotly express?
|
<p>I am trying to <strong>arrange categories in descending order</strong> on <code>y axis</code> based on <code>Max</code> values from <code>x axis</code> in <code>plotly express</code> <strong>scatter</strong> plot and unable to do it.</p>
<p>Data:</p>
<pre><code>import pandas as pd
import plotly.express as px
df = pd.read_csv('https://raw.githubusercontent.com/johnsnow09/Party_Criminal_Records/main/For_Analysis_in_R.csv')
</code></pre>
<p>Failed Attempts:</p>
<pre><code>px.scatter(df,x = 'Criminal_Case', y = 'Party').update_layout(height = 1000).update_yaxes(type='category')
</code></pre>
<pre><code>px.scatter(df,x = 'Criminal_Case', y = 'Party').update_layout(height = 1100).update_yaxes(categoryorder='category ascending')
</code></pre>
<pre><code>px.box(df,x = 'Criminal_Case', y = 'Party').update_layout(height = 1000).update_yaxes(categoryorder='total ascending')
</code></pre>
<p>How can I arrange <strong>categories</strong> in <code>Python Plotly Express</code> ?</p>
<p>I was able to do this in <code>R</code> and desired result is shown below:</p>
<p><code>R</code> code that worked:<code>df %>% ggplot(aes(x = Criminal_Case, y = fct_reorder(Party, Criminal_Case, .fun = max) )) + geom_point()</code></p>
<p>Expected output using R:
<a href="https://i.sstatic.net/9I7CG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9I7CG.png" alt="Desired order of Plot built using R" /></a></p>
|
<python><plotly>
|
2023-04-28 15:17:07
| 1
| 2,357
|
ViSa
|
76,130,914
| 7,984,318
|
Flask wtforms validators=[validators.DataRequired()] allows empty string and skip
|
<p>I'm using Flask wtforms to build a note-submit function; whenever the string is empty, the validation fails. How can I allow an empty string to be entered, and if it is empty, just skip it and not save it as a note?</p>
<p>code:</p>
<pre><code>class VendorForm(FlaskForm):
    notes = fields.TextAreaField("Notes", validators=[validators.DataRequired()])
    remediate = fields.SubmitField("Send to Vendor")
</code></pre>
<p>is there something like:</p>
<pre><code>class VendorForm(FlaskForm):
    notes = fields.TextAreaField("Notes", validators=[validators.DataRequired()], InputRequired=False)
    remediate = fields.SubmitField("Send to Vendor")
</code></pre>
<p>Can anyone help?</p>
|
<python><flask><flask-wtforms><wtforms>
|
2023-04-28 15:10:16
| 1
| 4,094
|
William
|
76,130,808
| 4,397,312
|
How recursion calls are working in the call stacks
|
<p>Here is code that works for generating all the well-formed parentheses combinations, taking an integer as the number of open/close parentheses and using recursive calls.</p>
<pre><code>class Solution:
    def generateParenthesis(self, n: int):
        res = []

        def backtrack(open_count, close_count, combination):
            if open_count == close_count == n:
                res.append(combination)
                return
            if open_count < n:
                backtrack(open_count + 1, close_count, combination + "(")
            if close_count < open_count:
                backtrack(open_count, close_count + 1, combination + ")")

        backtrack(0, 0, "")
        return res

sol = Solution()
result = sol.generateParenthesis(3)
</code></pre>
<p>I am a bit perplexed as to why/how, after forming the first combination, which is <code>'((()))'</code>, the next combination is formed.</p>
<p>In my debugger, after the case where <code>open_count=close_count=3</code>, it jumps to the case where <code>close_count=1</code> and <code>open_count=2</code> and continues to check the if statements thereafter. I am still wondering why it jumped to this case to form the next combination.</p>
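A short trace sketch (not part of the original question) can make the debugger's "jump" concrete: it is simply the call stack unwinding. The variant below runs the same algorithm but also records the arguments of every recursive call.

```python
def generate_parenthesis(n):
    """Same backtracking algorithm as above, but records each call's arguments."""
    res = []
    calls = []  # (open_count, close_count, combination) per invocation

    def backtrack(open_count, close_count, combination):
        calls.append((open_count, close_count, combination))
        if open_count == close_count == n:
            res.append(combination)
            return
        if open_count < n:
            backtrack(open_count + 1, close_count, combination + "(")
        if close_count < open_count:
            backtrack(open_count, close_count + 1, combination + ")")

    backtrack(0, 0, "")
    return res, calls

res, calls = generate_parenthesis(3)
```

Inspecting `calls`, the entry recorded right after `(3, 3, '((()))')` is `(2, 1, '(()')`: once `'((()))'` is appended, the frames for `'((('`, `'((()'` and `'((())'` have nothing left to do and return, and control resumes in the frame where `combination == '(('`, which then takes its second branch (the close-parenthesis call). That is exactly the jump the debugger shows.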
|
<python>
|
2023-04-28 14:56:14
| 0
| 717
|
Milad Sikaroudi
|
76,130,751
| 20,220,485
|
How do you create a list object from a dataframe based on a column value?
|
<p>For the following <code>df</code>, how could you create the desired output below? I'm specifically after a list of lists of tuples.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'x': ['ab_c_1.0.0','ab_c_1.0.1','ab_c_1.0.2','ab_c_1.1.0','ab_c_1.1.1','ab_c_1.2.0','ab_c_1.3.0','ab_c_1.3.1'],
                   'y': ['a','b','c','d','e','f','g','h'],
                   'z': ['i','j','k','l','m','n','o','p']})
df
>>>
x y z
0 ab_c_1.0.0 a i
1 ab_c_1.0.1 b j
2 ab_c_1.0.2 c k
3 ab_c_1.1.0 d l
4 ab_c_1.1.1 e m
5 ab_c_1.2.0 f n
6 ab_c_1.3.0 g o
7 ab_c_1.3.1 h p
</code></pre>
<p>Desired output:</p>
<pre><code>[[('a', 'i'), ('b', 'j'), ('c', 'k')],
[('d', 'l'), ('e', 'm')],
[('f', 'n')],
[('g', 'o'), ('h', 'p')]]
</code></pre>
<p>I have so far thought that I could combine something like this to get, so to speak, keys:</p>
<pre><code>for a in df['x']:
    if a.endswith('.0'):
</code></pre>
<p>with this:</p>
<pre><code>df.values.tolist()
</code></pre>
<p>however, it's quite obviously inefficient to be iterating multiple times and through multiple objects. The main problem is that I can't slice the <code>df</code> by any constant other than checking whether the last digit of string in the <code>x</code> column is <code>0</code>, so I can't use a rolling window or something like this. Any advice would be appreciated.</p>
|
<python><pandas><dataframe>
|
2023-04-28 14:46:24
| 3
| 344
|
doine
|
76,130,589
| 802,678
|
What is the function of the `text_target` parameter in Huggingface's `AutoTokenizer`?
|
<p>I'm following the guide here: <a href="https://huggingface.co/docs/transformers/v4.28.1/tasks/summarization" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/v4.28.1/tasks/summarization</a>
There is one line in the guide like this:</p>
<pre><code>labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True)
</code></pre>
<p>I don't understand the function of the <code>text_target</code> parameter.</p>
<p>I tried the following code and the last two lines gave exactly the same results.</p>
<pre><code>from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('t5-small')
text = "Weiter Verhandlung in Syrien."
tokenizer(text_target=text, max_length=128, truncation=True)
tokenizer(text, max_length=128, truncation=True)
</code></pre>
<p>The docs just say <code>text_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts.</code> I don't really understand. Are there situations where setting <code>text_target</code> will give you a different result?</p>
|
<python><huggingface-transformers><huggingface>
|
2023-04-28 14:27:23
| 2
| 582
|
Betty
|
76,130,580
| 90,580
|
How can I filter out the internal Dynaconf settings?
|
<pre><code>from .config import settings

for k, v in settings.items():
    print(k, v)
</code></pre>
<p>this prints things like <code>RENAMED_VARS</code>, <code>SETTINGS_MODULE</code>, <code>PROJECT_ROOT</code>, <code>*_FOR_DYNACONF</code> in addition to the configuration parameters that I explicitly set up in my <code>settings.toml</code></p>
<p>Is there any way to get just the configuration parameters that I defined, excluding the ones that dynaconf creates by itself?</p>
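One possible sketch is a plain key filter. The internal key names below are taken from the question itself; dynaconf's actual internal set may differ between versions, so treat this as an assumption rather than dynaconf's official API.

```python
# Internal keys observed in the question; adjust for your dynaconf version
INTERNAL_KEYS = {'RENAMED_VARS', 'SETTINGS_MODULE', 'PROJECT_ROOT'}

def user_settings(items):
    """Keep only (key, value) pairs that don't look like dynaconf internals."""
    return {k: v for k, v in items
            if k not in INTERNAL_KEYS and not k.endswith('_FOR_DYNACONF')}

# Usage: user_settings(settings.items())
example = user_settings([('DEBUG', True), ('RENAMED_VARS', {}),
                         ('CORE_LOADERS_FOR_DYNACONF', []), ('MY_KEY', 1)])
```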
|
<python><configuration>
|
2023-04-28 14:26:21
| 2
| 25,455
|
RubenLaguna
|
76,130,482
| 11,160,421
|
How to add a new column containing the dictionary key, from a dictionary of different-length arrays, without losing any values, in pandas in Python
|
<p>Input</p>
<pre><code>dict_table = {'Table1': [1, 2,], 'Table2': [3, 4, 5], 'Table3': [6, 7, 8, 9, 10, 11]}
</code></pre>
<p>Result</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Table_name</th>
<th>Table_value</th>
<th>New_Column</th>
</tr>
</thead>
<tbody>
<tr>
<td>Table1</td>
<td>1</td>
<td>Table1</td>
</tr>
<tr>
<td>Table1</td>
<td>2</td>
<td>Table1</td>
</tr>
<tr>
<td>Table2</td>
<td>3</td>
<td>Table2</td>
</tr>
<tr>
<td>Table2</td>
<td>4</td>
<td>Table2</td>
</tr>
<tr>
<td>Table2</td>
<td>5</td>
<td>Table2</td>
</tr>
<tr>
<td>Table3</td>
<td>6</td>
<td>Table3</td>
</tr>
<tr>
<td>Table3</td>
<td>7</td>
<td>Table3</td>
</tr>
<tr>
<td>Table3</td>
<td>8</td>
<td>Table3</td>
</tr>
<tr>
<td>Table3</td>
<td>9</td>
<td>Table3</td>
</tr>
<tr>
<td>Table3</td>
<td>10</td>
<td>Table3</td>
</tr>
<tr>
<td>Table3</td>
<td>11</td>
<td>Table3</td>
</tr>
</tbody>
</table>
</div>
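A sketch producing the table above: flatten the dict into `(key, value)` rows, then copy the key column into the new one.

```python
import pandas as pd

dict_table = {'Table1': [1, 2], 'Table2': [3, 4, 5], 'Table3': [6, 7, 8, 9, 10, 11]}

# One row per value, keeping the originating key alongside it
df = pd.DataFrame([(name, value) for name, values in dict_table.items() for value in values],
                  columns=['Table_name', 'Table_value'])
df['New_Column'] = df['Table_name']
```

This avoids padding the unequal-length lists, so no values are lost.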
|
<python><pandas><dataframe><list><numpy>
|
2023-04-28 14:14:46
| 1
| 316
|
AXz
|
76,130,370
| 2,808,520
|
Why is python slower inside a docker container?
|
<p>The following small code snippet times how long adding a bunch of numbers takes.</p>
<pre><code>import gc
from time import process_time_ns

gc.disable()  # disable garbage collection

for func in [
    process_time_ns,
]:
    pre = func()
    s = 0
    for a in range(100000):
        for b in range(100):
            s += b
    print(f"sum: {s}")
    post = func()
    delta_s = (post - pre) / 1e9  # difference in seconds
    print(f"{func}: {delta_s}")
</code></pre>
<p>To my surprise, this takes much longer when run inside a docker container (~1.6s) than it does when run directly on the host machine (~0.8s).
After some digging, I found that some of docker's security features may cause slowdowns (<a href="https://betterprogramming.pub/faster-python-in-docker-d1a71a9b9917" rel="nofollow noreferrer">https://betterprogramming.pub/faster-python-in-docker-d1a71a9b9917</a>, <a href="https://pythonspeed.com/articles/docker-performance-overhead/" rel="nofollow noreferrer">https://pythonspeed.com/articles/docker-performance-overhead/</a>). Indeed, adding the docker argument <code>--privileged</code> reduces its runtime to only ~0.9s.
However, I'm still confused by this ~0.1s gap I'm observing, which doesn't show up in the article.
I've set my cpu frequency to 3000MHz and fixed the python execution to run on core 0.</p>
<p>Statistics of 30 measurements each:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>local</th>
<th>docker --privileged</th>
<th>docker</th>
</tr>
</thead>
<tbody>
<tr>
<td>avg</td>
<td>0.79917586</td>
<td>0.904496884</td>
<td>1.61980727</td>
</tr>
<tr>
<td>std</td>
<td>0.02433539</td>
<td>0.031948695</td>
<td>0.04034594</td>
</tr>
<tr>
<td>min</td>
<td>0.78087375</td>
<td>0.867265714</td>
<td>1.56995282</td>
</tr>
<tr>
<td>q1</td>
<td>0.78211388</td>
<td>0.880717119</td>
<td>1.58672566</td>
</tr>
<tr>
<td>q2</td>
<td>0.79006154</td>
<td>0.895180195</td>
<td>1.61322376</td>
</tr>
<tr>
<td>q3</td>
<td>0.80732969</td>
<td>0.916945585</td>
<td>1.64363027</td>
</tr>
<tr>
<td>max</td>
<td>0.89824817</td>
<td>1.012580084</td>
<td>1.72252714</td>
</tr>
</tbody>
</table>
</div>
<p>For measurements, the following commands were used:</p>
<ul>
<li>local: <code>taskset -c 0 python3 main.py</code></li>
<li>docker --privileged: <code>taskset -c 0 docker run --privileged --rm -w /data -v /home/slammer/Projects/timing-python-inside-docker:/data -it python:3 python main.py</code></li>
<li>docker: <code>taskset -c 0 docker run --rm -w /data -v /home/slammer/Projects/timing-python-inside-docker:/data -it python:3 python main.py</code></li>
</ul>
<p>What causes the remaining docker overhead?
Can it be mitigated to achieve bare-metal-performance?</p>
<p>Edit: Measurements were taken on a linux mint 20.3 host (kernel: x86_64 Linux 5.4.0-117-generic); docker version: 20.10.17</p>
|
<python><docker>
|
2023-04-28 14:02:15
| 1
| 1,058
|
GitProphet
|
76,130,356
| 14,649,310
|
Which HTTP method to use when no data is transfered, just to execute code in an endpoint?
|
<p>I built a Flask API endpoint which, when called, executes an action to process some data (the data are not sent but come from the database). The result of the method (action) called is to update some fields in some database objects, based on calculations performed on the objects themselves.</p>
<p>Initially, from what I found, most people use POST requests, but POST is mainly designed for sending data in order to create a resource, and in this case no such thing takes place.</p>
<p>I looked at all the possible <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods" rel="nofollow noreferrer">HTTP request methods</a> and none fits the bill of just asynchronously executing some code on the server without sending or returning any data.</p>
<p>Am I missing something, should I simply use <code>POST</code> or is there a better alternative?</p>
|
<python><flask><httprequest><openapi>
|
2023-04-28 14:01:09
| 4
| 4,999
|
KZiovas
|
76,130,275
| 12,870,628
|
How to increment a whole column's values by x amount in SQLAlchemy?
|
<p>I am looking to increase an entire column's values by a certain variable amount. Initially I used sqlite3, but in order to optimise my script I had to switch to SQLAlchemy, and I am unable to find similar utilities in SQLAlchemy. For example, with sqlite3 I did this:</p>
<pre><code>amt = 3  # for example

with sqlite3.connect("funds.db") as con:
    cur = con.cursor()
    cur.execute("UPDATE funds_table SET funds=funds+?", (int(amt),))
    con.commit()
</code></pre>
<p>Basically, a column of 3, 4, 5, for example, would increase to 6, 7, 8. I have searched but can't find a way to do this through SQLAlchemy, other than manually iterating through each row, which seems inefficient.</p>
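A single-statement sketch with SQLAlchemy Core (1.4+/2.0 style) is possible: `update(...).values(col + amt)` issues one UPDATE over the whole column, mirroring the sqlite3 query above. The in-memory table here is a stand-in for the real `funds.db` schema.

```python
from sqlalchemy import (Column, Integer, MetaData, Table, create_engine,
                        select, update)

# In-memory stand-in for the real funds.db table
engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
funds_table = Table("funds_table", metadata, Column("funds", Integer))
metadata.create_all(engine)

amt = 3
with engine.begin() as conn:  # begin() commits automatically on success
    conn.execute(funds_table.insert(), [{"funds": 3}, {"funds": 4}, {"funds": 5}])
    # Single UPDATE over the whole column, no per-row iteration
    conn.execute(update(funds_table).values(funds=funds_table.c.funds + amt))
    rows = [r[0] for r in conn.execute(
        select(funds_table.c.funds).order_by(funds_table.c.funds))]
```

With the ORM, the equivalent would be `session.query(Fund).update({Fund.funds: Fund.funds + amt})`, which likewise runs server-side.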
|
<python><sqlalchemy><sqlite3-python>
|
2023-04-28 13:52:43
| 1
| 495
|
Justin Chee
|
76,130,267
| 1,216,183
|
csv.writer can't handle QUOTE_NONE and empty escapechar since python 3.11
|
<p>Since Python 3.11, <code>csv.writer</code> with <code>quoting=csv.QUOTE_NONE</code> raises an error when trying to write data containing the <code>quotechar</code> (<code>"</code>). It asks for an <code>escapechar</code> to be defined.</p>
<p>The CSV I need to generate does not require quotes around the field, nor does it require the <code>quotechar</code> to be escaped.</p>
<p><em>The documentation explains that escapechar is required as of Python 3.11, but doesn't explain how to work around this new requirement.</em></p>
<p>Example that works with python 3.10, but raises an error with python 3.11:</p>
<pre class="lang-py prettyprint-override"><code>import csv
from io import StringIO

output = StringIO()
csv_writer = csv.writer(output, quoting=csv.QUOTE_NONE)
csv_writer.writerow((1, "This is a \"Test\"", 2))
print(output.getvalue())  # 1, "This is a test", 2
</code></pre>
<p>I've tried setting <code>escapechar=""</code> but the value must be a 1 character string.</p>
<p>Any idea why Python would change the behaviour of the csv writer? and how to have Python 3.11 continue to generate this kind of csv?</p>
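One possible workaround (an assumption, not an official recipe): since the error is triggered by fields containing the <code>quotechar</code>, pointing <code>quotechar</code> at a character that never occurs in the data leaves <code>QUOTE_NONE</code> with nothing to escape, so the output stays unquoted and unescaped on both 3.10 and 3.11. Here <code>'|'</code> is an assumed never-occurring character; pick one that fits your data.

```python
import csv
from io import StringIO

output = StringIO()
# '|' is assumed to never appear in the data, so QUOTE_NONE never
# needs to escape anything and no escapechar is required
csv_writer = csv.writer(output, quoting=csv.QUOTE_NONE, quotechar='|')
csv_writer.writerow((1, 'This is a "Test"', 2))
```

On Python 3.12+ there is also the cleaner option of passing <code>quotechar=None</code> together with <code>QUOTE_NONE</code>.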
|
<python><python-3.11>
|
2023-04-28 13:51:18
| 1
| 2,213
|
fabien-michel
|
76,130,226
| 8,048,800
|
Pandas simple way to split column header names on separator
|
<p>I need to rename column names in pandas, keeping the string after the <code>@</code> symbol.</p>
<p>Example :
<code>Col1 @ my data1</code> to <code>my data1</code></p>
<p>I am reading a csv and trying the below:</p>
<pre><code>df = pandas.read_csv('path')
df.rename(columns=lambda col: col.replace(col, '^[^@\r\n]+@\s*\K.*$'), inplace=True)
</code></pre>
<p>The regex works independently, but with the lambda and pandas it just replaces the column name with the regex text.</p>
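The snippet above fails because <code>str.replace</code> treats its second argument as a literal replacement string, not a pattern. A regex is not needed at all here; a plain split on the separator is one simpler sketch (the sample column names are stand-ins for the real CSV headers):

```python
import pandas as pd

# Stand-in headers of the shape 'Col1 @ my data1'
df = pd.DataFrame(columns=['Col1 @ my data1', 'Col2 @ my data2'])

# Split once on '@', keep the part after it, strip surrounding whitespace
df.columns = [c.split('@', 1)[-1].strip() for c in df.columns]
```

The same can be written vectorized as `df.columns.str.split('@', n=1).str[-1].str.strip()`.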
|
<python><pandas>
|
2023-04-28 13:46:30
| 3
| 387
|
Dot Net Dev 19
|
76,130,222
| 299,754
|
Exception in logging module itself is not sent to sentry
|
<p>Django app on python 2 (yeah I know). The sentry integration is working well otherwise, but it looks like it doesn't record crashes in logging itself, for instance when passing insufficient arguments to logger.info().</p>
<p>In this case the incriminating line was something like:</p>
<pre><code>logger.info('some info about %s and %s', my_thing)
</code></pre>
<p>i.e. missing an argument.</p>
<p>This traceback showed up in heroku logs but nothing in sentry:</p>
<pre><code>Apr 27 23:35:32 xxx app/web.9 Traceback (most recent call last):
Apr 27 23:35:32 xxx app/web.9 File "/usr/local/lib/python2.7/logging/__init__.py", line 868, in emit
Apr 27 23:35:32 xxx app/web.9 msg = self.format(record)
Apr 27 23:35:32 xxx app/web.9 File "/usr/local/lib/python2.7/logging/__init__.py", line 741, in format
Apr 27 23:35:32 xxx app/web.9 return fmt.format(record)
Apr 27 23:35:32 xxx app/web.9 File "/usr/local/lib/python2.7/logging/__init__.py", line 465, in format
Apr 27 23:35:32 xxx app/web.9 record.message = record.getMessage()
Apr 27 23:35:32 xxx app/web.9 File "/usr/local/lib/python2.7/logging/__init__.py", line 329, in getMessage
Apr 27 23:35:32 xxx app/web.9 msg = msg % self.args
Apr 27 23:35:32 xxx app/web.9 TypeError: not enough arguments for format string
Apr 27 23:35:32 xxx app/web.9 Logged from file models.py, line 18407
</code></pre>
<p>Do I need to do something for sentry_sdk to include whatever context the logging module is running in as well?</p>
|
<python><django><logging><sentry>
|
2023-04-28 13:46:23
| 1
| 6,928
|
Jules Olléon
|
76,130,069
| 8,512,262
|
How can I check for user activity/idle from a Windows service written in Python?
|
<p>I've written a Windows service in Python that needs to be able to detect user activity. Under normal circumstances I would call the Windows <code>GetLastInputInfo</code> method, but that method doesn't work when called by a service. Here's the relevant info from the documentation for this method:</p>
<blockquote>
<p>This function is useful for input idle detection. However, GetLastInputInfo does not provide system-wide user input information across all running sessions. Rather, GetLastInputInfo provides session-specific user input information for only the session that invoked the function.</p>
</blockquote>
<p>The salient point is this: "<strong>for only the session that invoked the function</strong>"</p>
<p>If called by the service, <code>GetLastInputInfo</code> will <em>always</em> return 0 because the service is running in session 0 and doesn't receive input! How can my service detect user activity from the console session?</p>
|
<python><windows><service><user-input><user-inactivity>
|
2023-04-28 13:27:25
| 1
| 7,190
|
JRiggles
|
76,130,037
| 3,668,129
|
SSLError ( certificate verify failed)
|
<p>I'm trying to run simple gradio app with https:</p>
<pre><code>import gradio as gr

if __name__ == "__main__":
    with gr.Blocks(theme=gr.themes.Glass()) as demo:
        testLabel = gr.Label(label="Just for test")

    demo.queue().launch(share=False,
                        debug=False,
                        server_name="0.0.0.0",
                        server_port=8432,
                        ssl_certfile="/home/user/cert.pem",
                        ssl_keyfile="/home/user/key.pem")
</code></pre>
<p>And I'm getting error:</p>
<pre><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='localhost', port=8432): Max retries exceeded with url: /startup-events (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1131)')))
</code></pre>
<p>I'm using the following versions:</p>
<pre><code>gradio==3.28.0
gradio-client==0.1.4
</code></pre>
<p>How can I run this simple app with https ?</p>
|
<python><gradio>
|
2023-04-28 13:22:18
| 1
| 4,880
|
user3668129
|
76,129,604
| 21,420,742
|
How to merge two datasets of different shapes with different column names in python
|
<p>I have 2 datasets with different shapes and column names, and I need to merge them to fill in the blanks for all the NaNs. When I try, it doesn't work and I am left with NaNs still.</p>
<p>Here is a sample:</p>
<p>DF1:</p>
<pre><code> ID Mgr_name Reports EmpType mgr_pos_num
101 NaN 3 Manager 1234
102 Brian 4 Manager 4567
103 Mary 7 Manger 9876
104 NaN 1 Manager 3456
...
201 Ashely 2 Manager 4291
202 Blake 5 Manager 7215
</code></pre>
<p>DF2:</p>
<pre><code> emp_Name emp_pos_num
0 Adam 5678
1 Amanda 1122
2 Brian 4567
3 Chris 7654
4 Dave 5564
5 John 1234
6 Lisa 3346
7 Mary 9876
8 Sarah 3456
....
210 Greg 0123
211 Blake 7215
</code></pre>
<p>DF1 shows most of the info; I skipped most of the lines to show that there are many rows in this dataset.
DF2 has all the names and more, because it deals with all employees, and it also has the position number, which is unique to an ID + Name.</p>
<p>Here is a Desired output:</p>
<pre><code> ID Name Reports EmpType emp_name emp_pos_num
101 NaN 3 MGR John 1234
102 Brian 4 MGR Brian 4567
103 Mary 7 MGR Mary 9876
104 NaN 1 MGR Sarah 3456
...
201 Ashely 2 MGR Ashely 4291
202 Blake 5 MGR Blake 7215
</code></pre>
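Since the position number is the shared key despite the different column names, one sketch is a left merge on <code>mgr_pos_num</code>/<code>emp_pos_num</code>; the small frames below are stand-ins built from the sample rows in the question.

```python
import pandas as pd

# Stand-ins for DF1 / DF2 from the sample data above
df1 = pd.DataFrame({'ID': [101, 102, 104],
                    'Mgr_name': [None, 'Brian', None],
                    'mgr_pos_num': [1234, 4567, 3456]})
df2 = pd.DataFrame({'emp_Name': ['Adam', 'Brian', 'John', 'Sarah'],
                    'emp_pos_num': [5678, 4567, 1234, 3456]})

# Different column names are fine: name the key on each side explicitly
merged = df1.merge(df2, left_on='mgr_pos_num', right_on='emp_pos_num', how='left')
```

The merge brings `emp_Name` alongside every DF1 row, so the NaN manager names can then be filled from it, e.g. `merged['Mgr_name'] = merged['Mgr_name'].fillna(merged['emp_Name'])`.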
|
<python><python-3.x><pandas><dataframe><merge>
|
2023-04-28 12:30:05
| 1
| 473
|
Coding_Nubie
|
76,129,550
| 11,065,874
|
How to make case insensitive choices using Python's enum and FastAPI?
|
<p>I have this application:</p>
<pre class="lang-py prettyprint-override"><code>import enum
from typing import Annotated, Literal

import uvicorn
from fastapi import FastAPI, Query, Depends
from pydantic import BaseModel

app = FastAPI()


class MyEnum(enum.Enum):
    ab = "ab"
    cd = "cd"


class MyInput(BaseModel):
    q: Annotated[MyEnum, Query(...)]


@app.get("/")
def test(inp: MyInput = Depends()):
    return "Hello world"


def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)


if __name__ == "__main__":
    main()
</code></pre>
<p><code>curl http://127.0.0.1:8001/?q=ab</code> or <code>curl http://127.0.0.1:8001/?q=cd</code> returns "Hello World"</p>
<p>But any of these</p>
<ul>
<li><code>curl http://127.0.0.1:8001/?q=aB</code></li>
<li><code>curl http://127.0.0.1:8001/?q=AB</code></li>
<li><code>curl http://127.0.0.1:8001/?q=Cd</code></li>
<li>etc</li>
</ul>
<p>returns <code>422 Unprocessable Entity</code>, which makes sense.</p>
<p>How can I make this validation case insensitive?</p>
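One stdlib-only sketch: override the enum's <code>_missing_</code> hook, which Python calls from <code>MyEnum(value)</code> when no exact match is found. Since pydantic validates enum fields by constructing the enum from the value, this should make the lookup case-insensitive without touching the FastAPI code (assuming your pydantic version validates via the enum constructor).

```python
import enum

class MyEnum(enum.Enum):
    ab = "ab"
    cd = "cd"

    @classmethod
    def _missing_(cls, value):
        # Invoked by MyEnum(value) when no member matches exactly
        if isinstance(value, str):
            lowered = value.lower()
            for member in cls:
                if member.value == lowered:
                    return member
        return None  # fall through to the normal ValueError
```

With this in place, `MyEnum("AB")`, `MyEnum("Cd")` etc. all resolve to the lowercase members, so the query values `aB`, `AB`, `Cd` validate instead of returning 422.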
|
<python><enums><fastapi><pydantic>
|
2023-04-28 12:24:37
| 4
| 2,555
|
Amin Ba
|
76,129,412
| 7,936,386
|
execute external sql script from lambda function using Python
|
<p>I'm trying to execute some external SQL scripts, some containing CREATE TABLE statements while the others are pre-defined PL/pgSQL functions. I'm using Python 3.10 and the aws-psycopg2 library to create a Lambda function in AWS which will connect to a PostgreSQL AWS database and run these scripts. First, I installed the library</p>
<pre><code>pip install aws-psycopg2 -t .
</code></pre>
<p>in my working directory.</p>
<p>I created a handler function:</p>
<pre><code>import psycopg2
from psycopg2.extras import RealDictCursor
import json

# Configuration Values
endpoint_db = 'pgdatabase.xxx.amazonaws.com'
database_db = 'postgres'
username_db = 'postgres'
password_db = 'xxxxxx'
port_db = 5432

connection = psycopg2.connect(
    dbname=database_db,
    user=username_db,
    password=password_db,
    host=endpoint_db,
    port=port_db
)

def lambda_handler():
    cursor = connection.cursor(cursor_factory=RealDictCursor)
    cursor.execute(open("insert_clause.sql", "r").read())
    cursor.execute(open("update_table.sql", "r").read())
    # rows = cursor.fetchall()
    # json_result = json.dumps(rows)
    # print(json_result)
    # return(json_result)
    print('Done!')

lambda_handler()
</code></pre>
<p>The external SQL are some basic commands to the dvd_rental tables:</p>
<pre><code>UPDATE public.category SET name = 'Horror and Thriller' WHERE category_id = 11; -- update_table.sql
INSERT INTO public.copy_actor (actor_id, first_name, last_name) VALUES (201, 'Paul', 'Newman'); -- insert_clause.sql
</code></pre>
<p>However, nothing happens when I run the handler() function first on my machine before deploying it on AWS (no tables are altered). Is there any additional setup I have to do?</p>
<p>Lastly, if I will manage to make these scripts working, would it be enough to include these in the zip file I'm going to upload to create the Lambda function?</p>
<p>Thanks!</p>
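"Nothing happens" is consistent with the DB-API default of autocommit being off: psycopg2 opens a transaction on the first statement, and without an explicit <code>connection.commit()</code> the changes are rolled back when the connection closes. The same pattern can be sketched with the stdlib <code>sqlite3</code> driver (the file path below is a throwaway temp file, not part of the question):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()  # without this commit, the INSERT would be rolled back on close
conn.close()

# A fresh connection only sees the row because the commit happened
check = sqlite3.connect(path)
rows = check.execute("SELECT x FROM t").fetchall()
check.close()
```

In the Lambda code above, adding <code>connection.commit()</code> after the <code>cursor.execute(...)</code> calls is therefore the likely missing step. As for deployment: yes, bundling the <code>.sql</code> files in the zip alongside the handler makes them readable with the same relative paths.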
|
<python><amazon-web-services><aws-lambda><psycopg2>
|
2023-04-28 12:06:06
| 1
| 619
|
Andrei Niță
|
76,129,182
| 7,559,896
|
How can I force CMake to use Python executables and libs from a given folder?
|
<p>I have both Python 3.6 in "C:\PYTHON_VERSIONS\STANDARD_3.6.8" and 3.8 in "C:\Python\3.8" on my system. I want to compile pybind11 using 3.6, but CMake takes the lib directories from 3.6 and the executable from 3.8! A very bad mix indeed.</p>
<p>Here is output of CMake:</p>
<pre><code>-- Found PythonInterp: C:/Python/3.8/python.exe (found version "3.8.9")
-- Found PythonInterp: C:/Python/3.8/python.exe (found suitable version "3.8.9", minimum required is "3.6")
CMake Error at C:/work/packgen/rsdist/pybind11/tools/FindPythonLibsNew.cmake:147 (message):
Python config failure:
Traceback (most recent call last):
File "<string>", line 6, in <module>
File "C:\PYTHON_VERSIONS\STANDARD_3.6.8\lib\distutils\sysconfig.py", line 14, in <module>
import re
File "C:\PYTHON_VERSIONS\STANDARD_3.6.8\lib\re.py", line 123, in <module>
import sre_compile
File "C:\PYTHON_VERSIONS\STANDARD_3.6.8\lib\sre_compile.py", line 17, in <module>
assert _sre.MAGIC == MAGIC, "SRE module mismatch"
AssertionError: SRE module mismatch
Call Stack (most recent call first):
CMakeLists.txt:25 (find_package)
</code></pre>
<p>I have PYTHONHOME and PYTHONPATH environment variables pointing to the 3.6 folder:
<a href="https://i.sstatic.net/GC3lA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GC3lA.png" alt="enter image description here" /></a></p>
<p>And in the "path" variable itself there are no paths to any of the python installation folders.</p>
<p>So the question is: how does CMake find an executable that is not in the path, and why does it mix the executable and libs of different versions?</p>
<p>Is there something like <code>CMAKE_SET_PYTHON_PATH</code> or similar that I can force-feed into CMake?</p>
|
<python><cmake><pybind11>
|
2023-04-28 11:35:20
| 1
| 933
|
DEKKER
|
76,129,150
| 11,065,874
|
fastapi bug with annotated default value
|
<p>I have this small FastAPI (fastapi==0.95.0 and uvicorn==0.20.0 in my requirements.txt) application that works as expected.</p>
<pre><code>from typing import Annotated

import uvicorn
from fastapi import FastAPI, Depends, Query
from pydantic import BaseModel

app = FastAPI()


class TestInput(BaseModel):
    q: Annotated[str, Query(..., title="a test title", description="a test description")] = "a"


@app.get("/test")
def test(inp: TestInput = Depends()):
    return "Hello world"


def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)


if __name__ == "__main__":
    main()
</code></pre>
<p>I now restructure it to</p>
<pre><code>from typing import Annotated

import uvicorn
from fastapi import FastAPI, Depends, Query, APIRouter
from pydantic import BaseModel


class TestInput(BaseModel):
    q: Annotated[str, Query(..., title="a test title", description="a test description")] = "a"


def test(inp: TestInput = Depends()):
    return "Hello world"


router = APIRouter()
router.add_api_route(
    path="/test",
    endpoint=test,
    methods=["GET"],
)

app = FastAPI()
app.include_router(router)


def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)


if __name__ == "__main__":
    main()
</code></pre>
<p>I see this error</p>
<pre><code>AssertionError: `Query` default value cannot be set in `Annotated` for 'q'. Set the default value with `=` instead.
</code></pre>
<p>If I remove the default value from the <code>q</code> field, it works as expected. But when the default value (<code>"a"</code>) is added, I see the error, while with the first structure I did not have such a limitation.</p>
<p>Is this a bug in FastAPI? What is the problem?</p>
|
<python><fastapi><python-typing><pydantic>
|
2023-04-28 11:31:47
| 1
| 2,555
|
Amin Ba
|
76,129,121
| 494,739
|
How should we do order_by in Django ORM for self-referred sub objects?
|
<p>I have an issue with a somewhat problematic ordering/sorting in Django ORM. What I have is a Venue with a FK to <code>self</code>, thus the potential for sub-venues:</p>
<pre class="lang-py prettyprint-override"><code># models.py
class Venue(models.Model):
    parent = models.ForeignKey(
        to="self", related_name="children", on_delete=models.CASCADE, null=True, blank=True
    )
    ordering = models.PositiveSmallIntegerField(default=0)

    class Meta:
        verbose_name = "Venue"
        verbose_name_plural = "Venues"
        ordering = ["ordering"]
</code></pre>
<p>Let's populate with some data:</p>
<pre class="lang-py prettyprint-override"><code>top = Venue.objects.create(name="Spain", ordering=0, parent=None)
Venue.objects.create(name="Barcelona", ordering=0, parent=top)
Venue.objects.create(name="Castelldefels", ordering=1, parent=top)
Venue.objects.create(name="Girona", ordering=2, parent=top)
top = Venue.objects.create(name="Singapore", ordering=1, parent=None)
</code></pre>
<p>This is how it looks in the custom dashboard we've built. We're doing something akin to
<code>Venue.objects.filter(parent__isnull=True)</code> and then, for each venue, <code>obj.children.all()</code>, thus creating this layout:</p>
<pre><code># find all parent=None, then loop over all obj.children.all()
Spain (ordering=0, parent=None)
- Barcelona (ordering=0, parent="Spain")
- Castelldefels (ordering=1, parent="Spain")
- Girona (ordering=2, parent="Spain")
Singapore (ordering=1, parent=None)
</code></pre>
<p>This is what we want to see when requesting our API endpoint:</p>
<pre class="lang-js prettyprint-override"><code>// json response
[
{"venue": "Spain", "ordering": 0},
{"venue": "Barcelona", "ordering": 0},
{"venue": "Castelldefels", "ordering": 1},
{"venue": "Girona", "ordering": 2},
{"venue": "Singapore", "ordering": 1}
]
</code></pre>
<p>The problem when we're calling the API is that we're also using <code>django-filters</code> in our ModelViewSet, so we can't filter on <code>parent__isnull=True</code> or we won't see the children in the API response. We still have access to the queryset in the <code>def list</code> method before returning it to the serializer.</p>
<p>Question is: how do we set the correct <code>.order_by()</code>? A simple <code>.order_by("ordering")</code> will place "Singapore" before "Girona" which would be wrong.</p>
<p>TYIA.</p>
|
<python><django><django-models><django-rest-framework><django-orm>
|
2023-04-28 11:29:16
| 3
| 772
|
kunambi
|
76,129,060
| 13,219,123
|
Define pyspark table with column type as timestamp
|
<p>I need to create a PySpark dataframe for some unit testing. One of the columns in the dataframe needs to be of the type <code>TimestampType</code>. I have the following code to define the dataframe:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, TimestampType, StringType, DoubleType

spark = (SparkSession
         .builder
         .master("local[*]")
         .appName("Unit-tests")
         .getOrCreate())

data = [
    ("2023-04-25 00:00:00", "A", 100.5),
    ("2023-04-26 00:00:00", "A", 110.0),
    ("2023-04-28 00:00:00", "A", 105.0),
    ("2023-04-27 00:00:00", "B", 50.5),
    ("2023-04-29 00:00:00", "B", 55.5),
]

schema = StructType([
    StructField("time", TimestampType(), True),
    StructField("id", StringType(), True),
    StructField("value", DoubleType(), True)
])

df = spark.createDataFrame(
    data=data,
    schema=schema
)
</code></pre>
<p>However, this gives me a TypeError:
<code>field time: TimestampType() can not accept object '2023-04-25 00:00:00' in type</code>.</p>
<p>I know a work around which is to initially define the <code>time</code> column as a <code>StringType</code> and then afterwards cast it to a <code>TimestampType</code> by adding this line of code:</p>
<p><code>df.withColumn("time", col("time").cast(TimestampType()))</code></p>
<p>But this seems like a rather cumbersome way of doing it. How can I define it directly in the schema?</p>
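<p>A possible way to avoid the two-step cast (a sketch, based on the fact that <code>TimestampType</code> accepts Python <code>datetime</code> objects rather than strings): parse the strings before building the dataframe, so the original schema can be used directly.</p>

```python
from datetime import datetime

# TimestampType accepts datetime objects, not strings, so convert
# the first field of every row up front:
raw = [
    ("2023-04-25 00:00:00", "A", 100.5),
    ("2023-04-26 00:00:00", "A", 110.0),
    ("2023-04-28 00:00:00", "A", 105.0),
    ("2023-04-27 00:00:00", "B", 50.5),
    ("2023-04-29 00:00:00", "B", 55.5),
]
data = [(datetime.strptime(t, "%Y-%m-%d %H:%M:%S"), i, v) for t, i, v in raw]

# df = spark.createDataFrame(data=data, schema=schema)  # no cast needed
```

<p>The Spark call is left commented out here; the point is only the conversion step.</p>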
|
<python><pyspark>
|
2023-04-28 11:19:10
| 1
| 353
|
andKaae
|
76,129,049
| 8,510,149
|
Rank and exclude null values in Pyspark
|
<p>Using Pyspark and the ntile function, I'm looking for a way to deal with null values when I rank. The code below ranks the null values as well, as 1. I don't want that; I would like them to have rank null, with the ranking only done on non-null values.</p>
<p>I can filter out null-values before the ranking, but then I need to join the null values back later due to my use-case.</p>
<p>Is there a way to do the ranking on the values only, excluding the nulls from the rank, and still keep all the rows?</p>
<pre><code>import numpy as np
import pandas as pd
import pyspark.sql.functions as F
from pyspark.sql import SparkSession, Window

spark = SparkSession.builder.appName("PandasToSpark").getOrCreate()
data = pd.DataFrame({'ID':[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15, 16, 18, 18, 19, 20],
'group':['A']*10 + ['B']*10,
'feature':[0.5, 3.4, 0.3, 0.1, 1.2, np.nan, 2, np.nan, 1.1, 2.2, np.nan, 1.32, 2.5, np.nan, 0.87, 1.56, 2.1, np.nan, 0.34, 7.43]})
spark_df = spark.createDataFrame(data)
window = Window.partitionBy(['group']).orderBy('feature')
spark_df = spark_df.withColumn("feature_rank", F.ntile(5).over(window))
spark_df.display()
</code></pre>
|
<python><pyspark>
|
2023-04-28 11:17:40
| 1
| 1,255
|
Henri
|
76,128,989
| 9,974,205
|
Problem looping the crossing of parents of a genetic algorithm in python
|
<p>I have the following function in python used to generate the children of a genetic algorithm</p>
<pre><code>def cruce(padres):
mitadHerencia = len(padres[0])//2
hijo1 = np.concatenate((padres[0][:mitadHerencia], padres[1][mitadHerencia:]))
hijo2 = np.concatenate((padres[1][:mitadHerencia], padres[0][mitadHerencia:]))
hijo3 = np.concatenate((padres[0][:mitadHerencia], padres[2][mitadHerencia:]))
hijo4 = np.concatenate((padres[2][:mitadHerencia], padres[0][mitadHerencia:]))
hijo5 = np.concatenate((padres[1][:mitadHerencia], padres[2][mitadHerencia:]))
return hijo1, hijo2, hijo3, hijo4, hijo5
</code></pre>
<p>where padres can be defined as</p>
<p><code>padres=np.array([[1,0,0,0,1,0],[0,1,1,1,1,0],[1,0,1,0,1,1],[0,1,1,0,1,0],[0,0,0,0,1,0],[1,0,0,0,0,0],[1,1,1,0,1,0],[0,0,1,1,0,0],[0,1,1,0,1,0],[1,0,1,0,1,0]],np.int32)</code></p>
<p>This is the particular case of a genetic algorithm in which I cross the first element with the second to produce two children, the same with the first and third and so on. I would like to do this using a loop, but I haven't found a way to do this. Can someone please offer me some guidance?</p>
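<p>One way to express this as a loop (a sketch; note it produces six children for the first three parents, one more than the original five, since the original omits the child with parent 2's head and parent 1's tail):</p>

```python
import numpy as np
from itertools import combinations


def cruce(padres, n_padres=3):
    # Cross every pair among the first n_padres rows; each pair (a, b)
    # yields two children: a-head + b-tail and b-head + a-tail.
    mitad = padres.shape[1] // 2
    hijos = []
    for a, b in combinations(range(n_padres), 2):
        hijos.append(np.concatenate((padres[a][:mitad], padres[b][mitad:])))
        hijos.append(np.concatenate((padres[b][:mitad], padres[a][mitad:])))
    return hijos


padres = np.array([[1, 0, 0, 0, 1, 0],
                   [0, 1, 1, 1, 1, 0],
                   [1, 0, 1, 0, 1, 1]], np.int32)
hijos = cruce(padres)
```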
|
<python><function><loops><genetic-algorithm>
|
2023-04-28 11:11:04
| 1
| 503
|
slow_learner
|
76,128,932
| 2,749,397
|
meaning of "add mappable to an Axes"
|
<p>Recently I ran an old code and Matplotlib told me</p>
<blockquote>
<pre><code>MatplotlibDeprecationWarning:
Unable to determine Axes to steal space for Colorbar.
Using gca(), but will raise in the future.
Either provide the *cax* argument to use as the Axes for the Colorbar,
provide the *ax* argument to steal space from it,
=============================
or add *mappable* to an Axes.
=============================
</code></pre>
</blockquote>
<p>Well, I understand the problem and the <code>cax</code> and <code>ax</code> solutions (I have already fixed my old code) but I don't understand the "add <code>mappable</code> to an Axes" part, at least in the context of the MWE below.</p>
<p>Could you <strong>explain the meaning of "<em>add <code>mappable</code> to an Axes</em>"</strong>?</p>
<hr />
<hr />
<pre><code>import matplotlib.pyplot as plt
from matplotlib.cm import ScalarMappable
import numpy as np
x = np.linspace(0, 1, 201)
sm = ScalarMappable(norm=plt.Normalize(0.5, 4.5), cmap='cool_r')
for i in range(4):
plt.plot(x, x+(1-x)*np.sin(np.pi*i*x), color=sm.to_rgba(i+1))
cb = plt.colorbar(sm)
cb.set_ticks((1,2,3,4))
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/5F6Dj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5F6Dj.png" alt="enter image description here" /></a></p>
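<p>For what it's worth, my reading of "add <em>mappable</em> to an Axes" (a sketch, not an authoritative interpretation): a mappable created <em>by</em> an Axes method, such as <code>imshow</code>, records which Axes it belongs to, so <code>colorbar</code> can infer where to steal space; a bare <code>ScalarMappable</code> carries no such Axes, hence the warning.</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
im = ax.imshow(np.random.rand(4, 4))  # a mappable created *by* an Axes
assert im.axes is ax                  # the mappable knows its Axes
cb = fig.colorbar(im)                 # so no cax/ax argument is needed
```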
|
<python><matplotlib>
|
2023-04-28 11:04:03
| 1
| 25,436
|
gboffi
|
76,128,917
| 617,648
|
How can I modify this closed-connection regex to cover multi-line invalid user entries
|
<p>I need a regex string to give to denyhosts config for filtering invalid ssh login attempts to my remote server.</p>
<p>Here is the regex that I am using currently:</p>
<pre><code>USERDEF_FAILED_ENTRY_REGEX=.*sshd.* Connection closed by .* (?P<host>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) port \d{1,5} \[preauth\]
</code></pre>
<p>It filters this kind of logs perfectly:</p>
<pre><code>sshd[4086]: Connection closed by authenticating user root 141.98.10.172 port 50610 [preauth]
</code></pre>
<p>and gets 141.98.10.172 within text.</p>
<p>However, I need to filter the lines below too, with the same regex:</p>
<pre><code>sshd[4260]: Disconnected from authenticating user root 128.199.82.240 port 46392 [preauth]
sshd[4262]: Invalid user admin12 from 43.134.178.78 port 36540
</code></pre>
<p>How can I do this in one regex?</p>
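<p>A possible single pattern (a sketch, verified only with Python's <code>re</code>, not with denyhosts itself): turn the three message shapes into an alternation and make the <code>[preauth]</code> suffix optional.</p>

```python
import re

pattern = re.compile(
    r".*sshd.*(?:Connection closed by|Disconnected from|Invalid user)"
    r" .*?(?P<host>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
    r" port \d{1,5}(?: \[preauth\])?"
)

logs = [
    "sshd[4086]: Connection closed by authenticating user root "
    "141.98.10.172 port 50610 [preauth]",
    "sshd[4260]: Disconnected from authenticating user root "
    "128.199.82.240 port 46392 [preauth]",
    "sshd[4262]: Invalid user admin12 from 43.134.178.78 port 36540",
]
hosts = [pattern.search(line).group("host") for line in logs]
```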
|
<python><regex><ssh>
|
2023-04-28 11:01:34
| 2
| 1,752
|
obayhan
|
76,128,890
| 235,671
|
Assign a default callable value for a callable protocol
|
<p>I have the following callable <code>Protocol</code>:</p>
<pre class="lang-py prettyprint-override"><code>class OnStarted(Protocol):
def __call__(self, kwargs: Dict[str, Any]) -> Optional[Dict[str, Any]]: ...
</code></pre>
<p>that I want to assign a default function, which I did like this:</p>
<pre class="lang-py prettyprint-override"><code>def foo(on_started: OnStarted = lambda _: {}):
pass
</code></pre>
<p>but MyPy isn't happy with it and complains with this error:</p>
<pre><code>foo.py:172: error: Incompatible default for argument "on_started" (default has type "Callable[[Any], Dict[<nothing>, <nothing>]]", argument has type "OnStarted") [assignment]
foo.py:172: note: "OnStarted.__call__" has type "Callable[[Arg(Dict[str, Any], 'kwargs')], Optional[Dict[str, Any]]]"
</code></pre>
<p>How can I fix that so that <code>on_started</code> has a default value and MyPy doesn't error?</p>
|
<python><mypy><python-typing>
|
2023-04-28 10:59:02
| 2
| 19,283
|
t3chb0t
|
76,128,753
| 4,045,275
|
Can ecdfplot show the concentration of a variable? E.g. the top 10 items account for 20% of the total, etc
|
<h1>The issue</h1>
<p>I want to create a plot to show the concentration by a certain variable. Let's say I have a 1-dimensional array of prices.</p>
<ul>
<li><strong>I want a plot that shows me that the first 10 most expensive items account for 10% of the total price, the first 100 most expensive items for 40% of the total price, etc.</strong></li>
<li>This is useful in all those situations where we want to understand how concentrated or not certain data is: e.g. few borrowers account for most of the exposure of a bank, few days account for most of the rainfall in a given period, etc.</li>
</ul>
<h1>What I have done so far</h1>
<p>I manually sort by price, calculate a cumulative sum, divide by the total price and plot that.</p>
<h1>Why it's not ideal</h1>
<p>I would like to use SeaBorn's displot and facetgrids to calculate this for multiple categories. Something like this:</p>
<p><a href="https://i.sstatic.net/gE4sr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gE4sr.png" alt="enter image description here" /></a></p>
<h1>The question</h1>
<p>Is there a way to use ecdfplot or another function compatible with seaborn's displot?</p>
<h1>My code (which works but is not ideal)</h1>
<pre><code>import numpy as np
from numpy.random import default_rng
import pandas as pd
import copy
import matplotlib
matplotlib.use('TkAgg', force = True)
import matplotlib.pyplot as plt
import seaborn as sns
import seaborn.objects as so
from matplotlib.ticker import FuncFormatter
sns.set_style("darkgrid")
rng = default_rng()
# I generate random samples from a truncated normal distr
# (I don't want negative values)
n = int(2e3)
n_red = int(n/3)
n_green = n - n_red
df = pd.DataFrame()
df['price']= np.random.randn(n) * 100 + 20
df['colour'] = np.hstack([np.repeat('red',n_red),
np.repeat('green', n_green)])
df = copy.deepcopy(df.query('price > 0')).reset_index(drop=True)
num_cols = len(np.unique(df['colour']))
fig1, ax1 = plt.subplots(num_cols)
sub_dfs={}
for my_ax, c in enumerate(np.unique(df['colour'])):
sub_dfs[c] = copy.deepcopy(df.query('colour == @c'))
sub_dfs[c] = sub_dfs[c].sort_values(by='price', ascending=False).reset_index()
sub_dfs[c]['cum %'] = np.cumsum(sub_dfs[c]['price']) / sub_dfs[c]['price'].sum()
sns.lineplot(sub_dfs[c]['cum %'], ax = ax1[my_ax])
ax1[my_ax].set_title(c + ' - price concentration')
ax1[my_ax].set_xlabel('# of items')
ax1[my_ax].set_ylabel('% of total price')
</code></pre>
<h1>What I have tried - but doesn't work</h1>
<p>I have played around with <code>displot</code> and <code>ecdf</code></p>
<pre><code>fig2 = sns.displot(kind='ecdf', data = df, y='price', col='colour', col_wrap =2, weights ='price',
facet_kws=dict(sharey=False))
fig3 = sns.displot(kind='ecdf', data = df, x='price', col='colour', col_wrap =2, weights='price',
facet_kws=dict(sharey=False))
</code></pre>
<h1>EDIT: Mwascom's answer (I still can't get it to work)</h1>
<p>@mwaskom, thank you for your answer. However, I'm afraid I'm still doing something wrong, as I'm not getting the desired result.</p>
<p>If I run:</p>
<pre><code>fig5 = sns.displot(kind='ecdf', data=df, x=df.index, col='colour', col_wrap =2, weights='price',
facet_kws=dict(sharey=False, sharex=False))
</code></pre>
<ol>
<li>I get two straight lines, whereas the plot I need is convex (see first plot at the top). A straight line means the price is equally distributed, that 10% of the items account for 10% of the total price. A convex function means that the top 10% of the items account for more than 10% of the total price (which is my case). What I get is this: <a href="https://i.sstatic.net/hj4qG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hj4qG.png" alt="enter image description here" /></a></li>
<li>In my toy example, I have a category with ca. 400 items and one with ca. 800. Since the x axis is the index of the whole dataframe, the second plot goes from 400 to 1,200, instead of going from 1 to 800.</li>
</ol>
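<p>For reference, a sketch of the manual computation in tidy form (hypothetical sample data; only the pandas part is runnable here, the seaborn call is indicated in a comment): computing the per-colour rank and cumulative share with <code>groupby</code> gives each facet its own 1..n x-axis, which also addresses point 2.</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "price": np.abs(rng.normal(20, 100, 300)),  # hypothetical prices
    "colour": np.repeat(["red", "green"], 150),
})

# Sort once; groupby then keeps the per-colour descending order.
df = df.sort_values("price", ascending=False)
df["rank"] = df.groupby("colour").cumcount() + 1               # 1..n per colour
df["cum_share"] = (df.groupby("colour")["price"].cumsum()
                   / df.groupby("colour")["price"].transform("sum"))

# sns.relplot(data=df, x="rank", y="cum_share", col="colour", kind="line")
```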
|
<python><matplotlib><seaborn><ecdf>
|
2023-04-28 10:38:29
| 1
| 9,100
|
Pythonista anonymous
|
76,128,714
| 9,626,922
|
Jupyter notebook kernel dies when running gym env.render()
|
<p>EDIT: When I remove <code>render_mode="rgb_array"</code> it works fine. But this obviously is not a real solution.</p>
<p>I am trying to run a render of a game in a Jupyter notebook, but each time I run it I get a pop-up saying <code>Python 3.7 crashed</code> and the kernel has died.</p>
<pre><code>%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import time
import gym
from gym.envs.registration import register
from IPython.display import clear_output
try:
register(
id='FrozenLakeNotSlippery-v0', #Name this whatever you want
entry_point='gym.envs.toy_text:FrozenLakeEnv',
kwargs={'map_name' : '4x4', 'is_slippery': False},
max_episode_steps=100,
reward_threshold=0.78, # optimum = .8196
)
except:
print("Already Registered")
env = gym.make("FrozenLakeNotSlippery-v0",render_mode='rgb_array')
env.reset()
for step in range(5):
env.render()
action = env.action_space.sample()
observation, reward, terminated, truncated, info = env.step(action)
time.sleep(0.5)
print(observation)
clear_output(wait=True)
if terminated:
env.reset()
env.close()
</code></pre>
<p>Above is the code I have. It's very straightforward, and this seems to be a known issue, though I haven't seen anyone post a solution.</p>
<p>I have tried uninstalling and then re-installing the following packages with pip install:</p>
<ul>
<li>ipykernel</li>
<li>ipython</li>
<li>jupyter_client</li>
<li>jupyter_core</li>
<li>traitlets</li>
<li>ipython_genutils</li>
</ul>
<p>But I still have the same issue.</p>
|
<python><machine-learning><jupyter-notebook><openai-gym>
|
2023-04-28 10:33:39
| 1
| 617
|
Jm3s
|
76,128,690
| 3,815,773
|
How do I know which "File Descriptor" stands for a TCP connection in Python?
|
<p>In Linux the <code>lsof</code> command is very helpful for listing open files, giving e.g. this output:</p>
<pre><code>~$ sudo lsof -a -i -u root -c python
=======
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python3 1504835 root 19u IPv4 57127166 0t0 TCP urkam.fritz.box:http->GMC-500-old.fritz.box:5098 (ESTABLISHED)
python3 1504835 root 1022u IPv4 57108854 0t0 TCP urkam.fritz.box:http (LISTEN)
</code></pre>
<p>For me this <code>FD: 19u</code> is the bad guy, and I want to get at that information using Python.</p>
<p>Of no help is <code>os.stat()</code> which delivers this:</p>
<pre><code>os.stat_result(st_mode=49663, st_ino=57160758, st_dev=8, st_nlink=1, st_uid=0, st_gid=0, st_size=0, st_atime=0, st_mtime=0, st_ctime=0)
</code></pre>
<p>If this has TCP info then I don't see it.</p>
<p>Any other way to get the info that lsof is obviously able to get?</p>
<p>The final solution needs to work on Linux, Windows, and Mac.</p>
<p>EDIT:</p>
<p>Some progress from using psutil:</p>
<pre><code>import os, psutil
pnc = psutil.net_connections(kind="inet4")
for p in pnc:
print("fd: ", p.fd, " raddr: ", p.raddr)
os.close(p.fd)
</code></pre>
<p>It allows me to find the open TCP connections, and provides the file descriptors (fd). Then I can close those bad files with <code>os.close(fd)</code>. Tested in overnight run - works fine!</p>
<p>However, on Windows psutil finds all the right files, which is nice, but does not help, because fd is always ==-1 ! This means an invalid fd and I cannot close this file by <code>os.close(fd)</code>.</p>
<p>So this approach does NOT work on Windows.</p>
<p>Windows apparently uses a concept different from file descriptors, called handles in Windows lingo. My understanding is they could be closed just the same with <code>os.close(handle)</code>.</p>
<p>But where and how do I find those handles?</p>
|
<python><tcp><file-descriptor><handle>
|
2023-04-28 10:30:09
| 0
| 505
|
ullix
|
76,128,663
| 11,466,416
|
Hugging Face load model --> RuntimeError: Cuda out of memory
|
<p>I found several threads that dealt with the same error message, but my case seems to be different. In the other threads I encountered, there was actually not enough memory to allocate.
I want to load a pre-trained transformer onto a GPU, but even trying to load the model results in the following error message:</p>
<p><code>RuntimeError: CUDA out of memory. Tried to allocate 734.00 MiB (GPU 0; 15.78 GiB total capacity; 0 bytes already allocated; 618.50 MiB free; 0 bytes reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF</code></p>
<p>I see, that there is less available memory than the model needs. Also, I checked with <code>nvidia-smi</code> and there is no other process running on that GPU. Additionally, there is a total of 15.78 GiB memory available, but in the end the reserved and allocated memory in sum is zero. So there should still be plenty of space for the model. Where is all the memory? Why it is not available?</p>
<p>Here is the code I used:</p>
<pre><code>import evaluate
import jsonlines
import numpy as np
from transformers import (AutoTokenizer,
AutoModelForSeq2SeqLM,
DataCollatorForSeq2Seq,
Seq2SeqTrainingArguments,
Seq2SeqTrainer)
from datasets import Dataset
import os
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
MODEL_CHECKPOINT = "google/mt5-base" # I checked and it should fit onto the GPU
tokenizer = AutoTokenizer.from_pretrained(MODEL_CHECKPOINT)
# model = MT5ForConditionalGeneration.from_pretrained(MODEL_CHECKPOINT)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_CHECKPOINT).to("cuda") # this line causes the error
</code></pre>
<p>The code is also mainly copied from the tutorial given on the Hugging Face website.</p>
|
<python><huggingface-transformers>
|
2023-04-28 10:25:19
| 2
| 456
|
Blindschleiche
|
76,128,609
| 14,368,631
|
Create an event system using decorators that work for global functions and class functions
|
<p>So I have this following code:</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict
_event_handlers = defaultdict(set)
def add_event_handler(event_name=""):
def wrapper(func):
_event_handlers[event_name if event_name else func.__name__].add(func)
return func
return wrapper
@add_event_handler()
def bar():
print("test")
class Foo:
def __init__(self, f):
self.f = f
@add_event_handler()
def bar(self):
print(self.f)
def dispatch_event(event_name, **kwargs):
for handler in _event_handlers[event_name]:
handler(**kwargs)
test = Foo(5)
dispatch_event("bar")
</code></pre>
<p>I'm trying to develop an event system which allows you to register an event handler with the <code>@add_event_handler()</code> decorator, which adds the function to a global <code>_event_handlers</code> variable so that events can then be dispatched. However, it works for global functions but not for methods in a class: a <code>TypeError</code> is raised since <code>self</code> is not provided. How can I fix this?</p>
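<p>One possible fix (a sketch; <code>EventMixin</code> is a hypothetical base class not in the original code): register plain functions at decoration time, but defer method registration to instance creation, where a bound method carrying <code>self</code> is available. The <code>self</code>-parameter check is only a heuristic.</p>

```python
import inspect
from collections import defaultdict

_event_handlers = defaultdict(set)


def add_event_handler(event_name=""):
    def wrapper(func):
        name = event_name or func.__name__
        func._event_name = name
        params = list(inspect.signature(func).parameters)
        # Heuristic: a first parameter named "self" marks a method;
        # methods are registered per instance by EventMixin instead.
        if not (params and params[0] == "self"):
            _event_handlers[name].add(func)
        return func
    return wrapper


class EventMixin:
    """Registers an instance's decorated methods as *bound* methods."""
    def __init__(self):
        for attr_name in dir(self):
            attr = getattr(self, attr_name)
            if callable(attr) and getattr(attr, "_event_name", None):
                _event_handlers[attr._event_name].add(attr)


def dispatch_event(event_name, **kwargs):
    for handler in set(_event_handlers[event_name]):
        handler(**kwargs)
```

<p>Classes then inherit from <code>EventMixin</code> and call <code>super().__init__()</code> so their decorated methods get registered per instance.</p>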
|
<python><events><decorator>
|
2023-04-28 10:19:19
| 0
| 328
|
Aspect11
|
76,128,593
| 8,818,287
|
Deploy a general tree model in flask
|
<h2>Background</h2>
<p>I have a class that holds a general tree data structure (called <strong>Factory</strong> class). This general tree is composed of nodes (called <strong>Item</strong> class) that can have multiple parents nodes and multiple child nodes. Also, parents and children can be shared.
For those familiar with manufacturing and supply chain terms, these classes model a "bill of materials".</p>
<p>This class is instantiated by reading over 50 million records from a database, so it takes some minutes to instantiate the object.</p>
<p>Given that, my question is: how can I deploy this model using flask?</p>
<h3>Try 1</h3>
<p>Every time a user sends a request, the model reads the entire database, instantiates the object, calls a given method on it and returns the output to the user. This takes way too long; every request would take several minutes to complete.</p>
<h3>Try 2</h3>
<p>Serialise the model using <code>pickle</code> or <code>dill</code>, so that every time the user sends a request, the model is de-serialised, a given method is invoked and the output returned to the user. The problem here is that the model is highly recursive, and <code>pickle</code> and <code>dill</code> are not able to serialise it without hitting the maximum recursion limit.</p>
<p>I tried to increase the maximum recursion limit with <code>sys.setrecursionlimit()</code>, but python crashes before the serialisation is complete.</p>
|
<python><flask><serialization><deployment>
|
2023-04-28 10:17:12
| 0
| 789
|
asa
|
76,128,437
| 15,358,800
|
Add percentage of repetitive strings in bar graph using Pandas
|
<p>Let's say I've df like this</p>
<p><strong>Reproducable:</strong></p>
<pre><code>import pandas as pd
import io
TESTDATA="""All_services
All_services
Rehosting applications to AWS
Replacing flexible functionalities
Unaltered replatforming of underlying code structure, functionalities, features
Rebuilding broken applications/software segments
Optimize existing use of cloud(Cost saving)
Expand use of containers
Move on prem servers to Sass
Expanding public clouds
Implemenation CI/CD to clouds
Migration Evaluator
AWS Migration Hub
AWS Application Discovery Services
AWS Landing Zone
AWS Control Tower
AWS Management and Governance
AWS Database Migration Services
AWS Server Migration Service
AWS Database Migration Service
AWS Application Discovery Service
AWS Direct Connect
DB Migrations
open-source databases to AWS.
Oracle to Oracle
Oracle or Microsoft SQL Server to Amazon Aurora.
Migrating fileservers to Amazon S3
migrating commercial RDBMS or MySQL.
Optimize existing use of cloud(Cost saving)
Expand use of containers
Move on prem servers to Sass
Expanding public clouds
Implemenation CI/CD to clouds
Migration Evaluator
AWS Migration Hub
AWS Application Discovery Services
AWS Landing Zone
DB Migrations
Cloud Migration Planning
Replatforming Applications for Cloud
Cloud Application Development Services
From Monolith to Microservices
Cloud Infrastructure Automation
Implemenation CI/CD to clouds
DB Migrations
Optimize existing use of cloud(Cost saving)
Implemenation CI/CD to clouds
Migration Evaluator
AWS Migration Hub
AWS Application Discovery Services
AWS Direct Connect
DB Migrations
open-source databases to AWS.
Oracle to Oracle
Oracle or Microsoft SQL Server to Amazon Aurora.
Migrating fileservers to Amazon S3
Optimize existing use of cloud(Cost saving)
Amazon S3 Transfer Acceleration
AWS Snowball
AWS Direct Connect
EC2
AWS Server Migration Service
AWS Database Migration Service
VMWare Cloud on AWS
Optimize existing use of cloud(Cost saving)
Cloud Application Development Services
From Monolith to Microservices
Cloud Infrastructure Automation
Implemenation CI/CD to clouds
DB Migrations
Optimize existing use of cloud(Cost saving)
Implemenation CI/CD to clouds
Migration Evaluator
Optimize existing use of cloud(Cost saving)
AWS Application Discovery Services
AWS Direct Connect
DB Migrations
Rebuilding broken applications/software segments
Optimize existing use of cloud(Cost saving)
Expand use of containers
Move on prem servers to Sass
Expanding public clouds
Implemenation CI/CD to clouds
AWS Management and Governance
AWS Database Migration Services
AWS Server Migration Service
AWS Database Migration Service
AWS Application Discovery Service
AWS Direct Connect
DB Migrations
"""
df = pd.read_csv(io.StringIO(TESTDATA), sep=";")
df = df.replace(r"^ +| +$", r"", regex=True)
df.All_services.value_counts().sort_values().plot(kind = 'barh',figsize=(25, 15),linewidth=4)
</code></pre>
<p>I got graph like this</p>
<p><a href="https://i.sstatic.net/D6Plx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D6Plx.png" alt="enter image description here" /></a></p>
<p>How can I add repetitive-string percentages to the bar plot using pandas?</p>
<p><a href="https://i.sstatic.net/II7aj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/II7aj.png" alt="enter image description here" /></a></p>
<p>There are similar answers but they are using matplotlib with pandas. I'm looking for a pandas-only approach, with some prior hard coding if needed. If it's not achievable I will go with matplotlib.</p>
<p>Similar threads with matplotlib</p>
<p><a href="https://stackoverflow.com/questions/73674585/pandas-matplotlib-labels-bars-as-percentage">pandas matplotlib labels bars as percentage</a></p>
<p><a href="https://stackoverflow.com/questions/52080991/how-to-display-percentage-above-grouped-bar-chart">How to display percentage above grouped bar chart</a></p>
<p><a href="https://stackoverflow.com/questions/71652056/adding-percentage-labels-to-grouper-bar-chart">Adding Percentage Labels to Grouper Bar Chart</a></p>
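<p>A mostly-pandas sketch (hypothetical sample data; <code>bar_label</code> lives on the matplotlib Axes that pandas returns and needs matplotlib >= 3.4):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import pandas as pd

s = pd.Series(["A", "A", "A", "B", "B", "C"])
counts = s.value_counts().sort_values()
pct = counts / counts.sum() * 100  # share of each repeated string

ax = counts.plot(kind="barh", figsize=(8, 4))
ax.bar_label(ax.containers[0], labels=[f"{p:.1f}%" for p in pct])
```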
|
<python><pandas><graph>
|
2023-04-28 10:02:35
| 1
| 4,891
|
Bhargav
|
76,128,423
| 5,881,882
|
Librosa specshow, what data processing is done under the hood compared to plt.imshow
|
<p>I am trying to understand, why a plot produced by <code>plt.imshow()</code> is way less detailed and more blurry, than a plot produced by <code>librosa.display.specshow</code>.</p>
<p>As the plots, or rather the data behind the plots, are the basis of my further analysis, I would like to go for as much detail as possible. Thus I wonder what kind of black magic / enhancement / processing is done by <code>librosa</code> behind the curtains.</p>
<p><a href="https://i.sstatic.net/s3dWa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s3dWa.png" alt="enter image description here" /></a>
As you can see in the screenshot, I use the same array as well as the same figure size and resolution.</p>
|
<python><pytorch><librosa><torchvision>
|
2023-04-28 10:00:40
| 1
| 388
|
Alex
|
76,128,396
| 13,391,350
|
Re-arranging pivot column and value using Pandas
|
<p>I would like to rearrange the pivot table so that, based on this input data, <code>revenue_month</code> is treated as a column level, keeping all other indexes, while the columns <code>revenue__net_eur</code> and <code>accrual__net_eur</code> become the values.</p>
<pre><code>data = {
'cust_id': [1, 1, 1, 1],
'cust_name': ['Company A', 'Company B', 'Company A', 'Company A'],
'doc_num': [101, 102, 103, 104],
'created_at': ['2023-01-01', '2023-02-01', '2023-03-15', '2023-04-01'],
'payment_method': ['Credit Card', 'PayPal', 'Credit Card', 'PayPal'],
'status': ['Paid', 'Paid', 'Disputed', 'Paid'],
'currency': ['EUR', 'EUR', 'EUR', 'EUR'],
'net_amount': [1000, 1500, 2000, 2500],
'vat_amount': [200, 300, 400, 500],
'gross_amount': [1200, 1800, 2400, 3000],
'revenue_month': ['2023-01-31', '2023-01-31', '2023-02-28', '2023-02-28'],
'revenue__net_eur': [800, 1200, 1600, 2000],
'accrual__net_eur': [1000, 1500, 2000, 2500]
}
pd.DataFrame(data)
</code></pre>
<p>Final output:
<a href="https://i.sstatic.net/jsu1U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jsu1U.png" alt="enter image description here" /></a></p>
<p>The indexes are not as important, I am interested in pivoting the <code>revenue_month</code> with the two values : <code>revenue__net_eur, accrual__net_eur</code>.</p>
<p>What I've tried so far:</p>
<pre><code># create a pivot table with 'revenue_month' column labels
pivot_table = pd.pivot_table(df, values=['revenue__net_eur', 'accrual__net_eur'],
index=['cust_id', 'doc_num'],
columns=['revenue_month'],
aggfunc=np.sum, fill_value=0)
# swap the order of the column index levels
pivot_table.columns = pivot_table.columns.swaplevel(1, 0)
pivot_table
</code></pre>
<p>which only results in:
<a href="https://i.sstatic.net/whpqX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/whpqX.png" alt="enter image description here" /></a>
The Python code is wrong because <code>revenue__net_eur</code> should come as a column before <code>accrual__net_eur</code> for each particular month, where the order of revenue_month is ascending.
<em>see the final output table</em></p>
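<p>A sketch of one way to get the requested layout (using the question's sample data): after swapping the levels, build the exact target order with <code>MultiIndex.from_product</code>, so months come out ascending with <code>revenue__net_eur</code> before <code>accrual__net_eur</code> inside each month.</p>

```python
import pandas as pd

df = pd.DataFrame({
    "cust_id": [1, 1, 1, 1],
    "doc_num": [101, 102, 103, 104],
    "revenue_month": ["2023-01-31", "2023-01-31", "2023-02-28", "2023-02-28"],
    "revenue__net_eur": [800, 1200, 1600, 2000],
    "accrual__net_eur": [1000, 1500, 2000, 2500],
})

pt = pd.pivot_table(df, values=["revenue__net_eur", "accrual__net_eur"],
                    index=["cust_id", "doc_num"], columns=["revenue_month"],
                    aggfunc="sum", fill_value=0)

pt.columns = pt.columns.swaplevel(0, 1)
# Build the exact target order: months ascending, revenue before accrual.
order = pd.MultiIndex.from_product(
    [sorted(df["revenue_month"].unique()),
     ["revenue__net_eur", "accrual__net_eur"]],
    names=pt.columns.names)
pt = pt.reindex(columns=order)
```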
|
<python><pandas>
|
2023-04-28 09:56:23
| 1
| 747
|
Luc
|
76,128,230
| 5,308,802
|
Django ORM: move filter after annotate subquery
|
<p>This Django ORM statement:</p>
<pre class="lang-py prettyprint-override"><code>Model.objects.all() \
.annotate(
ord=Window(
expression=RowNumber(),
partition_by=F('related_id'),
order_by=[F("date_created").desc()]
)
) \
.filter(ord=1) \
.filter(date_created__lte=some_datetime)
</code></pre>
<p>Leads to the following SQL query:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT *
FROM (
SELECT
id, related_id, values, date_created,
ROW_NUMBER() OVER (
PARTITION BY related_id
ORDER BY date_created DESC
) AS ord
FROM model_table
WHERE date_created <= 2022-02-24 00:00:00+00:00
)
WHERE ord = 1
</code></pre>
<p>As you can see, the <code>date_created__lte</code> filter gets applied in the inner query. Is it possible to control the statement's location more precisely and move the filter outside, like <code>ord</code>?</p>
|
<python><django><postgresql><orm>
|
2023-04-28 09:38:48
| 2
| 1,228
|
AivanF.
|
76,128,144
| 8,547,163
|
Issue with pipenv and pip after installation of the package: pkg_resources.DistributionNotFound
|
<p>I'm trying to install a package via <code>pip</code> and in an virtual environment using</p>
<pre><code>python3 -m venv env
source env/bin/activate
</code></pre>
<p>then I install the package via pip</p>
<pre><code>pip install <package name>
</code></pre>
<p>I get no error while installing it, but when I try to run the package with</p>
<pre><code><package name> --version
</code></pre>
<pre><code>.
.
.
.
pkg_resources.DistributionNotFound: The 'lockfile>=0.9; extra == "filecache"' distribution was not found and is required by CacheControl
</code></pre>
<p>Can anyone suggest how to debug this issue?</p>
|
<python><pip><python-venv>
|
2023-04-28 09:26:55
| 0
| 559
|
newstudent
|
76,127,887
| 11,024,270
|
How to average spatial points data over spatial grid boxes in Python
|
<p>I have a set of data points that have coordinates and one value associated with them. I would like to be able to place the data points into grid boxes and average all the points that fall into each grid box.</p>
<p>Here is the code I managed to create that makes what I want. I wonder if there is a more elegant and more efficient way to do that.</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# ***********************************************
# *** Get mean data in 1°x1° lat×lon grid box ***
# Dataset
lats = np.array((40.1, 42.1, 40.7, 43.1, 41.0, 42.7, 40.7, 43.1, 41.2, 42.7, 40.5, 43.1, 40.2))
lons = np.array(( 1.2, 2.6, 3.1, 1.7, 2.1, 1.9, 3.4, 2.7, 3.4, 1.3, 2.1, 2.9, 3.7))
temps = np.array((28.3, 25.1, 24.3, 27.4, 26.3, 21.3, 25.0, 21.9, 22.8, 21.7, 23.8, 21.5, 20.1))
# Initialization
latbins = np.arange(-90, 90.1, 1)
lonbins = np.arange(0, 360.1, 1)
sum_temp = np.zeros((latbins.size-1, lonbins.size-1))
occ_temp = np.zeros((latbins.size-1, lonbins.size-1))
# Sum and count data in grid box
for i, temp in enumerate(temps):
# Get index of grid box where to add the data
i_latbin = 0
while lats[i] > latbins[i_latbin]:
i_latbin += 1
i_lonbin = 0
while lons[i] > lonbins[i_lonbin]:
i_lonbin += 1
# Add the data
sum_temp[i_latbin-1, i_lonbin-1] += temp
occ_temp[i_latbin-1, i_lonbin-1] += 1
# Compute mean
FILL_VALUE = -9999.
mean_temp = np.ones((latbins.size-1, lonbins.size-1))*FILL_VALUE
mean_temp[occ_temp != 0] = sum_temp[occ_temp != 0]/occ_temp[occ_temp != 0]
mean_temp = np.ma.masked_where(mean_temp==FILL_VALUE, mean_temp)
# ****************
# *** Plot map ***
plt.figure(figsize=(5, 10))
# Plot mean data
ax = plt.subplot(211)
ax.set_facecolor('0.8')
plt.pcolormesh(lonbins, latbins, mean_temp, cmap=cm.RdBu_r)
plt.colorbar(label='Temperature (Β°C)')
plt.xlim(1, 4)
plt.ylim(40, 44)
plt.xlabel('Longitude')
plt.ylabel('Latitude')
# Plot number of observations
ax = plt.subplot(212)
plt.pcolormesh(lonbins, latbins, occ_temp)
plt.colorbar(label='Number of observations')
plt.xlim(1, 4)
plt.ylim(40, 44)
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Pjy19.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pjy19.png" alt="enter image description here" /></a></p>
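<p>For comparison, a more compact alternative (a sketch producing the same per-box means up to bin-edge conventions: <code>np.histogram2d</code> puts a value lying exactly on an edge into the right-hand bin, while the while-loops above put it into the left-hand one):</p>

```python
import numpy as np

lats = np.array([40.1, 42.1, 40.7, 43.1, 41.0, 42.7, 40.7,
                 43.1, 41.2, 42.7, 40.5, 43.1, 40.2])
lons = np.array([1.2, 2.6, 3.1, 1.7, 2.1, 1.9, 3.4,
                 2.7, 3.4, 1.3, 2.1, 2.9, 3.7])
temps = np.array([28.3, 25.1, 24.3, 27.4, 26.3, 21.3, 25.0,
                  21.9, 22.8, 21.7, 23.8, 21.5, 20.1])

latbins = np.arange(-90, 90.1, 1)
lonbins = np.arange(0, 360.1, 1)

# Weighted histogram gives the per-box sums; unweighted gives the counts.
sum_temp, _, _ = np.histogram2d(lats, lons, bins=[latbins, lonbins], weights=temps)
occ_temp, _, _ = np.histogram2d(lats, lons, bins=[latbins, lonbins])
mean_temp = np.divide(sum_temp, occ_temp,
                      out=np.full_like(sum_temp, np.nan), where=occ_temp > 0)
mean_temp = np.ma.masked_invalid(mean_temp)
```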
|
<python><arrays><numpy>
|
2023-04-28 08:54:58
| 1
| 432
|
TVG
|
76,127,761
| 10,981,411
|
how do I make my horizontal scroll bar to work in my codes that uses tkinter library
|
<p>below are my codes</p>
<p>When you click the "run script" button, it populates the frame with a df of random numbers. I can scroll down but not across. Any reason why? Can someone fix the code please? I don't want to change the position of the tree frame, as I have other buttons and labels on the left which I removed from my code for simplicity.</p>
<pre><code>import tkinter
from tkinter import ttk
from tkinter import *
import pandas as pd
import numpy as np
# root = Tk()
# root.geometry("1200x800")
root = tkinter.Tk()
root.geometry("1200x800")
frame = tkinter.Frame(root)
frame.grid(row=0, column=0, padx=20, pady=10)
def treeframe1(rows, df):
tree["columns"] = list(range(1, len(df.columns) + 1))
tree['show'] = 'headings'
tree['height'] = '30'
colnames = df.columns
tree.column(1, width=90, anchor='c')
tree.heading(1, text=colnames[0])
for i in range(2, len(df.columns) + 1):
tree.column(i, width=100, anchor='c')
tree.heading(i, text=colnames[i - 1])
for i in rows:
tree.insert('', 'end', values=i)
tree = ttk.Treeview(root, selectmode='browse', height='30')
tree.place(x=830, y=45)
vsb = ttk.Scrollbar(root, orient="vertical", command=tree.yview)
vsb.place(x=820, y=45, height=560)
tree.configure(yscrollcommand=vsb.set)
vsb = ttk.Scrollbar(root, orient="horizontal", command=tree.xview)
vsb.place(x=820, y=670, width=200)
tree.configure(xscrollcommand=vsb.set)
def view_command():
global action_type
global df
tree.delete(*tree.get_children())
df = pd.DataFrame(np.random.randint(1,10,(100,150)))
rows = df.values.tolist()
treeframe1(rows, df)
b4 = Button(root, text="run script", width=15, height=11, command=view_command)
b4.place(x=500, y=50)
root.mainloop()
</code></pre>
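Two things stand out in the snippet above: the horizontal scrollbar is assigned to the same name <code>vsb</code> as the vertical one, and the Treeview is placed without a width, so it requests enough room for all 150 columns and <code>xview</code> has nothing to scroll. A sketch of a fix (coordinates and widths are illustrative), wrapped in a function so that importing it does not open a window:

```python
import random
import tkinter as tk
from tkinter import ttk

def build_tree_ui(n_rows=100, n_cols=150):
    """Illustrative layout: a Treeview whose fixed width forces horizontal scrolling."""
    root = tk.Tk()
    root.geometry("1200x800")

    tree = ttk.Treeview(root, selectmode="browse", height=30)
    # a fixed width smaller than the total column width, so there is something to scroll
    tree.place(x=330, y=45, width=340)

    vsb = ttk.Scrollbar(root, orient="vertical", command=tree.yview)
    vsb.place(x=320, y=45, height=560)
    hsb = ttk.Scrollbar(root, orient="horizontal", command=tree.xview)  # distinct name
    hsb.place(x=330, y=610, width=340)
    tree.configure(yscrollcommand=vsb.set, xscrollcommand=hsb.set)

    cols = list(range(1, n_cols + 1))
    tree["columns"] = cols
    tree["show"] = "headings"
    for c in cols:
        tree.column(c, width=100, anchor="c", stretch=False)  # keep each column at 100 px
        tree.heading(c, text=str(c))
    for _ in range(n_rows):
        tree.insert("", "end", values=[random.randint(1, 9) for _ in cols])

    root.mainloop()

# build_tree_ui()  # uncomment to open the window
```

`stretch=False` stops Tk from shrinking the columns to fit the widget, which is what usually makes the horizontal scrollbar appear to do nothing.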
|
<python><tkinter>
|
2023-04-28 08:38:42
| 0
| 495
|
TRex
|
76,127,738
| 1,920,368
|
regex lookbehind: if it matches, exclude the entire line
|
<p>I am trying to match the single character <code>]</code> only when <code>[!</code> does not appear before it.</p>
<h5>Regex:</h5>
<pre><code>(?!(?:\[\!.*))\]
</code></pre>
<h5>Expected Result:</h5>
<pre><code>_dynamic_text_[!_dynamic_text_]_dynamic_text_
_dynamic_text__dynamic_text_]_dynamic_text_
✓
</code></pre>
<h4>Current Result</h4>
<pre><code>_dynamic_text_[!_dynamic_text_]_dynamic_text_
✓
_dynamic_text__dynamic_text_]_dynamic_text_
✓
</code></pre>
<p><a href="https://regex101.com/r/IIzoX5/1" rel="nofollow noreferrer">https://regex101.com/r/IIzoX5/1</a></p>
<p>I have consulted several reference sources, but I am still encountering difficulties in finding out how to accomplish this task.</p>
<p>Would you be able to offer me some guidance or advice?</p>
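Python's <code>re</code> only allows fixed-width lookbehind, so a variable-length guard like <code>(?&lt;!\[!.*)</code> is rejected. A common workaround is to let an alternation consume the guarded <code>[!…]</code> span and capture only the bare <code>]</code> in a group. A sketch, assuming the intent is to skip a <code>]</code> that is preceded by <code>[!</code> on the same line:

```python
import re

lines = [
    "_dynamic_text_[!_dynamic_text_]_dynamic_text_",
    "_dynamic_text__dynamic_text_]_dynamic_text_",
]

# The first alternative consumes any "[!...]" span, so its "]" never reaches
# the capture group; only bare "]" characters land in group 1.
pattern = re.compile(r"\[![^\]]*\]|(\])")

for line in lines:
    bare = [m.start(1) for m in pattern.finditer(line) if m.group(1)]
    print(line, "->", bare)
```

The first line yields no bare <code>]</code> (its <code>]</code> belongs to a <code>[!…]</code> pair), while the second yields the position of its <code>]</code>.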
|
<python><regex><negative-lookbehind>
|
2023-04-28 08:35:39
| 1
| 4,570
|
Micah
|
76,127,700
| 13,393,940
|
Docker container keeps running after system was pruned
|
<p>Yesterday I discovered that a container was still running on my machine (macOS Monterey). I searched StackOverflow for answers to my issue, but nothing I tried worked.</p>
<p>I did a <code>docker ps</code> and then a <code>docker stop <container-ID></code> but the web app was still running in port <code>0.0.0.0:80</code>.</p>
<p>I don't remember when I run this container but it was during the development of a Dash Plotly app.</p>
<p>After the above failed I tried:</p>
<pre><code>docker system prune --all --force --volumes
</code></pre>
<p>which removed all containers and images from my system (it did work because the image indeed disappeared from my Docker Desktop list).</p>
<p>I then restarted my computer but the web app was still there.</p>
<p>I then run the command:</p>
<pre><code>sudo lsof -i -P -n | grep 80
</code></pre>
<p>which gave me the output:</p>
<pre><code>assistant 480 cconsta1 25u IPv4 0x7f28d5520c917253 0t0 UDP *:*
Google 730 cconsta1 80u IPv6 0x7f28d5520a8477c3 0t0 UDP *:5353
Google 730 cconsta1 89u IPv6 0x7f28d5520a8480f3 0t0 UDP *:5353
Slack\x20 4259 cconsta1 23u IPv4 0x7f28d54d343d66cb 0t0 TCP 192.168.10.1:51807->3.65.102.105:443 (ESTABLISHED)
Slack\x20 4259 cconsta1 26u IPv4 0x7f28d54d339966cb 0t0 TCP 192.168.10.1:51809->3.65.102.105:443 (ESTABLISHED)
httpd 4418 root 4u IPv6 0x7f28d53edaecb713 0t0 TCP *:80 (LISTEN)
httpd 4422 _www 4u IPv6 0x7f28d53edaecb713 0t0 TCP *:80 (LISTEN)
httpd 4431 _www 4u IPv6 0x7f28d53edaecb713 0t0 TCP *:80 (LISTEN)
httpd 4433 _www 4u IPv6 0x7f28d53edaecb713 0t0 TCP *:80 (LISTEN)
httpd 4434 _www 4u IPv6 0x7f28d53edaecb713 0t0 TCP *:80 (LISTEN)
</code></pre>
<p>I tried to kill these processes to see if something will work out, <code>sudo kill -9 <PID></code> but that didn't work either.</p>
<p>Finally, I cleared my browser's cache and checked whether the web app runs in private mode but it still works.</p>
<p>I don't remember which <code>Dockerfile</code> I used to run this container but this one is the closest:</p>
<pre><code>FROM python:3.10
# EXPOSE 8050
WORKDIR /app
COPY . .
COPY models /app/models/
RUN pip install -r requirements.txt
EXPOSE 8050
CMD ["gunicorn", "-b", "0.0.0.0:8050", "--reload", "app:server"]
</code></pre>
<p>The image was probably built using:</p>
<pre><code>docker build -f Dockerfile -t app:latest .
</code></pre>
<p>and run using:</p>
<pre><code>docker run -p 80:8050 app:latest
</code></pre>
<p>This is the <code>requirements.txt</code> file:</p>
<pre><code>numpy
pandas
plotly
dash
gunicorn
dash-bootstrap-components
scikit-learn
xgboost
</code></pre>
<p>The <code>app.py</code> file looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import time
import dash
import dash_bootstrap_components as dbc
import pickle
import numpy as np
import plotly.graph_objs as go
from dash import Input, Output, State, dcc, html
# import tensorflow as tf
# from tensorflow import keras
# from keras.models import load_model
#import xgboost
import re
app = dash.Dash(external_stylesheets=[
dbc.themes.COSMO])
# Include the server option to become able to deploy online
server = app.server
# Code for the app
if __name__ == "__main__":
app.run_server(debug=True, host="0.0.0.0",port="8050", use_reloader=True)
#app.run_server(debug=True)
</code></pre>
<p>The command <code>docker --version</code> returns:</p>
<pre><code>Docker version 20.10.24, build 297e128
</code></pre>
<p>Edit: I think that the image was actually run using the <code>restart always</code> command:</p>
<pre><code>docker run --restart always -p 80:8050 app:latest
</code></pre>
|
<python><docker><plotly-dash><gunicorn>
|
2023-04-28 08:29:29
| 1
| 873
|
cconsta1
|
76,127,674
| 6,632,138
|
Wrapper around instance methods with default return value
|
<p>I have a dozen (instance) methods that should run their logic only if a condition is satisfied, and otherwise return a default value that is different for each method:</p>
<pre><code>def fun(self, *args, **kwargs):
if not self.init():
return ... # <- default value, different for each method
# method logic
return ... # some value
</code></pre>
<p>I managed to make this work with decorators as follows:</p>
<pre><code>from functools import partial, wraps
def check_init(caller = None, *, ret = None):
if caller is None:
return partial(check_init, ret = ret)
@wraps(caller)
def _check_init(self, *args, **kwargs):
return caller(self, *args, **kwargs) if self.init() else ret
return _check_init
class Test:
num: int = 2
def init(self) -> bool:
return self.num < 5
    @check_init(ret = 'Cannot show number')
def show(self) -> str:
return 'Number is %d' % self.num
x = Test()
print(x.show()) # "Number is 2"
x.num = 4
print(x.show()) # "Number is 4"
x.num = 6
print(x.show()) # "Cannot show number"
</code></pre>
<p>Two questions:</p>
<ul>
<li><p>Since the described logic is related only to the <code>Test</code> instance methods, how can I put the whole <code>def check_init()</code> block inside the <code>Test</code> class? When it is outside the class I cannot type-hint <code>self</code> with <code>Test</code> or <code>Self</code>.</p>
</li>
<li><p>Is there a better (more Pythonic) way to achieve what I'm trying to do? Preferably with method decorators..</p>
</li>
</ul>
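For the first question: a decorator does not have to be a method. A plain function defined in the class body can decorate the methods below it, and with <code>from __future__ import annotations</code> the <code>Test</code> hint inside the class body resolves lazily. A runtime-clean sketch (the helper name and the <code>del</code> cleanup are one possible style; static type checkers may still want a <code>type: ignore</code> on the self-less helper):

```python
from __future__ import annotations
from functools import wraps
from typing import Any, Callable, TypeVar

R = TypeVar("R")

class Test:
    num: int = 2

    def init(self) -> bool:
        return self.num < 5

    # A plain function in the class body: usable as a decorator for the
    # methods defined below it; the "Test" hint resolves lazily thanks to
    # the __future__ import.
    def _check_init(ret: R) -> Callable[[Callable[..., R]], Callable[..., R]]:
        def deco(fn: Callable[..., R]) -> Callable[..., R]:
            @wraps(fn)
            def wrapper(self: Test, *args: Any, **kwargs: Any) -> R:
                return fn(self, *args, **kwargs) if self.init() else ret
            return wrapper
        return deco

    @_check_init(ret="Cannot show number")
    def show(self) -> str:
        return "Number is %d" % self.num

    del _check_init  # optional: drop the helper from the class namespace

x = Test()
print(x.show())   # Number is 2
x.num = 6
print(x.show())   # Cannot show number
```

The <code>del</code> keeps <code>_check_init</code> from lingering as a class attribute after the methods have been decorated.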
|
<python><decorator><return-value><wrapper><default>
|
2023-04-28 08:26:45
| 1
| 577
|
Marko Gulin
|
76,127,624
| 1,159,488
|
How to optimize the sorting of a big table with Python
|
<p>I have a table A with all values hidden.<br />
The goal is to find a trick to sort these values with the help of two functions:</p>
<ol>
<li>compare (x, y) checks in the table A whether x value < y value or not</li>
<li>If not : sort(x, y) sorts x compared to y with : <code>A[x], A[y] = A[y], A[x]</code></li>
</ol>
<p>I found a way to loop on that and automatically sort the table A.<br />
Here is the snippet:</p>
<pre><code> for i in range (len(A)):
for j in range (len(A)):
if compare (i, j):
sort(i, j)
pass
</code></pre>
<p>That works pretty well but I'd like to find another way to accelerate the processing.<br />
Indeed the length of my table A is 32 so it is ok. But if the length is 1000, 5000 or even 10000 it would clearly take a lot of time and that could be a problem (especially if a time limit exists).<br />
Consequently, does another method exist to optimize my loop for bigger tables?</p>
<p>Any help would be appreciated, thanks.</p>
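Since <code>compare</code>/<code>sort</code> touch a hidden table, one way to cut the comparison count from O(n²) to O(n log n) is to sort the <em>indices</em> with <code>functools.cmp_to_key</code>, then apply the resulting permutation with at most n-1 swaps by following its cycles. A sketch against a simulated hidden table (<code>A</code>, <code>compare</code>, and <code>swap</code> are stand-ins for the real API):

```python
from functools import cmp_to_key

# Simulated hidden table plus the two allowed operations (stand-ins for the real API)
A = [5, 3, 8, 1, 9, 2, 7]
compare_calls = 0

def compare(x, y):                # "is A[x] < A[y]?"
    global compare_calls
    compare_calls += 1
    return A[x] < A[y]

def swap(x, y):                   # the question's sort(x, y)
    A[x], A[y] = A[y], A[x]

n = len(A)
# 1) O(n log n) comparisons: sort the positions, not the values
order = sorted(range(n), key=cmp_to_key(lambda i, j: -1 if compare(i, j) else 1))

# 2) target[i] = final index of the element currently at position i
target = [0] * n
for rank, i in enumerate(order):
    target[i] = rank

# 3) follow the permutation cycles: at most n - 1 swaps
for i in range(n):
    while target[i] != i:
        j = target[i]
        swap(i, j)
        target[i], target[j] = target[j], target[i]

print(A, "using", compare_calls, "comparisons")
```

For n = 10000 this is roughly 130k comparisons instead of the 100 million performed by the double loop.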
|
<python><algorithm><loops><sorting>
|
2023-04-28 08:21:40
| 4
| 629
|
Julien
|
76,127,418
| 4,568,775
|
Data types when reading mysql with python library MySqldb
|
<p>I wish to extract data from a mysql table, and write it verbatim to a text file. The most simplified form of my code is below, of course it must be much refined later (both the SELECT statement and further conditions in the python code); but I already run into trouble here.</p>
<pre><code>#!/usr/bin/python
import MySQLdb
db = MySQLdb.connect(host="--------",
user="-----",
passwd="xyz",
db="abc123")
cur = db.cursor()
outfil='..........'
f=open(outfil,"w")
cur.execute("SELECT * FROM aerodrome ORDER BY sequence")
row=[""]
for row in cur.fetchall():
print(row)
f.write(",".join(row))
f.close()
db.close()
</code></pre>
<p>When run, the row looks like</p>
<pre><code>(1L, 'UK', '', '', '', ' Abbots Bromley', '', '', '', Decimal('52.8277800'),
</code></pre>
<p>with the type indicators ("L" in the first field, "Decimal" in the last) clobbering up. How can I force obtaining just the actual data, without those type indicators?</p>
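The <code>L</code> and <code>Decimal(...)</code> are only how Python displays the tuple's <code>long</code> and <code>Decimal</code> objects (the <code>L</code> suffix also suggests this is Python 2); the underlying values are plain data. Converting each field with <code>str()</code> before joining gives clean text, and <code>",".join(row)</code> would otherwise fail on the non-string fields anyway. A small sketch with an illustrative row:

```python
from decimal import Decimal

# an illustrative row shaped like what the driver returns
row = (1, 'UK', '', '', '', ' Abbots Bromley', '', '', '', Decimal('52.8277800'))

# str() renders each value without repr artifacts such as 1L or Decimal('...')
line = ",".join(str(v) for v in row)
print(line)
```

In the file-writing loop, <code>f.write(",".join(str(v) for v in row) + "\n")</code> also adds the newline that <code>write</code> does not supply on its own.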
|
<python><mysql>
|
2023-04-28 07:56:17
| 0
| 305
|
Karel Adams
|
76,127,340
| 20,263,044
|
Annotate closest OSM vertex
|
<p>I'm trying to get the closest OSM vertex for each row of MyModel</p>
<pre class="lang-py prettyprint-override"><code>class MyModel(models.Model):
location = GeometryField()
</code></pre>
<p>I currently use RawSQL for this</p>
<pre class="lang-py prettyprint-override"><code>def annotate_closest_vertex(queryset):
queryset = queryset.annotate(
closest_vertex=RawSQL(
"""
SELECT id
FROM planet_osm_roads_vertices_pgr
ORDER BY the_geom <-> "mymodel"."location" LIMIT 1
""",
(),
)
)
</code></pre>
<p>And I would like to use the ORM</p>
<p>I tried to create a unmanaged Model for OSM vertices</p>
<pre class="lang-py prettyprint-override"><code>class OSMVertex(models.Model):
the_geom = GeometryField()
class Meta:
managed = False
db_table = "planet_osm_roads_vertices_pgr"
</code></pre>
<p>And a distance function</p>
<pre class="lang-py prettyprint-override"><code>class Distance(Func):
arity = 2
def as_sql(
self,
compiler,
connection,
function=None,
template=None,
arg_joiner=None,
**extra_context,
):
connection.ops.check_expression_support(self)
sql_parts = []
params = []
for arg in self.source_expressions:
arg_sql, arg_params = compiler.compile(arg)
sql_parts.append(arg_sql)
params.extend(arg_params)
return f"{sql_parts[0]} <-> {sql_parts[1]}", params
</code></pre>
<p>Then using a simple Subquery</p>
<pre class="lang-py prettyprint-override"><code>def annotate_closest_vertex(queryset):
queryset = queryset.annotate(
closest_vertex=Subquery(
OSMVertex.objects.order_by(
Distance("the_geom", OuterRef("location"))
).values_list("pk")[:1]
)
)
</code></pre>
<p>However I get the error</p>
<pre><code>/usr/local/lib/python3.8/site-packages/django/db/models/query.py:1324: in _fetch_all
self._result_cache = list(self._iterable_class(self))
/usr/local/lib/python3.8/site-packages/django/db/models/query.py:51: in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:1162: in execute_sql
sql, params = self.as_sql()
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:513: in as_sql
extra_select, order_by, group_by = self.pre_sql_setup()
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:55: in pre_sql_setup
self.setup_query()
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:46: in setup_query
self.select, self.klass_info, self.annotation_col_map = self.get_select()
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:262: in get_select
sql, params = self.compile(col)
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:445: in compile
sql, params = node.as_sql(self, self.connection)
/usr/local/lib/python3.8/site-packages/django/db/models/expressions.py:1126: in as_sql
subquery_sql, sql_params = query.as_sql(compiler, connection)
/usr/local/lib/python3.8/site-packages/django/db/models/sql/query.py:1103: in as_sql
sql, params = self.get_compiler(connection=connection).as_sql()
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:513: in as_sql
extra_select, order_by, group_by = self.pre_sql_setup()
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:56: in pre_sql_setup
order_by = self.get_order_by()
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:400: in get_order_by
sql, params = self.compile(resolved)
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:445: in compile
sql, params = node.as_sql(self, self.connection)
/usr/local/lib/python3.8/site-packages/django/db/models/expressions.py:1218: in as_sql
expression_sql, params = compiler.compile(self.expression)
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:445: in compile
sql, params = node.as_sql(self, self.connection)
/usr/local/lib/python3.8/site-packages/django/db/models/expressions.py:933: in as_sql
return compiler.compile(self.expression)
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:445: in compile
sql, params = node.as_sql(self, self.connection)
compiler.compile call in the Distance function defined above: in as_sql
arg_sql, arg_params = compiler.compile(arg)
/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py:445: in compile
sql, params = node.as_sql(self, self.connection)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = ResolvedOuterRef(location), args = (<django.db.models.sql.compiler.SQLCompiler object at 0x7fb9f2642f70>, <django.contrib.gis.db.backends.postgis.base.DatabaseWrapper object at 0x7fbbb472b5e0>), kwargs = {}
def as_sql(self, *args, **kwargs):
> raise ValueError(
'This queryset contains a reference to an outer query and may '
'only be used in a subquery.'
)
E ValueError: This queryset contains a reference to an outer query and may only be used in a subquery.
/usr/local/lib/python3.8/site-packages/django/db/models/expressions.py:603: ValueError
</code></pre>
<p>I feel like the resolution of the OuterRef fails because it is located in the order_by clause.</p>
<p>Has anyone already solved a similar issue? I found a lot of issues related to OuterRef online, but none helped me.</p>
|
<python><django><django-orm>
|
2023-04-28 07:48:20
| 1
| 1,360
|
Alombaros
|
76,127,328
| 509,263
|
How to return DML statement results from BigQuery with Python (UPDATE, DELETE)?
|
<p>I'm running DML commands with Python 3 and BigQuery (using the google-cloud-bigquery package). When I run the command in the BigQuery console I get a result summary (rows affected), but I can't retrieve the same information with Python.</p>
<p>Running the normal:</p>
<pre class="lang-py prettyprint-override"><code>client = bigquery.Client()
result = client.query(...).result()
</code></pre>
<p>How can I read results for delete and update statements?</p>
<p><a href="https://i.sstatic.net/g0c1x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g0c1x.png" alt="enter image description here" /></a></p>
|
<python><google-bigquery>
|
2023-04-28 07:46:37
| 1
| 8,884
|
Alexandru R
|
76,127,269
| 2,583,670
|
extract and count lists from text file in python
|
<p>I have a large text file that contains many lists. Each list contains random numbers, as shown below.
Test file:</p>
<pre><code>[-1460,-1460,-1460,-1460,-1460,0,-1460,0,-1460,-1460,-1460,45,-1460,-1460,-1460,-1460,-1460,-1460]
[250,-1250,36,-1250,-1250,33,-1250,-1250,-1250,-1250,-1250,-1250,-1250,-490,-1243,-1250,-1250,-1250,-1250,-1250,33,-1250,33,-1250,-1250,-1250,-1250,-1250,-1250,33,-496,-1243,-1250,33,-1250,-1246,-1250,-1250,-1250,-1250,35,-1250,-1250,33,-1250,-1250,-1250,-1250,-1250,-1250,-1250,-1250,33,-1250,-1250,-1250,-1250,-525,-1250,33,-259,-1250]
[2,-1232,34,34,34,0,0,-1232,-1232,-1232,-1232,34,34,-1232,-1232,-1232,-1232,-1232,-1232,-1232,-1232,34,34,-1232,34,-1232,34,34,-1232,-1232,-1232,39,-1232,34,-1232,-1232,-1232,34,-1232,0,0,34,-1232,-1232,-1232,-1232,-1232,517,0,34,34,34,-1232,-1232,-1232,-1232,-1232,-1232,34,-1232,-1232,-1232,34,-1232,34,-1232,34,-1232,34]
......
</code></pre>
<p>First, I want to count how many lists are there in the text file. Then, I want to extract every list separately for further processing.</p>
<p>I wrote code to read the text file, but it reads the whole file into a single string variable, as follows:</p>
<pre><code># opening the file in read mode
my_file = open("R1E1.txt", "r")
# reading the file
data = my_file.read()
</code></pre>
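Reading line by line and parsing each line with <code>ast.literal_eval</code> yields one real Python list per line; <code>len()</code> of the collected lists is then the count. A sketch that builds a small stand-in file in the same assumed format (one bracketed list per line):

```python
import ast
import os
import tempfile

# Build a small stand-in file in the same assumed format: one bracketed list per line
content = """[-1460,-1460,0,45]
[250,-1250,36]
[2,-1232,34,34,0]
"""
path = os.path.join(tempfile.mkdtemp(), "R1E1.txt")
with open(path, "w") as f:
    f.write(content)

# Read line by line instead of read(), parsing each line into a real Python list
lists = []
with open(path) as f:
    for line in f:
        line = line.strip()
        if line:
            lists.append(ast.literal_eval(line))  # safely evaluates "[1,2,3]" literals

print("number of lists:", len(lists))
print("first list:", lists[0])
```

Unlike <code>eval</code>, <code>ast.literal_eval</code> only accepts Python literals, so a malformed line raises an error instead of executing arbitrary code.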
|
<python><list><file><text>
|
2023-04-28 07:39:16
| 2
| 711
|
Mohsen Ali
|
76,127,268
| 19,694,624
|
pybit.exceptions.FailedRequestError: Http status code is not 200. (ErrCode: 404) (ErrTime: 07:32:08)
|
<p>I have an issue with Python's Pybit module. I'm trying to get my wallet balance but I get the following error:</p>
<pre><code>pybit.exceptions.FailedRequestError: Http status code is not 200. (ErrCode: 404) (ErrTime: 07:32:08)
</code></pre>
<p>My code:</p>
<pre><code>from pybit.unified_trading import HTTP
# API credentials
api_key = 'xxx'
secret_key = 'xxx'
session = HTTP(
testnet=True,
api_key=api_key,
api_secret=secret_key,
)
print(session.get_wallet_balance(
accountType="UNIFIED",
coin="BTC",
))
</code></pre>
<p>What should I do?</p>
|
<python><bybit><python-bybit><pybit>
|
2023-04-28 07:39:06
| 1
| 303
|
syrok
|
76,127,178
| 7,972,989
|
Pandas rank function with custom method OR limit number of decimals in mean calculation
|
<p>EDIT : my question is not well asked nor enough detailed. I am redoing it better, please let me time to do it</p>
<p>EDIT 2 : My question was not clear enough nor adapted to my problem, I solved my problem in a different way. I cannot delete the question, but please don't bother answering, I messed up the question.</p>
<hr />
<p>My goal is to perform a rank calculation on a pandas dataframe with the <code>average</code> method, BUT the average calculation should be rounded to 3 decimals.</p>
<p>Why : I am translating a script from another language to Python, and in this language the mean function automatically rounds result to 3 decimals. Pandas doesn't. So in order to get exactly (and I must) the same result in Python, I need to either :</p>
<ul>
<li>pass a custom <code>method</code> argument to the <code>rank</code> function, with this custom method being a rounded mean</li>
<li>force Python to only take 3 decimals when calculating means all the time (which would be great for my script).</li>
</ul>
<p>Is any of these ideas possible ?</p>
<p>In the first language :</p>
<pre><code>mean(mydata['mycolumn']) = 1360.045
</code></pre>
<p>In Python :</p>
<pre><code>mydata.mycolumn.mean() = 1360.0448559218441
</code></pre>
<p>So in consequence, when calculating the rank of <code>mycolumn</code> (grouped by another column, idk if it matters), the ranks are not the same and the final result is different between the two languages.</p>
|
<python><pandas><numpy>
|
2023-04-28 07:26:11
| 2
| 2,505
|
gdevaux
|
76,127,156
| 10,232,932
|
Visual Studio Code: OSError: Proxy URL had no scheme, should start with http:// or https://
|
<p>I am running Visual Studio Code behind a company proxy on Windows with Python 3.9.0. When I try to use the command:</p>
<pre><code>pip install pandas
</code></pre>
<p>in the terminal, it leads to the following error (note that the same happens with any other pip package):</p>
<blockquote>
<p>ERROR: Could not install packages due to an OSError: Proxy URL had no
scheme, should start with http:// or https://</p>
</blockquote>
<p><a href="https://i.sstatic.net/nD0i3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nD0i3.png" alt="enter image description here" /></a></p>
<p>My JSON file looks like:</p>
<pre><code>{
"[python]": {
"editor.formatOnType": true
},
"editor.stickyScroll.enabled": true,
"update.mode": "none",
"http.proxyAuthorization": null
}
</code></pre>
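This error usually means a proxy setting (the <code>HTTP_PROXY</code>/<code>HTTPS_PROXY</code> environment variables, or pip's own config) contains a bare <code>host:port</code> without a scheme. One hedged fix is to give pip a fully-qualified proxy URL in its config file; the host, port, and credentials below are placeholders for your company's values:

```ini
; %APPDATA%\pip\pip.ini on Windows (~/.config/pip/pip.conf on Linux/macOS)
[global]
proxy = http://user:password@proxyhost:8080
```

Alternatively, <code>pip install --proxy http://user:password@proxyhost:8080 pandas</code> passes the same URL for a single install.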
|
<python><visual-studio-code><proxy>
|
2023-04-28 07:22:33
| 1
| 6,338
|
PV8
|
76,126,892
| 14,808,001
|
Python 'cannot open shared object file: No such file or directory' on Raspbian
|
<p>I am trying to install <a href="https://pypi.org/project/pvrecorder/" rel="nofollow noreferrer">pvrecorder</a> on a Virtual Machine with Raspbian installed. I got some Python script that records the user's microphone, until <code>KeyboardInterrupt</code> is triggered (user presses <code>Ctrl</code> + <code>C</code>). I already tested it on my Windows machine and everything worked smoothly. The problem is, I get some error after running the script, and this comes from the installed <code>pvrecorder</code>.</p>
<p>The Raspbian is an <strong>x86</strong> build.</p>
<p>First, I am creating a Virtual Environment with the command:
<code>python3 -m venv venv</code> and activating it using <code>source venv/bin/activate</code>.</p>
<p>Then, I am installing <strong>pvrecorder</strong> with <code>pip3 install pvrecorder</code>. Everything works smoothly.</p>
<p>Problem is, when I am trying to run my <strong>script.py</strong>, which begins with this line:</p>
<pre><code>from pvrecorder import PvRecorder
</code></pre>
<p>this is the error I receive:</p>
<pre><code>OSerror: /home/pi/Desktop/proj/venv/lib/python3.7/site-packages/pvrecorder/lib/linux/i686/libpv_recorder.so: cannot open shared object file: No such file or directory
</code></pre>
<p>I also tried <strong>NOT</strong> installing <code>pvrecorder</code> in a separately-created venv, but directly, without activating any specific virtual environment. When I look inside <code>/home/pi/.local/lib/python3.7/site-packages/pvrecorder/lib/linux/</code>, I don't see any <code>i686</code> folder, but an <code>x64_86</code> folder that contains my <code>.so</code> file.</p>
<p>When I try to copy that <code>.so</code> file into a newly-created <code>i686</code> folder, it gives me another error when trying to launch <code>script.py</code>:</p>
<pre><code>wrong ELF class: ELFCLASS64
</code></pre>
<p>I read some other similar Stack Overflow answers, and users were told that it may be because of the fact that I am using an x86 OS for an x64 library, but pvrecorder's website states that their module is available for both x86 and x64 Raspbian builds.</p>
|
<python><linux><debian><raspbian>
|
2023-04-28 06:41:46
| 1
| 1,283
|
Mario Mateaș
|
76,126,859
| 7,985,055
|
how to find id or name using selenium in a div container
|
<p>I am trying to automate the login to the website, however, I can't for the life of me figure out how to find the elements of the login page. (<a href="https://data-exchange.amadeus.com/sfiler/Login.action" rel="nofollow noreferrer">https://data-exchange.amadeus.com/sfiler/Login.action</a>)</p>
<pre><code> <form method="post" name="Login" action="Login.action" autocomplete="off">
<div class="container">
<input type="hidden" name="origUrl" value="" id="origUrl"/>
<input type="hidden" name="request_locale" value="en_GB" id="request_locale"/>
<!-- LOGIN FORM START -->
<div class="row logo justify-content-md-center">
<div class="col-12 col-md-12 col-lg-8 pb-4 logo-login">
<img alt="Login" src="themes/amadeus-adep/images/logo-login.png"/>
</div>
</div>
<div class="row justify-content-md-center">
<div class="col-12 col-md-12 col-lg-8 pb-4">
<div class="card">
<div class="card-body">
<div class="row">
<div class="col-6">
<div class="form-group">
<label for="loginForm_username">Username or email</label>
<input type="text" name="loginForm.username" value="" id="loginForm_username" class="form-control"/>
</div>
</div>
<div class="col-6">
<div class="form-group">
<label for="loginForm_password">Password</label>
<input type="password" name="loginForm.password" id="loginForm_password" class="form-control" onkeypress="return submit_on_enter(document.Login,event);"/>
<div class="d-flex justify-content-end" id="loginChangePassword">
<a class="btn btn-sm btn-link text-right" href="ForgetPassword.action">Lost Password</a>
</div>
</div>
</div>
</div>
<div class="d-flex justify-content-end">
<a id="loginForm_button" class="btn btn-primary text-right" onclick="document.Login.submit();" href="#">
<span class="far fa-sign-in"></span> Login
</a>
</div>
</div>
</div>
</div>
</div>
<br/>
</div>
</code></pre>
<p>It should be pretty simple to find the element; however, searching by name and by id does not find it...</p>
<pre><code>try:
user_input = browser.find_element(By.NAME, 'loginForm.username')
print('Username box found')
except Exception:
print("Searching for 'NAME tag' took too much time!")
try:
user_input = browser.find_element(By.ID, 'loginForm_username')
except Exception:
print("Searching for 'ID tag' took too much time!")
</code></pre>
<p>I keep getting a timeout error.</p>
<pre><code>Message: no such element: Unable to locate element:
</code></pre>
<p>I then tried to check whether the items I was searching for were in an iframe, but had no luck with that either.</p>
<pre><code>frame = browser.find_element(By.CSS_SELECTOR,'div.tool_forms iframe')
browser.switch_to.frame(frame)
try:
user_input = browser.find_element(By.NAME, 'loginForm.username')
print('Username box found')
except Exception:
print("Searching for 'NAME tag' took too much time!")
try:
user_input = browser.find_element(By.ID, 'loginForm_username')
except Exception:
print("Searching for 'ID tag' took too much time!")
</code></pre>
<p>I keep getting the error that the item can't be located.</p>
<p>This is the script I am using</p>
<pre><code>#Amadeus data exchange portal binary download
#
# Installs
# python -m pip install selenium
# python -m pip install webdriver-manager
# python -m pip install fake_useragent
# python -m pip install bs4
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import visibility_of_element_located
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.service import Service
from fake_useragent import UserAgent
from bs4 import BeautifulSoup
import time, os
#define variables
username = os.environ["AMADEUS_USER"]
password = os.environ["AMADEUS_PASS"]
url = 'https://data-exchange.amadeus.com/sfiler/Login.action'
url_int = 'https://data-exchange.amadeus.com/sfiler/Tree.action'
ua = UserAgent()
userAgent = ua.random
timeout = 900 # 15 mins
loginTitle = "Amadeus Data Exchange Portal - Login"
loggedInTitle = "Amadeus Data Exchange Portal"
#define chrome options
chrome_options = Options()
chrome_options.add_argument(f'user-agent={userAgent}')
chrome_options.add_argument("--disable-extensions")
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
#initialize browser
browser = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)
browser.set_page_load_timeout(timeout) # 15 mins
browser.get(url)
browser.implicitly_wait(15)
search_timeout = 60 # 60 seconds
try:
username_input_box = WebDriverWait(browser, search_timeout).until(EC.presence_of_element_located((By.NAME, "loginForm.username")))
username_input_box.clear()
print('Username box found')
except Exception:
print("loading page took too much time!")
username_input_box = WebDriverWait(browser, search_timeout).until(EC.presence_of_element_located((By.ID, "loginForm_username")))
username_input_box.clear()
try:
password_input_box = WebDriverWait(browser, search_timeout).until(EC.presence_of_element_located((By.ID, "loginForm_password")))
password_input_box.clear()
except TimeoutError:
print("loading page took too much time!")
try:
login_button = WebDriverWait(browser, search_timeout).until(EC.presence_of_element_located((By.ID, "loginForm_button")))
except TimeoutError:
print("loading page took too much time!")
#fill login credentials
username_input_box.send_keys(username)
time.sleep(2) #2 second time gap between filling username and password
password_input_box.send_keys(password)
time.sleep(2)
#hit the login button
login_button.click()
#Check that the login succeeded...
actualTitle = browser.getTitle();
print("The title of the web page is " + str(actualTitle) )
print("Verifying the page title has started")
if actualTitle.match(loggedInTitle):
print("The page title has been successfully verified")
print("User logged in successfully..!")
else:
print("Page titles do not match?")
if actualTitle.match(loginTitle):
print("the login title was found...")
browser.close()
#Get all download links
print('')
print('-----------------------------------------------------------------------------')
print('Searching for files to download...')
print('-----------------------------------------------------------------------------')
print('')
browser.get(url_int)
browser.implicitly_wait(30)
soup = BeautifulSoup(browser.page_source, 'html.parser')
for a in soup.find_all('a', href=True):
print (" Found the URL: " + str(a) )
# automatically close the driver after 30 seconds
time.sleep(30)
browser.close()
</code></pre>
<p>Can someone please tell me what I am missing here?</p>
<p>Thanks!</p>
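Separate from the locator problem, the title check near the end uses Java-binding idioms: <code>browser.getTitle()</code> and <code>actualTitle.match(...)</code> will raise <code>AttributeError</code> in the Python binding, which exposes a <code>title</code> attribute and plain string comparison instead. A sketch with a stand-in object, since no live browser is assumed here (if the locators still fail, printing <code>browser.page_source</code> is a quick way to check whether the form is really in the served page or inside an iframe):

```python
# Stand-in object, since a live browser is not assumed here; the point is the
# attribute/operator usage, which in the Python binding differs from Java's.
class FakeBrowser:
    title = "Amadeus Data Exchange Portal"

browser = FakeBrowser()
logged_in_title = "Amadeus Data Exchange Portal"

# Python: the .title attribute and plain string comparison,
# not browser.getTitle() / actualTitle.match(...)
actual_title = browser.title
if actual_title == logged_in_title:
    print("User logged in successfully..!")
```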
|
<python><selenium-webdriver><web-scraping><selenium-chromedriver>
|
2023-04-28 06:37:02
| 2
| 525
|
Mr. E
|
76,126,422
| 6,455,731
|
Pass final positional argument in a curried function to the last parameter available
|
<p>Given a simple curried function</p>
<pre><code>import toolz
@toolz.curry
def mklist(x, y, z):
return [x, y, z]
</code></pre>
<p>obviously calling <code>mklist(x=1, y=2)(z=3)</code> works.</p>
<p>What I would like to be able to do is to call the function like so: <code>mklist(x=1, y=2)(3)</code>, where z gets bound to 3.</p>
<p>This errors with <code>TypeError: mklist() got multiple values for argument 'x'</code> since passing a positional argument apparently tries to assign the argument to the first parameter (x).</p>
<p>How can it be made so that the final positional argument of a curried function gets passed to the last remaining parameter?</p>
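<code>toolz.curry</code> binds bare positionals left-to-right against the full signature, which is why 3 lands on <code>x</code>. One workaround is a small custom curry that uses <code>inspect.signature</code> to map later positionals onto the still-unbound parameters in declaration order, so the final positional fills the last free parameter. A sketch (<code>flexcurry</code> is a made-up name, not a toolz API):

```python
import inspect

def flexcurry(fn):
    """Curry fn so that positional args in later calls fill the *remaining* parameters."""
    sig = inspect.signature(fn)

    def make(bound):
        def call(*args, **kwargs):
            new = {**bound, **kwargs}
            remaining = [p for p in sig.parameters if p not in new]
            for name, value in zip(remaining, args):   # map positionals onto free params
                new[name] = value
            if len(new) == len(sig.parameters):
                return fn(**new)
            return make(new)                           # still partial: curry again
        return call

    return make({})

@flexcurry
def mklist(x, y, z):
    return [x, y, z]

print(mklist(x=1, y=2)(3))   # [1, 2, 3]  (3 binds to z, the only free parameter)
print(mklist(1)(2)(3))       # [1, 2, 3]
```

When several parameters remain, positionals fill them in declaration order, which reduces to "last remaining parameter" in the question's example where only <code>z</code> is free.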
|
<python><functional-programming><currying>
|
2023-04-28 05:13:26
| 0
| 964
|
lupl
|
76,126,374
| 5,942,100
|
Tricky calculation transformation based on a row sum using Pandas
|
<p>I would like to round values in specific columns according to these rules:</p>
<pre><code>if >= 0.6, round to the nearest even number
if < 0.6, round to 0
...but only as long as the sum of the rounded values equals the row's original
total rounded to the nearest even number. If that target is exceeded (or
undershot), subtract (or add) the delta so the row sum equals the target.
</code></pre>
<p>logic:
looking at the first row, the sum of the values = 3.3, and the nearest even number = 4
based on the above rules,</p>
<pre><code>if = or > 0.6 round to nearest even number
if < 0.6 round to 0
</code></pre>
<p>the first row would now be:</p>
<pre><code> location range type Q1 27 Q2 27 Q3 27 Q4 27
NY Lower_ stat AA 2 2 0 2
</code></pre>
<p>However, the original sum of this row rounded to the nearest even number is 4, while the rounded values now sum to 6. So we need to subtract the delta of 2 from the last value, which gives:</p>
<pre><code> location range type Q1 27 Q2 27 Q3 27 Q4 27
NY Lower_ stat AA 2 2 0 0
</code></pre>
<p>In this case, we actually add 2, since the sum of this row = 1.1 and the nearest even number is 2. The 2 replaces the largest value of the row.</p>
<pre><code> location range type Q1 27 Q2 27 Q3 27 Q4 27
NY Lower_range BB 0.3 0.4 0.2 0.2
</code></pre>
<p>So this would be:</p>
<pre><code> location range type Q1 27 Q2 27 Q3 27 Q4 27
NY Lower_range BB 0 2 0 0
</code></pre>
<p><strong>Data</strong></p>
<pre><code>location range type Q1 27 Q2 27 Q3 27 Q4 27
NY Lower_stat AA 1 1.2 0.5 0.6
NY Lower_range AA 0.4 1.9 0.4 0.6
NY Lower_stat BB 0.1 0.1 0.1 0.1
NY Lower_range BB 0.3 0.4 0.2 0.2
data = {'location': ['NY', 'NY', 'NY', 'NY'],
'range': ['Lower_stat', 'Lower_range', 'Lower_stat', 'Lower_range'],
'role': ['AA', 'AA', 'BB', 'BB'],
'Q1 27': [1, 0.4, 0.1, 0.3],
'Q2 27': [1.2, 1.9, 0.1, 0.4],
'Q3 27': [0.5, 0.4, 0.1, 0.2],
'Q4 27': [0.6, 0.6, 0.1, 0.2]}
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>location range type Q1 27 Q2 27 Q3 27 Q4 27
NY Lower_stat AA 2 2 0 0
NY Lower_range AA 0 2 0 2
NY Lower_stat BB 0 0 0 0
NY Lower_range BB 0 2 0 0
data = {'location': ['NY', 'NY', 'NY', 'NY'],
'range': ['Lower_stat', 'Lower_range', 'Lower_stat', 'Lower_range'],
'type': ['AA', 'AA', 'BB', 'BB'],
'Q1 27': [2, 0, 0, 0],
'Q2 27': [2, 2, 0, 2],
'Q3 27': [0, 0, 0, 0],
'Q4 27': [0, 2, 0, 0]}
</code></pre>
<p><strong>Doing</strong></p>
<p>first part</p>
<pre><code>i = df.iloc[:, 2:].astype(int)
df2 = (i+i.mod(2)
# round to nearest 2
# add 2 for values between 0.6-1
+2*(df.iloc[:, 2:].ge(0.6)
&df.iloc[:, 2:].lt(1))
)
</code></pre>
<p>second part</p>
<pre><code># create the delta with difference b/w sum of the two DF
df2['delta']=df2['sum'].sub(df['sum'])
# subtract the delta from the last quarter, obtained
# using filter
# create a placeholder df3
df3=df2.filter(like='Q' ).iloc[:,-1:].sub(df2.iloc[:,-1:].values)
</code></pre>
<p>I am building from this script, but it does not produce the desired result. Any suggestion is appreciated.</p>
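<p>A minimal sketch of one reading of these rules that reproduces the four desired rows. The tie-breaking choices are assumptions inferred from the examples: cells at or above 0.6 round <em>up</em> to the next even number (so 0.6 becomes 2), a surplus is subtracted from the last nonzero cell, and a shortfall is added to the cell with the largest original value:</p>

```python
import math

def ceil_even(x):
    # cell rule inferred from the examples: 0.6 -> 2, 1.2 -> 2, 1.9 -> 2
    return int(math.ceil(x / 2)) * 2

def nearest_even(x):
    # row target: raw row sum rounded to the nearest even number (3.3 -> 4, 1.1 -> 2)
    return int(math.floor(x / 2 + 0.5)) * 2

def adjust_row(vals):
    rounded = [ceil_even(v) if v >= 0.6 else 0 for v in vals]
    delta = sum(rounded) - nearest_even(sum(vals))
    if delta > 0:
        # surplus: subtract it from the last nonzero cell
        for i in range(len(rounded) - 1, -1, -1):
            if rounded[i]:
                rounded[i] -= delta
                break
    elif delta < 0:
        # shortfall: add it to the cell with the largest original value
        rounded[vals.index(max(vals))] -= delta
    return rounded

print(adjust_row([1, 1.2, 0.5, 0.6]))    # [2, 2, 0, 0]
print(adjust_row([0.3, 0.4, 0.2, 0.2]))  # [0, 2, 0, 0]
```

<p>Applied row-wise to the quarter columns (e.g. <code>df.iloc[:, 3:].apply(lambda r: adjust_row(list(r)), axis=1)</code>), this yields the desired table for the sample rows.</p>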
|
<python><pandas><numpy>
|
2023-04-28 04:58:40
| 1
| 4,428
|
Lynn
|
76,126,350
| 1,497,720
|
ValueError: If using all scalar values, you must pass an index with Transformer
|
<p>For code below</p>
<pre><code>from sklearn.preprocessing import FunctionTransformer
from functools import partial
from types import FunctionType
from statistics import mean
import pandas as pd
X_train = pd.DataFrame(columns=['domain'])
# Add rows to the dataframe
X_train.loc[36] = ['e1548bfed8d05713acacc4d33393b258.org']
X_train.loc[25] = ['google.de']
# Print the resulting dataframe
print(X_train)
def wrapper(series: pd.Series, func: FunctionType) -> pd.DataFrame:
return pd.DataFrame(series.apply(func))
class CustomFunctionTransformer(FunctionTransformer):
def __init__(self, func):
super().__init__(partial(wrapper, func=func), validate=False)
pass
class AverageDigitDistanceTransformer(CustomFunctionTransformer):
def __init__(self):
super().__init__(average_digit_distance)
def average_digit_distance(string: str) -> float:
digit_indices = [i for (i,c) in enumerate(string) if c.isdigit()]
digit_distances = [succ - pred for pred, succ in zip(digit_indices, digit_indices[1:])]
return mean(digit_distances) if digit_distances else 0.0
#error is from below line
X_train[['domain']].apply(AverageDigitDistanceTransformer().transform)
</code></pre>
<p>I get this exception:</p>
<blockquote>
<p>ValueError: If using all scalar values, you must pass an index</p>
</blockquote>
<p>How should I solve it?</p>
|
<python><python-3.x><scikit-learn>
|
2023-04-28 04:51:09
| 0
| 18,765
|
william007
|
76,126,264
| 20,122,390
|
How can I filter by dates in a Firebase Realtime Database with Python?
|
<p>I have a firestore realtime database with the following structure:</p>
<p><a href="https://i.sstatic.net/jKBR9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jKBR9.png" alt="enter image description here" /></a></p>
<p>And I need to be able to apply a filter on the dates. The idea is that the user enters a filter start date and a filter end date, and I must return all the alerts that are valid within that period.
In more practical terms, I need:</p>
<pre><code>start_date <= end_filter,
final_date >= start_filter
</code></pre>
<p>I know I can't apply "multiple where clauses" (unfortunately), so my idea is to apply the first condition in the query (start_date <= end_filter) and then from my python code in the backend, apply the second condition (end_date >= start_filter).
Then, I tried the following:</p>
<pre><code>def get_dates(self, path, params: FilterDates):
return (
self.db.reference(path)
.order_by_child("start_date")
.end_at(params.final_date)
.get()
)
</code></pre>
<p>But I got no results (I get an empty dictionary). What am I doing wrong? I tried to follow the documentation but I can't see where I'm going wrong.</p>
|
<python><database><firebase><firebase-realtime-database><nosql>
|
2023-04-28 04:28:08
| 0
| 988
|
Diego L
|
76,126,095
| 1,355,120
|
Should all python dict types be type hinted to dict?
|
<p>Excuse my noob question as I haven't done much python coding, so I am not as familiar with "pythonic" ways and untyped languages.</p>
<p>In Python I see the <code>Dataframe.to_dict()</code> method has several ways it can return the dict. For example, <code>Dataframe.to_dict("records")</code> basically returns a list.</p>
<p>My question is: should the return type of this be type-hinted as <code>list</code> or <code>dict</code>? As far as I know, type hinting has no runtime effect. And <code>Dataframe.to_dict("records")</code> basically returns a list, except for the fact that it comes from <code>Dataframe.to_dict()</code>, so it would stand to reason that it makes more sense to treat it as a list. But officially it's a <code>dict</code>.</p>
|
<python><pandas><type-hinting>
|
2023-04-28 03:34:09
| 1
| 3,389
|
Kevin
|
76,125,938
| 14,735,451
|
Is there a way to merge multiple json files without opening them?
|
<p>I have a large number of big <code>json</code> files and I cannot open all of them at the same time. I'm trying to merge them into one <code>json</code> file, but all the solutions I find on SO (e.g., <a href="https://stackoverflow.com/questions/57422734/how-to-merge-multiple-json-files-into-one-file-in-python">this</a>) require opening all the files, which causes my machine to crash. I was wondering if there's a way to merge them without opening them all at once.</p>
|
<python><json><merge>
|
2023-04-28 02:49:37
| 0
| 2,641
|
Penguin
|
76,125,906
| 9,334,609
|
WARNING: autodoc: failed to import module when trying to generate python documentation with sphinx
|
<p>I have reviewed the sphinx documentation to generate the python documentation. When I generate the documentation in html format using the .rst files that do not reference any python source code, the documentation is generated correctly.</p>
<p>In my configuration I work with the following files: conf.py, index.rst, tutorial.rst, project.rst and pkg_app.rst</p>
<p>I attach the content of each of them:</p>
<p>conf.py:</p>
<pre><code>import os
import sys
sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = 'hmed'
copyright = '2023, Ramiro Jativa'
author = 'Ramiro Jativa'
release = '1.0.41'
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = ['sphinx.ext.autodoc']
templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'alabaster'
html_static_path = ['_static']
</code></pre>
<p>index.rst:</p>
<pre><code>.. hmed documentation master file, created by
sphinx-quickstart on Thu Apr 27 17:53:25 2023.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to hmed's documentation!
================================
Requirements for production:
MySQL
Python
Flask
PyPi Libraries
Requirements for development:
sqlite3
Python
Flask
PyPi Libraries
Project structure:
project_name/pkg_app (include the source code distributed in various modules. And includes the __init__.py file)
project_name/sphinx_documentation (is the folder where the sphinx documentation is generated)
.. toctree::
:maxdepth: 2
:caption: Contents:
tutorial
project
pkg_app
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
</code></pre>
<p>tutorial.rst:</p>
<pre><code>Tutorial
========
Modules
-------
Module 1: Tutorial for module 1.
This is the content for tutorial 1.
Module 2: Tutorial for module 2.
This is the content for tutorial 2.
</code></pre>
<p>project.rst:</p>
<pre><code>Project Documentation
=====================
Achieved goals
--------------
Goal 1: This is the description of goal 1.
This is the detail of what was done to achieve goal 1.
Goal 2: This is the description of goal 2.
This is the detail of what was done to achieve goal 2.
Learned lessons
---------------
This paragraph describes what was learned in the project.
</code></pre>
<p>pkg_app.rst:</p>
<pre><code>Source code documentation
=========================
The pkg_app folder includes the __init__.py, models.py, forms.py, and routes.py files
</code></pre>
<p>Using the indicated configuration, I proceed to execute the make html command and the documentation is generated by sphinx correctly.</p>
<p>Next I include the log of the execution of the make html command:</p>
<pre><code>(venv_hmed) $ make clean
Removing everything under '_build'...
(venv_hmed) $ make html
Running Sphinx v5.3.0
making output directory... done
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 4 source files that are out of date
updating environment: [new config] 4 added, 0 changed, 0 removed
reading sources... [100%] tutorial
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] tutorial
generating indices... genindex done
writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded.
The HTML pages are in _build/html.
</code></pre>
<p>As a result, the documentation is obtained in html format as indicated below:</p>
<p><a href="https://i.sstatic.net/O9MFB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O9MFB.png" alt="enter image description here" /></a></p>
<p>Up to this point everything works correctly.</p>
<p><em><strong>To generate the documentation of the files with the .py extension that are inside the pkg_app directory structure, I modify the content of the pkg_app.rst file with the following content:</strong></em></p>
<p>pkg_app.rst:</p>
<pre><code>Source code documentation
=========================
The pkg_app folder includes the __init__.py, models.py, forms.py, and routes.py files
Module contents
---------------
.. automodule:: pkg_app
:members:
:undoc-members:
:show-inheritance:
</code></pre>
<p>When the make html command is entered to generate the documentation in sphinx, the following warning message is displayed:</p>
<pre><code>(venv_hmed) $ make clean
Removing everything under '_build'...
(venv_hmed) $ make html
Running Sphinx v5.3.0
making output directory... done
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 4 source files that are out of date
updating environment: [new config] 4 added, 0 changed, 0 removed
reading sources... [100%] tutorial
WARNING: autodoc: failed to import module 'pkg_app'; the following exception was raised:
No module named 'pkg_app'
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] tutorial
generating indices... genindex done
writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded, 1 warning.
The HTML pages are in _build/html.
</code></pre>
<p>QUESTION:</p>
<p>What is the correct configuration of the pkg_app.rst file so that sphinx can generate the Python documentation of all the modules inside the pkg_app package?</p>
<p>Thanks.</p>
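<p>One thing worth noting (an observation, not a definitive fix): <code>autodoc</code> has to be able to <code>import pkg_app</code>, so the directory that <em>contains</em> <code>pkg_app</code> must be on <code>sys.path</code>, and the <code>conf.py</code> above only inserts <code>'.'</code> (the documentation folder itself). Assuming the stated layout <code>project_name/pkg_app</code> and <code>project_name/sphinx_documentation</code>, the parent directory would be one level up:</p>

```python
# conf.py -- point sys.path at the directory that contains pkg_app/
import os
import sys

sys.path.insert(0, os.path.abspath('..'))  # adjust if conf.py lives elsewhere
```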
|
<python><python-3.x><python-sphinx><autodoc>
|
2023-04-28 02:40:38
| 0
| 461
|
Ramiro
|
76,125,807
| 5,942,100
|
Tricky organizing transformation that places values with their categories using Pandas
|
<p>I have a df where I would like to create a transformation that places values with their categories using Pandas.</p>
<p><strong>Data</strong></p>
<pre><code>year quarter Location Lower_stat_AA Lower_range_AA Lower_stat_BB Lower_range_BB Medium_stat_AA Medium_range_AA Medium_stat_BB Medium_range_BB Upper_stat_AA Upper_range_AA Upper_stat_BB Upper_range_BB
2027 Q1 27 NY 1.14 1.03 0.51 1.53
2027 Q1 27 CA 1.14 0.38 0.55 1.02
2027 Q2 27 NY 0 0.86 1.02 1.27
2027 Q2 27 CA 0 3.66 5.4 0
2027 Q3 27 NY 1 0 0 0
2027 Q3 27 CA 0 0 0 0
</code></pre>
<p><strong>Dataframe</strong></p>
<pre><code>import pandas as pd
data = {'year': [2027, 2027, 2027, 2027, 2027, 2027],
'quarter': ['Q1 27', 'Q1 27', 'Q2 27', 'Q2 27', 'Q3 27', 'Q3 27'],
'Location': ['NY', 'CA', 'NY', 'CA', 'NY', 'CA'],
'Lower_stat_AA': [1.14, None, 0, None, 1, None],
'Lower_range_AA': [1.03, None, 0.86, 3.66, 0, None],
'Lower_stat_BB': [0.51, None, 1.02, 5.4, 0, None],
'Lower_range_BB': [1.53, None, 1.27, 0, 0, None],
'Medium_stat_AA': [None, 1.14, None, 0, None, 0],
'Medium_range_AA': [None, 0.38, None, 5.4, None, 0],
'Medium_stat_BB': [None, 0.55, None, 0, None, 0],
'Medium_range_BB': [None, 1.02, None, 0, None, 0],
'Upper_stat_AA': [None, None, None, None, None, None],
'Upper_range_AA': [None, None, None, None, None, None],
'Upper_stat_BB': [None, None, None, None, None, None],
'Upper_range_BB': [None, None, None, None, None, None]
}
df = pd.DataFrame(data)
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>location range type Q1 27 Q2 27 Q3 27
NY Lower_stat AA 1.14 0 1
NY Lower_range AA 1.03 0.86 0
NY Lower_stat BB 0.51 1.02 0
NY Lower_range BB 1.53 1.27 0
CA Medium_stat AA 1.14 0 0
CA Medium_range AA 0.38 3.66 0
CA Medium_stat BB 0.55 5.4 0
CA Medium_range BB 1.02 0 0
data = {'location': ['NY', 'NY', 'NY', 'NY', 'CA', 'CA', 'CA', 'CA'],
'range': ['Lower_stat', 'Lower_range', 'Lower_stat', 'Lower_range',
'Medium_stat', 'Medium_range', 'Medium_stat', 'Medium_range'],
'role': ['AA', 'AA', 'BB', 'BB', 'AA', 'AA', 'BB', 'BB'],
'Q1 27': [1.14, 1.03, 0.51, 1.53, 1.14, 0.38, 0.55, 1.02],
'Q2 27': [0, 0.86, 1.02, 1.27, 0, 3.66, 5.4, 0],
'Q3 27': [1, 0, 0, 0, 0, 0, 0, 0]
}
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>import pandas as pd
import janitor
(df
.pivot_longer(
index = slice('year', 'type'),
names_to = ("range", ".value"),
names_sep = " ")
)
</code></pre>
<p>The above does not produce the desired output.
Any suggestion is helpful.</p>
|
<python><pandas><numpy>
|
2023-04-28 02:11:06
| 3
| 4,428
|
Lynn
|
76,125,779
| 1,807,003
|
How can I install TLS/SSL for python to install Python 3.11 and pip on a raspberry pi?
|
<p>I've been fighting this for three days now, and I have no idea what I'm doing wrong. I've got a headless raspberry pi 4 and I understand there is no apt package available for python on the raspberry pi, so I need to compile it from source. I've tried following the instructions from three different places that all boil down to</p>
<ol>
<li>wget a bunch of libraries that are prerequisites</li>
<li>wget the python version tgz</li>
<li>unpack the tgz</li>
<li>compile the program</li>
<li>profit</li>
</ol>
<p>Here's the link to one of the instructions I followed with the complete bash script to do this shown below</p>
<p><a href="https://gist.github.com/ersingencturk/768416c4a4a1e32de992460cf40ce839" rel="nofollow noreferrer">https://gist.github.com/ersingencturk/768416c4a4a1e32de992460cf40ce839</a></p>
<pre><code>#!/bin/bash
if [ true ]; then
# get a current python3
sudo apt-get install wget build-essential libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev zlib1g-dev liblzma-dev -y
VERSION=3.11.1
VERSION_SHORT=3.11
mkdir -p tmp
cd tmp
if [ ! -f Python-${VERSION}.tgz ]; then
wget -O Python-${VERSION}.tgz https://www.python.org/ftp/python/${VERSION}/Python-${VERSION}.tgz
fi
tar xzf Python-${VERSION}.tgz
cd Python-${VERSION}
if [ ! -f python ]; then
./configure --enable-optimizations
fi
echo "### make altinstall"
sudo make altinstall
sudo update-alternatives --install /usr/bin/python python /usr/local/bin/python${VERSION_SHORT} 1
echo "### install pip"
/usr/local/bin/python${VERSION_SHORT} -m pip install --upgrade pip
sudo update-alternatives --install /usr/bin/pip pip /usr/local/bin/pip${VERSION_SHORT} 1
cd ..
fi
</code></pre>
<p>The problem is once everything is installed I'm unable to install anything from pip. When I try I get the error:</p>
<pre><code>WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
</code></pre>
<p>Everything I've found seems to make sure the prerequisites are installed first. What prerequisite am I missing? Is there a way to install ssh without pip or something to get around this problem?</p>
|
<python><ssl><pip>
|
2023-04-28 02:03:45
| 0
| 948
|
nickvans
|
76,125,772
| 6,676,101
|
In python, how do you get the escape sequence for any Unicode character?
|
<p>In python, what is the escape sequence for the left-double quote Unicode character <code>“</code> (U+201C)?</p>
<pre class="lang-python prettyprint-override"><code>left_double_quote = "“"
print(ord(left_double_quote)) # prints `8220`
</code></pre>
<p>Note that <code>201C</code> is hexadecimal notation for the decimal number <code>8220</code></p>
<p>I tried somethings, but it did not work:</p>
<pre class="lang-python prettyprint-override"><code>print("\x8220")
print("\x201C")
print("\0x201C")
</code></pre>
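<p>For reference, the reason those attempts fail: <code>\x</code> takes exactly two hex digits, so it can only express code points up to U+00FF. Higher code points need <code>\u</code> (exactly four hex digits), <code>\U</code> (exactly eight), or <code>\N{...}</code> with the character's Unicode name:</p>

```python
# U+201C needs \u (4 hex digits), \U (8 hex digits), or \N{name}
print("\u201C")                          # the left double quotation mark
print("\U0000201C")                      # same character, 8-digit form
print("\N{LEFT DOUBLE QUOTATION MARK}")  # same character, by name
print(chr(0x201C))                       # or build it from the code point
print(ord("\u201C"))                     # 8220
```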
|
<python><string><unicode><unicode-string><unicode-escapes>
|
2023-04-28 02:01:25
| 1
| 4,700
|
Toothpick Anemone
|
76,125,653
| 14,908,558
|
A numpy function that lists all the possible "coordinates" in a nd-array
|
<p><strong>Is there a <code>numpy</code> function that lists all the possible "coordinates" in a <code>numpy.ndarray</code>?</strong></p>
<p>It would do exactly what this function <code>all_possible_coordinates</code> does:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def all_possible_coordinates(shape):
return np.array([item.ravel() for item in np.meshgrid(*(np.arange(n) for n in shape))]).T
def test_all_possible_coordinates():
shape = (1, 2, 2)
assert np.all(all_possible_coordinates(shape)
== np.array([[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 1]]))
test_all_possible_coordinates()
</code></pre>
<h3>EDIT: solution 1, by M Z</h3>
<pre class="lang-py prettyprint-override"><code>def all_possible_coordinates(shape):
return np.argwhere(np.ones(shape))
</code></pre>
<h3>EDIT: solution 2, by Warren Weckesser</h3>
<pre class="lang-py prettyprint-override"><code>def all_possible_coordinates(shape):
return np.array(list(np.ndindex(shape)))
</code></pre>
|
<python><numpy>
|
2023-04-28 01:21:56
| 1
| 328
|
edmz
|
76,125,648
| 11,922,765
|
Pandas convert mutil-row-column dataframe to single-row multi-column dataframe
|
<p>My dataframe is given below:</p>
<p>code:</p>
<pre><code>df =
Car measurements Before After
amb_temp 30.268212 26.627491
engine_temp 41.812730 39.254255
engine_eff 15.963645 16.607557
avg_mile 0.700160 0.733307
cor_mile 0.761483 0.787538
</code></pre>
<p>Expected output:</p>
<pre><code>modified_df =
index amb_temp_Before amb_temp_after engine_temp_Before engine_temp_after ...
0 30.268212 26.627491 41.812730 39.254255 ...
</code></pre>
<p>Present output:</p>
<pre><code>print(df.pivot(columns='index', values=['Before','After']))
amb_temp engine_temp amb_temp
0 30.268212 NaN NaN NaN NaN 26.627491 NaN NaN NaN NaN
1 NaN NaN NaN 41.81273 NaN NaN NaN NaN 39.254255 NaN
2 NaN NaN NaN NaN 15.963645 NaN NaN NaN NaN 16.607557
3 NaN NaN 0.70016 NaN NaN NaN NaN 0.733307 NaN NaN
4 NaN 0.761483 NaN NaN NaN NaN 0.787538 NaN NaN NaN
</code></pre>
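<p>A possible alternative to <code>pivot</code>: set the measurement column as the index, <code>unstack</code>, and flatten the resulting MultiIndex into <code>measurement_stage</code> names (column names below are taken from the sample frame):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Car measurements': ['amb_temp', 'engine_temp', 'engine_eff', 'avg_mile', 'cor_mile'],
    'Before': [30.268212, 41.812730, 15.963645, 0.700160, 0.761483],
    'After': [26.627491, 39.254255, 16.607557, 0.733307, 0.787538],
})

s = df.set_index('Car measurements')[['Before', 'After']].unstack()
# index entries are (stage, measurement) tuples -> 'amb_temp_Before' etc.
s.index = [f"{measurement}_{stage}" for stage, measurement in s.index]
modified_df = s.to_frame().T
print(modified_df)
```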
|
<python><pandas><dataframe><pivot>
|
2023-04-28 01:19:26
| 1
| 4,702
|
Mainland
|
76,125,532
| 3,257,464
|
Python writes extra blank line to the file
|
<p>The following code adds an extra blank line ("\r") to every write, whereas it is supposed to just write one ("\r\n") at the end of every write. I have no idea why.</p>
<pre><code>import os
with open("output", "w", encoding="utf-8") as f:
for i in range(10):
f.write(str(i) + os.linesep)
</code></pre>
<p>It writes to the file: "0\r\r\n1\r\r\n2...9\r\r\n". I am using Python 3.9.16 on Windows. Am I missing something here?</p>
<p>Edit: I tested without encoding="utf-8" as well, and still the same issue happens.</p>
|
<python><windows><blank-line>
|
2023-04-28 00:47:29
| 2
| 598
|
Diamond
|
76,125,434
| 12,060,361
|
git tag 'malformed object name' β Github Action/Workflow Subprocess Error
|
<p>I have a Python script that utilizes Subprocess to execute a git command:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
fetch_cmd = ['git', 'fetch', '--tags']  # fetch tags
subprocess.run(fetch_cmd, capture_output=True, text=True)
test_tag = ['git', 'tag', '--points-at', 'origin/test~1']  # get the tag for a commit on the remote test branch
subprocess.run(test_tag, capture_output=True, text=True)
</code></pre>
<p>I then have a Github workflow/action which runs this Python script. I've had the action run this code successfully in a test env and everything worked as expected, so I know a Github action can do this, and this line of code also works in my local terminal. However, after I merge to prod, the Github action fails out trying to run the <code>git tag</code> command with the following error:</p>
<pre><code>error: malformed object name 'origin/test~1'
subprocess.run(test_tag, capture_output=True, text=True)
raise subprocess.CalledProcessError(
subprocess.CalledProcessError: Command '['git', 'tag', '--points-at', 'origin/test~1']' returned non-zero exit status 129.
</code></pre>
<p>My question is: why is this occurring? I'm thinking the Github action environment for test doesn't know where origin is? Is there a command I need Subprocess to run or set beforehand in this environment for the 'git' command to be able to access origin?</p>
<p>I'm lost and would appreciate any help, thank you!</p>
|
<python><git><github><subprocess><github-actions>
|
2023-04-28 00:14:27
| 1
| 1,105
|
mmarion
|
76,125,337
| 14,348,996
|
Is there a python equivalent to R's `with`?
|
<p>In R, the <code>with</code> function constructs an environment from some data - this allows you to refer to items in your data as if they are variables, so this:</p>
<pre><code>with(list("a" = 12, "b" = 13), {
print(a)
print(b)
})
</code></pre>
<p>prints this:</p>
<pre><code>[1] 12
[1] 13
</code></pre>
<p>Is there something similar in python? Perhaps that takes a dictionary where all the keys are strings and constructs an environment from it?</p>
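<p>Python has no exact scope-injecting equivalent (its <code>with</code> statement is an unrelated construct for context managers), but a sketch of the two closest common idioms:</p>

```python
from types import SimpleNamespace

data = {"a": 12, "b": 13}

# attribute-style access, close in spirit to R's with()
env = SimpleNamespace(**data)
print(env.a)  # 12
print(env.b)  # 13

# or bind names explicitly in the current scope
a, b = data["a"], data["b"]
```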
|
<python><r><scope>
|
2023-04-27 23:42:50
| 1
| 1,236
|
henryn
|
76,125,285
| 4,894,543
|
Recursion error while simplifying nested json to flat in python
|
<p>I am trying to flatten the JSON object so that I can create a plain CSV file out of it, but it is throwing</p>
<pre><code>RecursionError: maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>The code I have written in Python, calling <code>simplify_multy_nested_json_recur()</code>:</p>
<pre><code>def simplify_multy_nested_json_recur(obj):
kv = convert_list_to_kv(obj)
simplified_json = simplify_nested_json(kv)
if any(isinstance(value, list) for value in simplified_json.values()):
simplified_json = simplify_multy_nested_json_recur(simplified_json)
return simplified_json
def simplify_nested_json(obj, parent_key=""):
"""
Recursively simplifies a nested JSON object by appending key names with the previous node's key.
"""
new_obj = {}
for key in obj:
new_key = f"{parent_key}_{key}" if parent_key else key
if isinstance(obj[key], dict):
new_obj.update(simplify_nested_json(obj[key], parent_key=new_key))
else:
new_obj[new_key] = obj[key]
return new_obj
def convert_list_to_kv(obj):
"""
Converts list elements into key-value pairs if the list size is 1 in a JSON object.
"""
for key in obj:
if isinstance(obj[key], list) and len(obj[key]) == 1:
obj[key] = obj[key][0]
elif isinstance(obj[key], dict):
convert_list_to_kv(obj[key])
return obj
</code></pre>
<p>Input:</p>
<pre><code>{'resourceType':'CP','id':'af160c6b','period':{'start':'1987-12-21T06:22:41-05:00'},'careTeam':[{'reference':'79a33a776b59'}],'goal':[{'reference':'c14'},{'reference':'b88'}],'activity':[{'detail':{'code':{'coding':[{'code':'160670007','display':'Diabetic diet'}],'text':'Diabetic diet'},'status':'in-progress'}}]}
</code></pre>
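<p>For comparison, here is an alternative approach (not a fix of the code above): a single recursive flattener that handles dicts and lists of any length in one pass, which avoids the repeated re-invocation loop entirely:</p>

```python
def flatten(obj, parent_key=""):
    # one pass over dicts and lists; list items get their index in the key
    items = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            new_key = f"{parent_key}_{key}" if parent_key else key
            items.update(flatten(value, new_key))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            items.update(flatten(value, f"{parent_key}_{i}"))
    else:
        items[parent_key] = obj
    return items
```

<p>For the sample input this produces keys like <code>activity_0_detail_code_coding_0_code</code>, ready for <code>csv.DictWriter</code>.</p>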
|
<python><recursion><nested-json>
|
2023-04-27 23:28:22
| 1
| 704
|
Kalpesh
|
76,125,257
| 6,401,403
|
Add column values in a dataframe to values in another dataframe with different length
|
<p>There is a dataframe <code>df</code>:</p>
<pre><code> test_num el_num file_num value fail
429 4 1 0 3.36 False
430 4 1 1 3.29 False
431 4 1 2 3.29 False
432 4 1 3 3.26 False
433 4 1 4 3.28 False
434 4 1 5 3.26 False
435 4 1 6 3.27 False
436 4 1 7 3.26 False
437 4 1 8 3.26 False
438 4 1 9 3.18 False
439 4 1 10 3.24 False
440 4 1 11 3.27 False
441 4 1 12 3.32 False
429 5 2 0 3.36 False
430 5 2 1 3.29 False
431 5 2 2 3.29 False
432 5 2 3 3.26 False
433 5 2 4 3.28 False
434 5 2 5 3.26 False
435 5 2 6 3.27 False
436 5 2 7 3.26 False
437 5 2 8 3.26 False
438 5 2 9 3.18 False
439 5 2 10 3.24 False
440 5 2 11 3.27 False
441 5 2 12 3.32 False
</code></pre>
<p>and a dataframe <code>diff</code>:</p>
<pre><code> value el_num test_num
198 0.43 1 5
204 0.43 2 5
210 0.46 3 5
216 0.42 4 5
222 0.41 5 5
228 0.43 6 5
234 0.46 7 5
240 0.43 8 5
246 0.44 9 5
252 0.41 10 5
258 0.19 11 5
</code></pre>
<p>How to add <code>diff.value</code> to <code>df.value</code> and put it to <code>df.value</code> with corresponding <code>test_num</code> and <code>el_num</code>?</p>
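<p>A hedged sketch of the usual pattern: left-merge on the two key columns, then add the matched <code>diff</code> values, with unmatched rows contributing 0. Toy frames are used here because the key pairs in the excerpts above don't actually overlap (<code>diff</code> has <code>test_num</code> 5 while <code>df</code> has 429–441), so check which columns are really meant to align:</p>

```python
import pandas as pd

df = pd.DataFrame({'test_num': [429, 430, 429],
                   'el_num': [4, 4, 5],
                   'value': [3.36, 3.29, 3.36]})
diff = pd.DataFrame({'value': [0.43, 0.41],
                     'el_num': [4, 5],
                     'test_num': [429, 429]})

# left merge keeps df's row order; suffixes keep df's 'value' name intact
merged = df.merge(diff, on=['test_num', 'el_num'], how='left', suffixes=('', '_diff'))
df['value'] = merged['value'].add(merged['value_diff'].fillna(0)).to_numpy()
print(df)
```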
|
<python><pandas><dataframe>
|
2023-04-27 23:17:13
| 1
| 5,345
|
Michael
|
76,125,140
| 2,478,485
|
python3.6 pytest failing with "future feature annotations is not defined"
|
<p>Running pytest using the python3.6 tox environment, it started failing with the following error:</p>
<pre class="lang-py prettyprint-override"><code> Error processing line 1 of /home/test/sample/.tox/py36/lib/python3.6/site-packages/_virtualenv.pth:
Traceback (most recent call last):
File "/usr/lib/python3.6/site.py", line 174, in addpackage
exec(line)
File "<string>", line 1, in <module>
File "/home/test/sample/.tox/py36/lib/python3.6/site-packages/_virtualenv.py", line 3
from __future__ import annotations
^
SyntaxError: future feature annotations is not defined
Remainder of file ignored
Error processing line 1 of /home/test/sample/.tox/py36/lib/python3.6/site-packages/_virtualenv.pth:
Traceback (most recent call last):
File "/usr/lib/python3.6/site.py", line 174, in addpackage
exec(line)
File "<string>", line 1, in <module>
File "/home/test/sample/.tox/py36/lib/python3.6/site-packages/_virtualenv.py", line 3
from __future__ import annotations
^
SyntaxError: future feature annotations is not defined
Remainder of file ignored
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/test/sample/.tox/py36/lib/python3.6/site-packages/pip/__main__.py", line 29, in <module>
from pip._internal.cli.main import main as _main
File "/home/test/sample/.tox/py36/lib/python3.6/site-packages/pip/_internal/cli/main.py", line 9, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/home/test/sample/.tox/py36/lib/python3.6/site-packages/pip/_internal/cli/autocompletion.py", line 10, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/home/test/sample/.tox/py36/lib/python3.6/site-packages/pip/_internal/cli/main_parser.py", line 9, in <module>
from pip._internal.build_env import get_runnable_pip
File "/home/test/sample/.tox/py36/lib/python3.6/site-packages/pip/_internal/build_env.py", line 19, in <module>
from pip._internal.cli.spinners import open_spinner
File "/home/test/sample/.tox/py36/lib/python3.6/site-packages/pip/_internal/cli/spinners.py", line 9, in <module>
from pip._internal.utils.logging import get_indentation
File "/home/test/sample/.tox/py36/lib/python3.6/site-packages/pip/_internal/utils/logging.py", line 8, in <module>
from dataclasses import dataclass
ModuleNotFoundError: No module named 'dataclasses'
</code></pre>
|
<python><pytest><tox>
|
2023-04-27 22:45:04
| 2
| 3,355
|
Lava Sangeetham
|
76,125,049
| 15,804,190
|
pywin32 - get cell Address in R1C1 format
|
<p>I am working with pywin32 and trying to get the R1C1 address of a specific cell. In VBA, you can use <code>Range.Address(ReferenceStyle:=xlR1C1)</code>.</p>
<p>However, it appears that in pywin32, using <code>Range.Address</code> is not a method that allows for parameters, but instead is a property that just returns the string in the A1-style.</p>
<pre class="lang-py prettyprint-override"><code>from win32com.client import Dispatch
xl = Dispatch("Excel.Application")
wb = xl.Workbooks.Open('my_workbook.xlsx')
sht = wb.Worksheets('Sheet1')
c = sht.Range("A1")
c.Address
# returns "$A$1"
c.Address(ReferenceStyle=-4150)
# TypeError: 'str' object is not callable
</code></pre>
|
<python><excel><pywin32><win32com>
|
2023-04-27 22:22:28
| 1
| 3,163
|
scotscotmcc
|
76,124,928
| 7,648
|
"Missing key(s) in state_dict" error when loading model
|
<p>I am trying to load a model:</p>
<pre><code>model = AlexNet3DDropoutRegression(9600)
model_save_location = 'my_model.pt'
model.load_state_dict(torch.load(model_save_location,
map_location='cpu'))
</code></pre>
<p>Previously I saved it like this:</p>
<pre><code> torch.save(self.model.state_dict(),
self.cli_args.model_save_location)
</code></pre>
<p>but I'm getting this error:</p>
<pre><code>Missing key(s) in state_dict: "features.0.weight", "features.0.bias", "features.1.weight", "features.1.bias", "features.1.running_mean", "features.1.running_var", "features.4.weight", "features.4.bias", "features.5.weight", "features.5.bias", "features.5.running_mean", "features.5.running_var", "features.8.weight", "features.8.bias", "features.9.weight", "features.9.bias", "features.9.running_mean", "features.9.running_var", "features.11.weight", "features.11.bias", "features.12.weight", "features.12.bias", "features.12.running_mean", "features.12.running_var", "features.14.weight", "features.14.bias", "features.15.weight", "features.15.bias", "features.15.running_mean", "features.15.running_var", "classifier.1.weight", "classifier.1.bias", "classifier.4.weight", "classifier.4.bias".
Unexpected key(s) in state_dict: "module.features.0.weight", "module.features.0.bias", "module.features.1.weight", "module.features.1.bias", "module.features.1.running_mean", "module.features.1.running_var", "module.features.1.num_batches_tracked", "module.features.4.weight", "module.features.4.bias", "module.features.5.weight", "module.features.5.bias", "module.features.5.running_mean", "module.features.5.running_var", "module.features.5.num_batches_tracked", "module.features.8.weight", "module.features.8.bias", "module.features.9.weight", "module.features.9.bias", "module.features.9.running_mean", "module.features.9.running_var", "module.features.9.num_batches_tracked", "module.features.11.weight", "module.features.11.bias", "module.features.12.weight", "module.features.12.bias", "module.features.12.running_mean", "module.features.12.running_var", "module.features.12.num_batches_tracked", "module.features.14.weight", "module.features.14.bias", "module.features.15.weight", "module.features.15.bias", "module.features.15.running_mean", "module.features.15.running_var", "module.features.15.num_batches_tracked", "module.classifier.1.weight", "module.classifier.1.bias", "module.classifier.4.weight", "module.classifier.4.bias".
</code></pre>
<p>I know I used the same version of PyTorch for saving and loading because I saved and then tried to load the model using the same Python virtual environment.</p>
<p>What am I doing wrong here? Below is the entire neural net. I don't know if that helps.</p>
<pre><code>import math
import torch.nn as nn


class AlexNet3D(nn.Module):
    def get_head(self):
        return nn.Sequential(nn.Dropout(),
                             nn.Linear(self.input_size, 64),
                             nn.ReLU(inplace=True),
                             nn.Dropout(),
                             nn.Linear(64, 1),
                             )

    def __init__(self, input_size):
        super().__init__()
        self.input_size = input_size
        self.features = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 5, 5), stride=(2, 2, 2), padding=0),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=3, stride=3),
            nn.Conv3d(64, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=0),
            nn.BatchNorm3d(128),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=3, stride=3),
            nn.Conv3d(128, 192, kernel_size=(3, 3, 3), padding=1),
            nn.BatchNorm3d(192),
            nn.ReLU(inplace=True),
            nn.Conv3d(192, 192, kernel_size=(3, 3, 3), padding=1),
            nn.BatchNorm3d(192),
            nn.ReLU(inplace=True),
            nn.Conv3d(192, 128, kernel_size=(3, 3, 3), padding=1),
            nn.BatchNorm3d(128),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=3, stride=3),
        )
        self.classifier = self.get_head()

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm3d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def forward(self, x):
        xp = self.features(x)
        x = xp.view(xp.size(0), -1)
        x = self.classifier(x)
        return [x, xp]
</code></pre>
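<p>For reference: every unexpected key in the error above carries a <code>module.</code> prefix, which <code>torch.nn.DataParallel</code> adds when it wraps a model. Assuming (since the saving code isn't shown) that the checkpoint was saved from a <code>DataParallel</code>-wrapped model, one sketch of a fix is to strip that prefix before loading; the helper itself is plain Python:</p>

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to every key."""
    return {k[len("module."):] if k.startswith("module.") else k: v
            for k, v in state_dict.items()}

# Hypothetical usage:
# state = torch.load("checkpoint.pth", map_location="cpu")
# net.load_state_dict(strip_module_prefix(state))
```

<p>Alternatively, wrapping the fresh model in <code>nn.DataParallel</code> before calling <code>load_state_dict</code> would make the keys match as-is.</p>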
|
<python><deep-learning><pytorch>
|
2023-04-27 21:56:04
| 2
| 7,944
|
Paul Reiners
|
76,124,916
| 3,720,435
|
windows 10 install python PIP does not work
|
<p>I am trying to get PIP working on my computer, but it keeps telling me the module isn't found.</p>
<pre><code>C:\CODE\SCRIPTS\python>where python
C:\TOOLS\PYTHON\python.exe
C:\Users\User\AppData\Local\Microsoft\WindowsApps\python.exe
C:\CODE\SCRIPTS\python>where py
C:\Windows\py.exe
C:\CODE\SCRIPTS\python>python --version
Python 3.10.11
C:\CODE\SCRIPTS\python>python -m pip --version
C:\TOOLS\PYTHON\python.exe: No module named pip
C:\CODE\SCRIPTS\python>python -m ensurepip --default-pip
C:\TOOLS\PYTHON\python.exe: No module named ensurepip
C:\CODE\SCRIPTS\python>python get-pip.py
Collecting pip
Using cached pip-23.1.2-py3-none-any.whl (2.1 MB)
Collecting setuptools
Using cached setuptools-67.7.2-py3-none-any.whl (1.1 MB)
Collecting wheel
Using cached wheel-0.40.0-py3-none-any.whl (64 kB)
Installing collected packages: wheel, setuptools, pip
Successfully installed pip-23.1.2 setuptools-67.7.2 wheel-0.40.0
C:\CODE\SCRIPTS\python>py -m pip install --upgrade pip setuptools wheel
C:\TOOLS\PYTHON\python.exe: No module named pip
C:\CODE\SCRIPTS\python>python -m ensurepip --default-pip
C:\TOOLS\PYTHON\python.exe: No module named ensurepip
C:\CODE\SCRIPTS\python>py -m pip --version
C:\TOOLS\PYTHON\python.exe: No module named pip
C:\CODE\SCRIPTS\python>py -m ensurepip --default-pip
C:\TOOLS\PYTHON\python.exe: No module named ensurepip
</code></pre>
|
<python><pip>
|
2023-04-27 21:53:43
| 0
| 1,476
|
user3720435
|
76,124,908
| 4,175,822
|
How can one type hint a __new__ method implementation using a Generic or Union?
|
<p>How can I type hint my <code>__new__</code> implementation using a generic input?</p>
<p>So I have a class that can store <code>str</code> or <code>decimal.Decimal</code> instances, and it subclasses the input <code>str</code> or <code>decimal.Decimal</code> class.</p>
<p>This code works in mypy and python:</p>
<pre><code>from __future__ import annotations

import typing
import typing_extensions
import decimal
import random

T = typing.TypeVar('T', str, decimal.Decimal)


class StrOrDecimal(typing.Generic[T]):
    @typing.overload
    def __new__(cls, arg: decimal.Decimal) -> StrOrDecimalTyped.Decimal.StrOrDecimal[decimal.Decimal]:
        ...

    @typing.overload
    def __new__(cls, arg: str) -> StrOrDecimalTyped.Str.StrOrDecimal[str]:
        ...

    # def __new__(cls, arg: typing.Union[str, decimal.Decimal]): does not work
    def __new__(cls, arg):  # works
        if isinstance(arg, decimal.Decimal):
            return super(cls, StrOrDecimalTyped.Decimal.StrOrDecimal).__new__(StrOrDecimalTyped.Decimal.StrOrDecimal, arg)
        if isinstance(arg, str):
            return super(cls, StrOrDecimalTyped.Str.StrOrDecimal).__new__(StrOrDecimalTyped.Str.StrOrDecimal, arg)
        raise ValueError('Invalid value')


_StrOrDecimal = StrOrDecimal


# needed for mypy
class StrOrDecimalTyped:
    class Str:
        class StrOrDecimal(_StrOrDecimal[T], str):
            pass

    class Decimal:
        class StrOrDecimal(_StrOrDecimal[T], decimal.Decimal):
            pass


a_val = 'a'
a = StrOrDecimal(a_val)
print(a)

b_val = decimal.Decimal('1.2')
b = StrOrDecimal(b_val)
print(b)

choices: typing.List[typing.Union[str, decimal.Decimal]] = [a_val, b_val]
c_val = random.choice(choices)
c = StrOrDecimal(c_val)

try:
    reveal_type(a)
    reveal_type(b)
    reveal_type(c)
except NameError:
    typing_extensions.reveal_type(a)
    typing_extensions.reveal_type(b)
    typing_extensions.reveal_type(c)
</code></pre>
<p>But if I use either of these signatures:</p>
<pre><code>def __new__(cls, arg: typing.Union[str, decimal.Decimal]):
def __new__(cls, arg: T):
</code></pre>
<p>I get this error:</p>
<blockquote>
<p>generic.py:19: error: Argument 1 to "__new__" of "StrOrDecimal" has incompatible type "Type[generic.StrOrDecimalTyped.Decimal.StrOrDecimal[Any]]"; expected "Type[generic.StrOrDecimal[T]]"</p>
</blockquote>
<p>How can I fix it?</p>
|
<python><python-3.x><operator-overloading><mypy><typing>
|
2023-04-27 21:51:19
| 0
| 2,821
|
spacether
|
76,124,814
| 6,242,883
|
How to take the cumulative maximum of a column based on another column
|
<p>I have a DataFrame like this:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
"realization_id": np.repeat([0, 1], 6),
"sample_size": np.tile([0, 1, 2], 4),
"num_obs": np.tile(np.repeat([25, 100], 3), 2),
"accuracy": [0.8, 0.7, 0.8, 0.6, 0.7, 0.5, 0.6, 0.7, 0.8, 0.7, 0.9, 0.7],
"prob": [0.94, 0.96, 0.95, 0.98, 0.93, 0.92, 0.90, 0.92, 0.95, 0.9, 0.91, 0.92]
})
df["accum_max_prob"] = df.groupby(["realization_id", "num_obs"])["prob"].cummax()
</code></pre>
<p>And I want to know how to create a column with this output:</p>
<pre><code>df["desired_accuracy"] = [0.8, 0.7, 0.7, 0.6, 0.6, 0.6, 0.6, 0.7, 0.8, 0.7, 0.9, 0.7]
</code></pre>
<p>Each entry of <code>desired_accuracy</code> equals the <code>accuracy</code> value that corresponds to the row where the highest <code>prob</code> has been achieved so far by group (that is why I create <code>accum_max_prob</code>).</p>
<p>So, for example: the first value is <code>0.8</code> because there is no data prior to that, but then the next one is <code>0.7</code> because the <code>prob</code> of the second row is greater than the first. The third value stays the same, because the third <code>prob</code> is lower than the second one, so it does not update <code>desired_accuracy</code>. For each pair <code>(realization_id, num_obs)</code> the criteria resets.</p>
<p>How can I do it in a vectorized fashion using Pandas?</p>
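<p>One vectorized approach (a sketch): mark the rows where <code>prob</code> reaches a new running maximum within each group, keep <code>accuracy</code> only on those rows, and forward-fill within the group:</p>

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "realization_id": np.repeat([0, 1], 6),
    "sample_size": np.tile([0, 1, 2], 4),
    "num_obs": np.tile(np.repeat([25, 100], 3), 2),
    "accuracy": [0.8, 0.7, 0.8, 0.6, 0.7, 0.5, 0.6, 0.7, 0.8, 0.7, 0.9, 0.7],
    "prob": [0.94, 0.96, 0.95, 0.98, 0.93, 0.92, 0.90, 0.92, 0.95, 0.9, 0.91, 0.92],
})

keys = ["realization_id", "num_obs"]
# True on rows where prob sets a new running maximum within its group
is_new_max = df["prob"].eq(df.groupby(keys)["prob"].cummax())
# Keep accuracy only on those rows, then forward-fill within each group
df["desired_accuracy"] = (
    df["accuracy"].where(is_new_max).groupby([df[k] for k in keys]).ffill()
)
```

<p>The first row of each group always equals its own cumulative maximum, so there are no leading NaNs to fill.</p>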
|
<python><pandas><dataframe>
|
2023-04-27 21:32:32
| 4
| 1,176
|
Tendero
|
76,124,764
| 2,735,009
|
pool.map() doesn't like list of lists
|
<p>I have the following piece of code:</p>
<pre><code>import sentence_transformers
import multiprocessing
import numpy as np
from tqdm import tqdm
from multiprocessing import Pool

embedding_model = sentence_transformers.SentenceTransformer('sentence-transformers/all-mpnet-base-v2')

data = [[100227, 7382501.0, 'view', 30065006, False, ''],
        [100227, 7382501.0, 'view', 57072062, True, ''],
        [100227, 7382501.0, 'view', 66405922, True, ''],
        [100227, 7382501.0, 'view', 5221475, False, ''],
        [100227, 7382501.0, 'view', 63283995, True, '']]

# Define the function to be executed in parallel
def process_data(chunk):
    results = []
    for row in chunk:
        print(row[0])
        work_id = row[1]
        mentioning_work_id = row[3]
        print(work_id)
        if work_id in df_text and mentioning_work_id in df_text:
            title1 = df_text[work_id]['title']
            title2 = df_text[mentioning_work_id]['title']
            embeddings_title1 = embedding_model.encode(title1, convert_to_numpy=True)
            embeddings_title2 = embedding_model.encode(title2, convert_to_numpy=True)
            similarity = np.matmul(embeddings_title1, embeddings_title2.T)
            results.append([row[0], row[1], row[2], row[3], row[4], similarity])
        else:
            continue
    return results

# Define the number of CPU cores to use
num_cores = multiprocessing.cpu_count()

# Split the data into chunks
chunk_size = len(data) // num_cores
# chunks = [data[i:i+chunk_size] for i in range(0, len(data), chunk_size)]

# Create a pool of worker processes
pool = multiprocessing.Pool(processes=num_cores)

results = []
with tqdm(total=len(data)) as pbar:
    for i, result_chunk in enumerate(pool.map(process_data, data)):
        # Update the progress bar
        pbar.update()
        # Add the results to the list
        results += result_chunk

# Concatenate the results
final_result = results
</code></pre>
<p>When I execute this code, I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "<ipython-input-4-3aab73406a3b>", line 18, in process_data
print(row[0])
TypeError: 'int' object is not subscriptable
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-4-3aab73406a3b> in <module>
46 results = []
47 with tqdm(total=len(data)) as pbar:
---> 48 for i, result_chunk in enumerate(pool.map(process_data, data)):
49 # Update the progress bar
50 pbar.update()
/opt/conda/lib/python3.7/multiprocessing/pool.py in map(self, func, iterable, chunksize)
266 in a list that is returned.
267 '''
--> 268 return self._map_async(func, iterable, mapstar, chunksize).get()
269
270 def starmap(self, func, iterable, chunksize=None):
/opt/conda/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
TypeError: 'int' object is not subscriptable
</code></pre>
<p>How can I pass a list of lists in <code>pool.map()</code> and have it process in parallel?</p>
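<p>The traceback happens because <code>pool.map(process_data, data)</code> hands each worker a single row, so <code>for row in chunk</code> iterates over that row's fields and <code>row[0]</code> indexes into an <code>int</code>. Mapping over the (currently commented-out) chunks instead keeps each worker receiving a list of rows. A minimal stand-in sketch with a toy per-chunk function:</p>

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Each worker receives a list of rows, matching process_data's expectation
    return [row[0] * 2 for row in chunk]

def run_in_chunks(data, num_workers=2):
    chunk_size = max(1, len(data) // num_workers)
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(processes=num_workers) as pool:
        results = []
        for chunk_result in pool.map(process_chunk, chunks):  # one call per chunk
            results += chunk_result
    return results

if __name__ == "__main__":
    rows = [[1, "a"], [2, "b"], [3, "c"], [4, "d"], [5, "e"]]
    print(run_in_chunks(rows))  # [2, 4, 6, 8, 10]
```

<p><code>pool.map</code> preserves input order, so the concatenated results line up with the original rows.</p>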
|
<python><python-3.x><multiprocessing><mapreduce>
|
2023-04-27 21:21:56
| 1
| 4,797
|
Patthebug
|
76,124,622
| 6,632,138
|
Override __new__ of a class which extends Enum
|
<p>A class <code>Message</code> extends <code>Enum</code> to add some logic. The two important parameters are verbose level and message string, with other optional messages (<code>*args</code>). Another class <code>MessageError</code> is a special form of the <code>Message</code> class in which verbose level is always zero, everything else is the same.</p>
<p>The following code screams <code>TypeError</code>:</p>
<blockquote>
<p>TypeError: Enum.<strong>new</strong>() takes 2 positional arguments but 3 were given</p>
</blockquote>
<pre><code>from enum import Enum


class Message(Enum):
    verbose: int
    messages: list[str]

    def __new__(cls, verbose: int, message: str, *args):
        self = object.__new__(cls)
        self._value_ = message
        return self

    def __init__(self, verbose: int, message: str, *args):
        self.verbose = verbose
        self.messages = [self.value] + list(args)


class MessageError(Message):
    def __new__(cls, message: str):
        return super().__new__(cls, 0, message)  # <- problem is here!

    def __init__(self, message: str):
        return super().__init__(0, message)


class Info(Message):
    I_001 = 2, 'This is an info'


class Error(MessageError):
    E_001 = 'This is an error'
</code></pre>
<p>I was expecting that <code>super().__new__(cls, 0, message)</code> would call <code>__new__</code> from the <code>Message</code> class, but it seems that is not the case. What am I doing wrong here?</p>
|
<python><enums><overriding><typeerror><new-operator>
|
2023-04-27 21:00:15
| 1
| 577
|
Marko Gulin
|
76,124,313
| 16,003,919
|
How to get a dictionary of arguments passed to a python method?
|
<p>For simple functions we can use</p>
<pre><code>def my_func(a, b):
    print(locals())
</code></pre>
<p>...that will return <code>{a: 1, b: 2}</code> if function called with <code>my_func(a=1, b=2)</code>.</p>
<p><strong>How can I get the same dictionary if <code>my_func</code> were a class method?</strong></p>
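<p><code>locals()</code> behaves the same inside a method; the only difference is that it also contains <code>self</code> (or <code>cls</code>), which can be filtered out. A sketch:</p>

```python
class Greeter:
    def my_func(self, a, b):
        # locals() includes self; drop it to mirror the plain-function output
        return {k: v for k, v in locals().items() if k != "self"}

print(Greeter().my_func(a=1, b=2))  # {'a': 1, 'b': 2}
```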
|
<python><dictionary><class><local-variables>
|
2023-04-27 20:15:29
| 1
| 577
|
Eduardo Gomes
|
76,124,064
| 19,989,634
|
Updating and refreshing front end variables with javascript
|
<p>I want the item quantity on my Cart page to refresh and update when the quantity is changed, without having to reload the page. The buttons work as far as adding and removing items on the back end, but nothing happens on the Cart page itself.</p>
<p><strong>My view:</strong></p>
<pre><code>def get_item_quantity(request):
    user = request.user
    if user.is_authenticated:
        try:
            cart = Cart.objects.get(user=user)
            quantities = {item.product_id: item.quantity for item in cart.items.all()}
            return JsonResponse({'quantities': quantities})
        except Cart.DoesNotExist:
            return JsonResponse({'status': 'error', 'message': 'Cart not found'})
    else:
        return JsonResponse({'status': 'error', 'message': 'User not authenticated'})
</code></pre>
<p><strong>My scripts:</strong></p>
<pre><code>function updateCartItem(cartId, cartItemId, quantityToAdd, csrftoken) {
  fetch(`/api/carts/${cartId}/items/${cartItemId}/`, {
    method: "PATCH",
    headers: {
      "Content-Type": "application/json",
      "X-CSRFToken": csrftoken,
    },
    body: JSON.stringify({
      quantity: quantityToAdd,
    }),
  })
    .then((response) => response.json())
    .then((data) => {
      console.log("Cart successfully updated:", data);
      updateCartQuantity();
      updateItemQuantity();
    });
}

function updateItemQuantity() {
  fetch("/get_item_quantity/")
    .then((response) => response.json())
    .then((data) => {
      if (data.status === 'error') {
        console.error(data.message);
      } else {
        document.getElementById("item-quantity")
      }
    })
    .catch((error) => {
      console.error(error);
    });
}

window.addEventListener("load", updateItemQuantity);
</code></pre>
<p><strong>HTML snippet (<code>{{item.quantity}}</code> is what I want updated):</strong></p>
<pre><code>{% for item in items %}
<svg class="icon--cart update-cart" data-product="{{item.product.id}}" data-action="add">
<use href="/media/sprite.svg#plus-solid"/>
</svg>
<span id="item-quantity{{item.product.id}}" class="quantity">{{item.quantity}}</span>
<svg class="icon--cart update-cart" data-product="{{item.product.id}}" data-action="remove">
<use href="/media/sprite.svg#minus-solid"/>
</svg>
{% endfor %}
</code></pre>
|
<javascript><python><django>
|
2023-04-27 19:40:33
| 1
| 407
|
David Henson
|
76,123,911
| 8,079,611
|
Altair Python horizontal bar graph with two variables in the same frame
|
<p>Using Altair, how can I plot in the same horizontal bar plot two variables that are in the same data frame?</p>
<p>I have a panda data frame that looks like this:</p>
<pre><code>import pandas as pd

# initialize list of lists
data = [['AAA', '71.15', 'FOUSA06C1Y'],
        ['AA', '2.93', 'FOUSA06C1Y'],
        ['A', '11.9', 'FOUSA06C1Y'],
        ['BBB', '14.04', 'FOUSA06C1Y'],
        ['BB', '0.0', 'FOUSA06C1Y'],
        ['B', '0.0', 'FOUSA06C1Y'],
        ['Below B', '0.0', 'FOUSA06C1Y'],
        ['Not Rated', '-0.02', 'FOUSA06C1Y'],
        ['AAA', '34.81744518', '1404'],
        ['AA', '47.24845367', '1404'],
        ['A', '16.66257244', '1404'],
        ['BBB', '1.271528716', '1404'],
        ['BB / B', '0.0', '1404'],
        ['Below B', '0.0', '1404'],
        ['Short Term Rated', '0.0', '1404'],
        ['Not Rated', '0.0', '1404'],
        ['Nan', '0.0', '1404']]

# Create the pandas DataFrame
df = pd.DataFrame(data, columns=['index', 'value', 'fund'])
df['value'] = df['value'].astype(float)
</code></pre>
<p>I am able to <em>separately</em> horizontal bar plot these with below code:</p>
<pre><code>gp_chart = alt.Chart(df).mark_bar().encode(
    alt.X('value'),
    alt.Y('index', axis=alt.Axis(grid=True)),
    alt.Color('fund'),
    alt.Row('fund')
)

gp_chart
</code></pre>
<p>Which will produce this separated graphs:</p>
<p><a href="https://i.sstatic.net/7BgHH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7BgHH.png" alt="enter image description here" /></a></p>
<p>Picking part of this graph as an example (just picking A and AAA), I want to see these bars together like this:
<a href="https://i.sstatic.net/AdCaU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AdCaU.png" alt="enter image description here" /></a></p>
<p>Similar question <a href="https://stackoverflow.com/questions/72181211/grouped-bar-charts-in-altair-using-two-different-columns">here</a>.</p>
|
<python><bar-chart><altair>
|
2023-04-27 19:14:59
| 1
| 592
|
FFLS
|
76,123,849
| 2,056,201
|
Is there a way to run Flask + React without building the React app
|
<p>I am following tutorial <a href="https://towardsdatascience.com/build-deploy-a-react-flask-app-47a89a5d17d9" rel="nofollow noreferrer">https://towardsdatascience.com/build-deploy-a-react-flask-app-47a89a5d17d9</a></p>
<p>It works if I use the instructions and run <code>npm run build</code> in the <code>frontend</code> directory and <code>flask run</code> from the root directory</p>
<p>However, I am trying to develop the React app without having to rebuild it every time</p>
<p>So I can run the front end from the <code>frontend</code> folder using <code>npm run start</code>
and here I can develop front end without having to restart the server every time</p>
<p>However, I would like to develop the react app with flask simultaneously using <code>flask run</code></p>
<p>Is there a way I can route flask to the development and not the build folder? and would that work?</p>
<p>I tried changing <code>static_folder='frontend/build'</code> to <code>static_folder='frontend/public'</code>, where index.html is stored, but that didn't seem to work; I got a 404 error.</p>
<p>Is this possible? Or should I continue to develop the front end separately with <code>npm run start</code> and only then build and run Flask?</p>
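<p>One common pattern (not from the tutorial, so treat the port number as an assumption) is to keep <code>npm run start</code> for development and have the Create React App dev server forward API requests to Flask by adding a <code>proxy</code> key to <code>frontend/package.json</code>:</p>

```json
{
  "proxy": "http://localhost:5000"
}
```

<p>With that in place, <code>fetch('/api/...')</code> calls from the dev server are proxied to Flask, and the built bundle is only needed for production.</p>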
|
<javascript><python><reactjs><flask>
|
2023-04-27 19:06:33
| 0
| 3,706
|
Mich
|
76,123,535
| 267,391
|
Installing Python into user-accessible build folder during build process
|
<p>I am trying to install Python inside my build folder in CMake. Fundamentally, though, it doesn't seem to work as documented for the few versions I have tried.</p>
<pre><code>.\python-3.10.7-amd64.exe /quiet /log C:\Users\MyUser\Documents\python_install.log TargetDir="C:\Users\MyUser\Documents\PythonTest"
</code></pre>
<p>I get a UAC dialog as if the installer is going to do something that requires admin rights, and I proceed with that, but it doesn't install anything. I get a 60 KB logfile that has some errors I can't make out:</p>
<pre><code>[0BC4:2E38][2023-04-27T11:15:22]e000: Error 0x80070002: Process returned error: 0x2
[0BC4:2E38][2023-04-27T11:15:22]e000: Error 0x80070002: Failed to execute EXE package.
[0AC4:1860][2023-04-27T11:15:22]e000: Error 0x80070002: Failed to configure per-machine EXE package.
[0AC4:1860][2023-04-27T11:15:22]w350: Applied non-vital package: compileall_AllUsers, encountered error: 0x80070002. Continuing...
[0BC4:2E38][2023-04-27T11:15:22]i301: Applying execute package: compileallO_AllUsers, action: Install, path: C:\ProgramData\Package Cache\F542DE6E7D2F50806ADFFD8B7A5ADE6D8DA3DF66\py.exe, arguments: '"C:\ProgramData\Package Cache\F542DE6E7D2F50806ADFFD8B7A5ADE6D8DA3DF66\py.exe" -3.10 -O -E -s -Wi "C:\Users\kalen\Documents\PythonTest\Lib\compileall.py" -f -x "bad_coding|badsyntax|site-packages|py2_|lib2to3\\tests|venv\\scripts" "C:\Users\kalen\Documents\PythonTest\Lib"'
[0BC4:2E38][2023-04-27T11:15:22]e000: Error 0x80070002: Process returned error
</code></pre>
<p>Using a guide here:
<a href="https://silentinstallhq.com/python-3-10-silent-install-how-to-guide/" rel="nofollow noreferrer">https://silentinstallhq.com/python-3-10-silent-install-how-to-guide/</a></p>
<p>I want to do this from CMake and have code for that as well, but at its core this process just doesn't work as advertised so far. I am now trying more versions; some older releases ship .msi-based installers, which have a different argument structure but similar functionality.</p>
<p>My goal here is to run Python straight from the CMake build folder and bypass the need to use Docker and things of that nature.</p>
|
<python><python-3.x><windows><build>
|
2023-04-27 18:21:40
| 1
| 3,194
|
Kalen
|
76,123,481
| 10,308,255
|
How can I melt a dataframe from wide to long twice on the same column?
|
<p>I have a dataframe:</p>
<pre><code>import pandas as pd
import numpy as np

data = [
    [1, "2022-04-29", 123, "circle", 1, 3, 6, 7.3],
    [1, "2022-02-10", 456, "square", 4, np.nan, 3, 9],
]
df = pd.DataFrame(
    data,
    columns=[
        "ID",
        "date",
        "code",
        "shape",
        "circle_X_rating",
        "circle_Y_rating",
        "square_X_rating",
        "square_Y_rating",
    ],
)
df

   ID        date  code   shape  circle_X_rating  circle_Y_rating  square_X_rating  square_Y_rating
0   1  2022-04-29   123  circle                1              3.0                6              7.3
1   1  2022-02-10   456  square                4              NaN                3              9.0
</code></pre>
<p>I would like to melt this dataframe such that there is a <code>shape</code> column and two rating columns, <code>X_rating</code> and <code>Y_rating</code>, but I am not sure how to do this. Currently I am melting it and this is what I get:</p>
<pre><code>test = (
    pd.melt(
        df,
        id_vars=[
            "ID",
            "date",
            "code",
            "shape",
        ],
        value_vars=[
            "circle_X_rating",
            "circle_Y_rating",
            "square_X_rating",
            "square_Y_rating",
        ],
        var_name="shape_for_rating",
        value_name="shape_rating",
    )
    .assign(
        shape_for_rating=lambda df: df["shape"].apply(lambda a_str: a_str.split("_")[0])
    )
    .query("shape == shape")
    .drop(columns=["shape_for_rating"])
)
test

   ID        date  code   shape  shape_rating
0   1  2022-04-29   123  circle           1.0
1   1  2022-02-10   456  square           4.0
2   1  2022-04-29   123  circle           3.0
3   1  2022-02-10   456  square           NaN
4   1  2022-04-29   123  circle           6.0
5   1  2022-02-10   456  square           3.0
6   1  2022-04-29   123  circle           7.3
7   1  2022-02-10   456  square           9.0
</code></pre>
<p>But what I'd really like is:</p>
<pre><code> ID date code shape X_rating Y_rating
0 1 2022-04-29 123 circle 1.0 3
1 1 2022-04-29 123 square 6.0 7.3
2 1 2022-02-10 456 circle 4 NaN
3 1 2022-02-10 456 square 3 9
...
</code></pre>
<p>Does anyone know the best way to do this? I've been spinning my wheels at this.</p>
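<p>Rather than melting twice, one option (a sketch; note it drops the original <code>shape</code> column, since the desired output derives the shape from the column names) is to split the rating columns into a two-level header and stack the shape level:</p>

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "ID": [1, 1],
    "date": ["2022-04-29", "2022-02-10"],
    "code": [123, 456],
    "shape": ["circle", "square"],
    "circle_X_rating": [1, 4],
    "circle_Y_rating": [3, np.nan],
    "square_X_rating": [6, 3],
    "square_Y_rating": [7.3, 9],
})

tidy = df.drop(columns="shape").set_index(["ID", "date", "code"])
# Split e.g. "circle_X_rating" into the pair ("circle", "X_rating")
tidy.columns = pd.MultiIndex.from_tuples(
    tuple(c.split("_", 1)) for c in tidy.columns
)
tidy = (
    tidy.stack(level=0)  # move the shape level from the columns into the rows
        .rename_axis(index=["ID", "date", "code", "shape"])
        .reset_index()
)
```

<p>Each original row becomes one row per shape, with <code>X_rating</code> and <code>Y_rating</code> as columns.</p>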
|
<python><pandas><dataframe><reshape>
|
2023-04-27 18:14:40
| 2
| 781
|
user
|
76,123,439
| 2,081,381
|
Use s3-pit-restore inside a lambda function
|
<p>I'm wondering if it's possible to import and use this tool in a lambda function written in Python and how to do it</p>
<p><a href="https://github.com/angeloc/s3-pit-restore" rel="nofollow noreferrer">https://github.com/angeloc/s3-pit-restore</a></p>
|
<python><aws-lambda>
|
2023-04-27 18:08:20
| 0
| 388
|
juan
|
76,123,328
| 5,024,631
|
python: return element of array when a mask element is True, and return zero when a mask element is False
|
<p>Just say I have an numpy array like this:</p>
<pre><code>a = np.array([1,2,3,4,5])
</code></pre>
<p>and a mask like this:</p>
<pre><code>mask = np.array([True, True, True, False, False])
</code></pre>
<p>I would like to create an equal sized array with the elements from <code>a</code> included whenever that same position is <code>True</code> in the mask, and then set the values to zero whenever <code>False</code> is present in the mask. The end result would be like this:</p>
<pre><code>result = np.array([1,2,3,0,0])
</code></pre>
<p>any ideas? Thanks in advance!</p>
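<p>One way to express exactly this is <code>np.where</code>, which selects elementwise between two sources based on a boolean mask:</p>

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
mask = np.array([True, True, True, False, False])

result = np.where(mask, a, 0)  # take a where mask is True, 0 where False
print(result)  # [1 2 3 0 0]
```

<p>Since <code>False</code> multiplies as 0 and <code>True</code> as 1, <code>a * mask</code> gives the same result here.</p>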
|
<python><numpy>
|
2023-04-27 17:51:05
| 2
| 2,783
|
pd441
|
76,123,289
| 11,065,874
|
fastapi Annotated does not work with Field - How to fix it?
|
<p>I have installed fastapi version 0.95.0</p>
<p>I have run.py working fine as below:</p>
<pre><code>import uvicorn
from fastapi import FastAPI
from fastapi import Path

app = FastAPI()


@app.get("/test/{test_id}")
def test(test_id: str = Path(..., title="a test title", description="a test description")):
    return "Hello world"


def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)


if __name__ == "__main__":
    main()
</code></pre>
<ul>
<li>Api works fine</li>
<li>in the <code>/doc</code> endpoint I see the title and the description showing right</li>
</ul>
<hr />
<p>I now want to use the new <code>Annotated</code> feature of fastapi which works fine:</p>
<pre><code>from typing import Annotated

import uvicorn
from fastapi import FastAPI
from fastapi import Path

app = FastAPI()


@app.get("/test/{test_id}")
def test(test_id: Annotated[str, Path(..., title="a test title", description="a test description")]):
    return "Hello world"


def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)


if __name__ == "__main__":
    main()
</code></pre>
<ul>
<li>Api works fine</li>
<li>in the <code>/doc</code> endpoint I see the title and the description showing right</li>
</ul>
<hr />
<p>I want to do some complex validation. so, let me go back and make a change to the first approach:</p>
<pre><code>from typing import Annotated

import uvicorn
from fastapi import FastAPI, Depends
from fastapi import Path
from pydantic import BaseModel

app = FastAPI()


class TestInput(BaseModel):
    test_id: str = Path(..., title="a test title", description="a test description")


@app.get("/test/{test_id}")
def test(inp: TestInput = Depends()):
    return "Hello world"


def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)


if __name__ == "__main__":
    main()
</code></pre>
<ul>
<li>Api works fine</li>
<li>in the <code>/doc</code> endpoint I the title and the description <strong>not</strong> showing</li>
</ul>
<hr />
<p>Now I wrap Path with Field</p>
<pre><code>from typing import Annotated

import uvicorn
from fastapi import FastAPI, Depends
from fastapi import Path
from pydantic import BaseModel, Field

app = FastAPI()


class TestInput(BaseModel):
    test_id: str = Field(Path(..., title="a test title", description="a test description"))


@app.get("/test/{test_id}")
def test(inp: TestInput = Depends()):
    return "Hello world"


def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)


if __name__ == "__main__":
    main()
</code></pre>
<ul>
<li>Api works fine</li>
<li>in the <code>/doc</code> endpoint I see the title and the description showing right</li>
</ul>
<hr />
<p>Now I want to use Annotated</p>
<pre><code>from typing import Annotated

import uvicorn
from fastapi import FastAPI, Depends
from fastapi import Path
from pydantic import BaseModel, Field

app = FastAPI()


class TestInput(BaseModel):
    test_id: Annotated[str, Field(Path(..., title="a test title", description="a test description"))]


@app.get("/test/{test_id}")
def test(inp: TestInput = Depends()):
    return "Hello world"


def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)


if __name__ == "__main__":
    main()
</code></pre>
<p>The application crashes</p>
<pre><code>File "pydantic/fields.py", line 497, in pydantic.fields.ModelField.infer
File "pydantic/fields.py", line 469, in pydantic.fields.ModelField._get_field_info
ValueError: `Field` default cannot be set in `Annotated` for 'test_id'
</code></pre>
<p>How should I use Annotated with the wrapper Field?</p>
<hr />
<p>I just noticed that with Annotated I get the description an title without the need for Field</p>
<pre><code>from typing import Annotated

import uvicorn
from fastapi import FastAPI, Depends
from fastapi import Path
from pydantic import BaseModel, Field

app = FastAPI()


class TestInput(BaseModel):
    test_id: Annotated[str, Path(..., title="a test title", description="a test description")]


@app.get("/test/{test_id}")
def test(inp: TestInput = Depends()):
    return "Hello world"


def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)


if __name__ == "__main__":
    main()
</code></pre>
<p>My problem is solved, but <code>Field</code> still does not work with <code>Annotated</code>. I am wondering whether this is intended behavior or a bug in FastAPI.</p>
|
<python><fastapi><pydantic>
|
2023-04-27 17:46:08
| 0
| 2,555
|
Amin Ba
|
76,123,245
| 12,671,057
|
Why is set.remove so slow here?
|
<p>(Extracted from <a href="https://stackoverflow.com/q/76122610/12671057">another question</a>.) Removing this set's 200,000 elements one by one like this takes 30 seconds (<a href="https://ato.pxeger.com/run?1=m72soLIkIz9vwYKlpSVpuhY3A4sVbBWKU0s0ihLz0lM1jAxAQFOTqzwjMydVodiKSwEI0vKLFCoUMvNgfBAo1itKzc0vS9Wo0ISLJRWlJmZDDIaaD7MHAA" rel="nofollow noreferrer">Attempt This Online!</a>):</p>
<pre class="lang-py prettyprint-override"><code>s = set(range(200000))
while s:
    for x in s:
        s.remove(x)
        break
</code></pre>
<p>Why is that so slow? Removing a set element is supposed to be fast.</p>
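<p>For contrast, a sketch of draining the same set with <code>set.pop()</code>, which removes an arbitrary element directly instead of restarting iteration from the front on every pass, and finishes in linear time:</p>

```python
def drain(s):
    # pop() removes and returns an arbitrary element; no iterator is restarted
    count = 0
    while s:
        s.pop()
        count += 1
    return count

print(drain(set(range(200000))))  # 200000
```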
|
<python><performance><set><python-internals>
|
2023-04-27 17:41:08
| 1
| 27,959
|
Kelly Bundy
|
76,123,186
| 4,508,962
|
Python - Numba : How can I pass a @jitclass class to a property type in the specs of another @jitclass class?
|
<p>I have a class <code>OrderBook</code> that is a <code>@jitclass</code>.
I want to declare a class <code>Position</code> that contains a <code>last_quote</code> property of type <code>OrderBook</code>. <code>Position</code> is a <code>@jitclass</code> as well.</p>
<pre><code>@jitclass([
    ('instrument', types.string),
    ('size', float32),
    ('avg_price', float32),
    ('last_quote', OrderBook),
])
class Position:
    ...
</code></pre>
<p>However, Numba says :</p>
<pre><code>spec values should be Numba type instances, got <class 'numba.experimental.jitclass.base.OrderBook'>
</code></pre>
<p>What is strange is that I can pass with no problem a list of OrderBook like that :</p>
<pre><code>@jitclass([
    ('instrument', types.string),
    ('size', float32),
    ('avg_price', float32),
    ('last_quote', types.ListType(OrderBook)),
])
class Position:
    ...
</code></pre>
<p>This compiles successfully. If I can compile a property whose type is a list of a <code>@jitclass</code> class, I should be able to compile a property that is the jitclass type itself. How can I do that?</p>
|
<python><numba>
|
2023-04-27 17:32:41
| 1
| 1,207
|
Jerem Lachkar
|
76,123,155
| 1,860,222
|
Replacing all data in a pyqt table model
|
<p>I'm working on a game that involves moving 'gems' between a storage bank and a player. I'm using a QTableView to display the current gems of the player with a QAbstractTableModel as the model. When the app is initialized I set up the model like this:</p>
<pre><code>def __init__(self, *args, obj=None, **kwargs):
    QtWidgets.QMainWindow.__init__(self)
    Ui_Widget.__init__(self)
    self.setupUi(self)
    self._headers = [Color.WHITE, Color.BLUE, Color.GREEN, Color.RED, Color.BLACK]
    self.playerGemsTable = GemTableView(self.layoutWidget)
    self.gameActions = GameActions(["p1", "p2", "p3", "p4"])

    # populate the player gems table
    data: list = [["Currently held", self.gameActions.currentPlayer.gems]]
    self._playerGemModel = TokenStoreModel(data, self._headers, [0])
    self.playerGemsTable.setModel(self._playerGemModel)
</code></pre>
<p>So far, so good. When I run my app it displays a table with the correct data. The problem now is figuring out how to update the model. I'm not changing individual items. I'm calling a method of the GameActions class that handles all the calculations and updates all the gem quantities. My initial thought was to set up a signal to respond to a successful 'save' and then set _playerGemModel.data with the new value of self.gameActions.currentPlayer.gems . Looking at the available methods for QAbstractTableModel however, there doesn't appear to be any way to do a mass update. All the methods seem to be designed to update a single value at a specific row and column.</p>
<p>Am I missing something? Is looping through the data and updating items one by one really the only option?</p>
|
<python><model-view-controller><pyqt><pyqt5><qtableview>
|
2023-04-27 17:27:34
| 2
| 1,797
|
pbuchheit
|
76,123,076
| 11,061,755
|
Read a compressed file with custom extension using Pyspark
|
<p>I have a set of gzip-compressed CSV files in S3. But they have the <em>.csv</em> extension, not <em>.csv.gz</em>. The issue is that when I try to read them using Pyspark, they do not read properly. I have tried many configurations, but with no luck.</p>
<p><a href="https://i.sstatic.net/gFk7M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gFk7M.png" alt="enter image description here" /></a></p>
<p>Then I found similar issue in here(<a href="https://stackoverflow.com/questions/44372995/read-a-compressed-file-with-custom-extension-with-spark">link</a>). But here they have used Scala. I tried to implement this with Python, but I could not find the correct APIs for doing that.</p>
<p><a href="https://i.sstatic.net/Frua8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Frua8.png" alt="enter image description here" /></a></p>
<p>Any help would be appreciated.</p>
<p>How can I implement this in Python, i.e., read a compressed file with a custom extension using PySpark?</p>
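Since Spark picks the decompression codec from the file extension, one JVM-free workaround is to read the files as raw bytes with `binaryFiles` and gunzip them in Python. Note that each whole file is materialized in one task, so this suits many modest files better than a few huge ones. A sketch (the S3 path and column names are placeholders):

```python
import csv
import gzip
import io

def decompress_csv(payload: bytes):
    """Gunzip one file's raw bytes and parse the result as CSV rows."""
    with gzip.open(io.BytesIO(payload), mode="rt", newline="") as fh:
        yield from csv.reader(fh)

# Spark wiring (sketch: assumes a live SparkSession named `spark`):
# rdd = (spark.sparkContext
#        .binaryFiles("s3://my-bucket/prefix/*.csv")
#        .flatMap(lambda name_bytes: decompress_csv(name_bytes[1])))
# df = rdd.toDF(["col1", "col2"])
```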
|
<python><pyspark><compression>
|
2023-04-27 17:13:54
| 0
| 320
|
Shanga
|
76,122,859
| 2,651,073
|
How can I remove fields from WTF form by name
|
<p>I am trying to remove some fields from a WTForms form:</p>
<pre><code>form = super().edit_form(obj=obj)
for f in form:
    if f.name not in self.form_details_columns:
        del f
return form
</code></pre>
<p>This method doesn't work. However, if I use something like</p>
<pre><code>del form.question
</code></pre>
<p>where <code>question</code> is the name of a field, it works.</p>
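The reason `del f` has no effect is that it only unbinds the loop variable; the working `del form.question` is attribute deletion on the form, which generalizes to `delattr(form, name)`. Collecting the names first avoids mutating the form while iterating over it. A sketch (the helper name is an invention for illustration):

```python
def prune_form(form, keep):
    """Delete every field whose name is not in `keep`.

    delattr(form, name) is the programmatic equivalent of the working
    `del form.question`; `del f` only removed the local loop variable.
    """
    # Snapshot the names first: the form must not change mid-iteration.
    doomed = [f.name for f in form if f.name not in keep]
    for name in doomed:
        delattr(form, name)
    return form
```

In the question's `edit_form` override, `keep` would be `self.form_details_columns`.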
|
<python><flask-wtforms><wtforms><flask-admin>
|
2023-04-27 16:46:47
| 2
| 9,816
|
Ahmad
|
76,122,713
| 6,719,772
|
How to define a function interface that allows any number of arguments and 1 required argument?
|
<p>How can I get rid of this mypy error?</p>
<pre><code>class TestListener(Protocol):
    def __call__(self, command: str, *args: tuple[Any, ...]) -> Any:
        ...

def test(listener: TestListener):
    ...

def listener1(command: str, test: int):
    ...

def listener2(command: str):
    ...

def listener3(command: str, *args):
    ...

test(listener1)  # error: Argument 1 to "test" has incompatible type "Callable[[str, int], Any]"; expected "TestListener"
test(listener2)  # error: Argument 1 to "test" has incompatible type "Callable[[str], Any]"; expected "TestListener"
test(listener3)  # this works
</code></pre>
<p>I would like all 3 to work.</p>
|
<python><mypy>
|
2023-04-27 16:27:45
| 0
| 401
|
danielmoessner
|
76,122,705
| 14,485,257
|
How to reduce the width of a plot display in Python Dash?
|
<p>This is my code for a callback function as a part of a bigger code using the Dash package.</p>
<pre><code>@app.callback(
    Output("graph-container", "children"),
    Input("type-dropdown", "value"),
)
def update_graph(serial_number):
    serial_data = pd.read_csv("C:/Users/Enigma/data.csv")
    # Taking subset
    serial_data = serial_data[serial_data["serial_num"] == serial_number]
    # Plot the data
    fig = px.scatter(serial_data, x="date_time", y="voltage")
    fig.add_trace(
        go.Scatter(
            x=serial_data["date_time"],
            y=serial_data["voltage"],
            mode="markers",
            marker=dict(
                color=np.where(serial_data["after_red"], "orange", np.where(serial_data["voltage"] < 2.1, "red", "blue")),
                size=8
            )
        )
    )
    # Set the x-axis range
    x_range = [np.min(serial_data["date_time"]), np.max(serial_data["date_time"])]
    # Set the x-axis range for the scatter plot and the line plot
    fig.update_xaxes(range=x_range)
    # Get the minimum and maximum y-axis values with a gap before the lowest value
    y_min = np.floor(serial_data["voltage"].min()) - 0.01
    fig.update_layout(
        title={
            'text': "Voltage Variation",
            'y': 0.98,
            'x': 0.5,
            'xanchor': 'center',
            'yanchor': 'top'
        },
        xaxis_title="Time",
        yaxis_title="Voltage",
        showlegend=False,
        # Set the y-axis range with a gap before the lowest value
        yaxis_range=[y_min, None],
        # Add some margin to the bottom of the plot
        margin=dict(l=50, r=50, b=50, t=50, pad=20),
        height=500
    )
    # Add a horizontal line at y = 2.1
    fig.add_shape(
        type="line",
        x0=serial_data["date_time"].min(),
        x1=serial_data["date_time"].max(),
        y0=2.100,
        y1=2.100,
        line=dict(color="black", dash="dot")
    )
    return html.Div(
        [
            dcc.Graph(
                id="first-graph",
                figure=fig
            )
        ],
        style={"width": "100%"},
    )
</code></pre>
<p>The issue is that whenever I include the following portion of the code (as shown above) for setting up the x-axis range, my plot gets chopped off at the right extreme.</p>
<pre><code># Set the x-axis range
x_range = [np.min(serial_data["date_time"]), np.max(serial_data["date_time"])]
# Set the x-axis range for the scatter plot and the line plot
fig.update_xaxes(range=x_range)
</code></pre>
<p>The chopped-off plot:
<a href="https://i.sstatic.net/im7ZE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/im7ZE.png" alt="enter image description here" /></a></p>
<p>But when I don't include this portion of the code, the complete plot gets displayed:
<a href="https://i.sstatic.net/ZWNaq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZWNaq.png" alt="enter image description here" /></a></p>
<p>Can anyone please help to get the complete plot displayed even after including the portion of the code for setting up the x-axis range?</p>
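Setting the range exactly to the data's min and max places the edge markers on the axis boundary, where roughly half of each marker gets clipped. Padding the range slightly on each side is one workaround; a sketch (the 2% padding fraction is an assumption, tune to taste):

```python
import pandas as pd

def padded_range(series, frac=0.02):
    """Return [min, max] widened by `frac` of the span on each side so
    markers at the extremes are not clipped when the x-axis range is
    set explicitly."""
    ts = pd.to_datetime(series)
    lo, hi = ts.min(), ts.max()
    pad = (hi - lo) * frac
    return [lo - pad, hi + pad]

# In the callback, instead of the exact min/max range:
# fig.update_xaxes(range=padded_range(serial_data["date_time"]))
```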
|
<python><callback><plotly-dash><plotly>
|
2023-04-27 16:26:15
| 1
| 315
|
EnigmAI
|
76,122,690
| 543,913
|
Updating the location of a horizontal line based on a slider value in Bokeh
|
<p>I want to draw a plot that has both curves based on (x, y) data and horizontal lines. I want to update the curves and lines based on the state of a slider. How can I do this?</p>
<p>I know how to update the curve based on the slider state, by using <code>ColumnDataSource</code>. I also know how to draw a horizontal line, by using <code>Span</code>. But I don't know how to combine these two.</p>
<p>For now, I am doing a hack by representing the horizontal line by an array of identical y-values, which I am storing in the <code>ColumnDataSource</code>, and by using <code>line()</code> instead of <code>Span</code>. I'm just wondering if this is the intended way to solve this problem.</p>
<p>I found this same question asked by user Thornhale in a 2017 comment to this <a href="https://stackoverflow.com/a/35019983/543913">answer</a>. But the comment was not answered.</p>
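`Span` has a mutable `location` property, so a slider can drive it directly and the identical-y-values workaround is not needed. A sketch using a browser-side `CustomJS` callback (the widget ranges are placeholder values); in a Bokeh server app a Python `on_change` handler works the same way:

```python
from bokeh.models import CustomJS, Slider, Span

# A horizontal line whose y-position the slider will control.
span = Span(location=0, dimension="width", line_dash="dashed")
slider = Slider(start=0, end=10, value=0, step=0.1, title="level")

# Browser-side update: move the Span whenever the slider moves.
slider.js_on_change("value", CustomJS(args=dict(span=span), code="""
    span.location = cb_obj.value;
"""))

# Server-side equivalent (Bokeh server apps only):
# def on_change(attr, old, new):
#     span.location = new
# slider.on_change("value", on_change)
```

`figure.add_layout(span)` attaches the line to a plot, and the ColumnDataSource-driven curve updates and the Span update can live in the same callback.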
|
<python><bokeh>
|
2023-04-27 16:24:40
| 1
| 2,468
|
dshin
|
76,122,630
| 17,322,210
|
How to ignore environment directory when using python ruff linter in console
|
<p>I was trying ruff linter. I have a file structure like below</p>
<pre><code>project_folder
├── env   # Python environment [python -m venv env]
│   ├── Include
│   ├── Lib
│   ├── Scripts
│   └── ...
├── __init__.py
└── numbers.py
</code></pre>
<p>I am trying to use this <a href="https://beta.ruff.rs/docs/tutorial/" rel="noreferrer">code</a></p>
<p>I activated the environment inside the project_folder and ran the script below</p>
<pre class="lang-bash prettyprint-override"><code>ruff check .
</code></pre>
<p>but ruff also checked the <code>env</code> directory:
<a href="https://i.sstatic.net/oaofn.png" rel="noreferrer">Image.png</a></p>
<p>How can I ignore the <code>env</code> directory, the way the Linux command below does?</p>
<pre class="lang-bash prettyprint-override"><code>tree -I env
</code></pre>
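Ruff reads an exclude list from its configuration file, so adding the environment directory there keeps it out of every run. A sketch, assuming the directory is literally named `env` and the project uses `pyproject.toml` (a standalone `ruff.toml` works the same way):

```toml
# pyproject.toml
[tool.ruff]
extend-exclude = ["env"]
```

Recent Ruff versions also accept the same option as a one-off command-line flag, e.g. `ruff check . --extend-exclude env`.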
|
<python><linter><ruff>
|
2023-04-27 16:18:02
| 2
| 307
|
Shahobiddin Anor
|
76,122,572
| 1,362,485
|
Python yield function runs twice instead of once
|
<p>The objective of the Python code below is to take a CSV file, partition it into N CSV files, and write these output files to disk.</p>
<p>Note that each file has to contain complete records; you cannot split records between files.</p>
<p>The <code>chunk_file.write()</code> function writes the files. The problem is that the last file is created twice.</p>
<pre><code>def read_in_chunks(file_path, chunk_size):
    with open(file_path, 'rb') as f:
        pending = b''
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                print('EOF')
                if pending != b'':
                    yield pending
                    pending = b''
                break
            print("CHUNK SIZE: " + str(len(chunk)))
            data = pending + chunk
            x1 = data.rfind(b'\n')
            if x1 != -1:
                x2 = len(data) - x1 - 1
                pending = data[-x2:]
                yield data[:x1 + 1]

table_name = 'my_table'
chunk_num = 1
for records_chunk in read_in_chunks('input_file.csv', chunk_size):
    chunk_filename = f'{table_name}_{chunk_num}.csv'
    with open(chunk_filename, 'wb') as chunk_file:
        print("CHUNK LENGTH TO WRITE:" + str(len(records_chunk)))
        chunk_file.write(records_chunk)
    chunk_num += 1
</code></pre>
<p>When I run this code, the print result is:</p>
<pre><code>CHUNK SIZE: 10000
CHUNK LENGTH TO WRITE:9998
CHUNK SIZE: 10000
CHUNK LENGTH TO WRITE:9984
CHUNK SIZE: 10000
CHUNK LENGTH TO WRITE:9982
CHUNK SIZE: 10000
CHUNK LENGTH TO WRITE:10022
CHUNK SIZE: 10000
CHUNK LENGTH TO WRITE:9982
CHUNK SIZE: 3559
CHUNK LENGTH TO WRITE:3591
EOF
CHUNK LENGTH TO WRITE:3591
</code></pre>
<p>Note that the last two file lengths are identical, and the last one is written after the EOF.</p>
<p>What is wrong with this code?</p>
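The duplicate comes from the slice `data[-x2:]`. When a chunk ends exactly on `b'\n'`, `x1` is the last index, so `x2 = len(data) - x1 - 1` is 0, and `data[-0:]` is the ENTIRE buffer, not the empty tail. The final chunk is therefore silently kept as `pending` and yielded a second time at EOF, which matches the trace: the last two lengths are identical. Slicing from the front avoids the negative-zero trap; a corrected sketch:

```python
def read_in_chunks(file_path, chunk_size):
    """Yield byte chunks from the file that always end on a b'\n'
    record boundary.

    Fix for the original bug: `pending = data[x1 + 1:]` is b'' when the
    data ends on a newline, whereas `data[-x2:]` with x2 == 0 was the
    whole buffer, causing the duplicate final chunk at EOF.
    """
    with open(file_path, 'rb') as f:
        pending = b''
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                if pending:
                    yield pending      # trailing bytes with no final newline
                break
            data = pending + chunk
            x1 = data.rfind(b'\n')
            if x1 == -1:
                pending = data         # no newline yet; keep buffering
            else:
                pending = data[x1 + 1:]
                yield data[:x1 + 1]
```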
|
<python><python-3.x>
|
2023-04-27 16:11:44
| 2
| 1,207
|
ps0604
|
76,122,528
| 14,735,451
|
How to cluster a massive amount of data that doesn't fit into memory?
|
<p>I have 30 files, each is a dictionary with about 1M keys that map to embeddings:</p>
<pre><code>file_1 = {key_1: some_array, key_2: some_array ...}
file_2 = {key_13: some_array, key_21: some_array ...}
...
file_30 = {key_123: some_array, key_9: some_array ...}
</code></pre>
<p>Normally I would just use <a href="https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html" rel="nofollow noreferrer">KMeans</a> on the embeddings, but the issue is that I can't load all of them into memory. This brings out several issues:</p>
<ol>
<li><p>I'm not sure how to find how many clusters are optimal.</p>
</li>
<li><p>Once I know how many clusters, I'm not entirely sure how to find the cluster centers.</p>
</li>
<li><p>I don't know if KMeans is the right approach for this or if there's a more efficient way to do this.</p>
</li>
</ol>
<p>I can load about 10M of them, which I think is a sufficient sample size, and then maybe find the clusters based on these (or even a smaller sample size), but not sure if that's the right approach or if there's something better. I'm also not entirely sure that I can do any sort of computation on 10M samples -- I can load them, but anything else might exceed my memory.</p>
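One scalable option is scikit-learn's `MiniBatchKMeans`, whose `partial_fit` lets you stream the data one file at a time so only about 1M vectors are ever in memory. For choosing `n_clusters`, scoring a manageable random sample (a few hundred thousand vectors) with an elbow or silhouette curve for several candidate k values is a common compromise. A sketch, where `load_embeddings` is an assumed helper returning one file's `(n_samples, dim)` array:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def fit_streaming_kmeans(file_paths, n_clusters, load_embeddings, batch=10_000):
    """Fit k-means incrementally, one file (and one mini-batch) at a time.

    `load_embeddings(path)` is a placeholder for however one dictionary
    file's embedding arrays get loaded and stacked.
    """
    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=0)
    for path in file_paths:
        X = np.asarray(load_embeddings(path))
        for start in range(0, len(X), batch):
            km.partial_fit(X[start:start + batch])  # incremental center update
    return km  # km.cluster_centers_ holds the centers; km.predict() labels new data
```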
|
<python><machine-learning><data-science><cluster-analysis><data-analysis>
|
2023-04-27 16:07:57
| 1
| 2,641
|
Penguin
|