| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,052,497
| 10,044,690
|
Pyomo calling functions when creating constraints
|
<p>I'm trying to use a loop that creates constraints. Within each constraint, two functions are called that perform simple operations on some Pyomo variables at a particular index and then return a vector. An example code is as follows:</p>
<pre><code>import pyomo.environ as pyo
def f(x, y):
    x0 = 2.0*x[1]
    x1 = 0.5*y
    return x0, x1

def g(x):
    x0 = x[0] - 5.0
    x1 = x[1] + 10.0
    return x0, x1

model = pyo.ConcreteModel()
N = 100
num_rows = range(2)
num_cols = range(N)

# Creating decision variables
model.x = pyo.Var(num_rows, num_cols, domain=pyo.Reals)
model.y = pyo.Var(num_cols, domain=pyo.Reals)

# Creating constraints in a loop
model.constraints = pyo.ConstraintList()
for k in range(N-1):
    model.constraints.add(expr=model.x[:,k+1] == model.x[:,k] + 0.5*f(model.x[:,k+1], model.y[k+1]) + 2.0*g(model.x[:,k]))
</code></pre>
<p>In the above example, the pyomo variables <code>x</code> and <code>y</code> at index <code>k+1</code> are passed into the function <code>f</code>. This function then assigns <code>x[0,k+1] = 2.0*x[1,k+1]</code>, and <code>x[1,k+1] = 0.5*y[k+1]</code>. The vector <code>x[:,k+1]</code> of dimension <code>2x1</code> is then returned by <code>f</code>.</p>
<p>Similarly, for the function <code>g</code>, the pyomo variable <code>x</code> at index <code>k</code> is passed into it, where we then assign <code>x[0,k] = x[0,k] - 5.0</code> and <code>x[1,k] = x[1,k] + 10.0</code>. The vector <code>x[:,k]</code> of dimension <code>2x1</code> is then returned by <code>g</code>.</p>
<p>Currently the above code does not work, as I get the following error:</p>
<pre><code> x0 = 2.0*x[1]
TypeError: unsupported operand type(s) for *: 'float' and 'IndexedComponent_slice'
</code></pre>
<p>Any help on how to use Python functions that manipulate and return pyomo variables would be appreciated. Thanks very much.</p>
|
<python><optimization><pyomo>
|
2023-04-19 08:48:34
| 1
| 493
|
indigoblue
|
76,052,283
| 19,276,569
|
How to mass-produce PDFs efficiently in Python with different variable inputs into each PDF
|
<p>I have recently begun a task of automating PDF generation for investor relations clients. We need to send out PDFs en masse, but each PDF needs a unique logo and company name in the bottom corner (I have the logos stored in a folder and the corresponding names stored in a txt file).</p>
<p>Furthermore, each page of the PDF is predefined, but there are a few variables that are custom, such as "This year, revenue has increased by X%". I also have the X for each company, etc.</p>
<p>Desired input:
Company name and logo</p>
<p>Desired output:
PDF with standard template however with changed names and logo</p>
<p>I have tried the following:</p>
<pre class="lang-py prettyprint-override"><code>from fpdf import FPDF  # module name is lowercase

pdfs = []
dct = {
    "company1": 5,
}

# minimal example of what I have tried, but doesn't work
for company in open("company_names.txt", "r").readlines():
    pdf = FPDF(orientation='P', unit='mm', format='A4')
    pdf.add_page()
    pdf.set_font('helvetica', 'bold', 10)
    pdf.add_text(company)
    pdf.add_text(f"Revenue has increased by {dct[company]}%")
    pdf.add_picture(f"logos/{company}.png")  # <-- this, among other things, doesn't work
    pdfs.append(pdf)
</code></pre>
<p>Any help would be appreciated. Speed increases would also be appreciated, as it needs to generate thousands of PDF's.</p>
|
<python><pdf><pdf-generation>
|
2023-04-19 08:27:21
| 1
| 856
|
juanpethes
|
76,052,153
| 14,705,072
|
How to fill a column with random values in polars
|
<p>I would like to know how to fill a column of a polars dataframe with random values.
The idea is that I have a dataframe with a given number of columns, and I want to add a column to this dataframe which is filled with different random values (obtained from a random.random() function for example).</p>
<p>This is what I tried for now:</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(
pl.when((pl.col('Q') > 0)).then(random.random()).otherwise(pl.lit(1)).alias('Prob')
)
</code></pre>
<p>With this method, the result I obtain is a column filled with a single random value, <em>i.e.</em> all the rows have the same value.</p>
<p>Is there a way to fill the column with different random values?</p>
<p>Thanks in advance.</p>
|
<python><dataframe><python-polars>
|
2023-04-19 08:13:04
| 5
| 319
|
Haeden
|
76,052,051
| 11,514,484
|
Postgresql UPSERT while encrypting column by python encryption
|
<p>I am using a bulk UPSERT to insert records into a new table, while the data of one column is encrypted by the Python lib CryptoDome.
Here is what I have done:</p>
<pre><code>def bulk_upsert_query(data, table_name=None):
    values_list = []
    add = values_list.append
    for i in data:
        encrypt_user_id = encryption_fnc(str(i.user_id))
        add(f"({i.id},'{encrypt_user_id}','{i.name}')")
    upsert_sql = r"""insert into upsert_test_table
    (id,encrypt_user_id,name)
    values
    """ + "\n, ".join(values_list) + """ ON CONFLICT ON CONSTRAINT
    "upsert_test_table_pkey"
    DO UPDATE SET
    name=excluded.name,
    encrypt_user_id=excluded.user_id"""
    return upsert_sql
</code></pre>
<p><code>data</code> is data from my old table that has <em>user_id</em> as an int (e.g. 124, 345, 786).
In the <code>upsert_test_table</code> table, the id column is the primary key.
The <code>bulk_upsert_query</code> function encrypts the int <em>user_id</em>, appends it to <code>values_list</code>, and then builds an upsert query.</p>
<p>Insertion works as I expected, but for the update on a conflict with the id column: as you can see, I set <code>encrypt_user_id=excluded.user_id</code>, so it updates the existing encrypted <em>user_id</em> with the plain int <em>user_id</em> (from the old table), because I did not apply any encryption function there.
<strong>Update</strong>: the user_id column is changeable.</p>
<p>So what I want is to call my <code>encryption_fnc</code> in the <code>DO UPDATE SET</code> section, so that encryption also happens on update.</p>
<p>Can anyone tell me how I can achieve this?</p>
<p><strong>Note</strong>: I could do the encryption on the database side using <code>pgcrypto</code>, but my requirement is to do the encryption on the Python side, not the database side.</p>
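One observation worth sketching (not from the original post): every row in the VALUES list already carries the Python-encrypted value, so the conflict branch can simply reuse it via <code>excluded.encrypt_user_id</code>; no encryption call is needed inside SQL at all. Here <code>fake_encrypt</code> is a stand-in for the real CryptoDome-based <code>encryption_fnc</code>.

```python
# Encrypt in Python while building VALUES; the DO UPDATE branch then reuses
# the already-encrypted value through excluded.encrypt_user_id.
def fake_encrypt(value):
    # stand-in for the real encryption function
    return value[::-1]

def bulk_upsert_query(rows):
    values_list = [
        f"({r['id']},'{fake_encrypt(str(r['user_id']))}','{r['name']}')"
        for r in rows
    ]
    return (
        "insert into upsert_test_table (id,encrypt_user_id,name) values\n"
        + "\n, ".join(values_list)
        + '\nON CONFLICT ON CONSTRAINT "upsert_test_table_pkey"\n'
        "DO UPDATE SET name=excluded.name,\n"
        "  encrypt_user_id=excluded.encrypt_user_id"  # encrypted value, not the raw int
    )

sql = bulk_upsert_query([{"id": 1, "user_id": 124, "name": "a"}])
```

Separately, interpolating values into the SQL string is vulnerable to SQL injection; passing the rows as parameters (e.g. psycopg2's `execute_values`) would be safer, though that is a suggestion beyond the question's scope.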
|
<python><postgresql><encryption><upsert>
|
2023-04-19 08:01:53
| 2
| 331
|
Sohel Reza
|
76,052,034
| 14,457,833
|
Why does the maximize button get removed when I click or try to maximize the window in PyQt5?
|
<p>I have a simple PyQt application with some UI code responsible for real-time speech-to-text. Everything works fine, but when I try to maximize the application by clicking the maximize icon, the icon gets removed and the window does not maximize. Here is how it looks:</p>
<p><a href="https://i.sstatic.net/aVjM6.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aVjM6.gif" alt="enter image description here" /></a></p>
<p>This is my code.</p>
<pre class="lang-py prettyprint-override"><code>class MainWindow(QtWidgets.QWidget):
    def __init__(self, *args, **kwargs) -> None:
        super(MainWindow, self).__init__(*args, **kwargs)
        self.setWindowTitle("Speech To Text")
        self.setWindowFlags(self.windowFlags() | Qt.WindowMaximizeButtonHint)
        self.setWindowFlags(self.windowFlags() | Qt.WindowCloseButtonHint)
        self.setWindowFlags(self.windowFlags() | Qt.WindowMinimizeButtonHint)

        self.app_name = QtWidgets.QLabel("Speech To Text")

        self.hbox = QtWidgets.QHBoxLayout()
        self.someButton1 = QtWidgets.QPushButton("btn1")
        self.someButton2 = QtWidgets.QPushButton("btn2")
        self.hbox.addWidget(self.someButton1)
        self.hbox.addWidget(self.someButton2)

        self.grid = QtWidgets.QGridLayout()
        self.vbox = QtWidgets.QVBoxLayout()

        self.topBtn1 = QtWidgets.QPushButton("Top Button 1")
        self.topBtn2 = QtWidgets.QPushButton("Top Button 2")
        self.topLayout = QtWidgets.QVBoxLayout()
        self.topLayout.addWidget(self.topBtn1)
        self.topLayout.addWidget(self.topBtn2)

        self.startButton = QtWidgets.QPushButton()
        self.startLabel = QtWidgets.QLabel()
        self.startLabel.setText("Tap To Record")
        self.startLayout = QtWidgets.QVBoxLayout()
        self.startLayout.addWidget(self.startButton, alignment=Qt.AlignCenter)
        self.startLayout.addWidget(self.startLabel, alignment=Qt.AlignCenter)
        self.startLayout.addStretch()

        self.stopButton = QtWidgets.QPushButton()
        self.stopLabel = QtWidgets.QLabel()
        self.stopLabel.setText("Stop")
        self.stopLayout = QtWidgets.QHBoxLayout()
        self.stopLayout.addWidget(self.stopButton)
        self.stopLayout.addWidget(self.stopLabel)

        self.vbox.addLayout(self.topLayout)
        self.vbox.addLayout(self.startLayout)
        self.vbox.addLayout(self.stopLayout)
        self.vbox.addWidget(self.app_name, alignment=Qt.AlignCenter, stretch=1)

        self.editor = QtWidgets.QTextEdit()
        self.grid.addLayout(self.hbox, 0, 2)
        self.grid.addLayout(self.vbox, 1, 1)
        self.grid.addWidget(self.editor, 1, 2)
        self.setLayout(self.grid)
</code></pre>
<p>I've not provided the full code, only the main part. I'm using Ubuntu, and some methods on the <strong>MainWindow</strong> class update the UI when buttons are clicked <em>(just for extra information)</em>. What could the issue be? On the same system, another simple application works perfectly.</p>
|
<python><qt><pyqt><pyqt5><qt5>
|
2023-04-19 07:59:41
| 1
| 4,765
|
Ankit Tiwari
|
76,051,928
| 3,769,802
|
Docker run throws error AttributeError: 'cimpl.Producer' object has no attribute 'service_name'
|
<p>I have a base resource class</p>
<pre class="lang-py prettyprint-override"><code># ../baselayer/baseresource/resource.py
class Resource:
    def __init__(self, *args, service_name: str, client_name: str = "Resource", log: Logger = None,
                 error_notifier: ErrorNotifier = None, **kwargs):
        super().__init__(*args, **kwargs)
        self.service_name: str = service_name
        self.client_name: str = client_name
        self.log: Logger = log
        self.error_notifier: ErrorNotifier = error_notifier
</code></pre>
<p>And I have a utility class inheriting the above class:</p>
<pre class="lang-py prettyprint-override"><code>
# ../baselayer/kafka/producer.py
from confluent_kafka import Producer

class CustomKafkaProducer(Resource, Producer):
    # custom functions

    @classmethod
    def get_client(cls, *, service_name, logger, error_notifier, config: KafkaCred):
        config = {
            "bootstrap.servers": config.BROKER,
            "security.protocol": "SASL_SSL",
            "sasl.mechanisms": "PLAIN",
            "sasl.username": config.USERNAME,
            "sasl.password": config.PASSWORD,
            "client.id": f"{service_name}-{uuid4().hex}"
        }
        client = cls(config, log=logger, service_name=service_name, error_notifier=error_notifier)
        return client
</code></pre>
<p>This is part of a fastapi project and the <code>CustomKafkaProducer.get_client</code> is called during app setup</p>
<pre class="lang-py prettyprint-override"><code>../tests/app.py
from dotenv import load_dotenv
print("ENV load status test", load_dotenv(".env"))
from src.app.deployment import http_server, kafka_client
</code></pre>
<p><code>gunicorn tests.app:http_server --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:3000 --log-file - --access-logfile -</code></p>
<p>With the above command I am able to run the server with <code>Python 3.10.4</code></p>
<p>I am building the image for the code with the following docker file</p>
<pre><code>FROM python:3.10.4-slim-bullseye as base
RUN apt-get update
RUN apt-get install build-essential -y
RUN python -m pip install --upgrade pip
COPY ./requirements.txt /tmp/requirements.txt
WORKDIR /myapp
RUN pip install -r /tmp/requirements.txt
COPY ./src /myapp/src
COPY ./baselayer /myapp/baselayer
EXPOSE 3000
FROM base AS http
CMD ["ddtrace-run", "gunicorn", "src.app.deployment:http_server","--workers", "4", "--worker-class", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:3000", "--log-file" , "-", "--access-logfile", "-"]
</code></pre>
<p>When running the container I get the following error</p>
<pre><code>File "/myapp/src/app/deployment/ddtrace_deployment.py", line 15, in <module>
http_server = create_http_app()
File "/myapp/src/app/app.py", line 80, in create_http_app
return create_app(config=config, cls=HTTPServer)
File "/myapp/src/app/app.py", line 75, in create_app
app = cls.get_app(config, aws_session=Session())
File "/myapp/baselayer/app.py", line 104, in get_app
app.setup_app()
File "/myapp/src/app/app.py", line 49, in setup_app
super().setup_app()
File "/myapp/src/app/app.py", line 34, in setup_app
self.kafka_producer = CustomKafkaProducer.get_client(service_name=self.config.SERVICE_NAME,
File "/myapp/baselayer/kafka/producer.py", line 62, in get_client
client = cls(config, log=logger, service_name=service_name, error_notifier=error_notifier)
File "/myapp/baselayer/baseresource/resource.py", line 11, in __init__
self.service_name: str = service_name
AttributeError: 'cimpl.Producer' object has no attribute 'service_name'
</code></pre>
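The shape of the failure can be reproduced without Kafka (all names below are stand-ins; why the container ends up with a bare `cimpl.Producer` instance is unknown — the container-only difference, `ddtrace-run`, patching `confluent_kafka` is an assumption worth checking). A C-extension instance with no per-instance `__dict__` rejects new attributes exactly like the traceback; wrapping by composition instead of inheriting sidesteps it.

```python
# CProducer mimics a C-extension class whose instances reject new attributes.
class CProducer:
    __slots__ = ("conf",)  # no per-instance __dict__

    def __init__(self, conf):
        self.conf = conf


raw = CProducer({"bootstrap.servers": "..."})
failure = None
try:
    raw.service_name = "svc"  # same AttributeError shape as the traceback
except AttributeError as exc:
    failure = str(exc)


class SafeProducer:
    """Composition instead of inheritance: metadata lives on the wrapper."""

    def __init__(self, conf, *, service_name):
        self._producer = CProducer(conf)
        self.service_name = service_name

    def __getattr__(self, name):
        # delegate everything else (produce, flush, ...) to the wrapped object
        return getattr(self._producer, name)


safe = SafeProducer({"bootstrap.servers": "..."}, service_name="svc")
```

With composition, the `Resource`-style metadata never has to live on the C-extension object at all, so the failure mode disappears regardless of what the runtime does to the producer class.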
|
<python><dockerfile><fastapi><gunicorn><confluent-kafka-python>
|
2023-04-19 07:47:21
| 0
| 533
|
Sab
|
76,051,851
| 2,836,333
|
Scheduling a python script with Windows task scheduler
|
<p><strong>Overview of the setup:</strong></p>
<ul>
<li>W10 professional within a domain</li>
<li>All scripts and files within 1 folder (not local but network drives connected to the file server)</li>
<li>2 python scripts: one to filter an Excel file (removing columns and stuff), another to edit an HTML file and fill it with the content of the excel file.</li>
</ul>
<p><strong>The scripts</strong> (they work perfectly when run manually, with no errors or problems, so no debugging is needed; but maybe the path references are causing the main problem that I explain in the lower part of this post/question):</p>
<ul>
<li>The script to convert 1 excel file to another to filter data uses following main path references:</li>
</ul>
<p>Reading the excel:</p>
<pre><code>df = pd.read_excel(r'H:\python codes\excel\planner software\planning-update.xls')  # raw string: backslashes stay literal
</code></pre>
<p>Write it:</p>
<pre><code>df.to_excel(r'H:\python codes\excel\planner software\planning-update-filtered.xlsx', index=False)
</code></pre>
<p>start the next python script that creates/edits the html file:</p>
<pre><code>subprocess.call(["python", r"H:\python codes\excel\planner software\make-website.py"])
</code></pre>
<ul>
<li>The script to read the excel, opens the html, edits the html uses the same "read" reference for excel as above and uses this one to open the html:</li>
</ul>
<p>Open html:</p>
<pre><code>with open(r"H:\python codes\excel\planner software\calendar.html", "w") as f:
</code></pre>
<p><strong>The goal:</strong>
The goal is to automate the first script and run it every day at a specific hour, whether the user is logged in or not. I am trying to do this with Windows Task Scheduler.</p>
<p><strong>What I tried:</strong></p>
<p>I created a task within Task Scheduler and selected the box to run the task regardless of the login status of the user. Under Action I chose "Start a program"; in the Program/script field I put the storage path "\storage\Users\lambra\python codes\excel\planner software\planner.py", and in the "Start in" field I put "\storage\Users\lambra\python codes\excel\planner software".</p>
<p>I did this following other Stack Overflow posts and Google findings, and I tried it with drive letters as well. With some settings, Task Scheduler gave an error that it could not run the task. With the current settings it says the task ran, but I notice the HTML file is not changed. In Task Scheduler I can see that the task is started (event ID 100), the action is started (ID 200), and I get a line with</p>
<blockquote>
<p>task started by user (ID 110)</p>
</blockquote>
<p>I already tried changing all permissions to allow everything for those files. Running the Python script manually, or even from a batch file, works, but scheduling it does not.</p>
|
<python><scheduled-tasks><windows-task-scheduler>
|
2023-04-19 07:36:25
| 1
| 739
|
Onovar
|
76,051,799
| 11,938,023
|
I have a NumPy or pandas array that repeats; how do I find out where and when it does?
|
<p>OK, this is pandas, but I don't care whether the solution is pandas or NumPy; I'm just looking for a way to see where the pattern repeats. Here is what I have:</p>
<pre><code>Out[713]:
sf sx sz ss
0 12 15 5 6
1 15 1 13 3
2 13 10 6 1
3 9 14 8 15
4 2 2 6 6
5 8 8 2 2
6 15 8 2 5
7 4 6 9 11
8 14 13 10 9
9 2 12 5 11
10 1 6 15 8
11 3 4 9 14
12 12 12 14 14
13 15 15 5 5
14 13 10 10 13
15 9 11 13 15
16 2 1 10 9
17 8 6 3 13
18 15 8 14 9
19 4 3 13 10
20 14 14 2 2
21 2 2 5 5
22 1 6 1 6
23 3 1 13 15
24 12 15 0 3
25 15 1 9 7
26 13 10 2 5
27 9 14 14 9
28 2 2 2 2
29 8 8 2 2
30 15 8 10 13
31 4 6 15 13
32 14 13 5 6
33 2 12 5 11
34 1 6 13 10
35 3 4 5 2
36 12 12 13 13
37 15 15 6 6
38 13 10 8 15
39 9 11 6 4
40 2 1 2 1
41 8 6 2 12
42 15 8 9 14
43 4 3 10 13
44 14 14 5 5
45 2 2 15 15
46 1 6 9 14
47 3 1 14 12
48 12 15 5 6
49 15 1 10 4
50 13 10 13 10
# use pd.read_clipboard() if you want to copy/paste
</code></pre>
<p>You can see that sf repeats every slice of 12, sx repeats every slice of 24, and sz repeats every slice of 35. How do I figure out where these slices repeat without manually checking them? Slice ss also repeats, but I can't seem to figure out how. What strategies can I use to figure out where ss repeats?</p>
<p>Thanks in advance; I couldn't find an answer, so I wanted to ask anyone with knowledge of this situation.</p>
<p>ss is actually just: <code>ss = sf ^ sx ^ sz</code>, if that helps.</p>
<p>Thanks</p>
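A brute-force sketch of the period search (toy columns with known periods below, not the posted frame): the period of a column is the smallest shift under which it still equals itself.

```python
# Smallest p such that a[i + p] == a[i] for every valid i.
import numpy as np

def find_period(col):
    a = np.asarray(col)
    n = len(a)
    for p in range(1, n):
        if np.array_equal(a[p:], a[:n - p]):
            return p
    return n  # no repetition found within the sample

sf = np.tile([12, 15, 13, 9], 6)  # period 4 by construction
ss = np.tile([6, 3, 1], 8)        # period 3 by construction
```

Applied column-wise as `{c: find_period(df[c].to_numpy()) for c in df.columns}`. Since `ss = sf ^ sx ^ sz`, its period must divide the least common multiple of the three component periods, which is why it can look irregular over a short sample.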
|
<python><arrays><list><numpy><append>
|
2023-04-19 07:28:47
| 1
| 7,224
|
oppressionslayer
|
76,051,674
| 2,793,602
|
Frequency encoding - should I make dummy columns?
|
<p>I am busy prepping data to train a machine learning model. Many of the features can work well with one-hot encoding. For one feature, the frequency is related to the target variable. I have been reading up a bit about frequency encoding and everything I find has to do with replacing the category with the frequency, like so:</p>
<pre><code>CategoryName
------------
Blue
Blue
Blue
Yellow
Yellow
Red
</code></pre>
<p>Turning into:</p>
<pre><code>CategoryName
------------
3
3
3
2
2
1
</code></pre>
<p>If I had done one-hot, I would have had:</p>
<pre><code>Blue Yellow Red
1 0 0
1 0 0
1 0 0
0 1 0
0 1 0
0      0       1
</code></pre>
<p>All of the above may relate to a single row in the data I want to join it to, so I was wondering if this would be useful:</p>
<pre><code>Blue Yellow Red
3 2 1
</code></pre>
<p>If so, is there a simple way to do it, like <code>get_dummies()</code>?</p>
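Using the example data above, a sketch of both layouts: `value_counts` gives the frequencies, `map` puts them in place of the category, and a transpose of the counts frame yields the single-row "counts as dummy columns" shape asked about.

```python
# Frequency encoding plus the one-row wide layout, from the same counts.
import pandas as pd

df = pd.DataFrame({"CategoryName": ["Blue", "Blue", "Blue",
                                    "Yellow", "Yellow", "Red"]})
counts = df["CategoryName"].value_counts()

# frequency encoding in place of the category:
df["CategoryFreq"] = df["CategoryName"].map(counts)

# single-row wide layout (Blue/Yellow/Red as columns, counts as values):
wide = counts.to_frame().T.reset_index(drop=True)
```

So there is no `get_dummies()`-style helper needed: the transpose of the counts is already the wide form, and it joins onto a one-row record directly.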
|
<python><pandas><machine-learning>
|
2023-04-19 07:13:34
| 2
| 457
|
opperman.eric
|
76,051,470
| 10,967,961
|
Perform inner_join() of R in Python
|
<p>I have a pandas DataFrame <code>Network</code> with a network structure like this:</p>
<pre><code>{'Sup': {0: 1002000157,
1: 1002000157,
2: 1002000157,
3: 1002000157,
4: 1002000157,
5: 1002000157,
6: 1002000157,
7: 1002000157,
8: 1002000157,
9: 1002000157,
10: 1002000157,
11: 1002000157,
12: 1002000157,
13: 1002000382,
14: 1002000382,
15: 1002000382,
16: 1002000382,
17: 1002000382,
18: 1002000382,
19: 1002000382,
20: 1002000382,
21: 1002000382,
22: 1002000382,
23: 1002000382,
24: 1002000382,
25: 1002000382,
26: 1002000382,
27: 1002000382,
28: 1002000382,
29: 1002000382},
'Cust': {0: 1002438313,
1: 8039296054,
2: 9003188096,
3: 14900070991,
4: 17005234747,
5: 18006860724,
6: 28000286091,
7: 29009623382,
8: 39000007702,
9: 39004420023,
10: 46000088397,
11: 50000063751,
12: 7000090017,
13: 1900120936,
14: 1900779883,
15: 2000013994,
16: 2001222824,
17: 2003032125,
18: 2900121723,
19: 2900197555,
20: 2902742641,
21: 3000101113,
22: 3000195031,
23: 3000318054,
24: 3900091301,
25: 3911084436,
26: 4900112325,
27: 5900720933,
28: 7000001703,
29: 8000004881}}
</code></pre>
<p>I would like to reproduce this R command (possibly without kernel interrupting) in Python:</p>
<pre><code>NodesSharingSupplier <- inner_join(Network, Network, by=c('Sup'='Sup'))
</code></pre>
<p>This is an SQL-style inner join, thus I fear that it cannot be performed simply with an inner merge on Sup in Python.</p>
<p>How do I reproduce it in Python?</p>
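dplyr's `inner_join` corresponds directly to pandas' `merge` with `how="inner"`, including for a self-join on `Sup`. A sketch on a three-row toy frame (rather than the full dict above):

```python
# Self-join on "Sup": the pandas analogue of
# inner_join(Network, Network, by = c("Sup" = "Sup")).
import pandas as pd

network = pd.DataFrame({
    "Sup":  [1002000157, 1002000157, 1002000382],
    "Cust": [1002438313, 8039296054, 1900120936],
})

nodes_sharing_supplier = network.merge(
    network, on="Sup", how="inner", suffixes=("_x", "_y")
)
```

Each supplier group of size n produces n² rows (here 2² + 1² = 5), exactly as in R; that quadratic blow-up, not the join semantics, is the likely cause of a kernel interrupt on large groups.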
|
<python><join><inner-join>
|
2023-04-19 06:46:53
| 2
| 653
|
Lusian
|
76,051,467
| 12,118,546
|
Valid Python line with undefined variable and annotation
|
<p>I have come across <a href="https://pythonetc.orsinium.dev/posts/isinstance.html" rel="nofollow noreferrer">a post</a> with an undefined variable name carrying a type annotation. The line is valid. What does it do, and what are possible usages?</p>
<pre class="lang-py prettyprint-override"><code>>>> x: int
>>> x
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'x' is not defined
</code></pre>
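A minimal demonstration of what the line does at module scope (inside a function the annotation would not even be evaluated): the statement records an annotation without binding the name.

```python
# "x: int" at module scope stores an annotation but assigns no value.
x: int

bound = "x" in globals()               # False: the name was never assigned
annotation = __annotations__.get("x")  # ...but the annotation was recorded

# A practical use: declaring the type of a name that is assigned later or
# conditionally, so type checkers and dataclass-style tools can see it.
```

This is PEP 526 behavior: the annotation goes into the module's `__annotations__` mapping, while a plain `x` still raises `NameError` until a value is assigned.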
|
<python><python-typing>
|
2023-04-19 06:46:03
| 1
| 4,251
|
Roman Pavelka
|
76,051,358
| 13,403,510
|
Accessing month data in pandas
|
<p>My data looks like the following:</p>
<pre><code>Month Target Efficiency Actual
Aug-18 95% 0.4500
Sep-18 95% 0.3000
</code></pre>
<p>I'm running pandas.
When I run:</p>
<pre><code>df['Efficiency Actual']
</code></pre>
<p>I get correct output.</p>
<p>But when I run:</p>
<pre><code>df['Month']
</code></pre>
<p>I get an error: <code>KeyError: 'Month'</code>.</p>
<p>How can this be fixed? Kindly looking for help.</p>
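A hedged guess, since the real file isn't shown: exported headers often carry stray whitespace, so the column may actually be named <code>"Month "</code>. A sketch reproducing and fixing that symptom:

```python
# Toy frame where the header is "Month " with a trailing space, so df['Month']
# raises KeyError even though other columns resolve fine.
import pandas as pd

df = pd.DataFrame({"Month ": ["Aug-18", "Sep-18"],
                   "Target": ["95%", "95%"],
                   "Efficiency Actual": [0.4500, 0.3000]})

print(df.columns.tolist())           # inspecting this reveals the stray space
df.columns = df.columns.str.strip()  # normalize every header
months = df["Month"]                 # now resolves
```

Printing `df.columns.tolist()` is the quickest diagnostic; stripping all headers once after reading the file prevents the problem for every column.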
|
<python><pandas><data-analysis>
|
2023-04-19 06:30:08
| 0
| 1,066
|
def __init__
|
76,051,317
| 7,766,024
|
Is there any way to add Google Sheet formatting to an Excel file, then upload that to Google Sheets?
|
<p>I have multiple CSV files that I want to combine into one Excel file that has multiple sheets (one sheet for each CSV file).</p>
<p>I want to use some custom formatting like borders, conditional formatting for colors and certain cells, etc. Right now I'm downloading the CSV files separately from my server, importing them individually from the sheet, and then going through conditional formatting by hand.</p>
<p>What I was hoping to do is to accomplish all of this via code, have one final file ready, and then import that to the sheet so that the information appears as desired.</p>
<p>As of now it seems like the only way for me to do that is via the <a href="https://developers.google.com/sheets/api/guides/concepts" rel="nofollow noreferrer">Google Sheets API</a>, but I was wondering if there was a way to do that without having to use the API.</p>
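One hedged sketch of the "all in code" step, using openpyxl (an assumption — the question names no library): one sheet per CSV, a border on every cell, and a simple color-scale rule. The sheet names and CSV contents are stand-ins. Google Sheets generally preserves borders and fills when importing an .xlsx; whether a given conditional-formatting rule survives the import is worth verifying on a sample file.

```python
# Combine CSV blobs into one workbook with borders and a color-scale rule.
import csv, io
from openpyxl import Workbook
from openpyxl.styles import Border, Side
from openpyxl.formatting.rule import ColorScaleRule

csv_blobs = {"sheet1": "a,b\n1,2\n3,4\n"}  # stand-in for the real CSV files

wb = Workbook()
wb.remove(wb.active)  # drop the default empty sheet
thin = Side(style="thin")
for name, blob in csv_blobs.items():
    ws = wb.create_sheet(title=name)
    for row in csv.reader(io.StringIO(blob)):
        ws.append(row)
    for row_cells in ws.iter_rows():
        for cell in row_cells:
            cell.border = Border(left=thin, right=thin, top=thin, bottom=thin)
    ws.conditional_formatting.add(
        "A2:B3",
        ColorScaleRule(start_type="min", start_color="FFAA0000",
                       end_type="max", end_color="FF00AA00"))
wb.save("combined.xlsx")
```

This keeps the Sheets API out of the loop entirely: the finished .xlsx is imported once, formatting included, instead of being formatted by hand afterwards.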
|
<python><google-sheets>
|
2023-04-19 06:24:55
| 1
| 3,460
|
Sean
|
76,051,277
| 9,525,238
|
numpy Mask 2d array rows by ranges from 1d array
|
<p>I have this 1d array which represents the ranges of valid values.</p>
<pre><code>ranges = np.array([1, 2, 0])
</code></pre>
<p>And I have multiple 2D arrays with values, which I want to mask by index based on the ranges above.</p>
<pre><code>matrixes = np.array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
</code></pre>
<p>So each value in "ranges" represents the number of rows that have valid values in "matrixes" (the columns correspond to each-other), so the masking would return something like:</p>
<pre><code> ([[True, True, False],
[False, True, False],
[False, False, False]])
</code></pre>
<p>The first column has 1 valid value (the first row);
the second column has 2 valid values, etc.</p>
<p>I went through several other topics but could not figure out how to do it,
ex: <a href="https://stackoverflow.com/questions/38193958/how-to-properly-mask-a-numpy-2d-array">How to properly mask a numpy 2D array?</a></p>
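A broadcasting sketch using the exact arrays from the question: comparing a column of row indices against the per-column valid counts yields the mask in one expression.

```python
# Row index i is valid in column j exactly when i < ranges[j].
import numpy as np

ranges = np.array([1, 2, 0])
matrixes = np.array([[0, 1, 2],
                     [3, 4, 5],
                     [6, 7, 8]])

mask = np.arange(matrixes.shape[0])[:, None] < ranges  # shape (3, 1) vs (3,)
masked = np.where(mask, matrixes, 0)  # or np.ma.masked_array(matrixes, ~mask)
```

The `(rows, 1)` index column broadcasts against the `(columns,)` ranges vector, so the same `mask` can be reused for every 2D array of matching shape.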
|
<python><numpy>
|
2023-04-19 06:17:54
| 1
| 413
|
Andrei M.
|
76,051,250
| 13,939,843
|
How to remove unused packages installed with pip in Python?
|
<p>I have a Python project and have installed hundreds of packages using pip over time. However, I have since improved my code and suspect that some of these packages are no longer in use. I want to remove all of the unused packages.</p>
<p>I am already familiar with the <code>pip uninstall package-name</code> command and have seen suggestions to use <code>pip-autoremove package-name</code>, but that is not practical in my case: there are hundreds of unused packages, and I do not want to uninstall each one manually.</p>
<p>Additionally, I do not want to accidentally remove packages that are still being used in my project.</p>
<p>I am wondering if there is a better solution, or a way to scan the Python files for imports and detect unused packages. Any recent or alternative approaches would be highly appreciated. Thank you.</p>
<p>Here's my requirements.txt with thousands of packages:
<a href="https://i.sstatic.net/tnDZY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tnDZY.png" alt="enter image description here" /></a></p>
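A hedged sketch of the "scan the files for imports" idea: collect top-level imports statically with `ast` and compare that set against the declared requirements. Mapping import names to distribution names is only approximate (e.g. `sklearn` vs `scikit-learn`), so treat the result as a review list, not something to uninstall blindly.

```python
# Statically collect the top-level names a project imports.
import ast
from pathlib import Path

def imported_names(source):
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])  # skip relative imports
    return names

def project_imports(root):
    found = set()
    for path in Path(root).rglob("*.py"):
        found |= imported_names(path.read_text(encoding="utf-8"))
    return found
```

Existing tools automate this end to end: `pipreqs` regenerates a requirements file from the imports actually used, and `deptry` reports declared-but-unused dependencies; either is likely less error-prone than the hand-rolled scan above.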
|
<python><python-3.x><requirements.txt>
|
2023-04-19 06:13:04
| 1
| 357
|
iihsan
|
76,051,233
| 2,252,356
|
Python Multilevel Treemap
|
<p>I want a multilevel treemap plot of the car make, type and count dataset shown below. I found a few packages using plotly.express and matplotlib-extra, but they did not help.</p>
<p>I want to have customized labels in the following format:</p>
<p>Squarify does not support multilevel treemaps.</p>
<pre><code>#labels = [f'{make}\n{count}' for make, count in zip(car_cmt_df['Make'].tolist(), car_cmt_df['Type'].tolist())]
</code></pre>
<p>Any suggestion is appreciated.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Make</th>
<th>Type</th>
<th>count</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Audi</td>
<td>Convertible</td>
<td>79</td>
</tr>
<tr>
<td>1</td>
<td>Audi</td>
<td>Coupe</td>
<td>210</td>
</tr>
<tr>
<td>2</td>
<td>Audi</td>
<td>Hatchback</td>
<td>41</td>
</tr>
<tr>
<td>3</td>
<td>Audi</td>
<td>Sedan</td>
<td>216</td>
</tr>
<tr>
<td>4</td>
<td>Audi</td>
<td>Wagon</td>
<td>43</td>
</tr>
<tr>
<td>5</td>
<td>BMW</td>
<td>Convertible</td>
<td>162</td>
</tr>
<tr>
<td>6</td>
<td>BMW</td>
<td>Coupe</td>
<td>86</td>
</tr>
<tr>
<td>7</td>
<td>BMW</td>
<td>SUV</td>
<td>123</td>
</tr>
<tr>
<td>8</td>
<td>BMW</td>
<td>Sedan</td>
<td>118</td>
</tr>
<tr>
<td>9</td>
<td>BMW</td>
<td>Wagon</td>
<td>42</td>
</tr>
<tr>
<td>10</td>
<td>Chevrolet</td>
<td>Convertible</td>
<td>85</td>
</tr>
<tr>
<td>11</td>
<td>Chevrolet</td>
<td>Coupe</td>
<td>45</td>
</tr>
<tr>
<td>12</td>
<td>Chevrolet</td>
<td>Crew Cab</td>
<td>85</td>
</tr>
<tr>
<td>13</td>
<td>Chevrolet</td>
<td>Extended Cab</td>
<td>87</td>
</tr>
<tr>
<td>14</td>
<td>Chevrolet</td>
<td>Regular Cab</td>
<td>82</td>
</tr>
<tr>
<td>15</td>
<td>Chevrolet</td>
<td>SS</td>
<td>119</td>
</tr>
<tr>
<td>16</td>
<td>Chevrolet</td>
<td>SUV</td>
<td>81</td>
</tr>
<tr>
<td>17</td>
<td>Chevrolet</td>
<td>Sedan</td>
<td>171</td>
</tr>
<tr>
<td>18</td>
<td>Chevrolet</td>
<td>Van</td>
<td>65</td>
</tr>
<tr>
<td>19</td>
<td>Chevrolet</td>
<td>Z06</td>
<td>38</td>
</tr>
<tr>
<td>20</td>
<td>Chevrolet</td>
<td>ZR1</td>
<td>47</td>
</tr>
<tr>
<td>21</td>
<td>Fisker</td>
<td>Sedan</td>
<td>44</td>
</tr>
<tr>
<td>22</td>
<td>HUMMER</td>
<td>Crew Cab</td>
<td>83</td>
</tr>
<tr>
<td>23</td>
<td>Ram</td>
<td>Minivan</td>
<td>41</td>
</tr>
<tr>
<td>24</td>
<td>Tesla</td>
<td>Sedan</td>
<td>39</td>
</tr>
</tbody>
</table>
</div>
<p>Please find image below
<a href="https://i.sstatic.net/hib67.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hib67.png" alt="enter image description here" /></a></p>
|
<python><machine-learning>
|
2023-04-19 06:10:39
| 1
| 2,008
|
B L Praveen
|
76,051,126
| 47,936
|
Nearest neighbour(closest point) search using vtkKdTree
|
<p>I am using vtk's kdtree to search closest point, here is a simple example that, to my surprise, returned a value that was not what I was expecting</p>
<pre><code>import vtk
points = vtk.vtkPoints()
points.InsertNextPoint(1, 0, 0)
points.InsertNextPoint(0, 1, 0)
points.InsertNextPoint(0, 0, 1)
kdtree = vtk.vtkKdTree()
kdtree.BuildLocatorFromPoints(points)
dist=vtk.reference(0.0)
p = kdtree.FindClosestPoint(0,0,10,dist)
print(p,dist)
</code></pre>
<p>The printed result is <code>0 4.0</code>, while the value I expect is <code>2 81</code> (the returned dist is the square of the real distance).</p>
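Independent of VTK, the expected answer can be verified by brute force with NumPy (a sanity-check sketch, not a fix for the locator call):

```python
# Brute-force nearest neighbour over the three points from the question.
import numpy as np

points = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
query = np.array([0.0, 0.0, 10.0])

d2 = ((points - query) ** 2).sum(axis=1)  # squared distances to the query
closest = int(d2.argmin())
```

So point 2 at squared distance 81 is indeed correct. A commonly suggested alternative for point queries is `vtkKdTreePointLocator` built from a dataset rather than `vtkKdTree.FindClosestPoint`, but that is a suggestion to verify, not something tested here.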
|
<python><algorithm><vtk><kdtree>
|
2023-04-19 05:47:08
| 1
| 7,908
|
zhongshu
|
76,051,046
| 15,632,586
|
What should I do to load all-mpnet-base-v2 model from sentence-transformers?
|
<p>I am trying to run the <code>all-mpnet-base-v2</code> model with <code>sentence-transformers</code> 1.2.1 from my Anaconda framework (in Python 3.8). My first attempt to load the model is:</p>
<pre><code>bert_model = SentenceTransformer('all-mpnet-base-v2')
</code></pre>
<p>However, when I run this, after downloading the model from the internet, Anaconda returned this error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\sentence_transformers\SentenceTransformer.py", line 115, in __init__
module = module_class.load(os.path.join(model_path, module_config['path']))
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\sentence_transformers\models\Transformer.py", line 115, in load
return Transformer(model_name_or_path=input_path, **config)
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\sentence_transformers\models\Transformer.py", line 29, in __init__
config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\transformers\configuration_auto.py", line 275, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'mpnet'
</code></pre>
<p>I have tried another prompt with the direct link to <code>all-mpnet-base-v2</code> like this:</p>
<pre><code> bert_model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
</code></pre>
<p>However, this time Anaconda tried to go to the download link, and I get this error:</p>
<pre><code>Exception when trying to download http://sbert.net/models/sentence-transformers/all-mpnet-base-v2.zip. Response 404
SentenceTransformer-Model http://sbert.net/models/sentence-transformers/all-mpnet-base-v2.zip not found. Try to create it from scratch
Try to create Transformer Model sentence-transformers/all-mpnet-base-v2 with mean pooling
Traceback (most recent call last):
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\sentence_transformers\SentenceTransformer.py", line 79, in __init__
http_get(model_url, zip_save_path)
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\sentence_transformers\util.py", line 242, in http_get
req.raise_for_status()
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\requests\models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/v0.2/sentence-transformers/all-mpnet-base-v2.zip
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\transformers\configuration_utils.py", line 353, in get_config_dict
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\sentence_transformers\SentenceTransformer.py", line 95, in __init__
transformer_model = Transformer(model_name_or_path)
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\sentence_transformers\models\Transformer.py", line 29, in __init__
config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\transformers\configuration_auto.py", line 272, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\Users\hoang\AppData\Local\R-MINI~1\envs\py38\lib\site-packages\transformers\configuration_utils.py", line 362, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'sentence-transformers/all-mpnet-base-v2'. Make sure that:
- 'sentence-transformers/all-mpnet-base-v2' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sentence-transformers/all-mpnet-base-v2' is the correct path to a directory containing a config.json file
</code></pre>
<p>So, what should I do to get <code>all-mpnet-base-v2</code> running with Anaconda?</p>
|
<python><huggingface-transformers><sentence-transformers>
|
2023-04-19 05:33:51
| 0
| 451
|
Hoang Cuong Nguyen
|
76,051,008
| 128,618
|
how to pivot table from excel to pandas
|
<p>My CSV contains data like this (df):</p>
<pre><code>item_group item_code total_qty total_amount cost_center
0 Drink IC06-1P 1 3.902 Cafe II
1 Drink IC09-1 1 2.927 Cafe II
2 BreakFast FS04-2 1 6.463 Cafe II
3 Drink IC08-1 1 2.927 Cafe II
4 Drink DT05-1 1 2.561 Cafe II
.. ... ... ... ... ...
79 Standard Food FS01-2 12 83.412 Cafe II
80 Standard Food FS01-1 13 101.465 Cafe II
81 Drink IC05-1 14 54.628 Cafe I
82 Standard Food FS01-2 35 243.285 Cafe I
83 Standard Food FS01-1 44 343.420 Cafe I
</code></pre>
<p>Here I can pivot in Excel</p>
<p><a href="https://i.sstatic.net/ymn2m.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ymn2m.jpg" alt="enter image description here" /></a></p>
<p>Could anyone please guide me on how to do the same in pandas?</p>
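<p>For reference, this is the direction I've been experimenting with. It's only a sketch: the index/columns/values choices below are my guesses at what the Excel pivot is doing, and the tiny dataframe is a stand-in for my CSV.</p>

```python
import pandas as pd

# small stand-in for the real csv
df = pd.DataFrame({
    "item_group": ["Drink", "Drink", "BreakFast", "Drink"],
    "item_code": ["IC06-1P", "IC09-1", "FS04-2", "IC06-1P"],
    "total_qty": [1, 1, 1, 2],
    "total_amount": [3.902, 2.927, 6.463, 7.804],
    "cost_center": ["Cafe II", "Cafe I", "Cafe II", "Cafe I"],
})

# rows = item_group/item_code, columns = cost_center, values summed per cell
pivot = pd.pivot_table(
    df,
    index=["item_group", "item_code"],
    columns="cost_center",
    values=["total_qty", "total_amount"],
    aggfunc="sum",
    fill_value=0,
)
print(pivot)
```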
|
<python><python-3.x><pandas><pivot-table>
|
2023-04-19 05:26:20
| 1
| 21,977
|
tree em
|
76,050,948
| 2,998,077
|
pyttsx3 produced mp3 not recognizable by pydub
|
<p>When using pyttsx3 to produce MP3 audio files, it seems the resulting MP3 file is not recognizable by pydub.</p>
<p>pyttsx3: text to speech package
pydub: audio handling package</p>
<p>The error message contains many "Header missing" lines.</p>
<p>The code is below.</p>
<p>What went wrong, and how can it be corrected?</p>
<pre><code>import pyttsx3
engine = pyttsx3.init("sapi5")
engine.setProperty('rate', 130) # number higher the faster
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[0].id)
texts = "a quick sentence into mp3"
engine.save_to_file(texts, "C:\\TEM\\temp audio.mp3")
engine.runAndWait()
# the produced file plays no problem by media player in Windows.
# here to read the mp3 audio file produced
from pydub import AudioSegment
sound1 = AudioSegment.from_file(r"C:\\TEM\\temp audio.mp3", format="mp3") # error pops here
</code></pre>
<p>Error message is here:</p>
<pre><code>Traceback (most recent call last):
File "C:\My Documents\script.py", line 17, in <module>
sound1 = AudioSegment.from_file(r"C:\\TEM\\temp audio.mp3", format="mp3")
File "C:\Python38\lib\site-packages\pydub\audio_segment.py", line 773, in from_file
raise CouldntDecodeError(
pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 1
Output from ffmpeg/avlib:
ffmpeg version 2023-04-17-git-65e537b833-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 58. 6.100 / 58. 6.100
libavcodec 60. 9.100 / 60. 9.100
libavformat 60. 4.101 / 60. 4.101
libavdevice 60. 2.100 / 60. 2.100
libavfilter 9. 5.100 / 9. 5.100
libswscale 7. 2.100 / 7. 2.100
libswresample 4. 11.100 / 4. 11.100
libpostproc 57. 2.100 / 57. 2.100
[mp3float @ 000002788275f6c0] Header missing
Last message repeated 31 times
[mp3 @ 00000278827328c0] Could not find codec parameters for stream 0 (Audio: mp3 (mp3float), 0 channels, fltp): unspecified frame size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, mp3, from 'C:\\TEM\\temp audio.mp3':
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Audio: mp3, 0 channels, fltp
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s16le (native))
Press [q] to stop, [?] for help
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[mp3float @ 000002788277ba00] Header missing
Error while decoding stream #0:0: Invalid data found when processing input
[graph_0_in_0_0 @ 000002788277a640] Value inf for parameter 'time_base' out of range [0 - 2.14748e+09]
Last message repeated 1 times
[graph_0_in_0_0 @ 000002788277a640] Error setting option time_base to value 1/0.
[graph_0_in_0_0 @ 000002788277a640] Error applying generic filter options.
Error reinitializing filters!
Error while filtering: Result too large
[aost#0:0/pcm_s16le @ 000002788277fc00] Finishing stream without any data written to it.
[graph_0_in_0_0 @ 000002788277eb80] Value inf for parameter 'time_base' out of range [0 - 2.14748e+09]
Last message repeated 1 times
[graph_0_in_0_0 @ 000002788277eb80] Error setting option time_base to value 1/0.
[graph_0_in_0_0 @ 000002788277eb80] Error applying generic filter options.
[aost#0:0/pcm_s16le @ 000002788277fc00] Error configuring filter graph
Conversion failed!
</code></pre>
|
<python><audio><mp3><pydub><pyttsx3>
|
2023-04-19 05:13:30
| 2
| 9,496
|
Mark K
|
76,050,912
| 9,218,680
|
pivot table in pandas with a subtraction function on all possible combinations
|
<p>I have a dataframe that looks something like</p>
<pre><code>Strike. Strike2 Price
15000. 15000. 100
15100. 15100. 150
15200. 15200. 170
</code></pre>
<p>I want to create a matrix like df such that the output looks like</p>
<pre><code>Strike. 15000. 15100. 15200.
15000. 0. 50. 120
15100. -50 0. 20
15200. -170. -20. 0
</code></pre>
<p>It's essentially computing the difference in prices at all strike values.
Can you help me achieve this efficiently, please?</p>
<p>I think I need to use <code>aggfunc</code>, but somehow I'm not getting the desired result.</p>
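<p>For concreteness, the closest I've got is building the matrix directly with numpy instead of <code>pivot_table</code>. This assumes the intended rule is Price[column] minus Price[row] (my expected output above may contain a typo or two):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Strike": [15000, 15100, 15200],
                   "Price": [100, 150, 170]})

p = df["Price"].to_numpy()
# diff[i, j] = Price[j] - Price[i]
diff = np.subtract.outer(p, p).T
out = pd.DataFrame(diff, index=df["Strike"], columns=df["Strike"])
print(out)
```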
|
<python><pandas><pivot>
|
2023-04-19 05:05:55
| 1
| 2,510
|
asimo
|
76,050,901
| 14,173,197
|
Haystack: save InMemoryDocumentStore and load it in retriever later to save embedding generation time
|
<p>I am using InMemory Document Store and an Embedding retriever for the Q/A pipeline.</p>
<pre><code>from haystack.document_stores import InMemoryDocumentStore
document_store = InMemoryDocumentStore(embedding_dim =768,use_bm25=True)
document_store.write_documents(docs_processed)
from haystack.nodes import EmbeddingRetriever
retriever_model_path ='downloaded_models\local\my_local_multi-qa-mpnet-base-dot-v1'
retriever = EmbeddingRetriever(document_store=document_store,
embedding_model=retriever_model_path,
use_gpu=True)
document_store.update_embeddings(retriever=retriever)
</code></pre>
<p>As the embedding takes a while, I want to save the embeddings and later reuse them in the retriever (on the REST API side). I don't want to use Elasticsearch or FAISS. How can I achieve this using the In-Memory store? I tried to use pickle, but there is no way to store the embeddings. Also, the embedding retriever has no load function.</p>
<p>I tried to do the following:</p>
<pre><code>with open("document_store_res.pkl", "wb") as f:
pickle.dump(document_store.get_all_documents(), f)
</code></pre>
<p>In the REST API, I am trying to load the document store:</p>
<pre><code>def reader_retriever():
# Load the pickled model
with open(os.path.join(settings.BASE_DIR,'\downloaded_models\document_store_res.pkl'), 'rb') as f:
document_store_new = pickle.load(f)
retriever_model_path = os.path.join(settings.BASE_DIR, '\downloaded_models\my_local_multi-qa-mpnet-base-dot-v1')
retriever = EmbeddingRetriever(document_store=document_store_new,
embedding_model=retriever_model_path,
use_gpu=True)
document_store_new.update_embeddings(retriever=retriever,
batch_size=100)
farm_reader_path = os.path.join(settings.BASE_DIR, '\downloaded_models\my_local_bert-large-uncased-whole-word-masking-squad2')
reader = FARMReader(model_name_or_path=farm_reader_path,
use_gpu=True)
return reader, retriever
</code></pre>
|
<python><nlp><haystack>
|
2023-04-19 05:04:30
| 2
| 323
|
sherin_a27
|
76,050,717
| 7,133,942
|
How to use the at() method in Pyomo
|
<p>I am using Pyomo and I have the following lines</p>
<pre><code>outputVariables_list = [model.param1, model.variable1]
optimal_values_list = [[pyo.value(model_item[key]) for key in model_item] for model_item in outputVariables_list]
</code></pre>
<p>When I run it I get a warning that I don't understand:</p>
<pre><code>WARNING: DEPRECATED: Using __getitem__ to return a set value from its
(ordered) position is deprecated. Please use at() (deprecated in 6.1,
will be removed in 7.0)
</code></pre>
<p>I tried the following line but this led to an error:</p>
<pre><code>optimal_values_list = [[pyo.at(model_item[key]) for key in model_item] for model_item in outputVariables_list]
</code></pre>
<p>Further, I tried <code>pyo.value(model_item.at[key])</code> and <code>pyo.value(model_item.at(key))</code>, and both lead to <code>AttributeError: 'IndexedParam' object has no attribute 'at'</code>.</p>
<p>How can I use the <code>at()</code> method recommended by Pyomo? What I don't understand is that Pyomo tells me to use "at" instead, but this does not work, as I get errors when using it. This is a quite confusing recommendation. Any idea how I can integrate the <code>at()</code> method into my code?</p>
<p><strong>Update</strong>: the full list of pyomo components that I use can be seen in the following line (the name indicates if it is a parameter, variable or set): <code>outputVariables_list_BT2 = [model.param_helpTimeSlots_BT2, model.variable_heatGenerationCoefficient_SpaceHeating_BT2, model.variable_heatGenerationCoefficient_DHW_BT2, model.variable_help_OnlyOneStorage_BT2, model.variable_temperatureBufferStorage_BT2, model.variable_usableVolumeDHWTank_BT2, model.variable_electricalPowerTotal_BT2, model.variable_pvGeneration_BT2, model.variable_windPowerAssigned_BT2, model.param_heatDemand_In_W_BT2, model.param_DHWDemand_In_W_BT2, model.param_electricalDemand_In_W_BT2, model.param_pvGenerationNominal_BT2, model.param_outSideTemperature_In_C, model.param_windAssignedNominal_BT2, model.param_COPHeatPump_SpaceHeating_BT2, model.param_COPHeatPump_DHW_BT2, model.param_electricityPrice_In_Cents, model.set_timeslots]</code></p>
|
<python><mathematical-optimization><pyomo>
|
2023-04-19 04:16:14
| 1
| 902
|
PeterBe
|
76,050,699
| 6,702,598
|
Pytest: How to specify package root directory
|
<p>This is my project structure (simplified):</p>
<pre><code>my-project
├── __init__.py
├── my_code.py
└── tests
├── __init__.py
└── test_foo.py
</code></pre>
<p>When I run pytest (with <code>pytest .</code> from within <em>my-project</em>, or <code>pytest my-project</code> from its parent directory), Python assumes the <code>tests</code> directory is the root directory. However, I need Python to identify <code>my-project</code> as the root directory. How do I do that?</p>
<h4>Background:</h4>
<p><em>test_foo.py</em> uses a relative import (<code>from ..my_code import *</code>). If the <code>tests</code> directory is the root directory, Python will not be able to resolve the import. For structural reasons (1) I cannot use absolute paths. The relative import raises the error <code>ImportError: attempted relative import beyond top-level package</code>.</p>
<p>(1)
<code>my-project</code> is maintained independently but is included (as a subfolder) in other projects. If the paths were absolute, they would not be resolvable in the other projects. I use this structure instead of packaging, because packaging involves some management that I need to avoid.</p>
|
<python><pytest>
|
2023-04-19 04:12:15
| 2
| 3,673
|
DarkTrick
|
76,050,491
| 1,302,465
|
How to dynamically change loss function between epochs in tensorflow model
|
<p>I want to change the loss function every other epoch, or every 5 epochs. I tried the loss-wrapper method suggested in <a href="https://stackoverflow.com/questions/55406146/resume-training-with-different-loss-function/55978428#55978428">this answer</a>. It didn't work: the <strong>current_epoch</strong> value stays at its initial value forever, and the updated value never reaches the loss wrapper function, even though the variable is updated at the end of every epoch in the <strong>on_epoch_end</strong> callback.</p>
<p>I also tried the <strong>model.add_loss</strong> method in the <strong>on_epoch_end</strong> callback. This is also not working: the model only uses the loss function that was passed to <strong>model.compile</strong>; it ignores the loss function passed to <strong>model.add_loss</strong>.</p>
|
<python><tensorflow><keras><loss-function>
|
2023-04-19 03:09:38
| 1
| 477
|
Keerthi Jayarajan
|
76,050,236
| 2,961,927
|
Python equivalent to R's layout() for arbitrary grids
|
<p>What's Python's equivalent to R's <code>layout()</code> function which can create a plotting grid of any shape?</p>
<p>Consider the following 3 figures made by <code>layout()</code>:</p>
<pre class="lang-r prettyprint-override"><code>set.seed(123)
layout(t(matrix(c(
1, 1, 2, 2, 3, 3,
4, 5, 5, 6, 6, 7
), ncol = 2)), widths = rep(1, 6), heights = rep(1, 2))
par(mar = c(4, 5, 1, 1), family = "serif")
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 1
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 2
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 3
plot.new() # 4
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 5
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 6
</code></pre>
<p><a href="https://i.sstatic.net/9Rg9a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Rg9a.png" alt="enter image description here" /></a></p>
<pre class="lang-r prettyprint-override"><code>set.seed(123)
layout(t(matrix(c(
1, 1, 2, 2, 3, 3,
4, 4, 4, 5, 5, 5
), ncol = 2)), widths = rep(1, 6), heights = rep(1, 2))
par(mar = c(4, 5, 1, 1), family = "serif")
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 1
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 2
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 3
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 4
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 5
</code></pre>
<p><a href="https://i.sstatic.net/imQWa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/imQWa.png" alt="enter image description here" /></a></p>
<pre class="lang-r prettyprint-override"><code>set.seed(123)
layout(t(matrix(c(
1, 1, 2, 2, 3, 3,
4, 4, 4, 4, 5, 5
), ncol = 2)), widths = rep(1, 6), heights = rep(1, 2))
par(mar = c(4, 5, 1, 1), family = "serif")
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 1
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 2
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 3
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 4
plot(x = runif(30), y = runif(30), cex.axis = 1.5,
bty = "L", xlab = "", ylab = "", las = 1) # 5
</code></pre>
<p><a href="https://i.sstatic.net/cwCBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwCBM.png" alt="enter image description here" /></a></p>
<p>In Python, how to make figures of constituent plots with exactly the same layout as the above?</p>
|
<python><r><matplotlib><equivalent>
|
2023-04-19 02:03:39
| 1
| 1,790
|
user2961927
|
76,049,984
| 11,065,874
|
What is the difference between calling dict of a Pydantic model and passing it to the jsonable_encoder function?
|
<p>I have this Pydantic model:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class Student(BaseModel):
name: str
id: str
</code></pre>
<p>I see in the <a href="https://fastapi.tiangolo.com/advanced/response-directly/#using-the-jsonable_encoder-in-a-response" rel="nofollow noreferrer">FastAPI docs</a> that if we want to pass it the <code>JSONResponse</code>, we do it like this:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI
from fastapi.encoders import jsonable_encoder
from fastapi.responses import JSONResponse
app = FastAPI()
@app.get("/")
def get_a_specific_student():
s = Student(id="1", name="Alice")
status_code = 200
content = jsonable_encoder(s)
return JSONResponse(status_code=status_code, content=content)
</code></pre>
<p>We could instead do:</p>
<pre class="lang-py prettyprint-override"><code>@app.get("/")
def get_a_specific_student():
s = Student(id="1", name="Alice")
status_code = 200
content = s.dict()
return JSONResponse(status_code=status_code, content=content)
</code></pre>
<p>What is the difference between calling the <code>dict</code> method of a Pydantic object and passing it to <code>jsonable_encoder</code>?</p>
|
<python><fastapi><pydantic>
|
2023-04-19 00:48:52
| 1
| 2,555
|
Amin Ba
|
76,049,807
| 7,318,120
|
websockets.sync.client - No module named 'websockets.sync'
|
<p>I am looking at websockets and have tried the official docs here: <a href="https://pypi.org/project/websockets/" rel="nofollow noreferrer">https://pypi.org/project/websockets/</a></p>
<p>I do a <code>pip install websockets</code> to make sure that I have the latest module.</p>
<p>I then run the template code (from the official docs) below:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from websockets.sync.client import connect
def hello():
with connect("ws://localhost:8765") as websocket:
websocket.send("Hello world!")
message = websocket.recv()
print(f"Received: {message}")
hello()
</code></pre>
<p>But my linter throws two errors:</p>
<ol>
<li><code>import asyncio</code> is never used.</li>
<li><code>websockets.sync.client</code> throws <code>No module named 'websockets.sync'</code></li>
</ol>
<p>So, I tried changing to this:</p>
<pre class="lang-py prettyprint-override"><code>from websockets.client import connect
def hello():
with connect("ws://localhost:8765") as websocket:
websocket.send("Hello world!")
message = websocket.recv()
print(f"Received: {message}")
hello()
</code></pre>
<p>But got this error:</p>
<pre><code>Exception has occurred: TypeError
'Connect' object does not support the context manager protocol
</code></pre>
<p>So, what is the correct way of implementing a basic websocket (and are the official docs correct) ?</p>
|
<python><websocket>
|
2023-04-18 23:51:56
| 1
| 6,075
|
darren
|
76,049,760
| 3,792,360
|
Processing csv file via pandas
|
<p>This is my csv file contents</p>
<pre><code>1.1.1.Top Header
Attribute,Field Description,Table,Column, Filter
Quarter,Fiscal Current Quarter,tableA,col1,A=B
Blah, Blah, Blah, Blah, Blah
Blah, Blah, Blah, Blah, Blah
Blah, Blah, Blah, Blah, Blah
1.1.2.Next Level
Attribute,Field Description,Table,Column, Filter
Blah, Blah, Blah, Blah, Blah
</code></pre>
<p>I am using Python to extract the header information, the column names (Attribute, Field Description, Table, Column, Filter),
and finally the data. But it is not working as intended.</p>
<pre><code>import pandas as pd
# Load the CSV file
df = pd.read_csv('dd.csv')
# Fill NaN values with "NA"
df = df.fillna("NA")
# Find the row indexes of the header rows
header_indexes = df[df.iloc[:,0].str.match(r'^(\d+\.)+\d+\..*')].index.tolist()
# Check if there are any headers and data rows
if header_indexes and header_indexes[-1] < len(df) - 1:
# Extract header names and remove the numeric values
header_names = [df.iloc[i,0].split('.')[1:] for i in header_indexes]
header_names = [', '.join([h.strip() for h in header]) for header in header_names]
# Extract the column names
column_names = df.iloc[header_indexes[-1]+1,:].tolist()
print("Header Names:", header_names)
print("Column Names:", column_names)
else:
print("No headers or data rows found in the CSV file.")
</code></pre>
<p>I want the header names to be "Top Header" and "Next Level",
and the column names to be Attribute, Field Description, Table, Column, Filter,</p>
<p>and finally the data.
Somehow, pandas is not able to read the first line.</p>
|
<python><pandas>
|
2023-04-18 23:39:54
| 2
| 713
|
paddu
|
76,049,510
| 13,436,451
|
Size of seeds for numpy.random
|
<p>I want to run some code using <code>numpy.random</code> and keep track of what the seed is so that if the output is interesting, I can recreate and play around with that randomly generated instance. Therefore, I want the setup to involve something like</p>
<pre><code>import numpy as np
s = np.random.randint(10000000000)
print(s)
np.random.seed(s)
### remainder of code
</code></pre>
<p>so that the code is still running randomly, but I also have retained the seed <code>s</code>. The value <code>10000000000</code> was chosen arbitrarily; what is the appropriate scale for <code>numpy.random</code>'s seeding? e.g. are seeds all the same modulo 2^32?</p>
|
<python><numpy><random><random-seed>
|
2023-04-18 22:32:26
| 1
| 327
|
zjs
|
76,049,443
| 12,436,050
|
Convert dataframe to turtle format in python
|
<p>I am new to RDF and triples and I am looking for a way to load some triples into a triple store. I have a dataframe with the following columns.</p>
<pre><code> mesh_code skos_rel MDR_code
0 <http://id.nlm.nih.gov/mesh/D000012> <http://www.w3.org/2004/02/skos/core#exactMatch> <https://identifiers.org/meddra:10083851>
1 <http://id.nlm.nih.gov/mesh/D000026> <http://www.w3.org/2004/02/skos/core#exactMatch> <https://identifiers.org/meddra:10062935>
2 <http://id.nlm.nih.gov/mesh/D000030> <http://www.w3.org/2004/02/skos/core#exactMatch> <https://identifiers.org/meddra:10000230>
3 <http://id.nlm.nih.gov/mesh/D000038> <http://www.w3.org/2004/02/skos/core#exactMatch> <https://identifiers.org/meddra:10000269>
4 <http://id.nlm.nih.gov/mesh/D015823> <http://www.w3.org/2004/02/skos/core#exactMatch> <https://identifiers.org/meddra:10069408>
</code></pre>
<p>Is there a way to convert this dataframe to a turtle format, that can be loaded into a triple store like Blazegraph.</p>
<p>Any help is highly appreciated.</p>
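<p>The naive string concatenation I started from is below. It emits one "subject predicate object ." line per row, which I believe is valid N-Triples (and therefore valid Turtle), assuming the columns already contain the angle-bracketed IRIs as shown above:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "mesh_code": ["<http://id.nlm.nih.gov/mesh/D000012>"],
    "skos_rel": ["<http://www.w3.org/2004/02/skos/core#exactMatch>"],
    "MDR_code": ["<https://identifiers.org/meddra:10083851>"],
})

# one triple per row, terminated by " ."
triples = df["mesh_code"] + " " + df["skos_rel"] + " " + df["MDR_code"] + " ."
ttl = "\n".join(triples)
print(ttl)
```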
|
<python><rdf><turtle-rdf>
|
2023-04-18 22:21:30
| 1
| 1,495
|
rshar
|
76,049,388
| 14,294,527
|
How can I create a variable containing the numbers of each quadrant of a scatterplot?
|
<p>I have a scatterplot, just like the one below:</p>
<p><img src="https://www.econometrics-with-r.org/ITER_files/figure-html/unnamed-chunk-128-1.png" alt="graph" /></p>
<p>Let's say I want to create a new column, named QUADRANT, which contains the number of the grid cell each point falls in. For instance, if a point's y value is between 150 and 120 and its x value is between 0 and 20, it receives 1. If its y value is between 150 and 120 and its x value is between 20 and 40, it receives 2. I would continue like this until the whole grid is covered, or at least with y going from 0 to 150 and x going from 0 to 80, with those limits defined manually.</p>
<p>The only thing I could think of was using np.where(); however, I would have to write dozens of lines of code. I was hoping there was a smarter way of doing this.</p>
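<p>To make the mapping concrete, this is the kind of hand-rolled function I have in mind; the bin sizes and limits (x up to 80 in steps of 20, y up to 150 in steps of 30) are the manual ones I mentioned, and cells are numbered left-to-right starting from the top band:</p>

```python
import numpy as np

def quadrant(x, y, x_max=80, y_max=150, x_step=20, y_step=30):
    """1-based grid-cell number, counted left-to-right from the top band."""
    n_cols = x_max // x_step
    n_rows = y_max // y_step
    col = np.minimum(np.asarray(x) // x_step, n_cols - 1).astype(int)
    row = (n_rows - 1 - np.minimum(np.asarray(y) // y_step, n_rows - 1)).astype(int)
    return row * n_cols + col + 1

# e.g. a point with y in (120, 150] and x in [0, 20) lands in cell 1
print(quadrant(10, 130))
```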
|
<python><pandas><numpy><matplotlib><coordinates>
|
2023-04-18 22:07:31
| 1
| 307
|
dummyds
|
76,049,387
| 10,426,490
|
Unable to import test modules, Python Azure Function
|
<p>Seems like there has been a lot of movement with the <code>import</code> of tests and shared modules. I tried the <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python?tabs=asgi%2Capplication-level&pivots=python-mode-configuration#import-behavior" rel="nofollow noreferrer">documented methods</a>, but none of them work...</p>
<p>What am I missing?</p>
<p><strong>Scenario</strong>:</p>
<ul>
<li>Python Azure Function with the following directory structure:</li>
</ul>
<pre><code>image-tokenizer (project-root)
|_.venv
|_imagetokenizer (function itself)
|___init__.py
|_function.json
|_tests
|___init__.py
|_test_validate_url_param.py
|_host.json
|_local.settings.json
|_requirements.txt
</code></pre>
<p><a href="https://i.sstatic.net/8JJsw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8JJsw.png" alt="enter image description here" /></a></p>
<ul>
<li>I'm trying to run a test using <code>unittest</code></li>
<li>The function I want to test <em>inside</em>, the Azure Function code <code>main()</code>: <code>imagetokenizer/__init__.py</code>:</li>
</ul>
<pre><code> def validate_url_param(raw_url):
if raw_url is None:
raise ValueError("Missing required parameter 'url'.")
if not isinstance(raw_url, str):
raise ValueError("Parameter 'url' must be a string.")
try:
decoded_url = urllib.parse.unquote(raw_url)
except Exception as e:
raise ValueError(f"Parameter 'url' could not be decoded. Error: {e}")
storage_url_regex = re.compile(
r'^https://[a-z0-9]{3,}\.blob\.core\.windows\.net/.*$', re.IGNORECASE)
if not storage_url_regex.match(decoded_url):
raise ValueError("Parameter 'url' must be a valid Azure Storage URL.")
return decoded_url
...
</code></pre>
<ul>
<li>The test code for the above function: <code>tests/test_validate_url_param.py</code>:</li>
</ul>
<pre><code>import os
import unittest
from imagetokenizer import validate_url_param # THIS IS THE PROBLEM
class TestValidateUrlParam(unittest.TestCase):
def test_valid_url(self):
raw_url = 'https%3A%2F%2Faccount.blob.core.windows.net%2Fcontainer%2Ffolder1%2Ffolder2%2Fblob.jpg'
self.assertTrue(validate_url_param(raw_url))
def test_invalid_url(self):
raw_url = 'not_a_url'
self.assertFalse(validate_url_param(raw_url))
</code></pre>
<ul>
<li>From a terminal where <code>pwd</code> == <code>image-tokenizer</code> (project root) with venv activated: <code>python -m unittest discover</code></li>
</ul>
<pre><code>E
======================================================================
ERROR: tests.test_validate_url_param (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: tests.test_validate_url_param
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\unittest\loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\unittest\loader.py", line 377, in _get_module_from_name
__import__(name)
File "C:\Users\me\image-tokenizer\tests\test_validate_url_param.py", line 5, in <module>
from imagetokenizer import validate_url_param
ImportError: cannot import name 'validate_url_param' from 'imagetokenizer' (C:\Users\me\image-tokenizer\imagetokenizer\__init__.py)
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
</code></pre>
<ul>
<li>If I try <code>from .. import validate_url_param</code> or <code>from ..imagetokenizer import validate_url_param</code>, I get the error: <code>ImportError: attempted relative import beyond top-level package</code></li>
</ul>
<p>There is something going on where the function <em>inside</em> my Azure Function's <code>__init__.py</code> file cannot be imported into my <code>unittest</code>.</p>
<p>Ideas?</p>
<p><strong>EDIT 1</strong>:</p>
<p>@SiddheshDesai, I think your directory structure is different from mine, right?</p>
<pre><code>image-tokenizer (project-root)
|_imagetokenizer (?)
|_.venv (Your .venv is at this level, mine is a level above)
|_HttpTrigger1 (function 1)
|___init__.py
|_function.json
|_HttpTrigger2 (function 2)
|___init__.py
|_function.json
|_tests
|___init__.py
|_test_validate_url_param.py
|_host.json
|_local.settings.json
|_requirements.txt
</code></pre>
<p><strong>EDIT 2</strong>:</p>
<p>Ugh... my issue was that I had defined all the functions inside the <code>main()</code> function, so the <code>import</code> statement could only import <code>main</code>. I moved the function definitions outside <code>main</code>, and now it works.</p>
<p><a href="https://i.sstatic.net/hypKK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hypKK.png" alt="enter image description here" /></a></p>
|
<python><azure-functions><python-unittest>
|
2023-04-18 22:07:29
| 1
| 2,046
|
ericOnline
|
76,049,331
| 13,436,451
|
Will constructing a sparse graph in NetworkX automatically store the full adjacency matrix?
|
<p>If I have a sparse graph that I would like to instantiate as a NetworkX graph and currently have stored as a (say) adjacency dictionary, will it automatically store the data also as a full adjacency matrix (which is very large and thus costly to instantiate), or will the adjacency matrix only be created when explicitly called (or another method requiring it is called)?</p>
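For context, a quick sketch of the behavior in question (assuming the dict-of-dicts adjacency backend is what is meant by "sparse"): NetworkX keeps adjacency as nested mappings and only materializes a dense matrix when one is explicitly requested, e.g. via <code>nx.to_numpy_array</code>.

```python
import networkx as nx

# Build a small graph from an adjacency dict; storage stays dict-of-dicts.
G = nx.Graph({0: {1: {}}, 1: {2: {}}, 2: {}})
print(G.number_of_edges())  # 2

# A dense matrix exists only once you explicitly ask for one:
A = nx.to_numpy_array(G)
print(A.shape)  # (3, 3)
```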
|
<python><networkx>
|
2023-04-18 21:56:20
| 1
| 327
|
zjs
|
76,049,326
| 1,251,549
|
How to add a dependency on PyElementVisitor for an IntelliJ IDEA plugin?
|
<p><strong>The goal</strong></p>
<ol>
<li>create a plugin for Python files</li>
<li>debug this plugin with Intellij Idea (not PyCharm) for 2 reasons: a) IntelliJ Idea is the main development IDE b) I do not have/want an additional license for PyCharm</li>
<li>The plugin should rely on <a href="https://mvnrepository.com/artifact/com.jetbrains.intellij.python/python-psi" rel="nofollow noreferrer">python-psi</a> classes like <code>com.jetbrains.python.psi.PyElementVisitor</code>.</li>
</ol>
<p><strong>The problem</strong></p>
<ol>
<li>If I use <a href="https://github.com/JetBrains/intellij-sdk-code-samples/blob/main/product_specific/pycharm_basics" rel="nofollow noreferrer">pycharm basics</a> (see build.gradle.kts) file project I will have <code>com.jetbrains.python.*</code> class but I get pycharm running when I run debug.</li>
<li>I can modify <a href="https://github.com/JetBrains/intellij-sdk-code-samples/blob/main/comparing_string_references_inspection" rel="nofollow noreferrer">java inspection</a> (see build.gradle.kts) file e.g. add Python plugin but here I am stuck because dependencies for <code>com.jetbrains.python.*</code> are not loaded.</li>
</ol>
<p><strong>What I have tried</strong></p>
<p>I have tried something the like (<a href="https://github.com/JetBrains/intellij-sdk-code-samples/blob/main/comparing_string_references_inspection/build.gradle.kts" rel="nofollow noreferrer">build.gradle.kts</a>):</p>
<pre><code>intellij {
version.set("2022.2.5")
plugins.set(listOf(
"com.intellij.java",
"com.jetbrains.intellij.python:python-psi:202.7319.50"
))
}
</code></pre>
<p>Something like <code>com.intellij.python</code> or <code>Pythonid</code> also does not work.</p>
<p>The pycharm basics <a href="https://github.com/JetBrains/intellij-sdk-code-samples/blob/main/product_specific/pycharm_basics/build.gradle.kts" rel="nofollow noreferrer">build.gradle.kts</a> version:</p>
<pre><code>intellij {
version.set("2022.2.5")
type.set("PY") //here PyCharm is "selected"
    plugins.set(listOf("Pythonid")) // even if I add this to the IntelliJ version, I get "plugin not found"
downloadSources.set(false)
}
</code></pre>
<p>So the question is - how to modify java inspection code and be able to run Intellij Idea with <code>com.jetbrains.python.*</code> classes?</p>
|
<python><intellij-idea><intellij-plugin>
|
2023-04-18 21:55:26
| 0
| 33,944
|
Cherry
|
76,049,257
| 2,642,356
|
Missing symbol error after compiling a library that references another precompiled library
|
<p><strong>tl;dr</strong>: I'm trying to compile a .cpp file that references another compiled API (.so + .h files), and I get a missing symbol when I try to load it. Below is a simple example of how I created the modules, and all the steps that I made along the way. Help would be very appreciated.</p>
<p><strong>NOTE:</strong> Before you decide to skip this question because you might know nothing about Cython, this is more of a linker/g++ question than a cython question, so please read to the end.</p>
<p>I'm trying to create a Cython module that wraps a precompiled C++ API library, and I get strange errors on import into Python. So I have made a simple example in which I:</p>
<ol>
<li>Compile a simple C++ library (.so + .h files)</li>
<li>Compile a simple Cython module that references it.</li>
<li>Load that module in Python to access the underlying C++ library.</li>
</ol>
<p>First I have the following two files:</p>
<pre class="lang-cpp prettyprint-override"><code>// circle.h
#ifndef CIRCLE_H
#define CIRCLE_H
class Circle {
public:
Circle(float radius);
float getDiameter() const;
private:
float m_radius;
};
#endif // CIRCLE_H
//circle.cpp
#include "circle.h"
Circle::Circle(float radius) : m_radius(radius) {}
float Circle::getDiameter() const {
return m_radius * 2.0f;
}
</code></pre>
<p>I compile these into <code>build/Circle.so</code> using the following commands:</p>
<pre class="lang-bash prettyprint-override"><code>mkdir -p build
g++ -O2 -fPIC -c circle.cpp -o build/Circle.o
g++ -shared build/Circle.o -o build/Circle.so
cp circle.h build/Circle.h
</code></pre>
<p>Now I turned to create my Cython module by creating the following file named <code>example.pyx</code>:</p>
<pre><code># distutils: language = c++
cdef extern from "build/Circle.h":
cdef cppclass Circle:
Circle(float radius)
float getDiameter()
cdef class PyCircle:
cdef Circle* c_circle
def __cinit__(self, float radius):
self.c_circle = new Circle(radius)
def __dealloc__(self):
del self.c_circle
def getDiameter(self):
return self.c_circle.getDiameter()
</code></pre>
<p>Then I ran <code>cythonize -i example.pyx</code> and got two files in the root directory: <code>example.cpp</code> (a Cython-generated source file) & <code>example.cpython-39-x86_64-linux-gnu.so</code>. I renamed the latter to <code>example.so</code> and tried to import it from Python (<code>import example</code>), and I got the following error:</p>
<blockquote>
<p>ImportError: example.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZNK6Circle11getDiameterEv</p>
</blockquote>
<p>So I looked into the aforecompiled (that's almost a word) module <code>Circle.so</code> using the command <code>nm -D build/Circle.so | grep Diameter</code>, and indeed I saw that the missing symbol resides inside that module:</p>
<blockquote>
<p>0000000000001110 T _ZNK6Circle11getDiameterEv</p>
</blockquote>
<p>So I figured "there must be some problem with the linker. I know! Let's compile that auto-generated .cpp source code and link to that library!". So I did the following:</p>
<ol>
<li>Ran <code>g++ -O2 -fPIC -c example.cpp -I /usr/include/python3.9 -I build/Circle.h</code></li>
<li>Renamed <code>build/Circle.so</code> to <code>libCircle.so</code></li>
<li>Ran <code>g++ -shared example.o -Lbuild -lCircle -o example.so</code></li>
<li>Ran <code>import example</code> in the python console and I got the same error.</li>
</ol>
<p>What am I doing wrong? How come that symbol is not loaded?</p>
|
<python><c++><linker><g++><cython>
|
2023-04-18 21:43:49
| 1
| 1,864
|
EZLearner
|
76,049,158
| 3,822,090
|
Wasserstein distance in scipy - definition of support
|
<p>I want to use the Wasserstein distance from scipy.stats.wasserstein_distance to get a measure of the difference between two probability distributions. However, I do not understand how the support matters here.</p>
<p>For example, I would have expected
<code>stats.wasserstein_distance([0,1,0],[1,0,0])</code>
to be 1 (as we need to move a mass of weight 1 by a distance of 1), however it is 0. Why is this?</p>
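To illustrate the point of confusion (a sketch, not from the original post): <code>wasserstein_distance</code> interprets its positional arguments as <em>sample values</em>, i.e. the support points themselves, not as probability masses; masses go in the <code>u_weights</code>/<code>v_weights</code> keyword arguments.

```python
from scipy.stats import wasserstein_distance

# As samples, [0, 1, 0] and [1, 0, 0] are the same multiset {0, 0, 1},
# so the distance between them is 0.
d_samples = wasserstein_distance([0, 1, 0], [1, 0, 0])

# To treat them as masses over the support points 0, 1, 2, pass the
# support positionally and the masses as weights; moving a unit of mass
# from 1 to 0 then costs exactly 1.
d_masses = wasserstein_distance([0, 1, 2], [0, 1, 2],
                                u_weights=[0, 1, 0], v_weights=[1, 0, 0])
print(d_samples, d_masses)  # 0.0 1.0
```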
|
<python><scipy.stats>
|
2023-04-18 21:25:31
| 1
| 2,194
|
mzzx
|
76,049,155
| 6,457,862
|
Is it possible to bind a name inside a python function after the function has been compiled?
|
<p>I know this pattern would be very bad; I'm asking the question out of curiosity, not because I'm planning on doing this in production code.</p>
<p>Suppose I define a function:</p>
<pre class="lang-py prettyprint-override"><code>def function(x, y):
return x + y + z
</code></pre>
<p>And the variable <code>z</code> doesn't exist in the module the function was defined.</p>
<p>Is it possible to import the function from another module and somehow manipulate the code object, or play some sort of dirty trick with a decorator, or something, in order to make it work correctly?</p>
<p>I've tried setting <code>co_varnames</code> and <code>co_argcount</code> to <code>(x, y, z)</code> and <code>3</code> respectively, but this still doesn't bind the name <code>z</code> to the argument it seems.</p>
<p>To be clear, I know if I define a global named <code>z</code> in the same module this works, I'm asking about importing this function and somehow making <code>z</code> bind to a variable I want, in a different module.</p>
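One trick along these lines does work (a sketch, not a recommendation): free names like <code>z</code> are resolved at call time through the function's <code>__globals__</code>, which is the namespace of the module where the function was <em>defined</em>, so you can inject the binding there from anywhere, including after importing the function.

```python
def function(x, y):
    return x + y + z  # z is a free (global) name, looked up at call time

# Works even after `from other_module import function`: the lookup goes
# through function.__globals__, not the importer's namespace.
function.__globals__['z'] = 100
print(function(1, 2))  # 103
```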
|
<python><python-3.x><function><name-binding>
|
2023-04-18 21:24:31
| 1
| 437
|
Ignacio
|
76,049,129
| 5,168,463
|
Tensorflow: Issue with training size in each epoch
|
<p>I am building a basic neural network with Keras and TensorFlow on the MNIST dataset, using the following code:</p>
<pre><code>import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28*28).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28*28).astype('float32') / 255.0
print(x_train.shape)
print(x_test.shape)
model = keras.Sequential(
[
keras.Input(shape=(28*28)),
layers.Dense(512, activation="relu"),
layers.Dense(256, activation="relu"),
layers.Dense(10),
]
)
model.compile(
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer = keras.optimizers.Adam(learning_rate=0.001),
metrics = ['accuracy'],
)
model.fit(x_train, y_train, batch_size=32, epochs=5, verbose=2)
</code></pre>
<p>My output is:</p>
<p><a href="https://i.sstatic.net/Ql3o6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ql3o6.png" alt="enter image description here" /></a></p>
<p>As we can see in the image above, each epoch is only going through 1875 (=60000/32) data points. Shouldn't it go through all the 60000 instances per epoch as there are 60000 records in training dataset?</p>
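For context, the number shown in the progress bar counts batches (optimizer steps), not individual samples; all 60,000 samples are still seen each epoch, 32 at a time. A quick sanity check of the arithmetic:

```python
import math

# Keras' progress bar shows steps per epoch, where each step is one batch.
samples, batch_size = 60_000, 32
steps_per_epoch = math.ceil(samples / batch_size)
print(steps_per_epoch)  # 1875
```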
|
<python><keras><tensorflow2.0>
|
2023-04-18 21:20:44
| 1
| 515
|
DumbCoder
|
76,048,814
| 1,810,940
|
Referencing class attributes inside of class body
|
<p>I have an API that uses abstract class attributes, essentially attributes that exist for all instances of a class & are defined in the class body.</p>
<p>I would like to programmatically generate some of these attributes. They aren't <em>dynamic</em> in the sense that they don't change at runtime.</p>
<p>For example, a class <code>Foo</code> that builds <code>bax</code> as a dictionary from a list of keys.
This works, but it doesn't solve my case, which is a little more complicated.</p>
<pre><code>class Foo:
bax_values = [0,1,2,3]
bax = {f"{x}":x+1 for x in bax_values}
</code></pre>
<p>My case is more complicated - there's a template and a set of keys.</p>
<pre><code>class Foo:
bax_values = [0,1,2,3]
bax_template = "bar{s}"
bax = {f"{x}":bax_template.format(s=x) for x in bax_values}
</code></pre>
<p>This doesn't work - intriguingly, it says <code>bax_template</code> is undefined. <strong>This seems to contradict the working case, which allowed iteration over the class-defined</strong> <code>bax_values</code> .</p>
<p>What's the difference between these two cases in the eyes of python?</p>
<p>Edit: This question is <em>not</em> a duplicate of <a href="https://stackoverflow.com/questions/45195109/unboundlocalerror-local-variable-referenced-before-assignment-why-legb-rule-n">"UnboundLocalError: local variable referenced before assignment" Why LEGB rule not applied in this case?</a></p>
<p>That answer suggests that "You may also access it [bax_template] directly at the class level, during the class definition, but since you are still inside the temporary scope that is just another example of a local access (LEGB)." This is not working in my test case. I am able to access bax_values, but not bax_template. My question is asking what is the difference between these two variables that makes one accessible and the other not. It seems to be something related to the scope of the comprehension variables.</p>
<p>Edit 2:
As proof that this seems to be something to do with the scope of the comprehension variables, the following workaround does the job.</p>
<pre><code>from itertools import product
class Foo(object):
bax_values = [0,1,2,3]
bax_template = ['bar{s}']
bax = {f"{key}":template.format(s=key) for key,template in product(bax_values, bax_template)}
</code></pre>
<p>which implies that we can reference the class attributes in the iterable portion of a comprehension, but not in the value/definition portion of a comprehension.</p>
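Another workaround worth noting (a sketch, not from the original post): a comprehension's body runs in its own implicit function scope, which, like any function, cannot see the enclosing class namespace; only the outermost iterable is evaluated in the class scope. Default arguments can smuggle the class attributes in:

```python
class Foo:
    bax_values = [0, 1, 2, 3]
    bax_template = "bar{s}"
    # The comprehension body runs in its own scope; binding the class
    # attributes as default arguments makes them visible inside it.
    bax = (lambda values=bax_values, template=bax_template:
           {f"{x}": template.format(s=x) for x in values})()

print(Foo.bax)  # {'0': 'bar0', '1': 'bar1', '2': 'bar2', '3': 'bar3'}
```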
|
<python><python-3.x><oop><attributes>
|
2023-04-18 20:29:02
| 0
| 503
|
jay
|
76,048,682
| 1,218,317
|
Python script skips certain lines when run without shell
|
<p>I have some code like this:</p>
<pre><code>from stomp import *
from stomp.listener import ConnectionListener
from stomp.utils import parse_frame
class MyListener(ConnectionListener):
_counter=0
def on_message(self, frame):
if self._counter > 10:
return
print(self._counter)
self._counter += 1
print('Starting...')
connection = Connection([('darwin-dist-44ae45.nationalrail.co.uk', '61613')])
connection.set_listener('', MyListener())
connection.connect(REDACTED)
connection.subscribe('/topic/darwin.pushport-v16', 11)
print('Ummm...???')
</code></pre>
<p>When I run this from command line using Python, the lines with <code>connection</code> don't execute:</p>
<pre><code>$ python3 myscript.py
Starting...
Ummm...???
</code></pre>
<p>However, when I open python shell and run these commands one by one, the <code>connection.subscribe('/topic/darwin.pushport-v16', 11)</code> produces a bunch of output as expected:</p>
<pre><code>$ python3
>>> from stomp import *
>>> from stomp.listener import ConnectionListener
>>> from stomp.utils import parse_frame
>>>
>>> class MyListener(ConnectionListener):
... _counter=0
... def on_message(self, frame):
... if self._counter > 10:
... return
... print(self._counter)
... self._counter += 1
...
>>> print('Starting...')
Starting...
>>> connection = Connection([('darwin-dist-44ae45.nationalrail.co.uk', '61613')])
>>> connection.set_listener('', MyListener())
>>> connection.connect(REDACTED)
>>> connection.subscribe('/topic/darwin.pushport-v16', 11)
0
1
2
</code></pre>
<p>I have never encountered odd behavior like this before. Why is this happening and how do I fix it?
Thanks</p>
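For context (an assumption about stomp.py's threading model, not something stated in the post): the library delivers frames on a background listener thread, so a script that falls off the end of the file exits, and disconnects, before any message arrives, while an interactive shell keeps the process alive between statements. A minimal way to keep the main thread running might look like this (<code>max_polls</code> exists only to make the sketch testable):

```python
import time

def keep_alive(poll_seconds=1.0, max_polls=None):
    """Block the main thread so background listener threads keep receiving.

    In the real script, call keep_alive() with no arguments right after
    connection.subscribe(...) and stop it with Ctrl-C.
    """
    polls = 0
    try:
        while max_polls is None or polls < max_polls:
            time.sleep(poll_seconds)
            polls += 1
    except KeyboardInterrupt:
        pass
    return polls

print(keep_alive(poll_seconds=0.01, max_polls=3))  # 3
```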
|
<python><python-3.x><stomp>
|
2023-04-18 20:10:42
| 1
| 1,908
|
ritratt
|
76,048,657
| 3,509,416
|
Preservation of multiline yaml string in exact format when opening file
|
<p>I'm developing an app where I preserve the response of an api call in a yaml file and then as a second part open that yaml file to access the response.</p>
<p>These are two independent runs so can't preserve the response in memory.</p>
<p>The response comes out as a string <code>"- dashboard: new_dashboard\n title: New Dashboard\n layout: newspaper\n preferred_viewer: dashboards-next\n description: this is my description\n preferred_slug: altLyIglxfRs8N3WaNfTKQ\n elements:\n - name: ''\n type: text\n title_text: ''\n body_text: '[{\"type\":\"h1\",\"children\":[{\"text\":\"jkjkjkjk\"}],\"align\":\"center\"}]'\n rich_content_json: '{\"format\":\"slate\"}'\n row: 0\n col: 0\n width: 8\n height: 6\n - title: Untitled\n name: Untitled\n model: system__activity\n explore: history\n type: table\n fields: [history.created_day_of_week, user_query_rank.user_name, history.query_run_count]\n pivots: [user_query_rank.user_name]\n filters:\n history.created_date: 7 days\n sorts: [user_query_rank.user_name desc]\n column_limit: 50\n listen: {}\n row: 0\n col: 8\n width: 8\n height: 6\n"</code></p>
<p>However, when I access it, the packages I'm using (PyYAML, ruamel.yaml and json) seem to be adding a series of escape characters to empty strings; for instance <code>- name: ''\</code> turns into <code>- name: \'\'\</code>, causing issues.</p>
<p>I've tried PyYAML, ruamel.yaml and persisting in JSON files. I'm not wedded to the file type, just to preserving the integrity of the string.</p>
<p>The goal is to have the response preserved in a text file and then be able to open that text file and have the response unchanged. Is there another format i'm missing or a different way to think about this problem?</p>
<p>Thanks</p>
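One observation worth testing (a sketch, under the assumption that the string only needs to be preserved between runs, not manipulated): the extra escapes come from re-serializing an already-serialized YAML string. Writing the response verbatim as plain text and parsing it as YAML only when actually needed sidesteps the problem entirely:

```python
import pathlib
import tempfile

# A stand-in for the API response (already a YAML-formatted string).
response = "- dashboard: new_dashboard\n  elements:\n  - name: ''\n"

# Run 1: persist the raw bytes, with no YAML/JSON re-serialization.
path = pathlib.Path(tempfile.mkdtemp()) / "response.yaml"
path.write_text(response, encoding="utf-8")

# Run 2: read it back; the string is byte-for-byte identical.
restored = path.read_text(encoding="utf-8")
print(restored == response)  # True
```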
|
<python><yaml><ruamel.yaml>
|
2023-04-18 20:07:09
| 0
| 1,899
|
hselbie
|
76,048,534
| 750,510
|
How to create AWS Graviton (ARM) Python Lambda on Intel machine with Poetry
|
<p>I'm trying to build a Python Lambda with Poetry. My function depends on <code>psycopg2</code>. This library, in turn, depends on a platform binary: <code>libpq</code>. So, I need to bundle it in my distro (a ZIP file). There is a <a href="https://pypi.org/project/psycopg2-binary" rel="nofollow noreferrer">psycopg2-binary</a> package on PyPi and I believe it has the wheel I need: <code>psycopg2_binary-2.9.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl</code>. Python version 3.9, just like my runtime, ARM 64 architecture. It contains the <code>libpq</code> I need:</p>
<pre><code>$ file psycopg2_binary.libs/libpq-33589b1f.so.5.15
psycopg2_binary.libs/libpq-33589b1f.so.5.15: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0874810fb70766ff96a80897579633a2ef7af60e, stripped
</code></pre>
<p>It is definitely what I need.</p>
<p>I <a href="https://chariotsolutions.com/blog/post/building-lambdas-with-poetry" rel="nofollow noreferrer">build</a> my function with Poetry in three steps:</p>
<pre><code>poetry build
poetry run pip install --upgrade --target package dist/*.whl
cd package; zip -r ../distro.zip . -x '*.pyc'
</code></pre>
<p>The problem is that when I install the <code>psycopg2-binary</code> (<code>poetry add psycopg2-binary</code>) it installs the Intel wheel (my host OS).</p>
<p>I even tried downloading the wheel I need and installing it with</p>
<pre><code>poetry add ./psycopg2_binary-2.9.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
</code></pre>
<p>But it gave me an error:</p>
<pre><code>ERROR: psycopg2_binary-2.9.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl is not a supported wheel on this platform.
</code></pre>
<p>I tried adding a platform:</p>
<pre><code>poetry add --platform=manylinux2014_aarch64 ./psycopg2_binary-2.9.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
</code></pre>
<p>But this just does nothing. It doesn't fail and it doesn't install anything.</p>
<p><strong>So, how do I cross-compile for ARM with Poetry on Intel?</strong></p>
<p>P.S. Please, don't suggest using <a href="https://pypi.org/project/aws-psycopg2" rel="nofollow noreferrer">aws-psycopg2</a>, its <code>libpq</code> version is too low (9) and I need 14+.</p>
|
<python><python-poetry><python-wheel><aws-graviton>
|
2023-04-18 19:49:09
| 1
| 33,782
|
madhead
|
76,048,531
| 10,266,106
|
Numpy ndenumerate A Shared Array With Multiprocessing
|
<p>I have two arrays named <code>arrayone</code> & <code>arraytwo</code>, both identical in dimensions, static, and never altered. A third array, <code>masterarray</code>, is pre-assembled; compilations of integers are cast to an intermediate array, which is then placed into it.</p>
<p>The process of moving along the ndarray's columns (j) and each row (i) is fast; however, I'd like to utilize multiprocessing to accelerate this process and share these arrays without excessive memory consumption. Specifically, I'd like to execute multiple processes which loop across each column (j) at any given row (i) and write the result to masterarray in shared memory. I've perused <a href="https://stackoverflow.com/questions/17785275/share-large-read-only-numpy-array-between-multiprocessing-processes">this answer</a>; however, the potential instability caused by sharedmem has led me to ask this question. For reference, my code is as follows:</p>
<pre><code>def gridagg():
masterarray = np.empty([1228,2606,208])
for index, val in np.ndenumerate(arrayone):
selection = arraytwo[index[0]][index[1]]
piece = stacked[selection[:,0], selection[:,1]].tolist()
piece = [j for i in piece for j in i]
comparray = np.array(piece)
if index[1] == 0:
compiled = comparray
else:
stage1 = comparray
stage2 = compiled
if index[1] == 1:
compiled = np.stack([stage2, stage1])
else:
compiled = np.vstack([stage2, stage1[None, :]])
if index[1] == 2605:
masterarray[index[0], :] = compiled
</code></pre>
|
<python><numpy><python-multiprocessing><numpy-ndarray>
|
2023-04-18 19:48:37
| 1
| 431
|
TornadoEric
|
76,048,526
| 10,044,690
|
Pyomo initializing and constraining 2xN variables
|
<p>I have the following code, where I am trying to create a <code>2xN</code> variable called <code>x</code> and initialize all columns in the first row to <code>x0_init</code>, and all columns in the second row to <code>x1_init</code>. After that I want to add initial and final boundary constraints to each row:</p>
<pre><code>model = pyo.ConcreteModel()
N = 100
num_rows = range(2)
num_cols = range(N)
x0_init = 0.0
x1_init = 0.0
x0_final = 1.0
x1_final = 0.0
# Declaring and initializing 2xN state variable
model.x = pyo.Var(num_rows, num_cols, domain=pyo.Reals, initialize=[x0_init, x1_init])
# Declaring initial boundary constraints
model.initial_boundary_constraint_x0 = pyo.Constraint(expr=model.x[0,0] == x0_init)
model.initial_boundary_constraint_x1 = pyo.Constraint(expr=model.x[1,0] == x1_init)
# Declaring final boundary constraints
model.final_boundary_constraint_x0 = pyo.Constraint(expr=model.x[0,N-1] == x0_final)
model.final_boundary_constraint_x1 = pyo.Constraint(expr=model.x[1,N-1] == x1_final)
</code></pre>
<p>The above code of course does not work. However, I was hoping someone would be able to help me achieve the abovementioned goals. Looking through the pyomo documentation, I have unfortunately been unable to find a solution to this problem.</p>
|
<python><optimization><pyomo>
|
2023-04-18 19:48:06
| 1
| 493
|
indigoblue
|
76,048,462
| 3,768,822
|
Display Plotly HTML inside a Jupyter Notebook
|
<p>I have a <code>.html</code> file created as</p>
<pre><code>import plotly.express as px
fig = px.scatter(x=[0, 1, 2, 3, 4], y=[0, 1, 4, 9, 16])
fig.write_html("px.html", full_html=False, include_plotlyjs='cdn')
</code></pre>
<p>I can open this file from the filesystem and it displays fine in Chrome.</p>
<p>What I need to do, however, is read this file into a Jupyter Notebook and display it.</p>
<p>I am trying the following in a Notebook cell</p>
<pre><code>from IPython.display import display, HTML
display(HTML(filename="px.html"))
</code></pre>
<p>This gives an error in the notebook of</p>
<pre><code>Javascript error adding output!
ReferenceError: Plotly is not defined
See your browser Javascript console for more details.
</code></pre>
<p>The Chrome Javascript console shows</p>
<p><a href="https://i.sstatic.net/iYF0D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iYF0D.png" alt="enter image description here" /></a></p>
<p>How do I get this plotly html to display from the file in the notebook?</p>
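One workaround to try (an assumption about the cause: the notebook's require.js intercepts the CDN script tag, so <code>Plotly</code> never lands on the page's global scope): sandbox the file in an iframe instead of inlining it.

```python
from IPython.display import IFrame

# Render the saved file inside an iframe; the CDN script then executes in
# its own browsing context, out of reach of the notebook's require.js.
frame = IFrame(src="px.html", width=900, height=520)
frame  # last expression in a cell displays it
```

Alternatively, writing the file with <code>fig.write_html("px.html", include_plotlyjs=True)</code> embeds plotly.js in the file itself, so no external script load is needed at display time.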
|
<python><jupyter-notebook><plotly>
|
2023-04-18 19:37:42
| 1
| 1,357
|
Jonathan
|
76,047,921
| 1,637,894
|
Why do Qt timers fire less frequently when not updating the GUI?
|
<p>Below is a simple Qt for Python (PySide6) application that tries to count the number of times a slot gets called in one second:</p>
<pre class="lang-py prettyprint-override"><code>import time
from PySide6.QtCore import *
from PySide6.QtTest import *
from PySide6.QtWidgets import *
class MainWindow(QWidget):
def __init__(self, *args, **kwargs):
super(MainWindow, self).__init__(*args, **kwargs)
self.resize(400, 300)
self.layout = QVBoxLayout()
self.setLayout(self.layout)
self.sheep_number = 0
self.timer = QTimer()
self.timer.timeout.connect(self.count_sheep)
self.counted_sheep_label = QLabel()
self.layout.addWidget(self.counted_sheep_label) # comment out this line to see the bug
@Slot()
def count_sheep(self):
self.sheep_number += 1
self.counted_sheep_label.setText(f"Counted {self.sheep_number} sheep.")
if __name__ == "__main__":
app = QApplication([])
main_window = MainWindow()
main_window.show()
main_window.timer.start()
then = time.time()
QTest.qWait(1000)
main_window.timer.stop()
t = time.time() - then
print(t, main_window.sheep_number)
</code></pre>
<p>The code brings up a GUI and leaves it on the screen for 1 second. It then prints the time that elapsed (according to Python rather than Qt) and number of calls to the slot. When I run this on my laptop, I get <code>1.0010387897491455</code> and <code>7155</code></p>
<p>However, when I comment out the indicated line, I get <code>1.0011870861053467</code> and <code>88</code>.</p>
<p>I do not understand this behavior. Why would <em>not</em> drawing the GUI cause <code>QTimer</code> to fire less often?</p>
|
<python><qt><pyqt><pyside>
|
2023-04-18 18:22:55
| 1
| 515
|
sammosummo
|
76,047,913
| 505,328
|
OPC UA asyncua python: new file directory created but it lacks the CreateFile method among its children
|
<p>My problem appears to be very similar to this unanswered question: <a href="https://stackoverflow.com/questions/75117089/opc-ua-asyncua-python-new-folder-created-but-server-never-returns-the-methods">OPC UA asyncua python: new folder created but server never returns the methods</a>, but I have tried something slightly different.</p>
<p>I initially tried to create a folder by calling the existing create_folder method, but that folder only appeared to be a place to put other nodes (or folders) rather than actual files, and I was trying to add support for transferring a file to the OPC-UA server. The type that seemed to fit what I wanted was FileDirectory, which I could see in standard_address_space_services should have CreateFile as a child (precisely what is searched for in the create_file method of UaDirectory). To that end I created this method, largely copied from the create_folder method:</p>
<pre><code> async def add_directory(self, nodeid, bname):
# This section was mostly copied from manage_nodes.create_folder
nodeid, qname = _parse_nodeid_qname(nodeid, bname)
parent = self.server.nodes.objects
directory_node = make_node(
self.server,
await _create_object(parent.server, parent.nodeid,
nodeid, qname, ua.ObjectIds.FileDirectoryType)
)
# This was mostly copied from standard_address_space_services
# It doesn't currently work because self.server doesn't have an add_nodes method
# standard_address_space_services.create_standard_address_space_Services gets called
# when the internal server is initialized
# But this doesn't result in any instance of the FileDirectoryType I create
# having CreateFile as a child node
# CreateFile is created as a child of node id 13354, "<FileDirectoryName>"
# Which in turn is a child of 13353, FileDirectoryType, which I use above
# attrs = ua.MethodAttributes(
# DisplayName=LocalizedText("CreateFile"),
# )
# node = ua.AddNodesItem(
# BrowseName=QualifiedName("CreateFile", 0),
# NodeClass_=NodeClass.Method,
# ParentNodeId=NumericNodeId(directory_node.nodeid.Identifier, 0),
# ReferenceTypeId=NumericNodeId(47, 0),
# NodeAttributes=attrs,
# )
# self.server.add_nodes([node])
return directory_node
</code></pre>
<p>Unfortunately, as indicated by those comments, merely creating an object with the FileDirectory type doesn't give it the methods specified in the standard services, and hence it will still fail if I call create_file on it. The FileDirectory type must have been added for a reason, but I can't see how to create a working instance of one.</p>
<p>These are the lines I use when attempting to create a file with a node created by the method above:</p>
<pre><code>ua_directory = UaDirectory(transfer_folder_node)
transfer_file = await ua_directory.create_file("test_file.txt", True)
</code></pre>
|
<python><file-transfer><opc-ua>
|
2023-04-18 18:21:49
| 0
| 379
|
WindowsWeenie
|
76,047,825
| 4,670
|
Boto3 Client Creation is Slow
|
<p>I am running boto3 on AWS Lambda, but creating a boto3 client is very slow. I am currently running a Python 3.9 runtime on AWS Lambda that creates handfuls of presigned URLs for S3 resources.</p>
<p>A key line of code required to generate the url with boto3 is</p>
<pre><code>print("Getting boto3")
self.s3_client = boto3.client('s3')
print("Got client s3")
</code></pre>
<p>Shockingly, the above single line of code takes a full 2000 ms to execute, according to the logs in CloudWatch:</p>
<pre><code>2023-04-18T13:34:18.891-04:00 Getting boto3
2023-04-18T13:34:20.733-04:00 Got client s3
</code></pre>
<p>Is this normal and can I make it go faster? I saw an old post indicating that client acquisition speed might be improved by explicitly passing credentials or increasing memory size. It seems strange that the basic s3 interface cannot be acquired in less than 2s.</p>
|
<python><amazon-web-services><boto3>
|
2023-04-18 18:10:16
| 0
| 4,127
|
Steve
|
76,047,654
| 6,583,606
|
KSComp with variable-size constraint vector input
|
<h1>Summary</h1>
<p>I'm trying to perform structural optimization coupling Nastran SOL 106 and OpenMDAO. I want to minimize mass subject to nonlinear stress and stability constraints. My structure is a box beam reinforced with ribs and stiffeners, clamped at the root and loaded with a concentrated force at the tip. At the moment I am using only a single design variable, the wall thickness (the same for all elements of the structure).</p>
<p>For both the stress and the stability constraint, I would like to use OpenMDAO's <code>KSComp</code> component. However, I have a problem with the fixed width of the constraint vector input. While for the stress aggregation function this is not a problem, as the number of elements is fixed and I can easily calculate it, the vector of the stability constraint may change size at every iteration. This is because the constraint is imposed on the N lowest eigenvalues of the tangent stiffness matrix for each converged iteration of the nonlinear analysis (they have to be larger than 0), where the nonlinear solver is an adaptive arc-length method. This means that the number of iterations is not known a priori and that it usually changes for every analysis. As a consequence, I have a <code>N x no_iterations</code> array that I want to flatten and feed to the <code>KSComp</code> component, but I can't do that because the number of iterations is variable and cannot be fixed.</p>
<h1>What I've tried</h1>
<p>I have created an explicit component and a group as you see below. Inside my explicit component <code>Sol106Comp</code> I set the value of thickness, run the Nastran analysis, calculate the functions of interest and then call the method <code>compute_ks_function</code> that I have defined taking as a reference the <a href="https://openmdao.org/newdocs/versions/latest/_modules/openmdao/components/ks_comp.html" rel="nofollow noreferrer">source code of <code>KSComp</code></a>. This works, but I would like to use <code>KSComp</code> itself to benefit from its membership in the OpenMDAO world.</p>
<p>From <a href="https://stackoverflow.com/questions/70738549/how-to-define-output-variable-with-dynamic-shape-in-openmdao">this answer</a>, it looks like <code>KSComp</code> is not meant to have a dynamic width for the constraint vector input at all. Is this still the case?</p>
<p>I've read that a solution may be over-allocation, and it may even be natural in my case, as I know the maximum number of iterations in the nonlinear analysis, so I know the maximum possible size of the constraint vector input. However, what value should I give to the inactive entries of the constraint vector? In my case the constraint is satisfied when the elements of the constraint vector are larger than 0, so I cannot set the inactive entries to 0. I could use a very large value, but wouldn't this affect my KS function in some way?</p>
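<p>For concreteness, here is a standalone sketch of the over-allocation idea (the sizes and the <code>1e10</code> filler are placeholders I made up): with <code>lower_flag=True</code> the KS aggregates <code>-g</code>, so a huge positive filler becomes a hugely negative aggregand whose exponential underflows to zero and barely perturbs the result:</p>

```python
import numpy as np

def compute_ks_function(g, rho=50, upper=0, lower_flag=False):
    # Same KS aggregation as in my component (adapted from the KSComp source).
    con_val = g - upper
    if lower_flag:
        con_val = -con_val
    g_max = np.max(np.atleast_2d(con_val), axis=-1)[:, np.newaxis]
    g_diff = con_val - g_max
    exponents = np.exp(rho * g_diff)
    summation = np.sum(exponents, axis=-1)[:, np.newaxis]
    return g_max + 1.0 / rho * np.log(summation)

max_entries = 10                          # assumed maximum N * no_iterations
eigenvalues = np.array([0.3, 1.2, 0.8])   # entries actually computed this run
padded = np.full(max_entries, 1e10)       # made-up filler for inactive slots
padded[:eigenvalues.size] = eigenvalues

# The filler maps to exp(rho * (-1e10 - g_max)), which underflows to 0.0,
# so the padded aggregate matches the exact one.
ks_padded = compute_ks_function(padded, lower_flag=True)
ks_exact = compute_ks_function(eigenvalues, lower_flag=True)
print(ks_padded, ks_exact)  # both approximately -0.3
```

<p>So numerically a large filler seems harmless here, but I don't know whether it causes trouble for the optimizer's derivatives, which is part of my question.</p>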
<h1>Code</h1>
<pre><code>import openmdao.api as om
import numpy as np
from resources import box_beam_utils, pynastran_utils
import os
from pyNastran.bdf.mesh_utils.mass_properties import mass_properties
from pyNastran.bdf.bdf import BDF

class Sol106Comp(om.ExplicitComponent):
    """
    Evaluates the mass of the model, the von Mises stresses and the lowest eigenvalues of the tangent stiffness matrix.
    """

    def initialize(self):
        self.options.declare('box_beam_bdf', types=BDF)
        self.options.declare('sigma_y', types=float)

    def setup(self):
        self.add_input('t')
        self.add_output('mass')
        self.add_output('ks_stress')
        self.add_output('ks_stability')
        self.add_discrete_output('sol_106_op2', None)
        self.add_discrete_output('eigenvalues', None)

    def setup_partials(self):
        # Finite difference all partials
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs, discrete_inputs, discrete_outputs):
        """
        Run SOL 106 and calculate output functions.
        """
        # Assign variables
        box_beam_bdf = self.options['box_beam_bdf']
        yield_strength = self.options['sigma_y']
        # Assign thickness to PSHELL card
        box_beam_bdf.properties[1].t = inputs['t'][0]
        # Set up directory and input name
        analysis_directory_name = 'Optimization'
        analysis_directory_path = os.path.join(os.getcwd(), 'analyses', analysis_directory_name)
        input_name = 'box_beam_sol_106'
        # Run Nastran and return OP2 file
        sol_106_op2 = pynastran_utils.run_tangent_stiffness_matrix_eigenvalue_calculation(
            bdf_object=box_beam_bdf.__deepcopy__({}), method_set_id=21, no_eigenvalues=10,
            analysis_directory_path=analysis_directory_path, input_name=input_name, run_flag=True)
        # Calculate mass
        outputs['mass'] = mass_properties(box_beam_bdf)[0]
        # Find von Mises stresses and aggregate with KS function
        stresses = sol_106_op2.nonlinear_cquad4_stress[1].data[-1, :, 5]
        outputs['ks_stress'] = self.compute_ks_function(stresses, upper=yield_strength)
        # Read eigenvalues of tangent stiffness matrix and aggregate with KS function
        f06_filepath = os.path.join(analysis_directory_path, input_name + '.f06')  # path to .f06 file
        eigenvalues = pynastran_utils.read_kllrh_lowest_eigenvalues_from_f06(f06_filepath)  # read eigenvalues of KLLRH matrix from f06 file
        outputs['ks_stability'] = self.compute_ks_function(eigenvalues[~np.isnan(eigenvalues)].flatten(), lower_flag=True)
        # Save OP2 object
        discrete_outputs['sol_106_op2'] = sol_106_op2
        discrete_outputs['eigenvalues'] = eigenvalues

    @staticmethod
    def compute_ks_function(g, rho=50, upper=0, lower_flag=False):
        """
        Compute the value of the KS function for the given array of constraints.
        Reference: https://openmdao.org/newdocs/versions/latest/_modules/openmdao/components/ks_comp.html
        """
        con_val = g - upper
        if lower_flag:
            con_val = -con_val
        g_max = np.max(np.atleast_2d(con_val), axis=-1)[:, np.newaxis]
        g_diff = con_val - g_max
        exponents = np.exp(rho * g_diff)
        summation = np.sum(exponents, axis=-1)[:, np.newaxis]
        KS = g_max + 1.0 / rho * np.log(summation)
        return KS


class BoxBeamGroup(om.Group):
    """
    System setup for minimization of mass subject to KS-aggregated von Mises stress and structural stability constraints.
    """

    def initialize(self):
        # Define object input variable
        ...

    def setup(self):
        # Assign variables
        ...
        # Generate mesh
        ...
        # Create BDF object
        ...
        # Apply load
        ...
        # Create subcase
        ...
        # Set up SOL 106 analysis
        ...
        # Define SOL 106 component
        comp = Sol106Comp(box_beam_bdf=box_beam_bdf, sigma_y=yield_strength)
        self.add_subsystem('sol_106', comp)
</code></pre>
|
<python><openmdao><nastran>
|
2023-04-18 17:48:28
| 1
| 319
|
fma
|
76,047,620
| 6,930,441
|
Multiprocessing or multithreading a for loop
|
<p>I'm brand new to the idea of multithreading vs multiprocessing. I would like to run a for loop, which runs a single function (with two input args), either across several threads or several processors (very unclear about that, despite reading all day). It's further complicated by the fact that the function appends to a list outside of the function, which causes trouble when I try to store the function in a separate file and import it (I've been trying 100 different flavors of pool.starmap, ThreadPoolExecutor and ProcessPoolExecutor, which all seem to want it as a separate entity). So if I don't save the function as a separate file I get an <code>AttributeError: Can't get attribute 'GetBits' on &lt;module '__main__'&gt;</code> error, and if I do save the function separately and import it, I have trouble with the output_list variable, where I often get a <code>NameError: name 'output_list' is not defined</code> error.</p>
<p>A snippet of my current, working code:</p>
<pre><code>output_list = []
list1 = [a,b,c,d,e,f,g]  # These lists are just arbitrary; my actual data are not sequential like this
list2 = [1,2,3,4,5,6,7,8,9]

def my_function(list1, item_from_list2):
    global output_list
    do_stuff
    do_other_stuff_to_get_statistical_score_between_list1_and_item_from_list2
    output_list.append(score_from_doing_stuff)
    return output_list

for x in list2:
    result = my_function(list1, x)
</code></pre>
<p>Running the for loop on some toy data works well and the expected output is generated (a list of scores). However, both list1 and list2 will be increasing exponentially in size once I start working with my real datasets and running on one sad little CPU means this will likely take hours if not days to run.</p>
<p>My best attempt so far (I've failed several times over the last 7 hours) for parallelizing the for loop was this:</p>
<pre><code>input_args = [(list1, list2) for x in list2]

def OptimizationParallel():
    global output_list
    global list1
    global list2
    with Pool(2) as pool:
        output = pool.starmap(my_function, input_args)
    return output_list

from functools import partial
from multiprocessing import Pool
from Sams_functions import my_function

OptimizationParallel()
</code></pre>
<p>However, the output from the optimization function is really weird (a list of lists) but the correct scores are in the mess of the output. Additionally, the output_list variable remains empty and I need that populated with the scores. I popped in a bunch of print commands to see what was happening and the right results are being printed out, just not put into the output_list variable or the output variable from the attempted parallelization. And as a final insult to injury, when I timed the OptimizationParallel function it actually took longer than just running the for loop! (Which I would suspect means I'm probably meant to be multithreading rather than multiprocessing but as I said at the start of my issue, that's super murky to me)</p>
<p>Any help will be hugely appreciated!</p>
<p>UPDATE: Following help from @juanpa.arrivillaga, I have cleaned up the code a bit. The original code now looks like:</p>
<pre><code>output_list = []
list1 = [a,b,c,d,e,f,g]  # These lists are just arbitrary; my actual data are not sequential like this
list2 = [1,2,3,4,5,6,7,8,9]

def my_function(list1, item_from_list2):
    do_stuff
    do_other_stuff_to_get_statistical_score_between_list1_and_item_from_list2
    return score

for x in list2:
    score = my_function(list1, x)
    output_list.append(score)
</code></pre>
<p>However, I'm still stuck with the problem that each loop iteration takes a very long time and I'd like to run single iterations of the loop on different CPUs (maybe 4 or 6 at a time). I'm probably using the wrong keywords but I can't seem to find any tutorials to help me do that.</p>
|
<python><multithreading><function><for-loop><multiprocessing>
|
2023-04-18 17:44:01
| 0
| 456
|
Rainman
|
76,047,580
| 3,398,324
|
Replace row values in Panda columns temporarily and convert back
|
<p>I have a dataframe column which is a date, and I would like to replace it with ordered integers like this and then convert back (not one way, but a roundtrip).</p>
<p>Current DataFrame:</p>
<pre><code>data = {'date': ['1/1/2022', '1/2/2022', '1/3/2022']}
df = pd.DataFrame(data)
</code></pre>
<p>Target DataFrame:</p>
<pre><code>data = {'qid': ['0', '1','2']}
df = pd.DataFrame(data)
</code></pre>
<p>In this <a href="https://stackoverflow.com/questions/75972956/replace-row-values-in-panda-columns">post</a> I asked the one way question, and got this answer which works one way (slightly modified):</p>
<pre><code>df['qid'] = (pd.Series(pd.factorize(pd.to_datetime(df['date']), sort=True)[0]))
</code></pre>
<p>Is there a way of getting it back to the dates after this transformation?</p>
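<p>For context, here is the one-way transformation working on the example data. I suspect the <code>uniques</code> array that <code>factorize</code> also returns is the key to mapping back, since the docs say <code>uniques.take(codes)</code> reconstructs the original values, but I'm unsure of the idiomatic roundtrip:</p>

```python
import pandas as pd

data = {'date': ['1/1/2022', '1/2/2022', '1/3/2022']}
df = pd.DataFrame(data)

# One-way transformation (from the linked answer): qid holds ordered integer codes.
dates = pd.to_datetime(df['date'])
codes, uniques = pd.factorize(dates, sort=True)
df['qid'] = codes

# factorize documents that uniques.take(codes) reconstructs the original values,
# so keeping 'uniques' around seems to give the way back:
restored = pd.Series(uniques.take(df['qid']))
print(restored.equals(pd.Series(dates)))  # True
```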
|
<python><pandas><dataframe>
|
2023-04-18 17:40:32
| 1
| 1,051
|
Tartaglia
|
76,047,509
| 10,426,490
|
How to troubleshoot a Python Azure Function "An unhandled exception has occurred"?
|
<p>I have an Azure Function with Python runtime. It works well with no big issues. I have an alert setup for times when failures occur. The alert went off and I can't figure out why.</p>
<p><strong>Scenario</strong>:</p>
<ul>
<li>The function executed just fine 4/18/2023 ~07:23:13 (shown below)</li>
<li>Then the next <code>trace</code> at 4/18/2023 ~08:33:49 shows the host shutting down...</li>
</ul>
<p><strong>What is the process for troubleshooting a shutdown like this?</strong></p>
<p><strong>KQL query</strong>:</p>
<pre><code>traces
| where timestamp between (todatetime('2023-04-18T06:00:00') .. todatetime('2023-04-18T08:40:00'))
| order by timestamp asc
</code></pre>
<p><a href="https://i.sstatic.net/humy1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/humy1.png" alt="enter image description here" /></a></p>
<p><strong>EDIT 1</strong>:</p>
<p><a href="https://i.sstatic.net/CvdPt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CvdPt.png" alt="enter image description here" /></a></p>
|
<python><azure-functions><kql>
|
2023-04-18 17:32:26
| 0
| 2,046
|
ericOnline
|
76,047,250
| 236,081
|
What else goes in a Python src folder, according to actual or de facto standards?
|
<p>When using <a href="https://setuptools.pypa.io/en/latest/userguide/package_discovery.html#src-layout" rel="noreferrer">src layout</a> rather than flat layout in a Python project, is anything other than the project module expected to live in the <code>src</code> folder?</p>
<p>My understanding is that if I added <code>mypkg2</code> under <code>src</code> in the layout below, and published the result to PyPI, anyone who did a <code>pip install</code> would be able to <code>import mypkg</code> and <code>import mypkg2</code> (which might be surprising). Am I missing something?</p>
<pre><code>project_root_directory
├── pyproject.toml  # AND/OR setup.cfg, setup.py
├── ...
└── src/
    └── mypkg/
        ├── __init__.py
        ├── ...
        ├── module.py
        ├── subpkg1/
        │   ├── __init__.py
        │   ├── ...
        │   └── module1.py
        └── subpkg2/
            ├── __init__.py
            ├── ...
            └── module2.py
</code></pre>
<p>Sample layout from <a href="https://setuptools.pypa.io/en/latest/userguide/package_discovery.html#src-layout" rel="noreferrer">https://setuptools.pypa.io/en/latest/userguide/package_discovery.html#src-layout</a></p>
<p>I haven't been able to find an example of a project with anything else present, nor an explicit instruction <em>not</em> to put anything else in there. <strong>I am looking for a PEP or packaging document that answers this question.</strong></p>
|
<python><python-packaging>
|
2023-04-18 17:00:56
| 1
| 17,402
|
lofidevops
|
76,047,227
| 3,826,115
|
Keep axis limits/zoom level when switching between two Points plots in holoviews/bokeh in Python
|
<p>Say I have the following code, which switches between two holoviews Points plots based on a Radio Button. If you click on one of the points, a corresponding timeseries plot pops up to the right.</p>
<pre><code>import pandas as pd
import holoviews as hv
import panel as pn
import numpy as np
hv.extension('bokeh')
####Create example data - just copy and past this part####
##########################################################
df = pd.DataFrame(data = {'id':['w', 'x', 'y', 'z'], 'value':[1,2,3,4], 'x':range(4), 'y':range(4)})
#create another dataset, same as the example but with the values flipped
df_qc = df.copy()
df_qc['value'] = df_qc['value'].to_list()[::-1]
#create timeseries for each ID
df_w = pd.DataFrame(data = {'id':['w']*5, 'hour':range(5), 'value':np.random.random(5)})
df_x = pd.DataFrame(data = {'id':['x']*5, 'hour':range(5), 'value':np.random.random(5)})
df_y = pd.DataFrame(data = {'id':['y']*5, 'hour':range(5), 'value':np.random.random(5)})
df_z = pd.DataFrame(data = {'id':['z']*5, 'hour':range(5), 'value':np.random.random(5)})
df_ts = pd.concat([df_w, df_x, df_y, df_z])
df_ts = df_ts.set_index(['id', 'hour'])
#create another set of timeseries, same as the first but with values flipped
df_ts_qc = df_ts.copy()
for id in df_ts_qc.index.unique('id'):
    df_ts_qc.loc[id, 'value'] = df_ts_qc.loc[id, 'value'].to_list()[::-1]
##########################################################
def plot_points(df, df_ts):
    points = hv.Points(data=df, kdims=['x', 'y'], vdims=['id', 'value'])
    stream = hv.streams.Selection1D(source=points).rename(index="index")
    empty_curve = hv.Curve(df_ts.loc['w']).opts(visible=False)

    def tap_station_curve(index):
        if not index:
            curve = empty_curve
        else:
            id = df.iloc[index[0]]['id']
            curve = hv.Curve(df_ts.loc[id], label=str(id))
        return curve

    ts_curve = hv.DynamicMap(tap_station_curve, kdims=[], streams=[stream])
    point_options = hv.opts.Points(size=10, color='value', tools=['tap'])
    panel = pn.Row(points.opts(point_options), ts_curve)
    return panel
radio_button = pn.widgets.RadioButtonGroup(options=['df', 'df_qc'])
@pn.depends(radio_button.param.value)
def update_plot(option):
    if option == 'df':
        plot = plot_points(df, df_ts)
    else:
        plot = plot_points(df_qc, df_ts_qc)
    return plot
pn.Column(radio_button, update_plot)
</code></pre>
<p>It works fine, but there is one thing I'd like to change. Right now, when I switch between <code>df</code> and <code>df_qc</code> with the Radio button, the zoom level/axis limits reset. If the user has zoomed in on the plot before switching, I want that zoom level/axis limits to stay constant when switching to the other Points plot.</p>
<p>I assume there is some way to do this with storing the current axis limits, and then setting the axis limits to the old one when switching plots...but I can't quite figure it out.</p>
<p>Thanks!</p>
|
<python><bokeh><holoviews>
|
2023-04-18 16:58:12
| 1
| 1,533
|
hm8
|
76,047,135
| 12,340,367
|
Python CLI tool is not added to path after installation with pip
|
<p>I have developed a small CLI tool and it works fine for Linux users. However, Mac and sometimes Windows users struggle with the installation using pip.</p>
<p>They usually find the package inside their installed global packages, but somehow their shell is not auto-discovering the commands from the <code>.toml</code>.</p>
<p>I am unsure if the mistake is with my <code>.toml</code> file. Maybe one of you can check?</p>
<p>Also, how would I go about setting up the <code>.zshrc</code> PATH so that it includes Python-installed CLI packages/commands?</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"
[project]
name = "wd-cli"
version = "0.0.8"
requires-python = ">=3.8"
classifiers = [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
]
dependencies = [
"charset-normalizer==2.1.0",
"click==8.1.3",
"prompt-toolkit==1.0.14",
"pyfiglet==0.8.post1",
"Pygments==2.12.0",
"PyInquirer==1.0.3",
"requests==2.28.1",
"xdg==5.1.1",
"python-dotenv==0.20.0",
"click-aliases==1.0.1"
]
[project.optional-dependencies]
test = [
"pytest==6.2.5",
"pytest-cov==3.0.0",
]
[tool.setuptools.packages]
find = {}
[project.scripts]
wd-cli="src.main:main"
wd="src.main:main"
</code></pre>
<p>The project set up:</p>
<p><a href="https://i.sstatic.net/1NLVG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1NLVG.png" alt="enter image description here" /></a></p>
<p>The root level main.py is not used. I struggled with imports and getting it to work hence I put everything into an <code>src</code> folder, which somehow solved my issues for linux.</p>
<p>main.py (not used as far as I know)</p>
<pre class="lang-py prettyprint-override"><code>from src import main

if __name__ == '__main__':
    main()
</code></pre>
<p>src/main.py (actually used for cli)</p>
<pre class="lang-py prettyprint-override"><code>def main():
    load_dotenv(f"{os.getcwd()}/.env")
    cli()

if __name__ == '__main__':
    main()
</code></pre>
|
<python><windows><macos><pip><python-packaging>
|
2023-04-18 16:46:27
| 1
| 582
|
Snake_py
|
76,047,104
| 2,741,711
|
How to sublass ndarray to transform as rvalue, but preserve lvalue
|
<p>I would like to subclass <code>np.ndarray</code> such that my new custom array has the following properties,</p>
<ol>
<li>When defining the array, I would also provide a <code>transform_fwd</code> function, as a class parameter.</li>
<li>Whenever this array is used in any computation, it applies and provides the transformed value.</li>
<li>But when assigning the values to the array, I should be able to use original "un-transformed" values</li>
</ol>
<p>i.e. API should be like:</p>
<pre class="lang-py prettyprint-override"><code>my_arr = MyNpArr([10.,100.,1000.], transform_fwd=lambda x: np.log10(x))
y = 2 * my_arr # equivalent to y = 2 * np.log10(my_arr)
print(y) # 2 4 6
my_arr[2] = 10000
y = 2 * my_arr
print(y) # 2 4 8
</code></pre>
<p>Is this possible?</p>
|
<python><numpy><numpy-ndarray>
|
2023-04-18 16:41:36
| 0
| 382
|
ipcamit
|
76,047,043
| 2,789,334
|
How to annotate grouped bars in a facetgrid with custom strings
|
<p>My seaborn plot is shown below. Is there a way to add the info in the <code>flag</code> column (which will always be a single character or empty string) in the center (or top) of the bars? Ideally the answer would not require redoing the plot.</p>
<p><a href="https://stackoverflow.com/questions/55586912/seaborn-catplot-set-values-over-the-bars">This</a> answer seems to have some pointers but I am not sure how to connect it back to the original dataframe to pull info in the <code>flag</code> column.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
df = pd.DataFrame([
['C', 'G1', 'gbt', 'auc', 0.7999, "†"],
['C', 'G1', 'gbtv2', 'auc', 0.8199, "*"],
['C', 'G1', 'gbt', 'pr@2%', 0.0883, "*"],
['C', 'G1', 'gbt', 'pr@10%', 0.0430, ""],
['C', 'G2', 'gbt', 'auc', 0.7554, ""],
['C', 'G2', 'gbt', 'pr@2%', 0.0842, ""],
['C', 'G2', 'gbt', 'pr@10%', 0.0572, ""],
['C', 'G3', 'gbt', 'auc', 0.7442, ""],
['C', 'G3', 'gbt', 'pr@2%', 0.0894, ""],
['C', 'G3', 'gbt', 'pr@10%', 0.0736, ""],
['E', 'G1', 'gbt', 'auc', 0.7988, ""],
['E', 'G1', 'gbt', 'pr@2%', 0.0810, ""],
['E', 'G1', 'gbt', 'pr@10%', 0.0354, ""],
['E', 'G1', 'gbtv3','pr@10%',0.0454, ""],
['E', 'G2', 'gbt', 'auc', 0.7296, ""],
['E', 'G2', 'gbt', 'pr@2%', 0.1071, ""],
['E', 'G2', 'gbt', 'pr@10%', 0.0528, "†"],
['E', 'G3', 'gbt', 'auc', 0.6958, ""],
['E', 'G3', 'gbt', 'pr@2%', 0.1007, ""],
['E', 'G3', 'gbt', 'pr@10%', 0.0536, "†"],
], columns=["src","grp","model","metric","val","flag"])
cat = sns.catplot(data=df, x="grp", y="val", hue="model", kind="bar", sharey=False,
col="metric", row="src")
plt.show()
</code></pre>
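<p>To illustrate on a single Axes the kind of labelling I mean (with made-up bars; the part I can't work out is doing this per facet while pulling the matching <code>flag</code> strings from the dataframe):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
bars = ax.bar(["G1", "G2", "G3"], [0.80, 0.76, 0.74])
flags = ["†", "*", ""]  # one custom string per bar

# bar_label (matplotlib >= 3.4) can place arbitrary strings on a bar container.
texts = ax.bar_label(bars, labels=flags, label_type="center")
print([t.get_text() for t in texts])  # ['†', '*', '']
```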
|
<python><matplotlib><seaborn><bar-chart><catplot>
|
2023-04-18 16:34:16
| 1
| 1,068
|
ironv
|
76,046,921
| 10,966,677
|
Django: while updating record: ValueError: Cannot force an update in save() with no primary key
|
<p>There are similar issues on SO, but none quite like this.</p>
<p>I would like to update a field in a record of an m2m model which has a unique constraint on <code>attendee</code> + <code>training</code>. This model is defined as (<code>model.py</code>):</p>
<pre><code>class Occurrence(models.Model):
    attendee = models.ForeignKey(Attendee, on_delete=models.CASCADE)
    training = models.ForeignKey(Training, on_delete=models.CASCADE)
    attended_date = models.DateField(default=date(1900, 12, 31))

    class Meta:
        constraints = [
            models.UniqueConstraint(fields=['attendee', 'training'], name='unique_attendee_training')
        ]
</code></pre>
<p>Now, consider e.g. that John has taken the Python course and already exists as a record in the db. If I try to get the date of when the training occurred, I would go like this:</p>
<pre><code>from trainings.models import Attendee, Training, Occurrence
from datetime import date, datetime
attendee = Attendee.objects.get(pk='john@gmail.com')
training = Training.objects.get(pk='python310')
occurrence = Occurrence.objects.get(attendee=attendee, training=training)
print(occurrence.attended_date)
# datetime.date(2021, 11, 4)
</code></pre>
<p>However, if I try to update the date of this record, I get the error.</p>
<pre><code>occurrence = Occurrence(attendee=attendee, training=training, attended_date=date(2021, 11, 5))
occurrence.save(update_fields=["attendee", "training", "attended_date"])
</code></pre>
<p>The error being:</p>
<pre><code>ValueError: Cannot force an update in save() with no primary key.
</code></pre>
<p>How do I update this record?</p>
<p><strong>Note</strong></p>
<p>I believe this should be enough to understand the question. But if you want to reproduce the whole issue, I post here the models (<code>model.py</code>) for Attendees and Trainings.</p>
<pre><code>class Attendee(models.Model):
    first_name = models.CharField('First Name', max_length=30, blank=False)
    last_name = models.CharField('Last Name', max_length=30, blank=False)
    email = models.EmailField("Email Address", max_length=75, blank=False, primary_key=True)
</code></pre>
<p>Using <code>email</code> as pk.</p>
<pre><code>class Training(models.Model):
    long_name = models.CharField('Training Name', max_length=80, blank=False, null=False)
    short_name = models.CharField('Abbreviation', max_length=10, blank=False, null=False, primary_key=True)
    description = models.TextField('Description', blank=True)
    alumni = models.ManyToManyField(Attendee, through='Occurrence')
</code></pre>
<p>Using <code>short_name</code> as pk. With <code>alumni</code> that relates <code>Attendee</code> and <code>Occurrence</code>.</p>
|
<python><django>
|
2023-04-18 16:20:46
| 2
| 459
|
Domenico Spidy Tamburro
|
76,046,749
| 15,414,616
|
Falcon API - process respond after client gets it
|
<p>I have a <code>falcon</code> API (WSGI) that I wrote that needs to be as fast as possible.</p>
<p>At one point in the code I have the response that I need to send to the client, but I also have a data lake to which I send all responses plus some extra calculation results, using Kafka. I want to split off the extra calculations and the Kafka send, as the client does not need to wait for them and they take more time than I'd like.</p>
<p>Is there a way to do it in Falcon without handling the threads by myself like this:</p>
<pre><code>class Compute(Thread):
    def __init__(self, request):
        Thread.__init__(self)
        self.request = request

    def run(self):
        print("start")
        time.sleep(5)
        print(self.request)
        print("done")

class FalconApi:
    def on_post(self, request: falcon.Request, response: falcon.Response):
        thread_a = Compute(request.__copy__())
        thread_a.start()
</code></pre>
<p>I run the API using <code>gunicorn</code>, so I thought maybe I could use the <code>post_request</code> hook, but I can't seem to get the response data in the hook. I also tried using <code>asyncio</code>, but it seems it does not like that because my app is WSGI.</p>
<p>My async code:</p>
<pre><code>class FalconApi:
    @staticmethod
    def http_post(response: falcon.Response) -> None:
        requests.post(consts.ENDPOINT, headers=response.headers, data=json.dumps(response.data))

    async def http_post_async(self, response: falcon.Response) -> None:
        await asyncio.to_thread(self.http_post, response)

    def on_post(self, request: falcon.Request, response: falcon.Response):
        self.http_post_async(response)
</code></pre>
<p>and the error that I got: <code>RuntimeWarning: coroutine 'http_post_async' was never awaited</code></p>
<p>and when I changed it to:</p>
<pre><code>    async def on_post(self, request: falcon.Request, response: falcon.Response):
        self.http_post_async(response)
</code></pre>
<p>I got: <code>TypeError: The <bound methodon_post of object at 0x7f7cf0c4ffd0>> responder must be a regular synchronous method to be used with a WSGI app. </code></p>
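<p>Nothing Falcon-specific seems required for the pattern itself, so here is a plain-stdlib sketch of what I'm considering: a single background worker thread consuming a queue, so the responder only pays for a <code>queue.put()</code> (the <code>item * 2</code> step is a stand-in for my Kafka send and extra calculations):</p>

```python
import queue
import threading

work_queue = queue.Queue()
results = []

def worker():
    # Background consumer: runs the slow post-processing off the request path.
    while True:
        item = work_queue.get()
        if item is None:  # sentinel to shut the worker down
            break
        results.append(item * 2)  # stand-in for Kafka send + extra calculations
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# In the responder I would just enqueue and return immediately:
for payload in (1, 2, 3):
    work_queue.put(payload)

work_queue.join()  # only for this demo; the real responder would not wait
print(results)  # [2, 4, 6]
```

<p>Is something like this reasonable under gunicorn, or is there an idiomatic Falcon/gunicorn way to do it?</p>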
|
<python><gunicorn><falconframework>
|
2023-04-18 16:02:19
| 1
| 437
|
Ema Il
|
76,046,700
| 3,798,897
|
How can you quickly find all the nodes connected to a subset of a bipartite graph?
|
<p>Given a set of left nodes and right nodes, and edges between them (a bipartite graph), I need to support quick lookups from sets of left nodes to the subset of right nodes connected to anything on the left.</p>
<p>That's easy if the graph is very sparse or dense. Below is a sample adjacency-list implementation that would support fast <code>connected_endpoints()</code> calls on sparse graphs. For middling amounts of connectivity though, the intermediate computations look like <code>O(len(input) * len(result))</code> despite the size of the data involved suggesting there might be an <code>O(len(input) + len(result))</code> solution.</p>
<p>Does there exist a data structure supporting these 3 operations (or something similar) relatively quickly -- maybe O(1) for add/remove and O(in+out) for the connected edges search, give or take polylogarithmic factors?</p>
<pre class="lang-py prettyprint-override"><code>from typing import *
from collections import defaultdict

A = TypeVar('A')
B = TypeVar('B')

class Graph(Generic[A, B]):
    def __init__(self):
        self.edges = defaultdict(set)

    def set_edge(self, start: A, end: B):
        """Desired: Amortized O(1)"""
        self.edges[start].add(end)

    def unset_edge(self, start: A, end: B):
        """Desired: Amortized O(1)"""
        s = self.edges[start]
        s.discard(end)
        if not s:
            self.edges.pop(start, None)

    def connected_endpoints(self, start: Set[A]) -> Set[B]:
        """Desired: Amortized O(len(start) + len(<return>))"""
        empty = set()
        if not start:
            return empty
        return set.union(*(self.edges.get(node, empty) for node in start))
</code></pre>
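<p>To make the desired behaviour concrete, here is how I exercise the structure (class repeated in condensed form so the snippet runs standalone):</p>

```python
from collections import defaultdict
from typing import Generic, Set, TypeVar

A = TypeVar('A')
B = TypeVar('B')

class Graph(Generic[A, B]):
    # Same adjacency-list structure as above, condensed for the example.
    def __init__(self):
        self.edges = defaultdict(set)

    def set_edge(self, start: A, end: B):
        self.edges[start].add(end)

    def unset_edge(self, start: A, end: B):
        s = self.edges[start]
        s.discard(end)
        if not s:
            self.edges.pop(start, None)

    def connected_endpoints(self, start: Set[A]) -> Set[B]:
        empty = set()
        if not start:
            return empty
        return set.union(*(self.edges.get(node, empty) for node in start))

g = Graph()
g.set_edge('a', 1)
g.set_edge('a', 2)
g.set_edge('b', 2)
g.set_edge('b', 3)
g.unset_edge('b', 3)
print(g.connected_endpoints({'a', 'b'}))  # {1, 2}
print(g.connected_endpoints(set()))       # set()
```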
|
<python><algorithm><math><graph-theory>
|
2023-04-18 15:54:51
| 2
| 7,191
|
Hans Musgrave
|
76,046,538
| 18,769,241
|
Is there a way to check if a dataframe is contained within another?
|
<p>I am using pandas to check whether two dataframes are contained within each other. The method <code>.isin()</code> is only helpful (e.g., returns <code>True</code>) when the labels match (ref: <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isin.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isin.html</a>), but I want to go further than this, to include cases where the labels don't match.</p>
<p>Example: df1:</p>
<pre><code>+----+----+----+----+----+
| 3 | 4 | 5 | 6 | 7 |
+----+----+----+----+----+
| 11 | 13 | 10 | 15 | 12 |
+----+----+----+----+----+
| 8 | 2 | 9 | 0 | 1 |
+----+----+----+----+----+
| 14 | 23 | 31 | 21 | 19 |
+----+----+----+----+----+
</code></pre>
<p>df2:</p>
<pre><code>+----+----+
| 13 | 10 |
+----+----+
| 2 | 9 |
+----+----+
</code></pre>
<p>I want the output to be <code>True</code>, since <code>df2</code> is inside <code>df1</code>. Any ideas how to do that using Pandas?</p>
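<p>To make the label issue concrete: with default integer labels the positions never line up, so <code>isin</code> reports no containment even though <code>df2</code>'s block clearly appears inside <code>df1</code>:</p>

```python
import pandas as pd

df1 = pd.DataFrame([[3, 4, 5, 6, 7],
                    [11, 13, 10, 15, 12],
                    [8, 2, 9, 0, 1],
                    [14, 23, 31, 21, 19]])
df2 = pd.DataFrame([[13, 10],
                    [2, 9]])

# isin with a DataFrame aligns on index and column labels, so df2's (0, 0)
# is compared against df1's (0, 0) and so on -- every comparison misses.
print(df2.isin(df1))  # every cell is False
```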
|
<python><pandas>
|
2023-04-18 15:41:04
| 2
| 571
|
Sam
|
76,046,536
| 607,846
|
Filter a Query Set by Reverse Foreign Key
|
<p>If I have the following:</p>
<pre><code>class Asset(models.Model):
    name = models.TextField(max_length=150)
    project = models.ForeignKey('Project')

class Project(models.Model):
    name = models.TextField(max_length=150)
</code></pre>
<p>What argument do I pass to <code>Project.objects.filter()</code> to get all Projects that have no associated Assets.</p>
|
<python><django><django-queryset>
|
2023-04-18 15:40:53
| 2
| 13,283
|
Baz
|
76,046,452
| 11,143,781
|
Tensorflow CNN MaxPooling1D is incompatible with the layer error
|
<p>I have 2D signal data that is shaped as follows: <code>(number of signal combinations x number of signals x number of signal points x channel)</code>, so more clearly, it is: <code>(10000 x 2 x 51 x 1)</code>.
I treat my data as an image (basically, its height is 2, its width is 51), and I process it through the CNN architecture as follows:</p>
<pre><code>input_layer = Input(shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3]))
x = Conv2D(filters=32, kernel_size=(2, 3))(input_layer)
x = PReLU()(x)
x = MaxPooling1D(pool_size=2)(x)
x = Dropout(0.2)(x)
x = Flatten()(x)
x = Dense(units=256)(x)
x = PReLU()(x)
output_layer = Dense(units=1, activation='linear')(x)
</code></pre>
<p>However, I got error from first maxpooling layer:</p>
<pre><code>ValueError: Input 0 of layer "max_pooling1d" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 1, 49, 32)
</code></pre>
<p>After the first 2D Conv operation the data size becomes <code>(1, 49, 32)</code>, i.e. height 1 and width 49, so it is effectively 1D. I guess I'm doing something wrong with the data shape representation but don't know where or how. Thanks in advance for any help.</p>
<p>Edit: If I apply MaxPooling2D then I got this error:</p>
<pre><code>ValueError: Exception encountered when calling layer "max_pooling2d" (type MaxPooling2D).
Negative dimension size caused by subtracting 2 from 1 for '{{node max_pooling2d/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1], padding="VALID", strides=[1, 2, 2, 1]](Placeholder)' with input shapes: [?,1,49,32].
</code></pre>
|
<python><tensorflow><machine-learning><conv-neural-network>
|
2023-04-18 15:31:28
| 0
| 316
|
justRandomLearner
|
76,046,434
| 8,182,381
|
How to a read camera video stream and store on FTP server using python
|
<p>I'm new to ONVIF streaming using Python. I would like to read a video stream from a camera and directly store the video on my FTP server. I came up with the following piece of code.</p>
<pre><code>from ftplib import FTP
from io import BytesIO
import cv2

ftp = FTP('my-ftp-host-server')
ftp.login('my-ftp-user-id', 'my-ftp-password')

cap = cv2.VideoCapture('rtsp://camera-username:camera-password@192.168.2.122/1')

while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        flo = BytesIO(frame)
        ftp.storbinary('STOR video.mp4', flo)
    else:
        break

cap.release()
ftp.quit()
</code></pre>
<p>Unfortunately, when I download the stored video afterward, it is unreadable.</p>
<p>Could you please help me?</p>
|
<python><ftp><video-streaming><ip-camera><onvif>
|
2023-04-18 15:29:33
| 1
| 303
|
Herval NGANYA
|
76,046,238
| 6,002,560
|
How to pass to from_json a dynamic json schema stored in a column
|
<p>I have this dataframe schema:</p>
<pre><code>root
|-- data: array (nullable = false)
| |-- element: string (containsNull = true)
|-- schema: string (nullable = false)
</code></pre>
<p>and the dataframe looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>data</th>
<th>schema</th>
</tr>
</thead>
<tbody>
<tr>
<td>[{"APN":"Test1","DeviceIPAddress":"Test2"}, {"APN":"Test3","DeviceIPAddress":"Test4"}]</td>
<td>STRUCT&lt;APN: STRING, DeviceIPAddress: STRING&gt;</td>
</tr>
</tbody>
</table>
</div>
<p>Basically this dataframe is composed of a column that is an array of JSON strings and another column that is the corresponding schema.</p>
<p>I want to explode the array so that each element becomes a row; all elements share the same schema. I want to get this result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>APN</th>
<th>DeviceIpAddress</th>
</tr>
</thead>
<tbody>
<tr>
<td>Test1</td>
<td>Test2</td>
</tr>
<tr>
<td>Test3</td>
<td>Test4</td>
</tr>
</tbody>
</table>
</div>
<p>If I hard-code the schema and pass it as a string to the <code>from_json</code> function it works, but I want it to be dynamic.</p>
<p>This is the code I'm using with the relative error:</p>
<pre><code>df.withColumn("jsonData", explode("data"))\
.withColumn("jsonData", from_json(col("jsonData"), col("schema")))\
.withColumn("jsonData", explode(array("jsonData")))\
.select("jsonData.*")
</code></pre>
<p>This is the error I'm getting:</p>
<blockquote>
<p>pyspark.sql.utils.AnalysisException: Schema should be specified in DDL format as a string literal or output of the schema_of_json/schema_of_csv functions instead of "schema"</p>
</blockquote>
<p>Basically the error happens when I pass <code>col("schema")</code> to the <code>from_json</code> function. If I define the schema statically and pass it as a variable, it works. Is there a way to pass the schema stored in the column to <code>from_json</code>?</p>
|
<python><json><apache-spark><pyspark>
|
2023-04-18 15:08:34
| 1
| 302
|
Federico Rizzo
|
76,046,234
| 2,059,689
|
Pytest - asserts at teardown of class scoped fixture
|
<p>I have a parametrized test suite that is organized as a class containing multiple test cases and a class-scoped fixture. I'm looking for a way to run some assertions after all tests complete for a given fixture instance. Any ideas on how I can achieve that?</p>
<p>I tried putting <code>assert</code> / <code>pytest.fail</code> in fixture finalization, but pytest doesn't seem to support this (see example below).</p>
<pre class="lang-python prettyprint-override"><code>import pytest
class Car:
def __init__(self, car_model: str):
pass
class TestCar:
@pytest.fixture(scope='class', params=["Ford", "Jeep"])
def car(self, request):
car = Car(car_model=request.param)
yield car
# Here I want some "teardown" checks performed after all tests that depend on the
# fixture instance have completed. But pytest doesn't seem to support it.
part_test_ratio = car.used_parts / car.total_parts
# assert part_test_ratio > 0.9, "At least 90% of parts must be tested!"
def test_ignition(self, car):
pass
def test_steering(self, car):
pass
def test_brakes(self, car):
pass
</code></pre>
<p>Note: The fixture performs a hefty operation, so I can't create it per each test.</p>
<p>It also raises the question of how such failures could be reported. I would accept either option: showing it as a separate "dummy" test case with a name deduced from the fixture name, or retroactively marking all the tests that depend on the fixture as failed.</p>
<h5>Normal results:</h5>
<pre><code>PASSED car_test.py::TestCar::test_ignition[Ford]
PASSED car_test.py::TestCar::test_steering[Ford]
PASSED car_test.py::TestCar::test_brakes[Ford]
PASSED car_test.py::TestCar::test_ignition[Jeep]
PASSED car_test.py::TestCar::test_steering[Jeep]
PASSED car_test.py::TestCar::test_brakes[Jeep]
</code></pre>
<h5>Option 1:</h5>
<pre><code>...
PASSED car_test.py::TestCar::test_brakes[Jeep]
FAILED car_test.py::TestCar::car[Jeep]
</code></pre>
<h5>Option 2:</h5>
<pre><code>...
FAILED car_test.py::TestCar::test_ignition[Jeep]
FAILED car_test.py::TestCar::test_steering[Jeep]
FAILED car_test.py::TestCar::test_brakes[Jeep]
</code></pre>
|
<python><pytest>
|
2023-04-18 15:08:14
| 1
| 3,200
|
vvv444
|
76,046,148
| 8,219,760
|
Why is `obj.__del__()` not called when the collection containing the object reference is deleted?
|
<p>I am trying to build a <code>Process</code> subclass to utilize multiple GPUs in my desktop.</p>
<pre class="lang-py prettyprint-override"><code>class GPUProcess(mp.Process):
used_ids: list[int] = []
next_id: int = 0
def __init__(self, *, target: Callable[[Any], Any], kwargs: Any):
gpu_id = GPUProcess.next_id
if gpu_id in GPUProcess.used_ids:
raise RuntimeError(
f"Attempt to reserve reserved processor {gpu_id} {self.used_ids=}"
)
GPUProcess.next_id += 1
GPUProcess.used_ids.append(gpu_id)
self._gpu_id = gpu_id
        # Define target process func with constant gpu_id
def _target(**_target_kwargs):
target(
**_target_kwargs,
gpu_id=self.gpu_id,
)
super(GPUProcess, self).__init__(target=_target, kwargs=kwargs)
@property
def gpu_id(self):
return self._gpu_id
def __del__(self):
GPUProcess.used_ids.remove(self.gpu_id)
def __repr__(self) -> str:
return f"<{type(self)} gpu_id={self.gpu_id} hash={hash(self)}>"
# Test creation
def test_process_creation():
# Expect two gpus
def dummy_func(*args):
return args
processes = []
for _ in range(2):
p = GPUProcess(
target=dummy_func,
kwargs=dict(a=("a", "b", "c")),
)
processes.append(p)
for p in processes:
p.start()
for p in processes:
p.join()
del processes
assert GPUProcess.used_ids == [], f"{GPUProcess.used_ids=}!=[]"
if __name__ == "__main__":
test_process_creation()
</code></pre>
<p><code>__del__</code> is not called for the second process.</p>
<p><code>AssertionError: GPUProcess.used_ids=[1]!=[]</code></p>
<p>Why is the second <code>__del__</code> not called?</p>
<p>Later, I'd utilize this class with <code>mp.Pool</code> to run a large set of payloads, using one <code>GPUProcess</code> per GPU and a function that uses the <code>gpu_id</code> keyword to decide the utilized device. Is this even a sensible approach in Python?</p>
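<p>As a side note on why the assertion can fail: in CPython, <code>__del__</code> runs only when the last reference to an object disappears, and a for-loop variable keeps referencing the final element after the loop ends, just as <code>p</code> does in the test above. A minimal sketch of that effect (class name hypothetical, assuming CPython's reference counting):</p>

```python
class Tracker:
    """Tiny stand-in for GPUProcess: records which instances were finalized."""
    deleted = []

    def __init__(self, n):
        self.n = n

    def __del__(self):
        Tracker.deleted.append(self.n)


items = [Tracker(0), Tracker(1)]
for t in items:  # after the loop, `t` still references Tracker(1)
    pass

del items  # Tracker(0) loses its last reference; Tracker(1) survives via `t`
snapshot = list(Tracker.deleted)  # [0] in CPython

del t  # only now does Tracker(1) get finalized
```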
|
<python><multiprocessing><del>
|
2023-04-18 14:59:34
| 1
| 673
|
vahvero
|
76,046,058
| 5,057,022
|
Numpy H Stacking
|
<p>I have these arrays:</p>
<pre><code>co_ords = np.random.rand(10,2)
labels = np.random.choice(2,10)
</code></pre>
<p>which look like</p>
<pre><code>array([[0.27195884, 0.95210374],
[0.86416174, 0.88711233],
[0.97141129, 0.22182439],
[0.85683001, 0.85308369],
[0.45731582, 0.24709513],
[0.53758078, 0.90887869],
[0.21484153, 0.86485449],
[0.44954176, 0.82612431],
[0.40761828, 0.6199162 ],
[0.87705535, 0.40627418]])
</code></pre>
<p>and</p>
<pre><code>array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
</code></pre>
<p>I want to combine them such that the label is included as the third member of the co-ordinate array, like so:</p>
<pre><code>array([[0.27195884, 0.95210374,1],
[0.86416174, 0.88711233,1], ...
</code></pre>
<p>but I can't figure out the correct numpy manipulation to do this.</p>
<p>When I try</p>
<pre><code>combined = np.hstack((co_ords,labels.T))
</code></pre>
<p>I get</p>
<pre><code>array([0.27195884, 0.95210374, 0.86416174, 0.88711233, 0.97141129,
0.22182439, 0.85683001, 0.85308369, 0.45731582, 0.24709513,
0.53758078, 0.90887869, 0.21484153, 0.86485449, 0.44954176,
0.82612431, 0.40761828, 0.6199162 , 0.87705535, 0.40627418,
1. , 1. , 1. , 1. , 1. ,
1. , 1. , 1. , 0. , 1. ,
0. , 0. , 0. , 1. , 1. ,
1. , 1. , 0. , 1. , 0. ,
1. , 1. , 1. , 1. , 1. ,
0. , 0. , 0. , 0. , 0. ])
</code></pre>
<p>which is incorrectly flattened. What should I do instead?</p>
<p>Thanks!</p>
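<p>For reference, the <code>(10, 3)</code> shape described above needs the labels promoted to a column before stacking; <code>np.column_stack</code> does that promotion automatically (note that <code>.T</code> on a 1-D array is a no-op, which is why the original <code>hstack</code> call flattened everything). A sketch of both spellings:</p>

```python
import numpy as np

co_ords = np.random.rand(10, 2)
labels = np.random.choice(2, 10)

# column_stack treats the 1-D labels as one extra column
combined = np.column_stack((co_ords, labels))

# equivalent: hstack after reshaping labels from (10,) to (10, 1)
combined_alt = np.hstack((co_ords, labels[:, None]))
```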
|
<python><arrays><numpy>
|
2023-04-18 14:50:42
| 1
| 383
|
jolene
|
76,046,035
| 6,930,441
|
Combining one entire list with iterations of a second list
|
<p>I'm stuck on a seemingly simple problem. I have two lists and would like to generate tuples pairing the entirety of list1 with individual elements from list2. I.e.:</p>
<pre><code>list1 = [a,b,c,d]
list2 = [1,2,3]
</code></pre>
<p>Desired output</p>
<pre><code>output_list = [([a,b,c,d],1), ([a,b,c,d],2),([a,b,c,d],3)]
</code></pre>
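<p>Reading the desired output literally (each tuple pairs the whole of <code>list1</code> with one element of <code>list2</code>), a list comprehension expresses it directly; a sketch under that reading:</p>

```python
list1 = ['a', 'b', 'c', 'd']
list2 = [1, 2, 3]

# every tuple shares the same list1 object; use list(list1) inside the
# comprehension instead if independent copies are needed
output_list = [(list1, n) for n in list2]
```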
|
<python><list><iteration>
|
2023-04-18 14:48:46
| 1
| 456
|
Rainman
|
76,045,880
| 21,420,742
|
What is an alternative to using map function in python
|
<p>I have a dataset and need to get a count of reports for each person. I am using <code>map</code>, but it does not seem to be working.</p>
<p>Sample Data:</p>
<pre><code>ID Manager_ID Manger_Pos_Num
101 103 1111
102 103 1111
103 106 2222
104 103 3333
105 106 2222
106
</code></pre>
<p>Desired output:</p>
<pre><code> ID Reports
101 0
102 0
103 2
103 1
104 0
105 0
106 2
</code></pre>
<p>I am using</p>
<pre><code>counts = df.groupby('Manager_Pos_Num')['ID'].nunique()
df['Reports'] = df['ID'].map(counts).fillna(0).astype(int)
</code></pre>
<p>When I use this I get all zeros, and I am not sure why. I have checked the <code>counts</code> variable and it looks correct, so I don't know whether the problem is the <code>map</code> call or something else. Any suggestions?</p>
<p>I am using <code>Manager_Pos_Num</code> instead of the <code>ManagerID</code> for counting because I want to see if that manager has been in another <code>Manager_Pos_Num</code>.</p>
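<p>For what it's worth, an all-zero result is exactly what <code>map</code> + <code>fillna(0)</code> produces when none of the lookup values appear in the mapping's index, which is consistent with mapping counts keyed by <code>Manager_Pos_Num</code> over the <code>ID</code> column. A minimal, hypothetical reproduction:</p>

```python
import pandas as pd

# counts is indexed by Manager_Pos_Num values...
counts = pd.Series({1111: 2, 2222: 2, 3333: 1})

# ...but the lookup here is done with ID values, which never match those keys,
# so every lookup produces NaN and fillna turns it into 0
ids = pd.Series([101, 102, 103, 104, 105, 106])
reports = ids.map(counts).fillna(0).astype(int)
```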
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-04-18 14:32:45
| 1
| 473
|
Coding_Nubie
|
76,045,724
| 5,711,995
|
awaiting a future which never resolves: set_result called by a callback triggered by an event emitter from a background coroutine / task
|
<p>I have some code for communicating over a websocket connection. I want to send a message to the server and return a future which will resolve with the data sent by the server once the server sends an end signal. The code collecting the responses should run in the background, in a non-blocking way, collecting the responses until an end signal is sent, which should trigger the future to resolve. Multiple of these should be able to run concurrently.</p>
<p>The code I am using to do this is below:</p>
<pre><code>import asyncio
import websocket
from pyee.base import EventEmitter
import json
def done_callback(future):
try:
result = future.result()
except Exception as exc:
raise
class WebSocketResponder(EventEmitter):
def __init__(self):
super().__init__()
return
async def on_response(self, response):
# Notify the callee that a payload has been received.
print("reading response: ", response)
if response != "end":
print("got non end response: ", response)
self.emit("data", response)
else:
print("emitting end")
self.emit("end")
return
class Processor:
def __init__(self, wsApiUrl) -> None:
self.wsApiUrl = wsApiUrl
# websocket.enableTrace(True) # uncomment for detailed trace of websocket communications
self.websocket = websocket.WebSocket()
self.websocket.connect(self.wsApiUrl)
async def on_message():
while True:
response = self.websocket.recv()
await self.process(response)
loop = asyncio.get_running_loop()
self.receiving_coroutine = asyncio.run_coroutine_threadsafe(on_message(), loop)
self.receiving_coroutine.add_done_callback(done_callback) # adds callback to task which raises exceptions occurring in coroutine
self.responders = {}
self.responder_count = 0
return
async def process(self, response):
loaded_response = json.loads(response)
id = loaded_response["id"]
msg = loaded_response["msg"]
responder = self.responders[id]
await responder.on_response(msg)
return
async def send(self, msg):
future = asyncio.Future()
response = ""
responder = WebSocketResponder()
self.responders[self.responder_count] = responder
self.websocket.send(json.dumps({"id": self.responder_count, "msg": msg}))
self.responder_count += 1 # increment responder count
@responder.on("data")
def data_handler(payload):
nonlocal response, future
print("adding to response: ", payload)
response += payload
response += "\n"
@responder.on("end")
def end_handler():
print("end handler triggered")
nonlocal response, future
print("setting result: ", response)
future.set_result(response)
return future
async def doTheThing():
wsApiUrl = "ws://127.0.0.1:7890"
processor = Processor(wsApiUrl)
result = await processor.send("Hello, Server")
return await result
async def main():
result = await doTheThing()
print("result: ", result)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
</code></pre>
<p>The code running the websocket server which is responding to the messages is here:</p>
<pre><code>import websockets
import asyncio
import json
# Server data
PORT = 7890
print("Server listening on Port " + str(PORT))
# A set of connected ws clients
connected = set()
# The main behavior function for this server
async def echo(websocket, path):
print("A client just connected")
# Store a copy of the connected client
connected.add(websocket)
# Handle incoming messages
try:
async for message in websocket:
print("Received message from client: " + message)
loaded_message = json.loads(message)
id = loaded_message["id"]
msg_received = loaded_message["msg"]
# Send a response to all connected clients except sender
for conn in connected:
if conn != websocket:
print("responding to another websocket")
await conn.send(json.dumps({"id": id, "msg": f"Someone said: {msg_received}"}))
else:
print("responding to sender")
await conn.send(json.dumps({"id": id, "msg": f"Thanks for your message: {msg_received}"}))
print("sending more")
await conn.send(json.dumps({"id": id, "msg": "Do you get this?"}))
print("sending end")
await conn.send(json.dumps({"id": id, "msg": "end"}))
print("end sent")
# Handle disconnecting clients
except websockets.exceptions.ConnectionClosed as e:
print("A client just disconnected")
finally:
connected.remove(websocket)
# Start the server
start_server = websockets.serve(echo, "localhost", PORT)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
</code></pre>
<p>This is a simplified mock up of a much more complex system, the websocket server being a separate service implemented by a third party external to myself.</p>
<p>I create an instance of an event emitter class which is used to react to response from the websocket connection, I then send a message, with the id of this instance in the message.</p>
<p>There is a coroutine / task which runs an async function that should run forever, receiving messages from the websocket and running an async function to process each message. This process function retrieves the event emitter object and runs the response function on the event emitter instance to process the response. This event emitter emits data events, which trigger a callback created in the send call, and once the server sends an end signal, the event emitter emits an end event which triggers a callback to set the result of the future created by the send call.</p>
<p>This code runs as expected, except when the end_handler is triggered, it prints the expected messages but the future doesn't resolve, the await call just hangs. I expect the issue is related to the set_result function being run in another thread / event loop, but I have been unable to resolve the issue, despite trying various methods.</p>
|
<python><events><websocket><python-asyncio><future>
|
2023-04-18 14:17:49
| 1
| 1,609
|
SomeRandomPhysicist
|
76,045,438
| 1,860,222
|
Algorithm for matching collection items in python
|
<p>I'm working on a game program in python. On their turn, a player can purchase a card from a collection of 12 available cards. Each card has a cost consisting of varying quantities of 5 different colored 'gems'. The player has a bank of gems in the same 5 colors (plus gold 'wildcards'). I need to design a method that compares the card costs to the available gems for the player and returns a list of which cards they can purchase.</p>
<p>The obvious solution would be to use brute force, looping through each available card, then looping through each 'color' and comparing it against the player's resources. My end goal with this project however is to use this game to experiment with machine learning. As such, I'm looking to optimize anything that is going to be called frequently. I feel like there is a more efficient solution but I'm not sure what that would be.</p>
<p>Any suggestions?</p>
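<p>For concreteness, the brute-force baseline described above might look like the sketch below (function name and data shapes hypothetical), treating gold as a wildcard that can cover any color shortfall:</p>

```python
def affordable_cards(cards, bank):
    """cards: list of dicts mapping color -> cost; bank: dict mapping color -> gems held.

    'gold' in the bank acts as a wildcard that can cover any shortfall.
    """
    gold = bank.get("gold", 0)
    affordable = []
    for card in cards:
        # total gems missing across all colors of this card's cost
        shortfall = sum(
            max(cost - bank.get(color, 0), 0) for color, cost in card.items()
        )
        if shortfall <= gold:
            affordable.append(card)
    return affordable
```

<p>With 12 cards and 5 colors this is on the order of 60 integer comparisons per call, so the per-call cost is tiny; when optimizing for a machine-learning loop, avoiding redundant calls (e.g. recomputing only when the bank actually changes) usually buys more than replacing the loop with a cleverer data structure.</p>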
|
<python><algorithm><sorting><search>
|
2023-04-18 13:51:06
| 1
| 1,797
|
pbuchheit
|
76,045,431
| 1,484,601
|
pushing to pypi via github actions: how to manage changes with no version number update?
|
<p>One can use GitHub Actions to publish to PyPI every time there is an update on the master branch.</p>
<p>For example, one can use: <a href="https://github.com/marketplace/actions/publish-python-poetry-package" rel="nofollow noreferrer">https://github.com/marketplace/actions/publish-python-poetry-package</a></p>
<p>For good reasons, publication on PyPI will fail if the version number is not updated ("HTTP Error 400: File already exists."). (good reasons explained here: <a href="https://pypi.org/help/#file-name-reuse" rel="nofollow noreferrer">https://pypi.org/help/#file-name-reuse</a>)</p>
<p>Yet, content of the master branch may sometimes be updated in ways that do not justify an update of the version number (e.g. if the github actions, not the software, is updated).</p>
<p>What would be the recommended way to handle this, and how can it be implemented? For example, is it possible to trigger the publishing GitHub Action only if there is a version update, or only if the source code has been updated? Or is there a way to ignore the HTTP Error 400 (i.e. not get a failure badge if this error occurs)?</p>
|
<python><github-actions><versioning><pypi>
|
2023-04-18 13:50:13
| 1
| 4,521
|
Vince
|
76,045,413
| 8,628,566
|
Why does time to compute a single training step increase over time for some seeds/configurations of ADAM but not for SGD in PyTorch?
|
<p>I'm working on a PyTorch project using PyTorch Lightning (version 1.8.4) to train a neural network. I've noticed that the time it takes to compute a single training step increases over time for some seeds and configurations of the ADAM optimizer, but not for SGD.</p>
<pre><code>conda create --name my_env python=3.9.12 --no-default-packages
conda activate my_env
pip install torch==1.13.1 torchvision==0.14.1
pip install pytorch-lightning==1.8.4
</code></pre>
<p>Here's a figure that shows the increase in training time over time for some configurations of ADAM:</p>
<p><a href="https://i.sstatic.net/6zDfv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6zDfv.png" alt="enter image description here" /></a></p>
<p>I'm using PyTorch Lightning with automatic optimization disabled:</p>
<pre><code> @property
def automatic_optimization(self):
return False
</code></pre>
<p>Thus, my training_step looks like this</p>
<pre><code> def training_step(self, train_batch, batch_idx):
log_dict = {}
tic = time.time()
loss_dict = {}
opt = self.optimizers(use_pl_optimizer=False)
loss = self(train_batch)
loss_dict['loss'] = loss
opt.zero_grad()
self.manual_backward(loss.mean())
opt.step()
self.update_log_dict(log_dict=log_dict, my_dict=loss_dict)
log_dict['time_step'] = torch.tensor(time.time() - tic)
return log_dict
</code></pre>
<p>I'm wondering if anyone has experienced this issue before or has any suggestions for how to address it. Thank you!</p>
|
<python><deep-learning><pytorch><pytorch-lightning><adam>
|
2023-04-18 13:48:20
| 1
| 333
|
Pablo Sanchez
|
76,045,330
| 5,841,817
|
How to change/suppress server header with gunicorn and uvicorn worker class?
|
<p>I'm running my Python ASGI application with <code>gunicorn</code> and the <code>uvicorn</code> worker class with this command line:</p>
<p><code>gunicorn -c gunicorn_conf.py my.asgi:application -k uvicorn.workers.UvicornWorker</code></p>
<p>I want to obfuscate the <code>server</code> HTTP header.</p>
<p>The <a href="https://www.uvicorn.org/settings/#http" rel="nofollow noreferrer">uvicorn <code>--no-server-header</code> flag</a> seems ignored by <code>gunicorn</code>.</p>
<p>How can I change/suppress the <code>server</code> HTTP header when running <code>gunicorn</code> with the <code>uvicorn</code> worker class?</p>
|
<python><gunicorn><uvicorn><asgi>
|
2023-04-18 13:40:05
| 1
| 1,066
|
sgargel
|
76,045,250
| 20,051,041
|
How to add a watermark to a video with ffmpeg in Python?
|
<p>I would like to add a .png watermark with 50% opacity over my whole video. I would prefer using an ffmpeg filter over merging the watermark image into the source images beforehand.</p>
<pre><code>audio = f'{PROJECT_PATH}/data/ppt-elements/audio_{file_id}.txt'
images = f'{PROJECT_PATH}/data/ppt-elements/images_{file_id}.txt'
image_input = ffmpeg.input(images, f='concat', safe=0, t=seconds).video
audio_input = ffmpeg.input(audio, f='concat', safe=0, t=seconds).audio
additional_parameters = {'c:a': 'aac', 'c:v': 'libx264'}
audio_input = ffmpeg.filter(audio_input, "amix", inputs=2, duration="longest")
watermark_file = f"{PROJECT_PATH}/data/logo/logo.png"
# add watermark
watermark = ffmpeg.input(watermark_file)
inputs = [image_input, audio_input, watermark]
watermark_filter = '[0:v][2:v]overlay=10:10'
command = ffmpeg.output(*inputs, f"{PROJECT_PATH}/data/final-{file_id}.mp4", vf=[watermark_filter, "fps=10,format=yuv420p"],
preset="veryfast", shortest=None, r=10, max_muxing_queue_size=4000,
**additional_parameters)
command.overwrite_output().run(capture_stdout=True, capture_stderr=True)
</code></pre>
<p>This code, however, creates a video without any watermark in it. Can you see any error?</p>
|
<python><ffmpeg><watermark>
|
2023-04-18 13:33:13
| 1
| 580
|
Mr.Slow
|
76,045,128
| 21,787,377
|
Getting "matching query does not exist" when fetching a model object
|
<p>I want to display what a user has posted from the <code>Book</code> model when other users visit his <code>public_profile</code> page. To do that, I used this method:</p>
<pre><code>def public_profile(request, slug):
profile = get_object_or_404(Profile, slug=slug)
book = Book.objects.get(slug=slug)
context = {
'profile': profile,
'book': book
}
return render(request, 'public_profile.html', context)
</code></pre>
<p>But using that method returns <code>Book matching query does not exist.</code> when rendering the <code>public_profile</code> template.</p>
<p>my url to <code>public_profile</code> page:</p>
<pre><code><p class="title is-4"><a href="{{ book.get_user_public_url }}">
{{ book.user }}
</a></p>
</code></pre>
<p>my model</p>
<pre><code>class Book(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
audio = models.FileField(upload_to='audio')
title = models.CharField(max_length=50)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
image = models.ImageField(upload_to='audio-image')
introduction = models.TextField(max_length=500)
slug = models.SlugField(unique=True)
def get_user_public_url(self):
return reverse('Public-Profile', kwargs={'slug': self.user.profile.slug})
class Profile(models.Model):
user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
profile_photo = models.ImageField(upload_to='profile-image', default='static/dafault.jpg')
twitter = models.URLField(blank=True, null=True, unique=True)
website = models.URLField(blank=True, null=True, unique=True)
linkedln = models.URLField(blank=True, null=True, unique=True)
country = models.CharField(max_length=70, blank=True, null=True,)
about = models.TextField(max_length=700, blank=True, null=True)
slug = models.SlugField(max_length=100, unique=True)
</code></pre>
|
<python><django>
|
2023-04-18 13:20:24
| 1
| 305
|
Adamu Abdulkarim Dee
|
76,045,075
| 10,557,442
|
How can I unpivot a dataframe with variable date columns?
|
<p>I have some input dataframe as the following example:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
"Name":["Ale", "Dan", "Hel"],
"Project": ["A", "A", "B"],
"10/04/2023 Formation": [24, 40, 40],
"17/04/2023 Formation": [12, 24, 24],
"10/04/2023 Holidays": [40, 40, 40],
"17/04/2023 Holidays": [12, 40, 24],
}
)
</code></pre>
<p>As can be seen, some of the columns are dates belonging to a specific concept (Formation, Holidays, in this case).</p>
<p>What I want is to normalize these columns by converting them into rows (unpivot), while keeping each concept (Formation, Holidays) in its own column; that is, I would like to perform a dataframe transformation whose output would be something like:</p>
<pre><code> Date Name Project Formation Holidays
0 10/04/2023 Ale A 24 40
1 17/04/2023 Ale A 12 12
2 10/04/2023 Dan A 40 40
3 17/04/2023 Dan A 24 40
4 10/04/2023 Hel B 40 40
5 17/04/2023 Hel B 24 40
</code></pre>
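<p>One possible sketch of such a transformation (assuming every date column name is exactly "date, space, concept"): melt the date/concept columns into long form, split the column name, and pivot the concept part back into columns:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {
        "Name": ["Ale", "Dan", "Hel"],
        "Project": ["A", "A", "B"],
        "10/04/2023 Formation": [24, 40, 40],
        "17/04/2023 Formation": [12, 24, 24],
        "10/04/2023 Holidays": [40, 40, 40],
        "17/04/2023 Holidays": [12, 40, 24],
    }
)

# melt the date/concept columns into long form
long = df.melt(id_vars=["Name", "Project"], var_name="col", value_name="value")

# split "10/04/2023 Formation" into a Date part and a Concept part
long[["Date", "Concept"]] = long["col"].str.split(" ", n=1, expand=True)

# pivot the concepts back into their own columns
out = (
    long.pivot_table(index=["Date", "Name", "Project"],
                     columns="Concept", values="value")
        .reset_index()
        .rename_axis(columns=None)
)
```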
|
<python><pandas><pivot><unpivot>
|
2023-04-18 13:14:46
| 3
| 544
|
Dani
|
76,044,989
| 903,051
|
Shap beeswarm plot: Colour criterion for regression
|
<p>Back in the day, I came up with the following code to export a Shap beeswarm as an interactive HTML file using Plotly:</p>
<pre><code>explainer = shap.Explainer(best_model.predict, X_test)
shap_values = explainer(X_test)
shap_df = pd.DataFrame(
np.c_[shap_values.base_values, shap_values.values],
columns = ['Shap_Base'] + list(X_test.columns)
)
shap_df.drop('Shap_Base', axis=1, inplace=True)
values = shap_df.iloc[:,2:].abs().mean(axis=0).sort_values().index
df_plot = pd.melt(shap_df, id_vars=['transaction_id', 'predictions'], value_vars=values, var_name='Feature', value_name='SHAP')
fig = px.strip(df_plot, x='SHAP', y='Feature', color='predictions', stripmode='overlay', height=4000, width=1000, hover_data=['transaction_id', df_plot.index])
fig.update_layout(xaxis=dict(showgrid=True, gridcolor='WhiteSmoke', zerolinecolor='Gainsboro'),
yaxis=dict(showgrid=True, gridcolor='WhiteSmoke', zerolinecolor='Gainsboro')
)
fig.update_layout(plot_bgcolor='white')
fig.update_layout(boxgap=0)
fig.update_traces(jitter=1)
fig.write_html('beeswarm.html')
</code></pre>
<p>There I was colouring by Feature which was a binary label.</p>
<p>However now I am working on a regression problem.</p>
<p>Would there be a simple way of setting the colour on a similar way?</p>
<p>How do I extract that information from the shap values?</p>
|
<python><machine-learning><plotly><regression><shap>
|
2023-04-18 13:05:39
| 1
| 543
|
mirix
|
76,044,839
| 5,576,083
|
How to group selections and replacements in a Pandas dataframe?
|
<p>I wrote a similar post here <a href="https://stackoverflow.com/questions/75778855/how-to-add-data-in-new-columns-based-on-existing-column-in-a-data-frame">How to add data in new columns based on existing column in a data frame?</a> but regarding R.
Now I need the same help with Python pandas.</p>
<p>I have a dataframe like this.</p>
<pre><code>import pandas as pd
d = {'names': ["john", "bob", "frank", "bill", "sam"],
'pets': ["dog", "cat", "mouse", "horse", "bird"]}
DF = pd.DataFrame(data=d)
</code></pre>
<p>I want to add new data in two different columns based on the data in the second column (pets).</p>
<p>I wrote these two lines of code and it works</p>
<pre><code>DF.loc[DF['pets']=="dog", 'color'] = 'red'
DF.loc[DF['pets']=="dog", 'ID'] = 1
</code></pre>
<p>How can I group them in just one statement?</p>
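<p>One possible single-statement sketch (mappings hypothetical): derive both new columns inside one <code>assign</code> call, which leaves non-matching rows as NaN just like the two <code>.loc</code> lines do:</p>

```python
import pandas as pd

d = {'names': ["john", "bob", "frank", "bill", "sam"],
     'pets': ["dog", "cat", "mouse", "horse", "bird"]}
DF = pd.DataFrame(data=d)

# one statement: each new column is looked up from the pet value,
# rows without a match become NaN
DF = DF.assign(
    color=DF['pets'].map({'dog': 'red'}),
    ID=DF['pets'].map({'dog': 1}),
)
```

<p>As with the <code>.loc</code> version, <code>ID</code> ends up as float because the unmatched rows hold NaN.</p>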
|
<python><pandas>
|
2023-04-18 12:49:42
| 1
| 301
|
ilFonta
|
76,044,678
| 2,599,301
|
Detect deletion of instance variable or its element (list of objects)?
|
<p>I came across this OOP exercise in Python, which states the following:</p>
<pre><code>Class Battery:
def __init__(self, member1: int, member2: int, lead: str):
self.member1 = member1
self.member2 = member2
self.lead = lead
class Robot:
def __init__(self, robot_name: str, leads: tuple[str, ...], batteries: list[Battery], num_batteries: int):
self.robot_name = robot_name
self.leads = leads
self.batteries = batteries
self.num_batteries = num_batteries
</code></pre>
<p>Now I'm supposed to detect the following deletion:</p>
<pre><code>del robot1.batteries[2]
</code></pre>
<p>And correspondingly update other instance variables (num_batteries and leads - decrease the number of batteries and remove the lead at the same index position as the battery).</p>
<p><strong>What I tried so far:</strong></p>
<ol>
<li><p>I already tried overriding the magic method below, but without any success.</p>
<pre><code>__delitem__(self, key)
</code></pre>
</li>
<li><p>I was thinking about implementing a deleter method, which would require defining those members as properties (which is completely fine, I'm just not sure the exercise is meant to be done like that)</p>
<pre><code>@name.deleter
</code></pre>
</li>
</ol>
<p>This brought me to thinking that maybe, in order to do this, I need a custom list rather than the Python default one. After all, I'm tampering with the list content and trying to mirror the change on the other instance variables. Have you got any ideas how such a thing could be implemented?</p>
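<p>For what it's worth, <code>del robot1.batteries[2]</code> invokes <code>__delitem__</code> on the <em>list</em>, not on <code>Robot</code>, which is why overriding it on the owning class has no effect. A minimal sketch of the custom-list idea (names hypothetical), where the container reports each deletion to a callback that a <code>Robot</code> could supply to drop the matching lead and decrement <code>num_batteries</code>:</p>

```python
class NotifyingList(list):
    """A list that reports the index of each deletion to a callback."""

    def __init__(self, items, on_delete):
        super().__init__(items)
        self._on_delete = on_delete

    def __delitem__(self, index):
        self._on_delete(index)
        super().__delitem__(index)


deleted_indices = []
batteries = NotifyingList(["b0", "b1", "b2"], deleted_indices.append)
del batteries[2]  # the callback records index 2 before the element is removed
```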
|
<python><oop><overriding>
|
2023-04-18 12:34:07
| 1
| 439
|
Jumpman
|
76,044,403
| 17,530,552
|
How to adjust sp.signal.spectrogram so that the x-axis starts at 0?
|
<p>I am plotting a spectrogram of a time-series <code>data</code> with a sampling rate of 2.16 seconds using <code>scipy.signal.spectrogram</code>.</p>
<p>My problem is that the x-axis of the plot <code>plt.pcolormesh(time, freq, spec, shading="gouraud")</code> does not start at <code>0</code>, but instead at <code>22.68</code>, and then the x-axis goes up to <code>63.72</code>.</p>
<p>I understand that the x-axis range, printable via <code>print(time)</code>, depends on the <code>nperseg</code> value of <code>scipy.signal.spectrogram</code>. Hence, the <code>nperseg</code> value determines the possible output or range of the x-axis by <code>scipy.signal.spectrogram</code>.</p>
<p>However, isn't there a way to plot the spectrogram so that the x-axis starts at 0, since the time-series <code>data</code> starts at 0 or 1 with a first data point in time?
Here is full, reproducible code:</p>
<pre><code>import numpy as np
import scipy as sp
from scipy import signal
import matplotlib.pyplot as plt
# Data of time-series
data = [1.577914, -4.299067, 1.994959, 1.006487, -3.092386, 2.552159, 0.270319,
-0.406156, 2.364457, -2.212027, -0.985193, -2.660199, 4.171447, 1.508614,
0.710774, 0.62389, 0.157947, 0.885811, 1.126204, 2.449504, -3.167271,
-2.473564, -3.03687, 0.573946, 1.123078, -0.745824, -5.395092, -3.487601,
-1.41748, 2.737564, 2.061715, -0.249747, 2.174746, 2.229604, -4.607001,
2.951738, -1.859603, 3.03217, 0.87048, 1.842302, 5.207679, 0.3907389]
data = np.array(data)
# Spectrogram
freq, time, spec = sp.signal.spectrogram(data, fs=1/2.16,
window=('tukey', 0.25),
nperseg=21,
noverlap=None, nfft=42,
detrend=False,
return_onesided=True,
scaling="density",
axis=-1, mode="psd")
# Plot
plt.pcolormesh(time, freq, spec, shading="gouraud")
plt.colorbar()
plt.xlabel("Time [seconds]", fontweight="bold", fontsize=12)
plt.yticks(np.arange(0.01, 0.24, 0.05), fontsize=12)
plt.ylim([0.01, 0.23])
plt.ylabel("Frequency [Hz]", fontweight="bold", fontsize=12)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/PUZQA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PUZQA.png" alt="enter image description here" /></a></p>
|
<python><scipy><spectrogram>
|
2023-04-18 12:06:04
| 0
| 415
|
Philipp
|
76,044,398
| 3,990,145
|
How to exclude redundant validation errors for pydantic fields which allow empty dict as value?
|
<p>I have the following models:</p>
<pre><code>from typing import Union
from pydantic import BaseModel, Extra, Field
from typing_extensions import Annotated, Literal
class Empty(BaseModel):
class Config:
extra = Extra.forbid
class ModelX(BaseModel):
member: Literal["x"]
q: str
class ModelY(BaseModel):
member: Literal["y"]
q: None
class MainModel(BaseModel):
model: Union[Annotated[Union[ModelX, ModelY], Field(discriminator="member")], Empty]
MainModel.parse_obj({"model": {}})
MainModel.parse_obj({"model": {"member": "x", "q": None}})
</code></pre>
<p>Since a <code>None</code> <code>q</code> is not allowed for <code>ModelX</code>, I am getting the following error:</p>
<pre><code>pydantic.error_wrappers.ValidationError: 3 validation errors for MainModel
model -> ModelX -> q
none is not an allowed value (type=type_error.none.not_allowed)
model -> member
extra fields not permitted (type=value_error.extra)
model -> q
extra fields not permitted (type=value_error.extra)
</code></pre>
<p>But I do not need the last 2 messages: since the value is not an empty dict, I do not expect the <code>Empty</code> model to be tried.</p>
|
<python><pydantic>
|
2023-04-18 12:05:33
| 0
| 4,717
|
xiº
|
76,044,370
| 813,750
|
How to bin on timeframe with pyspark?
|
<p>I have the following (oversimplified) dataset:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>userid</th>
<th>program_id</th>
<th>program_start</th>
<th>program_end</th>
<th>viewing_start</th>
<th>viewing_end</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>2023-01-01 00:00:00</td>
<td>2023-01-01 01:00:00</td>
<td>2023-01-01 00:05:00</td>
<td>2023-01-01 00:50:00</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>2023-01-01 00:00:00</td>
<td>2023-01-01 01:00:00</td>
<td>2023-01-01 00:20:00</td>
<td>2023-01-01 00:25:00</td>
</tr>
</tbody>
</table>
</div>
<p>and what I like to receive is like follows</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>userid</th>
<th>program_id</th>
<th>program_start</th>
<th>program_end</th>
<th>viewing_start</th>
<th>viewing_end</th>
<th>start_bin</th>
<th>end_bin</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>2023-01-01 00:00:00</td>
<td>2023-01-01 01:00:00</td>
<td>2023-01-01 00:05:00</td>
<td>2023-01-01 00:50:00</td>
<td>1</td>
<td>4</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>2023-01-01 00:00:00</td>
<td>2023-01-01 01:00:00</td>
<td>2023-01-01 00:20:00</td>
<td>2023-01-01 00:25:00</td>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>I have users (userid) watching a specific asset (program_id), and each asset has a defined duration (program_start to program_end). Users can start and stop watching whenever and for however long they want, as long as it is between program_start and program_end.</p>
<p>What I want to do is bin / bucketize my assets, say into 4 buckets. That is still easy to achieve, but next I want to understand in which of these bins the user started watching the asset.</p>
<p>So if an asset runs from midnight to 1 AM and a user starts watching at ten past midnight, he falls into the first bucket (i.e. he started watching in the first quarter).</p>
<p>I'm kind of struggling to achieve this with pyspark, so any advice on how to tackle it without consuming all my resources would be helpful.</p>
<p>I'm not really looking for full-blown copy/paste solutions; some friendly advice and guidance in the right direction is probably enough to get me moving.</p>
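Before translating to pyspark (where the same arithmetic could be expressed with `F.floor` on unix timestamps), the bucket logic can be sketched in plain Python. The `bucket` helper name is my own, and the formula assumes bins are numbered from 1:

```python
from datetime import datetime
from math import floor

def bucket(ts, program_start, program_end, n_bins=4):
    """1-based bin index of ts within [program_start, program_end]."""
    duration = (program_end - program_start).total_seconds()
    offset = (ts - program_start).total_seconds()
    # clamp so ts == program_end still falls in the last bin
    return min(floor(offset / (duration / n_bins)) + 1, n_bins)

start = datetime(2023, 1, 1, 0, 0)
end = datetime(2023, 1, 1, 1, 0)
# user 0: starts 00:05 -> bin 1, ends 00:50 -> bin 4
# user 1: starts 00:20 -> bin 2, ends 00:25 -> bin 2
```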
|
<python><apache-spark><pyspark><binning>
|
2023-04-18 12:02:40
| 2
| 1,129
|
Wokoman
|
76,044,317
| 7,425,726
|
find a sequence of values within a pandas series
|
<p>I am looking for the best way to find a sequence of values of varying lengths within a longer pandas Series. For example, I have the values <code>[92.6, 92.7, 92.9]</code> (but it could also be length 2 or 5) and would like to find all the cases where this exact sequence occurs within the longer Series:</p>
<pre><code>s = pd.Series([92.6,92.7,92.9,24.2,24.3,25.1,24.9,25.1,24.9,97.6,94.5,1.0,92.6,92.7,92.9,97.9,96.8,96.4,92.8,92.8,93.1,89.5,89.6])
</code></pre>
<p>(actual series is approx length 1000).</p>
<p>In this example the correct result should be indices <code>0,1,2</code> and <code>12,13,14</code>.</p>
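One vectorised sketch compares every window of the series against the pattern with NumPy's `sliding_window_view`. Exact float equality works here because the values come from the same literals; for computed floats, `np.isclose` would be the safer comparison:

```python
import numpy as np
import pandas as pd

s = pd.Series([92.6,92.7,92.9,24.2,24.3,25.1,24.9,25.1,24.9,97.6,94.5,1.0,
               92.6,92.7,92.9,97.9,96.8,96.4,92.8,92.8,93.1,89.5,89.6])
pattern = np.array([92.6, 92.7, 92.9])

# one row per length-3 window of the series
windows = np.lib.stride_tricks.sliding_window_view(s.to_numpy(), len(pattern))
starts = np.where((windows == pattern).all(axis=1))[0]
# starts -> the indices where a match begins (0 and 12 for this data)
```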
|
<python><pandas><series>
|
2023-04-18 11:57:51
| 1
| 1,734
|
pieterbons
|
76,044,254
| 14,015,493
|
How to provide only one of multiple optional parameters in FastAPI?
|
<p>The API should serve an endpoint where a user can either provide a <code>guid</code>, a <code>path</code> or a <code>code</code>. This is what the url scheme should look like:</p>
<pre><code>api/locations?code={str}&guid={str}&path={str}
</code></pre>
<p>The current code is as follows:</p>
<pre class="lang-py prettyprint-override"><code>@router.get("/api/locations")
def get_functional_locations(
    guid: Optional[str] = None,
    code: Optional[str] = None,
    path: Optional[str] = None,
) -> Union[List[Locations], PlainTextResponse]:
    if guid:
        return ...
    if path:
        return ...
    if code:
        return ...
    return PlainTextResponse(status_code=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>Is there another way to provide this multiple optional parameters and have an XOR that only allows the user to fill in exactly one parameter?</p>
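The XOR check itself is plain Python and could live in the endpoint or in a shared dependency that raises an `HTTPException` on violation. A minimal sketch; the helper name `exactly_one` is made up:

```python
def exactly_one(*values):
    """True if exactly one of the given values is not None."""
    return sum(v is not None for v in values) == 1

# Inside the endpoint, one might then write:
# if not exactly_one(guid, code, path):
#     raise HTTPException(status_code=400,
#                         detail="provide exactly one of guid, code, path")
```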
|
<python><fastapi><optional-parameters>
|
2023-04-18 11:51:22
| 2
| 313
|
kaiserm99
|
76,044,230
| 5,597,037
|
Selenium in headless mode with Flask and Celery throws WebDriverException: Process unexpectedly closed with status 127
|
<p>I am trying to run Selenium in headless mode within a Flask app and using Celery tasks. When I execute my script, I get the following error message:</p>
<pre><code>selenium.common.exceptions.WebDriverException: Message: Process unexpectedly closed with status 127
</code></pre>
<p>Here is a minimal working example of my code:</p>
<p>app.py:</p>
<pre><code>from flask import Flask, request
from celery import Celery
import os

app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = os.getenv('CELERY_BROKER_URL')
app.config['CELERY_RESULT_BACKEND'] = os.getenv('CELERY_RESULT_BACKEND')

celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)


@celery.task
def run_selenium_task(referral_id, comment):
    # Insert your Selenium script here
    ...


@app.route('/run_selenium', methods=['POST'])
def run_selenium():
    referral_id = request.form['referral_id']
    comment = request.form['comment']
    run_selenium_task.delay(referral_id, comment)
    return 'Selenium task started'


if __name__ == '__main__':
    app.run()
</code></pre>
<p>Selenium script:</p>
<pre><code>from selenium.webdriver.firefox.service import Service
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from time import sleep
options = Options()
options.add_argument("--window-size=1920,1080")
options.add_argument("-headless")
firefox_binary_path = '/usr/bin/firefox'
geckodriver_path = '/home/fusion/bin/geckodriver'
options.binary_location = firefox_binary_path
service = Service(executable_path=geckodriver_path)
driver = webdriver.Firefox(service=service, options=options)
# Insert the rest of your Selenium code here
</code></pre>
<p>What am I missing, and how can I fix this issue to run Selenium with Flask and Celery in headless mode without any errors?</p>
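Exit status 127 conventionally means a binary could not be found or started (often because the file is missing, not executable, or a shared library it needs is absent in the worker's environment). A quick stdlib sanity check of the configured paths might look like the sketch below; `diagnose` is my own helper name:

```python
import os

def diagnose(path):
    """Rough reason why launching `path` might fail with status 127."""
    if not os.path.exists(path):
        return "missing"
    if not os.access(path, os.X_OK):
        return "not executable"
    return "ok"

# e.g. run this inside the Celery worker process, where the paths must resolve:
# print(diagnose('/usr/bin/firefox'), diagnose('/home/fusion/bin/geckodriver'))
```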
|
<python><selenium-webdriver><celery>
|
2023-04-18 11:48:42
| 1
| 1,951
|
Mike C.
|
76,044,171
| 10,504,555
|
Identifying minor swings with major swings - Price charts
|
<p>I am developing a stock analysis program in Python</p>
<p>One of the fundamental things I need to is to recognise swings from price feed (open, high, low, close data)</p>
<p>Price data is fractal by nature - smaller structures are found within larger structures.</p>
<p><a href="https://i.sstatic.net/4g0n5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4g0n5.png" alt="enter image description here" /></a></p>
<p>In my case I am looking for small swings within large swings. I.e. minor swings within major swings. The above example chart depicts my goal.</p>
<p>A few definitions to get out of the way.
Each swing is made of two legs / parts: an impulse leg and a reaction leg.</p>
<p>The impulse leg is in the direction of the flow of the market.</p>
<p>The reaction leg is against the direction of the impulse.</p>
<p>Both impulse and reaction legs can be either up or down, depending on the flow of the market.
The models below illustrate this definition. If there is no direction in the price, it is known as a ranging market.</p>
<p><a href="https://i.sstatic.net/wDkbt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wDkbt.png" alt="Swings in Up market" /></a></p>
<p><a href="https://i.sstatic.net/6IASC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6IASC.png" alt="Swings in Down market" /></a></p>
<p>The next important definition is the understanding of highs and lows.
A new high confirms a new low, while a new low confirms a new high.
This is illustrated by the model below.</p>
<p><a href="https://i.sstatic.net/20Ywz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/20Ywz.png" alt="Confirming highs and lows" /></a></p>
<p>The below is how I have approached the matter in Python</p>
<pre><code>import numpy as np
from scipy.signal import argrelextrema


def get_pivots(price: np.ndarray):
    maxima = argrelextrema(price, np.greater)
    minima = argrelextrema(price, np.less)
    return np.concatenate((price[maxima], price[minima]))
</code></pre>
<p>For simplicity I will be passing a flattened 1-d <code>numpy</code> array into the above function. <code>argrelextrema</code> helps me identify where the pivots are, i.e. it identifies where the price turns.</p>
<p>I am wondering how I could find the nesting of pivots to form minor and major swings.</p>
<p>I am looking to produce a list that loosely resembles this structure.</p>
<pre><code>[major swing
[minor swing1],
[minor swing2],
[minor swing3
[micro swing1],
[micro swing2]
]
]
</code></pre>
<p>Sample data I have created is</p>
<pre><code> data = [5,6,7,8,9,10,11,12,13,14,16,17,18,19,20, # AB(Major) impulse
19,18,17,16,15,14,13,12,11,10,9,8, # BC(Major) reaction
9,10,11,12,13, 14,15, # C0C1 (Minor) impulse
14,13,12,11,10, # C1C2 (Minor) reaction
11, 12, 13, 14, 15, 16, 17, 18, # C2C3 (Minor) impulse
17, 16, 15, 15, 14, 13, 12, # C3C4 (Minor) reaction
13, 14, 15, 16, 17, 18, 19, 20, 21, # C4C5 (Minor) impulse
20, 19, 18, 17, 16, 15, 14, # C5C6 (Minor) reaction
15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25 # CD (Major) impulse
]
</code></pre>
<p>I believe some kind of recursive implementation is likely to help, but I am not entirely familiar with implementing this paradigm.</p>
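As a starting point for that recursion, pivots can also be found with a plain loop. This is only a sketch (the name `pivot_indices` is mine); it uses strict inequalities, so flat plateaus like the `15, 15` in the sample data would need extra handling:

```python
def pivot_indices(prices):
    """Indices where the price direction turns (strict local extrema)."""
    out = []
    for i in range(1, len(prices) - 1):
        if (prices[i-1] < prices[i] > prices[i+1]
                or prices[i-1] > prices[i] < prices[i+1]):
            out.append(i)
    return out

# pivot_indices([1, 2, 3, 2, 1, 2, 3]) finds the peak at 2 and trough at 4
```

Re-running the same detector on the sequence of pivot prices (`[prices[i] for i in pivot_indices(prices)]`) is one way to obtain the next, coarser level of swings, which matches the fractal nesting described above.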
|
<python><trading><algorithmic-trading><candlestick-chart><backtrader>
|
2023-04-18 11:43:22
| 2
| 389
|
neo-technoker
|
76,044,117
| 4,865,723
|
Create a grouped Seaborn Box plot without pandas.melt() or pandas.stack()
|
<p>I want to create a "grouped box plot" like this without using <code>pandas.melt()</code> (or <code>pandas.stack()</code>) and with only one seaborn function:</p>
<p><a href="https://i.sstatic.net/oVJ6n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oVJ6n.png" alt="enter image description here" /></a></p>
<p>In my "environment" I often have data in a "wide" structure like this.</p>
<pre><code>| | ID | catA | catB |
|---:|-----:|-------:|-------:|
| 0 | 0 | 168 | 181 |
| 1 | 1 | 151 | 100 |
| 2 | 2 | 84 | 56 |
| 3 | 3 | 51 | 151 |
| 4 | 4 | 102 | 123 |
| 5 | 5 | 80 | 50 |
| 6 | 6 | 156 | 181 |
| 7 | 7 | 60 | 196 |
| 8 | 8 | 95 | 162 |
| 9 | 9 | 116 | 180 |
</code></pre>
<p>To create a boxplot for the two columns <code>catA</code> and <code>catB</code> I usually melt/stack them.</p>
<pre><code>df = df.melt('ID', var_name='cat', value_name='val')
seaborn.boxplot(x=df.val, y=df.cat).figure.show()
</code></pre>
<p>Does seaborn offer a way to get the same result without explicitly using <code>melt()</code> or otherwise transforming the original dataframe? Something like this seaborn pseudo-code:</p>
<pre><code>seaborn.boxplot(x=[df.catA, df.catB]).figure.show()
</code></pre>
<p>Here is the MWE to produce the data frame.</p>
<pre><code>#!/usr/bin/env python3
import random
import pandas
import seaborn
random.seed(0)
k = 10
df = pandas.DataFrame(
{
'ID': range(k),
'catA': random.choices(range(200), k=k),
'catB': random.choices(range(200), k=k)
}
)
</code></pre>
<p>Result</p>
|
<python><pandas><seaborn>
|
2023-04-18 11:38:40
| 1
| 12,450
|
buhtz
|
76,044,061
| 12,297,666
|
Input Shape for Dense Layer in Keras
|
<p>I have a question regarding the input shape for Dense layers in Keras. I have received code that uses Dense layers to solve a time-series prediction problem.</p>
<p>The input data, <code>x_train</code> array has shape <code>(5829, 18)</code>. After this, there's this piece of code:</p>
<pre><code># Reshaping the input data
max_batch_size = x_train.shape[0]
timesteps = x_train.shape[1]
input_dim = 1
x_train = x_train.reshape(max_batch_size, timesteps, input_dim)
</code></pre>
<p>The code above was used to reshape the input data to train a <code>Conv1D</code> in Keras, but it is still being used to train the following MLP model:</p>
<pre><code>model = Sequential(name='MLP')
model.add(InputLayer(input_shape=(x_train.shape[1]), name='Input_Layer'))
model.add(Dense(units=2*x_train.shape[1], activation='tanh', name='Dense_1'))
model.add(Dense(units=2*y_train.shape[1], activation='tanh', name='Dense_2'))
model.add(Dense(units=y_train.shape[1], activation='tanh', name='Output_Layer'))
model.compile(optimizer='adam', loss="mse")
model.summary()
</code></pre>
<p>Is it wrong to do that reshape for the MLP model? Could it lead to wrong results? Or does it not matter?</p>
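It is worth noting that Dense layers in Keras act on the last axis of their input, so the reshape changes the semantics. The shape arithmetic can be checked with plain NumPy matrix products, which is a sketch of what the layer's kernel multiplication does (biases and activations omitted):

```python
import numpy as np

x2d = np.zeros((5829, 18))
W = np.zeros((18, 36))    # kernel of a Dense(36) applied to 2-D input
y2d = x2d @ W             # shape (5829, 36): one output vector per sample

x3d = x2d.reshape(5829, 18, 1)
W3 = np.zeros((1, 36))    # kernel of a Dense(36) applied to the reshaped input
y3d = x3d @ W3            # shape (5829, 18, 36): applied per timestep instead
```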
|
<python><tensorflow><keras>
|
2023-04-18 11:32:58
| 1
| 679
|
Murilo
|
76,044,038
| 1,901,071
|
Visual Studio Code: The term 'conda' is not recognized as the name of a cmdlet
|
<p>I have a python script which I open up in visual studio code. When I first open Visual Studio Code, the following appears in the terminal where it tries to run it under the project folder location.</p>
<pre><code>PS C:\Users\foo\Github-int\project> conda activate spyder-env
conda activate spyder-env
conda : The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included, verify that the path is correct
and try again.
At line:1 char:1
+ conda activate spyder-env
+ ~~~~~
+ CategoryInfo : ObjectNotFound: (conda:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
</code></pre>
<p>It then <em>automatically</em> proceeds to the command below, which works but provides no highlighting of error messages, and when the code fails, no problems or warnings appear in the Problems pane in the IDE.</p>
<p><code>PS C:\Users\foo\Github\project> & C:/Users/foo/Miniconda3/envs/spyder-env/python.exe.</code></p>
<p>I have checked my environment variables and can see from <code>Path</code> that conda is there under <code>C:\Users\foo\Miniconda3</code>. However, this is in the user variables, not the system variables.</p>
<p>I opened the normal command prompt and tried it, with the same error:</p>
<pre><code>C:\>conda activate
'conda' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>However, if I open the Anaconda prompt it seems to work fine:</p>
<pre><code>(base) C:\>conda activate spyder-env
(spyder-env) C:\>
</code></pre>
<p>Can anyone help me troubleshoot?</p>
|
<python><anaconda><conda>
|
2023-04-18 11:30:26
| 0
| 2,946
|
John Smith
|
76,044,004
| 21,404,794
|
Drop rows with all zeroes in a pandas dataframe
|
<p>I'm trying to drop the rows of a dataframe where the value of one column is 0, but I don't know how.</p>
<p>The table I start with looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
<th>e</th>
</tr>
</thead>
<tbody>
<tr>
<td>99.08</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<td>0.0</td>
<td>95.8</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<td>0.0</td>
<td>0.0</td>
<td>97.8</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<td>0.0</td>
<td>0.0</td>
<td>96.7</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>98.9</td>
<td>0.0</td>
</tr>
</tbody>
</table>
</div>
<p>I'm using pandas to deal with some categorical data. I've used <code>pd.melt()</code> to reshape it, so the columns that were encoded in a fashion similar to one-hot encoding are no problem. That leaves me with a dataframe with only 2 columns, as I want it, but with a lot of 0s in one of the columns.</p>
<p>The table after the melt looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>99.8</td>
</tr>
<tr>
<td>a</td>
<td>0.0</td>
</tr>
<tr>
<td>a</td>
<td>0.0</td>
</tr>
<tr>
<td>a</td>
<td>0.0</td>
</tr>
<tr>
<td>a</td>
<td>0.0</td>
</tr>
<tr>
<td>b</td>
<td>95.8</td>
</tr>
<tr>
<td>b</td>
<td>0.0</td>
</tr>
<tr>
<td>b</td>
<td>0.0</td>
</tr>
<tr>
<td>c</td>
<td>97.8</td>
</tr>
<tr>
<td>c</td>
<td>0.0</td>
</tr>
<tr>
<td>c</td>
<td>0.0</td>
</tr>
<tr>
<td>c</td>
<td>0.0</td>
</tr>
<tr>
<td>c</td>
<td>96.5</td>
</tr>
<tr>
<td>c</td>
<td>0.0</td>
</tr>
<tr>
<td>d</td>
<td>98.9</td>
</tr>
</tbody>
</table>
</div>
<p>I want to drop those values because they give no information and take up space and resources.</p>
<p>I've already tried <a href="https://stackoverflow.com/questions/22649693/drop-rows-with-all-zeros-in-pandas-data-frame">what was suggested here</a> but it gives an indexing error, because the length of the data returned by the any() function is not the same as the length of the original dataframe, as far as I've been able to understand.</p>
<p>I've also tried <a href="https://stackoverflow.com/questions/39884785/how-do-i-drop-rows-with-zeros-in-panda-dataframe">the suggestion here</a> but it gives back a ValueError "cannot index with multidimensional key" because my df is 2 dimensional</p>
<p><strong>Dataframe</strong></p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'a': [99.08, 0.0, 0.0, 0.0, 0.0],
'b': [0.0, 95.8, 0.0, 0.0, 0.0],
'c': [0.0, 0.0, 97.8, 96.7, 0.0],
'd': [0.0, 0.0, 0.0, 0.0, 98.9],
'e': [0.0, 0.0, 0.0, 0.0, 0.0],
})
</code></pre>
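With that frame, one way to melt and then drop the zero rows in the long format is a simple boolean filter. This is a sketch using the column names `col1`/`col2` from the question:

```python
import pandas as pd

df = pd.DataFrame({'a': [99.08, 0.0, 0.0, 0.0, 0.0],
                   'b': [0.0, 95.8, 0.0, 0.0, 0.0],
                   'c': [0.0, 0.0, 97.8, 96.7, 0.0],
                   'd': [0.0, 0.0, 0.0, 0.0, 98.9],
                   'e': [0.0, 0.0, 0.0, 0.0, 0.0],
                   })

long = df.melt(var_name='col1', value_name='col2')
# keep only the non-zero rows of the melted frame
long = long[long['col2'] != 0].reset_index(drop=True)
```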
|
<python><pandas><dataframe>
|
2023-04-18 11:27:10
| 4
| 530
|
David Siret Marqués
|
76,043,992
| 12,760,550
|
Combine duplicated rows by unique values in column to fill blank columns in another value
|
<p>Imagine I have the following data (slice as example):</p>
<pre><code>ID Salary Component 1 Value1 Salary Component 2 Value2
10000 Basic Salary 22000
10000 Housing Allowance 13200
</code></pre>
<p>How can I combine the rows so that there is only one row per ID, filling the blank columns with the values from the other rows where they are filled? It would result in this data:</p>
<pre><code>ID Salary Component 1 Value1 Salary Component 2 Value2
10000 Basic Salary 22000 Housing Allowance 13200
</code></pre>
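Assuming the blank cells are NaN, one possible approach is a groupby with `first()`, since pandas' `GroupBy.first` takes the first non-null value per column. A sketch with the sample rows:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ID": [10000, 10000],
    "Salary Component 1": ["Basic Salary", np.nan],
    "Value1": [22000, np.nan],
    "Salary Component 2": [np.nan, "Housing Allowance"],
    "Value2": [np.nan, 13200],
})

# first() skips NaN, so each column keeps its first filled value per ID
out = df.groupby("ID", as_index=False).first()
```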
<p>Thank you for the help!</p>
|
<python><pandas><group-by><merge><duplicates>
|
2023-04-18 11:25:50
| 1
| 619
|
Paulo Cortez
|
76,043,940
| 3,515,174
|
Type hints for generic class union Callable
|
<p>Given:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, Generic, TypeVar

T = TypeVar("T")


class Factory(Generic[T]):
    def __call__(self) -> T:
        ...


class TruthyFactory(Factory[bool]):
    def __call__(self) -> bool:
        return True


def falsey_factory() -> bool:
    return False


def class_consumer(factory: type[Factory[T]]) -> T:
    ...


def function_consumer(factory: Callable[[], T]) -> T:
    ...


# type hint for cls_ret is `bool`
cls_ret = class_consumer(TruthyFactory)
# type hint for fn_ret is `bool`
fn_ret = function_consumer(falsey_factory)
</code></pre>
<p>What would the signature (type hints) look like for a function taking a union of the two parameter types?</p>
<p>That is, a function signature of: <code>def either_consumer(factory: ???) -> T: ...</code></p>
<p>I tried using <code>type[Factory[T]] | Callable[[], T]</code>, but it does not work with the type of a subclassed <code>Factory[T]</code>: the return type hint becomes the type of the class. I believe this is because a class matches the Callable specification. Does that have higher affinity, and is there a way of modifying this behaviour?</p>
<pre class="lang-py prettyprint-override"><code>def either_consumer(factory: type[Factory[T]] | Callable[[], T]) -> T:
    # needs more stringent boolean logic
    if isinstance(factory, type):
        return factory()()
    return factory()


# type hint for cls_ret is wrongly `TruthyFactory`
cls_ret = either_consumer(TruthyFactory)
# type hint for fn_ret is correctly `bool`
fn_ret = either_consumer(falsey_factory)
</code></pre>
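One way to steer type checkers here is to declare `@overload` signatures with the class variant listed first, since most checkers try overloads in declaration order. A runnable sketch with minimal stand-ins for the question's classes (whether a given checker prefers this over the plain union is an assumption worth verifying):

```python
from __future__ import annotations
from typing import Callable, Generic, TypeVar, overload

T = TypeVar("T")

class Factory(Generic[T]):
    def __call__(self) -> T: ...

class TruthyFactory(Factory[bool]):
    def __call__(self) -> bool:
        return True

def falsey_factory() -> bool:
    return False

@overload
def either_consumer(factory: type[Factory[T]]) -> T: ...
@overload
def either_consumer(factory: Callable[[], T]) -> T: ...

def either_consumer(factory):
    if isinstance(factory, type) and issubclass(factory, Factory):
        return factory()()  # instantiate the factory class, then call it
    return factory()
```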
|
<python><type-hinting>
|
2023-04-18 11:20:42
| 2
| 4,502
|
Mardoxx
|
76,043,938
| 12,435,792
|
How do I search for emails with a subject prefix using python
|
<p>I have an inbox which receives mails every 15 minutes.
The subject of each is different: it is a fixed prefix concatenated with a date and time, for example:
subject: [External] PNR Assignment File-JAPA-1_4/18/2023 8:34:37 AM</p>
<p>The <em><strong>[External] PNR Assignment File-JAPA</strong></em> part is constant.
I want to restrict the mails based on the subject.</p>
<pre><code>subject=return_mail_subject(region_selected)
path=return_path(region_selected)
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI") #Opens Microsoft Outlook
folder = outlook.Folders.Item(inbox)
inbox = folder.Folders.Item("Inbox")
messages = inbox.Items #Get emails
messages = messages.Restrict("@SQL=""http://schemas.microsoft.com/mapi/proptag/0x0E1D001F""like 'subject'")
</code></pre>
<p>I wrote this code, where the <code>subject</code> variable is <code>[External] PNR Assignment File-JAPA</code>, but it gives an error. Where am I going wrong?</p>
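For reference, the filter in the snippet compares against the literal text `'subject'` and is missing a space before `like`. A DASL filter with `%` wildcards ("subject contains prefix") might instead be built like the sketch below. `urn:schemas:httpmail:subject` is the standard DASL name for the subject property, but I cannot run this against Outlook here, so treat it as an untested assumption:

```python
prefix = "[External] PNR Assignment File-JAPA"

# escape single quotes for the SQL literal, then wrap in % wildcards
dasl_filter = '@SQL="urn:schemas:httpmail:subject" LIKE \'%{}%\''.format(
    prefix.replace("'", "''")
)
# messages = messages.Restrict(dasl_filter)
```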
|
<python><outlook><win32com>
|
2023-04-18 11:20:18
| 1
| 331
|
Soumya Pandey
|
76,043,860
| 18,782,190
|
Python multiple inheritance does not work as expected
|
<p>I get the error <code>TypeError: __init__() missing 1 required positional argument: 'label_text'</code> on the line <code>QLineEdit.__init__(self)</code>, which doesn't make sense to me. I create the <code>Input</code> object by passing the one keyword argument <code>label_text</code>, as <code>Input(label_text="Example label")</code>. I assume the error happens because of how the constructors of the base classes are called, though I am not sure what exactly is wrong.</p>
<pre><code>from enum import IntEnum, auto

from PyQt5.QtGui import QFocusEvent
from PyQt5.QtWidgets import QLineEdit, QLabel, QTextEdit


class InputType(IntEnum):
    Text = auto()
    Password = auto()


class InputBase:
    def __init__(self, label_text: str) -> None:
        super().__init__()
        self.label_text = label_text
        self.label = QLabel(label_text)
        self.label.setObjectName('input-label')
        self.setPlaceholderText('')

    def focusInEvent(self, event: QFocusEvent) -> None:
        if self.is_empty():
            self.setPlaceholderText("")
            self.label.setText(self.label_text)
        super().focusInEvent(event)

    def focusOutEvent(self, event: QFocusEvent) -> None:
        if self.is_empty():
            self.setPlaceholderText(self.label_text)
            self.label.setText(" ")
        super().focusOutEvent(event)

    def is_empty(self) -> bool:
        raise NotImplementedError()


class Input(QLineEdit, InputBase):
    def __init__(self, label_text: str, type: InputType = InputType.Text) -> None:
        QLineEdit.__init__(self)
        InputBase.__init__(self, label_text)
        if type == InputType.Password:
            self.setEchoMode(QLineEdit.EchoMode.Password)

    def is_empty(self) -> bool:
        return self.text() == ""


class TextArea(QTextEdit, InputBase):
    def __init__(self, label_text: str) -> None:
        QLineEdit.__init__(self)
        InputBase.__init__(self, label_text)

    def is_empty(self) -> bool:
        return self.toPlainText() == ""
</code></pre>
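The error can be reproduced without PyQt: PyQt widget classes cooperate with `super()`, so `QLineEdit.__init__(self)` continues along `Input`'s MRO and eventually reaches `InputBase.__init__` with no `label_text`. A stdlib sketch, where `Widget` stands in for `QLineEdit`:

```python
class Widget:                       # stands in for QLineEdit
    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # cooperative: calls the next class in the MRO

class InputBase:
    def __init__(self, label_text):
        self.label_text = label_text

class Input(Widget, InputBase):
    def __init__(self, label_text):
        # MRO is Input -> Widget -> InputBase -> object, so this call
        # ends up invoking InputBase.__init__ with no arguments:
        Widget.__init__(self)
        InputBase.__init__(self, label_text)

try:
    Input(label_text="Example label")
except TypeError as exc:
    print(exc)  # ... missing 1 required positional argument: 'label_text'
```

A common fix is to make every `__init__` fully cooperative: a single `super().__init__(...)` call per class, forwarding `label_text` through keyword arguments instead of calling each base's `__init__` explicitly.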
|
<python><python-3.x><pyqt5>
|
2023-04-18 11:11:52
| 1
| 593
|
Karolis
|
76,043,852
| 10,504,555
|
mplfinance - add_artist raises AttributeError: 'dict' object has no attribute 'axes'
|
<p>I am using <code>mplfinance</code> to create a OHLC chart (stock prices)</p>
<p>This is implemented as a custom class as I need to add additional elements to the final chart</p>
<pre><code>class CandlestickChart:
    def __init__(self, data):
        self.data = data
        self.fig, self.ax = mpf.plot(
            self.data,
            type="ohlc",
            volume=False,
            returnfig=True,
        )
</code></pre>
<p>I then have another function that adds a "swap zone" overlay to the chart:</p>
<pre><code>def add_swap_zone(self, swap_zone):
    zone_data = self.data.copy()
    zone_data["in_swap_zone"] = (zone_data.index >= swap_zone.x_left) & (
        zone_data.index <= swap_zone.x_right
    )
    zone_data["swap_zone_val"] = np.nan
    zone_data.loc[zone_data["in_swap_zone"], "swap_zone_val"] = swap_zone.y_down
    zone_data["swap_zone_val"].fillna(swap_zone.y_up, inplace=True)

    ap_swap_zone = mpf.make_addplot(
        zone_data["swap_zone_val"],
        type="bar",
        width=0.5,
        color=swap_zone.style.FILL_COLOUR,
        alpha=swap_zone.style.FILL_ALPHA,
        panel=0,
        secondary_y=False,
    )
    self.ax[0].add_artist(ap_swap_zone)
</code></pre>
<p>While the mechanics of the function may look complex, in essence I am using <code>mpf.make_addplot</code> to create a new plot, which then gets added to <code>self.ax[0]</code>.</p>
<p>The error I receive is</p>
<pre><code>Traceback (most recent call last):
File "/Users/user/projects/charter/src/draw/demo/demo_plot.py", line 42, in <module>
main()
File "/Users/user/projects/charter/src/draw/demo/demo_plot.py", line 35, in main
cs_chart.add_swap_zone(
File "/Users/user/projects/charter/src/draw/plot.py", line 39, in add_swap_zone
self.ax[0].add_artist(ap_swap_zone)
File "/Users/user/projects/charter/venv/lib/python3.11/site-packages/matplotlib/axes/_base.py", line 2219, in add_artist
a.axes = self
^^^^^^
AttributeError: 'dict' object has no attribute 'axes'
</code></pre>
<p>I have tried the alternative ways below to add the plot, and they still failed:</p>
<ol>
<li><code>self.ax[list(self.ax.keys())[0]].add_artist(ap_swap_zone)</code></li>
<li><code>self.ax["0"].add_artist(ap_swap_zone)</code></li>
</ol>
<p>EDIT: Revised code based on the selected answer.</p>
<pre><code>class CandlestickChart:
    def __init__(self, data):
        self.data = data
        self.addplot_specs = []

    def add_swap_zone(self, swap_zone):
        zone_data = self.data.copy()
        zone_data["in_swap_zone"] = (zone_data.index >= swap_zone.x_left) & (
            zone_data.index <= swap_zone.x_right
        )
        zone_data["swap_zone_val"] = np.nan
        zone_data.loc[zone_data["in_swap_zone"], "swap_zone_val"] = swap_zone.y_down
        zone_data["swap_zone_val"].fillna(swap_zone.y_up, inplace=True)

        ap_swap_zone = mpf.make_addplot(
            zone_data["swap_zone_val"],
            type="bar",
            width=0.5,
            color=swap_zone.style.FILL_COLOUR,
            alpha=swap_zone.style.FILL_ALPHA,
            panel=0,
            secondary_y=False,
        )
        self.addplot_specs.append(ap_swap_zone)

    def plot(self):
        self.fig, self.ax = mpf.plot(
            self.data,
            type="ohlc",
            volume=False,
            returnfig=True,
            addplot=self.addplot_specs,
        )
        mpf.show()
</code></pre>
<p>The constructor initialises <code>self.addplot_specs</code> as an empty list, which is then appended to with the addplot specification in <code>add_swap_zone()</code>. The <code>mpf.plot()</code> call passes <code>addplot=self.addplot_specs</code>.</p>
|
<python><matplotlib><mplfinance>
|
2023-04-18 11:10:48
| 1
| 389
|
neo-technoker
|
76,043,814
| 20,220,485
|
How should you name columns when transposing a dataframe?
|
<p>I am initialising a dataframe with lists, having followed the advice <a href="https://stackoverflow.com/questions/13784192/creating-an-empty-pandas-dataframe-and-then-filling-it">here</a>. I then need to transpose the dataframe.</p>
<p>In the first example I take the column names from the lists used to initialise the dataframe.</p>
<p>In the second example I add the column names last.</p>
<p>-> Is there any difference between these examples?</p>
<p>-> Is there a standard or better way of naming columns of dataframes initialised like this?</p>
<pre><code>p_id = ['a_1','a_2']
p = ['a','b']
p_id.insert(0,'p_id')
p.insert(0,'p')
df = pd.DataFrame([p_id, p])
df = df.transpose()
df.columns = df.iloc[0]
df = df[1:]
df
>>>
p_id p
0 a_1 a
1 a_2 b
</code></pre>
<pre><code>p_id = ['a_1','a_2']
p = ['a','b']
df = pd.DataFrame([p_id, p])
df = df.transpose()
df.columns = ['p_id', 'p']
df
>>>
p_id p
0 a_1 a
1 a_2 b
</code></pre>
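For comparison, when each list is meant to become a column, the transpose can be avoided entirely by building the frame from a dict, which names the columns up front. A minimal sketch:

```python
import pandas as pd

p_id = ['a_1', 'a_2']
p = ['a', 'b']

# each dict key becomes a column name, each list a column
df = pd.DataFrame({'p_id': p_id, 'p': p})
#   p_id  p
# 0  a_1  a
# 1  a_2  b
```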
|
<python><pandas><dataframe><transpose>
|
2023-04-18 11:07:24
| 1
| 344
|
doine
|
76,043,784
| 1,804,173
|
How to add pydantic serialization/deserialization support for a foreign (third-party) class?
|
<p>Consider a third-party class that doesn't support pydantic serialization, where you are not in control of that class's source code, i.e., you cannot make it inherit from <code>BaseModel</code>. Let's assume the entire state of that class can be constructed and obtained via its public interface (but the class may have private fields). So in theory we'd be able to write serialization/deserialization functions for that class based on its public interface.</p>
<p>Is it somehow possible to use such a class inside a pydantic <code>BaseModel</code>? I.e., the goal would be to somehow arrive at:</p>
<pre class="lang-py prettyprint-override"><code>class MySerializableClass(BaseModel):
foreign_class_instance: ForeignClass
</code></pre>
<p>How could we add serialization/deserialization functions to properly support the <code>foreign_class_instance</code> field?</p>
<hr />
<p>As a concrete example, let's take a NumPy array as the foreign class that we want to support. By serializing/deserializing based on the "public interface" I mean:</p>
<ul>
<li>For serialization we can use the public interface of <code>np.array</code> to get its data including meta data like dtype and shape. The output of the serialization function could be something like <code>{"dtype": "int", "shape": [3], "data": [1, 2, 3]}</code> (or any other composition of JSON-serializable data). The serialization function would have a signature like <code>serialize(x: np.ndarray) -> object</code> (for lack of the <code>JsonData</code> type).</li>
<li>The deserialization function would get this serialized representation, and would construct the <code>np.ndarray</code> instance again on the public interface of <code>np.ndarray</code>, which typically means using its constructor. The signature would be the inverse: <code>deserialize(o: object) -> np.ndarray</code>.</li>
</ul>
<p>My question is: Assuming I can implement these two <code>serialize</code> and <code>deserialize</code> functions just fine like in this example, how can I integrate them into pydantic so that serializing/deserializing a <code>BaseModel</code> implicitly makes use of the two functions.</p>
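The two functions themselves are straightforward for the NumPy example. This is only a sketch of the serialize/deserialize pair (in pydantic v1, one way to wire them in would be a wrapper type exposing `__get_validators__` plus `json_encoders` in the model's `Config`, though that wiring is an assumption and not shown here):

```python
import numpy as np

def serialize(x: np.ndarray) -> object:
    """Reduce an ndarray to JSON-serializable primitives via its public API."""
    return {"dtype": str(x.dtype), "shape": list(x.shape),
            "data": x.ravel().tolist()}

def deserialize(o: object) -> np.ndarray:
    """Rebuild the ndarray from the serialized representation."""
    return np.asarray(o["data"], dtype=o["dtype"]).reshape(o["shape"])

a = np.array([[1, 2, 3], [4, 5, 6]])
roundtrip = deserialize(serialize(a))
```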
|
<python><pydantic>
|
2023-04-18 11:04:37
| 1
| 27,316
|
bluenote10
|