| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,537,916
| 15,487,581
|
Capture multiple column names as a value in a single column in pandas
|
<p>I want to capture the subject name (the column name) as a value in a new column wherever a student's marks improved after re-evaluation.</p>
<p>I have the dataset before re-evaluation:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>class</th>
<th>exam</th>
<th>maths</th>
<th>physics</th>
<th>chemistry</th>
<th>joining date</th>
<th>result</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>Grade - 10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td>35</td>
<td>2000-01-15</td>
<td>fail</td>
</tr>
<tr>
<td>Bob</td>
<td>Grade - 06</td>
<td>mid term</td>
<td>65</td>
<td>52</td>
<td>92</td>
<td>2001-08-16</td>
<td>pass</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade - 06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
<td>2001-09-14</td>
<td>pass</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade - 07</td>
<td>model 1</td>
<td>10</td>
<td>90</td>
<td>45</td>
<td>2010-01-10</td>
<td>fail</td>
</tr>
</tbody>
</table>
</div>
<p>Now I have the dataset after re-evaluation. Some students' marks have improved, and there are also new rows for students who took their exam recently:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>class</th>
<th>exam</th>
<th>maths</th>
<th>physics</th>
<th>chemistry</th>
<th>joining date</th>
<th>result</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>Grade - 10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td><strong>87</strong></td>
<td>2000-01-15</td>
<td><strong>pass</strong></td>
</tr>
<tr>
<td>Bob</td>
<td>Grade - 06</td>
<td>mid term</td>
<td>65</td>
<td><strong>91</strong></td>
<td>92</td>
<td>2001-08-16</td>
<td>pass</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade - 06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
<td>2001-09-14</td>
<td>pass</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade - 07</td>
<td>model 1</td>
<td><strong>100</strong></td>
<td>90</td>
<td>45</td>
<td><strong>2001-01-10</strong></td>
<td><strong>pass</strong></td>
</tr>
<tr>
<td>Sam</td>
<td>Grade - 08</td>
<td>mid term</td>
<td>43</td>
<td>62</td>
<td>80</td>
<td>2000-08-10</td>
<td>pass</td>
</tr>
<tr>
<td>James</td>
<td>Grade - 10</td>
<td>model `</td>
<td>76</td>
<td>66</td>
<td>96</td>
<td>2000-09-07</td>
<td>pass</td>
</tr>
<tr>
<td>Henry</td>
<td>Grade - 09</td>
<td>model 1</td>
<td>34</td>
<td>91</td>
<td>70</td>
<td>2000-01-04</td>
<td>fail</td>
</tr>
</tbody>
</table>
</div>
<p>Now we need to concatenate these two datasets and mark which rows and which columns were updated, so the concatenated dataset looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>class</th>
<th>exam</th>
<th>maths</th>
<th>physics</th>
<th>chemistry</th>
<th>joining date</th>
<th>result</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>Grade - 10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td>35</td>
<td>2000-01-15</td>
<td>fail</td>
</tr>
<tr>
<td>Bob</td>
<td>Grade - 06</td>
<td>mid term</td>
<td>65</td>
<td>52</td>
<td>92</td>
<td>2001-08-16</td>
<td>pass</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade - 06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
<td>2001-09-14</td>
<td>pass</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade - 07</td>
<td>model 1</td>
<td>10</td>
<td>90</td>
<td>45</td>
<td>2010-01-10</td>
<td>fail</td>
</tr>
<tr>
<td>John</td>
<td>Grade - 10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td><strong>87</strong></td>
<td>2000-01-15</td>
<td><strong>pass</strong></td>
</tr>
<tr>
<td>Bob</td>
<td>Grade - 06</td>
<td>mid term</td>
<td>65</td>
<td><strong>91</strong></td>
<td>92</td>
<td>2001-08-16</td>
<td>pass</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade - 06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
<td>2001-09-14</td>
<td>pass</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade - 07</td>
<td>model 1</td>
<td><strong>100</strong></td>
<td>90</td>
<td>45</td>
<td><strong>2001-01-10</strong></td>
<td><strong>pass</strong></td>
</tr>
<tr>
<td>Sam</td>
<td>Grade - 08</td>
<td>mid term</td>
<td>43</td>
<td>62</td>
<td>80</td>
<td>2000-08-10</td>
<td>pass</td>
</tr>
<tr>
<td>James</td>
<td>Grade - 10</td>
<td>model `</td>
<td>76</td>
<td>66</td>
<td>96</td>
<td>2000-09-07</td>
<td>pass</td>
</tr>
<tr>
<td>Henry</td>
<td>Grade - 09</td>
<td>model 1</td>
<td>34</td>
<td>91</td>
<td>70</td>
<td>2000-01-04</td>
<td>fail</td>
</tr>
</tbody>
</table>
</div>
<p>The final output should look like this, with two new columns. I was able to eliminate the duplicates and add the new column <strong>any improvement</strong>, but I got stuck on adding the other new column <strong>improved subject</strong>.</p>
<p><strong>Expected Output:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>class</th>
<th>exam</th>
<th>maths</th>
<th>physics</th>
<th>chemistry</th>
<th>anyimprovement</th>
<th>improvedfield</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>Grade-10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td><strong>87</strong></td>
<td>Yes</td>
<td>chemistry,result</td>
</tr>
<tr>
<td>Bob</td>
<td>Grade-06</td>
<td>mid term</td>
<td>65</td>
<td><strong>91</strong></td>
<td>92</td>
<td>Yes</td>
<td>physics</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade-06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
<td>No</td>
<td>no improvement</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade-07</td>
<td>model 1</td>
<td><strong>100</strong></td>
<td>90</td>
<td>45</td>
<td>Yes</td>
<td>maths,result,joining date</td>
</tr>
<tr>
<td>Sam</td>
<td>Grade-08</td>
<td>mid term</td>
<td>43</td>
<td>62</td>
<td>80</td>
<td>New Entry</td>
<td>new entry</td>
</tr>
<tr>
<td>James</td>
<td>Grade-10</td>
<td>model `</td>
<td>76</td>
<td>66</td>
<td>96</td>
<td>New Entry</td>
<td>new entry</td>
</tr>
<tr>
<td>Henry</td>
<td>Grade-09</td>
<td>model 1</td>
<td>34</td>
<td>91</td>
<td>70</td>
<td>New Entry</td>
<td>new entry</td>
</tr>
</tbody>
</table>
</div>
<p>Below is the code I used for this:</p>
<pre><code># Added a primary-key column by concatenating name, class, exam,
# and a secondary-key column by concatenating maths, physics, chemistry.
dupedf = concatdf.loc[concatdf.duplicated(subset=['PrimaryKey', 'SecondaryKey'], keep=False)]
dupedf1 = concatdf.loc[concatdf.duplicated(subset=['PrimaryKey'], keep=False)]
for i, j in dupedf.iterrows():
    for k, l in dupedf1.iterrows():
        if l['PrimaryKey'] == j['PrimaryKey']:
            dupedf = dupedf.drop_duplicates(subset=['PrimaryKey', 'SecondaryKey'], keep='last')
            dupedf['any improvement'] = 'No'
            # dupedf['improved subject'] = ' '
        else:
            dupedf1 = dupedf1.drop_duplicates(subset=['SecondaryKey'], keep=False)
            dupedf1 = dupedf1.drop_duplicates(subset=['PrimaryKey'], keep='last')
            dupedf1['any improvement'] = 'Yes'
            # dupedf1['improved subject'] = 'column name'
</code></pre>
<p>In the above code, I iterate row by row over only the rows that exist in both the before and after re-evaluation datasets, to fill the two new columns <strong>anyimprovement</strong> and <strong>improvedfield</strong>. <strong>I was able to achieve the anyimprovement column, but I need help with the improvedfield column.</strong></p>
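<p>Not the asker's code, but one possible loop-free sketch of the whole task, using <code>merge</code> with suffixes plus an <code>indicator</code> to spot new entries (the sample frames below are abridged assumptions):</p>

```python
import pandas as pd

# Abridged stand-ins for the "before" and "after" frames (assumptions).
before = pd.DataFrame({
    "name": ["John", "Bob"], "class": ["Grade - 10", "Grade - 06"],
    "exam": ["model 1", "mid term"],
    "maths": [98, 65], "physics": [78, 52], "chemistry": [35, 92],
    "result": ["fail", "pass"],
})
after = pd.DataFrame({
    "name": ["John", "Bob", "Sam"], "class": ["Grade - 10", "Grade - 06", "Grade - 08"],
    "exam": ["model 1", "mid term", "mid term"],
    "maths": [98, 65, 43], "physics": [78, 91, 62], "chemistry": [87, 92, 80],
    "result": ["pass", "pass", "pass"],
})

keys = ["name", "class", "exam"]
value_cols = ["maths", "physics", "chemistry", "result"]

# Left-merge "after" against "before"; the indicator marks brand-new students.
merged = after.merge(before, on=keys, how="left",
                     suffixes=("", "_old"), indicator=True)

def improved(row):
    if row["_merge"] == "left_only":
        return "new entry"
    changed = [c for c in value_cols if row[c] != row[f"{c}_old"]]
    return ",".join(changed) if changed else "no improvement"

merged["improved subject"] = merged.apply(improved, axis=1)
merged["any improvement"] = merged["improved subject"].map(
    lambda s: {"new entry": "New Entry", "no improvement": "No"}.get(s, "Yes"))
result = merged[keys + value_cols + ["any improvement", "improved subject"]]
print(result[["name", "any improvement", "improved subject"]])
```

<p>The <code>improvedfield</code> value falls out of the list comprehension: it keeps exactly the column names whose old and new values differ.</p>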
|
<python><python-3.x><pandas><dataframe><pivot>
|
2023-06-23 07:25:17
| 1
| 349
|
Beginner
|
76,537,900
| 130,262
|
How to ensure a newly added item to PyQt6's QGraphicsScene becomes immediately visible?
|
<p>I am writing a visualizer of a tree-like structure (imagine mind-mapping, although it's not quite that) and want to be able to move the view to newly unfolded objects immediately after they are painted to the canvas.</p>
<p>I made this example of my problem:</p>
<pre><code>import sys
from PyQt6.QtWidgets import QGraphicsScene, QGraphicsView, QApplication, QGraphicsRectItem

class MyRect(QGraphicsRectItem):
    def __init__(self, x, y, width, height):
        super().__init__(x, y, width, height)
        self.pressed = False

    def mousePressEvent(self, event):
        self.pressed = True

    def mouseReleaseEvent(self, event):
        if self.pressed:
            self.pressed = False
            x = self.boundingRect().toRect().x()
            self.scene().addItem(MyRect(x + 1000, 100, 100, 100))
            self.ensureVisible(1100, 100, 100, 100)

app = QApplication(sys.argv)
scene = QGraphicsScene()
scene.addItem(MyRect(100, 100, 100, 100))
view = QGraphicsView(scene)
view.setGeometry(0, 0, 800, 500)
view.show()
app.exec()
</code></pre>
<p>It adds a rectangle. Clicking on it adds another one, which is not seen because it is outside the current viewport. Calling <code>ensureVisible</code> on its coordinates doesn't work. My guess is that I call the <code>ensureVisible</code> method "too soon", before some behind-the-scenes paint method happens.
What is the proper way, or the proper place, to call it so the newly added rectangle gets shown (by moving the viewport)?
I tried many variations of <code>ensureVisible</code> on coordinates, the item, and the viewport, but nothing seems to work.</p>
|
<python><pyqt>
|
2023-06-23 07:22:43
| 2
| 829
|
Michal Pravda
|
76,537,794
| 2,583,670
|
Add grouped of boxplot legend in python
|
<p>I have two dataframes (df1 and df2), each of shape (60, 20). I combined them horizontally (i.e., (60, 40)) and drew the 40 columns as a boxplot. Now I want to add a legend with only two entries, since all 20 columns of df1 form one group and all 20 columns of df2 the other.
I have searched and looked at several posts, but found nothing similar to my problem.
My code and output figure are below. As shown in the figure, I need two legend entries ('A' for all red boxplots, 'B' for all gold boxplots).</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

df_combined = pd.concat([df1, df2], axis=1)
fig, ax1 = plt.subplots(figsize=(10, 6))
labels = [str(i) for i in range(1, 41)]
props = ax1.boxplot(df_combined,
                    vert=True,
                    patch_artist=True,
                    labels=labels)
ax1.yaxis.grid(True)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/uAGEM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uAGEM.png" alt="enter image description here" /></a></p>
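<p>One possible sketch (random frames stand in for the real df1/df2): color the first 20 boxes and the last 20 differently, then build the two legend entries from proxy <code>Patch</code> artists, since <code>boxplot</code> itself does not produce per-group legend handles:</p>

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
from matplotlib.patches import Patch

rng = np.random.default_rng(0)
df1 = pd.DataFrame(rng.normal(0, 1, (60, 20)))  # stand-in for the real df1
df2 = pd.DataFrame(rng.normal(1, 1, (60, 20)))  # stand-in for the real df2
df_combined = pd.concat([df1, df2], axis=1)

fig, ax1 = plt.subplots(figsize=(10, 6))
props = ax1.boxplot(df_combined, patch_artist=True)

# the first 20 boxes belong to df1 (red), the remaining 20 to df2 (gold)
for i, box in enumerate(props["boxes"]):
    box.set_facecolor("red" if i < 20 else "gold")

# proxy artists give one legend entry per *group* instead of per box
ax1.legend(handles=[Patch(facecolor="red", label="A"),
                    Patch(facecolor="gold", label="B")])
ax1.yaxis.grid(True)
```

<p>Proxy handles are the standard way to label artists that a plotting call doesn't register in the legend automatically.</p>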
|
<python><pandas><matplotlib><boxplot>
|
2023-06-23 07:04:15
| 1
| 711
|
Mohsen Ali
|
76,537,647
| 2,430,558
|
merge several float and str objects into a dataframe in Python
|
<p>I am writing some Python code that calls a smart-device API every 5 minutes and reads the values one by one.
The <code>t1me</code> object is a <code>str</code> while the rest are <code>float</code>.</p>
<pre><code>data = response.json()['data']  # fetch once and reuse instead of re-parsing per field
t1me = datetime.datetime.fromtimestamp(data['time']).strftime("%Y-%m-%d %H:%M:%S")
co2 = data['co2']
humidity = data['humidity']
pressure = data['pressure']
temp = data['temp']
voc = data['voc']

t1me      # '2023-06-23 13:50:14'
co2       # 768.0
humidity  # 55.0
pressure  # 1003.3
temp      # 18.9
voc       # 46.0
</code></pre>
<p>I would like to merge the results into a simple dataframe like this so I can export it to Google Sheets:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Datetime</th>
<th>Temp</th>
<th>Humidity</th>
<th>Pressure</th>
<th>CO2</th>
<th>VOC</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-06-23 13:50:14</td>
<td>18.9</td>
<td>55.0</td>
<td>1003.3</td>
<td>768.0</td>
<td>46.0</td>
</tr>
</tbody>
</table>
</div>
<p>But I am not sure how to do this with pandas.
I probably could fetch all the data in a single JSON request, but I'm not sure how to do that either.</p>
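<p>A minimal sketch (with the values hard-coded in place of the live API response): a one-row DataFrame built from a list containing a single dict gives exactly the table above, ready for <code>to_csv()</code> or a Google Sheets client:</p>

```python
import pandas as pd

# hard-coded stand-ins for the values returned by the API
t1me = '2023-06-23 13:50:14'
co2, humidity, pressure, temp, voc = 768.0, 55.0, 1003.3, 18.9, 46.0

# one dict per row; pandas keeps str for Datetime and float for the rest
row = pd.DataFrame([{
    "Datetime": t1me, "Temp": temp, "Humidity": humidity,
    "Pressure": pressure, "CO2": co2, "VOC": voc,
}])
print(row)
```

<p>Appending each 5-minute reading is then just another dict in the list, or <code>pd.concat([existing, row])</code>.</p>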
|
<python><json><pandas>
|
2023-06-23 06:40:48
| 1
| 449
|
DrPaulVella
|
76,537,507
| 1,367,705
|
Send JWT authentication request using Python's sockets
|
<p>I have a webserver which accepts requests for JWT authentication. I'm trying to write a client, using Python's sockets, to send the authentication request. However, the server returns 400 - Bad Request. The password and email are correct, because this is only an example.</p>
<p><strong>server</strong></p>
<pre><code>from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/auth", methods=["POST"])
def authenticate():
    print(request)
    if request.method == "POST":
        email = request.json.get("email")
        password = request.json.get("password")
        print(request)
        if not email or not password:
            return "", 400  # Flask expects a (body, status) tuple, not a bare int
        return "", 200
    else:
        abort(400)

if __name__ == "__main__":
    app.run(host='127.0.0.1', port=6000)
</code></pre>
<p><strong>client</strong></p>
<pre><code>import socket

HOST = "127.0.0.1"  # The server's hostname or IP address
PORT = 6000         # The port used by the server

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, PORT))
    s.sendall(b"POST /auth HTTP/1.1\r\nHost:127.0.0.1:6000\r\nContent-Type: application/json \r\n")
    s.sendall(b"email=test@test.com&password=p455\r\n\r\n")
    data = s.recv(1024)
    print(f"Received {data}")
</code></pre>
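<p>A likely reason for the 400 (my reading, not a confirmed diagnosis): the handcrafted request declares <code>application/json</code> but sends a form-encoded body, and it omits the <code>Content-Length</code> header, so Flask cannot parse a JSON payload. A sketch of a well-formed request (building the bytes only; sending works as in the original client):</p>

```python
import json

HOST, PORT = "127.0.0.1", 6000

# JSON body to match the declared Content-Type
body = json.dumps({"email": "test@test.com", "password": "p455"}).encode()

# Content-Length tells the server where the body ends
request = (
    b"POST /auth HTTP/1.1\r\n"
    + f"Host: {HOST}:{PORT}\r\n".encode()
    + b"Content-Type: application/json\r\n"
    + f"Content-Length: {len(body)}\r\n".encode()
    + b"Connection: close\r\n"
    + b"\r\n"
    + body
)
print(request.decode())
```

<p>In practice, a library such as <code>requests</code> or <code>http.client</code> assembles these headers for you.</p>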
|
<python><sockets><flask>
|
2023-06-23 06:12:20
| 1
| 2,620
|
mazix
|
76,537,478
| 452,102
|
Why does lxml compilation look for 32-bit libraries?
|
<p>The installation fails because the linker only finds 32-bit <code>.so</code> files, which it reports as incompatible:</p>
<pre><code>gcc -pthread -shared -Wl,-z,relro -Wl,-z,now -g -Wl,-z,relro -Wl,-z,now -g build/temp.linux-x86_64-3.8/src/lxml/etree.o -L/usr/lib64 -L/usr/lib64 -lxslt -lexslt -lxml2 -lrt -lz -lm -o build/lib.linux-x86_64-3.8/lxml/etree.cpython-38-x86_64-linux-gnu.so
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/8/../../../libxml2.so when searching for -lxml2
/usr/bin/ld: skipping incompatible //lib/libxml2.so when searching for -lxml2
/usr/bin/ld: skipping incompatible //usr/lib/libxml2.so when searching for -lxml2
/usr/bin/ld: cannot find -lxml2
collect2: error: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
</code></pre>
<p>Why is it searching in <code>/usr/lib</code>? Shouldn't 64-bit images look for libraries in <code>/usr/lib64</code> unless set otherwise (even gcc says to look there)? I tried setting <code>LD_LIBRARY_PATH</code> but that didn't help.</p>
<p>This is the Dockerfile I use.</p>
<pre><code>FROM registry.access.redhat.com/ubi8/ubi-minimal
RUN microdnf -y install python38
RUN microdnf -y install shadow-utils
RUN microdnf -y install --nodocs \
gcc-c++ \
python38-devel \
python38-wheel \
unixODBC-devel \
bzip2 \
bzip2-devel \
expat \
expat-devel \
gcc \
git \
glibc-langpack-en \
libffi \
libffi-devel \
libxml2 \
libxml2-devel \
libxslt \
libxslt-devel \
unzip \
wget \
yum-utils \
zip \
make \
openssl \
openssl-devel \
sqlite-devel
RUN pip3 install --no-binary :all: lxml==4.6.2
</code></pre>
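<p>One thing worth checking (an assumption, not a verified fix): <code>LD_LIBRARY_PATH</code> only affects the dynamic loader at <em>run</em> time, while the <code>ld</code> errors above happen at <em>link</em> time, where gcc consults <code>LIBRARY_PATH</code> instead. A hypothetical Dockerfile tweak:</p>

```dockerfile
# Hypothetical: put the 64-bit directory first in gcc's link-time search path
ENV LIBRARY_PATH=/usr/lib64
RUN pip3 install --no-binary :all: lxml==4.6.2
```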
|
<python><lxml><redhat><dnf>
|
2023-06-23 06:04:53
| 2
| 22,154
|
Nishant
|
76,537,414
| 2,060,596
|
python tests: project setup does not find classes from program
|
<p>I'm failing to set up my Python project so that the tests work.
Here is my structure:</p>
<pre><code>.
├── model
│ ├── __init__.py
│ └── user.py
└── test
├── __init__.py
└── user.py
</code></pre>
<p>model/user.py:</p>
<pre><code>class User:
pass
</code></pre>
<p>test_user.py</p>
<pre><code>from model.user import User
</code></pre>
<p>The error:</p>
<pre><code>python test/user.py
Traceback (most recent call last):
  File "/tmp/minimal_example/test/user.py", line 1, in <module>
    from model.user import User
ModuleNotFoundError: No module named 'model'
</code></pre>
<p>A relative import also fails:</p>
<p>test/test_user.py:</p>
<pre><code>from ..model.user import User
</code></pre>
<p>Running it:</p>
<pre><code>python test/test_user.py
Traceback (most recent call last):
  File "/tmp/minimal_example/test/test_user.py", line 1, in <module>
    from ..model.user import User
ImportError: attempted relative import with no known parent package
</code></pre>
<p>Am I missing anything here?</p>
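<p>For what it's worth, one common fix is to run the tests as a module from the project root (so the root lands on <code>sys.path</code>) rather than as a script. The sketch below rebuilds the layout in a temporary directory to show it end to end (file contents are the ones from the question, plus a <code>print</code>):</p>

```python
import os
import subprocess
import sys
import tempfile
import textwrap

with tempfile.TemporaryDirectory() as root:
    for pkg in ("model", "test"):
        os.makedirs(os.path.join(root, pkg))
        open(os.path.join(root, pkg, "__init__.py"), "w").close()
    with open(os.path.join(root, "model", "user.py"), "w") as f:
        f.write("class User:\n    pass\n")
    with open(os.path.join(root, "test", "test_user.py"), "w") as f:
        f.write(textwrap.dedent("""\
            from model.user import User
            print(User.__name__)
        """))

    # "python -m test.test_user" run from the root puts the root on sys.path,
    # so "model" is importable; "python test/test_user.py" puts test/ there instead.
    proc = subprocess.run([sys.executable, "-m", "test.test_user"],
                          cwd=root, capture_output=True, text=True)
    print(proc.stdout.strip())
```

<p>Test runners such as pytest do effectively the same thing when invoked from the project root.</p>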
|
<python>
|
2023-06-23 05:50:44
| 1
| 6,062
|
Dakkar
|
76,537,384
| 13,396,497
|
Python pass enter key to subprocess.run function to exit from the process
|
<p>I am trying to pass a list of log files one by one to a decoder exe to get a CSV file. Below is the code:</p>
<pre><code>p = subprocess.run(["wine", DECODER_EXE_PATH, str(decompressed_file_path)], capture_output=True, text=True)
</code></pre>
<p>The issue I am facing is that after every file, the decoder exe expects the Enter key to be pressed; otherwise, after decoding the first file, the process gets stuck.
How can I achieve this in a Python script?<br />
Also, if I drag-drop a log file onto the decoder directly in Windows, the command prompt shows the following after decoding the file:</p>
<p><a href="https://i.sstatic.net/u3QtN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u3QtN.png" alt="enter image description here" /></a></p>
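<p>If the decoder reads the keypress from standard input (an assumption; some console tools read the keyboard directly and ignore stdin), passing <code>input="\n"</code> to <code>subprocess.run</code> should satisfy the prompt. A sketch with a tiny stand-in for the decoder exe:</p>

```python
import subprocess
import sys

# stand-in for the decoder exe: waits for Enter, then finishes
fake_decoder = [sys.executable, "-c", "input(); print('decoded')"]

# input='\n' is written to the child's stdin, answering the Enter prompt
p = subprocess.run(fake_decoder, input="\n", capture_output=True, text=True)
print(p.stdout.strip())
```

<p>With the real command this would be <code>subprocess.run(["wine", DECODER_EXE_PATH, ...], input="\n", ...)</code>; if the tool bypasses stdin, a pseudo-terminal or automation tool is needed instead.</p>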
|
<python><csv><subprocess>
|
2023-06-23 05:44:50
| 0
| 347
|
RKIDEV
|
76,537,360
| 14,548,431
|
Initialize one of two Pydantic models depending on an init parameter
|
<p>I have a Pydantic model class <code>MessageModel</code> with a version number as <a href="https://docs.python.org/3/library/typing.html#typing.Literal" rel="noreferrer"><code>Literal</code></a>. Now our requirements have changed and we need another <code>MessageModel</code> with a higher version number, because the attributes of the <code>MessageModel</code> have changed. I want to have a class, where I can give the version number as an argument to the constructor. Does anyone have an idea?</p>
<p>Here are the models:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal
from pydantic import BaseModel
class MessageModelV1(BaseModel):
version: Literal[1]
bar: str
class MessageModelV2(BaseModel):
version: Literal[2]
foo: str
</code></pre>
<p>What I want is a class which initializes the right <code>MessageModel</code> version:</p>
<pre class="lang-py prettyprint-override"><code>model = MessageModel(version=2, ...)
</code></pre>
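<p>One way to get that call signature (a sketch; shown with plain <code>dataclass</code>es so it runs standalone, but the same dispatch works with the pydantic models above) is a small factory that picks the class from the <code>version</code> argument:</p>

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class MessageModelV1:
    version: Literal[1]
    bar: str

@dataclass
class MessageModelV2:
    version: Literal[2]
    foo: str

_VERSIONS = {1: MessageModelV1, 2: MessageModelV2}

def MessageModel(**data):
    """Instantiate the model class whose version literal matches."""
    return _VERSIONS[data["version"]](**data)

model = MessageModel(version=2, foo="hello")
print(type(model).__name__)
```

<p>With pydantic specifically, a discriminated union on the <code>version</code> field achieves the same dispatch during validation.</p>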
|
<python><pydantic>
|
2023-06-23 05:41:10
| 1
| 653
|
Phil997
|
76,537,320
| 7,120,087
|
Inject custom logic into Python package
|
<p>Say package <code>A</code> contains a function <code>process(input)</code> that takes some input, does computation, and returns a result.</p>
<p>Is it possible for another function in package <code>B</code> to do something like <code>B.patchPackageA()</code> and modify/add logic to package <code>A</code>'s <code>process(input)</code> to do something extra?</p>
<p>For example, say package <code>A</code>'s <code>process(input)</code> does the following:</p>
<pre class="lang-py prettyprint-override"><code>def process(input: str):
return input.upper()
</code></pre>
<p>I want to be able to do the following:</p>
<pre><code>B.patchPackageA()
</code></pre>
<p>and then if I were to call <code>A.process("hello world!")</code>, it would do something like:</p>
<pre><code>def process(input: str):
print("Modified logic")
return input.upper()
</code></pre>
<p>Is this possible?</p>
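<p>Yes, this is usually called monkey-patching: rebind <code>A.process</code> to a wrapper that keeps a reference to the original. A self-contained sketch (a throwaway module object stands in for the real package <code>A</code>; an imported module behaves the same way):</p>

```python
import types

# stand-in for package A
A = types.ModuleType("A")

def _original_process(input: str):
    return input.upper()

A.process = _original_process

def patchPackageA():
    """Wrap A.process with extra logic while delegating to the original."""
    original = A.process
    def patched(input: str):
        print("Modified logic")
        return original(input)
    A.process = patched

patchPackageA()
print(A.process("hello world!"))
```

<p>Note the patch only affects lookups that go through the module attribute; code that bound <code>process</code> directly (e.g. <code>from A import process</code>) before patching still holds the original function.</p>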
|
<python>
|
2023-06-23 05:32:40
| 2
| 1,341
|
Baiqing
|
76,537,092
| 1,982,032
|
Why can't I install a PyPI package in a virtual environment?
|
<p>The PyPI package can be installed into the user directory:</p>
<pre><code>python3 --version
Python 3.11.2
echo $USER
debian
</code></pre>
<p>Start to install</p>
<pre><code>pip install vnstock --break-system-packages
</code></pre>
<p>Check the installed location:</p>
<pre><code>pip show vnstock
Name: vnstock
Version: 0.1.6
Summary: Vietnam Stock Market Data
Home-page: https://github.com/thinh-vu/vnstock
Author: Thinh Vu
Author-email: mrthinh@live.com
License:
Location: /home/debian/.local/lib/python3.11/site-packages
Requires:
Required-by:
</code></pre>
<p>Try to make a new installation with virtual environment:</p>
<pre><code>python3 -m venv project
cd project
debian@debian:~/project$ source bin/activate
(project) debian@debian:~/project$ pip install vnstock --break-system-packages
bash: /home/debian/project/bin/pip: cannot execute: required file not found
(project) debian@debian:~/project$
</code></pre>
<p>Why can't I install the PyPI package in the virtual environment?
Installing without any arguments doesn't work either.</p>
<pre><code>pip install vnstock
bash: /home/debian/project/bin/pip: cannot execute: required file not found
</code></pre>
<p>Please install Debian 12 first and try to install a PyPI package in a virtual environment, then post your answer.</p>
|
<python><pip><python-packaging><python-venv>
|
2023-06-23 04:29:18
| 0
| 355
|
showkey
|
76,537,012
| 14,608,529
|
How to easily get list of URLs from Google search using a proxy? - python
|
<p>I normally use the <code>googlesearch</code> library as follows:</p>
<pre><code>from googlesearch import search
list(search(f"{query}", num_results))
</code></pre>
<p>But I now keep getting this error:</p>
<pre><code>requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://www.google.com/sorry/index?continue=https://www.google.com/search%3Fq%{query}%26num%3D10%26hl%3Den%26start%3D67&hl=en&q=EhAmABcACfAIIME0fDvEUYF8GOKX1KQGIjAEGg2nloeEEAcko9umYCP9uPHRWoSo2odE3n3ZgbQ1L6lDvGfyai6798pyy3iU5vcyAXJaAUM
</code></pre>
<p>I developed a "hacky" solution using <code>requests</code> and <code>BeautifulSoup</code>, but it's very inefficient and takes me 1 hour to get 100 URLs, when the line above would take 1 second:</p>
<pre><code>search_results = []
retry = True
while retry:
    try:
        response = requests.get(f"https://www.google.com/search?q={query}",
                                headers={
                                    'User-Agent': user_agent,
                                    'Referer': 'https://www.google.com/',
                                    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
                                    'Accept-Encoding': 'gzip, deflate, br',
                                    'Accept-Language': 'en-US,en;q=0.9,en-gb',
                                },
                                proxies={
                                    "http": proxy,
                                    "https": proxy},
                                timeout=TIMEOUT_THRESHOLD*2)
        if response.status_code == 200:
            soup = BeautifulSoup(response.content, 'html.parser')
            for link in soup.select('.yuRUbf a'):
                url = link['href']
                search_results.append(url)
                if len(search_results) >= num_results:
                    retry = False
                    break
        else:
            proxy = get_working_proxy(proxies)
            user_agent = random.choice(user_agents)
    except Exception as e:
        proxy = get_working_proxy(proxies)
        user_agent = random.choice(user_agents)
        print(f"An error occurred in tips search: {str(e)}")
</code></pre>
<p>Is there a better, easier way to still use my proxies to get a list of Google search results for a query?</p>
|
<python><python-3.x><web-scraping><beautifulsoup><python-requests>
|
2023-06-23 04:03:25
| 1
| 792
|
Ricardo Francois
|
76,536,914
| 8,389,943
|
I have been trying to merge 2 data frames on timestamp to the nearest timestamp by pd.merge_asof(), but duplicated merges | python
|
<p>I am merging df1 (1000+ rows) and df2 (8 rows) on timestamp via <code>pd.merge_asof()</code>.
I want only one nearest match for each row of df2, within a window of 5 minutes (plus or minus).</p>
<p>Below is the code I am at:</p>
<pre><code>merged_df = pd.merge_asof(df1, df2, on='time', direction='nearest',tolerance=pd.Timedelta(hours=1))
</code></pre>
<p>I am getting multiple rows matched against the same row of df2.</p>
<p>If there is any other better way to do this, that is welcome as well.
I tried to work through the <code>merge_asof()</code> documentation, but to no avail.</p>
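<p>One sketch worth trying (the sample frames below are assumptions): put the small frame on the left of <code>merge_asof</code>, so each of its rows receives exactly one nearest match, and set <code>tolerance</code> to the intended 5-minute window rather than 1 hour:</p>

```python
import pandas as pd

# stand-ins: a "big" frame and a "small" frame keyed on time
df1 = pd.DataFrame({"time": pd.to_datetime(
           ["2023-01-01 00:00", "2023-01-01 00:04", "2023-01-01 00:30"]),
       "a": [1, 2, 3]})
df2 = pd.DataFrame({"time": pd.to_datetime(
           ["2023-01-01 00:03", "2023-01-01 00:29"]),
       "b": [10, 20]})

# small frame on the left: one output row per df2 row, nearest df1 match
merged = pd.merge_asof(df2.sort_values("time"), df1.sort_values("time"),
                       on="time", direction="nearest",
                       tolerance=pd.Timedelta(minutes=5))
print(merged)
```

<p><code>merge_asof</code> produces one output row per left-frame row, so which frame sits on the left decides whether duplicates can appear.</p>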
|
<python><pandas><dataframe>
|
2023-06-23 03:24:41
| 1
| 368
|
ana
|
76,536,724
| 1,188,943
|
Problem writing to a CSV file from Python on Mac
|
<p>I read some new data from a MySQL database and then process it. The output is written to a CSV file. The problem is that the CSV file does not display its contents correctly. The following is the content of one of the cells:</p>
<pre><code> 11- طراحی، ساخت و پیادهسازی سیستمهای هوشمند اتوماسیون
</code></pre>
<p>I expect to see the following (when I print to the console, it displays correctly):</p>
<pre><code>11- طراحی، ساخت و پیادهسازی سیستمهای هوشمند اتوماسیون
</code></pre>
<p>The code I'm using to write the CSV file is:</p>
<pre><code>with open(filename, 'w', newline='', encoding='utf-8') as csvfile:
    # creating a csv writer object
    csvwriter = csv.writer(csvfile)
    # writing the fields
    csvwriter.writerow(fields)
    # writing the data rows
    csvwriter.writerows(rows)
</code></pre>
<p>I'm using Mac with Microsoft Excel.</p>
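<p>A likely culprit (an assumption: Excel, unlike the console, has to guess the file's encoding): plain <code>utf-8</code> writes no byte-order mark, so Excel falls back to a legacy codepage. Writing with <code>utf-8-sig</code> prepends a BOM that Excel recognizes:</p>

```python
import csv
import os
import tempfile

fields = ["title"]
rows = [["11- طراحی، ساخت و پیاده‌سازی سیستم‌های هوشمند اتوماسیون"]]

path = os.path.join(tempfile.mkdtemp(), "out.csv")
# 'utf-8-sig' prepends the EF BB BF byte-order mark that Excel looks for
with open(path, "w", newline="", encoding="utf-8-sig") as csvfile:
    csvwriter = csv.writer(csvfile)
    csvwriter.writerow(fields)
    csvwriter.writerows(rows)

with open(path, "rb") as f:
    raw = f.read()
print(raw[:3])
```

<p>The only change to the original code is the <code>encoding</code> argument; the console looked fine because the terminal already decodes UTF-8 without a BOM.</p>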
|
<python><excel><macos><csv>
|
2023-06-23 02:15:06
| 0
| 1,035
|
Mahdi
|
76,536,667
| 8,751,871
|
How to pause / debug at the next possible breakpoint
|
<p>When running in debug mode in PyCharm, how can I make execution pause at the next immediately possible point (including inside client libraries)?</p>
<p>What do I mean?</p>
<ul>
<li>Let's say I have a 10,000 line script that's running in debug mode.</li>
<li>I'm not sure which library the code is currently in.</li>
<li>If I set a breakpoint somewhere, the code will stop when it reaches it.</li>
<li>But I don't know where it's stuck right now.</li>
<li><strong>Here is exactly where I want it to drop a breakpoint.</strong></li>
</ul>
<p>Any ideas how?</p>
|
<python><pycharm>
|
2023-06-23 01:54:06
| 2
| 2,670
|
A H
|
76,536,645
| 11,016,395
|
Resample and interpolate time series data in Python only when the difference between timestamps is not greater than a certain threshold
|
<p>See sample data as below:</p>
<pre><code>import pandas as pd
data = {
'timestamp': pd.to_datetime(['2023-01-01 00:00:00', '2023-01-01 00:10:00', '2023-01-01 00:20:00', '2023-01-01 00:40:00', '2023-01-01 00:50:00', '2023-01-01 01:10:00']),
'value': [10, 20, 30, 50, 60, 80]
}
data = pd.DataFrame(data)
</code></pre>
<p>My data is normally sampled every 10 minutes, but there can be gaps in the timestamps. I'd like to resample the data every second and interpolate the result, but only when the difference between timestamps is no greater than 10 minutes. In the example shown, I'd like to resample and interpolate only 2023-01-01 00:00:00 to 2023-01-01 00:20:00, and 2023-01-01 00:40:00 to 2023-01-01 00:50:00. The rest should be removed.</p>
<p>Currently I'm running a for loop to iterate over the dataframe, although I understand that's not an efficient or elegant way to do it. Is there a better way that avoids iterating over the dataframe?</p>
<pre><code>data['gap'] = data['timestamp'].diff() > pd.Timedelta(minutes=10)
sections = []
current_section = []
for index, row in data.iterrows():
    if row['gap']:
        if current_section:
            sections.append(current_section)
            current_section = []
    current_section.append(row)
sections.append(current_section)

resampled_data = []
for section in sections:
    section = pd.DataFrame(section)
    if len(section) > 1:
        start_time = section.iloc[0]['timestamp']
        end_time = section.iloc[-1]['timestamp']
        section_resampled = section.set_index('timestamp').resample('1S').interpolate(method='linear')
        section_resampled = section_resampled.loc[start_time:end_time]
        resampled_data.append(section_resampled)

resampled_data = pd.concat(resampled_data)
resampled_data = resampled_data.drop(columns='gap')
</code></pre>
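<p>A loop-free sketch (same sample data): label contiguous sections with a cumulative sum of the gap flags, then resample each section via <code>groupby.apply</code>. Note that a single-row section survives as a single point here; filter those out afterwards if they should be dropped:</p>

```python
import pandas as pd

data = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2023-01-01 00:00:00", "2023-01-01 00:10:00", "2023-01-01 00:20:00",
         "2023-01-01 00:40:00", "2023-01-01 00:50:00", "2023-01-01 01:10:00"]),
    "value": [10, 20, 30, 50, 60, 80],
})

# a new section starts wherever the gap to the previous row exceeds 10 minutes
section = (data["timestamp"].diff() > pd.Timedelta(minutes=10)).cumsum()

# resample/interpolate each section independently, without an explicit loop
resampled = (
    data.set_index("timestamp")
        .groupby(section.values)["value"]
        .apply(lambda s: s.resample("1s").interpolate("linear"))
        .reset_index(level=0, drop=True)
)
print(len(resampled))
```

<p>The <code>cumsum</code> trick turns the boolean gap markers into stable section labels, which is what makes the vectorized <code>groupby</code> possible.</p>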
|
<python><pandas><datetime>
|
2023-06-23 01:47:38
| 1
| 483
|
crx91
|
76,536,562
| 8,869,570
|
Does Python/pandas have any kind of inherent caching for groupbys?
|
<p>Suppose I have a dataframe with numerous columns and one of the columns is <code>id</code>.</p>
<p>Suppose that in a single function I do several <code>groupby("id")</code> operations,
e.g.,</p>
<pre><code>def func(df):
    df["val1_cumsum"] = df.groupby("id")["val1"].cumsum()
    df["val2_cumsum"] = df.groupby("id")["val2"].cumsum()
    df["val3_cumsum"] = df.groupby("id")["val3"].cumsum()
</code></pre>
<p>Do the second and third <code>groupby</code> calls actually do a full <code>groupby</code> like the first one, or is there some native caching in python that says "we just did this, let's use the previous result?"</p>
<p>In other words is the above less performant than:</p>
<pre><code>def func(df):
    df_groupby_id = df.groupby("id")
    df["val1_cumsum"] = df_groupby_id["val1"].cumsum()
    df["val2_cumsum"] = df_groupby_id["val2"].cumsum()
    df["val3_cumsum"] = df_groupby_id["val3"].cumsum()
</code></pre>
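<p>To the best of my knowledge pandas does not memoize this: each <code>df.groupby("id")</code> call builds a fresh <code>GroupBy</code> object (the key factorization is computed lazily and cached on that object, so reusing one object across the three cumsums avoids repeating that work). A quick check, not a benchmark:</p>

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 2, 2],
                   "val1": [1, 2, 3, 4],
                   "val2": [5, 6, 7, 8]})

# two calls give two distinct GroupBy objects: no caching across calls
g1, g2 = df.groupby("id"), df.groupby("id")
print(g1 is g2)

# reusing a single object is equivalent, result-wise, to calling groupby again
gb = df.groupby("id")
out_reused = gb["val1"].cumsum()
out_fresh = df.groupby("id")["val1"].cumsum()
print(out_reused.equals(out_fresh))
```

<p>So the second snippet in the question is the safer habit for repeated operations on the same grouping.</p>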
|
<python><pandas><dataframe><group-by>
|
2023-06-23 01:14:14
| 1
| 2,328
|
24n8
|
76,536,303
| 8,378,817
|
Remove an item from Python list if the item's dictionary key has some value
|
<p>I have a list of dictionaries.</p>
<pre><code>for i, d in enumerate(test_list):
    print(i, d)
0 {'role': None, 'content': 'Closing the gap between actual and potential crop yields', 'bounding_regions': [{'page_number': 1, 'polygon': [{'x': 4.2076, 'y': 9.019}, {'x': 7.5767, 'y': 9.019}, {'x': 7.5767, 'y': 9.387}, {'x': 4.2076, 'y': 9.387}]}], 'spans': [{'offset': 3482, 'length': 56}]}
1 {'role': None, 'content': 'Increasing realized yields per area requires both increasing the maximum possible regional yield for a given crop in the absence of biotic or abiotic stress (yield potential [Yp];', 'bounding_regions': [{'page_number': 1, 'polygon': [{'x': 4.2066, 'y': 9.4723}, {'x': 7.6041, 'y': 9.4723}, {'x': 7.6041, 'y': 9.9493}, {'x': 4.2066, 'y': 9.9493}]}], 'spans': [{'offset': 3539, 'length': 179}]}
2 {'role': None, 'content': 'Downloaded from https://academic.oup.com/plphys/article/185/1/34/6149974 by guest on 16 June 2023', 'bounding_regions': [{'page_number': 1, 'polygon': [{'x': 7.9365, 'y': 2.8965}, {'x': 8.0406, 'y': 2.8965}, {'x': 8.0406, 'y': 7.9632}, {'x': 7.9365, 'y': 7.9632}]}], 'spans': [{'offset': 3719, 'length': 97}]}
3 {'role': 'pageFooter', 'content': 'Received August 14, 2020. Accepted October 3, 2020. Advance access publication 19 November 2020 VC American Society of Plant Biologists 2021. All rights reserved. For permissions, please email: journals.permissions@oup.com', 'bounding_regions': [{'page_number': 1, 'polygon': [{'x': 0.5499, 'y': 10.1762}, {'x': 4.8923, 'y': 10.1762}, {'x': 4.8923, 'y': 10.3865}, {'x': 0.5499, 'y': 10.3865}]}], 'spans': [{'offset': 3817, 'length': 222}]}
</code></pre>
<p>How can I remove an item from the list that has a certain value for one of its dictionary keys?
For example, I want to remove the entire item number 3 from <code>test_list</code>, because its dictionary key d['role'] == 'pageFooter'.</p>
<p>So the result would look like:</p>
<pre><code>0 {'role': None, 'content': 'Closing the gap between actual and potential crop yields', 'bounding_regions': [{'page_number': 1, 'polygon': [{'x': 4.2076, 'y': 9.019}, {'x': 7.5767, 'y': 9.019}, {'x': 7.5767, 'y': 9.387}, {'x': 4.2076, 'y': 9.387}]}], 'spans': [{'offset': 3482, 'length': 56}]}
1 {'role': None, 'content': 'Increasing realized yields per area requires both increasing the maximum possible regional yield for a given crop in the absence of biotic or abiotic stress (yield potential [Yp];', 'bounding_regions': [{'page_number': 1, 'polygon': [{'x': 4.2066, 'y': 9.4723}, {'x': 7.6041, 'y': 9.4723}, {'x': 7.6041, 'y': 9.9493}, {'x': 4.2066, 'y': 9.9493}]}], 'spans': [{'offset': 3539, 'length': 179}]}
2 {'role': None, 'content': 'Downloaded from https://academic.oup.com/plphys/article/185/1/34/6149974 by guest on 16 June 2023', 'bounding_regions': [{'page_number': 1, 'polygon': [{'x': 7.9365, 'y': 2.8965}, {'x': 8.0406, 'y': 2.8965}, {'x': 8.0406, 'y': 7.9632}, {'x': 7.9365, 'y': 7.9632}]}], 'spans': [{'offset': 3719, 'length': 97}]}
</code></pre>
<p>I really appreciate your help in this matter.
Also, I intend to run this over thousands of text entries, so efficiency may also come into question.
Thank you.</p>
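<p>A list comprehension (sketched below on an abridged two-item list) does this in one O(n) pass and avoids the pitfalls of removing items while iterating:</p>

```python
# abridged stand-in for test_list
test_list = [
    {"role": None, "content": "Closing the gap between actual and potential crop yields"},
    {"role": "pageFooter", "content": "Received August 14, 2020. ..."},
]

# keep every dict whose 'role' is not 'pageFooter'; .get() tolerates missing keys
filtered = [d for d in test_list if d.get("role") != "pageFooter"]
print(len(filtered))
```

<p>For thousands of entries this stays linear in the list size, which is as good as filtering can get.</p>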
|
<python><list><dictionary><text-processing>
|
2023-06-22 23:43:57
| 2
| 365
|
stackword_0
|
76,536,285
| 11,001,493
|
How to combine columns if they have the same substring in header name?
|
<p>Imagine that I have a dataframe like this:</p>
<pre><code>import numpy as np
import pandas as pd
data = pd.DataFrame({"ColA:1":[12, 20, 31, np.nan, np.nan, np.nan],
"ColA:2":[np.nan, np.nan, 28, 78, 23, 25],
"ColB":[np.nan, np.nan, 23, 56, 12, 3],
"ColC:1":[56, 10, 35, 67, np.nan, np.nan],
"ColC:2":[np.nan, 56, 28, 78, 23, np.nan],
"ColC:3":[np.nan, np.nan, np.nan, 43, 17, 8]})
data
Out[6]:
ColA:1 ColA:2 ColB ColC:1 ColC:2 ColC:3
0 12.0 NaN NaN 56.0 NaN NaN
1 20.0 NaN NaN 10.0 56.0 NaN
2 31.0 28.0 23.0 35.0 28.0 NaN
3 NaN 78.0 56.0 67.0 78.0 43.0
4 NaN 23.0 12.0 NaN 23.0 17.0
5 NaN 25.0 3.0 NaN NaN 8.0
</code></pre>
<p>I would like to combine these duplicated and triplicated columns that have the same substring but are distinguished by ":" and a following number. I managed to combine these with the code below:</p>
<pre><code>df_combined = data.groupby(lambda x: x.split(':')[0], axis=1).bfill()
df_combined
Out[8]:
ColA:1 ColA:2 ColB ColC:1 ColC:2 ColC:3
0 12.0 NaN NaN 56.0 NaN NaN
1 20.0 NaN NaN 10.0 56.0 NaN
2 31.0 28.0 23.0 35.0 28.0 NaN
3 78.0 78.0 56.0 67.0 78.0 43.0
4 23.0 23.0 12.0 23.0 23.0 17.0
5 25.0 25.0 3.0 8.0 8.0 8.0
</code></pre>
<p>Now I need to keep only the first column of each replicated group and rename it to the string before ":", so the output should be:</p>
<pre><code> ColA ColB ColC
0 12.0 NaN 56.0
1 20.0 NaN 10.0
2 31.0 23.0 35.0
3 78.0 56.0 67.0
4 23.0 12.0 23.0
5 25.0 3.0 8.0
</code></pre>
<p>Could anyone help me?</p>
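<p>One way to get there in a single step (assuming "combine" means: take the first non-NaN value per row within each group of same-named columns): strip the ":n" suffix and group. Since <code>groupby(axis=1)</code> is deprecated in recent pandas, a sketch that groups on the transpose instead:</p>

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({"ColA:1": [12, 20, 31, np.nan, np.nan, np.nan],
                     "ColA:2": [np.nan, np.nan, 28, 78, 23, 25],
                     "ColB":   [np.nan, np.nan, 23, 56, 12, 3],
                     "ColC:1": [56, 10, 35, 67, np.nan, np.nan],
                     "ColC:2": [np.nan, 56, 28, 78, 23, np.nan],
                     "ColC:3": [np.nan, np.nan, np.nan, 43, 17, 8]})

# Rename 'ColA:1' -> 'ColA' etc., then take the first non-NaN value per
# row within each group of identically named columns.
combined = (data.rename(columns=lambda c: c.split(':')[0])
                .T
                .groupby(level=0)
                .first()
                .T)
```

<p>This produces the three-column ColA/ColB/ColC frame directly, with no bfill step needed, because <code>first()</code> already picks the first non-null value per group.</p>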
|
<python><pandas><dataframe>
|
2023-06-22 23:38:41
| 2
| 702
|
user026
|
76,536,250
| 10,858,691
|
For loop creating multiple dataframes which I need to get all common stats from (describe function)
|
<p>Basically I need to loop through around 500 dataframes with the same rows and columns (a simulation),
and then get the mean, mode, etc. for each cell across the 500 dataframes.</p>
<p>So in the example below
we have 5 iterations,
so the first value of column A would have 5 different values across the runs,
as would the second value of column A, etc., from which I would need to calculate the mean, mode, etc. for that cell.</p>
<p>Ideally I would like to use the describe function.</p>
<p>I don't know how I should go about doing this.</p>
<p>Thank you.</p>
<pre><code>import pandas as pd
import numpy as np
num_iterations = 5
for i in range(num_iterations):
df = pd.DataFrame(np.random.randint(0, 10, size=(3, 3)), columns=['A', 'B', 'C'])
print(df.head())
</code></pre>
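<p>One sketch of the idea: collect the dataframes in a list, stack them with <code>pd.concat</code>, and group by the original row index — each cell then becomes a group of 500 (here 5) values, and <code>describe()</code> works per cell:</p>

```python
import numpy as np
import pandas as pd

num_iterations = 5
rng = np.random.default_rng(0)
dfs = [pd.DataFrame(rng.integers(0, 10, size=(3, 3)), columns=['A', 'B', 'C'])
       for _ in range(num_iterations)]

# Stack all iterations; each original row index now appears once per
# iteration, so grouping by the index aggregates cell-by-cell.
stacked = pd.concat(dfs)
per_cell_stats = stacked.groupby(level=0).describe()  # count/mean/std/... per cell
per_cell_mean = stacked.groupby(level=0).mean()       # just the mean per cell
```

<p><code>per_cell_stats</code> has one row per original row and a column MultiIndex of (column, statistic); any other per-group aggregation (e.g. <code>.agg(pd.Series.mode)</code> for the mode) slots in the same way.</p>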
|
<python><pandas>
|
2023-06-22 23:28:06
| 0
| 614
|
MasayoMusic
|
76,536,071
| 1,096,892
|
How to set a range to pick either this many or that many from a list?
|
<p>I have a python script where I use itertools.combinations to select a number of combinations within a fixed range.</p>
<p>To give an example, I have a csv file that contains a list of golfers and they each have a ranking between 1 and 3</p>
<pre><code>Seamus Power,10500,1
Brian Harman,10300,2
Tom Hoge,9800,1
Taylor Montgomery,9600,1
Jason Day,9400,1
Keith Mitchell,9300,2
Joel Dahmen,9200,1
Denny McCarthy,9100,1
Matthew NeSmith,9000,1
Matt Kuchar,8900,1
Mackenzie Hughes,8600,1
Davis Riley,8100,3
Brendon Todd,8000,3
Andrew Putnam,7900,1
J.J. Spaun,7800,3
Aaron Rai,7800,1
Harris English,7700,2
Will Gordon,7700,1
Greyson Sigg,7500,1
J.T. Poston,7500,1
Seonghyeon Kim,7500,3
Troy Merritt,7400,1
Sepp Straka,7200,1
Justin Suh,7200,2
Adam Long,7100,1
Kevin Streelman,7000,2
Ben Taylor,7000,3
John Huh,6900,2
Austin Cook,6800,2
David Lingmerth,6700,3
</code></pre>
<p>In my code I then place each golfer into its own list based on its ranking, and in my itertools.combinations call I request a set amount per tier. Since I want 6 golfers (and it has to be exactly 6 golfers to create my lineup), in the example below I request combinations of 2 golfers from tier 1 and 4 golfers from tier 2 and build a lineup with them like so:</p>
<pre><code> tier1Range = []
tier2Range = []
tier3Range = []
with open('combinations.csv', 'r') as file:
reader = csv.reader(file)
for row in reader:
if int(row[2]) == 1:
tier1Range.append(row)
elif int(row[2]) == 2:
tier2Range.append(row)
elif int(row[2]) == 3:
tier3Range.append(row)
lineup = []
finalLineup = []
finalRandomLineup = []
    #Edit below to fit criteria of which ranges and how many per range
sel_tier1 = itertools.combinations(tier1Range,2)
sel_tier2 = itertools.combinations(tier2Range,4)
lineup = [p + q for p,q in itertools.product(sel_tier1,sel_tier2)]
finalLineup = [x for x in lineup if int(x[0][1]) + int(x[1][1]) + int(x[2][1]) + int(x[3][1]) + int(x[4][1]) + int(x[5][1]) >= 49700
and int(x[0][1]) + int(x[1][1]) + int(x[2][1]) + int(x[3][1]) + int(x[4][1]) + int(x[5][1]) <= 50000]
finalRandomLineup += ((random.sample(finalLineup,20)))
totalSalary = int
with open('temp.csv', 'w') as f:
for line in finalRandomLineup:
totalSalary = int(line[0][1]) + int(line[1][1]) + int(line[2][1]) + int(line[3][1]) + int(line[4][1]) + int(line[5][1])
f.write(line[0][0] + ',' + line[1][0] + ',' + line[2][0] + ',' + line[3][0] + ',' + line[4][0] + ',' + line[5][0] + ' ^ total salary ' + str(totalSalary)+ '\n\n')
</code></pre>
<p>However, I want my combinations to be more dynamic than that, and this is my question: I want the ability to state that I want 2-3 golfers from tier 1 and 3-4 golfers from tier 2, as an example.</p>
<p>So rather than a fixed range of 2 from tier 1 and 4 from tier 2, I want the ability to pick combinations of 2 or 3 golfers from tier 1 and combinations of 3 or 4 golfers from tier 2.</p>
<p>At the end when the lineup is created, it has to be a max of 6 golfers. So I can't have 3 golfers from tier 1 combined with 4 golfers from tier 2 as it goes over the threshold.</p>
<p>Does anybody know the best way to achieve this?</p>
<p><strong>EDIT</strong></p>
<p>I updated the code above to include salary: the lineups it generates have to be within the salary threshold of 49700 to 50000, based on the second column of the csv (which has been updated). So each csv row is: golfer name, salary, rank.</p>
<p>So the code above picks 2 golfers from tier 1 and 4 from tier 2, and then when it creates my lineups, it only keeps those where the combined salary of all 6 golfers is between 49700 and 50000.</p>
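<p>One way to sketch the variable-size selection (golfer lists below are made up; the idea is to iterate over every allowed tier-count pair that still sums to exactly 6):</p>

```python
import itertools

tier1Range = [['G1', 1000, 1], ['G2', 1100, 1], ['G3', 1200, 1], ['G4', 1300, 1]]
tier2Range = [['H1', 1000, 2], ['H2', 1100, 2], ['H3', 1200, 2], ['H4', 1300, 2]]

lineup = []
for n1 in range(2, 4):          # 2 or 3 golfers from tier 1
    for n2 in range(3, 5):      # 3 or 4 golfers from tier 2
        if n1 + n2 != 6:        # a lineup must contain exactly 6 golfers
            continue            # skips (2,3)=5 and (3,4)=7 automatically
        for p in itertools.combinations(tier1Range, n1):
            for q in itertools.combinations(tier2Range, n2):
                lineup.append(p + q)
```

<p>Every lineup still has exactly 6 entries, so the existing salary filter keeps working; with variable counts it is simpler to sum over the lineup, e.g. <code>sum(int(g[1]) for g in x)</code>, than to index six fixed positions.</p>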
|
<python>
|
2023-06-22 22:37:09
| 2
| 4,326
|
BruceyBandit
|
76,536,018
| 926,918
|
Speeding up UMAP
|
<p>I have a situation similar to the one discussed in an old <a href="https://stackoverflow.com/questions/68257096/python-make-umap-faster">thread</a> where the number of features was 1.2M (mine is 10M) but there were only hundreds of observations. Among the <code>metric</code>s I tried, <code>euclidean</code> performed poorly, but <code>cosine</code> and <code>correlation</code> were much better. I also noticed that only at the end, and only for barely a few seconds, was more than 100% CPU being used, although my system has 256 cores. For the most part only a single core was busy, presumably for the metric computation. While I would have preferred UMAP to scale well on its own, I tried addressing the issue through NumPy (which I believe can use multiple cores for computation).</p>
<p>I tried the following approach:</p>
<p>Original function:</p>
<pre><code>import umap
from sklearn.preprocessing import StandardScaler
metric = 'cosine' # alternatively 'correlation'
scaler = StandardScaler()
ip_std = scaler.fit_transform(ip_mat)
# Start UMAP
reducer = umap.UMAP(n_components=n_components, n_neighbors=n_neighbors, metric=metric)
umap_embed = reducer.fit_transform(ip_std)
</code></pre>
<p>Modified version:</p>
<pre><code>import umap
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import pairwise_distances
from scipy.spatial.distance import cosine
from umap.umap_ import nearest_neighbors
# Start precomputed_knn
scaler = StandardScaler()
eiip_std = scaler.fit_transform(eiip_mat)
dist_cosine = 1 - pairwise_distances(eiip_std, metric="cosine")
precomputed_knn = nearest_neighbors(dist_cosine, metric="cosine", \
metric_kwds=None, angular=False, \
n_neighbors=n_neighbors, random_state=42)
# Start UMAP
reducer = umap.UMAP(n_components=n_components, precomputed_knn=precomputed_knn)
umap_embed = reducer.fit_transform(eiip_std)
return umap_embed
</code></pre>
<p>While I got no errors, the output did not correspond to <code>cosine</code> at all but to the default <code>euclidean</code>.</p>
<p>Could you please point to the mistake in the above code and suggest any improvements?</p>
<p>Thanks in advance</p>
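<p>One likely issue worth checking (an assumption about the intent of the code above): <code>1 - pairwise_distances(X, metric="cosine")</code> turns cosine <em>distances</em> back into <em>similarities</em>, so the matrix handed to <code>nearest_neighbors</code> is not a distance matrix at all, and it is then treated as if it were feature data. A minimal NumPy illustration of the relationship:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))

# Cosine similarity: dot products of unit-normalised rows.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
sim = Xn @ Xn.T

# Cosine distance (what pairwise_distances(X, metric="cosine") returns,
# and what a nearest-neighbour search expects): 1 - similarity.
dist = 1.0 - sim

# So `1 - pairwise_distances(X, metric="cosine")` recovers `sim`: a matrix
# in which every point is *farthest* from itself -- not a valid distance.
```

<p>Given that, it seems worth trying <code>nearest_neighbors</code> on the standardized data itself with <code>metric="cosine"</code>, or, if a precomputed matrix is really wanted, passing the true distance matrix together with <code>metric="precomputed"</code> as the UMAP documentation describes.</p>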
|
<python><machine-learning><dimensionality-reduction>
|
2023-06-22 22:22:53
| 1
| 1,196
|
Quiescent
|
76,535,977
| 2,869,814
|
Requirements for SciPy Bootstrap Wrapper Function
|
<p>I am trying to use the SciPy <code>bootstrap</code> function to work with a simple difference of medians. The following example, from the SciPy documentation, works fine.</p>
<pre><code>from scipy.stats import bootstrap, mood, norm
def my_statistic(sample1, sample2, axis):
statistic, _ = mood(sample1, sample2, axis=-1)
return statistic
sample1 = norm.rvs(scale=1, size=100)
sample2 = norm.rvs(scale=2, size=100)
data = (sample1, sample2)
res = bootstrap(data, my_statistic, method='basic')
</code></pre>
<p>However, when I try using a function that computes the difference of medians of two datasets, I get a "ValueError: zero-dimensional arrays cannot be concatenated" error. Here is the function in question.</p>
<pre><code>import numpy as np
def median_diff(group1, group2, axis=-1):
diff = np.median(group1) - np.median(group2)
return diff
</code></pre>
<p>I've tried axis=0, 1 and -1 and that doesn't make any difference. That argument is only there because the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bootstrap.html#scipy.stats.bootstrap" rel="nofollow noreferrer">SciPy documentation</a> says it's required.</p>
<p>The full error message with traceback is:</p>
<pre><code> ValueError Traceback (most recent call last)
/tmp/ipykernel_259/1837026830.py in <cell line: 1>()
----> 1 boot = bootstrap(lab4data, median_diff, method="basic")
/usr/local/lib/python3.10/dist-packages/scipy/stats/_resampling.py in bootstrap(data, statistic, n_resamples, batch, vectorized, paired, axis, confidence_level, method, bootstrap_result, random_state)
589 # Compute bootstrap distribution of statistic
590 theta_hat_b.append(statistic(*resampled_data, axis=-1))
--> 591 theta_hat_b = np.concatenate(theta_hat_b, axis=-1)
592
593 # Calculate percentile interval
/usr/local/lib/python3.10/dist-packages/numpy/core/overrides.py in concatenate(*args, **kwargs)
ValueError: zero-dimensional arrays cannot be concatenated
</code></pre>
<p>How does my function need to be different?</p>
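<p>A likely cause: <code>bootstrap</code> calls a vectorized statistic with a resampling axis and concatenates the per-batch results, so the statistic must reduce <em>along that axis</em> and return an array — <code>np.median(group1)</code> with no axis collapses everything to a 0-d scalar, hence the concatenation error. A sketch that honours the <code>axis</code> argument:</p>

```python
import numpy as np
from scipy.stats import bootstrap, norm

def median_diff(group1, group2, axis=-1):
    # Reduce only along the resampling axis so vectorized calls return
    # one statistic per resample instead of a 0-d scalar.
    return np.median(group1, axis=axis) - np.median(group2, axis=axis)

rng = np.random.default_rng(42)
sample1 = norm.rvs(scale=1, size=100, random_state=rng)
sample2 = norm.rvs(scale=2, size=100, random_state=rng)

res = bootstrap((sample1, sample2), median_diff, method='basic', random_state=rng)
ci = res.confidence_interval
```

<p>Passing <code>axis</code> through to <code>np.median</code> is the whole fix; the rest of the call matches the documentation's pattern.</p>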
|
<python><scipy>
|
2023-06-22 22:14:21
| 1
| 314
|
jaia
|
76,535,821
| 2,625,090
|
How to start a subprocess using a specific Executor
|
<p>I am creating a subprocess from my FastAPI application as follows:</p>
<pre><code>proc = await asyncio.create_subprocess_shell(
cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
</code></pre>
<p>I am using the <code>asyncio.create_subprocess_shell</code> module function for the purpose of capturing the program's stdout line by line.</p>
<p>How can I make it so that the process uses a specific executor? I tried this:</p>
<pre><code>pool = ProcessPoolExecutor(max_workers=10)
loop = asyncio.get_running_loop()
task = partial(
asyncio.create_subprocess_shell,
cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
proc = await loop.run_in_executor(pool, task)
</code></pre>
<p>But it fails with this error:</p>
<blockquote>
<p><code>TypeError: cannot pickle 'coroutine' object</code></p>
</blockquote>
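<p>The error arises because a <code>ProcessPoolExecutor</code> worker must pickle its callable's return value back to the parent, and <code>create_subprocess_shell</code> returns a coroutine, which cannot be pickled. Executors run <em>functions</em>; <code>create_subprocess_shell</code> already spawns its own OS process, so no executor is needed. If the goal is to cap how many subprocesses run concurrently (an assumption about intent), a semaphore is one sketch:</p>

```python
import asyncio

async def run_limited(cmd, sem):
    # The semaphore caps how many subprocesses exist at the same time.
    async with sem:
        proc = await asyncio.create_subprocess_shell(
            cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
        )
        stdout, _ = await proc.communicate()
        return stdout

async def main():
    sem = asyncio.Semaphore(10)  # at most 10 concurrent subprocesses
    cmds = ['echo hello'] * 3
    return await asyncio.gather(*(run_limited(c, sem) for c in cmds))

results = asyncio.run(main())
```

<p>Inside a running FastAPI app you would <code>await main()</code> rather than call <code>asyncio.run</code>; the semaphore pattern is the same either way.</p>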
|
<python><python-asyncio><fastapi>
|
2023-06-22 21:41:58
| 1
| 9,205
|
arielnmz
|
76,535,816
| 6,286,900
|
Python get all elements from a listbox
|
<p>I have a Flask and Python web application where I implement CRUD methods for two objects: <strong>Team</strong> and <strong>Employee</strong>; in a many to many relationship. An Employee could be a member of multiple Teams, and a Team could have multiple Employees.</p>
<p>Now, when I create a Team, I have two listboxes: one with all the employees and one with the employees that will be part of the team. With two buttons, I can move employees from one listbox to the other and vice versa.</p>
<p>At the end, I can press a button Add to create the team in a Sqlite3 database where I have three tables: employee, team, teams_employees.</p>
<p>The github.com project is <a href="https://github.com/sasadangelo/benjamin" rel="nofollow noreferrer">this</a>.</p>
<p>The HTML form is the <a href="https://github.com/sasadangelo/benjamin/blob/42a3ac8bcce4131faf098afcae7846b27499e17b/app/templates/create-team.html#L13-L38" rel="nofollow noreferrer">following code on github.com</a>.</p>
<p>Then I have a controller Python method <a href="https://github.com/sasadangelo/benjamin/blob/42a3ac8bcce4131faf098afcae7846b27499e17b/app/controllers/teams_controller.py#L19-L32" rel="nofollow noreferrer">like this</a>. The problem is that this call:</p>
<pre><code>'team_members': request.form.getlist('team_members[]')
</code></pre>
<p>returns only the selected employees in the team-members listbox. What I need is to change this line of code to get all the employees in the team_members listbox, independently of whether they are selected or not.</p>
<p>Can anyone suggest how to change the code to get this behaviour?</p>
|
<python><python-3.x><sqlite><flask><flask-sqlalchemy>
|
2023-06-22 21:41:45
| 1
| 1,179
|
Salvatore D'angelo
|
76,535,766
| 1,056,563
|
Including setup.py in the wheel
|
<p>The <code>setup.py</code> has the wheel version in it, and we would like to include it in the wheel to avoid needing to type new versions into multiple files in the project - e.g. in a <code>version.py</code> or <code>__init__.py</code> somewhere else.</p>
<p>In the <code>setup.py</code> the following directive is included</p>
<pre><code> package_data={
'ddex.resources': ['schema-validation-rules.yaml'],
'': ['setup.py'],
}
</code></pre>
<p>The first one for the <em>schema-validation-rules.yaml</em> <em>does</em> show up properly in the wheel under <em>ddex.resources</em> . The <em>setup.py</em> does not get included. Why is that and how can it be included?</p>
<p><strong>Update</strong> It appears to be relevant to mention that this wheel is getting loaded into <em>Azure Synapse</em> spark pool.</p>
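<p>As far as I understand, <code>package_data</code> patterns are resolved relative to each package's directory, and <code>setup.py</code> lives at the project root outside any package, which would explain why the <code>''</code> entry never matches it. An often simpler route (a sketch — the file names here are assumptions) is to single-source the version in one package file and have <code>setup.py</code> read it, so nothing needs duplicating into the wheel:</p>

```python
import re

def read_version(init_source):
    """Extract __version__ = "x.y.z" from a module's source text."""
    match = re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', init_source)
    if match is None:
        raise RuntimeError('no __version__ found')
    return match.group(1)

# In setup.py this would be used roughly as (path is an assumption):
#   version = read_version(Path('ddex/__init__.py').read_text())
#   setup(name='ddex', version=version, ...)
print(read_version('__version__ = "1.2.3"'))  # 1.2.3
```

<p>The installed package then carries the version itself (e.g. via <code>ddex.__version__</code>), which also works in environments like Azure Synapse where the wheel's build files are not available at runtime.</p>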
|
<python><setuptools><setup.py><python-packaging>
|
2023-06-22 21:31:03
| 1
| 63,891
|
WestCoastProjects
|
76,535,738
| 8,068,825
|
automatically create Python class factory from __init__.py
|
<p>This code assigns the class name in a <code>dict</code> to the class. I've been manually adding to <code>feature_expander_factory</code> and find this inefficient, especially if a class name changes or a class is added.</p>
<p>Instead, I'd like to create <code>feature_expander_factory</code> from the <code>__init__.py</code> below. It should take every class from the <code>__init__.py</code> file and then create a <code>dict</code> where each class name is assigned to the class.</p>
<pre><code>from data_processing.feature_expanders import (
CategoricalToOneHot,
RFMSplitter,
RFMSplitterAndOneHot,
StrToListToColumns,
)
feature_expander_factory = dict(
CategoricalToOneHot=CategoricalToOneHot,
RFMSplitter=RFMSplitter,
RFMSplitterAndOneHot=RFMSplitterAndOneHot,
ListToColumns=StrToListToColumns,
)
</code></pre>
<p><code>__init__.py</code></p>
<pre><code>from data_processing.feature_expanders.AbstractFeatureExpander import AbstractFeatureExpander
from data_processing.feature_expanders.CategoricalToOneHot import CategoricalToOneHot
from data_processing.feature_expanders.RFMSplitter import RFMSplitter
from data_processing.feature_expanders.RFMSplitterAndOneHot import RFMSplitterAndOneHot
from data_processing.feature_expanders.StrToListToColumns import StrToListToColumns
</code></pre>
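<p>One sketch: build the mapping at runtime with <code>inspect.getmembers</code> over the package module, so new or renamed classes are picked up automatically. Filtering on a shared base class (e.g. <code>AbstractFeatureExpander</code>) keeps helpers and the base itself out of the factory — the base-class filter is an assumption about the project:</p>

```python
import inspect
import types

def build_factory(module, base=None):
    """Map class name -> class for every class the module exposes,
    optionally restricted to strict subclasses of `base`."""
    return {
        name: obj
        for name, obj in inspect.getmembers(module, inspect.isclass)
        if base is None or (issubclass(obj, base) and obj is not base)
    }

# Intended use (an assumption about the project layout):
#   import data_processing.feature_expanders as fe
#   feature_expander_factory = build_factory(fe, base=fe.AbstractFeatureExpander)
```

<p>Note that the original dict also registered an alias (<code>ListToColumns</code> for <code>StrToListToColumns</code>); an automatic factory only knows real class names, so any aliases would need to be added back explicitly.</p>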
|
<python>
|
2023-06-22 21:25:50
| 2
| 733
|
Gooby
|
76,535,693
| 11,028,689
|
Using pi-heaan library for vector encryption in python?
|
<p>I am following the code described here, <a href="https://pypi.org/project/pi-heaan/" rel="nofollow noreferrer">https://pypi.org/project/pi-heaan/</a></p>
<p>e.g. my code</p>
<pre><code>import piheaan as heaan
# Step 1. Setting Parameters
params = heaan.ParameterPreset.SS7
context = heaan.make_context(params)
# Step 2. Generating Keys
key_dir_path = "./keys"
sk = heaan.SecretKey(context)
keygen = heaan.KeyGenerator(context, sk)
keygen.gen_common_keys()
pack = keygen.keypack
# Step 3. Encrypt Message to Ciphertext
enc = heaan.Encryptor(context)
log_slots = 2
msg = heaan.Message(log_slots) # number_of slots = pow(2, log_slots)
for i in range(4):
msg[i] = i+1
ctxt = heaan.Ciphertext(context)
enc.encrypt(msg, pack, ctxt)
# Step 4. Multiply ciphertexts(i.e. square a ciphertext)
eval = heaan.HomEvaluator(context, pack)
ctxt_out = heaan.Ciphertext(context)
eval.mult(ctxt, ctxt, ctxt_out)
# Step 5. Decrypt the ciphertext by Decryptor.
dec = heaan.Decryptor(context)
msg_out = heaan.Message()
dec.decrypt(ctxt_out, sk, msg_out)
msg_out # print out the result of operation performed on ciphertext
# [ (1.000000+0.000000j), (4.000000+0.000000j), (9.000000+0.000000j), (16.000000+0.000000j) ]
</code></pre>
<p>This works fine, but I don't know how to replace the line</p>
<pre><code>for i in range(4):
msg[i] = i+1
</code></pre>
<p>with a specific vector, e.g. msg = [2, 5, 4, 3]?</p>
<p>I have tried to input it as a list directly, but that does not work and gives me an error.</p>
<pre><code>--------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [28], in <cell line: 7>()
5 msg = [2,5,4,3]
6 ctxt = heaan.Ciphertext(context)
----> 7 enc.encrypt(msg, pack, ctxt)
TypeError: encrypt(): incompatible function arguments. The following argument types are supported:
1. (self: piheaan.Encryptor, msg: piheaan.Message, sk: piheaan.SecretKey, ctxt: piheaan.Ciphertext) -> None
2. (self: piheaan.Encryptor, msg: piheaan.Message, pack: piheaan.KeyPack, ctxt: piheaan.Ciphertext) -> None
3. (self: piheaan.Encryptor, ptxt: piheaan.Plaintext, sk: piheaan.SecretKey, ctxt: piheaan.Ciphertext) -> None
4. (self: piheaan.Encryptor, ptxt: piheaan.Plaintext, pack: piheaan.KeyPack, ctxt: piheaan.Ciphertext) -> None
Invoked with: <piheaan.Encryptor object at 0x000001C1A995B3B0>, [2, 5, 4, 3], <piheaan.KeyPack object at 0x000001C1A990C2F0>, (level: 7, log(num slots): 13, data: [ (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), ..., (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j) ])
</code></pre>
<p>if I try like this</p>
<pre><code>for i in [2,5,4,3]:
msg[i] = i
</code></pre>
<p>it gives me a wrong output,</p>
<pre><code>#[ (0.000000+0.000000j), (0.000000+0.000000j), (4.000000+0.000000j), (9.000000+0.000000j) ]
</code></pre>
<p>My actual data is a numpy.ndarray of specific vectors that I want to encrypt with this method.</p>
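<p>The wrong output comes from using each value as the slot index (<code>msg[2] = 2</code>, <code>msg[5] = 5</code>, ...); <code>enumerate</code> pairs the slot index with the value instead. piheaan is not needed to see this, so a plain dict stands in for <code>heaan.Message</code> below (it supports item assignment the same way):</p>

```python
values = [2, 5, 4, 3]

msg = {}  # stand-in for heaan.Message, which also supports msg[i] = v
for i, v in enumerate(values):
    msg[i] = v  # slot 0 gets 2, slot 1 gets 5, ...

# For a NumPy row the same loop applies, e.g.:
#   for i, v in enumerate(row):
#       msg[i] = float(v)
```

<p>In the real code the loop body would assign into the <code>heaan.Message</code> object instead of a dict; everything else stays as in the working example.</p>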
|
<python><for-loop><encryption><encode>
|
2023-06-22 21:15:15
| 1
| 1,299
|
Bluetail
|
76,535,558
| 2,725,810
|
Executing modules specified by strings
|
<p>I am developing a Django backend for an online course platform. The backend runs the code submitted by the student. Here is a working example for running a student code consisting of three modules:</p>
<pre class="lang-py prettyprint-override"><code>import importlib
import sys
util_a = """
def foo_a():
print('Yes!')
"""
util_b = """
from util_a import foo_a
def foo_b():
foo_a()
"""
main = """
from util_b import foo_b
foo_b()
"""
def process_module(code, name):
module = importlib.util.module_from_spec(
importlib.util.spec_from_loader(name, loader=None))
compiled_code = compile(code, '<string>', 'exec')
exec(compiled_code, module.__dict__)
sys.modules[name] = module
for code, name in [(util_a, 'util_a'), (util_b, 'util_b'), (main, 'main')]:
process_module(code, name)
</code></pre>
<p>The problem is that the util modules have to be specified in the correct order (i.e. <code>[(util_b, 'util_b'), (util_a, 'util_a'), (main, 'main')]</code> would not work), whereas I want to be able to only specify which module is main, and the rest should happen automatically, just like it would had the modules been real files.</p>
<p>So, how can I modify this code to make it work with util modules specified in any order?</p>
<p>P.S. I have a complicated solution using <code>ast</code> to get the list of modules imported by each module, and topological sort to execute the modules in the correct order. I am looking for a simpler way.</p>
|
<python><python-importlib><python-exec>
|
2023-06-22 20:48:52
| 1
| 8,211
|
AlwaysLearning
|
76,535,532
| 406,189
|
How to find the max value within a column which is a certain distance of the current row in a Pandas DataFrame?
|
<p>Suppose I have the following dataframe:</p>
<pre><code>import pandas as pd
data = pd.DataFrame(data={'input':[1,2,3,4,5,6,7,8,9,10,
11,12,13,14,15,14.5,
13.5,12.5,11.5,10.5,
9.5,8.5,7.5,6.5,5.5,
4.5,3.5,2.5,1.5,0.5]})
</code></pre>
<p>I would like to add a column to the right of the 'input' column which has the maximum value within the 'input' series which is found within 3 rows of the current row. For the example data above, the 'max' column would end up looking like this:</p>
<pre><code>data['max'] = [4,5,6,7,8,9,10,11,12,13,
14,15,15,15,15,15,15,15,14.5,13.5,12.5,
11.5,10.5,9.5,8.5,7.5,6.5,5.5,4.5,3.5]
</code></pre>
<p>Now I know that I can iterate through the 'input' series one row at a time, and find the max within my 3 row range that way, but I'm hoping there might be a better way to do it, preferably as a single (or maybe a couple) array operations. Ultimately I also need to add a 'min' column as well, but we can ignore that for now for the sake of simplicity.</p>
<p>So with that background, can this be done more cleanly or efficiently than just by iterating over the rows? My data set is on the order of millions of rows, so efficiency is rather important.</p>
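<p>A centered rolling window does this as a single vectorized operation: <code>window=7</code> covers the 3 rows before, the current row, and the 3 rows after, and <code>min_periods=1</code> lets the first/last rows use a truncated window. The same call with <code>.min()</code> gives the 'min' column:</p>

```python
import pandas as pd

data = pd.DataFrame(data={'input': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
                                    11, 12, 13, 14, 15, 14.5,
                                    13.5, 12.5, 11.5, 10.5,
                                    9.5, 8.5, 7.5, 6.5, 5.5,
                                    4.5, 3.5, 2.5, 1.5, 0.5]})

# 3 rows either side of the current row -> a centered window of 7 rows.
roll = data['input'].rolling(window=7, center=True, min_periods=1)
data['max'] = roll.max()
data['min'] = roll.min()
```

<p>Rolling aggregations run in compiled code, so this should scale to millions of rows without Python-level iteration.</p>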
|
<python><pandas><dataframe>
|
2023-06-22 20:44:43
| 1
| 1,509
|
PTTHomps
|
76,535,462
| 7,082,712
|
In Python with Shapely, how to get a better shape when removing from a large polygon a smaller polygon on the edges of it?
|
<p>In Python with Shapely (1.8.X or 2.X), let's say we have a large polygon that contains one or more small polygons sitting on its edges. If I just take the difference of the large polygon with the small polygons, we get a shape that has "holes" where the small polygons were. However, I would like to improve this final shape to get rid of the lines that surrounded these small polygons, in order to have a more "closed" shape.</p>
<p>Ideally, after that, we would also automatically identify whether the final result should be kept or whether it became too small in the end (e.g. using its area).</p>
<p>Simple example:</p>
<pre><code>from shapely.geometry import Polygon
large_polygon = Polygon([(0, 0), (0, 4), (6, 4), (6, 0)])
small_polygon = Polygon([(4, 1), (4, 3), (6, 3), (6, 1)])
result_polygon = large_polygon.difference(small_polygon)
</code></pre>
<p>Visually, we would get something like that for the "result_polygon"
<a href="https://i.sstatic.net/eXgns.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eXgns.png" alt="Current result_polygon #1" /></a></p>
<p>But what I would like to obtain would exclude the border there were around the small_polygon (the green rectangle here) <a href="https://i.sstatic.net/nGmCO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nGmCO.png" alt="Final result I would like to obtain" /></a></p>
<p>This is however just a simple case; in the end, I would like to use it on much more complex cases than just basic rectangles, with polygons with fewer straight lines - for example, obtaining well-separated polygons in the following figure if we remove the "brown" polygons from the larger "yellow" polygons:
<a href="https://i.sstatic.net/mgyK8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mgyK8.png" alt="Example of more complex cases" /></a></p>
<p>To do this, I think one possibility would be to divide the final polygon (from the difference of the large polygon with the smaller polygons) into several small parts (to form a multipolygon). We could then delete the parts we want to remove (for example according to their area) or just save them separately. I tried something that attempts to split the polygon into a multipolygon by identifying parallel or perpendicular points, but it didn't really work even on the simple case above.</p>
|
<python><shapely>
|
2023-06-22 20:33:20
| 0
| 402
|
Vincent Rougeau-Moss
|
76,535,381
| 1,114,975
|
How to count the documents in a firestore collection without retrieving the entire collection?
|
<p>I'm using the following code to retrieve a random document from a firestore collection:</p>
<pre><code> collection = db.collection('Items')
total_docs = collection.count().get()[0][0].value
random_offset = random.randint(0, total_docs - 1)
random_doc = collection.limit(1).offset(random_offset).get()[0]
</code></pre>
<p>I noticed that this code produces a lot of read-usage and discovered that the entire collection is read when counting the documents.</p>
<p>Hence my question: <strong>How can I retrieve the count of documents in a collection without reading the entire collection?</strong></p>
<p>And if this isn't possible: <strong>How can I randomly retrieve a document from a collection without specifying how many documents the collection contains?</strong></p>
<p>Many thanks!</p>
|
<python><firebase><google-cloud-firestore>
|
2023-06-22 20:20:05
| 1
| 2,506
|
Comfort Eagle
|
76,535,316
| 525,865
|
bs4-scraper turns out to deliver an empty df - whilst working with pandas
|
<p>Trying to learn something about web scraping, I thought a data-driven page with lots of data to gather from - like <code>clutch.co</code> - would be a good goal.</p>
<p>I am taking some first steps in scraping by running a tiny scraper like so.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import pandas as pd
options = Options()
options.add_argument("--headless")
options.add_argument("--no-sandbox")
driver = webdriver.Chrome(options=options)
driver.get("https://clutch.co/it-services/msp")
page_source = driver.page_source
driver.quit()
soup = BeautifulSoup(page_source, "html.parser")
# Extract the data using some BeautifulSoup selectors
# For example, let's extract the names and locations of the companies
company_names = [name.text for name in soup.select(".company-name")]
company_locations = [location.text for location in soup.select(".locality")]
# Store the data in a Pandas DataFrame
data = {
"Company Name": company_names,
"Location": company_locations
}
df = pd.DataFrame(data)
# Save the DataFrame to a CSV file
df.to_csv("clutch_data.csv", index=False)
</code></pre>
<p>But at the moment this produces an empty result -
note that I am working on <code>google-colab</code>.</p>
|
<python><pandas><selenium-webdriver><beautifulsoup>
|
2023-06-22 20:09:14
| 0
| 1,223
|
zero
|
76,535,262
| 839,733
|
Why __enter__ should not be called on the superclass/delegate?
|
<p>I'm working on an <a href="https://exercism.org/tracks/python/exercises/paasio" rel="nofollow noreferrer">exercise</a> that asks to wrap I/O operations in two different ways: subclassing and delegation. Snippets of my code are shown below.</p>
<pre><code>class MeteredFile(io.BufferedRandom):
"""Implement using a subclassing model."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._read_bytes = self._write_bytes = self._read_ops = self._write_ops = 0
def __enter__(self) -> Self:
return self
def __exit__(self, exc_type: Type[BaseException] | None,
exc_val: BaseException | None, exc_tb: TracebackType | None) -> bool | None:
return super().__exit__(exc_type, exc_val, exc_tb)
# Other methods not shown
</code></pre>
<pre><code>class MeteredSocket:
"""Implement using a delegation model."""
def __init__(self, s: socket):
self._socket = s
self._recv_bytes = self._send_bytes = self._recv_ops = self._send_ops = 0
def __enter__(self) -> Self:
return self
def __exit__(self, exc_type: Type[BaseException] | None,
exc_val: BaseException | None, exc_tb: TracebackType | None) -> bool | None:
return self._socket.__exit__(exc_type, exc_val, exc_tb)
# Other methods not shown
</code></pre>
<p>If I call <code>super().__enter__()</code> from <code>MeteredFile.__enter__</code> or <code>self._socket.__enter__()</code> from <code>MeteredSocket.__enter__</code> , some tests fail, as those do <em>not</em> expect those methods to be called. Returning <code>self</code> keeps those tests happy.</p>
<pre><code>@patch("paasio.super", create=True, new_callable=SuperMock)
def test_meteredfile_context_manager(self, super_mock):
wrapped = MockFile(ZEN)
mock = NonCallableMagicMock(wraps=wrapped, autospec=True)
mock.__exit__.side_effect = wrapped.__exit__
super_mock.mock_object = mock
with MeteredFile() as file:
self.assertEqual(1, super_mock.init_called)
self.assertFalse(mock.__enter__.called)
file.read()
self.assertFalse(mock.__enter__.called)
mock.__exit__.assert_called_once_with(None, None, None)
self.assertEqual(2, len(mock.mock_calls))
with self.assertRaisesRegex(ValueError, "I/O operation on closed file."):
file.read()
with self.assertRaisesRegex(ValueError, "I/O operation on closed file."):
file.write(b"data")
@patch("paasio.super", create=True, new_callable=SuperMock)
def test_meteredfile_context_manager_exception_raise(self, super_mock):
exception = MockException("Should raise")
wrapped = MockFile(ZEN, exception=exception)
mock = NonCallableMagicMock(wraps=wrapped, autospec=True)
mock.__exit__.side_effect = wrapped.__exit__
super_mock.mock_object = mock
with self.assertRaisesRegex(MockException, "Should raise") as err:
with MeteredFile() as file:
self.assertFalse(mock.__enter__.called)
file.read()
self.assertFalse(mock.__enter__.called)
mock.__exit__.assert_called_once_with(
MockException,
err.exception,
ANY,
)
self.assertEqual(exception, err.exception)
</code></pre>
<p>However, the <code>__exit__</code> methods are called as expected. I don't understand why this difference, can someone explain?</p>
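<p>A minimal stand-alone illustration of the pattern those tests enforce (a sketch, not the exercise's actual classes): the wrapper's <code>__enter__</code> returns the wrapper itself so the caller gets the metered interface, while <code>__exit__</code> still forwards so the wrapped resource is released — and the delegate's own <code>__enter__</code> is never invoked:</p>

```python
class Resource:
    """Stand-in for the wrapped file/socket."""
    def __init__(self):
        self.entered = self.exited = False
    def __enter__(self):
        self.entered = True
        return self
    def __exit__(self, exc_type, exc_val, exc_tb):
        self.exited = True
        return False

class MeteredWrapper:
    def __init__(self, resource):
        self._resource = resource
    def __enter__(self):
        return self  # hand back the wrapper, not the delegate
    def __exit__(self, exc_type, exc_val, exc_tb):
        return self._resource.__exit__(exc_type, exc_val, exc_tb)

r = Resource()
with MeteredWrapper(r):
    pass
print(r.entered, r.exited)  # False True
```

<p>The asymmetry is deliberate: the delegate was typically already opened by whoever constructed it, so entering it again could re-acquire or reset state, and callers need the wrapper (with its counters) rather than the bare delegate. <code>__exit__</code>, by contrast, must forward, or the underlying resource would never be closed.</p>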
|
<python><with-statement><contextmanager>
|
2023-06-22 19:58:38
| 0
| 25,239
|
Abhijit Sarkar
|
76,535,038
| 10,225,070
|
Keras model checkpoint is not tracking val_loss from previous runs
|
<p>Every time I restart training from a checkpoint, I get the following:</p>
<p><code>val_root_mean_squared_error improved from inf to 0.38011</code></p>
<p>When what I expected to get was (examples):</p>
<p><code>Epoch 1: val_root_mean_squared_error improved from 0.583 to 0.38011</code></p>
<p>or</p>
<p><code>Epoch 1: val_root_mean_squared_error did not improve from 0.326</code></p>
<p>I'm afraid that my model is getting overwritten. Here is my checkpoint:</p>
<pre><code>checkpoint_filepath = f'{log_location}/mdl.ckpt'
checkpoint = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_filepath,
save_weights_only=False,
monitor=monitor,
mode=mode,
verbose=0,
save_best_only=True)
if os.path.isdir(checkpoint_filepath):
print(f'loading model from {checkpoint_filepath}')
model = tf.keras.models.load_model(checkpoint_filepath, custom_objects={'loss_fcn': loss_fcn})
model.fit(X_train,
y_train,
batch_size=params['batch_size'],
epochs=epochs,
shuffle=True,
validation_data=(X_val, y_val),
callbacks=[checkpoint, lr_scheduler, tensorboard],
verbose=1)
</code></pre>
<p>Thanks in advance!</p>
<p>UPDATE:</p>
<p>I think this has something to do with it, although I haven't found a solution to fix this error message, and from <a href="https://github.com/tensorflow/tensorflow/issues/47554" rel="nofollow noreferrer">here</a>, it seems like I should be able to ignore the message.</p>
<pre><code>WARNING:absl:Found untraced functions such as lstm_cell_1_layer_call_fn, lstm_cell_1_layer_call_and_return_conditional_losses, lstm_cell_2_layer_call_fn, lstm_cell_2_layer_call_and_return_conditional_losses, lstm_cell_4_layer_call_fn while saving (showing 5 of 8). These functions will not be directly callable after loading.
</code></pre>
<p>Here is how I build the model (BiLSTM):</p>
<pre><code>@tf.function
def loss_fcn(y_t, y_p):
y_pred = tf.convert_to_tensor(y_p)
y_true = tf.cast(y_t, y_pred.dtype)
diff = y_pred-y_true
res = tf.map_fn(fn=lambda x: tf.math.exp(tf.abs(x)/(3)) - 1 if x < 0 else tf.math.exp(tf.abs(x)/9) - 1, elems=diff)
s_score = tf.math.reduce_sum(res)
mse = tf.math.reduce_sum(backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1))
return (s_score + mse) / 2.0
def build_bilstm_model(params, *args, **kwargs):
"""
@brief: builds the model with the specified params
"""
i = 0
# input layer
inputs = keras.Input(shape=params['input_shape'], name='inp1')
# first layer
x = layers.Bidirectional(layers.LSTM(units=params[f'units_{i}'],
recurrent_dropout=params['recurrent_dropout'],
kernel_regularizer=regularizers.l1_l2(l1=params['l1'], l2=params['l2']),
return_sequences=True,
name=f'hdn_{0}'), name=f'hdn_a{0}')(inputs)
if params['dropout_rate'] > 0.0:
x = layers.Dropout(rate=params['dropout_rate'])(x)
# subsequent layers
for i in range(1, params['layers']-1):
x = layers.Bidirectional(layers.LSTM(units=params[f'units_{i}'],
recurrent_dropout=params['recurrent_dropout'],
kernel_regularizer=regularizers.l1_l2(l1=params['l1'], l2=params['l2']),
return_sequences=True,
name=f'hdn_{i}'), name=f'hdn_{i}')(x)
if params['dropout_rate'] > 0.0:
x = layers.Dropout(rate=params['dropout_rate'])(x)
# last layers
x = layers.Bidirectional(layers.LSTM(units=params[f'units_{i+1}'],
recurrent_dropout=params['recurrent_dropout'],
kernel_regularizer=regularizers.l1_l2(l1=params['l1'], l2=params['l2']),
return_sequences=False,
name=f'hdn_{i+1}'), name=f'hdn_{i+1}')(x)
if params['dropout_rate'] > 0.0:
x = layers.Dropout(rate=params['dropout_rate'])(x)
# output layer
outputs = layers.Dense(params['num_outputs'])(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# compile
model.compile(optimizer=keras.optimizers.Adam(learning_rate=params['learning_rate']),
loss=loss_fcn,#'mse',
metrics=[keras.metrics.RootMeanSquaredError()])
print("returning model")
return model
</code></pre>
<p>These are the model parameters:</p>
<pre><code>params = {
"layers": 2,
"units_0": 80, #80
"dropout_rate": 0.2,
"l1": 1e-05,
"l2": 0.0001,
"learning_rate": 0.0015,
"recurrent_dropout": 0.0,
"batch_size": 64,
"units_1": 60, #60
"input_shape": (None, len(features)),
"num_outputs": 1,
"trial": 'explain'
}
</code></pre>
|
<python><tensorflow><keras>
|
2023-06-22 19:16:26
| 0
| 413
|
darrahts
|
76,534,777
| 7,252,531
|
Airflow REST API Call From Within DAG?
|
<p>I'm running a local instance of <a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html" rel="nofollow noreferrer">Airflow in Docker</a> (v2.6.1)</p>
<p>Calling the <a href="https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#tag/DAG" rel="nofollow noreferrer">Airflow REST API</a> in a local file outside of Airflow to <a href="https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/post_connection" rel="nofollow noreferrer">create a connection</a> works without issue:</p>
<pre><code>import requests
def create_connection():
response = requests.post(
url='http://localhost:8080/api/v1/connections',
headers={
'Content-type': 'application/json',
'Accept': 'application/json'
},
auth=(
'my_airflow_username',
'my_airflow_password'
),
json={
'connection_id': 'new_connection',
'conn_type': 'generic',
'description': 'this is the description'
}
)
return response
print(create_connection().status_code)
</code></pre>
<p>Returns status code <code>200</code> and creates the connection as expected.</p>
<p>But running the same code in a DAG:</p>
<pre><code>from datetime import datetime
from airflow.decorators import dag
from airflow.operators.python import PythonOperator
import requests
@dag(
dag_id=f'test_dag',
start_date=datetime(2023,5,11),
schedule_interval=None,
max_active_runs=1
)
def execute():
def create_connection():
response = requests.post(
url='http://localhost:8080/api/v1/connections',
headers={
'Content-type': 'application/json',
'Accept': 'application/json'
},
auth=(
'my_airflow_username',
'my_airflow_password'
),
json={
'connection_id': 'new_connection',
'conn_type': 'generic',
'description': 'this is the description'
}
)
return response
create_connection = PythonOperator(
task_id='create_connection',
python_callable=create_connection
)
create_connection
execute()
</code></pre>
<p>errors and returns this stack trace:</p>
<pre><code>Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 710, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 398, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/usr/local/lib/python3.7/http/client.py", line 1281, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1327, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1276, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.7/http/client.py", line 976, in send
self.connect()
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0xffff8bb14090>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/requests/adapters.py", line 497, in send
chunked=chunked,
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 788, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/airflow/.local/lib/python3.7/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /api/v1/connections (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff8bb14090>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/operators/python.py", line 181, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/operators/python.py", line 198, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/opt/airflow/dags/crmf/development/credential_testing/dag2.py", line 31, in create_connection
'description': 'this is the description'
File "/home/airflow/.local/lib/python3.7/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /api/v1/connections (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff8bb14090>: Failed to establish a new connection: [Errno 111] Connection refused'))
</code></pre>
<p>I expect to call the Airflow API from within my DAG just like any other API call. Any ideas on why the Airflow API connection fails?</p>
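<p>My current suspicion is the base URL: inside the scheduler/worker container, <code>localhost</code> refers to that container itself, not the webserver. A minimal sketch of how the URL could be parameterized (the service name <code>airflow-webserver</code> is an assumption based on the standard docker-compose file; verify it against your compose services):</p>

```python
# Hypothetical URL builder: inside a Docker Compose network, the webserver is
# reachable via its service name (assumed "airflow-webserver"), not "localhost".
def airflow_api_url(endpoint, base="http://airflow-webserver:8080"):
    """Join a base URL and an API v1 endpoint without doubled slashes."""
    return f"{base.rstrip('/')}/api/v1/{endpoint.lstrip('/')}"
```

<p>The request itself would stay the same, only <code>url=airflow_api_url('connections')</code> would change.</p>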
|
<python><docker><airflow>
|
2023-06-22 18:32:24
| 1
| 1,830
|
gbeaven
|
76,534,730
| 1,446,379
|
Start and stop audio streaming using http or mqtt calls - Python
|
<p>I have a Python script that streams audio. It uses websockets, asyncio and pyaudio. I want to start and stop the streaming (or this complete script) with an HTTP or MQTT call. Currently I do this manually:
to start: <code>python script.py</code>;
to stop: Ctrl+C.</p>
<p>What approach shall I use?</p>
<pre><code>import asyncio
import base64
import json

import pyaudio
import websockets

# NOTE: FORMAT, CHANNELS, RATE, FRAMES_PER_BUFFER, URL and auth_key
# are defined elsewhere in the script
p = pyaudio.PyAudio()

# starts recording
stream = p.open(
format=FORMAT,
channels=CHANNELS,
rate=RATE,
input=True,
frames_per_buffer=FRAMES_PER_BUFFER
)
async def send_receive():
print(f'Connecting websocket to url ${URL}')
async with websockets.connect(
URL,
extra_headers=(("Authorization", auth_key),),
ping_interval=5,
ping_timeout=20
) as _ws:
await asyncio.sleep(0.1)
print("Receiving SessionBegins ...")
session_begins = await _ws.recv()
print(session_begins)
print("Sending messages ...")
async def send():
while True:
try:
data = stream.read(FRAMES_PER_BUFFER)
data = base64.b64encode(data).decode("utf-8")
json_data = json.dumps({"audio_data":str(data)})
await _ws.send(json_data)
except websockets.exceptions.ConnectionClosedError as e:
print(e)
assert e.code == 4008
break
except Exception as e:
assert False, "Not a websocket 4008 error"
await asyncio.sleep(0.01)
return True
async def receive():
while True:
try:
result_str = await _ws.recv()
print(json.loads(result_str)['text'])
except websockets.exceptions.ConnectionClosedError as e:
print(e)
assert e.code == 4008
break
except Exception as e:
assert False, "Not a websocket 4008 error"
send_result, receive_result = await asyncio.gather(send(), receive())
asyncio.run(send_receive())
</code></pre>
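<p>One approach I've been considering: keep the script running permanently and flip a flag from a tiny HTTP control endpoint, which the streaming loop checks. A minimal stdlib sketch (the <code>/start</code> and <code>/stop</code> routes and the port are my own naming; an MQTT <code>on_message</code> callback could set the same Event):</p>

```python
# Minimal control-plane sketch: a background HTTP server toggles an Event
# that the streaming loop checks. Routes /start and /stop are arbitrary names.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

streaming = threading.Event()  # the audio loop checks streaming.is_set()

class ControlHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/start":
            streaming.set()
        elif self.path == "/stop":
            streaming.clear()
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def serve(port=18123):
    """Start the control server on a daemon thread and return it."""
    server = HTTPServer(("127.0.0.1", port), ControlHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

<p>Inside <code>send()</code>, the loop would then do <code>if not streaming.is_set(): await asyncio.sleep(0.1); continue</code> instead of reading from the stream.</p>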
|
<python><websocket><python-asyncio><pyaudio>
|
2023-06-22 18:24:57
| 1
| 2,296
|
Abhishek Kumar
|
76,534,464
| 1,860,222
|
Python tensorflow throwing a 'module not found' error for ctypes
|
<p>I've been playing around with the SIMPLE project (<a href="https://medium.com/applied-data-science/how-to-train-ai-agents-to-play-multiplayer-games-using-self-play-deep-reinforcement-learning-247d0b440717" rel="nofollow noreferrer">https://medium.com/applied-data-science/how-to-train-ai-agents-to-play-multiplayer-games-using-self-play-deep-reinforcement-learning-247d0b440717</a>) in an attempt to better understand using ML for multi-player games. The project is a few years old, so I decided to try updating the dependencies to newer versions. My requirements.txt now looks like this:</p>
<pre><code>pytorch==2.0.1
pytorch-cuda==11.8
tensorflow==2.10.0
stable-baselines3==1.5.0
jupyter==1.0.0
jupyterlab==4.0.2
pillow
mpi4py==3.1.4
</code></pre>
<p>I used conda to create a Python 3.10 environment and installed the dependencies. Everything installed, and there were no errors flagged in PyCharm, so I went ahead and ran train.py. The run immediately crashed with the error:</p>
<pre><code>C:\workspace\SIMPLE\app\train.py -r -e sushigo
Traceback (most recent call last):
File "C:\workspace\SIMPLE\app\train.py", line 6, in <module>
import tensorflow as tf
File "E:\anaconda3\envs\SIMPLE\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
from tensorflow.python.tools import module_util as _module_util
File "E:\anaconda3\envs\SIMPLE\lib\site-packages\tensorflow\python\__init__.py", line 24, in <module>
import ctypes
File "E:\anaconda3\envs\SIMPLE\lib\ctypes\__init__.py", line 8, in <module>
from _ctypes import Union, Structure, Array
ImportError: DLL load failed while importing _ctypes: The specified module could not be found.
</code></pre>
<p>I have tried clean installs of the libraries. I tried completely uninstalling anaconda and starting from scratch. I tried installing just the first 4 libraries to eliminate potential conflicts. No matter what, I always end up with that same error. I'm not sure what else to do at this point. What could be causing this and how do I fix it?</p>
<p><strong>System Info</strong>:</p>
<p>Windows 10 (64 bit)</p>
<p>Python 3.10.8</p>
<p>conda 23.5.0</p>
<p>PyCharm 2023.1.3 (Community Edition)</p>
|
<python><python-3.x><tensorflow><conda><ctypes>
|
2023-06-22 17:44:29
| 0
| 1,797
|
pbuchheit
|
76,534,404
| 11,743,016
|
Explode a Polars column with dictionary entries into subcolumns
|
<p>I have a Polars dataframe in the following format.</p>
<p><a href="https://i.sstatic.net/5KGEL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5KGEL.png" alt="enter image description here" /></a></p>
<p>I want to explode the Price and Duration column so that <strong>idxmax</strong> and <strong>max</strong> become subcolumns.</p>
<p><a href="https://i.sstatic.net/nXRgz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nXRgz.png" alt="enter image description here" /></a></p>
<p>What set of method chaining must I perform on the dataframe or columns to achieve such a transformation? Is it possible to expand dictionaries as subcolumns? If the chaining is too complicated, would it be simpler to convert the Polars dataframe to a Pandas dataframe, then expand every dictionary row?</p>
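<p>In case it helps frame the question: in Polars itself, if the column is a genuine struct column (not Python dicts), <code>DataFrame.unnest</code> expands the fields into columns. For the pandas detour I mentioned, a sketch with invented sample data mirroring the screenshot's <code>idxmax</code>/<code>max</code> fields:</p>

```python
# Sketch of the pandas fallback: expand a column of dicts into subcolumns.
# The sample values are invented to mirror the screenshot's fields.
import pandas as pd

df = pd.DataFrame({
    "Ticker": ["A", "B"],
    "Price": [{"idxmax": 3, "max": 10.5}, {"idxmax": 1, "max": 7.2}],
})

# json_normalize turns each dict row into its own set of columns
price = pd.json_normalize(df["Price"]).add_prefix("Price.")
out = pd.concat([df.drop(columns="Price"), price], axis=1)
# out now has columns: Ticker, Price.idxmax, Price.max
```

<p>The same pattern would apply to the Duration column.</p>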
|
<python><dataframe><python-polars>
|
2023-06-22 17:35:40
| 1
| 349
|
disguisedtoast
|
76,534,111
| 7,441,757
|
How to handle dynamically created objects in the PyCharm IDE?
|
<p>I'm creating many dynamic objects in an <code>__init__.py</code> file. I understand I could choose to not do this, but I am asking for the scenario where I decide to do this.</p>
<p>For example in <code>example/__init__.py</code> I have</p>
<pre><code>class Example:
pass
globals()['SubExample'] = Example
</code></pre>
<p>Now I can run</p>
<pre><code>from example import SubExample
</code></pre>
<p>It will run fine, but my PyCharm IDE will not recognise this, as it's created only at runtime. Is there any way to make it recognise this?</p>
<p>Note that you can ignore import errors with <code>#noqa</code>, but I would like pycharm to understand that this class will exist.</p>
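<p>For reference, one pattern I've seen keeps the dynamic assignment but also declares the name statically under <code>TYPE_CHECKING</code>, so static analyzers can resolve it while runtime behavior is unchanged (a sketch; a <code>.pyi</code> stub next to the package would achieve the same):</p>

```python
# Sketch: keep the runtime dynamic assignment, but also declare the name
# statically under TYPE_CHECKING so IDEs and type checkers can resolve it.
from typing import TYPE_CHECKING

class Example:
    pass

globals()['SubExample'] = Example  # dynamic creation, as before

if TYPE_CHECKING:
    # Never executed at runtime; only read by static analyzers.
    SubExample = Example
```
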
|
<python><pycharm>
|
2023-06-22 16:54:22
| 0
| 5,199
|
Roelant
|
76,534,034
| 7,179,546
|
Connect to Mongo database using Python and CosmosClient
|
<p>I have a Python application that uses MongoDB, and I'm working on migrating it to Azure Cosmos DB.
I've already created an account and prepared a client that I'm creating with</p>
<p><code>cosmos_client = CosmosClient(url, key)</code></p>
<p>With this I can connect to the client, but now I want to connect to a Mongodb inside, using its connection string.</p>
<p>So, I'd want something like</p>
<p><code>mongo_client = cosmos_client.getMongoDatabase(connection_string)</code></p>
<p>Is there any function to do that? I can't find anything in the docs</p>
|
<python><mongodb><azure-cosmosdb>
|
2023-06-22 16:45:15
| 2
| 737
|
Carabes
|
76,533,946
| 19,504,610
|
Importing `CPython/Objects/genobject.c` into Cython
|
<p>In <a href="https://stackoverflow.com/questions/33086984/cython-access-to-private-c-members-of-cpython-object">this question</a> and this <a href="https://docs.cython.org/en/latest/src/userguide/extension_types.html#external-extension-types" rel="nofollow noreferrer">reference</a>, there are examples of importing c-header files into Cython, but there is no answer to how can one import a <code>.c</code> file content into Cython.</p>
<p>The <code>.c</code> file which I am interested to import into Cython for my use is: <a href="https://github.com/python/cpython/blob/main/Objects/genobject.c" rel="nofollow noreferrer">https://github.com/python/cpython/blob/main/Objects/genobject.c</a></p>
<p>I have two questions:</p>
<p><strong>One</strong></p>
<p>How do I import a <code>.c</code> file into Cython?</p>
<p><strong>Two</strong></p>
<p>Do I need to have the <code>.c</code> file within my directory for it to be accessed? If yes, how do I go about getting <code>https://github.com/python/cpython/blob/main/Objects/genobject.c</code> so that I can use its content in Cython?</p>
<p>I have two <code>.pxd</code> files, <code>cpy.pxd</code> and <code>src.pxd</code>.</p>
<p>In <code>cpy.pxd</code>, I have:</p>
<pre><code>cdef extern from "Python.h":
cdef int gen_is_coroutine(object o)
</code></pre>
<p>In <code>src.pxd</code>, I use it as usual:</p>
<pre><code>if gen_is_coroutine(__value):
do_something(__value)
</code></pre>
<p>I get this Cython compiler error:</p>
<pre><code>constmeta.obj : error LNK2001: unresolved external symbol gen_is_coroutine
build\lib.win-amd64-cpython-310\cy_src\constmeta.cp310-win_amd64.pyd : fatal error LNK1120: 1 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.33.31629\\bin\\HostX86\\x64\\link.exe' failed with exit code 1120
</code></pre>
|
<python><cython><cpython>
|
2023-06-22 16:33:39
| 0
| 831
|
Jim
|
76,533,920
| 2,715,216
|
multiprocessing.pool.Threadpool stuck - Databricks Notebook
|
<p>I have an API which is rather slow per single request, but which I can scale to send up to roughly 70 requests per second. To download all the data, I therefore split it into multiple ID ranges: [(interval_1_lowest_id, interval_1_highest_id), ..., (interval_n_lowest_id, interval_n_highest_id)].</p>
<p>I have a single thread that determines these ranges. The list of ranges is then given to a ThreadPool to download all the data. Data is written to different files, so threads never write to the same file.
For some reason I do not understand, I usually end up in a situation where 99% of the work is done and 1-5 ranges did not load successfully.
However, the ThreadPool then just seems to do nothing. It looks like those threads failed silently, even though I retrieve the result.</p>
<p>Some very reduced pseudo-code showing what I am doing is below.</p>
<p>The behaviour is very stable when I run it over all the data; for small amounts it works. It always gets stuck with a small number of unfinished tasks while not appearing to do anything. I even let it run over the weekend, just to be sure it wasn't a very slow connection or similar, but no further progress happened.</p>
<p>Environment: Databricks Notebook (11.3 LTS (includes Apache Spark 3.3.0, Scala 2.12))</p>
<p>python: 3.9.5</p>
<p>I have some understanding of concurrency, and thus I wonder if threads are failing silently or a deadlock occurs. Do you have a good idea how to debug such an issue? I am trying to get better logging in place right now, but I still wonder what's going on. Maybe it's something with ipykernel and threading in notebooks?</p>
<p><s><strong>Update: It seems that dbutils.fs.put is the problem. I can now reproduce this quite reliably. At some point it deadlocks when using it, even when uploading only a very small amount of data. Any idea what could cause dbutils to do this? It also happens when using true processes via multiprocessing instead of threads.</strong></s></p>
<p><strong>Latest insight: I guess there is a memory leak somewhere which causes some processes to fail silently without a proper exception being thrown. This leaves the pool stuck. I could build a very simple example with two processes, run in a single cell of a Databricks notebook on a small cluster with 14 GB RAM. I simply create thousands of sessions to exhaust RAM. What happens for me is that the second process initially does something but then crashes silently. In the end only the first process finished when trying the code below.
I would expect some sort of notification on the main process/thread, but this seems not to be the case?</strong></p>
<p>Code example to reproduce issue on databricks:</p>
<pre><code>import multiprocessing
import logging
import time
import requests
import psutil
def execute_single_task(task):
session =[]
max_s = 1000000
print(f"before:{multiprocessing.current_process().pid} {task}")
while len(session) < max_s:
# dumb example just to cause Memory Issue
session.append(requests.Session())
if len(session) %1000 == 0:
print(len(session))
print(psutil.virtual_memory())
print(f"after:{multiprocessing.current_process().pid} {task}")
time.sleep(2)
return True
def execute_all_tasks(tasks):
PARALLELIZATION = 2
print(PARALLELIZATION)
with multiprocessing.Pool(processes=PARALLELIZATION) as pool:
results = []
print("with pool")
for result in pool.imap_unordered(execute_single_task, tasks):
if result:
logging.info("Task succeded!")
else:
logging.info("Task failed!")
raise Exception("The entire main process failed.")
logging.info("All Tasks succeeded!")
if __name__ == "__main__":
print(f"if name:{__name__}")
tasks = [1,2]
execute_all_tasks(tasks)
</code></pre>
<p>The result is shown in the picture: even after 10 hours there is no more progress. It shows that memory is full, as expected, but I wonder why the main thread does not receive any exception, or why the entire process does not die.
<a href="https://i.sstatic.net/vd7LM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vd7LM.png" alt="enter image description here" /></a></p>
<p>Below is the code example for the initial question:</p>
<pre><code>from multiprocessing.pool import ThreadPool
import copy
import logging
from dataclasses import dataclass

import requests

@dataclass
class Task:
    id: int
    range: tuple[int, int]

def download_and_save(range):
    resp = requests.post(....)
    # I wonder if this is my problem. Can I get some race condition here on databricks dbutils.fs?
    dbutils.fs.put(some_path, resp.text)

def download_range(task):
    logging.info("some info here and there")
    download_and_save(task.range)
    return task

def determine_ranges() -> list[Task]:  # id, range
    # ....
    return list_of_tasks

def main():
    logging.basicConfig(level=..., format=...)
    tasks = determine_ranges()  # Task(id, range)
    remaining_tasks = copy.deepcopy(tasks)
    with ThreadPool(processes=70) as pool:
        for result in pool.imap_unordered(download_range, tasks):
            if isinstance(result, Task):
                print(f"Task with id: {result.id} done")
                # remove task from remaining
                # ....
                logging.info(f"remaining: {remaining_tasks}")
            elif isinstance(result, Exception):
                logging.exception(result)
                raise result
            else:
                raise ValueError(f"unexpected result: {type(result)}")
</code></pre>
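<p>To rule out silently lost exceptions, the worker can be wrapped so that every outcome, including exceptions, travels back through <code>imap_unordered</code> as a value, guaranteeing one result per submitted task. A self-contained sketch (<code>safe</code> and <code>work</code> are illustrative names):</p>

```python
# Sketch: return exceptions as values instead of letting a worker die
# silently, so the consuming loop always sees one result per task.
from multiprocessing.pool import ThreadPool

def safe(fn):
    def wrapper(task):
        try:
            return fn(task)
        except BaseException as e:  # also catches MemoryError, KeyboardInterrupt
            return e
    return wrapper

def work(x):
    if x == 3:
        raise ValueError("boom")
    return x * 2

with ThreadPool(processes=4) as pool:
    results = list(pool.imap_unordered(safe(work), range(5)))

failures = [r for r in results if isinstance(r, Exception)]
successes = sorted(r for r in results if not isinstance(r, Exception))
```

<p>If the count of results ever stops short of the count of tasks even with this wrapper, the problem is below Python (a worker process killed by the OS, for example), not an uncaught exception.</p>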
|
<python><databricks><python-multiprocessing>
|
2023-06-22 16:30:27
| 0
| 371
|
thompson
|
76,533,766
| 10,620,003
|
Convert dataframe to a specific list
|
<p>I have a data frame where each value is a list (or array). I want to convert the data frame to a list as follows.
Can you help me with that? Here is an example.</p>
<pre><code>import pandas as pd

a = pd.DataFrame()
a['0'] = [[[0.2, 0.4]], [[1, 10]]]
a['1'] = [[[4, 5]], [[6, 7]]]

# a looks like:
#              0         1
# [[0.2, 0.4]]  [[4, 5]]
# [[1, 10]]     [[6, 7]]
</code></pre>
<p>and the output is:</p>
<pre><code>import numpy as np

list_of_array = [[np.array([[0.2, 0.4]]), np.array([[4, 5]])],
                 [np.array([[1, 10]]), np.array([[6, 7]])]]
# [[array([[0.2, 0.4]]), array([[4, 5]])], [array([[ 1, 10]]), array([[6, 7]])]]
</code></pre>
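<p>A sketch of one way to produce exactly that output from the frame above:</p>

```python
# Convert each cell (a nested list) into an ndarray, row by row.
import numpy as np
import pandas as pd

a = pd.DataFrame()
a['0'] = [[[0.2, 0.4]], [[1, 10]]]
a['1'] = [[[4, 5]], [[6, 7]]]

# to_numpy().tolist() yields plain nested lists; wrap each cell in np.array
list_of_array = [[np.array(cell) for cell in row] for row in a.to_numpy().tolist()]
```
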
|
<python><dataframe>
|
2023-06-22 16:06:58
| 1
| 730
|
Sadcow
|
76,533,672
| 18,483,009
|
How to use YAML to create a common node between two functions in Apache Age?
|
<p>I have two Python functions that each create a Person node in an Apache Age graph. I want to create a common Person node between these two functions that has the same properties. I've been told that YAML can be used to define a common configuration file that can be included in both functions to create or update the common Person node.</p>
<p>My question is: How can I use YAML to define a common configuration file that can be used to create or update a common Person node between my two functions in Apache Age? Specifically, how do I load the YAML file into a Python dictionary, and how do I use the dictionary to set the properties of the Person node in my Apache Age graph?</p>
<p>Here's an example YAML configuration file that defines a common Person node with a name property:</p>
<pre><code>common_person:
  name: John Doe
</code></pre>
<p>And here's an example function that creates or updates the Person node in Apache Age using the <code>common_config</code> dictionary:</p>
<pre><code>from age import Graph
def update_person_node(common_config):
graph = Graph("path/to/database")
with graph.transaction() as tx:
tx.query(
"MERGE (p:Person {name: $name}) "
"SET p += $props",
name=common_config['common_person']['name'],
props=common_config['common_person']
)
</code></pre>
<p>What is the best way to load the YAML file into a Python dictionary, and how do I use the dictionary to create or update the Person node in my Apache Age graph?</p>
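<p>For the loading part specifically, a sketch of what I have in mind with PyYAML's <code>safe_load</code> (inlining the YAML here instead of reading a file, to keep the example self-contained):</p>

```python
# Load the shared YAML config into a nested dict. yaml.safe_load avoids
# executing arbitrary tags, unlike yaml.load with the default loader.
import yaml

config_text = """\
common_person:
  name: John Doe
"""
common_config = yaml.safe_load(config_text)
# From a file it would be:
#   with open("common.yaml") as f:
#       common_config = yaml.safe_load(f)
```

<p>Both functions could then call <code>update_person_node(common_config)</code> against the same file.</p>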
|
<python><apache-age><opencypher>
|
2023-06-22 15:52:56
| 3
| 583
|
AmrShams07
|
76,533,663
| 11,659,631
|
Most confusing plots of sinus signal with python
|
<p>I have "discovered" the most confusing thing while plotting a sine signal with Python. We agree that the formula for a sine signal is (source, wiki: <a href="https://fr.wikipedia.org/wiki/Signal_sinuso%C3%AFdal" rel="nofollow noreferrer">https://fr.wikipedia.org/wiki/Signal_sinuso%C3%AFdal</a>):
y = sin(2 * pi * f * t + phi),
with t the time, f the frequency, and phi the phase offset.</p>
<p>Now, I was trying to plot sine signals with different frequencies, e.g. f = 10e6 Hz and f = 10 Hz, but the plot is the same for both frequencies! What am I doing wrong?
Here is my code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

# time span of the measured signals
t = np.linspace(0, 100, 1000)  # start, stop, number

# Generate some sine signal data
signal1 = np.sin(2 * np.pi * 10 * t)
signal2 = np.sin(2 * np.pi * 10e6 * t)

# Plot the signals
plt.figure()
plt.plot(t, signal1, c='r', label='S1')
plt.plot(t, signal2, c='b', label='S2')
plt.ylabel('Signal intensity')
plt.xlabel('Time')
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/1I967.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1I967.png" alt="enter image description here" /></a></p>
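<p>For reference, the numbers suggest the time grid, not the formula, is the issue: 1000 samples spread over 100 s give a sampling rate of only about 10 Hz, below the Nyquist rate (2f) needed for either signal, so both curves are aliases:</p>

```python
# The time grid's sampling rate is far too low for both frequencies.
import numpy as np

t = np.linspace(0, 100, 1000)   # 1000 samples across 100 s
fs = 1.0 / (t[1] - t[0])        # sampling rate in Hz, = 999/100 = 9.99 Hz
# Nyquist: representing a sine of frequency f needs fs > 2*f,
# so neither f = 10 Hz nor f = 10e6 Hz is resolvable on this grid.
```
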
|
<python><matplotlib><signals>
|
2023-06-22 15:51:58
| 0
| 338
|
Apinorr
|
76,533,594
| 3,181,175
|
Object detection with Tensorflow and galeone/tfgo
|
<p>What I have:</p>
<ul>
<li><p>a trained and exported model for object detection, trained by this script <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/model_main_tf2.py" rel="nofollow noreferrer">model_main_tf2.py</a> and exported by this script <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/exporter_main_v2.py" rel="nofollow noreferrer">exporter_main_v2.py</a></p>
</li>
<li><p>Python code for inference (I found this snippet on Stack Overflow), and it seems to work well with my model</p>
</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
from PIL import Image
import matplotlib.pyplot as plt
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
detect_fn = tf.saved_model.load("exported-models/my_model/saved_model")
print(detect_fn)
PATH_TO_LABELS = 'annotations/label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
IMAGE_PATHS = ["img.png"]
def load_image_into_numpy_array(path):
return np.array(Image.open(path))
for image_path in IMAGE_PATHS:
print('Running inference for {}... '.format(image_path), end='')
image_np = load_image_into_numpy_array(image_path)
input_tensor = tf.convert_to_tensor(image_np)
input_tensor = input_tensor[tf.newaxis, ...]
detections = detect_fn(input_tensor)
num_detections = int(detections.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy()
for key, value in detections.items()}
detections['num_detections'] = num_detections
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections,
detections['detection_boxes'],
detections['detection_classes'],
detections['detection_scores'],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.6,
agnostic_mode=False)
plt.figure(figsize=(20, 20))
plt.imshow(image_np_with_detections)
# graph.png contains img.png and detected object within rectangle and label
plt.savefig("graph.png")
</code></pre>
<ul>
<li>a Go snippet for inference which loads the same model, but it's not working; this code panics with <code>op detection_boxes not found</code></li>
</ul>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"fmt"
tf "github.com/galeone/tensorflow/tensorflow/go"
tg "github.com/galeone/tfgo"
"log"
"os"
)
func main() {
model := tg.LoadModel("saved_model", []string{"serve"}, nil)
imageBytes, err := os.ReadFile("img.png")
if err != nil {
log.Fatal(err)
}
tensor, err := tf.NewTensor(imageBytes)
if err != nil {
log.Fatal(err)
}
results := model.Exec([]tf.Output{
model.Op("detection_boxes", 0),
model.Op("detection_scores", 0),
model.Op("detection_classes", 0),
model.Op("num_detections", 0),
}, map[tf.Output]*tf.Tensor{
model.Op("image_tensor", 0): tensor,
})
if err != nil {
log.Fatal(err)
}
//TODO
fmt.Print(results)
}
</code></pre>
<p>Indeed, there are no such operations in <code>operations</code></p>
<pre class="lang-golang prettyprint-override"><code>model, _ := tf.LoadSavedModel("saved_model", []string{"serve"}, nil)
operations := model.Graph.Operations() // no 'detection_boxes' and others
</code></pre>
<p>So, what's wrong with my Go code, or is it an issue with my model?</p>
<p>P.S. I originally posted this question on <a href="https://github.com/galeone/tfgo/issues/85" rel="nofollow noreferrer">GitHub Issues</a>, then realised Stack Overflow is a better place.</p>
|
<python><tensorflow><go><object-detection>
|
2023-06-22 15:43:20
| 0
| 1,666
|
mineroot
|
76,533,397
| 8,810,517
|
Python: Pass ContexVars from parent thread to child thread spawn using threading.Thread()
|
<p>I am setting some context variables using the <code>contextvars</code> module so that they can be accessed across modules running on the same thread.
Initially I created a <code>contextvars.ContextVar()</code> object in each Python file, hoping that a single context would be shared amongst all the Python files of the module running on the same thread. But each file got its own new context variables.</p>
<p>I took inspiration from the Flask library and how it sets the context of a web request on the request object, so that only the thread on which the request arrived can access it. Resources: (1) <a href="https://flask.palletsprojects.com/en/2.3.x/reqcontext/" rel="nofollow noreferrer">Request context working in Flask</a> (2) <a href="https://testdriven.io/blog/flask-contexts-advanced/" rel="nofollow noreferrer">Flask contexts, advanced</a></p>
<p>Basically, the <strong>Local</strong> class below is copied from the <strong>werkzeug</strong> library (werkzeug.local module: <a href="https://werkzeug.palletsprojects.com/en/2.3.x/local/#werkzeug.local.Local" rel="nofollow noreferrer">https://werkzeug.palletsprojects.com/en/2.3.x/local/#werkzeug.local.Local</a>)</p>
<p><strong>customContextObject.py</strong></p>
<pre><code>from contextvars import ContextVar
import typing as t
import warnings


class Local:
    __slots__ = ("_storage",)

    def __init__(self) -> None:
        object.__setattr__(self, "_storage", ContextVar("local_storage"))

    @property
    def __storage__(self) -> t.Dict[str, t.Any]:
        warnings.warn(
            "'__storage__' is deprecated and will be removed in Werkzeug 2.1.",
            DeprecationWarning,
            stacklevel=2,
        )
        return self._storage.get({})  # type: ignore

    def __iter__(self) -> t.Iterator[t.Tuple[int, t.Any]]:
        return iter(self._storage.get({}).items())

    def __getattr__(self, name: str) -> t.Any:
        values = self._storage.get({})
        try:
            print(f"_storage : {self._storage} | values : {values}")
            return values[name]
        except KeyError:
            raise AttributeError(name) from None

    def __setattr__(self, name: str, value: t.Any) -> None:
        values = self._storage.get({}).copy()
        values[name] = value
        self._storage.set(values)

    def __delattr__(self, name: str) -> None:
        values = self._storage.get({}).copy()
        try:
            del values[name]
            self._storage.set(values)
        except KeyError:
            raise AttributeError(name) from None


localContextObject = Local()
</code></pre>
<p>The <code>localContextObject</code> can now be imported in any Python file in the project, and they will all have access to the same <code>ContextVar</code> object.</p>
<p>Example: I set the <code>email</code> property on <code>localContextObject</code> in the <strong>contextVARSDifferentModulesCUSTOM.py</strong> file of the <strong>contextVARSexperiments</strong> module. We import and call the <code>check_true_false()</code> function from <strong>utils.py</strong>:</p>
<pre><code>from contextVARSexperiments.utils import check_true_false
from contextVARSexperiments.customContextObject import localContextObject
import threading

localContextObject.email = "example@email.com"
print(f"localContextObject : {localContextObject} | email : {localContextObject.email}")

def callingUtils(a):
    print(f"{threading.current_thread()} | {threading.main_thread()}")
    check_true_false(a)

callingUtils('MAIN CALL')
</code></pre>
<p>Now the other file, <strong>utils.py</strong>, in the same module will have access to the same ContextVars through <code>localContextObject</code>. It will print the same email as set in the file above.</p>
<p><strong>utils.py</strong></p>
<pre><code>import threading
import contextvars
from contextVARSexperiments.customContextObject import localContextObject

def decorator(func):
    def wrapper(*args, **kwargs):
        print("\n~~~ENTERING check_true_false~~~~~~ ")
        func(*args, **kwargs)
        print("~~~EXITED check_true_false~~~~~~\n")
    return wrapper

@decorator
def check_true_false(a):
    print(f"check_true_false2 {threading.current_thread()} | {threading.main_thread()}")
    print(f" a : {a}")
    print(f"localContextObject : {localContextObject}")
    print(f"email : {localContextObject.email}")
</code></pre>
<p>Below is the output when we run contextVARSDifferentModulesCUSTOM.py</p>
<pre><code>/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/contextVARSDifferentModulesCUSTOM.py
localContextObject : <_thread._local object at 0x7fcfb85fdd58> | email : example@email.com
<_MainThread(MainThread, started 8671015616)> | <_MainThread(MainThread, started 8671015616)>
~~~ENTERING check_true_false~~~~~~
check_true_false <_MainThread(MainThread, started 8671015616)> | <_MainThread(MainThread, started 8671015616)>
a : MAIN CALL
localContextObject : <_thread._local object at 0x7fcfb85fdd58>
email : example@email.com
~~~EXITED check_true_false~~~~~~
</code></pre>
<p>Now, I updated contextVARSDifferentModulesCUSTOM.py to call callingUtils() function on a <strong>new thread</strong>.</p>
<pre><code>from contextVARSexperiments.utils import check_true_false
from contextVARSexperiments.customContextObject import localContextObject
import threading

localContextObject.email = "example@email.com"
print(f"localContextObject : {localContextObject} | email : {localContextObject.email}")

def callingUtils(a):
    print(f"{threading.current_thread()} | {threading.main_thread()}")
    check_true_false(a)

t1 = threading.Thread(target=callingUtils, args=('THREAD"S CALL',))
t1.start()
t1.join()
</code></pre>
<p>But this threw an error because the child thread didn't have access to the parent thread's ContextVars. Output:</p>
<pre><code>/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/contextVARSDifferentModulesCUSTOM.py
_storage : <ContextVar name='local_storage' at 7ff1d0435668> | values : {'email': 'example@email.com'}
localContextObject : <contextVARSexperiments.customContextObject.Local object at 0x7ff1c02162e8> | email : example@email.com
<Thread(Thread-1, started 12937875456)> | <_MainThread(MainThread, started 8609043136)>
~~~ENTERING check_true_false~~~~~~
check_true_false <Thread(Thread-1, started 12937875456)> | <_MainThread(MainThread, started 8609043136)>
a : THREAD"S CALL
localContextObject : <contextVARSexperiments.customContextObject.Local object at 0x7ff1c02162e8>
_storage : <ContextVar name='local_storage' at 7ff1d0435668> | values : {}
Exception in thread Thread-1:
Traceback (most recent call last):
File "/Users/<user>/miniconda3/envs/test_env/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/Users/<user>/miniconda3/envs/test_env/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/contextVARSDifferentModulesCUSTOM.py", line 13, in callingUtils
check_true_false(a)
File "/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/utils.py", line 26, in wrapper
func(*args, **kwargs)
File "/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/utils.py", line 43, in check_true_false
print(f"email : {localContextObject.email}")
File "/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/customContextObject.py", line 31, in __getattr__
raise AttributeError(name) from None
AttributeError: email
</code></pre>
<p>Now I am trying to inherit from the Thread class and create my own implementation that passes the context from the parent thread to the child thread.</p>
<p>I tried to replace the <code>threading.Thread</code> class with a <code>CustomThread</code> class. Following are my attempted implementations of the <code>CustomThread</code> class inside <strong>customThreading.py</strong>:</p>
<p>More about <code>Context</code> object returned by <code>copy_context()</code> method of contextvars library : <a href="https://docs.python.org/3/library/contextvars.html#contextvars.Context" rel="nofollow noreferrer">https://docs.python.org/3/library/contextvars.html#contextvars.Context</a></p>
<ol>
<li>Using <code>Context</code> object returned by <code>copy_context()</code> to run initialiser of Threading class:</li>
</ol>
<pre><code>import threading
import contextvars

class CustomThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        self.current_context = contextvars.copy_context()
        self.current_context.run(super().__init__, *args, **kwargs)

    def start(self) -> None:
        super().start()
</code></pre>
<ol start="2">
<li>Using <code>Context</code> object returned by <code>copy_context()</code> while calling <code>start()</code> of Threading class:</li>
</ol>
<pre><code>import threading
import contextvars

class CustomThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        self.current_context = contextvars.copy_context()
        super().__init__(*args, **kwargs)

    def start(self) -> None:
        self.current_context.run(super().start)
</code></pre>
<ol start="3">
<li>Using <code>contextmanager</code> decorator from <strong>contextlib</strong> on <code>start()</code> of my class:</li>
</ol>
<pre><code>import threading
import contextvars
from contextlib import contextmanager

class CustomThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        self.current_context = contextvars.copy_context()
        super().__init__(*args, **kwargs)

    @contextmanager
    def start(self) -> None:
        super().start()
</code></pre>
<p>But none of these worked.</p>
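<p>For completeness, here is a variant I sketched afterwards that does propagate the context in a minimal test: override <code>run()</code> instead of <code>start()</code>, so that the copied context is entered inside the child thread (the class and variable names here are my own):</p>

```python
import contextvars
import threading

class ContextThread(threading.Thread):
    """Thread that runs its target inside a copy of the creating thread's context."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Snapshot the parent thread's context at construction time.
        self._parent_context = contextvars.copy_context()

    def run(self) -> None:
        # run() executes in the child thread; entering the copied context here
        # makes the parent's ContextVar values visible to the target.
        self._parent_context.run(super().run)

email = contextvars.ContextVar("email")
email.set("example@email.com")
seen = {}

def worker():
    seen["email"] = email.get("<missing>")

t = ContextThread(target=worker)
t.start()
t.join()
print(seen["email"])  # example@email.com
```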
<p>Also, I am looking for a custom implementation of <code>ThreadPoolExecutor</code> from the <code>concurrent.futures</code> module that does the same.</p>
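<p>For the executor, the direction I am considering (untested beyond this toy case; the class name is mine) is to wrap each submitted callable in a copy of the submitting thread's context:</p>

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

class ContextPropagatingExecutor(ThreadPoolExecutor):
    """Runs each submitted callable inside a copy of the submitter's context."""

    def submit(self, fn, /, *args, **kwargs):
        ctx = contextvars.copy_context()
        # Context.run(fn, *args, **kwargs) calls fn with ctx active.
        return super().submit(ctx.run, fn, *args, **kwargs)

request_id = contextvars.ContextVar("request_id")
request_id.set("abc-123")

with ContextPropagatingExecutor(max_workers=2) as pool:
    value = pool.submit(lambda: request_id.get("<missing>")).result()
print(value)  # abc-123
```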
|
<python><multithreading><flask><threadpool><python-contextvars>
|
2023-06-22 15:19:28
| 1
| 526
|
Abhay
|
76,533,387
| 6,087,667
|
Group with overlapping index and apply function
|
<p>I have a dataframe with datetime index:</p>
<pre><code>import pandas as pd
import numpy as np
i = pd.date_range('1999-12-31 23:00', '2000-01-10', freq='2H')
x = pd.DataFrame(index=i, data = np.random.randint(0,10, (len(i),2)), columns=['a','b'])
</code></pre>
<p>How can I apply a function to groups that contain all hours in a day but also include the last hour of the previous day? E.g. if the function is <code>sum</code> the first group result should be this:</p>
<pre><code>x['1999-12-31 23:00':'2000-01-01 23:00'].sum()
</code></pre>
<p>Second group:</p>
<pre><code>x['2000-01-01 23:00':'2000-01-02 23:00'].sum()
</code></pre>
<p>etc</p>
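<p>For reference, the brute-force version I can write myself slices an inclusive 25-hour window per day (so consecutive windows share the 23:00 sample); I am hoping there is something more idiomatic than this:</p>

```python
import numpy as np
import pandas as pd

i = pd.date_range('1999-12-31 23:00', '2000-01-10', freq=pd.Timedelta(hours=2))
x = pd.DataFrame(index=i, data=np.random.randint(0, 10, (len(i), 2)), columns=['a', 'b'])

day = pd.Timedelta(hours=24)
# One window per day: 23:00 of the previous day through 23:00 of this day,
# both ends inclusive (DatetimeIndex .loc slicing is inclusive).
starts = pd.date_range('1999-12-31 23:00', x.index[-1] - day, freq=pd.Timedelta(days=1))
out = pd.DataFrame({s + pd.Timedelta(hours=1): x.loc[s:s + day].sum() for s in starts}).T
```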
|
<python><pandas><group-by>
|
2023-06-22 15:18:31
| 1
| 571
|
guyguyguy12345
|
76,533,384
| 8,488,113
|
Docker Alpine build fails on mysqlclient installation with error: Exception: Can not find valid pkg-config name
|
<p>I'm encountering a problem when building a Docker image using a Python-based Dockerfile. I'm trying to use the mysqlclient library (version 2.2.0) and Django (version 4.2.2). Here is my Dockerfile:</p>
<pre><code>FROM python:3.11-alpine
WORKDIR /usr/src/app
COPY requirements.txt .
RUN apk add --no-cache gcc musl-dev mariadb-connector-c-dev && \
pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
</code></pre>
<p>The problem arises when the Docker build process reaches the point of installing the mysqlclient package. I get the following error: <em>Exception: Can not find valid pkg-config name</em>.
To address this issue, I tried adding <code>pkgconfig</code> to the <code>apk add</code> command; unfortunately, this didn't help and the same error persists.</p>
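<p>In case it matters, this is the variant I plan to try next. mysqlclient's README mentions that setting <code>MYSQLCLIENT_CFLAGS</code>/<code>MYSQLCLIENT_LDFLAGS</code> bypasses the pkg-config lookup; this is untested, and the include/library paths may need adjusting:</p>

```dockerfile
FROM python:3.11-alpine

WORKDIR /usr/src/app
COPY requirements.txt .

# pkgconfig provides pkg-config; the two env vars bypass it entirely
# in case mariadb-connector-c-dev still isn't detected.
RUN apk add --no-cache gcc musl-dev pkgconfig mariadb-connector-c-dev && \
    MYSQLCLIENT_CFLAGS="-I/usr/include/mysql" \
    MYSQLCLIENT_LDFLAGS="-L/usr/lib -lmariadb" \
    pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```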
<p>I would appreciate any guidance on how to resolve this issue.</p>
<p>Thank you in advance.</p>
|
<python><docker><alpine-linux><mysql-connector-python>
|
2023-06-22 15:18:10
| 11
| 587
|
IdanB
|
76,533,289
| 13,049,379
|
How to compute masked VGG loss
|
<p>I have <code>gt</code> and <code>pred</code> images and wish to compute VGG loss only on a subset of pixels given by <code>mask</code>. <code>mask</code> has same spatial resolution as <code>gt</code> but the ON pixels are not in any regular geometry. How to achieve this? Please note that loss is computed over a deeper layer of VGG so the activation maps resolution will be smaller than <code>gt</code>.</p>
<p>One possible solution which I can think of is,</p>
<pre><code>gt[~mask] = const
pred[~mask] = const
torch.nn.functional.mse(VGG(gt), VGG(pred))
</code></pre>
<p>I believe that since the non-masked pixels have been artificially set to the same value, the gradients due to a mismatch on these pixels will be 0.</p>
<p>Is this the correct way to compute a masked VGG loss?</p>
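<p>The alternative I am weighing is to mask in feature space instead: nearest-neighbour downsample the mask to the activation resolution and weight the squared error. Sketched here with NumPy arrays standing in for the VGG activations (the helper name is mine):</p>

```python
import numpy as np

def masked_feature_mse(feat_gt, feat_pred, mask):
    """MSE over feature maps of shape (C, h, w), restricted to the ON pixels
    of an image-resolution boolean mask of shape (H, W)."""
    c, h, w = feat_gt.shape
    H, W = mask.shape
    ys = (np.arange(h) * H) // h          # nearest-neighbour row indices
    xs = (np.arange(w) * W) // w          # nearest-neighbour column indices
    m = mask[np.ix_(ys, xs)].astype(feat_gt.dtype)   # (h, w) downsampled mask
    sq = (feat_gt - feat_pred) ** 2 * m              # zero outside the mask
    return sq.sum() / (c * m.sum() + 1e-8)

feat_gt = np.ones((2, 4, 4))
feat_pred = np.zeros((2, 4, 4))
mask = np.zeros((8, 8), dtype=bool)
mask[:4, :] = True                        # only the top half is ON
print(masked_feature_mse(feat_gt, feat_pred, mask))  # ~1.0
```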
|
<python><deep-learning><pytorch><conv-neural-network>
|
2023-06-22 15:07:09
| 1
| 1,433
|
Mohit Lamba
|
76,533,223
| 7,347,925
|
Add new rows from a list if the element does not exist, without duplicating part of the existing columns
|
<p>I have one DataFrame with four columns: <code>lon, lat, datetime, others</code>. In the last two columns, some rows are initialized with values and some are NaN.</p>
<p>Then I generate a datetime list that contains one date sublist per location; each sublist has at least as many elements as the existing datetimes for that location.</p>
<p>The goal is to add a new row for each new date of a location, but without copying the other columns, like <code>others</code>.</p>
<p>Here's an example:</p>
<pre><code># create DataFrame
d = {'lon': [100, 100, 120], 'lat': [50, 50, 60]}
df = pd.DataFrame(data=d)
df = pd.concat([df, pd.DataFrame(columns=['datetime', 'others'])])
df.loc[0, 'datetime'] = '2000-09-09'
df.loc[1, 'datetime'] = '1999-09-09'
df.loc[0, 'others'] = 'notes1'
df.loc[1, 'others'] = 'notes2'
# datetime element is same for same location (lon, lat)
# because we have two location, I make two sublists for each location
datetimes = [['2023-05-17', '2023-05-02', '2023-04-28', '2000-09-09', '1999-09-09'],
['2023-06-01'],
]
</code></pre>
<p>The DataFrame looks like this:</p>
<pre><code>lon lat datetime others
100.0 50.0 2000-09-09 notes1
100.0 50.0 1999-09-09 notes2
120.0 60.0 NaN NaN
</code></pre>
<p>For the first location (lon=100, lat=50), I have three new dates, and the second location has one new date.</p>
<p>The result should be</p>
<pre><code>lon lat datetime others
100.0 50.0 2023-05-17 NaN
100.0 50.0 2023-05-02 NaN
100.0 50.0 2023-04-28 NaN
100.0 50.0 2000-09-09 notes1
100.0 50.0 1999-09-09 notes2
120.0 60.0 2023-06-01 NaN
</code></pre>
<p>I have tried to use <code>explode</code>, but it copies all existing columns.</p>
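<p>For context, the manual loop below (variable names mine) produces the desired frame; I am hoping <code>explode</code> or a merge can replace it:</p>

```python
import pandas as pd

d = {'lon': [100, 100, 120], 'lat': [50, 50, 60]}
df = pd.DataFrame(data=d)
df['datetime'] = ['2000-09-09', '1999-09-09', None]
df['others'] = ['notes1', 'notes2', None]

datetimes = [['2023-05-17', '2023-05-02', '2023-04-28', '2000-09-09', '1999-09-09'],
             ['2023-06-01']]

# Build one new row per (location, new date); 'others' stays NaN for new rows.
new_rows = []
for (lon, lat), dates in zip(df[['lon', 'lat']].drop_duplicates().to_numpy(), datetimes):
    at_loc = (df['lon'] == lon) & (df['lat'] == lat)
    existing = set(df.loc[at_loc, 'datetime'].dropna())
    new_rows += [{'lon': lon, 'lat': lat, 'datetime': dt}
                 for dt in dates if dt not in existing]

out = (pd.concat([df.dropna(subset=['datetime']), pd.DataFrame(new_rows)])
         .sort_values(['lon', 'datetime'], ascending=[True, False])
         .reset_index(drop=True))
print(out)
```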
|
<python><pandas><dataframe><datetime>
|
2023-06-22 14:58:34
| 2
| 1,039
|
zxdawn
|
76,533,205
| 11,064,604
|
Memory leak for Optuna trial with multiprocessing
|
<h3>The Background</h3>
<p>I have a machine learning pipeline that consists of <code>N</code> boosted models (LGBMRegressor), each with identical hyperparameters. Each of the <code>N</code> LGBMRegressors is trained on a separate chunk of data. My current workstation has a lot of cores, so I multiprocess each regressor on a separate thread.</p>
<h3>The Problem</h3>
<p>I am trying to tune the parameters that go into the LGBMRegressors through optuna. When I use the multiprocessing inside an optuna trial, it has a memory leak and I run out of memory. <strong>Can I use multiprocessing inside an optuna trial and not run into a memory leak?</strong></p>
<h3>Minimal Reproducible Example</h3>
<pre><code>import optuna
import pandas as pd
import numpy as np
import multiprocessing
from lightgbm import LGBMRegressor

N = 500
n_cores = 30
rows_per_N = 1000
cols_per_N = 50
data = [[np.random.normal(size=(rows_per_N, cols_per_N)),
         np.random.normal(size=(rows_per_N,))] for i in range(N)]


def get_metric(data):
    (X, y), params = data
    model = LGBMRegressor(**params)
    model.fit(X, y)
    return np.abs(model.predict(X) - y)


def objective(trial):
    param = {
        "n_jobs": 1,
        "num_leaves": trial.suggest_int("num_leaves", 2, 256)
    }
    lgb_params = [param for _ in range(N)]
    p = multiprocessing.Pool(n_cores)
    results = p.map(get_metric, zip(data, lgb_params))
    return np.mean(results)


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
</code></pre>
<h3>Alternative Solutions</h3>
<p>I have written the above code as a <code>for</code> loop and it has not had memory issues. The drawback here is that this is 30x slower than the multiprocessed solution.</p>
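<p>One thing I noticed while writing this up: the pool in <code>objective()</code> is never closed, so every trial leaves a full pool of worker processes behind. If that is the whole story, the fix may just be a context manager around the pool (sketched with a stand-in metric instead of LightGBM):</p>

```python
import multiprocessing

import numpy as np

def get_metric_stub(chunk):
    # Stand-in for the LightGBM fit/predict step in the real objective.
    return float(np.sum(chunk))

def objective_once(data, n_cores):
    # The with-block calls terminate()/join() on exit, releasing the worker
    # processes (and their memory) at the end of every trial.
    with multiprocessing.Pool(n_cores) as pool:
        results = pool.map(get_metric_stub, data)
    return float(np.mean(results))

print(objective_once([np.ones(10) for _ in range(8)], 2))  # 10.0
```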
|
<python><multiprocessing><optuna>
|
2023-06-22 14:56:13
| 1
| 353
|
Ottpocket
|
76,533,178
| 5,554,763
|
.corr results in ValueError: could not convert string to float
|
<p>I'm getting a very strange error when trying to follow this exercise on using the <code>corr()</code> method in Python:</p>
<p><a href="https://www.geeksforgeeks.org/python-pandas-dataframe-corr/" rel="noreferrer">https://www.geeksforgeeks.org/python-pandas-dataframe-corr/</a></p>
<p>Specifically, when I try to run the following code: <code>df.corr(method ='pearson')</code></p>
<p>The error message offers no clue. I thought the corr() method was supposed to automatically ignore strings and empty values etc.</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
df.corr(method='pearson')
File "C:\Users\d.o\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\frame.py", line 10059, in corr
mat = data.to_numpy(dtype=float, na_value=np.nan, copy=False)
File "C:\Users\d.o\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\frame.py", line 1838, in to_numpy
result = self._mgr.as_array(dtype=dtype, copy=copy, na_value=na_value)
File "C:\Users\d.o\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\internals\managers.py", line 1732, in as_array
arr = self._interleave(dtype=dtype, na_value=na_value)
File "C:\Users\d.o\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\internals\managers.py", line 1794, in _interleave
result[rl.indexer] = arr
ValueError: could not convert string to float: 'Avery Bradley'
</code></pre>
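<p>For what it's worth, restricting the frame to numeric columns first avoids the error on a toy frame (newer pandas also accepts <code>df.corr(numeric_only=True)</code>), though I'd like to understand why <code>corr()</code> no longer ignores strings by itself:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Name': ['Avery Bradley', 'Jae Crowder'],
    'Age': [25, 26],
    'Weight': [180, 235],
})

# Drop non-numeric columns before correlating.
corr = df.select_dtypes(include='number').corr(method='pearson')
print(corr.loc['Age', 'Weight'])  # 1.0 (only two points, so perfectly correlated)
```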
|
<python><pandas><correlation><valueerror>
|
2023-06-22 14:53:50
| 3
| 543
|
Shaye
|
76,533,148
| 2,979,576
|
DATE_FORMAT assumes a different week origin in Hive compared to Spark SQL
|
<p>I have a Hive query that I want to move across to a PySpark script - part of this query involves converting a date column to week of the year.</p>
<p>In both cases I do this with the following line in the select part of an SQL statement. In Hive I run the statement directly, from PySpark, I run it using spark.sql(statement)</p>
<pre><code>DATE_FORMAT(from_unixtime(unix_timestamp(dt, 'yyyyMMdd')), 'Y-ww')
</code></pre>
<p>Where dt contains the datetime in the yyyyMMdd format.</p>
<p>I want the first day of the week to be taken as Monday. This works fine in Hive:</p>
<pre><code>hive> SELECT DATE_FORMAT(from_unixtime(unix_timestamp('20230611, 'yyyyMMdd')), 'Y-ww');
> 2023-23
</code></pre>
<p>But in Spark, it takes Sunday as the first day of the week</p>
<pre><code>spark.sql("SELECT DATE_FORMAT(from_unixtime(unix_timestamp('20230611, 'yyyyMMdd')), 'Y-ww')").show()
2023-24
</code></pre>
<p>Is there any way I can get the Spark SQL behaviour to match Hive, with weeks starting on a Monday?</p>
<p>The $LANG environment variable on the machine is set to en_GB.UTF-8</p>
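<p>To pin down the expected value: plain Python's ISO calendar (weeks starting on Monday) agrees with the Hive output for this date, which is the behaviour I want from Spark:</p>

```python
from datetime import datetime

d = datetime.strptime("20230611", "%Y%m%d").date()
iso_year, iso_week, iso_weekday = d.isocalendar()
# 2023-06-11 is a Sunday, the last day of ISO week 23.
print(f"{iso_year}-{iso_week:02d}")  # 2023-23
```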
|
<python><date><pyspark><hive>
|
2023-06-22 14:49:39
| 2
| 3,461
|
James
|
76,533,023
| 835,523
|
How to make Boto3 Silent Failures loud?
|
<p>I have the following</p>
<pre><code>print("A")
client = boto3.client("events")
print("B")
</code></pre>
<p>A is printed, but B is not. I suspected that boto3 was silently failing so I added the following</p>
<pre><code>try:
    client = boto3.client("events")
except Exception as e:
    print(e)
</code></pre>
<p>And it turned out that there was a "You must specify a region" error. Which is all fine and good, except that I feel like I shouldn't have to explicitly catch and print exceptions. I'd expect them to end up getting printed automatically along with a nice stack trace when my application crashes.</p>
<p>Is there any way to get that behavior out of boto3?</p>
|
<python><boto3>
|
2023-06-22 14:35:01
| 0
| 4,741
|
Steve
|
76,532,998
| 8,321,705
|
how to format the y axis for a timedelta object in plotly express
|
<p>I've seen several related questions, but none of the solutions so far seem to solve my problem. The problem is that instead of e.g. "03:00:00" I get "40T" or similar as the label on the y axis for my timedelta object. It is nicely formatted in pandas, though: <code>0 days 03:00:00</code>.</p>
<p>The output format is either unusable, e.g. when converting to unsorted strings, or the output doesn't change at all.</p>
<p>I would like an easily readable format on the y axis instead of the seconds (?). It will most often be durations of hours and minutes; in a few cases it might be longer than a day (something like 40:xx:xx meaning 40 hours would be totally fine).</p>
<pre><code>import pandas as pd

data = [['tom', 10, "2023-06-21 06:23:55+00:00", "2023-06-21 09:23:55+00:00"],
        ['nick', 15, "2023-06-20 06:23:55+00:00", "2023-06-21 06:23:55+00:00"],
        ['juli', 14, "2023-06-21 06:23:50+00:00", "2023-06-21 06:23:55+00:00"]]
df = pd.DataFrame(data, columns=['name', 'age', "start", "stop"])
df["start"] = pd.to_datetime(df["start"])
df["stop"] = pd.to_datetime(df["stop"])
df["duration"] = df["stop"] - df["start"]
df["duration"]
#### output
0   0 days 03:00:00
1   1 days 00:00:00
2   0 days 00:00:05
Name: duration, dtype: timedelta64[ns]

import plotly.express as px
import plotly.graph_objects as go

fig = px.scatter(
    df,
    x=df["age"].sort_values(ascending=False),
    y="duration",  # I guess it shows the seconds
    # y=pd.to_timedelta(df.duration, unit='h'),  # same format as before
    # y=df["duration"].sort_values(ascending=True).dt.to_pytimedelta().astype(str),  # fixed labels with equal distance between marks, regardless of numerical difference
    color="name",
)
figure = go.Figure(data=fig)
# figure.update_layout(yaxis_tickformat='%H:%M:%S')  # adds a lot of zeroes?
# figure.update_layout(yaxis_tickformat="%H:%M:%S.%f")
figure.show()
</code></pre>
<p><a href="https://i.sstatic.net/Evih0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Evih0.png" alt="plot" /></a></p>
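<p>My current workaround idea is to plot the duration as total seconds and format the tick labels myself via <code>tickvals</code>/<code>ticktext</code>; the formatting helper (name mine) would be:</p>

```python
import pandas as pd

def td_to_hms(td: pd.Timedelta) -> str:
    """Format a timedelta as HH:MM:SS, letting the hours exceed 24."""
    total = int(td.total_seconds())
    hours, rem = divmod(total, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(td_to_hms(pd.Timedelta(hours=3)))           # 03:00:00
print(td_to_hms(pd.Timedelta(days=1, hours=16)))  # 40:00:00
```

<p>Then something like <code>figure.update_yaxes(tickvals=df["duration"].dt.total_seconds(), ticktext=df["duration"].map(td_to_hms))</code> with <code>y=df["duration"].dt.total_seconds()</code>, but I haven't verified this end to end.</p>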
|
<python><plotly>
|
2023-06-22 14:31:23
| 1
| 633
|
crazysantaclaus
|
76,532,891
| 20,339,407
|
I have 2 lines plotted with plotly-express. How to specify a different pattern for each one, respectively "dash" and "solid"?
|
<p>I plot 2 lines using the <code>plotly-express</code> Python library.
I specify a different color for each one by using the <code>color_discrete_sequence</code> argument.
That works well; the color part is OK.</p>
<p>Now I'd like to specify a different dash "pattern" for each line.
Is it possible to do that in a convenient way, as for the colors, just by providing a list of patterns?</p>
<p>I've tried the <code>line_dash_sequence</code> argument, which I expected to be the pattern counterpart of <code>color_discrete_sequence</code>.
Unfortunately it doesn't work: both lines use the first pattern.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import plotly.express as px
from datetime import time

df = pd.DataFrame(
    {
        "timestamp": [time(x) for x in range(24)],
        "power1": [50 * x for x in range(24)],
        "power2": [52 * x for x in range(24)],
    }
)

fig = px.line(
    df,
    x="timestamp",
    y=["power1", "power2"],
    color_discrete_sequence=["orange", "grey"],
    line_dash_sequence=["dash", "solid"],
)

fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/Xqss1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xqss1.png" alt="enter image description here" /></a>
I guess I'm missing something.
Thanks for your help.</p>
|
<python><plotly><plotly-express>
|
2023-06-22 14:20:10
| 2
| 1,620
|
0x0fba
|
76,532,858
| 15,358,800
|
Regex failing to extract last line
|
<p>I have a pattern like this in a text file:</p>
<p>Textfile.txt</p>
<pre><code>----------------------
some text, Some text
some text, Some text
Line # 1
some text
Line # 2
some text
'
;
'
Line #n-1
some text
Line # n
some text
[some list]
</code></pre>
<p>My aim is to extract from</p>
<pre><code>Line # 1
some text
Line # 2
some text
'
;
'
Line #n-1
some text
Line # n
some text
</code></pre>
<p>I used regex like so <a href="https://regexr.com/78o64" rel="nofollow noreferrer">https://regexr.com/78o64</a></p>
<pre><code>(Line #\s*\d+)\n(.*?)(?=\nLine #\s*\d|$)
</code></pre>
<p>I am able to extract my data, but it is failing to parse the last line (i.e. <strong>Line # n</strong>, which is <strong>Line #69</strong> in the example at the link provided above). Am I doing anything wrong? Any suggestions would be appreciated.</p>
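<p>A note on what I've tried since: anchoring with <code>\Z</code> (absolute end of string) in addition to the lookahead, and running with <code>re.S</code>, does reach the last block on a small inline sample, though I still need an extra alternative to stop before the trailing list:</p>

```python
import re

text = """----------------------
some text, Some text
Line # 1
alpha
Line # 2
beta
[some list]"""

# \Z matches only at the very end of the string; re.S lets .*? cross newlines.
pattern = re.compile(r"(Line #\s*\d+)\n(.*?)(?=\nLine #\s*\d|\n\[|\Z)", re.S)
print(pattern.findall(text))  # [('Line # 1', 'alpha'), ('Line # 2', 'beta')]
```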
|
<python><regex>
|
2023-06-22 14:17:52
| 3
| 4,891
|
Bhargav
|
76,532,816
| 9,318,372
|
Type hint extra attributes (not fields)
|
<p>This is <em>almost</em> a duplicate of <a href="https://stackoverflow.com/questions/73560307">Exclude some attributes from fields method of dataclass</a>.</p>
<p>I have a <code>dataclass</code> that needs some extra attributes, which should not be considered as fields, that are derived from the fields during <code>__post_init__</code>. (In particular, they are not <code>InitVar</code>'s!)</p>
<p>Since I type check my code, I want to add type hints for these attributes:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class MyDataClass:
    field0: int
    field1: int
    _: KW_ONLY
    fieldN: int

    # internal/derived attributes that shouldn't be considered as _fields_
    attr0: int  # we still want type hints for obvious reasons!
    attr1: int  # These are not InitVars!
</code></pre>
<p>However, this makes them fields. I tried adding single/double underscores, which didn't work. I considered making these <code>@property</code> attributes, but then where would you type hint the necessary <code>self._attr0</code>? I guess a <code>@cached_property</code> could work, but it's an unsatisfying answer.</p>
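<p>One pattern I am currently testing: annotate at the assignment inside <code>__post_init__</code>. Type checkers pick up the hint on the instance attribute, but <code>@dataclass</code> only reads class-level annotations, so no field is registered:</p>

```python
from dataclasses import dataclass, fields

@dataclass
class MyDataClass:
    field0: int
    field1: int

    def __post_init__(self) -> None:
        # Annotated instance attribute: visible to mypy/pyright,
        # invisible to dataclasses' field machinery.
        self.attr0: int = self.field0 + self.field1

obj = MyDataClass(2, 3)
print([f.name for f in fields(obj)])  # ['field0', 'field1']
print(obj.attr0)  # 5
```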
|
<python><mypy><python-typing><python-dataclasses>
|
2023-06-22 14:12:47
| 2
| 1,721
|
Hyperplane
|
76,532,551
| 18,551,983
|
Generate new dataframes based on an old dataframe and store them in a list
|
<p>My original dataframe is shown below:</p>
<pre><code> H1 H2 H3 H4 H5
A1 1 0 0 1 0
A2 1 0 0 1 0
A3 1 1 0 0 0
A4 1 0 1 0 0
A5 1 0 1 0 1
A6 1 1 1 0 0
A7 1 0 1 0 0
A8 1 0 1 0 1
A9 1 1 1 0 0
</code></pre>
<p>I have two dataframe that are stored in the form of list</p>
<pre><code>#list_of_dataframes
[
H1 H2 H3 H4 H5
A2 1 0 0 1 0
A3 1 1 0 0 0
A4 1 0 1 0 0
A5 1 0 1 0 1
A6 1 1 1 0 0,
H1 H2 H3 H4 H5
A1 1 0 0 1 0
A3 1 1 0 0 0
A7 1 0 1 0 0
A8 1 0 1 0 1
A9 1 1 1 0 0]
</code></pre>
<p>I want to generate output dataframes that are also stored in a list. Each output dataframe should keep all the row and column indices of the original dataframe. If a row is present in the corresponding dataframe from the input list, every value in that row should be 1; otherwise it should be 0.</p>
<pre><code> # list of Output dataframe
[
H1 H2 H3 H4 H5
A1 0 0 0 0 0
A2 1 1 1 1 1
A3 1 1 1 1 1
A4 1 1 1 1 1
A5 1 1 1 1 1
A6 1 1 1 1 1
A7 0 0 0 0 0
A8 0 0 0 0 0
A9 0 0 0 0 0,
H1 H2 H3 H4 H5
A1 1 1 1 1 1
A2 0 0 0 0 0
A3 1 1 1 1 1
A4 0 0 0 0 0
A5 0 0 0 0 0
A6 0 0 0 0 0
A7 1 1 1 1 1
A8 1 1 1 1 1
A9 1 1 1 1 1]
</code></pre>
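<p>The closest I have gotten on my own is broadcasting an index-membership mask over the original frame's shape (posting it in case it helps frame the question):</p>

```python
import numpy as np
import pandas as pd

idx = [f'A{i}' for i in range(1, 10)]
cols = [f'H{i}' for i in range(1, 6)]
df = pd.DataFrame(1, index=idx, columns=cols)  # stand-in for the original frame

list_of_dataframes = [df.loc[['A2', 'A3', 'A4', 'A5', 'A6']],
                      df.loc[['A1', 'A3', 'A7', 'A8', 'A9']]]

# 1 across a whole row when that row's index appears in the sub-frame, else 0.
output = [pd.DataFrame(np.broadcast_to(df.index.isin(sub.index)[:, None],
                                       df.shape).astype(int),
                       index=df.index, columns=df.columns)
          for sub in list_of_dataframes]
print(output[0].loc['A2'].tolist())  # [1, 1, 1, 1, 1]
print(output[1].loc['A2'].tolist())  # [0, 0, 0, 0, 0]
```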
|
<python><pandas><dataframe><list>
|
2023-06-22 13:41:35
| 1
| 343
|
Noorulain Islam
|
76,532,526
| 3,448,136
|
How to mock a base class entirely when testing the derived class in Python
|
<p>I want to test a method of a derived class in Python. The base class <code>__init__</code> function is not well-implemented for testing its derived classes, for reasons too complicated to explain. I would like to just mock the entire base class, so that the derived class inherits from the mock instead of the annoying base class.</p>
<p>Here is a torn-down example of the classes:</p>
<pre><code>class ComplexAnnoyingBaseClass:
    def __init__(self):
        print("do not execute this class in the unit test")

    def get_value_x(self):
        return 4

    def get_value_y(self):
        return 5


def calculate_value(x, y):
    return x * y


class MyDerivedClass(ComplexAnnoyingBaseClass):
    def get_value(self, min_x, min_y):
        value = 0
        x = self.get_value_x()
        y = self.get_value_y()
        if x > min_x and y > min_y:
            value = calculate_value(x, y)
        return value
</code></pre>
<p>And a simple example test:</p>
<pre><code>import pytest
from unittest.mock import patch, Mock

from server.example_code import (
    ComplexAnnoyingBaseClass,
    MyDerivedClass
)


def test_get_value():
    # Patch the base class with a mock object
    with patch('server.example_code.ComplexAnnoyingBaseClass',
               spec=MyDerivedClass) as base_class_mock:
        # Create a mock instance of the derived class
        my_instance = MyDerivedClass()
        # Mock the get_value_x() and get_value_y() methods
        base_class_mock.get_value_x.return_value = 2
        base_class_mock.get_value_y.return_value = 3
        # Test your code here
        result = my_instance.get_value(1, 1)
        assert result == 6
</code></pre>
<p>When the test ran, it printed the "do not execute" message. Also, the call to <code>my_instance.get_value(1,1)</code> returned 20.</p>
<p>This variation on the test also failed in the exact same ways:</p>
<pre><code>def test_get_value():
    # Create a mock instance of ComplexAnnoyingBaseClass
    base_class_mock = Mock()
    base_class_mock.get_value_x.return_value = 2
    base_class_mock.get_value_y.return_value = 3

    # Patch the base class with the mock object
    with patch('server.example_code.ComplexAnnoyingBaseClass',
               return_value=base_class_mock):
        # Create an instance of MyDerivedClass
        my_instance = MyDerivedClass()
        # Test your code here
        result = my_instance.get_value(1, 1)
        assert result == 6
</code></pre>
<p>How do I mock the base class entirely so that its <code>__init__</code> function is not called during the test?</p>
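<p>Since posting, I found that patching the inherited methods on the base class with <code>patch.object</code> (rather than replacing the class object, which the already-defined <code>MyDerivedClass</code> never re-reads) behaves as I wanted on this reduced example:</p>

```python
from unittest.mock import patch

class ComplexAnnoyingBaseClass:
    def __init__(self):
        print("do not execute this class in the unit test")

    def get_value_x(self):
        return 4

    def get_value_y(self):
        return 5

def calculate_value(x, y):
    return x * y

class MyDerivedClass(ComplexAnnoyingBaseClass):
    def get_value(self, min_x, min_y):
        value = 0
        x = self.get_value_x()
        y = self.get_value_y()
        if x > min_x and y > min_y:
            value = calculate_value(x, y)
        return value

def test_get_value():
    # Stub __init__ and the getters on the base class itself; the derived
    # class inherits the patched attributes for the duration of the block.
    with patch.object(ComplexAnnoyingBaseClass, "__init__", return_value=None), \
         patch.object(ComplexAnnoyingBaseClass, "get_value_x", return_value=2), \
         patch.object(ComplexAnnoyingBaseClass, "get_value_y", return_value=3):
        assert MyDerivedClass().get_value(1, 1) == 6

test_get_value()
```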
|
<python><unit-testing><pytest>
|
2023-06-22 13:38:34
| 1
| 2,490
|
Lee Jenkins
|
76,532,429
| 12,575,557
|
Type hint the googleapiclient.discovery.build return value
|
<p>I create the Google API <code>Resource</code> class for a specific type (in this case <code>blogger</code>).</p>
<pre class="lang-py prettyprint-override"><code>from googleapiclient.discovery import Resource, build

def get_google_service(api_type) -> Resource:
    credentials = ...
    return build(api_type, 'v3', credentials=credentials)

def blog_service():
    return get_google_service('blogger')

def list_blogs():
    return blog_service().blogs()
</code></pre>
<p>The problem arises when using the <code>list_blogs</code> function.
Since I am providing a specific service name, I know that the return value of <code>blog_service</code> has a <code>blogs</code> method, but my IDE doesn't recognize it.
Is there a way to annotate the <code>blog_service</code> function (or any other part of the code) to help my IDE recognize the available methods like <code>blogs</code>?</p>
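<p>The workaround I am experimenting with is a structural <code>Protocol</code> describing just the methods I actually call (the Protocol class is my own invention, not part of googleapiclient, and the stub below only stands in for the dynamically built <code>Resource</code>):</p>

```python
from typing import Any, Protocol

class BloggerResource(Protocol):
    """Structural type for the slice of the blogger Resource that we use."""
    def blogs(self) -> Any: ...

# Stub standing in for the object googleapiclient builds at runtime.
class _FakeBloggerResource:
    def blogs(self) -> Any:
        return "blogs-collection"

def blog_service() -> BloggerResource:
    # In the real code this would be: return get_google_service('blogger')
    return _FakeBloggerResource()

print(blog_service().blogs())  # blogs-collection
```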
|
<python><python-typing><google-api-python-client>
|
2023-06-22 13:28:53
| 1
| 950
|
Jorge Luis
|
76,532,427
| 20,612,566
|
Custom django_filters (how to sum filters result)
|
<p>I have an analytics model:</p>
<pre><code>class StoreAnalytics(models.Model):
    class Meta:
        verbose_name_plural = "store's analytics"
        verbose_name = "store's analytics"

    store = models.ForeignKey(Store, on_delete=models.CASCADE, null=True, related_name="store_id")
    accruals_sum = models.DecimalField(
        max_digits=10, decimal_places=2, default=Decimal(0), verbose_name="Accruals sum"
    )
    taxes_count = models.DecimalField(max_digits=7, decimal_places=2, default=Decimal(0), verbose_name="Taxes sum")
    sold_products = models.PositiveSmallIntegerField(default=0, verbose_name="Number of products sold")

    def __str__(self):
        return f"{self.store.pk}"
</code></pre>
<p>My view:</p>
<pre><code>class StoreAnalyticsApi(ListCreateAPIView):
    permission_classes = (IsAuthenticated,)
    http_method_names = ["get"]
    serializer_class = StoreAnalyticsSerializer
    filter_backends = (DjangoFilterBackend,)
    filterset_class = StoreAnalyticsFilter

    def get_queryset(self):
        queryset = StoreAnalytics.objects.filter(store__user_id=self.request.user.pk).aggregate(
            accruals_sum=Sum(F("accruals_sum")),
            sold_products=Sum(F("sold_products")),
            taxes_summ=Sum(F("taxes_count")),
        )
        # {'accruals_sum': Decimal('10045'), 'sold_products': 68, 'taxes_summ': Decimal('602.700000000000')}
        return queryset
</code></pre>
<p>By default, the GET method <code>api/v1/analytics/</code> should return the sum of sold products, sum of taxes and sum of accruals for all existing user's stores. If a user wants to view statistics only for chosen stores, then he selects the store ids and he should get the total analytics for the selected stores.</p>
<p>What should my filters.py look like to achieve that aim?</p>
<p>My filters:</p>
<pre><code>from dashboard.models import StoreAnalytics
from django_filters import rest_framework as filters
from django.db.models import Sum, F


class ListFilter(filters.Filter):
    def filter(self, qs, value):
        if not value:
            return qs
        self.lookup_expr = "in"
        values = value.split(",")
        return super(ListFilter, self).filter(qs, values)


class StoreAnalyticsFilter(filters.FilterSet):
    stores_ids = ListFilter(field_name="store_id")

    class Meta:
        model = StoreAnalytics
        fields = [
            "stores_ids",
        ]

    def filter(self, queryset, name, value):
        if value is not None:
            queryset = StoreAnalytics.objects.filter(store__user_id=self.request.user.pk, store_id__in=stores_ids)
        return queryset
</code></pre>
<p>I need something like this:</p>
<p><code>api/v1/analytics/</code> => sums of the fields sold_products, taxes_count, accruals_sum for ALL of the user's stores.
<code>api/v1/analytics/?stores_ids=1,2</code> => sums of those fields for the user's stores with pk 1 and 2.</p>
|
<python><django><django-rest-framework><django-filter><django-filters>
|
2023-06-22 13:28:46
| 2
| 391
|
Iren E
|
76,532,329
| 1,039,462
|
Using a shared file in multiple ansible modules
|
<p>I am currently developing a number of Ansible modules using Python.
My directory structure looks like this:</p>
<pre><code>/
  playbook.yml
  library/
    module1.py
    module2.py
</code></pre>
<p>Using this, I can run the playbook (which uses my modules) via</p>
<pre><code>ansible-playbook playbook.yml
</code></pre>
<p>Now I want to add a code file shared between my modules.
So I added the file <code>library/shared.py</code> and added the line</p>
<pre><code>from shared import my_shared_func
</code></pre>
<p>to all of my modules, so that I can call <code>my_shared_func()</code> in all modules.</p>
<p>The problem is that, when calling <code>ansible-playbook</code> again, this causes the error message</p>
<pre><code>ModuleNotFoundError: No module named 'shared'
</code></pre>
<p>I am assuming that Ansible is not copying the shared file to the execution host.</p>
<p>Can my problem be solved, or do all Ansible modules have to be completely self-contained in a single file?</p>
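<p>As an aside (not part of the original question): Ansible's documented mechanism for sharing code between custom modules is a <code>module_utils</code> directory next to the playbook; files placed there are shipped to the target host along with the module, and are imported as <code>from ansible.module_utils.shared import my_shared_func</code>. Assuming that mechanism applies to this Ansible version, the layout would look like this:</p>

```
/
    playbook.yml
    library/
        module1.py
        module2.py
    module_utils/
        shared.py
```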
|
<python><python-3.x><ansible><ansible-module>
|
2023-06-22 13:16:31
| 1
| 1,817
|
mat
|
76,532,312
| 7,745,011
|
Pylance does not recognize local packages installed with "pip install -e "
|
<p>I have the following project setup (names changed):</p>
<pre><code>ProjectRoot
│
├───DependencyPackage
│ │ .gitignore
│ │ Dockerfile
│ │ LICENSE
│ │ pyproject.toml
│ │ README.md
│ │ setup.cfg
│ │ setup.py
│ │
│ └───dependency_source
│ │ __init__.py
│ │
│ ├───models
│ │ first_model.py
│ │ second_model.py
│ │
│ └───code
│ some_logic.py
│
├───MainPackage
│ │ .gitignore
│ │ Dockerfile
│ │ LICENSE
│ │ pyproject.toml
│ │ README.md
│ │ setup.cfg
│ │ setup.py
│ │
│ │
│ └───main_source
│ main.py
│ __init__.py
│
└───venv
</code></pre>
<p>In Visual Studio Code the interpreter is chosen via the path to <code>venv/Scripts/python.exe</code>. Both MainPackage and DependencyPackage are installed in the virtual environment via <code>pip install -e ".[dev]"</code>; they also show up with the correct path and version in <code>pip list</code>.</p>
<p>The dependency package is listed as <code>install_requires = dependency_package</code> and I have checked all the naming (e.g. <code>-</code> vs <code>_</code>) several times.</p>
<p>Additionally the code runs fine if I for example use in the <em>MainPackage/main_source/main.py</em>:</p>
<pre class="lang-py prettyprint-override"><code>from dependency_package.code.some_logic import myfunction

def main():
    myfunction("something something...")


if __name__ == "__main__":
    main()
</code></pre>
<p>However, even though everything clearly works, I still get the following pylance error on import:</p>
<blockquote>
<p>Import "dependency_package.code.some_logic" could not be resolved
furthermore, Intellisense fails and I get no code completion and so on...</p>
</blockquote>
<p>Clearly I am missing something here.</p>
<p><strong>addition:</strong> Just tested again: installing with <code>pip install </code> (no <code>-e</code>) works fine and my imports are recognized. I do however need the <code>-e</code> flag, since I am developing both packages simultaneously.</p>
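<p>Not part of the question, but for reference: newer pip versions expose editable installs through import hooks that static analyzers often cannot follow. One commonly suggested workaround is pointing Pylance at the package source directly via the <code>python.analysis.extraPaths</code> setting in <code>.vscode/settings.json</code> (the path below is an assumption based on the tree above):</p>

```json
{
    "python.analysis.extraPaths": [
        "./DependencyPackage"
    ]
}
```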
|
<python><pip><virtualenv><pylance><pyright>
|
2023-06-22 13:14:21
| 2
| 2,980
|
Roland Deschain
|
76,532,248
| 3,054,619
|
How to run ruby code in a jupyter notebook?
|
<p>I have installed Jupyter with conda, and iruby according to instructions here <a href="https://github.com/SciRuby/iruby" rel="nofollow noreferrer">https://github.com/SciRuby/iruby</a>, and there are 2 ways to start jupyter:</p>
<ol>
<li><code>iruby notebook</code> (given in the README)</li>
<li>In VSCode with the Jupyter extensions</li>
</ol>
<p>The first method using iruby seems to require ipython and gives the error <code>[TerminalIPythonApp] WARNING | File 'notebook' doesn't exist</code></p>
<p>The second method seems better, but it is difficult to manage which gems and versions are needed.</p>
<p>Is there a best practice for using Jupyter with Ruby? Is IPython required and supported?</p>
|
<python><ruby><jupyter-notebook>
|
2023-06-22 13:05:46
| 1
| 310
|
mrlindsey
|
76,532,167
| 6,832,612
|
How to display content of database without need to refresh (Flask, Sqlite, Python )
|
<p>I save the output of an LLM with the test function, which is triggered by a button. When the button is hit, <strong>I want to not only save the data but also display it immediately, without needing to refresh the page – which is necessary right now.</strong></p>
<p>This is the test function (in llm.py)</p>
<pre><code>def test(output):
    data = json.loads(output)

    label = data['label']
    explanation = data['explanation']
    etymology = data['etymology']
    example = data['example']

    connection = sqlite3.connect('database.db')
    cur = connection.cursor()
    cur.execute("INSERT INTO emotions (label, explanation, etymology, example) VALUES (?, ?, ?, ?)", (label, explanation, etymology, example))
    connection.commit()
    connection.close()
    print(data)
</code></pre>
<p>This function stores the output of an LLM in the "emotions" database. The interface (Gradio) which is displayed on my main page has this routing in my app.py (a Flask app):</p>
<pre><code>@app.route('/')
def index():
    conn = get_db_connection()
    emotions = conn.execute('SELECT * FROM emotions ORDER BY id DESC').fetchall()
    conn.close()
    return render_template('index.tpl', gradioserver_url='http://127.0.0.1:7860', emotions=emotions)
</code></pre>
<p>I do not have any idea how to make these data visible on my main page without refreshing it. Even worse: I don't know what further information you need. This is the rendering in index.tpl:</p>
<pre><code> <iframe
src="{{ gradioserver_url }}"
width="100%"
height="400"
frameborder="0"
></iframe>
<h2>This is what others feel. And maybe you</h2>
<div class="emotions-container">
{% for emotion in emotions %}
<div class='emo'>
<h2>{{ emotion['label'] }}</h2>
<p>{{ emotion['example'] }}</p>
<p>{{ emotion['explanation'] }}</p>
<p>{{ emotion['etymology'] }}</p>
</div>
{% endfor %}
</code></pre>
<p>Someone said to me that AJAX could be the solution. I looked up what she meant – but this is actually beyond my scope.
Thank you for your time and help.</p>
<p>EDIT:
As someone kindly commented I tried to start in this "route" but I have at best "quarter knowledge" about the POST / GET part of life. I tried to build an asynchronous function within my <strong>index.tpl</strong> which is the rendering file for my flask routing:</p>
<p>I did this:</p>
<pre><code>var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://127.0.0.1:5000/', true);
xhr.setRequestHeader('Content-Type', 'application/json;charset=UTF-8');
xhr.onload = function() {
    if (xhr.status === 200) {
        console.log(xhr.responseText);
    }
};
xhr.send(JSON.stringify(data));
</code></pre>
<p>Console says 'data is not defined'. But data is the JSON format from the output of the llm as defined in the test function. (Above)
This is the part of it:</p>
<pre><code>def test(output):
    data = json.loads(output)
</code></pre>
<p>So the problem is that this <code>data</code> variable is not connected to the AJAX (?) part in index.tpl. But as said, I am new to this. Data is saved in the database, and when refreshing the page it is displayed onscreen, but not asynchronously. (I partly understand the AJAX code, so I commented out the <code>xhr.send(JSON.stringify(data));</code> part, expecting that <code>xhr.responseText</code> would show up in the console when hitting the save button in my Gradio interface.) That button is this part in llm.py:</p>
<pre><code>    btn_save = gr.Button(value="Save Emotion")
    btn_save.click(test, inputs=[emo_out], outputs=emo_out)
</code></pre>
<p>But nothing happens.</p>
|
<python><sqlite><refresh>
|
2023-06-22 12:56:17
| 1
| 703
|
S.H
|
76,531,783
| 6,041,915
|
MLmodel local deployment with azure python sdk
|
<p>I'm trying to deploy a mlflow model locally using azure sdk for python. I'm following this example <a href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/mlflow/online-endpoints-deploy-mlflow-model.ipynb" rel="nofollow noreferrer">https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/mlflow/online-endpoints-deploy-mlflow-model.ipynb</a> and this <a href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb" rel="nofollow noreferrer">https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb</a>.</p>
<p>My dir structure looks like this:</p>
<pre><code> - keen_test
+- model
| +- artifacts
| | - _model_impl_0s5d99i3.pt
| | - settings.json
| +- conda.yaml
| +- MLmodel
| +- python_env.yaml
| +- python_model.pkl
| '- requirements.txt
'- deploy-keen.ipynb
</code></pre>
<p>MLmodel file:</p>
<pre><code>artifact_path: model
flavors:
  python_function:
    artifacts:
      model:
        path: artifacts/_model_impl_0s5d99i3.pt
        # uri: /mnt/azureml/cr/j/1393df3add7949989e16b359b8b4fd0c/exe/wd/_model_impl_0s5d99i3.pt
      settings:
        path: artifacts/settings.json
        # uri: /mnt/azureml/cr/j/1393df3add7949989e16b359b8b4fd0c/exe/wd/tmpdy7crhkb/settings.json
    cloudpickle_version: 2.2.1
    env:
      conda: conda.yaml
      virtualenv: python_env.yaml
    loader_module: mlflow.pyfunc.model
    python_model: python_model.pkl
    python_version: 3.8.10
mlflow_version: 2.2.2
model_uuid: 8fba816341fe4ddabac63e552e62874a
run_id: keen_drain_w43g3fq4t6_HD_1
signature:
  inputs: '[{"name": "image", "type": "string"}]'
  outputs: '[{"name": "filename", "type": "string"}, {"name": "boxes", "type": "string"}]'
utc_time_created: '2023-05-25 22:11:54.553781'
</code></pre>
<p>For deployment I use the following commands:</p>
<pre><code># create a blue deployment
model = Model(
    path="keen_test/model",
    type="mlflow_model",
    description="my sample mlflow model",
)

blue_deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=online_endpoint_name,
    model=model,
    instance_type="Standard_F4s_v2",
    instance_count=1,
)
</code></pre>
<p>When I try to run this:</p>
<pre><code>ml_client.online_deployments.begin_create_or_update(blue_deployment, local=True)
</code></pre>
<p>I get the error:</p>
<pre><code>RequiredLocalArtifactsNotFoundError: ("Local endpoints only support local artifacts. '%s' did not contain required local artifact '%s' of type '%s'.", 'Local deployment (endpoint-06221317698387 / blue)', 'environment.image or environment.build.path', "")
</code></pre>
<p>I tried to modify the <code>artifact_path</code> in the MLmodel configuration, but nothing worked. What should I modify in my configuration to make local deployment work? Do you have any ideas and/or experience with local deployment of mlflow models with the Azure Python SDK?</p>
|
<python><azure><azure-devops><mlflow><azureml-python-sdk>
|
2023-06-22 12:14:38
| 1
| 702
|
Jakub Małecki
|
76,531,776
| 15,487,581
|
capture column name as value in pandas
|
<p>I just want to capture the subject name (column name) as value in new column where there is some improvements in the students marks after re-evaluation.</p>
<p>I have the dataset before re-evaluation:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>class</th>
<th>exam</th>
<th>maths</th>
<th>physics</th>
<th>chemistry</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>Grade - 10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td>75</td>
</tr>
<tr>
<td>Bob</td>
<td>Grade - 06</td>
<td>mid term</td>
<td>65</td>
<td>72</td>
<td>92</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade - 06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade - 07</td>
<td>model 1</td>
<td>72</td>
<td>90</td>
<td>45</td>
</tr>
</tbody>
</table>
</div>
<p>Now I have the dataset after re-evaluation, there are some improvements in some students marks, and there are new data on other students marks who took their exam recently,</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>class</th>
<th>exam</th>
<th>maths</th>
<th>physics</th>
<th>chemistry</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>Grade - 10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td><strong>87</strong></td>
</tr>
<tr>
<td>Bob</td>
<td>Grade - 06</td>
<td>mid term</td>
<td>65</td>
<td><strong>91</strong></td>
<td>92</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade - 06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade - 07</td>
<td>model 1</td>
<td><strong>100</strong></td>
<td>90</td>
<td>45</td>
</tr>
<tr>
<td>Sam</td>
<td>Grade - 08</td>
<td>mid term</td>
<td>43</td>
<td>62</td>
<td>80</td>
</tr>
<tr>
<td>James</td>
<td>Grade - 10</td>
<td>model `</td>
<td>76</td>
<td>66</td>
<td>96</td>
</tr>
<tr>
<td>Henry</td>
<td>Grade - 09</td>
<td>model 1</td>
<td>34</td>
<td>91</td>
<td>70</td>
</tr>
</tbody>
</table>
</div>
<p>Now, we need to concat these two datasets, and mark which row is updated, and which column got updated, so, the concatenated dataset looks like this,</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>class</th>
<th>exam</th>
<th>maths</th>
<th>physics</th>
<th>chemistry</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>Grade - 10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td>75</td>
</tr>
<tr>
<td>Bob</td>
<td>Grade - 06</td>
<td>mid term</td>
<td>65</td>
<td>72</td>
<td>92</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade - 06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade - 07</td>
<td>model 1</td>
<td>72</td>
<td>90</td>
<td>45</td>
</tr>
<tr>
<td>John</td>
<td>Grade - 10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td><strong>87</strong></td>
</tr>
<tr>
<td>Bob</td>
<td>Grade - 06</td>
<td>mid term</td>
<td>65</td>
<td><strong>91</strong></td>
<td>92</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade - 06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade - 07</td>
<td>model 1</td>
<td><strong>100</strong></td>
<td>90</td>
<td>45</td>
</tr>
<tr>
<td>Sam</td>
<td>Grade - 08</td>
<td>mid term</td>
<td>43</td>
<td>62</td>
<td>80</td>
</tr>
<tr>
<td>James</td>
<td>Grade - 10</td>
<td>model `</td>
<td>76</td>
<td>66</td>
<td>96</td>
</tr>
<tr>
<td>Henry</td>
<td>Grade - 09</td>
<td>model 1</td>
<td>34</td>
<td>91</td>
<td>70</td>
</tr>
</tbody>
</table>
</div>
<p>Now, the final output should look like this, with 2 new columns, I was able to eliminate the duplicates and added the new columns <strong>any improvement</strong>, but I got stuck on adding the other new column <strong>improved subject</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>class</th>
<th>exam</th>
<th>maths</th>
<th>physics</th>
<th>chemistry</th>
<th>any improvement</th>
<th>improved subject</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>Grade - 10</td>
<td>model 1</td>
<td>98</td>
<td>78</td>
<td><strong>87</strong></td>
<td>Yes</td>
<td>chemistry</td>
</tr>
<tr>
<td>Bob</td>
<td>Grade - 06</td>
<td>mid term</td>
<td>65</td>
<td><strong>91</strong></td>
<td>92</td>
<td>Yes</td>
<td>physics</td>
</tr>
<tr>
<td>Rose</td>
<td>Grade - 06</td>
<td>model 2</td>
<td>91</td>
<td>70</td>
<td>54</td>
<td>No</td>
<td>no improvement</td>
</tr>
<tr>
<td>Michael</td>
<td>Grade - 07</td>
<td>model 1</td>
<td><strong>100</strong></td>
<td>90</td>
<td>45</td>
<td>Yes</td>
<td>maths</td>
</tr>
<tr>
<td>Sam</td>
<td>Grade - 08</td>
<td>mid term</td>
<td>43</td>
<td>62</td>
<td>80</td>
<td>New Entry</td>
<td>new entry</td>
</tr>
<tr>
<td>James</td>
<td>Grade - 10</td>
<td>model `</td>
<td>76</td>
<td>66</td>
<td>96</td>
<td>New Entry</td>
<td>new entry</td>
</tr>
<tr>
<td>Henry</td>
<td>Grade - 09</td>
<td>model 1</td>
<td>34</td>
<td>91</td>
<td>70</td>
<td>New Entry</td>
<td>new entry</td>
</tr>
</tbody>
</table>
</div>
<p>Below is the code, I used for this,</p>
<pre><code># added primary key column by concatenating name, class, exam
# and secondary key column by concatenating maths, physics, chemistry
dupedf = concatdf.loc[concatdf.duplicated(subset=['PrimaryKey', 'SecondaryKey'], keep=False)]
dupedf1 = concatdf.loc[concatdf.duplicated(subset=['PrimaryKey'], keep=False)]

for i, j in dupedf.iterrows():
    for k, l in dupedf1.iterrows():
        if l['PrimaryKey'] == j['PrimaryKey']:
            dupedf = dupedf.drop_duplicates(subset=['PrimaryKey', 'SecondaryKey'], keep='last')
            dupedf['any improvement'] = 'No'
            # dupedf['improved subject'] = ' '
        else:
            dupedf1 = dupedf1.drop_duplicates(subset=['SecondaryKey'], keep=False)
            dupedf1 = dupedf1.drop_duplicates(subset=['PrimaryKey'], keep='last')
            dupedf1['any improvement'] = 'Yes'
            # dupedf1['improved subject'] = 'column name'
</code></pre>
<p>In the above code, I am iterating only over the rows which exist in both the before and after re-evaluation datasets, row by row, to fill the 2 new columns <strong>any improvement &amp; improved subject.</strong> <strong>I was able to achieve the any improvement column, but I need help with the improved subject column.</strong></p>
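<p>Not part of the question, but the improved-subject lookup can be sketched without row-by-row iteration by aligning the two frames on the key column and comparing subject columns vectorially. Column names follow the tables above; the merge strategy and the tiny data are illustrative assumptions:</p>

```python
import pandas as pd

# Minimal stand-ins for the before/after re-evaluation frames.
before = pd.DataFrame({
    'name': ['John', 'Bob'],
    'maths': [98, 65], 'physics': [78, 72], 'chemistry': [75, 92],
})
after = pd.DataFrame({
    'name': ['John', 'Bob', 'Sam'],
    'maths': [98, 65, 43], 'physics': [78, 91, 62], 'chemistry': [87, 92, 80],
})

subjects = ['maths', 'physics', 'chemistry']

# indicator=True marks rows that only exist after re-evaluation (new entries).
merged = after.merge(before, on='name', how='left', suffixes=('', '_old'), indicator=True)

def improved(row):
    if row['_merge'] == 'left_only':
        return 'new entry'
    ups = [s for s in subjects if row[s] > row[f'{s}_old']]
    return ', '.join(ups) if ups else 'no improvement'

after['improved subject'] = merged.apply(improved, axis=1)
print(after['improved subject'].tolist())
```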
|
<python><python-3.x><pandas><dataframe><pivot>
|
2023-06-22 12:13:52
| 1
| 349
|
Beginner
|
76,531,474
| 2,681,662
|
Django Ninja Testing
|
<p>I am trying to create a test for an API I wrote using Django-Ninja.</p>
<p>Here is my Model:</p>
<pre><code>class Country(models.Model):
    created_at = models.DateTimeField(auto_created=True, auto_now_add=True)
    name = models.CharField(max_length=128, null=False, blank=False)
    code = models.CharField(max_length=128, null=False, blank=False, unique=True)
    timezone = models.CharField(max_length=128, null=False, blank=False)
</code></pre>
<p>Here is my schema:</p>
<pre><code>class CountryAddSchema(Schema):
    name: str
    code: str
    timezone: str
</code></pre>
<p>Here is the post endpoint:</p>
<pre><code>@router.post("/add",
             description="Add a Country",
             summary="Add a Country", tags=["Address"],
             response={201: DefaultSchema, 401: DefaultSchema, 422: DefaultSchema, 500: DefaultSchema},
             url_name="address_country_add")
def country_add(request, country: CountryAddSchema):
    try:
        if not request.auth.belongs_to.is_staff:
            return 401, {"detail": "None Staff cannot add Country"}

        the_country = Country.objects.create(**country.dict())
        the_country.save()

        return 201, {"detail": "New Country created"}
    except Exception as e:
        return 500, {"detail": str(e)}
</code></pre>
<p>Finally, here the test function:</p>
<pre><code>def test_add_correct(self):
    """
    Add a country
    """
    data = {
        "name": "".join(choices(ascii_letters, k=32)),
        "code": "".join(choices(ascii_letters, k=32)),
        "timezone": "".join(choices(ascii_letters, k=32))
    }

    respond = self.client.post(reverse("api-1.0.0:address_country_add"), data, **self.AUTHORIZED_HEADER)

    self.assertEquals(respond.status_code, 201)
    self.assertDictEqual(json.loads(respond.content), {"detail": "New Country created"})

    the_country = Country.objects.last()
    self.assertDictEqual(
        data,
        {
            "name": the_country.name,
            "code": the_country.code,
            "timezone": the_country.timezone
        }
    )
</code></pre>
<p>Please notice I have <code>self.AUTHORIZED_HEADER</code> set in <code>setUp</code>.</p>
<p>And here the error:</p>
<pre><code>FAIL: test_add_correct (address.tests_country.CountryTest)
Add a country
----------------------------------------------------------------------
Traceback (most recent call last):
File "SOME_PATH/tests_country.py", line 80, in test_add_correct
self.assertEquals(respond.status_code, 201)
AssertionError: 400 != 201
</code></pre>
<p>I can add a country using the Swagger UI provided with django-ninja, i.e. the endpoint works. But I cannot test it using <code>django.test.Client</code>.</p>
<p>Any Idea?</p>
<h2>Update:</h2>
<p>Here the curl code generated by swagger:</p>
<pre><code>curl -X 'POST' \
'http://127.0.0.1:8000/api/address/country/add' \
-H 'accept: application/json' \
-H 'X-API-Key: API-KEY' \
-H 'Content-Type: application/json' \
-d '{
"name": "string",
"code": "string",
"timezone": "string"
}'
</code></pre>
|
<python><django><django-ninja>
|
2023-06-22 11:36:16
| 3
| 2,629
|
niaei
|
76,531,049
| 13,706,389
|
Slow initialisation of for loop in Python for filter object
|
<p>I have written some python code to generate a tex file and compile it into a pdf. This works just fine except when I try to rerun the script without changing the name of the compiled pdf while I have a previous version of the pdf opened in Adobe Acrobat Reader.
To solve this, I added the following function before the pdf is compiled to check whether it is still opened in Acrobat and if so, kill Acrobat before compiling the pdf:</p>
<pre class="lang-py prettyprint-override"><code>def close_file_in_acrobat(file_path):
    file_name = os.path.basename(file_path)

    all_processes = psutil.process_iter(['name', 'open_files'])
    filtered_processes = filter(lambda proc: proc.info['name'].lower() == 'acrobat.exe', all_processes)

    for proc in filtered_processes:
        for file in proc.info['open_files']:
            if file_name == os.path.basename(file.path):
                proc.kill()
                return
</code></pre>
<p>This also works fine, but it takes a lot longer than I had expected, ~10 seconds.
When looking for which part of the code delays the program, I printed some intermediate values, and it turns out the part up to and including <code>filtered_processes</code> runs almost instantly; it then takes long to start the first iteration of the <code>for proc in ...</code> loop.
I don't see why specifically this part would take so long, since the filter object is already generated here.</p>
<p>If there are any suggestions on improving what I'm doing here in general, I'd be more than welcome to hear them as well.</p>
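<p>As an aside (not from the original question): the timing observation is consistent with <code>filter</code> objects being lazy. Creating one does no work; the per-process cost (such as querying open files) is only paid once iteration starts. A minimal illustration with a deliberately slow source:</p>

```python
import time

def slow_items():
    # Simulate an expensive per-item source, like psutil querying each process.
    for i in range(3):
        time.sleep(0.1)
        yield i

start = time.perf_counter()
filtered = filter(lambda x: x % 2 == 0, slow_items())  # returns almost instantly
creation_time = time.perf_counter() - start

start = time.perf_counter()
result = list(filtered)  # the sleeps happen here, during iteration
iteration_time = time.perf_counter() - start

print(creation_time, iteration_time, result)
```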
|
<python>
|
2023-06-22 10:38:31
| 1
| 684
|
debsim
|
76,530,763
| 3,416,774
|
Copying files to the Inkscape extension folder doesn't work
|
<p>I copied my extension files to <code>C:\Users\ganuo\AppData\Roaming\inkscape\extensions</code> and restarted Inkscape, but I don't see my script showing up in the <em>Extensions</em> menu bar. Clicking on <em>Manage Extensions</em> I get this error:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
  File "C:\Program Files\Inkscape\share\inkscape\extensions\inkman\inkman\manage_extensions.py", line 29, in <module>
    from inkex import gui
  File "C:\Program Files\Inkscape\share\inkscape\extensions\inkex\__init__.py", line 11, in <module>
    from .extensions import *
  File "C:\Program Files\Inkscape\share\inkscape\extensions\inkex\extensions.py", line 34, in <module>
    from .elements import (
  File "C:\Program Files\Inkscape\share\inkscape\extensions\inkex\elements\__init__.py", line 9, in <module>
    from ._parser import SVG_PARSER, load_svg
  File "C:\Program Files\Inkscape\share\inkscape\extensions\inkex\elements\_parser.py", line 30, in <module>
    from lxml import etree
ModuleNotFoundError: No module named 'lxml'
</code></pre>
<p><code>pip install lxml</code> doesn't work.</p>
<p>Do you know what makes this happen?</p>
<p>Inkscape 1.2.2 (732a01da63, 2022-12-09)</p>
<p><img src="https://i.imgur.com/uCIedZXm.png" alt="" /></p>
|
<python><inkscape>
|
2023-06-22 10:04:41
| 0
| 3,394
|
Ooker
|
76,530,740
| 21,404,794
|
Delete rows from values from a torch tensor (drop method in pytorch)
|
<p>Let's say I have a pytorch tensor</p>
<pre class="lang-py prettyprint-override"><code>import torch
x = torch.tensor([
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12]
])
</code></pre>
<p>And I want to delete the row with values <code>[5,6,7,8]</code>. I have seen <a href="https://stackoverflow.com/questions/58530117/deleting-rows-in-torch-tensor">this answer</a> (which solves the problem by indexing), <a href="https://stackoverflow.com/questions/73041897/searching-for-a-1d-torch-tensor-in-a-2d-torch-tensor">this one</a> (which solves the problem by masking), <a href="https://stackoverflow.com/questions/62372762/delete-an-element-from-torch-tensor">this one</a> and <a href="https://stackoverflow.com/questions/69132963/delete-a-row-by-index-from-pytorch-tensor">this one</a> (deleting rows knowing the index).</p>
<p>In my case, I know the values of the tensor I want to delete, but not the index, and the values should be the same in every column of the tensor.</p>
<p>I could try doing the masking in <a href="https://stackoverflow.com/questions/73041897/searching-for-a-1d-torch-tensor-in-a-2d-torch-tensor">this question</a> and then indexing the rows as shown <a href="https://stackoverflow.com/questions/62372762/delete-an-element-from-torch-tensor">here</a>, something like this:</p>
<pre class="lang-py prettyprint-override"><code>ind = torch.nonzero(torch.all(x == torch.tensor([5, 6, 7, 8]), dim=1))
x = torch.cat((x[:ind], x[ind + 1:]))
</code></pre>
<p>That works, but I'd like a cleaner solution than splitting the tensor and concatenating it again. Something similar to the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer">drop()</a> method in pandas dataframes.</p>
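<p>Not in the original post, but for reference, a single boolean-mask step (in the spirit of the linked masking answer) avoids the split-and-concatenate entirely; the data below mirrors the example above:</p>

```python
import torch

x = torch.tensor([
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
])
row = torch.tensor([5, 6, 7, 8])

# Keep every row that does NOT match `row` in all columns.
mask = ~(x == row).all(dim=1)
x = x[mask]
print(x)
```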
|
<python><pytorch>
|
2023-06-22 10:01:45
| 1
| 530
|
David Siret Marqués
|
76,530,715
| 6,719,772
|
Why is python somehow overwriting my function with another function in a dictionary?
|
<p>I have the following code, which fails:</p>
<pre><code>def test_function_combining_wrong():
    def one():
        print("ONE")
        return "ONE"

    def two():
        print("TWO")
        return "TWO"

    F1S = {
        1: one,
        2: two,
    }
    F2S = {
        **{key: lambda: value() for key, value in F1S.items()},
    }

    print("1st Call")
    assert F1S[1]() == "ONE"
    assert F1S[2]() == "TWO"

    print("2nd Call")
    assert F2S[1]() == "ONE"  # ATTENTION: THIS IS WRONG WEIRD PYTHON STUFF
    assert F2S[2]() == "TWO"
</code></pre>
<p>In my head this code should work and I would like to know why Python returns <code>TWO</code> for the <code>F2S[1]()</code>. I'm trying to nest some lambda functions together in dictionaries.</p>
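<p>For context (this is an editorial note, not part of the original question): the behaviour is Python's late-binding closure rule. Every lambda created in the comprehension closes over the single loop variable <code>value</code>, which ends up bound to the last item, so <code>F2S[1]()</code> ends up calling <code>two</code>. The usual fix is snapshotting the current value as a default argument, shown here on a simplified example rather than the original code:</p>

```python
items = {1: 'one', 2: 'two'}

# Late binding: all lambdas share the single comprehension variable `v`.
fs_wrong = {k: (lambda: v) for k, v in items.items()}

# Early binding: `v=v` captures the value of `v` at definition time.
fs_right = {k: (lambda v=v: v) for k, v in items.items()}

print(fs_wrong[1]())  # both wrong lambdas see the final v
print(fs_right[1]())
```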
|
<python><python-3.x>
|
2023-06-22 09:59:38
| 1
| 401
|
danielmoessner
|
76,530,606
| 10,428,677
|
Function to identify duplicate Python column names and add specific suffixes
|
<p>I have several dataframes with certain duplicate column names (they come from Excel files). My data looks a little something like this.</p>
<pre><code>original_df = pd.DataFrame({
    'ID': [True, False, True],
    'Revenue (USDm)': [1000, 2000, 1500],
    'Location': ['London', 'New York', 'Paris'],
    'Year': [2021, 2022, 2023],
    'Sold Products': [10, 20, 30],
    'Leased Products': [5, 10, 15],
    'Investments': [7, 12, 8],
    'Sold Products.1': [15, 25, 35],
    'Leased Products.1': [8, 12, 16],
    'Investments.1': [6, 9, 11],
    'Sold Products.2': [5, 10, 15],
    'Leased Products.2': [2, 5, 8],
    'Investments.2': [3, 7, 4],
    'QC Completed?': [True, True, False],
})
</code></pre>
<p>When I read the df, pandas automatically adds the <code>.1</code> and <code>.2</code> suffixes to the duplicate column names. I tried to write a function that identifies the duplicates and adds a new set of suffixes from a list I provide, while removing the <code>.1</code> and <code>.2</code> where applicable.</p>
<p>The new suffixes list is <code>suffixes = ['Vehicles','Electronics','Real Estate']</code></p>
<p>The output should look like this:</p>
<pre><code>desired_output = pd.DataFrame({
    'ID': [True, False, True],
    'Revenue (USDm)': [1000, 2000, 1500],
    'Location': ['London', 'New York', 'Paris'],
    'Year': [2021, 2022, 2023],
    'Sold Products - Vehicles': [10, 20, 30],
    'Leased Products - Vehicles': [5, 10, 15],
    'Investments - Vehicles': [7, 12, 8],
    'Sold Products - Electronics': [15, 25, 35],
    'Leased Products - Electronics': [8, 12, 16],
    'Investments - Electronics': [6, 9, 11],
    'Sold Products - Real Estate': [5, 10, 15],
    'Leased Products - Real Estate': [2, 5, 8],
    'Investments - Real Estate': [3, 7, 4],
    'QC Completed?': [True, True, False],
})
</code></pre>
<p>The column names without any duplicates should remain the same but the columns which are duplicated get added the suffixes in order; If they also have the <code>.1</code> and <code>.2</code> suffixes, those get removed.</p>
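<p>That renaming rule can be sketched as a two-pass approach: first count how often each base name occurs, then suffix every occurrence of a duplicated name, including the first one. The sketch below is an illustration of the intended behaviour on a shortened column list, not the function from the question:</p>

```python
import re
from collections import Counter

def rename_duplicates(columns, suffixes):
    # Pass 1: strip pandas-style ".1"/".2" suffixes and count base names.
    bases = [re.sub(r'\.\d+$', '', c) for c in columns]
    counts = Counter(bases)

    # Pass 2: suffix every occurrence of a duplicated base name,
    # including the first one (which pandas leaves unsuffixed).
    seen = Counter()
    out = []
    for base in bases:
        if counts[base] > 1:
            out.append(f"{base} - {suffixes[seen[base]]}")
            seen[base] += 1
        else:
            out.append(base)
    return out

cols = ['ID', 'Sold Products', 'Sold Products.1', 'Sold Products.2']
print(rename_duplicates(cols, ['Vehicles', 'Electronics', 'Real Estate']))
```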
<p>My function is below:</p>
<pre><code>def change_colnames(df, suffixes):
    new_columns = []
    seen_columns = {}

    for column in df.columns:
        match = re.match(r'^(.*?)(?:\.\d+)?$', column)  # Match the base column name and optional suffix
        base_column = match.group(1) if match else column  # Get the base column name or keep the original column name

        if base_column in seen_columns:
            idx = seen_columns[base_column]  # Get the index of the base column
            new_column = f"{base_column} {suffixes[idx]}"  # Append the new suffix
            seen_columns[base_column] += 1  # Increment the index for the next occurrence
        else:
            new_column = base_column
            seen_columns[base_column] = 0  # Add the base column with index 0

        new_columns.append(new_column)

    df.columns = new_columns
    return df
</code></pre>
<p>Unfortunately the first set of duplicate columns (those without the <code>.1</code> and <code>.2</code> suffixes) stays the same. The output I get is this:</p>
<pre><code>wrong_output = pd.DataFrame({
    'ID': [True, False, True],
    'Revenue (USDm)': [1000, 2000, 1500],
    'Location': ['London', 'New York', 'Paris'],
    'Year': [2021, 2022, 2023],
    'Sold Products': [10, 20, 30],
    'Leased Products': [5, 10, 15],
    'Investments': [7, 12, 8],
    'Sold Products - Vehicles': [15, 25, 35],
    'Leased Products - Vehicles': [8, 12, 16],
    'Investments - Vehicles': [6, 9, 11],
    'Sold Products - Electronics': [5, 10, 15],
    'Leased Products - Electronics': [2, 5, 8],
    'Investments - Electronics': [3, 7, 4],
    'QC Completed?': [True, True, False],
})
</code></pre>
<p>Any idea how to fix it?</p>
|
<python><pandas><dataframe>
|
2023-06-22 09:47:01
| 2
| 590
|
A.N.
|
76,530,579
| 3,983,470
|
Getting CORS error when calling functions from ModelViewSet in Django Rest Framework
|
<p>This error is pretty odd, I have a project on Django with Django Rest Framework and I have an app with a ModelViewSet to create CRUD endpoints for my Race resource.</p>
<p>race.py</p>
<pre class="lang-py prettyprint-override"><code>from ..models.race import Race
from ..serializers.race import RaceSerializer
from rest_framework.viewsets import ModelViewSet
from rest_framework.permissions import IsAuthenticated
class RaceViewSet(ModelViewSet):
    model = Race
    serializer_class = RaceSerializer
    queryset = Race.objects.all()
</code></pre>
<p>This is my app's urls.py</p>
<pre class="lang-py prettyprint-override"><code>from django.urls import path, include
from rest_framework import routers
from .views.race import RaceViewSet
from .views.test import TestView
router = routers.DefaultRouter()
router.register(r'races', RaceViewSet)
urlpatterns = [
    path('', include(router.urls)),
    path('test', TestView.as_view(), name='test'),
]
</code></pre>
<p>I got everything set correctly, and I can access the list method without issues using Postman</p>
<p>Now, I'm aware that the CORS issue doesn't happen in Postman, and I installed 'django-cors-headers' to solve the CORS issue in my Angular app.</p>
<p>settings.py</p>
<pre class="lang-py prettyprint-override"><code>ALLOWED_HOSTS = []

CORS_ORIGIN_WHITELIST = [
    'http://localhost:4200',  # My angular server
]

# Application definition

INSTALLED_APPS = [
    ...
    'rest_framework',
    'corsheaders',
    ..
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'corsheaders.middleware.CorsMiddleware',  # CORS middleware
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
</code></pre>
<p>And it works: I made a test route to check everything, and inside my Angular app I called that method.</p>
<p>this is the test ViewSet</p>
<pre class="lang-py prettyprint-override"><code>from rest_framework.views import APIView
from rest_framework.response import Response
from ..models.race import Race
from ..serializers.race import RaceSerializer

class TestView(APIView):
    def get(self, request):  # I copied what the list method should in theory do.
        races = Race.objects.all()
        serializer = RaceSerializer(races, many=True)
        return Response(serializer.data)
</code></pre>
<p>home.component.ts</p>
<pre class="lang-js prettyprint-override"><code>test(): void {
  const url = 'http://127.0.0.1:8000/api/v1/rpg/test'; // This time I'm calling the test url

  // Send the HTTP GET request
  this.http.get(url).subscribe(
    {
      next: (response) => {
        // Handle a successful response
        console.log('SUCCESS', response);
      },
      error: (failResponse) => {
        // Handle an error response
        console.error('Sign-in error:', failResponse);
      },
      complete: () => {
        console.log('Test completed:');
      }
    }
  );
}
</code></pre>
<p>Of course, just to double-check that the CORS middleware was actually working, I removed it from settings.py and tried the test route again; as expected, I got the CORS error, but putting the middleware back made the test route work just fine.</p>
<p>But when trying to access the routes defined on my ModelViewSet, particularly the list, I'm getting a CORS error with or without the middleware.</p>
<p>home.component.ts</p>
<pre class="lang-js prettyprint-override"><code>test(): void {
  const url = 'http://127.0.0.1:8000/api/v1/rpg/races'; // Trying to access the races list this time

  // Send the HTTP GET request
  this.http.get(url).subscribe(
    {
      next: (response) => {
        // Handle a successful response
        console.log('SUCCESS', response);
      },
      error: (failResponse) => {
        // Handle the error
        console.error('Request error:', failResponse);
      },
      complete: () => {
        console.log('Test completed:');
      }
    }
  );
}
</code></pre>
<p>And Im getting this error on console.</p>
<p>"Access to XMLHttpRequest at 'http://127.0.0.1:8000/api/v1/rpg/races' from origin 'http://localhost:4200' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource."</p>
<p>Again, I'm sure the route works fine; I tried it in Postman without issues, so I'm somewhat in the dark here. I hope you can help me.</p>
<p>EDIT:</p>
<p>Oddly enough, the POST method seems to be working just fine; I'm still struggling with the GET method.</p>
<p>home.component.ts</p>
<pre class="lang-js prettyprint-override"><code>test(): void {
  const url = 'http://127.0.0.1:8000/api/v1/rpg/races/';

  let body = {
    "name": "Lowlander",
    "hp_growth": 1.9,
    "mp_growth": 1.1,
    "str_growth": 1.7,
    "int_growth": 1.2,
    "dex_growth": 1.3,
    "base_hp": 150,
    "base_mp": 100,
    "base_str": 85,
    "base_int": 70,
    "base_dex": 75
  }

  // Send the HTTP POST request
  this.http.post(url, body).subscribe(
    {
      next: (response) => {
        // Handle a successful response
        console.log('SUCCESS', response);
      },
      error: (failResponse) => {
        // Handle the error
        console.error('Request error:', failResponse);
      },
      complete: () => {
        console.log('Test completed:');
      }
    }
  );
}
</code></pre>
|
<python><django><angular><django-rest-framework><cors>
|
2023-06-22 09:44:31
| 2
| 607
|
Anibal Cardozo
|
76,530,572
| 4,575,197
|
How to extract the text after the first h1 tag?
|
<p>I'm trying to write code to get and clean the text from 100 websites per day. I came across an issue with one website that has more than one h1 tag, and when you scroll to the next h1 tag the URL on the website changes, for example <a href="https://economictimes.indiatimes.com/news/international/business/volkswagen-sets-5-7-revenue-growth-target-preaches-cost-discipline/articleshow/101168014.cms" rel="nofollow noreferrer">this website</a>.</p>
<p>What I have is basically this:</p>
<pre><code>import requests
from bs4 import BeautifulSoup

response = requests.get('https://economictimes.indiatimes.com/news/international/business/volkswagen-sets-5-7-revenue-growth-target-preaches-cost-discipline/articleshow/101168014.cms',
                        headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"})
soup = BeautifulSoup(response.content, 'html.parser')

if len(soup.body.find_all('h1')) > 2:  # to check if there is more than one tag
    if response.url.endswith(".cms"):  # to check if the website has a .cms ending (I have my doubts about this part)
        for elem in soup.body.find('h1').next_siblings:
            if elem.name == 'h1':
                # GET THE TEXT SOMEHOW
                break
</code></pre>
<p>How can I get the text after the first h1 tag? (Please note that the text is in a tag, and not in a <p> tag.)</p>
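<p>For illustration, one common BeautifulSoup pattern walks the siblings of the first <code>h1</code> and collects text until the next one; a minimal sketch against made-up markup (the real page's structure may differ, so the tag layout here is an assumption):</p>

```python
from bs4 import BeautifulSoup

# Made-up markup: the real page's structure may differ.
html = "<body><h1>First headline</h1><div>article text</div><h1>Second headline</h1><div>more</div></body>"
soup = BeautifulSoup(html, "html.parser")

parts = []
for elem in soup.find("h1").next_siblings:  # walk forward from the first h1
    if elem.name == "h1":                   # stop at the next headline
        break
    if hasattr(elem, "get_text"):
        parts.append(elem.get_text(" ", strip=True))
text_after_first_h1 = " ".join(p for p in parts if p)
```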
|
<python><html><beautifulsoup><content-management-system>
|
2023-06-22 09:43:44
| 2
| 10,490
|
Mostafa Bouzari
|
76,530,381
| 11,311,798
|
Opencv : convert curves to shapely Linestrings
|
<p>I have a pre-processed image containing strands of white pixels representing curves, some closed, some not. The thickness is 1 pixel.</p>
<p>I would like to be able to convert each one of these to a shapely LineString.</p>
<p>I already tried to use the "findContours" function. This works very well on the closed curves; however, on the open ones, OpenCV returns a closed contour.</p>
<p>This breaks my code later when, for instance, I need to get the length of these curves (it roughly returns twice the length of each open curve).</p>
<p>Note that I absolutely need LineStrings. I cannot simply count the number of white pixels in the image.</p>
<p>Is there a simple way to extract these lines using OpenCV, or will I have to write the algorithm myself? (I'd really like to avoid that, as I don't have much time and performance is kind of important for this app.)</p>
<p>example image:
<a href="https://i.sstatic.net/6JVH1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6JVH1.png" alt="enter image description here" /></a></p>
|
<python><opencv><image-processing><shapely>
|
2023-06-22 09:22:41
| 1
| 337
|
J.M
|
76,530,380
| 10,437,727
|
Azure Function Timer Function run part of the code asynchronously
|
<p>I have a timer function in Python that's supposed to run daily. However, there is a business requirement that a certain method runs on a monthly basis.</p>
<p>Is there a way for me to do this?</p>
<p>It would be something like the following in pseudo code:</p>
<pre class="lang-py prettyprint-override"><code>next_monthly_job = date + "30d"

def main(timer):
    if current_date != next_monthly_job:
        ## do normal stuff
    else:
        ## do specific monthly stuff
        next_monthly_job = current_date + "30d"
</code></pre>
<p>I'm just concerned that the global variable will be overwritten at each trigger, hence never reaching the else statement.</p>
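<p>For comparison, one stateless alternative to the mutable global is to derive the branch from the run date alone; a sketch (note it runs the monthly branch on the 1st of each month rather than strictly every 30 days, which may or may not match the business requirement):</p>

```python
import datetime

def job_kind(run_date: datetime.date) -> str:
    """Decide which branch of the timer function to run for a given date."""
    return "monthly" if run_date.day == 1 else "daily"
```

<p>Because nothing is stored between invocations, this also survives function-host restarts, which a module-level global would not.</p>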
<p>Thanks in advance!</p>
|
<python><azure><azure-functions>
|
2023-06-22 09:22:32
| 2
| 1,760
|
Fares
|
76,530,205
| 695,134
|
Merging and flattening two lists of dictionaries using keys as new fields
|
<p>I have two lists of dictionaries, each with the same structure. I wish to flatten them into a single dictionary, with list 2 taking precedence, using the "RealField" value as the key of the new flat dictionary.</p>
<p>The following code works, but it feels like it's hacking together code that can probably be done via one or two simple comprehensions. Is there a better way than this?</p>
<p>It produces this:</p>
<pre><code>{'SourceIP': 'src2',
 'DestinationIP': 'dst2',
 'Direction': 'dir',
 'NEW': 'newvalue'
}
</code></pre>
<p>Here is the code:</p>
<pre><code>from functools import reduce  # needed for reduce()
import operator

default = [
    {"RealField": "SourceIP", "SuppliedField": "src"},
    {"RealField": "DestinationIP", "SuppliedField": "dst"},
    {"RealField": "Direction", "SuppliedField": "dir"}
]

product_mapping = [
    {"RealField": "SourceIP", "SuppliedField": "src2"},
    {"RealField": "DestinationIP", "SuppliedField": "dst2"},
    {"RealField": "NEW", "SuppliedField": "newvalue"},
]

def dictionary_from_mappings(default_mapping, product_mapping):
    default = [{i["RealField"]: i["SuppliedField"]} for i in default_mapping]
    default_flat = reduce(operator.ior, default, {})
    product = [{i["RealField"]: i["SuppliedField"]} for i in product_mapping]
    product_flat = reduce(operator.ior, product, {})
    return default_flat | product_flat

mappings = dictionary_from_mappings(default, product_mapping)
print(mappings)
</code></pre>
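<p>For reference, because later duplicate keys win when building a dict, the two <code>reduce</code> passes can be collapsed into a single comprehension over the concatenated lists; a sketch using the question's data:</p>

```python
default = [
    {"RealField": "SourceIP", "SuppliedField": "src"},
    {"RealField": "DestinationIP", "SuppliedField": "dst"},
    {"RealField": "Direction", "SuppliedField": "dir"},
]
product_mapping = [
    {"RealField": "SourceIP", "SuppliedField": "src2"},
    {"RealField": "DestinationIP", "SuppliedField": "dst2"},
    {"RealField": "NEW", "SuppliedField": "newvalue"},
]

# Later duplicate keys win, so product_mapping overrides the defaults.
merged = {m["RealField"]: m["SuppliedField"] for m in default + product_mapping}
```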
|
<python>
|
2023-06-22 09:01:30
| 4
| 6,898
|
Neil Walker
|
76,530,144
| 20,920,790
|
How to force values in column be unique in SDV multi table HMASynthesizer?
|
<p>I got this table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">id</th>
<th style="text-align: left;">name</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">Region_0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">Region_1</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">Region_2</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">Region_3</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">4</td>
<td style="text-align: left;">Region_4</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: right;">5</td>
<td style="text-align: left;">Region_5</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: right;">6</td>
<td style="text-align: left;">Region_6</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: right;">7</td>
<td style="text-align: left;">Region_7</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: right;">8</td>
<td style="text-align: left;">Region_8</td>
</tr>
<tr>
<td style="text-align: right;">9</td>
<td style="text-align: right;">9</td>
<td style="text-align: left;">Region_9</td>
</tr>
<tr>
<td style="text-align: right;">10</td>
<td style="text-align: right;">10</td>
<td style="text-align: left;">Region_10</td>
</tr>
<tr>
<td style="text-align: right;">11</td>
<td style="text-align: right;">11</td>
<td style="text-align: left;">Region_11</td>
</tr>
<tr>
<td style="text-align: right;">12</td>
<td style="text-align: right;">12</td>
<td style="text-align: left;">Region_12</td>
</tr>
<tr>
<td style="text-align: right;">13</td>
<td style="text-align: right;">13</td>
<td style="text-align: left;">Region_13</td>
</tr>
<tr>
<td style="text-align: right;">14</td>
<td style="text-align: right;">14</td>
<td style="text-align: left;">Region_14</td>
</tr>
<tr>
<td style="text-align: right;">15</td>
<td style="text-align: right;">15</td>
<td style="text-align: left;">Region_15</td>
</tr>
<tr>
<td style="text-align: right;">16</td>
<td style="text-align: right;">16</td>
<td style="text-align: left;">Region_16</td>
</tr>
<tr>
<td style="text-align: right;">17</td>
<td style="text-align: right;">17</td>
<td style="text-align: left;">Region_17</td>
</tr>
<tr>
<td style="text-align: right;">18</td>
<td style="text-align: right;">18</td>
<td style="text-align: left;">Region_18</td>
</tr>
<tr>
<td style="text-align: right;">19</td>
<td style="text-align: right;">19</td>
<td style="text-align: left;">Region_19</td>
</tr>
</tbody>
</table>
</div>
<p>I am trying to generate new data.
Metadata for this table:</p>
<pre><code>database_metadata.update_column(
    table_name='region',
    column_name='id',
    sdtype='id',
    regex_format='[0-9]{2}'
)
database_metadata.set_primary_key(
    table_name='region',
    column_name='id'
)
database_metadata.add_relationship(
    parent_table_name='region',
    child_table_name='users',
    parent_primary_key='id',
    child_foreign_key='region_id'
)

'region': {'primary_key': 'id',
           'columns': {'id': {'sdtype': 'id', 'regex_format': '[0-9]{2}'},
                       'name': {'sdtype': 'categorical'}}}
</code></pre>
<p>I need to make the region names unique.
How can I put this logic into the model?</p>
<p>P.S. The constraint class 'Unique' is not currently supported for multi-table synthesizers.</p>
|
<python><sdv>
|
2023-06-22 08:55:00
| 1
| 402
|
John Doe
|
76,529,912
| 2,706,344
|
Adjust labels for monthly bar plot
|
<p>I have a Series <code>monthCount</code>:</p>
<p><a href="https://i.sstatic.net/tfAgO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tfAgO.png" alt="enter image description here" /></a></p>
<p>And I want to plot it as a bar plot. This works using <code>monthCount.plot.bar()</code>:</p>
<p><a href="https://i.sstatic.net/CLMQy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CLMQy.png" alt="enter image description here" /></a></p>
<p>However, I would prefer different labels for the bars: "January 2022", "February 2022", and so on would be much better. The year could even be dropped and put as a caption above. How do I change the labels of the bars in that way?</p>
|
<python><pandas><plot>
|
2023-06-22 08:24:09
| 1
| 4,346
|
principal-ideal-domain
|
76,529,763
| 3,989,783
|
Bouncy Castle and CryptoJS vs Pycryptodome and Cryptography - who's right and how to decrypt the data in Python?
|
<p>Let me explain. I have some data encrypted with the Bouncy Castle library (AES-128 CBC).
I am able to decrypt the data using CryptoJS:</p>
<pre><code>var CryptoJS = require('crypto-js')

const ivHexString = '4d055a9e07e7db37297dd20cc73c4cc4'
const keyHexString = 'aab60badba8587c95e36c5cf6cf736b9'

const decrypt = u8arr => {
  const wordArray = arr => CryptoJS.lib.WordArray.create(arr)
  const ciphertext = wordArray(u8arr)
  const iv = CryptoJS.enc.Hex.parse(ivHexString)
  const key = CryptoJS.enc.Hex.parse(keyHexString)
  const cipherTextParam = CryptoJS.lib.CipherParams.create({ ciphertext })
  const cipherParams = {
    iv,
    mode: CryptoJS.mode.CBC,
    padding: CryptoJS.pad.Pkcs7
  }
  const output = CryptoJS.AES.decrypt(cipherTextParam, key, cipherParams)
  return output.toString(CryptoJS.enc.Base64)
}

const data = 'CYkJIz1qptjyjNh4VpzODnVq94JLIx37B0677lYlMJhFtOA6ZFL7CY0M6wRyZlW5pfSorSEDEUbw9EqDePLUvSCwtTsB6CXnVL2+TNvYaffnV/lpjfEAoHANiUxgbBabY3k3BBUAj8RXDmLIQw/bftJcvXw5egS7U0ucb0lDKByqSrMT1DaFDrOHnjO7r2ahZIgZ2G/wHcNwDjloGPvkjGAbwqWfdHaazflGzGTcgXb/PY4UtThba2t8PZBmF8Injiu8As2jl9mt+oC/QvhgJhuIOqJ69Hnt7EoGX4MNX0e/a5qjCHEMOtC4v3gW6sozuVoozXL3uNYHtfsPfnCD2g=='
const uint8arr = Uint8Array.from(Buffer.from(data, 'base64'))
const d = decrypt(uint8arr)
console.log(d)
</code></pre>
<p>This works correctly. The expected result is:</p>
<pre><code>GskdZkH4vRC7gEDVv4kUlAXSHq3tekrZNuFWjUozgIlXzEf4Rh9mJ5L9kth4tVhQjWV4SbLQW1f6CdLJxh50TlfMR/hGH2Ynkv2S2Xi1WFRXzEf4Rh9mJ5L9ktl4tVhQ+niKb+1lVdx8dCiV4xnQutN1Eid64etnzF6UJ6BTaeGK3RZaoUpcu+fto1nDUx6RdgAp4M92vat6clPWajgtK1fMR/9GH2Ynkv2S2Xi1WFBIyfXPaM+JaxYsnsc4pTQNtvSMkbqQo+PlpIJAZlWMXVfMR/hGH2Ynkv2S2Xi1WFeVGC5QrBQeBALoJlpAGdqFrIp35WiRWDU3UMSxwDarhFfMR/hGH2Ynkv2S2Xi1WFVXzEf4Rh9mJ5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUBwSa9DLPkiPZXJ+x81Mh3FXzEfxRh9mJ5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUFfMR/hGH2Ynkv2S2Xi1WFCVGC5QrBQeBALoJlpAGdqFKq+2KRLLij4Qyq8AwPWFWVfMR/hGH2Ynkv2S23i1WFBXzEf4Rh9mJ5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUFfMR/hGH2Ynkv2S2Xi1WFBXzEf4Rh9mJ5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUFfMR/hGH2Ynkv2S2Xi1WFDl4Bu9+CbKVf/6Ejs8DGhzV8xH+EYfZiSS/ZLZeLVYU4K+nTPoDTMVTK5DI+bog7JXzEf4Rh9mL5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUFfMR/hGH2Ynkv2S2Xi1WFCsinfjaJFYNTdQxLHANquEtvSMkbqQo+PlpIJAZlWMXlfMR/hGH2Ynkv2S2Xi1WFdXzEf4Rh9mJ5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUFfMR/hGH2Ynkv2S2Xi1WFBIKVmlTvuPp/T1N+pUVoVcV8xH+UYfZieS/ZLZeLVYULb0jJG6kKPj5aSCQGZVjFsP43HVf60huYpvjDeUhvLsV8xH+EYfZiWS/ZLZeLVYUNZFUaRKlsWc+Dla7Jkk5UxXzEf6Rh9mJ5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUFfMR/hGH2Ynkv2S2Xi1WFBXzEf4Rh9mJ5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUFfMR/hGH2Ynkv2S2Xi1WFBXzEf4Rh9mJ5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUFfMR/hGH2Ynkv2S2Xi1WFC0BeMZJPiM+p+jMftg7+sgV8xH+EYfZieS/ZLdeLVYUGjv8WjY//toLtuXgUMWHMRXzEf4Rh9mJZL9ktl4tVhYgr6dM+gNMxVMrkMj5uiDsmt3P+pMcByZ1iwi8ZV+obIqr7YvEsuKPhDKrwDA9YVRIv1lNtKTvDFOh0k5AV9dXlfMR/hGH2Yukv2S2Xi1WFBXzEf4Rh9mJ5L9ktl4tVhQV8xH+EYfZieS/ZLZeLVYUEjJ9c9oz4lrFiyexzilNA0cEmvQyz5Ij2VyfsfNTId3dgAp6c92vat6clPQajgtLC57yGqV/omI/MZSGIHlLIFXzEf4Rh9mI5L9ktl4tVhQEQYdm99FDxrZnWQFEj7G7VfMR/hGH2Yikv2S2Xi1WFCNZXhJstBbV/oJ0snGHnRO9aUARZUzsGNlLY32LpF1cZUYLlCsFB4HAugmWkAZ2oVXzEf+Rh9mJ5L9ktl4tVhQoZM0BA==
</code></pre>
<p>But the result with PyCryptodome or Cryptography is different, and they complain about the padding:</p>
<pre><code>from Crypto.Cipher import AES
from Crypto.Util import Padding
import base64

def decrypt():
    iv = '4d055a9e07e7db37297dd20cc73c4cc4'
    key = 'aab60badba8587c95e36c5cf6cf736b9'
    data_b64 = 'CYkJIz1qptjyjNh4VpzODnVq94JLIx37B0677lYlMJhFtOA6ZFL7CY0M6wRyZlW5pfSorSEDEUbw9EqDePLUvSCwtTsB6CXnVL2+TNvYaffnV/lpjfEAoHANiUxgbBabY3k3BBUAj8RXDmLIQw/bftJcvXw5egS7U0ucb0lDKByqSrMT1DaFDrOHnjO7r2ahZIgZ2G/wHcNwDjloGPvkjGAbwqWfdHaazflGzGTcgXb/PY4UtThba2t8PZBmF8Injiu8As2jl9mt+oC/QvhgJhuIOqJ69Hnt7EoGX4MNX0e/a5qjCHEMOtC4v3gW6sozuVoozXL3uNYHtfsPfnCD2g=='
    data = base64.b64decode(data_b64)
    iv = bytes.fromhex(iv)
    key = bytes.fromhex(key)
    cipher = AES.new(key, AES.MODE_CBC, iv)
    ct = cipher.decrypt(data)
    t = Padding.unpad(ct, AES.block_size)
    return t

res_b64 = decrypt()
res = base64.b64encode(res_b64)
print(res)
</code></pre>
<p>Do they have different padding implementations? Where is the difference?</p>
<p>BouncyCastle code (without salt):</p>
<pre><code>public class CryptoUtil {

    private CryptoUtil() {
        throw new AssertionError();
    }

    public static byte[] encryptAes(byte[] data, char[] password) throws Exception {
        ParametersWithIV key = (ParametersWithIV) getAesPasswdKey(password);
        BufferedBlockCipher cipher = new PaddedBufferedBlockCipher(new CBCBlockCipher(new AESFastEngine()));
        cipher.init(true, key);
        byte[] result = new byte[cipher.getOutputSize(data.length)];
        int len = cipher.processBytes(data, 0, data.length, result, 0);
        cipher.doFinal(result, len);
        return result;
    }

    public static byte[] decryptAes(byte[] data, char[] password) throws Exception {
        ParametersWithIV key = (ParametersWithIV) getAesPasswdKey(password);
        BufferedBlockCipher cipher = new PaddedBufferedBlockCipher(new CBCBlockCipher(new AESFastEngine()));
        cipher.init(false, key);
        byte[] result = new byte[cipher.getOutputSize(data.length)];
        int len = cipher.processBytes(data, 0, data.length, result, 0);
        int doFinalLen = cipher.doFinal(result, len);
        byte[] ret = new byte[len + doFinalLen];
        System.arraycopy(result, 0, ret, 0, ret.length);
        return ret;
    }

    private static CipherParameters getAesPasswdKey(char[] passwd) throws Exception {
        PBEParametersGenerator generator = new PKCS12ParametersGenerator(new SHA1Digest());
        generator.init(PBEParametersGenerator.PKCS12PasswordToBytes(passwd), SALT, 1);
        ParametersWithIV key = (ParametersWithIV) generator.generateDerivedParameters(128, 128);
        return key;
    }
}
</code></pre>
|
<python><aes><bouncycastle><pycryptodome><cbc-mode>
|
2023-06-22 08:06:05
| 1
| 554
|
Marek Marczak
|
76,529,425
| 1,702,957
|
MPIRUN is not executing on Worker node despite hostfile and SSH access
|
<p>I am executing the simple demo code <code>helloworld.py</code> on my main node with only one worker (a VM) listed in the machinefile. I have installed mpirun on the worker as well, and also placed the script there (I'm not sure where exactly to place it; currently /home/user/mpirun-master/demo).</p>
<p>MPI does check for SSH access to the worker node before executing, but everything runs only on my main node and no process output comes from the worker.</p>
<p>This is content of my machinefile</p>
<pre><code>dell@172.16.197.1 # main node
kypo-1@172.16.197.129 # worker
</code></pre>
<p>And this is the output I am getting</p>
<pre><code>mpirun -np 2 --machinefile machinefile python3 helloworld.py
Invalid MIT-MAGIC-COOKIE-1 keyHello, World! I am process 1 of 2 on dell-MS-7A70.
Hello, World! I am process 0 of 2 on dell-MS-7A70
</code></pre>
<p>Both are running on dell-MS-7A70 (the main machine's device name). How can I make processes run on the worker node? Could this problem arise because the worker machine is a virtual one?</p>
|
<python><mpi><worker>
|
2023-06-22 07:19:03
| 1
| 1,639
|
aneela
|
76,529,329
| 5,647,074
|
Pandas read_html with umlauts in the URL
|
<p>Is there an easy way to use umlauts in a URL?</p>
<pre><code>import pandas as pd
#tables = pd.read_html('https://de.wikipedia.org/wiki/Liste_der_größten_Stadien_der_Welt')
tables = pd.read_html('https://de.wikipedia.org/wiki/Liste_der_gr%C3%B6%C3%9Ften_Stadien_der_Welt')
print(f'Total tables: {len(tables)}')
</code></pre>
<p>With umlauts in the URL I get</p>
<pre><code>UnicodeEncodeError: 'ascii' codec can't encode characters in position 22-23: ordinal not in range(128)
</code></pre>
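<p>One workaround is to percent-encode the URL up front with <code>urllib.parse.quote</code>, which UTF-8-encodes the umlauts the same way the pre-encoded URL above does; a sketch (<code>safe=':/'</code> keeps the scheme and path separators literal):</p>

```python
from urllib.parse import quote

url = "https://de.wikipedia.org/wiki/Liste_der_größten_Stadien_der_Welt"
encoded = quote(url, safe=":/")  # percent-encode non-ASCII, keep ':' and '/'
```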
|
<python><html><pandas><urlencode>
|
2023-06-22 07:03:37
| 0
| 452
|
Red-Cloud
|
76,529,276
| 3,759,652
|
Checkmarx vulnerability on python parse_args and argv
|
<p>I am running a Python script which needs to accept user input parameters. This can be done using parse_args or argv. The problem I am facing involves an MSSQL connection string using the pyodbc package. The vulnerability is reported on <code>pd.read_sql</code> and <code>pyodbc.connect</code> when I use sys.argv. I cannot move away from argv, and I have tried many approaches, such as applying a regex to sys.argv and exiting the script if the pattern does not match, as well as regexes on all input parameters. I am not able to understand why the vulnerability still persists on read_sql and pyodbc.connect. I need help with this.</p>
<pre><code>python hello_world.py --env Test --path /Users/abc/scripts --ing search --tab test_tab
</code></pre>
<p>I have written a regex for the tab parameter, as I am passing that variable to the database connection.</p>
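<p>For illustration, static analyzers typically stop flagging identifier interpolation only when the value is validated against a closed allow-list rather than a regex, since a regex still lets tainted data flow into the query. A sketch (the table names here are made-up placeholders):</p>

```python
ALLOWED_TABLES = frozenset({"test_tab", "search_tab"})  # hypothetical names

def safe_table(name: str) -> str:
    """Return the table name only if it is one of the known, hard-coded tables."""
    if name not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {name!r}")
    return name
```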
|
<python><security><argparse><argv><checkmarx>
|
2023-06-22 06:54:44
| 1
| 337
|
SRIRAM RAMACHANDRAN
|
76,529,171
| 3,118,379
|
Packaging and installing own Python program for/on Yocto
|
<p>I am building an embedded Linux system based on Yocto Kirkstone. The system is supposed to run some Python background programs written by me, each consisting of several .py files and using a module also written by me.</p>
<p>I have been trying to put together bits and pieces about how to implement the installation in an elegant way, but there are still some things missing.</p>
<p>What I would like to have:</p>
<ul>
<li>The Python code in its own repository i.e. separate from the Yocto stuff</li>
<li>A recipe in a custom layer</li>
<li>The recipe not listing individual files that are installed, but instead letting the program repository dictate what is installed</li>
</ul>
<p>Other points and thoughts:</p>
<ul>
<li>The program repository could contain files needed to build a .whl package (or a few of them) or something similar, and the recipe would then build and install those</li>
<li>Each program and the library could be in their own repositories, or they could all be in one, as long as that is separate from where the Yocto stuff is kept</li>
</ul>
<p>I can find examples of creating a recipe for simple scripts made so that the script comes along with the recipe. I can also find examples of creating .whl packages of Python programs and installing them using Pip. What I can't find is a simple but complete example of having a recipe take code from a dedicated repository and installing it.</p>
<p>I guess the easiest way of answering this would be naming some Python program found on the internet that I could use as an example. Does anyone have anything in mind?</p>
<p>I am especially interested in a sort of standard solution i.e. something that would use the tools needed the way they are intended to be used.</p>
|
<python><pip><package><yocto>
|
2023-06-22 06:39:59
| 1
| 327
|
TheAG
|
76,529,168
| 1,581,090
|
How to wrap an async method in python as a normal method?
|
<p>In Python it seems you have to use <code>telnetlib3</code> when you want to work with telnet. Unfortunately, this is an "async" module and you have to use "await" and "async" everywhere...</p>
<p>Basically, I have created a small wrapper around the <code>telnetlib3</code> module as follows:</p>
<pre><code>import asyncio
import telnetlib3

class Telnet3:
    def __init__(self, host, port):
        self.host = host
        self.port = port

    async def connect(self):
        self.reader, self.writer = await telnetlib3.open_connection(self.host, self.port)
        data = await asyncio.wait_for(self.reader.read(4096), timeout=2)
        return data

    async def write(self, command):
        self.writer.write(command + "\r\n")
        await asyncio.sleep(5)  # was time.sleep(5); a blocking sleep would stall the event loop
        data = await asyncio.wait_for(self.reader.read(4096), timeout=5)
        return data
</code></pre>
<p>which I want to use in other code WITHOUT any <code>async</code>/<code>await</code> stuff. I just want to do something like:</p>
<pre><code>from mytelnetlib3 import Telnet3
mytelnet = Telnet3("localhost", 9000)
mytelnet.connect()
reply = mytelnet.write("some command")
print(reply)
</code></pre>
<p>How to do that? How to change my <code>Telnet3</code> class so I can use the functionality without <code>async</code>, <code>await</code> and that stuff?</p>
<p><strong>As an alternative</strong>: Is there ANY other module/library I can use to access the <code>telnet</code> protocol to communicate with devices? Please do not suggest <a href="https://github.com/knipknap/exscript" rel="nofollow noreferrer">exscript</a>, as it does not seem to work (unless you can provide the above class using exscript; that would also help).</p>
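<p>For what it's worth, a thin synchronous facade usually just drives each coroutine to completion; a generic sketch (note that <code>asyncio.run</code> starts a fresh event loop per call, so for telnetlib3 the reader/writer pair would need a single long-lived loop instead of this per-call wrapper):</p>

```python
import asyncio

def syncify(coro_func):
    """Wrap an async callable so it can be called like a normal function."""
    def wrapper(*args, **kwargs):
        return asyncio.run(coro_func(*args, **kwargs))
    return wrapper

async def _double(x):
    await asyncio.sleep(0)  # stand-in for real async I/O
    return x * 2

double = syncify(_double)
```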
|
<python><async-await>
|
2023-06-22 06:39:38
| 1
| 45,023
|
Alex
|
76,529,155
| 10,431,629
|
Column Update from multiple columns and column headers in Pandas
|
<p>I have a specific problem (not sure if it's very challenging to the pros and experts here, but it seems pretty formidable to me) that requires updating a column based on conditions involving the column headers and the values in another column:</p>
<p>I am providing some specific rows of the input DataFrame as an example:</p>
<pre><code>df-in
A  B  ind  M1P  M2P  M3P  M4P  M5P
x  a   2     0    0    3    5    9
y  b   2   NaN  NaN  NaN    7   11
z  c   2     0  NaN    0    3    3
w  d   2     0    0    0  NaN    8
u  q   2     0    0    0  NaN    0
</code></pre>
<p>Based on the value of the column 'ind', I need to check the corresponding MxP column (where x can be 1, 2, 3, 4, or 5). In the example above, since all values in the 'ind' column are 2, I need to check the M2P column and those after it (I do not care about the M1P column; however, if 'ind' were 1, I would have to check M1P). In this example, if the M2P value is 0, NaN, or blank, it takes the value from M3P; if M3P is also blank, 0, or null, it takes the value from M4P; and if M4P is also blank, null, or 0, it takes the value from M5P. However, if the M5P value is blank/0/NaN, then the value in M2P remains as it is. (The same logic needs to apply for the other values of 'ind'; for instance, if 'ind' is 5, it does not look anywhere else.)</p>
<p>So the output of the above should be:</p>
<pre><code>df-out
A  B  ind  M1P  M2P  M3P  M4P  M5P
x  a   2     0    3    3    5    9
y  b   2   NaN    7  NaN    7   11
z  c   2     0    3    0    3    3
w  d   2     0    8    0  NaN    8
u  q   2     0    0    0  NaN    0
</code></pre>
<p>I am still struggling to figure out the best way to attack this problem in pandas. Any help, code, or ideas will be immensely appreciated.</p>
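<p>One way to express the "look right until a non-empty value" rule without nested loops over the M columns is to mask 0/NaN and back-fill row-wise; a sketch against a two-row excerpt of the example (assuming the fallback reading described above):</p>

```python
import numpy as np
import pandas as pd

# Two-row excerpt of the example data (rows 'x' and 'y').
df = pd.DataFrame({
    "ind": [2, 2],
    "M1P": [0.0, np.nan],
    "M2P": [0.0, np.nan],
    "M3P": [3.0, np.nan],
    "M4P": [5.0, 7.0],
    "M5P": [9.0, 11.0],
})

mcols = ["M1P", "M2P", "M3P", "M4P", "M5P"]
masked = df[mcols].replace(0, np.nan)  # treat 0 like a missing value
filled = masked.bfill(axis=1)          # first non-empty value to the right

for i, row in df.iterrows():
    col = f"M{int(row['ind'])}P"       # column selected by 'ind'
    # Overwrite only when the target was empty and a fallback exists.
    if pd.isna(masked.at[i, col]) and pd.notna(filled.at[i, col]):
        df.at[i, col] = filled.at[i, col]
```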
|
<python><pandas><group-by><multiple-columns><updates>
|
2023-06-22 06:37:29
| 1
| 884
|
Stan
|
76,528,947
| 8,551,360
|
Python POST request doesn't receive post data from console but works fine on postman
|
<p>This is my postman call for the API and I am getting the needed response successfully.</p>
<p>P.S.: I have added the header: 'Content-Type': 'application/json'</p>
<p>Here's the CURL generated by Postman:</p>
<pre><code>curl --location 'api.example.com/apis/v2/show_user_reports' \
  --header 'Content-Type: application/json' \
  --form 'token="XXXXXXXXXXXXXXXXXXXXXX"' \
  --form 'client_id="61"' \
  --form 'user_id="7801"'
</code></pre>
<p><a href="https://i.sstatic.net/31eEm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/31eEm.jpg" alt="enter image description here" /></a></p>
<p>Now I am making this API call using Python 3.6 with the same parameters and headers, but it doesn't work:</p>
<pre><code>import json
import requests

url = 'https://api.example.com/apis/v2/show_user_reports'
headers = {'Content-Type': 'application/json'}
data = {'token': 'XXXXXXXXXXXXXXXXXXXXXX', 'client_id': '61', 'user_id': '7801'}
requests.post(url=url, data=json.dumps(data), headers=headers).json()
</code></pre>
<p>By doing this, I am getting this response:</p>
<blockquote>
<p>{'error': 'Please Provide Client Id'}</p>
</blockquote>
<p>Surely I am missing something small here, but I couldn't find what.</p>
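<p>As a point of comparison, the curl that Postman generated sends the fields as multipart form data (<code>--form</code>), not as a JSON body; in <code>requests</code> that corresponds to <code>files=</code> with <code>(None, value)</code> tuples rather than <code>data=json.dumps(...)</code>. A sketch that only builds the request without sending it (the credentials are placeholders):</p>

```python
import requests

# Build (but do not send) a multipart/form-data request, mirroring curl --form.
prepared = requests.Request(
    "POST",
    "https://api.example.com/apis/v2/show_user_reports",
    files={                              # (None, value) -> plain multipart form field
        "token": (None, "XXXXXXXXXXXXXXXXXXXXXX"),
        "client_id": (None, "61"),
        "user_id": (None, "7801"),
    },
).prepare()
```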
|
<python><postman>
|
2023-06-22 05:59:09
| 1
| 548
|
Harshit verma
|
76,528,932
| 13,738,079
|
Pytorch Conv1D produces more channels than expected
|
<p>I have the following neural network:</p>
<pre><code>import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, input_size, output_size):
        super(Discriminator, self).__init__()
        self.main = nn.Sequential(
            nn.Conv1d(1, 3000, 1),
            nn.LeakyReLU(0.2),
            nn.Conv1d(3000, 1, 1),
            nn.Sigmoid()
        )

    def forward(self, x):
        return self.main(x.float())

Discriminator = Discriminator()
</code></pre>
<p>Then I train this discriminator as follows:</p>
<pre><code>for d_index in range(d_steps):
    Discriminator.zero_grad()
    prediction = Discriminator(d_real_data).view(-1)
</code></pre>
<p>The shape of <code>d_real_data</code> is [60, 1, 3000], where:</p>
<ul>
<li>batch size: 60</li>
<li>number of channels: 1</li>
<li>sequence length: 3000</li>
</ul>
<p>This input follows the documentation for <a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html" rel="nofollow noreferrer">Pytorch</a> Conv1D. What I'm expecting is <code>prediction</code> to be [60, 1, 1] and so flattening it with .view(-1) shapes it into a 1D array with 60 values. However, what I actually end up getting for the shape of <code>prediction</code> is [60, 1, 3000] and flattening it gives me a 1D array with 180000 values. Why is Conv1D with Sigmoid returning an output with the same shape as my input?</p>
|
<python><machine-learning><deep-learning><pytorch><generative-adversarial-network>
|
2023-06-22 05:56:34
| 2
| 1,170
|
Jpark9061
|
76,528,686
| 8,424,257
|
Logic Clarification for sorting an array of array in python based off the value of a number using merge sort?
|
<p>I want to sort an array of arrays with merge sort, based on the last number in each inner array, from highest value to lowest:</p>
<pre><code>arraySample=[['a', 'new', 3], ['a', 'new', 3], ['b', 'new', 1], ['c', 'old', 2], ['c', 'old', 2], ['d', 'new', 1],['a', 'new', 3],['e', 'old', 3],['e', 'old', 3],['e', 'old', 3]]
</code></pre>
<p>In addition, I want to make sure that it is sorted in such a way that elements with the same name are still ordered together.</p>
<p>I.e. the 3 elements of 'a' should be ordered together, followed by the 3 elements of 'e', etc.</p>
<p>For now, here is the code I have; at the bottom is the output:</p>
<h1>Merge</h1>
<pre><code>def merge(arr1, arr2, index):
    arr3 = []
    sizeA = len(arr1)
    sizeB = len(arr2)
    indexa = indexb = 0
    while indexa < sizeA and indexb < sizeB:
        if arr1[indexa][index] >= arr2[indexb][index]:
            arr3.append(arr1[indexa])
            indexa += 1
        else:
            arr3.append(arr2[indexb])
            indexb += 1
    arr3.extend(arr1[indexa:sizeA])
    arr3.extend(arr2[indexb:sizeB])
    return arr3
</code></pre>
<h1>Merge Sort</h1>
<pre><code>def mergesort(array, index=-1):
    size = len(array)
    if size == 1:
        return array
    midindex = size // 2
    firsthalf = array[0:midindex]
    secondhalf = array[midindex:size]
    firsthalf = mergesort(firsthalf, index)
    secondhalf = mergesort(secondhalf, index)
    array = merge(firsthalf, secondhalf, index)
    return array

mergesort(arraySample)

for i in arraySample:
    print(str(i)+"\n")
</code></pre>
<h1>Output</h1>
<pre><code>['a', 'new', 3]
['a', 'new', 3]
['b', 'new', 1]
['c', 'old', 2]
['c', 'old', 2]
['d', 'new', 1]
['a', 'new', 3]
['e', 'old', 3]
['e', 'old', 3]
['e', 'old', 3]
</code></pre>
<p>However the expected output is:</p>
<pre><code>['a', 'new', 3]
['a', 'new', 3]
['a', 'old', 3]
['e', 'old', 3]
['e', 'new', 3]
['e', 'old', 3]
['c', 'old', 2]
['c', 'old', 2]
['b', 'new', 1]
['d', 'new', 1]
</code></pre>
<p>May I know how I could improve or change the code to solve this issue? I am a beginner in data structures and algorithms, but I am trying to learn. Please note that I am trying to avoid the use of external methods and libraries, as well as built-in methods. Thank you!</p>
|
<python><arrays><sorting><mergesort>
|
2023-06-22 04:51:07
| 1
| 435
|
nTIAO
|
76,528,640
| 8,849,755
|
Pyserial not finding ports in Ubuntu 22.04
|
<p>I am trying to communicate with a device using Python 3.10.6 in Ubuntu 22.04 but can't. I have been using this device for years with previous Ubuntu versions, so I know how to use it.</p>
<p>Test code:</p>
<pre class="lang-py prettyprint-override"><code>import serial.tools.list_ports
p = serial.tools.list_ports.comports()
print(p)
</code></pre>
<p>prints <code>[]</code>. If I run</p>
<pre><code>python -m serial.tools.list_ports
</code></pre>
<p>I get <code>no ports found</code>.</p>
<p>How can I fix this?</p>
|
<python><ubuntu><serial-port><pyserial>
|
2023-06-22 04:37:43
| 0
| 3,245
|
user171780
|
76,528,533
| 19,157,137
|
Prediction Model for Dataset
|
<p>Consider the following example dataset with 1-minute timestamps and 5 numerical columns:</p>
<p>Example dataset:</p>
<pre><code>Timestamp Column 1 Column 2 Column 3 Column 4 Column 5 Result
---------------------------------------------------------------------------------
2023-06-20 09:30:00 10.5 50 0.75 100 8.2 NaN
2023-06-20 09:31:00 11.2 45 0.80 98 8.6 lower
2023-06-20 09:32:00 12.1 42 0.78 101 8.8 lower
2023-06-20 09:33:00 11.7 48 0.82 99 8.4 higher
2023-06-20 09:34:00 10.9 55 0.85 102 8.9 higher
...
</code></pre>
<p>In this dataset, each row represents a specific point in time, and the columns correspond to different measurements or variables recorded at that timestamp. Let's discuss the meaning and purpose of the "Result" column.</p>
<p>The "Result" column represents the comparison between the current value in Column 2 and the previous value. It indicates whether the current value is higher or lower than the previous value. The "Result" column is determined based on the following conditions:</p>
<ul>
<li>If the value in Column 2 is higher than the previous value, the
corresponding entry in the "Result" column is labeled as "higher".</li>
<li>If the value in Column 2 is lower than the previous value, the
corresponding entry in the "Result" column is labeled as "lower".</li>
<li>For the first value in Column 2 (the earliest timestamp), there is no
previous value to compare with, so the "Result" column contains a NaN
(Not a Number) value or any suitable representation for missing
values</li>
</ul>
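<p>Independently of the choice of model, the "Result" labeling described above can be reproduced directly in pandas (a sketch using the "Column 2" numbers from the example; the "equal" label is my addition for the untouched same-value case):</p>
<pre><code>
```python
import numpy as np
import pandas as pd

# "Column 2" values taken from the example dataset.
df = pd.DataFrame({"Column 2": [50, 45, 42, 48, 55]})

prev = df["Column 2"].shift(1)  # previous row's value, NaN for the first row
df["Result"] = np.where(df["Column 2"] > prev, "higher",
                        np.where(df["Column 2"] < prev, "lower", "equal"))
df.loc[prev.isna(), "Result"] = np.nan  # first row has nothing to compare against
print(df)
```
</code></pre>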
<p>I am seeking recommendations for packages that can effectively handle this task. Specifically, I would like to know if there are any recommended packages within Tensorflow or PyTorch that would be suitable for this prediction problem. Additionally, I would appreciate insights into the approach or methodology that could be employed to achieve accurate predictions.</p>
<p>Any suggestions, examples, or code snippets demonstrating the implementation using the recommended packages and approaches would be highly valuable. Thank you in advance for your assistance!</p>
|
<python><machine-learning><deep-learning><statistics><artificial-intelligence>
|
2023-06-22 04:03:18
| 1
| 363
|
Bosser445
|
76,528,514
| 354,051
|
Correct compilation and linking flags for building a Python C/C++ extension on Windows, Linux and OSX
|
<p>Environment:</p>
<pre><code>Python 3.8.10
Windows 10
Ubuntu 22.04.2 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64)
Msys2 (Mingw64)
</code></pre>
<p>This post is about getting the correct set of compilation and linking flags as well as set of libraries for building Python extensions using c/c++ on three major OS.</p>
<p><strong>MS Windows Mingw64</strong></p>
<p>The command</p>
<pre><code>python -m sysconfig
</code></pre>
<p>gives you a very small set of flags compared to Linux, and no information is available about compilation and linking flags. I'm using the Mingw64 compiler toolchain, which I believe is not officially supported on Windows. After following everything that was once required to build extensions using MinGW on Windows, I was still not able to get it working correctly.</p>
<p>For example, Cython uses gcc for compilation and g++ for linking, and adds MSVC flags while linking.
After doing things manually, I came up with these flags:</p>
<pre><code>Compilation C/C++: '-c', '-Ofast', '-fpic', '-mdll', '-Wall'
Linking: '-shared' '-O', '-Wall'
Macros: "-DMS_WIN64"
Includes: "from sysconfig import get_paths as gp; gp()["include"]"
Libraries: "-lPythonXX"
</code></pre>
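<p>For reference, a quick sketch for inspecting what the interpreter itself was built with; which of these config vars are populated varies by platform and build (several are <code>None</code> on Windows builds, which is part of the problem):</p>
<pre><code>
```python
import sysconfig

# Include directory that C extensions must add with -I
print("include:", sysconfig.get_paths()["include"])

# Config vars that normally carry the compile/link settings.
for var in ("CC", "CFLAGS", "LDFLAGS", "LIBS", "EXT_SUFFIX"):
    print(var, "=", sysconfig.get_config_var(var))
```
</code></pre>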
<p><strong>MS Windows MSVC</strong></p>
<p>I'm not using MSVC as of now, but I will switch to it soon. I would really appreciate it if someone could post the required flags for MSVC.</p>
<p><strong>Linux</strong></p>
<p>On Linux you have two options:</p>
<pre><code>python3 -m sysconfig
&
python-config --cflags
python-config --ldflags
</code></pre>
<p>and I came up with these flags:</p>
<pre><code>Compilation C/C++ : "-Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2"
Linking : "-shared -fPIC -Wl, -O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -g -fwrapv -O2"
Macros: ""
Includes: "from sysconfig import get_paths as gp; gp()["include"]"
Libraries: "-lcrypt -ldl -lm -lm -lPythonXX"
</code></pre>
<p><strong>OSX</strong></p>
<p>I don't have access to an Apple Mac running OSX, but I believe the flags would be the same as on Linux. Please correct me if I'm wrong.</p>
<p>Prashant</p>
|
<python>
|
2023-06-22 03:56:02
| 0
| 947
|
Prashant
|
76,528,317
| 3,121,975
|
Quote string value in F-string in Python
|
<p>I'm trying to quote one of the values I send to an f-string in Python:</p>
<pre><code>f'This is the value I want quoted: \'{value}\''
</code></pre>
<p>This works, but I wonder if there's a formatting option that does this for me, similar to how <code>%q</code> works in Go. Basically, I'm looking for something like this:</p>
<pre><code>f'This is the value I want quoted: {value:q}'
>>> This is the value I want quoted: 'value'
</code></pre>
<p>I would also be okay with double-quotes. Is this possible?</p>
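<p>The closest built-in I have found so far is the <code>!r</code> conversion, which applies <code>repr()</code> and therefore quotes strings, though it also escapes special characters rather than just quoting:</p>
<pre><code>
```python
value = "value"
# !r formats the value with repr(), which wraps strings in single quotes.
print(f"This is the value I want quoted: {value!r}")
# -> This is the value I want quoted: 'value'
```
</code></pre>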
|
<python><string-formatting>
|
2023-06-22 02:53:40
| 2
| 8,192
|
Woody1193
|
76,528,046
| 2,680,053
|
Python render eli5 explanation to image
|
<p>I'm using the <a href="https://eli5.readthedocs.io/en/latest/overview.html" rel="nofollow noreferrer">eli5</a> library to explain a sklearn decision tree in a Jupyter notebook by calling <code>eli5.show_weights(...)</code>. The output is an <code>IPython.core.display.HTML</code> element that I can display inside the notebook with <code>IPython.display.display(explanation)</code>, but how can I render the picture of the decision tree that this draws as an image (or pdf) so that I can use it somewhere else?</p>
|
<python><ipython><eli5>
|
2023-06-22 01:05:01
| 0
| 1,548
|
Marc Bacvanski
|
76,527,983
| 16,319,191
|
Check if Document class has empty pageContent in python from json file and delete empty contents
|
<p>I am using langchain's Document class to read in a JSON file line by line with the following code, but the resulting output has a few elements with blank page content (<a href="https://js.langchain.com/docs/modules/schema/document" rel="nofollow noreferrer">https://js.langchain.com/docs/modules/schema/document</a>). How do I count how many have empty page contents?</p>
<pre><code>import json
from langchain.schema import Document
from typing import Iterable

def load_docs_from_jsonl(file_path) -> Iterable[Document]:
    array = []
    with open(file_path, 'r') as jsonl_file:
        for line in jsonl_file:
            data = json.loads(line)
            obj = Document(**data)
            array.append(obj)
    return array
</code></pre>
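<p>A sketch of the counting step; the <code>Document</code> class below is only a minimal stand-in so the snippet runs without langchain installed (real code would import <code>langchain.schema.Document</code>), and it assumes the Python attribute is <code>page_content</code>:</p>
<pre><code>
```python
from dataclasses import dataclass, field

@dataclass
class Document:
    # Minimal stand-in for langchain.schema.Document, used here only so
    # the counting logic is runnable without langchain installed.
    page_content: str = ""
    metadata: dict = field(default_factory=dict)

def count_empty(docs):
    # Counts None, "" and whitespace-only page_content as empty.
    return sum(1 for d in docs if not (d.page_content or "").strip())

docs = [Document("hello"), Document(""), Document("   ")]
print(count_empty(docs))  # -> 2
```
</code></pre>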
|
<python><json><document><langchain>
|
2023-06-22 00:43:21
| 0
| 392
|
AAA
|
76,527,974
| 3,247,006
|
How to translate url in Django?
|
<p>This is my <code>django-project</code> below to translate from English to French. *I use <strong>Django 4.2.1</strong>:</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
| |-settings.py
| └-urls.py
|-my_app1
| |-views.py
| └-urls.py
|-my_app2
└-locale
└-fr
└-LC_MESSAGES
|-django.po
└-django.mo
</code></pre>
<p>And, this is <code>core/settings.py</code> below:</p>
<pre class="lang-py prettyprint-override"><code># "core/settings.py"

MIDDLEWARE = [
    ...
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.locale.LocaleMiddleware',
    'django.middleware.common.CommonMiddleware',
    ...
]

LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True

from django.utils.translation import gettext_lazy as _

LANGUAGES = (
    ('en', _('English')),
    ('fr', _('French'))
)
</code></pre>
<p>And, <a href="https://docs.djangoproject.com/en/4.2/ref/utils/#django.utils.translation.gettext" rel="nofollow noreferrer">gettext()</a> is used to translate <code>Test</code> to <code>Examen</code> in <code>my_app1/views.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "my_app1/views.py"

from django.http import HttpResponse
from django.utils.translation import gettext as _

def test(request):  # ↓ Here ↓
    return HttpResponse(_("Test"))
</code></pre>
<p>And, <code>hello/world/</code> path for <code>test()</code> is set to <code>urlpatterns</code> in <code>my_app1/urls.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "my_app1/urls.py"

from django.urls import path
from . import views

app_name = "my_app1"

urlpatterns = [
    # ↓ ↓ Here ↓ ↓
    path("hello/world/", views.test, name="test")
]
</code></pre>
<p>And, <code>my_app1/</code> path for <code>my_app1</code> is set to <code>urlpatterns</code> with <a href="https://docs.djangoproject.com/en/4.2/topics/i18n/translation/#language-prefix-in-url-patterns" rel="nofollow noreferrer">i18n_patterns()</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "core/urls.py"

from django.urls import path, include
from django.conf.urls.i18n import i18n_patterns

urlpatterns = i18n_patterns(
    # ↓ Here ↓
    path("my_app1/", include('my_app1.urls'))
)
</code></pre>
<p>And, <code>"Anglais"</code>, <code>"Français"</code> and <code>"Examen"</code> are set for <code>"English"</code>, <code>"French"</code> and <code>"Test"</code> respectively in <code>locale/fr/LC_MESSAGES/django.po</code> as shown below:</p>
<pre class="lang-none prettyprint-override"><code># "locale/fr/LC_MESSAGES/django.po"
...

#: .\core\settings.py:140
msgid "English"
msgstr "Anglais" # Here

#: .\core\settings.py:141
msgid "French"
msgstr "Français" # Here

#: .\my_app1\views.py:5
msgid "Test"
msgstr "Examen" # Here

...
</code></pre>
<p>Then, I could translate from English to French as shown below:</p>
<pre class="lang-none prettyprint-override"><code>http://localhost:8000/fr/my_app1/hello/world/
</code></pre>
<p><a href="https://i.sstatic.net/Eu8r7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Eu8r7.png" alt="enter image description here" /></a></p>
<p>Now, how can I translate the English url above to the French url below?</p>
<pre class="lang-none prettyprint-override"><code>http://localhost:8000/fr/mon_app1/bonjour/monde/
</code></pre>
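<p>One approach I am considering, sketched from Django's support for translatable URL patterns: marking the path strings themselves with <code>gettext_lazy</code>, so that each pattern can get its own msgid/msgstr pair (e.g. <code>msgid "hello/world/"</code> → <code>msgstr "bonjour/monde/"</code> would need to be added to <code>django.po</code>):</p>
<pre><code>
```python
# "my_app1/urls.py" (sketch)

from django.urls import path
from django.utils.translation import gettext_lazy as _
from . import views

app_name = "my_app1"

urlpatterns = [
    # The lazy translation is resolved per-request, so the same pattern
    # matches "hello/world/" under /en/ and "bonjour/monde/" under /fr/.
    path(_("hello/world/"), views.test, name="test"),
]
```
</code></pre>
<p>The <code>"my_app1/"</code> prefix in <code>core/urls.py</code> could be wrapped the same way (<code>path(_("my_app1/"), include('my_app1.urls'))</code>); after adding the URL strings to the catalog, re-running <code>django-admin makemessages -l fr</code> and <code>django-admin compilemessages</code> would pick them up. Is this the right direction?</p>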
|
<python><django><translation><django-urls><django-i18n>
|
2023-06-22 00:41:29
| 1
| 42,516
|
Super Kai - Kazuya Ito
|